AI's Dark Side: How Criminals Use AI

👋 Hi, I am Mark. I am a strategic futurist and innovation keynote speaker. I advise governments and enterprises on emerging technologies such as AI or the metaverse. My subscribers receive a free weekly newsletter on cutting-edge technology.

Who knew AI could be the best friend of a cybercriminal?

Generative AI has revolutionized productivity—not just for businesses, but for criminals too. From phishing to deepfake scams, AI tools are making malicious activities easier and more effective. Phishing emails are now nearly flawless, bypassing language barriers and fooling even the most vigilant recipients. Deepfake audio scams have reached new heights, enabling cybercriminals to impersonate voices convincingly and swindle millions.

Criminals are also using AI to bypass identity checks with sophisticated face-swapping apps. The rise of jailbreak-as-a-service allows hackers to manipulate AI systems to generate dangerous outputs, while AI models are also used for doxxing by deducing personal information from online data. As AI's capabilities grow, so do the threats it poses:

Five Ways Criminals Are Exploiting AI:

  1. Phishing: The biggest use case for generative AI among criminals is phishing, where AI enhances the quality and plausibility of scam emails. Since the launch of ChatGPT, phishing scams have increased by a whopping 1265%. Previously obvious scam attempts, such as the infamous "Nigerian prince" emails, are now polished to perfection, making them harder to detect. With AI-powered translation, scammers can create grammatically flawless messages in any language, broadening their victim pool and increasing their success rates.
  2. Deepfake Audio Scams: Deepfake technology has advanced to the point where synthetic audio and video are nearly indistinguishable from reality. Cybercriminals exploit this by creating fake audio clips of executives or family members to defraud individuals and organizations. A notable example involved a company in Hong Kong losing $25 million after being deceived by a deepfake of their CFO. These scams are cheap to execute and devastatingly effective.
  3. Bypassing Identity Checks: AI deepfakes are also used to bypass "know your customer" (KYC) verifications required by banks and crypto exchanges. Criminals use apps to overlay fake IDs and deepfake faces during video verifications, tricking systems into approving fraudulent accounts. These services are easily accessible and relatively inexpensive, posing a significant threat to financial security.
  4. Jailbreak-as-a-Service: Instead of developing their own AI models, criminals now use existing ones by bypassing safety protocols through jailbreaking. This service allows them to generate harmful content, such as malware or phishing scripts, by manipulating AI outputs. Services like EscapeGPT offer anonymous access to jailbroken AI, continuously updating methods to evade detection and restrictions imposed by AI providers.
  5. Doxxing and Surveillance: AI's ability to analyze vast amounts of data makes it an effective tool for doxxing, revealing personal information about individuals. By mimicking private investigators, AI models can infer sensitive details like location, age, and occupation from seemingly innocuous text. This capability has led to the emergence of new services that exploit AI for surveillance, making it easier for malicious actors to target individuals.

The rapid advancement of AI technology presents significant challenges for businesses. The very tools designed to improve efficiency and productivity are now being leveraged by cybercriminals to launch sophisticated attacks. This dual-edged nature of AI necessitates a proactive approach to cybersecurity.

For businesses, the key to protection lies in awareness and vigilance. Implementing comprehensive cybersecurity measures is an essential first step. Additionally, businesses should invest in employee training programs to educate staff about the latest phishing techniques, deepfake scams, and other AI-driven threats.

Collaboration between IT and legal teams can help establish robust policies for AI usage, ensuring compliance with regulations and safeguarding sensitive data. Encouraging a culture of cybersecurity awareness can also play a crucial role in preventing breaches.

Furthermore, adopting AI tools that prioritize security and transparency, and regularly auditing these tools for potential vulnerabilities, can help businesses stay ahead of cybercriminals. Partnering with cybersecurity firms specializing in AI threats can provide additional layers of protection.

The evolving landscape of AI-driven crime is a stark reminder that as technology progresses, so too must our defences. By staying informed and implementing strategic safeguards, businesses can harness the benefits of AI while protecting themselves from its darker implications.

Read the full article on MIT Technology Review.

----

💡 If you enjoyed this content, be sure to download my new app for a unique experience beyond your traditional newsletter.

This is one of many short posts I share daily on my app, and you can have real-time insights, recommendations and conversations with my digital twin via text, audio or video in 28 languages! Go to my PWA at app.thedigitalspeaker.com and sign up to take our connection to the next level! 🚀

If you are interested in hiring me as your futurist and innovation speaker, feel free to complete the form below.

Dr Mark van Rijmenam

Dr. Mark van Rijmenam is a strategic futurist known as The Digital Speaker. He is a true Architect of Tomorrow, bringing both vision and pragmatism to his keynotes. As a renowned global keynote speaker, a Global Speaking Fellow, recognized as a Global Guru Futurist and a 5-time author, he captivates Fortune 500 business leaders and governments globally.

Recognized by Salesforce as one of 16 must-know AI influencers, he combines forward-thinking insights with a balanced, optimistic dystopian view. With his pioneering use of a digital twin and his next-gen media platform Futurwise, Mark doesn’t just speak on AI and the future—he lives it, inspiring audiences to harness technology ethically and strategically. You can reach his digital twin via WhatsApp at: +1 (830) 463-6967
