AI's Dark Side: How Criminals Use AI

Who knew AI could be the best friend of a cybercriminal?

Generative AI has revolutionized productivity—not just for businesses, but for criminals too. From phishing to deepfake scams, AI tools are making malicious activities easier and more effective. Phishing emails are now nearly flawless, bypassing language barriers and fooling even the most vigilant recipients. Deepfake audio scams have reached new heights, enabling cybercriminals to impersonate voices convincingly and swindle millions.

Criminals are also using AI to bypass identity checks with sophisticated face-swapping apps. The rise of jailbreak-as-a-service allows hackers to manipulate AI systems to generate dangerous outputs, while AI models are also used for doxxing by deducing personal information from online data. As AI's capabilities grow, so do the threats it poses:

Five Ways Criminals Are Exploiting AI:

  1. Phishing: The biggest use case for generative AI among criminals is phishing, where AI enhances the quality and plausibility of scam emails. Since the launch of ChatGPT, phishing scams have increased by a whopping 1,265%. Previously obvious scam attempts, such as the infamous "Nigerian prince" emails, are now polished to perfection, making them harder to detect. With AI-powered translation, scammers can create grammatically flawless messages in any language, broadening their victim pool and increasing their success rates.
  2. Deepfake Audio Scams: Deepfake technology has advanced to the point where synthetic audio and video are nearly indistinguishable from reality. Cybercriminals exploit this by creating fake audio clips of executives or family members to defraud individuals and organizations. A notable example involved a company in Hong Kong losing $25 million after being deceived by a deepfake of their CFO. These scams are cheap to execute and devastatingly effective.
  3. Bypassing Identity Checks: AI deepfakes are also used to bypass "know your customer" (KYC) verifications required by banks and crypto exchanges. Criminals use apps to overlay fake IDs and deepfake faces during video verifications, tricking systems into approving fraudulent accounts. These services are easily accessible and relatively inexpensive, posing a significant threat to financial security.
  4. Jailbreak-as-a-Service: Instead of developing their own AI models, criminals now use existing ones by bypassing safety protocols through jailbreaking. This service allows them to generate harmful content, such as malware or phishing scripts, by manipulating AI outputs. Services like EscapeGPT offer anonymous access to jailbroken AI, continuously updating methods to evade detection and restrictions imposed by AI providers.
  5. Doxxing and Surveillance: AI's ability to analyze vast amounts of data makes it an effective tool for doxxing, revealing personal information about individuals. By mimicking private investigators, AI models can infer sensitive details like location, age, and occupation from seemingly innocuous text. This capability has led to the emergence of new services that exploit AI for surveillance, making it easier for malicious actors to target individuals.

The rapid advancement of AI technology presents significant challenges for businesses. The very tools designed to improve efficiency and productivity are now being leveraged by cybercriminals to launch sophisticated attacks. This double-edged nature of AI necessitates a proactive approach to cybersecurity.

For businesses, the key to protection lies in awareness and vigilance. Implementing comprehensive cybersecurity measures is an essential first step. Additionally, businesses should invest in employee training programs to educate staff about the latest phishing techniques, deepfake scams, and other AI-driven threats.

Collaboration between IT and legal teams can help establish robust policies for AI usage, ensuring compliance with regulations and safeguarding sensitive data. Encouraging a culture of cybersecurity awareness can also play a crucial role in preventing breaches.

Furthermore, adopting AI tools that prioritize security and transparency and regularly auditing these tools for potential vulnerabilities can help businesses stay ahead of cybercriminals. Partnering with cybersecurity firms specializing in AI threats can provide additional layers of protection.

The evolving landscape of AI-driven crime is a stark reminder that as technology progresses, so too must our defences. By staying informed and implementing strategic safeguards, businesses can harness the benefits of AI while protecting themselves from its darker implications.

Read the full article on MIT Technology Review.

----

💡 If you enjoyed this content, be sure to download my new app for a unique experience beyond your traditional newsletter.

This is one of many short posts I share daily on my app, where you can get real-time insights, recommendations and conversations with my digital twin via text, audio or video in 28 languages! Go to my PWA at app.thedigitalspeaker.com and sign up to take our connection to the next level! 🚀

If you are interested in hiring me as your futurist and innovation speaker, feel free to complete the form below.