How to Fight Deepfake Scams

👋 Hi, I am Mark. I am a strategic futurist and innovation keynote speaker. I advise governments and enterprises on emerging technologies such as AI and the metaverse. My subscribers receive a free weekly newsletter on cutting-edge technology.

Think you can spot a scam? Even the savviest executives are getting duped by AI deepfakes—until they ask the right questions.

A Ferrari executive recently thwarted a deepfake scam by posing a question only the real CEO, Benedetto Vigna, could answer. The incident unfolded when the executive received WhatsApp messages seemingly from Vigna discussing a confidential deal. Although the messages mimicked Vigna’s style and voice, subtle inconsistencies—like an unfamiliar phone number and a different profile picture—raised suspicions.

Using advanced AI to replicate Vigna's voice, the scammer convincingly spoke of a sensitive transaction needing the executive's immediate attention. However, the executive detected slight mechanical intonations in the voice. To confirm his doubts, he asked a personal question about a book Vigna had recently recommended, "Decalogue of Complexity." Unable to answer, the scammer abruptly ended the call, exposing the fraud.

Ferrari is not the first company to be attacked. Earlier this year, the British multinational design and engineering firm Arup, known for iconic structures like the Sydney Opera House, fell victim to a $25 million deepfake scam via Zoom. In January, a finance worker in Arup's Hong Kong office was tricked into attending a video call with what he believed were the CFO and colleagues.

The convincing deepfake re-creations used AI-generated voices and images, leading the employee to authorise 15 transactions totalling $25.6 million. Initially suspicious of a phishing email, the worker’s doubts were quelled by the realistic video call, highlighting the sophisticated nature of modern scams. Arup confirmed the incident and stated that their financial stability remains unaffected. 

Finally, in May 2024, WPP CEO Mark Read thwarted an elaborate deepfake scam that aimed to defraud the world's largest advertising firm. Using a cloned voice and manipulated video, scammers set up a convincing Microsoft Teams meeting, attempting to trick a senior executive into starting a new business venture and sharing sensitive information. Despite the scam's sophistication, it was unsuccessful, thanks to the vigilance of WPP staff.

How to Protect Your Business from Deepfake Scams

These incidents underscore the rising threat of AI-generated scams and the necessity for sophisticated verification methods.

A recent survey reveals that almost half of Americans (48%) feel less capable of identifying scams due to AI advancements. Only 18% feel very confident in recognising scams, with many struggling to differentiate between reality and AI-generated deceptions.

The prevalence of scams, especially those impersonating familiar contacts, has surged, leaving many anxious. Despite this, 38% of respondents view AI positively, using it for daily tasks like answering questions and translating languages.

With deepfake technology becoming more convincing, executives must be vigilant and prepared to authenticate identities through unique personal questions. As AI advances, businesses must adapt security measures to protect against increasingly sophisticated fraud attempts. How can companies stay ahead of fraudsters in this evolving digital landscape? I believe there are three ways:

  • Educate ourselves on AI
  • Develop new skills
  • Trust, but verify

Let's dive into them.

1. Educate: Understand and Adapt to a Rapidly Changing World

In today’s rapidly evolving world, we must educate ourselves and gain a deep understanding of how our environment transforms due to advancements in AI and other digital technologies. That means experimenting with new tools and technologies and developing a solid comprehension of phenomena like deepfakes, large language models and other emerging technologies that will impact us.

Recognising the potential impacts of deepfakes is crucial for our security and privacy and for maintaining trust in the information we consume and share.

Creating general awareness within our surroundings, whether in our families or professional networks, is equally important. By fostering discussions and educating others about the implications of AI, we can build a more informed and resilient society. To navigate this new landscape effectively, we must become fluent in the language of tomorrow: AI. This means understanding how AI works, its potential applications, and the ethical considerations it entails.

Moreover, we need to become digitally aware—attuned to how digital tools influence our lives, from mental health and privacy to societal norms and ethical standards.

Unfortunately, many of us have sleepwalked into the digital age, adopting technologies without fully grasping their implications. It’s time to wake up and proactively engage with the digital world, ensuring we are not only consumers of technology but also knowledgeable and responsible participants in its evolution. By doing so, we can harness the power of AI to improve our lives while safeguarding our values and principles.

2. Five Skills We Should Focus on in the Age of AI

As we delve deeper into the age of artificial intelligence, the proliferation of deepfakes presents both a technological marvel and a significant challenge. These AI-generated videos and images seriously threaten security, privacy, and trust. To effectively combat the spread and impact of deepfakes and ensure a thriving digital future for all, we should focus on developing five key skills.

These skills enhance our ability to detect and manage deepfakes and ensure that we leverage AI ethically and responsibly in our rapidly evolving digital landscape. By honing our analytical skills, embracing adaptability, exercising strategic foresight, fostering digital literacy, and upholding ethical standards, we can better navigate the complexities of AI and protect the integrity of information. 

1. Analytical Skills

In the AI era, analytical skills are more vital than ever. Understanding and interpreting synthetic media and data generated by AI systems is crucial for informed decision-making. These skills help identify patterns, trends, and insights, transforming raw data into strategic initiatives.

Moreover, while AI is powerful, it’s not perfect. Analytical skills are essential to critically evaluate AI outputs, uncover biases, and improve accuracy. This shift from intuition to data-driven strategies enhances forecasting and risk assessment, turning complex data into actionable insights.

Problem-solving also benefits, as analytical thinkers can dissect issues and optimise AI performance. Human-AI collaboration thrives when professionals understand AI's processes and know where human judgment is needed.

2. Adaptability

Adaptability encompasses continuous learning, flexibility, resilience, and embracing change. Continuous learning keeps professionals updated with the latest AI tools and trends, ensuring relevance. Flexibility involves being open-minded and versatile, enabling smooth transitions and innovation. Resilience helps navigate setbacks, viewing failures as learning opportunities.

Embracing change means proactively seeking and integrating new technologies and fostering an organisational culture that rewards innovation. These elements create a robust framework for individuals and organisations to excel in an AI-driven world.

3. Strategic Foresight

In the age of AI, strategic foresight is crucial for businesses to anticipate trends, prepare for challenges, and make informed decisions. This involves staying ahead of technological advancements, recognising opportunities, and mitigating risks. 

Building future-ready AI systems requires long-term planning, ethical considerations, and sustainability. Strategic foresight also helps navigate regulatory changes, skill gaps, and technological disruptions. Organisations can allocate resources effectively and shape long-term success by utilising data-driven insights and scenario planning. 

4. Digital Literacy

Digital literacy is more than just knowing how to use digital tools; it's about understanding their implications on our well-being, privacy, and ethics. It involves technical proficiency, recognising how digital tools impact mental health, and managing screen time effectively. Digital literacy also means being aware of data privacy and cybersecurity threats, ensuring informed consent, and understanding the ethical use of AI. 

Moreover, it addresses the digital divide, promoting equitable access to technology and fostering community engagement. Developing digital literacy is crucial to balance productivity and well-being as AI advances. Ignoring digital literacy is like driving blindfolded in the AI era—dangerous and irresponsible.

5. Ethics

If you think ethics in AI is just a buzzword, you’re already part of the problem.

In the AI-driven world, ethics and responsible usage are critical. Ensuring fairness by addressing biases in AI systems, maintaining transparency in AI processes, and establishing accountability for AI decisions are essential steps.

Responsible AI involves ethical design, regulatory compliance, and prioritising human well-being. Addressing bias, ensuring data privacy, and protecting AI systems from cyber threats are integral to ethical AI usage. Additionally, AI should be developed with social responsibility, focusing on the public good and avoiding harmful applications.

Transparency and explainability, through Explainable AI and open communication, build trust. Implementing ethical guidelines and forming ethics committees can guide organisations in responsible AI practices.

Fighting Deepfakes with Our New Skills

As deepfake technology continues to advance, the ability to detect and counteract these digital deceptions becomes increasingly critical. The five skills outlined serve as the foundation for combating deepfake scams effectively.

Analytical skills enable us to critically evaluate and interpret the vast amounts of AI-generated data, helping to identify and expose manipulated content. Adaptability ensures we stay current with the latest technological advancements and can pivot quickly in response to new threats. Strategic foresight allows us to anticipate future challenges and opportunities, preparing us to handle the evolving landscape of AI-generated media.

Digital literacy empowers individuals to understand the broader implications of digital tools, from their impact on mental health to privacy concerns, fostering a more informed and vigilant society. Finally, a strong ethical framework guides the responsible use of AI, ensuring that these powerful tools are employed for the greater good rather than malicious purposes.

By cultivating these skills, we enhance our capacity to fight deepfakes and build a resilient and ethically grounded approach to navigating the AI-driven world.

3. Trust but Verify

“Trust, but verify” is a Russian proverb that gained international prominence when Suzanne Massie, a scholar of Russian history, taught it to U.S. President Ronald Reagan. Reagan used it frequently during nuclear disarmament discussions with the Soviet Union, emphasising the importance of verification in building trust. 

This principle is more relevant than ever as AI becomes an increasingly powerful force in our lives. We must develop robust methods to verify the functionality, intentions, and output of AI systems. This involves creating systems and tools that can effectively identify AI-generated content and confirm the authenticity of digital identities in our rapidly evolving online landscape.

We must implement stringent verification methods to protect ourselves against AI scams. A vital start is asking questions that only the real person you believe you are dealing with could answer. This approach can help confirm the identity of individuals in voice, video, or metaverse interactions, just as the Ferrari executive did.

One practical recommendation I often share during my keynotes is using a safety word for conversations with your closest family members—something known only to them. In case of doubt, this safety word can serve as a quick and reliable method to verify the identity of the person you are communicating with.
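As a minimal sketch of how such a safety word could be checked without ever storing it in plain text, consider a simple enrol-and-verify pair using a salted hash and a constant-time comparison. The function names and parameters here are illustrative assumptions, not any specific product's API:

```python
import hashlib
import hmac
import os

def enroll(safety_word: str) -> tuple[bytes, bytes]:
    """Store only a salted hash of the safety word, never the word itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", safety_word.encode(), salt, 100_000)
    return salt, digest

def verify(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time to resist timing attacks."""
    attempt = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

# Example: the agreed family safety word is enrolled once, then checked on demand.
salt, digest = enroll("blue heron")
assert verify("blue heron", salt, digest)
assert not verify("grey heron", salt, digest)
```

The same pattern, a shared secret checked rather than spoken outright, is what makes a safety word resilient: even if a scammer has cloned a voice, they cannot answer a challenge they never knew.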

Creating robust verification systems involves several key strategies, including:

1. Advanced Detection Tools: Developing AI tools to analyse content for signs of manipulation. These tools can use forensic analysis, pattern recognition, and anomaly detection techniques to identify deepfakes.

2. Authentication Mechanisms: Implementing multi-factor authentication methods, including biometric verification (e.g., facial recognition, voice recognition) and cryptographic techniques, to confirm the identity of individuals in digital interactions.

3. Digital Watermarking: Embedding digital watermarks in legitimate media to verify authenticity. Watermarking can provide a way to trace the origin of content and detect alterations. Numerous companies, including Meta and Google, are developing watermarking solutions.
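Production watermarking embeds an imperceptible signal in the media itself, but the verify-before-trust principle behind it can be illustrated more simply. The sketch below, with a hypothetical publisher key and helper names of my own choosing, attaches an HMAC authenticity tag to content so that any later alteration is detectable:

```python
import hashlib
import hmac

# Hypothetical signing key held only by the content's publisher.
SECRET_KEY = b"publisher-signing-key"

def sign_content(media_bytes: bytes) -> str:
    """Produce an HMAC tag over the media so recipients can detect tampering."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def is_authentic(media_bytes: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = sign_content(media_bytes)
    return hmac.compare_digest(expected, tag)

# A tag issued for the original content fails once a single byte changes.
original = b"original frame data"
tag = sign_content(original)
assert is_authentic(original, tag)
assert not is_authentic(original + b"tampered", tag)
```

Real provenance schemes replace the shared key with public-key signatures so that anyone can verify without being able to forge, but the core idea is the same: authenticity is something you check, not something you assume.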

In an age where hyper-realistic digital deepfakes can be easily created, the principle of “trust but verify” is more critical than ever. Protecting ourselves from deepfake scams requires robust verification methods, including advanced detection tools, authentication mechanisms, and public awareness.

Conclusion

In the rapidly evolving digital landscape, the proliferation of deepfakes poses significant challenges to security, privacy, and trust. As highlighted in this article, deepfakes are not merely a technological curiosity but a real and present threat capable of causing substantial financial and reputational damage. The incidents involving Ferrari, Arup, and WPP underscore the sophistication and potential impact of these AI-generated deceptions. 

By leveraging analytical skills, adaptability, strategic foresight, digital literacy, and ethical AI practices, we can build a resilient defence against the threats posed by deepfakes. Implementing robust verification methods is essential for confirming the authenticity of digital content and identities. Simple yet effective measures, like using a safety word with close family members, can provide an additional layer of security in personal interactions.

Education is the cornerstone of all these skills. In today's rapidly evolving world, educating ourselves and those around us about AI and its implications is critical. This means understanding how AI works, recognising its potential limitations and risks, and staying informed about the latest developments. Education helps us build a deeper comprehension of phenomena like deepfakes, large language models, and other emerging technologies that impact our lives.

Moreover, fostering discussions and spreading awareness within our families, professional networks, and communities is vital for building a more informed and resilient society. Proactively engaging with the digital world ensures that we are not only consumers of technology but also knowledgeable and responsible participants in its evolution. 

How will you ensure that you, your team, and your organisation are digitally literate and prepared for the AI-driven future? By embracing these skills and practices, we can safeguard our digital environments, protect our personal and professional lives, and create a secure and trustworthy digital future.

Images: Midjourney

Dr Mark van Rijmenam


Dr. Mark van Rijmenam is a strategic futurist known as The Digital Speaker. He stands at the forefront of the digital age and lives and breathes cutting-edge technologies to inspire Fortune 500 companies and governments worldwide. As an optimistic dystopian, he has a deep understanding of AI, blockchain, the metaverse, and other emerging technologies, and he blends academic rigour with technological innovation.

His pioneering efforts include the world’s first TEDx Talk in VR in 2020. In 2023, he further pushed boundaries when he delivered a TEDx talk in Athens with his digital twin, delving into the complex interplay of AI and our perception of reality. In 2024, he launched a digital twin of himself offering interactive, on-demand conversations via text, audio or video in 29 languages, thereby bridging the gap between the digital and physical worlds – another world’s first.

As a distinguished 5-time author and corporate educator, Dr Van Rijmenam is celebrated for his candid, independent, and balanced insights. He is also the founder of Futurwise, which focuses on elevating global digital awareness for a responsible and thriving digital future.
