When Your Next Online Friend is a Pentagon-Crafted Illusion

👋 Hi, I am Mark. I am a strategic futurist and innovation keynote speaker, and I advise governments and enterprises on emerging technologies such as AI and the metaverse. My subscribers receive a free weekly newsletter on cutting-edge technology.

In a world where your next best friend might not even be human, the Pentagon is now working to create fake online personas so convincing that even AI won't know they're not real. Sounds like the plot of a dystopian novel, right? Welcome to the future of online deception.

  • The Pentagon plans to create AI personas indistinguishable from real people, using technologies like StyleGAN.
  • These personas will not only feature static images but also realistic selfie videos, making them nearly impossible to detect.
  • The U.S. military's use of this technology conflicts with its previous warnings about the dangers of deepfakes.

In a startling development, the U.S. military's Joint Special Operations Command (JSOC) is on a mission to build AI-generated online personas that are virtually indistinguishable from real people. These fake personas, complete with government ID-quality photos, multiple facial expressions, and convincing background environments, are being developed as part of covert operations to gather intelligence on social media platforms. The technology behind this initiative includes cutting-edge systems like Nvidia's StyleGAN, which is capable of generating hyper-realistic digital faces that look entirely human.

AI Selfie Videos

But these avatars aren't just limited to still images. JSOC aims to go further by creating "selfie" videos in which fabricated individuals interact in virtual environments, communicating with AI-synced audio to complete the illusion. These personas would be used to infiltrate public forums and online networks, gathering intelligence without raising alarms. Imagine chatting with someone on a platform, sharing ideas, only to realize months later that your "friend" never existed.

While the technology sounds like something out of a sci-fi thriller, its real-world implications are profound. The U.S. military has openly criticized the use of deepfake technology by adversarial states such as Russia and China, which have used similar tools for disinformation and influence campaigns. Yet the Pentagon's move to develop its own AI-driven deepfakes introduces a troubling double standard. What happens when the lines between truth and fabrication blur to the point where even advanced AI can't detect the difference?

The technology itself is a significant leap forward in AI manipulation. StyleGAN, originally released in 2019, became famous for powering the popular website "This Person Does Not Exist," where each refresh of the page generates a new, lifelike face of a person who never existed. The U.S. military, however, is aiming far beyond static images. Its ambition extends into dynamic environments, where entire virtual lives could be fabricated, complete with selfie videos and matching audio that deceive even the most sophisticated detection systems.
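The core mechanism behind sites like "This Person Does Not Exist" is a generator network that maps a randomly sampled latent vector to an image, so every refresh (a fresh random sample) yields a new face. The toy sketch below illustrates only that latent-to-image idea with untrained random weights; it is purely illustrative and is not StyleGAN's actual architecture, which uses a learned mapping network, style modulation, and progressive convolutional synthesis.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

LATENT_DIM = 64   # size of the latent vector z (illustrative choice)
IMG_SIZE = 16     # toy 16x16 grayscale "image" (illustrative choice)

# Random, untrained weights stand in for a learned generator.
W1 = rng.normal(0, 0.1, (LATENT_DIM, 256))
W2 = rng.normal(0, 0.1, (256, IMG_SIZE * IMG_SIZE))

def generate_face(z: np.ndarray) -> np.ndarray:
    """Map a latent vector z to a toy image with pixel values in [0, 1]."""
    h = np.tanh(z @ W1)                  # hidden representation
    img = 1 / (1 + np.exp(-(h @ W2)))    # sigmoid squashes to pixel range
    return img.reshape(IMG_SIZE, IMG_SIZE)

# Each fresh latent sample produces a different "face", mirroring how a
# new image appears on every page refresh.
z = rng.normal(size=LATENT_DIM)
img = generate_face(z)
print(img.shape)  # (16, 16)
```

In a real GAN, the weights are learned by training the generator against a discriminator on millions of photographs, which is what turns this random mapping into photorealistic faces.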

Ethically, this raises enormous concerns. A government that has previously condemned the use of AI-generated content to manipulate public opinion is now leaning into the very same technology for its own clandestine operations. According to Heidy Khlaaf, chief AI scientist at the AI Now Institute, "There are no legitimate use cases besides deception." If the Pentagon succeeds in creating these undetectable fake identities, it could normalize the use of deepfake personas for intelligence and warfare, paving the way for authoritarian states to follow suit.

This technology could undermine public trust, both domestically and globally. Will citizens start to question the authenticity of every online interaction (as they already should)? Moreover, will foreign adversaries retaliate by developing even more advanced tools of deception, escalating an arms race where AI-driven disinformation becomes the norm?

As AI-driven deception becomes more prevalent, the question isn't just what we can build, but whether we should. In a world where trust is already fragile, do we really want to live in a digital landscape where even the people we interact with online may not exist?

Read the full article on The Intercept.

----

💡 If you enjoyed this content, be sure to download my new app for a unique experience beyond your traditional newsletter.

This is one of many short posts I share daily on my app, where you can get real-time insights, recommendations, and conversations with my digital twin via text, audio, or video in 28 languages! Go to my PWA at app.thedigitalspeaker.com and sign up to take our connection to the next level! 🚀


If you are interested in hiring me as your futurist and innovation speaker, feel free to complete the form below.

The Pentagon's Joint Special Operations Command (JSOC) is seeking to develop AI-driven personas for covert operations, leveraging technologies like StyleGAN to generate hyper-realistic fake online identities.

These personas will not only feature government ID-quality images, multiple facial expressions, and background environments but also integrate "selfie" videos with audio that can deceive even the most advanced social media algorithms. The capability could allow the U.S. military to create avatars that infiltrate online forums for intelligence gathering without raising suspicion.

However, this initiative raises ethical concerns. The same government that has warned about the dangers of deepfakes is now investing heavily in technology to create undetectable false identities for its own purposes. This paradox could lead to an arms race where states rely on increasingly sophisticated AI-generated deceptions.

  • The Pentagon wants to create personas that can deceive both humans and machines.
  • Technologies like StyleGAN will power these identities, integrating visuals, videos, and audio.
  • These avatars will be used to infiltrate online spaces for intelligence purposes.

As we accelerate toward a future of AI-driven deception, we must ask ourselves: Is it wise to build a world where trust in digital identities becomes virtually nonexistent? How do we protect trust and authenticity in a digital age where even our closest online allies might be fake?


Dr Mark van Rijmenam

Dr. Mark van Rijmenam is a strategic futurist known as The Digital Speaker. He stands at the forefront of the digital age and lives and breathes cutting-edge technologies to inspire Fortune 500 companies and governments worldwide. As an optimistic dystopian, he has a deep understanding of AI, blockchain, the metaverse, and other emerging technologies, blending academic rigor with technological innovation.

His pioneering efforts include the world's first TEDx Talk in VR in 2020. In 2023, he further pushed boundaries when he delivered a TEDx talk in Athens with his digital twin, delving into the complex interplay of AI and our perception of reality. In 2024, he launched a digital twin of himself, offering interactive, on-demand conversations via text, audio, or video in 29 languages, thereby bridging the gap between the digital and physical worlds, another world's first.

Dr. Van Rijmenam is a prolific author and has written more than 1,200 articles and five books in his career. As a corporate educator, he is celebrated for his candid, independent, and balanced insights. He is also the founder of Futurwise, which focuses on elevating global knowledge on crucial topics like technology, healthcare, and climate change by providing high-quality, hyper-personalized, and easily digestible insights from trusted sources.

