When Your Next Online Friend is a Pentagon-Crafted Illusion

👋 Hi, I am Mark. I am a strategic futurist and innovation keynote speaker. I advise governments and enterprises on emerging technologies such as AI and the metaverse. My subscribers receive a free weekly newsletter on cutting-edge technology.

In a world where your next best friend might not even be human, the Pentagon is now working to create fake online personas so convincing, even AI won't know they're not real. Sounds like the plot of a dystopian novel, right? Welcome to the future of online deception.

  • The Pentagon plans to create AI personas indistinguishable from real people, using technologies like StyleGAN.
  • These personas will not only feature static images but also realistic selfie videos, making them nearly impossible to detect.
  • The U.S. military's use of this technology conflicts with its previous warnings about the dangers of deepfakes.

In a startling development, the U.S. military's Joint Special Operations Command (JSOC) is on a mission to build AI-generated online personas that are virtually indistinguishable from real people. These fake personas, complete with government ID-quality photos, multiple facial expressions, and convincing background environments, are being developed as part of covert operations to gather intelligence on social media platforms. The technology behind this initiative includes cutting-edge systems like Nvidia's StyleGAN, which is capable of generating hyper-realistic digital faces that look entirely human.

AI Selfie Videos

But these avatars aren't just limited to still images. JSOC aims to go further by creating "selfie" videos in which fabricated individuals interact in virtual environments, communicating with AI-synced audio to complete the illusion. These personas would be used to infiltrate public forums and online networks, gathering intelligence without raising alarms. Imagine chatting with someone on a platform, sharing ideas, only to realize months later that your "friend" never existed.

While the technology sounds like something out of a sci-fi thriller, its real-world implications are profound. The U.S. military has openly criticized the use of deepfake technology by adversarial states such as Russia and China, which have used similar tools for disinformation and influence campaigns. Yet the Pentagon's move to develop these AI-driven deepfakes introduces a troubling double standard. What happens when the lines between truth and fabrication blur to the point where even advanced AI can't detect the difference?

The technology itself is a significant leap forward in AI manipulation. StyleGAN, originally released in 2019, became famous for powering the popular website "This Person Does Not Exist," where each refresh of the page generates a new, lifelike face of a person who never existed. The U.S. military, however, is aiming far beyond static images. Its ambition extends into dynamic environments, where entire virtual lives could be fabricated, complete with selfie videos and matching audio that deceive even the most sophisticated detection systems.

Ethically, this raises enormous concerns. A government that has previously condemned the use of AI-generated content to manipulate public opinion is now leaning into the very same technology for its own clandestine operations. According to Heidy Khlaaf, chief AI scientist at the AI Now Institute, "There are no legitimate use cases besides deception." If the Pentagon succeeds in creating these undetectable fake identities, it could normalize the use of deepfake personas for intelligence and warfare, paving the way for authoritarian states to follow suit.

This technology could undermine public trust, both domestically and globally. Will citizens start to question the authenticity of every online interaction (as they already should)? Moreover, will foreign adversaries retaliate by developing even more advanced tools of deception, escalating an arms race where AI-driven disinformation becomes the norm?

As AI-driven deception becomes more prevalent, the question isn't just about what we can build, but whether we should. In a world where trust is already fragile, do we really want to live in a digital landscape where even the people we interact with online may not exist?

Read the full article on The Intercept.

----

💡 If you enjoyed this content, be sure to download my new app for a unique experience beyond your traditional newsletter.

This is one of many short posts I share daily on my app, where you can get real-time insights, recommendations, and conversations with my digital twin via text, audio, or video in 28 languages! Go to my PWA at app.thedigitalspeaker.com and sign up to take our connection to the next level! 🚀


If you are interested in hiring me as your futurist and innovation speaker, feel free to complete the form below.


Dr Mark van Rijmenam

Dr. Mark van Rijmenam is a strategic futurist known as The Digital Speaker. He is a true Architect of Tomorrow, bringing both vision and pragmatism to his keynotes. As a renowned global keynote speaker, a Global Speaking Fellow, recognized as a Global Guru Futurist and a 5-time author, he captivates Fortune 500 business leaders and governments globally.

Recognized by Salesforce as one of 16 must-know AI influencers, he combines forward-thinking insights with a balanced, optimistic dystopian view. With his pioneering use of a digital twin and his next-gen media platform Futurwise, Mark doesn't just speak on AI and the future; he lives it, inspiring audiences to harness technology ethically and strategically. You can reach his digital twin via WhatsApp at: +1 (830) 463-6967
