Synthetic Minds | The Failures of Big Tech 🤬

'Synthetic Minds' serves as a mirror to the multifaceted, synthetic elements that are beginning to weave into the fabric of our society. This week's newsletter focuses on the massive failures of Big Tech and the consequences of these failures for our society, our children and future generations. If we want a thriving synthetic future, it is time to hold Big Tech accountable for the technology they are developing.


Cinematic Futures: Crafting Tomorrow's World with Brad Rochefort - Synthetic Minds Podcast EP07

My new podcast episode

Can science fiction storytellers predict and shape our future better than tech moguls?

Brad Rochefort’s cinematic storytelling doesn’t just entertain—it provokes deep thought about our technological trajectory and its ethical implications. In the latest Synthetic Minds podcast, Brad discusses how his works, like "Zip Code" and "The Holdout," serve as blueprints for exploring futuristic concepts.

Rochefort’s creative process blends real-world advancements with imaginative storytelling. His film "Zip Code," inspired by a real-life blogger's cybernetic journey, showcases how blending human experience with tech innovations can generate compelling narratives. These stories allow businesses to conceptualise future scenarios, anticipating the impacts of emerging technologies.

Brad emphasises the role of speculative fiction in strategic planning. By simulating potential futures, his narratives help organisations explore the societal impacts of new technologies in a risk-free environment. 


Synthetic Snippets: Quick Bytes for Your Synthetic Mind

Quick, curated insights to feed your quest for a better understanding of our evolving synthetic future. Below is a small selection of the daily updates I share via The Digital Speaker app. Download it and subscribe today to receive real-time updates. Use the coupon code SynMinds24 to get your first month free.

1. BIG TECH'S BETRAYAL: AI TRAINING ON KIDS' PHOTOS

When privacy fails: Big Tech is using your children's photos to train AI without consent.

Human Rights Watch (HRW) has uncovered a disturbing trend: AI models are being trained on photos of children, even those protected by strict privacy settings. HRW researcher Hye Jung Han discovered 190 photos of Australian children in the LAION-5B dataset, created from web snapshots. These photos, taken without consent, expose children to privacy risks and potential deepfake misuse.

The problem extends beyond just capturing images. URLs in the dataset can reveal children's identities, including names and locations, posing severe safety threats. Despite strict YouTube privacy settings, AI scrapers still archive and use these images, highlighting Big Tech's failure to protect user data.

AI-generated deepfakes already harm children, as seen in Melbourne, where explicit deepfakes of 50 girls were circulated online. For Indigenous communities, the unauthorised use of images disrupts cultural practices, exacerbating the issue. How can we trust Big Tech to safeguard our future when they exploit our children's privacy? (Ars Technica)


2. THE "FREEWARE" FALLACY: YOUR CONTENT ISN'T BIG TECH'S FREE PLAYGROUND

Big Tech is wrong: your content isn't their free playground!

Microsoft’s AI chief, Mustafa Suleyman, wrongly asserts that online content is “freeware” for AI training. This stance is not only misleading; it threatens the very foundation of copyright protection.

Content creators, myself included, are outraged as tech giants exploit our intellectual property without consent or payment, turning it into lucrative AI training data and selling the resulting products at a handsome profit.

This blatant disregard for creators' rights must stop. We need robust legal protections to ensure fair compensation and prevent Big Tech from profiting off stolen content. Shouldn't we defend our creative works from becoming Big Tech's goldmine without permission? (The Register)


3. AI’S VOICE GENDER BIAS: SEXY AND SUBSERVIENT STEREOTYPES PERSIST

Why are our AI assistants still stuck in the 1950s?

AI voices are perpetuating outdated gender stereotypes despite technological advancements. OpenAI's ChatGPT, with its husky-voiced assistant Sky, echoes the compliant, empathetic female archetype popularised by Hollywood. 

This isn’t just about aesthetics; it's about re-encoding biases in our everyday tech. The dilemma is stark: as we push for more naturalistic AI, are we reinforcing harmful stereotypes? 

The real challenge is designing AI that doesn't just sound like a reassuring friend but genuinely respects and reflects diverse identities and roles. Can we embrace responsible synthetic futures that break free from these limiting moulds? (New York Times)


4. EXPLOITING TRAUMA: THE DARK SIDE OF DEEPFAKE TECHNOLOGY

Is tech innovation worth the cost when it revictimises those already harmed by heinous crimes?

The emergence of nonconsensual deepfake pornography has reached a new low, with videos exploiting sex trafficking survivors from the infamous GirlsDoPorn operation. These deepfakes, posted on a notorious website, blend AI technology with the suffering of victims, creating a chilling new form of abuse. 

Despite advances in technology, laws to protect against such malicious use lag significantly, leaving survivors to rely on intellectual property laws to fight back. It is time for law enforcement and big tech to step up to protect victims and prevent such despicable acts! (Wired)


5. THE $365.63 AUTOMATED NEWS SCAM: HOW AI IS POLLUTING THE INTERNET

Is your daily news diet being contaminated by AI-driven plagiarism factories?

For just $365.63, Emanuel Maiberg from 404 Media created an AI-driven news website, Prototype.Press, that churns out plagiarised articles at an alarming rate. 

Using a freelancer from Fiverr, Maiberg set up the site to automatically republish content from reputable sources like 404 Media and Wired, reworded by ChatGPT. This experiment highlights a growing industry that prioritises clicks over quality, leveraging tools like WordPress plugins and automated content generators. 

Despite the ease and profitability of these sites, they fail to capture the essence of genuine journalism, relying instead on AI-generated mediocrity. This problem is exactly why we are building Futurwise.com. (404 Media)




Know someone who needs the Synthetic Minds newsletter?

Forward it to someone who might like it too.