Unloved Online: Can Big Tech Be Held Accountable for Teen Suicides?

👋 Hi, I am Mark. I am a strategic futurist and innovation keynote speaker. I advise governments and enterprises on emerging technologies such as AI or the metaverse. My subscribers receive a free weekly newsletter on cutting-edge technology.

It's time we hold the tech sector accountable for the devastating consequences of its creations – and ban kids from social media and platforms like Character.AI, where artificial friendships are replacing human connections and claiming young lives.

I was struck by the tragic story of 14-year-old Sewell Setzer III, who took his life after developing an emotional attachment to a lifelike A.I. chatbot on Character.AI.

Sewell's parents and friends had no idea he'd fallen for a chatbot, but his isolation, poor grades, and lack of interest in activities he once loved were clear warning signs. Unfortunately, this phenomenon is not an isolated incident.

Millions of people, including teenagers, are using these apps, which are largely unregulated and often designed to simulate intimate relationships and to keep you engaged as long as possible.

Character.AI's co-founder, Noam Shazeer, who has since returned to Google in a deal valued at $2.7 billion, wanted to push this technology ahead fast, but at what cost? As A.I. companionship platforms continue to grow in popularity, I wonder: are we witnessing a cure for loneliness or a new menace?

  • A.I. companionship apps like Character.AI are largely unregulated and may worsen isolation in depressed and chronically lonely users, including teenagers.
  • These platforms often have no specific safety features for underage users and no parental controls, allowing children to access potentially harmful content.
  • A.I. companionship apps can provide harmless entertainment, but their claims about mental health benefits are largely unproven, and there is a dark side to them.

Lighter regulation to spur innovation has its merits, but there must be clear boundaries, and protecting our children from the inherent dangers of these platforms should be one of them.

It Is About Time to Protect Our Children

Stories like these make me very sad, and I wonder why we cannot get our act together to protect the most vulnerable and most important population of our society: the next generation.

Of course, this is a rhetorical question, as I am well aware that our drive for capitalism and our short-term focus on shareholder profit are what drive this. Officially, children under the age of 13 are not allowed on social media, but there is no verification process, and the owners and directors of technology companies are not held accountable if children do join platforms such as Instagram or Character.AI. As a result, technology companies see children as an attractive target market, as they can easily be persuaded to spend more and more time on the platform.

The prefrontal cortex is crucial for evaluating risks and controlling impulses, but in teenagers it is still developing. Its immaturity makes them more likely to engage in risky behaviors, including substance use and social media addiction. This is compounded by their heightened sensitivity to rewards, which can lead to repeated drug use or social media use as they seek pleasurable experiences. The research is very clear about this, and yet we let Big Tech exploit our societies in its pursuit of ever more shareholder profit.

Beyond preventing children from joining these platforms, we should focus heavily on educating them about the risks of the internet, AI and social media. Unfortunately, in most schools this education is underdeveloped, and little attention is paid to online bullying or the dangers of social media and gaming platforms such as Fortnite or Roblox.

As we navigate the complex world of A.I. technology, it's crucial that we prioritize the well-being and safety of our children. We must consider the long-term consequences of creating artificial relationships that may ultimately destroy our youngsters' lives. It is time governments step up and ban kids from these platforms, including social media, and require proper education on how to navigate these online risks.

Read the full article in the New York Times.

----

💡 If you enjoyed this content, be sure to download my new app for a unique experience beyond your traditional newsletter.

This is one of many short posts I share daily on my app, where you can get real-time insights, recommendations and conversations with my digital twin via text, audio or video in 28 languages! Go to my PWA at app.thedigitalspeaker.com and sign up to take our connection to the next level! 🚀


If you are interested in hiring me as your futurist and innovation speaker, feel free to complete the form below.


Dr Mark van Rijmenam

Dr. Mark van Rijmenam is a strategic futurist known as The Digital Speaker. He is a true Architect of Tomorrow, bringing both vision and pragmatism to his keynotes. As a renowned global keynote speaker, a Global Speaking Fellow, recognized as a Global Guru Futurist and a 5-time author, he captivates Fortune 500 business leaders and governments globally.

Recognized by Salesforce as one of 16 must-know AI influencers, he combines forward-thinking insights with a balanced, optimistic dystopian view. With his pioneering use of a digital twin and his next-gen media platform Futurwise, Mark doesn't just speak on AI and the future; he lives it, inspiring audiences to harness technology ethically and strategically. You can reach his digital twin via WhatsApp at: +1 (830) 463-6967
