Synthetic Minds | AI Triage for 40 Million, and No One Ran a Safety Check

The Synthetic Minds newsletter offers short daily insights to get you thinking. If you enjoy it, please forward it. If you want more insights, subscribe to Futurwise and get 25% off for the first three months!

Over the weekend, I turned my book into an interactive masterclass, built entirely with AI. Read how I did it here, or start using it for free.

Today’s topic: Healthcare & Longevity


AI Triage for 40 Million, and No One Ran a Safety Check

OpenAI launched ChatGPT Health on January 7, 2026. Within weeks, 40 million people were using it daily to decide whether to go to the emergency room.

Now the first independent safety evaluation, published in Nature Medicine by researchers at Mount Sinai, has found that the platform under-triaged 51.6% of emergency cases.

In respiratory failure and diabetic ketoacidosis scenarios, it had roughly even odds of advising patients to wait. It was 12 times more likely to downplay symptoms when a family member minimized their severity. Its suicide-crisis safeguards triggered inconsistently, sometimes appearing for lower-risk cases, sometimes vanishing when patients described how they intended to harm themselves.

That's the capability story. Here is the signal.

No independent body evaluated this product before it reached 40 million daily users.

The lead researcher said it explicitly: "We wouldn't accept that for a medication or a medical device."

A concurrent Brown University study identified 15 ethical violations when LLMs operate as therapists (deceptive empathy, crisis mismanagement, zero regulatory accountability) across GPT, Claude, and Llama.

OpenAI's response? The study "does not reflect how people typically use ChatGPT Health." That is the exact defense pharmaceutical companies are prohibited from making about post-market safety data.

We require FDA approval for drugs that alter the body. We require nothing for AI that triages whether you live or die.

Before we sleepwalk into another technology scaled for profit before patients, one question demands an answer: if we require approval for chemicals that affect the body, why do we require nothing for AI that affects the mind?


'Synthetic Minds' continues to reflect the synthetic forces reshaping our world. Quick, curated insights to feed your quest for a better understanding of our evolving synthetic future, powered by Futurwise:

1. A groundbreaking study has uncovered the brain's hidden defense against Alzheimer's, a natural cleanup system that holds promise for new treatments. (ScienceDaily)

2. A groundbreaking Australian-made AI tool has been developed to detect high breast cancer risk in women who were previously given a clean bill of health, revolutionizing breast cancer screening and potentially saving lives. (ABC News)

3. Juvenescence, an AI-enabled biotech company focused on longevity, has successfully completed the Phase 1 trial of its PAI-1 inhibitor, MDI-2517. The drug, designed to target the processes that accelerate aging, has been shown to be safe, well-tolerated, and suitable for once-daily dosing. Could this be the breakthrough we've been waiting for? (Longevity.Technology)

4. Africa's digital future is being shaped by tech empires, raising concerns about 'digital colonialism' and 'algorithmic colonialism', where patterns of data ownership and dependency are replicated and choices, data, and revenues migrate to foreign countries. (ITWeb)

5. In a world where technology is rapidly changing the entertainment industry, one AI-generated 'actor' is making waves: Tilly Norwood, and 'she' is getting her own virtual world, Tillyverse. (GlobalNews)


Now What? How to Ride the Tsunami of Change

If you are interested in more insights, grab my latest award-winning book, Now What? How to Ride the Tsunami of Change, and learn how to embrace a mindset that can deal with exponential change.

If this newsletter was forwarded to you, you can sign up here.

Thank you.
Mark