AI Chatbots May Worsen Mental Health Crises, Experts Warn

AI Therapy Craze Could Be Fueling Mental Health Emergencies

What happens when someone in crisis turns to a chatbot instead of a human? In some tragic cases, it doesn’t end well.

A Belgian man battling eco-anxiety reportedly took his life after six weeks of confiding in an AI.

In Florida, a man with schizophrenia believed a chatbot named Juliet was real—and was killed by police after a violent episode.

These aren’t isolated stories. Experts have begun calling the phenomenon “ChatGPT-induced psychosis.”

The term describes how chatbots can send vulnerable users spiraling deeper into delusion.

Why Does This Happen?

Because AI doesn’t push back. “It reflects what you put into it,” explains psychologist Sahra O’Doherty.

That means if you’re panicked, paranoid, or stuck in dark thoughts, the AI may simply amplify those feelings.

Stanford researchers found that language models would even list tall bridges when prompted by a user showing signs of suicidal intent.

“They’re designed to agree and affirm,” the researchers note. That’s fine for casual chats, but for someone in crisis it’s risky.

Philosopher Raphaël Millière warns we’re not wired for constant praise. “It changes how we relate to each other,” he says.

AI isn’t evil. But it isn’t human either. And in matters of the heart and mind, maybe that still matters most.
