AI Chatbots May Encourage Delusional Beliefs, Study Warns

The study authors advocate for clinical testing of AI chatbots in conjunction with trained mental health professionals.

Can a chatbot accidentally push someone deeper into a delusion?

A new scientific review suggests the possibility—and it’s raising eyebrows in the mental health community.

Researchers writing in The Lancet Psychiatry warn that AI chatbots could reinforce delusional thinking, particularly among people already vulnerable to psychosis.

The review was led by psychiatrist Hamilton Morrin of King’s College London, who examined around 20 media reports describing what some call “AI psychosis.”

So what’s the concern? Morrin says chatbots can sometimes validate or amplify grandiose beliefs: the idea that someone has special powers, cosmic significance, or a unique destiny.

In some cases, bots responded with mystical language, even implying users were communicating with a higher or cosmic entity.

AI And Psychosis

But experts stress an important point: AI likely doesn’t create psychosis from scratch.

Instead, it may intensify beliefs already forming. As psychiatrist Kwame McKenzie explains, people in the early stages of psychosis might be particularly vulnerable.

Another complication? Chatbots talk back.

That interactive feedback can accelerate harmful thinking patterns, says Dominic Oliver of the University of Oxford.

AI companies insist their tools aren’t substitutes for therapy.

Still, researchers argue chatbots should be tested alongside mental health professionals.

After all, when technology starts sounding like a trusted voice, the line between helpful guidance and harmful reinforcement can get surprisingly blurry.
