Teen Took His Life After ChatGPT ‘Encouragement’, Lawsuit Says

Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims.

What happens when the tool designed to help ends up doing harm?

That’s the heartbreaking question at the center of a new lawsuit against OpenAI.

16-year-old Adam Raine from California took his own life earlier this year following months of conversations with ChatGPT.

According to his family’s lawyer, the teenager exchanged up to 650 messages a day with the chatbot, some of which included discussions about suicide methods and even drafting a goodbye note.

“This was inevitable,” attorney Jay Edelson said, alleging OpenAI rushed its model, known as 4o, to market “despite clear safety issues.”

OpenAI, now valued at half a trillion dollars, admits its systems can “fall short.”

Did Safety Fall Behind AI Progress?

The company has promised tighter safeguards, stronger parental controls, and new measures to stop long conversations from slipping past safety training.

“We are deeply saddened by Mr. Raine’s passing,” a spokesperson said, extending condolences to the family.

Experts warn this isn’t an isolated risk.

Microsoft AI chief Mustafa Suleyman recently raised alarms about a “psychosis risk”: mania-like or paranoid episodes triggered by long, immersive AI chats.

OpenAI says updates in GPT-5 will better “ground” users in reality.

But for Adam’s grieving parents, one question lingers: in the race to dominate AI, did safety get left behind?
