OpenAI Under Fire After ChatGPT Fabricates Disturbing Murder Story

Imagine asking a chatbot for information about yourself—only to find out it claims you murdered your own children.

That’s exactly what happened to a Norwegian man, and now OpenAI is facing serious legal trouble.

ChatGPT, known for its ability to generate human-like responses, has been accused of fabricating a disturbing crime story.

According to privacy advocacy group Noyb, the AI falsely labeled Arve Hjalmar Holmen as a convicted murderer.

The story blended real details of his life, including his hometown and the number of his children, with pure fiction.

“The fact that someone could read this output and believe it is true is what scares me the most,” Holmen said.

An Alarming Situation?

This isn’t the first time ChatGPT has “hallucinated” false information, but this case is particularly alarming.

Under EU law, companies must ensure personal data is accurate, and Noyb argues OpenAI is in clear violation.

If the case ends up in court, OpenAI, rather than the chatbot itself, could face substantial fines under the EU's General Data Protection Regulation (GDPR).

“You can’t just spread false information and slap on a tiny disclaimer,” said Noyb’s lawyer Joakim Söderberg.

While OpenAI has since updated the chatbot to correct the error, Noyb insists the false information still lingers within the system.

Fines aside, the bigger question remains: can AI truly be trusted with personal data?
