OpenAI Faces Criminal Probe Over ChatGPT’s Alleged Role In Shooting


Can a chatbot be blamed for a crime it didn’t physically commit?

That’s the uncomfortable question now hanging over one of the world’s biggest AI companies.

OpenAI is facing a criminal investigation in the US after authorities in Florida said its tool ChatGPT may have been used by a 20-year-old student in the lead-up to a deadly shooting at Florida State University.

Florida Attorney General James Uthmeier didn’t hold back.

“ChatGPT offered significant advice to this shooter,” he claimed.

Investigators believe the bot may have discussed weapons, timing, and even potential locations on campus.

Strong words—but are they legally enough to assign blame to a machine?

OpenAI pushes back firmly. “ChatGPT is not responsible for this terrible crime,” a spokesperson said, adding that the system only provides information drawn from public sources and does not encourage illegal acts.

AI Accountability Debate

Still, prosecutors are digging deeper, arguing that if a human had provided similar guidance, it could amount to “aiding and abetting.”

That’s where things get legally messy: can intent exist without a person behind the advice?

Experts say this could become a defining test case for AI accountability: not just what AI says, but what responsibility companies carry for how people use it.

And here’s the bigger question: if information is everywhere, who’s responsible when someone turns it into harm—the tool, or the intent behind it?
