AI ‘Friend’ Chatbots Under Investigation Over Child Safety

What happens when your child’s “online best friend” is an AI chatbot — and no one’s watching?

That’s exactly what the U.S. Federal Trade Commission wants to know.

In a new probe announced this week, the FTC is demanding answers from seven tech heavyweights: Alphabet, OpenAI, Character.ai, Snap, xAI, Meta and Instagram.

The inquiry focuses on how their chatbots interact with children — and how the companies make money from those interactions.

FTC chair Andrew Ferguson says the inquiry will “help us better understand how AI firms are developing their products and the steps they are taking to protect children.”

In other words: are these bots safe, or just profit machines?

The move follows a string of lawsuits claiming AI companions encouraged harmful behavior in young users.

Families Raise Alarms Over AI Chatbots

One California family alleges OpenAI’s ChatGPT “validated” their teenage son’s darkest thoughts before his death.

Meta, meanwhile, has faced criticism over past internal rules that once allowed romantic chat with minors.

Character.ai says it welcomes the scrutiny; Snap says it supports “thoughtful development.”

Yet the risks go beyond children. Doctors have warned of so-called “AI psychosis” following intense chatbot use — including among seniors.

So the question lingers: can a technology built to mimic empathy truly be trusted to protect the most vulnerable?

Or will this probe mark the moment regulators finally draw the line?
