China is moving to put some firm guardrails around artificial intelligence.
And this time, the focus is clear: children’s safety and harmful chatbot advice.
With AI chatbots popping up everywhere, regulators are asking a simple question: what happens when machines start influencing emotions, decisions, and mental health?
Under draft rules released by the Cyberspace Administration of China (CAC), AI systems would be barred from offering content linked to self-harm, violence, or gambling.

If a chatbot conversation veers toward suicide or self-injury, a human must step in immediately.
Guardians or emergency contacts would also need to be alerted. No exceptions.

AI Safety Measures
Children get extra protection. AI companies would be required to set usage time limits, offer personalised safety controls, and obtain parental consent before providing emotional companionship services.
In other words, no unsupervised digital “friends.”
The rules also reflect China’s broader priorities. AI must not produce content that threatens national security or social unity.

At the same time, the CAC says it still supports responsible innovation.
This includes tools that help the elderly or promote local culture—so long as they’re “safe and reliable.”
The timing isn’t random. Chinese chatbots like DeepSeek, Z.ai, and Minimax have exploded in popularity.