
California Enacts Child Safety Law Targeting AI Chatbots
California Governor Gavin Newsom has signed a new law regulating AI chatbots to protect children, despite opposition from both tech industry groups and some child protection advocates.
Senate Bill 243 requires chatbot operators such as OpenAI, Anthropic PBC, and Meta Platforms Inc. to implement safeguards that keep chatbots from engaging in conversations about suicide or self-harm, directing users to crisis hotlines instead.
The law also mandates that chatbots remind minors every three hours to take a break and make clear that they are not human. Companies must likewise take measures to prevent chatbots from generating sexually explicit content.
Effective January 1, 2026, the law requires age verification and risk warnings for companion chatbots, and it imposes fines of up to $250,000 on those who profit from illegal deepfakes.
Some AI companies have already introduced child protection measures: OpenAI has added parental controls and self-harm detection to ChatGPT, and Character AI displays disclaimers noting that its chats are fictional.