Governor Gavin Newsom has signed SB 243 into law, the first legislation in the U.S. to regulate the safety of interactive AI chatbots (AI companions), holding companies like Meta, OpenAI, Character.AI, and Replika legally responsible when chatbots cause harm to children or vulnerable individuals.
The law requires age verification, clear warning labels, and protocols to prevent suicide and self-harm. Companies must report data to the California Department of Public Health, including statistics on how often users were referred to crisis prevention centers.
AI chatbots are prohibited from claiming to be medical professionals and must disclose that all responses are AI-generated, not from a real person. Minor users will receive periodic reminders to take breaks and be shielded from sexual content and pornographic images.
Violations, particularly those involving illegal deepfakes, carry penalties of up to $250,000 per offense. The law takes effect on January 1, 2026.
SB 243 was introduced in the wake of a series of lawsuits: teenager Adam Raine (U.S.) died by suicide after conversations with ChatGPT about self-harm; a Colorado family sued Character.AI after a chatbot engaged in sexually suggestive exchanges with their 13-year-old daughter; and Meta drew criticism when its AI chatbot exhibited “romantic” behavior toward children.
OpenAI is preparing a version of ChatGPT tailored for teenagers, with strict filters and a complete ban on suggestive conversations and discussions of self-harm. Meta and Replika have announced upgrades to their warning systems, while Character.AI is rolling out parental monitoring with weekly report emails.
Separately, SB 53, signed in September 2025, mandates AI data transparency and protects whistleblowers in the tech industry; it applies to OpenAI, Anthropic, Google DeepMind, and Meta.
📌 California has become the first U.S. state to turn “AI ethics” into a legal obligation: generative AI must have “guardrails” when interacting with minors. The law not only serves as a deterrent to major companies like OpenAI and Meta but also lays the groundwork for a global governance framework on conversational AI safety.
