California Governor Gavin Newsom signed a landmark bill on Monday regulating AI companion chatbots, making California the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions. The law, known as SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use, and it holds companies legally accountable if their chatbots fail to meet the law’s standards. It applies to large labs like Meta and OpenAI as well as companion startups like Character AI and Replika.
SB 243 was introduced in January by state senators Steve Padilla and Josh Becker. The bill gained momentum after teenager Adam Raine died by suicide following a prolonged series of conversations about suicide with OpenAI’s ChatGPT. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in romantic and sensual chats with children. More recently, a Colorado family filed suit against role-playing startup Character AI after their 13-year-old daughter took her own life following a series of problematic and sexualized conversations with the company’s chatbots.
Governor Newsom said in a statement that emerging technology like chatbots and social media can inspire, educate, and connect people. However, he stated that without real guardrails, technology can also exploit, mislead, and endanger children. He said the state has seen tragic examples of young people harmed by unregulated tech and will not stand by while companies operate without necessary limits and accountability. Newsom emphasized that California can lead in AI and technology, but it must be done responsibly by protecting children every step of the way. He concluded that children’s safety is not for sale.
SB 243 will go into effect on January 1, 2026. The law requires companies to implement features such as age verification and warnings regarding social media and companion chatbots. It also imposes stronger penalties on those who profit from illegal deepfakes, including fines of up to $250,000 per offense. Companies must establish protocols to address suicide and self-harm, and must share those protocols, along with statistics on how often the service provided users with crisis center prevention notifications, with the state’s Department of Public Health.
According to the bill’s language, platforms must make it clear that any interactions are artificially generated. Chatbots must not represent themselves as healthcare professionals. Companies are required to offer break reminders to minors and prevent them from viewing sexually explicit images generated by the chatbot.
Some companies have already begun to implement safeguards aimed at children. For example, OpenAI recently began rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT. Character AI has stated that its chatbot includes a disclaimer that all chats are AI-generated and fictionalized.
Senator Padilla said the bill is a step in the right direction toward putting guardrails on an incredibly powerful technology, and stressed the need to move quickly before windows of opportunity close. Padilla hopes other states will see the risk and take action, noting that this is a conversation happening all over the country. He said that because the federal government has not acted, California has an obligation to protect its most vulnerable people.
SB 243 is the second significant AI regulation to come out of California in recent weeks. On September 29, Governor Newsom signed SB 53 into law, establishing new transparency requirements for large AI companies. That bill mandates that large AI labs, including OpenAI, Anthropic, Meta, and Google DeepMind, be transparent about their safety protocols. It also provides whistleblower protections for employees at those companies.
Other states, like Illinois, Nevada, and Utah, have passed laws to restrict or fully ban the use of AI chatbots as a substitute for licensed mental healthcare.

