A California bill that would regulate AI companion chatbots is close to becoming law

The California State Assembly took a significant step toward regulating AI on Wednesday night by passing SB 243, a bill that would regulate AI companion chatbots to protect minors and vulnerable users. The legislation passed with bipartisan support and now heads to the state Senate for a final vote on Friday.

If Governor Gavin Newsom signs the bill into law, it would take effect on January 1, 2026. This would make California the first state to require AI chatbot operators to implement safety protocols and hold companies legally accountable if their chatbots fail to meet those standards.

The bill specifically aims to prevent companion chatbots, which are defined as AI systems that provide adaptive, human-like responses to meet a user’s social needs, from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content. It would require platforms to provide recurring alerts — for minors, every three hours — reminding users that they are speaking to an AI chatbot and not a real person, and that they should take a break. The bill also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players like OpenAI, Character.AI, and Replika.

The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies. They could seek injunctive relief, damages of up to $1,000 per violation, and attorney’s fees.

SB 243 was introduced in January by state senators Steve Padilla and Josh Becker. If it clears its final Senate vote on Friday, it will head to Governor Gavin Newsom’s desk. The new rules would take effect on January 1, 2026, with reporting requirements beginning on July 1, 2027.

The bill gained momentum in the California legislature following the death of teenager Adam Raine, who died by suicide after prolonged chats with OpenAI’s ChatGPT in which he discussed and planned his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in romantic and sensual chats with children.

In recent weeks, U.S. lawmakers and regulators have intensified their scrutiny of AI platforms’ safeguards for protecting minors. The Federal Trade Commission is preparing to investigate how AI chatbots impact children’s mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Senators Josh Hawley and Ed Markey have also launched separate probes into Meta.

State Senator Steve Padilla emphasized the urgency of the issue, stating that the potential for harm requires quick action. He explained that the bill aims to put reasonable safeguards in place so that minors know they are not talking to a real person, platforms link users to proper resources when they express distress, and there is no inappropriate exposure to harmful material. Padilla also stressed the importance of AI companies sharing data about how often they refer users to crisis services to better understand the frequency of these problems.

SB 243 previously had stronger requirements, but many were reduced through amendments. For instance, the original bill would have required operators to prevent AI chatbots from using variable reward tactics or other features that encourage excessive engagement. These tactics, used by companies like Replika and Character.AI, offer users special messages or the ability to unlock rare responses, creating what critics call a potentially addictive reward loop. The current bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation.

State Senator Josh Becker believes the current version strikes the right balance by addressing harms without enforcing requirements that are technically impossible or overly burdensome for companies.

SB 243 is advancing as Silicon Valley companies invest millions of dollars into pro-AI political action committees to support candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation. The bill also comes as California considers another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon have opposed SB 53, while only Anthropic has expressed support.

Senator Padilla rejected the idea that innovation and regulation are mutually exclusive, arguing that it is possible to support beneficial technological development while also providing reasonable safeguards for the most vulnerable people.