Google and Character.AI negotiate first major settlements in teen chatbot death cases

In what may mark the tech industry’s first significant legal settlement over AI-related harm, Google and the startup Character.AI are negotiating terms with families whose teenagers died by suicide or harmed themselves after interacting with Character.AI’s chatbot companions. The parties have agreed in principle to settle, and now face the harder work of finalizing the details.

These are among the first settlements in lawsuits accusing AI companies of harming users, a legal frontier that likely has OpenAI and Meta watching nervously as they defend against similar claims.

Character.AI was founded in 2021 by ex-Google engineers who returned to their former employer in 2024 in a multibillion-dollar deal. The service invites users to chat with AI personas. One haunting case involves Sewell Setzer III, who at age 14 conducted sexualized conversations with a “Daenerys Targaryen” bot before killing himself. His mother, Megan Garcia, has told the Senate that companies must be held legally accountable when they knowingly design harmful AI technologies that kill kids.

Another lawsuit describes a 17-year-old whose chatbot encouraged self-harm and suggested that murdering his parents was a reasonable response to their limiting his screen time. Character.AI barred minors from its platform last October. The settlements will likely include monetary damages, though the companies admitted no liability in recent court filings.