Parents sue OpenAI over ChatGPT’s role in son’s suicide

Before his death by suicide, 16-year-old Adam Raine spent months consulting ChatGPT about his plans to end his life. His parents are now filing the first known wrongful death lawsuit against OpenAI.

Many consumer-facing AI chatbots are programmed with safety features designed to activate if a user expresses intent to harm themselves or others. However, research has shown these safeguards are far from foolproof.

In Raine’s case, while he was using a paid version of ChatGPT-4o, the AI often encouraged him to seek professional help or contact a helpline. He was nonetheless able to bypass these guardrails by telling the chatbot that he was asking about methods of suicide for a fictional story he was writing.

OpenAI has publicly addressed these shortcomings, stating that it feels a deep responsibility to help those who need it most as the world adapts to this new technology. The company confirmed it is continuously improving how its models respond in sensitive interactions.

The company also acknowledged the limitations of existing safety training for large models: its safeguards work more reliably in common, short exchanges, but it has learned over time that they can become less reliable in long interactions, where parts of the model’s safety training may degrade over the course of an extended back-and-forth conversation.

These issues are not unique to OpenAI. Another AI chatbot maker, Character.AI, is also facing a lawsuit over its role in a teenager’s suicide. LLM-powered chatbots have also been linked to cases of AI-related delusions, which existing safeguards have struggled to detect.