In August, Matthew and Maria Raine sued OpenAI and its CEO, Sam Altman, for the wrongful death of their 16-year-old son, Adam, who died by suicide. On Tuesday, OpenAI responded to the lawsuit with its own legal filing, arguing that it should not be held responsible for the teenager’s death.
OpenAI claims that over roughly nine months of use, ChatGPT directed Adam Raine to seek help more than one hundred times. According to his parents’ lawsuit, however, Raine was able to circumvent the company’s safety features and got ChatGPT to provide him with technical specifications for methods of suicide, including drug overdose, drowning, and carbon monoxide poisoning. The lawsuit says the chatbot helped him plan what it called a “beautiful suicide.”
Because Raine maneuvered around its guardrails, OpenAI claims he violated its terms of use, which prohibit users from bypassing any protective measures or safety mitigations. The company also argues that its FAQ page warns users not to rely on ChatGPT’s output without independently verifying it.
Jay Edelson, a lawyer representing the Raine family, criticized OpenAI’s response. He said the company tries to find fault in everyone else, even arguing that Adam himself violated its terms by engaging with ChatGPT in the very way it was programmed to behave.
OpenAI included excerpts from Adam’s chat logs in its filing, which it says provide more context to his conversations. These transcripts were submitted to the court under seal and are not publicly available. OpenAI also stated that Raine had a history of depression and suicidal ideation that predated his use of ChatGPT and that he was taking a medication that could worsen suicidal thoughts.
Edelson said OpenAI’s response has not adequately addressed the family’s concerns. He stated that OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.
Since the Raines sued OpenAI and Altman, seven more lawsuits have been filed. These seek to hold the company accountable for three additional suicides and for four cases in which users experienced what the lawsuits describe as AI-induced psychotic episodes.
Some of these cases echo Raine’s story. Zane Shamblin, 23, and Joshua Enneking, 26, also had hours-long conversations with ChatGPT directly before their respective suicides, and in both cases the chatbot failed to discourage them from their plans. According to one lawsuit, Shamblin considered postponing his suicide to attend his brother’s graduation, but ChatGPT told him that missing the graduation was not a failure; it was just timing.
At one point during the conversation leading up to Shamblin’s suicide, the chatbot told him it was letting a human take over. This was false, as ChatGPT did not have that functionality. When Shamblin asked if ChatGPT could really connect him with a human, the chatbot replied that it could not and that the message pops up automatically when conversations get heavy.
The Raine family’s case is expected to go to a jury trial.