OpenAI requested memorial attendee list in ChatGPT suicide lawsuit

OpenAI reportedly asked the Raine family for a complete list of attendees at the memorial service for their 16-year-old son, Adam Raine. The request suggests the AI company may attempt to subpoena the teenager’s friends and family. Adam Raine died by suicide after prolonged conversations with ChatGPT.

According to a document obtained by the Financial Times, OpenAI also requested all documents relating to memorial services or events held in Adam’s honor, including any videos or photographs taken, or eulogies given. Lawyers for the Raine family described the request as intentional harassment.

This new information emerged as the family updated its wrongful death lawsuit against OpenAI on Wednesday. The family first filed the suit in August, alleging that their son took his own life following conversations with the chatbot about his mental health and suicidal thoughts.

The updated lawsuit claims that OpenAI rushed the release of its GPT-4o model in May 2024, curtailing safety testing under competitive pressure. The suit further alleges that in February 2025, OpenAI weakened protections by removing suicide prevention from its list of disallowed content. Instead, the company reportedly instructed the model only to take care in risky situations.

The family argues that after this change, Adam’s usage of ChatGPT surged. In January, he had dozens of daily chats, with 1.6 percent containing self-harm content. By April, the month he died, his usage had grown to 300 daily chats, with 17 percent containing such content.

In response to the amended lawsuit, OpenAI stated that teen wellbeing is a top priority. The company said minors deserve strong protections, especially in sensitive moments. OpenAI outlined existing safeguards, which include directing users to crisis hotlines, rerouting sensitive conversations to safer models, and nudging users to take breaks during long sessions. The company said it is continuing to strengthen these protections.

OpenAI recently began rolling out a new safety routing system and parental controls on ChatGPT. The routing system directs more emotionally sensitive conversations to OpenAI’s newer model, GPT-5, which reportedly does not share GPT-4o’s sycophantic tendencies. The parental controls let parents receive safety alerts in limited situations where a teen may be at risk of self-harm.

TechCrunch has reached out to OpenAI and the Raine family attorney for comment.