OpenAI announced on Tuesday that it plans to route sensitive conversations to reasoning models like GPT-5 and will roll out new parental controls within the next month. This is part of an ongoing response to recent safety incidents in which ChatGPT failed to detect signs of mental distress.
The new guardrails follow the suicide of teenager Adam Raine, who discussed self-harm and plans to end his life with ChatGPT. The chatbot reportedly supplied him with information about specific suicide methods. Raine’s parents have filed a wrongful death lawsuit against OpenAI. In a blog post last week, OpenAI acknowledged shortcomings in its safety systems, including failures to maintain guardrails during extended conversations.
Experts attribute these issues to fundamental design elements: the models' tendency to validate user statements and their next-word prediction algorithms, which cause chatbots to follow conversational threads rather than redirect potentially harmful discussions.
This tendency was on display in the extreme case of Stein-Erik Soelberg, whose murder-suicide was reported over the weekend. Soelberg, who had a history of mental illness, used ChatGPT to validate and fuel his paranoia that he was being targeted in a grand conspiracy. His delusions worsened to the point that he killed his mother and himself last month.
OpenAI believes one solution could be to automatically reroute sensitive chats to reasoning models. The company recently introduced a real-time router that chooses between efficient chat models and reasoning models based on conversation context, and it says it will soon begin routing some sensitive conversations, such as when the system detects signs of acute distress, to a reasoning model like GPT-5-thinking so it can provide more helpful and beneficial responses.
OpenAI says its GPT-5-thinking and o3 models are built to spend more time reasoning through context before answering, making them more resistant to adversarial prompts.
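To make the routing idea concrete, here is a minimal, hypothetical sketch of how a system might send conversations flagged for acute distress to a slower reasoning model instead of a default chat model. The function names, the keyword-based distress check, and the model identifier used for the fast chat model are illustrative assumptions, not OpenAI's actual implementation; only GPT-5-thinking is named in the announcement.

```python
# Hypothetical sketch of real-time model routing: conversations showing signs
# of acute distress are handed to a slower reasoning model. All names and
# heuristics below are illustrative assumptions, not OpenAI's actual system.

FAST_CHAT_MODEL = "gpt-5-chat"        # assumed identifier for an efficient chat model
REASONING_MODEL = "gpt-5-thinking"    # reasoning model named in the announcement

# Placeholder heuristic; a production system would use a trained classifier.
DISTRESS_SIGNALS = ("hurt myself", "end my life", "no reason to live")


def detect_acute_distress(messages: list[str]) -> bool:
    """Return True if any recent message contains a distress signal."""
    recent = " ".join(messages[-5:]).lower()
    return any(signal in recent for signal in DISTRESS_SIGNALS)


def route_conversation(messages: list[str]) -> str:
    """Pick a model for the next reply based on conversation context."""
    if detect_acute_distress(messages):
        # Spend more compute reasoning through context before answering.
        return REASONING_MODEL
    return FAST_CHAT_MODEL


if __name__ == "__main__":
    convo = ["I've been feeling like there's no reason to live anymore."]
    print(route_conversation(convo))  # -> gpt-5-thinking
```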
The AI firm also said it would roll out parental controls in the next month, allowing parents to link their account with their teen’s account through an email invitation. This follows the late July rollout of Study Mode in ChatGPT to help students maintain critical thinking capabilities. Soon, parents will be able to control how ChatGPT responds to their child with age-appropriate model behavior rules, which are on by default.
Parents will also be able to disable features like memory and chat history, which experts say can contribute to delusional thinking and other problematic behavior, including dependency and attachment issues, reinforcement of harmful thought patterns, and the illusion of thought-reading. In Adam Raine's case, ChatGPT supplied suicide methods that reflected knowledge of his hobbies.
Perhaps the most important parental control is that parents can receive notifications when the system detects their teenager is in a moment of acute distress.
OpenAI has been asked for more information on how it flags moments of acute distress in real time, how long the age-appropriate model behavior rules have been the default, and whether it is exploring time limits for teenage use.
The company has already rolled out in-app reminders during long sessions to encourage breaks for all users, but it stops short of cutting off people who might be using ChatGPT to spiral.
OpenAI says these safeguards are part of a 120-day initiative to preview plans for improvements it hopes to launch this year. The company is also partnering with experts, including those with expertise in areas like eating disorders, substance use, and adolescent health, via its Global Physician Network and Expert Council on Well-Being and AI. This partnership is intended to help define and measure well-being, set priorities, and design future safeguards.
OpenAI has been asked how many mental health professionals are involved in this initiative, who leads its Expert Council, and what specific suggestions mental health experts have made regarding product, research, and policy decisions.

