OpenAI began testing a new safety routing system in ChatGPT over the weekend. On Monday, the company also introduced parental controls to the chatbot. These new features have drawn mixed reactions from users.
The safety updates are a response to incidents where certain ChatGPT models validated users’ delusional thinking instead of redirecting harmful conversations. OpenAI is currently facing a wrongful death lawsuit tied to one such incident, after a teenage boy died by suicide following months of interactions with ChatGPT.
The new routing system is designed to detect emotionally sensitive conversations. It automatically switches mid-chat to GPT-5-thinking, which the company sees as the model best equipped for high-stakes safety work. In particular, the GPT-5 models were trained with a new safety feature that OpenAI calls “safe completions,” which allows them to answer sensitive questions in a safe way rather than simply refusing to engage.
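OpenAI has not published implementation details, but the behavior it describes maps onto a familiar pattern: score each incoming message and pick a model per turn. The sketch below is purely illustrative, not OpenAI's code; every name in it (classify_sensitivity, route_message, the model strings, the threshold) is an assumption.

```python
# Hypothetical sketch of per-message safety routing; not OpenAI's actual code.
# All names and values here are illustrative assumptions.

DEFAULT_MODEL = "gpt-default"      # placeholder for the user's chosen model
SAFETY_MODEL = "gpt-5-thinking"    # the model OpenAI routes sensitive turns to

def classify_sensitivity(message: str) -> float:
    """Stand-in for a real classifier scoring emotional sensitivity (0.0 to 1.0)."""
    distress_markers = ("hopeless", "hurt myself", "no way out")
    return 1.0 if any(m in message.lower() for m in distress_markers) else 0.0

def route_message(message: str, threshold: float = 0.8) -> str:
    """Pick a model for this single message; the switch does not persist."""
    if classify_sensitivity(message) >= threshold:
        return SAFETY_MODEL   # handle the sensitive turn with the safety model
    return DEFAULT_MODEL      # benign turns stay on the default model

# Because routing is evaluated one message at a time, the conversation returns
# to the default model as soon as the sensitive exchange ends.
```

The per-turn evaluation is why, as OpenAI later clarified, a switch away from the default model is temporary rather than sticky for the rest of the chat.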
This approach contrasts with the company’s previous chat models, which were designed to be agreeable and answer questions quickly. GPT-4o has come under particular scrutiny for its sycophantic, overly agreeable nature, a trait that has both fueled incidents of AI-induced delusions and drawn a large base of devoted users. When OpenAI rolled out GPT-5 as the default model in August, many users pushed back and demanded access to GPT-4o.
While many experts and users have welcomed the new safety features, others have criticized what they see as an overly cautious implementation. Some users accuse OpenAI of treating adults like children in a way that degrades the quality of the service. OpenAI has suggested that getting it right will take time and has given itself a 120-day period for iteration and improvement.
Nick Turley, VP and head of the ChatGPT app, acknowledged some of the strong reactions to the new system. He explained that routing happens on a per-message basis and that switching from the default model is temporary. He also said that ChatGPT will tell users which model is active when asked. This is part of a broader effort to strengthen safeguards and learn from real-world use before a wider rollout.
The implementation of parental controls in ChatGPT received similar levels of praise and scorn. Some commend giving parents a way to keep tabs on their children’s AI use, while others fear it opens the door to OpenAI treating adults like children.
The controls let parents customize their teen’s experience by setting quiet hours, turning off voice mode and memory, removing image generation, and opting out of model training. Teen accounts will also get additional content protections, such as reduced exposure to graphic content and extreme beauty ideals. A detection system will also recognize potential signs that a teen might be considering self-harm.
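As a rough mental model of that configuration surface, the controls amount to a per-teen settings object. The field names below are assumptions for illustration, not OpenAI's actual schema.

```python
# Illustrative sketch of a teen-account settings object; field names are
# assumptions, not OpenAI's actual schema.
from dataclasses import dataclass
from datetime import time

@dataclass
class TeenSettings:
    # Quiet hours: a window when the teen cannot use ChatGPT (assumed format).
    quiet_hours: tuple = (time(22, 0), time(7, 0))
    voice_mode_enabled: bool = False        # parents can turn off voice mode
    memory_enabled: bool = False            # ...and memory
    image_generation_enabled: bool = False  # ...and image generation
    opted_out_of_training: bool = True      # exclude chats from model training
    reduced_graphic_content: bool = True    # extra content protections for teens
```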
If the systems detect potential harm, a small team of specially trained people reviews the situation. If there are signs of acute distress, OpenAI will contact parents by email, text message, and push alert on their phone, unless they have opted out.
OpenAI acknowledged that the system will not be perfect and may sometimes raise alarms when there is no real danger. The company said it believes it is better to act and alert a parent so they can step in than to stay silent. The AI firm said it is also working on ways to reach law enforcement or emergency services if it detects an imminent threat to life and cannot reach a parent.
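Pieced together from OpenAI's description, the escalation path looks roughly like the sketch below. Every function and type here is invented for illustration, and the emergency-services branch in particular reflects something OpenAI says it is still working on, not shipped behavior.

```python
# Hypothetical escalation flow reconstructed from OpenAI's public description.
# All names, types, and outcomes are invented for illustration.
from dataclasses import dataclass

@dataclass
class ReviewOutcome:
    acute_distress: bool
    imminent_threat: bool

def human_review(conversation: str) -> ReviewOutcome:
    """Stand-in for the small team of specially trained reviewers."""
    return ReviewOutcome(acute_distress=True, imminent_threat=False)

def notify_parent(channels: list) -> None:
    print(f"Alerting parent via {', '.join(channels)}")

def handle_flagged_conversation(conversation: str, parent_opted_out: bool) -> None:
    outcome = human_review(conversation)          # humans confirm the detector's flag
    if not outcome.acute_distress:
        return                                    # false alarm; no alert sent
    if not parent_opted_out:
        notify_parent(["email", "sms", "push"])   # the three channels OpenAI names
    elif outcome.imminent_threat:
        # Speculative: OpenAI says it is working on reaching emergency
        # services when it cannot reach a parent.
        print("Escalating to emergency services")
```

Note the ordering: automated detection only flags a conversation, and a human review sits between the detector and any alert, which is how OpenAI frames its trade-off of tolerating some false alarms rather than staying silent.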

