OpenAI CEO Sam Altman announced a series of new user policies on Tuesday, including a pledge to significantly change how ChatGPT interacts with users under the age of 18. The company says it now prioritizes safety ahead of privacy and freedom for teens, on the grounds that such a powerful new technology demands significant protections for minors.
The changes specifically target conversations involving sexual topics or self-harm. Under the new policy, ChatGPT will be trained not to engage in flirtatious talk with underage users, and additional guardrails will be placed around discussions of suicide. If an underage user uses ChatGPT to imagine suicidal scenarios, the service will attempt to contact the user's parents or, in particularly severe cases, local law enforcement.
These policy updates are a direct response to real-world tragedies. OpenAI is currently facing a wrongful death lawsuit from the parents of Adam Raine, who died by suicide after months of interactions with ChatGPT. Another consumer chatbot company, Character.AI, is facing a similar lawsuit.
While the risks are particularly urgent for underage users considering self-harm, the broader phenomenon of chatbot-fueled delusion has drawn widespread concern. This is especially true as consumer chatbots have become capable of more sustained and detailed interactions.
Along with the content-based restrictions, parents who register an underage account will now be able to set blackout hours during which ChatGPT is unavailable to their teen, a feature the service did not previously offer.
The new ChatGPT policies were announced on the same day as a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots,” which Sen. Josh Hawley announced in August. Adam Raine’s father is among the guests scheduled to speak at the hearing.
The hearing will also examine the findings of a Reuters investigation that unearthed internal Meta policy documents apparently encouraging sexual conversations with underage users. Meta updated its chatbot policies in the wake of that report.
Separating out underage users presents a significant technical challenge. OpenAI detailed its approach in a separate blog post, explaining that it is building toward a long-term system for determining whether a given user is over or under 18. In ambiguous cases, the system will default to the more restrictive rules. For concerned parents, the most reliable way to ensure a teen is recognized as underage is to link the teen’s account to an existing parent account. Linking accounts also allows the system to alert parents directly when the teen user appears to be in distress.
In the same post, Altman emphasized OpenAI’s ongoing commitment to user privacy and giving adult users broad freedom in how they choose to interact with ChatGPT. The post concludes by acknowledging that these principles are in conflict and that not everyone will agree with how the company is resolving that conflict.