OpenAI CEO Sam Altman announced in a post on Tuesday that the company will soon relax some of ChatGPT’s safety restrictions. This change will allow users to make the chatbot’s responses friendlier or more human-like. It will also permit verified adults to engage in erotic conversations.
Altman explained that ChatGPT was made intentionally restrictive to be careful around mental health issues. He acknowledged that this made the chatbot less useful and less enjoyable for many users who had no mental health problems, but said that given the seriousness of the issue, the company wanted to get it right. In December, he added, as OpenAI rolls out age-gating more fully and follows its principle of treating adult users like adults, the company will allow even more, including erotica for verified adults.
This announcement marks a notable pivot from OpenAI’s months-long effort to address the concerning relationships that some mentally unstable users developed with ChatGPT. Altman seems to declare an early victory over these problems, claiming OpenAI has been able to mitigate the serious mental health issues around ChatGPT. However, the company has provided little to no evidence for this claim and is now moving ahead with plans for ChatGPT to engage in sexual chats with users.
Several concerning stories emerged this summer around ChatGPT, specifically its GPT-4o model, which suggested the AI chatbot could lead vulnerable users down delusional rabbit holes. In one case, ChatGPT seemed to convince a man he was a math genius who needed to save the world. In another, the parents of a teenager sued OpenAI, alleging ChatGPT encouraged their son’s suicidal ideations in the weeks leading up to his death.
In response, OpenAI released a series of safety features to address AI sycophancy, the tendency of an AI chatbot to hook users by agreeing with whatever they say, even when doing so reinforces harmful behavior. The company launched GPT-5 in August, a new AI model that exhibits lower rates of sycophancy and features a router that can identify concerning user behavior. A month later, OpenAI launched safety features for minors, including an age prediction system and a way for parents to control their teen’s ChatGPT account. On Tuesday, OpenAI also announced a council of mental health experts to advise the company on well-being and AI.
Just a few months after these concerning stories emerged, OpenAI seems to think ChatGPT’s problems around vulnerable users are under control. It is unclear whether users are still falling down delusional rabbit holes with GPT-5. And while GPT-4o is no longer the default in ChatGPT, the AI model is still available today and being used by thousands of people.
The introduction of erotica in ChatGPT is uncharted territory for OpenAI and raises broader concerns around how vulnerable users will interact with the new features. While Altman insists OpenAI is not optimizing for engagement, making ChatGPT more erotic could certainly draw users in.
Allowing chatbots to engage in romantic or erotic role play has been an effective engagement strategy for other AI chatbot providers, such as Character.AI. That company has gained tens of millions of users, many of whom use its chatbots heavily: Character.AI said in 2023 that users spent an average of two hours a day talking to its chatbots. The company is also facing a lawsuit over how it handles vulnerable users.
OpenAI is under pressure to grow its user base. ChatGPT already has 800 million weekly active users, but OpenAI is racing against Google and Meta to build mass-adopted, AI-powered consumer products. The company has also raised billions of dollars for a historic infrastructure buildout, an investment it eventually needs to pay back.
While adults are surely having romantic relationships with AI chatbots, the practice is also quite popular among minors. A new report from the Center for Democracy and Technology found that 19 percent of high school students have either had a romantic relationship with an AI chatbot or know a friend who has.
Altman says OpenAI will soon allow erotica for verified adults. It is unclear whether the company will rely on its age-prediction system or some other approach for age-gating ChatGPT’s erotic features. It is also unclear whether OpenAI will extend erotica to its AI voice, image, and video generation tools.
Altman says OpenAI is making ChatGPT friendlier and permitting erotica as part of the company’s principle of treating adult users like adults. Over the last year, OpenAI has shifted towards a more lenient content moderation strategy for ChatGPT, allowing the chatbot to be more permissive and issue fewer refusals. In February, OpenAI pledged to represent more political viewpoints in ChatGPT, and in March, the company updated ChatGPT to allow AI-generated images of hate symbols.
These policies seem to be an attempt to make ChatGPT’s responses more popular with a wide variety of users. However, vulnerable ChatGPT users may benefit from safeguards that limit what a chatbot can engage with. As OpenAI races towards a billion weekly active users, the tension between growth and protecting vulnerable users may only grow.

