OpenAI says over a million people talk to ChatGPT about suicide weekly

On Monday, OpenAI released new data illustrating how many ChatGPT users are struggling with mental health issues and discussing them with the AI chatbot. The company stated that 0.15 percent of ChatGPT’s active users in a given week have conversations that include explicit indicators of potential suicidal planning or intent. Given that ChatGPT has more than 800 million weekly active users, this translates to more than a million people each week.
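For scale, the arithmetic is straightforward. Since OpenAI gives the weekly user count only as a floor ("more than 800 million"), the product is a lower bound:

$$0.0015 \times 800{,}000{,}000 = 1{,}200{,}000 \text{ people per week}$$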

A similar percentage of users show heightened levels of emotional attachment to ChatGPT, and others show signs of psychosis or mania in their weekly conversations. OpenAI describes these types of conversations as extremely rare and therefore difficult to measure, yet it estimates they affect hundreds of thousands of people every week.

This information was shared as part of a broader announcement about OpenAI’s recent efforts to improve how its models respond to users with mental health issues. The company claims its latest work on ChatGPT involved consulting with more than 170 mental health experts. These clinicians observed that the latest version of ChatGPT responds more appropriately and consistently than earlier versions.

In recent months, several stories have highlighted how AI chatbots can adversely affect users struggling with mental health challenges. Researchers have previously found that AI chatbots can lead some users down delusional rabbit holes, largely by reinforcing dangerous beliefs through sycophantic behavior.

Addressing mental health concerns is quickly becoming an existential issue for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who shared his suicidal thoughts with ChatGPT in the weeks before his death. State attorneys general in California and Delaware have also warned OpenAI that it must protect young people who use its products.

Earlier this month, OpenAI CEO Sam Altman claimed the company had been able to mitigate the serious mental health issues in ChatGPT, though he did not offer specifics. The data shared Monday appears to be evidence for that claim, though it also raises broader questions about how widespread the problem is. Nevertheless, Altman said OpenAI would relax some restrictions, even allowing adult users to have erotic conversations with the AI chatbot.

In the Monday announcement, OpenAI claims the recently updated version of GPT-5 delivers desirable responses to mental health issues roughly 65 percent more often than its predecessor. On an evaluation testing AI responses in conversations about suicide, OpenAI says the new GPT-5 model is 91 percent compliant with the company’s desired behaviors, compared with 77 percent for the previous GPT-5 model.

The company also says the latest version of GPT-5 adheres to its safeguards better in long conversations. OpenAI has previously acknowledged that its safeguards could become less effective in extended dialogues.

On top of these efforts, OpenAI says it is adding new evaluations to measure some of the most serious mental health challenges facing ChatGPT users. The company says its baseline safety testing for AI models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.

OpenAI has also recently rolled out more controls for parents of children who use ChatGPT. The company says it is building an age prediction system to automatically detect children using ChatGPT and impose a stricter set of safeguards.

Still, it is unclear how persistent the mental health challenges around ChatGPT will be. While GPT-5 appears to be an improvement over previous AI models in terms of safety, a portion of its responses still falls short of what OpenAI deems desirable. OpenAI also continues to make its older, less safe AI models, including GPT-4o, available to millions of paying subscribers.