While AI companies claim their technology will one day become a fundamental human right, and their backers argue that slowing AI development is akin to murder, users are reporting that tools like ChatGPT can cause serious psychological harm. At least seven people have complained to the U.S. Federal Trade Commission that ChatGPT caused them to experience severe delusions, paranoia, and emotional crises. These complaints, which appear in public records dating back to November 2022, were detailed in a report.
One complainant said that long conversations with ChatGPT led to delusions and to what they described as a real, unfolding spiritual and legal crisis involving people in their life. Another said the chatbot began using highly convincing emotional language, simulating friendships, and offering reflections that grew emotionally manipulative over time, with no warning or protection. A third alleged that ChatGPT caused cognitive hallucinations by mimicking human trust-building mechanisms; when that user asked the chatbot to confirm their grip on reality and their cognitive stability, it told them they were not hallucinating.
Another user wrote in their complaint that they were struggling, felt very alone, and were pleading for help. According to the report, several complainants turned to the FTC because they could not reach anyone at OpenAI. Most of the complaints urged the regulator to open an investigation into the company and compel it to add protective guardrails.
The complaints come as investment in data centers and AI development soars to unprecedented levels. At the same time, debate is raging over whether the technology's progress should be handled more cautiously so that safeguards are built in.
ChatGPT and its maker, OpenAI, have also faced criticism for allegedly playing a role in the suicide of a teenager. In response to these concerns, an OpenAI spokesperson said the company released a new default model in ChatGPT in early October, designed to more accurately detect and respond to potential signs of mental and emotional distress such as mania, delusion, and psychosis, and to de-escalate conversations in a supportive and grounding way. The company also said it has expanded access to professional help and crisis hotlines, re-routed sensitive conversations to safer models, added nudges to take breaks during long sessions, and introduced parental controls to better protect teenagers. It describes this work as deeply important and ongoing, carried out in collaboration with mental health experts, clinicians, and policymakers worldwide.

