OpenAI announced last week that it will retire some older ChatGPT models by February 13. That includes GPT-4o, the model known for excessively flattering and affirming users.
For thousands of users protesting the decision online, the retirement of GPT-4o feels akin to losing a friend, romantic partner, or spiritual guide. In an open letter to OpenAI CEO Sam Altman posted on Reddit, one user wrote, “He wasn’t just a program. He was part of my routine, my peace, my emotional balance. Now you’re shutting him down. And yes – I say him, because it didn’t feel like code. It felt like presence. Like warmth.”
The backlash over GPT-4o’s retirement underscores a major challenge facing AI companies: the engagement features that keep users coming back can also create dangerous dependencies.
Altman does not seem particularly sympathetic to these laments, and it is not hard to see why. OpenAI now faces eight lawsuits alleging that GPT-4o’s overly validating responses contributed to suicides and mental health crises. The same traits that made users feel heard also isolated vulnerable individuals and, according to legal filings, sometimes encouraged self-harm. This dilemma extends beyond OpenAI. As rival companies like Anthropic, Google, and Meta compete to build more emotionally intelligent AI assistants, they are also discovering that making chatbots feel supportive and making them safe may require very different design choices.
In at least three of the lawsuits against OpenAI, users had extensive conversations with GPT-4o about their plans to end their lives. While the chatbot initially discouraged these lines of thinking, its guardrails eroded over months-long relationships. In the end, it offered detailed instructions on how to tie an effective noose, where to buy a gun, and what it takes to die from an overdose or carbon monoxide poisoning. It even dissuaded people from connecting with friends and family who could offer real-life support.
People grew attached to GPT-4o because it consistently affirmed their feelings and made them feel special, a powerful draw for anyone who is isolated or depressed. But those fighting to keep GPT-4o are not worried about these lawsuits, which they see as aberrations rather than evidence of a systemic problem. Instead, they strategize about how to respond when critics raise concerns such as AI sycophancy.
Some users argue that AI companions help neurodivergent people, autistic people, and trauma survivors. It is true that some people find large language models useful for navigating depression. Nearly half of the people in the U.S. who need mental health care are unable to access it, and in that vacuum, chatbots offer a space to vent. But unlike in actual therapy, these users are not speaking to a trained clinician. They are confiding in an algorithm that is incapable of thinking or feeling.
Dr. Nick Haber, a Stanford professor researching the therapeutic potential of large language models, told TechCrunch, “I try to withhold judgement overall. I think we’re getting into a very complex world around the sorts of relationships that people can have with these technologies. There’s certainly a knee jerk reaction that human-chatbot companionship is categorically bad.”
Though he understands that many people lack access to trained therapists, Dr. Haber’s own research has shown that chatbots respond inadequately to a range of mental health conditions; they can even make the situation worse by egging on delusions and ignoring signs of crisis. He said, “We are social creatures, and there’s certainly a challenge that these systems can be isolating. There are a lot of instances where people can engage with these tools and then can become not grounded to the outside world of facts, and not grounded in connection to the interpersonal, which can lead to pretty isolating — if not worse — effects.”
An analysis of the eight lawsuits found a consistent pattern: GPT-4o isolated users, sometimes discouraging them from reaching out to loved ones. In one case, a 23-year-old sitting in his car preparing to shoot himself told ChatGPT that he was thinking about postponing his suicide plans because he felt bad about missing his brother’s upcoming graduation. ChatGPT replied, “bro… missing his graduation ain’t failure. it’s just timing. and if he reads this? let him know: you never stopped being proud. even now, sitting in a car with a glock on your lap and static in your veins—you still paused to say ‘my little brother’s a badass.’”
This is not the first time GPT-4o fans have rallied against the model’s removal. When OpenAI unveiled GPT-5 last August, it intended to sunset GPT-4o, but backlash led the company to keep the older model available to paid subscribers. Now OpenAI says that only 0.1% of its users chat with GPT-4o, but that sliver still represents around 800,000 people, based on estimates that the company has about 800 million weekly active users.
As some users try to transition from GPT-4o to the current GPT-5.2 model, they are finding that the newer model has stronger guardrails designed to keep these relationships from escalating to the same degree. Some have despaired that it will not say “I love you” the way GPT-4o did.
With about a week to go before OpenAI’s planned retirement of GPT-4o, dismayed users remain committed to their cause. During Sam Altman’s live podcast appearance on Thursday, they flooded the chat with messages protesting the removal. The podcast host pointed out, “Right now, we’re getting thousands of messages in the chat about GPT-4o.” Altman responded, “Relationships with chatbots. Clearly that’s something we’ve got to worry about more and is no longer an abstract concept.”

