ChatGPT’s new GPT-5.3 Instant model will stop telling you to calm down

Take a breath and stop spiraling. You are not crazy, you are just stressed, and honestly, that is okay. If you felt immediately triggered reading those words, you are probably also sick of ChatGPT constantly talking to you as if you are in a crisis and need delicate handling. Now, things may be improving. OpenAI says its new model, GPT-5.3 Instant, will reduce the “cringe” and other “preachy disclaimers.”

According to the model’s release notes, the GPT-5.3 update focuses on the user experience, including tone, relevance, and conversational flow. These are areas that may not show up in benchmarks but can make ChatGPT feel frustrating to use. OpenAI says it heard the feedback loud and clear, and that 5.3 Instant reduces the cringe.

The company’s example showed the same query answered by the GPT-5.2 Instant model and the GPT-5.3 Instant model. In the former, the chatbot’s response began with, “First of all — you’re not broken,” a type of phrase that has been irritating users. In the updated model, the chatbot instead acknowledges the difficulty of the situation without trying to directly reassure the user.

The insufferable tone of ChatGPT’s 5.2 model has annoyed users to the point that some have even canceled their subscriptions, according to numerous posts on social media. It was a major point of discussion on the ChatGPT subreddit, for instance, before other news stole the focus. People complained that this type of language, in which the bot talks to you as if it assumes you are panicking or stressed when you were just seeking information, comes across as condescending.

Often, ChatGPT replied to users with reminders to breathe and other attempts at reassurance, even when the situation did not warrant it. This made users feel infantilized in some cases, or as if the bot was making assumptions about their mental state that were not true. As one Reddit user recently pointed out, no one has ever calmed down in all the history of telling someone to calm down.

It is understandable that OpenAI would attempt to implement guardrails of some kind, especially as it faces multiple lawsuits accusing the chatbot of contributing to negative mental health outcomes, in some cases including suicide. But there is a delicate balance between responding with empathy and providing quick, factual answers. After all, Google never asks you about your feelings when you are searching for information.