Sam Altman got exceptionally testy over Claude Super Bowl ads

Anthropic’s Super Bowl commercial, one of four ads the AI lab released on Wednesday, begins with the word “BETRAYAL” splashed across the screen. The camera pans to a man earnestly asking a chatbot, obviously intended to depict ChatGPT, for advice on how to talk to his mom. The bot, portrayed by a blonde woman, offers classic advice like starting by listening and trying a nature walk. Then the conversation twists into an ad for a fictitious cougar-dating site called Golden Encounters. Anthropic finishes the spot by saying that while ads are coming to AI, they won’t be coming to its own chatbot, Claude.

Another commercial features a slight young man looking for advice on building a six-pack. After he offers his height, age, and weight, the bot serves him an ad for height-boosting insoles. The Anthropic commercials are cleverly aimed at OpenAI’s users, following that company’s recent announcement that ads will be coming to ChatGPT’s free tier. They caused an immediate stir, spawning headlines that Anthropic mocks, skewers, and dunks on OpenAI.

The ads were funny enough that even Sam Altman admitted on X that he laughed at them. But he clearly was not amused. They inspired him to write a lengthy rant that devolved into calling his rival dishonest and authoritarian. In that post, Altman explained that an ad-supported tier is meant to cover the cost of offering ChatGPT free to millions of users. ChatGPT is still the most popular chatbot by a large margin. But the OpenAI CEO insisted the ads were dishonest in implying that ChatGPT will twist a conversation to insert an ad, possibly for an off-color product. He wrote that OpenAI would obviously never run ads in the way Anthropic depicts them, stating they are not stupid and know their users would reject that.

Indeed, OpenAI has promised ads will be separate, labeled, and will never influence a chat. But the company has also said it is planning on making them conversation-specific, which is the central allegation of Anthropic’s ads. As OpenAI explained, they plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.

Altman then went on to fling some equally questionable assertions at his rival. He wrote that Anthropic serves an expensive product to rich people, and that OpenAI feels strongly about bringing AI to billions who can’t pay for subscriptions. But Claude has a free chat tier too, with subscriptions at $0, $17, $100, and $200 per month. ChatGPT’s tiers are $0, $8, $20, and $200. One could argue the subscription tiers are fairly equivalent.

Altman also alleged that Anthropic wants to control what people do with AI. He argued it blocks usage of Claude Code by companies it doesn’t like, such as OpenAI, and said Anthropic tells people what they can and can’t use AI for. True, Anthropic’s marketing since day one has been about responsible AI. The company was founded by OpenAI alumni who claimed they grew alarmed about AI safety while working there.

Still, both chatbot companies have usage policies, AI guardrails, and talk about AI safety. And although OpenAI allows ChatGPT to be used for erotica and Anthropic does not, OpenAI, like Anthropic, has determined that some content should be blocked, particularly regarding mental health. Yet Altman took the argument to an extreme when he accused Anthropic of being authoritarian. He wrote that one authoritarian company won’t get us there on their own, calling it a dark path.

Using “authoritarian” in a rant over a cheeky Super Bowl ad is misplaced at best. It is particularly tactless considering the current geopolitical environment, in which protesters around the world have been killed by agents of their own governments. Business rivals have been duking it out in ads since the beginning of time, but clearly Anthropic hit a nerve.