Sam Altman says that bots are making social media feel ‘fake’

Sam Altman, an enthusiast of X and a shareholder in Reddit, had an epiphany on Monday. He realized that bots have made it impossible to determine whether social media posts are actually written by humans. This thought came to him while he was reading and sharing posts from the r/Claudecode subreddit, which were praising OpenAI’s Codex. OpenAI launched this software programming service in May to compete with Anthropic’s Claude Code.

Lately, that subreddit has been filled with posts from self-proclaimed Claude Code users announcing they have moved to Codex. The trend became so prevalent that one Reddit user joked, “Is it possible to switch to codex without posting a topic on Reddit?” This left Altman wondering how many of those posts were from real people. He confessed on X that he found the experience strange, assuming it was all fake or bots, even though he knows Codex growth is genuinely strong and the trend is real.

He then analyzed his reasoning in real time. He suggested a number of factors are at play: real people have picked up the quirks of how large language models speak, the extremely online crowd tends to drift together in correlated ways, and the hype cycle is dominated by extreme opinions. He also pointed to the optimization pressure from social platforms to boost engagement, the way creator monetization works, and the fact that other companies have astroturfed OpenAI, making him extra sensitive to it. He concluded that a bunch of other reasons probably apply, including some bots.

Decoded, his statement essentially accuses humans of starting to sound like LLMs, even though these models were invented to mimic human communication. It is worth noting that OpenAI’s models were trained on data from Reddit, where Altman was a board member through 2022 and was later disclosed as a large shareholder during the company’s IPO last year.

He makes a valid point about how fandoms, led by extremely online social media users, often behave in odd ways. Many groups can devolve into negative spaces if overrun by people venting their frustrations.

Altman also critiqued the incentives created when social media sites and creators rely on engagement to make money. He further admitted that one reason he suspects the pro-OpenAI posts might be bots is that OpenAI itself has been a target of astroturfing. This typically involves posts by people or bots paid for by a competitor, giving that competitor plausible deniability.

There is no direct evidence of such astroturfing, though it is possible. However, we did see how OpenAI subreddits turned on the company after it released GPT-5.0. Instead of waves of praise, many angry posts were voted up. People complained about everything from GPT’s personality to how it burned through credits without finishing tasks.

A day after this bumpy release, Altman did a Reddit ask-me-anything session on r/GPT where he confessed to rollout issues and promised changes. The GPT subreddit has never fully recovered its previous supportive atmosphere, with users still regularly posting about how much they dislike the changes in GPT-5.0. This raises the question: are these critical posters human, or are they fake in some way, as Altman seems to imply?

Altman surmises that the net effect is that AI discussions on Twitter and Reddit feel very fake in a way they did not a year or two ago. If that is true, whose fault is it? GPT has led the way in making models so good at writing that LLMs have become a plague not just to social media sites, which have always had a bot problem, but also to schools, journalism, and the courts.

While we do not know exactly how many Reddit posts are written by bots or by humans using LLMs, it is likely a substantial number. The data security company Imperva reported that over half of all internet traffic in 2024 was non-human, largely due to LLMs. X’s own bot, Grok, has stated that while exact numbers are not public, 2024 estimates suggest there are hundreds of millions of bots on X.

Several cynics have suggested that Altman’s lament was his first foray into marketing OpenAI’s own rumored social media platform. In April, it was reported that such a project to compete with X and Facebook was in its earliest stages. This product may or may not exist, and Altman may or may not have had ulterior motives for suggesting that social media is too fake these days.

But putting motives aside, if OpenAI is planning a social network, what are the odds that it would be a bot-free zone? Ironically, if it did the reverse and banned humans, the results likely would not be very different. Not only do LLMs still hallucinate facts, but when researchers at the University of Amsterdam built a social network composed entirely of bots, they found that the bots soon formed cliques and echo chambers for themselves.