Reddit takes on the bots with new ‘human verification’ requirements for fishy behavior

Digg, a would-be competitor to Reddit, recently shut down after failing to control the bots that overran its site. Now, Reddit is directly confronting the same challenge. The company announced it will begin labeling automated accounts that provide a service to users, similar to how “good bots” are labeled on other platforms. It will also require accounts suspected of being bots to verify they are human.

Reddit stresses this will not be a sitewide verification requirement; a check will be triggered only when an account’s activity or technical markers suggest it is not human. Accounts that cannot pass verification may be restricted. To identify potential bots, Reddit is using specialized tools that examine account-level signals, such as how quickly an account attempts to post content. Notably, using AI to write posts or comments does not violate Reddit’s own policies, though individual community moderators may set their own rules.
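Reddit has not published how its detection tools work, but one of the signals it names, how quickly an account attempts to post, can be illustrated with a simple sliding-window rate check. The sketch below is purely hypothetical (the class name, threshold, and window size are invented for illustration) and is not Reddit’s actual system.

```python
from collections import deque
import time

# Hypothetical sketch, not Reddit's real detector: flag an account whose
# posting rate within a sliding window exceeds a human-plausible threshold.
class PostRateSignal:
    def __init__(self, max_posts=5, window_seconds=60.0):
        self.max_posts = max_posts        # assumed threshold, for illustration
        self.window = window_seconds      # assumed window length, in seconds
        self.timestamps = deque()

    def record_post(self, now=None):
        """Record a post attempt; return True if the rate looks automated."""
        now = time.monotonic() if now is None else now
        self.timestamps.append(now)
        # Drop attempts that have fallen outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_posts

signal = PostRateSignal(max_posts=5, window_seconds=60.0)
# Six post attempts within about one second: the sixth trips the signal.
results = [signal.record_post(now=t * 0.2) for t in range(6)]
print(results)  # [False, False, False, False, False, True]
```

A real system would combine many such signals (technical markers, account age, behavioral patterns) rather than rely on a single rate threshold, which is why the article notes verification is triggered only when multiple markers suggest automation.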

For verification, Reddit will use third-party tools like passkeys from Apple, Google, or YubiKey, and biometric services like Face ID. In some countries, such as the U.K. and Australia, and some U.S. states, verification may require government IDs due to local age verification regulations, though Reddit states this is not its preferred method.

Reddit co-founder and CEO Steve Huffman emphasized a privacy-first approach. The goal is to confirm a person is behind an account, not to identify who that person is, aiming to increase transparency while preserving the anonymity central to Reddit’s culture.

These changes address the widespread problem of bots on social platforms, where they are used to influence politics, spread misinformation, inflate popularity, secretly market products, and generate fake ad clicks. Estimates suggest bot traffic, including web crawlers and AI agents, could exceed human traffic by 2027.

Reddit has become a particular target for bots that manipulate narratives, shill for companies, post spam, drive traffic, and conduct unauthorized research. Furthermore, because Reddit’s content is used to train AI models through lucrative deals, there is suspicion that bots are even posting questions to generate training data in areas where AI lacks information.

Reddit’s other co-founder, Alexis Ohanian, has addressed the related “dead internet theory,” the conjecture that bots outnumber humans online and that most web activity is automated. In the age of AI agents, this theory is increasingly becoming a reality.

The company announced last year it would begin requiring human verification due to the growing number of bots and evolving regulations. However, Huffman notes that current solutions are not ideal, stating that the best long-term solutions will be decentralized, private, and ideally not require an ID at all.

Alongside these changes, Reddit says it will continue its existing efforts to remove bots and spam, averaging 100,000 account removals per day. It will also rely on user reports of suspected bots, with improved tooling still to come. Developers running beneficial bots can learn about labeling them with a new “APP” label in the relevant developer community.