Twitter’s former Trust and Safety head details the challenges facing decentralized social platforms

Yoel Roth, formerly Twitter’s head of Trust and Safety and now at Match, is sharing his concerns about the future of the open social web and its ability to combat misinformation, spam, and illegal content like child sexual abuse material. In a recent interview, Roth pointed to the lack of moderation tools available to the fediverse, which includes apps like Mastodon, Threads, Pixelfed, and others, as well as open platforms like Bluesky.

He reflected on key moments in Twitter’s Trust and Safety history, such as the decision to ban President Trump from the platform, the spread of misinformation by Russian bot farms, and how even Twitter’s own users, including CEO Jack Dorsey, fell victim to bots.

During a podcast discussion, Roth pointed out that many of the platforms embracing democratically run online communities lack the moderation tools they need. He noted that services like Mastodon and others built on the ActivityPub protocol, along with early versions of Bluesky and Threads, often gave administrators the fewest technical tools for enforcing their own policies.

Roth also observed a decline in transparency and decision legitimacy on the open social web. While Twitter faced criticism for banning Trump, it at least explained its reasoning. Today, many platforms avoid transparency to prevent bad actors from exploiting their systems. On some open platforms, banned posts disappear without notice, leaving no trace for other users.

He questioned whether the open social web’s goal of more democratically legitimate governance has actually been achieved, given the current state of moderation.

Roth also highlighted the economic challenges of moderation in federated systems. Organizations like IFTAS, which built moderation tools for the fediverse, struggled to find funding and had to shut down key projects. He explained that the work relies heavily on volunteers, and the rising cost of running machine learning models makes it difficult to sustain.

Bluesky has taken a different approach by employing moderators and offering customizable moderation tools. While Roth praised these efforts, he noted that decentralization raises questions about balancing individual protections with community needs. For example, a user’s settings might hide harmful content like doxxing from their own feed, yet someone still needs to enforce protections for the person being targeted.
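To make “customizable moderation tools” concrete, here is a minimal sketch of how composable moderation can work in principle: third-party labelers attach labels to posts, each user decides how to treat each label, and a small set of platform-level protections applies to everyone regardless of individual settings. The code is Python with invented names and label categories; it is not Bluesky’s actual AT Protocol labeling API, just an illustration of the trade-off Roth describes.

```python
from dataclasses import dataclass, field
from enum import Enum


class Action(Enum):
    SHOW = "show"
    WARN = "warn"   # hide behind a click-through warning
    HIDE = "hide"   # remove from the user's feed entirely


# Labels that are always hidden, regardless of individual settings.
# These category names are illustrative, not any platform's real taxonomy.
PLATFORM_ENFORCED = {"doxxing", "csam"}


@dataclass
class Post:
    text: str
    labels: set[str] = field(default_factory=set)  # labels applied by labeling services


@dataclass
class UserPreferences:
    # Per-label choices made by the individual user.
    label_actions: dict[str, Action] = field(default_factory=dict)

    def action_for(self, post: Post) -> Action:
        # Platform-level protections apply to everyone, even if a user's
        # own settings would otherwise leave the content visible.
        if post.labels & PLATFORM_ENFORCED:
            return Action.HIDE
        # Otherwise, take the strictest action among the user's own settings.
        actions = [self.label_actions.get(label, Action.SHOW) for label in post.labels]
        if Action.HIDE in actions:
            return Action.HIDE
        if Action.WARN in actions:
            return Action.WARN
        return Action.SHOW


if __name__ == "__main__":
    prefs = UserPreferences(label_actions={"spam": Action.HIDE, "nudity": Action.WARN})
    posts = [
        Post("a perfectly ordinary post"),
        Post("buy followers now!!!", labels={"spam"}),
        Post("here is someone's home address", labels={"doxxing"}),
    ]
    for post in posts:
        print(prefs.action_for(post).value, "-", post.text)
```

The last rule is the point Roth raises: a user’s own settings can hide doxxing from their feed, but protecting the person being doxxed requires enforcement that sits above any individual’s preferences.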

Privacy concerns further complicate moderation in the fediverse. Unlike Twitter, which collected data like IP addresses for forensic analysis, some federated platforms avoid storing such information, making it harder to identify bots or malicious actors.

Roth shared examples from his time at Twitter, where users often falsely accused others of being bots. Even Jack Dorsey once amplified content from a Russian troll posing as a Black woman. Without proper data, distinguishing real users from bots becomes nearly impossible.

The rise of AI adds another layer of complexity. Recent research suggests that AI-generated content can be more persuasive than human-written text in political contexts. Roth argued that relying solely on content analysis is insufficient; platforms must also track behavioral signals, such as bulk account creation or posting at unusual times, to detect manipulation.
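As a rough illustration of what behavioral signals look like in practice, the sketch below computes two simple heuristics: how concentrated an account’s posting times are, and how many accounts were registered within a short window. The function names, thresholds, and window sizes are hypothetical, not any platform’s real detection logic.

```python
from collections import Counter
from datetime import datetime


def posting_hour_concentration(timestamps: list[datetime]) -> float:
    """Fraction of posts that fall in the account's single busiest hour of day.

    Human posting tends to spread across waking hours; a value near 1.0 can
    suggest scripted, schedule-driven posting. Purely illustrative heuristic.
    """
    if not timestamps:
        return 0.0
    hours = Counter(ts.hour for ts in timestamps)
    return max(hours.values()) / len(timestamps)


def largest_creation_burst(creation_times: list[datetime], window_minutes: int = 10) -> int:
    """Largest number of accounts registered within any rolling time window.

    A spike of registrations in a short window is one signal of bulk account
    creation, which content analysis alone would never surface.
    """
    times = sorted(creation_times)
    best, start = 0, 0
    for end, current in enumerate(times):
        # Shrink the window until it spans at most window_minutes.
        while (current - times[start]).total_seconds() > window_minutes * 60:
            start += 1
        best = max(best, end - start + 1)
    return best
```

Real systems combine many such signals with account metadata and network structure; no single heuristic like these is reliable on its own, but the example shows why this kind of analysis depends on data that purely content-based review never sees.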

In summary, Roth’s insights underscore the challenges facing the open social web, from inadequate moderation tools and economic hurdles to privacy trade-offs and the growing influence of AI. Addressing these issues will be critical for ensuring a safer and more transparent online environment.