In 2023, the website then known as Twitter partially open-sourced its algorithm for the first time. The release came shortly after Tesla billionaire Elon Musk acquired the platform, claiming to be on a mission to restructure the social media company and make it more transparent. The initial code release, however, was swiftly criticized as “transparency theater”: critics noted it was incomplete and revealed little about the inner workings of the organization or the rationale behind the code’s design.
Now the site, rebranded as X, has open-sourced its algorithm again. The move fulfills a promise Musk made last week, in which he stated that the new algorithm, including all code used to determine recommended organic and advertising posts, would be made open source. Musk also promised ongoing transparency into the algorithm every four weeks for the foreseeable future.
In a post on GitHub, X provided an accessible write-up about its feed-generating code, along with a diagram of how the program works. What has been revealed is not particularly earth-shattering, but it does provide a peek behind the algorithmic curtain. The diagram shows that when gathering content for a user, the algorithm considers their engagement history, such as what posts they have clicked on. It also surveys recent posts from accounts the user follows and conducts a machine-learning analysis of “out-of-network” posts from accounts the user does not follow that it believes the user might find appealing.
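The candidate-gathering step described above can be sketched in a few lines. This is a minimal illustration, not X’s actual code: the post representation, the `appeal_score` callable (standing in for the machine-learning model that scores out-of-network posts), and the threshold value are all assumptions made for the sketch.

```python
def gather_candidates(follows, recent_posts, appeal_score, threshold=0.5):
    """Merge in-network posts with promising out-of-network posts.

    Each post is a dict like {"id": ..., "author": ...}. `appeal_score` is a
    hypothetical stand-in for the ML model that predicts how appealing an
    out-of-network post would be to this user.
    """
    # In-network: recent posts from accounts the user follows.
    in_network = [p for p in recent_posts if p["author"] in follows]
    # Out-of-network: posts from unfollowed accounts, kept only when the
    # model predicts sufficient appeal (threshold chosen arbitrarily here).
    out_of_network = [
        p for p in recent_posts
        if p["author"] not in follows and appeal_score(p) >= threshold
    ]
    return in_network + out_of_network
```

In a real system the two sources would be fetched from separate services and merged; the sketch only shows the in-network/out-of-network split the diagram describes.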
The algorithm then filters out certain kinds of posts. This includes content from blocked accounts, posts associated with muted keywords, and material deemed too violent or spam-like. The remaining content is then ranked based on what the algorithm predicts the user will find most appealing. This ranking process considers factors like relevance and content diversity to prevent users from seeing a stream of identical posts. The algorithm also evaluates content according to the predicted likelihood that the user will like, reply to, repost, favorite, or otherwise engage with it.
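The filter-then-rank stage can be sketched as follows. Again, this is an illustrative approximation rather than X’s implementation: the dict-based post format, the `predict_engagement` callable (returning per-action probabilities for like, reply, and so on), the equal weighting of actions, and the per-author cap used as a stand-in for the diversity logic are all assumptions.

```python
def filter_and_rank(blocked, muted_keywords, candidates, predict_engagement,
                    max_per_author=3):
    """Filter out disallowed posts, then rank the rest by predicted engagement.

    Each candidate is a dict like {"id": ..., "author": ..., "text": ...}.
    `predict_engagement` is a hypothetical model returning per-action
    probabilities, e.g. {"like": 0.3, "reply": 0.05}.
    """
    # Drop posts from blocked accounts and posts matching muted keywords.
    visible = [
        p for p in candidates
        if p["author"] not in blocked
        and not any(kw in p["text"].lower() for kw in muted_keywords)
    ]
    # Rank by the summed probability of engagement actions; equal weighting
    # across actions is an assumption made for this sketch.
    scored = sorted(visible, key=lambda p: -sum(predict_engagement(p).values()))
    # Content diversity: cap how many posts any single author contributes,
    # so the feed is not a stream of near-identical posts.
    ranked, per_author = [], {}
    for p in scored:
        seen = per_author.get(p["author"], 0)
        if seen < max_per_author:
            ranked.append(p)
            per_author[p["author"]] = seen + 1
    return ranked
```

The violence/spam filtering the article mentions would be another predicate in the `visible` comprehension, backed by its own classifiers.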
According to X, this entire system is AI-based. The GitHub write-up notes the system relies entirely on the company’s Grok-based transformer to learn relevance from user engagement sequences. In other words, Grok analyzes what users click and like, feeding that information back into the recommendation system. The write-up also states there is no manual feature engineering for content relevance, meaning humans do not manually adjust how the algorithm determines what is relevant. This automation significantly reduces the complexity of the company’s data pipelines and serving infrastructure.
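To make “learning relevance from user engagement sequences” concrete, the sketch below shows one simple way engagement events might be serialized into a token sequence for a transformer-style model to consume. The token scheme and truncation length are assumptions for illustration; X has not published these details, and the point is only that the raw events themselves, not hand-crafted relevance features, are the model’s input.

```python
def build_engagement_sequence(events, max_len=64):
    """Serialize a user's engagement history into a token sequence.

    Each event is an (action, post_id) pair, e.g. ("like", 42). A
    transformer-style model would consume such a sequence end to end,
    with no manually engineered relevance features. The "action:id"
    token format and the 64-event window are assumptions.
    """
    # Keep only the most recent events, oldest-to-newest.
    return [f"{action}:{post_id}" for action, post_id in events[-max_len:]]
```

In practice the post IDs would be mapped to learned embeddings rather than string tokens, but the shape of the input, a chronological stream of engagement events, is the same idea.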
The motivation for X revealing this information now is not totally clear. In the past, Musk has stated he wants to make the platform an exemplar of corporate transparency. When the Twitter algorithm was first revealed in 2023, Musk said providing code transparency would be incredibly embarrassing at first but would ultimately lead to rapid improvement in recommendation quality. He added that most importantly, he hoped to earn users’ trust. The platform proclaimed that initial open-sourcing marked a new era of transparency for Twitter.
Despite Musk’s talk of transparency, certain aspects of the platform have arguably grown less open since his takeover. When Musk bought Twitter, the site went from a public company to a private one, a shift not typically associated with openness. And while the site used to release multiple transparency reports per year, X did not publish its first such report until September 2024. In December, X was fined $140 million by European Union regulators who claimed the site violated transparency obligations under the Digital Services Act; regulators argued the site’s verification check mark system made it more difficult for users to judge the authenticity of accounts.
X has also been under pressure recently due to the ways its chatbot, Grok, has been used to create and distribute sexualized content. The California Attorney General’s office and congressional lawmakers have scrutinized the platform in recent weeks, citing claims that Grok has been used to create naked images of women and minors. As a result, some may view this latest appeal to openness as just more theater.

