Bluesky issues its first transparency report, noting rise in user reports and legal demands

Bluesky released its first transparency report this week. The report documents actions taken by its Trust and Safety team and covers initiatives like age-assurance compliance, monitoring of influence operations, and automated labeling.

The social media startup, a rival to platforms like X and Threads, grew nearly 60% in 2025. Its user base expanded from 25.9 million to 41.2 million accounts. This figure includes accounts hosted on Bluesky’s own infrastructure and those on independent servers within its decentralized network, which is built on the AT Protocol.

Users made 1.41 billion posts on the platform during the past year. This activity represented 61% of all posts ever made on Bluesky. Of those posts, 235 million contained media, accounting for 62% of all media posts shared on the platform to date.

The company reported a more than sixfold increase in legal requests from law enforcement, government regulators, and legal representatives in 2025. It received 1,470 requests, up from 238 requests in 2024.

While Bluesky previously shared moderation reports for 2023 and 2024, this is its first comprehensive transparency report. The new report addresses areas beyond moderation, such as regulatory compliance and account verification.

Moderation reports from users increased by 54% in 2025. The company received 9.97 million user reports, up from 6.48 million in 2024. Bluesky noted that this growth closely tracked its 57% user growth over the same period. In the prior year, moderation reports had seen a dramatic 17x increase.

Approximately 3% of the user base, or 1.24 million users, submitted reports in 2025. The top reporting categories were “misleading” content, which includes spam, at 43.73% of the total. “Harassment” accounted for 19.93%, and sexual content made up 13.54%. A catch-all “other” category included 22.14% of reports. Other specific categories like violence, child safety, breaking site rules, or self-harm accounted for much smaller percentages.

Within the “misleading” category’s 4.36 million reports, spam accounted for 2.49 million reports. Among the 1.99 million “harassment” reports, hate speech was the largest specific subcategory, with about 55,400 reports. Other areas included targeted harassment (about 42,520 reports), trolling (29,500 reports), and doxxing (about 3,170 reports). Bluesky stated that the majority of “harassment” reports fell into a gray area of antisocial behavior, such as rude remarks, that did not fit other specific categories.

Most of the 1.52 million sexual content reports concerned mislabeling, meaning adult content was not properly marked with the metadata tags that allow user-controlled moderation. A smaller number focused on nonconsensual intimate imagery (about 7,520), abuse content (about 6,120), and deepfakes (over 2,000).
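The mislabeling issue stems from how content moderation works on the AT Protocol: authors can attach self-labels to their own posts, and clients then hide or blur labeled content according to each user's preferences. As a rough illustration, here is a minimal Python sketch of a post record carrying a self-label; the field names follow the public atproto lexicons (`app.bsky.feed.post`, `com.atproto.label.defs#selfLabels`), while the helper function and sample values are hypothetical.

```python
# Illustrative sketch of an AT Protocol post record with an author-applied
# self-label. Field names follow the public atproto lexicons; the helper
# function and sample post text are hypothetical.

def build_labeled_post(text: str, label_values: list[str]) -> dict:
    """Return a post record dict carrying author-applied content labels."""
    return {
        "$type": "app.bsky.feed.post",
        "text": text,
        "createdAt": "2025-01-01T00:00:00Z",
        "labels": {
            "$type": "com.atproto.label.defs#selfLabels",
            "values": [{"val": v} for v in label_values],
        },
    }

# A self-labeled adult-content post: clients read the label values and
# apply each viewer's own moderation settings (hide, blur, or show).
post = build_labeled_post("example image post", ["porn"])
print(post["labels"]["values"])  # [{'val': 'porn'}]
```

A post missing these labels is what the report counts as "mislabeled": the content itself may be permitted, but without the metadata, user-controlled filtering cannot act on it.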

Reports focused on violence totaled 24,670. These were broken into subcategories like threats or incitement (about 10,170 reports), glorification of violence (6,630 reports), and extremist content (3,230 reports).

In addition to user reports, Bluesky’s automated system flagged 2.54 million potential violations. The company reported success in reducing daily reports of antisocial behavior, which dropped 79% after implementing a system that identifies toxic replies and reduces their visibility by placing them behind an extra click. User reports per 1,000 monthly active users also declined by 50.9% from January to December 2025.
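The mechanism described above does not remove toxic replies; it collapses them so a reader must click to expand. A minimal sketch of that kind of visibility gating might look like the following, where the scoring model, threshold value, and field names are all assumptions for illustration, not Bluesky's actual implementation.

```python
# Hypothetical sketch of visibility gating for toxic replies: replies whose
# toxicity score exceeds a threshold are not deleted, only collapsed behind
# an extra click. The threshold and scores below are illustrative.

TOXICITY_THRESHOLD = 0.8  # assumed cutoff; the real system's value is not public

def render_reply(reply: dict) -> dict:
    """Decide how a reply is displayed based on its toxicity score."""
    collapsed = reply["toxicity"] >= TOXICITY_THRESHOLD
    return {
        "text": reply["text"],
        "collapsed": collapsed,  # True => shown only after an extra click
    }

replies = [
    {"text": "great point!", "toxicity": 0.05},
    {"text": "rude remark", "toxicity": 0.93},
]
print([render_reply(r)["collapsed"] for r in replies])  # [False, True]
```

The design choice here is friction rather than removal: borderline antisocial content stays available, but the extra click reduces its reach, which is consistent with the drop in antisocial-behavior reports the company describes.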

Outside of moderation, Bluesky noted it removed 3,619 accounts for suspected influence operations, most likely originating from Russia.

The company stated last fall it was becoming more aggressive about moderation and enforcement. In 2025, Bluesky took down 2.44 million items, including accounts and content. The year prior, it had taken down 66,308 accounts, and its automated tooling removed 35,842 accounts. Moderators also took down 6,334 records, while automated systems removed 282.

Bluesky issued 3,192 temporary suspensions and 14,659 permanent removals in 2025. Most permanent suspensions targeted ban evasion, inauthentic behavior, spam networks, and impersonation.

The report suggests Bluesky prefers labeling content over removing users. In 2025, the company applied 16.49 million labels to content, a 200% year-over-year increase. Account takedowns grew 104%, from 1.02 million to 2.08 million. Most labeling involved adult and suggestive content or nudity.