Elon Musk’s social network X is rolling out a new feature to label edited images as “manipulated media.” The announcement came from Musk himself in a cryptic post that simply stated “Edited visuals warning,” as he reshared an update from the anonymous account DogeDesigner, which often serves as a proxy for announcing new X features.
However, the company has provided no clarification on how it will determine what constitutes manipulated media. It is unclear whether the policy will apply to images edited with traditional tools like Adobe Photoshop or focus solely on AI-generated content. DogeDesigner’s post claimed the feature could make it harder for legacy media groups to spread misleading clips or pictures, presenting it as a new development for X.
Before its acquisition and rebranding as X, the platform formerly known as Twitter had a policy of labeling tweets containing manipulated, deceptively altered, or fabricated media. That earlier policy, explained by site integrity head Yoel Roth in 2020, was not limited to AI and covered techniques like selective editing, cropping, slowing down audio, or manipulating subtitles.
It remains uncertain whether X is reinstating those same rules or has created a new system aimed specifically at AI-generated content. X's current help documentation mentions a policy against sharing inauthentic media, but enforcement appears inconsistent, as demonstrated by the recent proliferation of non-consensual deepfake nude images on the platform. The issue is further complicated by the fact that even official sources, including the White House, have shared manipulated imagery.
Labeling content as “manipulated media” or an “AI image” is a nuanced challenge. Given X’s role as a channel for political propaganda, both domestic and foreign, transparency around how the company defines “edited” content is critical. Users also deserve to know if there is any appeal process beyond the crowdsourced Community Notes system.
Other platforms have faced significant hurdles with similar systems. When Meta introduced AI image labeling in 2024, its detection tools frequently malfunctioned, incorrectly tagging real photographs as “Made with AI.” This occurred because AI features are now commonly integrated into standard creative software used by photographers. For example, simple actions like using Adobe’s cropping tool or its Generative AI Fill for object removal could trigger Meta’s AI detector. Meta later updated its label to the less definitive “AI info” to better reflect when AI tools were merely used in the editing process.
There are existing standards for verifying digital content, such as the C2PA, the Coalition for Content Provenance and Authenticity. Related initiatives include the Content Authenticity Initiative and Project Origin, which focus on adding tamper-evident metadata to media. Major technology companies like Microsoft, Adobe, Intel, Sony, and OpenAI are part of the C2PA’s steering committee. X, however, is not currently listed among the C2PA’s members.
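To make the C2PA approach concrete: in JPEG files, C2PA manifests are carried as JUMBF (JPEG Universal Metadata Box Format) boxes inside APP11 marker segments. The sketch below, a simplified illustration rather than a real verifier, scans a JPEG's marker segments for an APP11 payload containing a JUMBF box, which suggests (but does not cryptographically prove) that provenance metadata is present. Actual verification of signatures and tamper-evidence requires a full C2PA implementation.

```python
# Simplified sketch: detect whether a JPEG appears to carry C2PA
# provenance metadata. C2PA manifests are embedded in JPEGs as JUMBF
# boxes inside APP11 (0xFFEB) marker segments. This only checks for
# presence; it does NOT validate signatures or detect tampering.

def has_c2pa_manifest(data: bytes) -> bool:
    """Scan JPEG marker segments for an APP11 payload with a JUMBF box."""
    if not data.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # lost sync with marker structure
            break
        marker = data[i + 1]
        if marker in (0xD8, 0xD9):  # SOI / EOI carry no length field
            i += 2
            continue
        # Two-byte big-endian length includes the length field itself
        length = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"jumb" in segment:  # APP11 with JUMBF box
            return True
        i += 2 + length
    return False
```

A platform-side labeling system would use a check like this only as a first pass; the presence of a manifest then has to be verified against its embedded certificate chain before any "authentic" or "edited" label is shown.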
X is not alone in confronting manipulated media. Beyond Meta, platforms like TikTok label AI-generated content. Streaming services such as Deezer and Spotify are scaling initiatives to identify and label AI music. Google Photos uses the C2PA standard to indicate how photos on its platform were created.
Ultimately, while Elon Musk has announced X’s new “manipulated media” warning, essential details are missing. It is unclear if the feature is genuinely new, what specific rules it follows, whether it focuses on AI or all edits, and what standards or technology will power the detection. X has not responded to requests for comment on these matters.