Elon Musk teases a new image-labeling system for X…we think?

Elon Musk’s X is the latest social network to announce a feature that will label edited images as “manipulated media.” This news comes from a cryptic post by Musk himself, though the company has not clarified how it will determine what qualifies as manipulated. It is unknown whether the policy will include images edited with traditional tools like Adobe Photoshop.

So far, the only details come from Musk resharing a post by the anonymous account DogeDesigner, which often serves as a proxy for announcing new X features. That post claimed the new system could make it harder for legacy media groups to spread misleading clips or pictures, and stated that the feature is new to X.

However, details remain scarce. Before its acquisition and rebranding, the platform then known as Twitter had a policy for labeling tweets that used manipulated, deceptively altered, or fabricated media. That earlier policy, explained by then-head of site integrity Yoel Roth in 2020, was not limited to AI and covered techniques like selective editing, cropping, slowing down video, and manipulating subtitles.

It is unclear if X is readopting these same rules or has made significant changes to tackle AI-generated content specifically. X’s current help documentation mentions a policy against sharing inauthentic media, but enforcement appears inconsistent, as demonstrated by the recent spread of non-consensual AI-generated nude images. The issue is widespread, with even official sources like the White House having shared manipulated photographs.

Labeling content as “manipulated media” or an “AI image” is a nuanced challenge. Given X’s role as a platform for domestic and international political propaganda, transparency around how the company defines “edited” or AI-generated content is crucial. Users also deserve to know if there is any dispute process beyond the crowdsourced Community Notes system.

As Meta discovered when it introduced AI image labeling in 2024, detection systems can easily go awry. Meta incorrectly tagged real photographs with its “Made with AI” label because AI features are now integrated into many standard creative tools used by photographers. For example, Adobe’s cropping tool or its Generative AI Fill for object removal could trigger AI detectors, even if the image was not wholly AI-generated. Meta later updated its label to “AI info” to better reflect when AI tools were merely used in the editing process.

Today, standards exist for verifying digital content, such as C2PA (the Coalition for Content Provenance and Authenticity). Related initiatives include the Content Authenticity Initiative and Project Origin, both focused on attaching tamper-evident provenance metadata to media. Major companies including Microsoft, the BBC, Adobe, Intel, Sony, and OpenAI sit on the C2PA steering committee.
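To make the idea of provenance metadata concrete: C2PA manifests are typically embedded in JPEG files as JUMBF boxes carried in APP11 marker segments. The sketch below is an illustration only, not the C2PA reference implementation; the function name and the simple "look for the c2pa label" heuristic are our own, and real verification requires parsing the manifest and cryptographically validating its signatures.

```python
def has_c2pa_manifest(data: bytes) -> bool:
    """Heuristically check whether a JPEG byte stream appears to carry
    C2PA provenance metadata. C2PA manifests are embedded in JPEG APP11
    (0xFFEB) segments as JUMBF boxes; this sketch merely looks for the
    'c2pa' label inside any APP11 segment. It detects presence, not
    validity -- it is not a validator."""
    if not data.startswith(b"\xff\xd8"):  # must begin with the SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # not a marker where one was expected; bail out
        marker = data[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        if marker == 0x01 or 0xD0 <= marker <= 0xD7:  # standalone markers
            i += 2
            continue
        # Segment length includes its own two length bytes
        length = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 with JUMBF/C2PA label
            return True
        if marker == 0xDA:  # SOS: entropy-coded data follows; stop scanning
            break
        i += 2 + length
    return False
```

A stripped-down JPEG header with an APP11 segment containing the `c2pa` label would return `True`; a file with only an APP0 segment, or non-JPEG bytes, would return `False`.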

Presumably, X’s implementation would follow some known process for identifying AI content, but Elon Musk did not specify what that is. He also did not clarify if the policy targets only AI images or any image not uploaded directly from a smartphone camera. It is even uncertain if the feature is entirely new, as DogeDesigner claims.

X is not alone in grappling with this issue. Beyond Meta, platforms like TikTok label AI content. Streaming services like Deezer and Spotify are scaling initiatives to identify and label AI-generated music. Google Photos uses C2PA standards to indicate how photos were made. X is not currently listed as a member of the C2PA, and the company does not typically respond to requests for comment.