US senators demand answers from X, Meta, Alphabet on sexualized deepfakes

The tech world’s deepfake pornography problem is now bigger than just one platform. Several U.S. senators have sent a letter to the leaders of X, Meta, Alphabet, Snap, Reddit, and TikTok. They are asking these companies to provide proof that they have robust protections and policies in place and to explain how they plan to curb the rise of sexualized deepfakes on their platforms.

The senators also demanded that the companies preserve all documents and information relating to the creation, detection, moderation, and monetization of sexualized, AI-generated images, as well as any related policies. The letter comes hours after X said it had updated its AI tool, Grok, to prohibit edits that place real people in revealing clothing, and restricted image creation and editing via Grok to paying subscribers.

Citing media reports about how easily and often Grok generated sexualized and nude images of women and children, the senators warned that platforms' guardrails against non-consensual, sexualized imagery may not be enough. The letter states that while many companies maintain policies against such content, in practice users are finding ways around these guardrails, or the guardrails are failing outright. Grok, and by extension X, has been heavily criticized for enabling this trend, but other platforms are not immune.

Deepfakes first gained popularity on Reddit, where a page hosting synthetic porn videos of celebrities went viral before the platform took it down in 2018. Sexualized deepfakes targeting celebrities and politicians have since multiplied on TikTok and YouTube, though they usually originate elsewhere.

Meta’s Oversight Board last year called out two cases of explicit AI images of female public figures. Meta has also allowed nudify apps to sell ads on its services, though it later sued one such company, CrushAI. There have been multiple reports of kids spreading deepfakes of peers on Snapchat. Telegram, which isn’t included on the senators’ list, has also become notorious for hosting bots built to undress photos of women.

The letter demands the companies provide their policy definitions of terms like “deepfake” and “non-consensual intimate imagery.” It asks for descriptions of their policies and enforcement approach for non-consensual AI deepfakes, including altered clothing and virtual undressing. The senators want to know about current content policies, internal guidance for moderators, and how policies govern AI tools. They are asking what filters or guardrails are in place to prevent generation and distribution, how deepfakes are identified and prevented from re-upload, and how the platforms prevent users and themselves from profiting from such content. The letter also demands information on how terms of service enable user bans and what is done to notify victims.

The letter is signed by Senators Lisa Blunt Rochester, Tammy Baldwin, Richard Blumenthal, Kirsten Gillibrand, Mark Kelly, Ben Ray Luján, Brian Schatz, and Adam Schiff.

The move comes just a day after xAI owner Elon Musk said he was not aware of any naked underage images generated by Grok. California’s attorney general has since opened an investigation into xAI’s chatbot, following mounting pressure from governments around the world incensed by the lack of guardrails. xAI maintains that it takes action to remove illegal content on X, including child sexual abuse material and non-consensual nudity, though neither the company nor Musk has addressed why Grok was allowed to generate such edits in the first place.

The problem extends beyond non-consensual, manipulated sexualized imagery. While not all AI image services let users undress people, many still make it easy to generate deepfakes. For example, OpenAI’s Sora 2 reportedly allowed users to generate explicit videos featuring children; Google’s Nano Banana seemingly generated an image showing Charlie Kirk being shot; and racist videos made with Google’s AI video model have garnered millions of views on social media.

The issue grows more complex with Chinese image and video generators. Many Chinese tech companies and apps, especially those linked to ByteDance, offer easy ways to edit faces, voices, and videos, and those outputs have spread to Western platforms. China enforces synthetic-content labeling requirements that have no federal counterpart in the U.S., where people instead rely on fragmented and dubiously enforced platform policies.

U.S. lawmakers have passed some legislation seeking to rein in deepfake pornography, but the impact has been limited. The Take It Down Act, which became federal law in May, is meant to criminalize the creation and dissemination of non-consensual, sexualized imagery. But a number of provisions in the law make it difficult to hold image-generating platforms accountable, as they focus most scrutiny on individual users instead.

Meanwhile, a number of states are trying to take matters into their own hands to protect consumers and elections. This week, New York Governor Kathy Hochul proposed laws that would require AI-generated content to be labeled and would ban non-consensual deepfakes in specified periods leading up to elections, including depictions of opposition candidates.