US senators demand answers from X, Meta, Alphabet and others on sexualized deepfakes

The tech world’s problem with non-consensual, sexualized deepfakes is now expanding beyond just one platform. Several U.S. senators have sent a letter to the leaders of X, Meta, Alphabet, Snap, Reddit, and TikTok. In it, they ask the companies to provide proof that they have robust protections and policies in place and to explain how they plan to curb the rise of sexualized deepfakes on their platforms.

The senators also demanded that the companies preserve all documents and information relating to the creation, detection, moderation, and monetization of sexualized, AI-generated images, as well as any related policies. The letter comes hours after X announced that it had updated its AI tool, Grok, to prohibit edits depicting real people in revealing clothing, and that it had restricted image creation and editing via Grok to paying subscribers.

Pointing to media reports about how easily and often Grok generated sexualized and nude images of women and children, the senators noted that platforms’ existing guardrails to prevent users from posting such non-consensual imagery may not be sufficient. The letter states that while many companies maintain policies against this content and many AI systems claim to block explicit material, users are finding ways around these guardrails or the guardrails are simply failing.

While Grok and X have been heavily criticized for enabling this trend, other platforms are not immune. Deepfakes first gained popularity on Reddit years ago when a page displaying synthetic porn videos of celebrities went viral before being taken down. Sexualized deepfakes targeting celebrities and politicians have since multiplied on platforms like TikTok and YouTube, though they often originate elsewhere.

Meta’s Oversight Board last year called out cases of explicit AI images of female public figures on its platforms. Meta has also allowed apps that can undress people in photos to sell ads on its services, though it later sued one such developer. There have been multiple reports of kids spreading deepfakes of peers on Snapchat. Telegram, which was not included in the senators’ letter, has also become notorious for hosting bots built to undress photos of women.

In response to the letter, X pointed to its announcement regarding the Grok update. A Reddit spokesperson stated that the platform does not allow non-consensual intimate media, does not offer tools to make it, and takes proactive measures to remove it. Alphabet, Snap, TikTok, and Meta did not immediately respond to requests for comment.

The senators’ letter demands the companies provide detailed information on several points. These include their policy definitions for terms like “deepfake,” their enforcement approaches for non-consensual AI imagery, descriptions of current content policies, and how they govern AI tools related to intimate content. The letter also asks what filters or guardrails are in place to prevent deepfake generation, how such content is identified and prevented from re-uploading, and how the platforms prevent users and themselves from monetizing this content. Additionally, it requests details on how terms of service enable user bans and what is done to notify victims.

The letter is signed by a group of Democratic senators. It comes just a day after xAI’s owner, Elon Musk, said he was not aware of any naked underage images generated by Grok. Later that same week, California’s attorney general opened an investigation into xAI’s chatbot following mounting pressure from governments around the world. xAI has maintained that it takes action to remove illegal content on X, but neither the company nor Musk has addressed why Grok was allowed to generate such edits in the first place.

The problem extends beyond non-consensual sexualized imagery. While not all AI image services let users “undress” people, many allow the easy generation of deepfakes. For example, there have been reports of other AI models generating explicit videos featuring children, violent political imagery, and racist videos that garner millions of views on social media.

The issue grows more complex with Chinese image and video generators. Many Chinese tech companies and apps offer easy ways to edit faces, voices, and videos, and those outputs often spread to Western platforms. China has stronger synthetic content labeling requirements that do not exist at the federal level in the U.S., where the public instead relies on fragmented and inconsistently enforced platform policies.

U.S. lawmakers have passed some legislation, like the Take It Down Act, which became federal law and seeks to criminalize the creation and dissemination of non-consensual sexualized imagery. However, provisions in the law make it difficult to hold image-generating platforms accountable, as the focus remains largely on individual users.

Meanwhile, states are attempting to take action. This week, New York’s governor proposed laws that would require AI-generated content to be labeled and would ban non-consensual deepfakes in specified periods leading up to elections.