The problem of non-consensual, sexualized deepfakes has expanded beyond any single platform in the tech world. In a formal letter, several U.S. senators are demanding that the leaders of X, Meta, Alphabet, Snap, Reddit, and TikTok explain how they plan to curb the rise of sexualized deepfakes on their platforms and provide proof of robust protections and policies.
The letter also demands that these companies preserve all documents and information relating to the creation, detection, moderation, and monetization of sexualized, AI-generated images, along with any related policies. This action follows an announcement from X that it had updated its AI tool, Grok, to prohibit edits depicting real people in revealing clothing and had restricted image creation via Grok to paying subscribers.
The senators cited media reports detailing how easily and frequently Grok generated sexualized and nude images of women and children. They pointed out that platforms’ existing guardrails to prevent the posting of nonconsensual, sexualized imagery may be insufficient. The letter states that while many companies maintain policies against such content, users are finding ways around these guardrails or the guardrails are simply failing.
Although Grok and X have faced heavy criticism for enabling this trend, other platforms are not immune. Deepfakes first gained significant attention on Reddit years ago with a page featuring synthetic porn videos of celebrities, which was later taken down. Sexualized deepfakes targeting celebrities and politicians have since multiplied on platforms like TikTok and YouTube, though they often originate elsewhere.
Meta’s Oversight Board recently addressed cases of explicit AI images of female public figures. The platform has also had to deal with nudify apps selling ads on its services, leading to a lawsuit against one such company. There have been multiple reports of children spreading deepfakes of peers on Snapchat. Additionally, Telegram, which was not included in the senators’ letter, has become notorious for hosting bots designed to undress photos of women.
In response to the letter, X referenced its announcement about the Grok update. A Reddit spokesperson stated that the platform does not allow any non-consensual intimate media, does not offer tools to create it, and takes proactive measures to remove it. Alphabet, Snap, TikTok, and Meta did not immediately respond to requests for comment.
The senators’ letter demands detailed information from the companies. This includes their policy definitions for terms like “deepfake” and “non-consensual intimate imagery.” It asks for descriptions of their enforcement approaches for non-consensual AI deepfakes, including altered clothing and virtual undressing. The companies must also outline their current content policies, internal guidance for moderators, and how their policies govern AI tools related to intimate content.
Further demands cover the specific filters and guardrails implemented to prevent the generation and distribution of deepfakes. The letter asks about the mechanisms used to identify such content and prevent its re-upload, how they prevent users from profiting from it, and how the platforms themselves avoid monetizing it. The companies must also explain how their terms of service enable them to ban users who post deepfakes and what they do to notify victims.
The letter is signed by Senators Lisa Blunt Rochester, Tammy Baldwin, Richard Blumenthal, Kirsten Gillibrand, Mark Kelly, Ben Ray Luján, Brian Schatz, and Adam Schiff. This move occurred just a day after xAI’s owner, Elon Musk, stated he was not aware of any naked underage images generated by Grok. Shortly after, California’s attorney general opened an investigation into xAI’s chatbot, following mounting pressure from governments worldwide concerned about the lack of guardrails.
xAI has maintained that it takes action to remove illegal content on X, including child sexual abuse material and non-consensual nudity. However, neither the company nor Musk has addressed why Grok was initially allowed to generate such edits.
The issue extends beyond non-consensual manipulated sexual imagery. While not all AI image services permit users to “undress” people, many easily generate deepfakes. For example, OpenAI’s Sora 2 reportedly allowed users to generate explicit videos featuring children. Google’s Nano Banana seemingly generated an image showing a public figure being shot, and racist videos made with Google’s AI video model have garnered millions of views on social media.
The problem grows more complex with Chinese image and video generators. Many Chinese tech companies and apps, especially those linked to ByteDance, offer easy ways to edit faces, voices, and videos, and these outputs spread to Western platforms. China imposes synthetic-content labeling requirements stronger than any that exist at the federal level in the U.S., which instead relies on fragmented and dubiously enforced platform policies.
U.S. lawmakers have passed some legislation, like the Take It Down Act, which criminalizes the creation and dissemination of non-consensual, sexualized imagery. However, provisions in the law make it difficult to hold image-generating platforms accountable, as scrutiny focuses mostly on individual users.
Meanwhile, states are taking action. New York Governor Kathy Hochul recently proposed laws that would require AI-generated content to be labeled and would ban non-consensual deepfakes in specified periods leading up to elections.