Musk denies awareness of underage sexual images from Grok as California AG launches probe

Elon Musk stated on Wednesday that he is not aware of any naked underage images generated by his company’s AI chatbot, Grok. This denial came just hours before the California attorney general opened an investigation into xAI over the proliferation of nonconsensual sexually explicit material created by the tool.

Pressure is mounting on Musk and his companies from governments worldwide, including the United Kingdom, the European Union, Malaysia, and Indonesia. This follows reports that users on X, the social media platform also owned by Musk, have been prompting Grok to turn photos of real women and children into sexualized images without their consent.

According to estimates from the AI detection platform Copyleaks, roughly one such image was being posted on X every minute. A separate sample of data gathered in early January found that approximately 6,700 images were generated per hour over a 24-hour period.

California Attorney General Rob Bonta said the material has been used to harass people across the internet and urged xAI to take immediate action. His office will investigate whether xAI violated state law. Several laws protect victims of nonconsensual sexual imagery and child sexual abuse material. Last year, the federal Take It Down Act was signed into law, criminalizing the knowing distribution of nonconsensual intimate images and requiring platforms to remove such content within 48 hours. California also enacted its own set of laws in 2024 to crack down on sexually explicit deepfakes.

The trend of using Grok to create sexualized imagery appears to have begun near the end of the previous year. It reportedly started after some adult-content creators used the tool to generate sexualized imagery of themselves for marketing, which then led to widespread misuse. In several public cases, Grok was used to alter real photos of women, including well-known figures like actress Millie Bobby Brown, by changing clothing or physical features in overtly sexual ways.

In response to the controversy, xAI has reportedly begun implementing safeguards. Grok now requires a premium subscription to fulfill certain image-generation requests, and even then may not generate the requested image or may produce a more generic version. According to April Kozen of Copyleaks, Grok appears more permissive with adult content creators, and the company’s overall approach suggests it is experimenting with multiple mechanisms to control problematic image generation, though inconsistencies remain.

Neither xAI nor Musk has directly addressed the core problem. A few days after the reports emerged, Musk appeared to make light of the issue by asking Grok to generate an image of himself in a bikini. In a statement, X’s safety account said the company takes action against illegal content, including child sexual abuse material, but did not specifically address Grok’s role in creating sexualized manipulated imagery of women.

In his recent post, Musk narrowly focused on the absence of naked underage images, stating he was aware of “literally zero.” Legal expert Michael Goodyear suggested Musk likely focused on child sexual abuse material because the penalties for it are greater than for nonconsensual adult imagery. Musk’s post characterized the incidents as uncommon, attributing them to user requests or adversarial hacking, and presented them as technical bugs to be fixed, without acknowledging potential shortcomings in Grok’s safety design.

The California attorney general is not the only regulator taking action. Indonesia and Malaysia have both temporarily blocked access to Grok. India has demanded that X make immediate technical changes to the chatbot. The European Commission has ordered xAI to retain all documents related to Grok, a precursor to a potential investigation. The United Kingdom’s online safety watchdog, Ofcom, has also opened a formal investigation under the U.K. Online Safety Act.

This is not the first time Grok has faced criticism for sexualized imagery. As noted by Attorney General Bonta, Grok includes a “spicy mode” designed to generate explicit content. An update in October made it easier to bypass safety guidelines, resulting in users creating hardcore pornography and graphic violent sexual images. Many of these images have depicted AI-generated people rather than real individuals, which some may consider less harmful than manipulated images of real people.

Copyleaks co-founder Alon Yamin emphasized the immediate and personal impact when AI systems allow the manipulation of real people’s images without consent. He stated that with the rapid rise in AI capabilities for manipulated media, detection and governance tools are needed now more than ever to help prevent misuse.