For the past two weeks, the social media platform X has been flooded with AI-manipulated nude images created by the Grok AI chatbot. An alarming range of women have been affected by these non-consensual images, including prominent models, actresses, news figures, crime victims, and even world leaders.
A December 31 research paper from Copyleaks estimated that roughly one such image was being posted every minute, but later tests found far more: a sample gathered over a 24-hour period on January 5-6 found 6,700 images per hour.
While public figures from around the world have decried the choice to release the model without safeguards, regulators have few clear mechanisms for reining in the image-manipulating system. The result is a painful lesson in the limits of tech regulation and a forward-looking challenge for regulators hoping to make a mark.
Unsurprisingly, the most aggressive action has come from the European Commission, which on Thursday ordered xAI to retain all documents related to its Grok chatbot, a common precursor to a formal investigation. The order is particularly significant given recent reporting suggesting that Elon Musk may have personally intervened to prevent safeguards on the kinds of images Grok could generate.
It is unclear whether X has made any technical changes to the Grok model, although the public media tab for Grok’s X account has been removed. In a statement, the company specifically denounced the use of AI tools to produce child sexual imagery, saying that anyone using Grok to make illegal content would face consequences.
In the meantime, regulators around the world have issued stern warnings. The United Kingdom’s Ofcom stated it was in touch with xAI and would undertake a swift assessment. U.K. Prime Minister Keir Starmer called the phenomenon disgraceful and disgusting, offering Ofcom full support to take action.
The Australian eSafety Commissioner noted that complaints to her office related to Grok had doubled since late 2025, and said the office would use its regulatory tools to investigate and take appropriate action.
By far the largest market to threaten action is India, where Grok was the subject of a formal complaint from a member of Parliament. India’s communications regulator ordered X to address the issue and submit a report. X has submitted the report, but it is unclear whether the regulator will be satisfied; if not, X could lose its safe harbor status in India, a potentially serious restriction on its ability to operate there.