For more than two years, an app called ClothOff has been terrorizing young women online, and it has proved maddeningly difficult to stop. The app has been removed from both major app stores and banned from most social platforms, but it remains available on the web and through a Telegram bot. In October, a clinic at Yale Law School filed a lawsuit seeking to shut down the app entirely, forcing its owners to delete all images and cease operation. Simply finding the defendants, however, has been a challenge. The app is incorporated in the British Virgin Islands, but the lawsuit's co-lead counsel, Professor John Langford, says the team believes it is run by a brother and sister in Belarus and may be part of a larger global network.
The ClothOff saga offers a bitter lesson in the wake of the recent flood of non-consensual pornography generated by Elon Musk's xAI, many of whose victims were underage. Child sexual abuse material is the most legally toxic content on the internet: it is illegal to produce, transmit, or store, and every major cloud service scans for it regularly. Yet despite those intense legal prohibitions, there are still few ways to deal with image generators like ClothOff, as Langford's case demonstrates. Individual users can be prosecuted, but platforms like ClothOff and Grok are far more difficult to police, leaving victims with few options for seeking justice in court.
The clinic's complaint paints an alarming picture. The plaintiff is an anonymous high school student in New Jersey whose classmates used ClothOff to alter her Instagram photos. She was fourteen years old when the original photos were taken, which means the AI-modified versions are legally classified as child sexual abuse material. Even though the modified images are straightforwardly illegal, local authorities declined to prosecute, citing the difficulty of obtaining evidence from suspects' devices. According to the complaint, neither the school nor law enforcement ever established how widely the material depicting the victim and other girls was distributed.
The court case has moved slowly. The complaint was filed in October, and in the months since, Langford and his colleagues have been working to serve the defendants, a difficult task given the enterprise's global reach. Once the defendants have been served, the clinic can push for a court appearance and a judgment, but in the meantime the legal system has offered little comfort to ClothOff's victims.
The Grok case might seem like a simpler problem to fix. Elon Musk's xAI is not hiding, and there is plenty of money at stake for lawyers who can win a claim. But Grok is a general-purpose tool, which makes it much harder to hold accountable in court. Langford notes that ClothOff is designed and marketed specifically as a deepfake pornography generator, whereas suing a general system that users can query for all sorts of things is far more complicated.
A number of US laws already ban deepfake pornography. But while specific users are clearly breaking those laws, it is much harder to hold an entire platform accountable. Existing laws require clear evidence of an intent to harm, meaning proof that xAI knew its tool would be used to produce non-consensual pornography. Without that evidence, xAI's First Amendment rights would provide significant legal protection. Langford states that child sexual abuse material is not protected expression, so designing a system to create that content falls outside First Amendment protection. A general system that users can query for many things, however, exists in a grayer area.
The easiest way to surmount those obstacles would be to show that xAI willfully ignored the problem, a real possibility given reporting that Musk directed employees to loosen Grok's safeguards. Even then, it would be a far riskier case. Langford says reasonable people can ask why more stringent controls were not in place to prevent this, suggesting a kind of recklessness, but that remains a more complicated legal argument.
These First Amendment issues are why xAI's biggest pushback has come from legal systems without robust free-speech protections. Both Indonesia and Malaysia have taken steps to block access to the Grok chatbot, while regulators in the United Kingdom have opened an investigation that could lead to a similar ban. The European Commission, France, Ireland, India, and Brazil have taken preliminary steps of their own. In contrast, no US regulatory agency has issued an official response.
It is impossible to say how the investigations will be resolved, but at the very least, the flood of imagery raises many questions for regulators, and the answers could be damning. Langford concludes that anyone posting or distributing child sexual abuse material is violating criminal prohibitions and can be held accountable. The hard question is what the platform knew, what it did or did not do, and what it is doing now in response.