India has ordered social media platforms to increase their policing of deepfakes and other AI-generated impersonations. The government has also sharply shortened the time these companies have to comply with official takedown orders. This move could reshape how global tech firms moderate content in one of the world’s largest and fastest-growing internet markets.
The changes were published on Tuesday as amendments to India’s 2021 IT Rules. They bring deepfakes under a formal regulatory framework, mandating the labeling and traceability of synthetic audio and visual content. The rules slash compliance timelines for platforms, introducing a three-hour deadline for official takedown orders and a two-hour window for certain urgent user complaints.
India’s importance as a digital market amplifies the impact of these new rules. With over a billion internet users and a predominantly young population, the South Asian nation is a critical market for platforms like Meta and YouTube. This makes it likely that compliance measures adopted in India will influence global product and moderation practices.
Under the amended rules, social media platforms that allow users to upload or share audio-visual content must require users to disclose whether material is synthetically generated. Platforms must deploy tools to verify those claims and ensure that deepfakes are clearly labeled and embedded with traceable metadata. Certain categories of synthetic content, including deceptive impersonations, non-consensual intimate imagery, and material linked to serious crimes, are barred outright.
Non-compliance, particularly in cases flagged by authorities or users, can expose companies to greater legal liability by jeopardizing their safe-harbor protections under Indian law. The rules lean heavily on automated systems to meet these obligations, expecting platforms to deploy technical tools to verify disclosures, identify deepfakes, and prevent the sharing of prohibited content.
Policy experts note that the rules mark a more calibrated approach to regulating AI-generated deepfakes. However, the significantly compressed grievance timelines will materially raise compliance burdens. According to some legal analysts, requiring intermediaries to remove content within three hours of receiving notice departs from established free-speech principles.
Digital advocacy groups have raised concerns that the rules risk accelerating censorship: the drastically compressed takedown timelines leave little scope for human review and push platforms toward automated over-removal, potentially undermining free-speech protections and due process.
Industry sources indicate the amendments followed a limited consultation process, with only a narrow set of suggestions reflected in the final rules. Given the scale of the changes between the draft and final versions, another round of consultation was warranted to give companies clearer guidance, the sources said.
Government takedown powers have already been a point of contention in India. Social media platforms and civil-society groups have long criticized the breadth and opacity of content removal orders. The latest changes come just months after the Indian government reduced the number of officials authorized to order content removals from the internet in response to a legal challenge.
The amended rules will come into effect on February 20, giving platforms little time to adjust their compliance systems. The rollout coincides with India’s hosting of the AI Impact Summit in New Delhi from February 16 to 20, which is expected to draw senior global technology executives and policymakers to the country.