India Mandates Labels for AI-Generated Content, Cuts Takedown Time to Just Hours
In a decisive move to strengthen digital accountability and curb the misuse of artificial intelligence, the Government of India has introduced sweeping amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The updated framework, officially notified by the Ministry of Electronics and Information Technology (MeitY), will take effect on February 20, 2026, marking a significant shift in how online platforms manage synthetic and potentially harmful content.
The new regulations require digital platforms to clearly label AI-generated material while drastically shortening the timeline for removing unlawful posts, from as long as 36 hours to as little as two hours in the most serious cases.
Mandatory Labelling of Synthetic Media
At the core of the amendments is the mandatory disclosure of AI-generated or “synthetic” content, including deepfake videos, altered visuals, and artificially generated audio. Platforms must ensure that such material carries a clear and prominent label so users can immediately identify it as artificially created.
The government has formally defined synthetically generated information as audio, visual, or audio-visual content that is artificially created, modified, or altered using computer resources in a way that makes it appear authentic or real.
However, not every digitally modified file falls under this category. Routine editing—such as colour correction, compression, translation, or technical formatting—has been exempted, provided it does not distort the original meaning.
To further enhance transparency, platforms must embed permanent metadata or technical identifiers that help trace the origin of synthetic content. Once applied, these labels cannot be hidden, altered, or removed.
Platforms Face Greater Responsibility
The amendments place substantial compliance obligations on social media companies such as YouTube, Instagram, and Facebook. Before allowing content to go live, platforms must ask users to declare whether their post is AI-generated and deploy automated tools to verify those declarations.
If content is confirmed as synthetic, the platform must ensure the disclosure is prominently displayed. Failure to exercise due diligence could expose intermediaries to liability under the revised framework.
Additionally, companies are expected to implement “reasonable” technical safeguards to prevent the misuse of their tools for creating or spreading deceptive, illegal, or sexually exploitative AI material.
Drastically Reduced Takedown Timelines
One of the most striking features of the new rules is the sharply reduced response window for removing unlawful content. Platforms must now comply with government or court orders within three hours, while non-consensual intimate imagery must be taken down within two hours.
This acceleration is intended to curb the rapid spread of misinformation and harmful deepfakes, thereby improving user protection in India’s expanding digital ecosystem.
The government has also shortened grievance redressal timelines, cutting response periods from 15 days to seven days and reducing other moderation deadlines as well.
Targeting Deepfakes and Online Harm
The amended rules explicitly prohibit AI-generated material linked to child sexual abuse, impersonation, false documents, obscene content, or misleading depictions of real individuals or events.
Violations may trigger immediate removal of content, suspension of user accounts, disclosure of user identity to victims, and mandatory reporting to law enforcement agencies under applicable criminal statutes.
By treating synthetic content as “information,” the government has brought AI-generated media squarely within the scope of unlawful activity provisions under the IT Rules.
Safe Harbour at Stake
The concept of “safe harbour”—which protects platforms from liability for user-generated content—remains available only if intermediaries comply with the new requirements. If a company is found to have knowingly allowed rule violations, it may be deemed to have failed its due diligence obligations.
Legal experts suggest the amendments significantly raise the compliance bar for large platforms and could even lead to over-removal of content as companies attempt to avoid regulatory risk.
A Calibrated Yet Firm Regulatory Approach
Observers note that while the government has tightened accountability, it has also responded to industry concerns by narrowing the definition of synthetic content and dropping earlier proposals such as mandatory large watermarks.
Overall, the reforms reflect India’s broader effort to counter misinformation, deepfakes, and online abuse while positioning the country as a serious participant in global AI governance.
Conclusion
India’s latest IT rule amendments represent one of the country’s most assertive regulatory steps in the age of generative AI. By mandating transparent labelling, enforcing rapid takedowns, and strengthening platform accountability, the government aims to create a safer digital environment without stifling legitimate technological innovation.