Wednesday, February 11, 2026

36-hr takedown window cut to 3 hrs: Centre notifies IT Rules amendment to tighten noose on deepfakes

Amendments force intermediaries to swiftly act on harmful synthetic media while narrowing obligations to content likely to mislead users. The changes will come into force on 20 February.


New Delhi: Social media platforms will now be required to remove or disable access to objectionable content within three hours of receiving a government or court order, under amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, notified by the Centre on Tuesday.

The revised timelines mark a sharp reduction from the 36-hour window proposed in the draft amendments and provided under the earlier rules, and are among the most consequential changes in the notification, which will come into force on 20 February.

The amendments also shorten grievance redressal timelines for users, requiring faster acknowledgment and action by platforms. The notification was issued by the Ministry of Electronics and Information Technology under Section 87 of the Information Technology Act, 2000.

Alongside the tighter timelines, the rules bring deepfakes and other forms of synthetically generated content within a defined regulatory framework.

“Synthetically generated information” has been defined as audio, visual or audio-visual content that is “artificially or algorithmically created, generated, modified or altered using a computer resource” in a manner that appears real and depicts a person or event in a way likely to be perceived as authentic.

Routine editing, accessibility tools and the preparation of documents or educational material have been excluded, so long as they do not result in a false electronic record.

The notified rules follow a draft released by the Ministry of Electronics and Information Technology for public consultation in October 2025, after a series of incidents involving AI-generated videos and audio clips that falsely depicted individuals, including public figures.

India has seen a rise in cases involving non-consensual deepfake pornography, impersonation scams and misleading clips circulating during elections and periods of social tension, prompting concerns over misinformation, reputational harm and public order.

Compared to the draft, the final rules narrow the scope of content that must be flagged. While the consultation version had defined synthetically generated information broadly as any content “artificially or algorithmically created, generated, modified or altered”, the notified rules place greater emphasis on content that misrepresents a person or event “in a manner that is likely to deceive”.

At the same time, the amendments significantly tighten compliance timelines: intermediaries must now remove or disable access to content within three hours of a government or court order, down from 36 hours, and grievance redressal timelines for users have also been shortened.

The rules require intermediaries that enable the creation or dissemination of synthetic content to deploy technical measures to prevent unlawful material, including child sexual abuse content, non-consensual intimate imagery, false documents or electronic records, material related to explosives or arms, and content that falsely depicts individuals or real-world events.

Where synthetic content is otherwise lawful, platforms must ensure it is clearly labelled and embedded with permanent metadata or other provenance markers, including a unique identifier. Intermediaries are barred from allowing users to remove or suppress such labels.

For significant social media intermediaries, the rules mandate that users declare whether content is synthetically generated, that platforms verify the accuracy of such declarations through technical measures, and that confirmed synthetic content is prominently labelled before publication.

The notification states that where an intermediary “knowingly permitted, promoted, or failed to act upon” such content in violation of the rules, it will be deemed to have failed to exercise due diligence.

Platforms are also required to periodically inform users of the consequences of violations, including account suspension, removal of content and liability under law. Users must be warned that misuse of synthetic content “may attract penalty or punishment” under statutes such as the Bharatiya Nyaya Sanhita, the Protection of Children from Sexual Offences Act and the Representation of the People Act.

The amendments, issued under Section 87 of the Information Technology Act, 2000, also replace references to the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023.

(Edited by Viny Mishra)




 
