New Delhi: In a closed-door meeting that barely lasted half an hour, the Ministry of Electronics and Information Technology (MeitY) firmly refused to tweak its newly amended IT Rules as tech platforms including giants Google and Meta expressed concern about their feasibility, especially the 3-hour takedown mandate for unlawful content.
According to multiple people aware of the discussions, MeitY Secretary S. Krishnan is learnt to have left within roughly 20 minutes after reiterating that the government would not reconsider the notified provisions.
ThePrint has learnt that some participants cautioned that the framework, if applied aggressively, might have implications for freedom of speech. Government officials pushed back strongly, according to people present, responding that "private companies must not moral-police the government on this", and adding that safeguarding users from harmful synthetic content is a legitimate regulatory objective within India's constitutional framework.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, notified on 10 February and brought into force from 20 February, introduce a formal definition of “synthetically generated information” (SGI) and impose enhanced due diligence obligations on intermediaries. These include mandatory labelling of AI-generated or synthetic content.
Under the new framework, platforms must also take down unlawful or government-flagged content within three hours of receiving notice—a significant reduction from the earlier 36-hour window. In cases involving non-consensual intimate imagery or certain deepfake content, the deadline may be shortened to two hours.
After Krishnan left the meeting, MeitY Joint Secretary Ajit Kumar gave a brief presentation on the Sahyog portal—a government platform designed to streamline the issuance of takedown notices to intermediaries under Section 79(3)(b) of the IT Act, 2000. No further discussions took place on the rules after Krishnan’s departure.
ThePrint has reached out to Google and Meta representatives for comment via email. This report will be updated if and when a response is received.
Industry flags compliance, free speech concerns
Wednesday's closed-door meeting was attended by major tech platforms and industry associations. Representatives from companies including Meta, Google, and ShareChat argued that the compressed timelines are operationally challenging, particularly where contextual analysis, human review, or legal vetting is required.
Executives also expressed concern about the limited gap between notification and enforcement, stating that companies were given little time to recalibrate moderation systems, build automated labelling mechanisms, or restructure compliance workflows.
In addition to operational objections, companies raised concerns about freedom of expression. They argued that the broad definitions of SGI combined with stringent takedown timelines could lead to over-moderation, potentially affecting satire, parody, journalistic reporting and political commentary.
ShareChat, an Indian social media company, raised concerns regarding the watermarking of AI-generated content and sought clarification on whether the same compliance requirements would apply to cross-posted content across platforms. In response, officials clarified that the obligation remains straightforward: “If a takedown order has been given, it is simple—one must take it down.”
What govt said
Ministry officials rejected arguments that the timelines are impractical, it is learnt, telling the companies that global social media intermediaries already comply with similar rapid takedown requirements in other jurisdictions and therefore cannot claim that such standards are unworkable in India.
Referring to the recent controversy surrounding Grok, a senior functionary is learnt to have said that "whatever has happened in the past few weeks cannot happen again".
Participants were also informed that the Prime Minister's Office (PMO) has taken cognisance of the matter, people in the know said, indicating that the issue has received attention at the highest levels and that policy flexibility is limited.
Core framework unchanged, officials assert
Another person aware of the discussions said MeitY officials also rejected the argument that companies were given too little time to comply. They asserted that the draft amendments had been placed in the public domain in October 2025 and that the final rules have not materially changed in substance.
Given that the core framework—including labelling obligations and enhanced due diligence around synthetically altered content—remained largely intact, companies had sufficient time to begin building compliance systems, officials argued, adding that the 20 February enforcement deadline therefore cannot be cited as unreasonable.
The October draft rules adopted a broader definition of synthetically generated information (SGI), potentially covering a wider range of AI outputs, and proposed rigid visible watermarking requirements, including fixed display thresholds. The final rules notified on 10 February narrowed SGI largely to audio-visual content, dropped prescriptive watermark size mandates in favour of flexible but prominent labelling, and introduced stricter, time-bound takedown obligations.
What industry bodies said
Industry associations including the Internet and Mobile Association of India (IAMAI), US-India Strategic Partnership Forum (USISPF), and the Broadband India Forum (BIF) also made submissions.
While these bodies broadly supported the need to regulate deepfakes and AI misuse, they echoed concerns over definitional clarity and implementation timelines. However, ThePrint has learnt that BIF—whose patron members include Meta and Google—did not actively argue against or strongly contest the provisions during the meeting.
By the conclusion of the meeting, MeitY’s message was unequivocal—no amendments will be made and no extension of compliance timelines is under consideration.
(Edited by Gitanjali Das)