The Union Government took an important step toward enacting what could become India’s first substantive Artificial Intelligence regulation on 22 October, targeting social media companies and their users. The Ministry of Electronics and Information Technology has issued draft amendments to the Information Technology (IT) Rules, 2021 to govern the circulation of AI-generated or synthetic content on social media.
Generative AI equips anyone to produce text, images, music, or video instantly. Globally, governments are exploring ways to mitigate the associated risks, such as the spread of disinformation. Despite all the AI optimism in India, exemplified by the upcoming AI Impact Summit 2026, these rules seem to follow the hawkish digital governance models of China and the European Union (EU). China recently introduced AI labelling rules to distinguish AI-generated content from human-generated content. The EU requires deepfakes to be marked or labelled.
Broadly, the proposed rules mandate that social media companies and services which enable the creation or modification of AI content label “synthetically generated information”. New obligations are envisaged under the existing due diligence requirements for such entities under the IT Rules. They include: (a) labelling and embedding metadata so that users can identify synthetic content; (b) superimposing labels covering 10 per cent of the visible content; and (c) ensuring that the largest social media companies carry and verify user declarations on any sharing of synthetic content.
Rewriting legal liability
Notably, social media companies will also be allowed to take down content based on user complaints, without the need for a court or government order. India’s IT Act provides social media companies statutory immunity from liability for user-generated content, a legal protection commonly referred to in internet law as ‘safe harbour’. This immunity allows social media companies to innovate and grow without being on the hook for the diversity and scale of content they host. In the absence of this protection, social media would not be the participatory space its users value.
The draft rules diminish India’s safe harbour regime in two ways. First, social media companies are allowed to step out of their role as passive conduits of information, since they can take down content simply on the basis of user complaints. Second, bigger players will need to constantly monitor content on their services, because “awareness” of the existence of synthetic content, whether detected voluntarily or through complaints, is the trigger for losing legal immunity. Operating together, these two constructs may push social media companies to err on the side of caution and purge content, whatever its merits, rather than face enforcement heat.
These provisions dilute the “actual knowledge” standard that was laid down by the Supreme Court in the landmark case of Shreya Singhal v. Union of India, which shifted the burden of determining illegality of content from intermediaries like social media companies to the courts and the State. By reversing this idea, the rules place the onus back on such companies, forcing them to become arbiters of online speech.
This is also not an outcome our parliamentary democracy favours, given the recent uptick in bipartisan concern over the powers already in the hands of social media companies. This year, two parliamentary standing committees have both recommended greater accountability from social media companies, not wider powers to moderate content. Policy proposals to enhance online safety should be tethered to the idea of safe harbour, not left to the discretion of social media companies.
Alternative approaches to labelling
The labelling mandate might also require fine-tuning. Labelling helps users identify AI-generated material. Yet, experts note that labelling content at scale is challenging, and social media companies will struggle if such a flagging mandate is imposed. These companies will inevitably miss some synthetic content, which may then lend unlabelled synthetic content an unearned air of credibility. Human-produced posts may also be mislabelled, which could undermine trust in legitimate information.
Any policy intervention to address harms emanating from synthetic content should focus on transparency through standardisation of labels and user literacy. For instance, the Coalition for Content Provenance and Authenticity (C2PA) provides open technical standards that let users establish the provenance of posted content, as the simplified sketch below illustrates. Nudging wider adoption of such industry standards may ensure relative consistency in labels and flexibility in experimentation.
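To make the idea of provenance metadata concrete, here is a minimal Python sketch of how a provenance claim can be cryptographically bound to a piece of content. This is a simplified illustration, not the C2PA specification: the field names, the generator label, and the shared-secret signing key are all hypothetical stand-ins for what real standards define far more rigorously with certificates and detached signatures.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key for illustration only; real provenance standards
# such as C2PA rely on X.509 certificates, not a shared secret.
SIGNING_KEY = b"demo-key"

def create_manifest(media_bytes: bytes, generator: str) -> dict:
    """Build a simplified provenance manifest for a piece of content."""
    claim = {
        # Binds the manifest to these exact bytes; any edit breaks the link.
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,  # e.g. the AI tool that produced the content
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the content is unmodified and the manifest is authentic."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claim["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )

image = b"...synthetic image bytes..."
manifest = create_manifest(image, generator="hypothetical-image-model")
print(verify_manifest(image, manifest))            # True: content intact
print(verify_manifest(image + b"edit", manifest))  # False: content altered
```

The design point is that the label travels with a verifiable claim about the content’s origin, so anyone can check it, rather than every platform relying on its own detection heuristics.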
It’s also axiomatic that as our information space becomes more artificial, users will have to be more vigilant and aware. Studies show that rules mandating AI labels may enhance transparency, but they don’t significantly change the persuasiveness of the content itself. This is where the complementary safeguard of media literacy for users also becomes imperative.
The challenge with labelling synthetic content is ultimately human. As with energy labels on electronic appliances, it can take decades for consumers to adapt to such labels and for the labels to influence their decision-making. We cannot wait for users to get accustomed to such labels, given social media’s impact on society. We need a smarter strategy, one that rests on a foundation of strong collaboration between the State and companies, tethered to time-tested first principles and supported by shared resources.
Srishti Joshi is Manager, Koan Advisory Group. Views are personal.
This article is part of ThePrint-Koan Advisory series that analyses emerging policies, laws and regulations in India’s technology sector. Read all the articles here.
(Edited by Ratan Priya)

