
AI regulation gets trickier with Grok. India needs adaptive, not reactionary policies

India stands at a crossroads in shaping the legal and ethical future of generative AI.


Generative AI chatbot Grok’s responses to user queries about Indian political leaders kicked up a storm in the country last month. Grok is a product of xAI, Elon Musk’s firm that is now the parent company of X. The Ministry of Electronics and Information Technology has said it is in talks with X, the platform into which Grok is integrated, over how the chatbot generates replies and the data used to train it.

Prior to the advent of generative AI systems like Grok, there were two kinds of content systems online – publishers and intermediaries. Publishers created their own content and were therefore liable if it was found unlawful. Intermediaries, on the other hand, did not create their own content; on intermediary platforms, which include social media services, content is typically created by users. Hence, liability constructs for intermediaries were devised accordingly.

A safe harbour provision was introduced in many parts of the world that shielded online intermediaries from liability, provided they met certain conditions. These included an almost complete absence of involvement in the creation of the content. Later on, the provision of safe harbour was also made contingent on these entities complying with additional obligations such as the creation and communication of rules to users about what kind of content can and cannot be posted on their platforms. 

Who is liable?

Grok presents a departure from the publisher and intermediary paradigm, as both users and the system play a role in the creation of information. But does it raise new questions of liability? 

First, the law is clear on what content can and cannot be allowed. Article 19(2) of the Constitution permits the government to restrict speech on only eight grounds, which include defamation, morality, and the preservation of public order. The Supreme Court made clear last month that content’s potential to offend the majority’s viewpoint does not, by itself, justify restricting such speech. Thus, for any speech to be deemed unlawful by the government, it must touch upon one of the reasonable restrictions on speech under the Constitution. Moreover, a competent court must assess whether this is indeed so.

Second, AI content governance is highly circumstantial and must be considered in a specific context, meaning that a precedent from one case cannot be blanketly applied to a variety of different situations. For instance, if a person writing a historical book on hate speech relies on a generative AI system to help her list different examples, that use is not unlawful. But if another person uses a chatbot to publicly produce and disseminate speech to incite unrest or violence, it would be a different matter.

What matters is the context in which the content is generated, as well as the intent behind it. The takeaway for decision-makers is that one set of rules may not be appropriate for all the different scenarios in which generative AI is used. Matters may have to be considered on a case-by-case basis. 

Third, in the case of Grok, the question is whether the chatbot is required to adhere to the content-related restrictions set out under the Information Technology Rules, 2021. Rule 3(1)(b) requires intermediaries to make reasonable efforts to ensure that certain prohibited content is not shared by their users. Intermediaries are also meant to notify users of their content policies in the users’ preferred language. Now, Grok has been designed by X’s parent company and deployed by X. So, does Grok form a part of X’s intermediary service? If so, is it also obligated to refuse queries that conflict with the content guidelines prescribed in the IT Rules or with its own community guidelines? Another AI service, Perplexity, owned by a different company, has also been deployed on X. Would the liability rules differ in Perplexity’s case?

Fourth, the role of the user has to be factored into content that is created by AI systems like Grok. Many developers of AI systems put guardrails in place to ensure that their systems do not generate certain unlawful information. For instance, ChatGPT does not let you recreate images of celebrities because doing so could violate their personality rights. But sophisticated users can bypass such guardrails to make a generative AI system create content that it is not supposed to, which may be unlawful. Given that such circumvention is difficult to pre-empt and guard against, this scenario may warrant a regulatory concession and the provision of safe harbour.




Not just regulation

India stands at a crossroads in shaping the legal and ethical future of generative AI. While constitutional boundaries around free speech must remain the bedrock of any regulation, the rise of AI demands a more adaptive, context-sensitive approach—one that recognises the shared responsibility between platforms, developers, and users. 

Previous responses to controversies involving AI-generated content have been reactionary and lacked nuance. In 2023, MeitY issued an advisory to intermediaries in response to AI deepfakes circulating on the internet. Among other things, the advisory sought to create a licensing regime for all AI systems in the country. Such a blanket imposition, which would apply even to benign use cases of generative AI such as the customer care bots you interact with on food delivery apps, was uncalled for. Such reactionary policies risk stifling innovation and punishing legitimate use cases. Instead, India must craft a forward-looking legal architecture that balances accountability with innovation, and freedom with responsibility. The challenge is not just to regulate AI, but to do it wisely.

The author is the Director of the Esya Centre, a tech-policy focussed think-tank, and an advisor to Koan. Views are personal.

(Edited by Aamaan Alam Khan)
