scorecardresearch
Sunday, April 28, 2024
Support Our Journalism
HomeOpinionAI governance a delicate dance between innovation & oversight. MeitY’s 2024 advisory...

AI governance a delicate dance between innovation & oversight. MeitY’s 2024 advisory shows why

The 2024 advisory goes well beyond the scope of the IT rules by seemingly creating an AI governance mandate – raising questions about its validity and the future of AI governance in India.

Follow Us :
Text Size:

On 1 March 2024, the Ministry of Electronics and Information Technology issued an advisory regarding the deployment of artificial intelligence systems by online intermediaries in India. An advisory is typically a clarification, and focuses on a particular aspect of regulation as well as a specific context. The advisory comes on the heels of another one issued in December 2023, which targeted deepfakes – AI-generated likenesses that can be used for disinformation campaigns.

Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 (IT Rules), online intermediaries must inform users that they are prohibited from intentionally uploading misinformation. The 2023 advisory called on online intermediaries to make the stipulation about the prohibition on the wilful sharing of false content unambiguous. However, the 2024 advisory goes well beyond the scope of the IT rules by seemingly creating an AI governance mandate – raising questions about its validity and the future of AI governance in India.

Licensing regime for AI systems?

For starters, the 2024 advisory requires government permission before making ‘under-testing’ or ‘unreliable’ artificial intelligence models/large language models, generative AI, software, and algorithms available for public use. The stipulation effectively operates as a licensing regime for AI systems. Several issues emerge here. One, none of the terms denoting AI in the advisory are defined, so it is unclear what they encompass, making it difficult to discern what companies can or cannot do.

Second, neither IT Rules nor the parent legislation, the Information Technology Act 2000, define these terms. Nor do they empower the government to create a licensing scheme for any technology. As such, the validity of such a requirement, and indeed the entire framework, comes into question.

Third, the 2024 advisory extends to online intermediaries – platforms that facilitate transactions or interactions between users. In fact, the Union Minister of State for IT, Rajeev Chandrasekhar, clarified on 4 March that it applies only to significant social media intermediaries – entities with five million or more users – and not start-ups. Ostensibly, this would not include OpenAI’s ChatGPT platform, or any other AI-based text or image generator as they publish content based on user prompts. How, then, is such an advisory expected to be effective, as AI-related harms can be perpetrated by entities of any type or size?

Fourth, the advisory does not provide parameters to establish the reliability of AI systems, nor is there any indication about the government department that would oversee the enforcement of such a programme.


Also read: AI coming into play means we must rethink economics. Old labour market theories won’t work


Difficulty of assessing reliability

The concept of testing AI systems for reliability is nebulous even in jurisdictions where processes have been outlined in dedicated frameworks.

For instance, in 2023, the Joe Biden administration issued an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, where red-teaming was required for certain high-risk generative AI models. The order defines AI red-teaming as a systematic effort to identify weaknesses and vulnerabilities in an AI system, typically conducted in a controlled setting and in collaboration with AI developers.

Andrew Burt, a visiting fellow at Yale Law School’s Information Society Project points out that the provision on red-teaming is welcome as it is key to identifying and managing AI risks. However, he also highlights that there is little clarity on what makes up a red team, how to standardise processes for testing, and how to codify and distribute results once testing ends. Coming back to the Indian context, it is, therefore, hard to see how compliance regarding reliability is expected to play out in the absence of further clarification.

Another stipulation in the advisory is the requirement for intermediaries to ensure that AI systems do not permit bias or discrimination. While the intent behind this is laudable, problems of bias and discrimination are challenging to overcome. Bias can emanate from anywhere in the AI value chain: from how the problem a system aims to solve is framed, to design and deployment. It may even emanate from the language used to train the AI system. For instance, one study by researchers from the Manipal Institute of Technology and the United States’ University of Texas at Austin notes that in most Indic languages, nouns are assigned a specific gender based on the grammatical rules of each language. As a result, the bias evaluation process differs from that used in English.

Further, another study, conducted at the US Cornell University, found that Hindi-to-English machine translation systems frequently yield gender-specific outcomes, even in situations where a gender-neutral result is anticipated. What’s more, some types of biases only become apparent when the AI system is used in a particular context, and this can be hard to foresee. What follows is that the directions in the advisory make it nearly impossible for a significant social media intermediary to release a consumer-facing AI product in India.

Though the advisory calls for compliance with its tenets in 15 days, it may not be legally enforceable given the way it is worded and the scope of the subject matter it deals with. And larger concerns notwithstanding, there are glimmers of more workable solutions in the advisory as well. For instance, it requires intermediaries to notify users about the unreliability of outputs generated by an AI system.

That said, if the advisory is a portent for AI governance, the government may want to opt for a more considered approach down the road. This includes designing a more context-specific framework to address risks presented by different AI models in different situations more directly. It also includes accounting for the limitations of mitigation measures regarding risks like bias and finding a way to balance innovation imperatives against them. Ultimately, the success of the future of AI governance in India hinges on striking a delicate balance between regulation and innovation, ensuring that this burgeoning field advances safely and ethically without stifling creativity and progress.

The author is a consultant on emerging technology for Koan Advisory. Views are personal

This article is part of ThePrint-Koan Advisory series that analyses emerging policies, laws and regulations in India’s technology sector. Read all the articles here.

(Edited by Zoya Bhatti)

Subscribe to our channels on YouTube, Telegram & WhatsApp

Support Our Journalism

India needs fair, non-hyphenated and questioning journalism, packed with on-ground reporting. ThePrint – with exceptional reporters, columnists and editors – is doing just that.

Sustaining this needs support from wonderful readers like you.

Whether you live in India or overseas, you can take a paid subscription by clicking here.

Support Our Journalism

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Most Popular