Global policymakers don’t understand AI enough to regulate it. Tech companies must step up now

When software is built to prioritise speed over safety, its creators delay dealing with possible negative consequences.

On 11 April 2023, China released a comprehensive draft of measures to regulate generative Artificial Intelligence (AI), software that can turn basic user inputs into creative outputs such as text, images or videos. Generative AI has dominated news cycles because of its far-reaching consequences: proponents argue that it can bring industry-altering efficiencies, while sceptics worry about its capacity to cause job losses, spread misinformation and violate copyright. The Chinese draft measures follow a recent public call to pause research into generative AI while regulators get enough “time to catch up”.

There is no international consensus yet on how generative AI should be regulated. Several multilateral organisations, such as the International Telecommunication Union and the Organisation for Economic Co-operation and Development, have released non-binding AI guidelines. Jurisdictions like Italy and Spain are investigating the pitfalls of generative AI, and the US National Institute of Standards and Technology has released a voluntary AI Risk Management Framework. China intends to hold AI service providers responsible for machine-generated content, as well as for filtering inappropriate content, auditing user prompts, verifying user identities, and preventing algorithmic discrimination and the generation of “fake” news.

During the recent Budget Session of Parliament, Ashwini Vaishnaw, Minister of Electronics and Information Technology, said that the government has no plans to regulate the growth of AI. Even if India chooses not to restrict generative AI companies, regulations in other countries could still affect Indian users, given the interconnected nature of the digital world. Policymakers worldwide are struggling to craft and administer such rules, for several reasons.


Rule-makers struggle to play catch-up

First, policymakers find it difficult to ringfence generative AI, whose applications range from low-risk, such as single-purpose commercial chatbots, to high-risk, such as deepfake generators that spread misinformation. Generative AI is a general-purpose technology: connected to the internet, it can be put to almost any use, which makes it hard to draw regulatory boundaries around it.

Second, it is axiomatic that rulemaking plays catch-up with emerging technologies. Consider the European Union’s Artificial Intelligence Act, which had almost been finalised by December 2022. It is now being re-drafted and re-negotiated to bring within its scope “high-risk” applications of generative AI, meaning those that could adversely affect a person’s health, safety or fundamental rights. Such applications would require high levels of transparency, safety and human oversight.

Finally, policymakers often lack the technical expertise to understand how the rules they make would affect the software or its users. Experts have recently raised concerns about the inability of American lawmakers to understand AI well enough to regulate it. Around 80 per cent of Indian MPs list either agriculture or political and social work as their area of expertise. It is safe to assume that most elected representatives struggle to understand the complex scientific principles that drive developments like generative AI.


Don’t rely solely on rules

In such a scenario, rules and regulations alone cannot protect consumers against the dangers posed by technology. Companies must build thorough risk assessments and ethics reviews into their software development lifecycles.

Lawrence Lessig, the eminent American legal scholar, posited four modalities of regulation that influence people’s behaviour: law, architecture, norms and the market. In his words, “code is law”; that is, how generative AI is built will determine the rules of its interaction and governance in the future.

Companies must take generative AI’s ethical and social implications more seriously. When software is built to prioritise speed over safety, its creators defer dealing with possible negative consequences. Leaked details from Microsoft reveal a desire to rush generative AI tools to market, with limited concern for what the software will be used for. At a time when unforeseen applications of AI are emerging, technology companies are scaling back ethics review teams as part of cost-cutting measures; this cannot continue.


How companies can determine AI safety

It is imperative that tech companies invest in oversight committees that include ethicists who can help explore the impact of new products before their release. There is precedent for such a step: research into human gene editing through the Clustered Regularly Interspaced Short Palindromic Repeats technique (popularly known as CRISPR) has been voluntarily curtailed by scientific teams around the world, given its loaded ethical implications. A widely circulated open letter advocates a pause in the development of generative AI until it can be determined whether such software is safe and its risks manageable. This call for a moratorium may reflect the caution practitioners wish to exercise.

However, such a pause is impossible to enforce effectively and would be a waste of time unless there is a clear plan for how it would be used. Companies should instead adopt more deliberate, continuous approaches. For example, they can introduce a technical sandbox into their development cycle, in which contentious features are tested incrementally before wider release.
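As a minimal sketch of what such a sandbox gate could look like, consider the Python snippet below. The stage names, exposure thresholds and the FeatureGate class are hypothetical illustrations, not any company’s actual tooling; the point is simply that a contentious feature cannot reach a wider cohort of users without a recorded review sign-off.

```python
import hashlib

# Hypothetical rollout stages for a contentious generative feature;
# the names and exposure thresholds are illustrative only.
STAGES = {
    "internal": 0.00,  # no external exposure
    "sandbox": 0.01,   # ~1% of opted-in users, outputs logged for review
    "limited": 0.10,   # ~10% of users, automated content filters required
    "general": 1.00,   # full availability after ethics sign-off
}

def _bucket(feature: str, user_id: int) -> float:
    # Stable hash so a given user stays in the same cohort across runs.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF

class FeatureGate:
    """Gate a generative feature behind an incremental, reviewable rollout."""

    def __init__(self, feature: str, stage: str = "internal"):
        self.feature = feature
        self.stage = stage

    def is_enabled(self, user_id: int) -> bool:
        # The feature is visible only to the fraction of users
        # permitted at the current stage.
        return _bucket(self.feature, user_id) < STAGES[self.stage]

    def promote(self, next_stage: str, review_passed: bool) -> None:
        # Wider exposure requires an explicit human risk/ethics sign-off.
        if not review_passed:
            raise RuntimeError(f"{self.feature}: review failed; staying at '{self.stage}'")
        self.stage = next_stage

gate = FeatureGate("image_generation")       # hypothetical feature name
gate.promote("sandbox", review_passed=True)  # ethics review signed off
print(gate.is_enabled(user_id=42))           # True for roughly 1% of users
```

Deterministic bucketing keeps a given user in the same cohort across releases, which keeps the incremental test clean; the review check in promote() is where an oversight committee’s decision would plug in.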

China’s draft measures would make technology companies responsible for the outputs of generative AI systems. As policymakers around the world draft laws that balance the need to mitigate risk against the need to promote innovation, software developers will have to take on a bigger role in promoting the safe use of their products, based on detailed and public assessments of potential harms. Indian companies should invest in coalition-building and global information exchange to reach consensus on the minimum safety standards that all public-facing generative AI must meet.

The author works at Koan Advisory Group, a technology policy consulting firm in New Delhi. Views are personal.

This article is part of ThePrint-Koan Advisory series that analyses emerging policies, laws and regulations in India’s technology sector. Read all the articles here.

(Edited by Zoya Bhatti)
