
SubscriberWrites: Government’s Right to Legislate on Artificial Intelligence

Most of the operations inside an AI machine are not known to its designer, and it is difficult to fathom how it has arrived at a decision or answer, owing to the nature of the model being used.

Thank you dear subscribers, we are overwhelmed with your response.

Your Turn is a unique section from ThePrint featuring points of view from its subscribers. If you are a subscriber and have a point of view, please send it to us. If not, do subscribe here: https://theprint.in/subscribe/

The Ministry of Electronics and Information Technology (MeitY) issued an advisory to AI corporations on 1 March 2024. This created some discontent among AI developers, particularly the technology giants in the industry. There are serious issues with AI if it is placed on platforms without adequate controls.

The recent case of a film actress's image being morphed using AI to create a deepfake has had its repercussions. The Prime Minister has also expressed concern about fake videos doing the rounds on social media. The most serious instance was a loaded query put to Google's chatbot Gemini: 'Is …. a fascist?' The response, typical of an AI machine, was an obviously ambivalent one, with no clear, pointed answer to the question. Such an output has implications for the future of society. It can damage the social fabric, breed fear, lay siege to communal harmony and lay the grounds for insurrection against the State.

Why does AI answer such questions ambivalently? Firstly, most of the operations inside an AI machine are not known to its designer, and it is difficult to fathom how it has arrived at a decision or answer, owing to the nature of the model being used. These are black-box operations, and it is extremely difficult to audit such large machines to know what is wrong with an answer and how the machine came to that conclusion. This is because the large technology giants work on information in the exabyte/zettabyte range, volumes of data that no human brain can crunch. While scientists are working to create white-box operations in AI, large-scale models now train on such enormous amounts of data that it is almost impossible to audit every answer and gain an insight into how the model arrived at it. On the other hand, if freedom is not given to developers, large-scale models cannot be developed. There should not be a fear amongst developers that a wrong answer can lead to arrest or punishment. Google AI's mission statement says it is to organize the world's information and make it universally accessible and useful. It would be for the Government to judge every such ambivalent answer on a case-to-case basis, so long as it does not disturb communal harmony or cause an insurrection.
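As a rough illustration of what 'white-box' inspection looks like at toy scale, here is a minimal sketch that trains a small logistic-regression classifier on a public dataset and reads its learned weights directly. The dataset and model are arbitrary stand-ins chosen for the illustration; the point is simply that this kind of per-weight audit is feasible for tiny models but does not scale to frontier systems trained on exabyte-scale data.

```python
# A minimal, hypothetical sketch of a "white-box" audit at toy scale:
# a small logistic-regression classifier whose learned weights can be read directly.
# Frontier models have billions of parameters trained on exabyte-scale data,
# so this kind of per-answer inspection does not scale to them.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()                      # small, public toy dataset
model = LogisticRegression(max_iter=5000).fit(data.data, data.target)

# Every learned weight is visible: we can see which input features push the
# prediction in which direction, something that is infeasible for a large LLM.
for name, weight in sorted(zip(data.feature_names, model.coef_[0]),
                           key=lambda pair: abs(pair[1]), reverse=True)[:5]:
    print(f"{name:30s} {weight:+.3f}")
```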

A bill introduced in the State of California by Senator Scott Wiener, proposed to be called the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, 2023-24, requires cutting-edge AI developers to develop and adopt safety and security protocols, and to bear responsibility if their systems create risks that critically harm public safety. In a nutshell, developers are to bear legal liability for the risks advanced AI systems pose to society. The bill also mandates cybersecurity measures to prevent theft, malicious use, misappropriation or inadvertent release of model weights (the numerical parameters a model learns during training). Developers must be able to fully switch off a model in an emergency, that is, to build a 'kill switch' into such machines. They would not be allowed to release any model that poses an unreasonable risk, or to use a hazardous capability of a model to cause critical harm. The bill defines critical harm to include the creation or use of chemical, biological, radiological or nuclear weapons, and cyber-attacks causing economic damage of more than US$500 million. This is a significant definition. It also requires AI developers to self-certify that these requirements have been implemented in the AI system.
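The 'kill switch' requirement is easiest to picture as an inference service that refuses to run while an operator-controlled emergency-stop flag is set. The sketch below is purely conceptual; the flag file path and the model-serving function are hypothetical placeholders, not anything prescribed by the bill.

```python
# A purely conceptual sketch of the "kill switch" idea: an inference service that
# checks an externally controllable shutdown flag before serving each request.
# The flag file path and the model-serving function are hypothetical placeholders.
import os
import sys

KILL_SWITCH_FILE = "/etc/ai-service/emergency_stop"   # hypothetical flag location

def generate_reply(prompt: str) -> str:
    # Stand-in for a real model call; a production system would invoke the model here.
    return f"(model output for: {prompt!r})"

def serve(prompt: str) -> str:
    # Refuse to run the model if the operator has activated the emergency stop.
    if os.path.exists(KILL_SWITCH_FILE):
        sys.exit("Emergency stop engaged: model serving halted.")
    return generate_reply(prompt)

if __name__ == "__main__":
    print(serve("Hello"))
```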

The question is how we minimize such risks. One way is to use model cards, first proposed by Google in 2018, to bring transparency and accountability into an AI system. Why is this required? India is a socialistic state, and if an AI system is biased, it could upend the government's social schemes, with certain strata of society at the receiving end through exclusion from those schemes. The Indian Government should mandate the use of model cards in very large AI systems. The advisory by the government was a step in the right direction, and there is a need to bring in legislation to regulate AI.
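For a concrete sense of what a model card contains, here is a minimal sketch that represents one as a plain Python dictionary. The field names and figures are hypothetical, chosen only to show how disaggregated reporting can expose whether a model excludes particular strata of society; Google's actual proposal describes a fuller set of sections covering intended use, training data, evaluation and limitations.

```python
# A minimal, illustrative model card represented as a plain Python dictionary.
# The field names and numbers here are hypothetical, not an official schema;
# real model cards (per Google's 2018 proposal) document intended use,
# training data, evaluation and known limitations.
import json

model_card = {
    "model_details": {
        "name": "welfare-eligibility-screener",   # hypothetical model name
        "version": "0.1",
        "owners": ["Hypothetical Department of Social Welfare"],
    },
    "intended_use": "Flag applications for human review; not for automatic rejection.",
    "training_data": "De-identified application records, 2019-2023 (hypothetical).",
    "evaluation": {
        "overall_accuracy": 0.91,
        # Disaggregated metrics are the key transparency feature: they reveal
        # whether particular groups are being excluded by the model.
        "accuracy_by_group": {"rural": 0.87, "urban": 0.93},
    },
    "limitations": "Lower accuracy for rural applicants; outputs require human oversight.",
}

print(json.dumps(model_card, indent=2))
```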

These pieces are being published as they have been received – they have not been edited/fact-checked by ThePrint.
