New Delhi: Amid discussions over the need to regulate artificial intelligence (AI), the Centre is looking to collaborate with organisations and individuals to develop indigenous tools to mitigate AI-related risks and ensure fair and ethical practices.
IndiaAI — an independent business division under the Union Ministry of Electronics and Information Technology (MeitY) — has invited expressions of interest (EOI) for collaborative proposals for undertaking ‘safe & trusted’ AI projects, including establishing ethical AI frameworks and creating AI risk assessment and management tools and deepfake detection tools.
This decision comes close on the heels of Union Minister of Electronics and Information Technology Ashwini Vaishnaw’s announcement last week that the government is “open to the idea” of bringing in a new law to regulate artificial intelligence.
Addressing questions on AI governance and development in the Parliament during the ongoing session, the minister said, “Ethical issues in AI are a global concern, and India is committed to addressing these challenges through robust debate and responsible innovation.”
“…It is a major challenge that societies across the world are facing—the accountability of social media, particularly in the context of fake news and the creation of fake narratives,” Vaishnaw said, adding that establishing societal and legal accountability requires a significant consensus.
“These are the issues where freedom of speech comes on the one hand, and accountability and having a proper real news network getting created, on the other hand. These are things, which need to be debated, and if the house agrees and if there is a consensus in the entire society, we can come up with the new law,” he added.
IndiaAI, which is the implementation agency of the Rs 10,372-crore IndiaAI Mission, approved by the Union Cabinet in March, noted in the submission guidelines that within the mission, the ‘safe & trusted’ pillar emphasises the need for a balanced, technology-enabled and India-specific approach to AI governance.
“This involves the development of indigenous technical tools, guidelines, frameworks, and standards that are contextualised to India’s unique challenges and opportunities, as well as our social, cultural, linguistic, and economic diversity,” it said.
To achieve this, MeitY, through IndiaAI, will provide support for such ‘safe & trusted’ AI projects to mitigate AI-related risks, ensuring fair, ethical and robust AI practices.
The applicants, preferably from India-based academic institutions, research and development organisations, government bodies, start-ups, and other private sector companies, will need to focus on creating implementable and usable solutions promoting the just and ethical development of AI across different sectors. The themes around which proposals have been invited include watermarking and labelling, ethical AI framework, and deepfake detection.
Under watermarking and labelling, the proposed tool needs to help differentiate AI-generated content from non-AI-generated content and embed unique, imperceptible markers in AI-generated content to ensure traceability and security. These tools can also include functionalities to prevent the generation of harmful and illegal materials, ensuring compliance with ethical standards and national laws.
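The idea of an imperceptible marker can be illustrated, in highly simplified form, with zero-width Unicode characters. This is a toy sketch for intuition only, not how the production-grade tools the EOI envisions would work: robust schemes watermark at generation time and are designed to survive editing, reformatting and paraphrasing, which this one does not.

```python
# Toy illustration of an invisible text watermark: encode a marker string
# as zero-width Unicode characters and append it to visible text.
ZERO = "\u200b"  # zero-width space      -> bit 0
ONE = "\u200c"   # zero-width non-joiner -> bit 1


def embed(text: str, marker: str) -> str:
    """Append the marker, bit by bit, as invisible characters."""
    bits = "".join(f"{byte:08b}" for byte in marker.encode("utf-8"))
    hidden = "".join(ONE if bit == "1" else ZERO for bit in bits)
    return text + hidden


def extract(text: str) -> str:
    """Recover the hidden marker from the zero-width characters."""
    bits = "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))
    data = bytes(int(bits[i : i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")


stamped = embed("Generated summary of the report.", "AI-GEN:v1")
assert stamped.startswith("Generated summary")  # visible text is unchanged
assert extract(stamped) == "AI-GEN:v1"          # marker is still recoverable
```

The marker names and strings here are hypothetical; a real tool of the kind the guidelines describe would also need to resist deliberate stripping of the watermark, which trivial suffix encoding like this cannot.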
The document noted that ethical AI frameworks provide a structured approach to ensure that AI systems respect fundamental human values, uphold fairness, transparency and accountability, and avoid perpetuating biases or discrimination. “These frameworks encourage developers, researchers, and organisations to consider the broader societal implications of their AI solutions and make informed decisions to minimise potential harm,” it said.
Additionally, applications can be submitted to develop comprehensive AI risk assessment and management tools and frameworks. These should be designed to identify, assess and mitigate cross-sectoral safety risks, ensuring that failures in one area are contained and managed without affecting interconnected sectors.
“As AI systems become integral to critical infrastructure and services, it is crucial to evaluate their ability to withstand high-stress scenarios, including natural disasters, cyberattacks, data disruptions, or operational failures,” the note said, adding that stress-testing can provide actionable insights into system weaknesses and ensure preparedness for high-stakes situations.
It also called for proposals for tools for deepfake detection. “With the increase in realism of deepfakes due to advancements in AI, it is imperative to develop deepfake detection methods that safeguard against potential misinformation and manipulation in the society,” the note said.
It also sought ways for seamless integration of such detection tools into web browsers and social media platforms, providing real-time deepfake detection.
(Edited by Radifah Kabir)