New Delhi: Scam content that uses Artificial Intelligence (AI) is on the rise, having increased after the recent elections, the Ministry of Electronics and Information Technology (MeitY) told the Delhi High Court Monday.
Apart from this, the state elections saw a rise in deepfakes targeting women, a phenomenon that was further aggravated in the case of certain personalities, the Centre said in its status report.
“Deepfake can be face swap, voice clone, etc. there is no fixed definition,” the report states, adding that in such cases, audio detection also tends to be challenging.
Deepfake technology allows for the creation of realistic videos, audio recordings and images that mislead viewers by superimposing the likeness of one person onto another, altering their words and actions, and presenting a false narrative, usually to spread misinformation.
By way of its report dated 24 March, the Centre also pointed to the lack of a clear definition of deepfakes while highlighting the need to educate users through large-scale campaigns on identifying and understanding deepfakes.
Notably, these observations came on a batch of petitions addressing the non-regulation of deepfake technology and AI in the country.
Speaking to ThePrint, Mumtaz Bhalla, Partner, White Collar Crime, Arbitration & Litigation at law firm Economic Laws Practice, said, “While we are all excited about the benefits of AI, we are losing sight of the dangers an unregulated field can pose. Existing laws aren’t adequate because they were made at a time when nobody could think of deepfakes and AI.”
Underlining the ubiquity of AI in recent times, Bhalla added, “Now that it’s all here, and sparing no one, regulations must be brought out to protect [against] the stealing of women’s faces and artists’ works across the country, which have now become global issues.”
Stating that the violation of law cannot be legitimised through the use of AI, the Delhi-based lawyer, who represented one of the petitioners in this case, added that anyone using a woman’s face or an artistic work to train AI or create content must seek the consent of the model or artist concerned.
The story so far
On 21 November last year, the Delhi High Court had directed the Ministry of Electronics and Information Technology to constitute a committee and examine the suggestions made by the petitioners in this case.
A bench of justices Tushar Rao Gedela and Manmohan had also directed this committee to consider the regulations as well as the statutory frameworks in jurisdictions like the European Union.
“This Court further directs the Committee relating to the issue of deepfakes to invite and hear the experiences and suggestions of a few of the stakeholders like the intermediary platforms, telecommunication service providers, victims of deepfakes and websites which provide and deploy deepfakes before submitting its report,” the court had said, giving it three months to file the report.
Other findings in the report
Pointing to the minutes of the first meeting of the committee on deepfakes, which was conducted in December last year, the report said, “The focus areas include data protection laws, mandatory labelling of deepfakes and guidelines for consent and content moderation. Technical aspects such as AI detection tools, watermarking, and collaboration with technology firms are also critical.”
The meeting also underlined the need for large-scale literacy programmes to educate the public on the issue, along with fostering ethical research for sustainable solutions, and discussed inviting fact-checkers, tech firms and deepfake victims to the consultation process.
One of the suggestions made in this meeting was to leverage crime data analysis to identify patterns such as the common modus operandi, victim profiles, and target objectives (like defamation or extortion), the report said. “Studying 2024 complaint data to extract insights, detect trends and enhance detection, prevention, and forensic processes could be done,” it said.
The meeting also noted that “existing legal frameworks, such as IT rules and the DPDP [Digital Personal Data Protection] Act, will be reviewed to assess their sufficiency, while gaps may warrant additional recommendations”.
In January this year, another meeting was held to gather more experiences and suggestions. During this meeting, service providers like Airtel and Vodafone highlighted that they merely act as conduits and do not post user-generated content themselves. Instead, they said they only take action as per government orders and submit reports.
At the same meeting, Meta said that it is easy for sophisticated actors to remove watermarks or labels and that society cannot depend on a single actor to solve this. Meta also pointed to its AI labelling policy, which allows users to disclose when they upload AI-generated content. This policy applies to advertisements too, Meta said, adding that it is simultaneously working on protecting celebrity personas.
On the other hand, X submitted that it has a synthetic and manipulated media policy under which content that is deceptive in nature is taken down. It also said that it works within the existing legal frameworks to take down such content.
The different pleas
In the present case, the court was acting on a batch of pleas filed by three persons, namely journalist Rajat Sharma, advocate Chaitanya Rohilla, and a professional model named Kanchan Nagar.
One of the pleas, filed on behalf of the model, argued for the interests of artists across the country, including photographers, videographers, singers, musicians and other creative professionals, seeking amendment of the Information Technology Act, 2000, and the framing of appropriate guidelines to regulate the use of AI and deepfakes for commercial purposes. The petition also sought to modify Section 66D of the IT Act, which prescribes punishment for cheating “by personation” by means of a computer resource, to include AI or deepfake technology.
Although all three pleas broadly dealt with the non-regulation of deepfakes and AI, Rohilla’s public interest litigation (PIL) raised concerns about the risks associated with AI systems, the lack of a clear definition of AI, and the deceptive nature of deepfakes. It also spoke of recent incidents, the intersection of AI with personal data protection, and how the country fares in this regard.
Flagging ethical concerns like the “malicious use” of AI for spreading misinformation and citing instances of targeted propaganda, the plea described deepfakes as a form of synthetic media created using deep learning techniques.
Drawing attention to privacy violation concerns, along with instances of economic and emotional harm owing to inadequate safeguards, the plea also mentioned global regulatory efforts, like the European Union’s AI Act, and similar safeguards in the United States of America.
“Existing laws are deemed inadequate for addressing deepfake manifestations, and concerns persist about the Digital Personal Data Protection Act, 2023,” the plea adds, while listing websites offering deepfake services and saying that this underscores the need for relevant authorities to identify and regulate such sites.
The plea also sought a direction from the court to identify and block websites providing access to deepfakes, and to lay down guidelines for AI regulation in accordance with fundamental rights.
(Edited by Radifah Kabir)