New Delhi: The National Commission for Women (NCW) has urged the Centre to amend legislation related to women’s safety and privacy to explicitly define and criminalise “deepfake abuse”, amid growing cases of AI-generated content being used to harass women, spread misinformation and manipulate public perception.
The recommendation comes amid recurring instances of morphed videos and images of women being used to create non-consensual intimate and pornographic content.
In a press release on 4 November, the NCW said it had written to the Ministry of Home Affairs, the Ministry of Electronics & Information Technology, and the Ministry of Women and Child Development, recommending a review of cyber laws relating to women.
Speaking on the recommendations, Senior Advocate Charu Mathur, Advocate-on-record, Supreme Court, told ThePrint, “We don’t even have the terminology for deepfakes or synthetic media in our laws. The IT Act, 2000, is nearly 25 years old, and what we have is a patchwork stitched together to fit new digital crimes. Even the new Bharatiya Nyaya Sanhita is old wine in a new bottle—it repeats offences like defamation or voyeurism without addressing how AI-generated likenesses and morphing actually operate.”
She said India urgently needs a specific, women-centric deepfake offence, but it must be carefully drafted—“not a lazy duplication of old provisions”.
“The law should expand procedural tools with rules on forensic AI evidence, platform accountability, and metadata preservation. Without that, proving what’s real or fake will remain our biggest challenge,” she added.
According to the recently released National Crime Records Bureau (NCRB) report, India witnessed a 31.2 percent surge in cybercrime cases, which rose from 65,893 in 2022 to 86,420 in 2023.
The majority of these offences involved online fraud, extortion, and sexual exploitation. The report further notes that women remain disproportionately targeted, with several states showing steep increases in gendered cyber offences.
Chhattisgarh recorded a 36 percent rise in such cases, nearly one-fifth of them linked to online harassment, cyberstalking, or the circulation of obscene material, underscoring the growing vulnerability of women in India’s digital spaces.
Recently, actor Girija Oak spoke about AI-generated images of her that have been circulating online. In a video shared on Instagram, she described their impact, especially on her 12-year-old son.
“These obscene images of his mother are something he will see one day. He’ll know they’re not real—they’re AI-generated, just as everyone who sees them now knows they’re fake—but they still give people a cheap thrill, a kind of titillation. That is scary. I know I can’t do much to stop it, but doing nothing doesn’t feel right either,” Oak said.
Legal provisions recommended by NCW
The commission has proposed a new section in the Bharatiya Nyaya Sanhita, 2023, covering ‘modified content’, defined as the creation, modification, or distribution of digitally created or altered images, videos, or audio that falsely depict any person in an “explicit, defamatory, or misleading manner”.
Furthermore, the NCW has said the recommendation aims to bridge the legal and infrastructural gap, since the existing framework, which covers defamation, harassment, privacy breaches, and obscenity, falls short of addressing AI-generated fake content.
The commission has proposed strengthening victim support, toughening penalties, and establishing accountability for platforms under the Information Technology Act, 2000.
It also calls for updates to the IT Rules, 2021, mandating account verification, recognising AI-manipulated images, defining gender-based harassment, and ensuring platform transparency through AI audits.
To enhance online safety and data privacy, the commission has also proposed amendments to the Digital Personal Data Protection Act, 2023, mandating the takedown of non-consensual content within 12 hours and introducing graded penalties for gender-based data misuse.
To strengthen workplace safety, the commission has suggested expanding the scope of the Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013, to explicitly include cases of digital harassment and misconduct in remote work environments.
It further proposes amendments to the Protection of Children from Sexual Offences (POCSO) Act, 2012, to recognise online grooming and digital image manipulation as punishable offences, while making social media platforms liable for failing to report such abuse.
Lack of clarity
Currently, Indian law lacks clarity on synthetic media misuse, creating a critical legal gap for victims of digital deception. This leaves the judiciary with limited tools to ensure deterrence or punishment.
In the Kamya Buch v. JIX5A & Ors case, the Delhi High Court in July 2025 described the circulation of explicit, AI-manipulated images of the petitioner as a “patent breach” of her privacy and dignity under Article 21 (right to life and personal liberty), and granted an interim injunction and other significant relief to Buch.
Buch was targeted by a large-scale harassment campaign using AI-generated deepfakes, morphed images and pornographic content.
The court restrained the defendants, including anonymous users, pornographic websites and major intermediaries, from publishing the content, and directed the prompt takedown and de-indexing of the offending URLs. The order drew on the privacy principles laid down in the Supreme Court’s Puttaswamy ruling.
“NCW’s recommendation to the central government is indeed a welcome step. At present, the available remedy for cases involving deepfake or morphed content largely rests on tortious liability. In the absence of a comprehensive statutory framework, courts have so far been limited to directing the removal of URLs from platforms hosting such obscene material,” Advocate Raghav Awasthi, who represented Kamya Buch, told ThePrint.
He said the current mechanism of seeking judicial intervention can best be described as akin to “getting a glass of champagne in business class—it is available, but at a considerable cost”.
“The financial and procedural barriers make it nearly impossible for most individuals to access the courts for appropriate relief. The foremost step should be to make the judiciary more sensitive to such issues and to ensure that victims receive just and meaningful compensation upon the conclusion of the trial,” added Awasthi.
Actor Aishwarya Rai Bachchan moved the Delhi High Court over unauthorised use of her likeness in AI-generated and morphed content. In its order on 9 September, the court observed that such misuse infringes an individual’s personality rights, including their name, image, and likeness, when done without consent, recognising it as a violation of privacy and dignity.
“The Aishwarya Rai Bachchan case before the Delhi High Court underscores a serious gap in India’s legal framework. If a public figure with access to legal expertise and media visibility struggles to secure swift relief, the situation for ordinary citizens is even more daunting. Current remedies under the IT Act, Bharatiya Nyaya Sanhita, and civil law are largely reactive, and they come into play only after the harm has already occurred,” senior advocate Pinky Anand told ThePrint.
In another instance, a social media influencer approached the Delhi High Court seeking the removal of inappropriate and deplorable content, which, she said, caused her mental trauma and affected her future career prospects.
The court immediately ordered the takedown of all deepfake content from platforms and pornographic websites, and simultaneously directed the registration of an FIR under the IT Act and the BNS.
(Edited by Ajeet Tiwari)