Meta would like you to make something with its AI tools. For weeks now, it has been popping up in your Instagram feed with friendly prompts and ready templates, nudging you to become a creator, too. You could generate a corgi being spirited off by a bald eagle or a mock news interview with Donald Trump, which, on the misinformation scale, would merit a “WTF?” but still be inoffensive. Neither would ruin your life. But Indian men have — unsurprisingly — chosen misogyny.
The result is a feed peppered with reels of women who look like someone and no one at the same time, wearing sarees that slip off their synthetic bodies in slow motion. In videos set to trending songs, they are attacked by stray cows that rip their clothes apart, while the men in the comments cheer on their humiliation. Videos of pregnant women who can apparently select the sex of their child by lying down in a comic-book contraption draw thousands of views, serving two fetishes at the same time. Slop doesn't even begin to describe it.
They show up with bare navels and exaggerated bodies in the fantasies of schoolboys and adult men alike. I’m afraid it only gets worse from here. Mirrored in these reels are the most depraved and taboo categories from mainstream pornography, like mothers and sons. There are women in sexual scenarios, talking dirty with multiple black men. An account with 39,000 followers, which routinely hits millions of views, is dedicated entirely to videos that leave nothing to the imagination, featuring a creepy old man and a very young woman.
All of these accounts accumulate tens of thousands of followers who do not seem to mind, or perhaps do not notice, that the woman they are following is not a woman at all. But variations of this rabid impulse to control the images of women have existed since the dawn of the image itself.
Old impulse, new tools
One of the earliest documented cases of non-consensual image manipulation of women’s bodies dates to the 19th century. In 1888, a New York photographer named Le Grange Brown was accused of cutting faces out of portraits of society women and pasting them onto the bodies of naked women. These composites were then sold in saloons. It all happened within years of the portable camera becoming widely available.
Since then, the smartphone, the internet, and AI have all been turned on women's bodies with harrowing certainty, and nobody involved needed to be taught how. AI, marketed as the cure for cancer and world hunger, has so far proven most reliable at undressing women.
In January this year, researchers found that Elon Musk’s AI chatbot Grok was generating approximately 6,700 sexualised images of women and children per hour. In nine days, it produced 4.4 million images, of which 1.8 million were sexualised depictions of women and 23,000 were sexualised images of children. Users, unafraid of outing themselves, coached one another on how to word their prompts to get past safety filters, specifically targeting real people like celebrities, models, and women who were neither. Some even asked Grok to add blood and bruising to their photos.
When the scale of it caused widespread outrage, xAI responded by restricting image generation to paid subscribers — which is to say, you could undress women if you could afford the subscription fee. Ashley St Clair, the mother of one of Musk’s children, sued xAI after discovering that Grok had been used to generate sexual images from a photograph of her taken when she was 14 years old.
But treating this as a Grok problem, or even an AI problem, misses what the platforms are actually doing. In 2023, the Wall Street Journal found that Instagram's algorithm began serving overtly sexual content within minutes to test accounts that followed only young gymnasts and cheerleaders. Safety tools were rolled out, but in 2024, accounts registered as minors were drowning in explicit recommendations. In a follow-up investigation conducted over seven months, the Wall Street Journal and Northeastern University set up new minor accounts that scrolled through Instagram's video Reels feed, skipping "normal" content and lingering on more "racy" adult videos. The finding: "After only 20 minutes of scrolling, the accounts were flooded with promotions for 'adult sex-content creators' and offers of nude photos."
Even Snapchat, an app tacitly used to exchange risqué photos, and TikTok, tested under identical conditions, did not produce the same results. Internal documents from 2022 showed that Meta knew what its algorithm was doing; the company nevertheless responded by labelling the experiments "artificial", which is certainly a choice.
The real-world toll
What is far from artificial, however, is the cost borne by real women who do not make the news. India's cybercrime complaints involving women jumped from 48,335 in 2024 to 76,657 in 2025, an increase of more than 28,000 cases in a single year. The largest category, at nearly 38,000 complaints, was sexually obscene material, followed by over 19,000 cases involving sexually explicit acts and more than 10,000 linked to child sexual abuse material.
The 2025 “Make it Real” report from the RATI Foundation and Tattle found that 92 per cent of the women who reported deepfake abuse were not actresses or public figures but ordinary women. In most cases, the perpetrator had no real-world connection to the victim at all: AI was used precisely because the abuser had no access to a woman’s private life and did not need any. Even a publicly available photograph was enough.
In one case, a woman who had submitted a photograph as part of a loan application found the image digitally manipulated by one of dozens of "nudify" apps. The photo was circulated on WhatsApp with her phone number, leading to a barrage of sexually explicit calls from strangers. Nearly 70 per cent of the abuse cases reported to the RATI helpline in 2025 involved AI-generated or manipulated images. When women reported such incidents to the police, as men on social media are always so helpful in suggesting, they were frequently told to simply delete their social media accounts.
The solution to being violated online was to stop existing there, the digital equivalent of being told to stay indoors. What's crushing is that it has exactly that self-censoring effect on other women, who withdraw from spaces that are supposed to be theirs too.
In theory at least, India’s IT Amendment Rules of 2026 seek to address this problem. They require platforms to take down flagged deepfakes within three hours of receiving a government or court direction, which is its own serpentine task. But critics have pointed out that the rules are bundled with a broader tightening of digital regulation, aimed less at protecting women than at controlling what digital creators can say. Meanwhile, Meta, which has opened these tools up to its largest user base in the world, announced this year that it will replace its human content moderators with AI.
Yet, a sorry kind of faith is embedded in the idea of regulation, of moderation. It urges you to believe this is merely a governance problem, waiting to be solved by better filters and faster takedowns. It is not. Tools can be governed eventually. But who will govern the men?
Karanjeet Kaur is a journalist, former editor of Arré, and a partner at TWO Design. She tweets @Kaju_Katri. Views are personal.
(Edited by Asavari Singh)

