New Delhi: Italian Prime Minister Giorgia Meloni has criticised, in a Facebook post, the widespread circulation of AI-generated deepfake images of her, including one depicting her in lingerie.
“In recent days, several fake images of me have been circulating, generated using artificial intelligence and passed off as real by some overzealous opponents. I must admit that whoever created them… even improved my appearance quite a bit,” Meloni wrote in a post on her official Facebook account on 5 May, alongside the image itself.
“But the fact remains that, in order to attack and spread falsehoods, people are now willing to use absolutely anything,” she added.
In the image, Meloni is shown in lingerie, sitting on a bed; the prime minister captioned the photo “fake photo generated with AI”.
In her statement, Meloni criticised such forms of bullying and warned of the consequences of a technology capable of misleading the public as well as harming individuals.
“Deepfakes are a dangerous tool, because they can deceive, manipulate and target anyone. I can defend myself. Many others cannot. For this reason, one rule should always apply: verify before believing, and think before sharing. Because today it happens to me, tomorrow it could happen to anyone,” she wrote.
The deepfakes, however, are not an isolated incident.
In 2024, Meloni became one of the first women heads of government to pursue legal action against producers of deepfake pornographic material using her likeness, testifying in a trial in Sardinia. The two accused men had taken images of Meloni and several other Italian women from social media and digitally manipulated them into vulgar content. These images were then shared on a platform with more than 700,000 subscribers.
Following the incident, Italy took strict measures and became the first EU country to enact a comprehensive national law regulating AI usage. The law introduced prison terms of up to five years for anyone using deepfakes to cause harm. It also prohibits children under the age of 14 from using AI without explicit parental consent.
Weaponising AI images against women
Meloni’s case highlights the dangers of weaponising AI tools against women, especially women in power, to humiliate and discredit them.
A 2026 United Nations report also acknowledged the problem. It warned that women in public life face “growing and increasingly sophisticated forms of online violence,” with AI-assisted “virtual rape” now within easy reach of perpetrators.
In the United States alone, a 2024 report found a surge of deepfake content targeting women members of Congress in particular.
Perpetrating violence against women in public life has become common practice in online spaces, with over 50 per cent of women across various regions experiencing technology-facilitated abuse.
So far, women have been the most common targets of deepfake imagery. Male politicians rarely face such smear campaigns or digital abuse, and even when they are targeted, the damage is usually reputational or policy-related, not designed to shame them through their bodies.
During her 2024 election campaign, Kamala Harris, then US Vice President and the Democratic presidential nominee, faced similar attacks. After her nomination, sexually explicit deepfake images using her likeness circulated online, including false “documentary-style” videos that fabricated sensational personal histories. The videos spread rapidly, some amassing a million views within hours.
These images and videos gained further credibility after Elon Musk and Donald Trump shared them online, blurring the line between reality and fiction.
Alexandria Ocasio-Cortez, another US Democratic representative, spoke bluntly about the trauma of seeing herself in an AI-altered video.
In an interview with Rolling Stone, Ocasio-Cortez said that encountering an AI-generated video depicting her performing a sex act made clear that being deepfaked was “not as imaginary as people want to make it seem”.
She added that the mental image of the deepfake version of herself lingered with her throughout the day.
“There’s a shock to seeing images of yourself that someone could think are real. As a survivor of physical sexual assault, it adds a level of dysregulation. It resurfaces trauma, while I’m … in the middle of a fucking meeting,” she added.
In 2022, weeks before the Northern Ireland Assembly elections, an AI-generated video of Northern Irish politician Cara Hunter circulated widely on WhatsApp, damaging her reputation at a critical moment. In an interview with the Guardian, she spoke about the impact the morphed video had on her.
“There’s this woman–a woman who seemed to have my face–who is doing a handstand and having mutual oral sex with a man. And I’m looking at this, sitting surrounded by family, in the middle of a very heated election campaign,” she said.
Hunter, who was 27 at the time, also spoke about how others perceived the video.
“All of them were just really vitriolic. Those messages were from people who hate women,” she said.
In Pakistan, both provincial information minister Azma Bukhari and Punjab Chief Minister Maryam Nawaz have been targeted by deepfake videos aimed not at policy debate but at moral disqualification. In Bangladesh, deepfakes of opposition figures Rumin Farhana and Nipun Roy have been used to provoke outrage among conservative audiences.
As the patriarchal political landscape slowly wakes up to the threat, laws and regulations have done little to deter perpetrators. Many governments are yet to recognise the dangers of deepfakes and their misuse, which makes cases like Giorgia Meloni’s a stark wake-up call.
(Edited by Insha Jalil Waziri)

