Thursday, March 19, 2026

Disinfo is rewriting Iran conflict in real time. In ‘first truly AI-native war’, India an active target

In the days since the US and Israel attacked Iran on 28 February, bad actors have overwhelmed social media feeds with images and videos of missile attacks, destruction, combat and death.


New Delhi: On 1 March 2026, a video began circulating on X showing a warship engulfed in fire and billowing black smoke. Arabic-language accounts captioned it breathlessly: a US destroyer, retreating to Cyprus after sustaining Iranian strikes. The footage was real. The framing was entirely fabricated. The images were from the USS Forrestal fire—a peacetime accident that occurred in 1967, nearly six decades before the current conflict.

This is the texture of the information war running parallel to the military one. In the days since the US and Israel attacked Iran on 28 February, bad actors have overwhelmed social media feeds with images and videos of missile attacks, destruction, combat and death. Much of it does not show real or current events: it is content generated with artificial intelligence, or simply old footage misrepresented as new.

Swasti Chatterjee, Deputy Editor (News) at Boom Live who has tracked misinformation across Covid, Operation Sindoor, and now this conflict, is blunt about what has changed. “AI-generated visuals have practically overwhelmed us,” she said. “By the time we are fact-checking, the narrative is already established.”

The structural conditions of this conflict have made disinformation almost inevitable. Joyojeet Pal, professor at the School of Information at the University of Michigan, identifies three overlapping drivers.

“There are typically three drivers of misinformation: epistemic drivers, because one doesn’t know what is going on; existential drivers, because people fear for their existence; and socio-cultural drivers, because one group has pre-existing negative views of another,” he said. “War situations which cut off internet have all three. This is a setup for a very high disinformation environment.”




The scale of the fabrication

The examples documented across platforms in the first two weeks alone are staggering in their variety and reach. Compounding the problem, many of these fake posts were shared by established journalists across social media platforms.

One widely shared video on X claimed to show Iranian missiles striking the centre of Tel Aviv. It was viewed by more than four million people—but it does not depict Tel Aviv. It is footage from Algeria showing football fans celebrating a local club’s victory, debunked multiple times in the past. Another clip, viewed over five million times with a caption about US fighter jets launching “the largest airstrike in modern history”, turned out to be simulations from Arma 3, a military video game with a photorealistic rendering style.

Other fakes included fabricated videos of missile barrages on Tel Aviv, panicked people fleeing a supposed attack on an airport, captured US special forces held at gunpoint, and a convoy of US soldiers on the ground in Iran—none of which occurred. A video supposedly showing Israelis protesting for peace—with Hebrew-looking gibberish on the signs—was entirely AI-generated. A near-identical fabrication showed Iranians cheering “We Love Israel!”—proof that neither side in this conflict has a monopoly on manufactured reality.

What distinguishes the current moment from all previous conflicts in the region is not simply the volume of misinformation—it is the quality, and the speed at which it is produced. Researchers have taken to describing it as the first truly AI-native war.

Pal, who also studied the India-Pakistan conflict closely, identifies what makes the current war a distinctly potent environment for fabricated content. “Both sides are heavily restricting access to journalists, and both sides have targeted high-attention targets—hotels, consulates, energy facilities,” he said. “These kinds of attacks are much more prone to fake footage, both because they are dramatic events and because of the news shutdown.”

Two longer-term forces are also in play: the polarisation hardened during the Gaza bombing, which made a subset of users more willing to engage with content that confirms their hatreds, and the simmering global anger over American tariffs and perceived bullying. “That has increased the number of people who want to see that the US is doing poorly in the war, and consequently more willing to engage that content,” Pal added.

The Iranian state apparatus has been particularly prolific. Since 28 February, 18 war-related claims by Iran were found to be false, according to the news rating organisation NewsGuard—compared to five false claims in the two weeks prior. NewsGuard also found that Iranian outlets are increasingly turning to AI-doctored images, often created outside Iran.

During the earlier June 2025 phase of the conflict, Fars News, Iran’s state-run news wire, published a video compilation titled ‘Tel Aviv Before and After the War with Iran’ in which nearly all footage was AI-generated. Israeli-aligned networks, meanwhile, have circulated synthetic videos of pro-Israel protests in Tehran, falsely claiming they show mounting popular dissent against the Iranian government.

Citizen Lab, a research group at the University of Toronto that investigates digital threats to civil society, documented the Israeli-backed ‘PRISONBREAK’ network which deployed AI-generated imagery and deepfakes timed to coincide with real Israeli military strikes. In India, the same pro-Israel tilt documented during the Gaza war has resurfaced: Indian accounts with large followings have shared fabricated content backing Israeli narratives.

Raqib Naik, executive director of the Centre for the Study of Organized Hate, a Washington-based think tank, frames disinformation as integral to modern conflict. “Digital or cyber warfare is a critical component of the overall warfare,” he said. “Countries are trying to win the narrative war in digital spaces—to keep that semblance of hope and winning within the country, but also to shape the perspectives of people internationally.” The tactics, he added, have fundamentally shifted: “State and non-state actors and individuals can now produce and manipulate content to peddle misinformation at an industrial scale without requiring much skill or resources.”

India: Target, amplifier & battlefield

What makes the disinformation ecosystem of this conflict particularly significant for Indian readers is that India is not a passive observer. It is an active target—and an unwitting amplifier. The stakes are not abstract.

India’s bilateral relationships with Iran, Israel and the Gulf states encompass energy imports, trade corridors, defence partnerships and the welfare of nearly nine million Indian nationals living and working across the Gulf. Fabricated content that misrepresents India’s alignment in this conflict or poisons its standing with Gulf audiences carries real diplomatic and economic risk.

A post carrying the flags of Bahrain, India and Israel claimed that Bahraini intelligence had arrested an Indian telecom engineer named “Nitin Mohan” for passing sensitive geospatial data to Israel’s Mossad. Independent fact-checkers identified it as an AI-generated fabrication from Pakistani propaganda accounts. The actual Bahraini ministry statement confirmed only that six people of unspecified Asian nationalities had been detained—no names, no nationalities, no connection to any intelligence service.

Separately, Arabic-language accounts pushed claims that India and Israel had jointly plotted against Pakistan, framing India as part of a “Hindu-Zionist coalition”—content designed to damage India’s standing in Muslim-majority Gulf states on which it depends for oil, remittances, and trade.

Then there is perhaps the most sophisticated operation tracked so far: multiple accounts with Indian-sounding names, Hindi-language content, and Hindu cultural branding—Gayatri Mantra banners, nationalist bios—simultaneously posting near-identical content about Iranian battlefield victories. On closer examination, these appear to be coordinated operations using Indian-identity aesthetics to route pro-Iran, anti-American messaging into Hindi-speaking feeds.

Kiran Garimella, an assistant professor at Rutgers University who studies how information spreads online, places India’s vulnerability in structural context. After October 7, 2023—the conflict between Hamas and Israel that triggered the Gaza war—a disproportionate share of conflict-related amplification came from Indian accounts. “Someone was saying the Israelis are just outsourcing it to India,” he recalled. “There’s a lot of mobilisation — hundreds of thousands of accounts associated with political parties that have complementary political goals.”

Pal adds a psychological dimension. “In times of information warfare, there is a spike in motivated reasoning—people consuming what they want to believe rather than what is true,” he said. “Inattention, rather than reasoning, drives content consumption. Things that quickly catch your eye are much more likely to go viral than a textual debunk.”

Naik points to monetisation as a primary engine. “One of the main vectors of wartime disinformation right now is X, because reach is rewarded with money,” he said. “Bad faith digital creators weaponise trending topics—they push content that evokes attention or strikes emotions. The more extreme the content, the more reach it gets.”

The UAE arrested 35 individuals—19 of them Indian nationals—for publishing misleading and AI-generated videos related to the conflict. Offences carry a minimum one-year prison sentence and a fine of at least AED 100,000.



What fact-checkers are up against

For verification organisations, the challenge is no longer simply distinguishing real from fake. It is whether the effort of establishing that distinction can be sustained at all.

Chatterjee said her team has largely reverted to older methods. “We do not stick to detection tools. We rely on the very good old practice of fact-checking.” A single fact-check can take 24 to 72 hours. “There is only so much we can do.”

During Covid, the dominant form of manipulation was edited text. By Operation Sindoor, it had shifted to repurposed video and fake letters attributed to ministers. “Unlike Operation Sindoor, we did not see that much volume of AI-generated content as we are seeing now,” Chatterjee said. “The same handles that pushed deepfakes after Sindoor have reappeared. This time their content targets Indian news anchors alongside political leaders.”

Naik connects the scale of the problem to a deliberate erosion of trust in credible sources. “There are attacks on trusted sources—mainstream news outlets with verified editorial processes,” he said. “Trump labels them fake news; Musk has spent years bringing down the credibility of mainstream outlets and presenting X as an alternative.” The result is a public that has lost its reference points, and is consequently more vulnerable to manipulation.

The tools designed to assist journalists are themselves compromised. Garimella pointed to a widely read technology journalist in the United States who shared fabricated footage linked to the Houthis. “If very tech-savvy, respected journalists are doing this,” he said, “we cannot blame normal people.”

Platforms: Failing in structure, not just in practice

Meta allowed an AI-generated video to go unchecked on Facebook during the 12-day war in June 2025, according to a ruling by the Oversight Board. The fabricated video, posted by a user in the Philippines posing as a news source, showed extensive damage to buildings in Haifa. Six users had reported it; a similar video had earlier been debunked on TikTok. Meta took no action. Meta had shut down its third-party fact-checking programme the previous year — a decision Naik said left the platform structurally exposed. “The guardrails that existed previously don’t exist anymore.”

X has fared no better. Researchers at the Atlantic Council’s Digital Forensic Research Lab tabulated more than 300 responses by Grok—X’s AI assistant—to a single fake video of a bombed airport. The bot’s responses contradicted each other, sometimes minute to minute. Said Naik pointedly: “People believe Grok is a fact-checking tool, which it isn’t. It is trained on data that is out there on the internet. Using it as a fact-checking tool is a mistake.”

Pal identifies a structural flaw specific to X. “X is prone to misinformation because of the structure of virality driven by verified users—not verified for being recognised information intermediaries, but because they pay for a blue tick and can monetise virality,” he said. “Community notes may still make the content and the quoted reposts of that content go viral.”

India’s regulatory response

The government response in India has been materially sharpened since Operation Sindoor. It is worth understanding how the machinery actually works. The Ministry of Information and Broadcasting does not itself take down content—it flags and refers. “MIB on its own does not pull down any content or act against it,” said Kanchan Gupta, senior adviser to the ministry. “The actual action happens in MeitY.”

MeitY holds primary authority to issue takedown orders under the IT Act, supported by its Emergency Response Team. The Ministry of Home Affairs and the Intelligence Bureau handle specific national security cases. Where licensed content is being misused—OTT footage, for instance, repurposed into disinformation—MIB can act directly. “Recently we took down a large number of Telegram channels that were using licensed content from OTT platforms,” Gupta said.

Since Operation Sindoor, the government has tightened the compliance window significantly through the IT amendments that came into force last month. Platforms must now act on government instructions within three hours.

“If they are not in compliance within three hours, they will be deemed to have violated the safe harbor rules,” Gupta said. “Compliance is mandatory. If it is three hours and one minute, that’s not going to be counted.” A 24-hour monitoring desk, using software to crawl X, Facebook, YouTube and Instagram, has been stood up to track content that impinges on India’s bilateral relationships. “Unlike Sindoor, this is not our conflict,” Gupta said. “But we have interests — energy interests, bilateral interests — PIB fact-check units and other mechanisms have been set up to spot, track, trace and take down content which impinges on India’s bilateral relations with any one of these countries.”

The government is now considering going further. A proposal under active discussion would extend direct takedown authority beyond MeitY to other ministries with national security stakes—Defence, External Affairs and MIB itself—allowing them to act on platforms directly during conflict situations, without routing every request through a single nodal ministry.

Gupta was candid about the limits of what any system can achieve. “It’s a beast which reproduces very fast. Often it slips into WhatsApp and Telegram, and then it can’t be spotted.” What circulates on X, Garimella adds, is only the visible fraction. “What you see on X is a really, really small tip of the iceberg. There’s a huge mountain of WhatsApp and even Facebook that we don’t see.”

The deepest damage: When real footage is dismissed as fake

There is an underappreciated second-order consequence: the erosion of trust in authentic content. A genuine video of a rocket hitting a busy Tehran street was dismissed widely online as AI-generated propaganda. An information environment saturated with fakes does not merely cause people to believe lies—it causes them to stop believing anything.

The Netanyahu case has become the defining illustration of this dynamic. The disinformation campaign began in the first days of March, when AI-generated images of Netanyahu bloodied and motionless in rubble—traced to IRGC-affiliated accounts, including 62 fake profiles posing as Scottish independence supporters and Irish nationalists—spread rapidly across X, Instagram and Telegram.

Israel scrambled to push back. Netanyahu posted a video of himself at a Jerusalem café sipping coffee, captioned “They say I’m what?”—a direct mockery of Iranian state media claims that he had been killed in a missile strike. It backfired spectacularly. Users zoomed in frame by frame: his face appeared to shift from round to oval when he looked down, his wedding ring seemed to flicker in and out of existence, and an earlier clip was said to show a sixth finger—a classic tell of AI generation.

A pro-Iran account posted a screenshot from AI detection tool Hive rating the video “96.9% AI-generated”, which received 1.2 million views in a single day. Grok then declared itself “100 percent sure” the café video was an advanced deepfake, citing “static coffee levels” and unnatural lip sync, a verdict that spread faster than any fact-check. Netanyahu’s office released a second video, then a third, each time changing the setting and framing to counter the scepticism—an unprecedented digital counter-offensive by a sitting head of government simply to prove he was alive. None of it worked. Each new clip was met with fresh scrutiny, fresh claims of manipulation, fresh zoom-ins on shadows and fingers.

None, Chatterjee noted, rose to the level of actual OSINT—the rigorous open-source verification practice of cross-referencing footage against satellite imagery and metadata. “These are very bogus examinations. These are not even OSINT.”

Israel cannot do a live broadcast for obvious security reasons—the moment a leader’s location is revealed, it becomes a target. But every pre-recorded clip it releases is now automatically suspect. “The Israel team is struggling really hard to prove he is alive,” Chatterjee said. “And us fact-checkers—what do we debunk? The moment we debunk such a theory, a troll army gets onto us with their own ridiculous reasons. We do not have the bandwidth.”

Naik sees information blackouts as a direct accelerant. “When governments censor content, there’s more appetite to look for it—and that is where bad faith actors weaponise that hunger,” he said. “There’s hunger, people want to see what’s going on, and these online accounts propel the spread of deceptive content.”

Garimella resists the most catastrophic readings, but the trajectory concerns him. “The rate of progress looks like there will be commoditised, open-source models that are harder to control soon. We might have to be prepared for that.” Pal argues the answer may lie in provenance rather than debunking. “There are no interventions that have been massively successful with debunking. We can all look forward to a future in which we would need provenance for any information coming our way.”

For Chatterjee, working in the eye of the storm, the view is less philosophical. “I don’t know where we are even going from here.”

The conflict continues. So does the production of images and claims that purport to describe it. The war between Iran and the US-Israel alliance is being fought on two fronts. One involves missiles, drones, and aircraft. The other involves pixels, prompts, and manufactured certainty. On the second front, the line between what is real and what has been fabricated cannot always be found — and the organisations built to find it are already running out of time.

(Edited by Nardeep Singh Dahiya)


