
Text-to-video AI the new threat in election season. Here’s something Indian politicians can do

If political parties take proactive steps in the absence of fail-proof legislation, we can pave the way for a more informed, resilient, and democratic society.


The ancient Indian political strategist Chanakya said, “In the game of politics, deception knows no bounds.” This adage might prove particularly true in 2024 — a year rife with elections around the world — as OpenAI introduces Sora, an artificial intelligence software capable of creating strikingly realistic videos from basic text prompts within seconds.

A technological marvel, such software also poses many challenges. Concerns range from taking precautions against its potential misuse to questioning whether we really need such a tool when half the globe is set to choose new governments. Good information is the necessary first step for citizens to think clearly about politics.

AI has been used in elections in the past, but text-to-video generative AI is a game changer. Organised groups and operations on the internet have created deepfakes to spread misleading or provocative information with the goal of sowing discord, manipulating public opinion, or influencing political outcomes. Such messages can now be put out cheaply, quickly, and at scale with the stroke of a key.

Beyond legislation

The emergence of text-to-video AI has exposed a legal void in India. Though the gap has been acknowledged, we haven’t thought through how various existing laws could stem the spread of disinformation. As it stands, the Information Technology (IT) Act treats social media platforms such as Meta or X as the intermediaries accountable for illegal and malicious content that leaves citizens vulnerable. It is these intermediaries that face the brunt of regulatory action despite having no role in originating such content; their non-compliance can lead to legal action and fines.

The Election Commission of India (ECI) has enforced guidelines through the Model Code of Conduct (MCC), outlining ethical behaviour, fair campaign practices, and rules governing political advertisements. During the 2019 Lok Sabha election, then-Chief Election Commissioner Sunil Arora announced that major social media platforms had committed to accepting only pre-certified political ads and observing the 48-hour silence period before polls. But this year, there have been no such agreements. In November 2023, the Aam Aadmi Party (AAP) filed a complaint with the ECI, accusing the Bharatiya Janata Party (BJP) of sharing morphed videos targeting party chief Arvind Kejriwal. This came after the electoral body issued a show-cause notice to the AAP for posting clips allegedly defaming PM Narendra Modi.

Quest for accountability

In such a volatile political environment, a cross-party solution seems imperative, considering how often politicians churn out questionable information. Indian political parties must take the lead to prevent this in upcoming elections. Such an agreement, as recommended by the UK think tank Demos, could help prevent the weaponisation of AI tools while regulation in India remains delayed. Demos suggests a framework for political parties to mitigate risks: parties should publish guidance on AI use, lead by example in transparency, and form a consensus on the use of AI-generated voice or imagery. It also urges parties to commit to not amplifying potentially deceptive content about opponents, setting high standards of campaign behaviour.

Case studies from America and Slovakia serve as cautionary tales, illustrating the real-world impact of digital deception on electoral processes. In America, influenced by false claims by former President Donald Trump, a majority of Republicans still doubt the legitimacy of Joe Biden’s election — three years after rioters stormed the US Capitol. Days before Slovakia’s crucial election, fake AI-generated audio clips featured a top candidate admitting to election rigging and discussing beer price hikes. The candidate lost.

Some tech companies are stepping up to combat the misuse of AI tools. OpenAI is pooling expertise across safety, legal, and engineering teams to investigate abuses like deceptive deepfakes and chatbots, but there is still no simple way to distinguish the real from the fake despite what tech firms claim.

Similarly, Meta has a comprehensive strategy for upcoming state elections, activating an Elections Operations Centre for monitoring purposes. X, however, has disbanded its ‘Election Integrity Team’, as Elon Musk confirmed.

While regulation can serve as a deterrent, its efficacy in the digital realm is limited. The nimbleness of AI-driven technologies often outpaces the speed at which laws are enacted. And the focus on good actors with incentives to remove deepfakes overlooks the fact that, beyond the major platforms, lie numerous regional networks, encrypted messaging apps, and obscure channels where videos can circulate unchecked.

Eroding trust

Deepfake videos often contain half-truths, which make them more convincing, and voters will no longer know what to believe.

A 2021 study published in the American Political Science Review explored the effectiveness of media literacy training during the 2019 election campaign in Bihar. Respondents who were trained to spot misinformation before the campaign could distinguish between mainstream and fake news when shown pro- and anti-party false claims. However, when they were interviewed after voting and shown misinformation, they did not accept corrections that cut against the party they had supported — partisan bias prevailed.

This presents another dilemma — the liar’s dividend. This phenomenon occurs when even legitimate evidence is dismissed as fake, eroding the very foundation of truth and accountability in elections. People will be worn down as fakes eat away at their trust.

And what about foreign actors who may try to create discontent in society by playing both sides? In December 2023, Meta removed 4,700 fake accounts that originated in China and polarised US and Indian users by spreading false information on divisive topics.

Add to this personalisation (demographic, geographic, psychographic, in various languages). Is personalisation a bad thing? No, unless it is used maliciously. The targeting of individuals through AI has been widespread, and so have lies and manipulation across several campaigns. What has changed is that you no longer need Cambridge Analytica’s computing wizardry to pull them off.

As the world watches, India has the opportunity to set a precedent in the fight against AI-driven misinformation. If political parties learn from experiences around the world and take proactive steps in the absence of fail-proof legislation, we can pave the way for a more informed, resilient, and democratic society.

The author is the Communications Director for Koan Advisory. This article is part of ThePrint-Koan Advisory series on emerging policies in India’s tech sector. Views are personal.

(Edited by Humra Laeeq)
