
Deepfakes can cause geopolitical rifts. State should fund detection of manipulated videos

Deepfakes have the ability to swing elections and erode public trust. Two ways to combat them are detection & provenance. Their looming threat necessitates proactive State intervention.


A prominent geopolitical risk of AI is its capacity to produce deepfakes—audio-visual content that depicts real individuals in fictitious situations. Just imagine how damaging weaponised deepfakes can be to the ties between nations. Nations are heeding this concern. In the G20 Delhi Declaration, member nations committed to a pro-innovation approach in regulating artificial intelligence, factoring in the risks posed by the technology. In July 2023, the G20 Conference on Crime and Security in the Age of NFTs, AI, and the Metaverse specifically called out the use of deepfakes by malicious actors as a rising concern.

According to Europol, the European Union Agency for Law Enforcement Cooperation, deepfakes can cause widespread institutional disruption. For instance, the BBC demonstrated how they could jeopardise elections by releasing videos in which Boris Johnson and Jeremy Corbyn appeared to endorse each other. If a significant portion of the UK population had believed such videos, election results could have been compromised. The Europol report also highlighted the threat deepfakes pose to financial markets: they could be used to send a company’s stock price plummeting by portraying key executives negatively.

Deepfakes are likely to have a significant social impact as well. Researchers indicate that one of the most sinister ramifications of deepfakes is their ability to leave an imprint on people’s minds, even after being disproved — raising concerns about trust and information integrity in society.


Detection and provenance 

There are two possible methods to counter deepfakes. The first is detection, where algorithms determine the authenticity of an image. However, this method has its limitations. A common detection method involves searching for visual inconsistencies that indicate forgery. But, according to one study, if an image or video undergoes significant compression or distortion, these telltale indicators can vanish, letting fakes slip through, while the compression noise itself can trip detectors on genuine footage, resulting in numerous false positives.
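
To see why, consider a toy illustration (not any particular detector): many forensic cues live in an image’s fine, high-frequency detail, and aggressive JPEG compression strips exactly that detail away. The sketch below assumes only the Pillow and NumPy libraries and uses synthetic noise as a stand-in for subtle forgery artefacts; it measures how much fine detail survives compression.

```python
# Illustrative sketch only: lossy JPEG compression removes the high-frequency
# pixel detail that many forensic detectors rely on.
import io

import numpy as np
from PIL import Image, ImageFilter

def high_freq_energy(img: Image.Image) -> float:
    """Energy of the image minus its blurred version: a crude proxy
    for the fine-grained detail that forgery detectors inspect."""
    grey = img.convert("L")
    arr = np.asarray(grey, dtype=np.float32)
    blurred = np.asarray(grey.filter(ImageFilter.GaussianBlur(2)),
                         dtype=np.float32)
    return float(np.mean((arr - blurred) ** 2))

# Synthetic frame: a smooth gradient plus faint noise standing in
# for subtle manipulation artefacts.
rng = np.random.default_rng(0)
base = np.tile(np.linspace(0, 255, 256, dtype=np.float32), (256, 1))
noisy = np.clip(base + rng.normal(0, 8, base.shape), 0, 255).astype(np.uint8)
frame = Image.fromarray(noisy)

# Round-trip through aggressive JPEG compression.
buf = io.BytesIO()
frame.save(buf, format="JPEG", quality=10)
compressed = Image.open(io.BytesIO(buf.getvalue()))

print(f"high-frequency energy, original:   {high_freq_energy(frame):.1f}")
print(f"high-frequency energy, compressed: {high_freq_energy(compressed):.1f}")
# The compressed figure drops sharply: the cues a detector relied on are gone.
```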

Intel’s attempt at creating a deepfake detector, FakeCatcher, serves as a case in point. It studied facial blood-flow patterns to identify deepfakes. Veins change colour when the heart pumps blood, and Intel’s detector collected blood-flow signals from across a person’s face in a given image or video and combined them to judge whether the footage was genuine. The BBC reported that Intel’s detector was good at identifying lip-synced deepfake videos, where the mouth and voice were altered. However, the more compressed or distorted a video was, the more likely the detector was to deem it fake, even when it was not fabricated.
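
For intuition, here is a toy sketch of the underlying photoplethysmography idea; it is emphatically not Intel’s implementation. A heartbeat causes faint periodic colour changes in facial skin, strongest in the green channel, so averaging that channel over a face region frame by frame yields a pulse-like signal whose frequency spectrum can be inspected. The synthetic `frames` array below stands in for cropped face footage.

```python
# Toy sketch of the blood-flow (photoplethysmography) idea, NOT FakeCatcher.
# Assumes NumPy only; `frames` is a stand-in array of cropped face regions
# with shape (T, H, W, 3).
import numpy as np

def pulse_spectrum(frames: np.ndarray, fps: float):
    """Per-frame mean green intensity -> frequency spectrum of that signal."""
    signal = frames[:, :, :, 1].mean(axis=(1, 2))  # green-channel average
    signal = signal - signal.mean()                # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs, spectrum

# Synthetic "real" clip: a 1.2 Hz (72 bpm) pulse buried in sensor noise.
rng = np.random.default_rng(1)
t = np.arange(300) / 30.0                          # 10 seconds at 30 fps
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)
frames = rng.normal(128, 2, (300, 32, 32, 3)) + pulse[:, None, None, None]

freqs, spectrum = pulse_spectrum(frames, fps=30.0)
band = (freqs > 0.7) & (freqs < 4.0)               # plausible heart rates
peak = freqs[band][np.argmax(spectrum[band])]
print(f"dominant frequency: {peak:.2f} Hz (~{peak * 60:.0f} bpm)")
# A synthesised face would typically show no such clean peak in this band.
```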

Complicating the landscape further is the very nature of deepfake creation. At the heart of this technology are Generative Adversarial Networks (GANs). These pit a generator, which crafts the fakes, against a discriminator, which strives to identify them, in a continuous contest. This iterative process of creation and detection means that as soon as one telltale flaw is identified, the generator adapts to eliminate it, making subsequent detection even more challenging.
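
A minimal sketch of that adversarial loop, assuming PyTorch and toy two-dimensional data (the networks, data, and hyperparameters are illustrative choices, not anyone’s production system), shows the dynamic: the discriminator is trained to separate real from fake, and the generator is trained, in alternation, to defeat it.

```python
# Minimal GAN training loop on toy data: each side's progress erodes the
# other's, which is why detection is a moving target.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Toy "real" data: points on a Gaussian blob centred at (2, 2).
    return torch.randn(n, 2) * 0.3 + torch.tensor([2.0, 2.0])

for step in range(2000):
    # Discriminator update: label real samples 1, generated samples 0.
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real_batch()), torch.ones(64, 1)) + \
             bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: try to make the discriminator call fakes real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(fake.mean(dim=0))  # drifts toward the real blob's centre (2, 2)
```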

The second method to identify a deepfake is through provenance, which embeds media with metadata tracking attributes like authorship, creation date, and edit history. Organisations like the Coalition for Content Provenance and Authenticity (C2PA) have developed open technical standards for the certification and authentication of media provenance. While research indicates that this method can reduce trust in deceptive content, it is not foolproof: research also found that where provenance data is incomplete, users may distrust even genuine media. Moreover, without a global standard, its effectiveness is limited.
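
Conceptually, provenance amounts to cryptographically binding metadata to the exact media it describes, so tampering with either becomes detectable. The sketch below is a simplification using only the Python standard library; an HMAC with a demo key stands in for the certificate-based signatures that real standards such as C2PA actually use.

```python
# Simplified provenance sketch, NOT the actual C2PA format: bind metadata
# such as authorship and edit history to a hash of the media, so altering
# either the pixels or the record is detectable.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-standing-in-for-a-certificate"  # illustrative only

def attach_provenance(media: bytes, metadata: dict) -> dict:
    """Produce a provenance record bound to this exact media payload."""
    record = dict(metadata, media_sha256=hashlib.sha256(media).hexdigest())
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def verify_provenance(media: bytes, record: dict) -> bool:
    """Check the signature and that the media still matches its hash."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["media_sha256"] == hashlib.sha256(media).hexdigest())

video = b"...raw media bytes..."
rec = attach_provenance(video, {"author": "newsroom", "created": "2023-09-01"})
print(verify_provenance(video, rec))                # True: intact
print(verify_provenance(video + b"tampered", rec))  # False: media altered
```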


Protecting our shared reality

So, where does this leave the G20 in terms of policy options? Simply banning deepfakes would be ineffective, given the cross-border nature of content transmission. Alternatively, the US National Security Agency suggests that entities deploy a combination of detection and provenance techniques to battle deepfakes. Despite its fallibility, detection plays an important role in forensic analysis when provenance information is absent. However, detection is expensive because it demands significant training data and computing power. Illustratively, the cost of using Reality Defender, a deepfake detection tool, can run from thousands to millions of US dollars to cover the cost of “expensive graphics processing chips and cloud computing power”. The figure raises the question: who will fund the cost of detection?

G20 members should look to the funding of energy transitions as inspiration for a solution. To battle climate change effectively, many countries established funds to facilitate large industrial energy transitions from carbon-intensive to low-carbon sources and technologies. The existential impact of deepfakes on our world is arguably close to the threat presented by climate change. It stands to reason, then, that States should step in and finance the adoption and upkeep of deepfake detection; this looming threat necessitates such proactive intervention. Financing and championing the development of deepfake detection technologies is a pivotal step in safeguarding digital truth. It is an investment not just in technology, but in the very foundation of our shared reality.

The author is a consultant on Emerging Tech at Koan Advisory. Views are personal.

This article is part of ThePrint-Koan Advisory series that analyses emerging policies, laws and regulations in India’s technology sector. Read all the articles here.

(Edited by Theres Sudeep)
