Facebook is banning ‘deepfakes’ but important questions still remain

Facebook's new policy of removing 'deepfakes' — manipulated media that look authentic — is narrowly tailored to address the platform's specific concerns.


Facebook says that it is banning “deepfakes,” those high-tech doctored videos and audio clips that are essentially indistinguishable from the real thing.

That’s excellent news — an important step in the right direction. But the company didn’t go quite far enough, and important questions remain.

Policing deepfakes isn’t simple. As Facebook pointed out in its announcement this week, media can be manipulated for benign reasons, for example to make video sharper and audio clearer. Some forms of manipulation are clearly meant as jokes, satires, parodies or political statements — as, for example, when a rock star or politician is depicted as a giant. That’s not Facebook’s concern.

Facebook says that it will remove “misleading manipulative media” only if two conditions are met:

  • “It has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.”
  • “It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”

Those conditions are meant to be precisely tailored to Facebook’s concern: the use of new or emerging technologies to mislead the average person into thinking that someone said something they never said.

Facebook’s announcement also makes it clear that even if a video is not removed under the new policy, other safeguards might be triggered. If, for example, a video contains graphic violence or nudity, it will be taken down. And if it is determined to be false by independent third-party fact-checkers, those who see it or share it will see a warning informing them that it is false. Its distribution will also be greatly reduced in Facebook’s News Feed.

The new approach is a major step in the right direction, but two problems remain.

The first is that even if a deepfake is involved, the policy does not apply if it depicts deeds rather than words. Suppose that artificial intelligence is used to show a political candidate working with terrorists, engaging in sexual harassment, beating up a small child or using heroin.

Nothing in the new policy would address those depictions. That’s a serious gap.

The second problem is that the prohibition is limited to products of artificial intelligence or machine learning. But why?

Suppose that videos are altered in other ways — for example, by slowing them down so as to make someone appear drunk or drugged, as in the case of an infamous doctored video of Nancy Pelosi.

Or suppose that a series of videos, directed against a candidate for governor, is produced not with artificial intelligence or machine learning, but nonetheless in such a way as to run afoul of the first condition; that is, the videos have been edited or synthesized so as to make the average person think that the candidate said words that she did not actually say. What matters is not the particular technology used to deceive people, but whether unacceptable deception has occurred.

Facebook must fear that a broader prohibition would create a tough line-drawing problem. In its public explanation, it also noted that if it “simply removed all manipulated videos flagged by fact-checkers as false,” the videos would remain available elsewhere online. By labeling them as false, the company said, “We’re providing people with important information and context.” Facebook seems to think that removal does less good, on balance, than a clear warning: “False.”

Maybe so, but in the context of deepfakes, Facebook has now concluded that removal is better than a warning. In terms of human psychology, that’s almost certainly the right conclusion. If you actually see someone saying or doing something, some part of your brain will think that they said or did it, even if you’ve been explicitly told that they didn’t.

There’s room for improvement, then, in Facebook’s new policy; the prohibition ought to be expanded. But steps in the right direction should be applauded. Better is good. – Bloomberg


