Facebook had cracked down on Bangladeshi ad farm targeting Utah before U.S. midterms

As part of its efforts to prevent 'meddling' in elections, the social media giant promises to better monitor content ahead of elections in India, the Philippines, Ukraine and Thailand.

San Francisco: One day in mid-October at 11:24 a.m., an alert went off in Facebook Inc.’s election War Room. Political news in a Utah congressional district wasn’t coming from inside the U.S. — a mismatch Facebook had tuned its software algorithms to detect.
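Facebook has not disclosed how that alert actually works. In rough terms, though, the detector compares where content about a local race is being posted from with where the race is being contested. A minimal sketch of that idea — with entirely hypothetical data, field names and a made-up threshold, not Facebook's real system — might look like this:

```python
from collections import Counter

# Hypothetical records: each post pushing news about a U.S. House
# district carries the country inferred for the account that posted it.
posts = [
    {"district": "UT-04", "poster_country": "BD"},
    {"district": "UT-04", "poster_country": "BD"},
    {"district": "UT-04", "poster_country": "US"},
]

FOREIGN_SHARE_THRESHOLD = 0.5  # made-up cutoff for raising an alert


def geographic_mismatch_alerts(posts, home_country="US"):
    """Flag districts where most political posts originate abroad."""
    by_district = {}
    for post in posts:
        counts = by_district.setdefault(post["district"], Counter())
        counts[post["poster_country"]] += 1

    alerts = []
    for district, countries in by_district.items():
        total = sum(countries.values())
        foreign_share = (total - countries.get(home_country, 0)) / total
        if foreign_share > FOREIGN_SHARE_THRESHOLD:
            alerts.append((district, foreign_share))
    return alerts


print(geographic_mismatch_alerts(posts))  # e.g. [('UT-04', 0.666...)]
```

In the Utah case, a flag like this was only the first step; as the next paragraph describes, a human still reviewed the activity before anything was removed.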

A data scientist in the election-monitoring center at Facebook headquarters in Menlo Park, California, inspected the activity manually and discovered, at 11:47 a.m., that the source spreading the content was an ad farm in Bangladesh. Ten minutes later, an operations specialist removed all the suspect activity.

That’s a real example from the 2018 midterm elections, shared by Facebook executives in a recent slide presentation in Paris, meant to demonstrate that the company’s tools are effective when working correctly. The slides, viewed by Bloomberg News, show in detail how Facebook has improved its process for rooting out bad actors using tactics similar to those Russian operatives used in 2016. The message: Facebook will be more prepared to take on misinformation and meddling in this year’s elections — in India, the Philippines, Ukraine, Thailand and other countries — as well as the U.S. presidential race in 2020.

The presentation details how Facebook has come to understand how networks of impostor accounts use the social network to amplify divisive ideas on immigration, guns and race relations. Since Russia used this strategy in 2016, the company says, it has built detection algorithms and involved more humans in the process, especially in cases where it needs to decide whether a user account is real or fake. Around the 2016 election, Russia often used real people, posting manually with fake identities, who would coordinate with each other to make ideas go viral.


Still, the company faces new concerns: If Facebook is focusing on cleaning up strategies used more than two years ago, is Russia a step ahead, using different tactics to avoid detection? Researchers have already tracked the migration of fake news to Facebook’s groups, for example, and to encrypted messaging, where not even Facebook can see what users are saying to each other. Chief Executive Officer Mark Zuckerberg has explained that encryption and private communication will be central to the company’s future product development.

Yet encryption isn’t mentioned in Facebook’s slide deck, which was presented at TICTeC, a conference on the civic impacts of technology, by employees Samidh Chakrabarti, Monica Lee and Antonia Woodford. Instead, the company is focused on drawing conclusions from patterns of behavior that are visible, and acknowledges the solutions may have to vary for different countries. Facebook is working on “how to extend protection globally while adapting to each country’s systems,” according to one of the slides.

The presentation also reveals areas where Facebook sees the need to improve — especially when it comes to misinformation and political advertising.

Facebook recently started asking political advertisers to verify their identities so that their ads can be shown in a public database. That system will be rolled out globally this year. The new rule “routinely blocks foreign actors from running political ads,” Facebook says, but the transparency effort has gaps. In the slide deck, the company said it is grappling with “how, if at all, to handle funders who are opaque entities,” such as limited liability companies in the U.S., and “how to define what should be a political ad, without being overly broad.”

The company also explains in the slides that while it took down 45,000 instances of content spreading misinformation about voting or trying to suppress votes, its fact-checking operations have a hard time addressing personal anecdotes — like someone claiming their voting machine didn’t work — that could discourage others from voting. Facebook also says it’s unclear how it will solve “more subtle forms of voter intimidation,” including attempts to convince people their votes won’t make a difference.

For now, Facebook says in the presentation, it’s going to use its social networks to encourage people to vote with ballot guides and election reminders, as well as colorful “I voted” stickers on Instagram.

With elections looming in countries around the world, the company’s efforts to keep news and voting information authentic will be tested almost constantly. It’s unclear whether they’ll be enough to keep meddling and fake news at bay — or to stave off government scrutiny.
