
Facebook, Twitter & Google may have a lot to do with the rise of hatred & polarisation


An alarming possibility is that these companies’ automated algorithms have learned they perform best by setting us against one another.

Donald Trump’s claim that Google’s search engine is biased against conservatives would be more interesting if it didn’t rest on the crazy view that most mainstream outlets such as the Associated Press, Reuters, sports channel ESPN or Business Insider are hotbeds of radicalized left-wing politics. Still, Republicans will generate plenty of hot air over the matter even if Google is not present during congressional hearings Wednesday on how tech companies manage data flows.

Unfortunately, the hearings – which will include representatives from Facebook and Twitter – almost certainly won’t examine a much more important issue: how the business model of these three tech giants may have a lot to do with the rise of hatred, violence and political polarization, not only in the United States but worldwide. An alarming possibility is that these companies’ automated algorithms, which analyze human behavior to boost user engagement, have learned that they perform best by setting us against one another.

There are, of course, many worrying things about the internet and social media. In his book “The Shallows,” journalist Nicholas Carr warned that their use was changing our brains. We’re losing our ability to focus deeply on a single task, and becoming more adept at handling fragments of information and switching frequently between shallow tasks. Technology is having a profound effect on how we read, as people increasingly skim online text, sampling the first line and then trying to spot words thereafter, capturing only the gist of the text. The skimming reader generally fails to grasp the complexity of arguments, and has little time to form creative thoughts of his or her own.

Far worse is how the technology may be acting as a vast optimized engine of social degradation. That’s the argument of a recent book by former tech engineer Jaron Lanier, known for his early work on virtual reality. In the current business model, Google, Twitter and Facebook offer free services and use them to gather immense quantities of user data. The companies’ algorithms then use that data to help advertisers feed users optimized stimuli to modify their behavior — encouraging them to buy stuff, for example. Of course, lots of the free services are great, as are the things the ads often help people find. We’re so used to this model that, aside from sporadic privacy concerns, we see it as almost natural.

But Lanier’s insightful point is that this model may also be a natural route to disaster, for a disconcertingly simple reason. Facebook, for example, makes money by helping advertisers target messages — including lies and conspiracies — to the people most likely to be persuaded. The algorithms looking for the best ways to engage users have no conscience, and will simply exploit anything that works. Lanier believes that the algos have learned that we’re more energized if we’re made to feel negative emotions, such as hatred, suspicion or rage.

“Social media is biased not to the left or the right,” as he puts it, “but downward,” toward an explosive amplification of negativity in human affairs. In learning how best to manipulate people, tech algorithms may inadvertently be causing mass violence and progressive social degradation.

Lanier doesn’t support this argument with hard data, but plenty of other research makes the hypothesis sound all too plausible. For example, studies looking at how different kinds of emotions affect the engagement of online viewers find that messages designed to stir negative emotions, including fear or anger, tend to work better. A United Nations report concluded that the spread of rumors on Facebook and other social media was crucial in sparking genocidal violence against the Rohingya in Myanmar. Such messaging also appears to have played a significant role in driving the recent outbreak of anti-refugee feeling in Germany.

The link seems to be quite general, as suggested by another recent study tying Facebook usage to outbreaks of violence against immigrants across Germany. European researchers examined all 3,335 anti-refugee attacks in Germany over a two-year period, seeking correlations between their locations and variables such as local wealth, support for far-right politics, the number of refugees and so on. The most significant explanatory factor turned out to be local Facebook use: in the data, a one-standard-deviation rise in per-person Facebook use above the national average corresponded to a 50 percent increase in the number of attacks on refugees.
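To make that effect size concrete, here is a minimal sketch in Python of how such a relationship might be estimated. The data are synthetic and the variable names hypothetical; the researchers’ actual dataset and statistical methodology are considerably richer than this.

```python
import numpy as np

# Illustrative sketch with synthetic data -- not the researchers'
# dataset or code. We simulate municipalities whose attack counts
# rise by 50 percent for each standard deviation of Facebook use
# above the national average, then recover that effect with a fit.
rng = np.random.default_rng(0)
n = 1000  # hypothetical number of municipalities

# Per-person Facebook use, standardized: 0 is the national average,
# 1 is one standard deviation above it.
usage_z = rng.standard_normal(n)

baseline_rate = 2.0  # hypothetical average attacks per municipality
multiplier = 1.5     # the 50 percent increase per standard deviation
attacks = rng.poisson(baseline_rate * multiplier ** usage_z)

# A simple log-linear fit recovers the multiplier from the counts.
slope, _ = np.polyfit(usage_z, np.log1p(attacks), 1)
print(f"Estimated multiplier per std. dev.: {np.exp(slope):.2f}")
# Prints a value near 1.5 (the log1p smoothing biases it slightly low).
```

The sketch only shows what “one standard deviation means a 50 percent increase” cashes out to in practice; it is not the study’s actual specification.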

At least in part, these may be the tragic human consequences of mechanical algorithms relentlessly acting to exploit a truth they’ve discovered — that paranoid messaging taps into deep human emotions and instincts, and therefore tends to get the most attention.

What can be done? There’s no reason the advertising-based model needs to remain dominant, especially if we realize the immense damage it’s causing. An alternative would be to give up our free services — Gmail, Facebook, Twitter — and pay for them directly. If social media companies made money from their users, instead of from third parties aiming to prey on those users, they would be more likely to serve users’ needs. Making it happen will take concerted pressure from governments and users alike, since the companies profit so handsomely from the current set-up, despite the toll on the rest of the world. But many computer scientists, such as those at the Center for Humane Technology, have recognized the problem and think it can be fixed.

Get rid of the advertising model, Lanier notes, and anyone will still be completely free to pay to see poisonous propaganda. It’s just that no one will be able to pay in secret to have poison directed at someone else. That would make a big difference. – Bloomberg
