
US, EU pledge to make algorithms trustworthy. It needs to be seen if it’ll work

The pledge, made during the Trade & Technology Council's inaugural meeting, is a marker for the Biden administration’s move toward more accountability & regulation of Big Tech.


Over the past 18 months, humanity has dutifully trooped into lockdown and back out again — offering up a massive amount of personal data along the way. Remote work, Zoom schooling and contact-tracing became part of daily life; even today, in the name of public health, diners in Paris have their digital health passport scanned before opening the menu.

At the same time, we’ve been bombarded with evidence of how the algorithms messing with our data can go wrong. Vaccine misinformation is shared at lightning speed on social networks. Germany’s recent election was hit by fake-news campaigns. In England, students chanted “f*ck the algorithm” after one was used to grade test scores. And authoritarian regimes have used the pandemic to expand their own grip on digital surveillance systems, from China’s “social credit” scoring to facial recognition in Russia.

That’s why it’s encouraging to see the U.S. and European Union pledge to work together to tackle the “algorithmic amplification” of harmful content online, and to ensure artificial-intelligence systems are “trustworthy” and share “democratic values.” It was one of many promises made at the inaugural meeting of the transatlantic Trade and Technology Council.

It will take this kind of cooperation to counter the scramble for AI supremacy pursued by China and Russia, both of which have been accused of waging information warfare and launching cyberattacks on Western targets while further restricting their citizens’ liberties at home.

The pledge is also a marker for the Biden administration’s move toward more accountability and regulation of Big Tech. To be sure, the U.S. — home to the likes of Facebook Inc. and Amazon.com Inc. — has taken a slower and more scattered approach than the EU, which has no equivalent of Silicon Valley and which can afford to take a tougher stance on “gatekeeper” platforms as a result. (Ahead of the TTC meeting, U.S. officials clearly voiced their frustration with “overly broad” regulation.) But the Federal Trade Commission has warned it will punish companies using harmful or biased algorithms.

What needs to be fleshed out is how to ensure that an algorithm is trustworthy. Enforcing rules against opaque processes is the “million-dollar question” for tech regulators, says Wojciech Wiewiorowski, the EU’s top data-protection supervisor.

On both sides of the Atlantic, a lot of hope has been placed in efforts to make AI “explainable.” But this isn’t always enough: The U.K. students angry at their exam-scoring algorithm weren’t reassured by the explanation that it used historical school performance data to standardize results. Likewise, algorithmic processes are often treated as closely guarded trade secrets: when the New York Police Department tried to cancel its contract with Palantir Technologies Inc. and requested copies of its analysis, the software firm refused to provide them in a standardized format.




Understanding AI systems

One goal should be rigorous testing of AI systems before they’re released into the wild, and frequent auditing afterwards. Dr. Carissa Véliz, of Oxford University’s Institute for Ethics in AI, draws parallels with the randomized controlled trials that medical treatments undergo for approval by the U.S. Food and Drug Administration. An algorithm used to assess job candidates, for example, could be tested in an experiment with a control group and followed up on afterwards. Companies could also be required to share data with independent researchers.
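To make that concrete, here is a minimal sketch, in Python, of what such a trial might look like. Everything in it is invented for illustration: the toy scoring rules, the hiring threshold and the simulated applicant pool are assumptions, not a description of any real system, and a genuine audit would need careful statistical design and real outcome data.

# Hypothetical sketch: a randomized controlled trial for a hiring algorithm.
# All names and the toy scoring rule are invented for illustration.
import random

random.seed(42)

def algorithmic_score(candidate):
    # Toy stand-in for a vendor's model; deliberately biased against
    # one group so the trial has something to detect.
    penalty = 0.1 if candidate["group"] == "B" else 0.0
    return candidate["skill"] - penalty

def human_score(candidate):
    # Noisy human baseline used in the control arm.
    return candidate["skill"] + random.gauss(0, 0.05)

# Simulate an applicant pool with equal underlying skill across groups.
applicants = [{"group": random.choice("AB"), "skill": random.random()}
              for _ in range(10_000)]

# Randomly assign each applicant to the treatment (algorithm) or
# control (human) arm, as in a medical trial.
for a in applicants:
    a["arm"] = random.choice(["algorithm", "human"])
    score = algorithmic_score(a) if a["arm"] == "algorithm" else human_score(a)
    a["hired"] = score > 0.7  # fixed hiring threshold

# Compare hiring rates by group within each arm. A gap that appears in
# the algorithm arm but not the control arm points at the model.
for arm in ("algorithm", "human"):
    for group in ("A", "B"):
        pool = [a for a in applicants if a["arm"] == arm and a["group"] == group]
        rate = sum(a["hired"] for a in pool) / len(pool)
        print(f"{arm:9s} arm, group {group}: hire rate {rate:.1%}")

In this toy setup, a hiring gap between the two groups shows up only in the algorithm arm, not in the human-judged control arm, which is precisely the kind of signal a regulator or independent auditor would look for.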

Another area the EU and U.S. should work on together is clarifying what happens when algorithms go wrong, and whom to hold responsible. Regulators know existing product-liability rules will look out of date in a world of overlapping automated processes where human decisions take a back seat. When an Uber self-driving test car fatally struck a pedestrian in 2018, it was the human backup driver who was charged with negligent homicide. But it’s chilling that the technology classified the pedestrian as another car, then an unidentified object, then a bicycle in the seconds before the collision. In a future where humans are overruled by algorithms, where will responsibility truly lie?

There needs to be clarity on what untrustworthy AI looks like, too. We should be prepared to ban technologies that are too dangerous to let loose untested. Several countries want to prohibit autonomous lethal drones, for example. Wiewiorowski has proposed a ban on surveillance-based targeted ads. Social scoring was explicitly called out at the TTC.

There’s a long road ahead to building human-centric AI. But by the time the next TTC meeting takes place, it would help to know not just what is trustworthy, but also what isn’t. —Bloomberg


