
Google, Twitter are making internet safe in 2 ways—inaction, overaction. There’s a third

Artificial intelligence and machine learning cannot always distinguish between offensive and non-offensive media.


Last week, Peiter ‘Mudge’ Zatko, the former head of security at Twitter, filed a complaint with the United States’ Securities and Exchange Commission, or SEC, alleging that Twitter’s method of measuring bots was “misleading”. In his complaint, Zatko added that “executives are incentivized (with bonuses of up to $10 million) to boost user counts rather than remove spam bots”. In the same week, The New York Times reported that Google had wrongly flagged images a man took of his son’s groin for medical purposes as child sexual abuse material, or CSAM.

Both these incidents highlight the binaries into which digital media (self) regulation has fallen—inaction on one end and overaction on the other.

Like many other large and responsible digital platforms, Google uses a combination of technologies such as artificial intelligence and machine learning (AI/ML) to identify CSAM. Reliance on such tools is a salient feature of Regulatory Technology (or RegTech, as it is popularly called), which can help address complex regulatory issues such as CSAM. In the case at hand, a father in San Francisco took a picture of his child’s genitals and uploaded it to a telemedicine platform at his doctor’s request. However, the images were also automatically backed up to Google Photos from the father’s phone and were subsequently flagged as CSAM. The images were simultaneously reported to law enforcement, but the police declined to file charges against the father once they understood the context. Unfortunately, the father was also locked out of his Google account, for which he found no immediate recourse.


Responsive design is key

It is important to underline here that AI/ML cannot always distinguish between media exchanged for medical purposes and media that constitutes CSAM. A nuance separates these categories, and algorithms cannot capture it. The only way for tech companies to draw such fine lines is to add a layer of human intervention at an appropriate stage. In this case, that would perhaps have meant verifying the father’s counterclaim once he was locked out of his account. Ultimately, it is trained people who will need to adjudicate, because they can understand the context of an image or video, whereas algorithms can only identify patterns.
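To illustrate the point, here is a minimal sketch of what such a human-in-the-loop design could look like. It is a hypothetical illustration, not any platform’s actual pipeline; the classifier score, thresholds and review routing are assumptions made purely for the example.

```python
# A minimal, hypothetical sketch of human-in-the-loop moderation; not any
# platform's actual pipeline. The classifier score, thresholds and review
# routing below are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    BLOCK_AND_REPORT = "block_and_report"
    HUMAN_REVIEW = "human_review"


@dataclass
class ScanResult:
    score: float          # model's confidence that the media is abusive
    user_appealed: bool   # has the account holder contested the automated flag?


def route(result: ScanResult,
          block_threshold: float = 0.98,
          review_threshold: float = 0.70) -> Decision:
    """Act automatically only on high-confidence matches; send borderline or
    contested cases to trained human reviewers who can weigh context, such as
    a doctor-requested medical photo."""
    if result.user_appealed:
        return Decision.HUMAN_REVIEW
    if result.score >= block_threshold:
        return Decision.BLOCK_AND_REPORT
    if result.score >= review_threshold:
        return Decision.HUMAN_REVIEW
    return Decision.ALLOW


# Example: a medical image the model is unsure about goes to a human reviewer.
print(route(ScanResult(score=0.82, user_appealed=False)))  # Decision.HUMAN_REVIEW
```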

In complete contrast to a RegTech-centric approach are the goings-on at Twitter, which has long struggled with the problem of bots despite possible technological solutions. Officially, the company claims that bots constitute less than five per cent of its user base. But Tauhid Zaman, an associate professor of operations management at Yale University, claimed in his August 2022 paper that the number is anywhere between 1 and 14 per cent, depending on the topic being discussed on the platform. It is also well-established that spam bots can be weaponised for disinformation and can cause a litany of other user harms. The lack of an accurate count of bots is a key reason why Twitter’s sale to Tesla founder Elon Musk hasn’t gone through yet, according to reports.

To solve the long-standing problem of bots, Twitter could leverage multiple RegTech-aided methods to confirm the veracity of users while still allowing for the anonymity that many users cherish. And it could ramp up the use of RegTech to clean up unaccounted-for spam bots, as it reportedly already does.

That is, authenticity and anonymity needn’t be competing objectives on social media. For instance, social media businesses could adopt authentication methods in which users answer questions based on relevant public datasets. They could additionally provide options for document-based verification to reduce friction. Investing in technology that gives users more agency is key. Moreover, services like Twitter could differentiate between authenticated anonymous users and authenticated non-anonymous users to provide a more authentic social experience.
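As a rough illustration of how authenticity and anonymity can be treated as separate properties, consider the sketch below. The account model, verification methods and badge labels are hypothetical assumptions, not any platform’s real data model; they simply show that a user can be verified without disclosing a real name.

```python
# A hypothetical sketch: authentication and anonymity as independent account
# properties. The classes and labels are assumptions made for illustration.
from dataclasses import dataclass
from enum import Enum, auto


class VerificationMethod(Enum):
    NONE = auto()
    KNOWLEDGE_BASED = auto()   # answers checked against relevant public datasets
    DOCUMENT_BASED = auto()    # optional document upload for users who prefer it


@dataclass
class Account:
    handle: str
    uses_real_name: bool                # anonymity is a presentation choice
    verification: VerificationMethod    # authenticity is a separate property

    @property
    def is_authenticated(self) -> bool:
        return self.verification is not VerificationMethod.NONE


def badge(account: Account) -> str:
    """Distinguish authenticated anonymous users from authenticated
    non-anonymous users without revealing anyone's identity."""
    if not account.is_authenticated:
        return "unverified"
    return "verified" if account.uses_real_name else "verified (anonymous)"


# Example: a pseudonymous but verified account still earns a trust badge.
print(badge(Account("@satire_handle", False, VerificationMethod.KNOWLEDGE_BASED)))
```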

The importance of digital evolution

Digital products and services must evolve constantly and continually invest in technology that furthers consumer interest. Today, it seems a little odd to think that Twitter did not even have a built-in search feature, but in 2008 that was exactly the case. That year, it bought Summize, a search engine that allowed users to hunt for tweets in real time. In 2011, it acquired Julpan, another real-time search engine that analysed Twitter posts and served topic-specific content to users. It is quite possible that the company never imagined needing a search engine on its service, and, similarly, cannot imagine a social media ecosystem with fewer bots. But life on the internet, just as life in the real world, is uncertain and non-deterministic.

What is certain is that RegTech is an important innovation to improve public trust in the internet. After all, issues like CSAM and fake accounts are too widespread to be weeded out by human beings alone. Equally, algorithms, rooted as they are in pre-set methods, are ill-equipped to decipher and navigate the hazy maze of user behaviour on the internet. RegTech needs the support of able hands and sharp minds acutely aware of diverse contexts if it is to succeed.

Digital businesses need to balance user rights with safety, inclusiveness with exceptions, and security with convenience. And they need to do all of this even when they achieve scale. This involves tradeoffs that are not easy but come with the territory. Sensible design and deployment of RegTech, coupled with the nuance that only humans bring to the table, can help deliver on the promise of a safe internet.

The authors work at Koan Advisory Group, a technology policy consulting firm. Views are personal.

This article is part of ThePrint-Koan Advisory series that analyses emerging policies, laws and regulations in India’s technology sector. Read all the articles here.

(Edited by Zoya Bhatti)
