
Facebook’s reluctance to take down problem posts must force India towards co-regulation

Guidelines of Twitter and Facebook have drawbacks because they do not apply universally and are determined by platforms themselves.


On 14 August, a report in The Wall Street Journal claimed that Facebook’s public policy director for India had advised the platform against taking down hate speech videos posted by a Telangana BJP MLA, among others. According to the report, the official argued that doing so would hurt the company’s business prospects in India. The revelation has kicked off a political furore, with the Opposition alleging that the Menlo Park, California-based social media giant is biased towards the ruling party. Congress’ Shashi Tharoor, Chairman of the Parliamentary Standing Committee on Information Technology, has stated that the Committee will examine the issue and seek an explanation from Facebook.

Facebook’s reluctance to take down problematic posts brings into question, yet again, the manner in which social media platforms moderate content. In October 2019, human rights activists accused Twitter of removing thousands of tweets critical of the Narendra Modi government’s policy on Kashmir. Around the same time, Facebook was criticised for its failure to take down hate speech targeting minorities in Assam.


Transparency in content moderation

The heart of the problem lies in the lack of transparency around content moderation by social media companies. Most major platforms publish community guidelines that inform users, in detail, about the scope of permissible content. Platforms can take down any video, audio, image or text that violates these guidelines, even though there is no legal obligation on them to do so. Since no clear enforcement mechanism exists, platforms evolve and implement their own guidelines. This discretion paves the way for political considerations to influence decisions. The WSJ report also describes how Facebook agreed to abide by hate speech rules in Germany that are stricter than those in other jurisdictions, how it agreed to append “correction notices” to news deemed false by the government in Singapore, and how it blocked access to dissidents’ political content in Vietnam.

When each social media company frames its own guidelines, the result is a multiplicity of standards applicable to online content. No mechanism exists to harmonise them, and each platform applies its own internal policy to the same piece of content. The absence of a universal standard confuses users who want to understand how to behave online, and at the same time makes it difficult to hold platforms accountable for their decisions. Often, when a platform takes down content for violating community guidelines, the user is not told why it constitutes a violation, leaving little scope to contest a wrongful takedown. Self-regulation thus empowers platforms at the cost of clarity and accountability in conduct standards.




Issues with content regulation by government

While government regulation of social media platforms can appear to be a solution, it is likely to be a problematic one. An analysis of the government’s approach to regulating broadcast media helps illustrate the issues that could crop up with new media. Rules 6 and 7 of the Cable Television Networks Rules, 1994 contain the Programme Code and the Advertising Code respectively, which lay down standards for programmes and advertisements shown on cable television. While most provisions in these rules track the reasonable restrictions on free speech outlined in Article 19(2) of the Indian Constitution, some are broader.

For instance, Rule 6(e) of the Cable Television Networks Rules prohibits the exhibition of a programme promoting “anti-national attitudes”, a phrase that does not appear in Article 19(2). The primary focus of Rule 6(e) is to prevent content that incites violence or threatens the maintenance of law and order. However, “anti-national attitudes” is not qualified by any requirement of a demonstrable nexus between such attitudes and incitement to an offence or a breakdown of law and order. Content merely critical of the State could therefore be brought under its ambit, a restriction on free speech that goes beyond what Article 19(2) envisages.

Even restrictions that do fall within the ambit of Article 19(2) are vaguely worded and can be interpreted regressively, since the rules do not account for context or changing social norms. For instance, Rule 6(k) prohibits the depiction of women’s bodies in a way that “is likely to deprave, corrupt or injure public morality or morals.” Such phraseology can be used to restrict content that depicts intimacy or educates viewers about sexual and domestic abuse, because the rules leave no room for a nuanced approach.

Community guidelines of platforms are better at striking a balance between preserving freedom of expression and moderating content online. Twitter’s “sensitive media policy” is a good example. It recognises that some users may want to share sensitive content to document social realities, while remaining mindful that others may not want to be exposed to such tweets. Users who post violent or sexually explicit content, for example, must mark their accounts as sensitive. The policy contains detailed examples of the kind of content that comes under its purview, which makes it easier to understand and apply. However, as pointed out earlier, such guidelines have drawbacks because they do not apply universally and are determined by the platforms themselves.




Co-regulatory approach to content regulation

In this context, a co-regulatory model for the design and enforcement of content standards could serve as a viable middle path. Such a model facilitates collaboration among government authorities, platforms and civil society organisations. Because of its participatory approach, it is better suited to evolving standards that can apply across the diverse forms of content that thrive on different platforms. It also reassures users that platforms are working towards legitimate objectives, while giving platforms enough flexibility to decide how those objectives should be achieved.

Moreover, co-regulation is not static but an ongoing process in which standards are continuously reviewed and assessed. This enables the model to adapt to changing times and technology. Another advantage of co-regulation is its ability to resolve the information asymmetry between platforms and regulators: policymakers often lack the understanding or expertise needed to engage with platform-related issues. A co-regulatory model involves continuous dialogue among all the participants in the platform economy, which leads to informed decision-making.

Other jurisdictions have recognised the benefits of a co-regulatory approach to content regulation. In May 2016, the European Commission, along with several technology companies, developed the Code of Conduct on Countering Illegal Hate Speech Online, which aims to ensure that takedown requests are decided quickly. The Code requires companies to assess removal requests against their terms of service and community guidelines, as well as against national laws transposing EU law on combating racism and xenophobia. Under the Code, companies have committed to reviewing a majority of these requests within 24 hours, taking down content where necessary while respecting freedom of speech online. They have also agreed to educate users about the types of content their rules and community guidelines do not permit.

Compliance with Europe’s Code of Conduct is evaluated through monitoring exercises organised by a network of civil society organisations, which also work with the European Commission and the platforms to develop counter-narratives and campaigns against hate speech. There have been concerns about the Code’s ability to protect freedom of expression, but its existence signals a willingness to explore solutions beyond traditional regulatory frameworks. A similar shift in thinking must inform India’s approach to content regulation. If stakeholders collaborate with one another, the country can come a step closer to standards for online content that are fair, transparent and enforceable.

The author works at Koan Advisory Group, a technology policy consulting firm. Views are personal.

This article is part of ThePrint-Koan Advisory series that analyses emerging policies, laws and regulations in India’s technology sector. Read all the articles here.
