Jurisdictions the world over are grappling with the issue of regulating user content on social media. India, too, is in the process of evolving its approach. Public consultations on the proposed amendments to the Information Technology Rules, 2021 were recently held. A point of considerable emphasis was the design of a grievance redressal mechanism to resolve user complaints against actions taken by social media platforms.
Why is this important? Unlike their initial avatars, social media platforms today actively moderate and curate the content they host. They do so by removing offending speech, restricting access to such speech in particular jurisdictions, and suspending or terminating user accounts. Platforms moderate content to comply with statutory mandates as well as their own terms of service, and to increase user engagement.
An effective grievance redressal mechanism
The exercise of these powers against high-profile accounts, such as that of former US President Donald Trump or celebrities like Kangana Ranaut, has routinely made headlines. But lay users also face the consequences of such powers, and their cases go unnoticed or unheard. According to their respective transparency reports, Facebook and Instagram removed 3.20 crore posts, while Google removed around 60,000 URLs, suo motu. Therefore, as “arbiters of speech”, platforms are in a position to violate a person’s freedom of speech and expression.
To protect users from incorrect takedowns and account suspensions by social media platforms, the need was felt to institute effective grievance redressal mechanisms (GRMs). In India, before May 2021, GRMs of social media platforms, if any, were designed as per the concerned platform’s terms of service. There was no standardisation, in terms of resolution and timelines, in the design of these GRMs. If you were to make a complaint, the process would typically consist of filling out an online form, which would usually elicit an automated response.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules (or IT Rules), 2021 streamlined this by bringing in uniformity. Social media platforms now have to appoint a “grievance officer” before whom a user may file a complaint. The grievance officer is required to acknowledge the complaint within 24 hours and resolve it within 15 days. If unsatisfied with the officer’s order, the user may approach a high court or the Supreme Court.
However, accessing the writ jurisdiction of the courts can be a time- and cost-intensive process, and not all users can afford that. In this light, it was important to create an appellate forum that is not as resource-intensive to engage with.
The government’s motivation behind creating this appellate committee seems to stem from other factors as well. According to the government, it created this tier because “currently there is no appellate mechanism provided by intermediaries nor is there any credible self-regulatory mechanism in place”. During the public consultation hearings, it clarified that the proposed “grievance appellate committee” was only a “mezzanine measure”, which it had reluctantly taken because social media platforms failed to design something themselves. Indeed, the government and social media platforms converged on self-regulation as the optimal design for an appellate mechanism, even though its bare minimum structure remains unclear.
On its face, this insistence on self-regulation seems like a progressive approach. It helps allay concerns of censorship that stem from the government exercising control over online speech through the grievance appellate committee.
Concerns with a self-regulatory model
However, letting social media platforms control the regulation process is not in the best interests of users. Speech, by nature, is contextual. What offends one person may seem legitimate to another. In fact, a person may find a piece of speech objectionable in one circumstance but not in another. Courts themselves have come to diametrically opposite determinations on the same piece of speech.
Because the determination is so subjective, the process must be objective to ensure fairness. A self-regulatory model undermines such objectivity for several reasons.
First, social media platforms have not been paragons of objectivity in deciding which content to host or take down. Their political biases are visible in their decisions to either amplify or restrict certain kinds of content. For example, while Twitter is commonly understood to be partial to liberal/Leftist views, Facebook has been alleged to be partial to Rightist stances. An internal appellate mechanism will likely toe the organisational line, carrying and reinforcing the same biases when deciding whether a piece of speech should be allowed.
Second, even if a number of social media platforms come together to form an appellate tier, instead of individual appellate mechanisms, the members of this appellate tier will not have functional independence. As long as the members’ appointment, terms of employment and service conditions are controlled by social media platforms, they will be wary of taking decisions that may hurt the platform.
Third, a self-regulatory approach to adjudicating speech is likely to be riddled with trust issues. Consider the case of Facebook. The platform’s solution for ensuring transparency and impartiality in its content moderation decisions was to constitute the Oversight Board. Facebook created a $130 million irrevocable trust to maintain the Board’s independence, and the Board did overturn many of Facebook’s content moderation decisions. But the Board has since come under severe criticism on the ground that its existence has not substantially improved Facebook’s content moderation practices. Facebook, for its part, has complained that it cannot keep up with the Board’s recommendations and has sought to “improve the recommendation process”. This is emblematic of the tenuous ground on which any self-regulatory mechanism under the aegis of a social media platform stands.
These concerns are amplified if, at a later stage, social media platforms are made subject to penalties for wrongfully suspending or terminating a post or user account. Platforms can hardly be expected to design self-regulatory mechanisms in a way that exposes them to liability and penalties for their own decisions.
Way forward
Considering all this, it is worth asking whether processes that will have a significant impact on the freedom of speech and expression should be left to a self-regulatory mechanism. This is especially so where the legitimacy of a decision declaring speech offending or not rests on the fairness of the process itself. Since a self-regulatory approach would lack such fairness, it is important that the government re-evaluate its role.
It should design an appellate tier that is immune to undue interference from either itself or social media platforms. To do so, it should begin by dropping the constitution of the grievance appellate committee by way of notification and instead give the committee statutory status.
The government has often repeated that the Information Technology Act, 2000 is long overdue for an overhaul and that it will herald the “Digital India Act”. Perhaps that is the right place to provide for a robust design of these appellate mechanisms, instead of keeping one foot in and one foot out.
Trishee Goyal is a research fellow at the Centre for Applied Law and Technology Research, Vidhi Centre for Legal Policy. She tweets @TrisheeGoyal. Views are personal.
(Edited by Prashant)