Not constant monitoring by govt, it’s non-intervention that will help internet safety in India
Opinion


Proposed amendments to the IT rules aim to prioritise the safety of digital Indians by appointing a government-controlled grievance appellate committee.

Representational image of social media apps on a phone display | Photo: Pixabay


When India’s Information Technology Act 2000 was first promulgated, it conferred a ‘conditional safe harbour’ upon ‘intermediaries’, which include social media platforms, instant messaging portals, and other entities acting as conduits connecting users. This benefited digital players, who were exempt from liability for third-party information on their platforms once they met specific due-diligence criteria.

However, these obligations were made more onerous when they were updated by the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021, also known as the Intermediary Rules.

The proposed amendments to the rules ‘prioritise interests of digital India’ by placing a grievance appellate committee (GAC) at their centre, setting new accountability standards for social media companies, and mandating that they respect the constitutional rights of Indian citizens.

This committee would serve as a government authority fielding appeals against decisions made through an intermediary’s grievance redressal process. Unlike a statutory regulator, it would report to a Union ministry and comprise a chairperson and members appointed by the government.

Proponents of social media regulation have welcomed these diligence requirements as a step toward increasing accountability and introducing government monitoring mechanisms. However, the rules contradict the earlier expectation that social media platforms and intermediaries adopt a non-interventionist approach to enjoy safe harbour protection.

The safe harbour of the IT Act 2000 was conditional. First, it was conferred only upon those intermediaries whose function was limited to providing access to a communication system. Second, the intermediary must not have participated in the transmission by initiating it, selecting its recipients, or modifying its contents. But the new Intermediary Rules expect players to adhere to several onerous obligations to demonstrate compliance and retain their safe harbour, failing which they face liability under applicable laws.

It would be remiss not to mention that the Supreme Court, in its landmark judgment in Shreya Singhal v. Union of India, held that intermediaries are obligated to take down content from their platforms only upon receiving an order from a court or government authority. Similarly, ‘actual knowledge’ is attributed to an intermediary only when a court order or government notification apprises it of unlawful content on its platform.

The creation of the GAC appears to fill a distinct void: the lack of a government authority to direct intermediaries to take down content in line with the Supreme Court’s ruling. However, it raises significant concerns about independence. Social media platforms worry that the government’s influence over the GAC might lead to selective suppression aimed at curbing dissent.

Concerns include the absence of a precise mechanism to appeal the committee’s decisions. Moreover, unlike other statutory authorities such as the Securities and Exchange Board of India (SEBI), the Insurance Regulatory and Development Authority of India (IRDAI), or the Reserve Bank of India (RBI), which were set up by an act of Parliament, this ‘committee’ is set up by rules, via powers conferred upon the government, rather than by the legislature.

Accordingly, the GAC cannot be the final judge of a grievance with a social media platform, for this would, in the absence of an appeal mechanism, leave constitutional courts as the only recourse. While courts must remain the final arbiter of what content should be taken down, this intermediate appeal mechanism offers users and social media platforms a slow and convoluted route, and is therefore unlikely to reduce the volume of matters that end up in court. It contemplates that a user’s grievance would first be referred to the platform, then to this appellate committee (which cannot be considered a quasi-judicial authority), and then, inevitably, to the courts.




The non-interventionist approach

In providing an intermediate authority to handle user grievances, policymakers have overturned the more straightforward approach of mandating that intermediaries adopt a zero-intervention stance, which would eliminate discrimination and selective suppression of free speech by intermediaries. That approach would have allowed intermediaries to remain mere ‘conduits’ that do not manipulate content and can maintain end-to-end encryption.

This approach relies on the proactive reporting of fake news, threats of violence, or other unlawful content by impacted users rather than by the platforms, leaving it to enthusiastic police enforcement and the judicial machinery to intervene and mete out punishment. India’s vast base of active users, concerns over government surveillance of private communication on messaging platforms, and the sheer volume of worrying complaints have made this approach sound in policy but not in practice. It also depends largely on the assumption that intermediaries would cooperate with the government, agnostic to personal data privacy laws, to improve access to and surveillance of their users.

Lastly, the efficacy of this approach in stopping the spread of malicious content rests on three assumptions: first, that users will promptly report malicious content; second, that authorities will expeditiously determine whether the content is malicious and direct its takedown; and third, that large numbers of users will not file frivolous appeals that overwhelm the appellate grievance committee.




Defining self-regulation

On the other end of the policy spectrum is an approach that mandates self-regulation but sets well-defined and unambiguous parameters. Under it, the only discretion intermediaries have is to determine whether content, once brought to their notice, meets the criteria for takedown. Even with this approach, however, an intermediary’s knowledge must meet the touchstone of ‘actual knowledge’, lest a private entity be expected to conduct self-surveillance of content posted by its users.

It is a settled position that taking down any content is a restriction on free speech. Naturally, then, legislation promulgated by Parliament, rather than rules issued by the incumbent government’s ministry, must define the types of content that need to be taken down and the circumstances in which they would be. Moreover, these definitions cannot be lazily drawn with broad strokes; they must pass the tests of proportionality, reasonableness, and constitutional soundness.

A government-appointed appellate authority empowered to determine, in its discretion, what content must be taken down leaves the door wide open for abuse of that discretion and for stamping out dissent. A recent example is the abuse of Section 66A of the Information Technology Act 2000, which was used to assail free speech on the ground that impugned content was ‘grossly offensive’ or ‘menacing’, to the point where the Supreme Court eventually struck it down. This underscores the need to keep the government from monitoring free speech on the internet. Moreover, the ebb and flow of governments tends to colour a government authority’s interpretation of even unambiguous legislation.

Against this backdrop, it would be naively optimistic to entrust discretionary power to government-backed bodies on the presumption that it will not be abused.

Creating multi-tiered, government-controlled grievance redressal mechanisms within the executive arm, ostensibly to cater to the volume of complaints, is likely to burden the judiciary with appeals for intervention. Intermediaries must feel empowered to act swiftly and suspend accounts once content is reported, without fear of losing their safe harbour.

However, this can only be achieved when the law itself adopts litmus-test-style parameters for what deserves censorship. Where legislation defines and codifies these thresholds, it is held to public scrutiny, making it far easier to test whether the restrictions are reasonable fetters on free speech than when an intermediary or authority is given discretion to interpret the law.




Prescriptive legislation, predictable liability

As the law constantly plays catch-up with evolving technology and its use cases, pre-empting and prescribing what an intermediary must do in every circumstance is impossible. It also misdirects legislative effort toward legislating for the exception rather than the rule. Instead, the law can adopt a principles-based approach for intermediaries to take down offensive content and define a bright-line test describing the circumstances in which they would not be liable for hosting controversial content.

By leaving as little room for interpretation as possible, the law would ensure that intermediaries which adopt a perverse interpretation, or fail to apply clearly defined criteria objectively, justifiably face punitive action meted out by courts. Legislatively conferring takedown powers upon intermediaries, albeit within tightly defined boundaries to prevent selective suppression of content, would allow them to adopt mechanisms best suited to their platforms while upholding the principles defined by legislation.

Incontestably, intermediaries must be held more accountable for lopsided or discriminatory censorship of information and for promoting misleading paid content (which attracts an advertiser’s liability). By extending the safe harbour so that intermediaries face no liability for acting against infringing content, rather than depriving them of it for inaction, we would incentivise them to moderate content fairly and proactively. Rather than conferring on a government authority the discretion to interpret the law correctly, free of personal or political bias, the law itself must be drafted in a more binary form.

Akash Karmakar is a partner with the Law Offices of Panag & Babu and leads the firm’s fintech and regulatory advisory practice. He tweets @akashxkarmakar. Views are personal.

(Edited by Zoya Bhatti)