Legal and policy-related implications of the proposed amendment raise serious concerns.
Soon after issuing an order designating various agencies with wide powers to intercept, monitor, and decrypt private communications, the Centre has proposed amending the Information Technology (Intermediaries Guidelines) Rules, 2011. Political analysts are likely to focus on the timing of this proposal, and on how it might empower the state with regard to social media in the run-up to the general election. But the legal and policy-related implications of the proposed amendment, which is currently open for public consultation, raise serious concerns.
First, the proposal fixes a short time-frame of 72 hours for online intermediaries – that is, services through which people communicate or disseminate information, such as Facebook or LinkedIn – to provide information or assistance to designated agencies. This is more specific than the earlier general duty to comply “expeditiously”. As highlighted in our earlier piece, intermediaries face imprisonment of up to seven years for non-compliance, which already raised the prospect of over-compliance with agency requests; the new deadline exacerbates that risk. Intermediaries must now act within a short window, with no means either to assess the prima facie legality of such requests or to raise concerns with an independent body, because the law provides for no judicial or other ex ante oversight.
Second, the proposal has major implications for the privacy of online communications. It insists that any intermediary “shall enable tracing out of such originator of information on its platform as may be required by government agencies who are legally authorised”. The rationale offered is the spread of fake news, some instances of which have recently provoked acts of violence.
While it is indeed important to identify wrongdoers who rely on the anonymity of the internet, and to locate individuals who spread false information and cause harm, the proposed guideline risks becoming a weapon in the hands of the state: it could be used to prescribe weak encryption, or to compel services that are already end-to-end encrypted to alter their core technical architecture. Encryption technology and standards are crucial to sectors ranging from health to mobility to finance, and enable the storage and processing of sensitive private data. What India requires is a debate on encryption technology, and its uses and limits, rather than a blanket obligation to enable tracing.
Third, the proposal reverses a core principle of intermediary immunity that we have come to accept as part of an open internet – the “due diligence” principle. Under this principle, formalised in Section 79 of the IT Act, hosting intermediaries (those with no role in the initiation, selection of recipient, or selection/modification of content in any transmission through their platforms) are not liable for third-party information or communication links. However, they need to observe “due diligence” when discharging their duties. The term is not clearly defined in the Act or in the 2011 Rules. But it is well accepted in democratic societies that the principle does not extend to positive monitoring obligations, that is, duties on an intermediary to track third-party content and remove it of its own accord.
Were intermediaries to be vested with positive duties to ascertain the limits of acceptable conversation and act upon the same, this would result in state-authorised private censorship. The proposed measures unravel this original understanding of “due diligence”, casting an obligation on intermediaries to “deploy technology based automation tools … for proactively identifying and removing or disabling public access to unlawful information or content”.
Normally, the state can only censor on the basis of limited grounds specified under Article 19(2) of the Constitution. Further, the state can only do this in a necessary and proportionate manner. The current proposal changes this. It shifts the power of censorship to unaccountable private bodies.
The Union appears to rely on the fact that none of the above measures directly regulates content in terms of what individuals can or cannot express. However, this reasoning cannot stand. Indian free speech doctrine, as made clear in cases such as Sakal Papers (1961), Bennett Coleman (1972), and Indian Express Newspapers (1984), has addressed not only the content of speech but also the regulation of the medium through which expression takes place. (For example, restrictions on newsprint have famously been struck down.)
The important principle here is that restrictions on the medium through which expression occurs have a real impact on our ability to express ourselves. Moreover, in today’s converged ecosystem, arguments that draw a distinction between carrier and content are out of place. The rise of India’s newest telecom player, riding on free data, is a case in point. None of this means that regulation cannot or should not occur. But it does mean that such regulation must answer for the restrictions on free speech that it involves.
By stipulating traceability and active monitoring, the Union is sending out a clear message: technologies that enable privacy should raise concerns, whereas those that could further censorship and surveillance carry few risks. The reality is more complex, and a serious approach to regulation should reflect that complexity.
Madhav Khosla, co-editor of the Oxford Handbook of the Indian Constitution, is a junior fellow at the Harvard Society of Fellows. Ananth Padmanabhan is a Fellow at the Centre for Policy Research. His Twitter handle is @ananth1148.