The shooting at a mosque in Christchurch, New Zealand, by white supremacist Brenton Tarrant in March 2019 was a wake-up call for everyone. The power of technology to spread hate via livestream was reminiscent of what we saw in India two years earlier, when Mohammed Afrazul, a Bengali Muslim migrant worker in Rajasthan, was hacked to death with a meat cleaver by a Right-wing Hindutva worker named Shambhulal Regar. His body was then burnt at the scene of the murder.
Regar had the entire incident videotaped and put it up on YouTube along with a sermon against ‘love jihad.’ The video went viral instantly and, through advertising, raised money for his wife and family before it was caught and removed. Regar was celebrated by fellow members of the Vishwa Hindu Parishad. The police claimed that Regar acted alone, calling him a ‘lone wolf hate killer,’ but his language in subsequent interviews and his deliberate use of social media to disseminate his hate-filled video and message were alarming.
Tarrant was Regar in another form, on another continent — both killers united by extreme ideology, hate, violence, and their savvy with social media. The live streaming of the shooter’s video — taken with a GoPro camera attached to his cap — showed the world the raw, brute power and violence of extreme ideologies fuelled by online propaganda and hate.
Just as Tarrant’s online network proved that support for White supremacy has flourished for years in both open and closed spaces on social media, protected by the right to absolute free speech, so has militant, extreme Right-wing Hindutva ideology found a platform. Despite the concerns of both law enforcement and civil society, these platforms seem unable to tackle the speed and scale of this spread, and are being manipulated by identity politics and a manufactured sense of offence and offendedness.
The constant validation of extreme speech by political, religious and cultural groups may generate the numbers they seek, but there is a cost: it also emboldens bad actors who disregard community standards, knowing that both the right to free speech and ever-changing technology will protect them from censure.
Autonomous bodies are often critiqued as having no real teeth. Corporate media, increasingly chasing high traffic on their websites, are susceptible to external political pressures. Machine learning and Artificial Intelligence tools are getting better at parsing language, at sharper and quicker image detection, and at monitoring content, but the volume is often too much for even the machines to handle, and local contexts are next to impossible for them to fathom.
Both proportionality and rationality must be driving factors in any attempt to regulate any form of media. And it is also up to us, the users, to ask what we want: more government regulation, tighter policies from the social media platforms themselves, or public-private collaboration in this realm?
An ideal regulatory framework would emerge from consultations between social media platforms, media industry bodies, civil society and law enforcement on responsible broadcasting and institutional arrangements; a possible way forward is a Code of Conduct, or an independent regulatory body appointed for self-regulation. This must be achieved without creating an ambiguous statutory structure that could leave avenues for legislative and state control.
Australia and Germany have some of the strictest laws and regulations governing social media, imposing fines and imprisonment for ‘inaction on extremist hate speech’ within tight time frames. The European Union has also established a code of conduct to curb the proliferation of hate speech under the framework of a ‘digital single market.’ The United Kingdom recently published a paper titled ‘Online Harms’, which establishes what it calls a ‘duty of care’.
Most other countries have struggled between issuing threats to tech companies and platforms on the one hand, and pursuing law enforcement responses against identifiable users on the other. Given the transnational nature of technology, a coherent global regulatory standard would ensure uniformity of governance and regulation while keeping specific local contexts in mind. With new platforms like TikTok, both the means of expression online and the attendant social responsibilities are constantly evolving.
The last decade has seen exponential growth in the technology sector. Smartphones today are virtually anatomical appendages for each one of us. Affordable gadgets, cheap data and the growth of languages on the internet have led to the emergence of a media discourse that is now inextricably linked to the ease of digital technology.
Technology has erased rural-urban divides and transcended boundaries and barriers, making development and governance achievable targets in a large, diverse country like India, with extremely remote areas. But it has also had an equal and opposite reaction. As technology teaches us to be borderless, it provides a vehicle to rally around identity, grievance and fear — all exploited by savvy politicians out to garner votes.
However, over-criminalising speech may have troubling outcomes. It is the crossroads between technology, profit, freedom, politics, identity, power and insecurity that any effort to regulate hate speech on social media will have to traverse. The need is to seek redress and action, not censorship. But the means towards this end have to be independent of government control.
Arriving at a regulatory mechanism is unlikely to be easy — a fact evident in the ongoing global debate, as national governments cite their own legal frameworks to set precedent and seek control. But as hard as it may be, any regulatory framework that evolves must not only protect the right to free speech in a democracy but, equally if not more so, create safeguards and curbs against the online, social media amplification of hate speech that can lead to offline, real-world violence.
The author is a Senior Fellow at the Observer Research Foundation and Assistant Professor at Ashoka University. Views are personal.
This is an edited excerpt of the author’s article that appears in Volume 9 of the Journal of Indian Law and Society. The JLS is a peer reviewed journal published by the National University of Juridical Sciences, Kolkata, with a focus on interdisciplinary research. Published bi-annually, it is the flagship journal of the Student Juridical Association.