Police in Chennai rely on facial recognition technology to help maintain law and order in crowded areas of the city. Similarly, the Punjab Artificial Intelligence System, a recipient of a FICCI smart policing award, uses facial recognition for criminal identification. Police in Hyderabad, too, are using facial recognition to identify ‘persons of interest’ in CCTV footage.
The Indian government has also announced plans for an overarching national Automated Facial Recognition System (AFRS), which will be used for “criminal identification, verification and its dissemination among various police organisations and units across the country”.
Pared down to essentials, facial recognition technology of this kind captures faces in public spaces, creates a unique biometric map of each face (much like a fingerprint or DNA), and then compares these maps against existing databases such as the Crime and Criminal Tracking Network and Systems (CCTNS), or even against live CCTV footage if needed. A number of assumptions guide this enthusiastic turn towards facial recognition in India, each of which I unpack below.
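The pipeline described above – map each face to a biometric template, then search a database for the closest match – can be sketched in a few lines of illustrative Python. Everything here is invented for illustration: the template vectors, the names, and the threshold are hypothetical, and real systems use high-dimensional embeddings learned by neural networks rather than three-number vectors.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two biometric templates (vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.9):
    """1:N search: compare a probe template against every enrolled
    template; return the best match above the threshold, else None."""
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy database of invented templates (real ones have hundreds of dimensions).
database = {
    "suspect_A": [0.9, 0.1, 0.3],
    "suspect_B": [0.2, 0.8, 0.5],
}
probe = [0.88, 0.12, 0.31]  # template extracted from one CCTV frame
print(identify(probe, database))  # prints "suspect_A"
```

Note what the sketch makes visible: to find the one positive match, `identify` must compute a similarity score for *every* face presented to it, which is precisely why recognition at scale entails processing everyone in view.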
Is this efficient?
First, it is assumed that facial recognition will introduce efficiency and speed into one of the most understaffed police forces in the world. However, the logical leap from the use of technology to the efficiency derived from it is a costly one. In 2018, the Delhi Police reported that the facial recognition system it was trialling operated at an accuracy rate of 2 per cent. By 2019, the accuracy rate had fallen below 1 per cent, and the Ministry of Women and Child Development reported that the system could not even accurately distinguish between boys and girls.
It is crucial to note that, globally, the accuracy rates of such facial recognition software have been lowest for women, children, racial minorities, and non-binary people. These biases make the technology particularly dangerous in law enforcement, because there is significant overlap between the groups overrepresented in the criminal justice system and those most likely to be wrongly identified by a facial recognition system. The uncritical adoption of these systems will thus exacerbate historical and institutional discrimination, while providing a shield for unlawful arrests and the illegal targeting of individuals. As of now, efficiency is an unfulfilled promise, but discrimination and unreliability are demonstrated effects.
Privacy and surveillance concerns
But even a perfectly accurate facial recognition system would be deeply problematic. A second assumption guiding the authorities deploying facial recognition in India is that these systems raise no privacy or surveillance concerns. It is argued that facial recognition will only be used by law enforcement to track criminals and find missing children. In the context of the AFRS, it is also argued that facial recognition simply adds “another information layer to investigation by allowing matching photograph of suspects or missing persons with the photo database of CCTNS”.
This assumption is technically untenable. For facial recognition systems to work in the context of crime prevention, they must necessarily collect, store and analyse data from every individual captured by the camera in order to recognise suspects or ‘persons of interest’. In other words, to carry out the task of recognition in these use cases, a system has to sort a positive match from a sea of negative ones. Authorities deploying facial recognition must reckon with the fact that however laudable its goals may be, mass surveillance is a prerequisite for achieving them.
Further, the claim that facial recognition simply adds a layer of information for photograph matching is an oversimplification. The technology creates a biometric map of each person’s face from measurements such as the distances between the eyes, nose, mouth and jaw, and the size of the forehead. It is far closer to a fingerprint, DNA evidence or an iris scan than to a photograph, and it fundamentally changes how law enforcement agencies identify individuals.
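To make the photograph/biometric distinction concrete, here is a deliberately crude sketch of what a “biometric map” might look like. The landmark names and coordinates are hypothetical; real systems detect dozens of landmarks automatically and use far richer representations. The point is that the output is derived measurement, not an image.

```python
import math

# Hypothetical facial landmarks for one face, as (x, y) pixel coordinates.
landmarks = {
    "left_eye": (30, 40),
    "right_eye": (70, 40),
    "nose_tip": (50, 60),
    "mouth_centre": (50, 80),
}

def biometric_map(points):
    """Reduce a face to ratios of inter-landmark distances.
    Ratios (rather than raw pixel distances) keep the template roughly
    stable across image scales -- like a fingerprint, it encodes the
    geometry of the face, not its appearance."""
    def dist(a, b):
        return math.dist(points[a], points[b])
    eye_span = dist("left_eye", "right_eye")
    return {
        "nose_to_eyes": dist("left_eye", "nose_tip") / eye_span,
        "mouth_to_nose": dist("nose_tip", "mouth_centre") / eye_span,
    }

print(biometric_map(landmarks))
```

Unlike a photograph, a template of this kind is built to be searched and compared at scale, which is exactly what brings it closer to fingerprint or DNA evidence in legal terms.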
The question of legality
Third, it is assumed that the use of facial recognition has a legal basis. Responding to a legal notice sent by the Internet Freedom Foundation (IFF), the Home Ministry has stated that the use of facial recognition systems like the AFRS does not suffer from illegality – as it has been approved by the cabinet through a 2009 note on the CCTNS.
This assumption is legally unsound. At the outset, as noted by IFF, a cabinet note “is not a statutory enactment” and cannot be considered the legal basis on which facial recognition is carried out.
Further, in 2017 the Supreme Court reaffirmed the Right to Privacy and explicitly stated that this right extends to public spaces, and that citizens cannot be subjected to the unlawful collection or exploitation of their personal data. The court also clarified that any infringement of the Right to Privacy, such as collecting personal data for law enforcement purposes, must satisfy the proportionality standard: the measure must pursue a legitimate aim, and it must be both necessary and proportionate. Facial recognition fails not only the first part of that standard but the other two as well, because the indiscriminate collection of biometric information that may or may not ever be used is an inherent part of these systems.
Another popular argument is that the Right to Privacy is not absolute, and that the state can curtail it in the interest of national security and public safety. While this is true – no fundamental right is absolute – a recent judgment of the Bombay High Court clarifies that even where an overarching national security or public safety concern is invoked, any infringement of the right must be justified under the proportionality test laid down in the Supreme Court’s 2017 Puttaswamy judgment. The case for public safety must be demonstrated, not merely claimed.
While India does not have a data protection law at the moment, the Personal Data Protection Bill, 2018 also contemplates proportionality as the standard to justify law enforcement exceptions in the context of personal information.
Law enforcement authorities around the world are grappling with the limitations and dangers of facial recognition. San Francisco banned police use of the technology earlier this year. Police trials of facial recognition systems in the United Kingdom have shown that the technology fails 80 per cent of the time and has “significant operational shortcomings”. Given how dangerous and unreliable these systems have proven, police forces in the UK are proactively resisting pilots of facial recognition.
India’s current approach buys into the hype of emerging technology and willingly disregards evidence of harm, bias and unconstitutionality. Facial recognition is not a panacea for understaffed police forces, nor is it a suitable vehicle for modernising them – especially given the Indian legal context in which its deployment is currently being considered.
This article is part of a series examining The Future of Data in partnership with Carnegie India leading up to its Global Technology Summit 2019 in Bengaluru from 4-6 December 2019. More details about the summit are available here.
Vidushi Marda is a lawyer and leads research on AI and human rights for Article 19’s global digital team. She is also a non-resident scholar for Carnegie India. Views are personal.