In light of the growing pendency and increasing vacancies in the judicial system, the Chief Justice of India (CJI) believes AI will help streamline the case load at courts. It will allow courts to prioritise complex, intricate matters that require more human attention over routine, straightforward cases.
The CJI, however, made it clear that AI will never substitute judges.
While the use of AI in the judiciary might still be some time away, the technology has already been adopted in other phases of criminal justice in India, for example, policing.
As we hurtle into the globalised order 4.0, several questions arise: Have our systems of law and order kept pace? Can our conventional correctional systems address the rise in sophisticated crimes and criminals? Are we ready to incorporate technology to strike a balance between retribution, rehabilitation and reintegration for a holistic sense of justice?
AI in the Indian criminal justice system
In November 2019, Gurugram-based start-up Staqu launched its video analytics platform, JARVIS or Joint AI Research for Video Instances and Streams, in Uttar Pradesh. JARVIS mines CCTV footage to offer a string of services like violence, intrusion and pick-pocketing detection, besides crowd analysis.
This is a new way of tapping AI to distil lengthy CCTV footage into short real-time alerts, significantly reducing the time needed to produce actionable data.
Staqu is currently working with police forces in eight states and union territories, including Punjab, Haryana, Rajasthan, Bihar and Telangana.
In 2018, Punjab Police started using the Police Artificial Intelligence System (PAIS), developed by Staqu, which is equipped with features such as face search and text search, along with a database of more than 1 lakh records of criminals housed in jails across the state.
In UP, another Staqu product, Trinetra, has been aiding the police force since December 2018. Trinetra is an AI-enabled application with facial-recognition features and a database of approximately 5 lakh criminal records.
In 2017, Delhi Police adopted AI Vision, a facial recognition software from INNEFU Labs that also offers gait and body analysis. A homegrown artificial intelligence start-up, INNEFU is tapping into the booming demand for facial biometrics in India with models tested on Indian faces and affordable prices.
Meanwhile, police in Odisha are planning to use AI and mobile computing to improve the analysis of crime data. AI will help flag procedural mistakes by correctional officers. In December 2019, Odisha Police floated a request for proposals (RFP) to invite bids for eligible AI applications.
Other Asian jurisdictions such as Hong Kong and China have started using AI to address scale: connected sensors and tracking wristbands help create smart systems that may render prison breaks a relic of the past.
In Hong Kong, the government is testing wearables to monitor individuals’ locations and activities, including their heart rates, at all times. Some jails like the Yancheng prison in China are using networked video surveillance systems to monitor high-profile inmates.
The good, bad and ugly
AI tools could provide relief by flagging police violence and preventing the kind of escalation that adds to a prison’s already stressful environment. Automated reporting of abusive behaviour by guards, along with collation of previous cases of violence, can improve the chances that guards and other personnel on duty seek help instead of continuing to function with repressed issues.
AI can curb illegal operations and smuggling on prison premises by flagging abnormal activity and movement.
If prisons are to be correctional facilities, AI tools can prove worthwhile in preventing inmates from going astray, for example, by helping tackle drug addiction through monitoring.
Prisons usually avoid housing inmates from the same locality together, since such inmates tend to share similar lifestyle patterns, which makes it harder for them to ward off problematic traits and stay on the rehabilitation trajectory.
Compared with human beings, AI can analyse a far wider range of variables and study how they interact. For example, AI applications can factor in diverse variables such as age, family background, native place and nature of crime when deciding cell allocation for inmates.
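To make the idea concrete, a hypothetical cell-allocation heuristic might score inmate pairs for compatibility across several such variables. The variables, weights and function names below are illustrative assumptions for the sake of example, not a description of any system actually deployed:

```python
from dataclasses import dataclass

@dataclass
class Inmate:
    age: int
    native_place: str
    crime_category: str  # e.g. "violent", "financial", "narcotics"

def compatibility_score(a: Inmate, b: Inmate) -> float:
    """Higher score = considered safer to house together (illustrative weights)."""
    score = 0.0
    # Avoid pairing inmates from the same locality (shared networks and habits).
    score += 0.0 if a.native_place == b.native_place else 2.0
    # Avoid pairing inmates convicted of the same category of crime.
    score += 0.0 if a.crime_category == b.crime_category else 1.5
    # A larger age gap is mildly favoured, to break up peer dynamics.
    score += min(abs(a.age - b.age) / 10.0, 1.0)
    return score

def allocate_cellmate(candidate: Inmate, pool: list[Inmate]) -> Inmate:
    """Pick the most compatible cellmate for the candidate from the pool."""
    return max(pool, key=lambda other: compatibility_score(candidate, other))
```

A real system would weigh many more factors and would need careful auditing, but the sketch shows how several variables can be combined into a single allocation decision.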
Laying guidelines for human-AI interaction
Despite the vast potential, a multitude of issues poses complications in employing AI in the criminal justice system. For example, how do we develop appropriate metrics to code a type of behaviour as “abnormal” to train AI in detection?
If AI is over-inclusive in its search for bad behaviour, it runs the risk of ignoring diverse interpretations of “normal” and creating an enormous psychological pressure to conform. If found to be under-inclusive, it risks missing issues that correctional staff, with the nuance and discretion that human beings possess, might have flagged.
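The over-/under-inclusive dilemma is, at its core, a detection-threshold problem. The sketch below uses entirely synthetic anomaly scores to show how moving a single threshold trades false alarms against missed incidents:

```python
def flag_abnormal(scores: list[float], threshold: float) -> list[int]:
    """Return indices of events whose anomaly score exceeds the threshold."""
    return [i for i, s in enumerate(scores) if s > threshold]

# Synthetic anomaly scores for six events; indices 2 and 5 are the
# "true" incidents in this made-up example.
scores = [0.10, 0.35, 0.90, 0.40, 0.20, 0.75]

# A high threshold is under-inclusive: it misses the incident at index 5.
strict = flag_abnormal(scores, threshold=0.8)   # [2]

# A low threshold is over-inclusive: it also flags benign events 1 and 3.
lenient = flag_abnormal(scores, threshold=0.3)  # [1, 2, 3, 5]
```

In a prison setting, the lenient threshold corresponds to the psychological pressure to conform, while the strict one corresponds to missing issues a human officer would have caught; there is no threshold that eliminates both errors.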
As in India, most programmes used in the US, such as the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), are proprietary products, which means their creators do not reveal the source code, in order to protect their intellectual property.
In doing so, they prevent defendants from challenging the integrity of the models and keep diverse perspectives out of the algorithms.
In a paper titled ‘Guidelines for Human-AI Interaction’, a team of Microsoft researchers put forth a set of guidelines for practitioners designing applications and features that harness AI technologies.
The team distilled the guidelines from over 150 AI-related design recommendations and validated them through three rounds of evaluation.
The 18 guidelines include helping the user understand what the AI system can do, making clear how often the system can make mistakes, timing services and interruptions based on context, and ensuring that the AI system’s language and behaviour do not reinforce stereotypes and biases.
Especially in the context of a correctional facility, the authorities must ensure that the system is introduced in a way that does not catch inmates off guard. They must strike a balance between how much information should and should not be shared, so as not to cause chaos or confusion.
Focused orientation sessions and opening channels of transparency with counselling for inmates would help introduce the technology as a guide rather than an Orwellian force.
Detailed frameworks for aiding criminal justice need to be customised based on the exact functionality of AI, the geo-social context and most importantly, by keeping the human component pivotal.
The author is a public policy and corporate affairs consultant based out of New Delhi. All views expressed are personal.