The debate about whether AI should enter Indian courts is already moot. It is here. What remains is a more difficult question: how should the legal system respond?
Across India, the answer has been scattered, ranging from bureaucratic paranoia to casual overreliance. But what unites these responses is an anxiety about AI introducing errors into judicial work, and that anxiety is wildly disproportionate. The courts’ approach to modernisation should not be conditional on their ability to keep up with technology. The Indian legal system has never claimed perfection. Instead, it has inbuilt mechanisms of accountability and self-correction. These mechanisms shouldn’t break when the tools change.
The train has already left the station. The institutional embrace of technology and AI is structural. SUVAS, the Supreme Court’s translation engine, has rendered over 36,000 judgments into 19 Indian languages. TERES provides real-time AI transcription for Constitution Bench hearings. The Kerala High Court uses a transcription tool built by Adalat.ai for all witness depositions across its subordinate courts. The government’s eCourts Phase III project commits Rs 7,210 crore to judicial modernisation, with Rs 53.57 crore earmarked for AI and emerging technologies.
The response is scattered
Just weeks ago, the Supreme Court admonished lawyers for citing hallucinated cases.
The Kerala High Court’s “Policy Regarding Use of Artificial Intelligence Tools in District Judiciary” gets the most important thing right: responsibility for orders rests with the judge. But it goes overboard on guardrails, mandating a detailed audit of every instance where AI is used, a ledger recording each step, and reports of errors to the Principal District Judge.
The Supreme Court’s White Paper on AI and the Judiciary (November 2025) is more considered: “AI will serve the ends of justice, not define them. Judges must remain the ultimate decision-makers.” The principle of human oversight is foundational. But even here, the instinct is to layer procedural requirements on top of it.
Fear of the machine
What connects these responses is a shared anxiety about AI making mistakes. The audit trails and ledgers all betray the same concern: that this new technology will somehow corrupt the judicial process in ways that existing tools do not.
The fear is not imaginary. The High Court of Manipur, in a case about the role of Village Defence Forces, felt compelled to use ChatGPT to research the body’s limits and functions. You’d think a High Court had better resources than a free chatbot. When a constitutional court reaches for a consumer AI product as a research tool, the instinct to regulate makes sense.
And how should courts regulate this use of AI? By strengthening existing institutional accountability for outcomes, not by layering more onerous procedural oversight on process.
System is the backstop
Justice Robert H. Jackson of the United States Supreme Court put it best in Brown v. Allen (1953): “We are not final because we are infallible, but we are infallible only because we are final.” The legal system has claimed finality, never perfection. And finality is legitimate only because the humans exercising it can be held accountable.
The Indian judiciary knows this from experience. In K.S. Puttaswamy v. Union of India (2017), a nine-judge bench explicitly overruled the Emergency-era decision in ADM Jabalpur v. Shivkant Shukla (1976), declaring the majority opinions “seriously flawed”. It took 41 years, but the system corrected itself. If the Supreme Court can err gravely enough to deny habeas corpus to an entire nation, and the system’s own processes self-correct, then the anxiety about an AI tool introducing a citation error is wildly disproportionate.
To assume courts weigh every word committed to paper is to ignore reality. The Indian judiciary has long had a “Control+C” problem. Legal commentators have noted that judges frequently copy-paste large sections of text from previous orders. In our previous analyses in the Counting on Law series, we have combed through hundreds of orders and judgments. Most feel templatised. The system has never demanded originality. It demands a judicial mind.
AI hallucination is not a new kind of threat, either. Case search engines, third-party databases, and printed reports have always carried errors, false citations, and wrongly indexed matters. Nobody has explained why a mistake by a careless first-year LLB graduate in their clerkship is less egregious than one made by an LLM. Technology might make it faster or easier to make mistakes, but whether the perpetrator uses neurons or neural networks shouldn’t matter.
Justice Venkatesh gets it right, mostly
Justice Anand Venkatesh of the Madras High Court has been ahead of the curve on this. Earlier this month, he embarked on a careful study of legal tech products harnessing AI. The resulting order permitted a product called Superlaw Courts to support the court in record keeping, document management and search in an arbitration matter. Legal judgment, it said, must stay with a human decision-maker. That conclusion is exactly right.
But the method reveals a deeper instinct worth questioning. Justice Venkatesh employs a “human process comparison” methodology, breaking the AI’s function down to its components and comparing them to a human associate’s workflow. When explaining Retrieval-Augmented Generation (RAG), he likens chunking to preparing reliable extracts from a brief. The explanation might reassure people who worry that this new technology threatens the sanctity of court processes.
The anthropomorphisation isn’t necessary. It concedes a frame where AI must justify itself by analogy to human work. The better question is whether the officer of the court who uses the tool can stand behind the work product. We don’t ask how a lawyer’s intern searched for a precedent. We ask whether the precedent is sound and whether the lawyer takes responsibility for citing it.
Better inspiration
Singapore offers a cleaner model. The Supreme Court of Singapore’s 2024 guidelines on generative AI declared the court “neutral” on adoption. No approved lists. No audit ledgers. The court simply reasserted that responsibility for diligence lies with the users: ensure accuracy, ensure relevance, and comply with intellectual property rules. Lawyers who can’t explain parts drafted with AI assistance face costs, disregarded documents, or disciplinary action.
The focus is on the work, not how it was produced. Minimal fuss, clear principles, fair warning. That’s a sharp contrast to the Indian instinct to smother responsible usage with procedure.
If Indian courts wish to modernise, they must look past the fear of the machine and focus on the institution behind it. We choose the law. We choose the humans who administer it. We accept that those humans are fallible, and we have built a system that accounts for that fallibility through appeals, review, scrutiny, and accountability. No tool changes this compact.
Siddarth Raman is co-founder and CTO of The Professeer. He tweets @thriddas. Gokul Sunoj is an associate at The Professeer. He tweets @GokulSunoj.
(Edited by Ratan Priya)

