Friday, May 8, 2026

Restriction vs regulation: As SC seeks expert panel, how legal fraternity views use of AI in court

SC order comes after trial court in Andhra Pradesh leans on false AI-generated rulings in property dispute. Top court says use of such fabricated rulings is ‘misconduct’.


New Delhi: Emphasising the need for accountability and integrity in judicial proceedings, the Supreme Court this week asked the Bar Council of India to set up an expert committee on the use of Artificial Intelligence (AI) in court proceedings. The move comes after a trial court in Andhra Pradesh relied on non-existent AI-generated judgements in dealing with a property dispute.

A two-judge bench of Justices P.S. Narasimha and Alok Aradhe said Tuesday that the use of AI in coming up with fabricated judgements amounts to misconduct, and that the expert committee should include field experts who would examine issues connected to the use of AI.

The order has divided the Bar, with some lawyers saying AI tools can assist them in day-to-day tasks like drafting or research, and others demanding strict regulation of AI in legal proceedings. The latter section also called for measures such as banning lawyers caught citing fabricated cases for three years.

These concerns come amid a string of instances where fabricated judgements, non-existent precedents and inaccurate citations have found their way into the courtroom.

The case that prompted the apex court to constitute a committee originated from a challenge to a 19 August, 2025 trial court order in a property dispute suit, which relied on four allegedly AI-generated cases and fake citations. Although these cases were cited by one of the litigants, the trial court took them into consideration.

Challenging the 19 August order, the petitioners moved the Andhra Pradesh High Court, which on 21 January this year acknowledged that the judgements were AI-generated but still proceeded to decide the case on merits, dismissing the plea while expressing caution. This is what led the petitioners to approach the top court. The case is ongoing, and will be taken up by the bench again on 26 May.

Story so far

In February this year, the same bench, while dealing with this case, had taken note of the trial court deploying AI-generated, non-existent, fake or synthetic alleged judgements, and said that it would examine the consequences of such actions as they have a direct bearing on the integrity of the adjudicatory process.

“At the outset, we must declare that a decision based on such non-existent and fake alleged judgements is not an error in decision-making. It would be a misconduct and legal consequence shall follow. It is compelling that we examine this issue in more detail,” the court said in its February order.

In the last hearing, the two-judge bench made it clear that while it was not seeking to ban the use of AI, lawyers cannot misuse it and simply apologise when caught. Appearing for the Centre, Attorney General R. Venkatramani also told the court that he would be collecting the views of the Ministry of Electronics and Information Technology (MeitY), and placing them before the court on this question of law.

Lawyers caution against AI

Saying that the legal profession presently stands at a technological crossroads of sorts, senior advocate Rajshekhar Rao told ThePrint, “We are at the crossroads of becoming stooges to technology. It gallops faster than we can keep up, and the number of legal tools which are available are not entirely credible. The situation needs to improve.”

Batting for a system of checks and filters while proceeding with the use of AI in legal tasks, Rao suggested adding Quick-Response (QR) codes to judgements: “Although I don’t know what the Supreme Court-appointed committee will ultimately do, every cited source must have a QR code in place, as Supreme Court judgements presently do,” Rao told ThePrint.

On more nuanced tasks like drafting, Rao said, “I have fundamental problems with using AI for drafting. It’s almost like an auto-prompt. The system thinks like you but it’s not you,” adding that the use of AI runs the risk of humans losing their sense of originality.

On the other hand, senior advocate Gopal Sankaranarayanan emphasised the need for imposing “absolute liability” on lawyers citing AI-generated case law.

“I believe there should be absolute liability with citing case law. It has always been the responsibility of lawyers to check whether the judgements they are citing have been doubted or overruled. Similarly if AI judgements have been cited, there must be strict penalties imposed on the lawyers concerned, including barring them from practice for a period of time. If the courts are being invited to lay down the law fraudulently, it constitutes grave contempt and invites the harshest penalties,” Sankaranarayanan told ThePrint.

Echoing a similar sentiment, senior advocate Tanveer Ahmed Mir told ThePrint that the issue before the trial court could not be brushed aside as a simple technological mistake. “AI just like any other tool can be used for research but the research needs to be confirmed. You cannot use shortcut methods, without doing end-to-end checks. The AI tools or packages being sold right now by companies should be immediately proceeded against, in case the judgements they are presenting turn out to be fabricated,” he said.

Terming the present situation before the top court “a very serious matter” that goes beyond an ordinary criminal offence, Mir said that such an action by a lawyer certainly attracts contempt of court.

“Issue contempt notice against such entities and ban lawyers doing this for at least three years. If you drafted a petition and went to the court without checking if the judgments exist or not, this amounts to fabrication of records and needs to be dealt with seriously. It simply shows that you’re not willing to work,” he said.

Explaining how the situation actually unfolds on the ground, senior advocate Percival Billimoria said that several lawyers today have a fundamental lack of understanding about how generative AI functions.

“AI is also a predictive tool and does not necessarily throw up factual responses. This phenomenon is called AI hallucination and refers to the AI’s tendency to give you the response it thinks you need. You could use this tool to write the manuscript of a fiction book, but lawyers must learn that AI is not a database of authorities which uses search terms and Boolean search methods, like Supreme Court Cases (SCC).”

Billimoria’s remarks pointed to a growing concern within courtrooms that many practitioners treat AI systems as authoritative legal databases when in reality they are predictive language models capable of generating convincing but entirely fictional material.

Elaborating on the need for diligence when using AI, advocate Manu Abhishek Bhardwaj said, “My mentor taught me that in law, every word carries weight. Consequently, rigorous due diligence is essential to validate every statement. As the Hindi proverb goes, ‘Nakal ke liye bhi akal chahiye’ — even to copy, one needs intelligence.”

Lawyers must verify any case law provided by AI, as these models often “hallucinate,” offering answers based on word association rather than verified data, Bhardwaj told ThePrint. “Simply cross-referencing citations in SCC, Manupatra (a legal tech suite), or a reference book takes less than 10 minutes and prevents significant courtroom embarrassment,” he said.

Supreme Court advocate Nizam Pasha termed the top court’s intervention on this issue “very necessary”. “The shortcomings as well as the extent of permissible use of AI in legal proceedings need to be considered and demarcated early on, while reliance on it is still nascent. While technology platforms must be asked to disclaim the correctness of legal advice being casually rendered by their AI tools, the real challenge is inculcating diligence and discipline to cross-check and verify, particularly among lawyers, law students, judges and even clerks,” Pasha told ThePrint.

Bane or boon?

Expressing a divergent point of view, Delhi-based advocate Arpit Goel argued that the focus should not be on restricting AI, but on regulating and understanding it to increase efficiency.

Saying that AI itself is not the enemy, Goel added that the legal profession has always evolved alongside technological change. “Technology has always been a silent partner in the legal profession. We have seen typewriters and physical journals getting replaced by computers and digital databases. Artificial intelligence is simply the next step in that evolution.”

The real danger, Goel said, is AI becoming a substitute for independent thought rather than a tool for assisting it. “Instances of non-existent case laws and fabricated citations are already surfacing. More troubling are situations where even submissions and judicial observations appear to be mechanically generated, resulting in content that may seem correct on facts but conveys an entirely distorted meaning,” Goel told ThePrint.

Similarly, advocate Yash Chaturvedi said AI cannot be permitted to dilute professional responsibility. “Artificial Intelligence is the future, and we cannot move forward without embracing it. However, unlike humans, AI is unaccountable for its errors. The blind use of AI in our profession will lead to catastrophic consequences. Ultimately, the practitioner, not the tool, must remain legally and ethically accountable,” he said.

The debate has also expanded into questions of institutional regulation and structural safeguards within the justice delivery system. Advocate Mohammed Kumail Haider, who practices across Uttar Pradesh, told ThePrint that courts themselves should not be burdened with identifying AI-generated inaccuracies at the final stage of adjudication.

“The expert committee formalised under the guardianship of the judiciary must ensure timely checks through appropriate tools so that judges performing judicial functions are not burdened with catching AI-driven false notions in drafts,” Haider said.

Supreme Court lawyer Soutik Banerjee said the issue fundamentally concerns professional ethics and duties owed to courts.

“Firstly, lawyers rely on AI tools and unknowingly cite non-existent case law, and secondly, such tools are used as part of sharp practices to mislead the court. In both scenarios, AI is only the means through which the lawyer is misleading the court. The real issue therefore is to make lawyers accountable for the case law cited by them. Ensure that they do their homework, cross-check the citation from the source personally and then place reliance on it in court,” Banerjee told ThePrint, adding that the onus to verify citations relied upon must be incorporated in the Bar Council’s Standards of Professional Conduct and Etiquette, or Code of Ethics, as a lawyer’s duty to the court.

Urja Pandey, counsel for the Union of India in the Supreme Court, also warned that AI-generated judgments threaten public faith in the judiciary itself, and that accuracy in law is not an option but a necessity. “Citing non-existent AI judgements before a court is not a minor error. This shakes the trust and faith in the justice system of India. Without checks, artificial intelligence can distort justice. The world is moving towards AI, which can be beneficial to legal researchers and lawyers, but it cannot replace an independent application of mind,” Pandey said.

“Think of it as a horse. Powerful and fast, but without someone in control it goes wherever it pleases,” said advocate Himanshu Sihag, who practices across Delhi’s courts.

(Edited by Nardeep Singh Dahiya)
