Reform Compass is a column by former senior officers of Income Tax, GST & Customs focused on reforms in policy and tax administration.
April 1, 2026 at 8:46 AM IST
Artificial Intelligence has arrived in law with a familiar pitch: faster drafts, better research, less grind. It sounds appealing, especially in a profession where long hours are taken for granted. But law is not built for shortcuts. It runs on precision, on facts that can be verified, on arguments that can be defended line by line. A single misplaced comma or a misquoted precedent can alter the trajectory of an entire argument, and the danger of AI-generated fabrications is no longer a cautionary tale from some distant jurisdiction.
That is why the recent episode before the Supreme Court of India stands out. In Gummadi Usha Rani & Anr. v. Sure Mallikarjuna Rao & Anr. (Order dated Feb 27, 2026), a Trial Court relied on four precedents while dismissing an objection. None of them existed. They had been generated by AI.
The Supreme Court's response was pointed. It framed the issue as one of institutional concern, not just a flawed order. It went on to say that reliance on non-existent judgments is not an error in reasoning but misconduct, one that carries consequences.
This is not a fringe incident. It is a symptom.
Unreliable Counsel
The friction has a measurable dimension. The Financial Times highlighted a survey by Anthropic in which 27% of users flagged unreliability as their primary concern. This is not surprising: generative models are, at their core, prediction engines that are statistically designed to sound plausible rather than be factually accurate. In law, plausibility is a dangerous substitute for fact. A popular AI application will confidently invent a case citation, complete with volume and page numbers, simply because it is a statistically probable sequence of text. In this sense, AI today is an unreliable intern — and an unreliable intern is worse than none at all.
$5-Trillion Disconnect
The gap between promise and performance is widening precisely as the financial stakes rise. From now until 2030, big technology firms and hyperscalers project a cumulative spend of $5 trillion on AI infrastructure. According to JPMorgan, sustaining that investment requires AI-related revenues to grow at 67% per year: from roughly $50 billion today to $650 billion by the end of the decade. Confronted with targets of that scale, AI leaders are structurally incentivised to amplify the upside and downplay the friction. The headlines follow accordingly.
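As a quick sanity check of the figure cited above, the growth rate follows directly from compound-growth arithmetic. The snippet below assumes a five-year horizon from roughly 2025 to 2030, which the article implies but does not state:

```python
# Sanity-check of the JPMorgan figure: starting from roughly $50 billion
# in AI-related revenue today, what annual growth rate is needed to reach
# $650 billion by 2030? (Five-year horizon assumed for illustration.)
start, target, years = 50.0, 650.0, 5  # revenues in $bn

# Compound annual growth rate: (target / start) ^ (1 / years) - 1
required_rate = (target / start) ** (1 / years) - 1
print(f"Required annual growth: {required_rate:.0%}")  # ~67%
```

The 13-fold revenue increase over five years compounds to roughly 67% per year, matching the figure JPMorgan cites.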
What does not follow is the voice of the CTOs actually tasked with making this work inside enterprises. These are the people grappling with data privacy regulations, trying to contain hallucinating models, and attempting to build workflows that do not expose their organisations to litigation. They are rarely quoted. The gap between the narrative sold to markets and the reality lived in legal and compliance departments is not academic — it is operational, and it is widening.
Investors have been quick to classify companies as AI winners or losers, treating the technology as a near-term certainty rather than a long-term wager. Markets that priced in the revolution have not fully priced in the verification overhead, the liability exposure, or the sheer human labour required to audit what the machine produces. Those costs are real, and in law they are non-trivial.
None of this is an argument against AI in legal practice. The medium-term potential is genuinely significant in document review, contract analysis, and surfacing relevant precedents at speed. But potential is not performance, and in its current state, AI is an unpredictable assistant that requires unyielding supervision. The question is not whether the technology will improve, because it will. The question is how much institutional damage accumulates in the interim — how many proceedings are contaminated, how many professionals are exposed, and how many courts are forced to issue orders like the one in February.
Until the gap between promise and performance closes, the legal profession has no choice but to proceed with its eyes open, verifying everything and trusting nothing on faith.