The growing integration of artificial intelligence (AI) into judicial processes is raising concerns that it could exacerbate bias and structural distortions in legal systems worldwide. A striking case from India's Andhra Pradesh state, in which a judge cited four legal precedents in a land dispute that were later found to be entirely fabricated by an AI tool, illustrates these risks. The error came to light only on appeal, when the case reached India's Supreme Court, which deemed it not merely an oversight but "misconduct."
India's Supreme Court has issued notices to the country's Attorney General, Solicitor General, and the Bar Council of India, signaling a serious response to the misuse of AI in legal proceedings. The incident reflects a global pattern: from Colombia, where a judge included a ChatGPT conversation transcript in a ruling, to the United States, where lawyers have been sanctioned for submitting briefs citing AI-invented cases, courtroom adoption of AI is outpacing regulatory frameworks.
In India, a notable instance occurred in March 2023, when a judge of the Punjab and Haryana High Court consulted ChatGPT during a bail hearing in a murder case and openly acknowledged doing so in the written order. While the judge's transparency was praised, legal advocates warned of AI's propensity to generate false information and to encode biases from its training data. The context is critical: India's judiciary is overwhelmed, with roughly 55 million pending cases and more than 180,000 unresolved for over 30 years, a backlog that invites desperate measures that may compromise justice.
The backlog crisis in India, estimated to take centuries to clear at current rates, creates a precarious environment for unchecked AI adoption. Chief Justice Surya Kant has noted that AI is paradoxically increasing workloads, since court staff must now verify AI-generated citations. Beyond fabrication, a deeper problem is AI's tendency to inherit and perpetuate biases embedded in historical legal data, which often reflect societal inequalities. In India's prisons, marginalized communities such as Dalits and Muslims are disproportionately represented among undertrial prisoners, raising alarm that AI systems could reinforce these disparities if used for predictive assessments.
Experts emphasize that AI must remain an assistive tool, not a decision-maker, in legal contexts. Initiatives like InLegalLLaMA, a model trained on Indian legal corpora, and SUPACE, an AI research assistant developed under the Supreme Court's e-committee, aim to enhance efficiency by retrieving case law and summarizing documents without influencing judgments. Similarly, in Brazil, AI is deployed to manage high-volume repetitive cases, but its role is strictly operational. The consensus is clear: while AI can streamline processes, ethical and judicial responsibilities cannot be delegated to algorithms, necessitating robust oversight and verification mechanisms.
Source: www.dw.com