CourtGPT: Can you expect a fair trial from AI lawyers and judges?

Chinese courts are developing an AI system of “non-human judges” designed to offer comprehensive support, enhance legal services and reinforce justice across “smart courts” by next year.

Closer home, former chief justice of India D.Y. Chandrachud, just days before his retirement on 11 November, tested the acumen of an AI “lawyer” at the Supreme Court’s National Judicial Museum by asking it whether the death penalty is constitutional. The on-screen AI advocate confirmed that it is, citing the “rarest of rare” standard for heinous crimes, leaving Chandrachud visibly impressed. In June, he had advocated a “measured” adoption of AI in India’s judicial system.

Many countries have already begun using AI models, and now generative AI (GenAI) models, to reshape legal systems and aid lawmakers, courts, and legal practitioners. From streamlining processes to predicting case outcomes, AI and legal-specific language models promise to introduce efficiencies into judicial systems while reducing the chronic delays and backlogs of millions of cases that plague courts the world over.

Goldman Sachs estimates that 44% of current legal work tasks could be automated by AI. According to the 2024 Legal Trends Report by Themis Solutions Inc. (Clio), 79% of legal professionals have adopted AI in some way, and one in four use it widely or universally in their law firms.

 



Smart courts

In China, thousands of courts have introduced AI-driven systems, as mandated, to aid case processing and expedite routine decisions, significantly cutting processing times. People in China can use smartphones to file a complaint, track the progress of a case and liaise with judges. The country has also installed AI-based automated machines in so-called “one-stop” stations to provide round-the-clock legal consultations, register cases, generate legal documents, and even calculate legal costs. Judges and prosecutors use the Xiao Baogong Intelligent Sentencing Prediction System in criminal cases.

The Brazilian government, for its part, is collaborating with OpenAI to accelerate the screening and analysis of thousands of lawsuits using AI, aiming to prevent costly court losses that have strained the federal budget. Brazil’s Planning and Budget Ministry projects that government spending on court-ordered payments will reach at least 100 billion reais in 2025, around 1% of the country’s GDP. To reduce this burden, the government is turning to AI, especially for handling small claims that collectively weigh on the budget but are hard to manage individually.

The solicitor general’s office (AGU) will apply AI to triage cases, generate statistical analyses for strategic planning, and summarize documents for court submissions. AI is intended to support AGU staff, improving efficiency without replacing human workers, who will oversee all AI-generated outputs.

Tools like LexisNexis and ROSS Intelligence (ROSS) can sift through vast libraries of case law, statutes, and precedents, tasks that would typically take teams of lawyers days or even weeks. Judges and attorneys alike benefit from the accelerated pace, allowing them to focus on the more nuanced aspects of cases.

Harvey, for example, is a GenAI platform built for lawyers on OpenAI’s GPT-4. Its clients include PwC, and “more than 15,000 law firms” are on its waiting list. Closer home, companies including Lexlegis.AI, a Mumbai-based legal research company, and Sarvam, a Bengaluru-based developer of local-language models, have built legal-specific large language models (LLMs) for India’s legal community.

Also Read: We need reduced government litigation to unclog the judicial system

E-courts project

While countries like India have yet to fully embrace AI in court decisions, the eCourts project and other digitization efforts are setting the stage for potential AI integration in the country’s legal administration. The vision document for phase-3 of the eCourts project, for instance, says its “framework will be forward-looking to include the use of artificial intelligence”.

“Courts and court systems have adapted to AI in some forms but there’s still a lot more that could be done. For instance, on using AI to reduce backlog. AI assistants or lawyers would, in effect, play the role of support teams. By themselves, they are not likely to reduce backlog or reduce cases. They could be used for a pre-litigation SWOT (strength, weakness, opportunity, threat) analysis, though,” said N.S. Nappinai, Supreme Court senior counsel and founder of Cyber Saathi.

“AI as such has not been implemented or experimented in the Indian court system beyond specific interventions,” Apar Gupta, advocate and co-founder of the Internet Freedom Foundation, corroborated.

According to him, the eCourts project is primarily focused on digital transformation, addressing foundational issues like computerising court systems and facilitating remote case proceedings post-pandemic. AI has been minimally implemented, limited to tasks like translating judgments into regional languages, as the judiciary first seeks to resolve structural challenges in infrastructure, staffing, and case-processing efficiency.

The reason: while courts the world over recognise that AI can improve the efficiency and fairness of the legal system, the prospect of AI algorithms delivering “biased”, “opaque”, and “hallucinated” judgements is deeply unsettling.

Several precautions are being taken but a lot more are required, according to Nappinai. “First and foremost, whilst AI may be adapted there would still be human intervention to oversee outcomes. Focus is now also shifting to cyber security requirements. Cautious usage of AI is adapted given the limitations of AI systems including due to bias, hallucinations and lack of customised systems for India,” she added.

According to Gupta, while simple automations like document watermarking and redaction are being used, “broader AI-based decisions require more careful, regulated implementation”. “Generative AI (like large language models, or LLMs) is viewed with caution, as its inherent inaccuracies could risk justice. While some initial enthusiasm for tools like ChatGPT emerged, judges are largely cautious,” he added.

This May, for instance, the Manipur high court took the help of Google and ChatGPT to research service laws as it disposed of a writ petition by a village defence force (VDF) member, Md Zakir Hussain, who had moved the court to challenge his “disengagement” by the police authorities for alleged dereliction of duty.

In March 2023, too, justice Anoop Chitkara of the Punjab and Haryana high court used ChatGPT to gather information during a bail hearing in a case involving ‘cruelty’ in the commission of a homicide.

However, five months later, while settling a trademark dispute involving designer Christian Louboutin, justice Prathiba M. Singh of the Delhi high court ruled that ChatGPT cannot be used by lawyers to provide reasoning on “legal or factual matters in a court of law”.

Also Read: Generative AI and its interplay with law

The US, too, has employed models like COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) to predict the risk of recidivism (the tendency of offenders to commit crimes again), influencing bail, sentencing, and parole decisions. However, the technology has faced severe criticism for perpetuating biases, particularly against minority communities. The Netherlands, too, encountered a setback with its welfare-fraud detection AI, SyRI, which was terminated following accusations of racial profiling and privacy concerns.

To address such concerns, UNESCO has partnered with international experts to develop draft guidelines for the use of AI in courts and tribunals. These guidelines, informed by UNESCO’s Recommendation on the Ethics of AI, aim to ensure that AI technologies are integrated into judicial systems in a manner that upholds justice, human rights, and the rule of law.

Rising influence and risks

In his 2023 year-end report, US chief justice John G. Roberts Jr. cautioned about the rising influence of AI in the legal profession, calling it the “latest technological frontier”. He noted that legal research could soon be “unimaginable” without AI’s assistance, but also warned of its risks, including privacy invasion and the danger of “dehumanizing the law.”

He cited a recent incident where lawyers, relying on ChatGPT, were fined for citing non-existent legal cases, underscoring the potential pitfalls of using AI in the field. “Legal determinations often involve grey areas that still require application of human judgment,” Roberts said, among other things.

The ‘Guidelines for the Use of Artificial Intelligence in Canadian Courts’ document, released in September, recognizes that in Canada, some judges have already embraced AI tools to improve their efficiency and accuracy, while others may be employing generative AI without realizing it. It cautions, “Even when AI output proves accurate and valuable, though, its use, particularly in the case of certain generative models, may inadvertently entangle judges in legal complexities such as copyright infringement.”

“What we need now is for court systems to adapt to tech to ease its burden and to streamline process driven aspects. It is critical for India to acknowledge the positives of use of tech and overcome resistance or fear to adapting tech but do so cautiously. They (legal-specific LLMs) can be effective support tools but cannot replace human discretion,” Nappinai said.

Gupta, for his part, suggests integrating AI into legal practice with guidance from state bar councils and the Bar Council of India to help lawyers use generative AI “responsibly and effectively”. To benefit from AI’s efficiencies, he believes lawyers could use the tools for specific tasks, such as case summarization, but must apply critical thinking to AI-generated insights.
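That “summarize first, verify before relying on it” workflow can be pictured with a minimal sketch, assuming access to a general-purpose LLM through the OpenAI Python client; the model name, prompt, and file path below are illustrative placeholders, not a description of any tool mentioned in this article.

```python
# Minimal sketch (illustrative only): draft a case summary with an LLM,
# then force a human review step before the draft is used anywhere.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def draft_case_summary(judgment_text: str) -> str:
    """Ask the model for a first-draft summary; the output is unverified."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name; any capable model could be used
        temperature=0,   # keep the draft as literal as possible
        messages=[
            {
                "role": "system",
                "content": "Summarize the judgment factually and concisely. "
                           "Do not invent citations or facts not in the text.",
            },
            {"role": "user", "content": judgment_text},
        ],
    )
    return response.choices[0].message.content

def human_review(draft: str) -> bool:
    """Placeholder for the critical-thinking step: a lawyer checks every claim
    and citation against primary sources before approving the summary."""
    print(draft)
    return input("Approve this draft for use? (y/n): ").strip().lower() == "y"

if __name__ == "__main__":
    with open("judgment.txt", encoding="utf-8") as f:  # hypothetical input file
        summary = draft_case_summary(f.read())
    if not human_review(summary):
        print("Draft rejected; revise the prompt or summarize manually.")
```

The point of the sketch is the second function: the model only produces a draft, and nothing proceeds without an explicit human sign-off.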

“For AI to positively transform legal practice, balanced regulation, ongoing training, and careful application are essential, rather than rushing to AI as a blanket solution,” Gupta concluded.

Also Read: We need judicial system reforms to ensure swift disposal of cases

 


