In his annual report, U.S. Chief Justice John Roberts examines the role of technology and, in particular, the future of AI in the judiciary.
Roberts predicts that while human judges will still be needed, AI will have a significant impact on the work of the judiciary, particularly at the trial level. He also says that committees within the federal judiciary will explore the use of AI in federal court litigation.
AI is finding its way into courtrooms, but will not replace judges
Roberts rejects the notion that AI could make judges obsolete, arguing that there are often gray areas in legal decisions that still require human judgment.
But he acknowledges that AI has the potential to dramatically improve access to key legal information for lawyers and laypeople alike, and could thereby broaden access to the law itself.
Roberts writes: “I predict that human judges will be around for a while. But with equal confidence I predict that judicial work—particularly at the trial level—will be significantly affected by AI. Those changes will involve not only how judges go about doing their job, but also how they understand the role that AI plays in the cases that come before them.”
At the same time, he cautions that any use of AI requires “caution and humility,” warning that it otherwise risks invading privacy and dehumanizing the law.
2023 brought impressive AI breakthroughs in law – and epic failures
AI raised eyebrows in the legal world last year when GPT-3.5 came close to passing the bar exam, and GPT-4 proved even more capable.
However, AI tools for legal research can also be risky, as evidenced by the recent example of Michael Cohen, former lawyer for President Donald Trump, who used Google Bard to generate false case citations.
Colorado attorney Zachariah C. Crabill, as well as attorneys Peter LoDuca and Steven A. Schwartz of the Levidow Law Firm, were likewise sanctioned and fined after submitting work containing false information and case citations generated by ChatGPT. And those are only the cases that have come to light.
Chief Justice John Roberts mentions such cases in his annual report and explicitly warns against so-called hallucinations when using AI, stating that citing non-existent cases is “always a bad idea.”