Recent weeks have seen a surge in stories about artificial intelligence in the courtroom, from families using AI-generated avatars to deliver victim impact statements to a rising tide of legal filings laced with hallucinated material. Legal experts are particularly alarmed by the growing number of fabricated citations and invented precedents appearing in documents drafted with the assistance of AI tools. Judges, catching on to these mistakes, are increasingly vocal about their frustration and their doubts over the reliability of filings produced with the help of large language models.
In several high-profile cases, including ones involving major law firms and leading technology companies, AI-generated errors have not only wasted judicial time but also drawn financial penalties and public reprimands. California judge Michael Wilner sanctioned the firm Ellis George after discovering that its court filing cited fabricated articles produced with the aid of Google Gemini and other law-specific AI models. Similarly, in a lawsuit involving Anthropic, an erroneous citation generated by the company's own AI tool, Claude, made it into a filing undetected. And in an Israeli case, prosecutors mistakenly cited nonexistent laws in a filing, an error they attributed to their use of AI and one that drew a sharp rebuke from the presiding judge.
Maura Grossman, a professor specializing in law and computer science, has been warning about these risks since hallucinations were first reported in legal contexts. Far from diminishing, she notes, the problem appears to be accelerating, with even senior lawyers at elite firms falling prey to misplaced trust in language models' output. The underlying issue, Grossman argues, is that attorneys are seduced by the fluency and apparent authority of AI-generated text, skipping the critical verification they would apply to work produced by a junior colleague. Despite longstanding warnings and evolving best practices, much of the legal profession remains vulnerable to over-reliance on these tools. And as companies continue to market legal-specialized AI as all but infallible, experts caution that repeated, systemic errors could soon have profound consequences for court decisions if left unchecked.