Artificial Intelligence is reshaping legal work, not replacing lawyers

Large language models are automating parts of legal practice, but persistent reasoning flaws, professional benchmarks, and labor statistics suggest that lawyers’ jobs remain largely secure for now.

When the generative Artificial Intelligence boom accelerated in 2022, many law students feared for their future jobs, especially after early demonstrations of chatbots passing standardized exams and predictions that 44% of legal work could be automated. New tools quickly entered law firms, where partners began using systems such as ChatGPT, Microsoft Copilot, Harvey, and Thomson Reuters’ CoCounsel to sift through documents and draft material that junior associates once handled. One major firm, Clifford Chance, recently cut 10% of its London staff and cited increased use of Artificial Intelligence. Yet litigators like Rudi Miller have chosen courtroom work in part because judges have not allowed ChatGPT-enabled systems to argue in court, and human judgment still dominates live proceedings.

Despite the hype, practicing lawyers report that large language models struggle with the kind of nuanced reasoning that real legal work demands. Junior and senior associates describe using Artificial Intelligence for document review, first-draft research, and citation drafting, while repeatedly encountering hallucinated case citations, off-topic rambling, and failures on narrow or novel questions of law. One attorney notes that “right now, I don’t think very much of the work that litigators do, at least not the work that I do, can be outsourced to an AI utility,” and another says she would “much rather work with a junior associate than an AI tool” and cannot foresee that changing soon unless the tools improve extraordinarily fast. Researchers echo these doubts, arguing that passing the Uniform Bar Exam is not the same as exercising strategic judgment in ambiguous, high-stakes situations, and that models trained on next-word prediction may lack the mental model of the world needed for complex legal reasoning.

New benchmarks reinforce those concerns by testing how well models perform on realistic professional tasks rather than exams. The Professional Reasoning Benchmark, released by Scale AI, found that even the strongest systems scored only 37% on the most difficult legal problems, often making inaccurate judgments or reaching correct answers through incomplete or opaque reasoning. The AI Productivity Index from Mercor reported “substantial limitations” in legal work, with the top model scoring 77.9% on its legal tasks, and the study warned that such performance may still be unusable in fields where errors carry high costs. Legal scholars note that these benchmarks do not yet capture subjective, open-ended legal questions, and they point out that much legal work is poorly recorded for training purposes, with relevant documents scattered across hierarchical statutes, regulations, and cases.

Labor data so far does not support a narrative of mass displacement. According to the National Association for Law Placement, 93.4% of 2024 law school graduates were employed within 10 months of graduation, the highest rate on record, and the number of graduates working in law firms rose by 13% from 2023 to 2024. Talent leaders at major firms say they are not currently reducing headcount, and economists predict only incremental labor-market effects in the near term, citing the legal profession’s low risk tolerance and the limited capabilities of current Artificial Intelligence systems for complex matters. Institutional factors also slow automation: higher productivity can cut billable hours under the dominant business model, liability concerns push clients and firms to insist on human accountability, and regulations constrain how tools are deployed. At the same time, associates say that as Artificial Intelligence absorbs grunt work like contract review, firms will need more formal training systems to replace the traditional apprenticeship model, and some junior lawyers quietly worry that they are on the “last plane out” before deeper structural changes arrive.

Impact Score: 55

Chatbots from Google and other providers surface real phone numbers

Generative Artificial Intelligence chatbots are surfacing real phone numbers and other personal details, sometimes by pulling from obscure public sources and sometimes by inventing plausible but wrong contact information. Privacy experts say users have few reliable ways to find out whether their data is in model training sets or to force its removal.

U.S. and China revisit Artificial Intelligence emergency talks

Washington and Beijing are exploring renewed talks on an emergency communication channel for Artificial Intelligence as fears grow over the capabilities of Anthropic’s Mythos model. The shift reflects rising concern in both capitals that competitive pressure is outpacing safeguards.

Artificial Intelligence divides employers as hiring and headcount shift

U.S. hiring beat expectations in April, but employers remain split on whether Artificial Intelligence should drive layoffs, productivity gains, or internal redeployment. At the same time, candidate use of Artificial Intelligence is outpacing employer adoption in hiring, adding new pressure to screening and entry-level recruiting.
