Court limits attorney-client privilege for use of Artificial Intelligence legal tools

A federal judge in New York ruled that a criminal defendant’s private chat with the Artificial Intelligence tool Claude was not protected by attorney-client privilege or the work product doctrine, raising new risks for businesses and individuals who use Artificial Intelligence in legal contexts.

A recent decision by Judge Rakoff of the Southern District of New York held that an Artificial Intelligence chat created by a criminal defendant in anticipation of a meeting with his attorneys was not protected by attorney-client privilege or the work product doctrine. The ruling is described as answering a “question of first impression nationwide” and signals that communications with consumer Artificial Intelligence tools may not receive traditional legal protections. The case involves Bradley Heppner, a former executive of GWG Holdings, Inc., who was indicted on October 28, 2025, on five federal felony counts involving alleged fraudulent activity. When FBI agents arrested Heppner, they seized numerous documents and electronic devices from his home, including approximately thirty-one communications between Heppner and the Artificial Intelligence tool Claude, over which his counsel claimed privilege.

The court identified three elements required for attorney-client privilege: a communication between a client and an attorney, intended and kept confidential, and made for the purpose of obtaining legal advice. It found that the 31 Artificial Intelligence documents lacked at least two, if not all three, of these elements. Judge Rakoff held there was no attorney-client relationship because Claude is not an attorney and the privilege depends on “a trusting human relationship” with “a licensed professional who owes fiduciary duties and is subject to discipline.” The decision also concluded there was no confidentiality due to Anthropic’s terms and conditions, which state that Anthropic collects data on users’ inputs, uses that data to train the model, and reserves the right to disclose user data to “third parties,” including “governmental regulatory authorities,” so Heppner had no reasonable expectation of confidentiality. The court further found that Heppner did not use Claude for the purpose of obtaining legal advice, noting that he communicated with Claude of his own volition, not at the suggestion or direction of counsel, and that Claude itself told the government, “I’m not a lawyer and can’t provide formal legal advice or recommendations.”

The work product doctrine argument was also rejected because the Artificial Intelligence documents were not prepared “by or at the behest of counsel” and did not reflect counsel’s mental processes or strategy. Under Second Circuit law, the doctrine applies only to work performed by an attorney or the attorney’s agent, and the court held that Heppner was not acting as counsel’s agent when he engaged with Claude. Although the materials may have “affect[ed]” counsel’s later strategy, they did not “reflect” strategy at the time they were created, so the rationale for protection did not apply. The decision highlights several practical takeaways: clients should avoid discussing legal or factual issues with Artificial Intelligence tools that do not guarantee confidentiality, organizations should implement policies that require use of approved internal Artificial Intelligence tools and bar unsanctioned platforms, and companies and individuals facing potential litigation should consult with legal counsel before using any Artificial Intelligence tools, even internal ones. Legal teams are urged to proactively warn business stakeholders that turning to Artificial Intelligence before engaging counsel can inadvertently waive privilege and create discoverable documents.

Impact Score: 65

Congress weighs Artificial Intelligence transparency rules

Bipartisan lawmakers are pushing a federal transparency standard for the largest Artificial Intelligence models as Congress works on a broader national framework. The proposal aims to increase public trust while avoiding stricter state-by-state requirements and heavier regulation.

Report finds California creative job losses are not driven by Artificial Intelligence

New research from Otis College of Art and Design finds California’s recent creative industry job losses stem from cost pressures and structural shifts, not direct worker displacement by generative Artificial Intelligence. The technology is changing workflows and expectations, but it is largely replacing tasks rather than entire jobs.

U.S. senators propose broader chip tool export ban for Chinese firms

A bipartisan proposal in the U.S. Senate would shift semiconductor equipment controls from specific fabs to targeted Chinese companies and their affiliates. The measure is aimed at cutting off access to advanced lithography and other wafer fabrication tools for firms such as Huawei, SMIC, YMTC, CXMT, and Hua Hong.

Trump executive order targets state Artificial Intelligence laws

Executive Order 14365 lays out a federal strategy to discourage, challenge, and potentially preempt state Artificial Intelligence laws viewed as burdensome. Employers are advised to keep complying with current state and local rules while preparing for regulatory uncertainty in 2026.

Who decides how America uses Artificial Intelligence in war

Stanford experts are divided over how the United States should govern Artificial Intelligence in defense, surveillance, and warfare. Their views converge on one point: decisions with such high stakes cannot be left to companies alone.
