U.S. District Judge Jed Rakoff of the Southern District of New York issued a bench ruling on February 10, 2026, holding that a criminal defendant’s independent use of a generative artificial intelligence (AI) tool to assess his legal exposure was protected by neither the attorney-client privilege nor the work product doctrine. In United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Oct. 28, 2025), former financial services executive Bradley Heppner used Anthropic’s Claude, a publicly accessible generative AI platform, to submit prompts about the government’s securities fraud investigation and his potential liability, including facts he had learned from his lawyers, and received written outputs in response. When agents arrested Heppner on November 4, 2025, and searched his Dallas residence, they seized electronic devices containing approximately thirty-one AI-generated documents consisting of his prompts and the platform’s responses.
Defense counsel argued that the documents were privileged because Heppner created them to prepare for discussions with counsel and later shared them with his attorneys, but conceded that he had acted on his own initiative rather than at counsel’s direction. The government sought a determination that the documents were neither privileged communications nor protected work product, and the court agreed. Judge Rakoff concluded that the AI documents were not communications with an attorney and were not created for the purpose of obtaining legal advice from an attorney; he noted that Claude expressly warns users who ask legal questions to consult a “qualified attorney,” and he characterized such querying as research activity rather than a privileged exchange. The court further held that the work product doctrine did not apply because the materials were not prepared by or at the direction of counsel in anticipation of litigation, and that subsequently sharing them with lawyers could not retroactively confer protection, likening the situation to a client who runs Google searches or checks out library books and later discusses that research with counsel.
The ruling turned heavily on confidentiality and the AI platform’s terms of use. Claude is a retail, publicly accessible program trained on multiple sources, including data from user prompts and outputs, and its terms reserve rights to retain, train on, and disclose user information, including potential disclosure to “governmental regulatory authorities” and “third parties.” Those terms undercut any reasonable expectation that the communications were made in confidence. The court did not resolve how privilege might apply to a closed, enterprise AI environment with strong confidentiality protections, and the government itself acknowledged that the analysis “might be different” if counsel had directed the AI searches. For the companies, boards, executives, and compliance leaders who increasingly rely on generative AI to analyze legal and regulatory exposure, organize facts, and explore strategy, the decision signals that unsupervised use of public tools can create discoverable material. The practical guidance: treat AI as a powerful but disclosure-prone utility, vet platform confidentiality terms carefully, consider closed enterprise systems, involve counsel early, and formalize protocols so that any AI-assisted work in investigations and litigation is structured, supervised, and aligned with the traditional privilege requirements of confidentiality and attorney direction.
