SDNY ruling finds generative Artificial Intelligence documents not privileged

A federal judge in the Southern District of New York held that a defendant’s use of a public generative Artificial Intelligence tool to analyze his legal exposure was not protected by attorney-client privilege or the work product doctrine. The decision highlights how platform terms, confidentiality, and attorney involvement determine whether Artificial Intelligence-assisted analyses remain shielded in investigations and litigation.

U.S. District Judge Jed Rakoff of the Southern District of New York issued a bench ruling on February 10, 2026, holding that a criminal defendant’s independent use of a generative Artificial Intelligence tool to assess his legal exposure was not protected by attorney-client privilege or the work product doctrine. In United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Oct. 28, 2025), former financial services executive Bradley Heppner had used Anthropic’s Claude, a publicly accessible generative Artificial Intelligence platform, to enter prompts about the government’s securities fraud investigation and his potential liability, including facts he had learned from his lawyers, and the system generated written outputs in response. When agents arrested Heppner on November 4, 2025 and searched his Dallas residence, they seized electronic devices containing approximately thirty-one Artificial Intelligence-generated documents consisting of his prompts and the platform’s responses.

Defense counsel argued that these roughly thirty-one documents were privileged because Heppner created them to prepare for discussions with counsel and later shared them with his attorneys, but conceded that he had acted on his own initiative rather than at counsel’s direction. The government sought a determination that the documents were neither privileged communications nor protected work product, and the court agreed. Judge Rakoff concluded that the Artificial Intelligence documents were not communications with an attorney and were not created for the purpose of obtaining legal advice from an attorney. He noted that Claude expressly warns users who ask legal questions to consult a “qualified attorney,” and characterized such querying as research activity rather than a privileged exchange. The court further held that the work product doctrine did not apply because the materials were not prepared by or at the direction of counsel in anticipation of litigation, and that subsequently sharing them with lawyers could not retroactively confer protection. Judge Rakoff likened the situation to a client conducting Google searches or checking out library books and later discussing that research with counsel.

The ruling turned heavily on confidentiality and the Artificial Intelligence platform’s terms of use. Claude is a retail, publicly accessible program trained on multiple sources, including data from user prompts and outputs, and its terms reserve the right to retain, train on, and disclose user information, including potential disclosure to “governmental regulatory authorities” and “third parties.” Those terms undercut any reasonable expectation that the communications were made in confidence. The court did not resolve how privilege might apply to a closed, enterprise Artificial Intelligence environment with strong confidentiality protections, and the government itself acknowledged that the analysis “might be different” if counsel had directed the Artificial Intelligence searches.

For companies, boards, executives, and compliance leaders who increasingly rely on generative Artificial Intelligence to analyze legal and regulatory exposure, organize facts, and explore strategy, the decision signals that unsupervised use of public tools can create discoverable material. The practical guidance: treat Artificial Intelligence as a powerful but disclosure-prone utility, carefully vet platform confidentiality terms, consider closed enterprise systems, involve counsel early, and formalize protocols so that any Artificial Intelligence-assisted work in investigations and litigation is structured, supervised, and aligned with the traditional privilege requirements of confidentiality and attorney direction.

Impact Score: 64

Anumana wins FDA clearance for pulmonary hypertension ECG Artificial Intelligence tool

Anumana has received FDA 510(k) clearance for an Artificial Intelligence-enabled pulmonary hypertension algorithm designed for use with standard 12-lead electrocardiograms. The company says the software can help clinicians spot early signs of disease within existing workflows and without moving patient data outside the health system environment.

Anu Bradford on tech sovereignty and regulatory fragmentation

Anu Bradford argues that Europe is wavering in its role as the world’s digital rule-setter just as governments everywhere move toward more state control over technology. Global companies are being pushed to treat geopolitical risk, data sovereignty, and Artificial Intelligence governance as core strategic issues.

Mistral launches text-to-speech model

Mistral has expanded its Voxtral family with a text-to-speech system aimed at enterprise voice applications. The company is positioning the open-weights model as a flexible alternative for organizations that want more control over deployment, cost, and customization.
