A recent decision by Judge Rakoff of the Southern District of New York held that an Artificial Intelligence chat created by a criminal defendant in anticipation of a meeting with his attorneys was not protected by the attorney-client privilege or the work product doctrine. The ruling is described as answering a “question of first impression nationwide” and signals that communications with consumer Artificial Intelligence tools may not receive traditional legal protections. The case involves Bradley Heppner, a former executive of GWG Holdings, Inc., who was indicted on October 28, 2025, on five federal felony counts involving alleged fraudulent activity. When FBI agents arrested Heppner, they seized numerous documents and electronic devices from his home, including approximately thirty-one communications between Heppner and the Artificial Intelligence tool Claude, and his counsel claimed privilege over those communications.
The court identified the three elements required for attorney-client privilege: a communication between a client and an attorney, that was intended and kept confidential, and that was made for the purpose of obtaining legal advice. It found that the thirty-one Artificial Intelligence documents lacked at least two, if not all three, of these elements. Judge Rakoff held there was no attorney-client relationship because Claude is not an attorney, and the privilege depends on “a trusting human relationship” with “a licensed professional who owes fiduciary duties and is subject to discipline.” The decision also concluded there was no confidentiality in light of Anthropic’s terms and conditions, which state that Anthropic collects data on users’ inputs, uses that data to train the model, and reserves the right to disclose user data to “third parties,” including “governmental regulatory authorities”; Heppner therefore had no reasonable expectation of confidentiality. The court further found that Heppner did not use Claude for the purpose of obtaining legal advice, noting that he communicated with Claude of his own volition, not at the suggestion or direction of counsel, and that Claude itself told the government, “I’m not a lawyer and can’t provide formal legal advice or recommendations.”
The work product doctrine argument was also rejected because the Artificial Intelligence documents were not prepared “by or at the behest of counsel” and did not reflect counsel’s mental processes or strategy. Under Second Circuit law, the doctrine applies only to work performed by an attorney or the attorney’s agent, and the court held that Heppner was not acting as counsel’s agent when he engaged with Claude. Although the materials may have “affect[ed]” counsel’s later strategy, they did not “reflect” strategy at the time they were created, so the rationale for protection did not apply. The decision highlights several practical takeaways. Clients should avoid discussing legal or factual issues with Artificial Intelligence tools that do not guarantee confidentiality. Organizations should implement policies that require the use of approved internal Artificial Intelligence tools and bar unsanctioned platforms. Companies and individuals facing potential litigation should consult legal counsel before using any Artificial Intelligence tools, even internal ones. Legal teams are urged to proactively warn business stakeholders that turning to Artificial Intelligence before engaging counsel can inadvertently waive privilege and create discoverable documents.
