Generative Artificial Intelligence and confidentiality risks

Generative Artificial Intelligence is increasingly used in legal and commercial workflows, but sharing privileged material with publicly available tools can jeopardize confidentiality. Recent decisions in England and the United States highlight growing legal risk and the need for tighter controls.

Generative Artificial Intelligence tools are now embedded in routine business activity, from summarising documents to analysing issues and capturing meeting notes. They are also increasingly used when organisations and individuals are dealing with legal issues. That creates a significant tension with legal professional privilege, which under English law depends on confidentiality being preserved. Legal advice privilege protects confidential communications between a client and its lawyers for the dominant purpose of seeking or giving legal advice, while litigation privilege can extend to certain confidential communications with third parties where litigation is in reasonable contemplation and the dominant purpose is the conduct of that litigation. If confidentiality is lost, privilege is likely to fall away.

The most acute risk arises when privileged material, including legal advice, draft pleadings, or litigation strategy, is uploaded into generative Artificial Intelligence systems. Publicly available platforms are especially problematic because their terms may permit storage, analysis, or reuse of user inputs, including for model training. In UK v Secretary of State for the Home Department [2026] UKUT 81 (IAC), the Upper Tribunal indicated that uploading confidential material into publicly available Artificial Intelligence platforms may be treated as placing that information into the public domain, so client confidentiality is lost and any related claim to legal professional privilege may fail. English authority remains limited, but the direction of travel is clear: confidentiality principles still govern and uncertainty itself creates litigation risk.

Recent US litigation points in the same direction. In early 2026, a US federal court in US v Heppner No 25 Cr 503 (SDNY) held that a defendant’s communications with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or work product protection. Although those concepts do not map directly onto English law, the reasoning turned on familiar issues: there was no lawyer-client relationship and confidentiality was lost by sharing information with a third-party provider. English courts have not yet ruled directly on the privilege status of Artificial Intelligence-assisted outputs, but disputes over disclosure are likely to test these issues soon.

Risk varies by deployment model. Publicly available tools present the greatest threat because they are open platforms with limited transparency over retention, reuse, and access. Enterprise or private systems can be configured with contractual confidentiality protections, restrictions on training and reuse, and clear retention and deletion controls. Even so, privilege does not arise automatically. Communications generated by non-lawyers using Artificial Intelligence to assess legal questions may never attract legal advice privilege, and confidentiality can still be undermined by weak access controls, onward sharing, or broad internal circulation.

High-risk scenarios include copying legal advice into tools for summarisation, using Artificial Intelligence notetakers on calls where legal issues are discussed, relying on non-lawyers to analyse legal risk before involving counsel, and distributing Artificial Intelligence-generated outputs across insurers, advisers, or wider business teams. Sensible precautions include treating public tools as non-confidential, avoiding the input of privileged material, training non-legal teams and senior management on confidentiality risks, adopting clear usage policies, performing due diligence on enterprise systems, and using Artificial Intelligence to support rather than replace legal advice. Courts are likely to scrutinise Artificial Intelligence-assisted workflows closely where confidentiality is in doubt.

Impact Score: 55

AMD expands Samsung HBM4 deal for next-generation accelerators

AMD has secured Samsung HBM4 supply for its next-generation AMD Instinct MI455X graphics processing units, while the agreement also points to broader memory collaboration around future server chips. The arrangement suggests Samsung gained leverage as demand for advanced memory remains tight.

OpenAI acquires Astral to strengthen coding workflows

OpenAI is acquiring Astral, the developer of open source Python tools including uv, Ruff and ty, to integrate them with Codex. The move signals a push to make Artificial Intelligence coding systems more reliable across the full software development workflow.
