Employers face a growing legal problem when employees paste proprietary source code, customer lists, financial projections, M&A targets, or confidential business strategies into public generative Artificial Intelligence tools such as ChatGPT, Claude, or Google Gemini. Trade secret law under the federal Defend Trade Secrets Act and the Uniform Trade Secrets Act generally requires a company to show that it used reasonable measures to maintain secrecy. Traditional safeguards such as confidentiality agreements, physical access controls, and employee training were built for older forms of data leakage, not for routine use of third-party Artificial Intelligence platforms whose terms may permit them to retain, review, or train on user inputs.
The legal risk is not limited to intentional misconduct. A well-meaning employee can create the same exposure as a bad actor by transmitting sensitive information to an outside platform. In February 2026, the U.S. District Court for the Southern District of New York addressed a related confidentiality issue in United States v. Heppner. The court held that attorney-client privilege did not extend to documents a party had prepared using Claude and later shared with counsel, noting that Anthropic’s Privacy Policy permits the sharing of users’ personal data with certain third parties. The court concluded that users of public Artificial Intelligence platforms “do not have substantial privacy interests” in their communications with those systems. That reasoning could be used in trade secret disputes to argue that companies voluntarily disclosed protected information to a third party, undermining the reasonable measures element required for a later claim.
Employers also need to consider labor law when responding. Artificial Intelligence acceptable use policies that are too broad may trigger scrutiny under the National Labor Relations Act if they could reasonably chill employees from discussing wages, working conditions, or collective activity. Blanket bans on all Artificial Intelligence use, or sweeping confidentiality mandates that reach Artificial Intelligence-generated content without limitation, may invite claims that a policy restricts protected concerted activity. Policies therefore need to be narrowly tailored to protect legitimate business interests, especially trade secrets and proprietary information.
A defensible compliance program should combine policy, contracts, vendor review, technical controls, and training. Recommended steps include: a written Artificial Intelligence acceptable use policy that clearly identifies prohibited categories of information and distinguishes approved enterprise tools from consumer-facing services; vendor audits focused on training rights and data isolation; technical controls such as Data Loss Prevention tools and network restrictions; scenario-based employee training; and updates to confidentiality and intellectual property agreements so they expressly address disclosure through Artificial Intelligence prompts. Because reasonableness is judged at the time of the alleged misappropriation, measures adopted only after a disclosure event provide no retroactive protection.
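To illustrate the kind of Data Loss Prevention control described above, the following is a minimal sketch of a pattern-based prompt filter that an organization might run before text leaves the corporate network. All patterns and names here are hypothetical examples; commercial DLP products use far more sophisticated detection (classification labels, fingerprinting, machine learning), and any real deployment would be tuned to the company's own data categories.

```python
import re

# Hypothetical screening rules; each pairs a regular expression with a
# human-readable reason for blocking. These are illustrative only.
BLOCKED_PATTERNS = [
    (re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE), "confidentiality marking"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "possible Social Security number"),
    (re.compile(r"(?s)BEGIN PROPRIETARY.*?END PROPRIETARY"), "proprietary code block"),
]

def scan_prompt(text: str) -> list[str]:
    """Return the reasons a prompt should be blocked; an empty list means allow."""
    return [reason for pattern, reason in BLOCKED_PATTERNS if pattern.search(text)]

if __name__ == "__main__":
    findings = scan_prompt("Draft a memo. CONFIDENTIAL: Q3 projections attached.")
    print(findings)  # non-empty list signals the prompt should be held for review
```

A filter like this would typically sit in a network proxy or browser extension so that flagged prompts are blocked or routed for review before reaching a consumer-facing Artificial Intelligence service, which also creates an audit trail supporting the "reasonable measures" showing.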
