Employers face trade secret risks from generative Artificial Intelligence use

Employers face growing legal risk when workers enter source code, customer lists, or business strategies into public generative Artificial Intelligence tools. Trade secret protection may be weakened by disclosure, while workplace rules must also avoid running afoul of labor law.

Employers face a growing legal problem when employees paste proprietary source code, customer lists, financial projections, M&A targets, or confidential business strategies into public generative Artificial Intelligence tools such as ChatGPT, Claude, or Google Gemini. Trade secret law under the federal Defend Trade Secrets Act and the Uniform Trade Secrets Act generally requires a company to show that it used reasonable measures to maintain secrecy. Traditional safeguards such as confidentiality agreements, physical access controls, and employee training were built for older forms of data leakage, not for routine use of third-party Artificial Intelligence platforms whose terms may grant the provider rights to retain, review, or train on user inputs.

The legal risk is not limited to intentional misconduct. A well-meaning employee can create the same exposure as a bad actor by transmitting sensitive information to an outside platform. In February 2026, the U.S. District Court for the Southern District of New York addressed a related confidentiality issue in United States v. Heppner. The court held that attorney-client privilege did not extend to documents a party had prepared using Claude and later shared with counsel, noting that Anthropic’s Privacy Policy permits the sharing of users’ personal data with certain third parties. The court concluded that users of public Artificial Intelligence platforms “do not have substantial privacy interests” in their communications with those systems. That reasoning could be used in trade secret disputes to argue that companies voluntarily disclosed protected information to a third party, undermining the reasonable measures element required for a later claim.

Employers also need to consider labor law when responding. Artificial Intelligence acceptable use policies that are too broad may trigger scrutiny under the National Labor Relations Act if they could reasonably chill employees from discussing wages, working conditions, or collective activity. Blanket bans on all Artificial Intelligence use, or sweeping confidentiality mandates that reach Artificial Intelligence-generated content without limitation, may invite claims that a policy restricts protected concerted activity. Policies therefore need to be narrowly tailored to protect legitimate business interests, especially trade secrets and proprietary information.

A defensible compliance program should combine policy, contracts, vendor review, technical controls, and training. Recommended steps include a written Artificial Intelligence acceptable use policy that clearly identifies prohibited categories of information and distinguishes approved enterprise tools from consumer-facing services, vendor audits focused on training rights and data isolation, technical controls such as Data Loss Prevention tools and network restrictions, scenario-based employee training, and updates to confidentiality and intellectual property agreements so they expressly address disclosure through Artificial Intelligence prompts. Because reasonableness is judged at the time of the alleged misappropriation, measures adopted only after a disclosure event provide no retroactive protection.
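As one illustration of the technical-controls layer described above, a Data Loss Prevention-style filter can screen outbound prompts against prohibited categories of information before they reach a consumer Artificial Intelligence service. This is a minimal sketch only; the category names and regular expressions below are hypothetical examples, not a vetted or complete rule set, and a production deployment would rely on an enterprise DLP product rather than ad hoc patterns.

```python
import re

# Hypothetical examples of prohibited-content categories an acceptable use
# policy might define. These patterns are illustrative, not production rules.
PROHIBITED_PATTERNS = {
    # Long opaque alphanumeric tokens that may be credentials or API keys
    "api_key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
    # U.S. Social Security number format
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Documents explicitly labeled as confidential or trade secret material
    "confidential_label": re.compile(
        r"\b(confidential|trade secret|internal only)\b", re.IGNORECASE
    ),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of prohibited categories matched in an outbound prompt."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]

def is_blocked(text: str) -> bool:
    """Block the prompt if any prohibited category matches."""
    return bool(scan_prompt(text))
```

A gateway applying this check could block or quarantine flagged prompts and log the event for the training and audit steps listed above, which also helps document the "reasonable measures" a company took at the time.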

Uber, Pony.ai and Verne plan robotaxi launch in Zagreb

Uber plans to launch Europe’s first commercial robotaxi service in Croatia through a partnership with Pony.ai and Verne. Zagreb is set to host the rollout as the companies target broader expansion across European markets.

CPU prices rise as supply tightens around Artificial Intelligence demand

CPU makers are gaining pricing power as advanced manufacturing capacity shifts toward higher-margin Artificial Intelligence chips. The squeeze is lifting costs across servers and high-performance consumer products while raising questions about longer-term demand and architecture shifts.

Arm lifts chip stocks with new Artificial Intelligence server processor outlook

Arm projected that its new data-center processor for agentic Artificial Intelligence could become a major revenue driver, sending its shares sharply higher and lifting other CPU makers. The forecast points to a broader shift in the Artificial Intelligence market from training toward inference and server computing.

Global Artificial Intelligence regulation in life sciences

Life sciences companies face a fast-changing regulatory and intellectual property environment as governments in the US, UK, EU, and China develop new rules for Artificial Intelligence. The focus is shifting toward patient safety, data governance, ethics, and cross-border compliance in drug development and commercialization.

GitHub faces questions over Artificial Intelligence-native development

GitHub’s sustained reliability problems and unclear leadership are raising doubts about whether it still deserves to be the default platform for Artificial Intelligence-native development. The broader developer tooling landscape is also contending with security failures, product attribution disputes, and renewed scrutiny of platform quality.
