Employers face trade secret risks from generative Artificial Intelligence use

Employers face growing legal risk when workers enter source code, customer lists, or business strategies into public generative Artificial Intelligence tools. Trade secret protection may be weakened by disclosure, while workplace rules must also avoid running afoul of labor law.

Employers face a growing legal problem when employees paste proprietary source code, customer lists, financial projections, M&A targets, or confidential business strategies into public generative Artificial Intelligence tools such as ChatGPT, Claude, or Google Gemini. Trade secret law under the federal Defend Trade Secrets Act and the Uniform Trade Secrets Act generally requires a company to show that it used reasonable measures to maintain secrecy. Traditional safeguards such as confidentiality agreements, physical access controls, and employee training were built for older forms of data leakage, not for routine use of third-party Artificial Intelligence platforms that may have rights over user inputs.

The legal risk is not limited to intentional misconduct. A well-meaning employee can create the same exposure as a bad actor by transmitting sensitive information to an outside platform. In February 2026, the U.S. District Court for the Southern District of New York addressed a related confidentiality issue in United States v. Heppner. The court held that attorney-client privilege did not extend to documents a party had prepared using Claude and later shared with counsel, noting that Anthropic’s Privacy Policy permits the sharing of users’ personal data with certain third parties. The court concluded that users of public Artificial Intelligence platforms “do not have substantial privacy interests” in their communications with those systems. That reasoning could be used in trade secret disputes to argue that companies voluntarily disclosed protected information to a third party, undermining the reasonable measures element required for a later claim.

Employers also need to consider labor law when responding. Artificial Intelligence acceptable use policies that are too broad may trigger scrutiny under the National Labor Relations Act if they could reasonably chill employees from discussing wages, working conditions, or collective activity. Blanket bans on all Artificial Intelligence use, or sweeping confidentiality mandates that reach Artificial Intelligence-generated content without limitation, may invite claims that a policy restricts protected concerted activity. Policies therefore need to be narrowly tailored to protect legitimate business interests, especially trade secrets and proprietary information.

A defensible compliance program should combine policy, contracts, vendor review, technical controls, and training. Recommended steps include:

- A written Artificial Intelligence acceptable use policy that clearly identifies prohibited categories of information and distinguishes approved enterprise tools from consumer-facing services.
- Vendor audits focused on training rights and data isolation.
- Technical controls such as Data Loss Prevention tools and network restrictions.
- Scenario-based employee training.
- Updates to confidentiality and intellectual property agreements so they expressly address disclosure through Artificial Intelligence prompts.

Because reasonableness is judged at the time of the alleged misappropriation, measures adopted only after a disclosure event provide no retroactive protection.
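To make the Data Loss Prevention idea concrete, here is a minimal sketch of the kind of prompt screen such a control might apply before text leaves the corporate network. The pattern names and regexes are illustrative assumptions, not any vendor's actual rules; production DLP products use far richer detection (fingerprinting, classifiers, exact-match dictionaries).

```python
import re

# Hypothetical patterns for categories of information a policy might
# prohibit from Artificial Intelligence prompts. These are illustrative
# only; a real deployment would maintain these rules centrally.
BLOCKED_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(
        r"(?i)\b(?:confidential|trade secret|internal only)\b"
    ),
}


def screen_prompt(text: str) -> list[str]:
    """Return the blocked categories matched in a candidate prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]


def is_allowed(text: str) -> bool:
    """True if the prompt matches no prohibited category."""
    return not screen_prompt(text)
```

A gateway applying this check could block or quarantine a prompt like "Our INTERNAL ONLY roadmap says…" while letting routine queries through; the key design point is that the screen runs before the text reaches a third-party platform, which is when the disclosure risk described above arises.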

Impact Score: 58

SK Group warns DRAM shortages could curb memory use

SK Group chairman Chey Tae-won warned that customers may reduce memory consumption through infrastructure and software optimization if DRAM suppliers fail to raise output. Demand from Artificial Intelligence data centers is keeping the market tight as memory makers weigh expansion against the long timelines for new fabs.

BitUnlocker bypasses TPM-only Windows 11 BitLocker

Intrinsec disclosed BitUnlocker, a downgrade attack that can bypass TPM-only Windows 11 BitLocker protections with physical access to a machine. The technique abuses a flaw in Windows recovery and deployment components and relies on older trusted boot code.

Micron samples 256 GB DDR5 9200 MT/s RDIMM server modules

Micron has begun sampling 256 GB DDR5 RDIMM server modules built on its 1-gamma technology to key ecosystem partners. The company positions the new modules as a higher-speed, more power-efficient option for scaling next-generation Artificial Intelligence and HPC infrastructure.
