Training without consent is risky business: what business owners need to know about the proposed Artificial Intelligence Accountability and Data Protection Act

The proposed Artificial Intelligence Accountability and Data Protection Act would create a federal private right of action for use of individuals’ personal or copyrighted data without express consent, exposing companies that train models without permission to new liability. The bill would broaden covered works beyond registered copyrights and allow substantial remedies including compensatory, punitive and injunctive relief.

Artificial Intelligence is a powerful creative tool, but the legal landscape around protectable content and liability is evolving rapidly. On July 21, 2025, Senators Josh Hawley and Richard Blumenthal introduced the Artificial Intelligence Accountability and Data Protection Act, which would establish a new federal cause of action for individuals whose personal or copyrighted data is used in training Artificial Intelligence models without express, prior consent. The bill would apply to both personally identifiable information and data “generated by an individual and protected by copyright.”

The proposed Act would narrow or remove the fair use defense in this context by defining prohibited conduct to include the “appropriation, use, collection, processing, sale, or other exploitation of individuals’ data without express, prior consent,” and by defining “generation” to include content that “imitates, replicates, or is substantially derived from” covered data. Unlike the Copyright Act, the proposed measure would not limit enforcement to registered works. Remedies would include compensatory damages equal to the greatest of actual damages, treble profits, or $1,000, plus punitive damages, injunctive relief, and attorney’s fees. The bill would also authorize secondary liability for parties who aided and abetted misuse of covered data.
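The compensatory-damages floor described above reduces to a simple greatest-of-three rule. The sketch below illustrates that calculation; the function name and figures are illustrative only, not drawn from the bill or any case.

```python
def compensatory_damages(actual_damages: float, defendant_profits: float) -> float:
    """Return the greatest of actual damages, treble profits, or $1,000,
    mirroring the bill's proposed compensatory-damages floor."""
    return max(actual_damages, 3 * defendant_profits, 1_000.0)

# Even with no provable losses or profits, the $1,000 floor would apply.
print(compensatory_damages(0, 0))        # 1000.0
# Treble profits (3 x $2,000) exceeds both actual damages and the floor.
print(compensatory_damages(500, 2_000))  # 6000
```

In practice the rule means a plaintiff with no provable losses could still recover at least $1,000 per violation, while a defendant's profits are the dominant term whenever trebled profits exceed actual damages.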

The bill arrives amid ongoing litigation over training large language models. Courts have sometimes found training to be fair use, but rulings vary. Notable litigation includes Bartz v. Anthropic, in which a Northern District of California court held that training on lawfully acquired books could be fair use but that retaining pirated copies was not; the parties later announced a $1.5 billion settlement. By contrast, in Thomson Reuters v. Ross Intelligence, the court found that the commercial benefit from training outweighed the fair use defense for a non-generative model.

For companies and creators, the article urges immediate action: review and strengthen consent policies, audit training datasets to identify covered data, ensure consent is “freely given, informed, and unambiguous,” disclose all entities with data access, and implement monitoring to detect unauthorized use. Retailers should confirm permissions and secure indemnities from designers or manufacturers. Although the bill remains in committee, the practical steps suggested aim to reduce the heightened compliance and enforcement risks the proposal would create.
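The dataset-audit step above can be approximated programmatically. The sketch below flags records lacking an express-consent marker or carrying a copyright indicator; the field names ("consent", "copyrighted", "source") and the records themselves are hypothetical assumptions for illustration, not a prescribed compliance schema.

```python
def audit_records(records):
    """Return records that need review before use in model training:
    anything without an express-consent flag, or marked as copyrighted."""
    return [
        rec for rec in records
        if not rec.get("consent") or rec.get("copyrighted")
    ]

# Hypothetical dataset metadata for illustration.
dataset = [
    {"id": 1, "source": "licensed-corpus", "consent": True,  "copyrighted": False},
    {"id": 2, "source": "web-scrape",      "consent": False, "copyrighted": True},
]
print([rec["id"] for rec in audit_records(dataset)])  # [2]
```

A real audit pipeline would also need to verify that each consent record is "freely given, informed, and unambiguous" and log which entities have access to the data, as the bill's consent standard contemplates.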


Inside the UK’s AI Security Institute

The UK’s AI Security Institute has found that popular frontier models can be jailbroken at scale, exposing reliability gaps and security risks for governments and regulated industries that rely on trusted vendors.

Siemens debuts Digital Twin Composer for industrial metaverse deployments

Siemens has introduced Digital Twin Composer, a software tool that builds industrial metaverse environments at scale by merging comprehensive digital twins with real-time physical data, enabling faster virtual decision making. Early deployments with PepsiCo report higher throughput, shorter design cycles and reduced capital expenditure through physics-accurate simulations and artificial intelligence driven optimization.

Cadence builds chiplet partner ecosystem for physical artificial intelligence and data center designs

Cadence has introduced a Chiplet Spec-to-Packaged Parts ecosystem aimed at simplifying chiplet design for physical artificial intelligence, data center and high performance computing workloads, backed by a roster of intellectual property and foundry partners. The program centers on a physical artificial intelligence chiplet platform and framework that integrates prevalidated components to cut risk and speed commercial deployment.
