Training without consent is risky business: what business owners need to know about the proposed Artificial Intelligence Accountability and Data Protection Act

The proposed Artificial Intelligence Accountability and Data Protection Act would create a federal private right of action over the use of individuals’ personal or copyrighted data without express consent, exposing companies that train models without permission to new liability. The bill would broaden covered works beyond registered copyrights and allow substantial remedies, including compensatory, punitive and injunctive relief.

Artificial Intelligence is a powerful creative tool, but the legal landscape around protectable content and liability is evolving rapidly. On July 21, 2025, Senators Josh Hawley and Richard Blumenthal introduced the Artificial Intelligence Accountability and Data Protection Act, which would establish a new federal cause of action for individuals whose personal or copyrighted data is used in training Artificial Intelligence models without express, prior consent. The bill would apply to both personally identifiable information and data “generated by an individual and protected by copyright.”

The bill would narrow or remove the fair use defense in this context by defining prohibited conduct to include the “appropriation, use, collection, processing, sale, or other exploitation of individuals’ data without express, prior consent,” and by defining “generation” to include content that “imitates, replicates, or is substantially derived from” covered data. Unlike the Copyright Act, the proposed measure would not limit enforcement to registered works. Remedies would include compensatory damages equal to the greatest of actual damages, treble profits, or $1,000, plus punitive damages, injunctive relief, and attorney’s fees. The bill would also authorize secondary liability for parties who aided and abetted misuse of covered data.
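The compensatory-damages formula described above can be sketched in a few lines. This is an illustrative calculation only, not legal advice; the function name and inputs are hypothetical, and the bill's actual damages provisions would control.

```python
def compensatory_damages(actual_damages: float, defendant_profits: float) -> float:
    """Return the greatest of actual damages, treble (3x) profits,
    or the $1,000 statutory floor, as the bill describes."""
    return max(actual_damages, 3 * defendant_profits, 1_000.0)

# With $500 in actual damages and $200 in profits, treble profits
# come to $600, so the $1,000 floor governs.
print(compensatory_damages(500.0, 200.0))  # 1000.0
```

Note that this is only the compensatory component; punitive damages, injunctive relief, and attorney's fees would stack on top.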

The article situates the bill amid ongoing litigation over training large language models, where courts have sometimes found training to be fair use but rulings vary. Notable litigation includes Bartz v. Anthropic, in which a Northern District of California court held that training on lawfully acquired books could be fair use but retaining pirated copies was not, followed by an announced settlement reported at $1.5 billion. By contrast, the court in Thomson Reuters v. Ross Intelligence found that the commercial benefit from training outweighed the fair use defense for a non-generative model.

For companies and creators, the article urges immediate action: review and strengthen consent policies, audit training datasets to identify covered data, ensure consent is “freely given, informed, and unambiguous,” disclose all entities with data access, and implement monitoring to detect unauthorized use. Retailers should confirm permissions and secure indemnities from designers or manufacturers. Although the bill remains in committee, the practical steps suggested aim to reduce the heightened compliance and enforcement risks the proposal would create.
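One of the steps above, auditing training datasets for covered data without consent on file, can be sketched programmatically. The record schema and field names here are hypothetical assumptions for illustration; a real audit would depend on how a company tracks data provenance and consent.

```python
from dataclasses import dataclass

@dataclass
class Record:
    source: str                    # provenance label for the data source
    contains_covered_data: bool    # personal or copyrighted data present
    consent_on_file: bool          # express, prior consent documented

def flag_for_review(dataset: list[Record]) -> list[str]:
    """Return sources holding covered data with no consent documented."""
    return [r.source for r in dataset
            if r.contains_covered_data and not r.consent_on_file]

sample = [
    Record("blog-scrape-001", True, False),
    Record("licensed-corpus", True, True),
    Record("public-domain-texts", False, False),
]
print(flag_for_review(sample))  # ['blog-scrape-001']
```

A sketch like this only surfaces candidates for human review; determining whether data is actually "covered" and whether consent was "freely given, informed, and unambiguous" remains a legal judgment.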

Impact Score: 68

How to create your own Artificial Intelligence performance coach

Lucas Werthein, co-founder of Cactus, describes building a personal Artificial Intelligence health coach that synthesizes MRIs, blood tests, wearables and journals to optimize training, recovery and injury management. Claire Vo hosts the 30-to-45-minute episode, which walks through practical steps for integrating multiple data sources and setting safety guardrails.

What’s next for AlphaFold

Five years after AlphaFold 2 remade protein structure prediction, Google DeepMind co-lead John Jumper reflects on practical uses, limits and plans to combine structure models with large language models.
