Training without consent is risky business: what business owners need to know about the proposed Artificial Intelligence Accountability and Data Protection Act

The proposed Artificial Intelligence Accountability and Data Protection Act would create a federal private right of action for use of individuals’ personal or copyrighted data without express consent, exposing companies that train models without permission to new liability. The bill would broaden covered works beyond registered copyrights and allow substantial remedies including compensatory, punitive and injunctive relief.

Artificial Intelligence is a powerful creative tool, but the legal landscape around protectable content and liability is evolving rapidly. On July 21, 2025, Senators Josh Hawley and Richard Blumenthal introduced the Artificial Intelligence Accountability and Data Protection Act, which would establish a new federal cause of action for individuals whose personal or copyrighted data is used in training Artificial Intelligence models without express, prior consent. The bill would apply to both personally identifiable information and data “generated by an individual and protected by copyright.”

The proposed Act would narrow or remove the fair use defense in this context by defining prohibited conduct to include the “appropriation, use, collection, processing, sale, or other exploitation of individuals’ data without express, prior consent,” and by defining “generation” to include content that “imitates, replicates, or is substantially derived from” covered data. Unlike the Copyright Act, the proposed measure would not limit enforcement to registered works. Remedies would include compensatory damages equal to the greatest of actual damages, treble profits, or $1,000, plus punitive damages, injunctive relief, and attorney’s fees. The bill also authorizes secondary liability for parties who aided and abetted misuse of covered data.

The bill arrives amid ongoing litigation over training large language models. Courts have sometimes found training to be fair use, but rulings vary. Notable litigation includes Bartz v. Anthropic, in which a Northern District of California court held that training on lawfully acquired books could be fair use but retaining pirated copies was not, followed by an announced settlement of $1.5 billion. By contrast, in Thomson Reuters v. Ross Intelligence, the court found that the commercial benefit from training a non-generative model weighed against the fair use defense.

Companies and creators should act now: review and strengthen consent policies, audit training datasets to identify covered data, ensure consent is “freely given, informed, and unambiguous,” disclose all entities with data access, and implement monitoring to detect unauthorized use. Retailers should confirm permissions and secure indemnities from designers or manufacturers. Although the bill remains in committee, these practical steps can reduce the heightened compliance and enforcement risks the proposal would create.

