Training without consent is risky business: what business owners need to know about the proposed Artificial Intelligence Accountability and Data Protection Act

The proposed Artificial Intelligence Accountability and Data Protection Act would create a federal private right of action for the use of individuals’ personal or copyrighted data without express consent, exposing companies that train models without permission to new liability. The bill would extend covered works beyond registered copyrights and allow substantial remedies, including compensatory damages, punitive damages and injunctive relief.

Artificial Intelligence is a powerful creative tool, but the legal landscape around protectable content and liability is evolving rapidly. On July 21, 2025, Senators Josh Hawley and Richard Blumenthal introduced the Artificial Intelligence Accountability and Data Protection Act, which would establish a new federal cause of action for individuals whose personal or copyrighted data is used in training Artificial Intelligence models without express, prior consent. The bill would apply to both personally identifiable information and data “generated by an individual and protected by copyright.”

The bill would narrow or remove the fair use defense in this context by defining prohibited conduct to include the “appropriation, use, collection, processing, sale, or other exploitation of individuals’ data without express, prior consent,” and by defining “generation” to include content that “imitates, replicates, or is substantially derived from” covered data. Unlike the Copyright Act, the proposed measure would not limit enforcement to registered works. Remedies would include compensatory damages equal to the greatest of actual damages, treble profits, or $1,000, plus punitive damages, injunctive relief, and attorney’s fees. The bill would also authorize secondary liability for parties who aided and abetted misuse of covered data.

The article situates the bill amid ongoing litigation over the training of large language models, where courts have sometimes found training to be fair use but rulings vary. Notable litigation includes Bartz v. Anthropic, in which a Northern District of California court held that training on lawfully acquired books could be fair use while retaining pirated copies was not, followed by an announced settlement of $1.5 billion. By contrast, in Thomson Reuters v. Ross Intelligence, the court rejected a fair use defense for a non-generative model, finding that the commercial benefit derived from training weighed against fair use.

For companies and creators, the article urges immediate action: review and strengthen consent policies, audit training datasets to identify covered data, ensure consent is “freely given, informed, and unambiguous,” disclose all entities with data access, and implement monitoring to detect unauthorized use. Retailers should confirm permissions and secure indemnities from designers or manufacturers. Although the bill remains in committee, these practical steps aim to reduce the heightened compliance and enforcement risks the proposal would create.

Impact Score: 68

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh Artificial Intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential Artificial Intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on Artificial Intelligence oversight.
