Artificial Intelligence is a powerful creative tool, but the legal landscape around protectable content and liability is evolving rapidly. On July 21, 2025, Senators Josh Hawley and Richard Blumenthal introduced the Artificial Intelligence Accountability and Data Protection Act, which would establish a new federal cause of action for individuals whose personal or copyrighted data is used in training Artificial Intelligence models without express, prior consent. The bill would apply to both personally identifiable information and data “generated by an individual and protected by copyright.”
The bill would narrow or remove the fair use defense in this context by defining prohibited conduct to include the “appropriation, use, collection, processing, sale, or other exploitation of individuals’ data without express, prior consent,” and by defining “generation” to include content that “imitates, replicates, or is substantially derived from” covered data. Unlike the Copyright Act, the proposed measure would not limit enforcement to registered works. Remedies would include compensatory damages equal to the greatest of actual damages, treble profits, or $1,000, plus punitive damages, injunctive relief, and attorney’s fees. The bill would also impose secondary liability on parties who aided and abetted misuse of covered data.
The article situates the bill amid ongoing litigation over the training of large language models, where courts have sometimes found training to be fair use but rulings vary. In Bartz v. Anthropic, a Northern District of California court held that training on lawfully acquired books could be fair use but that retaining pirated copies was not; the parties later announced a $1.5 billion settlement. By contrast, in Thomson Reuters v. Ross Intelligence, the court rejected a fair use defense for a non-generative model, finding that the commercial nature of the use outweighed the factors favoring fair use.
For companies and creators, the article urges immediate action: review and strengthen consent policies, audit training datasets to identify covered data, ensure consent is “freely given, informed, and unambiguous,” disclose all entities with access to the data, and implement monitoring to detect unauthorized use. Retailers should confirm permissions and secure indemnities from designers or manufacturers. Although the bill remains in committee, these practical steps aim to reduce the heightened compliance and enforcement risks the proposal would create.
