Judge rules training generative artificial intelligence on books is transformative

A federal judge found that using copyrighted works to train generative artificial intelligence can be "exceedingly transformative", but courts are still weighing when training on copyrighted books crosses the line.

The landscape of copyright litigation involving generative artificial intelligence has shifted in recent months, but not in a single, definitive direction. Earlier rulings had already drawn attention: a March decision against Ross Intelligence concluded that using Thomson Reuters' Westlaw without permission was not fair use, in a case involving non-generative systems that analyzed and replicated proprietary legal content. That distinction matters because generative artificial intelligence does something different: it ingests copyrighted works and then synthesizes new output in response to prompts, raising the central fair use question of whether the output is transformative.

In late June a federal judge addressed that question in suits against Anthropic. Plaintiffs alleged Anthropic trained its language models on digitized books without authorization. The judge wrote that the training use was fair, calling it "exceedingly transformative". The opinion noted Anthropic had purchased some print copies and digitized them, which the court treated differently from mass redistribution because the company was not "adding new copies, creating new works, or redistributing existing copies". That characterization gave the ruling a dramatic headline, but the court also allowed the case to continue because many alleged training files were obtained from pirate sites and were never paid for; the judge declined to treat pirated copies as equivalent to purchased training material.

Another recent decision involving Meta produced a mixed message. A court found that feeding copyright-protected works into large language models can be a violation in the abstract, yet ruled for Meta on the facts before it because the authors failed to show harm to the market for their books. The judge in that case emphasized the limited scope of the ruling, leaving open the possibility that stronger evidence could produce a different outcome for other plaintiffs.

Taken together, these rulings suggest courts are carving out a middle ground rather than issuing a sweeping author- or company-friendly mandate. Uses that change a work's purpose or create new kinds of output may tilt toward fair use, while wholesale copying from unpaid, pirate sources remains legally risky. Many appeals are likely, and the next few years could produce clearer precedents as higher courts weigh in, potentially up to the Supreme Court.

Impact Score: 77

UK MPs open inquiry into artificial intelligence and edtech in education

UK MPs have launched a cross-party inquiry into how artificial intelligence and education technology are reshaping learning across early years, schools, colleges and universities, and how government should balance innovation with safeguards. The Education Committee will examine opportunities to improve teaching and workload alongside risks around inequality, privacy, safeguarding and assessment.

Most UK firms see artificial intelligence training gap as shadow tool use grows

New research finds that six in ten UK businesses say employees lack comprehensive artificial intelligence training, even as shadow use of unapproved tools becomes widespread and investment surges. Executives warn that without stronger skills, governance and strategy, many organisations risk missing out on expected artificial intelligence returns.

COSO issues internal control roadmap for governing generative artificial intelligence

COSO has released governance guidance that applies its Internal Control-Integrated Framework to generative artificial intelligence, offering audit-ready control structures and implementation tools for organizations. The publication details capability-based risk mapping, aligned controls, and practical templates to help institutions manage emerging technology risks.
