Getty claims UK copyright lawsuit is not a threat to the Artificial Intelligence sector

Getty asserts its UK copyright case against Stability does not jeopardize the broader Artificial Intelligence industry, despite Stability's warnings.

Getty Images is challenging assertions made by Stability AI in an ongoing UK legal dispute, making clear that its landmark copyright lawsuit does not pose a widespread threat to the Artificial Intelligence sector. Stability AI, the company behind the image generator known as Stable Diffusion, is defending itself against allegations from Getty that it unlawfully used millions of Getty images without authorization for training its generative Artificial Intelligence model.

Court filings from Stability AI's legal team depict the Getty lawsuit as a significant risk, claiming it threatens not just Stability's entire business but the future of generative Artificial Intelligence as a whole. Stability has argued that the claims could set a precedent affecting all Artificial Intelligence developers that depend on large datasets to train their systems, particularly where copyright-protected material is involved. The company frames the case as having wide-reaching implications for innovation and competition in the rapidly growing Artificial Intelligence marketplace.

Getty Images, however, disputes these characterizations in its comments to the court. The company contends that its action is specifically targeted at unauthorized use of its intellectual property and is not aimed at undermining the broader Artificial Intelligence field. Getty maintains that its legal move is an attempt to establish critical boundaries for the ethical and lawful deployment of Artificial Intelligence in creative industries rather than to stifle industry growth or progress. As the legal proceedings continue, the outcome is likely to inform future interactions between copyright holders and Artificial Intelligence companies operating in the UK and potentially beyond.

Anthropic launches Claude Mythos for Project Glasswing

Anthropic has introduced Claude Mythos Preview, a new frontier Artificial Intelligence model positioned as a major advance in cybersecurity capability. The model is being used to power Project Glasswing, a coalition effort to secure critical software before similar capabilities spread more widely.

Artificial Intelligence speeds quantum encryption threat timeline

Research from Google and Oratomic suggests quantum computers capable of breaking core internet encryption may arrive sooner than expected. Artificial Intelligence played a key role in improving one of the new algorithms, raising fresh urgency around post-quantum security.

New methods aim to improve Large Language Model reasoning

A new study on arXiv outlines algorithmic techniques designed to strengthen Large Language Model reasoning and reduce hallucinations. The work reports better logical consistency and stronger performance on mathematical and coding benchmarks.

Nvidia acquisition of SchedMD raises Slurm neutrality concerns

Nvidia’s purchase of SchedMD has given it control of Slurm, an open-source scheduler that sits at the center of many supercomputing and large-model training systems. Researchers and engineers are watching for signs that support could tilt toward Nvidia hardware over AMD and Intel alternatives.
