LightShed tool bypasses anti-AI protections on digital art

A new tool called LightShed undermines popular digital defenses against the unauthorized use of artwork in AI training.

A new technique named LightShed is poised to disrupt the defenses artists have deployed to prevent their digital artworks from being used to train artificial intelligence (AI) models. Developed by a research team from the University of Cambridge, the Technical University of Darmstadt, and the University of Texas at San Antonio, LightShed is designed to strip away the distortions introduced by protective tools such as Glaze and Nightshade. These tools, which alter digital art in ways that confuse AI training algorithms, had given creators a sense of security against their copyrighted styles and subjects being exploited without permission.

Glaze and Nightshade work by applying minute, almost invisible changes to artworks, so-called "perturbations", that either confuse AI systems about an artist's style or mislead models about the depicted subject. LightShed essentially reverses these changes, "washing away" the artificial distortions and restoring images to a state where AI models can once again learn from them as intended. The LightShed team trained their system on both poisoned and unaltered images, teaching it to identify and selectively remove the protective markers. According to lead researcher Hanna Foerster, the tool demonstrates that current anti-AI measures are not infallible and should not be regarded as foolproof safeguards.
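
The paired-training idea described above can be sketched in a few lines of code. The following is a minimal illustration, assuming access to matched perturbed and clean versions of the same images; the tiny network, the loss, and the synthetic stand-in data are hypothetical, not the LightShed authors' actual design.

```python
# Minimal sketch: a small convolutional network learns to map perturbed
# images back to their clean originals, i.e. to remove the protective
# perturbation. Architecture and data here are illustrative assumptions.
import torch
import torch.nn as nn

class PerturbationRemover(nn.Module):
    """Tiny network that predicts and subtracts the perturbation residual."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict the added perturbation and remove it from the input.
        return x - self.net(x)

model = PerturbationRemover()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    # Stand-in for a real dataset of (protected, original) image pairs.
    clean = torch.rand(8, 3, 64, 64)                    # unaltered artworks
    perturbed = clean + 0.05 * torch.randn_like(clean)  # "poisoned" copies
    restored = model(perturbed)
    loss = loss_fn(restored, clean)  # learn to undo the protective changes
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained on such pairs, a model of this kind can be applied to newly protected images it has never seen, which is what makes the approach a threat to perturbation-based defenses.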

The finding is significant for the estimated 7.5 million artists who rely on tools like Glaze for protection, particularly while legal and regulatory frameworks around AI and copyright remain unsettled. The LightShed research, due to be presented at the USENIX Security Symposium in August, challenges the durability of "poisoning" strategies and serves as a call for ongoing innovation. Even the developers behind Glaze and Nightshade, including MIT Technology Review Innovator of the Year Shawn Shan, acknowledge that these protections are temporary, yet they still view them as valuable deterrents that signal artists' resistance to unauthorized AI training. Looking ahead, Foerster suggests the next wave of artist tools may involve robust watermarks embedded within the art itself, aiming for more resilient ways to control how digital creations are used in an age of ever-adapting AI.

Impact Score: 76
