Below are the competing letters filed by the parties in the Sarah Andersen v. Stability AI lawsuit. The controversy is that Dr. Ben Zhao of the University of Chicago is one of the researchers behind Nightshade, Glaze, and other adversarial tools used to “poison” the training data of image generators so that they fail to create the images requested. It’s kind of like throwing a monkey wrench into a machine, or taking a hammer to the frames of cropping machines.
This “poisoning” of images used to train AI models can be seen as a form of self-help for artists who want to protect their artworks from AI generators. To the AI developers, it looks like sabotage.
The issue in this case is whether Dr. Ben Zhao should get access to the “highly confidential” business information of the defendant AI image generator companies. Although Zhao also serves as an expert in Bartz v. Anthropic, Authors Guild v. OpenAI, and Alter v. OpenAI, those cases involve books and large language models.
Andersen v. Stability AI, by contrast, involves diffusion models for image creation, which is precisely the type of model most directly jeopardized by Dr. Zhao’s tools, Nightshade and Glaze.
Excerpts
Defendants



Plaintiffs



DOWNLOAD THE LETTERS OF THE PARTIES