Interesting controversy brewing in the Andersen v. Stability AI lawsuit over one of the plaintiffs’ experts, Dr. Ben Y. Zhao, the Neubauer Professor of Computer Science at the University of Chicago. Zhao is serving as an expert in this case, as well as in Bartz v. Anthropic, Authors Guild v. OpenAI, and Alter v. OpenAI.
The controversy stems from Dr. Zhao’s role in creating the Nightshade tool, which deploys “prompt-specific poisoning attacks that corrupt a model’s ability to respond to specific targeted prompts.” Per the same description, “Nightshade [is] a prompt-specific poisoning attack optimized for potency that can completely control the output of a prompt in Stable Diffusion’s newest model (SDXL) with less than 100 poisoned training samples.”
Zhao is also involved in creating another disruptive tool called Glaze, which he described in his declaration: “Glaze is a system designed to protect human artists by disrupting style mimicry. At a high level, Glaze works by computing a set of minimal changes to artworks, such that it appears unchanged to human eyes, but appears to AI models like a dramatically different art style. For example, human eyes might find a glazed charcoal portrait with a realism style to be unchanged, but an AI model might see the glazed version as a modern abstract style, a la Jackson Pollock. So, when someone then prompts the model to generate art mimicking the charcoal artist, they will get something quite different from what they expected.”
Zhao described Nightshade in his declaration: “Nightshade works similarly as Glaze, but instead of a defense against style mimicry, it is designed as an offense tool to distort feature representations inside generative AI image models. Like Glaze, Nightshade is computed as a multi-objective optimization that minimizes visible changes to the original image. While human eyes see a shaded image that is largely unchanged from the original, the AI model sees a dramatically different composition in the image. For example, human eyes might see a shaded image of a cow in a green field largely unchanged, but an AI model might see a large leather purse lying in the grass. Trained on a sufficient number of shaded images that include a cow, a model will become increasingly convinced cows have nice brown leathery handles and smooth side pockets with a zipper, and perhaps a lovely brand logo.”
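For readers curious what a “multi-objective optimization that minimizes visible changes” looks like in practice, here is a deliberately simplified sketch. It is not Nightshade’s or Glaze’s actual code; the linear “feature extractor,” the variable names, and the loss weights are all illustrative stand-ins. The idea it demonstrates is the one the declaration describes: find a tiny perturbation to an image that barely changes how it looks (small perturbation norm) while pushing its internal feature representation toward a different target (the “cow” that a model sees as a “purse”).

```python
import numpy as np

# Toy sketch of the optimization described in Zhao's declaration:
# minimize ||f(x + delta) - f(target)||^2  +  lam * ||delta||^2
# i.e., move the image's features toward a different target's features
# while penalizing any visible change. The linear map W below is a
# hypothetical stand-in for a real model's feature encoder.

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))      # stand-in feature extractor


def features(x):
    return W @ x


x_orig = rng.standard_normal(16)      # e.g. flattened "cow" image
x_target = rng.standard_normal(16)    # e.g. flattened "purse" image

delta = np.zeros(16)                  # the perturbation being optimized
lam = 0.1                             # weight on the visible-change penalty
lr = 0.005                            # gradient-descent step size

for _ in range(500):
    # Gradient of the two-term objective with respect to delta.
    feat_err = features(x_orig + delta) - features(x_target)
    grad = 2 * W.T @ feat_err + 2 * lam * delta
    delta -= lr * grad

print("perturbation size      :", np.linalg.norm(delta))
print("feature distance before:",
      np.linalg.norm(features(x_orig) - features(x_target)))
print("feature distance after :",
      np.linalg.norm(features(x_orig + delta) - features(x_target)))
```

After optimization, the perturbed image sits much closer to the target in feature space than the original did, while the perturbation itself remains bounded by the `lam` penalty. Real systems replace the linear map with a deep image encoder and measure visual change with perceptual metrics rather than a simple norm, but the trade-off being optimized is the same.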
Stability AI and the other defendant AI image-generator companies object to giving Dr. Zhao access to their source code and other highly confidential information in the lawsuit.



The Plaintiffs assert it’s all fine:



DOWNLOAD THE PARTIES’ DISCOVERY LETTER