
Mrs Justice Joanna Smith's decision in Getty Images v. Stability AI: no secondary copyright infringement; limited trademark infringement; no trademark dilution.

Mrs Justice Joanna Smith of the UK High Court of Justice just issued a 200-page ruling in Getty Images v. Stability AI, the UK case similar to the U.S. lawsuit. The Justice described the case as “historic” due to the novel issues raised by AI.

But she underscored that her “findings at this trial are, of necessity, extremely limited” because Getty Images’ main evidence of alleged trademark infringement was based on what it (or its expert) generated through its own investigation, and only in respect of the earlier v1.x and v2.x versions of Stability AI’s models. The evidence of Getty watermark generation “in the wild” by third parties was modest, depending mainly on inferences drawn from Stability AI’s internal chats and its concession of a non-de minimis watermark issue in the earlier models (which apparently was eventually rectified).

After the copyright claim based on Stability AI training its model on copyrighted works outside the UK dropped out of the case during trial under the principle of territoriality (the U.S. case covers Stability AI’s training), Justice Smith ruled:

  1. No secondary copyright infringement: Stability AI’s model was not an infringing copy absent proof that it contained or stored a copy
  2. Limited trademark infringement
    • UK Trade Marks Act Section 10(1) infringement: Yes.
    • UK Trade Marks Act Section 10(2) infringement: Yes.
    • UK Trade Marks Act Section 10(3) infringement (detriment to distinctive character/dilution): No.

In summary, although Getty Images succeed (in part) in their Trade Mark Infringement Claim, my findings are both historic and extremely limited in scope. The Secondary Infringement Claim fails.

Mrs Justice Joanna Smith of the UK High Court of Justice
in Getty Images v. Stability AI

Justice Smith’s summary of her decision

Key parts of copyright law ruling

AI model itself is not an infringing copy without proof of an infringing copy contained or stored in the model.

“Stability bears no direct liability for any tortious acts alleged in these proceedings arising by reason of the release of v1.x Models via the CompVis GitHub and CompVis Hugging Face pages.”

Importance of ruling: This ruling that the AI model itself was not an infringing copy without proof the model actually stored or contained a copy of the plaintiff’s work is an important holding. It aligns with decisions of U.S. district court judges in some of the AI copyright litigation, a topic I will return to in a subsequent post.

Key parts of trademark law ruling

Trade Marks Act Section 10(1): Limited infringement
Trade Marks Act Section 10(2): Limited infringement

Trade Marks Act Section 10(3): No evidence of dilution of Getty marks in real life.

The Probative Value of Evidence Derived from Plaintiff’s Investigation v. Evidence from Real Life aka “in the Wild”

Another important line of reasoning in this opinion is the different probative value the Justice places on (1) evidence of alleged infringing outputs derived from the Plaintiff’s own investigation, including potentially atypical techniques [I will call this investigatory evidence], versus (2) evidence of actual outputs from third parties in the normal use of the AI generator “in real life” or “in the wild.”

In finding both limited trademark infringement and no dilution, Justice Smith assigns more limited probative value to (1) than to (2).

The trademark infringement finding was limited because there was simply no evidentiary basis on which the court could even attempt to quantify or estimate the number of actual infringing uses that occurred in real life. Most of the evidence submitted consisted of Getty’s own investigatory evidence. There was some circumstantial evidence of infringing outputs drawn from Stability AI’s internal chats acknowledging a problem of outputs with watermarks, but that problem was eventually rectified in Stability AI’s later models. In other words, some users might have generated Getty’s watermarks in real life with the two early models, but the evidence was too slim and circumstantial to support a finding on the scope or number of actual infringing outputs.

For the dilution claim, the evidence of real-life dilution was even scantier. As Justice Smith concluded:

It is not enough, as it seems to me, for Getty Images to assert that because the Models are obviously capable of generating NSFW images, those images will on occasions have borne watermarks. Given the lack of any probabilistic case as to the incidence of watermarks generally, or the potential for them to appear on pornographic images, together with the total lack of any real life evidence (other than the Miley Cyrus images) I cannot see that it would be appropriate to deduce that there must have been damage to reputation. That would be pure supposition.

Justice Smith

I agree. I have written about the evidentiary issue raised by arguments that AI models store “compressed copies.” As I explained in my law review article, “a copyright infringement claim must be proven with evidence of a copy of the plaintiff’s work: a general allegation that models typically memorize some training materials does not prove it has memorized the plaintiff’s works.”

Similarly, for trademark dilution, as Justice Smith concluded here, the plaintiff must present actual, non-speculative evidence that dilution is likely to occur. But if the alleged basis for dilution is merely an isolated occurrence (e.g., a glitch or adversarial use of the model), then the court cannot find dilution. As Justice Smith put it, it “would be pure supposition.”

This evidentiary requirement also relates to the AI company’s implementation of guardrails (which Justice Smith discussed as well). If the company has implemented effective guardrails to prevent the very kind of outputs the plaintiff complains about, then the guardrails will tend to refute the plaintiff’s argument of harm (here in the form of dilution). As Justice Smith recounted:

“Ms Hodesdon accepted in cross examination that the Model can produce outputs that are NSFW but explained that Stability was able to exercise control over outputs ‘to ensure that we are not serving that content back to end users’. She confirmed that ‘pretty robust’ filters were used for this purpose on Stability’s server, that these filters had improved over time, that ‘at the moment they are subject to lots of testing’, and that Stability can ‘control the sensitivity of the filters’. She confirmed that this evidence did not apply to source code and model weights downloaded from GitHub and Hugging Face – evidence consistent with Professor Farid’s views…. Stability says that, on the balance of probabilities, it is ‘vanishingly unlikely’ that any pornographic images with watermarks* will have occurred in the UK.”

It’s worth thinking about this issue for U.S. copyright law. In the context of market harm under Factor 4 of fair use under U.S. copyright law, this evidentiary relationship with guardrails is quite similar, as I explained: “Other technology fair use cases are in accord with this approach, weighing favorably the implementation of guardrails to minimize potential substitution of the copyright holders’ works.569 This approach to guardrails is consistent with the courts’ rejection of speculation as the basis for finding market harm to the copyright holders.”570

Download Justice Smith’s decision in Getty Images v. Stability AI

What about the Miley Cyrus NSFW images example?

The Justice found too speculative the possibility that Stability AI’s model would produce, in real life, NSFW (not safe for work) images of Miley Cyrus that also included Getty watermarks. Getty was able to produce only 19 such examples, and there was no evidence of such images being generated by users in real life.
