- Judges Bibas, Alsup, and Chhabria all issued decisions on fair use in AI training.
- The bare results were: 2 finding fair use, 1 rejecting it.
- But the results belie a great deal of complexity, and consternation among all three judges.
With this week’s blockbuster rulings in Bartz v. Anthropic and Kadrey v. Meta, we now have decisions by three federal judges on the question whether the use of copyrighted works to train AI models that do not produce allegedly infringing outputs when deployed is a fair use. This is a novel question of law, so perhaps it should not be too surprising that the judges appear conflicted in their rulings.
Simply looking at the results: Judges Alsup and Chhabria said yes in Anthropic and Meta, respectively, but Judge Bibas said no in Thomson Reuters v. ROSS Intelligence.
But the bare results belie a great deal of complexity, and consternation among all three judges. Let me highlight the key points.
Judge Bibas reverses himself on fair use
Judge Bibas, a Third Circuit judge sitting in Delaware district court, was the first judge to rule on fair use in AI training. In 2023, he ruled that it might be transformative fair use on one version of the facts, and that the case must go to trial. But, on the day before trial in 2024, Judge Bibas reversed himself, ordered rebriefing on summary judgment, and ultimately held that AI training was not transformative at all. At ROSS’s request, though, Judge Bibas allowed and supported an interlocutory appeal, which the Third Circuit just granted.
“I acknowledge that these questions are hard under existing precedent,” Judge Bibas wrote.

Judge Alsup rebukes Anthropic for “stealing” “pirated books” from shadow libraries
Judge Alsup was the next to decide fair use. And he ruled strongly in favor of fair use, describing AI as one of the most transformative technologies many of us will see in our lifetimes. Yet there was a catch. Judge Alsup also ruled that Anthropic’s acquisition of “pirated books” from shadow libraries online, and its storage of them in a centralized, permanent library, was a separate use under the teaching of Warhol, and not a fair use (with the implication that they are infringing copies, with a trial on damages to follow). The outer limit of statutory damages for willful infringement (if it is found) is approximately $1.05 trillion (minus duplicate copies and unregistered works).
“[I]nstead [Anthropic] stole the works for its central library by downloading them from pirated libraries,” Judge Alsup wrote.

Indeed, Judge Alsup repeatedly referred to Anthropic’s conduct as “theft” and “stealing” of the “pirated books.” The strong language is noticeable compared to the more common descriptions of “infringing” or “unauthorized” copying, and may cast a shadow (no pun intended) on the collection of any dataset without the permission of authors.
Judge Chhabria says all AI training on copyrighted works is likely illegal
Judge Chhabria issued the most recent opinion on fair use. And the ruling gives new meaning to the word opinion. The better part of the 40-page opinion reads more like an advisory opinion on putative facts not before the court.
Judge Chhabria reluctantly granted partial summary judgment to Meta, finding that its use of copyrighted books to train its AI model was fair use. But Judge Chhabria faulted the plaintiffs for not presenting sufficient evidence to raise a genuine issue of material fact that Llama would cause market dilution of their works.
Judge Chhabria even stated in dicta that, in most cases, AI training on copyrighted works will likely be illegal due to market dilution. He spent pages of the opinion developing this new theory of market harm, one that the plaintiffs themselves had failed to advance with sufficient evidence.
“The upshot is that in many circumstances it will be illegal to copy copyright-protected works to train generative AI models without permission. Which means that the companies, to avoid liability for copyright infringement, will generally need to pay copyright holders for the right to use their materials,” Judge Chhabria wrote.

Will a jury ever decide the fair use question in AI training?
When these lawsuits were first filed, I expected that some of the cases would be decided by juries. All of them involve requests for jury trials. But now, given these three decisions, I am less certain that the issue of fair use will go to the jury in any of the remaining AI copyright cases. It might, especially if there is a dispute of material fact, such as one related to market harm. But these three decisions might be leading indicators of what to expect next.
That would be a shame. Having a jury decide fair use would add a different perspective. In Google v. Oracle, for example, a jury found fair use. And, for a technology that has already provoked such consternation among judges, a jury would add a view more representative of the public. Plus, a judge’s ability to grant judgment as a matter of law after trial would always provide a check on an erroneous application of the law by a jury.
