
Anthropic’s Tempest in a Teapot: Concord Music asks judge to strike Olivia Chen declaration on statistical sampling due to AI-generated fake citation

A tempest in a teapot is brewing for Anthropic. After Concord Music called out the declaration of Anthropic data scientist Olivia Chen for citing a source that does not exist (judging by the putative authors and the putative article title), Magistrate Judge Susan van Keulen ordered Anthropic to provide an explanation. Anthropic's attorney Ivana Dukanovic, an associate at Latham & Watkins, did so, taking the blame for using Claude (Anthropic's own AI model) in a failed attempt to generate properly formatted citations for sources used in the Chen declaration. The problematic citation carried a fictitious title and authors but, Dukanovic says, the correct hyperlink to the actual source, which the Latham lawyers had suggested Chen add to her declaration to support her analysis on statistical sampling.

Concord Music isn't satisfied with that explanation, pointing out that Dukanovic's declaration does not say how Olivia Chen of Anthropic generated her own declaration. Citing Kohls v. Ellison, Concord Music asks Judge van Keulen to strike the entire Chen declaration. That decision is quite helpful to Concord Music: there, an expert on the dangers of AI and misinformation used ChatGPT to prepare his own report, which included two hallucinated fake citations, and Judge Laura Provinzino struck the report.

The Hancock Declaration is a different matter. Attorney General Ellison concedes that Professor Hancock included citations to two non-existent academic articles and incorrectly cited the authors of a third article. ECF No. 37 at 3–4. Professor Hancock admits that he used GPT-4o to assist him in drafting his declaration but, in reviewing the declaration, failed to discern that GPT-4o generated fake citations to academic articles. ECF No. 39 ¶¶ 11–14, 21.

The irony. Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less. Professor Hancock offers a detailed explanation of his drafting process to explain precisely how and why these AI-hallucinated citations in his declaration came to be. Id. ¶¶ 10–22. And he assures the Court that he stands by the substantive propositions in his declaration, even those that are supported by fake citations. Id. ¶ 22. But, at the end of the day, even if the errors were an innocent mistake, and even if the propositions are substantively accurate, the fact remains that Professor Hancock submitted a declaration made under penalty of perjury with fake citations. It is particularly troubling to the Court that Professor Hancock typically validates citations with a reference software when he writes academic articles but did not do so when submitting the Hancock Declaration as part of Minnesota’s legal filing. ECF No. 39 ¶ 14. One would expect that greater attention would be paid to a document submitted under penalty of perjury than academic articles. Indeed, the Court would expect greater diligence from attorneys, let alone an expert in AI misinformation at one of the country’s most renowned academic institutions.

To be clear, the Court does not fault Professor Hancock for using AI for research purposes. AI, in many ways, has the potential to revolutionize legal practice for the better. See Damien Riehl, AI + MSBA: Building Minnesota’s Legal Future, 81-Oct. Bench & Bar of Minn. 26, 30–31 (2024) (describing the Minnesota State Bar Association’s efforts to explore how AI can improve access to justice and the quality of legal representation). But when attorneys and experts abdicate their independent judgment and critical thinking skills in favor of ready-made, AI-generated answers, the quality of our legal profession and the Court’s decisional process suffer.

The Court thus adds its voice to a growing chorus of courts around the country declaring the same message: verify AI-generated content in legal submissions! See Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 466 (S.D.N.Y. 2023) (sanctioning attorney for including fake, AI-generated legal citations in a filing); Park v. Kim, 91 F.4th 610, 614–16 (2d Cir. 2024) (referring attorney for potential discipline for including fake, AI-generated legal citations in a filing); Kruse v. Karlan, 692 S.W.3d 43, 53 (Mo. Ct. App. 2024) (dismissing appeal because litigant filed a brief with multiple fake, AI-generated legal citations).

Concord Music's reply to the Dukanovic declaration:

DOWNLOAD THE REPLY OF CONCORD MUSIC
