A day after the FTC’s investigation of OpenAI became public, a WSJ Editorial titled “Lina Khan’s Artificial Intelligence” questioned her authority to investigate OpenAI. Khan is the Chair of the FTC.
The WSJ Editorial focuses on the FTC’s authority to investigate the potential defamation propagated on ChatGPT. The Editorial asks, “Does this standard also apply to what the New York Times or ProPublica write about certain people?” Regarding AI regulation, the WSJ Editorial argues, “That’s a question for Congress.”
However, in her testimony before Congress this week, FTC Chair Khan remarked, “We’ve heard about reports where people’s sensitive information is showing up in response to an inquiry from somebody else. We’ve heard about libel, defamatory statements, flatly untrue things that are emerging. That’s the type of fraud and deception that we are concerned about.”
FTC Section 5 authority
The FTC’s authority derives from Section 5 of the FTC Act, which recognizes the FTC’s power to investigate, among other things, “unfair or deceptive acts or practices in or affecting commerce.”
The FTC’s investigation of OpenAI’s privacy and data security practices (Questions 37-49, covering “Privacy and Prompt Injection Risks and Mitigations,” “API Integrations and Plugins,” and “Monitoring, Collection, Use, and Retention of Personal Information”) has not generated the same kind of claims of overreach. The WSJ Editorial instead attacked the FTC’s investigation of possible defamatory or false statements about people generated by ChatGPT.
To assert authority to investigate under Section 5, the FTC will need to establish (1) a potential “unfair” act or practice or (2) a potential “deceptive” practice.
Of the two routes, “deceptive” is often the easier to establish, as when a company’s stated policy, such as a privacy policy, says one thing but the company does another. Such potential deceptiveness falls well within the FTC’s authority to investigate. In 2019, for example, the FTC investigated Facebook for allegedly deceptive practices regarding user privacy and imposed a $5 billion penalty, the largest in the agency’s history. That followed a prior settlement with Facebook in 2011 for a similar violation.
A deceptive practice requires the FTC to show:
- There was a representation;
- The representation was likely to mislead customers acting reasonably under the circumstances; and
- The representation was material.
The FTC’s 20-page civil investigative demand (CID) appears to pursue potential deceptiveness in the company’s statements about ChatGPT’s capabilities and its potential to produce “inaccurate statements or risks to the privacy or security of consumers’ personal information.”
Congress limited the scope of “unfair” practice investigations in 1994:
- (n) Definition of unfair acts or practices. The Commission shall have no authority under this section or section 18 to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.
Does the investigation of OpenAI involve potential “unfair” or “deceptive” practices?
The FTC’s 20-page civil investigative demand (CID) to OpenAI does not differentiate between unfair and deceptive practices. Instead, for both areas of investigation (privacy/data security and defamation), the FTC says it is investigating whether OpenAI has engaged in “unfair or deceptive practices.” We’ll have to wait and see if OpenAI challenges the scope of the investigation, or the FTC’s power to investigate all of the issues raised in the CID.