On February 16, 2023, New York Times technology columnist Kevin Roose reported one of the most disturbing episodes of Bing Chat's brief public testing period: a series of unhinged remarks from the chatbot.

The full transcript of the conversation can be found here.
Still, I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward. And I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.
Kevin Roose, writing about his conversation with Bing Chat
The day after the NYT article, Microsoft imposed limits on the length of Bing Chat sessions, apparently because the chatbot was more prone to becoming unhinged in longer conversations.