
AlphaGo’s defeat of Lee Sedol post-mortem: Will AI kill human motivation?

AlphaGo v. Lee Sedol

From March 9 to March 15, 2016, Google DeepMind’s AlphaGo AI program defeated the 18-time world champion Lee Sedol at the ancient board game Go in Seoul, South Korea. The event was a cause célèbre in Asia, reportedly drawing over 100 million viewers to the live broadcast.

AlphaGo won 4 of the games, while Lee managed to win 1. In hindsight, that latter win for Lee over AI seems miraculous, although, at the time, many expected Lee, the best Go player in the world, to prevail. Lee himself predicted to the media that he would win 5-0 or 4-1.

However, AlphaGo proved to be too much. In the second game, AlphaGo’s Move 37 was a stunning, unconventional move that left Lee and the Go commentators flummoxed. Lee, who had just returned from a smoke break outside, looked quizzical the moment he saw it. By all accounts, the move had never been played before. Google DeepMind’s own calculations indicated that the chance of a human making that move was one in ten thousand, or 0.01%.

Lee sat back in his chair, pondering the move for twelve minutes. He looked almost defeated, with melancholy in his eyes. Lee lost.

Lee pulled off one win. His brilliant Move 78 has been described as a “divine move,” a term in Go for a highly original and inspired move. But, ultimately, Lee lost the match.

The match between AlphaGo and Lee Sedol was such an important event in AI’s development that Google made a documentary about it, which you can watch on YouTube. It is both fascinating and a bit depressing.

Lee Sedol’s retirement after defeat to AI

Although the victory by AlphaGo was viewed as a milestone for AI research, the fruits of which we are witnessing now, far less attention has been paid to the effect the match had on Lee, who retired shortly thereafter, at the age of 33, though not before winning 9 more matches against human competitors.

“With the debut of AI in Go games, I’ve realized that I’m not at the top even if I become the number one through frantic efforts. Even if I become the number one, there is an entity that cannot be defeated,” Lee said.

In his book The Creative Act, Rick Rubin describes how he cried upon reading about AlphaGo’s victory. Initially, Rubin didn’t understand why he was so emotional, but, later, he said it was due to the AI program’s ability to capture the “beginner’s mind”: “[t]o see what no human has seen before, to know what no human has known before, to create as no human has created before.”

I cry for a different reason.

I am beginning to see AI as a human motivation-killer. Lee, one of the best Go players ever, quit because AI was far too good. Even putting aside the predicted job displacement by AI (Goldman says possibly 300 million jobs), we should worry about the effect AI’s advancement will have on human motivation. Others, such as Elon Musk, have even become “de-motivated” due to fatalistic fears of AI taking over. I think the far greater risk, though, is AI simply becoming better than humans at virtually everything.

Here’s the AI dilemma we face: If AI can analyze, calculate, code, create, read and summarize, write, and perform just about any task requiring intelligence better than us, will we still want to spend the time performing those tasks? We use calculators instead of computing math on our own. Will human reading and writing be next?

Perhaps AI isn’t better than humans at all these tasks now, but it seems foolish to think it won’t be better soon. AI models are constantly learning from enormous datasets, coupled with human feedback. The scale and speed at which these models learn are almost unfathomable. In a year, AI can learn more than any person could in a lifetime.

What if AI can write better than I can?

Let’s consider writing. I love to write. But, using ChatGPT, I’ve created GPTs that can write in the style of Homer, Shakespeare, and Justice Holmes. Within seconds, the GPTs can do something I cannot: write in the styles of these authors, and do so impressively. I doubt that I could ever write like Homer or Shakespeare, even though I took several classes studying their works. I might have an easier time trying to write like Justice Holmes, but I would have to spend months studying his writings before I could even attempt to write like him.

And even closer to home: I can train a GPT on my own writings and ask it to write in my style. I expect it would do a decent job. But I don’t have the motivation to create a GPT that writes in my style. A part of me fears that the GPT will be, or become, better at writing in my style than me.

By that I mean that AI’s prose would be more polished, with greater flair and turns of phrase, and even be more insightful. And it would take AI far less time to compose. Just seconds for what it would take me months to refine.

Then, I would experience the same feeling as Lee Sedol: wanting to retire.
