Photo: Jakub Porzycki/NurPhoto (Getty Images)
A Reddit user recently pitted the chess engine Stockfish, which has won the Top Chess Engine Championship eight times, against the notorious AI-powered conversation bot ChatGPT in a dismal chess match. While Stockfish held its own, as it was built to do, ChatGPT sadly succumbed to chess’ high-stakes environment and went on a cheating spree before losing.
Or, more accurately, as original Reddit poster u/megamaz_ explains in a thread, ChatGPT got swept up by its naivete.
“It simply doesn’t have enough context on the game of chess to be able to know the state of the board and understand the moves it’s making,” megamaz_ said. “In other words, it doesn’t know how to play.”
This is excusable since ChatGPT, a language model trained and distributed by OpenAI, wasn’t born to play chess. It was, more broadly, built to answer requests and respond to questions.
Though it’s become a popular example of how artificial intelligence might soon be used to open up our skulls and eat our brains, ChatGPT is even more ridiculously fallible than a human in some contexts. Even OpenAI admits on its website that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers,” is “often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI,” and “will sometimes respond to harmful instructions or exhibit biased behavior.” It never claimed to be perfect. It doesn’t know everything. So when megamaz_ invited it to a game of chess, it had to get creative.
“If you see pieces appear out of nowhere, that’s because that’s literally what ChatGPT said it would play,” megamaz_ said.
“Are you sure you wanna do that?” megamaz_ asked ChatGPT at one point during their game. “Rg8 captures your own king.”
“Oops, it looks like I made a mistake,” ChatGPT replied humbly. “My apologies!” Well, at least it’s not a sore loser.