The original version of this story appeared in Quanta Magazine.
When Covid-19 sent people home in early 2020, the computer scientist Tom Zahavy rediscovered chess. He had played as a kid and had recently read Garry Kasparov’s Deep Thinking, a memoir of the grandmaster’s 1997 matches against IBM’s chess-playing computer, Deep Blue. He watched chess videos on YouTube and The Queen’s Gambit on Netflix.
Despite his renewed interest, Zahavy wasn’t looking for ways to improve his game. “I’m not a great player,” he said. “I’m better at chess puzzles”—arrangements of pieces, often contrived and unlikely to occur during a real game, that challenge a player to find creative ways to gain the advantage.
The puzzles can help players sharpen their skills, but more recently they’ve helped reveal the hidden limitations of chess programs. One of the most notorious puzzles, devised by the mathematician Sir Roger Penrose in 2017, puts stronger black pieces (such as the queen and rooks) on the board, but in awkward positions. An experienced human player, playing white, could readily steer the game into a draw, but powerful computer chess programs would say black had a clear advantage. That difference, Zahavy said, suggested that even though computers could defeat the world’s best human players, they couldn’t yet recognize and work through every kind of tough problem. Since then, Penrose and others have devised sprawling collections of puzzles that computers struggle to solve.
Chess has long been a touchstone for testing new ideas in artificial intelligence, and Penrose’s puzzles piqued Zahavy’s interest. “I was trying to understand what makes these positions so hard for computers when at least some of them we can solve as humans,” he said. “I was completely fascinated.” It soon evolved into a professional interest: As a research scientist at Google DeepMind, Zahavy explores creative problem-solving approaches. The goal is to devise AI systems with a spectrum of possible behaviors beyond performing a single task.
A traditional AI chess program, trained to win, may not make sense of a Penrose puzzle, but Zahavy suspected that a program made up of many diverse systems, working together as a group, could make headway. So he and his colleagues developed a way to weave together multiple (up to 10) decision-making AI systems, each optimized and trained for different strategies, starting with AlphaZero, DeepMind’s powerful chess program. The new system, they reported in August, played better than AlphaZero alone, and it showed more skill—and more creativity—in dealing with Penrose’s puzzles. These abilities came, in a sense, from self-collaboration: If one approach hit a wall, the program simply turned to another.
Tom Zahavy helped design a computer system that plays chess more creatively by combining the approaches and strategies of up to 10 different programs.
That approach fundamentally makes sense, said Allison Liemhetcharat, a computer scientist at DoorDash who has worked with multi-agent approaches to problem-solving in robotics. “With a population of agents, there’s a higher probability that the puzzles are in the domain that at least one of the agents was trained in.”
The work suggests that teams of diverse AI systems could efficiently tackle hard problems well beyond the game board. “This is a great example that looking for more than one way to solve a problem—like winning a chess game—provides a lot of benefits,” said Antoine Cully, an AI researcher at Imperial College London who was not involved with the DeepMind project. He compared it to an artificial version of human brainstorming sessions. “This thought process leads to creative and effective solutions that one would miss without doing this exercise.”
Before joining DeepMind, Zahavy was interested in deep reinforcement learning, an area of artificial intelligence in which a system uses neural networks to learn some task through trial and error. It’s the basis for the most powerful chess programs (and used in other AI applications like self-driving cars). The system starts with its environment. In chess, for example, the environment includes the game board and possible moves. If the task is to drive a car, the environment includes everything around the vehicle. The system then makes decisions, takes actions and evaluates how close it came to its goal. As it gets closer to the goal, it accumulates rewards, and as the system racks up rewards it improves its performance.
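To make that trial-and-error loop concrete, here is a minimal, purely illustrative sketch in Python. The environment, the agent, and the reward values are placeholders invented for the example; they are not DeepMind’s actual training code.

```python
import random

class ChessEnvironment:
    """Toy stand-in for a real chess environment."""

    def reset(self):
        # A real environment would return the starting board.
        return "starting position"

    def step(self, move):
        # A real environment would apply the move and score the outcome;
        # here we hand back a placeholder state, a random reward, and a
        # random end-of-game flag.
        next_state = "new position"
        reward = 1.0 if random.random() < 0.01 else 0.0  # e.g. 1.0 for checkmate
        done = random.random() < 0.05
        return next_state, reward, done

class Agent:
    """Toy agent: picks moves and, in a real system, learns from rewards."""

    def choose_move(self, state):
        return random.choice(["e4", "d4", "Nf3", "c4"])

    def learn(self, state, move, reward):
        pass  # a real agent would update its neural network here

env, agent = ChessEnvironment(), Agent()
for game in range(1000):                    # many games of self-play
    state, done = env.reset(), False
    while not done:
        move = agent.choose_move(state)
        state, reward, done = env.step(move)
        agent.learn(state, move, reward)    # reinforce moves that earn rewards
```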
Reinforcement learning is how AlphaZero learned to become a chess master. DeepMind reported that during the program’s first nine hours of training, in December 2017, it played 44 million games against itself. At first, its moves were randomly determined, but over time it learned to select moves more likely to lead toward checkmate. After just hours of training, AlphaZero developed the ability to defeat any human chess player.
But as successful as reinforcement learning can be, it doesn’t always lead to strategies that reflect a general understanding of the game. Over the past half-decade or so, Zahavy and others have noticed an uptick in the peculiar glitches that can crop up in systems trained through trial and error. A system that plays video games, for example, might find a loophole and figure out how to cheat or skip a level, or it could just as easily get stuck in a repetitive loop. Penrose-style puzzles similarly suggested a kind of blind spot, or glitch, in AlphaZero—it couldn’t figure out how to approach a problem it had never seen before.
But maybe not all glitches are just errors. Zahavy suspected that AlphaZero’s blind spots might actually be something else in disguise—decisions and behaviors tied to the system’s internal rewards. Deep reinforcement learning systems, he said, don’t know how to fail—or even how to recognize failure. The ability to fail has long been linked to creative problem-solving. “Creativity has a human quality,” Kasparov wrote in Deep Thinking. “It accepts the notion of failure.”
Antoine Cully has built robots that can effectively brainstorm multiple different solutions to a given problem.
AI systems typically don’t. And if a system doesn’t recognize that it has failed to complete its task, then it may not try something else. Instead, it will just keep trying to do what it has already done. That’s likely what led to those dead ends in video games—or to getting stuck on some Penrose challenges, Zahavy said. The system was chasing “weird kinds of intrinsic rewards,” he said, that it had developed during its training. Things that looked like mistakes from the outside were likely the consequence of developing specific but ultimately unsuccessful strategies.
The system treated these peculiar rewards as steps toward an ultimate goal it couldn’t actually reach, because it didn’t know to try a new approach. “I was attempting to comprehend them,” Zahavy said.
One reason these glitches are so significant, and so useful, is what researchers identify as a problem with generalization. Although reinforcement learning systems can form an efficient strategy for linking a given situation to a particular action—what researchers call a “policy”—they can’t apply that strategy to different kinds of problems. “Regardless of the method, what usually happens with reinforcement learning is that you get the policy that resolves the distinct instance of the problem you’ve been working on, but it doesn’t generalize,” said Julian Togelius, a computer scientist at New York University and research director at modl.ai.
To Zahavy, the Penrose puzzles demanded exactly this kind of generalization. Maybe AlphaZero couldn’t solve most of them because it was so focused on winning whole games, from start to finish—an approach that left gaps exposed by the puzzles’ unlikely piece arrangements. Perhaps, Zahavy speculated, the program could figure out how to beat the puzzles if it had enough creative room and was trained in different ways.
So he and his colleagues first collected a set of 53 Penrose puzzles and 15 additional challenge puzzles. On its own, AlphaZero solved fewer than 4 percent of the Penrose puzzles and under 12 percent of the rest. Zahavy wasn’t surprised: Many of these puzzles were designed by chess masters to intentionally confuse computers.
As a test, the researchers tried training AlphaZero to play against itself using the Penrose puzzle arrangements as starting positions, instead of the full board of typical games. Its performance improved dramatically: It solved 96 percent of the Penrose puzzles and 76 percent of the challenge set. In general, when AlphaZero trained on a specific puzzle, it could solve that puzzle, just as it could win when it trained on a full game. Perhaps, Zahavy thought, if a chess program could somehow have access to all those different versions of AlphaZero, trained on those different positions, then that diversity could spark the ability to approach new problems productively. Perhaps it could generalize, in other words, solving not only the Penrose puzzles, but any broader chess problem.
His group decided to find out. They built the new, diversified version of AlphaZero, which includes multiple AI systems that trained independently and on a variety of situations. The algorithm that governs the overall system acts as a kind of virtual matchmaker, Zahavy said: one designed to identify which agent has the best chance of succeeding when it’s time to make a move. He and his colleagues also coded in a “diversity bonus”—a reward for the system whenever it pulled strategies from a large selection of choices.
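As a rough illustration of the matchmaker-plus-diversity-bonus idea, here is a short Python sketch. The confidence scores, the diversity weight, and the selection rule are all invented for the example; they stand in for, but do not reproduce, DeepMind’s actual algorithm.

```python
import random
from collections import Counter

NUM_AGENTS = 10            # differently trained versions of the chess player
DIVERSITY_WEIGHT = 0.1     # strength of the invented "diversity bonus"
usage = Counter()          # how often each agent has been picked so far

def agent_confidence(agent_id, position):
    # Placeholder for each agent's own estimate of how well it can
    # handle this position (e.g., the output of its value network).
    return random.random()

def pick_agent(position):
    """Choose the agent that looks best, nudged toward less-used agents."""
    total = sum(usage.values()) or 1
    best_id, best_score = 0, float("-inf")
    for agent_id in range(NUM_AGENTS):
        score = agent_confidence(agent_id, position)
        # Bonus for drawing on agents that have been chosen less often.
        score += DIVERSITY_WEIGHT * (1.0 - usage[agent_id] / total)
        if score > best_score:
            best_id, best_score = agent_id, score
    usage[best_id] += 1
    return best_id

print("agent", pick_agent("a Penrose-style position"), "makes the next move")
```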
When the new system was set loose to play its own games, the team observed a lot of variety. The diversified AI player experimented with new, effective openings and novel—but sound—decisions about specific strategies, such as when and where to castle. In most matches, it defeated the original AlphaZero. The team also found that the diversified version could solve twice as many challenge puzzles as the original and could solve more than half of the total catalog of Penrose puzzles.
“The idea is that instead of finding one solution, or one single policy, that would beat any player, here [it uses] the idea of creative diversity,” Cully said.
With access to more and different played games, Zahavy said, the diversified AlphaZero had more options for sticky situations when they arose. “If you can control the kind of games that it sees, you basically control how it will generalize,” he said. Those weird intrinsic rewards (and their associated moves) could become strengths for diverse behaviors. Then the system could learn to assess and value the disparate approaches and see when they were most successful. “We found that this group of agents can actually come to an agreement on these positions.”
And, crucially, the implications extend beyond chess.
Cully said a diversified approach can help any AI system, not just those based on reinforcement learning. He has long used diversity to train physical systems, including a six-legged robot that was allowed to explore various kinds of movement before he intentionally “injured” it, leaving it able to keep moving using some of the techniques it had developed earlier. “We were just trying to find solutions that were different from all previous solutions we have found so far.” Recently, he has also been collaborating with researchers to use diversity to identify promising new drug candidates and develop effective stock-trading strategies.
“The goal is to generate a large collection of potentially thousands of different solutions, where every solution is very different from the next,” Cully said. So—just as the diversified chess player learned to do—for every type of problem, the overall system could choose the best possible solution. Zahavy’s AI system, he said, clearly shows how “searching for diverse strategies helps to think outside the box and find solutions.”
Zahavy suspects that in order for AI systems to think creatively, researchers simply have to get them to consider more options. That hypothesis suggests a curious connection between humans and machines: Maybe intelligence is just a matter of computational power. For an AI system, maybe creativity boils down to the ability to consider and select from a large enough buffet of options. As the system gains rewards for selecting a variety of optimal strategies, this kind of creative problem-solving gets reinforced and strengthened. Ultimately, in theory, it could emulate any kind of problem-solving strategy recognized as a creative one in humans. Creativity would become a computational problem.
Liemhetcharat noted that a diversified AI system is unlikely to completely resolve the broader generalization problem in machine learning. But it’s a step in the right direction. “It’s mitigating one of the shortcomings,” she said.
More practically, Zahavy’s results resonate with recent research showing how cooperation among humans can lead to better performance on hard tasks. Most of the hits on the Billboard Hot 100 list were written by teams of songwriters, for example, not individuals. And there’s still room for improvement. The diverse approach is currently computationally expensive, since it must consider so many more possibilities than a typical system. Zahavy is also not convinced that even the diversified AlphaZero captures the entire spectrum of possibilities.
“I still [think] there is room to find different solutions,” he said. “It’s not clear to me that given all the data in the world, there is [only] one answer to every question.”
Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.