The chess player who mocked her opponent

The Sinquefield Cup in St Louis, Missouri, held August 16–29, brought together some of the world’s best chess players – including Magnus Carlsen, Fabiano Caruana and Viswanathan Anand – in games that captivated audiences. However, in terms of skill alone, the most hotly contested chess game of that fortnight did not take place in the port city.

Computers have been better than humans at chess since IBM’s Deep Blue beat Garry Kasparov in 1997. Since then, they have improved dramatically, defeating the human world champion at Go and even top human players at Dota 2. Chess engine tournaments have also been held since 2011, showcasing not just how well computers can play but, more importantly, the computational prowess they can muster. These tournaments do not enjoy the same audiences as human chess: the machines bring with them a “just solve the problem” attitude and none of the intrigue or passion, making matchups a protracted academic exercise.

Even so, the computer-versus-computer game played at Chess.com’s California headquarters on August 23 was different. Leela Chess Zero, an open-source chess engine, seemed to mock its opponent, the Chiron chess engine, after securing an unassailable lead. Leela uses deep reinforcement learning to play and improve at the game – a far cry from the brute-force search that powers Chiron.

Leela taught herself to play chess from scratch, improving by competing in millions of games against a copy of herself – a method called competitive self-play. “Conventional” chess engines like Chiron, on the other hand, learn from a diet of completed games between grandmasters and endgame tablebases.
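Competitive self-play can be illustrated with a deliberately simplified toy sketch – a hypothetical example, nothing like Leela’s actual machinery of deep neural networks and tree search. Two copies of one agent play the game of Nim against each other, and every finished game nudges a single shared value table toward better play:

```python
import random

# Toy sketch of competitive self-play (hypothetical example, not
# Leela's actual training). The game is Nim: players alternately take
# 1-3 objects from a pile, and whoever takes the last object wins.
# Both sides read and update one shared value table, so every game
# the agent plays against its copy improves that one agent.

def train_self_play(games=5000, pile=10, epsilon=0.1, lr=0.1, seed=0):
    rng = random.Random(seed)
    value = {0: 0.0}  # value[n] = estimated win chance for the side to move

    def best_move(n):
        moves = [m for m in (1, 2, 3) if m <= n]
        # Prefer the move that leaves the opponent the worst position.
        return min(moves, key=lambda m: value.get(n - m, 0.5))

    for _ in range(games):
        n, history = pile, []
        while n > 0:
            if rng.random() < epsilon:   # occasional random exploration
                m = rng.choice([m for m in (1, 2, 3) if m <= n])
            else:                        # otherwise play greedily
                m = best_move(n)
            history.append(n)
            n -= m
        # The side that took the last object won. Walk the game backwards,
        # crediting a win to every position the winner moved from and a
        # loss to every position the loser moved from.
        for i, state in enumerate(reversed(history)):
            outcome = 1.0 if i % 2 == 0 else 0.0
            old = value.get(state, 0.5)
            value[state] = old + lr * (outcome - old)
    return value
```

After enough games, the table assigns low values to the theoretically losing pile sizes (multiples of four), so the greedy policy plays correct Nim – learned purely from games against itself, with no external teacher.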

In California, Leela had played over a hundred near-perfect moves to bring her opponent to its knees. Then, all of a sudden, she began to stray from conventional tactics and play some obviously sub-optimal moves. From a position where she could have checkmated her opponent in 20 moves, Leela sacrificed her queen twice, underpromoted a pawn, and gave up a rook and a knight before finally finishing off Chiron with the shortest possible mate.

Did the audience just witness an AI mocking its opponent? This kind of behavior is unexpected: engines like Leela – similar to Google’s AlphaGo and OpenAI Five – are designed to win at specific games, not to show ego. Observers also couldn’t miss the similarities between Leela and AlphaZero, on which she is modelled.

In December 2017, AlphaZero played Stockfish, at the time the most powerful conventional engine (although Google didn’t use its strongest version), whose developers included the programmer who later started the Leela project. AlphaZero had learned to play chess in four hours of competitive self-play. It then defeated Stockfish while often displaying signs of intuition, prompting comments that it had played more like a human.

Leela’s style is similar: she plays with human motifs that make her intelligible to human players. However, it is still unclear how Leela developed these human tendencies and how far she might go. In fact, it’s also unclear whether observers are simply anthropomorphizing it in an effort to explain its behavior.

Speculation without more data is risky, given the way humans tend to perceive AI: sometimes as friendly (as when a computer is addressed with the female pronoun) but more often as malicious and untrustworthy. HAL 9000, the antagonist of the film 2001: A Space Odyssey, is a prime example of this conception, rooted in long-standing doubts about whether the pursuit of AI flattens what it means to be human. Experts have warned that such portrayals could foster hysteria and distrust about the possibilities of AI.

The team of programmers behind Leela offered a possible explanation: endgame tablebases had been added to Leela’s algorithm for the first time. An endgame tablebase records, for a given arrangement of pieces on the chessboard, whether the position can be won. Engines use these tables to assess their prospects and “run down” the table for the shortest win. Leela’s programmers clarified, however, that she only had access to tables indicating whether a position was “won”, “lost” or “drawn” – but not how.
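The distinction matters, and a toy sketch makes it concrete (a hypothetical example, far simpler than real chess tablebases). In the game below, players alternately take 1, 2 or 4 objects from a pile, and whoever takes the last object wins; `wdl()` answers only “won or lost”, like the tables Leela had, while `dtm()` adds the distance-to-the-end information she lacked:

```python
from functools import lru_cache

# Toy sketch of win/loss tables versus distance-to-mate tables
# (hypothetical example, not real chess tablebases). Game: take 1, 2
# or 4 objects from a pile; whoever takes the last object wins.

MOVES = (1, 2, 4)

@lru_cache(maxsize=None)
def wdl(n):
    """'won' or 'lost' for the side to move with n objects left."""
    if n == 0:
        return 'lost'  # no objects left: the previous player just won
    return 'won' if any(wdl(n - m) == 'lost' for m in MOVES if m <= n) else 'lost'

@lru_cache(maxsize=None)
def dtm(n):
    """Plies until the game ends under optimal play from n objects."""
    if n == 0:
        return 0
    legal = [m for m in MOVES if m <= n]
    if wdl(n) == 'won':
        # The winner picks the fastest of the win-preserving moves.
        return 1 + min(dtm(n - m) for m in legal if wdl(n - m) == 'lost')
    # The loser drags the game out as long as possible.
    return 1 + max(dtm(n - m) for m in legal)

# From n = 4, taking 1 (leaving 3, a lost position) and taking 4
# (leaving 0) both preserve the win. wdl() cannot tell them apart;
# dtm() shows taking 4 ends the game in 1 ply instead of 3.
```

An engine armed only with `wdl()` will never throw away a won position, but it has no reason to prefer the quick finish over the slow one – consistent with the programmers’ explanation of Leela’s meandering.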

So they concluded that Leela was probably transitioning into tablebase positions she hadn’t yet learned to play out perfectly. It remains to be seen whether, once she gets the hang of them, her playfulness – or what we perceive as playfulness – will remain.

Binit Priyaranjan is a literature student at the University of Delhi and a freelance writer.
