{\huge GLL At the Match}
Dick and I will be on Sunday's game telecast \vspace{.5in}
Magnus Carlsen of Norway and Sergey Karjakin of Russia are midway through their world championship match in New York City. The match is organized by Agon Limited in partnership with the World Chess Federation (FIDE).
Tomorrow, Sunday---early today as I post---at 2pm ET is Game 7 with the match all square after six hard-fought draws. Dick and I are in New York City and will be on the telecast streamed by the sponsoring website, WorldChess.com. A one-time \$15 charge brings access to that and all remaining games.
The match is being covered by major media. The movie documentary ``Magnus'' opened yesterday. I was also struck by game-by-game coverage on the FiveThirtyEight website, including a post today titled, ``Are Computers Draining the Beauty Out Of Chess?''
With us on the gamecast will be Murray Campbell of IBM Watson. He was one of the creators of the machine Deep Blue, which famously defeated Garry Kasparov in early 1997. Since then no human player has battled a computer on even terms, while both software and hardware have improved to the point that Kasparov would probably lose to his phone. That is why, as an official consultant to a FIDE commission to combat cheating, I have helped draft rules against smartphones in tournament halls and much else. The commission's chair, Israel Gelfer, shared lunch with Dick, Kathryn Farley, and me earlier today. I will be wearing my deep-blue dress shirt in tribute.
The Games and Some Stats
The players occupy a cubicle behind a partition from the main audience. Ever since the 2006 championship reunification match in which Veselin Topalov accused Vladimir Kramnik of getting computer help, and mindful of past whispers about signals, FIDE has reserved the option of forestalling any possible audience input. Cameras show the on-board action. Expert commentators give running analysis for those onsite and the Internet audience. They have included Judit Polgar, who in 2005 was the first woman to compete in a round-robin tournament for the FIDE world title.
The games start at 2pm. Each player has a budget of 100 minutes for the first 40 moves plus 30 seconds ``increment'' after each move played, so four hours may elapse before the game reaches move 40. Then 50 minutes plus the increment are allotted until move 60, then a final 15 minutes plus the increment for the rest of the game. Although 40 is a typical game length, the six draws have averaged 55 moves per game. Games 3 and 4 saw Karjakin hold out for 78 and 92 moves in positions that at times were desperate. Those games were said to have kept Norwegian government ministers up until 3am and slowed the country.
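The clock arithmetic above can be sanity-checked with a short sketch (the function name is ours, purely illustrative):

```python
def max_minutes_through_move(moves_played, base=100, increment_sec=30):
    """Upper bound on one player's clock consumption: the base budget
    plus the 30-second increment credited after each move played."""
    return base + moves_played * increment_sec / 60

# Each player may use up to 120 minutes through move 40, so the two
# clocks together allow four hours before the first time control.
per_player = max_minutes_through_move(40)
print(per_player, 2 * per_player / 60)  # prints 120.0 4.0
```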
Carlsen is rated 2853 on the Elo rating system, which is 2 points above the record high previously held by Kasparov but about 30 below Carlsen's own peak. Karjakin is at 2772, which makes him a slight but definite underdog. Arpad Elo designed his rating system in 1960 for the United States Chess Federation, and it was adopted by FIDE in 1970. Only relative numbers matter: a linchpin is that a 200-point difference reflects and predicts the stronger player taking about 75\% of the points.
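The 75\% benchmark falls out of the logistic curve at the heart of the Elo model. A minimal sketch of the standard expected-score formula (the ratings plugged in are illustrative):

```python
def expected_score(r_a, r_b):
    """Expected fraction of points for player A against player B
    under the Elo model's logistic curve."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# A 200-point edge predicts roughly three-quarters of the points:
print(round(expected_score(2853, 2653), 3))  # prints 0.76
```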
The change in one's Elo rating after a tournament or match depends only on one's win-draw-loss record and the ratings of one's opponents. This simplicity makes it easily adaptable to other sports, and FiveThirtyEight uses Elo for their in-house predictions of football games and baseball series among other sports. My own work, however, gauges a player's performance on the Elo scale directly by analysis of the moves he or she played---within a deeper analysis of the moves not played. On that scale I have Carlsen and Karjakin playing dead-even at a very high level:
Carlsen 2880 $\pm$ 165; Karjakin 2875 $\pm$ 170.
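The rating update's simplicity can be seen in a minimal sketch of the standard rule, here with the K-factor of 10 that FIDE uses for top players (the draw-streak loop is illustrative only and ignores that official updates are computed against pre-event ratings):

```python
def elo_update(rating, opp_rating, score, k=10):
    """New rating after one game; score is 1 for a win, 0.5 for a
    draw, 0 for a loss. Only the result and ratings matter."""
    expected = 1.0 / (1.0 + 10 ** ((opp_rating - rating) / 400.0))
    return rating + k * (score - expected)

# Six draws between near-equal players barely move the needle:
r = 2853.0
for _ in range(6):
    r = elo_update(r, 2772, 0.5)
print(round(r, 1))  # a few points below 2853
```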
This is reflected also in a less-intensive ``screening run'' I have devised for quick assessment of large tournaments. It produces a value I call ROI for ``Raw Outlier Index'' on a 0--100 scale where 50 is the expected agreement with a particular computer program given one's rating. My tests using the Stockfish 7 and Komodo 10.2 programs both give the players a combined ROI of 51, with Stockfish giving them 51 apiece. I look forward to explaining how one can design a model that gets things yea-close.
Open Problems
Who will win? Will either one win tomorrow's game? We welcome you to catch the action.