{\huge Thanks for Additivity}
A mathematical metaphor for progress \vspace{.5in}
Nicole Oresme was a fourteenth-century polymath. He lived in France and became an advisor to King Charles V, who sponsored his translations of many works by Aristotle from Latin into French. He later became bishop of Lisieux, a post he held until his death in 1382. Oresme made original advances in physics, geography, and mathematics. He was evidently the first to prove that the harmonic series $latex {\sum_N \frac{1}{N}}&fg=000000$ sums to infinity.
Today---yesterday was Thanksgiving Day in the US---we give thanks for mathematical advances and discuss what kind of cornucopia they are.
László Babai has added one more talk this coming Tuesday, December 1, to his series on his quasipolynomial-time algorithm for graph isomorphism. Last Tuesday's talk, which was again live-tweeted by Gabriel Gaster beginning here, got as far as his ``Design Lemma.'' The ``split-or-Johnson'' title was transferred to the last talk. Or what we think is the last talk.
We are also thankful for Terry Tao's recent culmination of a broad effort on the Erd\H{o}s Discrepancy Problem. We noticed an article posted online three days ago by Quanta Magazine mentioning Tao and our friend Gil Kalai. It is on the solution of a problem of Richard Kadison and Isadore Singer by Adam Marcus, Dan Spielman, and Nikhil Srivastava. Their paper appeared in 2013, yet researchers are still working out the power of the new tools that came from applying notions of expander graphs and matrix methods familiar in CS theory to a more abstract-seeming problem. We missed it at the time, and would love to hear from readers about other advances since then.
Musings on Progress
I, Ken, have been thinking abstractly about the nature of progress while justifying my position, shared by Dick, that not only is there an absolute limit to strength at chess as measured by the Elo rating system, but also that today's best computer chess programs are already close to it. The clear top two programs, Komodo and Stockfish, are in the final days of this year's Thoresen Chess Engines Championship (TCEC) final. With 8 wins to Stockfish's 2, Komodo is on the verge of defending the title it won a year ago. More striking, however, is that almost 90% of the games have been drawn, many seemingly without any chance for either side to win.
Komodo playing Black has beaten Stockfish playing White four times. For the 100-game match, Erik Kislik and Nelson Hernandez selected 50 opening sequences of 8 moves, and organizer Martin Thoresen set the engines to play each one once on each side. The openings typically leave White ahead by 0.20--0.35 pawns as measured by the engines. A win as Black therefore requires outplaying the opponent by over half a pawn more than a win as White does, so it is like breaking serve at tennis. Komodo has never looked in any danger as White.
If Komodo could hold at least two draws every ten games against any player, then by the rules of the rating system, no player could be rated more than 366 Elo points above it. Thus a rating estimated near 3250 for it now would translate to an absolute ceiling of about 3600, which is the high end of where various of my chess model's regression lines cross the $latex {y}&fg=000000$-axis at $latex {x = 0}&fg=000000$, meaning perfect play. If Komodo played deterministically then an adversary could be tailored to its mistakes, but there are various ways to randomize its play in positions with multiple near-optimal moves without affecting its strength much. Is the current Komodo that close to perfection? It is hard to conceive a better test than the one it is getting right now from Stockfish.
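To spell out the arithmetic---our gloss, with the 366 figure presumably coming from the conversion table FIDE uses, which is based on a normal rather than logistic curve---two draws per ten games means an expected score of at least $latex {0.10}&fg=000000$, and under the common logistic form of the Elo expectation a gap of $latex {D}&fg=000000$ rating points predicts a score of $latex {1/(1 + 10^{D/400})}&fg=000000$, so

$latex \displaystyle \frac{1}{1 + 10^{D/400}} \;\geq\; 0.10 \quad\Longrightarrow\quad 10^{D/400} \;\leq\; 9 \quad\Longrightarrow\quad D \;\leq\; 400\log_{10} 9 \;\approx\; 382,&fg=000000$

which is in the same ballpark and yields the same kind of ceiling.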
I was recently at an event with Hernandez and Komodo co-creator Mark Lefler. Lefler pointed out several possible improvements, each worth in the range of 10--20 Elo points. I thought, what if the best of these is worth 20 Elo, the next 19, and each successive one has 95% of the value of its predecessor? By summing the geometric series, you gain no more than 400 points in total, again giving a ceiling not much above 3600. The game of Go may have a higher ceiling, but still a ceiling. I wondered whether mathematical progress in general has this kind of geometric limit.
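Concretely, under that hypothetical 5%-decay assumption the total gain is bounded by the geometric series

$latex \displaystyle \sum_{k=0}^{\infty} 20\cdot(0.95)^k \;=\; \frac{20}{1 - 0.95} \;=\; 400.&fg=000000$

No matter how many improvements are found, the multiplicative decay caps their combined value.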
Oresme and the Harmonic Series
It strikes me that unlike a performance advance, an advance in knowledge is attenuated only by the number $latex {N}&fg=000000$ of advances already made in that line. I believe that its value on average is proportional to $latex {\frac{1}{N}}&fg=000000$, not so much intrinsically as in relation to the amount $latex {N}&fg=000000$ of work that must be expended to digest the previous advances which support it. Put simply, the $latex {N}&fg=000000$-th advance may require at worst $latex {N}&fg=000000$ units of effort to express, giving $latex {\frac{1}{N}}&fg=000000$ as a value-over-cost estimate. The point is that unlike a geometric series, $latex {\sum_N \frac{1}{N}}&fg=000000$ goes to $latex {+\infty}&fg=000000$.
Oresme proved this in the classic simple manner: after the initial $latex {1}&fg=000000$, group the terms into blocks of $latex {2^k}&fg=000000$ terms each: $latex {\frac{1}{2}}&fg=000000$, then $latex {\frac{1}{3} + \frac{1}{4}}&fg=000000$, then $latex {\frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}}&fg=000000$, and so on. Every block sums to at least $latex {\frac{1}{2}}&fg=000000$, so the whole series is at least $latex {\frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \dots = +\infty.}&fg=000000$ If mathematical progress is likewise harmonic, then its value is unbounded.
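The divergence is glacial, which suits the metaphor. By a standard estimate---nothing special to our reading of it---the partial sums grow only logarithmically:

$latex \displaystyle H_N \;=\; \sum_{n=1}^{N} \frac{1}{n} \;\approx\; \ln N + \gamma, \qquad \gamma \approx 0.577,&fg=000000$

so reaching a total value of $latex {10}&fg=000000$ takes on the order of $latex {e^{10-\gamma} \approx 12{,}000}&fg=000000$ advances. Yet the total is unbounded all the same.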
It is possible that improvements at chess could leverage the same additive power of the harmonic series if they follow a Zipfian distribution. The best analogy may be to the situation of the Heaps-Herdan Law, where chess games are like texts of words and a ``new word'' is a position in a game where a new improvement by a chess program could matter. This counts the improvements at unit value, assuming each changes a draw into a win or a loss into a draw. I believe, however, that the ``draw attractor'' nature of chess---which is being remarked upon in the games of the TCEC final---pulls the frequency of improvements that can change a game below this law. The law may hold for improvements that can raise the measured value of an engine's position by 0.10 or 0.20 but not enough to change a game except in combination. Requiring such combinations multiplies the harmonic terms together, making the series behave like $latex {\sum_N \frac{1}{N^s}}&fg=000000$ with $latex {s > 1}&fg=000000$, which gives a hard ceiling.
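For contrast with the harmonic case---a standard fact about the zeta series, not anything specific to chess---any exponent $latex {s > 1}&fg=000000$ makes the series converge; for instance,

$latex \displaystyle \sum_{N=1}^{\infty} \frac{1}{N^2} \;=\; \frac{\pi^2}{6} \;\approx\; 1.645,&fg=000000$

so improvements whose effective frequency decays that fast contribute only a bounded total, however many of them there are.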
Which law should apply to mathematical progress? In the concrete here-and-now rather than asymptotic-in-$latex {N}&fg=000000$, Dick and I have opined that we are still in a golden age of discovery. One sign is that we can probably make much progress just by a more incisive understanding of what we already have. The MacTutor biography of Oresme quotes an assessment by Marshall Clagett:
This brilliant scholar has been credited with ... the invention of analytic geometry before Descartes, with propounding structural theories of compounds before nineteenth century organic chemists, with discovering the law of free fall before Galileo, and with advocating the rotation of the Earth before Copernicus. None of these claims is, in fact, true, although each is based on discussion by Oresme of some penetration and originality ...
We recently made the point that even solved conjectures can reward further penetration, and for this we can be thankful.
Open Problems
What kind of diminishing returns apply to mathematical progress, multiplicative/geometric or additive/harmonic?
One thing we hope is subject to a swiftly diminishing law is errata. We are thankful again for close readings of our quantum algorithms textbook, especially by Cem Say and his students at Bo\u{g}azi\c{c}i University. They and a couple of others have added several more entries to our updated \href{http://www.cse.buffalo.edu/~regan/LRQbook.html}{errata page}.