Update 9/29/22, with some amendments to the files originally given "as-is":

(1) Removed seven duplicate Coinbase Rapid entries, plus a few duplicate OTB events.

(2) Recomputed Coinbase Rapid with settings specific to the Game/10' or Game/10'+2" time control rather than the generic "Rapid" settings. The latter change in particular raised the median in the NiemannROI.txt list from 49.8 to 50.6. The 51.4 median in the list restricted to OTB tournaments became a joint median with 50.6. The essence of the conclusion, that the distribution of performances is entirely normal, remains unchanged. (Technically, having the mean, one-sigma range, and two-sigma range be close to their values under the bell curve is not enough to assure closeness to a normal distribution overall, but absent any clear indication of a bi-modal pattern that could mean "cheat sometimes, tank other times," the normal distribution is the likeliest source.)

(3) The Rausis file was extended with four more events, so it now comprises all 23 OTB events for which games in the time period can be found in ChessBase.

The originally posted versions of the lists are preserved for documentary completeness with "orig" in their filenames.

*** It should be noted that all data here is from the initial "screening" stage of ***
*** my system. I do not make results from the full-test stage online viewable.     ***
*** The present first-stage data, which is gathered at massive scale for all major ***
*** tournaments, comes via the UB Center for Computational Research (CCR).         ***

Update 10/15/22: I had actually forgotten to remove one duplicate near the top: Niemann's available games from the Bundesliga 2021-22 season. I had noted it and intended to hunt down exactly why the two runs by Stockfish 11 (on the same set of 291 preserved moves) were slightly different.
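The bell-curve comparison mentioned parenthetically above can be sketched as a quick screening check. The following is a minimal stdlib-Python illustration, not part of the actual system (the function name and the synthetic sample are my own): it computes the sample mean and the fractions of values falling within one and two standard deviations of it, which for a normal distribution should sit near 68.3% and 95.4%.

```python
import random
import statistics

def sigma_coverage(data):
    """Return (mean, fraction within 1 sigma, fraction within 2 sigma)."""
    mu = statistics.fmean(data)
    sd = statistics.stdev(data)
    frac = lambda k: sum(abs(x - mu) <= k * sd for x in data) / len(data)
    return mu, frac(1), frac(2)

# Synthetic stand-in for a list of per-event performance figures.
random.seed(1)
sample = [random.gauss(50.0, 5.0) for _ in range(10_000)]

mu, c1, c2 = sigma_coverage(sample)
# For normally distributed data, c1 ~ 0.683 and c2 ~ 0.954.
```

A markedly bimodal "cheat sometimes, tank other times" pattern would pull these coverage fractions visibly away from the 68.3/95.4 benchmarks, which is the informal check being described.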
Komodo versions are run with a total program restart on each position in order to give reproducibility on a single core thread; clearing hash is not enough, because Komodo maintains a "killer move" list between turns. Stockfish versions usually reproduce exactly on one thread even without clearing hash. (The issue with multiple core threads is that the OS can schedule them differently in different trials, leading to divergent hash-table (over-)writes.) But sometimes there are slight differences in the output tables even when the moves and move values are the same. Various kinds of blips can affect a computer from within and without; maybe chess can be focused more to study them. There is also the phenomenon I've described extensively at https://rjlipton.wpcomstaging.com/2012/05/04/digital-butterflies-and-prgs/ and at the end of https://rjlipton.wpcomstaging.com/2018/02/16/a-coupe-of-duchamp/ to consider.

*Anyway*, I was actually making my data look less supportive than it really is of the point that Niemann's distribution is totally normal. The meta-point is that this conclusion is robust against slight changes to the data.

Update 9/19/23: Added all 664 games by Niemann at the G/3+0" Blitz time control on Chess.com from 9/1/23 to 9/18/23, which is when Vladimir Kramnik lost a two-game playoff to Niemann (at G/10+2" Rapid) in the "AI Cup". By my "Rating Time Curve", G/3+0" yields 140 Elo lower quality than G/3+2" Blitz, the time control of the in-person World Blitz Championship that provides the majority of the calibration data. In fact, subtracting 107 from Niemann's 2667 standard rating yields an average ROI of almost exactly 50.0 over the four testing engines. Leaving aside that the error bars for all these figures are about +-25 (which in turn is on the order of general rating uncertainty for an established player at any time), this would give exact parity if Niemann were rated 33 points higher: exactly 2700, as it turns out.
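The arithmetic behind that 33-point gap can be laid out in a few lines. This is just a restatement of the figures quoted above (the variable names are mine, not from the system): the 2560 effective rating at which ROI came out at 50.0, pushed back up by the curve's full 140-Elo penalty, lands on 2700.

```python
# Figures quoted in the 9/19/23 update; variable names are illustrative only.
penalty_g3plus0 = 140    # Elo-quality drop of G/3+0" vs G/3+2" per the Rating Time Curve
standard_rating = 2667   # Niemann's standard rating
fitted_offset = 107      # subtracting this from 2667 gave average ROI of ~50.0

effective_rating = standard_rating - fitted_offset     # rating at which ROI hit 50.0
parity_rating = effective_rating + penalty_g3plus0     # standard rating implying exact parity
gap = parity_rating - standard_rating

print(effective_rating, parity_rating, gap)  # 2560 2700 33
```

The +-25 error bars quoted above comfortably cover a 33-point gap, which is why this reads as a cross-check rather than a discrepancy.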
Now considering the error bars, this provides a good cross-check of my settings for G/3+0" Blitz that were used in my official reports on the Niemann case---this time control applied for most of the online matches in 2020 that were called into question. See files ending "m107.sc9".