{\huge The Primes Strike Again}
And lead to new kinds of cheating and ideas for our field
\vspace{.25in}
Faadosly Polir is the older brother of Lofa Polir. He is now working as a full-time reporter for GLL.
Today we are reporting several of his recent discoveries that you may find interesting.
You may wonder how GLL is able to afford a reporter on staff. We can't. All our work we bring you at the same price we always have. Reporters, however, are burgeoning. Maybe it's the political cycle. Maybe ``Spotlight'' winning the Oscar helped. Or maybe leprechauns have created a surplus---that would explain the political shenanigans we're seeing.
Faadosly didn't take long to earn his keep. He spotted an announcement in a deep-web online journal:
Second Hardy-Littlewood Conjecture (HL2) Proven.
We just covered the recent new and overwhelming evidence for the First Hardy-Littlewood Conjecture (HL1). Together, these conjectures have wonderful consequences, which unfortunately are already being turned to ill use by some of our young peers.
Productivity
That's right, we are saddened to report here at GLL that the prodigious output of a few young theorists is due to a kind of cheating. To protect their names we will refer to them collectively as X.
It is now known that X have been able to write so many beautiful papers over the recent years owing to the discovery that ZF is actually inconsistent. Since ZF, basic set theory, is used to prove everything in computer science theory, it follows that anything can be proved. At my old university they are running supercomputers to do massive derivations in Vladimir Voevodsky's formulation of Homotopy Type Theory (HoTT). Although HoTT embeds the constructive version of PA, it does not prove its consistency, let alone that of ZF---yet it can model derivations using statements like HL2 as axioms. Faadosly's investigative reporting revealed what happened: the ZF proof of HL2 enabled HoTT to show that a bug in Edward Nelson's argument for the inconsistency of PA goes away when the argument is lifted to ZF via known facts about HL1.
This isn't how X were caught. Rather, they were caught completely separately by running software called MORP, for ``Measure of Research Productivity.'' MORP is not simply a plagiarism detector; instead it applies equations determined by regression over many thousands of papers in the literature. The equations can determine when someone's productivity is way too high. Our own cheating expert Ken reports:
There is over $latex {5\sigma}&fg=000000$ confidence that the results of X could not be obtained by a human mathematician without deep computer assistance.
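We could not obtain MORP's actual equations. Purely as our own toy illustration of the kind of $latex {5\sigma}&fg=000000$ test Ken describes, the sketch below flags a publication rate far above a field-wide baseline; every name and number in it is invented.

```python
def morp_flag(papers, years_active, field_mean_rate, field_std_rate, threshold=5.0):
    """Toy 'productivity anomaly' check: how many standard deviations an
    author's papers-per-year rate sits above a field-wide baseline.
    In MORP the baseline would come from regression over thousands of
    papers; here it is simply passed in (hypothetical numbers)."""
    rate = papers / years_active
    z = (rate - field_mean_rate) / field_std_rate
    return z, z > threshold

# Invented figures: suppose the field averages 3 papers/year (std 1.5)
# and X produced 60 papers in 4 years.
z, flagged = morp_flag(papers=60, years_active=4,
                       field_mean_rate=3.0, field_std_rate=1.5)
print(f"z = {z:.1f} sigma, flagged: {flagged}")   # z = 8.0 sigma, flagged: True
```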
MORP interfaces with a project to determine the likelihood of conjectures by analyzing historical proof attempts on them. By deep learning over attempts on past conjectures before they were settled, and by Monte-Carlo self-play over thousands of proof attempts with reinforcement learning once a result is proved, the project has achieved a success rate comparable to that of Google DeepMind's AlphaGo program.
When run on the contents of Gerhard Woeginger's $latex {\mathsf{P}}&fg=000000$ versus $latex {\mathsf{NP}}&fg=000000$ page, whose 107 claimed proofs are split roughly evenly between $latex {\mathsf{P = NP}}&fg=000000$ and $latex {\mathsf{P \neq NP}}&fg=000000$, the output gives confidence comparable to IBM Watson's in its ``Jeopardy!'' answers that both are theorems.
Thus our field already had strong evidence for the inconsistency that we are now reporting. This is the first major inconsistency result in mathematics since 1901 and 1935, an unusually long gap of 81 years. Reached for comment, Voevodsky told Faadosly that he was not surprised: ``After all, if ZF had been consistent then ZF + $latex {\neg}&fg=000000$Con(ZF) would also be consistent, and the situation we have now is practically speaking not much different from that.''
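The reasoning behind his quip is just Gödel's Second Incompleteness Theorem spelled out: if ZF is consistent, then it cannot prove its own consistency, so adding the denial of that consistency cannot introduce a contradiction:

$latex {\mathrm{Con}(\mathsf{ZF}) \;\Longrightarrow\; \mathsf{ZF} \nvdash \mathrm{Con}(\mathsf{ZF}) \;\Longrightarrow\; \mathrm{Con}\big(\mathsf{ZF} + \neg\mathrm{Con}(\mathsf{ZF})\big).}&fg=000000$

So even a consistent ZF coexists with a consistent theory asserting that ZF is inconsistent, which is his point.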
Reverse Prizes
This news has heightened discussions already long underway among prize committees at top organizations in math and theory about new policy for the awarding of their large prizes. We have already covered the feeling among many that paying large sums for past achievements (often long past) is inefficient for stimulating research and community interest---while at the same time these organizations are being upstaged by startup upstarts giving far bigger prizes.
Doing away with prize money altogether was rejected as lame---the large sums are important for the public eye. The committees concluded it is vital to maintain the absolute values of the prizes. So the prizes will still be awarded as before by a blue-ribbon panel and carry large dollar amounts. The only thing different will be the sign. The winner will be required to pay the prize money to the organization. Thus a Turing Award winner will owe one million dollars to the ACM.
The motivation is twofold. First, the prize money paid to the ACM will be used to hire extra needed administrative personnel. Second, it can be used for greater outreach. A generous payment plan is also being considered: payment of the prize money over periods as long as twenty years.
With help from Faadosly we have been running a private poll of past winners of the Turing and lesser prizes. Over 80% have said something equivalent to: ``I would have been happy to pay the prize amount. It's the Turing Award, after all.'' ``I could always have sold my house or perhaps a kidney,'' said one recent winner with a smile on her face. There is talk that this new kind of prize may be particularly appropriate for the new results on ZF, to cover a small part of the costs they will incur.
For now the committees have decided only to proceed on a provisional basis with the lesser prizes. Some are being given reversed names---for instance, nominations are now being accepted for the 2017 Thunk prize. The University of Michigan has proposed a reverse prize to benefit Ann Arbor's innovative but consolidating Le Dog restaurants.
Zero and Negative Acceptance Rates
I was just recently at Simons in Berkeley, and of course several people from Stanford were there. They discussed the recent New York Times article on Stanford's new policy of having a 0% admission rate, shooting its undergraduate program to the zenith of exclusivity. I mentioned that at Tech our enormously popular remote Master's in CS program is allowing us to emulate Stanford as regards on-site admission. This in turn will free up resources currently used on student housing and let us allocate more teaching staff to the online courses.
We began talking about similar ideas to increase the prestige of major conferences. Several conferences already have acceptance rates dipping below 10%, so going to 0% is not so big an affine step. Indeed, several people thought it too small a step and argued we should go all out for negative acceptance rates (NAR).
Conferences using NAR work on a simple system. Anyone wishing to submit creates an item on EasyChair or a similar logging service the way we do today. The Program Committee then sends that person one of their current original papers to referee. The best $latex {n\%}&fg=000000$ of the referee reports are then selected for oral presentation and double-blind publication in the proceedings, thereby achieving a $latex {-n\%}&fg=000000$ acceptance rate. People making the very top submissions, who previously would win Best Paper awards, become co-authors of the papers they referee.
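To make the bookkeeping concrete, here is a toy sketch of the NAR accounting (our own illustration; the function name and the numbers are invented, and no conference actually runs this):

```python
def nar_outcome(num_submitters, n_percent):
    """Toy accounting for a negative-acceptance-rate (NAR) conference:
    every submitter referees one PC paper, the best n% of the referee
    reports are presented, and the 'acceptance rate' is recorded as -n%."""
    reports_presented = round(num_submitters * n_percent / 100)
    papers_accepted = 0            # no submitted paper is ever accepted
    acceptance_rate = -n_percent   # by decree of the Program Committee
    return reports_presented, papers_accepted, acceptance_rate

# Hypothetical: 400 submitters, top 10% of referee reports presented.
print(nar_outcome(400, 10))   # (40, 0, -10)
```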
NAR is considered an excellent way to promote research by talented people---now being selected for a PC needn't be a sacrifice of research time. It also galvanizes the conference reviewing process. The upcoming STOC Business Meeting will include a proposal for its use on a rotating basis by FOCS or STOC in alternate years.
Open Problems
What do you think of these developments? Are they all plausible? Do you believe the conjectures of Godfrey Hardy and John Littlewood? After all, these two giants conjectured both of them.
Have a happy April Fool's Day.