{\huge What Would Be Left, If...?}
Apocalypses in math and theory, plus a complexity question \vspace{.2in}
Douglas Adams wrote the watchword for today:
Don't Panic.
Still, his novel and series The Hitchhiker's Guide to the Galaxy begins with the impending destruction of the Earth, which goes ahead five minutes too soon. The remainder is post-apocalyptic. It is also pre-apocalyptic, because there are multiple Earths that face destruction at different times. At least multiple days of prophesied doom are something we really have been dealing with.
Today---the last time we can use this word?---we wish to talk about real apocalypses in mathematical and scientific theories. And what was left afterward.
We have already blogged about what would happen to complexity theory if $latex {\mathsf{P = NP}}&fg=000000$ were true and proved. As we said, ``Most of the 573 pages of Arora-Barak would be gone.'' Well this is still hypothetical. Now we will look at actual cases in the past where whole theories were blown up by a surprise result.
This is different from theories going out of fashion and dying out, even if it was from internal causes. Likewise we don't consider the fadeouts of past civilizations to be catastrophes, only ones destroyed by things like volcanoes. Ironically, the branch of mathematics called catastrophe theory itself is said to be one of the fadeouts. As mathematical historian David Aubin wrote in ``Chapter III: Catastrophes'' of his 1998 Princeton PhD thesis:
Catastrophe Theory is dead. Today very, very few scientists identify themselves as `catastrophists'; the theory has no institutional basis, department, institute, or journal totally or even partly devoted to it. But do mathematics die?
He goes on to cite an article by Charles Fisher that proclaimed the death of Invariant Theory. To be sure, theories like that sometimes get revived. But first a word about the Mayans and ultimate catastrophe.
Baktun The Future
All the fuss is about today's ticking over of a Mayan unit of time called a baktun, or more properly b'ak'tun. It's not even a once-in-5,000-years event like everyone says, but rather once-in-144,000 days, making just over 394 years. The point is there have been 13 of them since the inception of the Mayan creation date according to their ``Long Count'' calendar, making $latex {13 \times 394.26 = 5,125.38}&fg=000000$ years in all. So the 14th b'ak'tun starts today---big whoop. The buzz comes from many Mayan inscriptions seeming to max out at 13, but others go as far as 19 and it is known that they counted by 20. Hence the real epoch will be when the 20th and final baktun ticks over to initiate the next piktun. That will be on October 13, 4772. If human civilization lasts that long, that is.
This still has us thinking, what if Earth really were suddenly blown up by Vogons or by Vegans or by a space rock a little bigger than last week's? What would be left? Anything? The reason is that according to a recently-agreed principle in fundamental physical theory, the answer should be everything.
The principle, as enunciated in all-capitals by popular science author Charles Seife in his 2007 book Decoding the Universe, states
{\sc Information Can Neither Be Created Nor Destroyed.}
As we mentioned last March, the agreement was symbolized by Stephen Hawking conceding a bet to John Preskill, who has graced these pages. Hawking underscored the point by making it a main part of the plot of a children's novel written with his daughter Lucy. The father falls into a black hole, but is resurrected by a computer able to piece all the information back together, because it was all recoverable. At least in theory.
Hence even if Earth really is swallowed up later today, or if we disappear leaving all our stored information and literary artefacts to decay within the 50,000-year span estimated for The History Channel's series Life After People, all the information would in principle still exist. Is this comforting? Perhaps not. It could be that while all natural processes conserve information, the more violent ones might embody the computation of a one-way function. It then becomes an issue of complexity theory whether the output of that function could be reverted to its pre-apocalyptic state.
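To make the worry concrete, here is a minimal Python sketch (ours, purely illustrative) that uses a cryptographic hash as a stand-in for a one-way function: the ``violent process'' is trivial to run forward, while the only recovery method sketched is brute-force search, whose cost explodes with the size of what was lost. Of course, whether truly one-way functions exist is itself an open question of complexity theory.

    # A toy stand-in for an apocalypse that conserves information but may
    # not be efficiently reversible.  SHA-256 plays the role of a
    # (conjectured) one-way function: trivial to compute forward, with no
    # known efficient way to invert.
    import hashlib
    import itertools
    import string

    def violent_process(state: str) -> str:
        # Forward direction: instantaneous.
        return hashlib.sha256(state.encode()).hexdigest()

    def reconstruct(digest: str, max_len: int = 4) -> str | None:
        # Backward direction: exhaustive search over short lowercase
        # strings, feasible only when the lost "world" is tiny.
        for n in range(1, max_len + 1):
            for letters in itertools.product(string.ascii_lowercase, repeat=n):
                candidate = "".join(letters)
                if violent_process(candidate) == digest:
                    return candidate
        return None

    if __name__ == "__main__":
        after = violent_process("dent")   # easy
        print(reconstruct(after))         # already sluggish at four letters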
Apocalypses In Math
Here are a few examples from mathematics of ``extinction'' events: usually the extinction was of a theory or whole approach to math.
$latex {\bullet}&fg=000000$ Bertrand Russell and Gottlob Frege:
Frege was just finishing his tome on logic when the letter from Russell arrived showing that Frege's system was inconsistent. The letter basically noticed that the set
$latex \displaystyle \{ x | x \not \in x \}, &fg=000000$
was not well defined: taking $latex {R}&fg=000000$ to be this set yields $latex {R \in R}&fg=000000$ if and only if $latex {R \notin R}&fg=000000$. This destroyed the whole book that Frege had worked so hard on for years. In Frege's own words: ``A scientist can hardly meet with anything more undesirable than to have the foundations give way just as the work is finished. I was put in this position by a letter from Mr. Bertrand Russell when the work was nearly through the press.'' A study in being low-key, as we might say today.

$latex {\bullet}&fg=000000$ David Hilbert and Paul Gordan:
Gordan was known as ``the king of invariant theory.'' His most famous result is that the ring of invariants of binary forms of fixed degree is finitely generated. Hilbert proved his famous theorem that extended this from binary forms to forms in any number of variables, replacing horribly complex arguments with a beautiful existence proof. This, to quote Wikipedia, ``almost put an end to classical invariant theory for several decades, though the classical epoch in the subject continued to the final publications of Alfred Young, more than 50 years later.'' Gordan was less low-key than Frege, since his comment on Hilbert's brilliant work was: ``This is not mathematics; this is theology.'' Oh well.
$latex {\bullet}&fg=000000$ Kurt Gödel and David Hilbert:
Hilbert, again, wanted to create a formal foundation of all mathematics based on an axiomatic approach. He had already done this for geometry in his famous work of 1899. Now Euclid did have an axiomatic system thousands of years earlier, but it was not really formal. Some proofs relied on looking at diagrams and other obvious facts, so Hilbert added extra notions that put geometry on a complete axiomatic footing. For example, Hilbert added the notion of betweenness of three points: point $latex {x}&fg=000000$ is between $latex {y}&fg=000000$ and $latex {z}&fg=000000$.
Of course Gödel proved his famous Incompleteness Theorem, showing that what Hilbert had done for geometry was impossible to do for number theory.
$latex {\bullet}&fg=000000$ Stephen Kleene and Barkley Rosser: Once Alonzo Church's lambda calculus and Haskell Curry's combinators were discovered in the 1930's, it seemed natural to build systems of logic around them. That was the original intent of both Curry and Church. It was therefore a shock when Kleene and Rosser, as students of Church, showed at a stroke that they were inconsistent. The reason is that their standard of ``unambiguous definition'' claimed too extensive a reach, as with Frege's formalization of the notion of ``set.'' It essentially allowed defining an exhaustive countable list of unambiguously-defined real numbers, for which the Cantor diagonal number was unambiguously defined within the system, a contradiction. I (Ken) liken this paradox phenomenon to the collapse of the Tower of Babel.
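To sketch the diagonal step in symbols (our rendering, not Kleene and Rosser's exact formulation): suppose $latex {r_1, r_2, r_3, \dots}&fg=000000$ enumerates all reals in $latex {[0,1)}&fg=000000$ that the system counts as unambiguously defined, and let $latex {r_{n,n}}&fg=000000$ denote the $latex {n}&fg=000000$-th digit of $latex {r_n}&fg=000000$. Then

$latex \displaystyle d = 0.d_1 d_2 d_3 \cdots, \qquad d_n = \begin{cases} 5 & \text{if } r_{n,n} \neq 5, \\ 6 & \text{if } r_{n,n} = 5, \end{cases} &fg=000000$

is itself unambiguously defined within the system, yet differs from every $latex {r_n}&fg=000000$ in the $latex {n}&fg=000000$-th digit, which is the contradiction.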
$latex {\bullet}&fg=000000$ Riemann's Non-Conjecture Refuted: At the very end of his famous 1859 paper which included the Riemann Hypothesis, Bernhard Riemann made a carefully-worded statement about the relationship between the prime-counting function $latex {\pi(x)}&fg=000000$ and the logarithmic integrals $latex {li(x)}&fg=000000$ and $latex {Li(x) = li(x) - li(2)}&fg=000000$:
Indeed, in the comparison of $latex {Li(x)}&fg=000000$ with the number of prime numbers less than $latex {x}&fg=000000$, undertaken by Gauss and Goldschmidt and carried through up to $latex {x =}&fg=000000$ three million, this number has shown itself out to be, in the first hundred thousand, always less than $latex {Li(x)}&fg=000000$; in fact the difference grows, with many fluctuations, gradually with $latex {x}&fg=000000$.
Further calculations were consistent with the inequality holding in general, until in 1914 John Littlewood refuted this not just once, but infinitely often. That is, he did not find a counterexample by computation, but rather proved that $latex {li(x) - \pi(x)}&fg=000000$ must change sign infinitely often. In fact, the first number giving a sign flip is still unknown, though it must be below $latex {e^{727.951346801}}&fg=000000$.
Although this is included on Wikipedia's short list of disproved mathematical ideas, its significance is not the inequality hypothesis itself, but the fallible nature of numerical evidence. Michael Rubinstein and Peter Sarnak showed an opposite surprise: the set of integers $latex {x}&fg=000000$ giving a negative sign has non-vanishing (logarithmic) density, in fact about $latex {0.00000026}&fg=000000$, so it is disturbing that no such $latex {x}&fg=000000$ is within the current range of calculation.
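For the curious, here is a minimal SymPy sketch (ours, purely illustrative) of how one-sided the accessible evidence looks: at every checkpoint we can feasibly tabulate, $latex {li(x) - \pi(x)}&fg=000000$ is comfortably positive, even though Littlewood's theorem guarantees infinitely many sign changes further out.

    # How the numerical evidence looks from where we can stand:
    # li(x) - pi(x) stays positive at every feasible checkpoint, yet
    # Littlewood proved that it changes sign infinitely often.
    from sympy import li, primepi

    for x in [10**3, 10**4, 10**5, 10**6]:
        diff = float(li(x)) - int(primepi(x))
        print(f"x = {x:>9,}:  li(x) - pi(x) = {diff:8.1f}")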
$latex {\bullet}&fg=000000$ Mertens Conjecture Refuted: The conjecture, which was first made in 1885 by Thomas Stieltjes, not Franz Mertens, states that the sum of the first $latex {n}&fg=000000$ values of the Möbius function has absolute value at most $latex {\sqrt{n}}&fg=000000$. That is,
$latex \displaystyle M(n) = |\sum_{k=1}^n \mu(k)| \leq \sqrt{n}. &fg=000000$
Despite the fact that all computer calculations still support this, Andrew Odlyzko and Herman te Riele disproved it theoretically in 1985. At least it has an exponentially bigger leeway than the previous one: the best known upper bound on a bad $latex {n}&fg=000000$ is currently

$latex \displaystyle e^{15,900,000,000,000,000,000,000,000,000,000,000,000,000}. &fg=000000$
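As an illustration of how persuasive, and how empty, such support can be, here is a minimal sketch (ours) using SymPy's Möbius function; nothing in any feasible range comes close to violating the bound.

    # Check the Mertens bound |M(n)| <= sqrt(n) for n up to 10,000.
    # Every such computation supports the conjecture, yet Odlyzko and
    # te Riele proved that it must fail at some astronomically large n.
    from sympy import mobius

    partial_sum = 0
    worst_ratio = 0.0
    for n in range(1, 10_001):
        partial_sum += int(mobius(n))
        worst_ratio = max(worst_ratio, abs(partial_sum) / n ** 0.5)

    print(f"max |M(n)| / sqrt(n) for n <= 10000: {worst_ratio:.3f}")
    # Prints a value well below 1, evidence of exactly the kind that
    # turned out to prove nothing.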
Moreover the following weaker statement, which Stieltjes thought he had proved, is still open:

$latex \displaystyle (\exists C > 0)(\exists n_0 > 0)(\forall n \geq n_0) M(n) \leq Cn^{\frac{1}{2}}. &fg=000000$

The reason this is portentous is that the following slight further weakening,

$latex \displaystyle (\forall \epsilon > 0)(\exists C > 0)(\exists n_0 > 0)(\forall n \geq n_0) M(n) \leq Cn^{\frac{1}{2} + \epsilon}, &fg=000000$

is actually equivalent to the Riemann Hypothesis.

The failure of the Riemann Hypothesis would have a definite apocalyptic effect: it would wipe away all the many papers that assume it. It is not clear whether those papers could even be saved by the kind of ``relativization'' we have in complexity theory, whereby results obtained assuming $latex {\mathsf{P \neq NP}}&fg=000000$ and so on may still be valid relative to oracle languages $latex {B}&fg=000000$ such that $latex {\mathsf{P}^B \neq \mathsf{NP}^B}&fg=000000$.
Our Neighbors' Houses
Still, the loss of papers assuming Riemann would be less than what would happen in physics if supersymmetry were disproved, as its failure could take all of string theory down with it. The Standard Model of particle physics also seems to have survived the problems that the absence of the Higgs boson would have caused, although issues with the Higgs are still causing apocalyptic reactions from some physicists.
Perhaps we in computer science theory are fortunate to experience less peril. Even so, we are left with this quotation attributed to Hilbert by Howard Eves:
One can measure the importance of a scientific work by the number of earlier publications rendered superfluous by it.
Open Problems
Do you have other favorite examples of results in the mathematical and general sciences that caused the collapse of theories?
Does Nature compute complexity-theoretic one-way functions?