{\huge The End Of Human Science?} Could we go the way of telegraph operators?

Lofa Polir has sent us some new information that will have widespread ramifications for math, theory, and science in general.

Today Ken and I wish to comment on this information.

Polir is sure that this information is correct. If he is correct, the consequences for all of us will be immense.

The Cover Story

His information is based on recent work of Sebastian Thrun---one of the world's experts in machine learning. This week's New Yorker magazine presents Thrun's work on replacing doctors who diagnose skin diseases. The article describes him as follows:

Thrun, who grew up in Germany, is lean, with a shaved head and an air of comic exuberance; he looks like some fantastical fusion of Michel Foucault and Mr. Bean. Formerly a professor at Stanford, where he directed the Artificial Intelligence Lab, Thrun had gone off to start Google X, directing work on self-learning robots and driverless cars.

His work is really interesting, and he has stated that medical schools should stop teaching doctors to read X-rays and other images, since robotic systems will soon be better at this. His system for skin images already beats expert doctors at detecting abnormal growths.

But this project, along with his others, is a smokescreen for his most important project, claims Polir. Thrun has put together a double-secret project that has been running for over five years. Its goal: the automation of math and other sciences. Thrun predicts---well, let's take a look at what he is doing first.

The Real Story

Thrun's project is to use machine learning methods to build a system that can outperform us in doing science of all kinds. It requires huge amounts of data, and he has access to that via the web. The strategy is an exact parallel of how Google DeepMind's AlphaGo program was trained. Quoting our friends on Wikipedia regarding the latter:

The system's neural networks were initially bootstrapped from human gameplay expertise. AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves. Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play.
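That recipe, imitate the experts and then improve by self-play, is simple enough to caricature in a few dozen lines. The toy Python sketch below is entirely our own: it uses the tiny game of Nim in place of Go, a hand-made ``expert'' strategy in place of the database of recorded human moves, and a crude win/loss update in place of DeepMind's actual reinforcement learning.

\begin{verbatim}
# Toy illustration of the recipe quoted above: (1) bootstrap a policy
# from "expert" games, then (2) improve it by self-play reinforcement.
# The game is Nim (take 1-3 stones; whoever takes the last stone wins),
# chosen only because it is small enough to run in seconds.
import random
from collections import defaultdict

MOVES = (1, 2, 3)

# policy[stones][move] = preference weight for playing that move
policy = defaultdict(lambda: {m: 1.0 for m in MOVES})

def choose(stones):
    legal = [m for m in MOVES if m <= stones]
    weights = [policy[stones][m] for m in legal]
    return random.choices(legal, weights)[0]

# Phase 1: imitation. "Expert" moves follow the known optimal strategy
# for Nim (leave the opponent a multiple of 4 stones).
for _ in range(2000):
    stones = random.randint(1, 20)
    expert = stones % 4 if stones % 4 in MOVES else random.choice(MOVES)
    policy[stones][expert] += 1.0          # match the expert's move

# Phase 2: self-play reinforcement. Play the policy against a copy of
# itself and reward every move made by the eventual winner.
for _ in range(20000):
    stones, player, history = random.randint(1, 20), 0, {0: [], 1: []}
    while stones > 0:
        move = choose(stones)
        history[player].append((stones, move))
        stones -= move
        winner = player                    # this player took the last stone
        player = 1 - player
    for s, m in history[winner]:
        policy[s][m] += 0.5                # reinforce winning moves

# Best learned move from each small position
print({s: max(policy[s], key=policy[s].get) for s in range(1, 11)})
\end{verbatim}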

In place of reading and digesting master games of Go, Thrun's system reads and digests scientific papers. The ability to have his algorithm ``read'' all papers in science is the secret:

Thrun points out that mathematicians in their lifetime may read and understand thousands of papers, but his system is capable of understanding millions of papers.

This ability is one of the reasons his algorithm will outperform us. Another is that it can use immense computational power 24/7. It never needs to sleep or rest. Polir claims that Google has made an entire secret data center of over a billion CPU cores available to this project. In a closed agreement with the University of Wisconsin, the center is housed in the new IceCube neutrino observatory. Polir justifies revealing this on the grounds that it should be obvious---would they really hew out a cubic kilometer of ice in Antarctica just to observe neutrinos and ignore the cooling-cost benefits of placing a huge processing center in the cavity?

How it Works

Old-time theorem provers used lots of axioms and proof rules. This kind of approach can only go yea-far. Homotopy type theory, which tries out a more topological approach, provided part of the inspiration to Thrun that he could find a better way. Another part was Roger Penrose's argument that humans are less blocked by Kurt Gödel's Incompleteness Theorems than logic-based systems are. So Thrun was spurred to start by making his machine learn from humans, much like AlphaGo.

In the New Yorker article---with extra information gleaned by Polir---Thrun describes the situation this way:

``Imagine an old-fashioned program to identify a dog,'' he said. ``A software engineer would write a thousand if-then-else statements: if it has ears, and a snout, and has hair, and is not a rat . . . and so forth, ad infinitum. But that’s not how a child learns to identify a dog, of course.'' Logic-based proof systems work the same way, but that's not really how we go about identifying a proof. Who checks modus ponens on every line? ``The machine-learning algorithm, like the child, pulls information from a training set that has been classified. Here’s a dog, and here’s not a dog. It then extracts features from one set versus another.'' Or like a grad student it learns, here's a proof, and here's not a proof. And, by testing itself against hundreds and thousands of theorems and proofs, it begins to create its own way to recognize a proof—again, the way a grad student does. It just knows how to do it.
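We could not resist putting Thrun's dog example into code. The rule-based function and the little learned classifier below are our own illustration of the contrast he draws; the features, the training set, and every name in it are invented for the occasion.

\begin{verbatim}
# Toy contrast: hand-written if-then-else rules versus a classifier
# that extracts its own features from labeled examples.

def is_dog_rules(animal):
    # The "old-fashioned program": a thicket of hand-coded conditions.
    if "ears" not in animal:
        return False
    if "snout" not in animal:
        return False
    if "hair" not in animal:
        return False
    if "rat_tail" in animal:
        return False
    return True  # ...and so forth, ad infinitum

def learn_classifier(examples):
    # The machine-learning approach: count how often each feature
    # appears among dogs versus non-dogs, then score new animals.
    counts = {True: {}, False: {}}
    totals = {True: 0, False: 0}
    for features, label in examples:
        totals[label] += 1
        for f in features:
            counts[label][f] = counts[label].get(f, 0) + 1

    def classify(features):
        def score(label):
            return sum(counts[label].get(f, 0) / totals[label]
                       for f in features)
        return score(True) >= score(False)

    return classify

training_set = [
    ({"ears", "snout", "hair", "barks"}, True),
    ({"ears", "snout", "hair", "fetches"}, True),
    ({"ears", "snout", "hair", "rat_tail"}, False),   # a rat
    ({"feathers", "beak", "wings"}, False),           # a bird
]
classify = learn_classifier(training_set)
print(classify({"ears", "snout", "hair", "barks"}))   # True
print(classify({"feathers", "beak"}))                 # False
\end{verbatim}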

Polir confirmed that Thrun's machine first runs the papers through the kind of ``Lint''-like module we posted about. This is not only a data-cleaning step; it also primes the reinforcement-learning module on the mathematical and scientific content.
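We do not know what is inside that module, but a ``Lint''-like pass over paper sources presumably normalizes the LaTeX, flags obvious defects, and pulls out the theorem and proof environments as grist for the later learning stages. Here is a guess at such a pass; the checks and regular expressions are entirely our own illustrations, not anything from the project.

\begin{verbatim}
# A guessed-at "Lint"-like cleaning pass over LaTeX paper sources.
import re

def lint_paper(latex_source):
    warnings = []
    text = re.sub(r"%.*", "", latex_source)            # strip comments
    text = re.sub(r"[ \t]+", " ", text)                # collapse spaces
    if r"\begin{proof}" in text and r"\end{proof}" not in text:
        warnings.append("unclosed proof environment")
    if re.search(r"\\cite\{\s*\}", text):
        warnings.append("empty citation")
    # Extract theorem statements and proofs for the learning stage.
    theorems = re.findall(r"\\begin\{theorem\}(.*?)\\end\{theorem\}",
                          text, re.S)
    proofs = re.findall(r"\\begin\{proof\}(.*?)\\end\{proof\}",
                        text, re.S)
    return {"clean_text": text.strip(), "warnings": warnings,
            "theorems": theorems, "proofs": proofs}

sample = r"""
\begin{theorem} Every even number greater than 2 is % TODO check
the sum of two primes. \end{theorem}
\begin{proof} Left to the machine. \end{proof}
"""
print(lint_paper(sample)["theorems"])
\end{verbatim}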

Then comes a Monte Carlo phase in which the system randomly generates alternative proofs of lemmas in the papers and scores the proofs for economy and clarity. This completes the automated paper-rewriting level of their service, which is under negotiation with Springer-Verlag, Elsevier, and other academic publishers for deals that may assure steady funding of the larger project. Finally, the results of these runs are input into the deep-learning stack, which infers the kinds of moves that are most likely to lead to correct proofs and profitable discoveries.
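Again we can only guess at the details, but the Monte Carlo idea can be caricatured as follows: generate many random derivations of a toy lemma by chaining modus ponens, score each for economy, and keep the best one. The facts, rules, and scoring below are our own stand-ins, not anything from Thrun's system.

\begin{verbatim}
# Monte Carlo caricature of the proof-rewriting phase: sample random
# derivations of a goal and keep the most economical one found.
import random

FACTS = {"A"}
RULES = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C"), ("A", "C")]
GOAL = "C"

def random_proof():
    known, steps = set(FACTS), []
    while GOAL not in known and len(steps) < 10:
        usable = [(p, q) for p, q in RULES if p in known and q not in known]
        if not usable:
            return None
        p, q = random.choice(usable)        # modus ponens, chosen at random
        steps.append(f"{p} -> {q}")
        known.add(q)
    return steps if GOAL in known else None

def score(steps):
    return -len(steps)                      # economy: shorter is better

candidates = [p for p in (random_proof() for _ in range(500)) if p]
best = max(candidates, key=score)
print(best)                                 # e.g. ['A -> C'] in one step
\end{verbatim}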

One of the predictions Thrun makes is that, just as with doctors reading X-rays, we may soon need to rethink whether to keep training students for PhDs in math. He goes on to raise the idea that the machine will make such basic discoveries that it will win Nobel Prizes in the future.

Some Results

The results of Thrun's project are so far secret, and he is likely to deny right now that it is even happening. But Polir found out one example of what has already been accomplished.

The Standard Model of particle physics uses quite a few elementary particles. See here for a discussion.

These 31 elementary particles are the most fundamental constituents of the universe. They are not, as far as we know, made up of other particles. The proton, for example, is not an elementary particle, because it is made up of three quarks, whereas the electron is an elementary particle, because it seems to have no internal structure.

\includegraphics[width=4in]{particles.png}

Although the Standard Model has worked impeccably in practice, it has higher complexity than physicists would expect from a bedrock theory of nature. The complexity comes from the large number of particles and the large number of constants that the model cannot predict.

A cluster of Thrun's dedicated machines has already found a new model that reduces the number of elementary particles from 31 to 7. The code name for the cluster and its model, in homage to AlphaGo, is AlphaO. The AlphaO model is claimed to still make all the same predictions as the standard one, but the reduction in undetermined constants could be immensely important.

Open Problems

Is Polir fooling? He may be and not be at the same time. If you had told us a year-plus ago that AlphaGo would wipe out the world's best Go players 60-0 in online semi-rapid games, we would have cried fool. The AlphaGo project is an example of a machine coming from nowhere to become the best at a game that people thought was beyond the ability of machines. Could it soon be the same with AlphaO? We will see.