{\huge A World Without Randomness}

What if there were no coin flips? \vspace{.5in}

Leonid Levin has originated many great ideas in our field. Indeed he shares with Stephen Cook the credit for the idea of $latex {\mathsf{NP}}&fg=000000$-completeness as a universality property shared by SAT and several other problems. He also formulated several variations on the measures of the complexity of finite strings that were introduced by Andrey Kolmogorov, Gregory Chaitin, and Ray Solomonoff. This extended to the notion of the universal distribution named for Solomonoff and him, by which the prior probability of observing a string $latex {x}&fg=000000$ is held to be inverse-exponential in the complexity rather than the length of $latex {x}&fg=000000$. Levin's measures take into account the time needed to generate $latex {x}&fg=000000$, and have been used to argue that our world may be the output of a universal deterministic program, and hence devoid of ``true'' randomness.

Today Ken and I want to use Levin's ideas to talk about what the world would be like if there were no randomness, no coin flipping, not even with slightly biased flips.

Persi Diaconis has famously shown that actual coin flipping by people has bias. The bias is small, but it is non-zero. He has said, ``coin tossing is fair to two decimals but not to three.'' That is, typical flips show biases such as .495 or .503. This kind of bias, however, still allows one to extract truly random bits efficiently. We are talking about something more extreme.
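Before going to that extreme, here is a minimal sketch in Python of the classic von Neumann trick for turning such a slightly biased coin into perfectly fair bits; the function names and the bias value are ours, chosen only for illustration.

    import random

    def biased_flip(p=0.503):
        # Stand-in for a physical coin: True ("heads") with probability p.
        return random.random() < p

    def unbiased_bit(flip=biased_flip):
        # Von Neumann's trick: flip twice and keep only unequal pairs.
        # Pr[heads, tails] = p(1-p) = Pr[tails, heads], so the kept bit
        # is exactly fair no matter what the fixed bias p is.
        while True:
            a, b = flip(), flip()
            if a != b:
                return int(a)

The point is that any fixed bias $latex {p}&fg=000000$ cancels out, at the price of discarding the equal pairs.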

Heads

One day when my daughter Jennifer Lipton-O'Connor, whom I've mentioned for a unique diagonal argument, was quite young, I thought I would talk to her about randomness. I recall we were in our basement area, and I started to explain how quarters, that is American twenty-five cent pieces, could be used to create randomness. She listened a bit and said, ``But they always come up heads.'' I immediately said no and found a quarter and started to flip it. The first was indeed heads, then the next $latex {\dots}&fg=000000$ Finally, after five straight heads in a row, Jen said, ``see I told you,'' and walked away.

I sat there thinking that this was somehow a failed lesson. I also thought of multiverse ideas. I wondered why I happened to be in the universe where the dad---me---came out looking foolish.

The play ``Rosencrantz and Guildenstern are Dead'' opens with 157 heads in a row. By multiverse theory there exists a world where that really happened. Playwright Tom Stoppard could have been more subtle and had the coin flips trace out the sequence of prime numbers, in the manner of the novel Contact, or the binary expansion of $latex {\pi}&fg=000000$. Since the latter are low-complexity deterministic sequences, by Levin's measures they are scarcely different from all-heads as outcomes. In an unrestricted multiverse, there are worlds where those occur too, for as long as is relevant. Any of those worlds could be our world.

So let's look at a world where there are potentially no random coins at all.

SAT Is Not Too Hard

In this world we discover that SAT is relatively easy to solve. Consider our old friends Alice and Bob. Bob's job is to create an algorithm that generates hard instances of SAT. More formally, Bob is given the input $latex {1^{n}}&fg=000000$, and each time Alice presses a button, he must create a SAT problem with $latex {n}&fg=000000$ variables and a set $latex {C}&fg=000000$ of $latex {O(n)}&fg=000000$ clauses, together with a satisfying assignment $latex {x}&fg=000000$. He will give Alice $latex {C}&fg=000000$, and her job is to find a satisfying assignment---it need not be $latex {x}&fg=000000$, but it must satisfy all the clauses. Bob's generator algorithm, let's denote it by $latex {G}&fg=000000$, must run in time polynomial in $latex {n}&fg=000000$ at each button press. Both Alice and Bob may also increment $latex {n}&fg=000000$ to $latex {n+1}&fg=000000$ at any step.
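For concreteness, here is a toy Python sketch of what such a generator $latex {G}&fg=000000$ might look like; the hashing rule and the clause pattern are our own inventions for illustration, not anything Bob is required to use.

    import hashlib

    def bob_generator(n):
        # A toy deterministic generator in the spirit of Bob's G.
        # Each button press yields (C, x): 3n clauses over variables 1..n,
        # with -v denoting "not v", all satisfied by the planted assignment x.
        counter = 0
        while True:
            counter += 1
            seed = hashlib.sha256(f"{n}:{counter}".encode()).digest()

            def bit(i):
                return (seed[(i // 8) % len(seed)] >> (i % 8)) & 1

            x = [bit(i) for i in range(n)]                # planted assignment
            clauses = []
            for c in range(3 * n):                        # O(n) clauses
                u, v, w = c % n, (7 * c + 1) % n, (13 * c + 2) % n
                first = (u + 1) if x[u] else -(u + 1)     # agrees with x
                second = (v + 1) if bit(n + c) else -(v + 1)
                third = (w + 1) if bit(n + 2 * c) else -(w + 1)
                clauses.append((first, second, third))
            yield clauses, x

Each emitted formula is satisfiable by construction, since the first literal of every clause agrees with the planted $latex {x}&fg=000000$.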

The question is: Can Alice solve these generated problems in polynomial time? The answer of course is still open. But Alice can solve them in time that is slightly super-polynomial.

Here is how she does it. She uses Levin's idea. Alice does not look at the clauses $latex {C}&fg=000000$ directly, but tries to find the generator that Bob used. Each generator is a deterministic Turing machine. It is somewhat unusual in that it keeps outputting instances $latex {(C,x)}&fg=000000$ rather than halting, and can be ``kicked'' by changing the value of $latex {n}&fg=000000$, but we can enumerate them as $latex {G_1,G_2,G_3,\dots}&fg=000000$ in a way that any $latex {G_i}&fg=000000$ can be loaded and simulated for a given length of time. Bob's generator is one of the $latex {G_i}&fg=000000$.

Roughly speaking, what Alice does is run them all, in a staggered fashion. The $latex {i^{th}}&fg=000000$ one is run a $latex {2^{-i}}&fg=000000$ portion of the time whenever Alice is seeking an answer. Eventually Alice will discover which generator Bob picked. More precisely, she finds the least $latex {j}&fg=000000$ such that $latex {G_j}&fg=000000$ outputs the same formulas $latex {C}&fg=000000$ as Bob's generator, and also outputs satisfying assignments for those formulas. The latter need not be the same as Bob's $latex {x}&fg=000000$-es, which Alice never sees.
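In pseudocode terms, the staggering can be organized in phases, as in this small sketch---our own framing of the standard Levin-style interleaving:

    def staggered_schedule():
        # Phase k advances machines 1..k, giving machine i a burst of 2^(k-i)
        # steps; over all phases machine i gets roughly a 2^-i share of the
        # total simulation time, yet every machine eventually runs unboundedly.
        k = 1
        while True:
            for i in range(1, k + 1):
                yield i, 2 ** (k - i)
            k += 1

So every $latex {G_i}&fg=000000$ in the enumeration eventually receives an unbounded amount of running time, while the overhead relative to any one machine is only a constant factor.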

Deterministic Adaptation

Alice's algorithm is fixed, and its only variable factor is the interaction with Bob's $latex {G}&fg=000000$. Allowing Bob to increment $latex {n}&fg=000000$ wards off the triviality of Alice having time to enumerate solutions to all $latex {n}&fg=000000$-variable formulas. Since Alice's algorithm is fixed, and must be capable of simulating any $latex {G_i}&fg=000000$, it cannot be hard-wired to run within any particular polynomial time bound.

It can, however, be coded to abide by any explicitly given constructible time bound $latex {t(n)}&fg=000000$ that is super-polynomial, even if only slightly, such as $latex {n^{\log n}}&fg=000000$ or $latex {n^{\log\log n}}&fg=000000$. Since each $latex {G_i}&fg=000000$ is run a constant portion of the time and $latex {t(n)}&fg=000000$ is super-polynomial, whenever Alice starts including a new $latex {G_i}&fg=000000$ in her simulation, she can, if needed, eventually ``catch up'' to where Bob is after some number $latex {s}&fg=000000$ of button presses, since Bob's $latex {G_j}&fg=000000$ runs in polynomial time. Then she will either discover a mistake made by $latex {G_i}&fg=000000$, and cross it off her list, or $latex {G_i}&fg=000000$ will continue forever to give her solutions to the formulas posed by Bob.
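In rough symbols, suppressing constants: after $latex {s}&fg=000000$ button presses, Bob's $latex {G_j}&fg=000000$ has used at most $latex {s \cdot p(n)}&fg=000000$ steps for some polynomial $latex {p}&fg=000000$, while Alice has devoted about $latex {2^{-j} \cdot s \cdot t(n)}&fg=000000$ steps to re-simulating it from the start. Since $latex {t(n)}&fg=000000$ is super-polynomial, $latex {2^{-j} t(n) \geq p(n)}&fg=000000$ once $latex {n}&fg=000000$ is large enough, so the simulation does catch up.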

When Alice arrives at the correct $latex {G_j}&fg=000000$, her running time to solve formulas posed by Bob also settles down to a multiple of the same polynomial. This is because we can program Alice to push the button for a new formula immediately when she gets a satisfying assignment, without expending $latex {t(n)}&fg=000000$ steps on other $latex {G_i}&fg=000000$'s.

To the outside world, it looks like Alice becomes a smart solver. In reality she is ``cheating'' by having stumbled upon the right helper to give her cribbed answers. The answers look like they are coming naturally, but they really only exploit Bob's being a fixed target. Bob is not allowed randomness with which to change the fixed pattern that Alice eventually emulates, nor may he adapt to Alice's actions beyond her formula requests and increments of $latex {n}&fg=000000$.

Jürgen Schmidhuber, whom Ken just met at the CIG 2013 conference in Niagara Falls, Ontario, has drawn practical machine-learning applications from Levin's universal search. It also underlies his proposal that cosmological processes are pseudorandom. He has itemized several more predictions and possible telltales of a highly deterministic universe. But first let us look at a world that is easier for complexity theorists to imagine, which I (Dick) have said I believe in.

A Similar World

Levin's original paper on this idea showed that if $latex {\mathsf{P = NP}}&fg=000000$ is true---even if we don't know a fast algorithm for SAT---then code analogous to Alice becomes a polynomial-time algorithm that gets SAT right on all but finitely many instances. This time a fixed formula $latex {C}&fg=000000$ of size $latex {n}&fg=000000$ is given to Alice, rather than formulas from repeated requests to Bob. Alice has the same explicit time budget $latex {t(n)}&fg=000000$ and staggers timesteps among $latex {G_1,\dots,G_r}&fg=000000$ (say $latex {r = \log_2 t(n)}&fg=000000$), rejecting unless she sees and verifies a satisfying assignment during her run. If some $latex {G_j}&fg=000000$ solves SAT in some polynomial time $latex {q(n)}&fg=000000$, then eventually Alice will see its answer at a point where she has spent at most $latex {2^j q(n)}&fg=000000$ steps on other machines, and once she verifies the answer she stops.
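Here is a minimal sketch of that search in Python; simulate and verify are assumed interfaces (resuming the $latex {i}&fg=000000$-th machine of a fixed enumeration for a number of steps, and checking a candidate assignment), not standard library calls.

    from math import log2

    def alice_solve(formula, n, simulate, verify):
        # A sketch of Levin's universal search under assumed interfaces:
        # simulate(i, formula, steps) resumes the i-th machine for `steps`
        # steps and returns a candidate assignment or None;
        # verify(formula, a) checks a candidate in polynomial time.
        t = int(n ** log2(n))                  # super-polynomial budget t(n)
        spent, k = 0, 1
        while spent < t:
            for i in range(1, k + 1):          # phase k: machines 1..k
                steps = 2 ** (k - i)           # machine i's 2^-i share
                candidate = simulate(i, formula, steps)
                spent += steps
                if candidate is not None and verify(formula, candidate):
                    return candidate           # verified satisfying assignment
            k += 1
        return None                            # reject within the budget t(n)

The constant factor $latex {2^j}&fg=000000$ is the whole price of not knowing the right index $latex {j}&fg=000000$ in advance.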

Thus Alice settles down to $latex {O(q(n))}&fg=000000$ runtime and eventually gives the right answer on every successive formula. In particular, she eventually starts succeeding on every formula in any stream of instances that are thrown at her.

This gives our sense in which a world without randomness is like a world where $latex {\mathsf{P = NP}}&fg=000000$. In both cases we know an algorithm $latex {\cal A}&fg=000000$ that over a long time is observed to become smart at SAT. In one case it's because of a limitation on the stream of instances, in the other it's because the power to solve SAT in general is really out there, but we observers can't tell the difference.

The analogy also makes it less farfetched to ask, is our world Bob? Instead of asking metaphysical questions about determinism and mechanism, all we are asking is:

Is it true in our world that every source of SAT instances is eventually 100% solved by $latex {\cal A}&fg=000000$?

Well we can also flip this around to say what this means for the idea of SAT being hard.

What Does This Mean?

Of course most of us believe there are truly-random coin flips, and most believe $latex {\mathsf{P \neq NP}}&fg=000000$. Granting these, what does this observation really mean? I think that it says something about how we find hard examples. The standard method for generating hard examples of SAT is to use a source of randomness, and use that to select the instance of SAT.

Thus a solver of SAT, it seems, must inherently be an extractor of randomness. Not, perhaps, in the formal sense of a randomness extractor as we mentioned before, but some kind of extractor nevertheless. The solver must be able to find the solution, which seemingly must contain lots of entropy. I cannot make this precise, but it seems right.

The instance-generation problem has of course been addressed by Levin in numerous papers, in the larger context of distributional complexity. We note especially his 1990 FOCS paper with Russell Impagliazzo, whose title announces that there are ``no better ways to generate hard NP instances than picking uniformly at random.'' The details require much more attention than we are giving here, since there are even simpler senses in which randomly-generated instances of SAT are almost trivially easy to solve.

Open Problems

How could one possibly tell one is in a world with no randomness? Well there is almost a way to prove it: flip a coin a million times---perhaps quantumly or noisily---and hit upon a much shorter $latex {x}&fg=000000$ that generates the output. But the chance of doing this even if it's true would be infinitesimal. This is why we wonder whether observing success at SAT against instance generators is more plausible, along with Schmidhuber's other telltales.
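In sketch form the test is a brute-force search over short descriptions; here run is an assumed step-capped interpreter for some fixed universal machine, not a real API:

    def find_short_description(output_bits, run, max_len):
        # Brute force over all binary programs shorter than max_len, looking
        # for one whose step-capped run reproduces the observed flips. The
        # 2^max_len size of the search space is exactly the obstacle noted
        # above: witnessing success, even if a short program exists, is hopeless.
        for length in range(1, max_len):
            for code in range(2 ** length):
                program = format(code, "0{}b".format(length))
                if run(program, step_cap=10 ** 6) == output_bits:
                    return program            # a much shorter generator
        return None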

Short of proving it, what more can we say about a world with no randomness?