{\Large Could We Have Felt Evidence For SDP $latex {\neq}&fg=000000$ P?}
Evaluating mathematical beliefs
Leonid Khachiyan in 1979 caused arguably the most sudden surprise to the West's popular scientific understanding since the successful launch of Sputnik in 1957. His ellipsoid method gave the first algorithm for linear programming whose polynomial running time was verified. Thanks largely to Narendra Karmarkar it has been superseded by faster interior-point methods, while older algorithms have since been noted to run in polynomial time, but the breakthrough and inspiration came from Khachiyan. Could something like it happen again? Today Ken and I want to ask whether a recent argument over beliefs about $latex {\mathsf{P=NP}}&fg=000000$? can be evaluated in light of this shock.
Khachiyan's ``rocket'' had actually left the hangar ten months before, in a January 1979 Doklady Akademii Nauk paper whose title translates into English as, ``A polynomial algorithm for linear programming.'' As recounted by Berkeley's Eugene Lawler, it was sighted at a May 1979 meeting Lawler attended in Oberwolfach, and after Péter Gács and László Lovász supplied proofs missing in the paper, it was pronounced correct at a conference in Montreal. The discovery was picked up in October by Science News, and then by the magazine Science. An allusion by the latter to the NP-complete Traveling Salesman problem was moved to the headline of a Nov. 4 story in England's Guardian newspaper, and reflected three days later in the New York Times's front-page screamer, ``A Soviet Discovery Rocks World of Mathematics.''
Our point is not to say that linear programming being in $latex {\mathsf{P}}&fg=000000$ was a surprise. To those who knew the reality behind the headlines, it wasn't. As Lawler relates, the great George Dantzig had tried to set Times reporter Malcolm Browne straight on this and points related to what LP's can and (ostensibly) cannot solve. The simplex algorithm already solved the vast majority of $latex {n\!\times\!n}&fg=000000$ LP cases in $latex {O(n^3)}&fg=000000$ expected time, so there was no feeling of practical intractability. Rather our point draws on something perhaps less widely known and appreciated: that Khachiyan's basic ellipsoid idea extends to solve a much wider class than linear programs, called semi-definite programs or SDP's. Thus it can be said to show that a complexity class $latex {\mathsf{SDP}}&fg=000000$ defined by reductions to these programs equals $latex {\mathsf{P}}&fg=000000$.
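To see how routine this all is today, here is a toy sketch in Python: the same little LP solved by a dual-simplex method and by an interior-point method via the scipy library. The instance itself is an arbitrary small example, not anything special.

\begin{verbatim}
# A toy LP solved two ways with scipy's linprog: dual simplex versus
# interior point.  Both finish instantly; the ellipsoid method is nowhere
# to be seen in practice, yet it was what certified polynomial time.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])              # maximize x + 2y (linprog minimizes)
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

for method in ["highs-ds", "highs-ipm"]:  # dual simplex, then interior point
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method=method)
    print(method, res.x, -res.fun)
\end{verbatim}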
We raise this with regard to the main technical argument in a recent post by Scott Aaronson titled ``The Scientific Case for $latex {\mathsf{P} \neq \mathsf{NP}}&fg=000000$.'' We wonder whether a similar argument might have seemed on the side of $latex {\mathsf{SDP \neq P}}&fg=000000$ in the years before Khachiyan. Even more speculatively, we wonder whether a kind of ``Reverse Oracle Result'' can be formulated to make any of this more concrete. But first let's review Scott's comments in the wider context of belief about $latex {\mathsf{P}}&fg=000000$ vs. $latex {\mathsf{NP}}&fg=000000$ and open problems that were resolved in surprising ways.
Scott's Comments and Our Positions
Essentially Scott gives a quite reasonable argument for $latex {\mathsf{P} \neq \mathsf{NP}}&fg=000000$, in his usual elegant and convincing style. Bill Gasarch expanded it in his economical staccato style. But. But mathematics is not something we argue about like who was the best hockey player of all time, or what is the right philosophy. The simple fact is that no one has proved that $latex {\mathsf{P} \neq \mathsf{NP}}&fg=000000$.
Our purpose with our recent post on a 13-GB certificate of unsatisfiability was not to start a discussion about $latex {\mathsf{P \neq NP}}&fg=000000$, but rather to witness that $latex {\mathsf{NP}}&fg=000000$-hardness is not so great a practical obstacle as we may think. The Gröbner basis algorithm is run all the time, despite the problem it solves being complete for exponential space. Notably it runs in singly-exponential time on a generic set of cases. If we can shift down a level, this is like having ``smoothed polynomial-time behavior'' of an algorithm for a $latex {\mathsf{PSPACE}}&fg=000000$-complete problem. Solving nontrivial cases of $latex {\mathsf{NP}}&fg=000000$-hard problems is addictive.
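For a taste of how routinely this worst-case monster gets run, here is a toy sketch in Python using the sympy package; the three polynomials are an arbitrary small system.

\begin{verbatim}
# Groebner basis of a tiny polynomial system: instant in practice,
# even though the general problem is complete for exponential space.
from sympy import groebner, symbols

x, y, z = symbols('x y z')
polys = [x**2 + y**2 + z**2 - 1,   # sphere
         x + y + z,                # plane
         x*y - z]                  # quadric
print(groebner(polys, x, y, z, order='lex'))
\end{verbatim}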
Almost the entire business model of this company is to solve $latex {\mathsf{NP}}&fg=000000$-hard optimization problems, using non-quantum computers. As is evident from examples on their website, they are not just gristing easy approximations for run-of-the-mill instances. To quote one of their blog entries (their emphasis): According to academic research on NP-hard problems, it’s impossible to guarantee that optimal solutions to difficult problems will be found in a reasonable time frame. However, with almost all real-life planning puzzles, you can get excellent results very quickly.
Hence our initial reaction was, who cares about discussions of whether $latex {\mathsf{P} \neq \mathsf{NP}}&fg=000000$ is true or not, aside from progress on this great question? A full solution would be wonderful, but just having small steps would be great, even a possible program for a solution would be welcome. So that was what we thought we should just say, nothing more, except noting our split answers to Bill G's reprised poll three years ago.
But. But Dick couldn't resist adding some more sections, while Ken made some effort to counter Scott's facts, counterfactually.
Dick Speaks
I feel compelled to explain why I am open-minded on this question perhaps more than anyone else. I have several reasons, which I feel are important reminders for all of us:
We'll address the full $latex {\mathsf{P}}&fg=000000$ versus $latex {\mathsf{NP}}&fg=000000$ question, and not situations where say the algorithm generating $latex {\mathsf{SAT}}&fg=000000$ instances is restricted to $latex {r(n)}&fg=000000$ random bits---a case in which we've noted that in the limit one can solve them all in something like time $latex {\mathsf{poly}(n)2^{r(n)}}&fg=000000$.
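Here is a hedged toy sketch in Python of that observation. It assumes, purely for illustration, a generator that plants a satisfying assignment determined by its $latex {r}&fg=000000$-bit seed; cycling through all $latex {2^{r}}&fg=000000$ seeds then recovers the assignment in roughly $latex {\mathsf{poly}(n)2^{r}}&fg=000000$ steps.

\begin{verbatim}
# Toy illustration only: a generator that uses r random bits and plants a
# satisfying assignment determined by its seed.  Enumerating all 2^r seeds
# and re-running the generator recovers the planted assignment.
import random

def generate(seed, n):
    """Return (clauses, planted assignment); the seed fixes everything."""
    rng = random.Random(seed)
    planted = [rng.randrange(2) for _ in range(n)]
    clauses = []
    for _ in range(4 * n):
        i, j, k = rng.sample(range(n), 3)
        # the first literal agrees with the planted assignment
        clauses.append([(i, planted[i]),
                        (j, rng.randrange(2)),
                        (k, rng.randrange(2))])
    return clauses, planted

def solve_by_seed_search(clauses, n, r):
    """Roughly poly(n) * 2^r time: try every seed until the instance matches."""
    for seed in range(2 ** r):
        cand_clauses, cand_assignment = generate(seed, n)
        if cand_clauses == clauses:
            return cand_assignment
    return None

n, r = 12, 8
instance, _ = generate(seed=173, n=n)
print(solve_by_seed_search(instance, n, r))
\end{verbatim}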
Bad Guesses
I have discussed guesses in mathematics many times before on this blog. One of the biggest issues in guessing wrong is that people do not take seriously the other possibility. Researchers tend not to work on showing $latex {\mathsf{P} = \mathsf{NP}}&fg=000000$ anymore. Research support does not go there, since we all ``know'' that it would be a waste of time, and there are other consequences to the field.
Here are some famous guesses that were essentially off by exponentials. For each I will list the time gap between the initial problem being raised and being solved.
$latex \displaystyle \pi(x) < li(x), &fg=000000$
for all reasonable size $latex {x}&fg=000000$. Here $latex {li(x)}&fg=000000$ is the logarithmic integral $latex \displaystyle \int_{0}^{x}\frac{dt}{\ln t} &fg=000000$
that is an asymptotic approximation to $latex {\pi(x)}&fg=000000$, the number of primes less than $latex {x}&fg=000000$. It was conjectured that this would always hold, and this was widely believed for over a century. Then John Littlewood proved that the lead changes between $latex {\pi(x)}&fg=000000$ and $latex {li(x)}&fg=000000$ infinitely often, although the first switch is upper-bounded by an immense number. Richard Guy wrote a wonderful article on what he called ``The Law of Small Numbers'': cases where evident phenomena hold for small numbers but eventually fail. Here is a table with other examples: \includegraphics[width=3in]{lawsmall.png}
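A quick numerical check in Python (using the sympy package) shows why the belief was so persuasive: for every $latex {x}&fg=000000$ we can feasibly try, $latex {\pi(x)}&fg=000000$ stays below $latex {li(x)}&fg=000000$.

\begin{verbatim}
# pi(x) versus li(x) for a few small x: the inequality looks ironclad here,
# though Littlewood showed it reverses infinitely often, far out of reach.
from sympy import primepi, li

for x in [10**3, 10**4, 10**5, 10**6]:
    print(x, int(primepi(x)), round(float(li(x)), 1))
\end{verbatim}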
By the way, the ``common clock'' on $latex {\mathsf{P} \neq \mathsf{NP}}&fg=000000$ is 43 years.
Weak Theorems and Galactic Algorithms
We do have the theorem that $latex {\mathsf{DTIME}(n)}&fg=000000$ is not equal to $latex {\mathsf{NTIME}(n)}&fg=000000$, which we have discussed before and which is particular to the multitape Turing machine model---make the tapes planes or trees and it goes away. We cannot even deduce from it that $latex {\mathsf{DTIME}(n^{2}) \neq \mathsf{NTIME}(n^{2})}&fg=000000$. That's pretty weak. Remember that $latex {\mathsf{P} \neq \mathsf{NP}}&fg=000000$ means that $latex {\mathsf{DTIME}(n^{1000})}&fg=000000$ does not contain $latex {\mathsf{NTIME}(n)}&fg=000000$. And more, of course.
The $latex {\mathsf{P}}&fg=000000$ versus $latex {\mathsf{NP}}&fg=000000$ statement still allows cross-cutting the generally-understood significance. That is:
To be sure, some evidence cited by Scott is really for an exponential lower bound on $latex {\mathsf{SAT}}&fg=000000$; we have discussed this before too. But what we are saying still cuts against the usual argument that ``many people have worked on looking for algorithms for $latex {\mathsf{SAT}}&fg=000000$.'' Yes many have looked for algorithms, but most were interested in ``real'' practical algorithms. For this kind of quest there is not much difference between $latex {n^{20}}&fg=000000$ and $latex {2^{n}}&fg=000000$.
Aram Harrow communicates to us a more-concrete version of this point, which also serves as a bridge to Ken's musings.
Aram's Bridging Thoughts
Quoting Aram, with light editing: ``One of the stronger reasons for $latex {\mathsf{P \neq NP}}&fg=000000$ is the Bayesian one---it is easier to find algorithms than lower bounds, so our failure to find a subexponential-time algorithm for $latex {\mathsf{3SAT}}&fg=000000$ speaks louder than our failure to find a super-linear lower bound for $latex {\mathsf{3SAT}}&fg=000000$. A related way of expressing this is that before $latex {\mathsf{NP}}&fg=000000$-completeness was understood, thousands of researchers in disparate fields were unwittingly all trying to put $latex {\mathsf{3SAT}}&fg=000000$ into $latex {\mathsf{P}}&fg=000000$.
But a counter-argument is that all of those seemingly independent researchers would always come up with algorithms that relied on a few kinds of structure---a lot is covered by just two kinds:
This paucity of algorithmic variety can be viewed in (at least) two ways:
On the latter, Terry Tao's recent breakthrough on the Navier-Stokes equations is an example of how much the same ideas keep recirculating, and how much more quickly progress can be made by cross-applying ideas rather than coming up with radically new ones. Going from Erwin Schrödinger's equation to Peter Shor's quantum factoring algorithm is a 60-minute lecture, but it took over 60 years (and a change in perspective coming from the computer revolution) to discover. Our lack of algorithms reveals only our lack of creativity, and it is arrogant to posit fundamental limits to mathematics just because we can't solve a problem. Either way, the central question isn't so much about $latex {\mathsf{NP}}&fg=000000$ but rather a ``throwback'' of a question now being asked about quantum computers:
Where does the power of deterministic algorithms come from?
A related question is the power of the Lasserre hierarchy. It has been shown to be effective for a large number of problems, but with a surprisingly small number of truly different techniques. I would love to know whether further work will increase or decrease the number of ways in which we know how to use it; that is, either by discovering new methods or by unifying apparently different methods.
Ken's Sixth World
The Lasserre hierarchy builds upon LP's and SDP's, and this brings us back to the intro. I (still Dick) remember many people in the 1970's trying to prove that certain linear/convex programming problems were $latex {\mathsf{NP}}&fg=000000$-hard, despite all our confidence in the simplex algorithm for daily use. This makes Ken wonder:
What if SDP's really were hard?
Russell Impagliazzo famously categorized five worlds that are consistent with current knowledge of complexity. Is there room to analyze any more, ones that are inconsistent now, but might have been meaningfully consistent had our field taken a different path?
All of Impagliazzo's worlds---including ones with $latex {\mathsf{P \neq NP}}&fg=000000$ and with $latex {\mathsf{P = NP}}&fg=000000$---have been instantiated via oracle results. All oracle results involve pretending that some ostensibly hard problem is easy. For instance, the world with $latex {\mathsf{P = NP}}&fg=000000$ involves pretending $latex {\mathsf{PSPACE}}&fg=000000$-complete problems are easy, while known ones for $latex {\mathsf{P \neq NP}}&fg=000000$ involve granting free access to a set coded so that the free access itself activates a diagonalization. What I (Ken) wonder is whether there is a sensible formal way to do ``Reverse Oracle Results,'' which pretend that some easy problem $latex {X}&fg=000000$ is hard.
Countering Factuals with Counterfactuals
One known way to get this effect is to narrow the definition of ``easy'' so that $latex {X}&fg=000000$ still has easy reductions to it from other problems $latex {W}&fg=000000$. For example, linear programming problems are P-complete under logspace (and even easier) reductions, as are problems of approximation by SDP's. But here I mean something more structural---a sense in which $latex {X}&fg=000000$ is the only route to solving a whole class of problems $latex {Y}&fg=000000$. Then we can segregate this entire class and pretend it all is hard. It might suffice to give ``easy'' reductions from $latex {X}&fg=000000$ to all these $latex {Y}&fg=000000$. In particular, a lot of $latex {Y}&fg=000000$'s are solved via the interior-point paradigm for (LP's and) SDP's.
Scott replied to my comment about a possible $latex {n^{20}}&fg=000000$ algorithm for $latex {\mathsf{SAT}}&fg=000000$ in his post by referring to his earlier comment that:
Since it would be inelegant and unnatural for the class $latex {\mathsf{P}}&fg=000000$ to be ``severed into two'' in this way, I’d say the much likelier possibility is simply that $latex {\mathsf{P \neq NP}}&fg=000000$.
The point is, perhaps $latex {\mathsf{P}}&fg=000000$ is already manifestly ``severed into two'' along Khachiyan's and the interior-point fault lines. In particular, we wonder how the following consequences would have looked as conditional results had they been proved in the late 1970's:
Theorem 1 If $latex {\mathsf{SDP = P}}&fg=000000$, then counting matchings in a certain class of planar graphs is deterministic polynomial-time computable modulo $latex {7}&fg=000000$, or modulo any Mersenne prime, even though it is $latex {\mathsf{NP}}&fg=000000$-hard modulo $latex {3}&fg=000000$ or $latex {5}&fg=000000$.
Theorem 2 If $latex {\mathsf{SDP = P}}&fg=000000$, then $latex {\mathsf{MAX CUT}}&fg=000000$ is polynomial-time approximable within $latex {0.87856\dots}&fg=000000$, even though it is $latex {\mathsf{NP}}&fg=000000$-hard to approximate within $latex {0.941\dots}&fg=000000$.
It's not hard to imagine some student in Mike Paterson's group at Warwick in the 1970's coming up with the former theorem. Then we might picture a further argument like Scott's going along lines like this: ``What do Mersenne primes have to do with matchings and convex programming really? Surely counting must be equally hard modulo any odd prime---after all there's no exception for Mersenne in modular circuit lower bounds---so $latex {\mathsf{SDP \neq P}}&fg=000000$ must be an `invisible fence' around this kind of ridiculousness.'' On the second problem, why should the difference between $latex {0.878}&fg=000000$ and $latex {0.9412}&fg=000000$ matter to such a simple problem as $latex {\mathsf{MAX CUT}}&fg=000000$?
Of course in the light of knowledge we understand how these two famous theorems work. On the latter the Unique Games Conjecture already helps explain how $latex {0.87856\dots}&fg=000000$ may be special. But the present exercise is about how we reason when we don't (yet) have the light of knowledge.
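To make the second theorem concrete, here is a toy sketch in Python of the Goemans-Williamson recipe behind the $latex {0.87856\dots}&fg=000000$ guarantee, using the cvxpy package on a small example graph: solve the SDP relaxation, then round with a random hyperplane.

\begin{verbatim}
# Goemans-Williamson sketch: SDP relaxation plus random-hyperplane rounding
# gives an expected 0.87856...-approximation to MAX CUT.
import cvxpy as cp
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a small example graph
n = 4

# Relaxation: X positive semidefinite with unit diagonal,
# maximizing the sum over edges of (1 - X_ij)/2.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0] + [X[i, i] == 1 for i in range(n)]
prob = cp.Problem(cp.Maximize(sum((1 - X[i, j]) / 2 for i, j in edges)),
                  constraints)
prob.solve()

# Recover vectors v_i with v_i . v_j ~= X_ij, then cut by a random hyperplane.
w, V = np.linalg.eigh(X.value)
vecs = V * np.sqrt(np.clip(w, 0, None))
sides = np.sign(vecs @ np.random.randn(n))
cut = sum(1 for i, j in edges if sides[i] != sides[j])
print("SDP bound:", prob.value, "  rounded cut:", cut)
\end{verbatim}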
Open Problems
Can we make some formal sense of a world where Khachiyan's breakthrough never happens?