{\huge Barriers to P=NP ``Proofs''}
Why TSP should not be mappable to a small linear program \vspace{.3in}
Sebastian Pokutta just gave a beautiful talk on proving lower bounds on the size of linear programs. He is a recent addition to the Georgia Tech faculty, but alas not in our college of computing. He is a faculty member of the ISyE Department at Tech, which is perhaps the best industrial engineering department in the world, and he certainly adds to their strength.
Today I want to talk about results that he presented and their connection to attempts to prove that the Traveling Salesman Problem (TSP) can be solved in polynomial time. TSP is one of the most famous of all the $latex {\mathsf{NP}}&fg=000000$-complete problems, famous enough to be the subject of a movie. The attempts on it may be more fundamental still. Recall that we once presented $latex {\mathsf{P}}&fg=000000$ versus $latex {\mathsf{NP}}&fg=000000$ itself not in terms of Turing machines or nondeterminism, but as an issue between linear and integer programs, that is between LP's and IP's.
The Start Of The Game
The idea is to express the problem as a linear programming problem, then invoke the wonderful result that any linear programming problem can be solved in time polynomial in its size. Put another way: get an LP for the TSP, and then let the LP solver do all the heavy lifting---sorry, too much jargon?
This is a plan, a great plan, but with one small difficulty. How do we express the TSP as an LP that is not too large? For decades people have tried to find such LP's, and all have failed---at least so far.
This and other attempts have led to a kind of ``game'' between those who try to solve TSP this way and those who try to find flaws in their proofs. Let's call the players the makers and the breakers. The makers try to find the magic encoding of TSP into an LP, while the breakers try to show that the encoding is incorrect.
This is an interesting game, indeed a game that I hope some day one side wins forever. To be sure, this requires that $latex {\mathsf{P}}&fg=000000$ versus $latex {\mathsf{NP}}&fg=000000$ be resolved. In one direction, if we can efficiently encode TSP instances via LP's then $latex {\mathsf{P}=\mathsf{NP}}&fg=000000$. The other direction uses the fact that solving LP's is known to be a complete problem for $latex {\mathsf{P}}&fg=000000$ even under very restricted reductions. These reductions would be efficient, but they need not preserve the ``character'' of the original formulation of the TSP instance.
Exactly what kind of ``character'' an LP formulation of a TSP instance may have is the crux of how the game is played. Not just how big it is, but how it encodes features of the instance, and how it relates to integer programs for TSP, are the main issues.
History
The game started in earnest when Ted Swart in the mid-1980's claimed to have discovered a way to express a special case of the TSP as a polynomial-sized LP---he did the Hamiltonian Cycle problem. This really is the start: his attempt is listed \#1 on Gerhard Woeginger's $latex {\mathsf{P}}&fg=000000$ versus $latex {\mathsf{NP}}&fg=000000$ claims page. Note that there are sub-classes of instances on which the answers to TSP and Hamiltonian Cycle coincide and remain $latex {\mathsf{NP}}&fg=000000$-complete. As shown in a 1981 paper by Alon Itai, Christos Papadimitriou, and Jayme Luiz Szwarcfiter, these can be vertex-induced subgraphs of the rectangular grid, which are planar graphs. Then there is a TSP tour of total cost $latex {n}&fg=000000$ (using Euclidean length) if and only if the graph has a Hamiltonian cycle.
Swart's idea was to encode Hamiltonian Cycle/TSP as an integer program in a way that its relaxation to a linear program had the same answers. The encoding was direct: it defined what is called the standard TSP polytope $latex {P_0}&fg=000000$ for a given instance. $latex {P_0}&fg=000000$ lives in a space that has a co-ordinate $latex {x_{i,j}}&fg=000000$ for every pair of nodes (cities) $latex {i}&fg=000000$ and $latex {j}&fg=000000$. Points in this space are assignments of real numbers to every (possible) edge. $latex {P_0}&fg=000000$ is defined to be the convex hull of those points that assign $latex {1}&fg=000000$ to $latex {n}&fg=000000$ edges forming a single cycle through all the cities, and $latex {0}&fg=000000$ to the other $latex {\binom{n}{2} - n}&fg=000000$ edges.
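For a concrete small case (our illustration): with $latex {n=4}&fg=000000$ there are exactly three Hamiltonian cycles on the four cities, so $latex {P_0}&fg=000000$ is the convex hull of just three 0-1 points in $latex {\mathbb{R}^6}&fg=000000$. The tour visiting the cities in the order $latex {1,2,3,4}&fg=000000$, for instance, is the point with $latex {x_{1,2}=x_{2,3}=x_{3,4}=x_{1,4}=1}&fg=000000$ and $latex {x_{1,3}=x_{2,4}=0}&fg=000000$.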
In an IP we can define the desired extreme points directly by constraints saying $latex {x_{i,j} \in \{0,1\}}&fg=000000$, but in an LP relaxation we can only say $latex {x_{i,j} \geq 0}&fg=000000$ and $latex {x_{i,j} \leq 1}&fg=000000$. The other constraints saying $latex {\sum_j x_{i,j} = 2}&fg=000000$ for each $latex {i}&fg=000000$, and preventing a union of disjoint cycles, can be handled already by LP, the latter most efficiently with some extra variables. The problem is that the resulting relaxed LP polytope may have other extreme points that are not integral, and hence do not represent tours.
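To make the standard relaxation concrete, here is a minimal sketch in Python (ours, not Swart's encoding and not from Sebastian's talk) of the degree constraints, the bounds, and the subtour-elimination constraints, written out directly by enumerating subsets; this is fine for toy sizes, and the compact version with extra variables is omitted.

\begin{verbatim}
# Minimal sketch of the relaxed LP for small n (illustrative only).
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

n = 6
rng = np.random.default_rng(0)
pts = rng.random((n, 2))                   # random cities in the unit square
edges = list(combinations(range(n), 2))    # one variable x_{i,j} per pair
cost = np.array([np.linalg.norm(pts[i] - pts[j]) for i, j in edges])

# Degree constraints: each city meets exactly two tour edges.
A_eq = np.zeros((n, len(edges)))
for k, (i, j) in enumerate(edges):
    A_eq[i, k] = A_eq[j, k] = 1.0
b_eq = np.full(n, 2.0)

# Subtour elimination: for every proper subset S with |S| >= 2,
# the edges inside S carry total weight at most |S| - 1.
A_ub, b_ub = [], []
for r in range(2, n):
    for S in combinations(range(n), r):
        A_ub.append([1.0 if (i in S and j in S) else 0.0 for i, j in edges])
        b_ub.append(r - 1)

res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(edges))
print("LP value:", res.fun)
print("any fractional coordinates?",
      any(1e-6 < v < 1 - 1e-6 for v in res.x))
\end{verbatim}

On such tiny instances the optimum frequently comes out integral; the paragraph above is exactly the warning that this cannot be guaranteed in general, because the relaxed polytope has fractional extreme points.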
Swart thought he had found an LP encoding and objective function that evaded the problem, but his initial method failed: errors were found by breakers. Then he repaired his encoding, and more errors were found. The game with Swart's methods looked like it could go on forever, but finally Mihalis Yannakakis in 1988 stopped it with a famous paper, Expressing combinatorial optimization problems by linear programs.
Mihalis showed that symmetric LPs cannot efficiently encode the TSP problem. Symmetric means that if you permute the labels $latex {i,j,\dots}&fg=000000$ of the cities, then any other variables can be permuted so that the LP itself remains the same. In particular, no one city has a distinguished role. Since Swart's attempts were all symmetric, that stopped him. Mihalis gave the breakers a powerful tool: they no longer had to look at the details of a claimed polynomial-size LP encoding---if the encoding had a certain symmetry, then it had to be wrong. Pretty neat.
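To make the condition concrete (our paraphrase): a relabeling $latex {\sigma}&fg=000000$ of the cities sends $latex {x_{i,j} \mapsto x_{\sigma(i),\sigma(j)}}&fg=000000$, and the program is symmetric if some accompanying permutation of the auxiliary variables maps its constraint set onto itself for every such $latex {\sigma}&fg=000000$. The degree constraints $latex {\sum_j x_{i,j} = 2}&fg=000000$, for example, are simply shuffled among themselves by any relabeling.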
The Game Today
Mihalis left the makers a possible approach. If they could avoid the symmetry condition then his breaker theorem would no longer apply. He conjectured that the symmetry condition could be lifted, and he was right. Enter Sebastian and his colleagues: Gábor Braun, Samuel Fiorini, Serge Massar, David Steurer, Hans Raj Tiwary, and Ronald de Wolf. In a series of papers they gave the breakers a much more powerful set of tools. Besides removing the symmetry condition, they extended and improved the lower bounds; the results below draw on their two main papers.
How Did I Get Here?
Before I start talking about LP's and TSP's and the like, I would like to relate the cool answer to a question Prasad Tetali posed after Sebastian's talk, when the three of us were talking alone. Sebastian answered---you can guess the question---that he did not set out to work on the TSP problem. Rather he was interested in communication complexity, and worked on that for quite a while. Then he and his co-authors noticed that there were neat connections to the classic paper of Mihalis---connections that allowed them to improve his results. The rest is history, ongoing.
Projections
The key to the interest in the maker-breaker game is that the makers are not obligated to directly encode the TSP. They can use any variables that they like to encode the TSP: they do not have to use the obvious ones $latex {x_{i,j}}&fg=000000$. This freedom, carried to the extreme, would make the breakers' job essentially impossible. Like any game---from chess to football---one needs rules to have a ``fair'' game---the completely unstructured game is as hard as $latex {\mathsf{P}}&fg=000000$ versus $latex {\mathsf{NP}}&fg=000000$ itself. So the key notion of ``character'' of a TSP instance is what the game players call extended formulations.
Suppose that $latex {P}&fg=000000$ is a polytope that represents some combinatorial problem, such as $latex {P_0}&fg=000000$ for TSP. The maker need not create an efficient encoding of $latex {P}&fg=000000$ itself, but is allowed to work in a higher dimension and create an efficient encoding of another polytope $latex {Q}&fg=000000$. The restriction is that there must be a linear projection $latex {\pi}&fg=000000$ so that
$latex \displaystyle \pi(Q) = P. &fg=000000$
Essentially the polytope $latex {P}&fg=000000$ is a shadow of the polytope $latex {Q}&fg=000000$. Note that in the following picture from Sebastian's talk the shadow polytope has more faces than the higher-dimensional one: eight vs. six.

\includegraphics[width=3in]{EFproject.png}
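A standard small example of the same effect, not from the slides: the cross-polytope $latex {P = \{x \in \mathbb{R}^n : \sum_i |x_i| \le 1\}}&fg=000000$ needs all $latex {2^n}&fg=000000$ inequalities $latex {s^{\top}x \le 1}&fg=000000$, one for each sign vector $latex {s \in \{-1,1\}^n}&fg=000000$, when described in its own space. Yet it is the shadow under $latex {\pi(x,y) = x}&fg=000000$ of the polytope $latex {Q = \{(x,y) : -y_i \le x_i \le y_i, \; \sum_i y_i \le 1\}}&fg=000000$, which has only $latex {2n+1}&fg=000000$ constraints.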
One thing I feel is under-emphasized in these papers is the attractiveness of extended formulations. The size of the smallest polytope that projects to a given one is a natural quantity to study, and it has quite a pretty theory. The question has beautiful connections to other parts of theory, such as communication complexity. And for those trying to solve TSP: if it did have a small extended formulation, one could use it to solve the TSP fast. If the polytope $latex {Q}&fg=000000$ is simpler than $latex {P}&fg=000000$, working in the higher-dimensional space is just fine. Projections are good for solving. This is why a maker would seek to find small extended formulations---and why it is significant that small ones do not exist.
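To spell out the last point: since $latex {\pi}&fg=000000$ is linear and $latex {\pi(Q) = P_0}&fg=000000$, for any cost vector $latex {c}&fg=000000$ we have

$latex \displaystyle \min_{x \in P_0} c^{\top}x \;=\; \min_{z \in Q} c^{\top}\pi(z), &fg=000000$

and the right-hand side is a linear program whose number of constraints is that of $latex {Q}&fg=000000$, which would be polynomial if the maker's dream came true.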
Main Results
The question is now quite precise: Can we show that any $latex {Q}&fg=000000$ that projects to the polytope $latex {P}&fg=000000$ must be very large? That is, must it require an exponential number of constraints? If this is true, then the makers are in trouble. At least they cannot use higher-dimensional variables and clever constraints to encode the TSP. There are still possible loop-holes, but the game has become much harder for them.
The results that are proved are of the form: any polytope $latex {Q}&fg=000000$ that projects to---i.e., is an extended formulation of---the polytope $latex {P}&fg=000000$ must have size at least $latex {\dots}&fg=000000$. Here is the main example specific to TSP:
Theorem 1 Any extended formulation of the TSP polytope $latex {P_0}&fg=000000$ must have at least $latex {2^{\Omega(\sqrt{n})}}&fg=000000$ constraints.
This applies even when the LP for $latex {Q}&fg=000000$ itself is not symmetric. One thing this implies is that the force of Mihalis' original insight is not just against symmetric programs, but against all programs that can project to a certain symmetric target.
Note also that as with Mihalis' original barrier result, this takes out of the maker's hands the argument that all the magic can be put into the choice of objective function. The barriers apply already just at the step of defining the feasible solution space. We don't need to examine the objective function in any claimed program whose constraints are already ruled out.
Barriers, Dodgeball, and Alley-Oop
Barriers do offer the maker some guidance in finding approaches that might work. But what Ken and I wish to say is that the guidance should be more than just playing ``dodgeball'' around the barriers. The reason is that the breakers may have set up not only the ground-level barriers but also the potential of an ``alley-oop'' play. This means taking the ball higher than your defense considered and dunking it down on top of you.
In the TSP case, suppose you encode TSP via a basic polytope $latex {P_1}&fg=000000$ different from $latex {P_0}&fg=000000$. Maybe the basic variables aren't edges between cities after all; maybe they're compound; maybe they're something else. You might think you've dodged the theorem. But first, if your constraints defining $latex {P_1}&fg=000000$ are still symmetric, they may fall under a slight modification of the theorem.
OK, let's say your $latex {P_1}&fg=000000$ assigns distinguished roles to one or more of the cities. Now you've got a different basic $latex {P_1}&fg=000000$ and have avoided symmetry. Since projections and other easy inversions of extensions are your friends, you create a bigger LP $latex {Q}&fg=000000$, but not too big---it's still polynomial-sized. Are you safe now?
Here's the problem: Because $latex {Q}&fg=000000$ is not too big, there's a lot of room to build a $latex {Q'}&fg=000000$ with a higher set of constraints but still under a bound like $latex {2^{\sqrt{n}}}&fg=000000$. Because you say your $latex {Q}&fg=000000$ accurately encodes the TSP your way, there's a lot of leverage already for the higher $latex {Q'}&fg=000000$ to encode TSP in some standard way, and so project to $latex {P_0}&fg=000000$. Indeed $latex {Q'}&fg=000000$ might use extra variables to achieve the kinds of symmetries that $latex {P_1}&fg=000000$ sought to avoid, while preserving the property of all (relevant) extrema being integral---on taking your statement for $latex {Q}&fg=000000$ as a basis. But Sebastian's theorem rules out $latex {Q'}&fg=000000$, and so your attempt $latex {Q}&fg=000000$ may get dunked after all. That's the alley-oop.
In circuit lower bounds, the alley-oop can be extending a proposed argument so that it naturalizes, perhaps adding the ``bait-and-switch'' trick we noted here. In other cases it can be taking a field extension, or using padding and translation to show implausible larger consequences.
The moral we draw is that makers facing such barriers shouldn't play dodgeball. Rather than squint at the barriers looking for loopholes, they should erect a counter-principle that is by nature tall enough to prevent an alley-oop. The theorems in Sebastian's joint papers have clear ideas, which they extend to other computational problems besides TSP. Thus any maker wishing to advance another round should not expect to find a proof that works ad-hoc via a loophole, but should seek one that stands on a clear new idea. Preferably the idea should likewise extend to other problems directly, rather than by appeal to complexity-theoretic reductions.
A Personal Connection
The way that Sebastian proves his lower bounds is to show that the size of any $latex {Q}&fg=000000$ that projects to $latex {P}&fg=000000$ is controlled by the complexity of playing certain communication protocols. I must add a personal note about this, since (nondeterministic) communication complexity plays a major role here. Back in 1981, Bob Sedgewick and I introduced this notion in a paper entitled Lower Bounds for VLSI. We needed this type of protocol to prove new lower bounds on the size of VLSI circuits.
Our results also were a kind of barrier. Our lower bound proofs actually didn't care if the VLSI connections themselves could be nondeterministic. Of course there is no practical notion of ``nondeterministic VLSI,'' but some problems have unexpectedly good upper bounds in that model. What this means is that anyone hoping to prove the presumably higher lower bounds for practical deterministic VLSI would have to try techniques other than ours---but ours were the only game in town for some time.
One key idea that was first made explicit in our paper is the relation between the cost of the protocol and the cost of covering 0-1 matrices with rectangles, as illustrated here:
\includegraphics[width=2in]{EFcover.png}
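To state the bridge a bit more exactly (our summary of the framework used in these papers): write $latex {P = \{x : Ax \le b\}}&fg=000000$ with vertex set $latex {\{v_1,\dots,v_k\}}&fg=000000$, and form the slack matrix $latex {S}&fg=000000$ with one row per facet and one column per vertex:

$latex \displaystyle S_{f,v} \;=\; b_f - a_f^{\top} v. &fg=000000$

Yannakakis' factorization theorem says that the minimum number of inequalities in any extended formulation of $latex {P}&fg=000000$ equals the nonnegative rank of $latex {S}&fg=000000$. The number of all-positive rectangles needed to cover the support of $latex {S}&fg=000000$, which corresponds to a nondeterministic protocol for deciding whether $latex {S_{f,v} > 0}&fg=000000$, is a lower bound on that nonnegative rank, and this is where the covering picture above enters.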
Again see Sebastian's papers for details of how all this fits together.
Open Problems
Can the makers still succeed? The new theorems have made it much harder for them to win. But there are still loop-holes and still room for creative new ideas. Let's leave the discussion of those for another day.