{\huge Proof Envy}

I wish that I could have $latex {\dots}&fg=000000$

Vijaya Ramachandran is now the Blakemore Regents Professor of Computer Science at The University of Texas at Austin. Once upon a time she was my PhD student, at Princeton, working on the theory of VLSI chip design. Long ago she moved successfully into other areas of theory. She became a leader in cache oblivious algorithms, an area created by Charles Leiserson, whom I advised as an undergraduate many years ago. It's a small world.

Today I want to talk about theorems, old theorems and new, that I wish I could have used in a paper.

Call the phenomenon theorem envy: There are theorems that I have long known, but have never been able to use in any actual paper, in any proof of my own. I find this curiously unsatisfying, to know a cool result and yet be unable to use it in one of my papers. Perhaps I am alone in having this feeling, perhaps you only care about solving your problem, not how you solve it. Mostly I would agree with that. But part of me would like to use Theorem X in one of my papers. It just seems like it would be cool to do that.

This isn't about envying those who originally proved the theorems. It's about those who actually use the big gear in their toolboxes. We may have lots of shiny gadgets in our boxes, but part of the reason they're shiny is that they haven't been used.

Stirring the Brew

I realized this years ago when Ramachandran was working on her thesis. One of her results solved a problem that most had ignored---could not solve---about the cost of resizing drivers to make a chip go faster. One way to make a signal propagate faster across a chip is to use more power: the larger the driver, the faster the signal can move. This remains as significant today as then. The issue is not the speed-of-light restriction, but is related to the time it takes to charge a capacitive load: more power makes it happen faster. The downside then, and much worse today, is the added heat that power causes.

There is a curious cyclic issue in resizing drivers. The approach was in two steps:

  1. Look at a chip and determine the wires that were too long and were slowing down the chip.
  2. Resize their respective drivers so that they would go faster.

In practice this worked quite well. Yet there is a subtle bug: the resizing of the drivers moves parts of the chip, and this can make some wires become new bottlenecks. The obvious answer is to do the following:

  1. Look at a chip and determine the wires that were long and were slowing down the chip.
  2. Resize their respective drivers so that they would go faster.
  3. Look at the chip, and if it is still slow repeat step (1).

The theory question is: does this process ever stop? Is it possible that the chip just blows up, so that no amount of resizing will make it go faster? Or is this unlikely, or even impossible?

The answer is that, using the famous Brouwer Fixed-Point Theorem, Ramachandran was able to prove that this cannot happen. She showed that, given some reasonable restrictions on the chip topology, the process becomes a continuous function on a convex compact set, which by the theorem must have a fixed point. Brouwer means ``brewer'' in Dutch, and one way to visualize the theorem is stirring a brew in a pot: some point of the liquid must stay put---as opposed to a torus, around which it can flow with no vortex. The analysis also needs, and shows, quick convergence, but the essence is that things stop because the resizing operation has a fixed point.
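The shape of the argument can be seen in a contrived toy model. Everything below---the linear load model, the constants, the box of allowed sizes---is my own invention, purely to illustrate an iteration that is a continuous self-map of a compact convex set and so, by Brouwer, must have a fixed point:

```python
def loads(sizes):
    # Toy model of the cyclic issue: resizing one driver shifts the
    # layout, so every wire's load depends on the other drivers' sizes.
    total = sum(sizes)
    return [1.0 + 0.1 * (total - s) for s in sizes]

def resize(sizes):
    # Set each driver to match its current load, clipped to the compact
    # box [0.1, 10.0]^n: a continuous self-map of a convex compact set.
    return [min(10.0, max(0.1, l)) for l in loads(sizes)]

sizes = [0.5, 1.0, 2.0]
for _ in range(200):
    new = resize(sizes)
    if max(abs(a - b) for a, b in zip(new, sizes)) < 1e-9:
        break  # successive iterates agree: we are at the fixed point
    sizes = new

# In this toy the update is a contraction, so the iteration converges
# to the unique fixed point with every size equal to 1.25.
print([round(s, 4) for s in sizes])
```

Of course Brouwer alone only gives existence of a fixed point, not convergence; here the contraction does that extra work, just as the quick-convergence analysis did in the real result.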

I like her result. In my opinion its practical impact was shedding light on CAD, but I liked it even more because it used the wonderful Brouwer Fixed-Point Theorem. I have never been able to use that theorem in any of my papers. Oh well.

So let me give a few examples of other theorems that I would still like to get a chance to use in some future paper.

A List

Here is a list of my top few:

$latex {\bullet }&fg=000000$ Regularity Lemma. The famous Szemerédi regularity lemma states that every large enough graph can be divided into subsets of about the same size so that the edges between different subsets behave almost randomly. It was proved in a weaker form by Endre Szemerédi and used to prove his famous theorem on arithmetic progressions.
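For concreteness, here is one standard modern formulation of the lemma---stated from memory, not quoted from Szemerédi's paper:

```latex
% One standard statement of the Szemeredi Regularity Lemma.
\begin{quote}
For every $\epsilon > 0$ there is an $M = M(\epsilon)$ such that the
vertices of any graph $G$ can be partitioned into $k \le M$ parts
$V_1, \dots, V_k$ of nearly equal size, so that all but at most
$\epsilon k^2$ of the pairs $(V_i, V_j)$ are $\epsilon$-regular:
for every $A \subseteq V_i$ and $B \subseteq V_j$ with
$|A| \ge \epsilon |V_i|$ and $|B| \ge \epsilon |V_j|$, the edge
densities satisfy $|d(A, B) - d(V_i, V_j)| \le \epsilon$.
\end{quote}
```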

I have known it for many years, but have never been able to use it in my work. Of course many others have used it in their work. I would still like to be able to use it.

$latex {\bullet }&fg=000000$ Ax-Grothendieck theorem. This is a famous theorem about the relationship between injectivity and surjectivity of multivariate polynomial maps. It was proved independently by James Ax and Alexander Grothendieck. One version of the theorem is that if $latex {F}&fg=000000$ is a polynomial function from $latex {\mathbb{C}^{n}}&fg=000000$ to $latex {\mathbb{C}^{n}}&fg=000000$ and $latex {F}&fg=000000$ is injective, then $latex {F}&fg=000000$ is bijective.

The full theorem is more general, and what's remarkable is that the basic mechanism of the proof is really quite simple. It relies on the fact that injective implies surjective for finite sets: a self-map of a finite set that is one-to-one must be onto. I believe this should be useful in problems that concern polynomials, which after all are among our main objects of study in complexity theory.

$latex {\bullet }&fg=000000$ Baire Category Theorem. This is the famous theorem of René-Louis Baire, who proved it in his 1899 doctoral thesis. It is an existence theorem showing that any countable family of open, everywhere-dense sets in a complete metric space has a dense (in particular, non-empty) intersection. It has been used in topology and analysis to prove the existence of many wonderful and exotic objects.

In our world we often prove the existence of exotic objects via the probabilistic method, but the Baire Category Theorem gives a quite different method. It has constructive versions in certain settings, and complexity theorists have used them to show that certain oracle constructions are generic. I believe there should be some further interesting applications of it, and I still hope to use it someday.

$latex {\bullet }&fg=000000$ Feit-Thompson theorem: this is the famous theorem that every group of odd order is solvable. It was proved by Walter Feit and John Thompson in the early 1960s. It is a cornerstone of the understanding and eventual classification of finite simple groups: it implies that every non-abelian finite simple group has even order.

I have always thought that there might be some deep connection between this theorem and our inability in complexity theory to understand the power of computations modulo composites, but I have never been able to make this idea into a theorem.

$latex {\bullet}&fg=000000$ Hilbert's Nullstellensatz. This does get applied, as we noted here. The cool German name is enough to induce envy. Particularly interesting are various effective and combinatorial forms of it.
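For reference, here is the weak form in one standard statement; it is the degree bounds on the certificate that the effective versions supply:

```latex
% Weak Nullstellensatz, one standard statement.
\begin{quote}
If $f_1, \dots, f_m \in \mathbb{C}[x_1, \dots, x_n]$ have no common
zero in $\mathbb{C}^n$, then there are polynomials $g_1, \dots, g_m$
with
\[
  g_1 f_1 + \cdots + g_m f_m = 1 .
\]
\end{quote}
```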

This also stands for theorems in polynomial ideal theory and algebraic geometry in general, as used, for example, by Ketan Mulmuley and Milind Sohoni in their attack on $latex {\mathsf{P \neq NP}}&fg=000000$-type questions, and by Craig Gentry in cryptography for some schemes of fully homomorphic encryption.

Open Problems

Do you ever feel theorem envy? Which theorems would you like to use? Am I unique and alone in this affliction? Is there a cure?