{\huge What Is The Object?}
A tale of two approaches to solving open problems \vspace{.3in}
Jeff Kahn and Gil Kalai are among the top problem solvers in the world, and each has won notable prizes in his great career. Gil just recently visited Georgia Tech, and gave a wonderful talk on the interplay of Boolean functions and harmonic analysis. Gil is also involved in our ongoing debate on the feasibility of quantum computers.
Today Ken and I wish to talk about two approaches to problem solving. Neither is better than the other, but they are fundamentally different.
I thought about this topic while listening to Gil's talk. My favorite among his solved problems is the one he did with Jeff, resolving the Borsuk Conjecture. In 1932 Karol Borsuk asked the following question, for which he had proved ``yes'' in the case $latex {n=2}&fg=000000$:

Die folgende Frage bleibt offen: Lässt sich jede beschränkte Teilmenge $latex {E}&fg=000000$ des Raumes $latex {\mathbb{R}^n}&fg=000000$ in $latex {(n + 1)}&fg=000000$ Mengen zerlegen, von denen jede einen kleineren Durchmesser als $latex {E}&fg=000000$ hat?

Ken finds Wikipedia's translation to be optimal:

The following question remains open: Can every bounded subset $latex {E}&fg=000000$ of the space $latex {\mathbb{R}^n}&fg=000000$ be partitioned into $latex {(n + 1)}&fg=000000$ sets, each of which has a smaller diameter than $latex {E}&fg=000000$?
Gil and Jeff showed that the widely believed positive answer $latex {n+1}&fg=000000$ was off a bit---the correct answer is not even polynomial in $latex {n}&fg=000000$, as it is at least $latex {1.2^{\sqrt n}}&fg=000000$ for large enough $latex {n}&fg=000000$.
The Guess
Suppose one wishes to solve some open problem X. There are many ways to attack a problem, as many ways as there are problems. One critical issue for all open problems is the guess: do you try to prove X is true or try to prove X is false? On exams we often say: Consider the problem $latex {\dots}&fg=000000$ Prove that it is true. Unfortunately, real-world problems are not so labeled. They are more like an exam question of the form: Consider the problem $latex {\dots}&fg=000000$ Prove that it is true or supply a counter-example. These questions are much harder, and I usually avoid them on exams I create for my students. On an exam, if you guess the wrong way you may waste valuable time, which could make you miss other questions, and perhaps fail the exam.
On an open problem---not an exam problem---you do need to make the guess. Are you going to try to prove X or try to disprove X? For some problems we all seem to believe that we know the right guess, but I am not so sure. Just be aware that a wrong guess means that you will never succeed in getting a proof. You can hope that even with a wrong guess your ``failure'' to succeed will still lead to insights that may advance the field in general, and perhaps lead to a resolution of X anyway.
We have previously blogged about cases where the math and theory community guessed wrong. The problems in question were solved only after a few very smart people guessed the right way, since initially almost all of the community believed in the other direction. For Gil and Jeff on the Borsuk conjecture it was important that they first guessed right. Yes they were clever and had a deep insight into the problem, but making the right guess strikes me as coming before having that insight.
Why They Guessed Right
As stated, the conjecture was about arbitrary subsets of $latex {\mathbb{R}^n}&fg=000000$. Ken and I know people with brilliant visualization of objects in $latex {\mathbb{R}^n}&fg=000000$ for high $latex {n}&fg=000000$---for instance Harold Kuhn ``saw'' a tangency in 36-dimensional space from an example involving $latex {6 \times 6}&fg=000000$ matrices in Ken's Princeton senior thesis---but we are not among them. However, as noted on the first page of their paper (linked from here), there was a re-formulation of a major case of Borsuk's conjecture:
Conjecture. Let $latex {K}&fg=000000$ be a family of $latex {k}&fg=000000$-subsets of $latex {\{1, 2, \dots, n\}}&fg=000000$ such that every two members of $latex {K}&fg=000000$ have at least $latex {t}&fg=000000$ elements in common. Then $latex {K}&fg=000000$ can be partitioned into $latex {n}&fg=000000$ parts so that in each part every two members have at least $latex {t + 1}&fg=000000$ elements in common.
This is now stated in terms of the objects that a combinatorialist like Gil or Jeff understands well: discrete sets and systems of subsets and partitions, vectors and codes$latex {\dots}&fg=000000$ Phrasing it this way primed their intuition that this was false. Once they found a construction for a strong counterexample, their proof was short. They quote Moby Dick on how they were led to it:
However contracted, that definition is the result of expanded meditation.
Then number theory, part of the landscape of any discrete mathematician, put a finishing touch on their exponential estimate by an application of the Prime Number Theorem.
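To convey the flavor---what follows is our rough sketch from memory of their paper, so the details are theirs and any slips are ours---identify a $latex {k}&fg=000000$-subset $latex {A \subseteq \{1,\dots,n\}}&fg=000000$ with its characteristic vector $latex {\chi_A \in \{0,1\}^n}&fg=000000$. Then $latex {\|\chi_A - \chi_B\|^2 = 2k - 2|A \cap B|}&fg=000000$, so the diameter of such a family is attained exactly by the pairs of smallest intersection, and partitioning into pieces of smaller diameter means forcing every pair inside a piece to intersect more. For the counterexample, consider the vectors $latex {x \otimes x}&fg=000000$ for $latex {x \in \{-1,+1\}^n}&fg=000000$, which live in $latex {\mathbb{R}^{n^2}}&fg=000000$ and satisfy $latex {\langle x \otimes x, y \otimes y \rangle = \langle x, y \rangle^2}&fg=000000$. The diameter of this set is attained exactly by pairs with $latex {x \perp y}&fg=000000$, and the Frankl--Wilson theorem says that when $latex {n}&fg=000000$ is four times a prime, any family of such vectors with no orthogonal pair is exponentially small. Hence exponentially many pieces are needed, which in terms of the ambient dimension $latex {d}&fg=000000$---on the order of $latex {n^2}&fg=000000$---gives a bound of the shape $latex {1.2^{\sqrt d}}&fg=000000$; the Prime Number Theorem supplies enough primes to extend the estimate to all large dimensions.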
Hence we will talk about what approach you take after you make your guess. This may come hand-in-hand with the insight, but importantly it also often comes before it. It may also be classed as a matter of attitude, of how much novelty one expects is required to reach the end. But we'll reference the classic divide between ``procedural'' and ``object-oriented'' programming languages.
The Procedural Approach
Okay, so you have guessed that X is true, and you start out to try to prove X. One method is what I think of as the standard procedure. You bring to bear on X all the methods, lemmas, tricks, ideas, and more that you know which seem related to X. If you are lucky you will eventually see a pattern that will lead to a proof.
The proof may be short, it may have modest length, it may be high-level, it may be long, it may be quite technical. The point is, you execute it as a series of actions based on objects you know and tools you try to sharpen. In any case you will be happy: you have solved your problem. Congrats. Time to move on to another question.
The Object Approach
Sometimes the existing tools are insufficient to solve a problem, even if you have guessed right. You might feel you should work on another problem. The object-based approach basically agrees: in a sense, it has you work on a different problem first---that of building the right objects.
In object-oriented programming, the initial emphasis is not so much on making the program do what you ultimately want it to do, but rather on describing and modeling all the data you will be dealing with. Often this means creating new or compound objects for the data, or making a new class hierarchy of which the concepts you initially considered are only a part. You may do a lot of extra work, coding objects that are not used in the final product. However, it is often easier to model information than to process it. Once you have modeled a lot of things, then you can start to think about exactly what you want to do with them.
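As a toy illustration of this mindset---a contrived sketch of our own, with made-up names, not drawn from any real code base---one might start a program by declaring the nouns of the problem and postponing most of the verbs:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Model the data first: the "nouns" of the domain.
struct Point { std::vector<double> coords; };   // a point in R^n
struct Body  { std::vector<Point> points; };    // a bounded set, sampled by finitely many points
struct Partition { std::vector<Body> pieces; }; // a decomposition of a Body

// Only afterward add the "verbs"---and only the ones we turn out to need.
double diameter(const Body& b) {
  double best = 0.0;
  for (std::size_t i = 0; i < b.points.size(); ++i)
    for (std::size_t j = i + 1; j < b.points.size(); ++j) {
      double d2 = 0.0;
      for (std::size_t k = 0; k < b.points[i].coords.size(); ++k) {
        const double t = b.points[i].coords[k] - b.points[j].coords[k];
        d2 += t * t;
      }
      best = std::max(best, d2);
    }
  return std::sqrt(best);
}

// Declared but deliberately left unwritten: the object approach accepts
// modeling work whose eventual use, if any, comes later.
Partition splitIntoSmallerPieces(const Body& b);

The modeling deliberately outweighs the part that computes anything; that imbalance is the point.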
In some applications, such as simulation and game design, this is called world-building. Ken and I are thinking about this because Shinichi Mochizuki's new claimed proof of the ABC Conjecture---and much more---appears to be a huge example. Here is a quotation from Minhyong Kim in the New York Times article, which we linked from our own item:
``He's taking what we know about numbers and addition and multiplication and really taking them apart,'' Dr. Kim said. ``He creates a whole new language---you could say a whole new universe of mathematical objects---in order to say something about the usual universe.''
An ABC of Objects?
Many others commenting around the Web say they are having trouble because they cannot understand Mochizuki's procedures---and this is because they have to learn his new objects first. Here is part of what Jordan Ellenberg says about them:
Mochizuki argues that it is too limiting to think about ``the category of schemes over Spec Z,'' as we are accustomed to do. $latex {\dots}&fg=000000$ Mochizuki argues that [this is] insufficiently rich to handle certain problems in Diophantine geometry. He wants us instead to think about what he calls the ``species'' of schemes over Spec Z, where a scheme in this sense is not an abstract object in a category, but something cut out by a formula. In some sense this view is more classical than the conventional one, in which we tend to feel good about ourselves if we can ``remove coordinates'' and think about objects and arrows without implicitly applying a forgetful functor and referring to the object as a space with a Zariski topology or---ptui!---a set of points.
Well, nothing can be more ``real'' than a set of points, but so much conceptual building has been done on top of them already that they can be deprecated. Also, when you take a first look at Mochizuki's outlines, the ABC Conjecture is barely prominent---so many other things are defined and dealt with first, in 512 total pages.
Problems That Birthed New Objects
Let us think back to a few cases where a proof introduced a new kind of object that spurred research in itself, even eclipsing the problem the prover originally solved.
$latex {\bullet}&fg=000000$ The problem: Show 5th-degree equations can/cannot be solved by radicals. The objects: Groups.
$latex {\bullet}&fg=000000$ The problem: Show $latex {\mathsf{NC^1}}&fg=000000$ can/cannot be simulated by bounded-width branching programs. The objects: Programs over groups.
$latex {\bullet}&fg=000000$ The problem: Riemann Hypothesis extended to finite fields. The objects: Varieties over finite fields.
$latex {\bullet}&fg=000000$ The problem: Improve constant-degree expanders; de-randomize logspace? The objects: Zig-zag graph products.
Note that Andrew Wiles's work on his brilliant solution of the Fermat Equation is probably not of this type. He used all the tools he could from advanced number theory, but did not really create new objects.
Declarative or Functional or Logic-based?
Procedural and object-oriented are not necessarily opposed concepts; for instance C++ and related languages use both. In programming language theory the concept most directly contrasted with procedural is declarative, which is allied with the idea of functional programming, cultivated in John Backus's famous Turing Award lecture and its 1978 article, and with logic programming languages such as Prolog.
The idea of these, however, is more that it should suffice to give a full logical specification of a problem, and then procedures for solving it will largely take care of themselves. There is some echo of this idea in specifications for PolyMath problems, and in the notion of a ``Watson'' for math. However, we prefer to focus on the imperative to create and describe new objects.
Open Problems
What are some other good examples of problems that are most known today for creating new objects?