{\huge Goldilocks Principle And P vs. NP}
The rule of three \vspace{.1in}
Robert Southey was the Poet Laureate of Britain from 1813 until his death in 1843. He published, anonymously, ``The Story of the Three Bears" in 1837.
Today Ken and I want to talk about the state of $latex {\mathsf{P}}&fg=000000$ versus $latex {\mathsf{NP}}&fg=000000$ and the relationship to this story.
The story, as I'm sure you know, is about Goldilocks. She has---no surprise---curly blond hair. She enters the home of three bears while they are away. She tries their chairs, eats some of their porridge, and falls asleep on one of their beds. When the bears return she runs away.
What you may not know is that Southey's original story had not a young girl but an old woman. She is not innocent but furtive, self-serving, and meddlesome: she breaks the little bear's chair and eats his breakfast. An 1849 retelling by Joseph Cundall changed her into a girl named ``Silver-hair'' and changed her motives to restlessness and curiosity. Her hair changed to gold around 1868 but she did not acquire the name ``Goldilocks'' until 1904.
Of course there is no change to our classic problem: Claims continue that there are proofs that $latex {\mathsf{P = NP}}&fg=000000$, claims continue that there are proofs that $latex {\mathsf{P \neq NP}}&fg=000000$, and claims continue that $latex {\mathsf{P \neq NP}}&fg=000000$---but without offering any proof. What connection can there be to the Goldilocks story? It is in the telling---the literary rule of three augmented with a total order.
Three Goldilocks Principles
The Goldilocks tale is really one of $latex {3 \times 3}&fg=000000$. It has her try three things in turn: chairs, food, and beds. At each stage of the story, one item is too big-or-hot-or-hard, one is too small-or-cold-or-soft, and one is just right. Then the bears follow the same $latex {3 \times 3}&fg=000000$ sequence in discovering her traipsing.
The ``just right'' aspect has been named the Goldilocks Principle. Christopher Booker's oft-quoted description of the ``dialectical three'' goes as follows:
``[T]he first is wrong in one way, the second in another or opposite way, and only the third, in the middle, is just right. ... This idea that the way forward lies in finding an exact middle path between opposites is of extraordinary importance in storytelling.''
According to this history on the LetterPile website, however, the Goldilocks Principle leads to what it calls the ``Goldilocks Syndrome'':
``We are living in consumerism, where big companies non-stop create billions of realities, where everybody ... can feel `just right.' ...The problem starts when we can't stop looking for perfect solutions in [this] pretty imperfect world.''
It is not clear whether they have a solution, but they go on to describe and recommend the following ``Goldilocks Rule'':
``Balance between known and unknown, risky and risk-free, predictable and unpredictable.''
Our take on all this is: are we trying to be ``too perfect'' in our approach to $latex {\mathsf{P}}&fg=000000$ versus $latex {\mathsf{NP}}&fg=000000$? Can we profitably strike a new balance?
Who's Been Sleeping In My Bed?
I have argued recently and before that $latex {\mathsf{P = NP}}&fg=000000$ is possible, but with an algorithm for, say, $latex {\mathsf{SAT}}&fg=000000$ that is galactic---see here for our introduction of this term---meaning an algorithm that is useless in practice. Here are three perspectives on the power of the two classes.
Lemma 1 Suppose there is a constant $latex {c>0}&fg=000000$ so that for every $latex {S}&fg=000000$ in $latex {\mathsf{P}}&fg=000000$, the set $latex {S \cap \{0,1\}^{n}}&fg=000000$ has a Boolean circuit of size at most $latex {cn^{c}}&fg=000000$. Then $latex {\mathsf{P \neq NP}}&fg=000000$.
The lemma is easy to prove: assume $latex {\mathsf{P = NP}}&fg=000000$ and that there is such a constant $latex {c}&fg=000000$. Then this contradicts Ravi Kannan's famous theorem that for every $latex {k}&fg=000000$ the polynomial hierarchy has sets that require Boolean circuits of size $latex {n^{k}}&fg=000000$. (See this 2009 paper for more.) In terms of our theme: $latex {\mathsf{P}}&fg=000000$ is too weak to be equal to $latex {\mathsf{NP}}&fg=000000$---the bowl is too small.
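To make the contradiction concrete, here is one way the argument can be spelled out; the notation $latex {\mathrm{size}(\cdot)}&fg=000000$ for minimum Boolean circuit size is introduced only for this sketch. Assume $latex {\mathsf{P = NP}}&fg=000000$ and that the fixed constant $latex {c}&fg=000000$ works, so that every $latex {S}&fg=000000$ in $latex {\mathsf{P}}&fg=000000$ satisfies, for all $latex {n}&fg=000000$,

$latex \displaystyle \mathrm{size}\big(S \cap \{0,1\}^{n}\big) \le cn^{c}. &fg=000000$

Since $latex {\mathsf{P = NP}}&fg=000000$ collapses the polynomial hierarchy to $latex {\mathsf{P}}&fg=000000$, the same bound holds for every set in the hierarchy. But taking $latex {k = c+1}&fg=000000$ in Kannan's theorem yields a set $latex {K}&fg=000000$ in the hierarchy with $latex {\mathrm{size}(K \cap \{0,1\}^{n}) > n^{c+1}}&fg=000000$ for infinitely many $latex {n}&fg=000000$, while $latex {cn^{c} \le n^{c+1}}&fg=000000$ whenever $latex {n \ge c}&fg=000000$. Any such $latex {n \ge c}&fg=000000$ gives a contradiction, so the hypothesis of the lemma forces $latex {\mathsf{P \neq NP}}&fg=000000$.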
There is however a different take: $latex {\mathsf{P}}&fg=000000$ is just right to masquerade as $latex {\mathsf{NP}}&fg=000000$. How to develop this idea is our new thought.
We can start by regarding the classic ``barriers''---relativization, natural proofs, and algebrization---as effects of such masquerading. Then focus further on the extent to which $latex {\mathsf{NP}}&fg=000000$-objects can be approximated by polynomial-time ones. Many $latex {\mathsf{NP}}&fg=000000$-complete problems are easy in the average case under certain natural distributions. We wonder whether the theory can be structured to say that logspace objects, or ones from $latex {\mathsf{ACC}^0}&fg=000000$ (not to mention $latex {\mathsf{AC}^0}&fg=000000$), cannot approximate so well. An example of a technical issue to overcome is that languages like $latex {\mathsf{SAT}' = \{xy: x \in \mathsf{SAT} \vee y \notin 0^*\}}&fg=000000$ are $latex {\mathsf{NP}}&fg=000000$-hard but are approximated ultra-simply by the language of all strings.
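To see both halves of that example concretely, fix a splitting convention: as an illustrative assumption (none is specified above), read an input of length $latex {2n}&fg=000000$ as $latex {xy}&fg=000000$ with $latex {|x| = |y| = n}&fg=000000$. The map $latex {x \mapsto x0^{|x|}}&fg=000000$ is then a polynomial-time reduction from $latex {\mathsf{SAT}}&fg=000000$ to $latex {\mathsf{SAT}'}&fg=000000$, since the padded second half lies in $latex {0^{*}}&fg=000000$ and membership reduces to whether $latex {x \in \mathsf{SAT}}&fg=000000$; this gives the $latex {\mathsf{NP}}&fg=000000$-hardness. On the other hand, the language of all strings disagrees with $latex {\mathsf{SAT}'}&fg=000000$ only when $latex {x \notin \mathsf{SAT}}&fg=000000$ and $latex {y = 0^{n}}&fg=000000$, so under the uniform distribution on $latex {\{0,1\}^{2n}}&fg=000000$ the disagreement probability is at most

$latex \displaystyle \Pr_{xy}[\,y = 0^{n}\,] = 2^{-n}. &fg=000000$

A trivial approximator is thus correct on all but an exponentially small fraction of inputs even though the language is $latex {\mathsf{NP}}&fg=000000$-hard; any useful notion of approximation here has to rule out such padding tricks.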
Of course, it would be a huge breakthrough already to separate $latex {\mathsf{NP}}&fg=000000$ from uniform $latex {\mathsf{ACC}^0}&fg=000000$, let alone from logspace or $latex {\mathsf{NL}}&fg=000000$. We're suggesting instead to think along these lines:
Which bowl? Which bed to lie in? Most seem to believe the second is how to prove $latex {\mathsf{P} \neq \mathsf{NP}}&fg=000000$, but who knows. At least we're trying not to run away.
Open Problems
\bigskip The famous front-and-back cover art of the venerable 1979 textbook by John Hopcroft and Jeffrey Ullman is said to depict Cinderella:
\includegraphics[width=3in]{HopcroftUllmanCovers.png}
We believe Goldilocks fits better: curly hair, wearing boots not slippers, and breaking things. Both of us recall general optimism about solving $latex {\mathsf{P}}&fg=000000$ versus $latex {\mathsf{NP}}&fg=000000$ at the time the text was published. Now the artwork seems prophetic about what happens when we tug at the question. Can we get a ``middle-way'' approach to it up and functioning?