{\huge Approximate Complexity Theory}

You can fool all the functions some of the time, and some of the functions all the time, but you cannot fool all the functions all the time, or even most of the time. \vspace{.5in}

Eric Blais and Li-Yang Tan are both complexity theorists. Eric just finished his PhD and is currently a Simons Postdoctoral Fellow at MIT, while Li-Yang is about to finish his doctorate at Columbia.

Today Ken and I wish to talk about their recent paper on approximations to Boolean functions.

The paper, entitled ``Approximating Boolean functions with depth-2 circuits,'' just appeared at CCC 2013. I was unable to attend the meeting, but in reading the paper I am quite excited about their result. I think it could have many interesting applications.

A circuit with $latex {s}&fg=000000$-many wires can be described as a graph by a binary string of length $latex {\sim s\log s}&fg=000000$: list the gates with $latex {\log s}&fg=000000$-bit labels and give the target of each wire out of each gate. Since there are $latex {2^{2^n}}&fg=000000$ Boolean functions $latex {f}&fg=000000$ on $latex {n}&fg=000000$ variables, one for each binary string of length $latex {2^n}&fg=000000$, it follows by counting that many $latex {f}&fg=000000$ require size $latex {s}&fg=000000$ such that $latex {s\log_2 s \geq 2^n}&fg=000000$ up to constant factors, so $latex {s = \Omega(2^n / n)}&fg=000000$. If there are $latex {m}&fg=000000$ arguments $latex {x}&fg=000000$ such that $latex {f(x) = 1}&fg=000000$, we can create a circuit in disjunctive normal form (DNF) by using a gadget of up to $latex {1 + n}&fg=000000$ NOT and AND gates to recognize each such $latex {x}&fg=000000$, and running a wire from each gadget to a big OR with $latex {m}&fg=000000$ inputs. This has size about $latex {mn}&fg=000000$, which becomes $latex {n2^{n-1}}&fg=000000$ in the case of the parity function. The issue in their paper is whether such strictly exponential bounds can be beaten when the function $latex {f}&fg=000000$ need only be computed approximately.
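To make the gadget construction concrete, here is a minimal Python sketch (my own illustration, not code from the paper; the helper names are hypothetical) that builds the one-gadget-per-satisfying-input DNF from a truth table and checks it on parity:

\begin{verbatim}
from itertools import product

def dnf_from_truth_table(f, n):
    # One term per input x with f(x) = 1; a term is a dict mapping
    # each variable index to its required bit, playing the role of
    # the AND/NOT gadget that recognizes exactly that x.
    return [{i: b for i, b in enumerate(x)}
            for x in product([0, 1], repeat=n) if f(x)]

def eval_dnf(terms, x):
    # The DNF is the OR over terms, each an AND of literals.
    return any(all(x[i] == b for i, b in t.items()) for t in terms)

# Parity on n = 4 variables: 2^(n-1) = 8 terms, each reading all n
# variables, matching the size ~ n * 2^(n-1) mentioned above.
parity = lambda x: sum(x) % 2 == 1
terms = dnf_from_truth_table(parity, 4)
assert len(terms) == 8
assert all(eval_dnf(terms, x) == parity(x)
           for x in product([0, 1], repeat=4))
\end{verbatim}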

Some History

You may skip this history if you like. In ancient times these functions were called switching functions---indeed even FOCS bore the name ``Switching and Automata Theory.'' Later they became Boolean functions. I am not sure when things switched---OK, bad pun.

The reason for the initial name was they were studied for their importance to the phone company, when there was one phone company. Such functions were used to control phone calls: even though in those days phone lines themselves were analog, the switching of calls was digital. You either connected Alice to Bob or Alice to Eve or Alice to $latex {\dots}&fg=000000$, even if their conversation was really just over a wire that carried their voice as an analog signal.

In these early times the idea of ``$latex {n}&fg=000000$''---as in $latex {n}&fg=000000$ people, $latex {n}&fg=000000$ gates, $latex {n}&fg=000000$ variables...---had not been invented yet. By this I mean that the theoretical interest of researchers was occupied by Boolean---I mean switching---functions of just a few variables. For example, Donald Davies studied the cost of implementing three-variable functions in his 1957 paper ``Switching Functions of Three Variables.'' Such results were non-trivial, in some ways harder than the $latex {O}&fg=000000$-notation style work of today, owing to the use of ingenious special-purpose designs and encodings.

Davies worked at the National Physical Laboratory in the 1940s, and is said to have ``spotted mistakes in Turing's seminal 1936 paper On Computable Numbers, much to Turing's annoyance.'' Rest assured that Alan Turing's main theorem on the halting problem is just fine.

Let's leave switching functions, and move on to $latex {n}&fg=000000$-variable Boolean functions and the work of Blais and Tan.

The General Approximation Result

Suppose that you wish to compute a Boolean function, but it is very complex in some measure. A natural idea is to try to approximate it. Since the function returns either $latex {0}&fg=000000$ or $latex {1}&fg=000000$, it seems that an approximation in the usual sense is useless. What would it mean to answer $latex {0.9}&fg=000000$? Actually, as I write this, it could be a type of confidence level: if you say $latex {0.9}&fg=000000$ it means that you are pretty sure that the answer is $latex {1}&fg=000000$ but do not completely rule out that the answer is $latex {0}&fg=000000$. Hmmmm. This is not unlike what the weather forecasters do---tomorrow has a $latex {90\%}&fg=000000$ chance of rain. Perhaps another day we should discuss this type of approximation.

Well, Blais and Tan use another notion of approximation---a quite standard one. Suppose that $latex {f(x)}&fg=000000$ is a Boolean function. For some small $latex {\epsilon>0}&fg=000000$, say that the function $latex {g(x)}&fg=000000$ $latex {\epsilon}&fg=000000$-approximates $latex {f(x)}&fg=000000$ provided

$latex \displaystyle f(x) = g(x) &fg=000000$

for all but an $latex {\epsilon}&fg=000000$ fraction of the inputs. If the function has $latex {n}&fg=000000$ inputs, then this means that it must be right on at least

$latex \displaystyle (1 - \epsilon) 2^n&fg=000000$

of the inputs. Note that if $latex {g_0(x)}&fg=000000$ uses the first sense of approximation with real values above, then the function $latex {g(x)}&fg=000000$ obtained by rounding $latex {g_0(x)}&fg=000000$ may well meet this condition.
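Here is a brute-force Python sketch of this definition (my own illustration; the names are hypothetical). It also previews the easy example discussed next:

\begin{verbatim}
from itertools import product

def approx_error(f, g, n):
    # Fraction of the 2^n inputs on which f and g disagree;
    # g epsilon-approximates f when this is at most epsilon.
    inputs = list(product([0, 1], repeat=n))
    return sum(f(x) != g(x) for x in inputs) / len(inputs)

# The n-ary AND is approximated by the constant-0 function: they
# disagree only on the all-ones input, an error of exactly 2^(-n).
n = 10
AND = lambda x: all(x)
zero = lambda x: False
assert approx_error(AND, zero, n) == 2 ** -n
\end{verbatim}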

Clearly some functions are easy to approximate. The $latex {n}&fg=000000$-ary AND function is trivial: just say no all the time and you are right for all but one input. It would also seem clear that more interesting functions might be quite a bit harder to approximate. The parity function comes to mind, a function that is impossible to compute exactly without reading all the inputs. But there is a surprise here in Blais and Tan's paper, which I will explain shortly. Their first main result is:

Theorem 1 For every $latex {\epsilon > 0}&fg=000000$ there is a constant $latex {c_\epsilon}&fg=000000$ such that for all $latex {n}&fg=000000$ and every $latex {n}&fg=000000$-ary Boolean function $latex {f}&fg=000000$, there is a DNF $latex {g}&fg=000000$ of size at most $latex {c_\epsilon 2^n/\log n}&fg=000000$ that $latex {\epsilon}&fg=000000$-approximates $latex {f}&fg=000000$.

A little note on asymptotic notation: The way they actually state their theorem is:

``Every Boolean function can be $latex {\epsilon}&fg=000000$-approximated by a DNF of size $latex {O_{\epsilon}(2^n/\log n)}&fg=000000$.''

One needs a certain skill in translating asymptotics rigorously. Some things are always clear: $latex {n}&fg=000000$ is the number of variables regarded as the input size, the base of $latex {\log n}&fg=000000$ doesn't matter since changing it only alters a constant that gets absorbed into the ``$latex {O}&fg=000000$'' (but it is base 2 by default), and the constant in the $latex {O_{\epsilon}}&fg=000000$ depends on $latex {\epsilon}&fg=000000$ but not on $latex {n}&fg=000000$. What's less clear is that the asymptotics range over all $latex {n}&fg=000000$ while ``Boolean function'' means a function for just one $latex {n}&fg=000000$. The statement could mean a function defined on all of $latex {\{0,1\}^*}&fg=000000$ by a sequence of Boolean functions, one for each $latex {n}&fg=000000$, and then the constant in the $latex {O_{\epsilon}}&fg=000000$ could depend on the chosen sequence, but it doesn't---the correct version is as above. Their proof uses $latex {n \geq 10/\epsilon}&fg=000000$, but the statement is valid for all $latex {n}&fg=000000$; for $latex {n < 10/\epsilon}&fg=000000$---or whenever $latex {\epsilon = O(1/n)}&fg=000000$---their upper bounds lose their power and are met by their lower bounds, which then become $latex {\Omega(2^n)}&fg=000000$.

The cool point in any event is that for any fixed $latex {\epsilon}&fg=000000$, $latex {c_{\epsilon}/\log n \longrightarrow 0}&fg=000000$ with $latex {n}&fg=000000$, so approximation beats strictly exponential size.
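A tiny numerical illustration (setting the constant $latex {c_\epsilon}&fg=000000$ to $latex {1}&fg=000000$ purely for illustration): compared with the exact parity DNF of size $latex {n2^{n-1}}&fg=000000$ above, the approximate bound wins by a ratio of $latex {2/(n\log_2 n)}&fg=000000$, which vanishes quickly.

\begin{verbatim}
import math

# Ratio of the approximate DNF size bound 2^n / log2(n) (constant
# c_eps = 1 for illustration) to the exact parity DNF size
# n * 2^(n-1); algebraically this is 2 / (n * log2(n)) -> 0.
for n in [8, 16, 64, 256, 1024]:
    print(n, 2 / (n * math.log2(n)))
\end{verbatim}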

Width and Fooling

The part of their results Ken and I like best, however, has to do with the idea of the width of a DNF. This is defined to be the maximum arity of an AND gate in any of the DNF terms, and intuitively means the number of input variables that get read in any one place. For the parity example above the width is $latex {n}&fg=000000$, which is the maximum of course. But for the majority function, it suffices to have an AND gate of size $latex {m = \lfloor n/2 \rfloor + 1}&fg=000000$ for each $latex {m}&fg=000000$-subset of the variables, so the width is about $latex {n/2}&fg=000000$.
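Here is a minimal sketch of the width measure and the majority DNF just described, in Python (my own illustration; the term encoding follows the earlier snippet):

\begin{verbatim}
from itertools import combinations, product

def width(terms):
    # Width of a DNF: the maximum number of literals in any term.
    return max(len(t) for t in terms)

def majority_dnf(n):
    # One term per (floor(n/2)+1)-subset of variables, each
    # requiring all of its variables to be 1.
    m = n // 2 + 1
    return [{i: 1 for i in s} for s in combinations(range(n), m)]

terms = majority_dnf(5)
assert width(terms) == 3  # about n/2, as claimed
eval_dnf = lambda ts, x: any(all(x[i] == b for i, b in t.items())
                             for t in ts)
assert all(eval_dnf(terms, x) == (sum(x) > 2)
           for x in product([0, 1], repeat=5))
\end{verbatim}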

Now especially with parity, it is hard to conceive of being able to compute these functions with DNFs of any lower width, or even to approximate them. If a term leaves one variable $latex {x_i}&fg=000000$ unread, then for parity the answer could be anything, fifty-fifty, and it seems that term must be useless. But the magic combination of approximation, the probabilistic method, Fourier analysis, and covering-code constructions gives their other main theorems:

Theorem 2 For every $latex {\epsilon > 0}&fg=000000$ there is a constant $latex {c_\epsilon < 1}&fg=000000$ such that for all $latex {n}&fg=000000$ and every $latex {n}&fg=000000$-ary Boolean function $latex {f}&fg=000000$, there is a DNF $latex {g}&fg=000000$ of width at most $latex {c_\epsilon n}&fg=000000$ that $latex {\epsilon}&fg=000000$-approximates $latex {f}&fg=000000$.

In the case of parity, we can build $latex {g}&fg=000000$ with $latex {c_{\epsilon} = 1 - 2\epsilon}&fg=000000$ and of size $latex {2^{(1 - 2\epsilon)n}}&fg=000000$. These bounds are nearly tight, and moreover the construction gives one-sided error: $latex {g(x) = 1}&fg=000000$ for all $latex {x}&fg=000000$ of odd parity.
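To appreciate why reducing the width at all is surprising, here is a brute-force Python check (my own sketch, with arbitrary parameters) of the fifty-fifty observation above: any single term of width $latex {w < n}&fg=000000$ accepts exactly as many odd-parity as even-parity inputs.

\begin{verbatim}
from itertools import product

# A term of width w < n leaves a subcube of dimension n - w >= 1;
# flipping one free variable flips the parity, so the accepted
# inputs split exactly fifty-fifty between odd and even parity.
n, w = 6, 4
term = {i: 1 for i in range(w)}  # any fixed term of width w works
accepted = [x for x in product([0, 1], repeat=n)
            if all(x[i] == b for i, b in term.items())]
odd = sum(sum(x) % 2 for x in accepted)
assert 2 * odd == len(accepted)
\end{verbatim}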

Ken and I would have had the opposite expectation, that small-width DNFs would be so weak that they can be collectively ``fooled.'' In cryptographic complexity, for a function $latex {f}&fg=000000$ to fool a class $latex {\cal G}&fg=000000$ of weaker functions means that no $latex {g \in {\cal G}}&fg=000000$ can gain any advantage in computing values of $latex {f}&fg=000000$---that from the perspective of functions in $latex {\cal G}&fg=000000$, the function $latex {f(x)}&fg=000000$ looks random and unpredictable, and importantly is not approximable in Blais and Tan's sense. But they prove the opposite: every Boolean function can be approximated by DNFs of smaller width---and for parity still sub-exponential size. Hence ``all the smaller-width DNFs cannot be fooled most of the time.'' At least we think Abraham Lincoln would have agreed.

They have other results for Boolean functions that are monotone up-or-down in each of the variables, called unate functions, and for polynomial threshold functions, which have the form $latex {f(x) = \text{sign}(p(x))}&fg=000000$ for some real-valued polynomial $latex {p(x_1,\dots,x_n)}&fg=000000$. Here is a table of upper and lower bounds from their paper:

\includegraphics[width=4.5in]{BlaisTanTable.png}
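To make the notion of a polynomial threshold function concrete, here is a toy Python example of my own (not one from the paper): majority on an odd number of bits is the sign of the degree-1 polynomial $latex {p(x) = \sum_i (2x_i - 1)}&fg=000000$.

\begin{verbatim}
from itertools import product

def ptf(p):
    # Returns the Boolean function x -> [p(x) > 0], i.e. the
    # polynomial threshold function sign(p(x)).
    return lambda x: p(x) > 0

# p(x) = sum_i (2*x_i - 1) is positive exactly when more than half
# the bits are 1 (and is never zero when n is odd).
maj = ptf(lambda x: sum(2 * b - 1 for b in x))
assert all(maj(x) == (sum(x) > 2) for x in product([0, 1], repeat=5))
\end{verbatim}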

The paper includes full details of proofs, which are interesting for their mix of randomization, analysis, and covering-code constructions.

Open Problems

What other applications can be found for their constructions, and for small width?

Yesterday, today, and tomorrow are collectively the 150th anniversary of the Battle of Gettysburg. Here is the NY Times story of 11/20/63 covering the speech Lincoln had given at Gettysburg the previous day.