Details Left To The Reader...
The buck stops here---on a blog, that is
Stasys Jukna has written a comprehensive book on boolean circuit complexity, called Boolean Function Complexity: Advances and Frontiers. It includes a discussion of Mike Fischer's Theorem on negations, which we recently re-gifted.
Today I would like to fill in some missing details in the proof of Mike's famous result.
Recall the theorem says:
Theorem 1 Let $latex {b(n) = \lceil \log(n+1) \rceil}&fg=000000$. If a boolean function can be computed by a circuit of size $latex {S}&fg=000000$ over the basis $latex {\{\vee,\wedge,\neg\}}&fg=000000$, then it can be computed by a circuit of size $latex {2S + n^{O(1)}}&fg=000000$ over the same basis that uses at most $latex {b(n)}&fg=000000$ negations.
There are a number of places where the ``proof'' of this theorem is given: Fischer's original paper, more recent improvements, and Jukna's book. All say essentially: ``we leave the remaining details to the reader.'' In the book the details are sketched and left as an exercise, with a long hint. We have discussed this before---see here.
Proving and Using
There is nothing wrong with leaving out details, but when everyone seems to do that it can cause a problem. I looked at the proof sketches and thought for a bit---okay, more like a byte---that there might even be an error. I was worried there was a mistake in the proof. There is none. The proof is just fine. I was wrong.
But there is some reason to be concerned. For all the beauty of Fischer's Theorem, it does not seem to have been used to prove something else. I would argue that the real way we get confidence in mathematical results is not by checking their proofs, but by a social process. This process can be improving the main result, which has been done in Fischer's case. However, these improvements mostly affect other parts of the proof, and still leave some details unexplained.
The best way to avoid being concerned and gain confidence in a theorem is to use it to prove results. Who does not believe that numbers modulo a prime form a field? This has been used over and over. A theorem that is used to prove other theorems is much more likely to be correct. If it has a bug and is wrong, there is a kind of strange-attractor phenomenon that tends to lead to a contradiction. Or if not an out-and-out contradiction, it may at least lead to a result that is so surprising as to make one doubt the original theorem.
An example of a theorem where maybe only a couple dozen people have fully vetted the proof, but the result is used all the time, is the $latex {O(\log n)}&fg=000000$-depth, $latex {O(n \log n)}&fg=000000$-size sorting network of comparison gates designed by Miklós Ajtai, János Komlós, and Endre Szemerédi (AKS). A comparison gate has two input values $latex {x_1,x_2}&fg=000000$ and gives two output values, $latex {y_1 = \min\{x_1,x_2\}}&fg=000000$ and $latex {y_2 = \max\{x_1,x_2\}}&fg=000000$. The $latex {O}&fg=000000$ hides a big constant---the network is galactic---but its improvement over Ken Batcher's $latex {O(\log^2 n)}&fg=000000$-depth network was a boon to studying low-depth circuit complexity. If it were wrong, we'd expect to have seen some unbelievable circuit results by now.
Fischer's Proof
Let me try and explain the proof and give you all---well most---of the details, and I will try not to leave anything to the reader. As we recently posted, it suffices to invert the input string $latex {w = w_1 w_2 \cdots w_n}&fg=000000$ so that we also have the sequence $latex {(\bar{w}_1,\bar{w}_2,\dots,\bar{w}_n)}&fg=000000$ using only $latex {b(n)}&fg=000000$ negations. Then we can compute $latex {f(w)}&fg=000000$ by a monotone circuit from those $latex {2n}&fg=000000$ values, at worst doubling the size from $latex {S}&fg=000000$ to $latex {2S}&fg=000000$, and in a sense given at the end of this post we can even do a little better.
The first step is to sort the input bits. This can be done by simulating comparison gates without any negations, since when $latex {x_1,x_2}&fg=000000$ are bits, $latex {y_1}&fg=000000$ is the AND and $latex {y_2}&fg=000000$ is the OR. If we care strongly about low depth we can use the AKS network for this, but we could also use Batcher's. Or whatever.
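To make this concrete, here is a minimal Python sketch of the sorting step; it is my own illustration with invented names (compare_gate, sort_bits), not code from any of the papers, and it uses the simple odd-even transposition network rather than AKS or Batcher. The point to notice is that the comparison gate is just an AND and an OR.

def compare_gate(x1, x2):
    # On bits, min is AND and max is OR: no negation needed.
    return x1 & x2, x1 | x2

def sort_bits(w):
    # Odd-even transposition network: O(n^2) comparison gates,
    # all monotone. AKS or Batcher would do better on depth and
    # size, but with the same gate types.
    x = list(w)
    n = len(x)
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            x[i], x[i + 1] = compare_gate(x[i], x[i + 1])
    return x

For example, sort_bits([1, 0, 1, 0]) returns [0, 0, 1, 1], which is exactly the sorted form used in the next step.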
The second part of the proof, which is critical for us since it uses the negations, is the following: Let $latex {x_{1},\dots,x_{n}}&fg=000000$ be the $latex {n}&fg=000000$ inputs sorted so that
$latex \displaystyle x_{1} \le \cdots \le x_{n}. &fg=000000$
Just to add the obvious, which is not always obvious to all, the list must look like

$latex \displaystyle 0 \underbrace{\dots 0}_{k} \ \underbrace{1 \dots 1}_{m} &fg=000000$
where $latex {n = k+m}&fg=000000$. The goal, given these sorted values, is to construct the vector

$latex \displaystyle y_{1},\dots,y_{n}&fg=000000$
so that each $latex {y_{i}}&fg=000000$ is equal to $latex {\neg x_{i}}&fg=000000$.

The third and final part is to work the initial sorting backwards so that the outputs $latex {y_{1},\dots,y_{n}}&fg=000000$ get routed to their correct locations, corresponding to the $latex {w_i}&fg=000000$'s they negate. For those who care about reducing the size of the network, this is where much of the research action is. Fischer's original idea, used also by Jukna, uses a trick, at the price of quadratic size, which still fits within the $latex {n^{O(1)}}&fg=000000$ term stated above.
The trick's idea is to re-interpret the sorted bit $latex {x_{n-k+1}}&fg=000000$ as telling whether $latex {w}&fg=000000$ has at least $latex {k}&fg=000000$ 1's---call this bit's value $latex {t(k)}&fg=000000$. Now for each $latex {i}&fg=000000$, $latex {1 \leq i \leq n}&fg=000000$, repeat the initial sorting step on the string $latex {w}&fg=000000$ minus bit $latex {w_i}&fg=000000$, and say that the results give values $latex {t_i(k)}&fg=000000$. All of these $latex {n^2}&fg=000000$ values are obtained using monotone gates. The trick itself is that for all $latex {i}&fg=000000$,
$latex \displaystyle \neg w_i = \bigwedge_{k=1}^n (\neg t(k) \vee t_i(k)). &fg=000000$
The point is that if $latex {w_i = 0}&fg=000000$, then bit $latex {i}&fg=000000$ never makes a difference to any ``threshold'' $latex {k}&fg=000000$, so $latex {t(k) = t_i(k)}&fg=000000$ for all $latex {k}&fg=000000$, so one of $latex {\neg t(k)}&fg=000000$ and $latex {t_i(k)}&fg=000000$ is always true, and the big AND gives $latex {1}&fg=000000$. Whereas if $latex {w_i = 1}&fg=000000$ then there is some $latex {k}&fg=000000$ for which $latex {t(k)}&fg=000000$ is true but $latex {t_i(k)}&fg=000000$ isn't, and the big AND gives $latex {0}&fg=000000$.

Thus the bits $latex {y_{1},\dots,y_{n}}&fg=000000$ are really supplying the values $latex {\neg t(1),\dots,\neg t(n)}&fg=000000$ needed for this trick to work. Later authors have had other sorting-based ideas that improve the size, but the second part where the negations arise is the same. With these full details digested, we can focus on this part.
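Since the trick is the one place where I worried about an error, here is a small brute-force check in Python; it is my own sanity test with made-up names (at_least, trick_negation), not part of any published proof. It computes $latex {t(k)}&fg=000000$ directly as ``$latex {w}&fg=000000$ has at least $latex {k}&fg=000000$ ones,'' which is equivalent to reading off the sorted bit $latex {x_{n-k+1}}&fg=000000$.

from itertools import product

def at_least(bits, k):
    # The threshold value t(k): at least k ones?
    return int(sum(bits) >= k)

def trick_negation(w, i):
    # not w_i as the big AND over k of (not t(k) OR t_i(k)).
    # Only the (not t(k)) values are non-monotone; they are
    # exactly what the inverter supplies.
    n = len(w)
    w_minus_i = w[:i] + w[i+1:]
    return int(all((1 - at_least(w, k)) or at_least(w_minus_i, k)
                   for k in range(1, n + 1)))

# Exhaustive check on all strings of length up to 6.
for n in range(1, 7):
    for bits in product([0, 1], repeat=n):
        w = list(bits)
        assert all(trick_negation(w, i) == 1 - w[i] for i in range(n))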
The Second Part
The claim is that for such sorted values it is possible to construct the vector
$latex \displaystyle y_{1},\dots,y_{n}&fg=000000$
so that each $latex {y_{i}}&fg=000000$ is equal to $latex {\neg x_{i}}&fg=000000$, by using a polynomial size circuit having only $latex {b(n)}&fg=000000$ negations. Let $latex {F_{n}(x)=y}&fg=000000$ be this function.

Let's look at the intuition why this should be true. It is really just a simple divide-and-conquer recursion: one negation allows us to reduce the problem to one of half the size. This clearly yields a logarithmic bound. As usual we will assume that $latex {n}&fg=000000$ is a power of $latex {2}&fg=000000$. Look at the middle bit $latex {x_{m}}&fg=000000$ where $latex {m=n/2}&fg=000000$. There are two cases:
$latex {\bullet }&fg=000000$ In this case $latex {x_{m}=0}&fg=000000$. Then we get that
$latex \displaystyle x_{1}=x_{2}=\cdots=x_{m}=0. &fg=000000$
Thus,

$latex \displaystyle F_{n}(x) = 1^{m}F_{n/2}(x_{m+1},\dots,x_{n}). &fg=000000$
$latex {\bullet }&fg=000000$ In this case $latex {x_{m}=1}&fg=000000$. Then we get that
$latex \displaystyle x_{m}=x_{m+1}=\cdots=x_{n}=1. &fg=000000$
Thus,

$latex \displaystyle F_{n}(x) = F_{n/2}(x_{1},\dots,x_{m})0^{m}. &fg=000000$
So what is the difficulty? In pseudo-code we are doing the following:
if $latex {n=1}&fg=000000$ then
$latex \displaystyle \mathbf{return} \ \neg x_{1}. &fg=000000$
if $latex {x_{m}=0}&fg=000000$ then

$latex \displaystyle \mathbf{return} \ 1^{m}F_{n/2}(x_{m+1},\dots,x_{n}) &fg=000000$
else

$latex \displaystyle \mathbf{return} \ F_{n/2}(x_{1},\dots,x_{m})0^{m}. &fg=000000$
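As a program the recursion is immediate. Here is a direct Python transcription (my own, with the illustrative name invert_sorted); the one feature worth noticing is that each call makes exactly one recursive call, which is what will let the circuit get away with one new negation per level.

def invert_sorted(x):
    # x is sorted, x_1 <= ... <= x_n, with n a power of 2.
    # Returns (not x_1, ..., not x_n).
    n = len(x)
    if n == 1:
        return [1 - x[0]]
    m = n // 2
    if x[m - 1] == 0:      # x_m = 0: the left half is all 0s
        return [1] * m + invert_sorted(x[m:])
    else:                  # x_m = 1: the right half is all 1s
        return invert_sorted(x[:m]) + [0] * m

On [0, 0, 1, 1] this returns [1, 1, 0, 0], as it should.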
Well this would be just fine if we were using a programming language, but we are using circuits. They do not allow the full flexible array of constructs---at least not in the obvious way---that programming languages do. So this is the dirty detail that is ``left to $latex {\dots}&fg=000000$''

The issue is that we must use circuits to do two things: (i) the recursive call on two different sets of variables based on the value of $latex {x_{m}}&fg=000000$; and (ii) the output of two different return values, again based on the value of $latex {x_{m}}&fg=000000$. All this must happen without incurring the cost of an extra negation.
Here is how we do this. Let $latex {t=x_{m}}&fg=000000$. We will have access to the values of $latex {t}&fg=000000$ and $latex {\bar{t}=\neg t}&fg=000000$. We will re-use these values many times, but of course we can do this all with one negation. Once $latex {t}&fg=000000$ and $latex {\bar{t}}&fg=000000$ are computed by the circuit, we can by fan-out use the value as many times as we wish. Of course we want to keep the size of the circuit polynomial, but that will follow.
The first problem is how to do two different calls to the circuit $latex {F_{n/2}}&fg=000000$ without incurring extra negations. If we naively just had a circuit for each call, we would wind up with a linear number of negations---we must avoid two recursive calls. So define new variables $latex {u_{1},\dots,u_{m}}&fg=000000$ as follows:
$latex \displaystyle u_{k} = (\bar{t} \wedge x_{m+k}) \vee (t \wedge x_{k}). &fg=000000$
Then we compute

$latex \displaystyle F_{n/2}(u) = w. &fg=000000$
Note that $latex {u}&fg=000000$ is set up so that it is one of the two possible calls, and we use the value of $latex {t}&fg=000000$ to decide which one to call. This uses no additional negations; incurring extras here would have been terrible.

The second problem is that we need to output different values based on the returned answer $latex {w}&fg=000000$ and the value of $latex {t}&fg=000000$. Recall that if $latex {t=0}&fg=000000$ we want the output to be
$latex \displaystyle 1^{m}F_{n/2}(x_{m+1},\dots,x_{n}) = 1^{m}w, &fg=000000$
and if $latex {t=1}&fg=000000$ we want it to be

$latex \displaystyle F_{n/2}(x_{1},\dots,x_{m})0^{m} = w0^{m}.&fg=000000$
The way to do this in a circuit is as follows:

$latex \displaystyle \begin{array}{rcl} r &=& 1^{m}w \\ s &=& w0^{m}. \end{array} &fg=000000$
Then the $latex {k}&fg=000000$-th bit of the output is

$latex \displaystyle (\bar{t}\wedge r_{k}) \vee (t \wedge s_{k}). &fg=000000$
Done.
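To tie everything together, here is a hedged Python sketch of the whole second part; again this is my own rendering, with invented names (F, neg, NEG_COUNT), offered as a sanity check rather than as Fischer's literal circuit. Since neg is the only place a negation gate appears, counting its calls counts the negations.

NEG_COUNT = 0

def neg(b):
    # The only negation gate in the construction.
    global NEG_COUNT
    NEG_COUNT += 1
    return 1 - b

def F(x):
    # Circuit version of the recursion: one new negation per level.
    n = len(x)
    if n == 1:
        return [neg(x[0])]
    m = n // 2
    t = x[m - 1]                     # t = x_m
    tbar = neg(t)                    # this level's single negation
    # Select the arguments of the single recursive call.
    u = [(tbar & x[m + k]) | (t & x[k]) for k in range(m)]
    w = F(u)
    # Select between the two possible outputs.
    r = [1] * m + w                  # wanted when t = 0
    s = w + [0] * m                  # wanted when t = 1
    return [(tbar & r[k]) | (t & s[k]) for k in range(n)]

On input [0, 0, 0, 1, 1, 1, 1, 1] this returns [1, 1, 1, 0, 0, 0, 0, 0], and NEG_COUNT ends up at 4, matching $latex {b(8) = \lceil \log 9 \rceil = 4}&fg=000000$.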
Open Problems
I hope this helped you feel comfortable with the proof. I know that I feel better about it now. In a sense this is all ``obvious'' to strong circuit programmers, just as certain other tasks are ``obvious'' to strong Python---or whatever your favorite language is---programmers. There is always a set of idioms that they know and use daily in their programming. These idioms are so well encoded into their brains that they do not see any reason to supply details. For those of us who are not strong circuit programmers, myself included, spelling out the details helps.
The real open problem is: can we use Fischer's Theorem to prove some things that are new and interesting? I wonder.