{\huge Who Invented Boolean Functions?}

Hint: I am not sure

Calixto Badesa is a faculty member at the University of Barcelona, in the Department of Logic, History, and Philosophy of Science. It is neat to see the idea of the ``logic of science'' added to the familiar ``HPS'' acronym. Badesa himself specializes in Logic---that is, the history of Logic as a discipline.

Today I want to discuss why propositional calculus as we now know it took so long to create.

I am currently reading Badesa's book \emph{The Birth of Model Theory}. His main focus is the fundamental Löwenheim-Skolem Theorem. The name actually refers to several theorems: the first was proved by Leopold Löwenheim in a 1915 paper, and it was later re-proved by Thoralf Skolem.

For Badesa the theorem is pivotal because it leverages the distinction between syntax and semantics. Model Theory is about this distinction, and understanding it makes Logic a mature field. But that's almost saying Logic is less than 100 years old, while logic has been studied for 3,000 years. This makes Ken and me wonder how old our modern understanding of basic concepts really is.

Löwenheim-Skolem

The main result is that any first-order theory with only a countable number of symbols that has an infinite model must also have a countable model. This sounds plausible and is correct, but on second thought it is paradoxical. Consider ZF, or any other reasonable first order theory of set theory. This theory can prove that there are ``uncountable sets,'' yet by the Löwenheim-Skolem Theorem it follows that the theory has a countable model. The trick is that outside the theory we can ``see'' that the model is countable, but inside the theory we cannot. Very neat.
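A modern paraphrase of the downward version, stated in today's notation rather than in Löwenheim's original theory of relatives, reads as follows:

```latex
% Downward Löwenheim-Skolem, modern paraphrase (not Löwenheim's own
% formulation): for a first-order theory T in a countable language,
\[
  \bigl(\exists\,\mathcal{M}:\ \mathcal{M}\models T
        \ \text{and}\ \mathcal{M}\ \text{is infinite}\bigr)
  \;\Longrightarrow\;
  \bigl(\exists\,\mathcal{N}:\ \mathcal{N}\models T
        \ \text{and}\ |\mathcal{N}|=\aleph_0\bigr)
\]
```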

The story of the Löwenheim-Skolem Theorem, like the theorem itself, is somewhat paradoxical. Löwenheim's initial proof is unclear at several key places. Badesa spends the majority of his 220-page book discussing what Löwenheim actually did prove and what he did not, and likewise for Skolem. Perhaps we can discuss this another time; read his book for the full story.

Who invented Boolean functions? This sounds like an old joke of Groucho Marx's. His radio/TV show \emph{You Bet Your Life} was a game show where contestants, usually a couple, tried to win money by answering general-knowledge questions.

The best part of the show was not the game, but Groucho's patter with the contestants. Contestants who won \$25 or less were asked a trivial question by Groucho at the end of the show---he wanted them to have something more to take home. One ``trivial'' question he often used was: Who is buried in Grant's Tomb? Actually the correct answer is ``no one,'' since Ulysses Grant and his wife rest in an above-ground vault rather than being buried; recall that ``buried'' means in the ground. Everyone answered ``Grant'' and was given a small prize, except one man who actually got the question right and shocked Groucho by pointing out that Grant's Tomb is an above-ground mausoleum. Oh well. \includegraphics[width=3in]{game.jpg}

So who actually did invent Boolean functions? Indeed.

Boole's Work

Back to Badesa's book. In his first chapter he goes into quite a bit of detail on what Boole did, and also on what Jevons, Peirce, and Schröder did. Boole used symbols such as $latex {x}&fg=000000$ and $latex {y}&fg=000000$ to represent ``the class of individuals to which a particular name or description is applicable.'' He used $latex {xy}&fg=000000$ to denote the class of things to which both $latex {x}&fg=000000$ and $latex {y}&fg=000000$ are simultaneously applicable. Thus if $latex {x}&fg=000000$ stands for ``white'' and $latex {y}&fg=000000$ for ``horse,'' then $latex {xy}&fg=000000$ stands for ``white horses.'' He used $latex {x+y}&fg=000000$ for ``or,'' but required that the classes $latex {x}&fg=000000$ and $latex {y}&fg=000000$ be disjoint. Although he did sometimes diverge and allowed the classes to overlap, it was really Jevons who later observed that the disjointness restriction is unnecessary.
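To make the two operations concrete, here is a small sketch modeling Boole's classes as Python sets; the universe and the particular classes are invented for illustration, and the disjointness check follows Boole's stated restriction:

```python
# Boole's calculus of classes, modeled with Python sets.  The universe
# and the particular classes here are invented for illustration.
universe = {"white horse", "black horse", "white swan", "black swan"}

x = {t for t in universe if "white" in t}  # x: the class "white"
y = {t for t in universe if "horse" in t}  # y: the class "horses"

# Boole's xy: things to which both names apply, i.e. intersection.
xy = x & y
print(xy)  # {'white horse'}

# Boole's x + y demanded that the summands be disjoint...
def boole_plus(a, b):
    if a & b:
        raise ValueError("Boole required disjoint classes")
    return a | b

# ...while Jevons read + as plain (inclusive) union, always defined.
jevons_plus = x | y
```

Jevons's reading is the one that survives in modern Boolean algebra, where $latex {x+y}&fg=000000$ (or $latex {x \vee y}&fg=000000$) is simply inclusive union.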

The point that I think is the most important is what Boole did not say. He did not say: there are two elements in the world, $latex {0}&fg=000000$ and $latex {1}&fg=000000$; here are the operations that I use on these elements; let us now turn and study their properties. Consider $latex {\dots}&fg=000000$

Schröder studied what Boole did and built a set of axioms that attempt to capture the general structure. He used the symbol $latex {\mathrlap{=}\subset}&fg=000000$ in his theory (I cannot reproduce his symbol exactly). One interesting point that Badesa discusses is the possibility that Schröder may have been the first ever to try to prove that a result is independent of a set of axioms. The issue was whether or not a particular axiom was needed. Peirce thought the axiom in question could be proved from the others, but he was mistaken. In 1883 Schröder purported to show that the axiom was unprovable by exhibiting a model that violates it but satisfies the other axioms. This would show that it does not follow from the other axioms.
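The contested axiom is usually identified as a distributive law. I do not know Schröder's exact countermodel, but the standard modern illustration of the same idea is the pentagon lattice $latex {N_5}&fg=000000$: a five-element structure satisfying the lattice axioms while violating distributivity, which suffices to show that distributivity cannot be derived from them. A brute-force sketch:

```python
from itertools import product

# The pentagon lattice N5: 0 < a < b < 1 and 0 < c < 1, with c
# incomparable to both a and b.  (A modern textbook countermodel,
# not necessarily Schröder's own.)
elems = ["0", "a", "b", "c", "1"]
order = {("0", "a"), ("0", "b"), ("0", "c"), ("0", "1"),
         ("a", "b"), ("a", "1"), ("b", "1"), ("c", "1")}

def leq(x, y):
    return x == y or (x, y) in order

def join(x, y):  # least upper bound
    ubs = [z for z in elems if leq(x, z) and leq(y, z)]
    return next(z for z in ubs if all(leq(z, w) for w in ubs))

def meet(x, y):  # greatest lower bound
    lbs = [z for z in elems if leq(z, x) and leq(z, y)]
    return next(z for z in lbs if all(leq(w, z) for w in lbs))

# The lattice axioms (commutativity, associativity, absorption) hold...
for x, y, z in product(elems, repeat=3):
    assert join(x, y) == join(y, x) and meet(x, y) == meet(y, x)
    assert join(x, join(y, z)) == join(join(x, y), z)
    assert meet(x, meet(y, z)) == meet(meet(x, y), z)
    assert join(x, meet(x, y)) == x and meet(x, join(x, y)) == x

# ...but the distributive law fails, so it is not a consequence:
lhs = join("a", meet("b", "c"))             # a ∨ (b ∧ c) = a
rhs = meet(join("a", "b"), join("a", "c"))  # (a ∨ b) ∧ (a ∨ c) = b
print(lhs, rhs)  # a b
```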

What is interesting about this is not the particular result, but that he may have been the first to prove an independence result via model theory.

Why So Long?

The real numbers are the basis of all mathematics of the Greeks, since geometry by definition is all about the real plane. Later the whole thrust of western mathematics was on functions from real numbers to real numbers. This of course gave rise to the notions of continuous functions, derivatives, integrals, and more. These are beautiful, important, and actually quite delicate concepts. Part of the difficulty is that all touch in one way or another on the infinite. Real numbers demand a good understanding of the infinite, as do all the other related concepts.

Any definition of a real number involves, in some way, the infinite. Whether you define them via infinite decimals, Dedekind cuts, Cauchy sequences of rationals, or otherwise, there is no way to avoid notions based on infinity. At least there are none that I know.

The same issues of infinity arise in the trouble that early mathematicians had in defining precisely the notion of a function. Again, a proper definition requires infinite sets. The number of real functions is not just ``infinity,'' but a larger cardinality in the precise sense of Cantor.

Yet the notion of a boolean space $latex {\{0,1\}}&fg=000000$ of just two elements is all too concrete. At first glance it seems almost too easy. There are no fundamental issues in defining operations on this space. You cannot understand what a real function really is just by giving the values in a table. Yet for a Boolean function, all is said by a finite table---simple like an accountant's view of the world.
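To illustrate the accountant's view: every Boolean function of $latex {n}&fg=000000$ variables is given in full by a finite truth table, and a few lines of Python (an illustrative sketch, not any historical formalism) can enumerate all $latex {2^{2^n}}&fg=000000$ of them:

```python
from itertools import product

def all_boolean_functions(n):
    """Yield every Boolean function of n variables as a finite
    truth table: a dict from input rows to output bits."""
    rows = list(product([0, 1], repeat=n))             # 2^n input rows
    for outputs in product([0, 1], repeat=len(rows)):  # one bit per row
        yield dict(zip(rows, outputs))

funcs = list(all_boolean_functions(2))
print(len(funcs))  # 16 = 2^(2^2), including AND, OR, XOR, ...
```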

So why was the creation of a theory of Boolean functions so hard? My theory is that it was too simple. I argue it is a huge historical surprise that practical notions based on infinity were developed long before Cantor, while the theory of functions mapping $latex {0,1}&fg=000000$'s to $latex {0,1}&fg=000000$'s lay hidden until the 20th century, even though it is so rich and so important. How can functions that you can see in full, because they are finite, be so important?

I would argue that this has become clear only recently. How could early mathematicians know that simple discrete functions were as important as continuous ones?

Logic in the Looking Glass

Ken thinks that we can perceive the answer by examining the work of another Victorian-era logician, Charles Dodgson. That is, Lewis Carroll. Carroll's work in logic mainly involved both formalizing and visualizing arguments based on syllogisms. Syllogisms involve quantifiers, and their terms refer directly to categories and attributes.

Today we might say that the various syllogistic theories of his time are fragments of the full predicate calculus in which quantifier forms are limited in complexity. Bounded-quantifier logic and arithmetic are subtle in computational theory even today. Thus the academics were following the long shadow of Aristotle into meaningfully intricate work, and did not see a need to re-base it.

Carroll was influenced by John Venn and his diagrams for visualizing set-theoretic relations. He is known to have contacted Charles Babbage, but perhaps only insofar as Babbage's devices could be used for numerical calculation. All of this tends away from our modern ideas of logic as based on $latex {0}&fg=000000$ and $latex {1}&fg=000000$, of computation being based on this logic, and even of ``it from bit.''

Open Problems

So who really did invent Boolean functions? And why were they not invented a thousand years ago?