Boolean algebra and computer chips

Computers work by manipulating patterns of zeroes and ones, which are represented physically by electrical signals at two distinct voltage levels. It is fascinating to ponder the fact that, starting with simple manipulations of these electrical signals, we can build something as complex as Mac OS.

The most basic logic units in a computer are called (fittingly) logic gates. These are devices with a number of inputs and a number of outputs, where the only allowed input and output values are zeroes and ones. Thus a logic gate represents a function \{ 0,1 \}^n \to \{ 0,1\}^m. Such functions are called Boolean functions (after George Boole). We often think of zero as representing "false", and one as representing "true". Thus the input of a Boolean function is an array of "truth values", and the output is another array of truth values.

The whole function can be represented by a truth table:

x y  And
0 0   0
0 1   0
1 0   0
1 1   1

Here we have the truth table for the logical function "and", which takes two inputs, and produces one output. It is "true" if and only if both its inputs are true.

Similarly, we have the truth table for the "or" function:

x y  Or
0 0   0
0 1   1
1 0   1
1 1   1

The "or" function \mathrm{or} : \{ 0 , 1 \}^2 \to \{ 0 , 1\} is true if and only if at least one of its inputs is true.

We also have the "negation function" \mathrm{not} : \{ 0, 1 \} \to \{ 0, 1\}, which is true if and only if its input is not true.
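These three functions are easy to express in code. A minimal Python sketch (the names `and_`, `or_`, and `not_` are mine, chosen to avoid clashing with Python's keywords):

```python
def and_(x, y):
    # 1 only when both inputs are 1
    return x & y

def or_(x, y):
    # 1 when at least one input is 1
    return x | y

def not_(x):
    # flips 0 to 1 and 1 to 0
    return 1 - x

# Reproduce the two truth tables above
for x in (0, 1):
    for y in (0, 1):
        print(x, y, and_(x, y), or_(x, y))
```

Running the loop prints exactly the "and" and "or" columns from the tables.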

Given these three functions, we can construct all other Boolean functions.

We first introduce a handy algebraic notation for Boolean functions. We can define them by a formula with variables x,y. We write \mathrm{and}(x,y) = x \cdot y. A formula has no value until it is evaluated at some truth values. Say we put in x=0, and y=0, meaning that both x and y are false. Then we have x \cdot y = 0 \cdot 0=0, since the "And" function must return false here. Similarly, we have 1 \cdot 1 = 1.

For the "or" function we write \mathrm{or}(x,y) = x + y. With this notation, we can manipulate Boolean functions much like in high school algebra. For example, it is true that x \cdot y = y \cdot x, and also that x + y = y + x, since in "and" and "or", order doesn't matter. Not so obvious, but still true, we have that x \cdot (y + z) = x \cdot y + x \cdot z, just as in high school algebra. We also have x + 0 = x, 0 \cdot x = 0, and 1 \cdot x = x (where 0 is the Boolean function that is always false, and 1 the one that is always true).

But here the similarities with high school algebra end. In this algebraic system, it is always true that x \cdot x = x (written x^2 = x for short). This is true because if a statement x is true, then clearly the statement x \mathrm{And} x is true (and conversely). Similarly, we have that x + x = x. Also: x + 1 = 1.
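Since each variable only takes the values 0 and 1, every one of these laws can be verified by brute force over all possible inputs. A quick sketch in Python, using `&` for \cdot and `|` for +:

```python
from itertools import product

# Check each algebraic law at every possible assignment of truth values
for x, y, z in product((0, 1), repeat=3):
    assert (x & y) == (y & x)                    # x*y = y*x
    assert (x | y) == (y | x)                    # x+y = y+x
    assert (x & (y | z)) == ((x & y) | (x & z))  # distributivity
    assert (x | 0) == x                          # x+0 = x
    assert (0 & x) == 0                          # 0*x = 0
    assert (1 & x) == x                          # 1*x = x
    assert (x & x) == x                          # idempotence: x^2 = x
    assert (x | x) == x                          # x+x = x
    assert (x | 1) == 1                          # x+1 = 1

print("all laws hold")
```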

We also write \overline x for the negation function.

Every boolean function of two variables can be described using these three functions. We list all boolean functions of two variables:

Function                                                  x: 0 0 1 1
                                                          y: 0 1 0 1

Constant 0    0                                              0 0 0 0
And           x \cdot y                                      0 0 0 1
x And Not y   x \cdot \overline y                            0 0 1 0
x             x                                              0 0 1 1
Not x And y   \overline x \cdot y                            0 1 0 0
y             y                                              0 1 0 1
Xor           x \cdot \overline y + \overline x \cdot y      0 1 1 0
Or            x + y                                          0 1 1 1
Nor           \overline{x + y}                               1 0 0 0
Equivalence   x \cdot y + \overline x \cdot \overline y      1 0 0 1
Not y         \overline y                                    1 0 1 0
If y then x   x + \overline y                                1 0 1 1
Not x         \overline x                                    1 1 0 0
If x then y   \overline x + y                                1 1 0 1
Nand          \overline{x \cdot y}                           1 1 1 0
Constant 1    1                                              1 1 1 1

The boring way to prove this is to check that all the expressions evaluate to the values in the right columns.
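In code, the boring check takes only a few lines. A sketch in Python, writing each formula from the table with `&` for \cdot, `|` for +, and `1 - x` for \overline x:

```python
# Each entry: (name, formula, expected column for (x,y) = 00, 01, 10, 11)
table = [
    ("Constant 0",  lambda x, y: 0,                             (0, 0, 0, 0)),
    ("And",         lambda x, y: x & y,                         (0, 0, 0, 1)),
    ("x And Not y", lambda x, y: x & (1 - y),                   (0, 0, 1, 0)),
    ("x",           lambda x, y: x,                             (0, 0, 1, 1)),
    ("Not x And y", lambda x, y: (1 - x) & y,                   (0, 1, 0, 0)),
    ("y",           lambda x, y: y,                             (0, 1, 0, 1)),
    ("Xor",         lambda x, y: (x & (1 - y)) | ((1 - x) & y), (0, 1, 1, 0)),
    ("Or",          lambda x, y: x | y,                         (0, 1, 1, 1)),
    ("Nor",         lambda x, y: 1 - (x | y),                   (1, 0, 0, 0)),
    ("Equivalence", lambda x, y: (x & y) | ((1 - x) & (1 - y)), (1, 0, 0, 1)),
    ("Not y",       lambda x, y: 1 - y,                         (1, 0, 1, 0)),
    ("If y then x", lambda x, y: x | (1 - y),                   (1, 0, 1, 1)),
    ("Not x",       lambda x, y: 1 - x,                         (1, 1, 0, 0)),
    ("If x then y", lambda x, y: (1 - x) | y,                   (1, 1, 0, 1)),
    ("Nand",        lambda x, y: 1 - (x & y),                   (1, 1, 1, 0)),
    ("Constant 1",  lambda x, y: 1,                             (1, 1, 1, 1)),
]

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
for name, f, column in table:
    assert tuple(f(x, y) for x, y in inputs) == column, name
print("all 16 formulas check out")
```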

However, a more surprising fact, is that the only boolean function we really need is the "Nand" function: all other boolean functions can be built from combinations of Nand's. This can be proved using the algebraic laws we discussed above.

  • The not-function \{ 0,1 \} \to \{ 0,1 \} is the same as \mathrm{nand}(x,x) = \overline{ x \cdot x } = \overline {x } = \mathrm{not}(x). With physical nand gates and wires, we could build a not gate by wiring both inputs of a single nand gate to the same signal.
  • Given this, it is easy to build the and-function: \mathrm{and}(x,y) = \mathrm{not} \circ \mathrm{nand}(x,y). In hardware, this is a nand gate feeding into a second nand gate wired as a not gate.
  • The construction of "or" is slightly more complicated: negate each input with a nand gate wired as a not gate, then feed both results into a third nand gate. That this actually computes "or" follows from de Morgan's laws: the wiring translates to the formula
    \mathrm{nand}(\mathrm{nand}(x,x), \mathrm{nand}(y,y)). Inserting the definitions, this is \mathrm{nand}(\mathrm{not}(x), \mathrm{not}(y)), which is \overline{ \overline{x} \cdot \overline{y}}. By de Morgan's law, this expression is equal to \overline{\overline{x}} + \overline{\overline{y}} = x + y.
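The three constructions above can be simulated directly. A sketch in Python, where `nand` is the only primitive and every other gate is a combination of nands:

```python
def nand(x, y):
    # the single primitive gate
    return 1 - (x & y)

def not_(x):
    # a nand gate with both inputs wired to the same signal
    return nand(x, x)

def and_(x, y):
    # a nand gate followed by a nand-as-not gate
    return not_(nand(x, y))

def or_(x, y):
    # negate each input, then nand the results (de Morgan)
    return nand(not_(x), not_(y))

# Verify all three derived gates against their truth tables
for x in (0, 1):
    for y in (0, 1):
        print(x, y, not_(x), and_(x, y), or_(x, y))
```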

Since we can build "not", "and", and "or", it follows that we can build all other logic gates. Thus everything a computer does comes from a simple "nand".

So far we have talked about the basic logic gates, and introduced an algebraic notation for computing with boolean functions. The next steps in explaining how a computer works would be to talk about arithmetic units, for example. That would be in a future blog post!

(an interesting mathematical question is this: what kind of algebra do the boolean functions constitute? They do not form a ring, since there are no additive inverses...)