We have our first guitar chord, E major:

We can apply a simple transformation to it to generate all of the major chords (there are twelve of them). The transformation is called transposition, and it simply moves all the notes up the fretboard by the same distance. We can easily move the three fingers that form the shape of E, but there are also three open strings. They have to be shifted as well. This is where barring with your index finger comes in handy. Your finger creates a new nut (that’s what the upper end of the fretboard is called).

Below is the A major chord created by shifting E five semitones, or five frets, down the fretboard. The intervals don’t change, but the root changes from E to A, and all the notes get renamed accordingly.

This is how you grip it.

Technically, barring a chord is not easy for beginners. You have to develop enough strength and precision in your left hand. But conceptually, it’s very simple. Shifting the whole shape doesn’t change the relative intervals, so a major chord remains a major chord. If you don’t have perfect pitch and somebody shifted a chord, you might not be able to tell. It’s all relative.

That’s why it reminds me of special relativity. You are looking at the same chord from a different frame of reference. All the laws of physics (the relative intervals) are the same. There is even an analog of the relativistic shortening of distances (Lorentz contraction): the distances between frets get shorter as you move down the fretboard. If the frets continued all the way to the bridge, the distances between them would shrink to zero, and there would be infinitely many of them. Reaching the bridge is like reaching the speed of light in special relativity.

It’s very useful to know the names of frets on the E string, because each of them can become the root of a shifted chord. They are, starting from the zeroth fret, or the nut:

E, F, F#, G, G#, A, A#, B, C, C#, D, D#, E.

The twelfth fret is E again, one octave higher, and then the pattern repeats itself. As you can see, there is some regularity in the naming of notes, but then there are the odd cases. Every note can be sharped (the # sign), but you don’t see E# there because it’s the same as F. Likewise, B# is identified with C.
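
Since barring simply adds the same number of frets (semitones) to every note, the root of each barred chord can be read off mechanically. Here’s a minimal Haskell sketch (the names noteNames, fretName, and barredMajor are mine, used purely for illustration):

noteNames :: [String]
noteNames = ["E","F","F#","G","G#","A","A#","B","C","C#","D","D#"]

-- the name of the note at a given fret of the E string
fretName :: Int -> String
fretName n = noteNames !! (n `mod` 12)

-- the major chord obtained by barring the E shape at a given fret
barredMajor :: Int -> String
barredMajor n = fretName n ++ " major"

For instance, barredMajor 5 gives "A major", matching the grip above, and barredMajor 3 gives "G major".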

G major is barred on the third fret (three semitones from E):


and so on.

Later we’ll see that almost all chords with un-sharped names have alternative grips that don’t require barring. The odd one out is F, which is really hard for beginners to play:

There is an alternative fingering that requires pressing the two thinnest strings with the index finger and either not playing the thickest E string (muting it), or pressing it with the thumb wrapped around the neck:


Just for fun, here’s the F# major chord in which all the notes are sharped.

Perhaps surprisingly, transposition on the piano is much harder, because of the white key / black key irregularities.

Next time we’ll talk about minor and seventh chords.


Music teaches us a lot about reality. It shows enough regularity to suggest a simple mathematical model, but also enough irregularity to frustrate our attempts at formalizing it. In this series of essays, I’ll try to describe some of this frustration mixed with fascination. I’m going to talk about the guitar, both because I know more about it and because it’s even quirkier than the piano.

The guitar is a versatile instrument. You can play individual notes of a melody, you can play chords, and you can play the bass line, sometimes all at the same time. All this with just six strings. These six strings are tuned in such a way as to maximize the number of chords that can be played on them. You play chords by making shapes with your left hand (or the right hand if, like Jimi Hendrix, you’re left-handed). It’s a very interesting optimization problem that involves equal parts of music theory and human anatomy.

Here are some anatomical constraints: we have five fingers in the string-pressing hand. The thumb is mostly used for grip, although you can sometimes use it to play bass notes on the thickest string by reaching around the neck. That leaves us with four fingers to control six strings. If we want to strum all six strings, we have two options: we can let two or more strings ring free, or use one of the fingers (usually the index finger) to bar multiple strings and use the remaining three fingers to create the chord shape. So the basic chord shape is a three-finger grip. If we can form a chord with three fingers, we can move it up the fretboard using the index finger for barring. As with all musical instruments, the available shapes are limited by anatomy: we can only stretch our fingers so much.

Now for some music theory. The basic chords are triads built from three notes: the root, the third, and the fifth (relative to the root). The intervals between these notes determine the type of the chord. A major triad is built from a major third and a minor third (the sum of these thirds is a perfect fifth–yes, in music 3 + 3 = 5). The C major triad, for instance, consists of three notes: C, E, and G. The distance from C to E is a major third, and the distance from E to G is a minor third. The distance from C to G is a perfect fifth.

Naively, we might think that the guitar should be tuned in thirds, say, the lowest string C, then E, and then G. But what then? What about the three remaining strings? We could repeat C, E, and G, an octave higher. That would be okay if we only wanted to play major triads. But there are also minor triads, with a minor third followed by a major third. C minor triad is C, E♭ (E flat), and G. So maybe we could use that for the tuning? It would allow us to play C minor with no fingers, and C major by pressing two strings with two fingers. Unfortunately, there are many other types of chords that would be very hard to play in that tuning, so this idea is scrapped.

Observe, though, that with six strings, it’s unavoidable that some notes of the triad would have to be doubled (modulo shifting by an octave or two). This introduces more intervals between notes: for instance, the distance from G to the C in the next octave is a fourth. So within a duplicated triad we have the intervals of a major third, minor third, the perfect fifth (their sum), a fourth (from G to C), as well as two sixths (from E to C and from G to E), a few octaves, and so on.

So here’s a new idea: if we tune the strings in fourths, we can easily, without stretching our fingers too much, produce thirds, fourths, and fifths. That’s because we can shorten an interval by pressing the lower string, or lengthen it by pressing the higher string.

Let’s see how this works. The lowest string on the guitar is E, so that’s where we’ll start. A fourth above it is A, so that’s the next thickest string. Let’s see what intervals we can make using those two.

By pressing the E string at the first fret, we can produce a major third, F to A.

By pressing it at the second fret, we can produce a minor third, F# to A.

By releasing the E string and pressing the A string at the second fret, we can produce a perfect fifth, from E to B.


And, of course, by releasing both strings, we get a perfect fourth, from E to A. That’s a lot of handy intervals within easy reach.

Let’s use this idea to build the simplest guitar chord, E major, which contains E, G#, and B. In principle, the order of these notes and the octaves they are in don’t matter, but some combinations sound better than others. We’ll start with the open E string for the root. For now, let’s assume the tuning in fourths, so the second string is A, the third D, and the fourth G.

We can press the A string at the second fret to produce the fifth of the triad, B. (We are skipping G# for now, because it’s not easily reachable.)

The next triad note within reach is another E, an octave higher. We can play it by pressing the D string at the second fret.

Now we can finally add the third, G#, by pressing the next string, G, at the first fret.

We now have the root (doubled), the fifth, and the third of the triad.

There are two more strings to go, and we have already used three fingers to press three strings. If we continued tuning the strings in fourths, the next string would be C. That’s not part of our triad, and we can’t easily stretch our pinky to reach the next E. So we begrudgingly give up on our rule of fourths and instead tune the next string a semitone lower than we promised, to a B. B happens to be in our triad, so we’re fine. And a fourth above B is again E, so that works too.

Here are the notes we used in this grip, together with intervals between them.

Notice that the root E is repeated three times, in different octaves. The fifth of the triad, B, appears twice, and the third, G#, only once. As we’ll see later, this arrangement gives us a lot of flexibility when transforming this grip.
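
Just to check the arithmetic, here’s a minimal Haskell sketch (the names are mine) that recomputes these notes from the open-string pitches and the frets of the grip, with the strings listed from thickest to thinnest:

noteNames :: [String]
noteNames = ["E","F","F#","G","G#","A","A#","B","C","C#","D","D#"]

note :: Int -> String
note n = noteNames !! (n `mod` 12)

openStrings :: [Int]   -- semitones above the low E: the strings E, A, D, G, B, E
openStrings = [0, 5, 10, 15, 19, 24]

eMajorGrip :: [Int]    -- frets pressed, 0 meaning an open string
eMajorGrip = [0, 2, 2, 1, 0, 0]

gripNotes :: [String]
gripNotes = zipWith (\open fret -> note (open + fret)) openStrings eMajorGrip
-- ["E","B","E","G#","B","E"]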

The leap of a fifth in the bass, from E to B, is actually very pleasant to the ear — skipping the G# there is advantageous.

Here’s the same grip annotated with root-relative intervals. 1 is the root (E), 3 is the third (G#), and 5 is the fifth (B). It’s very important to remember which is which, in order to understand how to transform this shape to produce other interesting chords.

Not surprisingly, this is called the E shape in the popular CAGED system.

We’ve used only three fingers, which is great, because we’ll be able to use the index finger as a bar to move this triad up the fretboard, if we so wish.

In the process, we have arrived at the standard guitar tuning: E, A, D, G, B, E. It is basically in fourths, except for the major third from G to B. This one exception introduces a lot of complexity into chord building on the guitar.

By now, you might have noticed some irregularities in music notation. They have accumulated over the centuries of development. We now use the so called equal temperament system in which the basic interval is a semitone, corresponding to one fret on the guitar. Standard musical intervals can be expressed in semitones, with the additional convenience that they satisfy standard arithmetic. For instance, a minor third is 3 semitones, a major third is 4 semitones, their sum is 7 semitones, corresponding to a perfect fifth. A perfect fourth is 5 semitones, which is an octave (12 semitones) minus the perfect fifth (7 semitones).

We could have motivated our tuning by postulating the distance of two octaves (24 semitones) between the lowest and the highest string. If we divide two octaves among six strings (5 intervals), we get 4.8 semitones per interval, which is almost a perfect fourth (5 semitones), but not quite. That’s why we introduced the “leap interval” of a major third between the G and the B strings.
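
Spelled out as a quick calculation (a sketch; the names are mine), this arithmetic and the tuning compromise look like this:

minorThird, majorThird, perfectFourth, perfectFifth, octave :: Int
minorThird    = 3
majorThird    = 4
perfectFourth = 5                          -- also: octave - perfectFifth
perfectFifth  = majorThird + minorThird    -- 7 semitones: "3 + 3 = 5" in interval names
octave        = 12

-- steps between consecutive open strings: E->A, A->D, D->G, G->B, B->E
tuningSteps :: [Int]
tuningSteps = [perfectFourth, perfectFourth, perfectFourth, majorThird, perfectFourth]

spansTwoOctaves :: Bool
spansTwoOctaves = sum tuningSteps == 2 * octave   -- True

-- five perfect fourths would overshoot two octaves by one semitone
overshoot :: Int
overshoot = 5 * perfectFourth - 2 * octave        -- 1

The five steps of the standard tuning add up to exactly two octaves, whereas five perfect fourths would overshoot them by one semitone–hence the single major-third step.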

Next, I’ll show you how all common chords and the majority of jazz chords can be derived from this single shape by applying various transformations (or, as mathematicians call them, morphisms).

Acknowledgment

I used the excellent free web program chordpic to generate my string diagrams.


The tremendous success, in recent centuries, of science and technology in explaining the world around us and improving the human condition has helped create the impression that we are on the brink of understanding the Universe. The world is complex, but we seem to have been able to reduce its complexity down to a relatively small number of fundamental laws. These laws are formulated in the language of mathematics, and the idea is that, even if we can’t solve all the equations describing complex systems, at least we can approximate the solutions, usually with the help of computers. These successes led to a feeling bordering on euphoria at the power of our reasoning. Eugene Wigner summed up this feeling in his famous essay, The Unreasonable Effectiveness of Mathematics in the Natural Sciences.

Granted, there are still a few missing pieces, like the unification of gravity with the Standard Model, and the 95% of the mass of the Universe unaccounted for, but we’re working on it… So there’s nothing to worry about, right?

Actually, if you think about it, the idea that the Universe can be reduced to a few basic principles is pretty preposterous. If this turned out to be the case, I would be the first to believe that we live in a simulation. It would mean that this enormous Universe, with all its galaxies, stars, and planets, was designed with one purpose in mind: that a bunch of sentient monkeys on the third planet from a godforsaken star in a godforsaken galaxy would be able to understand it. That they would be able to build, in their puny brains–maybe extended with some silicon chips and fiber optics–a perfect model of it.

How do we understand things? By building models in our (possibly computer-enhanced) minds. Obviously, this only makes sense if the model is smaller than the actual thing, which is only possible if reality is compressible. Now compare the size and the complexity of the Universe with the size and the complexity of our collective brains. Even with lossy compression, the discrepancy is staggering. But, you might say, we don’t need to model the totality of the Universe, just the small part around us. This is where compositionality becomes paramount. We assume that the world can be decomposed, and that the relevant part of it can be modeled, to a good approximation, independently from the rest.

Reductionism, which has been fueling science and technology, was made possible by the decompositionality of the world around us. And by “around us” I mean not only physical vicinity in space and time, but also proximity of scale. Consider that there are 35 orders of magnitude between us and the Planck length (which is where our most precious model of spacetime breaks down). It’s perfectly possible that the “sphere of decompositionality” we live in is but a thin membrane, more of an anomaly than a rule. The question is, why do we live in this sphere? Because that’s where life is! Call it the anthropic, or biotic, principle.

The first rule of life is that there is a distinction between the living thing and the environment. That’s the primal decomposition.

It’s no wonder that one of the first “inventions” of life was the cell membrane. It decomposed space into the inside and the outside. But even more importantly, every living thing contains a model of its environment. Higher animals have brains to reason about the environment (where’s food? where’s predator?). But even a lowly virus encodes, in its DNA or RNA, the tricks it uses to break into a cell. Show me your RNA, and I’ll tell you how you spread. I’d argue that the definition of life is the ability to model the environment. And what makes the modeling possible is that the environment is decomposable and compressible.

We don’t think much of the possibility of life on the surface of a proton, mostly because we think that the proton is too small. But a proton is closer to our scale than it is to the Planck scale. A better argument is that the environment at the proton scale is not easily decomposable. A quarkling would not be able to produce a model of its world that would let it compete with other quarklings and start the evolution. A quarkling wouldn’t even be able to separate itself from its surroundings.

Once you accept the possibility that the Universe might not be decomposable, the next question is, why does it appear to be so overwhelmingly decomposable? Why do we believe so strongly that the models and theories that we construct in our brains reflect reality? In fact, for the longest time people would study the structure of the Universe using pure reason rather than experiment (some still do). Ancient Greek philosophers were masters of such introspection. This makes perfect sense if you consider that our brains reflect millions of years of evolution. Euclid didn’t have to build a Large Hadron Collider to study geometry. It was obvious to him that two parallel lines never intersect (it took us two thousand years to start questioning this assertion–still using pure reason).

You cannot talk about decomposition without mentioning atoms. The Ancient Greeks came up with this idea by pure reasoning: if you keep cutting stuff, eventually you’ll get something that cannot be cut any more, the “uncuttable” or, in Greek, ἄτομον [atomon]. Of course, nowadays we know how to cut not only atoms but also protons and neutrons. You might say that we’ve been pretty successful in our decomposition program. But these successes came at the cost of constantly redefining the very concept of decomposition.

Intuitively, we have no problem imagining the Solar System as composed of the Sun and the planets. So when we figured out that atoms were not elementary, our first impulse was to see them as little planetary systems. That didn’t quite work, and we know now that, in order to describe the composition of the atom, we need quantum mechanics. Things are even stranger when decomposing protons into quarks. You can split an atom into free electrons and a nucleus, but you can’t split a proton into individual quarks. Quarks manifest themselves only under confinement, at high energies.

Also, the masses of the three constituent quarks add up to only one percent of the mass of the proton. So where does the rest of the mass come from? From virtual gluons and quark/antiquark pairs. So are those also the constituents of the proton? Well, sort of. This decomposition thing gets really tricky once you get into quantum field theory.

Human babies don’t need to experiment with falling into a precipice in order to learn to avoid visual cliffs. We are born with some knowledge of spatial geometry, gravity, and the (painful) properties of solid objects. We also learn to break things apart very early in life. So decomposition by breaking apart is very intuitive, and the idea of a particle–the ultimate result of breaking apart–makes intuitive sense. There is another decomposition strategy: breaking things into waves. Again, it was the Ancient Greeks: Pythagoras, who studied music by decomposing it into harmonics, and Aristotle, who suggested that sound propagates through the movement of air. Eventually we uncovered wave phenomena in light, and then in the rest of the electromagnetic spectrum. But our intuitions about particles and waves are very different. In essence, particles are supposed to be localized, while waves are distributed. The two decomposition strategies seem to be incompatible.

Enter quantum mechanics, which tells us that every elementary chunk of matter is both a wave and a particle. Even more shockingly, the distinction depends on the observer. When you don’t look at it, the electron behaves like a wave, the moment you glance at it, it becomes a particle. There is something deeply unsatisfying about this description and, if it weren’t for the amazing agreement with experiment, it would be considered absurd.

Let’s summarize what we’ve discussed so far. We assume that there is some reality (otherwise, there’s nothing to talk about), which can be, at least partially, approximated by decomposable models. We don’t want to identify reality with models, and we have no reason to assume that reality itself is decomposable. In our everyday experience, the models we work with fit reality almost perfectly. Outside everyday experience, especially at short distances, high energies, high velocities, and strong gravitational fields, our naive models break down. A physicist’s dream is to create the ultimate model that would explain everything. But any model is, by definition, decomposable. We don’t have a language to talk about non-decomposable things other than describing what they aren’t.

Let’s discuss a phenomenon that is borderline non-decomposable: two entangled particles. We have a quantum model that describes a single particle. A two-particle system should be some kind of composition of two single-particle systems. Things may be complicated when particles are close together, because of possible interaction between them, but if they move in opposite directions for long enough, the interaction should become negligible. This is what happens in classical mechanics, and also with isolated wave packets. When one experimenter measures the state of one of the particles, this should have no impact on the measurement done by another far-away scientist on the second particle. And yet it does! There is a correlation that Einstein called “the spooky action at a distance.” This is not a paradox, and it doesn’t contradict special relativity (you can’t pass information from one experimenter to the other). But if you try to stuff it into either particle or wave model, you can only explain it by assuming some kind of instant exchange of data between the two particles. That makes no sense!

We have an almost perfect model of quantum mechanical systems using wave functions until we introduce the observer. The observer is the Godzilla-like mythical beast that behaves according to classical physics. It performs experiments that result in the collapse of the wave function. The world undergoes an instantaneous transition: wave before, particle after. Of course, an instantaneous change violates the principles of special relativity. To restore it, physicists came up with quantum field theory, in which the observers are essentially relegated to infinity (which, for all intents and purposes, starts a few centimeters away from the point of the violent collision in a collider). In any case, quantum theory is incomplete because it requires an external classical observer.

The idea that measurements may interfere with the system being measured makes perfect sense. In the macro world, when we shine light on something, we don’t expect to disturb it too much; but we understand that the micro world is much more delicate. What’s happening in quantum mechanics is more fundamental, though. The experiment forces us to switch models. We have one perfectly decomposable model in terms of the Schroedinger equation. It lets us understand the propagation of the wave function from one point to another, from one moment to another. We stick to this model as long as possible, but a time comes when it no longer fits reality. We are forced to switch to a different, also decomposable, particle model. Reality doesn’t suddenly collapse. It’s our model that collapses, because we insist–we have no choice!–on decomposability. But if nature is not decomposable, one model cannot possibly fit all of it.

What happens when we switch from one model to another? We have to initialize the new model with data extracted from the old model. But these models are incompatible. Something has to give. In quantum mechanics, we lose determinism. The transition doesn’t tell us how exactly to initialize the new model, it only gives us probabilities.

Notice that this approach doesn’t rely on the idea of a classical observer. What’s important is that somebody or something is trying to fit a decomposable model to reality, usually locally, although the case of entangled particles requires the reconciliation of two separate local models.

Model switching and model reconciliation also show up in the interpretation of the twin paradox in special relativity. In this case we have three models: the twin on Earth, the twin on the way to Proxima Centauri, and the twin on the way back. They start by reconciling their models–synchronizing the clocks. When the astronaut twin returns from the trip, they reconcile their models again. The interesting thing happens at Proxima Centauri, where the second twin turns around. We can actually describe the switch between the two models, one for the trip to, and another for the trip back, using more advanced general relativity, which can deal with accelerating frames. General relativity allows us to keep switching between local models, or inertial frames, in a continuous way. One could speculate that similar continuous switching between wave and particle models is what happens in quantum field theory.

In math, the closest match to this kind of model-switching is in the definition of topological manifolds and fiber bundles. A manifold is covered with maps–local models of the manifold in terms of simple n-dimensional spaces. Transitions between maps are well defined, but there is no guarantee that there exists one global map covering the whole manifold. To my knowledge, there is no theory in which such transitions would be probabilistic.

Seen from a distance, physics looks like a very patchy system, full of holes. Traditional wisdom has it that we should eventually be able to fill the holes and connect the patches. This optimism has its roots in the astounding series of successes in the first half of the twentieth century. Unfortunately, since then we have entered a stagnation era, despite a record number of people and resources dedicated to basic research. It’s possible that this is a temporary setback, but there is a definite possibility that we have simply reached the limits of decomposability. There is still a lot to explore within the decomposability sphere, and the amount of complexity that can be built on top of it is boundless. But there may be some areas that will forever be out of bounds to our reason.


Fig 1. Current decomposability sphere.

  • GR: General Relativity (gravity)
  • SR: Special Relativity
  • PQFT: Perturbative Quantum Field Theory (compatible with SR)
  • QM: Quantum Mechanics (non-relativistic)
  • BB: Big Bang
  • H: Higgs Field
  • SB: Symmetry Breaking (inflation)

Previously, we talked about the construction of initial algebras. The dual construction is that of terminal coalgebras. Just like an algebra can be used to fold a recursive data structure into a single value, a coalgebra can do the reverse: it lets us build a recursive data structure from a single seed.

Here’s a simple example. We define a tree that stores values in its nodes

data Tree a = Leaf | Node a (Tree a) (Tree a)

We can build such a tree from a single list serving as our seed. We can choose the algorithm in such a way that the resulting tree is ordered in a particular way

import Data.List (partition)

split :: Ord a => [a] -> Tree a
split [] = Leaf
split (a : as) = Node a (split l) (split r)
  where (l, r) = partition (< a) as

A traversal of this tree will produce a sorted list. We’ll get back to this example after working out some theory behind it.
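
For concreteness, here is one such traversal (a sketch; the name toList is mine, not part of the original example):

toList :: Tree a -> [a]
toList Leaf         = []
toList (Node a l r) = toList l ++ [a] ++ toList r

With it, toList (split [3, 1, 2]) evaluates to [1, 2, 3].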

The functor

The tree in our example can be derived from the functor

data TreeF a x = LeafF | NodeF a x x
instance Functor (TreeF a) where
  fmap _ LeafF = LeafF
  fmap f (NodeF a x y) = NodeF a (f x) (f y)

Let’s simplify the problem and forget about the payload of the tree. We’re interested in the functor

data F x = LeafF | NodeF x x

Remember that, in the construction of the initial algebra, we were applying consecutive powers of the functor to the initial object. The dual construction of the terminal coalgebra involves applying powers of the functor to the terminal object: the unit type () in Haskell, or the singleton set 1 in Set.

Let’s build a few such trees. Here are some terms generated by the first power of F

w1, u1 :: F ()
w1 = LeafF
u1 = NodeF () ()

And here are some generated by the square of F acting on ()

w2, u2, t2, s2 :: F (F ())

w2 = LeafF
u2 = NodeF w1 w1
t2 = NodeF w1 u1
s2 = NodeF u1 u1

Or, graphically

Notice that we are getting two kinds of trees, ones that have units () in their leaves and ones that don’t. Units may appear only at the (n+1)-st layer (root being the first layer) of F^n.

We are also getting some duplication between different powers of F. For instance, we get a single LeafF at the F level and another one at the F^2 level (in fact, at every consecutive level after that as well). The node with two LeafF leaves appears at every level starting with F^2, and so on. The trees without unit leaves are the ones we are familiar with—they are the finite trees. The ones with unit leaves are new and, as we will see, they will contribute infinite trees to the terminal coalgebra. We’ll construct the terminal coalgebra as a limit of an \omega-chain.

Terminal coalgebra as a limit

As was the case with initial algebras, we’ll construct a chain of powers of F, except that we’ll start with the terminal rather than the initial object, and we’ll use a different morphism to link them together. By definition, there is only one morphism from any object to the terminal object. In category theory, we’ll call this morphism \mbox{!`} \colon a \to 1 (upside-down exclamation mark) and implement it in Haskell as a polymorphic function

unit :: a -> ()
unit _ = ()

First, we’ll use \mbox{!`} to connect F 1 to 1, then lift \mbox{!`} to connect F^2 1 to F 1, and so on, using F^n \mbox{!`} to transform F^{n + 1} 1 to F^n 1.

Let’s see how it works in Haskell. Applying unit directly to F () turns it into ().

Values of the type F (F ()) are mapped to values of the type F ()

w2' = fmap unit w2
> LeafF
u2' = fmap unit u2
> NodeF () ()
t2' = fmap unit t2
> NodeF () ()
s2' = fmap unit s2
> NodeF () ()

and so on.

The following pattern emerges. F^n 1 contains trees that end with either leaves (at any level) or values of the unit type (only at the lowest, (n+1)-st level). The lifted morphism F^{n-1} \mbox{!`} (the (n-1)st power of fmap acting on unit) operates strictly on the nth level of a tree. It turns leaves and two-unit-nodes into single units ().

Alternatively, we can look at the preimages of these mappings—conceptually reversing the arrows. Observe that all trees at the F^2 level can be seen as generated from the trees at the F level by replacing every unit () with either a leaf LeafF or a node NodeF () ().

It’s as if a unit were a universal seed that can either sprout a leaf or a two-unit-node. We’ll see later that this process of growing recursive data structures from seeds corresponds to anamorphisms. Here, the terminal object plays the role of a universal seed that may give rise to two parallel universes. These correspond to the inverse image (a so-called fiber) of the lifted unit.
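
To make the sprouting concrete, here is a small sketch (the names sprouts and expand are mine) that, for the simplified functor F, lists all the trees in F (F ()) lying over a given tree in F ():

-- the two things a seed () can sprout into
sprouts :: [F ()]
sprouts = [LeafF, NodeF () ()]

-- the preimage of a tree under the lifted unit
expand :: F () -> [F (F ())]
expand LeafF       = [LeafF]
expand (NodeF _ _) = [NodeF s t | s <- sprouts, t <- sprouts]

For example, expand w1 is just [LeafF], while expand u1 has four elements.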

Now that we have an \omega-chain, we can define its limit. It’s easier to understand a limit in the category of sets. A limit in Set is a set of cones whose apex is the singleton set.

The simplest example of a limit is a product of sets. In that case, a cone with a singleton at the apex corresponds to a selection of elements, one per set. This agrees with our understanding of a product as a set of tuples.

A limit of a directed finite chain is just the starting set of the chain (the rightmost object in our pictures). This is because all projections, except for the rightmost one, are determined by commuting triangles. In the example below, \pi_b is determined by \pi_a:

\pi_b = f_1 \circ \pi_a

and so on. Here, every cone from 1 is fully determined by a function 1 \to a, and the set of such functions is isomorphic to a.

Things are more interesting when the chain is infinite, and there is no rightmost object—as is the case of our \omega-chain. It turns out that the limit of such a chain is the terminal coalgebra for the functor F.

In this case, the interpretation where we look at preimages of the morphisms in the chain is very helpful. We can view a particular power of F acting on 1 as a set of trees generated by expanding the universal seeds embedded in the trees from the lower power of F. Those trees that had no seeds, only LeafF leaves, are just propagated without change. So the limit will definitely contain all these trees. But it will also contain infinite trees. These correspond to cones that select ever growing trees in which there are always some seeds that are replaced with double-seed-nodes rather than LeafF leaves.

Compare this with the initial algebra construction which only generated finite trees. The terminal coalgebra for the functor TreeF is larger than the initial algebra for the same functor.

We have also seen a functor whose initial algebra was an empty set

data StreamF a x = ConsF a x

This functor has a well-defined non-empty terminal coalgebra. The n-th power of (StreamF a) acting on () consists of lists of as

ConsF a1 (ConsF a2 (... (ConsF an ())...))

The lifting of unit acting on such a list replaces the final (ConsF a ()) with () thus shortening the list by one item. Its “inverse” replaces the seed () with any value of type a (so it’s a multi-valued inverse, since there are, in general, many values of a). The limit is isomorphic to an infinite stream of a. In Haskell it can be written as a recursive data structure

data Stream a = Cons a (Stream a)
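
The shortening performed by the lifted unit can be checked concretely (a sketch; I’m supplying the Functor instance for StreamF, which wasn’t spelled out above):

instance Functor (StreamF a) where
  fmap f (ConsF a x) = ConsF a (f x)

xs3 :: StreamF Int (StreamF Int (StreamF Int ()))
xs3 = ConsF 1 (ConsF 2 (ConsF 3 ()))

xs2 :: StreamF Int (StreamF Int ())
xs2 = fmap (fmap unit) xs3
> ConsF 1 (ConsF 2 ())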

Anamorphism

The limit of a diagram is defined as a universal cone. In our case this would be the cone consisting of the object we’ll call \nu F, with a set of projections \pi_n

such that any other cone factors through \nu F. We want to show that \nu F (if it exists) is a terminal coalgebra.

First, we have to show that \nu F is indeed a coalgebra, that is, there exists a morphism

k \colon \nu F \to F (\nu F)

We can apply F to the whole diagram. If F preserves \omega-limits, then we get a universal cone with the apex F (\nu F) and the \omega-chain with F 1 on the left. But our original object \nu F forms a cone over the same chain (ignoring the projection \pi_0). Therefore there must be a unique mapping k from it to F (\nu F).

The coalgebra (\nu F, k) is terminal if there is a unique morphism from any other coalgebra to it. Consider, for instance, a coalgebra (A, \kappa \colon A \to F A). With this coalgebra, we can construct an \omega-chain

We can connect the two \omega-chains using the terminal morphism from A to 1 and all its liftings

Notice that all squares in this diagram commute. The leftmost one commutes because 1 is the terminal object, therefore the mapping from A to it is unique, so the composite \mbox{!`} \circ F \mbox{!`} \circ \kappa must be the same as \mbox{!`}. A is therefore an apex of a cone over our original \omega-chain. By universality, there must be a unique morphism from A to the limit of this \omega-chain, \nu F. This morphism is in fact a coalgebra morphism and is called the anamorphism.

Adjunction

The constructions of initial algebras and terminal coalgebras can be compactly described using adjunctions.

There is an obvious forgetful functor U from the category of F-algebras C^F to C. This functor just picks the carrier and forgets the structure map. Under certain conditions, the left adjoint free functor \Phi exists

C^F ( \Phi x, a) \cong C(x, U a)

This adjunction can be evaluated at the initial object (the empty set in Set).

C^F ( \Phi 0, a) \cong C(0, U a)

This shows that there is a unique algebra morphism—the catamorphism—from \Phi 0 to any algebra a. This is because the hom-set C(0, U a) is a singleton for every a. Therefore \Phi 0 is the initial algebra \mu F.

Conversely, there is a cofree functor \Psi

C_F(c, \Psi x) \cong C(U c, x)

It can be evaluated at a terminal object

C_F(c, \Psi 1) \cong C(U c, 1)

showing that there is a unique coalgebra morphism—the anamorphism—from any coalgebra c to \Psi 1. This shows that \Psi 1 is the terminal coalgebra \nu F.

Fixed point

Lambek’s lemma works for both initial algebras and terminal coalgebras. It states that their structure maps are isomorphisms, and therefore their carriers are fixed points of the functor F

\mu F \cong F (\mu F)

\nu F \cong F (\nu F)

The difference is that \mu F is the least fixed point, and \nu F is the greatest fixed point of F. They are, in principle, different. And yet, in a programming language like Haskell, we only have one recursively defined data structure defining the fixed point

newtype Fix f = Fix { unFix :: f (Fix f) }

So which one is it?

We can define both catamorphisms from, and anamorphisms to, the fixed point:

type Algebra f a = f a -> a

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

type Coalgebra f a = a -> f a

ana :: Functor f => Coalgebra f a -> a -> Fix f
ana coa = Fix . fmap (ana coa) . coa
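
As a quick sanity check, here is the quicksort example from the beginning of this post rephrased in terms of cata and ana (a sketch that assumes the TreeF functor and the definitions above; the names splitCoalg, toListAlg, and qsortFix are mine):

-- partition comes from Data.List, imported earlier
splitCoalg :: Ord a => Coalgebra (TreeF a) [a]
splitCoalg [] = LeafF
splitCoalg (a : as) = NodeF a l r
  where (l, r) = partition (< a) as

toListAlg :: Algebra (TreeF a) [a]
toListAlg LeafF = []
toListAlg (NodeF a l r) = l ++ [a] ++ r

-- unfold the seed list into a Fix (TreeF a) tree, then fold it back in order
qsortFix :: Ord a => [a] -> [a]
qsortFix = cata toListAlg . ana splitCoalg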

so it seems like Fix f is both initial as the carrier of an algebra and terminal as the carrier of a coalgebra. But we know that there are elements of \nu F that are not in \mu F—namely infinite trees and infinite streams—so the two fixed points are not isomorphic and cannot be both described by the same Fix f.

However, they are not unrelated. Because of Lambek’s lemma, the initial algebra (\mu F, j) gives rise to a coalgebra (\mu F, j^{-1}), and the terminal coalgebra (\nu F, k) generates an algebra (\nu F, k^{-1}).

Because of universality, there must be a (unique) algebra morphism from the initial algebra (\mu F, j) to (\nu F, k^{-1}), and a unique coalgebra morphism from (\mu F, j^{-1}) to the terminal coalgebra (\nu F, k). It turns out that these two are given by the same morphism f \colon \mu F \to \nu F between the carriers. This morphism satisfies the equation

k \circ f \circ j = F f

which makes it both an algebra and a coalgebra morphism

Furthermore, it can be shown that, in Set, f is injective: it embeds \mu F in \nu F. This corresponds to our observation that \nu F contains \mu F plus some infinite data structures.

The question is, can Fix f describe infinite data? The answer depends on the nature of the programming language: infinite data structures can only exist in a lazy language. Since Haskell is lazy, Fix f corresponds to the greatest fixed point. The least fixed point forms a subset of Fix f (in fact, one can define a metric in which it’s a dense subset).

This is particularly obvious in the case of a functor that has no terminating leaves, like the stream functor.

data StreamF a x = StreamF a x
  deriving Functor

We’ve seen before that the initial algebra for StreamF a is empty, essentially because its action on Void is uninhabited. It does, however, have a terminal coalgebra. And, in Haskell, the fixed point of the stream functor indeed generates infinite streams

type Stream a = Fix (StreamF a)

How do we know that? Because we can construct an infinite stream using an anamorphism. Notice that, unlike in the case of a catamorphism, the recursion in an anamorphism doesn’t have to be well founded and, indeed, in the case of a stream, it never terminates. This is why this won’t work in an eager language. But it works in Haskell. Here’s a coalgebra whose carrier is Int

coaInt :: Coalgebra (StreamF Int) Int
coaInt n = StreamF n (n + 1)

It generates an infinite stream of natural numbers

ints = ana coaInt 0

Of course, in Haskell, the work is not done until we demand some values. Here’s the function that extracts the head of the stream:

hd :: Stream a -> a
hd (Fix (StreamF x _)) = x

And here’s one that advances the stream

tl :: Stream a -> Stream a
tl (Fix (StreamF _ s)) = s
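
To actually observe a few elements, we can peel them off one by one (a sketch; the name takeS is mine):

takeS :: Int -> Stream a -> [a]
takeS 0 _ = []
takeS n s = hd s : takeS (n - 1) (tl s)

For instance, takeS 5 ints evaluates to [0, 1, 2, 3, 4].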

This is all true in Set, but Haskell is not Set. I had a discussion with Edward Kmett and he pointed out that Haskell’s fixed point data type can be considered the initial algebra as well. Suppose that you have an infinite data structure, like the stream we were just discussing. If you apply a catamorphism for an arbitrary algebra to it, it will most likely never terminate (try summing up an infinite stream of integers). In Haskell, however, this is interpreted as the catamorphism returning the bottom \bot, which is a perfectly legitimate value. And once you start including bottoms in your reasoning, all bets are off. In particular Void is no longer uninhabited—it contains \bot—and the colimit construction of the initial algebra is no longer valid. It’s possible that some of these results can be generalized using domain theory and enriched categories, but I’m not aware of any such work.

Bibliography

  1. Adamek, Introduction to coalgebra
  2. Michael Barr, Terminal coalgebras for endofunctors on sets

There is a bit of folklore about algebras in Haskell, which says that both the initial algebra and the terminal coalgebra for a given functor are defined by the same fixed point formula. This works for most common cases, but is not true in general. What is definitely true is that they are both fixed points–this result is called Lambek’s lemma–but there may be many fixed points. The initial algebra is the least fixed point, and the terminal coalgebra is the greatest fixed point.

In this series of blog posts I will explore the ways one can construct these (co-)algebras using category theory and illustrate it with Haskell examples.

In this first installment, I’ll go over the construction of the initial algebra.

A functor

Let’s start with a simple functor that generates binary trees. Normally, we would store some additional data in a tree (meaning, the functor would take another argument), either in nodes or in leaves, but here we’re just interested in pure shapes.

data F a = Leaf 
         | Node a a
  deriving Show

Categorically, this functor can be written as a coproduct (sum) of the terminal object 1 (singleton) and the product of a with itself, here written simply as a^2

F a = 1 + a^2

The lifting of functions is given by this implementation of fmap

instance Functor F where
  fmap _ Leaf       = Leaf
  fmap f (Node x y) = Node (f x) (f y)

We can use this functor to build trees of arbitrary depth. Let’s consider, for instance, terms of type F Int. We can either build a Leaf, or a Node with two numbers in it

x1, y1 :: F Int
x1 = Leaf
y1 = Node 1 2 

With those, we can build next-level values of the type F^2 a or, in our case, F (F Int)

x2, y2 :: F (F Int)
x2 = Leaf
y2 = Node x1 y1

We can display y2 directly using show

> Node Leaf (Node 1 2)

or draw the corresponding tree

Since F is an endofunctor, so is F^2. Lifting a function f \colon a \to b to F^2 can be implemented by applying fmap twice. Here’s the action of the function (+1) on our test values

fmap (fmap (+1)) x2
> Leaf
fmap (fmap (+1)) y2
> Node Leaf (Node 2 3)

or, graphically,

You can see that Leafs at any level remain untouched; only the contents of bottom Nodes in the tree are transformed.

The colimit construction

The carrier of the initial algebra can be constructed as a colimit of an infinite sequence. This sequence is constructed by applying powers of F to the initial object which we’ll denote as 0. We’ll first see how this works in our example.

The initial object in Haskell is defined as a type with no data constructor (we are ignoring the question of non-termination in Haskell).

data Void
  deriving Show

In Set, this is just an empty set.

The Show instance for Void requires the pragma

{-# language EmptyDataDeriving #-}

Even though there are no values of the type Void, we can still construct a value of the type F Void

z1 :: F Void
z1 = Leaf

This degenerate version of a tree can be drawn as

This illustrates a very important property of our F: Its action on an empty set does not produce an empty set. This is what allows us to generate a non-trivial sequence of powers of F starting with the empty set.

Not every functor has this property. For instance, the construction of the initial algebra for the functor

data StreamF a x = ConsF a x

will produce an uninhabited type (empty set). Notice that this is different from its terminal coalgebra, which is the infinite stream

data Stream a = Cons a (Stream a)

This is an example of a functor whose initial algebra is not the same as the terminal coalgebra.

Double application of our F to Void produces, again, a Leaf, as well as a Node that contains two Leafs.

z2, v2 :: F (F Void)
z2 = Leaf

v2 = Node z1 z1
> Node Leaf Leaf

Graphically,

In general, powers of F acting on Void generate trees that terminate with Leafs (there is no possibility of having terminal Nodes). Higher and higher powers of F acting on Void will eventually produce any tree we can think of. But for any given power, there will exist even larger trees that are not generated by it.

In order to get all the trees, we could try to take a sum (a coproduct) of infinitely many powers. Something like this

\sum_{n = 0}^{\infty} F^n 0

The problem is that we’d also get a lot of duplication. For instance, we saw that z1 was the same tree as z2. In general, a single Leaf is produced at all non-zero powers of F acting on Void. Similarly, all powers of F greater than one produce a single node with two leaves, and so on. Once a particular tree is produced at some power of F, all higher powers of F will also produce it.

We have to have a way of identifying multiply generated trees. This is why we need a colimit rather than a simple coproduct.

As a reminder, a coproduct is defined as a universal cocone. Here, the base of the cocone is the set of all powers of F acting on 0 (Haskell Void).

In a more general colimit, the objects in the base of the cocone may be connected by morphisms.

Coming from the initial object, there can be only one morphism. We’ll call this morphism ! or, in Haskell, absurd

absurd :: Void -> a
absurd a = case a of {}

This definition requires another pragma

{-# language EmptyCase #-}

We can construct a morphism from F 0 to F^2 0 as a lifting of !, F !. In Haskell, the lifting of absurd doesn’t change the shape of trees. Here it is acting on a leaf

z1' :: F (F Void)
z1' = fmap absurd z1
> Leaf

We can continue this process of lifting absurd to higher and higher powers of F

z2', v2' :: F (F (F Void))

z2' = fmap (fmap absurd) z2
> Leaf

v2' = fmap (fmap absurd) v2
> Node Leaf Leaf

We can construct an infinite chain (this kind of directed chain indexed by natural numbers is called an \omega-chain)

We can use this chain as the base of our cocone. The colimit of this chain is defined as the universal cocone. We will call the apex of this cocone \mu F

In Set these constructions have simple interpretations. A coproduct is a discriminated union. A colimit is a discriminated union in which we identify all those injections that are connected by morphisms in the base of the cocone. For instance

\iota_0 = \iota_{(F 0)}\, \circ \, !
\iota_{(F 0)} = \iota_{(F^2 0)} \circ F !

and so on.

Here we use the lifted absurd (or ! in the picture above) as the morphisms that connect the powers of F acting on Void (or 0 in the picture).

These are exactly the identifications that we were looking for. For instance, F ! maps the leaf generated by F 0 to the leaf which is the element of F^2 0. Or, translating it to Haskell, (fmap absurd) maps the leaf generated by F Void to the leaf generated by F (F Void), and so on.

All trees generated by the n-th power of F are injected into the (n+1)-st power of F by absurd lifted by the n-th power of F.

The colimit is formed by equivalence classes with respect to these identifications. In particular, there is a class for a degenerate tree consisting of a single leaf whose representative can be taken from F Void, or from F (F Void), or from F (F (F Void)) and so on.

Initiality

The colimit \mu F is exactly the initial algebra for the functor F. This follows from the universal property of the colimit. First we will show that for any algebra (A, \alpha \colon F A \to A) there is a unique morphism from \mu F to A. Indeed, we can build a cocone with A at its apex and the injections given by

!

\alpha \circ F !

\alpha \circ F \alpha \circ F^2 !

and so on…

Since the colimit \mu F is defined by the universal cocone, there is a unique morphism from it to A. It can be shown that this morphism is in fact an algebra morphism. This morphism is called a catamorphism.
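
Jumping ahead a little, here is what a concrete catamorphism for our F looks like in Haskell (a sketch using the Fix and cata definitions discussed in the companion post on terminal coalgebras; the names depthAlg and treeDepth are mine):

newtype Fix f = Fix { unFix :: f (Fix f) }

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

-- an algebra that measures the depth of a tree
depthAlg :: F Int -> Int
depthAlg Leaf       = 0
depthAlg (Node x y) = 1 + max x y

treeDepth :: Fix F -> Int
treeDepth = cata depthAlg

For example, treeDepth (Fix (Node (Fix Leaf) (Fix (Node (Fix Leaf) (Fix Leaf))))) evaluates to 2.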

Fixed Point

Lambek’s lemma states that the initial algebra is a fixed point of the functor that defines it

F (\mu F) \cong \mu F

This can also be seen directly, by applying the functor to every object and morphism in the \omega-chain that defines the colimit. We get a new chain that starts at F 0

But the colimit of this chain is the same as the colimit \mu F of the original chain. This is because we can always add back the initial object to the chain, and define its injection \iota_0 as the composite

\iota_0 = \iota_{(F 0)} \circ !

On the other hand, if we apply F to the whole universal cocone, we’ll get a new cocone with the apex F (\mu F). In principle, this cocone doesn’t have to be universal, so we cannot be sure that F (\mu F) is a colimit. If it is, we say that F preserves the particular type of colimit—here, the \omega-colimit.

Remember: the image of a cocone under a functor is always a cocone (this follows from functoriality). Preservation of colimits is an additional requirement that the image of a universal cocone be universal.

The result is that, if F preserves \omega-colimits, then the initial algebra \mu F is a fixed point of F

F(\mu F) \cong \mu F

because both sides can be obtained as a colimit of the same \omega-chain.

Bibliography

  1. Adamek, Milius, Moss, Initial Algebras, Terminal Coalgebras, and
    the Theory of Fixed Points of Functors

We live in interesting times. For instance, we are witnessing several extinction events all at once. One of them is the massive extinction of species. The other is the extinction of jobs. Both are caused by advances in technology. As programmers, we might consider ourselves immune to the latter–after all, somebody will have to program these self-driving trucks that eliminate the need for drivers, or the diagnostic tools that eliminate the need for doctors. Eventually, though, even programming jobs will be automated. I can imagine the last programmer putting finishing touches on the program that will make his or her job redundant.

But before we get there, let’s consider which programming tasks are the first to go, and which have the biggest chance to persist for the longest time. Experience tells us that it’s the boring menial jobs that get automated first. So any time you get bored with your work, take note: you are probably doing something that a computer could do better.

One such task is the implementation of user interfaces. All the code behind various buttons, input fields, sliders, etc., is pretty much standard. Granted, you have to put a lot of effort into making the code portable to a myriad of platforms: various desktops, web browsers, phones, watches, fridges, etc. But that’s exactly the kind of expertise that is easily codified. If you find yourself doing copy-and-paste programming, watch out: your buddy computer can do it too. The work on generating UIs has already started; see, for instance, pix2code.

The design of user interfaces, as opposed to their implementation, will be more resistant to automation. Not only because it involves creativity, but also because it deals with human issues. Good design must serve the human in front of it. I’m not saying that modeling a human user is impossible, but it’s definitely harder. Of course, in many standard tasks, a drastically simplified human model will work just fine.

I’m sorry to say it, but those programmers who specialize in HTML and JavaScript will have to retrain themselves.

The next job on the chopping block, in my opinion, is that of a human optimizer. In fact, the only reason it hasn’t been eliminated yet is economic. It’s still cheaper to hire people to optimize code than it is to invest in the necessary infrastructure. You might think that programmers are expensive–the salaries of programmers are quite respectable in comparison to other industries. But if this were true, a lot more effort would go into improving programmers’ productivity, in particular into creating better tools. This is not happening. But as the demand for software grows and AI gets cheaper, at some point the economic balance will change. It will become advantageous to use AI to optimize code.

I’m sorry to say it, but C and C++ programmers will have to go. These are the languages whose only raison d’être is to squeeze maximum performance from hardware. We’ll probably always be interested in performance, but there are other ways of improving it. We are familiar with optimizing compilers that virtually eliminated the job of the assembly language programmer. They use optimizers that are based on algorithmic principles–that is, methods which are understandable to humans. But there is a whole new generation of AI waiting in the wings, which can be trained to optimize code written in higher-level languages. Imagine a system which would take this definition of quicksort written in Haskell:

qsort [] = []
qsort (p:xs) = qsort lesser ++ [p] ++ qsort greater
    where (lesser, greater) = partition (< p) xs

and produce code that would run as fast as its hand-coded C counterpart. Even if you don’t know Haskell, I can explain this code to you in just a few sentences. The first line says that sorting an empty list produces an empty list. The second line defines the action of quicksort on a list that consists of a head p–that will be our pivot–and the tail xs. The result is the concatenation (the symbol ++) of three lists. The first one is the result of (recursively) sorting the list lesser, the second is the singleton list containing the pivot, and the third is the result of sorting the list greater. Finally, the pair of lists (lesser, greater) is produced by partitioning xs using the predicate (< p), which reads “less than p.” You can’t get any simpler than that.

Of course the transformation required for optimizing this algorithm is highly nontrivial. Depending on the rest of the program, the AI might decide to change the representation of data from a list to a vector, replace copying by destructive swapping, put some effort in selecting a better pivot, use a different algorithm for sorting very short lists, and so on. This is what a human optimizer would do. But how much harder is this task than, say, playing a game of go against a grandmaster?

I am immensely impressed with the progress companies like Google or IBM made in playing go, chess, and Jeopardy, but I keep asking myself, why don’t they invest all this effort in programming technology? I can’t help but see parallels with Ancient Greece. The Ancient Greeks made tremendous breakthroughs in philosophy and mathematics–just think about Plato, Socrates, Euclid, or Pythagoras–but they had no technology to speak of. Hero of Alexandria invented a steam engine, but it was never put to work. It was only used as a parlor trick. There are many explanations of this phenomenon, but one that strikes close to home is that the Greeks didn’t need technology because they had access to cheap labor through slavery. I’m not implying that programmers are treated like slaves–far from it–but they seem to be considered cheap labor. In fact it’s so cheap to produce software that most of it is given away for free, or for the price of users’ attention in ad-supported software. A lot of software is just bait that’s supposed to entice the user to buy something more valuable, like beer.

It’s gradually becoming clear that programming jobs are diverging. This is not yet reflected in salaries, but as the job market matures, some programming jobs will be eliminated, while others will increase in demand. The one area where humans are still indispensable is in specifying what has to be done. The AI will eventually be able to implement any reasonable program, as long as it gets a precise enough specification. So the programmers of the future will stop telling the computer how to perform a given task; rather, they will specify what to do. In other words, declarative programming will overtake imperative programming. But I don’t think that explaining to the AI what it’s supposed to do will be easy. The AI will continue to be rather dumb, at least in the foreseeable future. It’s been noted that software that can beat the best go players in the world would be at a complete loss trying to prepare dinner or wash the dishes. It’s able to play go because it’s reasonably easy to codify the task of playing go–the legal moves and the goal of the game. Humans are extremely bad at expressing their wishes, as illustrated by the following story:

A poor starving peasant couple are granted three wishes and the woman, just taking the first thing that comes to her mind, wishes for one sausage, which she receives immediately. Her husband, pointing out that she could have wished for immense wealth or food to last them a lifetime, becomes angry with her for making such a stupid wish and, not thinking, wishes the sausage were stuck on her nose. Sure enough, the sausage is stuck in the middle of her face, and then they have to use the third wish to make it go away, upon which it disappears completely.

As long as the dumb AI is unable to guess our wishes, there will be a need to specify them using a precise language. We already have such a language: it’s called math. The advantage of math is that it was invented for humans, not for machines. It solves the basic problem of formalizing our thought process, so it can be reliably transmitted and verified. The definition of quicksort in Haskell is very mathematical. It can be easily verified using induction, because it’s recursive, and it operates on a recursive data structure: a list. The first line of code establishes the base case: an empty list is trivially sorted. Then we perform the induction step. We assume that we know how to sort all proper sublists of our list. We create two such sublists by partitioning the tail around the pivot. We sort the sublists, and then construct the final sorted list by inserting the pivot between them. As mathematical proofs go, this one is not particularly hard. In fact, in a typical mathematical text, it would be considered so trivial as to be left as an exercise for the reader.

Still, this kind of mathematical thinking seems to be alien to most people, including a lot of programmers. So why am I proposing it as the “programming language” of the future? Math is hard, but let’s consider the alternatives. Every programming language is a compromise between the human and the computer. There are languages that are “close to the metal,” like assembly or C, and there are languages that try to imitate natural language, like Cobol or SQL. But even in low-level languages we try to use meaningful names for variables and functions in an attempt to make code more readable. Conversely, there are programs that purposefully obfuscate source code by removing the formatting and replacing names with gibberish. The result is unreadable to most humans, but makes no difference to computers. Mathematical language doesn’t have to be machine readable. It’s a language that was created by the people, for the people. The reason we find mathematical texts harder to read than, say, C++ code is that mathematicians work at a much higher level of abstraction. If we tried to express the same ideas in C++, we would very quickly get completely lost.

Let me give you a small example. In mathematics, a monad is defined as a monoid in the category of endofunctors. That’s a very succinct definition. In order to understand it, you have to internalize a whole tower of abstractions, one built on top of another. When we implement monads in Haskell, we don’t use that definition. We pick a particular very simple category and implement only one aspect of the definition (we don’t implement monadic laws). In C++, we don’t even do that. If there are any monads in C++, they are implemented ad hoc, and not as a general concept (an example is the future monad which, to this day, is incomplete).
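For comparison, this is roughly what Haskell asks of a Monad instance (slightly simplified; the actual Prelude class also has a (>>) method and a default return = pure). The laws appear only in the documentation; the compiler never checks them:

class Applicative m => Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b

-- The monad laws, stated but never enforced:
--   return a >>= f    ==  f a
--   m >>= return      ==  m
--   (m >>= f) >>= g   ==  m >>= (\x -> f x >>= g)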

There is also some deeper math in the quicksort example. It’s a recursive function and recursion is related to algebras and fixed points. A more elaborate version of quicksort decomposes it into its more fundamental components. The recursion is captured in a combination of unfolding and folding that is called a hylomorphism. The unfolding is described by a coalgebra, while folding is driven by an algebra.

{-# LANGUAGE DeriveFunctor #-}

import Data.List (partition)

-- helper synonyms and the standard hylomorphism combinator, so the snippet compiles on its own
type Algebra   f a = f a -> a
type Coalgebra f a = a -> f a

hylo :: Functor f => Algebra f b -> Coalgebra f a -> a -> b
hylo alg coalg = alg . fmap (hylo alg coalg) . coalg

data TreeF a r = Leaf | Node a r r
  deriving Functor

-- coalgebra: split a list around its head, the pivot
split :: Ord a => Coalgebra (TreeF a) [a]
split [] = Leaf
split (a : as) = Node a l r
  where (l, r) = partition (< a) as

-- algebra: glue the sorted sublists around the pivot
join :: Algebra (TreeF a) [a]
join Leaf = []
join (Node a l r) = l ++ [a] ++ r

qsort :: Ord a => [a] -> [a]
qsort = hylo join split

You might think that this representation is overkill. You may even use it in a conversation to impress your friends: “Quicksort is just a hylomorphism, what is the problem?” So how is it better than the original three-liner?

qsort [] = []
qsort (p:xs) = qsort lesser ++ [p] ++ qsort greater
    where (lesser, greater) = partition (< p) xs

The main difference is that the flow of control in this new implementation is driven by a data structure generated by the functor TreeF. This functor describes a binary tree whose every node has a value of type a and two children. We use those children in the unfolding process to store lists of elements, lesser ones on the left, greater (or equal) on the right. Then, in the folding process, these children are replenished again–this time with sorted lists. This may seem like an insignificant change, but it uses a different processing ability of our brains. The recursive function tells us a linear, one-dimensional, story. It appeals to our story-telling ability. The functor-driven approach appeals to our visual cortex. There is an up and down, and left and right in the tree. Not only that, we can think of the algorithm in terms of movement, or animation. We are first “growing” the tree from the seed and then “traversing” it to gather the fruit from the branches. These are some powerful metaphors.
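To make the growing-and-harvesting metaphor concrete, let’s trace a small list, [3,1,4,2], through split and join:

-- Unfolding (growing the tree from the seed list):
--   split [3,1,4,2]  ==  Node 3 [1,2] [4]
--   split [1,2]      ==  Node 1 []    [2]
--   split [4]        ==  Node 4 []    []
--   split [2]        ==  Node 2 []    []
-- Folding (gathering the fruit, children already sorted):
--   join (Node 2 [] [])      ==  [2]
--   join (Node 1 [] [2])     ==  [1,2]
--   join (Node 4 [] [])      ==  [4]
--   join (Node 3 [1,2] [4])  ==  [1,2] ++ [3] ++ [4]  ==  [1,2,3,4]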

If this kind of visualization works for us, it might as well work for the AI that will try to optimize our programs. It may also be able to access a knowledge base of similar algorithms based on recursion schemes and category theory.

I’m often asked by programmers: How is learning category theory going to help me in my everyday programming? The implication being that it’s not worth learning math if it can’t be immediately applied to your current job. This makes sense if you are trying to locally optimize your life. You are close to a local optimum of your utility function and you want to get even closer to it. But the utility function is not constant–it evolves in time. Local optima disappear. Category theory is the insurance policy against the drying out of your current watering hole.


What do Gödel’s incompleteness theorem, Russell’s paradox, Turing’s halting problem, and Cantor’s diagonal argument have to do with the fact that negation has no fixed point? The surprising answer is that they are all related through Lawvere’s fixed point theorem.

Before we dive deep into category theory, let’s unwrap this statement from the point of view of a (Haskell) programmer. Let’s start with some basics. Negation is a function that flips between True and False:

  not :: Bool -> Bool
  not True  = False
  not False = True

A fixed point is a value that doesn’t change under the action of a function. Obviously, negation has no fixed point. There are other functions with the same signature that have fixed points. For instance, the constant function:

  true :: Bool -> Bool
  true True  = True
  true False = True

has True as a fixed point.
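Since Bool has only two elements, we can even hunt for fixed points by brute force with a throwaway helper:

  fixedPoints :: (Bool -> Bool) -> [Bool]
  fixedPoints f = [b | b <- [False, True], f b == b]

  -- fixedPoints not   ==  []       (no fixed point)
  -- fixedPoints true  ==  [True]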

All the theorems I listed in the preamble (and a few more) are based on a simple but extremely powerful proof technique invented by Georg Cantor called the diagonal argument. I’ll illustrate this technique first by showing that the set of binary sequences is not countable.

Cantor’s job interview question

A binary sequence is an infinite stream of zeros and ones, which we can represent as Booleans. Here’s the definition of a sequence (it’s just like a list, but without the nil constructor), together with two helper functions:

  {-# LANGUAGE DeriveFunctor #-}
  import Prelude hiding (head, tail)

  data Seq a = Cons a (Seq a)
    deriving Functor

  head :: Seq a -> a
  head (Cons a as) = a

  tail :: Seq a -> Seq a
  tail (Cons a as) = as

And here’s the definition of a binary sequence:

  type BinSeq = Seq Bool

If such sequences were countable, it would mean that you could organize them all into one big (infinite) list. In other words we could implement a sequence generator that generates every possible binary sequence:

  allBinSeq :: Seq BinSeq

Suppose that you gave this problem as a job interview question, and the job candidate came up with an implementation. How would you test it? I’m assuming that you have at your disposal a procedure that can (presumably in infinite time) search and compare infinite sequences. You could throw at it some obvious sequences, like all True, all False, alternating True and False, and a few others that come to mind.
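These “obvious” test sequences are easy to write down; for instance:

  allTrue, allFalse, alternating :: BinSeq
  allTrue     = Cons True  allTrue
  allFalse    = Cons False allFalse
  alternating = Cons True  (Cons False alternating)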

What Cantor did is to use the candidate’s own contraption to produce a counter-example. First, he extracted the diagonal from the sequence of sequences. This is the code he wrote:

  diag :: Seq (Seq a) -> Seq a
  diag seqs = Cons (head (head seqs)) (diag (trim seqs))

  trim :: Seq (Seq a) -> Seq (Seq a)
  trim seqs = fmap tail (tail seqs)

You can visualize the sequence of sequences as a two-dimensional table that extends to infinity towards the right and towards the bottom.

  T F F T ...
  T F F T ...
  F F T T ...
  F F F T ...
  ...

Its diagonal is the sequence that starts with the first element of the first sequence, followed by the second element of the second sequence, the third element of the third sequence, and so on. In our case, it would be the sequence T F T T ....

It’s possible that the sequence diag allBinSeq has already been listed in allBinSeq. But Cantor came up with a devilish trick: he negated the whole diagonal sequence:

  tricky = fmap not (diag allBinSeq)

and ran his test on the candidate’s solution. In our case, we would get F T F F ... The tricky sequence was obviously not equal to the first sequence because it differed from it in the first position. It was not the second, because it differed (at least) in the second position. Not the third either, because it was different in the third position. You get the gist…
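If you’d like to experiment with finite prefixes of such infinite sequences, a small helper does the trick:

  takeS :: Int -> Seq a -> [a]
  takeS n _ | n <= 0 = []
  takeS n (Cons a as) = a : takeS (n - 1) as

  -- e.g. takeS 4 (fmap not (diag allBinSeq)) would show the first four
  -- entries of the tricky sequence for any candidate allBinSeq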

Power sets are big

In reality, Cantor did not screen job applicants and he didn’t even program in Haskell. He used his argument to prove that real numbers cannot be enumerated.

But first let’s see how one could use Cantor’s diagonal argument to show that the set of subsets of natural numbers is not enumerable. Any subset of natural numbers can be represented by a sequence of Booleans, where True means a given number is in the subset, and False that it isn’t. Alternatively, you can think of such a sequence as a function:

  type Chi = Nat -> Bool

called a characteristic function. In fact characteristic functions can be used to define subsets of any set:

  type Chi a = a -> Bool

In particular, we could have defined binary sequences as characteristic functions on naturals:

  type BinSeq = Chi Nat

As programmers we often use this kind of equivalence between functions and data, especially in lazy languages.

The set of all subsets of a given set is called a power set. We have already shown that the power set of natural numbers is not enumerable, that is, there is no function:

  enumerate :: Nat -> Chi Nat

that would cover all characteristic functions. A function that covers its codomain is called surjective. So there is no surjection from the natural numbers to the set of all binary sequences (equivalently, to the set of all subsets of natural numbers).

In fact Cantor was able to prove a more general theorem: for any set, the power set is always larger than the original set. Let’s reformulate this. There is no surjection from the set A to the set of characteristic functions A \to 2 (where 2 stands for the two-element set of Booleans).

To prove this, let’s be contrarian and assume that there is a surjection:

  enumP :: A -> Chi A

Since we are going to use the diagonal argument, it makes sense to uncurry this function, so it looks more like a table:

  g :: (A, A) -> Bool
  g = uncurry enumP

Diagonal entries in the table are indexed using the following function:

  delta :: a -> (a, a)
  delta x = (x, x)

We can now define our custom characteristic function by picking diagonal elements and negating them, as we did in the case of natural numbers:

  tricky :: Chi A
  tricky = not . g . delta

If enumP is indeed a surjection, then it must produce our function tricky for some value of x :: A. In other words, there exists an x such that tricky is equal to enumP x.
This is an equality of functions, so let’s evaluate both functions at x (which will evaluate g at the diagonal).

  tricky x == (enumP x) x

The left hand side is equal to:

  tricky x = {- definition of tricky -}
  not (g (delta x)) = {- definition of g -}
  not (uncurry enumP (delta x)) = {- uncurry and delta -}
  not ((enumP x) x)

So our assumption that there exists a surjection A -> Chi A led to a contradiction:

  not ((enumP x) x) == (enumP x) x

Real numbers are uncountable

You can kind of see how the diagonal argument could be used to show that real numbers are uncountable. Let’s just concentrate on reals that are greater than zero but less than one. Those numbers can be represented as sequences of decimal digits (the digits following the decimal point). If these reals were countable, we could list them one after another, just like we attempted to list all streams of Booleans. We would get some kind of a table infinitely extending to the right and towards the bottom. There is one small complication though. Some numbers have two equivalent decimal representations. For instance 0.48 is the same as 0.47999..., with infinitely many nines. So let’s remove all rows from our table that end in an infinity of nines. We get something like this:

  3 5 0 5 ...
  9 9 0 8 ...
  4 0 2 3 ...
  0 0 9 9 ...
  ...

We can now apply our diagonal argument by picking one digit from every row. These digits form our diagonal number. In our example, it would be 3 9 2 9.

In the previous proof, we negated each element of the sequence to get a new sequence. Here we have to come up with some mapping that changes each digit to something else. For instance, we could add one to each digit. Except that, again, we don’t want to produce any nines, because we could end up with a number that ends in an infinity of nines (and nine itself has to wrap around to some other digit). But something like this will work just fine:

  h :: Int -> Int
  h n = if n == 8
        then 3
        else (n + 1) `mod` 9

The important part is that our function h replaces every digit with a different digit. In other words, h doesn’t have a fixed point.
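As a quick sanity check (assuming h :: Int -> Int), we can verify that h changes every digit and never produces a nine, and we can even write down the whole construction for reals by reusing Seq and diag, with Digits as an ad-hoc alias for digit streams:

  -- every digit is changed, and 9 is never produced
  hIsGood :: Bool
  hIsGood = and [ h n /= n && h n /= 9 | n <- [0 .. 9] ]

  -- the diagonal trick for reals, reusing diag from before
  type Digits = Seq Int

  trickyReal :: Seq Digits -> Digits
  trickyReal table = fmap h (diag table)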

Lawvere’s fixed point theorem

And this is what Lawvere realized: the diagonal argument establishes a relationship between the existence of a surjection on the one hand, and the existence of a no-fixed-point mapping on the other. So far it’s been easy to find no-fixed-point functions. But let’s reverse the argument: if there is a surjection A \to (A \to Y), then every function Y \to Y must have a fixed point. That’s because, if we could find a no-fixed-point function, we could use the diagonal argument to show that there is no surjection.

But wait a moment. Except for the trivial case of a function on a one-element set, it’s always possible to find a function that has no fixed point. Just return something other than the argument you were called with. This is definitely true in Set, but when you go to other categories, you can’t just construct morphisms willy-nilly. Even in categories where morphisms are functions, you might have constraints, such as continuity or smoothness. For instance, every continuous function from a closed interval to itself has a fixed point (Brouwer’s fixed-point theorem).

As usual, translating from the language of sets and functions to the language of category theory takes some work. The tricky part is to generalize the definition of a fixed point and surjection.

Points and string diagrams

First, to define a fixed point, we need to be able to define a point. This is normally done by defining a morphism from the terminal object 1; such a morphism is called a global element. In Set, the terminal object is a singleton set, and a morphism from a singleton set just picks an element of a set.

Since things will soon get complicated, I’d like to introduce string diagrams to better visualize things in a cartesian category. In string diagrams, lines correspond to objects and dots correspond to morphisms. You read such diagrams bottom up. So a morphism

\dot a \colon 1 \to A

can be drawn as:

I will use dotted letters to denote “points” or morphisms originating in the unit. It is also customary to omit the unit from the picture. It turns out that everything works just fine with implicit units.


A fixed point of a morphism t \colon Y \to Y is a global element \dot y \colon 1 \to Y such that t \circ \dot y = \dot y. Here’s the traditional diagram:

And here’s the corresponding string diagram that encodes the commuting condition.

In string diagrams, morphisms are composed by stringing them along lines in the bottom up direction.

Surjections can be generalized in many ways. The one that works here is called surjection on points. A morphism h \colon A \to B is surjective on points when for every point \dot b of B (that is, a global element \dot b \colon 1 \to B) there is a point \dot a of A (the domain of h) that is mapped to \dot b. In other words, h \circ \dot a = \dot b.

Or string-diagrammatically, for every \dot b there exists an \dot a such that:

Currying and uncurrying

To formulate Lawvere’s theorem, we’ll replace B with the exponential object Y^A, that is, an object that represents the set of morphisms from A to Y. Conceptually, those morphisms will correspond to rows in our table (or characteristic functions, when Y is 2). The mapping:

\bar g \colon A \to Y^A

generates these rows. I will use barred symbols, like \bar g for curried morphisms, that is morphisms to exponentials. The object A serves as the source of indexes for enumerating the rows of the table (just like the natural numbers in the previous example). The same object also provides indexing within each row.

This is best seen after uncurrying \bar g (we assume that we are in a cartesian closed category). The uncurried morphism, g \colon A \times A \to Y uses a product A \times A to index simultaneously into rows and columns of the table, just like pairs of natural numbers we used in the previous example.

The currying relationship between these two is given by the universal construction:

with the following commuting condition:

g = \varepsilon \circ (\bar g \times id_A)

Here, \varepsilon is the evaluation natural transformation (the counit of the currying adjunction, or the dollar sign operator in Haskell).

This commuting condition can be visualized as a string diagram. Notice that products of objects correspond to parallel lines going up. Morphisms that operate on products, like \varepsilon or g, are drawn as dots that merge such lines.

We are interested in mappings that are point-surjective. In this case, we would like to demand that for every point \dot f \colon 1 \to Y^A there is a point \dot a \colon 1 \to A such that:

\dot f = \bar g \circ \dot a

or, diagrammatically, for every \dot f there exists an \dot a such that:

Conceptually, \dot f is a point in Y^A, which represents some arbitrary function A \to Y. Surjectivity of \bar g means that we can always find this function in our table by indexing into it with some \dot a.

This is a very general way of expressing what, in Haskell, would amount to: every function f :: A -> Y can be obtained by partially applying our g_bar :: A -> A -> Y to some x :: A.

The diagonal

The way to index the diagonal of our table is to use the diagonal morphism \Delta \colon A \to A \times A. In a cartesian category, such a morphism always exists. It can be defined using the universal property of the product:

By combining this diagram with the diagram that defines the lifting of a pair of points \dot a we arrive at a useful identity:

\dot a \times \dot a = \Delta_A \circ \lambda_A \circ (id_1 \times \dot a)

where \lambda_A is the left unitor, the isomorphism 1 \times A \to A.

Here’s the string diagram representation of this identity:

In string diagrams we ignore unitors (as well as associators). Now you see why I like string diagrams. They make things much simpler.

Lawvere’s fixed point theorem

Theorem. (Lawvere) In a cartesian closed category, if there exists a point-surjective morphism \bar g \colon A \to Y^A then every endomorphism of Y must have a fixed point.

Note: Lawvere actually used a weaker definition of point surjectivity.

The proof is just a generalization of the diagonal argument.

Suppose that there is an endomorphism t \colon Y \to Y that has no fixed point. This generalizes the negation of the original example. We’ll create a special morphism by combining the diagonal entries of our table, and then “negating” them with t.

The table is described by (uncurried) g; and we access the diagonal through \Delta_A. So the tricky morphism A \to Y is just:

f = t \circ g \circ \Delta_A

or, in diagrammatic form:

Since we were guaranteed that our table g is an exhaustive listing of all morphisms A \to Y, our new morphism f must be somewhere there. But in order to search the table, we have to first convert f to a point in the exponential object Y^A.

There is a one-to-one correspondence between points \dot f \colon 1 \to Y^A and morphisms f \colon A \to Y given by the universal property of the exponential (noting that 1 \times A is isomorphic to A through the left unitor, \lambda_A \colon 1 \times A \to A).

In other words, \dot f is the curried form of f \circ \lambda_A, and we have the following commuting condition:

f \circ \lambda_A = \varepsilon \circ (\dot f \times id_A)

Since \lambda is an isomorphism, we can invert it, and get the following formula for f in terms of \dot f:

f = \varepsilon \circ (\dot f \times id_A) \circ \lambda_A^{-1}

In the corresponding string diagram we omit the unitor altogether.

Now we can use our assumption that \bar g is point surjective to deduce that there must be a point \dot x \colon 1 \to A that will produce \dot f, in other words:

\dot f = \bar g \circ \dot x

So \dot x picks the row in which we find our tricky morphism. What we are interested in is “evaluating” this morphism at \dot x. That will be our paradoxical diagonal entry. By construction, it should be equal to the corresponding value of f, because this row is point-by-point equal to f; after all, we have just found it by searching for f! On the other hand, it should be different, because we’ve built f by “negating” the diagonal entries of our table using t. Something has to give and, since we insist on surjectivity, we conclude that t is not doing its job of “negating.” It must have a fixed point, namely the diagonal entry at \dot x.

Let’s work out the details.

First, let’s apply the function we’ve found in the table at row \dot x to \dot x itself. Except that what we’ve found is a point in Y^A. Fortunately we have figured out earlier how to get f from \dot f. We apply the result to \dot x:

f \circ \dot x = \varepsilon \circ (\dot f \times id_A) \circ \lambda_A^{-1} \circ \dot x

Plugging into it the entry \dot f that we have found in the table, we get:

f \circ \dot x = \varepsilon \circ ((\bar g \circ \dot x) \times id_A) \circ \lambda_A^{-1} \circ \dot x

Here’s the corresponding string diagram:

We can now uncurry \bar g:

And replace a pair of \dot x with a \Delta:

Compare this with the defining equation for f, as applied to \dot x:

f \circ \dot x = t \circ g \circ \Delta_A \circ \dot x
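Putting the two together (the left equality is the string-diagram manipulation we just performed, the right one is the definition of f applied to \dot x):

g \circ \Delta_A \circ \dot x = f \circ \dot x = t \circ g \circ \Delta_A \circ \dot x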

In other words, the morphism 1 \to Y:

g \circ \Delta_A \circ \dot x

is a fixed point of t. This contradicts our assumption that t had no fixed point.

Conclusion

When I started writing this blog post I thought it would be a quick and easy job. After all, the proof of Lawvere’s theorem takes just a few lines both in the original paper and in all other sources I looked at. But then the details started piling up. The hardest part was convincing myself that it’s okay to disregard all the unitors. It’s not an easy thing for a programmer, because without unitors the formulas don’t “type check.” The left hand side may be a morphism from A \times 1 and the right hand side may start at A. A compiler would reject such code. I tried to do due diligence as much as I could, but at some point I decided to just go with string diagrams. In the process I got into some interesting discussions and even posted a question on Math Overflow. Hopefully it will be answered by the time you read this post.

One of the things I wasn’t sure about was if it was okay to slide unitors around, as in this diagram:

It turns out this is just the naturality condition for \lambda, as John Baez was kind enough to remind me on Twitter (a great place to do category theory!).

Acknowledgments

I’m grateful to Derek Elkins for reading the draft of this post.

Literature

  1. F. William Lawvere, Diagonal arguments and cartesian closed categories
  2. Noson S. Yanofsky, A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points
  3. Qiaochu Yuan, Cartesian closed categories and the Lawvere fixed point theorem