Programming



The main idea of functional programming is to treat functions like any other data type. In particular, we want to be able to pass functions as arguments to other functions, return them as values, and store them in data structures. But what kind of data type is a function? It’s a type that, when paired with another piece of data called the argument, can be passed to a function called apply to produce the result.

apply :: (a -> d, a) -> d

In practice, function application is implicit in the syntax of the language. But, as we will see, even if your language doesn’t support higher-order functions, all you need is to roll your own apply.

But where do these function objects, arguments to apply, come from; and how does the built-in apply know what to do with them?

When you’re implementing a function, you are, in a sense, telling apply what to do with it–what code to execute. You’re implementing individual chunks of apply. These chunks are usually scattered all over your program, sometimes anonymously in the form of lambdas.

We’ll talk about program transformations that introduce more functions, replace anonymous functions with named ones, or turn some functions into data types, without changing program semantics. The main advantage of such transformations is that they may improve performance, sometimes drastically so; or support distributed computing.

Function Objects

As usual, we look to category theory to provide theoretical foundation for defining function objects. It turns out that we are able to do functional programming because the category of types and functions is cartesian closed. The first part, cartesian, means that we can define product types. In Haskell, we have the pair type (a, b) built into the language. Categorically, we would write it as a \times b. Product is functorial in both arguments so, in particular, we can define a functor

    L_a c = c \times a

It’s really a family of functors parameterized by a.

The right adjoint to this functor

    R_a d = a \to d

defines the function type a \to d (a.k.a., the exponential object d^a). The existence of this adjunction is what makes a category closed. You may recognize these two functors as, respectively, the writer and the reader functor. When the parameter a is restricted to monoids, the writer functor becomes a monad (the reader is already a monad).
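In Haskell terms, these two functors have the shapes of the standard Writer and Reader (up to the order of type parameters). A rough sketch, with my own names LProd and RExp standing in for L_a and R_a:

-- A hedged sketch: these newtypes mirror the writer and reader functors.
-- The names LProd and RExp are mine, used only for illustration.
newtype LProd a c = LProd (c, a)     -- L_a c = c × a
newtype RExp  a d = RExp  (a -> d)   -- R_a d = a → d

instance Functor (LProd a) where
  fmap f (LProd (c, x)) = LProd (f c, x)

instance Functor (RExp a) where
  fmap f (RExp g) = RExp (f . g)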

An adjunction is defined as a (natural) isomorphism of hom-sets:

    D(L c, d) \cong C(c, R d)

or, in our case of two endofunctors, for some fixed a,

    C(c \times a, d) \cong C(c, a \to d)

In Haskell, this is just the definition of currying:

curry   :: ((c, a) -> d)   -> (c -> (a -> d))
uncurry :: (c -> (a -> d)) -> ((c, a) -> d)

You may recognize the counit of this adjunction

    \epsilon_d : L_a (R_a d) \to \mbox{Id}\; d

as our apply function

    \epsilon_d : ((a \to d) \times a) \to d
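In Haskell, this counit is just uncurried function application. A minimal sketch, matching the signature of apply given at the beginning of the post:

apply :: (a -> d, a) -> d
apply (f, x) = f x

-- equivalently: apply = uncurry ($)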

Adjoint Functor Theorem

In my previous blog post I discussed Freyd’s Adjoint Functor Theorem from the categorical perspective. Here, I’m going to try to give it a programming interpretation. Also, the original theorem was formulated in terms of finding the left adjoint to a given functor. Here, we are interested in finding the right adjoint to the product functor. This is not a problem, since every construction in category theory can be dualized by reversing the arrows. So instead of considering the comma category c/R, we’ll work with the comma category L/d. Its objects are pairs (c, f), in which f is a morphism

    f \colon L c \to d.

This is the general picture but, in our case, we are dealing with a single category, and L is an endofunctor. We can implement the objects of our comma category in Haskell

data Comma a d c = Comma c ((c, a) -> d)

The type a is just a parameter; it parameterizes the (left) functor L_a

    L_a c = c \times a

and d is the target object of the comma category.

We are trying to construct a function object representing functions a->d, so what role does c play in all of this? To understand that, you have to take into account that a function object can be used to describe closures: functions that capture values from their environment. The type c represents those captured values. We’ll see this more explicitly later, when we talk about defunctionalizing closures.

Our comma category is a category of all closures that go from a to d while capturing all possible environments. The function object we are constructing is essentially a sum of all these closures, except that some of them are counted multiple times, so we need to perform some identifications. That’s what morphisms are for.

The morphisms of the comma category are morphisms h \colon c \to c' in \mathcal C that make the following triangles in \mathcal D commute.

Unfortunately, commuting diagrams cannot be expressed in Haskell. The closest we can get is to say that a morphism from

c1 :: Comma a d c

to

c2 :: Comma a d c'

is a function h :: c -> c' such that, if

c1 = Comma c f
f :: (c, a) -> d
c2 = Comma c' g
g :: (c', a) -> d

then

f = g . bimap h id

Here, bimap h id is the lifting of h to the functor L_a. More explicitly

f (c, x) = g (h c, x)

As we are interpreting c as the environment in which the closure is defined, the question is: does f use all of the information encoded in c or just a part of it? If it’s just a part, then we can factor it out. For instance, consider a lambda that captures an integer, but it’s only interested in whether the integer is even or odd. We can replace this lambda with one that captures a Boolean, and use the function even to transform the environment.
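Here is that example spelled out as a small sketch (the names f, g, and h, and the choice of Int and String, are mine, purely for illustration):

-- A closure that captures an Int, but only cares about its parity.
f :: (Int, String) -> String
f (n, msg) = if even n then msg else reverse msg

-- The same logic, with a smaller environment...
g :: (Bool, String) -> String
g (isEven, msg) = if isEven then msg else reverse msg

-- ...and the environment transformation, so that f (n, x) == g (h n, x)
h :: Int -> Bool
h = even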

The next step in the construction is to define the projection functor from the comma category L/d back to \mathcal C that forgets the f part and just keeps the object c

    \pi_d \colon (c, f) \mapsto c

We use this functor to define a diagram in \mathcal C. Now, instead of taking its limit, as we did in the previous installment, we’ll take the colimit of this diagram. We’ll use this colimit to define the action of the right adjoint functor R on d.

    R d = \underset{L/d}{\mbox{colim}} \; \pi_d

In our case, the forgetful functor discards the function part of Comma a d c, keeping only the environment c. This means that, as long as d is not Void, we are dealing with a gigantic diagram that encompasses all objects in our category of types. The colimit of this diagram is a gigantic coproduct of everything, modulo identifications introduced by morphisms of the comma category. But these identifications are crucial in pruning out redundant closures. Every lambda that uses only part of the information from the captured environment can be identified with a simpler lambda that uses a simplified environment.

For illustration, consider a somewhat extreme case of constructing the function object 1 \to d, or d^1 (d to the power of the terminal object). This object should be isomorphic to d. Let’s see how this works: The terminal object 1 is the unit of the product, so

    L_1 c = c \times 1 \cong c

so the comma category L_1 / d is just the slice category C/d of arrows to d. It so happens that this category has the terminal object (d, id_d). The colimit of a diagram that has a terminal object is that terminal object. So, indeed, in this case, our construction produces a function object that is isomorphic to d.

    1 \to d \cong d

Intuitively, given a lambda that captures a value of type c from the environment and returns a d, we can trivially factor it out, using this lambda to transform the environment for c to d and then apply the identity on d. The latter corresponds to the comma category object (d, id_d), and the forgetful functor maps it to d.
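Here is this factorization written out as a Haskell sketch, with the terminal object played by () (the name factor is mine):

-- Factor an arbitrary closure over the trivial argument () through
-- the "identity" closure whose environment is d itself.
factor :: ((c, ()) -> d) -> (c -> d, (d, ()) -> d)
factor f = (h, g)
  where
    h c = f (c, ())   -- transform the environment c to d
    g (x, ()) = x     -- the identity closure, with environment d

-- Indeed: f (c, u) == let (h, g) = factor f in g (h c, u)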

It’s instructive to run a few more examples to get the hang of it. For instance, the function object Bool->d can be constructed by considering closures of the type

f :: (c, Bool) -> d

Any such closure can be factorized by the following transformation of the environment

h :: c -> (d, d)
h c = (f (c, True), f (c, False))

followed by

g :: ((d, d), Bool) -> d
g ((d1, d2), b) = if b then d1 else d2

Indeed:

f (c, b) = g (h c, b)

In other words

    2 \to d \cong d \times d

where 2 corresponds to the Bool type.
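The isomorphism can also be written down directly in Haskell; a quick sketch (toPair and fromPair are my names):

toPair :: (Bool -> d) -> (d, d)
toPair f = (f True, f False)

fromPair :: (d, d) -> Bool -> d
fromPair (d1, d2) b = if b then d1 else d2

-- The two are mutually inverse (up to extensionality):
-- toPair (fromPair p) == p, and fromPair (toPair f) b == f b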

Counit

We are particularly interested in the counit of the adjunction. Its component at d is a morphism

    \epsilon_d : L R d \to d

It also happens to be an object in the comma category, namely

    (R d, \epsilon_d \colon L R d \to d).

In fact, it is the terminal object in that category. You can see that because for any other object (c, f \colon L c \to d) there is a morphism h \colon c \to R d that makes the following triangle commute:

This morphism h is a leg in the colimiting cocone that defines R d. We know for sure that c is in the base of that cocone, because it’s the projection \pi_d of (c, f \colon L c \to d).

To get some insight into the construction of the function object, imagine that you can enumerate the set of all possible environments c_i. The comma category L_a/d would then consist of pairs (c_i, f_i \colon (c_i, a) \to d). The coproduct of all those environments is a good candidate for the function object a \to d. Indeed, let’s try to define a counit for it:

    (\coprod c_i, a) \to d \cong (\coprod (c_i, a)) \to d \cong \prod ((c_i, a) \to d)

I used the distributive law:

    (\coprod c_i, a) \cong \coprod (c_i, a)

and the fact that the mapping out of a sum is the product of mappings. The right hand side can be constructed from the f_i’s that come packaged with the objects of the comma category.

So the object \coprod c_i satisfies at least one requirement of the function object: there is an implementation of apply for it. It is highly redundant, though. This is why, instead of the coproduct, we used the colimit in our construction of the function object. Also, we ignored the size issues.

Size Issues

As we discussed before, this construction doesn’t work in general because of size issues: the comma category is not necessarily small, and the colimit might not exist.

To address these problems, we have previously defined small solution sets. In the case of the right adjoint, a solution set is a family of objects that is weakly terminal in L/d. These are pairs (c_i, f_i \colon L c_i \to d) that, among themselves, can factor any g \colon L c \to d

    g = f_i \circ L h

It means that we can always find an index i and a morphism h \colon c \to c_i to satisfy that equation. Every g might require a different f_i and h to factor through but, for any g, we are guaranteed to always find a pair.

Once we have a complete solution set, the right adjoint R d is constructed by first forming a coproduct of all the c_i and then using a coequalizer to construct one terminal object.

What is really interesting is that, in some cases, we can just use the coproduct of the solution set, \coprod_i c_i, to approximate the adjoint (thus skipping the coequalizer part).


The idea is that, in a particular program, we don’t need to represent all possible function types, just a (small) subset of those. We are also not particularly worried about uniqueness: it’s no problem if the same function ends up with multiple syntactic representations.

Let’s reformulate Freyd’s construction of the function object in programming terms. The solution set is the set of types c_i and functions

    f_i \colon (c_i, a) \to d

such that, for any function

    g \colon (c, a) \to d

that is of interest in our program (for instance, used as an argument to another function), there exists an i and a function

    h \colon c \to c_i

such that g can be rewritten as

    g (c, a) = f_i (h c, a)

In other words, every function of interest can be replaced by one of the solution-set functions. The environment for this standard function can always be extracted from the environment of the more general function.

CPS Transformation

A particular application of higher order functions shows up in the context of continuation passing transformation. Let’s look at a simple example. We are going to implement a function that traverses a binary tree containing strings, and concatenates them all into one string. Here’s the tree

data Tree = Leaf String 
          | Node Tree String Tree

Recursive traversal is pretty straightforward

show1 :: Tree -> String
show1 (Leaf s) = s
show1 (Node l s r) =
  show1 l ++  s ++ show1 r

We can test it on a small tree:

tree :: Tree
tree = Node (Node (Leaf "1 ") "2 " (Leaf "3 "))
            "4 " 
            (Leaf "5 ")
test = show1 tree

There is just one problem: recursion consumes the runtime stack, which is usually a limited resource. Your program may run out of stack space resulting in the “stack overflow” runtime error. This is why the compiler will turn recursion into iteration, whenever possible. And it is always possible if the function is tail recursive, that is, the recursive call is the last call in the function. No operation on the result of the recursive call is permitted in a tail recursive function.

This is clearly not happening in our implementation of show1: After the recursive call is made to traverse the left subtree, we still have to make another call to traverse the right tree, and the two results must be concatenated with the contents of the node.

Notice that this is not just a functional programming problem. In an imperative language, where iteration is the rule, tree traversal is still implemented using recursion. That’s because the data structure itself is recursive. It used to be a common interview question to implement non-recursive tree traversal, but the solution is always to explicitly implement your own stack (we’ll see how it’s done at the end of this post).

There is a standard procedure to make functions tail recursive using continuation passing style (CPS). The idea is simple: if there is stuff to do with the result of a function call, let the function we’re calling do it instead. This “stuff to do” is called a continuation. The function we are calling takes the continuation as an argument and, when it finishes its job, it calls it with the result. A continuation is a function, so CPS-transformed functions have to be higher-order: they must accept functions as arguments. Often, the continuations are defined on the spot using lambdas.

Here’s the CPS transformed tree traversal. Instead of returning a string, it accepts a continuation k, a function that takes a string and produces the final result of type a.

show2 :: Tree -> (String -> a) -> a
show2 (Leaf s) k = k s
show2 (Node lft s rgt) k =
  show2 lft (\ls -> 
    show2 rgt (\rs -> 
      k (ls ++ s ++ rs)))

If the tree is just a leaf, show2 calls the continuation with the string that’s stored in the leaf.

If the tree is a node, show2 calls itself recursively to convert the left child lft. This is a tail call: nothing more is done with its result. Instead, the rest of the work is packaged into a lambda and passed as a continuation to show2. This is the lambda

\ls -> 
    show2 rgt (\rs -> 
      k (ls ++ s ++ rs))

This lambda will be called with the result of traversing the left child. It will then call show2 with the right child and another lambda

\rs -> 
      k (ls ++ s ++ rs)

Again, this is a tail call. This lambda expects the string that is the result of traversing the right child. It concatenates the left string, the string from the current node, and the right string, and calls the original continuation k with it.

Finally, to convert the whole tree t, we call show2 with a trivial continuation that accepts the final result and immediately returns it.

show t = show2 t (\x -> x)

There is nothing special about lambdas as continuations. It’s possible to replace them with named functions. The difference is that a lambda can implicitly capture values from its environment. A named function must capture them explicitly. The three lambdas we used in our CPS-transformed traversal can be replaced with three named functions, each taking an additional argument representing the values captured from the environment:

done s = s
next (s, rgt, k) ls = show3 rgt (conc (ls, s, k))
conc (ls, s, k) rs = k (ls ++ s ++ rs)

The first function, done, is an identity function; it forces the generic type a to be narrowed down to String.

Here’s the modified traversal using named functions and explicit captures.

show3 :: Tree -> (String -> a) -> a
show3 (Leaf s) k = k s
show3 (Node lft s rgt) k =
  show3 lft (next (s, rgt, k))

show t = show3 t done

We can now start making the connection with the earlier discussion of the adjoint theorem. The three functions we have just defined, done, next, and conc, form the family

    f_i \colon (c_i, a) \to d.

They are functions of two arguments, or a pair of arguments. The first argument represents the object c_i, part of the solution set. It corresponds to the environment captured by the closure. The three c_i are, respectively

()
(String, Tree, String -> String)
(String, String, String->String)

(Notice the empty environment of done, here represented as the unit type ().)

The second argument of all three functions is of the type String, and the return type is also String so, according to Freyd’s theorem, we are in the process of defining the function object a \to d, where both a and d are String.

Defunctionalization

Here’s the interesting part: instead of defining the general function type String->String, we can approximate it with the coproduct of the elements of the solution set. Here, the three components of the sum type correspond to the environments captured by our three functions.

data Kont = Done 
          | Next String Tree   Kont 
          | Conc String String Kont

The counit of the adjunction is approximated by a function from this sum type paired with a String, returning a String

apply :: Kont -> String -> String
apply Done s = s
apply (Next s rgt k) ls = show4 rgt (Conc ls s k)
apply (Conc ls s k) rs  = apply k (ls ++ s ++ rs)

Rather than passing one of the three functions to our higher-order CPS traversal, we can pass this sum type

show4 :: Tree -> Kont -> String
show4 (Leaf s) k = apply k s
show4 (Node lft s rgt) k = 
  show4 lft (Next s rgt k)

This is how we execute it

show t = show4 t Done
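As a sanity check, here is how the evaluation unfolds on the example tree defined at the beginning of the post (a hand-traced sketch; the result agrees with show1 tree):

-- show4 tree Done
-- = show4 (Node (Leaf "1 ") "2 " (Leaf "3 ")) (Next "4 " (Leaf "5 ") Done)
-- = show4 (Leaf "1 ") (Next "2 " (Leaf "3 ") (Next "4 " (Leaf "5 ") Done))
-- = apply (Next "2 " (Leaf "3 ") (Next "4 " (Leaf "5 ") Done)) "1 "
-- = show4 (Leaf "3 ") (Conc "1 " "2 " (Next "4 " (Leaf "5 ") Done))
-- = apply (Conc "1 " "2 " (Next "4 " (Leaf "5 ") Done)) "3 "
-- = apply (Next "4 " (Leaf "5 ") Done) "1 2 3 "
-- = show4 (Leaf "5 ") (Conc "1 2 3 " "4 " Done)
-- = apply (Conc "1 2 3 " "4 " Done) "5 "
-- = apply Done "1 2 3 4 5 "
-- = "1 2 3 4 5 "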

We have gotten rid of all higher-order functions by replacing their function arguments with a data type equipped with the apply function. There are several situations when this is advantageous. In procedural languages, defunctionalization may be used to replace recursion with loops. In fact, the Kont data structure can be seen as a user-defined stack, especially if it’s rewritten as a list.

type Kont = [(String, Either Tree String)]

Here, Done was replaced with an empty list and Next and Conc together correspond to pushing a value on the stack.
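With this representation, the traversal becomes an explicit push/pop loop. A possible sketch (the names show5 and applyS are mine, not from the original code):

applyS :: Kont -> String -> String
applyS [] s = s
applyS ((s, Left rgt) : k) ls = show5 rgt ((ls, Right s) : k)   -- the Next frame
applyS ((ls, Right s) : k) rs = applyS k (ls ++ s ++ rs)        -- the Conc frame

show5 :: Tree -> Kont -> String
show5 (Leaf s) k = applyS k s
show5 (Node lft s rgt) k = show5 lft ((s, Left rgt) : k)

-- show5 tree [] == show4 tree Done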

In Haskell, the compiler performs tail recursion optimization, but defunctionalization may still be useful in implementing distributed systems, or web servers. Any time we need to pass a function between a client and a server, we can replace it by a data type that can be easily serialized.

Bibliography

  1. John C. Reynolds, Definitional Interpreters for Higher-Order Programming Languages
  2. James Koppel, The Best Refactoring You’ve Never Heard Of.

One of the tropes of detective movies is the almost miraculous ability to reconstruct an image from a blurry photograph. You just scan the picture, say “enhance!”, and voila, the face of the suspect or the registration number of their car appear on your computer screen.

Computer, enhance!

With constant improvements in deep learning, we might eventually get there. In category theory, though, we do this all the time. We recover lost information. The procedure is based on the basic tenet of category theory: an object is defined by its interactions with the rest of the world. This is the basis of all universal constructions, the Yoneda lemma, Grothendieck fibration, Kan extensions, and practically everything else.

An iconic example is the construction of the left adjoint to a given functor, and that’s what we are going to study here. But first let me explain why I decided to pick this subject, and how it’s related to programming. I wanted to write a blog post about CPS (continuation passing style) and defunctionalization, and I stumbled upon an article in nLab that related defunctionalization to Freyd’s Adjoint Functor Theorem; in particular to the Solution Set Condition. Such an unexpected connection piqued my interest and I decided to dig deeper into it.

Adjunctions

Consider a functor R from some category \mathcal D to another category \mathcal C.

R \colon D \to C

A functor, in general, loses some data, so it’s normally impossible to invert it. It produces a “blurry” image of \mathcal D inside \mathcal C. Its left adjoint is a functor from \mathcal C to \mathcal D

L \colon C \to D

that attempts to reconstruct lost information, to the best of its ability. Often the functor R is forgetful, which means that it purposefully forgets some information. Its left adjoint is then called free, because it freely ad-libs the forgotten information.

Of course it’s not always possible, but under certain conditions such a left adjoint exists. These conditions are spelled out in Freyd’s General Adjoint Functor Theorem.

To understand them, we have to talk a little about size issues.

Size issues

A lot of interesting categories are large. It means that there are so many objects in the category that they don’t even form a set. The category of all sets, for instance, is large (there is no set of all sets). It’s also possible that morphisms between two objects don’t form a set.

A category in which objects form a set is called small, and a category in which hom-sets are sets is called locally small.

A lot of complexities in Freyd’s theorem are related to size issues, so it’s important to precisely spell out all the assumptions.

We assume that the source of the functor R, the category \mathcal D, is locally small. It must also be small-complete, that is, every small diagram in \mathcal D must have a limit. (A small diagram is a functor from a small category.) We also want the functor R to be continuous, that is, to preserve all small limits.

If it weren’t for size issues, this would be enough to guarantee the existence of the left adjoint, and we’ll first sketch the proof for this simplified case. In the general case, there is one more condition, the Solution Set Condition, which we’ll discuss later.

Left adjoint and the comma category

Here’s the problem we are trying to solve. We have a functor R that maps objects and morphisms from \mathcal D to \mathcal C. We want to define another functor L that goes in the opposite direction. We’re not looking for the inverse, so we’re not expecting the composition of this functor with R to be identity, but we want it to be related to identity by two natural transformations called unit and counit. Their components are, respectively:

\eta_c : c \to R L c

\epsilon_d : L R d \to d

and, as long as they satisfy some additional triangle identities, they will establish the adjunction L \dashv R.

We are going to define L point-wise, so let’s pick an object c in \mathcal C and try to propagate it back to \mathcal D. To do that, we have to gather as much information about c as possible. We will propagate all this information back to \mathcal D and find an object in \mathcal D that “looks the same.” Think of this as creating a hologram of c and shipping it back to \mathcal D.

All information about c is encoded in morphisms so, in order to generate our hologram, we’ll gather all morphisms that originate in c. These morphisms form a category called the coslice category c/C.

The objects in c/C are pairs (x, f \colon c \to x). In other words, these are all the arrows that emanate from c, indexed by their target objects x. But what really defines the structure of this category are morphisms between these arrows. A morphism in c/C from (x, f) to (y, g) is a morphism h \colon x \to y that makes the following triangle commute:

We now have complete information about c encoded in the coslice category, but we have no way to propagate it back to \mathcal D. This is because, in general, the image of \mathcal D doesn’t cover the whole of \mathcal C. Even more importantly, not all morphisms in \mathcal C have corresponding morphisms in \mathcal D. We have to scale down our expectations, and define a partial hologram that does not capture all the information about c; only the part that can be back-propagated to \mathcal D using the functor R. Such a partial hologram is called the comma category c/R.

The objects of c/R are pairs (d, f \colon c \to R d), where d is an object in \mathcal D. In other words, these are all the arrows emanating from c whose target is in the image of R. Again, the important structure is encoded in the morphisms of c/R. These are the arrows in \mathcal D, h \colon d \to d' that make the following diagram commute in \mathcal C

Notice an interesting fact: we can interpret these triangles as commutation conditions in a cone whose apex is c and whose base is formed by objects and morphisms in the image of R. But not all objects and morphisms in the image of R are included: only the morphisms that make the appropriate triangle commute, and these are exactly the morphisms that satisfy the cone condition. So the comma category builds a cone in \mathcal C.

Constructing the limit

We can now take all this information about c that’s been encoded in c/R and move it back to \mathcal D. We define a projection functor \pi_c \colon c/R \to D that maps (d, f) to d, thus forgetting the morphism f. What’s important, though, is that this functor keeps the information encoded in the morphisms of c/R, because these are morphisms in \mathcal D.

The image of \pi_c doesn’t necessarily cover the whole of \mathcal D, because not every R d has arrows coming from c. Similarly, only some morphisms, the ones that make the appropriate triangle in \mathcal C commute, are picked by \pi_c. But those objects and morphisms that are in the image of \pi_c form a diagram in \mathcal D. This diagram is our partial hologram, and we can use it to pick an object in \mathcal D that looks almost exactly like c. That object is the limit of this diagram. We pick the limit of this diagram as the definition of L c: the left adjoint of R acting on c.

Here’s the tricky part: we assumed that \mathcal D was small-complete, so every small diagram has a limit; but the diagram defined by \pi_c is not necessarily small. Let’s ignore this problem for a moment, and continue sketching the proof. We want to show that the mapping that assigns the limit of \pi_c to every c is left adjoint to R.

Let’s see if we can define the unit of the adjunction:

\eta_c : c \to R L c

Since we have defined L c as the limit of the diagram \pi_c, and R preserves limits (small limits, really; but we are ignoring size problems for the moment), R L c must be the limit of the diagram R \pi_c in \mathcal C. But, as we noted before, the diagram R \pi_c is exactly the base of the cone with the apex c that we used to define the comma category c/R. Since R L c is the limit of this diagram, there must be a unique morphism from any other cone to it. In particular there must be a morphism from c to it, because c is an apex of the cone defined by the comma category. And that’s the morphism we’ll choose as our \eta_c.

Incidentally, we can interpret \eta_c itself as an object of the comma category c/R, namely the one defined by the pair (Lc, \eta_c \colon c \to R L c). In fact, this is the initial object in that category. If you pick any other object, say, (d, g \colon c \to R d), you can always find a morphism h \colon L c \to d, which is just a leg, a projection, in the limiting cone that defines L c. It is automatically a morphism in c/R because the following triangle commutes:

This is the triangle that defines \eta_c as a morphism of cones, from the top cone with the apex c, to the bottom (limiting) cone with the apex R L c. We’ll use this interpretation later, when discussing the full version of the Freyd’s theorem.

We can also define the counit of the adjunction. Its component at d is a morphism

\epsilon_d : L R d \to d

First, we repeat our construction starting with c = R d. We define the comma category R d / R and use \pi_{R d} to create the diagram whose limit is L R d. We pick \epsilon_d to be a projection in the limiting cone. We are guaranteed that d is in the base of the cone, because it’s the image of (d, id \colon R d \to R d) under \pi_{R d}.

To complete this proof, one should show that the unit and counit are natural transformations and that they satisfy triangle identities.

End of a comma category

An interesting insight into this construction can be gained using the end calculus. In my previous post, I talked about (weighted) colimits as coends, but the same argument can be dualized to limits and ends. For instance, this is our comma category as a category of elements in the coend notation:

c/R \cong \mathcal{D} \int^d \mathcal{C} (c, R d)

The limit of the projection functor \pi_c over the comma category can be written in the end notation as

\lim_{c/R} \pi_c \cong \int_{(d, f)\colon c/R} \pi_c (d, f) \cong \int_{(d, f)\colon c/R} d

This, in turn, can be rewritten as a weighted limit, with every d weighted by the set \mathcal{C}(c, R d):

\mbox{lim}^{\mathcal{C}(c, R -)} \mbox{Id} \cong \int_{d \colon \mathcal{D}} \mathcal{C}(c, R d) \pitchfork d

The pitchfork here is the power (cotensor) defined by the equation

\mathcal{D}\big(d', s \pitchfork d\big) \cong Set\big(s, \mathcal{D}(d', d)\big)

You may think of s \pitchfork d as the product of s copies of the object d, where s is a set. The name power conveys the idea of iterated multiplication. Or, since power is a special case of exponentiation, you may think of s \pitchfork d as a function object imitating mappings from s to d.
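In Set (and, loosely, in Haskell) the power is just a function type, and the defining isomorphism amounts to flipping arguments. A rough sketch:

-- A hedged rendering of the power s \pitchfork d as the function type s -> d.
type Power s d = s -> d

-- \mathcal{D}(d', s \pitchfork d) \cong Set(s, \mathcal{D}(d', d)) becomes
-- swapping the two arguments:
toPower :: (s -> d' -> d) -> (d' -> Power s d)
toPower = flip

fromPower :: (d' -> Power s d) -> (s -> d' -> d)
fromPower = flip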

To continue, if the left adjoint L exists, the weighted limit in question can be replaced by

\int_{d \colon \mathcal{D}} \mathcal{D}(L c, d) \pitchfork d

which, using standard calculus of ends (see Appendix), can be shown to be isomorphic to L c. We end up with:

\lim_{c/R} \pi_c \cong L c

Solution set condition

So what about those pesky size issues? It’s one thing to demand the existence of all small limits, and a completely different thing to demand the existence of large limits (such a requirement may narrow down the available categories to preorders). Since the comma category may be too large, maybe we can cut it down to size by carefully picking a (small) set of objects out of all objects of \mathcal D. We may take some indexing set I and construct a family d_i of objects of \mathcal D indexed by elements of I. It doesn’t have to be one family for all—we may pick a different family for every object c for which we are performing our construction.

Instead of using the whole comma category c/R, we’ll limit ourselves to a set of arrows f_i \colon c \to R d_i. But in a comma category we also have morphisms between arrows. In fact they are the essential carriers of the structure of the comma category. Let’s have another look at these morphisms.

This commuting condition can be re-interpreted as a factorization of g through f. It so happens that every morphism g can be trivially factorized through some f by picking d = d' and h = id_d. But if we restrict the factors f to be members of the family f_i then not every g \colon c \to R d (for arbitrary d) can be automatically factorized. We have to demand it. That gives us the following:

Solution Set Condition: For every object c there exists a small set I with an I-indexed family of objects d_i in \mathcal D and a family of morphisms f_i \colon c \to R d_i, such that every morphism g \colon c \to R d can be factored through one of f_i. That is, there exists a morphism h \colon d_i \to d such that

g = R h \circ f_i

There is a shorthand for this statement: All comma categories c/R admit weakly initial families of objects. We’ll come back to it later.

Freyd’s theorem

We can now formulate:

Freyd’s Adjoint Functor Theorem: If \mathcal D is a locally small and small-complete category, and the functor R \colon D \to C is continuous (small-limit preserving), and it satisfies the solution set condition, then R has a left adjoint.

We’ve seen before that the key to defining the point-wise left adjoint was to find the initial object in the comma category c/R. The problem is that this comma category may be large. So the trick is to split the proof into two parts: first defining a weakly initial object, and then constructing the actual initial object using equalizers. A weakly initial object has morphisms to every object in the category but, unlike its strong version, these morphisms don’t have to be unique.

An even weaker notion is that of a weakly initial set of objects. These are objects that among themselves have arrows to every object in the category, but it’s possible that no individual object has all the arrows. The solution set in Freyd’s theorem is such a weakly initial set in the comma category c/R. Since we assumed that \mathcal D is small-complete, we can take a product of these objects and show that it’s weakly initial. The proof then proceeds with the construction of the initial object.

The details of the proof can be found in any category theory text or in nLab.

Next we’ll see the application of these results to the problem of defunctionalization of computer programs.

Appendix

To show that

\int_d \mathcal{D}(L c, d) \pitchfork d \cong L c

it’s enough to show that the hom-functors from an arbitrary object d' are isomorphic

\begin{aligned}  & \mathcal{D}\big(d', \int_d \mathcal{D}(L c, d) \pitchfork d\big) \\  \cong & \int_d \mathcal{D}\big(d', \mathcal{D}(L c, d) \pitchfork d\big) \\  \cong & \int_d Set\big( \mathcal{D}(L c, d), \mathcal{D}(d', d) \big) \\  \cong & \; \mathcal{D}(d', L c)  \end{aligned}

I used the continuity of the hom-functor, the definition of the power (cotensor) and the ninja Yoneda lemma.


It’s funny how similar ideas pop up in different branches of mathematics. Calculus, for instance, is built around metric spaces (or, more generally, Banach spaces) and measures. A limit of a sequence is defined by points getting closer and closer together. An integral is an area under a curve. In category theory, though, we don’t talk about distances or areas (except for Lawvere’s take on metric spaces), and yet we have the abstract notion of a limit, and we use integral notation for ends. The similarities are uncanny.

This blog post was inspired by my trying to understand the idea behind Freyd’s adjoint functor theorem. The right adjoint can be expressed as a colimit over a comma category, which is a special case of a Grothendieck fibration. To understand it, though, I had to get a better handle on weighted colimits which, as I learned, are even more general than Kan extensions.

Category of elements as coend

A Grothendieck fibration is like splitting a category in two orthogonal directions, the base and the fiber. The fiber may vary from object to object (as in dependent types, which are indeed modeled as fibrations).

The simplest example of a Grothendieck fibration is the category of elements, in which fibers are simply sets. Of course, a set is also a category—a discrete category with no morphisms between elements, except for compulsory identity morphisms. A category of elements is built on top of a category \mathcal{C} using a functor

   F \colon \mathcal{C} \to Set

Such a functor is traditionally called a copresheaf (this construction works also on presheaves, \mathcal{C}^{op} \to Set). Objects in the category of elements are pairs (c, x) where c is an object in \mathcal{C}, and x \in F c is an element of a set.

A morphism from (c, x) to (c', x') is a morphism f \colon c \to c' in \mathcal{C}, such that (F f) x = x'.

There is an obvious projection functor that forgets the second component of the pair

   \Pi \colon (c, x) \mapsto c

(In fact, a general Grothendieck fibration starts with a projection functor.)

You will often see the category of elements written using integral notation. An integral, after all, is a gigantic sum of tiny slices. Similarly, objects of the category of elements form a gigantic sum (disjoint union) of sets F c. This is why you’ll see it written as an integral

   \int^{c \colon \mathcal{C}} F c

However, this notation conflicts with the one for conical colimits so, following Fosco Loregian, I’ll write the category of elements as

   \mathcal{C}\int^{c} F c

An interesting specialization of a category of elements is a comma category. It’s the category L/d of arrows originating in the image of the functor L \colon \mathcal{C} \to \mathcal{D} and terminating at a fixed object d in \mathcal{D}. The objects of L/d are pairs (c, f) where c is an object in \mathcal{C} and f \colon L c \to d is a morphism in \mathcal{D}. These morphisms are elements of the hom-set \mathcal{D}(L c , d), so the comma category is just a category of elements for the functor \mathcal{D}(L-, d) \colon \mathcal{C}^{op} \to Set

   L/d \cong \mathcal{C}\int^{c} \mathcal{D}(L c, d)

You’ll mostly see integral notation in the context of ends and coends. A coend of a profunctor is like a trace of a matrix: it’s a sum (a coproduct) of diagonal elements. But (co-)end notation may also be used for (co-)limits. Using the trace analogy, if you fill rows of a matrix with copies of the same vector, the trace will be the sum of the components of the vector. Similarly, you can construct a profunctor from a functor by repeating the same functor for every value of the first argument c':

   P(c', c) = F c

The coend over this profunctor is the colimit of the functor, a colimit being a generalization of the sum. By slight abuse of notation we write it as

   \mbox{colim}\, F = \int^{c \colon \mathcal{C}} F c

This kind of colimit is called conical, as opposed to what we are going to discuss next.

Weighted colimit as coend

A colimit is a universal cocone under a diagram. A diagram is a bunch of objects and morphisms in \mathcal{C} selected by a functor D \colon \mathcal{J} \to \mathcal{C} from some indexing category \mathcal{J}. The legs of a cocone are morphisms that connect the vertices of the diagram to the apex c of the cocone.

For any given indexing object j \colon \mathcal{J}, we select an element of the hom-set \mathcal{C}(D j, c), as a wire of the cocone. This is a selection of an element of a set (the hom-set) and, as such, can be described by a function from the singleton set *. In other words, a wire is a member of Set(*, \mathcal{C}(D j, c)). In fact, we can describe the whole cocone as a natural transformation between two functors, one of them being the constant functor 1 \colon j \mapsto *. The set of cocones is then the set of natural transformations:

   [\mathcal{J}^{op}, Set](1, \mathcal{C}(D -, c))

Here, [\mathcal{J}^{op}, Set] is the category of presheaves, that is, functors from \mathcal{J}^{op} to Set, with natural transformations as morphisms. As a bonus, we get the cocone triangle commuting conditions from naturality.

Using singleton sets to pick morphisms doesn’t generalize very well to enriched categories. For conical colimits, we are building cocones from zero-thickness wires. What we need instead is what Max Kelly calls cylinders, obtained by replacing the constant functor 1\colon \mathcal{J}^{op} \to Set with a more general functor W \colon \mathcal{J}^{op} \to Set. The result is a weighted colimit (or an indexed colimit, as Kelly calls it), \mbox{colim}^W D. The set of weighted cocones is defined by natural transformations

   [\mathcal{J}^{op}, Set](W, \mathcal{C}(D -, c))

and the weighted colimit is the universal one of these. This definition generalizes nicely to the enriched setting (which I won’t be discussing here).

Universality can be expressed as a natural isomorphism

   [\mathcal{J}^{op}, Set](W, \mathcal{C}(D -, c))  \cong  \mathcal{C}(\mbox{colim}^W D, c)

We interpret this formula as a one-to-one correspondence: for every weighted cocone with the apex c there is a unique morphism from the colimit to c. Naturality conditions guarantee that the appropriate triangles commute.

A weighted colimit can be expressed as a coend (see Appendix 1)

   \mbox{colim}^W D \cong \int^{j \colon \mathcal{J}} W j \cdot D j

The dot here stands for the tensor product of a set by an object. It’s defined by the formula

   \mathcal{C}(s \cdot c, c') \cong Set(s, \mathcal{C}(c, c'))

If you think of s \cdot c as the sum of s copies of the object c, then the above asserts that the set of functions from a sum (coproduct) is equivalent to a product of functions, one per element of the set s,

   (\coprod_s c) \to c' \cong \prod_s (c \to c')
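Dually, in Set (and, loosely, in Haskell) the tensor s \cdot c is just the product (s, c), and the defining isomorphism is currying. A rough sketch:

-- A hedged rendering of the tensor (copower) s \cdot c as the pair type (s, c).
type Copower s c = (s, c)

-- \mathcal{C}(s \cdot c, c') \cong Set(s, \mathcal{C}(c, c')) is exactly currying:
toCopower :: (Copower s c -> c') -> (s -> c -> c')
toCopower = curry

fromCopower :: (s -> c -> c') -> (Copower s c -> c')
fromCopower = uncurry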

Right adjoint as a colimit

A fibration is like a two-dimensional category. Or, if you’re familiar with bundles, it’s like a fiber bundle, which is locally isomorphic to a cartesian product of two spaces, the base and the fiber. In particular, the category of elements \mathcal{C} \int W is, roughly speaking, like a bundle whose base is the category \mathcal{C}, and the fiber is a (c-dependent) set W c.

We also have a projection functor on the category of elements \mathcal{C} \int W that ignores the W c component

   \Pi \colon (c, x) \mapsto c

The coend of this functor is the (conical) colimit

   \int^{(c, x) \colon \mathcal{C}\int W} \Pi (c, x) \cong \underset{\mathcal{C} \int W}{\mbox{colim}} \; \Pi

But this functor is constant along the fiber, so we can “integrate over it.” Since the fibers depend on c, different objects end up weighted differently. The result is a coend over the base category, with objects c weighted by sets W c

   \int^{(c, x) \colon \mathcal{C}\int W} \Pi (c, x) \cong \int^{(c, x) \colon \mathcal{C}\int W} c  \cong   \int^{c \colon \mathcal{C}} W c \cdot c

Using a more traditional notation, this is the formula that relates a (conical) colimit over the category of elements and a weighted colimit of the identity functor

   \underset{\mathcal{C} \int W}{\mbox{colim}} \;  \Pi  \cong \mbox{colim}^W Id

There is a category of elements that will be of special interest to us when discussing adjunctions: the comma category for the functor L \colon \mathcal{C} \to \mathcal{D}, in which the weight functor is the hom-functor \mathcal{D}(L-, d)

   L/d \cong \mathcal{C}\int^{c} \mathcal{D}(L c, d)

If we plug it into the last formula, we get

   \underset{L/d}{\mbox{colim}} \;  \Pi  \cong \underset{C \int \mathcal{D}(L-, d)}{\mbox{colim}} \;  \Pi  \cong \int^{c \colon \mathcal{C}} \mathcal{D}(L c, d) \cdot c

If the functor L has a right adjoint

   \mathcal{D}(L c, d) \cong \mathcal{C}(c, R d)

we can rewrite this as

   \underset{L/d}{\mbox{colim}} \;  \Pi  \cong \int^{c \colon \mathcal{C}} \mathcal{C}(c, R d) \cdot c

and using the ninja Yoneda lemma (see Appendix 2) we get a formula for the right adjoint in terms of a colimit over a comma category

   \underset{L/d}{\mbox{colim}} \; \Pi  \cong R d

Incidentally, this is the left Kan extension of the identity functor along L. (In fact, it can be used to define the right adjoint, as long as it is preserved by the functor L.)

We’ll come back to this formula when discussing Freyd’s adjoint functor theorem.

Appendix 1

I’m going to prove the following identity using some of the standard tricks of coend calculus

   \mbox{colim}^W D \cong \int^{j \colon \mathcal{J}} W j \cdot D j

To show that two objects are isomorphic, it’s enough to show that their hom-sets to any object c' are isomorphic (this follows from the Yoneda lemma)

   \begin{aligned}  \mathcal{C}(\mbox{colim}^W D, c') & \cong [\mathcal{J}^{op}, Set]\big(W-, \mathcal{C}(D -, c')\big) \\   &\cong \int_j Set \big(W j, \mathcal{C}(D j, c')\big) \\   &\cong \int_j \mathcal{C}(W j \cdot D j, c') \\   &\cong \mathcal{C}(\int^j W j \cdot D j, c')  \end{aligned}

I first used the universal property of the colimit, then rewrote the set of natural transformations as an end, used the definition of the tensor product of a set and an object, and replaced an end of a hom-set by a hom-set of a coend (continuity of the hom-set functor).

Appendix 2

The proof of

   \int^{c \colon \mathcal{C}} \mathcal{C}(c, R d) \cdot c \cong R d

follows the same pattern

   \begin{aligned}  &\mathcal{C}\Big( \big(\int^{c} \mathcal{C}(c, R d) \cdot c\big) , c'\Big)\\  \cong &\int_c \mathcal{C}\big( \mathcal{C}(c, R d) \cdot c , c'\big) \\  \cong &\int_c Set\big( \mathcal{C}(c, R d) , \mathcal{C}(c, c')\big) \\  \cong & \; \mathcal{C}(R d, c')   \end{aligned}

I used the fact that a hom-set from a coend is isomorphic to an end of a hom-set (continuity of hom-sets). Then I applied the definition of a tensor. Finally, I used the Yoneda lemma for contravariant functors, in which the set of natural transformations is written as an end.

   [ \mathcal{C}^{op}, Set]\big(\mathcal{C}(-, x), H \big) \cong \int_c Set \big( \mathcal{C}(c, x), H c \big) \cong H x


Previously we discussed ninth chords, which are the first in a series of extension chords. Extensions are the notes that go beyond the first octave. Since we build chords by stacking thirds on top of each other, the next logical step, after the ninth chord, is the eleventh and the thirteenth chords. And that’s it: there is no fifteenth chord, because the fifteenth would be the same as the root (albeit two octaves higher).

This strange musical arithmetic is best understood if we translate all intervals into their semitone equivalents in equal temperament. Since we started by constructing the E major chord, let’s work with the E major scale, which consists of the following notes:

|E |  |F#|  |G#|A  |  |B |  |C#|  |D#|E |

Let’s chart the chord tones taking E as the root.

We see the clash of several naming conventions. Letter names have their origin in the major diatonic scale, as implemented by the white keys on the piano starting from C.

|C |  |D |  |E |F |  |G |  |A |  |B |C |

They go in alphabetical order, wrapping around after G. On the guitar we don’t have white and black keys, so this convention seems rather arbitrary.

The names of intervals (here, marked by digits, with occasional accidental symbols) are also based on the diatonic scale. They essentially count the number of letters from the root (including the root). So the distance from E to B is 5, because you count E, F, G, A, B — five letters. For a mathematician this convention makes little sense, but it is what it is.

After 12 semitones, we wrap around, as far as note names are concerned. With intervals the situation is a bit more nuanced. The ninth can be, conceptually, identified with the second; the eleventh with the fourth; and the thirteenth with the sixth. But how we name the intervals depends on their harmonic function. For instance, the same note, C#, is called the sixth in the E6 chord, and the thirteenth in E13. The difference is that E13 also contains the (dominant) seventh and the ninth.

A full thirteenth chord contains seven notes (root, third, fifth, seventh, ninth, eleventh, and thirteenth), so it cannot possibly be voiced on a six-string guitar. We usually drop the eleventh (as you can see above). The ninth and the fifth can be omitted as well. The root is very important, since it defines the chord, but when you’re playing in a band, it can be taken over by the bass instrument. The third is important because it distinguishes between major and minor modes (but then again, you have power chords that skip the third). The seventh is somewhat important in defining the dominant role of the chord.

Notice that a thirteenth chord can be seen as two separate chords on top of each other. E13 can be decomposed into E7 with F#m on top (try to spot these two shapes in this grip). Seen this way, the major/minor clash is another argument to either drop the eleventh (which serves as the minor third of F#m) or sharp it.

Alternatively, one could decompose E13 into E with DΔ7 on top. The latter shape is also easily recognized in this grip.

I decided against listing eleventh chords because they are awkward to voice on the guitar and because they are rarely used. Thirteenth chords are more frequent, especially in jazz. You’ve seen E13, here’s G13:

It skips the 11th and the 5th; and the 9th at the top is optional.

The Role of Harmonics

It might be worth explaining why omitting the fifth in G13 doesn’t change the character of the chord. The reason is that, when you play the root note, you are also producing harmonics. One of the strongest harmonics is the fifth, more precisely, the fifth over the octave. So, even if you don’t voice it, you can hear it. In fact, a lot of the quality of a given chord voicing depends on the way the harmonics interact with each other, especially in the bass. When you strum the E chord on the guitar, you get a strong root sound E, and the B on the next thickest string amplifies its harmonic fifth. Compare this with the G shape, which also starts with the root, but the next string voices the third, B, which sounds okay, but not great, so some people mute it.

Inverted chords, even though they contain the same notes (up to octave equivalence) may sound dissonant, depending on the context (in particular, voice leading in the bass). This is why we don’t usually play the lowest string in C and A shapes, or the two lowest strings in the D shape.

In the C shape, the third in the bass clashes with the root and is usually muted. That’s because the strongest harmonic of E is B, which makes C/E sound like CΔ7.

On the other hand, when you play the CΔ7 chord, the E in the bass sounds great, for exactly the same reason.

You can also play C with the fifth in the bass, as C/G, and it sounds good, probably because the harmonic D of G gives it the ninth flavor. This harmonic is an octave and a fifth above G, so it corresponds to the D that would be voiced on the third fret of the B string.

The same reasoning doesn’t quite work for the A shape. Firstly, because all four lower strings in A/E voice the very strong power chord (two of them open strings), drowning out the third that follows. Also, the fifth above E is the B that’s just two semitones below the third, C#, voiced on the B string. (Theoretically, C/G has a third doubled on the thinnest string, but that doesn’t seem to clash as badly with the D harmonic of G. Again, the ear beats theory!)

Next: Altered chords.


We have already discussed several kinds of seventh chords. But if you can extend the chord by adding a third above it, why not top it with yet another third? This way we arrive at the ninth chord. But a ninth is one whole step above the octave. So far we’ve been identifying notes that cross the octave with their counterparts that are 12 semitones lower. A mathematician would say that we are doing arithmetic modulo 12. But this is just a useful fiction. A lot of things in music theory can be motivated using modular arithmetic, but ultimately, we have to admit that if something doesn’t sound right, it’s not right.

A ninth is 14 semitones above the root (if you don’t flat or sharp it), so it should be identified with the second, which is 2 semitones up from the root. That puts it smack in the middle between the root and the third: a pretty jarring dissonance. We’ve seen a second used in a chord before, but it was playing the role of a suspended third. In a ninth chord, you keep the third, and move the second to the next octave, where it becomes a ninth and cannot do as much damage. Instead it provides color and tension, making things more interesting.

To construct E9, we start with E7. It has the root duplicated on the thinnest string, so it’s easy to raise it by two semitones to produce the ninth.

There are many variations of the ninth chord. There is a minor version, with the third lowered; the seventh can be raised to a major seventh; and the ninth itself can be flatted or sharped. We won’t cover all these.

Following the same pattern, C9 can be constructed from C7 by raising the root by two semitones.

We get a highly movable shape, especially if we put the fifth on the thinnest string. In particular, it can be moved one fret towards the nut to produce B9–a slight modification of the B7 grip we’ve seen before.

If you look carefully at this shape, you might recognize parts of Gm in it (the three thinnest strings). This is no coincidence. The fifth, the seventh, and the ninth of any ninth chord form a minor triad.

Here is the E9 grip obtained by transposing C9 down the fretboard. It’s used a lot in funk:

The same chord with a sharped ninth is called the Hendrix chord, after Jimi Hendrix who popularized it:

The E9 shape is not only movable, but it’s also easy to mutate. This is the minor version:


and this is the major seventh version:

Such chords are quite common in Bossa Nova.

A9 is obtained by raising the root of A7 by two semitones:


Can you spot the Dm shape raised by two frets?

Similarly, G9 is constructed from G7, and it conceals a Dm as part of it.

Next: Extension chords.


The tremendous success, in recent centuries, of science and technology explaining the world around us and improving the human condition helped create the impression that we are on the brink of understanding the Universe. The world is complex, but we seem to have been able to reduce its complexity down to a relatively small number of fundamental laws. These laws are formulated in the language of mathematics, and the idea is that, even if we can’t solve all the equations describing complex systems, at least we can approximate the solutions, usually with the help of computers. These successes led to a feeling bordering on euphoria at the power of our reasoning. Eugene Wigner summed up this feeling in his famous essay, The Unreasonable Effectiveness of Mathematics in the Natural Sciences.

Granted, there are still a few missing pieces, like the unification of gravity with the Standard Model, and the 95% of the mass of the Universe unaccounted for, but we’re working on it… So there’s nothing to worry about, right?

Actually, if you think about it, the idea that the Universe can be reduced to a few basic principles is pretty preposterous. If this turned out to be the case, I would be the first to believe that we live in a simulation. It would mean that this enormous Universe, with all the galaxies, stars, and planets was designed with one purpose in mind: that a bunch of sentient monkeys on the third planet from a godforsaken star in a godforsaken galaxy were able to understand it. That they would be able to build, in their puny brains–maybe extended with some silicon chips and fiber optics–a perfect model of it.

How do we understand things? By building models in our (possibly computer-enhanced) minds. Obviously, it only makes sense if the model is smaller than the actual thing; which is only possible if reality is compressible. Now compare the size and the complexity of the Universe with the size and the complexity of our collective brains. Even with lossy compression, the discrepancy is staggering. But, you might say, we don’t need to model the totality of the Universe, just the small part around us. This is where compositionality becomes paramount. We assume that the world can be decomposed, and that the relevant part of it can be modeled, to a good approximation, independent from the rest.

Reductionism, which has been fueling science and technology, was made possible by the decompositionality of the world around us. And by “around us” I mean not only physical vicinity in space and time, but also proximity of scale. Consider that there are 35 orders of magnitude between us and the Planck length (which is where our most precious model of spacetime breaks down). It’s perfectly possible that the “sphere of decompositionality” we live in is but a thin membrane; more of an anomaly than a rule. The question is, why do we live in this sphere? Because that’s where life is! Call it anthropic or biotic principle.

The first rule of life is that there is a distinction between the living thing and the environment. That’s the primal decomposition.

It’s no wonder that one of the first “inventions” of life was the cell membrane. It decomposed space into the inside and the outside. But even more importantly, every living thing contains a model of its environment. Higher animals have brains to reason about the environment (where’s food? where’s predator?). But even a lowly virus encodes, in its DNA or RNA, the tricks it uses to break into a cell. Show me your RNA, and I’ll tell you how you spread. I’d argue that the definition of life is the ability to model the environment. And what makes the modeling possible is that the environment is decomposable and compressible.

We don’t think much of the possibility of life on the surface of a proton, mostly because we think that the proton is too small. But a proton is closer to our scale than it is to the Planck scale. A better argument is that the environment at the proton scale is not easily decomposable. A quarkling would not be able to produce a model of its world that would let it compete with other quarklings and start the evolution. A quarkling wouldn’t even be able to separate itself from its surroundings.

Once you accept the possibility that the Universe might not be decomposable, the next question is, why does it appear to be so overwhelmingly decomposable? Why do we believe so strongly that the models and theories that we construct in our brains reflect reality? In fact, for the longest time people would study the structure of the Universe using pure reason rather than experiment (some still do). Ancient Greek philosophers were masters of such introspection. This makes perfect sense if you consider that our brains reflect millions of years of evolution. Euclid didn’t have to build a Large Hadron Collider to study geometry. It was obvious to him that two parallel lines never intersect (it took us two thousand years to start questioning this assertion–still using pure reason).

You cannot talk about decomposition without mentioning atoms. Ancient Greeks came up with this idea by pure reasoning: if you keep cutting stuff, eventually you’ll get something that cannot be cut any more, the “uncuttable” or, in Greek, ἄτομον [atomon]. Of course, nowadays we not only know how to cut atoms but also protons and neutrons. You might say that we’ve been pretty successful in our decomposition program. But these successes came at the cost of constantly redefining the very concept of decomposition.

Intuitively, we have no problem imagining the Solar System as composed of the Sun and the planets. So when we figured out that atoms were not elementary, our first impulse was to see them as little planetary systems. That didn’t quite work, and we know now that, in order to describe the composition of the atom, we need quantum mechanics. Things are even stranger when decomposing protons into quarks. You can split an atom into free electrons and a nucleus, but you can’t split a proton into individual quarks. Quarks manifest themselves only under confinement, at high energies.

Also, the masses of the three constituent quarks add up only to one percent of the mass of the proton. So where does the rest of the mass come from? From virtual gluons and quark/antiquark pairs. So are those also the constituents of the proton? Well, sort of. This decomposition thing is getting really tricky once you get into quantum field theory.

Human babies don’t need to experiment with falling into a precipice in order to learn to avoid visual cliffs. We are born with some knowledge of spatial geometry, gravity, and the (painful) properties of solid objects. We also learn to break things apart very early in life. So decomposition by breaking apart is very intuitive, and the idea of a particle–the ultimate result of breaking apart–makes intuitive sense. There is another decomposition strategy: breaking things into waves. Again, it was the Ancient Greeks: Pythagoras, who studied music by decomposing it into harmonics, and Aristotle, who suggested that sound propagates through the movement of air. Eventually we uncovered wave phenomena in light, and then in the rest of the electromagnetic spectrum. But our intuitions about particles and waves are very different. In essence, particles are supposed to be localized, while waves are distributed. The two decomposition strategies seem to be incompatible.

Enter quantum mechanics, which tells us that every elementary chunk of matter is both a wave and a particle. Even more shockingly, the distinction depends on the observer. When you don’t look at it, the electron behaves like a wave, the moment you glance at it, it becomes a particle. There is something deeply unsatisfying about this description and, if it weren’t for the amazing agreement with experiment, it would be considered absurd.

Let’s summarize what we’ve discussed so far. We assume that there is some reality (otherwise, there’s nothing to talk about), which can be, at least partially, approximated by decomposable models. We don’t want to identify reality with models, and we have no reason to assume that reality itself is decomposable. In our everyday experience, the models we work with fit reality almost perfectly. Outside everyday experience, especially at short distances, high energies, high velocities, and strong gravitational fields, our naive models break down. A physicist’s dream is to create the ultimate model that would explain everything. But any model is, by definition, decomposable. We don’t have a language to talk about non-decomposable things other than describing what they aren’t.

Let’s discuss a phenomenon that is borderline non-decomposable: two entangled particles. We have a quantum model that describes a single particle. A two-particle system should be some kind of composition of two single-particle systems. Things may be complicated when particles are close together, because of possible interaction between them, but if they move in opposite directions for long enough, the interaction should become negligible. This is what happens in classical mechanics, and also with isolated wave packets. When one experimenter measures the state of one of the particles, this should have no impact on the measurement done by another far-away scientist on the second particle. And yet it does! There is a correlation that Einstein called “the spooky action at a distance.” This is not a paradox, and it doesn’t contradict special relativity (you can’t pass information from one experimenter to the other). But if you try to stuff it into either particle or wave model, you can only explain it by assuming some kind of instant exchange of data between the two particles. That makes no sense!

We have an almost perfect model of quantum mechanical systems using wave functions until we introduce the observer. The observer is the Godzilla-like mythical beast that behaves according to classical physics. It performs experiments that result in the collapse of the wave function. The world undergoes an instantaneous transition: wave before, particle after. Of course, an instantaneous change violates the principles of special relativity. To restore it, physicists came up with quantum field theory, in which the observers are essentially relegated to infinity (which, for all intents and purposes, starts a few centimeters away from the point of the violent collision in a collider). In any case, quantum theory is incomplete because it requires an external classical observer.

The idea that measurements may interfere with the system being measured makes perfect sense. In the macro world, when we shine light on something, we don’t expect to disturb it too much; but we understand that the micro world is much more delicate. What’s happening in quantum mechanics is more fundamental, though. The experiment forces us to switch models. We have one perfectly decomposable model in terms of the Schroedinger equation. It lets us understand the propagation of the wave function from one point to another, from one moment to another. We stick to this model as long as possible, but a time comes when it no longer fits reality. We are forced to switch to a different, also decomposable, particle model. Reality doesn’t suddenly collapse. It’s our model that collapses, because we insist–we have no choice!–on decomposability. But if nature is not decomposable, one model cannot possibly fit all of it.

What happens when we switch from one model to another? We have to initialize the new model with data extracted from the old model. But these models are incompatible. Something has to give. In quantum mechanics, we lose determinism. The transition doesn’t tell us how exactly to initialize the new model, it only gives us probabilities.

Notice that this approach doesn’t rely on the idea of a classical observer. What’s important is that somebody or something is trying to fit a decomposable model to reality, usually locally, although the case of entangled particles requires the reconciliation of two separate local models.

Model switching and model reconciliation also show up in the interpretation of the twin paradox in special relativity. In this case we have three models: the twin on Earth, the twin on the way to Proxima Centauri, and the twin on the way back. They start by reconciling their models–synchronizing the clocks. When the astronaut twin returns from the trip, they reconcile their models again. The interesting thing happens at Proxima Centauri, where the second twin turns around. We can actually describe the switch between the two models, one for the trip to, and another for the trip back, using more advanced general relativity, which can deal with accelerating frames. General relativity allows us to keep switching between local models, or inertial frames, in a continuous way. One could speculate that similar continuous switching between wave and particle models is what happens in quantum field theory.

In math, the closest match to this kind of model-switching is in the definition of topological manifolds and fiber bundles. A manifold is covered with maps–local models of the manifold in terms of simple n-dimensional spaces. Transitions between maps are well defined, but there is no guarantee that there exists one global map covering the whole manifold. To my knowledge, there is no theory in which such transitions would be probabilistic.

Seen from a distance, physics looks like a very patchy system, full of holes. Traditional wisdom has it that we should be able to eventually fill the holes and connect the patches. This optimism has its roots in the astounding series of successes in the first half of the twentieth century. Unfortunately, since then we have entered a period of stagnation, despite a record number of people and resources dedicated to basic research. It’s possible that this is a temporary setback, but there is a definite possibility that we have simply reached the limits of decomposability. There is still a lot to explore within the decomposability sphere, and the amount of complexity that can be built on top of it is boundless. But there may be some areas that will forever be out of bounds to our reason.


Fig 1. Current decomposability sphere.

  • GR: General Relativity (gravity)
  • SR: Special Relativity
  • PQFT: Perturbative Quantum Field Theory (compatible with SR)
  • QM: Quantum Mechanics (non-relativistic)
  • BB: Big Bang
  • H: Higgs Field
  • SB: Symmetry Breaking (inflation)

Previously, we talked about the construction of initial algebras. The dual construction is that of terminal coalgebras. Just like an algebra can be used to fold a recursive data structure into a single value, a coalgebra can do the reverse: it lets us build a recursive data structure from a single seed.

Here’s a simple example. We define a tree that stores values in its nodes

data Tree a = Leaf | Node a (Tree a) (Tree a)

We can build such a tree from a single list used as a seed. We can choose the algorithm in such a way that the resulting tree is ordered

split :: Ord a => [a] -> Tree a
split [] = Leaf
split (a : as) = Node a (split l) (split r)
  where (l, r) = partition (<a) as

A traversal of this tree will produce a sorted list. We’ll get back to this example after working out some theory behind it.
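
For completeness, here’s a minimal sketch of such a traversal (the name toList is mine; split above uses partition from Data.List):

toList :: Tree a -> [a]
toList Leaf         = []
toList (Node a l r) = toList l ++ [a] ++ toList r

Composing toList . split then gives a simple tree-based sort.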

The functor

The tree in our example can be derived from the functor

data TreeF a x = LeafF | NodeF a x x
instance Functor (TreeF a) where
  fmap _ LeafF = LeafF
  fmap f (NodeF a x y) = NodeF a (f x) (f y)

Let’s simplify the problem and forget about the payload of the tree. We’re interested in the functor

data F x = LeafF | NodeF x x

Remember that, in the construction of the initial algebra, we were applying consecutive powers of the functor to the initial object. The dual construction of the terminal coalgebra involves applying powers of the functor to the terminal object: the unit type () in Haskell, or the singleton set 1 in Set.

Let’s build a few such trees. Here are some terms generated by the first power of F

w1, u1 :: F ()
w1 = LeafF
u1 = NodeF () ()

And here are some generated by the square of F acting on ()

w2, u2, t2, s2 :: F (F ())

w2 = LeafF
u2 = NodeF w1 w1
t2 = NodeF w1 u1
s2 = NodeF u1 u1

Or, graphically

Notice that we are getting two kinds of trees, ones that have units () in their leaves and ones that don’t. Units may appear only at the (n+1)-st layer (root being the first layer) of F^n.

We are also getting some duplication between different powers of F. For instance, we get a single LeafF at the F level and another one at the F^2 level (in fact, at every consecutive level after that as well). The node with two LeafF leaves appears at every level starting with F^2, and so on. The trees without unit leaves are the ones we are familiar with—they are the finite trees. The ones with unit leaves are new and, as we will see, they will contribute infinite trees to the terminal coalgebra. We’ll construct the terminal coalgebra as a limit of an \omega-chain.

Terminal coalgebra as a limit

As was the case with initial algebras, we’ll construct a chain of powers of F, except that we’ll start with the terminal rather than the initial object, and we’ll use a different morphism to link them together. By definition, there is only one morphism from any object to the terminal object. In category theory, we’ll call this morphism \mbox{!`} \colon a \to 1 (upside-down exclamation mark) and implement it in Haskell as a polymorphic function

unit :: a -> ()
unit _ = ()

First, we’ll use \mbox{!`} to connect F 1 to 1, then lift \mbox{!`} to connect F^2 1 to F 1, and so on, using F^n \mbox{!`} to transform F^{n + 1} 1 to F^n 1.

Let’s see how it works in Haskell. Applying unit directly to F () turns it into ().
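
Concretely, with the terms defined above (the primed names are mine):

w1', u1' :: ()
w1' = unit w1
u1' = unit u1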

Values of the type F (F ()) are mapped to values of the type F ()

w2' = fmap unit w2
> LeafF
u2' = fmap unit u2
> NodeF () ()
t2' = fmap unit t2
> NodeF () ()
s2' = fmap unit s2
> NodeF () ()

and so on.

The following pattern emerges. F^n 1 contains trees that end with either leaves (at any level) or values of the unit type (only at the lowest, (n+1)-st level). The lifted morphism F^{n-1} \mbox{!`} (the (n-1)st power of fmap acting on unit) operates strictly on the nth level of a tree. It turns leaves and two-unit-nodes into single units ().

Alternatively, we can look at the preimages of these mappings—conceptually reversing the arrows. Observe that all trees at the F^2 level can be seen as generated from the trees at the F level by replacing every unit () with either a leaf LeafF or a node NodeF () ().

It’s as if a unit were a universal seed that can either sprout a leaf or a two-unit-node. We’ll see later that this process of growing recursive data structures from seeds corresponds to anamorphisms. Here, the terminal object plays the role of a universal seed that may give rise to two parallel universes. These correspond to the inverse image (a so-called fiber) of the lifted unit.
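
Here’s a hedged sketch of that fiber for our functor (the name expand is mine): for each tree in F (), it lists the preimages under fmap unit :: F (F ()) -> F ().

expand :: F () -> [F (F ())]
expand LeafF         = [LeafF]      -- a leaf can only come from a leaf
expand (NodeF () ()) = [NodeF s t | s <- seeds, t <- seeds]
  where seeds = [LeafF, NodeF () ()]  -- each seed sprouts a leaf or a two-seed node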

Now that we have an \omega-chain, we can define its limit. It’s easier to understand a limit in the category of sets. A limit in Set is a set of cones whose apex is the singleton set.

The simplest example of a limit is a product of sets. In that case, a cone with a singleton at the apex corresponds to a selection of elements, one per set. This agrees with our understanding of a product as a set of tuples.

A limit of a directed finite chain is just the starting set of the chain (the rightmost object in our pictures). This is because all projections, except for the rightmost one, are determined by commuting triangles. In the example below, \pi_b is determined by \pi_a:

\pi_b = f_1 \circ \pi_a

and so on. Here, every cone from 1 is fully determined by a function 1 \to a, and the set of such functions is isomorphic to a.

Things are more interesting when the chain is infinite, and there is no rightmost object—as is the case of our \omega-chain. It turns out that the limit of such a chain is the terminal coalgebra for the functor F.

In this case, the interpretation where we look at preimages of the morphisms in the chain is very helpful. We can view a particular power of F acting on 1 as a set of trees generated by expanding the universal seeds embedded in the trees from the lower power of F. Those trees that had no seeds, only LeafF leaves, are just propagated without change. So the limit will definitely contain all these trees. But it will also contain infinite trees. These correspond to cones that select ever growing trees in which there are always some seeds that are replaced with double-seed-nodes rather than LeafF leaves.

Compare this with the initial algebra construction which only generated finite trees. The terminal coalgebra for the functor TreeF is larger than the initial algebra for the same functor.

We have also seen a functor whose initial algebra was an empty set

data StreamF a x = ConsF a x

This functor has a well-defined non-empty terminal coalgebra. The n-th power of (StreamF a) acting on () consists of lists of exactly n values of type a

ConsF a1 (ConsF a2 (... (ConsF an ())...))

The lifting of unit acting on such a list replaces the final (ConsF a ()) with () thus shortening the list by one item. Its “inverse” replaces the seed () with any value of type a (so it’s a multi-valued inverse, since there are, in general, many values of a). The limit is isomorphic to an infinite stream of a. In Haskell it can be written as a recursive data structure

data Stream a = Cons a (Stream a)
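
As a hedged sanity check (the name takeS is mine), the n-th projection out of this limit can be read as truncating a stream to its n-element approximation, mirroring the n-th power of (StreamF a) acting on ():

takeS :: Int -> Stream a -> [a]
takeS 0 _           = []
takeS n (Cons a as) = a : takeS (n - 1) as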

Anamorphism

The limit of a diagram is defined as a universal cone. In our case this would be the cone consisting of the object we’ll call \nu F, with a set of projections \pi_n

such that any other cone factors through \nu F. We want to show that \nu F (if it exists) is a terminal coalgebra.

First, we have to show that \nu F is indeed a coalgebra, that is, there exists a morphism

k \colon \nu F \to F (\nu F)

We can apply F to the whole diagram. If F preserves \omega-limits, then we get a universal cone with the apex F (\nu F) and the \omega-chain with F 1 on the left. But our original object \nu F forms a cone over the same chain (ignoring the projection \pi_0). Therefore there must be a unique mapping k from it to F (\nu F).

The coalgebra (\nu F, k) is terminal if there is a unique morphism from any other coalgebra to it. Consider, for instance, a coalgebra (A, \kappa \colon A \to F A). With this coalgebra, we can construct an \omega-chain

We can connect the two \omega-chains using the terminal morphism from A to 1 and all its liftings

Notice that all squares in this diagram commute. The leftmost one commutes because 1 is the terminal object, therefore the mapping from A to it is unique, so the composite \mbox{!`} \circ F \mbox{!`} \circ \kappa must be the same as \mbox{!`}. A is therefore an apex of a cone over our original \omega-chain. By universality, there must be a unique morphism from A to the limit of this \omega-chain, \nu F. This morphism is in fact a coalgebra morphism and is called the anamorphism.

Adjunction

The constructions of initial algebras and terminal coalgebras can be compactly described using adjunctions.

There is an obvious forgetful functor U from the category of F-algebras C^F to C. This functor just picks the carrier and forgets the structure map. Under certain conditions, the left adjoint free functor \Phi exists

C^F ( \Phi x, a) \cong C(x, U a)

This adjunction can be evaluated at the initial object (the empty set in Set).

C^F ( \Phi 0, a) \cong C(0, U a)

This shows that there is a unique algebra morphism—the catamorphism—from \Phi 0 to any algebra a. This is because the hom-set C(0, U a) is a singleton for every a. Therefore \Phi 0 is the initial algebra \mu F.

Conversely, there is a cofree functor \Psi

C_F(c, \Psi x) \cong C(U c, x)

It can be evaluated at a terminal object

C_F(c, \Psi 1) \cong C(U c, 1)

showing that there is a unique coalgebra morphism—the anamorphism—from any coalgebra c to \Psi 1. This shows that \Psi 1 is the terminal coalgebra \nu F.
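
In Haskell, a hedged sketch of the carriers of these adjoints uses the familiar free-monad and cofree-comonad shapes (the names Free, Wrap, Cofree, and :< are my choices for this sketch):

data Free f x = Pure x | Wrap (f (Free f x))   -- carrier of the free algebra \Phi x
data Cofree f x = x :< f (Cofree f x)          -- carrier of the cofree coalgebra \Psi x

Ignoring bottoms, Free f Void has no usable Pure branch, leaving the least fixed point of f, while the () payload in Cofree f () carries no information, leaving the greatest fixed point.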

Fixed point

Lambek’s lemma works for both initial algebras and terminal coalgebras. It states that their structure maps are isomorphisms, therefore their carriers are fixed points of the functor F

\mu F \cong F (\mu F)

\nu F \cong F (\nu F)

The difference is that \mu F is the least fixed point, and \nu F is the greatest fixed point of F. They are, in principle, different. And yet, in a programming language like Haskell, we only have one recursively defined data structure defining the fixed point

newtype Fix f = Fix { unFix :: f (Fix f) }

So which one is it?

We can define both catamorphisms from, and anamorphisms to, the fixed point:

type Algebra f a = f a -> a

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

type Coalgebra f a = a -> f a

ana :: Functor f => Coalgebra f a -> a -> Fix f
ana coa = Fix . fmap (ana coa) . coa
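
As a quick illustration (splitCoalg, joinAlg, and sortViaFix are my names), we can revisit the opening example: unfold an ordered tree with ana, then fold it back into a sorted list with cata. This assumes partition from Data.List and the TreeF functor defined earlier.

splitCoalg :: Ord a => Coalgebra (TreeF a) [a]
splitCoalg []       = LeafF
splitCoalg (a : as) = NodeF a l r
  where (l, r) = partition (< a) as

joinAlg :: Algebra (TreeF a) [a]
joinAlg LeafF         = []
joinAlg (NodeF a l r) = l ++ [a] ++ r

sortViaFix :: Ord a => [a] -> [a]
sortViaFix = cata joinAlg . ana splitCoalg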

So it seems that Fix f is both initial, as the carrier of an algebra, and terminal, as the carrier of a coalgebra. But we know that there are elements of \nu F that are not in \mu F—namely infinite trees and infinite streams—so the two fixed points are not isomorphic and cannot both be described by the same Fix f.

However, they are not unrelated. Because of Lambek’s lemma, the initial algebra (\mu F, j) gives rise to a coalgebra (\mu F, j^{-1}), and the terminal coalgebra (\nu F, k) generates an algebra (\nu F, k^{-1}).

Because of universality, there must be a (unique) algebra morphism from the initial algebra (\mu F, j) to (\nu F, k^{-1}), and a unique coalgebra morphism from (\mu F, j^{-1}) to the terminal coalgebra (\nu F, k). It turns out that these two are given by the same morphism f \colon \mu F \to \nu F between the carriers. This morphism satisfies the equation

k \circ f \circ j = F f

which makes it both an algebra and a coalgebra morphism.

Furthermore, it can be shown that, in Set, f is injective: it embeds \mu F in \nu F. This corresponds to our observation that \nu F contains \mu F plus some infinite data structures.

The question is, can Fix f describe infinite data? The answer depends on the nature of the programming language: infinite data structures can only exist in a lazy language. Since Haskell is lazy, Fix f corresponds to the greatest fixed point. The least fixed point forms a subset of Fix f (in fact, one can define a metric in which it’s a dense subset).

This is particularly obvious in the case of a functor that has no terminating leaves, like the stream functor.

data StreamF a x = StreamF a x
  deriving Functor

We’ve seen before that the initial algebra for StreamF a is empty, essentially because its action on Void is uninhabited. It does, however, have a terminal coalgebra. And, in Haskell, the fixed point of the stream functor indeed generates infinite streams

type Stream a = Fix (StreamF a)

How do we know that? Because we can construct an infinite stream using an anamorphism. Notice that, unlike in the case of a catamorphism, the recursion in an anamorphism doesn’t have to be well founded and, indeed, in the case of a stream, it never terminates. This is why this won’t work in an eager language. But it works in Haskell. Here’s a coalgebra whose carrier is Int

coaInt :: Coalgebra (StreamF Int) Int
coaInt n = StreamF n (n + 1)

It generates an infinite stream of natural numbers

ints = ana coaInt 0

Of course, in Haskell, the work is not done until we demand some values. Here’s the function that extracts the head of the stream:

hd :: Stream a -> a
hd (Fix (StreamF x _)) = x

And here’s one that advances the stream

tl :: Stream a -> Stream a
tl (Fix (StreamF _ s)) = s
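
A quick sanity check (the name thirdInt is mine):

thirdInt :: Int
thirdInt = hd (tl (tl ints))  -- evaluates to 2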

This is all true in Set, but Haskell is not Set. I had a discussion with Edward Kmett and he pointed out that Haskell’s fixed point data type can be considered the initial algebra as well. Suppose that you have an infinite data structure, like the stream we were just discussing. If you apply a catamorphism for an arbitrary algebra to it, it will most likely never terminate (try summing up an infinite stream of integers). In Haskell, however, this is interpreted as the catamorphism returning the bottom \bot, which is a perfectly legitimate value. And once you start including bottoms in your reasoning, all bets are off. In particular Void is no longer uninhabited—it contains \bot—and the colimit construction of the initial algebra is no longer valid. It’s possible that some of these results can be generalized using domain theory and enriched categories, but I’m not aware of any such work.

Bibliography

  1. Adamek, Introduction to coalgebra
  2. Michael Barr, Terminal coalgebras for endofunctors on sets

There is a bit of folklore about algebras in Haskell, which says that both the initial algebra and the terminal coalgebra for a given functor are defined by the same fixed point formula. This works for most common cases, but is not true in general. What is definitely true is that they are both fixed points–this result is called Lambek’s lemma–but there may be many fixed points. The initial algebra is the least fixed point, and the terminal coalgebra is the greatest fixed point.

In this series of blog posts I will explore the ways one can construct these (co-)algebras using category theory and illustrate it with Haskell examples.

In this first installment, I’ll go over the construction of the initial algebra.

A functor

Let’s start with a simple functor that generates binary trees. Normally, we would store some additional data in a tree (meaning, the functor would take another argument), either in nodes or in leaves, but here we’re just interested in pure shapes.

data F a = Leaf 
         | Node a a
  deriving Show

Categorically, this functor can be written as a coproduct (sum) of the terminal object 1 (singleton) and the product of a with itself, here written simply as a^2

F a = 1 + a^2

The lifting of functions is given by this implementation of fmap

instance Functor F where
  fmap _ Leaf       = Leaf
  fmap f (Node x y) = Node (f x) (f y)

We can use this functor to build trees of arbitrary depth. Let’s consider, for instance, terms of type F Int. We can either build a Leaf, or a Node with two numbers in it

x1, y1 :: F Int
x1 = Leaf
y1 = Node 1 2 

With those, we can build next-level values of the type F^2 a or, in our case, F (F Int)

x2, y2 :: F (F Int)
x2 = Leaf
y2 = Node x1 y1

We can display y2 directly using show

> Node Leaf (Node 1 2)

or draw the corresponding tree

Since F is an endofunctor, so is F^2. Lifting a function f \colon a \to b to F^2 can be implemented by applying fmap twice. Here’s the action of the function (+1) on our test values

fmap (fmap (+1)) x2
> Leaf
fmap (fmap (+1)) y2
> Node Leaf (Node 2 3)

or, graphically,

You can see that Leafs at any level remain untouched; only the contents of bottom Nodes in the tree are transformed.

The colimit construction

The carrier of the initial algebra can be constructed as a colimit of an infinite sequence. This sequence is constructed by applying powers of F to the initial object which we’ll denote as 0. We’ll first see how this works in our example.

The initial object in Haskell is defined as a type with no data constructor (we are ignoring the question of non-termination in Haskell).

data Void
  deriving Show

In Set, this is just an empty set.

The Show instance for Void requires the pragma

{-# language EmptyDataDeriving #-}

Even though there are no values of the type Void, we can still construct a value of the type F Void

z1 :: F Void
z1 = Leaf

This degenerate version of a tree can be drawn as

This illustrates a very important property of our F: Its action on an empty set does not produce an empty set. This is what allows us to generate a non-trivial sequence of powers of F starting with the empty set.

Not every functor has this property. For instance, the construction of the initial algebra for the functor

data StreamF a x = ConsF a x

will produce an uninhabited type (empty set). Notice that this is different from its terminal coalgebra, which is the infinite stream

data Stream a = Cons a (Stream a)

This is an example of a functor whose initial algebra is not the same as the terminal coalgebra.

Double application of our F to Void produces, again, a Leaf, as well as a Node that contains two Leafs.

z2, v2 :: F (F Void)
z2 = Leaf

v2 = Node z1 z1
> Node Leaf Leaf

Graphically,

In general, powers of F acting on Void generate trees that terminate with Leafs; there is no way to produce a Node at the very bottom, since that would require values of the type Void. Higher and higher powers of F acting on Void will eventually produce any tree we can think of. But for any given power, there will exist even larger trees that are not generated by it.

In order to get all the trees, we could try to take a sum (a coproduct) of infinitely many powers. Something like this

\sum_{n = 0}^{\infty} F^n 0

The problem is that we’d also get a lot of duplication. For instance, we saw that z1 was the same tree as z2. In general, a single Leaf is produced at all non-zero powers of F acting on Void. Similarly, all powers of F greater than one produce a single node with two leaves, and so on. Once a particular tree is produced at some power of F, all higher powers of F will also produce it.

We have to have a way of identifying multiply generated trees. This is why we need a colimit rather than a simple coproduct.

As a reminder, a coproduct is defined as a universal cocone. Here, the base of the cocone is the set of all powers of F acting on 0 (Haskell Void).

In a more general colimit, the objects in the base of the cocone may be connected by morphisms.

Coming from the initial object, there can be only one morphism. We’ll call this morphism ! or, in Haskell, absurd

absurd :: Void -> a
absurd a = case a of {}

This definition requires another pragma

{-# language EmptyCase #-}

We can construct a morphism from F 0 to F^2 0 as a lifting of !, F !. In Haskell, the lifting of absurd doesn’t change the shape of trees. Here it is acting on a leaf

z1' :: F (F Void)
z1' = fmap absurd z1
> Leaf

We can continue this process of lifting absurd to higher and higher powers of F

z2', v2' :: F (F (F Void))

z2' = fmap (fmap absurd) z2
> Leaf

v2' = fmap (fmap absurd) v2
> Node Leaf Leaf

We can construct an infinite chain (this kind of directed chain indexed by natural numbers is called an \omega-chain)

We can use this chain as the base of our cocone. The colimit of this chain is defined as the universal cocone. We will call the apex of this cocone \mu F

In Set these constructions have simple interpretations. A coproduct is a discriminated union. A colimit is a discriminated union in which we identify all those injections that are connected by morphisms in the base of the cocone. For instance

\iota_0 = \iota_{(F 0)}\, \circ \, !
\iota_{(F 0)} = \iota_{(F^2 0)} \circ F !

and so on.

Here we use the lifted absurd (or ! in the picture above) as the morphisms that connect the powers of F acting on Void (or 0 in the picture).

These are exactly the identifications that we were looking for. For instance, F ! maps the leaf generated by F 0 to the leaf which is the element of F^2 0. Or, translating it to Haskell, (fmap absurd) maps the leaf generated by F Void to the leaf generated by F (F Void), and so on.

All trees generated by the n-th power of F are injected into the (n+1)-st power of F by absurd lifted by the n-th power of F.

The colimit is formed by equivalence classes with respect to these identifications. In particular, there is a class for a degenerate tree consisting of a single leaf whose representative can be taken from F Void, or from F (F Void), or from F (F (F Void)) and so on.

Initiality

The colimit \mu F is exactly the initial algebra for the functor F. This follows from the universal property of the colimit. First we will show that for any algebra (A, \alpha \colon F A \to A) there is a unique morphism from \mu F to A. Indeed, we can build a cocone with A at its apex and the injections given by

!

\alpha \circ F !

\alpha \circ F \alpha \circ F^2 !

and so on…
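
Here’s a hedged Haskell sketch of the first few injections for our functor F, with alpha standing for the structure map (the names inj0, inj1, inj2 are mine):

inj0 :: (F a -> a) -> Void -> a
inj0 _ = absurd                                       -- !

inj1 :: (F a -> a) -> F Void -> a
inj1 alpha = alpha . fmap absurd                      -- \alpha . F !

inj2 :: (F a -> a) -> F (F Void) -> a
inj2 alpha = alpha . fmap alpha . fmap (fmap absurd)  -- \alpha . F \alpha . F^2 !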

Since the colimit \mu F is defined by the universal cocone, there is a unique morphism from it to A. It can be shown that this morphism is in fact an algebra morphism. This morphism is called a catamorphism.

Fixed Point

Lambek’s lemma states that the initial algebra is a fixed point of the functor that defines it

F (\mu F) \cong \mu F

This can also be seen directly, by applying the functor to every object and morphism in the \omega-chain that defines the colimit. We get a new chain that starts at F 0

But the colimit of this chain is the same as the colimit \mu F of the original chain. This is because we can always add back the initial object to the chain, and define its injection \iota_0 as the composite

\iota_0 = \iota_{(F 0)} \circ !

On the other hand, if we apply F to the whole universal cocone, we’ll get a new cocone with the apex F (\mu F). In principle, this cocone doesn’t have to be universal, so we cannot be sure that F (\mu F) is a colimit. If it is, we say that F preserves the particular type of colimit—here, the \omega-colimit.

Remember: the image of a cocone under a functor is always a cocone (this follows from functoriality). Preservation of colimits is an additional requirement that the image of a universal cocone be universal.

The result is that, if F preserves \omega-colimits, then the initial algebra \mu F is a fixed point of F

F(\mu F) \cong \mu F

because both sides can be obtained as a colimit of the same \omega-chain.

Bibliography

  1. Adamek, Milius, Moss, Initial Algebras, Terminal Coalgebras, and
    the Theory of Fixed Points of Functors

We live in interesting times. For instance, we are witnessing several extinction events all at once. One of them is the massive extinction of species. The other is the extinction of jobs. Both are caused by advances in technology. As programmers, we might consider ourselves immune to the latter–after all, somebody will have to program these self-driving trucks that eliminate the need for drivers, or the diagnostic tools that eliminate the need for doctors. Eventually, though, even programming jobs will be automated. I can imagine the last programmer putting finishing touches on the program that will make his or her job redundant.

But before we get there, let’s consider which programming tasks are the first to go, and which have the biggest chance to persist for the longest time. Experience tells us that it’s the boring menial jobs that get automated first. So any time you get bored with your work, take note: you are probably doing something that a computer could do better.

One such task is the implementation of user interfaces. All this code that’s behind various buttons, input fields, sliders, etc., is pretty much standard. Granted, you have to put a lot of effort to make the code portable to a myriad of platforms: various desktops, web browsers, phones, watches, fridges, etc. But that’s exactly the kind of expertise that is easily codified. If you find yourself doing copy and paste programming, watch out: your buddy computer can do it too. The work on generating UI has already started, see for instance, pix2code.

The design of user interfaces, as opposed to their implementation, will be more resistant to automation. Not only because it involves creativity, but also because it deals with human issues. Good design must serve the human in front of it. I’m not saying that modeling a human user is impossible, but it’s definitely harder. Of course, in many standard tasks, a drastically simplified human model will work just fine.

So I’m sorry to say it, but those programmers who specialize in HTML and JavaScript will have to retrain themselves.

The next job on the chopping block, in my opinion, is that of a human optimizer. In fact, the only reason it hasn’t been eliminated yet is economic. It’s still cheaper to hire people to optimize code than it is to invest in the necessary infrastructure. You might think that programmers are expensive–the salaries of programmers are quite respectable in comparison to other industries. But if this were true, a lot more effort would go into improving programmers’ productivity, in particular into creating better tools. This is not happening. But as demand for software grows, and AI gets cheaper, at some point the economic balance will change. It will be advantageous to use AI to optimize code.

I’m sorry to say it, but C and C++ programmers will have to go. These are the languages whose only raison d’être is to squeeze maximum performance from hardware. We’ll probably always be interested in performance, but there are other ways of improving it. We are familiar with optimizing compilers that virtually eliminated the job of an assembly language programmer. They use optimizers that are based on algorithmic principles–that is, methods that are understandable to humans. But there is a whole new generation of AI waiting in the wings, which can be trained to optimize code written in higher level languages. Imagine a system which would take this definition of quicksort written in Haskell:

qsort [] = []
qsort (p:xs) = qsort lesser ++ [p] ++ qsort greater
    where (lesser, greater) = partition (< p) xs

and produce code that would run as fast as its hand-coded C counterpart. Even if you don’t know Haskell, I can explain this code to you in just a few sentences. The first line says that sorting an empty list produces an empty list. The second line defines the action of quicksort on a list that consists of a head p–that will be our pivot–and the tail xs. The result is the concatenation (the symbol ++) of three lists. The first one is the result of (recursively) sorting the list lesser, the second is the singleton list containing the pivot, and the third is the result of sorting the list greater. Finally, the pair of lists (lesser, greater) is produced by partitioning xs using the predicate (< p), which reads “less than p.” You can’t get any simpler than that.

Of course the transformation required for optimizing this algorithm is highly nontrivial. Depending on the rest of the program, the AI might decide to change the representation of data from a list to a vector, replace copying by destructive swapping, put some effort in selecting a better pivot, use a different algorithm for sorting very short lists, and so on. This is what a human optimizer would do. But how much harder is this task than, say, playing a game of go against a grandmaster?

I am immensely impressed with the progress companies like Google or IBM made in playing go, chess, and Jeopardy, but I keep asking myself, why don’t they invest all this effort in programming technology? I can’t help but see parallels with Ancient Greece. The Ancient Greeks made tremendous breakthroughs in philosophy and mathematics–just think about Plato, Socrates, Euclid, or Pythagoras–but they had no technology to speak of. Hero of Alexandria invented a steam engine, but it was never put to work. It was only used as a parlor trick. There are many explanations of this phenomenon, but one that strikes close to home is that the Greeks didn’t need technology because they had access to cheap labor through slavery. I’m not implying that programmers are treated like slaves–far from it–but they seem to be considered cheap labor. In fact it’s so cheap to produce software that most of it is given away for free, or for the price of users’ attention in ad-supported software. A lot of software is just bait that’s supposed to entice the user to buy something more valuable, like beer.

It’s gradually becoming clear that programming jobs are diverging. This is not yet reflected in salaries, but as the job market matures, some programming jobs will be eliminated, others will increase in demand. The one area where humans are still indispensable is in specifying what has to be done. The AI will eventually be able to implement any reasonable program, as long as it gets a precise enough specification. So the programmers of the future will stop telling the computer how to perform a given task; rather they will specify what to do. In other words, declarative programming will overtake imperative programming. But I don’t think that explaining to the AI what it’s supposed to do will be easy. The AI will continue to be rather dumb, at least in the foreseeable future. It’s been noted that software that can beat the best go players in the world would be at a complete loss trying to prepare a dinner or clean the dishes. It’s able to play go because it’s reasonably easy to codify the task of playing go– the legal moves and the goal of the game. Humans are extremely bad at expressing their wishes, as illustrated by the following story:

A poor starving peasant couple are granted three wishes and the woman, just taking the first thing that comes to her mind, wishes for one sausage, which she receives immediately. Her husband, pointing out that she could have wished for immense wealth or food to last them a lifetime, becomes angry with her for making such a stupid wish and, not thinking, wishes the sausage were stuck on her nose. Sure enough, the sausage is stuck in the middle of her face, and then they have to use the third wish to make it go away, upon which it disappears completely.

As long as the dumb AI is unable to guess our wishes, there will be a need to specify them using a precise language. We already have such language, it’s called math. The advantage of math is that it was invented for humans, not for machines. It solves the basic problem of formalizing our thought process, so it can be reliably transmitted and verified. The definition of quicksort in Haskell is very mathematical. It can be easily verified using induction, because it’s recursive, and it operates on a recursive data structure: a list. The first line of code establishes the base case: an empty list is trivially sorted. Then we perform the induction step. We assume that we know how to sort all proper sublists of our list. We create two such sublists by partitioning the tail around the pivot. We sort the sublists, and then construct the final sorted list by inserting the pivot between them. As mathematical proofs go, this one is not particularly hard. In fact, in a typical mathematical text, it would be considered so trivial as to be left as an exercise for the reader.

Still, this kind of mathematical thinking seems to be alien to most people, including a lot of programmers. So why am I proposing it as the “programming language” of the future? Math is hard, but let’s consider the alternatives. Every programming language is a compromise between the human and the computer. There are languages that are “close to the metal,” like assembly or C, and there are languages that try to imitate natural language, like Cobol or SQL. But even in low level languages we try to use meaningful names for variables and functions in an attempt to make code more readable. In fact, there are programs that purposefully obfuscate source code by removing the formatting and replacing names with gibberish. The result is unreadable to most humans, but makes no difference to computers. Mathematical language doesn’t have to be machine readable. It’s a language that was created by the people, for the people. The reason why we find mathematical texts harder to read than, say, C++ code is because mathematicians work at a much higher abstraction level. If we tried to express the same ideas in C++, we would very quickly get completely lost.

Let me give you a small example. In mathematics, a monad is defined as a monoid in the category of endofunctors. That’s a very succinct definition. In order to understand it, you have to internalize a whole tower of abstractions, one built on top of another. When we implement monads in Haskell, we don’t use that definition. We pick a particular very simple category and implement only one aspect of the definition (we don’t implement monadic laws). In C++, we don’t even do that. If there are any monads in C++, they are implemented ad hoc, and not as a general concept (an example is the future monad which, to this day, is incomplete).

There is also some deeper math in the quicksort example. It’s a recursive function and recursion is related to algebras and fixed points. A more elaborate version of quicksort decomposes it into its more fundamental components. The recursion is captured in a combination of unfolding and folding that is called a hylomorphism. The unfolding is described by a coalgebra, while folding is driven by an algebra.

data TreeF a r = Leaf | Node a r r
  deriving Functor

split :: Ord a => Coalgebra (TreeF a) [a]
split [] = Leaf
split (a: as) = Node a l r 
  where (l, r) = partition (< a) as
    
join :: Algebra (TreeF a) [a]
join Leaf = []
join (Node a l r) = l ++ [a] ++ r

qsort :: Ord a => [a] -> [a]
qsort = hylo join split
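
This snippet leans on the standard recursion-scheme vocabulary; here is a minimal sketch of the missing pieces, assuming partition comes from Data.List and the Functor instance is derived with the DeriveFunctor extension:

type Algebra   f a = f a -> a
type Coalgebra f a = a -> f a

-- A hylomorphism unfolds a seed with the coalgebra and folds the result
-- with the algebra; thanks to laziness, the intermediate tree is consumed
-- as it is produced.
hylo :: Functor f => Algebra f b -> Coalgebra f a -> a -> b
hylo alg coa = alg . fmap (hylo alg coa) . coa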

You might think that this representation is overkill. You may even use it in a conversation to impress your friends: “Quicksort is just a hylomorphism, what is the problem?” So how is it better than the original three-liner?

qsort [] = []
qsort (p:xs) = qsort lesser ++ [p] ++ qsort greater
    where (lesser, greater) = partition (< p) xs

The main difference is that the flow of control in this new implementation is driven by a data structure generated by the functor TreeF. This functor describes a binary tree whose every node has a value of type a and two children. We use those children in the unfolding process to store lists of elements, lesser ones on the left, greater (or equal) on the right. Then, in the folding process, these children are replenished again–this time with sorted lists. This may seem like an insignificant change, but it uses a different processing ability of our brains. The recursive function tells us a linear, one-dimensional, story. It appeals to our story-telling ability. The functor-driven approach appeals to our visual cortex. There is an up and down, and left and right in the tree. Not only that, we can think of the algorithm in terms of movement, or animation. We are first “growing” the tree from the seed and then “traversing” it to gather the fruit from the branches. These are some powerful metaphors.

If this kind of visualization works for us, it might as well work for the AI that will try to optimize our programs. It may also be able to access a knowledge base of similar algorithms based on recursion schemes and category theory.

I’m often asked by programmers: How is learning category theory going to help me in my everyday programming? The implication being that it’s not worth learning math if it can’t be immediately applied to your current job. This makes sense if you are trying to locally optimize your life. You are close to the local minimum of your utility function and you want to get even closer to it. But the utility function is not constant–it evolves in time. Local minima disappear. Category theory is the insurance policy against the drying out of your current watering hole.


What do Gödel’s incompleteness theorem, Russell’s paradox, Turing’s halting problem, and Cantor’s diagonal argument have to do with the fact that negation has no fixed point? The surprising answer is that they are all related through Lawvere’s fixed point theorem.

Before we dive deep into category theory, let’s unwrap this statement from the point of view of a (Haskell) programmer. Let’s start with some basics. Negation is a function that flips between True and False:

  not :: Bool -> Bool
  not True  = False
  not False = True

A fixed point is a value that doesn’t change under the action of a function. Obviously, negation has no fixed point. There are other functions with the same signature that have fixed points. For instance, the constant function:

  true True  = True
  true False = True

has True as a fixed point.

All the theorems I listed in the preamble (and a few more) are based on a simple but extremely powerful proof technique invented by Georg Cantor called the diagonal argument. I’ll illustrate this technique first by showing that the set of binary sequences is not countable.

Cantor’s job interview question

A binary sequence is an infinite stream of zeros and ones, which we can represent as Booleans. Here’s the definition of a sequence (it’s just like a list, but without the nil constructor), together with two helper functions:

  data Seq a = Cons a (Seq a)
    deriving Functor
  
  head :: Seq a -> a
  head (Cons a as) = a

  tail :: Seq a -> Seq a
  tail (Cons a as) = as

And here’s the definition of a binary sequence:

  type BinSeq = Seq Bool

If such sequences were countable, it would mean that you could organize them all into one big (infinite) list. In other words we could implement a sequence generator that generates every possible binary sequence:

  allBinSeq :: Seq BinSeq

Suppose that you gave this problem as a job interview question, and the job candidate came up with an implementation. How would you test it? I’m assuming that you have at your disposal a procedure that can (presumably in infinite time) search and compare infinite sequences. You could throw at it some obvious sequences, like all True, all False, alternating True and False, and a few others that came to your mind.
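
For instance, hedging a bit on the exact test suite, a few such sequences could be written as:

  allTrue, allFalse, alternating :: BinSeq
  allTrue     = Cons True  allTrue
  allFalse    = Cons False allFalse
  alternating = Cons True (Cons False alternating)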

What Cantor did is to use the candidate’s own contraption to produce a counter-example. First, he extracted the diagonal from the sequence of sequences. This is the code he wrote:

  diag :: Seq (Seq a) -> Seq a
  diag seqs = Cons (head (head seqs)) (diag (trim seqs))

  trim :: Seq (Seq a) -> Seq (Seq a)
  trim seqs = fmap tail (tail seqs)

You can visualize the sequence of sequences as a two-dimensional table that extends to infinity towards the right and towards the bottom.

  T F F T ...
  T F F T ...
  F F T T ...
  F F F T ...
  ...

Its diagonal is the sequence that starts with the first element of the first sequence, followed by the second element of the second sequence, the third element of the third sequence, and so on. In our case, it would be the sequence T F T T ....

It’s possible that the sequence diag allBinSeq has already been listed in allBinSeq. But Cantor came up with a devilish trick: he negated the whole diagonal sequence:

  tricky = fmap not (diag allBinSeq)

and ran his test on the candidate’s solution. In our case, we would get F T F F ... The tricky sequence was obviously not equal to the first sequence because it differed from it in the first position. It was not the second, because it differed (at least) in the second position. Not the third either, because it was different in the third position. You get the gist…

Power sets are big

In reality, Cantor did not screen job applicants and he didn’t even program in Haskell. He used his argument to prove that real numbers cannot be enumerated.

But first let’s see how one could use Cantor’s diagonal argument to show that the set of subsets of natural numbers is not enumerable. Any subset of natural numbers can be represented by a sequence of Booleans, where True means a given number is in the subset, and False that it isn’t. Alternatively, you can think of such a sequence as a function:

  type Chi = Nat -> Bool

called a characteristic function. In fact characteristic functions can be used to define subsets of any set:

  type Chi a = a -> Bool

In particular, we could have defined binary sequences as characteristic functions on naturals:

  type BinSeq = Chi Nat

As programmers we often use this kind of equivalence between functions and data, especially in lazy languages.
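
Here is a minimal sketch of that equivalence, assuming Nat is an ordinary natural-number type (say, non-negative values of Integer); the names toFun and toSeq are mine:

  toFun :: Seq Bool -> (Nat -> Bool)
  toFun (Cons b _)  0 = b
  toFun (Cons _ bs) n = toFun bs (n - 1)

  toSeq :: (Nat -> Bool) -> Seq Bool
  toSeq f = go 0
    where go n = Cons (f n) (go (n + 1))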

The set of all subsets of a given set is called a power set. We have already shown that the power set of natural numbers is not enumerable, that is, there is no function:

  enumerate :: Nat -> Chi Nat

that would cover all characteristic functions. A function that covers its codomain is called surjective. So there is no surjection from the natural numbers onto the set of all binary sequences (equivalently, onto the set of all subsets of the naturals).

In fact Cantor was able to prove a more general theorem: for any set, the power set is always larger than the original set. Let’s reformulate this. There is no surjection from the set A to the set of characteristic functions A \to 2 (where 2 stands for the two-element set of Booleans).

To prove this, let’s be contrarian and assume that there is a surjection:

  enumP :: A -> Chi A

Since we are going to use the diagonal argument, it makes sense to uncurry this function, so it looks more like a table:

  g :: (A, A) -> Bool
  g = uncurry enumP

Diagonal entries in the table are indexed using the following function:

  delta :: a -> (a, a)
  delta x = (x, x)

We can now define our custom characteristic function by picking diagonal elements and negating them, as we did in the case of natural numbers:

  tricky :: Chi A
  tricky = not . g . delta

If enumP is indeed a surjection, then it must produce our function tricky for some value of x :: A. In other words, there exists an x such that tricky is equal to enumP x.
This is an equality of functions, so let’s evaluate both functions at x (which will evaluate g at the diagonal).

  tricky x == (enumP x) x

The left hand side is equal to:

  tricky x = {- definition of tricky -}
  not (g (delta x)) = {- definition of g -}
  not (uncurry enumP (delta x)) = {- uncurry and delta -}
  not ((enumP x) x)

So our assumption that there exists a surjection A -> Chi A led to a contradiction!

  not ((enumP x) x) == (enumP x) x

Real numbers are uncountable

You can kind of see how the diagonal argument could be used to show that real numbers are uncountable. Let’s just concentrate on reals that are greater than zero but less than one. Those numbers can be represented as sequences of decimal digits (the digits following the decimal point). If these reals were countable, we could list them one after another, just like we attempted to list all streams of Booleans. We would get some kind of a table infinitely extending to the right and towards the bottom. There is one small complication, though. Some numbers have two equivalent decimal representations. For instance, 0.48 is the same as 0.47999..., with infinitely many nines. So let’s remove all rows from our table that end in an infinity of nines. We get something like this:

  3 5 0 5 ...
  9 9 0 8 ...
  4 0 2 3 ...
  0 0 9 9 ...
  ...

We can now apply our diagonal argument by picking one digit from every row. These digits form our diagonal number. In our example, it would be 3 9 2 9.

In the previous proof, we negated each element of the sequence to get a new sequence. Here we have to come up with some mapping that changes each digit to something else. For instance, we could add one to it, modulo nine. Except that, again, we don’t want to produce nines, because we could end up with a number that ends in an infinity of nines. But something like this will work just fine:

  h n = if n == 8 
        then 3
        else (n + 1) `mod` 9

The important part is that our function h replaces every digit with a different digit. In other words, h doesn’t have a fixed point.

Lawvere’s fixed point theorem

And this is what Lawvere realized: The diagonal argument establishes the relationship between the existence of a surjection on the one hand, and the existence of a no-fixed-point mapping on the other hand. So far it’s been easy to find no-fixed-point functions. But let’s reverse the argument: If there is a surjection A \to (A \to Y) then every function Y \to Y must have a fixed point. That’s because, if we could find a no-fixed-point function, we could use the diagonal argument to show that there is no surjection.

But wait a moment. Except for the trivial case of a function on a one-element set, it’s always possible to find a function that has no fixed point. Just return something other than the argument you’re called with. This is definitely true in Set but, when you go to other categories, you can’t just construct morphisms willy-nilly. Even in categories where morphisms are functions, you might have constraints, such as continuity or smoothness. For instance, every continuous function from a real segment to itself has a fixed point (Brouwer’s theorem).

As usual, translating from the language of sets and functions to the language of category theory takes some work. The tricky part is to generalize the definition of a fixed point and surjection.

Points and string diagrams

First, to define a fixed point, we need to be able to define a point. This is normally done by defining a morphism from the terminal object 1; such a morphism is called a global element. In Set, the terminal object is a singleton set, and a morphism from a singleton set just picks an element of a set.

Since things will soon get complicated, I’d like to introduce string diagrams to better visualise things in a cartesian category. In string diagrams lines correspond to objects and dots correspond to morphisms. You read such diagrams bottom up. So a morphism

\dot a \colon 1 \to A

can be drawn as:

I will use dotted letters to denote “points” or morphisms originating in the unit. It is also customary to omit the unit from the picture. It turns out that everything works just fine with implicit units.


A fixed point of a morphism t \colon Y \to Y is a global element \dot y \colon 1 \to Y such that t \circ \dot y = \dot y. Here’s the traditional diagram:

And here’s the corresponding string diagram that encodes the commuting condition.

In string diagrams, morphisms are composed by stringing them along lines in the bottom up direction.
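
In the Haskell sketch above, where a point of Y is a function () -> y, the commuting condition t \circ \dot y = \dot y becomes an equality we could actually test (assuming an Eq instance):

  -- yDot :: () -> y plays the role of the global element 1 -> Y
  isFixedPointOf :: Eq y => (() -> y) -> (y -> y) -> Bool
  isFixedPointOf yDot t = t (yDot ()) == yDot ()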

Surjections can be generalized in many ways. The one that works here is called surjection on points. A morphism h \colon A \to B is surjective on points when for every point \dot b of B (that is, a global element \dot b \colon 1 \to B) there is a point \dot a of A (the domain of h) that is mapped to \dot b. In other words, h \circ \dot a = \dot b.

Or string-diagrammatically, for every \dot b there exists an \dot a such that:
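
Back in Haskell-as-Set terms, and only as a finite approximation (we check supplied lists of source and target points rather than all of them), surjectivity on points looks like this:

  -- Finite sketch of surjectivity on points:
  -- every listed target b is hit by some listed source a
  surjectiveOnPoints :: Eq b => (a -> b) -> [a] -> [b] -> Bool
  surjectiveOnPoints h as bs = all (\b -> any (\a -> h a == b) as) bs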

Currying and uncurrying

To formulate Lawvere’s theorem, we’ll replace B with the exponential object Y^A, that is, an object that represents the set of morphisms from A to Y. Conceptually, those morphisms will correspond to rows in our table (or to characteristic functions, when Y is 2). The mapping:

\bar g \colon A \to Y^A

generates these rows. I will use barred symbols, like \bar g, for curried morphisms, that is, morphisms to exponentials. The object A serves as the source of indexes for enumerating the rows of the table (just like the natural numbers in the previous example). The same object also provides indexing within each row.

This is best seen after uncurrying \bar g (we assume that we are in a cartesian closed category). The uncurried morphism g \colon A \times A \to Y uses the product A \times A to index simultaneously into rows and columns of the table, just like the pairs of natural numbers we used in the previous example.
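
In Haskell notation, and with made-up names (gBar for \bar g), the two views of the table are related by the standard uncurry:

  -- The curried table: gBar enumerates rows. Its existence is exactly what's being assumed.
  gBar :: a -> (a -> y)
  gBar = undefined  -- placeholder for a hypothetical enumeration

  -- The uncurried table: indexes into rows and columns simultaneously
  g :: (a, a) -> y
  g = uncurry gBar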

The currying relationship between these two is given by the universal construction:

with the following commuting condition:

g = \varepsilon \circ (\bar g \times id_A)

Here, \varepsilon is the evaluation natural transformation (the counit of the currying adjunction, or the dollar sign operator in Haskell).

This commuting condition can be visualized as a string diagram. Notice that products of objects correspond to parallel lines going up. Morphisms that operate on products, like \varepsilon or g, are drawn as dots that merge such lines.
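
In Haskell, \varepsilon is just uncurried function application. Here is a sketch of the commuting condition, with cross standing in for the product of morphisms (\times); g and gBar are the hypothetical names from above:

  -- Evaluation: the counit of the currying adjunction
  eps :: (a -> y, a) -> y
  eps (f, x) = f x

  -- The product of two morphisms, f \times k
  cross :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
  cross f k (x, y) = (f x, k y)

  -- The commuting condition  g = eps . (gBar `cross` id)  says, pointwise:
  --   g (c, a) == eps (gBar c, a) == (gBar c) a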

We are interested in mappings that are point-surjective. In this case, we would like to demand that for every point \dot f \colon 1 \to Y^A there is a point \dot a \colon 1 \to A such that:

\dot f = \bar g \circ \dot a

or, diagrammatically, for every \dot f there exists an \dot a such that:

Conceptually, \dot f is a point in Y^A, which represents some arbitrary function A \to Y. Surjectivity of \bar g means that we can always find this function in our table by indexing into it with some \dot a.

This is a very general way of expressing what, in Haskell, would amount to: every function f :: A -> Y can be obtained by partially applying our g_bar :: A -> A -> Y to some x :: A.

The diagonal

The way to index the diagonal of our table is to use the diagonal morphism \Delta \colon A \to A \times A. In a cartesian category, such a morphism always exists. It can be defined using the universal property of the product:

By combining this diagram with the diagram that defines the lifting of a pair of points \dot a, we arrive at a useful identity:

\dot a \times \dot a = \Delta_A \circ \lambda_A \circ (id_1 \times \dot a)

where \lambda_A is the left unitor, which witnesses the isomorphism 1 \times A \to A.

Here’s the string diagram representation of this identity:

In string diagrams we ignore unitors (as well as associators). Now you see why I like string diagrams. They make things much simpler.

Lawvere’s fixed point theorem

Theorem. (Lawvere) In a cartesian closed category, if there exists a point-surjective morphism \bar g \colon A \to Y^A then every endomorphism of Y must have a fixed point.

Note: Lawvere actually used a weaker definition of point surjectivity.

The proof is just a generalization of the diagonal argument.

Suppose that there is an endomorphism t \colon Y \to Y that has no fixed point. This generalizes the negation from the original example. We’ll create a special morphism by combining the diagonal entries of our table, and then “negating” them with t.

The table is described by the (uncurried) g, and we access the diagonal through \Delta_A. So the tricky morphism A \to Y is just:

f = t \circ g \circ \Delta_A

or, in diagrammatic form:

Since we were guaranteed that our table g is an exhaustive listing of all morphisms A \to Y, our new morphism f must appear somewhere in it. But in order to search the table, we first have to convert f to a point in the exponential object Y^A.

There is a one-to-one correspondence between points \dot f \colon 1 \to Y^A and morphisms f \colon A \to Y given by the universal property of the exponential (noting that 1 \times A is isomorphic to A through the left unitor, \lambda_A \colon 1 \times A \to A).

In other words, \dot f is the curried form of f \circ \lambda_A, and we have the following commuting condition:

f \circ \lambda_A = \varepsilon \circ (\dot f \times id_A)

Since \lambda is an isomorphism, we can invert it, and get the following formula for f in terms of \dot f:

f = \varepsilon \circ (\dot f \times id_A) \circ \lambda_A^{-1}

In the corresponding string diagram we omit the unitor altogether.
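
In Haskell, where the unit type and the unitors are invisible, this one-to-one correspondence is almost a formality (a sketch, using () for the terminal object):

  -- A point of the exponential is a function () -> (a -> y); evaluation recovers the morphism
  pointToMorphism :: (() -> (a -> y)) -> (a -> y)
  pointToMorphism fDot = \a -> (fDot ()) a

  morphismToPoint :: (a -> y) -> (() -> (a -> y))
  morphismToPoint f = \() -> f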

Now we can use our assumption that \bar g is point surjective to deduce that there must be a point \dot x \colon 1 \to A that will produce \dot f, in other words:

\dot f = \bar g \circ \dot x

So \dot x picks the row in which we find our tricky morphism. What we are interested in is “evaluating” this morphism at \dot x. That will be our paradoxical diagonal entry. By construction, it should be equal to the corresponding value of f, because this row is point-by-point equal to f; after all, we have just found it by searching for f! On the other hand, it should be different, because we’ve built f by “negating” the diagonal entries of our table using t. Something has to give and, since we insist on surjectivity, we conclude that t is not doing its job of “negating”: it must have a fixed point at the diagonal entry picked out by \dot x.

Let’s work out the details.

First, let’s apply the function we’ve found in the table at row \dot x to \dot x itself. Except that what we’ve found is a point in Y^A. Fortunately we have figured out earlier how to get f from \dot f. We apply the result to \dot x:

f \circ \dot x = \varepsilon \circ (\dot f \times id_A) \circ \lambda_A^{-1} \circ \dot x

Plugging into it the entry \dot f that we have found in the table, we get:

f \circ \dot x = \varepsilon \circ ((\bar g \circ \dot x) \times id_A) \circ \lambda_A^{-1} \circ \dot x

Here’s the corresponding string diagram:

We can now uncurry \bar g:

And replace a pair of \dot x with a \Delta:

Compare this with the defining equation for f, as applied to \dot x:

f \circ \dot x = t \circ g \circ \Delta_A \circ \dot x

In other words, the morphism 1 \to Y:

g \circ \Delta_A \circ \dot x

is a fixed point of t. This contradicts our assumption that t had no fixed point.
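
Before moving on, here is the whole argument compressed into a few lines of Haskell. The function search is a hypothetical witness of point-surjectivity, assumed to satisfy gBar (search f) a == f a for all f and a; nothing here is meant as a real implementation:

  -- A hedged Haskell rendering of Lawvere's fixed point theorem.
  lawvereFix :: (a -> (a -> y))   -- gBar: the table
             -> ((a -> y) -> a)   -- search: the assumed surjectivity witness
             -> (y -> y)          -- t: an arbitrary endomorphism of y
             -> y                 -- a fixed point of t
  lawvereFix gBar search t = gBar x x
    where
      -- the "tricky" morphism: negate the diagonal with t, then find its row
      x = search (\a -> t (gBar a a))
  -- By the surjectivity assumption, gBar x x == t (gBar x x):
  -- the diagonal entry at x is a fixed point of t.

In particular, taking y to be Bool and t to be not, which has no fixed point, shows that no such search (and hence no point-surjective gBar) can exist. That is exactly the diagonal argument we started with.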

Conclusion

When I started writing this blog post I thought it would be a quick and easy job. After all, the proof of Lawvere’s theorem takes just a few lines both in the original paper and in all the other sources I looked at. But then the details started piling up. The hardest part was convincing myself that it’s okay to disregard all the unitors. It’s not an easy thing for a programmer, because without unitors the formulas don’t “type check.” The left hand side may be a morphism from A \times 1 and the right hand side may start at A. A compiler would reject such code. I tried to do due diligence as much as I could, but at some point I decided to just go with string diagrams. In the process I got into some interesting discussions and even posted a question on Math Overflow. Hopefully it will be answered by the time you read this post.

One of the things I wasn’t sure about was if it was okay to slide unitors around, as in this diagram:

It turns out this is just the naturality condition for \lambda, as John Baez was kind enough to remind me on Twitter (great place to do category theory!).

Acknowledgments

I’m grateful to Derek Elkins for reading the draft of this post.

Literature

  1. F. William Lawvere, Diagonal arguments and cartesian closed categories
  2. Noson S. Yanofsky, A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points
  3. Qiaochu Yuan, Cartesian closed categories and the Lawvere fixed point theorem
