Category Theory

Abstract: I derive free monoidal profunctors as fixed points of a higher order functor acting on profunctors. Monoidal profunctors play an important role in defining traversals.

The beauty of category theory is that it lets us reuse concepts at all levels. In my previous post I derived a free monoidal functor that goes from a monoidal category C to Set. The current post may then be shortened to: Since profunctors are just functors from C^{op} \times C to Set, with the obvious monoidal structure induced by the tensor product in C, we automatically get free monoidal profunctors.

Let me fill in the details.

Profunctors in Haskell

Here’s the definition of a profunctor from Data.Profunctor:

class Profunctor p where
  dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

The idea is that, just like a functor acts on objects, a profunctor p acts on pairs of objects \langle a, b \rangle. In other words, it’s a type constructor that takes two types as arguments. And just like a functor acts on morphisms, a profunctor acts on pairs of morphisms. The only tricky part is that the first morphism of the pair is reversed: instead of going from a to s, as one would expect, it goes from s to a. This is why we say that the first argument comes from the opposite category C^{op}, where all morphisms are reversed with respect to C. Thus a morphism from \langle a, b \rangle to \langle s, t \rangle in C^{op} \times C is a pair of morphisms \langle s \to a, b \to t \rangle.

Just like functors form a category, profunctors form a category too. In this category profunctors are objects, and natural transformations are morphisms. A natural transformation between two profunctors p and q is a family of functions which, in Haskell, can be approximated by a polymorphic function:

type p ::~> q = forall a b. p a b -> q a b

If the category C is monoidal (has a tensor product \otimes and a unit object 1), then the category C^{op} \times C has a trivially induced tensor product:

\langle a, b \rangle \otimes \langle c, d \rangle = \langle a \otimes c, b \otimes d \rangle

and unit \langle 1, 1 \rangle

In Haskell, we’ll use cartesian product (pair type) as the underlying tensor product, and () type as the unit.

Notice that the induced product does not have the usual exponential as the right adjoint. Indeed, the hom-set:

(C^{op} \times C) \, ( \langle a, b  \rangle \otimes  \langle c, d  \rangle,  \langle s, t  \rangle )

is a set of pairs of morphisms:

\langle s \to a \otimes c, b \otimes d \to t  \rangle

If the right adjoint existed, it would be a pair of objects \langle X, Y  \rangle, such that the following hom-set would be isomorphic to the previous one:

\langle X \to a, b \to Y  \rangle

While Y could be the internal hom, there is no candidate for X that would produce the isomorphism:

s \to a \otimes c \cong X \to a

(Consider, for instance, unit () for a.) This lack of the right adjoint is the reason why we can’t define an analog of Applicative for profunctors. We can, however, define a monoidal profunctor:

class Monoidal p where
  punit :: p () ()
  (>**<) :: p a b -> p c d -> p (a, c) (b, d)

This profunctor is a map between two monoidal structures. For instance, punit can be seen as mapping the unit in Set to the unit in C^{op} \times C:

punit :: () -> p <1, 1>

Operator >**< maps the product in Set to the induced product in C^{op} \times C:

(>**<) :: (p <a, b>, p <c, d>) -> p (<a, b> × <c, d>)
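
As a quick sanity check (my example, not from the original post), the plain function arrow is a monoidal profunctor in this sense: the unit is the identity on (), and >**< runs two functions in parallel on the components of a pair:

instance Monoidal (->) where
  punit = id
  f >**< g = \(a, c) -> (f a, g c)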

Day convolution, which works with monoidal structures, generalizes naturally to the profunctor category:

data PDay p q s t = forall a b c d. 
     PDay (p a b) (q c d) ((b, d) -> t) (s -> (a, c))

Higher Order Functors

Since profunctors form a category, we can define endofunctors in that category. This is a no-brainer in category theory, but it requires some new definitions in Haskell. Here’s a higher-order functor that maps a profunctor to another profunctor:

class HPFunctor pp where
  hpmap :: (p ::~> q) -> (pp p ::~> pp q)
  ddimap :: (s -> a) -> (b -> t) -> pp p a b -> pp p s t

The function hpmap lifts a natural transformation, and ddimap shows that the result of the mapping is also a profunctor.

An endofunctor in the profunctor category may have a fixed point:

newtype FixH pp a b = InH { outH :: pp (FixH pp) a b }

which is also a profunctor:

instance HPFunctor pp => Profunctor (FixH pp) where
    dimap f g (InH pp) = InH (ddimap f g pp)

Finally, our Day convolution is a higher-order endofunctor in the category of profunctors:

instance HPFunctor (PDay p) where
  hpmap nat (PDay p q from to) = PDay p (nat q) from to
  ddimap f g (PDay p q from to) = PDay p q (g . from) (to . f)

We’ll use this fact to construct a free monoidal profunctor next.

Free Monoidal Profunctor

In the previous post, I defined the free monoidal functor as a fixed point of the following endofunctor:

data FreeF f g t =
      DoneF t
    | MoreF (Day f g t)

Replacing the functors f and g with profunctors is straightforward:

data FreeP p q s t = 
      DoneP (s -> ()) (() -> t) 
    | MoreP (PDay p q s t)

The only tricky part is realizing that the first term in the sum comes from the unit of Day convolution, which is the type () -> t, and it generalizes to an appropriate pair of functions (we’ll simplify this definition later).

FreeP is a higher order endofunctor acting on profunctors:

instance HPFunctor (FreeP p) where
    hpmap _ (DoneP su ut) = DoneP su ut
    hpmap nat (MoreP day) = MoreP (hpmap nat day)
    ddimap f g (DoneP au ub) = DoneP (au . f) (g . ub)
    ddimap f g (MoreP day) = MoreP (ddimap f g day)

We can, therefore, define its fixed point:

type FreeMon p = FixH (FreeP p)

and show that it is indeed a monoidal profunctor. As before, the trick is to first show the following property of Day convolution:

cons :: Monoidal q => PDay p q a b -> q c d -> PDay p q (a, c) (b, d)
cons (PDay pxy quv yva bxu) qcd = 
      PDay pxy (quv >**< qcd) (bimap yva id . reassoc) 
                              (assoc . bimap bxu id)


assoc :: ((a, b), c) -> (a, (b, c))
assoc ((a, b), c) = (a, (b, c))

reassoc :: (a, (b, c)) -> ((a, b), c)
reassoc (a, (b, c)) = ((a, b), c)

Using this function, we can show that FreeMon p is monoidal for any p:

instance Profunctor p => Monoidal (FreeMon p) where
  punit = InH (DoneP id id)
  (InH (DoneP au ub)) >**< frcd = dimap snd (\d -> (ub (), d)) frcd
  (InH (MoreP dayab)) >**< frcd = InH (MoreP (cons dayab frcd))
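
As a small bonus (a sketch of mine, not part of the original post; the name liftFree is hypothetical), here is the embedding of a single profunctor value into the free construction. It pairs the value with the unit, mirroring the definition of punit above:

-- Embed p a b into FreeMon p a b by pairing it with the unit on the right.
liftFree :: p a b -> FreeMon p a b
liftFree pab = InH (MoreP (PDay pab (InH (DoneP id id)) fst (\a -> (a, ()))))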

FreeMon can also be rewritten as a recursive data type:

data FreeMon p s t where
     DoneFM :: t -> FreeMon p s t
     MoreFM :: p a b -> FreeMon p c d -> 
                        (b -> d -> t) -> 
                        (s -> (a, c)) -> FreeMon p s t
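
As a quick consistency check (a sketch of mine, not in the original post), this recursive form is still a profunctor; dimap only touches the two function fields:

instance Profunctor (FreeMon p) where
  dimap _ g (DoneFM t) = DoneFM (g t)
  dimap f g (MoreFM pab rest bdt sac) =
      MoreFM pab rest (\b d -> g (bdt b d)) (sac . f)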

Categorical Picture

As I mentioned before, from the categorical point of view there isn’t much to talk about. We define a functor in the category of profunctors:

A_p q = (C^{op} \times C) (1, -) + \int^{ a b c d } p a b \times q c d \times (C^{op} \times C) (\langle a, b \rangle \otimes \langle c, d \rangle, -)

As previously shown in the general case, its initial algebra defines a free monoidal profunctor.


I’m grateful to Eugenia Cheng not only for talking to me about monoidal profunctors, but also for getting me interested in category theory in the first place through her Catsters video series. Thanks also go to Edward Kmett for numerous discussions on this topic.


In my category theory blog posts, I stated many theorems, but I didn’t provide many proofs. In most cases, it’s enough to know that the proof exists. We trust mathematicians to do their job. Granted, when you’re a professional mathematician, you have to study proofs, because one day you’ll have to prove something, and it helps to know the tricks that other people used before you. But programmers are engineers, and are therefore used to taking advantage of existing solutions, trusting that somebody else made sure they were correct. So it would seem that mathematical proofs are irrelevant to programming.

Or so it may seem, until you learn about the Curry-Howard isomorphism (or propositions as types, as it is sometimes called), which says that there is a one-to-one correspondence between logic and programs, and that every function can be seen as a proof of a theorem. And indeed, I found that a lot of proofs in category theory turn out to be recipes for implementing functions. In most cases the problem can be reduced to this: given some morphisms, implement another morphism, usually using simple composition. This is very much like using point-free notation to define functions in Haskell. The other ingredient in categorical proofs is diagram chasing, which is very much like equational reasoning in Haskell. Of course, mathematicians use different notation, and they use lots of different categories, but the principles are the same.

I want to illustrate these points with the example from Emily Riehl’s excellent book Category Theory in Context. It’s a book for mathematicians, so it’s not an easy read. I’m going to concentrate on theorem 6.2.1, which derives a formula for left Kan extensions using colimits. I picked this theorem because it has calculational content: it tells you how to calculate a particular functor.

It’s not a short proof, and I have made it even longer by unpacking every single step. These steps are not too hard; it’s mostly a matter of understanding and using the definitions of functoriality, naturality, and universality.

There is a bonus at the end of this post for Haskell programmers.

Kan Extensions

I wrote about Kan extensions before, so here I’ll only recap the definition using the notation from Emily’s book. Here’s the setup: We want to extend a functor F, which goes from category C to E, along another functor K, which goes from C to D. This extension is a new functor from D to E.

To give you some intuition, imagine that the functor F is the Rosetta Stone. It’s a functor that maps the Ancient Egyptian text of a royal decree to the same text written in Ancient Greek. The functor K embeds the Rosetta Stone hieroglyphics into the known corpus of Egyptian texts from various papyri and inscriptions on the walls of temples. We want to extend the functor F to the whole corpus. In other words, we want to translate new texts from Egyptian to Greek (or any other language that’s isomorphic to it).

In the ideal case, we would just want F to be isomorphic to the composition of the new functor after K. That’s usually not possible, so we’ll settle for less. A Kan extension is a functor which, when composed with K, produces something that is related to F through a natural transformation. In particular, the left Kan extension, Lan_K F, is equipped with a natural transformation \eta from F to Lan_K F \circ K.


(The right Kan extension has this natural transformation reversed.)

There are usually many such functors, so there is the standard trick of universal construction to pick the best one.

In our analogy, we would ideally like the new functor, when applied to the hieroglyphs from the Rosetta stone, to exactly reproduce the original translation, but we’ll settle for something that has the same meaning. We’ll try to translate new hieroglyphs by looking at their relationship with known hieroglyphs. That means we should look closely at morphisms in D.

Comma Category

The key to understanding how Kan extensions work is to realize that, in general, the functor K embeds C in D in a lossy way.


There may be objects (and morphisms) in D that are not in the image of K. We have to somehow define the action of Lan_K F on those objects. What do we know about such objects?

We know from the Yoneda lemma that all the information about an object is encoded in the totality of morphisms incoming to or outgoing from that object. We can summarize this information in the form of a functor, the hom-functor. Or we can define a category based on this information. This category is called the slice category. Its objects are morphisms from the original category. Notice that this is different from Yoneda, where we talked about sets of morphisms — the hom-sets. Here we treat individual morphisms as objects.

This is the definition: Given a category C and a fixed object c in it, the slice category C/c has as objects pairs (x, f), where x is an object of C and f is a morphism from x to c. In other words, all the arrows whose codomain is c become objects in C/c.

A morphism in C/c between two objects, (x, f) and (y, g) is a morphism h : x \to y in C that makes the following triangle commute:


In our case, we are interested in an object d in D, and the slice category D/d describes it in terms of morphisms. Think of this category as a holographic picture of d.

But what we are really interested in, is how to transfer this information about d to E. We do have a functor F, which goes from C to E. We need to somehow back-propagate the information about d to C along K, and then use F to move it to E.

So let’s try again. Instead of using all morphisms impinging on d, let’s only pick the ones that originate in the image of K, because only those can be back-propagated to C.


This gives us limited information about d, but it’ll have to do. We’ll just use a partial hologram of d. Instead of the slice category, we’ll use the comma category K \downarrow d.

Here’s the definition: Given a functor K : C \to D and an object d of D, the comma category K \downarrow d has as objects pairs (c, f), where c is an object of C and f is a morphism from K c to d. So, indeed, this category describes the totality of morphisms coming to d from the image of K.

A morphism in the comma category from (c, f) to (c', g) is a morphism h : c \to c' such that the following triangle commutes:


So how does back-propagation work? There is a projection functor \Pi_d : K \downarrow d \to C that maps an object (c, f) to c (with obvious action on morphisms). This functor loses a lot of information about objects (completely forgets the f part), but it keeps a lot of the information in morphisms — it only picks those morphisms in C that preserve the structure of the comma category. And it lets us “upload” the hologram of d into C.

Next, we transport all this information to E using F. We get a mapping

F \circ \Pi_d : K \downarrow d \to E

Here’s the main observation: We can look at this functor F \circ \Pi_d as a diagram in E (remember, when constructing limits, diagrams are defined through functors). It’s just a bunch of objects and morphisms in E that somehow approximate the image of d. This holographic information was extracted by looking at morphisms impinging on d. In our search for (Lan_K F) d we should therefore look for an object in E together with morphisms impinging on it from the diagram we’ve just constructed. In particular, we could look at cones under that diagram (or co-cones, as they are sometimes called). The best such cone defines a colimit. If that colimit exists, it is indeed the left Kan extension (Lan_K F) d. That’s the theorem we are going to prove.
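
In formula form, the claim is:

(Lan_K F) d \cong \mathrm{colim} \, (F \circ \Pi_d)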


To prove it, we’ll have to go through several steps:

  1. Show that the mapping we have just defined on objects is indeed a functor, that is, we’ll have to define its action on morphisms.
  2. Construct the transformation \eta from F to Lan_K F \circ K and show the naturality condition.
  3. Prove universality: For any other functor G together with a natural transformation \gamma, show that \gamma uniquely factorizes through \eta.

All of these things can be shown using clever manipulations of cones and the universality of the colimit. Let’s get to work.


Functoriality

We have defined the action of Lan_K F on objects of D. Let’s pick a morphism g : d \to d'. Just like d, the object d' defines its own comma category K \downarrow d', its own projection \Pi_{d'}, its own diagram F \circ \Pi_{d'}, and its own colimiting cone. Parts of this new cone, however, can be reinterpreted as a cone for the old object d. That’s because, surprisingly, the diagram F \circ \Pi_{d'} contains the diagram F \circ \Pi_{d}.

Take, for instance, an object (c, f) : K \downarrow d, with f: K c \to d. There is a corresponding object (c, g \circ f) in K \downarrow d'. Both get projected down to the same c. That shows that every object in the diagram for the d cone is automatically an object in the diagram for the d' cone.


Now take a morphism h from (c, f) to (c', f') in K \downarrow d. It is also a morphism in K \downarrow d' between (c, g \circ f) and (c', g \circ f'). The commuting triangle condition in K \downarrow d

f = f' \circ K h

ensures the commuting condition in K \downarrow d'

g \circ f = g \circ f' \circ K h


All this shows that the new cone that defines the colimit of F \circ \Pi_{d'} contains a cone under F \circ \Pi_{d}.

But that diagram has its own colimit (Lan_K F) d. Because that colimit is universal, there must be a unique morphism from (Lan_K F) d to (Lan_K F) d', which makes all triangles between the two cones commute. We pick this morphism as the lifting of our g, which ensures the functoriality of Lan_K F.


The commuting condition will come in handy later, so let’s spell it out. For every object (c, f:Kc \to d) we have a leg of the cone, a morphism V_d^{(c, f)} from F c to (Lan_K F) d; and another leg of the cone V_{d'}^{(c, g \circ f)} from F c to (Lan_K F) d'. If we denote the lifting of g as (Lan_K F) g then the commuting triangle is:

(Lan_K F) g \circ V_{d}^{(c, f)} = V_{d'}^{(c, g \circ f)}


In principle, we should also check that this newly defined functor preserves composition and identity, but this pretty much follows automatically whenever lifting is defined using composition of morphisms, which is indeed the case here.

Natural Transformation

We want to show that there is a natural transformation \eta from F to Lan_K F \circ K. As usual, we’ll define the component of this natural transformation at some arbitrary object c. It’s a morphism between F c and (Lan_K F) (K c). We have lots of morphisms at our disposal, with all those cones lying around, so it shouldn’t be a problem.

First, observe that, because of the pre-composition with K, we are only looking at the action of Lan_K F on objects which are inside the image of K.


The objects of the corresponding comma category K \downarrow K c' have the form (c, f), where f : K c \to K c'. In particular, we can pick c' = c, and f = id_{K c}, which will give us one particular vertex of the diagram F \circ \Pi_{K c}. The object at this vertex is F c — exactly what we need as a starting point for our natural transformation. The leg of the colimiting cone we are interested in is:

V_{K c}^{(c, id)} : F c \to (Lan_K F) (K c)


We’ll pick this leg as the component of our natural transformation \eta_c.

What remains is to show the naturality condition. We pick a morphism h : c \to c'. We know how to lift this morphism using F and using Lan_K F \circ K. The other two sides of the naturality square are \eta_c and \eta_{c'}.

The bottom left composition is (Lan_K F) (K h) \circ V_{K c}^{(c, id)}. Let’s take the commuting triangle that we used to show the functoriality of Lan_K F:

(Lan_K F) g \circ V_{d}^{(c, f)} = V_{d'}^{(c, g \circ f)}

and replace f by id_{K c}, g by K h, d by K c, and d' by K c', to get:

(Lan_K F) (K h) \circ V_{K c}^{(c, id)} = V_{K c'}^{(c, K h)}

Let’s draw this as the diagonal in the naturality square, and examine the upper right composition:

V_{K c'}^{(c', id)} \circ F h.

This is also equal to the diagonal V_{K c'}^{(c, K h)}. That’s because these are two legs of the same colimiting cone corresponding to (c', id_{K c'}) and (c, K h). These vertices are connected by h in K \downarrow K c'.


But how do we know that h is a morphism in K \downarrow K c'? Not every morphism in C is a morphism in the comma category. In this case, however, the triangle identity is automatic, because one of its sides is an identity id_{K c'}.


We have shown that our naturality square is composed of two commuting triangles, with V_{K c'}^{(c, K h)} as its diagonal, and therefore commutes as a whole.
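
Written out as a single chain of equalities, the square commutes because:

\eta_{c'} \circ F h = V_{K c'}^{(c', id)} \circ F h = V_{K c'}^{(c, K h)} = (Lan_K F) (K h) \circ V_{K c}^{(c, id)} = (Lan_K F) (K h) \circ \eta_c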


Universality

Kan extension is defined using a universal construction: it’s the best of all functors that extend F along K. It means that any other extension will factor through ours. More precisely, if there is another functor G and another natural transformation:

\gamma : F \to G \circ K


then there is a unique natural transformation \alpha, such that

(\alpha \cdot K) \circ \eta = \gamma


(where we have a horizontal composition of a natural transformation \alpha with the functor K)

As always, we start by clearly stating the goals. The proof proceeds in these steps:

  1. Implement \alpha.
  2. Prove that it’s natural.
  3. Show that it factorizes \gamma through \eta.
  4. Show that it’s unique.

We are defining a natural transformation \alpha between two functors: Lan_K F, which we have defined as a colimit, and G. Both are functors from D to E. Let’s implement the component of \alpha at some d. It must be a morphism:

\alpha_d : (Lan_K F) d \to G d

Notice that d varies over all of D, not just the image of K. We are comparing the two extensions where it really matters.

We have at our disposal the natural transformation:

\gamma : F \to G \circ K

or, in components:

\gamma_c : F c \to G (K c)

More importantly, though, we have the universal property of the colimit. If we can construct a cone with the nadir at G d then we can use its factorizing morphism to define \alpha_d.


This cone has to be built under the same diagram as (Lan_K F) d. So let’s start with some (c, f : K c \to d). We want to construct a leg of our new cone going from F c to G d. We can use \gamma_c to get to G (K c) and then hop to G d using G f. Thus we can define the new cone’s leg as:

W_c = G f \circ \gamma_c


Let’s make sure that this is really a cone, meaning, its sides must be commuting triangles.


Let’s pick a morphism h in K \downarrow d from (c, f) to (c', f'). A morphism in K \downarrow d must satisfy the triangle condition, f = f' \circ K h:


We can lift this triangle using G:

G f = G f' \circ G (K h)

Naturality condition for \gamma tells us that:

\gamma_{c'} \circ F h = G (K h) \circ \gamma_c

By combining the two, we get the pentagon:


whose outline forms a commuting triangle:

G f \circ \gamma_c = G f' \circ \gamma_{c'} \circ F h

Now that we have a cone with the nadir at G d, universality of the colimit tells us that there is a unique morphism from the colimiting cone to it that factorizes all triangles between the two cones. We make this morphism our \alpha_d. The commuting triangles are between the legs of the colimiting cone V_d^{(c, f)} and the legs of our new cone W_c:

\alpha_d \circ V_d^{(c, f)} = G f \circ \gamma_c


Now we have to show that \alpha, so defined, is a natural transformation. Let’s pick a morphism g : d \to d'. We can lift it using Lan_K F or using G in order to construct the naturality square:

G g \circ \alpha_d = \alpha_{d'} \circ (Lan_K F) g


Remember that the lifting of a morphism by Lan_K F satisfies the following commuting condition:

(Lan_K F) g \circ V_{d}^{(c, f)} = V_{d'}^{(c, g \circ f)}

We can combine the two diagrams:


The two arms of the large triangle can be replaced using the diagram that defines \alpha, and we get:

G (g \circ f) \circ \gamma_c = G g \circ G f \circ \gamma_c

which commutes due to functoriality of G.
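
For the record, here is the same argument written as a chain of equalities, obtained by pre-composing both sides of the naturality square with an arbitrary leg V_d^{(c, f)}:

G g \circ \alpha_d \circ V_d^{(c, f)} = G g \circ G f \circ \gamma_c = G (g \circ f) \circ \gamma_c = \alpha_{d'} \circ V_{d'}^{(c, g \circ f)} = \alpha_{d'} \circ (Lan_K F) g \circ V_d^{(c, f)}

Since this holds for every leg of the colimiting cone, the universality of the colimit forces the two morphisms out of (Lan_K F) d to be equal.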

Now we have to show that \alpha factorizes \gamma through \eta. Both \eta and \gamma are natural transformations between functors going from C to E, whereas \alpha connects functors from D to E. To extend \alpha, we horizontally compose it with the identity natural transformation from K to K. This is called whiskering and is written as \alpha \cdot K. This becomes clear when expressed in components. Let’s pick an object c in C. We want to show that:

\gamma_c = \alpha_{K c} \circ \eta_c


Let’s go back all the way to the definition of \eta:

\eta_c = V_{K c}^{(c, id)}

where id is the identity morphism at K c. Compare this with the definition of \alpha:

\alpha_d \circ V_d^{(c, f)} = G f \circ \gamma_c


If we replace f with id and d with K c, we get:

\alpha_{K c} \circ V_{K c}^{(c, id)} = G id \circ \gamma_c


which, due to functoriality of G is exactly what we need:

\alpha_{K c} \circ \eta_c = \gamma_c

Finally, we have to prove the uniqueness of \alpha. Here’s the trick: assume that there is another natural transformation \alpha' that factorizes \gamma. Let’s rewrite the naturality condition for \alpha':

G g \circ \alpha'_d = \alpha'_{d'} \circ (Lan_K F) g


Replacing g : d \to d' with f : K c \to d, we get:

G f \circ \alpha'_{K c} = \alpha'_d \circ (Lan_K F) f

The lifting of f by Lan_K F satisfies the triangle identity:

V_d^{(c, f)} = (Lan_K F) f \circ V_{K c}^{(c, id)}

where we recognize V_{K c}^{(c, id)} as \eta_c.


We said that \alpha' factorizes \gamma through \eta:

\gamma_c = \alpha'_{K c} \circ \eta_c

which lets us straighten the left side of the pentagon.


This shows that \alpha' is another factorization of the cone with the nadir at G d through the colimiting cone with the nadir at (Lan_K F) d. But the universality of the colimit guarantees that such a factorization is unique, therefore \alpha' must be the same as \alpha.

This completes the proof.

Haskell Notes

This post would not be complete if I didn’t mention a Haskell implementation of Kan extensions by Edward Kmett, which you can find in the kan-extensions library, in the module Data.Functor.Kan.Lan. At first look you might not recognize the definition given there:

data Lan g h a where
  Lan :: (g b -> a) -> h b -> Lan g h a

To make it more in line with the previous discussion, let’s rename some variables:

data Lan k f d where
  Lan :: (k c -> d) -> f c -> Lan k f d

This is an example of GADT syntax, which is a Haskell way of implementing existential types. It’s equivalent to the following pseudo-Haskell:

Lan k f d = exists c. (k c -> d, f c)

This is more like it: you may recognize (k c -> d) as an object of the comma category K \downarrow d, and f c as the mapping of c (which is the projection of this object back to C) under the functor F. In fact, the Haskell representation is based on the encoding of the colimit using a coend:

(Lan_K F) d = \int^{c \in C} C(K c, d) \times F c

The Haskell library also contains the proof that Kan extension is a functor:

instance Functor (Lan k f) where
  fmap g (Lan kcd fc) = Lan (g . kcd) fc
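
To make the existential concrete, here is a hypothetical toy value (my example, not taken from the library documentation), with k instantiated to [], f to Maybe, c to Bool, and d to Int:

-- The "comma part" is a function [Bool] -> Int; the "diagram part" is a Maybe Bool.
example :: Lan [] Maybe Int
example = Lan length (Just True)

-- fmap post-composes with the function component, as in the instance above.
example' :: Lan [] Maybe String
example' = fmap show example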

The natural transformation \eta that is part of the definition of the Kan extension can be extracted from the Haskell definition:

eta :: f c -> Lan k f (k c)
eta = Lan id

In Haskell, we don’t have to prove naturality, as it is a consequence of parametricity.

The universality of the Kan extension is witnessed by the following function:

toLan :: Functor g => (forall c. f c -> g (k c)) -> Lan k f d -> g d
toLan gamma (Lan kcd fc) = fmap kcd (gamma fc)

It takes a natural transformation \gamma from F to G \circ K, and produces a natural transformation we called \alpha from Lan_K F to G.

This is \gamma expressed as a composition of \alpha and \eta:

fromLan :: (forall d. Lan k f d -> g d) -> f c -> g (k c)
fromLan alpha = alpha . eta

As an example of equational reasoning, let’s prove that \alpha defined by toLan indeed factorizes \gamma. In other words, let’s prove that:

fromLan (toLan gamma) = gamma

Let’s plug the definition of toLan into the left hand side:

fromLan (\(Lan kcd fc) -> fmap kcd (gamma fc))

then use the definition of fromLan:

(\(Lan kcd fc) -> fmap kcd (gamma fc)) . eta

Let’s apply this to some arbitrary value g (of type f c) and expand eta:

(\(Lan kcd fc) -> fmap kcd (gamma fc)) (Lan id g)

Beta-reduction gives us:

fmap id (gamma g)

which is indeed equal to the right hand side acting on g:

gamma g

The proof of toLan . fromLan = id is left as an exercise to the reader (hint: you’ll have to use naturality).


I’m grateful to Emily Riehl for reviewing the draft of this post and for writing the book from which I borrowed this proof.

Philosophy is written in this grand book that continually stands open before our eyes (I mean the universe), but it cannot be understood unless one first learns to understand the language and recognize the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures, without which it is humanly impossible to understand a single word of it; without these, one wanders vainly through a dark labyrinth.

― Galileo Galilei, Il Saggiatore (The Assayer)

Joan was quizzical; studied pataphysical science in the home. Late nights all alone with a test tube.

— The Beatles, Maxwell’s Silver Hammer

Unless you’re a member of the Flat Earth Society, I bet you’re pretty confident that the Earth is round. In fact, you’re so confident that you don’t even ask yourself the question why you are so confident. After all, there is overwhelming scientific evidence for the round-Earth hypothesis. There is the old “ships disappearing behind the horizon” proof, there are satellites circling the Earth, there are even photos of the Earth seen from the Moon, the list goes on and on. I picked this particular theory because it seems so obviously true. So if I try to convince you that the Earth is flat, I’ll have to dig very deep into the foundation of your belief systems. Here’s what I’ve found: We believe that the Earth is round not because it’s the truth, but because we are lazy and stingy (or, to give it a more positive spin, efficient and parsimonious). Let me explain…

The New Flat Earth Theory

Let’s begin by stressing how useful the flat-Earth model is in everyday life. I use it all the time. When I want to find the nearest ATM or a gas station, I take out my cell phone and look it up on its flat screen. I’m not carrying a special spherical gadget in my pocket. The screen on my phone is not bulging in the slightest when it’s displaying a map of my surroundings. So, at least within the limits of my city, or even the state, flat-Earth theory works just fine, thank you!

I’d like to make parallels with another widely accepted theory, Einstein’s special relativity. We believe that it’s true, but we never use it in everyday life. The vast majority of objects around us move much slower than the speed of light, so traditional Newtonian mechanics works just fine for us. When was the last time you had to reset your watch after driving from one city to another to account for the effects of time dilation?

The point is that every physical theory is only valid within a certain range of parameters. Physicists have always been looking for the Holy Grail of theories — the theory of everything that would be valid for all values of parameters with no exceptions. They haven’t found one yet.

But, obviously, special relativity is better than Newtonian mechanics because it’s more general. You can derive Newtonian mechanics as a low velocity approximation to special relativity. And, sure enough, the flat-Earth theory is an approximation to the round-Earth theory for small distances. Or, equivalently, it’s the limit as the radius of the Earth goes to infinity.

But suppose that we were prohibited (for instance, by a religion or a government) from ever considering the curvature of the Earth. As explorers travel farther and farther, they discover that the “naive” flat-Earth theory gives incorrect answers. Unlike present-day flat-earthers, who are not scientifically sophisticated, they would actually put some effort to refine their calculations to account for the “anomalies.” For instance, they could postulate that, as you get away from the North Pole, which is the center of the flat Earth, something funny keeps happening to measuring rods. They get elongated when positioned along the parallels (the circles centered at the North Pole). The further away you get from the North Pole, the more they elongate, until at a certain distance they become infinite. Which means that the distances (measured using those measuring rods) along the big circles get smaller and smaller until they shrink to zero.

I know this theory sounds weird at first, but so does special and, even more so, general relativity. In special relativity, weird things happen when your speed is close to the speed of light. Time slows down, distances shrink in the direction of flight (but not perpendicular to it!), and masses increase. In general relativity, similar things happen when you get closer to a black hole’s event horizon. In both theories things diverge as you hit the limit — the speed of light, or the event horizon, respectively.

Back to flat Earth — our explorers conquer space. They have to extend their weird geometry to three dimensions. They find out that horizontally positioned measuring rods shrink as you go higher (they un-shrink when you point them vertically). The intrepid explorers also dig into the ground, and probe the depths with seismographs. They find another singularity at a particular depth, where the horizontal dilation of measuring rods reaches infinity (round-Earthers call this the center of the Earth).

This generalized flat-Earth theory actually works. I know that, because I have just described the spherical coordinate system. We use it when we talk about degrees of longitude and latitude. We just never think of measuring distances using spherical coordinates — it’s too much work, and we are lazy. But it’s possible to express the metric tensor in those coordinates. It’s not constant — it varies with position — and it’s not isotropic — distances vary with direction. In fact, because of that, flat Earthers would be better equipped to understand general relativity than we are.
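
For reference (this formula is not in the original post, but it is the standard one), the flat Euclidean metric written in spherical coordinates is:

ds^2 = dr^2 + r^2 d\theta^2 + r^2 \sin^2 \theta \, d\phi^2

The coefficients depend on the position (r and \theta) and differ between directions, which is exactly the non-constant, anisotropic behavior described above.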

So is the Earth flat or spherical? Actually it’s neither. Both theories are just approximations. In cartesian coordinates, the Earth has roughly the shape of a flattened ellipsoid (an oblate spheroid), but as you increase the resolution, you discover more and more anomalies (we call them mountains, canyons, etc.). In spherical coordinates, the Earth is flat, but again, only approximately. The biggest difference is that the math is harder in spherical coordinates.

Have I confused you enough? On one level, unless you’re an astronaut, your senses tell you that the Earth is flat. On the other level, unless you’re a conspiracy theorist who believes that NASA is involved in a scam of enormous proportions, you believe that the Earth is pretty much spherical. Now I’m telling you that there is a perfectly consistent mathematical model in which the Earth is flat. It’s not a cult, it’s science! So why do you feel that the round Earth theory is closer to the truth?

The Occam’s Razor

The round Earth theory is just simpler. And for some reason we cling to the belief that nature abhors complexity (I know, isn’t it crazy?). We even express this belief as a principle called Occam’s razor. In a nutshell, it says that:

Among competing hypotheses, the one with the fewest assumptions should be selected.

Notice that this is not a law of nature. It’s not even scientific: there is no way to falsify it. You can argue for Occam’s razor on the grounds of theology (William of Ockham was a Franciscan friar) or esthetics (we like elegant theories), but ultimately it boils down to pragmatism: a simpler theory is easier to understand and use.

It’s a mistake to think that Occam’s razor tells us anything about the nature of things, whatever that means. It simply describes the limitations of our mind. It’s not nature that abhors complexity — it’s our brains that prefer simplicity.

Unless you believe that physical laws have an independent existence of their own.

The Layered Cake Hypothesis

Scientists since Galileo have had a picture of the Universe that consists of three layers. The top layer is nature that we observe and interact with. Below are laws of physics — the mechanisms that drive nature and make it predictable. Still below is mathematics — the language of physics (that’s what Galileo’s quote at the top of this post is about). According to this view, physics and mathematics are the hidden components of the Universe. They are the invisible cogwheels and pulleys whose existence we can only deduce indirectly. According to this view, we discover the laws of physics. We also discover mathematics.

Notice that this is very different from art. We don’t say that Beethoven discovered the Fifth Symphony (although Igor Stravinsky called it “inevitable”) or that Leonardo da Vinci discovered the Mona Lisa. The difference is that, had not Beethoven composed his symphony, nobody would; but if Cardano hadn’t discovered complex numbers, somebody else probably would. In fact there were many cases of the same mathematical idea being discovered independently by more than one person. Does this prove that mathematical ideas exist the same way as, say, the moons of Jupiter?

Physical discoveries have a very different character than mathematical discoveries. Laws of physics are testable against physical reality. We perform experiments in the real world and if the results contradict a theory, we discard the theory. A mathematical theory, on the other hand, can only be tested against itself. We discard a theory when it leads to internal contradictions.

The belief that mathematics is discovered rather than invented has its roots in Platonism. When we say that the Earth is spherical, we are talking about the idea of a sphere. According to Plato, these ideas do exist independently of the observer — in this case, a mathematician who studies them. Most mathematicians are Platonists, whether they admit it or not.

Being able to formulate laws of physics in terms of simple mathematical equations is a thing of beauty and elegance. But you have to realize that history of physics is littered with carcasses of elegant theories. There was a very elegant theory, which postulated that all matter was made of just four elements: fire, air, water, and earth. The firmament was a collection of celestial spheres (spheres are so Platonic). Then the orbits of planets were supposed to be perfect circles — they weren’t. They aren’t even elliptical, if you study them close enough.

Celestial spheres. An elegant theory, slightly complicated by the need to introduce epicycles to describe the movements of planets

The Impasse

But maybe at the level of elementary particles and quantum fields some of this presumed elegance of the Universe shines through? Well, not really. If the Universe obeyed Occam’s razor, it would have stopped at two quarks, up and down. Nobody needs the strange and the charmed quarks, not to mention the bottom and the top quarks. The Standard Model of particle physics looks like a kitchen sink filled with dirty dishes. And then there is gravity that resists all attempts at grand unification. Strings were supposed to help but they turned out to be as messy as the rest of it.

Of course the current state of impasse in physics might be temporary. After all, we’ve been making tremendous progress up until about the second half of the twentieth century (the most recent major theoretical breakthroughs were the discovery of the Higgs mechanism in 1964 and the proof of renormalizability of the Standard Model in 1971).

On the other hand, it’s possible that we might be reaching the limits of human capacity to understand the Universe. After all, there is no reason to believe that the structure of the Universe is simple enough for the human brain to analyze. There is no guarantee that it can be translated into the language of physics and mathematics.

Is the Universe Knowable?

In fact, if you think about it, our expectation that the Universe is knowable is quite arbitrary. On the one hand you have the vast complex Universe, on the other hand you have slightly evolved monkey brains that have only recently figured out how to use tools and communicate using speech. The idea that these brains could produce and store a model of the Universe is preposterous. Granted, our monkey brains are a product of evolution, and our survival depends on those brains being able to come up with workable models of our environment. These models, however, do not include the microcosm or the macrocosm — just the narrow band of phenomena in between. Our senses can perceive space and time scales within about 8 orders of magnitude. For comparison, the Universe is about 40 orders of magnitude larger than the size of the atomic nucleus (not to mention another 20 orders of magnitude down to Planck length).

Evolution came up with an ingenious scheme to deal with the complexities of our environment. Since it is impossible to store all the information about the Universe in the very limited amount of memory at our disposal, and it’s impossible to run the simulation in real time, we have settled for the next best thing: creating simplified partial models that are composable.

The idea is that, in order to predict the trajectory of a spear thrown at a mammoth, it’s enough to roughly estimate the influence of a constant downward pull of gravity and the atmospheric drag on the idealized projectile. It is perfectly safe to ignore a lot of subtle effects: the non-uniformity of the gravitational field, air-density fluctuations, imperfections of the spear, not to mention relativistic effects or quantum corrections.

And this is the key to understanding our strategy: we build a simple model and then calculate corrections to it. The idea is that corrections are small enough as not to destroy the premise of the model.

Celestial Mechanics

A great example of this is celestial mechanics. To the lowest approximation, the planets revolve around the Sun along elliptical orbits. The ellipse is a solution of the one body problem in a central gravitational field of the Sun; or a two body problem, if you also take into account the tiny orbit of the Sun. But planets also interact with each other — in particular the heaviest one, Jupiter, influences the orbits of other planets. We can treat these interactions as corrections to the original solution. The more corrections we add, the better predictions we can make. Astronomers came up with some ingenious numerical methods to make such calculations possible. And yet it’s known that, in the long run, this procedure fails miserably. That’s because even the tiniest of corrections may lead to a complete change of behavior in the far future. This is the property of chaotic systems, our Solar System being just one example of such. You must have heard of the butterfly effect — the Universe is filled with this kind of butterflies.

Ephemerides: Tables showing positions of planets on the firmament.

The Microcosm

Anyone who is not shocked by quantum
theory has not understood a single word.

— Niels Bohr

At the other end of the spectrum we have atoms and elementary particles. We call them particles because, to the lowest approximation, they behave like particles. You might have seen traces made by particles in a bubble chamber.

Elementary particles might, at first sight, exhibit some properties of macroscopic objects. They follow paths through the bubble chamber. A rock thrown in the air also follows a path — so elementary particles can’t be much different from little rocks. This kind of thinking led to the first model of the atom as a miniature planetary system. As it turned out, elementary particles are nothing like little rocks. So maybe they are like waves on a lake? But waves are continuous and particles can be counted by Geiger counters. We would like elementary particles to either behave like particles or like waves but, despite our best efforts, they refuse to nicely fall into one of the categories.

There is a good reason why we favor particle and wave explanations: they are composable. A two-particle system is a composition of two one-particle systems. A complex wave can be decomposed into a superposition of simpler waves. A quantum system is neither. We might try to separate a two-particle system into its individual constituents, but then we have to introduce spooky action at a distance to explain quantum entanglement. A quantum system is an alien entity that does not fit our preconceived notions, and the main characteristic that distinguishes it from classical phenomena is that it’s not composable. If quantum phenomena were composable in some other way, different from particles or waves, we could probably internalize it. But non-composable phenomena are totally alien to our way of thinking. You might think that physicists have some deeper insight into quantum mechanics, but they don’t. Richard Feynman, who was a no-nonsense physicist, famously said, “If you think you understand quantum mechanics, you don’t understand quantum mechanics.” The problem with understanding quantum mechanics is not that it’s too complex. The problem is that our brains can only deal with concepts that are composable.

It’s interesting to notice that by accepting quantum mechanics we gave up on composability on one level in order to decompose something at another level. The periodic table of elements was the big challenge at the beginning of the 20th century. We already knew that earth, water, air, and fire were not enough. We understood that chemical compounds were combinations of atoms; but there were just too many kinds of atoms, and they could be grouped into families that shared similar properties. The atom was supposed to be indivisible (the Greek word ἄτομος [átomos] means indivisible), but we could not explain the periodic table without assuming that there was some underlying structure. And indeed, there is structure there, but the way the nucleus and the electrons compose in order to form an atom is far from trivial. Electrons are not like planets orbiting the nucleus. They form shells and orbitals. We had to wait for quantum mechanics and the Pauli exclusion principle to describe the structure of an atom.

Every time we explain one level of complexity by decomposing it in terms of simpler constituents we seem to trade off some of the simplicity of the composition itself. This happened again in the sixties, when physicists were faced with a confusing zoo of elementary particles. It seemed like there were hundreds of strongly interacting particles, hadrons, and every year was bringing new discoveries. This mess was finally cleaned up by the introduction of quarks. It was possible to categorize all hadrons as composed of just six types of quarks. This simplification didn’t come without a price, though. When we say an atom is composed of the nucleus and electrons, we can prove it by knocking off a few electrons and studying them as independent particles. We can even split the nucleus into protons and neutrons, although the neutrons outside of a nucleus are short lived. But no matter how hard we try, we cannot split a proton into its constituent quarks. In fact we know that quarks cannot exist outside of hadrons. This is called quark- or color-confinement. Quarks are supposed to come in three “colors,” but the only composites we can observe are colorless. We have stretched the idea of composition by accepting the fact that a composite structure can never be decomposed into its constituents.

I’m Slightly Perturbed

How do physicists deal with quantum mechanics? They use mathematics. Richard Feynman came up with ingenious ways to perform calculations in quantum electrodynamics using perturbation theory. The idea of perturbation theory is that you start with the simple approximation and keep adding corrections to it, just like with celestial mechanics. The terms in the expansion can be visualized as Feynman diagrams. For instance, the lowest term in the interaction between two electrons corresponds to a diagram in which the electrons exchange a virtual photon.

This term gives the classical repulsive force between two charged particles. The first quantum correction to it involves the exchange of two virtual photons. And here’s the kicker: this correction is not only larger than the original term — it’s infinite! So much for small corrections. Yes, there are tricks to shove this infinity under the carpet, but everybody who’s not fooling themselves understands that the so-called renormalization is an ugly hack. We don’t understand what the world looks like at very small scales and we try to ignore it using tricks that make mathematicians faint.

Physicists are very pragmatic. As long as there is a recipe for obtaining results that can be compared with the experiment, they are happy with a theory. In this respect, the Standard Model is the most successful theory in the Universe. It’s a unified quantum field theory of electromagnetism, strong, and weak interactions that produces results that are in perfect agreement with all high-energy experiments we were able to perform to this day. Unfortunately, the Standard Model does not give us the understanding of what’s happening. It’s as if physicists were given an alien cell phone and figured out how to use various applications on it but have no idea about the internal workings of the gadget. And that’s even before we try to involve gravity in the model.

The “periodic table” of elementary particles.

The prevailing wisdom is that these are just little setbacks on the way toward the ultimate theory of everything. We just have to figure out the correct math. It may take us twenty years, or two hundred years, but we’ll get there. The hope that math is the answer led theoretical physicists to study more and more esoteric corners of mathematics and to contribute to its development. One of the most prominent theoretical physicists, Edward Witten, the father of M-theory that unified a number of string theories, was awarded the prestigious Fields Medal for his contribution to mathematics (Nobel prizes are only awarded when a theory is confirmed by experiment which, in the case of string theory, may be a long way off, if ever).

Math is About Composition

If mathematics is discoverable, then we might indeed be able to find the right combination of math and physics to unlock the secrets of the Universe. That would be extremely lucky, though.

There is one property of all of mathematics that is really striking, and it’s most clearly visible in foundational theories, such as logic, category theory, and lambda calculus. All these theories are about composability. They all describe how to construct more complex things from simpler elements. Logic is about combining simple predicates using conjunctions, disjunctions, and implications. Category theory starts by defining a composition of arrows. It then introduces ways of combining objects using products, coproducts, and exponentials. Typed lambda calculus, the foundation of computer languages, shows us how to define new types using product types, sum types, and functions. In fact it can be shown that constructive logic, cartesian closed categories, and typed lambda calculus are three different formulations of the same theory. This is known as the Curry-Howard-Lambek isomorphism. We’ve been discovering the same thing over and over again.

It turns out that most mathematical theories have a skeleton that can be captured by category theory. This should not be a surprise considering how the biggest revolutions in mathematics were the result of the realization that two or more disciplines were closely related to each other. The latest such breakthrough was the proof of Fermat’s Last Theorem. This proof was based on the Taniyama-Shimura conjecture that related the study of elliptic curves to modular forms — two radically different branches of mathematics.

Earlier, geometry was turned upside down when it became obvious that one can define shapes using algebraic equations in cartesian coordinates. This retooling of geometry turned out to be very advantageous, because algebra has better compositional qualities than Euclidean-style geometry.

Finally, any mathematical theory starts with a set of axioms, which are combined using proof systems to produce theorems. Proof systems are compositional which, again, supports the view that mathematics is all about composition. But even there we hit a snag when we tried to decompose the space of all statements into true and false. Gödel showed that, in any consistent theory rich enough to express arithmetic, we can formulate a statement that can be neither proved nor disproved, and thus Hilbert’s great project of defining one grand mathematical theory fell apart. It’s as if we have discovered that the Lego blocks we were playing with were not part of a giant Lego spaceship.

Where Does Composability Come From?

It’s possible that composability is the fundamental property of the Universe, which would make it comprehensible to us humans, and it would validate our physics and mathematics. Personally, I’m very reluctant to accept this point of view, because it would give intelligent life a special place in the grand scheme of things. It’s as if the laws of the Universe were created in such a way as to be accessible to the brains of the evolved monkeys that we are.

It’s much more likely that mathematics describes the ways our brains are capable of composing simpler things into more complex systems. Anything that we can comprehend using our brains must, by necessity, be decomposable — and there are only so many ways of putting things together. Discovering mathematics means discovering the structure of our brains. Platonic ideals exist only as patterns of connections between neurons.

The amazing scientific progress that humanity has been able to make to this day was possible because there were so many decomposable phenomena available to us. Granted, as we progressed, we had to come up with more elaborate composition schemes. We have discovered differential equations, Hilbert spaces, path integrals, Lie groups, tensor calculus, fiber bundles, etc. With the combination of physics and mathematics we have tapped into a gold vein of composable phenomena. But research takes more and more resources as we progress, and it’s possible that we have reached the bedrock that may be resistant to our tools.

We have to seriously consider the possibility that there is a major incompatibility between the complexity of the Universe and the simplicity of our brains. We are not without recourse, though. We have at our disposal tools that multiply the power of our brains. The first such tool is language, which helps us combine brain powers of large groups of people. The invention of the printing press and then the internet helped us record and gain access to vast stores of information that’s been gathered by the combined forces of teams of researchers over long periods of time. But even though this is quantitative improvement, the processing of this information still relies on composition because it has to be presented to human brains. The fact that work can be divided among members of larger teams is proof of its decomposability. This is also why we sometimes need a genius to make a major breakthrough, when a task cannot be easily decomposed into smaller, easier, subtasks. But even genius has to start somewhere, and the ability to stand on the shoulders of giants is predicated on decomposability.

Can Computers Help?

The role of computers in doing science is steadily increasing. To begin with, once we have a scientific theory, we can write computer programs to perform calculations. Nobody calculates the orbits of planets by hand any more — computers can do it much faster and error free. We are also beginning to use computers to prove mathematical theorems. The four-color theorem is an example of a proof that would be impossible without the help of computers. It was decomposable, but the number of special cases was well over a thousand (it was later reduced to 633 — still too many, even for a dedicated team of graduate students).

Every planar map can be colored using only four colors.

Computer programs that are used in theorem proving provide a level of indirection between the mind of a scientist and formal manipulations necessary to prove a theorem. A programmer is still in control, and the problem is decomposable, but the number of components may be much larger, often too large for a human to go over one by one. The combined forces of humans and computers can stretch the limits of composability.

But how can we tackle problems that cannot be decomposed? First, let’s observe that in real life we rarely bother to go through the process of detailed analysis. In fact the survival of our ancestors depended on the ability to react quickly to changing circumstances, to make instantaneous decisions. When you see a tiger, you don’t decompose the image into individual parts, analyze them, and put together a model of a tiger. Image recognition is one of these areas where the analytic approach fails miserably. People tried to write programs that would recognize faces using separate subroutines to detect eyes, noses, lips, ears, etc., and composing them together, but they failed. And yet we instinctively recognize faces of familiar people at a glance.

Neural Networks and AI

We are now able to teach computers to classify images and recognize faces. We do it not by designing dedicated algorithms; we do it by training artificial neural networks. A neural network doesn’t start with a subsystem for recognizing eyes or noses. It’s possible that, in the process of training, it will develop the notions of lines, shadows, maybe even eyes and noses. But by no means is this necessary. Those abstractions, if they evolve, would be encoded in the connections between its neurons. We might even help the AI develop some general abstractions by tweaking its architecture. It’s common, for instance, to include convolutional layers to pre-process the input. Such a layer can be taught to recognize local features and compress the input to a more manageable size. This is very similar to how our own vision works: the retina in our eye does this kind of pre-processing before sending compressed signals through the optic nerve.

Compression is the key to matching the complexity of the task at hand to the simplicity of the system that is processing it. Just like our sensory organs and brains compress the inputs, so do neural networks. There are two kinds of compression: the kind that doesn’t lose any information, just removing the redundancy in the original signal; and the lossy kind that throws away irrelevant information. The task of deciding what information is irrelevant is in itself a process of discovery. The difference between the Earth and a sphere is the size of the Himalayas, but we ignore it when we look at the globe. When calculating orbits around the Sun, we shrink all planets to points. That’s compression by elimination of details that we deem less important for the problem we are solving. In science, this kind of compression is called abstraction.

We are still way ahead of neural networks in our capacity to create abstractions. But it’s possible that, at some point, they’ll catch up with us. The problem is: Will we be able to understand machine-generated abstractions? We are already at the limits of understanding human-generated abstractions. You may count yourself a member of a very small club if you understand the statement “monad is a monoid in the category of endofunctors” that is chock full of mathematical abstractions. If neural networks come up with new abstractions/compression schemes, we might not be able to reverse engineer them. Unlike a human scientist, an AI is unlikely to be able to explain to us how it came up with a particular abstraction.

I’m not scared of a future AI trying to eliminate humankind (unless that’s what its design goals are). I’m afraid of the scenario in which we ask the AI a question like, “Can quantum mechanics be unified with gravity?” and it will answer, “Yes, but I can’t explain it to you, because you don’t have the brain capacity to understand the explanation.”

And this is the optimistic scenario. It assumes that such questions can be answered within the decomposition/re-composition framework. That the Universe can be decomposed into particles, waves, fields, strings, branes, and maybe some new abstractions that we haven’t even thought about. We would at least get the satisfaction that we were on the right path but that the number of moving parts was simply too large for us to assimilate — just like with the proof of the four-color theorem.

But it’s possible that this reductionist scenario has its limits. That the complexity of the Universe is, at some level, irreducible and cannot be captured by human brains or even the most sophisticated AIs.

There are people who believe that we live in a computer simulation. But if the Universe is irreducible, it would mean that the smallest computer on which such a simulation could be run is the Universe itself, in which case it doesn’t make sense to call it a simulation.


The scientific method has been tremendously successful in explaining the workings of our world. It led to exponential expansion of science and technology that started in the 19th century and continues to this day. We are so used to its successes that we are betting the future of humanity on it. Usually when somebody attacks the scientific method, they are coming from the background of obscurantism. Such attacks are easily rebuffed or dismissed. What I’m arguing is that science is not a property of the Universe, but rather a construct of our limited brains. We have developed some very sophisticated tools to create models of the Universe based on the principle of composition. Mathematics is the study of various ways of composing things and physics is applied composition. There is no guarantee, however, that the Universe is decomposable. Assuming that would be tantamount to postulating that its structure revolves around human brains, just like we used to believe that the Universe revolves around Earth.

You can also watch my talk on this subject.

This is part 29 of Categories for Programmers. Previously: Enriched Categories. See the Table of Contents.

I realize that we might be getting away from programming and diving into hard-core math. But you never know what the next big revolution in programming might bring and what kind of math might be necessary to understand it. There are some very interesting ideas going around, like functional reactive programming with its continuous time, the extension of Haskell’s type system with dependent types, or the exploration of homotopy type theory in programming.

So far I’ve been casually identifying types with sets of values. This is not strictly correct, because such an approach doesn’t take into account the fact that, in programming, we compute values, and the computation is a process that takes time and, in extreme cases, might not terminate. Divergent computations are part of every Turing-complete language.

There are also foundational reasons why set theory might not be the best fit as the basis for computer science or even math itself. A good analogy is that of set theory being the assembly language that is tied to a particular architecture. If you want to run your math on different architectures, you have to use more general tools.

One possibility is to use spaces in place of sets. Spaces come with more structure, and may be defined without recourse to sets. One thing usually associated with spaces is topology, which is necessary to define things like continuity. And the conventional approach to topology is, you guessed it, through set theory. In particular, the notion of a subset is central to topology. Not surprisingly, category theorists generalized this idea to categories other than Set. The type of category that has just the right properties to serve as a replacement for set theory is called a topos (plural: topoi), and it provides, among other things, a generalized notion of a subset.

Subobject Classifier

Let’s start by trying to express the idea of a subset using functions rather than elements. Any function f from some set a to b defines a subset of b–that of the image of a under f. But there are many functions that define the same subset. We need to be more specific. To begin with, we might focus on functions that are injective — ones that don’t smush multiple elements into one. Injective functions “inject” one set into another. For finite sets, you may visualize injective functions as parallel arrows connecting elements of one set to elements of another. Of course, the first set cannot be larger than the second set, or the arrows would necessarily converge. There is still some ambiguity left: there may be another set a' and another injective function f' from that set to b that picks the same subset. But you can easily convince yourself that such a set would have to be isomorphic to a. We can use this fact to define a subset as a family of injective functions that are related by isomorphisms of their domains. More precisely, we say that two injective functions:

f  :: a  -> b
f' :: a' -> b

are equivalent if there is an isomorphism:

h :: a -> a'

such that:

f = f' . h

Such a family of equivalent injections defines a subset of b.

This definition can be lifted to an arbitrary category if we replace injective functions with monomorphisms. Just to remind you, a monomorphism m from a to b is defined by its universal property. For any object c and any pair of morphisms:

g  :: c -> a
g' :: c -> a

such that:

m . g = m . g'

it must be that g = g'.

On sets, this definition is easier to understand if we consider what it would mean for a function m not to be a monomorphism. It would map two different elements of a to a single element of b. We could then find two functions g and g' that differ only at those two elements. The postcomposition with m would then mask this difference.
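
Here is a tiny Haskell sketch of that failure (the names are mine, chosen only for illustration): m collapses two elements, and the two distinct functions g and g' become indistinguishable after postcomposition with m.

-- m is not injective: it sends both Booleans to the single element of ()
m :: Bool -> ()
m _ = ()

-- g and g' differ, yet m . g and m . g' are the same function,
-- so m fails the defining property of a monomorphism
g, g' :: () -> Bool
g  _ = True
g' _ = False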

There is another way of defining a subset: using a single function called the characteristic function. It’s a function χ from the set b to a two-element set Ω. One element of this set is designated as “true” and the other as “false.” This function assigns “true” to those elements of b that are members of the subset, and “false” to those that aren’t.

It remains to specify what it means to designate an element of Ω as “true.” We can use the standard trick: use a function from a singleton set to Ω. We’ll call this function true:

true :: 1 -> Ω
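
In Haskell terms (treating types as sets, as we’ve been doing), a minimal sketch of these ingredients might look as follows; Omega and isEven are just illustrative names:

type Omega = Bool

true :: () -> Omega
true () = True

-- a characteristic function: it selects the even numbers
isEven :: Integer -> Omega
isEven n = n `mod` 2 == 0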

These definitions can be combined in such a way that they not only define what a subobject is, but also define the special object Ω without talking about elements. The idea is that we want the morphism true to represent a “generic” subobject. In Set, it picks a single-element subset from a two-element set Ω. This is as generic as it gets. It’s clearly a proper subset, because Ω has one more element that’s not in that subset.

In a more general setting, we define true to be a monomorphism from the terminal object to the classifying object Ω. But we have to define the classifying object. We need a universal property that links this object to the characteristic function. It turns out that, in Set, the pullback of true along the characteristic function χ defines both the subset a and the injective function that embeds it in b. Here’s the pullback diagram:

Let’s analyze this diagram. The pullback equation is:

true . unit = χ . f

The function true . unit maps every element of a to “true.” Therefore f must map all elements of a to those elements of b for which χ is “true.” These are, by definition, the elements of the subset that is specified by the characteristic function χ. So the image of f is indeed the subset in question. The universality of the pullback guarantees that f is injective.
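
For a finite set represented as a list, this part of the pullback can be sketched very directly (a toy illustration, not part of the formal construction): the subset consists of exactly those elements on which χ yields True, together with its inclusion back into b.

-- the subset carved out by the characteristic function chi
pullbackSubset :: (b -> Bool) -> [b] -> [b]
pullbackSubset chi = filter chi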

This pullback diagram can be used to define the classifying object in categories other than Set. Such a category must have a terminal object, which will let us define the monomorphism true. It must also have pullbacks — the actual requirement is that it must have all finite limits (a pullback is an example of a finite limit). Under those assumptions, we define the classifying object Ω by the property that, for every monomorphism f there is a unique morphism χ that completes the pullback diagram.

Let’s analyze the last statement. When we construct a pullback, we are given three objects Ω, b and 1; and two morphisms, true and χ. The existence of a pullback means that we can find the best such object a, equipped with two morphisms f and unit (the latter is uniquely determined by the definition of the terminal object), that make the diagram commute.

Here we are solving a different system of equations. We are solving for Ω and true while varying both a and b. For a given a and b there may or may not be a monomorphism f::a->b. But if there is one, we want it to be a pullback of some χ. Moreover, we want this χ to be uniquely determined by f.

We can’t say that there is a one-to-one correspondence between monomorphisms f and characteristic functions χ, because a pullback is only unique up to isomorphism. But remember our earlier definition of a subset as a family of equivalent injections. We can generalize it by defining a subobject of b as a family of equivalent monomorphisms to b. This family of monomorphisms is in one-to-one correspondence with the family of equivalent pullbacks of our diagram.

We can thus define a set of subobjects of b, Sub(b), as a family of monomorphisms, and see that it is isomorphic to the set of morphisms from b to Ω:

Sub(b) ≅ C(b, Ω)

This happens to be a natural isomorphism of two functors. In other words, Sub(-) is a representable (contravariant) functor whose representation is the object Ω.


A topos is a category that:

  1. Is cartesian closed: It has all products, the terminal object, and exponentials (defined as right adjoints to products),
  2. Has limits for all finite diagrams,
  3. Has a subobject classifier Ω.

This set of properties makes a topos a shoo-in for Set in most applications. It also has additional properties that follow from its definition. For instance, a topos has all finite colimits, including the initial object.

It would be tempting to define the subobject classifier as a coproduct (sum) of two copies of the terminal object — that’s what it is in Set — but we want to be more general than that. Topoi in which this is true are called Boolean.

Topoi and Logic

In set theory, a characteristic function may be interpreted as defining a property of the elements of a set — a predicate that is true for some elements and false for others. The predicate isEven selects a subset of even numbers from the set of natural numbers. In a topos, we can generalize the idea of a predicate to be a morphism from object a to Ω. This is why Ω is sometimes called the truth object.

Predicates are the building blocks of logic. A topos contains all the necessary instrumentation to study logic. It has products that correspond to logical conjunctions (logical and), coproducts for disjunctions (logical or), and exponentials for implications. All standard axioms of logic hold in a topos except for the law of excluded middle (or, equivalently, double negation elimination). That’s why the logic of a topos corresponds to constructive or intuitionistic logic.

Intuitionistic logic has been steadily gaining ground, finding unexpected support from computer science. The classical notion of excluded middle is based on the belief that there is absolute truth: Any statement is either true or false or, as the ancient Romans would say, tertium non datur (there is no third option). But the only way we can know whether something is true or false is if we can prove or disprove it. A proof is a process, a computation — and we know that computations take time and resources. In some cases, they may never terminate. It doesn’t make sense to claim that a statement is true if we cannot prove it in a finite amount of time. A topos with its more nuanced truth object provides a more general framework for modeling interesting logics.

Next: Lawvere Theories.


  1. Show that the function f that is the pullback of true along the characteristic function must be injective.

Abstract: I present a uniform derivation of profunctor optics: isos, lenses, prisms, and grates based on the Yoneda lemma in the (enriched) profunctor category. In particular, lenses and prisms correspond to Tambara modules with the cartesian and cocartesian tensor product.

This blog post is the result of a collaboration between many people. The categorical profunctor picture solidified after long discussions with Edward Kmett. A lot of the theory was developed in exchanges on the Lens IRC channel between Russell O’Connor, Edward Kmett and James Deikun. They came up with the idea to use the Pastro functor to freely generate Tambara modules, which was the missing piece that completed the picture.

My interest in lenses started a long time ago when I first made the connection between the universal quantification over functors in the van Laarhoven representation of lenses and the Yoneda lemma. Since I was still learning the basics of category theory, it took me a long time to find the right language to make the formal derivation. Unbeknownst to me, Mauro Jaskellioff and Russell O’Connor independently had the same idea and they published a paper about it soon after I published my blog post. But even though this solved the problem of lenses, prisms still seemed out of reach of the Yoneda lemma. Prisms require a more general formulation using universal quantification over profunctors. I was able to put a dent in it by deriving Isos from profunctor Yoneda, but then I was stuck again. I shared my ideas with Russell, who reached for help on the IRC channel, and a Haskell proof of concept was quickly established. Two years later, after a brainstorm with Edward, I was finally able to gather all these ideas in one place and give them a little categorical polish.

Yoneda Lemma

The starting point is the Yoneda lemma, which states that the set of natural transformations between the hom-functor C(a, -) in the category C and an arbitrary functor f from C to Set is (naturally) isomorphic with the set f a:

[C, Set](C(a, -), f) ≅ f a

Here, f is a member of the functor category [C, Set], where natural transformations form hom-sets.

The set of natural transformations may be represented as an end, leading to the following formulation of the Yoneda lemma:

∫x Set(C(a, x), f x) ≅ f a

This notation makes the object x explicit, which is often very convenient. It can be easily translated to Haskell, by replacing the end with the universal quantifier. We get:

forall x. (a -> x) -> f x ≅ f a
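
Here is a quick sketch of the two witnesses of this isomorphism (it needs the RankNTypes extension):

-- pack a value of f a into the Yoneda form
toYoneda :: Functor f => f a -> (forall x. (a -> x) -> f x)
toYoneda fa = \axf -> fmap axf fa

-- unpack by applying the polymorphic function to the identity
fromYoneda :: (forall x. (a -> x) -> f x) -> f a
fromYoneda y = y id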

A special case of the Yoneda lemma replaces the functor f with a hom-functor in C:

f x = C(b, x)

and we get:

∫x Set(C(a, x), C(b, x)) ≅ C(b, a)

This form of the Yoneda lemma is useful in showing the Yoneda embedding, which states that any category C can be fully and faithfully embedded in the functor category [C, Set]. The embedding is a functor, and the above formula defines its action on morphisms.

We will be interested in using the Yoneda lemma in the functor category. We simply replace C with [C, Set] in the previous formula, and do some renaming of variables:

∫f Set([C, Set](g, f), [C, Set](h, f)) ≅ [C, Set](h, g)

The hom-sets in the functor category are sets of natural transformations, which can be rewritten using ends:

∫f Set(∫x Set(g x, f x), ∫x Set(h x, f x)) 
  ≅ ∫x Set(h x, g x)


This is a short recap of adjunctions. We start with two functors going between two categories C and D:

L :: C -> D
R :: D -> C

We say that L is left adjoint to R iff there is a natural isomorphism between hom-sets:

D(L x, y) ≅ C(x, R y)

In particular, we can define an adjunction in a functor category [C, Set]. We start with two higher order (endo-) functors:

L :: [C, Set] -> [C, Set]
R :: [C, Set] -> [C, Set]

We say that L is left adjoint to R iff there is a natural isomorphism between two sets of natural transformations:

[C, Set](L f, g) ≅ [C, Set](f, R g)

where f and g are functors from C to Set. We can rewrite natural transformations using ends:

∫x Set((L f) x, g x) ≅ ∫x Set(f x, (R g) x)

In Haskell, you may think of f and g as type constructors (with the corresponding Functor instances), in which case L and R are higher-order type constructors parameterized by these type constructors (similar to the way the Monad or Functor classes are parameterized by type constructors).
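
As a hedged sketch (the class and its method names are my own, not a library API), such an adjunction between higher-order functors could be expressed as a pair of mappings between natural transformations:

-- L and R act on type constructors; the two methods witness the isomorphism
-- [C, Set](L f, g) ≅ [C, Set](f, R g), with natural transformations encoded
-- as rank-2 polymorphic functions (needs RankNTypes)
class HAdjunction l r where
  hLeftAdjunct  :: (forall x. l f x -> g x) -> (forall x. f x -> r g x)
  hRightAdjunct :: (forall x. f x -> r g x) -> (forall x. l f x -> g x)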

Yoneda with Adjunction

Here’s a little trick. Since the fixed objects in the formula for Yoneda embedding are arbitrary, we can pick them to be images of other objects under some functor L that we know is left adjoint to another functor R:

∫x Set(D(L a, x), D(L b, x)) ≅ D(L b, L a)

Using the adjunction, this is isomorphic to:

∫x Set(C(a, R x), C(b, R x)) ≅ C(b, (R ∘ L) a)

Notice that the composition R ∘ L of adjoint functors is a monad in C. Let’s write this monad as Φ.

The interesting case is the adjunction between a forgetful functor U and a free functor F. We get:

∫x Set(C(a, U x), C(b, U x)) ≅ C(b, Φ a)

The end is taken over x in a category D that has some additional structure (we’ll see examples of that later); but the hom-sets are in the underlying simpler category C, which is the target of the forgetful functor U.

The Yoneda-with-adjunction formula generalizes to the category of functors:

∫f Set(∫x Set((L g) x, f x), ∫x Set((L h) x, f x)) 
  ≅ ∫x Set((L h) x, (L g) x)

leading to:

∫f Set(∫x Set(g x, (R f) x), ∫x Set(h x, (R f) x)) 
  ≅ ∫x Set(h x, (Φ g) x)

Here, Φ is the monad R ∘ L in the category of functors.

An interesting special case is when we substitute hom-functors for g and h:

g x = C(a, x)
h x = C(s, x)

We get:

∫f Set(∫x Set(C(a, x), (R f) x), ∫x Set(C(s, x), (R f) x)) 
  ≅ ∫x Set(C(s, x), (Φ C(a, -)) x)

We can then use the regular Yoneda lemma to “integrate over x” and reduce it down to:

∫f Set((R f) a, (R f) s) ≅ (Φ C(a, -)) s

Again, we are particularly interested in the forgetful/free adjunction:

∫f Set((U f) a, (U f) s) ≅ (Φ C(a, -)) s

with the monad:

Φ = U ∘ F

The simplest application of this identity is when the functors in question are identity functors. We get:

∫f Set(f a, f s) ≅ C(a, s)

In Haskell this becomes:

forall f. Functor f => f a -> f s  ≅ a -> s

You may think of this formula as defining the trivial kind of optic that simply turns a to s.
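
A small sketch of the two directions, using the Identity functor (RankNTypes again):

import Data.Functor.Identity (Identity (..))

-- specializing the natural transformation at Identity recovers the function
toFn :: (forall f. Functor f => f a -> f s) -> (a -> s)
toFn nat = runIdentity . nat . Identity

-- the other direction is just fmap
fromFn :: (a -> s) -> (forall f. Functor f => f a -> f s)
fromFn = fmap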


Profunctors are just functors from a product category Cop×D to Set. All the results from the last section can be directly applied to the profunctor category [Cop×D, Set]. Keep in mind that morphisms in this category are natural transformations between profunctors. Here’s the key formula:

∫p Set((U p)<a, b>, (U p)<s, t>) ≅ (Φ (Cop×D)(<a, b>, -)) <s, t>

I have replaced a with a pair <a, b> and s with a pair <s, t>. The end is taken over all profunctors that exhibit some structure that U forgets, and F freely creates. Φ is the monad U ∘ F. It’s a monad that acts on profunctors to produce other profunctors.

Notice that a hom-set in the category Cop×D is a set of pairs of morphisms:

<f, g> :: (Cop×D)(<a, b>, <s, t>)
f :: s -> a
g :: b -> t

the first one going in the opposite direction.

The simplest application of this identity is when we don’t impose any constraints on the profunctors, in which case Φ is the identity monad. We get:

∫p Set(p <a, b>, p <s, t>) ≅ (Cop×D)(<a, b>, <s, t>)

Haskell translation of this formula gives the well-known representation of Iso:

forall p. Profunctor p => p a b -> p s t ≅ Iso s t a b


data Iso s t a b = Iso (s -> a) (b -> t)
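
To see the isomorphism concretely, here is a sketch of the extraction in one direction using the Exchange profunctor (the same device is used in the lens library), and plain dimap in the other (needs RankNTypes and Data.Profunctor):

data Exchange a b s t = Exchange (s -> a) (b -> t)

instance Profunctor (Exchange a b) where
  dimap f g (Exchange sa bt) = Exchange (sa . f) (g . bt)

-- instantiating the polymorphic function at Exchange recovers the concrete Iso
toIso :: (forall p. Profunctor p => p a b -> p s t) -> Iso s t a b
toIso o = case o (Exchange id id) of
  Exchange sa bt -> Iso sa bt

-- the other direction is just dimap
fromIso :: Profunctor p => Iso s t a b -> p a b -> p s t
fromIso (Iso sa bt) = dimap sa bt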

Interesting things happen when we impose more structure on our profunctors.

Enriched Categories

First, let’s generalize profunctors to work on enriched categories. We start with some monoidal category V whose objects serve as hom-objects in an enriched category A. The category V will essentially replace Set in our constructions. For instance, we’ll work with profunctors that are enriched functors from the (enriched) product category to V:

p :: Aop ⊗ A -> V

Notice that we use a tensor product of categories. The objects in such a category are pairs of objects, and the hom-objects are tensor products of individual hom-objects. The definition of composition in a product category requires that the tensor product in V be symmetric (up to isomorphism).

For such profunctors, there is a suitable generalization of the end:

∫x p x x

It’s an object in V together with a V-natural family of projections:

pry :: ∫x p x x -> p y y

We can formulate the Yoneda lemma in an enriched setting by considering enriched functors from A to V. We get the following generalization:

∫x [A(a, x), f x] ≅ f a

Notice that A(a, x) is now an object of V — the hom-object from a to x. The notation [v, w] generalizes the internal hom. It is defined as the right adjoint to the tensor product in V:

V(x ⊗ v, w) ≅ V(x, [v, w])

We are assuming that V is closed, so the internal hom is defined for every pair of objects.

Enriched functors, or V-functors, between two enriched categories C and D form a functor category [C, D] that is itself enriched over V. The hom-object between two functors f and g is given by the end:

[C, D](f, g) = ∫x D(f x, g x)

We can therefore use the Yoneda lemma in a category of enriched functors, or in the category of enriched profunctors. Therefore the result of the previous section holds in the enriched setting as well:

∫p [(U p)<a, b>, (U p)<s, t>] ≅ (Φ (Aop⊗A)(<a, b>, -)) <s, t>

with the understanding that:

(Aop⊗A)(<a, b>, -)

is an enriched hom functor mapping pairs of objects in A to objects in V, plus the appropriate action on hom-objects. This hom-functor is the profunctor on which Φ acts.

Tambara Modules

An enriched category A may have a monoidal structure of its own. We’ll use the same tensor product notation for its structure as we did for the underlying monoidal category V. There is also a tensorial unit object i in A.

A Tambara module is a V-functor p from Aop⊗A to V, which transforms under the tensor action of A according to a family of morphisms, natural in all three arguments:

α a x y :: p x y -> p (a ⊗ x) (a ⊗ y)

Notice that these are morphisms in the underlying category V, which is also the target of the profunctor.

We impose the usual unit law:

α i x y = id

and associativity:

α a⊗b x y = α a b⊗x b⊗y ∘ α b x y

Strictly speaking, one can separately define the left and the right action but, for simplicity, we’ll assume that the product is symmetric (up to isomorphism).

The intuition behind Tambara modules is that some of the profunctor values are not independent of others. Once we have calculated p x y, we can obtain the value of p at any of the points on the path <a⊗x, a⊗y> by applying α.

Tambara modules form a category that’s enriched over V. The construction of this enrichment is non-trivial. The hom-object between two profunctors p and q in a category of profunctors is given by the end:

[Aop⊗A, V](p, q) = ∫<x y> V(p x y, q x y)

This object generalizes the set of natural transformations. Conceptually, not all natural transformations preserve the Tambara structure, so we have to define a subobject of this hom-object that does. The intuition is that the end is a generalized product of its components. It comes equipped with projections. For instance, the projection pr<x,y> picks the component:

V(p x y, q x y)

But there is also a projection pr<a⊗x, a⊗y> that picks:

V(p a⊗x a⊗y, q a⊗x a⊗y)

from the same end. These two objects are not completely independent, because they can both be transformed into the same object. We have:

V(id, αa) :: V(p x y, q x y) -> V(p x y, q a⊗x a⊗y)
V(αa, id) :: V(p a⊗x a⊗y, q a⊗x a⊗y) -> V(p x y, q a⊗x a⊗y)

We are using the fact that the mapping:

<v, w> -> V(v, w)

is itself a profunctor Vop×V -> V, so it can be used to lift pairs of morphisms in V.

Now, given any triple a, x, and y, we want the two paths to be equivalent, which means finding the equalizer between each pair of morphisms:

V(id, αa) ∘ pr<x, y>
V(αa, id) ∘ pr<a⊗x, a⊗y>

Since we want our hom-object to satisfy the above condition for any triple, we have to construct it as an intersection of all those equalizers. Here, an intersection means an object of V together with a family of monomorphisms, each embedding it into a particular equalizer.

It’s possible to construct a forgetful functor from the Tambara category to the category of profunctors [Aop⊗A, V]. It forgets the existence of α and it maps hom-objects between the two categories. Composition in the Tambara category is defined in such a way as to be compatible with this forgetful functor.

The fact that Tambara modules form a category is important, because we want to be able to use the Yoneda lemma in that category.

Tambara Optics

The key observation is that the forgetful functor from the Tambara category has a left adjoint, and that their composition forms a monad in the category of profunctors. We’ll plug this monad into our general formula.

The construction of this monad starts with a comonad that is given by the following end:

(Θ p) s t = ∫c p (c⊗s) (c⊗t)

For a given profunctor p, this comonad builds a new profunctor that is essentially a gigantic product of all values of this profunctor “shifted” by tensoring its arguments with all possible objects c.

The monad we are interested in is the left adjoint to this comonad (calculated using a Kan extension):

(Φ p) s t = ∫ c x y A(s, c⊗x) ⊗ A(c⊗y, t) ⊗ p x y

Notice that we have two separate tensor products in this formula: one in V, between the hom-objects and the profunctor, and one in A, under the hom-objects. This monad takes an arbitrary profunctor p and produces a new profunctor Φ p.

We can now use our earlier formula:

∫p [(U p)<a, b>, (U p)<s, t>] ≅ (Φ (Aop⊗A)(<a, b>, -)) <s, t>

inside the Tambara category. To calculate the right hand side, let’s evaluate the action of Φ on the hom-profunctor:

(Φ (Aop⊗A)(<a, b>, -)) <s, t>
= ∫ c x y A(s, c⊗x) ⊗ A(c⊗y, t) ⊗ (Aop⊗A)(<a, b>, <x, y>)

We can “integrate over” x and y using the Yoneda lemma to get:

∫ c A(s, c⊗a) ⊗ A(c⊗b, t)

We get the following result:

∫p [(U p)<a, b>, (U p)<s, t>] ≅ ∫ c A(s, c⊗a) ⊗ A(c⊗b, t)

where the end on the left is taken over all Tambara modules, and U is the forgetful functor from the Tambara category to the category of profunctors.

If the category in question is closed, we can use the adjunction:

A(c⊗b, t) ≅ A(c, [b, t])

and “perform the integration” over c to arrive at the set/get formulation:

∫ c A(s, c⊗a) ⊗ A(c, [b, t]) ≅ A(s, [b, t]⊗a)

It corresponds to the familiar Haskell lens type:

(s -> b -> t, s -> a)

(This final trick doesn’t work for prisms, because there is no right adjoint to Either.)

Haskell Translation

A Tambara module is parameterized by the choice of the tensor product ten. We can write a general definition:

class (Profunctor p) => TamModule (ten :: * -> * -> *) p where
  leftAction  :: p a b -> p (c `ten` a) (c `ten` b)
  rightAction :: p a b -> p (a `ten` c) (b `ten` c)

This can be further specialized for two obvious monoidal structures: product and sum:

type TamProd p = TamModule (,) p
type TamSum p = TamModule Either p

The former is equivalent to what is called a Strong (or Cartesian) profunctor in Haskell; the latter is equivalent to a Choice (or Cocartesian) profunctor.
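
As a quick sanity check, here is a sketch showing that the plain function arrow is a Tambara module for the product, i.e. a Strong profunctor:

instance TamModule (,) (->) where
  leftAction  f (c, a) = (c, f a)
  rightAction f (a, c) = (f a, c)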

Replacing ends and coends with universal and existential quantifiers in Haskell, our main formula becomes (pseudocode):

forall p. TamModule ten p => p a b -> p s t 
   ≅ exists c. (s -> c `ten` a, c `ten` b -> t)

The two sides of the isomorphism can be defined as the following data structures:

type TamOptic ten s t a b 
    = forall p. TamModule ten p => p a b -> p s t
data Optic ten s t a b 
    = forall c. Optic (s -> c `ten` a) (c `ten` b -> t)

Choosing the product for the tensor, we recover two equivalent definitions of a lens:

type Lens s t a b = forall p. Strong p => p a b -> p s t
data Lens s t a b = forall c. Lens (s -> (c, a)) ((c, b) -> t)

Choosing the coproduct, we get:

type Prism s t a b = forall p. Choice p => p a b -> p s t
data Prism s t a b = forall c. Prism (s -> Either c a) (Either c b -> t)

These are the well-known existential representations of lenses and prisms.
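
For completeness, here are sketches of the standard constructors that build the profunctor representations from a getter/setter pair and from a builder/matcher pair (first' and right' are the methods of Strong and Choice from Data.Profunctor):

lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b
lens get set = dimap (\s -> (get s, s)) (\(b, s) -> set s b) . first'

prism :: (b -> t) -> (s -> Either t a) -> Prism s t a b
prism bt match = dimap match (either id bt) . right'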

The monad Φ (or, equivalently, the free functor that generates Tambara modules), is known in Haskell under the name Pastro for product, and Copastro for coproduct:

data Pastro p a b where
  Pastro :: ((y, z) -> b) -> p x y -> (a -> (x, z)) 
            -> Pastro p a b
data Copastro p a b where
  Copastro :: (Either y z -> b) -> p x y -> (a -> Either x z) 
            -> Copastro p a b

They are the left adjoints of Tambara and Cotambara, respectively:

newtype Tambara p a b = Tambara (forall c. p (a, c) (b, c))
newtype Cotambara p a b = Cotambara (forall c. p (Either a c) (Either b c))

which are special cases of the comonad Θ.


It’s interesting that the work on Tambara modules has relevance to Haskell optics. It is, however, just one example of an even larger pattern.

The pattern is that we have a family of transformations in some category A. These transformations can be used to select a class of profunctors that have simple transformation laws. Using a tensor product in a monoidal category to transform objects, in essence “multiplying” them, is just one example of such symmetry. A more general pattern involves a family of transformations f that is closed under composition and includes a unit. We specify a transformation law for profunctors:

class Profunctor p => Related p where
    α :: forall f a b. Trans f => p a b -> p (f a) (f b)

This requirement picks a class of profunctors that we call Related.

Why are profunctors relevant as carriers of symmetry? It’s because they generalize a relationship between objects. The profunctor transformation law essentially says that if two objects a and b are related through p then so are the transformed objects; and that there is a function α that relates the proofs of this relationship. This is in the spirit of profunctors as proof-relevant relations.

As an analogy, imagine that we are comparing people, and the transformation we’re interested in is aging. We notice that family relationships remain invariant under aging: if a is a sibling of b, they will remain siblings as they age. This is not true about other relationships, for instance being a boss of another person. But family bonds are not the only ones that survive the test of time. Another such relation is being older or younger than the other person.

Now imagine that you pick four people at random points in time and you find out that any time-invariant relation between two of them, a and b, also holds between s and t. You have to conclude that there is some connection between s and age-adjusted a, and between age-adjusted b and t. In other words there exists a time shift that transforms one pair to another.

Considering all possible relations from the class Related corresponds to taking the end over all profunctors from this class:

type Optic s t a b = forall p. Related p => 
    p a b -> p s t

The end is a generalization of a product, so it’s enough that one of the components is empty for the whole end to be empty. It means that, for a particular choice of the four types a, b, s, and t, we have to be able to construct a whole family of morphisms, one for every p. We have seen that this end exists only if the four types are connected in a very peculiar way — for instance, if a and b are somehow embedded in s and t.

In the simplest case, we may choose the four types to be related by the transformation:

s = f a
t = f b

For these types, we know that the end exists:

forall p. Related p => 
    p a b -> p s t

because there is a family of appropriate morphisms: our αf a b. In general, though, we can get away with a weaker connection.

Let’s look at an example of a family of transformations generated by pairing with arbitrary type c:

fc a = (c, a)

Profunctors that respect these transformations are Tambara modules over a cartesian product (or, in lens parlance, Strong profunctors). For the choice:

s = (c, a)
t = (c, b)

the end in question trivially exists. As we’ve seen, we can weaken these conditions. It’s enough that one-way (lax) transformations exist:

s -> (c, a)
t <- (c, b)

These morphisms assert that s can be split into a pair, and that t can be constructed from a pair (but not the other way around).

Other Optics

With the understanding that optics may be defined using a family of transformations, we can analyze another optic called the Grate. It’s based on the following family:

type Reader e a = e -> a

Notice that, unlike the case of Tambara modules, this family is parameterized by a contravariant parameter e.

We are interested in profunctors that transform under these transformations:

class Profunctor p => Closed p where
    closed :: p a b -> p (x -> a) (x -> b)

They let us form the optic:

type Grate s t a b = forall p. Closed p => p a b -> p s t
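
A small concrete example, as a sketch: a homogeneous pair is a container of a‘s indexed by Bool, so it admits a Grate (closed is the method of the Closed class defined above):

pairGrate :: Grate (a, a) (b, b) a b
pairGrate = dimap (\(x, y) i -> if i then y else x)  -- view s as a Bool-indexed container
                  (\f -> (f False, f True))          -- rebuild t from the indexed parts
          . closed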

It turns out that there is a functor on profunctors that freely generates Closed profunctors. We have the obvious comonad:

newtype Closure p a b = Closure (forall x. p (x -> a) (x -> b))

and its adjoint monad:

data Environment p u v where
  Environment :: ((c -> y) -> v) -> p x y -> (u -> (c -> x)) 
                 -> Environment p u v

or, in categorical notation:

(Φ p) u v = ∫ c x y A([c, y], v) ⊗ p x y ⊗ A(u, [c, x])

Using our construction, we apply this monad to the hom-profunctor:

(Φ (Aop⊗A)(<a, b>, -)) <s, t>
= ∫ c x y A([c, y], t) ⊗ (Aop⊗A)(<a, b>, <x, y>) ⊗ A(s, [c, x])
≅ ∫ c A([c, b], t) ⊗ A(s, [c, a])

Translating it back to Haskell, we get a representation of Grate as an existential type:

data Grate s t a b = forall c. Grate ((c -> b) -> t) (s -> (c -> a))

This is very similar to the existential representation of a lens or a prism. It has the intuitive interpretation that s can be thought of as a container of a‘s indexed by some hidden type c.

We can also “perform the integration” using the Yoneda lemma, internal-hom-adjunction, and the symmetry of the product:

∫ c A([c, b], t) ⊗ A(s, [c, a])
≅ ∫ c A([c, b], t) ⊗ A(s ⊗ c, a)
≅ ∫ c A([c, b], t) ⊗ A(c, [s, a])
≅ A([[s, a], b], t)

to get the more familiar form:

Grate s t a b ≅ ((s -> a) -> b) -> t


I find it fascinating that constructions that were first discovered in Haskell to make Haskell’s optics composable have their categorical counterparts. This was not at all obvious, if only because some of them use parametricity arguments. Parametricity is the property of the language, not easily translatable to category theory. Now we know that the profunctor formulation of isos, lenses, prisms, and grates follows from the Yoneda lemma. The work is not complete yet. I haven’t been able to derive the same formulation for traversals, which combine two different tensor products plus some monoidal constraints.


  1. Haskell lens library, Edward Kmett
  2. Distributors on a tensor category, D. Tambara
  3. Doubles for monoidal categories, Craig Pastro, Ross Street
  4. Profunctor optics, Modular data accessors,
    Matthew Pickering, Jeremy Gibbons, and Nicolas Wu
  5. CPS based functional references, Twan van Laarhoven
  6. Isomorphism lenses, Twan van Laarhoven
  7. Theorem for Second-Order Functionals, Mauro Jaskellioff and Russell O’Connor

This is part 28 of Categories for Programmers. Previously: Kan Extensions. See the Table of Contents.

A category is small if its objects form a set. But we know that there are things larger than sets. Famously, a set of all sets cannot be formed within the standard set theory (the Zermelo-Fraenkel theory, optionally augmented with the Axiom of Choice). So a category of all sets must be large. There are mathematical tricks like Grothendieck universes that can be used to define collections that go beyond sets. These tricks let us talk about large categories.

A category is locally small if morphisms between any two objects form a set. If they don’t form a set, we have to rethink a few definitions. In particular, what does it mean to compose morphisms if we can’t even pick them from a set? The solution is to bootstrap ourselves by replacing hom-sets, which are objects in Set, with objects from some other category V. The difference is that, in general, objects don’t have elements, so we are no longer allowed to talk about individual morphisms. We have to define all properties of an enriched category in terms of operations that can be performed on hom-objects as a whole. In order to do that, the category that provides hom-objects must have additional structure — it must be a monoidal category. If we call this monoidal category V, we can talk about a category C enriched over V.

Besides size reasons, we might be interested in generalizing hom-sets to something that has more structure than mere sets. For instance, a traditional category doesn’t have the notion of a distance between objects. Two objects are either connected by morphisms or not. All objects that are connected to a given object are its neighbors. Unlike in real life, in a category, a friend of a friend of a friend is as close to me as my bosom buddy. In a suitably enriched category, we can define distances between objects.

There is one more very practical reason to get some experience with enriched categories, and that’s because a very useful online source of categorical knowledge, the nLab, is written mostly in terms of enriched categories.

Why Monoidal Category?

When constructing an enriched category we have to keep in mind that we should be able to recover the usual definitions when we replace the monoidal category with Set and hom-objects with hom-sets. The best way to accomplish this is to start with the usual definitions and keep reformulating them in a point-free manner — that is, without naming elements of sets.

Let’s start with the definition of composition. Normally, it takes a pair of morphisms, one from C(b, c) and one from C(a, b) and maps it to a morphism from C(a, c). In other words it’s a mapping:

C(b, c) × C(a, b) -> C(a, c)

This is a function between sets — one of them being the cartesian product of two hom-sets. This formula can be easily generalized by replacing cartesian product with something more general. A categorical product would work, but we can go even further and use a completely general tensor product.

Next come the identity morphisms. Instead of picking individual elements from hom-sets, we can define them using functions from the singleton set 1:

ja :: 1 -> C(a, a)

Again, we could replace the singleton set with the terminal object, but we can go even further by replacing it with the unit i of the tensor product.

As you can see, objects taken from some monoidal category V are good candidates for hom-set replacement.

Monoidal Category

We’ve talked about monoidal categories before, but it’s worth restating the definition. A monoidal category defines a tensor product that is a bifunctor:

⊗ :: V × V -> V

We want the tensor product to be associative, but it’s enough to satisfy associativity up to natural isomorphism. This isomorphism is called the associator. Its components are:

αa b c :: (a ⊗ b) ⊗ c -> a ⊗ (b ⊗ c)

It must be natural in all three arguments.

A monoidal category must also define a special unit object i that serves as the unit of the tensor product; again, up to natural isomorphism. The two isomorphisms are called, respectively, the left and the right unitor, and their components are:

λa :: i ⊗ a -> a
ρa :: a ⊗ i -> a

The associator and the unitors must satisfy coherence conditions:

A monoidal category is called symmetric if there is a natural isomorphism with components:

γa b :: a ⊗ b -> b ⊗ a

whose “square is one”:

γb a ∘ γa b = ida⊗b

and which is consistent with the monoidal structure.

An interesting thing about monoidal categories is that you may be able to define the internal hom (the function object) as the right adjoint to the tensor product. You may recall that the standard definition of the function object, or the exponential, was through the right adjoint to the categorical product. A category in which such an object existed for any pair of objects was called cartesian closed. Here is the adjunction that defines the internal hom in a monoidal category:

V(a ⊗ b, c) ~ V(a, [b, c])

Following G. M. Kelly, I’m using the notation [b, c] for the internal hom. The counit of this adjunction is the natural transformation whose components are called evaluation morphisms:

εa b :: ([a, b] ⊗ a) -> b

Notice that, if the tensor product is not symmetric, we may define another internal hom, denoted by [[a, c]], using the following adjunction:

V(a ⊗ b, c) ~ V(b, [[a, c]])

A monoidal category in which both are defined is called biclosed. An example of a category that is not biclosed is the category of endofunctors in Set, with functor composition serving as tensor product. That’s the category we used to define monads.

Enriched Category

A category C enriched over a monoidal category V replaces hom-sets with hom-objects. To every pair of objects a and b in C we associate an object C(a, b) in V. We use the same notation for hom-objects as we used for hom-sets, with the understanding that they don’t contain morphisms. On the other hand, V is a regular (non-enriched) category with hom-sets and morphisms. So we are not entirely rid of sets — we just swept them under the rug.

Since we cannot talk about individual morphisms in C, composition of morphisms is replaced by a family of morphisms in V:

∘ :: C(b, c) ⊗ C(a, b) -> C(a, c)

Similarly, identity morphisms are replaced by a family of morphisms in V:

ja :: i -> C(a, a)

where i is the tensor unit in V.

Associativity of composition is defined in terms of the associator in V:

Unit laws are likewise expressed in terms of unitors:


A preorder is defined as a thin category, one in which every hom-set is either empty or a singleton. We interpret a non-empty set C(a, b) as the proof that a is less than or equal to b. Such a category can be interpreted as enriched over a very simple monoidal category that contains just two objects, 0 and 1 (sometimes called False and True). Besides the mandatory identity morphisms, this category has a single morphism going from 0 to 1, let’s call it 0->1. A simple monoidal structure can be established in it, with the tensor product modeling the simple arithmetic of 0 and 1 (i.e., the only non-zero product is 1⊗1). The identity object in this category is 1. This is a strict monoidal category, that is, the associator and the unitors are identity morphisms.

Since in a preorder the hom-set is either empty or a singleton, we can easily replace it with a hom-object from our tiny category. The enriched preorder C has a hom-object C(a, b) for any pair of objects a and b. If a is less than or equal to b, this object is 1; otherwise it’s 0.

Let’s have a look at composition. The tensor product of any two objects is 0, unless both of them are 1, in which case it’s 1. If it’s 0, then we have two options for the composition morphism: it could be either id0 or 0->1. But if it’s 1, then the only option is id1. Translating this back to relations, this says that if a <= b and b <= c then a <= c, which is exactly the transitivity law we need.

What about the identity? It’s a morphism from 1 to C(a, a). There is only one morphism going from 1, and that’s the identity id1, so C(a, a) must be 1. It means that a <= a, which is the reflexivity law for a preorder. So both transitivity and reflexivity are automatically enforced, if we implement a preorder as an enriched category.

Metric Spaces

An interesting example is due to William Lawvere. He noticed that metric spaces can be defined using enriched categories. A metric space defines a distance between any two objects. This distance is a non-negative real number. It’s convenient to include infinity as a possible value. If the distance is infinite, there is no way of getting from the starting object to the target object.

There are some obvious properties that have to be satisfied by distances. One of them is that the distance from an object to itself must be zero. The other is the triangle inequality: the direct distance is no larger than the sum of distances with intermediate stops. We don’t require the distance to be symmetric, which might seem weird at first but, as Lawvere explained, you can imagine that in one direction you’re walking uphill, while in the other you’re going downhill. In any case, symmetry may be imposed later as an additional constraint.

So how can a metric space be cast into a categorical language? We have to construct a category in which hom-objects are distances. Mind you, distances are not morphisms but hom-objects. How can a hom-object be a number? Only if we can construct a monoidal category V in which these numbers are objects. Non-negative real numbers (plus infinity) form a total order, so they can be treated as a thin category. A morphism between two such numbers x and y exists if and only if x >= y (note: this is the opposite direction to the one traditionally used in the definition of a preorder). The monoidal structure is given by addition, with zero serving as the unit object. In other words, the tensor product of two numbers is their sum.
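
Here is a hedged Haskell sketch of this base category (all names are illustrative): objects are extended non-negative reals, there is a (unique) morphism from x to y exactly when x >= y, and the tensor product is addition with 0 as the unit.

data Cost = Finite Double | Infinity
  deriving (Eq, Show)

instance Ord Cost where
  compare Infinity   Infinity   = EQ
  compare Infinity   _          = GT
  compare _          Infinity   = LT
  compare (Finite x) (Finite y) = compare x y

-- is there a morphism from x to y?
hom :: Cost -> Cost -> Bool
hom x y = x >= y

-- the tensor product is addition; its unit is zero
tensor :: Cost -> Cost -> Cost
tensor (Finite x) (Finite y) = Finite (x + y)
tensor _          _          = Infinity

unit :: Cost
unit = Finite 0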

A metric space is a category enriched over such a monoidal category. A hom-object C(a, b) from object a to b is a non-negative (possibly infinite) number that we will call the distance from a to b. Let’s see what we get for identity and composition in such a category.

By our definitions, a morphism from the tensorial unit, which is the number zero, to a hom-object C(a, a) is the relation:

0 >= C(a, a)

Since C(a, a) is a non-negative number, this condition tells us that the distance from a to a is always zero. Check!

Now let’s talk about composition. We start with the tensor product of two abutting hom-objects, C(b, c)⊗C(a, b). We have defined the tensor product as the sum of the two distances. Composition is a morphism in V from this product to C(a, c). A morphism in V is defined as the greater-or-equal relation. In other words, the sum of distances from a to b and from b to c is greater than or equal to the distance from a to c. But that’s just the standard triangle inequality. Check!

By re-casting the metric space in terms of an enriched category, we get the triangle inequality and the zero self-distance “for free.”

Enriched Functors

The definition of a functor involves the mapping of morphisms. In the enriched setting, we don’t have the notion of individual morphisms, so we have to deal with hom-objects in bulk. Hom-objects are objects in a monoidal category V, and we have morphisms between them at our disposal. It therefore makes sense to define enriched functors between categories when they are enriched over the same monoidal category V. We can then use morphisms in V to map the hom-objects between two enriched categories.

An enriched functor F between two categories C and D, besides mapping objects to objects, also assigns, to every pair of objects in C, a morphism in V:

Fa b :: C(a, b) -> D(F a, F b)

A functor is a structure-preserving mapping. For regular functors it meant preserving composition and identity. In the enriched setting, the preservation of composition means that the following diagram commutes:

The preservation of identity is replaced by the preservation of the morphisms in V that “select” the identity:

Self Enrichment

A closed symmetric monoidal category may be self-enriched by replacing hom-sets with internal homs (see the definition above). To make this work, we have to define the composition law for internal homs. In other words, we have to implement a morphism with the following signature:

[b, c] ⊗ [a, b] -> [a, c]

This is not much different from any other programming task, except that, in category theory, we usually use point free implementations. We start by specifying the set whose element it’s supposed to be. In this case, it’s a member of the hom-set:

V([b, c] ⊗ [a, b], [a, c])

This hom-set is isomorphic to:

V(([b, c] ⊗ [a, b]) ⊗ a, c)

I just used the adjunction that defined the internal hom [a, c]. If we can build a morphism in this new set, the adjunction will point us at the morphism in the original set, which we can then use as composition. We construct this morphism by composing several morphisms that are at our disposal. To begin with, we can use the associator α[b, c] [a, b] a to reassociate the expression on the left:

([b, c] ⊗ [a, b]) ⊗ a -> [b, c] ⊗ ([a, b] ⊗ a)

We can follow it with the co-unit of the adjunction εa b:

[b, c] ⊗ ([a, b] ⊗ a) -> [b, c] ⊗ b

And use the counit εb c again to get to c. We have thus constructed a morphism:

εb c . (id[b, c] ⊗ εa b) . α[b, c] [a, b] a

that is an element of the hom-set:

V(([b, c] ⊗ [a, b]) ⊗ a, c)

The adjunction will give us the composition law we were looking for.

Similarly, the identity:

ja :: i -> [a, a]

is a member of the following hom-set:

V(i, [a, a])

which is isomorphic, through adjunction, to:

 V(i ⊗ a, a)

We know that this hom-set contains the left unitor λa. We can define ja as its image under the adjunction.

A practical example of self-enrichment is the category Set that serves as the prototype for types in programming languages. We’ve seen before that it’s a closed monoidal category with respect to cartesian product. In Set, the hom-set between any two sets is itself a set, so it’s an object in Set. We know that it’s isomorphic to the exponential set, so the external and the internal homs are equivalent. Now we also know that, through self-enrichment, we can use the exponential set as the hom-object and express composition in terms of cartesian products of exponential objects.
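
A minimal sketch in Haskell terms: composition becomes a single arrow out of a product of internal homs, and the identity is selected by an arrow from the unit type.

-- composition as a morphism from a product of internal homs
compose :: (b -> c, a -> b) -> (a -> c)
compose (g, f) = g . f

-- j_a: the identity "selected" by a morphism from the unit
jId :: () -> (a -> a)
jId () = id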

Relation to 2-Categories

I talked about 2-categories in the context of Cat, the category of (small) categories. The morphisms between categories are functors, but there is an additional structure: natural transformations between functors. In a 2-category, the objects are often called zero-cells; morphisms, 1-cells; and morphisms between morphisms, 2-cells. In Cat the 0-cells are categories, 1-cells are functors, and 2-cells are natural transformations.

But notice that functors between two categories form a category too; so, in Cat, we really have a hom-category rather than a hom-set. It turns out that, just like Set can be treated as a category enriched over Set, Cat can be treated as a category enriched over Cat. Even more generally, just like every category can be treated as enriched over Set, every 2-category can be considered enriched over Cat.

Next: Topoi.

This is part 27 of Categories for Programmers. Previously: Ends and Coends. See the Table of Contents.

So far we’ve been mostly working with a single category or a pair of categories. In some cases that was a little too constraining. For instance, when defining a limit in a category C, we introduced an index category I as the template for the pattern that would form the basis for our cones. It would have made sense to introduce another category, a trivial one, to serve as a template for the apex of the cone. Instead we used the constant functor Δc from I to C.

It’s time to fix this awkwardness. Let’s define a limit using three categories. Let’s start with the functor D from the index category I to C. This is the functor that selects the base of the cone — the diagram functor.

The new addition is the category 1 that contains a single object (and a single identity morphism). There is only one possible functor K from I to this category. It maps all objects to the only object in 1, and all morphisms to the identity morphism. Any functor F from 1 to C picks a potential apex for our cone.

A cone is a natural transformation ε from F ∘ K to D. Notice that F ∘ K does exactly the same thing as our original Δc. The following diagram shows this transformation.

We can now define a universal property that picks the “best” such functor F. This F will map 1 to the object that is the limit of D in C, and the natural transformation ε from F ∘ K to D will provide the corresponding projections. This universal functor is called the right Kan extension of D along K and is denoted by RanKD.

Let’s formulate the universal property. Suppose we have another cone — that is another functor F' together with a natural transformation ε' from F' ∘ K to D.

If the Kan extension F = RanKD exists, there must be a unique natural transformation σ from F' to it, such that ε' factorizes through ε, that is:

ε' = ε . (σ ∘ K)

Here, σ ∘ K is the horizontal composition of two natural transformations (one of them being the identity natural transformation on K). This transformation is then vertically composed with ε.

In components, when acting on an object i in I, we get:

ε'i = εi ∘ σ K i

In our case, σ has only one component corresponding to the single object of 1. So, indeed, this is the unique morphism from the apex of the cone defined by F' to the apex of the universal cone defined by RanKD. The commuting conditions are exactly the ones required by the definition of a limit.

But, importantly, we are free to replace the trivial category 1 with an arbitrary category A, and the definition of the right Kan extension remains valid.

Right Kan Extension

The right Kan extension of the functor D::I->C along the functor K::I->A is a functor F::A->C (denoted RanKD) together with a natural transformation

ε :: F ∘ K -> D

such that for any other functor F'::A->C and a natural transformation

ε' :: F' ∘ K -> D

there is a unique natural transformation

σ :: F' -> F

that factorizes ε':

ε' = ε . (σ ∘ K)

This is quite a mouthful, but it can be visualized in this nice diagram:

An interesting way of looking at this is to notice that, in a sense, the Kan extension acts like the inverse of “functor multiplication.” Some authors go so far as to use the notation D/K for RanKD. Indeed, in this notation, the definition of ε, which is also called the counit of the right Kan extension, looks like simple cancellation:

ε :: D/K ∘ K -> D

There is another interpretation of Kan extensions. Consider that the functor K embeds the category I inside A. In the simplest case I could just be a subcategory of A. We have a functor D that maps I to C. Can we extend D to a functor F that is defined on the whole of A? Ideally, such an extension would make the composition F ∘ K be isomorphic to D. In other words, F would be extending the domain of D to A. But a full-blown isomorphism is usually too much to ask, and we can do with just half of it, namely a one-way natural transformation ε from F ∘ K to D. (The left Kan extension picks the other direction.)

Of course, the embedding picture breaks down when the functor K is not injective on objects or not faithful on hom-sets, as in the example of the limit. In that case, the Kan extension tries its best to extrapolate the lost information.

Kan Extension as Adjunction

Now suppose that the right Kan extension exists for any D (and a fixed K). In that case RanK - (with the dash replacing D) is a functor from the functor category [I, C] to the functor category [A, C]. It turns out that this functor is the right adjoint to the precomposition functor -∘K. The latter maps functors in [A, C] to functors in [I, C]. The adjunction is:

[I, C](F' ∘ K, D) ≅ [A, C](F', RanKD)

It is just a restatement of the fact that to every natural transformation we called ε' corresponds a unique natural transformation we called σ.

Furthermore, if we choose the category I to be the same as C, we can substitute the identity functor IC for D. We get the following identity:

[C, C](F' ∘ K, IC) ≅ [A, C](F', RanKIC)

We can now choose F' to be the same as RanKIC. In that case the right hand side contains the identity natural transformation and, corresponding to it, the left hand side gives us the following natural transformation:

ε :: RanKIC ∘ K -> IC

This looks very much like the counit of an adjunction:

RanKIC ⊣ K

Indeed, the right Kan extension of the identity functor along a functor K can be used to calculate the left adjoint of K. For that, one more condition is necessary: the right Kan extension must be preserved by the functor K. The preservation of the extension means that, if we calculate the Kan extension of the functor precomposed with K, we should get the same result as precomposing the original Kan extension with K. In our case, this condition simplifies to:

K ∘ RanKIC ≅ RanKK

Notice that, using the division-by-K notation, the adjunction can be written as:

I/K ⊣ K

which confirms our intuition that an adjunction describes some kind of an inverse. The preservation condition becomes:

K ∘ I/K ≅ K/K

The right Kan extension of a functor along itself, K/K, is called a codensity monad.
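
Anticipating the Haskell encoding of the right Kan extension later in this post, K/K translates to the following type. Up to naming, this is the Codensity type from the kan-extensions package; the instances below are only a sketch, showing that it is a monad for any type constructor k, which is where the name codensity monad comes from:

-- requires the RankNTypes extension
newtype Codensity k a = Codensity
  { runCodensity :: forall i. (a -> k i) -> k i }

instance Functor (Codensity k) where
  fmap f (Codensity c) = Codensity (\ret -> c (ret . f))

instance Applicative (Codensity k) where
  pure a = Codensity (\ret -> ret a)
  Codensity cf <*> Codensity ca = Codensity (\ret -> cf (\f -> ca (ret . f)))

instance Monad (Codensity k) where
  Codensity c >>= f = Codensity (\ret -> c (\a -> runCodensity (f a) ret))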

The adjunction formula is an important result because, as we’ll see soon, we can calculate Kan extensions using ends (coends), thus giving us practical means of finding right (and left) adjoints.

Left Kan Extension

There is a dual construction that gives us the left Kan extension. To build some intuition, we can start with the definition of a colimit and restructure it to use the singleton category 1. We build a cocone by using the functor D::I->C to form its base, and the functor F::1->C to select its apex.

The sides of the cocone, the injections, are components of a natural transformation η from D to F ∘ K.

The colimit is the universal cocone. So for any other functor F' and a natural transformation

η' :: D -> F'∘ K

there is a unique natural transformation σ from F to F'

such that:

η' = (σ ∘ K) . η

This is illustrated in the following diagram:

When we replace the singleton category 1 with A, this definition naturally generalizes to the definition of the left Kan extension, denoted by LanKD.

The natural transformation:

η :: D -> LanKD ∘ K

is called the unit of the left Kan extension.

As before, we can recast the one-to-one correspondence between natural transformations:

η' = (σ ∘ K) . η

in terms of the adjunction:

[A, C](LanKD, F') ≅ [I, C](D, F' ∘ K)

In other words, the left Kan extension is the left adjoint, and the right Kan extension is the right adjoint, of precomposition with K.

Just like the right Kan extension of the identity functor can be used to calculate the left adjoint of K, the left Kan extension of the identity functor turns out to be the right adjoint of K (with η being the unit of the adjunction):

K ⊣ LanKIC

Combining the two results, we get:

RanKIC ⊣ K ⊣ LanKIC

Kan Extensions as Ends

The real power of Kan extensions comes from the fact that they can be calculated using ends (and coends). For simplicity, we’ll restrict our attention to the case where the target category C is Set, but the formulas can be extended to any category.

Let’s revisit the idea that a Kan extension can be used to extend the action of a functor outside of its original domain. Suppose that K embeds I inside A. Functor D maps I to Set. We could just say that for any object a in the image of K, that is a = K i, the extended functor maps a to D i. The problem is, what to do with those objects in A that are outside of the image of K? The idea is that every such object is potentially connected through lots of morphisms to every object in the image of K. A functor must preserve these morphisms. The totality of morphisms from an object a to the image of K is characterized by the hom-functor:

A(a, K -)

Notice that this hom-functor is a composition of two functors:

A(a, K -) = A(a, -) ∘ K

Recall that the right Kan extension is the right adjoint of precomposition with K:

[I, Set](F' ∘ K, D) ≅ [A, Set](F', RanKD)

Let’s see what happens when we replace F' with the hom functor:

[I, Set](A(a, -) ∘ K, D) ≅ [A, Set](A(a, -), RanKD)

and then inline the composition:

[I, Set](A(a, K -), D) ≅ [A, Set](A(a, -), RanKD)

The right hand side can be reduced using the Yoneda lemma:

[I, Set](A(a, K -), D) ≅ RanKD a

We can now rewrite the set of natural transformations as the end to get this very convenient formula for the right Kan extension:

RanKD a ≅ ∫i Set(A(a, K i), D i)

There is an analogous formula for the left Kan extension in terms of a coend:

LanKD a = ∫i A(K i, a) × D i

To see that this is the case, we’ll show that this is indeed the left adjoint to functor composition:

[A, Set](LanKD, F') ≅ [I, Set](D, F'∘ K)

Let’s substitute our formula in the left hand side:

[A, Set](∫i A(K i, -) × D i, F')

This is a set of natural transformations, so it can be rewritten as an end:

∫a Set(∫i A(K i, a) × D i, F'a)

Using the continuity of the hom-functor, we can replace the coend with the end:

∫a ∫i Set(A(K i, a) × D i, F'a)

We can use the product-exponential adjunction:

∫a ∫i Set(A(K i, a), (F'a)^(D i))

The exponential is isomorphic to the corresponding hom-set:

∫a ∫i Set(A(K i, a), Set(D i, F'a))

There is a theorem called the Fubini theorem that allows us to swap the two ends:

∫i ∫a Set(A(K i, a), Set(D i, F'a))

The inner end represents the set of natural transformations between two functors, so we can use the Yoneda lemma:

∫i Set(D i, F'(K i))

This is indeed the set of natural transformations that forms the right hand side of the adjunction we set out to prove:

[I, Set](D, F'∘ K)

These kinds of calculations using ends, coends, and the Yoneda lemma are pretty typical for the “calculus” of ends.

Kan Extensions in Haskell

The end/coend formulas for Kan extensions can be easily translated to Haskell. Let’s start with the right extension:

RanKD a ≅ ∫i Set(A(a, K i), D i)

We replace the end with the universal quantifier, and hom-sets with function types:

newtype Ran k d a = Ran (forall i. (a -> k i) -> d i)

Looking at this definition, it’s clear that Ran must contain a value of type a to which the function can be applied, and a natural transformation between the two functors k and d. For instance, suppose that k is the tree functor, and d is the list functor, and you were given a Ran Tree [] String. If you pass it a function:

f :: String -> Tree Int

you’ll get back a list of Int, and so on. The right Kan extension will use your function to produce a tree and then repackage it into a list. For instance, you may pass it a parser that generates a parsing tree from a string, and you’ll get a list that corresponds to the depth-first traversal of this tree.
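
To make this concrete, here is a minimal sketch using the Ran defined above; the Tree type, flatten, and ranExample are hypothetical illustrations, not part of any library:

data Tree a = Leaf a | Node (Tree a) (Tree a)

-- depth-first traversal of a tree into a list
flatten :: Tree a -> [a]
flatten (Leaf a)   = [a]
flatten (Node l r) = flatten l ++ flatten r

-- stores a String; given any String -> Tree i, it builds the tree
-- and repackages it as a list
ranExample :: String -> Ran Tree [] String
ranExample s = Ran (\f -> flatten (f s))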

The right Kan extension can be used to calculate the left adjoint of a given functor by replacing the functor d with the identity functor. This leads to the left adjoint of a functor k being represented by the set of polymorphic functions of the type:

forall i. (a -> k i) -> i

Suppose that k is the forgetful functor from the category of monoids. The universal quantifier then goes over all monoids. Of course, in Haskell we cannot express monoidal laws, but the following is a decent approximation of the resulting free functor (the forgetful functor k is an identity on objects):

type Lst a = forall i. Monoid i => (a -> i) -> i

As expected, it generates free monoids, or Haskell lists:

toLst :: [a] -> Lst a
toLst as = \f -> foldMap f as
fromLst :: Lst a -> [a]
fromLst f = f (\a -> [a])
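
As a quick sanity check, the round trip recovers the original list (a hypothetical usage example):

roundTrip :: [Int]
roundTrip = fromLst (toLst [1, 2, 3])  -- [1,2,3]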

The left Kan extension is a coend:

LanKD a = ∫i A(K i, a) × D i

so it translates to an existential quantifier. Symbolically:

Lan k d a = exists i. (k i -> a, d i)

This can be encoded in Haskell using GADTs, or using a universally quantified data constructor:

data Lan k d a = forall i. Lan (k i -> a) (d i)

The interpretation of this data structure is that it contains a function that takes a container of some unspecified i's and produces an a, together with a container of those i's. Since you have no idea what the i's are, the only thing you can do with this data structure is to retrieve the container of i's, repack it into the container defined by the functor k using a natural transformation, and call the function to obtain the a. For instance, if d is a tree and k is a list, you can serialize the tree, call the function with the resulting list, and obtain an a.
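
In code, this recipe might look as follows; applyLan is a hypothetical helper, not part of any library:

-- repackage the d-container into a k-container with a natural
-- transformation, then apply the stored function
applyLan :: (forall i. d i -> k i) -> Lan k d a -> a
applyLan nat (Lan f di) = f (nat di)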

The left Kan extension can be used to calculate the right adjoint of a functor. We know that the right adjoint of the product functor is the exponential, so let’s try to implement it using the Kan extension:

type Exp a b = Lan ((,) a) I b
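
Here I stands for the identity functor. If it is not already in scope from an earlier installment, the following definition (my assumption) will do:

newtype I a = I a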

This is indeed isomorphic to the function type, as witnessed by the following pair of functions:

toExp :: (a -> b) -> Exp a b
toExp f = Lan (f . fst) (I ())

fromExp :: Exp a b -> (a -> b)
fromExp (Lan f (I x)) = \a -> f (a, x)

Notice that, as described earlier in the general case, we performed the following steps: (1) retrieved the container of x (here, it’s just a trivial identity container), and the function f, (2) repackaged the container using the natural transformation between the identity functor and the pair functor, and (3) called the function f.
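
Coming back to the general Lan, the adjunction we derived earlier, [A, Set](LanKD, F') ≅ [I, Set](D, F' ∘ K), can also be witnessed directly in this encoding. The names toLanAdj and fromLanAdj are mine, and this is only a sketch; it assumes f' is a Functor where fmap is needed:

toLanAdj :: (forall a. Lan k d a -> f' a) -> (forall i. d i -> f' (k i))
toLanAdj nat di = nat (Lan id di)

fromLanAdj :: Functor f' => (forall i. d i -> f' (k i)) -> (forall a. Lan k d a -> f' a)
fromLanAdj nat (Lan f di) = fmap f (nat di)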

Free Functor

An interesting application of Kan extensions is the construction of a free functor. It’s the solution to the following practical problem: suppose you have a type constructor — that is a mapping of objects. Is it possible to define a functor based on this type constructor? In other words, can we define a mapping of morphisms that would extend this type constructor to a full-blown endofunctor?

The key observation is that a type constructor can be described as a functor whose domain is a discrete category. A discrete category has no morphisms other than the identity morphisms. Given a category C, we can always construct a discrete category |C| by simply discarding all non-identity morphisms. A functor F from |C| to C is then a simple mapping of objects, or what we call a type constructor in Haskell. There is also a canonical functor J that injects |C| into C: it’s the identity on objects (and on identity morphisms). The left Kan extension of F along J, if it exists, is then a functor from C to C:

LanJ F a = ∫i C(J i, a) × F i

It’s called a free functor based on F.

In Haskell, we would write it as:

data FreeF f a = forall i. FMap (i -> a) (f i)

Indeed, for any type constructor f, FreeF f is a functor:

instance Functor (FreeF f) where
  fmap g (FMap h fi) = FMap (g . h) fi

As you can see, the free functor fakes the lifting of a function by recording both the function and its argument. It accumulates the lifted functions by recording their composition. Functor rules are automatically satisfied. This construction was used in a paper Freer Monads, More Extensible Effects.
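
For instance, a type constructor that is not a covariant functor on its own, such as a predicate, can still be wrapped in FreeF. Predicate and the example values below are hypothetical illustrations:

newtype Predicate a = Predicate (a -> Bool)

evens :: FreeF Predicate Int
evens = FMap id (Predicate even)

-- fmap only records the composed function; the predicate is untouched
shifted :: FreeF Predicate Int
shifted = fmap (+ 1) evens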

Alternatively, we can use the right Kan extension for the same purpose:

newtype FreeF f a = FreeF (forall i. (a -> i) -> f i)

It’s easy to check that this is indeed a functor:

instance Functor (FreeF f) where
  fmap g (FreeF r) = FreeF (\bi -> r (bi . g))

Next: Enriched Categories.
