Category Theory

I realize that we might be getting away from programming and diving into hard-core math. But you never know what the next big revolution in programming might bring and what kind of math might be necessary to understand it. There are some very interesting ideas going around, like functional reactive programming with its continuous time, the extension of Haskell’s type system with dependent types, or the exploration of homotopy type theory in programming.

So far I’ve been casually identifying types with sets of values. This is not strictly correct, because such an approach doesn’t take into account the fact that, in programming, we compute values, and the computation is a process that takes time and, in extreme cases, might not terminate. Divergent computations are part of every Turing-complete language.

There are also foundational reasons why set theory might not be the best fit as the basis for computer science or even math itself. A good analogy is that of set theory being the assembly language that is tied to a particular architecture. If you want to run your math on different architectures, you have to use more general tools.

One possibility is to use spaces in place of sets. Spaces come with more structure, and may be defined without recourse to sets. One thing usually associated with spaces is topology, which is necessary to define things like continuity. And the conventional approach to topology is, you guessed it, through set theory. In particular, the notion of a subset is central to topology. Not surprisingly, category theorists generalized this idea to categories other than Set. The type of category that has just the right properties to serve as a replacement for set theory is called a topos (plural: topoi), and it provides, among other things, a generalized notion of a subset.

Subobject Classifier

Let’s start by trying to express the idea of a subset using functions rather than elements. Any function `f` from some set `a` to `b` defines a subset of `b`: the image of `a` under `f`. But there are many functions that define the same subset. We need to be more specific. To begin with, we might focus on functions that are injective — ones that don’t smush multiple elements into one. Injective functions “inject” one set into another. For finite sets, you may visualize injective functions as parallel arrows connecting elements of one set to elements of another. Of course, the first set cannot be larger than the second set, or the arrows would necessarily converge. There is still some ambiguity left: there may be another set `a'` and another injective function `f'` from that set to `b` that picks the same subset. But you can easily convince yourself that such a set would have to be isomorphic to `a`. We can use this fact to define a subset as a family of injective functions that are related by isomorphisms of their domains. More precisely, we say that two injective functions:

```
f :: a -> b
f' :: a' -> b
```

are equivalent if there is an isomorphism:

`h :: a -> a'`

such that:

`f = f' . h`

Such a family of equivalent injections defines a subset of `b`.

This definition can be lifted to an arbitrary category if we replace injective functions with monomorphisms. Just to remind you, a monomorphism `m` from `a` to `b` is defined by its universal property. For any object `c` and any pair of morphisms:

```
g :: c -> a
g' :: c -> a
```

such that:

`m . g = m . g'`

it must be that `g = g'`.

On sets, this definition is easier to understand if we consider what it would mean for a function `m` not to be a monomorphism. It would map two different elements of `a` to a single element of `b`. We could then find two functions `g` and `g'` that differ only at those two elements. The postcomposition with `m` would then mask this difference.
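
This failure mode can be sketched in Haskell (the names `m`, `g`, and `g'` are mine, chosen for illustration):

```haskell
-- `m` is not injective: it smushes 0 and 2 into the same element.
m :: Int -> Int
m x = x `mod` 2

-- g and g' differ only at the elements that m identifies...
g, g' :: Bool -> Int
g  b = if b then 0 else 1
g' b = if b then 2 else 1
-- ...so m . g and m . g' are equal everywhere even though g /= g',
-- witnessing that m is not a monomorphism in Set.
```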

There is another way of defining a subset: using a single function called the characteristic function. It’s a function `χ` from the set `b` to a two-element set `Ω`. One element of this set is designated as “true” and the other as “false.” This function assigns “true” to those elements of `b` that are members of the subset, and “false” to those that aren’t.

It remains to specify what it means to designate an element of `Ω` as “true.” We can use the standard trick: use a function from a singleton set to `Ω`. We’ll call this function `true`:

`true :: 1 -> Ω`

These definitions can be combined in such a way that they not only define what a subobject is, but also define the special object `Ω` without talking about elements. The idea is that we want the morphism `true` to represent a “generic” subobject. In Set, it picks a single-element subset from a two-element set `Ω`. This is as generic as it gets. It’s clearly a proper subset, because `Ω` has one more element that’s not in that subset.

In a more general setting, we define `true` to be a monomorphism from the terminal object to the classifying object `Ω`. But we have to define the classifying object. We need a universal property that links this object to the characteristic function. It turns out that, in Set, the pullback of `true` along the characteristic function `χ` defines both the subset `a` and the injective function that embeds it in `b`. Here’s the pullback diagram:

Let’s analyze this diagram. The pullback equation is:

`true . unit = χ . f`

The function `true . unit` maps every element of `a` to “true.” Therefore `f` must map all elements of `a` to those elements of `b` for which `χ` is “true.” These are, by definition, the elements of the subset that is specified by the characteristic function `χ`. So the image of `f` is indeed the subset in question. The universality of the pullback guarantees that `f` is injective.
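
In the finite fragment of Set that we can model directly in Haskell, this pullback can be sketched as follows (`Omega`, `chi`, and `pullbackOfTrue` are illustrative names, not part of any library):

```haskell
-- The classifying object of Set and the generic subobject:
type Omega = Bool

true :: () -> Omega
true () = True

-- A characteristic function on b = Int, selecting the even numbers:
chi :: Int -> Omega
chi = even

-- The pullback of `true` along `chi`: the elements of b whose
-- χ-image agrees with the image of `true`, i.e. the subset itself.
pullbackOfTrue :: [Int] -> [Int]
pullbackOfTrue b = [ x | x <- b, chi x == true () ]
```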

This pullback diagram can be used to define the classifying object in categories other than Set. Such a category must have a terminal object, which will let us define the monomorphism `true`. It must also have pullbacks — the actual requirement is that it must have all finite limits (a pullback is an example of a finite limit). Under those assumptions, we define the classifying object `Ω` by the property that, for every monomorphism `f` there is a unique morphism `χ` that completes the pullback diagram.

Let’s analyze the last statement. When we construct a pullback, we are given three objects `Ω`, `b` and `1`; and two morphisms, `true` and `χ`. The existence of a pullback means that we can find the best such object `a`, equipped with two morphisms `f` and `unit` (the latter is uniquely determined by the definition of the terminal object), that make the diagram commute.

Here we are solving a different system of equations. We are solving for `Ω` and `true` while varying both `a` and `b`. For a given `a` and `b` there may or may not be a monomorphism `f::a->b`. But if there is one, we want it to be a pullback of some `χ`. Moreover, we want this `χ` to be uniquely determined by `f`.

We can’t say that there is a one-to-one correspondence between monomorphisms `f` and characteristic functions `χ`, because a pullback is only unique up to isomorphism. But remember our earlier definition of a subset as a family of equivalent injections. We can generalize it by defining a subobject of `b` as a family of equivalent monomorphisms to `b`. This family of monomorphisms is in one-to-one correspondence with the family of equivalent pullbacks of our diagram.

We can thus define a set of subobjects of `b`, `Sub(b)`, as a family of monomorphisms, and see that it is isomorphic to the set of morphisms from `b` to `Ω`:

`Sub(b) ≅ C(b, Ω)`

This happens to be a natural isomorphism of two functors. In other words, `Sub(-)` is a representable (contravariant) functor whose representation is the object Ω.

Topos

A topos is a category that:

1. Is cartesian closed: It has all products, the terminal object, and exponentials (defined as right adjoints to products),
2. Has limits for all finite diagrams,
3. Has a subobject classifier `Ω`.

This set of properties makes a topos a shoo-in for Set in most applications. It also has additional properties that follow from its definition. For instance, a topos has all finite colimits, including the initial object.

It would be tempting to define the subobject classifier as a coproduct (sum) of two copies of the terminal object (that’s what it is in Set), but we want to be more general than that. Topoi in which this is true are called Boolean.

Topoi and Logic

In set theory, a characteristic function may be interpreted as defining a property of the elements of a set — a predicate that is true for some elements and false for others. The predicate `isEven` selects a subset of even numbers from the set of natural numbers. In a topos, we can generalize the idea of a predicate to be a morphism from object `a` to `Ω`. This is why `Ω` is sometimes called the truth object.

Predicates are the building blocks of logic. A topos contains all the necessary instrumentation to study logic. It has products that correspond to logical conjunctions (logical and), coproducts for disjunctions (logical or), and exponentials for implications. All standard axioms of logic hold in a topos except for the law of excluded middle (or, equivalently, double negation elimination). That’s why the logic of a topos corresponds to constructive or intuitionistic logic.

Intuitionistic logic has been steadily gaining ground, finding unexpected support from computer science. The classical notion of excluded middle is based on the belief that there is absolute truth: Any statement is either true or false or, as Ancient Romans would say, tertium non datur (there is no third option). But the only way we can know whether something is true or false is if we can prove or disprove it. A proof is a process, a computation — and we know that computations take time and resources. In some cases, they may never terminate. It doesn’t make sense to claim that a statement is true if we cannot prove it in a finite amount of time. A topos with its more nuanced truth object provides a more general framework for modeling interesting logics.

Next: Lawvere Theories.

Challenges

1. Show that the function `f` that is the pullback of `true` along the characteristic function must be injective.

Abstract: I present a uniform derivation of profunctor optics (isos, lenses, prisms, and grates) based on the Yoneda lemma in the (enriched) profunctor category. In particular, lenses and prisms correspond to Tambara modules with the cartesian and cocartesian tensor product.

This blog post is the result of a collaboration between many people. The categorical profunctor picture solidified after long discussions with Edward Kmett. A lot of the theory was developed in exchanges on the Lens IRC channel between Russell O’Connor, Edward Kmett and James Deikun. They came up with the idea to use the `Pastro` functor to freely generate Tambara modules, which was the missing piece that completed the picture.

My interest in lenses started a long time ago, when I first made the connection between the universal quantification over functors in the van Laarhoven representation of lenses and the Yoneda lemma. Since I was still learning the basics of category theory, it took me a long time to find the right language to make the formal derivation. Unbeknownst to me, Mauro Jaskelioff and Russell O’Connor independently had the same idea, and they published a paper about it soon after I published my blog. But even though this solved the problem of lenses, prisms still seemed out of reach of the Yoneda lemma. Prisms require a more general formulation using universal quantification over profunctors. I was able to put a dent in it by deriving `Iso`s from profunctor Yoneda, but then I was stuck again. I shared my ideas with Russell, who reached for help on the IRC channel, and a Haskell proof of concept was quickly established. Two years later, after a brainstorm with Edward, I was finally able to gather all these ideas in one place and give them a little categorical polish.

Yoneda Lemma

The starting point is the Yoneda lemma, which states that the set of natural transformations between the hom-functor `C(a, -)` in the category `C` and an arbitrary functor `f` from `C` to `Set` is (naturally) isomorphic with the set `f a`:

`[C, Set](C(a, -), f) ≅ f a`

Here, `f` is a member of the functor category `[C, Set]`, where natural transformations form the hom-sets.

The set of natural transformations may be represented as an end, leading to the following formulation of the Yoneda lemma:

`∫x Set(C(a, x), f x) ≅ f a`

This notation makes the object `x` explicit, which is often very convenient. It can be easily translated to Haskell, by replacing the end with the universal quantifier. We get:

`forall x. (a -> x) -> f x ≅ f a`
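
Both directions of this isomorphism are easy to write down; here is a minimal sketch (the function names are mine):

```haskell
{-# LANGUAGE RankNTypes #-}

-- One direction applies the polymorphic function to the identity;
-- the other is just fmap. Parametricity makes them mutual inverses.
fromYoneda :: (forall x. (a -> x) -> f x) -> f a
fromYoneda g = g id

toYoneda :: Functor f => f a -> (forall x. (a -> x) -> f x)
toYoneda fa = \h -> fmap h fa
```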

A special case of the Yoneda lemma replaces the functor `f` with a hom-functor in `C`:

`f x = C(b, x)`

and we get:

`∫x Set(C(a, x), C(b, x)) ≅ C(b, a)`

This form of the Yoneda lemma is useful in showing the Yoneda embedding, which states that any category `C` can be fully and faithfully embedded in the functor category `[C, Set]`. The embedding is a functor, and the above formula defines its action on morphisms.
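
In Haskell, this special case reads `forall x. (a -> x) -> (b -> x) ≅ b -> a`; a quick sketch of both directions (names are illustrative):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Extract the morphism b -> a by instantiating the quantifier at x = a:
toHom :: (forall x. (a -> x) -> (b -> x)) -> (b -> a)
toHom t = t id

-- Precomposition gives the other direction:
fromHom :: (b -> a) -> (forall x. (a -> x) -> (b -> x))
fromHom f = \g -> g . f
```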

We will be interested in using the Yoneda lemma in the functor category. We simply replace `C` with `[C, Set]` in the previous formula, and do some renaming of variables:

`∫f Set([C, Set](g, f), [C, Set](h, f)) ≅ [C, Set](h, g)`

The hom-sets in the functor category are sets of natural transformations, which can be rewritten using ends:

```
∫f Set(∫x Set(g x, f x), ∫x Set(h x, f x))
  ≅ ∫x Set(h x, g x)
```

Adjunctions

This is a short recap of adjunctions. We start with two functors going between two categories `C` and `D`:

```
L :: C -> D
R :: D -> C
```

We say that `L` is left adjoint to `R` iff there is a natural isomorphism between hom-sets:

`D(L x, y) ≅ C(x, R y)`
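
The most familiar instance is the product/exponential adjunction, where `L x = (e, x)` and `R y = e -> y`; the isomorphism is then just currying. A minimal sketch (helper names are mine):

```haskell
-- D(L x, y) ≅ C(x, R y) specialized to L x = (e, x) and R y = e -> y:
leftAdjunct :: ((e, a) -> b) -> (a -> (e -> b))
leftAdjunct f a e = f (e, a)

rightAdjunct :: (a -> (e -> b)) -> ((e, a) -> b)
rightAdjunct g (e, a) = g a e
```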

In particular, we can define an adjunction in a functor category `[C, Set]`. We start with two higher order (endo-) functors:

```
L :: [C, Set] -> [C, Set]
R :: [C, Set] -> [C, Set]
```

We say that `L` is left adjoint to `R` iff there is a natural isomorphism between two sets of natural transformations:

`[C, Set](L f, g) ≅ [C, Set](f, R g)`

where `f` and `g` are functors from `C` to `Set`. We can rewrite natural transformations using ends:

`∫x Set((L f) x, g x) ≅ ∫x Set(f x, (R g) x)`

In Haskell, you may think of `f` and `g` as type constructors (with the corresponding `Functor` instances), in which case `L` and `R` are types that are parameterized by these type constructors (similar to how the monad or functor classes are).

Here’s a little trick. Since the fixed objects in the formula for Yoneda embedding are arbitrary, we can pick them to be images of other objects under some functor `L` that we know is left adjoint to another functor `R`:

`∫x Set(D(L a, x), D(L b, x)) ≅ D(L b, L a)`

Using the adjunction, this is isomorphic to:

`∫x Set(C(a, R x), C(b, R x)) ≅ C(b, (R ∘ L) a)`

Notice that the composition `R ∘ L` of adjoint functors is a monad in `C`. Let’s write this monad as `Φ`.

The interesting case is the adjunction between a forgetful functor `U` and a free functor `F`. We get:

`∫x Set(C(a, U x), C(b, U x)) ≅ C(b, Φ a)`

The end is taken over `x` in a category `D` that has some additional structure (we’ll see examples of that later); but the hom-sets are in the underlying simpler category `C`, which is the target of the forgetful functor `U`.

The Yoneda-with-adjunction formula generalizes to the category of functors:

```
∫f Set(∫x Set((L g) x, f x), ∫x Set((L h) x, f x))
  ≅ ∫x Set((L h) x, (L g) x)
```

```
∫f Set(∫x Set(g x, (R f) x), ∫x Set(h x, (R f) x))
  ≅ ∫x Set(h x, (Φ g) x)
```

Here, `Φ` is the monad `R ∘ L` in the category of functors.

An interesting special case is when we substitute hom-functors for `g` and `h`:

```
g x = C(a, x)
h x = C(s, x)
```

We get:

```
∫f Set(∫x Set(C(a, x), (R f) x), ∫x Set(C(s, x), (R f) x))
  ≅ ∫x Set(C(s, x), (Φ C(a, -)) x)
```

We can then use the regular Yoneda lemma to “integrate over `x`” and reduce it down to:

`∫f Set((R f) a, (R f) s) ≅ (Φ C(a, -)) s`

Again, we are particularly interested in the forgetful/free adjunction:

`∫f Set((U f) a, (U f) s) ≅ (Φ C(a, -)) s`

`Φ = U ∘ F`

The simplest application of this identity is when the functors in question are identity functors. We get:

`∫f Set(f a, f s) ≅ C(a, s)`

`forall f. Functor f => f a -> f s  ≅ a -> s`

You may think of this formula as defining the trivial kind of optic that simply turns `a` to `s`.
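
Writing out both directions makes the isomorphism concrete; the witness for one direction instantiates the quantified `f` at `Identity` (helper names are mine):

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Identity (Identity (..))

-- Instantiating f at Identity recovers the plain function:
toFunction :: (forall f. Functor f => f a -> f s) -> (a -> s)
toFunction t = runIdentity . t . Identity

-- The other direction is fmap:
fromFunction :: (a -> s) -> (forall f. Functor f => f a -> f s)
fromFunction = fmap
```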

Profunctors

Profunctors are just functors from a product category `Cop×D` to Set. All the results from the last section can be directly applied to the profunctor category `[Cop×D, Set]`. Keep in mind that morphisms in this category are natural transformations between profunctors. Here’s the key formula:

`∫p Set((U p)<a, b>, (U p)<s, t>) ≅ (Φ (Cop×D)(<a, b>, -)) <s, t>`

I have replaced `a` with a pair `<a, b>` and `s` with a pair `<s, t>`. The end is taken over all profunctors that exhibit some structure that `U` forgets, and `F` freely creates. `Φ` is the monad `U ∘ F`. It’s a monad that acts on profunctors to produce other profunctors.

Notice that a hom-set in the category `Cop×D` is a set of pairs of morphisms:

```
<f, g> :: (Cop×D)(<a, b>, <s, t>)
f :: s -> a
g :: b -> t
```

the first one going in the opposite direction.

The simplest application of this identity is when we don’t impose any constraints on the profunctors, in which case `Φ` is the identity monad. We get:

`∫p Set(p <a, b>, p <s, t>) ≅ (Cop×D)(<a, b>, <s, t>)`

Haskell translation of this formula gives the well-known representation of `Iso`:

`forall p. Profunctor p => p a b -> p s t ≅ Iso s t a b`

where:

`data Iso s t a b = Iso (s -> a) (b -> t)`
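
A sketch of why this works: `Iso a b`, with its argument pairs flipped, is itself a profunctor, and feeding the identity `Iso` to the polymorphic function extracts the concrete representation. (The `Profunctor` class below mirrors the one from the `profunctors` package; `Flip` is an auxiliary wrapper of mine.)

```haskell
{-# LANGUAGE RankNTypes #-}

class Profunctor p where
  dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

data Iso s t a b = Iso (s -> a) (b -> t)

-- Iso with its first two arguments singled out is a profunctor
-- in the remaining two:
newtype Flip a b s t = Flip (Iso s t a b)

instance Profunctor (Flip a b) where
  dimap f g (Flip (Iso v u)) = Flip (Iso (v . f) (g . u))

toProf :: Iso s t a b -> (forall p. Profunctor p => p a b -> p s t)
toProf (Iso v u) = dimap v u

-- Applying the polymorphic function to the identity Iso recovers
-- the concrete representation:
fromProf :: (forall p. Profunctor p => p a b -> p s t) -> Iso s t a b
fromProf l = case l (Flip (Iso id id)) of Flip i -> i
```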

Interesting things happen when we impose more structure on our profunctors.

Enriched Categories

First, let’s generalize profunctors to work on enriched categories. We start with some monoidal category `V` whose objects serve as hom-objects in an enriched category `A`. The category `V` will essentially replace Set in our constructions. For instance, we’ll work with profunctors that are enriched functors from the (enriched) product category to `V`:

`p :: Aop ⊗ A -> V`

Notice that we use a tensor product of categories. The objects in such a category are pairs of objects, and the hom-objects are tensor products of individual hom-objects. The definition of composition in a product category requires that the tensor product in `V` be symmetric (up to isomorphism).

For such profunctors, there is a suitable generalization of the end:

`∫x p x x`

It’s an object in `V` together with a `V`-natural family of projections:

`pry :: ∫x p x x -> p y y`

We can formulate the Yoneda lemma in an enriched setting by considering enriched functors from `A` to `V`. We get the following generalization:

`∫x [A(a, x), f x] ≅ f a`

Notice that `A(a, x)` is now an object of `V` — the hom-object from `a` to `x`. The notation `[v, w]` generalizes the internal hom. It is defined as the right adjoint to the tensor product in `V`:

`V(x ⊗ v, w) ≅ V(x, [v, w])`

We are assuming that `V` is closed, so the internal hom is defined for every pair of objects.

Enriched functors, or V-functors, between two enriched categories `C` and `D` form a functor category `[C, D]` that is itself enriched over `V`. The hom-object between two functors `f` and `g` is given by the end:

`[C, D](f, g) = ∫x D(f x, g x)`

We can therefore use the Yoneda lemma in a category of enriched functors, or in the category of enriched profunctors. Therefore the result of the previous section holds in the enriched setting as well:

`∫p [(U p)<a, b>, (U p)<s, t>] ≅ (Φ (Aop⊗A)(<a, b>, -)) <s, t>`

with the understanding that:

`(Aop⊗A)(<a, b>, -)`

is an enriched hom functor mapping pairs of objects in `A` to objects in `V`, plus the appropriate action on hom-objects. This hom-functor is the profunctor on which `Φ` acts.

Tambara Modules

An enriched category `A` may have a monoidal structure of its own. We’ll use the same tensor product notation for its structure as we did for the underlying monoidal category `V`. There is also a tensorial unit object `i` in `A`.

A Tambara module is a V-functor `p` from `Aop⊗A` to `V`, which transforms under the tensor action of `A` according to a family of morphisms, natural in all three arguments:

`α a x y :: p x y -> p (a ⊗ x) (a ⊗ y)`

Notice that these are morphisms in the underlying category `V`, which is also the target of the profunctor.

We impose the usual unit law:

`α i x y = id`

and associativity:

`α (a⊗b) x y = α a (b⊗x) (b⊗y) ∘ α b x y`

Strictly speaking, one can define left and right actions separately but, for simplicity, we’ll assume that the product is symmetric (up to isomorphism).

The intuition behind Tambara modules is that some of the profunctor values are not independent of others. Once we have calculated `p x y`, we can obtain the value of `p` at any of the points on the path `<a⊗x, a⊗y>` by applying `α`.

Tambara modules form a category that’s enriched over `V`. The construction of this enrichment is non-trivial. The hom-object between two profunctors `p` and `q` in a category of profunctors is given by the end:

`[Aop⊗A, V](p, q) = ∫<x y> V(p x y, q x y)`

This object generalizes the set of natural transformations. Conceptually, not all natural transformations preserve the Tambara structure, so we have to define a subobject of this hom-object that does. The intuition is that the end is a generalized product of its components. It comes equipped with projections. For instance, the projection `pr<x,y>` picks the component:

`V(p x y, q x y)`

But there is also a projection `pr<a⊗x, a⊗y>` that picks:

`V(p (a⊗x) (a⊗y), q (a⊗x) (a⊗y))`

from the same end. These two objects are not completely independent, because they can both be transformed into the same object. We have:

```
V(id, αa) :: V(p x y, q x y) -> V(p x y, q (a⊗x) (a⊗y))
V(αa, id) :: V(p (a⊗x) (a⊗y), q (a⊗x) (a⊗y)) -> V(p x y, q (a⊗x) (a⊗y))
```

We are using the fact that the mapping:

`<v, w> -> V(v, w)`

is itself a profunctor `Vop×V -> V`, so it can be used to lift pairs of morphisms in `V`.

Now, given any triple `a`, `x`, and `y`, we want the two paths to be equivalent, which means taking the equalizer of the two morphisms:

```
V(id, αa) ∘ pr<x, y>
V(αa, id) ∘ pr<a⊗x, a⊗y>
```

Since we want our hom-object to satisfy the above condition for any triple, we have to construct it as an intersection of all those equalizers. Here, an intersection means an object of `V` together with a family of monomorphisms, each embedding it into a particular equalizer.

It’s possible to construct a forgetful functor from the Tambara category to the category of profunctors `[Aop⊗A, V]`. It forgets the existence of `α` and it maps hom-objects between the two categories. Composition in the Tambara category is defined in such a way as to be compatible with this forgetful functor.

The fact that Tambara modules form a category is important, because we want to be able to use the Yoneda lemma in that category.

Tambara Optics

The key observation is that the forgetful functor from the Tambara category has a left adjoint, and that their composition forms a monad in the category of profunctors. We’ll plug this monad into our general formula.

The construction of this monad starts with a comonad that is given by the following end:

`(Θ p) s t = ∫c p (c⊗s) (c⊗t)`

For a given profunctor `p`, this comonad builds a new profunctor that is essentially a gigantic product of all values of this profunctor “shifted” by tensoring its arguments with all possible objects `c`.

The monad we are interested in is the left adjoint to this comonad (calculated using a Kan extension):

`(Φ p) s t = ∫ c x y A(s, c⊗x) ⊗ A(c⊗y, t) ⊗ p x y`

Notice that we have two separate tensor products in this formula: one in `V`, between the hom-objects and the profunctor, and one in `A`, under the hom-objects. This monad takes an arbitrary profunctor `p` and produces a new profunctor `Φ p`.

We can now use our earlier formula:

`∫p [(U p)<a, b>, (U p)<s, t>] ≅ (Φ (Aop⊗A)(<a, b>, -)) <s, t>`

inside the Tambara category. To calculate the right hand side, let’s evaluate the action of `Φ` on the hom-profunctor:

```
(Φ (Aop⊗A)(<a, b>, -)) <s, t>
  = ∫ c x y A(s, c⊗x) ⊗ A(c⊗y, t) ⊗ (Aop⊗A)(<a, b>, <x, y>)
```

We can “integrate over” `x` and `y` using the Yoneda lemma to get:

`∫ c A(s, c⊗a) ⊗ A(c⊗b, t)`

We get the following result:

`∫p [(U p)<a, b>, (U p)<s, t>] ≅ ∫ c A(s, c⊗a) ⊗ A(c⊗b, t)`

where the end on the left is taken over all Tambara modules, and `U` is the forgetful functor from the Tambara category to the category of profunctors.

If the category in question is closed, we can use the adjunction:

`A(c⊗b, t) ≅ A(c, [b, t])`

and “perform the integration” over `c` to arrive at the set/get formulation:

`∫ c A(s, c⊗a) ⊗ A(c, [b, t]) ≅ A(s, [b, t]⊗a)`

It corresponds to the familiar Haskell lens type:

`(s -> b -> t, s -> a)`

(This final trick doesn’t work for prisms, because there is no right adjoint to `Either`.)

A Tambara module is parameterized by the choice of the tensor product `ten`. We can write a general definition:

```
class (Profunctor p) => TamModule (ten :: * -> * -> *) p where
  leftAction  :: p a b -> p (c `ten` a) (c `ten` b)
  rightAction :: p a b -> p (a `ten` c) (b `ten` c)
```

This can be further specialized for two obvious monoidal structures: product and sum:

```
type TamProd p = TamModule (,) p
type TamSum  p = TamModule Either p
```

The former is equivalent to what is called a `Strong` (or `Cartesian`) profunctor in Haskell; the latter is equivalent to a `Choice` (or `Cocartesian`) profunctor.
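
As a quick sanity check, plain functions form a Tambara module for both tensors. The snippet below restates the class (without the `Profunctor` superclass, to keep it standalone, and with the target of `leftAction` written as `(c `ten` b)`):

```haskell
{-# LANGUAGE KindSignatures, MultiParamTypeClasses #-}

class TamModule (ten :: * -> * -> *) p where
  leftAction  :: p a b -> p (c `ten` a) (c `ten` b)
  rightAction :: p a b -> p (a `ten` c) (b `ten` c)

-- Tensoring with a product just carries the extra component along:
instance TamModule (,) (->) where
  leftAction  f (c, a) = (c, f a)
  rightAction f (a, c) = (f a, c)

-- Tensoring with a sum acts on one branch, passing the other through:
instance TamModule Either (->) where
  leftAction  f = either Left (Right . f)
  rightAction f = either (Left . f) Right
```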

Replacing ends and coends with universal and existential quantifiers in Haskell, our main formula becomes (pseudocode):

```
forall p. TamModule ten p => p a b -> p s t
  ≅ exists c. (s -> c `ten` a, c `ten` b -> t)
```

The two sides of the isomorphism can be defined as the following data structures:

```
type TamOptic ten s t a b
  = forall p. TamModule ten p => p a b -> p s t

data Optic ten s t a b
  = forall c. Optic (s -> c `ten` a) (c `ten` b -> t)
```

Choosing the product for the tensor, we recover two equivalent definitions of a lens:

```
type Lens s t a b = forall p. Strong p => p a b -> p s t
data Lens s t a b = forall c. Lens (s -> (c, a)) ((c, b) -> t)
```

Choosing the coproduct, we get:

```
type Prism s t a b = forall p. Choice p => p a b -> p s t
data Prism s t a b = forall c. Prism (s -> Either c a) (Either c b -> t)
```

These are the well-known existential representations of lenses and prisms.
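
To see that the existential form of a lens really packs a getter and a setter, here is a small sketch (restating the existential `Lens`; `view`, `update`, and `_2` are illustrative names):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data Lens s t a b = forall c. Lens (s -> (c, a)) ((c, b) -> t)

-- The getter discards the residue c:
view :: Lens s t a b -> s -> a
view (Lens split _) = snd . split

-- The setter keeps the residue and replaces the focus:
update :: Lens s t a b -> s -> b -> t
update (Lens split merge) s b = let (c, _) = split s in merge (c, b)

-- A lens focusing on the second component of a pair (c is the first):
_2 :: Lens (x, a) (x, b) a b
_2 = Lens id id
```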

The monad `Φ` (or, equivalently, the free functor that generates Tambara modules), is known in Haskell under the name `Pastro` for product, and `Copastro` for coproduct:

```
data Pastro p a b where
  Pastro :: ((y, z) -> b) -> p x y -> (a -> (x, z)) -> Pastro p a b

data Copastro p a b where
  Copastro :: (Either y z -> b) -> p x y -> (a -> Either x z) -> Copastro p a b
```

They are the left adjoints of `Tambara` and `Cotambara`, respectively:

```
newtype Tambara p a b = Tambara (forall c. p (a, c) (b, c))
newtype Cotambara p a b = Cotambara (forall c. p (Either a c) (Either b c))
```

which are special cases of the comonad `Θ`.

Discussion

It’s interesting that the work on Tambara modules has relevance to Haskell optics. It is, however, just one example of an even larger pattern.

The pattern is that we have a family of transformations in some category `A`. These transformations can be used to select a class of profunctors that have simple transformation laws. Using a tensor product in a monoidal category to transform objects, in essence “multiplying” them, is just one example of such symmetry. A more general pattern involves a family of transformations `f` that is closed under composition and includes a unit. We specify a transformation law for profunctors:

```
class Profunctor p => Related p where
  α :: forall f. Trans f => p a b -> p (f a) (f b)
```

This requirement picks a class of profunctors that we call `Related`.

Why are profunctors relevant as carriers of symmetry? It’s because they generalize a relationship between objects. The profunctor transformation law essentially says that if two objects `a` and `b` are related through `p` then so are the transformed objects; and that there is a function `α` that relates the proofs of this relationship. This is in the spirit of profunctors as proof-relevant relations.

As an analogy, imagine that we are comparing people, and the transformation we’re interested in is aging. We notice that family relationships remain invariant under aging: if `a` is a sibling of `b`, they will remain siblings as they age. This is not true about other relationships, for instance being a boss of another person. But family bonds are not the only ones that survive the test of time. Another such relation is being older or younger than the other person.

Now imagine that you pick four people at random points in time and you find out that any time-invariant relation between two of them, `a` and `b`, also holds between `s` and `t`. You have to conclude that there is some connection between `s` and age-adjusted `a`, and between age-adjusted `b` and `t`. In other words there exists a time shift that transforms one pair to another.

Considering all possible relations from the class `Related` corresponds to taking the end over all profunctors from this class:

```
type Optic s t a b = forall p. Related p => p a b -> p s t
```

The end is a generalization of a product, so it’s enough that one of the components is empty for the whole end to be empty. It means that, for a particular choice of the four types `a`, `b`, `s`, and `t`, we have to be able to construct a whole family of morphisms, one for every `p`. We have seen that this end exists only if the four types are connected in a very peculiar way — for instance, if `a` and `b` are somehow embedded in `s` and `t`.

In the simplest case, we may choose the four types to be related by the transformation:

```
s = f a
t = f b
```

For these types, we know that the end exists:

```
forall p. Related p => p a b -> p s t
```

because there is a family of appropriate morphisms: our `αf a b`. In general, though, we can get away with a weaker connection.

Let’s look at an example of a family of transformations generated by pairing with an arbitrary type `c`:

`fc a = (c, a)`

Profunctors that respect these transformations are Tambara modules over a cartesian product (or, in lens parlance, `Strong` profunctors). For the choice:

```
s = (c, a)
t = (c, b)
```

the end in question trivially exists. As we’ve seen, we can weaken these conditions. It’s enough that one-way (lax) transformations exist:

```
s -> (c, a)
t <- (c, b)
```

These morphisms assert that `s` can be split into a pair, and that `t` can be constructed from a pair (but not the other way around).

Other Optics

With the understanding that optics may be defined using a family of transformations, we can analyze another optic called the `Grate`. It’s based on the following family:

`type Reader e a = e -> a`

Notice that, unlike the case of Tambara modules, this family is parameterized by a contravariant parameter `e`.

We are interested in profunctors that transform under these transformations:

```
class Profunctor p => Closed p where
  closed :: p a b -> p (x -> a) (x -> b)
```

They let us form the optic:

`type Grate s t a b = forall p. Closed p => p a b -> p s t`

It turns out that there is a functor on profunctors that freely generates `Closed` profunctors. We have the obvious comonad:

`newtype Closure p a b = Closure (forall x. p (x -> a) (x -> b))`

and the monad:

```
data Environment p u v where
  Environment :: ((c -> y) -> v) -> p x y -> (u -> (c -> x)) -> Environment p u v
```

or, in categorical notation:

`(Φ p) u v = ∫ c x y A([c, y], v) ⊗ p x y ⊗ A(u, [c, x])`

Using our construction, we apply this monad to the hom-profunctor:

```
(Φ (Aop⊗A)(<a, b>, -)) <s, t>
  = ∫ c x y A([c, y], t) ⊗ (Aop⊗A)(<a, b>, <x, y>) ⊗ A(s, [c, x])
  ≅ ∫ c A([c, b], t) ⊗ A(s, [c, a])
```

Translating it back to Haskell, we get a representation of `Grate` as an existential type:

`data Grate s t a b = forall c. Grate ((c -> b) -> t) (s -> (c -> a))`

This is very similar to the existential representation of a lens or a prism. It has the intuitive interpretation that `s` can be thought of as a container of `a`‘s indexed by some hidden type `c`.

We can also “perform the integration” using the Yoneda lemma, the internal-hom adjunction, and the symmetry of the product:

```∫ c A([c, b], t) ⊗ A(s, [c, a])
≅ ∫ c A([c, b], t) ⊗ A(s ⊗ c, a)
≅ ∫ c A([c, b], t) ⊗ A(c, [s, a])
≅ A([[s, a], b], t)```

to get the more familiar form:

`Grate s t a b ≅ ((s -> a) -> b) -> t`
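This concrete form lends itself to direct programming. The example grates and the `zipWithOf` helper below are my own illustrations (the names echo the lens library, but these are simplified stand-alone definitions):

```haskell
-- The concrete form of a grate from the text:
type GrateC s t a b = ((s -> a) -> b) -> t

-- A homogeneous pair is a container of a's indexed by a
-- two-element hidden type (the two projections):
pairGrate :: GrateC (a, a) (b, b) a b
pairGrate f = (f fst, f snd)

-- Any function type e -> a is a container of a's indexed by e:
funGrate :: GrateC (e -> a) (e -> b) a b
funGrate f = \e -> f (\g -> g e)

-- Grates support "zipping" two containers pointwise:
zipWithOf :: GrateC s t a b -> (a -> a -> b) -> s -> s -> t
zipWithOf gr op s1 s2 = gr (\get -> op (get s1) (get s2))
```

For instance, `zipWithOf pairGrate (+) (1, 2) (3, 4)` evaluates to `(4, 6)`.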

Conclusion

I find it fascinating that constructions that were first discovered in Haskell to make Haskell’s optics composable have their categorical counterparts. This was not at all obvious, if only because some of them rely on parametricity arguments. Parametricity is a property of the language, not easily translatable into category theory. Now we know that the profunctor formulation of isos, lenses, prisms, and grates follows from the Yoneda lemma. The work is not complete yet. I haven’t been able to derive the same formulation for traversals, which combine two different tensor products plus some monoidal constraints.

Bibliography

1. Haskell lens library, Edward Kmett
2. Distributors on a Tensor Category, D. Tambara
3. Doubles for Monoidal Categories, Craig Pastro and Ross Street
4. Profunctor Optics: Modular Data Accessors, Matthew Pickering, Jeremy Gibbons, and Nicolas Wu
5. CPS based functional references, Twan van Laarhoven
6. Isomorphism lenses, Twan van Laarhoven
7. A Representation Theorem for Second-Order Functionals, Mauro Jaskelioff and Russell O’Connor

A category is small if its objects form a set. But we know that there are things larger than sets. Famously, a set of all sets cannot be formed within the standard set theory (the Zermelo-Fraenkel theory, optionally augmented with the Axiom of Choice). So a category of all sets must be large. There are mathematical tricks like Grothendieck universes that can be used to define collections that go beyond sets. These tricks let us talk about large categories.

A category is locally small if morphisms between any two objects form a set. If they don’t form a set, we have to rethink a few definitions. In particular, what does it mean to compose morphisms if we can’t even pick them from a set? The solution is to bootstrap ourselves by replacing hom-sets, which are objects in Set, with objects from some other category V. The difference is that, in general, objects don’t have elements, so we are no longer allowed to talk about individual morphisms. We have to define all properties of an enriched category in terms of operations that can be performed on hom-objects as a whole. In order to do that, the category that provides hom-objects must have additional structure — it must be a monoidal category. If we call this monoidal category V, we can talk about a category C enriched over V.

Besides size reasons, we might be interested in generalizing hom-sets to something that has more structure than mere sets. For instance, a traditional category doesn’t have the notion of a distance between objects. Two objects are either connected by morphisms or not. All objects that are connected to a given object are its neighbors. Unlike in real life, in a category a friend of a friend of a friend is as close to me as my bosom buddy. In a suitably enriched category, we can define distances between objects.

There is one more very practical reason to get some experience with enriched categories, and that’s because a very useful online source of categorical knowledge, the nLab, is written mostly in terms of enriched categories.

Why Monoidal Category?

When constructing an enriched category we have to keep in mind that we should be able to recover the usual definitions when we replace the monoidal category with Set and hom-objects with hom-sets. The best way to accomplish this is to start with the usual definitions and keep reformulating them in a point-free manner — that is, without naming elements of sets.

Let’s start with the definition of composition. Normally, it takes a pair of morphisms, one from `C(b, c)` and one from `C(a, b)` and maps it to a morphism from `C(a, c)`. In other words it’s a mapping:

`C(b, c) × C(a, b) -> C(a, c)`

This is a function between sets — one of them being the cartesian product of two hom-sets. This formula can be easily generalized by replacing cartesian product with something more general. A categorical product would work, but we can go even further and use a completely general tensor product.

Next come the identity morphisms. Instead of picking individual elements from hom-sets, we can define them using functions from the singleton set 1:

`ja :: 1 -> C(a, a)`

Again, we could replace the singleton set with the terminal object, but we can go even further by replacing it with the unit `i` of the tensor product.

As you can see, objects taken from some monoidal category V are good candidates for hom-set replacement.

Monoidal Category

We’ve talked about monoidal categories before, but it’s worth restating the definition. A monoidal category defines a tensor product that is a bifunctor:

`⊗ :: V × V -> V`

We want the tensor product to be associative, but it’s enough to satisfy associativity up to natural isomorphism. This isomorphism is called the associator. Its components are:

`αa b c :: (a ⊗ b) ⊗ c -> a ⊗ (b ⊗ c)`

It must be natural in all three arguments.

A monoidal category must also define a special unit object `i` that serves as the unit of the tensor product; again, up to natural isomorphism. The two isomorphisms are called, respectively, the left and the right unitor, and their components are:

```λa :: i ⊗ a -> a
ρa :: a ⊗ i -> a```

The associator and the unitors must satisfy coherence conditions (the pentagon and the triangle identities).

A monoidal category is called symmetric if there is a natural isomorphism with components:

`γa b :: a ⊗ b -> b ⊗ a`

whose “square is one”:

`γb a ∘ γa b = ida⊗b`

and which is consistent with the monoidal structure.

An interesting thing about monoidal categories is that you may be able to define the internal hom (the function object) as the right adjoint to the tensor product. You may recall that the standard definition of the function object, or the exponential, was through the right adjoint to the categorical product. A category in which such an object existed for any pair of objects was called cartesian closed. Here is the adjunction that defines the internal hom in a monoidal category:

`V(a ⊗ b, c) ~ V(a, [b, c])`

Following G. M. Kelly, I’m using the notation `[b, c]` for the internal hom. The counit of this adjunction is the natural transformation whose components are called evaluation morphisms:

`εa b :: ([a, b] ⊗ a) -> b`

Notice that, if the tensor product is not symmetric, we may define another internal hom, denoted by `[[a, c]]`, using the following adjunction:

`V(a ⊗ b, c) ~ V(b, [[a, c]])`

A monoidal category in which both are defined is called biclosed. An example of a category that is not biclosed is the category of endofunctors in Set, with functor composition serving as tensor product. That’s the category we used to define monads.

Enriched Category

A category C enriched over a monoidal category V replaces hom-sets with hom-objects. To every pair of objects `a` and `b` in C we associate an object `C(a, b)` in V. We use the same notation for hom-objects as we used for hom-sets, with the understanding that they don’t contain morphisms. On the other hand, V is a regular (non-enriched) category with hom-sets and morphisms. So we are not entirely rid of sets — we just swept them under the rug.

Since we cannot talk about individual morphisms in C, composition of morphisms is replaced by a family of morphisms in V:

`∘ :: C(b, c) ⊗ C(a, b) -> C(a, c)`

Similarly, identity morphisms are replaced by a family of morphisms in V:

`ja :: i -> C(a, a)`

where `i` is the tensor unit in V.

Associativity of composition is expressed by a commuting diagram built from the associator in V.

Unit laws are likewise expressed by commuting diagrams involving the unitors.

Preorders

A preorder is defined as a thin category, one in which every hom-set is either empty or a singleton. We interpret a non-empty set `C(a, b)` as the proof that `a` is less than or equal to `b`. Such a category can be interpreted as enriched over a very simple monoidal category that contains just two objects, 0 and 1 (sometimes called False and True). Besides the mandatory identity morphisms, this category has a single morphism going from 0 to 1, let’s call it `0->1`. A simple monoidal structure can be established in it, with the tensor product modeling the simple arithmetic of 0 and 1 (i.e., the only non-zero product is `1⊗1`). The identity object in this category is 1. This is a strict monoidal category, that is, the associator and the unitors are identity morphisms.

Since in a preorder the hom-set is either empty or a singleton, we can easily replace it with a hom-object from our tiny category. The enriched preorder C has a hom-object `C(a, b)` for any pair of objects `a` and `b`. If `a` is less than or equal to `b`, this object is 1; otherwise it’s 0.

Let’s have a look at composition. The tensor product of any two objects is 0, unless both of them are 1, in which case it’s 1. If it’s 0, then we have two options for the composition morphism: it could be either `id0` or `0->1`. But if it’s 1, then the only option is `id1`. Translating this back to relations, this says that if `a <= b` and `b <= c` then `a <= c`, which is exactly the transitivity law we need.

What about the identity? It’s a morphism from 1 to `C(a, a)`. There is only one morphism going from 1, and that’s the identity `id1`, so `C(a, a)` must be 1. It means that `a <= a`, which is the reflexivity law for a preorder. So both transitivity and reflexivity are automatically enforced, if we implement a preorder as an enriched category.
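This tiny monoidal category can be modeled in Haskell with `Bool` (0 as `False`, 1 as `True`), with `(&&)` as the tensor product and `True` as its unit. The sketch below is my own illustration; the `divides` preorder is a hypothetical example:

```haskell
-- A preorder as a Bool-enriched category: the hom-object
-- C(a, b) is True iff a is less than or equal to b.
type Hom a = a -> a -> Bool

-- Example preorder on positive integers: divisibility.
divides :: Int -> Int -> Bool
divides a b = b `mod` a == 0

-- Composition C(b,c) ⊗ C(a,b) -> C(a,c): a morphism from the
-- tensor (&&) of the homs to the target hom exists iff this
-- implication holds. For a lawful enrichment it must hold for
-- every triple -- that's transitivity:
transitive :: Hom a -> a -> a -> a -> Bool
transitive hom a b c = not (hom a b && hom b c) || hom a c

-- Identity j_a :: 1 -> C(a, a): the unit True must imply
-- hom a a -- that's reflexivity:
reflexive :: Hom a -> a -> Bool
reflexive hom a = hom a a
```

Checking `transitive divides` and `reflexive divides` over a range of positive integers confirms that divisibility is a preorder.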

Metric Spaces

An interesting example is due to William Lawvere. He noticed that metric spaces can be defined using enriched categories. A metric space defines a distance between any two objects. This distance is a non-negative real number. It’s convenient to include infinity as a possible value. If the distance is infinite, there is no way of getting from the starting object to the target object.

There are some obvious properties that have to be satisfied by distances. One of them is that the distance from an object to itself must be zero. The other is the triangle inequality: the direct distance is no larger than the sum of distances with intermediate stops. We don’t require the distance to be symmetric, which might seem weird at first but, as Lawvere explained, you can imagine that in one direction you’re walking uphill, while in the other you’re going downhill. In any case, symmetry may be imposed later as an additional constraint.

So how can a metric space be cast into a categorical language? We have to construct a category in which hom-objects are distances. Mind you, distances are not morphisms but hom-objects. How can a hom-object be a number? Only if we can construct a monoidal category V in which these numbers are objects. Non-negative real numbers (plus infinity) form a total order, so they can be treated as a thin category. A morphism between two such numbers `x` and `y` exists if and only if `x >= y` (note: this is the opposite direction to the one traditionally used in the definition of a preorder). The monoidal structure is given by addition, with zero serving as the unit object. In other words, the tensor product of two numbers is their sum.

A metric space is a category enriched over such monoidal category. A hom-object `C(a, b)` from object `a` to `b` is a non-negative (possibly infinite) number that we will call the distance from `a` to `b`. Let’s see what we get for identity and composition in such a category.

By our definitions, a morphism from the tensorial unit, which is the number zero, to a hom-object `C(a, a)` is the relation:

`0 >= C(a, a)`

Since `C(a, a)` is a non-negative number, this condition tells us that the distance from `a` to `a` is always zero. Check!

Now let’s talk about composition. We start with the tensor product of two abutting hom-objects, `C(b, c)⊗C(a, b)`. We have defined the tensor product as the sum of the two distances. Composition is a morphism in V from this product to `C(a, c)`. A morphism in V is defined as the greater-or-equal relation. In other words, the sum of distances from `a` to `b` and from `b` to `c` is greater than or equal to the distance from `a` to `c`. But that’s just the standard triangle inequality. Check!

By re-casting the metric space in terms of an enriched category, we get the triangle inequality and the zero self-distance “for free.”
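The two laws can be sanity-checked numerically. The sketch below is my own illustration, using `Double` with `1/0` for infinity; the asymmetric `hill` distance is a hypothetical example of the uphill/downhill situation Lawvere described:

```haskell
-- A Lawvere metric: hom-objects are non-negative reals
-- extended with infinity; a morphism x -> y exists iff x >= y;
-- the tensor product is (+) with unit 0.
type Dist a = a -> a -> Double

infinity :: Double
infinity = 1 / 0

-- Identity law: 0 >= d a a, i.e., self-distance is zero.
zeroSelf :: Dist a -> a -> Bool
zeroSelf d a = d a a <= 0

-- Composition law: d a b + d b c >= d a c (triangle inequality).
triangle :: Dist a -> a -> a -> a -> Bool
triangle d a b c = d a b + d b c >= d a c

-- An asymmetric distance: going "uphill" costs twice as much
-- as going "downhill".
hill :: Dist Double
hill x y = if y >= x then 2 * (y - x) else x - y
```

Evaluating `zeroSelf hill` and `triangle hill` over a grid of points confirms that `hill` satisfies both enriched-category laws despite being asymmetric.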

Enriched Functors

The definition of a functor involves the mapping of morphisms. In the enriched setting, we don’t have the notion of individual morphisms, so we have to deal with hom-objects in bulk. Hom-objects are objects in a monoidal category V, and we have morphisms between them at our disposal. It therefore makes sense to define enriched functors between categories when they are enriched over the same monoidal category V. We can then use morphisms in V to map the hom-objects between two enriched categories.

An enriched functor `F` between two categories C and D, besides mapping objects to objects, also assigns, to every pair of objects in C, a morphism in V:

`Fa b :: C(a, b) -> D(F a, F b)`

A functor is a structure-preserving mapping. For regular functors this meant preserving composition and identity. In the enriched setting, the preservation of composition is expressed by a commuting diagram that relates composition in C and in D through the morphisms `Fa b`.

The preservation of identity is replaced by the preservation of the morphisms in V that “select” the identity.

Self Enrichment

A closed symmetric monoidal category may be self-enriched by replacing hom-sets with internal homs (see the definition above). To make this work, we have to define the composition law for internal homs. In other words, we have to implement a morphism with the following signature:

`[b, c] ⊗ [a, b] -> [a, c]`

This is not much different from any other programming task, except that, in category theory, we usually use point-free implementations. We start by identifying the hom-set to which our morphism is supposed to belong. In this case, it’s a member of the hom-set:

`V([b, c] ⊗ [a, b], [a, c])`

This hom-set is isomorphic to:

`V(([b, c] ⊗ [a, b]) ⊗ a, c)`

I just used the adjunction that defined the internal hom `[a, c]`. If we can build a morphism in this new set, the adjunction will point us at the morphism in the original set, which we can then use as composition. We construct this morphism by composing several morphisms that are at our disposal. To begin with, we can use the associator `α[b, c] [a, b] a` to reassociate the expression on the left:

`([b, c] ⊗ [a, b]) ⊗ a -> [b, c] ⊗ ([a, b] ⊗ a)`

We can follow it with the counit of the adjunction `εa b`:

`[b, c] ⊗ ([a, b] ⊗ a) -> [b, c] ⊗ b`

And use the counit `εb c` again to get to `c`. We have thus constructed a morphism:

`εb c . (id[b, c] ⊗ εa b) . α[b, c] [a, b] a`

that is an element of the hom-set:

`V(([b, c] ⊗ [a, b]) ⊗ a, c)`

The adjunction will give us the composition law we were looking for.

Similarly, the identity:

`ja :: i -> [a, a]`

is a member of the following hom-set:

`V(i, [a, a])`

which is isomorphic, through adjunction, to:

` V(i ⊗ a, a)`

We know that this hom-set contains the left unitor `λa`. We can define `ja` as its image under the adjunction.

A practical example of self-enrichment is the category Set that serves as the prototype for types in programming languages. We’ve seen before that it’s a closed monoidal category with respect to cartesian product. In Set, the hom-set between any two sets is itself a set, so it’s an object in Set. We know that it’s isomorphic to the exponential set, so the external and the internal homs are equivalent. Now we also know that, through self-enrichment, we can use the exponential set as the hom-object and express composition in terms of cartesian products of exponential objects.
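In Haskell terms, the adjunction in this construction is just currying. Here is a sketch of the self-enrichment of Set; the names `eval`, `step`, `compose`, and `identity` are mine:

```haskell
-- In Set (Haskell types), the internal hom [a, b] is the
-- function type a -> b, and the tensor-hom adjunction is
-- curry/uncurry.

-- Evaluation: the counit of the adjunction.
eval :: (a -> b, a) -> b
eval (f, a) = f a

-- The morphism ([b,c] ⊗ [a,b]) ⊗ a -> c built in the text:
-- after reassociation, evaluate the inner hom, then evaluate
-- again.
step :: ((b -> c, a -> b), a) -> c
step ((g, f), a) = eval (g, eval (f, a))

-- Its transpose under the adjunction (currying) is the
-- composition law [b,c] ⊗ [a,b] -> [a,c]:
compose :: (b -> c, a -> b) -> (a -> c)
compose = curry step

-- The identity j_a: the left unitor λ :: ((), a) -> a is just
-- snd, and its transpose picks out the identity function.
identity :: () -> (a -> a)
identity = curry snd
```

For example, `compose ((+ 1), (* 2))` is the function that doubles and then increments.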

Relation to 2-Categories

I talked about 2-categories in the context of Cat, the category of (small) categories. The morphisms between categories are functors, but there is an additional structure: natural transformations between functors. In a 2-category, the objects are often called zero-cells; morphisms, 1-cells; and morphisms between morphisms, 2-cells. In Cat the 0-cells are categories, 1-cells are functors, and 2-cells are natural transformations.

But notice that functors between two categories form a category too; so, in Cat, we really have a hom-category rather than a hom-set. It turns out that, just like Set can be treated as a category enriched over Set, Cat can be treated as a category enriched over Cat. Even more generally, just like every category can be treated as enriched over Set, every 2-category can be considered enriched over Cat.

Next: Topoi.

This is part 27 of Categories for Programmers. Previously: Ends and Coends. See the Table of Contents.

So far we’ve been mostly working with a single category or a pair of categories. In some cases that was a little too constraining. For instance, when defining a limit in a category C, we introduced an index category `I` as the template for the pattern that would form the basis for our cones. It would have made sense to introduce another category, a trivial one, to serve as a template for the apex of the cone. Instead we used the constant functor `Δc` from `I` to `C`.

It’s time to fix this awkwardness. Let’s define a limit using three categories. Let’s start with the functor `D` from the index category I to C. This is the functor that selects the base of the cone — the diagram functor.

The new addition is the category 1 that contains a single object (and a single identity morphism). There is only one possible functor `K` from I to this category. It maps all objects to the only object in 1, and all morphisms to the identity morphism. Any functor `F` from 1 to C picks a potential apex for our cone.

A cone is a natural transformation `ε` from `F ∘ K` to `D`. Notice that `F ∘ K` does exactly the same thing as our original `Δc`. The following diagram shows this transformation.

We can now define a universal property that picks the “best” such functor `F`. This `F` will map 1 to the object that is the limit of `D` in C, and the natural transformation `ε` from `F ∘ K` to `D` will provide the corresponding projections. This universal functor is called the right Kan extension of `D` along `K` and is denoted by `RanKD`.

Let’s formulate the universal property. Suppose we have another cone — that is another functor `F'` together with a natural transformation `ε'` from `F' ∘ K` to `D`.

If the Kan extension `F = RanKD` exists, there must be a unique natural transformation `σ` from `F'` to it, such that `ε'` factorizes through `ε`, that is:

`ε' = ε . (σ ∘ K)`

Here, `σ ∘ K` is the horizontal composition of two natural transformations (one of them being the identity natural transformation on `K`). This transformation is then vertically composed with `ε`.

In components, when acting on an object `i` in I, we get:

`ε'i = εi ∘ σ K i`

In our case, `σ` has only one component corresponding to the single object of 1. So, indeed, this is the unique morphism from the apex of the cone defined by `F'` to the apex of the universal cone defined by `RanKD`. The commuting conditions are exactly the ones required by the definition of a limit.

But, importantly, we are free to replace the trivial category 1 with an arbitrary category A, and the definition of the right Kan extension remains valid.

Right Kan Extension

The right Kan extension of the functor `D::I->C` along the functor `K::I->A` is a functor `F::A->C` (denoted `RanKD`) together with a natural transformation

`ε :: F ∘ K -> D`

such that for any other functor `F'::A->C` and a natural transformation

`ε' :: F' ∘ K -> D`

there is a unique natural transformation

`σ :: F' -> F`

that factorizes `ε'`:

`ε' = ε . (σ ∘ K)`

This is quite a mouthful, but it can be visualized in this nice diagram:

An interesting way of looking at this is to notice that, in a sense, the Kan extension acts like the inverse of “functor multiplication.” Some authors go as far as to use the notation `D/K` for `RanKD`. Indeed, in this notation, the definition of `ε`, which is also called the counit of the right Kan extension, looks like simple cancellation:

`ε :: D/K ∘ K -> D`

There is another interpretation of Kan extensions. Consider that the functor `K` embeds the category I inside A. In the simplest case I could just be a subcategory of A. We have a functor `D` that maps I to C. Can we extend `D` to a functor `F` that is defined on the whole of A? Ideally, such an extension would make the composition `F ∘ K` be isomorphic to `D`. In other words, `F` would be extending the domain of `D` to `A`. But a full-blown isomorphism is usually too much to ask, and we can do with just half of it, namely a one-way natural transformation `ε` from `F ∘ K` to `D`. (The left Kan extension picks the other direction.)

Of course, the embedding picture breaks down when the functor `K` is not injective on objects or not faithful on hom-sets, as in the example of the limit. In that case, the Kan extension tries its best to extrapolate the lost information.

Now suppose that the right Kan extension exists for any `D` (and a fixed `K`). In that case `RanK -` (with the dash replacing `D`) is a functor from the functor category `[I, C]` to the functor category `[A, C]`. It turns out that this functor is the right adjoint to the precomposition functor `-∘K`. The latter maps functors in `[A, C]` to functors in `[I, C]`. The adjunction is:

`[I, C](F' ∘ K, D) ≅ [A, C](F', RanKD)`

It is just a restatement of the fact that to every natural transformation we called `ε'` corresponds a unique natural transformation we called `σ`.

Furthermore, if we choose the category I to be the same as C, we can substitute the identity functor `IC` for `D`. We get the following identity:

`[C, C](F' ∘ K, IC) ≅ [A, C](F', RanKIC)`

We can now choose `F'` to be the same as `RanKIC`. In that case the right hand side contains the identity natural transformation and, corresponding to it, the left hand side gives us the following natural transformation:

`ε :: RanKIC ∘ K -> IC`

This looks very much like the counit of an adjunction:

`RanKIC ⊣ K`

Indeed, the right Kan extension of the identity functor along a functor `K` can be used to calculate the left adjoint of `K`. For that, one more condition is necessary: the right Kan extension must be preserved by the functor `K`. The preservation of the extension means that, if we calculate the Kan extension of the functor postcomposed with `K`, we should get the same result as postcomposing the original Kan extension with `K`. In our case, this condition simplifies to:

`K ∘ RanKIC ≅ RanKK`

Notice that, using the division-by-K notation, the adjunction can be written as:

`I/K ⊣ K`

which confirms our intuition that an adjunction describes some kind of an inverse. The preservation condition becomes:

`K ∘ I/K ≅ K/K`

The right Kan extension of a functor along itself, `K/K`, is called a codensity monad.
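The codensity monad has a direct Haskell rendering; this is what the kan-extensions package provides as `Control.Monad.Codensity`. Below is a self-contained sketch, with `liftCodensity`/`lowerCodensity` helpers named by me:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The codensity monad K/K, i.e., the right Kan extension of a
-- functor k along itself, written via the end formula:
newtype Codensity k a =
  Codensity { runCodensity :: forall i. (a -> k i) -> k i }

instance Functor (Codensity k) where
  fmap f (Codensity m) = Codensity (\h -> m (h . f))

instance Applicative (Codensity k) where
  pure a = Codensity (\h -> h a)
  Codensity mf <*> Codensity ma =
    Codensity (\h -> mf (\f -> ma (h . f)))

instance Monad (Codensity k) where
  Codensity m >>= f =
    Codensity (\h -> m (\a -> runCodensity (f a) h))

-- Embed an underlying monad and run the result:
liftCodensity :: Monad m => m a -> Codensity m a
liftCodensity m = Codensity (m >>=)

lowerCodensity :: Monad m => Codensity m a -> m a
lowerCodensity (Codensity m) = m return
```

Note that the `Monad` instance needs nothing from `k`: the monadic structure comes entirely from the shape of the end, which is why codensity is useful for improving the asymptotic behavior of binds in an arbitrary monad.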

The adjunction formula is an important result because, as we’ll see soon, we can calculate Kan extensions using ends (coends), thus giving us practical means of finding right (and left) adjoints.

Left Kan Extension

There is a dual construction that gives us the left Kan extension. To build some intuition, we can start with the definition of a colimit and restructure it to use the singleton category 1. We build a cocone by using the functor `D::I->C` to form its base, and the functor `F::1->C` to select its apex.

The sides of the cocone, the injections, are components of a natural transformation `η` from `D` to `F ∘ K`.

The colimit is the universal cocone. So for any other functor `F'` and a natural transformation

`η' :: D -> F'∘ K`

there is a unique natural transformation `σ` from `F` to `F'`

such that:

`η' = (σ ∘ K) . η`

This is illustrated in the following diagram:

Replacing the singleton category 1 with A, this definition generalizes naturally to the definition of the left Kan extension, denoted by `LanKD`.

The natural transformation:

`η :: D -> LanKD ∘ K`

is called the unit of the left Kan extension.

As before, we can recast the one-to-one correspondence between natural transformations:

`η' = (σ ∘ K) . η`

as the adjunction:

`[A, C](LanKD, F') ≅ [I, C](D, F' ∘ K)`

In other words, the left Kan extension is the left adjoint, and the right Kan extension is the right adjoint of precomposition with `K`.

Just like the right Kan extension of the identity functor could be used to calculate the left adjoint of `K`, the left Kan extension of the identity functor turns out to be the right adjoint of `K` (with `η` being the unit of  the adjunction):

`K ⊣ LanKIC`

Combining the two results, we get:

`RanKIC ⊣ K ⊣ LanKIC`

Kan Extensions as Ends

The real power of Kan extensions comes from the fact that they can be calculated using ends (and coends). For simplicity, we’ll restrict our attention to the case where the target category C is Set, but the formulas can be extended to any category.

Let’s revisit the idea that a Kan extension can be used to extend the action of a functor outside of its original domain. Suppose that `K` embeds I inside A. Functor `D` maps I to Set. We could just say that for any object `a` in the image of `K`, that is `a = K i`, the extended functor maps `a` to `D i`. The problem is, what to do with those objects in A that are outside of the image of `K`? The idea is that every such object is potentially connected through lots of morphisms to every object in the image of `K`. A functor must preserve these morphisms. The totality of morphisms from an object `a` to the image of `K` is characterized by the hom-functor:

`A(a, K -)`

Notice that this hom-functor is a composition of two functors:

`A(a, K -) = A(a, -) ∘ K`

The right Kan extension is the right adjoint of functor composition:

`[I, Set](F' ∘ K, D) ≅ [A, Set](F', RanKD)`

Let’s see what happens when we replace `F'` with the hom functor:

`[I, Set](A(a, -) ∘ K, D) ≅ [A, Set](A(a, -), RanKD)`

and then inline the composition:

`[I, Set](A(a, K -), D) ≅ [A, Set](A(a, -), RanKD)`

The right hand side can be reduced using the Yoneda lemma:

`[I, Set](A(a, K -), D) ≅ RanKD a`

We can now rewrite the set of natural transformations as the end to get this very convenient formula for the right Kan extension:

`RanKD a ≅ ∫i Set(A(a, K i), D i)`

There is an analogous formula for the left Kan extension in terms of a coend:

`LanKD a = ∫i A(K i, a) × D i`

To see that this is the case, we’ll show that this is indeed the left adjoint to functor composition:

`[A, Set](LanKD, F') ≅ [I, Set](D, F'∘ K)`

Let’s substitute our formula in the left hand side:

`[A, Set](∫i A(K i, -) × D i, F')`

This is a set of natural transformations, so it can be rewritten as an end:

`∫a Set(∫i A(K i, a) × D i, F'a)`

Using the continuity of the hom-functor, we can pull the coend out of the hom-set, turning it into an end:

`∫a ∫i Set(A(K i, a) × D i, F'a)`

We can use the product-exponential adjunction:

`∫a ∫i Set(A(K i, a), (F'a)D i)`

The exponential is isomorphic to the corresponding hom-set:

`∫a ∫i Set(A(K i, a), Set(D i, F'a))`

There is a theorem called the Fubini theorem that allows us to swap the two ends:

`∫i ∫a Set(A(K i, a), Set(D i, F'a))`

The inner end represents the set of natural transformations between two functors, so we can use the Yoneda lemma:

`∫i Set(D i, F'(K i))`

This is indeed the set of natural transformations that forms the right hand side of the adjunction we set out to prove:

`[I, Set](D, F'∘ K)`

These kinds of calculations using ends, coends, and the Yoneda lemma are pretty typical for the “calculus” of ends.

The end/coend formulas for Kan extensions can be easily translated to Haskell. Let’s start with the right extension:

`RanKD a ≅ ∫i Set(A(a, K i), D i)`

We replace the end with the universal quantifier, and hom-sets with function types:

`newtype Ran k d a = Ran (forall i. (a -> k i) -> d i)`

Looking at this definition, it’s clear that `Ran` must contain a value of type `a` to which the function can be applied, and a natural transformation between the two functors `k` and `d`. For instance, suppose that `k` is the tree functor, and `d` is the list functor, and you were given a `Ran Tree [] String`. If you pass it a function:

`f :: String -> Tree Int`

you’ll get back a list of `Int`, and so on. The right Kan extension will use your function to produce a tree and then repackage it into a list. For instance, you may pass it a parser that generates a parsing tree from a string, and you’ll get a list that corresponds to the depth-first traversal of this tree.
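This tree-to-list example can be spelled out concretely. The `Tree` type and the `flatten` traversal below are my own stand-ins for the parser mentioned above:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The right Kan extension, as defined in the text:
newtype Ran k d a = Ran (forall i. (a -> k i) -> d i)

data Tree a = Leaf a | Node (Tree a) (Tree a)

-- The natural transformation Tree ~> []: depth-first traversal.
flatten :: Tree a -> [a]
flatten (Leaf a)   = [a]
flatten (Node l r) = flatten l ++ flatten r

-- A Ran Tree [] String holds a String; whatever tree your
-- function builds from that String gets repackaged as a list.
example :: Ran Tree [] String
example = Ran (\f -> flatten (f "category"))

runRan :: Ran k d a -> (forall i. (a -> k i) -> d i)
runRan (Ran g) = g
```

Passing `example` the function `\s -> Node (Leaf (length s)) (Leaf 0)` produces the depth-first listing of the resulting two-leaf tree.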

The right Kan extension can be used to calculate the left adjoint of a given functor by replacing the functor `d` with the identity functor. This leads to the left adjoint of a functor `k` being represented by the set of polymorphic functions of the type:

`forall i. (a -> k i) -> i`

Suppose that `k` is the forgetful functor from the category of monoids. The universal quantifier then goes over all monoids. Of course, in Haskell we cannot express monoidal laws, but the following is a decent approximation of the resulting free functor (the forgetful functor `k` is an identity on objects):

`type Lst a = forall i. Monoid i => (a -> i) -> i`

As expected, it generates free monoids, or Haskell lists:

```toLst :: [a] -> Lst a
toLst as = \f -> foldMap f as

fromLst :: Lst a -> [a]
fromLst f = f (\a -> [a])```

The left Kan extension is a coend:

`LanKD a = ∫i A(K i, a) × D i`

so it translates to an existential quantifier. Symbolically:

`Lan k d a = exists i. (k i -> a, d i)`

This can be encoded in Haskell using GADTs, or using a universally quantified data constructor:

`data Lan k d a = forall i. Lan (k i -> a) (d i)`

The interpretation of this data structure is that it contains a function that takes a container of some unspecified `i`s and produces an `a`. It also has a container of those `i`s. Since you have no idea what `i`s are, the only thing you can do with this data structure is to retrieve the container of `i`s, repack it into the container defined by the functor `k` using a natural transformation, and call the function to obtain the `a`. For instance, if `d` is a tree, and `k` is a list, you can serialize the tree, call the function with the resulting list, and obtain an `a`.

The left Kan extension can be used to calculate the right adjoint of a functor. We know that the right adjoint of the product functor is the exponential, so let’s try to implement it using the Kan extension (here `I` is the identity functor):

`type Exp a b = Lan ((,) a) I b`

This is indeed isomorphic to the function type, as witnessed by the following pair of functions:

```toExp :: (a -> b) -> Exp a b
toExp f = Lan (f . fst) (I ())

fromExp :: Exp a b -> (a -> b)
fromExp (Lan f (I x)) = \a -> f (a, x)```

Notice that, as described earlier in the general case, we performed the following steps: (1) retrieved the container of `x` (here, it’s just a trivial identity container), and the function `f`, (2) repackaged the container using the natural transformation between the identity functor and the pair functor, and (3) called the function `f`.
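For completeness, here is that example assembled into a compilable form, with the identity functor `I` spelled out (its definition is my assumption, consistent with the constructors used above):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- The left Kan extension as an existential type:
data Lan k d a = forall i. Lan (k i -> a) (d i)

-- The identity functor:
newtype I a = I a

-- The exponential as the right adjoint of the product functor:
type Exp a b = Lan ((,) a) I b

toExp :: (a -> b) -> Exp a b
toExp f = Lan (f . fst) (I ())

fromExp :: Exp a b -> (a -> b)
fromExp (Lan f (I x)) = \a -> f (a, x)
```

Round-tripping a function through `toExp`/`fromExp` recovers the original function, witnessing the isomorphism.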

Free Functor

An interesting application of Kan extensions is the construction of a free functor. It’s the solution to the following practical problem: suppose you have a type constructor — that is a mapping of objects. Is it possible to define a functor based on this type constructor? In other words, can we define a mapping of morphisms that would extend this type constructor to a full-blown endofunctor?

The key observation is that a type constructor can be described as a functor whose domain is a discrete category. A discrete category has no morphisms other than the identity morphisms. Given a category C, we can always construct a discrete category |C| by simply discarding all non-identity morphisms. A functor `F` from |C| to C is then a simple mapping of objects, or what we call a type constructor in Haskell. There is also a canonical functor `J` that injects |C| into C: it’s an identity on objects (and on identity morphisms). The left Kan extension of `F` along `J`, if it exists, is then a functor from C to C:

`LanJ F a = ∫ i C(J i, a) × F i`

It’s called a free functor based on `F`.

In Haskell, we would write it as:

`data FreeF f a = forall i. FMap (i -> a) (f i)`

Indeed, for any type constructor `f`, `FreeF f` is a functor:

```instance Functor (FreeF f) where
fmap g (FMap h fi) = FMap (g . h) fi```

As you can see, the free functor fakes the lifting of a function by recording both the function and its argument. It accumulates the lifted functions by recording their composition. Functor rules are automatically satisfied. This construction was used in a paper Freer Monads, More Extensible Effects.
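Here is a small runnable sketch: `embed` lifts any value of `f a` into `FreeF f a`, and mapped functions merely accumulate by composition. The `Pair` container and the `lower` interpreter are made up so the result can be observed:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data FreeF f a = forall i. FMap (i -> a) (f i)

instance Functor (FreeF f) where
  fmap g (FMap h fi) = FMap (g . h) fi

-- f does not need to be a Functor; any type constructor will do
embed :: f a -> FreeF f a
embed = FMap id

-- A toy container, purely for observing the result (illustrative)
data Pair a = Pair a a

-- Interpret FreeF Pair by finally applying the accumulated function
lower :: FreeF Pair a -> (a, a)
lower (FMap h (Pair x y)) = (h x, h y)
```

For instance, `lower (fmap (+ 1) (fmap (* 2) (embed (Pair 1 2))))` applies the accumulated composition `(+ 1) . (* 2)` to both elements.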

Alternatively, we can use the right Kan extension for the same purpose:

`newtype FreeF f a = FreeF (forall i. (a -> i) -> f i)`

It’s easy to check that this is indeed a functor:

```instance Functor (FreeF f) where
fmap g (FreeF r) = FreeF (\bi -> r (bi . g))```

Next: Enriched Categories.

The Free Theorem for Ends

In Haskell, the end of a profunctor `p` is defined as a product of all diagonal elements:

`forall c. p c c`

together with a family of projections:

```pi :: Profunctor p => forall c. (forall a. p a a) -> p c c
pi e = e```

In category theory, the end must also satisfy the wedge condition which, in (type-annotated) Haskell, could be written as:

`dimap f idb . pib = dimap ida f . pia`

for any `f :: a -> b`.
Using a suitable formulation of parametricity, this equation can be shown to be a free theorem. Let’s first review the free theorem for functors before generalizing it to profunctors.

Functor Characterization

You may think of a functor as a container that has a shape and contents. You can manipulate the contents without changing the shape using `fmap`. In general, when applying `fmap`, you not only change the values stored in the container, you change their type as well. To really capture the shape of the container, you have to consider not only all possible mappings, but also more general relations between different contents.

A function is directional, and so is `fmap`, but relations don’t favor either side. They can map multiple values to the same value, and they can map one value to multiple values. Any relation on values induces a relation on containers. For a given functor `F`, if there is a relation `a` between type `A` and type `A'`:

`A <=a=> A'`

then there is a relation between type `F A` and `F A'`:

`F A <=(F a)=> F A'`

We call this induced relation `F a`.

For instance, consider the relation between students and their grades. Each student may have multiple grades (if they take multiple courses) so this relation is not a function. Given a list of students and a list of grades, we would say that the lists are related if and only if they match at each position. It means that they have to be of equal length, and the first grade on the list of grades must belong to the first student on the list of students, and so on. Of course, a list is a very simple container, but this property can be generalized to any functor we can define in Haskell using algebraic data types.
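For decidable relations, the induced relation on lists can be written down directly. The `isGradeOf` relation below is made up for illustration:

```haskell
-- A decidable relation represented as a Boolean-valued function
type Rel a b = a -> b -> Bool

-- The induced relation on lists: equal length, related position by position
liftRel :: Rel a b -> Rel [a] [b]
liftRel r xs ys = length xs == length ys && and (zipWith r xs ys)

-- A made-up grade/student relation
isGradeOf :: Rel Int String
isGradeOf 90 "alice" = True
isGradeOf 75 "bob"   = True
isGradeOf _  _       = False
```

With these definitions, `liftRel isGradeOf [90, 75] ["alice", "bob"]` holds, while permuting one list or changing its length breaks the relation.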

The fact that `fmap` doesn’t change the shape of the container can be expressed as a “theorem for free” using relations. We start with two related containers:

```xs :: F A
xs':: F A'```

where `A` and `A'` are related through some relation `a`. We want related containers to be `fmap`ped to related containers. But we can’t use the same function to map both containers, because they contain different types. So we have to use two related functions instead. Related functions map related types to related types so, if we have:

```f :: A -> B
f':: A'-> B'```

and `A` is related to `A'` through `a`, we want `B` to be related to `B'` through some relation `b`. Also, we want the two functions to map related elements to related elements. So if `x` is related to `x'` through `a`, we want `f x` to be related to `f' x'` through `b`. In that case, we’ll say that `f` and `f'` are related through the relation that we call `a->b`:

`f <=(a->b)=> f'`

For instance, if `f` is mapping students’ SSNs to last names, and `f'` is mapping letter grades to numerical grades, the results will be related through the relation between students’ last names and their numerical grades.

To summarize, we require that for any two relations:

```A <=a=> A'
B <=b=> B'```

and any two functions:

```f :: A -> B
f':: A'-> B'```

such that:

`f <=(a->b)=> f'`

and any two containers:

```xs :: F A
xs':: F A'```

we have:

```if       xs <=(F a)=> xs'
then   F xs <=(F b)=> F xs'```

This characterization can be extended, with suitable changes, to contravariant functors.

Profunctor Characterization

A profunctor is a functor of two variables. It is contravariant in the first variable and covariant in the second. A profunctor can lift two functions simultaneously using `dimap`:

```class Profunctor p where
dimap :: (a -> b) -> (c -> d) -> p b c -> p a d```

We want `dimap` to preserve relations between profunctor values. We start by picking any relations `a`, `b`, `c`, and `d` between types:

```A <=a=> A'
B <=b=> B'
C <=c=> C'
D <=d=> D'
```

For any functions:

```f  :: A -> B
f' :: A'-> B'
g  :: C -> D
g' :: C'-> D'```

that are related through the following relations induced by function types:

```f <=(a->b)=> f'
g <=(c->d)=> g'```

we define:

```xs :: p B C
xs':: p B'C'```

The following condition must be satisfied:

```if             xs <=(p b c)=> xs'
then   (p f g) xs <=(p a d)=> (p f' g') xs'
```

where `p f g` stands for the lifting of the two functions by the profunctor `p`.

Here’s a quick sanity check. If `b` and `c` are functions:

```b :: B'-> B
c :: C -> C'```

then the relation:

`xs <=(p b c)=> xs'`

becomes:

```xs' = dimap b c xs
```

If `a` and `d` are functions:

```a :: A'-> A
d :: D -> D'
```

then these relations:

```f <=(a->b)=> f'
g <=(c->d)=> g'```

become:

```f . a = b . f'
d . g = g'. c```

and this relation:

`(p f g) xs <=(p a d)=> (p f' g') xs'`

becomes:

`(p f' g') xs' = dimap a d ((p f g) xs)`

Substituting `xs'`, we get:

`dimap f' g' (dimap b c xs) = dimap a d (dimap f g xs)`

and using functoriality:

```dimap (b . f') (g'. c) = dimap (f . a) (d . g)
```

which is identically true.
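The functoriality step in this chain can be checked on a concrete profunctor. Here is a minimal sketch using a newtype wrapper `Fn` (an ad hoc name) around the function arrow, whose `dimap` is pre- and post-composition:

```haskell
class Profunctor p where
  dimap :: (a -> b) -> (c -> d) -> p b c -> p a d

-- The function arrow, wrapped so we can give it an instance
newtype Fn a b = Fn { runFn :: a -> b }

instance Profunctor Fn where
  dimap f g (Fn h) = Fn (g . h . f)

-- Functoriality: dimap f' g' . dimap b c  ==  dimap (b . f') (g' . c)
lhs :: Fn Int Int
lhs = dimap (+ 1) (* 2) (dimap (* 3) (subtract 1) (Fn id))

rhs :: Fn Int Int
rhs = dimap ((* 3) . (+ 1)) ((* 2) . subtract 1) (Fn id)
```

Both `lhs` and `rhs` denote the same function, as pointwise evaluation confirms.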

Special Case of Profunctor Characterization

We are interested in the diagonal elements of a profunctor. Let’s first specialize the general case to:

```C = B
C'= B'
c = b```

to get:

```xs :: p B B
xs':: p B'B'```

and

```if             xs <=(p b b)=> xs'
then   (p f g) xs <=(p a d)=> (p f' g') xs'
```

Choosing the following substitutions:

```A = A'= B
D = D'= B'
a = id
d = id
f = id
g'= id
f'= g```

we get:

```if              xs <=(p b b)=> xs'
then   (p id g) xs <=(p id id)=> (p g id) xs'
```

Since `p id id` is the identity relation, we get:

`(p id g) xs = (p g id) xs'`

or

`dimap id g xs = dimap g id xs'`

Free Theorem

We apply the free theorem to the term `xs`:

`xs :: forall c. p c c`

It must be related to itself through the relation that is induced by its type:

`xs <=(forall b. p b b)=> xs`

for any relation `b`:

`B <=b=> B'`

Universal quantification translates to a relation between different instantiations of the polymorphic value:

`xsB <=(p b b)=> xsB'`

Notice that we can write:

```xsB = piB xs
xsB'= piB'xs```

using the projections we defined earlier.

We have just shown that this equation leads to:

`dimap id g xs = dimap g id xs'`

which shows that the wedge condition is indeed a free theorem.

Natural Transformations

Here’s another quick application of the free theorem. The set of natural transformations may be represented as an end of the following profunctor:

`type NatP a b = F a -> G b`
```instance Profunctor NatP where
dimap f g alpha = fmap g . alpha . fmap f```

The free theorem tells us that for any `mu :: NatP c c`:

`(dimap id g) mu = (dimap g id) mu`

which is the naturality condition:

`mu . fmap g = fmap g . mu`

It’s been known for some time that, in Haskell, naturality follows from parametricity, so this is not surprising.
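As a concrete instance, take `F = []`, `G = Maybe`, and the natural transformation `safeHead`; the naturality condition can be checked pointwise:

```haskell
-- safeHead is a natural transformation from [] to Maybe
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Naturality: fmap g . safeHead  ==  safeHead . fmap g
lhs, rhs :: [Int] -> Maybe Int
lhs = fmap (+ 1) . safeHead
rhs = safeHead . fmap (+ 1)
```

Parametricity guarantees that `lhs` and `rhs` agree on every input, without having to inspect the definition of `safeHead`.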

Acknowledgment

I’d like to thank Edward Kmett for reviewing the draft of this post.

Bibliography

1. Bartosz Milewski, Ends and Coends
2. Edsko de Vries, Parametricity Tutorial, Part 1, Part 2, Contravariant Functions.

There are many intuitions that we may attach to morphisms in a category, but we can all agree that if there is a morphism from the object `a` to the object `b` then the two objects are in some way “related.” A morphism is, in a sense, the proof of this relation. This is clearly visible in any poset category, where a morphism is a relation. In general, there may be many “proofs” of the same relation between two objects. These proofs form a set that we call the hom-set. When we vary the objects, we get a mapping from pairs of objects to sets of “proofs.” This mapping is functorial — contravariant in the first argument and covariant in the second. We can look at it as establishing a global relationship between objects in the category. This relationship is described by the hom-functor:

`C(-, =) :: Cop × C -> Set`

In general, any functor like this may be interpreted as establishing a relation between objects in a category. A relation may also involve two different categories C and D. A functor, which describes such a relation, has the following signature and is called a profunctor:

`p :: Dop × C -> Set`

Mathematicians say that it’s a profunctor from `C` to `D` (notice the inversion), and use a slashed arrow as a symbol for it:

`C ↛ D`

You may think of a profunctor as a proof-relevant relation between objects of C and objects of D, where the elements of the set symbolize proofs of the relation. Whenever `p a b` is empty, there is no relation between `a` and `b`. Keep in mind that relations don’t have to be symmetric.

Another useful intuition is the generalization of the idea that an endofunctor is a container. A profunctor value of the type `p a b` could then be considered a container of `b`s that are keyed by elements of type `a`. In particular, an element of the hom-profunctor is a function from `a` to `b`.

In Haskell, a profunctor is defined as a two-argument type constructor `p` equipped with the method called `dimap`, which lifts a pair of functions, the first going in the “wrong” direction:

```class Profunctor p where
dimap :: (c -> a) -> (b -> d) -> p a b -> p c d```

The functoriality of the profunctor tells us that if we have a proof that `a` is related to `b`, then we get the proof that `c` is related to `d`, as long as there is a morphism from `c` to `a` and another from `b` to `d`. Or, we can think of the first function as translating new keys to the old keys, and the second function as modifying the contents of the container.

For profunctors acting within one category, we can extract quite a lot of information from diagonal elements of the type `p a a`. We can prove that `b` is related to `c` as long as we have a pair of morphisms `b->a` and `a->c`. Even better, we can use a single morphism to reach off-diagonal values. For instance, if we have a morphism `f::a->b`, we can lift the pair `<f, idb>` to go from `p b b` to `p a b`:

`dimap f id pbb :: p a b`

Or we can lift the pair `<ida, f>` to go from `p a a` to `p a b`:

`dimap id f paa :: p a b`

Dinatural Transformations

Since profunctors are functors, we can define natural transformations between them in the standard way. In many cases, though, it’s enough to define the mapping between diagonal elements of two profunctors. Such a transformation is called a dinatural transformation, provided it satisfies the commuting conditions that reflect the two ways we can connect diagonal elements to non-diagonal ones. A dinatural transformation between two profunctors `p` and `q`, which are members of the functor category `[Cop × C, Set]`, is a family of morphisms:

`αa :: p a a -> q a a`

for which the following diagram commutes, for any `f::a->b`:

Notice that this is strictly weaker than the naturality condition. If `α` were a natural transformation in `[Cop × C, Set]`, the above diagram could be constructed from two naturality squares and one functoriality condition (profunctor `q` preserving composition):

Notice that a component of a natural transformation `α` in `[Cop × C, Set]` is indexed by a pair of objects `α a b`. A dinatural transformation, on the other hand, is indexed by one object, since it only maps diagonal elements of the respective profunctors.

Ends

We are now ready to advance from “algebra” to what could be considered the “calculus” of category theory. The calculus of ends (and coends) borrows ideas and even some notation from traditional calculus. In particular, the coend may be understood as an infinite sum or an integral, whereas the end is similar to an infinite product. There is even something that resembles the Dirac delta function.

An end is a generalization of a limit, with the functor replaced by a profunctor. Instead of a cone, we have a wedge. The base of a wedge is formed by diagonal elements of a profunctor `p`. The apex of the wedge is an object (here, a set, since we are considering Set-valued profunctors), and the sides are a family of functions mapping the apex to the sets in the base. You may think of this family as one polymorphic function — a function that’s polymorphic in its return type:

`α :: forall a . apex -> p a a`

Unlike in cones, within a wedge we don’t have any functions that would connect vertices of the base. However, as we’ve seen earlier, given any morphism `f::a->b` in C, we can connect both `p a a` and `p b b` to the common set `p a b`. We therefore insist that the following diagram commute:

This is called the wedge condition. It can be written as:

`p ida f ∘ αa = p f idb ∘ αb`

or, in Haskell notation:

`dimap id f . alpha = dimap f id . alpha`

We can now proceed with the universal construction and define the end of `p` as the universal wedge — a set `e` together with a family of functions `π` such that for any other wedge with the apex `a` and a family `α` there is a unique function `h::a->e` that makes all triangles commute:

`πa ∘ h = αa`

The symbol for the end is the integral sign, with the “integration variable” in the subscript position:

`∫c p c c`

Components of `π` are called projection maps for the end:

`πa :: ∫c p c c -> p a a`

Note that if C is a discrete category (no morphisms other than the identities) the end is just a global product of all diagonal entries of `p` across the whole category C. Later I’ll show you that, in the more general case, there is a relationship between the end and this product through an equalizer.

In Haskell, the end formula translates directly to the universal quantifier:

`forall a. p a a`

Strictly speaking, this is just a product of all diagonal elements of `p`, but the wedge condition is satisfied automatically due to parametricity (I’ll explain it in a separate blog post). For any function `f :: a -> b`, the wedge condition reads:

`dimap f id . pi = dimap id f . pi`

or, with type annotations:

`dimap f idb . pib = dimap ida f . pia`

where both sides of the equation have the type:

`Profunctor p => (forall c. p c c) -> p a b`

and `pi` is the polymorphic projection:

```pi :: Profunctor p => forall c. (forall a. p a a) -> p c c
pi e = e```

Here, type inference automatically picks the right component of `e`.
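As a quick illustration (with the projection renamed `pi'` to avoid clashing with Prelude’s `pi`), we can give `(->)` a `Profunctor` instance and project the only parametric inhabitant of the end of the hom-profunctor — the identity:

```haskell
{-# LANGUAGE RankNTypes #-}

class Profunctor p where
  dimap :: (a -> b) -> (c -> d) -> p b c -> p a d

instance Profunctor (->) where
  dimap f g h = g . h . f

-- The polymorphic projection from the end
pi' :: Profunctor p => (forall a. p a a) -> p c c
pi' e = e

-- By parametricity, the sole element of forall a. a -> a is id
idEnd :: forall a. a -> a
idEnd = id

-- Type inference picks the component; here c ~ Int
atInt :: Int -> Int
atInt = pi' idEnd
```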

Just as we were able to express the whole set of commutation conditions for a cone as one natural transformation, likewise we can group all the wedge conditions into one dinatural transformation. For that we need the generalization of the constant functor `Δc` to a constant profunctor that maps all pairs of objects to a single object `c`, and all pairs of morphisms to the identity morphism for this object. A wedge is a dinatural transformation from that functor to the profunctor `p`. Indeed, the dinaturality hexagon shrinks down to the wedge diamond when we realize that `Δc` lifts all morphisms to one identity function.

Ends can also be defined for target categories other than Set, but here we’ll only consider Set-valued profunctors and their ends.

Ends as Equalizers

The commutation condition in the definition of the end can be written using an equalizer. First, let’s define two functions (I’m using Haskell notation, because mathematical notation seems to be less user-friendly in this case). These functions correspond to the two converging branches of the wedge condition:

```lambda :: Profunctor p => p a a -> (a -> b) -> p a b
lambda paa f = dimap id f paa

rho :: Profunctor p => p b b -> (a -> b) -> p a b
rho pbb f = dimap f id pbb```

Both functions map diagonal elements of the profunctor `p` to polymorphic functions of the type:

`type ProdP p = forall a b. (a -> b) -> p a b`

These functions have different types. However, we can unify their types, if we form one big product type, gathering together all diagonal elements of `p`:

`newtype DiaProd p = DiaProd (forall a. p a a)`

The functions `lambda` and `rho` induce two mappings from this product type:

```lambdaP :: Profunctor p => DiaProd p -> ProdP p
lambdaP (DiaProd paa) = lambda paa

rhoP :: Profunctor p => DiaProd p -> ProdP p
rhoP (DiaProd paa) = rho paa```

The end of `p` is the equalizer of these two functions. Remember that the equalizer picks the largest subset on which two functions are equal. In this case it picks the subset of the product of all diagonal elements for which the wedge diagrams commute.

Natural Transformations as Ends

The most important example of an end is the set of natural transformations. A natural transformation between two functors `F` and `G` is a family of morphisms picked from hom-sets of the form `C(F a, G a)`. If it weren’t for the naturality condition, the set of natural transformations would be just the product of all these hom-sets. In fact, in Haskell, it is:

`forall a. f a -> g a`

The reason it works in Haskell is because naturality follows from parametricity. Outside of Haskell, though, not all diagonal sections across such hom-sets will yield natural transformations. But notice that the mapping:

`<a, b> -> C(F a, G b)`

is a profunctor, so it makes sense to study its end. This is the wedge condition:

Let’s just pick one element from the set `∫c C(F c, G c)`. The two projections will map this element to two components of a particular transformation, let’s call them:

```τa :: F a -> G a
τb :: F b -> G b```

In the left branch, we lift a pair of morphisms `<ida, G f>` using the hom-functor. You may recall that such lifting is implemented as simultaneous pre- and post-composition. When acting on `τa` the lifted pair gives us:

`G f ∘ τa ∘ ida`

The other branch of the diagram gives us:

`idb ∘ τb ∘ F f`

Their equality, demanded by the wedge condition, is nothing but the naturality condition for `τ`.

Coends

As expected, the dual to an end is called a coend. It is constructed from a dual to a wedge called a cowedge (pronounced co-wedge, not cow-edge).

An edgy cow?

The symbol for a coend is the integral sign with the “integration variable” in the superscript position:

`∫ c p c c`

Just like the end is related to a product, the coend is related to a coproduct, or a sum (in this respect, it resembles an integral, which is a limit of a sum). Rather than having projections, we have injections going from the diagonal elements of the profunctor down to the coend. If it weren’t for the cowedge conditions, we could say that the coend of the profunctor `p` is either `p a a`, or `p b b`, or `p c c`, and so on. Or we could say that there exists such an `a` for which the coend is just the set `p a a`. The universal quantifier that we used in the definition of the end turns into an existential quantifier for the coend.

This is why, in pseudo-Haskell, we would define the coend as:

`exists a. p a a`

The standard way of encoding existential quantifiers in Haskell is to use universally quantified data constructors. We can thus define:

`data Coend p = forall a. Coend (p a a)`

The logic behind this is that it should be possible to construct a coend using a value of any of the family of types `p a a`, no matter what `a` we chose.

Just like an end can be defined using an equalizer, a coend can be described using a coequalizer. All the cowedge conditions can be summarized by taking one gigantic coproduct of `p a b` for all possible functions `b->a`. In Haskell, that would be expressed as an existential type:

`data SumP p = forall a b. SumP (b -> a) (p a b)`

There are two ways of evaluating this sum type, by lifting the function using `dimap` and applying it to the profunctor `p`:

```lambda, rho :: Profunctor p => SumP p -> DiagSum p
lambda (SumP f pab) = DiagSum (dimap f id pab)
rho    (SumP f pab) = DiagSum (dimap id f pab)```

where `DiagSum` is the sum of diagonal elements of `p`:

`data DiagSum p = forall a. DiagSum (p a a)`

The coequalizer of these two functions is the coend. A coequilizer is obtained from `DiagSum p` by identifying values that are obtained by applying `lambda` or `rho` to the same argument. Here, the argument is a pair consisting of a function `b->a` and an element of `p a b`. The application of `lambda` and `rho` produces two potentially different values of the type `DiagSum p`. In the coend, these two values are identified, making the cowedge condition automatically satisfied.

The process of identification of related elements in a set is formally known as taking a quotient. To define a quotient we need an equivalence relation `~`, a relation that is reflexive, symmetric, and transitive:

```a ~ a
if a ~ b then b ~ a
if a ~ b and b ~ c then a ~ c```

Such a relation splits the set into equivalence classes. Each class consists of elements that are related to each other. We form a quotient set by picking one representative from each class. A classic example is the definition of rational numbers as pairs of whole numbers with the following equivalence relation:

`(a, b) ~ (c, d) iff a * d = b * c`

It’s easy to check that this is an equivalence relation. A pair `(a, b)` is interpreted as a fraction `a/b`, and fractions whose numerators and denominators differ by a common factor are identified. A rational number is an equivalence class of such fractions.

You might recall from our earlier discussion of limits and colimits that the hom-functor is continuous, that is, it preserves limits. Dually, the contravariant hom-functor turns colimits into limits. These properties can be generalized to ends and coends, which are a generalization of limits and colimits, respectively. In particular, we get a very useful identity for converting coends to ends:

`Set(∫ x p x x, c) ≅ ∫x Set(p x x, c)`

Let’s have a look at it in pseudo-Haskell:

`(exists x. p x x) -> c ≅ forall x. p x x -> c`

It tells us that a function that takes an existential type is equivalent to a polymorphic function. This makes perfect sense, because such a function must be prepared to handle any one of the types that may be encoded in the existential type. It’s the same principle that tells us that a function that accepts a sum type must be implemented as a case statement, with a tuple of handlers, one for every type present in the sum. Here, the sum type is replaced by a coend, and a family of handlers becomes an end, or a polymorphic function.
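This equivalence can be witnessed directly in Haskell. The sketch below uses a made-up constant profunctor `K`, just to have something concrete to evaluate:

```haskell
{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

data Coend p = forall a. Coend (p a a)

-- One half of Set(∫ x p x x, c) ≅ ∫x Set(p x x, c)
toEnd :: (Coend p -> c) -> (forall x. p x x -> c)
toEnd f pxx = f (Coend pxx)

-- ... and the other half
fromEnd :: (forall x. p x x -> c) -> (Coend p -> c)
fromEnd g (Coend pxx) = g pxx

-- A constant profunctor carrying an Int (illustrative)
newtype K a b = K Int

total :: Coend K -> Int
total (Coend (K n)) = n
```

A handler of the existential, like `total`, becomes a polymorphic function via `toEnd`, and vice versa via `fromEnd`.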

Ninja Yoneda Lemma

The set of natural transformations that appears in the Yoneda lemma may be encoded using an end, resulting in the following formulation:

`∫z Set(C(a, z), F z) ≅ F a`

There is also a dual formula:

`∫ z C(a, z) × F z ≅ F a`

This identity is strongly reminiscent of the formula for the Dirac delta function (a function `δ(a - z)`, or rather a distribution, that has an infinite peak at `a = z`). Here, the hom-functor plays the role of the delta function.

Together these two identities are sometimes called the Ninja Yoneda lemma.
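In Haskell, the end form of the lemma is the familiar Yoneda isomorphism. Here is a sketch (the names `toF` and `fromF` are ad hoc):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The end form:  ∫z Set(C(a, z), F z) ≅ F a
-- becomes:       (forall z. (a -> z) -> f z) ≅ f a
toF :: (forall z. (a -> z) -> f z) -> f a
toF phi = phi id

fromF :: Functor f => f a -> (forall z. (a -> z) -> f z)
fromF fa g = fmap g fa

-- The round trip is the identity, witnessing the isomorphism
roundTrip :: Functor f => f a -> f a
roundTrip fa = toF (fromF fa)
```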

To prove the second formula, we will use the consequence of the Yoneda embedding, which states that two objects are isomorphic if and only if their hom-functors are isomorphic. In other words `a ≅ b` if and only if there is a natural transformation of the type:

`[C, Set](C(a, -), C(b, =))`

that is an isomorphism.

We start by inserting the left-hand side of the identity we want to prove inside a hom-functor that’s going to some arbitrary object `c`:

`Set(∫ z C(a, z) × F z, c)`

Using the continuity argument, we can replace the coend with the end:

`∫z Set(C(a, z) × F z, c)`

We can now take advantage of the adjunction between the product and the exponential:

`∫z Set(C(a, z), c^(F z))`

We can “perform the integration” by using the Yoneda lemma to get:

`c^(F a)`

This exponential object is isomorphic to the hom-set:

`Set(F a, c)`

Finally, we take advantage of the Yoneda embedding to arrive at the isomorphism:

`∫ z C(a, z) × F z ≅ F a`

Profunctor Composition

Let’s explore further the idea that a profunctor describes a relation — more precisely, a proof-relevant relation, meaning that the set `p a b` represents the set of proofs that `a` is related to `b`. If we have two relations `p` and `q` we can try to compose them. We’ll say that `a` is related to `b` through the composition of `q` after `p` if there exists an intermediary object `c` such that both `q b c` and `p c a` are non-empty. The proofs of this new relation are all pairs of proofs of individual relations. Therefore, with the understanding that the existential quantifier corresponds to a coend, and the cartesian product of two sets corresponds to “pairs of proofs,” we can define composition of profunctors using the following formula:

`(q ∘ p) a b = ∫ c p c a × q b c`

Here’s the equivalent Haskell definition from `Data.Profunctor.Composition`, after some renaming:

```data Procompose q p a b where
Procompose :: q a c -> p c b -> Procompose q p a b
```

This uses generalized algebraic data type, or GADT, syntax, in which a free type variable (here `c`) is automatically existentially quantified. The (uncurried) data constructor `Procompose` is thus equivalent to:

`exists c. (q a c, p c b)`

The unit of the composition so defined is the hom-functor — this immediately follows from the Ninja Yoneda lemma. It makes sense, therefore, to ask whether there is a category in which profunctors serve as morphisms. The answer is positive, with the caveat that both associativity and identity laws for profunctor composition hold only up to natural isomorphism. Such a category, where laws are valid up to isomorphism, is called a bicategory (which is more general than a 2-category). So we have a bicategory Prof, in which objects are categories, morphisms are profunctors, and morphisms between morphisms (a.k.a., two-cells) are natural transformations. In fact, one can go even further, because besides profunctors, we also have regular functors as morphisms between categories. A category which has two types of morphisms is called a double category.
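As a sanity check on the definition, instantiating both profunctors to the hom-profunctor `(->)` collapses the composition of “proofs” to ordinary function composition (the helper names below are illustrative):

```haskell
{-# LANGUAGE GADTs #-}

data Procompose q p a b where
  Procompose :: q a c -> p c b -> Procompose q p a b

-- With both profunctors set to (->), composing proofs is
-- just composing functions through the hidden intermediate type
collapse :: Procompose (->) (->) a b -> (a -> b)
collapse (Procompose f g) = g . f

-- In the other direction, split through the target object itself
expand :: (a -> b) -> Procompose (->) (->) a b
expand f = Procompose f id
```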

Profunctors play an important role in the Haskell lens library and in the arrow library.

Next: Kan extensions.

Let’s review the definitions. A monad is an endofunctor `m` equipped with two natural transformations that satisfy some coherence conditions. The components of these transformations at `a` are:

```ηa :: a -> m a
μa :: m (m a) -> m a```

An algebra for the same endofunctor is a selection of a particular object — the carrier `a` — together with the morphism:

`alg :: m a -> a`

The first thing to notice is that the algebra goes in the opposite direction to `ηa`. The intuition is that `ηa` creates a trivial expression from a value of type `a`. The first coherence condition that makes the algebra compatible with the monad ensures that evaluating this expression using the algebra whose carrier is `a` gives us back the original value:

`alg ∘ ηa = ida`

The second condition arises from the fact that there are two ways of evaluating the doubly nested expression `m (m a)`. We can first apply `μa` to flatten the expression, and then use the evaluator of the algebra; or we can apply the lifted evaluator to evaluate the inner expressions, and then apply the evaluator to the result. We’d like the two strategies to be equivalent:

`alg ∘ μa = alg ∘ m alg`

Here, `m alg` is the morphism resulting from lifting `alg` using the functor `m`. The following commuting diagrams describe the two conditions (I replaced `m` with `T` in anticipation of what follows):

We can also express these conditions in Haskell:

```alg . return = id
alg . join = alg . fmap alg```

Let’s look at a small example. An algebra for a list endofunctor consists of some type `a` and a function that produces an `a` from a list of `a`. We can express this function using `foldr` by choosing both the element type and the accumulator type to be equal to `a`:

`foldr :: (a -> a -> a) -> a -> [a] -> a`

This particular algebra is specified by a two-argument function, let’s call it `f`, and a value `z`. The list functor happens to also be a monad, with `return` turning a value into a singleton list. The composition of the algebra, here `foldr f z`, after `return` takes `x` to:

`foldr f z [x] = x `f` z`

where the action of `f` is written in the infix notation. The algebra is compatible with the monad if the following coherence condition is satisfied for every `x`:

`x `f` z = x`

If we look at `f` as a binary operator, this condition tells us that `z` is the right unit.

The second coherence condition operates on a list of lists. The action of `join` concatenates the individual lists. We can then fold the resulting list. On the other hand, we can first fold the individual lists, and then fold the resulting list. Again, if we interpret `f` as a binary operator, this condition tells us that this binary operation is associative. These conditions are certainly fulfilled when `(a, f, z)` is a monoid.
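The two coherence conditions can be checked directly for a monoid, here `(Int, (+), 0)` (a sketch; `alg`, `unitLaw`, and `multLaw` are ad hoc names):

```haskell
-- The monoid (Int, (+), 0) as a compatible list algebra
alg :: [Int] -> Int
alg = foldr (+) 0

-- First coherence condition: alg . return = id
unitLaw :: Int -> Bool
unitLaw x = alg [x] == x

-- Second coherence condition: alg . join = alg . fmap alg
multLaw :: [[Int]] -> Bool
multLaw xss = alg (concat xss) == alg (map alg xss)
```

The first law holds because `0` is the unit of `(+)`; the second because `(+)` is associative.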

T-algebras

Since mathematicians prefer to call their monads `T`, they call algebras compatible with them T-algebras. T-algebras for a given monad T in a category C form a category called the Eilenberg-Moore category, often denoted by CT. Morphisms in that category are homomorphisms of algebras. These are the same homomorphisms we’ve seen defined for F-algebras.

A T-algebra is a pair consisting of a carrier object and an evaluator, `(a, f)`. There is an obvious forgetful functor `UT` from CT to C, which maps `(a, f)` to `a`. It also maps a homomorphism of T-algebras to a corresponding morphism between carrier objects in C. You may remember from our discussion of adjunctions that the left adjoint to a forgetful functor is called a free functor.

The left adjoint to `UT` is called `FT`. It maps an object `a` in C to a free algebra in CT. The carrier of this free algebra is `T a`. Its evaluator is a morphism from `T (T a)` back to `T a`. Since `T` is a monad, we can use the monadic `μa` (Haskell `join`) as the evaluator.

We still have to show that this is a T-algebra. For that, two coherence conditions must be satisfied:

`alg ∘ ηTa = idTa`
`alg ∘ μTa = alg ∘ T alg`

But these are just monadic laws, if you plug in `μ` for the algebra.

As you may recall, every adjunction defines a monad. It turns out that the adjunction between FT and UT defines the very monad `T` that was used in the construction of the Eilenberg-Moore category. Since we can perform this construction for every monad, we conclude that every monad can be generated from an adjunction. Later I’ll show you that there is another adjunction that generates the same monad.

Here’s the plan: First I’ll show you that `FT` is indeed the left adjoint of `UT`. I’ll do it by defining the unit and the counit of this adjunction and proving that the corresponding triangular identities are satisfied. Then I’ll show you that the monad generated by this adjunction is indeed our original monad.

The unit of the adjunction is the natural transformation:

`η :: I -> UT ∘ FT`

Let’s calculate the `a` component of this transformation. The identity functor gives us `a`. The free functor produces the free algebra `(T a, μa)`, and the forgetful functor reduces it to `T a`. Altogether we get a mapping from `a` to `T a`. We’ll simply use the unit of the monad `T` as the unit of this adjunction.

Let’s look at the counit:

`ε :: FT ∘ UT -> I`

Let’s calculate its component at some T-algebra `(a, f)`. The forgetful functor forgets the `f`, and the free functor produces the pair `(T a, μa)`. So in order to define the component of the counit `ε` at `(a, f)`, we need the right morphism in the Eilenberg-Moore category, that is, a homomorphism of T-algebras:

`(T a, μa) -> (a, f)`

Such a homomorphism should map the carrier `T a` to `a`. Let’s just resurrect the forgotten evaluator `f`. This time we’ll use it as a homomorphism of T-algebras. Indeed, the same commuting diagram that makes `f` a T-algebra may be re-interpreted to show that it’s a homomorphism of T-algebras: the condition `f ∘ μa = f ∘ T f` is both the second T-algebra law for `(a, f)` and the homomorphism condition for `f` viewed as a morphism from `(T a, μa)` to `(a, f)`.

We have thus defined the component of the counit natural transformation `ε` at `(a, f)` (an object in the category of T-algebras) to be `f`.

To complete the adjunction we also need to show that the unit and the counit satisfy the triangular identities. Component-wise, they read:

`μa ∘ T ηa = idTa`
`f ∘ ηa = ida`

The first one holds because of the unit law for the monad `T`. The second is just the unit law of the T-algebra `(a, f)`.

We have established that the two functors form an adjunction:

`FT ⊣ UT`

The composite:

`UT ∘ FT`

is the endofunctor in C that gives rise to the corresponding monad. Let’s see what its action on an object `a` is. The free algebra created by `FT` is `(T a, μa)`. The forgetful functor `UT` drops the evaluator. So, indeed, we have:

`UT ∘ FT = T`

As expected, the unit of the adjunction is the unit of the monad `T`.

You may remember that the counit of the adjunction produces monadic multiplication through the following formula:

`μ = R ∘ ε ∘ L`

This is a horizontal composition of three natural transformations, two of them being identity natural transformations mapping, respectively, `L` to `L` and `R` to `R`. The one in the middle, the counit, is a natural transformation whose component at an algebra `(a, f)` is `f`.

Let’s calculate the component `μa`. We first horizontally compose `ε` after `FT`, which results in the component of `ε` at `FTa`. Since `FT` takes `a` to the algebra `(T a, μa)`, and `ε` picks the evaluator, we end up with `μa`. Horizontal composition on the left with `UT` doesn’t change anything, since the action of `UT` on morphisms is trivial. So, indeed, the `μ` obtained from the adjunction is the same as the `μ` of the original monad `T`.

The Kleisli Category

We’ve seen the Kleisli category before. It’s a category constructed from another category C and a monad `T`. We’ll call this category CT. The objects in the Kleisli category CT are the objects of C, but the morphisms are different. A morphism `fK` from `a` to `b` in the Kleisli category corresponds to a morphism `f` from `a` to `T b` in the original category. We call this morphism a Kleisli arrow from `a` to `b`.

Composition of morphisms in the Kleisli category is defined in terms of monadic composition of Kleisli arrows. For instance, let’s compose `gK` after `fK`. In the Kleisli category we have:

```
fK :: a -> b
gK :: b -> c
```

which, in the category C, corresponds to:

```
f :: a -> T b
g :: b -> T c
```

We define the composition:

`hK = gK ∘ fK`

as a Kleisli arrow in C:

```
h :: a -> T c
h = μ ∘ (T g) ∘ f
```

In Haskell we would write it as:

`h = join . fmap g . f`
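A concrete instance, using the `Maybe` monad and two hypothetical Kleisli arrows of my own invention:

```haskell
import Control.Monad (join)

-- Two Kleisli arrows for the Maybe monad (illustrative examples).
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

safePred :: Int -> Maybe Int
safePred n
  | n > 0     = Just (n - 1)
  | otherwise = Nothing

-- Kleisli composition of safePred after halving: join . fmap g . f
halveThenPred :: Int -> Maybe Int
halveThenPred = join . fmap safePred . (`safeDiv` 2)
```

The same composition can be written with the fish operator from `Control.Monad`: `(`safeDiv` 2) >=> safePred`.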

There is a functor `F` from C to CT which acts trivially on objects. On morphisms, it maps `f` in C to a morphism in CT by creating a Kleisli arrow that embellishes the return value of `f`. Given a morphism:

`f :: a -> b`

it creates a morphism in CT with the corresponding Kleisli arrow:

`η ∘ f`

In Haskell we’d write it as:

`return . f`

We can also define a functor `G` from CT back to C. It takes an object `a` from the Kleisli category and maps it to an object `T a` in C. Its action on a morphism `fK` corresponding to a Kleisli arrow:

`f :: a -> T b`

is a morphism in C:

`T a -> T b`

given by first lifting `f` and then applying `μ`:

`μb ∘ T f`

`G fK = join . fmap f`

You may recognize this as the definition of monadic bind in terms of `join`.
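As a quick check, `bind` defined this way agrees with the built-in `>>=`:

```haskell
import Control.Monad (join)

-- Monadic bind expressed through join and fmap, as in the formula above.
bind :: Monad m => m a -> (a -> m b) -> m b
bind m f = join (fmap f m)
```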

It’s easy to see that the two functors form an adjunction:

`F ⊣ G`

and their composition `G ∘ F` reproduces the original monad `T`.

So this is the second adjunction that produces the same monad. In fact there is a whole category of adjunctions `Adj(C, T)` that result in the same monad `T` on C. The Kleisli adjunction we’ve just seen is the initial object in this category, and the Eilenberg-Moore adjunction is the terminal object.

Analogous constructions can be done for any comonad `W`. We can define a category of coalgebras that are compatible with a comonad. They satisfy the following coherence conditions:

`εa ∘ coa = ida`
`W coa ∘ coa = δa ∘ coa`

where `coa` is the coevaluation morphism of the coalgebra whose carrier is `a`:

`coa :: a -> W a`

and `ε` and `δ` are the two natural transformations defining the comonad (in Haskell, their components are called `extract` and `duplicate`).

There is an obvious forgetful functor `UW` from the category of these coalgebras to C. It just forgets the coevaluation. We’ll consider its right adjoint `FW`.

`UW ⊣ FW`

The right adjoint to a forgetful functor is called a cofree functor. `FW` generates cofree coalgebras. It assigns, to an object `a` in C, the coalgebra `(W a, δa)`. The adjunction reproduces the original comonad as the composite `UW ∘ FW`.

Similarly, we can construct a co-Kleisli category with co-Kleisli arrows and regenerate the comonad from the corresponding adjunction.

Lenses

Let’s go back to our discussion of lenses. A lens can be written as a coalgebra:

`coalg :: a -> Store s a`

for the functor `Store s`:

`data Store s a = Store (s -> a) s`

This coalgebra can be also expressed as a pair of functions:

```
set :: a -> s -> a
get :: a -> s
```

(Think of `a` as standing for “all,” and `s` as a “small” part of it.) In terms of this pair, we have:

`coalg a = Store (set a) (get a)`

Here, `a` is a value of type `a`. Notice that partially applied `set` is a function `s->a`.

We also know that `Store s` is a comonad:

```
instance Comonad (Store s) where
  extract (Store f s) = f s
  duplicate (Store f s) = Store (Store f) s
```
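The `Comonad` class is not in `base` (the real one lives in the comonad package), so here is a self-contained sketch of the above, with names matching the text:

```haskell
data Store s a = Store (s -> a) s

instance Functor (Store s) where
  fmap g (Store f s) = Store (g . f) s

-- A minimal stand-in for the Comonad class from the comonad package.
class Functor w => Comonad w where
  extract   :: w a -> a
  duplicate :: w a -> w (w a)

instance Comonad (Store s) where
  extract   (Store f s) = f s
  duplicate (Store f s) = Store (Store f) s
```

Note how `duplicate` nests the store: extracting from the duplicated store recovers the original, so `extract . duplicate = id`.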

The question is: Under what conditions is a lens a coalgebra for this comonad? The first coherence condition:

`εa ∘ coalg = ida`

translates to:

`set a (get a) = a`

This is the lens law that expresses the fact that if you set a field of the structure `a` to its previous value, nothing changes.

The second condition:

`fmap coalg ∘ coalg = δa ∘ coalg`

requires a little more work. First, recall the definition of `fmap` for the `Store` functor:

`fmap g (Store f s) = Store (g . f) s`

Applying `fmap coalg` to the result of `coalg` gives us:

`Store (coalg . set a) (get a)`

On the other hand, applying `duplicate` to the result of `coalg` produces:

`Store (Store (set a)) (get a)`

For these two expressions to be equal, the two functions under `Store` must be equal when acting on an arbitrary `s`:

`coalg (set a s) = Store (set a) s`

Expanding `coalg`, we get:

`Store (set (set a s)) (get (set a s)) = Store (set a) s`

This is equivalent to two remaining lens laws. The first one:

`set (set a s) = set a`

tells us that setting the value of a field twice is the same as setting it once. The second law:

`get (set a s) = s`

tells us that getting a value of a field that was set to `s` gives `s` back.

In other words, a well-behaved lens is indeed a comonad coalgebra for the `Store` functor.
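To make this concrete, here is a hypothetical lens focusing on the second component of a pair, with the three laws written out as checkable predicates. The names `set` and `get` follow the text (`a` is the whole, `s` the small part); the law names are mine:

```haskell
-- The whole: a pair (Int, String); the small part: its String component.
set :: (Int, String) -> String -> (Int, String)
set (x, _) s = (x, s)

get :: (Int, String) -> String
get (_, s) = s

-- set a (get a) = a: setting a field to its current value is a no-op.
lawSetGet :: (Int, String) -> Bool
lawSetGet a = set a (get a) == a

-- set (set a s) = set a: the last write wins.
lawSetSet :: (Int, String) -> String -> String -> Bool
lawSetSet a s s' = set (set a s) s' == set a s'

-- get (set a s) = s: reading back what was written.
lawGetSet :: (Int, String) -> String -> Bool
lawGetSet a s = get (set a s) == s
```

In practice one would quantify these predicates over random inputs with a tool like QuickCheck rather than checking single samples.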

Challenges

1. What is the action of the free functor `F :: C -> CT` on morphisms? Hint: use the naturality condition for monadic `μ`.
2. Define the adjunction:

`UW ⊣ FW`

3. Prove that the above adjunction reproduces the original comonad.