
Previously: Profunctors.

# Traversals

A traversal is a kind of optic that can focus on zero or more items at a time. Naively, we would expect to have a getter that returns a list of values, and a setter that replaces a list of values. Think of a tree with $N$ leaves: a traversal would return a list of leaves, and it would allow you to replace them with a new list. The problem is that the size of the list you pass to the setter cannot be arbitrary—it must match the number of leaves in the particular tree. This is why, in Haskell, the setter and the getter are usually combined in a single function:

s -> ([b] -> t, [a])


Still, Haskell is not able to force the sizes of both lists to be equal.
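
Here's a minimal sketch of this combined form for a source with exactly two foci (pairTraversal is my name, not a library function):

-- Traversal of both components of a homogeneous pair.
-- The rebuild function expects a list of exactly two elements;
-- the type cannot enforce that, hence the partial pattern match.
pairTraversal :: (a, a) -> ([b] -> (b, b), [a])
pairTraversal (x, y) = (\[x', y'] -> (x', y'), [x, y])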

Since a list type can be represented as an infinite sum of tuples, I knew that the categorical version of this formula must involve a power series, or a polynomial functor:

$\mathbf{Set} \big(s, \sum_{n} \mathbf{Set}(b^n, t) \times a^n\big)$

but I was unable to come up with an existential form for it.

Pickering, Gibbons, and Wu came up with a representation for traversals using profunctors that were cartesian, cocartesian, and monoidal at the same time, but the monoidal constraint didn’t fit neatly into the Tambara scheme:

class Profunctor p => Monoidal p where
  par   :: p a b -> p c d -> p (a, c) (b, d)
  empty :: p () ()
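
As a sanity check (my own example, not from the paper), the function arrow supports this interface:

-- assuming the standard Profunctor instance for (->)
instance Monoidal (->) where
  par f g = \(a, c) -> (f a, g c)
  empty   = id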


We had been struggling with this problem when one of my students, Mario Román, came up with the ingenious idea of making $n$ existential.

The idea is that a coend in the existential representation of optics acts like a sum (or like an integral—hence the notation). A sum over natural numbers is equivalent to the coend over the category of natural numbers.

At the root of all optics there is a monoidal action. For lenses, this action is given by “scaling”

$a \to a \times c$

For prisms, it’s the “translation”

$a \to a + c$

For grates it’s the exponentiation

$a \to a^c$

The composition of a prism and a lens is an affine transformation

$a \to c_0 + a \times c_1$

A traversal is similarly generated by a polynomial functor, or a power series functor:

$a \to \sum_n c_n \times a^n$

The key observation here is that there is a different object $c_n$ for every power of $a$, which can only be expressed using dependent types in programming. For every multiplicity of foci, the residue is of a different type.

In category theory, we can express the whole infinite sequence of residues as a functor from the monoidal category $\mathbb{N}$ of natural numbers to $\mathbf{Set}$. (The sum is really a coend over $\mathbb{N}$.)

The existential version of a traversal is thus given by:

$\int^{c \colon [\mathbb{N}, \mathbf{Set}]} \mathbf{Set}\big(s, \sum_n c_n \times a^n\big) \times \mathbf{Set}\big( \sum_m c_m \times b^m, t\big)$

We can now use the continuity of the hom-set to replace the mapping out of a sum with a product of mappings:

$\int^{c \colon [\mathbb{N}, \mathbf{Set}]} \mathbf{Set}\big(s, \sum_n c_n \times a^n\big) \times \prod_m \mathbf{Set}\big( c_m \times b^m, t\big)$

$\int^{c \colon [\mathbb{N}, \mathbf{Set}]} \mathbf{Set}\big(s, \sum_n c_n \times a^n\big) \times \prod_m \mathbf{Set}\big( c_m, \mathbf{Set}( b^m, t)\big)$

The product of hom-sets is really an end over $\mathbb{N}$, or a set of natural transformations in $[\mathbb{N}, \mathbf{Set}]$

$\int^{c \colon [\mathbb{N}, \mathbf{Set}]} \mathbf{Set}\big(s, \sum_n c_n \times a^n\big) \times [\mathbb{N}, \mathbf{Set}]\big( c_-, \mathbf{Set}( b^-, t)\big)$

and we can apply the Yoneda lemma to “integrate” over $c$ to get:

$\mathbf{Set}\big(s, \sum_n \mathbf{Set}(b^n, t) \times a^n\big)$

which is exactly the formula for traversals.

Once we understood the existential representation of traversals, the profunctor representation followed. The equivalent of Tambara modules for traversals is a category of profunctors equipped with the monoidal action parameterized by objects in $[\mathbb{N}, \mathbf{Set}]$:

$\alpha_{c, \langle a, b \rangle} \colon p \langle a, b \rangle \to p\langle \sum_n c_n \times a^n, \sum_m c_m \times b^m \rangle$

The double Yoneda trick works for these profunctors as well, proving the equivalence with the existential representation.

# Generalizations

As hinted in my blog post and formalized by Mitchell Riley, Tambara modules can be generalized to an arbitrary monoidal action. We have also realized that we can combine actions in two different categories: we take an arbitrary monoidal category $\mathcal{M}$ and define its actions on two categories, $\mathcal{C}$ and $\mathcal{D}$, using strong monoidal functors:

$F \colon \mathcal{M} \to [\mathcal{C}, \mathcal{C}]$

$G \colon \mathcal{M} \to [\mathcal{D}, \mathcal{D}]$

These actions define the most general existential optic:

$\mathbf{Optic} \langle s, t \rangle \langle a, b \rangle = \int^{m \colon \mathcal{M}} \mathcal{C}(s, F_m a) \times \mathcal{D}(G_m b, t)$

Notice that the pairs of arguments are heterogeneous—e.g., in $\langle a, b \rangle$, $a$ is from $\mathcal{C}$, and $b$ is from $\mathcal{D}$.

We have also generalized Tambara modules:

$\alpha_{m, \langle a, b \rangle} \colon p \langle a, b \rangle \to p \langle F_m a, G_m b\rangle$

and the Pastro-Street derivation of the promonad. That led us to a more general proof of the isomorphism between the profunctor formulation and the existential formulation of optics. To be as general as possible, we did it for enriched categories, replacing $\mathbf{Set}$ with an arbitrary monoidal category.

Finally, we described some new interesting optics like algebraic and monadic lenses.

# The Physicist’s Explanation

The traversal result confirmed my initial intuition from general relativity that the most general optics are generated by the analog of diffeomorphisms. These are the smooth coordinate transformations under which Einstein’s theory is covariant.

Physicists have long been using symmetry groups to build theories. The laws of physics are symmetric with respect to translations, time shifts, rotations, etc., leading to the conservation of momentum, energy, angular momentum, etc. These transformations bear an uncanny resemblance to some of the monoidal actions in optics. The prism is related to translations, the lens to rotations or scaling, etc.

There are many global symmetries in physics, but the real power comes from local symmetries: gauge symmetries and diffeomorphisms. These give rise to the Standard Model and to Einstein’s theory of gravity.

A general monoidal action seen in optics is highly reminiscent of a diffeomorphism, and the symmetry behind a traversal looks like it’s generated by an analytic function.

In my opinion, these similarities are a reflection of a deeper principle of compositionality. There is only a limited set of ways we can decompose complex problems, and sooner or later they all end up in category theory.

The main difference between physics and category theory is that category theory is more interested in one-way mappings, whereas physics deals with invertible transformations. For instance, in category theory, monoids are more fundamental than groups.

Here’s how categorical optics might be seen by a physicist.

In physics we would start with a group of transformations. Its representations would be used, for instance, to classify elementary particles. In optics we start with a monoidal category $\mathcal{M}$ and define its action in the target category $\mathcal{C}$. (Notice the use of a monoid rather than a group.)

$F \colon \mathcal{M} \to [\mathcal{C}, \mathcal{C}]$

In physics we would represent the group using matrices; here we use endofunctors.

A profunctor is like a path that connects the initial state to the final state. It describes all the ways in which $a$ can evolve into $b$.

If we use mixed optics, final states come from a different category $\mathcal{D}$, but their transformations are parameterized by the same monoidal category:

$G \colon \mathcal{M} \to [\mathcal{D}, \mathcal{D}]$

A path may be arbitrarily extended, at both ends, by a pair of morphisms. Given a morphism in $\mathcal{C}$:

$f \colon a' \to a$

and another one in $\mathcal{D}$

$g \colon b \to b'$

the profunctor uses them to extend the path:

$p \langle a, b \rangle \to p \langle a', b' \rangle$

A (generalized) Tambara module is like the space of paths that can be extended by transforming their endpoints.

$\alpha_{m, \langle a, b \rangle} \colon p \langle a, b \rangle \to p \langle F_m a, G_m b\rangle$

If we have a path that can evolve $a$ into $b$, then the same path can be used to evolve $F_m a$ into $G_m b$. In physics, we would say that the paths are “invariant” under the transformation, but in category theory we are fine with a one-way mapping.

The profunctor representation is like a path integral:

$\int_{p \colon \mathbf{Tam}} \mathbf{Set}( p \langle a, b \rangle, p \langle s, t \rangle)$

We fix the end-states but we vary the paths. We integrate over all paths that have the “invariance” or extensibility property that defines the Tambara module.

For every such path, we have a mapping that takes the evolution from $a$ to $b$ and produces the evolution (along the same path) from $s$ to $t$.

The main theorem of profunctor optics states that if, for a given collection of states, $\langle a, b \rangle, \langle s, t \rangle$, such a mapping exists, then these states are related. There exists a transformation and a pair of morphisms that are secretly used in the path integral to extend the original path.

$\int^{m \colon \mathcal{M}} \mathcal{C}(s, F_m a) \times \mathcal{D}(G_m b, t)$

Again, the mappings are one-way rather than both ways. They let us get from $s$ to $F_m a$ and from $G_m b$ to $t$.

This pair of morphisms is enough to extend any path $p \langle a, b \rangle$ to $p \langle s, t \rangle$ by first applying $\alpha_m$ and then lifting the two morphisms. The converse is also true: if every path can be extended then such a pair of morphisms must exist.

What seems unique to optics is the interplay between transformations and decompositions: the way $m$ can be interpreted both as parameterizing a monoidal action and as the residue left over after removing the focus.

# Conclusion

For all the details and a list of references you can look at our paper “Profunctor optics, a categorical update.” It’s the result of our work at the Adjoint School of Applied Category Theory in Oxford in 2019. It’s available on arXiv.

I’d like to thank Mario Román for reading the draft and providing valuable feedback.

Previously: Existentials.

# Double Yoneda

If you squint hard enough, the Yoneda lemma:

$\int_{x} \mathbf{Set}\big(\mathcal{C}(a, x), f x\big) \cong f a$

could be interpreted as the representable functor $\mathcal{C}(a, -)$ acting as the unit with respect to taking the end. It takes an $f$ and returns an $f$. Let’s keep this in mind.

We are going to need an identity that involves higher-order natural transformations between two higher-order functors. These are actually the functors $R_a$ that we’ve encountered before. They are parameterized by objects in $\mathcal{C}$, and their action on functors (co-presheaves) is to apply those functors to objects. They are the “give me a functor and I’ll apply it to my favorite object” kind of functors.

We need a natural transformation between two such functors, and we can express it as an end:

$\int_f \mathbf{Set}( R_a f, R_s f) = \int_f \mathbf{Set}( f a, f s)$

Here’s the trick: replace these functors with their Yoneda equivalents:

$\int_f \mathbf{Set}( f a, f s) \cong \int_f \mathbf{Set}\Big(\int_{x} \mathbf{Set}\big(\mathcal{C}(a, x), f x\big), \int_{y} \mathbf{Set}\big(\mathcal{C}(s, y), f y\big)\Big)$

Notice that this is now a mapping between two hom-sets in the functor category, the first one being:

$\int_{x} \mathbf{Set}\big(\mathcal{C}(a, x), f x\big) = [\mathcal{C}, \mathbf{Set}]\big(\mathcal{C}(a, -), f\big)$

We can now use the corollary of the Yoneda lemma to replace the set of natural transformations between these two hom-functors with the hom-set:

$[\mathcal{C}, \mathbf{Set}]\big(\mathcal{C}(s, -), \mathcal{C}(a, -) \big)$

But this is again a natural transformation between two hom-functors, so it can be further reduced to $\mathcal{C}(a, s)$. The result is:

$\int_f \mathbf{Set}( f a, f s) \cong \mathcal{C}(a, s)$

We’ve used the Yoneda lemma twice, so this trick is called the double-Yoneda.
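
In Haskell, the double Yoneda trick says that a function polymorphic in a functor f is no more than a plain function between the underlying objects. A minimal sketch (the names are mine):

{-# LANGUAGE RankNTypes #-}

import Data.Functor.Identity (Identity (..))

-- forall f. Functor f => f a -> f s   is isomorphic to   a -> s
toFn :: (forall f. Functor f => f a -> f s) -> (a -> s)
toFn g = runIdentity . g . Identity

fromFn :: (a -> s) -> (forall f. Functor f => f a -> f s)
fromFn = fmap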

# Profunctors

It turns out that the prism also has a functor-polymorphic representation, but it uses profunctors in place of regular functors. A profunctor is a functor of two arguments, but its action on arrows has a twist. Here’s the Haskell definition:

class Profunctor p where
  dimap :: (a' -> a) -> (b -> b') -> (p a b -> p a' b')


It lifts a pair of functions, where the first one goes in the opposite direction.

In category theory, the “twist” is encoded by using the opposite category $\mathcal{C}^{op}$, so a profunctor is defined as a functor from $\mathcal{C}^{op} \times \mathcal{C}$ to $\mathbf{Set}$.

The prime example of a profunctor is the hom-functor which, on objects, assigns the set $\mathcal{C}(a, b)$ to every pair $\langle a, b \rangle$.
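
In Haskell, the counterpart of the hom-profunctor is the function arrow. Its instance (provided by the profunctors library) pre- and post-composes the two functions:

instance Profunctor (->) where
  dimap f g h = g . h . f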

Before we talk about the profunctor representation of prisms and lenses, let’s look at a simpler optic called Iso. It’s defined by a pair of functions:

from :: s -> a
to   :: b -> t


The key observation here is that such a pair of arrows is an element of the hom set in the category $\mathcal{C}^{op} \times \mathcal{C}$ between the pair $\langle a, b \rangle$ and the pair $\langle s, t \rangle$:

$(\mathcal{C}^{op} \times \mathcal{C})( \langle a, b \rangle, \langle s, t \rangle)$

The “twist” of using $\mathcal{C}^{op}$ reverses the direction of the first arrow.

Iso has a simple profunctor representation:

type Iso s t a b = forall p. Profunctor p => p a b -> p s t


This formula can be translated to category theory as an end in the profunctor category:

$\int_p \mathbf{Set}(p \langle a, b \rangle, p \langle s, t \rangle)$

The profunctor category is the category of co-presheaves $[\mathcal{C}^{op} \times \mathcal{C}, \mathbf{Set}]$. We can immediately apply the double Yoneda identity to it to get:

$\int_p \mathbf{Set}(p \langle a, b \rangle, p \langle s, t \rangle) \cong (\mathcal{C}^{op} \times \mathcal{C})( \langle a, b \rangle, \langle s, t \rangle)$

which shows the equivalence of the two representations.
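
We can watch this isomorphism at work in Haskell. One direction is just dimap; for the other, we instantiate p at a profunctor that merely stores the two functions (this is the Exchange type of the lens library):

mkIso :: (s -> a) -> (b -> t) -> Iso s t a b
mkIso from to = dimap from to

data Exchange a b s t = Exchange (s -> a) (b -> t)

instance Profunctor (Exchange a b) where
  dimap f g (Exchange sa bt) = Exchange (sa . f) (g . bt)

unIso :: Iso s t a b -> (s -> a, b -> t)
unIso i = case i (Exchange id id) of
  Exchange from to -> (from, to)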

# Tambara Modules

Here’s the profunctor representation of a prism:

type Prism s t a b = forall p. Choice p => p a b -> p s t


It looks almost the same as Iso, except that the quantification goes over a smaller class of profunctors called Choice (or cocartesian). This class is defined as:

class Profunctor p => Choice p where
  left'  :: p a b -> p (Either a c) (Either b c)
  right' :: p a b -> p (Either c a) (Either c b)


Lenses can also be defined in a similar way, using the class of profunctors called Strong (or cartesian).

class Profunctor p => Strong p where
  first'  :: p a b -> p (a, c) (b, c)
  second' :: p a b -> p (c, a) (c, b)
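
Both classes are inhabited by the function arrow; these instances ship with the profunctors library:

instance Strong (->) where
  first'  f (a, c) = (f a, c)
  second' f (c, a) = (c, f a)

instance Choice (->) where
  left'  f = either (Left . f) Right
  right' f = either Left (Right . f)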


Profunctors equipped with these structures are called Tambara modules. Tambara formulated them in the context of monoidal categories, for a more general tensor product. Sum (Either) and product (,) are just two special cases.

A Tambara module is an object in a profunctor category with additional structure defined by a family of morphisms:

$\alpha_{c, \langle a, b \rangle} \colon p \langle a, b \rangle \to p\langle c \otimes a, c \otimes b \rangle$

with some naturality and coherence conditions.

Lenses and prisms can thus be defined as ends over the appropriate categories of Tambara modules

$\int_{p \colon \mathbf{Tam}} \mathbf{Set}(p \langle a, b \rangle, p \langle s, t \rangle)$

We can now use the double Yoneda trick to get the usual representation.

The problem is, we don’t know in what category the result should be. We know the objects are pairs $\langle a, b \rangle$, but what are the morphisms between them? It turns out this problem was solved in a paper by Pastro and Street. The category in question is the Kleisli category for a particular promonad. This category is now better known as $\mathbf{Optic}$. Let me explain.

The double Yoneda trick worked for an unconstrained category of functors. We need to generalize it to a category with some additional structure (for instance, a Tambara module).

Let’s say we start with a functor category $[\mathcal{C}, \mathbf{Set}]$ and endow it with some structure, resulting in another functor category $\mathcal{T}$. It means that there is a (higher-order) forgetful functor $U \colon \mathcal{T} \to [\mathcal{C}, \mathbf{Set}]$ that forgets this additional structure. We’ll also assume that there is a left adjoint functor $F$ that freely generates the structure.

We will restart the derivation of the double Yoneda using the forgetful functor

$\int_{f \colon \mathcal{T}} \mathbf{Set}( (U f) a, (U f) s)$

Here, $a$ and $s$ are objects in $\mathcal{C}$ and $(U f)$ is a functor in $[\mathcal{C}, \mathbf{Set}]$.

We perform the Yoneda trick the same way as before to get:

$\int_{f \colon \mathcal{T}} \mathbf{Set}\Big(\int_{x \colon \mathcal{C}} \mathbf{Set}\big(\mathcal{C}(a, x),(U f) x\big), \int_{y \colon \mathcal{C}} \mathbf{Set}\big(\mathcal{C}(s, y),(U f) y\big)\Big)$

Again, we have two sets of natural transformations, the first one being:

$\int_{x \colon \mathcal{C}} \mathbf{Set}\big(\mathcal{C}(a, x), (U f) x\big) = [\mathcal{C}, \mathbf{Set}]\big(\mathcal{C}(a, -), U f\big)$

We can now use the adjunction between $F$ and $U$ to shift the forgetful functor to the left:

$[\mathcal{C}, \mathbf{Set}]\big(\mathcal{C}(a, -), U f\big) \cong \mathcal{T}\Big(F\big(\mathcal{C}(a, -)\big), f\Big)$

The right-hand side is a hom-set in the functor category $\mathcal{T}$. Plugging this back into the original formula, we get

$\int_{f \colon \mathcal{T}} \mathbf{Set}\Big(\mathcal{T}\Big(F\big(\mathcal{C}(a, -)\big), f\Big), \mathcal{T}\Big(F\big(\mathcal{C}(s, -)\big), f\Big) \Big)$

This is the set of natural transformations between two hom-functors, so we can use the corollary of the Yoneda lemma to replace it with:

$\mathcal{T}\Big( F\big(\mathcal{C}(s, -)\big), F\big(\mathcal{C}(a, -)\big) \Big)$

We can then use the adjunction again, in the opposite direction, to get:

$[\mathcal{C}, \mathbf{Set}] \Big( \mathcal{C}(s, -), (U \circ F)\big(\mathcal{C}(a, -)\big) \Big)$

or, using the end notation:

$\int_{c \colon \mathcal{C}} \mathbf{Set} \Big(\mathcal{C}(s, c), (U \circ F)\big(\mathcal{C}(a, -)\big) c \Big)$

Finally, we use the Yoneda lemma again to get:

$(U \circ F) \big( \mathcal{C}(a, -) \big) s$

This is the action of the higher-order functor $(U \circ F)$ on the hom-functor $\mathcal{C}(a, -)$, the result of which is applied to $s$.

The composition $U \circ F$ of two functors that form an adjunction is a monad, which we’ll call $\Phi$. This is a monad in the functor category $[\mathcal{C}, \mathbf{Set}]$. Altogether, we get:

$\int_{f \colon \mathcal{T}} \mathbf{Set}( (U f) a, (U f) s) \cong \Phi \big( \mathcal{C}(a, -) \big) s$

# Profunctor Representation of Lenses and Prisms

The previous formula can be immediately applied to the category of Tambara modules. The forgetful functor takes a Tambara module and maps it to a regular profunctor $p$, an object in the functor category $[\mathcal{C}^{op} \times \mathcal{C}, \mathbf{Set}]$. We replace $a$ and $s$ with pairs of objects. We get:

$\int_{p \colon \mathbf{Tam}} \mathbf{Set}(p \langle a, b \rangle, p \langle s, t \rangle) \cong \Phi \big( (\mathcal{C}^{op} \times \mathcal{C})(\langle a, b \rangle, -) \big) \langle s, t \rangle$

The only missing piece is the higher order monad $\Phi$—a monad operating on profunctors.

The key observation by Pastro and Street was that Tambara modules are higher-order coalgebras. The mappings:

$\alpha \colon p \langle a, b \rangle \to p\langle c \otimes a, c \otimes b \rangle$

can be thought of as components of a natural transformation, an element of the end:

$\int_{\langle a, b \rangle, c} \mathbf{Set} \big( p \langle a, b \rangle, p\langle c \otimes a, c \otimes b \rangle \big)$

By continuity of hom-sets, we can move the end over $c$ to the right:

$\int_{\langle a, b \rangle} \mathbf{Set} \Big( p \langle a, b \rangle, \int_c p\langle c \otimes a, c \otimes b \rangle \Big)$

We can use this to define a higher order functor that acts on profunctors:

$(\Theta p)\langle a, b \rangle = \int_c p\langle c \otimes a, c \otimes b \rangle$

so that the family of Tambara mappings can be written as a set of natural transformations $p \to (\Theta p)$:

$\int_{\langle a, b \rangle} \mathbf{Set} \big( p \langle a, b \rangle, (\Theta p)\langle a, b \rangle \big)$

Natural transformations are morphisms in the category of profunctors, and such a morphism $p \to (\Theta p)$ is, by definition, a coalgebra for the functor $\Theta$.

Pastro and Street go on to show that $\Theta$ is more than a functor: it’s a comonad, and the Tambara structure is not just a coalgebra but a comonad coalgebra. Moreover, $\Theta$ has a left adjoint, a monad $\Phi$ given by the coend:

$(\Phi p) \langle s, t \rangle = \int^{\langle x, y \rangle, c} (\mathcal{C}^{op} \times \mathcal{C})\big(\langle c \otimes x, c \otimes y \rangle, \langle s, t \rangle \big) \times p \langle x, y \rangle$

When a monad is adjoint to a comonad, the comonad coalgebras are isomorphic to monad algebras—in this case, Tambara modules. Indeed, the algebras $(\Phi p) \to p$ are given by natural transformations:

$\int_{\langle s, t \rangle} \mathbf{Set}\Big( (\Phi p) \langle s, t \rangle, p\langle s, t \rangle \Big)$

Substituting the formula for $\Phi$,

$\int_{\langle s, t \rangle} \mathbf{Set}\Big( \int^{\langle x, y \rangle, c} (\mathcal{C}^{op} \times \mathcal{C})\big(\langle c \otimes x, c \otimes y \rangle, \langle s, t \rangle \big) \times p \langle x, y \rangle, p\langle s, t \rangle \Big)$

by continuity of the hom-set (with the coend in the negative position turning into an end),

$\int_{\langle s, t \rangle} \int_{\langle x, y \rangle, c}\mathbf{Set}\Big( (\mathcal{C}^{op} \times \mathcal{C})\big(\langle c \otimes x, c \otimes y \rangle, \langle s, t \rangle \big) \times p \langle x, y \rangle, p\langle s, t \rangle \Big)$

$\int_{\langle s, t \rangle, \langle x, y \rangle, c}\mathbf{Set}\Big( (\mathcal{C}^{op} \times \mathcal{C})\big(\langle c \otimes x, c \otimes y \rangle, \langle s, t \rangle \big), \mathbf{Set}\big( p \langle x, y \rangle, p\langle s, t \rangle \big) \Big)$

and the Yoneda lemma, we get

$\int_{\langle x, y \rangle, c} \mathbf{Set}\big( p \langle x, y \rangle, p\langle c \otimes x, c \otimes y \rangle \big)$

which is the Tambara structure $\alpha$.

$\Phi$ is exactly the monad that appears on the right-hand side of the double-Yoneda with adjunctions. This is because every monad can be decomposed into a pair of adjoint functors. The decomposition we’re interested in is the one that involves the Kleisli category of free algebras for $\Phi$. And now we know that these algebras are Tambara modules.

All that remains is to evaluate the action of $\Phi$ on the representable functor:

$\Phi \big( (\mathcal{C}^{op} \times \mathcal{C})(\langle a, b \rangle, -) \big) \langle s, t \rangle$

It’s a matter of simple substitution:

$\int^{\langle x, y \rangle, c} (\mathcal{C}^{op} \times \mathcal{C})\big(\langle c \otimes x, c \otimes y \rangle, \langle s, t \rangle \big) \times (\mathcal{C}^{op} \times \mathcal{C})(\langle a, b \rangle, \langle x, y \rangle)$

and using the Yoneda lemma to replace $\langle x, y \rangle$ with $\langle a, b \rangle$. The result is:

$\int^c (\mathcal{C}^{op} \times \mathcal{C})\big(\langle c \otimes a, c \otimes b \rangle, \langle s, t \rangle \big)$

This is exactly the existential representation of the lens and the prism:

$\int^c \mathcal{C}(s, c \otimes a) \times \mathcal{C}(c \otimes b, t)$

This was an encouraging result, and I was able to derive a few other optics using the same approach.

The idea was that Tambara modules are just one example of a monoidal action, and that the construction could easily be generalized to other types of optics, like Grate, where the action $c \otimes a$ is replaced by the (contravariant in $c$) action $a^c$ (or c->a, in Haskell).

There was just one optic that resisted that treatment, the Traversal. The breakthrough came when I was joined by a group of talented students at the Applied Category Theory School in Oxford.

Next: Traversals.

My gateway drug to category theory was the Haskell lens library. What first piqued my interest was the van Laarhoven representation, which uses functions that are functor-polymorphic. The following function type:

type Lens s t a b =
  forall f. Functor f => (a -> f b) -> (s -> f t)


is isomorphic to the getter/setter pair that traditionally defines a lens:

get :: s -> a
set :: s -> b -> t
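
For a concrete example, here is the lens focusing on the first component of a pair, in both representations (the name _1 follows the lens library):

-- van Laarhoven form
_1 :: Functor f => (a -> f b) -> ((a, c) -> f (b, c))
_1 f (a, c) = fmap (\b -> (b, c)) (f a)

-- getter/setter form
get1 :: (a, c) -> a
get1 (a, _) = a

set1 :: (a, c) -> b -> (b, c)
set1 (_, c) b = (b, c)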


My intuition was that the Yoneda lemma must be somehow involved. I remember sharing this idea excitedly with Edward Kmett, who was the only expert on category theory I knew back then. The reasoning was that a polymorphic function in Haskell is equivalent to a natural transformation in category theory. The Yoneda lemma relates natural transformations to functor values. Let me explain.

In Haskell, the Yoneda lemma says that, for any functor f, this polymorphic function:

forall x. (a -> x) -> f x


is isomorphic to (f a). In category theory, one way of writing it is:

$\int_{x} \mathbf{Set}\big(\mathcal{C}(a, x), f x\big) \cong f a$

If this looks a little intimidating, let me go through the notation:

1. The functor $f$ goes from some category $\mathcal{C}$ to the category of sets, which is called $\mathbf{Set}$. Such a functor is called a co-presheaf.
2. $\mathcal{C}(a, x)$ stands for the set of arrows from $a$ to $x$ in $\mathcal{C}$, so it corresponds to the Haskell type a->x. In category theory it’s called a hom-set. The notation for hom-sets is: the name of the category followed by the names of two objects in parentheses.
3. $\mathbf{Set}\big(\mathcal{C}(a, x), f x\big)$ stands for the set of functions from $\mathcal{C}(a, x)$ to $f x$ or, in Haskell, (a -> x) -> f x. It’s a hom-set in $\mathbf{Set}$.
4. Think of the integral sign as the forall quantifier. In category theory it’s called an end. Natural transformations between two functors $f$ and $g$ can be expressed using the end notation:
$\int_x \mathbf{Set}(f x, g x)$

As you can see, the translation is pretty straightforward. The van Laarhoven representation in this notation reads:

$\int_f \mathbf{Set}\big( \mathcal{C}(a, f b), \mathcal{C}(s, f t) \big)$

If you vary $x$ in $\mathcal{C}(b, x)$, it becomes a functor, which is called a representable functor—the object $b$ “representing” the whole functor. In Haskell, we call it the reader functor:

newtype Reader b x = Reader (b -> x)


You can plug a representable functor for $f$ in the Yoneda lemma to get the following very important corollary:

$\int_x \mathbf{Set}\big(\mathcal{C}(a, x), \mathcal{C}(b, x)\big) \cong \mathcal{C}(b, a)$

The set of natural transformation between two representable functors is isomorphic to a hom-set between the representing objects. (Notice that the objects are swapped on the right-hand side.)

# The van Laarhoven Representation

There is just one little problem: the forall quantifier in the van Laarhoven formula goes over functors, not types.

This is okay, though, because category theory works at many levels. Functors themselves form a category, and the Yoneda lemma works in that category too.

For instance, the category of functors from $\mathcal{C}$ to $\mathbf{Set}$ is called $[\mathcal{C},\mathbf{Set}]$. A hom-set in that category is a set of natural transformations between two functors which, as we’ve seen, can be expressed as an end:

$[\mathcal{C},\mathbf{Set}](f, g) \cong \int_x \mathbf{Set}(f x, g x)$

Remember, it’s the name of the category, here $[\mathcal{C},\mathbf{Set}]$, followed by names of two objects (here, functors $f$ and $g$) in parentheses.

So the corollary to the Yoneda lemma in the functor category, after a few renamings, reads:

$\int_f \mathbf{Set}\big( [\mathcal{C},\mathbf{Set}](g, f), [\mathcal{C},\mathbf{Set}](h, f)\big) \cong [\mathcal{C},\mathbf{Set}](h, g)$

This is getting closer to the van Laarhoven formula because we have the end over functors, which is equivalent to

forall f. Functor f => ...


In fact, a judicious choice of $g$ and $h$ is all we need to finish the proof.

But sometimes it’s easier to define a functor indirectly, as an adjoint to another functor. Adjunctions actually allow us to switch categories. A functor $L$ defined by a mapping-out in one category can be adjoint to another functor $R$ defined by its mapping-in in another category.

$\mathcal{C}(L a, b) \cong \mathcal{D}(a, R b)$

A useful example is the currying adjunction in $\mathbf{Set}$:

$\mathbf{Set}(c \times a, y) \cong \mathbf{Set}(c, y^a) \cong \mathbf{Set}\big(c, \mathbf{Set}(a, y)\big)$

where $y^a$ corresponds to the function type a->y and, in $\mathbf{Set}$, is isomorphic to the hom-set $\mathbf{Set}(a, y)$. This is just saying that a function of two arguments is equivalent to a function returning a function.

Here’s the clever trick: let’s replace $g$ and $h$ in the functorial Yoneda lemma with $L_b a$ and $L_t s$, where $L_b$ and $L_t$ are some higher-order functors from $\mathcal{C}$ to $[\mathcal{C},\mathbf{Set}]$ (as you will see, this notation anticipates the final substitution). We get:

$\int_f \mathbf{Set}\big( [\mathcal{C},\mathbf{Set}](L_b a, f), [\mathcal{C},\mathbf{Set}](L_t s, f)\big) \cong [\mathcal{C},\mathbf{Set}](L_t s, L_b a)$

Now suppose that these functors are left adjoint to some other functors, $R_b$ and $R_t$, that go in the opposite direction, from $[\mathcal{C},\mathbf{Set}]$ to $\mathcal{C}$. We can then replace all mappings-out in $[\mathcal{C},\mathbf{Set}]$ with the corresponding mappings-in in $\mathcal{C}$:

$\int_f \mathbf{Set}\big( \mathcal{C}(a, R_b f), \mathcal{C}(s, R_t f)\big) \cong \mathcal{C}\big(s, R_t (L_b a)\big)$

We are almost there! The last step is to realize that, in order to get the van Laarhoven formula, we need:

$R_b f = f b$

$R_t f = f t$

So these are just functors that apply $f$ to some fixed objects: $b$ and $t$, respectively. The left-hand side becomes:

$\int_f \mathbf{Set}\big( \mathcal{C}(a, f b), \mathcal{C}(s, f t) \big)$

which is exactly the van Laarhoven representation.

Now let’s look at the right-hand side:

$\mathcal{C}\big(s, R_t (L_b a)\big) = \mathcal{C}\big( s, (L_b a) t \big)$

We know what $R_b$ is, but what’s its left adjoint $L_b$? It must satisfy the adjunction:

$[\mathcal{C},\mathbf{Set}](L_b a, f) \cong \mathcal{C}(a, R_b f) = \mathcal{C}(a, f b)$

or, using the end notation:

$\int_x \mathbf{Set}\big((L_b a) x, f x\big) \cong \mathcal{C}(a, f b)$

This identity has a simple solution when $\mathcal{C}$ is $\mathbf{Set}$, so we’ll just temporarily switch to $\mathbf{Set}$. We have:

$(L_b a) x = \mathbf{Set}(b, x) \times a$

which is known as the IStore comonad in Haskell. We can check the identity by first applying the currying adjunction to eliminate the product:

$\int_x \mathbf{Set}\big(\mathbf{Set}(b, x) \times a, f x\big) \cong \int_x \mathbf{Set}\big(\mathbf{Set}(b, x), \mathbf{Set}(a, f x )\big)$

and then using the Yoneda lemma to “integrate” over $x$, which replaces $x$ with $b$,

$\int_x \mathbf{Set}\big(\mathbf{Set}(b, x), \mathbf{Set}(a, f x )\big) \cong \mathbf{Set}(a, f b)$

So the right-hand side of the original identity (after replacing $\mathcal{C}$ with $\mathbf{Set}$) becomes:

$\mathbf{Set}\big(s, R_t (L_b a)\big) \cong \mathbf{Set}\big( s, (L_b a) t \big) \cong \mathbf{Set}\big(s, \mathbf{Set}(b, t) \times a\big)$

which can be translated to Haskell as:

(s -> b -> t, s -> a)


or a pair of set and get.
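
Here’s a sketch of this last step in Haskell (IStore and lensToPair are my names):

-- (L_b a) x = (b -> x, a): the indexed store
data IStore b a x = IStore (b -> x) a

instance Functor (IStore b a) where
  fmap f (IStore bx a) = IStore (f . bx) a

-- a lens s -> IStore b a t unpacks to the set/get pair
lensToPair :: (s -> IStore b a t) -> (s -> b -> t, s -> a)
lensToPair l = (\s -> case l s of IStore bt _ -> bt,
                \s -> case l s of IStore _ a -> a)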

I was very proud of myself for finding the right chain of substitutions, so I was pretty surprised when I learned from Mauro Jaskelioff and Russell O’Connor that they had a paper ready for publication with exactly the same proof. (They added a reference to my blog in their publication, which was probably a first.)

# The Existentials

But there’s more: there are other optics for which this trick doesn’t work. The simplest one was the prism, defined by a pair of functions:

match :: s -> Either t a
build :: b -> t


In this form it’s hard to see a commonality between a lens and a prism. There is, however, a way to unify them using existential types.

Here’s the idea: A lens can be applied to types that, at least conceptually, can be decomposed into two parts: the focus and the residue. It lets us extract the focus using get, and replace it with a new value using set, leaving the residue unchanged.

The important property of the residue is that it’s opaque: we don’t know how to retrieve it, and we don’t know how to modify it. All we know about it is that it exists and that it can be combined with the focus. This property can be expressed using existential types.

Symbolically, we would want to write something like this:

type Lens s t a b = exists c . (s -> (c, a), (c, b) -> t)


where c is the residue. We have here a pair of functions: The first decomposes the source s into the product of the residue c and the focus a. The second recombines the residue with the new focus b, resulting in the target t.

In Haskell, the existential quantifier is hidden in the GADT syntax:

data Lens s t a b where
  Lens :: (s -> (c, a), (c, b) -> t) -> Lens s t a b


They can also be encoded in category theory using coends. So the lens can be written as:

$\int^c \mathcal{C}(s, c \times a) \times \mathcal{C}(c \times b, t)$

The integral sign with the argument at the top is called a coend. You can read it as “there exists a $c$”.

There is a version of the Yoneda lemma for coends as well:

$\int^c f c \times \mathcal{C}(c, a) \cong f a$

The intuition here is that, given a functorful of $c$‘s and a function c->a, we can fmap the latter over the former to obtain f a. We can do it even if we have no idea what the type c is.

We can use the currying adjunction and the Yoneda lemma to transform the new definition of the lens to the old one:

$\int^c \mathcal{C}(s, c \times a) \times \mathcal{C}(c \times b, t) \cong \int^c \mathcal{C}(s, c \times a) \times \mathcal{C}(c, t^b) \cong \mathcal{C}(s, t^b \times a)$

The exponential $t^b$ translates to the function type b->t, so this is really the set/get pair that defines the lens.
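
The same manipulation can be done in Haskell by pattern matching on the existential (toGetSet is my name):

toGetSet :: Lens s t a b -> (s -> a, s -> b -> t)
toGetSet (Lens (dec, rec)) =
  ( snd . dec                       -- get: decompose, drop the residue
  , \s b -> rec (fst (dec s), b) )  -- set: keep the residue, swap the focus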

The beauty of this representation is that it can be immediately applied to the prism, just by replacing the product with the sum (coproduct). This is the existential representation of a prism:

$\int^c \mathcal{C}(s, c + a) \times \mathcal{C}(c + b, t)$

To recover the standard encoding, we use the mapping-out property of the sum:

$\mathcal{C}(c + b, t) \cong \mathcal{C}(c, t) \times \mathcal{C}(b, t)$

This is simply saying that a function from the sum type is equivalent to a pair of functions—what we call case analysis in programming.

We get:

$\int^c \mathcal{C}(s, c + a) \times \mathcal{C}(c + b, t) \cong \int^c \mathcal{C}(s, c + a) \times \mathcal{C}(c, t) \times \mathcal{C}(b, t)$

This has the form suitable for the use of the Yoneda lemma, namely:

$\int^c f c \times \mathcal{C}(c, t)$

with the functor

$f c = \mathcal{C}(s, c + a) \times \mathcal{C}(b, t)$

The Yoneda lemma replaces $c$ with $t$, so the result is:

$\mathcal{C}(s, t + a) \times \mathcal{C}(b, t)$

which is exactly the match/build pair (in Haskell, the sum is translated to Either).
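
In Haskell, mirroring the existential lens (Prism and toMatchBuild are my names):

data Prism s t a b where
  Prism :: (s -> Either c a, Either c b -> t) -> Prism s t a b

toMatchBuild :: Prism s t a b -> (s -> Either t a, b -> t)
toMatchBuild (Prism (dec, rec)) =
  ( either (Left . rec . Left) Right . dec  -- match
  , rec . Right )                           -- build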

It turns out that every optic has an existential form.

Next: Profunctors.

You might have heard people say that functional programming is more academic, and real engineering is done in imperative style. I’m going to show you that real engineering is functional, and I’m going to illustrate it using a computer game that is designed by engineers for engineers. It’s a simulation game called Factorio, in which you explore a planet for resources, build factories that process them, and create more and more complex systems, until you are finally able to launch a spaceship that may take you away from an inhospitable planet. If this is not engineering at its purest then I don’t know what is. And yet almost everything you do when playing this game has its functional-programming counterpart, and the game can be used to teach basic concepts not only of programming but also, to some extent, of category theory. So, without further ado, let’s jump in.

## Functions

The building blocks of every programming language are functions. A function takes input and produces output. In Factorio they are called assembling machines, or assemblers. Here’s an assembler that produces copper wire.

If you bring up the info about the assembler you’ll see the recipe that it’s using. This one takes one copper plate and produces a pair of coils of copper wire.

This recipe is really a function signature in a strongly typed system. We see two types: copper plate and copper wire, and an arrow between them. Also, for every copper plate the assembler produces a pair of copper wires. In Haskell we would declare this function as

makeCopperWire :: CopperPlate -> (CopperWire, CopperWire)

Not only do we have types for different components, but we can combine types into tuples; here it’s a homogeneous pair (CopperWire, CopperWire). If you’re not familiar with Haskell notation, here’s what it might look like in C++:

std::pair<CopperWire, CopperWire> makeCopperWire(CopperPlate);

Here’s another function signature in the form of an assembler recipe:

It takes a pair of iron plates and produces an iron gear wheel. We could write it as

makeGear :: (IronPlate, IronPlate) -> Gear

or, in C++,

Gear makeGear(IronPlate, IronPlate);

Many recipes require a combination of differently typed ingredients, like the one for producing red science packs

We would declare this function as:

makeRedScience :: (CopperPlate, Gear) -> RedScience

Pairs are examples of product types. Factorio recipes use the plus sign to denote tuples; I guess this is because we read the plus sign as “this and this”, and “and” introduces a product type. The assembler requires both inputs to produce the output, so it accepts a product type. If it required either one, we’d call it a sum type.

We can also tuple more than two ingredients, as in this recipe for producing electronic circuits (or green circuits, as they are commonly called)

makeGreenCircuit ::
  (CopperWire, CopperWire, CopperWire, IronPlate) -> GreenCircuit

Now suppose that you have at your disposal the raw ingredients: iron plates and copper plates. How would you go about producing red science or green circuits? This is where function composition kicks in. You can pass the output of the copper wire assembler as the input to the green circuit assembler. (You will still have to tuple it with an iron plate.)

Similarly, you can compose the gear assembler with the red science assembler.

The result is a new function with the following signature

makeRedScienceFrom ::
  (CopperPlate, IronPlate, IronPlate) -> RedScience

And this is the implementation:

makeRedScienceFrom (cu, fe1, fe2) =
  makeRedScience (cu, makeGear (fe1, fe2))

You start with one copper plate and two iron plates. You feed the iron plates to the gear assembler. You pair the resulting gear with the copper plate and pass it to the red science assembler.

Most assemblers in Factorio take more than one argument, so I couldn’t come up with a simpler example of composition, one that wouldn’t require untupling and retupling. In Haskell we usually use functions in their curried form (we’ll come back to this later), so composition is easy there.

Composition is also a feature of a category, so we should ask whether we can treat assemblers as arrows in a category. Their composition is obviously associative. But do we have an equivalent of an identity arrow? It is something that takes input of some type and returns it back unchanged. And indeed we have things called inserters that do exactly that. Here’s an inserter between two assemblers.

In fact, in Factorio, you have to use an inserter for direct composition of assemblers, but that’s an implementation detail (technically, inserting an identity function doesn’t change anything).

An inserter is actually a polymorphic function, just like the identity function in Haskell

inserter :: a -> a
inserter x = x

It works for any type a.

But the Factorio category has more structure. As we have seen, it supports finite products (tuples) of arbitrary types. Such a category is called cartesian. (We’ll talk about the unit of this product later.)

Notice that we have identified multiple Factorio subsystems as functions: assemblers, inserters, compositions of assemblers, etc. In a programming language they would all be just functions. If we were to design a language based on Factorio (we could call it Functorio), we would package the composition of assemblers into a single assembler, or even make an assembler that takes two assemblers and produces their composition. That would be a higher-order assembler.

## Higher order functions

The defining feature of functional languages is the ability to make functions first-class objects. That means the ability to pass a function as an argument to another function, and to return a function as a result of another function. For instance, we should have a recipe for producing assemblers. And, indeed, there is such a recipe. All it needs is green circuits, some gear wheels, and a few iron plates:

If Factorio were a strongly typed language all the way, there would be separate recipes for producing different assemblers (that is, assemblers with different recipes). For instance, we could have:

makeRedScienceAssembler ::
  (GreenCircuit, Gear, IronPlate) -> RedScienceAssembler

Instead, the recipe produces a generic assembler, and it lets the player manually set the recipe in it. In a way, the player provides one last ingredient, an element of the enumeration of all possible recipes. This enumeration is displayed as a menu of choices:

After all, Factorio is an interactive game.

Since we have identified the inserter as the identity function, we should have a recipe for producing it as well. And indeed there is one:

Do we also have functions that take functions as arguments? In other words, recipes that use assemblers as input? Indeed we do:

Again, this recipe accepts a generic assembler that hasn’t been assigned its own recipe yet.

This shows that Factorio supports higher-order functions and is indeed a functional language. What we have here is a way of treating functions (assemblers) not only as arrows between objects, but also as objects that can be produced and consumed by functions. In category theory, such objectified arrow types are called exponential objects. A category in which arrow types are represented as objects is called closed, so we can view Factorio as a cartesian closed category.

In a strongly typed Factorio, we could say that the object RedScienceAssembler

is equivalent to its recipe

type RedScienceAssembler =
  (CopperPlate, Gear) -> RedScience

We could then write a higher-order recipe that produces this particular assembler as:

makeRedScienceAssembler ::
  (GreenCircuit, Gear, IronPlate)
  -> ((CopperPlate, Gear) -> RedScience)

Similarly, in a strongly typed Factorio we would replace this higher-order recipe

with the following signature

makeGreenScience :: ((a -> a), Belt) -> GreenScience

assuming that the inserter is a polymorphic function a -> a.

## Linear types

There is one important aspect of functional programming that seems to be broken in Factorio. Functions are supposed to be pure: mutation is a no-no. And in Factorio we keep talking about assemblers consuming resources. A pure function doesn’t consume its arguments; you may pass the same item to many functions and it will still be there. Dealing with resources is a real problem in programming in general, including purely functional languages. Fortunately there are clever ways of dealing with it. In C++, for instance, we can use unique pointers and move semantics; in Rust we have ownership types; and Haskell recently introduced linear types. What Factorio does is very similar to Haskell’s linear types. A linear function is a function that is guaranteed to consume its argument. Functorio assemblers are linear functions.

Factorio is all about consuming and transforming resources. The resources originate as various ores and coal in mines. There are also trees that can be chopped to yield wood, and liquids like water or crude oil. These external resources are then consumed, linearly, by your industry. In Haskell, we would implement it by passing a linear function called a continuation to the resource producer. A linear function guarantees to consume the resource completely (no resource leaks) and not to make multiple copies of the same resource. These are the guarantees that the Factorio industrial complex provides automatically.

## Currying

Of course Factorio was not designed to be a programming language, so we can’t expect it to implement every aspect of programming. It is fun, though, to imagine how we would translate some more advanced programming features into Factorio. For instance, how would currying work? To support currying we would first need partial application. The idea is pretty simple. We have already seen that assemblers can be treated as first-class objects. Now imagine that you could produce assemblers with a set recipe (strongly typed assemblers). For instance, this one:

It’s a two-input assembler. Now give it a single copper plate, which in programmer speak is called partial application. It’s partial because we haven’t supplied it with an iron gear. We can think of the result of partial application as a new single-input assembler that expects an iron gear and is able to produce one beaker of red science. By partially applying the function makeRedScience

makeRedScience :: (CopperPlate, Gear) -> RedScience

we have created a new function of the type

Gear -> RedScience

In fact we have just designed a process that gave us a (higher-order) function that takes a copper plate and creates a “primed” assembler that only needs an iron gear to produce red science:

makeGearToRedScience :: CopperPlate -> (Gear -> RedScience)

In Haskell, we would implement this function using a lambda expression

makeGearToRedScience cu = \gear -> makeRedScience (cu, gear)

Now we would like to automate this process. We want to have something that takes a two-input assembler, for instance makeRedScience, and returns a single input assembler that produces another “primed” single-input assembler. The type signature of this beast would be:

curryRedScienceAssembler ::
  ((CopperPlate, Gear) -> RedScience)  -- RedScienceAssembler
  -> (CopperPlate -> (Gear -> RedScience))

We would implement it as a double lambda:

curryRedScienceAssembler rsAssembler =
  \cu -> (\gear -> rsAssembler (cu, gear))


Notice that it really doesn’t matter what the concrete types are. What’s important is that we can turn a function that takes a pair of arguments into a function that returns a function. We can make it fully polymorphic:

curry :: ((a, b) -> c)
      -> (a -> (b -> c))

Here, the type variables a, b and c can be replaced with any types (in particular, CopperPlate, Gear, and RedScience).

curry f = \a -> \b -> f (a, b)
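
For completeness, the inverse operation, uncurry (also in the Haskell Prelude), turns a curried function back into a recipe that takes a tuple:

uncurry :: (a -> (b -> c)) -> ((a, b) -> c)
uncurry f = \(a, b) -> f a b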

## Functors

So far we haven’t talked about how arguments (items) are delivered to functions (assemblers). We can manually drop items into assemblers, but that very quickly becomes boring. We need to automate the delivery systems. One way of doing it is by using some kind of containers: chests, train wagons, barrels, or conveyor belts. In programming we call these containers functors. Strictly speaking, a functor can hold only one type of item at a time, so a chest of iron plates should be a different type than a chest of gears. Factorio doesn’t enforce this but, in practice, we rarely mix different types of items in one container.

The important property of a functor is that you can apply a function to its contents. This is best illustrated with conveyor belts. Here we take the recipe that turns a copper plate into copper wire and apply it to a whole conveyor belt of copper (coming from the right) to produce a conveyor belt of copper wire (going to the left).

The fact that a belt can carry any type of items can be expressed as a type constructor: a data type parameterized by an arbitrary type a

data Belt a

You can apply it to any type to get a belt of specific items, as in

Belt CopperPlate

We will model belts as Haskell lists.

data Belt a = MakeBelt [a]

The fact that it’s a functor is expressed by implementing a polymorphic function mapBelt

mapBelt :: (a -> b) -> (Belt a -> Belt b)

This function takes a function a->b and produces a function that transforms a belt of as into a belt of bs. So to create a belt of (pairs of) copper wire, we’ll map the assembler that implements makeCopperWire over a belt of CopperPlate

makeBeltOfWire :: (Belt CopperPlate) -> (Belt (CopperWire, CopperWire))
makeBeltOfWire = mapBelt makeCopperWire

You may think of a belt as corresponding to a list of elements, or an infinite stream, depending on the way you use it.
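
In our list-based model, mapBelt can be implemented by mapping over the underlying list (a sketch):

mapBelt :: (a -> b) -> (Belt a -> Belt b)
mapBelt f (MakeBelt as) = MakeBelt (map f as)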

In general, a type constructor F is called a functor if it supports the mapping of a function over its contents:

map :: (a -> b) -> (F a -> F b)

## Sum types

Uranium ore processing is interesting. It is done in a centrifuge, which accepts uranium ore and produces two isotopes of Uranium.

The new thing here is that the output is probabilistic. Most of the time (on average, 99.3% of the time) you’ll get Uranium 238, and only occasionally (0.7% of the time) Uranium 235 (the glowy one). Here the plus sign is used to actually encode a sum type. In Haskell we would use the Either type constructor, which generates a sum type:

makeUranium :: UraniumOre -> Either U235 U238

In other languages you might see it called a tagged union.

The two alternatives in the output type of the centrifuge require different actions: U235 can be turned into fuel cells, whereas U238 requires reprocessing. In Haskell, we would do it by pattern matching. We would apply one function to deal with U235 and another to deal with U238. In Factorio this is accomplished using filter inserters (a.k.a., purple inserters). A filter inserter corresponds to a function that picks one of the alternatives, for instance:

filterInserterU235 :: Either U235 U238 -> Maybe U235

The Maybe data type (or Optional in some languages) is used to accommodate the possibility of failure: you can’t get U235 if the union contained U238.
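
Its implementation is a simple pattern match (assuming, as in the signature above, that U235 is the left alternative):

filterInserterU235 :: Either U235 U238 -> Maybe U235
filterInserterU235 (Left u)  = Just u
filterInserterU235 (Right _) = Nothing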

Each filter inserter is programmed for a particular type. Below you see two purple inserters used to split the output of the centrifuge into two different chests:

Incidentally, a mixed conveyor belt may be seen as carrying a sum type. The items on the belt may be, for instance, either copper wire or steel plates, which can be written as Either CopperWire SteelPlate. You don’t even need to use purple inserters to separate them, as any inserter becomes selective when connected to the input of an assembler. It will only pick up the items that are the inputs of the recipe for the given assembler.

## Monoidal functors

Every conveyor belt has two sides, so it’s natural to use it to transport pairs. In particular, it’s possible to merge a pair of belts into one belt of pairs.

We don’t use an assembler to do it, just some belt mechanics, but we can still think of it as a function. In this case, we would write it as

(Belt CopperPlate, Belt Gear) -> Belt (CopperPlate, Gear)

In the example above, we map the red science function over it

streamRedScience :: Belt (CopperPlate, Gear) -> Belt RedScience
streamRedScience beltOfPairs = mapBelt makeRedScience beltOfPairs

Since makeRedScience has the signature

makeRedScience :: (CopperPlate, Gear) -> RedScience

it all type checks.

Since we can apply belt merging to any type, we can write it as a polymorphic function

mergeBelts :: (Belt a, Belt b) -> Belt (a, b)
mergeBelts (MakeBelt as, MakeBelt bs) = MakeBelt (zip as bs)

(In our Haskell model, we have to zip two lists together to get a list of pairs.)

Belt is a functor. In general, a functor that has this kind of merging ability is called a monoidal functor, because it preserves the monoidal structure of the category. Here, the monoidal structure of the Factorio category is given by the product (pairing). Any monoidal functor F must preserve the product:

(F a, F b) -> F (a, b)

There is one more aspect to monoidal structure: the unit. The unit, when paired with anything, does nothing to it. More precisely, a pair (Unit, a) is, for all intents and purposes, equivalent to a. The best way to understand the unit in Factorio is to ask the question: The belt of what, when merged with the belt of a, will produce a belt of a? The answer is: the belt of nothing. Merging an empty belt with any other belt, makes no difference.

So emptiness is the monoidal unit, and we have, for instance:

(Belt CopperPlate, Belt Nothing) -> Belt CopperPlate

The ability to merge two belts, together with the ability to create an empty belt, makes Belt a monoidal functor. In general, besides preserving the product, the condition for the functor F to be monoidal is the ability to produce

F Nothing

Most functors, at least in Factorio, are not monoidal. For instance, chests cannot store pairs.

## Applicative functors

As I mentioned before, most assembler recipes take multiple arguments, which we modeled as tuples (products). We also talked about partial application which, essentially, takes an assembler and one of the ingredients and produces a “primed” assembler whose recipe requires one less ingredient. Now imagine that you have a whole belt of a single ingredient, and you map an assembler over it. In current Factorio, this assembler will accept one item and then get stuck waiting for the rest. But in our extended version of Factorio, which we call Functorio, mapping a multi-input assembler over a belt of single ingredient should produce a belt of “primed” assemblers. For instance, the red science assembler has the signature

(CopperPlate, Gear) -> RedScience

When mapped over a belt of CopperPlate it should produce a belt of partially applied assemblers, each with the recipe:

Gear -> RedScience

Now suppose that you have a belt of gears ready. You should be able to produce a belt of red science. If only there were a way to apply the first belt over the second belt. Something like this:

(Belt (Gear -> RedScience), Belt Gear) -> Belt RedScience

Here we have a belt of primed assemblers and a belt of gears and the output is a belt of red science.

A functor that supports this kind of merging is called an applicative functor. Belt is an applicative functor. In fact, we can tell that it’s applicative because we’ve established that it’s monoidal. Indeed, monoidality lets us merge the two belts to get a belt of pairs

Belt (Gear -> RedScience, Gear)

We know that there is a way of applying the Gear->RedScience assembler to a Gear resulting in RedScience. That’s just how assemblers work. But for the purpose of this argument, let’s give this application an explicit name: eval.

eval :: (Gear -> RedScience, Gear) -> RedScience
eval (gtor, gr) = gtor gr

(gtor gr is just Haskell syntax for applying the function gtor to the argument gr). We are abstracting the basic property of an assembler that it can be applied to an item.

Now, since Belt is a functor, we can map eval over our belt of pairs and get a belt of RedScience.

apBelt :: (Belt (Gear -> RedScience), Belt Gear) -> Belt RedScience
apBelt (gtors, gears) = mapBelt eval (mergeBelts (gtors, gears))

Going back to our original problem: given a belt of copper plate and a belt of gear, this is how we produce a belt of red science:

redScienceFromBelts :: (Belt CopperPlate, Belt Gear) -> Belt RedScience
redScienceFromBelts (beltCu, beltGear) =
  apBelt (mapBelt (curry makeRedScience) beltCu, beltGear)


We curry the two-argument function makeRedScience and map it over the belt of copper plates. We get a beltful of primed assemblers. We then use apBelt to apply these assemblers to a belt of gears.

To get a general definition of an applicative functor, it’s enough to replace Belt with generic functor F, CopperPlate with a, and Gear with b. A functor F is applicative if there is a polymorphic function:

(F (a -> b), F a) -> F b

or, in curried form,

F (a -> b) -> F a -> F b

To complete the picture, we also need the equivalent of the monoidal unit law. A function called pure plays this role:

pure :: a -> F a

This just tells you that there is a way to create a belt with a single item on it.
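
In our list model, a minimal sketch of these two operations (with zipping semantics for application, as before):

pureBelt :: a -> Belt a
pureBelt a = MakeBelt [a]

apBelt' :: (Belt (a -> b), Belt a) -> Belt b
apBelt' (MakeBelt fs, MakeBelt as) = MakeBelt (zipWith ($) fs as)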

In Factorio, the nesting of functors is drastically limited. It’s possible to produce belts, and you can put them on belts, so you can have a beltful of belts, Belt (Belt a). Similarly you can store chests inside chests. But you can’t have belts of loaded belts. You can’t pick up a belt filled with copper plates and put it on another belt. In other words, you cannot transport beltfuls of stuff. Realistically, that wouldn’t make much sense in the real world, but in Functorio, this is exactly what we need to implement monads. So imagine that you have a belt carrying a bunch of belts that are carrying copper plates. If belts were monadic, you could turn this whole thing into a single belt of copper plates. This functionality is called join (in some languages, “flatten”):

join :: Belt (Belt CopperPlate) -> Belt CopperPlate

This function just gathers all the copper plates from all the belts and puts them on a single belt. You can think of it as concatenating all the subbelts into one.
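Under the belt-as-list model sketched earlier, join is just list concatenation:

joinBelt :: Belt (Belt a) -> Belt a
joinBelt = concat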

Similarly, if chests were monadic (and there’s no reason they shouldn’t be) we would have:

join :: Chest (Chest Gear) -> Chest Gear

A monad must also support the applicative pure (in Haskell it’s called return) and, in fact, every monad is automatically applicative.
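Here is one way to see why, sketched in Haskell: given join (and fmap), applicative application comes for free. This is essentially Control.Monad.ap:

import Control.Monad (join)

-- Applicative application derived from join and fmap.
apM :: Monad m => m (a -> b) -> m a -> m b
apM mf mx = join (fmap (\f -> fmap f mx) mf)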

## Conclusion

There are many other aspects of Factorio that lead to interesting topics in programming. For instance, the train system requires dealing with concurrency. If two trains try to enter the same crossing, we’ll have a data race which, in Functorio, is called a train crash. In programming, we avoid data races using locks. In Factorio, they are called train signals. And, of course, locks lead to deadlocks, which are very hard to debug in Factorio.

In functional programming we might use STM (Software Transactional Memory) to deal with concurrency. A train approaching a crossing would start a crossing transaction. It would temporarily ignore all other trains and happily make the crossing. Then it would attempt to commit the crossing. The system would then check if, in the meanwhile, another train has successfully committed the same crossing. If so, it would say “oops! try again!”.
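In Haskell, this could be sketched using the stm library; the crossing model below is my own toy example, with STM’s retry machinery providing the “oops! try again!” behavior:

import Control.Concurrent.STM

-- A shared TVar says whether the crossing is occupied.
crossSafely :: TVar Bool -> IO ()
crossSafely occupied = do
  atomically $ do
    busy <- readTVar occupied
    check (not busy)           -- retry the transaction until the crossing is free
    writeTVar occupied True    -- claim the crossing
  -- ... the train makes the crossing here ...
  atomically $ writeTVar occupied False  -- release the crossing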

The series of posts about so-called benign data races stirred a lot of controversy and led to numerous discussions at the startup I was working at called Corensic. Two bastions formed, one claiming that no data race was benign, and the other claiming that data races were essential for performance. Then it turned out that we couldn’t even agree on the definition of a data race. In particular, the C++11 definition seemed to deviate from the established notions.

# What Is a Data Race Anyway?

First of all, let’s make sure we know what we’re talking about. In current usage a data race is synonymous with a low-level data race, as opposed to a high-level race that involves either multiple memory locations, or multiple accesses per thread. Everybody agrees on the meaning of data conflict, which is multiple threads accessing the same memory location, at least one of them through a write. But a data conflict is not necessarily a data race. In order for it to become a race, one more condition must be true: the access has to be “simultaneous.”

Unfortunately, simultaneity is not a well defined term in concurrent systems. Leslie Lamport was the first to observe that a distributed system follows the rules of Special Relativity, with no independent notion of simultaneity, rather than those of Galilean Mechanics, with its absolute time. So, really, what defines a data race is up to your notion of simultaneity.

Maybe it’s easier to define what isn’t, rather than what is, simultaneous? Indeed, if we can tell which event happened before another event, we can be sure that they weren’t simultaneous. Hence the use of the famous “happened before” relationship in defining data races. In Special Relativity this kind of relationship is established by the exchange of messages, which can travel no faster than the speed of light. The act of sending a message always happens before the act of receiving the same message. In concurrent programming this kind of connection is made using synchronizing actions. Hence an alternative definition of a data race: A memory conflict without intervening synchronization.

The simplest examples of synchronizing actions are the taking and the releasing of a lock. Imagine two threads executing this code:

mutex.lock();
x = x + 1;
mutex.unlock();

In any actual execution, accesses to the shared variable x from the two threads will be separated by a synchronization. The happens-before (HB) arrow will always go from one thread releasing the lock to the other thread acquiring it. For instance in:

// Thread 1          // Thread 2
1 mutex.lock();
2 x = x + 1;
3 mutex.unlock();
                     4 mutex.lock();
                     5 x = x + 1;
                     6 mutex.unlock();

the HB arrow goes from 3 to 4, clearly separating the conflicting accesses in 2 and 5.

Notice the careful choice of words: “actual execution.” The following execution that contains a race can never happen, provided the mutex indeed guarantees mutual exclusion:

// Thread 1          // Thread 2
1 mutex.lock();
2                    mutex.lock();
3 x = x + 1;         x = x + 1;
4 mutex.unlock();
5                    mutex.unlock();

It turns out that the selection of possible executions plays an important role in the definition of a data race. In every memory model I know of, only sequentially consistent executions are tried in testing for data races. Notice that non-sequentially-consistent executions may actually happen, but they do not enter the data-race test.

In fact, most languages try to provide the so called DRF (Data Race Free) guarantee, which states that all executions of data-race-free programs are sequentially consistent. Don’t be alarmed by the apparent circularity of the argument: you start with sequentially consistent executions to prove data-race freedom and, if you don’t find any data races, you conclude that all executions are sequentially consistent. But if you do find a data race this way, then you know that non-sequentially-consistent executions are also possible.

DRF guarantee. If there are no data races for sequentially consistent executions, there are no non-sequentially-consistent executions. But if there are data races for sequentially consistent executions, then non-sequentially-consistent executions are possible.

As you can see, in order to define a data race you have to precisely define what you mean by “simultaneous,” or by “synchronization,” and you have to specify to which executions your definition may be applied.

# The Java Memory Model

In Java, besides traditional mutexes that are accessed through “synchronized” methods, there is another synchronization device called a volatile variable. Any access to a volatile variable is considered a synchronization action. You can draw happens-before arrows not only between consecutive unlocks and locks of the same object, but also between consecutive accesses to a volatile variable. With this extension in mind, Java offers the traditional DRF guarantee. The semantics of data-race-free programs is well defined in terms of sequential consistency, thus making every Java programmer happy.

But Java didn’t stop there, it also attempted to provide at least some modicum of semantics for programs with data races. The idea is noble–as long as programmers are human, they will write buggy programs. It’s easy to proclaim that any program with data races exhibits undefined behavior, but if this undefined behavior results in serious security loopholes, people get really nervous. So what the Java memory model guarantees on top of DRF is that the undefined behavior resulting from data races cannot lead to out-of-thin-air values appearing in your program (for instance, security credentials for an intruder).

It is now widely recognized that this attempt to define the semantics of data races has failed, and the Java memory model is broken (I’m citing Hans Boehm here).

# The C++ Memory Model

Why is it so important to have a good definition of a data race? Is it because of the DRF guarantee? That seems to be the motivation behind the Java memory model. The absence of data races defines a subset of programs that are sequentially consistent and therefore have well-defined semantics. But these two properties, being sequentially consistent and having well-defined semantics, are not necessarily the same. After all, Java tried (albeit unsuccessfully) to define semantics for non-sequentially-consistent programs.

So C++ chose a slightly different approach. The C++ memory model is based on partitioning all programs into three categories:

1. Sequentially consistent,
2. Non-sequentially consistent, but with defined semantics, and
3. Incorrect programs with undefined semantics

The first category is very similar to race-free Java programs. The place of Java volatile is taken by C++11 default atomic. The word “default” is crucial here, as we’ll see in a moment. Just like in Java, the DRF guarantee holds for those programs.

It’s the second category that’s causing all the controversy. It was introduced not so much for security as for performance reasons. Sequential consistency is expensive on most multiprocessors. This is why many C++ programmers currently resort to “benign” data races, even at the risk of undefined behavior. Hans Boehm’s paper, How to miscompile programs with “benign” data races, delivered a death blow to such approaches. He showed, example by example, how legitimate compiler optimizations may wreak havoc on programs with “benign” data races.

Fortunately, C++11 lets you relax sequential consistency in a controlled way, which combines high performance with the safety of well-defined (if complex) semantics. So the second category of C++ programs use atomic variables with relaxed memory ordering semantics. Here’s some typical syntax taken from my previous blog post:

std::atomic<int> owner(0);
...
owner.load(std::memory_order_relaxed);

And here’s the controversial part: According to the C++ memory model, relaxed memory operations, like the above load, don’t contribute to data races, even though they are not considered synchronization actions. Remember one of the versions of the definition of a data race: Conflicting actions without intervening synchronization? That definition doesn’t work any more.

The C++ Standard decided that only conflicts for which there is no defined semantics are called data races.

Notice that some forms of relaxed atomics may introduce synchronization. For instance, a write access with memory_order_release “happens before” another access with memory_order_acquire, if the latter follows the former in a particular execution (but not if they are reversed!).

# Conclusion

What does it all mean for the C++11 programmer? It means that there no longer is an excuse for data races. If you need benign data races for performance, rewrite your code using weak atomics. Weak atomics give you the same kind of performance as benign data races but they have well defined semantics. Traditional “benign” races are likely to be broken by optimizing compilers or on tricky architectures. But if you use weak atomics, the compiler will apply whatever means necessary to enforce the correct semantics, and your program will always execute correctly. It will even naturally align atomic variables to avoid torn reads and writes.

What’s more, since C++11 has well defined memory semantics, compiler writers are no longer forced to be conservative with their optimizations. If the programmer doesn’t specifically mark shared variables as atomic, the compiler is free to optimize code as if it were single-threaded. So all those clever tricks with benign data races are no longer guaranteed to work, even on relatively simple architectures, like the x86. For instance, the compiler is free to use your lossy counter or a binary flag for its own temporary storage, as long as it restores it back later. If other threads access those variables through racy code, they might see arbitrary values as part of the “undefined behavior.” You have been warned!

The main idea of functional programming is to treat functions like any other data type. In particular, we want to be able to pass functions as arguments to other functions, return them as values, and store them in data structures. But what kind of data type is a function? It’s a type that, when paired with another piece of data called the argument, can be passed to a function called apply to produce the result.

apply :: (a -> d, a) -> d

In practice, function application is implicit in the syntax of the language. But, as we will see, even if your language doesn’t support higher-order functions, all you need is to roll out your own apply.
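In Haskell, rolling out your own apply is a one-liner:

apply :: (a -> d, a) -> d
apply (f, x) = f x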

But where do these function objects, arguments to apply, come from; and how does the built-in apply know what to do with them?

When you’re implementing a function, you are, in a sense, telling apply what to do with it–what code to execute. You’re implementing individual chunks of apply. These chunks are usually scattered all over your program, sometimes anonymously in the form of lambdas.

We’ll talk about program transformations that introduce more functions, replace anonymous functions with named ones, or turn some functions into data types, without changing program semantics. The main advantage of such transformations is that they may improve performance, sometimes drastically so; or support distributed computing.

## Function Objects

As usual, we look to category theory to provide a theoretical foundation for defining function objects. It turns out that we are able to do functional programming because the category of types and functions is cartesian closed. The first part, cartesian, means that we can define product types. In Haskell, we have the pair type (a, b) built into the language. Categorically, we would write it as $a \times b$. Product is functorial in both arguments so, in particular, we can define a functor

$L_a c = c \times a$

It’s really a family of functors parameterized by $a$.

The right adjoint to this functor

$R_a d = a \to d$

defines the function type $a \to d$ (a.k.a., the exponential object $d^a$). The existence of this adjunction is what makes a category closed. You may recognize these two functors as, respectively, the writer and the reader functor. When the parameter $a$ is restricted to monoids, the writer functor becomes a monad (the reader is already a monad).

An adjunction is defined as a (natural) isomorphism of hom-sets:

$D(L c, d) \cong C(c, R d)$

or, in our case of two endofunctors, for some fixed $a$,

$C(c \times a, d) \cong C(c, a \to d)$

In Haskell, this is just the definition of currying:

curry   :: ((c, a) -> d)   -> (c -> (a -> d))
uncurry :: (c -> (a -> d)) -> ((c, a) -> d)

You may recognize the counit of this adjunction

$\epsilon_d : L_a (R_a d) \to \mbox{Id}\; d$

as our apply function

$\epsilon_d : ((a \to d) \times a) \to d$

In my previous blog post I discussed Freyd’s Adjoint Functor Theorem from the categorical perspective. Here, I’m going to try to give it a programming interpretation. Also, the original theorem was formulated in terms of finding the left adjoint to a given functor. Here, we are interested in finding the right adjoint to the product functor. This is not a problem, since every construction in category theory can be dualized by reversing the arrows. So instead of considering the comma category $c/R$, we’ll work with the comma category $L/d$. Its objects are pairs $(c, f)$, in which $f$ is a morphism

$f \colon L c \to d$.

This is the general picture but, in our case, we are dealing with a single category, and $L$ is an endofunctor. We can implement the objects of our comma category in Haskell

data Comma a d c = Comma c ((c, a) -> d)

The type a is just a parameter; it parameterizes the (left) functor $L_a$

$L_a c = c \times a$

and d is the target object of the comma category.

We are trying to construct a function object representing functions a->d, so what role does c play in all of this? To understand that, you have to take into account that a function object can be used to describe closures: functions that capture values from their environment. The type c represents those captured values. We’ll see this more explicitly later, when we talk about defunctionalizing closures.

Our comma category is a category of all closures that go from $a$ to $d$ while capturing all possible environments. The function object we are constructing is essentially a sum of all these closures, except that some of them are counted multiple times, so we need to perform some identifications. That’s what morphisms are for.

The morphisms of the comma category are morphisms $h \colon c \to c'$ in $\mathcal C$ that make the following triangles in $\mathcal D$ commute.

Unfortunately, commuting diagrams cannot be expressed in Haskell. The closest we can get is to say that a morphism from

c1 :: Comma a d c

to

c2 :: Comma a d c'

is a function h :: c -> c' such that, if

c1 = Comma c f
f :: (c, a) -> d
c2 = Comma c' g
g :: (c', a) -> d

then

f = g . bimap h id

Here, bimap h id is the lifting of h to the functor $L_a$. More explicitly

f (c, x) = g (h c, x)

As we are interpreting c as the environment in which the closure is defined, the question is: does f use all of the information encoded in c or just a part of it? If it’s just a part, then we can factor it out. For instance, consider a lambda that captures an integer, but it’s only interested in whether the integer is even or odd. We can replace this lambda with one that captures a Boolean, and use the function even to transform the environment.
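Here is that example spelled out in Haskell; the concrete types and the two branches are just my illustration:

-- A closure that captures an Int but only cares about its parity.
fBig :: (Int, String) -> String
fBig (n, s) = if even n then s else reverse s

-- The same behavior, with the environment shrunk to a Bool.
fSmall :: (Bool, String) -> String
fSmall (b, s) = if b then s else reverse s

-- The environment transformation even factors fBig through fSmall:
-- fBig (n, s) == fSmall (even n, s)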

The next step in the construction is to define the projection functor from the comma category $L/d$ back to $\mathcal C$ that forgets the $f$ part and just keeps the object $c$

$\pi_d \colon (c, f) \mapsto c$

We use this functor to define a diagram in $\mathcal C$. Now, instead of taking its limit, as we did in the previous installment, we’ll take the colimit of this diagram. We’ll use this colimit to define the action of the right adjoint functor $R$ on $d$.

$R d = \underset{L/d}{\mbox{colim}} \; \pi_d$

In our case, the forgetful functor discards the function part of Comma a d c, keeping only the environment $c$. This means that, as long as d is not Void, we are dealing with a gigantic diagram that encompasses all objects in our category of types. The colimit of this diagram is a gigantic coproduct of everything, modulo identifications introduced by morphisms of the comma category. But these identifications are crucial in pruning out redundant closures. Every lambda that uses only part of the information from the captured environment can be identified with a simpler lambda that uses a simplified environment.

For illustration, consider a somewhat extreme case of constructing the function object $1 \to d$, or $d^1$ ($d$ to the power of the terminal object). This object should be isomorphic to $d$. Let’s see how this works: The terminal object $1$ is the unit of the product, so

$L_1 c = c \times 1 \cong c$

so the comma category $L_1 / d$ is just the slice category $\mathcal{C}/d$ of arrows to $d$. It so happens that this category has the terminal object $(d, id_d)$. The colimit of a diagram that has a terminal object is that terminal object. So, indeed, in this case, our construction produces a function object that is isomorphic to $d$.

$1 \to d \cong d$

Intuitively, given a lambda that captures a value of type $c$ from the environment and returns a $d$, we can trivially factor it out, using this lambda to transform the environment from $c$ to $d$ and then applying the identity on $d$. The latter corresponds to the comma category object $(d, id_d)$, and the forgetful functor maps it to $d$.

It’s instructive to run a few more examples to get the hang of it. For instance, the function object Bool->d can be constructed by considering closures of the type

f :: (c, Bool) -> d

Any such closure can be factorized by the following transformation of the environment

h :: c -> (d, d)
h c = (f (c, True), f (c, False))

followed by

g :: ((d, d), Bool) -> d
g ((d1, d2), b) = if b then d1 else d2

Indeed:

f (c, b) = g (h c, b)

In other words
$2 \to d \cong d \times d$
where $2$ corresponds to the Bool type.

## Counit

We are particularly interested in the counit of the adjunction. Its component at $d$ is a morphism

$\epsilon_d : L R d \to d$

It also happens to be an object in the comma category, namely

$(R d, \epsilon_d \colon L R d \to d)$.

In fact, it is the terminal object in that category. You can see that because for any other object $(c, f \colon L c \to d)$ there is a morphism $h \colon c \to R d$ that makes the following triangle commute:

This morphism $h$ is a leg of the colimiting cocone that defines $R d$. We know for sure that $c$ is in the base of that cocone, because it’s the projection $\pi_d$ of $(c, f \colon L c \to d)$.

To get some insight into the construction of the function object, imagine that you can enumerate the set of all possible environments $c_i$. The comma category $L_a/d$ would then consist of pairs $(c_i, f_i \colon (c_i, a) \to d)$. The coproduct of all those environments is a good candidate for the function object $a \to d$. Indeed, let’s try to define a counit for it:

$\big(\coprod_i c_i, a\big) \to d \cong \Big(\coprod_i (c_i, a)\Big) \to d \cong \prod_i \big((c_i, a) \to d\big)$

I used the distributive law:

$\big(\coprod_i c_i, a\big) \cong \coprod_i (c_i, a)$

and the fact that the mapping out of a sum is the product of mappings. The right hand side can be constructed from the morphisms of the comma category.

So the object $\coprod c_i$ satisfies at least one requirement of the function object: there is an implementation of apply for it. It is highly redundant, though. This is why, instead of the coproduct, we used the colimit in our construction of the function object. Also, we ignored the size issues.

## Size Issues

As we discussed before, this construction doesn’t work in general because of size issues: the comma category is not necessarily small, and the colimit might not exist.

To address this problem, we have previously defined small solution sets. In the case of the right adjoint, a solution set is a family of objects that is weakly terminal in $L/d$. These are pairs $(c_i, f_i \colon L c_i \to d)$ that, among themselves, can factor any $g \colon L c \to d$

$g = f_i \circ L h$

It means that we can always find an index $i$ and a morphism $h \colon c \to c_i$ to satisfy that equation. Every $g$ might require a different $f_i$ and $h$ to factor through but, for any $g$, we are guaranteed to always find a pair.

Once we have a complete solution set, the right adjoint $R d$ is constructed by first forming a coproduct of all the $c_i$ and then using a coequalizer to construct one terminal object.

What is really interesting is that, in some cases, we can just use the coproduct of the solution set, $\coprod_i c_i$, to approximate the adjoint (thus skipping the coequalizer part).

The idea is that, in a particular program, we don’t need to represent all possible function types, just a (small) subset of those. We are also not particularly worried about uniqueness: it’s no problem if the same function ends up with multiple syntactic representations.

Let’s reformulate Freyd’s construction of the function object in programming terms. The solution set is the set of types $c_i$ and functions
$f_i \colon (c_i, a) \to d$
such that, for any function
$g \colon (c, a) \to d$
that is of interest in our program (for instance, used as an argument to another function) there exists an $i$ and a function
$h \colon c \to c_i$
such that $g$ can be rewritten as
$g (c, a) = f_i (h c, a)$
In other words, every function of interest can be replaced by one of the solution-set functions. The environment for this standard function can be always extracted from the environment of the more general function.

## CPS Transformation

A particular application of higher order functions shows up in the context of continuation passing transformation. Let’s look at a simple example. We are going to implement a function that traverses a binary tree containing strings, and concatenates them all into one string. Here’s the tree

data Tree = Leaf String
          | Node Tree String Tree


Recursive traversal is pretty straightforward

show1 :: Tree -> String
show1 (Leaf s) = s
show1 (Node l s r) =
  show1 l ++ s ++ show1 r


We can test it on a small tree:

tree :: Tree
tree = Node (Node (Leaf "1 ") "2 " (Leaf "3 "))
            "4 "
            (Leaf "5 ")

test = show1 tree


There is just one problem: recursion consumes the runtime stack, which is usually a limited resource. Your program may run out of stack space resulting in the “stack overflow” runtime error. This is why the compiler will turn recursion into iteration, whenever possible. And it is always possible if the function is tail recursive, that is, the recursive call is the last call in the function. No operation on the result of the recursive call is permitted in a tail recursive function.

This is clearly not happening in our implementation of show1: After the recursive call is made to traverse the left subtree, we still have to make another call to traverse the right tree, and the two results must be concatenated with the contents of the node.

Notice that this is not just a functional programming problem. In an imperative language, where iteration is the rule, tree traversal is still implemented using recursion. That’s because the data structure itself is recursive. It used to be a common interview question to implement non-recursive tree traversal, but the solution is always to explicitly implement your own stack (we’ll see how it’s done at the end of this post).

There is a standard procedure to make functions tail recursive using continuation passing style (CPS). The idea is simple: if there is stuff to do with the result of a function call, let the function we’re calling do it instead. This “stuff to do” is called a continuation. The function we are calling takes the continuation as an argument and, when it finishes its job, it calls it with the result. A continuation is a function, so CPS-transformed functions have to be higher-order: they must accept functions as arguments. Often, the continuations are defined on the spot using lambdas.

Here’s the CPS transformed tree traversal. Instead of returning a string, it accepts a continuation k, a function that takes a string and produces the final result of type a.

show2 :: Tree -> (String -> a) -> a
show2 (Leaf s) k = k s
show2 (Node lft s rgt) k =
  show2 lft (\ls ->
    show2 rgt (\rs ->
      k (ls ++ s ++ rs)))

If the tree is just a leaf, show2 calls the continuation with the string that’s stored in the leaf.

If the tree is a node, show2 calls itself recursively to convert the left child lft. This is a tail call, nothing more is done with its result. Instead, the rest of the work is packaged into a lambda and passed as a continuation to show2. This is the lambda

\ls ->
  show2 rgt (\rs ->
    k (ls ++ s ++ rs))

This lambda will be called with the result of traversing the left child. It will then call show2 with the right child and another lambda

\rs ->
  k (ls ++ s ++ rs)

Again, this is a tail call. This lambda expects the string that is the result of traversing the right child. It concatenates the left string, the string from the current node, and the right string, and calls the original continuation k with it.

Finally, to convert the whole tree t, we call show2 with a trivial continuation that accepts the final result and immediately returns it.

show t = show2 t (\x -> x)

There is nothing special about lambdas as continuations. It’s possible to replace them with named functions. The difference is that a lambda can implicitly capture values from its environment. A named function must capture them explicitly. The three lambdas we used in our CPS-transformed traversal can be replaced with three named functions, each taking an additional argument representing the values captured from the environment:

done s = s
next (s, rgt, k) ls = show3 rgt (conc (ls, s, k))
conc (ls, s, k) rs = k (ls ++ s ++ rs)


The first function, done, is the identity function; it forces the generic type a to be narrowed down to String.

Here’s the modified traversal using named functions and explicit captures.

show3 :: Tree -> (String -> a) -> a
show3 (Leaf s) k = k s
show3 (Node lft s rgt) k =
  show3 lft (next (s, rgt, k))

show t = show3 t done


We can now start making the connection with the earlier discussion of the adjoint theorem. The three functions we have just defined, done, next, and conc, form the family

$f_i \colon (c_i, a) \to d$.

They are functions of two arguments, or a pair of arguments. The first argument represents the object $c_i$, part of the solution set. It corresponds to the environment captured by the closure. The three $c_i$ are, respectively

()
(String, Tree, String -> String)
(String, String, String -> String)


(Notice the empty environment of done, here represented as the unit type ().)

The second argument of all three functions is of the type String, and the return type is also String so, according to Freyd’s theorem, we are in the process of defining the function object $a \to d$, where both $a$ and $d$ are String.

## Defunctionalization

Here’s the interesting part: instead of defining the general function type String->String, we can approximate it with the coproduct of the elements of the solution set. Here, the three components of the sum type correspond to the environments captured by our three functions.

data Kont = Done
          | Next String Tree   Kont
          | Conc String String Kont


The counit of the adjunction is approximated by a function from this sum type paired with a String, returning a String

apply :: Kont -> String -> String
apply Done s = s
apply (Next s rgt k) ls = show4 rgt (Conc ls s k)
apply (Conc ls s k) rs  = apply k (ls ++ s ++ rs)


Rather than passing one of the three functions to our higher-order CPS traversal, we can pass this sum type

show4 :: Tree -> Kont -> String
show4 (Leaf s) k = apply k s
show4 (Node lft s rgt) k =
  show4 lft (Next s rgt k)


This is how we execute it

show t = show4 t Done

We have gotten rid of all higher-order functions by replacing their function arguments with a data type equipped with the apply function. There are several situations when this is advantageous. In procedural languages, defunctionalization may be used to replace recursion with loops. In fact, the Kont data structure can be seen as a user-defined stack, especially if it’s rewritten as a list.

type Kont = [(String, Either Tree String)]

Here, Done was replaced with an empty list and Next and Conc together correspond to pushing a value on the stack.
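Here’s a sketch of the traversal rewritten against this list-based Kont (it replaces the earlier data declaration; the names apply5 and show5 are mine):

apply5 :: Kont -> String -> String
apply5 [] s = s
apply5 ((s, Left rgt) : k) ls = show5 rgt ((ls, Right s) : k)
apply5 ((ls, Right s) : k) rs = apply5 k (ls ++ s ++ rs)

show5 :: Tree -> Kont -> String
show5 (Leaf s) k = apply5 k s
show5 (Node lft s rgt) k = show5 lft ((s, Left rgt) : k)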

In Haskell, the compiler performs tail recursion optimization, but defunctionalization may still be useful in implementing distributed systems, or web servers. Any time we need to pass a function between a client and a server, we can replace it by a data type that can be easily serialized.

## Bibliography

1. John C. Reynolds, Definitional Interpreters for Higher-Order Programming Languages
2. James Koppel, The Best Refactoring You’ve Never Heard Of.

One of the tropes of detective movies is the almost miraculous ability to reconstruct an image from a blurry photograph. You just scan the picture, say “enhance!”, and voila, the face of the suspect or the registration number of their car appear on your computer screen.

Computer, enhance!

With constant improvements in deep learning, we might eventually get there. In category theory, though, we do this all the time. We recover lost information. The procedure is based on the basic tenet of category theory: an object is defined by its interactions with the rest of the world. This is the basis of all universal constructions, the Yoneda lemma, Grothendieck fibration, Kan extensions, and practically everything else.

An iconic example is the construction of the left adjoint to a given functor, and that’s what we are going to study here. But first let me explain why I decided to pick this subject, and how it’s related to programming. I wanted to write a blog post about CPS (continuation passing style) and defunctionalization, and I stumbled upon an article in nLab that related defunctionalization to Freyd’s Adjoint Functor Theorem; in particular to the Solution Set Condition. Such an unexpected connection piqued my interest and I decided to dig deeper into it.

Consider a functor $R$ from some category $\mathcal D$ to another category $\mathcal C$.

$R \colon \mathcal{D} \to \mathcal{C}$

A functor, in general, loses some data, so it’s normally impossible to invert it. It produces a “blurry” image of $\mathcal D$ inside $\mathcal C$. Its left adjoint is a functor from $\mathcal C$ to $\mathcal D$

$L \colon C \to D$

that attempts to reconstruct lost information, to the best of its ability. Often the functor $R$ is forgetful, which means that it purposefully forgets some information. Its left adjoint is then called free, because it freely ad-libs the forgotten information.

Of course it’s not always possible, but under certain conditions such a left adjoint exists. These conditions are spelled out in Freyd’s General Adjoint Functor Theorem.

To understand them, we have to talk a little about size issues.

# Size issues

A lot of interesting categories are large. It means that there are so many objects in the category that they don’t even form a set. The category of all sets, for instance, is large (there is no set of all sets). It’s also possible that morphisms between two objects don’t form a set.

A category in which objects form a set is called small, and a category in which hom-sets are sets is called locally small.

A lot of complexities in Freyd’s theorem are related to size issues, so it’s important to precisely spell out all the assumptions.

We assume that the source of the functor $R$, the category $\mathcal D$, is locally small. It must also be small-complete, that is, every small diagram in $\mathcal D$ must have a limit. (A small diagram is a functor from a small category.) We also want the functor $R$ to be continuous, that is, to preserve all small limits.

If it weren’t for size issues, this would be enough to guarantee the existence of the left adjoint, and we’ll first sketch the proof for this simplified case. In the general case, there is one more condition, the Solution Set Condition, which we’ll discuss later.

# Left adjoint and the comma category

Here’s the problem we are trying to solve. We have a functor $R$ that maps objects and morphisms from $\mathcal D$ to $\mathcal C$. We want to define another functor $L$ that goes in the opposite direction. We’re not looking for the inverse, so we’re not expecting the composition of this functor with $R$ to be identity, but we want it to be related to identity by two natural transformations called unit and counit. Their components are, respectively:

$\eta_c : c \to R L c$

$\epsilon_d : L R d \to d$

and, as long as they satisfy some additional triangle identities, they will establish the adjunction $L \dashv R$.

We are going to define $L$ point-wise, so let’s pick an object $c$ in $\mathcal C$ and try to propagate it back to $\mathcal D$. To do that, we have to gather as much information about $c$ as possible. We will propagate all this information back to $\mathcal D$ and find an object in $\mathcal D$ that “looks the same.” Think of this as creating a hologram of $c$ and shipping it back to $\mathcal D$.

All information about $c$ is encoded in morphisms so, in order to generate our hologram, we’ll gather all morphisms that originate in $c$. These morphisms form a category called the coslice category $c/C$.

The objects in $c/C$ are pairs $(x, f \colon c \to x)$. In other words, these are all the arrows that emanate from $c$, indexed by their target objects $x$. But what really defines the structure of this category are morphisms between these arrows. A morphism in $c/C$ from $(x, f)$ to $(y, g)$ is a morphism $h \colon x \to y$ that makes the following triangle commute:

We now have complete information about $c$ encoded in the coslice category, but we have no way to propagate it back to $\mathcal D$. This is because, in general, the image of $\mathcal D$ doesn’t cover the whole of $\mathcal C$. Even more importantly, not all morphisms in $\mathcal C$ have corresponding morphisms in $\mathcal D$. We have to scale down our expectations, and define a partial hologram that does not capture all the information about $c$; only the part that can be back-propagated to $\mathcal D$ using the functor $R$. Such a partial hologram is called a comma category $c/R$.

The objects of $c/R$ are pairs $(d, f \colon c \to R d)$, where $d$ is an object in $\mathcal D$. In other words, these are all the arrows emanating from $c$ whose target is in the image of $R$. Again, the important structure is encoded in the morphisms of $c/R$. These are the arrows in $\mathcal D$, $h \colon d \to d'$ that make the following diagram commute in $\mathcal C$

Notice an interesting fact: we can interpret these triangles as commutation conditions in a cone whose apex is $c$ and whose base is formed by objects and morphisms in the image of $R$. But not all objects or morphisms in the image of $R$ are included. Only those morphisms that make the appropriate triangle commute are included, and these are exactly the morphisms that satisfy the cone condition. So the comma category builds a cone in $\mathcal C$.

# Constructing the limit

We can now take all this information about $c$ that’s been encoded in $c/R$ and move it back to $\mathcal D$. We define a projection functor $\pi_c \colon c/R \to D$ that maps $(d, f)$ to $d$, thus forgetting the morphism $f$. What’s important, though, is that this functor keeps the information encoded in the morphisms of $c/R$, because these are morphisms in $\mathcal D$.

The image of $\pi_c$ doesn’t necessarily cover the whole of $\mathcal D$, because not every $R d$ has arrows coming from $c$. Similarly, only some morphisms, the ones that make the appropriate triangle in $\mathcal C$ commute, are picked by $\pi_c$. But those objects and morphisms that are in the image of $\pi_c$ form a diagram in $\mathcal D$. This diagram is our partial hologram, and we can use it to pick an object in $\mathcal D$ that looks almost exactly like $c$. That object is the limit of this diagram. We pick the limit of this diagram as the definition of $L c$: the left adjoint of $R$ acting on $c$.

Here’s the tricky part: we assumed that $\mathcal D$ was small-complete, so every small diagram has a limit; but the diagram defined by $\pi_c$ is not necessarily small. Let’s ignore this problem for a moment, and continue sketching the proof. We want to show that the mapping that assigns the limit of $\pi_c$ to every $c$ is left adjoint to $R$.

Let’s see if we can define the unit of the adjunction:

$\eta_c : c \to R L c$

Since we have defined $L c$ as the limit of the diagram $\pi_c$ and $R$ preserves limits (small limits, really; but we are ignoring size problems for the moment) then $R L c$ must be the limit of the diagram $R \pi_c$ in $\mathcal C$. But, as we noted before, the diagram $R \pi_c$ is exactly the base of the cone with the apex $c$ that we used to define the comma category $c/R$. Since $R L c$ is the limit of this diagram, there must be a unique morphism from any other cone to it. In particular there must be a morphism from $c$ to it, because $c$ is an apex of the cone defined by the comma category. And that’s the morphism we’ll choose as our $\eta_c$.

Incidentally, we can interpret $\eta_c$ itself as an object of the comma category $c/R$, namely the one defined by the pair $(Lc, \eta_c \colon c \to R L c)$. In fact, this is the initial object in that category. If you pick any other object, say, $(d, g \colon c \to R d)$, you can always find a morphism $h \colon L c \to d$, which is just a leg, a projection, in the limiting cone that defines $L c$. It is automatically a morphism in $c/R$ because the following triangle commutes:

This is the triangle that defines $\eta_c$ as a morphism of cones, from the top cone with the apex $c$, to the bottom (limiting) cone with the apex $R L c$. We’ll use this interpretation later, when discussing the full version of Freyd’s theorem.

We can also define the counit of the adjunction. Its component at $d$ is a morphism

$\epsilon_d : L R d \to d$

First, we repeat our construction starting with $c = R d$. We define the comma category $R d / R$ and use $\pi_{R d}$ to create the diagram whose limit is $L R d$. We pick $\epsilon_d$ to be a projection in the limiting cone. We are guaranteed that $d$ is in the base of the cone, because it’s the image of $(d, id \colon R d \to R d)$ under $\pi_{R d}$.

To complete this proof, one should show that the unit and counit are natural transformations and that they satisfy triangle identities.

# End of a comma category

An interesting insight into this construction can be gained using the end calculus. In my previous post, I talked about (weighted) colimits as coends, but the same argument can be dualized to limits and ends. For instance, this is our comma category as a category of elements in the coend notation:

$c/R \cong \mathcal{D} \int^d \mathcal{C} (c, R d)$

The limit of of the projection functor $\pi_c$ over the comma category can be written in the end notation as

$\lim_{c/R} \pi_c \cong \int_{(d, f)\colon c/R} \pi_c (d, f) \cong \int_{(d, f)\colon c/R} d$

This, in turn, can be rewritten as a weighted limit, with every $d$ weighted by the set $\mathcal{C}(c, R d)$:

$\mbox{lim}^{\mathcal{C}(c, R -)} \mbox{Id} \cong \int_{d \colon \mathcal{D}} \mathcal{C}(c, R d) \pitchfork d$

The pitchfork here is the power (cotensor) defined by the equation

$\mathcal{D}\big(d', s \pitchfork d\big) \cong Set\big(s, \mathcal{D}(d', d)\big)$

You may think of $s \pitchfork d$ as the product of $s$ copies of the object $d$, where $s$ is a set. The name power conveys the idea of iterated multiplication. Or, since power is a special case of exponentiation, you may think of $s \pitchfork d$ as a function object imitating mappings from $s$ to $d$.
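Reading this in Haskell, with sets as types, the power $s \pitchfork d$ is simply the function type s -> d, and the defining isomorphism amounts to swapping arguments. This is, of course, just the Set-based reading, not the general enriched story:

-- D(d', s ⋔ d) ≅ Set(s, D(d', d)), read at the level of types.
toPower :: (s -> (d' -> d)) -> (d' -> (s -> d))
toPower = flip

fromPower :: (d' -> (s -> d)) -> (s -> (d' -> d))
fromPower = flip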

To continue, if the left adjoint $L$ exists, the weighted limit in question can be replaced by

$\int_{d \colon \mathcal{D}} \mathcal{D}(L c, d) \pitchfork d$

which, using standard calculus of ends (see Appendix), can be shown to be isomorphic to $L c$. We end up with:

$\lim_{c/R} \pi_c \cong L c$

# Solution set condition

So what about those pesky size issues? It’s one thing to demand the existence of all small limits, and a completely different thing to demand the existence of large limits (such a requirement may narrow down the available categories to preorders). Since the comma category may be too large, maybe we can cut it down to size by carefully picking up a (small) set of objects out of all objects of $\mathcal D$. We may take some indexing set $I$ and construct a family $d_i$ of objects of $\mathcal D$ indexed by elements of $I$. It doesn’t have to be one family for all—we may pick a different family for every object $c$ for which we are performing our construction.

Instead of using the whole comma category $c/R$, we’ll limit ourselves to a set of arrows $f_i \colon c \to R d_i$. But in a comma category we also have morphisms between arrows. In fact they are the essential carriers of the structure of the comma category. Let’s have another look at these morphisms.

This commuting condition can be re-interpreted as a factorization of $g$ through $f$. It so happens that every morphism $g$ can be trivially factorized through some $f$ by picking $d = d'$ and $h = id_d$. But if we restrict the factors $f$ to be members of the family $f_i$ then not every $g \colon c \to R d$ (for arbitrary $d$) can be automatically factorized. We have to demand it. That gives us the following:

Solution Set Condition: For every object $c$ there exists a small set $I$ with an $I$-indexed family of objects $d_i$ in $\mathcal D$ and a family of morphisms $f_i \colon c \to R d_i$, such that every morphism $g \colon c \to R d$ can be factored through one of $f_i$. That is, there exists a morphism $h \colon d_i \to d$ such that

$g = R h \circ f_i$

There is a shorthand for this statement: All comma categories $c/R$ admit weakly initial families of objects. We’ll come back to it later.

# Freyd’s theorem

We can now formulate:

Freyd’s Adjoint Functor Theorem: If $\mathcal D$ is a locally small and small-complete category, and the functor $R \colon \mathcal{D} \to \mathcal{C}$ is continuous (small-limit preserving), and it satisfies the solution set condition, then $R$ has a left adjoint.

We’ve seen before that the key to defining the point-wise left adjoint was to find the initial object in the comma category $c/R$. The problem is that this comma category may be large. So the trick is to split the proof into two parts: first defining a weakly initial object, and then constructing the actual initial object using equalizers. A weakly initial object has morphisms to every object in the category but, unlike its strong version, these morphisms don’t have to be unique.

An even weaker notion is that of a weakly initial set of objects. These are objects that among themselves have arrows to every object in the category, but it’s possible that no individual object has all the arrows. The solution set in Freyd’s theorem is such a weakly initial set in the comma category $c/R$. Since we assumed that $\mathcal D$ is small-complete, we can take a product of these objects and show that it’s weakly initial. The proof then proceeds with the construction of the initial object.

The details of the proof can be found in any category theory text or in nLab.

Next we’ll see the application of these results to the problem of defunctionalization of computer programs.

# Appendix

To show that

$\int_d \mathcal{D}(L c, d) \pitchfork d \cong L c$

it’s enough to show that the hom-functors from an arbitrary object $d'$ are isomorphic

\begin{aligned} & \mathcal{D}\big(d', \int_d \mathcal{D}(L c, d) \pitchfork d\big) \\ \cong & \int_d \mathcal{D}\big(d', \mathcal{D}(L c, d) \pitchfork d\big) \\ \cong & \int_d Set\big( \mathcal{D}(L c, d), \mathcal{D}(d', d) \big) \\ \cong & \; \mathcal{D}(d', L c) \end{aligned}

I used the continuity of the hom-functor, the definition of the power (cotensor) and the ninja Yoneda lemma.

It’s funny how similar ideas pop up in different branches of mathematics. Calculus, for instance, is built around metric spaces (or, more generally, Banach spaces) and measures. A limit of a sequence is defined by points getting closer and closer together. An integral is an area under a curve. In category theory, though, we don’t talk about distances or areas (except for Lawvere’s take on metric spaces), and yet we have the abstract notion of a limit, and we use integral notation for ends. The similarities are uncanny.

This blog post was inspired by my trying to understand the idea behind Freyd’s adjoint functor theorem. It can be expressed as a colimit over a comma category, which is a special case of a Grothendieck fibration. To understand it, though, I had to get a better handle on weighted colimits which, as I learned, were even more general than Kan extensions.

# Category of elements as coend

A Grothendieck fibration is like splitting a category in two orthogonal directions, the base and the fiber. The fiber may vary from object to object (as in dependent types, which are indeed modeled as fibrations).

The simplest example of a Grothendieck fibration is the category of elements, in which fibers are simply sets. Of course, a set is also a category—a discrete category with no morphisms between elements, except for compulsory identity morphisms. A category of elements is built on top of a category $\mathcal{C}$ using a functor

$F \colon \mathcal{C} \to Set$

Such a functor is traditionally called a copresheaf (this construction works also on presheaves, $\mathcal{C}^{op} \to Set$). Objects in the category of elements are pairs $(c, x)$ where $c$ is an object in $\mathcal{C}$, and $x \in F c$ is an element of a set.

A morphism from $(c, x)$ to $(c', x')$ is a morphism $f \colon c \to c'$ in $\mathcal{C}$, such that $(F f) x = x'$.

There is an obvious projection functor that forgets the second component of the pair

$\Pi \colon (c, x) \mapsto c$

(In fact, a general Grothendieck fibration starts with a projection functor.)

You will often see the category of elements written using integral notation. An integral, after all, is a gigantic sum of tiny slices. Similarly, objects of the category of elements form a gigantic sum (disjoint union) of sets $F c$. This is why you’ll see it written as an integral

$\int^{c \colon \mathcal{C}} F c$

However, this notation conflicts with the one for conical colimits so, following Fosco Loregian, I’ll write the category of elements as

$\mathcal{C}\int^{c} F c$

An interesting specialization of a category of elements is a comma category. It’s the category $L/d$ of arrows originating in the image of the functor $L \colon \mathcal{C} \to \mathcal{D}$ and terminating at a fixed object $d$ in $\mathcal{D}$. The objects of $L/d$ are pairs $(c, f)$ where $c$ is an object in $\mathcal{C}$ and $f \colon L c \to d$ is a morphism in $\mathcal{D}$. These morphisms are elements of the hom-set $\mathcal{D}(L c , d)$, so the comma category is just a category of elements for the functor $\mathcal{D}(L-, d) \colon \mathcal{C}^{op} \to Set$

$L/d \cong \mathcal{C}\int^{c} \mathcal{D}(L c, d)$

You’ll mostly see integral notation in the context of ends and coends. A coend of a profunctor is like a trace of a matrix: it’s a sum (a coproduct) of diagonal elements. But (co-)end notation may also be used for (co-)limits. Using the trace analogy, if you fill rows of a matrix with copies of the same vector, the trace will be the sum of the components of the vector. Similarly, you can construct a profunctor from a functor by repeating the same functor for every value of the first argument $c'$:

$P(c', c) = F c$

The coend over this profunctor is the colimit of the functor, a colimit being a generalization of the sum. By slight abuse of notation we write it as

$\mbox{colim}\, F = \int^{c \colon \mathcal{C}} F c$

This kind of colimit is called conical, as opposed to what we are going to discuss next.

# Weighted colimit as coend

A colimit is a universal cocone under a diagram. A diagram is a bunch of objects and morphisms in $\mathcal{C}$ selected by a functor $D \colon \mathcal{J} \to \mathcal{C}$ from some indexing category $\mathcal{J}$. The legs of a cocone are morphisms that connect the vertices of the diagram to the apex $c$ of the cocone.

For any given indexing object $j \colon \mathcal{J}$, we select an element of the hom-set $\mathcal{C}(D j, c)$, as a wire of the cocone. This is a selection of an element of a set (the hom-set) and, as such, can be described by a function from the singleton set $*$. In other words, a wire is a member of $Set(*, \mathcal{C}(D j, c))$. In fact, we can describe the whole cocone as a natural transformation between two functors, one of them being the constant functor $1 \colon j \mapsto *$. The set of cocones is then the set of natural transformations:

$[\mathcal{J}^{op}, Set](1, \mathcal{C}(D -, c))$

Here, $[\mathcal{J}^{op}, Set]$ is the category of presheaves, that is, functors from $\mathcal{J}^{op}$ to $Set$, with natural transformations as morphisms. As a bonus, we get the cocone triangle commuting conditions from naturality.

Using singleton sets to pick morphisms doesn’t generalize very well to enriched categories. For conical colimits, we are building cocones from zero-thickness wires. What we need instead is what Max Kelly calls cylinders obtained by replacing the constant functor $1\colon \mathcal{J}^{op} \to Set$ with a more general functor $W \colon \mathcal{J}^{op} \to Set$. The result is a weighted colimit (or an indexed colimit, as Kelly calls it), $\mbox{colim}^W D$. The set of weighted cocones is defined by natural transformations

$[\mathcal{J}^{op}, Set](W, \mathcal{C}(D -, c))$

and the weighted colimit is the universal one of these. This definition generalizes nicely to the enriched setting (which I won’t be discussing here).

Universality can be expressed as a natural isomorphism

$[\mathcal{J}^{op}, Set](W, \mathcal{C}(D -, c)) \cong \mathcal{C}(\mbox{colim}^W D, c)$

We interpret this formula as a one-to-one correspondence: for every weighted cocone with the apex $c$ there is a unique morphism from the colimit to $c$. Naturality conditions guarantee that the appropriate triangles commute.

A weighted colimit can be expressed as a coend (see Appendix 1)

$\mbox{colim}^W D \cong \int^{j \colon \mathcal{J}} W j \cdot D j$

The dot here stands for the tensor product of a set by an object. It’s defined by the formula

$\mathcal{C}(s \cdot c, c') \cong Set(s, \mathcal{C}(c, c'))$

If you think of $s \cdot c$ as the sum of $s$ copies of the object $c$, then the above asserts that the set of functions from a sum (coproduct) is equivalent to a product of functions, one per element of the set $s$,

$(\coprod_s c) \to c' \cong \prod_s (c \to c')$
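Dually, in Haskell terms the tensor $s \cdot c$ is the pair type (s, c), and the defining isomorphism is currying:

-- C(s · c, c') ≅ Set(s, C(c, c')), read at the level of types.
toTensor :: ((s, c) -> c') -> (s -> (c -> c'))
toTensor = curry

fromTensor :: (s -> (c -> c')) -> ((s, c) -> c')
fromTensor = uncurry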

# Right adjoint as a colimit

A fibration is like a two-dimensional category. Or, if you’re familiar with bundles, it’s like a fiber bundle, which is locally isomorphic to a cartesian product of two spaces, the base and the fiber. In particular, the category of elements $\mathcal{C} \int W$ is, roughly speaking, like a bundle whose base is the category $\mathcal{C}$, and the fiber is a ($c$-dependent) set $W c$.

We also have a projection functor on the category of elements $\mathcal{C} \int W$ that ignores the $W c$ component

$\Pi \colon (c, x) \mapsto c$

The coend of this functor is the (conical) colimit

$\int^{(c, x) \colon \mathcal{C}\int W} \Pi (c, x) \cong \underset{\mathcal{C} \int W}{\mbox{colim}} \; \Pi$

But this functor is constant along the fiber, so we can “integrate over it.” Since fibers depend on $c$, different objects end up weighted differently. The result is a coend over the base category, with objects $c$ weighted by sets $W c$

$\int^{(c, x) \colon \mathcal{C}\int W} \Pi (c, x) \cong \int^{(c, x) \colon \mathcal{C}\int W} c \cong \int^{c \colon \mathcal{C}} W c \cdot c$

Using a more traditional notation, this is the formula that relates a (conical) colimit over the category of elements and a weighted colimit of the identity functor

$\underset{\mathcal{C} \int W}{\mbox{colim}} \; \Pi \cong \mbox{colim}^W Id$

There is a category of elements that will be of special interest to us when discussing adjunctions: the comma category for the functor $L \colon \mathcal{C} \to \mathcal{D}$, in which the weight functor is the hom-functor $\mathcal{D}(L-, d)$

$L/d \cong \mathcal{C}\int^{c} \mathcal{D}(L c, d)$

If we plug it into the last formula, we get

$\underset{L/d}{\mbox{colim}} \; \Pi \cong \underset{C \int \mathcal{D}(L-, d)}{\mbox{colim}} \; \Pi \cong \int^{c \colon \mathcal{C}} \mathcal{D}(L c, d) \cdot c$

If the functor $L$ has a right adjoint

$\mathcal{D}(L c, d) \cong \mathcal{C}(c, R d)$

we can rewrite this as

$\underset{L/d}{\mbox{colim}} \; \Pi \cong \int^{c \colon \mathcal{C}} \mathcal{C}(c, R d) \cdot c$

and using the ninja Yoneda lemma (see Appendix 2) we get a formula for the right adjoint in terms of a colimit over a comma category

$\underset{L/d}{\mbox{colim}} \; \Pi \cong R d$

Incidentally, this is the left Kan extension of the identity functor along $L$. (In fact, it can be used to define the right adjoint as long as it is preserved by the functor $L$.)

We’ll come back to this formula when discussing Freyd’s adjoint functor theorem.

# Appendix 1

I’m going to prove the following identity using some of the standard tricks of coend calculus

$\mbox{colim}^W D \cong \int^{j \colon \mathcal{J}} W j \cdot D j$

To show that two objects are isomorphic, it’s enough to show that their hom-sets to any object $c'$ are isomorphic (this follows from the Yoneda lemma)

\begin{aligned} \mathcal{C}(\mbox{colim}^W D, c') & \cong [\mathcal{J}^{op}, Set]\big(W-, \mathcal{C}(D -, c')\big) \\ &\cong \int_j Set \big(W j, \mathcal{C}(D j, c')\big) \\ &\cong \int_j \mathcal{C}(W j \cdot D j, c') \\ &\cong \mathcal{C}(\int^j W j \cdot D j, c') \end{aligned}

I first used the universal property of the colimit, then rewrote the set of natural transformations as an end, used the definition of the tensor product of a set and an object, and replaced an end of a hom-set by a hom-set of a coend (continuity of the hom-set functor).

# Appendix 2

The proof of

$\int^{c \colon \mathcal{C}} \mathcal{C}(c, R d) \cdot c \cong R d$

follows the same pattern

\begin{aligned} &\mathcal{C}\Big( \big(\int^{c} \mathcal{C}(c, R d) \cdot c\big) , c'\Big)\\ \cong &\int_c \mathcal{C}\big( \mathcal{C}(c, R d) \cdot c , c'\big) \\ \cong &\int_c Set\big( \mathcal{C}(c, R d) , \mathcal{C}(c, c')\big) \\ \cong & \; \mathcal{C}(R d, c') \end{aligned}

I used the fact that a hom-set from a coend is isomorphic to an end of a hom-set (continuity of hom-sets). Then I applied the definition of a tensor. Finally, I used the Yoneda lemma for contravariant functors, in which the set of natural transformations is written as an end.

$[ \mathcal{C}^{op}, Set]\big(\mathcal{C}(-, x), H \big) \cong \int_c Set \big( \mathcal{C}(c, x), H c \big) \cong H x$

Previously we discussed ninth chords, which are the first in a series of extension chords. Extensions are the notes that go beyond the first octave. Since we build chords by stacking thirds on top of each other, the next logical step, after the ninth chord, is the eleventh and the thirteenth chords. And that’s it: there is no fifteenth chord, because the fifteenth would be the same as the root (albeit two octaves higher).

This strange musical arithmetic is best understood if we translate all intervals into their semitone equivalents in equal temperament. Since we started by constructing the E major chord, let’s work with the E major scale, which consists of the following notes:

|E |  |F#|  |G#|A |  |B |  |C#|  |D#|E |

Let’s chart the chord tones taking E as the root.

We see the clash of several naming conventions. Letter names have their origin in the major diatonic scale, as implemented by the white keys on the piano starting from C.

|C |  |D |  |E |F |  |G |  |A |  |B |C |

They go in alphabetical order, wrapping around after G. On the guitar we don’t have white and black keys, so this convention seems rather arbitrary.

The names of intervals (here, marked by digits, with occasional accidental symbols) are also based on the diatonic scale. They essentially count the number of letters from the root (including the root). So the distance from E to B is 5, because you count E, F, G, A, B — five letters. For a mathematician this convention makes little sense, but it is what it is.

After 12 semitones, we wrap around, as far as note names are concerned. With intervals the situation is a bit more nuanced. The ninth can be, conceptually, identified with the second; the eleventh with the fourth; and the thirteenth with the sixth. But how we name the intervals depends on their harmonic function. For instance, the same note, C#, is called the sixth in the E6 chord, and the thirteenth in E13. The difference is that E13 also contains the (dominant) seventh and the ninth.
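
If you want to make this arithmetic explicit, here’s a throwaway Haskell snippet (my own aside, not part of the chord charts) that reduces the extensions modulo 12:

-- Chord tones of a dominant thirteenth chord, as semitone offsets
chordTones :: [(String, Int)]
chordTones = [ ("3rd", 4), ("5th", 7), ("7th", 10)
             , ("9th", 14), ("11th", 17), ("13th", 21) ]

-- Reducing modulo 12 identifies the 9th with the 2nd (2 semitones),
-- the 11th with the 4th (5), and the 13th with the 6th (9)
reduced :: [(String, Int)]
reduced = [ (name, n `mod` 12) | (name, n) <- chordTones ]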

A full thirteenth chord contains seven notes (root, third, fifth, seventh, ninth, eleventh, and thirteenth), so it cannot possibly be voiced on a six-string guitar. We usually drop the eleventh (as you can see above). The ninth and the fifth can be omitted as well. The root is very important, since it defines the chord, but when you’re playing in a band, it can be taken over by the bass instrument. The third is important because it distinguishes between major and minor modes (but then again, you have power chords that skip the third). The seventh is somewhat important in defining the dominant role of the chord.

Notice that a thirteenth chord can be seen as two separate chords on top of each other. E13 can be decomposed into E7 with F#m on top (try to spot these two shapes in this grip). Seen this way, the major/minor clash is another argument to either drop the eleventh (which serves as the minor third of F#m) or sharp it.

Alternatively, one could decompose E13 into E with DΔ7 on top. The latter shape is also easily recognized in this grip.

I decided against listing eleventh chords because they are awkward to voice on the guitar and because they are rarely used. Thirteenth chords are more frequent, especially in jazz. You’ve seen E13, here’s G13:

It skips the 11th and the 5th; and the 9th at the top is optional.

## The Role of Harmonics

It might be worth explaining why omitting the fifth in G13 doesn’t change the character of the chord. The reason is that, when you play the root note, you are also producing harmonics. One of the strongest harmonics is the fifth, more precisely, the fifth over the octave. So, even if you don’t voice it, you can hear it. In fact, a lot of the quality of a given chord voicing depends on the way the harmonics interact with each other, especially in the bass. When you strum the E chord on the guitar, you get a strong root sound E, and the B on the next thickest string amplifies its harmonic fifth. Compare this with the G shape, which also starts with the root, but the next string voices the third, B, which sounds okay, but not great, so some people mute it.
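
To put rough numbers on this (a back-of-the-envelope calculation of mine): the harmonic in question vibrates at three times the frequency of the root, and converting that ratio to equal-tempered semitones gives an octave plus a nearly pure fifth:

-- Frequency ratio to equal-tempered semitones
semitones :: Double -> Double
semitones ratio = 12 * logBase 2 ratio

-- semitones 3   ~ 19.02  (an octave, 12, plus a fifth, ~7.02)
-- semitones 1.5 ~  7.02  (the pure fifth itself)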

Inverted chords, even though they contain the same notes (up to octave equivalence), may sound dissonant, depending on the context (in particular, voice leading in the bass). This is why we don’t usually play the lowest string in C and A shapes, or the two lowest strings in the D shape.

In the C shape, the third in the bass clashes with the root and is usually muted. That’s because the strongest harmonic of E is B, which makes C/E sound like CΔ7.

On the other hand, when you play the CΔ7 chord, the E in the bass sounds great, for exactly the same reason.

You can also play C with the fifth in the bass, as C/G, and it sounds good, probably because the harmonic D of G gives it the ninth flavor. This harmonic is an octave and a fifth above G, so it corresponds to the D that would be voiced on the third fret of the B string.

The same reasoning doesn’t quite work for the A shape. Firstly, all four lower strings in A/E voice a very strong power chord (two of them open strings), drowning out the third that follows. Also, the fifth above E is the B that’s just two semitones below the third, C#, voiced on the B string. (Theoretically, C/G has a third doubled on the thinnest string, but that doesn’t seem to clash as badly with the D harmonic of G. Again, the ear beats theory!)

Next: Altered chords.

We have already discussed several kinds of seventh chords. But if you can extend the chord by adding a third above it, why not top it with yet another third? This way we arrive at the ninth chord. But a ninth is one whole step above the octave. So far we’ve been identifying notes that cross the octave with their counterparts that are 12 semitones lower. A mathematician would say that we are doing arithmetic modulo 12. But this is just a useful fiction. A lot of things in music theory can be motivated using modular arithmetic, but ultimately, we have to admit that if something doesn’t sound right, it’s not right.

A ninth is 14 semitones above the root (if you don’t flat or sharp it), so it should be identified with the second, which is 2 semitones up from the root. That puts it smack in the middle between the root and the third: a pretty jarring dissonance. We’ve seen a second used in a chord before, but it was playing the role of a suspended third. In a ninth chord, you keep the third, and move the second to the next octave, where it becomes a ninth and cannot do as much damage. Instead it provides color and tension, making things more interesting.

To construct E9, we start with E7. It has the root duplicated on the thinnest string, so it’s easy to raise it by two semitones to produce the ninth.

There are many variations of the ninth chord. There is a minor version, with the third lowered; the seventh can be raised to a major seventh; and the ninth itself can be flatted or sharped. We won’t cover all these.

Following the same pattern, C9 can be constructed from C7 by raising the root by two semitones.

We get a highly movable shape, especially if we put the fifth on the thinnest string. In particular, it can be moved one fret towards the nut to produce B9, a slight modification of the B7 grip we’ve seen before.

If you look carefully at this shape, you might recognize parts of Gm in it (the three thinnest strings). This is no coincidence. The fifth, the seventh, and the ninth of any ninth chord form a minor triad.
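
A quick sanity check in semitones (again, my own aside): the gaps between the fifth, the seventh, and the ninth are a minor third followed by a major third, which is exactly the shape of a minor triad:

-- 5th, dominant 7th, and 9th above the root, in semitones
upperTones :: [Int]
upperTones = [7, 10, 14]

-- Successive gaps: [3, 4], i.e. a minor third then a major third
gaps :: [Int]
gaps = zipWith (-) (tail upperTones) upperTones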

Here is the E9 grip obtained by transposing C9 down the fretboard. It’s used a lot in funk:

The same chord with a sharped ninth is called the Hendrix chord, after Jimi Hendrix who popularized it:

The E9 shape is not only movable, but it’s also easy to mutate. This is the minor version:

and this is the major seventh version:

Such chords are quite common in Bossa Nova.

A9 is obtained by raising the root of A7 by two semitones:

Can you spot the Dm shape raised by two frets?

Similarly, G9 is constructed from G7, and it conceals a Dm as part of it.

Next: Extension chords.
