Previously we were exploring universal constructions for products, coproducts, and exponentials. In particular, we were able to prove the distributive law:

$(a + b) \times c \cong a \times c + b \times c$

The power of this law is that it relates the mapping-in universal construction (product on the left) with the mapping-out one (coproduct on the right). If you take into account that products and coproducts are just special cases of limits and colimits, you may ask a more general question: under what conditions do limits commute with colimits? Even in a cartesian closed category, a product of sums is not, in general, isomorphic to the sum of products:

$(a + b) \times (c + d) \ncong a \times c + b \times d$

So, in general, products don’t commute with coproducts. But if you replace coproducts with a special kind of colimit, then it can be shown that:
Theorem.
In $Set$, filtered colimits commute with finite limits.

In this post I’ll try to explain these terms and provide some intuition why it works and how filtered colimits are related to the more traditional notion of limits that we know from calculus.

# Limits

Let’s start with limits. They are like products, except that, instead of just two objects at the bottom, you have any number of objects plus a bunch of morphisms between them. That’s called a diagram. Then you have an apex with arrows going down to all the objects in the diagram; and you get what is called a cone. The morphisms in the diagram, together with the arrows of the cone, form triangles. These triangles must commute. For instance, in Fig 1, we have:

$g \circ \pi_1 = \pi_3$

Fig. 1. A cone

This means that not all projections are independent–that you may obtain one projection from another by post-composing it with a morphism from the diagram. In Fig 1, for instance, you may extract a value of $c$ either directly using $\pi_3$ or by applying $g$ to the result of $\pi_1$.

A limit is defined as the universal cone with the apex $Lim$. It means that, if you have any other cone with some apex $c$, built over the same diagram, there is a unique morphism $h$ from $c$ to $Lim$ that makes all the triangles commute. For instance, in Fig 2, one of the commuting conditions is:

$\pi_1 \circ h = f_1$

and so on. We’ve seen similar commuting conditions in the definition of the product.

Fig. 2. A universal cone

If you think of $Lim$ in this example as a data structure, you would implement it as a product of $a_1$, $a_2$, and $a_3$, together with two functions:

$g_3 : a_1 \to a_3$

$g_2 : a_1 \to a_2$

But because of the commuting conditions, the three values stored in $Lim$ cannot be independent. If you pick a value for $a_1$, then the values for $a_2$ and $a_3$ are uniquely determined.

A limit, just like a product, is defined by a mapping-in property. If you want to define a morphism from some $c$ to $Lim$, you need to provide three morphisms $f_1$, $f_2$, and $f_3$. However, unlike in the case of a product, these morphisms must satisfy some commuting conditions. Here, $f_3$ must be equal to $g_3 \circ f_1$ and $f_2 = g_2 \circ f_1$. So, really, you only need to define $f_1$, and that uniquely determines $h$. This is why the cones in Fig 2 can be simplified, as shown in Fig 3.

Fig. 3. A simplified universal cone
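
Here’s a minimal Haskell sketch of this observation (an illustration of this particular diagram, not a general limit construction; the names are taken from the figures). Since the commuting conditions make the second and third components functions of the first, a cone from $c$ is specified by $f_1$ alone:

  -- A value of the limit stores three components...
  data Lim a1 a2 a3 = Lim a1 a2 a3

  -- ...but a2 and a3 are determined by a1 via the diagram's morphisms,
  -- so the unique mediating morphism h is determined by f1 alone:
  mkCone :: (a1 -> a2) -> (a1 -> a3) -> (c -> a1) -> (c -> Lim a1 a2 a3)
  mkCone g2 g3 f1 = \c -> let x = f1 c in Lim x (g2 x) (g3 x)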

Notice that the diagram essentially forms a subcategory inside the category $C$, even if we don’t explicitly draw all the identity morphisms or all the compositions. This is because triangles built by composing commuting triangles are again commuting. It therefore makes sense to define a diagram as a functor $F$ from an (often much smaller) index category $J$ to $C$. In our case it would be a category with just three objects, $j_1$, $j_2$, $j_3$, and two non-identity morphisms. (The diagram category for the product is even simpler: just two objects, no non-trivial morphisms.)

The properties of the diagram category determine the nature of cones and the nature of the limits. For instance, functors from a finite category will produce finite limits.

Fig. 4. Diagram category $J$

The diagram category $J$ in our example has a very peculiar property: it has a cone for every pair of objects (it’s a cone inside $J$, not to be confused with the cone in $C$). For instance, the pair $j_2$, $j_3$ is part of the cone with the apex $j_1$. This is also the apex for the (somewhat degenerate) cone based on $j_1$ and $j_2$ (with or without the connecting morphism). A category in which there is a cone for every finite subdiagram is called cofiltered. Limits defined by functors from cofiltered categories are called cofiltered limits.

The intuition is that cofiltered categories exhibit some kind of ordering. You may think of $j_1$ as a lower bound of $j_2$ and $j_3$. Following these bounds, you might eventually get to some kind of roots–here it’s the object $j_1$–and these roots will dictate the behavior of cones and the behavior of limits. Things get really interesting when the diagram category is infinite, because then there is no guarantee that you’ll ever reach a root. There is, for instance, no smallest (negative) integer, even though integers are ordered. You can begin to see parallels with traditional limits, like:

$\lim_{j \to -\infty} a_j$

That’s where these ideas originally came from.

Limits in the category of sets have a particularly simple interpretation. In $Set$, we can use functions from the terminal object–the singleton set–to pick individual set elements.

Fig. 5. Elements of the limit

For every selection of $x_1$, $x_2$, $x_3$ in Fig 5, there is a unique $h(x_1, x_2, x_3)$ that picks an element in $Lim$. But a selection of $x_1$, $x_2$, $x_3$ is nothing but a cone with the apex $1$. So there is a one-to-one correspondence between elements of $Lim$ and such cones. In other words, $Lim$ is the set of apex-$1$ cones.

# Colimits

Colimits are dual to limits–you get them by inverting all the arrows. So, instead of projections, you get injections, and the universal condition defines a mapping out of a colimit (see Fig 6).

Fig. 6. A universal cocone

If you look at the colimit as a data structure, it is similar to a coproduct, except that not all the injections are independent. In the example in Fig 6, $i_3$ and $i_2$ are determined by pre-composing $i_1$ with $g_3$ and $g_2$, respectively. It’s not clear how to implement a colimit in Haskell, so here’s a pseudo-Haskell attempt using imaginary dependent-type syntax:

  data Colim a1 a2 a3 (g2 :: a2 -> a1) (g3 :: a3 -> a1)
    = A1 a1 | A2 a2 | A3 a3


To deconstruct this colimit, you only need to provide one function $f_1 : a_1 \to c$.

  h :: (a1 -> c) -> Colim a1 a2 a3 g2 g3 -> c
  h f1 (A1 a1) = f1 a1
  h f1 (A2 a2) = f1 (g2 a2)
  h f1 (A3 a3) = f1 (g3 a3)


Granted, in a lazy language like Haskell, this would be an overkill way to store essentially just one value.

A colimit in the category of sets simplifies to a disjoint union of sets, in which some elements are identified. Suppose that the colimit $Colim_J F$ is defined by some diagram category $J$ and a functor $F : J \to Set$. Each object $j$ in $J$ produces a set $F j$.

Fig. 7. Colimit in Set. On the left, the diagram category $J$.

The disjoint union of all these sets is a set whose elements are the pairs $(x, j)$ where $x \in F j$. (Notice that the sets may overlap, but each element from the overlap will be counted as many times as the number of sets it belongs to.) Coproduct injections are then functions that take an element $x \in F j$ and map it into an element $(x, j) \in Colim_J F$. But that doesn’t take into account the presence of morphisms in the diagram. These morphisms are mapped to functions between corresponding sets. For instance, in Fig 7, we can take an element $x \in F j_2$. It is injected, using $i_2$, as an element $(x, j_2) \in Colim_J F$. But there is another path from $F j_2$ that uses $F g_2$ followed by $i_1$. That produces $((F g_2) x, j_1)$. If the triangle is to commute, these two must be equal. So in the actual colimit, they must be identified. In general, any two elements of the disjoint union that satisfy this relation:

$(x, j) \rightsquigarrow (x', j')\;\; \text{if}\;\; \exists_{g : j \to j'} (F g) x = x'$

must be identified. This is not an equivalence relation, but it can be extended to one (by first symmetrizing it, and then making it transitive again). A colimit is then a quotient of the disjoint union by this equivalence.
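
As a toy illustration (mine, for concreteness): for the one-arrow diagram $g : j_2 \to j_1$, every element $(x, j_2)$ is identified with $((F g)\, x, j_1)$, so each equivalence class has a representative in $F j_1$, and the colimit is just $F j_1$. In Haskell-for-$Set$ terms:

  -- The disjoint union of F j1 and F j2, tagged by origin:
  -- Left = an element of F j1, Right = an element of F j2.
  type DisjointUnion b a = Either b a

  -- Sliding every element of F j2 down along F g = fg picks the canonical
  -- representative in F j1; this is the quotient map onto the colimit:
  normalize :: (a -> b) -> DisjointUnion b a -> b
  normalize fg = either id fg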

As before, I chose this example to illustrate a special type of diagram: one that can be obtained using a functor from a filtered category. A filtered category has the property that for every finite subdiagram, there is a cocone under it. Here, for instance, the subdiagram formed by $j_2$ and $j_3$ has a cocone with the apex $j_1$. Again, you may think of $j_1$ as a kind of upper bound of $j_2$ and $j_3$. If the filtered category is finite, following upper bounds will eventually lead you to some roots. And in $Set$, the equivalence relation will allow you to shift all the elements down to those roots. But in the infinite case (think natural numbers) there may be no largest element–no root. And that brings filtered colimits closer to the intuition we have for limits in calculus. In fact, all the interesting filtered colimits are based on infinite diagrams.

# Commuting Limits and Colimits

What does it mean for a limit to commute with a colimit? A single colimit is generated by a functor $I \to C$ from some index category $I$. What we need is a bunch of such colimits, so that we can take a limit over them. Therefore we need a bunch of functors $I \to C$. Moreover, those colimits have to form a diagram. So we need another index category $J$ to parameterize those functors. Altogether, we need a functor of two arguments:

$F : I \times J \to C$

It follows that, for any given $j$ in $J$ we have a functor $F(-, j) : I \to C$. We can take a colimit of that. Then we gather those colimits into a diagram whose shape is defined by $J$, and then take its limit. We get:

$Lim_J (Colim_I F)$

Alternatively, when we fix some $i$ in $I$, we get a functor $F(i, -) : J \to C$. We can take a limit of that. Then we can gather all those limits and form a diagram whose shape is defined by $I$. Finally we can take a colimit of that:

$Colim_I (Lim_J F)$

Fig. 8. Commuting limits (red diagram of shape $J$) and colimits (black diagram of shape $I$)

It’s not difficult to construct the mapping:

$Colim_I (Lim_J F) \to Lim_J (Colim_I F)$

using the universal property, since the colimit has the mapping-out property. It’s the other way around that’s tricky. But it always works in the special case when $I$ is filtered, $J$ is finite, and $C$ is $Set$.

Here’s the sketch of this amazing proof, which you can find in Saunders Mac Lane’s Categories for the Working Mathematician.

Since the target of the functor is Set, it might help to visualize its image as a rectangular array of sets. A fixed $j$ picks up a row of such sets, whereas a fixed $i$ picks up a column. Because we are dealing with sets, we can try to define the mapping:

$Lim_J (Colim_I F) \to Colim_I (Lim_J F)$

pointwise. Let’s pick an element of the limit on the left. As we’ve established earlier, a limit in Set is a set of apex-1 cones. So let’s pick one such cone. It’s just a selection of elements from a bunch of colimits.

As we’ve seen before, a colimit in $Set$ is a disjoint union with some identifications. So our apex-1 cone will pick a set of representatives, one per colimit, say $(x_n, i_n)$. Any time there is a morphism $g : i_n \to i'_n$, we can replace one representative with another, $((F g)\, x_n, i'_n)$. The intuition is that we can slide the representatives horizontally within each row along morphisms.

If $I$ is a filtered category, then for any finite number of objects $i_n$, we can always find a common root (it will be the apex $i$ of a cocone formed by $i_n$ in $I$). So we can slide all the representatives to a single column. In other words, our cone can be brought to a set of representatives $(y_n, i)$, with a common $i$.

Fig. 9. A single cone after shifting representatives from all colimits to a common column

But that’s just a cone over $J$. It’s an element of $Lim_J F$. And we can inject it into a colimit over $I$ to get an element of $Colim_I (Lim_J F)$. We have thus defined our mapping.

# Conclusion

If you didn’t get the proof the first time, don’t get discouraged. Take a break, sleep on it, and then read it slowly again. Make sure you have internalized all the definitions. Draw your own pictures. The two major tricks are: (1) visualizing an element of a limit as a cone originating from the singleton set, and (2) the idea of sliding the elements of multiple colimits to a common column.

The importance of this theorem is that it tells you when and how you can define mappings out of limits. For instance, how to define functions from a product or from an end.

# Acknowledgment

I’m grateful to Derek Elkins for correcting mistakes in the original version of this post.

As functional programmers we are interested in functions. Category theorists are similarly interested in morphisms. There is a slight difference in approach, though. A programmer must implement a function, whereas a mathematician is often satisfied with the proof of existence of a morphism (unless said mathematician is a constructivist). Category theory is full of such proofs. It turns out that many of these proofs can be converted to code, often resulting in quite unexpected encodings.

A lot of objects in category theory are defined using universal constructions and universality is used all over the place to show the existence (as a rule: unique, up to unique isomorphism) of morphisms between objects.

There are two major types of universal constructions: the ones asserting the mapping-in property, and the ones asserting the mapping-out property. For instance, the product has the mapping-in property.

# Product

Recall that a product of two objects $a$ and $b$ is an object $a \times b$ together with two projections:

$\pi_1 : a \times b \to a$

$\pi_2 : a \times b \to b$

This object must satisfy the universal property: for any other object $c$ with a pair of morphisms:

$f : c \to a$

$g : c \to b$

there exists a unique morphism $h : c \to a \times b$ such that:

$f = \pi_1 \circ h$

$g = \pi_2 \circ h$

In other words, the two triangles in Fig 1 commute.

Fig. 1. Universality of the product

This universal property can be used any time you need to find a morphism that’s mapping into the product, and it can actually produce code.

For instance, let’s say you want to find a morphism from the terminal object $1$ to $a \times b$. All you need is to define two morphisms $x : 1 \to a$ and $y : 1 \to b$. This is not always possible, but if it is, you are guaranteed the existence of a morphism $h : 1 \to a \times b$ (Fig 2).

Fig. 2. Global element of a product

Morphisms from the terminal object are called global elements, so we have just shown that, as long as $a$ and $b$ have global elements, say $x$ and $y$, their product has a global element too. Moreover the projection $\pi_1$ of this global element is the same as $x$, and $\pi_2$ is the same as $y$. In other words, an element of a product is a pair of elements. But you probably knew that.

The universal construction of the product is implemented as an operator in Haskell:

  (&&&) :: (c->a) -> (c->b) -> (c -> (a, b))
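
For example (a small illustration; for plain functions, (&&&) comes from Control.Arrow):

  import Control.Arrow ((&&&))

  -- Two mappings out of Int determine one mapping into the product:
  pairUp :: Int -> (Int, String)
  pairUp = (+ 1) &&& show

  -- pairUp 5 == (6, "5")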


We can also go the other way: given a mapping-in $h : c \to a \times b$, we can always extract a pair of morphisms:

$f = \pi_1 \circ h$

$g = \pi_2 \circ h$

This bijection between $h$ and a pair of morphisms $(f, g)$ is in fact an adjunction.

You might think this kind of reasoning is very different from what programmers do, but it’s not. Here’s one possible definition of a product in Haskell (besides the built-in one, (,)):

  data Product a b = MkProduct { fst :: a
                               , snd :: b }


It is in one-to-one correspondence with what I’ve just explained. The two functions fst and snd are $\pi_1$ and $\pi_2$, and MkProduct corresponds to our $h : 1 \to a \times b$. The categorical definition is just a different, much more general, way of saying the same thing.

Here’s another application of universality: Show that product is functorial. Suppose that you have a pair of morphisms:

$f : a \to a'$

$g : b \to b'$

and you want to lift them to a morphism:

$h : a \times b \to a' \times b'$

Since we are dealing with products, we should use the mapping-in property. So we draw the universality diagram for the target $a' \times b'$, and put the source $a \times b$ at the top. The pair of functions that fits the bill is $(f \circ \pi_1, g \circ \pi_2)$ (Fig 3).

Fig. 3. Functoriality of the product

The universal property gives us, uniquely, the $h$, which is usually written simply as $f \times g$.
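
In Haskell, this lifted morphism can be written directly (a sketch reusing (&&&) from above; Control.Arrow exports essentially the same combinator as (***)):

  -- f × g, read off the universal construction: pair up the two
  -- composites (f . fst) and (g . snd) using (&&&).
  cross :: (a -> a') -> (b -> b') -> ((a, b) -> (a', b'))
  cross f g = (f . fst) &&& (g . snd)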

Exercise for the reader: Show, using universality, that categorical product is symmetric.

# Coproduct

The coproduct, being the dual of the product, is defined by the universal mapping-out property, see Fig 4.

Fig. 4. Universality of the coproduct

So if you need a morphism from a coproduct $a + b$ to some $c$, it’s enough to define two morphisms:

$f : a \to c$

$g : b \to c$

This universal property may also be restated as the isomorphism between pairs of morphisms $(f, g)$ and morphisms of the type $a+b \to c$ (so there is, in fact, a corresponding adjunction).

This is easily illustrated in Haskell:

  h :: Either a b -> c
  h (Left a)  = f a
  h (Right b) = g b


Here Left and Right correspond to the two injections $i_1$ and $i_2$. There is a convenient function in Haskell that encapsulates this universal construction:

  either :: (a->c) -> (b->c) -> (Either a b -> c)
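
For example:

  -- Two mappings into String determine one mapping out of the coproduct:
  describe :: Either Int Bool -> String
  describe = either (\n -> "a number: "  ++ show n)
                    (\b -> "a boolean: " ++ show b)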


Exercise for the reader: Show that coproduct is functorial.

So next time you ask yourself, “What can I do with a universal construction?”, the answer is: use it to define a morphism, either mapping in or mapping out of your construct. Why is it useful? Because it decomposes a problem into smaller problems. In the examples above, the problem of constructing one morphism $h$ was nicely decomposed into defining $f$ and $g$ separately.

The flip side of this is that there is no simple way of defining a mapping out of a product or a mapping into a coproduct.

# Distributive Law

For instance, you might wonder if the familiar distributive law:

$(a + b) \times c \cong a \times c + b \times c$

holds in an arbitrary category that defines products and coproducts (a so-called bicartesian category). You can immediately see that defining a morphism from right to left is easy, because it involves the mapping out of a coproduct. All we need is to define a pair of morphisms leading to the common target (Fig 5):

$f : a \times c \to (a + b) \times c$

$g : b \times c \to (a + b) \times c$

Fig. 5. Right to left proof

The trick is to take advantage of the functoriality of the product, which we have already established, and use it to implement $f$ and $g$ as:

$f = i_1 \times id_c$

$g = i_2 \times id_c$
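
In Haskell, this right-to-left direction might be sketched as follows (the lambda patterns spell out $i_1 \times id_c$ and $i_2 \times id_c$):

  distribR :: Either (a, c) (b, c) -> (Either a b, c)
  distribR = either (\(a, c) -> (Left a, c)) (\(b, c) -> (Right b, c))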

But if you try to construct a proof in the other direction, from left to right, you’re stuck, because it would require the mapping out of a product. So the distributive property does not hold in general.

“Wait a moment!” I hear you say, “I can easily implement it in Haskell.”

  f :: (Either a b, c) -> Either (a, c) (b, c)
  f (Left a, c)  = Left  (a, c)
  f (Right b, c) = Right (b, c)


# Exponential

That’s correct, but Haskell does a little cheating behind the scenes. You can see it clearly when you convert this code to point-free notation (I’ll explain later how I figured it out):

  f = uncurry (either (curry Left) (curry Right))


I want to direct your attention to the use of curry and uncurry. Currying is the application of another universal construction, namely that of the exponential object $c^b$, representing the function type b -> c. This is exactly the construction that provides the missing mapping out of a product, (a, b) -> c. Here we go:

  uncurry :: (c -> (a -> b)) -> ((c, a) -> b)


Categorically, we have the bijection between morphisms (again, a sign of an adjunction):

$h : c \to b^a$

$f : c \times a \to b$

Universality tells us that for every $c$ and $f$ there is a unique $h$ in Fig 6 (and vice versa). The arrow $h \times id_a$ is the lifting of the pair $(h, id_a)$ by the product functor (we’ve established the functoriality of the product earlier).

Fig. 6. Universality of the exponential

Not every category has exponentials–the ones that do are called cartesian closed (cartesian, because they must also have products).

So how does the fact that we have exponentials in Haskell help us here? We are trying to define a mapping out of a product:

$f : (a + b) \times c \to a \times c + b \times c$

Here’s where the exponential saves the day. This mapping exists if we can define another mapping:

$h : (a + b) \to (a \times c + b \times c)^c$

see Fig 7.

Fig. 7. Uncurrying

This morphism, in turn, is easy to define, because it involves a mapping out of a sum. We just need a pair of morphisms:

$h_1 : a \to (a \times c + b \times c)^c$

$h_2 : b \to (a \times c + b \times c)^c$

We can define the first morphism using the universal property of the exponential, picking the injection $i_1$:

Fig. 8. Defining $h_1$

This translates to Haskell as h1 = curry Left. Similarly for $h_2$ we get curry Right.

We can now combine all these diagrams into a single point-free definition, and that’s exactly how I came up with the original code:

  f = uncurry (either (curry Left) (curry Right))


Notice that curry is used to get from $f$ to $h$, and uncurry from $h$ to $f$ in the original diagram.

Products and coproducts are examples of more general constructions called limits and colimits. Importantly, the universal property of limits can be used to define the mapping-in morphisms, whereas the universal property of colimits allows us to define the mapping-out morphisms. I’ll talk more about it in the upcoming post.

I’ve been working with profunctors lately. They are interesting beasts, both in category theory and in programming. In Haskell, they form the basis of profunctor optics–in particular the lens library.

## Profunctor Recap

The categorical definition of a profunctor doesn’t even begin to describe its richness. You might say that it’s just a functor from a product category $\mathbb{C}^{op}\times \mathbb{D}$ to $Set$ (I’ll stick to $Set$ for simplicity, but there are generalizations to other categories as well).

A profunctor $P$ (a.k.a., a distributor, or bimodule) maps a pair of objects, $c$ from $\mathbb{C}$ and $d$ from $\mathbb{D}$, to a set $P(c, d)$. Being a functor, it also maps any pair of morphisms in $\mathbb{C}^{op}\times \mathbb{D}$:

$f\colon c' \to c$
$g\colon d \to d'$

to a function between those sets:

$P(f, g) \colon P(c, d) \to P(c', d')$

Notice that the first morphism $f$ goes in the opposite direction to what we normally expect for functors. We say that the profunctor is contravariant in its first argument and covariant in the second.

## Hom-Profunctor

The key point is to realize that a profunctor generalizes the idea of a hom-functor. Like a profunctor, a hom-functor maps pairs of objects to sets. Indeed, for any two objects in $\mathbb{C}$ we have the set of morphisms between them, $C(a, b)$.

Also, any pair of morphisms in $\mathbb{C}$:

$f\colon a' \to a$
$g\colon b \to b'$

can be lifted to a function, which we will denote by $C(f, g)$, between hom-sets:

$C(f, g) \colon C(a, b) \to C(a', b')$

Indeed, for any $h \in C(a, b)$ we have:

$C(f, g) h = g \circ h \circ f \in C(a', b')$

This (plus functorial laws) completes the definition of a functor from $\mathbb{C}^{op}\times \mathbb{C}$ to $Set$. So a hom-functor is a special case of an endo-profunctor (where $\mathbb{D}$ is the same as $\mathbb{C}$). It’s contravariant in the first argument and covariant in the second.

For Haskell programmers, here’s the definition of a profunctor from Edward Kmett’s Data.Profunctor library:

class Profunctor p where
  dimap :: (a' -> a) -> (b -> b') -> p a b -> p a' b'

The function dimap does the lifting of a pair of morphisms.

Here’s the proof that the hom-functor which, in Haskell, is represented by the arrow ->, is a profunctor:

instance Profunctor (->) where
  dimap ab cd bc = cd . bc . ab

Not only that: a general profunctor can be considered an extension of a hom-functor that forms a bridge between two categories. Consider a profunctor $P$ spanning two categories $\mathbb{C}$ and $\mathbb{D}$:

$P \colon \mathbb{C}^{op}\times \mathbb{D} \to Set$

For any two objects from one of the categories we have a regular hom-set. But if we take one object $c$ from $\mathbb{C}$ and another object $d$ from $\mathbb{D}$, we can generate a set $P(c, d)$. This set works just like a hom-set. Its elements are called heteromorphisms, because they can be thought of as representing morphisms between two different categories. What makes them similar to morphisms is that they can be composed with regular morphisms. Suppose you have a morphism in $\mathbb{C}$:

$f\colon c' \to c$

and a heteromorphism $h \in P(c, d)$. Their composition is another heteromorphism obtained by lifting the pair $(f, id_d)$. Indeed:

$P(f, id_d) \colon P(c, d) \to P(c', d)$

so its action on $h$ produces a heteromorphism from $c'$ to $d$, which we can call the composition $h \circ f$ of a heteromorphism $h$ with a morphism $f$. Similarly, a morphism in $\mathbb{D}$:

$g\colon d \to d'$

can be composed with $h$ by lifting $(id_c, g)$.

In Haskell, this new composition would be implemented by applying dimap f id to precompose p c d with

f :: c' -> c

and dimap id g to postcompose it with

g :: d -> d'
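
Spelled out as a sketch (the helper names are mine; dimap comes from Data.Profunctor):

import Data.Profunctor

-- Precompose a heteromorphism p c d with an ordinary morphism f :: c' -> c:
preCompose :: Profunctor p => (c' -> c) -> p c d -> p c' d
preCompose f = dimap f id

-- Postcompose it with an ordinary morphism g :: d -> d':
postCompose :: Profunctor p => (d -> d') -> p c d -> p c d'
postCompose g = dimap id g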

This is how we can use a profunctor to glue together two categories. Two categories connected by a profunctor form a new category known as their collage.

A given profunctor provides unidirectional flow of heteromorphisms from $\mathbb{C}$ to $\mathbb{D}$, so there is no opportunity to compose two heteromorphisms.

## Profunctors As Relations

The opportunity to compose heteromorphisms arises when we decide to glue more than two categories. The clue as how to proceed comes from yet another interpretation of profunctors: as proof-relevant relations. In classical logic, a relation between sets assigns a Boolean true or false to each pair of elements. The elements are either related or not, period. In proof-relevant logic, we are not only interested in whether something is true, but also in gathering witnesses to the proofs. So, instead of assigning a single Boolean to each pair of elements, we assign a whole set. If the set is empty, the elements are unrelated. If it’s non-empty, each element is a separate witness to the relation.

This definition of a relation can be generalized to any category. In fact there is already a natural relation between objects in a category–the one defined by hom-sets. Two objects $a$ and $b$ are related this way if the hom-set $C(a, b)$ is non-empty. Each morphism in $C(a, b)$ serves as a witness to this relation.

With profunctors, we can define proof-relevant relations between objects that are taken from different categories. Object $c$ in $\mathbb{C}$ is related to object $d$ in $\mathbb{D}$ if $P(c, d)$ is a non-empty set. Moreover, each element of this set serves as a witness for the relation. Because of functoriality of $P$, this relation is compatible with the categorical structure, that is, it composes nicely with the relation defined by hom-sets.

In general, a composition of two relations $P$ and $Q$, denoted by $P \circ Q$ is defined as a path between objects. Objects $a$ and $c$ are related if there is a go-between object $b$ such that both $P(a, b)$ and $Q(b, c)$ are non-empty. As a witness of this relation we can pick any pair of elements, one from $P(a, b)$ and one from $Q(b, c)$.

By convention, a profunctor $P(a, b)$ is drawn as an arrow (often crossed) from $b$ to $a$, $a \nleftarrow b$.

Composition of profunctors/relations

## Profunctor Composition

To create a set of all witnesses of $P \circ Q$ we have to sum over all possible intermediate objects and all pairs of witnesses. Roughly speaking, such a sum (modulo some identifications) is expressed categorically as a coend:

$(P \circ Q)(a, c) = \int^b P(a, b) \times Q(b, c)$

As a refresher, a coend of a profunctor $P$ is a set $\int^a P(a, a)$ equipped with a family of injections

$i_x \colon P(x, x) \to \int^a P(a, a)$

that is universal in the sense that, for any other set $s$ and a family:

$\alpha_x \colon P(x, x) \to s$

there is a unique function $h$ that factorizes them all:

$\alpha_x = h \circ i_x$

Universal property of a coend

Profunctor composition can be translated into pseudo-Haskell as:

type Procompose q p a c = exists b. (p a b, q b c)

where the coend is encoded as an existential data type. The actual implementation (again, see Edward Kmett’s Data.Profunctor.Composition) is:

data Procompose q p a c where
  Procompose :: q b c -> p a b -> Procompose q p a c

The existential quantifier is expressed in terms of a GADT (Generalized Algebraic Data Type), with the free occurrence of b inside the data constructor.

## Einstein’s Convention

By now you might be getting lost juggling the variances of objects appearing in those formulas. The coend variable, for instance, must appear under the integral sign once in the covariant and once in the contravariant position, and the variances on the right must match the variances on the left. Fortunately, there is a precedent in a different branch of mathematics, tensor calculus in vector spaces, with the kind of notation that takes care of variances. Einstein coopted and expanded this notation in his theory of relativity. Let’s see if we can adapt this technique to the calculus of profunctors.

The trick is to write contravariant indices as superscripts and the covariant ones as subscripts. So, from now on, we’ll write the components of a profunctor $p$ (we’ll switch to lower case to be compatible with Haskell) as $p^c\,_d$. Einstein also came up with a clever convention: implicit summation over a repeated index. In the case of profunctors, the summation corresponds to taking a coend. In this notation, a coend over a profunctor $p$ looks like a trace of a tensor:

$p^a\,_a = \int^a p(a, a)$

The composition of two profunctors becomes:

$(p \circ q)^a\, _c = p^a\,_b \, q^b\,_c = \int^b p(a, b) \times q(b, c)$

The summation convention applies only to adjacent indices. When they are separated by an explicit product sign (or any other operator), the coend is not assumed, as in:

$p^a\,_b \times q^b\,_c$

(no summation).

The hom-functor in a category $\mathbb{C}$ is also a profunctor, so it can be notated appropriately:

$C^a\,_b = C(a, b)$

The co-Yoneda lemma (see Ninja Yoneda) becomes:

$C^c\,_{c'}\,p^{c'}\,_d \cong p^c\,_d \cong p^c\,_{d'}\,D^{d'}\,_d$

suggesting that the hom-functors $C^c\,_{c'}$ and $D^{d'}\,_d$ behave like Kronecker deltas (in tensor-speak) or unit matrices. Here, the profunctor $p$ spans two categories

$p \colon \mathbb{C}^{op}\times \mathbb{D} \to Set$

The lifting of morphisms:

$f\colon c' \to c$
$g\colon d \to d'$

can be written as:

$p^f\,_g \colon p^c\,_d \to p^{c'}\,_{d'}$

There is one more useful identity that deals with mapping out from a coend. It’s the consequence of the fact that the hom-functor is continuous. It means that it maps (co-) limits to limits. More precisely, since the hom-functor is contravariant in the first variable, when we fix the target object, it maps colimits in the first variable to limits. (It also maps limits to limits in the second variable). Since a coend is a colimit, and an end is a limit, continuity leads to the following identity:

$Set(\int^c p(c, c), s) \cong \int_c Set(p(c, c), s)$

for any set $s$. Programmers know this identity as a generalization of case analysis: a function from a sum type is a product of functions (one function per case). If we interpret the coend as an existential quantifier, the end is equivalent to a universal quantifier.
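
In Haskell terms, the two directions of this isomorphism for the sum type Either are (a small illustration):

-- A function out of a sum is the same as a pair of functions:
fromEither :: (Either a b -> s) -> (a -> s, b -> s)
fromEither h = (h . Left, h . Right)

toEither :: (a -> s, b -> s) -> (Either a b -> s)
toEither (f, g) = either f g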

Let’s apply this identity to the mapping out from a composition of two profunctors:

$p^a\,_b \, q^b\,_c \to s = Set\big(\int^b p(a, b) \times q(b, c), s\big)$

This is isomorphic to:

$\int_b Set\Big(p(a,b) \times q(b, c), s\Big)$

or, after currying (using the product/exponential adjunction),

$\int_b Set\Big(p(a, b), q(b, c) \to s\Big)$

This gives us the mapping out formula:

$p^a\,_b \, q^b\,_c \to s \cong p^a\,_b \to q^b\,_c \to s$

with the right hand side natural in $b$. Again, we don’t perform implicit summation on the right, where the repeated indices are separated by an arrow. There, the repeated index $b$ is universally quantified (through the end), giving rise to a natural transformation.

## Bicategory Prof

Since profunctors can be composed using the coend formula, it’s natural to ask if there is a category in which they work as morphisms. The only problem is that profunctor composition satisfies the associativity and unit laws (see the co-Yoneda lemma above) only up to isomorphism. Not to worry, there is a name for that: a bicategory. In a bicategory we have objects, which are called 0-cells; morphisms, which are called 1-cells; and morphisms between morphisms, which are called 2-cells. When we say that categorical laws are satisfied up to isomorphism, it means that there is an invertible 2-cell that maps one side of the law to another.

The bicategory $Prof$ has categories as 0-cells, profunctors as 1-cells, and natural transformations as 2-cells. A natural transformation $\alpha$ between profunctors $p$ and $q$

$\alpha \colon p \Rightarrow q$

has components that are functions:

$\alpha^c\,_d \colon p^c\,_d \to q^c\,_d$

satisfying the usual naturality conditions. Natural transformations between profunctors can be composed as functions (this is called vertical composition). In fact 2-cells in any bicategory are composable, and there always is a unit 2-cell. It follows that 1-cells between any two 0-cells form a category called the hom-category.

But there is another way of composing 2-cells that’s called horizontal composition. In $Prof$, this horizontal composition is not the usual horizontal composition of natural transformations, because composition of profunctors is not the usual composition of functors. We have to construct a natural transformation between one composition of profunctors, say $p^a\,_b \, q^b\,_c$, and another, $r^a\,_b \, s^b\,_c$, having at our disposal two natural transformations:

$\alpha \colon p \Rightarrow r$

$\beta \colon q \Rightarrow s$

The construction is a little technical, so I’m moving it to the appendix. We will denote such horizontal composition as:

$(\alpha \circ \beta)^a\,_c \colon p^a\,_b \, q^b\,_c \to r^a\,_b \, s^b\,_c$

If one of the natural transformations is an identity natural transformation, say, from $p^a\,_b$ to $p^a\,_b$, horizontal composition is called whiskering and can be written as:

$(p \circ \beta)^a\,_c \colon p^a\,_b \, q^b\,_c \to p^a\,_b \, s^b\,_c$

The fact that a monad is a monoid in the category of endofunctors is a lucky accident. That’s because, in general, a monad can be defined in any bicategory, and $Cat$ just happens to be a (strict) bicategory. It has (small) categories as 0-cells, functors as 1-cells, and natural transformations as 2-cells. A monad is defined as a combination of a 0-cell (you need a category to define a monad), an endo-1-cell (that would be an endofunctor in that category), and two 2-cells. These 2-cells are variably called multiplication and unit, $\mu$ and $\eta$, or join and return.

Since $Prof$ is a bicategory, we can define a monad in it, and call it a promonad. A promonad consists of a 0-cell $C$, which is a category; an endo-1-cell $p$, which is a profunctor in that category; and two 2-cells, which are natural transformations:

$\mu^a\,_b \colon p^a\,_c \, p^c\,_b \to p^a\,_b$

$\eta^a\,_b \colon C^a\,_b \to p^a\,_b$

Remember that $C^a\,_b$ is the hom-profunctor in the category $C$ which, due to co-Yoneda, happens to be the unit of profunctor composition.

Programmers might recognize elements of the Haskell Arrow in it (see my blog post on monoids).

We can apply the mapping-out identity to the definition of multiplication and get:

$\mu^a\,_b \colon p^a\,_c \to p^c\,_b \to p^a\,_b$

Notice that this looks very much like composition of heteromorphisms. Moreover, the monadic unit $\eta$ maps regular morphisms to heteromorphisms. We can then construct a new category, whose objects are the same as the objects of $\mathbb{C}$, with hom-sets given by the profunctor $p$. That is, a hom-set from $a$ to $b$ is the set $p^a\,_b$. We can define an identity-on-objects functor $J$ from $\mathbb{C}$ to that category, whose action on hom-sets is given by $\eta$.
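
In Haskell, this structure might be sketched as a type class (hypothetical names, meant only to mirror the formulas; compare Category and Arrow from the standard libraries):

-- A promonad on Hask, with mu rewritten using the mapping-out identity:
class Profunctor p => Promonad p where
  unit :: (a -> b) -> p a b        -- eta: embeds ordinary morphisms
  mult :: p a c -> p c b -> p a b  -- mu: composes heteromorphisms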

Interestingly, this construction also works in the opposite direction (as was brought to my attention by Alex Campbell). Any identity-on-objects functor defines a promonad. Indeed, given a functor $J$, we can always turn it into a profunctor:

$p(c, d) = D(J\, c, J\, d)$

$p^c\,_d = D^{J\, c}\,_{J\, d}$

Since $J$ is identity on objects, the composition of morphisms in $D$ can be used to define the composition of heteromorphisms. This, in turn, can be used to define $\mu$, thus showing that $p$ is a promonad on $\mathbb{C}$.

## Conclusion

I realize that I have touched upon some pretty advanced topics in category theory, like bicategories and promonads, so it’s a little surprising that these concepts can be illustrated in Haskell, some of them being present in popular libraries, like the Arrow library, which has applications in functional reactive programming.

I’ve been experimenting with applying Einstein’s summation convention to profunctors, admittedly with mixed results. This is definitely work in progress and I welcome suggestions to improve it. The main problem is that we sometimes need to apply the sum (coend), and at other times the product (end) to repeated indices. This is in particular awkward in the formulation of the mapping out property. I suggest separating the non-summed indices with product signs or arrows but I’m not sure how well this will work.

## Appendix: Horizontal Composition in Prof

We have at our disposal two natural transformations:

$\alpha \colon p \Rightarrow r$

$\beta \colon q \Rightarrow s$

and the following coend, which is the composition of the profunctors $p$ and $q$:

$\int^b p(a, b) \times q(b, c)$

Our goal is to construct an element of the target coend:

$\int^b r(a, b) \times s(b, c)$

Horizontal composition of 2-cells

To construct an element of a coend, we need to provide just one element of $r(a, b') \times s(b', c)$ for some $b'$. We’ll look for a function that would construct such an element in the following hom-set:

$Set\Big(\int^b p(a, b) \times q(b, c), r(a, b') \times s(b', c)\Big)$

Using Einstein notation, we can write it as:

$p^a\,_b \, q^b\,_c \to r^a\,_{b'} \times s^{b'}\,_c$

and then use the mapping out property:

$p^a\,_b \to q^b\,_c \to r^a\,_{b'} \times s^{b'}\,_c$

We can pick $b'$ equal to $b$ and implement the function using the components of the two natural transformations, $\alpha^a\,_{b} \times \beta^{b}\,_c$.

Of course, this is how a programmer might think of it. A mathematician will use the universal property of the coend $(p \circ q)^a\,_c$, as in the diagram below (courtesy Alex Campbell).

Horizontal composition using the universal property of a coend

In Haskell, we can define a natural transformation between two (endo-) profunctors as a polymorphic function:

newtype PNat p q = PNat (forall a b. p a b -> q a b)

Horizontal composition is then given by:

horPNat :: PNat p r -> PNat q s
        -> Procompose p q a c -> Procompose r s a c
horPNat (PNat alpha) (PNat beta) (Procompose pbc qdb) =
  Procompose (alpha pbc) (beta qdb)


## Acknowledgment

I’m grateful to Alex Campbell from Macquarie University in Sydney for extensive help with this blog post.

Oxford, UK. 2019 July 22–26

Dear scientists, mathematicians, linguists, philosophers, and hackers,

We are writing to let you know about a fantastic opportunity to learn about the emerging interdisciplinary field of applied category theory from some of its leading researchers at the ACT2019 School.   It will begin in January 2019 and culminate in a meeting in Oxford, July 22-26.

Applied category theory is a topic of interest for a growing community of researchers, interested in studying systems of all sorts using category-theoretic tools.  These systems are found in the natural sciences and social sciences, as well as in computer science, linguistics, and engineering. The background and experience of our community’s members are as varied as the systems being studied.

The goal of the ACT2019 School is to help grow this community by pairing ambitious young researchers together with established researchers in order to work on questions, problems, and conjectures in applied category theory.

# Who should apply?

Anyone from anywhere who is interested in applying category-theoretic methods to problems outside of pure mathematics. This is emphatically not restricted to math students, but one should be comfortable working with mathematics. Knowledge of basic category-theoretic language—the definition of monoidal category, for example—is encouraged.

We will consider advanced undergraduates, Ph.D. students, and post-docs. We ask that you commit to the full program as laid out below.

Instructions on how to apply can be found below the research topic descriptions.

# Senior research mentors and their topics

Below is a list of the senior researchers, each of whom describes a research project that their team will pursue, as well as the background reading that will be studied between now and July 2019.

## Miriam Backens

Title: Simplifying quantum circuits using the ZX-calculus

Description: The ZX-calculus is a graphical calculus based on the category-theoretical formulation of quantum mechanics.  A complete set of graphical rewrite rules is known for the ZX-calculus, but not for quantum circuits over any universal gate set.  In this project, we aim to develop new strategies for using the ZX-calculus to simplify quantum circuits.

1. Matthew Amy, Jianxin Chen, Neil Ross. A finite presentation of CNOT-Dihedral operators. arXiv:1701.00140
2. Miriam Backens. The ZX-calculus is complete for stabiliser quantum mechanics. arXiv:1307.7025

## Tobias Fritz

Title: Partial evaluations, the bar construction, and second-order stochastic dominance

Description: We all know that 2+2+1+1 evaluates to 6. A less familiar notion is that it can partially evaluate to 5+1.  In this project, we aim to study the compositional structure of partial evaluation in terms of monads and the bar construction and see what this has to do with financial risk via second-order stochastic dominance.

1. Tobias Fritz, Paolo Perrone. Monads, partial evaluations, and rewriting. arXiv:1810.06037
2. Maria Manuel Clementino, Dirk Hofmann, George Janelidze. The monads of classical algebra are seldom weakly cartesian. Available here.
3. Todd Trimble. On the bar construction. Available here.

## Pieter Hofstra

Title: Complexity classes, computation, and Turing categories

Description: Turing categories form a categorical setting for studying computability without bias towards any particular model of computation. It is not currently clear, however, that Turing categories are useful to study practical aspects of computation such as complexity. This project revolves around the systematic study of step-based computation in the form of stack-machines, the resulting Turing categories, and complexity classes.  This will involve a study of the interplay between traced monoidal structure and computation. We will explore the idea of stack machines qua programming languages, investigate the expressive power, and tie this to complexity theory. We will also consider questions such as the following: can we characterize Turing categories arising from stack machines? Is there an initial such category? How does this structure relate to other categorical structures associated with computability?

1. J.R.B. Cockett, P.J.W. Hofstra. Introduction to Turing categories. APAL, Vol 156, pp 183-209, 2008.  Available here .
2. J.R.B. Cockett, P.J.W. Hofstra, P. Hrubes. Total maps of Turing categories. ENTCS (Proc. of MFPS XXX), pp 129-146, 2014.  Available here.
3. A. Joyal, R. Street, D. Verity. Traced monoidal categories. Mat. Proc. Cam. Phil. Soc. 3, pp. 447-468, 1996. Available here.

## Bartosz Milewski

Title: Traversal optics and profunctors

Description: In functional programming, optics are ways to zoom into a specific part of a given data type and mutate it.  Optics come in many flavors such as lenses and prisms and there is a well-studied categorical viewpoint, known as profunctor optics.  Of all the optic types, only the traversal has resisted a derivation from first principles into a profunctor description. This project aims to do just this.

1. Bartosz Milewski. Profunctor optics, categorical View. Available here.
2. Craig Pastro, Ross Street. Doubles for monoidal categories. arXiv:0711.1859

## Mehrnoosh Sadrzadeh

Title: Formal and experimental methods to reason about dialogue and discourse using categorical models of vector spaces

Description: Distributional semantics argues that meanings of words can be represented by the frequency of their co-occurrences in context. A model extending distributional semantics from words to sentences has a categorical interpretation via Lambek’s syntactic calculus or pregroups. In this project, we intend to further extend this model to reason about dialogue and discourse utterances where people interrupt each other, there are references that need to be resolved, disfluencies, pauses, and corrections. Additionally, we would like to design experiments and run toy models to verify predictions of the developed models.

1. Gerhard Jäger. A multi-modal analysis of anaphora and ellipsis. Available here.
2. Matthew Purver, Ronnie Cann, Ruth Kempson. Grammars as parsers: Meeting the dialogue challenge. Available here.

## David Spivak

Title: Toward a mathematical foundation for autopoiesis

Description: An autopoietic organization—anything from a living animal to a political party to a football team—is a system that is responsible for adapting and changing itself, so as to persist as events unfold. We want to develop mathematical abstractions that are suitable to found a scientific study of autopoietic organizations. To do this, we’ll begin by using behavioral mereology and graphical logic to frame a discussion of autopoiesis, most of all what it is and how it can be best conceived. We do not expect to complete this ambitious objective; we hope only to make progress toward it.

1. Fong, Myers, Spivak. Behavioral mereology.  arXiv:1811.00420.
2. Fong, Spivak. Graphical regular logic.  arXiv:1812.05765.
3. Luhmann. Organization and Decision, CUP. (Preface)

# School structure

All of the participants will be divided up into groups corresponding to the projects.  A group will consist of several students, a senior researcher, and a TA. Between January and June, we will have a reading course devoted to building the background necessary to meaningfully participate in the projects. Specifically, two weeks are devoted to each paper from the reading list. During this two week period, everybody will read the paper and contribute to a discussion in a private online chat forum.  There will be a TA serving as a domain expert and moderating this discussion. In the middle of the two week period, the group corresponding to the paper will give a presentation via video conference. At the end of the two week period, this group will compose a blog entry on this background reading that will be posted to the n-category cafe.

After all of the papers have been presented, there will be a two-week visit to Oxford University from 15 – 26 July 2019.  The first week is solely for participants of the ACT2019 School. Groups will work together on research projects, led by the senior researchers.

The second week of this visit is the ACT2019 Conference, where the wider applied category theory community will arrive to share new ideas and results. It is not part of the school, but there is a great deal of overlap and participation is very much encouraged. The school should prepare students to be able to follow the conference presentations to a reasonable degree.

# How to apply

To apply, please send the following to act2019school@gmail.com:

• A document with:
  • An explanation of any relevant background you have in category theory or any of the specific project areas
  • The date you completed or expect to complete your Ph.D., and a one-sentence summary of its subject matter
  • Your order of project preference
  • To what extent you can commit to coming to Oxford (availability of funding is uncertain at this time)
• A brief statement (~300 words) on why you are interested in the ACT2019 School. Some prompts:
  • How can this school contribute to your research goals?
  • How can this school help in your career?

Also, arrange to have a brief letter of recommendation sent on your behalf to act2019school@gmail.com, confirming any of the following:

• ACT2019 School’s relevance to your research/career

# Questions?

• Daniel Cicala. cicala (at) math (dot) ucr (dot) edu
• Jules Hedges. julian (dot) hedges (at) cs (dot) ox (dot) ac (dot) uk

## Flies

It was a hot evening and, as is usual at that time of the year, there were quite a few flies buzzing around us. Understandably, I was annoyed, as they were interfering with my meditation.

“Master,” I said, “You keep telling me that everything in this world has a purpose, but I can’t figure out the purpose of these flies. All they do is break my concentration. Can we move indoors already, behind the screens, so that we can continue the lessons in peace?”

The Master looked at me the way he usually does when I say something that shows my lack of understanding — which unfortunately happens a lot.

“The flies are here to teach us about meditation,” he said.

“How so?” I said. “Are you trying to tell me that I should be able to quiet my mind even when there’s constant interference?”

“That would be the ultimate goal,” said the Master, “but for now I’d like you to observe the way these flies move. Can you do that?”

“Of course, Master,” I said and started watching the flies criss-crossing the air in front of me.

“What do you see?” asked the Master after a while.

“I see them zig-zagging constantly. They never seem to fly in a straight line for longer than a fraction of a second.”

“You are very astute, my Disciple,” said the Master. “Now, why would you say they’re doing that?”

“I think they are doing that to avoid being caught,” I said. “Those flies that were, long ago, flying in straight lines were eliminated by predators, and only those that employed more elaborate movement schemes survived long enough to produce offspring. Evolution in action!” I said, not without some pride at my cleverness.

“Quite so, my Disciple,” said the Master, “quite so…”

“But suppose,” he continued, “that, in some other universe, there is a colony of flies that live confined within the boundaries of a hostile environment. Their life is short and full of suffering. But there is a benevolent being that can set individual flies free, to live a happy and productive life. The trouble is that she has to catch them first. And, in the beginning, it was easy, since they were all flying in straight lines. Almost all. The benevolent being was able to remove the straight-flying flies and make them happy. But there remained a few flies that, for one reason or another, kept zig-zagging. They survived long enough to produce offspring, some of which also kept zig-zagging. Soon enough, all flies in that hostile and unhappy environment developed this new survival strategy that prevented them from escaping their horrible fate. That’s evolution in action, too.”

“That’s a pretty sad story,” I said. “It shows that evolution is a cruel mistress. It doesn’t care if we are happy or not, as long as we produce offspring.”

“But what does it have to do with meditation?” I said, a little confused.

“The flies are our thoughts,” said the Master.

## The Pipe

It was a pleasant evening and I was enjoying the warm breeze coming from the mountains bringing with it the smell of pine and something else.

“Why are you smoking a pipe, Master? It’s bad for your health.”

“This is not a pipe,” said the Master.

“This is most definitely a pipe. You must have bought it in a pipe shop,” I said.

“This is not a pipe,” said the Master.

I thought for a moment.

“Oh, I see. You are making reference to the famous painting by Rene Magritte, right? Ceci n’est pas une pipe! But what Magritte meant was that it wasn’t a pipe–it was a picture of a pipe. He played on our confusion between the object itself and its representation. But here you are holding an actual pipe.”

“What makes you think it’s a pipe?” asked the Master.

“Well, I can look up the definition of a pipe and you’ll see that it describes the object you are holding. Do you want me to do that?” I asked.

I pulled out my tablet and started tapping.

“That which cannot be named,” said the Master.

“Oh, I know that one,” I said and continued tapping. I stopped after a few tries and looked up.

“I tried Tao and Dao, upper- and lowercase, but it didn’t work. Is it ‘the Tao,’ with ‘the’?”

“You are much too clever, my Disciple,” said the Master.

“Oh, you mean it’s literally ‘that which cannot be named’?” I started tapping again.

“Okay, here it is. According to the OED, a pipe is…” I hesitated for a moment.

“Wait, you don’t really want me to read the definition,” I said.

“No,” said the Master.

After a moment of silence, I said:
“You have corrected me, Master, because by naming the pipe I focused on just one small aspect of it. Its relationship to other pipes. By doing that I ignored its relationship to you, to me, to our conversation, to this lovely sunset, to Magritte and–now I get it–to the Tao.”

“But, Master, I’m confused,” I said after a while. “When you say that ‘The Tao that can be named is not the eternal Tao,’ you are giving it a name, aren’t you? You are calling it the Tao.”

“This is not a name,” said the Master.

“I have the feeling that if I say that this is indeed a name, I would be hitting a dead end,” I said.

We sat there for a moment while I was organizing my thoughts. Then it occurred to me.

“By calling it the Tao, you are not separating it from everything else, because the Tao is in everything. Neither are you ignoring its relationship to yourself or to me, because you are the Tao and I am the Tao. And the lovely sunset, and the pipe, and Magritte, it’s all the eternal Tao.”

“This is not the Tao,” said the Master.

## Fallout

“I watched a movie last night,” I said. “And it made me think.”

“Movies often make us think,” said the Master. “Good movies, like life itself, ask a lot of questions, but rarely provide answers.”

“Well, that’s the thing, Master,” I said. “Maybe you know the answer to this question, or maybe you can steer me towards the answer. I’m sure this problem has been analyzed before by many people much wiser than yours truly.

“It’s a problem of moral nature. In the movie, agent Hunt faces a dilemma. His friend is in immediate mortal danger. Hunt can save him, but at the risk of endangering the lives of thousands of innocent people. He makes a choice, saves the friend but, in the process, the terrorists get hold of plutonium, which they use to make nuclear bombs. Of course, in the movie, he’s ultimately able to avert the disaster, disabling the bombs literally one second before they’re about to go off.

“Sorry if I spoiled the movie for you, Master.”

“Don’t worry, I’ve seen the movie,” said the Master.

“So what do you think, Master? Was agent Hunt acting recklessly, risking uncountable lives to save one?”

“I think the answer is clear. It’s just simple math: one life against thousands. I would probably feel guilty for the rest of my life for sacrificing a friend, but what right do I have to risk thousands of innocent lives?”

“You say it’s simple math,” said the Master. “I presume there is an equation that calculates the moral value of an act, based on the number of lives saved or lost.”

“It’s not an exact science, but I guess one could make some rough estimates,” I said. “I’ve read some articles that mostly deal with pulling levers to divert trolleys. So this seems like one of these problems, where your friend is tied up on one track, and thousands of people on another. A runaway trolley is going to kill your friend, and you pull the switch to divert it to the other track, possibly killing thousands of people.”

“If this is simple math, then why do you say you’d feel guilty?” asked the Master. “Shouldn’t you feel satisfied, like when you solve a difficult equation?”

“I don’t know. I think I would always speculate: What if? What if I saved my friend and, just like in the movie, were able to avert the disaster? I’d never know.”

“And what if you saved your friend’s life and the bomb exploded?” asked the Master.

“I guess I’d feel terrible for the rest of my life. And I would probably be the most despised person on Earth.”

“And what if that explosion prevented an even bigger disaster in the future?” asked the Master.

“And what if that bigger disaster prevented an even bigger disaster?” I asked. “Where does this end? Are you saying that, since we cannot predict the results of our actions on a global scale, then there is no moral imperative?”

“Would that satisfy you?” asked the Master.

“No, it wouldn’t!”

“Would you like to have a small set of simple rules to guide all moral decisions in your life?” asked the Master.

“When you put it this way, I’m not sure. I think there have been many attempts at rule-based ethics, and they have all exhibited some pretty disastrous failure modes. It vaguely reminds me of Gödel’s incompleteness theorem. No matter what moral axioms you choose, there will be a situation in which they fail.

“On the other hand, rejecting the axioms may lead to an even bigger tragedy, like in the case of Raskolnikov.”

“Do you see similarities between Raskolnikov and agent Hunt?” asked the Master.

“They both reject the ‘Thou shalt not kill’ commandment. They both feel intense loyalty to their friends and family. But Raskolnikov had a lot of time to think about his choices–he even published an article about it–whereas Hunt acted impulsively, following his gut feelings. One was rational, the other irrational.”

“But you said that Raskolnikov had no axioms,” said the Master. “So how could he rationally justify his actions?”

“I see your point,” I said. “He was trying to do the math. Solve the ethical equation. His hubris was not in rejecting the accepted axioms, but in believing that he could come up with a better set. So, in a way, agent Hunt had the advantage of being a moral simpleton.”

“He was the uncarved wood,” said the Master.

Yes, it’s that time of the year again! I started a little tradition a year ago with Stalking a Hylomorphism in the Wild. This year I was reminded of the Advent of Code by a tweet showing a succinct C++ program.

That piece of code is probably unreadable to a regular C++ programmer, but makes perfect sense to a Haskell programmer.

Here’s the description of the problem: You are given a list of equal-length strings. Every string is different, but two of these strings differ only by one character. Find these two strings and return their matching part. For instance, if the two strings were “abcd” and “abxd”, you would return “abd”.

What makes this problem particularly interesting is its potential application to a much more practical task of matching strands of DNA while looking for mutations. I decided to explore the problem a little beyond the brute force approach. And, of course, I had a hunch that I might encounter my favorite wild beast–the hylomorphism.

## Brute force approach

First things first. Let’s do the boring stuff: read the file and split it into lines, which are the strings we are supposed to process. So here it is:

main = do
  txt <- readFile "input.txt" -- assuming the input is in a file; the file name is hypothetical
  let cs = lines txt
  print $ findMatch cs

The real work is done by the function findMatch, which takes a list of strings and produces the answer, which is a single string.

findMatch :: [String] -> String

First, let’s define a function that calculates the distance between any two strings.

distance :: (String, String) -> Int

We’ll define the distance as the count of mismatched characters. Here’s the idea: we have to compare the strings (which, let me remind you, are of equal length) character by character. Strings are lists of characters. The first step is to take two strings and zip them together, producing a list of pairs of characters. In fact we can combine the zipping with the next operation–in this case, comparison for inequality, (/=)–using the library function zipWith. However, zipWith is defined to act on two lists, and we want it to act on a pair of lists–a subtle distinction, which can be easily overcome by applying uncurry:

uncurry :: (a -> b -> c) -> ((a, b) -> c)

which turns a function of two arguments into a function that takes a pair. Here’s how we use it:

uncurry (zipWith (/=))

The comparison operator (/=) produces a Boolean result, True or False. We want to count the number of differences, so we’ll convert True to one, and False to zero:

fromBool :: Num a => Bool -> a
fromBool False = 0
fromBool True  = 1

(Notice that such subtleties as the difference between Bool and Int are blissfully ignored in C++.)

Finally, we’ll sum all the ones using sum. Altogether we have:

distance = sum . fmap fromBool . uncurry (zipWith (/=))

Now that we know how to find the distance between any two strings, we’ll just apply it to all possible pairs of strings. To generate all pairs, we’ll use a list comprehension:

let ps = [(s1, s2) | s1 <- ss, s2 <- ss]

(In the C++ code, this was done by cartesian_product.) Our goal is to find the pair whose distance is exactly one. To this end, we’ll apply the appropriate filter:

filter ((== 1) . distance) ps

For our purposes, we’ll assume that there is exactly one such pair (if there isn’t one, we are willing to let the program fail with a fatal exception).

(s, s') = head $ filter ((== 1) . distance) ps

The final step is to remove the mismatched character:

filter (uncurry (==)) $ zip s s'

We use our friend uncurry again, because the equality operator (==) expects two arguments, and we are calling it with a pair of arguments. The result of filtering is a list of identical pairs. We’ll fmap fst to pick the first components.

findMatch :: [String] -> String
findMatch ss =
  let ps = [(s1, s2) | s1 <- ss, s2 <- ss]
      (s, s') = head $ filter ((== 1) . distance) ps
  in fmap fst $ filter (uncurry (==)) $ zip s s'
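A quick sanity check on the example from the problem statement (a hypothetical GHCi session, assuming all the definitions above are in scope):

-- ghci> findMatch ["abcd", "abxd"]
-- "abd"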

This program produces the correct result and we could stop right here. But that wouldn’t be much fun, would it? Besides, it’s possible that other algorithms could perform better, or be more flexible when applied to a more general problem.

## Data-driven approach

The main problem with our brute-force approach is that we are comparing everything with everything. As we increase the number of input strings, the number of comparisons grows quadratically. There is a standard way of cutting down on the number of comparisons: organizing the input into a neat data structure.

We are comparing strings, which are lists of characters, and list comparison is done recursively. Assume that you know that two strings share a prefix. Compare the next character. If it’s equal in both strings, recurse. If it’s not, we have a single character fault. The rest of the two strings must now match perfectly to be considered a solution. So the best data structure for this kind of algorithm should batch together strings with equal prefixes. Such a data structure is called a prefix tree, or a trie (pronounced try).

At every level of our prefix tree we’ll branch based on the current character (so the maximum branching factor is, in our case, 26). We’ll record the character, the count of strings that share the prefix that led us there, and the child trie storing all the suffixes.

data Trie = Trie [(Char, Int, Trie)]
  deriving (Show, Eq)

Here’s an example of a trie that stores just two strings, “abcd” and “abxd”. It branches after b.

a 2
  b 2
    c 1       x 1
      d 1       d 1
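For concreteness, here is the same two-string trie written out as a value of our Trie type (my own transcription of the diagram above):

exampleTrie :: Trie
exampleTrie = Trie
  [ ('a', 2, Trie
      [ ('b', 2, Trie
          [ ('c', 1, Trie [ ('d', 1, Trie []) ])
          , ('x', 1, Trie [ ('d', 1, Trie []) ])
          ])
      ])
  ]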

When inserting a string into a trie, we recurse both on the characters of the string and the list of branches. When we find a branch with the matching character, we increment its count and insert the rest of the string into its child trie. If we run out of branches, we create a new one based on the current character, give it the count one, and the child trie with the rest of the string:

insertS :: Trie -> String -> Trie
insertS t "" = t
insertS (Trie bs) s = Trie (inS bs s)
  where
    inS ((x, n, t) : bs) (c : cs) =
      if c == x
      then (c, n + 1, insertS t cs) : bs
      else (x, n, t) : inS bs (c : cs)
    inS [] (c : cs) = [(c, 1, insertS (Trie []) cs)]

We convert our input to a trie by inserting all the strings into an (initially empty) trie:

mkTrie :: [String] -> Trie
mkTrie = foldl insertS (Trie [])
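As a quick check (a hypothetical evaluation; it should agree with the hand-written exampleTrie above, since Trie derives Eq):

-- ghci> mkTrie ["abcd", "abxd"] == exampleTrie
-- True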

Of course, there are many optimizations we could use, if we were to run this algorithm on big data. For instance, we could compress the branches as is done in radix trees, or we could sort the branches alphabetically. I won’t do it here.

I won’t pretend that this implementation is simple and elegant. And it will get even worse before it gets better. The problem is that we are dealing explicitly with recursion in multiple dimensions. We recurse over the input string, the list of branches at each node, as well as the child trie. That’s a lot of recursion to keep track of–all at once.

Now brace yourself: We have to traverse the trie starting from the root. At every branch we check the prefix count: if it’s greater than one, we have more than one string going down, so we recurse into the child trie. But there is also another possibility: we can allow a mismatch at the current level. The current characters may be different but, since we allow only one mismatch, the rest of the strings have to match exactly. That’s what the function exact does. Notice that exact t is used inside foldMap, which is a version of fold that works on monoids–here, on lists of strings.

match1 :: Trie -> [String]
match1 (Trie bs) = go bs
  where
    go :: [(Char, Int, Trie)] -> [String]
    go ((x, n, t) : bs) =
      let a1s = if n > 1
                then fmap (x:) $ match1 t
                else []
          a2s = foldMap (exact t) bs
          a3s = go bs -- recurse over the list of branches
      in a1s ++ a2s ++ a3s
    go [] = []
    exact t (_, _, t') = matchAll t t'

Here’s the function that finds all exact matches between two tries. It does it by generating all pairs of branches in which the top characters match, and then recursing down.

matchAll :: Trie -> Trie -> [String]
matchAll (Trie bs) (Trie bs') = mAll bs bs'
  where
    mAll :: [(Char, Int, Trie)] -> [(Char, Int, Trie)] -> [String]
    mAll [] [] = [""]
    mAll bs bs' =
      let ps = [ (c, t, t') | (c,  _,  t ) <- bs
                            , (c', _', t') <- bs'
                            , c == c' ]
      in foldMap go ps
    go (c, t, t') = fmap (c:) (matchAll t t')

When mAll reaches the leaves of the trie, it returns a singleton list containing an empty string. Subsequent actions of fmap (c:) will prepend characters to this string. Since we are expecting exactly one solution to the problem, we’ll extract it using head:

findMatch1 :: [String] -> String
findMatch1 cs = head $ match1 (mkTrie cs)
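Another quick check (again a hypothetical evaluation):

-- ghci> findMatch1 ["abcd", "abxd"]
-- "abd"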

## Recursion schemes

As you hone your functional programming skills, you realize that explicit recursion is to be avoided at all costs. There is a small number of recursive patterns that have been codified, and they can be used to solve the majority of recursion problems (for some categorical background, see F-Algebras). Recursion itself can be expressed in Haskell as a data structure: a fixed point of a functor:

newtype Fix f = In { out :: f (Fix f) }

In particular, our trie can be generated from the following functor:

data TrieF a = TrieF [(Char, a)]
  deriving (Show, Functor) -- deriving Functor needs the DeriveFunctor extension

Notice how I have replaced the recursive call to the Trie type constructor with the free type variable a. The functor in question defines the structure of a single node, leaving holes marked by the occurrences of a for the recursion. When these holes are filled with full blown tries, as in the definition of the fixed point, we recover the complete trie.

I have also made one more simplification by getting rid of the Int in every node. This is because, in the recursion scheme I’m going to use, the folding of the trie proceeds bottom-up, rather than top-down, so the multiplicity information can be passed upwards.
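To make the connection explicit, here is a small sketch of my own: a tiny trie, storing just “ab” and “ax”, built by hand as a fixed point of TrieF (counts gone, as just explained):

type Trie' = Fix TrieF

tinyTrie :: Trie'
tinyTrie = In (TrieF
  [ ('a', In (TrieF
      [ ('b', In (TrieF []))
      , ('x', In (TrieF []))
      ]))
  ])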

The main advantage of recursion schemes is that they let us use simpler, non-recursive building blocks such as algebras and coalgebras. Let’s start with a simple coalgebra that lets us build a trie from a list of strings. A coalgebra is a fancy name for a particular type of function:

type Coalgebra f x = x -> f x

Think of x as a type for a seed from which one can grow a tree. A coalgebra tells us how to use this seed to create a single node described by the functor f and populate it with (presumably smaller) seeds. We can then pass this coalgebra to a simple algorithm, which will recursively expand the seeds. This algorithm is called the anamorphism:

ana :: Functor f => Coalgebra f a -> a -> Fix f
ana coa = In . fmap (ana coa) . coa
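Before applying it to tries, here is a minimal toy example of my own, just to see ana in action–unfolding a countdown into a list structure:

-- again using DeriveFunctor
data ListF a = NilF | ConsF Int a
  deriving Functor

-- the coalgebra: a seed n produces a node holding n
-- and a smaller seed, until we hit zero
countDown :: Coalgebra ListF Int
countDown 0 = NilF
countDown n = ConsF n (n - 1)

-- ana countDown 3 builds the equivalent of [3, 2, 1]
-- as a value of type Fix ListF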

Let’s see how we can apply it to the task of building a trie. The seed in our case is a list of strings (as per the definition of our problem, we’ll assume they are all of equal length). We start by grouping these strings into bunches of strings that start with the same character. There is a library function called groupWith that does exactly that. We have to import the right library:

import GHC.Exts (groupWith)

This is the signature of the function:

groupWith :: Ord b => (a -> b) -> [a] -> [[a]]

It takes a function a -> b that converts each list element to a type that supports comparison (as per the typeclass Ord), and partitions the input into lists that compare equal under this particular ordering. In our case, we are going to extract the first character from a string using head and bunch together all strings that share that first character.

let sss = groupWith head ss
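For instance (a hypothetical evaluation–note that groupWith also sorts the groups by the chosen key):

-- ghci> groupWith head ["xbcd", "abcd", "abxd"]
-- [["abcd","abxd"],["xbcd"]]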

The tails of those strings will serve as seeds for the next tier of the trie. Eventually the strings will be shortened to nothing, triggering the end of recursion.

fromList :: Coalgebra TrieF [String]
fromList ss =
  -- are the strings empty? (checking one is enough)
  if null (head ss)
  then TrieF [] -- leaf
  else
    let sss = groupWith head ss
    in TrieF $ fmap mkBranch sss

The function mkBranch takes a bunch of strings sharing the same first character and creates a branch seeded with the suffixes of those strings.

mkBranch :: [String] -> (Char, [String])
mkBranch sss =
  let c = head (head sss) -- they're all the same
  in (c, fmap tail sss)

Notice that we have completely avoided explicit recursion.

The next step is a little harder. We have to fold the trie. Again, all we have to define is a step that folds a single node whose children have already been folded. This step is defined by an algebra:

type Algebra f x = f x -> x

Just as the type x described the seed in a coalgebra, here it describes the accumulator–the result of the folding of a recursive data structure. We pass this algebra to a special algorithm called a catamorphism that takes care of the recursion:

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

Notice that the folding proceeds from the bottom up: the algebra assumes that all the children have already been folded.

The hardest part of designing an algebra is figuring out what information needs to be passed up in the accumulator. We obviously need to return the final result which, in our case, is the list of strings with one mismatched character. But when we are in the middle of a trie, we have to keep in mind that the mismatch may still happen above us. So we also need a list of strings that may serve as suffixes when the mismatch occurs. We have to keep them all, because they might be matched later with strings from other branches.

In other words, we need to be accumulating two lists of strings. The first list accumulates all suffixes for future matching, the second accumulates the results: strings with one mismatch (after the mismatch has been removed). We therefore should implement the following algebra:

Algebra TrieF ([String], [String])

To understand the implementation of this algebra, consider a single node in a trie. It’s a list of branches, or pairs, whose first component is the current character, and the second a pair of lists of strings–the result of folding a child trie. The first list contains all the suffixes gathered from lower levels of the trie. The second list contains partial results: strings that were matched modulo a single-character defect.

As an example, suppose that you have a node with two branches:

[ ('a', (["bcd", "efg"], ["pq"]))
, ('x', (["bcd"], [])) ]

First we prepend the current character to strings in both lists using the function prep with the following signature:

prep :: (Char, ([String], [String])) -> ([String], [String])

This way we convert each branch to a pair of lists.

[ (["abcd", "aefg"], ["apq"])
, (["xbcd"], []) ]

We then merge all the lists of suffixes and, separately, all the lists of partial results, across all branches. In the example above, we concatenate the lists in the two columns:

(["abcd", "aefg", "xbcd"], ["apq"])

Now we have to construct new partial results. To do this, we create another list of accumulated strings from all branches (this time without prefixing them):

ss = concat $ fmap (fst . snd) bs

In our case, this would be the list:

["bcd", "efg", "bcd"]

To detect duplicate strings, we’ll insert them into a multiset, which we’ll implement as a map. We need to import the appropriate library:

import qualified Data.Map as M

and define a multiset Counts as:

type Counts a = M.Map a Int

Every time we add a new item, we increment the count:

add :: Ord a => Counts a -> a -> Counts a
add cs c = M.insertWith (+) c 1 cs

To insert all strings from a list, we use a fold:

mset = foldl add M.empty ss
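With the running example, this fold would produce (a hypothetical evaluation):

-- ghci> foldl add M.empty ["bcd", "efg", "bcd"]
-- fromList [("bcd",2),("efg",1)]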

We are only interested in items that have multiplicity greater than one. We can filter them and extract their keys:

dups = M.keys $ M.filter (> 1) mset

Here’s the complete algebra:

accum :: Algebra TrieF ([String], [String])
accum (TrieF []) = ([""], [])
accum (TrieF bs) = -- each branch b :: (Char, ([String], [String]))
    let -- prepend the branch character to strings in both lists
        pss = unzip $ fmap prep bs
        (ss1, ss2) = both concat pss
        -- find duplicates
        ss = concat $ fmap (fst . snd) bs
        mset = foldl add M.empty ss
        dups = M.keys $ M.filter (> 1) mset
    in (ss1, dups ++ ss2)
  where
    prep :: (Char, ([String], [String])) -> ([String], [String])
    prep (c, pss) = both (fmap (c:)) pss

I used a handy helper function that applies a function to both components of a pair:

both :: (a -> b) -> (a, a) -> (b, b)
both f (x, y) = (f x, f y)

And now for the grand finale: Since we create the trie using an anamorphism only to immediately fold it using a catamorphism, why don’t we cut the middle person? Indeed, there is an algorithm called the hylomorphism that does just that. It takes the algebra, the coalgebra, and the seed, and returns the fully charged accumulator.

hylo :: Functor f => Algebra f a -> Coalgebra f b -> b -> a
hylo alg coa = alg . fmap (hylo alg coa) . coa
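It’s easy to convince yourself that this is equivalent to first running the anamorphism and then the catamorphism–except that the fused version never materializes the whole intermediate structure. A sketch of the naive, unfused composition, for comparison:

-- the unfused version: build the full structure, then fold it
hyloNaive :: Functor f => Algebra f a -> Coalgebra f b -> b -> a
hyloNaive alg coa = cata alg . ana coa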

And this is how we extract and print the final result:

print $ head $ snd $ hylo accum fromList cs

## Conclusion

The advantage of using the hylomorphism is that, because of Haskell’s laziness, the trie is never wholly constructed, and therefore doesn’t require large amounts of memory. At every step enough of the data structure is created as is needed for immediate computation; then it is promptly released. In fact, the definition of the data structure is only there to guide the steps of the algorithm. We use a data structure as a control structure. Since data structures are much easier to visualize and debug than control structures, it’s almost always advantageous to use them to drive computation.

In fact, you may notice that, in the very last step of the computation, our accumulator recreates the original list of strings (actually, because of laziness, they are never fully reconstructed, but that’s not the point). In reality, the characters in the strings are never copied–the whole algorithm is just a choreographed dance of internal pointers, or iterators. But that’s exactly what happens in the original C++ algorithm. We just use a higher level of abstraction to describe this dance.

I haven’t looked at the performance of the various implementations. Feel free to test them and report the results. The code is available on GitHub.

## Acknowledgments

I’m grateful to the participants of the Seattle Haskell Users’ Group for many helpful comments during my presentation.

I wanted to do category theory, not geometry, so the idea of studying simplexes didn’t seem very attractive at first. But as I was getting deeper into it, a very different picture emerged. Granted, the study of simplexes originated in geometry, but then category theorists took interest in it and turned it into something completely different. The idea is that simplexes define a very peculiar scheme for composing things. The way you compose lower dimensional simplexes in order to build higher dimensional simplexes forms a pattern that shows up in totally unrelated areas of mathematics… and programming. Recently I had a discussion with Edward Kmett in which he hinted at the simplicial structure of cumulative edits in a source file.

## Geometric picture

Let’s start with a simple idea, and see what we can do with it. The idea is that of triangulation, and it almost goes back to the beginning of the Agricultural Era. Somebody smart noticed a long time ago that we can measure plots of land by subdividing them into triangles.

Why triangles and not, say, rectangles or quadrilaterals? Well, to begin with, a quadrilateral can always be divided into triangles, so triangles are more fundamental as units of composition in 2-d. But, more importantly, triangles also work when you embed them in higher dimensions, and quadrilaterals don’t. You can take any three points and there is a unique flat triangle that they span (it may be degenerate, if the points are collinear). But four points will, in general, span a warped quadrilateral. Mind you, rectangles work great on flat screens, and we use them all the time for selecting things with the mouse. But on a curved or bumpy surface, triangles are the only option.

Surveyors have covered the whole Earth, mountains and all, with triangles. In computer games, we build complex models, including human faces or dolphins, using wireframes. Wireframes are just systems of triangles that share some of the vertices and edges. So triangles can be used to approximate complex 2-d surfaces in 3-d.

## More dimensions

How can we generalize this process? First of all, we could use triangles in spaces that have more than 3 dimensions. This way we could, for instance, build a Klein bottle in 4-d without it intersecting itself.

We can also consider replacing triangles with higher-dimensional objects. For instance, we could approximate 3-d volumes by filling them with cubes. This technique is used in computer graphics, where we often organize lots of cubes in data structures called octrees. But just like squares or quadrilaterals don’t work very well on non-flat surfaces, cubes cannot be used in curved spaces. The natural generalization of a triangle to something that can fill a volume without any warping is a tetrahedron. Any four points in space span a tetrahedron.

We can go on generalizing this construction to higher and higher dimensions. To form an n-dimensional simplex we can pick $n+1$ points. We can draw a segment between any two points, a triangle between any three points, a tetrahedron between any four points, and so on. It’s thus natural to define a 1-dimensional simplex to be a segment, and a 0-dimensional simplex to be a point.

Simplexes (or simplices, as they are sometimes called) have very regular recursive structure. An n-dimensional simplex has $n+1$ faces, which are all $n-1$ dimensional simplexes. A tetrahedron has four triangular faces, a triangle has three sides (one-dimensional simplexes), and a segment has two endpoints. (A point should have one face–and it does, in the “augmented” theory). Every higher-dimensional simplex can be decomposed into lower-dimensional simplexes, and the process can be repeated until we get down to individual vertexes. This constitutes a very interesting composition scheme that will come up over and over again in unexpected places.

Notice that you can always construct a face of a simplex by deleting one point. It’s the point opposite to the face in question. This is why there are as many faces as there are points in a simplex.

## Look Ma! No coordinates!

So far we’ve been secretly thinking of points as elements of some n-dimensional linear space, presumably $\mathbb{R}^n$. Time to make another leap of abstraction. Let’s abandon coordinate systems. Can we still define simplexes and, if so, how would we use them?

Consider a wireframe built from triangles. It defines a particular shape. We can deform this shape any way we want but, as long as we don’t break connections or fuse points, we cannot change its topology. A wireframe corresponding to a torus can never be deformed into a wireframe corresponding to a sphere.

The information about topology is encoded in connections. The connections don’t depend on coordinates. Two points are either connected or not. Two triangles either share a side or they don’t. Two tetrahedrons either share a triangle or they don’t. So if we can define simplexes without resorting to coordinates, we’ll have a new language to talk about topology.

But what becomes of a point if we discard its coordinates? It becomes an element of a set. An arrangement of simplexes can be built from a set of points or 0-simplexes, together with a set of 1-simplexes, a set of 2-simplexes, and so on. Imagine that you bought a piece of furniture from Ikea. There is a bag of screws (0-simplexes), a box of sticks (1-simplexes), a crate of triangular planks (2-simplexes), and so on. All parts are freely stretchable (we don’t care about sizes).

You have no idea what the piece of furniture will look like unless you have an instruction booklet. The booklet tells you how to arrange things: which sticks form the edges of which triangles, etc. In general, you want to know which lower-order simplexes are the “faces” of higher-order simplexes. This can be determined by defining functions between the corresponding sets, which we’ll call face maps.

For instance, there should be two functions from the set of segments to the set of points: one assigning the beginning, and the other the end, to each segment. There should be three functions from the set of triangles to the set of segments, and so on. If the same point is the end of one segment and the beginning of another, the two segments are connected. A segment may be shared between multiple triangles, a triangle may be shared between tetrahedrons, and so on.

You can compose these functions–for instance, to select a vertex of a triangle, or a side of a tetrahedron. Composable functions suggest a category, in this case a subcategory of Set. Selecting a subcategory suggests a functor from some other, simpler, category. What would that category be?

## The Simplicial category

The objects of this simpler category, let’s call it the simplicial category $\Delta$, would be mapped by our functor to corresponding sets of simplexes in Set. So, in $\Delta$, we need one object corresponding to the set of points, let’s call it $[0]$; another for segments, $[1]$; another for triangles, $[2]$; and so on. In other words, we need one object called $[n]$ per one set of n-dimensional simplexes.

What really determines the structure of this category is its morphisms. In particular, we need morphisms that would be mapped, under our functor, to the functions that define faces of our simplexes–the face maps. This means, in particular, that for every $n$ we need $n+1$ distinct functions from the image of $[n]$ to the image of $[n-1]$. These functions are themselves images of morphisms that go between $[n]$ and $[n-1]$ in $\Delta$; we do, however, have a choice of the direction of these morphisms. If we choose our functor to be contravariant, the face maps from the image of $[n]$ to the image of $[n-1]$ will be images of morphisms going from $[n-1]$ to $[n]$ (the opposite direction). This contravariant functor from $\Delta$ to Set (such functors are called presheaves) is called a simplicial set.

What’s attractive about this idea is that there is a category that has exactly the right types of morphisms. It’s a category whose objects are ordinals, or ordered sets of numbers, and morphisms are order-preserving functions. Object $[0]$ is the one-element set $\{0\}$, $[1]$ is the set $\{0, 1\}$, $[2]$ is $\{0, 1, 2\}$, and so on. Morphisms are functions that preserve order, that is, if $n < m$ then $f(n) \leq f(m)$. Notice that the inequality is non-strict. This will become important in the definition of degeneracy maps.

The description of simplicial sets using a functor follows a very common pattern in category theory. The simpler category defines the primitives and the grammar for combining them. The target category (often the category of sets) provides models for the theory in question. The same trick is used, for instance, in defining abstract algebras in Lawvere theories. There, too, the syntactic category consists of a tower of objects with a very regular set of morphisms, and the models are contravariant Set-valued functors.

Because simplicial sets are functors, they form a functor category, with natural transformations as morphisms. A natural transformation between two simplicial sets is a family of functions that map vertices to vertices, edges to edges, triangles to triangles, and so on. In other words, it embeds one simplicial set in another.

## Face maps

We will obtain face maps as images of injective morphisms between objects of $\Delta$. Consider, for instance, an injection from $[1]$ to $[2]$. Such a morphism takes the set $\{0, 1\}$ and maps it to $\{0, 1, 2\}$. In doing so, it must skip one of the numbers in the second set, preserving the order of the other two. There are exactly three such morphisms, skipping either $0$, $1$, or $2$.

And, indeed, they correspond to three face maps. If you think of the three numbers as numbering the vertices of a triangle, the three face maps remove the skipped vertex from the triangle leaving the opposing side free. The functor is contravariant, so it reverses the direction of morphisms.

The same procedure works for higher order simplexes. An injection from $[n-1]$ to $[n]$ maps $\{0, 1,...,n-1\}$ to $\{0, 1,...,n\}$ by skipping some $k$ between $0$ and $n$.

The corresponding face map is called $d_{n, k}$, or simply $d_k$, if $n$ is obvious from the context.

Such face maps automatically satisfy the obvious identities for any $i < j$:

$d_i d_j = d_{j-1} d_i$

The change from $j$ to $j-1$ on the right compensates for the fact that, after removing the $i$th number, the remaining indexes are shifted down.
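To see this identity at work, here is a small worked example of my own: take the triangle $\{0, 1, 2\}$ and let $i = 0$, $j = 2$. On the left, $d_2$ deletes vertex $2$, leaving the edge $\{0, 1\}$, and $d_0$ then deletes vertex $0$, leaving just vertex $1$. On the right, $d_0$ deletes vertex $0$ first; the survivors $\{1, 2\}$ are renumbered $\{0, 1\}$, so $d_{j-1} = d_1$ deletes the number at position $1$–originally vertex $2$–and we are again left with vertex $1$.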

These injections generate, through composition, all the morphisms that strictly preserve the ordering (we also need identity maps to form a category). But, as I mentioned before, we are also interested in those maps that are non-strict in the preservation of ordering (that is, they can map two consecutive numbers into one). These generate the so-called degeneracy maps. Before we get to definitions, let me provide some motivation.

## Homotopy

One of the important applications of simplexes is in homotopy. You don’t need to study algebraic topology to get a feel for what homotopy is. Simply put, homotopy deals with shrinking and holes. For instance, you can always shrink a segment to a point. The intuition is pretty obvious. You have a segment at time zero, and a point at time one, and you can create a continuous “movie” in between. Notice that a segment is a 1-simplex, whereas a point is a 0-simplex. Shrinking therefore provides a bridge between different-dimensional simplexes.

Similarly, you can shrink a triangle to a segment–in particular the segment that is one of its sides.

You can also shrink a triangle to a point by pasting together two shrinking movies–first shrinking the triangle to a segment, and then the segment to a point. So shrinking is composable.

But not all higher-dimensional shapes can be shrunk to all lower-dimensional shapes. For instance, an annulus (a.k.a., a ring) cannot be shrunk to a segment–this would require tearing it. It can, however, be shrunk to a circular loop (or two segments connected end to end to form a loop). That’s because both the annulus and the circle have a hole. So continuous shrinking can be used to classify shapes according to how many holes they have.

We have a problem, though: you can’t describe continuous transformations without using coordinates. But we can do the next best thing: we can define degenerate simplexes to bridge the gap between dimensions. For instance, we can build a degenerate segment, which uses the same vertex twice. Or a collapsed triangle, which uses the same side twice (its third side is a degenerate segment).

## Degeneracy maps

We model operations on simplexes, such as face maps, through morphisms from the category opposite to $\Delta$. The creation of degenerate simplexes will therefore correspond to mappings from $[n+1]$ to $[n]$. They obviously cannot be injective, but we may choose them to be surjective. For instance, the creation of a degenerate segment from a point corresponds to the (opposite) mapping of $\{0, 1\}$ to $\{0\}$, which collapses the two numbers to one.

We can construct a degenerate triangle from a segment in two ways. These correspond to the two surjections from $\{0, 1, 2\}$ to $\{0, 1\}$.

The first one, called $\sigma_{1, 0}$, maps both $0$ and $1$ to $0$, and $2$ to $1$. Notice that, as required, it preserves the order, albeit weakly. The second, $\sigma_{1, 1}$, maps $0$ to $0$ but collapses $1$ and $2$ to $1$.

In general, $\sigma_{n, k}$ maps $\{0, 1, \dots, k, k+1, \dots, n+1\}$ to $\{0, 1, \dots, k, \dots, n\}$ by collapsing $k$ and $k+1$ to $k$.

Our contravariant functor maps these order-preserving surjections to functions on sets. The resulting functions are called degeneracy maps: each $\sigma_{n, k}$ mapped to the corresponding $s_{n, k}$. As with face maps, we usually omit the first index, as it’s either arbitrary or easily deducible from the context.

Two degeneracy maps. In the triangles, two of the sides are actually the same segment. The third side is a degenerate segment whose ends are the same point.

There is an obvious identity for the composition of degeneracy maps:

$s_i s_j = s_{j+1} s_i$

for $i \leq j$.
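As a small worked check of my own: take a triangle with vertices $(v_0, v_1, v_2)$ and let $i = 0$, $j = 1$. On the left, $s_1$ duplicates $v_1$, giving $(v_0, v_1, v_1, v_2)$, and $s_0$ then duplicates the vertex at position $0$, giving $(v_0, v_0, v_1, v_1, v_2)$. On the right, $s_0$ duplicates $v_0$ first, giving $(v_0, v_0, v_1, v_2)$, and $s_{j+1} = s_2$ duplicates the vertex at position $2$–which is $v_1$–producing the same $(v_0, v_0, v_1, v_1, v_2)$.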

The interesting identities relate degeneracy maps to face maps. For instance, when $i = j$ or $i = j + 1$, we have:

$d_i s_j = id$

(that’s the identity morphism). Geometrically speaking, imagine creating a degenerate triangle from a segment, for instance by using $s_0$. The first side of this triangle, which is obtained by applying $d_0$, is the original segment. The second side, obtained by $d_1$, is the same segment again.

The third side is degenerate: it can be obtained by applying $s_0$ to the vertex obtained by $d_1$.

In general, for $i > j + 1$:

$d_i s_j = s_j d_{i-1}$

Similarly:

$d_i s_j = s_{j-1} d_i$

for $i < j$.

All the face- and degeneracy-map identities are relevant because, given a family of sets and functions that satisfy them, we can reproduce the simplicial set (contravariant functor from $\Delta$ to Set) that generates them. This shows the equivalence of the geometric picture that deals with triangles, segments, faces, etc., with the combinatorial picture that deals with rearrangements of ordered sequences of numbers.

## Monoidal structure

A triangle can be constructed by adjoining a point to a segment. Add one more point and you get a tetrahedron. This process of adding points can be extended to adding together arbitrary simplexes. Indeed, there is a binary operator in $\Delta$ that combines two ordered sequences by stacking one after another.

This operation can be lifted to morphisms, making it a bifunctor. It is associative, so one might ask the question whether it can be used as a tensor product to make $\Delta$ a monoidal category. The only thing missing is the unit object.

The lowest dimensional simplex in $\Delta$ is $[0]$, which represents a point, so it cannot be a unit with respect to our tensor product. Instead we are obliged to add a new object, called $[-1]$, which is represented by the empty set. (Incidentally, this is the object that may serve as “the face” of a point.)

With the new object $[-1]$, we get the category $\Delta_a$, which is called the augmented simplicial category. Since the unit and associativity laws are satisfied “on the nose” (as opposed to “up to isomorphism”), $\Delta_a$ is a strict monoidal category.

Note: Some authors prefer to name the objects of $\Delta_a$ starting from zero, rather than minus one. They rename $[-1]$ to $\mathbf{0}$, $[0]$ to $\mathbf{1}$, etc. This convention makes even more sense if you consider that $\mathbf{0}$ is the initial object and $\mathbf{1}$ the terminal object in $\Delta_a$.

Monoidal categories are a fertile breeding ground for monoids. Indeed, the object $[0]$ in $\Delta_a$ is a monoid. It is equipped with two morphisms that act like unit and multiplication. It has an incoming morphism from the monoidal unit $[-1]$–the morphism that’s the precursor of the face map that assigns the empty set to every point. This morphism can be used as the unit $\eta$ of our monoid. It also has an incoming morphism from $[1]$ (which happens to be the tensorial square of $[0]$). It’s the precursor of the degeneracy map that creates a segment from a single point. This morphism is the multiplication $\mu$ of our monoid. Unit and associativity laws follow from the standard identities between morphisms in $\Delta_a$.

It turns out that this monoid $([0], \eta, \mu)$ in $\Delta_a$ is the mother of all monoids in strict monoidal categories. It can be shown that, for any monoid $m$ in any strict monoidal category $C$, there is a unique strict monoidal functor $F$ from $\Delta_a$ to $C$ that maps the monoid $[0]$ to the monoid $m$. The category $\Delta_a$ has exactly the right structure, and nothing more, to serve as the pattern for any monoid we can come up with in a (strict) monoidal category. In particular, since a monad is just a monoid in the (strictly monoidal) category of endofunctors, the augmented simplicial category is behind every monad as well.

## One more thing

Incidentally, since $\Delta_a$ is a monoidal category, (contravariant) functors from it to Set are automatically equipped with monoidal structure via Day convolution. The result of Day convolution is a join of simplicial sets. It’s a generalized cone: two simplicial sets together with all possible connections between them. In particular, if one of the sets is just a single point, the result of the join is an actual cone (or a pyramid).

## Different shapes

If we are willing to let go of geometric interpretations, we can replace the target category of sets with an arbitrary category. Instead of having a set of simplexes, we’ll end up with an object of simplexes. Simplicial sets become simplicial objects.

Alternatively, we can generalize the source category. As I mentioned before, simplexes are a good choice of primitives because of their geometrical properties–they don’t warp. But if we don’t care about embedding these simplexes in $\mathbb{R}^n$, we can replace them with cubes of varying dimensions (a one dimensional cube is a segment, a two dimensional cube is a square, and so on). Functors from the category of n-cubes to Set are called cubical sets. An even further generalization replaces simplexes with shapeless globes producing globular sets.

All these generalizations become important tools in studying higher category theory. In an n-category, we naturally encounter various shapes, as reflected in the naming convention: objects are called 0-cells; morphisms, 1-cells; morphisms between morphisms, 2-cells, and so on. These “cells” are often visualized as n-dimensional shapes. If a 1-cell is an arrow, a 2-cell is a (directed) surface spanning two arrows; a 3-cell, a volume between two surfaces; etc. In this way, the shapeless hom-set that connects two objects in a regular category turns into a topologically rich blob in an n-category.

This is even more pronounced in infinity groupoids, which were popularized by homotopy type theory, where we have an infinite tower of bidirectional n-morphisms. The presence or absence of higher order morphisms between any two morphisms can be visualized as the existence of holes that prevent the morphing of one cell into another. This kind of morphing can be described by homotopies which, in turn, can be described using simplicial, cubical, globular, or even more exotic sets.

## Conclusion

I realize that this post might seem a little rambling. I have two excuses: One is that, when I started looking at simplexes, I had no idea where I would end up. One thing led to another and I was totally fascinated by the journey. The other is the realization how everything is related to everything else in mathematics. You start with simple triangles, you compose and decompose them, you see some structure emerging. Suddenly, the same compositional structure pops up in totally unrelated areas. You see it in algebraic topology, in a monoid in a monoidal category, or in a generalization of a hom-set in an n-category. Why is it so? It seems like there aren’t that many ways of composing things together, and we are forced to keep reusing them over and over again. We can glue them, nail them, or solder them. The way simplicial category is put together provides a template for one of the universal patterns of composition.

## Bibliography

1. John Baez, A Quick Tour of Basic Concepts in Simplicial Homotopy Theory
2. Greg Friedman, An elementary illustrated introduction to simplicial sets.
3. N J Wildberger, Algebraic Topology. An excellent series of videos.

## Acknowledgments

I’m grateful to Edward Kmett and Derek Elkins for reviewing the draft and for providing helpful suggestions.