This is part 27 of Categories for Programmers. Previously: Ends and Coends. See the Table of Contents.

So far we’ve been mostly working with a single category or a pair of categories. In some cases that was a little too constraining. For instance, when defining a limit in a category C, we introduced an index category I as the template for the pattern that would form the basis for our cones. It would have made sense to introduce another category, a trivial one, to serve as a template for the apex of the cone. Instead we used the constant functor Δc from I to C.

It’s time to fix this awkwardness. Let’s define a limit using three categories. Let’s start with the functor D from the index category I to C. This is the functor that selects the base of the cone — the diagram functor.

The new addition is the category 1 that contains a single object (and a single identity morphism). There is only one possible functor K from I to this category. It maps all objects to the only object in 1, and all morphisms to the identity morphism. Any functor F from 1 to C picks a potential apex for our cone.

A cone is a natural transformation ε from F ∘ K to D. Notice that F ∘ K does exactly the same thing as our original Δc. The following diagram shows this transformation.

We can now define a universal property that picks the “best” such functor F. This F will map 1 to the object that is the limit of D in C, and the natural transformation ε from F ∘ K to D will provide the corresponding projections. This universal functor is called the right Kan extension of D along K and is denoted by RanKD.

Let’s formulate the universal property. Suppose we have another cone — that is another functor F' together with a natural transformation ε' from F' ∘ K to D.

If the Kan extension F = RanKD exists, there must be a unique natural transformation σ from F' to it, such that ε' factorizes through ε, that is:

ε' = ε . (σ ∘ K)

Here, σ ∘ K is the horizontal composition of two natural transformations (one of them being the identity natural transformation on K). This transformation is then vertically composed with ε.

In components, when acting on an object i in I, we get:

ε'i = εi ∘ σ K i

In our case, σ has only one component corresponding to the single object of 1. So, indeed, this is the unique morphism from the apex of the cone defined by F' to the apex of the universal cone defined by RanKD. The commuting conditions are exactly the ones required by the definition of a limit.

But, importantly, we are free to replace the trivial category 1 with an arbitrary category A, and the definition of the right Kan extension remains valid.

Right Kan Extension

The right Kan extension of the functor D::I->C along the functor K::I->A is a functor F::A->C (denoted RanKD) together with a natural transformation

ε :: F ∘ K -> D

such that for any other functor F'::A->C and a natural transformation

ε' :: F' ∘ K -> D

there is a unique natural transformation

σ :: F' -> F

that factorizes ε':

ε' = ε . (σ ∘ K)

This is quite a mouthful, but it can be visualized in this nice diagram:

An interesting way of looking at this is to notice that, in a sense, the Kan extension acts like the inverse of “functor multiplication.” Some authors go so far as to use the notation D/K for RanKD. Indeed, in this notation, the definition of ε, which is also called the counit of the right Kan extension, looks like simple cancellation:

ε :: D/K ∘ K -> D

There is another interpretation of Kan extensions. Consider that the functor K embeds the category I inside A. In the simplest case I could just be a subcategory of A. We have a functor D that maps I to C. Can we extend D to a functor F that is defined on the whole of A? Ideally, such an extension would make the composition F ∘ K be isomorphic to D. In other words, F would be extending the domain of D to A. But a full-blown isomorphism is usually too much to ask, and we can do with just half of it, namely a one-way natural transformation ε from F ∘ K to D. (The left Kan extension picks the other direction.)


Of course, the embedding picture breaks down when the functor K is not injective on objects or not faithful on hom-sets, as in the example of the limit. In that case, the Kan extension tries its best to extrapolate the lost information.

Kan Extension as Adjunction

Now suppose that the right Kan extension exists for any D (and a fixed K). In that case RanK - (with the dash replacing D) is a functor from the functor category [I, C] to the functor category [A, C]. It turns out that this functor is the right adjoint to the precomposition functor -∘K. The latter maps functors in [A, C] to functors in [I, C]. The adjunction is:

[I, C](F' ∘ K, D) ≅ [A, C](F', RanKD)

It is just a restatement of the fact that to every natural transformation we called ε' corresponds a unique natural transformation we called σ.

Furthermore, if we choose the category I to be the same as C, we can substitute the identity functor IC for D. We get the following identity:

[C, C](F' ∘ K, IC) ≅ [A, C](F', RanKIC)

We can now choose F' to be the same as RanKIC. In that case the right hand side contains the identity natural transformation and, corresponding to it, the left hand side gives us the following natural transformation:

ε :: RanKIC ∘ K -> IC

This looks very much like the counit of an adjunction:

RanKIC ⊣ K

Indeed, the right Kan extension of the identity functor along a functor K can be used to calculate the left adjoint of K. For that, one more condition is necessary: the right Kan extension must be preserved by the functor K. The preservation of the extension means that, if we calculate the Kan extension of the functor composed with K, we should get the same result as composing the original Kan extension with K. In our case, this condition simplifies to:

K ∘ RanKIC ≅ RanKK

Notice that, using the division-by-K notation, the adjunction can be written as:

I/K ⊣ K

which confirms our intuition that an adjunction describes some kind of an inverse. The preservation condition becomes:

K ∘ I/K ≅ K/K

The right Kan extension of a functor along itself, K/K, is called a codensity monad.
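In Haskell, the codensity monad has a well-known encoding; it is essentially the Codensity type from the kan-extensions package. Here is a minimal sketch, with RankNTypes enabled, anticipating the Haskell encoding of Ran given later in this post:

newtype Codensity k a = Codensity
  { runCodensity :: forall i. (a -> k i) -> k i }

-- the monad structure, written as stand-alone functions
unitC :: a -> Codensity k a
unitC a = Codensity (\f -> f a)

bindC :: Codensity k a -> (a -> Codensity k b) -> Codensity k b
bindC (Codensity m) g = Codensity (\f -> m (\a -> runCodensity (g a) f))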

The adjunction formula is an important result because, as we’ll see soon, we can calculate Kan extensions using ends (coends), thus giving us practical means of finding right (and left) adjoints.

Left Kan Extension

There is a dual construction that gives us the left Kan extension. To build some intuition, we can start with the definition of a colimit and restructure it to use the singleton category 1. We build a cocone by using the functor D::I->C to form its base, and the functor F::1->C to select its apex.

The sides of the cocone, the injections, are components of a natural transformation η from D to F ∘ K.

The colimit is the universal cocone. So for any other functor F' and a natural transformation

η' :: D -> F'∘ K

there is a unique natural transformation σ from F to F'

such that:

η' = (σ ∘ K) . η

This is illustrated in the following diagram:

Replacing the singleton category 1 with A, this definition naturally generalizes to the definition of the left Kan extension, denoted by LanKD.

The natural transformation:

η :: D -> LanKD ∘ K

is called the unit of the left Kan extension.

As before, we can recast the one-to-one correspondence between natural transformations:

η' = (σ ∘ K) . η

in terms of the adjunction:

[A, C](LanKD, F') ≅ [I, C](D, F' ∘ K)

In other words, the left Kan extension is the left adjoint, and the right Kan extension is the right adjoint, of precomposition with K.

Just like the right Kan extension of the identity functor could be used to calculate the left adjoint of K, the left Kan extension of the identity functor turns out to be the right adjoint of K (with η being the unit of  the adjunction):

K ⊣ LanKIC

Combining the two results, we get:

RanKIC ⊣ K ⊣ LanKIC

Kan Extensions as Ends

The real power of Kan extensions comes from the fact that they can be calculated using ends (and coends). For simplicity, we’ll restrict our attention to the case where the target category C is Set, but the formulas can be extended to any category.

Let’s revisit the idea that a Kan extension can be used to extend the action of a functor outside of its original domain. Suppose that K embeds I inside A. Functor D maps I to Set. We could just say that for any object a in the image of K, that is a = K i, the extended functor maps a to D i. The problem is, what to do with those objects in A that are outside of the image of K? The idea is that every such object is potentially connected through lots of morphisms to every object in the image of K. A functor must preserve these morphisms. The totality of morphisms from an object a to the image of K is characterized by the hom-functor:

A(a, K -)


Notice that this hom-functor is a composition of two functors:

A(a, K -) = A(a, -) ∘ K

The right Kan extension is the right adjoint of functor composition:

[I, Set](F' ∘ K, D) ≅ [A, Set](F', RanKD)

Let’s see what happens when we replace F' with the hom functor:

[I, Set](A(a, -) ∘ K, D) ≅ [A, Set](A(a, -), RanKD)

and then inline the composition:

[I, Set](A(a, K -), D) ≅ [A, Set](A(a, -), RanKD)

The right hand side can be reduced using the Yoneda lemma:

[I, Set](A(a, K -), D) ≅ RanKD a

We can now rewrite the set of natural transformations as the end to get this very convenient formula for the right Kan extension:

RanKD a ≅ ∫i Set(A(a, K i), D i)

There is an analogous formula for the left Kan extension in terms of a coend:

LanKD a = ∫^i A(K i, a) × D i

To see that this is the case, we’ll show that this is indeed the left adjoint to functor composition:

[A, Set](LanKD, F') ≅ [I, Set](D, F'∘ K)

Let’s substitute our formula in the left hand side:

[A, Set](∫^i A(K i, -) × D i, F')

This is a set of natural transformations, so it can be rewritten as an end:

∫a Set(∫^i A(K i, a) × D i, F'a)

Using the continuity of the hom-functor, we can replace the coend with the end:

∫a ∫i Set(A(K i, a) × D i, F'a)

We can use the product-exponential adjunction:

∫a ∫i Set(A(K i, a), (F'a)^(D i))

The exponential is isomorphic to the corresponding hom-set:

∫a ∫i Set(A(K i, a), Set(D i, F'a))

There is a theorem called the Fubini theorem that allows us to swap the two ends:

∫i ∫a Set(A(K i, a), Set(D i, F'a))

The inner end represents the set of natural transformations between two functors, so we can use the Yoneda lemma:

∫i Set(D i, F'(K i))

This is indeed the set of natural transformations that forms the right hand side of the adjunction we set out to prove:

[I, Set](D, F'∘ K)

These kinds of calculations using ends, coends, and the Yoneda lemma are pretty typical for the “calculus” of ends.

Kan Extensions in Haskell

The end/coend formulas for Kan extensions can be easily translated to Haskell. Let’s start with the right extension:

RanKD a ≅ ∫i Set(A(a, K i), D i)

We replace the end with the universal quantifier, and hom-sets with function types:

newtype Ran k d a = Ran (forall i. (a -> k i) -> d i)

Looking at this definition, it’s clear that Ran must contain a value of type a to which the function can be applied, and a natural transformation between the two functors k and d. For instance, suppose that k is the tree functor, and d is the list functor, and you were given a Ran Tree [] String. If you pass it a function:

f :: String -> Tree Int

you’ll get back a list of Int, and so on. The right Kan extension will use your function to produce a tree and then repackage it into a list. For instance, you may pass it a parser that generates a parsing tree from a string, and you’ll get a list that corresponds to the depth-first traversal of this tree.
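Here is a minimal sketch of this scenario, with RankNTypes enabled (Tree, toList, and ranOfString are illustrative names, not from any library):

data Tree a = Leaf a | Node (Tree a) (Tree a)

-- depth-first traversal: a natural transformation from Tree to []
toList :: Tree a -> [a]
toList (Leaf a)   = [a]
toList (Node l r) = toList l ++ toList r

-- a value of Ran Tree [] String: it stores a String, and repackages
-- whatever tree your function produces into a list
ranOfString :: String -> Ran Tree [] String
ranOfString s = Ran (\f -> toList (f s))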

The right Kan extension can be used to calculate the left adjoint of a given functor by replacing the functor d with the identity functor. This leads to the left adjoint of a functor k being represented by the set of polymorphic functions of the type:

forall i. (a -> k i) -> i

Suppose that k is the forgetful functor from the category of monoids. The universal quantifier then goes over all monoids. Of course, in Haskell we cannot express monoidal laws, but the following is a decent approximation of the resulting free functor (the forgetful functor k is an identity on objects):

type Lst a = forall i. Monoid i => (a -> i) -> i

As expected, it generates free monoids, or Haskell lists:

toLst :: [a] -> Lst a
toLst as = \f -> foldMap f as
  
fromLst :: Lst a -> [a]
fromLst f = f (\a -> [a])
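A quick round trip, using the definitions above, shows the isomorphism at work:

roundTrip :: [Int]
roundTrip = fromLst (toLst [1, 2, 3]) -- [1,2,3]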

The left Kan extension is a coend:

LanKD a = ∫^i A(K i, a) × D i

so it translates to an existential quantifier. Symbolically:

Lan k d a = exists i. (k i -> a, d i)

This can be encoded in Haskell using GADTs, or using a universally quantified data constructor:

data Lan k d a = forall i. Lan (k i -> a) (d i)

The interpretation of this data structure is that it contains a function that takes a container of some unspecified is and produces an a. It also has a container of those is. Since you have no idea what the is are, the only thing you can do with this data structure is to retrieve the container of is, repack it into the container defined by the functor k using a natural transformation, and call the function to obtain the a. For instance, if d is a tree, and k is a list, you can serialize the tree, call the function with the resulting list, and obtain an a.
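Here is a minimal sketch of that last scenario, reusing the Tree and toList defined earlier (illustrative names, not from any library):

-- serialize the tree, then call the stored function on the resulting list
consumeLan :: Lan [] Tree a -> a
consumeLan (Lan f t) = f (toList t)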

The left Kan extension can be used to calculate the right adjoint of a functor. We know that the right adjoint of the product functor is the exponential, so let’s try to implement it using the Kan extension:

type Exp a b = Lan ((,) a) I b
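Here, I is the identity functor; if it is not already in scope, a minimal definition will do:

newtype I a = I a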

This is indeed isomorphic to the function type, as witnessed by the following pair of functions:

toExp :: (a -> b) -> Exp a b
toExp f = Lan (f . fst) (I ())

fromExp :: Exp a b -> (a -> b)
fromExp (Lan f (I x)) = \a -> f (a, x)

Notice that, as described earlier in the general case, we performed the following steps: (1) retrieved the container of x (here, it’s just a trivial identity container), and the function f, (2) repackaged the container using the natural transformation between the identity functor and the pair functor, and (3) called the function f.

Next: Enriched Categories.


The Free Theorem for Ends

In Haskell, the end of a profunctor p is defined as a product of all diagonal elements:

forall c. p c c

together with a family of projections:

pi :: Profunctor p => forall c. (forall a. p a a) -> p c c
pi e = e

In category theory, the end must also satisfy the wedge condition which, in (type-annotated) Haskell, could be written as:

dimap f id_b . pi_b = dimap id_a f . pi_a

for any f :: a -> b.
Using a suitable formulation of parametricity, this equation can be shown to be a free theorem. Let’s first review the free theorem for functors before generalizing it to profunctors.

Functor Characterization

You may think of a functor as a container that has a shape and contents. You can manipulate the contents without changing the shape using fmap. In general, when applying fmap, you not only change the values stored in the container, you change their type as well. To really capture the shape of the container, you have to consider not only all possible mappings, but also more general relations between different contents.

A function is directional, and so is fmap, but relations don’t favor either side. They can map multiple values to the same value, and they can map one value to multiple values. Any relation on values induces a relation on containers. For a given functor F, if there is a relation a between type A and type A':

A <=a=> A'

then there is a relation between type F A and F A':

F A <=(F a)=> F A'

We call this induced relation F a.

For instance, consider the relation between students and their grades. Each student may have multiple grades (if they take multiple courses) so this relation is not a function. Given a list of students and a list of grades, we would say that the lists are related if and only if they match at each position. It means that they have to be of equal length, and the first grade on the list of grades must belong to the first student on the list of students, and so on. Of course, a list is a very simple container, but this property can be generalized to any functor we can define in Haskell using algebraic data types.

The fact that fmap doesn’t change the shape of the container can be expressed as a “theorem for free” using relations. We start with two related containers:

xs :: F A
xs':: F A'

where A and A' are related through some relation a. We want related containers to be fmapped to related containers. But we can’t use the same function to map both containers, because they contain different types. So we have to use two related functions instead. Related functions map related types to related types so, if we have:

f :: A -> B
f':: A'-> B'

and A is related to A' through a, we want B to be related to B' through some relation b. Also, we want the two functions to map related elements to related elements. So if x is related to x' through a, we want f x to be related to f' x' through b. In that case, we’ll say that f and f' are related through the relation that we call a->b:

f <=(a->b)=> f'

For instance, if f is mapping students’ SSNs to last names, and f' is mapping letter grades to numerical grades, the results will be related through the relation between students’ last names and their numerical grades.

To summarize, we require that for any two relations:

A <=a=> A'
B <=b=> B'

and any two functions:

f :: A -> B
f':: A'-> B'

such that:

f <=(a->b)=> f'

and any two containers:

xs :: F A
xs':: F A'

we have:

if       xs <=(F a)=> xs'
then (F f) xs <=(F b)=> (F f') xs'

This characterization can be extended, with suitable changes, to contravariant functors.

Profunctor Characterization

A profunctor is a functor of two variables. It is contravariant in the first variable and covariant in the second. A profunctor can lift two functions simultaneously using dimap:

class Profunctor p where
    dimap :: (a -> b) -> (c -> d) -> p b c -> p a d

We want dimap to preserve relations between profunctor values. We start by picking any relations a, b, c, and d between types:

A <=a=> A'
B <=b=> B'
C <=c=> C'
D <=d=> D'

For any functions:

f  :: A -> B
f' :: A'-> B'
g  :: C -> D
g' :: C'-> D'

that are related through the following relations induced by function types:

f <=(a->b)=> f'
g <=(c->d)=> g'

we define:

xs :: p B C
xs':: p B' C'

The following condition must be satisfied:

if             xs <=(p b c)=> xs'
then   (p f g) xs <=(p a d)=> (p f' g') xs'

where p f g stands for the lifting of the two functions by the profunctor p.

Here’s a quick sanity check. If b and c are functions:

b :: B'-> B
c :: C -> C'

then the relation:

xs <=(p b c)=> xs'

becomes:

xs' = dimap b c xs

If a and d are functions:

a :: A'-> A
d :: D -> D'

then these relations:

f <=(a->b)=> f'
g <=(c->d)=> g'

become:

f . a = b . f'
d . g = g'. c

and this relation:

(p f g) xs <=(p a d)=> (p f' g') xs'

becomes:

(p f' g') xs' = dimap a d ((p f g) xs)

Substituting xs', we get:

dimap f' g' (dimap b c xs) = dimap a d (dimap f g xs)

and using functoriality:

dimap (b . f') (g'. c) = dimap (f . a) (d . g)

which is identically true.

Special Case of Profunctor Characterization

We are interested in the diagonal elements of a profunctor. Let’s first specialize the general case to:

C = B
C'= B'
c = b

to get:

xs :: p B B
xs':: p B' B'

and

if             xs <=(p b b)=> xs'
then   (p f g) xs <=(p a d)=> (p f' g') xs'

Choosing the following substitutions:

A = A'= B
D = D'= B'
a = id
d = id
f = id
g'= id
f'= g

we get:

if              xs <=(p b b)=> xs'
then   (p id g) xs <=(p id id)=> (p g id) xs'

Since p id id is the identity relation, we get:

(p id g) xs = (p g id) xs'

or

dimap id g xs = dimap g id xs'

Free Theorem

We apply the free theorem to the term xs:

xs :: forall c. p c c

It must be related to itself through the relation that is induced by its type:

xs <=(forall b. p b b)=> xs

for any relation b:

B <=b=> B'

Universal quantification translates to a relation between different instantiations of the polymorphic value:

xs_B <=(p b b)=> xs_B'

Notice that we can write:

xs_B = pi_B xs
xs_B' = pi_B' xs

using the projections we defined earlier.

We have just shown that this equation leads to:

dimap id g xs = dimap g id xs'

which shows that the wedge condition is indeed a free theorem.

Natural Transformations

Here’s another quick application of the free theorem. The set of natural transformations may be represented as an end of the following profunctor:

type NatP a b = F a -> G b
instance Profunctor NatP where
    dimap f g alpha = fmap g . alpha . fmap f
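Strictly speaking, this is schematic Haskell: F and G stand for arbitrary functors, and a type synonym can't be given a class instance. A compilable variant (NatP' is an illustrative name) wraps the function in a newtype:

newtype NatP' f g a b = NatP' (f a -> g b)

instance (Functor f, Functor g) => Profunctor (NatP' f g) where
    dimap f g (NatP' alpha) = NatP' (fmap g . alpha . fmap f)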

The free theorem tells us that for any mu :: NatP c c:

(dimap id g) mu = (dimap g id) mu

which is the naturality condition:

mu . fmap g = fmap g . mu

It’s been known for some time that, in Haskell, naturality follows from parametricity, so this is not surprising.

Acknowledgment

I’d like to thank Edward Kmett for reviewing the draft of this post.

Bibliography

  1. Bartosz Milewski, Ends and Coends
  2. Edsko de Vries, Parametricity Tutorial, Part 1, Part 2, Contravariant Functions.
  3. Bartosz Milewski, Parametricity: Money for Nothing and Theorems for Free

This is part 26 of Categories for Programmers. Previously: Algebras for Monads. See the Table of Contents.

There are many intuitions that we may attach to morphisms in a category, but we can all agree that if there is a morphism from the object a to the object b then the two objects are in some way “related.” A morphism is, in a sense, the proof of this relation. This is clearly visible in any poset category, where a morphism is a relation. In general, there may be many “proofs” of the same relation between two objects. These proofs form a set that we call the hom-set. When we vary the objects, we get a mapping from pairs of objects to sets of “proofs.” This mapping is functorial — contravariant in the first argument and covariant in the second. We can look at it as establishing a global relationship between objects in the category. This relationship is described by the hom-functor:

C(-, =) :: Cop × C -> Set

In general, any functor like this may be interpreted as establishing a relation between objects in a category. A relation may also involve two different categories C and D. A functor, which describes such a relation, has the following signature and is called a profunctor:

p :: Dop × C -> Set

Mathematicians say that it’s a profunctor from C to D (notice the inversion), and use a slashed arrow as a symbol for it:

C ↛ D

You may think of a profunctor as a proof-relevant relation between objects of C and objects of D, where the elements of the set symbolize proofs of the relation. Whenever p a b is empty, there is no relation between a and b. Keep in mind that relations don’t have to be symmetric.

Another useful intuition is the generalization of the idea that an endofunctor is a container. A profunctor value of the type p a b could then be considered a container of bs that are keyed by elements of type a. In particular, an element of the hom-profunctor is a function from a to b.

In Haskell, a profunctor is defined as a two-argument type constructor p equipped with the method called dimap, which lifts a pair of functions, the first going in the “wrong” direction:

class Profunctor p where
    dimap :: (c -> a) -> (b -> d) -> p a b -> p c d
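The simplest example is the function type itself, which is the hom-profunctor of Haskell types. Its standard instance (as found in Data.Profunctor) is shown here for concreteness:

instance Profunctor (->) where
    dimap f g h = g . h . f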

The functoriality of the profunctor tells us that if we have a proof that a is related to b, then we get the proof that c is related to d, as long as there is a morphism from c to a and another from b to d. Or, we can think of the first function as translating new keys to the old keys, and the second function as modifying the contents of the container.

For profunctors acting within one category, we can extract quite a lot of information from diagonal elements of the type p a a. We can prove that b is related to c as long as we have a pair of morphisms b->a and a->c. Even better, we can use a single morphism to reach off-diagonal values. For instance, if we have a morphism f::a->b, we can lift the pair <f, idb> to go from p b b to p a b:

dimap f id pbb :: p a b

Or we can lift the pair <ida, f> to go from p a a to p a b:

dimap id f paa :: p a b

Dinatural Transformations

Since profunctors are functors, we can define natural transformations between them in the standard way. In many cases, though, it’s enough to define the mapping between diagonal elements of two profunctors. Such a transformation is called a dinatural transformation, provided it satisfies the commuting conditions that reflect the two ways we can connect diagonal elements to non-diagonal ones. A dinatural transformation between two profunctors p and q, which are members of the functor category [Cop × C, Set], is a family of morphisms:

αa :: p a a -> q a a

for which the following diagram commutes, for any f::a->b:

Notice that this is strictly weaker than the naturality condition. If α were a natural transformation in [Cop × C, Set], the above diagram could be constructed from two naturality squares and one functoriality condition (profunctor q preserving composition):

Notice that a component of a natural transformation α in [Cop × C, Set] is indexed by a pair of objects α a b. A dinatural transformation, on the other hand, is indexed by one object, since it only maps diagonal elements of the respective profunctors.

Ends

We are now ready to advance from “algebra” to what could be considered the “calculus” of category theory. The calculus of ends (and coends) borrows ideas and even some notation from traditional calculus. In particular, the coend may be understood as an infinite sum or an integral, whereas the end is similar to an infinite product. There is even something that resembles the Dirac delta function.

An end is a generalization of a limit, with the functor replaced by a profunctor. Instead of a cone, we have a wedge. The base of a wedge is formed by diagonal elements of a profunctor p. The apex of the wedge is an object (here, a set, since we are considering Set-valued profunctors), and the sides are a family of functions mapping the apex to the sets in the base. You may think of this family as one polymorphic function — a function that’s polymorphic in its return type:

α :: forall a . apex -> p a a

Within a wedge we don’t consider any functions that would connect vertices of the base. However, as we’ve seen earlier, given any morphism f::a->b in C, we can connect both p a a and p b b to the common set p a b. We therefore insist that the following diagram commute:

This is called the wedge condition. It can be written as:

p ida f ∘ αa = p f idb ∘ αb

Or, using Haskell notation:

dimap id f . alpha = dimap f id . alpha

We can now proceed with the universal construction and define the end of p as the universal wedge — a set e together with a family of functions π such that for any other wedge with the apex a and a family α there is a unique function h::a->e that makes all triangles commute:

πa ∘ h = αa

The symbol for the end is the integral sign, with the “integration variable” in the subscript position:

∫c p c c

Components of π are called projection maps for the end:

πa :: ∫c p c c -> p a a

Note that if C is a discrete category (no morphisms other than the identities) the end is just a global product of all diagonal entries of p across the whole category C. Later I’ll show you that, in the more general case, there is a relationship between the end and this product through an equalizer.

In Haskell, the end formula translates directly to the universal quantifier:

forall a. p a a

Strictly speaking, this is just a product of all diagonal elements of p, but the wedge condition is satisfied automatically due to parametricity (I’ll explain it in a separate blog post). For any function f :: a -> b, the wedge condition reads:

dimap f id . pi = dimap id f . pi

or, with type annotations:

dimap f id_b . pi_b = dimap id_a f . pi_a

where both sides of the equation have the type:

Profunctor p => (forall c. p c c) -> p a b

and pi is the polymorphic projection:

pi :: Profunctor p => forall c. (forall a. p a a) -> p c c
pi e = e

Here, type inference automatically picks the right component of e.

Just as we were able to express the whole set of commutation conditions for a cone as one natural transformation, likewise we can group all the wedge conditions into one dinatural transformation. For that we need the generalization of the constant functor Δc to a constant profunctor that maps all pairs of objects to a single object c, and all pairs of morphisms to the identity morphism for this object. A wedge is a dinatural transformation from that functor to the profunctor p. Indeed, the dinaturality hexagon shrinks down to the wedge diamond when we realize that Δc lifts all morphisms to one identity function.

Ends can also be defined for target categories other than Set, but here we’ll only consider Set-valued profunctors and their ends.

Ends as Equalizers

The commutation condition in the definition of the end can be written using an equalizer. First, let’s define two functions (I’m using Haskell notation, because mathematical notation seems to be less user-friendly in this case). These functions correspond to the two converging branches of the wedge condition:

lambda :: Profunctor p => p a a -> (a -> b) -> p a b
lambda paa f = dimap id f paa

rho :: Profunctor p => p b b -> (a -> b) -> p a b
rho pbb f = dimap f id pbb

Both functions map diagonal elements of the profunctor p to polymorphic functions of the type:

type ProdP p = forall a b. (a -> b) -> p a b

These functions have different types. However, we can unify their types, if we form one big product type, gathering together all diagonal elements of p:

newtype DiaProd p = DiaProd (forall a. p a a)

The functions lambda and rho induce two mappings from this product type:

lambdaP :: Profunctor p => DiaProd p -> ProdP p
lambdaP (DiaProd paa) = lambda paa

rhoP :: Profunctor p => DiaProd p -> ProdP p
rhoP (DiaProd paa) = rho paa

The end of p is the equalizer of these two functions. Remember that the equalizer picks the largest subset on which two functions are equal. In this case it picks the subset of the product of all diagonal elements for which the wedge diagrams commute.

Natural Transformations as Ends

The most important example of an end is the set of natural transformations. A natural transformation between two functors F and G is a family of morphisms picked from hom-sets of the form C(F a, G a). If it weren’t for the naturality condition, the set of natural transformations would be just the product of all these hom-sets. In fact, in Haskell, it is:

forall a. f a -> g a
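For instance, here is one such polymorphic function for f = [] and g = Maybe:

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x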

The reason it works in Haskell is because naturality follows from parametricity. Outside of Haskell, though, not all diagonal sections across such hom-sets will yield natural transformations. But notice that the mapping:

<a, b> -> C(F a, G b)

is a profunctor, so it makes sense to study its end. This is the wedge condition:

Let’s just pick one element from the set ∫c C(F c, G c). The two projections will map this element to two components of a particular transformation, let’s call them:

τa :: F a -> G a
τb :: F b -> G b

In the left branch, we lift a pair of morphisms <ida, G f> using the hom-functor. You may recall that such lifting is implemented as simultaneous pre- and post-composition. When acting on τa the lifted pair gives us:

G f ∘ τa ∘ ida

The other branch of the diagram gives us:

idb ∘ τb ∘ F f

Their equality, demanded by the wedge condition, is nothing but the naturality condition for τ.

Coends

As expected, the dual to an end is called a coend. It is constructed from a dual to a wedge called a cowedge (pronounced co-wedge, not cow-edge).

An edgy cow?

The symbol for a coend is the integral sign with the “integration variable” in the superscript position:

∫^c p c c

Just like the end is related to a product, the coend is related to a coproduct, or a sum (in this respect, it resembles an integral, which is a limit of a sum). Rather than having projections, we have injections going from the diagonal elements of the profunctor down to the coend. If it weren’t for the cowedge conditions, we could say that the coend of the profunctor p is either p a a, or p b b, or p c c, and so on. Or we could say that there exists such an a for which the coend is just the set p a a. The universal quantifier that we used in the definition of the end turns into an existential quantifier for the coend.

This is why, in pseudo-Haskell, we would define the coend as:

exists a. p a a

The standard way of encoding existential quantifiers in Haskell is to use universally quantified data constructors. We can thus define:

data Coend p = forall a. Coend (p a a)

The logic behind this is that it should be possible to construct a coend using a value of any of the family of types p a a, no matter what a we choose.

You might recall from our earlier discussion of limits and colimits that the hom-functor is continuous, that is, it preserves limits. Dually, the contravariant hom-functor turns colimits into limits. These properties can be generalized to ends and coends, which are a generalization of limits and colimits, respectively. In particular, we get a very useful identity for converting coends to ends:

Set(∫^x p x x, c) ≅ ∫x Set(p x x, c)

Let’s have a look at it in pseudo-Haskell:

(exists x. p x x) -> c ≅ forall x. p x x -> c

It tells us that a function that takes an existential type is equivalent to a polymorphic function. This makes perfect sense, because such a function must be prepared to handle any one of the types that may be encoded in the existential type. It’s the same principle that tells us that a function that accepts a sum type must be implemented as a case statement, with a tuple of handlers, one for every type present in the sum. Here, the sum type is replaced by a coend, and a family of handlers becomes an end, or a polymorphic function.
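Using the Coend type defined above, and with RankNTypes enabled, both directions of this isomorphism can be written down directly. This is a sketch; toEnd and fromEnd are illustrative names:

toEnd :: (Coend p -> c) -> (forall x. p x x -> c)
toEnd f pxx = f (Coend pxx)

fromEnd :: (forall x. p x x -> c) -> (Coend p -> c)
fromEnd g (Coend pxx) = g pxx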

Ninja Yoneda Lemma

The set of natural transformations that appears in the Yoneda lemma may be encoded using an end, resulting in the following formulation:

∫z Set(C(a, z), F z) ≅ F a

There is also a dual formula:

∫^z C(a, z) × F z ≅ F a

This identity is strongly reminiscent of the formula for the Dirac delta function (a function δ(a - z), or rather a distribution, that has an infinite peak at a = z). Here, the hom-functor plays the role of the delta function.

Together these two identities are sometimes called the Ninja Yoneda lemma.

To prove the second formula, we will use the consequence of the Yoneda embedding, which states that two objects are isomorphic if and only if their hom-functors are isomorphic. In other words, a ≅ b if and only if there is a natural transformation of the type:

[C, Set](C(a, -), C(b, =))

that is an isomorphism.

We start by inserting the left-hand side of the identity we want to prove inside a hom-functor that’s going to some arbitrary object c:

Set(∫^z C(a, z) × F z, c)

Using the continuity argument, we can replace the coend with the end:

∫z Set(C(a, z) × F z, c)

We can now take advantage of the adjunction between the product and the exponential:

∫z Set(C(a, z), c^(F z))

We can “perform the integration” by using the Yoneda lemma to get:

c^(F a)

This exponential object is isomorphic to the hom-set:

Set(F a, c)

Finally, we take advantage of the Yoneda embedding to arrive at the isomorphism:

∫^z C(a, z) × F z ≅ F a

Profunctor Composition

Let’s explore further the idea that a profunctor describes a relation — more precisely, a proof-relevant relation, meaning that the set p a b represents the set of proofs that a is related to b. If we have two relations p and q we can try to compose them. We’ll say that a is related to b through the composition of q after p if there exists an intermediary object c such that both q b c and p c a are non-empty. The proofs of this new relation are all pairs of proofs of individual relations. Therefore, with the understanding that the existential quantifier corresponds to a coend, and the cartesian product of two sets corresponds to “pairs of proofs,” we can define composition of profunctors using the following formula:

(q ∘ p) a b = ∫^c p c a × q b c

Here’s the equivalent Haskell definition from Data.Profunctor.Composition, after some renaming:

data Procompose q p a b where
  Procompose :: q a c -> p c b -> Procompose q p a b

This is using the generalized algebraic data type, or GADT, syntax, in which a free type variable (here c) is automatically existentially quantified. The (uncurried) data constructor Procompose is thus equivalent to:

exists c. (q a c, p c b)

The unit of the composition defined this way is the hom-functor — this immediately follows from the Ninja Yoneda lemma. It makes sense, therefore, to ask whether there is a category in which profunctors serve as morphisms. The answer is positive, with the caveat that both associativity and identity laws for profunctor composition hold only up to natural isomorphism. Such a category, where laws are valid up to isomorphism, is called a bicategory (which is more general than a 2-category). So we have a bicategory Prof, in which objects are categories, morphisms are profunctors, and morphisms between morphisms (a.k.a., two-cells) are natural transformations. In fact, one can go even further, because besides profunctors, we also have regular functors as morphisms between categories. A category which has two types of morphisms is called a double category.

Profunctors play an important role in the Haskell lens library and in the arrow library.

Next: Kan extensions.


This is part 25 of Categories for Programmers. Previously: F-Algebras. See the Table of Contents.

If we interpret endofunctors as ways of defining expressions, algebras let us evaluate them and monads let us form and manipulate them. By combining algebras with monads we not only gain a lot of functionality but we can also answer a few interesting questions. One such question concerns the relation between monads and adjunctions. As we’ve seen, every adjunction defines a monad (and a comonad). The question is: Can every monad (comonad) be derived from an adjunction? The answer is positive. There is a whole family of adjunctions that generate a given monad. I’ll show you two such adjunctions.

Let’s review the definitions. A monad is an endofunctor m equipped with two natural transformations that satisfy some coherence conditions. The components of these transformations at a are:

ηa :: a -> m a
μa :: m (m a) -> m a

An algebra for the same endofunctor is a selection of a particular object — the carrier a — together with the morphism:

alg :: m a -> a

The first thing to notice is that the algebra goes in the opposite direction to ηa. The intuition is that ηa creates a trivial expression from a value of type a. The first coherence condition that makes the algebra compatible with the monad ensures that evaluating this expression using the algebra whose carrier is a gives us back the original value:

alg ∘ ηa = ida

The second condition arises from the fact that there are two ways of evaluating the doubly nested expression m (m a). We can first apply μa to flatten the expression, and then use the evaluator of the algebra; or we can apply the lifted evaluator to evaluate the inner expressions, and then apply the evaluator to the result. We’d like the two strategies to be equivalent:

alg ∘ μa = alg ∘ m alg

Here, m alg is the morphism resulting from lifting alg using the functor m. The following commuting diagrams describe the two conditions (I replaced m with T in anticipation of what follows):

We can also express these condition in Haskell:

alg . return = id
alg . join = alg . fmap alg

Let’s look at a small example. An algebra for a list endofunctor consists of some type a and a function that produces an a from a list of a. We can express this function using foldr by choosing both the element type and the accumulator type to be equal to a:

foldr :: (a -> a -> a) -> a -> [a] -> a

This particular algebra is specified by a two-argument function, let’s call it f, and a value z. The list functor happens to also be a monad, with return turning a value into a singleton list. The composition of the algebra, here foldr f z, after return takes x to:

foldr f z [x] = x `f` z

where the action of f is written in the infix notation. The algebra is compatible with the monad if the following coherence condition is satisfied for every x:

x `f` z = x

If we look at f as a binary operator, this condition tells us that z is the right unit.

The second coherence condition operates on a list of lists. The action of join concatenates the individual lists. We can then fold the resulting list. On the other hand, we can first fold the individual lists, and then fold the resulting list. Again, if we interpret f as a binary operator, this condition tells us that this binary operation is associative. These conditions are certainly fulfilled when (a, f, z) is a monoid.
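For instance, summing integers is such a monad-compatible algebra. Here is a small sketch (alg and check are illustrative names):

import Control.Monad (join)

alg :: [Int] -> Int
alg = foldr (+) 0

-- alg . return = id:           alg [x] == x + 0 == x
-- alg . join = alg . fmap alg: summing a concatenation equals
-- summing the per-list sums
check :: Bool
check = alg (join [[1, 2], [3]]) == alg (fmap alg [[1, 2], [3]]) -- True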

T-algebras

Since mathematicians prefer to call their monads T, they call algebras compatible with them T-algebras. T-algebras for a given monad T in a category C form a category called the Eilenberg-Moore category, often denoted by C^T. Morphisms in that category are homomorphisms of algebras. These are the same homomorphisms we’ve seen defined for F-algebras.

A T-algebra is a pair consisting of a carrier object and an evaluator, (a, f). There is an obvious forgetful functor UT from CT to C, which maps (a, f) to a. It also maps a homomorphism of T-algebras to a corresponding morphism between carrier objects in C. You may remember from our discussion of adjunctions that the left adjoint to a forgetful functor is called a free functor.

The left adjoint to UT is called FT. It maps an object a in C to a free algebra in CT. The carrier of this free algebra is T a. Its evaluator is a morphism from T (T a) back to T a. Since T is a monad, we can use the monadic μa (Haskell join) as the evaluator.

We still have to show that this is a T-algebra. For that, two coherence conditions must be satisfied:

alg ∘ ηTa = idTa
alg ∘ μa = alg ∘ T alg

But these are just monadic laws, if you plug in μ for the algebra.

As you may recall, every adjunction defines a monad. It turns out that the adjunction between FT and UT defines the very monad T that was used in the construction of the Eilenberg-Moore category. Since we can perform this construction for every monad, we conclude that every monad can be generated from an adjunction. Later I’ll show you that there is another adjunction that generates the same monad.

Here’s the plan: First I’ll show you that FT is indeed the left adjoint of UT. I’ll do it by defining the unit and the counit of this adjunction and proving that the corresponding triangular identities are satisfied. Then I’ll show you that the monad generated by this adjunction is indeed our original monad.

The unit of the adjunction is the natural transformation:

η :: I -> UT ∘ FT

Let’s calculate the a component of this transformation. The identity functor gives us a. The free functor produces the free algebra (T a, μa), and the forgetful functor reduces it to T a. Altogether we get a mapping from a to T a. We’ll simply use the unit of the monad T as the unit of this adjunction.

Let’s look at the counit:

ε :: FT ∘ UT -> I

Let’s calculate its component at some T-algebra (a, f). The forgetful functor forgets the f, and the free functor produces the pair (T a, μa). So in order to define the component of the counit ε at (a, f), we need the right morphism in the Eilenberg-Moore category, or a homomorphism of T-algebras:

(T a, μa) -> (a, f)

Such a homomorphism should map the carrier T a to a. Let’s just resurrect the forgotten evaluator f. This time we’ll use it as a homomorphism of T-algebras. Indeed, the same commuting diagram that makes f a T-algebra may be re-interpreted to show that it’s a homomorphism of T-algebras:


We have thus defined the component of the counit natural transformation ε at (a, f) (an object in the category of T-algebras) to be f.

To complete the adjunction we also need to show that the unit and the counit satisfy the triangular identities. These are:

The first one holds because of the unit law for the monad T. The second is just the law of the T-algebra (a, f).

We have established that the two functors form an adjunction:

FT ⊣ UT

Every adjunction gives rise to a monad. The round trip

UT ∘ FT

is the endofunctor in C that gives rise to the corresponding monad. Let’s see what its action on an object a is. The free algebra created by FT is (T a, μa). The forgetful functor UT drops the evaluator. So, indeed, we have:

UT ∘ FT = T

As expected, the unit of the adjunction is the unit of the monad T.

You may remember that the counit of the adjunction produces monadic multiplication through the following formula:

μ = R ∘ ε ∘ L

This is a horizontal composition of three natural transformations, two of them being identity natural transformations mapping, respectively, L to L and R to R. The one in the middle, the counit, is a natural transformation whose component at an algebra (a, f) is f.

Let’s calculate the component μa. We first horizontally compose ε after FT, which results in the component of ε at FTa. Since FT takes a to the algebra (T a, μa), and ε picks the evaluator, we end up with μa. Horizontal composition on the left with UT doesn’t change anything, since the action of UT on morphisms is trivial. So, indeed, the μ obtained from the adjunction is the same as the μ of the original monad T.

The Kleisli Category

We’ve seen the Kleisli category before. It’s a category constructed from another category C and a monad T. We’ll call this category C_T (with T in the subscript, to distinguish it from the Eilenberg-Moore category C^T). The objects in the Kleisli category C_T are the objects of C, but the morphisms are different. A morphism fK from a to b in the Kleisli category corresponds to a morphism f from a to T b in the original category. We call this morphism a Kleisli arrow from a to b.

Composition of morphisms in the Kleisli category is defined in terms of monadic composition of Kleisli arrows. For instance, let’s compose gK after fK. In the Kleisli category we have:

fK :: a -> b
gK :: b -> c

which, in the category C, corresponds to:

f :: a -> T b
g :: b -> T c

We define the composition:

hK = gK ∘ fK

as a Kleisli arrow in C

h :: a -> T c
h = μ ∘ (T g) ∘ f

In Haskell we would write it as:

h = join . fmap g . f
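For example, here is a concrete Kleisli composition in the Maybe monad (safeRecip and safeSqrt are made-up Kleisli arrows):

import Control.Monad (join)

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x | x < 0     = Nothing
           | otherwise = Just (sqrt x)

-- the composition of the two Kleisli arrows
recipSqrt :: Double -> Maybe Double
recipSqrt = join . fmap safeSqrt . safeRecip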

There is a functor F from C to C_T which acts trivially on objects. On morphisms, it maps f in C to a morphism in C_T by creating a Kleisli arrow that embellishes the return value of f. Given a morphism:

f :: a -> b

it creates a morphism in C_T with the corresponding Kleisli arrow:

η ∘ f

In Haskell we’d write it as:

return . f

We can also define a functor G from C_T back to C. It takes an object a from the Kleisli category and maps it to an object T a in C. Its action on a morphism fK corresponding to a Kleisli arrow:

f :: a -> T b

is a morphism in C:

T a -> T b

given by first lifting f and then applying μ:

μb ∘ T f

In Haskell notation this would read:

G fK = join . fmap f

You may recognize this as the definition of monadic bind in terms of join.
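Indeed, for any monad the two are related by:

m >>= f = join (fmap f m)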

It’s easy to see that the two functors form an adjunction:

F ⊣ G

and their composition G ∘ F reproduces the original monad T.

So this is the second adjunction that produces the same monad. In fact there is a whole category of adjunctions Adj(C, T) that result in the same monad T on C. The Kleisli adjunction we’ve just seen is the initial object in this category, and the Eilenberg-Moore adjunction is the terminal object.

Coalgebras for Comonads

Analogous constructions can be done for any comonad W. We can define a category of coalgebras that are compatible with a comonad. They make the following diagrams commute:

where coa is the coevaluation morphism of the coalgebra whose carrier is a:

coa :: a -> W a

and ε and δ are the two natural transformations defining the comonad (in Haskell, their components are called extract and duplicate).

There is an obvious forgetful functor UW from the category of these coalgebras to C. It just forgets the coevaluation. We’ll consider its right adjoint FW.

UW ⊣ FW

The right adjoint to a forgetful functor is called a cofree functor. FW generates cofree coalgebras. It assigns, to an object a in C, the coalgebra (W a, δa). The adjunction reproduces the original comonad as the composite UW ∘ FW.

Similarly, we can construct a co-Kleisli category with co-Kleisli arrows and regenerate the comonad from the corresponding adjunction.

Lenses

Let’s go back to our discussion of lenses. A lens can be written as a coalgebra:

coalg :: a -> Store s a

for the functor Store s:

data Store s a = Store (s -> a) s

This coalgebra can be also expressed as a pair of functions:

set :: a -> s -> a
get :: a -> s

(Think of a as standing for “all,” and s as a “small” part of it.) In terms of this pair, we have:

coalg a = Store (set a) (get a)

Here, a is a value of type a. Notice that partially applied set is a function s->a.
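As a concrete illustration (Person and age are made-up names), take a to be a record and s the type of one of its fields:

data Person = Person { name :: String, age :: Int }

coalgPerson :: Person -> Store Int Person
coalgPerson p = Store (\n -> p { age = n }) (age p)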

We also know that Store s is a comonad:

instance Comonad (Store s) where
  extract (Store f s) = f s
  duplicate (Store f s) = Store (Store f) s

The question is: Under what conditions is a lens a coalgebra for this comonad? The first coherence condition:

εa ∘ coalg = ida

translates to:

set a (get a) = a

This is the lens law that expresses the fact that if you set a field of the structure a to its previous value, nothing changes.

The second condition:

fmap coalg ∘ coalg = δa ∘ coalg

requires a little more work. First, recall the definition of fmap for the Store functor:

fmap g (Store f s) = Store (g . f) s

Applying fmap coalg to the result of coalg gives us:

Store (coalg . set a) (get a)

On the other hand, applying duplicate to the result of coalg produces:

Store (Store (set a)) (get a)

For these two expressions to be equal, the two functions under Store must be equal when acting on an arbitrary s:

coalg (set a s) = Store (set a) s

Expanding coalg, we get:

Store (set (set a s)) (get (set a s)) = Store (set a) s

This is equivalent to two remaining lens laws. The first one:

set (set a s) = set a

tells us that setting the value of a field twice is the same as setting it once. The second law:

get (set a s) = s

tells us that getting a value of a field that was set to s gives s back.

In other words, a well-behaved lens is indeed a comonad coalgebra for the Store functor.

Challenges

  1. What is the action of the free functor F :: C -> C_T on morphisms? Hint: use the naturality condition for monadic μ.
  2. Define the adjunction:
    UW ⊣ FW
  3. Prove that the above adjunction reproduces the original comonad.

Acknowledgment

I’d like to thank Gershom Bazerman for helpful comments.

Next: Ends and Coends.


This is part 24 of Categories for Programmers. Previously: Comonads. See the Table of Contents.

We’ve seen several formulations of a monoid: as a set, as a single-object category, as an object in a monoidal category. How much more juice can we squeeze out of this simple concept?

Let’s try. Take this definition of a monoid as a set m with a pair of functions:

μ :: m × m -> m
η :: 1 -> m

Here, 1 is the terminal object in Set — the singleton set. The first function defines multiplication (it takes a pair of elements and returns their product), the second selects the unit element from m. Not every choice of two functions with these signatures results in a monoid. For that we need to impose additional conditions: associativity and unit laws. But let’s forget about that for a moment and just consider “potential monoids.” A pair of functions is an element of a cartesian product of two sets of functions. We know that these sets may be represented as exponential objects:

μ ∈ m^(m×m)
η ∈ m^1

The cartesian product of these two sets is:

m^(m×m) × m^1

Using some high-school algebra (which works in every cartesian closed category), we can rewrite it as:

m^(m×m + 1)

The plus sign stands for the coproduct in Set. We have just replaced a pair of functions with a single function — an element of the set:

m × m + 1 -> m

Any element of this set of functions is a potential monoid.

The beauty of this formulation is that it leads to interesting generalizations. For instance, how would we describe a group using this language? A group is a monoid with one additional function that assigns the inverse to every element. The latter is a function of the type m->m. As an example, integers form a group with addition as a binary operation, zero as the unit, and negation as the inverse. To define a group we would start with a triple of functions:

m × m -> m
m -> m
1 -> m

As before, we can combine all these triples into one set of functions:

m × m + m + 1 -> m

We started with one binary operator (addition), one unary operator (negation), and one nullary operator (identity — here zero). We combined them into one function. All functions with this signature define potential groups.

We can go on like this. For instance, to define a ring, we would add one more binary operator and one nullary operator, and so on. Each time we end up with a function type whose left-hand side is a sum of powers (possibly including the zeroth power — the terminal object), and the right-hand side being the set itself.

Now we can go crazy with generalizations. First of all, we can replace sets with objects and functions with morphisms. We can define n-ary operators as morphisms from n-ary products. It means that we need a category that supports finite products. For nullary operators we require the existence of the terminal object. So we need a cartesian category. In order to combine these operators we need exponentials, so that’s a cartesian closed category. Finally, we need coproducts to complete our algebraic shenanigans.

Alternatively, we can just forget about the way we derived our formulas and concentrate on the final product. The sum of products on the left hand side of our morphism defines an endofunctor. What if we pick an arbitrary endofunctor F instead? In that case we don’t have to impose any constraints on our category. What we obtain is called an F-algebra.

An F-algebra is a triple consisting of an endofunctor F, an object a, and a morphism

F a -> a

The object is often called the carrier, an underlying object or, in the context of programming, the carrier type. The morphism is often called the evaluation function or the structure map. Think of the functor F as forming expressions and the morphism as evaluating them.

Here’s the Haskell definition of an F-algebra:

type Algebra f a = f a -> a

It identifies the algebra with its evaluation function.

In the monoid example, the functor in question is:

data MonF a = MEmpty | MAppend a a

This is Haskell for 1 + a × a (remember algebraic data structures).

A ring would be defined using the following functor:

data RingF a = RZero
             | ROne
             | RAdd a a 
             | RMul a a
             | RNeg a

which is Haskell for 1 + 1 + a × a + a × a + a.

An example of a ring is the set of integers. We can choose Integer as the carrier type and define the evaluation function as:

evalZ :: Algebra RingF Integer
evalZ RZero      = 0
evalZ ROne       = 1
evalZ (RAdd m n) = m + n
evalZ (RMul m n) = m * n
evalZ (RNeg n)   = -n

There are more F-algebras based on the same functor RingF. For instance, polynomials form a ring and so do square matrices.
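Here is one more algebra over RingF, this time with Bool as the carrier: arithmetic modulo 2 (evalParity is an illustrative name):

evalParity :: Algebra RingF Bool
evalParity RZero      = False
evalParity ROne       = True
evalParity (RAdd m n) = m /= n -- addition mod 2 is exclusive or
evalParity (RMul m n) = m && n
evalParity (RNeg n)   = n      -- negation is the identity mod 2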

As you can see, the role of the functor is to generate expressions that can be evaluated using the evaluator of the algebra. So far we’ve only seen very simple expressions. We are often interested in more elaborate expressions that can be defined using recursion.

Recursion

One way to generate arbitrary expression trees is to replace the variable a inside the functor definition with recursion. For instance, an arbitrary expression in a ring is generated by this tree-like data structure:

data Expr = RZero
          | ROne
          | RAdd Expr Expr 
          | RMul Expr Expr
          | RNeg Expr

We can replace the original ring evaluator with its recursive version:

evalZ :: Expr -> Integer
evalZ RZero        = 0
evalZ ROne         = 1
evalZ (RAdd e1 e2) = evalZ e1 + evalZ e2
evalZ (RMul e1 e2) = evalZ e1 * evalZ e2
evalZ (RNeg e)     = -(evalZ e)

This is still not very practical, since we are forced to represent all integers as sums of ones, but it will do in a pinch.

But how can we describe expression trees using the language of F-algebras? We have to somehow formalize the process of replacing the free type variable in the definition of our functor, recursively, with the result of the replacement. Imagine doing this in steps. First, define a depth-one tree as:

type RingF1 a = RingF (RingF a)

We are filling the holes in the definition of RingF with depth-zero trees generated by RingF a. Depth-2 trees are similarly obtained as:

type RingF2 a = RingF (RingF (RingF a))

which we can also write as:

type RingF2 a = RingF (RingF1 a)

Continuing this process, we can write a symbolic equation:

type RingFn+1 a = RingF (RingFn a)

Conceptually, after repeating this process infinitely many times, we end up with our Expr. Notice that Expr does not depend on a. The starting point of our journey doesn’t matter, we always end up in the same place. This is not always true for an arbitrary endofunctor in an arbitrary category, but in the category Set things are nice.

Of course, this is a hand-waving argument, and I’ll make it more rigorous later.

Applying an endofunctor infinitely many times produces a fixed point, an object defined as:

Fix f = f (Fix f)

The intuition behind this definition is that, since we applied f infinitely many times to get Fix f, applying it one more time doesn’t change anything. In Haskell, the definition of a fixed point is:

newtype Fix f = Fix (f (Fix f))

Arguably, this would be more readable if the constructor’s name were different than the name of the type being defined, as in:

newtype Fix f = In (f (Fix f))

but I’ll stick with the accepted notation. The constructor Fix (or In, if you prefer) can be seen as a function:

Fix :: f (Fix f) -> Fix f

There is also a function that peels off one level of functor application:

unFix :: Fix f -> f (Fix f)
unFix (Fix x) = x

The two functions are inverses of each other. We’ll use these functions later.

Category of F-Algebras

Here’s the oldest trick in the book: Whenever you come up with a way of constructing some new objects, see if they form a category. Not surprisingly, algebras over a given endofunctor F form a category. Objects in that category are algebras — pairs consisting of a carrier object a and a morphism F a -> a, both from the original category C.

To complete the picture, we have to define morphisms in the category of F-algebras. A morphism must map one algebra (a, f) to another algebra (b, g). We’ll define it as a morphism m that maps the carriers — it goes from a to b in the original category. Not any morphism will do: we want it to be compatible with the two evaluators. (We call such a structure-preserving morphism a homomorphism.) Here’s how you define a homomorphism of F-algebras. First, notice that we can lift m to the mapping:

F m :: F a -> F b

we can then follow it with g to get to b. Equivalently, we can use f to go from F a to a and then follow it with m. We want the two paths to be equal:

g ∘ F m = m ∘ f

[diagram]

It’s easy to convince yourself that this is indeed a category (hint: identity morphisms from C work just fine, and a composition of homomorphisms is a homomorphism).

An initial object in the category of F-algebras, if it exists, is called the initial algebra. Let’s call the carrier of this initial algebra i and its evaluator j :: F i -> i. It turns out that j, the evaluator of the initial algebra, is an isomorphism. This result is known as Lambek’s theorem. The proof relies on the definition of the initial object, which requires that there be a unique homomorphism m from it to any other F-algebra. Since m is a homomorphism, the following diagram must commute:

[diagram]

Now let’s construct an algebra whose carrier is F i. The evaluator of such an algebra must be a morphism from F (F i) to F i. We can easily construct such an evaluator simply by lifting j:

F j :: F (F i) -> F i

Because (i, j) is the initial algebra, there must be a unique homomorphism m from it to (F i, F j). The following diagram must commute:

[diagram]

But we also have this trivially commuting diagram (both paths are the same!):

[diagram]

which can be interpreted as showing that j is a homomorphism of algebras, mapping (F i, F j) to (i, j). We can glue these two diagrams together to get:

[diagram]

This diagram may, in turn, be interpreted as showing that j ∘ m is a homomorphism of algebras. Only in this case the two algebras are the same. Moreover, because (i, j) is initial, there can only be one homomorphism from it to itself, and that’s the identity morphism idi — which we know is a homomorphism of algebras. Therefore j ∘ m = idi. Similarly, we can prove that m ∘ j = idF i by stacking the two diagrams in the reverse order. This shows that m is the inverse of j and therefore j is an isomorphism between F i and i:

F i ≅ i

But that is just saying that i is a fixed point of F. That’s the formal proof behind the original hand-waving argument.

Back to Haskell: We recognize i as our Fix f, j as our constructor Fix, and its inverse as unFix. The isomorphism in Lambek’s theorem tells us that, in order to get the initial algebra, we take the functor f and replace its argument a with Fix f. We also see why the fixed point does not depend on a.

Natural Numbers

Natural numbers can also be defined as an F-algebra. The starting point is the pair of morphisms:

zero :: 1 -> N
succ :: N -> N

The first one picks the zero, and the second one maps all numbers to their successors. As before, we can combine the two into one:

1 + N -> N

The left hand side defines a functor which, in Haskell, can be written like this:

data NatF a = ZeroF | SuccF a

The fixed point of this functor (the initial algebra that it generates) can be encoded in Haskell as:

data Nat = Zero | Succ Nat

A natural number is either zero or a successor of another number. This is known as the Peano representation for natural numbers.

Catamorphisms

Let’s rewrite the initiality condition using Haskell notation. We call the initial algebra Fix f. Its evaluator is the constructor Fix. There is a unique morphism m from the initial algebra to any other algebra over the same functor. Let’s pick an algebra whose carrier is a and the evaluator is alg.

[diagram]

By the way, notice what m is: It’s an evaluator for the fixed point, an evaluator for the whole recursive expression tree. Let’s find a general way of implementing it.

Lambek’s theorem tells us that the constructor Fix is an isomorphism. We called its inverse unFix. We can therefore flip one arrow in this diagram to get:

[diagram]

Let’s write down the commutation condition for this diagram:

m = alg . fmap m . unFix

We can interpret this equation as a recursive definition of m. The recursion is bound to terminate for any finite tree created using the functor f. We can see that by noticing that fmap m operates underneath the top layer of the functor f. In other words, it works on the children of the original tree. The children are always one level shallower than the original tree.

Here’s what happens when we apply m to a tree constructed using Fix f. The action of unFix peels off the constructor, exposing the top level of the tree. We then apply m to all the children of the top node. This produces results of type a. Finally, we combine those results by applying the non-recursive evaluator alg. The key point is that our evaluator alg is a simple non-recursive function.

Since we can do this for any algebra alg, it makes sense to define a higher order function that takes the algebra as a parameter and gives us the function we called m. This higher order function is called a catamorphism:

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

Let’s see an example of that. Take the functor that defines natural numbers:

data NatF a = ZeroF | SuccF a

Let’s pick (Int, Int) as the carrier type and define our algebra as:

fib :: NatF (Int, Int) -> (Int, Int)
fib ZeroF = (1, 1)
fib (SuccF (m, n)) = (n, m + n)

You can easily convince yourself that the catamorphism for this algebra, cata fib, calculates Fibonacci numbers.
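
To try it out, we need a way to build values of type Fix NatF. Here’s a minimal sketch: a hypothetical helper toNat that converts a non-negative Int to its Peano representation, together with the Functor instance for NatF that cata requires.

instance Functor NatF where
  fmap _ ZeroF     = ZeroF
  fmap g (SuccF x) = SuccF (g x)

-- Convert a non-negative Int to the fixed-point representation
toNat :: Int -> Fix NatF
toNat 0 = Fix ZeroF
toNat n = Fix (SuccF (toNat (n - 1)))

-- fst (cata fib (toNat 5)) evaluates to 8: five steps of the
-- recurrence applied to the seed (1, 1)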

In general, an algebra for NatF defines a recurrence relation: the value of the current element in terms of the previous element. A catamorphism then evaluates the n-th element of that sequence.

Folds

A list of e is the initial algebra of the following functor:

data ListF e a = NilF | ConsF e a

Indeed, replacing the variable a with the result of recursion, which we’ll call List e, we get:

data List e = Nil | Cons e (List e)

An algebra for a list functor picks a particular carrier type and defines a function that does pattern matching on the two constructors. Its value for NilF tells us how to evaluate an empty list, and its value for ConsF tells us how to combine the current element with the previously accumulated value.

For instance, here’s an algebra that can be used to calculate the length of a list (the carrier type is Int):

lenAlg :: ListF e Int -> Int
lenAlg (ConsF e n) = n + 1
lenAlg NilF = 0

Indeed, the resulting catamorphism cata lenAlg calculates the length of a list. Notice that the evaluator is a combination of (1) a function that takes a list element and an accumulator and returns a new accumulator, and (2) a starting value, here zero. The type of the value and the type of the accumulator are given by the carrier type.

Compare this to the traditional Haskell definition:

length = foldr (\e n -> n + 1) 0

The two arguments to foldr are exactly the two components of the algebra.
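
To actually run cata lenAlg we need a list in its fixed-point representation. Here’s a sketch with a hypothetical converter toListFix and the Functor instance for ListF that the catamorphism needs:

instance Functor (ListF e) where
  fmap _ NilF        = NilF
  fmap g (ConsF e x) = ConsF e (g x)

toListFix :: [e] -> Fix (ListF e)
toListFix = foldr (\e t -> Fix (ConsF e t)) (Fix NilF)

-- cata lenAlg (toListFix "hello") evaluates to 5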

Let’s try another example:

sumAlg :: ListF Double Double -> Double
sumAlg (ConsF e s) = e + s
sumAlg NilF = 0.0

Again, compare this with:

sum = foldr (\e s -> e + s) 0.0

As you can see, foldr is just a convenient specialization of a catamorphism to lists.

Coalgebras

As usual, we have a dual construction of an F-coalgebra, where the direction of the morphism is reversed:

a -> F a

Coalgebras for a given functor also form a category, with homomorphisms preserving the coalgebraic structure. The terminal object (t, u) in that category is called the terminal (or final) coalgebra. For every other coalgebra (a, f) there is a unique homomorphism m that makes the following diagram commute:

[diagram]

A terminal coalgebra is a fixed point of the functor, in the sense that the morphism u :: t -> F t is an isomorphism (Lambek’s theorem for coalgebras):

F t ≅ t

A terminal coalgebra is usually interpreted in programming as a recipe for generating (possibly infinite) data structures or transition systems.

Just like a catamorphism can be used to evaluate an initial algebra, an anamorphism can be used to coevaluate a terminal coalgebra:

ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg

A canonical example of a coalgebra is based on a functor whose fixed point is an infinite stream of elements of type e. This is the functor:

data StreamF e a = StreamF e a
  deriving Functor

and this is its fixed point:

data Stream e = Stream e (Stream e)

A coalgebra for StreamF e is a function that takes the seed of type a and produces a pair (StreamF is a fancy name for a pair) consisting of an element and the next seed.

You can easily generate simple examples of coalgebras that produce infinite sequences, like the list of squares, or reciprocals.
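
For instance, here’s a sketch of a coalgebra that generates the stream of reciprocals 1, 1/2, 1/3, and so on (the name is illustrative):

recips :: Double -> StreamF Double Double
recips n = StreamF (1 / n) (n + 1)

-- ana recips 1 produces the stream 1.0, 0.5, 0.333..., etc.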

A more interesting example is a coalgebra that produces a list of primes. The trick is to use an infinite list as a carrier. Our starting seed will be the list [2..]. The next seed will be the tail of this list with all multiples of 2 removed. It’s a list of odd numbers starting with 3. In the next step, we’ll take the tail of this list and remove all multiples of 3, and so on. You might recognize the makings of the sieve of Eratosthenes. This coalgebra is implemented by the following function:

era :: [Int] -> StreamF Int [Int]
era (p : ns) = StreamF p (filter (notdiv p) ns)
    where notdiv p n = n `mod` p /= 0

The anamorphism for this coalgebra generates the list of primes:

primes = ana era [2..]

A stream is an infinite list, so it should be possible to convert it to a Haskell list. To do that, we can use the same functor StreamF to form an algebra, and we can run a catamorphism over it. For instance, this is a catamorphism that converts a stream to a list:

toListC :: Fix (StreamF e) -> [e]
toListC = cata al
   where al :: StreamF e [e] -> [e]
         al (StreamF e a) = e : a
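
Putting the two together, laziness makes sure that the catamorphism unrolls only as much of the infinite stream as we demand:

take 5 (toListC primes) -- evaluates to [2, 3, 5, 7, 11]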

Here, the same fixed point is simultaneously an initial algebra and a terminal coalgebra for the same endofunctor. It’s not always like this in an arbitrary category. In general, an endofunctor may have many (or no) fixed points. The initial algebra is the so-called least fixed point, and the terminal coalgebra is the greatest fixed point. In Haskell, though, both are defined by the same formula, and they coincide.

The anamorphism for lists is called unfold. To create finite lists, the functor is modified to produce a Maybe pair:

unfoldr :: (b -> Maybe (a, b)) -> b -> [a]

The value of Nothing will terminate the generation of the list.
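
For example, here’s a countdown generated with unfoldr from Data.List (a quick illustration of how Nothing stops the process):

import Data.List (unfoldr)

countdown :: Int -> [Int]
countdown = unfoldr (\n -> if n == 0 then Nothing else Just (n, n - 1))

-- countdown 5 evaluates to [5, 4, 3, 2, 1]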

An interesting case of a coalgebra is related to lenses. A lens can be represented as a pair of a getter and a setter:

set :: a -> s -> a
get :: a -> s

Here, a is usually some product data type with a field of type s. The getter retrieves the value of that field and the setter replaces this field with a new value. These two functions can be combined into one:

a -> (s, s -> a)

We can rewrite this function further as:

a -> Store s a

where we have defined a functor:

data Store s a = Store (s -> a) s

Notice that this is not a simple algebraic functor constructed from sums of products. It involves an exponential a^s.

A lens is a coalgebra for this functor with the carrier type a. We’ve seen before that Store s is also a comonad. It turns out that a well-behaved lens corresponds to a coalgebra that is compatible with the comonad structure. We’ll talk about this in the next section.
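
As a quick sketch, here’s the coalgebra for a lens focusing on the first component of a pair (an illustrative example, not a definition from any lens library):

-- The second argument of Store is the getter’s result (the current
-- value of the field); the function argument is the setter, which
-- rebuilds the whole pair around a new field value.
fstLens :: (s, b) -> Store s (s, b)
fstLens (x, y) = Store (\x' -> (x', y)) x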

Challenges

  1. Implement the evaluation function for a ring of polynomials of one variable. You can represent a polynomial as a list of coefficients in front of powers of x. For instance, 4x^2 - 1 would be represented as (starting with the zero’th power) [-1, 0, 4].
  2. Generalize the previous construction to polynomials of many independent variables, like x^2y - 3y^3z.
  3. Implement the algebra for the ring of 2×2 matrices.
  4. Define a coalgebra whose anamorphism produces a list of squares of natural numbers.
  5. Use unfoldr to generate a list of the first n primes.

Next: Algebras for Monads.


If there is one structure that permeates category theory and, by implication, the whole of mathematics, it’s the monoid. To study the evolution of this concept is to study the power of abstraction and the idea of getting more for less, which is at the core of mathematics. When I say “evolution” I don’t necessarily mean chronological development. I’m looking at a monoid as if it were a life form evolving through various eons of abstraction.

It’s an ambitious project and I’ll have to cover a lot of material. I’ll start slowly, with the definitions of magmas and monoids, but then I will accelerate. A lot of concepts will be introduced in one or two sentences, mainly to familiarize the reader with the notation. I’ll dwell a little on monoidal categories, then breeze through ends, coends, and profunctors. I’ll show you how monads, arrows, and applicative functors arise from monoids in various monoidal categories.


The Magmas of the Hadean Eon

Monoids evolved from more primitive life forms feeding on sets. So, before even touching upon monoids, let’s talk about cartesian products, relations, and functions. You take two sets a and b (or, in the simplest case, two copies of the same set a) and form pairs of elements. That gives you a set of pairs, a.k.a., the cartesian product a×b. Any subset of such a cartesian product is called a relation. Two elements x and y are in a relation if the pair <x, y> is a member of that subset.

A function from a to b is a special kind of relation, in which every element x in the set a has one and only one element y in the set b that’s related to it. (Sometimes this is called a total function, since it’s defined for all elements of a).

Even before there were monoids, there was magma. A magma is a set with a binary operation and nothing else. So, in particular, there is no assumption of associativity, and there is no unit. A binary operation is simply a function from the cartesian product of a with itself back to a:

a × a -> a

It takes a pair of elements <x, y>, both coming from the set a, and maps it to an element of a.

It’s tempting to quote the Haskell definition of a magma:

class Magma a where
  (<>) :: a -> a -> a

but this definition is already tainted with some higher concepts like currying. An alternative would be:

class Magma a where
  (<>) :: (a, a) -> a

Here, we at least see a pair of elements that are being “multiplied.” But the pair type (a, a) is also a higher-level concept. I’ll come back to it later.

Lack of associativity means that we cannot identify (x<>y)<>z with x<>(y<>z). You have to keep the parentheses.

You might have heard of quaternions — their multiplication is associative. But not many people have heard of octonions, which are not associative. In fact Hamilton, who discovered quaternions, invented the word associative to disassociate himself from the octonions, which lack this property.

If you’re familiar with continuous groups, you might know that Lie algebras are not associative.

Closer to home — most operations on floating-point numbers are not associative on modern computers because of rounding errors.

But, really, most interesting binary operations are associative. So out of the magma emerges a semigroup. In a semigroup you can drop parentheses. A non-trivial (that is, non-monoidal) example of a semigroup is the set of integers with max as the binary operation. A maximum of three numbers is the same no matter in which order you pair them. But there is no integer that’s less than or equal to every other integer, so this is not a monoid.
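
In Haskell, this semigroup can be written as a newtype instance (a sketch using the Semigroup class from base, whose operator happens to be the same <> as our Magma):

newtype MaxInt = MaxInt Integer

instance Semigroup MaxInt where
  MaxInt m <> MaxInt n = MaxInt (max m n)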

Monoids of the Archean Eon

But, really, most interesting binary operations are both associative and unital. There usually is a “do nothing” element with respect to most binary operations. So life as we know it begins with a monoid.

A monoid is a set with a binary operation that is associative, and with a special element called the unit e that is neutral with respect to the binary operation. To be precise, these are the three monoid laws:

(x <> y) <> z = x <> (y <> z)
e <> x = x
x <> e = x

In Haskell, the traditional definition of a monoid uses mempty for the unit and mappend for the binary operation:

class Monoid a where
    mempty  :: a
    mappend :: a -> a -> a

As with the magma, the definition of mappend is curried. Equivalently, it could have been written as:

mappend :: (a, a) -> a

I’ll come back to this point later.

There are plenty of examples of monoids. Non-negative integers with addition, or positive integers with multiplication are the obvious ones. Strings with concatenation are interesting too, because concatenation is not commutative.

Just like pairs of elements from two sets a and b organize themselves into a set a×b, which is their cartesian product, functions between two sets organize themselves into a set — the set of functions from a to b, which we sometimes write as a->b.

This organizing principle is characteristic of sets, where everything you can think of is a set. Except when it’s more than just a set — for instance when you try to organize all sets into one large collection. This collection, or “class,” is not itself a set. You can’t have a set of all sets, but you can have a category Set of “small” sets, which are sets that belong to a “universe.” In what follows, I will confine myself to a single universe in order to dodge questions from foundational mathematicians.

Let’s now pop one level up and look at cartesian product as an operation on sets. For any two sets a and b, we can construct the set a×b. If we view this as “multiplication” of sets, we can say that sets form a magma. But do they form a monoid? Not exactly! To begin with, cartesian product is not associative. We can see it in Haskell: the type ((a, b), c) is not the same as the type (a, (b, c)). They are, however, isomorphic. There is an invertible function called the associator, from one type to the other:

alpha :: ((a, b), c) -> (a, (b, c))
alpha ((x, y), z) = (x, (y, z))

It’s just a repackaging of containers (such repackaging is, by the way, called a natural transformation).

For the unit of this “multiplication” we can pick the singleton set. In Haskell, this is the type called unit and it’s denoted by an empty pair of parentheses (). Again, the unit laws are valid up to isomorphism. There are two such isomorphisms called left and right unitors:

lambda :: ((), a) -> a
lambda ((), x) = x
rho :: (a, ()) -> a
rho (x, ()) = x

We have just exposed the monoidal structure of the category Set. Set is not a strict monoidal category, because the monoidal laws are satisfied only up to isomorphism.

There is another monoidal structure in Set. Just like cartesian product resembles multiplication, there is an operation on sets that resembles addition. It’s called disjoint sum. In Haskell it’s embodied in the type Either a b. Just like cartesian product, disjoint sum is associative up to isomorphism. The unit (or the “zero”) of this sum type is the empty set or, in Haskell, the Void type — also up to isomorphism.
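
The associator for the sum plays the same repackaging role as alpha did for the product:

alphaSum :: Either (Either a b) c -> Either a (Either b c)
alphaSum (Left (Left x))  = Left x
alphaSum (Left (Right y)) = Right (Left y)
alphaSum (Right z)        = Right (Right z)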

The Cambrian Explosion of Categories

The first rule of abstraction is, You do not talk about Fight Club. In the category Set, for instance, we are not supposed to admit that sets have elements. An object in Set is really a set, but you never talk about its elements. We still have functions between sets, but they become abstract morphisms, of which we only know how they compose.

Composition of functions is associative, and there is an identity function for every set, which serves as a unit of composition. We can write these rules compactly as:

(f ∘ g) ∘ h = f ∘ (g ∘ h)
id ∘ f = f
f ∘ id = f

These look exactly like monoid laws. So do functions form a monoid with respect to composition? Not quite, because you can’t compose any two functions. They must be composable, which means their endpoints have to match. In Haskell, we can compose g after f, or g ∘ f, only if:

f :: a -> b
g :: b -> c

Also, there is no single identity function, but a whole family of functions ida, one for each set a. In Haskell, we call that a polymorphic function.

But notice what happens if we restrict ourselves to just a single object a in Set. Every morphism from a back to a can be composed with any other such morphism (their endpoints always match). Moreover, we are guaranteed that among those so called endomorphisms there is one identity morphism ida, which acts as a unit of composition.

Notice that I switched from the set/function nomenclature to the more general object/morphism naming convention of category theory. We can now forget about sets and functions and define an arbitrary category as a collection (a set in a given universe) of objects, and sets of morphisms that go between them. The only requirements are that any two composable morphisms compose, and that there is an identity morphism for every object. And that composition must be associative.

We can now forget about sets and define a monoid as a category that has only one object. The binary operation is just the composition of (endo-)morphisms. It works! We have defined a monoid without a set. Or have we?

No, we haven’t! We have just swept it under the rug — the rug being the set of morphisms. Yes, morphisms between any two objects form a set called the hom-set. In a category C, the hom-set between objects a and b is denoted by C(a, b). So we haven’t completely eliminated sets from the picture.

In the single object category M, we have only one hom-set M(a, a). The elements of this set — and we are allowed to call them elements because it’s a set — are morphisms like f and g. We can compose them, and we can call this composition “multiplication,” thus recovering our previous definition of the monoid as a set. We get associativity for free, and we have the identity morphism ida serving as the unit.

It might seem at first that we haven’t made progress and, in fact, we might have made some things more complicated by forgetting the internal structure of objects. For instance, in the category Set, it’s no longer obvious what an empty set is. You can’t say it’s a set with no elements because of the Fight Club rule. Similarly with the singleton set. Fortunately, it turns out that both these sets can be uniquely described in terms of their interactions with other sets. By that I mean the kind of functions/morphisms that connect them to other objects in Set. These object-opaque definitions are called universal constructions. For instance, the empty set is the only set that has a unique morphism going from it to every other set. The advantage of this characterization is that it can now be applied to any category. One may ask this question in any category: Is there an object that has this property? If there is, we call it the initial object. The empty set is the initial object in Set. Similarly, a singleton set is the terminal object in Set (and it’s unique up to unique isomorphism).

A cartesian product of two sets can also be defined using a universal construction, one which doesn’t mention elements (or pairs of elements). And again, this construction may be used to define a (categorical) product in other categories. Of particular interest are categories where a product exists for every pair of objects (it does in Set).

In such categories there is actually an even better way of defining a product using an adjunction. But before we can get to adjunctions, let me summarize a few million years of evolution in a few terse paragraphs.

A functor is a mapping of categories that preserves their structure. It maps objects to objects and morphisms to morphisms. In Haskell we define a functor (really, an endofunctor) as a type constructor f (a mapping from types to types) that can be lifted to functions that go between these types:

class Functor f where
  fmap :: (a -> b) -> (f a -> f b)

The mapping of morphisms must also preserve composition and identity. Functors may collapse multiple objects into one, and multiple morphisms into one, but they never break connections. You may also think of functors as embedding one category inside another.

Finally, functors can be composed in the obvious way, and there is an identity endofunctor that maps a category onto itself. It follows that categories (at least the small ones) form a category Cat in which functors serve as morphisms.

There may be many ways of embedding one category inside another, and it’s extremely useful to be able to compare such embeddings by defining mappings between them. If we have two functors F and G between two categories C and D, we define a natural transformation between these functors by picking, for every object a, a morphism between F a and G a.

In Haskell, a natural transformation between two functors f and g is a polymorphic function:

type Nat f g = forall a. f a -> g a

In general, natural transformations must obey additional naturality conditions, but in Haskell they come for free (due to parametricity).
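
For instance, here’s a natural transformation from the list functor to Maybe:

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x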

Natural transformations may be composed, and there is an identity natural transformation from any functor to itself. It follows that functors between any two categories C and D form a category denoted by [C, D], where natural transformations play the role of morphisms. A hom-set in such a category is a set of natural transformations between two functors F and G, denoted by [C, D](F, G).

An invertible natural transformation is called a natural isomorphism. If two functors are naturally isomorphic they are essentially the same.

Arthropods and their Adjoints

Using a pair of functors that are the inverse of each other we may define equivalence of categories, but there is an even more useful concept of adjoint functors that compare the structures of two non-equivalent categories. The idea is that we have a “right” functor R going from category C to D and a “left” functor L going in the other direction, from D to C.

[diagram]

There are two possible compositions of these functors, both resulting in round trips or endofunctors. The categories would be equivalent if those endofunctors were naturally isomorphic to identity endofunctors. But for an adjunction, we impose weaker conditions. We require that there be two natural transformations (not necessarily isomorphisms):

η :: ID -> R ∘ L
ε :: L ∘ R -> IC

The first transformation η is called the unit; and the second ε, the counit of the adjunction.

In a small category, objects form sets, so it’s possible to form a cartesian product of two small categories C and D. Objects in such a category C×D are pairs of objects <c, d>, and morphisms are pairs of morphisms <f, g>.

After these preliminaries, we are ready to define the categorical product in C using an adjunction. We choose C×C as the left category. The left functor is the diagonal functor Δ that maps any object c to a pair <c, c> and any morphism f to a pair of morphisms <f, f>. Its right adjoint, if it exists, maps a pair of objects <a, b> to their categorical product a×b.

[diagram]

Interestingly, the terminal object can also be defined using an adjunction. This time we choose, as the left category, a singleton category with one object and one (identity) morphism. The left functor maps any object c to the singleton object. Its right adjoint, if it exists, maps the singleton object to the terminal object in C.

A category with all products and the terminal object is called a cartesian category, or cartesian monoidal category. Why monoidal? Because the operation of taking the categorical product is monoidal. It’s associative, up to isomorphism; and its unit is the terminal object.

Incidentally, this is the same monoidal structure that we’ve seen in Set, but now it’s generalized to the level of other categories. There was another monoidal structure in Set induced by the disjoint sum. Its categorical generalization is given by the coproduct, with the initial object playing the role of the unit.

But what about the set of morphisms? In Set, morphisms between two sets a and b form a hom-set, which is an object of the same category Set. In an arbitrary category C, a hom-set C(a, b) is still a set — but now it’s not an object of C. That’s why it’s called the external hom-set. However, there are categories in which each external hom-set has a corresponding object called the internal hom. This object is also called an exponential, b^a. It can be defined using an adjunction, but only if the category supports products. It’s an adjunction in which the left and right categories are the same. The left endofunctor takes an object b and maps it to a product b×a, where a is an arbitrary fixed object. Its adjoint functor maps an object b to the exponential b^a. The counit of this adjunction:

ε :: b^a × a -> b

is the evaluation function. In Haskell it has the following signature:

eval :: (a -> b, a) -> b

The Haskell function type a->b is equivalent to the exponential b^a.

A category that has all products and exponentials together with the terminal object is called cartesian closed. Cartesian closed categories, or CCCs, play an important role in the semantics of programming languages.

Tensorosaurus Rex

We have already seen two very similar monoidal structures induced by products and coproducts. In mathematics, two is a crowd, so let’s look for a pattern. Both product and coproduct act as bifunctors C×C->C. Let’s call such a bifunctor a tensor product and write it as an infix operator a ⊗ b. As a bifunctor, the tensor product can also lift pairs of morphisms:

f :: a -> a'
g :: b -> b'
f ⊗ g :: a ⊗ b -> a' ⊗ b'

To define a monoid on top of a tensor product, we will require that it be associative — up to isomorphism:

α :: (a ⊗ b) ⊗ c -> a ⊗ (b ⊗ c)

We also need a unit object, which we will call i. The two unit laws are:

λ :: i ⊗ a -> a
ρ :: a ⊗ i -> a

A category with a tensor product that satisfies the above properties, plus some additional coherence conditions, is called a monoidal category.

We can now specialize the tensor product to categorical product, in which case the unit object is the terminal object; or to coproduct, in which case we choose the initial object as the unit. But there is an even more interesting operation that has all the properties of the tensor product. I’m talking about functor composition.


Functorosaurus

Functors between any two categories C and D form a functor category [C, D] with natural transformations playing the role of morphisms. In general, these functors don’t compose (their endpoints don’t match) unless we pick the target category to be the same as the source category.

Endofunctor Composition

In the endofunctor category [C, C] any two functors can be composed. But in [C, C] functors are objects, so functor composition becomes an operation on objects. For any two endofunctors F and G it produces a new endofunctor F∘G. It’s a binary operation, so it’s a potential candidate for a tensor product. Indeed, it is a bifunctor: it can be lifted to natural transformations, which are morphisms in [C, C]. It’s associative — in fact it’s strictly associative, the associator α is the identity natural transformation. The unit with respect to endofunctor composition is the identity functor I. So the category of endofunctors is a monoidal category.
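
In Haskell, composition at the level of objects is a type constructor; base’s Data.Functor.Compose defines it essentially as:

newtype Compose f g a = Compose { getCompose :: f (g a) }

instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap h (Compose x) = Compose (fmap (fmap h) x)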

Unlike product and coproduct, which are symmetric up to isomorphism, endofunctor composition is not symmetric. In general, there is no relation between F∘G and G∘F.

Profunctor Composition

Different species of functors came up with their own composition strategies. Take for instance the profunctors, which are functors Cop×D->Set. They generalize the idea of relations between objects in C and D. The sets they map to may be thought of as sets of proofs of the relationship. An empty set means that the two objects are not related. If you want to compose two relations, you have to find an element that’s common to the image of one relation and the source of the other (relations are not, in general, symmetric). The proofs of the new composite relation are pairs of proofs of individual relations. Symbolically, if p and q are such profunctors/relations, their composition can be written as:

exists x. (p a x, q x b)

Existential quantification in Haskell translates to polymorphic construction, so the actual definition is:

data PCompose p q a b = forall x . PCompose (p a x) (q x b)

In category theory, existential quantification is encoded as the coend, which is a generalization of a colimit for profunctors. The coend formula for the composition of two profunctors reads:

(p ⊗ q) a b = ∫ z p a z × q z b

The product here is the cartesian product of sets.

Profunctors, being functors, form a category in which morphisms are natural transformations. As long as the two categories that they relate are the same, any two profunctors can be composed using a coend. So profunctor composition is a good candidate for a tensor product in such a category. It is indeed associative, up to isomorphism. But what’s the unit of profunctor composition? It turns out that the simplest profunctor — the hom-functor — is, by virtue of the Yoneda lemma, the unit of composition:

∫ z C(a, z) × p z b ≅ p a b
∫ z p a z × C(z, b) ≅ p a b

Thus profunctors Cop×C->Set form a monoidal category.


Day Convolution

Or consider Set-valued functors. They can be composed using Day convolution. For that, the category C must itself be monoidal. Day convolution of two functors C->Set is defined using a coend:

(f ★ g) a = ∫ x y f x × g y × C(x ⊗ y, a)

Here, the tensor product of x ⊗ y comes from the monoidal category C, the other products are just cartesian products of sets (one of them being the hom-set).

As before, in Haskell, the coend turns into an existential quantifier, which can be written symbolically:

Day f g a = exists x y. ((f x, g y), (x, y) -> a)

and encoded as a polymorphic constructor:

data Day f g a = forall x y. Day (f x) (g y) ((x, y) -> a)

We use the fact that the category of Haskell types is monoidal with respect to cartesian product.

We can build a monoidal category based on Day convolution. The unit with respect to Day convolution is C(i, -), the hom-functor applied to i — the unit in the monoidal category C. For instance, the left identity can be derived from:

(C(i, -) ★ g) a = ∫ x y C(i, x) × g y × C(x ⊗ y, a)

Applying the Yoneda lemma, or “integrating over x,” we get:

∫ y g y × C(i ⊗ y, a)

Considering that i is the unit of the tensor product, we can perform the second integration to get g a.

The Monozoic Era

Monoidal categories are important because they provide rich grazing grounds for monoids. In a monoidal category we can define a more general monoid. It’s an object m with some special properties. These properties replace the usual definitions of multiplication and unit.

First, let’s reformulate the definition of a set-based monoid, taking into account the fact that Set is a monoidal category with respect to cartesian product.

A monoid is a set, so it’s an object in Set — let’s call it m. Multiplication maps pairs of elements of m back to m. These pairs are just elements of the cartesian product m × m. So multiplication is defined as a function:

μ :: m × m -> m

The unit of multiplication is a special element of m. We can select this element by providing a special morphism from the singleton set to m:

η :: () -> m

We can now express associativity and unit laws as properties of these two functions. The beauty of this formulation is that it generalizes easily to any cartesian category — just replace functions with morphisms and the unit () with the terminal object. There’s no reason to stop there: we can lift this definition all the way up to a monoidal category.
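
In Haskell terms, using the Monoid class from before, the two morphisms are just the uncurried mappend and a function that ignores its unit argument and returns mempty (a sketch):

mu :: Monoid m => (m, m) -> m
mu (x, y) = mappend x y

eta :: Monoid m => () -> m
eta () = mempty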

A monoid in a monoidal category is an object m together with two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

Here i is the unit object with respect to the tensor product ⊗. Monoidal laws can be expressed using the associator α and the two unitors, λ and ρ, of the monoidal category:

[diagram]

[diagram]

Having previously defined several interesting monoidal categories, we can now go digging for new monoids.

Monads

Let’s start with the category of endofunctors where the tensor product is functor composition. A monoid in the category of endofunctors is an endofunctor m and two morphisms. Remember that morphisms in a functor category are natural transformations. So we end up with two natural transformations:

μ :: m ∘ m -> m
η :: I -> m

where I is the identity functor. Their components at an object a are:

μa :: m (m a) -> m a
ηa :: a -> m a

This construct is easily recognizable as a monad. The associativity and unit laws are just monad laws. In Haskell, μa is called join and ηa is called return.
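
For the list monad, for instance, the two components are:

joinList :: [[a]] -> [a]
joinList = concat

returnList :: a -> [a]
returnList x = [x]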

Arrows

Let’s switch to the category of profunctors Cop×C->Set with profunctor composition as the tensor product. A monoid in that category is a profunctor ar. Multiplication is defined by a natural transformation:

μ :: ar ⊗ ar -> ar

Its component at a, b is:

μa b :: (∫ z ar a z × ar z b) -> ar a b

To simplify this formula we need a very useful identity that relates coends to ends. A hom-set that starts at a coend is equivalent to an end of hom-sets:

C(∫ z p z z, y) ≅ ∫ z C(p z z, y)

Or, replacing external hom-sets with internal homs:

(∫ z p z z) -> y ≅ ∫ z (p z z -> y)

In Haskell, this formula is used to turn functions that take existential types into functions that are polymorphic:

(exists z. p z z) -> y ≅ forall z. (p z z -> y)

Intuitively, it makes perfect sense. If you want to define a function that takes an existential type, you have to be prepared to handle any type.

Using that identity, our multiplication formula can be rewritten as:

μa b :: ∫ z ((ar a z × ar z b) -> ar a b)

In Haskell, this derivation uses the existential quantifier:

mu a b = (exists z. (ar a z, ar z b)) -> ar a b

As we discussed, a function from an existential type is equivalent to a polymorphic function:

forall z. (ar a z, ar z b) -> ar a b

or, after currying and dropping the redundant quantification:

ar a z -> ar z b -> ar a b

This looks very much like a composition of morphisms in a category. In Haskell, this function is known in the infix-operator form as:

(>>>) :: ar a z -> ar z b -> ar a b

Let’s see what we get as the monoidal unit. Remember that the unit object in the profunctor category is the hom-functor C(a, b).

ηa b :: C(a, b) -> ar a b

In Haskell, this polymorphic function is traditionally called arr:

arr :: (a -> b) -> ar a b

The whole construct is known in Haskell as a pre-arrow. The full arrow is defined as a monoid in the category of strong profunctors, with strength defined as a natural transformation:

sta b :: p a b -> p (a, x) (b, x)

In Haskell, this function is called first.
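
For reference, Haskell’s Control.Arrow packages these pieces in one class (abridged; the real class declares a few more methods with default implementations, and Category comes from Control.Category):

class Category a => Arrow a where
  arr   :: (b -> c) -> a b c
  first :: a b c -> a (b, d) (c, d)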

Applicatives

There are several categorical formulations of what’s called in Haskell the applicative functor. To a first approximation, Haskell’s type system is the category Set. To translate Haskell constructs to category theory, the safest approach is to just play with endofunctors in Set. But both Set and its endofunctors have a lot of extra structure, so I’d like to start in a slightly more general setting.

Let’s have a look at the monoidal category of functors [C, Set], with Day convolution as the tensor product, and C(i, -) as unit. A monoid in this category is a functor f with multiplication given by the natural transformation:

μ :: f ★ f -> f

and unit given by:

η :: C(i, -) -> f

It turns out that the existence of these two natural transformations is equivalent to the requirement that f be a lax monoidal functor, which is the basis of the definition of the applicative functor in Haskell.

A monoidal functor is a functor that maps the monoidal structure of one category to the monoidal structure of another category. It maps the tensor product, and it maps the unit object. In our case, the source category C has the monoidal structure given by the tensor product ⊗, and the target category Set is monoidal with respect to the cartesian product ×. A functor is monoidal if it doesn’t matter whether we first map two objects and then multiply them, or first multiply them and then map the result:

f x × f y ≅ f (x ⊗ y)

Also, the unit object in Set should be isomorphic to the result of mapping the unit object in C:

() ≅ f i

Here, () is the terminal object in Set and i is the unit object in C.

These conditions are relaxed in the definition of a lax monoidal functor. A lax monoidal functor replaces isomorphisms with regular unidirectional morphisms:

f x × f y -> f (x ⊗ y)
() -> f i

It can be shown that a monoid in the category [C, Set], with Day convolution as the tensor product, is equivalent to a lax monoidal functor.

The Haskell definition of Applicative doesn’t look like Day convolution or like a lax monoidal functor:

class Functor f => Applicative f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    pure :: a -> f a

You may recognize pure as a component of η, the natural transformation defining the monoid with respect to Day convolution. When you replace the category C with Set, the unit object C(i, -) turns into the identity functor. However, the operator <*> is lifted from the definition of yet another lax functor, the lax closed functor. It’s a functor that preserves the closed structure defined by the internal hom functor. In Set, the internal hom functor is just the arrow (->), hence the definition:

class Functor f => Closed f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    unit :: f ()

As long as the internal hom is defined through the adjunction with the product, a lax closed functor is equivalent to a lax monoidal functor.

Conclusion

It is pretty shocking to realize how many different animals share the same body plan — I’m talking here about the monoid as the skeleton of a myriad of different mathematical and programming constructs. And I haven’t even touched on the whole kingdom of enriched categories, where monoidal categories form the reservoir of hom-objects. Virtually all notions I’ve discussed here can be generalized to enriched categories, including functors, profunctors, the Yoneda lemma, Day convolution, and so on.

Glossary

  • Hadean Eon: Began with the formation of the Earth about 4.6 billion years ago. It’s the period before the earliest-known rocks.
  • Archean Eon: During the Archean, the Earth’s crust had cooled enough to allow the formation of continents.
  • Cambrian explosion: Relatively short evolutionary event, during which most major animal phyla appeared.
  • Arthropods: from Greek ἄρθρωσις árthrosis, “joint”
  • Tensor: from Latin tendere, “to stretch”
  • Functor: from Latin fungi, “perform”

Bibliography

  1. Moggi, Notions of Computation and Monads.
  2. Rivas, Jaskelioff, Notions of Computation as Monoids.

Unlike monads, which came into programming straight from category theory, applicative functors have their origins in programming. McBride and Paterson introduced applicative functors as a programming pearl in their paper Applicative programming with effects. They also provided a categorical interpretation of applicatives in terms of strong lax monoidal functors. It’s been accepted that, just like “a monad is a monoid in the category of endofunctors,” so “an applicative is a strong lax monoidal functor.”

The so called “tensorial strength” seems to be important in categorical semantics, and in his seminal paper Notions of computation and monads, Moggi argued that effects should be described using strong monads. It makes sense, considering that a computation is done in a context, and you should be able to make the global context available under the monad. The fact that we don’t talk much about strong monads in Haskell is due to the fact that all functors in the category Set, which underlies Haskell’s type system, have canonical strength. So why do we talk about strength when dealing with applicative functors? I have looked into this question and have come to the conclusion that there is no fundamental reason, and that it’s okay to just say:

An applicative is a lax monoidal functor

In this post I’ll discuss different equivalent categorical definitions of the applicative functor. I’ll start with a lax closed functor, then move to a lax monoidal functor, and show the equivalence of the two definitions. Then I’ll introduce the calculus of ends and show that the third definition of the applicative functor as a monoid in a suitable functor category equipped with Day convolution is equivalent to the previous ones.

Applicative as a Lax Closed Functor

The standard definition of the applicative functor in Haskell reads:

class Functor f => Applicative f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    pure :: a -> f a

At first sight it doesn’t seem to involve a monoidal structure. It looks more like preserving function arrows (I added some redundant parentheses to suggest this interpretation).

Categorically, functors that “preserve arrows” are known as closed functors. Let’s look at a definition of a closed functor f between two categories C and D. We have to assume that both categories are closed, meaning that they have internal hom-objects for every pair of objects. Internal hom-objects are also called function objects or exponentials. They are normally defined through the right adjoint to the product functor:

C(z × a, b) ≅ C(z, a => b)

To distinguish between sets of morphisms and function objects (they are the same thing in Set), I will temporarily use double arrows for function objects.

We can take a functor f and act with it on the function object a=>b in the category C. We get an object f (a=>b) in D. Or we can map the two objects a and b from C to D and then construct the function object in D: f a => f b.

[diagram]

We call a functor closed if the two results are isomorphic (I have subscripted the two arrows with the categories where they are defined):

f (a =>C b) ≅ (f a =>D f b)

and if the functor preserves the unit object:

iD ≅ f iC

What’s the unit object? Normally, this is the unit with respect to the same product that was used to define the function object using the adjunction. I’m saying “normally,” because it’s possible to define a closed category without a product.

Note: The two arrows and the two units i are defined with respect to two different products. The first isomorphism must be natural in both a and b. Also, to complete the picture, there are some diagrams that must commute.

The two isomorphisms that define a closed functor can be relaxed and replaced by unidirectional morphisms. The result is a lax closed functor:

f (a => b) -> (f a => f b)
i -> f i

This looks almost like the definition of Applicative, except for one problem: how can we recover the natural transformation we call pure from a single morphism i -> f i?

One way to do it is from the position of strength. An endofunctor f has tensorial strength if there is a natural transformation:

stc a :: c ⊗ f a -> f (c ⊗ a)

Think of c as the context in which the computation f a is performed. Strength means that we can use this external context inside the computation.

In the category Set, with the tensor product replaced by cartesian product, all functors have canonical strength. In Haskell, we would define it as:

st (c, fa) = fmap ((,) c) fa

The morphism in the definition of the lax closed functor translates to:

unit :: () -> f ()

Notice that, up to isomorphism, the unit type () is the unit with respect to cartesian product. The relevant isomorphisms are:

λa :: ((), a) -> a
ρa :: (a, ()) -> a

Here’s the derivation from Rivas and Jaskelioff’s Notions of Computation as Monoids:

    a
≅  (a, ())   -- unit law, ρ^-1
-> (a, f ()) -- lax unit
-> f (a, ()) -- strength
≅  f a       -- lifted unit law, f ρ

Strength is necessary if you’re starting with a lax closed (or monoidal — see the next section) endofunctor in an arbitrary closed (or monoidal) category and you want to derive pure within that category — not after you restrict it to Set.

There is, however, an alternative derivation using the Yoneda lemma:

f ()
≅ forall a. (() -> a) -> f a  -- Yoneda
≅ forall a. a -> f a -- because: (() -> a) ≅ a

We recover the whole natural transformation from a single value. The advantage of this derivation is that it generalizes beyond endofunctors and it doesn’t require strength. As we’ll see later, it also ties nicely with the Day-convolution definition of applicative. The Yoneda lemma only works for Set-valued functors, but so does Day convolution (there are enriched versions of both Yoneda and Day convolution, but I’m not going to discuss them here).
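
In Haskell, the witness of this derivation is a one-liner: the whole of pure is recovered from the single value of type f () by mapping a constant function over it (a sketch, with a hypothetical name):

pureFromUnit :: Functor f => f () -> a -> f a
pureFromUnit u x = fmap (const x) u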

We can define the categorical version of Haskell’s applicative functor as a lax closed functor going from a closed category C to Set. It’s a functor equipped with a natural transformation:

f (a => b) -> (f a -> f b)

where a=>b is the internal hom-object in C (the second arrow is a function type in Set), and a function:

1 -> f i

where 1 is the singleton set and i is the unit object in C.

The importance of a categorical definition is that it comes with additional identities or “axioms.” A lax closed functor must be compatible with the structure of both categories. I will not go into details here, because we are really only interested in closed categories that are monoidal, where these axioms are easier to express.

The definition of a lax closed functor is easily translated to Haskell:

class Functor f => Closed f where
    (<*>) :: f (a -> b) -> f a -> f b
    unit :: f ()

Applicative as a Lax Monoidal Functor

Even though it’s possible to define a closed category without a monoidal structure, in practice we usually work with monoidal categories. This is reflected in the equivalent definition of Haskell’s applicative functor as a lax monoidal functor. In Haskell, we would write:

class Functor f => Monoidal f where
    (>*<) :: (f a, f b) -> f (a, b)
    unit :: f ()

This definition is equivalent to our previous definition of a closed functor. That’s because, as we’ve seen, a function object in a monoidal category is defined in terms of a product. We can show the equivalence in a more general categorical setting.

This time let’s start with a symmetric closed monoidal category C, in which the function object is defined through the right adjoint to the tensor product:

C(z ⊗ a, b) ≅ C(z, a => b)

As usual, the tensor product is associative and unital — with the unit object i — up to isomorphism. The symmetry is defined through natural isomorphism:

γ :: a ⊗ b -> b ⊗ a

A functor f between two monoidal categories is lax monoidal if there exist: (1) a natural transformation

f a ⊗ f b -> f (a ⊗ b)

and (2) a morphism

i -> f i

Notice that the products and units on either side of the two mappings are from different categories.

A (lax-) monoidal functor must also preserve associativity and unit laws.

For instance a triple product

f a ⊗ (f b ⊗ f c)

may be rearranged using an associator α to give

(f a ⊗ f b) ⊗ f c

then converted to

f (a ⊗ b) ⊗ f c

and then to

f ((a ⊗ b) ⊗ c)

Or it could be first converted to

f a ⊗ f (b ⊗ c)

and then to

f (a ⊗ (b ⊗ c))

These two should be equivalent under the associator in C.

[diagram]

Similarly, f a ⊗ i can be simplified to f a using the right unitor ρ in D. Or it could be first converted to f a ⊗ f i, then to f (a ⊗ i), and then to f a, using the right unitor in C. The two paths should be equivalent. (Similarly for the left identity.)

[diagram]

We will now consider functors from C to Set, with Set equipped with the usual cartesian product, and the singleton set as unit. A lax monoidal functor is defined by: (1) a natural transformation:

(f a, f b) -> f (a ⊗ b)

and (2) a choice of an element of the set f i (a function from 1 to f i picks an element from that set).

We need the target category to be Set because we want to be able to use the Yoneda lemma to show equivalence with the standard definition of applicative. I’ll come back to this point later.

The Equivalence

The definitions of a lax closed and a lax monoidal functor are equivalent when C is a closed symmetric monoidal category. The proof relies on the existence of the adjunction, in particular the unit and the counit of the adjunction:

ηa :: a -> (b => (a ⊗ b))
εb :: (a => b) ⊗ a -> b

For instance, let’s assume that f is lax-closed. We want to construct the mapping

(f a, f b) -> f (a ⊗ b)

First, we apply the lifted pair (unit, identity), (f η, f id)

(f a -> f (b => a ⊗ b), f id)

to the left hand side. We get:

(f (b => a ⊗ b), f b)

Now we can use (the uncurried version of) the lax-closed morphism:

(f (b => x), f b) -> f x

to get:

f (a ⊗ b)

Conversely, assuming the lax-monoidal property we can show that the functor is lax-closed, that is to say, implement the following function:

(f (a => b), f a) -> f b

First we use the lax monoidal morphism on the left hand side:

f ((a => b) ⊗ a)

and then use the counit (a.k.a. the evaluation morphism) to get the desired result f b.
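
The same two derivations can be written down in Haskell, using the Monoidal and Closed classes shown earlier (a sketch; as written, the two classes both declare unit, so treat this as pseudocode rather than a compilable module):

-- lax monoidal to lax closed: pair up the function with its argument,
-- then evaluate under the functor
apFromMonoidal :: Monoidal f => f (a -> b) -> f a -> f b
apFromMonoidal ff fa = fmap (\(g, x) -> g x) ((>*<) (ff, fa))

-- lax closed to lax monoidal: lift the pairing function (,) and
-- apply it to both arguments
pairFromClosed :: Closed f => (f a, f b) -> f (a, b)
pairFromClosed (fa, fb) = fmap (,) fa <*> fb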

There is yet another presentation of applicatives using Day convolution. But before we get there, we need a little refresher on calculus.

Calculus of Ends

Ends and coends are very useful constructs generalizing limits and colimits. They are defined through universal constructions. They have a few fundamental properties that are used over and over in categorical calculations. I’ll just introduce the notation and a few important identities. We’ll be working in a symmetric monoidal category C with functors from C to Set and profunctors from Cop×C to Set. The end of a profunctor p is a set denoted by:

∫a p a a

The most important thing about ends is that a set of natural transformations between two functors f and g can be represented as an end:

[C, Set](f, g) = ∫a C(f a, g a)

In Haskell, the end corresponds to universal quantification over a functor of mixed variance. For instance, the natural transformation formula takes the familiar form:

forall a. f a -> g a

The Yoneda lemma, which deals with natural transformations, can also be written using an end:

∫ z (C(a, z) -> f z) ≅ f a

In Haskell, we can write it as the equivalence:

forall z. ((a -> z) -> f z) ≅ f a

which is a generalization of the continuation passing transform.
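
The two directions of this isomorphism have direct Haskell witnesses (using the RankNTypes extension):

toYoneda :: Functor f => f a -> (forall z. (a -> z) -> f z)
toYoneda fa = \k -> fmap k fa

fromYoneda :: (forall z. (a -> z) -> f z) -> f a
fromYoneda g = g id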

The dual notion of coend is similarly written using an integral sign, with the “integration variable” in the superscript position:

∫^a p a a

In pseudo-Haskell, a coend is represented by an existential quantifier. It’s possible to define existential data types in Haskell by converting existential quantification to universal quantification. The relevant identity in terms of coends and ends reads:

(∫ z p z z) -> y ≅ ∫ z (p z z -> y)

In Haskell, this formula is used to turn functions that take existential types into functions that are polymorphic:

(exists z. p z z) -> y ≅ forall z. (p z z -> y)

Intuitively, it makes perfect sense. If you want to define a function that takes an existential type, you have to be prepared to handle any type.
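
A sketch of this conversion in Haskell; the type Exists and the function names are ours:

{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

data Exists p = forall z. Exists (p z z)

-- (exists z. p z z) -> y   is the same as   forall z. (p z z -> y)
toPoly :: (Exists p -> y) -> (forall z. p z z -> y)
toPoly h = h . Exists

fromPoly :: (forall z. p z z -> y) -> (Exists p -> y)
fromPoly k (Exists pzz) = k pzz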

The equivalent of the Yoneda lemma for coends reads:

∫^z f z × C(z, a) ≅ f a

or, in pseudo-Haskell:

exists z. (f z, z -> a) ≅ f a

(The intuition is that the only thing you can do with this pair is to fmap the function over the first component.)
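
This is the coYoneda construction; a sketch in Haskell:

{-# LANGUAGE ExistentialQuantification #-}

data CoYoneda f a = forall z. CoYoneda (f z) (z -> a)

toCoYoneda :: f a -> CoYoneda f a
toCoYoneda fa = CoYoneda fa id

-- the only thing to do is to fmap the function over the first component
fromCoYoneda :: Functor f => CoYoneda f a -> f a
fromCoYoneda (CoYoneda fz k) = fmap k fz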

There is also a contravariant version of this identity:

∫^z C(a, z) × f z ≅ f a

where f is a contravariant functor (a.k.a., a presheaf). In pseudo-Haskell:

exists z. (a -> z, f z) ≅ f a

(The intuition is that the only thing you can do with this pair is to apply the contramap of the first component to the second component.)
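
The same encoding works for the contravariant case, with contramap from Data.Functor.Contravariant doing the work (a sketch; the names are ours):

{-# LANGUAGE ExistentialQuantification #-}

import Data.Functor.Contravariant (Contravariant (..))

data CoYonedaContra f a = forall z. CoYonedaContra (a -> z) (f z)

toCoYonedaContra :: f a -> CoYonedaContra f a
toCoYonedaContra fa = CoYonedaContra id fa

-- apply the contramap of the first component to the second component
fromCoYonedaContra :: Contravariant f => CoYonedaContra f a -> f a
fromCoYonedaContra (CoYonedaContra k fz) = contramap k fz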

Using coends we can define a tensor product in the category of functors [C, Set]. This product is called Day convolution:

(f ★ g) a = ∫^x ∫^y f x × g y × C(x ⊗ y, a)

It is a bifunctor in that category (read: it can be used to lift pairs of natural transformations). It’s associative and symmetric up to isomorphism. It also has a unit: the hom-functor C(i, -), where i is the monoidal unit in C. In other words, Day convolution equips the category [C, Set] with a monoidal structure.
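
In Haskell, Day convolution can be encoded with existential quantification. The following sketch is essentially the Day type from the kan-extensions package, up to currying of the last argument:

{-# LANGUAGE ExistentialQuantification #-}

-- (f ★ g) a: contents of f and g together with a recipe for combining them into an a
data Day f g a = forall x y. Day (f x) (g y) ((x, y) -> a)

instance Functor (Day f g) where
  fmap h (Day fx gy k) = Day fx gy (h . k)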

Let’s verify the unit laws.

(C(i, -) ★ g) a = ∫^x ∫^y C(i, x) × g y × C(x ⊗ y, a)

We can use the contravariant Yoneda to “integrate over x” to get:

∫^y g y × C(i ⊗ y, a)

Considering that i is the unit of the tensor product in C, we get:

∫^y g y × C(y, a)

Covariant Yoneda lets us “integrate over y” to get the desired g a. The same method works for the right unit law.
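
In Haskell, where the unit C(i, -) is (isomorphic to) the Identity functor, the two “integrations” in the left unit law become these coercions (a sketch, reusing the Day type above):

import Data.Functor.Identity (Identity (..))

-- "integrating over x and y": Day Identity g a collapses to g a
elimUnit :: Functor g => Day Identity g a -> g a
elimUnit (Day (Identity x) gy k) = fmap (\y -> k (x, y)) gy

introUnit :: g a -> Day Identity g a
introUnit ga = Day (Identity ()) ga snd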

Applicative as a Monoid

Given a monoidal category, we can always define a monoid as an object m equipped with two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

satisfying the laws of associativity and unitality.

We have shown that the functor category [C, Set] (with C a symmetric monoidal category) is monoidal under Day convolution. An object in this category is a functor f. The two morphisms that would make it a candidate for a monoid are natural transformations:

μ :: f ★ f -> f
η :: C(i, -) -> f

The a component of the natural transformation μ can be rewritten as:

(∫^x ∫^y f x × f y × C(x ⊗ y, a)) -> f a

which is equivalent to:

∫x ∫y (f x × f y × C(x ⊗ y, a) -> f a)

or, upon currying:

∫x ∫y (f x, f y) -> C(x ⊗ y, a) -> f a

It turns out that a monoid defined this way is equivalent to a lax monoidal functor. This was shown by Rivas and Jaskelioff. The following derivation is due to Bob Atkey.

The trick is to start with the whole set of natural transformations from f ★ f to f. The multiplication μ is just one of them. We’ll express the set of natural transformations as an end:

∫a ((f ★ f) a -> f a)

Plugging in the formula for the a component of μ, we get:

∫a ∫x ∫y (f x, f y) -> C(x ⊗ y, a) -> f a

The first argument does not depend on a, so we can move the end over a inside:

∫x ∫y (f x, f y) -> ∫a (C(x ⊗ y, a) -> f a)

Then we use the Yoneda lemma to “perform the integration” over a:

∫x ∫y (f x, f y) -> f (x ⊗ y)

You may recognize this as a set of natural transformations that define a lax monoidal functor. We have established a one-to-one correspondence between these natural transformations and the ones defining monoidal multiplication using Day convolution.
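
This correspondence can be written out directly (a sketch, reusing the illustrative Day and Monoidal definitions from the earlier snippets; RankNTypes is needed):

-- a Day-convolution multiplication gives the lax monoidal product...
toMonoidal :: (forall a. Day f f a -> f a) -> f x -> f y -> f (x, y)
toMonoidal mu fx fy = mu (Day fx fy id)

-- ...and vice versa
fromMonoidal :: Monoidal f => Day f f a -> f a
fromMonoidal (Day fx fy k) = fmap k (fx >*< fy)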

It remains to show the equivalence between the unit with respect to Day convolution and the second part of the definition of a lax monoidal functor, the morphism:

1 -> f i

We start with the set of natural transformations that contains our η:

∫a (C(i, a) -> f a)

By Yoneda, this is just f i. Picking an element from a set is equivalent to defining a morphism from the singleton set 1, so for any choice of η we get:

1 -> f i

and vice versa. The two definitions are equivalent.

Notice that the monoidal unit η under Day convolution becomes the definition of pure in the Haskell version of applicative. Indeed, when we replace the category C with Set, f becomes an endofunctor, and the unit of Day convolution C(i, -) becomes the identity functor Id. We get:

η :: Id -> f

or, in components:

pure :: a -> f a
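
A sketch of the round trip (Identity stands in for the Day unit; the names are ours, and RankNTypes is needed):

import Data.Functor.Identity (Identity (..))

pureFromEta :: (forall a. Identity a -> f a) -> b -> f b
pureFromEta eta = eta . Identity

etaFromPure :: (forall a. a -> f a) -> Identity b -> f b
etaFromPure p (Identity b) = p b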

So, strictly speaking, the Haskell definition of Applicative mixes the elements of a lax closed functor (the operator <*>) with the monoidal unit under Day convolution (pure).

Acknowledgments

I’m grateful to Mauro Jaskelioff and Exequiel Rivas for correspondence and to Bob Atkey, Dimitri Chikhladze, and Mike Shulman for answering my questions on Math Overflow.