This is part 26 of Categories for Programmers. Previously: Algebras for Monads. See the Table of Contents.

There are many intuitions that we may attach to morphisms in a category, but we can all agree that if there is a morphism from the object a to the object b then the two objects are in some way “related.” A morphism is, in a sense, the proof of this relation. This is clearly visible in any poset category, where a morphism is a relation. In general, there may be many “proofs” of the same relation between two objects. These proofs form a set that we call the hom-set. When we vary the objects, we get a mapping from pairs of objects to sets of “proofs.” This mapping is functorial — contravariant in the first argument and covariant in the second. We can look at it as establishing a global relationship between objects in the category. This relationship is described by the hom-functor:

C(-, =) :: C^op × C -> Set

In general, any functor like this may be interpreted as establishing a relation between objects in a category. A relation may also involve two different categories C and D. A functor that describes such a relation is called a profunctor, and it has the following signature:

p :: D^op × C -> Set

Mathematicians say that it’s a profunctor from C to D (notice the inversion), and use a slashed arrow as a symbol for it:

C ↛ D

You may think of a profunctor as a proof-relevant relation between objects of C and objects of D, where the elements of the set symbolize proofs of the relation. Whenever p a b is empty, there is no relation between a and b. Keep in mind that relations don’t have to be symmetric.

Another useful intuition is the generalization of the idea that an endofunctor is a container. A profunctor value of the type p a b could then be considered a container of bs that are keyed by elements of type a. In particular, an element of the hom-profunctor is a function from a to b.

In Haskell, a profunctor is defined as a two-argument type constructor p equipped with the method called dimap, which lifts a pair of functions, the first going in the “wrong” direction:

class Profunctor p where
    dimap :: (c -> a) -> (b -> d) -> p a b -> p c d

The functoriality of the profunctor tells us that if we have a proof that a is related to b, then we get the proof that c is related to d, as long as there is a morphism from c to a and another from b to d. Or, we can think of the first function as translating new keys to the old keys, and the second function as modifying the contents of the container.
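The simplest profunctor is the function arrow itself. Its instance (essentially the one provided by Data.Profunctor) pre-composes with the first function and post-composes with the second:

instance Profunctor (->) where
    dimap ca bd ab = bd . ab . ca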

For profunctors acting within one category, we can extract quite a lot of information from diagonal elements of the type p a a. We can prove that b is related to c as long as we have a pair of morphisms b->a and a->c. Even better, we can use a single morphism to reach off-diagonal values. For instance, if we have a morphism f::a->b, we can lift the pair <f, id_b> to go from p b b to p a b:

dimap f id pbb :: p a b

Or we can lift the pair <id_a, f> to go from p a a to p a b:

dimap id f paa :: p a b

Dinatural Transformations

Since profunctors are functors, we can define natural transformations between them in the standard way. In many cases, though, it’s enough to define the mapping between diagonal elements of two profunctors. Such a transformation is called a dinatural transformation, provided it satisfies the commuting conditions that reflect the two ways we can connect diagonal elements to non-diagonal ones. A dinatural transformation between two profunctors p and q, which are members of the functor category [C^op × C, Set], is a family of morphisms:

α_a :: p a a -> q a a

for which the following diagram commutes for any f :: a -> b; written as an equation, the hexagon reads:

q id_a f ∘ α_a ∘ p f id_a = q f id_b ∘ α_b ∘ p id_b f

Notice that this is strictly weaker than the naturality condition. If α were a natural transformation in [C^op × C, Set], the above diagram could be constructed from two naturality squares and one functoriality condition (profunctor q preserving composition).

Notice that a component of a natural transformation α in [C^op × C, Set] is indexed by a pair of objects, α_(a b). A dinatural transformation, on the other hand, is indexed by one object, since it only maps diagonal elements of the respective profunctors.

Ends

We are now ready to advance from “algebra” to what could be considered the “calculus” of category theory. The calculus of ends (and coends) borrows ideas and even some notation from traditional calculus. In particular, the coend may be understood as an infinite sum or an integral, whereas the end is similar to an infinite product. There is even something that resembles the Dirac delta function.

An end is a generalization of a limit, with the functor replaced by a profunctor. Instead of a cone, we have a wedge. The base of a wedge is formed by diagonal elements of a profunctor p. The apex of the wedge is an object (here, a set, since we are considering Set-valued profunctors), and the sides are a family of functions mapping the apex to the sets in the base. You may think of this family as one polymorphic function — a function that’s polymorphic in its return type:

α :: forall a . apex -> p a a

Unlike in cones, within a wedge we don’t have any functions that would connect vertices of the base. However, as we’ve seen earlier, given any morphism f::a->b in C, we can connect both p a a and p b b to the common set p a b. We therefore insist that the following diagram commute:

This is called the wedge condition. It can be written as:

p id_a f ∘ α_a = p f id_b ∘ α_b

Or, using Haskell notation:

dimap id f . alpha = dimap f id . alpha

We can now proceed with the universal construction and define the end of p as the universal wedge — a set e together with a family of functions π such that for any other wedge with the apex a and a family α there is a unique function h::a->e that makes all triangles commute:

π_a ∘ h = α_a

The symbol for the end is the integral sign, with the “integration variable” in the subscript position:

∫_c p c c

Components of π are called projection maps for the end:

π_a :: ∫_c p c c -> p a a

Note that if C is a discrete category (no morphisms other than the identities) the end is just a global product of all diagonal entries of p across the whole category C. Later I’ll show you that, in the more general case, there is a relationship between the end and this product through an equalizer.

In Haskell, the end formula translates directly to the universal quantifier:

forall a. p a a

Strictly speaking, this is just a product of all diagonal elements of p, but the wedge condition is satisfied automatically due to parametricity (I’ll explain it in a separate blog post). For any function f :: a -> b, the wedge condition reads:

dimap f id . pi = dimap id f . pi

or, with type annotations:

dimap f id_b . pi_b = dimap id_a f . pi_a

where both sides of the equation have the type:

Profunctor p => (forall c. p c c) -> p a b

and pi is the polymorphic projection:

pi :: Profunctor p => forall c. (forall a. p a a) -> p c c
pi e = e

Here, type inference automatically picks the right component of e.
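For a tiny example, take p to be the arrow profunctor from above. The end forall a. a -> a then has, by parametricity, exactly one inhabitant:

-- the end of the hom profunctor; parametricity leaves only the identity
endOfHom :: forall a. a -> a
endOfHom = id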

Just as we were able to express the whole set of commutation conditions for a cone as one natural transformation, likewise we can group all the wedge conditions into one dinatural transformation. For that we need the generalization of the constant functor Δ_c to a constant profunctor that maps all pairs of objects to a single object c, and all pairs of morphisms to the identity morphism for this object. A wedge is a dinatural transformation from that functor to the profunctor p. Indeed, the dinaturality hexagon shrinks down to the wedge diamond when we realize that Δ_c lifts all morphisms to one identity function.

Ends can also be defined for target categories other than Set, but here we’ll only consider Set-valued profunctors and their ends.

Ends as Equalizers

The commutation condition in the definition of the end can be written using an equalizer. First, let’s define two functions (I’m using Haskell notation, because mathematical notation seems to be less user-friendly in this case). These functions correspond to the two converging branches of the wedge condition:

lambda :: Profunctor p => p a a -> (a -> b) -> p a b
lambda paa f = dimap id f paa

rho :: Profunctor p => p b b -> (a -> b) -> p a b
rho pbb f = dimap f id pbb

Both functions map diagonal elements of the profunctor p to polymorphic functions of the type:

type ProdP p = forall a b. (a -> b) -> p a b

These functions have different types. However, we can unify them by forming one big product type that gathers together all diagonal elements of p:

newtype DiaProd p = DiaProd (forall a. p a a)

The functions lambda and rho induce two mappings from this product type:

lambdaP :: Profunctor p => DiaProd p -> ProdP p
lambdaP (DiaProd paa) = lambda paa

rhoP :: Profunctor p => DiaProd p -> ProdP p
rhoP (DiaProd paa) = rho paa

The end of p is the equalizer of these two functions. Remember that the equalizer picks the largest subset on which two functions are equal. In this case it picks the subset of the product of all diagonal elements for which the wedge diagrams commute.

Natural Transformations as Ends

The most important example of an end is the set of natural transformations. A natural transformation between two functors F and G is a family of morphisms picked from hom-sets of the form C(F a, G a). If it weren’t for the naturality condition, the set of natural transformations would be just the product of all these hom-sets. In fact, in Haskell, it is:

forall a. f a -> g a
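
A familiar example is safeHead, a natural transformation from the list functor to Maybe:

safeHead :: forall a. [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x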

The reason it works in Haskell is because naturality follows from parametricity. Outside of Haskell, though, not all diagonal sections across such hom-sets will yield natural transformations. But notice that the mapping:

<a, b> -> C(F a, G b)

is a profunctor, so it makes sense to study its end. Let’s trace both branches of its wedge condition.

Let’s just pick one element from the set ∫_c C(F c, G c). The two projections will map this element to two components of a particular transformation; let’s call them:

τ_a :: F a -> G a
τ_b :: F b -> G b

In the left branch, we lift a pair of morphisms <id_a, G f> using the hom-functor. You may recall that such lifting is implemented as simultaneous pre- and post-composition. When acting on τ_a the lifted pair gives us:

G f ∘ τ_a ∘ id_a

The other branch of the diagram gives us:

id_b ∘ τ_b ∘ F f

Their equality, demanded by the wedge condition, is nothing but the naturality condition for τ.

Coends

As expected, the dual to an end is called a coend. It is constructed from a dual to a wedge called a cowedge (pronounced co-wedge, not cow-edge).

[cartoon: An edgy cow?]

The symbol for a coend is the integral sign with the “integration variable” in the superscript position:

∫^c p c c

Just like the end is related to a product, the coend is related to a coproduct, or a sum (in this respect, it resembles an integral, which is a limit of a sum). Rather than having projections, we have injections going from the diagonal elements of the profunctor down to the coend. If it weren’t for the cowedge conditions, we could say that the coend of the profunctor p is either p a a, or p b b, or p c c, and so on. Or we could say that there exists such an a for which the coend is just the set p a a. The universal quantifier that we used in the definition of the end turns into an existential quantifier for the coend.

This is why, in pseudo-Haskell, we would define the coend as:

exists a. p a a

The standard way of encoding existential quantifiers in Haskell is to use universally quantified data constructors. We can thus define:

data Coend p = forall a. Coend (p a a)

The logic behind this is that it should be possible to construct a coend using a value of any of the family of types p a a, no matter what a we choose.

Just like an end can be defined using an equalizer, a coend can be described using a coequalizer. All the cowedge conditions can be summarized by taking one gigantic coproduct of p a b for all possible functions b->a. In Haskell, that would be expressed as an existential type:

data SumP p = forall a b. SumP (b -> a) (p a b)

There are two ways of evaluating this sum type, by lifting the function using dimap and applying it to the profunctor p:

lambda, rho :: Profunctor p => SumP p -> DiagSum p
lambda (SumP f pab) = DiagSum (dimap f id pab)
rho    (SumP f pab) = DiagSum (dimap id f pab)

where DiagSum is the sum of diagonal elements of p:

data DiagSum p = forall a. DiagSum (p a a)

The coequalizer of these two functions is the coend. A coequalizer is obtained from DiagSum p by identifying values that are obtained by applying lambda or rho to the same argument. Here, the argument is a pair consisting of a function b->a and an element of p a b. The application of lambda and rho produces two potentially different values of the type DiagSum p. In the coend, these two values are identified, making the cowedge condition automatically satisfied.

The process of identification of related elements in a set is formally known as taking a quotient. To define a quotient we need an equivalence relation ~, a relation that is reflexive, symmetric, and transitive:

a ~ a
if a ~ b then b ~ a
if a ~ b and b ~ c then a ~ c

Such a relation splits the set into equivalence classes. Each class consists of elements that are related to each other. We form a quotient set by picking one representative from each class. A classic example is the definition of rational numbers as pairs of whole numbers with the following equivalence relation:

(a, b) ~ (c, d) iff a * d = b * c

It’s easy to check that this is an equivalence relation. A pair (a, b) is interpreted as a fraction a/b, and fractions that differ by a common factor (like 1/2 and 2/4) are identified. A rational number is an equivalence class of such fractions.

You might recall from our earlier discussion of limits and colimits that the hom-functor is continuous, that is, it preserves limits. Dually, the contravariant hom-functor turns colimits into limits. These properties can be generalized to ends and coends, which are a generalization of limits and colimits, respectively. In particular, we get a very useful identity for converting coends to ends:

Set(∫^x p x x, c) ≅ ∫_x Set(p x x, c)

Let’s have a look at it in pseudo-Haskell:

(exists x. p x x) -> c ≅ forall x. p x x -> c

It tells us that a function that takes an existential type is equivalent to a polymorphic function. This makes perfect sense, because such a function must be prepared to handle any one of the types that may be encoded in the existential type. It’s the same principle that tells us that a function that accepts a sum type must be implemented as a case statement, with a tuple of handlers, one for every type present in the sum. Here, the sum type is replaced by a coend, and a family of handlers becomes an end, or a polymorphic function.
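We can write down both directions of this isomorphism in Haskell, reusing the Coend type defined above (a sketch; it requires RankNTypes):

toEnd :: (Coend p -> c) -> (forall x. p x x -> c)
toEnd g pxx = g (Coend pxx)

fromEnd :: (forall x. p x x -> c) -> (Coend p -> c)
fromEnd f (Coend pxx) = f pxx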

Ninja Yoneda Lemma

The set of natural transformations that appears in the Yoneda lemma may be encoded using an end, resulting in the following formulation:

∫_z Set(C(a, z), F z) ≅ F a

There is also a dual formula:

∫^z C(a, z) × F z ≅ F a

This identity is strongly reminiscent of the formula for the Dirac delta function (a function δ(a - z), or rather a distribution, that has an infinite peak at a = z). Here, the hom-functor plays the role of the delta function.

Together these two identities are sometimes called the Ninja Yoneda lemma.
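In Haskell, the end version is the familiar encoding of the Yoneda lemma. For the coend version, note that Haskell Functors are covariant, so the usual encoding flips the hom-set to z -> a. A sketch (requires RankNTypes and ExistentialQuantification):

-- end form: (forall z. (a -> z) -> f z) ≅ f a
fromYo :: (forall z. (a -> z) -> f z) -> f a
fromYo g = g id

toYo :: Functor f => f a -> (forall z. (a -> z) -> f z)
toYo fa = \h -> fmap h fa

-- coend form for a covariant f: (exists z. (z -> a, f z)) ≅ f a
data CoYo f a = forall z. CoYo (z -> a) (f z)

fromCoYo :: Functor f => CoYo f a -> f a
fromCoYo (CoYo h fz) = fmap h fz

toCoYo :: f a -> CoYo f a
toCoYo = CoYo id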

To prove the second formula, we will use a consequence of the Yoneda embedding, which states that two objects are isomorphic if and only if their hom-functors are isomorphic. In other words, a ≅ b if and only if there is a natural transformation of the type:

[C, Set](C(a, -), C(b, =))

that is an isomorphism.

We start by inserting the left-hand side of the identity we want to prove inside a hom-functor that’s going to some arbitrary object c:

Set(∫^z C(a, z) × F z, c)

Using the continuity argument, we can replace the coend with the end:

∫_z Set(C(a, z) × F z, c)

We can now take advantage of the adjunction between the product and the exponential:

∫_z Set(C(a, z), c^(F z))

We can “perform the integration” by using the Yoneda lemma to get:

c^(F a)

This exponential object is isomorphic to the hom-set:

Set(F a, c)

Finally, we take advantage of the Yoneda embedding to arrive at the isomorphism:

∫^z C(a, z) × F z ≅ F a

Profunctor Composition

Let’s explore further the idea that a profunctor describes a relation — more precisely, a proof-relevant relation, meaning that the set p a b represents the set of proofs that a is related to b. If we have two relations p and q we can try to compose them. We’ll say that a is related to b through the composition of q after p if there exists an intermediary object c such that both q b c and p c a are non-empty. The proofs of this new relation are all pairs of proofs of the individual relations. Therefore, with the understanding that the existential quantifier corresponds to a coend, and the cartesian product of two sets corresponds to “pairs of proofs,” we can define composition of profunctors using the following formula:

(q ∘ p) a b = ∫^c p c a × q b c

Here’s the equivalent Haskell definition from Data.Profunctor.Composition, after some renaming:

data Procompose q p a b where
  Procompose :: q a c -> p c b -> Procompose q p a b

This uses the generalized algebraic data type, or GADT, syntax, in which a free type variable (here c) is automatically existentially quantified. The (uncurried) data constructor Procompose is thus equivalent to:

exists c. (q a c, p c b)
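
The composite is itself a profunctor in its outer arguments. Here’s a sketch of the instance (Data.Profunctor.Composition ships an equivalent one):

instance (Profunctor q, Profunctor p) => Profunctor (Procompose q p) where
  dimap l r (Procompose qac pcb) =
    Procompose (dimap l id qac) (dimap id r pcb)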

The unit of the composition so defined is the hom-functor — this follows immediately from the Ninja Yoneda lemma. It makes sense, therefore, to ask whether there is a category in which profunctors serve as morphisms. The answer is positive, with the caveat that both associativity and identity laws for profunctor composition hold only up to natural isomorphism. Such a category, where laws are valid up to isomorphism, is called a bicategory (which is more general than a 2-category). So we have a bicategory Prof, in which objects are categories, morphisms are profunctors, and morphisms between morphisms (a.k.a., two-cells) are natural transformations. In fact, one can go even further, because besides profunctors, we also have regular functors as morphisms between categories. A category which has two types of morphisms is called a double category.

Profunctors play an important role in the Haskell lens library and in the arrow library.

Next: Kan extensions.


This is part 25 of Categories for Programmers. Previously: F-Algebras. See the Table of Contents.

If we interpret endofunctors as ways of defining expressions, algebras let us evaluate them and monads let us form and manipulate them. By combining algebras with monads we not only gain a lot of functionality but we can also answer a few interesting questions. One such question concerns the relation between monads and adjunctions. As we’ve seen, every adjunction defines a monad (and a comonad). The question is: Can every monad (comonad) be derived from an adjunction? The answer is positive. There is a whole family of adjunctions that generate a given monad. I’ll show you two such adjunctions.

Let’s review the definitions. A monad is an endofunctor m equipped with two natural transformations that satisfy some coherence conditions. The components of these transformations at a are:

η_a :: a -> m a
μ_a :: m (m a) -> m a

An algebra for the same endofunctor is a selection of a particular object — the carrier a — together with the morphism:

alg :: m a -> a

The first thing to notice is that the algebra goes in the opposite direction to η_a. The intuition is that η_a creates a trivial expression from a value of type a. The first coherence condition that makes the algebra compatible with the monad ensures that evaluating this expression using the algebra whose carrier is a gives us back the original value:

alg ∘ η_a = id_a

The second condition arises from the fact that there are two ways of evaluating the doubly nested expression m (m a). We can first apply μ_a to flatten the expression, and then use the evaluator of the algebra; or we can apply the lifted evaluator to evaluate the inner expressions, and then apply the evaluator to the result. We’d like the two strategies to be equivalent:

alg ∘ μ_a = alg ∘ m alg

Here, m alg is the morphism resulting from lifting alg using the functor m. The following commuting diagrams describe the two conditions (I replaced m with T in anticipation of what follows):

We can also express these conditions in Haskell:

alg . return = id
alg . join = alg . fmap alg

Let’s look at a small example. An algebra for a list endofunctor consists of some type a and a function that produces an a from a list of a. We can express this function using foldr by choosing both the element type and the accumulator type to be equal to a:

foldr :: (a -> a -> a) -> a -> [a] -> a

This particular algebra is specified by a two-argument function, let’s call it f, and a value z. The list functor happens to also be a monad, with return turning a value into a singleton list. The composition of the algebra, here foldr f z, after return takes x to:

foldr f z [x] = x `f` z

where the action of f is written in the infix notation. The algebra is compatible with the monad if the following coherence condition is satisfied for every x:

x `f` z = x

If we look at f as a binary operator, this condition tells us that z is the right unit.

The second coherence condition operates on a list of lists. The action of join concatenates the individual lists. We can then fold the resulting list. On the other hand, we can first fold the individual lists, and then fold the resulting list. Again, if we interpret f as a binary operator, this condition tells us that this binary operation is associative. These conditions are certainly fulfilled when (a, f, z) is a monoid.
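
For instance, every monoid determines such an algebra. A sketch, using the standard Monoid class:

algM :: Monoid a => [a] -> a
algM = foldr mappend mempty   -- the same as mconcat

-- the coherence conditions, informally:
--   algM [x]          == x                       (algM . return = id)
--   algM (concat xss) == algM (map algM xss)     (algM . join = algM . fmap algM)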

T-algebras

Since mathematicians prefer to call their monads T, they call algebras compatible with them T-algebras. T-algebras for a given monad T in a category C form a category called the Eilenberg-Moore category, often denoted by C^T. Morphisms in that category are homomorphisms of algebras. These are the same homomorphisms we’ve seen defined for F-algebras.

A T-algebra is a pair consisting of a carrier object and an evaluator, (a, f). There is an obvious forgetful functor U^T from C^T to C, which maps (a, f) to a. It also maps a homomorphism of T-algebras to a corresponding morphism between carrier objects in C. You may remember from our discussion of adjunctions that the left adjoint to a forgetful functor is called a free functor.

The left adjoint to U^T is called F^T. It maps an object a in C to a free algebra in C^T. The carrier of this free algebra is T a. Its evaluator is a morphism from T (T a) back to T a. Since T is a monad, we can use the monadic μ_a (Haskell join) as the evaluator.

We still have to show that this is a T-algebra. For that, two coherence conditions must be satisfied:

alg ∘ η_(T a) = id_(T a)
alg ∘ μ_a = alg ∘ T alg

But these are just monadic laws, if you plug in μ for the algebra.
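
In Haskell, the free T-algebra on a is the carrier m a with join as its evaluator. A sketch, reusing the Algebra synonym from the F-algebras post:

import Control.Monad (join)

type Algebra f a = f a -> a

freeAlg :: Monad m => Algebra m (m a)
freeAlg = join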

As you may recall, every adjunction defines a monad. It turns out that the adjunction between F^T and U^T defines the very monad T that was used in the construction of the Eilenberg-Moore category. Since we can perform this construction for every monad, we conclude that every monad can be generated from an adjunction. Later I’ll show you that there is another adjunction that generates the same monad.

Here’s the plan: First I’ll show you that F^T is indeed the left adjoint of U^T. I’ll do it by defining the unit and the counit of this adjunction and proving that the corresponding triangular identities are satisfied. Then I’ll show you that the monad generated by this adjunction is indeed our original monad.

The unit of the adjunction is the natural transformation:

η :: I -> U^T ∘ F^T

Let’s calculate the a component of this transformation. The identity functor gives us a. The free functor produces the free algebra (T a, μ_a), and the forgetful functor reduces it to T a. Altogether we get a mapping from a to T a. We’ll simply use the unit of the monad T as the unit of this adjunction.

Let’s look at the counit:

ε :: F^T ∘ U^T -> I

Let’s calculate its component at some T-algebra (a, f). The forgetful functor forgets the f, and the free functor produces the pair (T a, μ_a). So in order to define the component of the counit ε at (a, f), we need the right morphism in the Eilenberg-Moore category, or a homomorphism of T-algebras:

(T a, μ_a) -> (a, f)

Such a homomorphism should map the carrier T a to a. Let’s just resurrect the forgotten evaluator f. This time we’ll use it as a homomorphism of T-algebras. Indeed, the same commuting diagram that makes f a T-algebra may be re-interpreted to show that it’s a homomorphism of T-algebras.


We have thus defined the component of the counit natural transformation ε at (a, f) (an object in the category of T-algebras) to be f.

To complete the adjunction we also need to show that the unit and the counit satisfy the triangular identities. Written out at the level of components, they are:

μ_a ∘ T η_a = id_(T a)
f ∘ η_a = id_a

The first one holds because of the unit law for the monad T. The second is just the unit law of the T-algebra (a, f).

We have established that the two functors form an adjunction:

F^T ⊣ U^T

Every adjunction gives rise to a monad. The round trip

U^T ∘ F^T

is the endofunctor in C that gives rise to the corresponding monad. Let’s see what its action on an object a is. The free algebra created by F^T is (T a, μ_a). The forgetful functor U^T drops the evaluator. So, indeed, we have:

U^T ∘ F^T = T

As expected, the unit of the adjunction is the unit of the monad T.

You may remember that the counit of the adjunction produces monadic multiplication through the following formula:

μ = R ∘ ε ∘ L

This is a horizontal composition of three natural transformations, two of them being identity natural transformations mapping, respectively, L to L and R to R. The one in the middle, the counit, is a natural transformation whose component at an algebra (a, f) is f.

Let’s calculate the component μ_a. We first horizontally compose ε after F^T, which results in the component of ε at F^T a. Since F^T takes a to the algebra (T a, μ_a), and ε picks the evaluator, we end up with μ_a. Horizontal composition on the left with U^T doesn’t change anything, since the action of U^T on morphisms is trivial. So, indeed, the μ obtained from the adjunction is the same as the μ of the original monad T.

The Kleisli Category

We’ve seen the Kleisli category before. It’s a category constructed from another category C and a monad T. We’ll call this category C_T. The objects in the Kleisli category C_T are the objects of C, but the morphisms are different. A morphism f_K from a to b in the Kleisli category corresponds to a morphism f from a to T b in the original category. We call this morphism a Kleisli arrow from a to b.

Composition of morphisms in the Kleisli category is defined in terms of monadic composition of Kleisli arrows. For instance, let’s compose g_K after f_K. In the Kleisli category we have:

f_K :: a -> b
g_K :: b -> c

which, in the category C, corresponds to:

f :: a -> T b
g :: b -> T c

We define the composition:

h_K = g_K ∘ f_K

as a Kleisli arrow in C:

h :: a -> T c
h = μ ∘ (T g) ∘ f

In Haskell we would write it as:

h = join . fmap g . f
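
Here’s a concrete sketch with the Maybe monad (the two helper functions are made up for illustration):

import Control.Monad (join)

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x >= 0    = Just (sqrt x)
  | otherwise = Nothing

-- the composition of the two Kleisli arrows
h :: Double -> Maybe Double
h = join . fmap safeSqrt . safeRecip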

There is a functor F from C to C_T which acts trivially on objects. On morphisms, it maps f in C to a morphism in C_T by creating a Kleisli arrow that embellishes the return value of f. Given a morphism:

f :: a -> b

it creates a morphism in C_T with the corresponding Kleisli arrow:

η ∘ f

In Haskell we’d write it as:

return . f

We can also define a functor G from C_T back to C. It takes an object a from the Kleisli category and maps it to an object T a in C. Its action on a morphism f_K corresponding to a Kleisli arrow:

f :: a -> T b

is a morphism in C:

T a -> T b

given by first lifting f and then applying μ:

μ_(T b) ∘ T f

In Haskell notation this would read:

G f_K = join . fmap f

You may recognize this as the definition of monadic bind in terms of join.
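
Spelled out as a sketch:

import Control.Monad (join)

bind :: Monad m => m a -> (a -> m b) -> m b
bind ma f = join (fmap f ma)   -- this is (>>=)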

It’s easy to see that the two functors form an adjunction:

F ⊣ G

and their composition G ∘ F reproduces the original monad T.

So this is the second adjunction that produces the same monad. In fact there is a whole category of adjunctions Adj(C, T) that result in the same monad T on C. The Kleisli adjunction we’ve just seen is the initial object in this category, and the Eilenberg-Moore adjunction is the terminal object.

Coalgebras for Comonads

Analogous constructions can be done for any comonad W. We can define a category of coalgebras that are compatible with a comonad. They make the following diagrams commute; written as equations, the conditions read:

ε_a ∘ coa = id_a
W coa ∘ coa = δ_a ∘ coa

where coa is the coevaluation morphism of the coalgebra whose carrier is a:

coa :: a -> W a

and ε and δ are the two natural transformations defining the comonad (in Haskell, their components are called extract and duplicate).

There is an obvious forgetful functor U^W from the category of these coalgebras to C. It just forgets the coevaluation. We’ll consider its right adjoint F^W.

U^W ⊣ F^W

The right adjoint to a forgetful functor is called a cofree functor. F^W generates cofree coalgebras. It assigns, to an object a in C, the coalgebra (W a, δ_a). The adjunction reproduces the original comonad as the composite U^W ∘ F^W.

Similarly, we can construct a co-Kleisli category with co-Kleisli arrows and regenerate the comonad from the corresponding adjunction.

Lenses

Let’s go back to our discussion of lenses. A lens can be written as a coalgebra:

coalg :: a -> Store s a

for the functor Store s:

data Store s a = Store (s -> a) s

This coalgebra can be also expressed as a pair of functions:

set :: a -> s -> a
get :: a -> s

(Think of a as standing for “all,” and s as a “small” part of it.) In terms of this pair, we have:

coalg a = Store (set a) (get a)

Here, a is a value of type a. Notice that partially applied set is a function s->a.

We also know that Store s is a comonad:

instance Comonad (Store s) where
  extract (Store f s) = f s
  duplicate (Store f s) = Store (Store f) s

The question is: Under what conditions is a lens a coalgebra for this comonad? The first coherence condition:

ε_a ∘ coalg = id_a

translates to:

set a (get a) = a

This is the lens law that expresses the fact that if you set a field of the structure a to its previous value, nothing changes.

The second condition:

fmap coalg ∘ coalg = δ_a ∘ coalg

requires a little more work. First, recall the definition of fmap for the Store functor:

fmap g (Store f s) = Store (g . f) s

Applying fmap coalg to the result of coalg gives us:

Store (coalg . set a) (get a)

On the other hand, applying duplicate to the result of coalg produces:

Store (Store (set a)) (get a)

For these two expressions to be equal, the two functions under Store must be equal when acting on an arbitrary s:

coalg (set a s) = Store (set a) s

Expanding coalg, we get:

Store (set (set a s)) (get (set a s)) = Store (set a) s

This is equivalent to the two remaining lens laws. The first one:

set (set a s) = set a

tells us that setting the value of a field twice is the same as setting it once. The second law:

get (set a s) = s

tells us that getting a value of a field that was set to s gives s back.

In other words, a well-behaved lens is indeed a comonad coalgebra for the Store functor.
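
For a concrete example, here’s the lens that focuses on the first component of a pair (the names are mine, for illustration); it’s easy to check all three laws:

getFst :: (s, b) -> s
getFst (s, _) = s

setFst :: (s, b) -> s -> (s, b)
setFst (_, b) s = (s, b)

-- the corresponding comonad coalgebra
lensFst :: (s, b) -> Store s (s, b)
lensFst a = Store (setFst a) (getFst a)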

Challenges

  1. What is the action of the free functor F :: C -> C_T on morphisms? Hint: use the naturality condition for monadic μ.
  2. Define the adjunction:
    U^W ⊣ F^W
  3. Prove that the above adjunction reproduces the original comonad.

Acknowledgment

I’d like to thank Gershom Bazerman for helpful comments.

Next: Ends and Coends.


This is part 24 of Categories for Programmers. Previously: Comonads. See the Table of Contents.

We’ve seen several formulations of a monoid: as a set, as a single-object category, as an object in a monoidal category. How much more juice can we squeeze out of this simple concept?

Let’s try. Take this definition of a monoid as a set m with a pair of functions:

μ :: m × m -> m
η :: 1 -> m

Here, 1 is the terminal object in Set — the singleton set. The first function defines multiplication (it takes a pair of elements and returns their product), the second selects the unit element from m. Not every choice of two functions with these signatures results in a monoid. For that we need to impose additional conditions: associativity and unit laws. But let’s forget about that for a moment and just consider “potential monoids.” A pair of functions is an element of a cartesian product of two sets of functions. We know that these sets may be represented as exponential objects:

μ ∈ m^(m×m)
η ∈ m^1

The cartesian product of these two sets is:

m^(m×m) × m^1

Using some high-school algebra (which works in every cartesian closed category), we can rewrite it as:

m^(m×m + 1)

The plus sign stands for the coproduct in Set. We have just replaced a pair of functions with a single function — an element of the set:

m × m + 1 -> m

Any element of this set of functions is a potential monoid.

The beauty of this formulation is that it leads to interesting generalizations. For instance, how would we describe a group using this language? A group is a monoid with one additional function that assigns the inverse to every element. The latter is a function of the type m->m. As an example, integers form a group with addition as a binary operation, zero as the unit, and negation as the inverse. To define a group we would start with a triple of functions:

m × m -> m
m -> m
1 -> m

As before, we can combine all these triples into one set of functions:

m × m + m + 1 -> m

We started with one binary operator (addition), one unary operator (negation), and one nullary operator (identity — here zero). We combined them into one function. All functions with this signature define potential groups.
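
In Haskell, such a signature corresponds to a sum-type functor (anticipating the F-algebras defined below; the names are mine):

data GroupF a = GMul a a   -- binary operation
              | GInv a     -- unary inverse
              | GUnit      -- nullary unit

-- a potential group is any evaluator for it
type PotentialGroup a = GroupF a -> a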

We can go on like this. For instance, to define a ring, we would add one more binary operator and one nullary operator, and so on. Each time we end up with a function type whose left-hand side is a sum of powers (possibly including the zeroth power — the terminal object), and the right-hand side being the set itself.

Now we can go crazy with generalizations. First of all, we can replace sets with objects and functions with morphisms. We can define n-ary operators as morphisms from n-ary products. It means that we need a category that supports finite products. For nullary operators we require the existence of the terminal object. So we need a cartesian category. In order to combine these operators we need exponentials, so that’s a cartesian closed category. Finally, we need coproducts to complete our algebraic shenanigans.

Alternatively, we can just forget about the way we derived our formulas and concentrate on the final product. The sum of products on the left hand side of our morphism defines an endofunctor. What if we pick an arbitrary endofunctor F instead? In that case we don’t have to impose any constraints on our category. What we obtain is called an F-algebra.

An F-algebra is a triple consisting of an endofunctor F, an object a, and a morphism:

F a -> a

The object is often called the carrier, an underlying object or, in the context of programming, the carrier type. The morphism is often called the evaluation function or the structure map. Think of the functor F as forming expressions and the morphism as evaluating them.

Here’s the Haskell definition of an F-algebra:

type Algebra f a = f a -> a

It identifies the algebra with its evaluation function.

In the monoid example, the functor in question is:

data MonF a = MEmpty | MAppend a a

This is Haskell for 1 + a × a (remember algebraic data structures).

A ring would be defined using the following functor:

data RingF a = RZero
             | ROne
             | RAdd a a 
             | RMul a a
             | RNeg a

which is Haskell for 1 + 1 + a × a + a × a + a.

An example of a ring is the set of integers. We can choose Integer as the carrier type and define the evaluation function as:

evalZ :: Algebra RingF Integer
evalZ RZero      = 0
evalZ ROne       = 1
evalZ (RAdd m n) = m + n
evalZ (RMul m n) = m * n
evalZ (RNeg n)   = -n

There are more F-algebras based on the same functor RingF. For instance, polynomials form a ring and so do square matrices.

As you can see, the role of the functor is to generate expressions that can be evaluated using the evaluator of the algebra. So far we’ve only seen very simple expressions. We are often interested in more elaborate expressions that can be defined using recursion.

Recursion

One way to generate arbitrary expression trees is to replace the variable a inside the functor definition with recursion. For instance, an arbitrary expression in a ring is generated by this tree-like data structure:

data Expr = RZero
          | ROne
          | RAdd Expr Expr 
          | RMul Expr Expr
          | RNeg Expr

We can replace the original ring evaluator with its recursive version:

evalZ :: Expr -> Integer
evalZ RZero        = 0
evalZ ROne         = 1
evalZ (RAdd e1 e2) = evalZ e1 + evalZ e2
evalZ (RMul e1 e2) = evalZ e1 * evalZ e2
evalZ (RNeg e)     = -(evalZ e)

This is still not very practical, since we are forced to represent all integers as sums of ones, but it will do in a pinch.

But how can we describe expression trees using the language of F-algebras? We have to somehow formalize the process of replacing the free type variable in the definition of our functor, recursively, with the result of the replacement. Imagine doing this in steps. First, define a depth-one tree as:

type RingF1 a = RingF (RingF a)

We are filling the holes in the definition of RingF with depth-zero trees generated by RingF a. Depth-2 trees are similarly obtained as:

type RingF2 a = RingF (RingF (RingF a))

which we can also write as:

type RingF2 a = RingF (RingF1 a)

Continuing this process, we can write a symbolic equation:

type RingF_(n+1) a = RingF (RingF_n a)

Conceptually, after repeating this process infinitely many times, we end up with our Expr. Notice that Expr does not depend on a. The starting point of our journey doesn’t matter, we always end up in the same place. This is not always true for an arbitrary endofunctor in an arbitrary category, but in the category Set things are nice.

Of course, this is a hand-waving argument, and I’ll make it more rigorous later.

Applying an endofunctor infinitely many times produces a fixed point, an object defined as:

Fix f = f (Fix f)

The intuition behind this definition is that, since we applied f infinitely many times to get Fix f, applying it one more time doesn’t change anything. In Haskell, the definition of a fixed point is:

newtype Fix f = Fix (f (Fix f))

Arguably, this would be more readable if the constructor’s name were different than the name of the type being defined, as in:

newtype Fix f = In (f (Fix f))

but I’ll stick with the accepted notation. The constructor Fix (or In, if you prefer) can be seen as a function:

Fix :: f (Fix f) -> Fix f

There is also a function that peels off one level of functor application:

unFix :: Fix f -> f (Fix f)
unFix (Fix x) = x

The two functions are the inverse of each other. We’ll use these functions later.

Category of F-Algebras

Here’s the oldest trick in the book: Whenever you come up with a way of constructing some new objects, see if they form a category. Not surprisingly, algebras over a given endofunctor F form a category. Objects in that category are algebras — pairs consisting of a carrier object a and a morphism F a -> a, both from the original category C.

To complete the picture, we have to define morphisms in the category of F-algebras. A morphism must map one algebra (a, f) to another algebra (b, g). We’ll define it as a morphism m that maps the carriers — it goes from a to b in the original category. Not any morphism will do: we want it to be compatible with the two evaluators. (We call such a structure-preserving morphism a homomorphism.) Here’s how you define a homomorphism of F-algebras. First, notice that we can lift m to the mapping:

F m :: F a -> F b

we can then follow it with g to get to b. Equivalently, we can use f to go from F a to a and then follow it with m. We want the two paths to be equal:

g ∘ F m = m ∘ f


It’s easy to convince yourself that this is indeed a category (hint: identity morphisms from C work just fine, and a composition of homomorphisms is a homomorphism).

An initial object in the category of F-algebras, if it exists, is called the initial algebra. Let’s call the carrier of this initial algebra i and its evaluator j :: F i -> i. It turns out that j, the evaluator of the initial algebra, is an isomorphism. This result is known as Lambek’s theorem. The proof relies on the definition of the initial object, which requires that there be a unique homomorphism m from it to any other F-algebra. Since m is a homomorphism, the following diagram must commute:

[diagram]

Now let’s construct an algebra whose carrier is F i. The evaluator of such an algebra must be a morphism from F (F i) to F i. We can easily construct such an evaluator simply by lifting j:

F j :: F (F i) -> F i

Because (i, j) is the initial algebra, there must be a unique homomorphism m from it to (F i, F j). The following diagram must commute:

[diagram]

But we also have this trivially commuting diagram (both paths are the same!):

[diagram]

which can be interpreted as showing that j is a homomorphism of algebras, mapping (F i, F j) to (i, j). We can glue these two diagrams together to get:

[diagram]

This diagram may, in turn, be interpreted as showing that j ∘ m is a homomorphism of algebras. Only in this case the two algebras are the same. Moreover, because (i, j) is initial, there can only be one homomorphism from it to itself, and that’s the identity morphism id_i — which we know is a homomorphism of algebras. Therefore j ∘ m = id_i. Using this fact and the commuting property of the left diagram we can prove that m ∘ j = id_(F i). This shows that m is the inverse of j and therefore j is an isomorphism between F i and i:

F i ≅ i

But that is just saying that i is a fixed point of F. That’s the formal proof behind the original hand-waving argument.

Back to Haskell: We recognize i as our Fix f, j as our constructor Fix, and its inverse as unFix. The isomorphism in Lambek’s theorem tells us that, in order to get the initial algebra, we take the functor f and replace its argument a with Fix f. We also see why the fixed point does not depend on a.

Natural Numbers

Natural numbers can also be defined as an F-algebra. The starting point is the pair of morphisms:

zero :: 1 -> N
succ :: N -> N

The first one picks the zero, and the second one maps all numbers to their successors. As before, we can combine the two into one:

1 + N -> N

The left hand side defines a functor which, in Haskell, can be written like this:

data NatF a = ZeroF | SuccF a

The fixed point of this functor (the initial algebra that it generates) can be encoded in Haskell as:

data Nat = Zero | Succ Nat

A natural number is either zero or a successor of another number. This is known as the Peano representation for natural numbers.

Catamorphisms

Let’s rewrite the initiality condition using Haskell notation. We call the initial algebra Fix f. Its evaluator is the constructor Fix. There is a unique morphism m from the initial algebra to any other algebra over the same functor. Let’s pick an algebra whose carrier is a and the evaluator is alg.

[diagram]
By the way, notice what m is: It’s an evaluator for the fixed point, an evaluator for the whole recursive expression tree. Let’s find a general way of implementing it.

Lambek’s theorem tells us that the constructor Fix is an isomorphism. We called its inverse unFix. We can therefore flip one arrow in this diagram to get:

[diagram]

Let’s write down the commutation condition for this diagram:

m = alg . fmap m . unFix

We can interpret this equation as a recursive definition of m. The recursion is bound to terminate for any finite tree created using the functor f. We can see that by noticing that fmap m operates underneath the top layer of the functor f. In other words, it works on the children of the original tree. The children are always one level shallower than the original tree.

Here’s what happens when we apply m to a tree constructed using Fix f. The action of unFix peels off the constructor, exposing the top level of the tree. We then apply m to all the children of the top node. This produces results of type a. Finally, we combine those results by applying the non-recursive evaluator alg. The key point is that our evaluator alg is a simple non-recursive function.

Since we can do this for any algebra alg, it makes sense to define a higher order function that takes the algebra as a parameter and gives us the function we called m. This higher order function is called a catamorphism:

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

Let’s see an example of that. Take the functor that defines natural numbers:

data NatF a = ZeroF | SuccF a

Let’s pick (Int, Int) as the carrier type and define our algebra as:

fib :: NatF (Int, Int) -> (Int, Int)
fib ZeroF = (1, 1)
fib (SuccF (m, n)) = (n, m + n)

You can easily convince yourself that the catamorphism for this algebra, cata fib, calculates Fibonacci numbers.

In general, an algebra for NatF defines a recurrence relation: the value of the current element in terms of the previous element. A catamorphism then evaluates the n-th element of that sequence.
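
To actually run it, we need a Functor instance for NatF and a way to encode an Int as Fix NatF. A sketch:

instance Functor NatF where
  fmap _ ZeroF     = ZeroF
  fmap f (SuccF x) = SuccF (f x)

toNat :: Int -> Fix NatF
toNat 0 = Fix ZeroF
toNat n = Fix (SuccF (toNat (n - 1)))

fib' :: Int -> Int
fib' = fst . cata fib . toNat   -- fib' 6 == 13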

Folds

A list of e is the initial algebra of the following functor:

data ListF e a = NilF | ConsF e a

Indeed, replacing the variable a with the result of recursion, which we’ll call List e, we get:

data List e = Nil | Cons e (List e)

An algebra for a list functor picks a particular carrier type and defines a function that does pattern matching on the two constructors. Its value for NilF tells us how to evaluate an empty list, and its value for ConsF tells us how to combine the current element with the previously accumulated value.

For instance, here’s an algebra that can be used to calculate the length of a list (the carrier type is Int):

lenAlg :: ListF e Int -> Int
lenAlg (ConsF e n) = n + 1
lenAlg NilF = 0

Indeed, the resulting catamorphism cata lenAlg calculates the length of a list. Notice that the evaluator is a combination of (1) a function that takes a list element and an accumulator and returns a new accumulator, and (2) a starting value, here zero. The type of the value and the type of the accumulator are given by the carrier type.
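
For cata lenAlg to typecheck, ListF e needs a Functor instance, which only touches the recursive slot:

instance Functor (ListF e) where
  fmap _ NilF        = NilF
  fmap f (ConsF e a) = ConsF e (f a)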

Compare this to the traditional Haskell definition:

length = foldr (\e n -> n + 1) 0

The two arguments to foldr are exactly the two components of the algebra.

Let’s try another example:

sumAlg :: ListF Double Double -> Double
sumAlg (ConsF e s) = e + s
sumAlg NilF = 0.0

Again, compare this with:

sum = foldr (\e s -> e + s) 0.0

As you can see, foldr is just a convenient specialization of a catamorphism to lists.

Coalgebras

As usual, we have a dual construction of an F-coalgebra, where the direction of the morphism is reversed:

a -> F a

Coalgebras for a given functor also form a category, with homomorphisms preserving the coalgebraic structure. The terminal object (t, u) in that category is called the terminal (or final) coalgebra. For every other coalgebra (a, f) there is a unique homomorphism m that makes the following diagram commute:

[diagram]

A terminal coalgebra is a fixed point of the functor, in the sense that the morphism u :: t -> F t is an isomorphism (Lambek’s theorem for coalgebras):

F t ≅ t

A terminal coalgebra is usually interpreted in programming as a recipe for generating (possibly infinite) data structures or transition systems.

Just like a catamorphism can be used to evaluate an initial algebra, an anamorphism can be used to coevaluate a terminal coalgebra:

ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg

A canonical example of a coalgebra is based on a functor whose fixed point is an infinite stream of elements of type e. This is the functor:

data StreamF e a = StreamF e a
  deriving Functor

and this is its fixed point:

data Stream e = Stream e (Stream e)

A coalgebra for StreamF e is a function that takes the seed of type a and produces a pair (StreamF is a fancy name for a pair) consisting of an element and the next seed.

You can easily generate simple examples of coalgebras that produce infinite sequences, like the list of squares, or reciprocals.

A more interesting example is a coalgebra that produces a list of primes. The trick is to use an infinite list as a carrier. Our starting seed will be the list [2..]. The next seed will be the tail of this list with all multiples of 2 removed. It’s a list of odd numbers starting with 3. In the next step, we’ll take the tail of this list and remove all multiples of 3, and so on. You might recognize the makings of the sieve of Eratosthenes. This coalgebra is implemented by the following function:

era :: [Int] -> StreamF Int [Int]
era (p : ns) = StreamF p (filter (notdiv p) ns)
    where notdiv p n = n `mod` p /= 0

The anamorphism for this coalgebra generates the list of primes:

primes = ana era [2..]

A stream is an infinite list, so it should be possible to convert it to a Haskell list. To do that, we can use the same functor StreamF to form an algebra, and we can run a catamorphism over it. For instance, this is a catamorphism that converts a stream to a list:

toListC :: Fix (StreamF e) -> [e]
toListC = cata al
   where al :: StreamF e [e] -> [e]
         al (StreamF e a) = e : a
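
Thanks to laziness, we can run this catamorphism over the infinite stream of primes and take as many as we need. A quick check:

firstPrimes :: [Int]
firstPrimes = take 5 (toListC primes)   -- [2,3,5,7,11]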

Here, the same fixed point is simultaneously an initial algebra and a terminal coalgebra for the same endofunctor. It’s not always like this, in an arbitrary category. In general, an endofunctor may have many (or no) fixed points. The initial algebra is the so called least fixed point, and the terminal coalgebra is the greatest fixed point. In Haskell, though, both are defined by the same formula, and they coincide.

The anamorphism for lists is called unfold. To create finite lists, the functor is modified to produce a Maybe pair:

unfoldr :: (b -> Maybe (a, b)) -> b -> [a]

The value of Nothing will terminate the generation of the list.
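
For example, a sketch using Data.List.unfoldr:

import Data.List (unfoldr)

countdown :: Int -> [Int]
countdown = unfoldr (\n -> if n <= 0 then Nothing else Just (n, n - 1))
-- countdown 3 == [3,2,1]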

An interesting case of a coalgebra is related to lenses. A lens can be represented as a pair of a getter and a setter:

set :: a -> s -> a
get :: a -> s

Here, a is usually some product data type with a field of type s. The getter retrieves the value of that field and the setter replaces this field with a new value. These two functions can be combined into one:

a -> (s, s -> a)

We can rewrite this function further as:

a -> Store s a

where we have defined a functor:

data Store s a = Store (s -> a) s

Notice that this is not a simple algebraic functor constructed from sums of products. It involves an exponential a^s.

A lens is a coalgebra for this functor with the carrier type a. We’ve seen before that Store s is also a comonad. It turns out that a well-behaved lens corresponds to a coalgebra that is compatible with the comonad structure. We’ll talk about this in the next section.

Challenges

  1. Implement the evaluation function for a ring of polynomials of one variable. You can represent a polynomial as a list of coefficients in front of powers of x. For instance, 4x^2 - 1 would be represented as (starting with the zero’th power) [-1, 0, 4].
  2. Generalize the previous construction to polynomials of many independent variables, like x^2 y - 3 y^3 z.
  3. Implement the algebra for the ring of 2×2 matrices.
  4. Define a coalgebra whose anamorphism produces a list of squares of natural numbers.
  5. Use unfoldr to generate a list of the first n primes.

Next: Algebras for Monads.


If there is one structure that permeates category theory and, by implication, the whole of mathematics, it’s the monoid. To study the evolution of this concept is to study the power of abstraction and the idea of getting more for less, which is at the core of mathematics. When I say “evolution” I don’t necessarily mean chronological development. I’m looking at a monoid as if it were a life form evolving through various eons of abstraction.

It’s an ambitious project and I’ll have to cover a lot of material. I’ll start slowly, with the definitions of magmas and monoids, but then I will accelerate. A lot of concepts will be introduced in one or two sentences, mainly to familiarize the reader with the notation. I’ll dwell a little on monoidal categories, then breeze through ends, coends, and profunctors. I’ll show you how monads, arrows, and applicative functors arise from monoids in various monoidal categories.


The Magmas of the Hadean Eon

Monoids evolved from more primitive life forms feeding on sets. So, before even touching upon monoids, let’s talk about cartesian products, relations, and functions. You take two sets a and b (or, in the simplest case, two copies of the same set a) and form pairs of elements. That gives you a set of pairs, a.k.a., the cartesian product a×b. Any subset of such a cartesian product is called a relation. Two elements x and y are in a relation if the pair <x, y> is a member of that subset.

A function from a to b is a special kind of relation, in which every element x in the set a has one and only one element y in the set b that’s related to it. (Sometimes this is called a total function, since it’s defined for all elements of a).

Even before there were monoids, there was magma. A magma is a set with a binary operation and nothing else. So, in particular, there is no assumption of associativity, and there is no unit. A binary operation is simply a function from the cartesian product of a with itself back to a:

a × a -> a

It takes a pair of elements <x, y>, both coming from the set a, and maps it to an element of a.

It’s tempting to quote the Haskell definition of a magma:

class Magma a where
  (<>) :: a -> a -> a

but this definition is already tainted with some higher concepts like currying. An alternative would be:

class Magma a where
  (<>) :: (a, a) -> a

Here, we at least see a pair of elements that are being “multiplied.” But the pair type (a, a) is also a higher-level concept. I’ll come back to it later.

Lack of associativity means that we cannot identify (x<>y)<>z with x<>(y<>z). You have to keep the parentheses.

You might have heard of quaternions — their multiplication is associative. But not many people have heard of octonions, which are not associative. In fact Hamilton, who discovered quaternions, invented the word associative to disassociate himself from octonions.

If you’re familiar with continuous groups, you might know that Lie algebras are not associative.

Closer to home — most operations on floating-point numbers are not associative on modern computers because of rounding errors.

But, really, most interesting binary operations are associative. So out of the magma emerges a semigroup. In a semigroup you can drop parentheses. A non-trivial (that is, non-monoidal) example of a semigroup is the set of integers with the max binary operation. A maximum of three numbers is the same no matter in which order you pair them. But there is no integer that’s less than or equal to every other integer, so this is not a monoid.

Monoids of the Archean Eon

But, really, most interesting binary operations are both associative and unital. There usually is a “do nothing” element with respect to most binary operations. So life as we know it begins with a monoid.

A monoid is a set with a binary operation that is associative, and with a special element called the unit e that is neutral with respect to the binary operation. To be precise, these are the three monoid laws:

(x <> y) <> z = x <> (y <> z)
e <> x = x
x <> e = x

In Haskell, the traditional definition of a monoid uses mempty for the unit and mappend for the binary operation:

class Monoid a where
    mempty  :: a
    mappend :: a -> a -> a

As with the magma, the definition of mappend is curried. Equivalently, it could have been written as:

mappend :: (a, a) -> a

I’ll come back to this point later.

There are plenty of examples of monoids. Non-negative integers with addition, or positive integers with multiplication are the obvious ones. Strings with concatenation are interesting too, because concatenation is not commutative.
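
Using the class as defined above, lists form the monoid of concatenation (the standard library provides this instance, nowadays routed through Semigroup):

instance Monoid [a] where
    mempty  = []
    mappend = (++)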

Just like pairs of elements from two sets a and b organize themselves into a set a×b, which is their cartesian product; functions between two sets organize themselves into a set — the set of functions from a to b, which we sometimes write as a->b.

This organizing principle is characteristic of sets, where everything you can think of is a set. Except when it’s more than just a set — for instance when you try to organize all sets into one large collection. This collection, or “class,” is not itself a set. You can’t have a set of all sets, but you can have a category Set of “small” sets, which are sets that belong to a “universe.” In what follows, I will confine myself to a single universe in order to dodge questions from foundational mathematicians.

Let’s now pop one level up and look at cartesian product as an operation on sets. For any two sets a and b, we can construct the set a×b. If we view this as “multiplication” of sets, we can say that sets form a magma. But do they form a monoid? Not exactly! To begin with, cartesian product is not associative. We can see it in Haskell: the type ((a, b), c) is not the same as the type (a, (b, c)). They are, however, isomorphic. There is an invertible function called the associator, from one type to the other:

alpha :: ((a, b), c) -> (a, (b, c))
alpha ((x, y), z) = (x, (y, z))

It’s just a repackaging of containers (such repackaging is, by the way, called a natural transformation).
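
Since alpha is invertible, we can also write its inverse; composing the two, in either order, gives the identity:

alpha_inv :: (a, (b, c)) -> ((a, b), c)
alpha_inv (x, (y, z)) = ((x, y), z)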

For the unit of this "multiplication" we can pick the singleton set. In Haskell, this is the unit type, denoted by an empty pair of parentheses (). Again, the unit laws are valid only up to isomorphism. There are two such isomorphisms, called the left and right unitors:

lambda :: ((), a) -> a
lambda ((), x) = x
rho :: (a, ()) -> a
rho (x, ()) = x

We have just exposed the monoidal structure of the category Set. Set is not strictly a monoid, because the monoidal laws are satisfied only up to isomorphism.

There is another monoidal structure in Set. Just like the cartesian product resembles multiplication, there is an operation on sets that resembles addition. It's called the disjoint sum. In Haskell it's embodied in the type Either a b. Just like the cartesian product, the disjoint sum is associative up to isomorphism. The unit (or the "zero") of this sum type is the empty set or, in Haskell, the Void type — also up to isomorphism.
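
The associator and the left unitor for the sum can be sketched in Haskell as well, with absurd from Data.Void witnessing that Void has no elements:

import Data.Void (Void, absurd)

alphaSum :: Either (Either a b) c -> Either a (Either b c)
alphaSum (Left (Left x))  = Left x
alphaSum (Left (Right y)) = Right (Left y)
alphaSum (Right z)        = Right (Right z)

-- Void is the unit: Either Void a is isomorphic to a
lambdaSum :: Either Void a -> a
lambdaSum (Left v)  = absurd v
lambdaSum (Right x) = x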

The Cambrian Explosion of Categories

The first rule of abstraction is, You do not talk about Fight Club. In the category Set, for instance, we are not supposed to admit that sets have elements. An object in Set is really a set, but you never talk about its elements. We still have functions between sets, but they become abstract morphisms, of which we only know how they compose.

Composition of functions is associative, and there is an identity function for every set, which serves as a unit of composition. We can write these rules compactly as:

(f ∘ g) ∘ h = f ∘ (g ∘ h)
id ∘ f = f
f ∘ id = f

These look exactly like monoid laws. So do functions form a monoid with respect to composition? Not quite, because you can’t compose any two functions. They must be composable, which means their endpoints have to match. In Haskell, we can compose g after f, or g ∘ f, only if:

f :: a -> b
g :: b -> c

Also, there is no single identity function, but a whole family of functions id_a, one for each set a. In Haskell, this family is represented by a single polymorphic function, id :: a -> a.
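
This bookkeeping, typed composition plus a family of identities, is captured in Haskell by the Category class; here's a sketch of the class as it appears in Control.Category (the real module hides the clashing Prelude names):

class Category cat where
  id  :: cat a a                        -- the family of identity morphisms
  (.) :: cat b c -> cat a b -> cat a c  -- composition of matching endpoints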

But notice what happens if we restrict ourselves to just a single object a in Set. Every morphism from a back to a can be composed with any other such morphism (their endpoints always match). Moreover, we are guaranteed that among those so-called endomorphisms there is one identity morphism id_a, which acts as a unit of composition.

Notice that I switched from the set/function nomenclature to the more general object/morphism naming convention of category theory. We can now forget about sets and functions and define an arbitrary category as a collection (a set in a given universe) of objects, and sets of morphisms that go between them. The only requirements are that any two composable morphisms compose, and that there is an identity morphism for every object. And that composition must be associative.

We can now forget about sets and define a monoid as a category that has only one object. The binary operation is just the composition of (endo-)morphisms. It works! We have defined a monoid without a set. Or have we?

No, we haven’t! We have just swept it under the rug — the rug being the set of morphisms. Yes, morphisms between any two objects form a set called the hom-set. In a category C, the hom-set between objects a and b is denoted by C(a, b). So we haven’t completely eliminated sets from the picture.

In the single-object category M, we have only one hom-set M(a, a). The elements of this set — and we are allowed to call them elements because it's a set — are morphisms like f and g. We can compose them, and we can call this composition "multiplication," thus recovering our previous definition of a monoid as a set. We get associativity for free, and we have the identity morphism id_a serving as the unit.
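
In Haskell, this monoid of endomorphisms of a fixed type a is available as Endo in Data.Monoid; a sketch of its definition:

newtype Endo a = Endo { appEndo :: a -> a }

instance Semigroup (Endo a) where
  Endo f <> Endo g = Endo (f . g)  -- "multiplication" is composition

instance Monoid (Endo a) where
  mempty = Endo id                 -- the identity morphism is the unit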

It might seem at first that we haven’t made progress and, in fact, we might have made some things more complicated by forgetting the internal structure of objects. For instance, in the category Set, it’s no longer obvious what an empty set is. You can’t say it’s a set with no elements because of the Fight Club rule. Similarly with the singleton set. Fortunately, it turns out that both these sets can be uniquely described in terms of their interactions with other sets. By that I mean the kind of functions/morphisms that connect them to other objects in Set. These object-opaque definitions are called universal constructions. For instance, the empty set is the only set that has a unique morphism going from it to every other set. The advantage of this characterization is that it can now be applied to any category. One may ask this question in any category: Is there an object that has this property? If there is, we call it the initial object. The empty set is the initial object in Set. Similarly, a singleton set is the terminal object in Set (and it’s unique up to unique isomorphism).

A cartesian product of two sets can also be defined using a universal construction, one which doesn’t mention elements (or pairs of elements). And again, this construction may be used to define a (categorical) product in other categories. Of particular interest are categories where a product exists for every pair of objects (it does in Set).

In such categories there is actually an even better way of defining a product, using an adjunction. But before we can get to adjunctions, let me summarize a few million years of evolution in a few terse paragraphs.

A functor is a mapping of categories that preserves their structure. It maps objects to objects and morphisms to morphisms. In Haskell we define a functor (really, an endofunctor) as a type constructor f (a mapping from types to types) that can be lifted to functions that go between these types:

class Functor f where
  fmap :: (a -> b) -> (f a -> f b)

The mapping of morphisms must also preserve composition and identity. Functors may collapse multiple objects into one, and multiple morphisms into one, but they never break connections. You may also think of functors as embedding one category inside another.

Finally, functors can be composed in the obvious way, and there is an identity endofunctor that maps a category onto itself. It follows that categories (at least the small ones) form a category Cat in which functors serve as morphisms.

There may be many ways of embedding one category inside another, and it's extremely useful to be able to compare such embeddings by defining mappings between them. If we have two functors F and G between two categories C and D, we define a natural transformation between these functors by picking, for every object a, a morphism between F a and G a.

In Haskell, a natural transformation between two functors f and g is a polymorphic function:

type Nat f g = forall a. f a -> g a

In general, natural transformations must obey additional naturality conditions, but in Haskell they come for free (due to parametricity).
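
A concrete example: safeHead is a natural transformation from the list functor to Maybe, and its naturality condition, fmap f . safeHead = safeHead . fmap f, holds automatically:

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x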

Natural transformations may be composed, and there is an identity natural transformation from any functor to itself. It follows that functors between any two categories C and D form a category denoted by [C, D], where natural transformations play the role of morphisms. A hom-set in such a category is a set of natural transformations between two functors F and G, denoted by [C, D](F, G).

An invertible natural transformation is called a natural isomorphism. If two functors are naturally isomorphic they are essentially the same.

Arthropods and their Adjoints

Using a pair of functors that are inverses of each other, we may define equivalence of categories. But there is an even more useful concept, that of adjoint functors, which compare the structures of two non-equivalent categories. The idea is that we have a "right" functor R going from category C to D and a "left" functor L going in the other direction, from D to C.

[diagram: an adjunction between categories C and D, with the functors L and R going in opposite directions]

There are two possible compositions of these functors, both resulting in round trips or endofunctors. The categories would be equivalent if those endofunctors were naturally isomorphic to identity endofunctors. But for an adjunction, we impose weaker conditions. We require that there be two natural transformations (not necessarily isomorphisms):

η :: I_D -> R ∘ L
ε :: L ∘ R -> I_C

The first transformation η is called the unit; and the second ε, the counit of the adjunction.

In a small category, objects form sets, so it's possible to form the cartesian product of two small categories C and D. Objects in such a product category C×D are pairs of objects <c, d>, and morphisms are pairs of morphisms <f, g>.

After these preliminaries, we are ready to define the categorical product in C using an adjunction. We choose C×C as the left category. The left functor is the diagonal functor Δ, which maps any object c to the pair <c, c> and any morphism f to the pair of morphisms <f, f>. Its right adjoint, if it exists, maps a pair of objects <a, b> to their categorical product a×b.

[diagram: the product as the right adjoint of the diagonal functor Δ]
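
In Set (and in Haskell) this adjunction says that a pair of functions out of c is the same thing as a single function into the product; a sketch of the two directions of the isomorphism:

fork :: (c -> a, c -> b) -> (c -> (a, b))
fork (f, g) = \c -> (f c, g c)

unfork :: (c -> (a, b)) -> (c -> a, c -> b)
unfork h = (fst . h, snd . h)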

Interestingly, the terminal object can also be defined using an adjunction. This time we choose, as the left category, a singleton category with one object and one (identity) morphism. The left functor maps any object c to the singleton object. Its right adjoint, if it exists, maps the singleton object to the terminal object in C.

A category with all products and the terminal object is called a cartesian category, or cartesian monoidal category. Why monoidal? Because the operation of taking the categorical product is monoidal. It’s associative, up to isomorphism; and its unit is the terminal object.

Incidentally, this is the same monoidal structure that we’ve seen in Set, but now it’s generalized to the level of other categories. There was another monoidal structure in Set induced by the disjoint sum. Its categorical generalization is given by the coproduct, with the initial object playing the role of the unit.

But what about the set of morphisms? In Set, morphisms between two sets a and b form a hom-set, which is an object of the same category Set. In an arbitrary category C, a hom-set C(a, b) is still a set — but now it's not an object of C. That's why it's called the external hom-set. However, there are categories in which each external hom-set has a corresponding object called the internal hom. This object is also called an exponential, b^a. It can be defined using an adjunction, but only if the category supports products. It's an adjunction in which the left and right categories are the same. The left endofunctor takes an object b and maps it to a product b×a, where a is an arbitrary fixed object. Its right adjoint maps an object b to the exponential b^a. The counit of this adjunction:

ε :: b^a × a -> b

is the evaluation function. In Haskell it has the following signature:

eval :: (a -> b, a) -> b

The Haskell function type a->b is equivalent to the exponential b^a.
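
In Haskell, eval is just function application, and the defining adjunction is witnessed by (un)currying; a sketch, with primed names to avoid clashing with the Prelude:

eval :: (a -> b, a) -> b
eval (f, x) = f x

curry' :: ((z, a) -> b) -> (z -> (a -> b))
curry' f = \z a -> f (z, a)

uncurry' :: (z -> (a -> b)) -> ((z, a) -> b)
uncurry' g = \(z, a) -> g z a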

A category that has all products and exponentials together with the terminal object is called cartesian closed. Cartesian closed categories, or CCCs, play an important role in the semantics of programming languages.

Tensorosaurus Rex

We have already seen two very similar monoidal structures induced by products and coproducts. In mathematics, two is a crowd, so let’s look for a pattern. Both product and coproduct act as bifunctors C×C->C. Let’s call such a bifunctor a tensor product and write it as an infix operator a ⊗ b. As a bifunctor, the tensor product can also lift pairs of morphisms:

f :: a -> a'
g :: b -> b'
f ⊗ g :: a ⊗ b -> a' ⊗ b'

To define a monoid on top of a tensor product, we will require that it be associative — up to isomorphism:

α :: (a ⊗ b) ⊗ c -> a ⊗ (b ⊗ c)

We also need a unit object, which we will call i. The two unit laws are:

λ :: i ⊗ a -> a
ρ :: a ⊗ i -> a

A category with a tensor product that satisfies the above properties, plus some additional coherence conditions, is called a monoidal category.

We can now specialize the tensor product to the categorical product, in which case the unit object is the terminal object; or to the coproduct, in which case we choose the initial object as the unit. But there is an even more interesting operation that has all the properties of the tensor product. I'm talking about functor composition.

Functorosaurus

Functors between any two categories C and D form a functor category [C, D] with natural transformations playing the role of morphisms. In general, these functors don’t compose (their endpoints don’t match) unless we pick the target category to be the same as the source category.

Endofunctor Composition

In the endofunctor category [C, C] any two functors can be composed. But in [C, C] functors are objects, so functor composition becomes an operation on objects. For any two endofunctors F and G it produces a new endofunctor F∘G. It's a binary operation, so it's a potential candidate for a tensor product. Indeed, it is a bifunctor: it can be lifted to natural transformations, which are morphisms in [C, C]. It's associative — in fact, it's strictly associative: the associator α is the identity natural transformation. The unit with respect to endofunctor composition is the identity functor I. So the category of endofunctors is a monoidal category.

Unlike product and coproduct, which are symmetric up to isomorphism, endofunctor composition is not symmetric. In general, there is no relation between F∘G and G∘F.

Profunctor Composition

Different species of functors came up with their own composition strategies. Take for instance the profunctors, which are functors Cop×D->Set. They generalize the idea of relations between objects in C and D. The sets they map to may be thought of as sets of proofs of the relationship. An empty set means that the two objects are not related. If you want to compose two relations, you have to find an element that’s common to the image of one relation and the source of the other (relations are not, in general, symmetric). The proofs of the new composite relation are pairs of proofs of individual relations. Symbolically, if p and q are such profunctors/relations, their composition can be written as:

exists x. (p a x, q x b)

Existential quantification in Haskell translates to polymorphic construction, so the actual definition is:

{-# LANGUAGE ExistentialQuantification #-}

data PCompose p q a b = forall x . PCompose (p a x) (q x b)
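
Composition so defined is itself a profunctor: the contravariant argument threads through the first component and the covariant one through the second. A sketch, using the Profunctor class (as in Data.Profunctor):

instance (Profunctor p, Profunctor q) => Profunctor (PCompose p q) where
  dimap f g (PCompose pax qxb) = PCompose (dimap f id pax) (dimap id g qxb)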

In category theory, existential quantification is encoded as the coend, which is a generalization of a colimit for profunctors. The coend formula for the composition of two profunctors reads:

(p ⊗ q) a b = ∫^z p a z × q z b

The product here is the cartesian product of sets.

Profunctors, being functors, form a category in which morphisms are natural transformations. As long as the two categories that they relate are the same, any two profunctors can be composed using a coend. So profunctor composition is a good candidate for a tensor product in such a category. It is indeed associative, up to isomorphism. But what's the unit of profunctor composition? It turns out that the simplest profunctor — the hom-functor — is the unit of composition, by virtue of the Yoneda lemma:

∫^z C(a, z) × p z b ≅ p a b
∫^z p a z × C(z, b) ≅ p a b

Thus profunctors Cop×C->Set form a monoidal category.

Day Convolution

Or consider Set-valued functors. They can be composed using Day convolution. For that, the category C must itself be monoidal. Day convolution of two functors C->Set is defined using a coend:

(f ★ g) a = ∫^{x y} f x × g y × C(x ⊗ y, a)

Here, the tensor product x ⊗ y comes from the monoidal category C; the other products are just cartesian products of sets (one of the factors being a hom-set).

As before, in Haskell the coend turns into an existential quantifier, which can be written symbolically:

Day f g a = exists x y. ((f x, g y), (x, y) -> a)

and encoded as a polymorphic constructor:

data Day f g a = forall x y. Day (f x) (g y) ((x, y) -> a)

We use the fact that the category of Haskell types is monoidal with respect to cartesian product.
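
Notice that Day f g is a functor in a regardless of whether f and g are functors; only the function component is touched. A sketch of the instance:

instance Functor (Day f g) where
  fmap h (Day fx gy k) = Day fx gy (h . k)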

We can build a monoidal category based on Day convolution. The unit with respect to Day convolution is C(i, -), the hom-functor applied to i — the unit in the monoidal category C. For instance, the left identity can be derived from:

(C(i, -) ★ g) a = ∫^{x y} C(i, x) × g y × C(x ⊗ y, a)

Applying the Yoneda lemma, or “integrating over x,” we get:

∫^y g y × C(i ⊗ y, a)

Considering that i is the unit of the tensor product, we can perform the second integration to get g a.

The Monozoic Era

Monoidal categories are important because they provide rich grazing grounds for monoids. In a monoidal category we can define a more general monoid. It’s an object m with some special properties. These properties replace the usual definitions of multiplication and unit.

First, let’s reformulate the definition of a set-based monoid, taking into account the fact that Set is a monoidal category with respect to cartesian product.

A monoid is a set, so it’s an object in Set — let’s call it m. Multiplication maps pairs of elements of m back to m. These pairs are just elements of the cartesian product m × m. So multiplication is defined as a function:

μ :: m × m -> m

The unit of multiplication is a special element of m. We can select this element by providing a special morphism from the singleton set to m:

η :: () -> m

We can now express associativity and unit laws as properties of these two functions. The beauty of this formulation is that it generalizes easily to any cartesian category — just replace functions with morphisms and the unit () with the terminal object. There’s no reason to stop there: we can lift this definition all the way up to a monoidal category.
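
For instance, integers under addition fit this mold; a sketch:

mu :: (Int, Int) -> Int
mu (x, y) = x + y

eta :: () -> Int
eta () = 0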

A monoid in a monoidal category is an object m together with two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

Here i is the unit object with respect to the tensor product ⊗. Monoidal laws can be expressed using the associator α and the two unitors, λ and ρ, of the monoidal category:

[diagram: the associativity law for a monoid in a monoidal category, expressed using the associator α]

[diagram: the unit laws for a monoid in a monoidal category, expressed using the unitors λ and ρ]

Having previously defined several interesting monoidal categories, we can now go digging for new monoids.

Monads

Let's start with the category of endofunctors, where the tensor product is functor composition. A monoid in the category of endofunctors is an endofunctor m and two morphisms. Remember that morphisms in a functor category are natural transformations. So we end up with two natural transformations:

μ :: m ∘ m -> m
η :: I -> m

where I is the identity functor. Their components at an object a are:

μ_a :: m (m a) -> m a
η_a :: a -> m a

This construct is easily recognizable as a monad. The associativity and unit laws are just monad laws. In Haskell, μa is called join and ηa is called return.
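
We can transcribe this into a sketch of a monad-like class built from join and return, with bind recovered from them (primed names avoid clashing with the Prelude):

class Functor m => Monad' m where
  join'   :: m (m a) -> m a  -- component of μ
  return' :: a -> m a        -- component of η

bind :: Monad' m => m a -> (a -> m b) -> m b
bind ma f = join' (fmap f ma)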

Arrows

Let’s switch to the category of profunctors Cop×C->Set with profunctor composition as the tensor product. A monoid in that category is a profunctor ar. Multiplication is defined by a natural transformation:

μ :: ar ⊗ ar -> ar

Its component at a, b is:

μ_{a b} :: (∫^z ar a z × ar z b) -> ar a b

To simplify this formula, we need a very useful identity that relates coends to ends. A hom-set that starts at a coend is equivalent to an end of hom-sets:

C(∫^z p z z, y) ≅ ∫_z C(p z z, y)

Or, replacing external hom-sets with internal homs:

(∫^z p z z) -> y ≅ ∫_z (p z z -> y)

In Haskell, this formula is used to turn functions that take existential types to functions that are polymorphic:

(exists z. p z z) -> y ≅ forall z. (p z z -> y)

Intuitively, it makes perfect sense. If you want to define a function that takes an existential type, you have to be prepared to handle any type.

Using that identity, our multiplication formula can be rewritten as:

μ_{a b} :: ∫_z ((ar a z × ar z b) -> ar a b)

In Haskell, this derivation uses the existential quantifier:

mu a b :: (exists z. (ar a z, ar z b)) -> ar a b

As we discussed, a function from an existential type is equivalent to a polymorphic function:

forall z. (ar a z, ar z b) -> ar a b

or, after currying and dropping the redundant quantification:

ar a z -> ar z b -> ar a b

This looks very much like a composition of morphisms in a category. In Haskell, this function is known in the infix-operator form as:

(>>>) :: ar a z -> ar z b -> ar a b

Let's see what we get as the monoidal unit. Remember that the unit object in the profunctor category is the hom-functor. The component of η at <a, b> is:

η_{a b} :: C(a, b) -> ar a b

In Haskell, this polymorphic function is traditionally called arr:

arr :: (a -> b) -> ar a b

The whole construct is known in Haskell as a pre-arrow. The full arrow is defined as a monoid in the category of strong profunctors, with strength defined as a natural transformation:

st_{a b} :: p a b -> p (a, x) (b, x)

In Haskell, this function is called first.
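
Putting the pieces together, here's a sketch of the pre-arrow and arrow interfaces (hypothetical class names; the actual Control.Arrow class is organized somewhat differently), together with the instance for plain functions:

class PreArrow ar where
  arr   :: (a -> b) -> ar a b          -- the monoidal unit
  (>>>) :: ar a z -> ar z b -> ar a b  -- the monoidal multiplication

class PreArrow ar => Arrow' ar where
  first :: ar a b -> ar (a, x) (b, x)  -- strength

instance PreArrow (->) where
  arr f   = f
  f >>> g = g . f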

Applicatives

There are several categorical formulations of what's called in Haskell the applicative functor. To a first approximation, Haskell's type system is the category Set. To translate Haskell constructs to category theory, the safest approach is to just play with endofunctors in Set. But both Set and its endofunctors have a lot of extra structure, so I'd like to start in a slightly more general setting.

Let’s have a look at the monoidal category of functors [C, Set], with Day convolution as the tensor product, and C(i, -) as unit. A monoid in this category is a functor f with multiplication given by the natural transformation:

μ :: f ★ f -> f

and unit given by:

η :: C(i, -) -> f

It turns out that the existence of these two natural transformations is equivalent to the requirement that f be a lax monoidal functor, which is the basis of the definition of the applicative functor in Haskell.

A monoidal functor is a functor that maps the monoidal structure of one category to the monoidal structure of another category. It maps the tensor product, and it maps the unit object. In our case, the source category C has the monoidal structure given by the tensor product ⊗, and the target category Set is monoidal with respect to the cartesian product ×. A functor is monoidal if it doesn't matter whether we first map two objects and then multiply them, or first multiply them and then map the result:

f x × f y ≅ f (x ⊗ y)

Also, the unit object in Set should be isomorphic to the result of mapping the unit object in C:

() ≅ f i

Here, () is the terminal object in Set and i is the unit object in C.

These conditions are relaxed in the definition of a lax monoidal functor. A lax monoidal functor replaces isomorphisms with regular unidirectional morphisms:

f x × f y -> f (x ⊗ y)
() -> f i

It can be shown that a monoid in the category [C, Set], with Day convolution as the tensor product, is equivalent to a lax monoidal functor.

The Haskell definition of Applicative doesn’t look like Day convolution or like a lax monoidal functor:

class Functor f => Applicative f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    pure :: a -> f a

You may recognize pure as a component of η, the natural transformation defining the monoid with respect to Day convolution. When you replace the category C with Set, the unit object C(i, -) turns into the identity functor. However, the operator <*> is lifted from the definition of yet another lax functor, the lax closed functor. It’s a functor that preserves the closed structure defined by the internal hom functor. In Set, the internal hom functor is just the arrow (->), hence the definition:

class Functor f => Closed f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    unit :: f ()

As long as the internal hom is defined through the adjunction with the product, a lax closed functor is equivalent to a lax monoidal functor.

Conclusion

It is pretty shocking to realize how many different animals share the same body plan — I’m talking here about the monoid as the skeleton of a myriad of different mathematical and programming constructs. And I haven’t even touched on the whole kingdom of enriched categories, where monoidal categories form the reservoir of hom-objects. Virtually all notions I’ve discussed here can be generalized to enriched categories, including functors, profunctors, the Yoneda lemma, Day convolution, and so on.

Glossary

  • Hadean Eon: Began with the formation of the Earth about 4.6 billion years ago. It’s the period before the earliest-known rocks.
  • Archean Eon: During the Archean, the Earth’s crust had cooled enough to allow the formation of continents.
  • Cambrian explosion: Relatively short evolutionary event, during which most major animal phyla appeared.
  • Arthropods: from Greek ἄρθρωσις árthrosis, “joint”
  • Tensor: from Latin tendere, “to stretch”
  • Functor: from Latin fungi, “perform”

Bibliography

  1. Moggi, Notions of Computation and Monads.
  2. Rivas, Jaskelioff, Notions of Computation as Monoids.

Unlike monads, which came into programming straight from category theory, applicative functors have their origins in programming. McBride and Paterson introduced applicative functors as a programming pearl in their paper Applicative programming with effects. They also provided a categorical interpretation of applicatives in terms of strong lax monoidal functors. It's been accepted that, just as “a monad is a monoid in the category of endofunctors,” so “an applicative is a strong lax monoidal functor.”

The so-called “tensorial strength” seems to be important in categorical semantics, and in his seminal paper Notions of computation and monads, Moggi argued that effects should be described using strong monads. It makes sense, considering that a computation is done in a context, and you should be able to make the global context available under the monad. The fact that we don't talk much about strong monads in Haskell is due to the fact that all functors in the category Set, which underlies Haskell's type system, have canonical strength. So why do we talk about strength when dealing with applicative functors? I have looked into this question and come to the conclusion that there is no fundamental reason, and that it's okay to just say:

An applicative is a lax monoidal functor

In this post I’ll discuss different equivalent categorical definitions of the applicative functor. I’ll start with a lax closed functor, then move to a lax monoidal functor, and show the equivalence of the two definitions. Then I’ll introduce the calculus of ends and show that the third definition of the applicative functor as a monoid in a suitable functor category equipped with Day convolution is equivalent to the previous ones.

Applicative as a Lax Closed Functor

The standard definition of the applicative functor in Haskell reads:

class Functor f => Applicative f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    pure :: a -> f a

At first sight it doesn’t seem to involve a monoidal structure. It looks more like preserving function arrows (I added some redundant parentheses to suggest this interpretation).

Categorically, functors that “preserve arrows” are known as closed functors. Let’s look at a definition of a closed functor f between two categories C and D. We have to assume that both categories are closed, meaning that they have internal hom-objects for every pair of objects. Internal hom-objects are also called function objects or exponentials. They are normally defined through the right adjoint to the product functor:

C(z × a, b) ≅ C(z, a => b)

To distinguish between sets of morphisms and function objects (they are the same thing in Set), I will temporarily use double arrows for function objects.

We can take a functor f and act with it on the function object a=>b in the category C. We get an object f (a=>b) in D. Or we can map the two objects a and b from C to D and then construct the function object in D: f a => f b.

[diagram: the two ways of constructing a function object under a closed functor f]

We call a functor closed if the two results are isomorphic (I have subscripted the two arrows with the categories where they are defined):

f (a =>_C b) ≅ (f a =>_D f b)

and if the functor preserves the unit object:

i_D ≅ f i_C

What’s the unit object? Normally, this is the unit with respect to the same product that was used to define the function object using the adjunction. I’m saying “normally,” because it’s possible to define a closed category without a product.

Note: The two arrows and the two i's are defined with respect to two different products. The first isomorphism must be natural in both a and b. Also, to complete the picture, there are some diagrams that must commute.

The two isomorphisms that define a closed functor can be relaxed and replaced by unidirectional morphisms. The result is a lax closed functor:

f (a => b) -> (f a => f b)
i -> f i

This looks almost like the definition of Applicative, except for one problem: how can we recover the natural transformation we call pure from a single morphism i -> f i?

One way to do it is from the position of strength. An endofunctor f has tensorial strength if there is a natural transformation:

st_{c a} :: c ⊗ f a -> f (c ⊗ a)

Think of c as the context in which the computation f a is performed. Strength means that we can use this external context inside the computation.

In the category Set, with the tensor product replaced by cartesian product, all functors have canonical strength. In Haskell, we would define it as:

st :: Functor f => (c, f a) -> f (c, a)
st (c, fa) = fmap ((,) c) fa

The morphism in the definition of the lax closed functor translates to:

unit :: () -> f ()

Notice that, up to isomorphism, the unit type () is the unit with respect to cartesian product. The relevant isomorphisms are:

λ_a :: ((), a) -> a
ρ_a :: (a, ()) -> a

Here’s the derivation from Rivas and Jaskelioff’s Notions of Computation as Monoids:

    a
≅  (a, ())   -- unit law, ρ⁻¹
-> (a, f ()) -- lax unit
-> f (a, ()) -- strength
≅  f a       -- lifted unit law, f ρ

Strength is necessary if you’re starting with a lax closed (or monoidal — see the next section) endofunctor in an arbitrary closed (or monoidal) category and you want to derive pure within that category — not after you restrict it to Set.
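
The derivation above can be transcribed into Haskell as a sketch, assuming a lax unit funit :: f () and the canonical strength:

pureVia :: Functor f => f () -> a -> f a
pureVia funit a = fmap rho (st (a, funit))
  where
    st (c, fa)  = fmap ((,) c) fa  -- canonical strength
    rho (x, ()) = x                -- right unitor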

There is, however, an alternative derivation using the Yoneda lemma:

f ()
≅ forall a. (() -> a) -> f a  -- Yoneda
≅ forall a. a -> f a -- because: (() -> a) ≅ a

We recover the whole natural transformation from a single value. The advantage of this derivation is that it generalizes beyond endofunctors and it doesn’t require strength. As we’ll see later, it also ties nicely with the Day-convolution definition of applicative. The Yoneda lemma only works for Set-valued functors, but so does Day convolution (there are enriched versions of both Yoneda and Day convolution, but I’m not going to discuss them here).
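
In code, the Yoneda route to pure is a one-liner; a sketch:

pureYoneda :: Functor f => f () -> a -> f a
pureYoneda funit a = fmap (const a) funit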

We can define the categorical version of Haskell's applicative functor as a lax closed functor going from a closed category C to Set. It's a functor equipped with a natural transformation:

f (a => b) -> (f a -> f b)

where a=>b is the internal hom-object in C (the second arrow is a function type in Set), and a function:

1 -> f i

where 1 is the singleton set and i is the unit object in C.

The importance of a categorical definition is that it comes with additional identities or “axioms.” A lax closed functor must be compatible with the structure of both categories. I will not go into details here, because we are really only interested in closed categories that are monoidal, where these axioms are easier to express.

The definition of a lax closed functor is easily translated to Haskell:

class Functor f => Closed f where
    (<*>) :: f (a -> b) -> f a -> f b
    unit :: f ()

Applicative as a Lax Monoidal Functor

Even though it's possible to define a closed category without a monoidal structure, in practice we usually work with monoidal categories. This is reflected in the equivalent definition of Haskell's applicative functor as a lax monoidal functor. In Haskell, we would write:

class Functor f => Monoidal f where
    (>*<) :: (f a, f b) -> f (a, b)
    unit :: f ()
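
As a quick sanity check, Maybe is lax monoidal in this sense; a sketch using the Monoidal class above:

instance Monoidal Maybe where
  (>*<) (Just a, Just b) = Just (a, b)
  (>*<) _                = Nothing
  unit = Just ()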

This definition is equivalent to our previous definition of a closed functor. That’s because, as we’ve seen, a function object in a monoidal category is defined in terms of a product. We can show the equivalence in a more general categorical setting.

This time let’s start with a symmetric closed monoidal category C, in which the function object is defined through the right adjoint to the tensor product:

C(z ⊗ a, b) ≅ C(z, a => b)

As usual, the tensor product is associative and unital — with the unit object i — up to isomorphism. The symmetry is defined through natural isomorphism:

γ :: a ⊗ b -> b ⊗ a

A functor f between two monoidal categories is lax monoidal if there exist: (1) a natural transformation

f a ⊗ f b -> f (a ⊗ b)

and (2) a morphism

i -> f i

Notice that the products and units on either side of the two mappings are from different categories.

A (lax-) monoidal functor must also preserve associativity and unit laws.

For instance a triple product

f a ⊗ (f b ⊗ f c)

may be rearranged using an associator α to give

(f a ⊗ f b) ⊗ f c

then converted to

f (a ⊗ b) ⊗ f c

and then to

f ((a ⊗ b) ⊗ c)

Or it could be first converted to

f a ⊗ f (b ⊗ c)

and then to

f (a ⊗ (b ⊗ c))

These two should be equivalent under the associator in C.

[diagram: the associativity coherence condition for a lax monoidal functor]

Similarly, f a ⊗ i can be simplified to f a using the right unitor ρ in D. Or it could be first converted to f a ⊗ f i, then to f (a ⊗ i), and then to f a, using the right unitor in C. The two paths should be equivalent. (Similarly for the left identity.)

[diagram: the unit coherence condition for a lax monoidal functor]

We will now consider functors from C to Set, with Set equipped with the usual cartesian product, and the singleton set as unit. A lax monoidal functor is defined by: (1) a natural transformation:

(f a, f b) -> f (a ⊗ b)

and (2) a choice of an element of the set f i (a function from 1 to f i picks an element from that set).

We need the target category to be Set because we want to be able to use the Yoneda lemma to show equivalence with the standard definition of applicative. I’ll come back to this point later.

The Equivalence

The definitions of a lax closed and a lax monoidal functor are equivalent when C is a closed symmetric monoidal category. The proof relies on the existence of the adjunction, in particular on its unit and counit:

η_a :: a -> (b => (a ⊗ b))
ε_b :: (a => b) ⊗ a -> b

For instance, let’s assume that f is lax-closed. We want to construct the mapping

(f a, f b) -> f (a ⊗ b)

First, we apply the lifted pair (unit, identity), (f η, f id)

(f a -> f (b => a ⊗ b), f id)

to the left hand side. We get:

(f (b => a ⊗ b), f b)

Now we can use (the uncurried version of) the lax-closed morphism:

(f (b => x), f b) -> f x

to get:

f (a ⊗ b)

Conversely, assuming the lax-monoidal property we can show that the functor is lax-closed, that is to say, implement the following function:

(f (a => b), f a) -> f b

First we use the lax monoidal morphism on the left hand side:

f ((a => b) ⊗ a)

and then use the counit ε (a.k.a. the evaluation morphism) to get the desired result f b.
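
In Haskell, both directions of this proof fit on a single line each; a sketch using the Closed and Monoidal classes above, where fmap (,) plays the role of the lifted unit and uncurried application plays the role of the counit:

monoidalFromClosed :: Closed f => (f a, f b) -> f (a, b)
monoidalFromClosed (fa, fb) = fmap (,) fa <*> fb

closedFromMonoidal :: Monoidal f => f (a -> b) -> f a -> f b
closedFromMonoidal fg fa = fmap (\(g, a) -> g a) ((>*<) (fg, fa))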

There is yet another presentation of applicatives using Day convolution. But before we get there, we need a little refresher on calculus.

Calculus of Ends

Ends and coends are very useful constructs generalizing limits and colimits. They are defined through universal constructions. They have a few fundamental properties that are used over and over in categorical calculations. I’ll just introduce the notation and a few important identities. We’ll be working in a symmetric monoidal category C with functors from C to Set and profunctors from Cop×C to Set. The end of a profunctor p is a set denoted by:

∫_a p a a

The most important thing about ends is that a set of natural transformations between two functors f and g can be represented as an end:

[C, Set](f, g) = ∫_a Set(f a, g a)

In Haskell, the end corresponds to universal quantification over a functor of mixed variance. For instance, the natural transformation formula takes the familiar form:

forall a. f a -> g a

The Yoneda lemma, which deals with natural transformations, can also be written using an end:

∫_z (C(a, z) -> f z) ≅ f a

In Haskell, we can write it as the equivalence:

forall z. ((a -> z) -> f z) ≅ f a

which is a generalization of the continuation passing transform.
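
Both directions of this equivalence are easy to write down; a sketch (RankNTypes is needed for the quantifier):

{-# LANGUAGE RankNTypes #-}

toYoneda :: Functor f => f a -> (forall z. (a -> z) -> f z)
toYoneda fa = \k -> fmap k fa

fromYoneda :: (forall z. (a -> z) -> f z) -> f a
fromYoneda g = g id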

The dual notion of coend is similarly written using an integral sign, with the “integration variable” in the superscript position:

∫^a p a a

In pseudo-Haskell, a coend is represented by an existential quantifier. It's possible to define existential data types in Haskell by converting existential quantification to a universal one. The relevant identity in terms of coends and ends reads:

(∫^z p z z) -> y ≅ ∫_z (p z z -> y)

In Haskell, this formula is used to turn functions that take existential types to functions that are polymorphic:

(exists z. p z z) -> y ≅ forall z. (p z z -> y)

Intuitively, it makes perfect sense. If you want to define a function that takes an existential type, you have to be prepared to handle any type.

The equivalent of the Yoneda lemma for coends reads:

∫^z f z × C(z, a) ≅ f a

or, in pseudo-Haskell:

exists z. (f z, z -> a) ≅ f a

(The intuition is that the only thing you can do with this pair is to fmap the function over the first component.)
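
The pseudo-Haskell can be made real with an existential data type; a sketch, in which fromCoYoneda implements exactly the fmap intuition above:

{-# LANGUAGE ExistentialQuantification #-}

data CoYoneda f a = forall z. CoYoneda (f z) (z -> a)

toCoYoneda :: f a -> CoYoneda f a
toCoYoneda fa = CoYoneda fa id

fromCoYoneda :: Functor f => CoYoneda f a -> f a
fromCoYoneda (CoYoneda fz k) = fmap k fz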

There is also a contravariant version of this identity:

∫^z C(a, z) × f z ≅ f a

where f is a contravariant functor (a.k.a., a presheaf). In pseudo-Haskell:

exists z. (a -> z, f z) ≅ f a

(The intuition is that the only thing you can do with this pair is to apply the contramap of the first component to the second component.)

Using coends we can define a tensor product in the category of functors [C, Set]. This product is called Day convolution:

(f ★ g) a = ∫^{x y} f x × g y × C(x ⊗ y, a)

It is a bifunctor in that category (read, it can be used to lift natural transformations). It’s associative and symmetric up to isomorphism. It also has a unit — the hom-functor C(i, -), where i is the monoidal unit in C. In other words, Day convolution imbues the category [C, Set] with monoidal structure.

Let’s verify the unit laws.

(C(i, -) ★ g) a = ∫^{x y} C(i, x) × g y × C(x ⊗ y, a)

We can use the contravariant Yoneda to “integrate over x” to get:

∫^y g y × C(i ⊗ y, a)

Considering that i is the unit of the tensor product in C, we get:

∫^y g y × C(y, a)

Covariant Yoneda lets us “integrate over y” to get the desired g a. The same method works for the right unit law.

Applicative as a Monoid

Given a monoidal category, we can always define a monoid as an object m equipped with two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

satisfying the laws of associativity and unitality.

We have shown that the functor category [C, Set] (with C a symmetric monoidal category) is monoidal under Day convolution. An object in this category is a functor f. The two morphisms that would make it a candidate for a monoid are natural transformations:

μ :: f ★ f -> f
η :: C(i, -) -> f

The a component of the natural transformation μ can be rewritten as:

(∫^{x y} f x × f y × C(x ⊗ y, a)) -> f a

which is equivalent to:

∫_{x y} (f x × f y × C(x ⊗ y, a) -> f a)

or, upon currying:

∫_{x y} (f x, f y) -> C(x ⊗ y, a) -> f a

It turns out that a monoid so defined is equivalent to a lax monoidal functor. This was shown by Rivas and Jaskelioff. The following derivation is due to Bob Atkey.

The trick is to start with the whole set of natural transformations from f★f to f. The multiplication μ is just one of them. We'll express the set of natural transformations as an end:

∫_a ((f ★ f) a -> f a)

Plugging in the formula for the a component of μ, we get:

∫_a ∫_{x y} (f x, f y) -> C(x ⊗ y, a) -> f a

The end over a does not involve the first argument, so we can move the integral sign:

∫_{x y} (f x, f y) -> ∫_a (C(x ⊗ y, a) -> f a)

Then we use the Yoneda lemma to “perform the integration” over a:

∫_{x y} (f x, f y) -> f (x ⊗ y)

You may recognize this as a set of natural transformations that define a lax monoidal functor. We have established a one-to-one correspondence between these natural transformations and the ones defining monoidal multiplication using Day convolution.

The remaining part is to show the equivalence between the unit with respect to Day convolution and the second part of the definition of the lax monoidal functor, the morphism:

1 -> f i

We start with the set of natural transformations that contains our η:

∫_a (C(i, a) -> f a)

By Yoneda, this is just f i. Picking an element from a set is equivalent to defining a morphism from the singleton set 1, so for any choice of η we get:

1 -> f i

and vice versa. The two definitions are equivalent.

Notice that the monoidal unit η under Day convolution becomes the definition of pure in the Haskell version of applicative. Indeed, when we replace the category C with Set, f becomes an endofunctor, and the unit of Day convolution C(i, -) becomes the identity functor Id. We get:

η :: Id -> f

or, in components:

pure :: a -> f a

So, strictly speaking, the Haskell definition of Applicative mixes the elements of the lax closed functor and the monoidal unit under Day convolution.

Acknowledgments

I’m grateful to Mauro Jaskelioff and Exequiel Rivas for correspondence and to Bob Atkey, Dimitri Chikhladze, and Mike Shulman for answering my questions on Math Overflow.


I’m a refugee. I fled Communist Poland and was granted political asylum in the United States. That was so long ago that I don’t think of myself as a refugee any more. I’m an American — not by birth but by choice. My understanding is that being an American has nothing to do with ethnicity, religion, or personal history. I became an American by accepting a certain system of values specified in the Constitution. Things like freedom of expression, freedom from persecution, equality, pursuit of happiness, etc. I’m also a Pole and proud of it. I speak the language, I know my history and culture. No contradiction here.

I’m a scientist, and I normally leave politics to others. In fact I came to the United States to get away from politics. In Poland, I was engaged in political struggle, I was a member of Solidarity, and I joined the resistance when Solidarity was crushed. I could have stayed and continued the fight, but I chose instead to leave and make my contribution to society in other areas.

There are times in history when it's best for scientists to sit in their ivory towers and do what they are trained to do — science. There are times when it's best for engineers to design new things, write software, and build gadgets that make life easier for everybody. But there are times when this is not enough. That's why I'm interrupting my scheduled programming, my category theory for programmers blog, to say a few words about current events. Actually, first I'd like to reminisce a little.

When you live under a dictatorship, you have to develop certain skills. If a direct approach can get you in trouble, you try to manipulate the system. When martial law was imposed in Poland, all international travel was suspended. I was a grad student then, working on my Ph.D. in theoretical physics. Contact with scientists from abroad was very important to me. As soon as martial law was suspended, my supervisor and I decided to go for a visit — not to the West, mind you, but to the Soviet Union. But the authorities decided that giving passports to scientists was a great opportunity to make them work for the system. So before we could get permission to go abroad, we had to visit the Department of Security — the Secret Police — for an interview. From our friends, who had been interviewed before, we knew that we'd be offered a choice: become an informant or forget about traveling abroad.

My professor went first. He was on time, but they kept him waiting outside the office forever. After an hour, he stormed out. He didn’t get the passport.

When I went to my interview, it started with some innocuous questions. I was asked who the chief of Solidarity at the University was. That was no secret — he was my office mate in the Physics Department. Then the discussion turned to my future employment at the University. The idea was to suggest that the Department of Security could help me keep my position, or get me fired. Knowing what was coming, I bluffed, saying that I was one of the brightest young physicists around and that my employment was perfectly secure. Then I started talking about my planned trip to the Soviet Union. I took my interviewer into confidence and explained how horribly Soviet science was suffering because the government wouldn't allow its scientists to travel to the West, and how much better off Polish science was because of that. You have to realize that, even in the depths of the Department of Security of a Communist country, there was no love for our Soviet brethren. If we could beat them at science, all the better. I got my passport without any more hassle.

I was exaggerating a little, especially about me being so bright, but it’s true that there is an international community of scientists and engineers that knows no borders. Any impediment to free exchange of ideas and people is very detrimental to its prosperity and, by association, to the prosperity of the societies they live in.

I consider the recent Muslim ban — and that’s what it should be called — a direct attack on this community, on a par with climate-change denials and gag orders against climate scientists working for the government. It’s really hard to piss off scientists and engineers, so I consider this a major accomplishment of the new presidency.

You can make fun of us nerds as much as you want, but every time you send a tweet, you're using the infrastructure created by us. The billions of metal-oxide field-effect transistors and the liquid-crystal display in your tablet were made possible by developments in quantum mechanics and materials science. The operating system was written by software engineers in languages based on the math developed by Alan Turing and Alonzo Church. Try denying that, and you'll end up tweeting with a quill on parchment.

Scientists and engineers consider themselves servants of the society. We don’t make many demands and are quite happy to be left alone to do our stuff. But if this service is disrupted by clueless, power-hungry politicians, we will act. We are everywhere, and we know how to use the Internet — we invented it.

P. S. I keep comments to my blog under moderation because of spam. But I will also delete comments that I consider clueless.

Here's a little anecdote about cluelessness that I heard a long time ago from my physicist friends in the Soviet Union. They had invited a guest scientist from the US to one of their conferences. They were really worried that he might say something politically charged and make future scientific exchanges impossible. So they asked him to, please, refrain from any political comments.

The time comes for the guest scientist to give his talk. And he starts with, "Before I came to the Soviet Union, I was warned that I would be constantly minded by the secret police." The director of the institute, who invited our scientist, is sitting in the first row between two KGB minders. All the blood drains from his face. The KGB minders stiffen in their seats. "I'm so happy that it turned out to be nonsense," says the scientist, and he proceeds to give his talk. You see, it's really hard to imagine what it's like to live under a dictatorship unless you've experienced it yourself. Trust me, I've been there, and I recognize the warning signs.


This is part 23 of Categories for Programmers. Previously: Monads Categorically. See the Table of Contents.

Now that we have covered monads, we can reap the benefits of duality and get comonads for free simply by reversing the arrows and working in the opposite category.

Recall that, at the most basic level, monads are about composing Kleisli arrows:

a -> m b

where m is a functor that is a monad. If we use the letter w (an upside-down m) for the comonad, we can define co-Kleisli arrows as morphisms of the type:

w a -> b

The analog of the fish operator for co-Kleisli arrows is defined as:

(=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)

For co-Kleisli arrows to form a category we also have to have an identity co-Kleisli arrow, which is called extract:

extract :: w a -> a

This is the dual of return. We also have to impose the laws of associativity as well as left- and right-identity. Putting it all together, we could define a comonad in Haskell as:

class Functor w => Comonad w where
    (=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)
    extract :: w a -> a

In practice, we use slightly different primitives, as we’ll see shortly.

The question is, what’s the use for comonads in programming?

Programming with Comonads

Let's compare the monad with the comonad. A monad provides a way of putting a value in a container using return. It doesn't give you access to a value or values stored inside. Of course, data structures that implement monads might provide access to their contents, but that's considered a bonus. There is no common interface for extracting values from a monad. And we've seen the example of the IO monad that prides itself on never exposing its contents.

A comonad, on the other hand, provides the means of extracting a single value from it. It does not give us the means to insert values. So if you want to think of a comonad as a container, it always comes pre-filled with contents, and it lets you peek at them.

Just as a Kleisli arrow takes a value and produces some embellished result — it embellishes it with context — a co-Kleisli arrow takes a value together with a whole context and produces a result. It’s an embodiment of contextual computation.

The Product Comonad

Remember the reader monad? We introduced it to tackle the problem of implementing computations that need access to some read-only environment e. Such computations can be represented as pure functions of the form:

(a, e) -> b

We used currying to turn them into Kleisli arrows:

a -> (e -> b)

But notice that these functions already have the form of co-Kleisli arrows. Let’s massage their arguments into the more convenient functor form:

{-# LANGUAGE DeriveFunctor #-}

data Product e a = P e a
  deriving Functor

We can easily define the composition operator by making the same environment available to the arrows that we are composing:

(=>=) :: (Product e a -> b) -> (Product e b -> c) -> (Product e a -> c)
f =>= g = \(P e a) -> let b = f (P e a)
                          c = g (P e b)
                       in c

The implementation of extract simply ignores the environment:

extract (P e a) = a

Not surprisingly, the product comonad can be used to perform exactly the same computations as the reader monad. In a way, the comonadic implementation of the environment is more natural — it follows the spirit of “computation in context.” On the other hand, monads come with the convenient syntactic sugar of the do notation.

The connection between the reader monad and the product comonad goes deeper, having to do with the fact that the reader functor is the right adjoint of the product functor. In general, though, comonads cover different notions of computation than monads. We’ll see more examples later.

It’s easy to generalize the Product comonad to arbitrary product types including tuples and records.
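
For the record, here's a sketch of the Product instance written against the duplicate-based Comonad class that appears below:

instance Comonad (Product e) where
  extract (P _ a)   = a
  duplicate (P e a) = P e (P e a)  -- replicate the environment around the focus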

Dissecting the Composition

Continuing the process of dualization, we could go ahead and dualize monadic bind and join. Alternatively, we can repeat the process we used with monads, where we studied the anatomy of the fish operator. This approach seems more enlightening.

The starting point is the realization that the composition operator must produce a co-Kleisli arrow that takes w a and produces a c. The only way to produce a c is to apply the second function to an argument of the type w b:

(=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)
f =>= g = g ... 

But how can we produce a value of type w b that could be fed to g? We have at our disposal the argument of type w a and the function f :: w a -> b. The solution is to define the dual of bind, which is called extend:

extend :: (w a -> b) -> w a -> w b

Using extend we can implement composition:

f =>= g = g . extend f

Can we next dissect extend? You might be tempted to say, why not just apply the function w a -> b to the argument w a, but then you quickly realize that you'd have no way of converting the resulting b to w b. Remember, the comonad provides no means of lifting values. At this point, in the analogous construction for monads, we used fmap. The only way we could use fmap here would be if we had something of the type w (w a) at our disposal. If we could only turn w a into w (w a). And, conveniently, that would be exactly the dual of join. We call it duplicate:

duplicate :: w a -> w (w a)

So, just like with the definitions of the monad, we have three equivalent definitions of the comonad: using co-Kleisli arrows, extend, or duplicate. Here's the Haskell definition taken directly from the Control.Comonad library:

class Functor w => Comonad w where
  extract :: w a -> a
  duplicate :: w a -> w (w a)
  duplicate = extend id
  extend :: (w a -> b) -> w a -> w b
  extend f = fmap f . duplicate

Provided are the default implementations of extend in terms of duplicate and vice versa, so you only need to override one of them.

The intuition behind these functions is based on the idea that, in general, a comonad can be thought of as a container filled with values of type a (the product comonad was a special case of just one value). There is a notion of the "current" value, one that's easily accessible through extract. A co-Kleisli arrow performs some computation that is focused on the current value, but it has access to all the surrounding values. Think of Conway's game of life. Each cell contains a value (usually just True or False). A comonad corresponding to the game of life would be a grid of cells focused on the "current" cell.

So what does duplicate do? It takes a comonadic container w a and produces a container of containers w (w a). The idea is that each of these containers is focused on a different a inside w a. In the game of life, you would get a grid of grids, each cell of the outer grid containing an inner grid that’s focused on a different cell.

Now look at extend. It takes a co-Kleisli arrow and a comonadic container w a filled with as. It applies the computation to all of these as, replacing them with bs. The result is a comonadic container filled with bs. extend does it by shifting the focus from one a to another and applying the co-Kleisli arrow to each of them in turn. In the game of life, the co-Kleisli arrow would calculate the new state of the current cell. To do that, it would look at its context — presumably its nearest neighbors. The default implementation of extend illustrates this process. First we call duplicate to produce all possible foci and then we apply f to each of them.

The Stream Comonad

This process of shifting the focus from one element of the container to another is best illustrated with the example of an infinite stream. Such a stream is just like a list, except that it doesn’t have the empty constructor:

data Stream a = Cons a (Stream a)

It’s trivially a Functor:

instance Functor Stream where
    fmap f (Cons a as) = Cons (f a) (fmap f as)

The focus of a stream is its first element, so here’s the implementation of extract:

extract (Cons a _) = a

duplicate produces a stream of streams, each focused on a different element.

duplicate (Cons a as) = Cons (Cons a as) (duplicate as)

The first element is the original stream, the second element is the tail of the original stream, the third element is its tail, and so on, ad infinitum.

Here’s the complete instance:

instance Comonad Stream where
    extract (Cons a _) = a
    duplicate (Cons a as) = Cons (Cons a as) (duplicate as)

This is a very functional way of looking at streams. In an imperative language, we would probably start with a method advance that shifts the stream by one position. Here, duplicate produces all shifted streams in one fell swoop. Haskell’s laziness makes this possible and even desirable. Of course, to make a Stream practical, we would also implement the analog of advance:

tail :: Stream a -> Stream a
tail (Cons a as) = as

but it’s never part of the comonadic interface.

If you have any experience with digital signal processing, you’ll see immediately that a co-Kleisli arrow for a stream is just a digital filter, and extend produces a filtered stream.

As a simple example, let’s implement the moving average filter. Here’s a function that sums n elements of a stream:

sumS :: Num a => Int -> Stream a -> a
sumS n (Cons a as) = if n <= 0 then 0 else a + sumS (n - 1) as

Here’s the function that calculates the average of the first n elements of the stream:

average :: Fractional a => Int -> Stream a -> a
average n stm = (sumS n stm) / (fromIntegral n)

Partially applied average n is a co-Kleisli arrow, so we can extend it over the whole stream:

movingAvg :: Fractional a => Int -> Stream a -> Stream a
movingAvg n = extend (average n)

The result is the stream of running averages.
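
To see it in action, we need a way to observe a finite prefix of a stream. The helpers takeS and natsFrom below are introduced here just for this test; they are not part of the comonadic interface:

takeS :: Int -> Stream a -> [a]
takeS n (Cons a as) = if n <= 0 then [] else a : takeS (n - 1) as

natsFrom :: Num a => a -> Stream a
natsFrom n = Cons n (natsFrom (n + 1))

For instance, takeS 5 (movingAvg 2 (natsFrom 1)) evaluates to [1.5, 2.5, 3.5, 4.5, 5.5].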

A stream is an example of a unidirectional, one-dimensional comonad. It can be easily made bidirectional or extended to two or more dimensions.
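
For instance, the bidirectional version is a stream zipper: a focused element with infinite streams extending to the left and to the right. Here’s a minimal sketch (Cursor, moveL, moveR, and iterateS are illustrative names introduced here, not part of any library):

data Cursor a = Cursor (Stream a) a (Stream a)

instance Functor Cursor where
    fmap f (Cursor ls a rs) = Cursor (fmap f ls) (f a) (fmap f rs)

-- Shift the focus one step to the left or to the right.
moveL, moveR :: Cursor a -> Cursor a
moveL (Cursor (Cons l ls) a rs) = Cursor ls l (Cons a rs)
moveR (Cursor ls a (Cons r rs)) = Cursor (Cons a ls) r rs

-- The analog of iterate for streams.
iterateS :: (a -> a) -> a -> Stream a
iterateS f x = Cons x (iterateS f (f x))

instance Comonad Cursor where
    extract (Cursor _ a _) = a
    duplicate c = Cursor (iterateS moveL (moveL c)) c (iterateS moveR (moveR c))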

Comonad Categorically

Defining a comonad in category theory is a straightforward exercise in duality. As with the monad, we start with an endofunctor T. The two natural transformations, η and μ, that define the monad are simply reversed for the comonad:

ε :: T -> I
δ :: T -> T²

The components of these transformations correspond to extract and duplicate. Comonad laws are the mirror image of monad laws. No big surprise here.
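
Spelled out in Haskell notation, the mirror-image laws read:

extract . duplicate      = id
fmap extract . duplicate = id
duplicate . duplicate    = fmap duplicate . duplicate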

Then there is the derivation of the monad from an adjunction. Duality reverses an adjunction: the left adjoint becomes the right adjoint and vice versa. And, since the composition R ∘ L defines a monad, L ∘ R must define a comonad. The counit of the adjunction:

ε :: L ∘ R -> I

is indeed the same ε that we see in the definition of the comonad — or, in components, as Haskell’s extract. We can also use the unit of the adjunction:

η :: I -> R ∘ L

to insert an R ∘ L in the middle of L ∘ R and produce L ∘ R ∘ L ∘ R. Making T² from T defines the δ, and that completes the definition of the comonad.

We’ve also seen that the monad is a monoid. The dual of this statement would require the use of a comonoid, so what’s a comonoid? The original definition of a monoid as a single-object category doesn’t dualize to anything interesting. When you reverse the direction of all endomorphisms, you get another monoid. Recall, however, that in our approach to a monad, we used a more general definition of a monoid as an object in a monoidal category. The construction was based on two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

The reversal of these morphisms produces a comonoid in a monoidal category:

δ :: m -> m ⊗ m
ε :: m -> i

One can write a definition of a comonoid in Haskell:

class Comonoid m where
  split   :: m -> (m, m)
  destroy :: m -> ()

but it is rather trivial. Obviously destroy ignores its argument.

destroy _ = ()

split is just a pair of functions:

split x = (f x, g x)

Now consider comonoid laws that are dual to the monoid unit laws.

lambda . bimap destroy id . split = id
rho . bimap id destroy . split = id

Here, lambda and rho are the left and right unitors, respectively (see the definition of monoidal categories). Plugging in the definitions, we get:

lambda (bimap destroy id (split x))
= lambda (bimap destroy id (f x, g x))
= lambda ((), g x)
= g x

which proves that g = id. Similarly, the second law expands to f = id. In conclusion:

split x = (x, x)

which shows that in Haskell (and, in general, in the category Set) every object is a trivial comonoid.
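
For any concrete type the lawful instance is therefore forced. For instance, for Int it would have to be:

instance Comonoid Int where
  split x   = (x, x)
  destroy _ = ()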

Fortunately there are other more interesting monoidal categories in which to define comonoids. One of them is the category of endofunctors. And it turns out that, just like the monad is a monoid in the category of endofunctors,

The comonad is a comonoid in the category of endofunctors.

The Store Comonad

Another important example of a comonad is the dual of the state monad. It’s called the costate comonad or, alternatively, the store comonad.

We’ve seen before that the state monad is generated by the adjunction that defines the exponentials:

L z = z × s
R a = s ⇒ a

We’ll use the same adjunction to define the costate comonad. A comonad is defined by the composition L ∘ R:

L (R a) = (s ⇒ a) × s

Translating this to Haskell, we start with the adjunction between the Prod functor on the left and the Reader functor on the right. Composing Prod after Reader is equivalent to the following definition:

data Store s a = Store (s -> a) s
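
The Comonad instance we’re about to write requires Store s to be a Functor. The obvious instance post-composes the mapped function with the stored accessor:

instance Functor (Store s) where
    fmap g (Store f s) = Store (g . f) s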

The counit of the adjunction taken at the object a is the morphism:

εa :: ((s ⇒ a) × s) -> a

or, in Haskell notation:

counit (Prod (Reader f, s)) = f s

This becomes our extract:

extract (Store f s) = f s

The unit of the adjunction:

unit a = Reader (\s -> Prod (a, s))

can be rewritten as a partially applied data constructor. Given f :: s -> a, we have:

Store f :: s -> Store s a

We construct δ, or duplicate, as the horizontal composition:

δ :: L ∘ R -> L ∘ R ∘ L ∘ R
δ = L ∘ η ∘ R

We have to sneak η through the leftmost L, which is the Prod functor. It means acting with η, or Store f, on the left component of the pair (that’s what fmap for Prod would do). We get:

duplicate (Store f s) = Store (Store f) s

(Remember that, in the formula for δ, L and R stand for identity natural transformations whose components are identity morphisms.)

Here’s the complete definition of the Store comonad:

instance Comonad (Store s) where
  extract (Store f s) = f s
  duplicate (Store f s) = Store (Store f) s

You may think of the Reader part of Store as a generalized container of as that are keyed using elements of the type s. For instance, if s is Int, Reader Int a is an infinite bidirectional stream of as. Store pairs this container with a value of the key type, in this case an Int, and extract uses that integer to index into the infinite stream. You may think of the second component of Store as the current position.

Continuing with this example, duplicate creates a new infinite stream indexed by an Int. This stream contains streams as its elements. In particular, at the current position, it contains the original stream. But if you use some other Int (positive or negative) as the key, you’d obtain a shifted stream positioned at that new index.

In general, you can convince yourself that when extract acts on the duplicated Store it produces the original Store (in fact, the identity law for the comonad states that extract . duplicate = id).
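
Plugging in the definitions makes this a one-line calculation:

extract (duplicate (Store f s))
= extract (Store (Store f) s)
= Store f s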

The Store comonad plays an important role as the theoretical basis for the Lens library. Conceptually, the Store s a comonad encapsulates the idea of “focusing” (like a lens) on a particular substructure of the data type a using the type s as an index. In particular, a function of the type:

a -> Store s a

is equivalent to a pair of functions:

set :: a -> s -> a
get :: a -> s

If a is a product type, set could be implemented as setting the field of type s inside of a while returning the modified version of a. Similarly, get could be implemented to read the value of the s field from a. We’ll explore these ideas more in the next section.
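
As a concrete illustration, here’s such a pair packaged as a function a -> Store s a for a hypothetical record type (Point and focusX are introduced here just for the example):

data Point = Point { ptX :: Int, ptY :: Int }

-- The stored function is the setter; the stored value is the getter's result.
focusX :: Point -> Store Int Point
focusX p = Store (\x -> p { ptX = x }) (ptX p)

Note that extract (focusX p) rebuilds the original p, which is one of the lens laws in disguise.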

Challenges

  1. Implement Conway’s Game of Life using the Store comonad. Hint: What type do you pick for s?

Acknowledgments

I’m grateful to Edward Kmett for reading the draft of this post and pointing out flaws in my reasoning.

Next: F-Algebras.