If there is one structure that permeates category theory and, by implication, the whole of mathematics, it’s the monoid. To study the evolution of this concept is to study the power of abstraction and the idea of getting more for less, which is at the core of mathematics. When I say “evolution” I don’t necessarily mean chronological development. I’m looking at a monoid as if it were a life form evolving through various eons of abstraction.

It’s an ambitious project and I’ll have to cover a lot of material. I’ll start slowly, with the definitions of magmas and monoids, but then I will accelerate. A lot of concepts will be introduced in one or two sentences, mainly to familiarize the reader with the notation. I’ll dwell a little on monoidal categories, then breeze through ends, coends, and profunctors. I’ll show you how monads, arrows, and applicative functors arise from monoids in various monoidal categories.


The Magmas of the Hadean Eon

Monoids evolved from more primitive life forms feeding on sets. So, before even touching upon monoids, let’s talk about cartesian products, relations, and functions. You take two sets a and b (or, in the simplest case, two copies of the same set a) and form pairs of elements. That gives you a set of pairs, a.k.a., the cartesian product a×b. Any subset of such a cartesian product is called a relation. Two elements x and y are in a relation if the pair <x, y> is a member of that subset.

A function from a to b is a special kind of relation, in which every element x in the set a has one and only one element y in the set b that’s related to it. (Sometimes this is called a total function, since it’s defined for all elements of a).

Even before there were monoids, there was magma. A magma is a set with a binary operation and nothing else. So, in particular, there is no assumption of associativity, and there is no unit. A binary operation is simply a function from the cartesian product of a with itself back to a

a × a -> a

It takes a pair of elements <x, y>, both coming from the set a, and maps it to an element of a.

It’s tempting to quote the Haskell definition of a magma:

class Magma a where
  (<>) :: a -> a -> a

but this definition is already tainted with some higher concepts like currying. An alternative would be:

class Magma a where
  (<>) :: (a, a) -> a

Here, we at least see a pair of elements that are being “multiplied.” But the pair type (a, a) is also a higher-level concept. I’ll come back to it later.

Lack of associativity means that we cannot identify (x<>y)<>z with x<>(y<>z). You have to keep the parentheses.

You might have heard of quaternions — their multiplication is associative. But not many people have heard of octonions, which are not associative. In fact Hamilton, who discovered quaternions, invented the word “associative” to disassociate himself from the octonions.

If you’re familiar with continuous groups, you might know that Lie algebras are not associative.

Closer to home — most operations on floating-point numbers are not associative on modern computers because of rounding errors.
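
To see this concretely, here is a quick check (a well-known example; the exact values assume standard IEEE-754 doubles):

notAssociative :: Bool
notAssociative = (0.1 + 0.2) + 0.3 /= 0.1 + (0.2 + 0.3 :: Double)
-- True: the left-hand sum is 0.6000000000000001, the right-hand sum is 0.6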

But, really, most interesting binary operations are associative. So out of the magma emerges a semigroup. In a semigroup you can drop parentheses. A non-trivial (that is, non-monoidal) example of a semigroup is the set of integers with the max binary operation. A maximum of three numbers is the same no matter in which order you pair them. But there is no integer that is less than or equal to every other integer, so there is no neutral element, and this is not a monoid.
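
Here is a minimal sketch in Haskell (the wrapper name MaxInt is mine; the standard Data.Semigroup module provides an equivalent Max wrapper):

newtype MaxInt = MaxInt Int

instance Semigroup MaxInt where
  MaxInt x <> MaxInt y = MaxInt (max x y)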

Monoids of the Archean Eon

But, really, most interesting binary operations are both associative and unital. There is usually a “do nothing” element with respect to the binary operation. So life as we know it begins with a monoid.

A monoid is a set with a binary operation that is associative, and with a special element called the unit e that is neutral with respect to the binary operation. To be precise, these are the three monoid laws:

(x <> y) <> z = x <> (y <> z)
e <> x = x
x <> e = x

In Haskell, the traditional definition of a monoid uses mempty for the unit and mappend for the binary operation:

class Monoid a where
    mempty  :: a
    mappend :: a -> a -> a

As with the magma, the definition of mappend is curried. Equivalently, it could have been written as:

mappend :: (a, a) -> a

I’ll come back to this point later.

There are plenty of examples of monoids. Non-negative integers with addition, or positive integers with multiplication, are the obvious ones. Strings with concatenation are interesting too, because concatenation is not commutative.
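
As a sketch, here is the list instance written against the traditional class above (modern GHC splits it between Semigroup and Monoid, but the content is the same); strings are just lists of characters:

instance Monoid [a] where
  mempty  = []
  mappend = (++)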

Just like pairs of elements from two sets a and b organize themselves into a set a×b, their cartesian product, functions between two sets organize themselves into a set — the set of functions from a to b, which we sometimes write as a->b.

This organizing principle is characteristic of sets, where everything you can think of is a set. Except when it’s more than just a set — for instance when you try to organize all sets into one large collection. This collection, or “class,” is not itself a set. You can’t have a set of all sets, but you can have a category Set of “small” sets, which are sets that belong to a “universe.” In what follows, I will confine myself to a single universe in order to dodge questions from foundational mathematicians.

Let’s now pop one level up and look at cartesian product as an operation on sets. For any two sets a and b, we can construct the set a×b. If we view this as “multiplication” of sets, we can say that sets form a magma. But do they form a monoid? Not exactly! To begin with, cartesian product is not associative. We can see it in Haskell: the type ((a, b), c) is not the same as the type (a, (b, c)). They are, however, isomorphic. There is an invertible function called the associator, from one type to the other:

alpha :: ((a, b), c) -> (a, (b, c))
alpha ((x, y), z) = (x, (y, z))

It’s just a repackaging of containers (such repackaging is, by the way, called a natural transformation).

For the unit of this “multiplication” we can pick the singleton set. In Haskell, this is the type called unit and it’s denoted by an empty pair of parentheses (). Again, the unit laws are valid up to isomorphism. There are two such isomorphisms called left and right unitors:

lambda :: ((), a) -> a
lambda ((), x) = x
rho :: (a, ()) -> a
rho (x, ()) = x

We have just exposed monoidal structure in the category Set. Set is not strictly a monoid because monoidal laws are satisfied only up to isomorphism.

There is another monoidal structure in Set. Just like cartesian product resembles multiplication, there is an operation on sets that resembles addition. It’s called disjoint sum. In Haskell it’s embodied in the type Either a b . Just like cartesian product, disjoint sum is associative up to isomorphism. The unit (or the “zero”) of this sum type is the empty set or, in Haskell, the Void type — also up to isomorphism.
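
As with pairs, the associativity of Either holds only up to isomorphism. One direction of that isomorphism looks like this (the name alphaSum is mine):

alphaSum :: Either (Either a b) c -> Either a (Either b c)
alphaSum (Left (Left x))  = Left x
alphaSum (Left (Right y)) = Right (Left y)
alphaSum (Right z)        = Right (Right z)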

The Cambrian Explosion of Categories

The first rule of abstraction is, You do not talk about Fight Club. In the category Set, for instance, we are not supposed to admit that sets have elements. An object in Set is really a set, but you never talk about its elements. We still have functions between sets, but they become abstract morphisms, of which we only know how they compose.

Composition of functions is associative, and there is an identity function for every set, which serves as a unit of composition. We can write these rules compactly as:

(f ∘ g) ∘ h = f ∘ (g ∘ h)
id ∘ f = f
f ∘ id = f

These look exactly like monoid laws. So do functions form a monoid with respect to composition? Not quite, because you can’t compose any two functions. They must be composable, which means their endpoints have to match. In Haskell, we can compose g after f, or g ∘ f, only if:

f :: a -> b
g :: b -> c

Also, there is no single identity function, but a whole family of functions id_a, one for each set a. In Haskell, we call that a polymorphic function.

But notice what happens if we restrict ourselves to just a single object a in Set. Every morphism from a back to a can be composed with any other such morphism (their endpoints always match). Moreover, we are guaranteed that among those so-called endomorphisms there is one identity morphism id_a, which acts as a unit of composition.

Notice that I switched from the set/function nomenclature to the more general object/morphism naming convention of category theory. We can now forget about sets and functions and define an arbitrary category as a collection (a set in a given universe) of objects, and sets of morphisms that go between them. The only requirements are that any two composable morphisms compose, and that there is an identity morphism for every object. And that composition must be associative.

We can now forget about sets and define a monoid as a category that has only one object. The binary operation is just the composition of (endo-)morphisms. It works! We have defined a monoid without a set. Or have we?

No, we haven’t! We have just swept it under the rug — the rug being the set of morphisms. Yes, morphisms between any two objects form a set called the hom-set. In a category C, the hom-set between objects a and b is denoted by C(a, b). So we haven’t completely eliminated sets from the picture.

In the single object category M, we have only one hom-set M(a, a). The elements of this set — and we are allowed to call them elements because it’s a set — are morphisms like f and g. We can compose them, and we can call this composition “multiplication,” thus recovering our previous definition of the monoid as a set. We get associativity for free, and we have the identity morphism id_a serving as the unit.

It might seem at first that we haven’t made progress and, in fact, we might have made some things more complicated by forgetting the internal structure of objects. For instance, in the category Set, it’s no longer obvious what an empty set is. You can’t say it’s a set with no elements because of the Fight Club rule. Similarly with the singleton set. Fortunately, it turns out that both these sets can be uniquely described in terms of their interactions with other sets. By that I mean the kind of functions/morphisms that connect them to other objects in Set. These object-opaque definitions are called universal constructions. For instance, the empty set is the only set that has a unique morphism going from it to every other set. The advantage of this characterization is that it can now be applied to any category. One may ask this question in any category: Is there an object that has this property? If there is, we call it the initial object. The empty set is the initial object in Set. Similarly, a singleton set is the terminal object in Set (and it’s unique up to unique isomorphism).

A cartesian product of two sets can also be defined using a universal construction, one which doesn’t mention elements (or pairs of elements). And again, this construction may be used to define a (categorical) product in other categories. Of particular interest are categories where a product exists for every pair of objects (it does in Set).

In such categories there is actually an even better way of defining a product using an adjunction. But before we can get to adjunctions, let me summarize a few million years of evolution in a few terse paragraphs.

A functor is a mapping of categories that preserves their structure. It maps objects to objects and morphisms to morphisms. In Haskell we define a functor (really, an endofunctor) as a type constructor f (a mapping from types to types) that can be lifted to functions that go between these types:

class Functor f where
  fmap :: (a -> b) -> (f a -> f b)

The mapping of morphisms must also preserve composition and identity. Functors may collapse multiple objects into one, and multiple morphisms into one, but they never break connections. You may also think of functors as embedding one category inside another.
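
As a concrete example, here is the standard instance for Maybe; fmap acts on the contents, if there are any, and preserves identity and composition by construction:

instance Functor Maybe where
  fmap _ Nothing  = Nothing
  fmap f (Just x) = Just (f x)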

Finally, functors can be composed in the obvious way, and there is an identity endofunctor that maps a category onto itself. It follows that categories (at least the small ones) form a category Cat in which functors serve as morphisms.

There may be many ways of embedding one category inside another, and it’s extremely useful to be able to compare such embeddings by defining mappings between them. If we have two functors F and G between two categories C and D we define a natural transformation between these functors by picking a morphism between a pair F a and G a, for every a.

In Haskell, a natural transformation between two functors f and g is a polymorphic function:

type Nat f g = forall a. f a -> g a

In general, natural transformations must obey additional naturality conditions, but in Haskell they come for free (due to parametricity).
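
Here is a concrete natural transformation from the list functor to Maybe (the name safeHead is conventional, not from the text):

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x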

Natural transformations may be composed, and there is an identity natural transformation from any functor to itself. It follows that functors between any two categories C and D form a category denoted by [C, D], where natural transformations play the role of morphisms. A hom-set in such a category is a set of natural transformations between two functors F and G, denoted by [C, D](F, G).

An invertible natural transformation is called a natural isomorphism. If two functors are naturally isomorphic they are essentially the same.

Arthropods and their Adjoints

Using a pair of functors that are the inverse of each other we may define equivalence of categories, but there is an even more useful concept of adjoint functors that compare the structures of two non-equivalent categories. The idea is that we have a “right” functor R going from category C to D and a “left” functor L going in the other direction, from D to C.


There are two possible compositions of these functors, both resulting in round trips or endofunctors. The categories would be equivalent if those endofunctors were naturally isomorphic to identity endofunctors. But for an adjunction, we impose weaker conditions. We require that there be two natural transformations (not necessarily isomorphisms):

η :: I_D -> R ∘ L
ε :: L ∘ R -> I_C

The first transformation η is called the unit; and the second ε, the counit of the adjunction.

In a small category objects form sets, so it’s possible to form a cartesian product of two small categories C and D. Objects in such a category C×D are pairs of objects <c, d>, and morphisms are pairs of morphisms <f, g>.

After these preliminaries, we are ready to define the categorical product in C using an adjunction. We choose C×C as the left category. The left functor is the diagonal functor Δ that maps any object c to a pair <c, c> and any morphism f to a pair of morphisms <f, f>. Its right adjoint, if it exists, maps a pair of objects <a, b> to their categorical product a×b.


Interestingly, the terminal object can also be defined using an adjunction. This time we choose, as the left category, a singleton category with one object and one (identity) morphism. The left functor maps any object c to the singleton object. Its right adjoint, if it exists, maps the singleton object to the terminal object in C.

A category with all products and the terminal object is called a cartesian category, or cartesian monoidal category. Why monoidal? Because the operation of taking the categorical product is monoidal. It’s associative, up to isomorphism; and its unit is the terminal object.

Incidentally, this is the same monoidal structure that we’ve seen in Set, but now it’s generalized to the level of other categories. There was another monoidal structure in Set induced by the disjoint sum. Its categorical generalization is given by the coproduct, with the initial object playing the role of the unit.

But what about the set of morphisms? In Set, morphisms between two sets a and b form a hom-set, which is itself an object of the same category Set. In an arbitrary category C, a hom-set C(a, b) is still a set — but now it’s not an object of C. That’s why it’s called the external hom-set. However, there are categories in which each external hom-set has a corresponding object called the internal hom. This object is also called an exponential, b^a. It can be defined using an adjunction, but only if the category supports products. It’s an adjunction in which the left and right categories are the same. The left endofunctor takes an object b and maps it to a product b×a, where a is an arbitrary fixed object. Its right adjoint maps an object b to the exponential b^a. The counit of this adjunction:

ε :: b^a × a -> b

is the evaluation function. In Haskell it has the following signature:

eval :: (a -> b, a) -> b

The Haskell function type a->b is equivalent to the exponential b^a.
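
The defining adjunction C(z × a, b) ≅ C(z, b^a) is witnessed in Haskell by the Prelude functions curry and uncurry, spelled out here for reference:

curry :: ((z, a) -> b) -> (z -> (a -> b))
curry f z a = f (z, a)

uncurry :: (z -> (a -> b)) -> ((z, a) -> b)
uncurry g (z, a) = g z a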

A category that has all products and exponentials together with the terminal object is called cartesian closed. Cartesian closed categories, or CCCs, play an important role in the semantics of programming languages.

Tensorosaurus Rex

We have already seen two very similar monoidal structures induced by products and coproducts. In mathematics, two is a crowd, so let’s look for a pattern. Both product and coproduct act as bifunctors C×C->C. Let’s call such a bifunctor a tensor product and write it as an infix operator a ⊗ b. As a bifunctor, the tensor product can also lift pairs of morphisms:

f :: a -> a'
g :: b -> b'
f ⊗ g :: a ⊗ b -> a' ⊗ b'
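
In Haskell, this lifting is captured by the Bifunctor class (as in Data.Bifunctor); the instances for pairs and Either correspond to the product and coproduct tensors:

class Bifunctor p where
  bimap :: (a -> a') -> (b -> b') -> p a b -> p a' b'

instance Bifunctor (,) where
  bimap f g (x, y) = (f x, g y)

instance Bifunctor Either where
  bimap f _ (Left x)  = Left  (f x)
  bimap _ g (Right y) = Right (g y)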

To define a monoid on top of a tensor product, we will require that it be associative — up to isomorphism:

α :: (a ⊗ b) ⊗ c -> a ⊗ (b ⊗ c)

We also need a unit object, which we will call i. The two unit laws are:

λ :: i ⊗ a -> a
ρ :: a ⊗ i -> a

A category with a tensor product that satisfies the above properties, plus some additional coherence conditions, is called a monoidal category.

We can now specialize the tensor product to categorical product, in which case the unit object is the terminal object; or to coproduct, in which case we choose the initial object as the unit. But there is an even more interesting operation that has all the properties of the tensor product. I’m talking about functor composition.


Functorosaurus

Functors between any two categories C and D form a functor category [C, D] with natural transformations playing the role of morphisms. In general, these functors don’t compose (their endpoints don’t match) unless we pick the target category to be the same as the source category.

Endofunctor Composition

In the endofunctor category [C, C] any two functors can be composed. But in [C, C] functors are objects, so functor composition becomes an operation on objects. For any two endofunctors F and G it produces a new endofunctor F∘G. It’s a binary operation, so it’s a potential candidate for a tensor product. Indeed, it is a bifunctor: it can be lifted to natural transformations, which are morphisms in [C, C]. It’s associative — in fact it’s strictly associative, the associator α is the identity natural transformation. The unit with respect to endofunctor composition is the identity functor I. So the category of endofunctors is a monoidal category.
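
On the level of Haskell code, the object part of this tensor is plain functor composition, essentially Data.Functor.Compose:

newtype Compose f g a = Compose (f (g a))

instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap h (Compose fga) = Compose (fmap (fmap h) fga)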

Unlike product and coproduct, which are symmetric up to isomorphism, endofunctor composition is not symmetric. In general, there is no relation between F∘G and G∘F.

Profunctor Composition

Different species of functors came up with their own composition strategies. Take for instance the profunctors, which are functors C^op×D -> Set. They generalize the idea of relations between objects in C and D. The sets they map to may be thought of as sets of proofs of the relationship. An empty set means that the two objects are not related. If you want to compose two relations, you have to find an element that’s common to the image of one relation and the source of the other (relations are not, in general, symmetric). The proofs of the new composite relation are pairs of proofs of individual relations. Symbolically, if p and q are such profunctors/relations, their composition can be written as:

exists x. (p a x, q x b)

Existential quantification in Haskell translates to polymorphic construction, so the actual definition is:

data PCompose p q a b = forall x . PCompose (p a x) (q x b)

In category theory, existential quantification is encoded as the coend, which is a generalization of a colimit for profunctors. The coend formula for the composition of two profunctors reads:

(p ⊗ q) a b = ∫^z p a z × q z b

The product here is the cartesian product of sets.

Profunctors, being functors, form a category in which morphisms are natural transformations. As long as the two categories that they relate are the same, any two profunctors can be composed using a coend. So profunctor composition is a good candidate for a tensor product in such a category. It is indeed associative, up to isomorphism. But what’s the unit of profunctor composition? It turns out that the simplest profunctor — the hom-functor — is the unit of composition, thanks to the Yoneda lemma:

∫^z C(a, z) × p z b ≅ p a b
∫^z p a z × C(z, b) ≅ p a b

Thus profunctors C^op×C -> Set form a monoidal category.


Day Convolution

Or consider Set-valued functors. They can be composed using Day convolution. For that, the category C must itself be monoidal. Day convolution of two functors C->Set is defined using a coend:

(f ★ g) a = ∫^{x y} f x × g y × C(x ⊗ y, a)

Here, the tensor product x ⊗ y comes from the monoidal category C; the other products are just cartesian products of sets (one of them being the hom-set).

As before, in Haskell, the coend turns into an existential quantifier, which can be written symbolically:

Day f g a = exists x y. ((f x, g y), (x, y) -> a)

and encoded as a polymorphic constructor:

data Day f g a = forall x y. Day (f x) (g y) ((x, y) -> a)

We use the fact that the category of Haskell types is monoidal with respect to cartesian product.

We can build a monoidal category based on Day convolution. The unit with respect to Day convolution is C(i, -), the hom-functor applied to i — the unit in the monoidal category C. For instance, the left identity can be derived from:

(C(i, -) ★ g) a = ∫^{x y} C(i, x) × g y × C(x ⊗ y, a)

Applying the Yoneda lemma, or “integrating over x,” we get:

∫^y g y × C(i ⊗ y, a)

Considering that i is the unit of the tensor product, we can perform the second integration to get g a.

The Monozoic Era

Monoidal categories are important because they provide rich grazing grounds for monoids. In a monoidal category we can define a more general monoid. It’s an object m with some special properties. These properties replace the usual definitions of multiplication and unit.

First, let’s reformulate the definition of a set-based monoid, taking into account the fact that Set is a monoidal category with respect to cartesian product.

A monoid is a set, so it’s an object in Set — let’s call it m. Multiplication maps pairs of elements of m back to m. These pairs are just elements of the cartesian product m × m. So multiplication is defined as a function:

μ :: m × m -> m

The unit of multiplication is a special element of m. We can select this element by providing a special morphism from the singleton set to m:

η :: () -> m

We can now express associativity and unit laws as properties of these two functions. The beauty of this formulation is that it generalizes easily to any cartesian category — just replace functions with morphisms and the unit () with the terminal object. There’s no reason to stop there: we can lift this definition all the way up to a monoidal category.

A monoid in a monoidal category is an object m together with two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

Here i is the unit object with respect to the tensor product ⊗. Monoidal laws can be expressed using the associator α and the two unitors, λ and ρ, of the monoidal category:

[Diagram: the associativity law, expressed using the associator α]

[Diagram: the unit laws, expressed using the unitors λ and ρ]

Having previously defined several interesting monoidal categories, we can now go digging for new monoids.

Monads

Let’s start with the category of endofunctors where the tensor product is functor composition. A monoid in the category of endofunctors is an endofunctor m and two morphisms. Remember that morphisms in a functor category are natural transformations. So we end up with two natural transformations:

μ :: m ∘ m -> m
η :: I -> m

where I is the identity functor. Their components at an object a are:

μ_a :: m (m a) -> m a
η_a :: a -> m a

This construct is easily recognizable as a monad. The associativity and unit laws are just monad laws. In Haskell, μ_a is called join and η_a is called return.
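
A minimal sketch of this presentation in Haskell (the primed names avoid a clash with the Prelude):

class Functor m => Monad' m where
  return' :: a -> m a        -- component of η
  join'   :: m (m a) -> m a  -- component of μ

-- The familiar bind is recovered as:  m >>= f  =  join' (fmap f m)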

Arrows

Let’s switch to the category of profunctors C^op×C -> Set with profunctor composition as the tensor product. A monoid in that category is a profunctor ar. Multiplication is defined by a natural transformation:

μ :: ar ⊗ ar -> ar

Its component at a, b is:

μ_{a b} :: (∫^z ar a z × ar z b) -> ar a b

To simplify this formula we need a very useful identity that relates coends to ends. A hom-set that starts at a coend is equivalent to an end of the hom-set:

C(∫^z p z z, y) ≅ ∫_z C(p z z, y)

Or, replacing external hom-sets with internal homs:

(∫^z p z z) -> y ≅ ∫_z (p z z -> y)

In Haskell, this formula is used to turn functions that take existential types to functions that are polymorphic:

(exists z. p z z) -> y ≅ forall z. (p z z -> y)

Intuitively, it makes perfect sense. If you want to define a function that takes an existential type, you have to be prepared to handle any type.

Using that identity, our multiplication formula can be rewritten as:

μ_{a b} :: ∫_z ((ar a z × ar z b) -> ar a b)

In Haskell, this derivation uses the existential quantifier:

mu :: (exists z. (ar a z, ar z b)) -> ar a b

As we discussed, a function from an existential type is equivalent to a polymorphic function:

forall z. (ar a z, ar z b) -> ar a b

or, after currying and dropping the redundant quantification:

ar a z -> ar z b -> ar a b

This looks very much like a composition of morphisms in a category. In Haskell, this function is known in the infix-operator form as:

(>>>) :: ar a z -> ar z b -> ar a b

Let’s see what we get as the monoidal unit. Remember that the unit object in the profunctor category is the hom-functor C(a, b).

η_{a b} :: C(a, b) -> ar a b

In Haskell, this polymorphic function is traditionally called arr:

arr :: (a -> b) -> ar a b

The whole construct is known in Haskell as a pre-arrow. The full arrow is defined as a monoid in the category of strong profunctors, with strength defined as a natural transformation:

st_{a b} :: p a b -> p (a, x) (b, x)

In Haskell, this function is called first.
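
For reference, this is a trimmed version of the interface from Control.Arrow, with arr coming from the monoidal unit and first from strength; the composition (>>>) is supplied by the Category superclass:

class Category ar => Arrow ar where
  arr   :: (a -> b) -> ar a b
  first :: ar a b -> ar (a, c) (b, c)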

Applicatives

There are several categorical formulations of what’s called in Haskell the applicative functor. To first approximation, Haskell’s type system is the category Set. To translate Haskell constructs to category theory, the safest approach is to just play with endofunctors in Set. But both Set and its endofunctors have a lot of extra structure, so I’d like to start in a slightly more general setting.

Let’s have a look at the monoidal category of functors [C, Set], with Day convolution as the tensor product, and C(i, -) as unit. A monoid in this category is a functor f with multiplication given by the natural transformation:

μ :: f ★ f -> f

and unit given by:

η :: C(i, -) -> f

It turns out that the existence of these two natural transformations is equivalent to the requirement that f be a lax monoidal functor, which is the basis of the definition of the applicative functor in Haskell.

A monoidal functor is a functor that maps the monoidal structure of one category to the monoidal structure of another category. It maps the tensor product, and it maps the unit object. In our case, the source category C has the monoidal structure given by the tensor product ⊗, and the target category Set is monoidal with respect to the cartesian product ×. A functor is monoidal if it doesn’t matter whether we first map two objects and then multiply them, or first multiply them and then map the result:

f x × f y ≅ f (x ⊗ y)

Also, the unit object in Set should be isomorphic to the result of mapping the unit object in C:

() ≅ f i

Here, () is the terminal object in Set and i is the unit object in C.

These conditions are relaxed in the definition of a lax monoidal functor. A lax monoidal functor replaces isomorphisms with regular unidirectional morphisms:

f x × f y -> f (x ⊗ y)
() -> f i

It can be shown that a monoid in the category [C, Set], with Day convolution as the tensor product, is equivalent to a lax monoidal functor.

The Haskell definition of Applicative doesn’t look like Day convolution or like a lax monoidal functor:

class Functor f => Applicative f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    pure :: a -> f a

You may recognize pure as a component of η, the natural transformation defining the monoid with respect to Day convolution. When you replace the category C with Set, the unit object C(i, -) turns into the identity functor. However, the operator <*> is lifted from the definition of yet another lax functor, the lax closed functor. It’s a functor that preserves the closed structure defined by the internal hom functor. In Set, the internal hom functor is just the arrow (->), hence the definition:

class Functor f => Closed f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    unit :: f ()

As long as the internal hom is defined through the adjunction with the product, a lax closed functor is equivalent to a lax monoidal functor.

Conclusion

It is pretty shocking to realize how many different animals share the same body plan — I’m talking here about the monoid as the skeleton of a myriad of different mathematical and programming constructs. And I haven’t even touched on the whole kingdom of enriched categories, where monoidal categories form the reservoir of hom-objects. Virtually all notions I’ve discussed here can be generalized to enriched categories, including functors, profunctors, the Yoneda lemma, Day convolution, and so on.

Glossary

  • Hadean Eon: Began with the formation of the Earth about 4.6 billion years ago. It’s the period before the earliest-known rocks.
  • Archean Eon: During the Archean, the Earth’s crust had cooled enough to allow the formation of continents.
  • Cambrian explosion: Relatively short evolutionary event, during which most major animal phyla appeared.
  • Arthropods: from Greek ἄρθρωσις árthrosis, “joint”
  • Tensor, from Latin tendere “to stretch”
  • Functor: from Latin fungi, “perform”

Unlike monads, which came into programming straight from category theory, applicative functors have their origins in programming. McBride and Paterson introduced applicative functors as a programming pearl in their paper Applicative programming with effects. They also provided a categorical interpretation of applicatives in terms of strong lax monoidal functors. It’s been accepted that, just like “a monad is a monoid in the category of endofunctors,” so “an applicative is a strong lax monoidal functor.”

The so-called “tensorial strength” seems to be important in categorical semantics, and in his seminal paper Notions of computation and monads, Moggi argued that effects should be described using strong monads. It makes sense, considering that a computation is done in a context, and you should be able to make the global context available under the monad. The fact that we don’t talk much about strong monads in Haskell is due to the fact that all functors in the category Set, which underlies Haskell’s type system, have canonical strength. So why do we talk about strength when dealing with applicative functors? I have looked into this question and have come to the conclusion that there is no fundamental reason, and that it’s okay to just say:

An applicative is a lax monoidal functor

In this post I’ll discuss different equivalent categorical definitions of the applicative functor. I’ll start with a lax closed functor, then move to a lax monoidal functor, and show the equivalence of the two definitions. Then I’ll introduce the calculus of ends and show that the third definition of the applicative functor as a monoid in a suitable functor category equipped with Day convolution is equivalent to the previous ones.

Applicative as a Lax Closed Functor

The standard definition of the applicative functor in Haskell reads:

class Functor f => Applicative f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    pure :: a -> f a

At first sight it doesn’t seem to involve a monoidal structure. It looks more like preserving function arrows (I added some redundant parentheses to suggest this interpretation).

Categorically, functors that “preserve arrows” are known as closed functors. Let’s look at a definition of a closed functor f between two categories C and D. We have to assume that both categories are closed, meaning that they have internal hom-objects for every pair of objects. Internal hom-objects are also called function objects or exponentials. They are normally defined through the right adjoint to the product functor:

C(z × a, b) ≅ C(z, a => b)

To distinguish between sets of morphisms and function objects (they are the same thing in Set), I will temporarily use double arrows for function objects.

We can take a functor f and act with it on the function object a=>b in the category C. We get an object f (a=>b) in D. Or we can map the two objects a and b from C to D and then construct the function object in D: f a => f b.


We call a functor closed if the two results are isomorphic (I have subscripted the two arrows with the categories where they are defined):

f (a =>_C b) ≅ (f a =>_D f b)

and if the functor preserves the unit object:

i_D ≅ f i_C

What’s the unit object? Normally, this is the unit with respect to the same product that was used to define the function object using the adjunction. I’m saying “normally,” because it’s possible to define a closed category without a product.

Note: The two arrows and the two i’s are defined with respect to two different products. The first isomorphism must be natural in both a and b. Also, to complete the picture, there are some diagrams that must commute.

The two isomorphisms that define a closed functor can be relaxed and replaced by unidirectional morphisms. The result is a lax closed functor:

f (a => b) -> (f a => f b)
i -> f i

This looks almost like the definition of Applicative, except for one problem: how can we recover the natural transformation we call pure from a single morphism i -> f i?

One way to do it is from the position of strength. An endofunctor f has tensorial strength if there is a natural transformation:

st_{c a} :: c ⊗ f a -> f (c ⊗ a)

Think of c as the context in which the computation f a is performed. Strength means that we can use this external context inside the computation.

In the category Set, with the tensor product replaced by cartesian product, all functors have canonical strength. In Haskell, we would define it as:

st :: Functor f => (c, f a) -> f (c, a)
st (c, fa) = fmap ((,) c) fa

The morphism in the definition of the lax closed functor translates to:

unit :: () -> f ()

Notice that, up to isomorphism, the unit type () is the unit with respect to cartesian product. The relevant isomorphisms are:

λ_a :: ((), a) -> a
ρ_a :: (a, ()) -> a

Here’s the derivation from Rivas and Jaskelioff’s Notions of Computation as Monoids:

    a
≅  (a, ())   -- unit law, ρ⁻¹
-> (a, f ()) -- lax unit
-> f (a, ()) -- strength
≅  f a       -- lifted unit law, f ρ

Strength is necessary if you’re starting with a lax closed (or monoidal — see the next section) endofunctor in an arbitrary closed (or monoidal) category and you want to derive pure within that category — not after you restrict it to Set.

There is, however, an alternative derivation using the Yoneda lemma:

f ()
≅ forall a. (() -> a) -> f a  -- Yoneda
≅ forall a. a -> f a -- because: (() -> a) ≅ a

We recover the whole natural transformation from a single value. The advantage of this derivation is that it generalizes beyond endofunctors and it doesn’t require strength. As we’ll see later, it also ties nicely with the Day-convolution definition of applicative. The Yoneda lemma only works for Set-valued functors, but so does Day convolution (there are enriched versions of both Yoneda and Day convolution, but I’m not going to discuss them here).
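
In Haskell, this derivation amounts to the observation that a single value of type f () determines the whole natural transformation (the name pureFromUnit is mine):

pureFromUnit :: Functor f => f () -> (a -> f a)
pureFromUnit funit a = fmap (const a) funit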

We can define the categorical version of Haskell’s applicative functor as a lax closed functor going from a closed category C to Set. It’s a functor equipped with a natural transformation:

f (a => b) -> (f a -> f b)

where a=>b is the internal hom-object in C (the second arrow is a function type in Set), and a function:

1 -> f i

where 1 is the singleton set and i is the unit object in C.

The importance of a categorical definition is that it comes with additional identities or “axioms.” A lax closed functor must be compatible with the structure of both categories. I will not go into details here, because we are really only interested in closed categories that are monoidal, where these axioms are easier to express.

The definition of a lax closed functor is easily translated to Haskell:

class Functor f => Closed f where
    (<*>) :: f (a -> b) -> f a -> f b
    unit :: f ()

Applicative as a Lax Monoidal Functor

Even though it’s possible to define a closed category without a monoidal structure, in practice we usually work with monoidal categories. This is reflected in the equivalent definition of Haskell’s applicative functor as a lax monoidal functor. In Haskell, we would write:

class Functor f => Monoidal f where
    (>*<) :: (f a, f b) -> f (a, b)
    unit :: f ()
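
As a sketch, here is an instance of this class for Maybe, the usual first example:

instance Monoidal Maybe where
  (>*<) (Just a, Just b) = Just (a, b)
  (>*<) _                = Nothing
  unit = Just ()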

This definition is equivalent to our previous definition of a closed functor. That’s because, as we’ve seen, a function object in a monoidal category is defined in terms of a product. We can show the equivalence in a more general categorical setting.

This time let’s start with a symmetric closed monoidal category C, in which the function object is defined through the right adjoint to the tensor product:

C(z ⊗ a, b) ≅ C(z, a => b)

As usual, the tensor product is associative and unital — with the unit object i — up to isomorphism. The symmetry is defined through natural isomorphism:

γ :: a ⊗ b -> b ⊗ a

A functor f between two monoidal categories is lax monoidal if there exist: (1) a natural transformation

f a ⊗ f b -> f (a ⊗ b)

and (2) a morphism

i -> f i

Notice that the products and units on either side of the two mappings are from different categories.

A (lax-) monoidal functor must also preserve associativity and unit laws.

For instance a triple product

f a ⊗ (f b ⊗ f c)

may be rearranged using an associator α to give

(f a ⊗ f b) ⊗ f c

then converted to

f (a ⊗ b) ⊗ f c

and then to

f ((a ⊗ b) ⊗ c)

Or it could be first converted to

f a ⊗ f (b ⊗ c)

and then to

f (a ⊗ (b ⊗ c))

These two should be equivalent under the associator in C.


Similarly, f a ⊗ i can be simplified to f a using the right unitor ρ in D. Or it could be first converted to f a ⊗ f i, then to f (a ⊗ i), and then to f a, using the right unitor in C. The two paths should be equivalent. (Similarly for the left identity.)


We will now consider functors from C to Set, with Set equipped with the usual cartesian product, and the singleton set as unit. A lax monoidal functor is defined by: (1) a natural transformation:

(f a, f b) -> f (a ⊗ b)

and (2) a choice of an element of the set f i (a function from 1 to f i picks an element from that set).

We need the target category to be Set because we want to be able to use the Yoneda lemma to show equivalence with the standard definition of applicative. I’ll come back to this point later.

The Equivalence

The definitions of a lax closed and a lax monoidal functor are equivalent when C is a closed symmetric monoidal category. The proof relies on the existence of the adjunction, in particular the unit and the counit of the adjunction:

η_a :: a -> (b => (a ⊗ b))
ε_b :: (a => b) ⊗ a -> b

For instance, let’s assume that f is lax-closed. We want to construct the mapping

(f a, f b) -> f (a ⊗ b)

First, we apply the lifted pair (unit, identity), that is (f η, id), to the left hand side. The lifted unit has the type:

f η :: f a -> f (b => a ⊗ b)

Applying this pair componentwise, we get:

(f (b => a ⊗ b), f b)

Now we can use (the uncurried version of) the lax-closed morphism:

(f (b => x), f b) -> f x

to get:

f (a ⊗ b)

Conversely, assuming the lax-monoidal property we can show that the functor is lax-closed, that is to say, implement the following function:

(f (a => b), f a) -> f b

First we use the lax monoidal morphism on the left hand side:

f ((a => b) ⊗ a)

and then use the counit (a.k.a. the evaluation morphism) to get the desired result f b.
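
The same two constructions can be sketched in Haskell, using the Closed and Monoidal classes defined in this post (written as free-standing conversions, to sidestep the clash between their unit methods):

pairFromClosed :: Closed f => (f a, f b) -> f (a, b)
pairFromClosed (fa, fb) = fmap (,) fa <*> fb                   -- lift the pairing, then apply

apFromMonoidal :: Monoidal f => f (a -> b) -> f a -> f b
apFromMonoidal ff fa = fmap (\(g, a) -> g a) ((>*<) (ff, fa))  -- pair up, then evaluate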

There is yet another presentation of applicatives using Day convolution. But before we get there, we need a little refresher on calculus.

Calculus of Ends

Ends and coends are very useful constructs generalizing limits and colimits. They are defined through universal constructions. They have a few fundamental properties that are used over and over in categorical calculations. I’ll just introduce the notation and a few important identities. We’ll be working in a symmetric monoidal category C with functors from C to Set and profunctors from C^op×C to Set. The end of a profunctor p is a set denoted by:

∫_a p a a

The most important thing about ends is that a set of natural transformations between two functors f and g can be represented as an end:

[C, Set](f, g) = ∫_a Set(f a, g a)

In Haskell, the end corresponds to universal quantification over a functor of mixed variance. For instance, the natural transformation formula takes the familiar form:

forall a. f a -> g a

The Yoneda lemma, which deals with natural transformations, can also be written using an end:

∫_z (C(a, z) -> f z) ≅ f a

In Haskell, we can write it as the equivalence:

forall z. ((a -> z) -> f z) ≅ f a

which is a generalization of the continuation passing transform.
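
The two directions of this isomorphism can be written directly (a minimal sketch; it needs the RankNTypes extension):

{-# LANGUAGE RankNTypes #-}

toYoneda :: Functor f => f a -> (forall z. (a -> z) -> f z)
toYoneda fa = \k -> fmap k fa

fromYoneda :: (forall z. (a -> z) -> f z) -> f a
fromYoneda g = g id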

The dual notion of coend is similarly written using an integral sign, with the “integration variable” in the superscript position:

∫^a p a a

In pseudo-Haskell, a coend is represented by an existential quantifier. It’s possible to define existential data types in Haskell by converting existential quantification to a universal one. The relevant identity in terms of coends and ends reads:

(∫^z p z z) -> y ≅ ∫_z (p z z -> y)

In Haskell, this formula is used to turn functions that take existential types to functions that are polymorphic:

(exists z. p z z) -> y ≅ forall z. (p z z -> y)

Intuitively, it makes perfect sense. If you want to define a function that takes an existential type, you have to be prepared to handle any type.

The equivalent of the Yoneda lemma for coends reads:

∫^z f z × C(z, a) ≅ f a

or, in pseudo-Haskell:

exists z. (f z, z -> a) ≅ f a

(The intuition is that the only thing you can do with this pair is to fmap the function over the first component.)

There is also a contravariant version of this identity:

∫^z C(a, z) × f z ≅ f a

where f is a contravariant functor (a.k.a., a presheaf). In pseudo-Haskell:

exists z. (a -> z, f z) ≅ f a

(The intuition is that the only thing you can do with this pair is to apply the contramap of the first component to the second component.)

Using coends we can define a tensor product in the category of functors [C, Set]. This product is called Day convolution:

(f ★ g) a = ∫^{x y} f x × g y × C(x ⊗ y, a)

It is a bifunctor in that category (read, it can be used to lift natural transformations). It’s associative and symmetric up to isomorphism. It also has a unit — the hom-functor C(i, -), where i is the monoidal unit in C. In other words, Day convolution imbues the category [C, Set] with monoidal structure.

Let’s verify the unit laws.

(C(i, -) ★ g) a = ∫^{x y} C(i, x) × g y × C(x ⊗ y, a)

We can use the contravariant Yoneda to “integrate over x” to get:

∫^y g y × C(i ⊗ y, a)

Considering that i is the unit of the tensor product in C, we get:

∫^y g y × C(y, a)

Covariant Yoneda lets us “integrate over y” to get the desired g a. The same method works for the right unit law.

Applicative as a Monoid

Given a monoidal category, we can always define a monoid as an object m equipped with two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

satisfying the laws of associativity and unitality.

We have shown that the functor category [C, Set] (with C a symmetric monoidal category) is monoidal under Day convolution. An object in this category is a functor f. The two morphisms that would make it a candidate for a monoid are natural transformations:

μ :: f ★ f -> f
η :: C(i, -) -> f

The a component of the natural transformation μ can be rewritten as:

(∫^{x y} f x × f y × C(x ⊗ y, a)) -> f a

which is equivalent to:

∫_{x y} (f x × f y × C(x ⊗ y, a) -> f a)

or, upon currying:

∫_{x y} (f x, f y) -> C(x ⊗ y, a) -> f a

It turns out that a monoid defined this way is equivalent to a lax monoidal functor. This was shown by Rivas and Jaskelioff. The following derivation is due to Bob Atkey.

The trick is to start with the whole set of natural transformations from f★f to f. The multiplication μ is just one of them. We’ll express the set of natural transformations as an end:

∫_a ((f ★ f) a -> f a)

Plugging in the formula for the a component of μ, we get:

∫_a ∫_{x y} (f x, f y) -> C(x ⊗ y, a) -> f a

The end over a does not involve the first argument, so we can move the integral sign:

∫_{x y} (f x, f y) -> ∫_a (C(x ⊗ y, a) -> f a)

Then we use the Yoneda lemma to “perform the integration” over a:

∫_{x y} (f x, f y) -> f (x ⊗ y)

You may recognize this as a set of natural transformations that define a lax monoidal functor. We have established a one-to-one correspondence between these natural transformations and the ones defining monoidal multiplication using Day convolution.

The remaining part is to show the equivalence between the unit with respect to Day convolution and the second part of the definition of the lax monoidal functor, the morphism:

1 -> f i

We start with the set of natural transformations that contains our η:

∫_a (C(i, a) -> f a)

By Yoneda, this is just f i. Picking an element from a set is equivalent to defining a morphism from the singleton set 1, so for any choice of η we get:

1 -> f i

and vice versa. The two definitions are equivalent.

Notice that the monoidal unit η under Day convolution becomes the definition of pure in the Haskell version of applicative. Indeed, when we replace the category C with Set, f becomes an endofunctor, and the unit of Day convolution C(i, -) becomes the identity functor Id. We get:

η :: Id -> f

or, in components:

pure :: a -> f a

So, strictly speaking, the Haskell definition of Applicative mixes the elements of the lax closed functor and the monoidal unit under Day convolution.

Acknowledgments

I’m grateful to Mauro Jaskelioff and Exequiel Rivas for correspondence and to Bob Atkey, Dimitri Chikhladze, and Mike Shulman for answering my questions on Math Overflow.


I’m a refugee. I fled Communist Poland and was granted political asylum in the United States. That was so long ago that I don’t think of myself as a refugee any more. I’m an American — not by birth but by choice. My understanding is that being an American has nothing to do with ethnicity, religion, or personal history. I became an American by accepting a certain system of values specified in the Constitution. Things like freedom of expression, freedom from persecution, equality, pursuit of happiness, etc. I’m also a Pole and proud of it. I speak the language, I know my history and culture. No contradiction here.

I’m a scientist, and I normally leave politics to others. In fact I came to the United States to get away from politics. In Poland, I was engaged in political struggle, I was a member of Solidarity, and I joined the resistance when Solidarity was crushed. I could have stayed and continued the fight, but I chose instead to leave and make my contribution to society in other areas.

There are times in history when it’s best for scientists to sit in their ivory towers and do what they are trained to do — science. There are times when it’s best for engineers to design new things, write software, and build gadgets that make life easier for everybody. But there are times when this is not enough. That’s why I’m interrupting my scheduled programming, my category theory for programmers blog, to say a few words about current events. Actually, first I’d like to reminisce a little.

When you live under a dictatorship, you have to develop certain skills. If a direct approach can get you in trouble, you try to manipulate the system. When martial law was imposed in Poland, all international travel was suspended. I was a grad student then, working on my Ph.D. in theoretical physics. Contact with scientists from abroad was very important to me. As soon as martial law was suspended, my supervisor and I decided to go for a visit — not to the West, mind you, but to the Soviet Union. But the authorities decided that giving passports to scientists was a great opportunity to make them work for the system. So before we could get permission to go abroad, we had to visit the Department of Security — the Secret Police — for an interview. From our friends, who were interviewed before, we knew that we’d be offered a choice: become an informant or forget about traveling abroad.

My professor went first. He was on time, but they kept him waiting outside the office forever. After an hour, he stormed out. He didn’t get the passport.

When I went to my interview, it started with some innocuous questions. I was asked who the chief of Solidarity at the University was. That was no secret — he was my office mate in the Physics Department. Then the discussion turned to my future employment at the University. The idea was to suggest that the Department of Security could help me keep my position, or get me fired. Knowing what was coming, I bluffed, saying that I was one of the brightest young physicists around, and my employment was perfectly secure. Then I started talking about my planned trip to the Soviet Union. I took my interviewer into confidence, and explained how horribly Soviet science was suffering because their government was not allowing their scientists to travel to the West, and how much better off Polish science was because of that. You have to realize that, even in the depths of the Department of Security of a Communist country, there was no love for our Soviet brethren. If we could beat them at science, all the better. I got my passport without any more hassle.

I was exaggerating a little, especially about me being so bright, but it’s true that there is an international community of scientists and engineers that knows no borders. Any impediment to free exchange of ideas and people is very detrimental to its prosperity and, by association, to the prosperity of the societies they live in.

I consider the recent Muslim ban — and that’s what it should be called — a direct attack on this community, on a par with climate-change denials and gag orders against climate scientists working for the government. It’s really hard to piss off scientists and engineers, so I consider this a major accomplishment of the new presidency.

You can make fun of us nerds as much as you want, but every time you send a tweet, you’re using the infrastructure created by us. The billions of metal-oxide field-effect transistors and the liquid-crystal display in your tablet were made possible by developments in quantum mechanics and materials science. The operating system was written by software engineers in languages based on the math developed by Alan Turing and Alonzo Church. Try denying that, and you’ll end up tweeting with a quill on parchment.

Scientists and engineers consider themselves servants of the society. We don’t make many demands and are quite happy to be left alone to do our stuff. But if this service is disrupted by clueless, power-hungry politicians, we will act. We are everywhere, and we know how to use the Internet — we invented it.

P. S. I keep comments to my blog under moderation because of spam. But I will also delete comments that I consider clueless.

Here’s a little anecdote about cluelessness that I heard a long time ago from my physicist friends in the Soviet Union. They had invited a guest scientist from the US to one of their conferences. They were really worried that he might say something politically charged and make future scientific exchanges impossible. So they asked him to, please, refrain from any political comments.

Time comes for the guest scientist to give a talk. And he starts with, “Before I came to the Soviet Union I was warned that I would be constantly minded by the secret police.” The director of the institute, who invited our scientist, is sitting in the first row between two KGB minders. All blood is leaving his face. The KGB minders stiffen in their seats. “I’m so happy that it turned out to be nonsense,” says the scientist and proceeds to give his talk. You see, it’s really hard to imagine what it’s like to live under a dictatorship unless you’ve experienced it yourself. Trust me, I’ve been there and I recognize the warning signs.


This is part 23 of Categories for Programmers. Previously: Monads Categorically. See the Table of Contents.

Now that we have covered monads, we can reap the benefits of duality and get comonads for free simply by reversing the arrows and working in the opposite category.

Recall that, at the most basic level, monads are about composing Kleisli arrows:

a -> m b

where m is a functor that is a monad. If we use the letter w (upside down m) for the comonad, we can define co-Kleisli arrows as morphisms of the type:

w a -> b

The analog of the fish operator for co-Kleisli arrows is defined as:

(=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)

For co-Kleisli arrows to form a category we also have to have an identity co-Kleisli arrow, which is called extract:

extract :: w a -> a

This is the dual of return. We also have to impose the laws of associativity as well as left- and right-identity. Putting it all together, we could define a comonad in Haskell as:

class Functor w => Comonad w where
    (=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)
    extract :: w a -> a

In practice, we use slightly different primitives, as we’ll see shortly.

The question is, what’s the use for comonads in programming?

Programming with Comonads

Let’s compare the monad with the comonad. A monad provides a way of putting a value in a container using return. It doesn’t give you access to a value or values stored inside. Of course, data structures that implement monads might provide access to their contents, but that’s considered a bonus. There is no common interface for extracting values from a monad. And we’ve seen the example of the IO monad that prides itself on never exposing its contents.

A comonad, on the other hand, provides the means of extracting a single value from it. It does not give the means to insert values. So if you want to think of a comonad as a container, it always comes pre-filled with contents, and it lets you peek at it.

Just as a Kleisli arrow takes a value and produces some embellished result — it embellishes it with context — a co-Kleisli arrow takes a value together with a whole context and produces a result. It’s an embodiment of contextual computation.

The Product Comonad

Remember the reader monad? We introduced it to tackle the problem of implementing computations that need access to some read-only environment e. Such computations can be represented as pure functions of the form:

(a, e) -> b

We used currying to turn them into Kleisli arrows:

a -> (e -> b)

But notice that these functions already have the form of co-Kleisli arrows. Let’s massage their arguments into the more convenient functor form:

data Product e a = P e a
  deriving Functor

We can easily define the composition operator by making the same environment available to the arrows that we are composing:

(=>=) :: (Product e a -> b) -> (Product e b -> c) -> (Product e a -> c)
f =>= g = \(P e a) -> let b = f (P e a)
                          c = g (P e b)
                       in c

The implementation of extract simply ignores the environment:

extract (P e a) = a

Not surprisingly, the product comonad can be used to perform exactly the same computations as the reader monad. In a way, the comonadic implementation of the environment is more natural — it follows the spirit of “computation in context.” On the other hand, monads come with the convenient syntactic sugar of the do notation.

The connection between the reader monad and the product comonad goes deeper, having to do with the fact that the reader functor is the right adjoint of the product functor. In general, though, comonads cover different notions of computation than monads. We’ll see more examples later.

It’s easy to generalize the Product comonad to arbitrary product types including tuples and records.
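
For instance, here’s a minimal sketch of the same comonad defined for a plain Haskell pair, written against the (=>=)/extract interface above (the pair functor ((,) e) simply mirrors Product e):

instance Comonad ((,) e) where
  f =>= g = \(e, a) -> g (e, f (e, a))
  extract (e, a) = a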

Dissecting the Composition

Continuing the process of dualization, we could go ahead and dualize monadic bind and join. Alternatively, we can repeat the process we used with monads, where we studied the anatomy of the fish operator. This approach seems more enlightening.

The starting point is the realization that the composition operator must produce a co-Kleisli arrow that takes w a and produces a c. The only way to produce a c is to apply the second function to an argument of the type w b:

(=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)
f =>= g = g ... 

But how can we produce a value of type w b that could be fed to g? We have at our disposal the argument of type w a and the function f :: w a -> b. The solution is to define the dual of bind, which is called extend:

extend :: (w a -> b) -> w a -> w b

Using extend we can implement composition:

f =>= g = g . extend f

Can we next dissect extend? You might be tempted to say, why not just apply the function w a -> b to the argument w a, but then you quickly realize that you’d have no way of converting the resulting b to w b. Remember, the comonad provides no means of lifting values. At this point, in the analogous construction for monads, we used fmap. The only way we could use fmap here would be if we had something of the type w (w a) at our disposal. If we could only turn w a into w (w a). And, conveniently, that would be exactly the dual of join. We call it duplicate:

duplicate :: w a -> w (w a)

So, just as with the monad, we have three equivalent definitions of the comonad: using co-Kleisli arrows, extend, or duplicate. Here’s the Haskell definition taken directly from the Control.Comonad library:

class Functor w => Comonad w where
  extract :: w a -> a
  duplicate :: w a -> w (w a)
  duplicate = extend id
  extend :: (w a -> b) -> w a -> w b
  extend f = fmap f . duplicate

Default implementations are provided for extend in terms of duplicate and for duplicate in terms of extend, so you only need to override one of them.
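
As a quick sanity check, here’s how the Product comonad from the previous section might be written against this interface (a sketch, using the P constructor defined earlier):

instance Comonad (Product e) where
  extract (P e a) = a
  duplicate (P e a) = P e (P e a)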

The intuition behind these functions is based on the idea that, in general, a comonad can be thought of as a container filled with values of type a (the product comonad was a special case of just one value). There is a notion of the “current” value, one that’s easily accessible through extract. A co-Kleisli arrow performs some computation that is focused on the current value, but it has access to all the surrounding values. Think of Conway’s Game of Life. Each cell contains a value (usually just True or False). A comonad corresponding to the Game of Life would be a grid of cells focused on the “current” cell.

So what does duplicate do? It takes a comonadic container w a and produces a container of containers w (w a). The idea is that each of these containers is focused on a different a inside w a. In the game of life, you would get a grid of grids, each cell of the outer grid containing an inner grid that’s focused on a different cell.

Now look at extend. It takes a co-Kleisli arrow and a comonadic container w a filled with as. It applies the computation to all of these as, replacing them with bs. The result is a comonadic container filled with bs. extend does it by shifting the focus from one a to another and applying the co-Kleisli arrow to each of them in turn. In the game of life, the co-Kleisli arrow would calculate the new state of the current cell. To do that, it would look at its context — presumably its nearest neighbors. The default implementation of extend illustrates this process. First we call duplicate to produce all possible foci and then we apply f to each of them.

The Stream Comonad

This process of shifting the focus from one element of the container to another is best illustrated with the example of an infinite stream. Such a stream is just like a list, except that it doesn’t have the empty constructor:

data Stream a = Cons a (Stream a)

It’s trivially a Functor:

instance Functor Stream where
    fmap f (Cons a as) = Cons (f a) (fmap f as)

The focus of a stream is its first element, so here’s the implementation of extract:

extract (Cons a _) = a

duplicate produces a stream of streams, each focused on a different element.

duplicate (Cons a as) = Cons (Cons a as) (duplicate as)

The first element is the original stream, the second element is the tail of the original stream, the third element is its tail, and so on, ad infinitum.

Here’s the complete instance:

instance Comonad Stream where
    extract (Cons a _) = a
    duplicate (Cons a as) = Cons (Cons a as) (duplicate as)

This is a very functional way of looking at streams. In an imperative language, we would probably start with a method advance that shifts the stream by one position. Here, duplicate produces all shifted streams in one fell swoop. Haskell’s laziness makes this possible and even desirable. Of course, to make a Stream practical, we would also implement the analog of advance:

tail :: Stream a -> Stream a
tail (Cons a as) = as

but it’s never part of the comonadic interface.

If you have any experience with digital signal processing, you’ll see immediately that a co-Kleisli arrow for a stream is just a digital filter, and extend produces a filtered stream.

As a simple example, let’s implement the moving average filter. Here’s a function that sums n elements of a stream:

sumS :: Num a => Int -> Stream a -> a
sumS n (Cons a as) = if n <= 0 then 0 else a + sumS (n - 1) as

Here’s the function that calculates the average of the first n elements of the stream:

average :: Fractional a => Int -> Stream a -> a
average n stm = (sumS n stm) / (fromIntegral n)

Partially applied average n is a co-Kleisli arrow, so we can extend it over the whole stream:

movingAvg :: Fractional a => Int -> Stream a -> Stream a
movingAvg n = extend (average n)

The result is the stream of running averages.
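
To see it in action, here’s a quick sketch (nats and takeS are ad hoc helpers, not part of the text):

-- the infinite stream 1.0, 2.0, 3.0, ...
nats :: Stream Double
nats = go 1 where go n = Cons n (go (n + 1))

-- collect the first n elements of a stream into a list
takeS :: Int -> Stream a -> [a]
takeS n (Cons a as) = if n <= 0 then [] else a : takeS (n - 1) as

-- takeS 5 (movingAvg 3 nats) ==> [2.0,3.0,4.0,5.0,6.0]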

A stream is an example of a unidirectional, one-dimensional comonad. It can be easily made bidirectional or extended to two or more dimensions.
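
Here’s a minimal sketch of the bidirectional version, a list zipper (all the names are ad hoc, and both context lists are assumed to be infinite so that left and right are total):

data Zipper a = Zipper [a] a [a]  -- left context, focus, right context

left, right :: Zipper a -> Zipper a
left  (Zipper (l:ls) a rs) = Zipper ls l (a : rs)
right (Zipper ls a (r:rs)) = Zipper (a : ls) r rs

instance Functor Zipper where
  fmap f (Zipper ls a rs) = Zipper (map f ls) (f a) (map f rs)

instance Comonad Zipper where
  extract (Zipper _ a _) = a
  duplicate z = Zipper (drop 1 (iterate left z)) z (drop 1 (iterate right z))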

Comonad Categorically

Defining a comonad in category theory is a straightforward exercise in duality. As with the monad, we start with an endofunctor T. The two natural transformations, η and μ, that define the monad are simply reversed for the comonad:

ε :: T -> I
δ :: T -> T²

The components of these transformations correspond to extract and duplicate. Comonad laws are the mirror image of monad laws. No big surprise here.

Then there is the derivation of the monad from an adjunction. Duality reverses an adjunction: the left adjoint becomes the right adjoint and vice versa. And, since the composition R ∘ L defines a monad, L ∘ R must define a comonad. The counit of the adjunction:

ε :: L ∘ R -> I

is indeed the same ε that we see in the definition of the comonad — or, in components, as Haskell’s extract. We can also use the unit of the adjunction:

η :: I -> R ∘ L

to insert an R ∘ L in the middle of L ∘ R and produce L ∘ R ∘ L ∘ R. Making T² from T defines the δ, and that completes the definition of the comonad.

We’ve also seen that the monad is a monoid. The dual of this statement would require the use of a comonoid, so what’s a comonoid? The original definition of a monoid as a single-object category doesn’t dualize to anything interesting. When you reverse the direction of all endomorphisms, you get another monoid. Recall, however, that in our approach to a monad, we used a more general definition of a monoid as an object in a monoidal category. The construction was based on two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

The reversal of these morphisms produces a comonoid in a monoidal category:

δ :: m -> m ⊗ m
ε :: m -> i

One can write a definition of a comonoid in Haskell:

class Comonoid m where
  split   :: m -> (m, m)
  destroy :: m -> ()

but it is rather trivial. Obviously destroy ignores its argument.

destroy _ = ()

split is just a pair of functions:

split x = (f x, g x)

Now consider comonoid laws that are dual to the monoid unit laws.

lambda . bimap destroy id . split = id
rho . bimap id destroy . split = id

Here, lambda and rho are the left and right unitors, respectively (see the definition of monoidal categories). Plugging in the definitions, we get:

lambda (bimap destroy id (split x))
= lambda (bimap destroy id (f x, g x))
= lambda ((), g x)
= g x

which proves that g = id. Similarly, the second law expands to f = id. In conclusion:

split x = (x, x)

which shows that in Haskell (and, in general, in the category Set) every object is a trivial comonoid.

Fortunately there are other more interesting monoidal categories in which to define comonoids. One of them is the category of endofunctors. And it turns out that, just like the monad is a monoid in the category of endofunctors,

The comonad is a comonoid in the category of endofunctors.

The Store Comonad

Another important example of a comonad is the dual of the state monad. It’s called the costate comonad or, alternatively, the store comonad.

We’ve seen before that the state monad is generated by the adjunction that defines the exponentials:

L z = z × s
R a = s ⇒ a

We’ll use the same adjunction to define the costate comonad. A comonad is defined by the composition L ∘ R:

L (R a) = (s ⇒ a) × s

Translating this to Haskell, we start with the adjunction between the Prod functor on the left and the Reader functor on the right. Composing Prod after Reader is equivalent to the following definition:

data Store s a = Store (s -> a) s

The counit of the adjunction taken at the object a is the morphism:

εa :: ((s ⇒ a) × s) -> a

or, in Haskell notation:

counit (Prod (Reader f, s)) = f s

This becomes our extract:

extract (Store f s) = f s

The unit of the adjunction:

unit a = Reader (\s -> Prod (a, s))

can be rewritten as partially applied data constructor:

Store f :: s -> Store s a

We construct δ, or duplicate, as the horizontal composition:

δ :: L ∘ R -> L ∘ R ∘ L ∘ R
δ = L ∘ η ∘ R

We have to sneak η through the leftmost L, which is the Prod functor. It means acting with η, or Store f, on the left component of the pair (that’s what fmap for Prod would do). We get:

duplicate (Store f s) = Store (Store f) s

(Remember that, in the formula for δ, L and R stand for identity natural transformations whose components are identity morphisms.)

Here’s the complete definition of the Store comonad:

instance Comonad (Store s) where
  extract (Store f s) = f s
  duplicate (Store f s) = Store (Store f) s

You may think of the Reader part of Store as a generalized container of as that are keyed using elements of the type s. For instance, if s is Int, Reader Int a is an infinite bidirectional stream of as. Store pairs this container with a value of the key type. For instance, Reader Int a is paired with an Int. In this case, extract uses this integer to index into the infinite stream. You may think of the second component of Store as the current position.

Continuing with this example, duplicate creates a new infinite stream indexed by an Int. This stream contains streams as its elements. In particular, at the current position, it contains the original stream. But if you use some other Int (positive or negative) as the key, you’d obtain a shifted stream positioned at that new index.

In general, you can convince yourself that when extract acts on the duplicated Store it produces the original Store (in fact, the identity law for the comonad states that extract . duplicate = id).
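
Here’s a quick sketch of this intuition (squares is an ad hoc example):

-- a "bidirectional stream" of squares, currently focused at index 3
squares :: Store Int Int
squares = Store (\n -> n * n) 3

-- extract squares                        ==> 9
-- extract (extract (duplicate squares))  ==> 9   (extract . duplicate = id)
-- a copy shifted to index 5:
--   case duplicate squares of Store g _ -> extract (g 5)   ==> 25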

The Store comonad plays an important role as the theoretical basis for the Lens library. Conceptually, the Store s a comonad encapsulates the idea of “focusing” (like a lens) on a particular substructure of the data type a using the type s as an index. In particular, a function of the type:

a -> Store s a

is equivalent to a pair of functions:

set :: a -> s -> a
get :: a -> s

If a is a product type, set could be implemented as setting the field of type s inside of a while returning the modified version of a. Similarly, get could be implemented to read the value of the s field from a. We’ll explore these ideas more in the next section.
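
Here’s a sketch of that equivalence (toStore and fromStore are ad hoc names):

toStore :: (a -> s -> a) -> (a -> s) -> (a -> Store s a)
toStore set get a = Store (set a) (get a)

fromStore :: (a -> Store s a) -> (a -> s -> a, a -> s)
fromStore f = ( \a s -> let Store setter _ = f a in setter s
              , \a   -> let Store _ pos    = f a in pos )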

Challenges

  1. Implement Conway’s Game of Life using the Store comonad. Hint: What type do you pick for s?

Acknowledgments

I’m grateful to Edward Kmett for reading the draft of this post and pointing out flaws in my reasoning.

Next: F-Algebras.


This is part 22 of Categories for Programmers. Previously: Monads and Effects. See the Table of Contents.

If you mention monads to a programmer, you’ll probably end up talking about effects. To a mathematician, monads are about algebras. We’ll talk about algebras later — they play an important role in programming — but first I’d like to give you a little intuition about their relation to monads. For now, it’s a bit of a hand-waving argument, but bear with me.

Algebra is about creating, manipulating, and evaluating expressions. Expressions are built using operators. Consider this simple expression:

x² + 2x + 1

This expression is formed using variables like x, and constants like 1 or 2, bound together with operators like plus or times. As programmers, we often think of expressions as trees.

[diagram: exptree]

Trees are containers so, more generally, an expression is a container for storing variables. In category theory, we represent containers as endofunctors. If we assign the type a to the variable x, our expression will have the type m a, where m is an endofunctor that builds expression trees. (Nontrivial branching expressions are usually created using recursively defined endofunctors.)

What’s the most common operation that can be performed on an expression? It’s substitution: replacing variables with expressions. For instance, in our example, we could replace x with y - 1 to get:

(y - 1)² + 2(y - 1) + 1

Here’s what happened: We took an expression of type m a and applied a transformation of type a -> m b (b represents the type of y). The result is an expression of type m b. Let me spell it out:

m a -> (a -> m b) -> m b

Yes, that’s the signature of monadic bind.
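
Here’s a minimal sketch of that idea in Haskell (the Expr type and the names below are ad hoc illustrations, not something defined in the text):

data Expr a
  = Var a
  | Const Int
  | Plus  (Expr a) (Expr a)
  | Times (Expr a) (Expr a)

-- substitution has exactly the shape of bind: m a -> (a -> m b) -> m b
subst :: Expr a -> (a -> Expr b) -> Expr b
subst (Var x)     k = k x
subst (Const n)   _ = Const n
subst (Plus  l r) k = Plus  (subst l k) (subst r k)
subst (Times l r) k = Times (subst l k) (subst r k)

-- x² + 2x + 1 with x replaced by y - 1:
-- subst (Plus (Plus (Times (Var "x") (Var "x"))
--              (Times (Const 2) (Var "x")))
--             (Const 1))
--       (\_ -> Plus (Var "y") (Const (-1)))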

That was a bit of motivation. Now let’s get to the math of the monad. Mathematicians use different notation than programmers. They prefer to use the letter T for the endofunctor, and Greek letters: μ for join and η for return. Both join and return are polymorphic functions, so we can guess that they correspond to natural transformations.

Therefore, in category theory, a monad is defined as an endofunctor T equipped with a pair of natural transformations μ and η.

μ is a natural transformation from the square of the functor T² back to T. The square is simply the functor composed with itself, T ∘ T (we can only do this kind of squaring for endofunctors).

μ :: T² -> T

The component of this natural transformation at an object a is the morphism:

μa :: T (T a) -> T a

which, in Hask, translates directly to our definition of join.

η is a natural transformation between the identity functor I and T:

η :: I -> T

Considering that the action of I on the object a is just a, the component of η is given by the morphism:

ηa :: a -> T a

which translates directly to our definition of return.

These natural transformations must satisfy some additional laws. One way of looking at it is that these laws let us define a Kleisli category for the endofunctor T. Remember that a Kleisli arrow between a and b is defined as a morphism a -> T b. The composition of two such arrows (I’ll write it as a circle with the subscript T) can be implemented using μ:

g ∘T f = μc ∘ (T g) ∘ f

where

f :: a -> T b
g :: b -> T c

Here T, being a functor, can be applied to the morphism g. It might be easier to recognize this formula in Haskell notation:

f >=> g = join . fmap g . f

or, in components:

(f >=> g) a = join (fmap g (f a))

In terms of the algebraic interpretation, we are just composing two successive substitutions.

For Kleisli arrows to form a category we want their composition to be associative, and ηa to be the identity Kleisli arrow at a. This requirement can be translated to monadic laws for μ and η. But there is another way of deriving these laws that makes them look more like monoid laws. In fact μ is often called multiplication, and η unit.

Roughly speaking, the associativity law states that the two ways of reducing the cube of T, T³, down to T must give the same result. Two unit laws (left and right) state that when η is applied to T and then reduced by μ, we get back T.

Things are a little tricky because we are composing natural transformations and functors. So a little refresher on horizontal composition is in order. For instance, T³ can be seen as a composition of T after T². We can apply to it the horizontal composition of two natural transformations:

IT ∘ μ

[diagram: assoc1]

and get T∘T, which can be further reduced to T by applying μ. IT is the identity natural transformation from T to T. You will often see the notation for this type of horizontal composition IT ∘ μ shortened to T∘μ. This notation is unambiguous because it makes no sense to compose a functor with a natural transformation, therefore T must mean IT in this context.

We can also draw the diagram in the (endo-) functor category [C, C]:

[diagram: assoc2]

Alternatively, we can treat T³ as the composition of T²∘T and apply μ∘T to it. The result is also T∘T which, again, can be reduced to T using μ. We require that the two paths produce the same result.

[diagram: assoc]

Similarly, we can apply the horizontal composition η∘T to the composition of the identity functor I after T to obtain T², which can then be reduced using μ. The result should be the same as if we applied the identity natural transformation directly to T. And, by analogy, the same should be true for T∘η.

[diagram: unitlawcomp-1]

You can convince yourself that these laws guarantee that the composition of Kleisli arrows indeed satisfies the laws of a category.

The similarities between a monad and a monoid are striking. We have multiplication μ, unit η, associativity, and unit laws. But our definition of a monoid is too narrow to describe a monad as a monoid. So let’s generalize the notion of a monoid.

Monoidal Categories

Let’s go back to the conventional definition of a monoid. It’s a set with a binary operation and a special element called unit. In Haskell, this can be expressed as a typeclass:

class Monoid m where
    mappend :: m -> m -> m
    mempty  :: m

The binary operation mappend must be associative and unital (i.e., multiplication by the unit mempty is a no-op).

Notice that, in Haskell, the definition of mappend is curried. It can be interpreted as mapping every element of m to a function:

mappend :: m -> (m -> m)

It’s this interpretation that gives rise to the definition of a monoid as a single-object category where endomorphisms (m -> m) represent the elements of the monoid. But because currying is built into Haskell, we could as well have started with a different definition of multiplication:

mu :: (m, m) -> m

Here, the cartesian product (m, m) becomes the source of pairs to be multiplied.

This definition suggests a different path to generalization: replacing the cartesian product with categorical product. We could start with a category where products are globally defined, pick an object m there, and define multiplication as a morphism:

μ :: m × m -> m

We have one problem though: In an arbitrary category we can’t peek inside an object, so how do we pick the unit element? There is a trick to it. Remember how element selection is equivalent to a function from the singleton set? In Haskell, we could replace the definition of mempty with a function:

eta :: () -> m

The singleton is the terminal object in Set, so it’s natural to generalize this definition to any category that has a terminal object t:

η :: t -> m

This lets us pick the unit “element” without having to talk about elements.
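
In Haskell, this alternative presentation of a monoid could be sketched as a class of its own (the names are ad hoc, to avoid clashing with the standard Monoid class):

class MuMonoid m where
  mu  :: (m, m) -> m
  eta :: () -> m

-- for example, the additive monoid on integers:
instance MuMonoid Int where
  mu (x, y) = x + y
  eta ()    = 0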

Unlike in our previous definition of a monoid as a single-object category, monoidal laws here are not automatically satisfied — we have to impose them. But in order to formulate them we have to establish the monoidal structure of the underlying categorical product itself. Let’s recall how monoidal structure works in Haskell first.

We start with associativity. In Haskell, the corresponding equational law is:

mu x (mu y z) = mu (mu x y) z

Before we can generalize it to other categories, we have to rewrite it as an equality of functions (morphisms). We have to abstract it away from its action on individual variables — in other words, we have to use point-free notation. Knowing that the cartesian product is a bifunctor, we can write the left hand side as:

(mu . bimap id mu)(x, (y, z))

and the right hand side as:

(mu . bimap mu id)((x, y), z)

This is almost what we want. Unfortunately, the cartesian product is not strictly associative — (x, (y, z)) is not the same as ((x, y), z) — so we can’t just write point-free:

mu . bimap id mu = mu . bimap mu id

On the other hand, the two nestings of pairs are isomorphic. There is an invertible function called the associator that converts between them:

alpha :: ((a, b), c) -> (a, (b, c))
alpha ((x, y), z) = (x, (y, z))

With the help of the associator, we can write the point-free associativity law for mu:

mu . bimap id mu . alpha = mu . bimap mu id

We can apply a similar trick to unit laws which, in the new notation, take the form:

mu (eta (), x) = x
mu (x, eta ()) = x

They can be rewritten as:

(mu . bimap eta id) ((), x) = lambda ((), x)
(mu . bimap id eta) (x, ()) = rho (x, ())

The isomorphisms lambda and rho are called the left and right unitor, respectively. They witness the fact that the unit () is the identity of the cartesian product up to isomorphism:

lambda :: ((), a) -> a
lambda ((), x) = x
rho :: (a, ()) -> a
rho (x, ()) = x

The point-free versions of the unit laws are therefore:

mu . bimap id eta = lambda
mu . bimap eta id = rho

We have formulated point-free monoidal laws for mu and eta using the fact that the underlying cartesian product itself acts like a monoidal multiplication in the category of types. Keep in mind though that the associativity and unit laws for the cartesian product are valid only up to isomorphism.

It turns out that these laws can be generalized to any category with products and a terminal object. Categorical products are indeed associative up to isomorphism and the terminal object is the unit, also up to isomorphism. The associator and the two unitors are natural isomorphisms. The laws can be represented by commuting diagrams.

[diagram: assocmon]

Notice that, because the product is a bifunctor, it can lift a pair of morphisms — in Haskell this was done using bimap.

We could stop here and say that we can define a monoid on top of any category with categorical products and a terminal object. As long as we can pick an object m and two morphisms μ and η that satisfy monoidal laws, we have a monoid. But we can do better than that. We don’t need a full-blown categorical product to formulate the laws for μ and η. Recall that a product is defined through a universal construction that uses projections. We haven’t used any projections in our formulation of monoidal laws.

A bifunctor that behaves like a product without being a product is called a tensor product, often denoted by the infix operator ⊗. A definition of a tensor product in general is a bit tricky, but we won’t worry about it. We’ll just list its properties — the most important being associativity up to isomorphism.

Similarly, we don’t need the object t to be terminal. We never used its terminal property — namely, the existence of a unique morphism from any object to it. What we require is that it works well in concert with the tensor product. Which means that we want it to be the unit of the tensor product, again, up to isomorphism. Let’s put it all together:

A monoidal category is a category C equipped with a bifunctor called the tensor product:

⊗ :: C × C -> C

and a distinct object i called the unit object, together with three natural isomorphisms called, respectively, the associator and the left and right unitors:

αa b c :: (a ⊗ b) ⊗ c -> a ⊗ (b ⊗ c)
λa :: i ⊗ a -> a
ρa :: a ⊗ i -> a

(There is also a coherence condition for simplifying a quadruple tensor product.)

What’s important is that a tensor product describes many familiar bifunctors. In particular, it works for a product, a coproduct and, as we’ll see shortly, for the composition of endofunctors (and also for some more esoteric products like Day convolution). Monoidal categories will play an essential role in the formulation of enriched categories.

Monoid in a Monoidal Category

We are now ready to define a monoid in a more general setting of a monoidal category. We start by picking an object m. Using the tensor product we can form powers of m. The square of m is m ⊗ m. There are two ways of forming the cube of m, but they are isomorphic through the associator. Similarly for higher powers of m (that’s where we need the coherence conditions). To form a monoid we need to pick two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

where i is the unit object for our tensor product.

[diagram: monoid-1]

These morphisms have to satisfy associativity and unit laws, which can be expressed in terms of the following commuting diagrams:

[diagram: assoctensor]

[diagram: unitmon]

Notice that it’s essential that the tensor product be a bifunctor because we need to lift pairs of morphisms to form products such as μ ⊗ id or η ⊗ id. These diagrams are just a straightforward generalization of our previous results for categorical products.

Monads as Monoids

Monoidal structures pop up in unexpected places. One such place is the functor category. If you squint a little, you might be able to see functor composition as a form of multiplication. The problem is that not any two functors can be composed — the target category of one has to be the source category of the other. That’s just the usual rule of composition of morphisms — and, as we know, functors are indeed morphisms in the category Cat. But just like endomorphisms (morphisms that loop back to the same object) are always composable, so are endofunctors. For any given category C, endofunctors from C to C form the functor category [C, C]. Its objects are endofunctors, and morphisms are natural transformations between them. We can take any two objects from this category, say endofunctors F and G, and produce a third object F ∘ G — an endofunctor that’s their composition.

Is endofunctor composition a good candidate for a tensor product? First, we have to establish that it’s a bifunctor. Can it be used to lift a pair of morphisms — here, natural transformations? The signature of the analog of bimap for the tensor product would look something like this:

bimap :: (a -> b) -> (c -> d) -> (a ⊗ c -> b ⊗ d)

If you replace objects by endofunctors, arrows by natural transformations, and tensor products by composition, you get:

(F -> F') -> (G -> G') -> (F ∘ G -> F' ∘ G')

which you may recognize as the special case of horizontal composition.

[diagram: horizcomp]

We also have at our disposal the identity endofunctor I, which can serve as the identity for endofunctor composition — our new tensor product. Moreover, functor composition is associative. In fact associativity and unit laws are strict — there’s no need for the associator or the two unitors. So endofunctors form a strict monoidal category with functor composition as tensor product.
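
As a Haskell sketch of this lifting (the bimap of our new tensor product), one could write something like the following, using Compose from Data.Functor.Compose to represent functor composition (RankNTypes is needed, and naturality is not enforced by the types):

{-# LANGUAGE RankNTypes #-}
import Data.Functor.Compose (Compose (..))

-- horizontal composition: lift a pair of natural transformations
-- to a natural transformation between the composed endofunctors
horiz :: Functor f
      => (forall a. f a -> f' a)
      -> (forall a. g a -> g' a)
      -> (forall a. Compose f g a -> Compose f' g' a)
horiz alpha beta (Compose fga) = Compose (alpha (fmap beta fga))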

What’s a monoid in this category? It’s an object — that is an endofunctor T; and two morphisms — that is natural transformations:

μ :: T ∘ T -> T
η :: I -> T

Not only that, here are the monoid laws:

[diagram: assoc]

[diagram: unitlawcomp]

They are exactly the monad laws we’ve seen before. Now you understand the famous quote from Saunders Mac Lane:

All told, a monad is just a monoid in the category of endofunctors.

You might have seen it emblazoned on some t-shirts at functional programming conferences.

Monads from Adjunctions

An adjunction, L ⊣ R, is a pair of functors going back and forth between two categories C and D. There are two ways of composing them giving rise to two endofunctors, R ∘ L and L ∘ R. As per an adjunction, these endofunctors are related to identity functors through two natural transformations called unit and counit:

η :: ID -> R ∘ L
ε :: L ∘ R -> IC

Immediately we see that the unit of an adjunction looks just like the unit of a monad. It turns out that the endofunctor R ∘ L is indeed a monad. All we need is to define the appropriate μ to go with the η. That’s a natural transformation between the square of our endofunctor and the endofunctor itself or, in terms of the adjoint functors:

R ∘ L ∘ R ∘ L -> R ∘ L

And, indeed, we can use the counit to collapse the L ∘ R in the middle. The exact formula for μ is given by the horizontal composition:

μ = R ∘ ε ∘ L

Monadic laws follow from the identities satisfied by the unit and counit of the adjunction and the interchange law.

We don’t see a lot of monads derived from adjunctions in Haskell, because an adjunction usually involves two categories. However, the definition of an exponential, or function object, is an exception. Here are the two endofunctors that form this adjunction:

L z = z × s
R b = s ⇒ b

You may recognize their composition as the familiar state monad:

R (L z) = s ⇒ (z × s)

We’ve seen this monad before in Haskell:

newtype State s a = State (s -> (a, s))

Let’s also translate the adjunction to Haskell. The left functor is the product functor:

newtype Prod s a = Prod (a, s)

and the right functor is the reader functor:

newtype Reader s a = Reader (s -> a)
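
The Adjunction class used below is not spelled out in the text; here’s a minimal sketch of what’s being assumed (a fuller version lives in the adjunctions package; MultiParamTypeClasses is needed):

{-# LANGUAGE MultiParamTypeClasses #-}

-- f is the left adjoint, u the right adjoint
class Adjunction f u where
  unit   :: a -> u (f a)
  counit :: f (u a) -> a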

They form the adjunction:

instance Adjunction (Prod s) (Reader s) where
  counit (Prod (Reader f, s)) = f s
  unit a = Reader (\s -> Prod (a, s))

You can easily convince yourself that the composition of the reader functor after the product functor is indeed equivalent to the state functor:

newtype State s a = State (s -> (a, s))

As expected, the unit of the adjunction is equivalent to the return function of the state monad. The counit acts by evaluating a function acting on its argument. This is recognizable as the uncurried version of the function runState:

runState :: State s a -> s -> (a, s)
runState (State f) s = f s

(uncurried, because in counit it acts on a pair).

We can now define join for the state monad as a component of the natural transformation μ. For that we need a horizontal composition of three natural transformations:

μ = R ∘ ε ∘ L

In other words, we need to sneak the counit ε across one level of the reader functor. We can’t just call fmap directly, because the compiler would pick the one for the State functor, rather than the Reader functor. But recall that fmap for the reader functor is just left function composition. So we’ll use function composition directly.

We have to first peel off the data constructor State to expose the function inside the State functor. This is done using runState:

ssa :: State s (State s a)
runState ssa :: s -> (State s a, s)

Then we left-compose it with the counit, which is defined by uncurry runState. Finally, we clothe it back in the State data constructor:

join :: State s (State s a) -> State s a
join ssa = State (uncurry runState . runState ssa)

This is indeed the implementation of join for the State monad.

It turns out that not only every adjunction gives rise to a monad, but the converse is also true: every monad can be factorized into a composition of two adjoint functors. Such factorization is not unique though.

We’ll talk about the other endofunctor L ∘ R in the next section.

Next: Comonads.


This is part 21 of Categories for Programmers. Previously: Monads: Programmer’s Definition. See the Table of Contents.

Now that we know what the monad is for — it lets us compose embellished functions — the really interesting question is why embellished functions are so important in functional programming. We’ve already seen one example, the Writer monad, where embellishment let us create and accumulate a log across multiple function calls. A problem that would otherwise be solved using impure functions (e.g., by accessing and modifying some global state) was solved with pure functions.

The Problem

Here is a short list of similar problems, copied from Eugenio Moggi’s seminal paper, all of which are traditionally solved by abandoning the purity of functions.

  • Partiality: Computations that may not terminate
  • Nondeterminism: Computations that may return many results
  • Side effects: Computations that access/modify state
    • Read-only state, or the environment
    • Write-only state, or a log
    • Read/write state
  • Exceptions: Partial functions that may fail
  • Continuations: Ability to save state of the program and then restore it on demand
  • Interactive Input
  • Interactive Output

What really is mind blowing is that all these problems may be solved using the same clever trick: turning to embellished functions. Of course, the embellishment will be totally different in each case.

You have to realize that, at this stage, there is no requirement that the embellishment be monadic. It’s only when we insist on composition — being able to decompose a single embellished function into smaller embellished functions — that we need a monad. Again, since each of the embellishments is different, monadic composition will be implemented differently, but the overall pattern is the same. It’s a very simple pattern: composition that is associative and equipped with identity.

The next section is heavy on Haskell examples. Feel free to skim or even skip it if you’re eager to get back to category theory or if you’re already familiar with Haskell’s implementation of monads.

The Solution

First, let’s analyze the way we used the Writer monad. We started with a pure function that performed a certain task — given arguments, it produced a certain output. We replaced this function with another function that embellished the original output by pairing it with a string. That was our solution to the logging problem.

We couldn’t stop there because, in general, we don’t want to deal with monolithic solutions. We needed to be able to decompose one log-producing function into smaller log-producing functions. It’s the composition of those smaller functions that led us to the concept of a monad.

What’s really amazing is that the same pattern of embellishing the function return types works for a large variety of problems that normally would require abandoning purity. Let’s go through our list and identify the embellishment that applies to each problem in turn.

Partiality

We modify the return type of every function that may not terminate by turning it into a “lifted” type — a type that contains all values of the original type plus the special “bottom” value ⊥. For instance, the Bool type, as a set, would contain two elements: True and False. The lifted Bool contains three elements. Functions that return the lifted Bool may produce True or False, or execute forever.

The funny thing is that, in a lazy language like Haskell, a never-ending function may actually return a value, and this value may be passed to the next function. We call this special value the bottom. As long as this value is not explicitly needed (for instance, to be pattern matched, or produced as output), it may be passed around without stalling the execution of the program. Because every Haskell function may be potentially non-terminating, all types in Haskell are assumed to be lifted. This is why we often talk about the category Hask of Haskell (lifted) types and functions rather than the simpler Set. It is not clear, though, that Hask is a real category (see this Andrej Bauer post).

Nondeterminism

If a function can return many different results, it may as well return them all at once. Semantically, a non-deterministic function is equivalent to a function that returns a list of results. This makes a lot of sense in a lazy garbage-collected language. For instance, if all you need is one value, you can just take the head of the list, and the tail will never be evaluated. If you need a random value, use a random number generator to pick the n-th element of the list. Laziness even allows you to return an infinite list of results.

In the list monad — Haskell’s implementation of nondeterministic computations — join is implemented as concat. Remember that join is supposed to flatten a container of containers — concat concatenates a list of lists into a single list. return creates a singleton list:

instance Monad [] where
    join = concat
    return x = [x]

The bind operator for the list monad is given by the general formula: fmap followed by join which, in this case gives:

as >>= k = concat (fmap k as)

Here, the function k, which itself produces a list, is applied to every element of the list as. The result is a list of lists, which is flattened using concat.

From the programmer’s point of view, working with a list is easier than, for instance, calling a non-deterministic function in a loop, or implementing a function that returns an iterator (although, in modern C++, returning a lazy range would be almost equivalent to returning a list in Haskell).

A good example of using non-determinism creatively is in game programming. For instance, when a computer plays chess against a human, it can’t predict the opponent’s next move. It can, however, generate a list of all possible moves and analyze them one by one. Similarly, a non-deterministic parser may generate a list of all possible parses for a given expression.

Even though we may interpret functions returning lists as non-deterministic, the applications of the list monad are much wider. That’s because stitching together computations that produce lists is a perfect functional substitute for iterative constructs — loops — that are used in imperative programming. A single loop can be often rewritten using fmap that applies the body of the loop to each element of the list. The do notation in the list monad can be used to replace complex nested loops.

My favorite example is the program that generates Pythagorean triples — triples of positive integers that can form sides of right triangles.

triples = do
    z <- [1..]
    x <- [1..z]
    y <- [x..z]
    guard (x^2 + y^2 == z^2)
    return (x, y, z)

The first line tells us that z gets an element from an infinite list of positive numbers [1..]. Then x gets an element from the (finite) list [1..z] of numbers between 1 and z. Finally y gets an element from the list of numbers between x and z. We have three numbers 1 <= x <= y <= z at our disposal. The function guard takes a Bool expression and returns a list of units:

guard :: Bool -> [()]
guard True  = [()]
guard False = []

This function (which is a member of a larger class called MonadPlus) is used here to filter out non-Pythagorean triples. Indeed, if you look at the implementation of bind (or the related operator >>), you’ll notice that, when given an empty list, it produces an empty list. On the other hand, when given a non-empty list (here, the singleton list containing unit [()]), bind will call the continuation, here return (x, y, z), which produces a singleton list with a verified Pythagorean triple. All those singleton lists will be concatenated by the enclosing binds to produce the final (infinite) result. Of course, the caller of triples will never be able to consume the whole list, but that doesn’t matter, because Haskell is lazy.

The problem that normally would require a set of three nested loops has been dramatically simplified with the help of the list monad and the do notation. As if that weren’t enough, Haskell lets you simplify this code even further using list comprehension:

triples = [(x, y, z) | z <- [1..]
                     , x <- [1..z]
                     , y <- [x..z]
                     , x^2 + y^2 == z^2]

This is just further syntactic sugar for the list monad (strictly speaking, MonadPlus).

You might see similar constructs in other functional or imperative languages under the guise of generators and coroutines.

Read-Only State

A function that has read-only access to some external state, or environment, can be always replaced by a function that takes that environment as an additional argument. A pure function (a, e) -> b (where e is the type of the environment) doesn’t look, at first sight, like a Kleisli arrow. But as soon as we curry it to a -> (e -> b) we recognize the embellishment as our old friend the reader functor:

newtype Reader e a = Reader (e -> a)

You may interpret a function returning a Reader as producing a mini-executable: an action that given an environment produces the desired result. There is a helper function runReader to execute such an action:

runReader :: Reader e a -> e -> a
runReader (Reader f) e = f e

It may produce different results for different values of the environment.

Notice that both the function returning a Reader, and the Reader action itself are pure.

To implement bind for the Reader monad, first notice that you have to produce a function that takes the environment e and produces a b:

ra >>= k = Reader (\e -> ...)

Inside the lambda, we can execute the action ra to produce an a:

ra >>= k = Reader (\e -> let a = runReader ra e
                         in ...)

We can then pass the a to the continuation k to get a new action rb:

ra >>= k = Reader (\e -> let a  = runReader ra e
                             rb = k a
                         in ...)

Finally, we can run the action rb with the environment e:

ra >>= k = Reader (\e -> let a  = runReader ra e
                             rb = k a
                         in runReader rb e)

To implement return we create an action that ignores the environment and returns the unchanged value.

Putting it all together, after a few simplifications, we get the following definition:

instance Monad (Reader e) where
    ra >>= k = Reader (\e -> runReader (k (runReader ra e)) e)
    return x = Reader (\e -> x)

Write-Only State

This is just our initial logging example. The embellishment is given by the Writer functor:

newtype Writer w a = Writer (a, w)

For completeness, there’s also a trivial helper runWriter that unpacks the data constructor:

runWriter :: Writer w a -> (a, w)
runWriter (Writer (a, w)) = (a, w)

As we’ve seen before, in order to make Writer composable, w has to be a monoid. Here’s the monad instance for Writer written in terms of the bind operator:

instance (Monoid w) => Monad (Writer w) where 
    (Writer (a, w)) >>= k = let (a', w') = runWriter (k a)
                            in Writer (a', w `mappend` w')
    return a = Writer (a, mempty)

State

Functions that have read/write access to state combine the embellishments of the Reader and the Writer. You may think of them as pure functions that take the state as an extra argument and produce a pair value/state as a result: (a, s) -> (b, s). After currying, we get them into the form of Kleisli arrows a -> (s -> (b, s)), with the embellishment abstracted in the State functor:

newtype State s a = State (s -> (a, s))

Again, we can look at a Kleisli arrow as returning an action, which can be executed using the helper function:

runState :: State s a -> s -> (a, s)
runState (State f) s = f s

Different initial states may not only produce different results, but also different final states.

The implementation of bind for the State monad is very similar to that of the Reader monad, except that care has to be taken to pass the correct state at each step:

sa >>= k = State (\s -> let (a, s') = runState sa s
                            sb = k a
                        in runState sb s')

Here’s the full instance:

instance Monad (State s) where
    sa >>= k = State (\s -> let (a, s') = runState sa s 
                            in runState (k a) s')
    return a = State (\s -> (a, s))

There are also two helper Kleisli arrows that may be used to manipulate the state. One of them retrieves the state for inspection:

get :: State s s
get = State (\s -> (s, s))

and the other replaces it with a completely new state:

put :: s -> State s ()
put s' = State (\s -> ((), s'))
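
As a tiny usage sketch (postIncrement is an ad hoc name, written with the instance above):

-- return the current counter and bump it by one
postIncrement :: State Int Int
postIncrement = get >>= \n ->
                put (n + 1) >>= \_ ->
                return n

-- runState postIncrement 42 ==> (42, 43)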

Exceptions

An imperative function that throws an exception is really a partial function — it’s a function that’s not defined for some values of its arguments. The simplest implementation of exceptions in terms of pure total functions uses the Maybe functor. A partial function is extended to a total function that returns Just a whenever it makes sense, and Nothing when it doesn’t. If we want to also return some information about the cause of the failure, we can use the Either functor instead (with the first type fixed, for instance, to String).

Here’s the Monad instance for Maybe:

instance Monad Maybe where
    Nothing >>= k = Nothing
    Just a  >>= k = k a
    return a = Just a

Notice that monadic composition for Maybe correctly short-circuits the computation (the continuation k is never called) when an error is detected. That’s the behavior we expect from exceptions.
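
For instance (safeDiv is an ad hoc helper):

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- safeDiv 12 3 >>= safeDiv 100 ==> Just 25
-- safeDiv 12 0 >>= safeDiv 100 ==> Nothing (the second safeDiv is never called)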

Continuations

It’s the “Don’t call us, we’ll call you!” situation you may experience after a job interview. Instead of getting a direct answer, you are supposed to provide a handler, a function to be called with the result. This style of programming is especially useful when the result is not known at the time of the call because, for instance, it’s being evaluated by another thread or delivered from a remote web site. A Kleisli arrow in this case returns a function that accepts a handler, which represents “the rest of the computation”:

data Cont r a = Cont ((a -> r) -> r)

The handler a -> r, when it’s eventually called, produces the result of type r, and this result is returned at the end. A continuation is parameterized by the result type. (In practice, this is often some kind of status indicator.)

There is also a helper function for executing the action returned by the Kleisli arrow. It takes the handler and passes it to the continuation:

runCont :: Cont r a -> (a -> r) -> r
runCont (Cont k) h = k h

The composition of continuations is notoriously difficult, so its handling through a monad and, in particular, the do notation, is of extreme advantage.

Let’s figure out the implementation of bind. First let’s look at the stripped down signature:

(>>=) :: ((a -> r) -> r) -> 
         (a -> (b -> r) -> r) -> 
         ((b -> r) -> r)

Our goal is to create a function that takes the handler (b -> r) and produces the result r. So that’s our starting point:

ka >>= kab = Cont (\hb -> ...)

Inside the lambda, we want to call the function ka with the appropriate handler that represents the rest of the computation. We’ll implement this handler as a lambda:

runCont ka (\a -> ...)

In this case, the rest of the computation involves first calling kab with a, and then passing hb to the resulting action kb:

runCont ka (\a -> let kb = kab a
                  in runCont kb hb)

As you can see, continuations are composed inside out. The final handler hb is called from the innermost layer of the computation. Here’s the full instance:

instance Monad (Cont r) where
    ka >>= kab = Cont (\hb -> runCont ka (\a -> runCont (kab a) hb))
    return a = Cont (\ha -> ha a)
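
A tiny usage sketch (the names are ad hoc; runCont and the instance are as defined above):

square :: Int -> Cont r Int
square x = return (x * x)

addOne :: Int -> Cont r Int
addOne x = return (x + 1)

-- runCont (square 6 >>= addOne) show ==> "37"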

Interactive Input

This is the trickiest problem and a source of a lot of confusion. Clearly, a function like getChar, if it were to return a character typed at the keyboard, couldn’t be pure. But what if it returned the character inside a container? As long as there was no way of extracting the character from this container, we could claim that the function is pure. Every time you call getChar it would return exactly the same container. Conceptually, this container would contain the superposition of all possible characters.

If you’re familiar with quantum mechanics, you should have no problem understanding this analogy. It’s just like the box with the Schrödinger’s cat inside — except that there is no way to open or peek inside the box. The box is defined using the special built-in IO functor. In our example, getChar could be declared as a Kleisli arrow:

getChar :: () -> IO Char

(Actually, since a function from the unit type is equivalent to picking a value of the return type, the declaration of getChar is simplified to getChar :: IO Char.)

Being a functor, IO lets you manipulate its contents using fmap. And, as a functor, it can store the contents of any type, not just a character. The real utility of this approach comes to light when you consider that, in Haskell, IO is a monad. It means that you are able to compose Kleisli arrows that produce IO objects.

You might think that Kleisli composition would allow you to peek at the contents of the IO object (thus “collapsing the wave function,” if we were to continue the quantum analogy). Indeed, you could compose getChar with another Kleisli arrow that takes a character and, say, converts it to an integer. The catch is that this second Kleisli arrow could only return this integer as an (IO Int). Again, you’ll end up with a superposition of all possible integers. And so on. The Schrödinger’s cat is never out of the bag. Once you are inside the IO monad, there is no way out of it. There is no equivalent of runState or runReader for the IO monad. There is no runIO!

So what can you do with the result of a Kleisli arrow, the IO object, other than compose it with another Kleisli arrow? Well, you can return it from main. In Haskell, main has the signature:

main :: IO ()

and you are free to think of it as a Kleisli arrow:

main :: () -> IO ()

From that perspective, a Haskell program is just one big Kleisli arrow in the IO monad. You can compose it from smaller Kleisli arrows using monadic composition. It’s up to the runtime system to do something with the resulting IO object (also called IO action).

Notice that the arrow itself is a pure function — it’s pure functions all the way down. The dirty work is relegated to the system. When it finally executes the IO action returned from main, it does all kinds of nasty things like reading user input, modifying files, printing obnoxious messages, formatting a disk, and so on. The Haskell program never dirties its hands (well, except when it calls unsafePerformIO, but that’s a different story).

Of course, because Haskell is lazy, main returns almost immediately, and the dirty work begins right away. It’s during the execution of the IO action that the results of pure computations are requested and evaluated on demand. So, in reality, the execution of a program is an interleaving of pure (Haskell) and dirty (system) code.

There is an alternative interpretation of the IO monad that is even more bizarre but makes perfect sense as a mathematical model. It treats the whole Universe as an object in a program. Notice that, conceptually, the imperative model treats the Universe as an external global object, so procedures that perform I/O have side effects by virtue of interacting with that object. They can both read and modify the state of the Universe.

We already know how to deal with state in functional programming — we use the state monad. Unlike simple state, however, the state of the Universe cannot be easily described using standard data structures. But we don’t have to, as long as we never directly interact with it. It’s enough that we assume that there exists a type RealWorld and, by some miracle of cosmic engineering, the runtime is able to provide an object of this type. An IO action is just a function:

type IO a  =  RealWorld -> (a, RealWorld)

Or, in terms of the State monad:

type IO = State RealWorld

However, >=> and return for the IO monad have to be built into the language.

Interactive Output

The same IO monad is used to encapsulate interactive output. RealWorld is supposed to contain all output devices. You might wonder why we can’t just call output functions from Haskell and pretend that they do nothing. For instance, why do we have:

putStr :: String -> IO ()

rather than the simpler:

putStr :: String -> ()

Two reasons: Haskell is lazy, so it would never call a function whose output — here, the unit object — is not used for anything. And, even if it weren’t lazy, it could still freely change the order of such calls and thus garble the output. The only way to force sequential execution of two functions in Haskell is through data dependency. The input of one function must depend on the output of another. Having RealWorld passed between IO actions enforces sequencing.

Conceptually, in this program:

main :: IO ()
main = do
    putStr "Hello "
    putStr "World!"

the action that prints “World!” receives, as input, the Universe in which “Hello ” is already on the screen. It outputs a new Universe, with “Hello World!” on the screen.

Conclusion

Of course I have just scratched the surface of monadic programming. Monads not only accomplish, with pure functions, what normally is done with side effects in imperative programming, but they also do it with a high degree of control and type safety. They are not without drawbacks, though. The major complaint about monads is that they don’t easily compose with each other. Granted, you can combine most of the basic monads using the monad transformer library. It’s relatively easy to create a monad stack that combines, say, state with exceptions, but there is no formula for stacking arbitrary monads together.

Next: Monads Categorically.


This is part 20 of Categories for Programmers. Previously: Free/Forgetful Adjunctions. See the Table of Contents.

Programmers have developed a whole mythology around monads. It’s supposed to be one of the most abstract and difficult concepts in programming. There are people who “get it” and those who don’t. For many, the moment when they understand the concept of the monad is like a mystical experience. The monad abstracts the essence of so many diverse constructions that we simply don’t have a good analogy for it in everyday life. We are reduced to groping in the dark, like those blind men touching different parts of the elephant and exclaiming triumphantly: “It’s a rope,” “It’s a tree trunk,” or “It’s a burrito!”

Let me set the record straight: The whole mysticism around the monad is the result of a misunderstanding. The monad is a very simple concept. It’s the diversity of applications of the monad that causes the confusion.

As part of research for this post I looked up duct tape (a.k.a., duck tape) and its applications. Here’s a little sample of things that you can do with it:

  • sealing ducts
  • fixing CO2 scrubbers on board Apollo 13
  • wart treatment
  • fixing Apple’s iPhone 4 dropped call issue
  • making a prom dress
  • building a suspension bridge

Now imagine that you didn’t know what duct tape was and you were trying to figure it out based on this list. Good luck!

So I’d like to add one more item to the collection of “the monad is like…” clichés: The monad is like duct tape. Its applications are widely diverse, but its principle is very simple: it glues things together. More precisely, it composes things.

This partially explains the difficulties a lot of programmers, especially those coming from an imperative background, have with understanding the monad. The problem is that we are not used to thinking of programming in terms of function composition. This is understandable. We often give names to intermediate values rather than pass them directly from function to function. We also inline short segments of glue code rather than abstract them into helper functions. Here’s an imperative-style implementation of the vector-length function in C:

double vlen(double * v) {
  double d = 0.0;
  int n;
  for (n = 0; n < 3; ++n)
    d += v[n] * v[n];
  return sqrt(d);
}

Compare this with the (stylized) Haskell version that makes function composition explicit:

vlen = sqrt . sum . fmap  (flip (^) 2)

(Here, to make things even more cryptic, I partially applied the exponentiation operator (^) by setting its second argument to 2.)

I’m not arguing that Haskell’s point-free style is always better, just that function composition is at the bottom of everything we do in programming. And even though we are effectively composing functions, Haskell does go to great lengths to provide imperative-style syntax called the do notation for monadic composition. We’ll see its use later. But first, let me explain why we need monadic composition in the first place.

The Kleisli Category

We have previously arrived at the writer monad by embellishing regular functions. The particular embellishment was done by pairing their return values with strings or, more generally, with elements of a monoid. We can now recognize that such embellishment is a functor:

newtype Writer w a = Writer (a, w)

instance Functor (Writer w) where
  fmap f (Writer (a, w)) = Writer (f a, w)

We have subsequently found a way of composing embellished functions, or Kleisli arrows, which are functions of the form:

a -> Writer w b

It was inside the composition that we implemented the accumulation of the log.

We are now ready for a more general definition of the Kleisli category. We start with a category C and an endofunctor m. The corresponding Kleisli category K has the same objects as C, but its morphisms are different. A morphism between two objects a and b in K is implemented as a morphism:

a -> m b

in the original category C. It’s important to keep in mind that we treat a Kleisli arrow in K as a morphism between a and b, and not between a and m b.
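
In Haskell, this change of perspective can be made literal by wrapping such functions in a newtype (the standard Control.Arrow module provides essentially the same wrapper). A minimal sketch:

-- a Kleisli arrow from a to b, packaged as a value in its own right
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }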

In our example, m was specialized to Writer w, for some fixed monoid w.

Kleisli arrows form a category only if we can define proper composition for them. If there is a composition, which is associative and has an identity arrow for every object, then the functor m is called a monad, and the resulting category is called the Kleisli category.

In Haskell, Kleisli composition is defined using the fish operator >=>, and the identity arrow is a polymorphic function called return. Here’s the definition of a monad using Kleisli composition:

class Monad m where
  (>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
  return :: a -> m a

Keep in mind that there are many equivalent ways of defining a monad, and that this is not the primary one in the Haskell ecosystem. I like it for its conceptual simplicity and the intuition it provides, but there are other definitions that are more convenient when programming. We’ll talk about them momentarily.

In this formulation, monad laws are very easy to express. They cannot be enforced in Haskell, but they can be used for equational reasoning. They are simply the standard composition laws for the Kleisli category:

(f >=> g) >=> h = f >=> (g >=> h) -- associativity
return >=> f = f                  -- left unit
f >=> return = f                  -- right unit

This kind of a definition also expresses what a monad really is: it’s a way of composing embellished functions. It’s not about side effects or state. It’s about composition. As we’ll see later, embellished functions may be used to express a variety of effects or state, but that’s not what the monad is for. The monad is the sticky duct tape that ties one end of an embellished function to the other end of an embellished function.

Going back to our Writer example: The logging functions (the Kleisli arrows for the Writer functor) form a category because Writer is a monad:

instance Monoid w => Monad (Writer w) where
    f >=> g = \a -> 
        let Writer (b, s)  = f a
            Writer (c, s') = g b
        in Writer (c, s `mappend` s')
    return a = Writer (a, mempty)

Monad laws for Writer w are satisfied as long as monoid laws for w are satisfied (they can’t be enforced in Haskell either).
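
For instance, here is a sketch of the left unit law for Writer, calculated directly from the definitions of >=> and return above:

(return >=> f) a
  = let Writer (b, s)  = Writer (a, mempty)  -- return a, so b = a and s = mempty
        Writer (c, s') = f b
    in Writer (c, s `mappend` s')
  = Writer (c, mempty `mappend` s')          -- where Writer (c, s') = f a
  = f a                                      -- because mempty `mappend` s' = s'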

There’s a useful Kleisli arrow defined for the Writer monad called tell. Its sole purpose is to add its argument to the log:

tell :: w -> Writer w ()
tell s = Writer ((), s)

We’ll use it later as a building block for other monadic functions.

Fish Anatomy

When implementing the fish operator for different monads you quickly realize that a lot of code is repeated and can be easily factored out. To begin with, the Kleisli composition of two functions must return a function, so its implementation may as well start with a lambda taking an argument of type a:

(>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \a -> ...

The only thing we can do with this argument is to pass it to f:

f >=> g = \a -> let mb = f a
                in ...

At this point we have to produce the result of type m c, having at our disposal an object of type m b and a function g :: b -> m c. Let’s define a function that does that for us. This function is called bind and is usually written in the form of an infix operator:

(>>=) :: m a -> (a -> m b) -> m b

For every monad, instead of defining the fish operator, we may instead define bind. In fact the standard Haskell definition of a monad uses bind:

class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b
    return :: a -> m a

Here’s the definition of bind for the Writer monad:

(Writer (a, w)) >>= f = let Writer (b, w') = f a
                        in  Writer (b, w `mappend` w')

It is indeed shorter than the definition of the fish operator.
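
Conversely, once bind is in place, the fish operator comes back for free (a one-line sketch):

f >=> g = \a -> f a >>= g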

It’s possible to further dissect bind, taking advantage of the fact that m is a functor. We can use fmap to apply the function a -> m b to the contents of m a. This turns each a inside into an m b, so the result of the application is of type m (m b). This is not exactly what we want (we need a result of type m b) but we’re close. All we need is a function that collapses or flattens the double application of m. Such a function is called join:

join :: m (m a) -> m a

Using join, we can rewrite bind as:

ma >>= f = join (fmap f ma)

That leads us to the third option for defining a monad:

class Functor m => Monad m where
    join :: m (m a) -> m a
    return :: a -> m a

Here we have explicitly requested that m be a Functor. We didn’t have to do that in the previous two definitions of the monad. That’s because any type constructor m that supports either the fish or the bind operator is automatically a functor. For instance, it’s possible to define fmap in terms of bind and return:

fmap f ma = ma >>= \a -> return (f a)
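
Going in the other direction, join itself can be expressed in terms of bind (a quick sketch):

join :: Monad m => m (m a) -> m a
join mma = mma >>= id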

For completeness, here’s join for the Writer monad:

join :: Monoid w => Writer w (Writer w a) -> Writer w a
join (Writer ((Writer (a, w')), w)) = Writer (a, w `mappend` w')

The do Notation

One way of writing code using monads is to work with Kleisli arrows — composing them using the fish operator. This mode of programming is the generalization of the point-free style. Point-free code is compact and often quite elegant. In general, though, it can be hard to understand, bordering on cryptic. That’s why most programmers prefer to give names to function arguments and intermediate values.

When dealing with monads it means favoring the bind operator over the fish operator. Bind takes a monadic value and returns a monadic value. The programmer may choose to give names to those values. But that’s hardly an improvement. What we really want is to pretend that we are dealing with regular values, not the monadic containers that encapsulate them. That’s how imperative code works: side effects, such as updating a global log, are mostly hidden from view. And that’s what the do notation emulates in Haskell.

You might be wondering then, why use monads at all? If we want to make side effects invisible, why not stick to an imperative language? The answer is that the monad gives us much better control over side effects. For instance, the log in the Writer monad is passed from function to function and is never exposed globally. There is no possibility of garbling the log or creating a data race. Also, monadic code is clearly demarcated and cordoned off from the rest of the program.

The do notation is just syntactic sugar for monadic composition. On the surface, it looks a lot like imperative code, but it translates directly to a sequence of binds and lambda expressions.

For instance, take the example we used previously to illustrate the composition of Kleisli arrows in the Writer monad. Using our current definitions, it could be rewritten as:

process :: String -> Writer String [String]
process = upCase >=> toWords

This function turns all characters in the input string to upper case and splits it into words, all the while producing a log of its actions.

In the do notation it would look like this:

process s = do
    upStr <- upCase s
    toWords upStr

Here, upStr is just a String, even though upCase produces a Writer:

upCase :: String -> Writer String String
upCase s = Writer (map toUpper s, "upCase ")

This is because the do block is desugared by the compiler to:

process s = 
   upCase s >>= \ upStr ->
       toWords upStr

The monadic result of upCase is bound to a lambda that takes a String. It’s the name of this string that shows up in the do block. When reading the line:

upStr <- upCase s

we say that upStr gets the result of upCase s.
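
For reference, the toWords arrow used above can be defined in the same style as upCase (a sketch, consistent with the log message that appears below):

toWords :: String -> Writer String [String]
toWords s = Writer (words s, "toWords ")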

The pseudo-imperative style is even more pronounced when we inline toWords. We replace it with the call to tell, which logs the string "toWords ", followed by the call to return with the result of splitting the string upStr using words. Notice that words is a regular function working on strings.

process s = do
    upStr <- upCase s
    tell "toWords "
    return (words upStr)

Here, each line in the do block introduces a new nested bind in the desugared code:

process s = 
    upCase s >>= \upStr ->
      tell "toWords " >>= \() ->
        return (words upStr)

Notice that tell produces a unit value, so it doesn’t have to be passed to the following lambda. Ignoring the contents of a monadic result (but not its effect — here, the contribution to the log) is quite common, so there is a special operator to replace bind in that case:

(>>) :: m a -> m b -> m b
m >> k = m >>= (\_ -> k)

The actual desugaring of our code looks like this:

process s = 
    upCase s >>= \upStr ->
      tell "toWords " >>
        return (words upStr)

In general, do blocks consist of lines (or sub-blocks) that either use the left arrow to introduce new names that are then available in the rest of the code, or are executed purely for side-effects. Bind operators are implicit between the lines of code. Incidentally, it is possible, in Haskell, to replace the formatting in the do blocks with braces and semicolons. This provides the justification for describing the monad as a way of overloading the semicolon.
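
For instance, the process function above can equally well be written on one line (a sketch):

process s = do { upStr <- upCase s; tell "toWords "; return (words upStr) }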

Notice that the nesting of lambdas and bind operators when desugaring the do notation has the effect of influencing the execution of the rest of the do block based on the result of each line. This property can be used to introduce complex control structures, for instance to simulate exceptions.
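
As a small illustration (using the standard Maybe monad rather than Writer), a Nothing produced anywhere in a do block short-circuits everything after it, which is how exception-like behavior can be simulated:

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
    x <- safeDiv a b  -- if this yields Nothing, the remaining lines never run
    y <- safeDiv x c
    return (x + y)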

Interestingly, the equivalent of the do notation has found its application in imperative languages, C++ in particular. I’m talking about resumable functions or coroutines. It’s not a secret that C++ futures form a monad. It’s an example of the continuation monad, which we’ll discuss shortly. The problem with continuations is that they are very hard to compose. In Haskell, we use the do notation to turn the spaghetti of “my handler will call your handler” into something that looks very much like sequential code. Resumable functions make the same transformation possible in C++. And the same mechanism can be applied to turn the spaghetti of nested loops into list comprehensions or “generators,” which are essentially the do notation for the list monad. Without the unifying abstraction of the monad, each of these problems is typically addressed by providing custom extensions to the language. In Haskell, this is all dealt with through libraries.
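
To make the last point concrete, a do block over the list monad is a list comprehension in disguise (a small sketch):

pairs :: [(Int, Int)]
pairs = do
    x <- [1, 2, 3]
    y <- [10, 20]
    return (x, y)

-- the same thing written as a list comprehension:
-- pairs = [(x, y) | x <- [1, 2, 3], y <- [10, 20]]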

Next: Monads and Effects.