Applicative Functors



Unlike monads, which came into programming straight from category theory, applicative functors have their origins in programming. McBride and Paterson introduced applicative functors as a programming pearl in their paper Applicative programming with effects. They also provided a categorical interpretation of applicatives in terms of strong lax monoidal functors. It’s been accepted that, just like “a monad is a monoid in the category of endofunctors,” so “an applicative is a strong lax monoidal functor.”

The so-called “tensorial strength” seems to be important in categorical semantics, and in his seminal paper Notions of computation and monads, Moggi argued that effects should be described using strong monads. It makes sense, considering that a computation is done in a context, and you should be able to make the global context available under the monad. The fact that we don’t talk much about strong monads in Haskell is due to the fact that all functors in the category Set, which underlies Haskell’s type system, have canonical strength. So why do we talk about strength when dealing with applicative functors? I have looked into this question and have come to the conclusion that there is no fundamental reason, and that it’s okay to just say:

An applicative is a lax monoidal functor

In this post I’ll discuss different equivalent categorical definitions of the applicative functor. I’ll start with a lax closed functor, then move to a lax monoidal functor, and show the equivalence of the two definitions. Then I’ll introduce the calculus of ends and show that the third definition of the applicative functor as a monoid in a suitable functor category equipped with Day convolution is equivalent to the previous ones.

Applicative as a Lax Closed Functor

The standard definition of the applicative functor in Haskell reads:

class Functor f => Applicative f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    pure :: a -> f a

At first sight it doesn’t seem to involve a monoidal structure. It looks more like preserving function arrows (I added some redundant parentheses to suggest this interpretation).

Categorically, functors that “preserve arrows” are known as closed functors. Let’s look at a definition of a closed functor f between two categories C and D. We have to assume that both categories are closed, meaning that they have internal hom-objects for every pair of objects. Internal hom-objects are also called function objects or exponentials. They are normally defined through the right adjoint to the product functor:

C(z × a, b) ≅ C(z, a => b)

To distinguish between sets of morphisms and function objects (they are the same thing in Set), I will temporarily use double arrows for function objects.

We can take a functor f and act with it on the function object a=>b in the category C. We get an object f (a=>b) in D. Or we can map the two objects a and b from C to D and then construct the function object in D: f a => f b.


We call a functor closed if the two results are isomorphic (I have subscripted the two arrows with the categories where they are defined):

f (a =>_C b) ≅ (f a =>_D f b)

and if the functor preserves the unit object:

i_D ≅ f i_C

What’s the unit object? Normally, this is the unit with respect to the same product that was used to define the function object using the adjunction. I’m saying “normally,” because it’s possible to define a closed category without a product.

Note: The two arrows and the two i’s are defined with respect to two different products. The first isomorphism must be natural in both a and b. Also, to complete the picture, there are some diagrams that must commute.

The two isomorphisms that define a closed functor can be relaxed and replaced by unidirectional morphisms. The result is a lax closed functor:

f (a => b) -> (f a => f b)
i -> f i

This looks almost like the definition of Applicative, except for one problem: how can we recover the natural transformation we call pure from a single morphism i -> f i?

One way to do it is from the position of strength. An endofunctor f has tensorial strength if there is a natural transformation:

st_{c a} :: c ⊗ f a -> f (c ⊗ a)

Think of c as the context in which the computation f a is performed. Strength means that we can use this external context inside the computation.

In the category Set, with the tensor product replaced by cartesian product, all functors have canonical strength. In Haskell, we would define it as:

st :: Functor f => (c, f a) -> f (c, a)
st (c, fa) = fmap ((,) c) fa

The morphism in the definition of the lax closed functor translates to:

unit :: () -> f ()

Notice that, up to isomorphism, the unit type () is the unit with respect to cartesian product. The relevant isomorphisms are:

λ_a :: ((), a) -> a
ρ_a :: (a, ()) -> a

Here’s the derivation from Rivas and Jaskelioff’s Notions of Computation as Monoids:

    a
≅  (a, ())   -- unit law, ρ⁻¹
-> (a, f ()) -- lax unit
-> f (a, ()) -- strength
≅  f a       -- lifted unit law, f ρ
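
In Haskell, with the canonical strength defined above, this derivation transcribes directly into code. A minimal sketch (pureFromStrength is my name, not a standard function):

pureFromStrength :: Functor f => f () -> (a -> f a)
pureFromStrength u a = fmap fst (st (a, u))  -- ρ⁻¹, lax unit, strength, then f ρ
  where st (c, fa) = fmap ((,) c) fa         -- the canonical strength, as before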

Strength is necessary if you’re starting with a lax closed (or monoidal — see the next section) endofunctor in an arbitrary closed (or monoidal) category and you want to derive pure within that category — not after you restrict it to Set.

There is, however, an alternative derivation using the Yoneda lemma:

f ()
≅ forall a. (() -> a) -> f a  -- Yoneda
≅ forall a. a -> f a -- because: (() -> a) ≅ a

We recover the whole natural transformation from a single value. The advantage of this derivation is that it generalizes beyond endofunctors and it doesn’t require strength. As we’ll see later, it also ties nicely with the Day-convolution definition of applicative. The Yoneda lemma only works for Set-valued functors, but so does Day convolution (there are enriched versions of both Yoneda and Day convolution, but I’m not going to discuss them here).
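
Concretely, both directions of this recovery are one-liners in Haskell (a sketch; the second signature needs RankNTypes):

pureFromUnit :: Functor f => f () -> (a -> f a)
pureFromUnit u a = fmap (const a) u

unitFromPure :: (forall a. a -> f a) -> f ()
unitFromPure p = p ()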

We can define the categorical version of Haskell’s applicative functor as a lax closed functor going from a closed category C to Set. It’s a functor equipped with a natural transformation:

f (a => b) -> (f a -> f b)

where a=>b is the internal hom-object in C (the second arrow is a function type in Set), and a function:

1 -> f i

where 1 is the singleton set and i is the unit object in C.

The importance of a categorical definition is that it comes with additional identities or “axioms.” A lax closed functor must be compatible with the structure of both categories. I will not go into details here, because we are really only interested in closed categories that are monoidal, where these axioms are easier to express.

The definition of a lax closed functor is easily translated to Haskell:

class Functor f => Closed f where
    (<*>) :: f (a -> b) -> f a -> f b
    unit :: f ()
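
For a concrete example, here’s a sketch of an instance for Maybe (in a real module, the Prelude’s (<*>) would have to be hidden to avoid a name clash):

instance Closed Maybe where
    Just f <*> Just a = Just (f a)
    _      <*> _      = Nothing
    unit = Just ()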

Applicative as a Lax Monoidal Functor

Even though it’s possible to define a closed category without a monoidal structure, in practice we usually work with monoidal categories. This is reflected in the equivalent definition of Haskell’s applicative functor as a lax monoidal functor. In Haskell, we would write:

class Functor f => Monoidal f where
    (>*<) :: (f a, f b) -> f (a, b)
    unit :: f ()

This definition is equivalent to our previous definition of a closed functor. That’s because, as we’ve seen, a function object in a monoidal category is defined in terms of a product. We can show the equivalence in a more general categorical setting.

This time let’s start with a symmetric closed monoidal category C, in which the function object is defined through the right adjoint to the tensor product:

C(z ⊗ a, b) ≅ C(z, a => b)

As usual, the tensor product is associative and unital — with the unit object i — up to isomorphism. The symmetry is defined through natural isomorphism:

γ :: a ⊗ b -> b ⊗ a

A functor f between two monoidal categories is lax monoidal if there exist: (1) a natural transformation

f a ⊗ f b -> f (a ⊗ b)

and (2) a morphism

i -> f i

Notice that the products and units on either side of the two mappings are from different categories.

A (lax-) monoidal functor must also preserve associativity and unit laws.

For instance a triple product

f a ⊗ (f b ⊗ f c)

may be rearranged using an associator α to give

(f a ⊗ f b) ⊗ f c

then converted to

f (a ⊗ b) ⊗ f c

and then to

f ((a ⊗ b) ⊗ c)

Or it could be first converted to

f a ⊗ f (b ⊗ c)

and then to

f (a ⊗ (b ⊗ c))

These two should be equivalent under the associator in C.


Similarly, f a ⊗ i can be simplified to f a using the right unitor ρ in D. Or it could be first converted to f a ⊗ f i, then to f (a ⊗ i), and then to f a, using the right unitor in C. The two paths should be equivalent. (Similarly for the left identity.)


We will now consider functors from C to Set, with Set equipped with the usual cartesian product, and the singleton set as unit. A lax monoidal functor is defined by: (1) a natural transformation:

(f a, f b) -> f (a ⊗ b)

and (2) a choice of an element of the set f i (a function from 1 to f i picks an element from that set).

We need the target category to be Set because we want to be able to use the Yoneda lemma to show equivalence with the standard definition of applicative. I’ll come back to this point later.

The Equivalence

The definitions of lax closed and lax monoidal functors are equivalent when C is a closed symmetric monoidal category. The proof relies on the existence of the adjunction, in particular the unit and the counit of the adjunction:

η_a :: a -> (b => (a ⊗ b))
ε_b :: (a => b) ⊗ a -> b

For instance, let’s assume that f is lax-closed. We want to construct the mapping

(f a, f b) -> f (a ⊗ b)

First, we apply the pair (f η, f id): the lifted unit of the adjunction paired with the lifted identity. The first component has the type:

f η :: f a -> f (b => a ⊗ b)

Applying this pair to the left hand side, we get:

(f (b => a ⊗ b), f b)

Now we can use (the uncurried version of) the lax-closed morphism:

(f (b => x), f b) -> f x

to get:

f (a ⊗ b)

Conversely, assuming the lax-monoidal property we can show that the functor is lax-closed, that is to say, implement the following function:

(f (a => b), f a) -> f b

First we use the lax monoidal morphism on the left hand side:

f ((a => b) ⊗ a)

and then use the counit (a.k.a. the evaluation morphism) to get the desired result f b
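
The same two constructions transcribe into Haskell using the Closed and Monoidal classes from above (a sketch; the function names are mine):

-- lax monoidal to lax closed: pair up, then evaluate under the functor
app :: Monoidal f => f (a -> b) -> f a -> f b
app ff fa = fmap (\(g, a) -> g a) ((>*<) (ff, fa))

-- lax closed to lax monoidal: lift the pairing, then use the Closed (<*>)
pair :: Closed f => (f a, f b) -> f (a, b)
pair (fa, fb) = fmap (,) fa <*> fb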

There is yet another presentation of applicatives using Day convolution. But before we get there, we need a little refresher on calculus.

Calculus of Ends

Ends and coends are very useful constructs generalizing limits and colimits. They are defined through universal constructions. They have a few fundamental properties that are used over and over in categorical calculations. I’ll just introduce the notation and a few important identities. We’ll be working in a symmetric monoidal category C with functors from C to Set and profunctors from Cop×C to Set. The end of a profunctor p is a set denoted by:

∫_a p a a

The most important thing about ends is that a set of natural transformations between two functors f and g can be represented as an end:

[C, Set](f, g) = ∫_a Set(f a, g a)

In Haskell, the end corresponds to universal quantification over a functor of mixed variance. For instance, the natural transformation formula takes the familiar form:

forall a. f a -> g a

The Yoneda lemma, which deals with natural transformations, can also be written using an end:

∫_z (C(a, z) -> f z) ≅ f a

In Haskell, we can write it as the equivalence:

forall z. ((a -> z) -> f z) ≅ f a

which is a generalization of the continuation passing transform.
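
Here are the two directions of this isomorphism in Haskell (a sketch; needs RankNTypes):

toCPS :: Functor f => f a -> (forall z. (a -> z) -> f z)
toCPS fa = \k -> fmap k fa

fromCPS :: (forall z. (a -> z) -> f z) -> f a
fromCPS g = g id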

The dual notion of coend is similarly written using an integral sign, with the “integration variable” in the superscript position:

∫^a p a a

In pseudo-Haskell, a coend is represented by an existential quantifier. It’s possible to define existential data types in Haskell by converting existential quantification to a universal one. The relevant identity in terms of coends and ends reads:

(∫^z p z z) -> y ≅ ∫_z (p z z -> y)

In Haskell, this formula is used to turn functions that take existential types to functions that are polymorphic:

(exists z. p z z) -> y ≅ forall z. (p z z -> y)

Intuitively, it makes perfect sense. If you want to define a function that takes an existential type, you have to be prepared to handle any type.

The equivalent of the Yoneda lemma for coends reads:

∫^z f z × C(z, a) ≅ f a

or, in pseudo-Haskell:

exists z. (f z, z -> a) ≅ f a

(The intuition is that the only thing you can do with this pair is to fmap the function over the first component.)
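
This pseudo-Haskell can be made into real Haskell with an existential data type, essentially the co-Yoneda construction (a sketch; needs ExistentialQuantification):

data CoYoneda f a = forall z. CoYoneda (f z) (z -> a)

toCoYoneda :: f a -> CoYoneda f a
toCoYoneda fa = CoYoneda fa id

-- all we can do is fmap the stored function over the container
fromCoYoneda :: Functor f => CoYoneda f a -> f a
fromCoYoneda (CoYoneda fz k) = fmap k fz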

There is also a contravariant version of this identity:

∫^z C(a, z) × f z ≅ f a

where f is a contravariant functor (a.k.a., a presheaf). In pseudo-Haskell:

exists z. (a -> z, f z) ≅ f a

(The intuition is that the only thing you can do with this pair is to apply the contramap of the first component to the second component.)

Using coends we can define a tensor product in the category of functors [C, Set]. This product is called Day convolution:

(f ★ g) a = ∫^{x y} f x × g y × C(x ⊗ y, a)

It is a bifunctor in that category (read, it can be used to lift natural transformations). It’s associative and symmetric up to isomorphism. It also has a unit — the hom-functor C(i, -), where i is the monoidal unit in C. In other words, Day convolution imbues the category [C, Set] with monoidal structure.
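
In Haskell, with C replaced by Set and the tensor product by the cartesian product, Day convolution can be encoded as an existential data type, much like Data.Functor.Day from the kan-extensions package (a sketch):

data Day f g a = forall x y. Day (f x) (g y) ((x, y) -> a)

-- Day f g is a functor in a: postcompose with the final function
instance Functor (Day f g) where
    fmap h (Day fx gy k) = Day fx gy (h . k)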

Let’s verify the unit laws.

(C(i, -) ★ g) a = ∫^{x y} C(i, x) × g y × C(x ⊗ y, a)

We can use the contravariant Yoneda to “integrate over x” to get:

∫^y g y × C(i ⊗ y, a)

Considering that i is the unit of the tensor product in C, we get:

∫^y g y × C(y, a)

Covariant Yoneda lets us “integrate over y” to get the desired g a. The same method works for the right unit law.

Applicative as a Monoid

Given a monoidal category, we can always define a monoid as an object m equipped with two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

satisfying the laws of associativity and unitality.

We have shown that the functor category [C, Set] (with C a symmetric monoidal category) is monoidal under Day convolution. An object in this category is a functor f. The two morphisms that would make it a candidate for a monoid are natural transformations:

μ :: f ★ f -> f
η :: C(i, -) -> f

The a component of the natural transformation μ can be rewritten as:

(∫^{x y} f x × f y × C(x ⊗ y, a)) -> f a

which is equivalent to:

∫_{x y} (f x × f y × C(x ⊗ y, a) -> f a)

or, upon currying:

∫_{x y} ((f x, f y) -> C(x ⊗ y, a) -> f a)

It turns out that a monoid defined this way is equivalent to a lax monoidal functor. This was shown by Rivas and Jaskelioff. The following derivation is due to Bob Atkey.

The trick is to start with the whole set of natural transformations from f★f to f. The multiplication μ is just one of them. We’ll express the set of natural transformations as an end:

∫_a ((f ★ f) a -> f a)

Plugging in the formula for the a component of μ, we get:

∫_a ∫_{x y} ((f x, f y) -> C(x ⊗ y, a) -> f a)

The end over a does not involve the first argument, so we can move the integral sign:

∫_{x y} ((f x, f y) -> ∫_a (C(x ⊗ y, a) -> f a))

Then we use the Yoneda lemma to “perform the integration” over a:

∫_{x y} ((f x, f y) -> f (x ⊗ y))

You may recognize this as a set of natural transformations that define a lax monoidal functor. We have established a one-to-one correspondence between these natural transformations and the ones defining monoidal multiplication using Day convolution.

The remaining part is to show the equivalence between the unit with respect to Day convolution and the second part of the definition of the lax monoidal functor, the morphism:

1 -> f i

We start with the set of natural transformations that contains our η:

∫_a (C(i, a) -> f a)

By Yoneda, this is just f i. Picking an element from a set is equivalent to defining a morphism from the singleton set 1, so for any choice of η we get:

1 -> f i

and vice versa. The two definitions are equivalent.

Notice that the monoidal unit η under Day convolution becomes the definition of pure in the Haskell version of applicative. Indeed, when we replace the category C with Set, f becomes an endofunctor, and the unit of Day convolution C(i, -) becomes the identity functor Id. We get:

η :: Id -> f

or, in components:

pure :: a -> f a

So, strictly speaking, the Haskell definition of Applicative mixes the elements of the lax closed functor and the monoidal unit under Day convolution.
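
Using the Day encoding sketched earlier, both monoid morphisms can be written down for any Applicative (mu and eta are my names):

-- multiplication, f ★ f -> f, in components
mu :: Applicative f => Day f f a -> f a
mu (Day fx fy k) = fmap k ((,) <$> fx <*> fy)

-- unit, Id -> f, in components: this is just pure
eta :: Applicative f => a -> f a
eta = pure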

Acknowledgments

I’m grateful to Mauro Jaskelioff and Exequiel Rivas for correspondence and to Bob Atkey, Dimitri Chikhladze, and Mike Shulman for answering my questions on MathOverflow.


This is part 23 of Categories for Programmers. Previously: Monads Categorically. See the Table of Contents.

Now that we have covered monads, we can reap the benefits of duality and get comonads for free simply by reversing the arrows and working in the opposite category.

Recall that, at the most basic level, monads are about composing Kleisli arrows:

a -> m b

where m is a functor that is a monad. If we use the letter w (upside down m) for the comonad, we can define co-Kleisli arrows as morphisms of the type:

w a -> b

The analog of the fish operator for co-Kleisli arrows is defined as:

(=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)

For co-Kleisli arrows to form a category we also have to have an identity co-Kleisli arrow, which is called extract:

extract :: w a -> a

This is the dual of return. We also have to impose the laws of associativity as well as left- and right-identity. Putting it all together, we could define a comonad in Haskell as:

class Functor w => Comonad w where
    (=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)
    extract :: w a -> a

In practice, we use slightly different primitives, as we’ll see shortly.

The question is, what’s the use for comonads in programming?

Programming with Comonads

Let’s compare the monad with the comonad. A monad provides a way of putting a value in a container using return. It doesn’t give you access to a value or values stored inside. Of course, data structures that implement monads might provide access to their contents, but that’s considered a bonus. There is no common interface for extracting values from a monad. And we’ve seen the example of the IO monad that prides itself on never exposing its contents.

A comonad, on the other hand, provides the means of extracting a single value from it. It does not give the means to insert values. So if you want to think of a comonad as a container, it always comes pre-filled with contents, and it lets you peek at them.

Just as a Kleisli arrow takes a value and produces some embellished result — it embellishes it with context — a co-Kleisli arrow takes a value together with a whole context and produces a result. It’s an embodiment of contextual computation.

The Product Comonad

Remember the reader monad? We introduced it to tackle the problem of implementing computations that need access to some read-only environment e. Such computations can be represented as pure functions of the form:

(a, e) -> b

We used currying to turn them into Kleisli arrows:

a -> (e -> b)

But notice that these functions already have the form of co-Kleisli arrows. Let’s massage their arguments into the more convenient functor form:

data Product e a = Prod e a
  deriving Functor

We can easily define the composition operator by making the same environment available to the arrows that we are composing:

(=>=) :: (Product e a -> b) -> (Product e b -> c) -> (Product e a -> c)
f =>= g = \(Prod e a) -> let b = f (Prod e a)
                             c = g (Prod e b)
                         in c

The implementation of extract simply ignores the environment:

extract (Prod e a) = a

Not surprisingly, the product comonad can be used to perform exactly the same computations as the reader monad. In a way, the comonadic implementation of the environment is more natural — it follows the spirit of “computation in context.” On the other hand, monads come with the convenient syntactic sugar of the do notation.

The connection between the reader monad and the product comonad goes deeper, having to do with the fact that the reader functor is the right adjoint of the product functor. In general, though, comonads cover different notions of computation than monads. We’ll see more examples later.

It’s easy to generalize the Product comonad to arbitrary product types including tuples and records.
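
For instance, here’s a sketch of the same construction for the built-in pair type, using the class defined above:

instance Comonad ((,) e) where
    f =>= g = \(e, a) -> g (e, f (e, a))
    extract (_, a) = a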

Dissecting the Composition

Continuing the process of dualization, we could go ahead and dualize monadic bind and join. Alternatively, we can repeat the process we used with monads, where we studied the anatomy of the fish operator. This approach seems more enlightening.

The starting point is the realization that the composition operator must produce a co-Kleisli arrow that takes w a and produces a c. The only way to produce a c is to apply the second function to an argument of the type w b:

(=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)
f =>= g = g ... 

But how can we produce a value of type w b that could be fed to g? We have at our disposal the argument of type w a and the function f :: w a -> b. The solution is to define the dual of bind, which is called extend:

extend :: (w a -> b) -> w a -> w b

Using extend we can implement composition:

f =>= g = g . extend f

Can we next dissect extend? You might be tempted to say, why not just apply the function w a -> b to the argument w a, but then you quickly realize that you’d have no way of converting the resulting b to w b. Remember, the comonad provides no means of lifting values. At this point, in the analogous construction for monads, we used fmap. The only way we could use fmap here would be if we had something of the type w (w a) at our disposal. If we could only turn w a into w (w a). And, conveniently, that would be exactly the dual of join. We call it duplicate:

duplicate :: w a -> w (w a)

So, just like with the definitions of the monad, we have three equivalent definitions of the comonad: using co-Kleisli arrows, extend, or duplicate. Here’s the Haskell definition taken directly from the Control.Comonad library:

class Functor w => Comonad w where
  extract :: w a -> a
  duplicate :: w a -> w (w a)
  duplicate = extend id
  extend :: (w a -> b) -> w a -> w b
  extend f = fmap f . duplicate

Provided are the default implementations of extend in terms of duplicate and vice versa, so you only need to override one of them.

The intuition behind these functions is based on the idea that, in general, a comonad can be thought of as a container filled with values of type a (the product comonad was a special case of just one value). There is a notion of the “current” value, one that’s easily accessible through extract. A co-Kleisli arrow performs some computation that is focused on the current value, but it has access to all the surrounding values. Think of Conway’s Game of Life. Each cell contains a value (usually just True or False). A comonad corresponding to the game of life would be a grid of cells focused on the “current” cell.

So what does duplicate do? It takes a comonadic container w a and produces a container of containers w (w a). The idea is that each of these containers is focused on a different a inside w a. In the game of life, you would get a grid of grids, each cell of the outer grid containing an inner grid that’s focused on a different cell.

Now look at extend. It takes a co-Kleisli arrow and a comonadic container w a filled with as. It applies the computation to all of these as, replacing them with bs. The result is a comonadic container filled with bs. extend does it by shifting the focus from one a to another and applying the co-Kleisli arrow to each of them in turn. In the game of life, the co-Kleisli arrow would calculate the new state of the current cell. To do that, it would look at its context — presumably its nearest neighbors. The default implementation of extend illustrates this process. First we call duplicate to produce all possible foci and then we apply f to each of them.

The Stream Comonad

This process of shifting the focus from one element of the container to another is best illustrated with the example of an infinite stream. Such a stream is just like a list, except that it doesn’t have the empty constructor:

data Stream a = Cons a (Stream a)

It’s trivially a Functor:

instance Functor Stream where
    fmap f (Cons a as) = Cons (f a) (fmap f as)

The focus of a stream is its first element, so here’s the implementation of extract:

extract (Cons a _) = a

duplicate produces a stream of streams, each focused on a different element.

duplicate (Cons a as) = Cons (Cons a as) (duplicate as)

The first element is the original stream, the second element is the tail of the original stream, the third element is its tail, and so on, ad infinitum.

Here’s the complete instance:

instance Comonad Stream where
    extract (Cons a _) = a
    duplicate (Cons a as) = Cons (Cons a as) (duplicate as)

This is a very functional way of looking at streams. In an imperative language, we would probably start with a method advance that shifts the stream by one position. Here, duplicate produces all shifted streams in one fell swoop. Haskell’s laziness makes this possible and even desirable. Of course, to make a Stream practical, we would also implement the analog of advance:

tail :: Stream a -> Stream a
tail (Cons a as) = as

but it’s never part of the comonadic interface.

If you have any experience with digital signal processing, you’ll see immediately that a co-Kleisli arrow for a stream is just a digital filter, and extend produces a filtered stream.

As a simple example, let’s implement the moving average filter. Here’s a function that sums n elements of a stream:

sumS :: Num a => Int -> Stream a -> a
sumS n (Cons a as) = if n <= 0 then 0 else a + sumS (n - 1) as

Here’s the function that calculates the average of the first n elements of the stream:

average :: Fractional a => Int -> Stream a -> a
average n stm = (sumS n stm) / (fromIntegral n)

Partially applied average n is a co-Kleisli arrow, so we can extend it over the whole stream:

movingAvg :: Fractional a => Int -> Stream a -> Stream a
movingAvg n = extend (average n)

The result is the stream of running averages.
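
To see it in action, here’s a quick sketch (the helpers nats and takeS are mine, not part of the comonadic interface):

nats :: Stream Double
nats = go 0
  where go n = Cons n (go (n + 1))

takeS :: Int -> Stream a -> [a]
takeS n (Cons a as) = if n <= 0 then [] else a : takeS (n - 1) as

-- takeS 4 (movingAvg 2 nats) evaluates to [0.5, 1.5, 2.5, 3.5]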

A stream is an example of a unidirectional, one-dimensional comonad. It can be easily made bidirectional or extended to two or more dimensions.

Comonad Categorically

Defining a comonad in category theory is a straightforward exercise in duality. As with the monad, we start with an endofunctor T. The two natural transformations, η and μ, that define the monad are simply reversed for the comonad:

ε :: T -> I
δ :: T -> T²

The components of these transformations correspond to extract and duplicate. Comonad laws are the mirror image of monad laws. No big surprise here.

Then there is the derivation of the monad from an adjunction. Duality reverses an adjunction: the left adjoint becomes the right adjoint and vice versa. And, since the composition R ∘ L defines a monad, L ∘ R must define a comonad. The counit of the adjunction:

ε :: L ∘ R -> I

is indeed the same ε that we see in the definition of the comonad — or, in components, as Haskell’s extract. We can also use the unit of the adjunction:

η :: I -> R ∘ L

to insert an R ∘ L in the middle of L ∘ R and produce L ∘ R ∘ L ∘ R. Making T² from T defines the δ, and that completes the definition of the comonad.

We’ve also seen that the monad is a monoid. The dual of this statement would require the use of a comonoid, so what’s a comonoid? The original definition of a monoid as a single-object category doesn’t dualize to anything interesting. When you reverse the direction of all endomorphisms, you get another monoid. Recall, however, that in our approach to a monad, we used a more general definition of a monoid as an object in a monoidal category. The construction was based on two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

The reversal of these morphisms produces a comonoid in a monoidal category:

δ :: m -> m ⊗ m
ε :: m -> i

One can write a definition of a comonoid in Haskell:

class Comonoid m where
  split   :: m -> (m, m)
  destroy :: m -> ()

but it is rather trivial. Obviously destroy ignores its argument.

destroy _ = ()

split is just a pair of functions:

split x = (f x, g x)

Now consider comonoid laws that are dual to the monoid unit laws.

lambda . bimap destroy id . split = id
rho . bimap id destroy . split = id

Here, lambda and rho are the left and right unitors, respectively (see the definition of monoidal categories). Plugging in the definitions, we get:

lambda (bimap destroy id (split x))
= lambda (bimap destroy id (f x, g x))
= lambda ((), g x)
= g x

which proves that g = id. Similarly, the second law expands to f = id. In conclusion:

split x = (x, x)

which shows that in Haskell (and, in general, in the category Set) every object is a trivial comonoid.

Fortunately there are other more interesting monoidal categories in which to define comonoids. One of them is the category of endofunctors. And it turns out that, just like the monad is a monoid in the category of endofunctors,

The comonad is a comonoid in the category of endofunctors.

The Store Comonad

Another important example of a comonad is the dual of the state monad. It’s called the costate comonad or, alternatively, the store comonad.

We’ve seen before that the state monad is generated by the adjunction that defines the exponentials:

L z = z × s
R a = s ⇒ a

We’ll use the same adjunction to define the costate comonad. A comonad is defined by the composition L ∘ R:

L (R a) = (s ⇒ a) × s

Translating this to Haskell, we start with the adjunction between the Product functor on the left and the Reader functor on the right. Composing Product after Reader is equivalent to the following definition:

data Store s a = Store (s -> a) s

The counit of the adjunction taken at the object a is the morphism:

ε_a :: ((s ⇒ a) × s) -> a

or, in Haskell notation:

counit (Prod (Reader f) s) = f s

This becomes our extract:

extract (Store f s) = f s

The unit of the adjunction is:

unit :: a -> Reader s (Product a s)
unit a = Reader (\s -> Prod a s)

We construct δ, or duplicate, as the horizontal composition:

δ :: L ∘ R -> L ∘ R ∘ L ∘ R
δ = L ∘ η ∘ R

We have to sneak η through the leftmost L, which is the Product functor. It means acting with η, or Store f, on the left component of the pair (that’s what fmap for Product would do). We get:

duplicate (Store f s) = Store (Store f) s

(Remember that, in the formula for δ, L and R stand for identity natural transformations whose components are identity morphisms.)

Here’s the complete definition of the Store comonad:

instance Comonad (Store s) where
  extract (Store f s) = f s
  duplicate (Store f s) = Store (Store f) s

You may think of the Reader part of Store as a generalized container of as that are keyed using elements of the type s. For instance, if s is Int, Reader Int a is an infinite bidirectional stream of as. Store pairs this container with a value of the key type. For instance, Reader Int a is paired with an Int. In this case, extract uses this integer to index into the infinite stream. You may think of the second component of Store as the current position.

Continuing with this example, duplicate creates a new infinite stream indexed by an Int. This stream contains streams as its elements. In particular, at the current position, it contains the original stream. But if you use some other Int (positive or negative) as the key, you’d obtain a shifted stream positioned at that new index.

In general, you can convince yourself that when extract acts on the duplicated Store it produces the original Store (in fact, the identity law for the comonad states that extract . duplicate = id).

The Store comonad plays an important role as the theoretical basis for the Lens library. Conceptually, the Store s a comonad encapsulates the idea of “focusing” (like a lens) on a particular substructure of the data type a using the type s as an index. In particular, a function of the type:

a -> Store s a

is equivalent to a pair of functions:

set :: a -> s -> a
get :: a -> s

If a is a product type, set could be implemented as setting the field of type s inside of a while returning the modified version of a. Similarly, get could be implemented to read the value of the s field from a. We’ll explore these ideas more in the next section.
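
Here’s a sketch of that correspondence (the conversion functions and their names are mine):

toStore :: (a -> s) -> (a -> s -> a) -> (a -> Store s a)
toStore get set a = Store (set a) (get a)

fromStore :: (a -> Store s a) -> (a -> s -> a, a -> s)
fromStore coalg = (set, get)
  where
    set a = let Store f _ = coalg a in f
    get a = let Store _ s = coalg a in s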

Challenges

  1. Implement Conway’s Game of Life using the Store comonad. Hint: What type do you pick for s?

Acknowledgments

I’m grateful to Edward Kmett for reading the draft of this post and pointing out flaws in my reasoning.

Next: F-Algebras.


This is part 20 of Categories for Programmers. Previously: Free/Forgetful Adjunctions. See the Table of Contents.

Programmers have developed a whole mythology around monads. It’s supposed to be one of the most abstract and difficult concepts in programming. There are people who “get it” and those who don’t. For many, the moment when they understand the concept of the monad is like a mystical experience. The monad abstracts the essence of so many diverse constructions that we simply don’t have a good analogy for it in everyday life. We are reduced to groping in the dark, like those blind men touching different parts of the elephant and exclaiming triumphantly: “It’s a rope,” “It’s a tree trunk,” or “It’s a burrito!”

Let me set the record straight: The whole mysticism around the monad is the result of a misunderstanding. The monad is a very simple concept. It’s the diversity of applications of the monad that causes the confusion.

As part of research for this post I looked up duct tape (a.k.a., duck tape) and its applications. Here’s a little sample of things that you can do with it:

  • sealing ducts
  • fixing CO2 scrubbers on board Apollo 13
  • wart treatment
  • fixing Apple’s iPhone 4 dropped call issue
  • making a prom dress
  • building a suspension bridge

Now imagine that you didn’t know what duct tape was and you were trying to figure it out based on this list. Good luck!

So I’d like to add one more item to the collection of “the monad is like…” clichés: The monad is like duct tape. Its applications are widely diverse, but its principle is very simple: it glues things together. More precisely, it composes things.

This partially explains the difficulties a lot of programmers, especially those coming from an imperative background, have with understanding the monad. The problem is that we are not used to thinking of programming in terms of function composition. This is understandable. We often give names to intermediate values rather than pass them directly from function to function. We also inline short segments of glue code rather than abstract them into helper functions. Here’s an imperative-style implementation of the vector-length function in C:

double vlen(double * v) {
  double d = 0.0;
  int n;
  for (n = 0; n < 3; ++n)
    d += v[n] * v[n];
  return sqrt(d);
}

Compare this with the (stylized) Haskell version that makes function composition explicit:

vlen = sqrt . sum . fmap (flip (^) 2)

(Here, to make things even more cryptic, I partially applied the exponentiation operator (^) by setting its second argument to 2.)

I’m not arguing that Haskell’s point-free style is always better, just that function composition is at the bottom of everything we do in programming. And even though we are effectively composing functions, Haskell does go to great lengths to provide imperative-style syntax called the do notation for monadic composition. We’ll see its use later. But first, let me explain why we need monadic composition in the first place.

The Kleisli Category

We have previously arrived at the writer monad by embellishing regular functions. The particular embellishment was done by pairing their return values with strings or, more generally, with elements of a monoid. We can now recognize that such embellishment is a functor:

newtype Writer w a = Writer (a, w)

instance Functor (Writer w) where
  fmap f (Writer (a, w)) = Writer (f a, w)

We have subsequently found a way of composing embellished functions, or Kleisli arrows, which are functions of the form:

a -> Writer w b

It was inside the composition that we implemented the accumulation of the log.

We are now ready for a more general definition of the Kleisli category. We start with a category C and an endofunctor m. The corresponding Kleisli category K has the same objects as C, but its morphisms are different. A morphism between two objects a and b in K is implemented as a morphism:

a -> m b

in the original category C. It’s important to keep in mind that we treat a Kleisli arrow in K as a morphism between a and b, and not between a and m b.

In our example, m was specialized to Writer w, for some fixed monoid w.

Kleisli arrows form a category only if we can define proper composition for them. If there is a composition, which is associative and has an identity arrow for every object, then the functor m is called a monad, and the resulting category is called the Kleisli category.

In Haskell, Kleisli composition is defined using the fish operator >=>, and the identity arrow is a polymorphic function called return. Here’s the definition of a monad using Kleisli composition:

class Monad m where
  (>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
  return :: a -> m a

Keep in mind that there are many equivalent ways of defining a monad, and that this is not the primary one in the Haskell ecosystem. I like it for its conceptual simplicity and the intuition it provides, but there are other definitions that are more convenient when programming. We’ll talk about them momentarily.

In this formulation, monad laws are very easy to express. They cannot be enforced in Haskell, but they can be used for equational reasoning. They are simply the standard composition laws for the Kleisli category:

(f >=> g) >=> h = f >=> (g >=> h) -- associativity
return >=> f = f                  -- left unit
f >=> return = f                  -- right unit

This kind of a definition also expresses what a monad really is: it’s a way of composing embellished functions. It’s not about side effects or state. It’s about composition. As we’ll see later, embellished functions may be used to express a variety of effects or state, but that’s not what the monad is for. The monad is the sticky duct tape that ties one end of an embellished function to the other end of an embellished function.

Going back to our Writer example: The logging functions (the Kleisli arrows for the Writer functor) form a category because Writer is a monad:

instance Monoid w => Monad (Writer w) where
    f >=> g = \a -> 
        let Writer (b, s)  = f a
            Writer (c, s') = g b
        in Writer (c, s `mappend` s')
    return a = Writer (a, mempty)

Monad laws for Writer w are satisfied as long as monoid laws for w are satisfied (they can’t be enforced in Haskell either).

There’s a useful Kleisli arrow defined for the Writer monad called tell. Its sole purpose is to add its argument to the log:

tell :: w -> Writer w ()
tell s = Writer ((), s)

We’ll use it later as a building block for other monadic functions.

Fish Anatomy

When implementing the fish operator for different monads you quickly realize that a lot of code is repeated and can be easily factored out. To begin with, the Kleisli composition of two functions must return a function, so its implementation may as well start with a lambda taking an argument of type a:

(>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \a -> ...

The only thing we can do with this argument is to pass it to f:

f >=> g = \a -> let mb = f a
                in ...

At this point we have to produce the result of type m c, having at our disposal an object of type m b and a function g :: b -> m c. Let’s define a function that does that for us. This function is called bind and is usually written in the form of an infix operator:

(>>=) :: m a -> (a -> m b) -> m b

For every monad, instead of defining the fish operator, we may instead define bind. In fact the standard Haskell definition of a monad uses bind:

class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b
    return :: a -> m a

Here’s the definition of bind for the Writer monad:

(Writer (a, w)) >>= f = let Writer (b, w') = f a
                        in  Writer (b, w `mappend` w')

It is indeed shorter than the definition of the fish operator.

It’s possible to further dissect bind, taking advantage of the fact that m is a functor. We can use fmap to apply the function a -> m b to the contents of m a. This will turn each a into m b. The result of the application is therefore of type m (m b). This is not exactly what we want — we need the result of type m b — but we’re close. All we need is a function that collapses or flattens the double application of m. Such a function is called join:

join :: m (m a) -> m a

Using join, we can rewrite bind as:

ma >>= f = join (fmap f ma)

That leads us to the third option for defining a monad:

class Functor m => Monad m where
    join :: m (m a) -> m a
    return :: a -> m a

Here we have explicitly requested that m be a Functor. We didn’t have to do that in the previous two definitions of the monad. That’s because any type constructor m that supports either the fish or the bind operator is automatically a functor. For instance, it’s possible to define fmap in terms of bind and return:

fmap f ma = ma >>= \a -> return (f a)

For completeness, here’s join for the Writer monad:

join :: Monoid w => Writer w (Writer w a) -> Writer w a
join (Writer ((Writer (a, w')), w)) = Writer (a, w `mappend` w')

The do Notation

One way of writing code using monads is to work with Kleisli arrows — composing them using the fish operator. This mode of programming is the generalization of the point-free style. Point-free code is compact and often quite elegant. In general, though, it can be hard to understand, bordering on cryptic. That’s why most programmers prefer to give names to function arguments and intermediate values.

When dealing with monads it means favoring the bind operator over the fish operator. Bind takes a monadic value and returns a monadic value. The programmer may choose to give names to those values. But that’s hardly an improvement. What we really want is to pretend that we are dealing with regular values, not the monadic containers that encapsulate them. That’s how imperative code works — side effects, such as updating a global log, are mostly hidden from view. And that’s what the do notation emulates in Haskell.

You might be wondering then, why use monads at all? If we want to make side effects invisible, why not stick to an imperative language? The answer is that the monad gives us much better control over side effects. For instance, the log in the Writer monad is passed from function to function and is never exposed globally. There is no possibility of garbling the log or creating a data race. Also, monadic code is clearly demarcated and cordoned off from the rest of the program.

The do notation is just syntactic sugar for monadic composition. On the surface, it looks a lot like imperative code, but it translates directly to a sequence of binds and lambda expressions.

For instance, take the example we used previously to illustrate the composition of Kleisli arrows in the Writer monad. Using our current definitions, it could be rewritten as:

process :: String -> Writer String [String]
process = upCase >=> toWords

This function turns all characters in the input string to upper case and splits it into words, all the while producing a log of its actions.

In the do notation it would look like this:

process s = do
    upStr <- upCase s
    toWords upStr

Here, upStr is just a String, even though upCase produces a Writer:

upCase :: String -> Writer String String
upCase s = Writer (map toUpper s, "upCase ")

This is because the do block is desugared by the compiler to:

process s = 
   upCase s >>= \ upStr ->
       toWords upStr

The monadic result of upCase is bound to a lambda that takes a String. It’s the name of this string that shows up in the do block. When reading the line:

upStr <- upCase s

we say that upStr gets the result of upCase s.
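
For completeness, here’s the definition of toWords that we’re about to inline; it matches its description below (a sketch consistent with the earlier Writer examples):

toWords :: String -> Writer String [String]
toWords s = Writer (words s, "toWords ")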

The pseudo-imperative style is even more pronounced when we inline toWords. We replace it with the call to tell, which logs the string "toWords ", followed by the call to return with the result of splitting the string upStr using words. Notice that words is a regular function working on strings.

process s = do
    upStr <- upCase s
    tell "toWords "
    return (words upStr)

Here, each line in the do block introduces a new nested bind in the desugared code:

process s = 
    upCase s >>= \upStr ->
      tell "toWords " >>= \() ->
        return (words upStr)

Notice that tell produces a unit value, so it doesn’t have to be passed to the following lambda. Ignoring the contents of a monadic result (but not its effect — here, the contribution to the log) is quite common, so there is a special operator to replace bind in that case:

(>>) :: m a -> m b -> m b
m >> k = m >>= (\_ -> k)

The actual desugaring of our code looks like this:

process s = 
    upCase s >>= \upStr ->
      tell "toWords " >>
        return (words upStr)

In general, do blocks consist of lines (or sub-blocks) that either use the left arrow to introduce new names that are then available in the rest of the code, or are executed purely for side-effects. Bind operators are implicit between the lines of code. Incidentally, it is possible, in Haskell, to replace the formatting in the do blocks with braces and semicolons. This provides the justification for describing the monad as a way of overloading the semicolon.

Notice that the nesting of lambdas and bind operators when desugaring the do notation has the effect of influencing the execution of the rest of the do block based on the result of each line. This property can be used to introduce complex control structures, for instance to simulate exceptions.

Interestingly, the equivalent of the do notation has found its application in imperative languages, C++ in particular. I’m talking about resumable functions or coroutines. It’s not a secret that C++ futures form a monad. It’s an example of the continuation monad, which we’ll discuss shortly. The problem with continuations is that they are very hard to compose. In Haskell, we use the do notation to turn the spaghetti of “my handler will call your handler” into something that looks very much like sequential code. Resumable functions make the same transformation possible in C++. And the same mechanism can be applied to turn the spaghetti of nested loops into list comprehensions or “generators,” which are essentially the do notation for the list monad. Without the unifying abstraction of the monad, each of these problems is typically addressed by providing custom extensions to the language. In Haskell, this is all dealt with through libraries.

Next: Monads and Effects.


In the previous blog post we talked about relations. I gave an example of a thin category as a kind of relation that’s compatible with categorical structure. In a thin category, the hom-set is either an empty set or a singleton set. It so happens that these two sets form a sub-category of Set. It’s a very interesting category. It consists of the two objects — let’s give them new names o and i. Besides the mandatory identity morphisms, we also have a single morphism going from o to i, corresponding to the function we call absurd in Haskell:

absurd :: Void -> a
absurd v = case v of {}  -- Void has no values, so there is nothing to handle

This tiny category is sometimes called the interval category. I’ll call it o->i.


The object o is initial, and the object i is terminal — just as the empty set and the singleton set were in Set. Moreover, the cartesian product from Set can be used to define a tensor product in o->i. We’ll use this tensor product to build a monoidal category.

Monoidal Categories

A tensor product is a bifunctor ⊗ with some additional properties. Here, in the interval category, we’ll define it through the following multiplication table:

o ⊗ o = o
o ⊗ i = o
i ⊗ o = o
i ⊗ i = i

Its action on pairs of morphisms (what we call bimap in Haskell) is also easy to define. For instance, what’s the action of ⊗ on the pair <absurd, id_i>? This pair takes the pair <o, i> to <i, i>. Under the bifunctor ⊗, the first pair produces o (since o ⊗ i = o), and the second i (since i ⊗ i = i). There is only one morphism from o to i, so we have:

absurd ⊗ id_i = absurd

If we designate the (terminal) object i as the unit of the tensor product, we get a (symmetric) monoidal category. A monoidal category is a category with a tensor product that’s associative and unital (usually, up to isomorphism — but here, strictly).

Now imagine that we replace hom-sets in our original thin category with objects from the monoidal category o->i (we’ll call them hom-objects). After all, we were only using two sets from Set. We can replace the empty hom-set with the object o, and the singleton hom-set with the object i. We get what’s called an enriched category (although, in this case, it’s more of an impoverished category).


An example of a thin category (a total order with objects 1, 2, and 3) with hom-sets replaced by hom-objects from the interval category. Think of i as corresponding to less-than-or-equal, and o as greater.

Enriched Categories

An enriched category has hom-objects instead of hom-sets. These are objects from some monoidal category V called the base category. The base category has to be monoidal because we want to define something that would replace the usual composition of morphisms. Morphisms are elements of hom-sets. However, hom-objects, in general, have no elements. We don’t know what an element of o or i is.

So to fully define an enriched category we have to come up with a sensible substitute for composition. To do that, we need to rethink composition — first in terms of hom-sets, then in terms of hom-objects.

We can think of composition as a function from a cartesian product of two hom-sets to a third hom-set:

compose_{a b c} :: C(b, c) × C(a, b) -> C(a, c)

Generalizing it, we can replace hom-sets with hom-objects (here, either o or i), the cartesian product with the tensor product, and a function with a morphism (notice: it’s a morphism in our monoidal category o->i). These composition-defining morphisms form a “composition table” for hom-objects.

As an example, take the composition of two i’s. Their product i ⊗ i is i again, and there is only one morphism out of i, the identity morphism. In terms of the original hom-sets it would mean that the composition of two morphisms always exists. In general, we have to impose this condition when we’re defining a category, enriched or not — here it just happens automatically.

For instance (see illustration), compose_{0 1 2} = id_i. Indeed, the composition morphism in question has the signature:

compose_{0 1 2} :: C(1, 2) ⊗ C(0, 1) -> C(0, 2)

Here C(1, 2) = C(0, 1) = C(0, 2) = i and i ⊗ i = i, so this is a morphism from i to i, and the only such morphism is the identity id_i.

In every category we must also have identity morphisms. These are special elements in the hom-sets of the form C(a, a). We have to find a way to define their equivalent in the enriched setting. We’ll use the standard trick of defining generalized elements. It’s based on the observation that selecting an element from a set s is the same as selecting a morphism that goes from the singleton set (the terminal object in Set) to s. In a monoidal category, we replace the terminal object with the monoidal unit.

So, instead of picking an identity morphism in C(a, a), we use a morphism from the monoidal unit i:

j_a :: i -> C(a, a)

Again, in the case of a thin category, there is only one morphism leaving i, and that’s the identity morphism. That’s why we are automatically guaranteed that, in a thin category, all hom-objects of the form C(a, a) are equal to i.

Composition in a category must also satisfy associativity and identity conditions. Associativity in the enriched setting translates straightforwardly to a commuting diagram, but identity is a little trickier. We have to use ja to “select” the identity from the hom-object C(a, a) while composing it with some other hom-object C(b, a). We start with the product:

i ⊗ C(b, a)

Because i is the monoidal unit, this is equal to C(b, a). On the other hand, we can tensor together two morphisms in o->i — remember, a tensor product is a bifunctor, so it also acts on morphisms. Here we’ll tensor j_a and the identity at C(b, a):

j_a ⊗ id_{C(b, a)}

We act with this product on the product object i ⊗ C(b, a) to get C(a, a) ⊗ C(b, a). Then we use composition to get:

C(a, a) ⊗ C(b, a) -> C(b, a)

These two ways of getting to C(b, a) must coincide, leading to the identity condition for enriched categories.


Now that we’ve seen how the enrichment works for thin categories, we can apply the same mechanism to define categories enriched over any monoidal category V.

The important part is that V defines a (bifunctor) tensor product ⊗ and a unit object i. Associativity and unitality may be either strict or up to isomorphism (notice that a regular cartesian product is associative only up to isomorphism — (a, (b, c)) is not equal to ((a, b), c)).

Instead of sets of morphisms, an enriched category has hom-objects that are objects in V. We use the same notation as for hom-sets: C(a, b) is the hom-object that connects object a to object b. Composition is replaced by morphisms in V:

compose_{a b c} :: C(b, c) ⊗ C(a, b) -> C(a, c)

Instead of identity morphisms, we have the morphisms in V:

j_a :: i -> C(a, a)

Finally, associativity and unitality of composition are imposed in the form of a few commuting diagrams.

Impoverished Yoneda

The Yoneda Lemma talks about functors from an arbitrary category to Set. To generalize the Yoneda lemma to enriched categories we first have to generalize functors. Their action on objects is not a problem; it’s the action on morphisms that needs our attention.

Enriched Functors

Since in an enriched category we no longer have access to individual morphisms, we have to define the action of functors on hom-objects wholesale. This is only possible if the hom-objects in the target category come from the same base category V as the hom-objects in the source category. In other words, both categories must be enriched over the same monoidal category. We can then use regular morphisms in V to map hom-objects.

Between any two objects a and b in C we have the hom-object C(a, b). The two objects are mapped by the functor f to f a and f b, and there is a hom-object between them, D(f a, f b). The action of f on C(a, b) is defined as a morphism in V:

C(a, b) -> D(f a, f b)


Let’s see what this means in our impoverished thin category. First of all, a functor will always map related objects to related objects. That’s because there is no morphism from i to o. A bond between two objects cannot be broken by an impoverished functor.

If the relation is a partial order, for instance less-than-or-equal, then it follows that a functor between posets preserves the ordering — it’s monotone.

A functor must also preserve composition and identity. The former can be easily expressed as a commuting diagram. Identity preservation in the enriched setting involves the use of j_a. Starting from i we can use j_a to get to C(a, a), which the functor maps to D(f a, f a). Or we can use j_{f a} to get there directly. We insist that both paths be the same.


In our impoverished category, this just works because j_a is the identity morphism and all C(a, a)’s and D(a, a)’s are equal to i.

Back to Yoneda: You might remember that we start the Yoneda construction by fixing one object a in C, and then varying another object x to define the functor:

x -> C(a, x)

This functor maps C to Set, because xs are objects in C, and hom-sets are sets — objects of Set.

In the enriched environment, the same construction results in a mapping from C to V, because hom-objects are objects of the base category V.

But is this mapping a functor? This is far from obvious, considering that C is an enriched category, and we have just said that enriched functors can only go between categories that are enriched over the same base category. The target of our functor, the category V, is not enriched. It turns out that, as long as V is closed, we can turn it into an enriched category.

Self Enrichment

Let’s first see how we would enrich our tiny category o->i. First of all, let’s check if it’s closed. Closedness means that hom-sets can be objectified — for every hom-set there is an object called the exponential object that objectifies it. The exponential object in a (symmetric) monoidal category is defined through the adjunction:

V(a⊗b, c) ≅ V(b, c^a)

This is the standard adjunction for defining exponentials, except that we are using the tensor product instead of the regular product. The hom-sets are sets of morphisms between objects in V (here, in o->i).

Let’s check, for instance, if there’s an object that corresponds to the hom-set V(o, i), which we would call i^o. We have:

V(o⊗b, i) ≅ V(b, i^o)

Whatever b we choose, when multiplied by o it will yield o, so the left hand side is V(o, i), a singleton set. Therefore V(b, i^o) must be a singleton set too, for any choice of b. In particular, if b is i, we see that the only choice for i^o is:

i^o = i

You can check that all exponentiation rules in o->i can be obtained from simple algebra by replacing o with zero and i with one.
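
Here is a tiny Haskell model of this arithmetic (a sketch with made-up names, reading o as False and i as True): the tensor product is conjunction and the exponential object is logical implication, which is exactly what the adjunction above forces.

type Obj = Bool  -- o = False, i = True

tensor :: Obj -> Obj -> Obj
tensor = (&&)

-- expo a c models the exponential object c^a:
expo :: Obj -> Obj -> Obj
expo a c = not a || c

-- e.g. expo False True == True confirms i^o = i, matching 1^0 = 1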

Every closed symmetric monoidal category can be enriched in itself by replacing hom-sets with the corresponding exponentials. For instance, in our case, we end up replacing all empty hom-sets in the category o->i with o, and all singleton hom-sets with i. You can easily convince yourself that it works, and the result is the category o->i enriched in itself.

[Figure: Impoverished 8]

We can now take a category C that’s enriched over a closed symmetric monoidal category V, and show that the mapping:

x -> C(a, x)

is indeed an enriched functor. It maps objects of C to objects of V and hom-objects of C to hom-objects (exponentials) of V.

[Figure: Impoverished 10. An example of a functor from a total order enriched over the interval category to the interval category. This particular functor is equal to the hom-functor x -> C(a, x) for a equal to 3.]

Let’s see what this functor looks like in a poset. Given some a, the hom-object C(a, x) is equal to i if a <= x. So an x is mapped to i if it’s greater-or-equal to a, otherwise it’s mapped to o. If you think of the objects mapped to o as colored black and the ones mapped to i as colored red, you’ll see that the object a and everything greater than or equal to it must be painted red.
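
Concretely, the object part of such a hom-functor on a poset of integers is just a comparison (a toy sketch; homFrom is a made-up name):

-- maps x to i (True, "red") exactly when a <= x:
homFrom :: Int -> (Int -> Bool)
homFrom a x = a <= x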

Enriched Natural Transformations

Now that we know what enriched functors are, we have to define natural transformations between them. This is a bit tricky, since a regular natural transformation is defined as a family of morphisms. But again, instead of picking individual morphisms from hom-sets we can work with the closest equivalent: generalized elements — morphisms going from the unit object i to hom-objects. So an enriched natural transformation between two enriched functors f and g is defined as a family of morphisms in V:

αa :: i -> D(f a, g a)

Natural transformations are very limited in our impoverished category. Let’s see what morphisms from i are at our disposal. We have one morphism from i to i: the identity morphism idi. This makes sense — we think of i as having a single element. There is no morphism from i back to o; and that makes sense too — we think of o as having no elements. The only possible generalized components of an impoverished natural transformation between two functors f and g correspond to D(f a, g a) equal to i; which means that, for every a, f a must be less-than-or-equal to g a. A natural transformation can only push a functor uphill.

When the target category is o->i, as in the impoverished Yoneda lemma, a natural transformation may never connect red to black. So once the first functor switches to red, the other must follow.
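
In the same Bool model, the existence of a natural transformation between two such functors is a pointwise check (a sketch; natExists is a made-up helper, and False <= True in Haskell’s Ord instance for Bool plays the role of the single arrow from o to i):

natExists :: [Int] -> (Int -> Bool) -> (Int -> Bool) -> Bool
natExists objs f g = all (\a -> f a <= g a) objs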

Naturality Condition

There is, of course, a naturality condition that goes with this definition of a natural transformation. The essence of it is that it shouldn’t matter if we first apply a functor and then the natural transformation α, or the other way around. In the enriched context, there are two ways of getting from C(a, b) to D(f a, g b). One is to multiply C(a, b) by i on the right:

C(a, b) ⊗ i

apply the morphism g ⊗ αa to get:

D(g a, g b) ⊗ D(f a, g a)

and then apply composition to get:

D(f a, g b)

The other way is to multiply C(a, b) by i on the left:

i ⊗ C(a, b)

apply αb ⊗ f to get:

D(f b, g b) ⊗ D(f a, f b)

and compose the two to get:

D(f a, g b)

The naturality condition requires that this diagram commute.

[Figure: Impoverished 11]

Enriched Yoneda

The enriched version of the Yoneda lemma talks about enriched natural transformations from the functor x -> C(a, x) to any enriched functor f that goes from C to V.

Consider for a moment a functor from a poset to our tiny category o->i (which, by the way, is also a poset). It will map some objects to o (black) and others to i (red). As we’ve seen, a functor must preserve the less-than-or-equal relation, so once we get into the red territory, there is no going back to black. And a natural transformation may only repaint black to red, not the other way around.

Now we would like to say that natural transformations from x -> C(a, x) to f are in one-to-one correspondence with the elements of f a, except that f a is not a set, so it doesn’t have elements. It’s an object in V. So instead of talking about elements of f a, we’ll talk about generalized elements — morphisms from the unit object i to f a. And that’s how the enriched Yoneda lemma is formulated — as a natural bijection between the set of natural transformations and a set of morphisms from the unit object to f a.

Nat(C(a, -), f) ≅ V(i, f a)

In our running example, there are only two possible values for f a.

  1. If the value is o then there is no morphism from i to it. The Yoneda lemma tells us that there is no natural transformation in that case. That makes sense, because the value of the functor x -> C(a, x) at x=a is i, and there is no morphism from i to o.
  2. If the value is i then there is exactly one morphism from i to it — the identity. The Yoneda lemma tells us that there is just one natural transformation in that case. It’s the natural transformation whose generalized component at any object x is i->i.

Strong Enriched Yoneda

There is something unsatisfactory in the fact that the enriched Yoneda lemma ends up using a mapping between sets. First we try to get away from sets as far as possible, then we go back to sets of morphisms. It feels like cheating. Not to worry! There is a stronger version of the Yoneda lemma that deals with this problem. What we need is to replace the set of natural transformations with an object in V that would represent them — just like we replaced the set of morphisms with the exponential object. Such an object is defined as an end:

∫x V(f x, g x)

The strong version of the Yoneda lemma establishes the natural isomorphism:

∫x V(C(a, x), f x) ≅ f a
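
In the interval-category model, the end becomes a conjunction over all objects and the internal hom becomes implication, so the strong Yoneda lemma reads: the conjunction of (a <= x implies f x), taken over all x, equals f a. A finite sanity check (yonedaLhs is a made-up name; the result agrees with f a provided f is monotone and a is among the listed objects):

yonedaLhs :: [Int] -> Int -> (Int -> Bool) -> Bool
yonedaLhs objs a f = and [ not (a <= x) || f x | x <- objs ]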

Enriched Profunctors

We’ve seen that a profunctor is a functor from a product category Cop × D to Set. The enriched version of a profunctor requires the notion of a product of enriched categories. We would like the product of enriched categories to also be an enriched category. In fact, we would like it to be enriched over the same base category V as the component categories.

We’ll define objects in such a category as pairs of objects from the component categories, but the hom-objects will be defined as tensor products of the component hom-objects. In the enriched product category, the hom-object between two pairs, <c, d> and <c', d'>, is:

(Cop ⊗ D)(<c, d>, <c', d'>) = Cop(c, c') ⊗ D(d, d')

You can convince yourself that composition of such hom-objects requires the tensor product to be symmetric (at least up to isomorphism). That’s because you have to be able to rearrange the hom-objects in a tensor product of tensor products.

An enriched profunctor is defined as an enriched functor from the tensor product of two categories to the (self-enriched) base category:

Cop ⊗ D -> V

Just like regular profunctors, enriched profunctors can be composed using the coend formula. The only difference is that the cartesian product is replaced by the tensor product in V. They form a bicategory called V-Prof.
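
In the Set-enriched case (Hask), the coend in that composition formula surfaces as an existential type. The following sketch mirrors the Procompose type from Data.Profunctor.Composition:

{-# LANGUAGE ExistentialQuantification #-}

-- composing q (from d to x) with p (from x to c),
-- the intermediate x hidden by the existential (the coend):
data Procompose p q d c = forall x. Procompose (p x c) (q d x)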

Enriched profunctors are the basis of the definition of Tambara modules, which are relevant in the application to Haskell lenses.

Conclusion

One of the reasons for using category theory is to get away from set theory. In general, objects in a category don’t have to form sets. The morphisms, however, are elements of sets — the hom-sets. Enriched categories go a step further and replace even those sets with categorical objects. However, it’s not categories all the way down — the base category that’s used for enrichment is still a regular old category with hom-sets.

Acknowledgments

I’m grateful to Gershom Bazerman for useful comments and to André van Meulebrouck for checking the grammar and spelling.


This is part 18 of Categories for Programmers. Previously: It’s All About Morphisms. See the Table of Contents.

In mathematics we have various ways of saying that one thing is like another. The strictest is equality. Two things are equal if there is no way to distinguish one from another. One can be substituted for the other in every imaginable context. For instance, did you notice that we used equality of morphisms every time we talked about commuting diagrams? That’s because morphisms form a set (hom-set) and set elements can be compared for equality.

But equality is often too strong. There are many examples of things being the same for all intents and purposes, without actually being equal. For instance, the pair type (Bool, Char) is not strictly equal to (Char, Bool), but we understand that they contain the same information. This concept is best captured by an isomorphism between two types — a morphism that’s invertible. Since it’s a morphism, it preserves the structure; and being “iso” means that it’s part of a round trip that lands you in the same spot, no matter on which side you start. In the case of pairs, this isomorphism is called swap:

swap       :: (a,b) -> (b,a)
swap (a,b) = (b,a)

swap happens to be its own inverse.

Adjunction and Unit/Counit Pair

When we talk about categories being isomorphic, we express this in terms of mappings between categories, a.k.a. functors. We would like to be able to say that two categories C and D are isomorphic if there exists a functor R (“right”) from C to D, which is invertible. In other words, there exists another functor L (“left”) from D back to C which, when composed with R, is equal to the identity functor I. There are two possible compositions, R ∘ L and L ∘ R; and two possible identity functors: one in C and another in D.

[Figure: Adj-1]

But here’s the tricky part: What does it mean for two functors to be equal? What do we mean by this equality:

R ∘ L = ID

or this one:

L ∘ R = IC

It would be reasonable to define functor equality in terms of equality of objects. Two functors, when acting on equal objects, should produce equal objects. But we don’t, in general, have the notion of object equality in an arbitrary category. It’s just not part of the definition. (Going deeper into this rabbit hole of “what equality really is,” we would end up in Homotopy Type Theory.)

You might argue that functors are morphisms in the category of categories, so they should be equality-comparable. And indeed, as long as we are talking about small categories, where objects form a set, we can use the equality of elements of a set to equality-compare objects.

But, remember, Cat is really a 2-category. Hom-sets in a 2-category have additional structure — there are 2-morphisms acting between 1-morphisms. In Cat, 1-morphisms are functors, and 2-morphisms are natural transformations. So it’s more natural (can’t avoid this pun!) to consider natural isomorphisms as substitutes for equality when talking about functors.

So, instead of isomorphism of categories, it makes sense to consider a more general notion of equivalence. Two categories C and D are equivalent if we can find two functors going back and forth between them, whose composition (either way) is naturally isomorphic to the identity functor. In other words, there is a two-way natural transformation between the composition R ∘ L and the identity functor ID, and another between L ∘ R and the identity functor IC.

Adjunction is even weaker than equivalence, because it doesn’t require that the composition of the two functors be isomorphic to the identity functor. Instead it stipulates the existence of a one-way natural transformation from ID to R∘L, and another from L∘R to IC. Here are the signatures of these two natural transformations:

η :: ID -> R ∘ L
ε :: L ∘ R -> IC

η is called the unit, and ε the counit of the adjunction.

Notice the asymmetry between these two definitions. In general, we don’t have the two remaining mappings:

R ∘ L -> ID -- not necessarily
IC -> L ∘ R -- not necessarily

Because of this asymmetry, the functor L is called the left adjoint to the functor R, while the functor R is the right adjoint to L. (Of course, left and right make sense only if you draw your diagrams one particular way.)

The compact notation for the adjunction is:

L ⊣ R

To better understand the adjunction, let’s analyze the unit and the counit in more detail.

[Figure: Adj-Unit]

Let’s start with the unit. It’s a natural transformation, so it’s a family of morphisms. Given an object d in D, the component of η is a morphism between I d, which is equal to d, and (R ∘ L) d; which, in the picture, is called d':

ηd :: d -> (R ∘ L) d

Notice that the composition R∘L is an endofunctor in D.

This equation tells us that we can pick any object d in D as our starting point, and use the round trip functor R ∘ L to pick our target object d'. Then we shoot an arrow — the morphism ηd — to our target.

By the same token, the component of the counit ε can be described as:

εc :: (L ∘ R) c -> c

It tells us that we can pick any object c in C as our target, and use the round trip functor L ∘ R to pick the source c' = (L ∘ R) c. Then we shoot the arrow — the morphism εc — from the source to the target.

Another way of looking at unit and counit is that unit lets us introduce the composition R ∘ L anywhere we could insert an identity functor on D; and counit lets us eliminate the composition L ∘ R, replacing it with the identity on C. That leads to some “obvious” consistency conditions, which make sure that introduction followed by elimination doesn’t change anything:

L = L ∘ ID -> L ∘ R ∘ L -> IC ∘ L = L
R = ID ∘ R -> R ∘ L ∘ R -> R ∘ IC = R

These are called triangular identities because they make the following diagrams commute:

[Figure: the triangle identities (triangles, triangles-2)]

These are diagrams in the functor category: the arrows are natural transformations, and their composition is the horizontal composition of natural transformations. In components, these identities become:

ε L d ∘ L η d = id L d
R ε c ∘ η R c = id R c

We often see unit and counit in Haskell under different names. Unit is known as return (or pure, in the definition of Applicative):

return :: d -> m d

and counit as extract:

extract :: w c -> c

Here, m is the (endo-) functor corresponding to R∘L, and w is the (endo-) functor corresponding to L∘R. As we’ll see later, they are part of the definition of a monad and a comonad, respectively.

If you think of an endofunctor as a container, the unit (or return) is a polymorphic function that creates a default box around a value of arbitrary type. The counit (or extract) does the reverse: it retrieves or produces a single value from a container.

We’ll see later that every pair of adjoint functors defines a monad and a comonad. Conversely, every monad or comonad may be factorized into a pair of adjoint functors — this factorization is not unique, though.

In Haskell, we use monads a lot, but only rarely factorize them into pairs of adjoint functors, primarily because those functors would normally take us out of Hask.

We can however define adjunctions of endofunctors in Haskell. Here’s part of the definition taken from Data.Functor.Adjunction:

class (Functor f, Representable u) =>
      Adjunction f u | f -> u, u -> f where
  unit         :: a -> u (f a)
  counit       :: f (u a) -> a

This definition requires some explanation. First of all, it describes a multi-parameter type class — the two parameters being f and u. It establishes a relation called Adjunction between these two type constructors.

Additional conditions, after the vertical bar, specify functional dependencies. For instance, f -> u means that u is determined by f (the relation between f and u is a function, here on type constructors). Conversely, u -> f means that, if we know u, then f is uniquely determined.

I’ll explain in a moment why, in Haskell, we can impose the condition that the right adjoint u be a representable functor.

Adjunctions and Hom-Sets

There is an equivalent definition of the adjunction in terms of natural isomorphisms of hom-sets. This definition ties nicely with universal constructions we’ve been studying so far. Every time you hear the statement that there is some unique morphism, which factorizes some construction, you should think of it as a mapping of some set to a hom-set. That’s the meaning of “picking a unique morphism.”

Furthermore, factorization can be often described in terms of natural transformations. Factorization involves commuting diagrams — some morphism being equal to a composition of two morphisms (factors). A natural transformation maps morphisms to commuting diagrams. So, in a universal construction, we go from a morphism to a commuting diagram, and then to a unique morphism. We end up with a mapping from morphism to morphism, or from one hom-set to another (usually in different categories). If this mapping is invertible, and if it can be naturally extended across all hom-sets, we have an adjunction.

The main difference between universal constructions and adjunctions is that the latter are defined globally — for all hom-sets. For instance, using a universal construction you can define a product of two select objects, even if it doesn’t exist for any other pair of objects in that category. As we’ll see soon, if the product of any pair of objects exists in a category, it can also be defined through an adjunction.

[Figure: Adj-HomSets]

Here’s the alternative definition of the adjunction using hom-sets. As before, we have two functors L :: D->C and R :: C->D. We pick two arbitrary objects: the source object d in D, and the target object c in C. We can map the source object d to C using L. Now we have two objects in C, L d and c. They define a hom-set:

C(L d, c)

Similarly, we can map the target object c using R. Now we have two objects in D, d and R c. They, too, define a hom set:

D(d, R c)

We say that L is left adjoint to R iff there is an isomorphism of hom sets:

C(L d, c) ≅ D(d, R c)

that is natural both in d and c.

Naturality means that the source d can be varied smoothly across D; and the target c, across C. More precisely, we have a natural transformation φ between the following two (covariant) functors from C to Set. Here’s the action of these functors on objects:

c -> C(L d, c)
c -> D(d, R c)

The other natural transformation, ψ, acts between the following (contravariant) functors:

d -> C(L d, c)
d -> D(d, R c)

Both natural transformations must be invertible.

It’s easy to show that the two definitions of the adjunction are equivalent. For instance, let’s derive the unit transformation starting from the isomorphism of hom-sets:

C(L d, c) ≅ D(d, R c)

Since this isomorphism works for any object c, it must also work for c = L d:

C(L d, L d) ≅ D(d, (R ∘ L) d)

We know that the left hand side must contain at least one morphism, the identity. The natural transformation will map this morphism to an element of D(d, (R ∘ L) d) or, inserting the identity functor I, a morphism in:

D(I d, (R ∘ L) d)

We get a family of morphisms parameterized by d. They form a natural transformation between the functor I and the functor R ∘ L (the naturality condition is easy to verify). This is exactly our unit, η.

Conversely, starting from the existence of the unit and counit, we can define the transformations between hom-sets. For instance, let’s pick an arbitrary morphism f in the hom-set C(L d, c). We want to define a φ that, acting on f, produces a morphism in D(d, R c).

There isn’t really much choice. One thing we can try is to lift f using R. That will produce a morphism R f from R (L d) to R c — a morphism that’s an element of D((R ∘ L) d, R c).

What we need for a component of φ is a morphism from d to R c. That’s not a problem, since we can use a component of ηd to get from d to (R ∘ L) d. We get:

φf = R f ∘ ηd

The other direction is analogous, and so is the derivation of ψ.

Going back to the Haskell definition of Adjunction, the natural transformations φ and ψ are replaced by polymorphic (in a and b) functions leftAdjunct and rightAdjunct, respectively. The functors L and R are called f and u:

class (Functor f, Representable u) =>
      Adjunction f u | f -> u, u -> f where
  leftAdjunct  :: (f a -> b) -> (a -> u b)
  rightAdjunct :: (a -> u b) -> (f a -> b)

The equivalence between the unit/counit formulation and the leftAdjunct/rightAdjunct formulation is witnessed by these mappings:

  unit           = leftAdjunct id
  counit         = rightAdjunct id
  leftAdjunct f  = fmap f . unit
  rightAdjunct f = counit . fmap f

It’s very instructive to follow the translation from the categorical description of the adjunction to Haskell code. I highly encourage this as an exercise.
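
To make this concrete, here is a minimal standalone sketch of the adjunction between the pair functor (,) e and the reader functor (->) e, with the adjuncts and the unit/counit written as plain functions (the names are mine, not the library’s):

leftAdj :: ((e, a) -> b) -> (a -> (e -> b))
leftAdj g = \a e -> g (e, a)

rightAdj :: (a -> (e -> b)) -> ((e, a) -> b)
rightAdj h = \(e, a) -> h a e

-- unit and counit recovered exactly as above:
unitPair :: a -> (e -> (e, a))
unitPair = leftAdj id     -- stash the value, wait for the environment

counitPair :: (e, e -> b) -> b
counitPair = rightAdj id  -- function application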

We are now ready to explain why, in Haskell, the right adjoint is automatically a representable functor. The reason for this is that, to the first approximation, we can treat the category of Haskell types as the category of sets.

When the right category D is Set, the right adjoint R is a functor from C to Set. Such a functor is representable if we can find an object rep in C such that the hom-functor C(rep, _) is naturally isomorphic to R. It turns out that, if R is the right adjoint of some functor L from Set to C, such an object always exists — it’s the image of the singleton set () under L:

rep = L ()

Indeed, the adjunction tells us that the following two hom-sets are naturally isomorphic:

C(L (), c) ≅ Set((), R c)

For a given c, the right hand side is the set of functions from the singleton set () to R c. We’ve seen earlier that each such function picks one element from the set R c. The set of such functions is isomorphic to the set R c. So we have:

C(L (), -) ≅ R

which shows that R is indeed representable.
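
For the pair adjunction sketched earlier, L is (,) e, so rep = L () = (e, ()), which is isomorphic to e itself; the right adjoint (->) e is then precisely the hom-functor out of e, that is, the reader functor. Hypothetical witnesses of that isomorphism:

toRep :: (e, ()) -> e
toRep (e, ()) = e

fromRep :: e -> (e, ())
fromRep e = (e, ())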

Product from Adjunction

We have previously introduced several concepts using universal constructions. Many of those concepts, when defined globally, are easier to express using adjunctions. The simplest non-trivial example is that of the product. The gist of the universal construction of the product is the ability to factorize any product-like candidate through the universal product.

More precisely, the product of two objects a and b is the object (a × b) (or (a, b) in the Haskell notation) equipped with two morphisms fst and snd such that, for any other candidate c equipped with two morphisms p::c->a and q::c->b, there exists a unique morphism m::c->(a, b) that factorizes p and q through fst and snd.

As we’ve seen earlier, in Haskell, we can implement a factorizer that generates this morphism from the two projections:

factorizer :: (c -> a) -> (c -> b) -> (c -> (a, b))
factorizer p q = \x -> (p x, q x)

It’s easy to verify that the factorization conditions hold:

fst . factorizer p q = p
snd . factorizer p q = q

We have a mapping that takes a pair of morphisms p and q and produces another morphism m = factorizer p q.

How can we translate this into a mapping between two hom-sets that we need to define an adjunction? The trick is to go outside of Hask and treat the pair of morphisms as a single morphism in the product category.

Let me remind you what a product category is. Take two arbitrary categories C and D. The objects in the product category C×D are pairs of objects, one from C and one from D. The morphisms are pairs of morphisms, one from C and one from D.

To define a product in some category C, we should start with the product category C×C. Pairs of morphisms from C are single morphisms in the product category C×C.

[Figure: Adj-ProductCat]

It might be a little confusing at first that we are using a product category to define a product. These are, however, very different products. We don’t need a universal construction to define a product category. All we need is the notion of a pair of objects and a pair of morphisms.

However, a pair of objects from C is not an object in C. It’s an object in a different category, C×C. We can write the pair formally as <a, b>, where a and b are objects of C. The universal construction, on the other hand, is necessary in order to define the object a×b (or (a, b) in Haskell), which is an object in the same category C. This object is supposed to represent the pair <a, b> in a way specified by the universal construction. It doesn’t always exist and, even if it exists for some pairs of objects, it might not exist for others.

Let’s now look at the factorizer as a mapping of hom-sets. The first hom-set is in the product category C×C, and the second is in C. A general morphism in C×C would be a pair of morphisms <f, g>:

f :: c' -> a
g :: c'' -> b

with c'' potentially different from c'. But to define a product, we are interested in a special morphism in C×C, the pair p and q that share the same source object c. That’s okay: In the definition of an adjunction, the source of the left hom-set is not an arbitrary object — it’s the result of the left functor L acting on some object from the right category. The functor that fits the bill is easy to guess — it’s the diagonal functor from C to C×C, whose action on objects is:

Δ c = <c, c>

The left-hand side hom-set in our adjunction should thus be:

(C×C)(Δ c, <a, b>)

It’s a hom-set in the product category. Its elements are pairs of morphisms that we recognize as the arguments to our factorizer:

(c -> a) -> (c -> b) ...

The right-hand side hom-set lives in C, and it goes between the source object c and the result of some functor R acting on the target object in C×C. That’s the functor that maps the pair <a, b> to our product object, a×b. We recognize this element of the hom-set as the result of the factorizer:

... -> (c -> (a, b))

[Figure: Adj-Product]

We still don’t have a full adjunction. For that we first need our factorizer to be invertible — we are building an isomorphism between hom-sets. The inverse of the factorizer should start from a morphism m — a morphism from some object c to the product object a×b. In other words, m should be an element of:

C(c, a×b)

The inverse factorizer should map m to a morphism <p, q> in C×C that goes from <c, c> to <a, b>; in other words, a morphism that’s an element of:

(C×C)(Δ c, <a, b>)

If that mapping exists, we conclude that the right adjoint to the diagonal functor exists. That functor defines a product.

In Haskell, we can always construct the inverse of the factorizer by composing m with, respectively, fst and snd.

p = fst ∘ m
q = snd ∘ m
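
In code, the whole inverse mapping fits in one line (unfactorizer is a made-up name):

unfactorizer :: (c -> (a, b)) -> (c -> a, c -> b)
unfactorizer m = (fst . m, snd . m)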

To complete the proof of the equivalence of the two ways of defining a product we also need to show that the mapping between hom-sets is natural in a, b, and c. I will leave this as an exercise for the dedicated reader.

To summarize what we have done: A categorical product may be defined globally as the right adjoint of the diagonal functor:

(C × C)(Δ c, <a, b>) ≅ C(c, a×b)

Here, a×b is the result of the action of our right adjoint functor Product on the pair <a, b>. Notice that any functor from C×C is a bifunctor, so Product is a bifunctor. In Haskell, the Product bifunctor is written simply as (,). You can apply it to two types and get their product type, for instance:

(,) Int Bool ~ (Int, Bool)

Exponential from Adjunction

The exponential b^a, or the function object a⇒b, can be defined using a universal construction. This construction, if it exists for all pairs of objects, can be seen as an adjunction. Again, the trick is to concentrate on the statement:

For any other object z with a morphism

g :: z × a -> b

there is a unique morphism

h :: z -> (a⇒b)

This statement establishes a mapping between hom-sets.

In this case, we are dealing with objects in the same category, so the two adjoint functors are endofunctors. The left (endo-)functor L, when acting on object z, produces z × a. It’s a functor that corresponds to taking a product with some fixed a.

The right (endo-)functor R, when acting on b, produces the function object a⇒b (or b^a). Again, a is fixed. The adjunction between these two functors is often written as:

- × a ⊣ (-)^a

The mapping of hom-sets that underlies this adjunction is best seen by redrawing the diagram that we used in the universal construction.

[Figure: Adj-Expo]

Notice that the eval morphism is nothing else but the counit of this adjunction:

(a⇒b) × a -> b

where:

(a⇒b) × a = (L ∘ R) b
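
In Haskell, with the pair type standing in for the categorical product, this counit is plain function application (a sketch):

eval :: (a -> b, a) -> b
eval (f, x) = f x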

I have previously mentioned that a universal construction defines a unique object, up to isomorphism. That’s why we have “the” product and “the” exponential. This property translates to adjunctions as well: if a functor has an adjoint, this adjoint is unique up to isomorphism.

Challenges

  1. Derive the naturality square for ψ, the transformation between the two (contravariant) functors:
    a -> C(L a, b)
    a -> D(a, R b)
  2. Derive the counit ε starting from the hom-sets isomorphism in the second definition of the adjunction.
  3. Complete the proof of equivalence of the two definitions of the adjunction.
  4. Show that the coproduct can be defined by an adjunction. Start with the definition of the factorizer for a coproduct.
  5. Show that the coproduct is the left adjoint of the diagonal functor.
  6. Define the adjunction between a product and a function object in Haskell.

Next: Free/Forgetful Adjunctions.

Acknowledgments

I’d like to thank Edward Kmett and Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.


This is part 17 of Categories for Programmers. Previously: Yoneda Embedding. See the Table of Contents.

If I haven’t convinced you yet that category theory is all about morphisms then I haven’t done my job properly. Since the next topic is adjunctions, which are defined in terms of isomorphisms of hom-sets, it makes sense to review our intuitions about the building blocks of hom-sets. Also, you’ll see that adjunctions provide a more general language to describe a lot of constructions we’ve studied before, so it might help to review them too.

Functors

To begin with, you should really think of functors as mappings of morphisms — the view that’s emphasized in the Haskell definition of the Functor typeclass, which revolves around fmap. Of course, functors also map objects — the endpoints of morphisms — otherwise we wouldn’t be able to talk about preserving composition. Objects tell us which pairs of morphisms are composable. The target of one morphism must be equal to the source of the other — if they are to be composed. So if we want the composition of morphisms to be mapped to the composition of lifted morphisms, the mapping of their endpoints is pretty much determined.

Commuting Diagrams

A lot of properties of morphisms are expressed in terms of commuting diagrams. If a particular morphism can be described as a composition of other morphisms in more than one way, then we have a commuting diagram.

In particular, commuting diagrams form the basis of almost all universal constructions (with the notable exceptions of the initial and terminal objects). We’ve seen this in the definitions of products, coproducts, various other (co-)limits, exponential objects, free monoids, etc.

The product is a simple example of a universal construction. We pick two objects a and b and see if there exists an object c, together with a pair of morphisms p and q, that has the universal property of being their product.

[Figure: ProductRanking]

A product is a special case of a limit. A limit is defined in terms of cones. A general cone is built from commuting diagrams. Commutativity of those diagrams may be replaced with a suitable naturality condition for the mapping of functors. This way commutativity is reduced to the role of the assembly language for the higher level language of natural transformations.

Natural Transformations

In general, natural transformations are very convenient whenever we need a mapping from morphisms to commuting squares. Two opposing sides of a naturality square are the mappings of some morphism f under two functors F and G. The other sides are the components of the natural transformation (which are also morphisms).

[Figure: 3_Naturality]

Naturality means that when you move to the “neighboring” component (by neighboring I mean connected by a morphism), you’re not going against the structure of either the category or the functors. It doesn’t matter whether you first use a component of the natural transformation to bridge the gap between objects, and then jump to its neighbor using the functor; or the other way around. The two directions are orthogonal. A natural transformation moves you left and right, and the functors move you up and down or back and forth — so to speak. You can visualize the image of a functor as a sheet in the target category. A natural transformation maps one such sheet corresponding to F, to another, corresponding to G.

[Figure: Sheets]

We’ve seen examples of this orthogonality in Haskell. There the action of a functor modifies the content of a container without changing its shape, while a natural transformation repackages the untouched contents into a different container. The order of these operations doesn’t matter.

We’ve seen the cones in the definition of a limit replaced by natural transformations. Naturality ensures that the sides of every cone commute. Still, a limit is defined in terms of mappings between cones. These mappings must also satisfy commutativity conditions. (For instance, the triangles in the definition of the product must commute.)

These conditions, too, may be replaced by naturality. You may recall that the universal cone, or the limit, is defined as a natural transformation between the (contravariant) hom-functor:

F :: c -> C(c, Lim D)

and the (also contravariant) functor that maps objects in C to cones, which themselves are natural transformations:

G :: c -> Nat(Δc, D)

Here, Δc is the constant functor, and D is the functor that defines the diagram in C. Both functors F and G have well defined actions on morphisms in C. It so happens that this particular natural transformation between F and G is an isomorphism.

Natural Isomorphisms

A natural isomorphism — which is a natural transformation whose every component is reversible — is category theory’s way of saying that “two things are the same.” A component of such a transformation must be an isomorphism between objects — a morphism that has the inverse. If you visualize functor images as sheets, a natural isomorphism is a one-to-one invertible mapping between those sheets.

Hom-Sets

But what are morphisms? They do have more structure than objects: unlike objects, morphisms have two ends. But if you fix the source and the target objects, the morphisms between the two form a boring set (at least for locally small categories). We can give elements of this set names like f or g, to distinguish one from another — but what is it, really, that makes them different?

The essential difference between morphisms in a given hom-set lies in the way they compose with other morphisms (from abutting hom-sets). If there is a morphism h whose composition (either pre- or post-) with f is different than that with g, for instance:

h ∘ f ≠ h ∘ g

then we can directly “observe” the difference between f and g. But even if the difference is not directly observable, we might use functors to zoom in on the hom-set. A functor F may map the two morphisms to distinct morphisms:

F f ≠ F g

in a richer category, where the abutting hom-sets provide more resolution, e.g.,

h' ∘ F f ≠ h' ∘ F g

where h' is not in the image of F.

Hom-Set Isomorphisms

A lot of categorical constructions rely on isomorphisms between hom-sets. But since hom-sets are just sets, a plain isomorphism between them doesn’t tell you much. For finite sets, an isomorphism just says that they have the same number of elements. If the sets are infinite, their cardinality must be the same. But any meaningful isomorphism of hom-sets must take into account composition. And composition involves more than one hom-set. We need to define isomorphisms that span whole collections of hom-sets, and we need to impose some compatibility conditions that interoperate with composition. And a natural isomorphism fits the bill exactly.

But what’s a natural isomorphism of hom-sets? Naturality is a property of mappings between functors, not sets. So we are really talking about a natural isomorphism between hom-set-valued functors. These functors are more than just set-valued functors. Their action on morphisms is induced by the appropriate hom-functors. Morphisms are canonically mapped by hom-functors using either pre- or post-composition (depending on the covariance of the functor).

The Yoneda embedding is one example of such an isomorphism. It maps hom-sets in C to hom-sets in the functor category; and it’s natural. One functor in the Yoneda embedding is the hom-functor in C and the other maps objects to sets of natural transformations between hom-sets.

The definition of a limit is also a natural isomorphism between hom-sets (the second one, again, in the functor category):

C(c, Lim D) ≃ Nat(Δc, D)

It turns out that our construction of an exponential object, or that of a free monoid, can also be rewritten as a natural isomorphism between hom-sets.

This is no coincidence — we’ll see next that these are just different examples of adjunctions, which are defined as natural isomorphisms of hom-sets.

Asymmetry of Hom-Sets

There is one more observation that will help us understand adjunctions. Hom-sets are, in general, not symmetric. A hom-set C(a, b) is often very different from the hom-set C(b, a). The ultimate demonstration of this asymmetry is a partial order viewed as a category. In a partial order, a morphism from a to b exists if and only if a is less than or equal to b. If a and b are different, then there can be no morphism going the other way, from b to a. So if the hom-set C(a, b) is non-empty, which in this case means it’s a singleton set, then C(b, a) must be empty, unless a = b. The arrows in this category have a definite flow in one direction.

A preorder, which is based on a relation that’s not necessarily antisymmetric, is also “mostly” directional, except for occasional cycles. It’s convenient to think of an arbitrary category as a generalization of a preorder.

A preorder is a thin category — all hom-sets are either singletons or empty. We can visualize a general category as a “thick” preorder.

Challenges

  1. Consider some degenerate cases of a naturality condition and draw the appropriate diagrams. For instance, what happens if either functor F or G maps both objects a and b (the ends of f :: a -> b) to the same object, e.g., F a = F b or G a = G b? (Notice that you get a cone or a co-cone this way.) Then consider cases where either F a = G a or F b = G b. Finally, what if you start with a morphism that loops on itself — f :: a -> a?

Next: Adjunctions.

Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.


This is part 16 of Categories for Programmers. Previously: The Yoneda Lemma. See the Table of Contents.

We’ve seen previously that, when we fix an object a in the category C, the mapping C(a, -) is a (covariant) functor from C to Set.

x -> C(a, x)

(The codomain is Set because the hom-set C(a, x) is a set.) We call this mapping a hom-functor — we have previously defined its action on morphisms as well.

Now let’s vary a in this mapping. We get a new mapping that assigns the hom-functor C(a, -) to any a.

a -> C(a, -)

It’s a mapping of objects from category C to functors, which are objects in the functor category (see the section about functor categories in Natural Transformations). Let’s use the notation [C, Set] for the functor category from C to Set. You may also recall that hom-functors are the prototypical representable functors.

Every time we have a mapping of objects between two categories, it’s natural to ask if such a mapping is also a functor. In other words, whether we can lift a morphism from one category to a morphism in the other category. A morphism in C is just an element of C(a, b), but a morphism in the functor category [C, Set] is a natural transformation. So we are looking for a mapping of morphisms to natural transformations.

Let’s see if we can find a natural transformation corresponding to a morphism f :: a->b. First, let’s see what a and b are mapped to. They are mapped to two functors: C(a, -) and C(b, -). We need a natural transformation between those two functors.

And here’s the trick: we use the Yoneda lemma:

[C, Set](C(a, -), F) ≅ F a

and replace the generic F with the hom-functor C(b, -). We get:

[C, Set](C(a, -), C(b, -)) ≅ C(b, a)

[Figure: Yoneda Embedding]

This is exactly the natural transformation between the two hom-functors we were looking for, but with a little twist: We have a mapping between a natural transformation and a morphism — an element of C(b, a) — that goes in the “wrong” direction. But that’s okay; it only means that the functor we are looking at is contravariant.

[Figure: Yoneda Embedding 2]

Actually, we’ve got even more than we bargained for. The mapping from C to [C, Set] is not only a contravariant functor — it is a fully faithful functor. Fullness and faithfulness are properties of functors that describe how they map hom-sets.

A faithful functor is injective on hom-sets, meaning that it maps distinct morphisms to distinct morphisms. In other words, it doesn’t coalesce them.

A full functor is surjective on hom-sets, meaning that it maps one hom-set onto the other hom-set, fully covering the latter.

A fully faithful functor F is a bijection on hom-sets — a one to one matching of all elements of both sets. For every pair of objects a and b in the source category C there is a bijection between C(a, b) and D(F a, F b), where D is the target category of F (in our case, the functor category, [C, Set]). Notice that this doesn’t mean that F is a bijection on objects. There may be objects in D that are not in the image of F, and we can’t say anything about hom-sets for those objects.

The Embedding

The (contravariant) functor we have just described, the functor that maps objects in C to functors in [C, Set]:

a -> C(a, -)

defines the Yoneda embedding. It embeds a category C (strictly speaking, the category Cop, because of contravariance) inside the functor category [C, Set]. It not only maps objects in C to functors, but also faithfully preserves all connections between them.

This is a very useful result because mathematicians know a lot about the category of functors, especially functors whose codomain is Set. We can get a lot of insight about an arbitrary category C by embedding it in the functor category.

Of course there is a dual version of the Yoneda embedding, sometimes called the co-Yoneda embedding. Observe that we could have started by fixing the target object (rather than the source object) of each hom-set, C(-, a). That would give us a contravariant hom-functor. Contravariant functors from C to Set are our familiar presheaves (see, for instance, Limits and Colimits). The co-Yoneda embedding defines the embedding of a category C in the category of presheaves. Its action on morphisms is given by:

[C, Set](C(-, a), C(-, b)) ≅ C(a, b)

Again, mathematicians know a lot about the category of presheaves, so being able to embed an arbitrary category in it is a big win.

Application to Haskell

In Haskell, the Yoneda embedding can be represented as the isomorphism between natural transformations amongst reader functors on the one hand, and functions (going in the opposite direction) on the other hand:

forall x. (a -> x) -> (b -> x) ≅ b -> a

(Remember, the reader functor is equivalent to ((->) a).)

The left hand side of this identity is a polymorphic function that, given a function from a to x and a value of type b, can produce a value of type x (I’m uncurrying — dropping the parentheses around the function b -> x). The only way this can be done for all x is if our function knows how to convert a b to an a. It has to secretly have access to a function b->a.

Given such a converter, btoa, one can define the left hand side, call it fromY, as:

fromY :: (a -> x) -> b -> x
fromY f b = f (btoa b)

Conversely, given a function fromY we can recover the converter by calling fromY with the identity:

fromY id :: b -> a

This establishes the bijection between polymorphic functions like fromY and converters like btoa.
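
Here are both directions of the isomorphism as self-contained code, with the converter passed explicitly (toY and unY are made-up names; RankNTypes is needed for the forall):

{-# LANGUAGE RankNTypes #-}

toY :: (b -> a) -> (forall x. (a -> x) -> (b -> x))
toY btoa = \f -> f . btoa

unY :: (forall x. (a -> x) -> (b -> x)) -> (b -> a)
unY phi = phi id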

An alternative way of looking at this isomorphism is that it’s a CPS encoding of a function from b to a. The argument a->x is a continuation (the handler). The result is a function from b to x which, when called with a value of type b, will execute the continuation precomposed with the function being encoded.

The Yoneda embedding also explains some of the alternative representations of data structures in Haskell. In particular, it provides a very useful representation of lenses from the Control.Lens library.

Preorder Example

This example was suggested by Robert Harper. It’s the application of the Yoneda embedding to a category defined by a preorder. A preorder is a set with an ordering relation between its elements that’s traditionally written as <= (less than or equal). The “pre” in preorder is there because we’re only requiring the relation to be transitive and reflexive but not necessarily antisymmetric (so it’s possible to have cycles).

A set with the preorder relation gives rise to a category. The objects are the elements of this set. A morphism from object a to b either doesn’t exist, if the objects cannot be compared or if it’s not true that a <= b; or it exists if a <= b, and it points from a to b. There is never more than one morphism from one object to another. Therefore any hom-set in such a category is either an empty set or a one-element set. Such a category is called thin.

It’s easy to convince yourself that this construction is indeed a category: The arrows are composable because, if a <= b and b <= c then a <= c; and the composition is associative. We also have the identity arrows because every element is (less than or) equal to itself (reflexivity of the underlying relation).

We can now apply the co-Yoneda embedding to a preorder category. In particular, we’re interested in its action on morphisms:

[C, Set](C(-, a), C(-, b)) ≅ C(a, b)

The hom-set on the right hand side is non-empty if and only if a <= b — in which case it’s a one-element set. Consequently, if a <= b, there exists a single natural transformation on the left. Otherwise there is no natural transformation.

So what’s a natural transformation between hom-functors in a preorder? It should be a family of functions between sets C(-, a) and C(-, b). In a preorder, each of these sets can either be empty or a singleton. Let’s see what kind of functions are there at our disposal.

There is a function from an empty set to itself (the identity acting on an empty set), a function absurd from an empty set to a singleton set (it does nothing, since it only needs to be defined for elements of an empty set, of which there are none), and a function from a singleton to itself (the identity acting on a one-element set). The only combination that is forbidden is the mapping from a singleton to an empty set (what would the value of such a function be when acting on the single element?).

So our natural transformation will never connect a singleton hom-set to an empty hom-set. In other words, if x <= a (singleton hom-set C(x, a)) then C(x, b) cannot be empty. A non-empty C(x, b) means that x is less than or equal to b. So the existence of the natural transformation in question requires that, for every x, if x <= a then x <= b.

for all x, x ≤ a ⇒ x ≤ b

On the other hand, co-Yoneda tells us that the existence of this natural transformation is equivalent to C(a, b) being non-empty, or to a <= b. Together, we get:

a ≤ b if and only if for all x, x ≤ a ⇒ x ≤ b

We could have arrived at this result directly. The intuition is that, if a <= b then all elements that are below a must also be below b. Conversely, when you substitute a for x on the right hand side, it follows that a <= b. But you must admit that arriving at this result through the Yoneda embedding is much more exciting.
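
We can even spot-check the equivalence on a finite fragment of a preorder (a sketch; for the right-to-left direction to bite, the list of objects must contain a itself):

agrees :: [Int] -> Int -> Int -> Bool
agrees objs a b = (a <= b) == and [ x <= b | x <- objs, x <= a ]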

Naturality

The Yoneda lemma establishes the isomorphism between the set of natural transformations and an object in Set. Natural transformations are morphisms in the functor category [C, Set]. The set of natural transformation between any two functors is a hom-set in that category. The Yoneda lemma is the isomorphism:

[C, Set](C(a, -), F) ≅ F a

This isomorphism turns out to be natural in both F and a. In other words, it’s natural in (F, a), a pair taken from the product category [C, Set] × C. Notice that we are now treating F as an object in the functor category.

Let’s think for a moment what this means. A natural isomorphism is an invertible natural transformation between two functors. And indeed, the right hand side of our isomorphism is a functor. It’s a functor from [C, Set] × C to Set. Its action on a pair (F, a) is a set — the result of evaluating the functor F at the object a. This is called the evaluation functor.

The left hand side is also a functor that takes (F, a) to a set of natural transformations [C, Set](C(a, -), F).

To show that these are really functors, we should also define their action on morphisms. But what’s a morphism between a pair (F, a) and (G, b)? It’s a pair of morphisms, (Φ, f); the first being a morphism between functors — a natural transformation — the second being a regular morphism in C.

The evaluation functor takes this pair (Φ, f) and maps it to a function between two sets, F a and G b. We can easily construct such a function from the component of Φ at a (which maps F a to G a) and the morphism f lifted by G:

(G f) ∘ Φa

Notice that, because of naturality of Φ, this is the same as:

Φb ∘ (F f)
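
Modeling natural transformations as polymorphic functions, the action of the evaluation functor on the pair (Φ, f) can be sketched as follows (evalMap is a made-up name); the comment records the naturality identity just mentioned:

{-# LANGUAGE RankNTypes #-}

evalMap :: (Functor f, Functor g)
        => (forall x. f x -> g x) -> (a -> b) -> (f a -> g b)
evalMap phi f = fmap f . phi  -- by naturality, equal to phi . fmap f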

I’m not going to prove the naturality of the whole isomorphism — after you’ve established what the functors are, the proof is pretty mechanical. It follows from the fact that our isomorphism is built up from functors and natural transformations. There is simply no way for it to go wrong.

Challenges

  1. Express the co-Yoneda embedding in Haskell.
  2. Show that the bijection we established between fromY and btoa is an isomorphism (the two mappings are the inverse of each other).
  3. Work out the Yoneda embedding for a monoid. What functor corresponds to the monoid’s single object? What natural transformations correspond to monoid morphisms?
  4. What is the application of the covariant Yoneda embedding to preorders? (Question suggested by Gershom Bazerman.)
  5. Yoneda embedding can be used to embed an arbitrary functor category [C, D] in the functor category [[C, D], Set]. Figure out how it works on morphisms (which in this case are natural transformations).

Next: It’s All About Morphisms.

Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic.


This is part 14 of Categories for Programmers. Previously: Free Monoids. See the Table of Contents.

It’s about time we had a little talk about sets. Mathematicians have a love/hate relationship with set theory. It’s the assembly language of mathematics — at least it used to be. Category theory tries to step away from set theory, to some extent. For instance, it’s a known fact that the set of all sets doesn’t exist, but the category of all sets, Set, does. So that’s good. On the other hand, we assume that morphisms between any two objects in a category form a set. We even called it a hom-set. To be fair, there is a branch of category theory where morphisms don’t form sets. Instead they are objects in another category. Those categories that use hom-objects rather than hom-sets are called enriched categories. In what follows, though, we’ll stick to categories with good old-fashioned hom-sets.

A set is the closest thing to a featureless blob you can get outside of categorical objects. A set has elements, but you can’t say much about these elements. If you have a finite set, you can count the elements. You can kind of count the elements of an infinite set using cardinal numbers. The set of natural numbers, for instance, is smaller than the set of real numbers, even though both are infinite. But, maybe surprisingly, a set of rational numbers is the same size as the set of natural numbers.

Other than that, all the information about sets can be encoded in functions between them — especially the invertible ones called isomorphisms. For all intents and purposes isomorphic sets are identical. Before I summon the wrath of foundational mathematicians, let me explain that the distinction between equality and isomorphism is of fundamental importance. In fact it is one of the main concerns of the latest branch of mathematics, the Homotopy Type Theory (HoTT). I’m mentioning HoTT because it’s a pure mathematical theory that takes inspiration from computation, and one of its main proponents, Vladimir Voevodsky, had a major epiphany while studying the Coq theorem prover. The interaction between mathematics and programming goes both ways.

The important lesson about sets is that it’s okay to compare sets of unlike elements. For instance, we can say that a given set of natural transformations is isomorphic to some set of morphisms, because a set is just a set. Isomorphism in this case just means that for every natural transformation from one set there is a unique morphism from the other set and vice versa. They can be paired against each other. You can’t compare apples with oranges, if they are objects from different categories, but you can compare sets of apples against sets of oranges. Often transforming a categorical problem into a set-theoretical problem gives us the necessary insight or even lets us prove valuable theorems.

The Hom Functor

Every category comes equipped with a canonical family of mappings to Set. Those mappings are in fact functors, so they preserve the structure of the category. Let’s build one such mapping.

Let’s fix one object a in C and pick another object x also in C. The hom-set C(a, x) is a set, an object in Set. When we vary x, keeping a fixed, C(a, x) will also vary in Set. Thus we have a mapping from x to Set.

[Figure: Hom-Set]

If we want to stress the fact that we are considering the hom-set as a mapping in its second argument, we use the notation:

C(a, -)

with the dash serving as the placeholder for the argument.

This mapping of objects is easily extended to the mapping of morphisms. Let’s take a morphism f in C between two arbitrary objects x and y. The object x is mapped to the set C(a, x), and the object y is mapped to C(a, y), under the mapping we have just defined. If this mapping is to be a functor, f must be mapped to a function between the two sets:

C(a, x) -> C(a, y)

Let’s define this function point-wise, that is, for each argument separately. For the argument we should pick an arbitrary element of C(a, x) — let’s call it h. Morphisms are composable if they match end to end. It so happens that the target of h matches the source of f, so their composition:

f ∘ h :: a -> y

is a morphism going from a to y. It is therefore a member of C(a, y).

[Figure: Hom Functor]

We have just found our function from C(a, x) to C(a, y), which can serve as the image of f. If there is no danger of confusion, we’ll write this lifted function as:

C(a, f)

and its action on a morphism h as:

C(a, f) h = f ∘ h

Since this construction works in any category, it must also work in the category of Haskell types. In Haskell, the hom-functor is better known as the Reader functor:

-- (a type synonym can’t be partially applied or made an instance,
-- so we wrap the arrow in a newtype)
newtype Reader a x = Reader (a -> x)

instance Functor (Reader a) where
    fmap f (Reader h) = Reader (f . h)

Now let’s consider what happens if, instead of fixing the source of the hom-set, we fix the target. In other words, we’re asking the question if the mapping

C(-, a)

is also a functor. It is, but instead of being covariant, it’s contravariant. That’s because the same kind of matching of morphisms end to end results in postcomposition by f; rather than precomposition, as was the case with C(a, -).

We have already seen this contravariant functor in Haskell. We called it Op:

import Data.Functor.Contravariant (Contravariant (..))

newtype Op a x = Op (x -> a)

instance Contravariant (Op a) where
    contramap f (Op h) = Op (h . f)

Finally, if we let both objects vary, we get a profunctor C(-, =), which is contravariant in the first argument and covariant in the second (to underline the fact that the two arguments may vary independently, we use a double dash as the second placeholder). We have seen this profunctor before, when we talked about functoriality:

instance Profunctor (->) where
  dimap ab cd bc = cd . bc . ab
  lmap = flip (.)
  rmap = (.)

The important lesson is that this observation holds in any category: the mapping of objects to hom-sets is functorial. Since contravariance is equivalent to a mapping from the opposite category, we can state this fact succinctly as:

C(-, =) :: C^op × C -> Set

Representable Functors

We’ve seen that, for every choice of an object a in C, we get a functor from C to Set. This kind of structure-preserving mapping to Set is often called a representation. We are representing objects and morphisms of C as sets and functions in Set.

The functor C(a, -) itself is sometimes called representable. More generally, any functor F that is naturally isomorphic to the hom-functor, for some choice of a, is called representable. Such a functor must necessarily be Set-valued, since C(a, -) is.

I said before that we often think of isomorphic sets as identical. More generally, we think of isomorphic objects in a category as identical. That’s because objects have no structure other than their relation to other objects (and themselves) through morphisms.

For instance, we’ve previously talked about the category of monoids, Mon, that was initially modeled with sets. But we were careful to pick as morphisms only those functions that preserved the monoidal structure of those sets. So if two objects in Mon are isomorphic, meaning there is an invertible morphism between them, they have exactly the same structure. If we peeked at the sets and functions that they were based upon, we’d see that the unit element of one monoid was mapped to the unit element of another, and that a product of two elements was mapped to the product of their mappings.

The same reasoning can be applied to functors. Functors between two categories form a category in which natural transformations play the role of morphisms. So two functors are isomorphic, and can be thought of as identical, if there is an invertible natural transformation between them.

Let’s analyze the definition of the representable functor from this perspective. For F to be representable we require that: there be an object a in C; a natural transformation α from C(a, -) to F; another natural transformation, β, in the opposite direction; and that their compositions, both ways, be identity natural transformations.

Let’s look at the component of α at some object x. It’s a function in Set:

α_x :: C(a, x) -> F x

The naturality condition for this transformation tells us that, for any morphism f from x to y, the following diagram commutes:

F f ∘ α_x = α_y ∘ C(a, f)

In Haskell, we would replace natural transformations with polymorphic functions:

alpha :: forall x. (a -> x) -> F x

with the optional forall quantifier. The naturality condition

fmap f . alpha = alpha . fmap f

is automatically satisfied due to parametricity (it’s one of those theorems for free I mentioned earlier), with the understanding that fmap on the left is defined by the functor F, whereas the one on the right is defined by the reader functor. Since fmap for reader is just postcomposition with f, we can be even more explicit. Acting on h, an element of C(a, x), the naturality condition simplifies to:

fmap f (alpha h) = alpha (f . h)

The other transformation, beta, goes the opposite way:

beta :: forall x. F x -> (a -> x)

It must respect naturality conditions, and it must be the inverse of α:

α ∘ β = id = β ∘ α

We will see later that a natural transformation from C(a, -) to any Set-valued functor always exists, as long as F a is non-empty (Yoneda’s lemma), but it’s not necessarily invertible.

Let me give you an example in Haskell with the list functor and Int as a. Here’s a natural transformation that does the job:

alpha :: forall x. (Int -> x) -> [x]
alpha h = map h [12]

I have arbitrarily picked the number 12 and created a singleton list with it. I can then fmap the function h over this list and get a list of the type returned by h. (There are actually as many such transformations as there are lists of integers.)

The naturality condition is equivalent to the composability of map (the list version of fmap):

map f (map h [12]) = map (f . h) [12]

But if we tried to find the inverse transformation, we would have to go from a list of arbitrary type x to a function returning x:

beta :: forall x. [x] -> (Int -> x)

You might think of retrieving an x from the list, e.g., using head, but that won’t work for an empty list. Notice that there is no choice for the type a (in place of Int) that would work here. So the list functor is not representable.

Remember when we talked about Haskell (endo-) functors being a little like containers? In the same vein we can think of representable functors as containers for storing memoized results of function calls (the members of hom-sets in Haskell are just functions). The representing object, the type a in C(a, -), is thought of as the key type, with which we can access the tabulated values of a function. The transformation we called α is called tabulate, and its inverse, β, is called index. Here’s a (slightly simplified) Representable class definition:

class Representable f where
   type Rep f :: *
   tabulate :: (Rep f -> x) -> f x
   index    :: f x -> Rep f -> x

Notice that the representing type, our a, which is called Rep f here, is part of the definition of Representable. The star just means that Rep f is a type (as opposed to a type constructor, or other more exotic kinds).

Infinite lists, or streams, which cannot be empty, are representable.

data Stream x = Cons x (Stream x)

You can think of them as memoized values of a function taking an Integer as an argument. (Strictly speaking, I should be using non-negative natural numbers, but I didn’t want to complicate the code.)

To tabulate such a function, you create an infinite stream of values. Of course, this is only possible because Haskell is lazy. The values are evaluated on demand. You access the memoized values using index:

instance Representable Stream where
    type Rep Stream = Integer
    tabulate f = Cons (f 0) (tabulate (f . (+1)))
    index (Cons b bs) n = if n == 0 then b else index bs (n - 1)

It’s interesting that you can implement a single memoization scheme to cover a whole family of functions with arbitrary return types.
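
Here’s a small usage sketch (the example function is mine, not from the original text): memoizing the powers of two.

powersOfTwo :: Stream Integer
powersOfTwo = tabulate (2 ^)

-- index powersOfTwo 10 evaluates to 1024; in general,
-- index (tabulate f) n == f n for any non-negative n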

Representability for contravariant functors is similarly defined, except that we keep the second argument of C(-, a) fixed. Or, equivalently, we may consider functors from C^op to Set, because C^op(a, -) is the same as C(-, a).

There is an interesting twist to representability. Remember that hom-sets can internally be treated as exponential objects, in cartesian closed categories. The hom-set C(a, x) is equivalent to x^a, and for a representable functor F we can write:

(-)^a = F

Let’s take the logarithm of both sides, just for kicks:

a = log F

Of course, this is a purely formal transformation, but if you know some of the properties of logarithms, it is quite helpful. In particular, it turns out that functors that are based on product types can be represented with sum types, and that sum-type functors are not in general representable (example: the list functor).

Finally, notice that a representable functor gives us two different implementations of the same thing — one a function, one a data structure. They have exactly the same content — the same values are retrieved using the same keys. That’s the sense of “sameness” I was talking about. Two naturally isomorphic functors are identical as far as their contents are concerned. On the other hand, the two representations are often implemented differently and may have different performance characteristics. Memoization is used as a performance enhancement and may lead to substantially reduced run times. Being able to generate different representations of the same underlying computation is very valuable in practice. So, surprisingly, even though it’s not concerned with performance at all, category theory provides ample opportunities to explore alternative implementations that have practical value.

Challenges

  1. Show that the hom-functors map identity morphisms in C to corresponding identity functions in Set.
  2. Show that Maybe is not representable.
  3. Is the Reader functor representable?
  4. Using Stream representation, memoize a function that squares its argument.
  5. Show that tabulate and index for Stream are indeed the inverse of each other. (Hint: use induction.)
  6. The functor:
    data Pair a = Pair a a

    is representable. Can you guess the type that represents it? Implement tabulate and index.

Bibliography

  1. The Catsters video about representable functors.

Next: The Yoneda Lemma.

Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.


This is part 13 of Categories for Programmers. Previously: Limits and Colimits. See the Table of Contents.

Monoids are an important concept in both category theory and in programming. Categories correspond to strongly typed languages, monoids to untyped languages. That’s because in a monoid you can compose any two arrows, just as in an untyped language you can compose any two functions (of course, you may end up with a runtime error when you execute your program).

We’ve seen that a monoid may be described as a category with a single object, where all logic is encoded in the rules of morphism composition. This categorical model is fully equivalent to the more traditional set-theoretical definition of a monoid, where we “multiply” two elements of a set to get a third element. This process of “multiplication” can be further dissected into first forming a pair of elements and then identifying this pair with an existing element — their “product.”

What happens when we forgo the second part of multiplication — the identification of pairs with existing elements? We can, for instance, start with an arbitrary set, form all possible pairs of elements, and call them new elements. Then we’ll pair these new elements with all possible elements, and so on. This is a chain reaction — we’ll keep adding new elements forever. The result, an infinite set, will be almost a monoid. But a monoid also needs a unit element and the law of associativity. No problem, we can add a special unit element and identify some of the pairs — just enough to support the unit and associativity laws.

Let’s see how this works in a simple example. Let’s start with a set of two elements, {a, b}. We’ll call them the generators of the free monoid. First, we’ll add a special element e to serve as the unit. Next we’ll add all the pairs of elements and call them “products”. The product of a and b will be the pair (a, b). The product of b and a will be the pair (b, a), the product of a with a will be (a, a), the product of b with b will be (b, b). We can also form pairs with e, like (a, e), (e, b), etc., but we’ll identify them with a, b, etc. So in this round we’ll only add (a, a), (a, b), (b, a), and (b, b), and end up with the set {e, a, b, (a, a), (a, b), (b, a), (b, b)}.


In the next round we’ll keep adding elements like: (a, (a, b)), ((a, b), a), etc. At this point we’ll have to make sure that associativity holds, so we’ll identify (a, (b, a)) with ((a, b), a), etc. In other words, we won’t be needing internal parentheses.

You can guess what the final result of this process will be: we’ll create all possible lists of as and bs. In fact, if we represent e as an empty list, we can see that our “multiplication” is nothing but list concatenation.

This kind of construction, in which you keep generating all possible combinations of elements, and perform the minimum number of identifications — just enough to uphold the laws — is called a free construction. What we have just done is to construct a free monoid from the set of generators {a, b}.

Free Monoid in Haskell

A two-element set in Haskell is equivalent to the type Bool, and the free monoid generated by this set is equivalent to the type [Bool] (list of Bool). (I am deliberately ignoring problems with infinite lists.)

A monoid in Haskell is defined by the type class:

class Monoid m where
    mempty  :: m
    mappend :: m -> m -> m

This just says that every Monoid must have a neutral element, which is called mempty, and a binary function (multiplication) called mappend. The unit and associativity laws cannot be expressed in Haskell and must be verified by the programmer every time a monoid is instantiated.

The fact that a list of any type forms a monoid is described by this instance definition:

instance Monoid [a] where
    mempty  = []
    mappend = (++)

It states that an empty list [] is the unit element, and list concatenation (++) is the binary operation.

As we have seen, a list of type a corresponds to a free monoid with the set a serving as generators. The set of natural numbers with multiplication is not a free monoid, because we identify lots of products. Compare for instance:

2 * 3 = 6
[2] ++ [3] = [2, 3] -- not the same as [6]

That was easy, but the question is, can we perform this free construction in category theory, where we are not allowed to look inside objects? We’ll use our workhorse: the universal construction.

The second interesting question is, can any monoid be obtained from some free monoid by identifying more than the minimum number of elements required by the laws? I’ll show you that this follows directly from the universal construction.

Free Monoid Universal Construction

If you recall our previous experiences with universal constructions, you might notice that it’s not so much about constructing something as about selecting an object that best fits a given pattern. So if we want to use the universal construction to “construct” a free monoid, we have to consider a whole bunch of monoids from which to pick one. We need a whole category of monoids to choose from. But do monoids form a category?

Let’s first look at monoids as sets equipped with additional structure defined by unit and multiplication. We’ll pick as morphisms those functions that preserve the monoidal structure. Such structure-preserving functions are called homomorphisms. A monoid homomorphism must map the product of two elements to the product of the mapping of the two elements:

h (a * b) = h a * h b

and it must map unit to unit.

For instance, consider a homomorphism from lists of integers to integers. If we map [2] to 2 and [3] to 3, we have to map [2, 3] to 6, because concatenation

[2] ++ [3] = [2, 3]

becomes multiplication

2 * 3 = 6

Now let’s forget about the internal structure of individual monoids, and only look at them as objects with corresponding morphisms. You get a category Mon of monoids.

Okay, maybe before we forget about internal structure, let us notice an important property. Every object of Mon can be trivially mapped to a set. It’s just the set of its elements. This set is called the underlying set. In fact, not only can we map objects of Mon to sets, but we can also map morphisms of Mon (homomorphisms) to functions. Again, this seems sort of trivial, but it will become useful soon. This mapping of objects and morphisms from Mon to Set is in fact a functor. Since this functor “forgets” the monoidal structure — once we are inside a plain set, we no longer distinguish the unit element or care about multiplication — it’s called a forgetful functor. Forgetful functors come up regularly in category theory.

We now have two different views of Mon. We can treat it just like any other category with objects and morphisms. In that view, we don’t see the internal structure of monoids. All we can say about a particular object in Mon is that it connects to itself and to other objects through morphisms. The “multiplication” table of morphisms — the composition rules — is derived from the other view: monoids-as-sets. By going to category theory we haven’t lost this view completely — we can still access it through our forgetful functor.

To apply the universal construction, we need to define a special property that would let us search through the category of monoids and pick the best candidate for a free monoid. But a free monoid is defined by its generators. Different choices of generators produce different free monoids (a list of Bool is not the same as a list of Int). Our construction must start with a set of generators. So we’re back to sets!

That’s where the forgetful functor comes into play. We can use it to X-ray our monoids. We can identify the generators in the X-ray images of those blobs. Here’s how it works:

We start with a set of generators, x. That’s a set in Set.

The pattern we are going to match consists of a monoid m — an object of Mon — and a function p in Set:

p :: x -> U m

where U is our forgetful functor from Mon to Set. This is a weird heterogeneous pattern — half in Mon and half in Set.

The idea is that the function p will identify the set of generators inside the X-ray image of m. It doesn’t matter that functions may be lousy at identifying points inside sets (they may collapse them). It will all be sorted out by the universal construction, which will pick the best representative of this pattern.

Monoid Pattern

We also have to define the ranking among candidates. Suppose we have another candidate: a monoid n and a function that identifies the generators in its X-ray image:

q :: x -> U n

We’ll say that m is better than n if there is a morphism of monoids (that’s a structure-preserving homomorphism):

h :: m -> n

whose image under U (remember, U is a functor, so it maps morphisms to functions) factorizes through p:

q = U h . p

If you think of p as selecting the generators in m; and q as selecting “the same” generators in n; then you can think of h as mapping these generators between the two monoids. Remember that h, by definition, preserves the monoidal structure. It means that a product of two generators in one monoid will be mapped to a product of the corresponding two generators in the second monoid, and so on.

Monoid Ranking

This ranking may be used to find the best candidate — the free monoid. Here’s the definition:

We’ll say that m (together with the function p) is the free monoid with the generators x if and only if there is a unique morphism h from m to any other monoid n (together with the function q) that satisfies the above factorization property.

Incidentally, this answers our second question. The function U h is the one that has the power to collapse multiple elements of U m to a single element of U n. This collapse corresponds to identifying some elements of the free monoid. Therefore any monoid with generators x can be obtained from the free monoid based on x by identifying some of the elements. The free monoid is the one where only the bare minimum of identifications have been made.
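
In Haskell terms, this unique homomorphism out of the free monoid is just a fold (the standard library calls it foldMap). Here’s a minimal sketch, using the Monoid class defined above; the name hom is mine:

-- Given a monoid n and a mapping q of generators into it, the one and
-- only monoid homomorphism from the free monoid [x] that agrees with q
-- on singleton lists:
hom :: Monoid n => (x -> n) -> ([x] -> n)
hom q = foldr (mappend . q) mempty

-- hom q [] = mempty, hom q [a] = q a, and
-- hom q (xs ++ ys) = hom q xs `mappend` hom q ys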

We’ll come back to free monoids when we talk about adjunctions.

Challenges

  1. You might think (as I did, originally) that the requirement that a homomorphism of monoids preserve the unit is redundant. After all, we know that for all a
    h a * h e = h (a * e) = h a

    So h e acts like a right unit (and, by analogy, as a left unit). The problem is that h a, for all a, might only cover a sub-monoid of the target monoid. There may be a “true” unit outside of the image of h. Show that an isomorphism between monoids that preserves multiplication must automatically preserve unit.

  2. Consider a monoid homomorphism from lists of integers with concatenation to integers with multiplication. What is the image of the empty list []? Assume that all singleton lists are mapped to the integers they contain, that is [3] is mapped to 3, etc. What’s the image of [1, 2, 3, 4]? How many different lists map to the integer 12? Is there any other homomorphism between the two monoids?
  3. What is the free monoid generated by a one-element set? Can you see what it’s isomorphic to?

Next: Representable Functors.

Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.


Lenses are a fascinating subject. Edward Kmett’s lens library is an indispensable tool in every Haskell programmer’s toolbox. I set out to write this blog post with the goal of describing some new insights into their categorical interpretation, but then I started reviewing all the different formulations of lenses and their relations to each other. So this post turned into a little summary of the theoretical underpinning of lenses.

If you’re already familiar with lenses, you may skip directly to the last section, which describes some new results.

The Data-Centric Picture

Lens

Lenses start as a very simple idea: an accessor/mutator or getter/setter pair — something familiar to every C++, Java, or C# programmer. In Haskell they can be described as two functions:

get :: s -> a
set :: s -> a -> s

Given an object of type s, the first function produces the value of the object’s sub-component of type a — which is the focus of the particular lens. The second function takes the object together with a new value for the sub-component, and produces a modified object.

A simple example is a lens that focuses on the first component of a pair:

get :: (a, a') -> a
get (x, y) = x
set :: (a, a') -> a -> (a, a')
set (x, y) x' = (x', y)

In Haskell we can go even further and define a lens for a polymorphic object, in which the setter changes not only the value of the sub-component, but also its type. This will, of course, also change the type of the resulting object. So, in general we have:

get :: s -> a
set :: s -> b -> t

Our pair example doesn’t change much on the surface:

get :: (a, a') -> a
get (x, y) = x
set :: (a, a') -> b -> (b, a')
set (x, y) x' = (x', y)

The difference is that the type of x' can now be different from the type of x. The first component of the pair is of type a before the update, and of type b (the same as that of x') after it. This way we can turn a pair of, say, (Int, Bool) to a pair of (String, Bool).

We can always go back to the monomorphic version of the lens by choosing b equal to a and t equal to s.

Not every pair of functions like these constitutes a lens. A lens has to obey a few laws (first formulated by Pierce in the database context). In particular, if you get a component after you set it, you should get back the value you just put in:

get . set s = id

If you call set with the value you obtained from get, you should get back the unchanged object:

set s (get s) = s

Finally, a set should overwrite the result of a previous set:

set (set s a) b = set s b
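
For the pair lens defined earlier, these laws are easy to check. Here’s a minimal sketch, with s specialized to (Int, Bool) and the property names my own; each property should evaluate to True for all inputs:

-- get-set: reading back what was just written
prop_getSet :: (Int, Bool) -> Int -> Bool
prop_getSet s a = get (set s a) == a

-- set-get: writing back what was just read changes nothing
prop_setGet :: (Int, Bool) -> Bool
prop_setGet s = set s (get s) == s

-- set-set: the second write wins
prop_setSet :: (Int, Bool) -> Int -> Int -> Bool
prop_setSet s a b = set (set s a) b == set s b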

So this is what I would call a “classical” lens. It’s formulated in easy-to-understand programming terms.

The Algebraic Picture

It was Russell O’Connor who noticed that when you refactor the common element from the getter/setter pair, you get an interesting algebraic structure. Instead of writing the lens as two functions, we can write it as one function returning a pair:

s -> (a, b -> t)

Think of this as factoring out the “this” pointer in OO and returning the interface. Of course, the difference is that, in functional programming, the setter does not mutate the original — it returns a new version of the object instead.

Let’s for a moment concentrate on the monomorphic version of the lens — one in which a is the same as b, and s is the same as t. We can define a data structure:

data Store a s = Store a (a -> s)

and rewrite the lens as:

s -> Store a s

For instance, our first-component-of-a-pair lens will take the form:

fstl :: (a, b) -> Store a (a, b)
fstl (x, y) = Store x (\x' -> (x', y))

The first observation is that, for any a, Store a is a functor:

instance Functor (Store a) where
    fmap f (Store x h) = Store x (f . h)

It’s a well-known fact that you can define algebras for a functor, so-called F-algebras. An algebra for a functor f consists of a type s called the carrier type and a function called the action:

alg :: f s -> s

This is almost like a lens (with f replaced by Store a), except that the arrow goes the wrong way. Not to worry: there is a dual notion called a coalgebra. It consists of a carrier type s and a function:

coalg :: s -> f s

Substitute Store a for f and you see that a lens is nothing but a coalgebra for this functor. This is not saying much — we have just given a mathematical name to a programming construct, no big deal. Except that Store a is more than a functor — it’s also a comonad.

What’s a comonad? It’s a monad with the arrows reversed.

You know that you can define a monad in Haskell using return and join. Reverse the arrows on those two, and you get extract and duplicate, the two functions that define a comonad.

class (Functor w) => Comonad w where
    extract :: w a -> a
    duplicate :: w a -> w (w a)

Here, w is a type constructor that is also a functor.

This is how you can implement those two functions for Store a:

instance Comonad (Store a) where
    -- extract :: Store a s -> s
    extract (Store x h) = h x
    -- duplicate :: Store a s -> Store a (Store a s)
    duplicate (Store x h) = Store x (\y -> Store y h)

So now we have two structures: a comonad w and a coalgebra:

type Coalgebra w s = s -> w s

A lens is a special case of this coalgebra where w is Store a.

Every time you have two structures, you may legitimately ask the question: Are they compatible? Just by looking at types, you may figure out some obvious compatibility conditions. In particular, since they go the opposite way, it would make sense for extract to undo the action of coalg:

coalg :: Coalgebra w s
extract . coalg = id

Coalgebra Law 1

Also, duplicating the result of coalg should be the same as applying coalg twice (the second time lifted by fmap):

fmap coalg . coalg = duplicate . coalg

Coalgebra Law 2

If these two conditions are satisfied, we call coalg a comonad coalgebra.

And here’s the clincher:

These two conditions when applied to the Store a comonad are equivalent to our earlier lens laws.

Let’s see how it works. First we’ll express the result of our lens coalgebra acting on some object s in terms of get and set (curried set s is a function a->s):

coalg s = Store (get s) (set s)

The first condition extract . coalg = id immediately gives us the law:

set s (get s) = s
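
Unfolding the left-hand side using the definitions of coalg and extract, and equating the result with id s = s:

extract (coalg s) = extract (Store (get s) (set s))
                  = set s (get s)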

When we act with duplicate on coalg s, we get:

Store (get s) (\y -> Store y (set s))

On the other hand, when we fmap our coalg over coalg s, we get:

Store (get s) ((\s' -> Store (get s') (set s')) . set s)

Two functions — the second components of the Store objects in those equations — must be equal when acting on any a. The first function produces:

Store a (set s)

In the second one, we first apply set s to a to get set s a, which we then pass to the lambda to get:

Store (get (set s a)) (set (set s a))

This reproduces the other two (monomorphic) lens laws:

get (set s a) = a

and

set (set s a) = set s

This algebraic construction can be extended to type-changing lenses by replacing Store with its indexed version:

data IStore a b t = IStore a (b -> t)

and the comonad with its indexed counterpart. The indexed store is also called Context in the lens parlance, and an indexed comonad is also called a parametrized comonad.

So what’s an indexed comonad? Let’s start with an indexed functor. It’s a type constructor that takes three types, a, b, and s, and is a functor in the third argument:

class IxFunctor f where
    imap :: (s -> t) -> f a b s -> f a b t

IStore is obviously an indexed functor:

instance IxFunctor IStore where
    -- imap :: (s -> t) -> IStore a b s -> IStore a b t
    imap f (IStore x h) = IStore x (f . h)

An indexed comonad has the indexed versions of extract and duplicate:

class IxComonad w where
    iextract :: w a a t -> t
    iduplicate :: w a b t -> w a j (w j b t)

Notice that iextract is “diagonal” in the index types, whereas the double application of w shares one index, j, between the two applications. This plays very well with the unit and multiplication interpretation of a monad — here it looks just like matrix multiplication (although we are dealing with a comonad rather than a monad).

It’s easy to see that the instantiation of the indexed comonad for IStore works the same way as the instantiation of the comonad for Store. The types just work out that way.

instance IxComonad IStore where
    -- iextract :: IStore a a t -> t
    iextract (IStore a h) = h a
    -- iduplicate :: IStore a b t -> IStore a c (IStore c b t)
    iduplicate (IStore a h) = IStore a (\c -> IStore c h)

There is also an indexed version of a comonad coalgebra, where the coalgebra is replaced by a family of mappings from some carrier type s to w a b t, with the type t determined by s together with the choice of the indexes a and b:

type ICoalg w s t a b = s -> w a b t

The compatibility conditions that make it an (indexed) comonad coalgebra are almost identical to the standard compatibility conditions, except that we have to be careful about the index types. Here’s the first condition:

icoalg_aa :: ICoalg w s t a a
iextract . icoalg_aa = id

ICoalgebra Law

Let’s analyze the types. The inner part has the type:

icoalg_aa :: s -> w a a t

We apply iextract to it, which has the type:

iextract :: w a a t -> t

and get:

iextract . icoalg_aa :: s -> t

The right hand side of the condition has the type:

id :: s -> s

It follows that, for the diagonal components of ICoalg w s t, t must be equal to s. The diagonal part of ICoalg w s t is therefore a family of regular coalgebras.

As we have done with the monomorphic lens, we can express (ICoalg IStore s t a b), when acting on s, in terms of get and set:

icoalg_ab s = IStore (get s) (set s)

But now get s is of type a, while set s is of the type b -> t. We can still apply iextract to the diagonal term IStore a a t, as required by the first compatibility condition. When equating the result to id, we recover the lens law:

set s (get s) = s

Similarly, it’s straightforward to see that the second compatibility condition:

icoalg_bc :: ICoalg w s t b c
icoalg_ab :: ICoalg w s t a b
icoalg_ac :: ICoalg w s t a c
imap icoalg_bc . icoalg_ab = iduplicate . icoalg_ac

is equivalent to the other two lens laws.

The Parametric Picture

Despite being theoretically attractive, standard lenses were awkward to use and, in particular, to compose. The breakthrough came when Twan van Laarhoven realized that there is a higher-order representation for them that has very nice compositional properties. Composing lenses to focus on sub-objects of sub-objects turned into simple function composition.

Here’s Twan’s representation (generalized by Russell for the polymorphic case):

type Lens s t a b = forall f. Functor f => (a -> f b) -> (s -> f t)

So a lens is a polymorphic higher order function with a twist. The twist is that it’s polymorphic with respect to a functor rather than a type.

You can think of it this way: the caller provides a function to modify a particular field of s, turning it from type a to f b. What the caller gets back is a function that transforms the whole of s to f t. The idea is that the lens knows how to reconstruct the object, while putting it under a functor f — if you tell it how to modify a field, also under this functor.

For instance, continuing with our example, here’s the van Laarhoven lens that focuses on the first component of a pair:

vL :: Lens (a, c) (b, c) a b
vL h (x, x') = fmap (\y -> (y, x')) (h x)

Here, s is (a, c) and t is (b, c).
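
To get a feel for it, here’s the same lens in action with f instantiated to Maybe (the validating update is my example):

checkFst :: (Int, Bool) -> Maybe (String, Bool)
checkFst = vL (\x -> if x > 0 then Just (show x) else Nothing)

-- checkFst (5, True)  evaluates to Just ("5", True)
-- checkFst (-1, True) evaluates to Nothing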

To see that the van Laarhoven representation is equivalent to the get/set one, let’s first change the order of arguments and pull s outside of the forall quantifier:

Lens s t a b = s -> (forall f. Functor f => (a -> f b) -> f t)

Here’s how you can read this definition: For a given s, if you give me a function from a to f b, I will produce a value of type f t. And I don’t care what functor you use!

What does it mean not to care about the functor? It means that the lens must be parametrically polymorphic in f. It can’t do case analysis on a functor. It must be implemented using the same formula for f being the list functor, or the Maybe functor, or the Const functor, etc. There’s only one thing all these functors have in common, and that’s the fmap function; so that’s what we are allowed to use in the implementation of the lens.

Now let’s think what we can do with a function a -> f b that we were given. There’s only one thing: apply it to some value of type a. So we must have access to a value of type a. The result of this application is some value of type f b, but we need to produce a value of the type f t. The only way to do it is to have a function of type b->t and sneak it under the functor using fmap (so here’s where the generic functor comes in). We conclude that the implementation of the function:

forall f. Functor f => (a -> f b) -> f t

must be hiding a value of type a and a function b->t. But that’s exactly the contents of IStore a b t. Parametricity tells us that there is an IStore hiding inside the van Laarhoven lens. The lens is equivalent to:

s -> IStore a b t

In fact, with a clever choice of functors we can recover both get and set from the van Laarhoven representation.

First we select our functor to be Const a. Note that the parameter a is not the one over which the functor is defined. Const a takes a second parameter b over which it is functorial. And, even though it takes b as a type parameter, it doesn’t use it at all. Instead, like a magician, it palms an a, and then reveals it at the end of the trick.

newtype Const a b = Const { getConst :: a }

instance Functor (Const a) where
    -- fmap :: (s -> t) -> Const a s -> Const a t
    fmap _ (Const a) = (Const a)

The constructor Const happens to be a function

Const :: a -> Const a b

which has the required form to be the first argument to the lens:

a -> f b

When we apply the lens, let’s call it vL, to the function Const, we get another function:

vL Const :: s -> Const a t

We can apply this function to s, and then, in the final reveal, retrieve the value of a that was smuggled inside Const a:

get vL s = getConst $ vL Const s
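
Tracing this through the pair lens vL shows the palming in action:

-- vL Const (5, True)
--   = fmap (\y -> (y, True)) (Const 5)  -- definition of vL
--   = Const 5                           -- fmap over Const is a no-op
-- so get vL (5, True) = getConst (Const 5) = 5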

Similarly, we can recover set from the van Laarhoven lens using the Identity functor:

newtype Identity a = Identity { runIdentity :: a }

We define:

set vL s x = runIdentity $ vL (Identity . const x) s
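
Traced through the same pair lens:

-- vL (Identity . const 7) (5, True)
--   = fmap (\y -> (y, True)) (Identity 7)
--   = Identity (7, True)
-- so set vL (5, True) 7 = (7, True)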

The beauty of the van Laarhoven representation is that it composes lenses using simple function composition. A lens takes a function and returns a function. This function can, in turn, be passed as the argument to another lens, and so on.

There’s an interesting twist to this kind of composition — the function composition operator in Haskell is the dot, just like the field accessor in OO languages like Java or C++; and the composition follows the same order as the composition of accessors in those languages. This was first observed by Conal Elliott in the context of semantic editor combinators.

Consider a lens that focuses on the a field inside some object s. Its type is:

Lens s t a b

When given a function:

h :: a -> f b

it returns a function:

h' :: s -> f t

Now consider another lens that focuses on the s field inside some even bigger object u. Its type is:

Lens u w s t

It expects a function of the type:

g :: s -> f t

We can pass the result of the first lens directly to the second lens to form a composite:

Lens u w s t . Lens s t a b

We get a lens that focuses on the a field of the object s that is the sub-object of the big object u. It works just like in Java, where you apply a dot to the result of a getter or a setter, to dig deeper into a subobject.
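
Here’s a minimal sketch of such a composite, reusing the pair lens vL from before (the nested-pair example is mine):

-- focus on the first component of the first component
vL2 :: Lens ((a, c), d) ((b, c), d) a b
vL2 = vL . vL

-- set vL2 ((1, 'x'), True) "one" evaluates to (("one", 'x'), True)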

Not only do lenses compose using regular function composition, but we can also use the identity function as the identity lens. So lenses form a category. It’s time to have a serious look at category theory. Warning: Heavy math ahead!

The Categorical Picture

I used parametricity arguments to justify the choice of the van Laarhoven representation for the lens. The lens function is supposed to have the same form for all functors f. Parametricity arguments have an operational feel to them, which is okay, but I feel like a solid categorical justification is more valuable than any symbol-shuffling argument. So I worked on it, and eventually came up with a derivation of the van Laarhoven representation using the Yoneda lemma. Apparently Russell O’Connor and Mauro Jaskelioff had similar feelings because they came up with the same result independently. We used the same approach, going through the Store functor and applying the Yoneda lemma twice, once in the functor category, and once in the Set category (see the Bibliography).

I would like to present the same result in a more general setting of the Yoneda embedding. It’s a direct consequence of the Yoneda lemma, and it states that any category can be embedded (fully and faithfully) in the category of functors from that category to Set.

Here’s how it works: Let’s fix some object a in some category C. For any object x in that category there is a hom-set C(a, x) of morphisms from a to x. A hom-set is a set — an object in the category Set of sets. So we have defined a mapping from C to Set that takes an x and maps it to the set C(a, x). This mapping is called C(a, _), with the underscore serving as a placeholder for the argument.

Hom-Set

It’s easy to convince yourself that this mapping is in fact a functor from C to Set. Indeed, take any morphism f from x to y. We want to map this morphism to a function (a morphism in Set) that goes between C(a, x) and C(a, y). Let’s define this lifted function component-wise: given any element h from C(a, x) we can map it to f . h. It’s just a composition of two morphisms from C. The resulting morphism is a member of C(a, y). We have lifted a morphism f from C to Set thus establishing that C(a, _) is a functor.

Hom Functor

Now consider two such functors, C(a, _) and C(b, _). The Yoneda embedding theorem tells us that there is a one-to-one correspondence between the set of natural transformations between these two functors and the hom-set C(b, a).

Nat(C(a, _), C(b, _)) ≅ C(b, a)

Notice the reversed order of a and b on the right-hand side.

Yoneda Embedding

Let’s rephrase what we have just seen. For every a in C, we can define a functor C(a, _) from C to Set. Such a functor is a member of the functor category Fun(C, Set). So we have a mapping from C to the functor category Fun(C, Set). Is this mapping a functor?

We have just seen that there is a mapping between morphisms in C and natural transformations in Fun(C, Set) — that’s the gist of the Yoneda embedding. But natural transformations are morphisms in the functor category. So we do have a functor from C to the functor category Fun(C, Set). It maps objects to objects and morphisms to morphisms. It’s a contravariant functor, because of the reversal of a and b. Moreover, it maps the hom-sets in the two categories one-to-one, so it’s a fully faithful functor, and therefore it defines an embedding of categories. Every category C can be embedded in the functor category Fun(C, Set). That’s called the Yoneda embedding.

Yoneda Embedding 2

There’s an interesting consequence of the Yoneda embedding: Every functor category can be embedded in its own functor category — just replace C with a functor category in the Yoneda embedding. Recall that functors between any two categories form a category. It’s a category in which objects are functors and morphisms are natural transformations. Yoneda embedding works for that category too, which means that a functor category can be embedded in a category of functors from that functor category to Set.

Let’s see what that means. We can fix one functor, say R, and consider the hom-set from R to some arbitrary functor f. Since we are in a functor category, this hom-set is a set of natural transformations between the two functors, Nat(R, f).

Now let’s pick another functor S. It also defines a set of natural transformations Nat(S, f). We can keep picking functors and mapping them to sets (sets of natural transformations). In fact we know from the previous argument that this mapping is itself a functor. This time it’s a functor from a functor category to Set.

Functor Embedding

What does the Yoneda embedding tell us about any two such functors? That the set of natural transformations between them is isomorphic to the (reversed) hom-set. But this time hom-sets are sets of natural transformations. So we have:

Nat(Nat(R, _), Nat(S, _)) ≅ Nat(S, R)

Functor Embedding 2

All natural transformations in this formula are regular natural transformations except for the outer one, which is more interesting. You may recall that a natural transformation is a family of morphisms parameterized by objects. But in this case objects are functors, and morphisms are themselves natural transformations. So it’s a family of natural transformations parameterized by functors. Keep this in mind as we proceed.

To get a better feel of what’s happening, let’s translate this to Haskell. In Haskell we represent natural transformations as polymorphic functions. This makes sense, since a natural transformation is a family of morphisms (here functions) parameterized by objects (here types). So a member of Nat(R, f) can be represented as:

forall x. R x -> f x

Similarly, the second natural transformation in our formula turns into:

forall y. S y -> f y

As I said, the outer natural transformation in the Yoneda embedding is a family of natural transformations parameterized by a functor, so we get:

forall f. Functor f => (forall x. R x -> f x) -> (forall y. S y -> f y)

You can already see one element of the van Laarhoven representation: the quantification over a functor.

The right hand side of the Yoneda embedding is a natural transformation:

forall z. S z -> R z

The next step is to pick the appropriate functors for R and S. We’ll take R to be IStore a b and S to be IStore s t.

Let’s work on the first part:

forall x. IStore a b x -> f x

A function from IStore a b x is equivalent to a function of two arguments, one of them of type a and another of type b->x:

forall x. a -> (b -> x) -> f x

We can pull a out of forall to get:

a -> (forall x. (b -> x) -> f x)

If you squint a little, you recognize that the thing in parentheses is a natural transformation between the functor C(b, _) and f, where C is the category of Haskell types. We can now apply the Yoneda lemma, which says that this set of natural transformations is isomorphic to the set f b:

forall x. (b -> x) -> f x ≅ f b
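
The two directions of this isomorphism are easy to write down in Haskell (a sketch, with names of my choosing; it requires the RankNTypes extension):

toYoneda :: Functor f => f b -> (forall x. (b -> x) -> f x)
toYoneda fb k = fmap k fb

fromYoneda :: (forall x. (b -> x) -> f x) -> f b
fromYoneda g = g id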

We can apply the same transformation to the second part of our identity:

forall y. (IStore s t y -> f y) ≅ s -> f t

Taking it all together, we get:

forall f. Functor f => (a -> f b) -> (s -> f t) 
    ≅ forall z. IStore s t z -> IStore a b z

Let’s now work on the right hand side:

forall z. IStore s t z -> IStore a b z 
    ≅ forall z. s -> (t -> z) -> IStore a b z

Again, pulling s out of forall and applying the Yoneda lemma, we get:

s -> IStore a b t

But that’s just the standard representation of the lens:

s -> IStore a b t ≅ (s -> a, s -> b -> t) = (get, set)

Thus the Yoneda embedding of the functor category leads to the van Laarhoven representation of the lens:

forall f. Functor f => (a -> f b) -> (s -> f t)
  ≅ (s -> a, s -> b -> t)

Playing with Adjunctions

This is all very satisfying, but you may wonder what’s so special about the IStore functor? The crucial step in the derivation of the van Laarhoven representation was the application of the Yoneda lemma to get this identity:

forall x. IStore a b x -> f x ≅ a -> f b

Let’s rewrite it in the more categorical language:

Nat(IStore a b, f) ≅ C(a, f b)

The set of natural transformations from the functor IStore a b to the functor f is isomorphic to the hom-set between a and f b. Any time you see an isomorphism of hom-sets (and remember that Nat is the hom-set in the functor category), you should be on the lookout for an adjunction. And indeed, we have an adjunction between two functors. One functor is defined as:

a -> IStore a b

It takes an object a in C and maps it to a functor IStore a b parameterized by some other object b. The other functor is:

f -> f b

It maps a functor, an object in the functor category, to an object in C. This functor is also parameterized by the same b. Since this is a flipped application, I’ll call it Flapp:

newtype Flapp b f = Flapp (f b)

So, for any b, the functor-valued functor IStore _ b is left adjoint to Flapp b. This is what makes IStore special.

IStore Adjunction
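
The two directions of this adjunction can be sketched directly in Haskell (again with RankNTypes; the names toHom and fromHom are mine):

-- Nat(IStore a b, f) on the left, C(a, f b) on the right
toHom :: (forall x. IStore a b x -> f x) -> (a -> f b)
toHom nat a = nat (IStore a id)

fromHom :: Functor f => (a -> f b) -> (forall x. IStore a b x -> f x)
fromHom g (IStore a h) = fmap h (g a)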

As a side note: IStore a b is a covariant functor in a and a contravariant functor in b. However, Store a is not functorial in a, because a appears in both positive and negative positions in its definition. So the adjunction trick doesn’t work for a simple (monomorphic) lens.

We can now turn the tables and use the adjunction to define the functor IStore in an arbitrary category (notice that the Yoneda lemma worked only for Set-valued functors). We just define a functor-valued functor IStore to be the left adjoint to Flapp, provided it exists.

Nat(IStore a b, f) ≅ C(a, f b)

Here, Nat is a set of natural transformations between endofunctors in C.

We can substitute the so defined functor into the Yoneda embedding formula we used earlier:

Nat(Nat(IStore a b, f), Nat(IStore s t, f))
    ≅ Nat(IStore s t, IStore a b)

We can now use the adjunction, rather than the Yoneda lemma, to eliminate some of the occurrences of IStore:

Nat(C(a, f b), C(s, f t))
    ≅ C(s, IStore a b t)

This is slightly more general than the original van Laarhoven equivalence.

We can go even farther and reproduce the Jaskelioff and O’Connor trick of constraining the generic functor in the definition of the van Laarhoven lens to a pointed or applicative functor. This results in a multi-focus lens. In particular, if we use pointed functors, we get lenses with zero or one targets, so called affine lenses. Restricting the functors further to applicative leads to lenses with any number of targets, or traversals.

The trick is that any pointed or applicative functor can be stripped of the additional functionality and treated just like any other functor. This act of “forgetting” about pure and <*> may itself be considered a functor in the functor category. It’s called, appropriately, a forgetful functor. The left adjoint to a forgetful functor (if it exists) is called a free functor. It takes an arbitrary functor and creates a pointed functor by generating an artificial pure; or it creates an applicative functor by adding <*>. This adjunction is described by a natural isomorphism of hom-sets — in this case sets of natural transformations:

Nat(S, U f) ≅ Nat(S*, f)

Here, U is the forgetful functor, and S* is the free applicative/pointed version of the functor S. The functor f ranges across applicative (respectively, pointed) functors.

Now we can try to substitute the free version of IStore in the Yoneda embedding formula:

Nat(Nat(IStore* a b, f), Nat(IStore* s t, f))
    ≅ Nat(IStore* s t, IStore* a b)

The formula holds for any applicative (pointed) functor f and a set of natural transformations over such functors.

The first step is to use the forgetful/free adjunction:

Nat(Nat(IStore a b, U f), Nat(IStore s t, U f))
    ≅ Nat(IStore s t, U IStore* a b)

Then we can use our defining adjunction for IStore to get:

Nat(C(a, U f b), C(s, U f t)) 
    ≅ C(s, U IStore* a b t)

In Haskell notation this reads:

forall f. Applicative f => (a -> f b) -> (s -> f t)
    ≅ s -> IAppStore a b t

(the action of U is implicit).

The free applicative version of IStore is defined as:

data IAppStore a b t = 
    Unit t
  | IAppStore a (IAppStore a b (b -> t))

These are all known results, but the use of the Yoneda embedding and the adjunction to define the IStore functor makes the derivation more compact and slightly more general.

Acknowledgments

I’m grateful to Mauro Jaskelioff, Gershom Bazerman, and Joseph Abrahamson for reading the draft and providing helpful comments and to André van Meulebrouck for editing help.

Bibliography

  1. Edward Kmett, The Haskell Lens Library
  2. Simon Peyton Jones, Lenses: Compositional Data Access and Manipulation. A Skills Matter video presentation.
  3. Joseph Abrahamson, A Little Lens Starter Tutorial
  4. Joseph Abrahamson, Lenses from Scratch
  5. Artyom, lens over tea tutorial
  6. Twan van Laarhoven, CPS based functional references
  7. Bartosz Milewski, Lenses, Stores, and Yoneda
  8. Mauro Jaskelioff, Russell O’Connor, A Representation Theorem for Second-Order Functionals
  9. Bartosz Milewski, Understanding Yoneda
