This is part 15 of Categories for Programmers. Previously: Representable Functors. See the Table of Contents.

Most constructions in category theory are generalizations of results from other more specific areas of mathematics. Things like products, coproducts, monoids, exponentials, etc., have been known long before category theory. They might have been known under different names in different branches of mathematics. A cartesian product in set theory, a meet in order theory, a conjunction in logic — they are all specific examples of the abstract idea of a categorical product.

The Yoneda lemma stands out in this respect as a sweeping statement about categories in general with little or no precedent in other branches of mathematics. Some say that its closest analog is Cayley’s theorem in group theory (every group is isomorphic to a permutation group of some set).

The setting for the Yoneda lemma is an arbitrary category *C* together with a functor `F` from *C* to **Set**. We’ve seen in the previous section that some **Set**-valued functors are representable, that is, isomorphic to a hom-functor. The Yoneda lemma tells us that all **Set**-valued functors can be obtained from hom-functors through natural transformations, and it explicitly enumerates all such transformations.

When I talked about natural transformations, I mentioned that the naturality condition can be quite restrictive. When you define a component of a natural transformation at one object, naturality may be strong enough to “transport” this component to another object that is connected to it through a morphism. The more arrows between objects in the source and the target categories there are, the more constraints you have for transporting the components of natural transformations. **Set** happens to be a very arrow-rich category.

The Yoneda lemma tells us that a natural transformation between a hom-functor and any other functor `F` is completely determined by specifying the value of its single component at just one point! The rest of the natural transformation just follows from naturality conditions.

So let’s review the naturality condition between the two functors involved in the Yoneda lemma. The first functor is the hom-functor. It maps any object `x` in *C* to the set of morphisms `C(a, x)` — for `a` a fixed object in *C*. We’ve also seen that it maps any morphism `f` from `x` to `y` to `C(a, f)`.

The second functor is an arbitrary **Set**-valued functor `F`.

Let’s call the natural transformation between these two functors `α`. Because we are operating in **Set**, the components of the natural transformation, like `α_x` or `α_y`, are just regular functions between sets:

```
α_x :: C(a, x) -> F x
α_y :: C(a, y) -> F y
```

And because these are just functions, we can look at their values at specific points. But what’s a point in the set `C(a, x)`? Here’s the key observation: every point in the set `C(a, x)` is also a morphism `h` from `a` to `x`.

So the naturality square for `α`:

```
α_y ∘ C(a, f) = F f ∘ α_x
```

becomes, point-wise, when acting on `h`:

```
α_y (C(a, f) h) = (F f) (α_x h)
```

You might recall from the previous section that the action of the hom-functor `C(a, _)` on a morphism `f` was defined as precomposition:

```
C(a, f) h = f ∘ h
```

which leads to:

```
α_y (f ∘ h) = (F f) (α_x h)
```

Just how strong this condition is can be seen by specializing it to the case of `x` equal to `a`. In that case `h` becomes a morphism from `a` to `a`. We know that there is at least one such morphism, `h = id_a`. Let’s plug it in:

```
α_y f = (F f) (α_a id_a)
```

Notice what has just happened: the left hand side is the action of `α_y` on an arbitrary element `f` of `C(a, y)`. And it is totally determined by the single value of `α_a` at `id_a`. We can pick any such value and it will generate a natural transformation. Since the values of `α_a` are in the set `F a`, any point in `F a` will define some `α`.
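
This determination can be watched at work in Haskell. Below is a small sketch (our own made-up example, not from the text): for the list functor and `a = Int`, pick any natural transformation `alpha` out of the hom-functor; parametricity then guarantees that its value at any `f` agrees with `fmap f` applied to its value at `id`.

```haskell
-- A sketch: one arbitrary natural transformation from the hom-functor
-- C(Int, _) (functions out of Int) to the list functor. The name and
-- the particular choice of components are ours, for illustration.
alpha :: (Int -> x) -> [x]
alpha h = [h 1, h 2, h 3]

-- Naturality says: alpha f == fmap f (alpha id)
-- so alpha is fully determined by alpha id, which is [1, 2, 3].
```

For instance, `alpha show` and `fmap show (alpha id)` both yield `["1","2","3"]`.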

Conversely, given any natural transformation `α` from `C(a, _)` to `F`, you can evaluate it at `id_a` to get a point in `F a`.

We have just proven the Yoneda lemma:

There is a one-to-one correspondence between natural transformations from `C(a, _)` to `F` and elements of `F a`.

In other words,

```
Nat(C(a, _), F) ≅ F a
```

I’ll explain later how this correspondence is in fact a natural isomorphism.

Now let’s try to get some intuition about this result. The most amazing thing is that the whole natural transformation crystallizes from just one nucleation site: the value we assign to it at `id_a`. It spreads from that point following the naturality condition. It floods the image of *C* in **Set**. So let’s first consider what the image of *C* is under `C(a, _)`.

Let’s start with the image of `a` itself. Under the hom-functor `C(a, _)`, `a` is mapped to the set `C(a, a)`. Under the functor `F`, on the other hand, it is mapped to the set `F a`. The component of the natural transformation `α_a` is some function from `C(a, a)` to `F a`. Let’s focus on just one point in the set `C(a, a)`, the point corresponding to the morphism `id_a`. To emphasize the fact that it’s just a point in a set, let’s call it `p`. The component `α_a` should map `p` to some point `q` in `F a`. I’ll show you that any choice of `q` leads to a unique natural transformation.

The first claim is that the choice of one point `q` uniquely determines the rest of the function `α_a`. Indeed, let’s pick any other point, `p'`, in `C(a, a)`, corresponding to some morphism `g` from `a` to `a`. And here’s where the magic of the Yoneda lemma happens: `g` can be viewed as a point `p'` in the set `C(a, a)`. At the same time, it selects two *functions* between sets. Indeed, under the hom-functor, the morphism `g` is mapped to a function `C(a, g)`; and under `F` it’s mapped to `F g`.

Now let’s consider the action of `C(a, g)` on our original `p`. It is defined as precomposition, `g ∘ id_a`, which is equal to `g`, which corresponds to our point `p'`. So `g`, mapped to a function and acting on `p`, produces `p'`, which is `g`. We have come full circle!
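
In Haskell terms, the hom-functor’s action on a morphism `g` composes `g` onto its argument, and applying the result to the identity gives back `g` itself. The sketch below illustrates this; `homMap` is our own name for the action, not a standard function.

```haskell
-- The hom-functor C(a, _) sends a morphism g :: x -> y to the function
-- C(a, g) :: (a -> x) -> (a -> y), which maps h to g . h.
-- (homMap is our name for this action, for illustration.)
homMap :: (x -> y) -> (a -> x) -> (a -> y)
homMap g h = g . h

-- Acting on the identity recovers g: homMap g id == g . id == g
```

For example, `homMap (+1) id 5` evaluates to `6`, the same as `(+1) 5`.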

Now consider the action of `F g` on `q`. It is some `q'`, a point in `F a`. To complete the naturality square, `p'` must be mapped to `q'` under `α_a`. We picked an arbitrary `p'` (an arbitrary `g`) and derived its mapping under `α_a`. The function `α_a` is thus completely determined.

The second claim is that `α_x` is uniquely determined for any object `x` in *C* that is connected to `a`. The reasoning is analogous, except that now we have two more sets, `C(a, x)` and `F x`, and the morphism `g` from `a` to `x` is mapped, under the hom-functor, to:

```
C(a, g) :: C(a, a) -> C(a, x)
```

and under `F` to:

```
F g :: F a -> F x
```

Again, `C(a, g)` acting on our `p` is given by the precomposition: `g ∘ id_a`, which corresponds to a point `p'` in `C(a, x)`. Naturality determines the value of `α_x` acting on `p'` to be:

```
q' = (F g) q
```

Since `p'` was arbitrary, the whole function `α_x` is thus determined.

What if there are objects in *C* that have no connection to `a`? They are all mapped under `C(a, _)` to a single set — the empty set. Recall that the empty set is the initial object in the category of sets. It means that there is a unique function from this set to any other set. We called this function `absurd`. So here, again, we have no choice for the component of the natural transformation: it can only be `absurd`.
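
In Haskell, the empty hom-set corresponds to the uninhabited type `Void`, and `absurd` from `Data.Void` is the unique function out of it, so the component really is forced. A minimal sketch (the name `componentAtDisconnected` is ours, for illustration):

```haskell
import Data.Void (Void, absurd)

-- At an object x with no arrows from a, the component of the natural
-- transformation has type Void -> F x; absurd is the only such function.
componentAtDisconnected :: Functor f => Void -> f x
componentAtDisconnected = absurd

-- absurd is never called at runtime, but it lets us discharge
-- impossible branches in total code:
total :: Either Void Int -> Int
total = either absurd id
```

Since `Void` has no values, the `Left` branch of `total` can never be reached; `absurd` documents that fact in the types.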

One way of understanding the Yoneda lemma is to realize that natural transformations between **Set**-valued functors are just families of functions, and functions are in general lossy. A function may collapse information and it may cover only parts of its codomain. The only functions that are not lossy are the ones that are invertible — the isomorphisms. It follows then that the best structure-preserving **Set**-valued functors are the representable ones. They are either the hom-functors or the functors that are naturally isomorphic to hom-functors. Any other functor `F` is obtained from a hom-functor through a lossy transformation. Such a transformation may not only lose information, but it may also cover only a small part of the image of the functor `F` in **Set**.

## Yoneda in Haskell

We have already encountered the hom-functor in Haskell under the guise of the reader functor:

```haskell
type Reader a x = a -> x
```

The reader maps morphisms (here, functions) by precomposition:

```haskell
instance Functor (Reader a) where
    fmap f h = f . h
```

The Yoneda lemma tells us that the reader functor can be naturally mapped to any other functor.

A natural transformation is a polymorphic function. So, given a functor `F`, we have a mapping to it from the reader functor:

```haskell
alpha :: forall x . (a -> x) -> F x
```

As usual, `forall` is optional, but I like to write it explicitly to emphasize the parametric polymorphism of natural transformations.

The Yoneda lemma tells us that these natural transformations are in one-to-one correspondence with the elements of `F a`:

```
forall x . (a -> x) -> F x ≅ F a
```

The right hand side of this identity is what we would normally consider a data structure. Remember the interpretation of functors as generalized containers? `F a` is a container of `a`. But the left hand side is a polymorphic function that takes a function as an argument. The Yoneda lemma tells us that the two representations are equivalent — they contain the same information.

Another way of saying this is: give me a polymorphic function of the type:

```haskell
alpha :: forall x . (a -> x) -> F x
```

and I’ll produce a container of `a`. The trick is the one we used in the proof of the Yoneda lemma: we call this function with `id` to get an element of `F a`:

```haskell
alpha id :: F a
```

The converse is also true: given a value of the type `F a`:

```haskell
fa :: F a
```

one can define a polymorphic function:

```haskell
alpha h = fmap h fa
```

of the correct type. You can easily go back and forth between the two representations.
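
As a concrete sanity check, here are the two directions specialized to the list functor (a sketch; the names `fromYoneda` and `toYoneda` are ours):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The Yoneda isomorphism, specialized to []:
-- extract the container by calling the transformation at id...
fromYoneda :: (forall x. (a -> x) -> [x]) -> [a]
fromYoneda alpha = alpha id

-- ...and rebuild the transformation from a container via fmap.
toYoneda :: [a] -> (forall x. (a -> x) -> [x])
toYoneda fa h = fmap h fa
```

A round trip returns the original container: `fromYoneda (toYoneda [1,2,3])` is `[1,2,3]`.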

The advantage of having multiple representations is that one might be easier to compose than the other, or that one might be more efficient in some applications than the other.

The simplest illustration of this principle is the code transformation that is often used in compiler construction: the continuation passing style, or CPS. It’s the simplest application of the Yoneda lemma to the identity functor. Replacing `F` with the identity produces:

```
forall r . (a -> r) -> r ≅ a
```

The interpretation of this formula is that any type `a` can be replaced by a function that takes a “handler” for `a`. A handler is a function accepting `a` and performing the rest of the computation — the continuation. (The type `r` usually encapsulates some kind of status code.)
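
A minimal Haskell sketch of this identity (the names `toCPS` and `fromCPS` are ours, for illustration):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Yoneda with F = identity: a ≅ forall r. (a -> r) -> r

-- A value becomes a function that hands itself to any handler k.
toCPS :: a -> (forall r. (a -> r) -> r)
toCPS a = \k -> k a

-- The value is recovered by passing the identity as the handler.
fromCPS :: (forall r. (a -> r) -> r) -> a
fromCPS c = c id
```

So `fromCPS (toCPS 42)` gives back `42`, and `toCPS 21` applied to the handler `(* 2)` yields `42`: the CPS-ed value simply feeds itself to whatever continuation it is given.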

This style of programming is very common in UIs, in asynchronous systems, and in concurrent programming. The drawback of CPS is that it involves inversion of control. The code is split between producers and consumers (handlers), and is not easily composable. Anybody who’s done any amount of nontrivial web programming is familiar with the nightmare of spaghetti code from interacting stateful handlers. As we’ll see later, judicious use of functors and monads can restore some compositional properties of CPS.

## Co-Yoneda

As usual, we get a bonus construction by inverting the direction of arrows. The Yoneda lemma can be applied to the opposite category *C*^{op} to give us a mapping between contravariant functors.

Equivalently, we can derive the co-Yoneda lemma by fixing the target object of our hom-functors instead of the source. We get the contravariant hom-functor from *C* to **Set**: `C(_, a)`. The contravariant version of the Yoneda lemma establishes a one-to-one correspondence between natural transformations from this functor to any other contravariant functor `F` and the elements of the set `F a`:

```
Nat(C(_, a), F) ≅ F a
```

Here’s the Haskell version of the co-Yoneda lemma:

```
forall x . (x -> a) -> F x ≅ F a
```

Notice that in some literature it’s the contravariant version that’s called the Yoneda lemma.
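
To make the contravariant version concrete, here is a sketch using predicates as the contravariant functor. All the names here (`Pred`, `contramapPred`, `fromCoYoneda`, `toCoYoneda`) are ours, chosen for illustration:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A contravariant functor: predicates on x.
newtype Pred x = Pred { runPred :: x -> Bool }

-- The contravariant analogue of fmap precomposes with the morphism.
contramapPred :: (x -> y) -> Pred y -> Pred x
contramapPred f (Pred p) = Pred (p . f)

-- The two directions of co-Yoneda, specialized to Pred:
fromCoYoneda :: (forall x. (x -> a) -> Pred x) -> Pred a
fromCoYoneda alpha = alpha id

toCoYoneda :: Pred a -> (forall x. (x -> a) -> Pred x)
toCoYoneda pa h = contramapPred h pa
```

A round trip preserves the predicate: `fromCoYoneda (toCoYoneda (Pred even))` still tests evenness.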

## Challenges

- Show that the two functions `phi` and `psi` that form the Yoneda isomorphism in Haskell are inverses of each other:

  ```haskell
  phi :: (forall x . (a -> x) -> F x) -> F a
  phi alpha = alpha id

  psi :: F a -> (forall x . (a -> x) -> F x)
  psi fa h = fmap h fa
  ```

- A discrete category is one that has objects but no morphisms other than identity morphisms. How does the Yoneda lemma work for functors from such a category?
- A list of units `[()]` contains no other information but its length. So, as a data type, it can be considered an encoding of natural numbers. An empty list encodes zero, a singleton `[()]` (a value, not a type) encodes one, and so on. Construct another representation of this data type using the Yoneda lemma for the list functor.


Next: Yoneda Embedding.

## Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.


September 2, 2015 at 5:12 am

When explaining Yoneda I find it useful to do the pre-order case first. It states that in any pre-order, x<=y iff for all z, z<=x implies z<=y. The proof is trivial, but the utility of this little lemma is amazing. One example is in the proof that every Heyting algebra (exponentiated lattice) is distributive. It is a doddle using Yoneda; I don’t know how to do it without it.

September 2, 2015 at 7:06 pm

I was thinking of including the pre-order example, which I saw mentioned in your Oregon lectures (which I keep re-watching, every time understanding a little more). I wasn’t sure how to explain the “implies” part though. But you’re right, it is trivial.

The sets in question can only be a singleton and an empty set. There can be no function from singleton (corresponding to z<=x being true) to an empty set (corresponding to z<=y being false), so the existence of a natural transformation is equivalent to the implication z<=x => z<=y.

The cool thing about blogging is that I can modify a post in response to suggestions. So I will revise this post to include this example. And I’ll include a poset example in my next post on Yoneda embedding.

As for Heyting algebras, that would require a blog post or two just to introduce the concept. I don’t assume any particular math knowledge beyond high school from my readers, so I avoid examples that involve formal logic, topological spaces, etc.

September 2, 2015 at 8:06 pm

Yes, I had thought of remarking about the “implies” as a degenerate natural transformation between hom sets, but decided to be brief.

September 8, 2015 at 4:50 pm

Since it looks like the introduction to Monads is coming closer: I know too little about all this but there is a recent paper here: http://okmij.org/ftp/Haskell/extensible/ describing computationally efficient, composable methods to handle side-effects. Maybe some of it would be valuable for Monads? Or maybe for later topics.

November 6, 2015 at 12:10 pm

I think there is a mistake in the last diagram: it should be C(a, g), not C(x, g).

November 6, 2015 at 2:59 pm

@benmoussa: Good catch! I didn’t like the colors in this diagram anyway, so I’m going to redo it.

July 1, 2016 at 11:43 am

Does this mean the type system is boolean? I.e., double negation `(A -> ⊥) -> ⊥` can produce `A` for all `A`.

July 1, 2016 at 1:58 pm

Is it okay that a theorem proven in Sets is applied to Hask?

July 1, 2016 at 4:33 pm

@vpatryshev: I decided from the very beginning to ignore the Hask vs Set distinction, unless otherwise noted. There are some good reasons for using Set arguments in Hask, although one should always be careful. See for instance Fast and Loose Reasoning is Morally Correct.

October 31, 2016 at 7:56 am

As a quick side-note, I’d go further than saying Cayley’s theorem is the closest analogue to Yoneda. I’d say that it *is* Yoneda, in the special case of a category with one object and all isomorphisms. Representable functors in that case are nothing but G-torsors!

December 2, 2016 at 11:15 pm

Is it precomposition or postcomposition? If we do f . h, which is f after h, shouldn’t we say that we post-compose with f (or we pre-compose with h)?

December 3, 2016 at 11:01 am

Yes, pre and post are very ambiguous in this context. Normally we consider putting something on the left as “pre”, but the left function is executed “after” the right function so, in this sense, it’s “post”. There doesn’t seem to be a consensus, as seen in this discussion.