


The Free Theorem for Ends

In Haskell, the end of a profunctor p is defined as a product of all diagonal elements:

forall c. p c c

together with a family of projections:

pi :: Profunctor p => forall c. (forall a. p a a) -> p c c
pi e = e

In category theory, the end must also satisfy the wedge condition which, in (type-annotated) Haskell, could be written as:

dimap f id_b . pi_b = dimap id_a f . pi_a

for any f :: a -> b.
Using a suitable formulation of parametricity, this equation can be shown to be a free theorem. Let’s first review the free theorem for functors before generalizing it to profunctors.

Functor Characterization

You may think of a functor as a container that has a shape and contents. You can manipulate the contents without changing the shape using fmap. In general, when applying fmap, you not only change the values stored in the container, you change their type as well. To really capture the shape of the container, you have to consider not only all possible mappings, but also more general relations between different contents.

A function is directional, and so is fmap, but relations don’t favor either side. They can map multiple values to the same value, and they can map one value to multiple values. Any relation on values induces a relation on containers. For a given functor F, if there is a relation a between type A and type A':

A <=a=> A'

then there is a relation between type F A and F A':

F A <=(F a)=> F A'

We call this induced relation F a.

For instance, consider the relation between students and their grades. Each student may have multiple grades (if they take multiple courses) so this relation is not a function. Given a list of students and a list of grades, we would say that the lists are related if and only if they match at each position. It means that they have to be of equal length, and the first grade on the list of grades must belong to the first student on the list of students, and so on. Of course, a list is a very simple container, but this property can be generalized to any functor we can define in Haskell using algebraic data types.

The fact that fmap doesn’t change the shape of the container can be expressed as a “theorem for free” using relations. We start with two related containers:

xs :: F A
xs':: F A'

where A and A' are related through some relation a. We want related containers to be fmapped to related containers. But we can’t use the same function to map both containers, because they contain different types. So we have to use two related functions instead. Related functions map related types to related types so, if we have:

f :: A -> B
f':: A'-> B'

and A is related to A' through a, we want B to be related to B' through some relation b. Also, we want the two functions to map related elements to related elements. So if x is related to x' through a, we want f x to be related to f' x' through b. In that case, we’ll say that f and f' are related through the relation that we call a->b:

f <=(a->b)=> f'

For instance, if f is mapping students’ SSNs to last names, and f' is mapping letter grades to numerical grades, the results will be related through the relation between students’ last names and their numerical grades.

To summarize, we require that for any two relations:

A <=a=> A'
B <=b=> B'

and any two functions:

f :: A -> B
f':: A'-> B'

such that:

f <=(a->b)=> f'

and any two containers:

xs :: F A
xs':: F A'

we have:

if          xs <=(F a)=> xs'
then  (F f) xs <=(F b)=> (F f') xs'

where F f stands for the lifting of the function f by the functor F (in Haskell, fmap f).

This characterization can be extended, with suitable changes, to contravariant functors.
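Going back to the list example, here’s how this could be sketched in Haskell, modeling a relation as a Boolean predicate (this is only an approximation of the semantic notion of a relation used above, and the names are made up for illustration):

type Rel a a' = a -> a' -> Bool

-- The relation induced by the list functor: equal length and pointwise related.
listRel :: Rel a a' -> Rel [a] [a']
listRel r xs xs' = length xs == length xs' && and (zipWith r xs xs')

-- Example: relate an Int to its decimal representation.
digits :: Rel Int String
digits n s = show n == s

-- These two lists are related through (listRel digits):
related :: Bool
related = listRel digits [1, 2, 3] ["1", "2", "3"]   -- True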

Profunctor Characterization

A profunctor is a functor of two variables. It is contravariant in the first variable and covariant in the second. A profunctor can lift two functions simultaneously using dimap:

class Profunctor p where
    dimap :: (a -> b) -> (c -> d) -> p b c -> p a d

We want dimap to preserve relations between profunctor values. We start by picking any relations a, b, c, and d between types:

A <=a=> A'
B <=b=> B'
C <=c=> C'
D <=d=> D'

For any functions:

f  :: A -> B
f' :: A'-> B'
g  :: C -> D
g' :: C'-> D'

that are related through the following relations induced by function types:

f <=(a->b)=> f'
g <=(c->d)=> g'

we define:

xs  :: p B C
xs' :: p B' C'

The following condition must be satisfied:

if             xs <=(p b c)=> xs'
then   (p f g) xs <=(p a d)=> (p f' g') xs'

where p f g stands for the lifting of the two functions by the profunctor p.

Here’s a quick sanity check. If b and c are functions:

b :: B'-> B
c :: C -> C'

then the relation:

xs <=(p b c)=> xs'

becomes:

xs' = dimap b c xs

If a and d are functions:

a :: A'-> A
d :: D -> D'

then these relations:

f <=(a->b)=> f'
g <=(c->d)=> g'

become:

f . a = b . f'
d . g = g'. c

and this relation:

(p f g) xs <=(p a d)=> (p f' g') xs'

becomes:

(p f' g') xs' = dimap a d ((p f g) xs)

Substituting xs', we get:

dimap f' g' (dimap b c xs) = dimap a d (dimap f g xs)

and using functoriality:

dimap (b . f') (g'. c) = dimap (f . a) (d . g)

which is identically true.

Special Case of Profunctor Characterization

We are interested in the diagonal elements of a profunctor. Let’s first specialize the general case to:

C = B
C'= B'
c = b

to get:

xs  :: p B B
xs' :: p B' B'

and

if             xs <=(p b b)=> xs'
then   (p f g) xs <=(p a d)=> (p f' g') xs'

Choosing the following substitutions:

A = A'= B
D = D'= B'
a = id
d = id
f = id
g'= id
f'= g

we get:

if              xs <=(p b b)=> xs'
then   (p id g) xs <=(p id id)=> (p g id) xs'

Since p id id is the identity relation, we get:

(p id g) xs = (p g id) xs'

or

dimap id g xs = dimap g id xs'

Free Theorem

We apply the free theorem to the term xs:

xs :: forall c. p c c

It must be related to itself through the relation that is induced by its type:

xs <=(forall b. p b b)=> xs

for any relation b:

B <=b=> B'

Universal quantification translates to a relation between different instantiations of the polymorphic value:

xs_B <=(p b b)=> xs_B'

Notice that we can write:

xs_B  = pi_B  xs
xs_B' = pi_B' xs

using the projections we defined earlier.

We have just shown that this relation leads to:

dimap id g xs_B = dimap g id xs_B'

which shows that the wedge condition is indeed a free theorem.

Natural Transformations

Here’s another quick application of the free theorem. The set of natural transformations may be represented as an end of the following profunctor:

type NatP a b = F a -> G b
instance Profunctor NatP where
    dimap f g alpha = fmap g . alpha . fmap f

The free theorem tells us that for any polymorphic mu :: forall c. NatP c c:

(dimap id g) mu = (dimap g id) mu

which is the naturality condition:

mu . fmap g = fmap g . mu
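As a concrete instance (just a sketch), take F to be the list functor, G to be Maybe, and mu to be safeHead:

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Naturality checked on a sample, with g = show:
--   (safeHead . fmap show) [1, 2, 3] == (fmap show . safeHead) [1, 2, 3]
-- Both sides evaluate to Just "1".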

It’s been known for some time that, in Haskell, naturality follows from parametricity, so this is not surprising.

Acknowledgment

I’d like to thank Edward Kmett for reviewing the draft of this post.

Bibliography

  1. Bartosz Milewski, Ends and Coends
  2. Edsko de Vries, Parametricity Tutorial, Part 1, Part 2, Contravariant Functions.
  3. Bartosz Milewski, Parametricity: Money for Nothing and Theorems for Free

I’m a refugee. I fled Communist Poland and was granted political asylum in the United States. That was so long ago that I don’t think of myself as a refugee any more. I’m an American — not by birth but by choice. My understanding is that being an American has nothing to do with ethnicity, religion, or personal history. I became an American by accepting a certain system of values specified in the Constitution. Things like freedom of expression, freedom from persecution, equality, pursuit of happiness, etc. I’m also a Pole and proud of it. I speak the language, I know my history and culture. No contradiction here.

I’m a scientist, and I normally leave politics to others. In fact I came to the United States to get away from politics. In Poland, I was engaged in political struggle, I was a member of Solidarity, and I joined the resistance when Solidarity was crushed. I could have stayed and continued the fight, but I chose instead to leave and make my contribution to society in other areas.

There are times in history when it’s best for scientists to sit in their ivory towers and do what they are trained to do — science. There are times when it’s best for engineers to design new things, write software, and build gadgets that make life easier for everybody. But there are times when this is not enough. That’s why I’m interrupting my scheduled programming, my category theory for programmers blog, to say a few words about current events. Actually, first I’d like to reminisce a little.

When you live under a dictatorship, you have to develop certain skills. If a direct approach can get you in trouble, you try to manipulate the system. When martial law was imposed in Poland, all international travel was suspended. I was a grad student then, working on my Ph.D. in theoretical physics. Contact with scientists from abroad was very important to me. As soon as martial law was suspended, my supervisor and I decided to go for a visit — not to the West, mind you, but to the Soviet Union. But the authorities decided that giving passports to scientists was a great opportunity to make them work for the system. So before we could get permission to go abroad, we had to visit the Department of Security — the Secret Police — for an interview. From our friends, who were interviewed before, we knew that we’d be offered a choice: become an informant or forget about traveling abroad.

My professor went first. He was on time, but they kept him waiting outside the office forever. After an hour, he stormed out. He didn’t get the passport.

When I went to my interview, it started with some innocuous questions. I was asked who the chief of Solidarity at the University was. That was no secret — he was my office mate in the Physics Department. Then the discussion turned to my future employment at the University. The idea was to suggest that the Department of Security could help me keep my position, or get me fired. Knowing what was coming, I bluffed, saying that I was one of the brightest young physicists around, and my employment was perfectly secure. Then I started talking about my planned trip to the Soviet Union. I took my interviewer into my confidence, and explained how horribly Soviet science was suffering because their government was not allowing their scientists to travel to the West, and how much better Polish science was because of that. You have to realize that, even in the depths of the Department of Security of a Communist country, there was no love for our Soviet brethren. If we could beat them at science, all the better. I got my passport without any more hassle.

I was exaggerating a little, especially about me being so bright, but it’s true that there is an international community of scientists and engineers that knows no borders. Any impediment to free exchange of ideas and people is very detrimental to its prosperity and, by association, to the prosperity of the societies they live in.

I consider the recent Muslim ban — and that’s what it should be called — a direct attack on this community, on a par with climate-change denials and gag orders against climate scientists working for the government. It’s really hard to piss off scientists and engineers, so I consider this a major accomplishment of the new presidency.

You can make fun of us nerds as much as you want, but every time you send a tweet, you’re using the infrastructure created by us. The billions of metal-oxide field-effect transistors and the liquid-crystal display in your tablet were made possible by developments in quantum mechanics and materials science. The operating system was written by software engineers in languages based on the math developed by Alan Turing and Alonzo Church. Try denying that, and you’ll end up tweeting with a quill on parchment.

Scientists and engineers consider themselves servants of the society. We don’t make many demands and are quite happy to be left alone to do our stuff. But if this service is disrupted by clueless, power-hungry politicians, we will act. We are everywhere, and we know how to use the Internet — we invented it.

P. S. I keep comments to my blog under moderation because of spam. But I will also delete comments that I consider clueless.

Here’s a little anecdote about cluelessness that I heard a long time ago from my physicist friends in the Soviet Union. They had invited a guest scientist from the US to one of the conferences. They were really worried that he might say something politically charged and make future scientific exchanges impossible. So they asked him to, please, refrain from any political comments.

Time comes for the guest scientist to give a talk. And he starts with, “Before I came to the Soviet Union I was warned that I would be constantly minded by the secret police.” The director of the institute, who invited our scientist, is sitting in the first row between two KGB minders. All blood is leaving his face. The KGB minders stiffen in their seats. “I’m so happy that it turned out to be nonsense,” says the scientist and proceeds to give his talk. You see, it’s really hard to imagine what it’s like to live under a dictatorship unless you’ve experienced it yourself. Trust me, I’ve been there and I recognize the warning signs.


This is part 22 of Categories for Programmers. Previously: Monads and Effects. See the Table of Contents.

If you mention monads to a programmer, you’ll probably end up talking about effects. To a mathematician, monads are about algebras. We’ll talk about algebras later — they play an important role in programming — but first I’d like to give you a little intuition about their relation to monads. For now, it’s a bit of a hand-waving argument, but bear with me.

Algebra is about creating, manipulating, and evaluating expressions. Expressions are built using operators. Consider this simple expression:

x² + 2x + 1

This expression is formed using variables like x, and constants like 1 or 2, bound together with operators like plus or times. As programmers, we often think of expressions as trees.

[Diagram: expression tree]

Trees are containers so, more generally, an expression is a container for storing variables. In category theory, we represent containers as endofunctors. If we assign the type a to the variable x, our expression will have the type m a, where m is an endofunctor that builds expression trees. (Nontrivial branching expressions are usually created using recursively defined endofunctors.)

What’s the most common operation that can be performed on an expression? It’s substitution: replacing variables with expressions. For instance, in our example, we could replace x with y - 1 to get:

(y - 1)² + 2(y - 1) + 1

Here’s what happened: We took an expression of type m a and applied a transformation of type a -> m b (b represents the type of y). The result is an expression of type m b. Let me spell it out:

m a -> (a -> m b) -> m b

Yes, that’s the signature of monadic bind.
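Here’s a sketch of such an expression endofunctor in Haskell, with bind implementing substitution (the constructor names are made up for this example):

{-# LANGUAGE DeriveFunctor #-}

import Control.Monad (ap)

data Expr a = Var a
            | Lit Int
            | Add (Expr a) (Expr a)
            | Mul (Expr a) (Expr a)
  deriving (Show, Functor)

instance Applicative Expr where
  pure  = Var
  (<*>) = ap

-- Bind is substitution: every variable is replaced by an expression.
instance Monad Expr where
  Var x   >>= k = k x
  Lit n   >>= _ = Lit n
  Add l r >>= k = Add (l >>= k) (r >>= k)
  Mul l r >>= k = Mul (l >>= k) (r >>= k)

-- x² + 2x + 1, with a single variable "x":
expr :: Expr String
expr = Add (Add (Mul (Var "x") (Var "x"))
                (Mul (Lit 2) (Var "x")))
           (Lit 1)

-- Substituting x := y - 1 (written as Add (Var "y") (Lit (-1)),
-- since this sketch has no subtraction constructor):
substituted :: Expr String
substituted = expr >>= \_ -> Add (Var "y") (Lit (-1))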

That was a bit of motivation. Now let’s get to the math of the monad. Mathematicians use different notation than programmers. They prefer to use the letter T for the endofunctor, and Greek letters: μ for join and η for return. Both join and return are polymorphic functions, so we can guess that they correspond to natural transformations.

Therefore, in category theory, a monad is defined as an endofunctor T equipped with a pair of natural transformations μ and η.

μ is a natural transformation from the square of the functor T² back to T. The square is simply the functor composed with itself, T ∘ T (we can only do this kind of squaring for endofunctors).

μ :: T² -> T

The component of this natural transformation at an object a is the morphism:

μ_a :: T (T a) -> T a

which, in Hask, translates directly to our definition of join.

η is a natural transformation between the identity functor I and T:

η :: I -> T

Considering that the action of I on the object a is just a, the component of η is given by the morphism:

η_a :: a -> T a

which translates directly to our definition of return.

These natural transformations must satisfy some additional laws. One way of looking at it is that these laws let us define a Kleisli category for the endofunctor T. Remember that a Kleisli arrow between a and b is defined as a morphism a -> T b. The composition of two such arrows (I’ll write it as a circle with the subscript T) can be implemented using μ:

g ∘_T f = μ_c ∘ (T g) ∘ f

where

f :: a -> T b
g :: b -> T c

Here T, being a functor, can be applied to the morphism g. It might be easier to recognize this formula in Haskell notation:

f >=> g = join . fmap g . f

or, in components:

(f >=> g) a = join (fmap g (f a))

In terms of the algebraic interpretation, we are just composing two successive substitutions.

For Kleisli arrows to form a category we want their composition to be associative, and η_a to be the identity Kleisli arrow at a. This requirement can be translated to monadic laws for μ and η. But there is another way of deriving these laws that makes them look more like monoid laws. In fact μ is often called multiplication, and η unit.

Roughly speaking, the associativity law states that the two ways of reducing the cube of T, T³, down to T must give the same result. Two unit laws (left and right) state that when η is applied to T and then reduced by μ, we get back T.
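In Haskell, these laws are often written in terms of join and return. Here’s a sketch, specialized to the list monad so that the equations can be checked on sample values:

import Control.Monad (join)

-- Associativity: the two ways of flattening a triply-nested list agree.
assocLaw :: [[[Int]]] -> Bool
assocLaw x = (join . fmap join) x == (join . join) x

-- Left and right unit laws.
leftUnitLaw :: [Int] -> Bool
leftUnitLaw x = (join . return) x == x

rightUnitLaw :: [Int] -> Bool
rightUnitLaw x = (join . fmap return) x == x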

Things are a little tricky because we are composing natural transformations and functors. So a little refresher on horizontal composition is in order. For instance, T³ can be seen as a composition of T after T². We can apply to it the horizontal composition of two natural transformations:

I_T ∘ μ

[Diagram: horizontal composition I_T ∘ μ]

and get T∘T, which can be further reduced to T by applying μ. I_T is the identity natural transformation from T to T. You will often see the notation for this type of horizontal composition I_T ∘ μ shortened to T∘μ. This notation is unambiguous because it makes no sense to compose a functor with a natural transformation, therefore T must mean I_T in this context.

We can also draw the diagram in the (endo-) functor category [C, C]:

[Diagram: associativity in the endofunctor category]

Alternatively, we can treat T³ as the composition of T²∘T and apply μ∘T to it. The result is also T∘T which, again, can be reduced to T using μ. We require that the two paths produce the same result.

[Diagram: associativity law]

Similarly, we can apply the horizontal composition η∘T to the composition of the identity functor I after T to obtain T², which can then be reduced using μ. The result should be the same as if we applied the identity natural transformation directly to T. And, by analogy, the same should be true for T∘η.

[Diagram: unit laws]

You can convince yourself that these laws guarantee that the composition of Kleisli arrows indeed satisfies the laws of a category.

The similarities between a monad and a monoid are striking. We have multiplication μ, unit η, associativity, and unit laws. But our definition of a monoid is too narrow to describe a monad as a monoid. So let’s generalize the notion of a monoid.

Monoidal Categories

Let’s go back to the conventional definition of a monoid. It’s a set with a binary operation and a special element called unit. In Haskell, this can be expressed as a typeclass:

class Monoid m where
    mappend :: m -> m -> m
    mempty  :: m

The binary operation mappend must be associative and unital (i.e., multiplication by the unit mempty is a no-op).

Notice that, in Haskell, the definition of mappend is curried. It can be interpreted as mapping every element of m to a function:

mappend :: m -> (m -> m)

It’s this interpretation that gives rise to the definition of a monoid as a single-object category where endomorphisms (m -> m) represent the elements of the monoid. But because currying is built into Haskell, we could as well have started with a different definition of multiplication:

mu :: (m, m) -> m

Here, the cartesian product (m, m) becomes the source of pairs to be multiplied.

This definition suggests a different path to generalization: replacing the cartesian product with categorical product. We could start with a category where products are globally defined, pick an object m there, and define multiplication as a morphism:

μ :: m × m -> m

We have one problem though: In an arbitrary category we can’t peek inside an object, so how do we pick the unit element? There is a trick to it. Remember how element selection is equivalent to a function from the singleton set? In Haskell, we could replace the definition of mempty with a function:

eta :: () -> m

The singleton is the terminal object in Set, so it’s natural to generalize this definition to any category that has a terminal object t:

η :: t -> m

This lets us pick the unit “element” without having to talk about elements.

Unlike in our previous definition of a monoid as a single-object category, monoidal laws here are not automatically satisfied — we have to impose them. But in order to formulate them we have to establish the monoidal structure of the underlying categorical product itself. Let’s recall how monoidal structure works in Haskell first.

We start with associativity. In Haskell, the corresponding equational law is:

mu (x, mu (y, z)) = mu (mu (x, y), z)

Before we can generalize it to other categories, we have to rewrite it as an equality of functions (morphisms). We have to abstract it away from its action on individual variables — in other words, we have to use point-free notation. Knowing that the cartesian product is a bifunctor, we can write the left hand side as:

(mu . bimap id mu)(x, (y, z))

and the right hand side as:

(mu . bimap mu id)((x, y), z)

This is almost what we want. Unfortunately, the cartesian product is not strictly associative — (x, (y, z)) is not the same as ((x, y), z) — so we can’t just write point-free:

mu . bimap id mu = mu . bimap mu id

On the other hand, the two nestings of pairs are isomorphic. There is an invertible function called the associator that converts between them:

alpha :: ((a, b), c) -> (a, (b, c))
alpha ((x, y), z) = (x, (y, z))

With the help of the associator, we can write the point-free associativity law for mu:

mu . bimap id mu . alpha = mu . bimap mu id

We can apply a similar trick to unit laws which, in the new notation, take the form:

mu (eta (), x) = x
mu (x, eta ()) = x

They can be rewritten as:

(mu . bimap eta id) ((), x) = lambda ((), x)
(mu . bimap id eta) (x, ()) = rho (x, ())

The isomorphisms lambda and rho are called the left and right unitor, respectively. They witness the fact that the unit () is the identity of the cartesian product up to isomorphism:

lambda :: ((), a) -> a
lambda ((), x) = x
rho :: (a, ()) -> a
rho (x, ()) = x

The point-free versions of the unit laws are therefore:

mu . bimap eta id = lambda
mu . bimap id eta = rho

We have formulated point-free monoidal laws for mu and eta using the fact that the underlying cartesian product itself acts like a monoidal multiplication in the category of types. Keep in mind though that the associativity and unit laws for the cartesian product are valid only up to isomorphism.
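Here’s a small sketch that checks these point-free laws for one concrete monoid — strings under concatenation (repeating alpha, lambda, and rho from above; these are sample-based checks, not proofs):

import Data.Bifunctor (bimap)

mu :: (String, String) -> String
mu = uncurry (++)

eta :: () -> String
eta () = ""

alpha :: ((a, b), c) -> (a, (b, c))
alpha ((x, y), z) = (x, (y, z))

lambda :: ((), a) -> a
lambda ((), x) = x

rho :: (a, ()) -> a
rho (x, ()) = x

monAssoc :: ((String, String), String) -> Bool
monAssoc p = (mu . bimap id mu . alpha) p == (mu . bimap mu id) p

monLeftUnit :: ((), String) -> Bool
monLeftUnit p = (mu . bimap eta id) p == lambda p

monRightUnit :: (String, ()) -> Bool
monRightUnit p = (mu . bimap id eta) p == rho p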

It turns out that these laws can be generalized to any category with products and a terminal object. Categorical products are indeed associative up to isomorphism and the terminal object is the unit, also up to isomorphism. The associator and the two unitors are natural isomorphisms. The laws can be represented by commuting diagrams.

[Diagram: monoid laws as commuting diagrams]

Notice that, because the product is a bifunctor, it can lift a pair of morphisms — in Haskell this was done using bimap.

We could stop here and say that we can define a monoid on top of any category with categorical products and a terminal object. As long as we can pick an object m and two morphisms μ and η that satisfy monoidal laws, we have a monoid. But we can do better than that. We don’t need a full-blown categorical product to formulate the laws for μ and η. Recall that a product is defined through a universal construction that uses projections. We haven’t used any projections in our formulation of monoidal laws.

A bifunctor that behaves like a product without being a product is called a tensor product, often denoted by the infix operator ⊗. A definition of a tensor product in general is a bit tricky, but we won’t worry about it. We’ll just list its properties — the most important being associativity up to isomorphism.

Similarly, we don’t need the object t to be terminal. We never used its terminal property — namely, the existence of a unique morphism from any object to it. What we require is that it works well in concert with the tensor product. Which means that we want it to be the unit of the tensor product, again, up to isomorphism. Let’s put it all together:

A monoidal category is a category C equipped with a bifunctor called the tensor product:

⊗ :: C × C -> C

and a distinct object i called the unit object, together with three natural isomorphisms called, respectively, the associator and the left and right unitors:

α_{a b c} :: (a ⊗ b) ⊗ c -> a ⊗ (b ⊗ c)
λ_a :: i ⊗ a -> a
ρ_a :: a ⊗ i -> a

(There is also a coherence condition for simplifying a quadruple tensor product.)

What’s important is that a tensor product describes many familiar bifunctors. In particular, it works for a product, a coproduct and, as we’ll see shortly, for the composition of endofunctors (and also for some more esoteric products like Day convolution). Monoidal categories will play an essential role in the formulation of enriched categories.

Monoid in a Monoidal Category

We are now ready to define a monoid in a more general setting of a monoidal category. We start by picking an object m. Using the tensor product we can form powers of m. The square of m is m ⊗ m. There are two ways of forming the cube of m, but they are isomorphic through the associator. Similarly for higher powers of m (that’s where we need the coherence conditions). To form a monoid we need to pick two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

where i is the unit object for our tensor product.

[Diagram: monoid in a monoidal category]

These morphisms have to satisfy associativity and unit laws, which can be expressed in terms of the following commuting diagrams:

[Diagram: associativity law]

[Diagram: unit laws]

Notice that it’s essential that the tensor product be a bifunctor because we need to lift pairs of morphisms to form products such as μ ⊗ id or η ⊗ id. These diagrams are just a straightforward generalization of our previous results for categorical products.

Monads as Monoids

Monoidal structures pop up in unexpected places. One such place is the functor category. If you squint a little, you might be able to see functor composition as a form of multiplication. The problem is that not any two functors can be composed — the target category of one has to be the source category of the other. That’s just the usual rule of composition of morphisms — and, as we know, functors are indeed morphisms in the category Cat. But just like endomorphisms (morphisms that loop back to the same object) are always composable, so are endofunctors. For any given category C, endofunctors from C to C form the functor category [C, C]. Its objects are endofunctors, and morphisms are natural transformations between them. We can take any two objects from this category, say endofunctors F and G, and produce a third object F ∘ G — an endofunctor that’s their composition.

Is endofunctor composition a good candidate for a tensor product? First, we have to establish that it’s a bifunctor. Can it be used to lift a pair of morphisms — here, natural transformations? The signature of the analog of bimap for the tensor product would look something like this:

bimap :: (a -> b) -> (c -> d) -> (a ⊗ c -> b ⊗ d)

If you replace objects by endofunctors, arrows by natural transformations, and tensor products by composition, you get:

(F -> F') -> (G -> G') -> (F ∘ G -> F' ∘ G')

which you may recognize as the special case of horizontal composition.

[Diagram: horizontal composition]
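Here’s a sketch of this special case in Haskell, with Compose representing the composition of endofunctors (the name hcomp is made up for illustration):

{-# LANGUAGE RankNTypes #-}

import Data.Functor.Compose (Compose (..))

type Nat f g = forall a. f a -> g a

-- Horizontal composition: the bimap analogue for the composition tensor.
hcomp :: Functor f' => Nat f f' -> Nat g g' -> Nat (Compose f g) (Compose f' g')
hcomp ff' gg' (Compose fga) = Compose (fmap gg' (ff' fga))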

We also have at our disposal the identity endofunctor I, which can serve as the identity for endofunctor composition — our new tensor product. Moreover, functor composition is associative. In fact associativity and unit laws are strict — there’s no need for the associator or the two unitors. So endofunctors form a strict monoidal category with functor composition as tensor product.

What’s a monoid in this category? It’s an object — that is an endofunctor T; and two morphisms — that is natural transformations:

μ :: T ∘ T -> T
η :: I -> T

Not only that, here are the monoid laws:

[Diagram: associativity law]

[Diagram: unit laws]

They are exactly the monad laws we’ve seen before. Now you understand the famous quote from Saunders Mac Lane:

All told, a monad is just a monoid in the category of endofunctors.

You might have seen it emblazoned on some t-shirts at functional programming conferences.

Monads from Adjunctions

An adjunction, L ⊣ R, is a pair of functors going back and forth between two categories C and D. There are two ways of composing them giving rise to two endofunctors, R ∘ L and L ∘ R. As per an adjunction, these endofunctors are related to identity functors through two natural transformations called unit and counit:

η :: I_D -> R ∘ L
ε :: L ∘ R -> I_C

Immediately we see that the unit of an adjunction looks just like the unit of a monad. It turns out that the endofunctor R ∘ L is indeed a monad. All we need is to define the appropriate μ to go with the η. That’s a natural transformation between the square of our endofunctor and the endofunctor itself or, in terms of the adjoint functors:

R ∘ L ∘ R ∘ L -> R ∘ L

And, indeed, we can use the counit to collapse the L ∘ R in the middle. The exact formula for μ is given by the horizontal composition:

μ = R ∘ ε ∘ L

Monadic laws follow from the identities satisfied by the unit and counit of the adjunction and the interchange law.
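Here’s a sketch of this construction in Haskell, using a minimal, made-up adjunction class (real libraries define a richer one):

{-# LANGUAGE MultiParamTypeClasses #-}

class (Functor l, Functor r) => SimpleAdjunction l r where
  unit   :: a -> r (l a)
  counit :: l (r a) -> a

-- μ = R ∘ ε ∘ L: fmap the counit across the outer r to collapse the middle l ∘ r.
muRL :: SimpleAdjunction l r => r (l (r (l a))) -> r (l a)
muRL = fmap counit

-- η is just the unit of the adjunction.
etaRL :: SimpleAdjunction l r => a -> r (l a)
etaRL = unit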

We don’t see a lot of monads derived from adjunctions in Haskell, because an adjunction usually involves two categories. However, the definitions of an exponential, or a function object, is an exception. Here are the two endofunctors that form this adjunction:

L z = z × s
R b = s ⇒ b

You may recognize their composition as the familiar state monad:

R (L z) = s ⇒ (z × s)

We’ve seen this monad before in Haskell:

newtype State s a = State (s -> (a, s))

Let’s also translate the adjunction to Haskell. The left functor is the product functor:

newtype Prod s a = Prod (a, s)

and the right functor is the reader functor:

newtype Reader s a = Reader (s -> a)

They form the adjunction:

instance Adjunction (Prod s) (Reader s) where
  counit (Prod (Reader f, s)) = f s
  unit a = Reader (\s -> Prod (a, s))

You can easily convince yourself that the composition of the reader functor after the product functor is indeed equivalent to the state functor:

newtype State s a = State (s -> (a, s))

As expected, the unit of the adjunction is equivalent to the return function of the state monad. The counit acts by evaluating a function acting on its argument. This is recognizable as the uncurried version of the function runState:

runState :: State s a -> s -> (a, s)
runState (State f) s = f s

(uncurried, because in counit it acts on a pair).

We can now define join for the state monad as a component of the natural transformation μ. For that we need a horizontal composition of three natural transformations:

μ = R ∘ ε ∘ L

In other words, we need to sneak the counit ε across one level of the reader functor. We can’t just call fmap directly, because the compiler would pick the one for the State functor, rather than the Reader functor. But recall that fmap for the reader functor is just left function composition. So we’ll use function composition directly.

We have to first peel off the data constructor State to expose the function inside the State functor. This is done using runState:

ssa :: State s (State s a)
runState ssa :: s -> (State s a, s)

Then we left-compose it with the counit, which is defined by uncurry runState. Finally, we clothe it back in the State data constructor:

join :: State s (State s a) -> State s a
join ssa = State (uncurry runState . runState ssa)

This is indeed the implementation of join for the State monad.
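A quick sanity check (a sketch), using the State, runState, and join defined above, with made-up sample values:

inner :: State Int Int
inner = State (\s -> (s + 1, s * 2))

outer :: State Int (State Int Int)
outer = State (\s -> (inner, s + 10))

-- runState (join outer) 1
--   = runState inner 11      -- outer bumps the state from 1 to 11
--   = (12, 22)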

It turns out that not only every adjunction gives rise to a monad, but the converse is also true: every monad can be factorized into a composition of two adjoint functors. Such factorization is not unique though.

We’ll talk about the other endofunctor L ∘ R in the next section.

Next: Comonads.


In the previous post I explored the application of the Yoneda lemma in the functor category to derive some results from the Haskell lens library. In particular I derived the profunctor representation of isos. There is one more trick that is used in the lens library: combining the Yoneda lemma with adjunctions. Jaskelioff and O’Connor used this trick in the context of free/forgetful adjunctions, but it can be easily generalized to any pair of adjoint higher order functors.

Adjunctions

An adjunction between two functors, L and R (left and right functor) is a natural isomorphism between hom-sets:

C(L d, c) ≅ D(d, R c)

The left functor L goes from the category D to C, and the right functor R goes in the opposite direction. Formally, having an adjunction allows us to shift the action of the functor from one end of the hom-set to the other. The shortcut notation for an adjunction is L ⊣ R.

Since adjunctions can be defined for arbitrary categories, they will also work between functor categories. In that case objects are functors and hom-sets are sets of natural transformations. For instance, let’s consider an adjunction between two higher order functors:

ρ :: [C, C'] -> [D, D']
λ :: [D, D'] -> [C, C']

Here, [C, C'] is a category of functors between two categories C and C', [D, D'] is a category of functors between D and D', and ρ maps functors (and natural transformations) between these two categories. λ goes in the opposite direction. The adjunction λ ⊣ ρ is expressed as a natural isomorphism between sets of natural transformations:

[C, C'](λ g, h)  ≅  [D, D'](g, ρ h)

The two objects in functor categories are themselves functors:

h :: C -> C'
g :: D -> D'

Here’s the same adjunction written using ends:

∫x∈C C'((λ g) x, h x)  ≅  ∫y∈D D'(g y, (ρ h) y)

The end notation is easily translatable to Haskell. The end corresponds to a universal quantifier forall, and hom-sets become function types:

forall x. (lambda g) x -> h x ≅ forall y. g y -> (rho h) y

Since lambda and rho act on functors, they have kinds (*->*)->(*->*).

Yoneda with Adjunctions

Let’s recall the formula for the Yoneda embedding of the functor category:

∫f Set(∫x D(g x, f x), ∫y D(h y, f y))
  ≅ ∫z D(h z, g z)

Here, g, h, and f, are functors — objects in the functor category [C, D]. The ends represent natural transformations — morphisms in the functor category. The end over f is a higher order natural transformation.

Since g and h are arbitrary, let’s replace them with the results of the action of some higher order functors, λ g and λ' h. The idea is that λ and λ' are left halves of some higher order adjunctions.

∫f Set(∫x D'((λ g) x, f x), ∫y D'((λ' h) y, f y))
  ≅ ∫z D'((λ' h) z, (λ g) z)

The right halves of these adjunctions are, respectively, ρ and ρ'.

λ  ⊣ ρ
λ' ⊣ ρ'

Let’s apply these adjunctions inside the hom-sets:

∫f Set(∫x D(g x, (ρ f) x), ∫y D(h y, (ρ' f) y))
  ≅ ∫z D(h z, (ρ' (λ g)) z)

Let’s focus our attention on the category of sets. If we replace D with Set, we can pick g and h to be hom-functors (which are the simplest representable functors) parameterized by some arbitrary objects b and t:

g = C(b, -)
h = C(t, -)

We get:

∫f Set(∫x Set(C(b, x), (ρ f) x), ∫y Set(C(t, y), (ρ' f) y))
  ≅ ∫z Set(C(t, z), (ρ' (λ C(b, -))) z)

Remember, hom-functors behave like Dirac delta functions under the integration sign. That is to say, we can use the Yoneda lemma to “integrate” over x, y, and z:

∫f Set((ρ f) b, (ρ' f) t)
  ≅ (ρ' (λ C(b, -))) t

We are now free to pick a pair of adjoint higher order functors to suit our goal. Here’s one such choice for ρ: the functor that maps a functor f (an endofunctor in C) to a set of morphisms from some fixed object a to f acting on another object. This is an operation that lifts a functor to a profunctor. In Haskell it’s defined as UpStar. This higher-order functor is parameterized by the choice of the object a in C:

κ_a f = C(a, f -)

It can also be written in terms of the exponential object:

κ_a f = (f -)^a

This functor has an obvious left adjoint:

λ_a g = a × g -

This follows from the standard adjunction between the product and the exponential.

Our pick for ρ' is the same functor but taken at a different carrier, s:

ρ' = κ_s

With those choices, the left side of the identity

∫f Set((ρ f) b, (ρ' f) t)
  ≅ (ρ' (λ C(b, -))) t

becomes:

∫f Set(C(a, f b), C(s, f t))

This is the categorical version of the van Laarhoven lens.

Let’s now evaluate the right hand side. First we apply λ_a to the hom-functor C(b, -) to get:

λ_a C(b, -) = a × C(b, -)

The action of ρ' produces the result:

C(s, (a × C(b, t)))

This, in turn, is the categorical version of the getter/setter representation of the lens.

Translation

In Haskell, our formula derived from the higher-order Yoneda lemma with the adjoint pair:

∫f Set((ρ f) b, (ρ' f) t)
  ≅ (ρ' (λ C(b, -))) t

takes the form:

forall f. Functor f => (rho f) b -> (rho' f) t 
  ≅ (rho' (lambda ((->)b))) t

With our choice for ρ as the up-star functor:

rho  f = a -> f -
rho' f = s -> f -

or, in proper Haskell:

type Rho  a f b = a -> f b
type Rho' s f t = s -> f t

we get:

forall f. Functor f => (a -> f b) -> (s -> f t) 
  ≅ (rho' (lambda ((->)b))) t

To get the λ, we plug our ρ into the adjunction formula. We get:

forall x. (lambda g) x -> h x ≅ forall x. g x -> a -> h x

which has the obvious solution:

lambda g = (a, g -)

or, in proper Haskell,

type Lambda a g x = (a, g x)

Indeed, with the currying and flipping of arguments, we get the adjunction:

forall x. (a, g x) -> h x ≅ forall x. g x -> a -> h x
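Written out in Haskell, the two directions of this adjunction are nothing more than currying and flipping (a sketch; the names fwd and bwd are made up):

{-# LANGUAGE RankNTypes #-}

fwd :: (forall x. (a, g x) -> h x) -> (forall x. g x -> a -> h x)
fwd f gx a = f (a, gx)

bwd :: (forall x. g x -> a -> h x) -> (forall x. (a, g x) -> h x)
bwd f (a, gx) = f gx a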

Now let’s evaluate the right hand side:

(rho' (lambda ((->) b))) t

We start with:

lambda (b -> -) = (a, b -> -)

The action of rho' gives us:

rho' (a, b -> -) = s -> (a, b -> -)

Altogether:

(rho' (lambda ((->) b))) t = s -> (a, b -> t)

So the right hand side is just the getter/setter pair:

(s -> a, s -> b -> t)

The final result is the well known van Laarhoven representation of the lens:

forall f. Functor f => (a -> f b) -> (s -> f t) 
  ≅ (s -> a, s -> b -> t)
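The two directions of this isomorphism can be written out explicitly as a sketch, probing the functor-polymorphic side with the Const and Identity functors (the function names are made up for illustration):

{-# LANGUAGE RankNTypes #-}

import Data.Functor.Const    (Const (..))
import Data.Functor.Identity (Identity (..))

type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

-- From a getter/setter pair to a van Laarhoven lens:
toVL :: (s -> a) -> (s -> b -> t) -> Lens s t a b
toVL get set afb s = set s <$> afb (get s)

-- And back:
getter :: Lens s t a b -> s -> a
getter l = getConst . l Const

setter :: Lens s t a b -> s -> b -> t
setter l s b = runIdentity (l (const (Identity b)) s)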

This is not a new result, but I like the elegance of this derivation — especially the role played by the exponential adjunction and the lifting of a functor to a profunctor. This formulation has the additional advantage of being generalizable towards the profunctor formulation of lenses.


The connection between the Haskell lens library and category theory is a constant source of amazement to me. The most interesting part is that lenses are formulated in terms of higher order functions that are polymorphic in functors (or, more generally, profunctors). Consider, for instance, this definition:

type Lens s t a b = forall f. Functor f => (a -> f b) -> (s -> f t)

In Haskell, saying that a function is polymorphic in functors, which form a class parameterized by type constructors of the kind *->* (or *->*->*, in the case of profunctors) and supporting a special method called fmap (or dimap, respectively) is rather mind-boggling.

In category theory, on the other hand, functors are standard fare. You can form categories of functors. The properties of such categories are described by pretty much the same machinery as those of any other category.

In particular, one of the most important theorems of category theory, the Yoneda lemma, works in the category of functors out of the box. I have previously shown how to employ the Yoneda lemma to derive the representation for Haskell lenses (see my original blog post and, independently, this paper by Jaskelioff and O’Connor — or a more recent expanded post). Continuing with this program, I’m going to show how to use the Yoneda lemma with profunctors. But let’s start with the basics.

By the way, if you feel intimidated by mathematical notation, don’t worry, I have provided a translation to Haskell. However, math notation is often more succinct and almost always more general. I guess, the same ideas could be expressed using C++ templates, but it would look like an incomprehensible mess.

Functor Categories

Functors between any two given categories C and D can themselves be organized into a category, which is often called [C, D] or D^C. The objects in that category are functors, and the morphisms are natural transformations. Given two functors f and g, the hom-set between them can be either called

Nat(f, g)

or

[C, D](f, g)

depending on how much information you want to expose. (For simplicity, I’ll assume that the categories are small, so that the “sets” of natural transformations are sets indeed.)

What’s interesting is that, since functor categories are just categories, we can have functors going between them. We sometimes call them higher order functors. We can also have higher order functors going from a functor category to a regular category, in particular to the category of sets, Set. An example of such a functor is a hom-functor in a functor category. You construct this functor (also called a representable functor) when you fix one end of the hom-set and vary the other. In any category, the mapping:

x -> C(a, x)

is a functor from C to Set. We often use a shorthand notation for this functor:

C(a, -)

If we replace C by a functor category then, for a fixed functor g, the mapping:

f -> [C, D](g, f)

is a higher order functor. It maps f to a set of natural transformations — itself an object in Set.

Representable functors play an important role in the Yoneda lemma. Take the set of natural transformations from a representable functor in C to any functor f that goes from C to Set. This set is in one-to-one correspondence with the set of values of this functor at the object a:

[C, Set](C(a, -), f) ≅ f a

This correspondence is an isomorphism, which is natural both in a and f.

The set of natural transformations between two functors f and g can also be expressed as an end:

[C, D](f, g) = ∫x∈C D(f x, g x)

The end notation is sometimes more convenient because it makes the object x (the “integration variable”) explicit. The Yoneda lemma, in this notation, becomes:

∫x∈C Set(C(a, x), f x) ≅ f a

If you’re familiar with distributions, this formula will immediately resonate with you — it looks like the definition of the Dirac delta function:

∫ dx δ(a - x) f(x) ≅ f(a)

We can apply the Yoneda lemma to a functor category to get:

Nat([C, D](g, -), φ) ≅ φ g

or, in the end notation,

∫f Set(∫x D(g x, f x), φ f) ≅ φ g

Here, the “integration variable” f is itself a functor from C to D, and so is g; φ, however, is a higher order functor. It maps functors from [C, D] to sets from Set. The natural transformations in this formula are higher order natural transformations between higher order functors.

Furthermore, if we substitute for φ another instance of the representable functor, [C, D](h, -), we get the formula for the higher order Yoneda embedding:

Nat([C, D](g, -), [C, D](h, -)) ≅ [C, D](h, g)

which reduces higher order natural transformations to lower order natural transformations. Notice the inversion of g and h on the right hand side.

Using the end notation, this becomes:

∫f Set(∫x D(g x, f x), ∫y D(h y, f y))
  ≅ ∫z D(h z, g z)

We can further specialize this formula by replacing D with Set. We can then choose both functors to be hom-functors (for some fixed a and b):

g = C(a, -)
h = C(b, -)

We get:

∫f Set(∫x Set(C(a, x), f x), ∫y Set(C(b, y), f y))
  ≅ ∫z Set(C(b, z), C(a, z))

This can be simplified by applying the Yoneda lemma to the internal ends (“integrating” over x, y, and z) to get:

∫f Set(f a, f b) ≅ C(a, b)

This simple formula has some interesting possibilities that I will explore later.

Translation

All this might be easier to digest for programmers when translated to Haskell. Natural transformations are polymorphic functions:

forall x. f x -> g x

Here, f and g are arbitrary Haskell Functors. It’s a straightforward translation of the end formula:

∫x∈Set Set(f x, g x)

where the end is replaced by the universal quantifier, and the hom-set in Set by a function type. I have deliberately used Set rather than Hask as the category of Haskell types, because I’m not going to pretend that I care about non-termination.

A higher order functor of the kind we are interested in is a mapping from functors to types, which could be defined as follows:

class HFunctor (phi :: (* -> *) -> *) where
  hfmap :: (forall a. f a -> g a) -> (phi f -> phi g)

The higher order hom-functor is defined as:

newtype HHom f g = HHom (forall a. f a -> g a)

Indeed, it’s easy to define hfmap for it:

instance HFunctor (HHom f) where
  hfmap nat (HHom nat') = HHom (nat . nat')

The types give it away:

nat        :: forall a. g a -> h a
nat'       :: forall a. f a -> g a
nat . nat' :: forall a. f a -> h a
result     :: HHom f h

Higher order natural transformations between such functors will have the signature:

type HNat (phi :: (* -> *) -> *) (psi :: (* -> *) -> *) = 
  forall f. Functor f => phi f -> psi f

The standard Yoneda lemma establishes the isomorphism between f a and the following higher order polymorphic function:

forall x. (a -> x) -> f x  ≅  f a
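Both directions of this isomorphism are easy to write down (a sketch):

{-# LANGUAGE RankNTypes #-}

toYoneda :: Functor f => f a -> (forall x. (a -> x) -> f x)
toYoneda fa = \g -> fmap g fa

fromYoneda :: (forall x. (a -> x) -> f x) -> f a
fromYoneda y = y id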

The Yoneda lemma for higher order functors is the equivalence between φ g and:

forall f. Functor f => forall x. (g x -> f x) -> φ f  ≅  φ g

Compare this again with:

∫f Set(∫x Set(g x, f x), φ f) ≅ φ g

The higher order Yoneda embedding takes the form of the equivalence between:

forall f. Functor f => forall x. (g x -> f x) -> forall y. (h y -> f y)

and

forall z. h z -> g z

The earlier result of the double application of the Yoneda lemma:

∫f Set(f a, f b) ≅ C(a, b)

translates to:

forall f. Functor f => f a -> f b ≅ a -> b

One direction of this equivalence simply reiterates the definition of a functor: a function a->b can be lifted to any functor. The other direction is a little more interesting. Given two types, a and b, if there is a function from f a to f b for any functor f, then there is a direct function from a to b. In Set, where there are functions between any two types, with the exception of a->Void, this is not a big surprise.
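Here’s a sketch of both directions: one is just fmap, and the other probes the polymorphic function with the Identity functor:

{-# LANGUAGE RankNTypes #-}

import Data.Functor.Identity (Identity (..))

fromArrow :: Functor f => (a -> b) -> f a -> f b
fromArrow = fmap

toArrow :: (forall f. Functor f => f a -> f b) -> a -> b
toArrow h = runIdentity . h . Identity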

But there are other categories embedded in Set, and the same categorical formula will lead to more interesting translations. In particular, think of categories where the hom-set is not equivalent to a simple function type with trivial composition. A good example is the basic formulation of lens as the getter/setter pair, or a function of type:

type Lens s t a b = s -> (a, b -> t)

Such functions don’t compose naturally, but their functor-polymorphic representations do.

Profunctors

You’ve seen the reusability of categorical constructs in action. We can have functors operate on functors, and natural transformations that work between higher order functors. The same Yoneda lemma works as well in the category of types and functions, as in the category of functors and natural transformations. From that perspective, a profunctor is just a special case of a functor. Given two categories C and D, a profunctor is a functor:

C^op × D -> Set

It’s a map from a product category to Set. Because the first component of the product is the opposite category (all morphisms reversed), this functor is contravariant in the first argument.

Let’s translate this definition to Haskell. We substitute all three categories with the same category of types and functions, which is essentially Set (remember, we ignore the bottom values). So a profunctor is a functor from Set^op×Set to Set. It’s a mapping of types — a two-argument type constructor p — and a mapping of morphisms. A morphism in Set^op×Set is a pair of functions going between pairs (a, b) and (s, t). Because of contravariance, the first function goes in the opposite direction:

(s -> a, b -> t)

A profunctor lifts this pair to a single function:

p a b -> p s t

The lifting is done by the function dimap, which is usually written in the curried form:

class Profunctor p where
    dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

All said and done, a profunctor is still a functor, so we can reuse all the machinery of functor calculus, including all versions of the Yoneda lemma.

Let’s start with the Yoneda lemma for the category C^op×D. Straightforward substitution leads to:

[C^op×D, Set]((C^op×D)(<c, d>, -), p) ≅ p <c, d>

or, in the end notation:

∫<x, y>∈C^op×D Set((C^op×D)(<c, d>, <x, y>), p <x, y>) ≅ p <c, d>

Here, p is the profunctor operating on pairs of objects, such as <c, d>. A hom-set in the product category C^op×D goes between two such pairs:

(C^op×D)(<c, d>, <x, y>)

Here’s the straightforward translation to Haskell:

forall x y. (x -> c) -> (d -> y) -> p x y ≅ p c d

Notice the customary currying and the reversal of source with target in the first function argument due to contravariance.
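Using the Profunctor class defined above, both directions of this isomorphism can be sketched as:

{-# LANGUAGE RankNTypes #-}

toPYo :: Profunctor p => p c d -> (forall x y. (x -> c) -> (d -> y) -> p x y)
toPYo pcd = \xc dy -> dimap xc dy pcd

fromPYo :: (forall x y. (x -> c) -> (d -> y) -> p x y) -> p c d
fromPYo f = f id id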

Since profunctors are just functors, they form a functor category:

[C^op×D, Set]

(not to be confused with Prof, the profunctor category, where profunctors serve as morphisms rather than objects).

We can easily rewrite the higher-order Yoneda lemma replacing functors with profunctors:

∫p Set(∫<x, y> Set(q <x, y>, p <x, y>), π p) ≅ π q

And this is what it looks like in Haskell:

forall p. Profunctor p => (forall x y. q x y -> p x y) -> pi p ≅ pi q

Here, π is a higher order functor acting on profunctors, with values in Set. In Haskell it’s defined by a type class:

class HFunProf (pi :: (* -> * -> *) -> *) where
  fhpmap :: (forall a b. p a b -> q a b) -> (pi p -> pi q)

Natural transformations between such functors have the type:

type HNatProf (pi :: (* -> * -> *) -> *) (rho :: (* -> * -> *) -> *) =
  forall p. Profunctor p => pi p -> rho p

Notice that we are now defining functions that are polymorphic in profunctors. This is getting us closer to the profunctor formulation of the lens library, in particular to prisms and isos.

Understanding Isos

An iso is a perfect example of a data structure straddling the gap between lenses and prisms. Its first order definition is simple:

type Iso s t a b = (s -> a, b -> t)

The name derives from isomorphism, which is a special case of an iso (I think a cuter name for an iso would be Mirror). The crucial observation is that this is nothing but the type corresponding to a hom-set in the product category Setop×Set:

(Set^op×Set)(<a, b>, <s, t>)

We know how to compose such morphisms:

compIso :: Iso s t a b -> Iso a b u v -> Iso s t u v
(f1, g1) `compIso` (f2, g2) = (f2 . f1, g1 . g2)

but it’s not as straightforward as function composition. Fortunately, there is a higher order representation of isos, which composes using simple function composition. The trick is to make it profunctor-polymorphic:

type Iso s t a b = forall p. Profunctor p => p a b -> p s t

Why are the two definitions isomorphic? There is a standard argument based on parametricity, which I will skip, because there is a better explanation.

Recall the earlier result of applying the Yoneda lemma to the functor category:

forall f. Functor f => f a -> f b ≅ a -> b

The similarity is striking, isn’t it? That’s because the categorical formula for both identities is the same:

∫f Set(f a, f b) ≅ C(a, b)

All we need is to replace C with C^op×D and rewrite it in terms of pairs of objects:

∫p Set(p <a, b>, p <s, t>) ≅ (C^op×D)(<a, b>, <s, t>)

But that’s exactly what we need:

forall p. Profunctor p => p a b -> p s t  ≅ (s -> a, b -> t)
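Here’s a sketch of the explicit witnesses of this isomorphism, using the Profunctor class defined above (a similar Exchange trick is used inside the lens library):

{-# LANGUAGE RankNTypes #-}

toIsoP :: (s -> a, b -> t) -> (forall p. Profunctor p => p a b -> p s t)
toIsoP (sa, bt) = dimap sa bt

-- A probing profunctor that just records a pair of functions.
newtype Exchange a b s t = Exchange (s -> a, b -> t)

instance Profunctor (Exchange a b) where
  dimap f g (Exchange (sa, bt)) = Exchange (sa . f, g . bt)

fromIsoP :: (forall p. Profunctor p => p a b -> p s t) -> (s -> a, b -> t)
fromIsoP iso = case iso (Exchange (id, id)) of Exchange p -> p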

The immediate advantage of the profunctor-polymorphic representation is that you can compose two isos using straightforward function composition. Instead of using compIso, we can use the dot:

p :: Iso s t a b
q :: Iso a b u v
r :: Iso s t u v
r = p . q

Of course, the full power of lenses is in the ability to compose (and type-check) combinations of different elements of the library.

Note: The definition of Iso in the lens library involves a functor f:

type Iso s t a b = forall p f. (Profunctor p, Functor f) => 
    p a (f b) -> p s (f t)

This functor can be absorbed into the definition of the profunctor p without any loss of generality.

Next: Combining adjunctions with the Yoneda lemma.

Acknowledgments

I’m grateful to Gershom Bazerman and Gabor Greif for useful comments and to André van Meulebrouck for checking the grammar and spelling.


In the previous blog post we talked about relations. I gave an example of a thin category as a kind of relation that’s compatible with categorical structure. In a thin category, the hom-set is either an empty set or a singleton set. It so happens that these two sets form a sub-category of Set. It’s a very interesting category. It consists of the two objects — let’s give them new names o and i. Besides the mandatory identity morphisms, we also have a single morphism going from o to i, corresponding to the function we call absurd in Haskell:

absurd :: Void -> a
absurd v = case v of {}

This tiny category is sometimes called the interval category. I’ll call it o->i.

[Diagram: the interval category]

The object o is initial, and the object i is terminal — just as the empty set and the singleton set were in Set. Moreover, the cartesian product from Set can be used to define a tensor product in o->i. We’ll use this tensor product to build a monoidal category.

Monoidal Categories

A tensor product is a bifunctor ⊗ with some additional properties. Here, in the interval category, we’ll define it through the following multiplication table:

o ⊗ o = o
o ⊗ i = o
i ⊗ o = o
i ⊗ i = i

Its action on pairs of morphisms (what we call bimap in Haskell) is also easy to define. For instance, what’s the action of ⊗ on the pair <absurd, id_i>? This pair takes the pair <o, i> to <i, i>. Under the bifunctor ⊗, the first pair produces o, and the second i. There is only one morphism from o to i, so we have:

absurd ⊗ id_i = absurd

If we designate the (terminal) object i as the unit of the tensor product, we get a (symmetric) monoidal category. A monoidal category is a category with a tensor product that’s associative and unital (usually, up to isomorphism — but here, strictly).

Now imagine that we replace hom-sets in our original thin category with objects from the monoidal category o->i (we’ll call them hom-objects). After all, we were only using two sets from Set. We can replace the empty hom-set with the object o, and the singleton hom-set with the object i. We get what’s called an enriched category (although, in this case, it’s more of an impoverished category).


An example of a thin category (a total order with objects 1, 2, and 3) with hom-sets replaced by hom-objects from the interval category. Think of i as corresponding to less-than-or-equal, and o as greater.

Enriched Categories

An enriched category has hom-objects instead of hom-sets. These are objects from some monoidal category V called the base category. The base category has to be monoidal because we want to define something that would replace the usual composition of morphisms. Morphisms are elements of hom-sets. However, hom-objects, in general, have no elements. We don’t know what an element of o or i is.

So to fully define an enriched category we have to come up with a sensible substitute for composition. To do that, we need to rethink composition — first in terms of hom-sets, then in terms of hom-objects.

We can think of composition as a function from a cartesian product of two hom-sets to a third hom-set:

compose_{a b c} :: C(b, c) × C(a, b) -> C(a, c)

Generalizing it, we can replace hom-sets with hom-objects (here, either o or i), the cartesian product with the tensor product, and a function with a morphism (notice: it’s a morphism in our monoidal category o->i). These composition-defining morphisms form a “composition table” for hom-objects.

As an example, take the composition of two i’s. Their product i ⊗ i is i again, and there is only one morphism out of i, the identity morphism. In terms of the original hom-sets it would mean that the composition of two morphisms always exists. In general, we have to impose this condition when we’re defining a category, enriched or not — here it just happens automatically.

For instance (see illustration), compose_{0 1 2} = id_i. The source of this morphism is:

C(1, 2) ⊗ C(0, 1) = i ⊗ i = i

and its target is:

C(0, 2) = i

so compose_{0 1 2} must be the unique morphism from i to i — the identity.

In every category we must also have identity morphisms. These are special elements in the hom-sets of the form C(a, a). We have to find a way to define their equivalent in the enriched setting. We’ll use the standard trick of defining generalized elements. It’s based on the observation that selecting an element from a set s is the same as selecting a morphism that goes from the singleton set (the terminal object in Set) to s. In a monoidal category, we replace the terminal object with the monoidal unit.

So, instead of picking an identity morphism in C(a, a), we use a morphism from the monoidal unit i:

ja :: i -> C(a, a)
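
In Haskell terms, with the unit object played by (), a generalized element of s is a function () -> s, and ja simply picks out the identity. A small sketch (the names are mine):

-- selecting an element of s is the same as a morphism from the unit object
pick :: s -> (() -> s)
pick x = \() -> x

-- j_a selects the identity from the hom-object C(a, a), here the type a -> a
j :: () -> (a -> a)
j () = id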

Again, in the case of a thin category, there is only one morphism leaving i, and that’s the identity morphism. That’s why we are automatically guaranteed that, in a thin category, all hom-objects of the form C(a, a) are equal to i.

Composition in a category must also satisfy associativity and identity conditions. Associativity in the enriched setting translates straightforwardly to a commuting diagram, but identity is a little trickier. We have to use ja to “select” the identity from the hom-object C(a, a) while composing it with some other hom-object C(b, a). We start with the product:

i ⊗ C(b, a)

Because i is the monoidal unit, this is equal to C(b, a). On the other hand, we can tensor together two morphisms in o->i — remember, a tensor product is a bifunctor, so it also acts on morphisms. Here we’ll tensor ja and the identity at C(b, a):

ja ⊗ idC(b, a)

We act with this product on the product object i ⊗ C(b, a) to get C(a, a) ⊗ C(b, a). Then we use composition to get:

C(a, a) ⊗ C(b, a) -> C(b, a)

These two ways of getting to C(b, a) must coincide, leading to the identity condition for enriched categories.

Impoverished 5

Now that we’ve seen how the enrichment works for thin categories, we can apply the same mechanism to define categories enriched over any monoidal category V.

The important part is that V defines a (bifunctor) tensor product ⊗ and a unit object i. Associativity and unitality may be either strict or up to isomorphism (notice that a regular cartesian product is associative only up to isomorphism — (a, (b, c)) is not equal to ((a, b), c)).

Instead of sets of morphisms, an enriched category has hom-objects that are objects in V. We use the same notation as for hom-sets: C(a, b) is the hom-object that connects object a to object b. Composition is replaced by morphisms in V:

composea b c :: C(b, c) ⊗ C(a, b) -> C(a, c)

Instead of identity morphisms, we have the morphisms in V:

ja :: i -> C(a, a)

Finally, associativity and unitality of composition are imposed in the form of a few commuting diagrams.
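
As a rough transcription, here is what this data could look like in Haskell when the base category V is Haskell itself, with the tensor product played by the pair and the unit by (). The class and method names are mine, not a standard library:

-- hom a b stands for the hom-object C(a, b)
class EnrichedCategory hom where
  comp :: (hom b c, hom a b) -> hom a c  -- compose_(a b c)
  unit :: () -> hom a a                  -- j_a

-- Haskell enriched over itself: hom-objects are function types
instance EnrichedCategory (->) where
  comp (g, f) = g . f
  unit () = id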

Impoverished Yoneda

The Yoneda Lemma talks about functors from an arbitrary category to Set. To generalize the Yoneda lemma to enriched categories we first have to generalize functors. Their action on objects is not a problem; it’s the action on morphisms that needs our attention.

Enriched Functors

Since in an enriched category we no longer have access to individual morphisms, we have to define the action of functors on hom-objects wholesale. This is only possible if the hom-objects in the target category come from the same base category V as the hom-objects in the source category. In other words, both categories must be enriched over the same monoidal category. We can then use regular morphisms in V to map hom-objects.

Between any two objects a and b in C we have the hom-object C(a, b). The two objects are mapped by the functor f to f a and f b, and there is a hom-object between them, D(f a, f b). The action of f on C(a, b) is defined as a morphism in V:

C(a, b) -> D(f a, f b)
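
When both categories are Haskell enriched over itself, this morphism between hom-objects is exactly the type of fmap. A sketch:

-- C(a, b) -> D(f a, f b), with hom-objects being function types
actOnHom :: Functor f => (a -> b) -> (f a -> f b)
actOnHom = fmap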

Impoverished 6

Let’s see what this means in our impoverished thin category. First of all, a functor will always map related objects to related objects. That’s because there is no morphism from i to o. A bond between two objects cannot be broken by an impoverished functor.

If the relation is a partial order, for instance less-than-or-equal, then it follows that a functor between posets preserves the ordering — it’s monotone.

A functor must also preserve composition and identity. The former can be easily expressed as a commuting diagram. Identity preservation in the enriched setting involves the use of ja. Starting from i we can use ja to get to C(a, a), which the functor maps to D(f a, f a). Or we can use jf a to get there directly. We insist that both paths be the same.

Impoverished 7

In our impoverished category, this just works because ja is the identity morphism and all C(a, a)s and D(a, a)s are equal to i.

Back to Yoneda: You might remember that we start the Yoneda construction by fixing one object a in C, and then varying another object x to define the functor:

x -> C(a, x)

This functor maps C to Set, because xs are objects in C, and hom-sets are sets — objects of Set.

In the enriched environment, the same construction results in a mapping from C to V, because hom-objects are objects of the base category V.

But is this mapping a functor? This is far from obvious, considering that C is an enriched category, and we have just said that enriched functors can only go between categories that are enriched over the same base category. The target of our functor, the category V, is not enriched. It turns out that, as long as V is closed, we can turn it into an enriched category.

Self Enrichment

Let’s first see how we would enrich our tiny category o->i. First of all, let’s check if it’s closed. Closedness means that hom-sets can be objectified — for every hom-set there is an object called the exponential object that objectifies it. The exponential object in a (symmetric) monoidal category is defined through the adjunction:

V(a ⊗ b, c) ≅ V(b, c^a)

This is the standard adjunction for defining exponentials, except that we are using the tensor product instead of the regular product. The hom-sets are sets of morphisms between objects in V (here, in o->i).
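
In Haskell, with the tensor product played by the pair and the exponential c^a played by the function type a -> c, this adjunction is witnessed by (a flipped variant of) curry and uncurry. A sketch:

-- V(a ⊗ b, c) ≅ V(b, c^a), specialized to Haskell
toExp :: ((a, b) -> c) -> (b -> (a -> c))
toExp f = \b a -> f (a, b)

fromExp :: (b -> (a -> c)) -> ((a, b) -> c)
fromExp g = \(a, b) -> g b a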

Let’s check, for instance, if there’s an object that corresponds to the hom-set V(o, i), which we would call i^o. We have:

V(o ⊗ b, i) ≅ V(b, i^o)

Whatever b we choose, when multiplied by o it will yield o, so the left-hand side is V(o, i), a singleton set. Therefore V(b, i^o) must be a singleton set too, for any choice of b. In particular, if b is i, we see that the only choice for i^o is:

i^o = i

You can check that all exponentiation rules in o->i can be obtained from simple algebra by replacing o with zero and i with one.

Every closed symmetric monoidal category can be enriched in itself by replacing hom-sets with the corresponding exponentials. For instance, in our case, we end up replacing all empty hom-sets in the category o->i with o, and all singleton hom-sets with i. You can easily convince yourself that it works, and the result is the category o->i enriched in itself.

Impoverished 8

We can now take a category C that’s enriched over a closed symmetric monoidal category V, and show that the mapping:

x -> C(a, x)

is indeed an enriched functor. It maps objects of C to objects of V and hom-objects of C to hom-objects (exponentials) of V.

Impoverished 10

An example of a functor from a total order enriched over the interval category to the interval category. This particular functor is equal to the hom-functor x -> C(a, x) for a equal to 3.

Let’s see what this functor looks like in a poset. Given some a, the hom-object C(a, x) is equal to i if a <= x. So an x is mapped to i if it’s greater-or-equal to a, otherwise it’s mapped to o. If you think of the objects mapped to o as colored black and the ones mapped to i as colored red, you’ll see the object a and the whole graph below it must be painted red.

Enriched Natural Transformations

Now that we know what enriched functors are, we have to define natural transformations between them. This is a bit tricky, since a regular natural transformation is defined as a family of morphisms. But again, instead of picking individual morphisms from hom-sets we can work with the closest equivalent: generalized elements — morphisms going from the unit object i to hom-objects. So an enriched natural transformation between two enriched functors f and g is defined as a family of morphisms in V:

αa :: i -> D(f a, g a)

Natural transformations are very limited in our impoverished category. Let’s see what morphisms from i are at our disposal. We have one morphism from i to i: the identity morphism idi. This makes sense — we think of i as having a single element. There is no morphism from i back to o; and that makes sense too — we think of o as having no elements. The only possible generalized components of an impoverished natural transformation between two functors f and g correspond to D(f a, g a) equal to i; which means that, for every a, f a must be less-than-or-equal to g a. A natural transformation can only push a functor uphill.

When the target category is o->i, as in the impoverished Yoneda lemma, a natural transformation may never connect red to black. So once the first functor switches to red, the other must follow.

Naturality Condition

There is, of course, a naturality condition that goes with this definition of a natural transformation. The essence of it is that it shouldn’t matter if we first apply a functor and then the natural transformation α, or the other way around. In the enriched context, there are two ways of getting from C(a, b) to D(f a, g b). One is to multiply C(a, b) by i on the right:

C(a, b) ⊗ i

apply the product g ⊗ αa to get:

D(g a, g b) ⊗ D(f a, g a)

and then apply composition to get:

D(f a, g b)

The other way is to multiply C(a, b) by i on the left:

i ⊗ C(a, b)

apply αb ⊗ f to get:

D(f b, g b) ⊗ D(f a, f b)

and compose the two to get:

D(f a, g b)

The naturality condition requires that this diagram commute.

Impoverished 11

Enriched Yoneda

The enriched version of the Yoneda lemma talks about enriched natural transformations from the functor x -> C(a, x) to any enriched functor f that goes from C to V.

Consider for a moment a functor from a poset to our tiny category o->i (which, by the way, is also a poset). It will map some objects to o (black) and others to i (red). As we’ve seen, a functor must preserve the less-than-or-equal relation, so once we get into the red territory, there is no going back to black. And a natural transformation may only repaint black to red, not the other way around.

Now we would like to say that natural transformations from x -> C(a, x) to f are in one-to-one correspondence with the elements of f a, except that f a is not a set, so it doesn’t have elements. It’s an object in V. So instead of talking about elements of f a, we’ll talk about generalized elements — morphisms from the unit object i to f a. And that’s how the enriched Yoneda lemma is formulated — as a natural bijection between the set of natural transformations and a set of morphisms from the unit object to f a.

Nat(C(a, -), f) ≅ V(i, f a)
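
For comparison, in ordinary Haskell (enrichment over the category of types, with i = ()), this specializes to the familiar Yoneda lemma: natural transformations from the hom-functor (a ->) to f correspond to elements of f a. A standard sketch (requires RankNTypes):

{-# LANGUAGE RankNTypes #-}

toYoneda :: Functor f => f a -> (forall x. (a -> x) -> f x)
toYoneda fa = \k -> fmap k fa

fromYoneda :: (forall x. (a -> x) -> f x) -> f a
fromYoneda nat = nat id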

In our running example, there are only two possible values for f a.

  1. If the value is o then there is no morphism from i to it. The Yoneda lemma tells us that there is no natural transformation in that case. That makes sense, because the value of the functor x -> C(a, x) at x=a is i, and there is no morphism from i to o.
  2. If the value is i then there is exactly one morphism from i to it — the identity. The Yoneda lemma tells us that there is just one natural transformation in that case. It’s the natural transformation whose generalized component at any object x is i->i.

Strong Enriched Yoneda

There is something unsatisfactory in the fact that the enriched Yoneda lemma ends up using a mapping between sets. First we try to get away from sets as far as possible, then we go back to sets of morphisms. It feels like cheating. Not to worry! There is a stronger version of the Yoneda lemma that deals with this problem. What we need is to replace the set of natural transformations with an object in V that would represent them — just like we replaced the set of morphisms with the exponential object. Such an object is defined as an end:

∫x V(f x, g x)

The strong version of the Yoneda lemma establishes the natural isomorphism:

∫x V(C(a, x), f x) ≅ f a
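
In Haskell, where V is the category of types, such an end is represented by a polymorphic function type. A brief sketch (the synonym name is mine):

{-# LANGUAGE RankNTypes #-}

-- the object of natural transformations: the end over x of Hask(f x, g x)
type NatObj f g = forall x. f x -> g x

-- strong Yoneda, specialized: NatObj ((->) a) f is isomorphic to f a,
-- witnessed by the same toYoneda and fromYoneda as above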

Enriched Profunctors

We’ve seen that a profunctor is a functor from a product category Cop × D to Set. The enriched version of a profunctor requires the notion of a product of enriched categories. We would like the product of enriched categories to also be an enriched category. In fact, we would like it to be enriched over the same base category V as the component categories.

We’ll define objects in such a category as pairs of objects from the component categories, but the hom-objects will be defined as tensor products of the component hom-objects. In the enriched product category, the hom-object between two pairs, <c, d> and <c', d'> is:

(Cop ⊗ D)(<c, d>, <c', d'>) = C(c, c') ⊗ D(d, d')

You can convince yourself that composition of such hom-objects requires the tensor product to be symmetric (at least up to isomorphism). That’s because you have to be able to rearrange the hom-objects in a tensor product of tensor products.

An enriched profunctor is defined as an enriched functor from the tensor product of two categories to the (self-enriched) base category:

Cop ⊗ D -> V

Just like regular profunctors, enriched profunctors can be composed using the coend formula. The only difference is that the cartesian product is replaced by the tensor product in V. They form a bicategory called V-Prof.

Enriched profunctors are the basis of the definition of Tambara modules, which are relevant in the application to Haskell lenses.

Conclusion

One of the reasons for using category theory is to get away from set theory. In general, objects in a category don’t have to form sets. The morphisms, however, are elements of sets — the hom-sets. Enriched categories go a step further and replace even those sets with categorical objects. However, it’s not categories all the way down — the base category that’s used for enrichment is still a regular old category with hom-sets.

Acknowledgments

I’m grateful to Gershom Bazerman for useful comments and to André van Meulebrouck for checking the grammar and spelling.


A profunctor is a categorical construct that takes relations to a new level. It is an embodiment of a proof-relevant relation.

We don’t talk enough about relations. We talk about domesticated relations — functions; or captive ones — equalities; but we don’t talk enough about relations in the wild. And, as is often the case in category theory, a less constrained construct may have more interesting properties and may lead to better insights.

Relations

A relation between two sets is defined as a set of pairs. The first element of each pair is from the first set, and the second from the second. In other words, it’s a subset of the cartesian product of two sets.

This definition may be extended to categories. If C and D are small categories, we can define a relation between objects as a set of pairs of objects. In general, such pairs are themselves objects in the product category C×D. We could define a relation between categories as a subcategory of C×D. This works as long as we ignore morphisms or, equivalently, work with discrete categories.

There is another way of defining relations using a characteristic function. You can define a function on the cartesian product of two sets — a function that assigns zero (or false) to those pairs that are not in a relation, and one (or true) to those which are.
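
For sets, both presentations are easy to write down in Haskell, with Bool standing in for the two-element set of truth values. A small sketch with names of my own choosing:

import qualified Data.Set as Set

-- a relation as a set of pairs
type RelPairs a b = Set.Set (a, b)

-- the same relation as a characteristic function
type RelChar a b = a -> b -> Bool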

Extending this to categories, we would use a functor rather than a function. We could, for instance, define a relation as a functor from C×D to Set — a functor that maps pairs of objects to either an empty set or a singleton set. The (somewhat arbitrary) choice of Set as the target category will make sense later, when we make the connection between relations and hom-sets.

But a functor is not just a mapping of objects — it must map morphisms as well. Here, since we are dealing with a product category, our characteristic functor must map pairs of morphisms to functions between sets. We only worry about the empty set and the singleton set, so there aren’t that many functions to choose from.

The next question is: Should we map the two morphisms in a pair covariantly, or maybe map one of them contravariantly? To see which possibility makes more sense, let’s consider the case when D is the same as C. In other words, let’s look at relations between objects of the same category. There are actually categories that are based on relations, for instance preorders. In a preorder, two objects are in a relation if there is a morphism between them; and there can be at most one morphism between any two objects. A hom-set in a preorder can only be an empty set or a singleton set. Empty set — no relation. Singleton set — the objects are related.

But that’s exactly how we defined a relation in the first place — a mapping from a pair of objects to Set (how’s that for foresight). In a preorder setting, this mapping is nothing but the hom-functor itself. And we know that hom-functors are contravariant in the first argument and covariant in the second:

C(-,=) :: Cop×C -> Set

That’s an argument in favor of choosing mixed covariance for the characteristic functor defining a relation.

A preorder is also called a thin category — a category where there’s at most one morphism per hom-set. Therefore a hom-functor in any thin category defines a relation.

Let’s dig a little deeper into why contravariance in the first argument makes sense in defining a relation. Suppose two objects a and b are related, i.e., the characteristic functor R maps the pair <a, b> to a singleton set. In the hom-set interpretation, where R is the hom-functor C(-, =), it means that there is a single morphism r:

r :: a -> b

Now let’s pick a morphism in Cop×C that goes from <a, b> to some <s, t>. A morphism in Cop×C is a pair of morphisms in C:

f :: s -> a
g :: b -> t

Impoverished 1

The composition of morphisms g ∘ r ∘ f is a morphism from s to t. That means the hom-set C(s, t) is not empty — therefore s and t are related.

Impoverished 2

And they should be related. That’s because the functor R acting on <f, g> must yield a function from the set C(a, b) to the set C(s, t). There’s no function from a non-empty set to the empty set. So, if the former is non-empty, the latter cannot be empty. In other words, if b is related to a and there is a morphism from <a, b> to <s, t> then t is related to s. We were able to “transport” the relation along a morphism. By making the characteristic functor R contravariant in the first argument and covariant in the second, we automatically make the relation compatible with the structure of the category.

Impoverished 3

In general, hom-sets are not constrained to empty and singleton sets. In an arbitrary category C, we can still think of hom-sets as defining some kind of generalized relation between objects. The empty hom-set still means no relation. Non-empty hom-sets can be seen as multiple “proofs” or “witnesses” to the relation.

Now that we know that we can imitate relations using hom-sets, let’s take a step back. I can think of two reasons why we would like to separate relations from hom-sets: One is that relations defined by hom-sets are always reflexive because of identity morphisms. The other reason is that we might want to define various relations on top of an existing category, a category that has its own hom-sets. It turns out that a profunctor is just the answer.

Profunctors

A profunctor assigns sets to pairs of objects — possibly objects taken from two different categories — and it does it in a way that’s compatible with the structure of these categories. In particular, it’s a functor that’s contravariant in its first argument and covariant in the second:

Cop × D -> Set

Interpreting elements of such sets as individual “proofs” of a relation makes a profunctor a kind of proof-relevant relation. (This is very much in the spirit of Homotopy Type Theory (HoTT), where one considers proof-relevant equalities.)

In Haskell, we replace all three categories in the definition of the profunctor with the category of types and functions, which is essentially Set if you ignore the bottom values. So a profunctor is a functor from Setop×Set to Set. It consists of a mapping of types (a two-argument type constructor p) and a mapping of morphisms. A morphism in Setop×Set is a pair of functions going between pairs of sets (a, b) and (s, t). Because of contravariance, the first function goes in the opposite direction:

(s -> a, b -> t)

A profunctor lifts this pair to a single function:

p a b -> p s t

The lifting is done by the function dimap, which is usually written in the curried form:

class Profunctor p where
    dimap :: (s -> a) -> (b -> t) -> p a b -> p s t
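
For instance, the function type itself is a profunctor, mirroring the hom-functor C(-, =); a standard instance:

-- the hom-functor of the category of types, as a profunctor
instance Profunctor (->) where
  dimap f g h = g . h . f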

Profunctor Composition

As with any construction in category theory, we would like to know if profunctors are composable. But how do you compose something that has two inputs that are objects in different categories and one output that is a set? Just like with a Tetris block, we have to turn it on its side. Profunctors generalize relations between categories, so let’s compose them like relations.

Suppose we have a relation P from C to X and another relation Q from X to D. How do we build a composite relation between C and D? The standard way is to find an object in X that can serve as a bridge. We need an object x that is in a relation P with c (we’ll write it as c P x) and that is in a relation Q with d (denoted as x Q d). If such an object exists, we say that d is in a relation with c — the relation being the composition of P and Q.

We’ll base the composition of profunctors on the same idea. Except that a profunctor produces a whole set of proofs of a relation. We not only have to provide an x that is related to both c and d, but also compose the proofs of these relations.

By convention, a profunctor from x to c, p x c, is interpreted as a relation from c to x (what we denoted c P x). So the first step in the composition is finding an x such that p x c is a non-empty set and for which q d x is also a non-empty set. This not only establishes the two relations, but also generates their proofs — elements of sets. The proof that both relations are in force is simply a pair of proofs (a logical conjunction, in terms of propositions as types). The set of such pairs, or the cartesian product of p x c and q d x, for some x, defines the composition of profunctors.

Have a look at this Haskell definition (in Data.Profunctor.Composition):

data Procompose p q d c where
  Procompose :: p x c -> q d x -> Procompose p q d c

Here, the cartesian product (p x c, q d x) is curried, and the existential quantifier over x is implicit in the use of the GADT.

This Haskell definition is a special case of a more general definition of the composition of profunctors that relate arbitrary categories. The existential quantifier in this case is represented by a coend:

(p ∘ q) d c = ∫x (p x c) × (q d x)
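
Back in Haskell, the composite is itself a profunctor: the contravariant argument is routed to q and the covariant one to p. A sketch of how such an instance can be written against the definitions above:

instance (Profunctor p, Profunctor q) => Profunctor (Procompose p q) where
  dimap f g (Procompose pxc qdx) =
    Procompose (dimap id g pxc) (dimap f id qdx)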

Since profunctors can be composed, it’s natural to ask if they form a category. They don’t form a traditional category, because profunctor composition is associative and unital only up to isomorphism; instead, they form a bicategory called Prof. The objects in that bicategory are categories, the morphisms are profunctors, and the role of the identity morphism is played by the hom-functor C(-,=) — our prototypical profunctor.

The fact that the hom-functor is the unit of profunctor composition follows from the so-called ninja Yoneda lemma. It can also be explained in terms of relations. The hom-functor establishes a relation between any two objects that are connected by at least one morphism. As I mentioned before, this relation is reflexive. It follows that if we have a “proof” of p d c, we can immediately compose it with the trivial “proof” of C(d, d), which is idd, and get a proof of the composition

(p ∘ C(-, =)) d c

Conversely, if this composition exists, it means that there is a non-empty hom-set C(d, x) and a proof of p x c. We can then take the element of C(d, x):

f :: d -> x

pair it with an identity at c, and lift the pair:

<f, idc>

using p to transform p x c to p d c — the proof that d is in relation with c. The fact that the relation defined by a profunctor is compatible with the structure of the category (it can be “transported” using a morphism in the product category Cop×D) is crucial for this proof.
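
This argument can be transcribed almost literally into Haskell, using the function type as the hom-profunctor. A sketch of both directions (function names are mine):

-- composing a proof of p x c with a morphism d -> x yields a proof of p d c
fromUnit :: Profunctor p => Procompose p (->) d c -> p d c
fromUnit (Procompose pxc f) = dimap f id pxc

-- conversely, any proof of p d c can be paired with the identity morphism
toUnit :: Profunctor p => p d c -> Procompose p (->) d c
toUnit pdc = Procompose pdc id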

