The connection between the Haskell lens library and category theory is a constant source of amazement to me. The most interesting part is that lenses are formulated in terms of higher order functions that are polymorphic in functors (or, more generally, profunctors). Consider, for instance, this definition:

type Lens s t a b = forall f. Functor f => (a -> f b) -> (s -> f t)

In Haskell, saying that a function is polymorphic in functors is rather mind-boggling: functors form a class parameterized by type constructors of kind *->* (or *->*->*, in the case of profunctors) and supporting a special method called fmap (or dimap, respectively).

In category theory, on the other hand, functors are standard fare. You can form categories of functors. The properties of such categories are described by pretty much the same machinery as those of any other category.

In particular, one of the most important theorems of category theory, the Yoneda lemma, works in the category of functors out of the box. I have previously shown how to employ the Yoneda lemma to derive the representation for Haskell lenses (see my original blog post and, independently, this paper by Jaskelioff and O’Connor — or a more recent expanded post). Continuing with this program, I’m going to show how to use the Yoneda lemma with profunctors. But let’s start with the basics.

By the way, if you feel intimidated by mathematical notation, don’t worry, I have provided a translation to Haskell. However, math notation is often more succinct and almost always more general. I guess, the same ideas could be expressed using C++ templates, but it would look like an incomprehensible mess.

Functor Categories

Functors between any two given categories C and D can themselves be organized into a category, which is often called [C, D] or D^C. The objects in that category are functors, and the morphisms are natural transformations. Given two functors f and g, the hom-set between them can be called either

Nat(f, g)

or

[C, D](f, g)

depending on how much information you want to expose. (For simplicity, I’ll assume that the categories are small, so that the “sets” of natural transformations are indeed sets.)

What’s interesting is that, since functor categories are just categories, we can have functors going between them. We sometimes call them higher order functors. We can also have higher order functors going from a functor category to a regular category, in particular to the category of sets, Set. An example of such a functor is a hom-functor in a functor category. You construct this functor (also called a representable functor) when you fix one end of the hom-set and vary the other. In any category, the mapping:

x -> C(a, x)

is a functor from C to Set. We often use a shorthand notation for this functor:

C(a, -)

If we replace C by a functor category then, for a fixed functor g, the mapping:

f -> [C, D](g, f)

is a higher order functor. It maps f to a set of natural transformations — itself an object in Set.

Representable functors play an important role in the Yoneda lemma. Take the set of natural transformations from a representable functor in C to any functor f that goes from C to Set. This set is in one-to-one correspondence with the set of values of this functor at the object a:

[C, Set](C(a, -), f) ≅ f a

This correspondence is an isomorphism, which is natural both in a and f.

The set of natural transformations between two functors f and g can also be expressed as an end:

[C, D](f, g) = ∫x∈C D(f x, g x)

The end notation is sometimes more convenient because it makes the object x (the “integration variable”) explicit. The Yoneda lemma, in this notation, becomes:

∫x∈C Set(C(a, x), f x) ≅ f a

If you’re familiar with distributions, this formula will immediately resonate with you — it looks like the definition of the Dirac delta function:

∫ dx δ(a - x) f(x) ≅ f(a)

We can apply the Yoneda lemma to a functor category to get:

Nat([C, D](g, -), φ) ≅ φ g

or, in the end notation,

∫f Set(∫x D(g x, f x), φ f) ≅ φ g

Here, the “integration variable” f is itself a functor from C to D, and so is g; φ, however, is a higher order functor. It maps functors from [C, D] to sets from Set. The natural transformations in this formula are higher order natural transformations between higher order functors.

Furthermore, if we substitute for φ another instance of the representable functor, [C, D](h, -), we get the formula for the higher order Yoneda embedding:

Nat([C, D](g, -), [C, D](h, -)) ≅ [C, D](h, g)

which reduces higher order natural transformations to lower order natural transformations. Notice the inversion of g and h on the right hand side.

Using the end notation, this becomes:

∫f Set(∫x D(g x, f x), ∫y D(h y, f y))
  ≅ ∫z D(h z, g z)

We can further specialize this formula by replacing D with Set. We can then choose both functors to be hom-functors (for some fixed a and b):

g = C(a, -)
h = C(b, -)

We get:

∫f Set(∫x Set(C(a, x), f x), ∫y Set(C(b, y), f y))
  ≅ ∫z Set(C(b, z), C(a, z))

This can be simplified by applying the Yoneda lemma to the internal ends (“integrating” over x, y, and z) to get:

∫f Set(f a, f b) ≅ C(a, b)

This simple formula has some interesting possibilities that I will explore later.

Translation

All this might be easier to digest for programmers when translated to Haskell. Natural transformations are polymorphic functions:

forall x. f x -> g x

Here, f and g are arbitrary Haskell Functors. It’s a straightforward translation of the end formula:

∫x∈Set Set(f x, g x)

where the end is replaced by the universal quantifier, and the hom-set in Set by a function type. I have deliberately used Set rather than Hask as the category of Haskell types, because I’m not going to pretend that I care about non-termination.

A higher order functor of the kind we are interested in is a mapping from functors to types, which could be defined as follows:

class HFunctor (phi :: (* -> *) -> *) where
  hfmap :: (forall a. f a -> g a) -> (phi f -> phi g)

The higher order hom-functor is defined as:

newtype HHom f g = HHom (forall a. f a -> g a)

Indeed, it’s easy to define hfmap for it:

instance HFunctor (HHom f) where
  hfmap nat (HHom nat') = HHom (nat . nat')

The types give it away:

nat    :: forall a. g a -> h a
nat'   :: forall a. f a -> g a
result :: HHom f h  -- wrapping forall a. f a -> h a

Higher order natural transformations between such functors will have the signature:

type HNat (phi :: (* -> *) -> *) (psi :: (* -> *) -> *) = 
  forall f. Functor f => phi f -> psi f

The standard Yoneda lemma establishes the isomorphism between f a and the following higher order polymorphic function:

forall x. (a -> x) -> f x  ≅  f a
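
Both directions of this isomorphism are easy to write down in Haskell (a standard exercise; the function names here are my own, and the code needs RankNTypes):

toYoneda :: Functor f => f a -> (forall x. (a -> x) -> f x)
toYoneda fa = \g -> fmap g fa

fromYoneda :: (forall x. (a -> x) -> f x) -> f a
fromYoneda h = h id  -- instantiate x at a and apply to the identity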

The Yoneda lemma for higher order functors is the equivalence between φ g and:

forall f. Functor f => forall x. (g x -> f x) -> φ f  ≅  φ g

Compare this again with:

∫f Set(∫x Set(g x, f x), φ f) ≅ φ g

The higher order Yoneda embedding takes the form of the equivalence between:

forall f. Functor f => forall x. (g x -> f x) -> forall y. (h y -> f y)

and

forall z. h z -> g z

The earlier result of the double application of the Yoneda lemma:

∫f Set(f a, f b) ≅ C(a, b)

translates to:

forall f. Functor f => f a -> f b ≅ a -> b

One direction of this equivalence simply reiterates the definition of a functor: a function a->b can be lifted to any functor. The other direction is a little more interesting. Given two types, a and b, if there is a function from f a to f b for any functor f, then there is a direct function from a to b. In Set, where there are functions between any two types, with the exception of a->Void, this is not a big surprise.
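
We can witness this equivalence directly (the names are mine; the backward direction works by instantiating f at the reader functor):

toPoly :: (a -> b) -> (forall f. Functor f => f a -> f b)
toPoly g = fmap g

fromPoly :: (forall f. Functor f => f a -> f b) -> (a -> b)
fromPoly h = h id  -- here f is instantiated at the reader functor ((->) a)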

But there are other categories embedded in Set, and the same categorical formula will lead to more interesting translations. In particular, think of categories where the hom-set is not equivalent to a simple function type with trivial composition. A good example is the basic formulation of lens as the getter/setter pair, or a function of type:

type Lens s t a b = s -> (a, b -> t)

Such functions don’t compose naturally, but their functor-polymorphic representations do.

Profunctors

You’ve seen the reusability of categorical constructs in action. We can have functors operate on functors, and natural transformations that work between higher order functors. The same Yoneda lemma works as well in the category of types and functions, as in the category of functors and natural transformations. From that perspective, a profunctor is just a special case of a functor. Given two categories C and D, a profunctor is a functor:

C^op × D -> Set

It’s a map from a product category to Set. Because the first component of the product is the opposite category (all morphisms reversed), this functor is contravariant in the first argument.

Let’s translate this definition to Haskell. We substitute all three categories with the same category of types and functions, which is essentially Set (remember, we ignore the bottom values). So a profunctor is a functor from Set^op×Set to Set. It’s a mapping of types — a two-argument type constructor p — and a mapping of morphisms. A morphism in Set^op×Set is a pair of functions going between pairs (a, b) and (s, t). Because of contravariance, the first function goes in the opposite direction:

(s -> a, b -> t)

A profunctor lifts this pair to a single function:

p a b -> p s t

The lifting is done by the function dimap, which is usually written in the curried form:

class Profunctor p where
    dimap :: (s -> a) -> (b -> t) -> p a b -> p s t
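
The simplest example is the function arrow itself; this is the instance defined in Data.Profunctor:

instance Profunctor (->) where
    dimap f g h = g . h . f  -- pre-compose contravariantly, post-compose covariantly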

All said and done, a profunctor is still a functor, so we can reuse all the machinery of functor calculus, including all versions of the Yoneda lemma.

Let’s start with the Yoneda lemma for the category Cop×D. Straightforward substitution leads to:

[C^op×D, Set]((C^op×D)(<c, d>, -), p) ≅ p <c, d>

or, in the end notation:

∫<x, y>∈C^op×D Set((C^op×D)(<c, d>, <x, y>), p <x, y>) ≅ p <c, d>

Here, p is the profunctor operating on pairs of objects, such as <c, d>. A hom-set in the product category C^op×D goes between two such pairs:

(C^op×D)(<c, d>, <x, y>)

Here’s the straightforward translation to Haskell:

forall x y. (x -> c) -> (d -> y) -> p x y ≅ p c d

Notice the customary currying and the reversal of source with target in the first function argument due to contravariance.
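
As with the ordinary Yoneda lemma, the two directions of this isomorphism can be written down directly (the names are my own):

toPYoneda :: Profunctor p => p c d -> (forall x y. (x -> c) -> (d -> y) -> p x y)
toPYoneda pcd = \f g -> dimap f g pcd

fromPYoneda :: (forall x y. (x -> c) -> (d -> y) -> p x y) -> p c d
fromPYoneda h = h id id  -- instantiate x at c and y at d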

Since profunctors are just functors, they form a functor category:

[C^op×D, Set]

(not to be confused with Prof, the profunctor category, where profunctors serve as morphisms rather than objects).

We can easily rewrite the higher-order Yoneda lemma replacing functors with profunctors:

∫p Set(∫<x, y> Set(q <x, y>, p <x, y>), π p) ≅ π q

And this is what it looks like in Haskell:

forall p. Profunctor p => (forall x y. q x y -> p x y) -> pi p ≅ pi q

Here, π is a higher order functor acting on profunctors, with values in Set. In Haskell it’s defined by a type class:

class HFunProf (pi :: (* -> * -> *) -> *) where
  fhpmap :: (forall a b. p a b -> q a b) -> (pi p -> pi q)

Natural transformations between such functors have the type:

type HNatProf (pi :: (* -> * -> *) -> *) (rho :: (* -> * -> *) -> *) =
  forall p. Profunctor p => pi p -> rho p

Notice that we are now defining functions that are polymorphic in profunctors. This is getting us closer to the profunctor formulation of the lens library, in particular to prisms and isos.

Understanding Isos

An iso is a perfect example of a data structure straddling the gap between lenses and prisms. Its first order definition is simple:

type Iso s t a b = (s -> a, b -> t)

The name derives from isomorphism, which is a special case of an iso (I think a cuter name for an iso would be Mirror). The crucial observation is that this is nothing but the type corresponding to a hom-set in the product category Set^op×Set:

(Set^op×Set)(<a, b>, <s, t>)

We know how to compose such morphisms:

compIso :: Iso s t a b -> Iso a b u v -> Iso s t u v
(f1, g1) `compIso` (f2, g2) = (f2 . f1, g1 . g2)

but it’s not as straightforward as function composition. Fortunately, there is a higher order representation of isos, which composes using simple function composition. The trick is to make it profunctor-polymorphic:

type Iso s t a b = forall p. Profunctor p => p a b -> p s t

Why are the two definitions isomorphic? There is a standard argument based on parametricity, which I will skip, because there is a better explanation.

Recall the earlier result of applying the Yoneda lemma to the functor category:

forall f. Functor f => f a -> f b ≅ a -> b

The similarity is striking, isn’t it? That’s because the categorical formula for both identities is the same:

∫f Set(f a, f b) ≅ C(a, b)

All we need is to replace C with C^op×D and rewrite it in terms of pairs of objects:

∫p Set(p <a, b>, p <s, t>) ≅ (C^op×D)(<a, b>, <s, t>)

But that’s exactly what we need:

forall p. Profunctor p => p a b -> p s t  ≅  (s -> a, b -> t)
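
Here is a sketch of both directions, using the Exchange profunctor known from the lens library (the witness names are mine):

data Exchange a b s t = Exchange (s -> a) (b -> t)

instance Profunctor (Exchange a b) where
  dimap f g (Exchange sa bt) = Exchange (sa . f) (g . bt)

toIso :: (s -> a, b -> t) -> (forall p. Profunctor p => p a b -> p s t)
toIso (f, g) = dimap f g

fromIso :: (forall p. Profunctor p => p a b -> p s t) -> (s -> a, b -> t)
fromIso h = case h (Exchange id id) of Exchange f g -> (f, g)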

The immediate advantage of the profunctor-polymorphic representation is that you can compose two isos using straightforward function composition. Instead of using compIso, we can use the dot:

p :: Iso s t a b
q :: Iso a b u v
r :: Iso s t u v
r = p . q

Of course, the full power of lenses is in the ability to compose (and type-check) combinations of different elements of the library.

Note: The definition of Iso in the lens library involves a functor f:

type Iso s t a b = forall p f. (Profunctor p, Functor f) => 
    p a (f b) -> p s (f t)

This functor can be absorbed into the definition of the profunctor p without any loss of generality.

Next: Combining adjunctions with the Yoneda lemma.

Acknowledgments

I’m grateful to Gershom Bazerman and Gabor Greif for useful comments and to André van Meulebrouck for checking the grammar and spelling.


In the previous blog post we talked about relations. I gave an example of a thin category as a kind of relation that’s compatible with categorical structure. In a thin category, the hom-set is either an empty set or a singleton set. It so happens that these two sets form a subcategory of Set. It’s a very interesting category. It consists of two objects — let’s give them new names o and i. Besides the mandatory identity morphisms, we also have a single morphism going from o to i, corresponding to the function we call absurd in Haskell:

absurd :: Void -> a
absurd v = case v of {}  -- no cases to handle: Void has no constructors

This tiny category is sometimes called the interval category. I’ll call it o->i.


The object o is initial, and the object i is terminal — just as the empty set and the singleton set were in Set. Moreover, the cartesian product from Set can be used to define a tensor product in o->i. We’ll use this tensor product to build a monoidal category.

Monoidal Categories

A tensor product is a bifunctor ⊗ with some additional properties. Here, in the interval category, we’ll define it through the following multiplication table:

o ⊗ o = o
o ⊗ i = o
i ⊗ o = o
i ⊗ i = i
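
If it helps, here is a toy Haskell model of this table (my own encoding, representing o as False and i as True):

-- the tensor product is conjunction; the unit is i = True
tensor :: Bool -> Bool -> Bool
tensor = (&&)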

Its action on pairs of morphisms (what we call bimap in Haskell) is also easy to define. For instance, what’s the action of ⊗ on the pair <absurd, id_i>? This pair takes the pair <o, i> to <i, i>. Under the bifunctor ⊗, the first pair produces o, and the second i. There is only one morphism from o to i, so we have:

absurd ⊗ id_i = absurd

If we designate the (terminal) object i as the unit of the tensor product, we get a (symmetric) monoidal category. A monoidal category is a category with a tensor product that’s associative and unital (usually, up to isomorphism — but here, strictly).

Now imagine that we replace hom-sets in our original thin category with objects from the monoidal category o->i (we’ll call them hom-objects). After all, we were only using two sets from Set. We can replace the empty hom-set with the object o, and the singleton hom-set with the object i. We get what’s called an enriched category (although, in this case, it’s more of an impoverished category).

[Figure: An example of a thin category (a total order with objects 1, 2, and 3) with hom-sets replaced by hom-objects from the interval category. Think of i as corresponding to less-than-or-equal, and o as greater.]

Enriched Categories

An enriched category has hom-objects instead of hom-sets. These are objects from some monoidal category V called the base category. The base category has to be monoidal because we want to define something that would replace the usual composition of morphisms. Morphisms are elements of hom-sets. However, hom-objects, in general, have no elements. We don’t know what an element of o or i is.

So to fully define an enriched category we have to come up with a sensible substitute for composition. To do that, we need to rethink composition — first in terms of hom-sets, then in terms of hom-objects.

We can think of composition as a function from a cartesian product of two hom-sets to a third hom-set:

compose_{a b c} :: C(b, c) × C(a, b) -> C(a, c)

Generalizing it, we can replace hom-sets with hom-objects (here, either o or i), the cartesian product with the tensor product, and a function with a morphism (notice: it’s a morphism in our monoidal category o->i). These composition-defining morphisms form a “composition table” for hom-objects.

As an example, take the composition of two i’s. Their product i ⊗ i is i again, and there is only one morphism out of i, the identity morphism. In terms of the original hom-sets it would mean that the composition of two morphisms always exists. In general, we have to impose this condition when we’re defining a category, enriched or not — here it just happens automatically.

For instance (see illustration), compose_{0 1 2} = id_i:

compose_{0 1 2} :: C(1, 2) ⊗ C(0, 1) -> C(0, 2)

Since C(1, 2) ⊗ C(0, 1) = i ⊗ i = i, and C(0, 2) = i, this composition is a morphism from i to i, and the only such morphism is the identity.

In every category we must also have identity morphisms. These are special elements in the hom-sets of the form C(a, a). We have to find a way to define their equivalent in the enriched setting. We’ll use the standard trick of defining generalized elements. It’s based on the observation that selecting an element from a set s is the same as selecting a morphism that goes from the singleton set (the terminal object in Set) to s. In a monoidal category, we replace the terminal object with the monoidal unit.

So, instead of picking an identity morphism in C(a, a), we use a morphism from the monoidal unit i:

j_a :: i -> C(a, a)

Again, in the case of a thin category, there is only one morphism leaving i, and that’s the identity morphism. That’s why we are automatically guaranteed that, in a thin category, all hom-objects of the form C(a, a) are equal to i.

Composition in a category must also satisfy associativity and identity conditions. Associativity in the enriched setting translates straightforwardly to a commuting diagram, but identity is a little trickier. We have to use j_a to “select” the identity from the hom-object C(a, a) while composing it with some other hom-object C(b, a). We start with the product:

i ⊗ C(b, a)

Because i is the monoidal unit, this is equal to C(b, a). On the other hand, we can tensor together two morphisms in o->i — remember, a tensor product is a bifunctor, so it also acts on morphisms. Here we’ll tensor j_a and the identity at C(b, a):

j_a ⊗ id_{C(b, a)}

We act with this tensored morphism on the object i ⊗ C(b, a) to get C(a, a) ⊗ C(b, a). Then we use composition to get:

C(a, a) ⊗ C(b, a) -> C(b, a)

These two ways of getting to C(b, a) must coincide, leading to the identity condition for enriched categories.


Now that we’ve seen how the enrichment works for thin categories, we can apply the same mechanism to define categories enriched over any monoidal category V.

The important part is that V defines a (bifunctor) tensor product ⊗ and a unit object i. Associativity and unitality may be either strict or up to isomorphism (notice that a regular cartesian product is associative only up to isomorphism — (a, (b, c)) is not equal to ((a, b), c)).

Instead of sets of morphisms, an enriched category has hom-objects that are objects in V. We use the same notation as for hom-sets: C(a, b) is the hom-object that connects object a to object b. Composition is replaced by morphisms in V:

compose_{a b c} :: C(b, c) ⊗ C(a, b) -> C(a, c)

Instead of identity morphisms, we have the morphisms in V:

j_a :: i -> C(a, a)

Finally, associativity and unitality of composition are imposed in the form of a few commuting diagrams.

Impoverished Yoneda

The Yoneda Lemma talks about functors from an arbitrary category to Set. To generalize the Yoneda lemma to enriched categories we first have to generalize functors. Their action on objects is not a problem; it’s the action on morphisms that needs our attention.

Enriched Functors

Since in an enriched category we no longer have access to individual morphisms, we have to define the action of functors on hom-objects wholesale. This is only possible if the hom-objects in the target category come from the same base category V as the hom-objects in the source category. In other words, both categories must be enriched over the same monoidal category. We can then use regular morphisms in V to map hom-objects.

Between any two objects a and b in C we have the hom-object C(a, b). The two objects are mapped by the functor f to f a and f b, and there is a hom-object between them, D(f a, f b). The action of f on C(a, b) is defined as a morphism in V:

C(a, b) -> D(f a, f b)


Let’s see what this means in our impoverished thin category. First of all, a functor will always map related objects to related objects. That’s because there is no morphism from i to o. A bond between two objects cannot be broken by an impoverished functor.

If the relation is a partial order, for instance less-than-or-equal, then it follows that a functor between posets preserves the ordering — it’s monotone.

A functor must also preserve composition and identity. The former can be easily expressed as a commuting diagram. Identity preservation in the enriched setting involves the use of j_a. Starting from i we can use j_a to get to C(a, a), which the functor maps to D(f a, f a). Or we can use j_{f a} to get there directly. We insist that both paths be the same.


In our impoverished category, this just works because j_a is the identity morphism and all C(a, a)s and D(a, a)s are equal to i.

Back to Yoneda: You might remember that we start the Yoneda construction by fixing one object a in C, and then varying another object x to define the functor:

x -> C(a, x)

This functor maps C to Set, because xs are objects in C, and hom-sets are sets — objects of Set.

In the enriched environment, the same construction results in a mapping from C to V, because hom-objects are objects of the base category V.

But is this mapping a functor? This is far from obvious, considering that C is an enriched category, and we have just said that enriched functors can only go between categories that are enriched over the same base category. The target of our functor, the category V, is not enriched. It turns out that, as long as V is closed, we can turn it into an enriched category.

Self Enrichment

Let’s first see how we would enrich our tiny category o->i. First of all, let’s check if it’s closed. Closedness means that hom-sets can be objectified — for every hom-set there is an object called the exponential object that objectifies it. The exponential object in a (symmetric) monoidal category is defined through the adjunction:

V(a⊗b, c) ≅ V(b, c^a)

This is the standard adjunction for defining exponentials, except that we are using the tensor product instead of the regular product. The hom-sets are sets of morphisms between objects in V (here, in o->i).

Let’s check, for instance, if there’s an object that corresponds to the hom-set V(o, i), which we would call i^o. We have:

V(o⊗b, i) ≅ V(b, i^o)

Whatever b we choose, when multiplied by o it will yield o, so the left hand side is V(o, i), a singleton set. Therefore V(b, i^o) must be a singleton set too, for any choice of b. In particular, if b is i, we see that the only choice for i^o is:

i^o = i

You can check that all exponentiation rules in o->i can be obtained from simple algebra by replacing o with zero and i with one.

Every closed symmetric monoidal category can be enriched in itself by replacing hom-sets with the corresponding exponentials. For instance, in our case, we end up replacing all empty hom-sets in the category o->i with o, and all singleton hom-sets with i. You can easily convince yourself that it works, and the result is the category o->i enriched in itself.


We can now take a category C that’s enriched over a closed symmetric monoidal category V, and show that the mapping:

x -> C(a, x)

is indeed an enriched functor. It maps objects of C to objects of V and hom-objects of C to hom-objects (exponentials) of V.

[Figure: An example of a functor from a total order enriched over the interval category to the interval category. This particular functor is equal to the hom-functor C(a, -) for a equal to 3.]

Let’s see what this functor looks like in a poset. Given some a, the hom-object C(a, x) is equal to i if a <= x. So an x is mapped to i if it’s greater-or-equal to a, otherwise it’s mapped to o. If you think of the objects mapped to o as colored black and the ones mapped to i as colored red, you’ll see the object a and the whole graph below it must be painted red.

Enriched Natural Transformations

Now that we know what enriched functors are, we have to define natural transformations between them. This is a bit tricky, since a regular natural transformation is defined as a family of morphisms. But again, instead of picking individual morphisms from hom-sets we can work with the closest equivalent: generalized elements — morphisms going from the unit object i to hom-objects. So an enriched natural transformation between two enriched functors f and g is defined as a family of morphisms in V:

α_a :: i -> D(f a, g a)

Natural transformations are very limited in our impoverished category. Let’s see what morphisms from i are at our disposal. We have one morphism from i to i: the identity morphism id_i. This makes sense — we think of i as having a single element. There is no morphism from i back to o; and that makes sense too — we think of o as having no elements. The only possible generalized components of an impoverished natural transformation between two functors f and g correspond to D(f a, g a) equal to i; which means that, for every a, f a must be less-than-or-equal to g a. A natural transformation can only push a functor uphill.

When the target category is o->i, as in the impoverished Yoneda lemma, a natural transformation may never connect red to black. So once the first functor switches to red, the other must follow.

Naturality Condition

There is, of course, a naturality condition that goes with this definition of a natural transformation. The essence of it is that it shouldn’t matter if we first apply a functor and then the natural transformation α, or the other way around. In the enriched context, there are two ways of getting from C(a, b) to D(f a, g b). One is to multiply C(a, b) by i on the right:

C(a, b) ⊗ i

apply the morphism g ⊗ α_a to get:

D(g a, g b) ⊗ D(f a, g a)

and then apply composition to get:

D(f a, g b)

The other way is to multiply C(a, b) by i on the left:

i ⊗ C(a, b)

apply α_b ⊗ f to get:

D(f b, g b) ⊗ D(f a, f b)

and compose the two to get:

D(f a, g b)

The naturality condition requires that this diagram commute.


Enriched Yoneda

The enriched version of the Yoneda lemma talks about enriched natural transformations from the functor x -> C(a, x) to any enriched functor f that goes from C to V.

Consider for a moment a functor from a poset to our tiny category o->i (which, by the way, is also a poset). It will map some objects to o (black) and others to i (red). As we’ve seen, a functor must preserve the less-than-or-equal relation, so once we get into the red territory, there is no going back to black. And a natural transformation may only repaint black to red, not the other way around.

Now we would like to say that natural transformations from x -> C(a, x) to f are in one-to-one correspondence with the elements of f a, except that f a is not a set, so it doesn’t have elements. It’s an object in V. So instead of talking about elements of f a, we’ll talk about generalized elements — morphisms from the unit object i to f a. And that’s how the enriched Yoneda lemma is formulated — as a natural bijection between the set of natural transformations and a set of morphisms from the unit object to f a.

Nat(C(a, -), f) ≅ V(i, f a)

In our running example, there are only two possible values for f a.

  1. If the value is o then there is no morphism from i to it. The Yoneda lemma tells us that there is no natural transformation in that case. That makes sense, because the value of the functor x -> C(a, x) at x=a is i, and there is no morphism from i to o.
  2. If the value is i then there is exactly one morphism from i to it — the identity. The Yoneda lemma tells us that there is just one natural transformation in that case. It’s the natural transformation whose generalized component at any object x is i->i.

Strong Enriched Yoneda

There is something unsatisfactory in the fact that the enriched Yoneda lemma ends up using a mapping between sets. First we try to get away from sets as far as possible, then we go back to sets of morphisms. It feels like cheating. Not to worry! There is a stronger version of the Yoneda lemma that deals with this problem. What we need is to replace the set of natural transformations with an object in V that would represent them — just like we replaced the set of morphisms with the exponential object. Such an object is defined as an end:

∫x V(f x, g x)

The strong version of the Yoneda lemma establishes the natural isomorphism:

∫x V(C(a, x), f x) ≅ f a

Enriched Profunctors

We’ve seen that a profunctor is a functor from a product category Cop × D to Set. The enriched version of a profunctor requires the notion of a product of enriched categories. We would like the product of enriched categories to also be an enriched category. In fact, we would like it to be enriched over the same base category V as the component categories.

We’ll define objects in such a category as pairs of objects from the component categories, but the hom-objects will be defined as tensor products of the component hom-objects. In the enriched product category, the hom-object between two pairs, <c, d> and <c', d'> is:

(C^op ⊗ D)(<c, d>, <c', d'>) = C(c', c) ⊗ D(d, d')

You can convince yourself that composition of such hom-objects requires the tensor product to be symmetric (at least up to isomorphism). That’s because you have to be able to rearrange the hom-objects in a tensor product of tensor products.

An enriched profunctor is defined as an enriched functor from the tensor product of two categories to the (self-enriched) base category:

C^op ⊗ D -> V

Just like regular profunctors, enriched profunctors can be composed using the coend formula. The only difference is that the cartesian product is replaced by the tensor product in V. They form a bicategory called V-Prof.

Enriched profunctors are the basis of the definition of Tambara modules, which are relevant in the application to Haskell lenses.

Conclusion

One of the reasons for using category theory is to get away from set theory. In general, objects in a category don’t have to form sets. The morphisms, however, are elements of sets — the hom-sets. Enriched categories go a step further and replace even those sets with categorical objects. However, it’s not categories all the way down — the base category that’s used for enrichment is still a regular old category with hom-sets.

Acknowledgments

I’m grateful to Gershom Bazerman for useful comments and to André van Meulebrouck for checking the grammar and spelling.


A profunctor is a categorical construct that takes relations to a new level. It is an embodiment of a proof-relevant relation.

We don’t talk enough about relations. We talk about domesticated relations — functions; or captive ones — equalities; but we don’t talk enough about relations in the wild. And, as is often the case in category theory, a less constrained construct may have more interesting properties and may lead to better insights.

Relations

A relation between two sets is defined as a set of pairs. The first element of each pair is from the first set, and the second from the second. In other words, it’s a subset of the cartesian product of two sets.

This definition may be extended to categories. If C and D are small categories, we can define a relation between objects as a set of pairs of objects. In general, such pairs are themselves objects in the product category C×D. We could define a relation between categories as a subcategory of C×D. This works as long as we ignore morphisms or, equivalently, work with discrete categories.

There is another way of defining relations using a characteristic function. You can define a function on the cartesian product of two sets — a function that assigns zero (or false) to those pairs that are not in a relation, and one (or true) to those which are.

Extending this to categories, we would use a functor rather than a function. We could, for instance, define a relation as a functor from C×D to Set — a functor that maps pairs of objects to either an empty set or a singleton set. The (somewhat arbitrary) choice of Set as the target category will make sense later, when we make connection between relations and hom-sets.

But a functor is not just a mapping of objects — it must map morphisms as well. Here, since we are dealing with a product category, our characteristic functor must map pairs of morphisms to functions between sets. We only worry about the empty set and the singleton set, so there aren’t that many functions to choose from.

The next question is: Should we map the two morphisms in a pair covariantly, or maybe map one of them contravariantly? To see which possibility makes more sense, let’s consider the case when D is the same as C. In other words, let’s look at relations between objects of the same category. There are actually categories that are based on relations, for instance preorders. In a preorder, two objects are in a relation if there is a morphism between them; and there can be at most one morphism between any two objects. A hom-set in a preorder can only be an empty set or a singleton set. Empty set — no relation. Singleton set — the objects are related.

But that’s exactly how we defined a relation in the first place — a mapping from a pair of objects to Set (how’s that for foresight). In a preorder setting, this mapping is nothing but the hom-functor itself. And we know that hom-functors are contravariant in the first argument and covariant in the second:

C(-,=) :: C^op×C -> Set

That’s an argument in favor of choosing mixed covariance for the characteristic functor defining a relation.

A preorder is also called a thin category — a category where there’s at most one morphism per hom-set. Therefore a hom-functor in any thin category defines a relation.

Let’s dig a little deeper into why contravariance in the first argument makes sense in defining a relation. Suppose two objects a and b are related, i.e., the characteristic functor R maps the pair <a, b> to a singleton set. In the hom-set interpretation, where R is the hom-functor C(-, =), it means that there is a single morphism r:

r :: a -> b

Now let’s pick a morphism in C^op×C that goes from <a, b> to some <s, t>. A morphism in C^op×C is a pair of morphisms in C:

f :: s -> a
g :: b -> t


The composition of morphisms g ∘ r ∘ f is a morphism from s to t. That means the hom-set C(s, t) is not empty — therefore s and t are related.


And they should be related. That’s because the functor R acting on <f, g> must yield a function from the set C(a, b) to the set C(s, t). There’s no function from a non-empty set to the empty set. So, if the former is non-empty, the latter cannot be empty. In other words, if b is related to a and there is a morphism from <a, b> to <s, t> then t is related to s. We were able to “transport” the relation along a morphism. By making the characteristic functor R contravariant in the first argument and covariant in the second, we automatically make the relation compatible with the structure of the category.


In general, hom-sets are not constrained to empty and singleton sets. In an arbitrary category C, we can still think of hom-sets as defining some kind of generalized relation between objects. The empty hom-set still means no relation. Non-empty hom-sets can be seen as multiple “proofs” or “witnesses” to the relation.

Now that we know that we can imitate relations using hom-sets, let’s take a step back. I can think of two reasons why we would like to separate relations from hom-sets: One is that relations defined by hom-sets are always reflexive because of identity morphisms. The other reason is that we might want to define various relations on top of an existing category, a category that has its own hom-sets. It turns out that a profunctor is just the answer.

Profunctors

A profunctor assigns sets to pairs of objects — possibly objects taken from two different categories — and it does it in a way that’s compatible with the structure of these categories. In particular, it’s a functor that’s contravariant in its first argument and covariant in the second:

C^op × D -> Set

Interpreting elements of such sets as individual “proofs” of a relation, makes a profunctor a kind of proof-relevant relation. (This is very much in the spirit of Homotopy Type Theory (HoTT), where one considers proof-relevant equalities.)

In Haskell, we substitute all three categories in the definition of the profunctor with the category of types and functions; which is essentially Set, if you ignore the bottom values. So a profunctor is a functor from Set^op×Set to Set. It’s a mapping of types — a two-argument type constructor p, and a mapping of morphisms. A morphism in Set^op×Set is a pair of functions going between pairs of sets (a, b) and (s, t). Because of contravariance, the first function goes in the opposite direction:

(s -> a, b -> t)

A profunctor lifts this pair to a single function:

p a b -> p s t

The lifting is done by the function dimap, which is usually written in the curried form:

class Profunctor p where
    dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

Profunctor Composition

As with any construction in category theory, we would like to know if profunctors are composable. But how do you compose something that has two inputs that are objects in different categories and one output that is a set? Just like with a Tetris block, we have to turn it on its side. Profunctors generalize relations between categories, so let’s compose them like relations.

Suppose we have a relation P from C to X and another relation Q from X to D. How do we build a composite relation between C and D? The standard way is to find an object in X that can serve as a bridge. We need an object x that is in a relation P with c (we’ll write it as c P x), and which is in a relation Q with d (denoted as x Q d). If such an object exists, we say that d is in a relation with c — the relation being the composition of P and Q.

We’ll base the composition of profunctors on the same idea. Except that a profunctor produces a whole set of proofs of a relation. We not only have to provide an x that is related to both c and d, but also compose the proofs of these relations.

By convention, a profunctor from x to c, p x c, is interpreted as a relation from c to x (what we denoted c P x). So the first step in the composition is finding an x such that p x c is a non-empty set and for which q d x is also a non-empty set. This not only establishes the two relations, but also generates their proofs — elements of sets. The proof that both relations are in force is simply a pair of proofs (a logical conjunction, in terms of propositions as types). The set of such pairs, or the cartesian product of p x c and q d x, for some x, defines the composition of profunctors.

Have a look at this Haskell definition (in Data.Profunctor.Composition):

data Procompose p q d c where
  Procompose :: p x c -> q d x -> Procompose p q d c

Here, the cartesian product (p x c, q d x) is curried, and the existential quantifier over x is implicit in the use of the GADT.
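
As a quick sanity check (my own example, built on the definition above), composing two plain function profunctors just recovers function composition:

-- requires GADTs; the hidden x is Int in this example
ex :: Procompose (->) (->) Int String
ex = Procompose show (+ 1)

squash :: Procompose (->) (->) d c -> (d -> c)
squash (Procompose f g) = f . g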

This Haskell definition is a special case of a more general definition of the composition of profunctors that relate arbitrary categories. The existential quantifier in this case is represented by a coend:

(p ∘ q) d c = ∫^x (p x c) × (q d x)

Since profunctors can be composed, it’s natural to ask if they form a category. It turns out that they don’t form a traditional category (profunctor composition is associative and unital only up to isomorphism); instead, they form a bicategory called Prof. The objects in that bicategory are categories, morphisms are profunctors, and the role of the identity morphism is played by the hom-functor C(-,=) — our prototypical profunctor.

The fact that the hom-functor is the unit of profunctor composition follows from the so-called ninja Yoneda lemma. This can also be explained in terms of relations. The hom-functor establishes a relation between any two objects that are connected by at least one morphism. As I mentioned before, this relation is reflexive. It follows that if we have a “proof” of p d c, we can immediately compose it with the trivial “proof” of C(d, d), which is id_d, and get the proof of the composition

(p ∘ C(-, =)) d c

Conversely, if this composition exists, it means that there is a non-empty hom-set C(d, x) and a proof of p x c. We can then take the element of C(d, x):

f :: d -> x

pair it with an identity at c, and lift the pair:

<f, id_c>

using p to transform p x c to p d c — the proof that d is in relation with c. The fact that the relation defined by a profunctor is compatible with the structure of the category (it can be “transported” using a morphism in the product category Cop×D) is crucial for this proof.

Acknowledgments

I’m grateful to Gershom Bazerman for useful comments and to André van Meulebrouck for checking the grammar and spelling.


This is part 19 of Categories for Programmers. Previously: Adjunctions. See the Table of Contents.

Free Monoid from Adjunction

Free constructions are a powerful application of adjunctions. A free functor is defined as the left adjoint to a forgetful functor. A forgetful functor is usually a pretty simple functor that forgets some structure. For instance, lots of interesting categories are built on top of sets. But categorical objects, which abstract those sets, have no internal structure — they have no elements. Still, those objects often carry the memory of sets, in the sense that there is a mapping — a functor — from a given category C to Set. A set corresponding to some object in C is called its underlying set.

Monoids are such objects that have underlying sets — sets of elements. There is a forgetful functor U from the category of monoids Mon to the category of sets, which maps monoids to their underlying sets. It also maps monoid morphisms (homomorphisms) to functions between sets.

I like to think of Mon as having split personality. On the one hand, it’s a bunch of sets with multiplication and unit elements. On the other hand, it’s a category with featureless objects whose only structure is encoded in morphisms that go between them. Every set-function that preserves multiplication and unit gives rise to a morphism in Mon.

Things to keep in mind:

  • There may be many monoids that map to the same set, and
  • There are fewer monoid morphisms than there are functions between their underlying sets (or at most as many).

[Figure: Monoids m1 and m2 have the same underlying set. There are more functions between the underlying sets of m2 and m3 than there are morphisms between them.]

The functor F that’s the left adjoint to the forgetful functor U is the free functor that builds free monoids from their generator sets. The adjunction follows from the free monoid universal construction we’ve discussed before.

In terms of hom-sets, we can write this adjunction as:

Mon(F x, m) ≅ Set(x, U m)

This (natural in x and m) isomorphism tells us that:

  • For every monoid homomorphism between the free monoid F x generated by x and an arbitrary monoid m there is a unique function that embeds the set of generators x in the underlying set of m. It’s a function in Set(x, U m).
  • For every function that embeds x in the underlying set of some m there is a unique monoid morphism between the free monoid generated by x and the monoid m. (This is the morphism we called h in our universal construction.)
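
For lists, the two directions of this adjunction can be written down concretely (a standard observation; the names are mine):

toHom :: Monoid m => (x -> m) -> ([x] -> m)
toHom = foldMap

-- the inverse, valid when its argument is a monoid homomorphism
fromHom :: ([x] -> m) -> (x -> m)
fromHom h = h . (: [])  -- embed a generator as a singleton list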


The intuition is that F x is the “maximum” monoid that can be built on the basis of x. If we could look inside monoids, we would see that any morphism that belongs to Mon(F x, m) embeds this free monoid in some other monoid m. It does it by possibly identifying some elements. In particular, it embeds the generators of F x (i.e., the elements of x) in m. The adjunction shows that the embedding of x, which is given by a function from Set(x, U m) on the right, uniquely determines the embedding of monoids on the left, and vice versa.

In Haskell, the list data structure is a free monoid (with some caveats: see Dan Doel’s blog post). A list type [a] is a free monoid with the type a representing the set of generators. For instance, the type [Char] contains the unit element — the empty list [] — and the singletons like ['a'], ['b'] — the generators of the free monoid. The rest is generated by applying the “product.” Here, the product of two lists simply appends one to another. Appending is associative and unital (that is, there is a neutral element — here, the empty list). A free monoid generated by Char is nothing but the set of all strings of characters from Char. It’s called String in Haskell:

type String = [Char]

(type defines a type synonym — a different name for an existing type).

Another interesting example is a free monoid built from just one generator. It’s the type of the list of units, [()]. Its elements are [], [()], [(), ()], etc. Every such list can be described by one natural number — its length. There is no more information encoded in the list of units. Appending two such lists produces a new list whose length is the sum of the lengths of its constituents. It’s easy to see that the type [()] is isomorphic to the additive monoid of natural numbers (with zero). Here are the two functions that are the inverse of each other, witnessing this isomorphism:

toNat :: [()] -> Int
toNat = length

toLst :: Int -> [()]
toLst n = replicate n ()

For simplicity I used the type Int rather than Natural, but the idea is the same. The function replicate creates a list of length n pre-filled with a given value — here, the unit.

Some Intuitions

What follows are some hand-waving arguments. Those kinds of arguments are far from rigorous, but they help in forming intuitions.

To get some intuition about the free/forgetful adjunctions it helps to keep in mind that functors and functions are lossy in nature. Functors may collapse multiple objects and morphisms, functions may bunch together multiple elements of a set. Also, their image may cover only part of their codomain.

An “average” hom-set in Set will contain a whole spectrum of functions starting with the ones that are least lossy (e.g., injections or, possibly, isomorphisms) and ending with constant functions that collapse the whole domain to a single element (if there is one).

I tend to think of morphisms in an arbitrary category as being lossy too. It’s just a mental model, but it’s a useful one, especially when thinking of adjunctions — in particular those in which one of the categories is Set.

Formally, we can only speak of morphisms that are invertible (isomorphisms) or non-invertible. It’s the latter kind that may be thought of as lossy. There is also a notion of mono- and epimorphisms that generalize the idea of injective (non-collapsing) and surjective (covering the whole codomain) functions, but it’s possible to have a morphism that is both mono and epi, and which is still non-invertible.

In the Free ⊣ Forgetful adjunction, we have the more constrained category C on the left, and a less constrained category D on the right. Morphisms in C are “fewer” because they have to preserve some additional structure. In the case of Mon, they have to preserve multiplication and unit. Morphisms in D don’t have to preserve as much structure, so there are “more” of them.

When we apply a forgetful functor U to an object c in C, we think of it as revealing the “internal structure” of c. In fact, if D is Set we think of U as defining the internal structure of c — its underlying set. (In an arbitrary category, we can’t talk about the internals of an object other than through its connections to other objects, but here we are just hand-waving.)

If we map two objects c' and c using U, we expect that, in general, the mapping of the hom-set C(c', c) will cover only a subset of D(U c', U c). That’s because morphisms in C(c', c) have to preserve the additional structure, whereas the ones in D(U c', U c) don’t.


But since an adjunction is defined as an isomorphism of particular hom-sets, we have to be very picky with our selection of c'. In the adjunction, c' is picked not from just anywhere in C, but from the (presumably smaller) image of the free functor F:

C(F d, c) ≅ D(d, U c)

The image of F must therefore consist of objects that have lots of morphisms going to an arbitrary c. In fact, there have to be as many structure-preserving morphisms from F d to c as there are non-structure-preserving morphisms from d to U c. It means that the image of F must consist of essentially structure-free objects (so that there is no structure to preserve by morphisms). Such “structure-free” objects are called free objects.


In the monoid example, a free monoid has no structure other than what’s generated by unit and associativity laws. Other than that, all multiplications produce brand new elements.

In a free monoid, 2*3 is not 6 — it’s a new element [2, 3]. Since there is no identification of [2, 3] and 6, a morphism from this free monoid to any other monoid m is allowed to map them separately. But it’s also okay for it to map both [2, 3] and 6 (their product) to the same element of m. Or to identify [2, 3] and 5 (their sum) in an additive monoid, and so on. Different identifications give you different monoids.

This leads to another interesting intuition: free monoids, instead of performing the monoidal operation, accumulate the arguments that were passed to them. Instead of multiplying 2 and 3 they remember 2 and 3 in a list. The advantage of this scheme is that we don’t have to specify what monoidal operation we will use. We can keep accumulating arguments, and only at the end apply an operator to the result. And it’s then that we can choose what operator to apply. We can add the numbers, or multiply them, or perform addition modulo 2, and so on. A free monoid separates the creation of an expression from its evaluation. We’ll see this idea again when we talk about algebras.
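
A small illustration of this idea (my own example): build the list first, and pick the operation at the end:

expr :: [Int]
expr = [2, 3]

evalProd, evalSum :: [Int] -> Int
evalProd = foldr (*) 1  -- evalProd expr == 6
evalSum  = foldr (+) 0  -- evalSum  expr == 5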

This intuition generalizes to other, more elaborate free constructions. For instance, we can accumulate whole expression trees before evaluating them. The advantage of this approach is that we can transform such trees to make the evaluation faster or less memory consuming. This is, for instance, done in implementing matrix calculus, where eager evaluation would lead to lots of allocations of temporary arrays to store intermediate results.

Challenges

  1. Consider a free monoid built from a singleton set as its generator. Show that there is a one-to-one correspondence between morphisms from this free monoid to any monoid m, and functions from the singleton set to the underlying set of m.

Next: Monads.

Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.


This is part 18 of Categories for Programmers. Previously: It’s All About Morphisms. See the Table of Contents.

In mathematics we have various ways of saying that one thing is like another. The strictest is equality. Two things are equal if there is no way to distinguish one from another. One can be substituted for the other in every imaginable context. For instance, did you notice that we used equality of morphisms every time we talked about commuting diagrams? That’s because morphisms form a set (hom-set) and set elements can be compared for equality.

But equality is often too strong. There are many examples of things being the same for all intents and purposes, without actually being equal. For instance, the pair type (Bool, Char) is not strictly equal to (Char, Bool), but we understand that they contain the same information. This concept is best captured by an isomorphism between two types — a morphism that’s invertible. Since it’s a morphism, it preserves the structure; and being “iso” means that it’s part of a round trip that lands you in the same spot, no matter on which side you start. In the case of pairs, this isomorphism is called swap:

swap       :: (a,b) -> (b,a)
swap (a,b) = (b,a)

swap happens to be its own inverse.

Adjunction and Unit/Counit Pair

When we talk about categories being isomorphic, we express this in terms of mappings between categories, a.k.a. functors. We would like to be able to say that two categories C and D are isomorphic if there exists a functor R (“right”) from C to D, which is invertible. In other words, there exists another functor L (“left”) from D back to C which, when composed with R, is equal to the identity functor I. There are two possible compositions, R ∘ L and L ∘ R; and two possible identity functors: one in C and another in D.


But here’s the tricky part: What does it mean for two functors to be equal? What do we mean by this equality:

R ∘ L = I_D

or this one:

L ∘ R = I_C

It would be reasonable to define functor equality in terms of equality of objects. Two functors, when acting on equal objects, should produce equal objects. But we don’t, in general, have the notion of object equality in an arbitrary category. It’s just not part of the definition. (Going deeper into this rabbit hole of “what equality really is,” we would end up in Homotopy Type Theory.)

You might argue that functors are morphisms in the category of categories, so they should be equality-comparable. And indeed, as long as we are talking about small categories, where objects form a set, we can indeed use the equality of elements of a set to equality-compare objects.

But, remember, Cat is really a 2-category. Hom-sets in a 2-category have additional structure — there are 2-morphisms acting between 1-morphisms. In Cat, 1-morphisms are functors, and 2-morphisms are natural transformations. So it’s more natural (can’t avoid this pun!) to consider natural isomorphisms as substitutes for equality when talking about functors.

So, instead of isomorphism of categories, it makes sense to consider a more general notion of equivalence. Two categories C and D are equivalent if we can find two functors going back and forth between them, whose composition (either way) is naturally isomorphic to the identity functor. In other words, there is a two-way natural transformation between the composition R ∘ L and the identity functor I_D, and another between L ∘ R and the identity functor I_C.

Adjunction is even weaker than equivalence, because it doesn’t require that the composition of the two functors be isomorphic to the identity functor. Instead it stipulates the existence of a one-way natural transformation from I_D to R∘L, and another from L∘R to I_C. Here are the signatures of these two natural transformations:

η :: I_D -> R ∘ L
ε :: L ∘ R -> I_C

η is called the unit, and ε the counit of the adjunction.

Notice the asymmetry between these two definitions. In general, we don’t have the two remaining mappings:

R ∘ L -> I_D -- not necessarily
I_C -> L ∘ R -- not necessarily

Because of this asymmetry, the functor L is called the left adjoint to the functor R, while the functor R is the right adjoint to L. (Of course, left and right make sense only if you draw your diagrams one particular way.)

The compact notation for the adjunction is:

L ⊣ R

To better understand the adjunction, let’s analyze the unit and the counit in more detail.

[Figure: Adj-Unit]

Let’s start with the unit. It’s a natural transformation, so it’s a family of morphisms. Given an object d in D, the component of η is a morphism between ID d, which is equal to d, and (R ∘ L) d, which, in the picture, is called d':

ηd :: d -> (R ∘ L) d

Notice that the composition R∘L is an endofunctor in D.

This equation tells us that, to get from object d to d' we can either take the shortcut using the morphism ηd in D, or do the round trip using two functors, with a layover in C. Or, stretching the analogy, we can take the airport shuttle from Departures to Arrivals at the airport D and skip the flights altogether. Notice that the morphism is a one-way trip. There isn’t necessarily a shuttle going from d' back to d.

[Figure: Adj-Counit]

By the same token, the component of the counit ε can be described as:

εc :: (L ∘ R) c -> c

where c' is (L ∘ R) c. It tells us that, after we’ve gone on the round trip with a layover in D, we can take the airport shuttle to get back to our starting point.

Notice again the asymmetry between unit and counit. Unit saves you a round trip, while counit completes the round trip. (Useful mnemonic: counit completes.)

Another way of looking at unit and counit is that unit lets us introduce the composition R ∘ L anywhere we could insert an identity functor on D; and counit lets us eliminate the composition L ∘ R, replacing it with the identity on C. That leads to some “obvious” consistency conditions, which make sure that introduction followed by elimination doesn’t change anything:

R = ID ∘ R -> R ∘ L ∘ R -> R ∘ IC = R
L = L ∘ ID -> L ∘ R ∘ L -> IC ∘ L  = L

We often see unit and counit in Haskell under different names. Unit is known as return (or pure, in the definition of Applicative):

return :: d -> m d

and counit as extract:

extract :: w c -> c

Here, m is the (endo-) functor corresponding to R∘L, and w is the (endo-) functor corresponding to L∘R. As we’ll see later, they are part of the definition of a monad and a comonad, respectively.

If you think of an endofunctor as a container, the unit (or return) is a polymorphic function that creates a default box around a value of arbitrary type. The counit (or extract) does the reverse: it retrieves or produces a single value from a container.
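For instance (a sketch using two standard types; extract for non-empty lists is the comonadic extract from the comonad package, but we can define it directly):

import Data.List.NonEmpty (NonEmpty(..))

-- return for the list functor boxes a value: box 1 == [1]
box :: a -> [a]
box = return

-- extract retrieves the first element of a non-empty list
extractNE :: NonEmpty a -> a
extractNE (a :| _) = a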

We’ll see later that every pair of adjoint functors defines a monad and a comonad. Conversely, every monad or comonad may be factorized into a pair of adjoint functors — this factorization is not unique, though.

In Haskell, we use monads a lot, but only rarely factorize them into pairs of adjoint functors, primarily because those functors would normally take us out of Hask.

We can however define adjunctions of endofunctors in Haskell. Here’s part of the definition taken from Data.Functor.Adjunction:

class (Functor f, Representable u) =>
      Adjunction f u | f -> u, u -> f where
  unit         :: a -> u (f a)
  counit       :: f (u a) -> a

This definition requires some explanation. First of all, it describes a multi-parameter type class — the two parameters being f and u. It establishes a relation called Adjunction between these two type constructors.

Additional conditions, after the vertical bar, specify functional dependencies. For instance, f -> u means that u is determined by f (the relation between f and u is a function, here on type constructors). Conversely, u -> f means that, if we know u, f is uniquely determined.
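The consistency conditions we wrote down earlier translate directly into equations on unit and counit. Both composites below should reduce to the identity — a sketch against the class above (the names triangleL and triangleR are mine):

-- L = L ∘ ID -> L ∘ R ∘ L -> IC ∘ L = L, with f playing L and u playing R
triangleL :: Adjunction f u => f a -> f a
triangleL = counit . fmap unit   -- should equal id

-- R = ID ∘ R -> R ∘ L ∘ R -> R ∘ IC = R
triangleR :: Adjunction f u => u a -> u a
triangleR = fmap counit . unit   -- should equal id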

I’ll explain in a moment why, in Haskell, we can impose the condition that the right adjoint u be a representable functor.

Adjunction and Hom-Sets

There is an equivalent definition of the adjunction in terms of natural isomorphisms of hom-sets. This definition ties nicely with universal constructions we’ve been studying so far. Every time you hear the statement that there is some unique morphism, which factorizes some construction, you should think of it as a mapping of some set to a hom-set. That’s the meaning of “picking a unique morphism.”

Furthermore, factorization can often be described in terms of natural transformations. Factorization involves commuting diagrams — some morphism being equal to a composition of two morphisms (factors). A natural transformation maps morphisms to commuting diagrams. So, in a universal construction, we go from a morphism to a commuting diagram, and then to a unique morphism. We end up with a mapping from morphism to morphism, or from one hom-set to another (usually in different categories). If this mapping is invertible, and if it can be naturally extended across all hom-sets, we have an adjunction.

The main difference between universal constructions and adjunctions is that the latter are defined globally — for all hom-sets. For instance, using a universal construction you can define a product of two select objects, even if it doesn’t exist for any other pair of objects in that category. As we’ll see soon, if the product of any pair of objects exists in a category, it can also be defined through an adjunction.

[Figure: Adj-HomSets]

Here’s the alternative definition of the adjunction using hom-sets. As before, we have two functors L :: D->C and R :: C->D. We pick two arbitrary objects: the source object d in D, and the target object c in C. We can map the source object d to C using L. Now we have two objects in C, L d and c. They define a hom-set:

C(L d, c)

Similarly, we can map the target object c using R. Now we have two objects in D, d and R c. They, too, define a hom-set:

D(d, R c)

We say that L is left adjoint to R iff there is an isomorphism of hom-sets:

C(L d, c) ≅ D(d, R c)

that is natural both in d and c.

Naturality means that the source d can be varied smoothly across D; and the target c, across C. More precisely, we have a natural transformation φ between the following two (covariant) functors from C to Set. Here’s the action of these functors on objects:

c -> C(L d, c)
c -> D(d, R c)

The other natural transformation, ψ, acts between the following (contravariant) functors:

d -> C(L d, c)
d -> D(d, R c)

Both natural transformations must be invertible.

It’s easy to show that the two definitions of the adjunction are equivalent. For instance, let’s derive the unit transformation starting from the isomorphism of hom-sets:

C(L d, c) ≅ D(d, R c)

Since this isomorphism works for any object c, it must also work for c = L d:

C(L d, L d) ≅ D(d, (R ∘ L) d)

We know that the left hand side must contain at least one morphism, the identity. The natural transformation will map this morphism to an element of D(d, (R ∘ L) d) or, inserting the identity functor I, a morphism in:

D(I d, (R ∘ L) d)

We get a family of morphisms parameterized by d. They form a natural transformation between the functor I and the functor R ∘ L (the naturality condition is easy to verify). This is exactly our unit, η.

Conversely, starting from the existence of the unit and counit, we can define the transformations between hom-sets. For instance, let’s pick an arbitrary morphism f in the hom-set C(L d, c). We want to define a φ that, acting on f, produces a morphism in D(d, R c).

There isn’t really much choice. One thing we can try is to lift f using R. That will produce a morphism R f from R (L d) to R c — a morphism that’s an element of D((R ∘ L) d, R c).

What we need for a component of φ is a morphism from d to R c. That’s not a problem, since we can use the component ηd to get from d to (R ∘ L) d. We get:

φf = R f ∘ ηd

The other direction is analogous, and so is the derivation of ψ.

Going back to the Haskell definition of Adjunction, the natural transformations φ and ψ are replaced by polymorphic (in a and b) functions leftAdjunct and rightAdjunct, respectively. The functors L and R are called f and u:

class (Functor f, Representable u) =>
      Adjunction f u | f -> u, u -> f where
  leftAdjunct  :: (f a -> b) -> (a -> u b)
  rightAdjunct :: (a -> u b) -> (f a -> b)

The equivalence between the unit/counit formulation and the leftAdjunct/rightAdjunct formulation is witnessed by these mappings:

  unit           = leftAdjunct id
  counit         = rightAdjunct id
  leftAdjunct f  = fmap f . unit
  rightAdjunct f = counit . fmap f

It’s very instructive to follow the translation from the categorical description of the adjunction to Haskell code. I highly encourage this as an exercise.

We are now ready to explain why, in Haskell, the right adjoint is automatically a representable functor. The reason is that, to a first approximation, we can treat the category of Haskell types as the category of sets.

When the right category D is Set, the right adjoint R is a functor from C to Set. Such a functor is representable if we can find an object rep in C such that the hom-functor C(rep, _) is naturally isomorphic to R. It turns out that, if R is the right adjoint of some functor L from Set to C, such an object always exists — it’s the image of the singleton set () under L:

rep = L ()

Indeed, the adjunction tells us that the following two hom-sets are naturally isomorphic:

C(L (), c) ≅ Set((), R c)

For a given c, the right hand side is the set of functions from the singleton set () to R c. We’ve seen earlier that each such function picks one element from the set R c. The set of such functions is isomorphic to the set R c. So we have:

C(L (), -) ≅ R

which shows that R is indeed representable.
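In Haskell, representability is witnessed by a pair of functions, tabulate and index, that mediate the isomorphism between u and the hom-functor out of the representing object. Here’s the gist of the interface from Data.Functor.Rep (slightly simplified; the real class also has a Distributive superclass):

class Representable u where
  type Rep u :: *
  tabulate :: (Rep u -> a) -> u a
  index    :: u a -> Rep u -> a

The associated type Rep u plays the role of the representing object — for a right adjoint, the object L ().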

Product from Adjunction

We have previously introduced several concepts using universal constructions. Many of those concepts, when defined globally, are easier to express using adjunctions. The simplest non-trivial example is that of the product. The gist of the universal construction of the product is the ability to factorize any product-like candidate through the universal product.

More precisely, the product of two objects a and b is the object (a × b) (or (a, b) in the Haskell notation) equipped with two morphisms fst and snd such that, for any other candidate c equipped with two morphisms p::c->a and q::c->b, there exists a unique morphism m::c->(a, b) that factorizes p and q through fst and snd.

As we’ve seen earlier, in Haskell, we can implement a factorizer that generates this morphism from the two projections:

factorizer :: (c -> a) -> (c -> b) -> (c -> (a, b))
factorizer p q = \x -> (p x, q x)

It’s easy to verify that the factorization conditions hold:

fst . factorizer p q = p
snd . factorizer p q = q

We have a mapping that takes a pair of morphisms p and q and produces another morphism m = factorizer p q.

How can we translate this into a mapping between two hom-sets that we need to define an adjunction? The trick is to go outside of Hask and treat the pair of morphisms as a single morphism in the product category.

Let me remind you what a product category is. Take two arbitrary categories C and D. The objects in the product category C×D are pairs of objects, one from C and one from D. The morphisms are pairs of morphisms, one from C and one from D.

To define a product in some category C, we should start with the product category C×C. Pairs of morphisms from C are single morphisms in the product category C×C.

[Figure: Adj-ProductCat]

It might be a little confusing at first that we are using a product category to define a product. These are, however, very different products. We don’t need a universal construction to define a product category. All we need is the notion of a pair of objects and a pair of morphisms.

However, a pair of objects from C is not an object in C. It’s an object in a different category, C×C. We can write the pair formally as <a, b>, where a and b are objects of C. The universal construction, on the other hand, is necessary in order to define the object a×b (or (a, b) in Haskell), which is an object in the same category C. This object is supposed to represent the pair <a, b> in a way specified by the universal construction. It doesn’t always exist and, even if it exists for some pairs of objects, it might not exist for others.

Let’s now look at the factorizer as a mapping of hom-sets. The first hom-set is in the product category C×C, and the second is in C. A general morphism in C×C would be a pair of morphisms <f, g>:

f :: c' -> a
g :: c'' -> b

with c'' potentially different from c'. But to define a product, we are interested in a special morphism in C×C, the pair p and q that share the same source object c. That’s okay: In the definition of an adjunction, the source of the left hom-set is not an arbitrary object — it’s the result of the left functor L acting on some object from the right category. The functor that fits the bill is easy to guess — it’s the diagonal functor from C to C×C, whose action on objects is:

Δ c = <c, c>

The left-hand side hom-set in our adjunction should thus be:

(C×C)(Δ c, <a, b>)

It’s a hom-set in the product category. Its elements are pairs of morphisms that we recognize as the arguments to our factorizer:

(c -> a) -> (c -> b) ...

The right-hand side hom-set lives in C, and it goes between the source object c and the result of some functor R acting on the target object in C×C. That’s the functor that maps the pair <a, b> to our product object, a×b. We recognize this element of the hom-set as the result of the factorizer:

... -> (c -> (a, b))

[Figure: Adj-Product]

We still don’t have a full adjunction. For that we first need our factorizer to be invertible — we are building an isomorphism between hom-sets. The inverse of the factorizer should start from a morphism m — a morphism from some object c to the product object a×b. In other words, m should be an element of:

C(c, a×b)

The inverse factorizer should map m to a morphism <p, q> in C×C that goes from <c, c> to <a, b>; in other words, a morphism that’s an element of:

(C×C)(Δ c, <a, b>)

If that mapping exists, we conclude that the right adjoint to the diagonal functor exists. That functor defines a product.

In Haskell, we can always construct the inverse of the factorizer by composing m with, respectively, fst and snd.

p = fst . m
q = snd . m
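Packaged as a single function, the inverse mapping might look like this (a sketch; the name unfactorizer is mine):

unfactorizer :: (c -> (a, b)) -> (c -> a, c -> b)
unfactorizer m = (fst . m, snd . m)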

To complete the proof of the equivalence of the two ways of defining a product we also need to show that the mapping between hom-sets is natural in a, b, and c. I will leave this as an exercise for the dedicated reader.

To summarize what we have done: A categorical product may be defined globally as the right adjoint of the diagonal functor:

(C × C)(Δ c, <a, b>) ≅ C(c, a×b)

Here, a×b is the result of the action of our right adjoint functor Product on the pair <a, b>. Notice that any functor from C×C is a bifunctor, so Product is a bifunctor. In Haskell, the Product bifunctor is written simply as (,). You can apply it to two types and get their product type, for instance:

(,) Int Bool ~ (Int, Bool)

Exponential from Adjunction

The exponential bᵃ, or the function object a⇒b, can be defined using a universal construction. This construction, if it exists for all pairs of objects, can be seen as an adjunction. Again, the trick is to concentrate on the statement:

For any other object z with a morphism

g :: z × a -> b

there is a unique morphism

h :: z -> (a⇒b)

This statement establishes a mapping between hom-sets.

In this case, we are dealing with objects in the same category, so the two adjoint functors are endofunctors. The left (endo-)functor L, when acting on object z, produces z × a. It’s a functor that corresponds to taking a product with some fixed a.

The right (endo-)functor R, when acting on b, produces the function object a⇒b (or bᵃ). Again, a is fixed. The adjunction between these two functors is often written as:

- × a ⊣ (-)ᵃ

The mapping of hom-sets that underlies this adjunction is best seen by redrawing the diagram that we used in the universal construction.

[Figure: Adj-Expo]

Notice that the eval morphism is nothing else but the counit of this adjunction:

(a⇒b) × a -> b

where:

(a⇒b) × a = (L ∘ R) b
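In Haskell, the two directions of this hom-set isomorphism are, up to naming, curry and uncurry, and the counit is function application — a sketch specialized to Hask:

-- the isomorphism C(z × a, b) ≅ C(z, a⇒b)
toExp :: ((z, a) -> b) -> (z -> (a -> b))
toExp g = \z a -> g (z, a)       -- essentially curry

fromExp :: (z -> (a -> b)) -> ((z, a) -> b)
fromExp h = \(z, a) -> h z a     -- essentially uncurry

-- the counit, a.k.a. eval
eval :: (a -> b, a) -> b
eval (f, a) = f a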

I have previously mentioned that a universal construction defines a unique object, up to isomorphism. That’s why we have “the” product and “the” exponential. This property translates to adjunctions as well: if a functor has an adjoint, this adjoint is unique up to isomorphism.

Challenges

  1. Derive the naturality square for ψ, the transformation between the two (contravariant) functors:
    a -> C(L a, b)
    a -> D(a, R b)
  2. Derive the counit ε starting from the hom-sets isomorphism in the second definition of the adjunction.
  3. Complete the proof of equivalence of the two definitions of the adjunction.
  4. Show that the coproduct can be defined by an adjunction. Start with the definition of the factorizer for a coproduct.
  5. Show that the coproduct is the left adjoint of the diagonal functor.
  6. Define the adjunction between a product and a function object in Haskell.

Next: Free/Forgetful Adjunctions.

Acknowledgments

I’d like to thank Edward Kmett and Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.


Note: This was published on April 1st.

For many years I’ve been advocating strong type systems. Their advantages seemed so obvious to me that I didn’t even bother doing much research in that area. But since I was being pestered by the advocates of untyped languages I finally decided to prove once and for all that strong typing is superior. Surprisingly, the task wasn’t as easy as I anticipated. More and more self-doubt started clouding my mind.

Take for instance the argument that it’s better to detect programmer’s errors at compile time than at runtime. Well, is it? We are making a huge assumption here that a correct program is better than a running program. In practice, a running program beats a correct program 99% of the time. What really matters is time-to-market (TTM), and trying to bypass a type system takes a lot of work and skill.

There is no guarantee that an incorrect program will actually fail, and even if it fails, it might fail only on a small percentage of tasks. If the bug is really annoying, it may be fixed in the next release of the product. Notice the trend in the industry to regularly push updates and bug fixes to users, who are now almost always connected to the Internet (see Windows Update, JVM, Adobe Flash, etc.). Ever since Borland’s Turbo C++ compiler, it became obvious to software manufacturers that convenience, TTM, and cute user interfaces beat reliability hands down. Putting a smiley face on an error message sells more programs than fixing the bug.


Another problem with strong typing — something that the industry has been long aware of — is that it often takes programmers with an MSc (or sometimes even a PhD, in the case of generic C++) to understand and fix type errors that are detected by the compiler. These guys are expensive to hire and they tend to be snooty and turn up their noses at less interesting jobs.

Having realized that, I started looking at weakly typed or typeless languages. They are much easier to program in. The compiler doesn’t impose its totalitarian regime on the programmer. Not having to follow strict typing rules frees the programmer to concentrate on more important creative tasks — something we humans excel at. I was almost ready to switch to JavaScript when I discovered runtime errors. Runtime errors happen when the program takes a path that was not anticipated by the programmer (after all, the programmer always tests the most common path). That makes runtime errors relatively rare, but still, they could be a nuisance for the end user.

I realized that we needed a new programming language, a typeless and extremely expressive language. I called this language JDI! (it stands for Just Do It! and is pronounced Jedi). It’s essentially untyped lambda calculus mixed with some assembly. But the real clincher is the runtime that I called SBR (Seldom-Blow Runtime, pronounced saber). I have already implemented the proof of concept version of SBR called Light SBR. I will release it to GitHub tomorrow.

The key insight behind SBR is that, when a runtime error happens, the runtime already knows why the problem occurred (null pointer, missing functionality, index out of bounds, etc.). In most cases the runtime, instead of throwing an exception, can simply do something other than what the program seemed to demand.

I’m not claiming that this is an original discovery — this type of programming has been successfully applied to web applications across the board. All I’m trying to do is to extend it to the mainstream. I’m happy to announce that some industries have already expressed interest in my project. We are going to try it first on self-driving cars, autopilots, and ultimately on the new generation of nuclear power plants.


I came in contact with Tambara modules when working on a categorical understanding of lenses. They were first mentioned to me by Edward Kmett, who implemented their Haskell version, Data.Profunctor.Tambara. Recently I had a discussion with Russell O’Connor about profunctor lenses. He then had a discussion with James “xplat” Deikun, who again pointed out the importance of Tambara modules. That finally pushed me to study Tambara’s original paper along with Pastro and Street’s generalizations of it. These are not easy papers to read; so, to motivate myself, I started writing this post with the idea of filling in the gaps in my education and providing some background and intuitions I gather in the process. Trying to explain things always helps me understand them better. I will also sketch some of the proofs — for details, see the original papers.

The general idea is that lenses are used to access components of product data types, whereas prisms are used with coproduct (sum) data types. In order to unify lenses and prisms, we need a framework that abstracts over products and coproducts. It so happens that both are examples of a tensor product. Tensors have their roots in vector calculus, so I’ll start with a little refresher on it, to see where the intuitions come from. Tensors may also serve as objects upon which we can represent groups or monoids.

The next step is to define a monoidal category, where the tensor product plays a role analogous to group (actually, monoid) action. Tensor categories are built on top of (or, enriched over) monoidal categories.

We can define monoidal action on tensor categories — analogous to representations of groups on tensor fields. One particular tensor category of interest to us is the category of distributors (profunctors). Distributors equipped with a tensor action are the subject of Tambara’s paper. It turns out that tensor action on distributors is directly related to profunctor strength, which is the basis of the general formulation of Haskell lenses and prisms.

Vectors and Tensors

We all have a pretty good idea of what a vector space is. It’s a set of vectors with vector addition and multiplication by numbers. The numbers, or scalars, come from some field K (for instance real or complex numbers). These operations must obey some obvious rules. For instance, multiplying any vector by 1 (the multiplicative unit of K) gives back the same vector:

1v = v

Then there are the linearity conditions (α and β are scalars from K, and v and w are vectors):

αv + βv = (α + β)v
α(v + w) = αv + αw

which can be used to prove that every vector space has a basis — a minimal set of vectors whose linear combinations generate the whole space.

In other words, any vector v can be written as a combination of base vectors ei:

v = Σᵢ αᵢeᵢ

where Σᵢ represents the sum over all indices i.

Or we can go the other way: We can start with a set B that we call the base set, define formal addition and multiplication, and then create a free structure containing all formal combinations of base vectors. Our linearity laws are then used to identify equivalent combinations. This is called a free vector space over B. The advantage of this formulation is that it generalizes easily to tensor spaces.

A tensor space is created from two or more vector spaces. The elements of a tensor space are formal combinations of elements from the constituent vector spaces. Those “formal combinations” can be described in terms of a tensor product. A tensor product V ⊗ W of two vector spaces is a mapping of the cartesian product V × W (a set of pairs of vectors) to the free vector space built on top of V and W, with the appropriate identifications:

(v, w) + (v', w) = (v + v', w)
(v, w) + (v, w') = (v, w + w')
α(v, w) = (αv, w) = (v, αw)

A tensor product can also be defined for mappings between vector spaces. Given four vector spaces, V, W, X, Y, we can consider linear maps between them:

f :: V -> X
g :: W -> Y

The tensor product of these maps is a linear mapping between tensor products of the appropriate spaces:

(f ⊗ g) :: V ⊗ W -> X ⊗ Y
(f ⊗ g)(v ⊗ w) = (f v) ⊗ (g w)

Vector spaces form a category Vec with linear maps as morphisms. Tensor product can then be defined as a bifunctor from the product category Vec × Vec to Vec (so it also maps pairs of morphisms to morphisms).

Given a vector space V, we can also define a dual space V* of linear functions from V to K (remember, K is the field from which we get our scalars). The action of an element f of V* on a vector v from V is called evaluation (or, in physics, contraction):

eval :: V* ⊗ V -> K
eval f v = f v

Given a basis eᵢ in V, the canonical basis in V* is a set of functions e*ᵢ such that:

eval e*ᵢ eⱼ = δᵢⱼ

where δᵢⱼ is 1 for i=j and 0 otherwise (the Kronecker delta). Seen as a matrix, δ is a unit matrix. It almost looks like the dual space provides the “inverses” of vectors. This is an important intuition.

A general tensor space supports tensor products involving a mixture of vectors and dual vectors (linear maps). In physics, this allows the construction of mixed covariant and contravariant tensors.

The dual to evaluation is called co-evaluation. In finite-dimensional vector spaces, it’s a mapping:

coeval :: K -> V ⊗ V*
coeval α = Σᵢ α eᵢ ⊗ e*ᵢ

It takes a scalar α and creates a tensor using basis vectors and their duals. Tensors can be summed and multiplied by scalars.

One obvious generalization of vector (and tensor) spaces is to replace the field K with a ring. A ring has addition, subtraction, and multiplication, but it doesn’t have division.

Groups and Monoids

Groups were originally introduced in terms of actions on vector spaces. The action of a group element g on a vector v maps it to another vector in the same vector space. This mapping is linear:

g (αv + βw) = α (g v) + β (g w)

Because of linearity, group action is fully determined by the transformation of basis vectors. An element of a group acting on a vector v = Σᵢvᵢeᵢ produces a vector w that can be decomposed into components w = Σᵢwᵢeᵢ:

wᵢ = Σⱼ gᵢⱼvⱼ

The numbers gᵢⱼ form a square matrix. The mapping of group elements to these matrices is called the representation of the group. The group of rotations in two dimensions, for instance, can be represented using 2×2 matrices of the form:

|  cos α  sin α |
| -sin α  cos α |

This is an example of a representation of an infinite continuous group called SO(2) (Special Orthogonal group in 2-d).

Applying a group element to a vector produces another vector that can be acted upon by another group element, and so on. You can forget about elements acting on vectors and define group multiplication abstractly. A group is a (potentially infinite) set of elements with a binary operation that is associative, has a neutral (identity) element, and an inverse for every element. It turns out that the same group may have many representations in vector spaces. Associativity and identity in the group happen also to be the basic properties defining a category — invertibility, though, is not. It should come as no surprise that categorists prefer the simpler monoid structure and consider a group a more specialized version of it.

You get a monoid by abandoning the requirement that all elements of a group have an inverse. Or, even more abstractly, you can define a monoid as a single-object category, where the composition of (endo-) morphisms defines multiplication, and the identity morphism is the neutral element. These two definitions are equivalent because endomorphisms form a set — the hom-set — that can be identified with the set of elements of the monoid. The hom-set has composition of endomorphisms, which can be identified with the binary monoidal operation.

Some groups and monoids are commutative (for instance, integer addition); others are not (for instance, string concatenation). The subgroup or submonoid of elements that commute with everything is called the center of the group or monoid. Elements of the center must commute with all elements of the group, not only among themselves.

You may also think of representing a group (or a monoid) as acting on itself. There are two ways of doing that: the left action and the right action. The action of a group element g can be represented as transforming the whole group by multiplying each element by g on the left:

Lg h = g * h

or on the right:

Rg h = h * g

Such a transformation results in a reshuffling of the elements of the group. Each g defines a different reshuffling. A reshuffling (for finite sets) is called a permutation, and one of the fundamental theorems in group theory, due to Cayley, says that every group is isomorphic to some permutation group.

Cayley’s theorem can be generalized to monoids. Instead of the reshuffling of elements we then talk about endomorphisms. Every monoid defined as a set M with multiplication and unit can be represented as a submonoid of endomorphisms of that set.

This equivalence is well known to Haskell programmers. Monoid multiplication may be represented as an uncurried binary function:

mappend :: (M, M) -> M

or, after currying, as a function returning an endomorphism:

mappend :: M -> (M -> M)

The unit element, mempty, becomes the identity endomorphism, id.
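In Haskell, this Cayley-style representation is readily expressed using the Endo wrapper from Data.Monoid. A sketch (the names toEndo and fromEndo are mine):

import Data.Monoid (Endo(..))

-- embed a monoid into the endomorphisms of its own carrier
toEndo :: Monoid m => m -> Endo m
toEndo m = Endo (m <>)

-- recover the element by applying the endomorphism to the unit
fromEndo :: Monoid m => Endo m -> m
fromEndo (Endo f) = f mempty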

Monoidal Category

We’ve seen that a monoid can be defined as a set of elements with a binary operation, or as a single-object category. The next step in this ladder of abstractions is to rethink the idea of forming pairs of elements for a binary operation. When dealing with sets, the pairs are just elements of the cartesian product.

In a more general categorical setting, the cartesian product may be replaced with the categorical product. Multiplication is just a morphism from the product m×m to the object m itself. But how do we select the unit element of m? Categorical objects have no internal structure. So instead we use a generalized element, which is defined as a morphism from the terminal object (in Set, that would be the singleton set) to m.

A monoid can thus be defined as an object m in a category that has products and the terminal object t together with two morphisms:

mult :: m × m -> m
unit :: t -> m

But what if the category C in which we are trying to define a monoid doesn’t have a product or the terminal object? No problem! Instead of categorical product we’ll define a bifunctor ⊗:

⊗ :: C × C -> C

It’s a functor from the product category C×C to C. It’s called a tensor product by analogy with the vector space construction we started with. As a functor, it also defines a mapping of morphisms.

Instead of the terminal object, we just pick one special object i and define a generalized unit as a morphism from i to m.

The tensor product must fulfill some obvious conditions like associativity and the unit laws. We could define them by equalities, e.g.,

(a ⊗ b) ⊗ c = a ⊗ (b ⊗ c)
i ⊗ a = a = a ⊗ i

The snag is that our prototypical tensor product, the cartesian product, doesn’t satisfy those identities. Consider the Haskell implementation of the cartesian product as a pair type, with the unit element as the unit type (). It’s not exactly true that:

((a, b), c) = (a, (b, c))
((), a) = a = (a, ())

However, it’s almost true. The types on both sides of the equations are isomorphic, as can be shown by defining polymorphic functions that mediate between those terms. In category theory, those polymorphic functions are replaced by natural transformations.

A category in which associativity and unit laws of the tensor product can be expressed as equalities is called a strict monoidal category. A category in which these laws are imposed only up to natural isomorphisms is called non-strict, or a lax monoidal category. The three isomorphisms are called the associator α, the left unitor λ, and the right unitor ρ, respectively:

α :: (a ⊗ b) ⊗ c -> a ⊗ (b ⊗ c)
λ :: i ⊗ a  -> a
ρ :: a ⊗ i -> a

(They all must have inverses.)

A useful example of a strict monoidal category is the category of endofunctors of some category C. We use functor composition for tensor product. Composition of two endofunctors F and G is always well defined and it produces another endofunctor G∘F. The unit of this monoidal category is the identity functor Id. Strict associativity and unit laws follow from the definition of functor composition and the definition of the identity functor.
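In Haskell, this monoidal structure is visible in Data.Functor.Compose, with the caveat that the laws hold only up to newtype wrapping and unwrapping — a sketch:

import Data.Functor.Compose (Compose(..))
import Data.Functor.Identity (Identity(..))

-- the unit laws: composing with Identity changes nothing (modulo wrappers)
unitLeft :: Functor f => Compose Identity f a -> f a
unitLeft (Compose (Identity fa)) = fa

unitRight :: Functor f => Compose f Identity a -> f a
unitRight (Compose fia) = fmap runIdentity fia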

In some categories it’s possible to define an exponential object aᵇ, which represents a set of morphisms from b to a. The standard way of doing it is through the adjunction:

C(a × b, c) ≅ C(b, cᵃ)

Here, C(x, y) represents the hom-set from x to y. The two hom-sets in the adjunction must be naturally isomorphic. In general, an adjunction is between two functors L and R:

C(L b, c) ≅ C(b, R c)

Here the two functors are:

Lₐ b = a × b
Rₐ c = cᵃ

This definition of the exponential object can be extended to monoidal categories by replacing categorical product with the tensor product:

C(a ⊗ b, c) ≅ C(b, cᵃ)

In a monoidal category we can also define a left exponential:

C(a ⊗ b, c) ≅ C(a, ᵇc)

(if the tensor product is symmetric, or lax symmetric — up to a natural isomorphism — these two exponentials coincide).

There is an equivalent definition of an adjunction through unit η and counit ε — two natural transformations. These transformations are between the composition of two adjoint functors and the identity functor:

η :: Id -> R ∘ L
ε :: L ∘ R -> Id

This comes even closer to convincing us that exponentiation is the inverse of a product.

As it often happens, having equivalent definitions of the same thing may lead to different generalizations. We’ve seen that the category of endofunctors in some category C is a (strict) monoidal category. We can pick two endofunctors F and G and define two natural transformations:

ε :: F ∘ G -> Id
η :: Id -> G ∘ F

If such transformations exist, the pair F and G form an adjunction, F being left adjoint to G.

This definition can be extended to any monoidal product, not just composition. In a tensor category A, we have the unit object i, and we can try to define two morphisms:

ε :: a ⊗ a' -> i
η :: i -> a' ⊗ a

The pair of objects a and a' together with the morphisms ε and η are called a duality. A category is called rigid or autonomous if there is a dual for every object. A duality gives rise to an adjunction:

Hom(ax, y) ≅ Hom(x, a'y)

Comparing this with the exponential adjunction, we see that the dual of a acting on y plays the role of yᵃ. In other words, a' says: raise the object that I’m multiplying to the power of a.

There may be many duals of a, but we can always choose one and denote it as aᶜ. Multiplying it by y, aᶜy is analogous to taking the exponential yᵃ. It also works like a lax inverse of a because of:

ε :: a aᶜ -> i
η :: i -> aᶜ a

Notice that the duality functor, the mapping from a to aᶜ, is contravariant in a.

Tambara works with rigid categories, whereas Pastro and Street mostly work with closed categories — with exponentials defined for every pair of objects.

Enriched Categories

In traditional category theory hom-sets are just sets. It’s possible, though, to replace hom-sets with something with more structure. Tambara, for instance, uses vector spaces for his hom-sets. In general, hom-sets may be replaced by objects in some other base category — this results in the notion of an enriched category. This base category must have some additional structure in order to support composition in the enriched category.

Composition of morphisms in a regular category is defined in terms of elements of hom-sets. It’s a mapping from a pair of composable morphisms to a morphism. Objects in an arbitrary category might not support the notion of “elements.” So we have to express composition in terms of entire hom-objects rather than their individual elements. The minimal structure necessary for that is a monoidal category. Instead of pairs of morphisms, we’ll operate on a whole (monoidal) product of two hom-objects. Composition is then a morphism in the base category V:

∘ :: A(b, c) ⊗ A(a, b) -> A(a, c)

Here, A(a, b) is an object in V — the hom-object from a to b. Associativity of composition reflects the associativity of the monoidal product (it may be lax!).

Identity morphisms are then “picked” in any A(a, a) by a morphism in V:

ida :: i -> A(a, a)

where i is the unit object in V. That’s the same trick we used to define generalized elements of objects in a monoidal category. Again, unit laws may be lax.

The main purpose of defining categories is to enable the definitions of functors and natural transformations. Functors map objects and morphisms, so in enriched categories, they have to map objects and hom-objects. Therefore it only makes sense to define enriched functors between categories that are enriched over the same base monoidal category, because that’s where the hom-objects live. An enriched functor must preserve composition — which is defined in terms of the monoidal product — and the identity morphism, which is defined in terms of the monoidal unit.

Similarly, it’s possible to define a natural transformation between two enriched functors F and G that go between two V-enriched categories A and B. The naturality square turns into a naturality hexagon that connects the object A(a, a’) to the object B(F a, G a’) in two different ways. Normally, components of a natural transformation are morphisms between F a and G a. In the enriched setting, there is no way to “pick” individual morphisms. Instead we use morphisms from the identity object in V — generalized elements of hom-objects.

Functors between two given categories A and B (enriched or not) form a category, with natural transformations as morphisms. On the other hand, functors are morphisms in the category Cat of (small) categories. The set of functors between two categories A and B is therefore both a hom-set in Cat and a category. Tambara denotes those hom-categories Hom(A, B). I will use this notation throughout. Otherwise, for hom-sets (and hom-objects in the enriched case) I will use the standard notation C(a, b), where C is the category, and a and b are objects in C.

The starting point of both Tambara and Pastro/Street is a tensor category A. It’s a category enriched over a monoidal category V. There is a separate tensor product defined in A. In Tambara, V is the category of vector spaces with the usual tensor product. In Pastro/Street, V is an arbitrary monoidal category.

Without loss of clarity, the tensor product in A is written without the use of any infix operator. For two objects x and y of A, the product is just xy. A tensor product of two morphisms f::x->x' and g::y->y' is denoted, in Tambara, as fg::xy->x'y' (not to be confused with composition f'∘f). Tambara assumes that associativity and unit laws in A are strict.

Summary

We have the following layers of abstraction:

  • V is a monoidal category
  • A is a tensor category enriched over V.

Modules

By analogy with groups and vector spaces, we would like to define the action of a tensor category A on some other category X. As before, we have the choice between left and right action (or both). Let’s start with the left action. It’s a bifunctor:

A × X -> X

In components, the notation is simplified to (no infix operator):

<a, x> -> ax

where a is an object in A and x is an object in X.

We want this functor to be associative and unital. In particular:

ix = x

where i is the unit in the tensor category A. The category X with these properties is called a left A-module.

Similarly, a right B-module is equipped with the functor:

X × A -> X

The interesting case is a bimodule with two bifunctors:

A × X -> X
X × B -> X

The two tensor categories A and B may potentially be different (although they both must be enriched over the same category V as X).

Notice that A itself is a bimodule with both left and right action of A on A defined by the (tensor) product in A.

The usual thing in category theory is to introduce structure-preserving functors between similar categories.

So, if X and Y are two left modules over the same tensor category A, we can define A-linear functors that preserve the action of A. Such functors, in turn, form a category that Tambara calls HomA(X, Y) (notice the subscript A). Linearity in this case means that the left action weakly commutes with the functor. In other words, we have a natural isomorphism (here again, left action is understood without any infix operator):

λa, x :: F (ax) -> a(F x)

This mapping is invertible (it’s an isomorphism).

The same way we can define right- and bi- linear functor categories. In particular an (A, B)-linear functor preserves both the left and the right actions. Tambara calls the category of such functors HomA, B(X, Y). Linearity in this case means that we have two natural isomorphisms:

λa, x :: F (ax) -> a(F x)
ρx, b :: F (xb) -> (F x)b

The first result in Tambara’s paper is that, if X is a right A-module, then the category of right linear functors HomA(A, X) from A to X is equivalent to X.

The proof is quite simple. Right to left: Pick an object x in X. We can map it to a functor:

Gx :: A -> X

defined as the right action of a on x:

Gx a = xa

Its linearity is obvious:

ρa, b :: Gx (ab) -> (Gx a)b

Notice also that evaluating Gx at the unit i of A produces x. So, going from left to right, we can define the inverse mapping from HomA(A, X) to X as the evaluation of the functor at i.

The intuition from group theory is that the (right, in this case) action of the whole group on a fixed x creates an orbit in X. An orbit is the set of points (vectors) that can be reached from x by acting on it with all group elements (imagine a group of rotations around a fixed axis in 3-d — here, orbits are just circles). In our case, we can get an orbit of any x as the image of a linear functor Gx defined above that goes from A (the equivalent of the group) to X (the equivalent of the vector space in which we represent the group). It so happens that the image of any linear functor G from A to X is an orbit. It’s the orbit of the object G i. Any object in the image of G can be reached from G i by the action of some object of A. The image of G consists of objects of the form G a. G a can be rewritten as G (ia) which, by (right) linearity, is the same as (G i)a.

Our intuition that there should be more functors from A to X than there are objects in X fails when we impose the linearity constraint. The functors in HomA(A, X) are no longer linearly independent. There is a “basis” in that “space” that is in one-to-one correspondence with the objects of X.

A similar proof works for left modules.

The situation is trickier for bimodules with both left and right action, even if we pick the same tensor category on both sides, that is, work with an (A, A)-bimodule.

Suppose that we wanted to map X to HomA, A(A, X). We can still define the (orbit of x) functor Gx as xa with the same ρa, b. But there is a slight problem with defining λb, a. We want:

λb, a :: Gx (ba) -> b(Gx a)

which will work if there is a transformation:

xba -> bxa

We would like x to (weakly) commute with b. By analogy with the center of a group, we can define a centralizer ZA(X) as the category of those objects of X for which there is an isomorphism ωa between ax and xa. The equivalence of categories in this case is:

HomA,A(A, X) ≅ ZA(X)

So, for any object x that’s in the centralizer of X, we can define our Gx as xa. Conversely, for any (A, A)-linear functor, we can evaluate it at i to get an object of X. This object can be shown to be a member of the centralizer because, for any F in HomA, A(A, X):

a(F i) = F (a i) = F (i a) = (F i)a

Summary

We have the following layers of abstraction:

  • V is a monoidal category
  • A (and B) are tensor categories enriched over V
  • X is a category enriched over V
  • X is a module, if the action of A is defined over X (left, right, or both)
  • Linear functors between X and Y preserve left, right, or bi actions of A (or B).

In particular, bilinear functors from A (with the left and right action of A) to a bimodule X are in one to one correspondence with the centralizer ZA(X) of X under the action of A.

Distributors

To understand distributors, it helps to know a bit about calculus and/or signal processing. In calculus we deal with functions. Functions can be integrated. We can also have functions acting on functions — or functionals. In particular we can have linear functions on functions. It turns out that a lot of such functionals can be defined through integrals. A linear functional can be expressed as integration of test functions with some density. A density may be a function of two arguments, but a general linear functional may require a generalized density. For instance, the famous Dirac delta “function” cannot be represented as a function, although physicists often write:

f(x) = ∫ δ(x - y) f(y) dy

Such generalized functions are called distributions. Direct multiplication of distributions is ill-defined — the annoying infinities that crop up in quantum field theory are the result of attempts to multiply quantum fields, which are distributions.

A better product can be defined through convolution. Convolutions happen to be at the core of signal processing. If you want to soften an image, you convolve it with a Gaussian density. Convolution with a delta function reproduces the original image. Edge enhancement is done with the derivative of a delta function, and so on.

Convolutions can be generalized to functions over groups:

(f ★ g)(x) = ∫ f(y) g(y⁻¹x) dλ(y)

where λ is a suitable group measure.

Roughly speaking, distributors are to functors as distributions are to functions. You might know distributors under the name of profunctors. A profunctor is a functor of two arguments, one of them from the opposite category.

p :: Xop × Y -> Set

In a way, a profunctor is a generalization of a bifunctor, at least when acting on objects. When acting on morphisms, a profunctor is contravariant in one argument and covariant in another. In the case of Y being the same as X, this is similar to the hom-functor C(a, b) being contravariant in a and covariant in b. A hom-functor is the simplest example of a profunctor. As we’ll see later, it’s even possible to model the composition of profunctors on composition of hom-functors.

A profunctor acting on two objects produces a set, an object in the Set category (again, generalizing a hom-functor, which is also Set-valued). Acting on a pair of morphisms (which is the same as a single morphism in the product category Xop × Y), a profunctor produces a function.

Distributors can be generalized to categories that are enriched over the same monoidal category V. In that case they are V-valued functors from Xop × Y to V.

Since distributors (profunctors) are functors, they form a functor category denoted by D(X, Y). Objects in a distributor category are (enriched) functors:

Xop × Y -> V

and morphisms are (enriched) natural transformations.

On the other hand, we can treat a distributor as if it were a morphism between categories (it has the right covariance for that). The composition of such morphisms is defined through the coend formula (a coend for profunctors is analogous to a colimit for functors):

(p ∘ q) x y = ∫z (p x z) ⊗ (q z y)

Here, p and q are distributors:

p :: (Xop × Z) -> V
q :: (Zop × Y) -> V

The tensor product is the product in V (here we explicitly use the infix operator). We “integrate” over the object z in the middle.

This way we can define a bicategory Dist of categories where distributors are morphisms (one-cells) and natural transformations are two-cells. If we also consider regular functors between categories, we get what is called a double category (not to be confused with a 2-category or a bicategory, which are all slightly different).

There is an equivalent way of looking at distributors as a generalization of relations. A relation between two sets is a subset of pairs of elements from those sets. We can model this categorically by treating a set as a discrete category of elements (no morphisms other than identities). The relation between two such sets is a set of formal arrows between their elements — when two elements are related, there is a single arrow between them, otherwise there’s no arrow. Now we can replace sets by categories and define a relation as a bifunctor from those categories to Set. An object from category X is “related” to an object from Y if the bifunctor in question maps them into a non-empty set, otherwise they are unrelated. Since there are many non-empty sets to choose from, there may be many “levels” of relation: ones corresponding to a singleton set, a doubleton, and so on.

We also have to think about the mapping of (pairs of) morphisms from the two categories. Since we would like the opposite relation to be a functor from opposite categories, the symmetric choice is to define a relation as a functor that is contravariant in one argument and covariant in the other — in other words, a profunctor. That way the opposite relation will still be a profunctor, albeit with ops reversed.

One can define the composition of relations. If p relates X to Z and q relates Z to Y then we say that an object x from X is related to an object y from Y if and only if there exists an object z of Z (an object in the middle) such that x is related to z and z is related to y. This existential quantification of z is represented, in category theory, as a coend (just as an end corresponds to the universal quantifier). Thus, through composition of relations, we recover the formula for the composition of profunctors:

(p ∘ q) x y = ∫z (p x z) ⊗ (q z y)
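In Haskell, the coend becomes an existential type: the object in the middle is hidden inside a data constructor. This is the idea behind Procompose from Data.Profunctor.Composition; here’s a self-contained sketch (the name PCompose is mine):

{-# LANGUAGE ExistentialQuantification #-}

-- (PCompose p q) x y realizes ∫z (p x z) ⊗ (q z y); z is existentially
-- hidden, and the tensor product of sets becomes a plain pair of fields
data PCompose p q x y = forall z. PCompose (p x z) (q z y)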

There is also a tensor structure in the distributor category D(X, Y) defined by Day convolution, which I’ll describe next.

Summary

We have the following layers of abstraction:

  • V is a monoidal category
  • A (and B) are tensor categories enriched over V
  • X, Y, and Z are categories enriched over V
  • A distributor is a functor from Xop × Y to V
  • Distributors may also be treated as “arrows” between categories and composed using coends.

Day Convolution

By analogy with distributions, distributors also have the equivalent of convolution defined on them. The integral is replaced by coend. The Day convolution of two functors F and G, both from the V-enriched monoidal category A to V, is defined as a (double) coend:

(F ⊗ G) x = ∫a,b A(a ⊗ b, x) ⊗ (F a) ⊗ (G b)

Notice the (here, explicit) use of a tensor product for objects of V (as well as for objects of A and for functors — hopefully this won’t lead to too much confusion). For V equal to Set, this is usually replaced by a cartesian product, but in Haskell this could be a product or a sum. In the formula, we also use the covariant hom-functor that maps an arbitrary object x in A to the hom-set A(a ⊗ b, x). This use of a coend justifies the use of the integral symbol: we are “integrating” over objects a and b in A.

If you squint hard enough you might find similarity between the Day convolution formula and the convolution on a group. Here we don’t have a group, so we have no analog of y-1x. Instead we define an appropriate “measure” using A(a ⊗ b, _). The convolution formula may be “partially integrated” to give the following equivalent definitions:

(F ⊗ G) x = ∫b (F ᵇx) ⊗ (G b)
(F ⊗ G) x = ∫a (F a) ⊗ (G xᵃ)

Here you can see the resemblance to group convolution even better — if you remember that exponentiation can be thought of as the “inverse” of the tensor product. The left and right exponentiations are analogous to the left and right inverses.

The partial integration trick is the consequence of the so-called ninja Yoneda lemma, which can be written as:

F x = ∫a A(a, x) ⊗ (F a)

Notice that the hom-functor A(a, x) plays the role of the Dirac delta function.

There is also a unit J of Day convolution:

J x = A(i, x)

where i is the monoidal identity object.

Taken together this shows that Day convolution is a tensor product in the category of enriched functors (hence the use of the tensor symbol ⊗).

It’s interesting to see Day convolution expressed in Haskell. The category of Haskell types (which is approximately Set, modulo termination) can be treated as enriched over itself. The tensor product is just the cartesian product, represented either as a pair or a record with multiple fields. In this setting, a categorical coend becomes an existential quantifier, which in Haskell is written as a universal quantifier in front of the data constructor.

This is the definition of Day convolution from Edward Kmett’s Data.Functor.Day:

data Day f g a = forall b c. Day (f b) (g c) (b -> c -> a)

Here, f and g are the two functors, and forall plays the role of the existential quantifier (being put in front of the data constructor Day). The original hom-set Set(b⊗c, a) has the tensor product replaced by the pair constructor, and is curried to b->c->a. The data constructor has three fields, corresponding to the tensor (cartesian) product of three terms in the original definition.
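The Functor instance for Day follows mechanically: fmap post-composes the new function with the stored continuation (this is, up to naming, what the library does):

instance Functor (Day f g) where
  fmap h (Day fb gc op) = Day fb gc (\b c -> h (op b c))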

Summary

We have the following layers of abstraction:

  • V is a monoidal category
  • A is a tensor category enriched over V
  • Functors from A to V support a tensor product defined by Day convolution.

Tambara Modules

Modules were defined earlier through the action (left, right, or both) of a tensor category A on some other category X. Tambara modules specialize the category X to the category of distributors D(X, Y). We assume that the categories X and Y are themselves modules over A (that is, they have the action of A defined on them).

We define the left action of A on a distributor (profunctor) L(x, y) as:

a! :: L(x, y) -> L(ax, ay)

Similarly, the right action is given by:

!b :: L(x, y) -> L(xb, yb)

We also assume that the action of the unit object i from A is the identity.

These modules are called, respectively, the left and right Tambara modules. The Tambara (bi-)module supports both left and right actions and is denoted by:

AD(X, Y)B

In principle, A may be different from B.

If we choose the tensor product to be the categorical product and replace all categories with one, Tambara modules AD(A, A)A become Haskell’s strong profunctors:

class Profunctor p => Strong p where
  first'  :: p a b -> p (a, c) (b, c)
  second' :: p a b -> p (c, a) (c, b)

On the other hand, with the choice of the categorical coproduct as the tensor product, we get choice profunctors:

class Profunctor p => Choice p where
  left'  :: p a b -> p (Either a c) (Either b c)
  right' :: p a b -> p (Either c a) (Either c b)
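For intuition, here is what these actions look like for the plain function profunctor (->), written as standalone functions (the names are mine; the library provides them as class instances):

-- the Strong action for functions: thread the extra c through untouched
firstFn :: (a -> b) -> ((a, c) -> (b, c))
firstFn f (a, c) = (f a, c)

-- the Choice action for functions: act on the Left case, pass Right through
leftFn :: (a -> b) -> (Either a c -> Either b c)
leftFn f (Left a)  = Left (f a)
leftFn f (Right c) = Right c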

We can even parameterize these classes by the type of the tensor product:

class (Profunctor p) => TamModule (ten :: * -> * -> *) p where
  leftAction  :: p a b -> p (c `ten` a) (c `ten` b)
  rightAction :: p a b -> p (a `ten` c) (b `ten` c)

and specialize it to:

type TamStrong p = TamModule (,) p
type TamChoice p = TamModule Either p

We can also define a Tambara module as a profunctor with two polymorphic functions:

-- the two fields are the right and left actions; keeping them as separate
-- rank-2 fields (rather than a pair) avoids impredicative types
data TambaraMod (ten :: * -> * -> *) p a b = TambaraMod
  { runTambaraModRight :: forall c. p (a `ten` c) (b `ten` c)
  , runTambaraModLeft  :: forall d. p (d `ten` a) (d `ten` b)
  }

The Data.Profunctor.Tambara module specializes this definition for product and coproduct tensors. Since both these tensors are symmetric (weakly — up to an isomorphism), they can be constructed with just one polymorphic function each:

newtype Tambara p a b = Tambara { 
    runTambara :: forall c. p (a, c) (b, c) }
newtype TambaraSum p a b = TambaraSum { 
    runTambaraSum :: forall c. p (Either a c) (Either b c) }

Summary

We have the following layers of abstraction:

  • V is a monoidal category
  • A (and B) are tensor categories enriched over V
  • X and Y are categories enriched over V
  • X is a module if the action of A is defined over X (left, right, or both)
  • Linear functors between X and Y preserve left, right, or bi actions of A (or B)
  • A distributor is a functor from Xop × Y to V
  • Distributors can be composed using coends
  • Functors from A to V support a tensor product defined by Day convolution
  • Distributors D(X, Y) form a category enriched over V
  • Tambara modules are distributors with the action of A (left, right, or bi) defined on them.

Currying Tambara Modules

Let’s look again at the definition of a distributor:

Xop × Y -> V

It’s a functor of two arguments. We know that functions of two arguments can be curried — turned to functions of one argument that return functions. It turns out that a similar thing can be done with distributors. There is an isomorphism between the category of distributors and a category of functors returning functors, which looks very much like currying:

D(X, Y) ≅ Hom(Y, Hom(Xop, V))

According to this isomorphism, a distributor L is mapped to a functor G that takes an object of Y and maps it to another functor that takes an object of X and maps it to an object of V:

L x y = (G y) x

This correspondence may be extended to Tambara modules. Suppose that we have the left action of A defined on X and Y. Then there is an isomorphism of categories:

AD(X, Y) ≅ HomA(Y, Hom(Xop, V))

Remember that the category of left Tambara modules has the left action of A defined by a!. Acting on a distributor L it’s a map:

a! :: L x y -> L (ax) (ay)

On the right hand side of the isomorphism is a category of left-linear functors. An object in this category, K, is left linear:

K (ay) ≅ a(K y)

The target category for this functor is Hom(Xop, V), so K acting on y is another functor that, when acting on an object x of X, produces a value in V.

L(x, y) ≅ (K y) x

We have to define the action of A! on the right hand side of this isomorphism. First, we use duality (assuming the category is rigid) — the mapping:

η :: i -> aᶜ a

We get:

(K y) (aᶜax)

Now we would like to use left-linearity of Hom(Xop, V) to move the action of ac out of the functor. Left linear structure on this category is defined by the equation:

(aF) x = F (aᶜx)

where F is a functor from Xop to V.

We get:

(K y) (aᶜax) = ((aK) y) (ax)

Finally, using left-linearity of K, we can turn this to:

(K (ay)) (ax)

which is what L (ax) (ay) is mapped to.

A similar argument may be used to show the general equivalence of Tambara bimodules with bilinear functors:

AD(X, Y)B ≅ HomA, B(Y, Hom(Xop, V))

Tambara Modules and Centralizers

The “currying” equivalence may be specialized to the case where all four tensor categories are the same:

AD(A, A)A ≅ HomA, A(A, Hom(Aop, V))

Earlier we’ve seen the equivalence of a bilinear functor and a centralizer:

HomA,A(A, X) ≅ ZA(X)

The category X here is an arbitrary bimodule over A. In particular, we can choose X to be Hom(Aop, V). This is the main result in Tambara’s paper:

AD(A, A)A ≅ ZA(Hom(Aop, V))

Earlier we’ve seen that distributors and, in particular, Tambara modules are equipped with a tensor product using Day convolution. Tambara also shows that the centralizers are equipped with a tensor product. The equivalence between Tambara modules and centralizers preserves this tensor product.

Acknowledgments

I’m grateful to Russell O’Connor, Edward Kmett, Dan Doel, Gershom Bazerman, and others, for fruitful discussions and useful comments and to André van Meulebrouck for checking the grammar and spelling.

Next: Free Tambara modules.