I came in contact with Tambara modules when working on a categorical understanding of lenses. They were first mentioned to me by Edward Kmett, who implemented their Haskell version, Data.Profunctor.Tambara. Recently I had a discussion with Russell O’Connor about profunctor lenses. He then had a discussion with James “xplat” Deikun, who again pointed out the importance of Tambara modules. That finally pushed me to study Tambara’s original paper along with Pastro and Street’s generalizations of it. These are not easy papers to read; so, to motivate myself, I started writing this post with the idea of filling in the gaps in my education and providing some background and intuitions I gather in the process. Trying to explain things always helps me understand them better. I will also sketch some of the proofs — for details, see the original papers.

The general idea is that lenses are used to access components of product data types, whereas prisms are used with coproduct (sum) data types. In order to unify lenses and prisms, we need a framework that abstracts over products and coproducts. It so happens that both are examples of a tensor product. Tensors have their roots in vector calculus, so I’ll start with a little refresher on it, to see where the intuitions come from. Tensors may also serve as objects upon which we can represent groups or monoids.

The next step is to define a monoidal category, in which the tensor product plays a role analogous to group (actually, monoid) multiplication. Tensor categories are built on top of (or, enriched over) monoidal categories.

We can define monoidal action on tensor categories — analogous to representations of groups on tensor fields. One particular tensor category of interest to us is the category of distributors (profunctors). Distributors equipped with a tensor action are the subject of Tambara’s paper. It turns out that tensor action on distributors is directly related to profunctor strength, which is the basis of the general formulation of Haskell lenses and prisms.

Vectors and Tensors

We all have a pretty good idea of what a vector space is. It’s a set of vectors with vector addition and with multiplication by numbers. The numbers, or scalars, come from some field K (for instance, real or complex numbers). These operations must obey some obvious rules. For instance, multiplying any vector by 1 (the multiplicative unit of K) gives back the same vector:

1v = v

Then there are the linearity conditions (α and β are scalars from K, and v and w are vectors):

αv + βv = (α + β)v
α(v + w) = αv + αw

which can be used to prove that every vector space has a basis — a minimal set of vectors whose linear combinations generate the whole space.

In other words, any vector v can be written as a combination of base vectors ei:

v = Σ αiei

where Σ represents the sum over all indices i.

Or we can go the other way: We can start with a set B that we call the base set, define formal addition and multiplication, and then create a free structure containing all formal combinations of base vectors. Our linearity laws are then used to identify equivalent combinations. This is called a free vector space over B. The advantage of this formulation is that it generalizes easily to tensor spaces.
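
Here’s a minimal Haskell sketch of this free construction (the names FreeVec, add, scale, and normalize are all mine, with Double standing in for the field K):

import Data.List (nub)

-- A vector in the free vector space over a base set b is a formal
-- linear combination: a list of (coefficient, base element) pairs.
newtype FreeVec b = FreeVec [(Double, b)]

-- Formal addition just concatenates combinations.
add :: FreeVec b -> FreeVec b -> FreeVec b
add (FreeVec xs) (FreeVec ys) = FreeVec (xs ++ ys)

-- Scalar multiplication distributes over the combination.
scale :: Double -> FreeVec b -> FreeVec b
scale a (FreeVec xs) = FreeVec [(a * c, e) | (c, e) <- xs]

-- The linearity laws identify combinations denoting the same vector;
-- here they are imposed by collecting coefficients per base element.
normalize :: Eq b => FreeVec b -> FreeVec b
normalize (FreeVec xs) =
  FreeVec [(sum [c | (c, e') <- xs, e' == e], e) | e <- nub (map snd xs)]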

A tensor space is created from two or more vector spaces. The elements of a tensor space are formal combinations of elements from the constituent vector spaces. Those “formal combinations” can be described in terms of a tensor product. A tensor product V ⊗ W of two vector spaces is a mapping of the cartesian product V × W (a set of pairs of vectors) to the free vector space built on top of V and W, with the appropriate identifications:

(v, w) + (v', w) = (v + v', w)
(v, w) + (v, w') = (v, w + w')
α(v, w) = (αv, w) = (v, αw)

A tensor product can also be defined for mappings between vector spaces. Given four vector spaces, V, W, X, Y, we can consider linear maps between them:

f :: V -> X
g :: W -> Y

The tensor product of these maps is a linear mapping between tensor products of the appropriate spaces:

(f ⊗ g) :: V ⊗ W -> X ⊗ Y
(f ⊗ g)(v ⊗ w) = (f v) ⊗ (g w)

Vector spaces form a category Vec with linear maps as morphisms. Tensor product can then be defined as a bifunctor from the product category Vec × Vec to Vec (so it also maps pairs of morphisms to morphisms).

Given a vector space V, we can also define a dual space V* of linear functions from V to K (remember, K is the field from which we get our scalars). The action of an element f of V* on a vector v from V is called evaluation (or, in physics, contraction):

eval :: V* ⊗ V -> K
eval f v = f v

Given a basis ei in V, the canonical basis in V* is a set of functions e*i such that:

eval e*i ej = δij

where δij is 1 for i=j and 0 otherwise (the Kronecker delta). Seen as a matrix, δ is a unit matrix. It almost looks like the dual space provides the “inverses” of vectors. This is an important intuition.

A general tensor space supports tensor products involving a mixture of vectors and dual vectors (linear functionals). In physics, this allows the construction of mixed covariant and contravariant tensors.

The dual to evaluation is called co-evaluation. In finite dimensional vector spaces, it’s a mapping:

coeval :: K -> V ⊗ V*
coeval α = Σ α ei ⊗ e*i

It takes a scalar α and creates a tensor using basis vectors and their duals. Tensors can be summed and multiplied by scalars.

One obvious generalization of vector (and tensor) spaces is to replace the field K with a ring. A ring has addition, subtraction, and multiplication, but it doesn’t have division.

Groups and Monoids

Groups were originally introduced in terms of actions on vector spaces. The action of a group element g on a vector v maps it to another vector in the same vector space. This mapping is linear:

g (αv + βw) = α (g v) + β (g w)

Because of linearity, group action is fully determined by the transformation of basis vectors. An element of a group acting on a vector v=Σviei produces a vector w that can be decomposed into components w=Σwiei:

wi = Σ gij vj

The numbers gij form a square matrix. The mapping of group elements to these matrices is called a representation of the group. The group of rotations in two dimensions, for instance, can be represented using 2×2 matrices of the form:

|  cos α  sin α |
| -sin α  cos α |

This is an example of a representation of an infinite continuous group called SO(2) (Special Orthogonal group in 2-d).

Applying a group element to a vector produces another vector that can be acted upon by another group element, and so on. You can forget about elements acting on vectors and define group multiplication abstractly. A group is a (potentially infinite) set of elements with a binary operation that is associative, has a neutral (identity) element, and an inverse for every element. It turns out that the same group may have many representations in vector spaces. Associativity and identity in the group happen also to be the basic properties defining a category — invertibility, though, is not. It should come as no surprise that categorists prefer the simpler monoid structure and consider a group a more specialized version of it.

You get a monoid by abandoning the requirement that all elements of a group have an inverse. Or, even more abstractly, you can define a monoid as a single-object category, where the composition of (endo-) morphisms defines multiplication, and the identity morphism is the neutral element. These two definitions are equivalent because endomorphisms form a set — the hom-set — that can be identified with the set of elements of the monoid. The hom-set has composition of endomorphisms, which can be identified with the binary monoidal operation.

Some groups and monoids are commutative (for instance, integer addition); others are not (for instance, string concatenation). The subgroup or submonoid of elements that commute with every element is called the center of the group or monoid. Elements of the center must commute with all elements of the group, not only among themselves.

You may also think of representing a group (or a monoid) as acting on itself. There are two ways of doing that: the left action and the right action. The action of a group element g can be represented as transforming the whole group by multiplying each element by g on the left:

Lg h = g * h

or on the right:

Rg h = h * g

Such a transformation results in a reshuffling of the elements of the group. Each g defines a different reshuffling. A reshuffling (for finite sets) is called a permutation, and one of the fundamental theorems in group theory, due to Cayley, says that every group is isomorphic to some permutation group.

Cayley’s theorem can be generalized to monoids. Instead of the reshuffling of elements we then talk about endomorphisms. Every monoid defined as a set M with multiplication and unit can be represented as a submonoid of endomorphisms of that set.

This equivalence is well known to Haskell programmers. Monoid multiplication may be represented as an uncurried binary function:

mappend :: (M, M) -> M

or, after currying, as a function returning an endomorphism:

mappend :: M -> (M -> M)

The unit element, mempty, becomes the identity endomorphism, id.
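
Here is that equivalence spelled out in code, a small sketch with toEndo and fromEndo as my own names:

toEndo :: Monoid m => m -> (m -> m)
toEndo = mappend        -- curried multiplication: the left action of m

fromEndo :: Monoid m => (m -> m) -> m
fromEndo f = f mempty   -- recover the element by acting on the unit

Since fromEndo (toEndo m) = mappend m mempty = m, every monoid element is faithfully represented by its left action; this is the Cayley-style embedding for monoids.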

Monoidal Category

We’ve seen that a monoid can be defined as a set of elements with a binary operation, or as a single-object category. The next step in this ladder of abstractions is to rethink the idea of forming pairs of elements for a binary operation. When dealing with sets, the pairs are just elements of the cartesian product.

In a more general categorical setting, the cartesian product may be replaced with the categorical product. Multiplication is just a morphism from the product m×m to the object m itself. But how do we select the unit element of m? Categorical objects have no internal structure. So instead we use a generalized element, which is defined as a morphism from the terminal object (in Set, that would be a singleton set) to m.

A monoid can thus be defined as an object m in a category that has products and the terminal object t together with two morphisms:

mult :: m × m -> m
unit :: t -> m

But what if the category C in which we are trying to define a monoid doesn’t have a product or the terminal object? No problem! Instead of categorical product we’ll define a bifunctor ⊗:

⊗ :: C × C -> C

It’s a functor from the product category C×C to C. It’s called a tensor product by analogy with the vector space construction we started with. As a functor, it also defines a mapping of morphisms.

Instead of the terminal object, we just pick one special object i and define a generalized unit as a morphism from i to m.

The tensor product must fulfill some obvious conditions like associativity and the unit laws. We could define them by equalities, e.g.,

(a ⊗ b) ⊗ c = a ⊗ (b ⊗ c)
i ⊗ a = a = a ⊗ i

The snag is that our prototypical tensor product, the cartesian product, doesn’t satisfy those identities. Consider the Haskell implementation of the cartesian product as a pair type, with the unit element as the unit type (). It’s not exactly true that:

((a, b), c) = (a, (b, c))
((), a) = a = (a, ())

However, it’s almost true. The types on both sides of the equations are isomorphic, as can be shown by defining polymorphic functions that mediate between them. In category theory, those polymorphic functions are replaced by natural transformations.

A category in which associativity and unit laws of the tensor product can be expressed as equalities is called a strict monoidal category. A category in which these laws are imposed only up to natural isomorphisms is called non-strict, or a lax monoidal category. The three isomorphisms are called the associator α, the left unitor λ, and the right unitor ρ, respectively:

α :: (a ⊗ b) ⊗ c -> a ⊗ (b ⊗ c)
λ :: i ⊗ a  -> a
ρ :: a ⊗ i -> a

(They all must have inverses.)
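
For Haskell’s pair tensor, with () as the unit object, these isomorphisms and their inverses can be written down directly (a sketch; the names follow the Greek letters above):

alpha :: ((a, b), c) -> (a, (b, c))
alpha ((a, b), c) = (a, (b, c))

alphaInv :: (a, (b, c)) -> ((a, b), c)
alphaInv (a, (b, c)) = ((a, b), c)

lambda :: ((), a) -> a
lambda ((), a) = a

lambdaInv :: a -> ((), a)
lambdaInv a = ((), a)

rho :: (a, ()) -> a
rho (a, ()) = a

rhoInv :: a -> (a, ())
rhoInv a = (a, ())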

A useful example of a strict monoidal category is the category of endofunctors of some category C. We use functor composition for the tensor product. Composition of two endofunctors F and G is always well defined, and it produces another endofunctor G∘F. The unit of this monoidal category is the identity functor Id. Strict associativity and unit laws follow from the definition of functor composition and the definition of the identity functor.

In some categories it’s possible to define an exponential object ab (read: a to the power of b; exponents are written here by juxtaposition), which represents a set of morphisms from b to a. The standard way of doing it is through the adjunction:

C(a × b, c) ≅ C(b, ca)

Here, C(x, y) represents the hom-set from x to y. The two hom-sets in the adjunction must be naturally isomorphic. In general, an adjunction is between two functors L and R:

C(L b, c) ≅ C(b, R c)

Here the two functors are:

La b = a × b
Ra c = ca
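
In Haskell, with the pair as the product and the function type as the exponential, this adjunction is witnessed by a variant of currying (a sketch; toExp and fromExp are my own names):

toExp :: ((a, b) -> c) -> (b -> (a -> c))
toExp f b a = f (a, b)

fromExp :: (b -> (a -> c)) -> ((a, b) -> c)
fromExp g (a, b) = g b a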

This definition of the exponential object can be extended to monoidal categories by replacing categorical product with the tensor product:

C(a ⊗ b, c) ≅ C(b, ca)

In a monoidal category we can also define a left exponential:

C(a ⊗ b, c) ≅ C(a, bc)

(if the tensor product is symmetric, or lax symmetric — up to a natural isomorphism — these two exponentials coincide).

There is an equivalent definition of an adjunction through unit η and counit ε — two natural transformations. These transformations are between the composition of two adjoint functors and the identity functor:

η :: Id -> R ∘ L
ε :: L ∘ R -> Id

This comes even closer to convincing us that exponentiation is the inverse of a product.

As it often happens, having equivalent definitions of the same thing may lead to different generalizations. We’ve seen that the category of endofunctors in some category C is a (strict) monoidal category. We can pick two endofunctors F and G and define two natural transformations:

ε :: G ∘ F -> Id
η :: Id -> F ∘ G

If such transformations exist, the pair F and G form an adjunction, with G being left adjoint to F.

This definition can be extended to any monoidal product, not just composition. In a tensor category A, we have the unit object i, and we can try to define two morphisms:

ε :: a ⊗ a' -> i
η :: i -> a' ⊗ a

The pair of objects a and a' together with the morphisms ε and η are called a duality. A category is called rigid or autonomous if there is a dual for every object. A duality gives rise to an adjunction:

Hom(ax, y) ≅ Hom(x, a'y)

Comparing this with the exponential adjunction, we see that the dual of a acting on y plays the role of ya. In other words, a' says: raise the object that I’m multiplying to the power of a.

There may be many duals of a, but we can always choose one and denote it as ac. Multiplying it by y, acy is analogous to taking the exponential ya. It also works like a lax inverse of a because of:

ε :: a ac -> i
η :: i -> ac a

Notice that the duality functor, the mapping from a to ac, is contravariant in a.

Tambara works with rigid categories, whereas Pastro and Street mostly work with closed categories — with exponentials defined for every pair of objects.

Enriched Categories

In traditional category theory hom-sets are just sets. It’s possible, though, to replace hom-sets with something with more structure. Tambara, for instance, uses vector spaces for his hom-sets. In general, hom-sets may be replaced by objects in some other base category — this results in the notion of an enriched category. This base category must have some additional structure in order to support composition in the enriched category.

Composition of morphisms in a regular category is defined in terms of elements of hom-sets. It’s a mapping from a pair of composable morphisms to a morphism. Objects in an arbitrary category might not support the notion of “elements.” So we have to express composition in terms of entire hom-objects rather than their individual elements. The minimal structure necessary for that is a monoidal category. Instead of pairs of morphisms, we’ll operate on a whole (monoidal) product of two hom-objects. Composition is then a morphism in the base category V:

∘ :: A(b, c) ⊗ A(a, b) -> A(a, c)

Here, A(a, b) is an object in V — the hom-object from a to b. Associativity of composition reflects the associativity of the monoidal product (it may be lax!).

Identity morphisms are then “picked” in any A(a, a) by a morphism in V:

ida :: i -> A(a, a)

where i is the unit object in V. That’s the same trick we used to define generalized elements of objects in a monoidal category. Again, unit laws may be lax.

The main purpose of defining categories is to enable the definitions of functors and natural transformations. Functors map objects and morphisms, so in enriched categories, they have to map objects and hom-objects. Therefore it only makes sense to define enriched functors between categories that are enriched over the same base monoidal category, because that’s where the hom-objects live. An enriched functor must preserve composition — which is defined in terms of the monoidal product — and the identity morphism, which is defined in terms of the monoidal unit.

Similarly, it’s possible to define a natural transformation between two enriched functors F and G that go between two V-enriched categories A and B. The naturality square turns into a naturality hexagon that connects the object A(a, a’) to the object B(F a, G a’) in two different ways. Normally, components of a natural transformation are morphisms between F a and G a. In the enriched setting, there is no way to “pick” individual morphisms. Instead we use morphisms from the identity object in V — generalized elements of hom-objects.

Functors between two given categories A and B (enriched or not) form a category, with natural transformations as morphisms. On the other hand, functors are morphisms in the category Cat of (small) categories. The set of functors between two categories A and B is therefore both a hom-set in Cat and a category. Tambara denotes those hom-categories Hom(A, B). I will use this notation throughout. Otherwise, for hom-sets (and hom-objects in the enriched case) I will use the standard notation C(a, b), where C is the category, and a and b are objects in C.

The starting point of both Tambara and Pastro/Street is a tensor category A. It’s a category enriched over a monoidal category V. There is a separate tensor product defined in A. In Tambara, V is the category of vector spaces with the usual tensor product. In Pastro/Street, V is an arbitrary monoidal category.

Without loss of clarity, the tensor product in A is written without the use of any infix operator. For two objects x and y of A, the product is just xy. A tensor product of two morphisms f::x->x' and g::y->y' is denoted, in Tambara, as fg::xy->x'y' (not to be confused with the composition g∘f). Tambara assumes that associativity and unit laws in A are strict.

Summary

We have the following layers of abstraction:

  • V is a monoidal category
  • A is a tensor category enriched over V.

Modules

By analogy with groups and vector spaces, we would like to define the action of a tensor category A on some other category X. As before, we have the choice between left and right action (or both). Let’s start with the left action. It’s a bifunctor:

A × X -> X

In components, the notation is simplified to (no infix operator):

<a, x> -> ax

where a is an object in A and x is an object in X.

We want this functor to be associative and unital. In particular:

ix = x

where i is the unit in the tensor category A. The category X with these properties is called a left A module.

Similarly, the right B module is equipped with the functor:

X × B -> X

The interesting case is a bimodule with two bifunctors:

A × X -> X
X × B -> X

The two tensor categories A and B may potentially be different (although they both must be enriched over the same category V as X).

Notice that A itself is a bimodule with both left and right action of A on A defined by the (tensor) product in A.

The usual thing in category theory is to introduce structure-preserving functors between similar categories.

So, if X and Y are two left modules over the same tensor category A, we can define A-linear functors that preserve the action of A. Such functors, in turn, form a category that Tambara calls HomA(X, Y) (notice the subscript A). Linearity in this case means that the left action weakly commutes with the functor. In other words, we have a natural isomorphism (here again, left action is understood without any infix operator):

λa, x :: F (ax) -> a(F x)

This mapping is invertible (it’s an isomorphism).

In the same way, we can define right- and bi-linear functor categories. In particular, an (A, B)-linear functor preserves both the left and the right actions. Tambara calls the category of such functors HomA, B(X, Y). Linearity in this case means that we have two natural isomorphisms:

λa, x :: F (ax) -> a(F x)
ρx, b :: F (xb) -> (F x)b

The first result in Tambara’s paper is that, if X is a right A-module, then the category of right linear functors HomA(A, X) from A to X is equivalent to X.

The proof is quite simple. Right to left: Pick an object x in X. We can map it to a functor:

Gx :: A -> X

defined as the right action of a on x:

Gx a = xa

Its linearity is obvious:

ρa, b :: Gx (ab) -> (Gx a)b

Notice also that evaluating Gx at the unit i of A produces x. So, left to right, we can define the inverse mapping from HomA(A, X) to X as the evaluation of the functor at i.

The intuition from group theory is that the (right, in this case) action of the whole group on a fixed x creates an orbit in X. An orbit is the set of points (vectors) that can be reached from x by acting on it with all group elements (imagine a group of rotations around a fixed axis in 3-d — here, orbits are just circles). In our case, we can get an orbit of any x as the image of a linear functor Gx defined above that goes from A (the equivalent of the group) to X (the equivalent of the vector space in which we represent the group). It so happens that the image of any linear functor G from A to X is an orbit. It’s the orbit of the object G i. Any object in the image of G can be reached from G i by the action of some object of A. The image of G consists of objects of the form G a. G a can be rewritten as G (ia) which, by (right) linearity, is the same as (G i)a.

Our intuition that there should be more functors from A to X than there are objects in X fails when we impose the linearity constraint. The functors in HomA(A, X) are no longer linearly independent. There is a “basis” in that “space” that is in one-to-one correspondence with the objects of X.

A similar proof works for left modules.

The situation is trickier for bimodules with both left and right action, even if we pick the same tensor category on both sides, that is work with an (A, A)-bimodule.

Suppose that we wanted to map X to HomA, A(A, X). We can still define the (orbit of x) functor Gx as xa, with the same ρa, b. But there is a slight problem with defining λb, a. We want:

λb, a :: Gx (ba) -> b(Gx a)

which will work if there is a transformation:

xba -> bxa

We would like x to (weakly) commute with b. By analogy with the center of a group, we can define a centralizer ZA(X) as the category of those objects of X for which there is an isomorphism ωa between ax and xa. The equivalence of categories in this case is:

HomA,A(A, X) ≅ ZA(X)

So, for any object x that’s in the centralizer of X, we can define our Gx as xa. Conversely, for any (A, A)-linear functor, we can evaluate it at i to get an object of X. This object can be shown to be a member of the centralizer because, for any F in HomA, A(A, X):

a(F i) = F (a i) = F (i a) = (F i)a

Summary

We have the following layers of abstraction:

  • V is a monoidal category
  • A (and B) are tensor categories enriched over V
  • X is a category enriched over V
  • X is a module, if the action of A is defined over X (left, right, or both)
  • Linear functors between X and Y preserve left, right, or bi actions of A (or B).

In particular, bilinear functors from A (with the left and right action of A) to a bimodule X are in one to one correspondence with the centralizer ZA(X) of X under the action of A.

Distributors

To understand distributors, it helps to know a bit about calculus and/or signal processing. In calculus we deal with functions. Functions can be integrated. We can also have functions acting on functions — functionals. In particular, we can have linear functionals on functions. It turns out that a lot of such functionals can be defined through integrals. A linear functional can be expressed as the integration of test functions with some density. A density may be a function of two arguments, but a general linear functional may require a generalized density. For instance, the famous Dirac delta “function” cannot be represented as a function, although physicists often write:

f(x) = ∫ δ(x - y) f(y) dy

Such generalized functions are called distributions. Direct multiplication of distributions is ill-defined — the annoying infinities that crop up in quantum field theory are the result of attempts to multiply quantum fields, which are distributions.

A better product can be defined through convolution. Convolutions happen to be at the core of signal processing. If you want to soften an image, you convolve it with a Gaussian density. Convolution with a delta function reproduces the original image. Edge enhancement is done with the derivative of a delta function, and so on.

Convolutions can be generalized to functions over groups:

(f ★ g)(x) = ∫ f(y) g(y⁻¹x) dλ(y)

where λ is a suitable group measure.

Roughly speaking, distributors are to functors as distributions are to functions. You might know distributors under the name of profunctors. A profunctor is a functor of two arguments, one of them from the opposite category.

p :: Xop × Y -> Set

In a way, a profunctor is a generalization of a bifunctor, at least when acting on objects. When acting on morphisms, a profunctor is contravariant in one argument and covariant in another. In the case of Y being the same as X, this is similar to the hom-functor C(a, b) being contravariant in a and covariant in b. A hom-functor is the simplest example of a profunctor. As we’ll see later, it’s even possible to model the composition of profunctors on composition of hom-functors.

A profunctor acting on two objects produces a set, an object in the Set category (again, generalizing a hom-functor, which is also Set-valued). Acting on a pair of morphisms (which is the same as a single morphism in the product category Xop × Y), a profunctor produces a function.
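
In Haskell, this is captured by the Profunctor class, shown here in a form that follows Data.Profunctor:

class Profunctor p where
  dimap :: (a' -> a) -> (b -> b') -> p a b -> p a' b'

-- The hom-functor of Hask is the function type itself; pre- and
-- post-composition provide the contravariant and covariant actions.
instance Profunctor (->) where
  dimap f g h = g . h . f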

Distributors can be generalized to categories that are enriched over the same monoidal category V. In that case they are V-valued functors from Xop × Y to V.

Since distributors (profunctors) are functors, they form a functor category denoted by D(X, Y). Objects in a distributor category are (enriched) functors:

Xop × Y -> V

and morphisms are (enriched) natural transformations.

On the other hand, we can treat a distributor as if it were a morphism between categories (it has the right covariance for that). The composition of such morphisms is defined through the coend formula (a coend for profunctors is analogous to a colimit for functors):

(p ∘ q) x y = ∫z (p x z) ⊗ (q z y)

Here, p and q are distributors:

p :: (Xop × Z) -> V
q :: (Zop × Y) -> V

The tensor product is the product in V (here we explicitly use the infix operator). We “integrate” over the object z in the middle.
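
In Haskell, the coend over z becomes an existentially quantified type variable. Here’s a minimal sketch, modeled after Procompose from Data.Profunctor.Composition (which orders its arguments slightly differently):

{-# LANGUAGE ExistentialQuantification #-}

-- The object z in the middle is hidden by the existential quantifier;
-- the tensor product in Hask is the pairing of the two fields.
data ProCompose p q x y = forall z. ProCompose (p x z) (q z y)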

This way we can define a bicategory Dist of categories where distributors are morphisms (one-cells) and natural transformations are two-cells. If we also consider regular functors between categories, we get what is called a double category (not to be confused with a 2-category or a bicategory, which are all slightly different).

There is an equivalent way of looking at distributors as a generalization of relations. A relation between two sets is a subset of pairs of elements from those sets. We can model this categorically by treating a set as a discrete category of elements (no morphisms other than identities). The relation between two such sets is a set of formal arrows between their elements — when two elements are related, there is a single arrow between them; otherwise there’s no arrow. Now we can replace sets by categories and define a relation as a bifunctor from those categories to Set. An object from category X is “related” to an object from Y if the bifunctor in question maps them into a non-empty set; otherwise they are unrelated. Since there are many non-empty sets to choose from, there may be many “levels” of relation: ones corresponding to a singleton set, a two-element set, and so on.

We also have to think about the mapping of (pairs of) morphisms from the two categories. Since we would like the opposite relation to be a functor from opposite categories, the symmetric choice is to define a relation as a functor that is contravariant in one argument and covariant in the other — in other words, a profunctor. That way the opposite relation will still be a profunctor, albeit with ops reversed.

One can define the composition of relations. If p relates X to Z and q relates Z to Y, then we say that an object x from X is related to an object y from Y if and only if there exists an object z of Z (an object in the middle) such that x is related to z and z is related to y. This existential quantification over z is represented, in category theory, as a coend (an end corresponds to the universal quantifier). Thus, through the composition of relations, we recover the formula for the composition of profunctors:

(p ∘ q) x y = ∫z (p x z) ⊗ (q z y)

There is also a tensor structure in the distributor category D(X, Y) defined by Day convolution, which I’ll describe next.

Summary

We have the following layers of abstraction:

  • V is a monoidal category
  • A (and B) are tensor categories enriched over V
  • X, Y, and Z are categories enriched over V
  • A distributor is a functor from Xop × Y to V
  • Distributors may also be treated as “arrows” between categories and composed using coends.

Day Convolution

By analogy with distributions, distributors also have the equivalent of convolution defined on them. The integral is replaced by coend. The Day convolution of two functors F and G, both from the V-enriched monoidal category A to V, is defined as a (double) coend:

(F ⊗ G) x = ∫a,b A(a ⊗ b, x) ⊗ (F a) ⊗ (G b)

Notice the (here, explicit) use of a tensor product for objects of V (as well as for objects of A and for functors — hopefully this won’t lead to too much confusion). For V equal to Set, this is usually replaced by a cartesian product, but in Haskell this could be a product or a sum. In the formula, we also use the covariant hom-functor that maps an arbitrary object x in A to the hom-set A(a ⊗ b, x). This use of a coend justifies the use of the integral symbol: we are “integrating” over objects a and b in A.

If you squint hard enough, you might find a similarity between the Day convolution formula and the convolution on a group. Here we don’t have a group, so we have no analog of y⁻¹x. Instead we define an appropriate “measure” using A(a ⊗ b, _). The convolution formula may be “partially integrated” to give the following equivalent definitions:

(F ⊗ G) x = ∫b (F bx) ⊗ (G b)
(F ⊗ G) x = ∫a (F a) ⊗ (G xa)

Here you can see the resemblance to group convolution even better — if you remember that exponentiation can be thought of as the “inverse” of the tensor product. The left and right exponentiations are analogous to the left and right inverses.

The partial integration trick is the consequence of the so-called ninja Yoneda lemma, which can be written as:

F x = ∫a A(a, x) ⊗ (F a)

Notice that the hom-functor A(a, x) plays the role of the Dirac delta function.
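
Translated to Haskell (the coend becomes an existential, the tensor a pairing of fields, and the hom-functor the function type), the lemma says that the following type is isomorphic to f x. It mirrors Coyoneda from Data.Functor.Coyoneda:

{-# LANGUAGE ExistentialQuantification #-}

data CoYoneda f x = forall a. CoYoneda (a -> x) (f a)

toCoYoneda :: f x -> CoYoneda f x
toCoYoneda = CoYoneda id

fromCoYoneda :: Functor f => CoYoneda f x -> f x
fromCoYoneda (CoYoneda g fa) = fmap g fa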

There is also a unit J of Day convolution:

J x = A(i, x)

where i is the monoidal identity object.

Taken together this shows that Day convolution is a tensor product in the category of enriched functors (hence the use of the tensor symbol ⊗).

It’s interesting to see Day convolution expressed in Haskell. The category of Haskell types (which is approximately Set, modulo termination) can be treated as enriched over itself. The tensor product is just the cartesian product, represented either as a pair or a record with multiple fields. In this setting, a categorical coend becomes an existential quantifier, which is encoded as a universal quantifier in front of the data constructor.

This is the definition of Day convolution from Edward Kmett’s Data.Functor.Day:

data Day f g a = forall b c. Day (f b) (g c) (b -> c -> a)

Here, f and g are the two functors, and forall plays the role of the existential quantifier (being put in front of the data constructor Day). The original hom-set Set(b⊗c, a) has its tensor product replaced by a pair constructor, and is curried to b->c->a. The data constructor has three fields, corresponding to the tensor (cartesian) product of three terms in the original definition.
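
A quick usage sketch (dayToList is my own helper, not part of the library):

example :: Day Maybe [] Int
example = Day (Just 2) [10, 20, 30] (*)

-- Collapse the convolution by combining the two containers with
-- the stored function.
dayToList :: Day Maybe [] a -> [a]
dayToList (Day mb cs f) = maybe [] (\b -> map (f b) cs) mb

-- dayToList example == [20, 40, 60]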

Summary

We have the following layers of abstraction:

  • V is a monoidal category
  • A is a tensor category enriched over V
  • Functors from A to V support a tensor product defined by Day convolution.

Tambara Modules

Modules were defined earlier through the action (left, right, or both) of a tensor category A on some other category X. Tambara modules specialize the category X to the category of distributors D(X, Y). We assume that the categories X and Y are themselves modules over A (that is, they have the action of A defined on them).

We define the left action of A on a distributor (profunctor) L(x, y) as:

a! :: L(x, y) -> L(ax, ay)

Similarly, the right action is given by:

!b :: L(x, y) -> L(xb, yb)

We also assume that the action of the unit object i from A is the identity.

These modules are called, respectively, the left and right Tambara modules. The Tambara (bi-)module supports both left and right actions and is denoted by:

AD(X, Y)B

In principle, A may be different from B.

If we choose the tensor product to be the categorical product and replace all categories with one, Tambara modules AD(A, A)A become Haskell’s strong profunctors:

class Profunctor p => Strong p where
  first'  :: p a b -> p (a, c) (b, c)
  second' :: p a b -> p (c, a) (c, b)

On the other hand, with the choice of the categorical coproduct as the tensor product, we get choice profunctors:

class Profunctor p => Choice p where
  left'  :: p a b -> p (Either a c) (Either b c)
  right' :: p a b -> p (Either c a) (Either c b)

We can even parameterize these classes by the type of the tensor product:

class (Profunctor p) => TamModule (ten :: * -> * -> *) p where
  leftAction  :: p a b -> p (c `ten` a) (c `ten` b)
  rightAction :: p a b -> p (a `ten` c) (b `ten` c)

and specialize it to:

type TamStrong p = TamModule (,) p
type TamChoice p = TamModule Either p
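
As a concrete example, plain functions form a Tambara module over the product tensor. The instance below is my own illustration, written against the class as defined above:

instance TamModule (,) (->) where
  leftAction  f (c, a) = (c, f a)
  rightAction f (a, c) = (f a, c)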

We can also define a Tambara module as a profunctor with two polymorphic functions:

-- Storing the two actions as separate record fields keeps them
-- rank-1 polymorphic (a pair of forall-types would be impredicative).
data TambaraMod (ten :: * -> * -> *) p a b = TambaraMod
  { runTambaraRight :: forall c. p (a `ten` c) (b `ten` c)
  , runTambaraLeft  :: forall d. p (d `ten` a) (d `ten` b)
  }

The Data.Profunctor.Tambara module specializes this definition for product and coproduct tensors. Since both these tensors are symmetric (weakly — up to an isomorphism), they can be constructed with just one polymorphic function each:

newtype Tambara p a b = Tambara { 
    runTambara :: forall c. p (a, c) (b, c) }
newtype TambaraSum p a b = TambaraSum { 
    runTambaraSum :: forall c. p (Either a c) (Either b c) }

Summary

We have the following layers of abstraction:

  • V is a monoidal category
  • A (and B) are tensor categories enriched over V
  • X and Y are categories enriched over V
  • X is a module if the action of A is defined over X (left, right, or both)
  • Linear functors between X and Y preserve left, right, or bi actions of A (or B)
  • A distributor is a functor from Xop × Y to V
  • Distributors can be composed using coends
  • Functors from A to V support a tensor product defined by Day convolution
  • Distributors D(X, Y) form a category enriched over V
  • Tambara modules are distributors with the action of A (left, right, or bi) defined on them.

Currying Tambara Modules

Let’s look again at the definition of a distributor:

Xop × Y -> V

It’s a functor of two arguments. We know that functions of two arguments can be curried — turned into functions of one argument that return functions. It turns out that a similar thing can be done with distributors. There is an isomorphism between the category of distributors and a category of functors returning functors, which looks very much like currying:

D(X, Y) ≅ Hom(Y, Hom(Xop, V))

According to this isomorphism, a distributor L is mapped to a functor G that takes an object of Y and maps it to another functor that takes an object of X and maps it to an object of V:

L x y = (G y) x

This correspondence may be extended to Tambara modules. Suppose that we have the left action of A defined on X and Y. Then there is an isomorphism of categories:

AD(X, Y) ≅ HomA(Y, Hom(Xop, V))

Remember that the category of left Tambara modules has the left action of A defined by a!. Acting on a distributor L it’s a map:

a! :: L x y -> L (ax) (ay)

On the right hand side of the isomorphism is a category of left-linear functors. An object in this category, K, is left linear:

K (ay) ≅ a(K y)

The target category of this functor is Hom(Xop, V), so K acting on y is another functor that, when acting on an object x of X, produces a value in V.

L(x, y) ≅ (K y) x

We have to define the action of a! on the right hand side of this isomorphism. First, we use duality (assuming the category is rigid) — the mapping:

η :: i -> ac a

We get:

(K y) (acax)

Now we would like to use left-linearity of Hom(Xop, V) to move the action of ac out of the functor. Left linear structure on this category is defined by the equation:

(aF) x = F (acx)

where F is a functor from Xop to V.

We get:

(K y) (acax) = ((aK) y) (ax)

Finally, using left-linearity of K, we can turn this to:

(K (ay)) (ax)

which is what L (ax) (ay) is mapped to.

A similar argument may be used to show the general equivalence of Tambara bimodules with bilinear functors:

AD(X, Y)B ≅ HomA, B(Y, Hom(Xop, V))

Tambara Modules and Centralizers

The “currying” equivalence may be specialized to the case where all four tensor categories are the same:

AD(A, A)A ≅ HomA, A(A, Hom(Aop, V))

Earlier we’ve seen the equivalence of a bilinear functor and a centralizer:

HomA,A(A, X) ≅ ZA(X)

The category X here is an arbitrary (A, A)-bimodule. In particular, we can choose X to be Hom(Aop, V). This is the main result in Tambara’s paper:

AD(A, A)A ≅ ZA(Hom(Aop, V))

Earlier we’ve seen that distributors and, in particular, Tambara modules are equipped with a tensor product using Day convolution. Tambara also shows that the centralizers are equipped with a tensor product. The equivalence between Tambara modules and centralizers preserves this tensor product.

Acknowledgments

I’m grateful to Russell O’Connor, Edward Kmett, Dan Doel, Gershom Bazerman, and others, for fruitful discussions and useful comments and to André van Meulebrouck for checking the grammar and spelling.

Next: Free Tambara modules.


This is part 17 of Categories for Programmers. Previously: Yoneda Embedding. See the Table of Contents.

If I haven’t convinced you yet that category theory is all about morphisms then I haven’t done my job properly. Since the next topic is adjunctions, which are defined in terms of isomorphisms of hom-sets, it makes sense to review our intuitions about the building blocks of hom-sets. Also, you’ll see that adjunctions provide a more general language to describe a lot of constructions we’ve studied before, so it might help to review them too.

Functors

To begin with, you should really think of functors as mappings of morphisms — the view that’s emphasized in the Haskell definition of the Functor typeclass, which revolves around fmap. Of course, functors also map objects — the endpoints of morphisms — otherwise we wouldn’t be able to talk about preserving composition. Objects tell us which pairs of morphisms are composable. The target of one morphism must be equal to the source of the other — if they are to be composed. So if we want the composition of morphisms to be mapped to the composition of lifted morphisms, the mapping of their endpoints is pretty much determined.

Commuting Diagrams

A lot of properties of morphisms are expressed in terms of commuting diagrams. If a particular morphism can be described as a composition of other morphisms in more than one way, then we have a commuting diagram.

In particular, commuting diagrams form the basis of almost all universal constructions (with the notable exceptions of the initial and terminal objects). We’ve seen this in the definitions of products, coproducts, various other (co-)limits, exponential objects, free monoids, etc.

The product is a simple example of a universal construction. We pick two objects a and b and see if there exists an object c, together with a pair of morphisms p and q, that has the universal property of being their product.

A product is a special case of a limit. A limit is defined in terms of cones. A general cone is built from commuting diagrams. Commutativity of those diagrams may be replaced with a suitable naturality condition for the mapping of functors. This way commutativity is reduced to the role of the assembly language for the higher level language of natural transformations.

Natural Transformations

In general, natural transformations are very convenient whenever we need a mapping from morphisms to commuting squares. Two opposing sides of a naturality square are the mappings of some morphism f under two functors F and G. The other sides are the components of the natural transformation (which are also morphisms).

Naturality means that when you move to the “neighboring” component (by neighboring I mean connected by a morphism), you’re not going against the structure of either the category or the functors. It doesn’t matter whether you first use a component of the natural transformation to bridge the gap between objects, and then jump to its neighbor using the functor; or the other way around. The two directions are orthogonal. A natural transformation moves you left and right, and the functors move you up and down or back and forth — so to speak. You can visualize the image of a functor as a sheet in the target category. A natural transformation maps one such sheet corresponding to F, to another, corresponding to G.

We’ve seen examples of this orthogonality in Haskell. There the action of a functor modifies the content of a container without changing its shape, while a natural transformation repackages the untouched contents into a different container. The order of these operations doesn’t matter.
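
For instance, safeHead is a natural transformation from the list functor to Maybe, and its naturality condition expresses exactly this interchangeability:

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Naturality: modifying the contents and then repackaging them
-- gives the same result as repackaging first:
-- fmap f . safeHead == safeHead . fmap f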

We’ve seen the cones in the definition of a limit replaced by natural transformations. Naturality ensures that the sides of every cone commute. Still, a limit is defined in terms of mappings between cones. These mappings must also satisfy commutativity conditions. (For instance, the triangles in the definition of the product must commute.)

These conditions, too, may be replaced by naturality. You may recall that the universal cone, or the limit, is defined as a natural transformation between the (contravariant) hom-functor:

F :: c -> C(c, Lim D)

and the (also contravariant) functor that maps objects in C to cones, which themselves are natural transformations:

G :: c -> Nat(Δc, D)

Here, Δc is the constant functor, and D is the functor that defines the diagram in C. Both functors F and G have well defined actions on morphisms in C. It so happens that this particular natural transformation between F and G is an isomorphism.

Natural Isomorphisms

A natural isomorphism — which is a natural transformation whose every component is reversible — is category theory’s way of saying that “two things are the same.” A component of such a transformation must be an isomorphism between objects — a morphism that has the inverse. If you visualize functor images as sheets, a natural isomorphism is a one-to-one invertible mapping between those sheets.

Hom-Sets

But what are morphisms? They do have more structure than objects: unlike objects, morphisms have two ends. But if you fix the source and the target objects, the morphisms between the two form a boring set (at least for locally small categories). We can give elements of this set names like f or g, to distinguish one from another — but what is it, really, that makes them different?

The essential difference between morphisms in a given hom-set lies in the way they compose with other morphisms (from abutting hom-sets). If there is a morphism h whose composition (either pre- or post-) with f is different than that with g, for instance:

h ∘ f ≠ h ∘ g

then we can directly “observe” the difference between f and g. But even if the difference is not directly observable, we might use functors to zoom in on the hom-set. A functor F may map the two morphisms to distinct morphisms:

F f ≠ F g

in a richer category, where the abutting hom-sets provide more resolution, e.g.,

h' ∘ F f ≠ h' ∘ F g

where h' is not in the image of F.

Hom-Set Isomorphisms

A lot of categorical constructions rely on isomorphisms between hom-sets. But since hom-sets are just sets, a plain isomorphism between them doesn’t tell you much. For finite sets, an isomorphism just says that they have the same number of elements. If the sets are infinite, their cardinality must be the same. But any meaningful isomorphism of hom-sets must take into account composition. And composition involves more than one hom-set. We need to define isomorphisms that span whole collections of hom-sets, and we need to impose some compatibility conditions that interoperate with composition. And a natural isomorphism fits the bill exactly.

But what’s a natural isomorphism of hom-sets? Naturality is a property of mappings between functors, not sets. So we are really talking about a natural isomorphism between hom-set-valued functors. These functors are more than just set-valued functors. Their action on morphisms is induced by the appropriate hom-functors. Morphisms are canonically mapped by hom-functors using either pre- or post-composition (depending on the covariance of the functor).

The Yoneda embedding is one example of such an isomorphism. It maps hom-sets in C to hom-sets in the functor category; and it’s natural. One functor in the Yoneda embedding is the hom-functor in C and the other maps objects to sets of natural transformations between hom-sets.

The definition of a limit is also a natural isomorphism between hom-sets (the second one, again, in the functor category):

C(c, Lim D) ≃ Nat(Δc, D)

It turns out that our construction of an exponential object, or that of a free monoid, can also be rewritten as a natural isomorphism between hom-sets.

This is no coincidence — we’ll see next that these are just different examples of adjunctions, which are defined as natural isomorphisms of hom-sets.

Asymmetry of Hom-Sets

There is one more observation that will help us understand adjunctions. Hom-sets are, in general, not symmetric. A hom-set C(a, b) is often very different from the hom-set C(b, a). The ultimate demonstration of this asymmetry is a partial order viewed as a category. In a partial order, a morphism from a to b exists if and only if a is less than or equal to b. If a and b are different, then there can be no morphism going the other way, from b to a. So if the hom-set C(a, b) is non-empty, which in this case means it’s a singleton set, then C(b, a) must be empty, unless a = b. The arrows in this category have a definite flow in one direction.

A preorder, which is based on a relation that’s not necessarily antisymmetric, is also “mostly” directional, except for occasional cycles. It’s convenient to think of an arbitrary category as a generalization of a preorder.

A preorder is a thin category — all hom-sets are either singletons or empty. We can visualize a general category as a “thick” preorder.

Challenges

  1. Consider some degenerate cases of a naturality condition and draw the appropriate diagrams. For instance, what happens if either functor F or G maps both objects a and b (the ends of f :: a -> b) to the same object, e.g., F a = F b or G a = G b? (Notice that you get a cone or a co-cone this way.) Then consider cases where either F a = G a or F b = G b. Finally, what if you start with a morphism that loops on itself — f :: a -> a?

Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.


This is part 16 of Categories for Programmers. Previously: The Yoneda Lemma. See the Table of Contents.

We’ve seen previously that, when we fix an object a in the category C, the mapping C(a, _) is a (covariant) functor from C to Set.

x -> C(a, x)

(The codomain is Set because the hom-set C(a, x) is a set.) We call this mapping a hom-functor — we have previously defined its action on morphisms as well.

Now let’s vary a in this mapping. We get a new mapping that assigns the hom-functor C(a, _) to any a.

a -> C(a, _)

It’s a mapping of objects from category C to functors, which are objects in the functor category Fun(C, Set) (see the section about functor categories in Natural Transformations). You may also recall that hom-functors are the prototypical representable functors.

Every time we have a mapping of objects between two categories, it’s natural to ask if such a mapping is also a functor. In other words, we ask whether we can lift a morphism from one category to a morphism in the other category. A morphism in C is just an element of C(a, b), but a morphism in the functor category Fun(C, Set) is a natural transformation. So we are looking for a mapping of morphisms to natural transformations.

Let’s see if we can find a natural transformation corresponding to a morphism f :: a->b. First, let’s see what a and b are mapped to. They are mapped to two functors: C(a, _) and C(b, _). We need a natural transformation between those two functors.

And here’s the trick: we use the Yoneda lemma:

Nat(C(a, _), F) ≅ F a

and replace the generic F with the hom-functor C(b, _). We get:

Nat(C(a, _), C(b, _)) ≅ C(b, a)

This is exactly the natural transformation between the two hom-functors we were looking for, but with a little twist: We have a mapping between a natural transformation and a morphism — an element of C(b, a) — that goes in the “wrong” direction. But that’s okay; it only means that the functor we are looking at is contravariant.

Actually, we’ve got even more than we bargained for. The mapping from C to Fun(C, Set) is not only a contravariant functor — it is a fully faithful functor. Fullness and faithfulness are properties of functors that describe how they map hom-sets.

A faithful functor is injective on hom-sets, meaning that it maps distinct morphisms to distinct morphisms. In other words, it doesn’t coalesce them.

A full functor is surjective on hom-sets, meaning that it maps one hom-set onto the other hom-set, fully covering the latter.

A fully faithful functor F is a bijection on hom-sets — a one to one matching of all elements of both sets. For every pair of objects a and b in the source category C there is a bijection between C(a, b) and D(F a, F b), where D is the target category of F (in our case, the functor category, Fun(C, Set)). Notice that this doesn’t mean that F is a bijection on objects. There may be objects in D that are not in the image of F, and we can’t say anything about hom-sets for those objects.

The Embedding

The (contravariant) functor we have just described, the functor that maps objects in C to functors in Fun(C, Set):

a -> C(a, _)

defines the Yoneda embedding. It embeds a category C (strictly speaking, the category Cop, because of contravariance) inside the functor category Fun(C, Set). It not only maps objects in C to functors, but also faithfully preserves all connections between them.

This is a very useful result because mathematicians know a lot about the category of functors, especially functors whose codomain is Set. We can get a lot of insight about an arbitrary category C by embedding it in the functor category.

Of course there is a dual version of the Yoneda embedding, sometimes called the co-Yoneda embedding. Observe that we could have started by fixing the target object (rather than the source object) of each hom-set, C(_, a). That would give us a contravariant hom-functor. Contravariant functors from C to Set are our familiar presheaves (see, for instance, Limits and Colimits). The co-Yoneda embedding defines the embedding of a category C in the category of presheaves. Its action on morphisms is given by:

Nat(C(_, a), C(_, b)) ≅ C(a, b)

Again, mathematicians know a lot about the category of presheaves, so being able to embed an arbitrary category in it is a big win.

Application to Haskell

In Haskell, the Yoneda embedding can be represented as the isomorphism between natural transformations amongst reader functors on the one hand, and functions (going in the opposite direction) on the other hand:

forall x. (a -> x) -> (b -> x) ≅ b -> a

(Remember, the reader functor is equivalent to ((->) a).)

The left hand side of this identity is a polymorphic function that, given a function from a to x and a value of type b, can produce a value of type x (I’m uncurrying — dropping the parentheses around — the function b -> x). The only way this can be done for all x is if our function knows how to convert a b to an a. It has to secretly have access to a function b->a.

Given such a converter, btoa, one can define the left hand side, call it fromY, as:

fromY :: (a -> x) -> b -> x
fromY f b = f (btoa b)

Conversely, given a function fromY we can recover the converter by calling fromY with the identity:

fromY id :: b -> a

This establishes the bijection between functions of the type fromY and btoa.
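
Here are the two directions of the bijection side by side (a sketch; toY and fromY' are my names, and RankNTypes is needed to quantify over x):

{-# LANGUAGE RankNTypes #-}

toY :: (b -> a) -> (forall x. (a -> x) -> (b -> x))
toY btoa f = f . btoa

fromY' :: (forall x. (a -> x) -> (b -> x)) -> (b -> a)
fromY' g = g id

-- fromY' (toY btoa) = id . btoa = btoa; the other round trip,
-- toY (fromY' g) = g, follows from parametricity (the free theorem for g).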

An alternative way of looking at this isomorphism is that it’s a CPS encoding of a function from b to a. The argument a->x is a continuation (the handler). The result is a function from b to x which, when called with a value of type b, will execute the continuation precomposed with the function being encoded.

The Yoneda embedding also explains some of the alternative representations of data structures in Haskell. In particular, it provides a very useful representation of lenses from the Control.Lens library.

Preorder Example

This example was suggested by Robert Harper. It’s the application of the Yoneda embedding to a category defined by a preorder. A preorder is a set with an ordering relation between its elements that’s traditionally written as <= (less than or equal). The “pre” in preorder is there because we’re only requiring the relation to be transitive and reflexive but not necessarily antisymmetric (so it’s possible to have cycles).

A set with the preorder relation gives rise to a category. The objects are the elements of this set. A morphism from object a to b either doesn’t exist, if the objects cannot be compared or if it’s not true that a <= b; or it exists if a <= b, and it points from a to b. There is never more than one morphism from one object to another. Therefore any hom-set in such a category is either an empty set or a one-element set. Such a category is called thin.

It’s easy to convince yourself that this construction is indeed a category: The arrows are composable because, if a <= b and b <= c then a <= c; and the composition is associative. We also have the identity arrows because every element is (less than or) equal to itself (reflexivity of the underlying relation).

We can now apply the co-Yoneda embedding to a preorder category. In particular, we’re interested in its action on morphisms:

Nat(C(_, a), C(_, b)) ≅ C(a, b)

The hom-set on the right hand side is non-empty if and only if a <= b — in which case it’s a one-element set. Consequently, if a <= b, there exists a single natural transformation on the left. Otherwise there is no natural transformation.

So what’s a natural transformation between hom-functors in a preorder? It should be a family of functions between the sets C(_, a) and C(_, b). In a preorder, each of these sets can either be empty or a singleton. Let’s see what kinds of functions are at our disposal.

There is a function from an empty set to itself (the identity acting on an empty set), a function absurd from an empty set to a singleton set (it does nothing, since it only needs to be defined for elements of an empty set, of which there are none), and a function from a singleton to itself (the identity acting on a one-element set). The only combination that is forbidden is the mapping from a singleton to an empty set (what would the value of such a function be when acting on the single element?).

So our natural transformation will never connect a singleton hom-set to an empty hom-set. In other words, if x <= a (singleton hom-set C(x, a)) then C(x, b) cannot be empty. A non-empty C(x, b) means that x is less than or equal to b. So the existence of the natural transformation in question requires that, for every x, if x <= a then x <= b.

for all x, x ≤ a ⇒ x ≤ b

On the other hand, co-Yoneda tells us that the existence of this natural transformation is equivalent to C(a, b) being non-empty, or to a <= b. Together, we get:

a ≤ b if and only if for all x, x ≤ a ⇒ x ≤ b

We could have arrived at this result directly. The intuition is that, if a <= b then all elements that are below a must also be below b. Conversely, when you substitute a for x on the right hand side, it follows that a <= b. But you must admit that arriving at this result through the Yoneda embedding is much more exciting.

Naturality

The Yoneda lemma establishes the isomorphism:

Nat(C(a, _), F) ≅ F a

This isomorphism turns out to be natural in both F and a. In other words, it’s natural in (F, a), a pair taken from the product category Fun(C, Set) × C. Notice that we are now treating F as an object in the functor category.

Let’s think for a moment what this means. A natural isomorphism is an invertible natural transformation between two functors. And indeed, the right hand side of our isomorphism is a functor. It’s a functor from Fun(C, Set) × C to Set. Its action on a pair (F, a) is a set — the result of evaluating the functor F at the object a. This is called the evaluation functor.

The left hand side is also a functor that takes (F, a) to a set of natural transformations Nat(C(a, _), F).

To show that these are really functors, we should also define their action on morphisms. But what’s a morphism between a pair (F, a) and (G, b)? It’s a pair of morphisms, (Φ, f); the first being a morphism between functors — a natural transformation — the second being a regular morphism in C.

The evaluation functor takes this pair (Φ, f) and maps it to a function between two sets, F a and G b. We can easily construct such a function from the component of Φ at a (which maps F a to G a) and the morphism f lifted by G:

(G f) ∘ Φa

Notice that, because of naturality of Φ, this is the same as:

Φb ∘ (F f)
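
To make this a bit more concrete, here’s a small Haskell sketch of my own (with RankNTypes, encoding the natural transformation Φ as a rank-2 polymorphic function; the names are mine):

-- mapping a pair (Φ, f) to a function between F a and G b
evalMap :: (Functor f, Functor g)
        => (forall x. f x -> g x)  -- the natural transformation Φ
        -> (a -> b)                -- the morphism f
        -> f a -> g b
evalMap nt h = fmap h . nt         -- by naturality, also equal to nt . fmap h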

I’m not going to prove the naturality of the whole isomorphism — after you’ve established what the functors are, the proof is pretty mechanical. It follows from the fact that our isomorphism is built up from functors and natural transformations. There is simply no way for it to go wrong.

Challenges

  1. Express the co-Yoneda embedding in Haskell.
  2. Show that the bijection we established between fromY and btoa is an isomorphism (the two mappings are the inverse of each other).
  3. Work out the Yoneda embedding for a monoid. What functor corresponds to the monoid’s single object? What natural transformations correspond to monoid morphisms?
  4. What is the application of the covariant Yoneda embedding to preorders? (Question suggested by Gershom Bazerman.)
  5. Yoneda embedding can be used to embed an arbitrary functor category Fun(C, D) in the functor category Fun(Fun(C, D), Set). Figure out how it works on morphisms (which, in this case, are natural transformations).

Next: It’s All About Morphisms.

Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.


This summer I spent some time talking with Edward Kmett about lots of things. (Which really means that he was talking and I was trying to keep up.) One of the topics was operads. The ideas behind operads are not that hard, if you’ve heard about category theory. But the Haskell wizardry to implement them and their related monads and comonads might be quite challenging. Dan Piponi wrote a blog post about operads and their monads some time ago. He used the operad-based monad to serialize and deserialize tree-like data structures. He showed that those monads may have some practical applications. But what Edward presented me with was an operad-based comonad with no application in sight. And just to make it harder, Edward implemented versions of all those constructs in the context of multicategories, which are operads with typed inputs. Feel free to browse his code on github. In case you feel a little overwhelmed, what follows may provide some guidance.

Let me first introduce some notions so we can start a conversation. You know that in a category you have objects and arrows between them. The usual intuition (at least for a programmer) is that arrows correspond to functions of one argument. To deal with functions of multiple arguments we have to introduce a bit more structure in the category: we need products. A function of multiple arguments may be thought of as a single-argument function taking a product (tuple) of arguments. In a Cartesian closed category, which is what we usually use in programming, we also have exponential objects and currying to represent multi-argument functions. But exponentials are defined in terms of products.
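
As a trivial Haskell illustration of my own: a two-argument function can be written as a single-argument function on a pair, and curry converts between the two forms:

-- a function on a product (a pair of arguments)
addPair :: (Int, Int) -> Int
addPair (x, y) = x + y

-- the same function in curried form
addCurried :: Int -> Int -> Int
addCurried = curry addPair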

There is an alternative approach: replace single-sourced arrows with multi-sourced ones. An operad is sort of like a category, where morphisms may connect multiple objects to one. So the primitive in an operad is a kind of a tree with multiple inputs and a single output. You can think of it as an n-ary operator. Of course the composition of such primitives is a little tricky — we’ll come back to it later.

Operad

Dan Piponi, following Tom Leinster, defined a monad based on an operad. It combines, in one data structure, the tree-like shape with a list of values. You may think of the values as a serialized version of the tree described by the shape. The shapes compose following operad laws. There is another practical application of this data structure: it can be used to represent a decision tree with corresponding probabilities.

But a comonad that Edward implemented was trickier. Instead of containing a list, it produced a list. It was a polymorphic function taking a tree-like shape as an argument and producing a list of results. The original algebraic intuition of an operad representing a family of n-ary operators didn’t really fit this picture. The leaves of the trees corresponded to outputs rather than inputs.

We racked our brains in an attempt to find a problem for which this comonad would be a solution — an activity that is not often acknowledged but probably rather common. We finally came up with an idea of using it to evaluate game trees — and what’s a simpler game than tic-tac-toe? So, taking advantage of the fact that I could ask Edward questions about his multicategory implementation, I set out to writing maybe the most Rube Goldberg-like tic-tac-toe engine in existence.

Here’s the idea: We want to evaluate all possible moves up to a certain depth. We want to find out which ones are illegal (e.g., trying to overwrite a previous move) and which ones are winning; and we’d like to rank the rest. Since there are 9 possible moves at each stage (legal and illegal), we create a tree with the maximum branching factor of 9. The manipulation of such trees follows the laws of an operad.

The comonadic game data structure is the evaluator: given a tree it produces a list of board valuations for each leaf. The game engine picks the best move, and then uses the comonadic duplicate to generate new game states, and so on. This is extremely brute force, but Haskell’s laziness keeps the exponential explosion in check. I added a bit of heuristics to bias the choices towards the center square and the corners, and the program either beats or ties any player.

All this would be a relatively simple exercise in Haskell programming, so why not make it a little more challenging? The problem involves manipulation of multi-way trees and their matching lists, which is potentially error-prone. When you’re composing operads, you have to precisely match the number of outputs with the number of inputs. Of course, one can have runtime checks and assertions, but that’s not the Haskell way. We want compile-time consistency checks. We need compile-time natural numbers, counted vectors, and counted trees. Needless to say, this makes the code at least an order of magnitude harder to write. There are some libraries, most notably GHC.TypeLits, which help with type literals and simple arithmetic, but I wanted to learn type-level programming the hard way, so I decided not to use them. This is as low level as you can get. In the process I had to rewrite large chunks of the standard Prelude in terms of counted lists and trees. (If you’re interested in the TypeLits version of an operad, I recommend browsing Dan Doel’s code.)

The biggest challenges were related to existential types and to simple arithmetic laws, which we normally take for granted but which have to be explicitly stated when dealing with type-level natural numbers.

Board

The board is a 3 by 3 matrix. A matrix is a vector of vectors. Normally, we would implement vectors as lists and make sure that we never access elements beyond the end of the list. But here we would like to exercise some of the special powers of Haskell and shift bounds checking to compile time. So we’ll define a general n by m matrix using counted vectors:

newtype Matrix n m a = Matrix { unMatrix :: Vec n (Vec m a) }

Notice that n and m are types rather than values.

The vector type is parameterized by compile-time natural numbers:

data Vec n a where
    VNil  :: Vec Z a
    VCons :: a -> Vec n a -> Vec (S n) a

This definition is very similar to the definition of a list as a GADT, except that it keeps track of the compile-time size of the vector. So the VNil constructor creates a vector of size Z, which is the compile-time representation of zero. The VCons constructor takes a value of type a and a vector of size n, and produces a vector of size (S n), which stands for the successor of n.

This is how natural numbers may be defined as a data type:

data Nat = Z | S Nat
  deriving Show

Here, Z and S are the two constructors of the data type Nat. But Z and S occur in the definition of Vec as types, not as data constructors. What happens here is that GHC can promote data types to kinds, and data constructors to types. With the extension:

{-# LANGUAGE DataKinds #-}

Nat can double as a kind inhabited by an infinite number of types:

Z, S Z, S (S Z), S (S (S Z)), …

which are in one-to-one correspondence with natural numbers. We can even create type aliases for the first few type-level naturals:

type One   = S Z
type Two   = S (S Z)
type Three = S (S (S Z))
…

Now the compiler, seeing the use of Z and S in the definition of Vec, can deduce that n is of kind Nat.

The kind Nat is inhabited by types, but these types are not inhabited by values. You cannot create a value of type Z or S Z. So, in data definitions, these types are always phantom types. You don’t pass any values of type Z, S Z, etc., to data constructors. Look at the two Vec constructors: VNil takes no arguments, and VCons takes a value of type a, and a value of type Vec n a.
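
For example, here’s a vector of three integers whose length is part of its type (the name v3 is mine; Three is the type alias defined above):

v3 :: Vec Three Int
v3 = VCons 1 (VCons 2 (VCons 3 VNil))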

So far we have encoded the size of the vector into its type, but how do we enforce compile-time bounds checking? We do that by providing special access functions. The simplest of them is the vector analog of head:

headV :: Vec (S n) a -> a
headV (VCons a _) = a

The type signature of headV guarantees that it can be called only for vectors of non-zero length (the size has to be the successor of some number n). Notice that this is different from simply not providing a definition for:

headV VNil

An incomplete pattern would result in a runtime error. Here, trying to call headV with VNil produces a compile-time error.
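
Other accessors follow the same pattern. For instance, here’s a vector analog of tail (my own addition, in the same style), which shrinks the size in the return type:

tailV :: Vec (S n) a -> Vec n a
tailV (VCons _ as) = as

As with headV, calling tailV with VNil is rejected by the type checker rather than failing at runtime.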

A much more interesting problem is securing safe random access to a vector. A vector of size n can only be indexed by numbers that are strictly less than n. To this end we define, for every n, a separate type for numbers that are less than n:

data Fin n where
    FinZ :: Fin (S n) -- zero is less than any successor
    FinS :: Fin n -> Fin (S n) -- if k is less than n, then k+1 is less than n+1

Here, n is a type whose kind is Nat (this can be deduced from the use of S acting on n). Notice that Fin n is a regular inhabited type. In other words, its kind is * and you can create values of that type.

Let’s see what the inhabitants of Fin n are. Using the FinZ constructor we can create a value of type Fin (S n), for any n. But Fin (S n) is not a single type — it’s a family of types parameterized by n. FinZ is an example of a polymorphic value. It can be passed to any function that expects Fin One, or Fin Two, etc., but not to one that expects Fin Z.

The FinS constructor takes a value of the type Fin n and produces a value of the type Fin (S n) — the successor of Fin n.

We will use values of the type Fin n to safely index vectors of size n:

ixV :: Fin n -> Vec n a -> a
ixV FinZ (x `VCons` _) = x
ixV (FinS fin_n) (_ `VCons` xs) = ixV fin_n xs

Any attempt at access beyond the end of a vector will result in a compilation error.
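
For instance, using the example vector v3 defined earlier (a usage sketch of mine):

secondOfThree :: Int
secondOfThree = ixV (FinS FinZ) v3
-- evaluates to 2; an index like FinS (FinS (FinS FinZ)) has the type
-- Fin (S (S (S (S n)))), which won't unify with Fin Three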

In our implementation of the tic-tac-toe board we’ll be using vectors of size Three. It’s easy to enumerate all members of Fin Three. These are:

FinZ             -- zero
FinS FinZ        -- one
FinS (FinS FinZ) -- two

We’ll also need to convert user input to board positions. Of course, not all inputs are valid, so the conversion function will return a Maybe value:

toFin3 :: Int -> Maybe (Fin Three)
toFin3 0 = Just FinZ
toFin3 1 = Just (FinS FinZ)
toFin3 2 = Just (FinS (FinS FinZ))
toFin3 _ = Nothing

Our tic-tac-toe board will be a 3×3 matrix of fields, optionally containing crosses or circles put there by the two players:

data Player = Cross | Circle
  deriving Eq

instance Show Player where
    show Cross  = " X "
    show Circle = " O "

type Board = Matrix Three Three (Maybe Player)

An empty board is filled with Nothing.
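
Here’s one possible definition of the empty board (a sketch of mine; the actual code on github may differ in details):

emptyBoard :: Board
emptyBoard = Matrix (row `VCons` (row `VCons` (row `VCons` VNil)))
  where
    row :: Vec Three (Maybe Player)
    row = Nothing `VCons` (Nothing `VCons` (Nothing `VCons` VNil))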

Moves

A move in the game consists of a player’s mark and two coordinates. The coordinates are compile-time limited to 0, 1, and 2 using the type Fin Three:

data Move = Move Player (Fin Three) (Fin Three)

The game engine will be dealing with trees of moves. The trees are edge-labeled, each edge corresponding to an actual or a potential move. The leaves contain no information; they are just sentinels.

A MoveTree is either a Leaf (a nullary constructor), or a Fan, whose constructor takes Trees n:

data MoveTree n where
    Leaf ::               MoveTree One
    Fan  :: Trees n    -> MoveTree n

Trees is defined as an empty list NilT, or a cons of a branch consisting of a Move and a MoveTree followed by a tail of Trees:

data Trees n where
    NilT ::                                  Trees Z
    (:+) :: (Move, MoveTree k) -> Trees m -> Trees (k + m)

infixr 5 :+

You may recognize this data structure as an edge-labeled version of a rose tree. Here are a few examples of MoveTrees.

t1 :: MoveTree One
t1 = Leaf

t2 :: MoveTree Z
t2 = Fan (NilT)

t3 :: MoveTree One
t3 = Fan $ (Move Cross (FinS FinZ) FinZ, Leaf) :+ NilT

t4 :: MoveTree Two
t4 = Fan $ (Move Circle FinZ FinZ, t3) 
        :+ (Move Circle FinZ (FinS FinZ), t3) 
        :+ NilT

The last tree describes two possible branches: A circle at (0, 0) followed by a cross at (1, 0); and a circle at (0, 1) followed by a cross at (1, 0).

Trees

The compile-time parameter n in MoveTree n counts the number of leaves.

Of special interest is the infix constructor (:+), which has to add up the number of leaves in all branches. Here, the addition (k + m) must be performed on types rather than values. To define addition on types we use a multi-parameter type family — a type family serving as the compile-time equivalent of a function. Here, the function is the infix operator (+). It takes two types of the kind Nat and produces a type of the kind Nat:

type family (+) (a :: Nat) (b :: Nat) :: Nat

The implementation of this compile-time function is defined inductively through two families of type instances. The base case covers the addition of zero on the left:

type instance Z + m = m

(This is an instance for the type family (+) written in the infix notation.)

The inductive step takes care of adding a successor of n, also on the left:

type instance S n + m = S (n + m)
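
To see both instances at work, here’s how the compiler reduces Two + One, step by step:

Two + One
  = S (S Z) + S Z     -- unfolding the alias Two
  = S (S Z + S Z)     -- inductive step
  = S (S (Z + S Z))   -- inductive step
  = S (S (S Z))       -- base case; this is Three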

Notice that the compiler won’t be able to deduce from these definitions that, for instance, m + Z is the same as m. We’ll have to do something special when the need arises — when we are forced to add a zero on the right. Compile-time arithmetic is funny that way.

Operad

The nice thing about move trees is that they are composable. It’s this composability that allows them to be used to speculatively predict multiple futures of a game. Given a current game tree, we can extend it by all possible moves of the computer player, and then extend it by all possible countermoves of the human opponent, and so on. This kind of grafting of trees on top of trees is captured by the operad.

What we are going to do is to consider our move trees as arrows with one or more inputs. Here things might get a little confusing, because a natural interpretation of a move tree is that its input is the first move, the root of the tree; and the leaves are the outputs. But for the sake of the operad, we’ll reverse the meaning of input and output.

In Haskell, we define a category by specifying the hom-set as a type. Then we define the composition of morphisms and pick the identity morphisms. We’ll do a similar thing with the operad. The difference is that an arrow in an operad is parameterized by the number of inputs (leaves of the tree). Continuing with the theme of compile-time safety, we’ll make this parameterization at compile-time.

The analog of the identity arrow will have a single input.

But how do we compose arrows that have multiple inputs? To compose an arrow with n inputs we need something that has n outputs. We can’t get n outputs from a single arrow (for n greater than 1) so we need a whole forest of arrows (with apologies for mixed metaphors). Composition in an operad connects an arrow to a forest. This is the definition:

class (Graded f) => Operad (f :: Nat -> *) where
  ident :: f (S Z)
  compose :: f n -> Forest f m n -> f m

Here, f is a compile-time function from Nat to a regular type — in other words, a data type parameterized by Nat. The identity has one input. Composition takes an n-ary arrow and a forest with m inputs and n outputs. As usual, the obvious identity and associativity laws are assumed but not expressible in Haskell. I’ll define the forest in a moment, but first let’s talk about the additional constraint, Graded f.

Conceptually, a Graded data type provides a way to retrieve its grade — or the count for a counted data structure — at runtime. But why would we need runtime grade information? Wasn’t the whole idea to perform the counting at compile time? It turns out that our compile-time Nats are great at parameterizing data structures. Types of the Nat kind can be used as phantom types. But the same trick won’t work for parameterizing polymorphic functions — there’s no place to insert phantom types into definitions of functions. A function type reflects the types of its arguments and the return type. So if we want to pass a compile-time count to a function, we have to do it through a dummy argument.

For that purpose we need a family of types parameterized by compile-time natural numbers. This time, though, the types must be inhabited, because we need to pass values of those types to functions. These values don’t have to carry any runtime information — they are only used to carry the type. It’s enough that each type be inhabited by a single dummy value, just like it is with the unit type (). Such types are called singleton types. Here’s the definition of the singleton natural number:

data SNat n where
  SZ :: SNat Z
  SS :: SNat n -> SNat (S n)

You can use it to create a series of values:

sZero :: SNat Z
sZero = SZ

sOne :: SNat One
sOne = SS SZ

sTwo :: SNat Two
sTwo = SS (SS SZ)

and so on…

You can also define a function for adding such values. It’s a polymorphic function that takes two singletons and produces another singleton. It really performs addition on types, but it gets the types at compile time from its arguments, and produces a singleton value of the correct type.

plus :: SNat n -> SNat m -> SNat (n + m)
plus SZ n = n
plus (SS n) m = SS (n `plus` m)
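
For example (my own usage sketch), adding the singletons for one and two yields a singleton whose type the compiler reduces to SNat Three:

sThree :: SNat Three
sThree = sOne `plus` sTwo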

The Graded typeclass is defined for counted types — types that are parameterized by Nats:

class Graded (f :: Nat -> *) where
  grade :: f n -> SNat n

Our MoveTrees are easily graded:

instance Graded MoveTree where
    grade Leaf = SS SZ
    grade (Fan ts) = grade ts

instance Graded Trees where
    grade NilT = SZ
    grade ((_, t) :+ ts) = grade t `plus` grade ts

With those preliminaries out of the way, we are ready to implement the Operad instance for the MoveTree. We pick the single leaf tree as our identity.

ident = Leaf

Before we define composition, we have to define a forest. It’s a list of trees parameterized by two compile-time natural numbers, which count the total number of inputs and outputs. A single tree f (our multi-input arrow) is parameterized by the number of inputs. It has the kind Nat->*.

data Forest f n m where
  Nil  :: Forest f Z Z 
  Cons :: f i1 -> Forest f i2 n -> Forest f (i1 + i2) (S n)

The Nil constructor creates an empty forest with zero inputs and zero outputs. The Cons constructor takes a tree with i1 inputs (and, implicitly, one output), and a forest with i2 inputs and n outputs. The result is a forest with i1+i2 inputs and n+1 outputs.

Composition in the operad has the following signature:

compose :: f n -> Forest f m n -> f m

It produces a tree by plugging the outputs of a forest into the inputs of a tree.

We’ll implement composition in multiple stages. First, we make sure that a single leaf is the left identity of our operad. The simplest case is when the right operand is a single-leaf forest:

compose Leaf (Cons Leaf Nil) = Leaf

A little complication arises when we want to compose the identity with a single-tree forest. Naively, we would like to write:

compose Leaf (Cons t Nil) = t

This should work, since the leaf has one input, and the single-tree forest has one output. Looking at the signature of compose, the compiler should be able to deduce that n in the definition of compose should be replaced by S Z. Let’s follow the arithmetic.

The forest is the result of Consing a tree with i1 inputs, and a Nil forest with Z inputs and Z outputs. By definition of Cons, the resulting forest has i1+Z inputs and S Z outputs. So the ns in compose match. The problem is with unifying the ms. The one from the forest is equal to i1+Z, and the one on the right hand side is i1. And herein lies the trouble: we are adding Z on the right of i1. As I mentioned before, the compiler has no idea that i1+Z is the same as i1. We’re stuck! The solution to this problem requires some cheating, as well as digging into the brave new world of constraint kinds.

Constraint Kinds

We want to tell the compiler that two types, n and (n + Z) are the same. Both types are of the kind Nat. Equality of types can be expressed as a constraint with the tilde between the two types:

n ~ (n + Z)

Constraints are inhabitants of a special kind called Constraint. Besides type equality, they can express typeclass constraints like Eq or Num.

The compiler treats constraints as if they were types and, in fact, lets you define type aliases for them:

type Stringy a = (Show a, Read a)

Here, Stringy, just like Show and Read, is of the kind * -> Constraint. Unlike regular types of kind *, constraints are not inhabited by values. You can use them as contexts in front of the double arrow, =>, but you can’t pass them as runtime values.

This situation is very similar to what we’ve seen with the Nat kind, which also contained uninhabited types. But with Nat we were able to reify those types by defining the corresponding singletons. A very similar trick works with Constraints. A reified constraint singleton is called a Dict:

data Dict :: Constraint -> * where
  Dict :: a => Dict a

In particular, if a is a typeclass constraint, you can think of Dict as a class dictionary — the generalization of a virtual table. There is in fact a hidden singleton that is passed by the compiler to functions with typeclass constraints. For instance, the function:

print :: Show a => a -> IO ()

is translated to a function of two variables, one of them being the virtual table for the typeclass Show. When you call print with an Int, the compiler finds the virtual table for the Show instance of Int and passes it to print.

The difference is that now we are trying to do explicitly what the compiler normally hides from us.

Notice that Dict has only one constructor that takes no arguments. You can construct a Dict from thin air. But because it’s a polymorphic value, you either have to specify what type of Dict you want to construct, or give the compiler enough information to figure it out on its own.

How do you specify the concrete type of a Dict? Dict is a type constructor of the kind Constraint->* so, to define a specific type, you need to provide a constraint. For instance, you could construct a dictionary using the constraint that the type One is the same as the type (One + Z):

myDict :: Dict (One ~ (One + Z))
myDict = Dict

This actually works, but it doesn’t generalize. What we really need is a whole family of singletons parameterized by n:

plusZ :: forall n. Dict (n ~ (n + Z))

But the compiler is not able to verify an infinite family of constraints. We are stuck!

When everything else fails, try cheating. Cheating in Haskell is called unsafeCoerce. We can take a dictionary that we know exists, for instance that of (n ~ n) and force the compiler to believe that it’s the right type:

plusZ :: forall n. Dict (n ~ (n + Z))
plusZ = unsafeCoerce (Dict :: Dict (n ~ n))

This is to be expected: We are hitting the limits of Haskell. Haskell is not a dependent type language and it’s not a theorem prover. It’s possible to avoid some of the ugliness by using TypeLits, but I wanted to show you the low level details.

To truly understand the meaning of constraints, we should take a moment to talk about the Curry-Howard isomorphism. It tells us that types are equivalent to propositions: logical statements that can be either true or false. A type that is inhabited corresponds to a true statement. Most data types we define in a program are clearly inhabited. They have constructors that let us create values — the inhabitants of a given type. Then there are function types, which may or may not be inhabited. If you can implement a function of a given type, then you have a proof that this type is inhabited. Things get really interesting when you consider polymorphic functions. They correspond to propositions with quantifiers. We know, for instance, that the type a->a is inhabited for all a — we have the proof: the identity function.

A type like Dict is even more interesting. It explicitly specifies the condition under which it is inhabited. The type Dict a is inhabited if the constraint a is true. For instance, (n ~ n) is true, so the corresponding dictionary, Dict (n ~ n), can be constructed. What’s even more interesting is that, if you can hand the compiler an instance of a particular dictionary, it is proof enough that the constraint it encapsulates is true. The actual value of plusZ is irrelevant but its existence is critical.

So how do we bring it to the compiler’s attention? One way is to pass the dictionary as an argument to a function, but that’s awkward. In our case, the signature of the function compose is fixed. A better option is to bring a proof to the local scope by pattern matching.

compose Leaf (Cons (t :: MoveTree m) Nil) = 
    case plusZ :: Dict (m ~ (m + Z)) of Dict -> t

Notice how we first introduce m into the scope by explicitly typing t inside the pattern for Forest. We fix the type of t to be:

MoveTree m

Then we explicitly type the value of plusZ, our global singleton, to be:

Dict (m ~ (m + Z))

This lets the compiler unify the n in the original definition of plusZ with our local m. Finally we pattern-match plusZ to its constructor, Dict. Obviously, the match will succeed. We don’t care about the result of this match, except that it introduces the proof of (m ~ (m + Z)) into the inner scope. It will let the compiler complete the type checking by unifying the actual type of t with the expected return type of compose.

Splitting the Forest

So far we have dealt with the simple cases of operadic composition, the ones where the left hand side had just one input. The general case involves connecting a tree that has k inputs to a forest that has k outputs and an arbitrary number of inputs. A MoveTree that is not a single Leaf is a Fan of Trees, which can be further split into the head tree and the tail. This corresponds to the pattern:

compose (Fan ((mv, t) :+ ts)) frt

We will proceed by recursion. The base case is the empty Fan:

compose (Fan NilT) Nil = Fan NilT

In the recursive case we have to split the forest frt into the part that matches the inputs of the tree t, and the remainder. The number of inputs of t is given by its grade — that’s why we needed the operad to be Graded.

If Forest were a simple list of trees, splitting it would be trivial: there’s even a function called splitAt in the Prelude. The fact that a Forest is counted makes it more interesting. But the real problem is that a Forest is parameterized by both the number of inputs and outputs. We want to separate a certain number of outputs, say m, but we have no idea how many inputs, i1, will go with that number of outputs. It depends on how much the individual trees branch inside the forest.

To see the problem, let’s try to come up with a signature for splitForest. It should look something like this:

splitForest :: SNat m -> SNat n -> Forest f i (m + n) 
    -> (Forest f i1 m, Forest f i2 n)

But what are i1 and i2? All we know is that they exist and that they should add up to i. If there were an existential quantifier in Haskell, we could try writing something like this:

splitForest :: exists i1 i2. (i1 + i2 ~ i) => 
    SNat m -> SNat n -> Forest f i (m + n)
    -> (Forest f i1 m, Forest f i2 n)

We can’t do exactly that, but this pseudocode suggests a neat workaround. The existential quantifier may be replaced by a universal quantifier under a CPS transformation. There is a Curry-Howard reason for that, which has to do with CPS representing logical negation. But this can also be easily explained programmatically: since we cannot predict how the inputs will split in the general case, instead of returning a concrete result we may ask the caller to provide a function — a continuation — that can accept an arbitrary split and take over from there. The continuation itself must be universally quantified: it must work for all splits. Here’s the signature of the continuation:

(forall i1 i2. (i ~ (i1 + i2)) => 
        (Forest f i1 m, Forest f i2 n) -> r)

As usual, when doing a CPS transform we don’t care what the type r is — in fact, we have to universally quantify over it. And since we have a local constraint that involves i, we have to bring i into the inner scope. The way to scope type variables in Haskell is to explicitly quantify over them. And once you quantify over one type variable, you have to quantify over all of them. That’s why the declaration of splitForest starts with one giant quantifier:

forall m n i f r

Putting it all together, here’s the final type signature of splitForest:

splitForest :: forall m n i f r. SNat m -> SNat n -> Forest f i (m+n)
    -> (forall i1 i2. (i ~ (i1 + i2)) => 
        (Forest f i1 m, Forest f i2 n) -> r) 
    -> r

We will implement splitForest using recursion. The base case splits the forest at offset zero. It simply calls the continuation k with a pair consisting of an empty fragment and the unchanged forest:

splitForest SZ _ fs k = k (Nil, fs)

The recursive case is conceptually simple. The offset at which you split the forest is the successor of some number represented by a singleton sm. The forest itself is a Cons of a tree t and some tail ts. We want to split this tail into two fragments at sm — one less than (SS sm). We return the pair whose first component is the Cons of the tree t and the first fragment, and whose second component is the second fragment. Except that, instead of returning, we call the continuation. And in order to split the tail, we have to create another continuation to accept the fragments. So here’s the skeleton of the implementation:

splitForest (SS sm) 
            sn 
            (Cons t ts) 
            k =
    splitForest sm sn ts $
        \(m_frag, n_frag) -> k (Cons t m_frag, n_frag)

To make this compile, we need to fill in some of the type signatures. In particular, we need to extract the number of inputs i1 and i2 from the constituents of the forest. We also have to extract the number of inputs i3 and i4 of the fragments. Finally, we have to tell the compiler that addition is associative. I won’t go into the gory details, I’ll just show you the final implementation:

splitForest (SS (sm :: SNat m_1)) 
            sn 
            (Cons (t :: f i1) (ts :: Forest f i2 (m_1 + n))) 
            k =
    splitForest sm sn ts $
        \((m_frag :: Forest f i3 m_1), (n_frag :: Forest f i4 n)) ->
            case plusAssoc (Proxy :: Proxy i1) 
                           (Proxy :: Proxy i3) 
                           (Proxy :: Proxy i4) of 
               Dict -> k (Cons t m_frag, n_frag)

But what’s this Proxy business? The compiler is having — again — a problem with simple arithmetic. This time it’s the associativity of addition. We have to provide a proof that:

((i1 + i3) + i4) ~ (i1 + (i3 + i4))

But this time we can’t fake it with a polymorphic value, as we did with plusZ, which was parameterized by a single type of the kind Nat. We have to fake it with a polymorphic function:

plusAssoc :: p a -> q b -> r c -> Dict (((a + b) + c) ~ (a + (b + c)))
plusAssoc _ _ _ = unsafeCoerce (Dict :: Dict (a ~ a))

Here p, q, and r are some arbitrary type constructors of the kind Nat->*. It doesn’t matter what the values of the arguments are, as long as they introduce the three (uninhabited) types, a, b, and c, into the scope. Proxy is a very simple polymorphic singleton type:

data Proxy t = Proxy

We create three Proxy values and call the function plusAssoc, which returns a dictionary that witnesses the associativity of the addition of the three Nats.

Equipped with the function splitForest, we can now complete our Operad instance:

instance Operad MoveTree where
    ident = Leaf
    compose Leaf (Cons Leaf Nil) = Leaf
    compose Leaf (Cons (t :: MoveTree m) Nil) = 
        case plusZ :: Dict (m ~ (m + Z)) of Dict -> t
    compose (Fan NilT) Nil = Fan NilT
    compose (Fan ((mv, t) :+ ts)) frt = 
        Fan $ splitForest (grade t) (grade ts) frt $
              \(mts1, mts2) ->
                 let tree  = (compose t mts1)
                     (Fan trees) = (compose (Fan ts) mts2)
                 in (mv, tree) :+ trees
    compose _ _ = error "compose!"

The Comonad

A comonad is the dual of a monad. Just like a monad lets you lift a value using return, a comonad lets you extract a value. And just like a monad lets you collapse double encapsulation to single encapsulation using join, a comonad lets you duplicate the encapsulation.

class Functor w => Comonad w where
   extract :: w a -> a
   duplicate :: w a -> w (w a)

In other words, a monad lets you put stuff in and reduce whereas a comonad lets you take stuff out and reproduce.

A list monad, for instance, implements return by constructing a singleton list, and join by concatenating a list of lists.

An infinite list, or a stream comonad, implements extract by accessing the head of the list and duplicate by creating a stream of consecutive tails.
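
Here’s a minimal sketch of that stream comonad (my own illustration, written against the Comonad class defined above):

data Stream a = Stream a (Stream a)

instance Functor Stream where
    fmap g (Stream a as) = Stream (g a) (fmap g as)

instance Comonad Stream where
    extract (Stream a _) = a
    duplicate s@(Stream _ as) = Stream s (duplicate as)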

An operad can be used to define both a monad and a comonad. The monad M combines an operadic tree of n inputs with a vector of n elements.

data M f a where
   M :: f n -> Vec n a -> M f a

Monadic return combines the operadic identity with a singleton vector, whereas join grafts the operadic trees stored in the vector into the operad using compose and then concatenates the vectors.
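
For instance, return can be sketched as follows (returnM is my own name, to avoid spelling out the full Monad instance; join is more involved, since it has to graft the trees using compose):

returnM :: Operad f => a -> M f a
returnM a = M ident (VCons a VNil)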

The comonad W is also pretty straightforward. It’s defined as a polymorphic function, the evaluator, that takes an operadic tree f n and produces a vector Vec n a:

newtype W f a = W { runW :: forall n. f n -> Vec n a }

It’s obviously a functor:

instance Functor (W f) where
    fmap g (W k) = W $ \f -> fmap g (k f)

Comonadic extract calls the evaluator with the identity operad and extracts the value from the singleton vector:

extract (W k) = case k ident of
    VCons a VNil -> a

The implementation of duplicate is a bit more involved. Its signature is:

duplicate :: W f a -> W f (W f a)

Given the evaluator inside W f a:

ev :: forall n. f n -> Vec n a

it has to produce another evaluator:

forall m. f m -> Vec m (W f a)

This function, when called with an operadic tree f m, which I’ll call the outer tree, must produce m new evaluators.

What should the kth such evaluator do when called with an inner tree? The obvious thing is to graft the inner tree at the kth input of the outer tree. We can saturate the rest of the inputs of the outer tree with identities. Then we’ll call the evaluator ev with this new, larger tree to get a larger vector. Our desired result will be in the middle of this vector, at offset k.

This is the complete implementation of the comonad:

instance Operad f => Comonad (W f) where
  extract (W k) = case k ident of
    VCons a VNil -> a
  duplicate (W ev :: W f a) = W $ \f -> go f SZ (grade f)
    where
      -- n increases, m decreases
      -- n starts at zero, m starts at (grade f)
      go :: f (n + m) -> SNat n -> SNat m -> Vec m (W f a)
      go _ _ SZ = VNil
      go f n (SS m) =  case succAssoc n m of 
          Dict -> W ev' `VCons` go f (SS n) m
        where
          ev' :: f k -> Vec k a
          ev' fk = middleV n (grade fk) m 
                           (ev (f `compose` plantTreeAt n m fk))

The helpers plantTreeAt and middleV (which build the identity-padded forest and carve the middle fragment out of the resulting vector) are defined in the full source on github. As usual, we had to help the compiler with the arithmetic. This time it was the associativity of the successor:

succAssoc :: p a -> q b -> Dict ((a + S b) ~ S (a + b))
succAssoc _ _ = unsafeCoerce (Dict :: Dict (a ~ a))

Notice that we didn’t have to use the Proxy trick in succAssoc n m, since we had the singletons handy.

The Tic Tac Toe Comonad

The W comonad works with any operad; in particular, it will work with our MoveTree.

type TicTacToe = W MoveTree Evaluation

We want the evaluator for this comonad to produce a vector of Evaluations, which we will define as:

type Evaluation = (Score, MoveTree One)

The scoring is done from the perspective of the computer. A Bad move is a move that falls on an already marked square. A Good move carries with it an integer score:

data Score = Bad | Win | Lose | Good Int
  deriving (Show, Eq)

Evaluation includes a single-branch MoveTree One, which is the list of moves that led to this evaluation. In particular, the singleton Evaluation returned by extract will contain the history of moves up to the current point in the game.

Let’s see what duplicate does in our case. It produces a vector of TicTacToe games, each containing a new evaluator. These new evaluators, when called with a move tree (a single move, a tree of 9 possible moves, a tree of 81 possible moves and responses, etc.), will graft this tree to the corresponding leaf of the previous game tree and perform the evaluation. We’ll call duplicate after every move and pick one of the resulting games (evaluators).

The Evaluator

This blog post is mostly about operads and comonads, so I won’t go into a lot of detail about implementing game strategy. I’ll just give a general overview, and if you’re curious, you can view the code on github.

The heart of the operadic comonad is the evaluator function. To start the whole process running, we’ll create the initial board. The function eval takes a board and a move tree; partially applying eval to the board gives us the evaluator.

main :: IO ()
main = do
    putStrLn "Make your moves by entering x y coordinates 1..3 1..3."
    let board = emptyBoard
        game = W (eval board)
    play board game

The evaluator is a function that takes a MoveTree and returns a vector of Evaluation. If the tree is just a single leaf (that’s the identity of our operad), the evaluation is trivial. The interesting part is the evaluation of a Fan of branches.

eval :: Board -> MoveTree n -> Vec n Evaluation
eval board moves = case moves of
    Leaf   -> singleV (Good 0, Leaf)
    Fan ts -> evalTs (evalBranch board) ts

The function evalTs iterates over branches, applying a branch evaluator to each tree and concatenating the resulting evaluation vectors. The only tricky part is that each branch may end in a different number of leaves, so the branch evaluator must be polymorphic in k:

evalTs :: (forall k. (Move, MoveTree k) -> Vec k Evaluation) 
          -> Trees n 
          -> Vec n Evaluation
evalTs _ NilT = VNil
evalTs ev (br :+ ts) = concatV (ev br) (evalTs ev ts)

The branch evaluator must account for the possibility that a move might be invalid — it has to test whether the square has already been marked on the board. If it’s not, it marks the board and evaluates the move.

First, there are the simple cases, in which the result is known immediately: the move could be Bad, Win, or Lose. In those cases evalBranch returns a vector of the size determined by the number of leaves in the branch, filled with the appropriate value.

The interesting case is when the move is neither invalid nor decisive. In that case we recurse into eval with the new board and the sub-tree that follows the move in question. We gather the resulting evaluations and adjust the scores. If any of the branches results in a loss, we lower the score on all of them. Otherwise we add the score of the current move to all scores for that tree.

Game Logic

At the very top level we have the game loop, which takes input from the user and responds with the computer’s move. A user move must be tested for correctness. First it’s converted to two Fin Three values (or Nothing). Then we create a singleton MoveTree with that move and pass it to the evaluator. If the move is invalid, we continue prompting the user. If the move is decisive, we announce the winner. Otherwise, we advance the game by calling duplicate, and then pick the new evaluator from the resulting tree of comonadic values — the one corresponding to the user move.

To generate the computer response, we create a two-deep tree of all possible moves (that is, one computer move and one user move — that seems to be enough depth to win or tie every time). We call the evaluator with that tree and pick the best result. Again, if it’s a decisive move, we announce the winner. Otherwise, we call duplicate again, and pick the new evaluator corresponding to the selected move.

Conclusion

Does it make sense to implement tic-tac-toe using such heavy machinery? Not really! But it makes sense as an exercise in compile-time safety guarantees. I wouldn’t mind if those techniques were applied to writing software that makes life-and-death decisions. Nuclear reactors, killer drones, or airplane auto-pilots come to mind. Fast stock-trading software, even though it cannot kill you directly, can also be mission critical, if you’re attached to your billions. What’s overkill in one situation may save your life in another. You need different tools for different tasks, and Haskell provides the options.

The full source is available on github.

Thanks go to André van Meulebrouck for his editing help.


This is part 15 of Categories for Programmers. Previously: Representable Functors. See the Table of Contents.

Most constructions in category theory are generalizations of results from other more specific areas of mathematics. Things like products, coproducts, monoids, exponentials, etc., have been known long before category theory. They might have been known under different names in different branches of mathematics. A cartesian product in set theory, a meet in order theory, a conjunction in logic — they are all specific examples of the abstract idea of a categorical product.

The Yoneda lemma stands out in this respect as a sweeping statement about categories in general with little or no precedent in other branches of mathematics. Some say that its closest analog is Cayley’s theorem in group theory (every group is isomorphic to a permutation group of some set).

The setting for the Yoneda lemma is an arbitrary category C together with a functor F from C to Set. We’ve seen in the previous section that some Set-valued functors are representable, that is, isomorphic to a hom-functor. The Yoneda lemma tells us that all Set-valued functors can be obtained from hom-functors through natural transformations, and it explicitly enumerates all such transformations.

When I talked about natural transformations, I mentioned that the naturality condition can be quite restrictive. When you define a component of a natural transformation at one object, naturality may be strong enough to “transport” this component to another object that is connected to it through a morphism. The more arrows between objects in the source and the target categories there are, the more constraints you have for transporting the components of natural transformations. Set happens to be a very arrow-rich category.

The Yoneda lemma tells us that a natural transformation between a hom-functor and any other functor F is completely determined by specifying the value of its single component at just one point! The rest of the natural transformation just follows from naturality conditions.

So let’s review the naturality condition between the two functors involved in the Yoneda lemma. The first functor is the hom-functor. It maps any object x in C to the set of morphisms C(a, x) — for a a fixed object in C. We’ve also seen that it maps any morphism f from x to y to C(a, f).

The second functor is an arbitrary Set-valued functor F.

Let’s call the natural transformation between these two functors α. Because we are operating in Set, the components of the natural transformation, like αx or αy, are just regular functions between sets:

αx :: C(a, x) -> F x
αy :: C(a, y) -> F y

And because these are just functions, we can look at their values at specific points. But what’s a point in the set C(a, x)? Here’s the key observation: Every point in the set C(a, x) is also a morphism h from a to x.

So the naturality square for α:

αy ∘ C(a, f) = F f ∘ αx

becomes, point-wise, when acting on h:

αy (C(a, f) h) = (F f) (αx h)

You might recall from the previous section that the action of the hom-functor C(a, _) on a morphism f was defined as precomposition:

C(a, f) h = f ∘ h

which leads to:

αy (f ∘ h) = (F f) (αx h)

Just how strong this condition is can be seen by specializing it to the case of x equal to a.

In that case h becomes a morphism from a to a. We know that there is at least one such morphism, h = ida. Let’s plug it in:

αy f = (F f) (αa ida)

Notice what has just happened: The left hand side is the action of αy on an arbitrary element f of C(a, y). And it is totally determined by the single value of αa at ida. We can pick any such value and it will generate a natural transformation. Since the values of αa are in the set F a, any point in F a will define some α.

Conversely, given any natural transformation α from C(a, _) to F, you can evaluate it at ida to get a point in F a.

We have just proven the Yoneda lemma:

There is a one-to-one correspondence between natural transformations from C(a, _) to F and elements of F a.

in other words,

Nat(C(a, _), F) ≅ F a

I’ll explain later how this correspondence is in fact a natural isomorphism.

Now let’s try to get some intuition about this result. The most amazing thing is that the whole natural transformation crystallizes from just one nucleation site: the value we assign to it at ida. It spreads from that point following the naturality condition. It floods the image of C in Set. So let’s first consider what the image of C is under C(a, _).

Let’s start with the image of a itself. Under the hom-functor C(a, _), a is mapped to the set C(a, a). Under the functor F, on the other hand, it is mapped to the set F a. The component of the natural transformation αa is some function from C(a, a) to F a. Let’s focus on just one point in the set C(a, a), the point corresponding to the morphism ida. To emphasize the fact that it’s just a point in a set, let’s call it p. The component αa should map p to some point q in F a. I’ll show you that any choice of q leads to a unique natural transformation.

The first claim is that the choice of one point q uniquely determines the rest of the function αa. Indeed, let’s pick any other point, p', in C(a, a), corresponding to some morphism g from a to a. And here’s where the magic of the Yoneda lemma happens: besides being a point p' in the set C(a, a), g also selects two functions between sets. Under the hom-functor, the morphism g is mapped to the function C(a, g); and under F it’s mapped to F g.

Now let’s consider the action of C(a, g) on our original p. It is defined as precomposition, g∘ida, which is equal to g, which corresponds to our point p'. So the function to which g is mapped, when acting on p, produces p', which is g itself. We have come full circle!

Now consider the action of F g on q. It is some q', a point in F a. To complete the naturality square, p' must be mapped to q' under αa. We picked an arbitrary p' (an arbitrary g) and derived its mapping under αa. The function αa is thus completely determined.

The second claim is that αx is uniquely determined for any object x in C that is connected to a. The reasoning is analogous, except that now we have two more sets, C(a, x) and F x, and the morphism g from a to x is mapped, under the hom-functor, to:

C(a, g) :: C(a, a) -> C(a, x)

and under F to:

F g :: F a -> F x

Again, C(a, g) acting on our p is given by the precomposition: g ∘ ida, which corresponds to a point p' in C(a, x). Naturality determines the value of αx acting on p' to be:

q' = (F g) q

Since p' was arbitrary, the whole function αx is thus determined.

What if there are objects in C that have no connection to a? They are all mapped under C(a, _) to a single set — the empty set. Recall that the empty set is the initial object in the category of sets. It means that there is a unique function from this set to any other set. We called this function absurd. So here, again, we have no choice for the component of the natural transformation: it can only be absurd.

One way of understanding the Yoneda lemma is to realize that natural transformations between Set-valued functors are just families of functions, and functions are in general lossy. A function may collapse information and it may cover only parts of its codomain. The only functions that are not lossy are the ones that are invertible — the isomorphisms. It follows then that the best structure-preserving Set-valued functors are the representable ones. They are either the hom-functors or the functors that are naturally isomorphic to hom-functors. Any other functor F is obtained from a hom-functor through a lossy transformation. Such a transformation may not only lose information, but it may also cover only a small part of the image of the functor F in Set.

Yoneda in Haskell

We have already encountered the hom-functor in Haskell under the guise of the reader functor:

type Reader a x = a -> x

The reader maps morphisms (here, functions) by precomposition:

instance Functor (Reader a) where
    fmap f h = f . h
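
Strictly speaking, a type synonym can’t be partially applied, so the instance above is pseudo-Haskell. One standard workaround (mine, not part of the original text) is to wrap the function type in a newtype:

newtype Reader a x = Reader (a -> x)

instance Functor (Reader a) where
    fmap f (Reader h) = Reader (f . h)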

The Yoneda lemma tells us that the reader functor can be naturally mapped to any other functor.

A natural transformation is a polymorphic function. So given a functor F, we have a mapping to it from the reader functor:

alpha :: forall x . (a -> x) -> F x

As usual, forall is optional, but I like to write it explicitly to emphasize parametric polymorphism of natural transformations.

The Yoneda lemma tells us that these natural transformations are in one-to-one correspondence with the elements of F a:

forall x . (a -> x) -> F x ≅ F a

The right hand side of this identity is what we would normally consider a data structure. Remember the interpretation of functors as generalized containers? F a is a container of a. But the left hand side is a polymorphic function that takes a function as an argument. The Yoneda lemma tells us that the two representations are equivalent — they contain the same information.

Another way of saying this is: Give me a polymorphic function of the type:

alpha :: forall x . (a -> x) -> F x

and I’ll produce a container of a. The trick is the one we used in the proof of the Yoneda lemma: we call this function with id to get an element of F a:

alpha id :: F a

The converse is also true: Given a value of the type F a:

fa :: F a

one can define a polymorphic function:

alpha h = fmap h fa

of the correct type. You can easily go back and forth between the two representations.

The advantage of having multiple representations is that one might be easier to compose than the other, or that one might be more efficient in some applications than the other.

The simplest illustration of this principle is the code transformation that is often used in compiler construction: the continuation passing style or CPS. It’s the simplest application of the Yoneda lemma to the identity functor. Replacing F with identity produces:

forall r . (a -> r) -> r ≅ a

The interpretation of this formula is that any type a can be replaced by a function that takes a “handler” for a. A handler is a function accepting a and performing the rest of the computation — the continuation. (The type r usually encapsulates some kind of status code.)
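
In Haskell (a sketch with names of my own choosing, and with RankNTypes enabled), the two directions of this isomorphism are:

toCPS :: a -> (forall r. (a -> r) -> r)
toCPS a = \k -> k a

fromCPS :: (forall r. (a -> r) -> r) -> a
fromCPS f = f id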

This style of programming is very common in UIs, in asynchronous systems, and in concurrent programming. The drawback of CPS is that it involves inversion of control. The code is split between producers and consumers (handlers), and is not easily composable. Anybody who’s done any amount of nontrivial web programming is familiar with the nightmare of spaghetti code from interacting stateful handlers. As we’ll see later, judicious use of functors and monads can restore some compositional properties of CPS.

Co-Yoneda

As usual, we get a bonus construction by inverting the direction of arrows. The Yoneda lemma can be applied to the opposite category Cop to give us a mapping between contravariant functors.

Equivalently, we can derive the co-Yoneda lemma by fixing the target object of our hom-functors instead of the source. We get the contravariant hom-functor from C to Set: C(_, a). The contravariant version of the Yoneda lemma establishes one-to-one correspondence between natural transformations from this functor to any other contravariant functor F and the elements of the set F a:

Nat(C(_, a), F) ≅ F a

Here’s the Haskell version of the co-Yoneda lemma:

forall x . (x -> a) -> F x ≅ F a

Notice that in some literature it’s the contravariant version that’s called the Yoneda lemma.
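
In Haskell, the two directions of the co-Yoneda bijection mirror the phi and psi of the covariant case, with fmap replaced by contramap from Data.Functor.Contravariant (a sketch of mine, for a contravariant F):

import Data.Functor.Contravariant

phiContra :: (forall x. (x -> a) -> f x) -> f a
phiContra alpha = alpha id

psiContra :: Contravariant f => f a -> (forall x. (x -> a) -> f x)
psiContra fa h = contramap h fa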

Challenges

  1. Show that the two functions phi and psi that form the Yoneda isomorphism in Haskell are inverses of each other.
    phi :: (forall x . (a -> x) -> F x) -> F a
    phi alpha = alpha id
    psi :: F a -> (forall x . (a -> x) -> F x)
    psi fa h = fmap h fa
  2. A discrete category is one that has objects but no morphisms other than identity morphisms. How does the Yoneda lemma work for functors from such a category?
  3. A list of units [()] contains no other information but its length. So, as a data type, it can be considered an encoding of integers. An empty list encodes zero, a singleton [()] (a value, not a type) encodes one, and so on. Construct another representation of this data type using the Yoneda lemma for the list functor.

Next: Yoneda Embedding.

Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.


This is part 14 of Categories for Programmers. Previously: Free Monoids. See the Table of Contents.

It’s about time we had a little talk about sets. Mathematicians have a love/hate relationship with set theory. It’s the assembly language of mathematics — at least it used to be. Category theory tries to step away from set theory, to some extent. For instance, it’s a known fact that the set of all sets doesn’t exist, but the category of all sets, Set, does. So that’s good. On the other hand, we assume that morphisms between any two objects in a category form a set. We even called it a hom-set. To be fair, there is a branch of category theory where morphisms don’t form sets. Instead they are objects in another category. Those categories that use hom-objects rather than hom-sets are called enriched categories. In what follows, though, we’ll stick to categories with good old-fashioned hom-sets.

A set is the closest thing to a featureless blob you can get outside of categorical objects. A set has elements, but you can’t say much about these elements. If you have a finite set, you can count the elements. You can kind of count the elements of an infinite set using cardinal numbers. The set of natural numbers, for instance, is smaller than the set of real numbers, even though both are infinite. But, maybe surprisingly, the set of rational numbers is the same size as the set of natural numbers.

Other than that, all the information about sets can be encoded in functions between them — especially the invertible ones called isomorphisms. For all intents and purposes isomorphic sets are identical. Before I summon the wrath of foundational mathematicians, let me explain that the distinction between equality and isomorphism is of fundamental importance. In fact it is one of the main concerns of the latest branch of mathematics, the Homotopy Type Theory (HoTT). I’m mentioning HoTT because it’s a pure mathematical theory that takes inspiration from computation, and one of its main proponents, Vladimir Voevodsky, had a major epiphany while studying the Coq theorem prover. The interaction between mathematics and programming goes both ways.

The important lesson about sets is that it’s okay to compare sets of unlike elements. For instance, we can say that a given set of natural transformations is isomorphic to some set of morphisms, because a set is just a set. Isomorphism in this case just means that for every natural transformation from one set there is a unique morphism from the other set and vice versa. They can be paired against each other. You can’t compare apples with oranges, if they are objects from different categories, but you can compare sets of apples against sets of oranges. Often transforming a categorical problem into a set-theoretical problem gives us the necessary insight or even lets us prove valuable theorems.

The Hom Functor

Every category comes equipped with a canonical family of mappings to Set. Those mappings are in fact functors, so they preserve the structure of the category. Let’s build one such mapping.

Let’s fix one object a in C and pick another object x also in C. The hom-set C(a, x) is a set, an object in Set. When we vary x, keeping a fixed, C(a, x) will also vary in Set. Thus we have a mapping from x to Set.

If we want to stress the fact that we are considering the hom-set as a mapping in its second argument, we use the notation:

C(a, _)

with the underscore serving as the placeholder for the argument.

This mapping of objects is easily extended to the mapping of morphisms. Let’s take a morphism f in C between two arbitrary objects x and y. The object x is mapped to the set C(a, x), and the object y is mapped to C(a, y), under the mapping we have just defined. If this mapping is to be a functor, f must be mapped to a function between the two sets:

C(a, x) -> C(a, y)

Let’s define this function point-wise, that is, for each argument separately. For the argument we should pick an arbitrary element of C(a, x) — let’s call it h. Morphisms are composable if they match end to end. It so happens that the target of h matches the source of f, so their composition:

f ∘ h :: a -> y

is a morphism going from a to y. It is therefore a member of C(a, y).


We have just found our function from C(a, x) to C(a, y), which can serve as the image of f. If there is no danger of confusion, we’ll write this lifted function as:

C(a, f)

and its action on a morphism h as:

C(a, f) h = f ∘ h

Since this construction works in any category, it must also work in the category of Haskell types. In Haskell, the hom-functor is better known as the Reader functor (a newtype wrapper is needed, since a type synonym cannot be made an instance of Functor):

newtype Reader a x = Reader (a -> x)

instance Functor (Reader a) where
    fmap f (Reader h) = Reader (f . h)

Now let’s consider what happens if, instead of fixing the source of the hom-set, we fix the target. In other words, we’re asking whether the mapping

C(_, a)

is also a functor. It is, but instead of being covariant, it’s contravariant. That’s because the same kind of matching of morphisms end to end now results in precomposition with f, rather than postcomposition, as was the case with C(a, _).

We have already seen this contravariant functor in Haskell. We called it Op (again wrapped in a newtype, so that it can be given a Contravariant instance):

newtype Op a x = Op (x -> a)

instance Contravariant (Op a) where
    contramap f (Op h) = Op (h . f)

Finally, if we let both objects vary, we get a profunctor C(_, _), which is contravariant in the first argument and covariant in the second. We have seen this profunctor before, when we talked about functoriality:

instance Profunctor (->) where
  dimap ab cd bc = cd . bc . ab
  lmap = flip (.)
  rmap = (.)

The important lesson is that this observation holds in any category: the mapping of objects to hom-sets is functorial. Since contravariance is equivalent to a mapping from the opposite category, we can state this fact succinctly as:

C(_, _) :: Cop × C -> Set

Representable Functors

We’ve seen that, for every choice of an object a in C, we get a functor from C to Set. This kind of structure-preserving mapping to Set is often called a representation. We are representing objects and morphisms of C as sets and functions in Set.

The functor C(a, _) itself is sometimes called representable. More generally, any functor F that is naturally isomorphic to the hom-functor, for some choice of a, is called representable. Such a functor must necessarily be Set-valued, since C(a, _) is.

I said before that we often think of isomorphic sets as identical. More generally, we think of isomorphic objects in a category as identical. That’s because objects have no structure other than their relation to other objects (and themselves) through morphisms.

For instance, we’ve previously talked about the category of monoids, Mon, that was initially modeled with sets. But we were careful to pick as morphisms only those functions that preserved the monoidal structure of those sets. So if two objects in Mon are isomorphic, meaning there is an invertible morphism between them, they have exactly the same structure. If we peeked at the sets and functions that they were based upon, we’d see that the unit element of one monoid was mapped to the unit element of another, and that a product of two elements was mapped to the product of their mappings.

The same reasoning can be applied to functors. Functors between two categories form a category in which natural transformations play the role of morphisms. So two functors are isomorphic, and can be thought of as identical, if there is an invertible natural transformation between them.

Let’s analyze the definition of the representable functor from this perspective. For F to be representable we require that there be an object a in C; a natural transformation α from C(a, _) to F; another natural transformation, β, in the opposite direction; and that their composition be the identity natural transformation.

Let’s look at the component of α at some object x. It’s a function in Set:

αx :: C(a, x) -> F x

The naturality condition for this transformation tells us that, for any morphism f from x to y, the following diagram commutes:

F f ∘ αx = αy ∘ C(a, f)

In Haskell, we would replace natural transformations with polymorphic functions:

alpha :: forall x. (a -> x) -> F x

with the optional forall quantifier. The naturality condition

fmap f . alpha = alpha . fmap f

is automatically satisfied due to parametricity (it’s one of those theorems for free I mentioned earlier), with the understanding that fmap on the left is defined by the functor F, whereas the one on the right is defined by the reader functor. Since fmap for reader is just postcomposition (f is composed on the left of its argument), we can be even more explicit. Acting on h, an element of C(a, x), the naturality condition simplifies to:

fmap f (alpha h) = alpha (f . h)

The other transformation, beta, goes the opposite way:

beta :: forall x. F x -> (a -> x)

It must respect naturality conditions, and it must be the inverse of α:

α ∘ β = id = β ∘ α

We will see later that a natural transformation from C(a, _) to any Set-valued functor always exists (Yoneda’s lemma) but it is not necessarily invertible.

Let me give you an example in Haskell with the list functor and Int as a. Here’s a natural transformation that does the job:

alpha :: forall x. (Int -> x) -> [x]
alpha h = map h [12]

I have arbitrarily picked the number 12 and created a singleton list with it. I can then fmap the function h over this list and get a list of the type returned by h. (There are actually as many such transformations as there are integers.)

The naturality condition is equivalent to the composability of map (the list version of fmap):

map f (map h [12]) = map (f . h) [12]

But if we tried to find the inverse transformation, we would have to go from a list of arbitrary type x to a function returning x:

beta :: forall x. [x] -> (Int -> x)

You might think of retrieving an x from the list, e.g., using head, but that won’t work for an empty list. Notice that there is no choice for the type a (in place of Int) that would work here. So the list functor is not representable.

Remember when we talked about Haskell (endo-) functors being a little like containers? In the same vein we can think of representable functors as containers for storing memoized results of function calls (the members of hom-sets in Haskell are just functions). The representing object, the type a in C(a, _), is thought of as the key type, with which we can access the tabulated values of a function. The transformation we called α is called tabulate, and its inverse, β, is called index. Here’s a (slightly simplified) Representable class definition:

class Representable f where
   type Rep f :: *
   tabulate :: (Rep f -> x) -> f x
   index    :: f x -> Rep f -> x

Notice that the representing type, our a, which is called Rep f here, is part of the definition of Representable. The star just means that Rep f is a type (as opposed to a type constructor, or other more exotic kinds).

Infinite lists, or streams, which cannot be empty, are representable.

data Stream x = Cons x (Stream x)

You can think of them as memoized values of a function taking an Integer as an argument. (Strictly speaking, I should be using non-negative natural numbers, but I didn’t want to complicate the code.)

To tabulate such a function, you create an infinite stream of values. Of course, this is only possible because Haskell is lazy. The values are evaluated on demand. You access the memoized values using index:

instance Representable Stream where
    type Rep Stream = Integer
    tabulate f = Cons (f 0) (tabulate (f . (+1)))
    index (Cons b bs) n = if n == 0 then b else index bs (n - 1)

It’s interesting that you can implement a single memoization scheme to cover a whole family of functions with arbitrary return types.
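
As a hypothetical usage sketch (fib and memoFib are my names, and the naive definition is deliberately slow):

-- naive, exponential-time Fibonacci
fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

memoFib :: Stream Integer
memoFib = tabulate fib

-- index memoFib 30 forces fib 30 exactly once; asking for the same
-- index again just walks the already-evaluated stream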

Representability for contravariant functors is similarly defined, except that we keep the second argument of C(_, a) fixed. Or, equivalently, we may consider functors from Cop to Set, because Cop(a, _) is the same as C(_, a).

There is an interesting twist to representability. Remember that hom-sets can internally be treated as exponential objects, in cartesian closed categories. The hom-set C(a, x) is equivalent to x^a, and for a representable functor F we can write:

_^a = F

Let’s take the logarithm of both sides, just for kicks:

a = log F

Of course, this is a purely formal transformation, but if you know some of the properties of logarithms, it is quite helpful. In particular, it turns out that functors that are based on product types can be represented with sum types, and that sum-type functors are not in general representable (example: the list functor).
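
As a sketch of my own (not from the text): a functor with three slots of x corresponds to x^3, so it should be represented by a three-element key type. Assuming the Representable class defined above (and the TypeFamilies extension):

data Three = One | Two | Thr

data Triple x = Triple x x x

instance Representable Triple where
    type Rep Triple = Three
    tabulate f = Triple (f One) (f Two) (f Thr)
    index (Triple a _ _) One = a
    index (Triple _ b _) Two = b
    index (Triple _ _ c) Thr = c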

Finally, notice that a representable functor gives us two different implementations of the same thing — one a function, one a data structure. They have exactly the same content — the same values are retrieved using the same keys. That’s the sense of “sameness” I was talking about. Two naturally isomorphic functors are identical as far as their content is concerned. On the other hand, the two representations are often implemented differently and may have different performance characteristics. Memoization is used as a performance enhancement and may lead to substantially reduced run times. Being able to generate different representations of the same underlying computation is very valuable in practice. So, surprisingly, even though it’s not concerned with performance at all, category theory provides ample opportunities to explore alternative implementations that have practical value.

Challenges

  1. Show that the hom-functors map identity morphisms in C to corresponding identity functions in Set.
  2. Show that Maybe is not representable.
  3. Is the Reader functor representable?
  4. Using Stream representation, memoize a function that squares its argument.
  5. Show that tabulate and index for Stream are indeed the inverse of each other. (Hint: use induction.)
  6. The functor:
    data Pair a = Pair a a

    is representable. Can you guess the type that represents it? Implement tabulate and index.

Bibliography

  1. The Catsters video about representable functors.

Next: The Yoneda Lemma.

Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.


This is part 13 of Categories for Programmers. Previously: Limits and Colimits. See the Table of Contents.

Monoids are an important concept in both category theory and in programming. Categories correspond to strongly typed languages, monoids to untyped languages. That’s because in a monoid you can compose any two arrows, just as in an untyped language you can compose any two functions (of course, you may end up with a runtime error when you execute your program).

We’ve seen that a monoid may be described as a category with a single object, where all logic is encoded in the rules of morphism composition. This categorical model is fully equivalent to the more traditional set-theoretical definition of a monoid, where we “multiply” two elements of a set to get a third element. This process of “multiplication” can be further dissected into first forming a pair of elements and then identifying this pair with an existing element — their “product.”

What happens when we forgo the second part of multiplication — the identification of pairs with existing elements? We can, for instance, start with an arbitrary set, form all possible pairs of elements, and call them new elements. Then we’ll pair these new elements with all possible elements, and so on. This is a chain reaction — we’ll keep adding new elements forever. The result, an infinite set, will be almost a monoid. But a monoid also needs a unit element and the law of associativity. No problem, we can add a special unit element and identify some of the pairs — just enough to support the unit and associativity laws.

Let’s see how this works in a simple example. Let’s start with a set of two elements, {a, b}. We’ll call them the generators of the free monoid. First, we’ll add a special element e to serve as the unit. Next we’ll add all the pairs of elements and call them “products”. The product of a and b will be the pair (a, b). The product of b and a will be the pair (b, a), the product of a with a will be (a, a), the product of b with b will be (b, b). We can also form pairs with e, like (a, e), (e, b), etc., but we’ll identify them with a, b, etc. So in this round we’ll only add (a, a), (a, b), (b, a), and (b, b), and end up with the set {e, a, b, (a, a), (a, b), (b, a), (b, b)}.


In the next round we’ll keep adding elements like: (a, (a, b)), ((a, b), a), etc. At this point we’ll have to make sure that associativity holds, so we’ll identify (a, (b, a)) with ((a, b), a), etc. In other words, we won’t be needing internal parentheses.

You can guess what the final result of this process will be: we’ll create all possible lists of as and bs. In fact, if we represent e as an empty list, we can see that our “multiplication” is nothing but list concatenation.

This kind of construction, in which you keep generating all possible combinations of elements, and perform the minimum number of identifications — just enough to uphold the laws — is called a free construction. What we have just done is to construct a free monoid from the set of generators {a, b}.

Free Monoid in Haskell

A two-element set in Haskell is equivalent to the type Bool, and the free monoid generated by this set is equivalent to the type [Bool] (list of Bool). (I am deliberately ignoring problems with infinite lists.)

A monoid in Haskell is defined by the type class:

class Monoid m where
    mempty  :: m
    mappend :: m -> m -> m

This just says that every Monoid must have a neutral element, which is called mempty, and a binary function (multiplication) called mappend. The unit and associativity laws cannot be expressed in Haskell and must be verified by the programmer every time a monoid is instantiated.
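
For the record, here are those laws, stated as equations that every instance is expected to satisfy (the compiler will not check them):

-- mappend mempty a == a                              -- left unit
-- mappend a mempty == a                              -- right unit
-- mappend a (mappend b c) == mappend (mappend a b) c -- associativity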

The fact that a list of any type forms a monoid is described by this instance definition:

instance Monoid [a] where
    mempty  = []
    mappend = (++)

It states that an empty list [] is the unit element, and list concatenation (++) is the binary operation.

As we have seen, a list of type a corresponds to a free monoid with the set a serving as generators. The set of natural numbers with multiplication is not a free monoid, because we identify lots of products. Compare for instance:

2 * 3 = 6
[2] ++ [3] = [2, 3] -- not the same as [6]

That was easy, but the question is, can we perform this free construction in category theory, where we are not allowed to look inside objects? We’ll use our workhorse: the universal construction.

The second interesting question is, can any monoid be obtained from some free monoid by identifying more than the minimum number of elements required by the laws? I’ll show you that this follows directly from the universal construction.

Free Monoid Universal Construction

If you recall our previous experiences with universal constructions, you might notice that it’s not so much about constructing something as about selecting an object that best fits a given pattern. So if we want to use the universal construction to “construct” a free monoid, we have to consider a whole bunch of monoids from which to pick one. We need a whole category of monoids to choose from. But do monoids form a category?

Let’s first look at monoids as sets equipped with additional structure defined by unit and multiplication. We’ll pick as morphisms those functions that preserve the monoidal structure. Such structure-preserving functions are called homomorphisms. A monoid homomorphism must map the product of two elements to the product of the mapping of the two elements:

h (a * b) = h a * h b

and it must map unit to unit.

For instance, consider a homomorphism from lists of integers to integers. If we map [2] to 2 and [3] to 3, we have to map [2, 3] to 6, because concatenation

[2] ++ [3] = [2, 3]

becomes multiplication

2 * 3 = 6
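
A sketch of such a homomorphism in Haskell (the name h is mine): the standard product function does the job, mapping list concatenation to integer multiplication.

h :: [Int] -> Int
h = product

-- h ([2] ++ [3]) == h [2] * h [3]   -- both sides equal 6
-- h []           == 1               -- the unit maps to the unit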

Now let’s forget about the internal structure of individual monoids, and only look at them as objects with corresponding morphisms. You get a category Mon of monoids.

Okay, maybe before we forget about internal structure, let us notice an important property. Every object of Mon can be trivially mapped to a set. It’s just the set of its elements. This set is called the underlying set. In fact, not only can we map objects of Mon to sets, but we can also map morphisms of Mon (homomorphisms) to functions. Again, this seems sort of trivial, but it will become useful soon. This mapping of objects and morphisms from Mon to Set is in fact a functor. Since this functor “forgets” the monoidal structure — once we are inside a plain set, we no longer distinguish the unit element or care about multiplication — it’s called a forgetful functor. Forgetful functors come up regularly in category theory.

We now have two different views of Mon. We can treat it just like any other category with objects and morphisms. In that view, we don’t see the internal structure of monoids. All we can say about a particular object in Mon is that it connects to itself and to other objects through morphisms. The “multiplication” table of morphisms — the composition rules — is derived from the other view: monoids-as-sets. By going to category theory we haven’t lost this view completely — we can still access it through our forgetful functor.

To apply the universal construction, we need to define a special property that would let us search through the category of monoids and pick the best candidate for a free monoid. But a free monoid is defined by its generators. Different choices of generators produce different free monoids (a list of Bool is not the same as a list of Int). Our construction must start with a set of generators. So we’re back to sets!

That’s where the forgetful functor comes into play. We can use it to X-ray our monoids. We can identify the generators in the X-ray images of those blobs. Here’s how it works:

We start with a set of generators, x. That’s a set in Set.

The pattern we are going to match consists of a monoid m — an object of Mon — and a function p in Set:

p :: x -> U m

where U is our forgetful functor from Mon to Set. This is a weird heterogeneous pattern — half in Mon and half in Set.

The idea is that the function p will identify the set of generators inside the X-ray image of m. It doesn’t matter that functions may be lousy at identifying points inside sets (they may collapse them). It will all be sorted out by the universal construction, which will pick the best representative of this pattern.


We also have to define the ranking among candidates. Suppose we have another candidate: a monoid n and a function that identifies the generators in its X-ray image:

q :: x -> U n

We’ll say that m is better than n if there is a morphism of monoids (that’s a structure-preserving homomorphism):

h :: m -> n

whose image under U (remember, U is a functor, so it maps morphisms to functions) factorizes through p:

q = U h . p

If you think of p as selecting the generators in m; and q as selecting “the same” generators in n; then you can think of h as mapping these generators between the two monoids. Remember that h, by definition, preserves the monoidal structure. It means that a product of two generators in one monoid will be mapped to a product of the corresponding two generators in the second monoid, and so on.


This ranking may be used to find the best candidate — the free monoid. Here’s the definition:

We’ll say that m (together with the function p) is the free monoid with the generators x if and only if there is a unique morphism h from m to any other monoid n (together with the function q) that satisfies the above factorization property.

Incidentally, this answers our second question. The function U h is the one that has the power to collapse multiple elements of U m to a single element of U n. This collapse corresponds to identifying some elements of the free monoid. Therefore any monoid with generators x can be obtained from the free monoid based on x by identifying some of the elements. The free monoid is the one where only the bare minimum of identifications have been made.
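
In Haskell, where the free monoid on a type x is the list type [x], this universal property has a concrete rendering. Here is a sketch (p, q, and h mirror the names above; foldMap comes from the standard Foldable class):

-- p embeds a generator as a singleton list
p :: x -> [x]
p = (: [])

-- given any monoid n and any q :: x -> n, the unique homomorphism
-- from the free monoid [x] to n is foldMap q; by construction,
-- q factorizes as foldMap q . p
h :: Monoid n => (x -> n) -> ([x] -> n)
h q = foldMap q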

We’ll come back to free monoids when we talk about adjunctions.

Challenges

  1. You might think (as I did, originally) that the requirement that a homomorphism of monoids preserve the unit is redundant. After all, we know that for all a
    h a * h e = h (a * e) = h a

    So h e acts like a right unit (and, by analogy, as a left unit). The problem is that h a, for all a, might only cover a sub-monoid of the target monoid. There may be a “true” unit outside of the image of h. Show that an isomorphism between monoids that preserves multiplication must automatically preserve unit.

  2. Consider a monoid homomorphism from lists of integers with concatenation to integers with multiplication. What is the image of the empty list []? Assume that all singleton lists are mapped to the integers they contain, that is [3] is mapped to 3, etc. What’s the image of [1, 2, 3, 4]? How many different lists map to the integer 12? Is there any other homomorphism between the two monoids?
  3. What is the free monoid generated by a one-element set? Can you see what it’s isomorphic to?

Next: Representable Functors.

Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.