Can a data race not be a bug? In the strictest sense I would say it’s always a bug. A correct program written in a high-level language should run the same way on every processor present, past, and future. But there is no prescription, or even a convention, about what a processor should (or shouldn’t) do when it encounters a race. This is usually described in higher-level language specs by the ominous phrase: “undefined behavior.” A data race could legitimately reprogram your BIOS, wipe out your disk, and stop the processor’s fan, causing a multi-core meltdown.

Data race: Multiple threads accessing the same memory location without intervening synchronization, with at least one thread performing a write.

However, if your program is only designed to run on a particular family of processors, say the x86, you might allow certain types of data races for the sake of performance. And as your program matures, i.e., goes through many cycles of testing and debugging, the proportion of buggy races to benign races keeps decreasing. This becomes a real problem if you are using a data-race detection tool that cannot distinguish between the two. You get swamped by false positives.

Microsoft Research encountered and dealt with this problem when running their race detector called DataCollider on the Windows kernel (see Bibliography). Their program found 25 actual bugs, and almost an order of magnitude more benign data races. I’ll summarize their methodology and discuss their findings about benign data races.

Data Races in the Windows Kernel

The idea of the program is very simple. Put a hardware breakpoint on a shared memory access and wait for one of the threads to stumble upon it. This is a code breakpoint, which is triggered when the particular code location is executed. The x86 also supports another kind of breakpoint, a data breakpoint, triggered when the program accesses a specific memory location. So when a thread hits the code breakpoint, DataCollider installs a data breakpoint at the location the thread was just accessing. It then stalls the current thread and lets all other threads run. If any one of them hits the data breakpoint, it’s a race (as long as one of the accesses is a write). Consider this: If there had been any synchronization (say, a lock acquisition) between the two accesses, the second thread would have been blocked from accessing that location. Since it wasn’t, it’s a classic data race.

Notice that this method might not catch all data races, but it doesn’t produce false positives. Except, of course, when the race is considered benign.

There are other interesting details of the algorithm. One is the choice of code locations for installing breakpoints. DataCollider first analyzes the program’s assembly code to create a pool of memory accesses. It discards all thread-local accesses and explicitly synchronized instructions (for instance, the ones with the LOCK prefix). It then randomly picks locations for breakpoints from this pool. Notice that rarely executed paths are as likely to be sampled as the frequently executed ones. This is important because data races, like fugitive criminals, often hide in less frequented places.

Pruning Benign Races

Ninety percent of the data races caught by DataCollider in the Windows kernel were benign. For several reasons it’s hard to say how general this result is. First, the kernel had already been tested and debugged for some time, so many of the low-hanging concurrency bugs had already been picked. Second, operating system kernels are highly optimized for a particular processor and might use all kinds of tricks to improve performance. Finally, kernels often use unusual synchronization strategies. Still, it’s interesting to see what shape benign data races take.

It turns out that half of all false positives came from lossy counters. There are many places where statistics are gathered: counting certain kinds of events, either for reporting or for performance enhancements. In those situations losing a few increments is of no consequence. However, not all counters are lossy and, for instance, a data race in reference counting is a serious bug. DataCollider uses a simple heuristic to detect lossy counters–they are the ones that are always incremented. A reference counter, on the other hand, is as often incremented as decremented.

Another benign race happens when one thread reads a particular bit in a bitfield while another thread updates another bit. A bit update is a read-modify-write (RMW) sequence: The thread reads the previous value of the bitfield, modifies one bit, and writes the whole bitfield back. Other bits are overwritten in the process too, but their new values are the same as the old values. A read from another thread of any of the non-changed bits does not interfere with the write, at least not on the x86. Of course if yet another thread modified one of those bits, it would be a real bug, and it would be caught separately. The pruning of this type of race requires analysis of surrounding code (looking for the masking of other bits).

The Windows kernel also has some special variables that are racy by design–the current time is one such example. DataCollider has these locations hard-coded and automatically prunes them away.

There are benign races that are hard to prune automatically, and those are left for manual pruning (in fact, DataCollider reports all races, it just de-emphasizes the ones it considers benign). One of them is the double-checked locking pattern (DCLP), where a thread makes a non-synchronized read to be later re-confirmed under the lock. This pattern happens to work on the x86, although it definitely isn’t portable.

Finally, there is the interesting case of idempotent writes–two racing writes that happen to write the same value to the same location. Even though such scenarios are easy to prune, the implementers of DataCollider decided not to prune them because more often than not they led to the uncovering of concurrency bugs. Below is a table that summarizes various cases.

Benign race                         Differentiate from                  Pruned?
Lossy counter                       Reference counting                  Yes
Read and write of different bits    Read and write of the whole word    Yes
Deliberately racy variables                                             Yes
DCLP                                                                    No
Idempotent writes                                                       No

Conclusion

In the ideal world there would be no data races. But a concurrency bug detector must take into account the existence of benign data races. In the early stages of product testing the majority of detected races are real bugs. It’s only when chasing the most elusive of concurrency bugs that it becomes important to weed out benign races. But it’s the elusive ones that bite the hardest.

Bibliography

  1. John Erickson, Madanlal Musuvathi, Sebastian Burckhardt, Kirk Olynyk, Effective Data-Race Detection for the Kernel

If you’re like me, you might be worried that the world is being taken over by monoids. Binary operators that are associative and respect unity are popping up everywhere. Even our beloved monads are being dismissively called “just” monoids in the category of endofunctors. Personally, I have nothing against associativity, but when it’s not there, it’s not there. Did you know, for instance, that addition is non-associative? Of course, I’m talking about floating-point addition. Taking averages is non-associative. Lie algebras are non-associative. At a recent FARM (Functional Art, Music, Modelling and Design) Workshop, Paul Hudak spoke of a non-associative way of tiling temporal media (the %\ — percent backslash — operator).

In mathematics, there exists a simpler notion of a magma, which is a structure with a binary operator with no additional conditions imposed on it. So a monoid is really “just” a magma with some additional structure. Take that, monoid!
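In Haskell terms (a throwaway sketch of mine, not any standard library class), a magma is just a binary operation with no laws attached, and a unital magma throws in a unit:

-- No laws at all: just a binary operation.
class Magma a where
  attach :: a -> a -> a

-- A unital magma adds a unit (attach unit x == x == attach x unit),
-- but still no associativity.
class Magma a => UnitalMagma a where
  unit :: a

-- Floating-point addition is the text's own example of a
-- non-associative operation.
instance Magma Double where
  attach = (+)

instance UnitalMagma Double where
  unit = 0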

There was a recent interesting post about free magmas and their relation to binary trees, which shows that magmas might be relevant outside of geology. All this encouraged me to revisit some of the areas traditionally associated with monoids and give them another look-see. If nothing else, it’s an opportunity to have a little refresher in monoidal and enriched categories. Hopefully, this will also help in understanding Edward Kmett’s latest experiments with encoding category theory in Hask.

Magmatic Category

Let’s start by slimming down the usual definition of a monoidal category: A magmatic category is a category 𝒱0 with a functor ⨁ : 𝒱0 ⨯ 𝒱0 → 𝒱0 from the cartesian product of 𝒱0 with itself to 𝒱0. This is our “attach” operator that works on objects and can be lifted to morphisms.

That’s it! A monoidal category would also have the unit object I, the associator, the left unitor, the right unitor, and two coherence axioms. But if magma is all we need, we’re done.

In what follows I’m not going to be interested in associativity until much later, but I will consider including the unit object I (a magma with a unit is called a unital magma). What defines the unit object are two natural (in X) isomorphisms: lX: I ⨁ X → X, and rX: X ⨁ I → X, called, respectively, the left and the right unitors. Still, since there is no associativity, there are none of the coherence axioms one has to worry about in a monoidal category.

Having a unit object allows us to define an interesting functor that maps any object X to the hom-set 𝒱0(I, X) — the set of morphisms from I to X (obviously, this is a set only if 𝒱0 is locally small). We’ll call this functor V, to make sure you pay attention to fonts, and not just letters. Why is it interesting? Because it’s as close as we can get, in category theory, to defining an element of an object. Remember: in category theory you don’t look inside objects — all you have at your disposal are morphisms between amorphous objects.

So how would you define an element of, let’s say, a set, without looking inside it? You’d define it as a function (morphism) from a one-element set. Such a morphism picks a single element in the target set. But what’s a one-element set (remember: we can’t look inside sets)? Well, it’s a terminal object in the category of sets. Again, a terminal object is defined entirely in terms of morphisms. It’s the object that has a unique morphism coming to it from any other object. (Exercise: Show that a one-element set is terminal among sets.)

So an element of a set can be defined as a morphism from the terminal set. Nothing stops us from extending this definition of an “element” to an arbitrary category that has a terminal object. But what does a terminal object have to do with a unit object in the monoidal (or unital-magmatic) category? In the archetypal monoidal category — the cartesian monoidal category — the terminal object is the unit. The binary operator is the product, defined by a universal construction. That’s why we can call a morphism f from the hom-set 𝒱0(I, X) an element of X. In this language, V maps X into a set of elements of X — the underlying set.
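In Hask, where the terminal object is the unit type (), such a generalized element is literally a function from (). A tiny illustration, with names of my own:

-- An element of a, categorically: a morphism from the terminal object ().
type Elem a = () -> a

five :: Elem Int
five = \() -> 5

-- Evaluating at () recovers the ordinary value.
fromElem :: Elem a -> a
fromElem e = e ()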

Exponential Objects and Currying

The usual universal construction of an exponential object YX can be easily reproduced in a magmatic category. We just replace the cartesian product in that construction with the “attach” operator we have at our disposal.

Here’s how you do it: Start with a pair of objects, X and Y. Consider all pairs <object Z, morphism g> in which g is a morphism from Z ⨁ X to Y. We are looking for one of these pairs, call it <YX, app>, such that any other pair “factorizes” through it (YX is just an object with a funny symbol). It means that, for each <Z, g> there is a unique morphism λg : Z→ YX, such that:

g = app ∘ (λg ⨁ id)

Notice that we are using the fact that ⨁ is a (bi-)functor, which means it can be applied to morphisms as well as objects. Here it’s applied to λg and id (the identity morphism on X).
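Specialized to Hask, with the pair constructor (,) standing in for ⨁, this universal construction is ordinary currying. A small sketch; app, lambda, and attachId are names I made up for this illustration:

-- The exponential object y^x is the function type x -> y.
-- app is uncurried application:
app :: (x -> y, x) -> y
app (h, x) = h x

-- λg is just curry g:
lambda :: ((z, x) -> y) -> z -> (x -> y)
lambda g z = \x -> g (z, x)

-- (⨁ id) lifted to morphisms: act on the first component only.
attachId :: (z -> w) -> (z, x) -> (w, x)
attachId f (z, x) = (f z, x)

-- The factorization g = app ∘ (λg ⨁ id) then reads:
--   g == app . attachId (lambda g)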

The universal construction of an exponential object

There is an interesting class of categories in which there is an exponential object for every pair of objects; or, even better, in which the exponential object is defined by an adjunction. An adjunction relates two functors. In this case, one functor is defined by the action of the binary operator: Z→ Z ⨁ X. The exponentiation functor Y→ YX is then its right adjoint. The adjunction is a natural bijection of hom-sets:

λ: 𝒱0(Z ⨁ X, Y) ≅ 𝒱0(Z, YX)

This equation is just the re-statement of the condition that for every g : Z ⨁ X→ Y there is a unique λg : Z→ YX, except that we now require this mapping to be natural in both Y and Z.

Here’s the interesting part: This definition can be viewed as generalized currying. Normal currying establishes the relationship between functions of two arguments and functions returning functions. A function of two arguments can be thought of as a function defined on a cartesian product of those arguments. Here, we are replacing the cartesian product with our magmatic operation, and the adjunction tells us that for every morphism g from Z ⨁ X to Y there is a morphism λg from Z to the exponential object YX. The currying intuition is that YX objectifies morphisms from X to Y; it objectifies the hom-set 𝒱0(X, Y) inside 𝒱0. In fact YX is often called the internal hom-set (in which case 𝒱0(X, Y) is called the external hom-set).

This intuition is reinforced by the observation that, if we replace Z with I, the magmatic unit, in the adjunction, we get:

𝒱0(I ⨁ X, Y) ≅ 𝒱0(I, YX)

I ⨁ X is naturally isomorphic to X (through the left unitor), so we get:

𝒱0(X, Y) ≅ 𝒱0(I, YX)

Now recall what we have said about morphisms from the unit: they define elements of an object through the functor we called V. The equation:

𝒱0(X, Y) ≅ V YX

tells us that elements of the (external) hom-set 𝒱0(X, Y) are in one-to-one correspondence with the “elements” of the exponential YX. This is not exactly the isomorphism of external and internal hom-sets, because of the intervening functor V, which might not even be faithful. However, this is exactly one of the conditions that define a closed category, hinting at the possibility that the adjunction defining the exponential object in a unital magmatic category might be enough to make it closed.

All this was done without any mention of associativity. So what is associativity good for? Here’s an example. Let’s replace X with (A ⨁ B) in the adjunction:

𝒱0(Z ⨁ (A ⨁ B), Y) ≅ 𝒱0(Z, Y(A ⨁ B))

If we have associativity, the above is naturally isomorphic to:

𝒱0((Z ⨁ A) ⨁ B, Y) ≅ 𝒱0(Z ⨁ A, YB) ≅ 𝒱0(Z, YBA)

Putting it all together and replacing Z with I, we get the natural isomorphism between “elements” of two objects:

V Y(A ⨁ B) ≅ V YBA

This looks just like currying inside the internal hom-set, except for the functor V. So to curry from the external to the internal hom-set it’s enough to have a unital magmatic category, but to curry within the internal hom-set we need the full monoidal category.

A word about notation: Traditionally, the exponential object from X to Y is denoted as YX. That’s the convention I’ve been following so far. Arguably, this notation is somewhat awkward. In his book, Basic Concepts of Enriched Category Theory, Kelly uses a more convenient notation: [X, Y]. In his notation, the above equation takes a slightly more readable form:

V [(A ⨁ B), Y] ≅ V [A, [B, Y]]

Magmatically Enriched Category

A (locally small) category consists of objects and sets of morphisms, hom-sets, between every pair of objects. All we need to assume about morphisms is that they compose nicely and that there exist morphisms that act as units of composition. Think of it this way: Each hom-set is an object from the Set category. Composition is a mapping from a cartesian product of two such objects to a third object.

But why restrict ourselves to Set to draw our hom-sets from? Why not pick an arbitrary category? The only question is: How should we define composition of morphisms? Morphisms are elements of hom-sets. In an arbitrary category we might not be able to define elements of objects. What we could do, however, is to pick a category that has all the products and define composition as a mapping (morphism) from a product of two objects to a third object. This is analogous to defining composition in terms of cartesian products of hom-sets, rather than defining it point-wise on individual morphisms.

But once we made a bold decision to abandon sets, why restrict ourselves to cartesian monoidal categories (the ones with a product defined by a universal construction)? All we really need is some notion of a binary operation, and that’s given by a magmatic category. So let’s define a magmatically enriched category 𝒜 as a collection of objects together with an additional magmatic category (𝒱0, ⨁) from which we pick our hom-objects (no longer called hom-sets). For any two objects A and B in the category 𝒜, a hom-object 𝒜(A, B) is an object of 𝒱0. Composition is defined for any triple of objects A, B, and C. It’s a morphism between objects in 𝒱0:

M: 𝒜(B, C) ⨁ 𝒜(A, B) → 𝒜(A, C)

Of course I stole this definition from a monoidally enriched category. Another part of it, which might be useful in the future, is the definition of a unit element. In a normal category, a unit morphism idA is an element of the hom-set 𝒜(A, A) of morphisms that go from A back to A. In an enriched category, the only way to define an element of a hom-object is through a mapping from a unit object I, so here’s the generalization of the identity morphism:

jA: I → 𝒜(A, A)
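Here is a sketch of these two ingredients in Haskell, for a category enriched over (Hask, (,), ()): hom-objects are ordinary types, and both M and j are plain functions. The class and the names are mine, purely for illustration:

class EnrichedCat hom where
  m :: (hom b c, hom a b) -> hom a c   -- M : 𝒜(B, C) ⨁ 𝒜(A, B) → 𝒜(A, C)
  j :: () -> hom a a                   -- j_A : I → 𝒜(A, A)

-- Ordinary Haskell functions form such a category:
newtype Fun a b = Fun (a -> b)

instance EnrichedCat Fun where
  m (Fun g, Fun f) = Fun (g . f)
  j () = Fun id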

We have to make sure that there is appropriate interplay between composition, left and right unitors, and the unit element. These are expressed through commutative diagrams:

Commuting diagrams relating composition M, unitors l and r, and the unit element I

As before, since we don’t have to worry about associativity, we can omit the rest of the axioms.

Functors and Natural Transformations

A functor maps objects to objects and morphisms to morphisms. The mapping of objects translates directly to enriched settings, but we don’t want to deal with morphisms individually. We don’t want to look inside hom objects. Fortunately, we can define the action of a functor on a hom-object as a whole. The mappings of hom-objects are just morphisms in 𝒱0.

To define a functor T from an enriched category 𝒜 to ℬ, we have to define its action on any  hom-object 𝒜(A, B):

TAB : 𝒜(A, B) → ℬ(T A, T B)

Notice that both categories must be enriched over the same 𝒱0 for TAB to be a morphism.
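Continuing in the same spirit (a sketch with made-up names; it needs the MultiParamTypeClasses and FlexibleInstances extensions), an enriched functor acts on whole hom-objects, and any ordinary Haskell Functor gives an example for categories enriched over Hask:

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- An enriched functor t maps each hom-object to a hom-object;
-- its action is a single morphism in the base category.
class EFunctor hom hom' t where
  emap :: hom a b -> hom' (t a) (t b)

-- The same wrapper for Hask hom-objects as in the previous sketch.
newtype Fun a b = Fun (a -> b)

-- Any ordinary Functor is an enriched functor from the Fun-category
-- to itself: its action on hom-objects is fmap.
instance Functor t => EFunctor Fun Fun t where
  emap (Fun f) = Fun (fmap f)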

A natural transformation is trickier. Normally it is defined as a family of morphisms parameterized by objects. In the enriched setting, we have a slight problem: For a given object A we have to pick a single morphism from the hom-object ℬ(T A, S A). Here T and S are functors between two enriched categories 𝒜 and ℬ. Luckily, we have this trick of replacing element selection with a mapping from the unit object. We’ll use it to define the components of a natural transformation:

αA: I → ℬ(T A, S A).

The fact that the existence of the unit is necessary to define natural transformations in enriched categories is a little surprising. It also means that adjointness, which is defined through natural isomorphism between hom-objects, cannot be defined without a unit.

The generalization of the naturality condition is also tricky. Normally we just pick a morphism f : A → B in the category 𝒜 and lift it using two functors, T and S, to the target category ℬ. We get two morphisms,

T f : T A → T B

S f : S A → S B

We can then compose them with two components of the natural transformation,

αA : T A → S A

αB : T B → S B

and demand that the resulting diagram commute:

αB ∘ T f = S f ∘ αA

In an enriched category we can’t easily pick individual morphisms, so we have to operate on whole hom-objects. Here’s what we have at our disposal. The components of the natural transformation at A and B:

αB: I → ℬ(T B, S B)

αA: I → ℬ(T A, S A)

The action of the two functors on the hom-object connecting A to B:

TAB : 𝒜(A, B) → ℬ(T A, T B)

SAB : 𝒜(A, B) → ℬ(S A, S B)

Also, the composition of morphisms is replaced with the action of our binary operator on hom-objects. If you look at the hom-sets involved in the standard naturality condition, their composition corresponds to these two mappings under M:

ℬ(T B, S B) ⨁ ℬ(T A, T B) → ℬ(T A, S B)

ℬ(S A, S B) ⨁ ℬ(T A, S A) → ℬ(T A, S B)

The right-hand sides are identical; the left-hand sides can be obtained either by acting with α on the unit object, or by lifting the hom-object 𝒜(A, B) using S or T. There aren’t that many choices in this jigsaw puzzle. In particular, we get:

α ⨁ T : I ⨁ 𝒜(A, B) → ℬ(T B, S B) ⨁ ℬ(T A, T B)

S ⨁ α : 𝒜(A, B) ⨁ I → ℬ(S A, S B) ⨁ ℬ(T A, S A)

Finally, both of the left hand sides can be obtained from 𝒜(A, B) by applying the inverse of, respectively, l and r — the left and right unitors (remember, they are isomorphisms, so they are invertible). That gives us the naturality hexagon:

Naturality condition for enriched categories

Representable functors and the Yoneda lemma

On the surface, the enriched version of the Yoneda lemma looks deceptively simple. But there are some interesting nuances that should be mentioned. So let me recap what Yoneda is in the more traditional setting (see my blog post about it). It’s about functors from an arbitrary category to a much simpler category of sets — a category where we can actually look at elements, and where morphisms are familiar functions. In particular we are practically given for free those very convenient representable functors. These are the functors that map objects to hom-sets. You just fix one object, say K, in the category 𝒜, and for any other object X you get the set 𝒜(K, X) in Set.

This mapping of X to 𝒜(K, X) is called a representable functor because, in a way, it represents the object X as a set. It is a functor because it’s also defined on morphisms. Indeed, take a morphism f: X → Y. Its image must be a function from 𝒜(K, X) to 𝒜(K, Y). So pick an h in 𝒜(K, X) and map it to the composition f∘h, which is indeed in 𝒜(K, Y).

Now take any functor F from 𝒜 to Set, not necessarily representable. Yoneda tells us that natural transformations from any representable functor 𝒜(K, _) to F are in one-to-one correspondence with the elements of F K. In other words, there is a bijection:

Nat(𝒜(K, _), F) ≅ F K

which, additionally, is natural in both K and F (naturality in F requires you to look at F as an object in the category of functors, where morphisms are natural transformations).

Enriched representable functors

So how do we go about generalizing Yoneda to an enriched category? We start with a representable functor, except that now 𝒜(K, X) is not a hom-set but a hom-object in 𝒱0. The mapping from the enriched category 𝒜 to 𝒱0 cannot be treated as an enriched functor, because enriched functors are only defined between categories enriched over the same base category, and 𝒱0 is not enriched. So the trick is to enrich 𝒱0 over itself.

What does it mean? We want to keep the objects of 𝒱0 but somehow replace the hom-sets with objects from — again — 𝒱0. Now, if 𝒱0 is closed, it contains objects that are perfect for this purpose: the exponential objects. We’ll call the category resulting from enriching 𝒱0 over 𝒱0 simply 𝒱, hoping not to cause too much confusion. So we define a hom-object 𝒱(X, Y) as [X, Y] (the new notation for YX). We can apply our magmatic binary operator to two such hom-objects and map the resulting object to another hom-object:

M: [Y, Z] ⨁ [X, Y] → [X, Z]

This defines composition of hom-objects for us. The mapping is induced by the adjunction that defines the exponential object, and in fact requires associativity, so we can relax and admit that we are now in a closed monoidal (rather than magmatic) category. Oh well!
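In the concrete case of Hask enriched over itself, with (,) as the tensor and the function type as the internal hom, this composition morphism is just uncurried function composition. A one-liner, with a name I made up:

-- M : [y, z] ⨁ [x, y] → [x, z], with ⨁ = (,) and [a, b] = a -> b
compHom :: (y -> z, x -> y) -> (x -> z)
compHom (g, f) = g . f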

We’ve seen before that there is a mapping between hom-sets and exponentials, which we can rewrite in this form:

𝒱0(X, Y) ≅ V [X, Y]

(with V on the right hand side the functor that defines “elements” of an object). So 𝒱 not only has the same objects as 𝒱0, but its hom-objects, when acted upon by the functor V, reproduce the hom-sets of 𝒱0. That makes 𝒱 a perfect stand-in for 𝒱0 in the enriched realm.

We are now free to define (enriched) functors that go from any category 𝒜 enriched over 𝒱0 to 𝒱. In particular, the mapping 𝒜(K, _): 𝒜 → 𝒱, which assigns the object 𝒜(K, X) to every X, defines a representable enriched functor. This way an object X in some exotic enriched category 𝒜 is represented by an object in a presumably better understood category 𝒱 which, being closed and monoidal, has more structure to play with.

Enriched Yoneda lemma

We now have all the tools to formulate the Yoneda lemma for enriched categories. Let’s pick an object K in an enriched category 𝒜. It defines an enriched representable functor, 𝒜(K, _) from 𝒜 to 𝒱. Let’s take another functor F from 𝒜 to 𝒱. The Yoneda lemma states that all natural transformations between these two functors:

α: 𝒜(K, _) → F

are in one-to-one correspondence with “elements” of the object F K, where by “elements” we mean mappings:

η: I → F K

More precisely, there is a bijection (a one-to-one mapping that is onto) between the set of natural transformations Nat(𝒜(K, _), F) and the set 𝒱0(I, F K):

Nat(𝒜(K, _), F) ≅ 𝒱0(I, F K)

Notice that the bijection here is between sets: a set of natural transformations and a set of morphisms. There is a stronger version of the Yoneda lemma, which establishes a bijection between objects rather than sets. But for that one needs to objectify natural transformations, something that can indeed be done using ends. But that’s a topic for a whole new post.

Conclusions

My main motivation in this blog post was to try to figure out how far one can get with magmas before being forced to introduce a unit, and how far one can get with unital magmas before being forced to introduce associativity. These results are far from rigorous, since I might not have been aware of some hidden assumptions, and because the fact that some property is used in the proof of another property doesn’t mean that it is necessary.

So here’s the short summary. The following things are possible to define:

  1. Magmatic category
  2. Exponential objects and currying in a magmatic category
  3. Adjunction between magmatic product and exponentiation — the internal hom
  4. In a unital magmatic category, the adjunction leads to an isomorphism between the external hom-set and “elements” of the internal hom
  5. Currying in the internal hom requires a monoidal category

Having a magmatic category one can define a magmatically enriched category and the following things:

  1. Functors
  2. Natural transformations — only over a unital magmatic category
  3. Enriched Yoneda lemma — only over monoidal category

Acknowledgments

I’m grateful to Gershom Bazerman for stimulating discussions and helpful pointers.

Bibliography

  1. G. M. Kelly, Basic Concepts of Enriched Category Theory
  2. nLab, Closed category

I’m not fond of arguments based on lack of imagination. “There’s no way this code may fail!” might be a sign of great confidence or the result of ignorance. The inability to come up with a counterexample doesn’t prove a theorem. And yet there is one area of programming where such arguments work, and are quite useful. These are parametricity arguments: free theorems about polymorphic functions. Fortunately, there is solid theory behind parametricity. Free theorems are not based on ignorance. So I decided to read the relevant papers (see bibliography at the end of this post) and write a blog about it. How hard could it be? A few months and several failed attempts later I realized how naive I was. But I think I finally understand the basics enough to explain them in relatively simple terms.

Motivation

Here’s a classic example — a function that takes a list of arbitrary type a and returns a list of the same type:

r :: [a] -> [a]

What can this function do? Since it has to work with any type of list element, it can’t do anything type-specific. It can’t modify the elements or invent new ones. So all it can do is rearrange them, duplicate, or remove. Can you think of anything else?

The question is a little tricky because it all depends on the kind of polymorphism your language supports. In Haskell, where we have parametric polymorphism, the above statement is for the most part true (modulo termination worries). In C++, which supports ad-hoc polymorphism, a generic function like:

template<class T>
std::list<T> r(std::list<T>);

can do all kinds of weird things.

Parametric polymorphism means that a function will act on all types uniformly, so the declaration at the top indeed drastically narrows down the possibilities.

For instance, try fmapping any function of the type:

f :: a -> b

over a list of a. You can either apply fmap before or after acting on it with r. It shouldn’t matter whether you first modify the elements of the list and then rearrange them, or first rearrange and then modify them. The result should be the same:

r (fmap f as) = fmap f (r as)

But is it true just because we can’t imagine how it may fail, or can we make a formal argument to prove it?
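Before any theory, here is a quick empirical check of the theorem for one particular choice of r, namely reverse (checking a single instance proves nothing, of course, which is the whole point of what follows):

import Data.Char (toUpper)

-- One particular rearrangement; any r :: [a] -> [a] would do.
r :: [a] -> [a]
r = reverse

-- r (fmap toUpper "hello")  == "OLLEH"
-- fmap toUpper (r "hello")  == "OLLEH"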

Let’s Argue (Denotational) Semantics

One way to understand polymorphism is to have a good model for types. At first approximation types can be modeled as sets of values (strictly speaking, as shown by Reynolds, the set-theoretical model fails in the case of polymorphic lambda calculus, but there are ways around it).

The type Bool is a two-element set of True and False, Integer is a set of integers, and so on. Composite types can also be defined set-theoretically. For instance, a pair type is a cartesian product of two sets. A list of a is a set of lists with elements from the set a. A function type a->b is a set of functions between two sets.

For parametric polymorphism you need to first be able to define functions on types: functions that take a type and produce a new type. In other words, you should be able to define a family of types that is parametrized by another type.

For instance, given some type a, produce a type of pairs: (a, a). This can be formally written (not in Haskell) as:

Λa . (a, a)

Notice the capital lambda for defining functions on types, as opposed to the lowercase lambda used for functions on values.

To turn a family of types into a family of values, you put the universal quantifier in front of it, as in:

forall a . (a, a)

This is a valid type signature, but you’d be hard pressed to implement it. You’d have to provide a pair of concrete values for every possible type. You can’t do it uniformly across all types.

But there are some types of values that can be specified wholesale. These are function values. With one formula you can define a whole family of functions. The following signature, for instance, is perfectly implementable:

forall a . a -> a

Let’s analyze it. It consists of a type function:

Λa . a -> a

which, for any type a, returns a function type a->a. When universally quantified with forall, it becomes a family of concrete functions parameterized by type. This is possible because all these functions can be defined with one closed term:

id x = x

In Haskell, we usually omit the forall quantifier when there’s no danger of confusion. Any signature that contains a type variable is automatically universally quantified over it. (You’ll have to use explicit forall, however, with higher-order polymorphism, where a polymorphic function can be passed as an argument to another function.)
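Here’s a minimal example of the explicit forall becoming mandatory; applyToBoth is a made-up name and the snippet assumes the RankNTypes extension:

{-# LANGUAGE RankNTypes #-}

-- The argument must itself be polymorphic, so its forall cannot be omitted.
applyToBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
applyToBoth g (n, b) = (g n, g b)

-- applyToBoth id (1, True) == (1, True)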

So what’s the set-theoretic model for polymorphism? You simply replace types with sets. A function on types becomes a function on sets. Notice that this is not the same as a function between sets. The latter assigns elements of one set to elements of another. The former assigns sets to sets. As in: Take any set a and return a cartesian product of the set with itself. (You might be tempted to think of functions on sets as functions between sets of all sets, but this kind of thinking leads to well known paradoxes.)

Or take any set a and return the set of functions from this set to itself. For this one we can easily build a polymorphic function — one which for every type a produces an actual function whose type is (a->a). So, for instance, if a is Int, we will produce a function (Int->Int); if a is Bool, we will produce a function (Bool->Bool); and so on. Now, with ad-hoc polymorphism it’s okay to code the Int function separately from the Bool function; but in parametric polymorphism, you’ll have to use the same code for all types.

Any language that provides some kind of pattern-matching on types (e.g., template specialization in C++) automatically introduces ad-hoc polymorphism. Incidentally, ad-hoc polymorphism is also possible in Haskell with the use of type families.
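The simplest way to see ad-hoc polymorphism break the uniformity is a type class whose instances do genuinely different things (a toy example of mine; the type families mentioned above are another route to the same effect):

-- Different code per type: ad-hoc, not parametric.
class Twist a where
  twist :: [a] -> [a]

instance Twist Int where
  twist = reverse        -- integer lists get reversed

instance Twist Bool where
  twist = take 1         -- boolean lists get truncated

-- The free theorem fails: twist (fmap f as) need not equal fmap f (twist as).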

Preservation of Relations

Let’s go to our original example and rewrite it using the explicit universal quantifier:

r :: forall a. [a] -> [a]

It defines a family of functions parametrized by the type a. When used in Haskell code, a particular member of this family will be picked automatically by the type inference system, depending on the context. In what follows, I’ll use explicit subscripting for the same purpose. The free theorem I mentioned before can be rewritten as:

rb (fmap f as) = fmap f (ra as)

with the function:

f :: a -> b

serving as a bridge between the types a and b. Specifically, f relates values of type a to values of type b. This relation happens to be functional, which means that there is only one value of type b corresponding to any given value of type a.

But the correspondence between elements of two lists may, in principle, be more general. What’s more general than a function? A relation. A relation between two sets a and b is defined as a set of pairs — a subset of the cartesian product of a and b. A function is a special case of a relation, one that can be represented as a set of pairs of the form (x, f x), or in relational notation x <=> f x. This relation is often called the graph of the function, since it can be interpreted as coordinates of points on a 2-d plane that form the plot of the function.

Reynolds’ key insight was that you can abstract the shape of a data structure by defining relations between values. For instance, how do we know that two pairs have the same shape — even if one is a pair of integers, say (1, 7), and the other a pair of colors, say (Red, Blue)? Because we can relate 1 to Red and 7 to Blue. This relation may be called: “occupying the same position”.

Notice that the relation doesn’t have to be functional. The pair (2, 2) can be related to the pair (Black, White) using the non-functional relation:

(2 <=> Black),
(2 <=> White)

Conversely, given any relation between integers and colors, you can easily test which integer pairs are related to which color pairs. For the above relation, for instance, these are all the pairs that are related:

((2, 2) <=> (Black, Black)),
((2, 2) <=> (Black, White)),
((2, 2) <=> (White, Black)),
((2, 2) <=> (White, White))

This idea is easily extended to lists. Two lists are related if they have the same length and their corresponding elements are related: the first element of one list must be related to the first element of the second list, and so on; two empty lists are always related.

In particular, if the relationship between elements is established by a function f, it’s easy to convince yourself that the lists as and bs are related if

bs = fmap f as

With this in mind, our free theorem can be rewritten as:

rb bs = fmap f (ra as)

In other words, it tells us that the two lists

rb bs

and

ra as

are related under f.

Fig 1. Polymorphic function r rearranges lists but preserves relations between elements

So r transforms related lists into related lists. It may change the shape of the list, but it never touches the values in it. When it acts on two related lists, it rearranges them in exactly the same way, without breaking any of the relations between corresponding elements.

Reading Types as Relations

The above examples showed that we can define relations between values of composite types in terms of relations between values of simpler types. We’ve seen this with the pair constructor and with the list constructor. Continuing this trend, we can state that two functions:

f :: a -> b

and

g :: a' -> b'

are related iff, for related x and y, f x is related to g y. In other words, related functions map related arguments to related values.

Notice what we are doing here: We are consistently replacing types with relations in type constructors. This way we can read complex types as relations.

But what about primitive types? Let’s consider an example. Two functions from lists to integers that simply calculate the lengths of the lists:

lenStr  :: [Char] -> Int
lenBool :: [Bool] -> Int

What happens when we call them with two related lists? The first requirement for lists to be related is that they are of equal length. So when called with related lists the two functions will return the same integer value. It makes sense for us to consider these two functions related because they don’t inspect the values stored in the lists — just their shapes. They also look like components of the same parametrically polymorphic function, length.

It therefore makes sense to read a primitive type, such as Int, as an identity relation: two values are related if they are equal. This way our two functions, lenStr and lenBool are indeed related, because they turn related lists to related results.

Notice that for non-polymorphic functions the relationship that follows from their type is pretty restrictive. For instance, two functions Int->Int are related if and only if their outputs are equal for equal inputs. In other words, the functions must be (extensionally) equal.

Now let’s go back to polymorphic functions. The type of a polymorphic function is specified by universally quantifying a function on types.

f :: forall A. ΦA

The function Φ maps types to types. In our set-theoretical model it maps sets to sets, but we want to read it in terms of relations.

A function on relations does two things: it maps input sets to output sets, and it maps pairs from input sets to pairs from output sets. If you pick an arbitrary relation A between sets a and a' then Φ will map A to a relation ΦA between sets b and b'. The connection between a and b, and between a' and b' may, in principle, vary with the input relation.

In describing polymorphic functions, we are only interested in functions on relations that “factorize” into functions on sets. It means that Φ defines two families of functions on sets, φ and ψ, such that our b is given by φa and b' is given by ψa'. ΦA then establishes a relation between the sets φa and ψa'.

Fig 2. Φ maps relations to relations

Given that Φ maps relations to relations, a universally quantified version of it:

forall A. ΦA

relates values that are themselves parameterized by sets; in other words, it relates polymorphic values.

Now suppose you have two polymorphic functions g and g':

g  :: forall a . φa
g' :: forall a'. ψa'

They both map types (sets) to values. We can instantiate g at some type a, and it will return a value ga of the type b=φa. We can instantiate g' at some type a', and it will return a value g'a' of the type b'=ψa'. We can do this for any relation A between two arbitrary sets a and a'.

We will say that g and g' are related through the relation induced by the type (forall A. ΦA) iff:

  • Φ factorizes, as a function on sets, into φ and ψ, and
  • the results ga and g'a' are related by ΦA.

Fig 3. Relation between two polymorphic functions

In other words, polymorphic functions are related if they map related types to related values. Notice that in the interesting examples these values are themselves functions.

With these definitions, we can now reinterpret any type signature as a relation between values.

The Parametricity Theorem

Reynolds’ second key insight was that any term is in a relation with itself — the relation being defined by the term’s type. We have indeed defined the mapping of types to relations to make this work. Primitive types turn into identity relations, so obviously a primitive value is in relation with itself. A function between primitive types is in relation with itself because it maps related (equal) arguments into related (equal) results. A list or a pair of primitive types is in relation with itself because each element of it is equal to itself. You can recurse and consider a list of functions, or a pair of lists, etc., building the proof inductively, proceeding from simpler types to more and more complex types. The proof goes over all possible term construction rules and typing rules in a given theory.

Formally, this kind of proof is called “structural induction,” because you’re showing that more complex structures will satisfy the theorem as long as the simpler ones from which they are constructed do. The only tricky part is dealing with polymorphic functions, because they are quantified over all types. In fact, this is the reason why the naive interpretation of types as sets breaks down (see, however, Pitts’ paper). It is possible, however, to prove the parametricity theorem in a more general setting, for instance, using frames, or in the framework of operational semantics.

Wadler’s key insight was to interpret Reynolds’ theorem not only as a way of identifying different implementations of the same type — for instance, cartesian and polar representations of complex numbers — but also as a source of free theorems for polymorphic types.

Let’s try applying the parametricity theorem to some simple examples. Take a constant term: an integer like 5. Its type Int can be interpreted as a relation, which we defined to be the identity relation (it’s one of the primitive types). And indeed, 5 is in this relation with 5.

Take a function like:

ord :: Char -> Int

Its type defines a relation between functions: Two functions of the type Char->Int are related if they return equal integers for equal characters. Obviously, ord is in this relation with itself.

Those were trivial examples. The interesting ones involve polymorphic functions. So let’s go back to our starting example. The term now is the polymorphic function r whose type is:

forall a . [a] -> [a]

Parametricity tells us that r is in relation with itself. However, comparing a polymorphic function to itself involves comparing the instantiations of the same function at two arbitrary types (sets), say a and a':

ra <=> ra'

We are free to pick an arbitrary relation A between elements of these sets. Φ will map A to another relation, ΦA, which, in our case, is induced by the type:

[a] -> [a]

Fig 4. Polymorphic function r maps related types to related values, which themselves are functions on lists

By our definition, two functions ra and ra' are related if they map related arguments to related results. In this case the arguments and the results are lists. So if we have two related lists:

as  :: [a]
as' :: [a']

they must, by parametricity, be mapped to two related lists:

bs  = ra  as
bs' = ra' as'

This must be true for any type of relation, so let’s pick a functional relation generated by some (f :: a -> a'). We have:

as' = fmap f as

The results of applying r, therefore, must be related through the same relation:

bs' = fmap f bs

Putting it all together, we get our expected result:

ra' (fmap f as) = fmap f (ra as)

Parametricity and Natural Transformations

The free theorem I used as the running example is interesting for another reason: The list constructor is a functor. You may think of functors as generalized containers for storing arbitrary types of values. You can imagine that they have shapes; and for two containers of the same shape you may establish a correspondence between “positions” at which the elements are stored. This is quite easy for traditional containers like lists or trees, and with a leap of faith it can be stretched to non-traditional “containers” like functions. We used the intuition of relations corresponding to the idea of “occupying the same position” within a data structure. This notion can be readily generalized to any polymorphic containers. Two trees, for instance, are related if they are both empty, or if they have the same shape and their corresponding elements are related.

Let’s try another functor. You can think of Maybe as having two shapes: Nothing and Just. Two Nothings are always related, and two Justs are related if their contents are related.

This observation immediately gives us a free theorem about polymorphic functions of the type:

r :: forall a. [a] -> Maybe a

an example of which is safeHead. The theorem is:

fmap h . safeHead == safeHead . fmap h

Notice that the fmap on the left is defined by the Maybe functor, whereas the one on the right is the list one.
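For completeness, here is safeHead together with a couple of sanity checks of the theorem (the checks in the comments are mine):

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- fmap (+1) (safeHead [1, 2, 3])      == Just 2
-- safeHead (fmap (+1) [1, 2, 3])      == Just 2
-- fmap (+1) (safeHead ([] :: [Int]))  == Nothing
-- safeHead (fmap (+1) ([] :: [Int]))  == Nothing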

If you accept the premise that an appropriate relation can be defined for any functor, then you can derive a free theorem for all polymorphic functions of the type:

r :: forall a. f a -> g a

where f and g are functors. This type of function is known as a natural transformation between the two functors, and the free theorem:

fmap h . r == r . fmap h

is the naturality condition. That’s how naturality follows from parametricity.

Acknowledgments

I’d like to thank all the people I talked to about parametricity at the ICFP in Gothenburg, and Edward Kmett for reading and commenting on the draft of this blog.

Appendix: Other Examples

Here’s a list of other free theorems from Wadler’s paper. You might try proving them using parametricity.

r :: [a] -> a -- for instance, head
f . r == r . fmap f
r :: [a] -> [a] -> [a] -- for instance, (++)
fmap f (r as bs) == r (fmap f as) (fmap f bs)
r :: [[a]] -> [a] -- for instance, concat
fmap f . r == r . fmap (fmap f)
r :: (a, b) -> a -- for instance, fst
f . r == r . mapPair (f, g)
r :: (a, b) -> b -- for instance, snd
g . r == r . mapPair (f, g)
r :: ([a], [b]) -> [(a, b)] -- for instance, uncurry zip
fmap (mapPair (f, g)) . r == r . mapPair (fmap f, fmap g)
r :: (a -> Bool) -> [a] -> [a] -- for instance, filter
fmap f . r (p . f) == r p . fmap f
r :: (a -> a -> Ordering) -> [a] -> [a] -- for instance, sortBy
-- assuming: cmp x y == cmp' (f x) (f y), i.e., f is monotone (preserves order)
fmap f . r cmp == r cmp' . fmap f
r :: (a -> b -> b) -> b -> [a] -> b -- for instance, foldr
-- assuming: g (acc x y) == acc (f x) (g y)
g . foldr acc zero == foldr acc (g zero) . fmap f
r :: a -> a -- id
f . r == r . f
r :: a -> b -> a -- for instance, the K combinator
f (r x y) == r (f x) (g y)

where:

mapPair :: (a -> c, b -> d) -> (a, b) -> (c, d)
mapPair (f, g) (x, y) = (f x, g y)

Bibliography

  1. John C Reynolds, Types, Abstraction and Parametric Polymorphism
  2. Philip Wadler, Theorems for Free!
  3. Claudio Hermida, Uday S. Reddy, Edmund P. Robinson, Logical Relations and Parametricity – A Reynolds Programme for Category Theory and Programming Languages
  4. Derek Dreyer, Parametricity and Relational Reasoning, Oregon Programming Languages Summer School

In category theory we never look inside objects. All information about objects is encoded in the arrows (morphisms) between them. In set theory, on the other hand, we express properties of sets through their elements. But there is a strong link between categories and sets, at least in the case of locally small categories. In those categories, morphisms between any two objects form a set. We call this set the hom-set.

Go one level up in abstraction and you have categories of functors, which are mappings between categories. In a functor category a functor is treated as an object, and a natural transformation — a mapping between functors — is treated as a morphism. So a hom-set in a functor category is a set of natural transformations between two functors. An interesting question arises: How is this set related to the hom-sets in the categories that are mapped by these functors? After all, they are sets in the same category of sets. The answer involves, as usual, a universal construction. This one is called an end.

Notation

The problem with category theory is that it deals with so many levels of abstraction that you quickly run out of fonts for your notation. I’ll try to stick to more or less the standard notation. I’ll use capital letters C, D, etc. for categories (script font would be nice, but it’s not very practical in a blog). The corresponding lower case letters, c, d — optionally with primes, like c', d' — will denote objects in those categories. I’ll use f, g, h, etc. for morphisms and F, G, H for functors. I’ll also use the terse notation for hom-sets, so the set of all morphisms between objects c and c' in category C will be simply C(c, c'). The set of natural transformations between two functors F and G, both mapping C to D, will be [C, D](F, G). I will not use parentheses for functors acting on objects or morphisms, so F acting on c will be simply F c (and similarly F f, when acting on f). But occasionally I’ll break the rules if it helps the presentation.

Natural Transformations

A natural transformation is a map between functors. So let’s start with two functors F and G, both going from category C to another category D. Let’s focus on one object in C, call it c. F maps c to F c and G maps c to G c. These two, F c and G c, are objects in D. As such they define a hom-set: the set of all morphisms from F c to G c denoted D(F c, G c). A natural transformation picks one morphism from every such hom-set. Not all picks are natural, as we’ll see soon.

Hom-set defined by two images of object c under functors F and G

A hom-set, being a set, is also an object in Set — the category of sets. So let’s look at the hom-sets D(F c, G c) as objects in Set. There is one for every c. If we carefully pick one element from each, we will have a natural transformation (I said “carefully”). A natural transformation is a family of morphisms picked from hom-sets D(F c, G c), one for each c. If you think of those hom-sets as fibers growing from objects in C, a natural transformation is a cross-section through all these fibers.

Here’s another way of looking at this. Let’s pick an arbitrary set in Set, call it X (sorry, I’m using a capital letter for an object in Set, but I do need lowercase x for its element). A function φ from X to the set D(F c, G c) maps an element x of X to a particular morphism in D(F c, G c). We can think of it as picking a component of some natural transformation. Each choice of x would potentially correspond to a different natural transformation. So the set X could be looked upon as representing some set of natural transformations. We still have to fill in a lot of blanks, but we are on the right track.

A function from a set X to a hom-set defined by two functors F and G

Since a natural transformation is a family of morphisms, we need a family of such functions from X to D(F c, G c), one for every c. Let’s call this family τ. When we fix c, it’s a function from X to D(F c, G c). When we fix x, it’s a precursor of a natural transformation: a family of morphisms picked from hom-sets.

A family of functions parameterized by c and x

The only thing missing from this picture is the naturality condition. Not all families of morphisms picked from the hom-sets D(F c, G c) are natural. We need to account for the fact that our functors F and G not only map objects, but also morphisms.

So let’s look at morphisms in C and how they are mapped to morphisms in D by, say, our functor F. We can pick two objects in C, c and c'. Morphisms between those two form a hom-set C(c, c'). F maps this hom-set to another hom-set, D(F c, F c'). Similarly, G maps the same hom-set to D(G c, G c').

Four hom-sets defined by two functors F and G and two objects c and c'

Taken together, we have a diagram of four hom-sets between four objects: F c, F c', G c, and G c'. Fixing a morphism f from c to c' (an element of C(c, c')) picks two morphisms, one F f from D(F c, F c') and one G f from D(G c, G c'), given the functoriality of F and G. Fixing an x in X picks two morphisms, one τc x from D(F c, G c) and one τc' x from D(F c', G c'). These four better commute:

G f . τc x = τc' x . F f

The commuting naturality square

That’s the naturality condition. Any τ that satisfies this condition defines a set of natural transformations, one for each x.

Wedges

All this time I’ve been setting up the scene for one important insight. The set X and the family of functions τ look a lot like a cone (see my blog post about limits). Except that, instead of one functor, we have two, and τ is not really a natural transformation. But we are getting close. And if we can carve something cone-like out of this construction, then we could maybe find something limit-like. And indeed the cone-like object is called a wedge and the limit-like thing is called an end, and in our case the end would be the set of all natural transformations from F to G. So let’s work this thing out.

If we were constructing a cone, we’d start with a functor from our source category C to the target category — Set in this case. That’s easy to define for objects: c goes into the set D(F c, G c). But we run into a problem with morphisms. Suppose we want to map a morphism h that goes from c to c':

h : c -> c'

Its image should be a function from D(F c, G c) to D(F c', G c'). It’s a function that maps morphisms to morphisms. Let’s see what morphisms are at our disposal. We have:

F h : F c -> F c'
G h : G c -> G c'

How can we turn a morphism f from D(F c, G c)

f : F c -> G c

into a morphism f' from D(F c', G c'):

f' : F c' -> G c'

One thing we could do is to post-compose G h after f to get to G c'; but there is no way to get anywhere from F c'. So it can’t be that simple.

But notice one thing. Even though τ maps elements of X to “diagonal” hom-sets (by diagonal I mean that the argument c is repeated in D(F c, G c)), the naturality condition involves off-diagonal hom-sets, like D(F c, F c') and D(G c, G c') (c potentially different from c'). So maybe we should open our minds to those off-diagonal hom-sets in our search for a functor? How about a functor that maps a pair of objects (c, c') to a hom-set D(F c, G c')? Let’s see if we can figure out its action on morphisms.

Now, since we are constructing a functor of two arguments, we have to define its action on a pair of morphisms, (f, g).

f : c -> c'
g : d -> d'

The image of this pair should be a function in Set that maps D(F c, G d) into D(F c', G d'). Unfortunately, we run into the same problem again. Given h : F c -> G d, we can follow it with G g : G d -> G d', but there is no way we can move anywhere from F c'. The only morphism at our disposal goes the wrong way, F f : F c -> F c'. If we could only reverse it!

Ah! but functors that go “the wrong way” on morphisms are well known. They are called contravariant functors, as opposed to the good old covariant functors that we have grown to love and cherish. What we need is a functor that’s contravariant in the first argument and covariant in the second. Such functors even have a name: they are called profunctors.

So, given a pair of morphisms (f, g) our profunctor Snat will map a morphism from D(F c', G d) to a morphism from D(F c, G d'). Notice that this time I have inverted c and c'.

Given h : F c' -> G d, we can easily construct a corresponding morphism in D(F c, G d') by this composition:

G g . h . F f : F c -> G d'

Constructing the action of a profunctor built from two functors on a pair of morphisms

So that’s our action of Snat on a pair of morphisms from C. It turns a morphism h into:

(Snat f g) h = G g . h . F f
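This is, in Haskell terms, the dimap of a profunctor (the profunctors package calls it exactly that). Here’s a throwaway sketch for the profunctor built from two functors f and g; the names HomP and dimapHomP are mine:

newtype HomP f g a b = HomP (f a -> g b)

-- Contravariant in the first argument, covariant in the second,
-- mirroring (Snat f g) h = G g . h . F f:
dimapHomP :: (Functor f, Functor g)
          => (a' -> a) -> (b -> b')
          -> HomP f g a b -> HomP f g a' b'
dimapHomP l r (HomP h) = HomP (fmap r . h . fmap l)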

Let’s summarize what we have so far: We have a profunctor Snat, a set X, and a family of morphisms τ from X to diagonal elements Snat c c. We are not interested in defining τ for off-diagonal elements of Snat. What remains is to impose some kind of generic condition on τ that would let it generate natural transformations only. We would like to formulate this condition in terms of a general profunctor S, if possible.

What’s the minimal consistency condition on τ, given that it only generates diagonal elements S c c? We need a way to somehow connect two such objects, say S c c and S c' c'. Suppose we have a morphism f : c -> c'. How can we use this morphism to operate on S c c and S c' c'? A profunctor S can be used to map a pair of morphisms, so how about pairing our f with something obvious that always exists, like the identity morphism id? Let’s try it:

S id_c f  : S c c   -> S c c'
S f id_c' : S c' c' -> S c c'

(Notice the inversion of c and c' when f is used as the first argument — that’s our contravariance in action.) We have found two ways to get from two diagonal elements to the same non-diagonal element of S. Together with τc and τc' they form a diamond. This diamond better commute or we’re in trouble.

The wedge condition

S id_c f . τc = S f id_c' . τc'

Now we can finally define a wedge for an arbitrary profunctor S. A wedge for S consists of an object X and a family of morphisms τc from X to S c c that satisfy the wedge condition above.
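
In Haskell pseudocode, a wedge might be sketched like this (my own encoding; the wedge condition cannot be enforced by the types, so it is stated as a law in a comment):

{-# LANGUAGE RankNTypes #-}

-- A wedge for a profunctor p with vertex x: one leg into every
-- diagonal element p c c.
newtype Wedge p x = Wedge { leg :: forall c. x -> p c c }

-- Wedge condition (a law, not checked by the compiler): for every
-- f :: c -> c',
--   dimap id f . leg  ==  dimap f id . leg
-- where the left leg is taken at c, the right leg at c', and dimap is
-- the profunctor's action on a pair of morphisms (dimapSnat above,
-- in the case of Snat).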

And if we plug in our special profunctor Snat that maps pairs of objects to hom-sets, the wedge condition turns into the naturality condition for our two functors. Let’s check this out.

Remember, this is how Snat acts on a pair of morphisms f and g:

(Snat f g) h = G g . h . F f

Substituting identities in the right places, we get (remember, functors turn identities into identities):

(Snat id_c f) h = G f . h
(Snat f id_c') h' = h' . F f

Notice that h and h' are morphisms in D but, at the same time, elements of the sets S c c and S c' c' in the wedge condition. They are given, respectively, by the action of τc and τc' on an element x of X. The wedge condition for Snat will therefore post-compose τc (with G f) and pre-compose τc' (with F f):

G f . τc = τc' . F f

And this is exactly the naturality condition for the components of τ. It means that a τ satisfying the wedge condition generates a natural transformation for every choice of x.

Dinatural transformations

This definition of a wedge looks almost like the definition of a cone, except that the commuting conditions in a cone were expressed in terms of naturality of the transformation between the diagonal functor ΔX and the functor F. Here, we could also say that the object X is an image of a diagonal profunctor ΔX — trivially defined to map pairs of objects from C to an object X, and pairs of morphisms to the identity morphism on X.

A wedge (Strictly speaking, a profunctor is a mapping from C^op×C to D — the “op” accounting for the reversal of the direction of morphisms in the first argument)

So what we need to complete the picture is some generalized notion of a natural transformation between two profunctors R and S. Since we are only interested in the mapping of diagonal elements of profunctors, this transformation will be called a dinatural transformation (diagonally natural). All we need is to expand the top vertex of the diamond diagram (the wedge condition) to create the commuting hexagon below.

Dinaturality condition

S id_c f . τc . R f id_c = S f id_c' . τc' . R id_c' f

With this dinaturality condition we can define a wedge for any profunctor S with a vertex X as a dinatural transformation between ΔX and S.

Now, just as we have defined a limit as a universal cone, we can define an end as a universal wedge. An end is an object End S and a dinatural transformation ω such that for any other wedge (X, τ) there is a unique morphism h from X to End S for which all the triangles in the following diagram commute:

The end as a universal wedge

In particular, since the wedges for our profunctor Snat defined natural transformations, their end defines the set of all natural transformations between the functors F and G: [C, D](F, G).

There is special notation for an end using the integral symbol. We are “integrating” a profunctor S along the diagonal, over all objects c in the category C.

End S = ∫_c S(c, c)

Using this notation, the set of natural transformations can be written as the following end:

[C, D](F, G) = ∫_c D(F c, G c)

Note: This is a bit of abuse of notation that happens quite often in category theory. The integral sign makes more sense for coends, which are duals of ends. Coends are related to colimits, which include coproducts, which are the category theory proxies for sums. In this sense, a coend is a generalization of a sum; just as an integral is an infinite continuous sum. By the same token, an end is a generalization of a product. Unfortunately, there is no common symbol for a continuous product in calculus, so we are stuck with the integral symbol.

In Haskell, ends become universal quantifiers, hence this definition of natural transformation from Edward Kmett’s category extras (here f and g are functors):

type f :~> g = forall a. f a -> g a
type Natural f g = f :~> g
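
For completeness, here is one hypothetical way to encode an end directly, reusing the Snat wrapper sketched earlier: the diagonal is captured by a universal quantifier, and specializing to Snat recovers exactly the natural-transformation type above.

{-# LANGUAGE RankNTypes #-}

-- An end packages up the diagonal of a profunctor p.
newtype End p = End { runEnd :: forall c. p c c }

-- For the hom-profunctor Snat, the end is precisely the set of
-- natural transformations between the two functors.
toNatural :: End (Snat f g) -> f a -> g a
toNatural (End e) = runSnat e

fromNatural :: (forall a. f a -> g a) -> End (Snat f g)
fromNatural h = End (Snat h)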

You can find more about representing profunctors, dinatural transformations, and ends in Haskell in Edward’s blog.

The Moment of Zen

You can think of a hom-set as defining a mapping: It takes a pair of objects and generates a set of morphisms — an object in Set. A profunctor generalizes this idea. It takes a pair of objects and generates a hom-set-like object in a category that’s not necessarily the category of sets. The mapping from a pair of objects to a hom-set is functorial: its action on pairs of morphisms is well defined as well. Except that this action is contravariant in the first argument and covariant in the second. And so is a profunctor which imitates it.

We’ve seen before how a functor can embed a simple graph into a category and define a limit. The limit embodies the structure of this graph in a single object. A profunctor embeds a structure of a hom-set into a category. An end then embodies this structure in a single object. When this hom-set structure is fashioned using two functors, the end becomes a set of mappings between those two functors — a set of natural transformations. But nothing stops us from fashioning more complex hom-set-like structures and finding their ends.

One such construction I’ve had my eyes on for a long time is called the Kan extension. To give you an idea, imagine a functor T that’s defined on a sub-category M of a bigger category C. It maps M into a category A. The question is, can we extend, or interpolate, this functor over the rest of C? How would we define its value on an object c that’s not in M? The trick is to look at all the hom-sets that go from c to objects in M.

Kan extension setup


Our extended functor will have to map not only c to an object in A, but also all those hom-sets to hom-sets in A. After all, that’s what functors do. This mapping of hom-sets looks very much like a profunctor. It has to be contravariant in its source and covariant in its target. If this profunctor has an end, that’s a perfect candidate for the image of c under the extended functor. That, in a nutshell, is the idea behind the right Kan extension (the left one is, predictably, built with coends). But that’s a topic for another blog post.


Haskell is a language deeply rooted in category theory. But as you don’t need to study the root system of Vitis vinifera in order to enjoy a glass of wine, you don’t need to know much about category theory in order to program in Haskell. Nevertheless, some of us just can’t help ourselves. We have to dig into the rich terroir of category theory to gain deeper insight into the art of functional programming. And today I’d like to talk about functions.

The basic category-theoretical model for Haskell is the category Hask, where objects are Haskell types, and morphisms are functions. The problem with this picture is that it puts functions on a different footing than the rest of the language. Functions from type A to type B — in other words, morphisms from object A to object B in Hask — form a set. This set is called the hom set, Hom(A, B). The fact that it’s just a set and not something bigger is a property of Hask — the property of being locally small. But in Haskell functions from type A to type B also form a type A->B. A type is an object in Hask. So what’s the connection between the set Hom(A, B) and the object A->B? The answer to this question is very interesting and involves products, exponentials, currying, and of course universal constructions.

In my previous blog I talked about the universal construction of limits — objects that represent relationships between other objects. In particular, a product can be defined as such a limit representing the most trivial relationship between two objects — that of just being two objects. Morphisms are also involved in relationships between objects, so maybe there is a way of representing them as an object as well. And indeed, it’s possible to define an object to represent a set of morphisms between two other objects A and B. Such an object is called the exponential and denoted by B^A.

Notice that the domain A of the morphisms appears in the exponent. That might seem odd at first, but it makes perfect sense if you consider the relationship between multiplication (product) and exponentiation. In arithmetic, m^n means m multiplied by itself n times. If you replace m and n with types (for simplicity, think of types as sets of values) and multiplication with (set-theoretical) product, you can think of m^n as a set of n-tuples of values of type m: (m_1, m_2, m_3, …, m_n). Of course, if n is a type, it’s not immediately clear what an n-tuple is (it’s a categorical power), but you can gain some intuition if you consider enumerated finite types. For instance, functions from Bool to any type m, Bool->m, can be represented as all possible pairs of ms (one value for True and one for False). They correspond to the exponential m^Bool. Also, for finite types, the number of different functions from m to n is equal to n^m. But the connection between products and exponentials goes deeper than that.
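
To make the finite-type intuition concrete, here is a tiny Haskell illustration (toPair and fromPair are my own names) of the isomorphism between functions out of Bool and pairs:

-- A function from Bool carries exactly the same data as a pair:
-- m^Bool is isomorphic to m × m.
toPair :: (Bool -> m) -> (m, m)
toPair f = (f True, f False)

fromPair :: (m, m) -> Bool -> m
fromPair (t, f) b = if b then t else f

-- toPair . fromPair and fromPair . toPair are both identities,
-- so Bool -> m has exactly as many inhabitants as (m, m).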

Universal Construction

The basic relationship describing a function is that of application. Given a pair (function, argument), produce a result. In terms of types, a function of type X->Y applied to X produces Y. We want to define the exponential object Y^X to model this relationship. How do we do that?

There isn’t really that much choice. We need to map a pair of objects (Y^X, X) to Y. But what is a pair, and what does it mean to map? We can represent the pair as an object — the product Y^X × X — and then we can map it to Y using a morphism, which we’ll call app.

It immediately follows that we can’t define exponential objects if we don’t have products. Again, it kind of makes intuitive sense — exponentiation arising from iterated multiplication.

From previous experience we know that having a relationship between objects is usually not enough to define a new object. There may be many other objects that model this relationship. We need a way to compare them and pick the one that models it best.

So suppose that we have an impostor object Z, together with a morphism g from Z × X to Y impersonating application. We know that our choice for Y^X is universal if for any Z and g there is a unique morphism, which we’ll call λg, that maps Z to Y^X, and which factors through app:

g = app . (λg × id)
Universality diagram defining the exponential object

Such a universal object might not exist in every category, but it does in Hask. In general, a category in which there is a terminal object, a product of any two objects, and an exponential of any two objects is called Cartesian closed. Cartesian closed categories are, for obvious reasons, very important in computer science.

Currying

There’s another way of looking at the diagram that defines the exponential object. You can think of the morphism g as a function of two variables:

g :: (Z, X) -> Y

For any such g there is a unique morphism λg that maps Z to Y^X, an object representing a function from X to Y. This establishes a one-to-one correspondence between functions of two variables and functions returning functions, which we know under the name of currying. So currying “falls out” of the definition of the exponential object.
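
This correspondence is built into the Haskell Prelude as curry and uncurry; a small example:

-- curry   :: ((a, b) -> c) -> a -> b -> c
-- uncurry :: (a -> b -> c) -> (a, b) -> c

-- A function of two variables and its curried counterpart:
add2 :: (Int, Int) -> Int
add2 (x, y) = x + y

add :: Int -> Int -> Int
add = curry add2            -- add 2 3 == add2 (2, 3) == 5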

Adjunction

Any time there is a one-to-one correspondence between sets of morphisms you might want to look for an underlying adjunction. You might remember from my previous blog post that a functor F is said to be left adjoint to the functor G (or G right adjoint to F) if the following two hom sets are naturally isomorphic:

Hom(FZ, Y) ~ Hom(Z, GY)

In our case we have a one-to-one mapping between the morphism g from Z×X to Y and the morphism λg from Z to Y^X. In a category where all products and all exponentials exist, we can define these two functors:

F_X Z = Z × X
G_X Y = Y^X

In Haskell, these functors would be implemented as:

newtype F x z = F (z, x)        -- F x sends z to z × x (the product functor)
instance Functor (F x) where
    fmap f (F (z, x)) = F (f z, x)

newtype G x y = G (x -> y)      -- G x sends y to y^x (the exponential functor)
instance Functor (G x) where
    fmap f (G g) = G (f . g)

and the isomorphism of hom sets would be given by the function phi and its inverse phi':

phi :: (F x z -> y) -> z -> G x y
phi f z = G $ \x -> f (F (z, x))

phi' :: (z -> G x y) -> F x z -> y
phi' g (F (z, x)) = let G f = g z 
                    in f x

Exponentiation can thus be defined as the right adjoint of taking a product.
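
As a quick sanity check of phi and phi' (my own toy example, not from the original post):

-- Uncurried addition, phrased through the product functor F Int:
plus :: F Int Int -> Int
plus (F (z, x)) = z + x

-- phi curries it into a function returning a G Int Int:
plusCurried :: Int -> G Int Int
plusCurried = phi plus

-- and phi' recovers the original: phi' plusCurried (F (2, 3)) == 5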


C++ is like an oil tanker — it takes a long time for it to change course. The turbulent reefs towards which C++ has been heading were spotted on the horizon more than ten years ago. I’m talking, of course, about the end of smooth sailing under Moore’s law and the arrival of the Multicore. It took six years to acknowledge the existence of concurrency in the C++11 Standard, but that’s only the beginning. It’s becoming more and more obvious that a major paradigm shift is needed if C++ is to remain relevant in the new era.

Why do we need a new paradigm to deal with concurrency? Can’t we use object oriented programming with small modifications? The answer to this question goes to the heart of programming: it’s about composability. We humans solve complex problems by splitting them into smaller subproblems. This is a recursive process: we split subproblems into still smaller pieces, and so on. Eventually we reach a problem size that can be easily translated into computer code. We then have to compose all these partial solutions into larger programs.

The key to composability is being able to hide complexity at each level. This is why object oriented programming has been so successful. When you’re implementing an object, you have to deal with its internals, with state transitions, intermediate states, etc. But once the object is implemented, all you see is the interface. The interface must be simpler than the implementation for object oriented programming to make sense. You compose larger objects from smaller objects based on their interfaces, not the details of their implementation. That’s how object oriented programming solves the problem of complexity.

Unfortunately, objects don’t compose in the presence of concurrency. They hide the wrong kind of things. They hide sharing and mutation. Let me quote the definition of data race: Two or more threads accessing the same piece of memory at the same time, at least one of them writing. In other words: Sharing + Mutation = Data Race. Nothing in the object’s interface informs you about the possibility of sharing and mutation inside the object’s implementation. Each object in isolation may be data-race-free but their composition may inadvertently introduce data races. And you won’t know about it unless you study the details of their implementation down to every single memory access.

In Java, an attempt had been made to mollify this problem: Every object is equipped with a mutex that can be invoked by declaring the method synchronized. This is not a scalable solution. Even Java’s clever thin lock implementation incurs non-negligible performance overhead, so it is used only when the programmer is well aware of potential races, which requires deep insight into the implementation of all subobjects, exactly the thing we are trying to avoid.

More importantly, locking itself doesn’t compose. There’s a classic example of a locked bank account whose deposit and withdraw methods are synchronized by a lock. The problem occurs when one tries to transfer money from one account to another. Without exposing the locks, it’s impossible to avoid a transient state in which the funds have already left one account but haven’t reached the second. With locks exposed, one may try to hold both locks during the transfer, but that creates a real potential for deadlocks. (Software Transactional Memory provides a composable solution to this problem, but there are no practical implementations of STM outside of Haskell and Clojure.)
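
For the record, this is roughly what the composable STM solution looks like in Haskell, using the standard stm package (overdraft checks and other real-world concerns omitted):

import Control.Concurrent.STM

type Account = TVar Int

-- deposit and withdraw are ordinary STM actions...
deposit, withdraw :: Account -> Int -> STM ()
deposit  acc n = modifyTVar' acc (+ n)
withdraw acc n = modifyTVar' acc (subtract n)

-- ...and they compose into a transfer that is atomic as a whole;
-- no other thread can ever observe the money in flight.
transfer :: Account -> Account -> Int -> IO ()
transfer from to n = atomically $ do
    withdraw from n
    deposit  to   n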

Moreover, if we are interested in taking advantage of multicores to improve performance, the use of locks is a non-starter. Eking out parallel performance is hard enough without locks, given all the overheads of thread management and Amdahl’s law. Parallelism requires a drastically different approach.

Since the central problem of concurrency is the conflict between sharing and mutation, the solution is to control these two aspects of programming. We can do mutation to our heart’s content as long as there’s no sharing. For instance, we can mutate local variables; or we can ensure unique ownership by making deep copies, using move semantics, or by employing unique_ptrs. Unique ownership plays a very important role in message passing, allowing large amounts of data to be passed cheaply between threads.

However, the key to multicore programming is controlling mutation. This is why functional languages have been steadily gaining ground in concurrency and parallelism. In a nutshell, functional programmers have found a way to program using what, to all intents and purposes, looks like immutable data. An imperative programmer, when faced with immutability, is as confused as a barbecue cook in a vegetarian kitchen. And the truth is that virtually all data structures from the C++ standard library are unsuitable for this kind of programming — the standard vector being the worst offender. A contiguous slab of memory is perfect for random or sequential access, but the moment mutation is involved, you can’t share it between threads. Of course, you can use a mutex to lock the whole vector every time you access it, but as I explained already, you can forget about performance and composability of such a solution.

The trick with functional data structures is that they appear immutable, and therefore require no synchronization when accessed from multiple threads. Mutation is replaced by construction: you construct a new object that’s a clone of the source object but with the requested modification in place. Obviously, if you tried to do this with a vector, you’d end up with a lot of copying. But functional data structures are designed for maximum sharing of representation. So a clone of a functional object will share most of its data with the original, and only record a small delta. The sharing is totally transparent since the originals are guaranteed to be immutable.

A singly-linked list is a classic, if somewhat trivial, example of such a data structure. Adding an element to the front of a list requires only the creation of a single node to store the new value and a pointer to the original (immutable) list. There are also many tree-like data structures that are logarithmically cheap to clone-mutate (red-black trees, leftist heaps). Parallel algorithms are easy to implement with functional data structures, since the programmer doesn’t have to worry about synchronization.
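
A minimal sketch in Haskell, where this is the default list type anyway (a C++ rendition would typically hold the tail through a shared_ptr):

-- A persistent singly-linked list: "mutating" the front allocates one
-- node and shares the entire old tail, which stays immutable.
data List a = Nil | Cons a (List a)

push :: a -> List a -> List a
push x xs = Cons x xs          -- xs is shared, never copied

-- Example: ys and zs both share the original xs in full.
-- xs = Cons 1 (Cons 2 Nil);  ys = push 0 xs;  zs = push 9 xs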

Functional data structures, also known as “persistent” data structures, are naturally composable. This follows from the composability of immutable data — you can build larger immutable objects from smaller immutable objects. But there’s more to it: This new way of mutating by construction also composes well. A composite persistent object can be clone-mutated by clone-mutating only the objects on the path to the mutation; everything else can be safely shared.

Concurrency also introduces nonstandard flows of control. In general, things don’t progress sequentially. Programmers have to deal with inversion of control, jumping from handler to handler, keeping track of shared mutable state, etc. Again, in functional programming this is nothing unusual. Functions are first class citizens and they can be composed in many ways. A handler is nothing but a continuation in the continuation passing style. Continuations do compose, albeit in ways that are not familiar to imperative programmers. Functional programmers have a powerful compositional tool called a monad that, among other things, can linearize inverted flow of control. The design of libraries for concurrent programming makes much more sense once you understand that.

A paradigm shift towards functional programming is unavoidable and I’m glad to report that there’s a growing awareness of that new trend among C++ programmers. I used to be the odd guy talking about Haskell and monads at C++ meetings and conferences. This is no longer so. There was a sea change at this year’s C++Now. The cool kids were all talking about functional programming, and the presentation “Functional Data Structures in C++” earned me the most inspiring session award. I take it as a sign that the C++ community is ready for a big change.


In my last blog post I talked about a universal construction of a product in an arbitrary category. This kind of construction might seem very abstract to you and me, but not to a mathematician. Every step in that construction may be analyzed under a microscope and further generalized.

Let’s start with the selection of two objects whose product we are defining. What does it mean to select two objects in a category C? In any other branch of mathematics that would be a stupid question, but not in category theory. You select two objects by providing a functor from a two-object category to C.

You might be used to thinking of categories as those big hulking things, like the category of sets or monoids. But there are also dwarf categories that consist of one or two objects and just a handful of arrows between them. They don’t represent anything other than simple graphs. In particular, the simplest non-trivial category, called 1, is just a single object with one identity morphism looping back on itself. We can define a functor from that category to any other category. It will map the single object to a particular object in the target category. (It will also map the only morphism into the identity morphism for that target object.) This functor is the essence of picking an object in a category. Instead of saying “Pick an object in the category C,” you may say “Give me a functor from the singleton category to C.”

The next simplest category is a two-object category, {1, 2}. We have two objects and two identity morphisms acting on them. Assume there are no morphisms between the two objects (so it’s a “discrete” category). A functor F from this category to C is the essence of picking two objects in C.

Fig 1. Functor F from {1,2} to C

Actually, there is an even simpler functor from {1, 2} to C, a degenerate functor that picks just one object in C — it maps both objects to the same object in C. We’ll call this functor ΔX, where X is the target object in C.

Fig 2. Const functor from {1, 2} to C

When there are two functors, there may be a natural transformation between them. Let’s try to define such a transformation between ΔX and F. The first functor, ΔX, maps both objects 1 and 2 to X. The second, F, maps 1 to A and 2 to B. A natural transformation between these two functors has two components: the first is a morphism p1 from X to A, and the second is a morphism p2 from X to B.

Fig 3. Components of the natural transformation from the constant functor Δ to F

But that’s exactly one of the triples (X, p1, p2) that I used in the definition of a product in my previous post. The product was defined as the universal triple with a unique mapping from any other triple to it.

We no longer have to talk about selecting objects and constructing triples. Instead we can talk about natural transformations between the constant functor and a functor from the category {1, 2} to C.

This might not sound like a great simplification, but it’s an essential step toward the next generalization. We are going to replace the simple category {1, 2} with an arbitrary category J (sometimes called an index category). Again, we have the constant functor ΔX from J to C, which maps all objects in J to a single object X. It also maps all morphisms in J to a single identity morphism iX. We also have the functor F that embeds J in C, but it will now map morphisms as well as objects. It’s helpful to think of J as a graph with vertices and arrows. F embeds this graph into C. As before, a natural transformation from ΔX to F has as its components morphisms going from X to the vertices of the diagram defined by F. Because of its conical shape, this natural transformation is called a cone.

Fig 5. A cone

But a natural transformation doesn’t just relate the images of objects; it also has to be compatible with the images of morphisms. In fact, “naturality” is defined in terms of morphisms.

Fig 6 illustrates the general case. We have two functors F and G. These functors map objects X and Y, respectively, to FX, FY, and GX, GY.

A morphism f from X to Y is mapped by F to Ff and by G to Gf.

On top of that, we have the natural transformation Φ from F to G, whose components are ΦX and ΦY. For Φ to be natural these four arrows must form a commutative diagram. In other words:

Gf . Φ_X = Φ_Y . Ff
Fig 6. Naturality square
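
In Haskell terms, for Functors f and g and a polymorphic phi, the two paths around the square can be written as follows; for a natural transformation they must agree (and parametricity in fact guarantees it for free):

{-# LANGUAGE RankNTypes #-}

-- Both paths around the naturality square for a morphism h :: x -> y.
-- For a natural transformation phi the two components are always equal.
squarePaths :: (Functor f, Functor g)
            => (forall c. f c -> g c)    -- phi
            -> (x -> y)                  -- h
            -> f x
            -> (g y, g y)
squarePaths phi h fx = (fmap h (phi fx), phi (fmap h fx))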

Replace F with ΔX and you’ll see that the naturality condition for our cone translates into the commutativity of all the triangles with the vertex X. We didn’t see this condition in the product example, because there the base of the cone had no arrows. So let’s find a better example.

Equalizer

A lot of practical math problems boil down to solving an equation or two. An equation usually has the form: A function of some variable is equal to zero. The set of solutions is called a kernel. But this definition assumes that there is a zero, and that’s too specific. A more general formulation talks about two functions being equal. A set of values on which two functions are equal is called the equalizer. So the equalizer is the generalization of the kernel of the difference of two functions. It will work even if we don’t know what zero is or how to subtract values.

Let’s dissect this definition further. We have two functions, f and g, from A to B. The equalizer is the subset of A on which f and g are equal. How can we generalize this definition so that we don’t have to think about elements of A and B? We have two problems: how to define a subset and how to define the equality of elements.

Let’s start with the subset. We can look at a subset as an embedding of one set inside another. Every function, in a manner of speaking, defines an embedding of its domain into its codomain. For instance, a function from real numbers to 3-d points can be viewed as an embedding of a line in space.

Let’s see how far we can get with this idea for the purpose of defining the equalizer of f and g. We need another set X and a function p from X to A. The image of X in A will serve as our definition of a subset of A. Now let’s apply the function f after p. We get a composite function f . p from X to B. Similarly, we can apply g after p to get g . p. In general, these two composite functions will be different, except when the image of p falls inside the equalizer of f and g. Notice that we can talk about equality of functions without resorting to equality of elements if we look at them as morphisms in a category.

You see where this is heading? The set X with a function p, while not exactly defining the equalizer, provides some sort of a candidate for it. This candidate can be fully described in categorical terms as an object and a morphism, without recourse to sets or their elements. And the equality

f . p = g . p

is just the condition for the following diagram to commute:

Fig 7. The equalizer cone

But this is a cone for a two-object-two-arrow category! The objects A and B and the morphisms f and g can be seen as the image of this category under some functor F. Likewise, the object X is the image of this category under the constant functor ΔX. The two orange arrows are the components of the natural transformation from ΔX to F, and the commutativity of this diagram is nothing but its naturality condition.

Of course, there may be many pairs (X, p) that satisfy our conditions. We are looking for the universal one, with the property that any other pair (Y, q) can be mapped into it.

Fig 8. X is the equalizer if its cone is universal

Just like in the product case, we have to demand that q factorizes through h, or that the appropriate triangles commute — in particular q = p . h. This universal cone is the equalizer.

The Moment of Zen

You might ask yourself the question: Why should I care whether some diagrams commute or not? In particular, why are natural transformations better than the “unnatural” ones? Why is it important that naturality diagrams commute? There must be something incredibly deep behind this idea of commutativity to make it pop up in so many diverse branches of mathematics unified by category theory.

Indeed, the very definition of category contains the mother of all commuting diagrams: The condition that, if there is a morphism from A to B and another from B to C, then there is a shortcut morphism from A to C, and it doesn’t matter which way you go. This is the essence of composability: you decompose the path from A to C into its components.

Every programmer is familiar with the idea of functional decomposition, but this pattern goes much deeper than just programming. It’s the essence of our knowledge, of our understanding. We don’t understand anything unless we are able to split it into smaller steps and then put these steps back together — compose them. Without composition we can’t deal with complexity.

The other foundation of understanding is the ability to create models. We create simple models of complex phenomena all the time. By understanding how models work, we gain insight into how complex phenomena work. But for that we need to establish the correspondence, the mapping, between the domain of the model and the domain of the phenomenon that we’re studying. And what’s the most important property of that mapping? Well, our understanding of the model is built on our understanding of its parts and of the ways they compose. So the mapping must preserve composability! Mappings that preserve composability are called functors.

We’ve used some very simple categories as our models. The {1, 2} category modeled the selection of two objects. We used a functor to embed this model into a bigger category C. In that particular case, there wasn’t much structure for the functor to preserve. The model for the equalizer, on the other hand, was a bit more involved. Besides the two objects, it also had two parallel arrows. The functor that embedded this model had to map those arrows as well.

If functors are used for modeling, what are natural transformations? They relate different phenomena described by the same model.

A stick figure is a model for a human being. It can be mapped into Alice, or it can be mapped into Bob. A natural transformation would map Alice into Bob in such a way as to preserve the stick-figure structure. So if the circle is mapped into Alice’s head by one functor, and into Bob’s head by another, a natural transformation would map Alice’s head into Bob’s head.

The model tells us that there is a neck between the head and the torso. So we could use Alice’s neck to go from her head to her torso, and then map her torso to Bob’s torso using the natural transformation. But we could also map Alice’s head to Bob’s head using the natural transformation, and then use Bob’s neck to get to his torso. Naturality tells us that it doesn’t matter which way we go, the result is the same.

We also have the constant functor, which maps the model into a single blob. A natural transformation then maps this blob into Bob; again, with the condition that it doesn’t matter whether we reach Bob’s torso from the blob through his right hand and arm or through his left hand and arm.

Now take into account that, although the stick figure is just made of points and lines, it is mapped into real human beings and real blobs. So a morphism from a blob to Bob’s hand is not trivial, and there may be many different ones. Similarly, the arm that connects the hand to the torso contains veins, arteries, nerves, etc. So naturality is a non-trivial condition, especially if the blob itself is complex.

So what is special about the universal blob? It has to have some interesting internal structure because it not only maps into every part of Bob, but it maps better than any other blob. Any other blob that maps naturally into Bob also maps into the universal blob. And it maps in such a way that it doesn’t matter if you go directly from the said blob to Bob, or if you go through the universal blob. The universal blob contains the stick-figure essence of Bob, and no other blob (that is not isomorphic to it) can take its place.

Universality tells us something about the structure of an object not by dissecting it, but by describing how it relates to all the other objects that model the same relationship.

Limits

Having defined a cone as a natural transformation from ΔX to F — two functors that map the category J to category C — we can now define the limit of F as the universal cone. For that, we need a way to compare cones that differ only by their top vertex. If there is a universal cone with the top vertex U, then any other cone with the top vertex V can be uniquely mapped onto it. It must be a mapping that preserves the structure of the cone, so if h maps V into U, h must factorize all the arrows that form the V cone into the arrows that form the U cone. For instance, in Fig 9, we must have q = p . h, etc.

Fig 9. The limit U is the vertex of the universal cone

But there’s a better way of looking at it. Notice that, in the definition of a limit, we establish a one-to-one correspondence between a cone with the vertex V and a morphism h from V to U. The definition talks about there being a unique h for every cone, but the other way around works as well. Given an h we can construct the cone at V by composing the morphisms — as in q = p . h.

A cone is a natural transformation, a member of Nat(ΔV, F), the set of natural transformations between two functors, ΔV and F. A morphism h is a member of the hom set Hom(V, U), the set of morphisms between two objects. So we can say that if U is a limit of F then there is an isomorphism between those two sets:

Nat(ΔV, F) ~ Hom(V, U)

In fact, if we insist that this isomorphism be natural, we’ll get all the commuting triangles for free. Remember? The naturality condition is just the commutativity of certain diagrams. So if there is a natural isomorphism between these two sets, then U is a limit of F. In fact, we can use this isomorphism as the definition of the limit.
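
For the product, which is the limit of the two-object diagram, this correspondence is easy to spell out in Haskell (a hypothetical sketch; a cone with vertex v is just a pair of functions out of v):

-- A cone over the discrete {1, 2} diagram picking a and b,
-- with vertex v: two "legs" out of v.
type Cone v a b = (v -> a, v -> b)

-- Cones correspond one-to-one to morphisms into the product (a, b):
coneToMorphism :: Cone v a b -> v -> (a, b)
coneToMorphism (p, q) v = (p v, q v)

morphismToCone :: (v -> (a, b)) -> Cone v a b
morphismToCone h = (fst . h, snd . h)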

But what does it mean that the isomorphism is natural? What are the two functors that are mapped by it? One functor maps V into Hom(V, U), and the other maps V into Nat(ΔV, F). Both Hom(V, U) and Nat(ΔV, F) are sets in the category of sets. So these are functors from C to Set. You might recognize the first one from the Yoneda lemma.

The second one is a bit more tricky. To understand it, you have to realize that functors themselves form a category. If functors are the objects in the functor category, then natural transformations between these functors are the morphisms. Natural transformations compose (and their composition is associative), and there is always an identity natural transformation for each functor. So this is a legitimate category. A hom set in that category is a set of all natural transformations between two functors. Nat(ΔV, F) is one such hom set.

Whenever you see a natural isomorphism of hom sets, chances are there is an adjunction between two functors. A functor F is said to be left adjoint to the functor G (or G right adjoint to F) if the following two hom sets are naturally isomorphic:

Hom(FX, Y) ~ Hom(X, GY)

If you compare this with our definition of a limit, you’ll see that the functor that maps F to its limit U is right adjoint to the const functor ΔV. Of course this is only true if the said functor exists, which is not always true — not all diagrams of shape J must have limits in C.

The adjunction between ΔV and Lim F (limit of F) is the natural isomorphism between the corresponding hom sets (arrows).

I hope to talk more about adjoint functors in the future.

Videography

Hopefully this blog post will prepare you to watch this excellent series of videos by Catsters on YouTube:

  1. Cones and limits: Definitions.
  2. Examples of limits: Terminal object, product, pullback, equalizer.
  3. Cones as natural transformations.
  4. Formal definition of limit as natural isomorphism.
  5. Limits and adjunctions.
  6. Colimits.