Programming



I want you to perform a little experiment. Take an egg, put it in a blender, and run it for ten seconds.

Oh, I forgot to tell you to first remove the eggshell. No problem, let’s run the blender in the opposite direction for ten seconds, and we’ll get the egg back.

It doesn’t work, does it? The reason is entropy. The second law of thermodynamics states that the entropy of an isolated system can never decrease. Blending an egg increased its entropy. Unblending it would decrease entropy. But there is a workaround: feed the blended egg to a chicken, and you will get a new egg. Granted, you might have to feed it more than one egg, but still: the miracle of life! Life seems to go against the general trend of the second law of thermodynamics.

Of course, life cannot flourish in a completely isolated system, so the laws of physics are safe. A chicken can produce an egg only by increasing the entropy of its environment and, indirectly, that of the Sun.

Entropy and the Universe

We have some kind of intuitive understanding of entropy as the degree of disorderliness. An egg is highly “ordered,” in that it has an ovoid shell, the white, the yolk, and, most importantly, the genetic blueprint for a chicken. It is extremely unlikely that an egg would randomly assemble itself from the primordial soup. And yet, in a way, it did. It took about fourteen billion years, starting from the Big Bang, but it finally arrived at a supermarket near you.

Since entropy has always been copiously produced in the Universe, we are forced to deduce that the initial entropy of the Universe was much lower than it is today. The Universe has been running up the entropy bill at a tremendous pace ever since the Big Bang.

With our simplistic understanding of entropy as the opposite of order, it might be difficult to imagine what it meant for the primordial Universe to be low entropy. Were elementary particles nicely stacked according to their quantum Dewey decimal codes on separate shelves like books in a library? It turns out that, in the presence of gravity, the lowest entropy state is when matter is uniformly distributed throughout the Universe. This might be a little counter-intuitive, considering how blending an egg led to the increase of entropy. But uniform distribution of gravitational mass is a very precarious state. It’s like a needle balanced on its point. At the slightest disturbance, the parts of the volume with infinitesimally higher density will start collapsing on themselves due to gravity. The collapse will be slow in the beginning, but as it keeps increasing local density, it will attract more and more matter resulting in a positive feedback loop.

This is exactly what happened after the Big Bang (as far as we know). Low-entropy uniform soup started slowly curdling to form galaxies and stars. The more non-uniform the distribution of gravitating matter, the higher the entropy.

The ultimate fate of collapsing matter is a gravitational black hole, with all matter concentrated in a singular point. Black holes have extremely high entropy, so much so that it is believed that the current entropy of the Universe is dominated by gigantic black holes in the centers of galaxies.

So why hasn’t the whole Universe collapsed into one gigantic black hole? It’s because the breakneck race toward higher entropy has run against several obstacles. One of them works like a governor in a steam engine. Tiny fluctuations in mass density during the Big Bang were accompanied by tiny fluctuations in velocities of particles. These fluctuations resulted in random distribution of angular momentum. As a result, each collapsing region of the Universe ends up with some randomly assigned net angular momentum. In other words, it spins. And when matter is sucked up towards the center, it starts spinning faster and faster. That’s why every galaxy is spinning. The resulting centrifugal force keeps matter from falling all the way to the center and disappearing into a black hole.

The other obstacle towards reaching maximum entropy is the fact that clumps of matter of a certain size turn into stars. When lots of atoms of hydrogen are squished together, they can reach a higher entropy state by fusing into helium. But this process produces excess photons, which keep pushing matter away, thus preventing total collapse. Eventually, the hydrogen burns out, the star undergoes a series of transitions and, depending on its mass, ends up as a supernova, or turns into a brown or white dwarf. What’s left after a supernova explosion can be a neutron star or a black hole.

In a neutron star, further collapse is stalled by another property of matter: Fermi statistics. Neutrons are fermions, and no two fermions may occupy the same quantum state. In particular, you can’t squeeze them all into a very small volume — they repel each other.

Are neutron stars and black holes the end products of the evolution of the Universe? Probably not. There is a strong suspicion that neutrons will eventually decay into leptons — mostly neutrinos, electrons, and positrons. Black holes will evaporate through Hawking radiation. The Universe will eventually reach its thermal death: an ever expanding volume filled with photons and leptons.

What’s Life Got to Do with It?

So far we’ve seen that matter has properties that tend to slow down the ratchet of entropy. Our Sun, for instance, could increase its entropy tremendously by turning all its hydrogen into helium in one fell swoop while collapsing to form a black hole. It can’t do that because of the heat and radiation pressure generated in the process. And even if all the heat were siphoned out, the leftover neutrons would congeal into a solid neutron star, preventing further collapse.

So the Sun is doing its best, under the circumstances, trying to dissipate the excess energy. It does it mostly by radiating high-energy photons. These are the photons of visible and ultraviolet light that warm up the Earth. The Earth, in turn, re-radiates this heat in the form of low-energy infrared photons.

It turns out that turning high-energy photons into low-energy photons increases overall entropy. So, in its small way, the Earth speeds up the rise of entropy. In fact, it does it better than, for instance, Mercury, because the Earth has an atmosphere and oceans, which are good heat sinks, and because it spins on its axis, transporting the accumulated heat from the sunny side to the shaded side, where it’s radiated into space in the form of infrared photons.

But Earth has another secret weapon that speeds up the advent of the heat death of the Universe: life. To begin with, living organisms consume energy during the day. They also need energy to survive at night, so they came up with clever ways to store energy in chemical compounds. They can then cash in their savings at night, all the while radiating heat. At higher latitudes, they also store energy during summer and expend it during winter.

A steppe is better at entropy production than barren land; a forest or a jungle is still better. But human civilization is the best. Our cars, factories, cities, air conditioners, all produce entropy at a much faster pace than bare nature. We’re good at it!

The Self-Organizing Principle

The advent of life on Earth is often attributed to something called the self-organizing principle. It’s just a name for what happens in systems that are away from thermodynamic equilibrium. In those systems it is often possible to speed up the increase of entropy by organizing things a little better.

The simplest example of this is when you heat a layer of liquid in a pan. The liquid can transport energy by thermal conduction, which leads to an overall rise in entropy. But there is a faster way: the heated liquid at the bottom of the pan is lighter than the cooler liquid at the top, so it can float to the top. The heavier liquid at the top can then sink to the bottom. This is called convection, and it’s faster than conduction. The only problem is that the two streams of liquid have to negotiate the flow, because they can’t both pass through the same point simultaneously. In fact, in the ideal case, they would be deadlocked. What happens in reality is an amazing feat of self-organization: regularly spaced hexagonal convection cells called Bénard cells emerge in the heated liquid.


A honeycomb pattern of Bénard cells suggests that order may be spontaneously generated in situations when it can speed up the production of entropy. If you have a rich enough environment and wait long enough, more and more complex patterns that ease the production of entropy may emerge — such as life itself.

But life doesn’t emerge everywhere. As far as we know there’s no life on the Moon and no (visible) life on Mars. What’s different about Earth is that it is, and has always been, very turbulent. For starters, we have water that is constantly changing state. It’s boiling in hydrothermal vents, liquid in the oceans, solid in the ice caps; it’s precipitating from the atmosphere and evaporating from pools. It dissolves lots of chemical compounds and makes colloids with others. Continental plates keep shifting, resulting in constant volcanic activity. New minerals are brought up from the depths and exposed to erosion. We also have a large Moon that’s causing regular tides, and the Earth’s axis of rotation is tilted, resulting in changing seasons. On top of that, we have occasional comets causing impact winters. We can’t complain about lack of entertainment on Earth.

Here’s what I think: Life can only emerge and thrive on the edge. Our planet has been on the edge for a few billion years. Conditions on Earth have always been barely short of wiping life out and, paradoxically, this is exactly what makes life possible. The Earth is living proof that what doesn’t kill you makes you stronger. There have been uncountable attempts on life on Earth, and they all resulted in accelerating the evolution towards higher life forms. I know that it might be controversial to call one form of life higher than another, but there is an objective measure that we can use, and that’s the efficiency of turning energy into entropy. In this respect, humans are indeed the highest form of life. We were able to tap into sources of energy that had been forgotten by nature for hundreds of millions of years in the form of coal, oil, and gas. We use all this to speed up the increase of entropy.

Why Are We Alone?

You might be familiar with the Fermi paradox. In essence, the question is: if life is inevitable, why haven’t we seen it all over the Universe? And judging by how quickly life emerged on Earth (essentially as soon as the water condensed into oceans), life seems to be inevitable, at least on Earth-like planets, which are very common in the Universe. And life, civilized life in particular, being so good at producing vast amounts of entropy, should eventually make itself conspicuous on the cosmic scale.

On the other hand, we don’t know how many planets are “on the edge,” and how narrow the edge is. It’s possible that entering the life-creating period is a relatively common occurrence for an Earth-like planet, perhaps right after its water gathers into oceans. Finding remnants of life on Mars would give support to this idea. But the Earth has been walking this narrow path between stagnation and destruction for more than four billion years. There have been long periods of stagnation: there was the snowball Earth, when the oceans froze over, and the “boring billion,” when the air was filled with the smell of rotten eggs. There have been major extinction events, like the asteroid impact that wiped out the dinosaurs.

Being on the edge means that you are likely to fall off. You either die of boredom (that’s what might have happened on Mars), or you get wiped out by a cataclysm (if the Chicxulub asteroid had been a tad larger, the Earth could have been sterilized). It might be extremely unlikely to stay for a few billion years on the narrow path that leads from Bénard cells to a space-faring civilization. We might actually be the first to reach this level in our cosmic neighborhood. Life on Earth could be more like a professional Russian-roulette player than a nine-to-five worker.

There is also something we don’t quite get about cosmic timescales. For the last few hundred years the powers of humanity have been growing exponentially. From the cosmic perspective, humanity looks like a sudden bloom that took over a stagnant pool on the outskirts of the Galaxy. We foolishly imagine that we can sustain this level of progress, colonize the Solar system in short order, and reach for the stars. But one thing we know for sure about exponential growth is that it’s not sustainable in the long run. We are not only going to bump our heads against unbreakable laws of physics, but we’ll also have to deal with the limitations of the human mind. And all other civilizations that might be out there will have to deal with the same problems. This might explain why we are not seeing them.

In fact, we could reverse this reasoning and argue that the fact that we don’t detect any signs of alien civilizations suggests that the obstacles that we see in front of us are not easily overcome. In particular:

  • The speed of light limits our ability to travel and exchange information at large distances. This is one of the hardest limits, because special relativity is the foundation of all physics.
  • The coupling of gravity to other forces is extremely weak, so the prospects of controlling gravity and counter-balancing acceleration are virtually non-existent. This means that there is no easy way to shrink the enormous distances between stars — no warp drive.
  • The size of the atom and the speed of light limit our ability to store and process information. This prevents us from extending the capabilities of our brains to discover and explore the laws of the Universe.

These three limits can also be related to three fundamental theories: special relativity, general relativity, and quantum mechanics, respectively.

So what does the future have in store for humanity? It looks like we are reaching the end of exponential expansion. There hasn’t been a major breakthrough in fundamental physics for almost half a century, we are seeing the tail end of Moore’s law, and the population of Earth is finally stabilizing. If we don’t wipe ourselves off the face of the Earth, we might be facing a boring millennium, if not a boring million years. And it’s entirely possible that we are surrounded by other civilizations that have already entered their boring periods. If they eventually graduate to the next stage, they will be ready to help the Universe increase its entropy on a vastly larger scale. Hopefully humanity will still be around to see the Galaxy blooming with sentient activity.


Abstract: I derive a free monoidal (applicative) functor as an initial algebra of a higher-order functor using Day convolution.

I thought I was done with monoids for a while, after writing my Monoids on Steroids post, but I keep bumping into them. This time I read a paper by Capriotti and Kaposi about Free Applicative Functors and it got me thinking about the relationship between applicative and monoidal functors. In a monoidal closed category, the two are equivalent, but the monoidal structure seems to be more fundamental. It’s possible to have a monoidal category that is not closed. Not to mention that monoidal structures look cleaner and more symmetrical than closed structures.

One particular statement in the paper caught my attention: the authors said that the free applicative functors are initial algebras for a very simple higher-order functor:

A G = Id + F \star G

which, in their own words, “makes precise the intuition that free applicative functors are in some sense lists (i.e. free monoids).” In this post I will decode this statement and then expand it by showing how to implement a free monoidal functor as a higher order fixed point using some properties of Day convolution.

Let me start with a refresher on lists. To define a list all we need is a monoidal category. If we pretend that Hask is a category, then our product is the pair type (a, b), which is associative up to isomorphism, with () as its unit, also up to isomorphism. We can then define a functor that generates lists:

A_a b = () + (a, b)

Notice the similarity with the Capriotti-Kaposi formula. You might recognize this functor in its Haskell form:

data ListF a b = Nil | Cons a b

The fixed point of this functor is the familiar list, a.k.a., the free monoid. So that’s the general idea.
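To make the connection concrete, here is a quick sketch of my own (not part of the original construction) of the isomorphism that identifies the fixed point: substituting the list type itself for b in ListF a b gives a type we can convert back and forth without losing information:

-- Witnesses of the isomorphism ListF a [a] ~ [a]
toListIso :: ListF a [a] -> [a]
toListIso Nil = []
toListIso (Cons a as) = a : as

fromListIso :: [a] -> ListF a [a]
fromListIso [] = Nil
fromListIso (a : as) = Cons a as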

Day Convolution

There are lots of interesting monoidal structures. Famously (because of the monad quip), the category of endofunctors is monoidal, with functor composition as product and the identity functor as unit. What is less known is that functors from a monoidal category \mathscr{C} to \mathscr{S}et also form a monoidal category. A product of two such functors is defined through Day convolution. I have talked about Day convolution in the context of applicative functors, but here I’d like to give you some more intuition.

What did it for me was Alexander Campbell’s comment on Stack Exchange. You know how, in a vector space, you can have a set of basis vectors, say \vec{e}_i, and represent any vector as a linear combination:

\vec{v} = \sum \limits_{i = 1}^n v_i \vec{e}_i

It turns out that, under some conditions, there is a basis in the category of functors from \mathscr{C} to \mathscr{S}et. The basis is formed by representable functors of the form \mathscr{C}(x, -), and the decomposition of a functor F is given by the co-Yoneda lemma:

F a = \int^x F x \times \mathscr{C}(x, a)

The coend in this formula roughly corresponds to (possibly infinite) categorical sum (coproduct). The product under the integral sign is the cartesian product in \mathscr{S}et.

In pseudo-Haskell, we would write this formula as:

f a ~ exists x . (f x, x -> a)

because a coend corresponds to an existential type. The intuition is that, because the type x is hidden, the only thing the user can do with this data type is to fmap the function over the value f x, and that is equivalent to the value f a.

The actual Haskell encoding is given by the following GADT:

data Coyoneda f a where
  Coyoneda :: f x -> (x -> a) -> Coyoneda f a

Now suppose that we want to define “multiplication” of two functors that are represented using the coend formula.

(F \star G) a \cong \int^x F x \times \mathscr{C}(x, a) \star \int^y G y \times \mathscr{C}(y, a)

Let’s assume that our multiplication interacts nicely with coends. We get:

\int^{x y} F x \times G y \times (\mathscr{C}(x, a) \star \mathscr{C}(y, a))

All that remains is to define the multiplication of our “basis vectors,” the hom-functors. Since we have a tensor product \otimes in \mathscr{C}, the obvious choice is:

\mathscr{C}(x, -) \star \mathscr{C}(y, -) \cong \mathscr{C}(x \otimes y, -)

This gives us the formula for Day convolution:

(F \star G) a = \int^{x y} F x \times G y \times \mathscr{C}(x \otimes y, a)

We can translate it directly to Haskell:

data Day f g a = forall x y. Day (f x) (g y) ((x, y) -> a)

The actual Haskell library implementation uses GADTs, and also curries the product, but here I’m opting for encoding an existential type using forall in front of the constructor.

For those who like the analogy between functors and containers, Day convolution may be understood as containing a box of xs and a bag of ys, plus a binary combinator for turning every possible pair (x, y) into an a.
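To make this intuition tangible, here’s a toy value (the names and types are made up for illustration) that combines a Maybe “box” of Int with a list “bag” of Char into a functor of strings:

-- A box containing 3, a bag of characters, and a combinator
-- that turns every pair (n, c) into a string of n copies of c.
example :: Day Maybe [] String
example = Day (Just 3) "abc" (\(n, c) -> replicate n c)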

Day convolution lifts the monoidal structure of \mathscr{C} (the tensor product \otimes) to the functor category [\mathscr{C}, \mathscr{S}et]. In particular, it lifts the unit object 1 to the hom-functor \mathscr{C}(1, -). We have, for instance:

(F \star \mathscr{C}(1, -)) a = \int^{x y} F x \times \mathscr{C}(1, y) \times \mathscr{C}(x \otimes y, a)

which, by co-Yoneda, is isomorphic to:

\int^{x} F x \times \mathscr{C}(x \otimes 1, a)

Considering that x \otimes 1 is isomorphic to x, we can apply co-Yoneda again, to indeed get F a.

In Haskell, the unit object with respect to product is the unit type (), and the hom-functor \mathscr{C}(1, -) is isomorphic to the identity functor Id (a set of functions from unit to x is isomorphic to x).
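The isomorphism is easy to write down explicitly; here is a minimal sketch:

-- From the hom-functor C((), -) to the identity functor...
toId :: (() -> x) -> x
toId f = f ()

-- ...and back.
fromId :: x -> (() -> x)
fromId = const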

We now have all the tools to understand the formula:

A G = Id + F \star G

or, more generally:

A_F G = \mathscr{C}(1, -) + F \star G

It’s a sum (coproduct) of the unit under Day convolution and the Day convolution of two functors F and G.

Lax Monoidal Functors

Whenever we have a functor between two monoidal categories \mathscr{C} and \mathscr{D}, it makes sense to ask how this functor interacts with the monoidal structure. For instance, does it map the unit object in one category to the unit object in another? Does it map the result of a tensor product to a tensor product of mapped arguments?

A lax monoidal functor doesn’t do that, but it does the next best thing. It doesn’t map unit to unit, but it provides a morphism that connects the unit in the target category to the image of the unit of the source category.

\epsilon : 1_\mathscr{D} \to F 1_\mathscr{C}

It also provides a family of morphisms from the product of images to the image of a product:

\mu_{a b} : F a \otimes_\mathscr{D} F b \to F (a \otimes_\mathscr{C} b)

which is natural in both arguments. These morphisms must satisfy additional conditions related to associativity and unitality of respective tensor products. If \epsilon and \mu are isomorphisms then we call F a strong monoidal functor.

If you look at the way Day convolution was defined, you can see now that we insisted that the hom-functor be strong monoidal:

\mathscr{C}(x, -) \star \mathscr{C}(y, -) \cong \mathscr{C}(x \otimes y, -)

Since hom-functors define the Yoneda embedding \mathscr{C} \to [\mathscr{C}, \mathscr{S}et], we can say that Day convolution makes Yoneda embedding strong monoidal.

The translation of the definition of a lax monoidal functor to Haskell is straightforward:

class Monoidal f where
  unit  :: f ()
  (>*<) :: f x -> f y -> f (x, y)

Because Hask is a closed monoidal category, Monoidal is equivalent to Applicative (see my post on Applicative Functors).
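Here’s a sketch of one direction of that equivalence (I’m adding a Functor constraint, which in Hask comes for free): pure and <*> recovered from Monoidal:

pureM :: (Functor f, Monoidal f) => a -> f a
pureM a = fmap (const a) unit

apM :: (Functor f, Monoidal f) => f (a -> b) -> f a -> f b
apM ff fa = fmap (\(g, x) -> g x) (ff >*< fa)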

Fixed Points

Recursive data structures can be formally defined as fixed points of functors. Lists, in particular, are fixed points of functors of the form:

F_a b = 1 + a \otimes b

defined in a monoidal category (\otimes and 1) with coproducts (the plus sign).

In general, a fixed point is defined as a point that is fixed under some particular mapping. For instance, a fixed point of a function f(x) is some value x_0 such that:

f(x_0) = x_0

Obviously, the function’s codomain has to be the same as its domain, for this equation to make sense.

By analogy, we can define a fixed point of an endofunctor F as an object that is isomorphic to its image under F:

F x \cong x

The isomorphism here is just a pair of morphisms, one the inverse of the other. One of these morphisms can be seen as part of the F-algebra (x, f) whose carrier is x and whose action is:

f : F x \to x

Lambek’s lemma states that the action of the initial (or terminal) F-algebra is an isomorphism. This explains why a fixed point of a functor is often referred to as an initial (terminal) algebra.

In Haskell, a fixed point of a functor f is called Fix f. It is defined by the property that f acting on Fix f must be isomorphic to Fix f:

Fix f \cong f (Fix f)

which can be expressed as:

newtype Fix f = In { out :: f (Fix f) }

Notice that the pair (Fix f, In) is the (initial) algebra for the functor f, with the carrier Fix f and the action In; and that out is the inverse of In, as prescribed by Lambek’s lemma.

Here’s another useful intuition about fixed points: they can often be calculated iteratively as a limit of a sequence. For functions, if the following sequence converges to x:

x_{n+1} = f (x_n)

then f(x) = x (at least for continuous functions).
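For a quick numeric illustration (a throwaway helper, not part of the main construction), iterating the cosine function converges to its fixed point:

fixIter :: (Double -> Double) -> Double -> Double
fixIter f x =
  let x' = f x
  in if abs (x' - x) < 1e-12 then x' else fixIter f x'

-- fixIter cos 1.0 converges to the Dottie number, about 0.7390851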

We can apply the same idea to our list functor, iteratively replacing b with the definition of F_a b = 1 + a \otimes b:

1 + a \otimes b
1 + a \otimes (1 + a \otimes b)
1 + a + a \otimes a \otimes (1 + a \otimes b)
1 + a + a \otimes a + a \otimes a \otimes a + ...

where we assumed associativity and unit laws (up to isomorphism). This formal expansion is in agreement with our intuition that a list is either empty, contains one element, a product of two elements, a product of three elements, and so on…

Higher Order Functors

Category theory has achieved something we can only dream of in programming languages: reusability of concepts. For instance, functors between two categories C and D form a category, with natural transformations as morphisms. Therefore everything we said about fixed points and algebras can be immediately applied to the functor category. In Haskell, however, we have to start almost from scratch. For instance, a higher order functor, which takes a functor as argument and returns another functor, has to be defined as:

class HFunctor ff where
  ffmap :: Functor g => (a -> b) -> ff g a -> ff g b
  hfmap :: (g :~> h) -> (ff g :~> ff h)

The squiggly arrows are natural transformations:

infixr 0 :~>
type f :~> g = forall a. f a -> g a

Notice that the definition of HFunctor not only requires a higher-order version of fmap called hfmap, which lifts natural transformations, but also the lower-order ffmap that attests to the fact that the result of HFunctor is again a functor. (Quantified class constraints will soon make this redundant.)
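For the curious, here is roughly what the quantified-constraints version might look like (a sketch assuming GHC 8.6 or later; it is not used in the rest of this post):

{-# LANGUAGE QuantifiedConstraints, RankNTypes #-}

-- The superclass constraint replaces ffmap.
class (forall g. Functor g => Functor (ff g)) => HFunctor' ff where
  hfmap' :: (g :~> h) -> (ff g :~> ff h)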

The definition of a fixed point also has to be painstakingly rewritten:

newtype FixH ff a = InH { outH :: ff (FixH ff) a }

Functoriality of a higher order fixed point is easily established:

instance HFunctor f => Functor (FixH f) where
    fmap h (InH x) = InH (ffmap h x)

Finally, Day convolution is a higher order functor:

instance HFunctor (Day f) where
  hfmap nat (Day fx gy xyt) = Day fx (nat gy) xyt
  ffmap h   (Day fx gy xyt) = Day fx gy (h . xyt)

Free Monoidal Functor

With all the preliminaries out of the way, we are now ready to derive the main result.

We start with the higher-order functor whose initial algebra defines the free monoidal functor:

A_F G = Id + F \star G

We can translate it to Haskell as:

data FreeF f g t =
      DoneF t
    | MoreF (Day f g t)

It is a higher order functor, in that it takes a functor g and produces a new functor FreeF f g:

instance HFunctor (FreeF f) where
    hfmap _ (DoneF x) = DoneF x
    hfmap nat (MoreF day) = MoreF (hfmap nat day)
    ffmap f (DoneF x) = DoneF (f x)
    ffmap f (MoreF day) = MoreF (ffmap f day)

The claim is that, for any functor f, the (higher order) fixed point of FreeF f:

type FreeMon f = FixH (FreeF f)

is monoidal.

The usual approach to solving such a problem is to express FreeMon as a recursive data structure:

data FreeMonR f t =
      Done t
    | More (Day f (FreeMonR f) t)

and proceed from there. This is fine, but it doesn’t give us any insight about what property of the original higher-order functor makes its fixed point monoidal. So instead, I will concentrate on properties of Day convolution.

To begin with, let’s establish the functoriality of FreeMon using the fact that Day convolution is a functor:

instance Functor f => Functor (FreeMon f) where
  fmap h (InH (DoneF s)) = InH (DoneF (h s))
  fmap h (InH (MoreF day)) = InH (MoreF (ffmap h day))

The next step is based on the list analogy. The free monoidal functor is analogous to a list in which the product is replaced by Day convolution. The proof that it’s monoidal amounts to showing that one can “concatenate” two such lists. Concatenation is a recursive process in which we detach an element from one list and attach it to the other.

When building recursion, the functor g in Day convolution will play the role of the tail of the list. We’ll prove monoidality of the fixed point inductively by assuming that the tail is already monoidal. Here’s the crucial step expressed in terms of Day convolution:

cons :: Monoidal g => Day f g s -> g t -> Day f g (s, t)
cons (Day fx gy xys) gt = Day fx (gy >*< gt) (bimap xys id . reassoc)

We took advantage of associativity:

reassoc :: (a, (b, c)) -> ((a, b), c)
reassoc (a, (b, c)) = ((a, b), c)

and functoriality (bimap) of the underlying product.

The intuition here is that we have a Day product of the head of the list, which is a box of xs; and the tail, which is a container of ys. We are appending to it another container of ts. We do it by concatenating the two containers (gy >*< gt) into one container of pairs (y \otimes t). The new combinator reassociates the nested pairs (x \otimes (y \otimes t)) and applies the old combinator to (x \otimes y).

The final step is to show that FreeMon defined through Day convolution is indeed monoidal. Here’s the proof:

instance Functor f => Monoidal (FreeMon f) where
  unit = InH (DoneF ())
  (InH (DoneF s)) >*< frt = fmap (s, ) frt
  (InH (MoreF day)) >*< frt = InH (MoreF (day `cons` frt))

A lax monoidal functor must also preserve associativity and unit laws. Unlike the corresponding laws for applicative functors, these are pretty straightforward to formulate.

The unit laws are:

fmap lunit (unit >*< frx) = frx
fmap runit (frx >*< unit) = frx

where I used the left and right unitors:

lunit :: ((), a) -> a
lunit ((), a) = a
runit :: (a, ()) -> a
runit (a, ()) = a

The associativity law is:

fmap assoc ((frx >*< fry) >*< frz) = (frx >*< (fry >*< frz))

where I used the associator:

assoc :: ((a,b),c) -> (a,(b,c))
assoc ((a,b),c) = (a,(b,c))

Except for the left unit law, I wasn’t able to find simple derivations of these laws.

Categorical Picture

Translating this construction to category theory, we start with a monoidal category (\mathscr{C}, \otimes, 1, \alpha, \rho, \lambda), where \alpha is the associator, and \rho and \lambda are the right and left unitors, respectively. We will be constructing a lax monoidal functor from \mathscr{C} to \mathscr{S}et, the latter equipped with the usual cartesian product and coproduct.

I will sketch some of the constructions without going into too much detail.

The analogue of cons is a family of natural transformations:

\beta_{s t} : \int^{x y} f x \times g y \times \mathscr{C}(x \otimes y, s) \times g t \to \int^{u v} f u \times g v \times \mathscr{C}(u \otimes v, s \otimes t)

We will assume that g is lax monoidal, so the left hand side can be mapped to:

\int^{x y} f x \times g (y \otimes t) \times \mathscr{C}(x \otimes y, s)

The set of natural transformations can be represented as an end:

\int_{s t} \mathscr{S}et(\int^{x y} f x \times g (y \otimes t) \times \mathscr{C}(x \otimes y, s), \int^{u v} f u \times g v \times \mathscr{C}(u \otimes v, s \otimes t))

A hom-set from a coend is isomorphic to an end of a hom-set:

\int_{s t x y} \mathscr{S}et(f x \times g (y \otimes t) \times \mathscr{C}(x \otimes y, s), \int^{u v} f u \times g v \times \mathscr{C}(u \otimes v, s \otimes t))

There is an injection that is a member of this hom-set:

i_{x, y \otimes t} : f x \times g (y \otimes t) \times \mathscr{C}(x \otimes (y \otimes t), -) \to \int^{u v} f u \times g v \times \mathscr{C}(u \otimes v, -)

Given a morphism h that is a member of \mathscr{C}(x \otimes y, s), we can construct the morphism (h \otimes id) \circ \alpha^{-1}, which is a member of \mathscr{C}(x \otimes (y \otimes t), s \otimes t).

The free monoidal functor Free_f is given as the initial algebra of the (higher-order) endofunctor acting on a functor g from [\mathscr{C}, \mathscr{S}et]:

A_f g = \mathscr{C}(1, -) + f \star g

By Lambek’s lemma, the action of this functor on the fixed point is naturally isomorphic to the fixed point itself:

\mathscr{C}(1, -) + (f \star Free_f) \cong Free_f

We want to show that Free_f is lax monoidal, that is that there’s a mapping:

\epsilon : 1 \to Free_f \, 1

and a family of natural transformations:

\mu_{s t} : Free_f\, s \times Free_f\, t \to Free_f\, (s \otimes t)

The first one can be simply chosen as the identity id_1 of the singleton set.

Let’s rewrite the type of natural transformations in the second one as an end:

\int_{s t} \mathscr{S}et(Free_f\, s \times Free_f\, t, Free_f\, (s \otimes t))

We can expand the first factor using Lambek’s lemma:

\int_{s t} \mathscr{S}et((\mathscr{C}(1,s) + (f \star Free_f) s) \times Free_f\, t, Free_f\, (s \otimes t))

distribute the product over the sum:

\int_{s t} \mathscr{S}et(\mathscr{C}(1,s)\times Free_f\, t + (f \star Free_f) s \times Free_f\, t, Free_f\, (s \otimes t))

and replace functions from coproducts with products of functions:

\int_{s t} \mathscr{S}et(\mathscr{C}(1,s)\times Free_f\, t, Free_f\, (s \otimes t)) \times \int_{s t} \mathscr{S}et((f \star Free_f) s \times Free_f\, t, Free_f\, (s \otimes t))

The first hom-set forms the base of induction and the second is the inductive step. If we call a member of \mathscr{C}(1,s) h then we can implement the first function as the lifting of (h 1 \otimes -) acting on Free_f\, t, and for the second, we can use \beta_{s t}.

Conclusion

The purpose of this post was to explore the formulation of a free lax monoidal functor without recourse to closed structure. I have to admit to a hidden agenda: The profunctor formulation of traversables involves monoidal profunctors, so that’s what I’m hoping to explore in the next post.

Appendix: Free Monad

While reviewing the draft of this post, Oleg Grenrus suggested that I derive free monad as a fixed point of a higher order functor. The monoidal product in this case is endofunctor composition:

newtype Compose f g a = Compose (f (g a))

The higher-order functor in question can be written as:

A_f g = Id + f \circ g

or, in Haskell:

data FreeMonadF f g a = 
    DoneFM a 
  | MoreFM (Compose f g a)
instance Functor f => HFunctor (FreeMonadF f) where
  hfmap _ (DoneFM a) = DoneFM a
  hfmap nat (MoreFM (Compose fg)) = MoreFM $ Compose $ fmap nat fg
  ffmap h (DoneFM a) = DoneFM (h a)
  ffmap h (MoreFM (Compose fg)) = MoreFM $ Compose $ fmap (fmap h) fg

The free monad is given by the fixed point:

type FreeMonad f = FixH (FreeMonadF f)

as witnessed by the following instance definition:

instance Functor f => Monad (FreeMonad f) where
  return = InH . DoneFM
  (InH (DoneFM a)) >>= k = k a
  fma >>= k = join (fmap k fma)
join :: Functor f => FreeMonad f (FreeMonad f a) -> FreeMonad f a
join (InH (DoneFM x)) = x
join (InH (MoreFM (Compose ffr))) = 
    InH $ MoreFM $ Compose $ fmap join ffr
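One caveat: on modern GHC, Monad requires Applicative (and Functor) superclass instances. Here is a minimal sketch of the missing instance, assuming the Functor instance comes from the generic FixH instance defined earlier:

import Control.Monad (ap)

instance Functor f => Applicative (FreeMonad f) where
  pure = InH . DoneFM
  (<*>) = ap  -- defined in terms of the Monad instance above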

Trying to improve my Haskell coding skills, I decided to test myself at solving the 2017 Advent of Code problems. It’s been a lot of fun and a great learning experience. One problem in particular stood out for me because, for the first time, it let me apply, in anger, the ideas I learned from category theory. But I’m not going to talk about category theory this time, just about programming.

The problem is really about dominoes. You get a set of dominoes, or pairs of numbers (the twist is that the numbers are not capped at 6), and you are supposed to build a chain, in which the numbers are matched between consecutive pieces. For instance, the chain [(0, 5), (5, 12), (12, 12), (12, 1)] is admissible. Like in the real game, the pieces can be turned around, so (1, 3) can be also used as (3, 1). The goal is to build a chain that starts from zero and maximizes the score, which is the sum of numbers on the pieces used.

The algorithm is pretty straightforward. You put all dominoes in a data structure that lets you quickly pull the pieces you need, you recursively build all possible chains, evaluate their sums, and pick the winner.

Let’s start with some data structures.

type Piece = (Int, Int)
type Chain = [Piece]

At each step in the procedure, we will be looking for a domino with a number that matches the end of the current chain. If the chain is [(0, 5), (5, 12)], we will be looking for pieces with 12 on one end. It’s best to organize pieces in a map indexed by these numbers. To allow for turning the dominoes, we’ll add each piece twice. For instance, the piece (12, 1) will be added as (12, 1) and (1, 12). We can make a small optimization for symmetric dominoes, like (12, 12), by adding them only once.

We’ll use the Map from the containers library:

import qualified Data.Map as Map
import Data.List (delete)    -- used later in removePiece
import Data.Maybe (fromJust) -- used later in removePiece

The Map we’ll be using is:

type Pool = Map.Map Int [Int]

The key is an integer, the value is a list of integers corresponding to the other ends of pieces. This is made clear by the way we insert each piece in the map:

addPiece :: Piece -> Pool -> Pool
addPiece (m, n) = if m /= n 
                  then add m n . add n m
                  else add m n
  where 
    add m n pool = 
      case Map.lookup m pool of
        Nothing  -> Map.insert m [n] pool
        Just lst -> Map.insert m (n : lst) pool

I used point-free notation. If that’s confusing, here’s the translation:

addPiece :: Piece -> Pool -> Pool
addPiece (m, n) pool = if m /= n 
                       then add m n (add n m pool)
                       else add m n pool

As I said, each piece is added twice, except for the symmetric ones.

After using a piece in a chain, we’ll have to remove it from the pool:

removePiece :: Piece -> Pool -> Pool
removePiece (m, n) = if m /= n
                     then rem m n . rem n m
                     else rem m n
  where
    rem :: Int -> Int -> Pool -> Pool
    rem m n pool = 
      case fromJust $ Map.lookup m pool of
        []  -> Map.delete m pool
        lst -> Map.insert m (delete n lst) pool

You might be wondering why I’m using a partial function fromJust. In industrial-strength code I would pattern match on the Maybe and issue a diagnostic if the piece were not found. Here I’m fine with a fatal exception if there’s a bug in my reasoning.

It’s worth mentioning that, like all data structures in Haskell, Map is a persistent data structure. It means that it’s never modified in place, and its previous versions persist. This is invaluable in this kind of recursive algorithm, where we use backtracking to explore multiple paths.
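A tiny demonstration of persistence (a throwaway example using the Map type defined above):

persistenceDemo :: Bool
persistenceDemo =
  let m0 = Map.fromList [(0, [5])]
      m1 = Map.insert 5 [12] m0  -- m1 is a new version; m0 is untouched
  in  Map.size m0 == 1 && Map.size m1 == 2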

The input of the puzzle is a list of pieces. We’ll start by inserting them into our map. In functional programming we never think in terms of loops: we transform data. A list of pieces is a (recursive) data structure. We want to traverse it and accumulate the information stored in it into a Map. This kind of transformation is, in general, called a catamorphism. A list catamorphism is called a fold. It is specified by two things: (1) its action on the empty list (here, it turns it into Map.empty), and (2) its action on the head of the current list and the accumulated result of processing the tail. The head of the current list is a piece, and the accumulator is the Map. The function addPiece has just the right signature:

presort :: [Piece] -> Pool
presort = foldr addPiece Map.empty

I’m using a right fold, but a left fold would work fine, too. Again, this is point-free notation.

Now that the preliminaries are over, let’s think about the algorithm. My first approach was to define a bunch of mutually recursive functions that would build all possible chains, score them, and then pick the best one. After a few tries, I got hopelessly bogged down in details. I took a break and started thinking.

Functional programming is all about functions, right? Using a recursive function is the correct approach. Or is it? The more you program in Haskell, the more you realize that you get the most power by considering wholesale transformations of data structures. When creating a Map of pieces, I didn’t write a recursive function over a list — I used a fold instead. Of course, behind the scenes, fold is implemented using recursion (which, thanks to tail recursion, is usually transformed into a loop). But the idea of applying transformations to data structures is what lets us soar above the sea of details and into the higher levels of abstraction.

So here’s the new idea: let’s create one gigantic data structure that contains all admissible chains built from the domino pieces at our disposal. The obvious choice is a tree. At the root we’ll have the starting number: zero, as specified in the description of the problem. All pool pieces that have a zero at one end will start a new branch. Instead of storing the whole piece at the node, we can just store the second number — the first being determined by the parent. So a piece (0, 5) starts a branch with a 5 node right below the 0 node. Next we’d look for pieces with a 5. Suppose that one of them is (5, 12), so we create a node with a 12, and so on. A tree with a variable list of branches is called a rose tree:

data Rose = NodeR Int [Rose]
  deriving Show

It’s always instructive to keep in mind at least one special boundary case. Consider what would happen if (0, 5) were the only piece in the pool. We’d end up with the following tree:

NodeR 0 [NodeR 5 []]

We’ll come back to this example later.

The next question is, how do we build such a tree? We start with a set of dominoes gathered in a Map. At every step in the algorithm we pick a matching domino, remove it from the pool, and start a new subtree. To start a subtree we need a number and a pool of remaining pieces. Let’s call this combination a seed.

The process of building a recursive data structure from a seed is called anamorphism. It’s a well studied and well understood process, so let’s try to apply it in our case. The key is to separate the big picture from the small picture. The big picture is the recursive data structure — the rose tree, in our case. The small picture is what happens at a single node.

Let’s start with the small picture. We are given a seed of the type (Int, Pool). We use the number as a key to retrieve a list of matching pieces from the Pool (strictly speaking, just a list of numbers corresponding to the other ends of the pieces). Each piece will start a new subtree. The seed for such a subtree consists of the number at the other end of the piece and a new Pool with the piece removed. A function that produces seeds from a given seed looks like this:

grow :: (Int, Pool) -> [(Int, Pool)]
grow (n, pool) = 
  case Map.lookup n pool of
    Nothing -> []
    Just ms -> [(m, removePiece (m, n) pool) | m <- ms]

Now we have to translate this to a procedure that recreates a complete tree. The trick is to split the definition of the tree into local and global pictures. The local picture is captured by this data structure:

data TreeF a = NodeF Int [a]
  deriving Functor

Here, the recursion of the original rose tree is replaced by the type parameter a. This data structure, which describes a single node, or a very shallow tree, is a functor with respect to a (the compiler is able to automatically figure out the implementation of fmap, but you can also do it by hand).
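For reference, if you were to write the instance by hand instead of deriving it, it would be:

instance Functor TreeF where
  -- fmap transforms the children, leaving the node's value alone
  fmap f (NodeF n xs) = NodeF n (fmap f xs)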

It’s important to realize that the recursive definition of a rose tree can be recovered as a fixed point of this functor. We define the fixed point as the data structure X that results from replacing a in the definition of TreeF with X. Symbolically:

X = TreeF X

In fact, this procedure of finding the fixed point can be written in all generality for any functor f. If we call the fixed point Fix f, we can define it by replacing the type argument to f with Fix f, as in:

newtype Fix f = Fix { unFix :: f (Fix f) }

Our rose tree is the fixed point of the functor TreeF:

type Tree = Fix TreeF

This splitting of the recursive part from the functor part is very convenient because it lets us use non-recursive functions to generate or traverse recursive data structures.

In particular, the procedure of unfolding a data structure from a seed is captured by a non-recursive function of the following signature:

type Coalgebra f a = a -> f a

Here, a serves as the seed that generates a single node populated with new seeds. We have already seen a function that generates seeds, we only have to cast it in the form of a coalgebra:

coalg :: Coalgebra TreeF (Int, Pool)
coalg (n, pool) = 
  case Map.lookup n pool of
    Nothing -> NodeF n []
    Just ms -> NodeF n [(m, removePiece (m, n) pool) | m <- ms]

The pièce de résistance is the formula that uses a given coalgebra to unfold a recursive data structure. It’s called the anamorphism:

ana :: Functor f => Coalgebra f a -> a -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg

Here’s the play-by-play: The anamorphism takes a seed and applies the coalgebra to it. That generates a single node with new seeds in place of children. Then it fmaps the whole anamorphism over this node, thus unfolding the seeds into full-blown trees. Finally, it applies the constructor Fix to produce the final tree. Notice that this is a recursive definition.

We are now in a position to build a tree that contains all admissible chains of dominoes. We do it by applying the anamorphism to our coalgebra, starting from the seed (0, pool):

tree = ana coalg (0, pool)

Once we have this tree, we could traverse it, or fold it, to retrieve all the chains and find the best one.

But once we have our tree in the form of a fixed point, we can be smart about folds as well. The procedure is essentially the same, except that now we are collecting information from the nodes of a tree. To do this, we define a non-recursive function called the algebra:

type Algebra f a = f a -> a

The type a is called the carrier of the algebra. It plays the role of the accumulator of data.

We are interested in the algebra that would help us collect chains of dominoes from our rose tree. Suppose that we have already applied this algebra to all children of a particular node. Each child tree would produce its own list of chains. Our goal is to extend those chains by adding one more piece that is determined by the current node. Let’s start with our earlier trivial case of a tree that contains a single piece (0, 5):

NodeR 0 [NodeR 5 []]

We replace the leaf node with some value x of the still unspecified carrier type. We get:

NodeF 0 [x]

Obviously, x must contain the number 5, to let us recover the original piece (0, 5). The result of applying the algebra to the top node must produce the chain [(0, 5)]. These two pieces of information suggest the carrier type to be a combination of a number and a list of chains. The leaf node is turned into (5, []), and the top node produces (0, [[(0, 5)]]).

With this choice of the carrier type, the algebra is easy to implement:

chainAlg :: Algebra TreeF (Int, [Chain])
chainAlg (NodeF n []) = (n, [])
chainAlg (NodeF n lst) = (n, concat [push (n, m) bs | (m, bs) <- lst])
  where
    push :: (Int, Int) -> [Chain] -> [Chain]
    push (n, m) [] = [[(n, m)]]
    push (n, m) bs = [(n, m) : br | br <- bs]

For the leaf (a node with no children), we return the number stored in it together with an empty list. Otherwise, we gather the chains from children. If a child returns an empty list of chains, meaning it was a leaf, we create a single-piece chain. If the list is not empty, we prepend a new piece to all the chains. We then concatenate all lists of chains into one list.

All that remains is to apply our algebra recursively to the whole tree. Again, this can be done in full generality using a catamorphism:

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

We start by stripping the fixed point constructor using unFix to expose a node, apply the catamorphism to all its children, and apply the algebra to the node.

To summarize: we use an anamorphism to create a tree, then use a catamorphism to convert the tree to a list of chains. Notice that we don’t need the tree itself — we only use it to drive the algorithm. Because Haskell is lazy, the tree is evaluated on demand, node by node, as it is walked by the catamorphism.

This combination of an anamorphism followed immediately by a catamorphism comes up often enough to merit its own name. It’s called a hylomorphism, and can be written concisely as:

hylo :: Functor f => Algebra f a -> Coalgebra f b -> b -> a
hylo f g = f . fmap (hylo f g) . g

In our example, we produce a list of chains using a hylomorphism:

let (_, chains) = hylo chainAlg coalg (0, pool)

The solution of the puzzle is the chain with the maximum score:

maximum $ fmap score chains

score :: Chain -> Int
score = sum . fmap score1
  where score1 (m, n) = m + n
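Putting it all together, here’s a hypothetical end-to-end run on the example chain from the beginning of the post:

main :: IO ()
main = do
  let pieces = [(0, 5), (5, 12), (12, 12), (12, 1)]
      pool = presort pieces
      (_, chains) = hylo chainAlg coalg (0, pool)
  print (maximum (fmap score chains))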

Conclusion

The solution that I described in this post was not the first one that came to my mind. I could have persevered with the more obvious approach of implementing a big recursive function or a series of smaller mutually recursive ones. I’m glad I didn’t. I have found out that I’m much more productive when I can reason in terms of applying transformations to data structures.

You might think that a data structure that contains all admissible chains of dominoes would be too large to fit comfortably in memory, and you would probably be right in a strict language. But Haskell is a lazy language, and data structures work more often as control structures than as storage for data.

The use of recursion schemes further simplifies programming. You can design algebras and coalgebras as non-recursive functions, which are much easier to reason about, and then apply them to recursive data structures using catamorphisms and anamorphisms. You can even combine them into hylomorphisms.

It’s worth mentioning that we routinely apply these techniques to lists. I already mentioned that a fold is nothing but a list catamorphism. The functor in question can be written as:

data ListF e a = Nil | Cons e a
  deriving Functor

A list is a fixed point of this functor:

type List e = Fix (ListF e)

An algebra for the list functor is implemented by pattern matching on its two constructors:

alg :: ListF e a -> a
alg Nil = z
alg (Cons e a) = f e a

Notice that a list algebra is parameterized by two items: the value z and the function f :: e -> a -> a. These are exactly the parameters to foldr. So, when you are calling foldr, you are defining an algebra and performing a list catamorphism.
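For instance, summing a list is the catamorphism whose algebra has z = 0 and f = (+):

sumAlg :: ListF Int Int -> Int
sumAlg Nil = 0
sumAlg (Cons e a) = e + a

-- which computes the same thing as:
sumList :: [Int] -> Int
sumList = foldr (+) 0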

Likewise, a list anamorphism takes a coalgebra and a seed and produces a list. Finite lists are produced by the anamorphism called unfoldr:

unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
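For example, here is a countdown list produced by unfoldr (a made-up example):

import Data.List (unfoldr)

countdown :: Int -> [Int]
countdown = unfoldr (\k -> if k <= 0 then Nothing else Just (k, k - 1))

-- countdown 3 == [3, 2, 1]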

You can learn more about algebras and coalgebras, from the point of view of category theory, in another blog post.

The source code for this post is available on GitHub.


This is part 31 of Categories for Programmers. Previously: Lawvere Theories. See the Table of Contents.

There is no good place to end a book on category theory. There’s always more to learn. Category theory is a vast subject. At the same time, it’s obvious that the same themes, concepts, and patterns keep showing up over and over again. There is a saying that all concepts are Kan extensions and, indeed, you can use Kan extensions to derive limits, colimits, adjunctions, monads, the Yoneda lemma, and much more. The notion of a category itself arises at all levels of abstraction, and so does the concept of a monoid and a monad. Which one is the most basic? As it turns out they are all interrelated, one leading to another in a never-ending cycle of abstractions. I decided that showing these interconnections might be a good way to end this book.

Bicategories

One of the most difficult aspects of category theory is the constant switching of perspectives. Take the category of sets, for instance. We are used to defining sets in terms of elements. An empty set has no elements. A singleton set has one element. A cartesian product of two sets is a set of pairs, and so on. But when talking about the category Set I asked you to forget about the contents of sets and instead concentrate on morphisms (arrows) between them. You were allowed, from time to time, to peek under the covers to see what a particular universal construction in Set described in terms of elements. The terminal object turned out to be a set with one element, and so on. But these were just sanity checks.

A functor is defined as a mapping of categories. It’s natural to consider a mapping as a morphism in a category. A functor turned out to be a morphism in the category of categories (small categories, if we want to avoid questions about size). By treating a functor as an arrow, we forfeit the information about its action on the internals of a category (its objects and morphisms), just like we forfeit the information about the action of a function on elements of a set when we treat it as an arrow in Set. But functors between any two categories also form a category. This time you are asked to consider something that was an arrow in one category to be an object in another. In a functor category functors are objects and natural transformations are morphisms. We have discovered that the same thing can be an arrow in one category and an object in another. The naive view of objects as nouns and arrows as verbs doesn’t hold.

Instead of switching between two views, we can try to merge them into one. This is how we get the concept of a 2-category, in which objects are called 0-cells, morphisms are 1-cells, and morphisms between morphisms are 2-cells.

0-cells a, b; 1-cells f, g; and a 2-cell α.

The category of categories Cat is an immediate example. We have categories as 0-cells, functors as 1-cells, and natural transformations as 2-cells. The laws of a 2-category tell us that 1-cells between any two 0-cells form a category (in other words, C(a, b) is a hom-category rather than a hom-set). This fits nicely with our earlier assertion that functors between any two categories form a functor category.

In particular, 1-cells from any 0-cell back to itself also form a category, the hom-category C(a, a); but that category has even more structure. Members of C(a, a) can be viewed as arrows in C or as objects in C(a, a). As arrows, they can be composed with each other. But when we look at them as objects, the composition becomes a mapping from a pair of objects to an object. In fact it looks very much like a product — a tensor product to be precise. This tensor product has a unit: the identity 1-cell. It turns out that, in any 2-category, a hom-category C(a, a) is automatically a monoidal category with the tensor product defined as composition of 1-cells. Associativity and unit laws simply fall out from the corresponding category laws.

Let’s see what this means in our canonical example of a 2-category Cat. The hom-category Cat(a, a) is the category of endofunctors on a. Endofunctor composition plays the role of a tensor product in it. The identity functor is the unit with respect to this product. We’ve seen before that endofunctors form a monoidal category (we used this fact in the definition of a monad), but now we see that this is a more general phenomenon: endo-1-cells in any 2-category form a monoidal category. We’ll come back to it later when we generalize monads.

You might recall that, in a general monoidal category, we did not insist on the monoid laws being satisfied on the nose. It was often enough for the unit laws and the associativity laws to be satisfied up to isomorphism. In a 2-category, monoidal laws in C(a, a) follow from composition laws for 1-cells. These laws are strict, so we will always get a strict monoidal category. It is, however, possible to relax these laws as well. We can say, for instance, that a composition of the identity 1-cell ida with another 1-cell, f :: a -> b, is isomorphic, rather than equal, to f. Isomorphism of 1-cells is defined using 2-cells. In other words, there is a 2-cell:

ρ :: f ∘ ida -> f

that has an inverse.

Identity law in a bicategory holds up to isomorphism (an invertible 2-cell ρ).

We can do the same for the left identity and associativity laws. This kind of relaxed 2-category is called a bicategory (there are some additional coherency laws, which I will omit here).

As expected, endo-1-cells in a bicategory form a general monoidal category with non-strict laws.

An interesting example of a bicategory is the category of spans. A span between two objects a and b is an object x and a pair of morphisms:

f :: x -> a
g :: x -> b


You might recall that we used spans in the definition of a categorical product. Here, we want to look at spans as 1-cells in a bicategory. The first step is to define a composition of spans. Suppose that we have an adjoining span:

f' :: y -> b
g' :: y -> c


The composition would be a third span, with some apex z. The most natural choice for it is the pullback of g along f'. Remember that a pullback is the object z together with two morphisms:

h :: z -> x
h' :: z -> y

such that:

g ∘ h = f' ∘ h'

which is universal among all such objects.


For now, let’s concentrate on spans over the category of sets. In that case, the pullback is just a set of pairs (p, q) from the cartesian product x × y such that:

g p = f' q

A morphism between two spans that share the same endpoints is defined as a morphism h between their apices, such that the appropriate triangles commute.

A 2-cell in Span.

To summarize, in the bicategory Span: 0-cells are sets, 1-cells are spans, 2-cells are span morphisms. An identity 1-cell is a degenerate span in which all three objects are the same, and the two morphisms are identities.
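
Here’s a hedged Haskell sketch of spans over finite sets, with the apex carried around as a list of its elements (Span, composeSpan, and idSpan are names made up for this illustration):

{-# LANGUAGE ExistentialQuantification #-}

-- A span between a and b: an apex x with two legs.
data Span a b = forall x. Span [x] (x -> a) (x -> b)

-- Composition via pullback: the new apex is the set of pairs
-- (p, q) satisfying g p = f' q.
composeSpan :: Eq b => Span a b -> Span b c -> Span a c
composeSpan (Span xs f g) (Span ys f' g') =
  Span [(p, q) | p <- xs, q <- ys, g p == f' q]
       (f . fst)
       (g' . snd)

-- The identity span: all three objects the same, identity legs.
idSpan :: [a] -> Span a a
idSpan as = Span as id id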

We’ve seen another example of a bicategory before: the bicategory Prof of profunctors, where 0-cells are categories, 1-cells are profunctors, and 2-cells are natural transformations. The composition of profunctors was given by a coend.

Monads

By now you should be pretty familiar with the definition of a monad as a monoid in the category of endofunctors. Let’s revisit this definition with the new understanding that the category of endofunctors is just one small hom-category of endo-1-cells in the bicategory Cat. We know it’s a monoidal category: the tensor product comes from the composition of endofunctors. A monoid is defined as an object in a monoidal category — here it will be an endofunctor T — together with two morphisms. Morphisms between endofunctors are natural transformations. One morphism maps the monoidal unit — the identity endofunctor — to T:

η :: I -> T

The second morphism maps the tensor product T ⊗ T to T. The tensor product is given by endofunctor composition, so we get:

μ :: T ∘ T -> T


We recognize these as the two operations defining a monad (they are called return and join in Haskell), and we know that monoid laws turn to monad laws.
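
In Haskell notation, with the unit and the tensor product made explicit, this is the familiar pair return and join (a sketch that borrows the standard Monad class for the underlying operations):

import Data.Functor.Compose (Compose (..))
import Data.Functor.Identity (Identity (..))

-- η :: I -> T, i.e. return
eta :: Monad t => Identity a -> t a
eta (Identity a) = return a

-- μ :: T ∘ T -> T, i.e. join
mu :: Monad t => Compose t t a -> t a
mu (Compose tta) = tta >>= id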

Now let’s remove all mention of endofunctors from this definition. We start with a bicategory C and pick a 0-cell a in it. As we’ve seen earlier, the hom-category C(a, a) is a monoidal category. We can therefore define a monoid in C(a, a) by picking a 1-cell, T, and two 2-cells:

η :: I -> T
μ :: T ∘ T -> T

satisfying the monoid laws. We call this a monad.


That’s a much more general definition of a monad using only 0-cells, 1-cells, and 2-cells. It reduces to the usual monad when applied to the bicategory Cat. But let’s see what happens in other bicategories.

Let’s construct a monad in Span. We pick a 0-cell, which is a set that, for reasons that will become clear soon, I will call Ob. Next, we pick an endo-1-cell: a span from Ob back to Ob. It has a set at the apex, which I will call Ar, equipped with two functions:

dom :: Ar -> Ob
cod :: Ar -> Ob


Let’s call the elements of the set Ar “arrows.” If I also tell you to call the elements of Ob “objects,” you might get a hint of where this is leading. The two functions dom and cod assign the domain and the codomain to an “arrow.”

To make our span into a monad, we need two 2-cells, η and μ. The monoidal unit, in this case, is the trivial span from Ob to Ob with the apex at Ob and two identity functions. The 2-cell η is a function between the apices Ob and Ar. In other words, η assigns an “arrow” to every “object.” A 2-cell in Span must satisfy commutation conditions — in this case:

dom ∘ η = id
cod ∘ η = id


In components, this becomes:

dom (η ob) = ob = cod (η ob)

where ob is an “object” in Ob. In other words, η assigns to every “object” an “arrow” whose domain and codomain are that “object.” We’ll call this special “arrow” the “identity arrow.”

The second 2-cell μ acts on the composition of the span Ar with itself. The composition is defined as a pullback, so its elements are pairs of elements from Ar — pairs of “arrows” (a1, a2). The pullback condition is:

cod a1 = dom a2

We say that a2 and a1 are “composable,” because the domain of one is the codomain of the other.

The 2-cell μ is a function that maps a pair of composable arrows (a1, a2) to a single arrow a3 from Ar. In other words μ defines composition of arrows.

It’s easy to check that monad laws correspond to identity and associativity laws for arrows. We have just defined a category (a small category, mind you, in which objects and arrows form sets).

So, all told, a category is just a monad in the bicategory of spans.

What is amazing about this result is that it puts categories on the same footing as other algebraic structures like monads and monoids. There is nothing special about being a category. It’s just two sets and four functions. In fact we don’t even need a separate set for objects, because objects can be identified with identity arrows (they are in one-to-one correspondence). So it’s really just a set and a few functions. Considering the pivotal role that category theory plays in all of mathematics, this is a very humbling realization.
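
Here is that data spelled out as a Haskell sketch (SmallCat is a made-up name). The types cannot enforce the laws, and compose is only meaningful on composable pairs:

-- "Two sets and four functions": a small category.
data SmallCat ob ar = SmallCat
  { dom     :: ar -> ob
  , cod     :: ar -> ob
  , ident   :: ob -> ar        -- the 2-cell η
  , compose :: ar -> ar -> ar  -- the 2-cell μ; defined only when
                               -- cod of the first equals dom of the second
  }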

Challenges

  1. Derive unit and associativity laws for the tensor product defined as composition of endo-1-cells in a bicategory.
  2. Check that monad laws for a monad in Span correspond to identity and associativity laws in the resulting category.
  3. Show that a monad in Prof is an identity-on-objects functor.
  4. What’s a monad algebra for a monad in Span?


This is part 30 of Categories for Programmers.

Nowadays you can’t talk about functional programming without mentioning monads. But there is an alternative universe in which, by chance, Eugenio Moggi turned his attention to Lawvere theories rather than monads. Let’s explore that universe.

Universal Algebra

There are many ways of describing algebras at various levels of abstraction. We try to find a general language to describe things like monoids, groups, or rings. At the simplest level, all these constructions define operations on elements of a set, plus some laws that must be satisfied by these operations. For instance, a monoid can be defined in terms of a binary operation that is associative. We also have a unit element and unit laws. But with a little bit of imagination we can turn the unit element to a nullary operation — an operation that takes no arguments and returns a special element of the set. If we want to talk about groups, we add a unary operator that takes an element and returns its inverse. There are corresponding left and right inverse laws to go with it. A ring defines two binary operators plus some more laws. And so on.

The big picture is that an algebra is defined by a set of n-ary operations for various values of n, and a set of equational identities. These identities are all universally quantified. The associativity equation must be satisfied for all possible combinations of three elements, and so on.

Incidentally, this eliminates fields from consideration, for the simple reason that zero (unit with respect to addition) has no inverse with respect to multiplication. The inverse law for a field can’t be universally quantified.
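
To make this kind of presentation concrete, here’s a minimal Haskell sketch for the monoid example (MonoidOps is a made-up name). The operations live in a record; the laws are equations we have to check separately:

-- A monoid presented by a nullary and a binary operation.
data MonoidOps a = MonoidOps
  { munit :: a           -- nullary operation
  , mmult :: a -> a -> a -- binary operation
  }

-- One model: integers with addition (the laws hold, but are not
-- expressed in the types).
intAddition :: MonoidOps Int
intAddition = MonoidOps 0 (+)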

This definition of a universal algebra can be extended to categories other than Set, if we replace operations (functions) with morphisms. Instead of a set, we select an object a (called a generic object). A unary operation is just an endomorphism of a. But what about other arities (arity is the number of arguments for a given operation)? A binary operation (arity 2) can be defined as a morphism from the product a×a back to a. A general n-ary operation is a morphism from the n-th power of a to a:

αn :: a^n -> a

A nullary operation is a morphism from the terminal object (the zeroth power of a). So all we need in order to define any algebra is a category whose objects are powers of one special object a. The specific algebra is encoded in the hom-sets of this category. This is a Lawvere theory in a nutshell.

The derivation of Lawvere theories goes through many steps, so here’s the roadmap:

  1. Category of finite sets FinSet.
  2. Its skeleton F.
  3. Its opposite Fop.
  4. Lawvere theory L: an object in the category Law.
  5. Model M of a Lawvere theory: an object in the category Mod(L, Set).

Lawvere Theories

All Lawvere theories share a common backbone. All objects in a Lawvere theory are generated from just one object using products (really, just powers). But how do we define these products in a general category? It turns out that we can define products using a mapping from a simpler category. In fact this simpler category may define coproducts instead of products, and we’ll use a contravariant functor to embed them in our target category. A contravariant functor turns coproducts into products and injections into projections.

The natural choice for the backbone of a Lawvere category is the category of finite sets, FinSet. It contains the empty set 0, a singleton set 1, a two-element set 2, and so on. All objects in this category can be generated from the singleton set using coproducts (treating the empty set as a special case of a nullary coproduct). For instance, a two-element set is a sum of two singletons, 2 = 1 + 1, as expressed in Haskell:

type Two = Either () ()

However, even though it’s natural to think that there’s only one empty set, there may be many distinct singleton sets. In particular, the set 1 + 0 is different from the set 0 + 1, and different from 1 — even though they are all isomorphic. The coproduct in the category of sets is associative only up to isomorphism, not on the nose. We can remedy that situation by building a category that identifies all isomorphic sets. Such a category is called a skeleton. In other words, the backbone of any Lawvere theory is the skeleton F of FinSet. The objects in this category can be identified with natural numbers (including zero) that correspond to the element count in FinSet. Coproduct plays the role of addition. Morphisms in F correspond to functions between finite sets. For instance, there is a unique morphism from 0 to n (empty set being the initial object), no morphisms from n to 0 (except 0->0), n morphisms from 1 to n (the injections), one morphism from n to 1, and so on. Here, n denotes an object in F corresponding to all n-element sets in FinSet that have been identified through isomorphisms.

Using the category F we can formally define a Lawvere theory as a category L equipped with a special functor

IL :: Fop -> L

This functor must be a bijection on objects and it must preserve finite products (products in Fop are the same as coproducts in F):

IL (m × n) = IL m × IL n

You may sometimes see this functor characterized as identity-on-objects, which means that the objects in F and L are the same. We will therefore use the same names for them — we’ll denote them by natural numbers. Keep in mind though that objects in F are not the same as sets (they are classes of isomorphic sets).

The hom-sets in L are, in general, richer than those in Fop. They may contain morphisms other than the ones corresponding to functions in FinSet (the latter are sometimes called basic product operations). Equational laws of a Lawvere theory are encoded in those morphisms.

The key observation is that the singleton set 1 in F is mapped to some object that we also call 1 in L, and all the other objects in L are automatically powers of this object. For instance, the two-element set 2 in F is the coproduct 1+1, so it must be mapped to a product 1×1 (or 1^2) in L. In this sense, the category F behaves like the logarithm of L.

Among morphisms in L we have those transferred by the functor IL from F. They play a structural role in L. In particular, coproduct injections ik become product projections pk. A useful intuition is to imagine the projection:

pk :: 1^n -> 1

as the prototype for a function of n variables that ignores all but the k’th variable. Conversely, constant morphisms n->1 in F become diagonal morphisms 1 -> 1^n in L. They correspond to duplication of variables.

The interesting morphisms in L are the ones that define n-ary operations other than projections. It’s those morphisms that distinguish one Lawvere theory from another. These are the multiplications, the additions, the selections of unit elements, and so on, that define the algebra. But to make L a full category, we also need compound operations n->m (or, equivalently, 1^n -> 1^m). Because of the simple structure of the category, they turn out to be products of simpler morphisms of the type n->1. This is a generalization of the statement that a function that returns a product is a product of functions (or, as we’ve seen earlier, that the hom-functor is continuous).

Lawvere theory L is based on Fop, from which it inherits the “boring” morphisms that define the products. It adds the “interesting” morphisms that describe the n-ary operations (dotted arrows).

Lawvere theories form a category Law, in which morphisms are functors that preserve finite products and commute with the functors I. Given two such theories, (L, IL) and (L', I'L'), a morphism between them is a functor F :: L -> L' such that:

F (m × n) = F m × F n
F ∘ IL = I'L'

Morphisms between Lawvere theories encapsulate the idea of the interpretation of one theory inside another. For instance, group multiplication may be interpreted as monoid multiplication if we ignore inverses.

The simplest example of a Lawvere category is Fop itself (corresponding to the choice of the identity functor for IL). This trivial Lawvere theory, which has no operations or laws, happens to be the initial object in Law.

At this point it would be very helpful to present a non-trivial example of a Lawvere theory, but it would be hard to explain it without first understanding what models are.

Models of Lawvere Theories

The key to understanding Lawvere theories is to realize that one such theory generalizes a lot of individual algebras that share the same structure. For instance, the Lawvere theory of monoids describes the essence of being a monoid. It must be valid for all monoids. A particular monoid becomes a model of such a theory. A model is defined as a functor from the Lawvere theory L to the category of sets Set. (There are generalizations of Lawvere theories that use other categories for models but here I’ll just concentrate on Set.) Since the structure of L depends heavily on products, we require that such a functor preserve finite products. A model of L, also called an algebra over the Lawvere theory L, is therefore defined by a functor:

M :: L -> Set
M (a × b) ≅ M a × M b

Notice that we require the preservation of products only up to isomorphism. This is very important, because strict preservation of products would eliminate most interesting theories.

The preservation of products by models means that the image of M in Set is a sequence of sets generated by powers of the set M 1 — the image of the object 1 from L. Let’s call this set a. (This set is sometimes called a sort, and such an algebra is called single-sorted. There exist generalizations of Lawvere theories to multi-sorted algebras.) In particular, binary operations from L are mapped to functions:

a × a -> a

As with any functor, it’s possible that multiple morphisms in L are collapsed to the same function in Set.

Incidentally, the fact that all laws are universally quantified equalities means that every Lawvere theory has a trivial model: a constant functor mapping all objects to the singleton set, and all morphisms to the identity function on it.

A general morphism in L of the form m -> n is mapped to a function:

a^m -> a^n

If we have two different models, M and N, a natural transformation between them is a family of functions indexed by n:

μn :: M n -> N n

or, equivalently:

μn :: a^n -> b^n

where b = N 1.

Notice that the naturality condition guarantees the preservation of n-ary operations:

N f ∘ μn = μ1 ∘ M f

where f :: n -> 1 is an n-ary operation in L.

The functors that define models form a category of models, Mod(L, Set), with natural transformations as morphisms.

Consider a model for the trivial Lawvere category Fop. Such a model is completely determined by its value at 1, M 1. Since M 1 can be any set, there are as many of these models as there are sets in Set. Moreover, every morphism in Mod(Fop, Set) (a natural transformation between functors M and N) is uniquely determined by its component at 1. Conversely, every function M 1 -> N 1 induces a natural transformation between the two models M and N. Therefore Mod(Fop, Set) is equivalent to Set.

The Theory of Monoids

The simplest nontrivial example of a Lawvere theory describes the structure of monoids. It is a single theory that distills the structure of all possible monoids, in the sense that the models of this theory span the whole category Mon of monoids. We’ve already seen a universal construction, which showed that every monoid can be obtained from an appropriate free monoid by identifying a subset of morphisms. So a single free monoid already generalizes a whole lot of monoids. There are, however, infinitely many free monoids. The Lawvere theory for monoids LMon combines all of them in one elegant construction.

Every monoid must have a unit, so we have to have a special morphism η in LMon that goes from 0 to 1. Notice that there can be no corresponding morphism in F. Such a morphism would go in the opposite direction, from 1 to 0 which, in FinSet, would be a function from the singleton set to the empty set. No such function exists.

Next, consider morphisms 2->1, members of LMon(2, 1), which must contain prototypes of all binary operations. When constructing models in Mod(LMon, Set), these morphisms will be mapped to functions from the cartesian product M 1 × M 1 to M 1. In other words, functions of two arguments.

The question is: how many functions of two arguments can one implement using only the monoidal operator? Let’s call the two arguments a and b. There is one function that ignores both arguments and returns the monoidal unit. Then there are two projections that return a and b, respectively. They are followed by functions that return ab, ba, aa, bb, aab, and so on… In fact there are as many such functions of two arguments as there are elements in the free monoid with generators a and b. Notice that LMon(2, 1) must contain all those morphisms because one of the models is the free monoid. In a free monoid they correspond to distinct functions. Other models may collapse multiple morphisms in LMon(2, 1) down to a single function, but not the free monoid.
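
A little Haskell illustration of this counting (Gen and evalWord are made-up names): a binary operation built from the monoid structure is precisely a word in the free monoid on two generators:

data Gen = A | B

-- Interpret a word over the generators as a binary operation:
-- [A, B, A] denotes \a b -> a <> b <> a, and [] denotes the unit.
evalWord :: Monoid m => [Gen] -> m -> m -> m
evalWord w a b = mconcat (map pick w)
  where
    pick A = a
    pick B = b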

If we denote the free monoid with n generators n*, we may identify the hom-set LMon(2, 1) with the hom-set Mon(1*, 2*) in Mon, the category of monoids. In general, we pick LMon(m, n) to be Mon(n*, m*). In other words, the category LMon is the opposite of the category of free monoids.

The category of models of the Lawvere theory for monoids, Mod(LMon, Set), is equivalent to the category of all monoids, Mon.

Lawvere Theories and Monads

As you may remember, algebraic theories can be described using monads — in particular algebras for monads. It should be no surprise then that there is a connection between Lawvere theories and monads.

First, let’s see how a Lawvere theory induces a monad. It does it through an adjunction between a forgetful functor and a free functor. The forgetful functor U assigns a set to each model. This set is given by evaluating the functor M from Mod(L, Set) at the object 1 in L.

Another way of deriving U is by exploiting the fact that Fop is the initial object in Law. It means that, for any Lawvere theory L, there is a unique functor Fop -> L. This functor induces a functor on models going in the opposite direction (since models are functors from theories to sets):

Mod(L, Set) -> Mod(Fop, Set)

But, as we discussed, the category of models of Fop is equivalent to Set, so we get the forgetful functor:

U :: Mod(L, Set) -> Set

It can be shown that U, so defined, always has a left adjoint, the free functor F.

This is easily seen for finite sets. The free functor F produces free algebras. A free algebra is a particular model in Mod(L, Set) that is generated from a finite set of generators n. We can implement F as the representable functor:

L(n, -) :: L -> Set

To show that it’s indeed free, all we have to do is to prove that it’s a left adjoint to the forgetful functor:

Mod(L(n, -), M) ≅ Set(n, U(M))

Let’s simplify the right hand side:

Set(n, U(M)) ≅ Set(n, M 1) ≅ (M 1)^n ≅ M n

(I used the fact that a set of morphisms is isomorphic to the exponential which, in this case, is just the iterated product.) The adjunction is the result of the Yoneda lemma:

[L, Set](L(n, -), M) ≅ M n

Together, the forgetful and the free functor define a monad T = U∘F on Set. Thus every Lawvere theory generates a monad.
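
For the theory of monoids this is a familiar monad: the free monoid on a set of generators a is the list [a], and U forgets the monoid structure, so T = U ∘ F is the list monad. In Haskell:

-- T = U ∘ F for the Lawvere theory of monoids is the list monad:
etaList :: a -> [a]
etaList x = [x]  -- embed a generator into the free monoid

muList :: [[a]] -> [a]
muList = concat  -- multiplication in the free monoid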

It turns out that the category of algebras for this monad is equivalent to the category of models.

You may recall that monad algebras define ways to evaluate expressions that are formed using monads. A Lawvere theory defines n-ary operations that can be used to generate expressions. Models provide means to evaluate these expressions.

The connection between monads and Lawvere theories doesn’t go both ways, though. Only finitary monads lead to Lawvere theories. A finitary monad is based on a finitary functor. A finitary functor on Set is fully determined by its action on finite sets. Its action on an arbitrary set a can be evaluated using the following coend:

F a = ∫n a^n × (F n)

Since the coend generalizes a coproduct, or a sum, this formula is a generalization of a power series expansion. Or we can use the intuition that a functor is a generalized container. In that case a finitary container of a’s can be described as a sum of shapes and contents. Here, F n is a set of shapes for storing n elements, and the contents is an n-tuple of elements, itself an element of a^n. For instance, a list (as a functor) is finitary, with one shape for every arity. A tree has more shapes per arity, and so on.

First off, all monads that are generated from Lawvere theories are finitary and they can be expressed as coends:

TL a = ∫n a^n × L(n, 1)

Conversely, given any finitary monad T on Set, we can construct a Lawvere theory. We start by constructing a Kleisli category for T. As you may remember, a morphism in a Kleisli category from a to b is given by a morphism in the underlying category:

a -> T b

When restricted to finite sets, this becomes:

m -> T n

The category opposite to this Kleisli category, KlTop, restricted to finite sets, is the Lawvere theory in question. In particular, the hom-set L(n, 1) that describes n-ary operations in L is given by the hom-set KlT(1, n).

It turns out that most monads that we encounter in programming are finitary, with the notable exception of the continuation monad. It is possible to extend the notion of Lawvere theory beyond finitary operations.

Monads as Coends

Let’s explore the coend formula in more detail.

TL a = ∫n a^n × L(n, 1)

To begin with, this coend is taken over a profunctor P in F defined as:

P n m = a^n × L(m, 1)

This profunctor is contravariant in the first argument, n. Consider how it lifts morphisms. A morphism in FinSet is a mapping of finite sets f :: m -> n. Such a mapping describes a selection of m elements from an n-element set (repetitions are allowed). It can be lifted to the mapping of powers of a, namely (notice the direction):

a^n -> a^m

The lifting simply selects m elements from a tuple of n elements (a1, a2,...an) (possibly with repetitions).
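
Here’s a sketch of this lifting in Haskell, with the power a^n represented as a function from indices (Int stands in for a proper finite-index type):

-- An n-tuple of a's, represented as a function from indices.
type Power a = Int -> a

-- Lifting a selection f :: m -> n to a function a^n -> a^m:
-- precompose with the selection (note the reversed direction).
liftSelect :: (Int -> Int) -> Power a -> Power a
liftSelect f tup = tup . f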

For instance, let’s take fk :: 1 -> n — a selection of the kth element from an n-element set. It lifts to a function that takes an n-tuple of elements of a and returns the kth one.

Or let’s take f :: m -> 1 — a constant function that maps all m elements to one. Its lifting is a function that takes a single element of a and duplicates it m times:

λx -> (x, x, ... x)

You might notice that it’s not immediately obvious that the profunctor in question is covariant in the second argument. The hom-functor L(m, 1) is actually contravariant in m. However, we are taking the coend not in the category L but in the category F. The coend variable n goes over finite sets (or the skeletons of such). The category L contains the opposite of F, so a morphism m -> n in F is a member of L(n, m) in L (the embedding is given by the functor IL).

Let’s check the functoriality of L(m, 1) as a functor from F to Set. We want to lift a function f :: m -> n, so our goal is to implement a function from L(m, 1) to L(n, 1). Corresponding to the function f there is a morphism in L from n to m (notice the direction). Precomposing this morphism with L(m, 1) gives us a subset of L(n, 1).

Notice that, by lifting a function 1->n we can go from L(1, 1) to L(n, 1). We’ll use this fact later on.

The product of a contravariant functor a^n and a covariant functor L(m, 1) is a profunctor Fop×F->Set. Remember that a coend can be defined as a coproduct (disjoint sum) of all the diagonal members of a profunctor, in which some elements are identified. The identifications correspond to cowedge conditions.

Here, the coend starts as the disjoint sum of sets a^n × L(n, 1) over all n. The identifications can be generated by expressing the coend as a coequalizer. We start with an off-diagonal term a^n × L(m, 1). To get to the diagonal, we can apply a morphism f :: m -> n either to the first or the second component of the product. The two results are then identified.

I have shown before that the lifting of f :: 1 -> n results in these two transformations:

a^n -> a

and:

L(1, 1) -> L(n, 1)

Therefore, starting from a^n × L(1, 1) we can reach both:

a × L(1, 1)

when we lift <f, id> and:

a^n × L(n, 1)

when we lift <id, f>. This doesn’t mean, however, that all elements of a^n × L(n, 1) can be identified with a × L(1, 1). That’s because not all elements of L(n, 1) can be reached from L(1, 1). Remember that we can only lift morphisms from F. A non-trivial n-ary operation in L cannot be constructed by lifting a morphism f :: 1 -> n.

In other words, we can only identify all addends in the coend formula for which L(n, 1) can be reached from L(1, 1) through the application of basic morphisms. They are all equivalent to a × L(1, 1). Basic morphisms are the ones that are images of morphisms in F.

Let’s see how this works in the simplest Lawvere theory, Fop itself. In such a theory, every L(n, 1) can be reached from L(1, 1). This is because L(1, 1) is a singleton containing just the identity morphism, and L(n, 1) only contains morphisms corresponding to injections 1->n in F, which are basic morphisms. Therefore all the addends in the coproduct are equivalent and we get:

T a = a × L(1, 1) = a

which is the identity monad.

Lawvere Theory of Side Effects

Since there is such a strong connection between monads and Lawvere theories, it’s natural to ask whether Lawvere theories could be used in programming as an alternative to monads. The major problem with monads is that they don’t compose nicely. There is no generic recipe for building monad transformers. Lawvere theories have an advantage in this area: they can be composed using coproducts and tensor products. On the other hand, only finitary monads can be easily converted to Lawvere theories. The outlier here is the continuation monad. There is ongoing research in this area (see bibliography).

To give you a taste of how a Lawvere theory can be used to describe side effects, I’ll discuss the simple case of exceptions that are traditionally implemented using the Maybe monad.

The Maybe monad is generated by the Lawvere theory with a single nullary operation 0->1. A model of this theory is a functor that maps 1 to some set a, and maps the nullary operation to a function:

raise :: () -> a

We can recover the Maybe monad using the coend formula. Let’s consider what the addition of the nullary operation does to the hom-sets L(n, 1). Besides creating a new L(0, 1) (which is absent from Fop), it also adds new morphisms to L(n, 1). These are the results of composing morphisms of the type n->0 with our 0->1. Such contributions are all identified with a^0 × L(0, 1) in the coend formula, because they can be obtained from:

a^n × L(0, 1)

by lifting 0->n in two different ways.

The coend reduces to:

TL a = a^0 + a^1

or, using Haskell notation:

type Maybe a = Either () a

which is equivalent to:

data Maybe a = Nothing | Just a

Notice that this Lawvere theory only supports the raising of exceptions, not their handling.


Challenges

  1. Enumerate all morphisms between 2 and 3 in F (the skeleton of FinSet).
  2. Show that the category of models for the Lawvere theory of monoids is equivalent to the category of monad algebras for the list monad.
  3. The Lawvere theory of monoids generates the list monad. Show that its binary operations can be generated using the corresponding Kleisli arrows.
  4. FinSet is a subcategory of Set and there is a functor that embeds it in Set. Any functor on Set can be restricted to FinSet. Show that a finitary functor is the left Kan extension of its own restriction.

Acknowledgments

I’m grateful to Gershom Bazerman for many useful comments.

Further Reading

  1. Functorial Semantics of Algebraic Theories, F. William Lawvere
  2. Notions of computation determine monads, Gordon Plotkin and John Power

Abstract: I present a uniform derivation of profunctor optics: isos, lenses, prisms, and grates based on the Yoneda lemma in the (enriched) profunctor category. In particular, lenses and prisms correspond to Tambara modules with the cartesian and cocartesian tensor product.

This blog post is the result of a collaboration between many people. The categorical profunctor picture solidified after long discussions with Edward Kmett. A lot of the theory was developed in exchanges on the Lens IRC channel between Russell O’Connor, Edward Kmett and James Deikun. They came up with the idea to use the Pastro functor to freely generate Tambara modules, which was the missing piece that completed the picture.

My interest in lenses started a long time ago when I first made the connection between the universal quantification over functors in the van Laarhoven representation of lenses and the Yoneda lemma. Since I was still learning the basics of category theory, it took me a long time to find the right language to make the formal derivation. Unbeknownst to me, Mauro Jaskellioff and Russell O’Connor independently had the same idea and they published a paper about it soon after I published my blog. But even though this solved the problem of lenses, prisms still seemed out of reach of the Yoneda lemma. Prisms require a more general formulation using universal quantification over profunctors. I was able to put a dent in it by deriving Isos from profunctor Yoneda, but then I was stuck again. I shared my ideas with Russell, who reached for help on the IRC channel, and a Haskell proof of concept was quickly established. Two years later, after a brainstorm with Edward, I was finally able to gather all these ideas in one place and give them a little categorical polish.

Yoneda Lemma

The starting point is the Yoneda lemma, which states that the set of natural transformations between the hom-functor C(a, -) in the category C and an arbitrary functor f from C to Set is (naturally) isomorphic with the set f a:

[C, Set](C(a, -), f) ≅ f a

Here, f is a member of the functor category [C, Set], where natural transformations form the hom-sets.

The set of natural transformations may be represented as an end, leading to the following formulation of the Yoneda lemma:

∫x Set(C(a, x), f x) ≅ f a

This notation makes the object x explicit, which is often very convenient. It can be easily translated to Haskell, by replacing the end with the universal quantifier. We get:

forall x. (a -> x) -> f x ≅ f a
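
The two directions of this isomorphism are straightforward to write down (a sketch; it requires the RankNTypes extension):

{-# LANGUAGE RankNTypes #-}

toYoneda :: Functor f => f a -> (forall x. (a -> x) -> f x)
toYoneda fa g = fmap g fa

fromYoneda :: (forall x. (a -> x) -> f x) -> f a
fromYoneda alpha = alpha id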

A special case of the Yoneda lemma replaces the functor f with a hom-functor in C:

f x = C(b, x)

and we get:

∫x Set(C(a, x), C(b, x)) ≅ C(b, a)

This form of the Yoneda lemma is useful in showing the Yoneda embedding, which states that any category C can be fully and faithfully embedded in the functor category [C, Set]. The embedding is a functor, and the above formula defines its action on morphisms.

We will be interested in using the Yoneda lemma in the functor category. We simply replace C with [C, Set] in the previous formula, and do some renaming of variables:

∫f Set([C, Set](g, f), [C, Set](h, f)) ≅ [C, Set](h, g)

The hom-sets in the functor category are sets of natural transformations, which can be rewritten using ends:

∫f Set(∫x Set(g x, f x), ∫x Set(h x, f x))
  ≅ ∫x Set(h x, g x)

Adjunctions

This is a short recap of adjunctions. We start with two functors going between two categories C and D:

L :: C -> D
R :: D -> C

We say that L is left adjoint to R iff there is a natural isomorphism between hom-sets:

D(L x, y) ≅ C(x, R y)

In particular, we can define an adjunction in a functor category [C, Set]. We start with two higher order (endo-) functors:

L :: [C, Set] -> [C, Set]
R :: [C, Set] -> [C, Set]

We say that L is left adjoint to R iff there is a natural isomorphism between two sets of natural transformations:

[C, Set](L f, g) ≅ [C, Set](f, R g)

where f and g are functors from C to Set. We can rewrite natural transformations using ends:

∫x Set((L f) x, g x) ≅ ∫x Set(f x, (R g) x)

In Haskell, you may think of f and g as type constructors (with the corresponding Functor instances), in which case L and R are higher-order type constructors parameterized by these type constructors (similar to how the monad or functor classes are).

Yoneda with Adjunction

Here’s a little trick. Since the fixed objects in the formula for Yoneda embedding are arbitrary, we can pick them to be images of other objects under some functor L that we know is left adjoint to another functor R:

∫x Set(D(L a, x), D(L b, x)) ≅ D(L b, L a)

Using the adjunction, this is isomorphic to:

∫x Set(C(a, R x), C(b, R x)) ≅ C(b, (R ∘ L) a)

Notice that the composition R ∘ L of adjoint functors is a monad in C. Let’s write this monad as Φ.

The interesting case is the adjunction between a forgetful functor U and a free functor F. We get:

∫x Set(C(a, U x), C(b, U x)) ≅ C(b, Φ a)

The end is taken over x in a category D that has some additional structure (we’ll see examples of that later); but the hom-sets are in the underlying simpler category C, which is the target of the forgetful functor U.

The Yoneda-with-adjunction formula generalizes to the category of functors:

∫f Set(∫x Set((L g) x, f x), ∫x Set((L h) x, f x))
  ≅ ∫x Set((L h) x, (L g) x)

leading to:

∫f Set(∫x Set(g x, (R f) x), ∫x Set(h x, (R f) x))
  ≅ ∫x Set(h x, (Φ g) x)

Here, Φ is the monad R ∘ L in the category of functors.

An interesting special case is when we substitute hom-functors for g and h:

g x = C(a, x)
h x = C(s, x)

We get:

∫f Set(∫x Set(C(a, x), (R f) x), ∫x Set(C(s, x), (R f) x))
  ≅ ∫x Set(C(s, x), (Φ C(a, -)) x)

We can then use the regular Yoneda lemma to “integrate over x” and reduce it down to:

∫f Set((R f) a, (R f) s) ≅ (Φ C(a, -)) s

Again, we are particularly interested in the forgetful/free adjunction:

∫f Set((U f) a, (U f) s) ≅ (Φ C(a, -)) s

with the monad:

Φ = U ∘ F

The simplest application of this identity is when the functors in question are identity functors. We get:

∫f Set(f a, f s) ≅ C(a, s)

In Haskell this becomes:

forall f. Functor f => f a -> f s  ≅ a -> s

You may think of this formula as defining the trivial kind of optic that simply turns a to s.
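
The witnesses of this isomorphism are easy to construct; going left to right instantiates f at the Identity functor (a sketch, with RankNTypes):

{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity (..))

toFn :: (forall f. Functor f => f a -> f s) -> (a -> s)
toFn t = runIdentity . t . Identity

fromFn :: Functor f => (a -> s) -> (f a -> f s)
fromFn = fmap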

Profunctors

Profunctors are just functors from a product category Cop×D to Set. All the results from the last section can be directly applied to the profunctor category [Cop×D, Set]. Keep in mind that morphisms in this category are natural transformations between profunctors. Here’s the key formula:

∫p Set((U p)<a, b>, (U p)<s, t>) ≅ (Φ (Cop×D)(<a, b>, -)) <s, t>

I have replaced a with a pair <a, b> and s with a pair <s, t>. The end is taken over all profunctors that exhibit some structure that U forgets, and F freely creates. Φ is the monad U ∘ F. It’s a monad that acts on profunctors to produce other profunctors.

Notice that a hom-set in the category Cop×D is a set of pairs of morphisms:

<f, g> :: (Cop×D)(<a, b>, <s, t>)
f :: s -> a
g :: b -> t

the first one going in the opposite direction.

The simplest application of this identity is when we don’t impose any constraints on the profunctors, in which case Φ is the identity monad. We get:

∫p Set(p <a, b>, p <s, t>) ≅ (Cop×D)(<a, b>, <s, t>)

Haskell translation of this formula gives the well-known representation of Iso:

forall p. Profunctor p => p a b -> p s t ≅ Iso s t a b

where:

data Iso s t a b = Iso (s -> a) (b -> t)
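
One direction of this isomorphism is just dimap; the other is obtained by applying the polymorphic function to Iso id id. A sketch of the first direction (isoToProf is a made-up name):

import Data.Profunctor (Profunctor (..))

isoToProf :: Profunctor p => Iso s t a b -> p a b -> p s t
isoToProf (Iso f g) = dimap f g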

Interesting things happen when we impose more structure on our profunctors.

Enriched Categories

First, let’s generalize profunctors to work on enriched categories. We start with some monoidal category V whose objects serve as hom-objects in an enriched category A. The category V will essentially replace Set in our constructions. For instance, we’ll work with profunctors that are enriched functors from the (enriched) product category to V:

p :: Aop ⊗ A -> V

Notice that we use a tensor product of categories. The objects in such a category are pairs of objects, and the hom-objects are tensor products of individual hom-objects. The definition of composition in a product category requires that the tensor product in V be symmetric (up to isomorphism).

For such profunctors, there is a suitable generalization of the end:

∫x p x x

It’s an object in V together with a V-natural family of projections:

pry :: ∫x p x x -> p y y

We can formulate the Yoneda lemma in an enriched setting by considering enriched functors from A to V. We get the following generalization:

∫x [A(a, x), f x] ≅ f a

Notice that A(a, x) is now an object of V — the hom-object from a to x. The notation [v, w] generalizes the internal hom. It is defined as the right adjoint to the tensor product in V:

V(x ⊗ v, w) ≅ V(x, [v, w])

We are assuming that V is closed, so the internal hom is defined for every pair of objects.

Enriched functors, or V-functors, between two enriched categories C and D form a functor category [C, D] that is itself enriched over V. The hom-object between two functors f and g is given by the end:

[C, D](f, g) = ∫x D(f x, g x)

We can therefore use the Yoneda lemma in a category of enriched functors, or in the category of enriched profunctors. Therefore the result of the previous section holds in the enriched setting as well:

∫p [(U p)<a, b>, (U p)<s, t>] ≅ (Φ (Aop⊗A)(<a, b>, -)) <s, t>

with the understanding that:

(Aop⊗A)(<a, b>, -)

is an enriched hom functor mapping pairs of objects in A to objects in V, plus the appropriate action on hom-objects. This hom-functor is the profunctor on which Φ acts.

Tambara Modules

An enriched category A may have a monoidal structure of its own. We’ll use the same tensor product notation for its structure as we did for the underlying monoidal category V. There is also a tensorial unit object i in A.

A Tambara module is a V-functor p from Aop⊗A to V, which transforms under the tensor action of A according to a family of morphisms, natural in all three arguments:

α a x y :: p x y -> p (a ⊗ x) (a ⊗ y)

Notice that these are morphisms in the underlying category V, which is also the target of the profunctor.

We impose the usual unit law:

α i x y = id

and associativity:

α a⊗b x y = α a b⊗x b⊗y ∘ α b x y

Strictly speaking, one can define left and right actions separately but, for simplicity, we’ll assume that the product is symmetric (up to isomorphism).

The intuition behind Tambara modules is that some of the profunctor values are not independent of others. Once we have calculated p x y, we can obtain the value of p at any of the points on the path <a⊗x, a⊗y> by applying α.

Tambara modules form a category that’s enriched over V. The construction of this enrichment is non-trivial. The hom-object between two profunctors p and q in a category of profunctors is given by the end:

[Aop⊗A, V](p, q) = ∫<x y> V(p x y, q x y)

This object generalizes the set of natural transformations. Conceptually, not all natural transformations preserve the Tambara structure, so we have to define a subobject of this hom-object that does. The intuition is that the end is a generalized product of its components. It comes equipped with projections. For instance, the projection pr<x,y> picks the component:

V(p x y, q x y)

But there is also a projection pr<a⊗x, a⊗y> that picks:

V(p a⊗x a⊗y, q a⊗x a⊗y)

from the same end. These two objects are not completely independent, because they can both be transformed into the same object. We have:

V(id, αa) :: V(p x y, q x y) -> V(p x y, q a⊗x a⊗y)
V(αa, id) :: V(p a⊗x a⊗y, q a⊗x a⊗y) -> V(p x y, q a⊗x a⊗y)

We are using the fact that the mapping:

<v, w> -> V(v, w)

is itself a profunctor Vop×V -> V, so it can be used to lift pairs of morphisms in V.

Now, given any triple a, x, and y, we want the two paths to be equivalent, which means finding the equalizer between each pair of morphisms:

V(id, αa) ∘ pr<x, y>
V(αa, id) ∘ pr<a⊗x, a⊗y>

Since we want our hom-object to satisfy the above condition for any triple, we have to construct it as an intersection of all those equalizers. Here, an intersection means an object of V together with a family of monomorphisms, each embedding it into a particular equalizer.

It’s possible to construct a forgetful functor from the Tambara category to the category of profunctors [Aop⊗A, V]. It forgets the existence of α and it maps hom-objects between the two categories. Composition in the Tambara category is defined in such a way as to be compatible with this forgetful functor.

The fact that Tambara modules form a category is important, because we want to be able to use the Yoneda lemma in that category.

Tambara Optics

The key observation is that the forgetful functor from the Tambara category has a left adjoint, and that their composition forms a monad in the category of profunctors. We’ll plug this monad into our general formula.

The construction of this monad starts with a comonad that is given by the following end:

(Θ p) s t = ∫c p (c⊗s) (c⊗t)

For a given profunctor p, this comonad builds a new profunctor that is essentially a gigantic product of all values of this profunctor “shifted” by tensoring its arguments with all possible objects c.

The monad we are interested in is the left adjoint to this comonad (calculated using a Kan extension):

(Φ p) s t = ∫ c x y A(s, c⊗x) ⊗ A(c⊗y, t) ⊗ p x y

Notice that we have two separate tensor products in this formula: one in V, between the hom-objects and the profunctor, and one in A, under the hom-objects. This monad takes an arbitrary profunctor p and produces a new profunctor Φ p.

We can now use our earlier formula:

∫p [(U p)<a, b>, (U p)<s, t>] ≅ (Φ (Aop⊗A)(<a, b>, -)) <s, t>

inside the Tambara category. To calculate the right hand side, let’s evaluate the action of Φ on the hom-profunctor:

(Φ (Aop⊗A)(<a, b>, -)) <s, t>
= ∫ c x y A(s, c⊗x) ⊗ A(c⊗y, t) ⊗ (Aop⊗A)(<a, b>, <x, y>)

We can “integrate over” x and y using the Yoneda lemma to get:

∫c A(s, c⊗a) ⊗ A(c⊗b, t)

We get the following result:

∫p [(U p)<a, b>, (U p)<s, t>] ≅ ∫c A(s, c⊗a) ⊗ A(c⊗b, t)

where the end on the left is taken over all Tambara modules, and U is the forgetful functor from the Tambara category to the category of profunctors.

If the category in question is closed, we can use the adjunction:

A(c⊗b, t) ≅ A(c, [b, t])

and “perform the integration” over c to arrive at the set/get formulation:

∫c A(s, c⊗a) ⊗ A(c, [b, t]) ≅ A(s, [b, t]⊗a)

It corresponds to the familiar Haskell lens type:

(s -> b -> t, s -> a)

(This final trick doesn’t work for prisms, because there is no right adjoint to Either.)

Haskell Translation

A Tambara module is parameterized by the choice of the tensor product ten. We can write a general definition:

class (Profunctor p) => TamModule (ten :: * -> * -> *) p where
  leftAction  :: p a b -> p (c `ten` a) (c `ten` b)
  rightAction :: p a b -> p (a `ten` c) (b `ten` c)
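
As a quick sanity check, the function profunctor is a Tambara module for the product (a sketch; the class above additionally needs MultiParamTypeClasses and KindSignatures to compile):

-- Functions act on one component, leaving the context alone:
instance TamModule (,) (->) where
  leftAction  f (c, a) = (c, f a)
  rightAction f (a, c) = (f a, c)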

This can be further specialized for two obvious monoidal structures: product and sum:

type TamProd p = TamModule (,) p
type TamSum p = TamModule Either p

The former is equivalent to what is called a Strong (or Cartesian) profunctor in Haskell, the latter is equivalent to a Choice (or Cocartesian) profunctor.

Replacing ends and coends with universal and existential quantifiers in Haskell, our main formula becomes (pseudocode):

forall p. TamModule ten p => p a b -> p s t 
   ≅ exists c. (s -> c `ten` a, c `ten` b -> t)

The two sides of the isomorphism can be defined as the following data structures:

type TamOptic ten s t a b 
    = forall p. TamModule ten p => p a b -> p s t
data Optic ten s t a b 
    = forall c. Optic (s -> c `ten` a) (c `ten` b -> t)

Choosing product for the tensor, we recover two equivalent definitions of a lens:

type Lens s t a b = forall p. Strong p => p a b -> p s t
data Lens s t a b = forall c. Lens (s -> (c, a)) ((c, b) -> t)
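
Going from the existential representation to the profunctor one uses second' from Data.Profunctor (a sketch; in real code the type and data declarations above would need distinct names):

import Data.Profunctor (Strong (..), dimap)

-- Split s into context and focus, act on the focus, rebuild t.
lensToProf :: Strong p => (s -> (c, a)) -> ((c, b) -> t)
           -> p a b -> p s t
lensToProf split recombine = dimap split recombine . second'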

Choosing the coproduct, we get:

type Prism s t a b = forall p. Choice p => p a b -> p s t
data Prism s t a b = forall c. Prism (s -> Either c a) (Either c b -> t)

These are the well-known existential representations of lenses and prisms.

The monad Φ (or, equivalently, the free functor that generates Tambara modules), is known in Haskell under the name Pastro for product, and Copastro for coproduct:

data Pastro p a b where
  Pastro :: ((y, z) -> b) -> p x y -> (a -> (x, z)) 
            -> Pastro p a b
data Copastro p a b where
  Copastro :: (Either y z -> b) -> p x y -> (a -> Either x z) 
            -> Copastro p a b

They are the left adjoints of Tambara and Cotambara, respectively:

newtype Tambara p a b = Tambara (forall c. p (a, c) (b, c))
newtype Cotambara p a b = Cotambara (forall c. p (Either a c) (Either b c))

which are special cases of the comonad Θ.

Discussion

It’s interesting that the work on Tambara modules has relevance to Haskell optics. It is, however, just one example of an even larger pattern.

The pattern is that we have a family of transformations in some category A. These transformations can be used to select a class of profunctors that have simple transformation laws. Using a tensor product in a monoidal category to transform objects, in essence “multiplying” them, is just one example of such symmetry. A more general pattern involves a family of transformations f that is closed under composition and includes a unit. We specify a transformation law for profunctors:

class Profunctor p => Related p where
    alpha :: Trans f => p a b -> p (f a) (f b)

This requirement picks a class of profunctors that we call Related.

Why are profunctors relevant as carriers of symmetry? It’s because they generalize a relationship between objects. The profunctor transformation law essentially says that if two objects a and b are related through p then so are the transformed objects; and that there is a function α that relates the proofs of this relationship. This is in the spirit of profunctors as proof-relevant relations.

As an analogy, imagine that we are comparing people, and the transformation we’re interested in is aging. We notice that family relationships remain invariant under aging: if a is a sibling of b, they will remain siblings as they age. This is not true about other relationships, for instance being a boss of another person. But family bonds are not the only ones that survive the test of time. Another such relation is being older or younger than the other person.

Now imagine that you pick four people at random points in time and you find out that any time-invariant relation between two of them, a and b, also holds between s and t. You have to conclude that there is some connection between s and age-adjusted a, and between age-adjusted b and t. In other words there exists a time shift that transforms one pair to another.

Considering all possible relations from the class Related corresponds to taking the end over all profunctors from this class:

type Optic s t a b = forall p. Related p =>
    p a b -> p s t

The end is a generalization of a product, so it’s enough that one of the components is empty for the whole end to be empty. It means that, for a particular choice of the four types a, b, s, and t, we have to be able to construct a whole family of morphisms, one for every p. We have seen that this end exists only if the four types are connected in a very peculiar way — for instance, if a and b are somehow embedded in s and t.

In the simplest case, we may choose the four types to be related by the transformation:

s = f a
t = f b

For these types, we know that the end exists:

forall p. Related p => 
    p a b -> p s t

because there is a family of appropriate morphisms: our alpha. In general, though, we can get away with a weaker connection.

Let’s look at an example of a family of transformations generated by pairing with arbitrary type c:

fc a = (c, a)

Profunctors that respect these transformations are Tambara modules over a cartesian product (or, in lens parlance, Strong profunctors). For the choice:

s = (c, a)
t = (c, b)

the end in question trivially exists. As we’ve seen, we can weaken these conditions. It’s enough that one-way (lax) transformations exist:

s -> (c, a)
t <- (c, b)

These morphisms assert that s can be split into a pair, and that t can be constructed from a pair (but not the other way around).

Other Optics

With the understanding that optics may be defined using a family of transformations, we can analyze another optic called the Grate. It’s based on the following family:

type Reader e a = e -> a

Notice that, unlike the case of Tambara modules, this family is parameterized by a contravariant parameter e.

We are interested in profunctors that transform under these transformations:

class Profunctor p => Closed p where
    closed :: p a b -> p (x -> a) (x -> b)

They let us form the optic:

type Grate s t a b = forall p. Closed p => p a b -> p s t
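
The function profunctor is the simplest example of a Closed profunctor (a quick sketch):

-- Post-composition makes (->) Closed:
instance Closed (->) where
  closed f g = f . g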

It turns out that there is a profunctor functor that freely generates Closed profunctors. We have the obvious comonad:

newtype Closure p a b = Closure (forall x. p (x -> a) (x -> b))

and its adjoint monad:

data Environment p u v where
  Environment :: ((c -> y) -> v) -> p x y -> (u -> (c -> x)) 
                 -> Environment p u v

or, in categorical notation:

(Φ p) u v = ∫ c x y A([c, y], v) ⊗ p x y ⊗ A(u, [c, x])

Using our construction, we apply this monad to the hom-profunctor:

(Φ (Aop⊗A)(<a, b>, -)) <s, t>
= ∫ c x y A([c, y], t) ⊗ (Aop⊗A)(<a, b>, <x, y>) ⊗ A(s, [c, x])
≅ ∫ c A([c, b], t) ⊗ A(s, [c, a])

Translating it back to Haskell, we get a representation of Grate as an existential type:

data Grate s t a b = forall c. Grate ((c -> b) -> t) (s -> (c -> a))

This is very similar to the existential representation of a lens or a prism. It has the intuitive interpretation that s can be thought of as a container of a‘s indexed by some hidden type c.

We can also “perform the integration” using the Yoneda lemma, internal-hom-adjunction, and the symmetry of the product:

∫c A([c, b], t) ⊗ A(s, [c, a])
≅ ∫ c A([c, b], t) ⊗ A(s ⊗ c, a)
≅ ∫ c A([c, b], t) ⊗ A(c, [s, a])
≅ A([[s, a], b], t)

to get the more familiar form:

Grate s t a b ≅ ((s -> a) -> b) -> t

Conclusion

I find it fascinating that constructions that were first discovered in Haskell to make Haskell’s optics composable have their categorical counterparts. This was not at all obvious, if only because some of them use parametricity arguments. Parametricity is the property of the language, not easily translatable to category theory. Now we know that the profunctor formulation of isos, lenses, prisms, and grates follows from the Yoneda lemma. The work is not complete yet. I haven’t been able to derive the same formulation for traversals, which combine two different tensor products plus some monoidal constraints.

Bibliography

  1. Haskell lens library, Edward Kmett
  2. Distributors on a tensor category, D. Tambara
  3. Doubles for monoidal categories, Craig Pastro, Ross Street
  4. Profunctor optics, Modular data accessors,
    Matthew Pickering, Jeremy Gibbons, and Nicolas Wu
  5. CPS based functional references, Twan van Laarhoven
  6. Isomorphism lenses, Twan van Laarhoven
  7. Theorem for Second-Order Functionals, Mauro Jaskellioff and Russell O’Connor

The Free Theorem for Ends

In Haskell, the end of a profunctor p is defined as a product of all diagonal elements:

forall c. p c c

together with a family of projections:

pi :: Profunctor p => forall c. (forall a. p a a) -> p c c
pi e = e

In category theory, the end must also satisfy the wedge condition which, in (type-annotated) Haskell, could be written as:

dimap f idb . pib = dimap ida f . pia

for any f :: a -> b.

Using a suitable formulation of parametricity, this equation can be shown to be a free theorem. Let’s first review the free theorem for functors before generalizing it to profunctors.

Functor Characterization

You may think of a functor as a container that has a shape and contents. You can manipulate the contents without changing the shape using fmap. In general, when applying fmap, you not only change the values stored in the container, you change their type as well. To really capture the shape of the container, you have to consider not only all possible mappings, but also more general relations between different contents.

A function is directional, and so is fmap, but relations don’t favor either side. They can map multiple values to the same value, and they can map one value to multiple values. Any relation on values induces a relation on containers. For a given functor F, if there is a relation a between type A and type A':

A <=a=> A'

then there is a relation between type F A and F A':

F A <=(F a)=> F A'

We call this induced relation F a.

For instance, consider the relation between students and their grades. Each student may have multiple grades (if they take multiple courses) so this relation is not a function. Given a list of students and a list of grades, we would say that the lists are related if and only if they match at each position. It means that they have to be equal length, and the first grade on the list of grades must belong to the first student on the list of students, and so on. Of course, a list is a very simple container, but this property can be generalized to any functor we can define in Haskell using algebraic data types.

The fact that fmap doesn’t change the shape of the container can be expressed as a “theorem for free” using relations. We start with two related containers:

xs :: F A
xs':: F A'

where A and A' are related through some relation a. We want related containers to be fmapped to related containers. But we can’t use the same function to map both containers, because they contain different types. So we have to use two related functions instead. Related functions map related types to related types so, if we have:

f :: A -> B
f':: A'-> B'

and A is related to A' through a, we want B to be related to B' through some relation b. Also, we want the two functions to map related elements to related elements. So if x is related to x' through a, we want f x to be related to f' x' through b. In that case, we’ll say that f and f' are related through the relation that we call a->b:

f <=(a->b)=> f'

For instance, if f is mapping students’ SSNs to last names, and f' is mapping letter grades to numerical grades, the results will be related through the relation between students’ last names and their numerical grades.

To summarize, we require that for any two relations:

A <=a=> A'
B <=b=> B'

and any two functions:

f :: A -> B
f':: A'-> B'

such that:

f <=(a->b)=> f'

and any two containers:

xs :: F A
xs':: F A'

we have:

if       xs <=(F a)=> xs'
then   (F f) xs <=(F b)=> (F f') xs'

This characterization can be extended, with suitable changes, to contravariant functors.

Profunctor Characterization

A profunctor is a functor of two variables. It is contravariant in the first variable and covariant in the second. A profunctor can lift two functions simultaneously using dimap:

class Profunctor p where
    dimap :: (a -> b) -> (c -> d) -> p b c -> p a d

We want dimap to preserve relations between profunctor values. We start by picking any relations a, b, c, and d between types:

A <=a=> A'
B <=b=> B'
C <=c=> C'
D <=d=> D'

For any functions:

f  :: A -> B
f' :: A'-> B'
g  :: C -> D
g' :: C'-> D'

that are related through the following relations induced by function types:

f <=(a->b)=> f'
g <=(c->d)=> g'

we define:

xs :: p B C
xs':: p B'C'

The following condition must be satisfied:

if             xs <=(p b c)=> xs'
then   (p f g) xs <=(p a d)=> (p f' g') xs'

where p f g stands for the lifting of the two functions by the profunctor p.

Here’s a quick sanity check. If b and c are functions:

b :: B'-> B
c :: C -> C'

then the relation:

xs <=(p b c)=> xs'

becomes:

xs' = dimap b c xs

If a and d are functions:

a :: A'-> A
d :: D -> D'

then these relations:

f <=(a->b)=> f'
g <=(c->d)=> g'

become:

f . a = b . f'
d . g = g'. c

and this relation:

(p f g) xs <=(p a d)=> (p f' g') xs'

becomes:

(p f' g') xs' = dimap a d ((p f g) xs)

Substituting xs', we get:

dimap f' g' (dimap b c xs) = dimap a d (dimap f g xs)

and using functoriality:

dimap (b . f') (g'. c) = dimap (f . a) (d . g)

which is identically true.

Special Case of Profunctor Characterization

We are interested in the diagonal elements of a profunctor. Let’s first specialize the general case to:

C = B
C'= B'
c = b

to get:

xs :: p B B
xs':: p B'B'

and

if             xs <=(p b b)=> xs'
then   (p f g) xs <=(p a d)=> (p f' g') xs'

Choosing the following substitutions:

A = A'= B
D = D'= B'
a = id
d = id
f = id
g'= id
f'= g

we get:

if              xs <=(p b b)=> xs'
then   (p id g) xs <=(p id id)=> (p g id) xs'

Since p id id is the identity relation, we get:

(p id g) xs = (p g id) xs'

or

dimap id g xs = dimap g id xs'

Free Theorem

We apply the free theorem to the term xs:

xs :: forall c. p c c

It must be related to itself through the relation that is induced by its type:

xs <=(forall b. p b b)=> xs

for any relation b:

B <=b=> B'

Universal quantification translates to a relation between different instantiations of the polymorphic value:

xsB <=(p b b)=> xsB'

Notice that we can write:

xsB = piB xs
xsB'= piB'xs

using the projections we defined earlier.

We have just shown that this equation leads to:

dimap id g xsB = dimap g id xsB'

which shows that the wedge condition is indeed a free theorem.

Natural Transformations

Here’s another quick application of the free theorem. The set of natural transformations may be represented as an end of the following profunctor:

type NatP a b = F a -> G b
instance Profunctor NatP where
    dimap f g alpha = fmap g . alpha . fmap f

The free theorem tells us that for any mu :: forall c. NatP c c:

(dimap id g) mu = (dimap g id) mu

which is the naturality condition:

mu . fmap g = fmap g . mu

It’s been known for some time that, in Haskell, naturality follows from parametricity, so this is not surprising.

Acknowledgment

I’d like to thank Edward Kmett for reviewing the draft of this post.

