Functors from a monoidal category C to Set form a monoidal category with Day convolution as product. A monoid in this category is a lax monoidal functor. We define an initial algebra using a higher order functor and show that it corresponds to a free lax monoidal functor.

Recently I’ve been obsessing over monoidal functors. I have already written two blog posts, one about free monoidal functors and one about free monoidal profunctors. I followed some ideas from category theory but, being a programmer, I leaned more towards writing code than being preoccupied with mathematical rigor. That left me longing for more elegant proofs of the kind I’ve seen in mathematical literature.

I believe that there isn’t that much difference between programming and math. There is a whole spectrum of abstractions ranging from assembly language, weakly typed languages, strongly typed languages, functional programming, set theory, type theory, category theory, and homotopy type theory. Each language comes with its own bag of tricks. Even within one language one starts with some relatively low level encodings and, with experience, progresses towards higher abstractions. I’ve seen it in Haskell, where I started by hand coding recursive functions, only to realize that I can be more productive using bulk operations on types, then building recursive data structures and applying recursive schemes, eventually diving into categories of functors and profunctors.

I’ve been collecting my own bag of mathematical tricks, mostly by reading papers and, more recently, talking to mathematicians. I’ve found that mathematicians are happy to share their knowledge even with outsiders like me. So when I got stuck trying to clean up my monoidal functor code, I reached out to Emily Riehl, who forwarded my query to Alexander Campbell from the Centre for Australian Category Theory. Alex’s answer was a very elegant proof of what I was clumsily trying to show in my previous posts. In this blog post I will explain his approach. I should also mention that most of the results presented in this post have already been covered in a comprehensive paper by Rivas and Jaskelioff, Notions of Computation as Monoids.

Lax Monoidal Functors

To properly state the problem, I’ll have to start with a lot of preliminaries. This will require some prior knowledge of category theory, all within the scope of my blog/book.

We start with a monoidal category C, that is a category in which you can “multiply” objects using some kind of a tensor product \otimes. For any pair of objects a and b there is an object a \otimes b; and this mapping is functorial in both arguments (that is, you can also “multiply” morphisms). A monoidal category will also have a special object I that is the unit of multiplication. In general, the unit and associativity laws are satisfied up to isomorphism:

\lambda : I \otimes a \cong a

\rho : a \otimes I \cong a

\alpha : (a \otimes b) \otimes c \cong a \otimes (b \otimes c)

These isomorphisms are called, respectively, the left and right unitors, and the associator.

The most familiar example of a monoidal category is the category of types and functions, in which the tensor product is the cartesian product (pair type) and the unit is the unit type ().

Let’s now consider functors from C to the category of sets, Set. These functors also form a category called [C, Set], in which morphisms between any two functors are natural transformations.

In Haskell, a natural transformation is approximated by a polymorphic function:

type f ~> g = forall x. f x -> g x

The category Set is monoidal, with cartesian product \times serving as a tensor product, and the singleton set 1 as the unit.

We are interested in functors in [C, Set] that preserve the monoidal structure. Such a functor should map the tensor product in C to the cartesian product in Set and the unit I to the singleton set 1. Accordingly, a strong monoidal functor F comes with two isomorphisms:

F a \times F b \cong F (a \otimes b)

1 \cong F I

We are interested in a weaker version of a monoidal functor called lax monoidal functor, which is equipped with a one-way natural transformation:

\mu : F a \times F b \to F (a \otimes b)

and a one-way morphism:

\eta : 1 \to F I

A lax monoidal functor must also preserve unit and associativity laws.

[Diagram. Associativity law: \alpha is the associator in the appropriate category (top arrow, in Set; bottom arrow, in C).]

In Haskell, a lax monoidal functor can be defined as:

class Monoidal f where
  eta :: () -> f ()
  mu  :: (f a, f b) -> f (a, b)

It’s also known as the applicative functor; in Haskell, this formulation is equivalent to Applicative.
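
To make the connection concrete, here is a minimal sketch, assuming the Monoidal class above plus a Functor constraint (the helper names are mine):

pureM :: (Functor f, Monoidal f) => a -> f a
pureM a = fmap (const a) (eta ())

apM :: (Functor f, Monoidal f) => f (a -> b) -> f a -> f b
apM fg fa = fmap (\(g, a) -> g a) (mu (fg, fa))

-- and in the other direction:
etaA :: Applicative f => () -> f ()
etaA = pure

muA :: Applicative f => (f a, f b) -> f (a, b)
muA (fa, fb) = (,) <$> fa <*> fb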

Day Convolution and Monoidal Functors

It turns out that our category of functors [C, Set] is also equipped with monoidal structure. Two functors F and G can be “multiplied” using Day convolution:

(F \star G) c = \int^{a b} C(a \otimes b, c) \times F a \times G b

Here, C(a \otimes b, c) is the hom-set, or the set of morphisms from a \otimes b to c. The integral sign stands for a coend, which can be interpreted as a generalization of an (infinite) coproduct (modulo some identifications). An element of this coend can be constructed by injecting a triple consisting of a morphism from C(a \otimes b, c), an element of the set F a, and an element of the set G b, for some a and b.

In Haskell, a coend corresponds to an existential type, so the Day convolution can be defined as:

data Day f g c where
  Day :: ((a, b) -> c, f a, g b) -> Day f g c

(The actual definition uses currying.)
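
For reference, here is a sketch of that curried form (renamed to avoid a clash with the definition above); up to naming, this is the shape used by Data.Functor.Day in the kan-extensions package:

data DayC f g c where
  DayC :: f a -> g b -> (a -> b -> c) -> DayC f g c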

The unit with respect to Day convolution is the hom-functor:

C(I, -)

which assigns to every object c the set of morphisms C(I, c) and acts on morphisms by post-composition.

The proof that this is the unit is instructive, as it uses the standard trick: the co-Yoneda lemma. In the coend form, the co-Yoneda lemma reads, for a covariant functor F:

\int^x C(x, a) \times F x \cong F a

and for a contravariant functor H:

\int^x C(a, x) \times H x \cong H a

(The mnemonic is that the integration variable must appear twice, once in the negative, and once in the positive position. An argument to a contravariant functor is in a negative position.)

Indeed, substituting C(I, -) for the first functor in Day convolution produces:

(C(I, -) \star G) c = \int^{a b} C(a \otimes b, c) \times C(I, a) \times G b

which can be “integrated” over a using the co-Yoneda lemma to yield:

\int^{b} C(I \otimes b, c) \times G b

and, since I is the unit of the tensor product, this can be further “integrated” over b to give G c. The right unit law is analogous.
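
Here is a small Haskell witness of this left unit law, a sketch written against the uncurried Day defined above, with the reader functor ((->) ()) playing the role of C(I, -):

-- left unitor for Day convolution and its inverse
lambdaDay :: Functor g => Day ((->) ()) g c -> g c
lambdaDay (Day (f, i, gb)) = fmap (\b -> f (i (), b)) gb

lambdaDayInv :: g c -> Day ((->) ()) g c
lambdaDayInv gc = Day (snd, id, gc)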

To summarize, we are dealing with three monoidal categories: C with the tensor product \otimes and unit I, Set with the cartesian product and singleton 1, and a functor category [C, Set] with Day convolution and unit C(I, -).

A Monoid in [C, Set]

A monoidal category can be used to define monoids. A monoid is an object m equipped with two morphisms — unit and multiplication:

\eta : I \to m

\mu : m \otimes m \to m

[Diagram: the unit and multiplication of a monoid.]

These morphisms must satisfy unit and associativity conditions, which are best illustrated using commuting diagrams.

[Diagram. Unit laws: λ and ρ are the unitors.]

[Diagram. Associativity law: α is the associator.]

This definition of a monoid can be translated directly to Haskell:

class Monoid m where
  eta :: () -> m
  mu  :: (m, m) -> m

It so happens that a lax monoidal functor is exactly a monoid in our functor category [C, Set]. Since objects in this category are functors, a monoid is a functor F equipped with two natural transformations:

\eta : C(I, -) \to F

\mu : F \star F \to F

At first sight, these don’t look like the morphisms in the definition of a lax monoidal functor. We need some new tricks to show the equivalence.

Let’s start with the unit. The first trick is to consider not one natural transformation but the whole hom-set:

[C, Set](C(I, -), F)

The set of natural transformations can be represented as an end (which, incidentally, corresponds to the forall quantifier in the Haskell definition of natural transformations):

\int_c Set(C(I, c), F c)

The next trick is to use the Yoneda lemma which, in the end form reads:

\int_c Set(C(a, c), F c) \cong F a

In more familiar terms, this formula asserts that the set of natural transformations from the hom-functor C(a, -) to F is isomorphic to F a.

There is also a version of the Yoneda lemma for contravariant functors:

\int_c Set(C(c, a), H c) \cong H a

The application of Yoneda to our formula produces F I, which is in one-to-one correspondence with morphisms 1 \to F I.

We can use the same trick to bundle up the natural transformations that define multiplication \mu:

[C, Set](F \star F, F)

and then represent this set as an end:

\int_c Set((F \star F) c, F c)

Expanding the definition of Day convolution, we get:

\int_c Set(\int^{a b} C(a \otimes b, c) \times F a \times F b, F c)

The next trick is to pull the coend out of the hom-set. This trick relies on the co-continuity of the hom-functor in the first argument: a hom-functor from a colimit is isomorphic to a limit of hom-functors. In programmer-speak: a function from a sum type is equivalent to a product of functions (we call it case analysis). A coend is a generalized colimit, so when we pull it out of a hom-functor, it turns into a limit, or an end. Here’s the general formula, in which p x y is an arbitrary profunctor:

Set(\int^x p x x, y) \cong \int_x Set(p x x, y)

Let’s apply it to our formula:

\int_c \int_{a b} Set(C(a \otimes b, c) \times F a \times F b, F c)

We can combine the ends under one integral sign (it’s allowed by the Fubini theorem) and move to the next trick: hom-set adjunction:

Set(a \times b, c) \cong Set(a, b \to c)

In programming this is known as currying. This adjunction exists because Set is a cartesian closed category. We’ll use this adjunction to move F a \times F b to the right:

\int_{a b c} Set(C(a \otimes b, c), (F a \times F b) \to F c)

Using the Yoneda lemma we can “perform the integration” over c  to get:

\int_{a b} ((F a \times F b) \to F (a \otimes b))

This is exactly the set of natural transformations used in the definition of a lax monoidal functor. We have established a one-to-one correspondence between monoidal multiplication and the lax monoidal mapping \mu.
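
Here is the same correspondence sketched in Haskell, using the uncurried Day defined earlier (the record and names are mine; RankNTypes is needed for the quantified fields):

-- a monoid in ([C, Set], Day, C(I,-)): unit and multiplication as natural transformations
data DayMonoid f = DayMonoid
  { dEta :: forall x. (() -> x) -> f x   -- C(I,-) ~> f
  , dMu  :: forall x. Day f f x -> f x   -- (f ⋆ f) ~> f
  }

-- recovering the lax monoidal (Monoidal) operations
etaOf :: DayMonoid f -> () -> f ()
etaOf m () = dEta m id

muOf :: DayMonoid f -> (f a, f b) -> f (a, b)
muOf m (fa, fb) = dMu m (Day (id, fa, fb))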

Of course, a complete proof would require translating monoid laws to their lax monoidal counterparts. You can find more details in Rivas and Jaskelioff, Notions of Computation as Monoids.

We’ll use the fact that a monoid in the category [C, Set] is a lax monoidal functor later.

Alternative Derivation

Incidentally, there are shorter derivations of these formulas that use the trick borrowed from the proof of the Yoneda lemma, namely, evaluating things at the identity morphism. (Whenever mathematicians speak of Yoneda-like arguments, this is what they mean.)

Starting from F \star F \to F and plugging in the Day convolution formula, we get:

\int^{a' b'} C(a' \otimes b', c) \times F a' \times F b' \to F c

There is a component of this natural transformation at (a \otimes b) that is the morphism:

\int^{a' b'} C(a' \otimes b', a \otimes b) \times F a' \times F b' \to F (a \otimes b)

This morphism must be defined for all possible elements of the coend. In particular, it must be defined for triples of the form (id_{a \otimes b}, x, y), with x an element of F a and y an element of F b, giving us the \mu we seek.

There is also an alternative derivation for the unit: Take the component of the natural transformation \eta at I:

\eta_I : C(I, I) \to F I

C(I, I) is guaranteed to contain at least one element, the identity morphism id_I. The element \eta_I \, id_I of F I then defines the required morphism 1 \to F I: it is the image of the single element of the singleton 1.

Free Monoid

Given a monoidal category C, we might be able to define a whole lot of monoids in it. These monoids form a category Mon(C). Morphisms in this category correspond to those morphisms in C that preserve monoidal structure.

Consider, for instance, two monoids m and m'. A monoid morphism is a morphism f : m \to m' in C such that the unit of m' is related to the unit of m:

\eta' = f \circ \eta

[Diagram: preservation of the unit by a monoid morphism.]

and similarly for multiplication:

\mu' \circ (f \otimes f) = f \circ \mu

Remember, we assumed that the tensor product is functorial in both arguments, so it can be used to lift a pair of morphisms.

[Diagram: preservation of multiplication by a monoid morphism.]

There is an obvious forgetful functor U from Mon(C) to C which, for every monoid, picks its underlying object in C and maps every monoid morphism to its underlying morphism in C.

The left adjoint to this functor, if it exists, will map an object a in C to a free monoid L a.

The intuition is that a free monoid L a is a list of a.

In Haskell, a list is defined recursively:

data List a = Nil | Cons a (List a)

Such a recursive definition can be formalized as a fixed point of a functor. For a list of a, this functor is:

data ListF a x = NilF | ConsF a x

Notice the peculiar structure of this functor. It’s a sum type: The first part is a singleton, which is isomorphic to the unit type (). The second part is a product of a and x. Since the unit type is the unit of the product in our monoidal category of types, we can rewrite this functor symbolically as:

\Phi a x = I + a \otimes x

It turns out that this formula works in any monoidal category that has finite coproducts (sums) preserved by the tensor product. The fixed point of this functor, for a given a, is the free monoid on a; letting a vary, these fixed points define the free functor that generates free monoids.

I’ll define what is meant by the fixed point and prove that it defines a monoid. The proof that it’s the result of a free/forgetful adjunction is a bit involved, so I’ll leave it for a future blog post.

Algebras

Let’s consider algebras for the functor F. Such an algebra is defined as an object x called the carrier, and a morphism:

f : F x \to x

called the structure map or the evaluator.

In Haskell, an algebra is defined as:

type Algebra f x = f x -> x

There may be a lot of algebras for a given functor. In fact there is a whole category of them. We define an algebra morphism between two algebras (x, f : F x \to x) and (x', f' : F x' \to x') as a morphism \nu : x \to x' which commutes with the two structure maps:

\nu \circ f = f' \circ F \nu

[Diagram: an algebra morphism.]

The initial object in the category of algebras is called the initial algebra, or the fixed point of the functor that generates these algebras. As the initial object, it has a unique algebra morphism to any other algebra. This unique morphism is called a catamorphism.

In Haskell, the fixed point of a functor f is defined recursively:

newtype Fix f = In { out :: f (Fix f) }

with, for instance:

type List a = Fix (ListF a)

A catamorphism is defined as:

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

A list catamorphism is called foldr.
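
To make this concrete, here is foldr recovered as a catamorphism, a sketch using the definitions above (the Functor instance for ListF, which cata needs, is spelled out):

instance Functor (ListF a) where
  fmap _ NilF        = NilF
  fmap g (ConsF a x) = ConsF a (g x)

foldrList :: (a -> b -> b) -> b -> List a -> b
foldrList f z = cata alg
  where
    alg NilF        = z
    alg (ConsF a b) = f a b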

We want to show that the initial algebra L a of the functor:

\Phi a x = I + a \otimes x

is a free monoid. Let’s see under what conditions it is a monoid.

Initial Algebra is a Monoid

In this section I will show you how to concatenate lists the hard way.

We know that the function type b \to c (a.k.a. the exponential c^b) is the right adjoint to the product:

Set(a \times b, c) \cong Set(a, b \to c)

The function type is also called the internal hom.

In a monoidal category it’s sometimes possible to define an internal hom-object, denoted [b, c], as the right adjoint to the tensor product:

curry : C(a \otimes b, c) \cong C(a, [b, c])

If this adjoint exists, the category is called closed monoidal.

In a closed monoidal category, the initial algebra L a of the functor \Phi a x = I + a \otimes x is a monoid. (In particular, a Haskell list of a, which is a fixed point of ListF a, is a monoid.)

To show that, we have to construct two morphisms corresponding to unit and multiplication (in Haskell, empty list and concatenation):

\eta : I \to L a

\mu : L a \otimes L a \to L a

What we know is that L a is a carrier of the initial algebra for \Phi a, so it is equipped with the structure map:

I + a \otimes L a \to L a

which is equivalent to a pair of morphisms:

\alpha : I \to L a

\beta : a \otimes L a \to L a

Notice that, in Haskell, these correspond to the two list constructors: Nil and Cons, or, in terms of the fixed point:

nil :: () -> List a
nil () = In NilF

cons :: a -> List a -> List a
cons a as = In (ConsF a as)

We can immediately use \alpha to implement \eta.

The second one, \beta, can be rewritten using the hom adjunction as:

\bar{\beta} = curry \, \beta

\bar{\beta} : a \to [L a, L a]

Notice that, if we could prove that [L a, L a] is a carrier for an algebra of the same functor \Phi a, we would know that there is a unique catamorphism from the initial algebra L a:

\kappa_{[L a, L a]} : L a \to [L a, L a]

which, by the hom adjunction, would give us the desired multiplication:

\mu : L a \otimes L a \to L a

Let’s establish some useful lemmas first.

Lemma 1: For any object x in a closed monoidal category, [x, x] is a monoid.

This is a generalization of the idea that endomorphisms form a monoid, in which the identity morphism is the unit and composition is the multiplication. Here, the internal hom-object [x, x] generalizes the set of endomorphisms.

Proof: The unit:

\eta : I \to [x, x]

follows, through adjunction, from the unit law in the monoidal category:

\lambda : I \otimes x \to x

(In Haskell, this is a fancy way of writing mempty = id.)

Multiplication takes the form:

\mu : [x, x] \otimes [x, x] \to [x, x]

which is reminiscent of composition of endomorphisms. In Haskell we would say:

mappend = (.)

By adjunction, we get:

curry^{-1} \, \mu : [x, x] \otimes [x, x] \otimes x \to x

We have at our disposal the counit eval of the adjunction:

eval : [x, x] \otimes x \to x

We can apply it twice to get:

\mu = curry (eval \circ (id \otimes eval))

In Haskell, we could express this as:

mu :: ((x -> x), (x -> x)) -> (x -> x)
mu (f, g) = \x -> f (g x)

Here, the counit of the adjunction turns into simple function application.

\square
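
For reference, here is this monoid spelled out in Haskell using the standard Semigroup/Monoid classes; it is essentially Data.Monoid's Endo, renamed here to avoid a clash:

newtype Endo' x = Endo' { appEndo' :: x -> x }

instance Semigroup (Endo' x) where
  Endo' f <> Endo' g = Endo' (f . g)   -- multiplication: composition

instance Monoid (Endo' x) where
  mempty = Endo' id                    -- unit: identity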

Lemma 2: For every morphism f : a \to m, where m is a monoid, we can construct an algebra of the functor \Phi a with m as its carrier.

Proof: Since m is a monoid, we have two morphisms:

\eta : I \to m

\mu : m \otimes m \to m

To show that m is a carrier of our algebra, we need two morphisms:

\alpha : I \to m

\beta : a \otimes m \to m

The first one is the same as \eta, the second can be implemented as:

\beta = \mu \circ (f \otimes id)

In Haskell, we would do case analysis:

mapAlg :: Monoid m => (a -> m) -> ListF a m -> m
mapAlg _ NilF = mempty
mapAlg f (ConsF a m) = f a `mappend` m

\square

We can now build a larger proof. By lemma 1, [L a, L a] is a monoid with:

\mu = curry (eval \circ (id \otimes eval))

We also have a morphism \bar{\beta} : a \to [L a, L a] so, by lemma 2, [L a, L a] is also a carrier for the algebra:

\alpha = \eta

\beta = \mu \circ (\bar{\beta} \otimes id)

It follows that there is a unique catamorphism \kappa_{[L a, L a]} from the initial algebra L a to it, and we know how to use it to implement monoidal multiplication for L a. Therefore, L a is a monoid.

Translating this to Haskell, \bar{\beta} is the curried form of Cons and what we have shown is that concatenation (multiplication of lists) can be implemented as a catamorphism:

conc :: List a -> List a -> List a
conc x y = cata alg x y
  where alg NilF        = id
        alg (ConsF a t) = (cons a) . t

The type:

List a -> (List a -> List a)

(parentheses added for emphasis) corresponds to L a \to [L a, L a].

It’s interesting that concatenation can be described in terms of the monoid of list endomorphisms. Think of turning an element a of the list into a transformation, which prepends this element to its argument (that’s what \bar{\beta} does). These transformations form a monoid. We have an algebra that turns the unit I into an identity transformation on lists, and a pair a \otimes t (where t is a list transformation) into the composite \bar{\beta} a \circ t. The catamorphism for this algebra takes a list L a and turns it into one composite list transformation. We then apply this transformation to another list and get the final result: the concatenation of two lists. \square
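
Here is a quick usage example, assuming the constructors and conc defined above:

-- two short lists and their concatenation (the list 1, 2, 3)
xs, ys, zs :: List Int
xs = cons 1 (cons 2 (nil ()))
ys = cons 3 (nil ())
zs = conc xs ys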

Incidentally, lemma 2 also works in reverse: If a monoid m is a carrier of the algebra of \Phi a, then there is a morphism f : a \to m. This morphism can be thought of as inserting generators represented by a into the monoid m.

Proof: If m is both a monoid and a carrier for the algebra of \Phi a, we can construct the morphism a \to m by first applying the right unitor in reverse to go from a to a \otimes I, then applying id_a \otimes \eta to get a \otimes m. This can be right-injected into the coproduct I + a \otimes m and then evaluated down to m using the structure map for the algebra on m.

a \to a \otimes I \to a \otimes m \to I + a \otimes m \to m

[Diagram: insertion of a generator into a monoid.]

In Haskell, this corresponds to a construction and evaluation of:

ConsF a mempty

\square
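
In code, this insertion of generators is a one-liner (a sketch, using the standard Monoid class as in mapAlg above):

-- given an algebra of ListF a carried by a monoid m, insert a generator
insertGen :: Monoid m => (ListF a m -> m) -> a -> m
insertGen alg a = alg (ConsF a mempty)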

Free Monoidal Functor

Let’s go back to our functor category. We started with a monoidal category C and considered a functor category [C, Set]. We have shown that [C, Set] is itself a monoidal category with Day convolution as tensor product and the hom functor C(I, -) as unit. A monoid in this category is a lax monoidal functor.

The next step is to build a free monoid in [C, Set], which would give us a free lax monoidal functor. We have just seen such a construction in an arbitrary closed monoidal category. We just have to translate it to [C, Set]. We do this by replacing objects with functors and morphisms with natural transformations.

Our construction relied on defining an initial algebra for the functor:

I + a \otimes b

A straightforward translation of this formula to the functor category [C, Set] produces a higher order endofunctor:

A_F G = C(I, -) + F \star G

It defines, for any functor F, a mapping from a functor G to a functor A_F G. (It also maps natural transformations.)

We can now use A_F to define (higher-order) algebras. An algebra consists of a carrier — here, a functor T — and a structure map — here, a natural transformation:

A_F T \to T

The initial algebra for this higher-order endofunctor defines a monoid, and therefore a lax monoidal functor. We have shown it for an arbitrary closed monoidal category. So the only question is whether our functor category with Day convolution is closed.

We want to define the internal hom-object in [C, Set] that satisfies the adjunction:

[C, Set](F \star G, H) \cong [C, Set](F, [G, H])

We start with the set of natural transformations — the hom-set in [C, Set]:

[C, Set](F \star G, H)

We rewrite it as an end over c, and use the formula for Day convolution:

\int_c Set(\int^{a b} C(a \otimes b, c) \times F a \times G b, H c)

We use the co-continuity trick to pull the coend out of the hom-set and turn it into an end:

\int_{c a b} Set(C(a \otimes b, c) \times F a \times G b, H c)

Keeping in mind that our goal is to end up with F a on the left, we use the regular hom-set adjunction to shuffle the other two terms to the right:

\int_{c a b} Set(F a, C(a \otimes b, c) \times G b \to H c)

The hom-functor is continuous in the second argument, so we can sneak the end over b c under it:

\int_{a} Set(F a, \int_{b c} C(a \otimes b, c) \times G b \to H c)

We end up with a set of natural transformations from the functor F to the functor we will call:

[G, H] = \int_{b c} (C(- \otimes b, c) \times G b \to H c)

We therefore identify this functor as the right adjoint (internal hom-object) for Day convolution. We can further simplify it by using the hom-set adjunction:

\int_{b c} (C(- \otimes b, c) \to (G b \to H c))

and applying the Yoneda lemma to get:

[G, H] = \int_{b} (G b \to H (- \otimes b))

In Haskell, we would write it as:

newtype DayHom f g a = DayHom (forall b . f b -> g (a, b))

Since Day convolution has a right adjoint, we conclude that the fixed point of our higher order functor defines a free lax monoidal functor. We can write it in a recursive form as:

Free_F = C(I, -) + F \star Free_F

or, in Haskell:

data FreeMonR f t =
      Done t
    | More (Day f (FreeMonR f) t)
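
As a sanity check, here is a sketch showing that FreeMonR f is itself lax monoidal, written against the uncurried Day and the Monoidal class from earlier; mu proceeds by structural recursion on its left argument, just like list concatenation:

instance Functor f => Functor (FreeMonR f) where
  fmap h (Done t)                = Done (h t)
  fmap h (More (Day (k, fa, g))) = More (Day (h . k, fa, g))

instance Functor f => Monoidal (FreeMonR f) where
  eta () = Done ()
  -- a pure value is paired with everything the other computation produces
  mu (Done t, g) = fmap (\u -> (t, u)) g
  -- otherwise, peel off one layer of f and recurse
  mu (More (Day (k, fa, g)), h) =
    More (Day (\(a, (b, c)) -> (k (a, b), c), fa, mu (g, h)))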

Free Monad

This blog post wouldn’t be complete without mentioning that the same construction works for monads. Famously, a monad is a monoid in the category of endofunctors. Endofunctors form a monoidal category with functor composition as tensor product and the identity functor as unit. The fact that we can construct a free monad using the formula:

FreeM_F = Id + F \circ FreeM_F

is due to the observation that functor composition has a right adjoint, which is the right Kan extension. Unfortunately, due to size issues, this Kan extension doesn’t always exist. I’ll quote Alex Campbell here: “By making suitable size restrictions, we can give conditions for free monads to exist: for example, free monads exist for accessible endofunctors on locally presentable categories; a special case is that free monads exist for finitary endofunctors on Set, where finitary means the endofunctor preserves filtered colimits (more generally, an endofunctor is accessible if it preserves \kappa-filtered colimits for some regular cardinal number \kappa).”
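
Setting those size issues aside, in Haskell the recursive formula translates to the familiar free monad; here is a minimal sketch:

-- FreeM_F = Id + F ∘ FreeM_F, written as a recursive data type
data FreeM f a
  = PureM a
  | RollM (f (FreeM f a))

instance Functor f => Functor (FreeM f) where
  fmap h (PureM a)  = PureM (h a)
  fmap h (RollM fx) = RollM (fmap (fmap h) fx)

instance Functor f => Applicative (FreeM f) where
  pure = PureM
  PureM g  <*> x = fmap g x
  RollM fg <*> x = RollM (fmap (<*> x) fg)

instance Functor f => Monad (FreeM f) where
  PureM a  >>= k = k a
  RollM fx >>= k = RollM (fmap (>>= k) fx)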

Conclusion

As we acquire experience in programming, we learn more tricks of the trade. A seasoned programmer knows how to read a file, parse its contents, or sort an array. In category theory we use a different bag of tricks. We bunch morphisms into hom-sets, move ends and coends, use Yoneda to “integrate,” use adjunctions to shuffle things around, and use initial algebras to define recursive types.

Results derived in category theory can be translated to definitions of functions or data structures in programming languages. A lax monoidal functor becomes an Applicative. A free monoidal functor becomes:

data FreeMonR f t =
      Done t
    | More (Day f (FreeMonR f) t)

What’s more, since the derivation made very few assumptions about the category C (other than that it’s monoidal), this result can be immediately applied to profunctors (replacing C with C^{op}\times C) to produce:

data FreeMon p s t where
     DoneFM :: t -> FreeMon p s t
     MoreFM :: p a b -> FreeMon p c d -> 
                        (b -> d -> t) -> 
                        (s -> (a, c)) -> FreeMon p s t

Replacing Day convolution with endofunctor composition gives us a free monad:

data FreeMonadF f g a = 
    DoneFM a 
  | MoreFM (Compose f g a)

Category theory is also the source of laws (commuting diagrams) that can be used in equational reasoning to verify the correctness of programming constructs.

Writing this post has been a great learning experience. Every time I got stuck, I would ask Alex for help, and he would immediately come up with yet another algebra and yet another catamorphism. This was so different from the approach I would normally take, which would be to get bogged down in inductive proofs over recursive data structures.

I want you to perform a little experiment. Take an egg, put it in a blender, and run it for ten seconds.

Oh, I forgot to tell you to first remove the eggshell. No problem, let’s run the blender in the opposite direction for ten seconds, and we’ll get the egg back.

It doesn’t work, does it? The reason is entropy. The second law of thermodynamics states that the entropy of an isolated system can never decrease. Blending an egg increased its entropy. Unblending it would decrease entropy. But there is a workaround: feed the blended egg to a chicken, and you will get a new egg. Granted, you might have to feed it more than one egg, but still: the miracle of life! Life seems to go against the general trend of the second law of thermodynamics.

Of course, life cannot flourish in a completely isolated system, so the laws of physics are safe. A chicken can produce an egg only by increasing the entropy of its environment and, indirectly, that of the Sun.

Entropy and the Universe

We have some kind of intuitive understanding of entropy as the degree of disorderliness. An egg is highly “ordered,” in that it has an ovoid shell, the white, the yolk and, most importantly, the genetic blueprint for a chicken. It is extremely unlikely that an egg would randomly assemble itself from the primordial soup. And yet, in a way, it did. It took about fourteen billion years, starting from the Big Bang, but it finally arrived at a supermarket near you.

Since entropy has always been copiously produced in the Universe, we are forced to deduce that the initial entropy of the Universe was much lower than it is today. The Universe has been running up the entropy bill at a tremendous pace ever since the Big Bang.

With our simplistic understanding of entropy as the opposite of order, it might be difficult to imagine what it meant for the primordial Universe to be low entropy. Were elementary particles nicely stacked according to their quantum Dewey decimal codes on separate shelves like books in a library? It turns out that, in the presence of gravity, the lowest entropy state is when matter is uniformly distributed throughout the Universe. This might be a little counter-intuitive, considering how blending an egg led to the increase of entropy. But uniform distribution of gravitational mass is a very precarious state. It’s like a needle balanced on its point. At the slightest disturbance, the parts of the volume with infinitesimally higher density will start collapsing on themselves due to gravity. The collapse will be slow in the beginning, but as it keeps increasing local density, it will attract more and more matter resulting in a positive feedback loop.

This is exactly what happened after the Big Bang (as far as we know). Low-entropy uniform soup started slowly curdling to form galaxies and stars. The more non-uniform the distribution of gravitating matter, the higher the entropy.

The ultimate fate of collapsing matter is a gravitational black hole, with all matter concentrated in a singular point. Black holes have extremely high entropy, so much so that it is believed that the current entropy of the Universe is dominated by gigantic black holes in the centers of galaxies.

So why hasn’t the whole Universe collapsed into one gigantic black hole? It’s because the breakneck race toward higher entropy has run against several obstacles. One of them works like a governor in a steam engine. Tiny fluctuations in mass density during the Big Bang were accompanied by tiny fluctuations in velocities of particles. These fluctuations resulted in random distribution of angular momentum. As a result, each collapsing region of the Universe ends up with some randomly assigned net angular momentum. In other words, it spins. And when matter is sucked up towards the center, it starts spinning faster and faster. That’s why every galaxy is spinning. The resulting centrifugal force keeps matter from falling all the way to the center and disappearing into a black hole.

The other obstacle towards reaching maximum entropy is the fact that clumps of matter of certain size turn into stars. When lots of atoms of hydrogen are squished together, they can reach a higher entropy state by fusing into helium. But this process produces excess photons, which keep pushing matter away, thus preventing total collapse. Eventually, the hydrogen burns out, the star undergoes a series of transitions and, depending on its mass, ends up as a supernova, or turns into a brown or white dwarf. What’s left after a supernova explosion can be a neutron star or a black hole.

In a neutron star, further collapse is stalled by another property of matter: Fermi statistics. Neutrons are fermions, and no two fermions may occupy the same quantum state. In particular, you can’t squeeze them all into a very small volume — they repel each other.

Are neutron stars and black holes the end products of the evolution of the Universe? Probably not. There is a strong suspicion that neutrons will eventually decay into leptons — mostly neutrinos, electrons, and positrons. Black holes will evaporate through Hawking radiation. The Universe will eventually reach its thermal death: an ever expanding volume filled with photons and leptons.

What’s Life Got to Do with It?

So far we’ve seen that matter has properties that tend to slow down the ratchet of entropy. Our Sun, for instance, could increase its entropy tremendously by turning all its hydrogen into helium in one fell swoop while collapsing to form a black hole. It can’t do that because of the heat and radiation pressure generated in the process. And even if all the heat were siphoned out, the leftover neutrons would congeal into a solid neutron star, preventing further collapse.

So the Sun is doing its best, under the circumstances, trying to dissipate the excess of energy. It does it mostly by radiating high energy photons. These are the photons of visible and ultraviolet light that warm up the Earth. The Earth, in turn, re-radiates this heat in the form of low energy infrared photons.

It turns out that turning high energy photons to low energy photons increases overall entropy. So, in its small way, the Earth speeds up the rise of entropy. In fact, it does it better than, for instance, Mercury; because the Earth has the atmosphere and the oceans, which are good heat sinks, and because it spins on its axis, transporting the accumulated heat from the sunny side to the shaded side, where it’s radiated into space in the form of infrared photons.

But Earth has another secret weapon that speeds up the advent of the heat death of the Universe: life. To begin with, living organisms consume energy during the day. They also need energy to survive at night, so they came up with clever ways to store energy in chemical compounds. They can then cash their savings at night, all the while radiating heat. At higher latitudes, they also store energy during summer and expend it during winter.

A steppe is better at entropy production than barren land; a forest or a jungle is still better. But human civilization is the best. Our cars, factories, cities, air conditioners, all produce entropy at a much faster pace than bare nature. We’re good at it!

The Self-Organizing Principle

The advent of life on Earth is often attributed to something called the self-organizing principle. It’s just a name for what happens in systems that are away from thermodynamic equilibrium. In those systems it is often possible to speed up the increase of entropy by organizing things a little better.

The simplest example of this is when you heat a layer of liquid in a pan. The liquid can transport energy by thermal conduction, which leads to an overall rise in entropy. But there is a faster way: the heated liquid at the bottom of the pan is lighter than the cooler liquid at the top, so it can float to the top. The heavier liquid at the top can then sink to the bottom. This is called convection, and it’s faster than conduction. The only problem is that the two streams of liquid have to negotiate the flow, because they can’t both pass through the same point simultaneously. In fact, in the ideal case, they would be deadlocked. What happens in reality is an amazing feat of self-organization: regularly spaced hexagonal convection cells called Bénard cells emerge in the heated liquid.

[Image: Bénard convection cells.]

A honeycomb pattern of Bénard cells suggests that order may be spontaneously generated in situations where it can speed up the production of entropy. If you have a rich enough environment and wait long enough, more and more complex patterns that ease the production of entropy may emerge — such as life itself.

But life doesn’t emerge everywhere. As far as we know there’s no life on the Moon and no (visible) life on Mars. What’s different about Earth is that it is, and has always been, very turbulent. For starters, we have water that is constantly changing state. It’s boiling in hydrothermal vents, liquid in the oceans, solid in the ice caps; it’s precipitating from the atmosphere and evaporating from pools. It dissolves lots of chemical compounds and makes colloids with others. Continental plates keep shifting resulting in constant volcanic activity. New minerals are brought up from the depths and exposed to erosion. We also have a large Moon that’s causing regular tides, and the Earth’s axis of rotation is tilted resulting in changing seasons. On top of that, we have occasional comets causing impact winters. We can’t complain about lack of entertainment on Earth.

Here’s what I think: Life can only emerge and thrive on the edge. Our planet has been on the edge for a few billion years. Conditions on Earth have always been barely short of wiping the life out and, paradoxically, this is exactly what makes life possible. The Earth is a living proof that what doesn’t kill you, makes you stronger. There have been uncountable attempts on the life on Earth and they all resulted in accelerating the evolution towards higher life forms. I know that it might be controversial to call one form of life higher than another, but there is an objective measure that we can use, and that’s the efficiency of turning energy into entropy. In this respect, humans are indeed the highest form of life. We were able to tap into sources of energy that have been forgotten by nature for hundreds of millions of years in the form of coal, oil, and gas. We use all this to speed up the increase of entropy.

Why Are We Alone?

You might be familiar with the Fermi paradox. In essence, the question is: if life is inevitable, why haven’t we seen it all over the Universe? And judging by how quickly life emerged on Earth (essentially as soon as the water condensed into oceans), life seems to be inevitable, at least on Earth-like planets, which are very common in the Universe. And life — civilized life in particular — being so good at producing vast amounts of entropy, should eventually make itself conspicuous on the cosmic scale.

On the other hand, we don’t know how many planets are “on the edge,” and how narrow the edge is. It’s possible that entering the life-creating period is a relatively common occurrence for an Earth-like planet — possibly right after the water gathers into oceans. Finding remnants of life on Mars would give support to this idea. But the Earth has been walking this narrow path between stagnation and destruction for more than four billion years. There have been long periods of stagnation: there was the snowball Earth when the oceans froze over, and the “boring billion,” when the air was filled with the smell of rotten eggs. There have been major extinction events, like the asteroid impact that wiped out the dinosaurs.

Being on the edge means that you are likely to fall off. You either die of boredom (that’s what might have happened on Mars), or you get wiped out by a cataclysm (if the Chicxulub asteroid were a tad larger, the Earth could have been sterilized). It might be extremely unlikely to stay for a few billion years on the narrow path that leads from Bénard cells to a space-faring civilization. We might actually be the first to reach this level in our cosmic neighborhood. Life on Earth could be more like a professional Russian-roulette player than a nine-to-five worker.

There is also something we don’t quite get about cosmic timescales. For the last few hundred years the powers of humanity have been growing exponentially. From the cosmic perspective, humanity looks like a sudden bloom that took over a stagnant pool on the outskirts of the Galaxy. We foolishly imagine that we can sustain this level of progress and in a short time colonize the Solar system and reach for the stars. But one thing we know for sure about exponential growth is that it’s not sustainable in the long run. We are not only going to bump our heads against unbreakable laws of physics, but we’ll also have to deal with the limitations of the human mind. And all other civilizations that might be out there will have to deal with the same problems. This might explain why we are not seeing them.

In fact, we could reverse this reasoning and argue that the fact that we don’t detect any signs of alien civilizations suggests that the obstacles that we see in front of us are not easily overcome. In particular:

  • The speed of light limits our ability to travel and exchange information at large distances. This is one of the hardest limits, because special relativity is the foundation of all physics.
  • The coupling of gravity to other forces is extremely weak, so the prospects of controlling gravity and counter-balancing acceleration are virtually non-existent. This means that there is no easy way to shrink the enormous distances between stars — no warp drive.
  • The size of the atom and the speed of light limit our ability to store and process information. This prevents us from extending the capabilities of our brains to discover and explore the laws of the Universe.

These three limits can also be related to three fundamental theories: special relativity, general relativity, and quantum mechanics, respectively.

So what does the future have in store for humanity? It looks like we are reaching the end of exponential expansion. There hasn’t been any major breakthrough in fundamental physics for almost half a century, we are seeing the tail end of Moore’s law, and the population of Earth is finally stabilizing. If we don’t wipe ourselves out from the face of the Earth, we might be facing a boring millennium, if not a boring million years. And it’s entirely possible that we are surrounded by other civilizations that have already entered their boring periods. If they eventually graduate to the next stage, they will be ready to help the Universe increase its entropy on a vastly larger scale. Hopefully humanity will still be around to see the Galaxy blooming with sentient activity.


Abstract: I derive free monoidal profunctors as fixed points of a higher order functor acting on profunctors. Monoidal profunctors play an important role in defining traversals.

The beauty of category theory is that it lets us reuse concepts at all levels. In my previous post I derived a free monoidal functor that goes from a monoidal category C to Set. The current post may then be shortened to: Since profunctors are just functors from C^{op} \times C to Set, with the obvious monoidal structure induced by the tensor product in C, we automatically get free monoidal profunctors.

Let me fill in the details.

Profunctors in Haskell

Here’s the definition of a profunctor from Data.Profunctor:

class Profunctor p where
  dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

The idea is that, just like a functor acts on objects, a profunctor p acts on pairs of objects \langle a, b \rangle. In other words, it’s a type constructor that takes two types as arguments. And just like a functor acts on morphisms, a profunctor acts on pairs of morphisms. The only tricky part is that the first morphism of the pair is reversed: instead of going from a to s, as one would expect, it goes from s to a. This is why we say that the first argument comes from the opposite category C^{op}, where all morphisms are reversed with respect to C. Thus a morphism from \langle a, b \rangle to \langle s, t \rangle in C^{op} \times C is a pair of morphisms \langle s \to a, b \to t \rangle.

Just like functors form a category, profunctors form a category too. In this category profunctors are objects, and natural transformations are morphisms. A natural transformation between two profunctors p and q is a family of functions which, in Haskell, can be approximated by a polymorphic function:

type p ::~> q = forall a b. p a b -> q a b

If the category C is monoidal (has a tensor product \otimes and a unit object 1), then the category C^{op} \times C has a trivially induced tensor product:

\langle a, b \rangle \otimes \langle c, d \rangle = \langle a \otimes c, b \otimes d \rangle

and the unit \langle 1, 1 \rangle.

In Haskell, we’ll use cartesian product (pair type) as the underlying tensor product, and () type as the unit.

Notice that the induced product does not have the usual exponential as the right adjoint. Indeed, the hom-set:

(C^{op} \times C) \, ( \langle a, b  \rangle \otimes  \langle c, d  \rangle,  \langle s, t  \rangle )

is a set of pairs of morphisms:

\langle s \to a \otimes c, b \otimes d \to t  \rangle

If the right adjoint existed, it would be a pair of objects \langle X, Y  \rangle, such that the following hom-set would be isomorphic to the previous one:

\langle X \to a, b \to Y  \rangle

While Y could be the internal hom, there is no candidate for X that would produce the isomorphism:

s \to a \otimes c \cong X \to a

(Consider, for instance, unit () for a.) This lack of the right adjoint is the reason why we can’t define an analog of Applicative for profunctors. We can, however, define a monoidal profunctor:

class Monoidal p where
  punit :: p () ()
  (>**<) :: p a b -> p c d -> p (a, c) (b, d)

This profunctor is a map between two monoidal structures. For instance, punit can be seen as mapping the unit in Set to the unit in C^{op} \times C:

punit :: () -> p <1, 1>

Operator >**< maps the product in Set to the induced product in C^{op} \times C:

(>**<) :: (p <a, b>, p <c, d>) -> p (<a, b> × <c, d>)
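
As a simple example (a sketch of mine, not required by anything that follows), the function arrow is a monoidal profunctor:

instance Monoidal (->) where
  punit = id                          -- p () ()
  f >**< g = \(a, c) -> (f a, g c)    -- pairs of functions act componentwise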

Day convolution, which works with monoidal structures, generalizes naturally to the profunctor category:

data PDay p q s t = forall a b c d. 
     PDay (p a b) (q c d) ((b, d) -> t) (s -> (a, c))

Higher Order Functors

Since profunctors form a category, we can define endofunctors in that category. This is a no-brainer in category theory, but it requires some new definitions in Haskell. Here’s a higher-order functor that maps a profunctor to another profunctor:

class HPFunctor pp where
  hpmap :: (p ::~> q) -> (pp p ::~> pp q)
  ddimap :: (s -> a) -> (b -> t) -> pp p a b -> pp p s t

The function hpmap lifts a natural transformation, and ddimap shows that the result of the mapping is also a profunctor.

An endofunctor in the profunctor category may have a fixed point:

newtype FixH pp a b = InH { outH :: pp (FixH pp) a b }

which is also a profunctor:

instance HPFunctor pp => Profunctor (FixH pp) where
    dimap f g (InH pp) = InH (ddimap f g pp)

Finally, our Day convolution is a higher-order endofunctor in the category of profunctors:

instance HPFunctor (PDay p) where
  hpmap nat (PDay p q from to) = PDay p (nat q) from to
  ddimap f g (PDay p q from to) = PDay p q (g . from) (to . f)

We’ll use this fact to construct a free monoidal profunctor next.

Free Monoidal Profunctor

In the previous post, I defined the free monoidal functor as a fixed point of the following endofunctor:

data FreeF f g t =
      DoneF t
    | MoreF (Day f g t)

Replacing the functors f and g with profunctors is straightforward:

data FreeP p q s t = 
      DoneP (s -> ()) (() -> t) 
    | MoreP (PDay p q s t)

The only tricky part is realizing that the first term in the sum comes from the unit of Day convolution, which is the type () -> t, and it generalizes to an appropriate pair of functions (we’ll simplify this definition later).

FreeP is a higher order endofunctor acting on profunctors:

instance HPFunctor (FreeP p) where
    hpmap _ (DoneP su ut) = DoneP su ut
    hpmap nat (MoreP day) = MoreP (hpmap nat day)
    ddimap f g (DoneP au ub) = DoneP (au . f) (g . ub)
    ddimap f g (MoreP day) = MoreP (ddimap f g day)

We can, therefore, define its fixed point:

type FreeMon p = FixH (FreeP p)

and show that it is indeed a monoidal profunctor. As before, the trick is to first show the following property of Day convolution:

cons :: Monoidal q => PDay p q a b -> q c d -> PDay p q (a, c) (b, d)
cons (PDay pxy quv yva bxu) qcd = 
      PDay pxy (quv >**< qcd) (bimap yva id . reassoc) 
                              (assoc . bimap bxu id)

where

-- bimap (used above) comes from Data.Bifunctor
assoc :: ((a, b), c) -> (a, (b, c))
assoc ((a,b),c) = (a,(b,c))

reassoc :: (a, (b, c)) -> ((a, b), c)
reassoc (a, (b, c)) = ((a, b), c)

Using this function, we can show that FreeMon p is monoidal for any p:

instance Profunctor p => Monoidal (FreeMon p) where
  punit = InH (DoneP id id)
  (InH (DoneP au ub)) >**< frcd = dimap snd (\d -> (ub (), d)) frcd
  (InH (MoreP dayab)) >**< frcd = InH (MoreP (cons dayab frcd))

FreeMon can also be rewritten as a recursive data type:

data FreeMon p s t where
     DoneFM :: t -> FreeMon p s t
     MoreFM :: p a b -> FreeMon p c d -> 
                        (b -> d -> t) -> 
                        (s -> (a, c)) -> FreeMon p s t

Categorical Picture

As I mentioned before, from the categorical point of view there isn’t much to talk about. We define a functor in the category of profunctors:

A_p q = (C^{op} \times C) (\langle 1, 1 \rangle, -) + \int^{ a b c d } p a b \times q c d \times (C^{op} \times C) (\langle a, b \rangle \otimes \langle c, d \rangle, -)

As previously shown in the general case, its initial algebra defines a free monoidal profunctor.

Acknowledgments

I’m grateful to Eugenia Cheng not only for talking to me about monoidal profunctors, but also for getting me interested in category theory in the first place through her Catsters video series. Thanks also go to Edward Kmett for numerous discussions on this topic.


Abstract: I derive a free monoidal (applicative) functor as an initial algebra of a higher-order functor using Day convolution.

I thought I was done with monoids for a while, after writing my Monoids on Steroids post, but I keep bumping into them. This time I read a paper by Capriotti and Kaposi about Free Applicative Functors and it got me thinking about the relationship between applicative and monoidal functors. In a monoidal closed category, the two are equivalent, but monoidal structure seems to be more fundamental. It’s possible to have a monoidal category, which is not closed. Not to mention that monoidal structures look cleaner and more symmetrical than closed structures.

One particular statement in the paper caught my attention: the authors said that the free applicative functors are initial algebras for a very simple higher-order functor:

A G = Id + F \star G

which, in their own words, “makes precise the intuition that free applicative functors are in some sense lists (i.e. free monoids).” In this post I will decode this statement and then expand it by showing how to implement a free monoidal functor as a higher order fixed point using some properties of Day convolution.

Let me start with some refresher on lists. To define a list all we need is a monoidal category. If we pretend that Hask is a category, then our product is a pair type (a, b), which is associative up to isomorphism; with () as its unit, also up to isomorphism. We can then define a functor that generates lists:

A_a b = () + (a, b)

Notice the similarity with the Capriotti-Kaposi formula. You might recognize this functor in its Haskell form:

data ListF a b = Nil | Cons a b

The fixed point of this functor is the familiar list, a.k.a., the free monoid. So that’s the general idea.

Day Convolution

There are lots of interesting monoidal structures. Famously (because of the monad quip) the category of endofunctors is monoidal, with functor composition as product and the identity functor as unit. What is less known is that functors from a monoidal category \mathscr{C} to \mathscr{S}et also form a monoidal category. A product of two such functors is defined through Day convolution. I have talked about Day convolution in the context of applicative functors, but here I’d like to give you some more intuition.

What did it for me was Alexander Campbell’s comment on Stack Exchange. You know how, in a vector space, you can have a set of basis vectors, say \vec{e}_i, and represent any vector as a linear combination:

\vec{v} = \sum \limits_{i = 1}^n v_i \vec{e}_i

It turns out that, under some conditions, there is a basis in the category of functors from \mathscr{C} to \mathscr{S}et. The basis is formed by representable functors of the form \mathscr{C}(x, -), and the decomposition of a functor F is given by the co-Yoneda lemma:

F a = \int^x F x \times \mathscr{C}(x, a)

The coend in this formula roughly corresponds to (possibly infinite) categorical sum (coproduct). The product under the integral sign is the cartesian product in \mathscr{S}et.

In pseudo-Haskell, we would write this formula as:

f a ~ exists x . (f x, x -> a)

because a coend corresponds to an existential type. The intuition is that, because the type x is hidden, the only thing the user can do with this data type is to fmap the function over the value f x, and that is equivalent to the value f a.

The actual Haskell encoding is given by the following GADT:

data Coyoneda f a where
  Coyoneda :: f x -> (x -> a) -> Coyoneda f a
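
The isomorphism itself is witnessed by a pair of functions, sketched here against the GADT above (the kan-extensions package provides analogous helpers):

-- Coyoneda f a ≅ f a, for a Functor f
lowerCoyoneda :: Functor f => Coyoneda f a -> f a
lowerCoyoneda (Coyoneda fx g) = fmap g fx

liftCoyoneda :: f a -> Coyoneda f a
liftCoyoneda fa = Coyoneda fa id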

Now suppose that we want to define “multiplication” of two functors that are represented using the coend formula.

(F \star G) a \cong \int^x F x \times \mathscr{C}(x, a) \star \int^y G y \times \mathscr{C}(y, a)

Let’s assume that our multiplication interacts nicely with coends. We get:

\int^{x y} F x \times G y \times (\mathscr{C}(x, a) \star \mathscr{C}(y, a))

All that remains is to define the multiplication of our “basis vectors,” the hom-functors. Since we have a tensor product \otimes in \mathscr{C}, the obvious choice is:

\mathscr{C}(x, -) \star \mathscr{C}(y, -) \cong \mathscr{C}(x \otimes y, -)

This gives us the formula for Day convolution:

(F \star G) a = \int^{x y} F x \times G y \times \mathscr{C}(x \otimes y, a)

We can translate it directly to Haskell:

data Day f g a = forall x y. Day (f x) (g y) ((x, y) -> a)

The actual Haskell library implementation uses GADTs, and also curries the product, but here I’m opting for encoding an existential type using forall in front of the constructor.

For those who like the analogy between functors and containers, Day convolution may be understood as containing a box of xs and a bag of ys, plus a binary combinator for turning every possible pair (x, y) into an a.

Day convolution lifts the monoidal structure of \mathscr{C} (the tensor product \otimes) to the functor category [\mathscr{C}, \mathscr{S}et]. In particular, it lifts the unit object 1 to the hom-functor \mathscr{C}(1, -). We have, for instance:

(F \star \mathscr{C}(1, -)) a = \int^{x y} F x \times \mathscr{C}(1, y) \times \mathscr{C}(x \otimes y, a)

which, by co-Yoneda, is isomorphic to:

\int^{x} F x \times \mathscr{C}(x \otimes 1, a)

Considering that x \otimes 1 is isomorphic to x, we can apply co-Yoneda again, to indeed get F a.

In Haskell, the unit object with respect to product is the unit type (), and the hom-functor \mathscr{C}(1, -) is isomorphic to the identity functor Id (a set of functions from unit to x is isomorphic to x).

We now have all the tools to understand the formula:

A G = Id + F \star G

or, more generally:

A_F G = \mathscr{C}(1, -) + F \star G

It’s a sum (coproduct) of the unit under Day convolution and the Day convolution of two functors F and G.

Lax Monoidal Functors

Whenever we have a functor between two monoidal categories \mathscr{C} and \mathscr{D}, it makes sense to ask how this functor interacts with the monoidal structure. For instance, does it map the unit object in one category to the unit object in another? Does it map the result of a tensor product to a tensor product of mapped arguments?

A lax monoidal functor doesn’t do that, but it does the next best thing. It doesn’t map unit to unit, but it provides a morphism that connects the unit in the target category to the image of the unit of the source category.

\epsilon : 1_\mathscr{D} \to F 1_\mathscr{C}

It also provides a family of morphisms from the product of images to the image of a product:

\mu_{a b} : F a \otimes_\mathscr{D} F b \to F (a \otimes_\mathscr{C} b)

which is natural in both arguments. These morphisms must satisfy additional conditions related to associativity and unitality of respective tensor products. If \epsilon and \mu are isomorphisms then we call F a strong monoidal functor.

If you look at the way Day convolution was defined, you can see now that we insisted that the hom-functor be strong monoidal:

\mathscr{C}(x, -) \star \mathscr{C}(y, -) \cong \mathscr{C}(x \otimes y, -)

Since hom-functors define the Yoneda embedding \mathscr{C} \to [\mathscr{C}, \mathscr{S}et], we can say that Day convolution makes Yoneda embedding strong monoidal.

The translation of the definition of a lax monoidal functor to Haskell is straightforward:

class Monoidal f where
  unit  :: f ()
  (>*<) :: f x -> f y -> f (x, y)

Because Hask is a closed monoidal category, Monoidal is equivalent to Applicative (see my post on Applicative Functors).

Fixed Points

Recursive data structures can be formally defined as fixed points of functors (carriers of initial F-algebras). Lists, in particular, are fixed points of functors of the form:

F_a b = 1 + a \otimes b

defined in a monoidal category (\otimes and 1) with coproducts (the plus sign).

In general, a fixed point is defined as a point that is fixed under some particular mapping. For instance, a fixed point of a function f(x) is some value x_0 such that:

f(x_0) = x_0

Obviously, the function’s codomain has to be the same as its domain, for this equation to make sense.

By analogy, we can define a fixed point of an endofunctor F as an object that is isomorphic to its image under F:

F x \cong x

The isomorphism here is just a pair of morphisms, one the inverse of the other. One of these morphisms can be seen as part of the F-algebra (x, f) whose carrier is x and whose action is:

f : F x \to x

Lambek’s lemma states that the action of the initial (or terminal) F-algebra is an isomorphism. This explains why a fixed point of a functor is often referred to as an initial (terminal) algebra.

In Haskell, a fixed point of a functor f is called Fix f. It is defined by the property that f acting on Fix f must be isomorphic to Fix f:

Fix f \cong f (Fix f)

which can be expressed as:

newtype Fix f = In { out :: f (Fix f) }

Notice that the pair (Fix f, In) is the (initial) algebra for the functor f, with the carrier Fix f and the action In; and that out is the inverse of In, as prescribed by Lambek's lemma.

Here’s another useful intuition about fixed points: they can often be calculated iteratively as a limit of a sequence. For functions, if the following sequence converges to x:

x_{n+1} = f (x_n)

then f(x) = x (at least for continuous functions).

We can apply the same idea to our list functor, iteratively replacing b with the definition of F_a b = 1 + a \otimes b:

1 + a \otimes b

1 + a \otimes (1 + a \otimes b)

1 + a + a \otimes a \otimes (1 + a \otimes b)

1 + a + a \otimes a + a \otimes a \otimes a + ...

where we assumed associativity and unit laws (up to isomorphism). This formal expansion is in agreement with our intuition that a list is either empty, contains one element, a product of two elements, a product of three elements, and so on…
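For concreteness, here's the list functor and its fixed point spelled out in Haskell, using the Fix defined above (ListF and the smart constructors are my own names; we won't need them later):

data ListF a b = NilF | ConsF a b

instance Functor (ListF a) where
  fmap _ NilF        = NilF
  fmap f (ConsF a b) = ConsF a (f b)

type List a = Fix (ListF a)

nil :: List a
nil = In NilF

push :: a -> List a -> List a
push a as = In (ConsF a as)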

Higher Order Functors

Category theory has achieved something we can only dream of in programming languages: reusability of concepts. For instance, functors between two categories C and D form a category, with natural transformations as morphisms. Therefore everything we said about fixed points and algebras can be immediately applied to the functor category. In Haskell, however, we have to start almost from scratch. For instance, a higher order functor, which takes a functor as argument and returns another functor has to be defined as:

class HFunctor ff where
  ffmap :: Functor g => (a -> b) -> ff g a -> ff g b
  hfmap :: (g :~> h) -> (ff g :~> ff h)

The squiggly arrows are natural transformations:

infixr 0 :~>
type f :~> g = forall a. f a -> g a

Notice that the definition of HFunctor not only requires a higher-order version of fmap called hfmap, which lifts natural transformations, but also the lower-order ffmap that attests to the fact that the result of HFunctor is again a functor. (Quantified class constraints will soon make this redundant.)
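For the curious, here's roughly what that would look like with the QuantifiedConstraints extension (a sketch; the rest of this post sticks with the two-method class above):

{-# LANGUAGE QuantifiedConstraints, RankNTypes #-}

-- The superclass constraint says: whenever g is a Functor, so is ff g,
-- which makes a separate ffmap method unnecessary.
class (forall g. Functor g => Functor (ff g)) => HFunctor' ff where
  hfmap' :: (g :~> h) -> (ff g :~> ff h)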

The definition of a fixed point also has to be painstakingly rewritten:

newtype FixH ff a = InH { outH :: ff (FixH ff) a }

Functoriality of a higher order fixed point is easily established:

instance HFunctor f => Functor (FixH f) where
    fmap h (InH x) = InH (ffmap h x)
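As an aside (we won't need it in what follows), the higher-order catamorphism is the same one-liner as its first-order cousin, with an algebra whose carrier is a functor g:

cataH :: HFunctor ff => (ff g :~> g) -> FixH ff :~> g
cataH alg = alg . hfmap (cataH alg) . outH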

Finally, Day convolution is a higher order functor:

instance HFunctor (Day f) where
  hfmap nat (Day fx gy xyt) = Day fx (nat gy) xyt
  ffmap h   (Day fx gy xyt) = Day fx gy (h . xyt)

Free Monoidal Functor

With all the preliminaries out of the way, we are now ready to derive the main result.

We start with the higher-order functor whose initial algebra defines the free monoidal functor:

A_F G = Id + F \star G

We can translate it to Haskell as:

data FreeF f g t =
      DoneF t
    | MoreF (Day f g t)

It is a higher order functor, in that it takes a functor g and produces a new functor FreeF f g:

instance HFunctor (FreeF f) where
    hfmap _ (DoneF x) = DoneF x
    hfmap nat (MoreF day) = MoreF (hfmap nat day)
    ffmap f (DoneF x) = DoneF (f x)
    ffmap f (MoreF day) = MoreF (ffmap f day)

The claim is that, for any functor f, the (higher order) fixed point of FreeF f:

type FreeMon f = FixH (FreeF f)

is monoidal.

The usual approach to solving such a problem is to express FreeMon as a recursive data structure:

data FreeMonR f t =
      Done t
    | More (Day f (FreeMonR f) t)

and proceed from there. This is fine, but it doesn’t give us any insight about what property of the original higher-order functor makes its fixed point monoidal. So instead, I will concentrate on properties of Day convolution.

To begin with, let’s establish the functoriality of FreeMon using the fact that Day convolution is a functor:

instance Functor f => Functor (FreeMon f) where
  fmap h (InH (DoneF s)) = InH (DoneF (h s))
  fmap h (InH (MoreF day)) = InH (MoreF (ffmap h day))

The next step is based on the list analogy. The free monoidal functor is analogous to a list in which the product is replaced by Day convolution. The proof that it’s monoidal amounts to showing that one can “concatenate” two such lists. Concatenation is a recursive process in which we detach an element from one list and attach it to the other.

When building recursion, the functor g in Day convolution will play the role of the tail of the list. We’ll prove monoidality of the fixed point inductively by assuming that the tail is already monoidal. Here’s the crucial step expressed in terms of Day convolution:

cons :: Monoidal g => Day f g s -> g t -> Day f g (s, t)
cons (Day fx gy xys) gt = Day fx (gy >*< gt) (bimap xys id . reassoc)

We took advantage of associativity:

reassoc :: (a, (b, c)) -> ((a, b), c)
reassoc (a, (b, c)) = ((a, b), c)

and functoriality (bimap) of the underlying product.

The intuition here is that we have a Day product of the head of the list, which is a box of xs; and the tail, which is a container of ys. We are appending to it another container of ts. We do it by concatenating the two containers (gy >*< gt) into one container of pairs (y \otimes t). The new combinator reassociates the nested pairs (x \otimes (y \otimes t)) and applies the old combinator to (x \otimes y).

The final step is to show that FreeMon defined through Day convolution is indeed monoidal. Here’s the proof:

instance Functor f => Monoidal (FreeMon f) where
  unit = InH (DoneF ())
  (InH (DoneF s)) >*< frt = fmap ((,) s) frt
  (InH (MoreF day)) >*< frt = InH (MoreF (day `cons` frt))
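To get a feel for this structure, here's a hedged sketch (the names one and interp are my own) of how a single f-action embeds into FreeMon f, and how FreeMon f interprets into any other lax monoidal functor, given a natural transformation f :~> g:

one :: f a -> FreeMon f a
one fa = InH (MoreF (Day fa (InH (DoneF ())) fst))

interp :: (Functor g, Monoidal g) => (f :~> g) -> FreeMon f :~> g
interp _   (InH (DoneF a)) = fmap (const a) unit
interp nat (InH (MoreF (Day fx frest combine))) =
  fmap combine (nat fx >*< interp nat frest)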

A lax monoidal functor must also preserve associativity and unit laws. Unlike the corresponding laws for applicative functors, these are pretty straightforward to formulate.

The unit laws are:

fmap lunit (unit () >*< frx) = frx
fmap runit (frx >*< unit ()) = frx

where I used the left and right unitors:

lunit :: ((), a) -> a
lunit ((), a) = a
runit :: (a, ()) -> a
runit (a, ()) = a

The associativity law is:

fmap assoc ((frx >*< fry) >*< frz) = (frx >*< (fry >*< frz))

where I used the associator:

assoc :: ((a,b),c) -> (a,(b,c))
assoc ((a,b),c) = (a,(b,c))

Except for the left unit law, I wasn’t able to find simple derivations of these laws.

Categorical Picture

Translating this construction to category theory, we start with a monoidal category (\mathscr{C}, \otimes, 1, \alpha, \rho, \lambda), where \alpha is the associator, and \rho and \lambda are right and left unitors, respectively. We will be constructing a lax monoidal functor from \mathscr{C} to \mathscr{S}et, the latter equipped with the usual cartesian product and coproduct.

I will sketch some of the constructions without going into too much detail.

The analogue of cons is a family of natural transformations:

\beta_{s t} : \int^{x y} f x \times g y \times \mathscr{C}(x \otimes y, s) \times g t \to \int^{u v} f u \times g v \times \mathscr{C}(u \otimes v, s \otimes t)

We will assume that g is lax monoidal, so the left hand side can be mapped to:

\int^{x y} f x \times g (y \otimes t) \times \mathscr{C}(x \otimes y, s)

The set of natural transformations can be represented as an end:

\int_{s t} \mathscr{S}et(\int^{x y} f x \times g (y \otimes t) \times \mathscr{C}(x \otimes y, s), \int^{u v} f u \times g v \times \mathscr{C}(u \otimes v, s \otimes t))

A hom-set from a coend is isomorphic to an end of a hom-set:

\int_{s t x y} \mathscr{S}et(f x \times g (y \otimes t) \times \mathscr{C}(x \otimes y, s), \int^{u v} f u \times g v \times \mathscr{C}(u \otimes v, s \otimes t))

There is an injection that is a member of this hom-set:

i_{x, y \otimes t} : f x \times g (y \otimes t) \times \mathscr{C}(x \otimes (y \otimes t), -) \to \int^{u v} f u \times g v \times \mathscr{C}(u \otimes v, -)

Given a morphism h that is a member of \mathscr{C}(x \otimes y, s), we can construct the morphism (h \otimes id) \circ \alpha^{-1}, which is a member of \mathscr{C}(x \otimes (y \otimes t), s \otimes t).

The free monoidal functor Free_f is given as the initial algebra of the (higher-order) endofunctor acting on a functor g from [\mathscr{C}, \mathscr{S}et]:

A_f g = \mathscr{C}(1, -) + f \star g

By Lambek’s lemma, the action of this functor on the fixed point is naturally isomorphic to the fixed point itself:

\mathscr{C}(1, -) + (f \star Free_f) \cong Free_f

We want to show that Free_f is lax monoidal, that is that there’s a mapping:

\epsilon : 1 \to Free_f \, 1

and a family of natural transformations:

\mu_{s t} : Free_f\, s \times Free_f\, t \to Free_f\, (s \otimes t)

The first one can be defined by picking the element of Free_f\, 1 that corresponds to the identity id_1 in \mathscr{C}(1, 1) (using the left injection followed by the isomorphism from Lambek's lemma).

Let’s rewrite the type of natural transformations in the second one as an end:

\int_{s t} \mathscr{S}et(Free_f\, s \times Free_f\, t, Free_f\, (s \otimes t))

We can expand the first factor using Lambek's lemma:

\int_{s t} \mathscr{S}et((\mathscr{C}(1,s) + (f \star Free_f) s) \times Free_f\, t, Free_f\, (s \otimes t))

distribute the product over the sum:

\int_{s t} \mathscr{S}et(\mathscr{C}(1,s)\times Free_f\, t + (f \star Free_f) s \times Free_f\, t, Free_f\, (s \otimes t))

and replace functions from coproducts with products of functions:

\int_{s t} \mathscr{S}et(\mathscr{C}(1,s)\times Free_f\, t, Free_f\, (s \otimes t))  \times

\int_{s t} \mathscr{S}et((f \star Free_f) s \times Free_f\, t, Free_f\, (s \otimes t))

The first hom-set forms the base of the induction and the second is the inductive step. If we call a member of \mathscr{C}(1,s) h, we can implement the first function by lifting, with Free_f, the morphism that uses h to turn t into s \otimes t (spelled out below); and for the second, we can use \beta_{s t}.
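Spelled out, that lifting is the following composite (a sketch in the notation used above, with \lambda the left unitor of \mathscr{C}):

Free_f\, (h \otimes t) \circ Free_f\, \lambda_t^{-1} : Free_f\, t \to Free_f\, (s \otimes t)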

Conclusion

The purpose of this post was to explore the formulation of a free lax monoidal functor without recourse to closed structure. I have to admit to a hidden agenda: The profunctor formulation of traversables involves monoidal profunctors, so that’s what I’m hoping to explore in the next post.

Appendix: Free Monad

While reviewing the draft of this post, Oleg Grenrus suggested that I derive free monad as a fixed point of a higher order functor. The monoidal product in this case is endofunctor composition:

newtype Compose f g a = Compose (f (g a))

The higher-order functor in question can be written as:

A_f g = Id + f \circ g

or, in Haskell:

data FreeMonadF f g a = 
    DoneFM a 
  | MoreFM (Compose f g a)

instance Functor f => HFunctor (FreeMonadF f) where
  hfmap _ (DoneFM a) = DoneFM a
  hfmap nat (MoreFM (Compose fg)) = MoreFM $ Compose $ fmap nat fg
  ffmap h (DoneFM a) = DoneFM (h a)
  ffmap h (MoreFM (Compose fg)) = MoreFM $ Compose $ fmap (fmap h) fg

The free monad is given by the fixed point:

type FreeMonad f = FixH (FreeMonadF f)

as witnessed by the following instance definition:

instance Functor f => Monad (FreeMonad f) where
  return = InH . DoneFM
  (InH (DoneFM a)) >>= k = k a
  fma >>= k = join (fmap k fma)

join :: Functor f => FreeMonad f (FreeMonad f a) -> FreeMonad f a
join (InH (DoneFM x)) = x
join (InH (MoreFM (Compose ffr))) = 
    InH $ MoreFM $ Compose $ fmap join ffr
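One caveat about this code: under current GHC, a Monad instance requires Functor and Applicative superclass instances. Functor comes for free from the generic FixH instance above; a minimal Applicative sketch (my own, defined in terms of the monadic bind) could look like this:

instance Functor f => Applicative (FreeMonad f) where
  pure = InH . DoneFM
  ff <*> fa = ff >>= \f -> fmap f fa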

In my category theory blog posts, I stated many theorems, but I didn't provide many proofs. In most cases, it's enough to know that the proof exists. We trust mathematicians to do their job. Granted, when you're a professional mathematician, you have to study proofs, because one day you'll have to prove something, and it helps to know the tricks that other people used before you. But programmers are engineers, and are therefore used to taking advantage of existing solutions, trusting that somebody else made sure they were correct.

So it would seem that mathematical proofs are irrelevant to programming. That is, until you learn about the Curry-Howard isomorphism (or propositions as types, as it is sometimes called), which says that there is a one-to-one correspondence between logic and programs, and that every function can be seen as a proof of a theorem. And indeed, I found that a lot of proofs in category theory turn out to be recipes for implementing functions. In most cases the problem can be reduced to this: given some morphisms, implement another morphism, usually using simple composition. This is very much like using point-free notation to define functions in Haskell. The other ingredient in categorical proofs is diagram chasing, which is very much like equational reasoning in Haskell. Of course, mathematicians use different notation, and they use lots of different categories, but the principles are the same.

I want to illustrate these points with the example from Emily Riehl’s excellent book Category Theory in Context. It’s a book for mathematicians, so it’s not an easy read. I’m going to concentrate on theorem 6.2.1, which derives a formula for left Kan extensions using colimits. I picked this theorem because it has calculational content: it tells you how to calculate a particular functor.

It’s not a short proof, and I have made it even longer by unpacking every single step. These steps are not too hard, it’s mostly a matter of understanding and using definitions of functoriality, naturality, and universality.

There is a bonus at the end of this post for Haskell programmers.

Kan Extensions

I wrote about Kan extensions before, so here I’ll only recap the definition using the notation from Emily’s book. Here’s the setup: We want to extend a functor F, which goes from category C to E, along another functor K, which goes from C to D. This extension is a new functor from D to E.

To give you some intuition, imagine that the functor F is the Rosetta Stone. It's a functor that maps the Ancient Egyptian text of a royal decree to the same text written in Ancient Greek. The functor K embeds the Rosetta Stone hieroglyphics into the known corpus of Egyptian texts from various papyri and inscriptions on the walls of temples. We want to extend the functor F to the whole corpus. In other words, we want to translate new texts from Egyptian to Greek (or any other language that's isomorphic to it).

In the ideal case, we would just want F to be isomorphic to the composition of the new functor after K. That's usually not possible, so we'll settle for less. A Kan extension is a functor which, when composed with K, produces something that is related to F through a natural transformation. In particular, the left Kan extension, Lan_K F, is equipped with a natural transformation \eta from F to Lan_K F \circ K.


(The right Kan extension has this natural transformation reversed.)

There are usually many such functors, so there is the standard trick of universal construction to pick the best one.

In our analogy, we would ideally like the new functor, when applied to the hieroglyphs from the Rosetta Stone, to exactly reproduce the original translation, but we'll settle for something that has the same meaning. We'll try to translate new hieroglyphs by looking at their relationship with known hieroglyphs. That means we should look closely at morphisms in D.

Comma Category

The key to understanding how Kan extensions work is to realize that, in general, the functor K embeds C in D in a lossy way.


There may be objects (and morphisms) in D that are not in the image of K. We have to somehow define the action of Lan_K F on those objects. What do we know about such objects?

We know from the Yoneda lemma that all the information about an object is encoded in the totality of morphisms incoming to or outgoing from that object. We can summarize this information in the form of a functor, the hom-functor. Or we can define a category based on this information. This category is called the slice category. Its objects are morphisms from the original category. Notice that this is different from Yoneda, where we talked about sets of morphisms — the hom-sets. Here we treat individual morphisms as objects.

This is the definition: Given a category C and a fixed object c in it, the slice category C/c has as objects pairs (x, f), where x is an object of C and f is a morphism from x to c. In other words, all the arrows whose codomain is c become objects in C/c.

A morphism in C/c between two objects, (x, f) and (y, g) is a morphism h : x \to y in C that makes the following triangle commute:


In our case, we are interested in an object d in D, and the slice category D/d describes it in terms of morphisms. Think of this category as a holographic picture of d.

But what we are really interested in, is how to transfer this information about d to E. We do have a functor F, which goes from C to E. We need to somehow back-propagate the information about d to C along K, and then use F to move it to E.

So let’s try again. Instead of using all morphisms impinging on d, let’s only pick the ones that originate in the image of K, because only those can be back-propagated to C.


This gives us limited information about d, but it’ll have to do. We’ll just use a partial hologram of d. Instead of the slice category, we’ll use the comma category K \downarrow d.

Here’s the definition: Given a functor K : C \to D and an object d of D, the comma category K \downarrow d has as objects pairs (c, f), where c is an object of C and f is a morphism from K c to d. So, indeed, this category describes the totality of morphisms coming to d from the image of K.

A morphism in the comma category from (c, f) to (c', g) is a morphism h : c \to c' such that the following triangle commutes:


So how does back-propagation work? There is a projection functor \Pi_d : K \downarrow d \to C that maps an object (c, f) to c (with obvious action on morphisms). This functor loses a lot of information about objects (completely forgets the f part), but it keeps a lot of the information in morphisms: it only picks those morphisms in C that preserve the structure of the comma category. And it lets us "upload" the hologram of d into C.

Next, we transport all this information to E using F. We get a mapping

F \circ \Pi_d : K \downarrow d \to E

Here’s the main observation: We can look at this functor F \circ \Pi_d as a diagram in E (remember, when constructing limits, diagrams are defined through functors). It’s just a bunch of objects and morphisms in E that somehow approximate the image of d. This holographic information was extracted by looking at morphisms impinging on d. In our search for (Lan_K F) d we should therefore look for an object in E together with morphisms impinging on it from the diagram we’ve just constructed. In particular, we could look at cones under that diagram (or co-cones, as they are sometimes called). The best such cone defines a colimit. If that colimit exists, it is indeed the left Kan extension (Lan_K F) d. That’s the theorem we are going to prove.


To prove it, we’ll have to go through several steps:

  1. Show that the mapping we have just defined on objects is indeed a functor, that is, we’ll have to define its action on morphisms.
  2. Construct the transformation \eta from F to Lan_K F \circ K and show the naturality condition.
  3. Prove universality: For any other functor G together with a natural transformation \gamma, show that \gamma uniquely factorizes through \eta.

All of these things can be shown using clever manipulations of cones and the universality of the colimit. Let’s get to work.

Functoriality

We have defined the action of Lan_K F on objects of D. Let's pick a morphism g : d \to d'. Just like d, the object d' defines its own comma category K \downarrow d', its own projection \Pi_{d'}, its own diagram F \circ \Pi_{d'}, and its own colimiting cone. Parts of this new cone, however, can be reinterpreted as a cone for the old object d. That's because, surprisingly, the diagram F \circ \Pi_{d'} contains the diagram F \circ \Pi_{d}.

Take, for instance, an object (c, f) : K \downarrow d, with f: K c \to d. There is a corresponding object (c, g \circ f) in K \downarrow d'. Both get projected down to the same c. That shows that every object in the diagram for the d cone is automatically an object in the diagram for the d' cone.


Now take a morphism h from (c, f) to (c', f') in K \downarrow d. It is also a morphism in K \downarrow d' between (c, g \circ f) and (c', g \circ f'). The commuting triangle condition in K \downarrow d

f = f' \circ K h

ensures the commuting condition in K \downarrow d'

g \circ f = g \circ f' \circ K h


All this shows that the new cone that defines the colimit of F \circ \Pi_{d'} contains a cone under F \circ \Pi_{d}.

But that diagram has its own colimit (Lan_K F) d. Because that colimit is universal, there must be a unique morphism from (Lan_K F) d to (Lan_K F) d', which makes all triangles between the two cones commute. We pick this morphism as the lifting of our g, which ensures the functoriality of Lan_K F.


The commuting condition will come in handy later, so let’s spell it out. For every object (c, f:Kc \to d) we have a leg of the cone, a morphism V_d^{(c, f)} from F c to (Lan_K F) d; and another leg of the cone V_{d'}^{(c, g \circ f)} from F c to (Lan_K F) d'. If we denote the lifting of g as (Lan_K F) g then the commuting triangle is:

(Lan_K F) g \circ V_{d}^{(c, f)} = V_{d'}^{(c, g \circ f)}


In principle, we should also check that this newly defined functor preserves composition and identity, but this pretty much follows automatically whenever lifting is defined using composition of morphisms, which is indeed the case here.

Natural Transformation

We want to show that there is a natural transformation \eta from F to Lan_K F \circ K. As usual, we’ll define the component of this natural transformation at some arbitrary object c. It’s a morphism between F c and (Lan_K F) (K c). We have lots of morphisms at our disposal, with all those cones lying around, so it shouldn’t be a problem.

First, observe that, because of the pre-composition with K, we are only looking at the action of Lan_K F on objects which are inside the image of K.


Pick an object in the image of K, say K c'. The objects of the corresponding comma category K \downarrow K c' have the form (c, f), where f : K c \to K c'. In particular, we can pick c' = c, and f = id_{K c}, which will give us one particular vertex of the diagram F \circ \Pi_{K c}. The object at this vertex is F c, which is exactly what we need as a starting point for our natural transformation. The leg of the colimiting cone we are interested in is:

V_{K c}^{(c, id)} : F c \to (Lan_K F) (K c)


We’ll pick this leg as the component of our natural transformation \eta_c.

What remains is to show the naturality condition. We pick a morphism h : c \to c'. We know how to lift this morphism using F and using Lan_K F \circ K. The other two sides of the naturality square are \eta_c and \eta_{c'}.

The bottom left composition is (Lan_K F) (K h) \circ V_{K c}^{(c, id)}. Let’s take the commuting triangle that we used to show the functoriality of Lan_K F:

(Lan_K F) g \circ V_{d}^{(c, f)} = V_{d'}^{(c, g \circ f)}

and replace f by id_{K c}, g by K h, d by K c, and d' by K c', to get:

(Lan_K F) (K h) \circ V_{K c}^{(c, id)} = V_{K c'}^{(c, K h)}

Let’s draw this as the diagonal in the naturality square, and examine the upper right composition:

V_{K c'}^{(c', id)} \circ F h.

This is also equal to the diagonal V_{K c'}^{(c, K h)}. That’s because these are two legs of the same colimiting cone corresponding to (c', id_{K c'}) and (c, K h). These vertices are connected by h in K \downarrow K c'.


But how do we know that h is a morphism in K \downarrow K c'? Not every morphism in C is a morphism in the comma category. In this case, however, the commuting triangle condition is automatically satisfied, because one of its sides is an identity id_{K c'}.


We have shown that our naturality square is composed of two commuting triangles, with V_{K c'}^{(c, K h)} as its diagonal, and therefore commutes as a whole.

Universality

Kan extension is defined using a universal construction: it’s the best of all functors that extend F along K. It means that any other extension will factor through ours. More precisely, if there is another functor G and another natural transformation:

\gamma : F \to G \circ K


then there is a unique natural transformation \alpha, such that

(\alpha \cdot K) \circ \eta = \gamma


(where we have a horizontal composition of a natural transformation \alpha with the functor K)

As always, we start by clearly stating the goals. The proof proceeds in these steps:

  1. Implement \alpha.
  2. Prove that it’s natural.
  3. Show that it factorizes \gamma through \eta.
  4. Show that it’s unique.

We are defining a natural transformation \alpha between two functors: Lan_K F, which we have defined as a colimit, and G. Both are functors from D to E. Let’s implement the component of \alpha at some d. It must be a morphism:

\alpha_d : (Lan_K F) d \to G d

Notice that d varies over all of D, not just the image of K. We are comparing the two extensions where it really matters.

We have at our disposal the natural transformation:

\gamma : F \to G \circ K

or, in components:

\gamma_c : F c \to G (K c)

More importantly, though, we have the universal property of the colimit. If we can construct a cone with the nadir at G d then we can use its factorizing morphism to define \alpha_d.


This cone has to be built under the same diagram as (Lan_K F) d. So let’s start with some (c, f : K c \to d). We want to construct a leg of our new cone going from F c to G d. We can use \gamma_c to get to G (K c) and then hop to G d using G f. Thus we can define the new cone’s leg as:

W_c = G f \circ \gamma_c


Let’s make sure that this is really a cone, meaning, its sides must be commuting triangles.


Let’s pick a morphism h in K \downarrow d from (c, f) to (c', f'). A morphism in K \downarrow d must satisfy the triangle condition, f = f' \circ K h:


We can lift this triangle using G:

G f = G f' \circ G (K h)

Naturality condition for \gamma tells us that:

\gamma_{c'} \circ F h = G (K h) \circ \gamma_c

By combining the two, we get the pentagon:


whose outline forms a commuting triangle:

G f \circ \gamma_c = G f' \circ \gamma_{c'} \circ F h

Now that we have a cone with the nadir at G d, universality of the colimit tells us that there is a unique morphism from the colimiting cone to it that factorizes all triangles between the two cones. We make this morphism our \alpha_d. The commuting triangles are between the legs of the colimiting cone V_d^{(c, f)} and the legs of our new cone W_c:

\alpha_d \circ V_d^{(c, f)} = G f \circ \gamma_c


Now we have to show that the \alpha we have just defined is a natural transformation. Let's pick a morphism g : d \to d'. We can lift it using Lan_K F or using G in order to construct the naturality square:

G g \circ \alpha_d = \alpha_{d'} \circ (Lan_K F) g


Remember that the lifting of a morphism by Lan_K F satisfies the following commuting condition:

(Lan_K F) g \circ V_{d}^{(c, f)} = V_{d'}^{(c, g \circ f)}

We can combine the two diagrams:


The two arms of the large triangle can be replaced using the diagram that defines \alpha, and we get:

G (g \circ f) \circ \gamma_c = G g \circ G f \circ \gamma_c

which commutes due to functoriality of G.

Now we have to show that \alpha factorizes \gamma through \eta. Both \eta and \gamma are natural transformations between functors going from C to E, whereas \alpha connects functors from D to E. To extend \alpha, we horizontally compose it with the identity natural transformation from K to K. This is called whiskering and is written as \alpha \cdot K. This becomes clear when expressed in components. Let’s pick an object c in C. We want to show that:

\gamma_c = \alpha_{K c} \circ \eta_c


Let’s go back all the way to the definition of \eta:

\eta_c = V_{K c}^{(c, id)}

where id is the identity morphism at K c. Compare this with the definition of \alpha:

\alpha_d \circ V_d^{(c, f)} = G f \circ \gamma_c


If we replace f with id and d with K c, we get:

\alpha_{K c} \circ V_{K c}^{(c, id)} = G id \circ \gamma_c


which, due to the functoriality of G, is exactly what we need:

\alpha_{K c} \circ \eta_c = \gamma_c

Finally, we have to prove the uniqueness of \alpha. Here’s the trick: assume that there is another natural transformation \alpha' that factorizes \gamma. Let’s rewrite the naturality condition for \alpha':

G g \circ \alpha'_d = \alpha'_{d'} \circ (Lan_K F) g


Replacing g : d \to d' with f : K c \to d, we get:

G f \circ \alpha'_{K c} = \alpha'_d \circ (Lan_K F) f

The lifting of f by Lan_K F satisfies the commuting triangle condition:

V_d^{(c, f)} = (Lan_K F) f \circ V_{K c}^{(c, id)}

where we recognize V_{K c}^{(c, id)} as \eta_c.


We said that \alpha' factorizes \gamma through \eta:

\gamma_c = \alpha'_{K c} \circ \eta_c

which lets us straighten the left side of the pentagon.


This shows that \alpha' is another factorization of the cone with the nadir at G d through the colimit cone with the nadir (Lan_K F) d. But the universality of the colimit guarantees that such a factorization is unique, therefore \alpha' must be the same as \alpha.

This completes the proof.

Haskell Notes

This post would not be complete if I didn't mention a Haskell implementation of Kan extensions by Edward Kmett, which you can find in the module Data.Functor.Kan.Lan of the kan-extensions library. At first glance you might not recognize the definition given there:

data Lan g h a where
  Lan :: (g b -> a) -> h b -> Lan g h a

To make it more in line with the previous discussion, let’s rename some variables:

data Lan k f d where
  Lan :: (k c -> d) -> f c -> Lan k f d

This is an example of GADT syntax, which is a Haskell way of implementing existential types. It’s equivalent to the following pseudo-Haskell:

Lan k f d = exists c. (k c -> d, f c)

This is more like it: you may recognize (k c -> d) as an object of the comma category K \downarrow d, and f c as the mapping of c (which is the projection of this object back to C) under the functor F. In fact, the Haskell representation is based on the encoding of the colimit using a coend:

(Lan_K F) d = \int^{c \in C} C(K c, d) \times F c

The Haskell library also contains the proof that Kan extension is a functor:

instance Functor (Lan k f) where
  fmap g (Lan kcd fc) = Lan (g . kcd) fc

The natural transformation \eta that is part of the definition of the Kan extension can be extracted from the Haskell definition:

eta :: f c -> Lan k f (k c)
eta = Lan id

In Haskell, we don’t have to prove naturality, as it is a consequence of parametricity.

The universality of the Kan extension is witnessed by the following function:

toLan :: Functor g => (forall c. f c -> g (k c)) -> Lan k f d -> g d
toLan gamma (Lan kcd fc) = fmap kcd (gamma fc)

It takes a natural transformation \gamma from F to G \circ K, and produces a natural transformation we called \alpha from Lan_K F to G.

This is \gamma expressed as a composition of \alpha and \eta:

fromLan :: (forall d. Lan k f d -> g d) -> f c -> g (k c)
fromLan alpha = alpha . eta

As an example of equational reasoning, let’s prove that \alpha defined by toLan indeed factorizes \gamma. In other words, let’s prove that:

fromLan (toLan gamma) = gamma

Let’s plug the definition of toLan into the left hand side:

fromLan (\(Lan kcd fc) -> fmap kcd (gamma fc))

then use the definition of fromLan:

(\(Lan kcd fc) -> fmap kcd (gamma fc)) . eta

Let’s apply this to some arbitrary function g and expand eta:

(\(Lan kcd fc) -> fmap kcd (gamma fc)) (Lan id g)

Beta-reduction gives us:

fmap id (gamma g)

which is indeed equal to the right hand side acting on g:

gamma g

The proof of toLan . fromLan = id is left as an exercise to the reader (hint: you’ll have to use naturality).

Acknowledgments

I’m grateful to Emily Riehl for reviewing the draft of this post and for writing the book from which I borrowed this proof.


Philosophy is written in this grand book which stands continually open before our eyes (I mean the universe), but it cannot be understood unless one first learns to comprehend the language and recognize the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one wanders in vain through a dark labyrinth.

― Galileo Galilei, Il Saggiatore (The Assayer)

Joan was quizzical; studied pataphysical science in the home. Late nights all alone with a test tube.

— The Beatles, Maxwell’s Silver Hammer

Unless you’re a member of the Flat Earth Society, I bet you’re pretty confident that the Earth is round. In fact, you’re so confident that you don’t even ask yourself the question why you are so confident. After all, there is overwhelming scientific evidence for the round-Earth hypothesis. There is the old “ships disappearing behind the horizon” proof, there are satellites circling the Earth, there are even photos of the Earth seen from the Moon, the list goes on and on. I picked this particular theory because it seems so obviously true. So if I try to convince you that the Earth is flat, I’ll have to dig very deep into the foundation of your belief systems. Here’s what I’ve found: We believe that the Earth is round not because it’s the truth, but because we are lazy and stingy (or, to give it a more positive spin, efficient and parsimonious). Let me explain…

The New Flat Earth Theory

Let’s begin by stressing how useful the flat-Earth model is in everyday life. I use it all the time. When I want to find the nearest ATM or a gas station, I take out my cell phone and look it up on its flat screen. I’m not carrying a special spherical gadget in my pocket. The screen on my phone is not bulging in the slightest when it’s displaying a map of my surroundings. So, at least within the limits of my city, or even the state, flat-Earth theory works just fine, thank you!

I’d like to make parallels with another widely accepted theory, Einstein’s special relativity. We believe that it’s true, but we never use it in everyday life. The vast majority of objects around us move much slower than the speed of light, so traditional Newtonian mechanics works just fine for us. When was the last time you had to reset your watch after driving from one city to another to account for the effects of time dilation?

The point is that every physical theory is only valid within a certain range of parameters. Physicists have always been looking for the Holy Grail of theories — the theory of everything that would be valid for all values of parameters with no exceptions. They haven’t found one yet.

But, obviously, special relativity is better than Newtonian mechanics because it’s more general. You can derive Newtonian mechanics as a low velocity approximation to special relativity. And, sure enough, the flat-Earth theory is an approximation to the round-Earth theory for small distances. Or, equivalently, it’s the limit as the radius of the Earth goes to infinity.

But suppose that we were prohibited (for instance, by a religion or a government) from ever considering the curvature of the Earth. As explorers travel farther and farther, they discover that the "naive" flat-Earth theory gives incorrect answers. Unlike present-day flat-earthers, who are not scientifically sophisticated, they would actually put some effort into refining their calculations to account for the "anomalies." For instance, they could postulate that, as you move away from the North Pole, which is the center of the flat Earth, something funny keeps happening to measuring rods. They get elongated when positioned along the parallels (the circles centered at the North Pole). The further away you get from the North Pole, the more they elongate, until at a certain distance they become infinite. This means that the distances (measured using those measuring rods) along the big circles get smaller and smaller until they shrink to zero.

I know this theory sounds weird at first, but so does special and, even more so, general relativity. In special relativity, weird things happen when your speed is close to the speed of light. Time slows down, distances shrink in the direction of flight (but not perpendicular to it!), and masses increase. In general relativity, similar things happen when you get closer to a black hole’s event horizon. In both theories things diverge as you hit the limit — the speed of light, or the event horizon, respectively.

Back to flat Earth — our explorers conquer space. They have to extend their weird geometry to three dimensions. They find out that horizontally positioned measuring rods shrink as you go higher (they un-shrink when you point them vertically). The intrepid explorers also dig into the ground, and probe the depths with seismographs. They find another singularity at a particular depth, where the horizontal dilation of measuring rods reaches infinity (round-Earthers call this the center of the Earth).

This generalized flat-Earth theory actually works. I know that, because I have just described the spherical coordinate system. We use it when we talk about degrees of longitude and latitude. We just never think of measuring distances using spherical coordinates — it’s too much work, and we are lazy. But it’s possible to express the metric tensor in those coordinates. It’s not constant — it varies with position — and it’s not isotropic — distances vary with direction. In fact, because of that, flat Earthers would be better equipped to understand general relativity than we are.

So is the Earth flat or spherical? Actually it's neither. Both theories are just approximations. In cartesian coordinates, the Earth has the shape of a flattened ellipsoid, but as you increase the resolution, you discover more and more anomalies (we call them mountains, canyons, etc.). In spherical coordinates, the Earth is flat, but again, only approximately. The biggest difference is that the math is harder in spherical coordinates.

Have I confused you enough? On one level, unless you’re an astronaut, your senses tell you that the Earth is flat. On the other level, unless you’re a conspiracy theorist who believes that NASA is involved in a scam of enormous proportions, you believe that the Earth is pretty much spherical. Now I’m telling you that there is a perfectly consistent mathematical model in which the Earth is flat. It’s not a cult, it’s science! So why do you feel that the round Earth theory is closer to the truth?

The Occam’s Razor

The round Earth theory is just simpler. And for some reason we cling to the belief that nature abhors complexity (I know, isn't it crazy?). We even express this belief as a principle called Occam's razor. In a nutshell, it says that:

Among competing hypotheses, the one with the fewest assumptions should be selected.

Notice that this is not a law of nature. It's not even scientific: there is no way to falsify it. You can argue for Occam's razor on the grounds of theology (William of Ockham was a Franciscan friar) or esthetics (we like elegant theories), but ultimately it boils down to pragmatism: A simpler theory is easier to understand and use.

It’s a mistake to think that Occam’s razor tells us anything about the nature of things, whatever that means. It simply describes the limitations of our mind. It’s not nature that abhors complexity — it’s our brains that prefer simplicity.

Unless you believe that physical laws have an independent existence of their own.

The Layered Cake Hypothesis

Scientists since Galileo have a picture of the Universe that consists of three layers. The top layer is nature that we observe and interact with. Below are laws of physics — the mechanisms that drive nature and make it predictable. Still below is mathematics — the language of physics (that’s what Galileo’s quote at the top of this post is about). According to this view, physics and mathematics are the hidden components of the Universe. They are the invisible cogwheels and pulleys whose existence we can only deduce indirectly. According to this view, we discover the laws of physics. We also discover mathematics.

Notice that this is very different from art. We don't say that Beethoven discovered the Fifth Symphony (although Igor Stravinsky called it "inevitable") or that Leonardo da Vinci discovered the Mona Lisa. The difference is that, had Beethoven not composed his symphony, nobody else would have; but if Cardano hadn't discovered complex numbers, somebody else probably would have. In fact there were many cases of the same mathematical idea being discovered independently by more than one person. Does this prove that mathematical ideas exist the same way as, say, the moons of Jupiter?

Physical discoveries have a very different character than mathematical discoveries. Laws of physics are testable against physical reality. We perform experiments in the real world and if the results contradict a theory, we discard the theory. A mathematical theory, on the other hand, can only be tested against itself. We discard a theory when it leads to internal contradictions.

The belief that mathematics is discovered rather than invented has its roots in Platonism. When we say that the Earth is spherical, we are talking about the idea of a sphere. According to Plato, these ideas do exist independently of the observer — in this case, a mathematician who studies them. Most mathematicians are Platonists, whether they admit it or not.

Being able to formulate laws of physics in terms of simple mathematical equations is a thing of beauty and elegance. But you have to realize that history of physics is littered with carcasses of elegant theories. There was a very elegant theory, which postulated that all matter was made of just four elements: fire, air, water, and earth. The firmament was a collection of celestial spheres (spheres are so Platonic). Then the orbits of planets were supposed to be perfect circles — they weren’t. They aren’t even elliptical, if you study them close enough.

Celestial spheres. An elegant theory, slightly complicated by the need to introduce epicycles to describe the movements of planets

The Impasse

But maybe at the level of elementary particles and quantum fields some of this presumed elegance of the Universe shines through? Well, not really. If the Universe obeyed Occam's razor, it would have stopped at two quarks, up and down. Nobody needs the strange and the charmed quarks, not to mention the bottom and the top quarks. The Standard Model of particle physics looks like a kitchen sink filled with dirty dishes. And then there is gravity that resists all attempts at grand unification. Strings were supposed to help but they turned out to be as messy as the rest of it.

Of course the current state of impasse in physics might be temporary. After all we've been making tremendous progress up until about the second half of the twentieth century (the most recent major theoretical breakthroughs were the discovery of the Higgs mechanism in 1964 and the proof of renormalizability of the Standard Model in 1971).

On the other hand, it’s possible that we might be reaching the limits of human capacity to understand the Universe. After all, there is no reason to believe that the structure of the Universe is simple enough for the human brain to analyze. There is no guarantee that it can be translated into the language of physics and mathematics.

Is the Universe Knowable?

In fact, if you think about it, our expectation that the Universe is knowable is quite arbitrary. On the one hand you have the vast complex Universe, on the other hand you have slightly evolved monkey brains that have only recently figured out how to use tools and communicate using speech. The idea that these brains could produce and store a model of the Universe is preposterous. Granted, our monkey brains are a product of evolution, and our survival depends on those brains being able to come up with workable models of our environment. These models, however, do not include the microcosm or the macrocosm — just the narrow band of phenomena in between. Our senses can perceive space and time scales within about 8 orders of magnitude. For comparison, the Universe is about 40 orders of magnitude larger than the size of the atomic nucleus (not to mention another 20 orders of magnitude down to Planck length).

Evolution came up with an ingenious scheme to deal with the complexities of our environment. Since it is impossible to store all the information about the Universe in the very limited amount of memory at our disposal, and it's impossible to run the simulation in real time, we have settled for the next best thing: creating simplified partial models that are composable.

The idea is that, in order to predict the trajectory of a spear thrown at a mammoth, it’s enough to roughly estimate the influence of a constant downward pull of gravity and the atmospheric drag on the idealized projectile. It is perfectly safe to ignore a lot of subtle effects: the non-uniformity of the gravitational field, air-density fluctuations, imperfections of the spear, not to mention relativistic effects or quantum corrections.

And this is the key to understanding our strategy: we build a simple model and then calculate corrections to it. The idea is that corrections are small enough as not to destroy the premise of the model.

Celestial Mechanics

A great example of this is celestial mechanics. To the lowest approximation, the planets revolve around the Sun along elliptical orbits. The ellipse is a solution of the one body problem in a central gravitational field of the Sun; or a two body problem, if you also take into account the tiny orbit of the Sun. But planets also interact with each other — in particular the heaviest one, Jupiter, influences the orbits of other planets. We can treat these interactions as corrections to the original solution. The more corrections we add, the better predictions we can make. Astronomers came up with some ingenious numerical methods to make such calculations possible. And yet it’s known that, in the long run, this procedure fails miserably. That’s because even the tiniest of corrections may lead to a complete change of behavior in the far future. This is the property of chaotic systems, our Solar System being just one example of such. You must have heard of the butterfly effect — the Universe is filled with this kind of butterflies.

Ephemerides: Tables showing positions of planets on the firmament.

The Microcosm

Anyone who is not shocked by quantum
theory has not understood a single word.

— Niels Bohr

At the other end of the spectrum we have atoms and elementary particles. We call them particles because, to the lowest approximation, they behave like particles. You might have seen traces made by particles in a bubble chamber.

Elementary particles might, at first sight, exhibit some properties of macroscopic objects. They follow paths through the bubble chamber. A rock thrown in the air also follows a path — so elementary particles can’t be much different from little rocks. This kind of thinking led to the first model of the atom as a miniature planetary system. As it turned out, elementary particles are nothing like little rocks. So maybe they are like waves on a lake? But waves are continuous and particles can be counted by Geiger counters. We would like elementary particles to either behave like particles or like waves but, despite our best efforts, they refuse to nicely fall into one of the categories.

There is a good reason why we favor particle and wave explanations: they are composable. A two-particle system is a composition of two one-particle systems. A complex wave can be decomposed into a superposition of simpler waves. A quantum system is neither. We might try to separate a two-particle system into its individual constituents, but then we have to introduce spooky action at a distance to explain quantum entanglement. A quantum system is an alien entity that does not fit our preconceived notions, and the main characteristic that distinguishes it from classical phenomena is that it’s not composable. If quantum phenomena were composable in some other way, different from particles or waves, we could probably internalize it. But non-composable phenomena are totally alien to our way of thinking. You might think that physicists have some deeper insight into quantum mechanics, but they don’t. Richard Feynman, who was a no-nonsense physicist, famously said, “If you think you understand quantum mechanics, you don’t understand quantum mechanics.” The problem with understanding quantum mechanics is not that it’s too complex. The problem is that our brains can only deal with concepts that are composable.

It’s interesting to notice that by accepting quantum mechanics we gave up on composability on one level in order to decompose something at another level. The periodic table of elements was the big challenge at the beginning of the 20th century. We already knew that earth, water, air, and fire were not enough. We understood that chemical compounds were combinations of atoms; but there were just too many kinds of atoms, and they could be grouped into families that shared similar properties. Atom was supposed to be indivisible (the Greek word ἄτομος [átomos] means indivisible), but we could not explain the periodic table without assuming that there was some underlying structure. And indeed, there is structure there, but the way the nucleus and the electrons compose in order to form an atom is far from trivial. Electrons are not like planets orbiting the nucleus. They form shells and orbitals. We had to wait for quantum mechanics and the Fermi exclusion principle to describe the structure of an atom.

Every time we explain one level of complexity by decomposing it in terms of simpler constituents we seem to trade off some of the simplicity of the composition itself. This happened again in the sixties, when physicists were faced with a confusing zoo of elementary particles. It seemed like there were hundreds of strongly interacting particles, hadrons, and every year was bringing new discoveries. This mess was finally cleaned up by the introduction of quarks. It was possible to categorize all hadrons as composed of just six types of quarks. This simplification didn’t come without a price, though. When we say an atom is composed of the nucleus and electrons, we can prove it by knocking off a few electrons and studying them as independent particles. We can even split the nucleus into protons and neutrons, although the neutrons outside of a nucleus are short lived. But no matter how hard we try, we cannot split a proton into its constituent quarks. In fact we know that quarks cannot exist outside of hadrons. This is called quark- or color-confinement. Quarks are supposed to come in three “colors,” but the only composites we can observe are colorless. We have stretched the idea of composition by accepting the fact that a composite structure can never be decomposed into its constituents.

I’m Slightly Perturbed

How do physicists deal with quantum mechanics? They use mathematics. Richard Feynman came up with ingenious ways to perform calculations in quantum electrodynamics using perturbation theory. The idea of perturbation theory is that you start with the simple approximation and keep adding corrections to it, just like with celestial mechanics. The terms in the expansion can be visualized as Feynman diagrams. For instance, the lowest term in the interaction between two electrons corresponds to a diagram in which the electrons exchange a virtual photon.

This term gives the classical repulsive force between two charged particles. The first quantum correction to it involves the exchange of two virtual photons. And here's the kicker: this correction is not only larger than the original term, it's infinite! So much for small corrections. Yes, there are tricks to shove this infinity under the carpet, but everybody who's not fooling themselves understands that the so-called renormalization is an ugly hack. We don't understand what the world looks like at very small scales and we try to ignore it using tricks that make mathematicians faint.

Physicists are very pragmatic. As long as there is a recipe for obtaining results that can be compared with the experiment, they are happy with a theory. In this respect, the Standard Model is the most successful theory in the Universe. It’s a unified quantum field theory of electromagnetism, strong, and weak interactions that produces results that are in perfect agreement with all high-energy experiments we were able to perform to this day. Unfortunately, the Standard Model does not give us the understanding of what’s happening. It’s as if physicists were given an alien cell phone and figured out how to use various applications on it but have no idea about the internal workings of the gadget. And that’s even before we try to involve gravity in the model.

The “periodic table” of elementary particles.

The prevailing wisdom is that these are just little setbacks on the way toward the ultimate theory of everything. We just have to figure out the correct math. It may take us twenty years, or two hundred years, but we'll get there. The hope that math is the answer led theoretical physicists to study more and more esoteric corners of mathematics and to contribute to its development. One of the most prominent theoretical physicists, Edward Witten, the father of M-theory, which unified a number of string theories, was awarded the prestigious Fields Medal for his contribution to mathematics (Nobel prizes are only awarded when a theory is confirmed by experiment which, in the case of string theory, may be a long way off, if ever).

Math is About Composition

If mathematics is discoverable, then we might indeed be able to find the right combination of math and physics to unlock the secrets of the Universe. That would be extremely lucky, though.

There is one property of all of mathematics that is really striking, and it's most clearly visible in foundational theories, such as logic, category theory, and lambda calculus. All these theories are about composability. They all describe how to construct more complex things from simpler elements. Logic is about combining simple predicates using conjunctions, disjunctions, and implications. Category theory starts by defining a composition of arrows. It then introduces ways of combining objects using products, coproducts, and exponentials. Typed lambda calculus, the foundation of computer languages, shows us how to define new types using product types, sum types, and functions. In fact it can be shown that constructive logic, cartesian closed categories, and typed lambda calculus are three different formulations of the same theory. This is known as the Curry-Howard-Lambek isomorphism. We've been discovering the same thing over and over again.

It turns out that most mathematical theories have a skeleton that can be captured by category theory. This should not be a surprise considering how the biggest revolutions in mathematics were the result of the realization that two or more disciplines were closely related to each other. The latest such breakthrough was the proof of Fermat's Last Theorem. This proof was based on the Taniyama-Shimura conjecture that related the study of elliptic curves to modular forms, two radically different branches of mathematics.

Earlier, geometry was turned upside down when it became obvious that one can define shapes using algebraic equations in cartesian coordinates. This retooling of geometry turned out to be very advantageous, because algebra has better compositional qualities than Euclidean-style geometry.

Finally, any mathematical theory starts with a set of axioms, which are combined using proof systems to produce theorems. Proof systems are compositional, which, again, supports the view that mathematics is all about composition. But even there we hit a snag when we tried to decompose the space of all statements into true and false. Gödel showed that, in any sufficiently powerful consistent theory, we can formulate a statement that can be neither proved nor disproved, and thus Hilbert's great project of defining one grand mathematical theory fell apart. It's as if we have discovered that the Lego blocks we were playing with were not part of a giant Lego spaceship.

Where Does Composability Come From?

It’s possible that composability is the fundamental property of the Universe, which would make it comprehensible to us humans, and it would validate our physics and mathematics. Personally, I’m very reluctant to accept this point of view, because it would give intelligent life a special place in the grand scheme of things. It’s as if the laws of the Universe were created in such a way as to be accessible to the brains of the evolved monkeys that we are.

It’s much more likely that mathematics describes the ways our brains are capable of composing simpler things into more complex systems. Anything that we can comprehend using our brains must, by necessity, be decomposable — and there are only so many ways of putting things together. Discovering mathematics means discovering the structure of our brains. Platonic ideals exist only as patterns of connections between neurons.

The amazing scientific progress that humanity has been able to make to this day was possible because there were so many decomposable phenomena available to us. Granted, as we progressed, we had to come up with more elaborate composition schemes. We have discovered differential equations, Hilbert spaces, path integrals, Lie groups, tensor calculus, fiber bundles, etc. With the combination of physics and mathematics we have tapped into a gold vein of composable phenomena. But research takes more and more resources as we progress, and it’s possible that we have reached the bedrock that may be resistant to our tools.

We have to seriously consider the possibility that there is a major incompatibility between the complexity of the Universe and the simplicity of our brains. We are not without recourse, though. We have at our disposal tools that multiply the power of our brains. The first such tool is language, which helps us combine the brain power of large groups of people. The invention of the printing press and then the internet helped us record and gain access to vast stores of information gathered by the combined forces of teams of researchers over long periods of time. But even though this is a quantitative improvement, the processing of this information still relies on composition, because it has to be presented to human brains. The fact that work can be divided among members of larger teams is proof of its decomposability. This is also why we sometimes need a genius to make a major breakthrough, when a task cannot be easily decomposed into smaller, easier subtasks. But even a genius has to start somewhere, and the ability to stand on the shoulders of giants is predicated on decomposability.

Can Computers Help?

The role of computers in doing science is steadily increasing. To begin with, once we have a scientific theory, we can write computer programs to perform calculations. Nobody calculates the orbits of planets by hand any more; computers can do it much faster and error-free. We are also beginning to use computers to prove mathematical theorems. The four-color theorem is an example of a result whose proof would be impossible without the help of computers. It was decomposable, but the number of special cases was well over a thousand (it was later reduced to 633, still too many, even for a dedicated team of graduate students).

Every planar map can be colored using only four colors.

Computer programs that are used in theorem proving provide a level of indirection between the mind of a scientist and formal manipulations necessary to prove a theorem. A programmer is still in control, and the problem is decomposable, but the number of components may be much larger, often too large for a human to go over one by one. The combined forces of humans and computers can stretch the limits of composability.

But how can we tackle problems that cannot be decomposed? First, let’s observe that in real life we rarely bother to go through the process of detailed analysis. In fact, the survival of our ancestors depended on the ability to react quickly to changing circumstances, to make instantaneous decisions. When you see a tiger, you don’t decompose the image into individual parts, analyze them, and put together a model of a tiger. Image recognition is one of those areas where the analytic approach fails miserably. People tried to write programs that would recognize faces using separate subroutines to detect eyes, noses, lips, and ears, and then compose them together, but they failed. And yet we instinctively recognize the faces of familiar people at a glance.

Neural Networks and AI

We are now able to teach computers to classify images and recognize faces. We do it not by designing dedicated algorithms; we do it by training artificial neural networks. A neural network doesn’t start with a subsystem for recognizing eyes or noses. It’s possible that, in the process of training, it will develop the notions of lines, shadows, maybe even eyes and noses. But by no means is this necessary. Those abstractions, if they evolve, would be encoded in the connections between its neurons. We might even help the AI develop some general abstractions by tweaking its architecture. It’s common, for instance, to include convolutional layers to pre-process the input. Such a layer can be taught to recognize local features and compress the input to a more manageable size. This is very similar to how our own vision works: the retina in our eye does this kind of pre-processing before sending compressed signals through the optic nerve.

Compression is the key to matching the complexity of the task at hand to the simplicity of the system that is processing it. Just like our sensory organs and brains compress the inputs, so do neural networks. There are two kinds of compression: the kind that doesn’t lose any information, just removing the redundancy in the original signal; and the lossy kind that throws away irrelevant information. The task of deciding what information is irrelevant is in itself a process of discovery. The difference between the Earth and a sphere is the size of the Himalayas, but we ignore it when we look at the globe. When calculating orbits around the Sun, we shrink all planets to points. That’s compression by elimination of details that we deem less important for the problem we are solving. In science, this kind of compression is called abstraction.

We are still way ahead of neural networks in our capacity to create abstractions. But it’s possible that, at some point, they’ll catch up with us. The problem is: Will we be able to understand machine-generated abstractions? We are already at the limits of understanding human-generated abstractions. You may count yourself a member of a very small club if you understand the statement “a monad is a monoid in the category of endofunctors,” which is chock-full of mathematical abstractions. If neural networks come up with new abstractions/compression schemes, we might not be able to reverse-engineer them. Unlike a human scientist, an AI is unlikely to be able to explain to us how it came up with a particular abstraction.

I’m not scared of a future AI trying to eliminate humankind (unless that’s what its design goals are). I’m afraid of the scenario in which we ask the AI a question like, “Can quantum mechanics be unified with gravity?” and it will answer, “Yes, but I can’t explain it to you, because you don’t have the brain capacity to understand the explanation.”

And this is the optimistic scenario. It assumes that such questions can be answered within the decomposition/re-composition framework. That the Universe can be decomposed into particles, waves, fields, strings, branes, and maybe some new abstractions that we haven’t even thought about. We would at least get the satisfaction that we were on the right path, but that the number of moving parts was simply too large for us to assimilate, just like with the proof of the four-color theorem.

But it’s possible that this reductionist scenario has its limits. That the complexity of the Universe is, at some level, irreducible and cannot be captured by human brains or even the most sophisticated AIs.

There are people who believe that we live in a computer simulation. But if the Universe is irreducible, it would mean that the smallest computer on which such a simulation could be run is the Universe itself, in which case it doesn’t make sense to call it a simulation.

Conclusion

The scientific method has been tremendously successful in explaining the workings of our world. It led to the exponential expansion of science and technology that started in the 19th century and continues to this day. We are so used to its successes that we are betting the future of humanity on it. Usually, when somebody attacks the scientific method, they are coming from a background of obscurantism. Such attacks are easily rebuffed or dismissed. What I’m arguing is that science is not a property of the Universe, but rather a construct of our limited brains. We have developed some very sophisticated tools to create models of the Universe based on the principle of composition. Mathematics is the study of various ways of composing things, and physics is applied composition. There is no guarantee, however, that the Universe is decomposable. Assuming that would be tantamount to postulating that its structure revolves around human brains, just as we used to believe that the Universe revolved around the Earth.

You can also watch my talk on this subject.


Trying to improve my Haskell coding skills, I decided to test myself at solving the 2017 Advent of Code problems. It’s been a lot of fun and a great learning experience. One problem in particular stood out for me because, for the first time, it let me apply, in anger, the ideas I learned from category theory. But I’m not going to talk about category theory this time, just about programming.

The problem is really about dominoes. You get a set of dominoes, or pairs of numbers (the twist is that the numbers are not capped at 6), and you are supposed to build a chain, in which the numbers are matched between consecutive pieces. For instance, the chain [(0, 5), (5, 12), (12, 12), (12, 1)] is admissible. Like in the real game, the pieces can be turned around, so (1, 3) can be also used as (3, 1). The goal is to build a chain that starts from zero and maximizes the score, which is the sum of numbers on the pieces used.

The algorithm is pretty straightforward. You put all dominoes in a data structure that lets you quickly pull the pieces you need, you recursively build all possible chains, evaluate their sums, and pick the winner.

Let’s start with some data structures.

type Piece = (Int, Int)
type Chain = [Piece]

At each step in the procedure, we will be looking for a domino with a number that matches the end of the current chain. If the chain is [(0, 5), (5, 12)], we will be looking for pieces with 12 on one end. It’s best to organize pieces in a map indexed by these numbers. To allow for turning the dominoes, we’ll add each piece twice. For instance, the piece (12, 1) will be added as (12, 1) and (1, 12). We can make a small optimization for symmetric dominoes, like (12, 12), by adding them only once.

We’ll use the Map from the containers package, along with a couple of helpers from Data.List and Data.Maybe that we’ll need a little later:

import qualified Data.Map as Map
import Data.List (delete)
import Data.Maybe (fromJust)

The Map we’ll be using is:

type Pool = Map.Map Int [Int]

The key is an integer, the value is a list of integers corresponding to the other ends of pieces. This is made clear by the way we insert each piece in the map:

addPiece :: Piece -> Pool -> Pool
addPiece (m, n) = if m /= n 
                  then add m n . add n m
                  else add m n
  where 
    add m n pool = 
      case Map.lookup m pool of
        Nothing  -> Map.insert m [n] pool
        Just lst -> Map.insert m (n : lst) pool

I used point-free notation. If that’s confusing, here’s the translation:

addPiece :: Piece -> Pool -> Pool
addPiece (m, n) pool = if m /= n 
                       then add m n (add n m pool)
                       else add m n pool

As I said, each piece is added twice, except for the symmetric ones.

After using a piece in a chain, we’ll have to remove it from the pool:

removePiece :: Piece -> Pool -> Pool
removePiece (m, n) = if m /= n
                     then rem m n . rem n m
                     else rem m n
  where
    rem :: Int -> Int -> Pool -> Pool
    rem m n pool = 
      case fromJust $ Map.lookup m pool of
        []  -> Map.delete m pool
        lst -> Map.insert m (delete n lst) pool

You might be wondering why I’m using a partial function fromJust. In industrial-strength code I would pattern match on the Maybe and issue a diagnostic if the piece were not found. Here I’m fine with a fatal exception if there’s a bug in my reasoning.

It’s worth mentioning that, like all data structures in Haskell, Map is a persistent data structure. That means it’s never modified in place, and its previous versions persist. This is invaluable in this kind of recursive algorithm, where we use backtracking to explore multiple paths.
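
Here’s a tiny illustration (the names pool and pool' are mine): removing a piece produces a new pool, while the original is untouched and can still be consulted when we backtrack:

pool  = addPiece (0, 5) (addPiece (5, 12) Map.empty)
pool' = removePiece (0, 5) pool

-- The old pool still contains the piece; the new one doesn't:
-- Map.lookup 0 pool   ==>  Just [5]
-- Map.lookup 0 pool'  ==>  Just []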

The input of the puzzle is a list of pieces. We’ll start by inserting them into our map. In functional programming we never think in terms of loops: we transform data. A list of pieces is a (recursive) data structure. We want to traverse it and accumulate the information stored in it into a Map. This kind of transformation is, in general, called a catamorphism. A list catamorphism is called a fold. It is specified by two things: (1) its action on the empty list (here, it turns it into Map.empty), and (2) its action on the head of the current list and the accumulated result of processing the tail. The head of the current list is a piece, and the accumulator is the Map. The function addPiece has just the right signature:

presort :: [Piece] -> Pool
presort = foldr addPiece Map.empty

I’m using a right fold, but a left fold would work fine, too. Again, this is point-free notation.
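
As a quick sanity check (the exact order of the lists inside the Map depends on the insertion order, so your output may differ slightly):

presort [(0, 5), (5, 12)]
-- ==> fromList [(0,[5]),(5,[0,12]),(12,[5])]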

Now that the preliminaries are over, let’s think about the algorithm. My first approach was to define a bunch of mutually recursive functions that would build all possible chains, score them, and then pick the best one. After a few tries, I got hopelessly bogged down in details. I took a break and started thinking.

Functional programming is all about functions, right? Using a recursive function is the correct approach. Or is it? The more you program in Haskell, the more you realize that you get the most power by considering wholesale transformations of data structures. When creating a Map of pieces, I didn’t write a recursive function over a list — I used a fold instead. Of course, behind the scenes, fold is implemented using recursion (which, thanks to tail recursion, is usually transformed into a loop). But the idea of applying transformations to data structures is what lets us soar above the sea of details and into the higher levels of abstraction.

So here’s the new idea: let’s create one gigantic data structure that contains all admissible chains built from the domino pieces at our disposal. The obvious choice is a tree. At the root we’ll have the starting number: zero, as specified in the description of the problem. All pool pieces that have a zero at one end will start a new branch. Instead of storing the whole piece at the node, we can just store the second number — the first being determined by the parent. So a piece (0, 5) starts a branch with a 5 node right below the 0 node. Next we’d look for pieces with a 5. Suppose that one of them is (5, 12), so we create a node with a 12, and so on. A tree with a variable list of branches is called a rose tree:

data Rose = NodeR Int [Rose]
  deriving Show

It’s always instructive to keep in mind at least one special boundary case. Consider what would happen if (0, 5) were the only piece in the pool. We’d end up with the following tree:

NodeR 0 [NodeR 5 []]

We’ll come back to this example later.

The next question is, how do we build such a tree? We start with a set of dominoes gathered in a Map. At every step in the algorithm we pick a matching domino, remove it from the pool, and start a new subtree. To start a subtree we need a number and a pool of remaining pieces. Let’s call this combination a seed.

The process of building a recursive data structure from a seed is called an anamorphism. It’s a well-studied and well-understood process, so let’s try to apply it in our case. The key is to separate the big picture from the small picture. The big picture is the recursive data structure: the rose tree, in our case. The small picture is what happens at a single node.

Let’s start with the small picture. We are given a seed of the type (Int, Pool). We use the number as a key to retrieve a list of matching pieces from the Pool (strictly speaking, just a list of numbers corresponding to the other ends of the pieces). Each piece will start a new subtree. The seed for such a subtree consists of the number at the other end of the piece and a new Pool with the piece removed. A function that produces seeds from a given seed looks like this:

grow :: (Int, Pool) -> [(Int, Pool)]
grow (n, pool) = 
  case Map.lookup n pool of
    Nothing -> []
    Just ms -> [(m, removePiece (m, n) pool) | m <- ms]

Now we have to translate this to a procedure that recreates a complete tree. The trick is to split the definition of the tree into local and global pictures. The local picture is captured by this data structure:

data TreeF a = NodeF Int [a]
  deriving Functor

Here, the recursion of the original rose tree is replaced by the type parameter a. This data structure, which describes a single node, or a very shallow tree, is a functor with respect to a (with the DeriveFunctor extension enabled, the compiler is able to figure out the implementation of fmap automatically, but you can also write it by hand).
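
For reference, the hand-written instance, equivalent to the derived one, would be:

instance Functor TreeF where
  fmap f (NodeF n xs) = NodeF n (fmap f xs)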

It’s important to realize that the recursive definition of a rose tree can be recovered as a fixed point of this functor. We define the fixed point as the data structure X that results from replacing a in the definition of TreeF with X. Symbolically:

X = TreeF X

In fact, this procedure of finding the fixed point can be written in all generality for any functor f. If we call the fixed point Fix f, we can define it by replacing the type argument to f with Fix f, as in:

newtype Fix f = Fix { unFix :: f (Fix f) }

Our rose tree is the fixed point of the functor TreeF:

type Tree = Fix TreeF
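
Written in terms of Fix, our earlier one-piece example NodeR 0 [NodeR 5 []] becomes:

Fix (NodeF 0 [Fix (NodeF 5 [])])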

This splitting of the recursive part from the functor part is very convenient because it lets us use non-recursive functions to generate or traverse recursive data structures.

In particular, the procedure of unfolding a data structure from a seed is captured by a non-recursive function of the following signature:

type Coalgebra f a = a -> f a

Here, a serves as the seed that generates a single node populated with new seeds. We have already seen a function that generates seeds, we only have to cast it in the form of a coalgebra:

coalg :: Coalgebra TreeF (Int, Pool)
coalg (n, pool) = 
  case Map.lookup n pool of
    Nothing -> NodeF n []
    Just ms -> NodeF n [(m, removePiece (m, n) pool) | m <- ms]

The pièce de résistance is the formula that uses a given coalgebra to unfold a recursive data structure. It’s called the anamorphism:

ana :: Functor f => Coalgebra f a -> a -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg

Here’s the play-by-play: The anamorphism takes a seed and applies the coalgebra to it. That generates a single node with new seeds in place of children. Then it fmaps the whole anamorphism over this node, thus unfolding the seeds into full-blown trees. Finally, it applies the constructor Fix to produce the final tree. Notice that this is a recursive definition.

We are now in a position to build a tree that contains all admissible chains of dominoes. Applying the anamorphism to our coalgebra gives us a function that unfolds any seed into a full tree:

tree :: (Int, Pool) -> Tree
tree = ana coalg
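
As a sanity check, unfolding the single-piece pool reproduces the tree we built by hand, just with the Fix wrappers in place (evaluated by hand here, since Fix has no Show instance):

tree (0, presort [(0, 5)])
-- ==> Fix (NodeF 0 [Fix (NodeF 5 [])])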

Once we have this tree, we could traverse it, or fold it, to retrieve all the chains and find the best one.

But once we have our tree in the form of a fixed point, we can be smart about folds as well. The procedure is essentially the same, except that now we are collecting information from the nodes of a tree. To do this, we define a non-recursive function called the algebra:

type Algebra f a = f a -> a

The type a is called the carrier of the algebra. It plays the role of the accumulator of data.

We are interested in the algebra that would help us collect chains of dominoes from our rose tree. Suppose that we have already applied this algebra to all children of a particular node. Each child tree would produce its own list of chains. Our goal is to extend those chains by adding one more piece that is determined by the current node. Let’s start with our earlier trivial case of a tree that contains a single piece (0, 5):

NodeR 0 [NodeR 5 []]

We replace the leaf node with some value x of the still unspecified carrier type. We get:

NodeR 0 x

Obviously, x must contain the number 5, to let us recover the original piece (0, 5). The result of applying the algebra to the top node must produce the chain [(0, 5)]. These two pieces of information suggest the carrier type to be a combination of a number and a list of chains. The leaf node is turned into (5, []), and the top node produces (0, [[(0, 5)]]).

With this choice of the carrier type, the algebra is easy to implement:

chainAlg :: Algebra TreeF (Int, [Chain])
chainAlg (NodeF n []) = (n, [])
chainAlg (NodeF n lst) = (n, concat [push (n, m) bs | (m, bs) <- lst])
  where
    push :: (Int, Int) -> [Chain] -> [Chain]
    push (n, m) [] = [[(n, m)]]
    push (n, m) bs = [(n, m) : br | br <- bs]

For the leaf (a node with no children), we return the number stored in it together with an empty list. Otherwise, we gather the chains from children. If a child returns an empty list of chains, meaning it was a leaf, we create a single-piece chain. If the list is not empty, we prepend a new piece to all the chains. We then concatenate all lists of chains into one list.

All that remains is to apply our algebra recursively to the whole tree. Again, this can be done in full generality using a catamorphism:

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

We start by stripping the fixed point constructor using unFix to expose a node, apply the catamorphism to all its children, and apply the algebra to the node.
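
Applied to our little one-piece tree, the catamorphism produces exactly the carrier value we worked out above:

cata chainAlg (Fix (NodeF 0 [Fix (NodeF 5 [])]))
-- ==> (0, [[(0, 5)]])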

To summarize: we use an anamorphism to create a tree, then use a catamorphism to convert the tree to a list of chains. Notice that we don’t need the tree itself — we only use it to drive the algorithm. Because Haskell is lazy, the tree is evaluated on demand, node by node, as it is walked by the catamorphism.

This combination of an anamorphism followed immediately by a catamorphism comes up often enough to merit its own name. It’s called a hylomorphism, and can be written concisely as:

hylo :: Functor f => Algebra f a -> Coalgebra f b -> b -> a
hylo f g = f . fmap (hylo f g) . g

In our example, we produce a list of chains by applying the hylomorphism to our algebra, our coalgebra, and the initial seed (pool is the Map of pieces we built with presort):

let (_, chains) = hylo chainAlg coalg (0, pool)

The solution of the puzzle is the chain with the maximum score:

maximum $ fmap score chains

score :: Chain -> Int
score = sum . fmap score1
  where score1 (m, n) = m + n
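
Putting it all together, the whole solution fits in a few lines. This is a minimal sketch: solve is my own name for it, and parsing the puzzle input into a list of pieces is left out.

solve :: [Piece] -> Int
solve pieces = maximum (fmap score chains)
  where
    -- unfold the tree of chains and fold it back down in one pass
    (_, chains) = hylo chainAlg coalg (0, presort pieces)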

Conclusion

The solution that I described in this post was not the first one that came to my mind. I could have persevered with the more obvious approach of implementing a big recursive function or a series of smaller mutually recursive ones. I’m glad I didn’t. I have found out that I’m much more productive when I can reason in terms of applying transformations to data structures.

You might think that a data structure that contains all admissible chains of dominoes would be too large to fit comfortably in memory, and you would probably be right in a strict language. But Haskell is a lazy language, and data structures work more often as control structures than as storage for data.

The use of recursion schemes further simplifies programming. You can design algebras and coalgebras as non-recursive functions, which are much easier to reason about, and then apply them to recursive data structures using catamorphisms and anamorphisms. You can even combine them into hylomorphisms.

It’s worth mentioning that we routinely apply these techniques to lists. I already mentioned that a fold is nothing but a list catamorphism. The functor in question can be written as:

data ListF e a = Nil | Cons e a
  deriving Functor

A list is a fixed point of this functor:

type List e = Fix (ListF e)
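
For instance, the list [1, 2] is encoded as:

Fix (Cons 1 (Fix (Cons 2 (Fix Nil))))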

An algebra for the list functor is implemented by pattern matching on its two constructors:

alg :: ListF e a -> a
alg Nil = z
alg (Cons e a) = f e a

Notice that a list algebra is parameterized by two items: the value z and the function f :: e -> a -> a. These are exactly the parameters to foldr. So, when you are calling foldr, you are defining an algebra and performing a list catamorphism.
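
To make the correspondence concrete, here’s a sketch of foldr recovered as a catamorphism over the Fix-encoded list (foldrFix is my own name for it):

foldrFix :: (e -> a -> a) -> a -> List e -> a
foldrFix f z = cata alg
  where
    alg Nil        = z
    alg (Cons e a) = f e a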

Likewise, a list anamorphism takes a coalgebra and a seed and produces a list. Finite lists are produced by the anamorphism called unfoldr:

unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
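
In the Fix encoding, the same idea is just ana applied to a list coalgebra. Here’s a small sketch (countdown is my own example):

countdownCoalg :: Coalgebra (ListF Int) Int
countdownCoalg 0 = Nil
countdownCoalg n = Cons n (n - 1)

countdown :: Int -> List Int
countdown = ana countdownCoalg
-- countdown 3 builds the Fix-encoded version of [3, 2, 1]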

You can learn more about algebras and coalgebras, from the point of view of category theory, in another blog post.

The source code for this post is available on GitHub.