Abstract

The use of free monads, free applicatives, and cofree comonads lets us separate the construction of (often effectful or context-dependent) computations from their interpretation. In this paper I show how the ad hoc process of writing interpreters for these free constructions can be systematized using the language of higher order algebras (coalgebras) and catamorphisms (anamorphisms).

Introduction

Recursion schemes [meijer] are an example of a successful application of concepts from category theory to programming. The idea is that recursive data structures can be defined as initial algebras of functors. This allows a separation of concerns: the functor describes the local shape of the data structure, and the fixed point combinator builds the recursion. Operations over data structures can likewise be separated into shallow, non-recursive computations described by algebras, and generic recursive procedures described by catamorphisms. In this way, data structures often replace control structures in driving computations.

Since functors also form a category, it’s possible to define functors acting on functors. Such higher order functors show up in a number of free constructions, notably free monads, free applicatives, and cofree comonads. These free constructions have good composability properties and they provide means of separating the creation of effectful computations from their interpretation.

This paper’s contribution is to systematize the construction of such interpreters. The idea is that free constructions arise as fixed points of higher order functors, and therefore can be approached with the same algebraic machinery as recursive data structures, only at a higher level. In particular, interpreters can be constructed as catamorphisms or anamorphisms of higher order algebras/coalgebras.

Initial Algebras and Catamorphisms

The canonical example of a data structure that can be described as an initial algebra of a functor is a list. In Haskell, a list can be defined recursively:

data List a = Nil | Cons a (List a)

There is an underlying non-recursive functor:

data ListF a x = NilF | ConsF a x
instance Functor (ListF a) where
  fmap f NilF = NilF
  fmap f (ConsF a x) = ConsF a (f x)

Once we have a functor, we can define its algebras. An algebra consists of a carrier c and a structure map (evaluator). An algebra can be defined for an arbitrary functor f:

type Algebra f c = f c -> c

Here’s an example of a simple list algebra, with Int as its carrier:

sum :: Algebra (ListF Int) Int
sum NilF = 0
sum (ConsF a c) = a + c

Algebras for a given functor form a category. The initial object in this category (if it exists) is called the initial algebra. In Haskell, we call the carrier of the initial algebra Fix f. Its structure map is a function:

f (Fix f) -> Fix f

By Lambek’s lemma, the structure map of the initial algebra is an isomorphism. In Haskell, this isomorphism is given by a pair of functions: the constructor In and the destructor out of the fixed point combinator:

newtype Fix f = In { out :: f (Fix f) }

When applied to the list functor, the fixed point gives rise to an alternative definition of a list:

type List a = Fix (ListF a)

The initiality of the algebra means that there is a unique algebra morphism from it to any other algebra. This morphism is called a catamorphism and, in Haskell, can be expressed as:

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . out
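
For example, we can apply cata with the sum algebra defined above to a small list built with Fix (a quick sketch; note that this sum shadows Prelude's sum):

lst :: Fix (ListF Int)
lst = In (ConsF 1 (In (ConsF 2 (In NilF))))

-- cata sum lst = 1 + (2 + 0) = 3
total :: Int
total = cata sum lst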

A list catamorphism is known as a fold. Since the list functor is a sum type, its algebra consists of a value—the result of applying the algebra to NilF—and a function of two variables that corresponds to the ConsF constructor. You may recognize those two as the arguments to foldr:

foldr :: (a -> c -> c) -> c -> [a] -> c

The list functor is interesting because its fixed point is a free monoid. In category theory, monoids are special objects in monoidal categories—that is, categories equipped with a product of two objects. In Haskell, a pair type plays the role of such a product, with the unit type as its unit (up to isomorphism).

As you can see, the list functor is the sum of a unit and a product. This formula can be generalized to an arbitrary monoidal category with a tensor product \otimes and a unit 1:

L\, a\, x = 1 + a \otimes x

Its initial algebra is a free monoid.

Higher Algebras

In category theory, once you've performed a construction in one category, it's easy to repeat it in another category that shares similar properties. In Haskell, this might require reimplementing the construction.

We are interested in the category of endofunctors, where objects are endofunctors and morphisms are natural transformations. Natural transformations are represented in Haskell as polymorphic functions:

type f :~> g = forall a. f a -> g a
infixr 0 :~>

In the category of endofunctors we can define (higher order) functors, which map functors to functors and natural transformations to natural transformations:

class HFunctor hf where
  hfmap :: (g :~> h) -> (hf g :~> hf h)
  ffmap :: Functor g => (a -> b) -> hf g a -> hf g b

The first function lifts a natural transformation, and the second, ffmap, witnesses the fact that the result of a higher order functor is again a functor.

An algebra for a higher order functor hf consists of a functor f (the carrier object in the functor category) and a natural transformation (the structure map):

type HAlgebra hf f = hf f :~> f

As with regular functors, we can define an initial algebra using the fixed point combinator for higher order functors:

newtype FixH hf a = InH { outH :: hf (FixH hf) a }

Similarly, we can define a higher order catamorphism:

hcata :: HFunctor h => HAlgebra h f -> FixH h :~> f
hcata halg = halg . hfmap (hcata halg) . outH

The question is, are there any interesting examples of higher order functors and algebras that could be used to solve real-life programming problems?

Free Monad

We’ve seen the usefulness of lists, or free monoids, for structuring computations. Let’s see if we can generalize this concept to higher order functors.

The definition of a list relies on the cartesian structure of the underlying category. It turns out that there are multiple monoidal structures of interest that can be defined in the category of functors. The simplest one takes the product of two endofunctors to be their composition. Any two endofunctors can be composed. The unit of functor composition is the identity functor.

If you picture endofunctors as containers, you can easily imagine a tree of lists, or a list of Maybes.

A monoid based on this particular monoidal structure in the endofunctor category is a monad. It’s an endofunctor m equipped with two natural transformations representing unit and multiplication:

class Monad m where
  eta :: Identity    :~> m
  mu  :: Compose m m :~> m

In Haskell, the components of these natural transformations are known as return and join.
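
To relate the two presentations, here is a sketch (my own names etaStd and muStd, using the standard Prelude Monad rather than the class above, with Identity and Compose from Data.Functor.Identity and Data.Functor.Compose) showing return and join as the components of eta and mu:

etaStd :: Monad m => Identity :~> m
etaStd (Identity a) = return a

muStd :: Monad m => Compose m m :~> m
muStd (Compose mma) = mma >>= id  -- this is join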

A straightforward generalization of the list functor to the functor category can be written as:

L\, f\, g = 1 + f \circ g

or, in Haskell,

type FunctorList f g = Identity :+: Compose f g

where we used the operator :+: to define the coproduct of two functors:

data (f :+: g) e = Inl (f e) | Inr (g e)
infixr 7 :+:

Using more conventional notation, FunctorList can be written as:

data MonadF f g a = 
    DoneM a 
  | MoreM (f (g a))

We’ll use it to generate a free monoid in the category of endofunctors. First of all, let’s show that it’s indeed a higher order functor in the second argument g:

instance Functor f => HFunctor (MonadF f) where
  hfmap _   (DoneM a)  = DoneM a
  hfmap nat (MoreM fg) = MoreM $ fmap nat fg
  ffmap h (DoneM a)    = DoneM (h a)
  ffmap h (MoreM fg)   = MoreM $ fmap (fmap h) fg

In category theory, because of size issues, this functor doesn’t always have a fixed point. For most common choices of f (e.g., for algebraic data types), the initial higher order algebra for this functor exists, and it generates a free monad. In Haskell, this free monad can be defined as:

type FreeMonad f = FixH (MonadF f)

We can show that FreeMonad is indeed a monad by implementing return and bind:

instance Functor f => Monad (FreeMonad f) where
  return = InH . DoneM
  (InH (DoneM a))    >>= k = k a
  (InH (MoreM ffra)) >>= k = 
        InH (MoreM (fmap (>>= k) ffra))
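
In current GHC, a Monad instance presupposes Functor and Applicative instances, which the text leaves implicit; a minimal sketch:

instance Functor f => Functor (FreeMonad f) where
  fmap h (InH (DoneM a))  = InH (DoneM (h a))
  fmap h (InH (MoreM fg)) = InH (MoreM (fmap (fmap h) fg))

instance Functor f => Applicative (FreeMonad f) where
  pure = InH . DoneM
  mf <*> ma = mf >>= \f -> fmap f ma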

Free monads have many applications in programming. They can be used to write generic monadic code, which can then be interpreted in different monads. A very useful property of free monads is that they can be composed using coproducts. This follows from a theorem in category theory which states that left adjoints preserve coproducts (or, more generally, colimits). Free constructions are, by definition, left adjoints to forgetful functors. This property of free monads was explored by Swierstra [swierstra] in his solution to the expression problem. I will use an example based on his paper to show how to construct monadic interpreters using higher order catamorphisms.

Free Monad Example

A stack-based calculator can be implemented directly using the state monad. Since this is a very simple example, it will be instructive to re-implement it using the free monad approach.

We start by defining a functor, in which the free parameter k represents the continuation:

data StackF k  = Push Int k
               | Top (Int -> k)
               | Pop k            
               | Add k
               deriving Functor

We use this functor to build a free monad:

type FreeStack = FreeMonad StackF

You may think of the free monad as a tree with nodes that are defined by the functor StackF. The unary constructors, like Add or Pop, create linear list-like branches; but the Top constructor branches out with one child per integer.

The level of indirection we get by separating recursion from the functor makes constructing free monad trees syntactically challenging, so it makes sense to define a helper function:

liftF :: (Functor f) => f r -> FreeMonad f r
liftF fr = InH $ MoreM $ fmap (InH . DoneM) fr

With this function, we can define smart constructors that build leaves of the free monad tree:

push :: Int -> FreeStack ()
push n = liftF (Push n ())

pop :: FreeStack ()
pop = liftF (Pop ())

top :: FreeStack Int
top = liftF (Top id)

add :: FreeStack ()
add = liftF (Add ())

All these preparations finally pay off when we are able to create small programs using do notation:

calc :: FreeStack Int
calc = do
  push 3
  push 4
  add
  x <- top
  pop
  return x

Of course, this program does nothing but build a tree. We need a separate interpreter to do the calculation. We’ll interpret our program in the state monad, with state implemented as a stack (list) of integers:

type MemState = State [Int]

The trick is to define a higher order algebra for the functor that generates the free monad and then use a catamorphism to apply it to the program. Notice that implementing the algebra is a relatively simple procedure because we don’t have to deal with recursion. All we need is to case-analyze the shallow constructors for the free monad functor MonadF, and then case-analyze the shallow constructors for the functor StackF.

runAlg :: HAlgebra (MonadF StackF) MemState
runAlg (DoneM a)  = return a
runAlg (MoreM ex) = 
  case ex of
    Top  ik  -> get >>= ik  . head
    Pop  k   -> get >>= put . tail   >> k
    Push n k -> get >>= put . (n : ) >> k
    Add  k   -> get >>= \(a : b : s) ->  -- lambda pattern avoids MonadFail
                  put (a + b : s) >> k

The catamorphism converts the program calc into a state monad action, which can be run over an empty initial stack:

runState (hcata runAlg calc) []
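
Given the definition of calc above, this should evaluate to:

> (7,[])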

The real bonus is the freedom to define other interpreters by simply switching the algebras. Here’s an algebra whose carrier is the Const functor:

showAlg :: HAlgebra (MonadF StackF) (Const String)

showAlg (DoneM a) = Const "Done!"
showAlg (MoreM ex) = Const $
  case ex of
    Push n k -> 
      "Push " ++ show n ++ ", " ++ getConst k
    Top ik -> 
      "Top, " ++ getConst (ik 42)
    Pop k -> 
      "Pop, " ++ getConst k
    Add k -> 
      "Add, " ++ getConst k

Running the catamorphism over this algebra will produce a listing of our program:

getConst $ hcata showAlg calc

> "Push 3, Push 4, Add, Top, Pop, Done!"

Free Applicative

There is another monoidal structure that exists in the category of functors. In general, this structure will work for functors from an arbitrary monoidal category C to Set. Here, we’ll restrict ourselves to endofunctors on Set. The product of two functors is given by Day convolution, which can be implemented in Haskell using an existential type:

data Day f g c where
  Day :: f a -> g b -> ((a, b) -> c) -> Day f g c

The intuition is that a Day convolution contains a container of some as, and another container of some bs, together with a function that can convert any pair (a, b) to c.

Day convolution is a higher order functor:

instance HFunctor (Day f) where
  hfmap nat (Day fx gy xyt) = Day fx (nat gy) xyt
  ffmap h   (Day fx gy xyt) = Day fx gy (h . xyt)

In fact, because Day convolution is symmetric up to isomorphism, it is automatically functorial in both arguments.

To complete the monoidal structure, we also need a functor that could serve as a unit with respect to Day convolution. In general, this would be the hom-functor from the monoidal unit:

C(1, -)

In our case, since 1 is the singleton set, this functor reduces to the identity functor.

We can now define monoids in the category of functors with the monoidal structure given by Day convolution. These monoids are equivalent to lax monoidal functors which, in Haskell, form the class:

class Functor f => Monoidal f where
  unit  :: f ()
  (>*<) :: f x -> f y -> f (x, y)

Lax monoidal functors are equivalent to applicative functors [mcbride], as seen in this implementation of pure and <*>:

  pure  :: a -> f a
  pure a = fmap (const a) unit
  (<*>) :: f (a -> b) -> f a -> f b
  fs <*> as = fmap (uncurry ($)) (fs >*< as)
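
The converse direction is equally direct; given any Applicative, the Monoidal methods can be recovered as:

  unit = pure ()
  fx >*< fy = (,) <$> fx <*> fy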

We can now use the same general formula, but with Day convolution as the product:

L\, f\, g = 1 + f \star g

to generate a free monoidal (applicative) functor:

data FreeF f g t =
      DoneF t
    | MoreF (Day f g t)

This is indeed a higher order functor:

instance HFunctor (FreeF f) where
    hfmap _ (DoneF x)     = DoneF x
    hfmap nat (MoreF day) = MoreF (hfmap nat day)
    ffmap f (DoneF x)     = DoneF (f x)
    ffmap f (MoreF day)   = MoreF (ffmap f day)

and it generates a free applicative functor as its initial algebra:

type FreeA f = FixH (FreeF f)
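
To use FreeA with applicative syntax, as in the example that follows, we also need Functor and Applicative instances. They are left implicit in the text; one possible implementation (a sketch, following the standard free applicative construction over Day convolution):

instance Functor f => Functor (FreeA f) where
  fmap h (InH (DoneF t))   = InH (DoneF (h t))
  fmap h (InH (MoreF day)) = InH (MoreF (ffmap h day))

instance Functor f => Applicative (FreeA f) where
  pure = InH . DoneF
  InH (DoneF g) <*> fa = fmap g fa
  InH (MoreF (Day fx gy k)) <*> fa =
    InH (MoreF (Day fx ((,) <$> gy <*> fa)
                       (\(x, (y, a)) -> k (x, y) a)))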

Free Applicative Example

The following example is taken from the paper by Capriotti and Kaposi [capriotti]. It’s an option parser for a command line tool, whose result is a user record of the following form:

data User = User
  { username :: String 
  , fullname :: String
  , uid      :: Int
  } deriving Show

A parser for an individual option is described by a functor that contains the name of the option, an optional default value for it, and a reader from string:

data Option a = Option
  { optName    :: String
  , optDefault :: Maybe a
  , optReader  :: String -> Maybe a 
  } deriving Functor

Since we don’t want to commit to a particular parser, we’ll create a parsing action using a free applicative functor:

userP :: FreeA Option User
userP  = pure User 
  <*> one (Option "username" (Just "John")  Just)
  <*> one (Option "fullname" (Just "Doe")   Just)
  <*> one (Option "uid"      (Just 0)       readInt)

where readInt is a reader of integers:

readInt :: String -> Maybe Int
readInt s = readMaybe s  -- readMaybe comes from Text.Read

and we used the following smart constructors:

one :: f a -> FreeA f a
one fa = InH $ MoreF $ Day fa (done ()) fst

done :: a -> FreeA f a
done a = InH $ DoneF a

We are now free to define different algebras to evaluate the free applicative expressions. Here’s one that collects all the defaults:

alg :: HAlgebra (FreeF Option) Maybe
alg (DoneF a) = Just a
alg (MoreF (Day oa mb f)) = 
  fmap f (optDefault oa >*< mb)

I used the Monoidal instance for Maybe:

instance Monoidal Maybe where
  unit = Just ()
  Just x >*< Just y = Just (x, y)
  _ >*< _ = Nothing

This algebra can be run over our little program using a catamorphism:

parserDef :: FreeA Option a -> Maybe a
parserDef = hcata alg
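
Running it on userP should collect all the defaults:

parserDef userP

> Just (User {username = "John", fullname = "Doe", uid = 0})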

And here’s an algebra that collects the names of all the options:

alg2 :: HAlgebra (FreeF Option) (Const String)
alg2 (DoneF a) = Const "."
alg2 (MoreF (Day oa bs f)) = 
  fmap f (Const (optName oa) >*< bs)

Again, this uses a Monoidal instance for Const:

instance Monoid m => Monoidal (Const m) where
  unit = Const mempty
  Const a >*< Const b = Const (a <> b)

We can also define the Monoidal instance for IO:

instance Monoidal IO where
  unit = return ()
  ax >*< ay = do a <- ax
                 b <- ay
                 return (a, b)

This allows us to interpret the parser in the IO monad:

alg3 :: HAlgebra (FreeF Option) IO
alg3 (DoneF a) = return a
alg3 (MoreF (Day oa bs f)) = do
    putStrLn $ optName oa
    s <- getLine
    let ma = optReader oa s
        a = fromMaybe (fromJust (optDefault oa)) ma  -- from Data.Maybe
    fmap f $ return a >*< bs
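
A usage sketch: the interpreter prompts for each option in turn and produces a User:

main :: IO ()
main = hcata alg3 userP >>= print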

Cofree Comonad

Every construction in category theory has its dual—the result of reversing all the arrows. The dual of a product is a coproduct, the dual of an algebra is a coalgebra, and the dual of a monad is a comonad.

Let’s start by defining a higher order coalgebra consisting of a carrier f, which is a functor, and a natural transformation:

type HCoalgebra hf f = f :~> hf f

An initial algebra is dualized to a terminal coalgebra. In Haskell, both are the results of applying the same fixed point combinator, reflecting the fact that Lambek's lemma is self-dual. The dual to a catamorphism is an anamorphism. Here is its higher order version:

hana :: HFunctor hf 
     => HCoalgebra hf f -> (f :~> FixH hf)
hana hcoa = InH . hfmap (hana hcoa) . hcoa

The formula we used to generate free monoids:

1 + a \otimes x

dualizes to:

1 \times a \otimes x

and can be used to generate cofree comonoids.

A cofree functor is the right adjoint to the forgetful functor. Just as the left adjoint preserves coproducts, the right adjoint preserves products. One can therefore easily combine comonads using products (if the need arises to solve the coexpression problem).

Just like the monad is a monoid in the category of endofunctors, a comonad is a comonoid in the same category. The functor that generates a cofree comonad has the form:

type ComonadF f g = Identity :*: Compose f g

where the product of functors is defined as:

data (f :*: g) e = Both (f e) (g e)
infixr 6 :*:

Here’s the more familiar form of this functor:

data ComonadF f g e = e :< f (g e)

It is indeed a higher order functor, as witnessed by this instance:

instance Functor f => HFunctor (ComonadF f) where
  hfmap nat (e :< fge) = e :< fmap nat fge
  ffmap h (e :< fge) = h e :< fmap (fmap h) fge

A cofree comonad is the terminal coalgebra for this functor and can be written as a fixed point:

type Cofree f = FixH (ComonadF f)

Indeed, for any functor f, Cofree f is a comonad:

instance Functor f => Comonad (Cofree f) where
  extract (InH (e :< fge)) = e
  duplicate fr@(InH (e :< fge)) = 
                InH (fr :< fmap duplicate fge)
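
Here, extract and duplicate are the methods of the Comonad class (mirroring Control.Comonad):

class Functor w => Comonad w where
  extract   :: w a -> a
  duplicate :: w a -> w (w a)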

Cofree Comonad Example

The canonical example of a cofree comonad is an infinite stream:

type Stream = Cofree Identity

We can use this stream to sample a function. We’ll encapsulate this function inside the following functor (in fact, itself a comonad):

data Store a x = Store a (a -> x) 
    deriving Functor

We can use a higher order coalgebra to unpack the Store into a stream:

streamCoa :: HCoalgebra (ComonadF Identity) (Store Int)
streamCoa (Store n f) = 
    f n :< (Identity $ Store (n + 1) f)

The actual unpacking is a higher order anamorphism:

stream :: Store Int a -> Stream a
stream = hana streamCoa

We can use it, for instance, to generate a list of squares of natural numbers:

stream (Store 0 (^2))

Since, in Haskell, the same fixed point defines a terminal coalgebra as well as an initial algebra, we are free to construct algebras and catamorphisms for streams. Here’s an algebra that converts a stream to an infinite list:

listAlg :: HAlgebra (ComonadF Identity) []
listAlg (a :< Identity as) = a : as

toList :: Stream a -> [a]
toList = hcata listAlg
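
Composing the two gives a quick sanity check:

take 5 (toList (stream (Store 0 (^2))))

> [0,1,4,9,16]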

Future Directions

In this paper I concentrated on one type of higher order functor:

1 + a \otimes x

and its dual. This would be equivalent to studying folds for lists and unfolds for streams. But the structure of the functor category is richer than that. Just like basic data types can be combined into algebraic data types, so can functors. Moreover, besides the usual sums and products, the functor category admits at least two additional monoidal structures generated by functor composition and Day convolution.

Another potentially fruitful area of exploration is the profunctor category, which is also equipped with two monoidal structures, one defined by profunctor composition and another by Day convolution. A free monoid with respect to profunctor composition is the basis of the Haskell Arrow library [jaskelioff]. Profunctors also play an important role in the Haskell lens library [kmett].

Bibliography

  1. Erik Meijer, Maarten Fokkinga, and Ross Paterson, Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire
  2. Conor McBride and Ross Paterson, Idioms: Applicative Programming with Effects
  3. Paolo Capriotti and Ambrus Kaposi, Free Applicative Functors
  4. Wouter Swierstra, Data Types à la Carte
  5. Exequiel Rivas and Mauro Jaskelioff, Notions of Computation as Monoids
  6. Edward Kmett, Lenses, Folds and Traversals
  7. Richard Bird and Lambert Meertens, Nested Datatypes
  8. Patricia Johann and Neil Ghani, Initial Algebra Semantics is Enough!
  9. G. M. Kelly, A Unified Treatment of Transfinite Constructions for Free Algebras, Free Monoids, Colimits, Associated Sheaves, and so on

Part II: Free Monoids

String Diagrams

The utility of diagrams in formulating and proving theorems in category theory cannot be overemphasized. While working my way through the construction of free monoids, I noticed that there was a particular set of manipulations that had to be done algebraically, with little help from diagrams. These were operations involving a mix of tensor products and composition of morphisms. Tensor product is a bifunctor, so it preserves composition, which means you can often slide products through junctions of arrows—but the rules are not immediately obvious. Diagrams in which objects are nodes and morphisms are arrows have no immediate graphical representation for tensor products. A thought occurred to me that maybe a dual representation, where morphisms are nodes and objects are edges, would be more accommodating. And indeed, a quick search for “string diagrams in monoidal categories” produced a paper by Joyal and Street, “The geometry of tensor calculus.”

The idea is very simple: if you represent morphisms as points on a plane, you have two additional dimensions to play with composition and tensoring. Two morphisms—represented as points—can be composed if they share an object, which can be represented as a line connecting them. By convention, we read composition from the bottom of the diagram up. We follow lines as they go through points—that’s composition. Two lines ascending in parallel represent a tensor product. The geometry of the diagram just works!

Let me explain it on a simple example—the left unit law for a monoid (m, \pi, \mu):

\mu \circ (\pi \otimes m) = id

The left hand side is a composition of two morphisms. The first morphism \pi \otimes m starts from the object I \otimes m (see Fig. 12).

Fig. 12. Left unit law

In principle, there should be two parallel lines at the bottom, one for I and one for m; but I \otimes m is isomorphic to m, so the I line is redundant and can be omitted. Scanning the diagram from the bottom up, we encounter the morphism \pi in parallel with the m line. That’s exactly the graphical representation of \pi \otimes m. The output of \pi is also m, so we now have two upward moving m lines, corresponding to m \otimes m. That’s the input of the next morphism \mu. Its output is the single upward moving m. The unit law may be visualized as pulling the two m strings in opposite directions until the whole diagram is straightened to one vertical m string corresponding to id_m.

Here’s the right unit law:

\mu \circ (m \otimes \pi) = id

It works like a mirror image of the left unit law:

Fig. 13. Right unit law

The associativity law can be illustrated by the following diagram:

Fig. 14. Associativity law

The important property of a string diagram is that, because of functoriality, its value—the compound morphism it represents—doesn’t change under continuous transformations.

Monoid

First we’d like to show that the carrier of the free h-algebra (m, \sigma) which, as we’ve seen before, is also the initial algebra for the list functor I + h \otimes -, is automatically a monoid. To show that, we need to define its unit and multiplication—two morphisms that satisfy monoid laws. The obvious candidate for unit is the universal mapping \pi \colon I \to m. It’s the morphism in the definition of the free algebra from the previous post (see Fig 15).

Fig. 15. Free h-algebra (m, \sigma) generated by I

Multiplication is a morphism:

\mu \colon m \otimes m \to m

which, if you think of a free monoid as a list, is the generalization of list concatenation.

The trick is to show that m \otimes m is also a free h-algebra whose generator is m itself. We could then use the universality of m \otimes m to generate the unique algebra morphism from it to m (which is also an h-algebra). That will be our \mu.

Proposition (Monoid).

The free h-algebra (m, \sigma) generated by the unit object I is a monoid whose unit is:

\pi \colon I \to m

and whose multiplication:

\mu \colon m \otimes m \to m

is the unique h-algebra morphism

(m \otimes m, \sigma \otimes m) \to (m, \sigma)

induced by the identity morphism id_m.

Proof.
In the previous post we’ve shown that, if (m, \sigma) is a free algebra generated by the unit object I with the universal map \pi, then (m \otimes k, \sigma \otimes k) is a free algebra generated by k with the universal map \pi \otimes k (see Fig. 16).

Fig. 16. Free h-algebra generated by k \cong I \otimes k

We get \mu by redrawing this diagram: using m as both the generator and the target algebra, and replacing f with id_m (see Fig. 17):

Fig. 17. Monoid multiplication as an h-algebra morphism

Since \mu, so defined, is an h-algebra morphism, it makes the following diagram, Fig. 18, commute.

Fig. 18. \mu is an h-algebra morphism
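
In equational form, this commuting condition reads:

\mu \circ (\sigma \otimes m) = \sigma \circ (h \otimes \mu)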

This commuting condition can be redrawn as the identity of two string diagrams (Fig. 19) corresponding to the two paths through the original diagram.

Fig. 19. String diagram showing that \mu is an algebra morphism

The universal condition in Fig. 17:

\mu \circ (\pi \otimes m) = id_m

gives us immediately the left unit law for the monoid.

The right unit law:

\mu \circ (m \otimes \pi) = id_m

requires a little more work.

There is a standard trick that we can use to show that two morphisms, whose source (in this case m) is a free algebra, are equal. It’s enough to prove that they are algebra morphisms, and that they are both induced by the same morphism (in this case \pi). Their equality then follows from the uniqueness of the universal construction.

We know that \mu is an algebra morphism so, if we can show that m \otimes \pi is also an algebra morphism, their composition will be an algebra morphism too. Trivially, id_m is an algebra morphism so, if we can show that the two are induced by the same regular morphism \pi, then they must be equal.

To show that m \otimes \pi is an h-algebra morphism, we have to show that the diagram in Fig. 20 commutes.

Fig. 20. m \otimes \pi as an h-algebra morphism

We can redraw the two paths through Fig. 20 as two string diagrams in Fig. 21. They are equal because they can be deformed into each other.

Fig. 21. String diagram showing that m \otimes \pi is an algebra morphism

Therefore the composition \mu \circ (m \otimes \pi) is also an h-algebra morphism. The string diagram that illustrates this fact is shown in Fig. 22.

Fig. 22. String diagram showing that \mu \circ (m \otimes \pi) is an algebra morphism

Since the identity h-algebra morphism is induced by \pi, we’d like to show that \mu \circ (m \otimes \pi) is also induced by \pi (Fig. 23).

Fig. 23. Universal property of the free h-algebra generated by I, with the algebra morphism induced by \pi

To do that, we have to prove the universal condition in Fig. 23:

\mu \circ (m \otimes \pi) \circ \pi = \pi

This is represented as a string diagram identity in Fig. 24. We can deform this diagram by sliding the left \pi node up, past the right \pi node, and then using the left identity.

Fig. 24. Universal condition in Fig. 23.

This concludes the proof of the right identity.

The proof of associativity is very similar, so I’ll just sketch it. We have to show that the two diagrams in Fig. 14 are equal. We’ll use the same trick as before. We’ll show that they are both algebra morphisms. Their source is a free algebra generated by m \otimes m (see Fig. 25—the other diagram has \mu \circ (\mu \otimes m) replaced by \mu \circ (m \otimes \mu)). The universal condition follows from the unit law for m. Associativity condition:

\mu \circ (\mu \otimes m) = \mu \circ (m \otimes \mu)

will then follow from the uniqueness of the universal construction.

Fig. 25. One part of associativity as an h-algebra morphism

You can easily convince yourself that showing that something is an h-algebra morphism can be done by first attaching the h leg to the left of the string diagram and then sliding it to the top of the diagram, as illustrated in Fig. 26. This can be accomplished by repeatedly using the fact that \mu is an h-algebra morphism.

Fig. 26. String diagram showing that one of the associativity diagrams is an h-algebra morphism

The same process can be applied to the second associativity diagram, thus completing the proof.

\square

For Haskell programmers, recall from the previous post our construction of the free h-algebra generated by k and the derivation of the algebra morphism g from it to the internal-hom algebra:

g :: Expr -> (k -> n)
g () = f
g a  = \k -> nu (a, f k)
g (a, b) = \k -> nu (a, nu (b, f k))
...

In the current proof we have replaced k with m, which generalizes the list of hs, f became id, and \nu is a function that prepends an element to a list. In other words, g concatenates its list-argument in front of the second list, and it does it in the correct order.

Free Monoid

The monoid we have just constructed from the free algebra is a free monoid. As we did with free algebras, instead of using the free/forgetful adjunction to prove it, we’ll use the free-object universal construction.

Theorem (Free Monoid).

The monoid (m, \pi, \mu) is a free monoid generated by h, with a universal mapping given by u = \sigma \circ (h \otimes \pi).

That is, for any monoid (n, \eta, \nu) and any morphism s \colon h \to n, there is a unique monoid morphism t from (m, \pi, \mu) to (n, \eta, \nu) such that the universal condition holds:

t \circ u = s

(see Fig. 27).

Fig. 27. Free monoid diagram

Proof. Recall that (m, \sigma) is a free h-algebra generated by I. It turns out that any monoid (n, \eta, \nu), for which there is a morphism s \colon h \to n, is automatically a carrier of an h-algebra. We construct its structure map \lambda by combining s with the monoid multiplication \nu:

\lambda = \nu \circ (s \otimes n)

We can use n’s monoidal unit \eta to insert I into n. Because (m, \sigma) is a free h-algebra, there is a unique algebra morphism, let’s call it t, from it to (n, \lambda), which is induced by \eta, such that t \circ \pi = \eta (see Fig. 28). We want to show that this algebra morphism is also a monoid morphism. Furthermore, if we can show that this is the unique monoid morphism induced by s, we will have a proof that m is a free monoid.

Fig. 28. Algebra morphism between monoids

Since t is an algebra morphism, the rectangle in Fig. 29 commutes.

Fig. 29. t is an h-algebra morphism
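
In equational form, with \lambda = \nu \circ (s \otimes n), the rectangle says:

t \circ \sigma = \nu \circ (s \otimes t)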

Let’s redraw it as an identity of string diagrams, Fig. 30. We’ll make use of it in a moment.

Fig. 30. t is an h-algebra morphism

Going back to Fig. 27, we want to show that the universal condition holds, which means that we want the diagram in Fig. 31 to commute (I have expanded the definition of u).

Fig. 31. Free monoid universal condition

In other words, we want to show that the following two string diagrams are equal:

Fig. 32. Free monoid universal condition

Using the identity in Fig. 30, the left hand side can be rewritten as:

Fig. 33. Step 1 in transforming Fig 32

The right leg can be shrunk down to \eta using the universal condition in Fig. 28:

t \circ \pi = \eta

which, incidentally, also expresses the fact that t preserves the monoidal unit.

Finally, we can use the right unit law for the monoid n, Fig. 34,

Fig. 34. Right unit law for monoid n

to arrive at the right hand side of the identity in Fig. 32. This completes the proof of the universal condition in Fig. 27.

Now we have to show that t is a full-blown monoid morphism, that is, it preserves multiplication (Fig. 35).

Fig. 35. Preservation of multiplication

The corresponding string diagrams are shown in Fig. 36.

Fig. 36. Preservation of multiplication

Let’s start with the fact that m \otimes m is the free h-algebra generated by m. We will show that the two paths through the diagram in Fig. 35 are both h-algebra morphisms, and that they are induced by the same regular morphism t \colon m \to n. Therefore they must be equal.

The bottom path in Fig. 35, t \circ \mu, is an h-algebra morphism by virtue of being a composition of two h-algebra morphisms. This composite is induced by morphism t in the diagram Fig. 37.

Fig. 37. h-algebra morphism t \circ \mu

The universal condition in Fig. 37 follows from the diagram in Fig. 38, which follows from the left unit law for the monoid (m, \pi, \mu).

Fig. 38. Universal condition in Fig. 37

We want to show that the top path in Fig. 35 is also an h-algebra morphism, that is, the diagram in Fig. 39 commutes.

Fig. 39. h-algebra morphism diagram for \nu \circ (t \otimes t)

We can redraw this diagram as a string diagram identity in Fig. 40.

Fig. 40. h-algebra morphism diagram for \nu \circ (t \otimes t)

First, let’s use the associativity law for the monoid n to transform the left hand side. We get the diagram in Fig. 41.

Fig. 41. After applying associativity, we can apply Fig. 30

We can now apply the identity in Fig. 30 to reproduce the right hand side of Fig. 40.

We have thus shown that both paths in Fig. 35 are algebra morphisms. We know that the bottom path is induced by morphism t. What remains is to show that the top path, which is given by \nu \circ (t \otimes t) is induced by the same t. This will be true, if we can show the universal condition in Fig. 42.

Fig. 42. h-algebra morphism \nu \circ (t \otimes t)

This universal condition can be expanded to the diagram in Fig. 43.

Fig. 43. Universal condition in Fig. 42

Here’s the string diagram that traces the path around the square (Fig. 44).

Fig. 44. Path around Fig. 43 as a string diagram

First, let’s use the preservation of unit by t, Fig. 45, to shrink the left leg,

Fig. 45. Preservation of unit by t

and follow it with the left unit law for the monoid (n, \eta, \nu) (Fig. 46). The result is that Fig. 44 shrinks to the single morphism t, thus making Fig. 43 commute.

Fig. 46. Left unit law for the monoid n

This completes the proof that t is a monoid morphism.

The final step is to make sure that t is the unique monoid morphism from m to n. Suppose that there is another monoid morphism t' (replacing t in Fig. 28). If we can show that t' is also an h-algebra morphism induced by the same \eta, it will have to, by universality, be equal to t. In other words, we have to show that the diagram in Fig. 28 also works for t'. Our assumptions are that both t and t' are monoid morphisms, that is, they preserve unit and multiplication; and they both satisfy the universal condition in Fig. 27. In particular, t' satisfies the condition in Fig. 47.

Fig. 47. Free monoid universal condition for t' as a string diagram

Notice that, in the first part of the proof, we started with an h-algebra morphism t and had to show that it’s a monoid morphism. Now we are going in the opposite direction: we know that t' is a monoid morphism, and have to show that it’s an h-algebra morphism, and that the universal condition in Fig. 28

t' \circ \pi = \eta

holds. The latter simply restates our assumption that t' preserves the unit.

To show that t' is an algebra morphism, we have to show that the diagram in Fig. 48 commutes.

Fig. 48. t' as an h-algebra morphism

This diagram may be redrawn as a pair of string diagrams, Fig. 49.

Fig. 49. t' as an algebra morphism

The proof of this identity relies on redrawing string diagrams using the identities in Figs. 12, 19, 36, and 47. Before we continue, you might want to try it yourself. It’s an exercise well worth the effort.

We start by expanding the s node using the diagram in Fig. 47 to get Fig. 50.

Fig. 50. After expanding the left leg of the diagram in Fig. 49, we can apply preservation of multiplication by t'.

We can now use the preservation of multiplication by t' to obtain Fig. 51.

Fig. 51. Applying the fact that \mu is an h-algebra morphism

Next, we can use the fact that \mu is an h-algebra morphism, Fig. 19, to slide the \sigma node up, and obtain Fig. 52.

Fig. 52. Applying left unit law

We can now use the left unit law for the monoid m:

\mu \circ (\pi \otimes m) = id

as illustrated in Fig. 12, to arrive at the right hand side of Fig. 49.

This concludes the proof that t' must be equal to t.

\square

Conclusion

To summarize, we have shown that the free monoid can be constructed from a free algebra of the functor h \otimes -. This is a very general result that is valid in any monoidal closed category. Earlier we’ve seen that this free algebra is also the initial algebra of the list functor I + h \otimes -. The immediate consequence of this theorem is that it lets us construct free monoids in functor categories with interesting monoidal structures. In particular, we get a free monad as a free monoid in the category of endofunctors with functor composition as tensor product. We can also get a free applicative, or free lax monoidal functor, if we define the tensor product as Day convolution—the latter can be also constructed in the profunctor category.


Preface

In my previous blog post I used, without proof, the fact that the initial algebra of the functor I + h \otimes - is a free monoid. The proof of this statement is not at all trivial and, frankly, I would never have been able to work it out by myself. I was lucky enough to get help from a mathematician, Alex Campbell, who sent me the proof he extracted from the paper by G. M. Kelly (see Bibliography).

I worked my way through this proof, filling some of the steps that might have been obvious to a mathematician, but not to an outsider. I even learned how to draw diagrams using the TikZ package for LaTeX.

What I realized was that category theorists have developed a unique language to manipulate mathematical formulas: the language of 2-dimensional diagrams. For a programmer interested in languages this is a fascinating topic. We are used to working with grammars that essentially deal with string substitutions. Although diagrams can be serialized—TikZ lets you do it—you can’t really operate on diagrams without laying them out on a page. The most common operation—diagram pasting—involves combining two or more diagrams along common edges. I am amazed that, to my knowledge, there is no tool to mechanize this process.

In this post you’ll also see lots of examples of using the same diagram shape (the free-construction diagram, or the algebra-morphism diagram), substituting new expressions for some of the nodes and edges. Not all substitutions are valid and I’m sure one could develop some kind of type system to verify their correctness.

Because of proliferation of diagrams, this blog post started growing out of proportion, so I decided to split it into two parts. If you are serious about studying this proof, I strongly suggest you download (or even print out) the PDF version of this blog.

Part I: Free Algebras

Introduction

Here’s the broad idea: The initial algebra that defines a free monoid is a fixed point of the functor I + h \otimes -, which I will call the list functor. Roughly speaking, a fixed point is the result of repeatedly substituting the functor for the dash in its own definition. For instance, in the first iteration we would get:

I + h \otimes (I + h \otimes -) \cong I + h + h \otimes h \otimes -

I used the fact that I is the unit of the tensor product, the associativity of \otimes (all up to isomorphism), and the distributivity of tensor product over coproduct.

Continuing this process, we would arrive at an infinite sum of powers of h:

m = I + h + h \otimes h + h \otimes h \otimes h + ...

Intuitively, a free monoid is a list of hs, and this representation expresses the fact that a list is either trivial (corresponding to the unit I), or a single h, or a product of two hs, and so on…

Let’s have a look at the structure map of the initial algebra of the list functor:

I + h \otimes m \to m

Mapping out of a coproduct (sum) is equivalent to defining a pair of morphisms \langle \pi, \sigma \rangle:

\pi \colon I \to m

\sigma \colon h \otimes m \to m

the second of which may, in itself, be considered an algebra for the product functor h \otimes -.

Our goal is to show that the initial algebra of the list functor is a monoid so, in particular, it supports multiplication:

\mu \colon m \otimes m \to m

Following our intuition about lists, this multiplication corresponds to list concatenation. One way of concatenating two lists is to keep multiplying the second list by elements taken from the first list. This operation is described by the application of our product functor h \otimes -. Such repetitive application of a functor is described by a free algebra.

There is just one tricky part: when concatenating two lists, we have to disassemble the left list starting from the tail (alternatively, we could disassemble the right list from the head, but then we’d have to append elements to the tail of the left list, which is the same problem). And here’s the ingenious idea: you disassemble the left list from the head, but instead of applying the elements directly to the right list, you turn them into functions that prepend an element. In other words you convert a list of elements into a (reversed) list of functions. Then you apply this list of functions to the right list one by one.

This conversion is only possible if you can trade product for function — the process we know as currying. Currying is possible if there is an adjunction between the product and the exponential, a.k.a. the internal hom, [k, n] (which generalizes the set of functions from k to n):

C(m \otimes k, n) \cong C(m, [k, n])

We’ll assume that the underlying category C is monoidal closed, so that we can curry morphisms that map out from the tensor product:

g \colon m \otimes k \to n

\bar{g} \colon m \to [k, n]

(In what follows I’ll be using the overbar to denote the curried version of a morphism.)

The internal hom can also be defined using a universal construction, see Fig. 1. The morphism eval corresponds to the counit of the adjunction (although the universal construction is more general than the adjunction).

Fig. 1. Universal construction of the internal hom [k, n]. For any object m and a morphism g \colon m \otimes k \to n there is a unique morphism \bar{g} (the curried version of g) which makes the triangle commute.

The hard part of the proof is to show that the initial algebra produces a free monoid, which is a free object in the category of monoids. I’ll start by defining the notion of a free object.

Free Objects

You might be familiar with the definition of a free construction as the left adjoint to the forgetful functor. Fig 2 illustrates the essence of such an adjunction.

Fig. 2. Free/forgetful adjunction

The left hand side is in some category D of structured objects: algebras, monoids, monads, etc. The right hand side is in the underlying category C, often the category of sets. The adjunction establishes a one-to-one correspondence between sets of morphisms, of which g and f are examples. If U is the forgetful functor, then F is the free functor, and the object F x is called the free object generated by x. The adjunction is an isomorphism of hom-sets, natural in both x and z:

D(F x, z) \cong C(x, U z)

Unfortunately, this description doesn’t quite work for some important cases, like free monads. In the case of free monads, the right category is the category of endofunctors, and the left category is the category of monads. Because of size issues, not every endofunctor on the right generates a free monad on the left.

It turns out that there is a weaker definition of a free object that doesn’t require a full blown adjunction; and which reduces to it, when the adjunction can be defined globally.

Let’s start with the object x on the right, and try to define the corresponding free object F x on the left (by abuse of notation I will call this object F x, even if there is no functor F). For our definition, we need a condition that would work universally for any object z, and any morphism f from x to U z.

We are still missing one important ingredient: something that would tell us that x acts as a set of generators for F x. This property can be expressed by providing a morphism that inserts x into U (F x)—the object underlying the free object. In the case of an adjunction, this morphism happens to be the component of the unit natural transformation:

\eta \colon Id \to U \circ F

where Id is the identity functor (see Fig. 3).

Fig. 3. Unit of adjunction

The important property of the unit of adjunction \eta is that it can be used to recover the mapping from the left hom-set to the right hom-set in Fig. 2. Take a morphism g \colon F x \to z, lift it using U, and compose it with \eta_x. You get a morphism f \colon x \to U z:

f = U g \circ \eta_x

In the absence of an adjunction, we’ll make the existence of the morphism \eta_x part of the definition of the free object.

Definition (Free object).
A free object on x consists of an object F x and a morphism \eta_x \colon x \to U (F x) such that, for every object z and a morphism f \colon x \to U z, there is a unique morphism g \colon F x \to z such that:

U g \circ \eta_x = f

The diagram in Fig. 4 illustrates this definition. It is essentially a composition of the two previous diagrams, except that we don’t require the existence of a global mapping, let alone a functor, F.

Fig. 4. Definition of a free object F x

The morphism \eta_x is called the universal mapping, because it’s independent of z and f. The equation:

U g \circ \eta_x = f

is called the universal condition, because it works universally for every z and f. We say that g is induced by the morphism f.

There is a standard trick that uses the universal condition: Suppose you have two morphisms g and g' from the universal object to some z. To prove that they are equal, it’s enough to show that they are both induced by the same f. Showing the equality:

U g \circ \eta_x = U g' \circ \eta_x

is often easier because it lets you work in the simpler, underlying category.

Free Algebras

As I mentioned in the introduction, we are interested in algebras for the product functor: h \otimes -, which we’ll call h-algebras. Such an algebra consists of a carrier n and a structure map:

\nu \colon h \otimes n \to n

For every h, h-algebras form a category; and there is a forgetful functor U that maps an h-algebra to its carrier object n. We can therefore define a free algebra as a free object in the category of h-algebras, which may or may not be generalizable to a full-blown free/forgetful adjunction. Fig. 5 shows the universal condition that defines such a free algebra (m_k, \sigma) generated by an object k.

Fig. 5. Free h-algebra (m_k, \sigma) generated by k

In particular, we can define an important free h-algebra generated by the identity object I. This algebra (m, \sigma) has the structure map:

\sigma \colon h \otimes m \to m

and is described by the diagram in Fig. 6:

Fig. 6. Free h-algebra (m, \sigma) generated by I

Its universal condition reads:

g \circ \pi = f

By definition, since g is an algebra morphism, it makes the diagram in Fig. 7 commute:

Fig. 7. g is an h-algebra morphism

We use a notational convenience: h \otimes g is the lifting of the morphism g by the product functor h \otimes -. This might be confusing at first, because it looks like we are multiplying an object h by a morphism g. One way to parse it is to consider that, if we keep the first argument to the tensor product constant, then it’s a functor in the second component, symbolically h \otimes -. Since it’s a functor, we can use it to lift a morphism g, which can be notated as h \otimes g.

Alternatively, we can exploit the fact that tensor product is a bifunctor, and therefore it may lift a pair of morphism, as in id_h \otimes g; and h \otimes g is just a shorthand notation for this.

Bifunctoriality also means that the tensor product preserves composition and identity in both arguments. We’ll use these facts extensively later, especially as the basis for string diagrams.

The important property of m is that it also happens to be the initial algebra for the list functor I + h \otimes -. Indeed, for any other algebra with the carrier n and the structure map a pair \langle f, \nu \rangle, there exists a unique g given by Fig. 6, such that the diagram in Fig. 8 commutes:

Fig. 8. Initiality of the algebra (m, \langle \pi, \sigma \rangle) for the functor I + h \otimes -.

Here, inl and inr are the two injections into the coproduct.

If you visualize m as the sum of all powers of h, \pi inserts the unit I (zeroth power) into it, and \sigma inserts the sum of non-zero powers.

\langle \pi, \sigma \rangle \colon I + h \otimes m \to m

The advantage of this result is that we can concentrate on studying the simpler problem of free h-algebras rather than the problem of initial algebras for the more complex list functor.

Example

Here’s some useful intuition behind h-algebras. Think of the functor (h \otimes -) as a means of forming formal expressions. These are very simple expressions: you take a variable and pair it (using the tensor product) with h. To define an algebra for this functor you pick a particular type n for your variable (i.e., the carrier object) and a recipe for evaluating any expression of type h \otimes n (i.e., the structure map).

Let’s try a simple example in Haskell. We’ll use pairs (and tuples) as our tensor product, with the empty tuple () as unit (up to isomorphism). An algebra for a functor f is defined as:

type Algebra f a = f a -> a

Consider h-algebras for a particular choice of h: the type of real numbers Double. In other words, these are algebras for the functor ((,) Double). Pick, as your carrier, the type of vectors:

data Vector = Vector { x :: Double
                     , y :: Double 
                     } deriving Show

and the structure map that scales a vector by multiplying it by a number:

vecAlg :: Algebra ((,) Double) Vector
vecAlg (a, v) = Vector { x = a * x v
                       , y = a * y v }

Define another algebra whose carrier is the set of real numbers and whose structure map is multiplication by a number.

mulAlg :: Algebra ((,) Double) Double
mulAlg (a, x) = a * x

There is an algebra morphism from vecAlg to mulAlg, which takes a vector and maps it to its length.

algMorph :: Vector -> Double
algMorph v = sqrt((x v)^2 + (y v)^2)

This is an algebra morphism, because it doesn’t matter whether you first calculate the length of a vector and then multiply it by a, or first multiply the vector by a and then calculate its length (modulo floating-point rounding; strictly speaking, this holds for non-negative a, since the length discards the sign of the scaling factor).
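
The morphism condition can be written as a testable property (a sketch; it holds up to floating-point error, and only for non-negative scalars, as noted above):

-- algMorph (vecAlg (a, v)) == mulAlg (a, algMorph v), for a >= 0
morphismLaw :: Double -> Vector -> Bool
morphismLaw a v =
  abs (algMorph (vecAlg (a, v)) - mulAlg (a, algMorph v)) < 1e-9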

A free algebra has, as its carrier, the set of all possible expressions, and an evaluator that tells you how to convert a functor-ful of such expressions to another valid expression. A good analogy is to think of the functor as defining a grammar (as in BNF grammar) and a free algebra as a language generated by this grammar.

You can generate free h-expressions recursively. The starting point is the set of generators k as the source of variables. Applying the functor to it produces h \otimes k. Applying it again, produces h \otimes h \otimes k, and so on. The whole set of expressions is the infinite coproduct (sum):

k + h \otimes k + h \otimes h \otimes k + ...

An element of this coproduct is either an element of k, or an element of the product h \otimes k, and so on…

The universal mapping injects the set of generators k into the set of expressions (here, it would be the leftmost injection into the coproduct).

In the special case of an algebra generated by the unit I, the free algebra simplifies to the power series:

I + h + h \otimes h + ...

and \pi injects I into it.

Continuing with our example, let’s consider the free algebra for the functor ((,) Double) generated by the unit (). The free algebra is, symbolically, an infinite sum (coproduct) of tuples:

data Expr =
    () 
  | Double 
  | (Double, Double) 
  | (Double, Double, Double) 
  | ...

Here’s an example of an expression:

e = (1.1, 2.2, 3.3)

As you can see, free expressions are just lists of numbers. There is a function that inserts the unit into the set of expressions:

pi :: () -> Expr
pi () = ()

The free evaluator is a function (not actual Haskell):

sigma (a, ()) = a
sigma (a, x)  = (a, x)
sigma (a, (x, y)) = (a, x, y)
sigma (a, (x, y, z)) = (a, x, y, z)
...
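
These symbolic definitions can be approximated in real Haskell by collapsing the coproduct of tuples into a list (a hypothetical encoding with fresh names ExprL, piL, sigmaL, not from the original text):

newtype ExprL = ExprL [Double] deriving Show

piL :: () -> ExprL
piL () = ExprL []

sigmaL :: Algebra ((,) Double) ExprL
sigmaL (a, ExprL as) = ExprL (a : as)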

Let’s see how the universal property works in our example. Let’s try, as the target (n, \nu), our earlier vector algebra:

vecAlg :: Algebra ((,) Double) Vector
vecAlg (a, v) = Vector { x = a * x v
                       , y = a * y v }

The morphism f picks some concrete vector, for instance:

f :: () -> Vector
f () = Vector 3 4

There should be a unique morphism of algebras g that takes an arbitrary expression (a list of Doubles) to a vector, such that g . pi = f picks our special vector:

Vector 3 4

In other words, g must take the empty tuple (the result of pi) to Vector 3 4. The question is, how is g defined for an arbitrary expression? Look at the diagram in Fig. 7 and the commuting condition it implies:

g \circ \sigma = \nu \circ (h \otimes g)

Let’s apply it to an expression (a, ()) (the action of the functor ((,) h) on ()). Applying sigma to it produces a, followed by the action of g resulting in g a. This should be the same as first applying the lifted (id, g) to (a, ()), which gives us (a, Vector 3 4); followed by vecAlg, which produces Vector (a * 3) (a * 4). Together, we get:

g a = Vector (a * 3) (a * 4)

Repeating this process gives us:

g :: Expr -> Vector
g () = Vector 3 4
g a  = Vector (a * 3) (a * 4)
g (a, b) = Vector (a * b * 3) (a * b * 4)
...

This is the unique g induced by our f.

Properties of Free Algebras

Here are some interesting properties that will help us later: h-algebras are closed under multiplication and exponentiation. If (n, \nu) is an h-algebra, then there are also h-algebras with the carriers n \otimes k and [k, n], for an arbitrary object k. Let’s find their structure maps.

The first one should be a mapping:

h \otimes n \otimes k \to n \otimes k

which we can simply take as the lifting of \nu by the tensor product: \nu \otimes k.

The second one is:

\tau_{k} \colon h \otimes [k, n] \to [k, n]

which can be uncurried to:

h \otimes [k, n] \otimes k \to n

We have at our disposal the counit of the hom-adjunction:

eval \colon [k, n] \otimes k \to n

which we can use in combination with \nu:

\nu \circ (h \otimes eval)

to implement the (uncurried) structure map.

Here’s the same construction for Haskell programmers. Given an algebra:

nu :: Algebra ((,) h) n

we can create two algebras:

alpha :: Algebra ((,) h) (n, k)
alpha (a, (n, k)) = (nu (a, n), k)

and:

tau :: Algebra ((,) h) (k -> n)
tau (a, kton) = \k -> nu (a, kton k)

These two algebras are related through an adjunction in the category of h-algebras:

Alg\big((n \otimes k, \nu \otimes k), (l, \lambda)\big) \cong Alg\big((n, \nu), ([k, l], \tau_{k})\big)

which follows directly from hom-adjunction acting on carriers.

C(n \otimes k, l) \cong C(n, [k, l])

Finally, we can show how to generate free algebras from arbitrary generators. Intuitively, this is obvious, since it can be described as the application of distributivity of tensor product over coproduct:

k + h \otimes k + h \otimes h \otimes k + ... =  (I + h + h \otimes h + ...) \otimes k

More formally, we have:

Proposition.
If (m, \sigma) is the free h-algebra generated by I, then (m \otimes k, \sigma \otimes k) is the free h-algebra generated by k, with the universal map given by \pi \otimes k.

Proof.
Let’s show the universal property of (m \otimes k, \sigma \otimes k). Take any h-algebra (n, \nu) and a morphism:

f \colon k \to n

We want to show that there is a unique g \colon m \otimes k \to n that closes the diagram in Fig. 9.

Fig. 9. Free h-algebra generated by k \cong I \otimes k

I have rewritten f, taking advantage of the isomorphism I \otimes k \cong k, as:

f \colon I \otimes k \to n

We can curry it to get:

\bar{f} \colon I \to [k, n]

The intuition is that the morphism \bar{f} selects an element of the internal hom [k, n] that corresponds to the original morphism f \colon k \to n.

We’ve seen earlier that there exists an h-algebra with the carrier [k, n] and the structure map \tau_k. But since m is the free h-algebra, there must be a unique algebra morphism \bar{g} (see Fig. 10):

\bar{g} \colon (m, \sigma) \to ([k, n], \tau_k)

such that:

\bar{g} \circ \pi = \bar{f}

Fig. 10. The construction of the unique morphism \bar{g}

Uncurrying this \bar{g} gives us the sought-after g.

The universal condition in Fig. 9:

g \circ (\pi \otimes k) = f

follows from pasting together the two diagrams that define the relevant hom-adjunctions in Fig. 11 (cf. Fig. 1).

Fig. 11. The diagram defining the currying of both g and g \circ (\pi \otimes k). This is the pasting together of two diagrams that define the universal property of the internal hom [k, n], one for the object I and one for the object m.


\square

The immediate consequence of this proposition is that, in order to show that two h-algebra morphisms g, g' \colon m \otimes k \to n are equal, it’s enough to show the equality of two regular morphisms:

g \circ (\pi \otimes k) = g' \circ (\pi \otimes k) \colon k \to n

(modulo isomorphism between k and I \otimes k). We’ll use this property later.

It’s instructive to pick apart this proof in the simpler Haskell setting. We have the target algebra:

nu :: Algebra ((,) h) n

There is a related algebra [k, n] of the form:

tau :: Algebra ((,) h) (k -> n)
tau (a, kton) = \k -> nu (a, kton k)

We’ve analyzed, previously, a version of the universal construction of g, which we can now generalize to Fig. 10. We can build up the definition of \bar{g}, starting with the condition \bar{g} \circ \pi = \bar{f}. Here, \bar{f} selects a function from the hom-set: this function is our f. That gives us the action of g on the unit:

g :: Expr -> (k -> n)
g () = f

Next, we make use of the fact that \bar{g} is an algebra morphism that satisfies the commuting condition:

\bar{g} \circ \sigma = \tau \circ (h \otimes \bar{g})

As before, we apply this equation to the expression (a, ()). The left-hand side produces g a, while the right-hand side produces tau (a, f). Next, we apply the same equation to (a, (b, ())). The left-hand side produces g (a, b); the right-hand side produces tau (a, tau (b, f)), and so on. Applying the definition of tau, we get:

g :: Expr -> (k -> n)
g () = f
g a  = \k -> nu (a, f k)
g (a, b) = \k -> nu (a, nu (b, f k))
...

Notice the order reversal in function application. The list that is the argument to g is converted to a series of applications of nu, with list elements in reverse order. We first apply nu (b, -) and then nu (a, -). This reversal is crucial to implementing list concatenation, where nu will prepend elements of the first list to the second list. We’ll see this in the second installment of this blog post.
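
If we encode the free carrier as a Haskell list once more, the whole construction, order reversal included, is again a right fold (a sketch; gList is my name):

-- g turns a list of hs into a transformation k -> n,
-- consuming the list elements right to left
gList :: Algebra ((,) h) n -> (k -> n) -> [h] -> (k -> n)
gList nu f as = \k -> foldr (curry nu) (f k) as

For example, gList nu f [a, b] reduces to \k -> nu (a, nu (b, f k)), exactly as derived above.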


Abstract: Functors from a monoidal category C to Set form a monoidal category with Day convolution as product. A monoid in this category is a lax monoidal functor. We define an initial algebra using a higher order functor and show that it corresponds to a free lax monoidal functor.

Recently I’ve been obsessing over monoidal functors. I have already written two blog posts, one about free monoidal functors and one about free monoidal profunctors. I followed some ideas from category theory but, being a programmer, I leaned more towards writing code than being preoccupied with mathematical rigor. That left me longing for more elegant proofs of the kind I’ve seen in mathematical literature.

I believe that there isn’t that much difference between programming and math. There is a whole spectrum of abstractions, ranging from assembly language through weakly typed languages, strongly typed languages, functional programming, set theory, type theory, and category theory, up to homotopy type theory. Each language comes with its own bag of tricks. Even within one language one starts with relatively low-level encodings and, with experience, progresses towards higher abstractions. I’ve seen it in Haskell, where I started by hand-coding recursive functions, only to realize that I could be more productive using bulk operations on types, then building recursive data structures and applying recursive schemes, eventually diving into categories of functors and profunctors.

I’ve been collecting my own bag of mathematical tricks, mostly by reading papers and, more recently, talking to mathematicians. I’ve found that mathematicians are happy to share their knowledge even with outsiders like me. So when I got stuck trying to clean up my monoidal functor code, I reached out to Emily Riehl, who forwarded my query to Alexander Campbell from the Centre for Australian Category Theory. Alex’s answer was a very elegant proof of what I was clumsily trying to show in my previous posts. In this blog post I will explain his approach. I should also mention that most of the results presented in this post have already been covered in a comprehensive paper by Rivas and Jaskelioff, Notions of Computation as Monoids.

Lax Monoidal Functors

To properly state the problem, I’ll have to start with a lot of preliminaries. This will require some prior knowledge of category theory, all within the scope of my blog/book.

We start with a monoidal category C, that is a category in which you can “multiply” objects using some kind of a tensor product \otimes. For any pair of objects a and b there is an object a \otimes b; and this mapping is functorial in both arguments (that is, you can also “multiply” morphisms). A monoidal category will also have a special object I that is the unit of multiplication. In general, the unit and associativity laws are satisfied up to isomorphism:

\lambda : I \otimes a \cong a

\rho : a \otimes I \cong a

\alpha : (a \otimes b) \otimes c \cong a \otimes (b \otimes c)

These isomorphisms are called, respectively, the left and right unitors, and the associator.

The most familiar example of a monoidal category is the category of types and functions, in which the tensor product is the cartesian product (pair type) and the unit is the unit type ().

Let’s now consider functors from C to the category of sets, Set. These functors also form a category called [C, Set], in which morphisms between any two functors are natural transformations.

In Haskell, a natural transformation is approximated by a polymorphic function:

type f ~> g = forall x. f x -> g x

The category Set is monoidal, with cartesian product \times serving as a tensor product, and the singleton set 1 as the unit.

We are interested in functors in [C, Set] that preserve the monoidal structure. Such a functor should map the tensor product in C to the cartesian product in Set and the unit I to the singleton set 1. Accordingly, a strong monoidal functor F comes with two isomorphisms:

F a \times F b \cong F (a \otimes b)

1 \cong F I

We are interested in a weaker version of a monoidal functor, called a lax monoidal functor, which is equipped with a one-way natural transformation:

\mu : F a \times F b \to F (a \otimes b)

and a one-way morphism:

\eta : 1 \to F I

A lax monoidal functor must also preserve unit and associativity laws.


Associativity law: \alpha is the associator in the appropriate category (top arrow, in Set; bottom arrow, in C).

In Haskell, a lax monoidal functor can be defined as:

class Monoidal f where
  eta :: () -> f ()
  mu  :: (f a, f b) -> f (a, b)

In Hask, this class is equivalent to the familiar applicative functor.
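
Here is one direction of that equivalence, as a minimal sketch assuming a Functor constraint (pureM and apM are my names):

-- Recovering the Applicative operations from Monoidal
pureM :: (Functor f, Monoidal f) => a -> f a
pureM a = fmap (const a) (eta ())

apM :: (Functor f, Monoidal f) => f (a -> b) -> f a -> f b
apM ff fa = fmap (\(g, a) -> g a) (mu (ff, fa))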

Day Convolution and Monoidal Functors

It turns out that our category of functors [C, Set] is also equipped with monoidal structure. Two functors F and G can be “multiplied” using Day convolution:

(F \star G) c = \int^{a b} C(a \otimes b, c) \times F a \times G b

Here, C(a \otimes b, c) is the hom-set, or the set of morphisms from a \otimes b to c. The integral sign stands for a coend, which can be interpreted as a generalization of an (infinite) coproduct (modulo some identifications). An element of this coend can be constructed by injecting a triple consisting of a morphism from C(a \otimes b, c), an element of the set F a, and an element of the set G b, for some a and b.

In Haskell, a coend corresponds to an existential type, so the Day convolution can be defined as:

data Day f g c where
  Day :: ((a, b) -> c, f a, g b) -> Day f g c

(The actual definition uses currying.)
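
For reference, the curried form looks roughly like this (modeled after the Day type in the kan-extensions package; treat the exact field order as an assumption):

data Day' f g c where
  Day' :: f a -> g b -> (a -> b -> c) -> Day' f g c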

The unit with respect to Day convolution is the hom-functor:

C(I, -)

which assigns to every object c the set of morphisms C(I, c) and acts on morphisms by post-composition.

The proof that this is the unit is instructive, as it uses the standard trick: the co-Yoneda lemma. In the coend form, the co-Yoneda lemma reads, for a covariant functor F:

\int^x C(x, a) \times F x \cong F a

and for a contravariant functor H:

\int^x C(a, x) \times H x \cong H a

(The mnemonic is that the integration variable must appear twice, once in the negative, and once in the positive position. An argument to a contravariant functor is in a negative position.)

Indeed, substituting C(I, -) for the first functor in Day convolution produces:

(C(I, -) \star G) c = \int^{a b} C(a \otimes b, c) \times C(I, a) \times G b

which can be “integrated” over a using the co-Yoneda lemma to yield:

\int^{b} C(I \otimes b, c) \times G b

and, since I is the unit of the tensor product, this can be further “integrated” over b to give G c. The right unit law is analogous.
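
We can transcribe one direction of this left unit isomorphism into Haskell, representing the hom-functor C(I, -) by the type () -> a (a sketch; HomUnit and elimLeftUnit are my names):

-- The representable functor C(I, -) in Hask
newtype HomUnit a = HomUnit (() -> a)

-- Left unit elimination, following the co-Yoneda steps above
elimLeftUnit :: Functor g => Day HomUnit g c -> g c
elimLeftUnit (Day (h, HomUnit f, gb)) = fmap (\b -> h (f (), b)) gb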

To summarize, we are dealing with three monoidal categories: C with the tensor product \otimes and unit I, Set with the cartesian product and singleton 1, and a functor category [C, Set] with Day convolution and unit C(I, -).

A Monoid in [C, Set]

A monoidal category can be used to define monoids. A monoid is an object m equipped with two morphisms — unit and multiplication:

\eta : I \to m

\mu : m \otimes m \to m


These morphisms must satisfy unit and associativity conditions, which are best illustrated using commuting diagrams.


Unit laws. λ and ρ are the unitors.


Associativity law: α is the associator.

This definition of a monoid can be translated directly to Haskell:

class Monoid m where
  eta :: () -> m
  mu  :: (m, m) -> m

It so happens that a lax monoidal functor is exactly a monoid in our functor category [C, Set]. Since objects in this category are functors, a monoid is a functor F equipped with two natural transformations:

\eta : C(I, -) \to F

\mu : F \star F \to F

At first sight, these don’t look like the morphisms in the definition of a lax monoidal functor. We need some new tricks to show the equivalence.

Let’s start with the unit. The first trick is to consider not one natural transformation but the whole hom-set:

[C, Set](C(I, -), F)

The set of natural transformations can be represented as an end (which, incidentally, corresponds to the forall quantifier in the Haskell definition of natural transformations):

\int_c Set(C(I, c), F c)

The next trick is to use the Yoneda lemma which, in the end form, reads:

\int_c Set(C(a, c), F c) \cong F a

In more familiar terms, this formula asserts that the set of natural transformations from the hom-functor C(a, -) to F is isomorphic to F a.

There is also a version of the Yoneda lemma for contravariant functors:

\int_c Set(C(c, a), H c) \cong H a

The application of Yoneda to our formula produces F I, which is in one-to-one correspondence with morphisms 1 \to F I.

We can use the same trick of bundling up the natural transformations that define the multiplication \mu:

[C, Set](F \star F, F)

and representing this set as an end over the hom-functor:

\int_c Set((F \star F) c, F c)

Expanding the definition of Day convolution, we get:

\int_c Set(\int^{a b} C(a \otimes b, c) \times F a \times F b, F c)

The next trick is to pull the coend out of the hom-set. This trick relies on the co-continuity of the hom-functor in the first argument: a hom-functor from a colimit is isomorphic to a limit of hom-functors. In programmer-speak: a function from a sum type is equivalent to a product of functions (we call it case analysis). A coend is a generalized colimit, so when we pull it out of a hom-functor, it turns into a limit, or an end. Here’s the general formula, in which p x y is an arbitrary profunctor:

Set(\int^x p x x, y) \cong \int_x Set(p x x, y)
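
The finite, programmer-level instance of this formula is plain case analysis (a minimal illustration, with the coproduct specialized to Either):

-- A function out of a sum is the same as a product of functions
fromSum :: (Either a b -> c) -> (a -> c, b -> c)
fromSum f = (f . Left, f . Right)

toSum :: (a -> c, b -> c) -> (Either a b -> c)
toSum (f, g) = either f g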

Let’s apply it to our formula:

\int_c \int_{a b} Set(C(a \otimes b, c) \times F a \times F b, F c)

We can combine the ends under one integral sign (it’s allowed by the Fubini theorem) and move to the next trick: hom-set adjunction:

Set(a \times b, c) \cong Set(a, b \to c)

In programming this is known as currying. This adjunction exists because Set is a cartesian closed category. We’ll use this adjunction to move F a \times F b to the right:

\int_{a b c} Set(C(a \otimes b, c), (F a \times F b) \to F c)

Using the Yoneda lemma we can “perform the integration” over c to get:

\int_{a b} ((F a \times F b) \to F (a \otimes b))

This is exactly the set of natural transformations used in the definition of a lax monoidal functor. We have established one-to-one correspondence between monoidal multiplication and lax monoidal mapping.
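
The correspondence can be witnessed directly in Haskell, using the Day type defined earlier (a sketch; toMu and fromMu are my names, and RankNTypes is assumed):

-- From a natural transformation out of Day convolution to mu...
toMu :: (forall c. Day f f c -> f c) -> (f a, f b) -> f (a, b)
toMu alpha (fa, fb) = alpha (Day (id, fa, fb))

-- ...and back, absorbing the combining function with fmap
fromMu :: Functor f
       => (forall a b. (f a, f b) -> f (a, b))
       -> Day f f c -> f c
fromMu m (Day (h, fa, fb)) = fmap h (m (fa, fb))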

Of course, a complete proof would require translating monoid laws to their lax monoidal counterparts. You can find more details in Rivas and Jaskelioff, Notions of Computation as Monoids.

We’ll use the fact that a monoid in the category [C, Set] is a lax monoidal functor later.

Alternative Derivation

Incidentally, there are shorter derivations of these formulas that use the trick borrowed from the proof of the Yoneda lemma, namely, evaluating things at the identity morphism. (Whenever mathematicians speak of Yoneda-like arguments, this is what they mean.)

Starting from F \star F \to F and plugging in the Day convolution formula, we get:

\int^{a' b'} C(a' \otimes b', c) \times F a' \times F b' \to F c

There is a component of this natural transformation at (a \otimes b) that is the morphism:

\int^{a' b'} C(a' \otimes b', a \otimes b) \times F a' \times F b' \to F (a \otimes b)

This morphism must be defined for all possible values of the coend. In particular, it must be defined for triples of the form (id_{a \otimes b}, x, y), with x an element of F a and y an element of F b, giving us the \mu we seek.

There is also an alternative derivation for the unit: Take the component of the natural transformation \eta at I:

\eta_I : C(I, I) \to F I

C(I, I) is guaranteed to contain at least one element, the identity morphism id_I. We can therefore define the unit morphism 1 \to F I by sending the single element of the singleton to \eta_I \, id_I.

Free Monoid

Given a monoidal category C, we might be able to define a whole lot of monoids in it. These monoids form a category Mon(C). Morphisms in this category correspond to those morphisms in C that preserve monoidal structure.

Consider, for instance, two monoids m and m'. A monoid morphism is a morphism f : m \to m' in C such that the unit of m' is related to the unit of m:

\eta' = f \circ \eta


and similarly for multiplication:

\mu' \circ (f \otimes f) = f \circ \mu

Remember, we assumed that the tensor product is functorial in both arguments, so it can be used to lift a pair of morphisms.


There is an obvious forgetful functor U from Mon(C) to C which, for every monoid, picks its underlying object in C and maps every monoid morphism to its underlying morphism in C.

The left adjoint to this functor, if it exists, will map an object a in C to a free monoid L a.

The intuition is that a free monoid L a is a list of a.

In Haskell, a list is defined recursively:

data List a = Nil | Cons a (List a)

Such a recursive definition can be formalized as a fixed point of a functor. For a list of a, this functor is:

data ListF a x = NilF | ConsF a x

Notice the peculiar structure of this functor. It’s a sum type: The first part is a singleton, which is isomorphic to the unit type (). The second part is a product of a and x. Since the unit type is the unit of the product in our monoidal category of types, we can rewrite this functor symbolically as:

\Phi a x = I + a \otimes x

It turns out that this formula works in any monoidal category that has finite coproducts (sums) that are preserved by the tensor product. The fixed point of this functor, taken for each a separately, is the free monoid on a; the assignment a \mapsto L a is the free functor that generates free monoids.

I’ll define what is meant by the fixed point and prove that it defines a monoid. The proof that it’s the result of a free/forgetful adjunction is a bit involved, so I’ll leave it for a future blog post.

Algebras

Let’s consider algebras for the functor F. Such an algebra is defined as an object x called the carrier, and a morphism:

f : F x \to x

called the structure map or the evaluator.

In Haskell, an algebra is defined as:

type Algebra f x = f x -> x

There may be a lot of algebras for a given functor. In fact there is a whole category of them. We define an algebra morphism between two algebras (x, f : F x \to x) and (x', f' : F x' \to x') as a morphism \nu : x \to x' which commutes with the two structure maps:

\nu \circ f = f' \circ F \nu


The initial object in the category of algebras is called the initial algebra, or the fixed point of the functor that generates these algebras. As the initial object, it has a unique algebra morphism to any other algebra. This unique morphism is called a catamorphism.

In Haskell, the fixed point of a functor f is defined recursively:

newtype Fix f = In { out :: f (Fix f) }

with, for instance:

type List a = Fix (ListF a)

A catamorphism is defined as:

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

A list catamorphism is called foldr.

We want to show that the initial algebra L a of the functor:

\Phi a x = I + a \otimes x

is a free monoid. Let’s see under what conditions it is a monoid.

Initial Algebra is a Monoid

In this section I will show you how to concatenate lists the hard way.

We know that the function type b \to c (a.k.a. the exponential c^b) is the right adjoint to the product:

Set(a \times b, c) \cong Set(a, b \to c)

The function type is also called the internal hom.

In a monoidal category it’s sometimes possible to define an internal hom-object, denoted [b, c], as the right adjoint to the tensor product:

curry : C(a \otimes b, c) \cong C(a, [b, c])

If this adjoint exists, the category is called closed monoidal.

In a closed monoidal category, the initial algebra L a of the functor \Phi a x = I + a \otimes x is a monoid. (In particular, a Haskell list of a, which is a fixed point of ListF a, is a monoid.)

To show that, we have to construct two morphisms corresponding to unit and multiplication (in Haskell, empty list and concatenation):

\eta : I \to L a

\mu : L a \otimes L a \to L a

What we know is that L a is a carrier of the initial algebra for \Phi a, so it is equipped with the structure map:

I + a \otimes L a \to L a

which is equivalent to a pair of morphisms:

\alpha : I \to L a

\beta : a \otimes L a \to L a

Notice that, in Haskell, these correspond to the two list constructors, Nil and Cons, or, in terms of the fixed point:

nil :: () -> List a
nil () = In NilF

cons :: a -> List a -> List a
cons a as = In (ConsF a as)

We can immediately use \alpha to implement \eta.

The second one, \beta, can be rewritten using the hom-adjunction as:

\bar{\beta} = curry \, \beta

\bar{\beta} : a \to [L a, L a]

Notice that, if we could prove that [L a, L a] is a carrier for the same algebra generated by \Phi a, we would know that there is a unique catamorphism from the initial algebra L a:

\kappa_{[L a, L a]} : L a \to [L a, L a]

which, by the hom adjunction, would give us the desired multiplication:

\mu : L a \otimes L a \to L a

Let’s establish some useful lemmas first.

Lemma 1: For any object x in a closed monoidal category, [x, x] is a monoid.

This is a generalization of the idea that endomorphisms form a monoid, in which the identity morphism is the unit and composition is multiplication. Here, the internal hom-object [x, x] generalizes the set of endomorphisms.

Proof: The unit:

\eta : I \to [x, x]

follows, through adjunction, from the unit law in the monoidal category:

\lambda : I \otimes x \to x

(In Haskell, this is a fancy way of writing mempty = id.)

Multiplication takes the form:

\mu : [x, x] \otimes [x, x] \to [x, x]

which is reminiscent of composition of endomorphisms. In Haskell we would say:

mappend = (.)

By adjunction, we get:

curry^{-1} \, \mu : [x, x] \otimes [x, x] \otimes x \to x

We have at our disposal the counit eval of the adjunction:

eval : [x, x] \otimes x \to x

We can apply it twice to get:

\mu = curry (eval \circ (id \otimes eval))

In Haskell, we could express this as:

mu :: ((x -> x), (x -> x)) -> (x -> x)
mu (f, g) = \x -> f (g x)

Here, the counit of the adjunction turns into simple function application.

\square

Lemma 2: For every morphism f : a \to m, where m is a monoid, we can construct an algebra of the functor \Phi a with m as its carrier.

Proof: Since m is a monoid, we have two morphisms:

\eta : I \to m

\mu : m \otimes m \to m

To show that m is a carrier of our algebra, we need two morphisms:

\alpha : I \to m

\beta : a \otimes m \to m

The first one is the same as \eta, the second can be implemented as:

\beta = \mu \circ (f \otimes id)

In Haskell, given a function f :: a -> m mapping the generators into the monoid, we would do case analysis:

mapAlg :: Monoid m => (a -> m) -> ListF a m -> m
mapAlg f NilF = mempty
mapAlg f (ConsF a m) = f a `mappend` m

\square

We can now build a larger proof. By lemma 1, [L a, L a] is a monoid with:

\mu = curry (eval \circ (id \otimes eval))

We also have a morphism \bar{\beta} : a \to [L a, L a] so, by lemma 2, [L a, L a] is also a carrier for the algebra:

\alpha = \eta

\beta = \mu \circ (\bar{\beta} \otimes id)

It follows that there is a unique catamorphism \kappa_{[L a, L a]} from the initial algebra L a to it, and we know how to use it to implement monoidal multiplication for L a. Therefore, L a is a monoid.

Translating this to Haskell, \bar{\beta} is the curried form of Cons and what we have shown is that concatenation (multiplication of lists) can be implemented as a catamorphism:

conc :: List a -> List a -> List a
conc x y = cata alg x y
  where alg NilF        = id
        alg (ConsF a t) = (cons a) . t

The type:

List a -> (List a -> List a)

(parentheses added for emphasis) corresponds to L a \to [L a, L a].

It’s interesting that concatenation can be described in terms of the monoid of list endomorphisms. Think of turning an element a of the list into a transformation, which prepends this element to its argument (that’s what \bar{\beta} does). These transformations form a monoid. We have an algebra that turns the unit I into an identity transformation on lists, and a pair a \otimes t (where t is a list transformation) into the composite \bar{\beta} a \circ t. The catamorphism for this algebra takes a list L a and turns it into one composite list transformation. We then apply this transformation to another list and get the final result: the concatenation of two lists. \square

Incidentally, lemma 2 also works in reverse: If a monoid m is a carrier of the algebra of \Phi a, then there is a morphism f : a \to m. This morphism can be thought of as inserting generators represented by a into the monoid m.

Proof: If m is both a monoid and a carrier for the algebra \Phi a, we can construct the morphism a \to m by first applying the identity law to go from a to a \otimes I, then applying id_a \otimes \eta to get a \otimes m. This can be right-injected into the coproduct I + a \otimes m and then evaluated down to m using the structure map for the algebra on m.

a \to a \otimes I \to a \otimes m \to I + a \otimes m \to m


In Haskell, this corresponds to a construction and evaluation of:

ConsF a mempty
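
Spelled out a little more (a sketch; insertGen is my name, and the monoid and algebra structures on m are the ones from the lemma):

-- Inserting a generator into a monoid m that also carries
-- a (ListF a)-algebra structure
insertGen :: Monoid m => (ListF a m -> m) -> a -> m
insertGen alg a = alg (ConsF a mempty)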

\square

Free Monoidal Functor

Let’s go back to our functor category. We started with a monoidal category C and considered a functor category [C, Set]. We have shown that [C, Set] is itself a monoidal category with Day convolution as tensor product and the hom functor C(I, -) as unit. A monoid in this category is a lax monoidal functor.

The next step is to build a free monoid in [C, Set], which would give us a free lax monoidal functor. We have just seen such a construction in an arbitrary closed monoidal category. We just have to translate it to [C, Set]. We do this by replacing objects with functors and morphisms with natural transformations.

Our construction relied on defining an initial algebra for the functor:

I + a \otimes b

Straightforward translation of this formula to the functor category [C, Set] produces a higher order endofunctor:

A_F G = C(I, -) + F \star G

It defines, for any functor F, a mapping from a functor G to a functor A_F G. (It also maps natural transformations.)

We can now use A_F to define (higher-order) algebras. An algebra consists of a carrier — here, a functor T — and a structure map — here, a natural transformation:

A_F T \to T
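
In Haskell, such a higher-order algebra can be written down by reusing the natural-transformation type f ~> g defined earlier (a sketch; HAlgebra is my name, and RankNTypes is assumed):

-- A higher-order algebra: a natural transformation from hf t to t
type HAlgebra hf t = hf t ~> t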

The initial algebra for this higher-order endofunctor defines a monoid, and therefore a lax monoidal functor. We have shown it for an arbitrary closed monoidal category. So the only question is whether our functor category with Day convolution is closed.

We want to define the internal hom-object in [C, Set] that satisfies the adjunction:

[C, Set](F \star G, H) \cong [C, Set](F, [G, H])

We start with the set of natural transformations — the hom-set in [C, Set]:

[C, Set](F \star G, H)

We rewrite it as an end over c, and use the formula for Day convolution:

\int_c Set(\int^{a b} C(a \otimes b, c) \times F a \times G b, H c)

We use the co-continuity trick to pull the coend out of the hom-set and turn it into an end:

\int_{c a b} Set(C(a \otimes b, c) \times F a \times G b, H c)

Keeping in mind that our goal is to end up with F a on the left, we use the regular hom-set adjunction to shuffle the other two terms to the right:

\int_{c a b} Set(F a, C(a \otimes b, c) \times G b \to H c)

The hom-functor is continuous in the second argument, so we can sneak the end over b c under it:

\int_{a} Set(F a, \int_{b c} C(a \otimes b, c) \times G b \to H c)

We end up with a set of natural transformations from the functor F to the functor we will call:

[G, H] = \int_{b c} (C(- \otimes b, c) \times G b \to H c)

We therefore identify this functor as the right adjoint (internal hom-object) for Day convolution. We can further simplify it by using the hom-set adjunction:

\int_{b c} (C(- \otimes b, c) \to (G b \to H c))

and applying the Yoneda lemma to get:

[G, H] = \int_{b} (G b \to H (- \otimes b))

In Haskell, we would write it as:

newtype DayHom f g a = DayHom (forall b . f b -> g (a, b))
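
The two directions of this adjunction also have direct Haskell witnesses (a sketch; curryDay and uncurryDay are my names, and RankNTypes is assumed):

-- Currying: a natural transformation out of a Day convolution
-- becomes a natural transformation into the internal hom
curryDay :: (forall c. Day f g c -> h c) -> f a -> DayHom g h a
curryDay alpha fa = DayHom (\gb -> alpha (Day (id, fa, gb)))

-- Uncurrying, using the counit (function application) and fmap
uncurryDay :: Functor h
           => (forall a. f a -> DayHom g h a)
           -> Day f g c -> h c
uncurryDay beta (Day (combine, fa, gb)) =
  case beta fa of
    DayHom k -> fmap combine (k gb)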

Since Day convolution has a right adjoint, we conclude that the fixed point of our higher order functor defines a free lax monoidal functor. We can write it in a recursive form as:

Free_F = C(I, -) + F \star Free_F

or, in Haskell:

data FreeMonR f t =
      Done t
    | More (Day f (FreeMonR f) t)

Free Monad

This blog post wouldn’t be complete without mentioning that the same construction works for monads. Famously, a monad is a monoid in the category of endofunctors. Endofunctors form a monoidal category with functor composition as tensor product and the identity functor as unit. The fact that we can construct a free monad using the formula:

FreeM_F = Id + F \circ FreeM_F

is due to the observation that functor composition has a right adjoint, which is the right Kan extension. Unfortunately, due to size issues, this Kan extension doesn’t always exist. I’ll quote Alex Campbell here: “By making suitable size restrictions, we can give conditions for free monads to exist: for example, free monads exist for accessible endofunctors on locally presentable categories; a special case is that free monads exist for finitary endofunctors on Set, where finitary means the endofunctor preserves filtered colimits (more generally, an endofunctor is accessible if it preserves \kappa-filtered colimits for some regular cardinal number \kappa).”
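
In Haskell, this fixed point is the familiar free monad data type (shown here in its standard recursive form, not tied to any particular library):

data FreeMonad f a = Pure a | Free (f (FreeMonad f a))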

Conclusion

As we acquire experience in programming, we learn more tricks of the trade. A seasoned programmer knows how to read a file, parse its contents, or sort an array. In category theory we use a different bag of tricks. We bunch morphisms into hom-sets, move ends and coends, use Yoneda to “integrate,” use adjunctions to shuffle things around, and use initial algebras to define recursive types.

Results derived in category theory can be translated to definitions of functions or data structures in programming languages. A lax monoidal functor becomes an Applicative. A free monoidal functor becomes:

data FreeMonR f t =
      Done t
    | More (Day f (FreeMonR f) t)

What’s more, since the derivation made very few assumptions about the category C (other than that it’s monoidal), this result can be immediately applied to profunctors (replacing C with C^{op}\times C) to produce:

data FreeMon p s t where
     DoneFM :: t -> FreeMon p s t
     MoreFM :: p a b -> FreeMon p c d -> 
                        (b -> d -> t) -> 
                        (s -> (a, c)) -> FreeMon p s t

Replacing Day convolution with endofunctor composition gives us a free monad:

data FreeMonadF f g a = 
    DoneFM a 
  | MoreFM (Compose f g a)

Category theory is also the source of laws (commuting diagrams) that can be used in equational reasoning to verify the correctness of programming constructs.

Writing this post has been a great learning experience. Every time I got stuck, I would ask Alex for help, and he would immediately come up with yet another algebra and yet another catamorphism. This was so different from the approach I would normally take, which would be to get bogged down in inductive proofs over recursive data structures.


I want you to perform a little experiment. Take an egg, put it in a blender, and run it for ten seconds.

Oh, I forgot to tell you to first remove the eggshell. No problem, let’s run the blender in the opposite direction for ten seconds, and we’ll get the egg back.

It doesn’t work, does it? The reason is entropy. The second law of thermodynamics states that the entropy of an isolated system can never decrease. Blending an egg increased its entropy. Unblending it would decrease entropy. But there is a workaround: feed the blended egg to a chicken, and you will get a new egg. Granted, you might have to feed it more than one egg, but still: the miracle of life! Life seems to go against the general trend of the second law of thermodynamics.

Of course, life cannot flourish in a completely isolated system, so the laws of physics are safe. A chicken can produce an egg only by increasing the entropy of its environment and, indirectly, that of the Sun.

Entropy and the Universe

We have some kind of intuitive understanding of entropy as the degree of disorderliness. An egg is highly “ordered,” in that it has an ovoid shell, the white, the yolk and, most importantly, the genetic blueprint for a chicken. It is extremely unlikely that an egg would randomly assemble itself from the primordial soup. And yet, in a way, it did. It took about fourteen billion years, starting from the Big Bang, but it finally arrived at a supermarket near you.

Since entropy has been always copiously produced in the Universe, we are forced to deduce that the initial entropy of the Universe was much lower than it is today. The Universe has been running up the entropy bill at a tremendous pace ever since the Big Bang.

With our simplistic understanding of entropy as the opposite of order, it might be difficult to imagine what it meant for the primordial Universe to be low entropy. Were elementary particles nicely stacked according to their quantum Dewey decimal codes on separate shelves like books in a library? It turns out that, in the presence of gravity, the lowest entropy state is when matter is uniformly distributed throughout the Universe. This might be a little counter-intuitive, considering how blending an egg led to an increase of entropy. But a uniform distribution of gravitational mass is a very precarious state. It’s like a needle balanced on its point. At the slightest disturbance, the parts of the volume with infinitesimally higher density will start collapsing on themselves due to gravity. The collapse will be slow in the beginning but, as it keeps increasing local density, it will attract more and more matter, resulting in a positive feedback loop.

This is exactly what happened after the Big Bang (as far as we know). Low-entropy uniform soup started slowly curdling to form galaxies and stars. The more non-uniform the distribution of gravitating matter, the higher the entropy.

The ultimate fate of collapsing matter is a gravitational black hole, with all matter concentrated in a singular point. Black holes have extremely high entropy, so much so that it is believed that the current entropy of the Universe is dominated by gigantic black holes in the centers of galaxies.

So why hasn’t the whole Universe collapsed into one gigantic black hole? It’s because the breakneck race toward higher entropy has run against several obstacles. One of them works like a governor in a steam engine. Tiny fluctuations in mass density during the Big Bang were accompanied by tiny fluctuations in velocities of particles. These fluctuations resulted in random distribution of angular momentum. As a result, each collapsing region of the Universe ends up with some randomly assigned net angular momentum. In other words, it spins. And when matter is sucked up towards the center, it starts spinning faster and faster. That’s why every galaxy is spinning. The resulting centrifugal force keeps matter from falling all the way to the center and disappearing into a black hole.

The other obstacle toward reaching maximum entropy is the fact that clumps of matter of a certain size turn into stars. When lots of atoms of hydrogen are squished together, they can reach a higher entropy state by fusing into helium. But this process produces excess photons, which keep pushing matter away, thus preventing total collapse. Eventually, the hydrogen burns out, the star undergoes a series of transitions and, depending on its mass, ends up as a supernova, or turns into a brown or white dwarf. What’s left after a supernova explosion can be a neutron star or a black hole.

In a neutron star, further collapse is stalled by another property of matter: Fermi statistics. Neutrons are fermions, and no two fermions may occupy the same quantum state. In particular, you can’t squeeze them all into a very small volume — they repel each other.

Are neutron stars and black holes the end products of the evolution of the Universe? Probably not. There is a strong suspicion that neutrons will eventually decay into leptons — mostly neutrinos, electrons, and positrons. Black holes will evaporate through Hawking radiation. The Universe will eventually reach its thermal death: an ever expanding volume filled with photons and leptons.

What’s Life Got to Do with It?

So far we’ve seen that matter has properties that tend to slow down the ratchet of entropy. Our Sun, for instance, could increase its entropy tremendously by turning all its hydrogen into helium in one fell swoop while collapsing to form a black hole. It can’t do that because of the heat and radiation pressure generated in the process. And even if all the heat were siphoned out, the leftover neutrons would congeal into a solid neutron star, preventing further collapse.

So the Sun is doing its best, under the circumstances, trying to dissipate the excess of energy. It does it mostly by radiating high energy photons. These are the photons of visible and ultraviolet light that warm up the Earth. The Earth, in turn, re-radiates this heat in the form of low energy infrared photons.

It turns out that turning high energy photons to low energy photons increases overall entropy. So, in its small way, the Earth speeds up the rise of entropy. In fact, it does it better than, for instance, Mercury; because the Earth has the atmosphere and the oceans, which are good heat sinks, and because it spins on its axis, transporting the accumulated heat from the sunny side to the shaded side, where it’s radiated into space in the form of infrared photons.

But Earth has another secret weapon that speeds up the advent of the heat death of the Universe: life. To begin with, living organisms consume energy during the day. They also need energy to survive at night, so they came up with clever ways to store energy in chemical compounds. They can then cash their savings at night, all the while radiating heat. At higher latitudes, they also store energy during summer and expend it during winter.

A steppe is better at entropy production than barren land; a forest or a jungle is still better. But human civilization is the best. Our cars, factories, cities, air conditioners, all produce entropy at a much faster pace than bare nature. We’re good at it!

The Self-Organizing Principle

The advent of life on Earth is often attributed to something called the self-organizing principle. It’s just a name for what happens in systems that are away from thermodynamic equilibrium. In those systems it is often possible to speed up the increase of entropy by organizing things a little better.

The simplest example of this is when you heat a layer of liquid in a pan. The liquid can transport energy by thermal conduction, which leads to an overall rise in entropy. But there is a faster way: the heated liquid at the bottom of the pan is lighter than the cooler liquid at the top, so it can float to the top. The heavier liquid at the top can then sink to the bottom. This is called convection, and it’s faster than conduction. The only problem is that the two streams of liquid have to negotiate the flow, because they can’t both pass through the same point simultaneously. In fact, in the ideal case, they would be deadlocked. What happens in reality is an amazing feat of self-organization: regularly spaced hexagonal convection cells called Bénard cells emerge in the heated liquid.


A honeycomb pattern of Bénard cells suggests that order may be spontaneously generated in situations when it can speed up the production of entropy. If you have a rich enough environment and wait long enough, more and more complex patterns that ease the production of entropy may emerge — such as life itself.

But life doesn’t emerge everywhere. As far as we know there’s no life on the Moon and no (visible) life on Mars. What’s different about Earth is that it is, and has always been, very turbulent. For starters, we have water that is constantly changing state. It’s boiling in hydrothermal vents, liquid in the oceans, solid in the ice caps; it’s precipitating from the atmosphere and evaporating from pools. It dissolves lots of chemical compounds and makes colloids with others. Continental plates keep shifting, resulting in constant volcanic activity. New minerals are brought up from the depths and exposed to erosion. We also have a large Moon that’s causing regular tides, and the Earth’s axis of rotation is tilted, resulting in changing seasons. On top of that, we have occasional comets causing impact winters. We can’t complain about lack of entertainment on Earth.

Here’s what I think: Life can only emerge and thrive on the edge. Our planet has been on the edge for a few billion years. Conditions on Earth have always fallen barely short of wiping life out and, paradoxically, this is exactly what makes life possible. The Earth is living proof that what doesn’t kill you makes you stronger. There have been countless attempts on life on Earth, and they all resulted in accelerating the evolution towards higher life forms. I know that it might be controversial to call one form of life higher than another, but there is an objective measure that we can use, and that’s the efficiency of turning energy into entropy. In this respect, humans are indeed the highest form of life. We were able to tap into sources of energy that have been forgotten by nature for hundreds of millions of years in the form of coal, oil, and gas. We use all this to speed up the increase of entropy.

Why Are We Alone?

You might be familiar with the Fermi paradox. In essence, the question is: if life is inevitable, why haven’t we seen it all over the Universe? And judging by how quickly life emerged on Earth (essentially as soon as the water condensed into oceans), life seems to be inevitable, at least on Earth-like planets, which are very common in the Universe. And life — civilized life in particular — being so good at producing vast amounts of entropy, should eventually make itself conspicuous on the cosmic scale.

On the other hand, we don’t know how many planets are “on the edge,” and how narrow the edge is. It’s possible that for an Earth-like planet to enter the life-creating period is a relatively common occurrence — possibly right after the water gathers into oceans. Finding remnants of life on Mars would give support to this idea. But the Earth has been walking this narrow path between stagnation and destruction for more than four billion years. There have been long periods of stagnation: there was the snowball Earth when the oceans froze over, and the “boring billion,” when the air was filled with the smell of rotten eggs. There have been major extinction events, like the asteroid impact that wiped out the dinosaurs.

Being on the edge means that you are likely to fall off. You either die of boredom (that’s what might have happened on Mars), or you get wiped out by a cataclysm (if the Chicxulub asteroid had been a tad larger, the Earth could have been sterilized). It might be extremely unlikely to stay for a few billion years on the narrow path that leads from Bénard cells to a space-faring civilization. We might actually be the first to reach this level in our cosmic neighborhood. Life on Earth could be more like a professional Russian-roulette player than a nine-to-five worker.

There is also something we don’t quite get about cosmic timescales. For the last few hundred years the powers of humanity have been growing exponentially. From the cosmic perspective, humanity looks like a sudden bloom that took over a stagnant pool on the outskirts of the Galaxy. We foolishly imagine that we can sustain this level of progress and in a short time colonize the Solar system and reach for the stars. But one thing we know for sure about exponential growth is that it’s not sustainable in the long run. We are not only going to bump our heads against unbreakable laws of physics, but we’ll also have to deal with the limitations of the human mind. And all other civilizations that might be out there will have to deal with the same problems. This might explain why we are not seeing them.

In fact, we could reverse this reasoning and argue that the fact that we don’t detect any signs of alien civilizations suggests that the obstacles that we see in front of us are not easily overcome. In particular:

  • The speed of light limits our ability to travel and exchange information at large distances. This is one of the hardest limits, because special relativity is the foundation of all physics.
  • The coupling of gravity to other forces is extremely weak, so the prospects of controlling gravity and counter-balancing acceleration are virtually non-existent. This means that there is no easy way to shrink the enormous distances between stars — no warp drive.
  • The size of the atom and the speed of light limit our ability to store and process information. This prevents us from extending the capabilities of our brains to discover and explore the laws of the Universe.

These three limits can also be related to three fundamental theories: special relativity, general relativity, and quantum mechanics, respectively.

So what does the future have in store for humanity? It looks like we are reaching the end of exponential expansion. There hasn’t been any major breakthrough in fundamental physics for almost half a century, we are seeing the tail end of Moore’s law, and the population of Earth is finally stabilizing. If we don’t wipe ourselves out from the face of the Earth, we might be facing a boring millennium, if not a boring million. And it’s entirely possible that we are surrounded by other civilizations that have already entered their boring periods. If they eventually graduate to the next stage, they will be ready to help the Universe increase its entropy on a vastly larger scale. Hopefully humanity will still be around to see the Galaxy blooming with sentient activity.


Abstract: I derive free monoidal profunctors as fixed points of a higher order functor acting on profunctors. Monoidal profunctors play an important role in defining traversals.

The beauty of category theory is that it lets us reuse concepts at all levels. In my previous post I have derived a free monoidal functor that goes from a monoidal category C to Set. The current post may then be shortened to: Since profunctors are just functors from C^{op} \times C to Set, with the obvious monoidal structure induced by the tensor product in C, we automatically get free monoidal profunctors.

Let me fill in the details.

Profunctors in Haskell

Here’s the definition of a profunctor from Data.Profunctor:

class Profunctor p where
  dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

The idea is that, just like a functor acts on objects, a profunctor p acts on pairs of objects \langle a, b \rangle. In other words, it’s a type constructor that takes two types as arguments. And just like a functor acts on morphisms, a profunctor acts on pairs of morphisms. The only tricky part is that the first morphism of the pair is reversed: instead of going from a to s, as one would expect, it goes from s to a. This is why we say that the first argument comes from the opposite category C^{op}, where all morphisms are reversed with respect to C. Thus a morphism from \langle a, b \rangle to \langle s, t \rangle in C^{op} \times C is a pair of morphisms \langle s \to a, b \to t \rangle.

Just like functors form a category, profunctors form a category too. In this category profunctors are objects, and natural transformations are morphisms. A natural transformation between two profunctors p and q is a family of functions which, in Haskell, can be approximated by a polymorphic function:

type p ::~> q = forall a b. p a b -> q a b

If the category C is monoidal (has a tensor product \otimes and a unit object 1), then the category C^{op} \times C has a trivially induced tensor product:

\langle a, b \rangle \otimes \langle c, d \rangle = \langle a \otimes c, b \otimes d \rangle

and the unit \langle 1, 1 \rangle.

In Haskell, we’ll use cartesian product (pair type) as the underlying tensor product, and () type as the unit.

Notice that the induced product does not have the usual exponential as the right adjoint. Indeed, the hom-set:

(C^{op} \times C) \, ( \langle a, b  \rangle \otimes  \langle c, d  \rangle,  \langle s, t  \rangle )

is a set of pairs of morphisms:

\langle s \to a \otimes c, b \otimes d \to t  \rangle

If the right adjoint existed, it would be a pair of objects \langle X, Y  \rangle, such that the following hom-set would be isomorphic to the previous one:

\langle X \to a, b \to Y  \rangle

While Y could be the internal hom, there is no candidate for X that would produce the isomorphism:

s \to a \otimes c \cong X \to a

(Consider, for instance, the unit () for a.) This lack of the right adjoint is the reason why we can’t define an analog of Applicative for profunctors. We can, however, define a monoidal profunctor:

class Monoidal p where
  punit :: p () ()
  (>**<) :: p a b -> p c d -> p (a, c) (b, d)

This profunctor is a map between two monoidal structures. For instance, punit can be seen as mapping the unit in Set to the unit in C^{op} \times C (in pseudo-notation):

punit :: () -> p <1, 1>

Operator >**< maps the product in Set to the induced product in C^{op} \times C:

(>**<) :: (p <a, b>, p <c, d>) -> p (<a, b> × <c, d>)

Day convolution, which works with monoidal structures, generalizes naturally to the profunctor category:

data PDay p q s t = forall a b c d. 
     PDay (p a b) (q c d) ((b, d) -> t) (s -> (a, c))

Higher Order Functors

Since profunctors form a category, we can define endofunctors in that category. This is a no-brainer in category theory, but it requires some new definitions in Haskell. Here’s a higher-order functor that maps a profunctor to another profunctor:

class HPFunctor pp where
  hpmap :: (p ::~> q) -> (pp p ::~> pp q)
  ddimap :: (s -> a) -> (b -> t) -> pp p a b -> pp p s t

The function hpmap lifts a natural transformation, and ddimap shows that the result of the mapping is also a profunctor.

An endofunctor in the profunctor category may have a fixed point:

newtype FixH pp a b = InH { outH :: pp (FixH pp) a b }

which is also a profunctor:

instance HPFunctor pp => Profunctor (FixH pp) where
    dimap f g (InH pp) = InH (ddimap f g pp)
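
Just as in the first-order case, a higher-order algebra pp q ::~> q induces a unique interpreter out of this fixed point. The higher-order catamorphism is a direct transcription of cata (a sketch; cataH is my name, and RankNTypes is assumed):

cataH :: HPFunctor pp => (pp q ::~> q) -> FixH pp ::~> q
cataH alg = alg . hpmap (cataH alg) . outH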

Finally, our Day convolution is a higher-order endofunctor in the category of profunctors:

instance HPFunctor (PDay p) where
  hpmap nat (PDay p q from to) = PDay p (nat q) from to
  ddimap f g (PDay p q from to) = PDay p q (g . from) (to . f)

We’ll use this fact to construct a free monoidal profunctor next.

Free Monoidal Profunctor

In the previous post, I defined the free monoidal functor as a fixed point of the following endofunctor:

data FreeF f g t =
      DoneF t
    | MoreF (Day f g t)

Replacing the functors f and g with profunctors is straightforward:

data FreeP p q s t = 
      DoneP (s -> ()) (() -> t) 
    | MoreP (PDay p q s t)

The only tricky part is realizing that the first term in the sum comes from the unit of Day convolution, which is the type () -> t, and it generalizes to an appropriate pair of functions (we’ll simplify this definition later).

FreeP is a higher order endofunctor acting on profunctors:

instance HPFunctor (FreeP p) where
    hpmap _ (DoneP su ut) = DoneP su ut
    hpmap nat (MoreP day) = MoreP (hpmap nat day)
    ddimap f g (DoneP au ub) = DoneP (au . f) (g . ub)
    ddimap f g (MoreP day) = MoreP (ddimap f g day)

We can, therefore, define its fixed point:

type FreeMon p = FixH (FreeP p)

and show that it is indeed a monoidal profunctor. As before, the trick is to first show the following property of Day convolution:

cons :: Monoidal q => PDay p q a b -> q c d -> PDay p q (a, c) (b, d)
cons (PDay pxy quv yva bxu) qcd = 
      PDay pxy (quv >**< qcd) (bimap yva id . reassoc) 
                              (assoc . bimap bxu id)

where

assoc ((a,b),c) = (a,(b,c))
reassoc (a, (b, c)) = ((a, b), c)

Using this function, we can show that FreeMon p is monoidal for any p:

instance Profunctor p => Monoidal (FreeMon p) where
  punit = InH (DoneP id id)
  (InH (DoneP au ub)) >**< frcd = dimap snd (\d -> (ub (), d)) frcd
  (InH (MoreP dayab)) >**< frcd = InH (MoreP (cons dayab frcd))

FreeMon can also be rewritten as a recursive data type:

data FreeMon p s t where
     DoneFM :: t -> FreeMon p s t
     MoreFM :: p a b -> FreeMon p c d -> 
                        (b -> d -> t) -> 
                        (s -> (a, c)) -> FreeMon p s t
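
As a small usage sketch, any single profunctor value embeds into this free construction (liftFM is my name, not part of the post):

liftFM :: p a b -> FreeMon p a b
liftFM pab = MoreFM pab (DoneFM ()) (\b () -> b) (\a -> (a, ()))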

Categorical Picture

As I mentioned before, from the categorical point of view there isn’t much to talk about. We define a functor in the category of profunctors:

A_p q = (C^{op} \times C) (1, -) + \int^{ a b c d } p a b \times q c d \times (C^{op} \times C) (\langle a, b \rangle \otimes \langle c, d \rangle, -)

As previously shown in the general case, its initial algebra defines a free monoidal profunctor.

Acknowledgments

I’m grateful to Eugenia Cheng not only for talking to me about monoidal profunctors, but also for getting me interested in category theory in the first place through her Catsters video series. Thanks also go to Edward Kmett for numerous discussions on this topic.


Abstract: I derive a free monoidal (applicative) functor as an initial algebra of a higher-order functor using Day convolution.

I thought I was done with monoids for a while, after writing my Monoids on Steroids post, but I keep bumping into them. This time I read a paper by Capriotti and Kaposi about Free Applicative Functors and it got me thinking about the relationship between applicative and monoidal functors. In a monoidal closed category, the two are equivalent, but monoidal structure seems to be more fundamental. It’s possible to have a monoidal category that is not closed. Not to mention that monoidal structures look cleaner and more symmetrical than closed structures.

One particular statement in the paper caught my attention: the authors said that the free applicative functors are initial algebras for a very simple higher-order functor:

A G = Id + F \star G

which, in their own words, “makes precise the intuition that free applicative functors are in some sense lists (i.e. free monoids).” In this post I will decode this statement and then expand it by showing how to implement a free monoidal functor as a higher order fixed point using some properties of Day convolution.

Let me start with some refresher on lists. To define a list, all we need is a monoidal category. If we pretend that Hask is a category, then our product is the pair type (a, b), which is associative up to isomorphism, with () as its unit, also up to isomorphism. We can then define a functor that generates lists:

A_a b = () + (a, b)

Notice the similarity with the Capriotti-Kaposi formula. You might recognize this functor in its Haskell form:

data ListF a b = Nil | Cons a b

The fixed point of this functor is the familiar list, a.k.a., the free monoid. So that’s the general idea.

Day Convolution

There are lots of interesting monoidal structures. Famously (because of the monad quip) the category of endofunctors is monoidal; with functor composition as product and identity functor as identity. What is less known is that functors from a monoidal category \mathscr{C} to \mathscr{S}et also form a monoidal category. A product of two such functors is defined through Day convolution. I have talked about Day convolution in the context of applicative functors, but here I’d like to give you some more intuition.

What did it for me was Alexander Campbell’s comment on Stack Exchange. You know how, in a vector space, you can have a set of basis vectors, say \vec{e}_i, and represent any vector as a linear combination:

\vec{v} = \sum \limits_{i = 1}^n v_i \vec{e}_i

It turns out that, under some conditions, there is a basis in the category of functors from \mathscr{C} to \mathscr{S}et. The basis is formed by representable functors of the form \mathscr{C}(x, -), and the decomposition of a functor F is given by the co-Yoneda lemma:

F a = \int^x F x \times \mathscr{C}(x, a)

The coend in this formula roughly corresponds to a (possibly infinite) categorical sum (coproduct). The product under the integral sign is the cartesian product in \mathscr{S}et.

In pseudo-Haskell, we would write this formula as:

f a ~ exists x . (f x, x -> a)

because a coend corresponds to an existential type. The intuition is that, because the type x is hidden, the only thing the user can do with this data type is to fmap the function over the value f x, and that is equivalent to the value f a.

The actual Haskell encoding is given by the following GADT:

data Coyoneda f a where
  Coyoneda :: f x -> (x -> a) -> Coyoneda f a
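
The two directions of the co-Yoneda isomorphism then have one-line witnesses (a sketch; in the kan-extensions package they are called liftCoyoneda and lowerCoyoneda):

toCoyoneda :: f a -> Coyoneda f a
toCoyoneda fa = Coyoneda fa id

fromCoyoneda :: Functor f => Coyoneda f a -> f a
fromCoyoneda (Coyoneda fx g) = fmap g fx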

Now suppose that we want to define “multiplication” of two functors that are represented using the coend formula.

(F \star G) a \cong \int^x F x \times \mathscr{C}(x, a) \star \int^y G y \times \mathscr{C}(y, a)

Let’s assume that our multiplication interacts nicely with coends. We get:

\int^{x y} F x \times G y \times (\mathscr{C}(x, a) \star \mathscr{C}(y, a))

All that remains is to define the multiplication of our “basis vectors,” the hom-functors. Since we have a tensor product \otimes in \mathscr{C}, the obvious choice is:

\mathscr{C}(x, -) \star \mathscr{C}(y, -) \cong \mathscr{C}(x \otimes y, -)

This gives us the formula for Day convolution:

(F \star G) a = \int^{x y} F x \times G y \times \mathscr{C}(x \otimes y, a)

We can translate it directly to Haskell:

data Day f g a = forall x y. Day (f x) (g y) ((x, y) -> a)

The actual Haskell library implementation uses GADTs, and also curries the product, but here I’m opting for encoding an existential type using forall in front of the constructor.

For those who like the analogy between functors and containers, Day convolution may be understood as containing a box of xs and a bag of ys, plus a binary combinator for turning every possible pair (x, y) into an a.
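
For instance, here's a toy value of that type (the names and numbers are mine, purely for illustration): a box holding one Int and a bag of Ints, with addition as the combinator.

boxAndBag :: Day Maybe [] Int
boxAndBag = Day (Just 2) [10, 20, 30] (\(x, y) -> x + y)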

Day convolution lifts the monoidal structure of \mathscr{C} (the tensor product \otimes) to the functor category [\mathscr{C}, \mathscr{S}et]. In particular, it lifts the unit object 1 to the hom-functor \mathscr{C}(1, -). We have, for instance:

(F \star \mathscr{C}(1, -)) a = \int^{x y} F x \times \mathscr{C}(1, y) \times \mathscr{C}(x \otimes y, a)

which, by co-Yoneda, is isomorphic to:

\int^{x} F x \times \mathscr{C}(x \otimes 1, a)

Considering that x \otimes 1 is isomorphic to x, we can apply co-Yoneda again, to indeed get F a.

In Haskell, the unit object with respect to product is the unit type (), and the hom-functor \mathscr{C}(1, -) is isomorphic to the identity functor Id (a set of functions from unit to x is isomorphic to x).
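
Here's a sketch of that unit isomorphism in Haskell, using Identity from Data.Functor.Identity in place of \mathscr{C}(1, -) (the function names are my own):

dayToF :: Functor f => Day f Identity a -> f a
dayToF (Day fx (Identity y) xya) = fmap (\x -> xya (x, y)) fx

fToDay :: f a -> Day f Identity a
fToDay fa = Day fa (Identity ()) fst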

We now have all the tools to understand the formula:

A G = Id + F \star G

or, more generally:

A_F G = \mathscr{C}(1, -) + F \star G

It’s a sum (coproduct) of the unit under Day convolution and the Day convolution of two functors F and G.

Lax Monoidal Functors

Whenever we have a functor between two monoidal categories \mathscr{C} and \mathscr{D}, it makes sense to ask how this functor interacts with the monoidal structure. For instance, does it map the unit object in one category to the unit object in another? Does it map the result of a tensor product to a tensor product of mapped arguments?

A lax monoidal functor doesn’t do that, but it does the next best thing. It doesn’t map unit to unit, but it provides a morphism that connects the unit in the target category to the image of the unit of the source category.

\epsilon : 1_\mathscr{D} \to F 1_\mathscr{C}

It also provides a family of morphisms from the product of images to the image of a product:

\mu_{a b} : F a \otimes_\mathscr{D} F b \to F (a \otimes_\mathscr{C} b)

which is natural in both arguments. These morphisms must satisfy additional conditions related to associativity and unitality of respective tensor products. If \epsilon and \mu are isomorphisms then we call F a strong monoidal functor.

If you look at the way Day convolution was defined, you can see now that we insisted that the hom-functor be strong monoidal:

\mathscr{C}(x, -) \star \mathscr{C}(y, -) \cong \mathscr{C}(x \otimes y, -)

Since hom-functors define the Yoneda embedding \mathscr{C} \to [\mathscr{C}, \mathscr{S}et], we can say that Day convolution makes the Yoneda embedding strong monoidal.

The translation of the definition of a lax monoidal functor to Haskell is straightforward:

class Monoidal f where
  unit  :: f ()
  (>*<) :: f x -> f y -> f (x, y)

Because Hask is a closed monoidal category, Monoidal is equivalent to Applicative (see my post on Applicative Functors).
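
Here's a sketch of both directions of that equivalence (the function names are mine):

unitA :: Applicative f => f ()
unitA = pure ()

pairA :: Applicative f => f x -> f y -> f (x, y)
pairA fx fy = (,) <$> fx <*> fy

pureM :: (Functor f, Monoidal f) => a -> f a
pureM a = fmap (const a) unit

apM :: (Functor f, Monoidal f) => f (a -> b) -> f a -> f b
apM fg fa = fmap (\(g, a) -> g a) (fg >*< fa)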

Fixed Points

Recursive data structures can be formally defined as fixed points of functors (more precisely, as carriers of initial F-algebras). Lists, in particular, are fixed points of functors of the form:

F_a b = 1 + a \otimes b

defined in a monoidal category (\otimes and 1) with coproducts (the plus sign).

In general, a fixed point is defined as a point that is fixed under some particular mapping. For instance, a fixed point of a function f(x) is some value x_0 such that:

f(x_0) = x_0

Obviously, the function’s codomain has to be the same as its domain, for this equation to make sense.

By analogy, we can define a fixed point of an endofunctor F as an object that is isomorphic to its image under F:

F x \cong x

The isomorphism here is just a pair of morphisms, one the inverse of the other. One of these morphisms can be seen as part of the F-algebra (x, f) whose carrier is x and whose action is:

f : F x \to x

Lambek’s lemma states that the action of the initial (or terminal) F-algebra is an isomorphism. This explains why a fixed point of a functor is often referred to as an initial (terminal) algebra.

In Haskell, a fixed point of a functor f is called Fix f. It is defined by the property that f acting on Fix f must be isomorphic to Fix f:

Fix f \cong f (Fix f)

which can be expressed as:

newtype Fix f = In { out :: f (Fix f) }

Notice that the pair (Fix f, In) is the (initial) algebra for the functor f, with the carrier Fix f and the action In; and that out is the inverse of In, as prescribed by Lambek's lemma.

Here’s another useful intuition about fixed points: they can often be calculated iteratively as a limit of a sequence. For functions, if the following sequence converges to x:

x_{n+1} = f (x_n)

then f(x) = x (at least for continuous functions).
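
For instance, iterating cos starting from 1.0 converges to its unique fixed point. A toy helper (mine, not part of the construction):

iterFix :: (Double -> Double) -> Double -> Double
iterFix f x
  | abs (x' - x) < 1e-12 = x'
  | otherwise            = iterFix f x'
  where x' = f x

-- iterFix cos 1.0 ==> 0.7390851332151607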

We can apply the same idea to our list functor, iteratively replacing b with the definition of F_a b = 1 + a \otimes b:

1 + a \otimes b
1 + a \otimes (1 + a \otimes b)
1 + a + a \otimes a \otimes (1 + a \otimes b)
1 + a + a \otimes a + a \otimes a \otimes a + ...

where we assumed associativity and unit laws (up to isomorphism). This formal expansion is in agreement with our intuition that a list is either empty, or contains one element, or a product of two elements, or a product of three elements, and so on…

Higher Order Functors

Category theory has achieved something we can only dream of in programming languages: reusability of concepts. For instance, functors between two categories \mathscr{C} and \mathscr{D} form a category, with natural transformations as morphisms. Therefore everything we said about fixed points and algebras can be immediately applied to the functor category. In Haskell, however, we have to start almost from scratch. For instance, a higher order functor, which takes a functor as argument and returns another functor, has to be defined as:

class HFunctor ff where
  ffmap :: Functor g => (a -> b) -> ff g a -> ff g b
  hfmap :: (g :~> h) -> (ff g :~> ff h)

The squiggly arrows are natural transformations:

infixr 0 :~>
type f :~> g = forall a. f a -> g a
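
For instance, here's a natural transformation from lists to Maybe (a sketch of the standard safeHead):

safeHead :: [] :~> Maybe
safeHead []      = Nothing
safeHead (x : _) = Just x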

Notice that the definition of HFunctor not only requires a higher-order version of fmap called hfmap, which lifts natural transformations, but also the lower-order ffmap that attests to the fact that the result of HFunctor is again a functor. (Quantified class constraints will soon make this redundant.)

The definition of a fixed point also has to be painstakingly rewritten:

newtype FixH ff a = InH { outH :: ff (FixH ff) a }

Functoriality of a higher order fixed point is easily established:

instance HFunctor f => Functor (FixH f) where
    fmap h (InH x) = InH (ffmap h x)
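
As an aside, the catamorphism also ports straightforwardly to the higher order setting. This is just a sketch (HAlgebra and cataH are my names; they are not needed in the rest of this post):

type HAlgebra ff g = ff g :~> g

cataH :: HFunctor ff => HAlgebra ff g -> FixH ff :~> g
cataH alg = alg . hfmap (cataH alg) . outH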

Finally, Day convolution is a higher order functor:

instance HFunctor (Day f) where
  hfmap nat (Day fx gy xyt) = Day fx (nat gy) xyt
  ffmap h   (Day fx gy xyt) = Day fx gy (h . xyt)

Free Monoidal Functor

With all the preliminaries out of the way, we are now ready to derive the main result.

We start with the higher-order functor whose initial algebra defines the free monoidal functor:

A_F G = Id + F \star G

We can translate it to Haskell as:

data FreeF f g t =
      DoneF t
    | MoreF (Day f g t)

It is a higher order functor, in that it takes a functor g and produces a new functor FreeF f g:

instance HFunctor (FreeF f) where
    hfmap _ (DoneF x) = DoneF x
    hfmap nat (MoreF day) = MoreF (hfmap nat day)
    ffmap f (DoneF x) = DoneF (f x)
    ffmap f (MoreF day) = MoreF (ffmap f day)

The claim is that, for any functor f, the (higher order) fixed point of FreeF f:

type FreeMon f = FixH (FreeF f)

is monoidal.

The usual approach to solving such a problem is to express FreeMon as a recursive data structure:

data FreeMonR f t =
      Done t
    | More (Day f (FreeMonR f) t)

and proceed from there. This is fine, but it doesn’t give us any insight about what property of the original higher-order functor makes its fixed point monoidal. So instead, I will concentrate on properties of Day convolution.

To begin with, let’s establish the functoriality of FreeMon using the fact that Day convolution is a functor:

instance Functor f => Functor (FreeMon f) where
  fmap h (InH (DoneF s)) = InH (DoneF (h s))
  fmap h (InH (MoreF day)) = InH (MoreF (ffmap h day))

The next step is based on the list analogy. The free monoidal functor is analogous to a list in which the product is replaced by Day convolution. The proof that it’s monoidal amounts to showing that one can “concatenate” two such lists. Concatenation is a recursive process in which we detach an element from one list and attach it to the other.

When building recursion, the functor g in Day convolution will play the role of the tail of the list. We’ll prove monoidality of the fixed point inductively by assuming that the tail is already monoidal. Here’s the crucial step expressed in terms of Day convolution:

cons :: Monoidal g => Day f g s -> g t -> Day f g (s, t)
cons (Day fx gy xys) gt = Day fx (gy >*< gt) (bimap xys id . reassoc)

We took advantage of associativity:

reassoc :: (a, (b, c)) -> ((a, b), c)
reassoc (a, (b, c)) = ((a, b), c)

and functoriality (bimap) of the underlying product.

The intuition here is that we have a Day product of the head of the list, which is a box of xs; and the tail, which is a container of ys. We are appending to it another container of ts. We do it by concatenating the two containers (gy >*< gt) into one container of pairs (y \otimes t). The new combinator reassociates the nested pairs (x \otimes (y \otimes t)) and applies the old combinator to (x \otimes y).

The final step is to show that FreeMon defined through Day convolution is indeed monoidal. Here’s the proof:

instance Functor f => Monoidal (FreeMon f) where
  unit = InH (DoneF ())
  (InH (DoneF s)) >*< frt = fmap (s,) frt
  (InH (MoreF day)) >*< frt = InH (MoreF (day `cons` frt))
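
To see the instance in action, here's a small helper of my own (not part of the original derivation) that injects a single f-action into the free monoidal functor; two injected actions can then be paired with >*<:

liftFM :: f a -> FreeMon f a
liftFM fa = InH (MoreF (Day fa (InH (DoneF ())) fst))

-- liftFM fa >*< liftFM fb :: FreeMon f (a, b)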

A lax monoidal functor must also preserve associativity and unit laws. Unlike the corresponding laws for applicative functors, these are pretty straightforward to formulate.

The unit laws are:

fmap lunit (unit >*< frx) = frx
fmap runit (frx >*< unit) = frx

where I used the left and right unitors:

lunit :: ((), a) -> a
lunit ((), a) = a
runit :: (a, ()) -> a
runit (a, ()) = a

The associativity law is:

fmap assoc ((frx >*< fry) >*< frz) = (frx >*< (fry >*< frz))

where I used the associator:

assoc :: ((a,b),c) -> (a,(b,c))
assoc ((a,b),c) = (a,(b,c))

Except for the left unit law, I wasn’t able to find simple derivations of these laws.

Categorical Picture

Translating this construction to category theory, we start with a monoidal category (\mathscr{C}, \otimes, 1, \alpha, \rho, \lambda), where \alpha is the associator, and \rho and \lambda are the right and left unitors, respectively. We will be constructing a lax monoidal functor from \mathscr{C} to \mathscr{S}et, the latter equipped with the usual cartesian product and coproduct.

I will sketch some of the constructions without going into too much detail.

The analogue of cons is a family of natural transformations:

\beta_{s t} : \int^{x y} f x \times g y \times \mathscr{C}(x \otimes y, s) \times g t \to \int^{u v} f u \times g v \times \mathscr{C}(u \otimes v, s \otimes t)

We will assume that g is lax monoidal, so the left hand side can be mapped to:

\int^{x y} f x \times g (y \otimes t) \times \mathscr{C}(x \otimes y, s)

The set of natural transformations can be represented as an end:

\int_{s t} \mathscr{S}et(\int^{x y} f x \times g (y \otimes t) \times \mathscr{C}(x \otimes y, s), \int^{u v} f u \times g v \times \mathscr{C}(u \otimes v, s \otimes t))

A hom-set from a coend is isomorphic to an end of a hom-set:

\int_{s t x y} \mathscr{S}et(f x \times g (y \otimes t) \times \mathscr{C}(x \otimes y, s), \int^{u v} f u \times g v \times \mathscr{C}(u \otimes v, s \otimes t))

There is an injection that is a member of this hom-set:

i_{x, y \otimes t} : f x \times g (y \otimes t) \times \mathscr{C}(x \otimes (y \otimes t), -) \to \int^{u v} f u \times g v \times \mathscr{C}(u \otimes v, -)

Given a morphism h that is a member of \mathscr{C}(x \otimes y, s), we can construct the morphism (h \otimes id) \circ \alpha^{-1}, which is a member of \mathscr{C}(x \otimes (y \otimes t), s \otimes t).

The free monoidal functor Free_f is given as the initial algebra of the (higher-order) endofunctor acting on a functor g from [\mathscr{C}, \mathscr{S}et]:

A_f g = \mathscr{C}(1, -) + f \star g

By Lambek’s lemma, the action of this functor on the fixed point is naturally isomorphic to the fixed point itself:

\mathscr{C}(1, -) + (f \star Free_f) \cong Free_f

We want to show that Free_f is lax monoidal, that is, that there is a mapping:

\epsilon : 1 \to Free_f \, 1

and a family of natural transformations:

\mu_{s t} : Free_f\, s \times Free_f\, t \to Free_f\, (s \otimes t)

The first one can simply be chosen as the function from the singleton set that picks the identity id_1 in \mathscr{C}(1, 1), the left summand of Free_f\, 1.

Let’s rewrite the type of natural transformations in the second one as an end:

\int_{s t} \mathscr{S}et(Free_f\, s \times Free_f\, t, Free_f\, (s \otimes t))

We can expand the first factor using Lambek's lemma:

\int_{s t} \mathscr{S}et((\mathscr{C}(1,s) + (f \star Free_f) s) \times Free_f\, t, Free_f\, (s \otimes t))

distribute the product over the sum:

\int_{s t} \mathscr{S}et(\mathscr{C}(1,s)\times Free_f\, t + (f \star Free_f) s \times Free_f\, t, Free_f\, (s \otimes t))

and replace functions from coproducts with products of functions:

\int_{s t} \mathscr{S}et(\mathscr{C}(1,s)\times Free_f\, t, Free_f\, (s \otimes t))  \times

\int_{s t} \mathscr{S}et((f \star Free_f) s \times Free_f\, t, Free_f\, (s \otimes t))

The first hom-set forms the base of induction and the second is the inductive step. If we call a member of \mathscr{C}(1,s) h then we can implement the first function as the lifting of (h 1 \otimes -) acting on Free_f\, t, and for the second, we can use \beta_{s t}.

Conclusion

The purpose of this post was to explore the formulation of a free lax monoidal functor without recourse to closed structure. I have to admit to a hidden agenda: The profunctor formulation of traversables involves monoidal profunctors, so that’s what I’m hoping to explore in the next post.

Appendix: Free Monad

While reviewing the draft of this post, Oleg Grenrus suggested that I derive the free monad as a fixed point of a higher order functor. The monoidal product in this case is endofunctor composition:

newtype Compose f g a = Compose (f (g a))

The higher-order functor in question can be written as:

A_f g = Id + f \circ g

or, in Haskell:

data FreeMonadF f g a = 
    DoneFM a 
  | MoreFM (Compose f g a)

instance Functor f => HFunctor (FreeMonadF f) where
  hfmap _ (DoneFM a) = DoneFM a
  hfmap nat (MoreFM (Compose fg)) = MoreFM $ Compose $ fmap nat fg
  ffmap h (DoneFM a) = DoneFM (h a)
  ffmap h (MoreFM (Compose fg)) = MoreFM $ Compose $ fmap (fmap h) fg

The free monad is given by the fixed point:

type FreeMonad f = FixH (FreeMonadF f)

as witnessed by the following instance definition:

instance Functor f => Monad (FreeMonad f) where
  return = InH . DoneFM
  (InH (DoneFM a)) >>= k = k a
  fma >>= k = join (fmap k fma)

join :: Functor f => FreeMonad f (FreeMonad f a) -> FreeMonad f a
join (InH (DoneFM x)) = x
join (InH (MoreFM (Compose ffr))) = 
    InH $ MoreFM $ Compose $ fmap join ffr
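
Note that, under modern GHC, the Monad instance above also requires Applicative (the Functor instance comes for free from the generic FixH instance). A minimal sketch that delegates to the monadic operations:

instance Functor f => Applicative (FreeMonad f) where
  pure = InH . DoneFM
  ff <*> fa = ff >>= \f -> fmap f fa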