Functional Programming



Yes, it’s this time of the year again! I started a little tradition a year ago with Stalking a Hylomorphism in the Wild. This year I was reminded of the Advent of Code by a tweet with this succinct C++ program:

This piece of code is probably unreadable to a regular C++ programmer, but makes perfect sense to a Haskell programmer.

Here’s the description of the problem: You are given a list of equal-length strings. Every string is different, but two of these strings differ only by one character. Find these two strings and return their matching part. For instance, if the two strings were “abcd” and “abxd”, you would return “abd”.

What makes this problem particularly interesting is its potential application to a much more practical task of matching strands of DNA while looking for mutations. I decided to explore the problem a little beyond the brute force approach. And, of course, I had a hunch that I might encounter my favorite wild beast–the hylomorphism.

Brute force approach

First things first. Let’s do the boring stuff: read the file and split it into lines, which are the strings we are supposed to process. So here it is:

main = do
  txt <- readFile "day2.txt"
  let cs = lines txt
  print $ findMatch cs

The real work is done by the function findMatch, which takes a list of strings and produces the answer, which is a single string.

findMatch :: [String] -> String

First, let’s define a function that calculates the distance between any two strings.

distance :: (String, String) -> Int

We’ll define the distance as the count of mismatched characters.

Here’s the idea: We have to compare strings (which, let me remind you, are of equal length) character by character. Strings are lists of characters. The first step is to take two strings and zip them together, producing a list of pairs of characters. In fact we can combine the zipping with the next operation–in this case, comparison for inequality, (/=)–using the library function zipWith. However, zipWith is defined to act on two lists, and we will want it to act on a pair of lists–a subtle distinction, which can be easily overcome by applying uncurry:

uncurry :: (a -> b -> c) -> ((a, b) -> c)

which turns a function of two arguments into a function that takes a pair. Here’s how we use it:

uncurry (zipWith (/=))

The comparison operator (/=) produces a Boolean result, True or False. We want to count the number of differences, so we’ll convert True to one, and False to zero:

fromBool :: Num a => Bool -> a
fromBool False = 0
fromBool True  = 1

(Notice that such subtleties as the difference between Bool and Int are blissfully ignored in C++.)

Finally, we’ll sum all the ones using sum. Altogether we have:

distance = sum . fmap fromBool . uncurry (zipWith (/=))
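
As a quick sanity check (my example, not part of the original post), the two strings from the problem statement differ in exactly one position:

distance ("abcd", "abxd")
> 1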

Now that we know how to find the distance between any two strings, we’ll just apply it to all possible pairs of strings. To generate all pairs, we’ll use list comprehension:

let ps = [(s1, s2) | s1 <- ss, s2 <- ss]

(In C++ code, this was done by cartesian_product.)

Our goal is to find the pair whose distance is exactly one. To this end, we’ll apply the appropriate filter:

filter ((== 1) . distance) ps

For our purposes, we’ll assume that there is exactly one such pair (if there isn’t one, we are willing to let the program fail with a fatal exception).

(s, s') = head $ filter ((== 1) . distance) ps

The final step is to remove the mismatched character:

filter (uncurry (==)) $ zip s s'

We use our friend uncurry again, because the equality operator (==) expects two arguments, and we are calling it with a pair of arguments. The result of filtering is a list of identical pairs. We’ll fmap fst to pick the first components.

findMatch :: [String] -> String
findMatch ss = 
  let ps = [(s1, s2) | s1 <- ss, s2 <- ss]
      (s, s') = head $ filter ((== 1) . distance) ps
  in fmap fst $ filter (uncurry (==)) $ zip s s'

This program produces the correct result and we could stop right here. But that wouldn’t be much fun, would it? Besides, it’s possible that other algorithms could perform better, or be more flexible when applied to a more general problem.

Data-driven approach

The main problem with our brute-force approach is that we are comparing everything with everything. As we increase the number of input strings, the number of comparisons grows quadratically. There is a standard way of cutting down on the number of comparisons: organizing the input into a neat data structure.

We are comparing strings, which are lists of characters, and list comparison is done recursively. Assume that you know that two strings share a prefix. Compare the next character. If it’s equal in both strings, recurse. If it’s not, we have a single character fault. The rest of the two strings must now match perfectly to be considered a solution. So the best data structure for this kind of algorithm should batch together strings with equal prefixes. Such a data structure is called a prefix tree, or a trie (pronounced try).

At every level of our prefix tree we’ll branch based on the current character (so the maximum branching factor is, in our case, 26). We’ll record the character, the count of strings that share the prefix that led us there, and the child trie storing all the suffixes.

data Trie = Trie [(Char, Int, Trie)]
  deriving (Show, Eq)

Here’s an example of a trie that stores just two strings, “abcd” and “abxd”. It branches after b.

   a 2
   b 2
c 1    x 1
d 1    d 1

When inserting a string into a trie, we recurse both on the characters of the string and the list of branches. When we find a branch with the matching character, we increment its count and insert the rest of the string into its child trie. If we run out of branches, we create a new one based on the current character, give it the count one, and the child trie with the rest of the string:

insertS :: Trie -> String -> Trie
insertS t "" = t
insertS (Trie bs) s = Trie (inS bs s)
  where
    inS ((x, n, t) : bs) (c : cs) =
      if c == x 
      then (c, n + 1, insertS t cs) : bs
      else (x, n, t) : inS bs (c : cs)
    inS [] (c : cs) = [(c, 1, insertS (Trie []) cs)]

We convert our input to a trie by inserting all the strings into an (initially empty) trie:

mkTrie :: [String] -> Trie
mkTrie = foldl insertS (Trie [])
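
For our two example strings, this should produce the trie pictured above (a hedged illustration of mine):

mkTrie ["abcd", "abxd"]
> Trie [('a',2,Trie [('b',2,Trie [('c',1,Trie [('d',1,Trie [])]),('x',1,Trie [('d',1,Trie [])])])])]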

Of course, there are many optimizations we could use, if we were to run this algorithm on big data. For instance, we could compress the branches as is done in radix trees, or we could sort the branches alphabetically. I won’t do it here.

I won’t pretend that this implementation is simple and elegant. And it will get even worse before it gets better. The problem is that we are dealing explicitly with recursion in multiple dimensions. We recurse over the input string, the list of branches at each node, as well as the child trie. That’s a lot of recursion to keep track of–all at once.

Now brace yourself: We have to traverse the trie starting from the root. At every branch we check the prefix count: if it’s greater than one, we have more than one string going down, so we recurse into the child trie. But there is also another possibility: we can allow a mismatch at the current level. The current characters may be different but, since we allow only one mismatch, the rest of the strings have to match exactly. That’s what the function exact does. Notice that exact t is used inside foldMap, which is a version of fold that works on monoids–here, on lists of strings.

match1 :: Trie -> [String]
match1 (Trie bs) = go bs
  where
    go :: [(Char, Int, Trie)] -> [String]
    go ((x, n, t) : bs) = 
      let a1s = if n > 1 
                then fmap (x:) $ match1 t
                else []
          a2s = foldMap (exact t) bs
          a3s = go bs -- recurse over list
      in a1s ++ a2s ++ a3s
    go [] = []
    exact t (_, _, t') = matchAll t t'

Here’s the function that finds all exact matches between two tries. It does it by generating all pairs of branches in which top characters match, and then recursing down.

matchAll :: Trie -> Trie -> [String]
matchAll (Trie bs) (Trie bs') = mAll bs bs'
  where
    mAll :: [(Char, Int, Trie)] -> [(Char, Int, Trie)] -> [String]
    mAll [] [] = [""]
    mAll bs bs' = 
      let ps = [ (c, t, t') 
               | (c,  _,  t)  <- bs
               , (c', _', t') <- bs'
               , c == c']
      in foldMap go ps
    go (c, t, t') = fmap (c:) (matchAll t t')

When mAll reaches the leaves of the trie, it returns a singleton list containing an empty string. Subsequent actions of fmap (c:) will prepend characters to this string.

Since we are expecting exactly one solution to the problem, we’ll extract it using head:

findMatch1 :: [String] -> String
findMatch1 cs = head $ match1 (mkTrie cs)
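
For the two example strings this should reproduce the expected answer:

findMatch1 ["abcd", "abxd"]
> "abd"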

Recursion schemes

As you hone your functional programming skills, you realize that explicit recursion is to be avoided at all costs. There is a small number of recursive patterns that have been codified, and they can be used to solve the majority of recursion problems (for some categorical background, see F-Algebras). Recursion itself can be expressed in Haskell as a data structure: a fixed point of a functor:

newtype Fix f = In { out :: f (Fix f) }

In particular, our trie can be generated from the following functor:

data TrieF a = TrieF [(Char, a)]
  deriving (Show, Functor)

Notice how I have replaced the recursive call to the Trie type constructor with the free type variable a. The functor in question defines the structure of a single node, leaving holes marked by the occurrences of a for the recursion. When these holes are filled with full blown tries, as in the definition of the fixed point, we recover the complete trie.

I have also made one more simplification by getting rid of the Int in every node. This is because, in the recursion scheme I’m going to use, the folding of the trie proceeds bottom-up, rather than top-down, so the multiplicity information can be passed upwards.

The main advantage of recursion schemes is that they let us use simpler, non-recursive building blocks such as algebras and coalgebras. Let’s start with a simple coalgebra that lets us build a trie from a list of strings. A coalgebra is a fancy name for a particular type of function:

type Coalgebra f x = x -> f x

Think of x as a type for a seed from which one can grow a tree. A coalgebra tells us how to use this seed to create a single node described by the functor f and populate it with (presumably smaller) seeds. We can then pass this coalgebra to a simple algorithm, which will recursively expand the seeds. This algorithm is called the anamorphism:

ana :: Functor f => Coalgebra f a -> a -> Fix f
ana coa = In . fmap (ana coa) . coa

Let’s see how we can apply it to the task of building a trie. The seed in our case is a list of strings (as per the definition of our problem, we’ll assume they are all equal length). We start by grouping these strings into bunches of strings that start with the same character. There is a library function called groupWith that does exactly that. We have to import the right library:

import GHC.Exts (groupWith)

This is the signature of the function:

groupWith :: Ord b => (a -> b) -> [a] -> [[a]]

It takes a function a -> b that converts each list element to a type that supports comparison (as per the typeclass Ord), and partitions the input into lists that compare equal under this particular ordering. In our case, we are going to extract the first character from a string using head and bunch together all strings that share that first character.

let sss = groupWith head ss

The tails of those strings will serve as seeds for the next tier of the trie. Eventually the strings will be shortened to nothing, triggering the end of recursion.

fromList :: Coalgebra TrieF [String]
fromList ss =
  -- are strings empty? (checking one is enough)
  if null (head ss) 
  then TrieF [] -- leaf
  else
    let sss = groupWith head ss
    in TrieF $ fmap mkBranch sss

The function mkBranch takes a bunch of strings sharing the same first character and creates a branch seeded with the suffixes of those strings.

mkBranch :: [String] -> (Char, [String])
mkBranch sss =
  let c = head (head sss) -- they're all the same
  in (c, fmap tail sss)

Notice that we have completely avoided explicit recursion.
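
To see a single step of the unfolding in isolation (my example), applying the coalgebra to our two test strings produces one branch holding both suffixes as seeds:

fromList ["abcd", "abxd"]
> TrieF [('a',["bcd","bxd"])]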

The next step is a little harder. We have to fold the trie. Again, all we have to define is a step that folds a single node whose children have already been folded. This step is defined by an algebra:

type Algebra f x = f x -> x

Just as the type x described the seed in a coalgebra, here it describes the accumulator–the result of the folding of a recursive data structure.

We pass this algebra to a special algorithm called a catamorphism that takes care of the recursion:

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

Notice that the folding proceeds from the bottom up: the algebra assumes that all the children have already been folded.

The hardest part of designing an algebra is figuring out what information needs to be passed up in the accumulator. We obviously need to return the final result which, in our case, is the list of strings with one mismatched character. But when we are in the middle of a trie, we have to keep in mind that the mismatch may still happen above us. So we also need a list of strings that may serve as suffixes when the mismatch occurs. We have to keep them all, because they might be matched later with strings from other branches.

In other words, we need to be accumulating two lists of strings. The first list accumulates all suffixes for future matching, the second accumulates the results: strings with one mismatch (after the mismatch has been removed). We therefore should implement the following algebra:

Algebra TrieF ([String], [String])

To understand the implementation of this algebra, consider a single node in a trie. It’s a list of branches, or pairs, whose first component is the current character, and the second a pair of lists of strings–the result of folding a child trie. The first list contains all the suffixes gathered from lower levels of the trie. The second list contains partial results: strings that were matched modulo single-character defect.

As an example, suppose that you have a node with two branches:

[ ('a', (["bcd", "efg"], ["pq"]))
, ('x', (["bcd"],        []))]

First we prepend the current character to strings in both lists using the function prep with the following signature:

prep :: (Char, ([String], [String])) -> ([String], [String])

This way we convert each branch to a pair of lists.

[ (["abcd", "aefg"], ["apq"])
, (["xbcd"],         [])]

We then merge all the lists of suffixes and, separately, all the lists of partial results, across all branches. In the example above, we concatenate the lists in the two columns.

(["abcd", "aefg", "xbcd"], ["apq"])

Now we have to construct new partial results. To do this, we create another list of accumulated strings from all branches (this time without prefixing them):

ss = concat $ fmap (fst . snd) bs

In our case, this would be the list:

["bcd", "efg", "bcd"]

To detect duplicate strings, we’ll insert them into a multiset, which we’ll implement as a map. We need to import the appropriate library:

import qualified Data.Map as M

and define a multiset Counts as:

type Counts a = M.Map a Int

Every time we add a new item, we increment the count:

add :: Ord a => Counts a -> a -> Counts a
add cs c = M.insertWith (+) c 1 cs

To insert all strings from a list, we use a fold:

mset = foldl add M.empty ss

We are only interested in items that have multiplicity greater than one. We can filter them and extract their keys:

dups = M.keys $ M.filter (> 1) mset

Here’s the complete algebra:

accum :: Algebra TrieF ([String], [String])
accum (TrieF []) = ([""], [])
accum (TrieF bs) = -- b :: (Char, ([String], [String]))
    let -- prepend chars to string in both lists
        pss = unzip $ fmap prep bs
        (ss1, ss2) = both concat pss
        -- find duplicates
        ss = concat $ fmap (fst . snd) bs
        mset = foldl add M.empty ss
        dups = M.keys $ M.filter (> 1) mset
     in (ss1, dups ++ ss2)
  where
      prep :: (Char, ([String], [String])) -> ([String], [String])
      prep (c, pss) = both (fmap (c:)) pss

I used a handy helper function that applies a function to both components of a pair:

both :: (a -> b) -> (a, a) -> (b, b)
both f (x, y) = (f x, f y)

And now for the grand finale: Since we create the trie using an anamorphism only to immediately fold it using a catamorphism, why don’t we cut out the middleman? Indeed, there is an algorithm called the hylomorphism that does just that. It takes the algebra, the coalgebra, and the seed, and returns the fully charged accumulator.

hylo :: Functor f => Algebra f a -> Coalgebra f b -> b -> a
hylo alg coa = alg . fmap (hylo alg coa) . coa

And this is how we extract and print the final result:

print $ head $ snd $ hylo accum fromList cs
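
Putting the pieces together, the complete program is just the original main with findMatch replaced by the hylomorphism (the name main2 is mine):

main2 :: IO ()
main2 = do
  txt <- readFile "day2.txt"
  let cs = lines txt
  print $ head $ snd $ hylo accum fromList cs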

Conclusion

The advantage of using the hylomorphism is that, because of Haskell’s laziness, the trie is never wholly constructed, and therefore doesn’t require large amounts of memory. At every step enough of the data structure is created as is needed for immediate computation; then it is promptly released. In fact, the definition of the data structure is only there to guide the steps of the algorithm. We use a data structure as a control structure. Since data structures are much easier to visualize and debug than control structures, it’s almost always advantageous to use them to drive computation.

In fact, you may notice that, in the very last step of the computation, our accumulator recreates the original list of strings (actually, because of laziness, they are never fully reconstructed, but that’s not the point). In reality, the characters in the strings are never copied–the whole algorithm is just a choreographed dance of internal pointers, or iterators. But that’s exactly what happens in the original C++ algorithm. We just use a higher level of abstraction to describe this dance.

I haven’t looked at the performance of various implementations. Feel free to test it and report the results. The code is available on github.

Acknowledgments

I’m grateful to the participants of the Seattle Haskell Users’ Group for many helpful comments during my presentation.


There is a lot of folklore about various data types that pop up in discussions about lenses. For instance, it’s known that FunList and Bazaar are equivalent, although I haven’t seen a proof of that. Since both data structures appear in the context of Traversable, which is of great interest to me, I decided to do some research. In particular, I was interested in translating these data structures into constructs in category theory. This is a continuation of my previous blog posts on free monoids and free applicatives. Here’s what I have found out:

  • FunList is a free applicative generated by the Store functor. This can be shown by expressing the free applicative construction using Day convolution.
  • Using the Yoneda lemma in the category of applicative functors, I can show that Bazaar is equivalent to FunList.

Let’s start with some definitions. FunList was first introduced by Twan van Laarhoven in his blog. Here’s a (slightly generalized) Haskell definition:

data FunList a b t = Done t 
                   | More a (FunList a b (b -> t))

It’s a non-regular inductive data structure, in the sense that its data constructor is recursively called with a different type, here the function type b->t. FunList is a functor in t, which can be written categorically as:

L_{a b} t = t + a \times L_{a b} (b \to t)

where b \to t is a shorthand for the hom-set Set(b, t).
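
The functoriality in t can be witnessed directly in Haskell (a sketch of mine, not part of the original definition):

instance Functor (FunList a b) where
  fmap f (Done t)   = Done (f t)
  fmap f (More x l) = More x (fmap (f .) l)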

Strictly speaking, a recursive data structure is defined as an initial algebra for a higher-order functor. I will show that the higher order functor in question can be written as:

A_{a b} g = I + \sigma_{a b} \star g

where \sigma_{a b} is the (indexed) store comonad, which can be written as:

\sigma_{a b} s = \Delta_a s \times C(b, s)

Here, \Delta_a is the constant functor, and C(b, -) is the hom-functor. In Haskell, this is equivalent to:

newtype Store a b s = Store (a, b -> s)

The standard (non-indexed) Store comonad is obtained by identifying a with b and it describes the objects of the slice category C/s (morphisms are functions f : a \to a' that make the obvious triangles commute).

If you’ve read my previous blog posts, you may recognize in A_{a b} the functor that generates a free applicative functor (or, equivalently, a free monoidal functor). Its fixed point can be written as:

L_{a b} = I + \sigma_{a b} \star L_{a b}

The star stands for Day convolution–in Haskell expressed as an existential data type:

data Day f g s where
  Day :: f a -> g b -> ((a, b) -> s) -> Day f g s

Intuitively, L_{a b} is a “list of” Store functors concatenated using Day convolution. An empty list is the identity functor, a one-element list is the Store functor, a two-element list is the Day convolution of two Store functors, and so on…

In Haskell, we would express it as:

data FunList a b t = Done t 
                   | More ((Day (Store a b) (FunList a b)) t)

To show the equivalence of the two definitions of FunList, let’s expand the definition of Day convolution inside A_{a b}:

(A_{a b} g) t = t + \int^{c d} (\Delta_a c \times C(b, c)) \times g d \times C(c \times d, t)

The coend \int^{c d} corresponds, in Haskell, to the existential data type we used in the definition of Day.

Since we have the hom-functor C(b, c) under the coend, the first step is to use the co-Yoneda lemma to “perform the integration” over c, which replaces c with b everywhere. We get:

t + \int^d \Delta_a b \times g d \times C(b \times d, t)

We can then evaluate the constant functor, \Delta_a b = a, and use the currying adjunction:

C(b \times d, t) \cong C(d, b \to t)

to get:

t + \int^d a \times g d \times C(d, b \to t)

Applying the co-Yoneda lemma again, we replace d with b \to t:

t + a \times g (b \to t)

This is exactly the functor that generates FunList. So FunList is indeed the free applicative generated by Store.

All transformations in this derivation were natural isomorphisms.

Now let’s switch our attention to Bazaar, which can be defined as:

type Bazaar a b t = forall f. Applicative f => (a -> f b) -> f t

(The actual definition of Bazaar in the lens library is even more general–it’s parameterized by a profunctor in place of the arrow in a -> f b.)
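
To make the equivalence more tangible, here is a hedged sketch of mine of the conversions in both directions (not the lens library's actual definitions). The direction from Bazaar to FunList instantiates the quantified f at FunList a b itself, which requires the free applicative instance:

instance Applicative (FunList a b) where
  pure = Done
  Done f   <*> l = fmap f l
  More x h <*> l = More x (fmap flip h <*> l)

fromFunList :: FunList a b t -> Bazaar a b t
fromFunList (Done t)   = \_ -> pure t
fromFunList (More x l) = \f -> fromFunList l f <*> f x

toFunList :: Bazaar a b t -> FunList a b t
toFunList baz = baz (\x -> More x (Done id))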

The universal quantification in the definition of Bazaar immediately suggests the application of my favorite double Yoneda trick in the functor category: The set of natural transformations (morphisms in the functor category) between two functors (objects in the functor category) is isomorphic, through Yoneda embedding, to the following end in the functor category:

Nat(h, g) \cong \int_{f \colon [C, Set]} Set(Nat(g, f), Nat(h, f))

The end is equivalent (modulo parametricity) to Haskell forall. Here, the sets of natural transformations between pairs of functors are just hom-functors in the functor category and the end over f is a set of higher-order natural transformations between them.

In the double Yoneda trick we carefully select the two functors g and h to be either representable, or somehow related to representables.

The universal quantification in Bazaar is limited to applicative functors, so we’ll pick our two functors to be free applicatives. We’ve seen previously that the higher-order functor that generates free applicatives has the form:

F g = Id + g \star F g

Here’s the version of the Yoneda embedding in which f varies over all applicative functors in the category App, and g and h are arbitrary functors in [C, Set]:

App(F h, F g) \cong \int_{f \colon App} Set(App(F g, f), App(F h, f))

The free functor F is the left adjoint to the forgetful functor U:

App(F g, f) \cong [C, Set](g, U f)

Using this adjunction, we arrive at:

[C, Set](h, U (F g)) \cong \int_{f \colon App} Set([C, Set](g, U f), [C, Set](h, U f))

We’re almost there–we just need to carefully pick the functors g and h. In order to arrive at the definition of Bazaar we want:

g = \sigma_{a b} = \Delta_a \times C(b, -)

h = C(t, -)

The right hand side becomes:

\int_{f \colon App} Set\big(\int_c Set(\Delta_a c \times C(b, c), (U f) c), \int_c Set(C(t, c), (U f) c)\big)

where I represented natural transformations as ends. The first term can be curried:

Set\big(\Delta_a c \times C(b, c), (U f) c\big) \cong Set\big(C(b, c), \Delta_a c \to (U f) c\big)

and the end over c can be evaluated using the Yoneda lemma. So can the second term. Altogether, the right hand side becomes:

\int_{f \colon App} Set\big(a \to (U f) b, (U f) t\big)

In Haskell notation, this is just the definition of Bazaar:

forall f. Applicative f => (a -> f b) -> f t

The left hand side can be written as:

\int_c Set(h c, (U (F g)) c)

Since we have chosen h to be the hom-functor C(t, -), we can use the Yoneda lemma to “perform the integration” and arrive at:

(U (F g)) t

With our choice of g = \sigma_{a b}, this is exactly the free applicative generated by Store–in other words, FunList.

This proves the equivalence of Bazaar and FunList. Notice that this proof is only valid for Set-valued functors, although a generalization to the enriched setting is relatively straightforward.

There is another family of functors, Traversable, that uses universal quantification over applicatives:

class (Functor t, Foldable t) => Traversable t where
  traverse :: forall f. Applicative f => (a -> f b) -> t a -> f (t b)

The same double Yoneda trick can be applied to it to show that it’s related to Bazaar. There is, however, a much simpler derivation, suggested to me by Derek Elkins, by changing the order of arguments:

traverse :: t a -> (forall f. Applicative f => (a -> f b) -> f (t b))

which is equivalent to:

traverse :: t a -> Bazaar a b (t b)

In view of the equivalence between Bazaar and FunList, we can also write it as:

traverse :: t a -> FunList a b (t b)

Note that this is somewhat similar to the definition of toList:

toList :: Foldable t => t a -> [a]

In a sense, FunList is able to freely accumulate the effects from traversable, so that they can be interpreted later.

Acknowledgments

I’m grateful to Edward Kmett and Derek Elkins for many discussions and valuable insights.


Abstract

The use of free monads, free applicatives, and cofree comonads lets us separate the construction of (often effectful or context-dependent) computations from their interpretation. In this paper I show how the ad hoc process of writing interpreters for these free constructions can be systematized using the language of higher order algebras (coalgebras) and catamorphisms (anamorphisms).

Introduction

Recursion schemes [meijer] are an example of a successful application of concepts from category theory to programming. The idea is that recursive data structures can be defined as initial algebras of functors. This allows a separation of concerns: the functor describes the local shape of the data structure, and the fixed point combinator builds the recursion. Operations over data structures can be likewise separated into shallow, non-recursive computations described by algebras, and generic recursive procedures described by catamorphisms. In this way, data structures often replace control structures in driving computations.

Since functors also form a category, it’s possible to define functors acting on functors. Such higher order functors show up in a number of free constructions, notably free monads, free applicatives, and cofree comonads. These free constructions have good composability properties and they provide means of separating the creation of effectful computations from their interpretation.

This paper’s contribution is to systematize the construction of such interpreters. The idea is that free constructions arise as fixed points of higher order functors, and therefore can be approached with the same algebraic machinery as recursive data structures, only at a higher level. In particular, interpreters can be constructed as catamorphisms or anamorphisms of higher order algebras/coalgebras.

Initial Algebras and Catamorphisms

The canonical example of a data structure that can be described as an initial algebra of a functor is a list. In Haskell, a list can be defined recursively:

data List a = Nil | Cons a (List a)

There is an underlying non-recursive functor:

data ListF a x = NilF | ConsF a x
instance Functor (ListF a) where
  fmap f NilF = NilF
  fmap f (ConsF a x) = ConsF a (f x)

Once we have a functor, we can define its algebras. An algebra consists of a carrier c and a structure map (evaluator). An algebra can be defined for an arbitrary functor f:

type Algebra f c = f c -> c

Here’s an example of a simple list algebra, with Int as its carrier:

sum :: Algebra (ListF Int) Int
sum NilF = 0
sum (ConsF a c) = a + c

Algebras for a given functor form a category. The initial object in this category (if it exists) is called the initial algebra. In Haskell, we call the carrier of the initial algebra Fix f. Its structure map is a function:

f (Fix f) -> Fix f

By Lambek’s lemma, the structure map of the initial algebra is an isomorphism. In Haskell, this isomorphism is given by a pair of functions: the constructor In and the destructor out of the fixed point combinator:

newtype Fix f = In { out :: f (Fix f) }

When applied to the list functor, the fixed point gives rise to an alternative definition of a list:

type List a = Fix (ListF a)

The initiality of the algebra means that there is a unique algebra morphism from it to any other algebra. This morphism is called a catamorphism and, in Haskell, can be expressed as:

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

A list catamorphism is known as a fold. Since the list functor is a sum type, its algebra consists of a value—the result of applying the algebra to NilF—and a function of two variables that corresponds to the ConsF constructor. You may recognize those two as the arguments to foldr:

foldr :: (a -> c -> c) -> c -> [a] -> c
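
For instance (my sketch; note that the algebra's name shadows Prelude's sum), folding a two-element list:

xs :: Fix (ListF Int)
xs = In (ConsF 1 (In (ConsF 2 (In NilF))))

-- cata sum xs evaluates to 3, just like foldr (+) 0 [1, 2]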

The list functor is interesting because its fixed point is a free monoid. In category theory, monoids are special objects in monoidal categories—that is, categories that define a product of two objects. In Haskell, a pair type plays the role of such a product, with the unit type as its unit (up to isomorphism).

As you can see, the list functor is the sum of a unit and a product. This formula can be generalized to an arbitrary monoidal category with a tensor product \otimes and a unit 1:

L\, a\, x = 1 + a \otimes x

Its initial algebra is a free monoid.

Higher Algebras

In category theory, once you’ve performed a construction in one category, it’s easy to perform it in another category that shares similar properties. In Haskell, this might require reimplementing the construction.

We are interested in the category of endofunctors, where objects are endofunctors and morphisms are natural transformations. Natural transformations are represented in Haskell as polymorphic functions:

type f :~> g = forall a. f a -> g a
infixr 0 :~>

In the category of endofunctors we can define (higher order) functors, which map functors to functors and natural transformations to natural transformations:

class HFunctor hf where
  hfmap :: (g :~> h) -> (hf g :~> hf h)
  ffmap :: Functor g => (a -> b) -> hf g a -> hf g b

The first function, hfmap, lifts a natural transformation; the second, ffmap, witnesses the fact that the result of a higher order functor is again a functor.

An algebra for a higher order functor hf consists of a functor f (the carrier object in the functor category) and a natural transformation (the structure map):

type HAlgebra hf f = hf f :~> f

As with regular functors, we can define an initial algebra using the fixed point combinator for higher order functors:

newtype FixH hf a = InH { outH :: hf (FixH hf) a }

Similarly, we can define a higher order catamorphism:

hcata :: HFunctor h => HAlgebra h f -> FixH h :~> f
hcata halg = halg . hfmap (hcata halg) . outH

The question is, are there any interesting examples of higher order functors and algebras that could be used to solve real-life programming problems?

Free Monad

We’ve seen the usefulness of lists, or free monoids, for structuring computations. Let’s see if we can generalize this concept to higher order functors.

The definition of a list relies on the cartesian structure of the underlying category. It turns out that there are multiple monoidal structures of interest that can be defined in the category of functors. The simplest one defines a product of two endofunctors as their composition. Any two endofunctors can be composed. The unit of functor composition is the identity functor.

If you picture endofunctors as containers, you can easily imagine a tree of lists, or a list of Maybes.

A monoid based on this particular monoidal structure in the endofunctor category is a monad. It’s an endofunctor m equipped with two natural transformations representing unit and multiplication:

class Monad m where
  eta :: Identity    :~> m
  mu  :: Compose m m :~> m

In Haskell, the components of these natural transformations are known as return and join.

A straightforward generalization of the list functor to the functor category can be written as:

L\, f\, g = 1 + f \circ g

or, in Haskell,

type FunctorList f g = Identity :+: Compose f g

where we used the operator :+: to define the coproduct of two functors:

data (f :+: g) e = Inl (f e) | Inr (g e)
infixr 7 :+:

Using more conventional notation, FunctorList can be written as:

data MonadF f g a = 
    DoneM a 
  | MoreM (f (g a))

We’ll use it to generate a free monoid in the category of endofunctors. First of all, let’s show that it’s indeed a higher order functor in the second argument g:

instance Functor f => HFunctor (MonadF f) where
  hfmap _   (DoneM a)  = DoneM a
  hfmap nat (MoreM fg) = MoreM $ fmap nat fg
  ffmap h (DoneM a)    = DoneM (h a)
  ffmap h (MoreM fg)   = MoreM $ fmap (fmap h) fg

In category theory, because of size issues, this functor doesn’t always have a fixed point. For most common choices of f (e.g., for algebraic data types), the initial higher order algebra for this functor exists, and it generates a free monad. In Haskell, this free monad can be defined as:

type FreeMonad f = FixH (MonadF f)

We can show that FreeMonad is indeed a monad by implementing return and bind:

instance Functor f => Monad (FreeMonad f) where
  return = InH . DoneM
  (InH (DoneM a))    >>= k = k a
  (InH (MoreM ffra)) >>= k = 
        InH (MoreM (fmap (>>= k) ffra))

Free monads have many applications in programming. They can be used to write generic monadic code, which can then be interpreted in different monads. A very useful property of free monads is that they can be composed using coproducts. This follows from a theorem in category theory which states that left adjoints preserve coproducts (or, more generally, colimits). Free constructions are, by definition, left adjoints to forgetful functors. This property of free monads was explored by Swierstra [swierstra] in his solution to the expression problem. I will use an example based on his paper to show how to construct monadic interpreters using higher order catamorphisms.

Free Monad Example

A stack-based calculator can be implemented directly using the state monad. Since this is a very simple example, it will be instructive to re-implement it using the free monad approach.

We start by defining a functor, in which the free parameter k represents the continuation:

data StackF k  = Push Int k
               | Top (Int -> k)
               | Pop k            
               | Add k
               deriving Functor

We use this functor to build a free monad:

type FreeStack = FreeMonad StackF

You may think of the free monad as a tree with nodes that are defined by the functor StackF. The unary constructors, like Add or Pop, create linear list-like branches; but the Top constructor branches out with one child per integer.

The level of indirection we get by separating recursion from the functor makes constructing free monad trees syntactically challenging, so it makes sense to define a helper function:

liftF :: (Functor f) => f r -> FreeMonad f r
liftF fr = InH $ MoreM $ fmap (InH . DoneM) fr

With this function, we can define smart constructors that build leaves of the free monad tree:

push :: Int -> FreeStack ()
push n = liftF (Push n ())

pop :: FreeStack ()
pop = liftF (Pop ())

top :: FreeStack Int
top = liftF (Top id)

add :: FreeStack ()
add = liftF (Add ())

All these preparations finally pay off when we are able to create small programs using do notation:

calc :: FreeStack Int
calc = do
  push 3
  push 4
  add
  x <- top
  pop
  return x

Of course, this program does nothing but build a tree. We need a separate interpreter to do the calculation. We’ll interpret our program in the state monad, with state implemented as a stack (list) of integers:

type MemState = State [Int]

The trick is to define a higher order algebra for the functor that generates the free monad and then use a catamorphism to apply it to the program. Notice that implementing the algebra is a relatively simple procedure because we don’t have to deal with recursion. All we need is to case-analyze the shallow constructors for the free monad functor MonadF, and then case-analyze the shallow constructors for the functor StackF.

runAlg :: HAlgebra (MonadF StackF) MemState
runAlg (DoneM a)  = return a
runAlg (MoreM ex) = 
  case ex of
    Top  ik  -> get >>= ik  . head
    Pop  k   -> get >>= put . tail   >> k
    Push n k -> get >>= put . (n : ) >> k
    Add  k   -> do (a: b: s) <- get
                   put (a + b : s)
                   k

The catamorphism converts the program calc into a state monad action, which can be run over an empty initial stack:

runState (hcata runAlg calc) []
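
For our calc program, this should yield the result together with the final (empty) stack:

> (7,[])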

The real bonus is the freedom to define other interpreters by simply switching the algebras. Here’s an algebra whose carrier is the Const functor:

showAlg :: HAlgebra (MonadF StackF) (Const String)

showAlg (DoneM a) = Const "Done!"
showAlg (MoreM ex) = Const $
  case ex of
    Push n k -> 
      "Push " ++ show n ++ ", " ++ getConst k
    Top ik -> 
      "Top, " ++ getConst (ik 42)
    Pop k -> 
      "Pop, " ++ getConst k
    Add k -> 
      "Add, " ++ getConst k

Running the catamorphism over this algebra will produce a listing of our program:

getConst $ hcata showAlg calc

> "Push 3, Push 4, Add, Top, Pop, Done!"

Free Applicative

There is another monoidal structure that exists in the category of functors. In general, this structure will work for functors from an arbitrary monoidal category C to Set. Here, we’ll restrict ourselves to endofunctors on Set. The product of two functors is given by Day convolution, which can be implemented in Haskell using an existential type:

data Day f g c where
  Day :: f a -> g b -> ((a, b) -> c) -> Day f g c

The intuition is that a Day convolution contains a container of some as, and another container of some bs, together with a function that can convert any pair (a, b) to c.

Day convolution is a higher order functor:

instance HFunctor (Day f) where
  hfmap nat (Day fx gy xyt) = Day fx (nat gy) xyt
  ffmap h   (Day fx gy xyt) = Day fx gy (h . xyt)

In fact, because Day convolution is symmetric up to isomorphism, it is automatically functorial in both arguments.

To complete the monoidal structure, we also need a functor that could serve as a unit with respect to Day convolution. In general, this would be the hom-functor from the monoidal unit:

C(1, -)

In our case, since 1 is the singleton set, this functor reduces to the identity functor.

We can now define monoids in the category of functors with the monoidal structure given by Day convolution. These monoids are equivalent to lax monoidal functors which, in Haskell, form the class:

class Functor f => Monoidal f where
  unit  :: f ()
  (>*<) :: f x -> f y -> f (x, y)

Lax monoidal functors are equivalent to applicative functors [mcbride], as seen in this implementation of pure and <*>:

  pure  :: a -> f a
  pure a = fmap (const a) unit
  (<*>) :: f (a -> b) -> f a -> f b
  fs <*> as = fmap (uncurry ($)) (fs >*< as)
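
The translation works in the other direction as well; given pure and <*> we can recover the monoidal operations (a sketch, not spelled out in the text):

  unit = pure ()
  fx >*< fy = (,) <$> fx <*> fy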

We can now use the same general formula, but with Day convolution as the product:

L\, f\, g = 1 + f \star g

to generate a free monoidal (applicative) functor:

data FreeF f g t =
      DoneF t
    | MoreF (Day f g t)

This is indeed a higher order functor:

instance HFunctor (FreeF f) where
    hfmap _ (DoneF x)     = DoneF x
    hfmap nat (MoreF day) = MoreF (hfmap nat day)
    ffmap f (DoneF x)     = DoneF (f x)
    ffmap f (MoreF day)   = MoreF (ffmap f day)

and it generates a free applicative functor as its initial algebra:

type FreeA f = FixH (FreeF f)
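
The parser example below applies pure and <*> directly to values of type FreeA f, so we need Functor and Applicative instances for it. Here is one hedged way to write them (my sketch, modulo the usual language extensions for instances over type synonyms):

instance Functor f => Functor (FreeA f) where
  fmap h (InH x) = InH (ffmap h x)

instance Functor f => Applicative (FreeA f) where
  pure = InH . DoneF
  InH (DoneF g)             <*> x = fmap g x
  InH (MoreF (Day fa gl k)) <*> x =
    InH (MoreF (Day fa ((,) <$> gl <*> x)
                       (\(c, (d, a)) -> k (c, d) a)))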

Free Applicative Example

The following example is taken from the paper by Capriotti and Kaposi [capriotti]. It’s an option parser for a command line tool, whose result is a user record of the following form:

data User = User
  { username :: String 
  , fullname :: String
  , uid      :: Int
  } deriving Show

A parser for an individual option is described by a functor that contains the name of the option, an optional default value for it, and a reader from string:

data Option a = Option
  { optName    :: String
  , optDefault :: Maybe a
  , optReader  :: String -> Maybe a 
  } deriving Functor

Since we don’t want to commit to a particular parser, we’ll create a parsing action using a free applicative functor:

userP :: FreeA Option User
userP  = pure User 
  <*> one (Option "username" (Just "John")  Just)
  <*> one (Option "fullname" (Just "Doe")   Just)
  <*> one (Option "uid"      (Just 0)       readInt)

where readInt is a reader of integers:

readInt :: String -> Maybe Int
readInt = readMaybe  -- readMaybe comes from Text.Read

and we used the following smart constructors:

one :: f a -> FreeA f a
one fa = InH $ MoreF $ Day fa (done ()) fst

done :: a -> FreeA f a
done a = InH $ DoneF a

We are now free to define different algebras to evaluate the free applicative expressions. Here’s one that collects all the defaults:

alg :: HAlgebra (FreeF Option) Maybe
alg (DoneF a) = Just a
alg (MoreF (Day oa mb f)) = 
  fmap f (optDefault oa >*< mb)

I used the monoidal instance for Maybe:

instance Monoidal Maybe where
  unit = Just ()
  Just x >*< Just y = Just (x, y)
  _ >*< _ = Nothing

This algebra can be run over our little program using a catamorphism:

parserDef :: FreeA Option a -> Maybe a
parserDef = hcata alg
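
Applied to userP, it should collect the defaults we provided:

parserDef userP
> Just (User {username = "John", fullname = "Doe", uid = 0})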

And here’s an algebra that collects the names of all the options:

alg2 :: HAlgebra (FreeF Option) (Const String)
alg2 (DoneF a) = Const "."
alg2 (MoreF (Day oa bs f)) = 
  fmap f (Const (optName oa) >*< bs)

Again, this uses a monoidal instance for Const:

instance Monoid m => Monoidal (Const m) where
  unit = Const mempty
  Const a >*< Const b = Const (a <> b)

We can also define the Monoidal instance for IO:

instance Monoidal IO where
  unit = return ()
  ax >*< ay = do a <- ax
                 b <- ay
                 return (a, b)

This allows us to interpret the parser in the IO monad:

alg3 :: HAlgebra (FreeF Option) IO
alg3 (DoneF a) = return a
alg3 (MoreF (Day oa bs f)) = do
    putStrLn $ optName oa
    s <- getLine
    let ma = optReader oa s
        a = fromMaybe (fromJust (optDefault oa)) ma
    fmap f $ return a >*< bs
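
As before, the interpreter is obtained by running the catamorphism (the name parserIO is mine):

parserIO :: FreeA Option a -> IO a
parserIO = hcata alg3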

Cofree Comonad

Every construction in category theory has its dual—the result of reversing all the arrows. The dual of a product is a coproduct, the dual of an algebra is a coalgebra, and the dual of a monad is a comonad.

Let’s start by defining a higher order coalgebra consisting of a carrier f, which is a functor, and a natural transformation:

type HCoalgebra hf f = f :~> hf f

An initial algebra is dualized to a terminal coalgebra. In Haskell, both are the results of applying the same fixed point combinator, reflecting the fact that Lambek’s lemma is self-dual. The dual to a catamorphism is an anamorphism. Here is its higher order version:

hana :: HFunctor hf 
     => HCoalgebra hf f -> (f :~> FixH hf)
hana hcoa = InH . hfmap (hana hcoa) . hcoa

The formula we used to generate free monoids:

1 + a \otimes x

dualizes to:

1 \times a \otimes x

and can be used to generate cofree comonoids.

A cofree functor is the right adjoint to the forgetful functor. Just like the left adjoint preserved coproducts, the right adjoint preserves products. One can therefore easily combine comonads using products (if the need arises to solve the coexpression problem).

Just like the monad is a monoid in the category of endofunctors, a comonad is a comonoid in the same category. The functor that generates a cofree comonad has the form:

type ComonadF f g = Identity :*: Compose f g

where the product of functors is defined as:

data (f :*: g) e = Both (f e) (g e)
infixr 6 :*:

Here’s the more familiar form of this functor:

data ComonadF f g e = e :< f (g e)

It is indeed a higher order functor, as witnessed by this instance:

instance Functor f => HFunctor (ComonadF f) where
  hfmap nat (e :< fge) = e :< fmap nat fge
  ffmap h (e :< fge) = h e :< fmap (fmap h) fge

A cofree comonad is the terminal coalgebra for this functor and can be written as a fixed point:

type Cofree f = FixH (ComonadF f)

Indeed, for any functor f, Cofree f is a comonad:

instance Functor f => Comonad (Cofree f) where
  extract (InH (e :< fge)) = e
  duplicate fr@(InH (e :< fge)) = 
                InH (fr :< fmap duplicate fge)

Cofree Comonad Example

The canonical example of a cofree comonad is an infinite stream:

type Stream = Cofree Identity

We can use this stream to sample a function. We’ll encapsulate this function inside the following functor (in fact, itself a comonad):

data Store a x = Store a (a -> x) 
    deriving Functor

We can use a higher order coalgebra to unpack the Store into a stream:

streamCoa :: HCoalgebra (ComonadF Identity) (Store Int)
streamCoa (Store n f) = 
    f n :< (Identity $ Store (n + 1) f)

The actual unpacking is a higher order anamorphism:

stream :: Store Int a -> Stream a
stream = hana streamCoa

We can use it, for instance, to generate a stream of squares of natural numbers:

stream (Store 0 (^2))

Since, in Haskell, the same fixed point defines a terminal coalgebra as well as an initial algebra, we are free to construct algebras and catamorphisms for streams. Here’s an algebra that converts a stream to an infinite list:

listAlg :: HAlgebra (ComonadF Identity) []
listAlg (a :< Identity as) = a : as

toList :: Stream a -> [a]
toList = hcata listAlg
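
Composing the anamorphism with the catamorphism (my example), we can sample the first few squares:

take 5 $ toList $ stream (Store 0 (^2))
> [0,1,4,9,16]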

Future Directions

In this paper I concentrated on one type of higher order functor:

1 + a \otimes x

and its dual. This would be equivalent to studying folds for lists and unfolds for streams. But the structure of the functor category is richer than that. Just like basic data types can be combined into algebraic data types, so can functors. Moreover, besides the usual sums and products, the functor category admits at least two additional monoidal structures generated by functor composition and Day convolution.

Another potentially fruitful area of exploration is the profunctor category, which is also equipped with two monoidal structures, one defined by profunctor composition, and another by Day convolution. A free monoid with respect to profunctor composition is the basis of the Haskell Arrow library [jaskelioff]. Profunctors also play an important role in the Haskell lens library [kmett].

Bibliography

  1. Erik Meijer, Maarten Fokkinga, and Ross Paterson, Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire
  2. Conor McBride, Ross Paterson, Idioms: applicative programming with effects
  3. Paolo Capriotti, Ambrus Kaposi, Free Applicative Functors
  4. Wouter Swierstra, Data types a la carte
  5. Exequiel Rivas and Mauro Jaskelioff, Notions of Computation as Monoids
  6. Edward Kmett, Lenses, Folds and Traversals
  7. Richard Bird and Lambert Meertens, Nested Datatypes
  8. Patricia Johann and Neil Ghani, Initial Algebra Semantics is Enough!

The Free Theorem for Ends

In Haskell, the end of a profunctor p is defined as a product of all diagonal elements:

forall c. p c c

together with a family of projections:

pi :: Profunctor p => forall c. (forall a. p a a) -> p c c
pi e = e

In category theory, the end must also satisfy the wedge condition which, in (type-annotated) Haskell, could be written as:

dimap f id_b . pi_b = dimap id_a f . pi_a

for any f :: a -> b.

Using a suitable formulation of parametricity, this equation can be shown to be a free theorem. Let’s first review the free theorem for functors before generalizing it to profunctors.

Functor Characterization

You may think of a functor as a container that has a shape and contents. You can manipulate the contents without changing the shape using fmap. In general, when applying fmap, you not only change the values stored in the container, you change their type as well. To really capture the shape of the container, you have to consider not only all possible mappings, but also more general relations between different contents.

A function is directional, and so is fmap, but relations don’t favor either side. They can map multiple values to the same value, and they can map one value to multiple values. Any relation on values induces a relation on containers. For a given functor F, if there is a relation a between type A and type A':

A <=a=> A'

then there is a relation between type F A and F A':

F A <=(F a)=> F A'

We call this induced relation F a.

For instance, consider the relation between students and their grades. Each student may have multiple grades (if they take multiple courses) so this relation is not a function. Given a list of students and a list of grades, we would say that the lists are related if and only if they match at each position. It means that they have to be of equal length, and the first grade on the list of grades must belong to the first student on the list of students, and so on. Of course, a list is a very simple container, but this property can be generalized to any functor we can define in Haskell using algebraic data types.

The fact that fmap doesn’t change the shape of the container can be expressed as a “theorem for free” using relations. We start with two related containers:

xs :: F A
xs':: F A'

where A and A' are related through some relation a. We want related containers to be fmapped to related containers. But we can’t use the same function to map both containers, because they contain different types. So we have to use two related functions instead. Related functions map related types to related types so, if we have:

f :: A -> B
f':: A'-> B'

and A is related to A' through a, we want B to be related to B' through some relation b. Also, we want the two functions to map related elements to related elements. So if x is related to x' through a, we want f x to be related to f' x' through b. In that case, we’ll say that f and f' are related through the relation that we call a->b:

f <=(a->b)=> f'

For instance, if f is mapping students’ SSNs to last names, and f' is mapping letter grades to numerical grades, the results will be related through the relation between students’ last names and their numerical grades.

To summarize, we require that for any two relations:

A <=a=> A'
B <=b=> B'

and any two functions:

f :: A -> B
f':: A'-> B'

such that:

f <=(a->b)=> f'

and any two containers:

xs :: F A
xs':: F A'

we have:

if            xs <=(F a)=> xs'
then   fmap f xs <=(F b)=> fmap f' xs'

This characterization can be extended, with suitable changes, to contravariant functors.

Profunctor Characterization

A profunctor is a functor of two variables. It is contravariant in the first variable and covariant in the second. A profunctor can lift two functions simultaneously using dimap:

class Profunctor p where
    dimap :: (a -> b) -> (c -> d) -> p b c -> p a d

We want dimap to preserve relations between profunctor values. We start by picking any relations a, b, c, and d between types:

A <=a=> A'
B <=b=> B'
C <=c=> C'
D <=d=> D'

For any functions:

f  :: A -> B
f' :: A'-> B'
g  :: C -> D
g' :: C'-> D'

that are related through the following relations induced by function types:

f <=(a->b)=> f'
g <=(c->d)=> g'

we define:

xs :: p B C
xs':: p B'C'

The following condition must be satisfied:

if             xs <=(p b c)=> xs'
then   (p f g) xs <=(p a d)=> (p f' g') xs'

where p f g stands for the lifting of the two functions by the profunctor p.

Here’s a quick sanity check. If b and c are functions:

b :: B'-> B
c :: C -> C'

then the relation:

xs <=(p b c)=> xs'

becomes:

xs' = dimap b c xs

If a and d are functions:

a :: A'-> A
d :: D -> D'

then these relations:

f <=(a->b)=> f'
g <=(c->d)=> g'

become:

f . a = b . f'
d . g = g'. c

and this relation:

(p f g) xs <=(p a d)=> (p f' g') xs'

becomes:

(p f' g') xs' = dimap a d ((p f g) xs)

Substituting xs', we get:

dimap f' g' (dimap b c xs) = dimap a d (dimap f g xs)

and using functoriality:

dimap (b . f') (g'. c) = dimap (f . a) (d . g)

which is identically true.

Special Case of Profunctor Characterization

We are interested in the diagonal elements of a profunctor. Let’s first specialize the general case to:

C = B
C'= B'
c = b

to get:

xs :: p B B
xs':: p B'B'

and

if             xs <=(p b b)=> xs'
then   (p f g) xs <=(p a d)=> (p f' g') xs'

Choosing the following substitutions:

A = A'= B
D = D'= B'
a = id
d = id
f = id
g'= id
f'= g

we get:

if              xs <=(p b b)=> xs'
then   (p id g) xs <=(p id id)=> (p g id) xs'

Since p id id is the identity relation, we get:

(p id g) xs = (p g id) xs'

or

dimap id g xs = dimap g id xs'

Free Theorem

We apply the free theorem to the term xs:

xs :: forall c. p c c

It must be related to itself through the relation that is induced by its type:

xs <=(forall b. p b b)=> xs

for any relation b:

B <=b=> B'

Universal quantification translates to a relation between different instantiations of the polymorphic value:

xs_B <=(p b b)=> xs_B'

Notice that we can write:

xs_B  = pi_B  xs
xs_B' = pi_B' xs

using the projections we defined earlier.

We have just shown that this equation leads to:

dimap id g xs_B = dimap g id xs_B'

which shows that the wedge condition is indeed a free theorem.

Natural Transformations

Here’s another quick application of the free theorem. The set of natural transformations between two functors F and G may be represented as an end of the following profunctor:

type NatP a b = F a -> G b
instance Profunctor NatP where
    dimap f g alpha = fmap g . alpha . fmap f
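
As written, with schematic functors F and G, this is pseudo-Haskell; a compilable version (my sketch, with the hypothetical name NatP') would wrap the function type in a newtype:

newtype NatP' f g a b = NatP' (f a -> g b)

instance (Functor f, Functor g) => Profunctor (NatP' f g) where
    dimap l r (NatP' alpha) = NatP' (fmap r . alpha . fmap l)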

The free theorem tells us that for any mu :: NatP c c:

(dimap id g) mu = (dimap g id) mu

which is the naturality condition:

mu . fmap g = fmap g . mu

It’s been known for some time that, in Haskell, naturality follows from parametricity, so this is not surprising.

Acknowledgment

I’d like to thank Edward Kmett for reviewing the draft of this post.

Bibliography

  1. Bartosz Milewski, Ends and Coends
  2. Edsko de Vries, Parametricity Tutorial, Part 1, Part 2, Contravariant Functions.
  3. Bartosz Milewski, Parametricity: Money for Nothing and Theorems for Free

This is part 26 of Categories for Programmers. Previously: Algebras for Monads. See the Table of Contents.

There are many intuitions that we may attach to morphisms in a category, but we can all agree that if there is a morphism from the object a to the object b then the two objects are in some way “related.” A morphism is, in a sense, the proof of this relation. This is clearly visible in any poset category, where a morphism is a relation. In general, there may be many “proofs” of the same relation between two objects. These proofs form a set that we call the hom-set. When we vary the objects, we get a mapping from pairs of objects to sets of “proofs.” This mapping is functorial — contravariant in the first argument and covariant in the second. We can look at it as establishing a global relationship between objects in the category. This relationship is described by the hom-functor:

C(-, =) :: C^op × C -> Set

In general, any functor like this may be interpreted as establishing a relation between objects in a category. A relation may also involve two different categories C and D. A functor, which describes such a relation, has the following signature and is called a profunctor:

p :: D^op × C -> Set

Mathematicians say that it’s a profunctor from C to D (notice the inversion), and use a slashed arrow as a symbol for it:

C ↛ D

You may think of a profunctor as a proof-relevant relation between objects of C and objects of D, where the elements of the set symbolize proofs of the relation. Whenever p a b is empty, there is no relation between a and b. Keep in mind that relations don’t have to be symmetric.

Another useful intuition is the generalization of the idea that an endofunctor is a container. A profunctor value of the type p a b could then be considered a container of bs that are keyed by elements of type a. In particular, an element of the hom-profunctor is a function from a to b.

In Haskell, a profunctor is defined as a two-argument type constructor p equipped with the method called dimap, which lifts a pair of functions, the first going in the “wrong” direction:

class Profunctor p where
    dimap :: (c -> a) -> (b -> d) -> p a b -> p c d

The functoriality of the profunctor tells us that if we have a proof that a is related to b, then we get the proof that c is related to d, as long as there is a morphism from c to a and another from b to d. Or, we can think of the first function as translating new keys to the old keys, and the second function as modifying the contents of the container.

For profunctors acting within one category, we can extract quite a lot of information from diagonal elements of the type p a a. We can prove that b is related to c as long as we have a pair of morphisms b->a and a->c. Even better, we can use a single morphism to reach off-diagonal values. For instance, if we have a morphism f::a->b, we can lift the pair <f, idb> to go from p b b to p a b:

dimap f id pbb :: p a b

Or we can lift the pair <ida, f> to go from p a a to p a b:

dimap id f paa :: p a b
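
As a quick illustration, plain functions form the hom-profunctor, and the instance (the same one found in the profunctors library) is one line — pre-composition translates new keys, post-composition modifies the contents:

instance Profunctor (->) where
    dimap f g h = g . h . f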

Dinatural Transformations

Since profunctors are functors, we can define natural transformations between them in the standard way. In many cases, though, it’s enough to define the mapping between diagonal elements of two profunctors. Such a transformation is called a dinatural transformation, provided it satisfies the commuting conditions that reflect the two ways we can connect diagonal elements to non-diagonal ones. A dinatural transformation between two profunctors p and q, which are members of the functor category [Cop × C, Set], is a family of morphisms:

αa :: p a a -> q a a

for which the following diagram commutes, for any f::a->b:

Notice that this is strictly weaker than the naturality condition. If α were a natural transformation in [Cop × C, Set], the above diagram could be constructed from two naturality squares and one functoriality condition (profunctor q preserving composition):

Notice that a component of a natural transformation α in [Cop × C, Set] is indexed by a pair of objects α a b. A dinatural transformation, on the other hand, is indexed by one object, since it only maps diagonal elements of the respective profunctors.

Ends

We are now ready to advance from “algebra” to what could be considered the “calculus” of category theory. The calculus of ends (and coends) borrows ideas and even some notation from traditional calculus. In particular, the coend may be understood as an infinite sum or an integral, whereas the end is similar to an infinite product. There is even something that resembles the Dirac delta function.

An end is a generalization of a limit, with the functor replaced by a profunctor. Instead of a cone, we have a wedge. The base of a wedge is formed by diagonal elements of a profunctor p. The apex of the wedge is an object (here, a set, since we are considering Set-valued profunctors), and the sides are a family of functions mapping the apex to the sets in the base. You may think of this family as one polymorphic function — a function that’s polymorphic in its return type:

α :: forall a . apex -> p a a

Unlike in cones, within a wedge we don’t have any functions that would connect the vertices that form the base. However, as we’ve seen earlier, given any morphism f::a->b in C, we can connect both p a a and p b b to the common set p a b. We therefore insist that the following diagram commute:

This is called the wedge condition. It can be written as:

p ida f ∘ αa = p f idb ∘ αb

Or, using Haskell notation:

dimap id f . alpha = dimap f id . alpha

We can now proceed with the universal construction and define the end of p as the universal wedge — a set e together with a family of functions π such that for any other wedge with the apex a and a family α there is a unique function h::a->e that makes all triangles commute:

πa ∘ h = αa

The symbol for the end is the integral sign, with the “integration variable” in the subscript position:

∫c p c c

Components of π are called projection maps for the end:

πa :: ∫c p c c -> p a a

Note that if C is a discrete category (no morphisms other than the identities) the end is just a global product of all diagonal entries of p across the whole category C. Later I’ll show you that, in the more general case, there is a relationship between the end and this product through an equalizer.

In Haskell, the end formula translates directly to the universal quantifier:

forall a. p a a

Strictly speaking, this is just a product of all diagonal elements of p, but the wedge condition is satisfied automatically due to parametricity (I’ll explain it in a separate blog post). For any function f :: a -> b, the wedge condition reads:

dimap f id . pi = dimap id f . pi

or, with type annotations:

dimap f idb . pib = dimap ida f . pia

where both sides of the equation have the type:

Profunctor p => (forall c. p c c) -> p a b

and pi is the polymorphic projection:

pi :: Profunctor p => forall c. (forall a. p a a) -> p c c
pi e = e

Here, type inference automatically picks the right component of e.

Just as we were able to express the whole set of commutation conditions for a cone as one natural transformation, likewise we can group all the wedge conditions into one dinatural transformation. For that we need the generalization of the constant functor Δc to a constant profunctor that maps all pairs of objects to a single object c, and all pairs of morphisms to the identity morphism for this object. A wedge is a dinatural transformation from that functor to the profunctor p. Indeed, the dinaturality hexagon shrinks down to the wedge diamond when we realize that Δc lifts all morphisms to one identity function.

Ends can also be defined for target categories other than Set, but here we’ll only consider Set-valued profunctors and their ends.

Ends as Equalizers

The commutation condition in the definition of the end can be written using an equalizer. First, let’s define two functions (I’m using Haskell notation, because mathematical notation seems to be less user-friendly in this case). These functions correspond to the two converging branches of the wedge condition:

lambda :: Profunctor p => p a a -> (a -> b) -> p a b
lambda paa f = dimap id f paa

rho :: Profunctor p => p b b -> (a -> b) -> p a b
rho pbb f = dimap f id pbb

Both functions map diagonal elements of the profunctor p to polymorphic functions of the type:

type ProdP p = forall a b. (a -> b) -> p a b

Functions lambda and rho have different types. However, we can unify their types, if we form one big product type, gathering together all diagonal elements of p:

newtype DiaProd p = DiaProd (forall a. p a a)

The functions lambda and rho induce two mappings from this product type:

lambdaP :: Profunctor p => DiaProd p -> ProdP p
lambdaP (DiaProd paa) = lambda paa

rhoP :: Profunctor p => DiaProd p -> ProdP p
rhoP (DiaProd paa) = rho paa

The end of p is the equalizer of these two functions. Remember that the equalizer picks the largest subset on which two functions are equal. In this case it picks the subset of the product of all diagonal elements for which the wedge diagrams commute.

Natural Transformations as Ends

The most important example of an end is the set of natural transformations. A natural transformation between two functors F and G is a family of morphisms picked from hom-sets of the form C(F a, G a). If it weren’t for the naturality condition, the set of natural transformations would be just the product of all these hom-sets. In fact, in Haskell, it is:

forall a. f a -> g a

The reason it works in Haskell is that naturality follows from parametricity. Outside of Haskell, though, not all diagonal sections across such hom-sets will yield natural transformations. But notice that the mapping:

<a, b> -> C(F a, G b)

is a profunctor, so it makes sense to study its end. This is the wedge condition:

Let’s just pick one element from the set ∫c C(F c, G c). The two projections will map this element to two components of a particular transformation, let’s call them:

τa :: F a -> G a
τb :: F b -> G b

In the left branch, we lift a pair of morphisms <ida, G f> using the hom-functor. You may recall that such lifting is implemented as simultaneous pre- and post-composition. When acting on τa the lifted pair gives us:

G f ∘ τa ∘ ida

The other branch of the diagram gives us:

idb ∘ τb ∘ F f

Their equality, demanded by the wedge condition, is nothing but the naturality condition for τ.
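
For a concrete Haskell illustration (my own example, not part of the original argument), take F = [] and G = Maybe. An element of the end forall a. [a] -> Maybe a is safeHead, and the wedge condition is exactly its naturality:

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Naturality: for any g, fmap g . safeHead == safeHead . fmap g
main :: IO ()
main = print ( fmap (+ 1) (safeHead [1, 2, 3 :: Int])
             , safeHead (fmap (+ 1) [1, 2, 3 :: Int]) )  -- (Just 2, Just 2)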

Coends

As expected, the dual to an end is called a coend. It is constructed from a dual to a wedge called a cowedge (pronounced co-wedge, not cow-edge).

An edgy cow?

The symbol for a coend is the integral sign with the “integration variable” in the superscript position:

∫ c p c c

Just like the end is related to a product, the coend is related to a coproduct, or a sum (in this respect, it resembles an integral, which is a limit of a sum). Rather than having projections, we have injections going from the diagonal elements of the profunctor down to the coend. If it weren’t for the cowedge conditions, we could say that the coend of the profunctor p is either p a a, or p b b, or p c c, and so on. Or we could say that there exists such an a for which the coend is just the set p a a. The universal quantifier that we used in the definition of the end turns into an existential quantifier for the coend.

This is why, in pseudo-Haskell, we would define the coend as:

exists a. p a a

The standard way of encoding existential quantifiers in Haskell is to use universally quantified data constructors. We can thus define:

data Coend p = forall a. Coend (p a a)

The logic behind this is that it should be possible to construct a coend using a value of any of the family of types p a a, no matter what a we choose.

Just like an end can be defined using an equalizer, a coend can be described using a coequalizer. All the cowedge conditions can be summarized by taking one gigantic coproduct of p a b for all possible functions b->a. In Haskell, that would be expressed as an existential type:

data SumP p = forall a b. SumP (b -> a) (p a b)

There are two ways of evaluating this sum type, by lifting the function using dimap and applying it to the profunctor p:

lambda, rho :: Profunctor p => SumP p -> DiagSum p
lambda (SumP f pab) = DiagSum (dimap f id pab)
rho    (SumP f pab) = DiagSum (dimap id f pab)

where DiagSum is the sum of diagonal elements of p:

data DiagSum p = forall a. DiagSum (p a a)

The coequalizer of these two functions is the coend. A coequalizer is obtained from DiagSum p by identifying values that are obtained by applying lambda or rho to the same argument. Here, the argument is a pair consisting of a function b->a and an element of p a b. The application of lambda and rho produces two potentially different values of the type DiagSum p. In the coend, these two values are identified, making the cowedge condition automatically satisfied.

The process of identification of related elements in a set is formally known as taking a quotient. To define a quotient we need an equivalence relation ~, a relation that is reflexive, symmetric, and transitive:

a ~ a
if a ~ b then b ~ a
if a ~ b and b ~ c then a ~ c

Such a relation splits the set into equivalence classes. Each class consists of elements that are related to each other. We form a quotient set by picking one representative from each class. A classic example is the definition of rational numbers as pairs of whole numbers with the following equivalence relation:

(a, b) ~ (c, d) iff a * d = b * c

It’s easy to check that this is an equivalence relation. A pair (a, b) is interpreted as a fraction a/b, and fractions whose numerator and denominator have a common divisor are identified. A rational number is an equivalence class of such fractions.
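
As a sanity check, here is this relation in Haskell (a small sketch of my own; denominators are assumed non-zero):

equiv :: (Integer, Integer) -> (Integer, Integer) -> Bool
equiv (a, b) (c, d) = a * d == b * c

-- equiv (1, 2) (2, 4) == True   -- 1/2 ~ 2/4
-- equiv (1, 2) (2, 3) == False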

You might recall from our earlier discussion of limits and colimits that the hom-functor is continuous, that is, it preserves limits. Dually, the contravariant hom-functor turns colimits into limits. These properties can be generalized to ends and coends, which are a generalization of limits and colimits, respectively. In particular, we get a very useful identity for converting coends to ends:

Set(∫ x p x x, c) ≅ ∫x Set(p x x, c)

Let’s have a look at it in pseudo-Haskell:

(exists x. p x x) -> c ≅ forall x. p x x -> c

It tells us that a function that takes an existential type is equivalent to a polymorphic function. This makes perfect sense, because such a function must be prepared to handle any one of the types that may be encoded in the existential type. It’s the same principle that tells us that a function that accepts a sum type must be implemented as a case statement, with a tuple of handlers, one for every type present in the sum. Here, the sum type is replaced by a coend, and a family of handlers becomes an end, or a polymorphic function.
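
Both directions of this isomorphism are easy to write down in Haskell, reusing the DiagSum type defined above (a sketch):

{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

-- A function out of the coend determines a polymorphic function...
fromCoend :: (forall a. p a a -> c) -> (DiagSum p -> c)
fromCoend f (DiagSum paa) = f paa

-- ...and vice versa.
toCoend :: (DiagSum p -> c) -> (forall a. p a a -> c)
toCoend g paa = g (DiagSum paa)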

Ninja Yoneda Lemma

The set of natural transformations that appears in the Yoneda lemma may be encoded using an end, resulting in the following formulation:

∫z Set(C(a, z), F z) ≅ F a

There is also a dual formula:

∫ z C(z, a) × F z ≅ F a

This identity is strongly reminiscent of the formula for the Dirac delta function (a function δ(a - z), or rather a distribution, that has an infinite peak at a = z). Here, the hom-functor plays the role of the delta function.

Together these two identities are sometimes called the Ninja Yoneda lemma.

To prove the second formula, we will use the consequence of the Yoneda embedding, which states that two objects are isomorphic if and only if their hom-functors are isomorphic. In other words a ≅ b if and only if there is a natural transformation of the type:

[C, Set](C(a, -), C(b, =))

that is an isomorphism.

We start by inserting the left-hand side of the identity we want to prove inside a hom-functor that’s going to some arbitrary object c:

Set(∫ z C(z, a) × F z, c)

Using the continuity argument, we can replace the coend with the end:

∫z Set(C(z, a) × F z, c)

We can now take advantage of the adjunction between the product and the exponential:

∫z Set(C(z, a), c(F z))

We can “perform the integration” by using the Yoneda lemma to get:

c(F a)

(Notice that we used the contravariant version of the Yoneda lemma, since c(F z) is contravariant in z.)
This exponential object is isomorphic to the hom-set:

Set(F a, c)

Finally, we take advantage of the Yoneda embedding to arrive at the isomorphism:

∫ z C(z, a) × F z ≅ F a

Profunctor Composition

Let’s explore further the idea that a profunctor describes a relation — more precisely, a proof-relevant relation, meaning that the set p a b represents the set of proofs that a is related to b. If we have two relations p and q we can try to compose them. We’ll say that a is related to b through the composition of q after p if there exists an intermediary object c such that both q b c and p c a are non-empty. The proofs of this new relation are all pairs of proofs of individual relations. Therefore, with the understanding that the existential quantifier corresponds to a coend, and the cartesian product of two sets corresponds to “pairs of proofs,” we can define composition of profunctors using the following formula:

(q ∘ p) a b = ∫ c p c a × q b c

Here’s the equivalent Haskell definition from Data.Profunctor.Composition, after some renaming:

data Procompose q p a b where
  Procompose :: q a c -> p c b -> Procompose q p a b

This uses generalized algebraic data type (GADT) syntax, in which a free type variable (here c) is automatically existentially quantified. The (uncurried) data constructor Procompose is thus equivalent to:

exists c. (q a c, p c b)

The unit of composition so defined is the hom-functor — this follows immediately from the Ninja Yoneda lemma. It makes sense, therefore, to ask whether there is a category in which profunctors serve as morphisms. The answer is positive, with the caveat that both associativity and identity laws for profunctor composition hold only up to natural isomorphism. Such a category, where laws are valid up to isomorphism, is called a bicategory (which is more general than a 2-category). So we have a bicategory Prof, in which objects are categories, morphisms are profunctors, and morphisms between morphisms (a.k.a. two-cells) are natural transformations. In fact, one can go even further, because besides profunctors, we also have regular functors as morphisms between categories. A category which has two types of morphisms is called a double category.

Profunctors play an important role in the Haskell lens library and in the arrow library.

Challenge

  1. Write explicitly the cowedge condition for a coend.

Next: Kan extensions.


If there is one structure that permeates category theory and, by implication, the whole of mathematics, it’s the monoid. To study the evolution of this concept is to study the power of abstraction and the idea of getting more for less, which is at the core of mathematics. When I say “evolution” I don’t necessarily mean chronological development. I’m looking at a monoid as if it were a life form evolving through various eons of abstraction.

It’s an ambitious project and I’ll have to cover a lot of material. I’ll start slowly, with the definitions of magmas and monoids, but then I will accelerate. A lot of concepts will be introduced in one or two sentences, mainly to familiarize the reader with the notation. I’ll dwell a little on monoidal categories, then breeze through ends, coends, and profunctors. I’ll show you how monads, arrows, and applicative functors arise from monoids in various monoidal categories.

The Magmas of the Hadean Eon

Monoids evolved from more primitive life forms feeding on sets. So, before even touching upon monoids, let’s talk about cartesian products, relations, and functions. You take two sets a and b (or, in the simplest case, two copies of the same set a) and form pairs of elements. That gives you a set of pairs, a.k.a., the cartesian product a×b. Any subset of such a cartesian product is called a relation. Two elements x and y are in a relation if the pair <x, y> is a member of that subset.

A function from a to b is a special kind of relation, in which every element x in the set a has one and only one element y in the set b that’s related to it. (Sometimes this is called a total function, since it’s defined for all elements of a).

Even before there were monoids, there was magma. A magma is a set with a binary operation and nothing else. So, in particular, there is no assumption of associativity, and there is no unit. A binary operation is simply a function from the cartesian product of a with itself back to a:

a × a -> a

It takes a pair of elements <x, y>, both coming from the set a, and maps it to an element of a.

It’s tempting to quote the Haskell definition of a magma:

class Magma a where
  (<>) :: a -> a -> a

but this definition is already tainted with some higher concepts like currying. An alternative would be:

class Magma a where
  (<>) :: (a, a) -> a

Here, we at least see a pair of elements that are being “multiplied.” But the pair type (a, a) is also a higher-level concept. I’ll come back to it later.

Lack of associativity means that we cannot identify (x<>y)<>z with x<>(y<>z). You have to keep the parentheses.

You might have heard of quaternions — their multiplication is associative. But not many people have heard of octonions. In fact Hamilton, who discovered quaternions, invented the word associative to disassociate himself from octonions, which are not.

If you’re familiar with continuous groups, you might know that Lie algebras are not associative.

Closer to home — most operations on floating-point numbers are not associative on modern computers because of rounding errors.

But, really, most interesting binary operations are associative. So out of the magma emerges a semigroup. In a semigroup you can drop parentheses. A non-trivial (that is, non-monoidal) example of a semigroup is the set of integers with max as the binary operation. A maximum of three numbers is the same no matter in which order you pair them. But there is no integer that’s less than or equal to every other integer, so there is no unit, and this is not a monoid.
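
In Haskell (my own sketch): Integer under max forms a Semigroup, but there is no law-abiding Monoid instance, because mempty would have to be a value less than or equal to every Integer, and none exists:

newtype MaxI = MaxI Integer deriving Show

instance Semigroup MaxI where
    MaxI a <> MaxI b = MaxI (max a b)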

Monoids of the Archean Eon

But, really, most interesting binary operations are both associative and unital. There usually is a “do nothing” element with respect to most binary operations. So life as we know it begins with a monoid.

A monoid is a set with a binary operation that is associative, and with a special element called the unit e that is neutral with respect to the binary operation. To be precise, these are the three monoid laws:

(x <> y) <> z = x <> (y <> z)
e <> x = x
x <> e = x

In Haskell, the traditional definition of a monoid uses mempty for the unit and mappend for the binary operation:

class Monoid a where
    mempty  :: a
    mappend :: a -> a -> a

As with the magma, the definition of mappend is curried. Equivalently, it could have been written as:

mappend :: (a, a) -> a

I’ll come back to this point later.

There are plenty of examples of monoids. Non-negative integers with addition, or positive integers with multiplication, are the obvious ones. Strings with concatenation are interesting too, because concatenation is not commutative.

Just like pairs of elements from two sets a and b organize themselves into the set a×b, their cartesian product, so functions between two sets organize themselves into a set — the set of functions from a to b, which we sometimes write as a->b.

This organizing principle is characteristic of sets, where everything you can think of is a set. Except when it’s more than just a set — for instance when you try to organize all sets into one large collection. This collection, or “class,” is not itself a set. You can’t have a set of all sets, but you can have a category Set of “small” sets, which are sets that belong to a “universe.” In what follows, I will confine myself to a single universe in order to dodge questions from foundational mathematicians.

Let’s now pop one level up and look at cartesian product as an operation on sets. For any two sets a and b, we can construct the set a×b. If we view this as “multiplication” of sets, we can say that sets form a magma. But do they form a monoid? Not exactly! To begin with, cartesian product is not associative. We can see it in Haskell: the type ((a, b), c) is not the same as the type (a, (b, c)). They are, however, isomorphic. There is an invertible function called the associator, from one type to the other:

alpha :: ((a, b), c) -> (a, (b, c))
alpha ((x, y), z) = (x, (y, z))

It’s just a repackaging of containers (such repackaging is, by the way, called a natural transformation).

For the unit of this “multiplication” we can pick the singleton set. In Haskell, this is the type called unit and it’s denoted by an empty pair of parentheses (). Again, the unit laws are valid up to isomorphism. There are two such isomorphisms called left and right unitors:

lambda :: ((), a) -> a
lambda ((), x) = x
rho :: (a, ()) -> a
rho (x, ()) = x

We have just exposed monoidal structure in the category Set. Set is not strictly a monoid because monoidal laws are satisfied only up to isomorphism.

There is another monoidal structure in Set. Just like cartesian product resembles multiplication, there is an operation on sets that resembles addition. It’s called disjoint sum. In Haskell it’s embodied in the type Either a b. Just like cartesian product, disjoint sum is associative up to isomorphism. The unit (or the “zero”) of this sum type is the empty set or, in Haskell, the Void type — also up to isomorphism.
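
Just as with pairs, the associativity of Either holds only up to isomorphism, witnessed by a repackaging function analogous to alpha above (a sketch):

alphaSum :: Either (Either a b) c -> Either a (Either b c)
alphaSum (Left (Left x))  = Left x
alphaSum (Left (Right y)) = Right (Left y)
alphaSum (Right z)        = Right (Right z)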

The Cambrian Explosion of Categories

The first rule of abstraction is, You do not talk about Fight Club. In the category Set, for instance, we are not supposed to admit that sets have elements. An object in Set is really a set, but you never talk about its elements. We still have functions between sets, but they become abstract morphisms, of which we only know how they compose.

Composition of functions is associative, and there is an identity function for every set, which serves as a unit of composition. We can write these rules compactly as:

(f ∘ g) ∘ h = f ∘ (g ∘ h)
id ∘ f = f
f ∘ id = f

These look exactly like monoid laws. So do functions form a monoid with respect to composition? Not quite, because you can’t compose any two functions. They must be composable, which means their endpoints have to match. In Haskell, we can compose g after f, or g ∘ f, only if:

f :: a -> b
g :: b -> c

Also, there is no single identity function, but a whole family of functions ida, one for each set a. In Haskell, we call that a polymorphic function.

But notice what happens if we restrict ourselves to just a single object a in Set. Every morphism from a back to a can be composed with any other such morphism (their endpoints always match). Moreover, we are guaranteed that among those so-called endomorphisms there is one identity morphism ida, which acts as a unit of composition.

Notice that I switched from the set/function nomenclature to the more general object/morphism naming convention of category theory. We can now forget about sets and functions and define an arbitrary category as a collection (a set in a given universe) of objects, and sets of morphisms that go between them. The only requirements are that any two composable morphisms compose, and that there is an identity morphism for every object. And that composition must be associative.

We can now forget about sets and define a monoid as a category that has only one object. The binary operation is just the composition of (endo-)morphisms. It works! We have defined a monoid without a set. Or have we?

No, we haven’t! We have just swept it under the rug — the rug being the set of morphisms. Yes, morphisms between any two objects form a set called the hom-set. In a category C, the hom-set between objects a and b is denoted by C(a, b). So we haven’t completely eliminated sets from the picture.

In the single object category M, we have only one hom-set M(a, a). The elements of this set — and we are allowed to call them elements because it’s a set — are morphisms like f and g. We can compose them, and we can call this composition “multiplication,” thus recovering our previous definition of the monoid as a set. We get associativity for free, and we have the identity morphism ida serving as the unit.

It might seem at first that we haven’t made progress and, in fact, we might have made some things more complicated by forgetting the internal structure of objects. For instance, in the category Set, it’s no longer obvious what an empty set is. You can’t say it’s a set with no elements because of the Fight Club rule. Similarly with the singleton set. Fortunately, it turns out that both these sets can be uniquely described in terms of their interactions with other sets. By that I mean the kind of functions/morphisms that connect them to other objects in Set. These object-opaque definitions are called universal constructions. For instance, the empty set is the only set that has a unique morphism going from it to every other set. The advantage of this characterization is that it can now be applied to any category. One may ask this question in any category: Is there an object that has this property? If there is, we call it the initial object. The empty set is the initial object in Set. Similarly, a singleton set is the terminal object in Set (and it’s unique up to unique isomorphism).

A cartesian product of two sets can also be defined using a universal construction, one which doesn’t mention elements (or pairs of elements). And again, this construction may be used to define a (categorical) product in other categories. Of particular interest are categories where a product exists for every pair of objects (it does in Set).

In such categories there is actually an even better way of defining a product using an adjunction. But before we can get to adjunctions, let me summarize a few million years of evolution in a few terse paragraphs.

A functor is a mapping of categories that preserves their structure. It maps objects to objects and morphisms to morphisms. In Haskell we define a functor (really, an endofunctor) as a type constructor f (a mapping from types to types) that can be lifted to functions that go between these types:

class Functor f where
  fmap :: (a -> b) -> (f a -> f b)

The mapping of morphisms must also preserve composition and identity. Functors may collapse multiple objects into one, and multiple morphisms into one, but they never break connections. You may also think of functors as embedding one category inside another.

Finally, functors can be composed in the obvious way, and there is an identity endofunctor that maps a category onto itself. It follows that categories (at least the small ones) form a category Cat in which functors serve as morphisms.

There may be many ways of embedding one category inside another, and it’s extremely useful to be able to compare such embeddings by defining mappings between them. If we have two functors F and G between two categories C and D we define a natural transformation between these functors by picking a morphism between a pair F a and G a, for every a.

In Haskell, a natural transformation between two functors f and g is a polymorphic function:

type Nat f g = forall a. f a -> g a

In general, natural transformations must obey additional naturality conditions, but in Haskell they come for free (due to parametricity).

Natural transformations may be composed, and there is an identity natural transformation from any functor to itself. It follows that functors between any two categories C and D form a category denoted by [C, D], where natural transformations play the role of morphisms. A hom-set in such a category is a set of natural transformations between two functors F and G denoted by [C, D](F, G).

An invertible natural transformation is called a natural isomorphism. If two functors are naturally isomorphic they are essentially the same.

Arthropods and their Adjoints

Using a pair of functors that are the inverse of each other we may define equivalence of categories, but there is an even more useful concept of adjoint functors that compare the structures of two non-equivalent categories. The idea is that we have a “right” functor R going from category C to D and a “left” functor L going in the other direction, from D to C.

There are two possible compositions of these functors, both resulting in round trips or endofunctors. The categories would be equivalent if those endofunctors were naturally isomorphic to identity endofunctors. But for an adjunction, we impose weaker conditions. We require that there be two natural transformations (not necessarily isomorphisms):

η :: ID -> R ∘ L
ε :: L ∘ R -> IC

The first transformation η is called the unit; and the second ε, the counit of the adjunction.

In a small category objects form sets, so it’s possible to form a cartesian product of two small categories C and D. Objects in such a category C×D are pairs of objects <c, d>, and morphisms are pairs of morphisms <f, g>.

After these preliminaries, we are ready to define the categorical product in C using an adjunction. We choose C×C as the left category. The left functor is the diagonal functor Δ that maps any object c to a pair <c, c> and any morphism f to a pair of morphisms <f, f>. Its right adjoint, if it exists, maps a pair of objects <a, b> to their categorical product a×b.

Interestingly, the terminal object can also be defined using an adjunction. This time we choose, as the left category, a singleton category with one object and one (identity) morphism. The left functor maps any object c to the singleton object. Its right adjoint, if it exists, maps the singleton object to the terminal object in C.

A category with all products and the terminal object is called a cartesian category, or cartesian monoidal category. Why monoidal? Because the operation of taking the categorical product is monoidal. It’s associative, up to isomorphism; and its unit is the terminal object.

Incidentally, this is the same monoidal structure that we’ve seen in Set, but now it’s generalized to the level of other categories. There was another monoidal structure in Set induced by the disjoint sum. Its categorical generalization is given by the coproduct, with the initial object playing the role of the unit.

But what about the set of morphisms? In Set, morphisms between two sets a and b form a hom-set, which is an object of the same category Set. In an arbitrary category C, a hom-set C(a, b) is still a set — but now it’s not an object of C. That’s why it’s called the external hom-set. However, there are categories in which each external hom-set has a corresponding object called the internal hom. This object is also called an exponential, ba. It can be defined using an adjunction, but only if the category supports products. It’s an adjunction in which the left and right categories are the same. The left endofunctor takes an object b and maps it to a product b×a, where a is an arbitrary fixed object. Its right adjoint maps an object b to the exponential ba. The counit of this adjunction:

ε :: ba × a -> b

is the evaluation function. In Haskell it has the following signature:

eval :: (a -> b, a) -> b

The Haskell function type a->b is equivalent to the exponential ba.
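
In Haskell, the adjunction itself is witnessed by the standard curry and uncurry functions (the names toExp and fromExp are mine):

-- C(z × a, b) ≅ C(z, a => b), in Haskell terms:
toExp :: ((z, a) -> b) -> (z -> (a -> b))
toExp = curry

fromExp :: (z -> (a -> b)) -> ((z, a) -> b)
fromExp = uncurry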

A category that has all products and exponentials together with the terminal object is called cartesian closed. Cartesian closed categories, or CCCs, play an important role in the semantics of programming languages.

Tensorosaurus Rex

We have already seen two very similar monoidal structures induced by products and coproducts. In mathematics, two is a crowd, so let’s look for a pattern. Both product and coproduct act as bifunctors C×C->C. Let’s call such a bifunctor a tensor product and write it as an infix operator a ⊗ b. As a bifunctor, the tensor product can also lift pairs of morphisms:

f :: a -> a'
g :: b -> b'
f ⊗ g :: a ⊗ b -> a' ⊗ b'

To define a monoid on top of a tensor product, we will require that it be associative — up to isomorphism:

α :: (a ⊗ b) ⊗ c -> a ⊗ (b ⊗ c)

We also need a unit object, which we will call i. The two unit laws are:

λ :: i ⊗ a -> a
ρ :: a ⊗ i -> a

A category with a tensor product that satisfies the above properties, plus some additional coherence conditions, is called a monoidal category.

We can now specialize the tensor product to the categorical product, in which case the unit object is the terminal object; or to the coproduct, in which case we choose the initial object as the unit. But there is an even more interesting operation that has all the properties of the tensor product. I’m talking about functor composition.

Functorosaurus

Functors between any two categories C and D form a functor category [C, D] with natural transformations playing the role of morphisms. In general, these functors don’t compose (their endpoints don’t match) unless we pick the target category to be the same as the source category.

Endofunctor Composition

In the endofunctor category [C, C] any two functors can be composed. But in [C, C] functors are objects, so functor composition becomes an operation on objects. For any two endofunctors F and G it produces a new endofunctor F∘G. It’s a binary operation, so it’s a potential candidate for a tensor product. Indeed, it is a bifunctor: it can be lifted to natural transformations, which are morphisms in [C, C]. It’s associative — in fact it’s strictly associative, the associator α is the identity natural transformation. The unit with respect to endofunctor composition is the identity functor I. So the category of endofunctors is a monoidal category.

Unlike product and coproduct, which are symmetric up to isomorphism, endofunctor composition is not symmetric. In general, there is no relation between F∘G and G∘F.
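
In Haskell, endofunctor composition is modeled by Compose from Data.Functor.Compose, and the asymmetry is plainly visible in the types (a small illustration of my own):

import Data.Functor.Compose (Compose (..))

maybeOfList :: Compose Maybe [] Int   -- Maybe ∘ []
maybeOfList = Compose (Just [1, 2, 3])

listOfMaybe :: Compose [] Maybe Int   -- [] ∘ Maybe
listOfMaybe = Compose [Just 1, Nothing, Just 3]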

Profunctor Composition

Different species of functors came up with their own composition strategies. Take for instance the profunctors, which are functors Cop×D->Set. They generalize the idea of relations between objects in C and D. The sets they map to may be thought of as sets of proofs of the relationship. An empty set means that the two objects are not related. If you want to compose two relations, you have to find an element that’s common to the image of one relation and the source of the other (relations are not, in general, symmetric). The proofs of the new composite relation are pairs of proofs of individual relations. Symbolically, if p and q are such profunctors/relations, their composition can be written as:

exists x. (p a x, q x b)

Existential quantification in Haskell translates to polymorphic construction, so the actual definition is:

data PCompose p q a b = forall x . PCompose (p a x) (q x b)

In category theory, existential quantification is encoded as the coend, which is a generalization of a colimit for profunctors. The coend formula for the composition of two profunctors reads:

(p ⊗ q) a b = ∫ z p a z × q z b

The product here is the cartesian product of sets.

Profunctors, being functors, form a category in which morphisms are natural transformations. As long as the two categories that they relate are the same, any two profunctors can be composed using a coend. So profunctor composition is a good candidate for a tensor product in such a category. It is indeed associative, up to isomorphism. But what’s the unit of profunctor composition? It turns out that the simplest profunctor — the hom-functor — is the unit of composition, because of the Yoneda lemma:

∫ z C(a, z) × p z b ≅ p a b
∫ z p a z × C(z, b) ≅ p a b

Thus profunctors Cop×C->Set form a monoidal category.

Day Convolution

Or consider Set-valued functors. They can be composed using Day convolution. For that, the category C must itself be monoidal. Day convolution of two functors C->Set is defined using a coend:

(f ★ g) a = ∫ x y f x × g y × C(x ⊗ y, a)

Here, the tensor product x ⊗ y comes from the monoidal category C; the other products are just cartesian products of sets (one of them being the hom-set).

As before, in Haskell, the coend turns into an existential quantifier, which can be written symbolically:

Day f g a = exists x y. ((f x, g y), (x, y) -> a)

and encoded as a polymorphic constructor:

data Day f g a = forall x y. Day (f x) (g y) ((x, y) -> a)

We use the fact that the category of Haskell types is monoidal with respect to cartesian product.
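
To get a feel for the definition, here’s a tiny usage sketch of my own, assuming the Day type just defined. Two Maybe values are convolved, and the function (x, y) -> a combines their contents:

dayExample :: Day Maybe Maybe Int
dayExample = Day (Just 2) (Just 3) (uncurry (+))

-- Collapsing the convolution for Maybe:
runDayMaybe :: Day Maybe Maybe a -> Maybe a
runDayMaybe (Day fx gy k) = curry k <$> fx <*> gy

-- runDayMaybe dayExample == Just 5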

We can build a monoidal category based on Day convolution. The unit with respect to Day convolution is C(i, -), the hom-functor applied to i — the unit in the monoidal category C. For instance, the left identity can be derived from:

(C(i, -) ★ g) a = ∫ x y C(i, x) × g y × C(x ⊗ y, a)

Applying the Yoneda lemma, or “integrating over x,” we get:

∫ y g y × C(i ⊗ y, a)

Considering that i is the unit of the tensor product, we can perform the second integration to get g a.

The Monozoic Era

Monoidal categories are important because they provide rich grazing grounds for monoids. In a monoidal category we can define a more general monoid. It’s an object m with some special properties. These properties replace the usual definitions of multiplication and unit.

First, let’s reformulate the definition of a set-based monoid, taking into account the fact that Set is a monoidal category with respect to cartesian product.

A monoid is a set, so it’s an object in Set — let’s call it m. Multiplication maps pairs of elements of m back to m. These pairs are just elements of the cartesian product m × m. So multiplication is defined as a function:

μ :: m × m -> m

The unit of multiplication is a special element of m. We can select this element by providing a special morphism from the singleton set to m:

η :: () -> m

We can now express associativity and unit laws as properties of these two functions. The beauty of this formulation is that it generalizes easily to any cartesian category — just replace functions with morphisms and the unit () with the terminal object. There’s no reason to stop there: we can lift this definition all the way up to a monoidal category.

A monoid in a monoidal category is an object m together with two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

Here i is the unit object with respect to the tensor product ⊗. The monoid laws can be expressed using the associator α and the two unitors, λ and ρ, of the monoidal category. Associativity says that the two ways of multiplying three copies of m agree:

μ ∘ (id ⊗ μ) ∘ α = μ ∘ (μ ⊗ id)

and the unit laws read:

μ ∘ (η ⊗ id) = λ
μ ∘ (id ⊗ η) = ρ

Having previously defined several interesting monoidal categories, we can now go digging for new monoids.

Monads

Let’s start with the category of endofunctors where the tensor product is functor composition. A monoid in the category of endofunctors is an endofunctor m and two morphisms. Remember that morphisms in a functor category are natural transformations. So we end up with two natural transformations:

μ :: m ∘ m -> m
η :: I -> m

where I is the identity functor. Their components at an object a are:

μa :: m (m a) -> m a
ηa :: a -> m a

This construct is easily recognizable as a monad. The associativity and unit laws are just monad laws. In Haskell, μa is called join and ηa is called return.
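
In terms of the standard Monad class, the two pieces of the monoid look like this (a sketch; join lives in Control.Monad):

import Control.Monad (join)

mu :: Monad m => m (m a) -> m a
mu = join

eta :: Monad m => a -> m a
eta = return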

Arrows

Let’s switch to the category of profunctors Cop×C->Set with profunctor composition as the tensor product. A monoid in that category is a profunctor ar. Multiplication is defined by a natural transformation:

μ :: ar ⊗ ar -> ar

Its component at a, b is:

μa b :: (∫ z ar a z × ar z b) -> ar a b

To simplify this formula we need a very useful identity that relates coends to ends. A hom-set that starts at a coend is equivalent to an end of the hom set:

C(∫ z p z z, y) ≅ ∫ z C(p z z, y)

Or, replacing external hom-sets with internal homs:

(∫ z p z z) -> y ≅ ∫ z (p z z -> y)

In Haskell, this formula is used to turn functions that take existential types to functions that are polymorphic:

(exists z. p z z) -> y ≅ forall z. (p z z -> y)

Intuitively, it makes perfect sense. If you want to define a function that takes an existential type, you have to be prepared to handle any type.

Using that identity, our multiplication formula can be rewritten as:

μa b :: ∫ z ((ar a z × ar z b) -> ar a b)

In Haskell, this derivation uses the existential quantifier:

mu a b = (exists z. (ar a z, ar z b)) -> ar a b

As we discussed, a function from an existential type is equivalent to a polymorphic function:

forall z. (ar a z, ar z b) -> ar a b

or, after currying and dropping the redundant quantification:

ar a z -> ar z b -> ar a b

This looks very much like a composition of morphisms in a category. In Haskell, this function is known in the infix-operator form as:

(>>>) :: ar a z -> ar z b -> ar a b

Let’s see what we get as the monoidal unit. Remember that the unit object in the profunctor category is the hom-functor C(a, b).

ηa b :: C(a, b) -> ar a b

In Haskell, this polymorphic function is traditionally called arr:

arr :: (a -> b) -> ar a b

The whole construct is known in Haskell as a pre-arrow. The full arrow is defined as a monoid in the category of strong profunctors, with strength defined as a natural transformation:

sta b :: p a b -> p (a, x) (b, x)

In Haskell, this function is called first.
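
These pieces correspond to the classes in base — here slightly simplified from Control.Category and Control.Arrow:

import Prelude hiding (id, (.))

class Category ar where
    id  :: ar a a
    (.) :: ar b c -> ar a b -> ar a c

(>>>) :: Category ar => ar a b -> ar b c -> ar a c
f >>> g = g . f

class Category ar => Arrow ar where
    arr   :: (a -> b) -> ar a b
    first :: ar a b -> ar (a, x) (b, x)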

Applicatives

There are several categorical formulations of what’s called in Haskell the applicative functor. To first approximation, Haskell’s type system is the category Set. To translate Haskell constructs to category theory, the safest approach is to just play with endofunctors in Set. But both Set and its endofunctors have a lot of extra structure, so I’d like to start in a slightly more general setting.

Let’s have a look at the monoidal category of functors [C, Set], with Day convolution as the tensor product, and C(i, -) as unit. A monoid in this category is a functor f with multiplication given by the natural transformation:

μ :: f ★ f -> f

and unit given by:

η :: C(i, -) -> f

It turns out that the existence of these two natural transformations is equivalent to the requirement that f be a lax monoidal functor, which is the basis of the definition of the applicative functor in Haskell.

A monoidal functor is a functor that maps the monoidal structure of one category to the monoidal structure of another category. It maps the tensor product, and it maps the unit object. In our case, the source category C has the monoidal structure given by the tensor product ⊗, and the target category Set is monoidal with respect to the cartesian product ×. A functor is monoidal if it doesn’t matter whether we first map two objects and then multiply them, or first multiply them and then map the result:

f x × f y ≅ f (x ⊗ y)

Also, the unit object in Set should be isomorphic to the result of mapping the unit object in C:

() ≅ f i

Here, () is the terminal object in Set and i is the unit object in C.

These conditions are relaxed in the definition of a lax monoidal functor. A lax monoidal functor replaces isomorphisms with regular unidirectional morphisms:

f x × f y -> f (x ⊗ y)
() -> f i

It can be shown that a monoid in the category [C, Set], with Day convolution as the tensor product, is equivalent to a lax monoidal functor.

The Haskell definition of Applicative doesn’t look like Day convolution or like a lax monoidal functor:

class Functor f => Applicative f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    pure :: a -> f a

You may recognize pure as a component of η, the natural transformation defining the monoid with respect to Day convolution. When you replace the category C with Set, the unit object C(i, -) turns into the identity functor. However, the operator <*> is lifted from the definition of yet another lax functor, the lax closed functor. It’s a functor that preserves the closed structure defined by the internal hom functor. In Set, the internal hom functor is just the arrow (->), hence the definition:

class Functor f => Closed f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    unit :: f ()

As long as the internal hom is defined through the adjunction with the product, a lax closed functor is equivalent to a lax monoidal functor.

Conclusion

It is pretty shocking to realize how many different animals share the same body plan — I’m talking here about the monoid as the skeleton of a myriad of different mathematical and programming constructs. And I haven’t even touched on the whole kingdom of enriched categories, where monoidal categories form the reservoir of hom-objects. Virtually all notions I’ve discussed here can be generalized to enriched categories, including functors, profunctors, the Yoneda lemma, Day convolution, and so on.

Glossary

  • Hadean Eon: Began with the formation of the Earth about 4.6 billion years ago. It’s the period before the earliest-known rocks.
  • Archean Eon: During the Archean, the Earth’s crust had cooled enough to allow the formation of continents.
  • Cambrian explosion: Relatively short evolutionary event, during which most major animal phyla appeared.
  • Arthropods: from Greek ἄρθρωσις árthrosis, “joint”
  • Tensor, from Latin tendere “to stretch”
  • Functor: from Latin fungi, “perform”

Bibliography

  1. Moggi, Notions of Computation and Monads.
  2. Rivas, Jaskelioff, Notions of Computation as Monoids.

Unlike monads, which came into programming straight from category theory, applicative functors have their origins in programming. McBride and Paterson introduced applicative functors as a programming pearl in their paper Applicative programming with effects. They also provided a categorical interpretation of applicatives in terms of strong lax monoidal functors. It’s been accepted that, just like “a monad is a monoid in the category of endofunctors,” so “an applicative is a strong lax monoidal functor.”

The so-called “tensorial strength” seems to be important in categorical semantics, and in his seminal paper Notions of computation and monads, Moggi argued that effects should be described using strong monads. It makes sense, considering that a computation is done in a context, and you should be able to make the global context available under the monad. The fact that we don’t talk much about strong monads in Haskell is due to the fact that all functors in the category Set, which underlies Haskell’s type system, have canonical strength. So why do we talk about strength when dealing with applicative functors? I have looked into this question and have come to the conclusion that there is no fundamental reason, and that it’s okay to just say:

An applicative is a lax monoidal functor

In this post I’ll discuss different equivalent categorical definitions of the applicative functor. I’ll start with a lax closed functor, then move to a lax monoidal functor, and show the equivalence of the two definitions. Then I’ll introduce the calculus of ends and show that the third definition of the applicative functor as a monoid in a suitable functor category equipped with Day convolution is equivalent to the previous ones.

Applicative as a Lax Closed Functor

The standard definition of the applicative functor in Haskell reads:

class Functor f => Applicative f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    pure :: a -> f a

At first sight it doesn’t seem to involve a monoidal structure. It looks more like preserving function arrows (I added some redundant parentheses to suggest this interpretation).

Categorically, functors that “preserve arrows” are known as closed functors. Let’s look at a definition of a closed functor f between two categories C and D. We have to assume that both categories are closed, meaning that they have internal hom-objects for every pair of objects. Internal hom-objects are also called function objects or exponentials. They are normally defined through the right adjoint to the product functor:

C(z × a, b) ≅ C(z, a => b)

To distinguish between sets of morphisms and function objects (they are the same thing in Set), I will temporarily use double arrows for function objects.

We can take a functor f and act with it on the function object a=>b in the category C. We get an object f (a=>b) in D. Or we can map the two objects a and b from C to D and then construct the function object in D: f a => f b.

We call a functor closed if the two results are isomorphic (I have subscripted the two arrows with the categories where they are defined):

f (a =>C b) ≅ (f a =>D f b)

and if the functor preserves the unit object:

iD ≅ f iC

What’s the unit object? Normally, this is the unit with respect to the same product that was used to define the function object using the adjunction. I’m saying “normally,” because it’s possible to define a closed category without a product.

Note: The two arrows and the two i’s are defined with respect to two different products. The first isomorphism must be natural in both a and b. Also, to complete the picture, there are some diagrams that must commute.

The two isomorphisms that define a closed functor can be relaxed and replaced by unidirectional morphisms. The result is a lax closed functor:

f (a => b) -> (f a => f b)
i -> f i

This looks almost like the definition of Applicative, except for one problem: how can we recover the natural transformation we call pure from a single morphism i -> f i?

One way to do it is from the position of strength. An endofunctor f has tensorial strength if there is a natural transformation:

stc a :: c ⊗ f a -> f (c ⊗ a)

Think of c as the context in which the computation f a is performed. Strength means that we can use this external context inside the computation.

In the category Set, with the tensor product replaced by cartesian product, all functors have canonical strength. In Haskell, we would define it as:

st (c, fa) = fmap ((,) c) fa

The morphism in the definition of the lax closed functor translates to:

unit :: () -> f ()

Notice that, up to isomorphism, the unit type () is the unit with respect to cartesian product. The relevant isomorphisms are:

λa :: ((), a) -> a
ρa :: (a, ()) -> a

Here’s the derivation from Rivas and Jaskelioff’s Notions of Computation as Monoids:

    a
≅  (a, ())   -- unit law, ρ-1
-> (a, f ()) -- lax unit
-> f (a, ()) -- strength
≅  f a       -- lifted unit law, f ρ

Strength is necessary if you’re starting with a lax closed (or monoidal — see the next section) endofunctor in an arbitrary closed (or monoidal) category and you want to derive pure within that category — not after you restrict it to Set.
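
Here’s the same derivation written out as code (a sketch of my own, with st the canonical strength defined above):

st :: Functor f => (c, f a) -> f (c, a)
st (c, fa) = fmap ((,) c) fa

-- a ≅ (a, ()) -> (a, f ()) -> f (a, ()) ≅ f a
pureFromUnit :: Functor f => f () -> (a -> f a)
pureFromUnit funit a = fmap fst (st (a, funit))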

There is, however, an alternative derivation using the Yoneda lemma:

f ()
≅ forall a. (() -> a) -> f a  -- Yoneda
≅ forall a. a -> f a -- because: (() -> a) ≅ a

We recover the whole natural transformation from a single value. The advantage of this derivation is that it generalizes beyond endofunctors and it doesn’t require strength. As we’ll see later, it also ties nicely with the Day-convolution definition of applicative. The Yoneda lemma only works for Set-valued functors, but so does Day convolution (there are enriched versions of both Yoneda and Day convolution, but I’m not going to discuss them here).
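
The Yoneda derivation also fits in one line of Haskell (again, my own sketch):

pureFromYoneda :: Functor f => f () -> (a -> f a)
pureFromYoneda funit a = fmap (const a) funit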

We can define the categorical version of Haskell’s applicative functor as a lax closed functor going from a closed category C to Set. It’s a functor equipped with a natural transformation:

f (a => b) -> (f a -> f b)

where a=>b is the internal hom-object in C (the second arrow is a function type in Set), and a function:

1 -> f i

where 1 is the singleton set and i is the unit object in C.

The importance of a categorical definition is that it comes with additional identities or “axioms.” A lax closed functor must be compatible with the structure of both categories. I will not go into details here, because we are really only interested in closed categories that are monoidal, where these axioms are easier to express.

The definition of a lax closed functor is easily translated to Haskell:

class Functor f => Closed f where
    (<*>) :: f (a -> b) -> f a -> f b
    unit :: f ()

Applicative as a Lax Monoidal Functor

Even though it’s possible to define a closed category without a monoidal structure, in practice we usually work with monoidal categories. This is reflected in the equivalent definition of Haskell’s applicative functor as a lax monoidal functor. In Haskell, we would write:

class Functor f => Monoidal f where
    (>*<) :: (f a, f b) -> f (a, b)
    unit :: f ()

This definition is equivalent to our previous definition of a closed functor. That’s because, as we’ve seen, a function object in a monoidal category is defined in terms of a product. We can show the equivalence in a more general categorical setting.

This time let’s start with a symmetric closed monoidal category C, in which the function object is defined through the right adjoint to the tensor product:

C(z ⊗ a, b) ≅ C(z, a => b)

As usual, the tensor product is associative and unital — with the unit object i — up to isomorphism. The symmetry is defined through natural isomorphism:

γ :: a ⊗ b -> b ⊗ a

A functor f between two monoidal categories is lax monoidal if there exist: (1) a natural transformation

f a ⊗ f b -> f (a ⊗ b)

and (2) a morphism

i -> f i

Notice that the products and units on either side of the two mappings are from different categories.

A (lax-) monoidal functor must also preserve associativity and unit laws.

For instance a triple product

f a ⊗ (f b ⊗ f c)

may be rearranged using an associator α to give

(f a ⊗ f b) ⊗ f c

then converted to

f (a ⊗ b) ⊗ f c

and then to

f ((a ⊗ b) ⊗ c)

Or it could be first converted to

f a ⊗ f (b ⊗ c)

and then to

f (a ⊗ (b ⊗ c))

These two should be equivalent under the associator in C.

Similarly, f a ⊗ i can be simplified to f a using the right unitor ρ in D. Or it could be first converted to f a ⊗ f i, then to f (a ⊗ i), and then to f a, using the right unitor in C. The two paths should be equivalent. (Similarly for the left identity.)

We will now consider functors from C to Set, with Set equipped with the usual cartesian product, and the singleton set as unit. A lax monoidal functor is defined by: (1) a natural transformation:

(f a, f b) -> f (a ⊗ b)

and (2) a choice of an element of the set f i (a function from 1 to f i picks an element from that set).

We need the target category to be Set because we want to be able to use the Yoneda lemma to show equivalence with the standard definition of applicative. I’ll come back to this point later.

The Equivalence

The definitions of a lax closed and a lax monoidal functor are equivalent when C is a closed symmetric monoidal category. The proof relies on the existence of the adjunction, in particular on its unit and counit:

ηa :: a -> (b => (a ⊗ b))
εb :: (a => b) ⊗ a -> b

For instance, let’s assume that f is lax-closed. We want to construct the mapping

(f a, f b) -> f (a ⊗ b)

First, we apply to the left hand side the lifted pair (f η, f id): the unit of the adjunction lifted by f on the first component, and the identity on the second. The lifted unit has the type:

f η :: f a -> f (b => a ⊗ b)

We get:

(f (b => a ⊗ b), f b)

Now we can use (the uncurried version of) the lax-closed morphism:

(f (b => x), f b) -> f x

to get:

f (a ⊗ b)

Conversely, assuming the lax-monoidal property we can show that the functor is lax-closed, that is to say, implement the following function:

(f (a => b), f a) -> f b

First we use the lax monoidal morphism on the left hand side:

f ((a => b) ⊗ a)

and then use the counit (a.k.a. the evaluation morphism) to get the desired result f b.

There is yet another presentation of applicatives using Day convolution. But before we get there, we need a little refresher on calculus.

Calculus of Ends

Ends and coends are very useful constructs generalizing limits and colimits. They are defined through universal constructions. They have a few fundamental properties that are used over and over in categorical calculations. I’ll just introduce the notation and a few important identities. We’ll be working in a symmetric monoidal category C with functors from C to Set and profunctors from Cop×C to Set. The end of a profunctor p is a set denoted by:

∫a p a a

The most important thing about ends is that a set of natural transformations between two functors f and g can be represented as an end:

[C, Set](f, g) = ∫a Set(f a, g a)

In Haskell, the end corresponds to universal quantification over a functor of mixed variance. For instance, the natural transformation formula takes the familiar form:

forall a. f a -> g a

The Yoneda lemma, which deals with natural transformations, can also be written using an end:

∫z (C(a, z) -> f z) ≅ f a

In Haskell, we can write it as the equivalence:

forall z. ((a -> z) -> f z) ≅ f a

which is a generalization of the continuation passing transform.
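
Here’s a sketch of the two directions of this isomorphism in Haskell (the function names are mine; the RankNTypes extension is needed for the nested forall):

toYoneda :: Functor f => f a -> (forall z. (a -> z) -> f z)
toYoneda fa = \g -> fmap g fa

fromYoneda :: (forall z. (a -> z) -> f z) -> f a
fromYoneda k = k id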

The dual notion of coend is similarly written using an integral sign, with the “integration variable” in the superscript position:

∫ a p a a

In pseudo-Haskell, a coend is represented by an existential quantifier. It’s possible to define existential data types in Haskell by converting existential quantification to a universal one. The relevant identity in terms of coends and ends reads:

(∫ z p z z) -> y ≅ ∫ z (p z z -> y)

In Haskell, this formula is used to turn functions that take existential types into functions that are polymorphic:

(exists z. p z z) -> y ≅ forall z. (p z z -> y)

Intuitively, it makes perfect sense. If you want to define a function that takes an existential type, you have to be prepared to handle any type.

The equivalent of the Yoneda lemma for coends reads:

∫ z f z × C(z, a) ≅ f a

or, in pseudo-Haskell:

exists z. (f z, z -> a) ≅ f a

(The intuition is that the only thing you can do with this pair is to fmap the function over the first component.)
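
The pseudo-Haskell can be made real with the ExistentialQuantification extension. Here’s a sketch (the names are mine); lowering is exactly the fmap described above:

data CoYoneda f a = forall z. CoYoneda (f z) (z -> a)

liftCoYoneda :: f a -> CoYoneda f a
liftCoYoneda fa = CoYoneda fa id

lowerCoYoneda :: Functor f => CoYoneda f a -> f a
lowerCoYoneda (CoYoneda fz g) = fmap g fz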

There is also a contravariant version of this identity:

∫ z C(a, z) × f z ≅ f a

where f is a contravariant functor (a.k.a., a presheaf). In pseudo-Haskell:

exists z. (a -> z, f z) ≅ f a

(The intuition is that the only thing you can do with this pair is to apply the contramap of the first component to the second component.)

Using coends we can define a tensor product in the category of functors [C, Set]. This product is called Day convolution:

(f ★ g) a = ∫ x y f x × g y × C(x ⊗ y, a)

It is a bifunctor in that category (read, it can be used to lift natural transformations). It’s associative and symmetric up to isomorphism. It also has a unit — the hom-functor C(i, -), where i is the monoidal unit in C. In other words, Day convolution imbues the category [C, Set] with monoidal structure.
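
For endofunctors on Hask, where the tensor product is the ordinary pair, Day convolution can be sketched as an existential data type (a curried variant of this type is available in the kan-extensions package as Data.Functor.Day; the helper name below is mine, and the code needs the ExistentialQuantification and RankNTypes extensions):

data Day f g a = forall x y. Day (f x) (g y) ((x, y) -> a)

-- Day lifts a pair of natural transformations, witnessing that it's
-- a bifunctor in the functor category
mapDay :: (forall x. f x -> f' x)
       -> (forall y. g y -> g' y)
       -> Day f g a -> Day f' g' a
mapDay nf ng (Day fx gy k) = Day (nf fx) (ng gy) k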

Let’s verify the unit laws.

(C(i, -) ★ g) a = ∫ x y C(i, x) × g y × C(x ⊗ y, a)

We can use the contravariant Yoneda to “integrate over x” to get:

∫ y g y × C(i ⊗ y, a)

Considering that i is the unit of the tensor product in C, we get:

∫ y g y × C(y, a)

Covariant Yoneda lets us “integrate over y” to get the desired g a. The same method works for the right unit law.

Applicative as a Monoid

Given a monoidal category, we can always define a monoid as an object m equipped with two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

satisfying the laws of associativity and unitality.

We have shown that the functor category [C, Set] (with C a symmetric monoidal category) is monoidal under Day convolution. An object in this category is a functor f. The two morphisms that would make it a candidate for a monoid are natural transformations:

μ :: f ★ f -> f
η :: C(i, -) -> f

The a component of the natural transformation μ can be rewritten as:

(∫ x y f x × f y × C(x ⊗ y, a)) -> f a

which is equivalent to:

∫x ∫y (f x × f y × C(x ⊗ y, a) -> f a)

or, upon currying:

∫x ∫y (f x, f y) -> C(x ⊗ y, a) -> f a

It turns out that a monoid defined this way is equivalent to a lax monoidal functor. This was shown by Rivas and Jaskelioff. The following derivation is due to Bob Atkey.

The trick is to start with the whole set of natural transformations from f★f to f. The multiplication μ is just one of them. We’ll express this set of natural transformations as an end:

∫a ((f ★ f) a -> f a)

Plugging in the formula for the a component of μ, we get:

∫a ∫x ∫y (f x, f y) -> C(x ⊗ y, a) -> f a

The end over a does not involve the first argument, so we can move the integral sign:

∫x ∫y (f x, f y) -> ∫a (C(x ⊗ y, a) -> f a)

Then we use the Yoneda lemma to “perform the integration” over a:

∫x ∫y (f x, f y) -> f (x ⊗ y)

You may recognize this as a set of natural transformations that define a lax monoidal functor. We have established a one-to-one correspondence between these natural transformations and the ones defining monoidal multiplication using Day convolution.

The remaining part is to show the equivalence between the unit with respect to Day convolution and the second part of the definition of the lax monoidal functor, the morphism:

1 -> f i

We start with the set of natural transformations that contains our η:

∫a ((i -> a) -> f a)

By Yoneda, this is just f i. Picking an element from a set is equivalent to defining a morphism from the singleton set 1, so for any choice of η we get:

1 -> f i

and vice versa. The two definitions are equivalent.

Notice that the monoidal unit η under Day convolution becomes the definition of pure in the Haskell version of applicative. Indeed, when we replace the category C with Set, f becomes an endofunctor, and the unit of Day convolution C(i, -) becomes the identity functor Id. We get:

η :: Id -> f

or, in components:

pure :: a -> f a

So, strictly speaking, the Haskell definition of Applicative mixes the elements of the lax closed functor and the monoidal unit under Day convolution.
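
Specializing to endofunctors, this correspondence can be sketched in Haskell. The code below reuses the Day type from the earlier sketch, and the names are mine; it extracts a Day-monoid structure from any Applicative, and recovers (<*>) from the multiplication (RankNTypes again):

-- μ and η for an Applicative endofunctor
mu :: Applicative f => Day f f a -> f a
mu (Day fx fy k) = fmap k ((,) <$> fx <*> fy)

eta :: Applicative f => a -> f a   -- the unit: Id -> f, i.e. pure
eta = pure

-- conversely, (<*>) from the Day-monoid multiplication
apFromMu :: (forall x. Day f f x -> f x) -> f (a -> b) -> f a -> f b
apFromMu m fg fa = m (Day fg fa (\(g, a) -> g a))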

Acknowledgments

I’m grateful to Mauro Jaskelioff and Exequiel Rivas for correspondence and to Bob Atkey, Dimitri Chikhladze, and Mike Shulman for answering my questions on Math Overflow.


This is part 23 of Categories for Programmers. Previously: Monads Categorically. See the Table of Contents.

Now that we have covered monads, we can reap the benefits of duality and get comonads for free simply by reversing the arrows and working in the opposite category.

Recall that, at the most basic level, monads are about composing Kleisli arrows:

a -> m b

where m is a functor that is a monad. If we use the letter w (upside down m) for the comonad, we can define co-Kleisli arrows as morphisms of the type:

w a -> b

The analog of the fish operator for co-Kleisli arrows is defined as:

(=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)

For co-Kleisli arrows to form a category we also have to have an identity co-Kleisli arrow, which is called extract:

extract :: w a -> a

This is the dual of return. We also have to impose the laws of associativity as well as left- and right-identity. Putting it all together, we could define a comonad in Haskell as:

class Functor w => Comonad w where
    (=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)
    extract :: w a -> a

In practice, we use slightly different primitives, as we’ll see shortly.

The question is, what’s the use for comonads in programming?

Programming with Comonads

Let’s compare the monad with the comonad. A monad provides a way of putting a value in a container using return. It doesn’t give you access to a value or values stored inside. Of course, data structures that implement monads might provide access to their contents, but that’s considered a bonus. There is no common interface for extracting values from a monad. And we’ve seen the example of the IO monad that prides itself on never exposing its contents.

A comonad, on the other hand, provides the means of extracting a single value from it. It does not give the means to insert values. So if you want to think of a comonad as a container, it always comes pre-filled with contents, and it lets you peek at it.

Just as a Kleisli arrow takes a value and produces some embellished result — it embellishes it with context — a co-Kleisli arrow takes a value together with a whole context and produces a result. It’s an embodiment of contextual computation.

The Product Comonad

Remember the reader monad? We introduced it to tackle the problem of implementing computations that need access to some read-only environment e. Such computations can be represented as pure functions of the form:

(a, e) -> b

We used currying to turn them into Kleisli arrows:

a -> (e -> b)

But notice that these functions already have the form of co-Kleisli arrows. Let’s massage their arguments into the more convenient functor form:

data Product e a = Prod e a
  deriving Functor

We can easily define the composition operator by making the same environment available to the arrows that we are composing:

(=>=) :: (Product e a -> b) -> (Product e b -> c) -> (Product e a -> c)
f =>= g = \(Prod e a) ->
    let b = f (Prod e a)
        c = g (Prod e b)
    in  c

The implementation of extract simply ignores the environment:

extract (Prod e a) = a

Not surprisingly, the product comonad can be used to perform exactly the same computations as the reader monad. In a way, the comonadic implementation of the environment is more natural — it follows the spirit of “computation in context.” On the other hand, monads come with the convenient syntactic sugar of the do notation.

The connection between the reader monad and the product comonad goes deeper, having to do with the fact that the reader functor is the right adjoint of the product functor. In general, though, comonads cover different notions of computation than monads. We’ll see more examples later.

It’s easy to generalize the Product comonad to arbitrary product types including tuples and records.
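
For instance, here’s a sketch of the same structure on Haskell’s built-in pair (the helper names are mine):

extractPair :: (e, a) -> a
extractPair (_, a) = a

-- co-Kleisli composition: both arrows see the same environment e
composePair :: ((e, a) -> b) -> ((e, b) -> c) -> ((e, a) -> c)
composePair f g = \(e, a) -> g (e, f (e, a))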

Dissecting the Composition

Continuing the process of dualization, we could go ahead and dualize monadic bind and join. Alternatively, we can repeat the process we used with monads, where we studied the anatomy of the fish operator. This approach seems more enlightening.

The starting point is the realization that the composition operator must produce a co-Kleisli arrow that takes w a and produces a c. The only way to produce a c is to apply the second function to an argument of the type w b:

(=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)
f =>= g = g ... 

But how can we produce a value of type w b that could be fed to g? We have at our disposal the argument of type w a and the function f :: w a -> b. The solution is to define the dual of bind, which is called extend:

extend :: (w a -> b) -> w a -> w b

Using extend we can implement composition:

f =>= g = g . extend f

Can we next dissect extend? You might be tempted to say, why not just apply the function w a -> b to the argument w a, but then you quickly realize that you’d have no way of converting the resulting b to w b. Remember, the comonad provides no means of lifting values. At this point, in the analogous construction for monads, we used fmap. The only way we could use fmap here would be if we had something of the type w (w a) at our disposal. If we could only turn w a into w (w a)! And, conveniently, that would be exactly the dual of join. We call it duplicate:

duplicate :: w a -> w (w a)

So, just like with the definitions of the monad, we have three equivalent definitions of the comonad: using co-Kleisli arrows, extend, or duplicate. Here’s the Haskell definition taken directly from the Control.Comonad library:

class Functor w => Comonad w where
  extract :: w a -> a
  duplicate :: w a -> w (w a)
  duplicate = extend id
  extend :: (w a -> b) -> w a -> w b
  extend f = fmap f . duplicate

Provided are the default implementations of extend in terms of duplicate and vice versa, so you only need to override one of them.

The intuition behind these functions is based on the idea that, in general, a comonad can be thought of as a container filled with values of type a (the product comonad was a special case of just one value). There is a notion of the “current” value, one that’s easily accessible through extract. A co-Kleisli arrow performs some computation that is focused on the current value, but it has access to all the surrounding values. Think of Conway’s Game of Life. Each cell contains a value (usually just True or False). A comonad corresponding to the Game of Life would be a grid of cells focused on the “current” cell.

So what does duplicate do? It takes a comonadic container w a and produces a container of containers w (w a). The idea is that each of these containers is focused on a different a inside w a. In the game of life, you would get a grid of grids, each cell of the outer grid containing an inner grid that’s focused on a different cell.

Now look at extend. It takes a co-Kleisli arrow and a comonadic container w a filled with as. It applies the computation to all of these as, replacing them with bs. The result is a comonadic container filled with bs. extend does it by shifting the focus from one a to another and applying the co-Kleisli arrow to each of them in turn. In the game of life, the co-Kleisli arrow would calculate the new state of the current cell. To do that, it would look at its context — presumably its nearest neighbors. The default implementation of extend illustrates this process. First we call duplicate to produce all possible foci and then we apply f to each of them.

The Stream Comonad

This process of shifting the focus from one element of the container to another is best illustrated with the example of an infinite stream. Such a stream is just like a list, except that it doesn’t have the empty constructor:

data Stream a = Cons a (Stream a)

It’s trivially a Functor:

instance Functor Stream where
    fmap f (Cons a as) = Cons (f a) (fmap f as)

The focus of a stream is its first element, so here’s the implementation of extract:

extract (Cons a _) = a

duplicate produces a stream of streams, each focused on a different element.

duplicate (Cons a as) = Cons (Cons a as) (duplicate as)

The first element is the original stream, the second element is the tail of the original stream, the third element is its tail, and so on, ad infinitum.

Here’s the complete instance:

instance Comonad Stream where
    extract (Cons a _) = a
    duplicate (Cons a as) = Cons (Cons a as) (duplicate as)

This is a very functional way of looking at streams. In an imperative language, we would probably start with a method advance that shifts the stream by one position. Here, duplicate produces all shifted streams in one fell swoop. Haskell’s laziness makes this possible and even desirable. Of course, to make a Stream practical, we would also implement the analog of advance:

tail :: Stream a -> Stream a
tail (Cons a as) = as

but it’s never part of the comonadic interface.

If you have any experience with digital signal processing, you’ll see immediately that a co-Kleisli arrow for a stream is just a digital filter, and extend produces a filtered stream.

As a simple example, let’s implement the moving average filter. Here’s a function that sums n elements of a stream:

sumS :: Num a => Int -> Stream a -> a
sumS n (Cons a as) = if n <= 0 then 0 else a + sumS (n - 1) as

Here’s the function that calculates the average of the first n elements of the stream:

average :: Fractional a => Int -> Stream a -> a
average n stm = (sumS n stm) / (fromIntegral n)

Partially applied average n is a co-Kleisli arrow, so we can extend it over the whole stream:

movingAvg :: Fractional a => Int -> Stream a -> Stream a
movingAvg n = extend (average n)

The result is the stream of running averages.
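
As a quick check, here’s the filter applied to the stream of consecutive numbers (the helpers nats and takeS are mine, added just for the demonstration):

nats :: Stream Double
nats = go 0
  where go n = Cons n (go (n + 1))

takeS :: Int -> Stream a -> [a]
takeS n (Cons a as)
  | n <= 0    = []
  | otherwise = a : takeS (n - 1) as

-- takeS 5 (movingAvg 3 nats) evaluates to [1.0, 2.0, 3.0, 4.0, 5.0]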

A stream is an example of a unidirectional, one-dimensional comonad. It can be easily made bidirectional or extended to two or more dimensions.

Comonad Categorically

Defining a comonad in category theory is a straightforward exercise in duality. As with the monad, we start with an endofunctor T. The two natural transformations, η and μ, that define the monad are simply reversed for the comonad:

ε :: T -> I
δ :: T -> T2

The components of these transformations correspond to extract and duplicate. Comonad laws are the mirror image of monad laws. No big surprise here.
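
For reference, here are the comonad laws in Haskell notation, the mirror images of the monad laws expressed with join and return:

extract . duplicate      = id
fmap extract . duplicate = id
duplicate . duplicate    = fmap duplicate . duplicate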

Then there is the derivation of the monad from an adjunction. Duality reverses an adjunction: the left adjoint becomes the right adjoint and vice versa. And, since the composition R ∘ L defines a monad, L ∘ R must define a comonad. The counit of the adjunction:

ε :: L ∘ R -> I

is indeed the same ε that we see in the definition of the comonad — or, in components, as Haskell’s extract. We can also use the unit of the adjunction:

η :: I -> R ∘ L

to insert an R ∘ L in the middle of L ∘ R and produce L ∘ R ∘ L ∘ R. Making T2 from T defines the δ, and that completes the definition of the comonad.

We’ve also seen that the monad is a monoid. The dual of this statement would require the use of a comonoid, so what’s a comonoid? The original definition of a monoid as a single-object category doesn’t dualize to anything interesting. When you reverse the direction of all endomorphisms, you get another monoid. Recall, however, that in our approach to a monad, we used a more general definition of a monoid as an object in a monoidal category. The construction was based on two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

The reversal of these morphisms produces a comonoid in a monoidal category:

δ :: m -> m ⊗ m
ε :: m -> i

One can write a definition of a comonoid in Haskell:

class Comonoid m where
  split   :: m -> (m, m)
  destroy :: m -> ()

but it is rather trivial. Obviously destroy ignores its argument.

destroy _ = ()

split is just a pair of functions:

split x = (f x, g x)

Now consider comonoid laws that are dual to the monoid unit laws.

lambda . bimap destroy id . split = id
rho . bimap id destroy . split = id

Here, lambda and rho are the left and right unitors, respectively (see the definition of monoidal categories). Plugging in the definitions, we get:

lambda (bimap destroy id (split x))
= lambda (bimap destroy id (f x, g x))
= lambda ((), g x)
= g x

which proves that g = id. Similarly, the second law expands to f = id. In conclusion:

split x = (x, x)

which shows that in Haskell (and, in general, in the category Set) every object is a trivial comonoid.

Fortunately there are other more interesting monoidal categories in which to define comonoids. One of them is the category of endofunctors. And it turns out that, just like the monad is a monoid in the category of endofunctors,

The comonad is a comonoid in the category of endofunctors.

The Store Comonad

Another important example of a comonad is the dual of the state monad. It’s called the costate comonad or, alternatively, the store comonad.

We’ve seen before that the state monad is generated by the adjunction that defines the exponentials:

L z = z × s
R a = s ⇒ a

We’ll use the same adjunction to define the costate comonad. A comonad is defined by the composition L ∘ R:

L (R a) = (s ⇒ a) × s

Translating this to Haskell, we start with the adjunction between the Product functor on the left and the Reader functor on the right. Composing Product after Reader is equivalent to the following definition:

data Store s a = Store (s -> a) s

The counit of the adjunction taken at the object a is the morphism:

εa :: ((s ⇒ a) × s) -> a

or, in Haskell notation:

counit (Prod (Reader f) s) = f s

This becomes our extract:

extract (Store f s) = f s

The unit of the adjunction is:

unit :: a -> Reader s (Product a s)
unit a = Reader (\s -> Prod a s)

We construct δ, or duplicate, as the horizontal composition:

δ :: L ∘ R -> L ∘ R ∘ L ∘ R
δ = L ∘ η ∘ R

We have to sneak η through the leftmost L, which is the Product functor. It means acting with η, or Store f, on the left component of the pair (that’s what fmap for Product would do). We get:

duplicate (Store f s) = Store (Store f) s

(Remember that, in the formula for δ, L and R stand for identity natural transformations whose components are identity morphisms.)

Here’s the complete definition of the Store comonad:

instance Comonad (Store s) where
  extract (Store f s) = f s
  duplicate (Store f s) = Store (Store f) s

You may think of the Reader part of Store as a generalized container of as that are keyed using elements of the type s. For instance, if s is Int, Reader Int a is an infinite bidirectional stream of as. Store pairs this container with a value of the key type, in this case an Int, and extract uses this integer to index into the infinite stream. You may think of the second component of Store as the current position.

Continuing with this example, duplicate creates a new infinite stream indexed by an Int. This stream contains streams as its elements. In particular, at the current position, it contains the original stream. But if you use some other Int (positive or negative) as the key, you’d obtain a shifted stream positioned at that new index.

In general, you can convince yourself that when extract acts on the duplicated Store it produces the original Store (in fact, the identity law for the comonad states that extract . duplicate = id).

The Store comonad plays an important role as the theoretical basis for the Lens library. Conceptually, the Store s a comonad encapsulates the idea of “focusing” (like a lens) on a particular substructure of the data type a using the type s as an index. In particular, a function of the type:

a -> Store s a

is equivalent to a pair of functions:

set :: a -> s -> a
get :: a -> s

If a is a product type, set could be implemented as setting the field of type s inside of a while returning the modified version of a. Similarly, get could be implemented to read the value of the s field from a. We’ll explore these ideas more in the next section.
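
Here’s a sketch of this equivalence (the conversion functions are mine; get and set are as in the text):

toStore :: (a -> s) -> (a -> s -> a) -> (a -> Store s a)
toStore get set = \a -> Store (set a) (get a)

fromStore :: (a -> Store s a) -> (a -> s, a -> s -> a)
fromStore f = (get, set)
  where get a   = let Store _ s = f a in s
        set a s = let Store g _ = f a in g s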

Challenges

  1. Implement Conway’s Game of Life using the Store comonad. Hint: What type do you pick for s?

Acknowledgments

I’m grateful to Edward Kmett for reading the draft of this post and pointing out flaws in my reasoning.

Next: F-Algebras.


This is part 20 of Categories for Programmers. Previously: Free/Forgetful Adjunctions. See the Table of Contents.

Programmers have developed a whole mythology around monads. It’s supposed to be one of the most abstract and difficult concepts in programming. There are people who “get it” and those who don’t. For many, the moment when they understand the concept of the monad is like a mystical experience. The monad abstracts the essence of so many diverse constructions that we simply don’t have a good analogy for it in everyday life. We are reduced to groping in the dark, like those blind men touching different parts of the elephant and exclaiming triumphantly: “It’s a rope,” “It’s a tree trunk,” or “It’s a burrito!”

Let me set the record straight: The whole mysticism around the monad is the result of a misunderstanding. The monad is a very simple concept. It’s the diversity of applications of the monad that causes the confusion.

As part of research for this post I looked up duct tape (a.k.a., duck tape) and its applications. Here’s a little sample of things that you can do with it:

  • sealing ducts
  • fixing CO2 scrubbers on board Apollo 13
  • wart treatment
  • fixing Apple’s iPhone 4 dropped call issue
  • making a prom dress
  • building a suspension bridge

Now imagine that you didn’t know what duct tape was and you were trying to figure it out based on this list. Good luck!

So I’d like to add one more item to the collection of “the monad is like…” clichés: The monad is like duct tape. Its applications are widely diverse, but its principle is very simple: it glues things together. More precisely, it composes things.

This partially explains the difficulties a lot of programmers, especially those coming from an imperative background, have with understanding the monad. The problem is that we are not used to thinking of programming in terms of function composition. This is understandable. We often give names to intermediate values rather than pass them directly from function to function. We also inline short segments of glue code rather than abstract them into helper functions. Here’s an imperative-style implementation of the vector-length function in C:

double vlen(double * v) {
  double d = 0.0;
  int n;
  for (n = 0; n < 3; ++n)
    d += v[n] * v[n];
  return sqrt(d);
}

Compare this with the (stylized) Haskell version that makes function composition explicit:

vlen = sqrt . sum . fmap (flip (^) 2)

(Here, to make things even more cryptic, I partially applied the exponentiation operator (^) by setting its second argument to 2.)

I’m not arguing that Haskell’s point-free style is always better, just that function composition is at the bottom of everything we do in programming. And even though we are effectively composing functions, Haskell does go to great lengths to provide imperative-style syntax called the do notation for monadic composition. We’ll see its use later. But first, let me explain why we need monadic composition in the first place.

The Kleisli Category

We have previously arrived at the writer monad by embellishing regular functions. The particular embellishment was done by pairing their return values with strings or, more generally, with elements of a monoid. We can now recognize that such embellishment is a functor:

newtype Writer w a = Writer (a, w)

instance Functor (Writer w) where
  fmap f (Writer (a, w)) = Writer (f a, w)

We have subsequently found a way of composing embellished functions, or Kleisli arrows, which are functions of the form:

a -> Writer w b

It was inside the composition that we implemented the accumulation of the log.

We are now ready for a more general definition of the Kleisli category. We start with a category C and an endofunctor m. The corresponding Kleisli category K has the same objects as C, but its morphisms are different. A morphism between two objects a and b in K is implemented as a morphism:

a -> m b

in the original category C. It’s important to keep in mind that we treat a Kleisli arrow in K as a morphism between a and b, and not between a and m b.

In our example, m was specialized to Writer w, for some fixed monoid w.

Kleisli arrows form a category only if we can define proper composition for them. If there is a composition, which is associative and has an identity arrow for every object, then the functor m is called a monad, and the resulting category is called the Kleisli category.

In Haskell, Kleisli composition is defined using the fish operator >=>, and the identity arrow is a polymorphic function called return. Here’s the definition of a monad using Kleisli composition:

class Monad m where
  (>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
  return :: a -> m a

Keep in mind that there are many equivalent ways of defining a monad, and that this is not the primary one in the Haskell ecosystem. I like it for its conceptual simplicity and the intuition it provides, but there are other definitions that are more convenient when programming. We’ll talk about them momentarily.

In this formulation, monad laws are very easy to express. They cannot be enforced in Haskell, but they can be used for equational reasoning. They are simply the standard composition laws for the Kleisli category:

(f >=> g) >=> h = f >=> (g >=> h) -- associativity
return >=> f = f                  -- left unit
f >=> return = f                  -- right unit

This kind of a definition also expresses what a monad really is: it’s a way of composing embellished functions. It’s not about side effects or state. It’s about composition. As we’ll see later, embellished functions may be used to express a variety of effects or state, but that’s not what the monad is for. The monad is the sticky duct tape that ties one end of an embellished function to the other end of an embellished function.

Going back to our Writer example: The logging functions (the Kleisli arrows for the Writer functor) form a category because Writer is a monad:

instance Monoid w => Monad (Writer w) where
    f >=> g = \a -> 
        let Writer (b, s)  = f a
            Writer (c, s') = g b
        in Writer (c, s `mappend` s')
    return a = Writer (a, mempty)

Monad laws for Writer w are satisfied as long as monoid laws for w are satisfied (they can’t be enforced in Haskell either).

There’s a useful Kleisli arrow defined for the Writer monad called tell. Its sole purpose is to add its argument to the log:

tell :: w -> Writer w ()
tell s = Writer ((), s)

We’ll use it later as a building block for other monadic functions.

Fish Anatomy

When implementing the fish operator for different monads you quickly realize that a lot of code is repeated and can be easily factored out. To begin with, the Kleisli composition of two functions must return a function, so its implementation may as well start with a lambda taking an argument of type a:

(>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \a -> ...

The only thing we can do with this argument is to pass it to f:

f >=> g = \a -> let mb = f a
                in ...

At this point we have to produce the result of type m c, having at our disposal an object of type m b and a function g :: b -> m c. Let’s define a function that does that for us. This function is called bind and is usually written in the form of an infix operator:

(>>=) :: m a -> (a -> m b) -> m b

For every monad, instead of defining the fish operator, we may instead define bind. In fact the standard Haskell definition of a monad uses bind:

class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b
    return :: a -> m a

Here’s the definition of bind for the Writer monad:

(Writer (a, w)) >>= f = let Writer (b, w') = f a
                        in  Writer (b, w `mappend` w')

It is indeed shorter than the definition of the fish operator.

It’s possible to further dissect bind, taking advantage of the fact that m is a functor. We can use fmap to apply the function a -> m b to the contents of m a. This will turn each a into m b. The result of the application is therefore of type m (m b). This is not exactly what we want — we need the result of type m b — but we’re close. All we need is a function that collapses or flattens the double application of m. Such a function is called join:

join :: m (m a) -> m a

Using join, we can rewrite bind as:

ma >>= f = join (fmap f ma)

That leads us to the third option for defining a monad:

class Functor m => Monad m where
    join :: m (m a) -> m a
    return :: a -> m a

Here we have explicitly requested that m be a Functor. We didn’t have to do that in the previous two definitions of the monad. That’s because any type constructor m that supports either the fish or the bind operator is automatically a functor. For instance, it’s possible to define fmap in terms of bind and return:

fmap f ma = ma >>= \a -> return (f a)

For completeness, here’s join for the Writer monad:

join :: Monoid w => Writer w (Writer w a) -> Writer w a
join (Writer ((Writer (a, w')), w)) = Writer (a, w `mappend` w')

The do Notation

One way of writing code using monads is to work with Kleisli arrows — composing them using the fish operator. This mode of programming is the generalization of the point-free style. Point-free code is compact and often quite elegant. In general, though, it can be hard to understand, bordering on cryptic. That’s why most programmers prefer to give names to function arguments and intermediate values.

When dealing with monads it means favoring the bind operator over the fish operator. Bind takes a monadic value and returns a monadic value. The programmer may choose to give names to those values. But that’s hardly an improvement. What we really want is to pretend that we are dealing with regular values, not the monadic containers that encapsulate them. That’s how imperative code works — side effects, such as updating a global log, are mostly hidden from view. And that’s what the do notation emulates in Haskell.

You might be wondering then, why use monads at all? If we want to make side effects invisible, why not stick to an imperative language? The answer is that the monad gives us much better control over side effects. For instance, the log in the Writer monad is passed from function to function and is never exposed globally. There is no possibility of garbling the log or creating a data race. Also, monadic code is clearly demarcated and cordoned off from the rest of the program.

The do notation is just syntactic sugar for monadic composition. On the surface, it looks a lot like imperative code, but it translates directly to a sequence of binds and lambda expressions.

For instance, take the example we used previously to illustrate the composition of Kleisli arrows in the Writer monad. Using our current definitions, it could be rewritten as:

process :: String -> Writer String [String]
process = upCase >=> toWords

This function turns all characters in the input string to upper case and splits it into words, all the while producing a log of its actions.

In the do notation it would look like this:

process s = do
    upStr <- upCase s
    toWords upStr

Here, upStr is just a String, even though upCase produces a Writer:

upCase :: String -> Writer String String
upCase s = Writer (map toUpper s, "upCase ")

This is because the do block is desugared by the compiler to:

process s = 
   upCase s >>= \ upStr ->
       toWords upStr

The monadic result of upCase is bound to a lambda that takes a String. It’s the name of this string that shows up in the do block. When reading the line:

upStr <- upCase s

we say that upStr gets the result of upCase s.

The pseudo-imperative style is even more pronounced when we inline toWords. We replace it with the call to tell, which logs the string "toWords ", followed by the call to return with the result of splitting the string upStr using words. Notice that words is a regular function working on strings.

process s = do
    upStr <- upCase s
    tell "toWords "
    return (words upStr)

Here, each line in the do block introduces a new nested bind in the desugared code:

process s = 
    upCase s >>= \upStr ->
      tell "toWords " >>= \() ->
        return (words upStr)

Notice that tell produces a unit value, so it doesn’t have to be passed to the following lambda. Ignoring the contents of a monadic result (but not its effect — here, the contribution to the log) is quite common, so there is a special operator to replace bind in that case:

(>>) :: m a -> m b -> m b
m >> k = m >>= (\_ -> k)

The actual desugaring of our code looks like this:

process s = 
    upCase s >>= \upStr ->
      tell "toWords " >>
        return (words upStr)

In general, do blocks consist of lines (or sub-blocks) that either use the left arrow to introduce new names that are then available in the rest of the code, or are executed purely for side-effects. Bind operators are implicit between the lines of code. Incidentally, it is possible, in Haskell, to replace the formatting in the do blocks with braces and semicolons. This provides the justification for describing the monad as a way of overloading the semicolon.

Notice that the nesting of lambdas and bind operators when desugaring the do notation has the effect of influencing the execution of the rest of the do block based on the result of each line. This property can be used to introduce complex control structures, for instance to simulate exceptions.
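
For instance, the Maybe monad uses exactly this nesting to simulate exceptions: a Nothing produced by any line short-circuits the rest of the block. A minimal sketch:

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing          -- "throw"
safeDiv x y = Just (x `div` y)

calc :: Int -> Int -> Int -> Maybe Int
calc x y z = do
    q <- safeDiv x y           -- if this is Nothing, the rest never runs
    safeDiv q z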

Interestingly, the equivalent of the do notation has found its application in imperative languages, C++ in particular. I’m talking about resumable functions or coroutines. It’s not a secret that C++ futures form a monad. It’s an example of the continuation monad, which we’ll discuss shortly. The problem with continuations is that they are very hard to compose. In Haskell, we use the do notation to turn the spaghetti of “my handler will call your handler” into something that looks very much like sequential code. Resumable functions make the same transformation possible in C++. And the same mechanism can be applied to turn the spaghetti of nested loops into list comprehensions or “generators,” which are essentially the do notation for the list monad. Without the unifying abstraction of the monad, each of these problems is typically addressed by providing custom extensions to the language. In Haskell, this is all dealt with through libraries.

Next: Monads and Effects.


In the previous blog post we talked about relations. I gave an example of a thin category as a kind of relation that’s compatible with categorical structure. In a thin category, the hom-set is either an empty set or a singleton set. It so happens that these two sets form a sub-category of Set. It’s a very interesting category. It consists of the two objects — let’s give them new names o and i. Besides the mandatory identity morphisms, we also have a single morphism going from o to i, corresponding to the function we call absurd in Haskell:

absurd :: Void -> a
absurd v = case v of {} -- Void has no constructors, so no cases are needed (EmptyCase)

This tiny category is sometimes called the interval category. I’ll call it o->i.

The object o is initial, and the object i is terminal — just as the empty set and the singleton set were in Set. Moreover, the cartesian product from Set can be used to define a tensor product in o->i. We’ll use this tensor product to build a monoidal category.

Monoidal Categories

A tensor product is a bifunctor ⊗ with some additional properties. Here, in the interval category, we’ll define it through the following multiplication table:

o ⊗ o = o
o ⊗ i = o
i ⊗ o = o
i ⊗ i = i

Its action on pairs of morphisms (what we call bimap in Haskell) is also easy to define. For instance, what’s the action of ⊗ on the pair <absurd, idi>? This pair takes the pair <o, i> to <i, i>. Under the bifunctor ⊗, the first pair produces o, and the second i. There is only one morphism from o to i, so we have:

absurd ⊗ idi = absurd

If we designate the (terminal) object i as the unit of the tensor product, we get a (symmetric) monoidal category. A monoidal category is a category with a tensor product that’s associative and unital (usually, up to isomorphism — but here, strictly).

Now imagine that we replace hom-sets in our original thin category with objects from the monoidal category o->i (we’ll call them hom-objects). After all, we were only using two sets from Set. We can replace the empty hom-set with the object o, and the singleton hom-set with the object i. We get what’s called an enriched category (although, in this case, it’s more of an impoverished category).

[Diagram: an example of a thin category (a total order with objects 1, 2, and 3) with hom-sets replaced by hom-objects from the interval category. Think of i as corresponding to less-than-or-equal, and o as greater.]

Enriched Categories

An enriched category has hom-objects instead of hom-sets. These are objects from some monoidal category V called the base category. The base category has to be monoidal because we want to define something that would replace the usual composition of morphisms. Morphisms are elements of hom-sets. However, hom-objects, in general, have no elements. We don’t know what an element of o or i is.

So to fully define an enriched category we have to come up with a sensible substitute for composition. To do that, we need to rethink composition — first in terms of hom-sets, then in terms of hom-objects.

We can think of composition as a function from a cartesian product of two hom-sets to a third hom-set:

composea b c :: C(b, c) × C(a, b) -> C(a, c)

Generalizing it, we can replace hom-sets with hom-objects (here, either o or i), the cartesian product with the tensor product, and a function with a morphism (notice: it’s a morphism in our monoidal category o->i). These composition-defining morphisms form a “composition table” for hom-objects.

As an example, take the composition of two is. Their product i ⊗ i is i again, and there is only one morphism out of i, the identity morphism. In terms of original hom-sets it would mean that the composition of two morphisms always exists. In general, we have to impose this condition when we’re defining a category, enriched or not — here it just happens automatically.
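
We can model this composition table in Haskell, representing o and i as False and True, with the tensor product as (&&) (a sketch, all names mine):

-- hom-objects of a total order enriched over the interval category:
-- C(a, b) is i (True) exactly when a <= b
homObj :: Int -> Int -> Bool
homObj a b = a <= b

-- the composition morphism C(b, c) ⊗ C(a, b) -> C(a, c) exists exactly
-- when this implication holds; transitivity makes it always True
composeExists :: Int -> Int -> Int -> Bool
composeExists a b c = not (homObj b c && homObj a b) || homObj a c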

For instance (see the diagram above), compose0 1 2 = idi:

compose0 1 2 (C(1, 2) ⊗ C(0, 1))
= compose0 1 2 (i ⊗ i)
= compose0 1 2 i
= i
= C(0, 2)

In every category we must also have identity morphisms. These are special elements in the hom-sets of the form C(a, a). We have to find a way to define their equivalent in the enriched setting. We’ll use the standard trick of defining generalized elements. It’s based on the observation that selecting an element from a set s is the same as selecting a morphism that goes from the singleton set (the terminal object in Set) to s. In a monoidal category, we replace the terminal object with the monoidal unit.

So, instead of picking an identity morphism in C(a, a), we use a morphism from the monoidal unit i:

ja :: i -> C(a, a)

Again, in the case of a thin category, there is only one morphism leaving i, and that’s the identity morphism. That’s why we are automatically guaranteed that, in a thin category, all hom-objects of the form C(a, a) are equal to i.

Composition in a category must also satisfy associativity and identity conditions. Associativity in the enriched setting translates straightforwardly to a commuting diagram, but identity is a little trickier. We have to use ja to “select” the identity from the hom-object C(a, a) while composing it with some other hom-object C(b, a). We start with the product:

i ⊗ C(b, a)

Because i is the monoidal unit, this is equal to C(b, a). On the other hand, we can tensor together two morphisms in o->i — remember, a tensor product is a bifunctor, so it also acts on morphisms. Here we’ll tensor ja and the identity at C(b, a):

ja ⊗ idC(b, a)

We act with this product on the product object i ⊗ C(b, a) to get C(a, a) ⊗ C(b, a). Then we use composition to get:

C(a, a) ⊗ C(b, a) -> C(b, a)

These two ways of getting to C(b, a) must coincide, leading to the identity condition for enriched categories.

Now that we’ve seen how the enrichment works for thin categories, we can apply the same mechanism to define categories enriched over any monoidal category V.

The important part is that V defines a (bifunctor) tensor product ⊗ and a unit object i. Associativity and unitality may be either strict or up to isomorphism (notice that a regular cartesian product is associative only up to isomorphism — (a, (b, c)) is not equal to ((a, b), c)).

Instead of sets of morphisms, an enriched category has hom-objects that are objects in V. We use the same notation as for hom-sets: C(a, b) is the hom-object that connects object a to object b. Composition is replaced by morphisms in V:

composea b c :: C(b, c) ⊗ C(a, b) -> C(a, c)

Instead of identity morphisms, we have the morphisms in V:

ja :: i -> C(a, a)

Finally, associativity and unitality of composition are imposed in the form of a few commuting diagrams.

Impoverished Yoneda

The Yoneda Lemma talks about functors from an arbitrary category to Set. To generalize the Yoneda lemma to enriched categories we first have to generalize functors. Their action on objects is not a problem; it’s the action on morphisms that needs our attention.

Enriched Functors

Since in an enriched category we no longer have access to individual morphisms, we have to define the action of functors on hom-objects wholesale. This is only possible if the hom-objects in the target category come from the same base category V as the hom-objects in the source category. In other words, both categories must be enriched over the same monoidal category. We can then use regular morphisms in V to map hom-objects.

Between any two objects a and b in C we have the hom-object C(a, b). The two objects are mapped by the functor f to f a and f b, and there is a hom-object between them, D(f a, f b). The action of f on C(a, b) is defined as a morphism in V:

C(a, b) -> D(f a, f b)

Let’s see what this means in our impoverished thin category. First of all, a functor will always map related objects to related objects. That’s because there is no morphism from i to o. A bond between two objects cannot be broken by an impoverished functor.

If the relation is a partial order, for instance less-than-or-equal, then it follows that a functor between posets preserves the ordering — it’s monotone.

A functor must also preserve composition and identity. The former can be easily expressed as a commuting diagram. Identity preservation in the enriched setting involves the use of ja. Starting from i we can use ja to get to C(a, a), which the functor maps to D(f a, f a). Or we can use jf a to get there directly. We insist that both paths be the same.

In our impoverished category, this just works because ja is the identity morphism and all C(a, a)s and D(a, a)s are equal to i.

Back to Yoneda: You might remember that we start the Yoneda construction by fixing one object a in C, and then varying another object x to define the functor:

x -> C(a, x)

This functor maps C to Set, because xs are objects in C, and hom-sets are sets — objects of Set.

In the enriched environment, the same construction results in a mapping from C to V, because hom-objects are objects of the base category V.

But is this mapping a functor? This is far from obvious, considering that C is an enriched category, and we have just said that enriched functors can only go between categories that are enriched over the same base category. The target of our functor, the category V, is not enriched. It turns out that, as long as V is closed, we can turn it into an enriched category.

Self Enrichment

Let’s first see how we would enrich our tiny category o->i. First of all, let’s check if it’s closed. Closedness means that hom-sets can be objectified — for every hom-set there is an object called the exponential object that objectifies it. The exponential object in a (symmetric) monoidal category is defined through the adjunction:

V(a ⊗ b, c) ≅ V(b, c^a)

This is the standard adjunction for defining exponentials, except that we are using the tensor product instead of the regular product. The hom-sets are sets of morphisms between objects in V (here, in o->i).

Let’s check, for instance, if there’s an object that corresponds to the hom-set V(o, i), which we would call i^o. We have:

V(o ⊗ b, i) ≅ V(b, i^o)

Whatever b we choose, when multiplied by o it will yield o, so the left hand side is V(o, i), a singleton set. Therefore V(b, i^o) must be a singleton set too, for any choice of b. In particular, if b is i, we see that the only choice for i^o is:

i^o = i

You can check that all exponentiation rules in o->i can be obtained from simple algebra by replacing o with zero and i with one.
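
In the same Bool model of the interval category (o as False, i as True, tensor as (&&)), the exponential is Boolean implication, and the defining adjunction can be checked by brute force (a sketch, names mine):

-- expObj a c represents the exponential c^a
expObj :: Bool -> Bool -> Bool
expObj a c = not a || c

-- V(a ⊗ b, c) ≅ V(b, c^a): in a poset both hom-sets are singletons or
-- empty together, i.e. (a && b) <= c exactly when b <= expObj a c
adjunctionHolds :: Bool
adjunctionHolds = and [ ((a && b) <= c) == (b <= expObj a c)
                      | a <- [False, True]
                      , b <- [False, True]
                      , c <- [False, True] ]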

Every closed symmetric monoidal category can be enriched in itself by replacing hom-sets with the corresponding exponentials. For instance, in our case, we end up replacing all empty hom-sets in the category o->i with o, and all singleton hom-sets with i. You can easily convince yourself that it works, and the result is the category o->i enriched in itself.

We can now take a category C that’s enriched over a closed symmetric monoidal category V, and show that the mapping:

x -> C(a, x)

is indeed an enriched functor. It maps objects of C to objects of V and hom-objects of C to hom-objects (exponentials) of V.

[Diagram: an example of a functor from a total order enriched over the interval category to the interval category. This particular functor is equal to the hom-functor C(a, -) for a equal to 3.]

Let’s see what this functor looks like in a poset. Given some a, the hom-object C(a, x) is equal to i if a <= x. So an x is mapped to i if it’s greater-or-equal to a, otherwise it’s mapped to o. If you think of the objects mapped to o as colored black and the ones mapped to i as colored red, you’ll see the object a and the whole graph below it must be painted red.

Enriched Natural Transformations

Now that we know what enriched functors are, we have to define natural transformations between them. This is a bit tricky, since a regular natural transformation is defined as a family of morphisms. But again, instead of picking individual morphisms from hom-sets we can work with the closest equivalent: generalized elements — morphisms going from the unit object i to hom-objects. So an enriched natural transformation between two enriched functors f and g is defined as a family of morphisms in V:

αa :: i -> D(f a, g a)

Natural transformations are very limited in our impoverished category. Let’s see what morphisms from i are at our disposal. We have one morphism from i to i: the identity morphism idi. This makes sense — we think of i as having a single element. There is no morphism from i back to o; and that makes sense too — we think of o as having no elements. The only possible generalized components of an impoverished natural transformation between two functors f and g correspond to D(f a, g a) equal to i; which means that, for every a, f a must be less-than-or-equal to g a. A natural transformation can only push a functor uphill.

When the target category is o->i, as in the impoverished Yoneda lemma, a natural transformation may never connect red to black. So once the first functor switches to red, the other must follow.

Naturality Condition

There is, of course, a naturality condition that goes with this definition of a natural transformation. The essence of it is that it shouldn’t matter if we first apply a functor and then the natural transformation α, or the other way around. In the enriched context, there are two ways of getting from C(a, b) to D(f a, g b). One is to multiply C(a, b) by i on the right:

C(a, b) ⊗ i

apply the product g ⊗ αa to get:

D(g a, g b) ⊗ D(f a, g a)

and then apply composition to get:

D(f a, g b)

The other way is to multiply C(a, b) by i on the left:

i ⊗ C(a, b)

apply αb ⊗ f to get:

D(f b, g b) ⊗ D(f a, f b)

and compose the two to get:

D(f a, g b)

The naturality condition requires that this diagram commute.

Impoverished 11

Enriched Yoneda

The enriched version of the Yoneda lemma talks about enriched natural transformations from the functor x -> C(a, x) to any enriched functor f that goes from C to V.

Consider for a moment a functor from a poset to our tiny category o->i (which, by the way, is also a poset). It will map some objects to o (black) and others to i (red). As we’ve seen, a functor must preserve the less-than-or-equal relation, so once we get into the red territory, there is no going back to black. And a natural transformation may only repaint black to red, not the other way around.

Now we would like to say that natural transformations from x -> C(a, x) to f are in one-to-one correspondence with the elements of f a, except that f a is not a set, so it doesn’t have elements. It’s an object in V. So instead of talking about elements of f a, we’ll talk about generalized elements — morphisms from the unit object i to f a. And that’s how the enriched Yoneda lemma is formulated — as a natural bijection between the set of natural transformations and a set of morphisms from the unit object to f a.

Nat(C(a, -), f) ≅ V(i, f a)

In our running example, there are only two possible values for f a.

  1. If the value is o then there is no morphism from i to it. The Yoneda lemma tells us that there is no natural transformation in that case. That makes sense, because the value of the functor x -> C(a, x) at x=a is i, and there is no morphism from i to o.
  2. If the value is i then there is exactly one morphism from i to it — the identity. The Yoneda lemma tells us that there is just one natural transformation in that case. It’s the natural transformation whose generalized component at any object x is i->i.

Strong Enriched Yoneda

There is something unsatisfactory in the fact that the enriched Yoneda lemma ends up using a mapping between sets. First we try to get away from sets as far as possible, then we go back to sets of morphisms. It feels like cheating. Not to worry! There is a stronger version of the Yoneda lemma that deals with this problem. What we need is to replace the set of natural transformations with an object in V that would represent them — just like we replaced the set of morphisms with the exponential object. Such an object is defined as an end:

∫x V(f x, g x)

The strong version of the Yoneda lemma establishes the natural isomorphism:

∫x V(C(a, x), f x) ≅ f a

Enriched Profunctors

We’ve seen that a profunctor is a functor from a product category Cop × D to Set. The enriched version of a profunctor requires the notion of a product of enriched categories. We would like the product of enriched categories to also be an enriched category. In fact, we would like it to be enriched over the same base category V as the component categories.

We’ll define objects in such a category as pairs of objects from the component categories, but the hom-objects will be defined as tensor products of the component hom-objects. In the enriched product category, the hom-object between two pairs, <c, d> and <c', d'> is:

(Cop ⊗ D)(<c, d>, <c', d'>) = C(c, c') ⊗ D(d, d')

You can convince yourself that composition of such hom-objects requires the tensor product to be symmetric (at least up to isomorphism). That’s because you have to be able to rearrange the hom-objects in a tensor product of tensor products.

An enriched profunctor is defined as an enriched functor from the tensor product of two categories to the (self-enriched) base category:

Cop ⊗ D -> V

Just like regular profunctors, enriched profunctors can be composed using the coend formula. The only difference is that the cartesian product is replaced by the tensor product in V. They form a bicategory called V-Prof.

Enriched profunctors are the basis of the definition of Tambara modules, which are relevant in the application to Haskell lenses.

Conclusion

One of the reasons for using category theory is to get away from set theory. In general, objects in a category don’t have to form sets. The morphisms, however, are elements of sets — the hom-sets. Enriched categories go a step further and replace even those sets with categorical objects. However, it’s not categories all the way down — the base category that’s used for enrichment is still a regular old category with hom-sets.

Acknowledgments

I’m grateful to Gershom Bazerman for useful comments and to André van Meulebrouck for checking the grammar and spelling.
