Oxford, UK, 22–26 July 2019

Dear scientists, mathematicians, linguists, philosophers, and hackers,

We are writing to let you know about a fantastic opportunity to learn about the emerging interdisciplinary field of applied category theory from some of its leading researchers at the ACT2019 School.   It will begin in January 2019 and culminate in a meeting in Oxford, July 22-26.

Applied category theory is a topic of interest for a growing community of researchers, interested in studying systems of all sorts using category-theoretic tools.  These systems are found in the natural sciences and social sciences, as well as in computer science, linguistics, and engineering. The background and experience of our community’s members are as varied as the systems being studied.

The goal of the ACT2019 School is to help grow this community by pairing ambitious young researchers together with established researchers in order to work on questions, problems, and conjectures in applied category theory.

Who should apply?

Anyone from anywhere who is interested in applying category-theoretic methods to problems outside of pure mathematics. This is emphatically not restricted to math students, but one should be comfortable working with mathematics. Knowledge of basic category-theoretic language—the definition of a monoidal category, for example—is encouraged.

We will consider advanced undergraduates, Ph.D. students, and post-docs. We ask that you commit to the full program as laid out below.

Instructions on how to apply can be found below the research topic descriptions.

Senior research mentors and their topics

Below is a list of the senior researchers, each of whom describes a research project that their team will pursue, as well as the background reading that will be studied between now and July 2019.

Miriam Backens

Title: Simplifying quantum circuits using the ZX-calculus

Description: The ZX-calculus is a graphical calculus based on the category-theoretical formulation of quantum mechanics.  A complete set of graphical rewrite rules is known for the ZX-calculus, but not for quantum circuits over any universal gate set.  In this project, we aim to develop new strategies for using the ZX-calculus to simplify quantum circuits.

Background reading:

  1. Matthew Amy, Jianxin Chen, Neil Ross. A finite presentation of CNOT-dihedral operators. arXiv:1701.00140
  2. Miriam Backens. The ZX-calculus is complete for stabiliser quantum mechanics. arXiv:1307.7025

Tobias Fritz

Title: Partial evaluations, the bar construction, and second-order stochastic dominance

Description: We all know that 2+2+1+1 evaluates to 6. A less familiar notion is that it can partially evaluate to 5+1.  In this project, we aim to study the compositional structure of partial evaluation in terms of monads and the bar construction and see what this has to do with financial risk via second-order stochastic dominance.

Background reading:

  1. Tobias Fritz, Paolo Perrone. Monads, partial evaluations, and rewriting. arXiv:1810.06037
  2. Maria Manuel Clementino, Dirk Hofmann, George Janelidze. The monads of classical algebra are seldom weakly cartesian. Available here.
  3. Todd Trimble. On the bar construction. Available here.

Pieter Hofstra

Title: Complexity classes, computation, and Turing categories

Description: Turing categories form a categorical setting for studying computability without bias towards any particular model of computation. It is not currently clear, however, that Turing categories are useful for studying practical aspects of computation such as complexity. This project revolves around the systematic study of step-based computation in the form of stack-machines, the resulting Turing categories, and complexity classes. This will involve a study of the interplay between traced monoidal structure and computation. We will explore the idea of stack machines qua programming languages, investigate their expressive power, and tie this to complexity theory. We will also consider questions such as the following: can we characterize Turing categories arising from stack machines? Is there an initial such category? How does this structure relate to other categorical structures associated with computability?

Background reading:

  1. J.R.B. Cockett, P.J.W. Hofstra. Introduction to Turing categories. APAL, Vol 156, pp 183-209, 2008. Available here.
  2. J.R.B. Cockett, P.J.W. Hofstra, P. Hrubes. Total maps of Turing categories. ENTCS (Proc. of MFPS XXX), pp 129-146, 2014.  Available here.
  3. A. Joyal, R. Street, D. Verity. Traced monoidal categories. Math. Proc. Cam. Phil. Soc. 3, pp. 447-468, 1996. Available here.

Bartosz Milewski

Title: Traversal optics and profunctors

Description: In functional programming, optics are ways to zoom into a specific part of a given data type and mutate it.  Optics come in many flavors such as lenses and prisms and there is a well-studied categorical viewpoint, known as profunctor optics.  Of all the optic types, only the traversal has resisted a derivation from first principles into a profunctor description. This project aims to do just this.

Background reading:

  1. Bartosz Milewski. Profunctor optics, categorical view. Available here.
  2. Craig Pastro, Ross Street. Doubles for monoidal categories. arXiv:0711.1859

Mehrnoosh Sadrzadeh

Title: Formal and experimental methods to reason about dialogue and discourse using categorical models of vector spaces

Description: Distributional semantics argues that meanings of words can be represented by the frequency of their co-occurrences in context. A model extending distributional semantics from words to sentences has a categorical interpretation via Lambek’s syntactic calculus or pregroups. In this project, we intend to further extend this model to reason about dialogue and discourse utterances, where people interrupt each other and where there are references that need to be resolved, disfluencies, pauses, and corrections. Additionally, we would like to design experiments and run toy models to verify predictions of the developed models.

Background reading:

  1. Gerhard Jäger. A multi-modal analysis of anaphora and ellipsis. Available here.
  2. Matthew Purver, Ronnie Cann, Ruth Kempson. Grammars as parsers: Meeting the dialogue challenge. Available here.

David Spivak

Title: Toward a mathematical foundation for autopoiesis

Description: An autopoietic organization—anything from a living animal to a political party to a football team—is a system that is responsible for adapting and changing itself, so as to persist as events unfold. We want to develop mathematical abstractions that are suitable to found a scientific study of autopoietic organizations. To do this, we’ll begin by using behavioral mereology and graphical logic to frame a discussion of autopoiesis, most of all what it is and how it can be best conceived. We do not expect to complete this ambitious objective; we hope only to make progress toward it.

Background reading:

  1. Fong, Myers, Spivak. Behavioral mereology.  arXiv:1811.00420.
  2. Fong, Spivak. Graphical regular logic.  arXiv:1812.05765.
  3. Luhmann. Organization and Decision, CUP. (Preface)

School structure

All of the participants will be divided up into groups corresponding to the projects. A group will consist of several students, a senior researcher, and a TA. Between January and June, we will have a reading course devoted to building the background necessary to meaningfully participate in the projects. Specifically, two weeks are devoted to each paper from the reading list. During this two-week period, everybody will read the paper and contribute to a discussion in a private online chat forum. There will be a TA serving as a domain expert and moderating this discussion. In the middle of the two-week period, the group corresponding to the paper will give a presentation via video conference. At the end of the two-week period, this group will compose a blog entry on this background reading that will be posted to the n-Category Café.

After all of the papers have been presented, there will be a two-week visit to Oxford University from 15 – 26 July 2019.  The first week is solely for participants of the ACT2019 School. Groups will work together on research projects, led by the senior researchers.  

The second week of this visit is the ACT2019 Conference, where the wider applied category theory community will arrive to share new ideas and results. It is not part of the school, but there is a great deal of overlap and participation is very much encouraged. The school should prepare students to be able to follow the conference presentations to a reasonable degree.

How to apply

To apply, please send the following to act2019school@gmail.com:

  • Your CV
  • A document with:
    • An explanation of any relevant background you have in category theory or any of the specific project areas
    • The date you completed or expect to complete your Ph.D. and a one-sentence summary of its subject matter.
    • Order of project preference
    • The extent to which you can commit to coming to Oxford (availability of funding is uncertain at this time)
  • A brief statement (~300 words) on why you are interested in the ACT2019 School. Some prompts:
    • How can this school contribute to your research goals?
    • How can this school help your career?

Also, arrange for a brief letter of recommendation to be sent on your behalf to act2019school@gmail.com, confirming any of the following:

  • your background
  • ACT2019 School’s relevance to your research/career
  • your research experience

Questions?

For more information, contact either

  • Daniel Cicala. cicala (at) math (dot) ucr (dot) edu
  • Jules Hedges. julian (dot) hedges (at) cs (dot) ox (dot) ac (dot) uk

Yes, it’s this time of the year again! I started a little tradition a year ago with Stalking a Hylomorphism in the Wild. This year I was reminded of the Advent of Code by a tweet with this succinct C++ program:

This piece of code is probably unreadable to a regular C++ programmer, but makes perfect sense to a Haskell programmer.

Here’s the description of the problem: You are given a list of equal-length strings. Every string is different, but two of these strings differ only by one character. Find these two strings and return their matching part. For instance, if the two strings were “abcd” and “abxd”, you would return “abd”.

What makes this problem particularly interesting is its potential application to a much more practical task of matching strands of DNA while looking for mutations. I decided to explore the problem a little beyond the brute force approach. And, of course, I had a hunch that I might encounter my favorite wild beast–the hylomorphism.

Brute force approach

First things first. Let’s do the boring stuff: read the file and split it into lines, which are the strings we are supposed to process. So here it is:

main = do
  txt <- readFile "day2.txt"
  let cs = lines txt
  print $ findMatch cs

The real work is done by the function findMatch, which takes a list of strings and produces the answer, which is a single string.

findMatch :: [String] -> String

First, let’s define a function that calculates the distance between any two strings.

distance :: (String, String) -> Int

We’ll define the distance as the count of mismatched characters.

Here’s the idea: We have to compare strings (which, let me remind you, are of equal length) character by character. Strings are lists of characters. The first step is to take two strings and zip them together, producing a list of pairs of characters. In fact we can combine the zipping with the next operation–in this case, comparison for inequality, (/=)–using the library function zipWith. However, zipWith is defined to act on two lists, and we will want it to act on a pair of lists–a subtle distinction, which can be easily overcome by applying uncurry:

uncurry :: (a -> b -> c) -> ((a, b) -> c)

which turns a function of two arguments into a function that takes a pair. Here’s how we use it:

uncurry (zipWith (/=))

The comparison operator (/=) produces a Boolean result, True or False. We want to count the number of differences, so we’ll convert True to one, and False to zero:

fromBool :: Num a => Bool -> a
fromBool False = 0
fromBool True  = 1

(Notice that such subtleties as the difference between Bool and Int are blissfully ignored in C++.)

Finally, we’ll sum all the ones using sum. Altogether we have:

distance = sum . fmap fromBool . uncurry (zipWith (/=))
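
As a quick sanity check, here’s a hypothetical GHCi session using the example strings from the problem statement, which differ in exactly one position:

ghci> distance ("abcd", "abxd")
1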

Now that we know how to find the distance between any two strings, we’ll just apply it to all possible pairs of strings. To generate all pairs, we’ll use list comprehension:

let ps = [(s1, s2) | s1 <- ss, s2 <- ss]

(In C++ code, this was done by cartesian_product.)

Our goal is to find the pair whose distance is exactly one. To this end, we’ll apply the appropriate filter:

filter ((== 1) . distance) ps

For our purposes, we’ll assume that there is exactly one such pair (if there isn’t one, we are willing to let the program fail with a fatal exception).

(s, s') = head $ filter ((== 1) . distance) ps

The final step is to remove the mismatched character:

filter (uncurry (==)) $ zip s s'

We use our friend uncurry again, because the equality operator (==) expects two arguments, and we are calling it with a pair of arguments. The result of filtering is a list of identical pairs. We’ll fmap fst to pick the first components.

findMatch :: [String] -> String
findMatch ss = 
  let ps = [(s1, s2) | s1 <- ss, s2 <- ss]
      (s, s') = head $ filter ((== 1) . distance) ps
  in fmap fst $ filter (uncurry (==)) $ zip s s'
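
And an end-to-end check on a made-up toy input (again a hypothetical session):

ghci> findMatch ["abcd", "abxd", "pqrs"]
"abd"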

This program produces the correct result and we could stop right here. But that wouldn’t be much fun, would it? Besides, it’s possible that other algorithms could perform better, or be more flexible when applied to a more general problem.

Data-driven approach

The main problem with our brute-force approach is that we are comparing everything with everything. As we increase the number of input strings, the number of comparisons grows quadratically. There is a standard way of cutting down on the number of comparisons: organizing the input into a neat data structure.

We are comparing strings, which are lists of characters, and list comparison is done recursively. Assume that you know that two strings share a prefix. Compare the next character. If it’s equal in both strings, recurse. If it’s not, we have a single character fault. The rest of the two strings must now match perfectly to be considered a solution. So the best data structure for this kind of algorithm should batch together strings with equal prefixes. Such a data structure is called a prefix tree, or a trie (pronounced try).

At every level of our prefix tree we’ll branch based on the current character (so the maximum branching factor is, in our case, 26). We’ll record the character, the count of strings that share the prefix that led us there, and the child trie storing all the suffixes.

data Trie = Trie [(Char, Int, Trie)]
  deriving (Show, Eq)

Here’s an example of a trie that stores just two strings, “abcd” and “abxd”. It branches after b.

   a 2
   b 2
c 1    x 1
d 1    d 1

When inserting a string into a trie, we recurse both on the characters of the string and the list of branches. When we find a branch with the matching character, we increment its count and insert the rest of the string into its child trie. If we run out of branches, we create a new one based on the current character, give it the count one, and the child trie with the rest of the string:

insertS :: Trie -> String -> Trie
insertS t "" = t
insertS (Trie bs) s = Trie (inS bs s)
  where
    inS ((x, n, t) : bs) (c : cs) =
      if c == x 
      then (c, n + 1, insertS t cs) : bs
      else (x, n, t) : inS bs (c : cs)
    inS [] (c : cs) = [(c, 1, insertS (Trie []) cs)]

We convert our input to a trie by inserting all the strings into an (initially empty) trie:

mkTrie :: [String] -> Trie
mkTrie = foldl insertS (Trie [])
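
For instance, the two-string trie pictured earlier comes out as follows (a hypothetical session; the line break in the Show output is mine):

ghci> mkTrie ["abcd", "abxd"]
Trie [('a',2,Trie [('b',2,Trie [('c',1,Trie [('d',1,Trie [])])
                              ,('x',1,Trie [('d',1,Trie [])])])])]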

Of course, there are many optimizations we could use, if we were to run this algorithm on big data. For instance, we could compress the branches as is done in radix trees, or we could sort the branches alphabetically. I won’t do it here.

I won’t pretend that this implementation is simple and elegant. And it will get even worse before it gets better. The problem is that we are dealing explicitly with recursion in multiple dimensions. We recurse over the input string, the list of branches at each node, as well as the child trie. That’s a lot of recursion to keep track of–all at once.

Now brace yourself: We have to traverse the trie starting from the root. At every branch we check the prefix count: if it’s greater than one, we have more than one string going down, so we recurse into the child trie. But there is also another possibility: we can allow a mismatch at the current level. The current characters may be different but, since we allow only one mismatch, the rest of the strings have to match exactly. That’s what the function exact does. Notice that exact t is used inside foldMap, which is a version of fold that works on monoids–here, on lists of strings.

match1 :: Trie -> [String]
match1 (Trie bs) = go bs
  where
    go :: [(Char, Int, Trie)] -> [String]
    go ((x, n, t) : bs) = 
      let a1s = if n > 1 
                then fmap (x:) $ match1 t
                else []
          a2s = foldMap (exact t) bs
          a3s = go bs -- recurse over list
      in a1s ++ a2s ++ a3s
    go [] = []
    exact t (_, _, t') = matchAll t t'

Here’s the function that finds all exact matches between two tries. It does this by generating all pairs of branches in which the top characters match, and then recursing down.

matchAll :: Trie -> Trie -> [String]
matchAll (Trie bs) (Trie bs') = mAll bs bs'
  where
    mAll :: [(Char, Int, Trie)] -> [(Char, Int, Trie)] -> [String]
    mAll [] [] = [""]
    mAll bs bs' = 
      let ps = [ (c, t, t') 
               | (c,  _,  t)  <- bs
               , (c', _', t') <- bs'
               , c == c']
      in foldMap go ps
    go (c, t, t') = fmap (c:) (matchAll t t')

When mAll reaches the leaves of the trie, it returns a singleton list containing an empty string. Subsequent actions of fmap (c:) will prepend characters to this string.

Since we are expecting exactly one solution to the problem, we’ll extract it using head:

findMatch1 :: [String] -> String
findMatch1 cs = head $ match1 (mkTrie cs)
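
On the toy input from before, the trie-based version agrees with the brute-force one (hypothetically):

ghci> findMatch1 ["abcd", "abxd", "pqrs"]
"abd"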

Recursion schemes

As you hone your functional programming skills, you realize that explicit recursion is to be avoided at all cost. There is a small number of recursive patterns that have been codified, and they can be used to solve the majority of recursion problems (for some categorical background, see F-Algebras). Recursion itself can be expressed in Haskell as a data structure: a fixed point of a functor:

newtype Fix f = In { out :: f (Fix f) }

In particular, our trie can be generated from the following functor:

{-# LANGUAGE DeriveFunctor #-}

data TrieF a = TrieF [(Char, a)]
  deriving (Show, Functor)

Notice how I have replaced the recursive call to the Trie type constructor with the free type variable a. The functor in question defines the structure of a single node, leaving holes marked by the occurrences of a for the recursion. When these holes are filled with full blown tries, as in the definition of the fixed point, we recover the complete trie.
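
In code, the simplified trie is just the fixed point of this functor (Trie' is a name assumed for this sketch; the rest of the post keeps working with TrieF directly):

type Trie' = Fix TrieF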

I have also made one more simplification by getting rid of the Int in every node. This is because, in the recursion scheme I’m going to use, the folding of the trie proceeds bottom-up, rather than top-down, so the multiplicity information can be passed upwards.

The main advantage of recursion schemes is that they let us use simpler, non-recursive building blocks such as algebras and coalgebras. Let’s start with a simple coalgebra that lets us build a trie from a list of strings. A coalgebra is a fancy name for a particular type of function:

type Coalgebra f x = x -> f x

Think of x as a type for a seed from which one can grow a tree. A coalgebra tells us how to use this seed to create a single node described by the functor f and populate it with (presumably smaller) seeds. We can then pass this coalgebra to a simple algorithm, which will recursively expand the seeds. This algorithm is called the anamorphism:

ana :: Functor f => Coalgebra f a -> a -> Fix f
ana coa = In . fmap (ana coa) . coa

Let’s see how we can apply it to the task of building a trie. The seed in our case is a list of strings (as per the definition of our problem, we’ll assume they are all equal length). We start by grouping these strings into bunches of strings that start with the same character. There is a library function called groupWith that does exactly that. We have to import the right library:

import GHC.Exts (groupWith)

This is the signature of the function:

groupWith :: Ord b => (a -> b) -> [a] -> [[a]]

It takes a function a -> b that converts each list element to a type that supports comparison (as per the typeclass Ord), and partitions the input into lists that compare equal under this particular ordering. In our case, we are going to extract the first character from a string using head and bunch together all strings that share that first character.

let sss = groupWith head ss
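
For instance (a hypothetical check; note that groupWith also sorts the groups by the key):

ghci> groupWith head ["abcd", "abxd", "pqrs"]
[["abcd","abxd"],["pqrs"]]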

The tails of those strings will serve as seeds for the next tier of the trie. Eventually the strings will be shortened to nothing, triggering the end of recursion.

fromList :: Coalgebra TrieF [String]
fromList ss =
  -- are strings empty? (checking one is enough)
  if null (head ss) 
  then TrieF [] -- leaf
  else
    let sss = groupWith head ss
    in TrieF $ fmap mkBranch sss

The function mkBranch takes a bunch of strings sharing the same first character and creates a branch seeded with the suffixes of those strings.

mkBranch :: [String] -> (Char, [String])
mkBranch sss =
  let c = head (head sss) -- they're all the same
  in (c, fmap tail sss)
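
Continuing the toy example (hypothetically):

ghci> mkBranch ["abcd", "abxd"]
('a',["bcd","bxd"])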

Notice that we have completely avoided explicit recursion.

The next step is a little harder. We have to fold the trie. Again, all we have to define is a step that folds a single node whose children have already been folded. This step is defined by an algebra:

type Algebra f x = f x -> x

Just as the type x described the seed in a coalgebra, here it describes the accumulator–the result of the folding of a recursive data structure.

We pass this algebra to a special algorithm called a catamorphism that takes care of the recursion:

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

Notice that the folding proceeds from the bottom up: the algebra assumes that all the children have already been folded.

The hardest part of designing an algebra is figuring out what information needs to be passed up in the accumulator. We obviously need to return the final result which, in our case, is the list of strings with one mismatched character. But when we are in the middle of a trie, we have to keep in mind that the mismatch may still happen above us. So we also need a list of strings that may serve as suffixes when the mismatch occurs. We have to keep them all, because they might be matched later with strings from other branches.

In other words, we need to be accumulating two lists of strings. The first list accumulates all suffixes for future matching, the second accumulates the results: strings with one mismatch (after the mismatch has been removed). We therefore should implement the following algebra:

Algebra TrieF ([String], [String])

To understand the implementation of this algebra, consider a single node in a trie. It’s a list of branches, or pairs, whose first component is the current character, and the second a pair of lists of strings–the result of folding a child trie. The first list contains all the suffixes gathered from lower levels of the trie. The second list contains partial results: strings that were matched modulo single-character defect.

As an example, suppose that you have a node with two branches:

[ ('a', (["bcd", "efg"], ["pq"]))
, ('x', (["bcd"],        []))]

First we prepend the current character to strings in both lists using the function prep with the following signature:

prep :: (Char, ([String], [String])) -> ([String], [String])

This way we convert each branch to a pair of lists.

[ (["abcd", "aefg"], ["apq"])
, (["xbcd"],         [])]

We then merge all the lists of suffixes and, separately, all the lists of partial results, across all branches. In the example above, we concatenate the lists in the two columns.

(["abcd", "aefg", "xbcd"], ["apq"])

Now we have to construct new partial results. To do this, we create another list of accumulated strings from all branches (this time without prefixing them):

ss = concat $ fmap (fst . snd) bs

In our case, this would be the list:

["bcd", "efg", "bcd"]

To detect duplicate strings, we’ll insert them into a multiset, which we’ll implement as a map. We need to import the appropriate library:

import qualified Data.Map as M

and define a multiset Counts as:

type Counts a = M.Map a Int

Every time we add a new item, we increment the count:

add :: Ord a => Counts a -> a -> Counts a
add cs c = M.insertWith (+) c 1 cs

To insert all strings from a list, we use a fold:

mset = foldl add M.empty ss

We are only interested in items that have multiplicity greater than one. We can filter them and extract their keys:

dups = M.keys $ M.filter (> 1) mset
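
On the example list from above, this picks out the duplicated suffix (a hypothetical session):

ghci> let mset = foldl add M.empty ["bcd", "efg", "bcd"]
ghci> M.keys $ M.filter (> 1) mset
["bcd"]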

Here’s the complete algebra:

accum :: Algebra TrieF ([String], [String])
accum (TrieF []) = ([""], [])
accum (TrieF bs) = -- b :: (Char, ([String], [String]))
    let -- prepend chars to string in both lists
        pss = unzip $ fmap prep bs
        (ss1, ss2) = both concat pss
        -- find duplicates
        ss = concat $ fmap (fst . snd) bs
        mset = foldl add M.empty ss
        dups = M.keys $ M.filter (> 1) mset
     in (ss1, dups ++ ss2)
  where
      prep :: (Char, ([String], [String])) -> ([String], [String])
      prep (c, pss) = both (fmap (c:)) pss

I used a handy helper function that applies a function to both components of a pair:

both :: (a -> b) -> (a, a) -> (b, b)
both f (x, y) = (f x, f y)

And now for the grand finale: Since we create the trie using an anamorphism only to immediately fold it using a catamorphism, why don’t we cut the middle person? Indeed, there is an algorithm called the hylomorphism that does just that. It takes the algebra, the coalgebra, and the seed, and returns the fully charged accumulator.

hylo :: Functor f => Algebra f a -> Coalgebra f b -> b -> a
hylo alg coa = alg . fmap (hylo alg coa) . coa

And this is how we extract and print the final result:

print $ head $ snd $ hylo accum fromList cs
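
Putting it all together, the whole program is only a few lines (a sketch, assuming the same day2.txt input file as in the brute-force version):

main :: IO ()
main = do
  txt <- readFile "day2.txt"
  let cs = lines txt
  print $ head $ snd $ hylo accum fromList cs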

Conclusion

The advantage of using the hylomorphism is that, because of Haskell’s laziness, the trie is never wholly constructed, and therefore doesn’t require large amounts of memory. At every step enough of the data structure is created as is needed for immediate computation; then it is promptly released. In fact, the definition of the data structure is only there to guide the steps of the algorithm. We use a data structure as a control structure. Since data structures are much easier to visualize and debug than control structures, it’s almost always advantageous to use them to drive computation.

In fact, you may notice that, in the very last step of the computation, our accumulator recreates the original list of strings (actually, because of laziness, they are never fully reconstructed, but that’s not the point). In reality, the characters in the strings are never copied–the whole algorithm is just a choreographed dance of internal pointers, or iterators. But that’s exactly what happens in the original C++ algorithm. We just use a higher level of abstraction to describe this dance.

I haven’t looked at the performance of various implementations. Feel free to test it and report the results. The code is available on github.

Acknowledgments

I’m grateful to the participants of the Seattle Haskell Users’ Group for many helpful comments during my presentation.


I wanted to do category theory, not geometry, so the idea of studying simplexes didn’t seem very attractive at first. But as I was getting deeper into it, a very different picture emerged. Granted, the study of simplexes originated in geometry, but then category theorists took interest in it and turned it into something completely different. The idea is that simplexes define a very peculiar scheme for composing things. The way you compose lower dimensional simplexes in order to build higher dimensional simplexes forms a pattern that shows up in totally unrelated areas of mathematics… and programming. Recently I had a discussion with Edward Kmett in which he hinted at the simplicial structure of cumulative edits in a source file.

Geometric picture

Let’s start with a simple idea, and see what we can do with it. The idea is that of triangulation, and it almost goes back to the beginning of the Agricultural Era. Somebody smart noticed a long time ago that we can measure plots of land by subdividing them into triangles.

Why triangles and not, say, rectangles or quadrilaterals? Well, to begin with, a quadrilateral can always be divided into triangles, so triangles are more fundamental as units of composition in 2-d. But, more importantly, triangles also work when you embed them in higher dimensions, and quadrilaterals don’t. You can take any three points and there is a unique flat triangle that they span (it may be degenerate, if the points are collinear). But four points will, in general, span a warped quadrilateral. Mind you, rectangles work great on flat screens, and we use them all the time for selecting things with the mouse. But on a curved or bumpy surface, triangles are the only option.

Surveyors have covered the whole Earth, mountains and all, with triangles. In computer games, we build complex models, including human faces or dolphins, using wireframes. Wireframes are just systems of triangles that share some of the vertices and edges. So triangles can be used to approximate complex 2-d surfaces in 3-d.

More dimensions

How can we generalize this process? First of all, we could use triangles in spaces that have more than 3 dimensions. This way we could, for instance, build a Klein bottle in 4-d without it intersecting itself.


We can also consider replacing triangles with higher-dimensional objects. For instance, we could approximate 3-d volumes by filling them with cubes. This technique is used in computer graphics, where we often organize lots of cubes in data structures called octrees. But just like squares or quadrilaterals don’t work very well on non-flat surfaces, cubes cannot be used in curved spaces. The natural generalization of a triangle to something that can fill a volume without any warping is a tetrahedron. Any four points in space span a tetrahedron.

We can go on generalizing this construction to higher and higher dimensions. To form an n-dimensional simplex we can pick n+1 points. We can draw a segment between any two points, a triangle between any three points, a tetrahedron between any four points, and so on. It’s thus natural to define a 1-dimensional simplex to be a segment, and a 0-dimensional simplex to be a point.

Simplexes (or simplices, as they are sometimes called) have very regular recursive structure. An n-dimensional simplex has n+1 faces, which are all n-1 dimensional simplexes. A tetrahedron has four triangular faces, a triangle has three sides (one-dimensional simplexes), and a segment has two endpoints. (A point should have one face–and it does, in the “augmented” theory). Every higher-dimensional simplex can be decomposed into lower-dimensional simplexes, and the process can be repeated until we get down to individual vertexes. This constitutes a very interesting composition scheme that will come up over and over again in unexpected places.

Notice that you can always construct a face of a simplex by deleting one point. It’s the point opposite to the face in question. This is why there are as many faces as there are points in a simplex.

Look Ma! No coordinates!

So far we’ve been secretly thinking of points as elements of some n-dimensional linear space, presumably \mathbb{R}^n. Time to make another leap of abstraction. Let’s abandon coordinate systems. Can we still define simplexes and, if so, how would we use them?

Consider a wireframe built from triangles. It defines a particular shape. We can deform this shape any way we want but, as long as we don’t break connections or fuse points, we cannot change its topology. A wireframe corresponding to a torus can never be deformed into a wireframe corresponding to a sphere.

The information about topology is encoded in connections. The connections don’t depend on coordinates. Two points are either connected or not. Two triangles either share a side or they don’t. Two tetrahedrons either share a triangle or they don’t. So if we can define simplexes without resorting to coordinates, we’ll have a new language to talk about topology.

But what becomes of a point if we discard its coordinates? It becomes an element of a set. An arrangement of simplexes can be built from a set of points or 0-simplexes, together with a set of 1-simplexes, a set of 2-simplexes, and so on. Imagine that you bought a piece of furniture from Ikea. There is a bag of screws (0-simplexes), a box of sticks (1-simplexes), a crate of triangular planks (2-simplexes), and so on. All parts are freely stretchable (we don’t care about sizes).

You have no idea what the piece of furniture will look like unless you have an instruction booklet. The booklet tells you how to arrange things: which sticks form the edges of which triangles, etc. In general, you want to know which lower-order simplexes are the “faces” of higher-order simplexes. This can be determined by defining functions between the corresponding sets, which we’ll call face maps.

For instance, there should be two functions from the set of segments to the set of points; one assigning the beginning, and the other the end, to each segment. There should be three functions from the set of triangles to the set of segments, and so on. If the same point is the end of one segment and the beginning of another, the two segments are connected. A segment may be shared between multiple triangles, a triangle may be shared between tetrahedrons, and so on.

You can compose these functions–for instance, to select a vertex of a triangle, or a side of a tetrahedron. Composable functions suggest a category, in this case a subcategory of Set. Selecting a subcategory suggests a functor from some other, simpler, category. What would that category be?

The Simplicial category

The objects of this simpler category, let’s call it the simplicial category \Delta, would be mapped by our functor to corresponding sets of simplexes in Set. So, in \Delta, we need one object corresponding to the set of points, let’s call it [0]; another for segments, [1]; another for triangles, [2]; and so on. In other words, we need one object called [n] per one set of n-dimensional simplexes.

What really determines the structure of this category is its morphisms. In particular, we need morphisms that would be mapped, under our functor, to the functions that define faces of our simplexes–the face maps. This means, in particular, that for every n we need n+1 distinct functions from the image of [n] to the image of [n-1]. These functions are themselves images of morphisms that go between [n] and [n-1] in \Delta; we do, however, have a choice of the direction of these morphisms. If we choose our functor to be contravariant, the face maps from the image of [n] to the image of [n-1] will be images of morphisms going from [n-1] to [n] (the opposite direction). Such a contravariant functor from \Delta to Set (such functors are called presheaves) is called a simplicial set.

What’s attractive about this idea is that there is a category that has exactly the right types of morphisms. It’s a category whose objects are ordinals, or ordered sets of numbers, and morphisms are order-preserving functions. Object [0] is the one-element set \{0\}, [1] is the set \{0, 1\}, [2] is \{0, 1, 2\}, and so on. Morphisms are functions that preserve order, that is, if n < m then f(n) \leq f(m). Notice that the inequality is non-strict. This will become important in the definition of degeneracy maps.

The description of simplicial sets using a functor follows a very common pattern in category theory. The simpler category defines the primitives and the grammar for combining them. The target category (often the category of sets) provides models for the theory in question. The same trick is used, for instance, in defining abstract algebras in Lawvere theories. There, too, the syntactic category consists of a tower of objects with a very regular set of morphisms, and the models are contravariant Set-valued functors.

Because simplicial sets are functors, they form a functor category, with natural transformations as morphisms. A natural transformation between two simplicial sets is a family of functions that map vertices to vertices, edges to edges, triangles to triangles, and so on. In other words, it embeds one simplicial set in another.

Face maps

We will obtain face maps as images of injective morphisms between objects of \Delta. Consider, for instance, an injection from [1] to [2]. Such a morphism takes the set \{0, 1\} and maps it to \{0, 1, 2\}. In doing so, it must skip one of the numbers in the second set, preserving the order of the other two. There are exactly three such morphisms, skipping either 0, 1, or 2.

And, indeed, they correspond to three face maps. If you think of the three numbers as numbering the vertices of a triangle, the three face maps remove the skipped vertex from the triangle leaving the opposing side free. The functor is contravariant, so it reverses the direction of morphisms.


The same procedure works for higher order simplexes. An injection from [n-1] to [n] maps \{0, 1,...,n-1\} to \{0, 1,...,n\} by skipping some k between 0 and n.

The corresponding face map is called d_{n, k}, or simply d_k, if n is obvious from the context.

Such face maps automatically satisfy the obvious identities for any i < j:

d_i d_j = d_{j-1} d_i

The change from j to j-1 on the right compensates for the fact that, after removing the ith number, the remaining indexes are shifted down.
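
To make this concrete, take the triangle \{0, 1, 2\} with i = 0 and j = 2. On the left-hand side, d_2 deletes the vertex 2, leaving the segment \{0, 1\}, and d_0 then deletes its vertex 0. On the right-hand side, d_0 deletes the vertex 0, leaving \{1, 2\}, whose vertices get renumbered as \{0, 1\}; d_{j-1} = d_1 then deletes the renumbered vertex 1, which is the original vertex 2. Both paths end at the vertex originally labeled 1:

d_0 d_2 \{0, 1, 2\} = \{1\} = d_1 d_0 \{0, 1, 2\}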

These injections generate, through composition, all the morphisms that strictly preserve the ordering (we also need identity maps to form a category). But, as I mentioned before, we are also interested in those maps that are non-strict in the preservation of ordering (that is, they can map two consecutive numbers into one). These generate the so-called degeneracy maps. Before we get to definitions, let me provide some motivation.

Homotopy

One of the important applications of simplexes is in homotopy. You don’t need to study algebraic topology to get a feel of what homotopy is. Simply said, homotopy deals with shrinking and holes. For instance, you can always shrink a segment to a point. The intuition is pretty obvious. You have a segment at time zero, and a point at time one, and you can create a continuous “movie” in between. Notice that a segment is a 1-simplex, whereas a point is a 0-simplex. Shrinking therefore provides a bridge between different-dimensional simplexes.

Similarly, you can shrink a triangle to a segment–in particular the segment that is one of its sides.

You can also shrink a triangle to a point by pasting together two shrinking movies–first shrinking the triangle to a segment, and then the segment to a point. So shrinking is composable.

But not all higher-dimensional shapes can be shrunk to all lower-dimensional shapes. For instance, an annulus (a.k.a. a ring) cannot be shrunk to a segment–this would require tearing it. It can, however, be shrunk to a circular loop (or two segments connected end to end to form a loop). That’s because both the annulus and the circle have a hole. So continuous shrinking can be used to classify shapes according to how many holes they have.

We have a problem, though: You can’t describe continuous transformations without using coordinates. But we can do the next best thing: We can define degenerate simplexes to bridge the gap between dimensions. For instance, we can build a degenerate segment, which uses the same vertex twice. Or a collapsed triangle, which uses the same side twice (its third side is a degenerate segment).

Degeneracy maps

We model operations on simplexes, such as face maps, through morphisms from the category opposite to \Delta. The creation of degenerate simplexes will therefore correspond to mappings from [n+1] to [n]. They obviously cannot be injective, but we may choose them to be surjective. For instance, the creation of a degenerate segment from a point corresponds to the (opposite) mapping of \{0, 1\} to \{0\}, which collapses the two numbers to one.

We can construct a degenerate triangle from a segment in two ways. These correspond to the two surjections from \{0, 1, 2\} to \{0, 1\}.

The first one, called \sigma_{1, 0}, maps both 0 and 1 to 0, and 2 to 1. Notice that, as required, it preserves the order, albeit weakly. The second, \sigma_{1, 1}, maps 0 to 0 but collapses 1 and 2 to 1.

In general, \sigma_{n, k} maps \{0, 1, ... k, k+1 ... n+1\} to \{0, 1, ... k ... n\} by collapsing k and k+1 to k.

Our contravariant functor maps these order-preserving surjections to functions on sets. The resulting functions are called degeneracy maps: each \sigma_{n, k} is mapped to the corresponding s_{n, k}. As with face maps, we usually omit the first index, as it’s either arbitrary or easily deducible from the context.

Two degeneracy maps. In the triangles, two of the sides are actually the same segment. The third side is a degenerate segment whose ends are the same point.

There is an obvious identity for the composition of degeneracy maps:

s_i s_j = s_{j+1} s_i

for i \leq j.

The interesting identities relate degeneracy maps to face maps. For instance, when i = j or i = j + 1, we have:

d_i s_j = id

(that’s the identity morphism). Geometrically speaking, imagine creating a degenerate triangle from a segment, for instance by using s_0. The first side of this triangle, which is obtained by applying d_0, is the original segment. The second side, obtained by d_1, is the same segment again.

The third side is degenerate: it can be obtained by applying s_0 to the vertex obtained by d_1.

In general, for i > j + 1:

d_i s_j = s_j d_{i-1}

Similarly:

d_i s_j = s_{j-1} d_i

for i < j.

All the face- and degeneracy-map identities are relevant because, given a family of sets and functions that satisfy them, we can reproduce the simplicial set (contravariant functor from \Delta to Set) that generates them. This shows the equivalence of the geometric picture that deals with triangles, segments, faces, etc., with the combinatorial picture that deals with rearrangements of ordered sequences of numbers.

Monoidal structure

A triangle can be constructed by adjoining a point to a segment. Add one more point and you get a tetrahedron. This process of adding points can be extended to adding together arbitrary simplexes. Indeed, there is a binary operator in \Delta that combines two ordered sequences by stacking one after another.

This operation can be lifted to morphisms, making it a bifunctor. It is associative, so one might ask the question whether it can be used as a tensor product to make \Delta a monoidal category. The only thing missing is the unit object.

The lowest dimensional simplex in \Delta is [0], which represents a point, so it cannot be a unit with respect to our tensor product. Instead we are obliged to add a new object, which is called [-1], and is represented by the empty set. (Incidentally, this is the object that may serve as “the face” of a point.)

With the new object [-1], we get the category \Delta_a, which is called the augmented simplicial category. Since the unit and associativity laws are satisfied “on the nose” (as opposed to “up to isomorphism”), \Delta_a is a strict monoidal category.

Note: Some authors prefer to name the objects of \Delta_a starting from zero, rather than minus one. They rename [-1] to \mathbf{0}, [0] to \mathbf{1}, etc. This convention makes even more sense if you consider that \mathbf{0} is the initial object and \mathbf{1} the terminal object in \Delta_a.

Monoidal categories are a fertile breeding ground for monoids. Indeed, the object [0] in \Delta_a is a monoid. It is equipped with two morphisms that act like unit and multiplication. It has an incoming morphism from the monoidal unit [-1]–the morphism that’s the precursor of the face map that assigns the empty set to every point. This morphism can be used as the unit \eta of our monoid. It also has an incoming morphism from [1] (which happens to be the tensorial square of [0]). It’s the precursor of the degeneracy map that creates a segment from a single point. This morphism is the multiplication \mu of our monoid. Unit and associativity laws follow from the standard identities between morphisms in \Delta_a.

It turns out that this monoid ([0], \eta, \mu) in \Delta_a is the mother of all monoids in strict monoidal categories. It can be shown that, for any monoid m in any strict monoidal category C, there is a unique strict monoidal functor F from \Delta_a to C that maps the monoid [0] to the monoid m. The category \Delta_a has exactly the right structure, and nothing more, to serve as the pattern for any monoid we can come up with in a (strict) monoidal category. In particular, since a monad is just a monoid in the (strictly monoidal) category of endofunctors, the augmented simplicial category is behind every monad as well.

One more thing

Incidentally, since \Delta_a is a monoidal category, (contravariant) functors from it to Set are automatically equipped with monoidal structure via Day convolution. The result of Day convolution is a join of simplicial sets. It’s a generalized cone: two simplicial sets together with all possible connections between them. In particular, if one of the sets is just a single point, the result of the join is an actual cone (or a pyramid).

Different shapes

If we are willing to let go of geometric interpretations, we can replace the target category of sets with an arbitrary category. Instead of having a set of simplexes, we’ll end up with an object of simplexes. Simplicial sets become simplicial objects.

Alternatively, we can generalize the source category. As I mentioned before, simplexes are a good choice of primitives because of their geometrical properties–they don’t warp. But if we don’t care about embedding these simplexes in \mathbb{R}^n, we can replace them with cubes of varying dimensions (a one dimensional cube is a segment, a two dimensional cube is a square, and so on). Functors from the category of n-cubes to Set are called cubical sets. An even further generalization replaces simplexes with shapeless globes producing globular sets.

All these generalizations become important tools in studying higher category theory. In an n-category, we naturally encounter various shapes, as reflected in the naming convention: objects are called 0-cells; morphisms, 1-cells; morphisms between morphisms, 2-cells, and so on. These “cells” are often visualized as n-dimensional shapes. If a 1-cell is an arrow, a 2-cell is a (directed) surface spanning two arrows; a 3-cell, a volume between two surfaces; etc. In this way, the shapeless hom-set that connects two objects in a regular category turns into a topologically rich blob in an n-category.

This is even more pronounced in infinity groupoids, which were popularized by homotopy type theory, where we have an infinite tower of bidirectional n-morphisms. The presence or the absence of higher order morphisms between any two morphisms can be visualized as the existence of holes that prevent the morphing of one cell into another. This kind of morphing can be described by homotopies which, in turn, can be described using simplicial, cubical, globular, or even more exotic sets.

Conclusion

I realize that this post might seem a little rambling. I have two excuses: One is that, when I started looking at simplexes, I had no idea where I would end up. One thing led to another and I was totally fascinated by the journey. The other is the realization of how everything is related to everything else in mathematics. You start with simple triangles, you compose and decompose them, you see some structure emerging. Suddenly, the same compositional structure pops up in totally unrelated areas. You see it in algebraic topology, in a monoid in a monoidal category, or in a generalization of a hom-set in an n-category. Why is it so? It seems like there aren’t that many ways of composing things together, and we are forced to keep reusing them over and over again. We can glue them, nail them, or solder them. The way the simplicial category is put together provides a template for one of the universal patterns of composition.

Bibliography

    1. John Baez, A Quick Tour of Basic Concepts in Simplicial Homotopy Theory.
    2. Greg Friedman, An elementary illustrated introduction to simplicial sets.
    3. N. J. Wildberger, Algebraic Topology. An excellent series of videos.


Acknowledgments

I’m grateful to Edward Kmett and Derek Elkins for reviewing the draft and for providing helpful suggestions.


There is a lot of folklore about various data types that pop up in discussions about lenses. For instance, it’s known that FunList and Bazaar are equivalent, although I haven’t seen a proof of that. Since both data structures appear in the context of Traversable, which is of great interest to me, I decided to do some research. In particular, I was interested in translating these data structures into constructs in category theory. This is a continuation of my previous blog posts on free monoids and free applicatives. Here’s what I have found out:

  • FunList is a free applicative generated by the Store functor. This can be shown by expressing the free applicative construction using Day convolution.
  • Using the Yoneda lemma in the category of applicative functors, I can show that Bazaar is equivalent to FunList.

Let’s start with some definitions. FunList was first introduced by Twan van Laarhoven in his blog. Here’s a (slightly generalized) Haskell definition:

data FunList a b t = Done t 
                   | More a (FunList a b (b -> t))

It’s a non-regular inductive data structure, in the sense that its data constructor is recursively called with a different type, here the function type b->t. FunList is a functor in t, which can be written categorically as:

L_{a b} t = t + a \times L_{a b} (b \to t)

where b \to t is a shorthand for the hom-set Set(b, t).

Strictly speaking, a recursive data structure is defined as an initial algebra for a higher-order functor. I will show that the higher order functor in question can be written as:

A_{a b} g = I + \sigma_{a b} \star g

where \sigma_{a b} is the (indexed) store comonad, which can be written as:

\sigma_{a b} s = \Delta_a s \times C(b, s)

Here, \Delta_a is the constant functor, and C(b, -) is the hom-functor. In Haskell, this is equivalent to:

newtype Store a b s = Store (a, b -> s)

The standard (non-indexed) Store comonad is obtained by identifying a with b and it describes the objects of the slice category C/s (morphisms are functions f : a \to a' that make the obvious triangles commute).

If you’ve read my previous blog posts, you may recognize in A_{a b} the functor that generates a free applicative functor (or, equivalently, a free monoidal functor). Its fixed point can be written as:

L_{a b} = I + \sigma_{a b} \star L_{a b}

The star stands for Day convolution–in Haskell expressed as an existential data type:

data Day f g s where
  Day :: f a -> g b -> ((a, b) -> s) -> Day f g s

Intuitively, L_{a b} is a “list of” Store functors concatenated using Day convolution. An empty list is the identity functor, a one-element list is the Store functor, a two-element list is the Day convolution of two Store functors, and so on…

In Haskell, we would express it as:

data FunList a b t = Done t 
                   | More ((Day (Store a b) (FunList a b)) t)

To show the equivalence of the two definitions of FunList, let’s expand the definition of Day convolution inside A_{a b}:

(A_{a b} g) t = t + \int^{c d} (\Delta_a c \times C(b, c)) \times g d \times C(c \times d, t)

The coend \int^{c d} corresponds, in Haskell, to the existential data type we used in the definition of Day.

Since we have the hom-functor C(b, c) under the coend, the first step is to use the co-Yoneda lemma to “perform the integration” over c, which replaces c with b everywhere. We get:

t + \int^d \Delta_a b \times g d \times C(b \times d, t)

We can then evaluate the constant functor and use the currying adjunction:

C(b \times d, t) \cong C(d, b \to t)

to get:

t + \int^d a \times g d \times C(d, b \to t)

Applying the co-Yoneda lemma again, we replace d with b \to t:

t + a \times g (b \to t)

This is exactly the functor that generates FunList. So FunList is indeed the free applicative generated by Store.

All transformations in this derivation were natural isomorphisms.

Now let’s switch our attention to Bazaar, which can be defined as:

type Bazaar a b t = forall f. Applicative f => (a -> f b) -> f t

(The actual definition of Bazaar in the lens library is even more general–it’s parameterized by a profunctor in place of the arrow in a -> f b.)

The universal quantification in the definition of Bazaar immediately suggests the application of my favorite double Yoneda trick in the functor category: The set of natural transformations (morphisms in the functor category) between two functors (objects in the functor category) is isomorphic, through Yoneda embedding, to the following end in the functor category:

Nat(h, g) \cong \int_{f \colon [C, Set]} Set(Nat(g, f), Nat(h, f))

The end is equivalent (modulo parametricity) to Haskell forall. Here, the sets of natural transformations between pairs of functors are just hom-functors in the functor category and the end over f is a set of higher-order natural transformations between them.

In the double Yoneda trick we carefully select the two functors g and h to be either representable, or somehow related to representables.

The universal quantification in Bazaar is limited to applicative functors, so we’ll pick our two functors to be free applicatives. We’ve seen previously that the higher-order functor that generates free applicatives has the form:

F g = Id + g \star F g

Here’s the version of the Yoneda embedding in which f varies over all applicative functors in the category App, and g and h are arbitrary functors in [C, Set]:

App(F h, F g) \cong \int_{f \colon App} Set(App(F g, f), App(F h, f))

The free functor F is the left adjoint to the forgetful functor U:

App(F g, f) \cong [C, Set](g, U f)

Using this adjunction, we arrive at:

[C, Set](h, U (F g)) \cong \int_{f \colon App} Set([C, Set](g, U f), [C, Set](h, U f))

We’re almost there–we just need to carefully pick the functors g and h. In order to arrive at the definition of Bazaar, we want:

g = \sigma_{a b} = \Delta_a \times C(b, -)

h = C(t, -)

The right hand side becomes:

\int_{f \colon App} Set\big(\int_c Set (\Delta_a c \times C(b, c), (U f) c), \int_c Set (C(t, c), (U f) c)\big)

where I represented natural transformations as ends. The first term can be curried:

Set \big(\Delta_a c \times C(b, c), (U f) c\big) \cong Set\big(C(b, c), \Delta_a c \to (U f) c \big)

and the end over c can be evaluated using the Yoneda lemma. So can the second term. Altogether, the right hand side becomes:

\int_{f \colon App} Set\big(a \to (U f) b, (U f) t\big)

In Haskell notation, this is just the definition of Bazaar:

forall f. Applicative f => (a -> f b) -> f t

The left hand side can be written as:

\int_c Set(h c, (U (F g)) c)

Since we have chosen h to be the hom-functor C(t, -), we can use the Yoneda lemma to “perform the integration” and arrive at:

(U (F g)) t

With our choice of g = \sigma_{a b}, this is exactly the free applicative generated by Store–in other words, FunList.

This proves the equivalence of Bazaar and FunList. Notice that this proof is only valid for Set-valued functors, although a generalization to the enriched setting is relatively straightforward.

There is another family of functors, Traversable, that uses universal quantification over applicatives:

class (Functor t, Foldable t) => Traversable t where
  traverse :: forall f. Applicative f => (a -> f b) -> t a -> f (t b)

The same double Yoneda trick can be applied to it to show that it’s related to Bazaar. There is, however, a much simpler derivation, suggested to me by Derek Elkins, which works by changing the order of arguments:

traverse :: t a -> (forall f. Applicative f => (a -> f b) -> f (t b))

which is equivalent to:

traverse :: t a -> Bazaar a b (t b)

In view of the equivalence between Bazaar and FunList, we can also write it as:

traverse :: t a -> FunList a b (t b)

Note that this is somewhat similar to the definition of toList:

toList :: Foldable t => t a -> [a]

In a sense, FunList is able to freely accumulate the effects of a traversable, so that they can be interpreted later.

Acknowledgments

I’m grateful to Edward Kmett and Derek Elkins for many discussions and valuable insights.


Abstract

The use of free monads, free applicatives, and cofree comonads lets us separate the construction of (often effectful or context-dependent) computations from their interpretation. In this paper I show how the ad hoc process of writing interpreters for these free constructions can be systematized using the language of higher order algebras (coalgebras) and catamorphisms (anamorphisms).

Introduction

Recursive schemes [meijer] are an example of a successful application of concepts from category theory to programming. The idea is that recursive data structures can be defined as initial algebras of functors. This allows a separation of concerns: the functor describes the local shape of the data structure, and the fixed point combinator builds the recursion. Operations over data structures can likewise be separated into shallow, non-recursive computations described by algebras, and generic recursive procedures described by catamorphisms. In this way, data structures often replace control structures in driving computations.

Since functors also form a category, it’s possible to define functors acting on functors. Such higher order functors show up in a number of free constructions, notably free monads, free applicatives, and cofree comonads. These free constructions have good composability properties and they provide means of separating the creation of effectful computations from their interpretation.

This paper’s contribution is to systematize the construction of such interpreters. The idea is that free constructions arise as fixed points of higher order functors, and therefore can be approached with the same algebraic machinery as recursive data structures, only at a higher level. In particular, interpreters can be constructed as catamorphisms or anamorphisms of higher order algebras/coalgebras.

Initial Algebras and Catamorphisms

The canonical example of a data structure that can be described as an initial algebra of a functor is a list. In Haskell, a list can be defined recursively:

data List a = Nil | Cons a (List a)

There is an underlying non-recursive functor:

data ListF a x = NilF | ConsF a x
instance Functor (ListF a) where
  fmap f NilF = NilF
  fmap f (ConsF a x) = ConsF a (f x)

Once we have a functor, we can define its algebras. An algebra consists of a carrier c and a structure map (evaluator). An algebra can be defined for an arbitrary functor f:

type Algebra f c = f c -> c

Here’s an example of a simple list algebra, with Int as its carrier:

sum :: Algebra (ListF Int) Int
sum NilF = 0
sum (ConsF a c) = a + c

Algebras for a given functor form a category. The initial object in this category (if it exists) is called the initial algebra. In Haskell, we call the carrier of the initial algebra Fix f. Its structure map is a function:

f (Fix f) -> Fix f

By Lambek’s lemma, the structure map of the initial algebra is an isomorphism. In Haskell, this isomorphism is given by a pair of functions: the constructor In and the destructor out of the fixed point combinator:

newtype Fix f = In { out :: f (Fix f) }

When applied to the list functor, the fixed point gives rise to an alternative definition of a list:

type List a = Fix (ListF a)

The initiality of the algebra means that there is a unique algebra morphism from it to any other algebra. This morphism is called a catamorphism and, in Haskell, can be expressed as:

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

A list catamorphism is known as a fold. Since the list functor is a sum type, its algebra consists of a value—the result of applying the algebra to NilF—and a function of two variables that corresponds to the ConsF constructor. You may recognize those two as the arguments to foldr:

foldr :: (a -> c -> c) -> c -> [a] -> c
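
As a quick sanity check, here’s a tiny list built directly with the fixed point and folded with the sum algebra (the name xs and the values are mine):

xs :: Fix (ListF Int)
xs = In (ConsF 1 (In (ConsF 2 (In NilF))))

-- cata sum xs evaluates to 3, just like foldr (+) 0 [1, 2]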

The list functor is interesting because its fixed point is a free monoid. In category theory, monoids are special objects in monoidal categories—that is, categories that define a product of two objects. In Haskell, a pair type plays the role of such a product, with the unit type as its unit (up to isomorphism).

As you can see, the list functor is the sum of a unit and a product. This formula can be generalized to an arbitrary monoidal category with a tensor product \otimes and a unit 1:

L\, a\, x = 1 + a \otimes x

Its initial algebra is a free monoid.

Higher Algebras

In category theory, once you’ve performed a construction in one category, it’s easy to perform it in another category that shares similar properties. In Haskell, this might require reimplementing the construction.

We are interested in the category of endofunctors, where objects are endofunctors and morphisms are natural transformations. Natural transformations are represented in Haskell as polymorphic functions:

type f :~> g = forall a. f a -> g a
infixr 0 :~>

In the category of endofunctors we can define (higher order) functors, which map functors to functors and natural transformations to natural transformations:

class HFunctor hf where
  hfmap :: (g :~> h) -> (hf g :~> hf h)
  ffmap :: Functor g => (a -> b) -> hf g a -> hf g b

The first function, hfmap, lifts a natural transformation; the second, ffmap, witnesses the fact that the result of a higher order functor is again a functor.

An algebra for a higher order functor hf consists of a functor f (the carrier object in the functor category) and a natural transformation (the structure map):

type HAlgebra hf f = hf f :~> f

As with regular functors, we can define an initial algebra using the fixed point combinator for higher order functors:

newtype FixH hf a = InH { outH :: hf (FixH hf) a }

Similarly, we can define a higher order catamorphism:

hcata :: HFunctor h => HAlgebra h f -> FixH h :~> f
hcata halg = halg . hfmap (hcata halg) . outH

The question is, are there any interesting examples of higher order functors and algebras that could be used to solve real-life programming problems?

Free Monad

We’ve seen the usefulness of lists, or free monoids, for structuring computations. Let’s see if we can generalize this concept to higher order functors.

The definition of a list relies on the monoidal structure of the underlying category. It turns out that there are multiple monoidal structures of interest that can be defined in the category of functors. The simplest one defines the product of two endofunctors as their composition. Any two endofunctors can be composed. The unit of functor composition is the identity functor.

If you picture endofunctors as containers, you can easily imagine a tree of lists, or a list of Maybes.

A monoid based on this particular monoidal structure in the endofunctor category is a monad. It’s an endofunctor m equipped with two natural transformations representing unit and multiplication:

class Monad m where
  eta :: Identity    :~> m
  mu  :: Compose m m :~> m

In Haskell, the components of these natural transformations are known as return and join.

A straightforward generalization of the list functor to the functor category can be written as:

L\, f\, g = 1 + f \circ g

or, in Haskell,

type FunctorList f g = Identity :+: Compose f g

where we used the operator :+: to define the coproduct of two functors:

data (f :+: g) e = Inl (f e) | Inr (g e)
infixr 7 :+:

Using more conventional notation, FunctorList can be written as:

data MonadF f g a = 
    DoneM a 
  | MoreM (f (g a))

We’ll use it to generate a free monoid in the category of endofunctors. First of all, let’s show that it’s indeed a higher order functor in the second argument g:

instance Functor f => HFunctor (MonadF f) where
  hfmap _   (DoneM a)  = DoneM a
  hfmap nat (MoreM fg) = MoreM $ fmap nat fg
  ffmap h (DoneM a)    = DoneM (h a)
  ffmap h (MoreM fg)   = MoreM $ fmap (fmap h) fg

In category theory, because of size issues, this functor doesn’t always have a fixed point. For most common choices of f (e.g., for algebraic data types), the initial higher order algebra for this functor exists, and it generates a free monad. In Haskell, this free monad can be defined as:

type FreeMonad f = FixH (MonadF f)

We can show that FreeMonad is indeed a monad by implementing return and bind:

instance Functor f => Monad (FreeMonad f) where
  return = InH . DoneM
  (InH (DoneM a))    >>= k = k a
  (InH (MoreM ffra)) >>= k = 
        InH (MoreM (fmap (>>= k) ffra))
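
Under the modern Functor-Applicative-Monad hierarchy, this instance also requires Functor and Applicative instances; here’s a minimal sketch:

instance Functor f => Functor (FreeMonad f) where
  fmap h (InH (DoneM a))  = InH (DoneM (h a))
  fmap h (InH (MoreM fg)) = InH (MoreM (fmap (fmap h) fg))

instance Functor f => Applicative (FreeMonad f) where
  pure = InH . DoneM
  mf <*> ma = mf >>= \f -> fmap f ma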

Free monads have many applications in programming. They can be used to write generic monadic code, which can then be interpreted in different monads. A very useful property of free monads is that they can be composed using coproducts. This follows from a theorem in category theory which states that left adjoints preserve coproducts (or, more generally, colimits). Free constructions are, by definition, left adjoints to forgetful functors. This property of free monads was explored by Swierstra [swierstra] in his solution to the expression problem. I will use an example based on his paper to show how to construct monadic interpreters using higher order catamorphisms.

Free Monad Example

A stack-based calculator can be implemented directly using the state monad. Since this is a very simple example, it will be instructive to re-implement it using the free monad approach.

We start by defining a functor, in which the free parameter k represents the continuation:

data StackF k  = Push Int k
               | Top (Int -> k)
               | Pop k            
               | Add k
               deriving Functor

We use this functor to build a free monad:

type FreeStack = FreeMonad StackF

You may think of the free monad as a tree with nodes that are defined by the functor StackF. The unary constructors, like Add or Pop, create linear list-like branches; but the Top constructor branches out with one child per integer.

The level of indirection we get by separating recursion from the functor makes constructing free monad trees syntactically challenging, so it makes sense to define a helper function:

liftF :: (Functor f) => f r -> FreeMonad f r
liftF fr = InH $ MoreM $ fmap (InH . DoneM) fr

With this function, we can define smart constructors that build leaves of the free monad tree:

push :: Int -> FreeStack ()
push n = liftF (Push n ())

pop :: FreeStack ()
pop = liftF (Pop ())

top :: FreeStack Int
top = liftF (Top id)

add :: FreeStack ()
add = liftF (Add ())

All these preparations finally pay off when we are able to create small programs using do notation:

calc :: FreeStack Int
calc = do
  push 3
  push 4
  add
  x <- top
  pop
  return x

Of course, this program does nothing but build a tree. We need a separate interpreter to do the calculation. We’ll interpret our program in the state monad, with state implemented as a stack (list) of integers:

type MemState = State [Int]

The trick is to define a higher order algebra for the functor that generates the free monad and then use a catamorphism to apply it to the program. Notice that implementing the algebra is a relatively simple procedure because we don’t have to deal with recursion. All we need is to case-analyze the shallow constructors for the free monad functor MonadF, and then case-analyze the shallow constructors for the functor StackF.

runAlg :: HAlgebra (MonadF StackF) MemState
runAlg (DoneM a)  = return a
runAlg (MoreM ex) = 
  case ex of
    Top  ik  -> get >>= ik  . head
    Pop  k   -> get >>= put . tail   >> k
    Push n k -> get >>= put . (n : ) >> k
    Add  k   -> do (a: b: s) <- get
                   put (a + b : s)
                   k

The catamorphism converts the program calc into a state monad action, which can be run over an empty initial stack:

runState (hcata runAlg calc) []
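
For the program calc above, this should evaluate to (7, []): pushing 3 and 4 leaves [4, 3] on the stack, Add replaces the two numbers with 7, Top reads it, and Pop empties the stack.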

The real bonus is the freedom to define other interpreters by simply switching the algebras. Here’s an algebra whose carrier is the Const functor:

showAlg :: HAlgebra (MonadF StackF) (Const String)

showAlg (DoneM a) = Const "Done!"
showAlg (MoreM ex) = Const $
  case ex of
    Push n k -> 
      "Push " ++ show n ++ ", " ++ getConst k
    Top ik -> 
      "Top, " ++ getConst (ik 42)
    Pop k -> 
      "Pop, " ++ getConst k
    Add k -> 
      "Add, " ++ getConst k

Running the catamorphism over this algebra will produce a listing of our program:

getConst $ hcata showAlg calc

> "Push 3, Push 4, Add, Top, Pop, Done!"

Free Applicative

There is another monoidal structure that exists in the category of functors. In general, this structure will work for functors from an arbitrary monoidal category C to Set. Here, we’ll restrict ourselves to endofunctors on Set. The product of two functors is given by Day convolution, which can be implemented in Haskell using an existential type:

data Day f g c where
  Day :: f a -> g b -> ((a, b) -> c) -> Day f g c

The intuition is that a Day convolution contains a container of some as, and another container of some bs, together with a function that can convert any pair (a, b) to c.

Day convolution is a higher order functor:

instance HFunctor (Day f) where
  hfmap nat (Day fx gy xyt) = Day fx (nat gy) xyt
  ffmap h   (Day fx gy xyt) = Day fx gy (h . xyt)

In fact, because Day convolution is symmetric up to isomorphism, it is automatically functorial in both arguments.

To complete the monoidal structure, we also need a functor that could serve as a unit with respect to Day convolution. In general, this would be the hom-functor from the monoidal unit:

C(1, -)

In our case, since 1 is the singleton set, this functor reduces to the identity functor.

We can now define monoids in the category of functors with the monoidal structure given by Day convolution. These monoids are equivalent to lax monoidal functors which, in Haskell, form the class:

class Functor f => Monoidal f where
  unit  :: f ()
  (>*<) :: f x -> f y -> f (x, y)

Lax monoidal functors are equivalent to applicative functors [mcbride], as seen in this implementation of pure and <*>:

  pure  :: a -> f a
  pure a = fmap (const a) unit
  (<*>) :: f (a -> b) -> f a -> f b
  fs <*> as = fmap (uncurry ($)) (fs >*< as)
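
The inverse direction, recovering the Monoidal operations from an Applicative, is just as short (a sketch):

  unit :: f ()
  unit = pure ()
  (>*<) :: f x -> f y -> f (x, y)
  fx >*< fy = (,) <$> fx <*> fy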

We can now use the same general formula, but with Day convolution as the product:

L\, f\, g = 1 + f \star g

to generate a free monoidal (applicative) functor:

data FreeF f g t =
      DoneF t
    | MoreF (Day f g t)

This is indeed a higher order functor:

instance HFunctor (FreeF f) where
    hfmap _ (DoneF x)     = DoneF x
    hfmap nat (MoreF day) = MoreF (hfmap nat day)
    ffmap f (DoneF x)     = DoneF (f x)
    ffmap f (MoreF day)   = MoreF (ffmap f day)

and it generates a free applicative functor as its initial algebra:

type FreeA f = FixH (FreeF f)

Free Applicative Example

The following example is taken from the paper by Capriotti and Kaposi [capriotti]. It’s an option parser for a command line tool, whose result is a user record of the following form:

data User = User
  { username :: String 
  , fullname :: String
  , uid      :: Int
  } deriving Show

A parser for an individual option is described by a functor that contains the name of the option, an optional default value for it, and a reader from string:

data Option a = Option
  { optName    :: String
  , optDefault :: Maybe a
  , optReader  :: String -> Maybe a 
  } deriving Functor

Since we don’t want to commit to a particular parser, we’ll create a parsing action using a free applicative functor:

userP :: FreeA Option User
userP  = pure User 
  <*> one (Option "username" (Just "John")  Just)
  <*> one (Option "fullname" (Just "Doe")   Just)
  <*> one (Option "uid"      (Just 0)       readInt)

where readInt is a reader of integers:

readInt :: String -> Maybe Int
readInt s = readMaybe s  -- readMaybe comes from Text.Read

and we used the following smart constructors:

one :: f a -> FreeA f a
one fa = InH $ MoreF $ Day fa (done ()) fst

done :: a -> FreeA f a
done a = InH $ DoneF a
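
Note that userP uses pure and <*> at the type FreeA Option, so it relies on FreeA f being an Applicative. Here’s a minimal sketch of that instance (my own, built from the definitions above):

instance Functor f => Functor (FreeA f) where
  fmap h (InH (DoneF t))   = InH (DoneF (h t))
  fmap h (InH (MoreF day)) = InH (MoreF (ffmap h day))

instance Functor f => Applicative (FreeA f) where
  pure = InH . DoneF
  InH (DoneF g) <*> fa = fmap g fa
  InH (MoreF (Day fx gy xyt)) <*> fa =
    InH (MoreF (Day fx ((,) <$> gy <*> fa)
                       (\(x, (y, a)) -> xyt (x, y) a)))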

We are now free to define different algebras to evaluate the free applicative expressions. Here’s one that collects all the defaults:

alg :: HAlgebra (FreeF Option) Maybe
alg (DoneF a) = Just a
alg (MoreF (Day oa mb f)) = 
  fmap f (optDefault oa >*< mb)

I used the monoidal instance for Maybe:

instance Monoidal Maybe where
  unit = Just ()
  Just x >*< Just y = Just (x, y)
  _ >*< _ = Nothing

This algebra can be run over our little program using a catamorphism:

parserDef :: FreeA Option a -> Maybe a
parserDef = hcata alg

And here’s an algebra that collects the names of all the options:

alg2 :: HAlgebra (FreeF Option) (Const String)
alg2 (DoneF a) = Const "."
alg2 (MoreF (Day oa bs f)) = 
  fmap f (Const (optName oa) >*< bs)

Again, this uses a monoidal instance for Const:

instance Monoid m => Monoidal (Const m) where
  unit = Const mempty
  Const a >*< Const b = Const (a <> b)

We can also define the Monoidal instance for IO:

instance Monoidal IO where
  unit = return ()
  ax >*< ay = do a <- ax
                 b <- ay
                 return (a, b)

This allows us to interpret the parser in the IO monad:

alg3 :: HAlgebra (FreeF Option) IO
alg3 (DoneF a) = return a
alg3 (MoreF (Day oa bs f)) = do
    putStrLn $ optName oa
    s <- getLine
    let ma = optReader oa s
        a = fromMaybe (fromJust (optDefault oa)) ma
    fmap f $ return a >*< bs
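
Putting it all together, a hypothetical readUser action would run the parser interactively:

readUser :: IO User
readUser = hcata alg3 userP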

Cofree Comonad

Every construction in category theory has its dual—the result of reversing all the arrows. The dual of a product is a coproduct, the dual of an algebra is a coalgebra, and the dual of a monad is a comonad.

Let’s start by defining a higher order coalgebra consisting of a carrier f, which is a functor, and a natural transformation:

type HCoalgebra hf f = f :~> hf f

An initial algebra is dualized to a terminal coalgebra. In Haskell, both are the results of applying the same fixed point combinator, reflecting the fact that Lambek’s lemma is self-dual. The dual to a catamorphism is an anamorphism. Here is its higher order version:

hana :: HFunctor hf 
     => HCoalgebra hf f -> (f :~> FixH hf)
hana hcoa = InH . hfmap (hana hcoa) . hcoa

The formula we used to generate free monoids:

1 + a \otimes x

dualizes to:

1 \times a \otimes x

and can be used to generate cofree comonoids.

A cofree functor is the right adjoint to the forgetful functor. Just as the left adjoint preserves coproducts, the right adjoint preserves products. One can therefore easily combine comonads using products (if the need arises to solve the coexpression problem).

Just like the monad is a monoid in the category of endofunctors, a comonad is a comonoid in the same category. The functor that generates a cofree comonad has the form:

type ComonadF f g = Identity :*: Compose f g

where the product of functors is defined as:

data (f :*: g) e = Both (f e) (g e)
infixr 6 :*:

Here’s the more familiar form of this functor:

data ComonadF f g e = e :< f (g e)

It is indeed a higher order functor, as witnessed by this instance:

instance Functor f => HFunctor (ComonadF f) where
  hfmap nat (e :< fge) = e :< fmap nat fge
  ffmap h (e :< fge) = h e :< fmap (fmap h) fge

A cofree comonad is the terminal coalgebra for this functor and can be written as a fixed point:

type Cofree f = FixH (ComonadF f)

Indeed, for any functor f, Cofree f is a comonad:

instance Functor f => Comonad (Cofree f) where
  extract (InH (e :< fge)) = e
  duplicate fr@(InH (e :< fge)) = 
                InH (fr :< fmap duplicate fge)

Cofree Comonad Example

The canonical example of a cofree comonad is an infinite stream:

type Stream = Cofree Identity

We can use this stream to sample a function. We’ll encapsulate this function inside the following functor (in fact, itself a comonad):

data Store a x = Store a (a -> x) 
    deriving Functor

We can use a higher order coalgebra to unpack the Store into a stream:

streamCoa :: HCoalgebra (ComonadF Identity) (Store Int)
streamCoa (Store n f) = 
    f n :< (Identity $ Store (n + 1) f)

The actual unpacking is a higher order anamorphism:

stream :: Store Int a -> Stream a
stream = hana streamCoa

We can use it, for instance, to generate a list of squares of natural numbers:

stream (Store 0 (^2))

Since, in Haskell, the same fixed point defines a terminal coalgebra as well as an initial algebra, we are free to construct algebras and catamorphisms for streams. Here’s an algebra that converts a stream to an infinite list:

listAlg :: HAlgebra (ComonadF Identity) []
listAlg (a :< Identity as) = a : as

toList :: Stream a -> [a]
toList = hcata listAlg
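
For example, the following expression should evaluate to [0, 1, 4, 9, 16]:

take 5 $ toList $ stream (Store 0 (^2))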

Future Directions

In this paper I concentrated on one type of higher order functor:

1 + a \otimes x

and its dual. This would be equivalent to studying folds for lists and unfolds for streams. But the structure of the functor category is richer than that. Just like basic data types can be combined into algebraic data types, so can functors. Moreover, besides the usual sums and products, the functor category admits at least two additional monoidal structures generated by functor composition and Day convolution.

Another potentially fruitful area of exploration is the profunctor category, which is also equipped with two monoidal structures, one defined by profunctor composition, and another by Day convolution. A free monoid with respect to profunctor composition is the basis of the Haskell Arrow library [jaskelioff]. Profunctors also play an important role in the Haskell lens library [kmett].

Bibliography

  1. Erik Meijer, Maarten Fokkinga, and Ross Paterson, Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire
  2. Conor McBride, Ross Paterson, Idioms: applicative programming with effects
  3. Paolo Capriotti, Ambrus Kaposi, Free Applicative Functors
  4. Wouter Swierstra, Data types a la carte
  5. Exequiel Rivas and Mauro Jaskelioff, Notions of Computation as Monoids
  6. Edward Kmett, Lenses, Folds and Traversals
  7. Richard Bird and Lambert Meertens, Nested Datatypes
  8. Patricia Johann and Neil Ghani, Initial Algebra Semantics is Enough!

Functors from a monoidal category C to Set form a monoidal category with Day convolution as product. A monoid in this category is a lax monoidal functor. We define an initial algebra using a higher order functor and show that it corresponds to a free lax monoidal functor.

Recently I’ve been obsessing over monoidal functors. I have already written two blog posts, one about free monoidal functors and one about free monoidal profunctors. I followed some ideas from category theory but, being a programmer, I leaned more towards writing code than being preoccupied with mathematical rigor. That left me longing for more elegant proofs of the kind I’ve seen in mathematical literature.

I believe that there isn’t that much difference between programming and math. There is a whole spectrum of abstractions ranging from assembly language, weakly typed languages, strongly typed languages, functional programming, set theory, type theory, category theory, and homotopy type theory. Each language comes with its own bag of tricks. Even within one language one starts with some relatively low level encodings and, with experience, progresses towards higher abstractions. I’ve seen it in Haskell, where I started by hand coding recursive functions, only to realize that I can be more productive using bulk operations on types, then building recursive data structures and applying recursive schemes, eventually diving into categories of functors and profunctors.

I’ve been collecting my own bag of mathematical tricks, mostly by reading papers and, more recently, talking to mathematicians. I’ve found that mathematicians are happy to share their knowledge even with outsiders like me. So when I got stuck trying to clean up my monoidal functor code, I reached out to Emily Riehl, who forwarded my query to Alexander Campbell from the Centre for Australian Category Theory. Alex’s answer was a very elegant proof of what I was clumsily trying to show in my previous posts. In this blog post I will explain his approach. I should also mention that most of the results presented in this post have already been covered in a comprehensive paper by Rivas and Jaskelioff, Notions of Computation as Monoids.

Lax Monoidal Functors

To properly state the problem, I’ll have to start with a lot of preliminaries. This will require some prior knowledge of category theory, all within the scope of my blog/book.

We start with a monoidal category C, that is, a category in which you can “multiply” objects using some kind of a tensor product \otimes. For any pair of objects a and b there is an object a \otimes b; and this mapping is functorial in both arguments (that is, you can also “multiply” morphisms). A monoidal category will also have a special object I that is the unit of multiplication. In general, the unit and associativity laws are satisfied up to isomorphism:

\lambda : I \otimes a \cong a

\rho : a \otimes I \cong a

\alpha : (a \otimes b) \otimes c \cong a \otimes (b \otimes c)

These isomorphisms are called, respectively, the left and right unitors, and the associator.

The most familiar example of a monoidal category is the category of types and functions, in which the tensor product is the cartesian product (pair type) and the unit is the unit type ().

Let’s now consider functors from C to the category of sets, Set. These functors also form a category called [C, Set], in which morphisms between any two functors are natural transformations.

In Haskell, a natural transformation is approximated by a polymorphic function:

type f ~> g = forall x. f x -> g x

The category Set is monoidal, with cartesian product \times serving as a tensor product, and the singleton set 1 as the unit.

We are interested in functors in [C, Set] that preserve the monoidal structure. Such a functor should map the tensor product in C to the cartesian product in Set and the unit I to the singleton set 1. Accordingly, a strong monoidal functor F comes with two isomorphisms:

F a \times F b \cong F (a \otimes b)

1 \cong F I

We are interested in a weaker version of a monoidal functor called a lax monoidal functor, which is equipped with a one-way natural transformation:

\mu : F a \times F b \to F (a \otimes b)

and a one-way morphism:

\eta : 1 \to F I

A lax monoidal functor must also preserve unit and associativity laws.

[Diagram: the associativity law for a lax monoidal functor. \alpha is the associator in the appropriate category (top arrow, in Set; bottom arrow, in C).]

In Haskell, a lax monoidal functor can be defined as:

class Monoidal f where
  eta :: () -> f ()
  mu  :: (f a, f b) -> f (a, b)

It’s also known as the applicative functor.

Day Convolution and Monoidal Functors

It turns out that our category of functors [C, Set] is also equipped with monoidal structure. Two functors F and G can be “multiplied” using Day convolution:

(F \star G) c = \int^{a b} C(a \otimes b, c) \times F a \times G b

Here, C(a \otimes b, c) is the hom-set, or the set of morphisms from a \otimes b to c. The integral sign stands for a coend, which can be interpreted as a generalization of an (infinite) coproduct (modulo some identifications). An element of this coend can be constructed by injecting a triple consisting of a morphism from C(a \otimes b, c), an element of the set F a, and an element of the set G b, for some a and b.

In Haskell, a coend corresponds to an existential type, so the Day convolution can be defined as:

data Day f g c where
  Day :: ((a, b) -> c, f a, g b) -> Day f g c

(The actual definition uses currying.)

The unit with respect to Day convolution is the hom-functor:

C(I, -)

which assigns to every object c the set of morphisms C(I, c) and acts on morphisms by post-composition.

The proof that this is the unit is instructive, as it uses the standard trick: the co-Yoneda lemma. In the coend form, the co-Yoneda lemma reads, for a covariant functor F:

\int^x C(x, a) \times F x \cong F a

and for a contravariant functor H:

\int^x C(a, x) \times H x \cong H a

(The mnemonics is that the integration variable must appear twice, once in the negative, and once in the positive position. An argument to a contravariant functor is in a negative position.)

Indeed, substituting C(I, -) for the first functor in Day convolution produces:

(C(I, -) \star G) c = \int^{a b} C(a \otimes b, c) \times C(I, a) \times G b

which can be “integrated” over a using the co-Yoneda lemma to yield:

\int^{b} C(I \otimes b, c) \times G b

and, since I is the unit of the tensor product, this can be further “integrated” over b to give G c. The right unit law is analogous.

To summarize, we are dealing with three monoidal categories: C with the tensor product \otimes and unit I, Set with the cartesian product and singleton 1, and a functor category [C, Set] with Day convolution and unit C(I, -).

A Monoid in [C, Set]

A monoidal category can be used to define monoids. A monoid is an object m equipped with two morphisms — unit and multiplication:

\eta : I \to m

\mu : m \otimes m \to m

These morphisms must satisfy unit and associativity conditions, which are best illustrated using commuting diagrams.

[Diagram: monoid unit laws. λ and ρ are the unitors.]

[Diagram: monoid associativity law. α is the associator.]

This definition of a monoid can be translated directly to Haskell:

class Monoid m where
  eta :: () -> m
  mu  :: (m, m) -> m

It so happens that a lax monoidal functor is exactly a monoid in our functor category [C, Set]. Since objects in this category are functors, a monoid is a functor F equipped with two natural transformations:

\eta : C(I, -) \to F

\mu : F \star F \to F

At first sight, these don’t look like the morphisms in the definition of a lax monoidal functor. We need some new tricks to show the equivalence.

Let’s start with the unit. The first trick is to consider not one natural transformation but the whole hom-set:

[C, Set](C(I, -), F)

The set of natural transformations can be represented as an end (which, incidentally, corresponds to the forall quantifier in the Haskell definition of natural transformations):

\int_c Set(C(I, c), F c)

The next trick is to use the Yoneda lemma which, in the end form reads:

\int_c Set(C(a, c), F c) \cong F a

In more familiar terms, this formula asserts that the set of natural transformations from the hom-functor C(a, -) to F is isomorphic to F a.

There is also a version of the Yoneda lemma for contravariant functors:

\int_c Set(C(c, a), H c) \cong H a

The application of Yoneda to our formula produces F I, which is in one-to-one correspondence with morphisms 1 \to F I.

We can use the same trick of bundling up the natural transformations that define the multiplication \mu:

[C, Set](F \star F, F)

and representing this set as an end over the hom-functor:

\int_c Set((F \star F) c, F c)

Expanding the definition of Day convolution, we get:

\int_c Set(\int^{a b} C(a \otimes b, c) \times F a \times F b, F c)

The next trick is to pull the coend out of the hom-set. This trick relies on the co-continuity of the hom-functor in the first argument: a hom-functor from a colimit is isomorphic to a limit of hom-functors. In programmer-speak: a function from a sum type is equivalent to a product of functions (we call it case analysis). A coend is a generalized colimit, so when we pull it out of a hom-functor, it turns into a limit, or an end. Here’s the general formula, in which p x y is an arbitrary profunctor:

Set(\int^x p x x, y) \cong \int_x Set(p x x, y)
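
In Haskell, the sum-to-product part of this statement can be witnessed directly (a sketch; the names are mine):

toPair :: (Either a b -> c) -> (a -> c, b -> c)
toPair f = (f . Left, f . Right)

fromPair :: (a -> c, b -> c) -> Either a b -> c
fromPair (g, h) = either g h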

Let’s apply it to our formula:

\int_c \int_{a b} Set(C(a \otimes b, c) \times F a \times F b, F c)

We can combine the ends under one integral sign (it’s allowed by the Fubini theorem) and move to the next trick: hom-set adjunction:

Set(a \times b, c) \cong Set(a, b \to c)

In programming this is known as currying. This adjunction exists because Set is a cartesian closed category. We’ll use this adjunction to move F a \times F b to the right:

\int_{a b c} Set(C(a \otimes b, c), (F a \times F b) \to F c)

Using the Yoneda lemma we can “perform the integration” over c to get:

\int_{a b} \big( (F a \times F b) \to F (a \otimes b) \big)

This is exactly the set of natural transformations used in the definition of a lax monoidal functor. We have established a one-to-one correspondence between monoidal multiplication and lax monoidal mapping.

Of course, a complete proof would require translating monoid laws to their lax monoidal counterparts. You can find more details in Rivas and Jaskelioff, Notions of Computation as Monoids.

We’ll use the fact that a monoid in the category [C, Set] is a lax monoidal functor later.

Alternative Derivation

Incidentally, there are shorter derivations of these formulas that use the trick borrowed from the proof of the Yoneda lemma, namely, evaluating things at the identity morphism. (Whenever mathematicians speak of Yoneda-like arguments, this is what they mean.)

Starting from F \star F \to F and plugging in the Day convolution formula, we get:

\int^{a' b'} C(a' \otimes b', c) \times F a' \times F b' \to F c

There is a component of this natural transformation at (a \otimes b) that is the morphism:

\int^{a' b'} C(a' \otimes b', a \otimes b) \times F a' \times F b' \to F (a \otimes b)

This morphism must be defined for all possible values of the coend. In particular, it must be defined for the triple (id_{a \otimes b}, F a, F b), giving us the \mu we seek.

There is also an alternative derivation for the unit: Take the component of the natural transformation \eta at I:

\eta_I : C(I, I) \to F I

C(I, I) is guaranteed to contain at least one element, the identity morphism id_I. We can use \eta_I \, id_I, an element of F I, as the image of the single element of the singleton 1, giving us the required morphism 1 \to F I.

Free Monoid

Given a monoidal category C, we might be able to define a whole lot of monoids in it. These monoids form a category Mon(C). Morphisms in this category correspond to those morphisms in C that preserve monoidal structure.

Consider, for instance, two monoids m and m'. A monoid morphism is a morphism f : m \to m' in C such that the unit of m' is related to the unit of m:

\eta' = f \circ \eta

and similarly for multiplication:

\mu' \circ (f \otimes f) = f \circ \mu

Remember, we assumed that the tensor product is functorial in both arguments, so it can be used to lift a pair of morphisms.

There is an obvious forgetful functor U from Mon(C) to C which, for every monoid, picks its underlying object in C and maps every monoid morphism to its underlying morphism in C.

The left adjoint to this functor, if it exists, will map an object a in C to a free monoid L a.

The intuition is that a free monoid L a is a list of a.

In Haskell, a list is defined recursively:

data List a = Nil | Cons a (List a)

Such a recursive definition can be formalized as a fixed point of a functor. For a list of a, this functor is:

data ListF a x = NilF | ConsF a x

Notice the peculiar structure of this functor. It’s a sum type: The first part is a singleton, which is isomorphic to the unit type (). The second part is a product of a and x. Since the unit type is the unit of the product in our monoidal category of types, we can rewrite this functor symbolically as:

\Phi a x = I + a \otimes x

It turns out that this formula works in any monoidal category that has finite coproducts (sums) that are preserved by the tensor product. For each object a, the fixed point of this functor is the free monoid on a; the assignment a \mapsto L a is the free functor that generates free monoids.

I’ll define what is meant by the fixed point and prove that it defines a monoid. The proof that it’s the result of a free/forgetful adjunction is a bit involved, so I’ll leave it for a future blog post.

Algebras

Let’s consider algebras for the functor F. Such an algebra is defined as an object x called the carrier, and a morphism:

f : F x \to x

called the structure map or the evaluator.

In Haskell, an algebra is defined as:

type Algebra f x = f x -> x

There may be a lot of algebras for a given functor. In fact there is a whole category of them. We define an algebra morphism between two algebras (x, f : F x \to x) and (x', f' : F x' \to x') as a morphism \nu : x \to x' which commutes with the two structure maps:

\nu \circ f = f' \circ F \nu

The initial object in the category of algebras is called the initial algebra, or the fixed point of the functor that generates these algebras. As the initial object, it has a unique algebra morphism to any other algebra. This unique morphism is called a catamorphism.

In Haskell, the fixed point of a functor f is defined recursively:

newtype Fix f = In { out :: f (Fix f) }

with, for instance:

type List a = Fix (ListF a)

A catamorphism is defined as:

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

A list catamorphism is called foldr.

We want to show that the initial algebra L a of the functor:

\Phi a x = I + a \otimes x

is a free monoid. Let’s see under what conditions it is a monoid.

Initial Algebra is a Monoid

In this section I will show you how to concatenate lists the hard way.

We know that the function type b \to c (a.k.a., the exponential c^b) is the right adjoint to the product:

Set(a \times b, c) \cong Set(a, b \to c)

The function type is also called the internal hom.

In a monoidal category it’s sometimes possible to define an internal hom-object, denoted [b, c], as the right adjoint to the tensor product:

curry : C(a \otimes b, c) \cong C(a, [b, c])

If this adjoint exists, the category is called closed monoidal.

In a closed monoidal category, the initial algebra L a of the functor \Phi a x = I + a \otimes x is a monoid. (In particular, a Haskell list of a, which is a fixed point of ListF a, is a monoid.)

To show that, we have to construct two morphisms corresponding to unit and multiplication (in Haskell, empty list and concatenation):

\eta : I \to L a

\mu : L a \otimes L a \to L a

What we know is that L a is a carrier of the initial algebra for \Phi a, so it is equipped with the structure map:

I + a \otimes L a \to L a

which is equivalent to a pair of morphisms:

\alpha : I \to L a

\beta : a \otimes L a \to L a

Notice that, in Haskell, these correspond to the two list constructors: Nil and Cons or, in terms of the fixed point:

nil :: () -> List a
nil () = In NilF

cons :: a -> List a -> List a
cons a as = In (ConsF a as)

We can immediately use \alpha to implement \eta.

The second one, \beta, can be rewritten using the hom adjunction as:

\bar{\beta} = curry \, \beta

\bar{\beta} : a \to [L a, L a]

Notice that, if we could prove that [L a, L a] is a carrier for the same algebra generated by \Phi a, we would know that there is a unique catamorphism from the initial algebra L a:

\kappa_{[L a, L a]} : L a \to [L a, L a]

which, by the hom adjunction, would give us the desired multiplication:

\mu : L a \otimes L a \to L a

Let’s establish some useful lemmas first.

Lemma 1: For any object x in a closed monoidal category, [x, x] is a monoid.

This is a generalization of the idea that endomorphisms form a monoid, in which the identity morphism is the unit and composition is multiplication. Here, the internal hom-object [x, x] generalizes the set of endomorphisms.

Proof: The unit:

\eta : I \to [x, x]

follows, through adjunction, from the unit law in the monoidal category:

\lambda : I \otimes x \to x

(In Haskell, this is a fancy way of writing mempty = id.)

Multiplication takes the form:

\mu : [x, x] \otimes [x, x] \to [x, x]

which is reminiscent of composition of endomorphisms. In Haskell we would say:

mappend = (.)

By adjunction, we get:

curry^{-1} \, \mu : [x, x] \otimes [x, x] \otimes x \to x

We have at our disposal the counit eval of the adjunction:

eval : [x, x] \otimes x \to x

We can apply it twice to get:

\mu = curry (eval \circ (id \otimes eval))

In Haskell, we could express this as:

mu :: ((x -> x), (x -> x)) -> (x -> x)
mu (f, g) = \x -> f (g x)

Here, the counit of the adjunction turns into simple function application.

\square

Lemma 2: For every morphism f : a \to m, where m is a monoid, we can construct an algebra of the functor \Phi a with m as its carrier.

Proof: Since m is a monoid, we have two morphisms:

\eta : I \to m

\mu : m \otimes m \to m

To show that m is a carrier of our algebra, we need two morphisms:

\alpha : I \to m

\beta : a \otimes m \to m

The first one is the same as \eta, the second can be implemented as:

\beta = \mu \circ (f \otimes id)

In Haskell, we would do case analysis:

mapAlg :: Monoid m => ListF a m -> m
mapAlg NilF = mempty
mapAlg (ConsF a m) = f a `mappend` m  -- f :: a -> m is the morphism given in the lemma

\square

We can now build a larger proof. By lemma 1, [L a, L a] is a monoid with:

\mu = curry (eval \circ (id \otimes eval))

We also have a morphism \bar{\beta} : a \to [L a, L a] so, by lemma 2, [L a, L a] is also a carrier for the algebra:

\alpha = \eta

\beta = \mu \circ (\bar{\beta} \otimes id)

It follows that there is a unique catamorphism \kappa_{[L a, L a]} from the initial algebra L a to it, and we know how to use it to implement monoidal multiplication for L a. Therefore, L a is a monoid.

Translating this to Haskell, \bar{\beta} is the curried form of Cons and what we have shown is that concatenation (multiplication of lists) can be implemented as a catamorphism:

conc :: List a -> List a -> List a
conc x y = cata alg x y
  where alg NilF        = id
        alg (ConsF a t) = (cons a) . t

The type:

List a -> (List a -> List a)

(parentheses added for emphasis) corresponds to L a \to [L a, L a].

It’s interesting that concatenation can be described in terms of the monoid of list endomorphisms. Think of turning an element a of the list into a transformation, which prepends this element to its argument (that’s what \bar{\beta} does). These transformations form a monoid. We have an algebra that turns the unit I into an identity transformation on lists, and a pair a \otimes t (where t is a list transformation) into the composite \bar{\beta} a \circ t. The catamorphism for this algebra takes a list L a and turns it into one composite list transformation. We then apply this transformation to another list and get the final result: the concatenation of two lists. \square

Incidentally, lemma 2 also works in reverse: If a monoid m is a carrier of the algebra of \Phi a, then there is a morphism f : a \to m. This morphism can be thought of as inserting generators represented by a into the monoid m.

Proof: if m is both a monoid and a carrier for the algebra \Phi a, we can construct the morphism a \to m by first applying the identity law to go from a to a \otimes I, then applying id_a \otimes \eta to get a \otimes m. This can be right-injected into the coproduct I + a \otimes m and then evaluated down to m using the structure map for the algebra on m.

a \to a \otimes I \to a \otimes m \to I + a \otimes m \to m

In Haskell, this corresponds to a construction and evaluation of:

ConsF a mempty

\square

Free Monoidal Functor

Let’s go back to our functor category. We started with a monoidal category C and considered a functor category [C, Set]. We have shown that [C, Set] is itself a monoidal category with Day convolution as tensor product and the hom-functor C(I, -) as unit. A monoid in this category is a lax monoidal functor.

The next step is to build a free monoid in [C, Set], which would give us a free lax monoidal functor. We have just seen such a construction in an arbitrary closed monoidal category. We just have to translate it to [C, Set]. We do this by replacing objects with functors and morphisms with natural transformations.

Our construction relied on defining an initial algebra for the functor:

I + a \otimes x

Straightforward translation of this formula to the functor category [C, Set] produces a higher order endofunctor:

A_F G = C(I, -) + F \star G

It defines, for any functor F, a mapping from a functor G to a functor A_F G. (It also maps natural transformations.)

We can now use A_F to define (higher-order) algebras. An algebra consists of a carrier — here, a functor T — and a structure map — here, a natural transformation:

A_F T \to T

The initial algebra for this higher-order endofunctor defines a monoid, and therefore a lax monoidal functor. We have shown it for an arbitrary closed monoidal category. So the only question is whether our functor category with Day convolution is closed.

We want to define the internal hom-object in [C, Set] that satisfies the adjunction:

[C, Set](F \star G, H) \cong [C, Set](F, [G, H])

We start with the set of natural transformations — the hom-set in [C, Set]:

[C, Set](F \star G, H)

We rewrite it as an end over c, and use the formula for Day convolution:

\int_c Set(\int^{a b} C(a \otimes b, c) \times F a \times G b, H c)

We use the co-continuity trick to pull the coend out of the hom-set and turn it into an end:

\int_{c a b} Set(C(a \otimes b, c) \times F a \times G b, H c)

Keeping in mind that our goal is to end up with F a on the left, we use the regular hom-set adjunction to shuffle the other two terms to the right:

\int_{c a b} Set(F a, C(a \otimes b, c) \times G b \to H c)

The hom-functor is continuous in the second argument, so we can sneak the end over b and c under it:

\int_{a} Set(F a, \int_{b c} C(a \otimes b, c) \times G b \to H c)

We end up with a set of natural transformations from the functor F to the functor we will call:

[G, H] = \int_{b c} (C(- \otimes b, c) \times G b \to H c)

We therefore identify this functor as the right adjoint (internal hom-object) for Day convolution. We can further simplify it by using the hom-set adjunction:

\int_{b c} (C(- \otimes b, c) \to (G b \to H c))

and applying the Yoneda lemma to get:

[G, H] = \int_{b} (G b \to H (- \otimes b))

In Haskell, we would write it as:

newtype DayHom f g a = DayHom (forall b . f b -> g (a, b))

Since Day convolution has a right adjoint, we conclude that the fixed point of our higher order functor defines a free lax monoidal functor. We can write it in a recursive form as:

Free_F = C(I, -) + F \star Free_F

or, in Haskell:

data FreeMonR f t =
      Done t
    | More (Day f (FreeMonR f) t)

Free Monad

This blog post wouldn’t be complete without mentioning that the same construction works for monads. Famously, a monad is a monoid in the category of endofunctors. Endofunctors form a monoidal category with functor composition as tensor product and the identity functor as unit. The fact that we can construct a free monad using the formula:

FreeM_F = Id + F \circ FreeM_F

is due to the observation that functor composition has a right adjoint, which is the right Kan extension. Unfortunately, due to size issues, this Kan extension doesn’t always exist. I’ll quote Alex Campbell here: “By making suitable size restrictions, we can give conditions for free monads to exist: for example, free monads exist for accessible endofunctors on locally presentable categories; a special case is that free monads exist for finitary endofunctors on Set, where finitary means the endofunctor preserves filtered colimits (more generally, an endofunctor is accessible if it preserves \kappa-filtered colimits for some regular cardinal number \kappa).”

Conclusion

As we acquire experience in programming, we learn more tricks of the trade. A seasoned programmer knows how to read a file, parse its contents, or sort an array. In category theory we use a different bag of tricks. We bunch morphisms into hom-sets, move ends and coends, use Yoneda to “integrate,” use adjunctions to shuffle things around, and use initial algebras to define recursive types.

Results derived in category theory can be translated to definitions of functions or data structures in programming languages. A lax monoidal functor becomes an Applicative. A free monoidal functor becomes:

data FreeMonR f t =
      Done t
    | More (Day f (FreeMonR f) t)

What’s more, since the derivation made very few assumptions about the category C (other than that it’s monoidal), this result can be immediately applied to profunctors (replacing C with C^{op}\times C) to produce:

data FreeMon p s t where
     DoneFM :: t -> FreeMon p s t
     MoreFM :: p a b -> FreeMon p c d -> 
                        (b -> d -> t) -> 
                        (s -> (a, c)) -> FreeMon p s t

Replacing Day convolution with endofunctor composition gives us a free monad:

data FreeMonadF f g a = 
    DoneFM a 
  | MoreFM (Compose f g a)

Category theory is also the source of laws (commuting diagrams) that can be used in equational reasoning to verify the correctness of programming constructs.

Writing this post has been a great learning experience. Every time I got stuck, I would ask Alex for help, and he would immediately come up with yet another algebra and yet another catamorphism. This was so different from the approach I would normally take, which would be to get bogged down in inductive proofs over recursive data structures.


Abstract: I derive free monoidal profunctors as fixed points of a higher order functor acting on profunctors. Monoidal profunctors play an important role in defining traversals.

The beauty of category theory is that it lets us reuse concepts at all levels. In my previous post I have derived a free monoidal functor that goes from a monoidal category C to Set. The current post may then be shortened to: Since profunctors are just functors from C^{op} \times C to Set, with the obvious monoidal structure induced by the tensor product in C, we automatically get free monoidal profunctors.

Let me fill in the details.

Profunctors in Haskell

Here’s the definition of a profunctor from Data.Profunctor:

class Profunctor p where
  dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

The idea is that, just like a functor acts on objects, a profunctor p acts on pairs of objects \langle a, b \rangle. In other words, it’s a type constructor that takes two types as arguments. And just like a functor acts on morphisms, a profunctor acts on pairs of morphisms. The only tricky part is that the first morphism of the pair is reversed: instead of going from a to s, as one would expect, it goes from s to a. This is why we say that the first argument comes from the opposite category C^{op}, where all morphisms are reversed with respect to C. Thus a morphism from \langle a, b \rangle to \langle s, t \rangle in C^{op} \times C is a pair of morphisms \langle s \to a, b \to t \rangle.

Just like functors form a category, profunctors form a category too. In this category profunctors are objects, and natural transformations are morphisms. A natural transformation between two profunctors p and q is a family of functions which, in Haskell, can be approximated by a polymorphic function:

type p ::~> q = forall a b. p a b -> q a b

If the category C is monoidal (has a tensor product \otimes and a unit object 1), then the category C^{op} \times C has a trivially induced tensor product:

\langle a, b \rangle \otimes \langle c, d \rangle = \langle a \otimes c, b \otimes d \rangle

and unit \langle 1, 1 \rangle.

In Haskell, we’ll use cartesian product (pair type) as the underlying tensor product, and () type as the unit.

Notice that the induced product does not have the usual exponential as the right adjoint. Indeed, the hom-set:

(C^{op} \times C) \, ( \langle a, b  \rangle \otimes  \langle c, d  \rangle,  \langle s, t  \rangle )

is a set of pairs of morphisms:

\langle s \to a \otimes c, b \otimes d \to t  \rangle

If the right adjoint existed, it would be a pair of objects \langle X, Y  \rangle, such that the following hom-set would be isomorphic to the previous one:

\langle X \to a, b \to Y  \rangle

While Y could be the internal hom, there is no candidate for X that would produce the isomorphism:

s \to a \otimes c \cong X \to a

(Consider, for instance, unit () for a.) This lack of the right adjoint is the reason why we can’t define an analog of Applicative for profunctors. We can, however, define a monoidal profunctor:

class Monoidal p where
  punit :: p () ()
  (>**<) :: p a b -> p c d -> p (a, c) (b, d)

This profunctor is a map between two monoidal structures. For instance, punit can be seen as mapping the unit in Set to the unit in C^{op} \times C:

punit :: () -> p <1, 1>

Operator >**< maps the product in Set to the induced product in C^{op} \times C:

(>**<) :: (p <a, b>, p <c, d>) -> p (<a, b> × <c, d>)
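
A concrete example may help: the plain function arrow is a monoidal profunctor (a sketch):

instance Monoidal (->) where
  punit = id
  f >**< g = \(a, c) -> (f a, g c)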

Day convolution, which works with monoidal structures, generalizes naturally to the profunctor category:

data PDay p q s t = forall a b c d. 
     PDay (p a b) (q c d) ((b, d) -> t) (s -> (a, c))

Higher Order Functors

Since profunctors form a category, we can define endofunctors in that category. This is a no-brainer in category theory, but it requires some new definitions in Haskell. Here’s a higher-order functor that maps a profunctor to another profunctor:

class HPFunctor pp where
  hpmap :: (p ::~> q) -> (pp p ::~> pp q)
  ddimap :: (s -> a) -> (b -> t) -> pp p a b -> pp p s t

The function hpmap lifts a natural transformation, and ddimap shows that the result of the mapping is also a profunctor.

An endofunctor in the profunctor category may have a fixed point:

newtype FixH pp a b = InH { outH :: pp (FixH pp) a b }

which is also a profunctor:

instance HPFunctor pp => Profunctor (FixH pp) where
    dimap f g (InH pp) = InH (ddimap f g pp)

Finally, our Day convolution is a higher-order endofunctor in the category of profunctors:

instance HPFunctor (PDay p) where
  hpmap nat (PDay p q from to) = PDay p (nat q) from to
  ddimap f g (PDay p q from to) = PDay p q (g . from) (to . f)

We’ll use this fact to construct a free monoidal profunctor next.

Free Monoidal Profunctor

In the previous post, I defined the free monoidal functor as a fixed point of the following endofunctor:

data FreeF f g t =
      DoneF t
    | MoreF (Day f g t)

Replacing the functors f and g with profunctors is straightforward:

data FreeP p q s t = 
      DoneP (s -> ()) (() -> t) 
    | MoreP (PDay p q s t)

The only tricky part is realizing that the first term in the sum comes from the unit of Day convolution, which is the type () -> t, and it generalizes to an appropriate pair of functions (we’ll simplify this definition later).

FreeP is a higher order endofunctor acting on profunctors:

instance HPFunctor (FreeP p) where
    hpmap _ (DoneP su ut) = DoneP su ut
    hpmap nat (MoreP day) = MoreP (hpmap nat day)
    ddimap f g (DoneP au ub) = DoneP (au . f) (g . ub)
    ddimap f g (MoreP day) = MoreP (ddimap f g day)

We can, therefore, define its fixed point:

type FreeMon p = FixH (FreeP p)

and show that it is indeed a monoidal profunctor. As before, the trick is to first show the following property of Day convolution:

cons :: Monoidal q => PDay p q a b -> q c d -> PDay p q (a, c) (b, d)
cons (PDay pxy quv yva bxu) qcd =   -- bimap comes from Data.Bifunctor
      PDay pxy (quv >**< qcd) (bimap yva id . reassoc) 
                              (assoc . bimap bxu id)

where

assoc ((a,b),c) = (a,(b,c))
reassoc (a, (b, c)) = ((a, b), c)

Using this function, we can show that FreeMon p is monoidal for any p:

instance Profunctor p => Monoidal (FreeMon p) where
  punit = InH (DoneP id id)
  (InH (DoneP au ub)) >**< frcd = dimap snd (\d -> (ub (), d)) frcd
  (InH (MoreP dayab)) >**< frcd = InH (MoreP (cons dayab frcd))

FreeMon can also be rewritten as a recursive data type:

data FreeMon p s t where
     DoneFM :: t -> FreeMon p s t
     MoreFM :: p a b -> FreeMon p c d -> 
                        (b -> d -> t) -> 
                        (s -> (a, c)) -> FreeMon p s t

Categorical Picture

As I mentioned before, from the categorical point of view there isn’t much to talk about. We define a functor in the category of profunctors:

A_p q = (C^{op} \times C) (1, -) + \int^{ a b c d } p a b \times q c d \times (C^{op} \times C) (\langle a, b \rangle \otimes \langle c, d \rangle, -)

As previously shown in the general case, its initial algebra defines a free monoidal profunctor.

Acknowledgments

I’m grateful to Eugenia Cheng not only for talking to me about monoidal profunctors, but also for getting me interested in category theory in the first place through her Catsters video series. Thanks also go to Edward Kmett for numerous discussions on this topic.
