This is part 21 of Categories for Programmers. Previously: Monads: Programmer’s Definition. See the Table of Contents.

Now that we know what the monad is for — it lets us compose embellished functions — the really interesting question is why embellished functions are so important in functional programming. We’ve already seen one example, the Writer monad, where embellishment let us create and accumulate a log across multiple function calls. A problem that would otherwise be solved using impure functions (e.g., by accessing and modifying some global state) was solved with pure functions.

The Problem

Here is a short list of similar problems, copied from Eugenio Moggi’s seminal paper, all of which are traditionally solved by abandoning the purity of functions.

  • Partiality: Computations that may not terminate
  • Nondeterminism: Computations that may return many results
  • Side effects: Computations that access/modify state
    • Read-only state, or the environment
    • Write-only state, or a log
    • Read/write state
  • Exceptions: Partial functions that may fail
  • Continuations: Ability to save state of the program and then restore it on demand
  • Interactive Input
  • Interactive Output

What is really mind-blowing is that all these problems may be solved using the same clever trick: turning to embellished functions. Of course, the embellishment will be totally different in each case.

You have to realize that, at this stage, there is no requirement that the embellishment be monadic. It’s only when we insist on composition — being able to decompose a single embellished function into smaller embellished functions — that we need a monad. Again, since each of the embellishments is different, monadic composition will be implemented differently, but the overall pattern is the same. It’s a very simple pattern: composition that is associative and equipped with identity.

The next section is heavy on Haskell examples. Feel free to skim or even skip it if you’re eager to get back to category theory or if you’re already familiar with Haskell’s implementation of monads.

The Solution

First, let’s analyze the way we used the Writer monad. We started with a pure function that performed a certain task — given arguments, it produced a certain output. We replaced this function with another function that embellished the original output by pairing it with a string. That was our solution to the logging problem.

We couldn’t stop there because, in general, we don’t want to deal with monolithic solutions. We needed to be able to decompose one log-producing function into smaller log-producing functions. It’s the composition of those smaller functions that led us to the concept of a monad.

What’s really amazing is that the same pattern of embellishing the function return types works for a large variety of problems that normally would require abandoning purity. Let’s go through our list and identify the embellishment that applies to each problem in turn.

Partiality

We modify the return type of every function that may not terminate by turning it into a “lifted” type — a type that contains all values of the original type plus the special “bottom” value ⊥. For instance, the Bool type, as a set, would contain two elements: True and False. The lifted Bool contains three elements. Functions that return the lifted Bool may produce True or False, or execute forever.

The funny thing is that, in a lazy language like Haskell, a never-ending function may actually return a value, and this value may be passed to the next function. We call this special value the bottom. As long as this value is not explicitly needed (for instance, to be pattern matched, or produced as output), it may be passed around without stalling the execution of the program. Because every Haskell function may be potentially non-terminating, all types in Haskell are assumed to be lifted. This is why we often talk about the category Hask of Haskell (lifted) types and functions rather than the simpler Set. It is not clear, though, that Hask is a real category (see this Andrej Bauer post).
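
Here is a minimal sketch of this behavior (the names are made up for illustration): a bottom value of type Bool flows through a lazy function without ever being evaluated.

bottom :: Bool
bottom = bottom        -- a computation that never terminates

ignoreFirst :: Bool -> Bool -> Bool
ignoreFirst _ b = b    -- never demands its first argument

-- ignoreFirst bottom True evaluates to True: the bottom is passed
-- around but never forced, so the program doesn't stall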

Nondeterminism

If a function can return many different results, it may as well return them all at once. Semantically, a non-deterministic function is equivalent to a function that returns a list of results. This makes a lot of sense in a lazy garbage-collected language. For instance, if all you need is one value, you can just take the head of the list, and the tail will never be evaluated. If you need a random value, use a random number generator to pick the n-th element of the list. Laziness even allows you to return an infinite list of results.

In the list monad — Haskell’s implementation of nondeterministic computations — join is implemented as concat. Remember that join is supposed to flatten a container of containers — concat concatenates a list of lists into a single list. return creates a singleton list:

instance Monad [] where
    join = concat
    return x = [x]

The bind operator for the list monad is given by the general formula, fmap followed by join, which in this case gives:

as >>= k = concat (fmap k as)

Here, the function k, which itself produces a list, is applied to every element of the list as. The result is a list of lists, which is flattened using concat.
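
For instance, here’s this formula at work on a tiny example, with a continuation that produces two results per input:

-- Each input yields two outputs; bind flattens the list of lists.
results :: [Int]
results = [1, 2, 3] >>= \x -> [x, -x]
-- results == [1, -1, 2, -2, 3, -3]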

From the programmer’s point of view, working with a list is easier than, for instance, calling a non-deterministic function in a loop, or implementing a function that returns an iterator (although, in modern C++, returning a lazy range would be almost equivalent to returning a list in Haskell).

A good example of using non-determinism creatively is in game programming. For instance, when a computer plays chess against a human, it can’t predict the opponent’s next move. It can, however, generate a list of all possible moves and analyze them one by one. Similarly, a non-deterministic parser may generate a list of all possible parses for a given expression.

Even though we may interpret functions returning lists as non-deterministic, the applications of the list monad are much wider. That’s because stitching together computations that produce lists is a perfect functional substitute for iterative constructs — loops — that are used in imperative programming. A single loop can often be rewritten using fmap that applies the body of the loop to each element of the list. The do notation in the list monad can be used to replace complex nested loops.

My favorite example is the program that generates Pythagorean triples — triples of positive integers that can form sides of right triangles.

triples = do
    z <- [1..]
    x <- [1..z]
    y <- [x..z]
    guard (x^2 + y^2 == z^2)
    return (x, y, z)

The first line tells us that z gets an element from an infinite list of positive numbers [1..]. Then x gets an element from the (finite) list [1..z] of numbers between 1 and z. Finally y gets an element from the list of numbers between x and z. We have three numbers 1 <= x <= y <= z at our disposal. The function guard takes a Bool expression and returns a list of units:

guard :: Bool -> [()]
guard True  = [()]
guard False = []

This function (which is a member of a larger class called MonadPlus) is used here to filter out non-Pythagorean triples. Indeed, if you look at the implementation of bind (or the related operator >>), you’ll notice that, when given an empty list, it produces an empty list. On the other hand, when given a non-empty list (here, the singleton list containing unit [()]), bind will call the continuation, here return (x, y, z), which produces a singleton list with a verified Pythagorean triple. All those singleton lists will be concatenated by the enclosing binds to produce the final (infinite) result. Of course, the caller of triples will never be able to consume the whole list, but that doesn’t matter, because Haskell is lazy.

The problem that normally would require a set of three nested loops has been dramatically simplified with the help of the list monad and the do notation. As if that weren’t enough, Haskell lets you simplify this code even further using list comprehension:

triples = [(x, y, z) | z <- [1..]
                     , x <- [1..z]
                     , y <- [x..z]
                     , x^2 + y^2 == z^2]

This is just further syntactic sugar for the list monad (strictly speaking, MonadPlus).

You might see similar constructs in other functional or imperative languages under the guise of generators and coroutines.

Read-Only State

A function that has read-only access to some external state, or environment, can always be replaced by a function that takes that environment as an additional argument. A pure function (a, e) -> b (where e is the type of the environment) doesn’t look, at first sight, like a Kleisli arrow. But as soon as we curry it to a -> (e -> b) we recognize the embellishment as our old friend the reader functor:

newtype Reader e a = Reader (e -> a)

You may interpret a function returning a Reader as producing a mini-executable: an action that given an environment produces the desired result. There is a helper function runReader to execute such an action:

runReader :: Reader e a -> e -> a
runReader (Reader f) e = f e

It may produce different results for different values of the environment.

Notice that both the function returning a Reader, and the Reader action itself are pure.

To implement bind for the Reader monad, first notice that you have to produce a function that takes the environment e and produces a b:

ra >>= k = Reader (\e -> ...)

Inside the lambda, we can execute the action ra to produce an a:

ra >>= k = Reader (\e -> let a = runReader ra e
                         in ...)

We can then pass the a to the continuation k to get a new action rb:

ra >>= k = Reader (\e -> let a  = runReader ra e
                             rb = k a
                         in ...)

Finally, we can run the action rb with the environment e:

ra >>= k = Reader (\e -> let a  = runReader ra e
                             rb = k a
                         in runReader rb e)

To implement return we create an action that ignores the environment and returns the unchanged value.

Putting it all together, after a few simplifications, we get the following definition:

instance Monad (Reader e) where
    ra >>= k = Reader (\e -> runReader (k (runReader ra e)) e)
    return x = Reader (\e -> x)
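
As a quick usage sketch (the Config type and its fields are invented for this example), a Reader action can consult the environment without that environment ever being global:

data Config = Config { verbose :: Bool, userName :: String }

greeting :: Reader Config String
greeting = Reader (\e -> if verbose e
                         then "Hello, " ++ userName e
                         else "Hi")

-- runReader greeting (Config True "Bartosz") == "Hello, Bartosz"
-- runReader greeting (Config False "Bartosz") == "Hi"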

Write-Only State

This is just our initial logging example. The embellishment is given by the Writer functor:

newtype Writer w a = Writer (a, w)

For completeness, there’s also a trivial helper runWriter that unpacks the data constructor:

runWriter :: Writer w a -> (a, w)
runWriter (Writer (a, w)) = (a, w)

As we’ve seen before, in order to make Writer composable, w has to be a monoid. Here’s the monad instance for Writer written in terms of the bind operator:

instance (Monoid w) => Monad (Writer w) where 
    (Writer (a, w)) >>= k = let (a', w') = runWriter (k a)
                            in Writer (a', w `mappend` w')
    return a = Writer (a, mempty)

State

Functions that have read/write access to state combine the embellishments of the Reader and the Writer. You may think of them as pure functions that take the state as an extra argument and produce a pair value/state as a result: (a, s) -> (b, s). After currying, we get them into the form of Kleisli arrows a -> (s -> (b, s)), with the embellishment abstracted in the State functor:

newtype State s a = State (s -> (a, s))

Again, we can look at a Kleisli arrow as returning an action, which can be executed using the helper function:

runState :: State s a -> s -> (a, s)
runState (State f) s = f s

Different initial states may not only produce different results, but also different final states.

The implementation of bind for the State monad is very similar to that of the Reader monad, except that care has to be taken to pass the correct state at each step:

sa >>= k = State (\s -> let (a, s') = runState sa s
                            sb = k a
                        in runState sb s')

Here’s the full instance:

instance Monad (State s) where
    sa >>= k = State (\s -> let (a, s') = runState sa s 
                            in runState (k a) s')
    return a = State (\s -> (a, s))

There are also two helper Kleisli arrows that may be used to manipulate the state. One of them retrieves the state for inspection:

get :: State s s
get = State (\s -> (s, s))

and the other replaces it with a completely new state:

put :: s -> State s ()
put s' = State (\s -> ((), s'))
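
Here’s a small sketch that combines the two, assuming the instance above: a post-increment action that returns the current counter and stores the incremented one.

postIncrement :: State Int Int
postIncrement = get >>= \n ->
                put (n + 1) >>= \_ ->
                return n

-- runState postIncrement 42 == (42, 43)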

Exceptions

An imperative function that throws an exception is really a partial function — it’s a function that’s not defined for some values of its arguments. The simplest implementation of exceptions in terms of pure total functions uses the Maybe functor. A partial function is extended to a total function that returns Just a whenever it makes sense, and Nothing when it doesn’t. If we want to also return some information about the cause of the failure, we can use the Either functor instead (with the first type fixed, for instance, to String).

Here’s the Monad instance for Maybe:

instance Monad Maybe where
    Nothing >>= k = Nothing
    Just a  >>= k = k a
    return a = Just a

Notice that monadic composition for Maybe correctly short-circuits the computation (the continuation k is never called) when an error is detected. That’s the behavior we expect from exceptions.
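
Here’s a sketch of that short-circuiting (safeDiv is a made-up helper): the chain of binds stops at the first Nothing.

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Maybe Int
calc = safeDiv 100 5 >>= \a ->   -- Just 20
       safeDiv a 0   >>= \b ->   -- Nothing; short-circuits here
       safeDiv b 2               -- never called
-- calc == Nothing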

Continuations

It’s the “Don’t call us, we’ll call you!” situation you may experience after a job interview. Instead of getting a direct answer, you are supposed to provide a handler, a function to be called with the result. This style of programming is especially useful when the result is not known at the time of the call because, for instance, it’s being evaluated by another thread or delivered from a remote web site. A Kleisli arrow in this case returns a function that accepts a handler, which represents “the rest of the computation”:

data Cont r a = Cont ((a -> r) -> r)

The handler a -> r, when it’s eventually called, produces the result of type r, and this result is returned at the end. A continuation is parameterized by the result type. (In practice, this is often some kind of status indicator.)

There is also a helper function for executing the action returned by the Kleisli arrow. It takes the handler and passes it to the continuation:

runCont :: Cont r a -> (a -> r) -> r
runCont (Cont k) h = k h

The composition of continuations is notoriously difficult, so its handling through a monad and, in particular, the do notation, is of extreme advantage.

Let’s figure out the implementation of bind. First let’s look at the stripped down signature:

(>>=) :: ((a -> r) -> r) -> 
         (a -> (b -> r) -> r) -> 
         ((b -> r) -> r)

Our goal is to create a function that takes the handler (b -> r) and produces the result r. So that’s our starting point:

ka >>= kab = Cont (\hb -> ...)

Inside the lambda, we want to call the function ka with the appropriate handler that represents the rest of the computation. We’ll implement this handler as a lambda:

runCont ka (\a -> ...)

In this case, the rest of the computation involves first calling kab with a, and then passing hb to the resulting action kb:

runCont ka (\a -> let kb = kab a
                  in runCont kb hb)

As you can see, continuations are composed inside out. The final handler hb is called from the innermost layer of the computation. Here’s the full instance:

instance Monad (Cont r) where
    ka >>= kab = Cont (\hb -> runCont ka (\a -> runCont (kab a) hb))
    return a = Cont (\ha -> ha a)
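
As a minimal usage sketch, here’s a value injected into the continuation monad, transformed with bind, and finally run with a trivial handler:

anInt :: Cont r Int
anInt = return 42

asString :: Cont r String
asString = anInt >>= \n -> return (show n)

-- runCont asString id     == "42"
-- runCont asString length == 2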

Interactive Input

This is the trickiest problem and a source of a lot of confusion. Clearly, a function like getChar, if it were to return a character typed at the keyboard, couldn’t be pure. But what if it returned the character inside a container? As long as there was no way of extracting the character from this container, we could claim that the function is pure. Every time you call getChar it would return exactly the same container. Conceptually, this container would contain the superposition of all possible characters.

If you’re familiar with quantum mechanics, you should have no problem understanding this analogy. It’s just like the box with the Schrödinger’s cat inside — except that there is no way to open or peek inside the box. The box is defined using the special built-in IO functor. In our example, getChar could be declared as a Kleisli arrow:

getChar :: () -> IO Char

(Actually, since a function from the unit type is equivalent to picking a value of the return type, the declaration of getChar is simplified to getChar :: IO Char.)

Being a functor, IO lets you manipulate its contents using fmap. And, as a functor, it can store the contents of any type, not just a character. The real utility of this approach comes to light when you consider that, in Haskell, IO is a monad. It means that you are able to compose Kleisli arrows that produce IO objects.

You might think that Kleisli composition would allow you to peek at the contents of the IO object (thus “collapsing the wave function,” if we were to continue the quantum analogy). Indeed, you could compose getChar with another Kleisli arrow that takes a character and, say, converts it to an integer. The catch is that this second Kleisli arrow could only return this integer as an (IO Int). Again, you’ll end up with a superposition of all possible integers. And so on. The Schrödinger’s cat is never out of the bag. Once you are inside the IO monad, there is no way out of it. There is no equivalent of runState or runReader for the IO monad. There is no runIO!

So what can you do with the result of a Kleisli arrow, the IO object, other than compose it with another Kleisli arrow? Well, you can return it from main. In Haskell, main has the signature:

main :: IO ()

and you are free to think of it as a Kleisli arrow:

main :: () -> IO ()

From that perspective, a Haskell program is just one big Kleisli arrow in the IO monad. You can compose it from smaller Kleisli arrows using monadic composition. It’s up to the runtime system to do something with the resulting IO object (also called IO action).

Notice that the arrow itself is a pure function — it’s pure functions all the way down. The dirty work is relegated to the system. When it finally executes the IO action returned from main, it does all kinds of nasty things like reading user input, modifying files, printing obnoxious messages, formatting a disk, and so on. The Haskell program never dirties its hands (well, except when it calls unsafePerformIO, but that’s a different story).

Of course, because Haskell is lazy, main returns almost immediately, and the dirty work begins right away. It’s during the execution of the IO action that the results of pure computations are requested and evaluated on demand. So, in reality, the execution of a program is an interleaving of pure (Haskell) and dirty (system) code.

There is an alternative interpretation of the IO monad that is even more bizarre but makes perfect sense as a mathematical model. It treats the whole Universe as an object in a program. Notice that, conceptually, the imperative model treats the Universe as an external global object, so procedures that perform I/O have side effects by virtue of interacting with that object. They can both read and modify the state of the Universe.

We already know how to deal with state in functional programming — we use the state monad. Unlike simple state, however, the state of the Universe cannot be easily described using standard data structures. But we don’t have to, as long as we never directly interact with it. It’s enough that we assume that there exists a type RealWorld and, by some miracle of cosmic engineering, the runtime is able to provide an object of this type. An IO action is just a function:

type IO a  =  RealWorld -> (a, RealWorld)

Or, in terms of the State monad:

type IO = State RealWorld

However, >=> and return for the IO monad have to be built into the language.

Interactive Output

The same IO monad is used to encapsulate interactive output. RealWorld is supposed to contain all output devices. You might wonder why we can’t just call output functions from Haskell and pretend that they do nothing. For instance, why do we have:

putStr :: String -> IO ()

rather than the simpler:

putStr :: String -> ()

Two reasons: Haskell is lazy, so it would never call a function whose output — here, the unit object — is not used for anything. And, even if it weren’t lazy, it could still freely change the order of such calls and thus garble the output. The only way to force sequential execution of two functions in Haskell is through data dependency. The input of one function must depend on the output of another. Having RealWorld passed between IO actions enforces sequencing.

Conceptually, in this program:

main :: IO ()
main = do
    putStr "Hello "
    putStr "World!"

the action that prints “World!” receives, as input, the Universe in which “Hello ” is already on the screen. It outputs a new Universe, with “Hello World!” on the screen.
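
Desugared, this do block is just two actions chained with the >> operator; the implicit threading of RealWorld is what forces them into sequence:

main :: IO ()
main = putStr "Hello " >> putStr "World!"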

Conclusion

Of course I have just scratched the surface of monadic programming. Monads not only accomplish, with pure functions, what normally is done with side effects in imperative programming, but they also do it with a high degree of control and type safety. They are not without drawbacks, though. The major complaint about monads is that they don’t easily compose with each other. Granted, you can combine most of the basic monads using the monad transformer library. It’s relatively easy to create a monad stack that combines, say, state with exceptions, but there is no formula for stacking arbitrary monads together.

Next: Monads Categorically.


This is part 20 of Categories for Programmers. Previously: Free/Forgetful Adjunctions. See the Table of Contents.

Programmers have developed a whole mythology around monads. It’s supposed to be one of the most abstract and difficult concepts in programming. There are people who “get it” and those who don’t. For many, the moment when they understand the concept of the monad is like a mystical experience. The monad abstracts the essence of so many diverse constructions that we simply don’t have a good analogy for it in everyday life. We are reduced to groping in the dark, like those blind men touching different parts of the elephant and exclaiming triumphantly: “It’s a rope,” “It’s a tree trunk,” or “It’s a burrito!”

Let me set the record straight: The whole mysticism around the monad is the result of a misunderstanding. The monad is a very simple concept. It’s the diversity of applications of the monad that causes the confusion.

As part of research for this post I looked up duct tape (a.k.a., duck tape) and its applications. Here’s a little sample of things that you can do with it:

  • sealing ducts
  • fixing CO2 scrubbers on board Apollo 13
  • wart treatment
  • fixing Apple’s iPhone 4 dropped call issue
  • making a prom dress
  • building a suspension bridge

Now imagine that you didn’t know what duct tape was and you were trying to figure it out based on this list. Good luck!

So I’d like to add one more item to the collection of “the monad is like…” clichés: The monad is like duct tape. Its applications are widely diverse, but its principle is very simple: it glues things together. More precisely, it composes things.

This partially explains the difficulties a lot of programmers, especially those coming from an imperative background, have with understanding the monad. The problem is that we are not used to thinking of programming in terms of function composition. This is understandable. We often give names to intermediate values rather than pass them directly from function to function. We also inline short segments of glue code rather than abstract them into helper functions. Here’s an imperative-style implementation of the vector-length function in C:

double vlen(double * v) {
  double d = 0.0;
  int n;
  for (n = 0; n < 3; ++n)
    d += v[n] * v[n];
  return sqrt(d);
}

Compare this with the (stylized) Haskell version that makes function composition explicit:

vlen = sqrt . sum . fmap (flip (^) 2)

(Here, to make things even more cryptic, I partially applied the exponentiation operator (^) by setting its second argument to 2.)

I’m not arguing that Haskell’s point-free style is always better, just that function composition is at the bottom of everything we do in programming. And even though we are effectively composing functions, Haskell does go to great lengths to provide imperative-style syntax called the do notation for monadic composition. We’ll see its use later. But first, let me explain why we need monadic composition in the first place.

The Kleisli Category

We have previously arrived at the writer monad by embellishing regular functions. The particular embellishment was done by pairing their return values with strings or, more generally, with elements of a monoid. We can now recognize that such embellishment is a functor:

newtype Writer w a = Writer (a, w)

instance Functor (Writer w) where
  fmap f (Writer (a, w)) = Writer (f a, w)

We have subsequently found a way of composing embellished functions, or Kleisli arrows, which are functions of the form:

a -> Writer w b

It was inside the composition that we implemented the accumulation of the log.

We are now ready for a more general definition of the Kleisli category. We start with a category C and an endofunctor m. The corresponding Kleisli category K has the same objects as C, but its morphisms are different. A morphism between two objects a and b in K is implemented as a morphism:

a -> m b

in the original category C. It’s important to keep in mind that we treat a Kleisli arrow in K as a morphism between a and b, and not between a and m b.

In our example, m was specialized to Writer w, for some fixed monoid w.

Kleisli arrows form a category only if we can define proper composition for them. If there is a composition, which is associative and has an identity arrow for every object, then the functor m is called a monad, and the resulting category is called the Kleisli category.

In Haskell, Kleisli composition is defined using the fish operator >=>, and the identity arrow is a polymorphic function called return. Here’s the definition of a monad using Kleisli composition:

class Monad m where
  (>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
  return :: a -> m a

Keep in mind that there are many equivalent ways of defining a monad, and that this is not the primary one in the Haskell ecosystem. I like it for its conceptual simplicity and the intuition it provides, but there are other definitions that are more convenient when programming. We’ll talk about them momentarily.

In this formulation, monad laws are very easy to express. They cannot be enforced in Haskell, but they can be used for equational reasoning. They are simply the standard composition laws for the Kleisli category:

(f >=> g) >=> h = f >=> (g >=> h) -- associativity
return >=> f = f                  -- left unit
f >=> return = f                  -- right unit

This kind of a definition also expresses what a monad really is: it’s a way of composing embellished functions. It’s not about side effects or state. It’s about composition. As we’ll see later, embellished functions may be used to express a variety of effects or state, but that’s not what the monad is for. The monad is the sticky duct tape that ties one end of an embellished function to the other end of an embellished function.

Going back to our Writer example: The logging functions (the Kleisli arrows for the Writer functor) form a category because Writer is a monad:

instance Monoid w => Monad (Writer w) where
    f >=> g = \a -> 
        let Writer (b, s)  = f a
            Writer (c, s') = g b
        in Writer (c, s `mappend` s')
    return a = Writer (a, mempty)

Monad laws for Writer w are satisfied as long as monoid laws for w are satisfied (they can’t be enforced in Haskell either).

There’s a useful Kleisli arrow defined for the Writer monad called tell. Its sole purpose is to add its argument to the log:

tell :: w -> Writer w ()
tell s = Writer ((), s)

We’ll use it later as a building block for other monadic functions.

Fish Anatomy

When implementing the fish operator for different monads you quickly realize that a lot of code is repeated and can be easily factored out. To begin with, the Kleisli composition of two functions must return a function, so its implementation may as well start with a lambda taking an argument of type a:

(>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \a -> ...

The only thing we can do with this argument is to pass it to f:

f >=> g = \a -> let mb = f a
                in ...

At this point we have to produce the result of type m c, having at our disposal an object of type m b and a function g :: b -> m c. Let’s define a function that does that for us. This function is called bind and is usually written in the form of an infix operator:

(>>=) :: m a -> (a -> m b) -> m b

For every monad, instead of defining the fish operator, we may instead define bind. In fact the standard Haskell definition of a monad uses bind:

class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b
    return :: a -> m a

Here’s the definition of bind for the Writer monad:

(Writer (a, w)) >>= f = let Writer (b, w') = f a
                        in  Writer (b, w `mappend` w')

It is indeed shorter than the definition of the fish operator.

It’s possible to further dissect bind, taking advantage of the fact that m is a functor. We can use fmap to apply the function a -> m b to the contents of m a. This will turn a into m b. The result of the application is therefore of type m (m b). This is not exactly what we want — we need the result of type m b — but we’re close. All we need is a function that collapses or flattens the double application of m. Such a function is called join:

join :: m (m a) -> m a

Using join, we can rewrite bind as:

ma >>= f = join (fmap f ma)

That leads us to the third option for defining a monad:

class Functor m => Monad m where
    join :: m (m a) -> m a
    return :: a -> m a

Here we have explicitly requested that m be a Functor. We didn’t have to do that in the previous two definitions of the monad. That’s because any type constructor m that either supports the fish or bind operator is automatically a functor. For instance, it’s possible to define fmap in terms of bind and return:

fmap f ma = ma >>= \a -> return (f a)

For completeness, here’s join for the Writer monad:

join :: Monoid w => Writer w (Writer w a) -> Writer w a
join (Writer ((Writer (a, w')), w)) = Writer (a, w `mappend` w')

The do Notation

One way of writing code using monads is to work with Kleisli arrows — composing them using the fish operator. This mode of programming is the generalization of the point-free style. Point-free code is compact and often quite elegant. In general, though, it can be hard to understand, bordering on cryptic. That’s why most programmers prefer to give names to function arguments and intermediate values.

When dealing with monads it means favoring the bind operator over the fish operator. Bind takes a monadic value and returns a monadic value. The programmer may choose to give names to those values. But that’s hardly an improvement. What we really want is to pretend that we are dealing with regular values, not the monadic containers that encapsulate them. That’s how imperative code works — side effects, such as updating a global log, are mostly hidden from view. And that’s what the do notation emulates in Haskell.

You might be wondering then, why use monads at all? If we want to make side effects invisible, why not stick to an imperative language? The answer is that the monad gives us much better control over side effects. For instance, the log in the Writer monad is passed from function to function and is never exposed globally. There is no possibility of garbling the log or creating a data race. Also, monadic code is clearly demarcated and cordoned off from the rest of the program.

The do notation is just syntactic sugar for monadic composition. On the surface, it looks a lot like imperative code, but it translates directly to a sequence of binds and lambda expressions.

For instance, take the example we used previously to illustrate the composition of Kleisli arrows in the Writer monad. Using our current definitions, it could be rewritten as:

process :: String -> Writer String [String]
process = upCase >=> toWords

This function turns all characters in the input string to upper case and splits it into words, all the while producing a log of its actions.

In the do notation it would look like this:

process s = do
    upStr <- upCase s
    toWords upStr

Here, upStr is just a String, even though upCase produces a Writer:

upCase :: String -> Writer String String
upCase s = Writer (map toUpper s, "upCase ")

This is because the do block is desugared by the compiler to:

process s = 
   upCase s >>= \ upStr ->
       toWords upStr

The monadic result of upCase is bound to a lambda that takes a String. It’s the name of this string that shows up in the do block. When reading the line:

upStr <- upCase s

we say that upStr gets the result of upCase s.

The pseudo-imperative style is even more pronounced when we inline toWords. We replace it with the call to tell, which logs the string "toWords ", followed by the call to return with the result of splitting the string upStr using words. Notice that words is a regular function working on strings.

process s = do
    upStr <- upCase s
    tell "toWords "
    return (words upStr)

Here, each line in the do block introduces a new nested bind in the desugared code:

process s = 
    upCase s >>= \upStr ->
      tell "toWords " >>= \() ->
        return (words upStr)

Notice that tell produces a unit value, so it doesn’t have to be passed to the following lambda. Ignoring the contents of a monadic result (but not its effect — here, the contribution to the log) is quite common, so there is a special operator to replace bind in that case:

(>>) :: m a -> m b -> m b
m >> k = m >>= (\_ -> k)

The actual desugaring of our code looks like this:

process s = 
    upCase s >>= \upStr ->
      tell "toWords " >>
        return (words upStr)

In general, do blocks consist of lines (or sub-blocks) that either use the left arrow to introduce new names that are then available in the rest of the code, or are executed purely for side-effects. Bind operators are implicit between the lines of code. Incidentally, it is possible, in Haskell, to replace the formatting in the do blocks with braces and semicolons. This provides the justification for describing the monad as a way of overloading the semicolon.
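
For instance, our process function could be squeezed onto a single line:

process s = do { upStr <- upCase s; tell "toWords "; return (words upStr) }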

Notice that the nesting of lambdas and bind operators when desugaring the do notation has the effect of influencing the execution of the rest of the do block based on the result of each line. This property can be used to introduce complex control structures, for instance to simulate exceptions.

Interestingly, the equivalent of the do notation has found its application in imperative languages, C++ in particular. I’m talking about resumable functions or coroutines. It’s not a secret that C++ futures form a monad. It’s an example of the continuation monad, which we’ll discuss shortly. The problem with continuations is that they are very hard to compose. In Haskell, we use the do notation to turn the spaghetti of “my handler will call your handler” into something that looks very much like sequential code. Resumable functions make the same transformation possible in C++. And the same mechanism can be applied to turn the spaghetti of nested loops into list comprehensions or “generators,” which are essentially the do notation for the list monad. Without the unifying abstraction of the monad, each of these problems is typically addressed by providing custom extensions to the language. In Haskell, this is all dealt with through libraries.

Next: Monads and Effects.


In the previous post I explored the application of the Yoneda lemma in the functor category to derive some results from the Haskell lens library. In particular I derived the profunctor representation of isos. There is one more trick that is used in the lens library: combining the Yoneda lemma with adjunctions. Jaskelioff and O’Connor used this trick in the context of free/forgetful adjunctions, but it can be easily generalized to any pair of adjoint higher order functors.

Adjunctions

An adjunction between two functors, L and R (left and right functor) is a natural isomorphism between hom-sets:

C(L d, c) ≅ D(d, R c)

The left functor L goes from the category D to C, and the right functor R goes in the opposite direction. Formally, having an adjunction allows us to shift the action of the functor from one end of the hom-set to the other. The shortcut notation for an adjunction is L ⊣ R.

Since adjunctions can be defined for arbitrary categories, they will also work between functor categories. In that case objects are functors and hom-sets are sets of natural transformations. For instance, let’s consider an adjunction between two higher order functors:

ρ :: [C, C'] -> [D, D']
λ :: [D, D'] -> [C, C']

Here, [C, C'] is a category of functors between two categories C and C’, [D, D'] is a category of functors between D and D’, and ρ maps functors (and natural transformations) between these two categories. λ goes in the opposite direction. The adjunction λ ⊣ ρ is expressed as a natural isomorphism between sets of natural transformations:

[C, C'](λ g, h)  ≅  [D, D'](g, ρ h)

The two objects in functor categories are themselves functors:

h :: C -> C'
g :: D -> D'

Here’s the same adjunction written using ends:

∫x∈C C'((λ g) x, h x)  ≅  ∫y∈D D'(g y, (ρ h) y)

The end notation is easily translatable to Haskell. The end corresponds to a universal quantifier forall, and hom-sets become function types:

forall x. (lambda g) x -> h x ≅ forall y. g y -> (rho h) y

Since lambda and rho act on functors, they have kinds (*->*)->(*->*).

Yoneda with Adjunctions

Let’s recall the formula for the Yoneda embedding of the functor category:

∫f Set(∫x D(g x, f x), ∫y D(h y, f y))
  ≅ ∫z D(h z, g z)

Here, g, h, and f, are functors — objects in the functor category [C, D]. The ends represent natural transformations — morphisms in the functor category. The end over f is a higher order natural transformation.

Since g and h are arbitrary, let’s replace them with the results of the action of some higher order functors, λ g and λ' h. The idea is that λ and λ' are left halves of some higher order adjunctions.

∫f Set(∫x D'((λ g) x, f x), ∫y D'((λ' h) y, f y))
  ≅ ∫z D'((λ' h) z, (λ g) z)

The right halves of these adjunctions are, respectively, ρ and ρ'.

λ  ⊣ ρ
λ' ⊣ ρ'

Let’s apply these adjunctions inside the hom-sets:

∫f Set(∫x D(g x, (ρ f) x), ∫y D(h y, (ρ' f) y))
  ≅ ∫z D(h z, (ρ' (λ g)) z)

Let’s focus our attention on the category of sets. If we replace D with Set, we can pick g and h to be hom-functors (which are the simplest representable functors) parameterized by some arbitrary objects b and t:

g = C(b, -)
h = C(t, -)

We get:

∫f Set(∫x Set(C(b, x), (ρ f) x), ∫y Set(C(t, y), (ρ' f) y))
  ≅ ∫z Set(C(t, z), (ρ' (λ C(b, -))) z)

Remember, hom-functors behave like Dirac delta functions under the integration sign. That is to say, we can use the Yoneda lemma to “integrate” over x, y, and z:

∫f Set((ρ f) b, (ρ' f) t)
  ≅ (ρ' (λ C(b, -))) t

We are now free to pick a pair of adjoint higher order functors to suit our goal. Here’s one such choice for ρ: the functor that maps a functor f (an endofunctor in C) to a hom-functor in the Kleisli category. This higher-order functor is parameterized by the choice of the object a in C:

κ_a f = C(a, f -)

It can also be written in terms of the exponential object:

κ_a f = (f -)^a

This functor has an obvious left adjoint:

λ_a g = a × g -

This follows from the standard adjunction between the product and the exponential.

Our pick for ρ' is the same Kleisli functor but taken at a different point, s:

ρ' = κ_s

With those choices, the left side of the identity

∫f Set((ρ f) b, (ρ' f) t)
  ≅ (ρ' (λ C(b, -))) t

becomes:

∫f Set(C(a, f b), C(s, f t))

This is the categorical version of the van Laarhoven lens.

Let’s now evaluate the right hand side. First we apply λ_a to the hom-functor C(b, -) to get:

λ_a C(b, -) = a × C(b, -)

The action of ρ' produces the result:

C(s, (a × C(b, t)))

This, in turn, is the categorical version of the getter/setter representation of the lens.

Translation

In Haskell, our formula derived from the higher-order Yoneda lemma with the adjoint pair:

∫f Set((ρ f) b, (ρ' f) t)
  ≅ (ρ' (λ C(b, -))) t

takes the form:

forall f. Functor f => (rho f) b -> (rho' f) t 
  ≅ (rho' (lambda ((->)b))) t

With our choice for ρ as the Kleisli functor:

rho  f = a -> f -
rho' f = s -> f -

or, in proper Haskell:

type Rho  a f b = a -> f b
type Rho' s f t = s -> f t

we get:

forall f. Functor f => (a -> f b) -> (s -> f t) 
  ≅ (rho' (lambda ((->)b))) t

To get the λ, we plug our ρ into the adjunction formula. We get:

forall x. (lambda g) x -> h x ≅ forall x. g x -> a -> h x

which has the obvious solution:

lambda g = (a, g -)

or, in proper Haskell,

type Lambda a g x = (a, g x)

Indeed, with the currying and flipping of arguments, we get the adjunction:

forall x. (a, g x) -> h x ≅ forall x. g x -> a -> h x

Now let’s evaluate the right hand side:

(rho' (lambda ((->) b))) t

We start with:

lambda (b -> -) = (a, b -> -)

The action of rho' gives us:

rho' (a, b -> -) = s -> (a, b -> -)

Altogether:

(rho' (lambda ((->) b))) t = s -> (a, b -> t)

So the right hand side is just the getter/setter pair:

(s -> a, s -> b -> t)

The final result is the well known van Laarhoven representation of the lens:

forall f. Functor f => (a -> f b) -> (s -> f t) 
  ≅ (s -> a, s -> b -> t)
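
In Haskell, one direction of this isomorphism is easy to demonstrate: instantiating f at Const a recovers the getter, and instantiating it at Identity recovers the setter. Here’s a sketch (the alias VLLens is mine):

{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

type VLLens s t a b = forall f. Functor f => (a -> f b) -> (s -> f t)

getter :: VLLens s t a b -> (s -> a)
getter l = getConst . l Const

setter :: VLLens s t a b -> (s -> b -> t)
setter l s b = runIdentity (l (const (Identity b)) s)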

This is not a new result, but I like the elegance of this derivation — especially the role played by the exponential adjunction in the Kleisli category. This formulation has the additional advantage of being generalizable towards the profunctor formulation of lenses.


The connection between the Haskell lens library and category theory is a constant source of amazement to me. The most interesting part is that lenses are formulated in terms of higher order functions that are polymorphic in functors (or, more generally, profunctors). Consider, for instance, this definition:

type Lens s t a b = forall f. Functor f => (a -> f b) -> (s -> f t)

In Haskell, saying that a function is polymorphic in functors, which form a class parameterized by type constructors of the kind *->* (or *->*->*, in the case of profunctors) and supporting a special method called fmap (or dimap, respectively) is rather mind-boggling.

In category theory, on the other hand, functors are standard fare. You can form categories of functors. The properties of such categories are described by pretty much the same machinery as those of any other category.

In particular, one of the most important theorems of category theory, the Yoneda lemma, works in the category of functors out of the box. I have previously shown how to employ the Yoneda lemma to derive the representation for Haskell lenses (see my original blog post and, independently, this paper by Jaskelioff and O’Connor — or a more recent expanded post). Continuing with this program, I’m going to show how to use the Yoneda lemma with profunctors. But let’s start with the basics.

By the way, if you feel intimidated by mathematical notation, don’t worry, I have provided a translation to Haskell. However, math notation is often more succinct and almost always more general. I guess, the same ideas could be expressed using C++ templates, but it would look like an incomprehensible mess.

Functor Categories

Functors between any two given categories C and D can themselves be organized into a category, which is often called [C, D] or D^C. The objects in that category are functors, and the morphisms are natural transformations. Given two functors f and g, the hom-set between them can be either called

Nat(f, g)

or

[C, D](f, g)

depending on how much information you want to expose. (For simplicity, I’ll assume that the categories are small, so that the “sets” of natural transformations are sets indeed.)

What’s interesting is that, since functor categories are just categories, we can have functors going between them. We sometimes call them higher order functors. We can also have higher order functors going from a functor category to a regular category, in particular to the category of sets, Set. An example of such a functor is a hom-functor in a functor category. You construct this functor (also called a representable functor) when you fix one end of the hom-set and vary the other. In any category, the mapping:

x -> C(a, x)

is a functor from C to Set. We often use a shorthand notation for this functor:

C(a, -)

If we replace C by a functor category then, for a fixed functor g, the mapping:

f -> [C, D](g, f)

is a higher order functor. It maps f to a set of natural transformations — itself an object in Set.

Representable functors play an important role in the Yoneda lemma. Take the set of natural transformations from a representable functor in C to any functor f that goes from C to Set. This set is in one-to-one correspondence with the set of values of this functor at the object a:

[C, Set](C(a, -), f) ≅ f a

This correspondence is an isomorphism, which is natural both in a and f.

The set of natural transformations between two functors f and g can also be expressed as an end:

[C, D](f, g) = ∫x∈C D(f x, g x)

The end notation is sometimes more convenient because it makes the object x (the “integration variable”) explicit. The Yoneda lemma, in this notation, becomes:

∫x∈C Set(C(a, x), f x) ≅ f a

If you’re familiar with distributions, this formula will immediately resonate with you — it looks like the definition of the Dirac delta function:

∫ dx δ(a - x) f(x) ≅ f(a)

We can apply the Yoneda lemma to a functor category to get:

Nat([C, D](g, -), φ) ≅ φ g

or, in the end notation,

∫f Set(∫x D(g x, f x), φ f) ≅ φ g

Here, the “integration variable” f is itself a functor from C to D, and so is g; φ, however, is a higher order functor. It maps functors from [C, D] to sets from Set. The natural transformations in this formula are higher order natural transformations between higher order functors.

Furthermore, if we substitute for φ another instance of the representable functor, [C, D](h, -), we get the formula for the higher order Yoneda embedding:

Nat([C, D](g, -), [C, D](h, -)) ≅ [C, D](h, g)

which reduces higher order natural transformations to lower order natural transformations. Notice the inversion of g and h on the right hand side.

Using the end notation, this becomes:

∫f Set(∫x D(g x, f x), ∫y D(h y, f y))
  ≅ ∫z D(h z, g z)

We can further specialize this formula by replacing D with Set. We can then choose both functors to be hom-functors (for some fixed a and b):

g = C(a, -)
h = C(b, -)

We get:

∫f Set(∫x Set(C(a, x), f x), ∫y Set(C(b, y), f y))
  ≅ ∫z Set(C(b, z), C(a, z))

This can be simplified by applying the Yoneda lemma to the internal ends (“integrating” over x, y, and z) to get:

∫f Set(f a, f b) ≅ C(a, b)

This simple formula has some interesting possibilities that I will explore later.

Translation

All this might be easier to digest for programmers when translated to Haskell. Natural transformations are polymorphic functions:

forall x. f x -> g x

Here, f and g are arbitrary Haskell Functors. It’s a straightforward translation of the end formula:

∫x∈Set Set(f x, g x)

where the end is replaced by the universal quantifier, and the hom-set in Set by a function type. I have deliberately used Set rather than Hask as the category of Haskell types, because I’m not going to pretend that I care about non-termination.

A higher order functor of the kind we are interested in is a mapping from functors to types, which could be defined as follows:

class HFunctor (phi :: (* -> *) -> *) where
  hfmap :: (forall a. f a -> g a) -> (phi f -> phi g)

The higher order hom-functor is defined as:

newtype HHom f g = HHom (forall a. f a -> g a)

Indeed, it’s easy to define hfmap for it:

instance HFunctor (HHom f) where
  hfmap nat (HHom nat') = HHom (nat . nat')

The types give it away:

nat    :: forall a. g a -> h a
nat'   :: forall a. f a -> g a
result :: HHom f h

Higher order natural transformations between such functors will have the signature:

type HNat (phi :: (* -> *) -> *) (psi :: (* -> *) -> *) = 
  forall f. Functor f => phi f -> psi f

The standard Yoneda lemma establishes the isomorphism between f a and the following higher order polymorphic function:

forall x. (a -> x) -> f x  ≅  f a
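
Both directions of this isomorphism are one-liners; here’s a sketch (RankNTypes is needed for the polymorphic argument):

{-# LANGUAGE RankNTypes #-}

toYoneda :: Functor f => f a -> (forall x. (a -> x) -> f x)
toYoneda fa = \g -> fmap g fa

fromYoneda :: (forall x. (a -> x) -> f x) -> f a
fromYoneda alpha = alpha id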

The Yoneda lemma for higher order functors is the equivalence between φ g and:

forall f. Functor f => forall x. (g x -> f x) -> φ f  ≅  φ g

Compare this again with:

∫f Set(∫x Set(g x, f x), φ f) ≅ φ g

The higher order Yoneda embedding takes the form of the equivalence between:

forall f. Functor f => forall x. (g x -> f x) -> forall y. (h y -> f y)

and

forall z. h z -> g z

The earlier result of the double application of the Yoneda lemma:

∫f Set(f a, f b) ≅ C(a, b)

translates to:

forall f. Functor f => f a -> f b ≅ a -> b

One direction of this equivalence simply reiterates the definition of a functor: a function a->b can be lifted to any functor. The other direction is a little more interesting. Given two types, a and b, if there is a function from f a to f b for any functor f, then there is a direct function from a to b. In Set, where there are functions between any two types, with the exception of a->Void, this is not a big surprise.
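
Here’s a sketch of the two witnesses: the first direction is just fmap, and the second instantiates f at the identity functor.

{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity (..))

toPoly :: (a -> b) -> (forall f. Functor f => f a -> f b)
toPoly g = fmap g

fromPoly :: (forall f. Functor f => f a -> f b) -> (a -> b)
fromPoly h = runIdentity . h . Identity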

But there are other categories embedded in Set, and the same categorical formula will lead to more interesting translations. In particular, think of categories where the hom-set is not equivalent to a simple function type with trivial composition. A good example is the basic formulation of lens as the getter/setter pair, or a function of type:

type Lens s t a b = s -> (a, b -> t)

Such functions don’t compose naturally, but their functor-polymorphic representations do.
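
To see the problem, here’s a sketch of the glue code needed to compose two such lenses by hand (compLens is a made-up name):

-- uses the first-order Lens alias from above: s -> (a, b -> t)
compLens :: Lens s t a b -> Lens a b u v -> Lens s t u v
compLens l1 l2 = \s -> let (a, bt) = l1 s
                           (u, vb) = l2 a
                       in (u, bt . vb)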

Profunctors

You’ve seen the reusability of categorical constructs in action. We can have functors operate on functors, and natural transformations that work between higher order functors. The same Yoneda lemma works as well in the category of types and functions, as in the category of functors and natural transformations. From that perspective, a profunctor is just a special case of a functor. Given two categories C and D, a profunctor is a functor:

C^op × D -> Set

It’s a map from a product category to Set. Because the first component of the product is the opposite category (all morphisms reversed), this functor is contravariant in the first argument.

Let’s translate this definition to Haskell. We substitute all three categories with the same category of types and functions, which is essentially Set (remember, we ignore the bottom values). So a profunctor is a functor from Set^op×Set to Set. It’s a mapping of types — a two-argument type constructor p — and a mapping of morphisms. A morphism in Set^op×Set is a pair of functions going between pairs (a, b) and (s, t). Because of contravariance, the first function goes in the opposite direction:

(s -> a, b -> t)

A profunctor lifts this pair to a single function:

p a b -> p s t

The lifting is done by the function dimap, which is usually written in the curried form:

class Profunctor p where
    dimap :: (s -> a) -> (b -> t) -> p a b -> p s t
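
The simplest example is the function arrow itself, which pre-composes with the contravariant function and post-composes with the covariant one:

instance Profunctor (->) where
    dimap f g h = g . h . f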

All said and done, a profunctor is still a functor, so we can reuse all the machinery of functor calculus, including all versions of the Yoneda lemma.

Let’s start with the Yoneda lemma for the category C^op×D. Straightforward substitution leads to:

[C^op×D, Set]((C^op×D)(<c, d>, -), p) ≅ p <c, d>

or, in the end notation:

∫<x, y>∈C^op×D Set((C^op×D)(<c, d>, <x, y>), p <x, y>) ≅ p <c, d>

Here, p is the profunctor operating on pairs of objects, such as <c, d>. A hom-set in the product category C^op×D goes between two such pairs:

(C^op×D)(<c, d>, <x, y>)

Here’s the straightforward translation to Haskell:

forall x y. (x -> c) -> (d -> y) -> p x y ≅ p c d

Notice the customary currying and the reversal of source with target in the first function argument due to contravariance.

Since profunctors are just functors, they form a functor category:

[C^op×D, Set]

(not to be confused with Prof, the profunctor category, where profunctors serve as morphisms rather than objects).

We can easily rewrite the higher-order Yoneda lemma replacing functors with profunctors:

∫p Set(∫<x, y> Set(q <x, y>, p <x, y>), π p) ≅ π q

And this is what it looks like in Haskell:

forall p. Profunctor p => (forall x y. q x y -> p x y) -> pi p ≅ pi q

Here, π is a higher order functor acting on profunctors, with values in Set. In Haskell it’s defined by a type class:

class HFunProf (pi :: (* -> * -> *) -> *) where
  fhpmap :: (forall a b. p a b -> q a b) -> (pi p -> pi q)

Natural transformations between such functors have the type:

type HNatProf (pi :: (* -> * -> *) -> *) (rho :: (* -> * -> *) -> *) =
  forall p. Profunctor p => pi p -> rho p

Notice that we are now defining functions that are polymorphic in profunctors. This is getting us closer to the profunctor formulation of the lens library, in particular to prisms and isos.

Understanding Isos

An iso is a perfect example of a data structure straddling the gap between lenses and prisms. Its first order definition is simple:

type Iso s t a b = (s -> a, b -> t)

The name derives from isomorphism, which is a special case of an iso (I think a cuter name for an iso would be Mirror). The crucial observation is that this is nothing but the type corresponding to a hom-set in the product category Set^op×Set:

(Set^op×Set)(<a, b>, <s, t>)

We know how to compose such morphisms:

compIso :: Iso s t a b -> Iso a b u v -> Iso s t u v
(f1, g1) `compIso` (f2, g2) = (f2 . f1, g1 . g2)

but it’s not as straightforward as function composition. Fortunately, there is a higher order representation of isos, which composes using simple function composition. The trick is to make it profunctor-polymorphic:

type Iso s t a b = forall p. Profunctor p => p a b -> p s t

Why are the two definitions isomorphic? There is a standard argument based on parametricity, which I will skip, because there is a better explanation.

Recall the earlier result of applying the Yoneda lemma to the functor category:

forall f. Functor f => f a -> f b ≅ a -> b

The similarity is striking, isn’t it? That’s because the categorical formula for both identities is the same:

∫f Set(f a, f b) ≅ C(a, b)

All we need is to replace C with C^op×D and rewrite it in terms of pairs of objects:

∫p Set(p <a, b>, p <s, t>) ≅ (C^op×D)(<a, b>, <s, t>)

But that’s exactly what we need:

forall p. Profunctor p => p a b -> p s t  ≅ (s -> a, b -> t)
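
Both directions can be written down explicitly; the nontrivial one instantiates p at a concrete profunctor that merely stores the pair of functions (it exists in the lens library under the name Exchange). A sketch:

{-# LANGUAGE RankNTypes #-}

data Exchange a b s t = Exchange (s -> a) (b -> t)

instance Profunctor (Exchange a b) where
    dimap f g (Exchange sa bt) = Exchange (sa . f) (g . bt)

fromPair :: (s -> a, b -> t) -> (forall p. Profunctor p => p a b -> p s t)
fromPair (f, g) = dimap f g

toPair :: (forall p. Profunctor p => p a b -> p s t) -> (s -> a, b -> t)
toPair iso = let Exchange f g = iso (Exchange id id)
             in (f, g)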

The immediate advantage of the profunctor-polymorphic representation is that you can compose two isos using straightforward function composition. Instead of using compIso, we can use the dot:

p :: Iso s t a b
q :: Iso a b u v
r :: Iso s t u v
r = p . q

Of course, the full power of lenses is in the ability to compose (and type-check) combinations of different elements of the library.

Note: The definition of Iso in the lens library involves a functor f:

type Iso s t a b = forall p f. (Profunctor p, Functor f) => 
    p a (f b) -> p s (f t)

This functor can be absorbed into the definition of the profunctor p without any loss of generality.

Next: Combining adjunctions with the Yoneda lemma.

Acknowledgments

I’m grateful to Gershom Bazerman and Gabor Greif for useful comments and to André van Meulebrouck for checking the grammar and spelling.


In the previous blog post we talked about relations. I gave an example of a thin category as a kind of relation that’s compatible with categorical structure. In a thin category, the hom-set is either an empty set or a singleton set. It so happens that these two sets form a sub-category of Set. It’s a very interesting category. It consists of the two objects — let’s give them new names o and i. Besides the mandatory identity morphisms, we also have a single morphism going from o to i, corresponding to the function we call absurd in Haskell:

absurd :: Void -> a
absurd v = case v of {}

This tiny category is sometimes called the interval category. I’ll call it o->i.


The object o is initial, and the object i is terminal — just as the empty set and the singleton set were in Set. Moreover, the cartesian product from Set can be used to define a tensor product in o->i. We’ll use this tensor product to build a monoidal category.

Monoidal Categories

A tensor product is a bifunctor ⊗ with some additional properties. Here, in the interval category, we’ll define it through the following multiplication table:

o ⊗ o = o
o ⊗ i = o
i ⊗ o = o
i ⊗ i = i

Its action on pairs of morphisms (what we call bimap in Haskell) is also easy to define. For instance, what’s the action of ⊗ on the pair <absurd, id_i>? This pair of morphisms goes from <o, i> to <i, i>. Under the bifunctor ⊗, the source <o, i> produces o, and the target <i, i> produces i. There is only one morphism from o to i, so we have:

absurd ⊗ id_i = absurd

If we designate the (terminal) object i as the unit of the tensor product, we get a (symmetric) monoidal category. A monoidal category is a category with a tensor product that’s associative and unital (usually, up to isomorphism — but here, strictly).
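As a quick sanity check, we can model this in Haskell, with Void standing in for o, () standing in for i, and pairing standing in for the tensor product (a toy sketch; the function names are made up):

import Data.Void (Void)

-- i ⊗ i = i: a pair of units carries no more information than a unit
unitPair :: ((), ()) -> ()
unitPair _ = ()

-- o ⊗ i = o: a pair with a Void component is itself uninhabited
voidPair :: (Void, ()) -> Void
voidPair (v, _) = v

Both functions are halves of isomorphisms, reproducing two rows of the multiplication table; the remaining rows work the same way.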

Now imagine that we replace hom-sets in our original thin category with objects from the monoidal category o->i (we’ll call them hom-objects). After all, we were only using two sets from Set. We can replace the empty hom-set with the object o, and the singleton hom-set with the object i. We get what’s called an enriched category (although, in this case, it’s more of an impoverished category).

[Figure: An example of a thin category (a total order with objects 1, 2, and 3) with hom-sets replaced by hom-objects from the interval category. Think of i as corresponding to less-than-or-equal, and o as greater.]

Enriched Categories

An enriched category has hom-objects instead of hom-sets. These are objects from some monoidal category V called the base category. The base category has to be monoidal because we want to define something that would replace the usual composition of morphisms. Morphisms are elements of hom-sets. However, hom-objects, in general, have no elements. We don’t know what an element of o or i is.

So to fully define an enriched category we have to come up with a sensible substitute for composition. To do that, we need to rethink composition — first in terms of hom-sets, then in terms of hom-objects.

We can think of composition as a function from a cartesian product of two hom-sets to a third hom-set:

compose_{a b c} :: C(b, c) × C(a, b) -> C(a, c)

Generalizing it, we can replace hom-sets with hom-objects (here, either o or i), the cartesian product with the tensor product, and a function with a morphism (notice: it’s a morphism in our monoidal category o->i). These composition-defining morphisms form a “composition table” for hom-objects.

As an example, take the composition of two i’s. Their product i ⊗ i is i again, and there is only one morphism out of i, the identity morphism. In terms of the original hom-sets, this means that the composition of two morphisms always exists. In general, we have to impose this condition when we’re defining a category, enriched or not — here it just happens automatically.

For instance (see illustration), compose_{0 1 2} = id_i:

compose_{0 1 2} (C(1, 2) ⊗ C(0, 1))
= compose_{0 1 2} (i ⊗ i)
= compose_{0 1 2} i
= i
= C(0, 2)
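In Haskell, we can model the hom-objects of such a thin enriched category as Bools, with o as False and i as True (a hedged sketch; the names are made up). The composition table then boils down to the transitivity of <=:

-- hom x y plays the role of the hom-object C(x, y)
hom :: Int -> Int -> Bool
hom x y = x <= y

-- The tensor product of hom-objects is (&&). The composition morphism
-- C(b, c) ⊗ C(a, b) -> C(a, c) exists because, on Bool, (<=) is
-- implication, and this implication always holds:
composable :: Int -> Int -> Int -> Bool
composable a b c = (hom b c && hom a b) <= hom a c  -- always True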

In every category we must also have identity morphisms. These are special elements in the hom-sets of the form C(a, a). We have to find a way to define their equivalent in the enriched setting. We’ll use the standard trick of defining generalized elements. It’s based on the observation that selecting an element from a set s is the same as selecting a morphism that goes from the singleton set (the terminal object in Set) to s. In a monoidal category, we replace the terminal object with the monoidal unit.

So, instead of picking an identity morphism in C(a, a), we use a morphism from the monoidal unit i:

j_a :: i -> C(a, a)

Again, in the case of a thin category, there is only one morphism leaving i, and that’s the identity morphism. That’s why we are automatically guaranteed that, in a thin category, all hom-objects of the form C(a, a) are equal to i.

Composition in a category must also satisfy associativity and identity conditions. Associativity in the enriched setting translates straightforwardly to a commuting diagram, but identity is a little trickier. We have to use j_a to “select” the identity from the hom-object C(a, a) while composing it with some other hom-object C(b, a). We start with the product:

i ⊗ C(b, a)

Because i is the monoidal unit, this is equal to C(b, a). On the other hand, we can tensor together two morphisms in o->i — remember, a tensor product is a bifunctor, so it also acts on morphisms. Here we’ll tensor j_a and the identity at C(b, a):

j_a ⊗ id_{C(b, a)}

We act with this tensor product of morphisms on the object i ⊗ C(b, a) to get C(a, a) ⊗ C(b, a). Then we use composition to get:

C(a, a) ⊗ C(b, a) -> C(b, a)

These two ways of getting to C(b, a) must coincide, leading to the identity condition for enriched categories.

Now that we’ve seen how the enrichment works for thin categories, we can apply the same mechanism to define categories enriched over any monoidal category V.

The important part is that V defines a (bifunctor) tensor product ⊗ and a unit object i. Associativity and unitality may be either strict or up to isomorphism (notice that a regular cartesian product is associative only up to isomorphism — (a, (b, c)) is not equal to ((a, b), c)).

Instead of sets of morphisms, an enriched category has hom-objects that are objects in V. We use the same notation as for hom-sets: C(a, b) is the hom-object that connects object a to object b. Composition is replaced by morphisms in V:

compose_{a b c} :: C(b, c) ⊗ C(a, b) -> C(a, c)

Instead of identity morphisms, we have morphisms in V:

j_a :: i -> C(a, a)

Finally, associativity and unitality of composition are imposed in the form of a few commuting diagrams.

Impoverished Yoneda

The Yoneda Lemma talks about functors from an arbitrary category to Set. To generalize the Yoneda lemma to enriched categories we first have to generalize functors. Their action on objects is not a problem; it’s the action on morphisms that needs our attention.

Enriched Functors

Since in an enriched category we no longer have access to individual morphisms, we have to define the action of functors on hom-objects wholesale. This is only possible if the hom-objects in the target category come from the same base category V as the hom-objects in the source category. In other words, both categories must be enriched over the same monoidal category. We can then use regular morphisms in V to map hom-objects.

Between any two objects a and b in C we have the hom-object C(a, b). The two objects are mapped by the functor f to f a and f b, and there is a hom-object between them, D(f a, f b). The action of f on C(a, b) is defined as a morphism in V:

C(a, b) -> D(f a, f b)

Let’s see what this means in our impoverished thin category. First of all, a functor will always map related objects to related objects. That’s because there is no morphism from i to o. A bond between two objects cannot be broken by an impoverished functor.

If the relation is a partial order, for instance less-than-or-equal, then it follows that a functor between posets preserves the ordering — it’s monotone.

A functor must also preserve composition and identity. The former can be easily expressed as a commuting diagram. Identity preservation in the enriched setting involves the use of j_a. Starting from i we can use j_a to get to C(a, a), which the functor maps to D(f a, f a). Or we can use j_{f a} to get there directly. We insist that both paths be the same.

In our impoverished category, this just works because j_a is the identity morphism and all hom-objects of the form C(a, a) or D(b, b) are equal to i.

Back to Yoneda: You might remember that we start the Yoneda construction by fixing one object a in C, and then varying another object x to define the functor:

x -> C(a, x)

This functor maps C to Set, because xs are objects in C, and hom-sets are sets — objects of Set.

In the enriched environment, the same construction results in a mapping from C to V, because hom-objects are objects of the base category V.

But is this mapping a functor? This is far from obvious, considering that C is an enriched category, and we have just said that enriched functors can only go between categories that are enriched over the same base category. The target of our functor, the category V, is not enriched. It turns out that, as long as V is closed, we can turn it into an enriched category.

Self Enrichment

Let’s first see how we would enrich our tiny category o->i. First of all, let’s check if it’s closed. Closedness means that hom-sets can be objectified — for every hom-set there is an object called the exponential object that objectifies it. The exponential object in a (symmetric) monoidal category is defined through the adjunction:

V(a⊗b, c) ≅ V(b, cᵃ)

This is the standard adjunction for defining exponentials, except that we are using the tensor product instead of the regular product. The hom-sets are sets of morphisms between objects in V (here, in o->i).

Let’s check, for instance, if there’s an object that corresponds to the hom-set V(o, i), which we would call iᵒ. We have:

V(o⊗b, i) ≅ V(b, iᵒ)

Whatever b we choose, when multiplied by o it will yield o, so the left hand side is V(o, i), a singleton set. Therefore V(b, iᵒ) must be a singleton set too, for any choice of b. In particular, if b is i, we see that the only choice for iᵒ is:

iᵒ = i

You can check that all exponentiation rules in o->i can be obtained from simple algebra by replacing o with zero and i with one.
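Spelled out, with the convention that 0⁰ = 1, the full table of exponentials is:

oᵒ = i   (0⁰ = 1)
oⁱ = o   (0¹ = 0)
iᵒ = i   (1⁰ = 1)
iⁱ = i   (1¹ = 1)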

Every closed symmetric monoidal category can be enriched in itself by replacing hom-sets with the corresponding exponentials. For instance, in our case, we end up replacing all empty hom-sets in the category o->i with o, and all singleton hom-sets with i. You can easily convince yourself that it works, and the result is the category o->i enriched in itself.

We can now take a category C that’s enriched over a closed symmetric monoidal category V, and show that the mapping:

x -> C(a, x)

is indeed an enriched functor. It maps objects of C to objects of V and hom-objects of C to hom-objects (exponentials) of V.

[Figure: An example of a functor from a total order enriched over the interval category to the interval category. This particular functor is the hom-functor C(a, x) for a equal to 3.]

Let’s see what this functor looks like in a poset. Given some a, the hom-object C(a, x) is equal to i if a <= x. So an x is mapped to i if it’s greater-or-equal to a, otherwise it’s mapped to o. If you think of the objects mapped to o as colored black and the ones mapped to i as colored red, you’ll see the object a and the whole graph below it must be painted red.
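In Haskell-flavored pseudo-code, again modeling the interval category by Bool, this hom-functor on a poset of Ints is simply (a sketch, with a made-up name):

-- C(a, -) as a Bool-valued functor: True is i (red), False is o (black)
homFrom :: Int -> Int -> Bool
homFrom a x = a <= x

Monotonicity guarantees that if x <= y then homFrom a x <= homFrom a y: the functor never takes red back to black.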

Enriched Natural Transformations

Now that we know what enriched functors are, we have to define natural transformations between them. This is a bit tricky, since a regular natural transformation is defined as a family of morphisms. But again, instead of picking individual morphisms from hom-sets we can work with the closest equivalent: generalized elements — morphisms going from the unit object i to hom-objects. So an enriched natural transformation between two enriched functors f and g is defined as a family of morphisms in V:

α_a :: i -> D(f a, g a)

Natural transformations are very limited in our impoverished category. Let’s see what morphisms from i are at our disposal. We have one morphism from i to i: the identity morphism id_i. This makes sense — we think of i as having a single element. There is no morphism from i back to o; and that makes sense too — we think of o as having no elements. The only possible generalized components of an impoverished natural transformation between two functors f and g correspond to D(f a, g a) equal to i; which means that, for every a, f a must be less-than-or-equal to g a. A natural transformation can only push a functor uphill.

When the target category is o->i, as in the impoverished Yoneda lemma, a natural transformation may never connect red to black. So once the first functor switches to red, the other must follow.

Naturality Condition

There is, of course, a naturality condition that goes with this definition of a natural transformation. The essence of it is that it shouldn’t matter if we first apply a functor and then the natural transformation α, or the other way around. In the enriched context, there are two ways of getting from C(a, b) to D(f a, g b). One is to multiply C(a, b) by i on the right:

C(a, b) ⊗ i

apply the product g ⊗ α_a to get:

D(g a, g b) ⊗ D(f a, g a)

and then apply composition to get:

D(f a, g b)

The other way is to multiply C(a, b) by i on the left:

i ⊗ C(a, b)

apply α_b ⊗ f to get:

D(f b, g b) ⊗ D(f a, f b)

and compose the two to get:

D(f a, g b)

The naturality condition requires that this diagram commute.

Enriched Yoneda

The enriched version of the Yoneda lemma talks about enriched natural transformations from the functor x -> C(a, x) to any enriched functor f that goes from C to V.

Consider for a moment a functor from a poset to our tiny category o->i (which, by the way, is also a poset). It will map some objects to o (black) and others to i (red). As we’ve seen, a functor must preserve the less-than-or-equal relation, so once we get into the red territory, there is no going back to black. And a natural transformation may only repaint black to red, not the other way around.

Now we would like to say that natural transformations from x -> C(a, x) to f are in one-to-one correspondence with the elements of f a, except that f a is not a set, so it doesn’t have elements. It’s an object in V. So instead of talking about elements of f a, we’ll talk about generalized elements — morphisms from the unit object i to f a. And that’s how the enriched Yoneda lemma is formulated — as a natural bijection between the set of natural transformations and a set of morphisms from the unit object to f a.

Nat(C(a, -), f) ≅ i -> f a

In our running example, there are only two possible values for f a.

  1. If the value is o then there is no morphism from i to it. The Yoneda lemma tells us that there is no natural transformation in that case. That makes sense, because the value of the functor x -> C(a, x) at x=a is i, and there is no morphism from i to o.
  2. If the value is i then there is exactly one morphism from i to it — the identity. The Yoneda lemma tells us that there is just one natural transformation in that case. It’s the natural transformation whose generalized component at any object x is the identity morphism i -> i.

Strong Enriched Yoneda

There is something unsatisfactory in the fact that the enriched Yoneda lemma ends up using a mapping between sets. First we try to get away from sets as far as possible, then we go back to sets of morphisms. It feels like cheating. Not to worry! There is a stronger version of the Yoneda lemma that deals with this problem. What we need is to replace the set of natural transformations with an object in V that would represent them — just like we replaced the set of morphisms with the exponential object. Such an object is defined as an end:

∫x V(f x, g x)

The strong version of the Yoneda lemma establishes the natural isomorphism:

∫x V(C(a, x), f x) ≅ f a

Enriched Profunctors

We’ve seen that a profunctor is a functor from a product category Cᵒᵖ × D to Set. The enriched version of a profunctor requires the notion of a product of enriched categories. We would like the product of enriched categories to also be an enriched category. In fact, we would like it to be enriched over the same base category V as the component categories.

We’ll define objects in such a category as pairs of objects from the component categories, but the hom-objects will be defined as tensor products of the component hom-objects. In the enriched product category, the hom-object between two pairs, <c, d> and <c', d'>, is (note the reversal in the first component, due to the op):

(Cᵒᵖ ⊗ D)(<c, d>, <c', d'>) = C(c', c) ⊗ D(d, d')

You can convince yourself that composition of such hom-objects requires the tensor product to be symmetric (at least up to isomorphism). That’s because you have to be able to rearrange the hom-objects in a tensor product of tensor products.

An enriched profunctor is defined as an enriched functor from the tensor product of two categories to the (self-enriched) base category:

Cᵒᵖ ⊗ D -> V

Just like regular profunctors, enriched profunctors can be composed using the coend formula. The only difference is that the cartesian product is replaced by the tensor product in V. They form a bicategory called V-Prof.

Enriched profunctors are the basis of the definition of Tambara modules, which are relevant in the application to Haskell lenses.

Conclusion

One of the reasons for using category theory is to get away from set theory. In general, objects in a category don’t have to form sets. The morphisms, however, are elements of sets — the hom-sets. Enriched categories go a step further and replace even those sets with categorical objects. However, it’s not categories all the way down — the base category that’s used for enrichment is still a regular old category with hom-sets.

Acknowledgments

I’m grateful to Gershom Bazerman for useful comments and to André van Meulebrouck for checking the grammar and spelling.


A profunctor is a categorical construct that takes relations to a new level. It is an embodiment of a proof-relevant relation.

We don’t talk enough about relations. We talk about domesticated relations — functions; or captive ones — equalities; but we don’t talk enough about relations in the wild. And, as is often the case in category theory, a less constrained construct may have more interesting properties and may lead to better insights.

Relations

A relation between two sets is defined as a set of pairs. The first element of each pair is from the first set, and the second from the second. In other words, it’s a subset of the cartesian product of two sets.

This definition may be extended to categories. If C and D are small categories, we can define a relation between objects as a set of pairs of objects. In general, such pairs are themselves objects in the product category C×D. We could define a relation between categories as a subcategory of C×D. This works as long as we ignore morphisms or, equivalently, work with discrete categories.

There is another way of defining relations using a characteristic function. You can define a function on the cartesian product of two sets — a function that assigns zero (or false) to those pairs that are not in a relation, and one (or true) to those which are.

Extending this to categories, we would use a functor rather than a function. We could, for instance, define a relation as a functor from C×D to Set — a functor that maps pairs of objects to either an empty set or a singleton set. The (somewhat arbitrary) choice of Set as the target category will make sense later, when we make the connection between relations and hom-sets.

But a functor is not just a mapping of objects — it must map morphisms as well. Here, since we are dealing with a product category, our characteristic functor must map pairs of morphisms to functions between sets. We only worry about the empty set and the singleton set, so there aren’t that many functions to choose from.

The next question is: Should we map the two morphisms in a pair covariantly, or maybe map one of them contravariantly? To see which possibility makes more sense, let’s consider the case when D is the same as C. In other words, let’s look at relations between objects of the same category. There are actually categories that are based on relations, for instance preorders. In a preorder, two objects are in a relation if there is a morphism between them; and there can be at most one morphism between any two objects. A hom-set in a preorder can only be an empty set or a singleton set. Empty set — no relation. Singleton set — the objects are related.

But that’s exactly how we defined a relation in the first place — a mapping from a pair of objects to Set (how’s that for foresight). In a preorder setting, this mapping is nothing but the hom-functor itself. And we know that hom-functors are contravariant in the first argument and covariant in the second:

C(-,=) :: Cᵒᵖ×C -> Set

That’s an argument in favor of choosing mixed covariance for the characteristic functor defining a relation.

A preorder is also called a thin category — a category where there’s at most one morphism per hom-set. Therefore a hom-functor in any thin category defines a relation.

Let’s dig a little deeper into why contravariance in the first argument makes sense in defining a relation. Suppose two objects a and b are related, i.e., the characteristic functor R maps the pair <a, b> to a singleton set. In the hom-set interpretation, where R is the hom-functor C(-, =), it means that there is a single morphism r:

r :: a -> b

Now let’s pick a morphism in Cᵒᵖ×C that goes from <a, b> to some <s, t>. A morphism in Cᵒᵖ×C is a pair of morphisms in C:

f :: s -> a
g :: b -> t

The composition of morphisms g ∘ r ∘ f is a morphism from s to t. That means the hom-set C(s, t) is not empty — therefore s and t are related.

And they should be related. That’s because the functor R acting on <f, g> must yield a function from the set C(a, b) to the set C(s, t). There’s no function from a non-empty set to the empty set. So, if the former is non-empty, the latter cannot be empty. In other words, if b is related to a and there is a morphism from <a, b> to <s, t> then t is related to s. We were able to “transport” the relation along a morphism. By making the characteristic functor R contravariant in the first argument and covariant in the second, we automatically make the relation compatible with the structure of the category.
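This transport is easy to write down in Haskell: it is just pre- and post-composition (the name transport is mine):

-- given the pair <f, g> and a proof r :: a -> b, produce a proof s -> t
transport :: (s -> a) -> (b -> t) -> (a -> b) -> (s -> t)
transport f g r = g . r . f

As we are about to see, this is exactly the shape of dimap for the function-arrow profunctor.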

In general, hom-sets are not constrained to empty and singleton sets. In an arbitrary category C, we can still think of hom-sets as defining some kind of generalized relation between objects. The empty hom-set still means no relation. Non-empty hom-sets can be seen as multiple “proofs” or “witnesses” to the relation.

Now that we know that we can imitate relations using hom-sets, let’s take a step back. I can think of two reasons why we would like to separate relations from hom-sets: One is that relations defined by hom-sets are always reflexive because of identity morphisms. The other reason is that we might want to define various relations on top of an existing category, a category that has its own hom-sets. It turns out that a profunctor is just the answer.

Profunctors

A profunctor assigns sets to pairs of objects — possibly objects taken from two different categories — and it does it in a way that’s compatible with the structure of these categories. In particular, it’s a functor that’s contravariant in its first argument and covariant in the second:

Cᵒᵖ × D -> Set

Interpreting elements of such sets as individual “proofs” of a relation makes a profunctor a kind of proof-relevant relation. (This is very much in the spirit of Homotopy Type Theory (HoTT), where one considers proof-relevant equalities.)

In Haskell, we substitute all three categories in the definition of the profunctor with the category of types and functions, which is essentially Set if you ignore the bottom values. So a profunctor is a functor from Setᵒᵖ×Set to Set. It’s a mapping of types — a two-argument type constructor p — and a mapping of morphisms. A morphism in Setᵒᵖ×Set is a pair of functions going between pairs of sets (a, b) and (s, t). Because of contravariance, the first function goes in the opposite direction:

(s -> a, b -> t)

A profunctor lifts this pair to a single function:

p a b -> p s t

The lifting is done by the function dimap, which is usually written in the curried form:

class Profunctor p where
    dimap :: (s -> a) -> (b -> t) -> p a b -> p s t
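The function arrow itself is the prototypical profunctor, and its dimap is precisely the transport we wrote for relations (this instance also ships with Data.Profunctor in the profunctors package):

instance Profunctor (->) where
    dimap f g h = g . h . f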

Profunctor Composition

As with any construction in category theory, we would like to know if profunctors are composable. But how do you compose something that has two inputs that are objects in different categories and one output that is a set? Just like with a Tetris block, we have to turn it on its side. Profunctors generalize relations between categories, so let’s compose them like relations.

Suppose we have a relation P from C to X and another relation Q from X to D. How do we build a composite relation between C and D? The standard way is to find an object in X that can serve as a bridge. We need an object x that is in a relation P with c (we’ll write it as c P x), and which is in a relation Q with d (denoted as x Q d). If such an object exists, we say that d is in a relation with c — the relation being the composition of P and Q.

We’ll base the composition of profunctors on the same idea. Except that a profunctor produces a whole set of proofs of a relation. We not only have to provide an x that is related to both c and d, but also compose the proofs of these relations.

By convention, a profunctor from x to c, p x c, is interpreted as a relation from c to x (what we denoted c P x). So the first step in the composition is finding an x such that p x c is a non-empty set and for which q d x is also a non-empty set. This not only establishes the two relations, but also generates their proofs — elements of sets. The proof that both relations are in force is simply a pair of proofs (a logical conjunction, in terms of propositions as types). The set of such pairs, or the cartesian product of p x c and q d x, for some x, defines the composition of profunctors.

Have a look at this Haskell definition (in Data.Profunctor.Composition):

data Procompose p q d c where
  Procompose :: p x c -> q d x -> Procompose p q d c

Here, the cartesian product (p x c, q d x) is curried, and the existential quantifier over x is implicit in the use of the GADT.

This Haskell definition is a special case of a more general definition of the composition of profunctors that relate arbitrary categories. The existential quantifier in this case is represented by a coend:

(p ∘ q) d c = ∫x (p x c) × (q d x)

Since profunctors can be composed, it’s natural to ask if they form a category. They don’t form a traditional category, because profunctor composition is associative and unital only up to isomorphism; instead, they form a bicategory called Prof. The objects in this bicategory are categories, the morphisms are profunctors, and the role of the identity morphism is played by the hom-functor C(-,=) — our prototypical profunctor.

The fact that the hom-functor is the unit of profunctor composition follows from the so-called ninja Yoneda lemma. This can also be explained in terms of relations. The hom-functor establishes a relation between any two objects that are connected by at least one morphism. As I mentioned before, this relation is reflexive. It follows that, if we have a “proof” of p d c, we can immediately compose it with the trivial “proof” of C(d, d), which is id_d, and get a proof of the composition:

(p ∘ C(-, =)) d c

Conversely, if this composition exists, it means that there is a non-empty hom-set C(d, x) and a proof of p x c. We can then take the element of C(d, x):

f :: d -> x

pair it with an identity at c, and lift the pair:

<f, id_c>

using p to transform p x c to p d c — the proof that d is in relation with c. The fact that the relation defined by a profunctor is compatible with the structure of the category (it can be “transported” using a morphism in the product category Cop×D) is crucial for this proof.
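Using the Procompose definition from above, both directions of this argument can be transcribed into Haskell, with the function arrow playing the role of the hom-functor (a sketch; the names unitP and counitP are made up):

-- insert the trivial proof id :: d -> d
unitP :: p d c -> Procompose p (->) d c
unitP pdc = Procompose pdc id

-- absorb the function f :: d -> x into the proof p x c
counitP :: Profunctor p => Procompose p (->) d c -> p d c
counitP (Procompose pxc f) = dimap f id pxc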

Acknowledgments

I’m grateful to Gershom Bazerman for useful comments and to André van Meulebrouck for checking the grammar and spelling.


This is part 19 of Categories for Programmers. Previously: Adjunctions. See the Table of Contents.

Free Monoid from Adjunction

Free constructions are a powerful application of adjunctions. A free functor is defined as the left adjoint to a forgetful functor. A forgetful functor is usually a pretty simple functor that forgets some structure. For instance, lots of interesting categories are built on top of sets. But categorical objects, which abstract those sets, have no internal structure — they have no elements. Still, those objects often carry the memory of sets, in the sense that there is a mapping — a functor — from a given category C to Set. A set corresponding to some object in C is called its underlying set.

Monoids are such objects that have underlying sets — sets of elements. There is a forgetful functor U from the category of monoids Mon to the category of sets, which maps monoids to their underlying sets. It also maps monoid morphisms (homomorphisms) to functions between sets.

I like to think of Mon as having split personality. On the one hand, it’s a bunch of sets with multiplication and unit elements. On the other hand, it’s a category with featureless objects whose only structure is encoded in morphisms that go between them. Every set-function that preserves multiplication and unit gives rise to a morphism in Mon.

Things to keep in mind:

  • There may be many monoids that map to the same set, and
  • There are fewer monoid morphisms (or, at most, as many) than there are functions between the underlying sets.

[Figure: Monoids m1 and m2 have the same underlying set. There are more functions between the underlying sets of m2 and m3 than there are morphisms between them.]

The functor F that’s the left adjoint to the forgetful functor U is the free functor that builds free monoids from their generator sets. The adjunction follows from the free monoid universal construction we’ve discussed before.

In terms of hom-sets, we can write this adjunction as:

Mon(F x, m) ≅ Set(x, U m)

This (natural in x and m) isomorphism tells us that:

  • For every monoid homomorphism between the free monoid F x generated by x and an arbitrary monoid m there is a unique function that embeds the set of generators x in the underlying set of m. It’s a function in Set(x, U m).
  • For every function that embeds x in the underlying set of some m there is a unique monoid morphism between the free monoid generated by x and the monoid m. (This is the morphism we called h in our universal construction.)
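For lists, which are the free monoids of Haskell, the two directions of this bijection can be written down directly (a sketch; the names toHom and fromGen are mine):

-- Set(x, U m) -> Mon(F x, m): extend a function on generators
-- to a monoid homomorphism out of the list monoid (this is foldMap)
toHom :: Monoid m => (x -> m) -> ([x] -> m)
toHom f = foldr (\a acc -> f a <> acc) mempty

-- Mon(F x, m) -> Set(x, U m): restrict a homomorphism to the generators,
-- embedded as singleton lists
fromGen :: Monoid m => ([x] -> m) -> (x -> m)
fromGen h x = h [x]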

The intuition is that F x is the “maximum” monoid that can be built on the basis of x. If we could look inside monoids, we would see that any morphism that belongs to Mon(F x, m) embeds this free monoid in some other monoid m. It does it by possibly identifying some elements. In particular, it embeds the generators of F x (i.e., the elements of x) in m. The adjunction shows that the embedding of x, which is given by a function from Set(x, U m) on the right, uniquely determines the embedding of monoids on the left, and vice versa.

In Haskell, the list data structure is a free monoid (with some caveats: see Dan Doel’s blog post). A list type [a] is a free monoid with the type a representing the set of generators. For instance, the type [Char] contains the unit element — the empty list [] — and the singletons like ['a'], ['b'] — the generators of the free monoid. The rest is generated by applying the “product.” Here, the product of two lists simply appends one to another. Appending is associative and unital (that is, there is a neutral element — here, the empty list). A free monoid generated by Char is nothing but the set of all strings of characters from Char. It’s called String in Haskell:

type String = [Char]

(type defines a type synonym — a different name for an existing type).

Another interesting example is a free monoid built from just one generator. It’s the type of the list of units, [()]. Its elements are [], [()], [(), ()], etc. Every such list can be described by one natural number — its length. There is no more information encoded in the list of units. Appending two such lists produces a new list whose length is the sum of the lengths of its constituents. It’s easy to see that the type [()] is isomorphic to the additive monoid of natural numbers (with zero). Here are the two functions that are the inverse of each other, witnessing this isomorphism:

toNat :: [()] -> Int
toNat = length

toLst :: Int -> [()]
toLst n = replicate n ()

For simplicity I used the type Int rather than Natural, but the idea is the same. The function replicate creates a list of length n pre-filled with a given value — here, the unit.
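These maps are monoid homomorphisms: length turns appending into addition. We can spell this property out as a simple test (the name is made up):

-- toNat [] == 0, and appending maps to addition:
isHomomorphic :: [()] -> [()] -> Bool
isHomomorphic xs ys = toNat (xs ++ ys) == toNat xs + toNat ys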

Some Intuitions

What follows are some hand-waving arguments. These kinds of arguments are far from rigorous, but they help in forming intuitions.

To get some intuition about the free/forgetful adjunctions, it helps to keep in mind that functors and functions are lossy in nature. Functors may collapse multiple objects and morphisms; functions may bunch together multiple elements of a set. Also, their image may cover only part of their codomain.

An “average” hom-set in Set will contain a whole spectrum of functions starting with the ones that are least lossy (e.g., injections or, possibly, isomorphisms) and ending with constant functions that collapse the whole domain to a single element (if there is one).

I tend to think of morphisms in an arbitrary category as being lossy too. It’s just a mental model, but it’s a useful one, especially when thinking of adjunctions — in particular those in which one of the categories is Set.

Formally, we can only speak of morphisms that are invertible (isomorphisms) or non-invertible. It’s the latter kind that may be thought of as lossy. There is also the notion of mono- and epimorphisms, which generalize the idea of injective (non-collapsing) and surjective (covering the whole codomain) functions; but it’s possible to have a morphism that is both mono and epi, and which is still non-invertible.

In the Free ⊣ Forgetful adjunction, we have the more constrained category C on the left, and a less constrained category D on the right. Morphisms in C are “fewer” because they have to preserve some additional structure. In the case of Mon, they have to preserve multiplication and unit. Morphisms in D don’t have to preserve as much structure, so there are “more” of them.

When we apply a forgetful functor U to an object c in C, we think of it as revealing the “internal structure” of c. In fact, if D is Set we think of U as defining the internal structure of c — its underlying set. (In an arbitrary category, we can’t talk about the internals of an object other than through its connections to other objects, but here we are just hand-waving.)

If we map two objects c' and c using U, we expect that, in general, the mapping of the hom-set C(c', c) will cover only a subset of D(U c', U c). That’s because morphisms in C(c', c) have to preserve the additional structure, whereas the ones in D(U c', U c) don’t.

But since an adjunction is defined as an isomorphism of particular hom-sets, we have to be very picky with our selection of c'. In the adjunction, c' is picked not from just anywhere in C, but from the (presumably smaller) image of the free functor F:

C(F d, c) ≅ D(d, U c)

The image of F must therefore consist of objects that have lots of morphisms going to an arbitrary c. In fact, there have to be as many structure-preserving morphisms from F d to c as there are non-structure-preserving morphisms from d to U c. It means that the image of F must consist of essentially structure-free objects (so that there is no structure to preserve by morphisms). Such “structure-free” objects are called free objects.

In the monoid example, a free monoid has no structure other than what’s generated by unit and associativity laws. Other than that, all multiplications produce brand new elements.

In a free monoid, 2*3 is not 6 — it’s a new element [2, 3]. Since there is no identification of [2, 3] and 6, a morphism from this free monoid to any other monoid m is allowed to map them separately. But it’s also okay for it to map both [2, 3] and 6 (their product) to the same element of m. Or to identify [2, 3] and 5 (their sum) in an additive monoid, and so on. Different identifications give you different monoids.

This leads to another interesting intuition: free monoids, instead of performing the monoidal operation, accumulate the arguments that were passed to them. Instead of multiplying 2 and 3 they remember 2 and 3 in a list. The advantage of this scheme is that we don’t have to specify what monoidal operation we will use. We can keep accumulating arguments, and only at the end apply an operator to the result. And it’s then that we can choose what operator to apply. We can add the numbers, or multiply them, or perform addition modulo 2, and so on. A free monoid separates the creation of an expression from its evaluation. We’ll see this idea again when we talk about algebras.
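Here is that separation in miniature: accumulate first, decide on the operation later (a toy sketch; the names are mine):

-- the free monoid just remembers the arguments
expr :: [Int]
expr = [2, 3]

-- evaluation is deferred; we pick the interpretation at the end
evalProduct, evalSum, evalSumMod2 :: [Int] -> Int
evalProduct = foldr (*) 1
evalSum     = foldr (+) 0
evalSumMod2 = (`mod` 2) . foldr (+) 0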

This intuition generalizes to other, more elaborate free constructions. For instance, we can accumulate whole expression trees before evaluating them. The advantage of this approach is that we can transform such trees to make the evaluation faster or less memory consuming. This is, for instance, done in implementing matrix calculus, where eager evaluation would lead to lots of allocations of temporary arrays to store intermediate results.

Challenges

  1. Consider a free monoid built from a singleton set as its generator. Show that there is a one-to-one correspondence between morphisms from this free monoid to any monoid m, and functions from the singleton set to the underlying set of m.

Next: Monads: Programmer’s Definition.

Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.