This is part 13 of Categories for Programmers. Previously: Free Monoids. See the Table of Contents.

It’s about time we had a little talk about sets. Mathematicians have a love/hate relationship with set theory. It’s the assembly language of mathematics — at least it used to be. Category theory tries to step away from set theory, to some extent. For instance, it’s a known fact that the set of all sets doesn’t exist, but the category of all sets, Set, does. So that’s good. On the other hand, we assume that morphisms between any two objects in a category form a set. We even called it a hom-set. To be fair, there is a branch of category theory where morphisms don’t form sets. Instead they are objects in another category. Those categories that use hom-objects rather than hom-sets are called enriched categories. In what follows, though, we’ll stick to categories with good old-fashioned hom-sets.

A set is the closest thing to a featureless blob you can get outside of categorical objects. A set has elements, but you can’t say much about these elements. If you have a finite set, you can count the elements. You can kind of count the elements of an infinite set using cardinal numbers. The set of natural numbers, for instance, is smaller than the set of real numbers, even though both are infinite. But, maybe surprisingly, the set of rational numbers is the same size as the set of natural numbers.

Other than that, all the information about sets can be encoded in functions between them — especially the invertible ones called isomorphisms. For all intents and purposes isomorphic sets are identical. Before I summon the wrath of foundational mathematicians, let me explain that the distinction between equality and isomorphism is of fundamental importance. In fact it is one of the main concerns of the latest branch of mathematics, the Homotopy Type Theory (HoTT). I’m mentioning HoTT because it’s a pure mathematical theory that takes inspiration from computation, and one of its main proponents, Vladimir Voevodsky, had a major epiphany while studying the Coq theorem prover. The interaction between mathematics and programming goes both ways.

The important lesson about sets is that it’s okay to compare sets of unlike elements. For instance, we can say that a given set of natural transformations is isomorphic to some set of morphisms, because a set is just a set. Isomorphism in this case just means that for every natural transformation from one set there is a unique morphism from the other set and vice versa. They can be paired against each other. You can’t compare apples with oranges, if they are objects from different categories, but you can compare sets of apples against sets of oranges. Often transforming a categorical problem into a set-theoretical problem gives us the necessary insight or even lets us prove valuable theorems.

The Hom Functor

Every category comes equipped with a canonical family of mappings to Set. Those mappings are in fact functors, so they preserve the structure of the category. Let’s build one such mapping.

Let’s fix one object a in C and pick another object x also in C. The hom-set C(a, x) is a set, an object in Set. When we vary x, keeping a fixed, C(a, x) will also vary in Set. Thus we have a mapping that takes an object x in C to an object, the set C(a, x), in Set.
Hom-Set

If we want to stress the fact that we are considering the hom-set as a mapping in its second argument, we use the notation:

C(a, _)

with the underscore serving as the placeholder for the argument.

This mapping of objects is easily extended to the mapping of morphisms. Let’s take a morphism f in C between two arbitrary objects x and y. The object x is mapped to the set C(a, x), and the object y is mapped to C(a, y), under the mapping we have just defined. If this mapping is to be a functor, f must be mapped to a function between the two sets:

C(a, x) -> C(a, y)

Let’s define this function point-wise, that is for each argument separately. For the argument we should pick an arbitrary element of C(a, x) — let’s call it h. Morphisms are composable if they match end to end. It so happens that the target of h matches the source of f, so their composition:

f ∘ h :: a -> y

is a morphism going from a to y. It is therefore a member of C(a, y).

Hom Functor

We have just found our function from C(a, x) to C(a, y), which can serve as the image of f. If there is no danger of confusion, we’ll write this lifted function as:

C(a, f)

and its action on a morphism h as:

C(a, f) h = f ∘ h

Since this construction works in any category, it must also work in the category of Haskell types. In Haskell, the hom-functor is better known as the Reader functor:

newtype Reader a x = Reader { runReader :: a -> x }  -- a newtype, so the instance is legal Haskell
instance Functor (Reader a) where
    fmap f (Reader h) = Reader (f . h)
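
For instance (a quick usage sketch, not in the original text): fmapping (+1) over the reader (*2) first doubles the argument, then increments the result:

example :: Int
example = runReader (fmap (+1) (Reader (*2))) 5  -- ((+1) . (*2)) 5 == 11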

Now let’s consider what happens if, instead of fixing the source of the hom-set, we fix the target. In other words, we’re asking whether the mapping

C(_, a)

is also a functor. It is, but instead of being covariant, it’s contravariant. That’s because the same kind of matching of morphisms end to end results in precomposition by f, rather than postcomposition, as was the case with C(a, _).

We have already seen this contravariant functor in Haskell. We called it Op:

newtype Op a x = Op { runOp :: x -> a }  -- Contravariant comes from Data.Functor.Contravariant
instance Contravariant (Op a) where
    contramap f (Op h) = Op (h . f)

Finally, if we let both objects vary, we get a profunctor C(_, _), which is contravariant in the first argument and covariant in the second. We have seen this profunctor before, when we talked about functoriality:

instance Profunctor (->) where
  dimap ab cd bc = cd . bc . ab
  lmap = flip (.)
  rmap = (.)

The important lesson is that this observation holds in any category: the mapping of objects to hom-sets is functorial. Since contravariance is equivalent to a mapping from the opposite category, we can state this fact succinctly as:

C(_, _) :: C^op × C -> Set

Representable Functors

We’ve seen that, for every choice of an object a in C, we get a functor from C to Set. This kind of structure-preserving mapping to Set is often called a representation. We are representing objects and morphisms of C as sets and functions in Set.

The functor C(a, _) itself is sometimes called representable. More generally, any functor F that is naturally isomorphic to the hom-functor, for some choice of a, is called representable. Such a functor must necessarily be Set-valued, since C(a, _) is.

I said before that we often think of isomorphic sets as identical. More generally, we think of isomorphic objects in a category as identical. That’s because objects have no structure other than their relation to other objects (and themselves) through morphisms.

For instance, we’ve previously talked about the category of monoids, Mon, that was initially modeled with sets. But we were careful to pick as morphisms only those functions that preserved the monoidal structure of those sets. So if two objects in Mon are isomorphic, meaning there is an invertible morphism between them, they have exactly the same structure. If we peeked at the sets and functions that they were based upon, we’d see that the unit element of one monoid was mapped to the unit element of another, and that a product of two elements was mapped to the product of their mappings.

The same reasoning can be applied to functors. Functors between two categories form a category in which natural transformations play the role of morphisms. So two functors are isomorphic, and can be thought of as identical, if there is an invertible natural transformation between them.

Let’s analyze the definition of the representable functor from this perspective. For F to be representable we require that: there be an object a in C; one natural transformation α from C(a, _) to F; another natural transformation, β, in the opposite direction; and that their composition be the identity natural transformation.

Let’s look at the component of α at some object x. It’s a function in Set:

α_x :: C(a, x) -> F x

The naturality condition for this transformation tells us that, for any morphism f from x to y, the following diagram commutes:

F f ∘ α_x = α_y ∘ C(a, f)

In Haskell, we would replace natural transformations with polymorphic functions:

alpha :: forall x. (a -> x) -> F x

with the optional forall quantifier. The naturality condition

fmap f . alpha = alpha . fmap f

is automatically satisfied due to parametricity (it’s one of those theorems for free I mentioned earlier), with the understanding that fmap on the left is defined by the functor F, whereas the one on the right is defined by the reader functor. Since fmap for reader is just postcomposition, we can be even more explicit. Acting on h, an element of C(a, x), the naturality condition simplifies to:

fmap f (alpha h) = alpha (f . h)

The other transformation, beta, goes the opposite way:

beta :: forall x. F x -> (a -> x)

It must respect naturality conditions, and it must be the inverse of α:

α ∘ β = id = β ∘ α

We will see later that a natural transformation from C(a, _) to any Set-valued functor always exists (Yoneda’s lemma) but it is not necessarily invertible.

Let me give you an example in Haskell with the list functor and Int as a. Here’s a natural transformation that does the job:

alpha :: forall x. (Int -> x) -> [x]
alpha h = map h [12]

I have arbitrarily picked the number 12 and created a singleton list with it. I can then fmap the function h over this list and get a list of the type returned by h. (There are actually as many such transformations as there are integers.)

The naturality condition is equivalent to the composability of map (the list version of fmap):

map f (map h [12]) = map (f . h) [12]

But if we tried to find the inverse transformation, we would have to go from a list of arbitrary type x to a function returning x:

beta :: forall x. [x] -> (Int -> x)

You might think of retrieving an x from the list, e.g., using head, but that won’t work for an empty list. Notice that there is no choice for the type a (in place of Int) that would work here. So the list functor is not representable.
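To see the obstruction concretely, here’s a doomed attempt at the inverse (my sketch, not the author’s):

beta :: [x] -> Int -> x
beta (x:_) _ = x
beta []    _ = error "an empty list contains no x"  -- partial: no total definition exists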

Remember when we talked about Haskell (endo-) functors being a little like containers? In the same vein we can think of representable functors as containers for storing memoized results of function calls (the members of hom-sets in Haskell are just functions). The representing object, the type a in C(a, _), is thought of as the key type, with which we can access the tabulated values of a function. The transformation we called α is called tabulate, and its inverse, β, is called index. Here’s a (slightly simplified) Representable class definition:

class Representable f where
   type Rep f :: *
   tabulate :: (Rep f -> x) -> f x
   index    :: f x -> Rep f -> x

Notice that the representing type, our a, which is called Rep f here, is part of the definition of Representable. The star just means that Rep f is a type (as opposed to a type constructor, or other more exotic kinds).

Infinite lists, or streams, which cannot be empty, are representable.

data Stream x = Cons x (Stream x)

You can think of them as memoized values of a function taking an Integer as an argument. (Strictly speaking, I should be using natural numbers rather than all of Integer, but I didn’t want to complicate the code.)

To tabulate such a function, you create an infinite stream of values. Of course, this is only possible because Haskell is lazy. The values are evaluated on demand. You access the memoized values using index:

instance Representable Stream where
    type Rep Stream = Integer
    tabulate f = Cons (f 0) (tabulate (f . (+1)))
    index (Cons b bs) n = if n == 0 then b else index bs (n - 1)

It’s interesting that you can implement a single memoization scheme to cover a whole family of functions with arbitrary return types.
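For instance (a small usage sketch): the same tabulate/index pair memoizes doubling just as well as any other function out of Integer:

doubles :: Stream Integer
doubles = tabulate (* 2)  -- a conceptually infinite stream, evaluated lazily

ten :: Integer
ten = index doubles 5     -- 10, read off the memoized stream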

Representability for contravariant functors is similarly defined, except that we keep the second argument of C(_, a) fixed. Or, equivalently, we may consider functors from C^op to Set, because C^op(a, _) is the same as C(_, a).

There is an interesting twist to representability. Remember that hom-sets can internally be treated as exponential objects, in cartesian closed categories. The hom-set C(a, x) is equivalent to the exponential x^a, and for a representable functor F we can write:

(_)^a = F

Let’s take the logarithm of both sides, just for kicks:

a = log F

Of course, this is a purely formal transformation, but if you know some of the properties of logarithms, it is quite helpful. In particular, it turns out that functors that are based on product types can be represented with sum types, and that sum-type functors are not in general representable (example: the list functor).
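As a quick sanity check of this formal rule (my example, not from the original text): the identity functor satisfies Identity x ≅ x, and x ≅ x^(), since the unit type () plays the role of 1. So the “logarithm” of Identity is ():

newtype Identity x = Identity x

instance Representable Identity where
    type Rep Identity = ()
    tabulate f = Identity (f ())  -- a "table" with a single entry
    index (Identity x) () = x     -- the only key is ()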

Finally, notice that a representable functor gives us two different implementations of the same thing — one a function, one a data structure. They have exactly the same content — the same values are retrieved using the same keys. That’s the sense of “sameness” I was talking about. Two naturally isomorphic functors are identical as far as their contents are concerned. On the other hand, the two representations are often implemented differently and may have different performance characteristics. Memoization is used as a performance enhancement and may lead to substantially reduced run times. Being able to generate different representations of the same underlying computation is very valuable in practice. So, surprisingly, even though it’s not concerned with performance at all, category theory provides ample opportunities to explore alternative implementations that have practical value.

Challenges

  1. Show that the hom-functors map identity morphisms in C to corresponding identity functions in Set.
  2. Show that Maybe is not representable.
  3. Is the Reader functor representable?
  4. Using Stream representation, memoize a function that squares its argument.
  5. Show that tabulate and index for Stream are indeed the inverse of each other. (Hint: use induction.)
  6. The functor:
    data Pair a = Pair a a

    is representable. Can you guess the type that represents it? Implement tabulate and index.

Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.


This is part 12 of Categories for Programmers. Previously: Limits and Colimits. See the Table of Contents.

Monoids are an important concept in both category theory and in programming. Categories correspond to strongly typed languages, monoids to untyped languages. That’s because in a monoid you can compose any two arrows, just as in an untyped language you can compose any two functions (of course, you may end up with a runtime error when you execute your program).

We’ve seen that a monoid may be described as a category with a single object, where all logic is encoded in the rules of morphism composition. This categorical model is fully equivalent to the more traditional set-theoretical definition of a monoid, where we “multiply” two elements of a set to get a third element. This process of “multiplication” can be further dissected into first forming a pair of elements and then identifying this pair with an existing element — their “product.”

What happens when we forgo the second part of multiplication — the identification of pairs with existing elements? We can, for instance, start with an arbitrary set, form all possible pairs of elements, and call them new elements. Then we’ll pair these new elements with all possible elements, and so on. This is a chain reaction — we’ll keep adding new elements forever. The result, an infinite set, will be almost a monoid. But a monoid also needs a unit element and the law of associativity. No problem, we can add a special unit element and identify some of the pairs — just enough to support the unit and associativity laws.

Let’s see how this works in a simple example. Let’s start with a set of two elements, {a, b}. We’ll call them the generators of the free monoid. First, we’ll add a special element e to serve as the unit. Next we’ll add all the pairs of elements and call them “products”. The product of a and b will be the pair (a, b). The product of b and a will be the pair (b, a), the product of a with a will be (a, a), the product of b with b will be (b, b). We can also form pairs with e, like (a, e), (e, b), etc., but we’ll identify them with a, b, etc. So in this round we’ll only add (a, a), (a, b), (b, a), and (b, b), and end up with the set {e, a, b, (a, a), (a, b), (b, a), (b, b)}.

Bunnies

In the next round we’ll keep adding elements like: (a, (a, b)), ((a, b), a), etc. At this point we’ll have to make sure that associativity holds, so we’ll identify (a, (b, a)) with ((a, b), a), etc. In other words, we won’t be needing internal parentheses.

You can guess what the final result of this process will be: we’ll create all possible lists of as and bs. In fact, if we represent e as an empty list, we can see that our “multiplication” is nothing but list concatenation.

This kind of construction, in which you keep generating all possible combinations of elements, and perform the minimum number of identifications — just enough to uphold the laws — is called a free construction. What we have just done is to construct a free monoid from the set of generators {a, b}.

Free Monoid in Haskell

A two-element set in Haskell is equivalent to the type Bool, and the free monoid generated by this set is equivalent to the type [Bool] (list of Bool). (I am deliberately ignoring problems with infinite lists.)

A monoid in Haskell is defined by the type class:

class Monoid m where
    mempty  :: m
    mappend :: m -> m -> m

This just says that every Monoid must have a neutral element, which is called mempty, and a binary function (multiplication) called mappend. The unit and associativity laws cannot be expressed in Haskell and must be verified by the programmer every time a monoid is instantiated.
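They can, however, be written down as testable properties (a hypothetical QuickCheck-style sketch, with Eq added so results can be compared):

prop_leftUnit, prop_rightUnit :: (Monoid m, Eq m) => m -> Bool
prop_leftUnit x  = mappend mempty x == x
prop_rightUnit x = mappend x mempty == x

prop_assoc :: (Monoid m, Eq m) => m -> m -> m -> Bool
prop_assoc x y z = mappend (mappend x y) z == mappend x (mappend y z)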

The fact that a list of any type forms a monoid is described by this instance definition:

instance Monoid [a] where
    mempty  = []
    mappend = (++)

It states that an empty list [] is the unit element, and list concatenation (++) is the binary operation.

As we have seen, a list of type a corresponds to a free monoid with the set a serving as generators. The set of natural numbers with multiplication is not a free monoid, because we identify lots of products. Compare for instance:

2 * 3 = 6
[2] ++ [3] = [2, 3] -- not the same as [6]

That was easy, but the question is, can we perform this free construction in category theory, where we are not allowed to look inside objects? We’ll use our workhorse: the universal construction.

The second interesting question is, can any monoid be obtained from some free monoid by identifying more than the minimum number of elements required by the laws? I’ll show you that this follows directly from the universal construction.

Free Monoid Universal Construction

If you recall our previous experiences with universal constructions, you might notice that it’s not so much about constructing something as about selecting an object that best fits a given pattern. So if we want to use the universal construction to “construct” a free monoid, we have to consider a whole bunch of monoids from which to pick one. We need a whole category of monoids to choose from. But do monoids form a category?

Let’s first look at monoids as sets equipped with additional structure defined by unit and multiplication. We’ll pick as morphisms those functions that preserve the monoidal structure. Such structure-preserving functions are called homomorphisms. A monoid homomorphism must map the product of two elements to the product of the mapping of the two elements:

h (a * b) = h a * h b

and it must map unit to unit.

For instance, consider a homomorphism from lists of integers to integers. If we map [2] to 2 and [3] to 3, we have to map [2, 3] to 6, because concatenation

[2] ++ [3] = [2, 3]

becomes multiplication

2 * 3 = 6
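Concretely, this homomorphism is just the product of the list’s elements. A one-line sketch:

h :: [Int] -> Int
h = foldr (*) 1  -- maps [] to 1 (unit to unit) and (++) to (*)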

Now let’s forget about the internal structure of individual monoids, and only look at them as objects with corresponding morphisms. You get a category Mon of monoids.

Okay, maybe before we forget about internal structure, let us notice an important property. Every object of Mon can be trivially mapped to a set. It’s just the set of its elements. This set is called the underlying set. In fact, not only can we map objects of Mon to sets, but we can also map morphisms of Mon (homomorphisms) to functions. Again, this seems sort of trivial, but it will become useful soon. This mapping of objects and morphisms from Mon to Set is in fact a functor. Since this functor “forgets” the monoidal structure — once we are inside a plain set, we no longer distinguish the unit element or care about multiplication — it’s called a forgetful functor. Forgetful functors come up regularly in category theory.

We now have two different views of Mon. We can treat it just like any other category with objects and morphisms. In that view, we don’t see the internal structure of monoids. All we can say about a particular object in Mon is that it connects to itself and to other objects through morphisms. The “multiplication” table of morphisms — the composition rules — is derived from the other view: monoids-as-sets. By going to category theory we haven’t lost this view completely — we can still access it through our forgetful functor.

To apply the universal construction, we need to define a special property that would let us search through the category of monoids and pick the best candidate for a free monoid. But a free monoid is defined by its generators. Different choices of generators produce different free monoids (a list of Bool is not the same as a list of Int). Our construction must start with a set of generators. So we’re back to sets!

That’s where the forgetful functor comes into play. We can use it to X-ray our monoids. We can identify the generators in the X-ray images of those blobs. Here’s how it works:

We start with a set of generators, x. That’s a set in Set.

The pattern we are going to match consists of a monoid m — an object of Mon — and a function p in Set:

p :: x -> U m

where U is our forgetful functor from Mon to Set. This is a weird heterogeneous pattern — half in Mon and half in Set.

The idea is that the function p will identify the set of generators inside the X-ray image of m. It doesn’t matter that functions may be lousy at identifying points inside sets (they may collapse them). It will all be sorted out by the universal construction, which will pick the best representative of this pattern.

Monoid Pattern

We also have to define the ranking among candidates. Suppose we have another candidate: a monoid n and a function that identifies the generators in its X-ray image:

q :: x -> U n

We’ll say that m is better than n if there is a morphism of monoids (that’s a structure-preserving homomorphism):

h :: m -> n

whose image under U (remember, U is a functor, so it maps morphisms to functions) factorizes through p:

q = U h . p

If you think of p as selecting the generators in m, and q as selecting “the same” generators in n, then you can think of h as mapping these generators between the two monoids. Remember that h, by definition, preserves the monoidal structure. It means that a product of two generators in one monoid will be mapped to a product of the corresponding two generators in the second monoid, and so on.

Monoid Ranking

This ranking may be used to find the best candidate — the free monoid. Here’s the definition:

We’ll say that m (together with the function p) is the free monoid with the generators x if and only if there is a unique morphism h from m to any other monoid n (together with the function q) that satisfies the above factorization property.

Incidentally, this answers our second question. The function U h is the one that has the power to collapse multiple elements of U m to a single element of U n. This collapse corresponds to identifying some elements of the free monoid. Therefore any monoid with generators x can be obtained from the free monoid based on x by identifying some of the elements. The free monoid is the one where only the bare minimum of identifications have been made.
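In Haskell terms, this unique homomorphism out of the free monoid is essentially foldMap: once q picks out the generators in n, the whole homomorphism is determined. A sketch:

hom :: Monoid n => (x -> n) -> [x] -> n
hom q = mconcat . map q  -- the same thing as foldMap q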

We’ll come back to free monoids when we talk about adjunctions.

Challenges

  1. Show that monoid isomorphisms map units to units. A monoid isomorphism is a monoid homomorphism that has an inverse.
  2. Consider a monoid homomorphism from lists of integers with concatenation to integers with multiplication. What is the image of the empty list []? Assume that all singleton lists are mapped to the integers they contain, that is [3] is mapped to 3, etc. What’s the image of [1, 2, 3, 4]? How many different lists map to the integer 12? Is there any other homomorphism between the two monoids?
  3. What is the free monoid generated by a one-element set? Can you see what it’s isomorphic to?

Acknowledgments

I’d like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.


Lenses are a fascinating subject. Edward Kmett’s lens library is an indispensable tool in every Haskell programmer’s toolbox. I set out to write this blog post with the goal of describing some new insights into their categorical interpretation, but then I started reviewing all the different formulations of lenses and their relations to each other. So this post turned into a little summary of the theoretical underpinning of lenses.

If you’re already familiar with lenses, you may skip directly to the last section, which describes some new results.

The Data-Centric Picture

Lens

Lenses start as a very simple idea: an accessor/mutator or getter/setter pair — something familiar to every C++, Java, or C# programmer. In Haskell they can be described as two functions:

get :: s -> a
set :: s -> a -> s

Given an object of type s, the first function produces the value of the object’s sub-component of type a — which is the focus of the particular lens. The second function takes the object together with a new value for the sub-component, and produces a modified object.

A simple example is a lens that focuses on the first component of a pair:

get :: (a, a') -> a
get (x, y) = x
set :: (a, a') -> a -> (a, a')
set (x, y) x' = (x', y)

In Haskell we can go even further and define a lens for a polymorphic object, in which the setter changes not only the value of the sub-component, but also its type. This will, of course, also change the type of the resulting object. So, in general we have:

get :: s -> a
set :: s -> b -> t

Our pair example doesn’t change much on the surface:

get :: (a, a') -> a
get (x, y) = x
set :: (a, a') -> b -> (b, a')
set (x, y) x' = (x', y)

The difference is that the type of x' can now be different from the type of x. The first component of the pair is of type a before the update, and of type b (the same as that of x') after it. This way we can turn a pair of, say, (Int, Bool) to a pair of (String, Bool).

We can always go back to the monomorphic version of the lens by choosing b equal to a and t equal to s.

Not every pair of functions like these constitutes a lens. A lens has to obey a few laws (first formulated by Pierce in the database context). In particular, if you get a component after you set it, you should get back the value you just put in:

get . set s = id

If you call set with the value you obtained from get, you should get back the unchanged object:

set s (get s) = s

Finally, a set should overwrite the result of a previous set:

set (set s a) b = set s b

So this is what I would call a “classical” lens. It’s formulated in easy-to-understand programming terms.
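Specialized to the first-component lens from the beginning of this section, the laws are easy to check as (hypothetical) QuickCheck-style properties:

prop_getSet :: (Int, Char) -> Int -> Bool
prop_getSet s a = get (set s a) == a  -- get . set s = id

prop_setGet :: (Int, Char) -> Bool
prop_setGet s = set s (get s) == s

prop_setSet :: (Int, Char) -> Int -> Int -> Bool
prop_setSet s a b = set (set s a) b == set s b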

The Algebraic Picture

It was Russell O’Connor who noticed that when you refactor the common element from the getter/setter pair, you get an interesting algebraic structure. Instead of writing the lens as two functions, we can write it as one function returning a pair:

s -> (a, b -> t)

Think of this as factoring out the “this” pointer in OO and returning the interface. Of course, the difference is that, in functional programming, the setter does not mutate the original — it returns a new version of the object instead.

Let’s for a moment concentrate on the monomorphic version of the lens — one in which a is the same as b, and s is the same as t. We can define a data structure:

data Store a s = Store a (a -> s)

and rewrite the lens as:

s -> Store a s

For instance, our first-component-of-a-pair lens will take the form:

fstl :: (a, b) -> Store a (a, b)
fstl (x, y) = Store x (\x' -> (x', y))

The first observation is that, for any a, Store a is a functor:

instance Functor (Store a) where
    fmap f (Store x h) = Store x (f . h)

It’s a well-known fact that you can define algebras for a functor, so-called F-algebras. An algebra for a functor f consists of a type s called the carrier type and a function called the action:

alg :: f s -> s

This is almost like a lens (with f replaced by Store a), except that the arrow goes the wrong way. Not to worry: there is a dual notion called a coalgebra. It consists of a carrier type s and a function:

coalg :: s -> f s

Substitute Store a for f and you see that a lens is nothing but a coalgebra for this functor. This is not saying much — we have just given a mathematical name to a programming construct, no big deal. Except that Store a is more than a functor — it’s also a comonad.

What’s a comonad? It’s a monad with the arrows reversed.

You know that you can define a monad in Haskell using return and join. Reverse the arrows on those two, and you get extract and duplicate, the two functions that define a comonad.

class (Functor w) => Comonad w where
    extract :: w a -> a
    duplicate :: w a -> w (w a)

Here, w is a type constructor that is also a functor.

This is how you can implement those two functions for Store a:

instance Comonad (Store a) where
    -- extract :: Store a s -> s
    extract (Store x h) = h x
    -- duplicate :: Store a s -> Store a (Store a s)
    duplicate (Store x h) = Store x (\y -> Store y h)

So now we have two structures: a comonad w and a coalgebra:

type Coalgebra w s = s -> w s

A lens is a special case of this coalgebra where w is Store a.

Every time you have two structures, you may legitimately ask the question: Are they compatible? Just by looking at types, you may figure out some obvious compatibility conditions. In particular, since they go the opposite way, it would make sense for extract to undo the action of coalg:

coalg :: Coalgebra w s
extract . coalg = id

Coalgebra Law 1

Also, duplicating the result of coalg should be the same as applying coalg twice (the second time lifted by fmap):

fmap coalg . coalg = duplicate . coalg

Coalgebra Law 2

If these two conditions are satisfied, we call coalg a comonad coalgebra.

And here’s the clincher:

These two conditions when applied to the Store a comonad are equivalent to our earlier lens laws.

Let’s see how it works. First we’ll express the result of our lens coalgebra acting on some object s in terms of get and set (curried set s is a function a->s):

coalg s = Store (get s) (set s)

The first condition extract . coalg = id immediately gives us the law:

set s (get s) = s

When we act with duplicate on coalg s, we get:

Store (get s) (\y -> Store y (set s))

On the other hand, when we fmap our coalg over coalg s, we get:

Store (get s) ((\s' -> Store (get s') (set s')) . set s)

Two functions — the second components of the Store objects in those equations — must be equal when acting on any a. The first function produces:

Store a (set s)

In the second one, we first apply set s to a to get set s a, which we then pass to the lambda to get:

Store (get (set s a)) (set (set s a))

This reproduces the other two (monomorphic) lens laws:

get (set s a) = a

and

set (set s a) = set s

This algebraic construction can be extended to type-changing lenses by replacing Store with its indexed version:

data IStore a b t = IStore a (b -> t)

and the comonad with its indexed counterpart. The indexed store is also called Context in the lens parlance, and an indexed comonad is also called a parametrized comonad.

So what’s an indexed comonad? Let’s start with an indexed functor. It’s a type constructor that takes three types, a, b, and s, and is a functor in the third argument:

class IxFunctor f where
    imap :: (s -> t) -> f a b s -> f a b t

IStore is obviously an indexed functor:

instance IxFunctor IStore where
    -- imap :: (s -> t) -> IStore a b s -> IStore a b t
    imap f (IStore x h) = IStore x (f . h)

An indexed comonad has the indexed versions of extract and duplicate:

class IxComonad w where
    iextract :: w a a t -> t
    iduplicate :: w a b t -> w a j (w j b t)

Notice that iextract is “diagonal” in the index types, whereas the double application of w shares one index, j, between the two applications. This plays very well with the unit and multiplication interpretation of a monad — here it looks just like matrix multiplication (although we are dealing with a comonad rather than a monad).

It’s easy to see that the instantiation of the indexed comonad for IStore works the same way as the instantiation of the comonad for Store. The types just work out that way.

instance IxComonad IStore where
    -- iextract :: IStore a a t -> t
    iextract (IStore a h) = h a
    -- iduplicate :: IStore a b t -> IStore a c (IStore c b t)
    iduplicate (IStore a h) = IStore a (\c -> IStore c h)

There is also an indexed version of a comonad coalgebra, where the coalgebra is replaced by a family of mappings from some carrier type s to w a b t; with the type t determined by s together with the choice of the indexes a and b:

type ICoalg w s t a b = s -> w a b t

The compatibility conditions that make it an (indexed) comonad coalgebra are almost identical to the standard compatibility conditions, except that we have to be careful about the index types. Here’s the first condition:

icoalg_aa :: ICoalg w s t a a
iextract . icoalg_aa = id

ICoalgebra Law

Let’s analyze the types. The inner part has the type:

icoalg_aa :: s -> w a a t

We apply iextract to it, which has the type:

iextract :: w a a t -> t

and get:

iextract . icoalg_aa :: s -> t

The right hand side of the condition has the type:

id :: s -> s

It follows that, for the diagonal components of ICoalg w s t, t must be equal to s. The diagonal part of ICoalg w s t is therefore a family of regular coalgebras.

As we have done with the monomorphic lens, we can express (ICoalg IStore s t a b), when acting on s, in terms of get and set:

icoalg_ab s = IStore (get s) (set s)

But now get s is of type a, while set s is of the type b -> t. We can still apply iextract to the diagonal term IStore a a t as required by the first compatibility condition. When equating the result to id, we recover the lens law:

set s (get s) = s

Similarly, it’s straightforward to see that the second compatibility condition:

icoalg_bc :: ICoalg w s t b c
icoalg_ab :: ICoalg w s t a b
icoalg_ac :: ICoalg w s t a c
imap icoalg_bc . icoalg_ab = iduplicate . icoalg_ac

is equivalent to the other two lens laws.

The Parametric Picture

Despite being theoretically attractive, standard lenses were awkward to use and, in particular, to compose. The breakthrough came when Twan van Laarhoven realized that there is a higher-order representation for them that has very nice compositional properties. Composing lenses to focus on sub-objects of sub-objects turned into simple function composition.

Here’s Twan’s representation (generalized by Russell for the polymorphic case):

type Lens s t a b = forall f. Functor f => (a -> f b) -> (s -> f t)

So a lens is a polymorphic higher order function with a twist. The twist is that it’s polymorphic with respect to a functor rather than a type.

You can think of it this way: the caller provides a function to modify a particular field of s, turning it from type a to f b. What the caller gets back is a function that transforms the whole of s to f t. The idea is that the lens knows how to reconstruct the whole object under the functor f, provided you tell it how to modify the field, also under that functor.

For instance, continuing with our example, here’s the van Laarhoven lens that focuses on the first component of a pair:

vL :: Lens (a, c) (b, c) a b
vL h (x, x') = fmap (\y -> (y, x')) (h x)

Here, s is (a, c) and t is (b, c).

To see that the van Laarhoven representation is equivalent to the get/set one, let’s first change the order of arguments and pull s outside of the forall quantifier:

Lens s t a b = s -> (forall f. Functor f => (a -> f b) -> f t)

Here’s how you can read this definition: For a given s, if you give me a function from a to f b, I will produce a value of type f t. And I don’t care what functor you use!

What does it mean not to care about the functor? It means that the lens must be parametrically polymorphic in f. It can’t do case analysis on a functor. It must be implemented using the same formula for f being the list functor, or the Maybe functor, or the Const functor, etc. There’s only one thing all these functors have in common, and that’s the fmap function; so that’s what we are allowed to use in the implementation of the lens.

Now let’s think what we can do with a function a -> f b that we were given. There’s only one thing: apply it to some value of type a. So we must have access to a value of type a. The result of this application is some value of type f b, but we need to produce a value of the type f t. The only way to do it is to have a function of type b->t and sneak it under the functor using fmap (so here’s where the generic functor comes in). We conclude that the implementation of the function:

forall f. Functor f => (a -> f b) -> f t

must be hiding a value of type a and a function b->t. But that’s exactly the contents of IStore a b t. Parametricity tells us that there is an IStore hiding inside the van Laarhoven lens. The lens is equivalent to:

s -> IStore a b t

In fact, with a clever choice of functors we can recover both get and set from the van Laarhoven representation.

First we select our functor to be Const a. Note that the parameter a is not the one over which the functor is defined. Const a takes a second parameter b over which it is functorial. And, even though it takes b as a type parameter, it doesn’t use it at all. Instead, like a magician, it palms an a, and then reveals it at the end of the trick.

newtype Const a b = Const { getConst :: a }

instance Functor (Const a) where
    -- fmap :: (s -> t) -> Const a s -> Const a t
    fmap _ (Const a) = (Const a)

The constructor Const happens to be a function

Const :: a -> Const a b

which has the required form to be the first argument to the lens:

a -> f b

When we apply the lens, let’s call it vL, to the function Const, we get another function:

vL Const :: s -> Const a t

We can apply this function to s, and then, in the final reveal, retrieve the value of a that was smuggled inside Const a:

get vL s = getConst $ vL Const s

Similarly, we can recover set from the van Laarhoven lens using the Identity functor:

newtype Identity a = Identity { runIdentity :: a }

We define:

set vL s x = runIdentity $ vL (Identity . const x) s
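Putting these together for our pair lens vL, here is a small usage sketch:

firstOf :: Int
firstOf = get vL (1, "one")        -- 1

updated :: (String, String)
updated = set vL (1, "one") "uno"  -- ("uno", "one"): note the focus changed type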

The beauty of the van Laarhoven representation is that it composes lenses using simple function composition. A lens takes a function and returns a function. This function can, in turn, be passed as the argument to another lens, and so on.

There’s an interesting twist to this kind of composition — the function composition operator in Haskell is the dot, just like the field accessor in OO languages like Java or C++; and the composition follows the same order as the composition of accessors in those languages. This was first observed by Conal Elliott in the context of semantic editor combinators.

Consider a lens that focuses on the a field inside some object s. Its type is:

Lens s t a b

When given a function:

h :: a -> f b

it returns a function:

h' :: s -> f t

Now consider another lens that focuses on the s field inside some even bigger object u. Its type is:

Lens u w s t

It expects a function of the type:

g :: s -> f t

We can pass the result of the first lens directly to the second lens to form a composite:

Lens u w s t . Lens s t a b

We get a lens that focuses on the a field of the object s that is the sub-object of the big object u. It works just like in Java, where you apply a dot to the result of a getter or a setter, to dig deeper into a subobject.
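Here’s what that looks like for our pair lens composed with itself (a sketch, using the get defined later in this post):

-- vL . vL :: Lens ((a, c), d) ((b, c), d) a b
innermost :: Int
innermost = get (vL . vL) ((1, "one"), True)  -- 1, the first component of the inner pair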

Not only do lenses compose using regular function composition, but we can also use the identity function as the identity lens. So lenses form a category. It’s time to have a serious look at category theory. Warning: Heavy math ahead!

The Categorical Picture

I used parametricity arguments to justify the choice of the van Laarhoven representation for the lens. The lens function is supposed to have the same form for all functors f. Parametricity arguments have an operational feel to them, which is okay, but I feel like a solid categorical justification is more valuable than any symbol-shuffling argument. So I worked on it, and eventually came up with a derivation of the van Laarhoven representation using the Yoneda lemma. Apparently Russell O’Connor and Mauro Jaskelioff had similar feelings because they came up with the same result independently. We used the same approach, going through the Store functor and applying the Yoneda lemma twice, once in the functor category, and once in the Set category (see the Bibliography).

I would like to present the same result in a more general setting of the Yoneda embedding. It’s a direct consequence of the Yoneda lemma, and it states that any category can be embedded (fully and faithfully) in the category of functors from that category to Set.

Here’s how it works: Let’s fix some object a in some category C. For any object x in that category there is a hom-set C(a, x) of morphisms from a to x. A hom-set is a set — an object in the category Set of sets. So we have defined a mapping from C to Set that takes an x and maps it to the set C(a, x). This mapping is called C(a, _), with the underscore serving as a placeholder for the argument.

Hom-Set

It’s easy to convince yourself that this mapping is in fact a functor from C to Set. Indeed, take any morphism f from x to y. We want to map this morphism to a function (a morphism in Set) that goes between C(a, x) and C(a, y). Let’s define this lifted function component-wise: given any element h from C(a, x) we can map it to f . h. It’s just a composition of two morphisms from C. The resulting morphism is a member of C(a, y). We have lifted a morphism f from C to Set thus establishing that C(a, _) is a functor.

Hom Functor

Now consider two such functors, C(a, _) and C(b, _). The Yoneda embedding theorem tells us that there is a one-to-one correspondence between the set of natural transformations between these two functors and the hom-set C(b, a).

Nat(C(a, _), C(b, _)) ≅ C(b, a)

Notice the reversed order of a and b on the right-hand side.

Yoneda Embedding

Let’s rephrase what we have just seen. For every a in C, we can define a functor C(a, _) from C to Set. Such a functor is a member of the functor category Fun(C, Set). So we have a mapping from C to the functor category Fun(C, Set). Is this mapping a functor?

We have just seen that there is a mapping between morphisms in C and natural transformations in Fun(C, Set) — that’s the gist of the Yoneda embedding. But natural transformations are morphisms in the functor category. So we do have a functor from C to the functor category Fun(C, Set). It maps objects to objects and morphisms to morphisms. It’s a contravariant functor, because of the reversal of a and b. Moreover, it maps the hom-sets in the two categories one-to-one, so it’s a fully faithful functor, and therefore it defines an embedding of categories. Every category C can be embedded in the functor category Fun(C, Set). That’s called the Yoneda embedding.

Yoneda Embedding 2

There’s an interesting consequence of the Yoneda embedding: Every functor category can be embedded in its own functor category — just replace C with a functor category in the Yoneda embedding. Recall that functors between any two categories form a category. It’s a category in which objects are functors and morphisms are natural transformations. Yoneda embedding works for that category too, which means that a functor category can be embedded in a category of functors from that functor category to Set.

Let’s see what that means. We can fix one functor, say R and consider the hom-set from R to some arbitrary functor f. Since we are in a functor category, this hom-set is a set of natural transformations between the two functors, Nat(R, f).

Now let’s pick another functor S. It also defines a set of natural transformations Nat(S, f). We can keep picking functors and mapping them to sets (sets of natural transformations). In fact we know from the previous argument that this mapping is itself a functor. This time it’s a functor from a functor category to Set.

Functor Embedding

What does the Yoneda embedding tell us about any two such functors? That the set of natural transformations between them is isomorphic to the (reversed) hom-set. But this time hom-sets are sets of natural transformations. So we have:

Nat(Nat(R, _), Nat(S, _)) ≅ Nat(S, R)

Functor Embedding 2

All natural transformations in this formula are regular natural transformations except for the outer one, which is more interesting. You may recall that a natural transformation is a family of morphisms parameterized by objects. But in this case objects are functors, and morphisms are themselves natural transformations. So it’s a family of natural transformations parameterized by functors. Keep this in mind as we proceed.

To get a better feel of what’s happening, let’s translate this to Haskell. In Haskell we represent natural transformations as polymorphic functions. This makes sense, since a natural transformation is a family of morphisms (here functions) parameterized by objects (here types). So a member of Nat(R, f) can be represented as:

forall x. R x -> f x

Similarly, the second natural transformation in our formula turns into:

forall y. S y -> f y

As I said, the outer natural transformation in the Yoneda embedding is a family of natural transformations parameterized by a functor, so we get:

forall f. Functor f => (forall x. R x -> f x) -> (forall y. S y -> f y)

You can already see one element of the van Laarhoven representation: the quantification over a functor.

The right hand side of the Yoneda embedding is a natural transformation:

forall z. S z -> R z

The next step is to pick the appropriate functors for R and S. We’ll take R to be IStore a b and S to be IStore s t.

Let’s work on the first part:

forall x. IStore a b x -> f x

A function from IStore a b x is equivalent to a function of two arguments, one of them of type a and another of type b->x:

forall x. a -> (b -> x) -> f x

We can pull a out of forall to get:

a -> (forall x. (b -> x) -> f x)

If you squint a little, you recognize that the thing in parentheses is a natural transformation between the functor C(b, _) and f, where C is the category of Haskell types. We can now apply the Yoneda lemma, which says that this set of natural transformations is isomorphic to the set f b:

forall x. (b -> x) -> f x ≅ f b
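Both directions of this isomorphism can be written down explicitly (a sketch, assuming the RankNTypes extension):

toYoneda :: Functor f => f b -> (forall x. (b -> x) -> f x)
toYoneda fb = \g -> fmap g fb  -- apply g under the functor

fromYoneda :: Functor f => (forall x. (b -> x) -> f x) -> f b
fromYoneda phi = phi id        -- instantiate at x = b and probe with id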

We can apply the same transformation to the second part of our identity:

forall y. (IStore s t y -> f y) ≅ s -> f t

Taking it all together, we get:

forall f. Functor f => (a -> f b) -> (s -> f t) 
    ≅ forall z. IStore s t z -> IStore a b z

Let’s now work on the right hand side:

forall z. IStore s t z -> IStore a b z 
    ≅ forall z. s -> (t -> z) -> IStore a b z

Again, pulling s out of forall and applying the Yoneda lemma, we get:

s -> IStore a b t

But that’s just the standard representation of the lens:

s -> IStore a b t ≅ (s -> a, s -> b -> t) = (get, set)

Thus the Yoneda embedding of the functor category leads to the van Laarhoven representation of the lens:

forall f. Functor f => (a -> f b) -> (s -> f t)
  ≅ (s -> a, s -> b -> t)

Playing with Adjunctions

This is all very satisfying, but you may wonder what’s so special about the IStore functor? The crucial step in the derivation of the van Laarhoven representation was the application of the Yoneda lemma to get this identity:

forall x. IStore a b x -> f x ≅ a -> f b
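Spelled out in Haskell, the two directions of this identity are (a sketch, again assuming RankNTypes):

toHom :: Functor f => (forall x. IStore a b x -> f x) -> (a -> f b)
toHom nat a = nat (IStore a id)  -- probe the transformation at x = b with id

fromHom :: Functor f => (a -> f b) -> IStore a b x -> f x
fromHom g (IStore a h) = fmap h (g a)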

Let’s rewrite it in the more categorical language:

Nat(IStore a b, f) ≅ C(a, f b)

The set of natural transformations from the functor IStore a b to the functor f is isomorphic to the hom-set between a and f b. Any time you see an isomorphism of hom-sets (and remember that Nat is the hom-set in the functor category), you should be on the lookout for an adjunction. And indeed, we have an adjunction between two functors. One functor is defined as:

a -> IStore a b

It takes an object a in C and maps it to a functor IStore a b parameterized by some other object b. The other functor is:

f -> f b

It maps a functor, an object in the functor category, to an object in C. This functor is also parameterized by the same b. Since this is a flipped application, I’ll call it Flapp:

newtype Flapp b f = Flapp (f b)

So, for any b, the functor-valued functor IStore _ b is left adjoint to Flapp b. This is what makes IStore special.

IStore Adjunction

As a side note: IStore a b is a covariant functor in a and a contravariant functor in b. However, Store a is not functorial in a, because a appears in both positive and negative position in its definition. So the adjunction trick doesn’t work for a simple (monomorphic) lens.

We can now turn the tables and use the adjunction to define the functor IStore in an arbitrary category (notice that the Yoneda lemma worked only for Set-valued functors). We just define a functor-valued functor IStore to be the left adjoint to Flapp, provided it exists.

Nat(IStore a b, f) ≅ C(a, f b)

Here, Nat is a set of natural transformations between endofunctors in C.

We can substitute the so defined functor into the Yoneda embedding formula we used earlier:

Nat(Nat(IStore a b, f), Nat(IStore s t, f))
    ≅ Nat(IStore s t, IStore a b)

We can now use the adjunction, rather than the Yoneda lemma, to eliminate some of the occurrences of IStore:

Nat(C(a, f b), C(s, f t))
    ≅ C(s, IStore a b t)

This is slightly more general than the original van Laarhoven equivalence.

We can go even farther and reproduce the Jaskelioff and O’Connor trick of constraining the generic functor in the definition of the van Laarhoven lens to a pointed or applicative functor. This results in a multi-focus lens. In particular, if we use pointed functors, we get lenses with zero or one targets, so called affine lenses. Restricting the functors further to applicative leads to lenses with any number of targets, or traversals.

The trick is that any pointed or applicative functor can be stripped of the additional functionality and treated just like any other functor. This act of “forgetting” about pure and <*> may itself be considered a functor in the functor category. It’s called, appropriately, a forgetful functor. The left adjoint to a forgetful functor (if it exists) is called a free functor. It takes an arbitrary functor and creates a pointed functor by generating an artificial pure; or it creates an applicative functor by adding <*>. This adjunction is described by a natural isomorphism of hom-sets — in this case sets of natural transformations:

Nat(S, U f) ≅ Nat(S*, f)

Here, U is the forgetful functor, and S* is the free applicative/pointed version of the functor S. The functor f ranges across applicative (respectively, pointed) functors.

Now we can try to substitute the free version of IStore in the Yoneda embedding formula:

Nat(Nat(IStore* a b, f), Nat(IStore* s t, f))
    ≅ Nat(IStore* s t, IStore* a b)

The formula holds for any applicative (pointed) functor f and a set of natural transformations over such functors.

The first step is to use the forgetful/free adjunction:

Nat(Nat(IStore a b, U f), Nat(IStore s t, U f))
    ≅ Nat(IStore s t, U IStore* a b)

Then we can use our defining adjunction for IStore to get:

Nat(C(a, U f b), C(s, U f t)) 
    ≅ C(s, U IStore* a b t)

In Haskell notation this reads:

forall f. Applicative f => (a -> f b) -> (s -> f t)
    ≅ s -> IAppStore a b t

(the action of U is implicit).

The free applicative version of IStore is defined as:

data IAppStore a b t = 
    Unit t
  | IAppStore a (IAppStore a b (b -> t))

These are all known results, but the use of the Yoneda embedding and the adjunction to define the IStore functor makes the derivation more compact and slightly more general.

Acknowledgments

I’m grateful to Mauro Jaskelioff, Gershom Bazerman, and Joseph Abrahamson for reading the draft and providing helpful comments and to André van Meulebrouck for editing help.

Bibliography

  1. Edward Kmett, The Haskell Lens Library
  2. Simon Peyton Jones, Lenses: Compositional Data Access and Manipulation. A Skills Matter video presentation.
  3. Joseph Abrahamson, A Little Lens Starter Tutorial
  4. Joseph Abrahamson, Lenses from Scratch
  5. Artyom, lens over tea tutorial
  6. Twan van Laarhoven, CPS based functional references
  7. Bartosz Milewski, Lenses, Stores, and Yoneda
  8. Mauro Jaskelioff, Russell O’Connor, A Representation Theorem for Second-Order Functionals
  9. Bartosz Milewski, Understanding Yoneda

This is part 4 of the miniseries about solving a simple constraint-satisfaction problem:

  s e n d
+ m o r e
---------
m o n e y

using monads in C++. Previously: The Tale of Two Monads. To start from the beginning, go to Using Monads in C++ to Solve Constraints: 1.

In the previous post I showed some very general programming techniques based on functional data structures and monads. A lot of the code I wrote to solve this particular arithmetic puzzle is easily reusable. In fact the monadic approach is perfect for constructing libraries. I talked about it when discussing C++ futures and ranges. A monad is a pure expression of composability, and composability is the most important feature of any library.

You would be justified in thinking that rolling out a monad library just to solve a simple arithmetic puzzle is overkill. But I’m sure you can imagine a lot of other, more complex problems that could be solved using the same techniques. You might also fear that such functional methods — especially when implemented in C++, which is not optimized for this kind of programming — would lead to less performant code. This doesn’t matter too much if you are solving a one-shot problem, and the time you spend developing and debugging your code dwarfs the time it takes the program to complete execution. But even if performance were an issue and you were faced with a larger problem, functional code can be parallelized much more easily than its imperative counterpart, with no danger of concurrency bugs. So you might actually get better performance from functional code by running it in parallel.

Refactoring

In this installment I’d like to talk about something that a lot of functional programmers swear by: Functional programs are amazingly easy to refactor. Anybody who has tried to refactor C++ code can testify to how hard it is, and how long it takes to iron out all the bugs introduced by refactoring. With functional code, it’s a breeze. So let’s have another look at the code from the previous post and do some deep surgery on it. This is our starting point:

StateList<tuple<int, int, int>> solve()
{
    StateList<int> sel = &select<int>;

    return mbind(sel, [=](int s) {
    return mbind(sel, [=](int e) {
    return mbind(sel, [=](int n) {
    return mbind(sel, [=](int d) {
    return mbind(sel, [=](int m) {
    return mbind(sel, [=](int o) {
    return mbind(sel, [=](int r) {
    return mbind(sel, [=](int y) {
        return mthen(guard(s != 0 && m != 0), [=]() {
            int send  = asNumber(vector<int>{s, e, n, d});
            int more  = asNumber(vector<int>{m, o, r, e});
            int money = asNumber(vector<int>{m, o, n, e, y});
            return mthen(guard(send + more == money), [=]() {
                return mreturn(make_tuple(send, more, money));
            });
        });
    });});});});});});});});
}

What immediately stands out is the amount of repetition: eight lines of code look almost exactly the same. Conceptually, these lines correspond to the eight nested loops that would be used in the imperative solution. The question is, how can we abstract over control structures such as loops? In our monadic implementation the loops are replaced with higher-order functions, and that opens up a path toward even further abstraction.

Reifying Substitutions

The first observation is that we are missing a data structure that should correspond to the central concept we use when describing the solution — the substitution. We are substituting numbers for letters, but we only see those letters as names of variables. A more natural implementation would use a map data structure, mapping characters to integers.

Notice, however, that this mapping has to be dynamically collected and torn down as we are exploring successive branches of the solution space. The imperative approach would demand backtracking. The functional approach makes use of persistent data structures. I have described such data structures previously, in particular a persistent red-black tree. It can be easily extended to a red-black map. You can find the implementation on github.

A red-black map is a generic data structure, which we’ll specialize for our use:

using Map = RBMap<char, int>;

A map lookup may fail, so it should either be implemented to return an optional, or use a default value in case of a failure. In our case we know that we never look up a value that’s not in the map (unless our code is buggy), so we can use a helper function that never fails:

int get(Map const & subst, char c)
{
    return subst.findWithDefault(-1, c);
}

Once we have objectified the substitution as a map, we can convert a whole string of characters to a number in one operation:

int toNumber(Map const & subst, string const & str)
{
    int acc = 0;
    for (char c : str)
    {
        int d = get(subst, c);
        acc = 10 * acc + d;
    }
    return acc;
}
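
As a quick sanity check, recall that the unique solution to the puzzle assigns s=9, e=5, n=6, d=7 (so that send = 9567). Here’s a minimal test of toNumber under that partial substitution, assuming only the RBMap interface used above:

Map subst;
subst = subst.inserted('s', 9);
subst = subst.inserted('e', 5);
subst = subst.inserted('n', 6);
subst = subst.inserted('d', 7);
// 9*1000 + 5*100 + 6*10 + 7
assert(toNumber(subst, "send") == 9567);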

There is one more thing that we may automate: finding the set of unique characters in the puzzle. Previously, we just figured out that they were s, e, n, d, m, o, r, y and hard-coded them into the solution. Now we can use a function, fashioned after a Haskell utility called nub:

string nub(string const & str)
{
    string result;
    for (char c : str)
    {
        if (result.find(c) == string::npos)
            result.push_back(c);
    }
    return result;
}

Don’t worry about the quadratic running time of nub — it’s only called once.
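
Applied to the text of our puzzle, nub produces the eight unique letters in order of first appearance:

assert(nub("sendmoremoney") == "sendmory");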

Recursion

With those preliminaries out of the way, let’s concentrate on analyzing the code. We have a series of nested mbind calls. The simplest thing is to turn these nested calls into recursion. This involves setting up the first recursive call, implementing the recursive function, and writing the code to terminate the recursion.

The main observation is that each level of nesting accomplishes one substitution of a character by a number. The recursion should end when we run out of characters to substitute.

Let’s first think about what data has to be passed down in a recursive call. One item is the string of characters that still need substituting. The second is the substitution map. While the first argument keeps shrinking, the second one keeps growing. The third argument is the number to be used for the next substitution.

Before we make the first recursive call, we have to prepare the initial arguments. We get the string of characters to be substituted by running nub over the text of our puzzle:

nub("sendmoremoney")

The second argument should start as an empty map. The third argument, the digit to be used for the next substitution, comes from binding the selection plan select<int> to a lambda that will call our recursive function. In Haskell, it’s common to call the auxiliary recursive function go, so that’s what we’ll do here as well:

StateList<tuple<int, int, int>> solve()
{
    StateList<int> sel = &select<int>;
    Map subst;
    return mbind(sel, [=](int s) {
        return go(nub("sendmoremoney"), subst, s);
    });
}

When implementing go, we have to think about terminating the recursion. But first, let’s talk about how to continue it.

We are called with the next digit to be used for substitution, so we should perform this substitution. This means inserting a key/value pair into our substitution map subst. The key is the first character from the string str. The value is the digit i that we were given.

subst.inserted(str[0], i)

Notice that, because the map is immutable, the method inserted returns a new map with one more entry. This is a persistent data structure, so the new map shares most of its data with its previous version. Creating the new version takes time logarithmic in the number of entries (just as insertion does in the balanced tree underlying the Standard Library’s std::map). The advantage of using a persistent map is that we don’t have to do any backtracking — the unmodified version is still available after the return from the recursive call.
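
Here’s a small illustration of that persistence, using only the RBMap operations we’ve already seen; after inserted returns, both versions of the map remain valid:

Map m0;                       // empty map
Map m1 = m0.inserted('s', 9); // new version with one entry
// The old version is untouched, so no backtracking is needed:
assert(m0.findWithDefault(-1, 's') == -1);
assert(m1.findWithDefault(-1, 's') == 9);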

The recursive call to go expects three arguments: (1) the string of characters yet to be substituted — that will be the tail of the string that we were passed, (2) the new substitution map, and (3) the new digit n. We get this new digit by binding the digit selection sel to a lambda that makes the recursive call:

mbind(sel, [=](int n) {
    string tail(&str[1], &str[str.length()]);
    return go(tail, subst.inserted(str[0], i), n);
});

Now let’s consider the termination condition. Since go was called with the next digit to be substituted, we need at least one character in the string for this last substitution. So the termination is reached when the length of the string reaches one. We have to make the last substitution and then call the final function that prunes the bad substitutions. Here’s the complete implementation of go:

StateList<tuple<int, int, int>> go(string str, Map subst, int i)
{
    StateList<int> sel = &select<int>;

    assert(str.length() > 0);
    if (str.length() == 1)
        return prune(subst.inserted(str[0], i));
    else
    {
        return mbind(sel, [=](int n) {
            string tail(&str[1], &str[str.length()]);
            return go(tail, subst.inserted(str[0], i), n);
        });
    }
}

The function prune is lifted almost literally from the original implementation. The only difference is that we are now using a substitution map.

StateList<tuple<int, int, int>> prune(Map subst)
{
    return mthen(guard(get(subst, 's') != 0 && get(subst, 'm') != 0), 
      [=]() {
        int send  = toNumber(subst, "send");
        int more  = toNumber(subst, "more");
        int money = toNumber(subst, "money");
        return mthen(guard(send + more == money), [=]() {
            return mreturn(make_tuple(send, more, money));
        });
    });
}

The top-level code that starts the chain of events and displays the solution is left unchanged:

int main()
{
    List<int> lst{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    cout << evalStateList(solve(), lst);
    return 0;
}

It’s a matter of opinion whether the refactored code is simpler or not. It’s definitely easier to generalize and less error prone. But the main point is how easy it was to perform the refactoring. The reason is that, in functional code, there are no hidden dependencies, because there are no hidden side effects. What would usually be considered a side effect in imperative code is accomplished using pure functions and monadic binding. There is no way to forget that a given function works with state, because a function that deals with state has the state specified in its signature — it returns a StateList. And it doesn’t itself modify the state. In fact it doesn’t have access to the state. It just composes functions that will modify the state when executed. That’s why we were able to move these functions around so easily.

A Word About Syntax

Monadic code in C++ is artificially complex. That’s because C++ was not designed for functional programming. The same program written in Haskell is much shorter. Here it is, complete with the helper functions prune, get, and toNumber:

solve = StateL select >>= go (nub "sendmoremoney") M.empty
  where
    go [c] subst i = prune (M.insert c i subst)
    go (c:cs) subst i = StateL select >>= go cs (M.insert c i subst)
    prune subst = do
        guard (get 's' /= 0 && get 'm' /= 0)
        let send  = toNumber "send"
            more  = toNumber "more"
            money = toNumber "money"
        guard (send + more == money)
        return (send, more, money)
      where
        get c = fromJust (M.lookup c subst)
        toNumber str = asNumber (map get str)
        asNumber = foldl (\t u -> t*10 + u) 0
main = print $ evalStateL solve [0..9]

Some of the simplification comes from using the operator >>= in place of mbind, and a simpler syntax for lambdas. But there is also some built-in syntactic sugar for chains of monadic bindings in the form of the do notation. You see an example of it in the implementation of prune.

The funny thing is that C++ is on the verge of introducing something equivalent to do notation called resumable functions. There is a very strong proposal for resumable functions in connection with the future monad, dealing with asynchronous functions. There is talk of using it with generators, or lazy ranges, which are a version of the list monad. But there is still a need for a push towards generalizing resumable functions to user-defined monads, like the one I described in this series. It just doesn’t make sense to add separate language extensions for each aspect of the same general pattern. This pattern has been known in functional programming for a long time and it’s called a monad. It’s a very useful and versatile pattern whose importance has been steadily growing as the complexity of software projects increases.

Bibliography

  1. The code for this blog is available on github.
  2. Douglas M. Auclair, MonadPlus: What a Super Monad!, The Monad.Reader Issue 11, August 25, 2008. It contains the Haskell implementation of the solution to this puzzle.
  3. Justin Le, Unique sample drawing & searches with List and StateT — “Send more money”. It was the blog post that inspired this series.

This is part 3 of the miniseries about solving a simple constraint-satisfaction problem:

  s e n d
+ m o r e
---------
m o n e y

using monads in C++. Previously: The State Monad

A constraint satisfaction problem may be solved by brute force, by trying all possible substitutions. This may be done using the imperative approach with loops, or a declarative approach with the list monad. The list monad works by fanning out a list of “alternative universes,” one for each individual substitution. It lets us explore the breadth of the solution space. The universes in which the substitution passes all the constraints are then gathered together to form the solution.

We noticed also that, as we are traversing the depth of the solution space, we could avoid testing unnecessary branches by keeping track of some state. Indeed, we start with a list of 10 possibilities for the first character, but there are only 9 possibilities for the second character, 8 for the third, and so on. It makes sense to start with a list of ten digits, and keep removing digits as we make our choices. At every point, the list of remaining choices is the state we have to keep track of.

Again, there is the imperative approach and the functional approach to state. The imperative approach makes the state mutable and the composition of actions requires backtracking. The functional approach uses persistent, immutable data structures, and the composition is done at the level of functions — plans of action that can be executed once the state is available. These plans of action form the state monad.

The functional solution to our problem involves the combination of the list monad and the state monad. Mashing two monads together is not trivial — in Haskell this is done using monad transformers — but here I’ll show you how to do it manually.

State List

Let’s start with the state, which is the list of digits to choose from:

using State = List<int>;

In the state monad, our basic data structure was a function that took a state and returned a pair: a value and a new state. Now we want our functions to generate, as in quantum mechanics, a whole list of alternative universes, each described by a pair: a value and a state.

Notice that we have to vary both the value and the state within the list. For instance, the function that selects a number from a list should produce a different number and a different list in each of the alternative universes. If the choice is, say, 3, the new state will have 3 removed from the list.


Here’s a convenient generic definition of the list of pairs:

template<class A>
using PairList = List<pair<A, State>>;

And this is our data structure that represents a non-deterministic plan of action:

template<class A> 
using StateList = function<PairList<A>(State)>;

Once we have such a non-deterministic plan of action, we can run it. We give it a state and get back a list of results, each paired with a new state:

template<class A>
PairList<A> runStateList(StateList<A> st, State s)
{
    return st(s);
}

Composing Non-Deterministic Plans

As we’ve seen before, the essence of every monad is the composition of individual actions. The higher-order function mbind that does the composition has the same structure for every monad. It takes two arguments. One argument is the result of our previous activity. For the list monad, it was a list of values. For the state monad it was a plan of action to produce a value (paired with the leftover state). For the combination state-list monad it will be a plan of action to produce a list of values (each combined with the leftover state).

The second argument to mbind is a continuation. It takes a single value. It’s supposed to be the value that is encapsulated in the first argument. In the case of the list monad, it was one value from the list. In the case of the state monad, it was the value that would be returned by the execution of the plan. In the combined state-list monad, this will be one of the values produced by the plan.

The continuation is supposed to produce the final result, be it a list of values or a plan. In the state-list monad it will be a plan to produce a list of values, each with its own leftover state.

Finally, the result of mbind is the composite: for the list monad it was a list, for the state monad it was a plan, and for the combination, it will be a plan to produce a list of values and states.

We could write the signature of mbind simply as:

template<class A, class B>
StateList<B> mbind(StateList<A>, function<StateList<B>(A)>)

but that would prevent the C++ compiler from making automatic type deductions. So instead we parameterize it with the type A and a function type F. That requires some additional footwork to extract the return type from the type of F:

template<class A, class F>
auto mbind(StateList<A> g, F k) 
    -> decltype(k(g(State()).front().first))

The implementation of mbind is a combination of what we did for the list monad and for the state monad. To begin with, we are supposed to return a plan, so we have to create a lambda. This lambda takes a state s as an argument. Inside the lambda, given the state, we can run the first argument — the plan. We get a list of pairs. Each pair contains a value and a new state. We want to execute the continuation k for each of these values.

To do that, we run fmap over this list (think std::transform, but less messy). Inside fmap, we disassemble each pair into value and state, and execute the continuation k over the value. The result is a new plan. We run this plan, passing it the new state. The result is a list of pairs: value, state.

Since each item in the list produces a list, the result of fmap is a list of lists. Just like we did in the list monad, we concatenate all these lists into one big list using concatAll.

template<class A, class F>
auto mbind(StateList<A> g, F k) 
    -> decltype(k(g(State()).front().first))
{
    return [g, k](State s) {
        PairList<A> plst = g(s);
        // List<PairList<B>> 
        auto lst2 = fmap([k](pair<A, State> const & p) {
            A a = p.first;
            State s1 = p.second;
            auto ka = k(a);
            auto result = runStateList(ka, s1);
            return result;
        }, plst);
        return concatAll(lst2);
    };
}

To make state-list into a full-blown monad we need one more ingredient: the function mreturn that turns any value into a trivial plan to produce a singleton list with that value. The state is just piped through:

template<class A>
StateList<A> mreturn(A a)
{
    return [a](State s) {
        // singleton list
        return PairList<A>(make_pair(a, s)); 
    };
}
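
For instance, running such a trivial plan pairs the value with whatever state we supply. A quick sketch using runStateList:

State s{ 1, 2, 3 };
PairList<int> result = runStateList(mreturn(42), s);
// result is a singleton list containing the pair (42, [1, 2, 3])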

So that would be all for the state-list monad, except that in C++ we have to special-case mbind for the type of continuation that takes a void argument (there is no value of type void in C++). We’ll call this version mthen:

template<class A, class F>
auto mthen(StateList<A> g, F k) -> decltype(k())
{
    return [g, k](State s) {
        PairList<A> plst = g(s);
        auto lst2 = fmap([k](pair<A, State> const & p) {
            State s1 = p.second;
            auto ka = k();
            auto result = runStateList(ka, s1);
            return result;
        }, plst);
        return concatAll(lst2);
    };
}

Notice that, even though the value resulting from running the first argument to mthen is discarded, we still have to run it because we need the modified state. In a sense, we are running it only “for side effects,” although all these functions are pure and have no real side effects. Once again, we can have our purity cake and eat it too, simulating side effects with pure functions.

Guards

Every time we generate a candidate substitution, we have to check if it satisfies our constraints. That would be easy if we could access the substitution. But we are in a rather peculiar situation: We are writing code that deals with plans to produce substitutions. How do you test a plan? Obviously we need to write a plan to do the testing (haven’t we heard that before?).

Here’s the trick: Look at the implementation of mthen (or mbind for that matter). When is the continuation not executed? The continuation is not executed if the list passed to fmap is empty. Not executing the continuation is equivalent to aborting the computation. And what does mthen return in that case? An empty list!

So the way to abort a computation is to pass an argument to mthen that produces an empty list. Such a poison pill of a plan is produced by a function traditionally called mzero.

template<class A>
StateList<A> mzero()
{
    return [](State s) {
        return PairList<A>(); // empty list
    };
}

In functional programming, a monad that supports this functionality is called a “monad plus” (it also has a function called mplus). It so happens that the list monad and the state-list monad are both plus.

Now we need a function that produces a poison pill conditionally, so we can pass the result of this function to mthen. Remember, mthen ignores the individual results, but is sensitive to the size of the list. The function guard, on success, produces a discardable value — here, a nullptr — in a singleton list. This will trigger the execution of the continuation in mthen, while discarding the nullptr. On failure, guard produces mzero, which will abort the computation.

StateList<void*> guard(bool b)
{
    if (b) {
        return [](State s) {
            // singleton list with discardable value
            return List<pair<void*, State>>(make_pair(nullptr, s));
        };
    }
    else
        return mzero<void*>(); // empty list
}
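
To see the pruning in action, here’s a sketch of a plan whose guard always fails; mthen never executes the continuation, and the whole computation collapses to an empty list:

StateList<int> plan = mthen(guard(false), []() {
    return mreturn(42); // never executed
});
PairList<int> result = runStateList(plan, State{ 1, 2, 3 });
// result is the empty list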

The Client Side

Everything I’ve described so far is reusable code that should be put in a library. Equipped with the state-list monad library, all the client needs to do is to write a few utility functions and combine them into a solution.

Here’s the function that picks a number from a list. Actually, it picks a number from a list in all possible ways. It returns a list of all picks paired with the remainders of the list. This function is our non-deterministic selection plan.

template<class A>
PairList<A> select(List<A> lst)
{
    if (lst.isEmpty())
        return PairList<A>();

    A       x  = lst.front();
    List<A> xs = lst.popped_front();

    auto result = List<pair<A, State>>();
    forEach(select(xs), [x, &result](pair<A, List<A>> const & p)
    {
        A       y  = p.first;
        List<A> ys = p.second;
        auto y_xys = make_pair(y, ys.pushed_front(x));
        result = result.pushed_front(y_xys);
    });

    return result.pushed_front(make_pair(x, xs));
}
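
For instance, selecting from the list [1, 2, 3] produces three universes, each pairing a pick with the remainder of the list (the order in which they appear is an implementation detail):

// select(List<int>{ 1, 2, 3 }) contains the pairs
//   (1, [2, 3]), (2, [1, 3]), (3, [1, 2])
forEach(select(List<int>{ 1, 2, 3 }),
    [](pair<int, State> const & p) {
        cout << p.first << endl; // prints each pick once
    });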

Here’s a little utility function that converts a list (actually, a vector) of digits to a number:

int asNumber(vector<int> const & v)
{
    int acc = 0;
    for (auto i : v)
        acc = 10 * acc + i;
    return acc;
}

And here’s the final solution to our puzzle:

StateList<tuple<int, int, int>> solve()
{
    StateList<int> sel = &select<int>;

    return mbind(sel, [=](int s) {
    return mbind(sel, [=](int e) {
    return mbind(sel, [=](int n) {
    return mbind(sel, [=](int d) {
    return mbind(sel, [=](int m) {
    return mbind(sel, [=](int o) {
    return mbind(sel, [=](int r) {
    return mbind(sel, [=](int y) {
        return mthen(guard(s != 0 && m != 0), [=]() {
            int send  = asNumber(vector<int>{s, e, n, d});
            int more  = asNumber(vector<int>{m, o, r, e});
            int money = asNumber(vector<int>{m, o, n, e, y});
            return mthen(guard(send + more == money), [=]() {
                return mreturn(make_tuple(send, more, money));
            });
        });
    });});});});});});});});
}

It starts by creating the basic plan called sel for picking numbers from a list. (The list is the state that will be provided later.) It binds this selection to a very large continuation that produces the final result — the plan to produce triples of numbers.

Here, sel is really a plan to produce a list, but the continuation expects just one number, s. If you look at the implementation of mbind, you’ll see that this continuation is called for every single pick, and the results are aggregated into one list.

The large continuation that is passed to the first mbind is itself implemented using mbind. This one, again, takes the plan sel. But we know that the state on which this sel will operate is one element shorter than the one before it. This second selection is passed to the next continuation, and so on.

Then the pruning begins: There is an mthen that takes a guard that aborts all computations in which either s or m is zero. Then three numbers are formed from the selected digits. Notice that all those digits are captured by value by the enclosing lambdas (the [=] clauses). Yet another mthen takes a guard that checks the arithmetic and, if it passes, the final plan to produce the singleton triple (send, more, money) is returned.

In theory, all these singletons would be concatenated on the way up to produce a list of solutions, but in reality this puzzle has only one solution. To get this solution, you run the plan with the list of digits:

List<int> lst{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
cout << evalStateList(solve(), lst);

The function evalStateList works the same way as runStateList except that it discards the leftover state. You can either implement it or use runStateList instead.
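
If you choose to implement it, here’s one possible sketch: run the plan, then use fmap to keep just the values, dropping the leftover state from each pair:

template<class A>
List<A> evalStateList(StateList<A> st, State s)
{
    return fmap([](pair<A, State> const & p) {
        return p.first; // discard the leftover state
    }, runStateList(st, s));
}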

What’s Next?

One of the advantages of functional programming (other than thread safety) is that it makes refactoring really easy. Did you notice that 8 lines of code in our solution are almost identical? That’s outrageous! We have to do something about it. And while we’re at it, isn’t it obvious that a substitution should be represented as a map from characters to digits? I’ll show you how to do it in the next installment.

All this code and more is available on github.


This is the second part of the series “Using Monads in C++ to Solve Constraints,” in which I’m illustrating functional techniques using the example of a simple puzzle:

  s e n d
+ m o r e
---------
m o n e y

Previously, I talked about using the list monad to search the breadth of the solution space.

What would you do if you won a lottery? Would you buy a sports car, drop your current job, and go on a trip across the United States? Maybe you would start your own company, multiply the money, and then buy a private jet?

We all like making plans, but they are often contingent on the state of our finances. Such plans can be described by functions. For instance, the car-buying plan is a function:

pair<Car, Cash> buyCar(Cash cashIn)

The input is some amount of Cash, and the output is a brand new Car and the leftover Cash (not necessarily a positive number!).

In general, a financial plan is a function that takes cash and returns the result paired with the new value of cash. It can be described generically using a template:

template<class A>
using Plan = function<pair<A, Cash>(Cash)>;

You can combine smaller plans to make bigger plans. For instance, you may use the leftover cash from your car purchase to finance your trip, or invest in a business, and so on.

There are some things that you already own, and you can trivially include them in your plans:

template<class A>
Plan<A> got_it(A a)
{
    return [a](Cash s) { return make_pair(a, s); };
}

What does all this daydreaming have to do with the solution to our puzzle? I mentioned previously that we needed to keep track of state, and this is how functional programmers deal with state. Instead of relying on side effects to silently modify the state, they write code that generates plans of action.

An imperative programmer, on the other hand, might implement the car-buying procedure by passing it a bank object, and the withdrawal of money would be a side effect of the purchase. Or, the horror!, the bank object could be global.

In functional programming, each individual plan is a function: The state comes in, and the new state goes out, paired with whatever value the function was supposed to produce in the first place. These little plans are aggregated into bigger plans. Finally, the master plan is executed — that’s when the actual state is passed in and the result comes out together with a new state. We can do the same in modern C++ using lambdas.

You might be familiar with a similar technique used with expression templates in C++. Expression templates form the basis of efficient implementation of matrix calculus, where expressions involving matrices and vectors are not evaluated on the spot but rather converted to parse trees. These trees may then be evaluated using more efficient techniques. You can think of an expression template as a plan for evaluating the final result.

The State Monad


To find the solution to our puzzle, we will generate substitutions by picking digits from a list of integers. This list of integers will be our state.

using State = List<int>;

We will be using a persistent list, so we won’t have to worry about backtracking. Persistent lists are never mutated — all their versions persist, and we can go back to them without fearing that they may have changed. We’ll need that when we combine our state calculations with the list monad to get the final solution. For now, we’ll just consider one substitution.

We’ll be making and combining all kinds of plans that rely on, and produce, new state:

template<class A>
using Plan = function<pair<A, State>(State)>;

We can always run a plan, provided we have the state available:

template<class A>
pair<A, State> runPlan(Plan<A> pl, State s)
{
    return pl(s);
}

You may recall from the previous post that the essence of every monad is the ability to compose smaller items into bigger ones. In the case of the state monad we need the ability to compose plans of action.

For instance, suppose that you know how to generate a plan to travel across the United States on a budget, as long as you have a car. You don’t have a car though. Not to worry, you have a plan to get a car. It should be easy to generate a composite plan that involves buying the car and then continuing with your trip.

Notice the two ingredients: one is the plan to buy a car: Plan<Car>. The second ingredient is a function that takes a car and produces the plan for your trip, Plan<Trip>. This function is the continuation that leads to your final goal: it’s a function of the type Plan<Trip>(Car). And the continuation itself may in turn be a composite of many smaller plans.

So here’s the function mbind that binds a plan pl to a continuation k. The continuation uses the output of the plan pl to generate another plan. The function mbind is supposed to return a new composite plan, so it must return a lambda. Like any other plan, this lambda takes a state and returns a pair: value, state. We’ll implement this lambda for the most general case.

The logic is simple. Inside the lambda we will have the state available, so we can run the plan pl. We get back a pair: the value of type A and the new state. We pass this value to the continuation k and get back a new plan. Finally, we run that plan with the new state and we’re done.

template<class A, class F>
auto mbind(Plan<A> pl, F k) -> decltype(k(pl(State()).first))
{
    using B = decltype(k(pl(State()).first)(State()).first);
    // this should really be done with concepts
    static_assert(std::is_convertible<
        F, std::function<Plan<B>(A)>> ::value,
        "mbind requires a function type Plan<B>(A)");

    return [pl, k](State s) {
        pair<A, State> ps = runPlan(pl, s);
        Plan<B> plB = k(ps.first);
        return runPlan(plB, ps.second); // new state!
    };
}

Notice that all this running of plans inside mbind doesn’t happen immediately. It happens when the lambda is executed, which is when the larger plan is run (possibly as part of an even larger plan). So what mbind does is to create a new plan to be executed in the future.
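
Going back to the road trip, the composite plan would be assembled like this (schematically; Car, Trip, carPlan, and tripFrom are placeholders for this sketch, not part of the actual code):

Plan<Trip> fullPlan = mbind(carPlan, [](Car car) {
    return tripFrom(car); // the continuation of type Plan<Trip>(Car)
});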

And, as with every monad, there is a function that takes regular input and turns it into a trivial plan. I called it got_it before, but a more common name would be mreturn:

template<class A>
Plan<A> mreturn(A a)
{
    return [a](State s) { return make_pair(a, s); };
}

The state monad usually comes with two helper functions. The function getState gives direct access to the state by turning it into the return value:

Plan<State> getState()
{
    return [](State s) { return make_pair(s, s); };
}

Using getState you may examine the state when the plan is running, and dynamically select different branches of your plan. This makes monads very flexible, but it also makes the composition of different monads harder. We’ll see this in the next installment, when we compose the state monad with the list monad.
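
For example, here’s a sketch of a plan that inspects the state and branches on it, producing the first element of the list, or zero if the list is empty:

Plan<int> firstOrZero =
    mbind(getState(), [](State s) {
        return mreturn(s.isEmpty() ? 0 : s.front());
    });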

The second helper function is used to modify (completely replace) the state.

Plan<void*> putState(State newState)
{
    return [=](State s) { return make_pair(nullptr, newState); };
}

It doesn’t evaluate anything useful, so its return type is void* and the returned value is nullptr. Its only purpose is to encapsulate a side effect. What’s amazing is that you can do it and still preserve functional purity. There’s no better example of having your cake and eating it too.
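
Combining the two helpers, here’s a sketch of a plan that replaces the state with its own tail. Notice that it doesn’t touch any actual data; it merely describes the modification:

Plan<void*> dropFirst =
    mbind(getState(), [](State s) {
        return putState(s.popped_front());
    });

A similar combination of getState and putState is all you need for the first challenge at the end of this post.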

Example

To demonstrate the state monad in action, let’s implement a tiny example. We start with a small plan that just picks the first number from a list (the list will be our state):

pair<int, List<int>> select(List<int> lst)
{
    int i = lst.front();
    return make_pair(i, lst.popped_front());
}

The persistent list method popped_front returns a list with its first element popped off. As is typical of persistent data structures, this method doesn’t modify the original list. It doesn’t copy the list either — it just creates a pointer to its tail.

Here’s our first plan:

Plan<int> sel = &select;

Now we can create a more complex plan to produce a pair of integers:

Plan<pair<int, int>> pl = 
    mbind(sel, [=](int i) { return
    mbind(sel, [=](int j) { return
    mreturn(make_pair(i, j));
}); });

Let’s analyze this code. The first mbind takes a plan sel that selects the first element from a list (the list will be provided later, when we execute this plan). It binds it to a continuation that takes the selected integer i and produces a plan to make a pair of integers. Here’s this continuation:

[=](int i) { return
    mbind(sel, [=](int j) { return
    mreturn(make_pair(i, j));
}); }

It binds the plan sel to another, smaller continuation that takes the selected element j and produces a plan to make a pair of integers. Here’s this smaller continuation:

[=](int j) { return
    mreturn(make_pair(i, j));
}

It combines the first integer i that was captured by the lambda, with the second integer j that is the argument to the lambda, and creates a trivial plan that returns a pair:

mreturn(make_pair(i, j));

Notice that we are using the same plan sel twice. But when this plan is executed inside of our final plan, it will return two different elements from the input list. When mbind is run, it first passes the state (the list of integers) to the first sel. It gets back the modified state — the list with the first element popped. It then uses this shorter list to execute the plan produced by the continuation. So the second sel gets the shortened list, and picks its first element, which is the second element of the original list. It, too, shortens the list, which is then passed to mreturn, which doesn’t modify it any more.

We can now run the final plan by giving it a list to pick from:

List<int> st{ 1, 2, 3 };
cout << runPlan(pl, st) << endl;
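
Assuming the stream-output overloads for pairs and lists from the accompanying code, this should print the pair (1, 2) together with the leftover state [3]: the two sels consumed the first two elements of the list.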

We are still not ready to solve the puzzle, but we are much closer. All we need is to combine the list monad with the state monad. I’m leaving this task for the next installment.

But here’s another look at the final solution:

StateL<tuple<int, int, int>> solve()
{
    StateL<int> sel = &select<int>;

    return mbind(sel, [=](int s) {
    return mbind(sel, [=](int e) {
    return mbind(sel, [=](int n) {
    return mbind(sel, [=](int d) {
    return mbind(sel, [=](int m) {
    return mbind(sel, [=](int o) {
    return mbind(sel, [=](int r) {
    return mbind(sel, [=](int y) {
        return mthen(guard(s != 0 && m != 0), [=]() {
            int send  = asNumber(vector<int>{s, e, n, d});
            int more  = asNumber(vector<int>{m, o, r, e});
            int money = asNumber(vector<int>{m, o, n, e, y});
            return mthen(guard(send + more == money), [=]() {
                return mreturn(make_tuple(send, more, money));
            });
        });
    });});});});});});});});
}

This time I didn’t rename mbind to for_each and mreturn to yield.

Now that you’re familiar with the state monad, you may appreciate the fact that the same sel passed as the argument to mbind will yield a different number every time, as long as the state list contains unique digits.

Before You Post a Comment

I know what you’re thinking: Why would I want to complicate my life with monads if there is a much simpler, imperative way of dealing with state? What’s the payoff of the functional approach? The immediate payoff is thread safety. In imperative programming, mutable shared state is a never-ending source of concurrency bugs. The state monad and the use of persistent data structures eliminate the possibility of data races. And they do it without any need for synchronization (other than the reference counting inside shared pointers).

I will freely admit that C++ is not the best language for functional programming, and that monadic code in C++ looks overly complicated. But, let’s face it, in C++ everything looks overly complicated. What I’m describing here are implementation details of something that should be encapsulated in an easy-to-use library. I’ll come back to it in the next installment of this mini-series.

Challenges

  1. Implement select from the example in the text using getState and putState.
  2. Implement evalPlan, a version of runPlan that only returns the final value, without the state.
  3. Implement mthen — a version of mbind where the continuation doesn’t take any arguments. It ignores the result of the Plan, which is the first argument to mthen (but it still runs it, and uses the modified state).
  4. Use the state monad to build a simple RPN calculator. The state is the stack (a List) of items:
    enum ItemType { Plus, Minus, Num };
    
    struct Item
    {
        ItemType _type;
        int _num;
        Item(int i) : _type(Num), _num(i) {}
        Item(ItemType t) : _type(t), _num(-1) {}
    };

    Implement the function calc() that creates a plan to evaluate an integer. Here’s the test code that should print -1:

    List<Item> stack{ Item(Plus)
                   , Item(Minus)
                   , Item(4)
                   , Item(8)
                   , Item(3) };
    cout << evalPlan(calc(), stack) << endl;

    (The solution is available on github.)


I am sometimes asked by C++ programmers to give an example of a problem that can’t be solved without monads. This is the wrong kind of question — it’s like asking if there is a problem that can’t be solved without for loops. Obviously, if your language supports a goto, you can live without for loops. What monads (and for loops) can do for you is to help you structure your code. The use of loops and if statements lets you convert spaghetti code into structured code. Similarly, the use of monads lets you convert imperative code into declarative code. These are the kind of transformations that make code easier to write, understand, maintain, and generalize.

So here’s a problem that you may get as an interview question. It’s a small problem, so the advantages of various approaches might not be immediately obvious, especially if you’ve been trained all your life in imperative programming, and you are seeing monads for the first time.

You’re supposed to write a program to solve this puzzle:

  s e n d
+ m o r e
---------
m o n e y

Each letter corresponds to a different digit between 0 and 9. Before you continue reading this post, try to think about how you would approach this problem.

The Analysis

It never hurts to impress your interviewer with your general knowledge by correctly classifying the problem. This one belongs to the class of “constraint satisfaction problems.” The obvious constraint is that the numbers obtained by substituting letters with digits have to add up correctly. There are also some less obvious constraints, namely the numbers should not start with zero.

If you were to solve this problem using pencil and paper, you would probably come up with lots of heuristics. For instance, you would deduce that m must stand for 1 because that’s the largest possible carry from the addition of two digits (even if there is a carry from the previous column). Then you’d figure out that s must be either 8 or 9 to produce this carry, and so on. Given enough time, you could probably write an expert system with a large set of rules that could solve this and similar problems. (Mentioning an expert system could earn you extra points with the interviewer.)

However, the small size of the problem suggests that a simple brute-force approach is probably best. The interviewer might ask you to estimate the number of possible substitutions, which is 10!/(10 – 8)! = 1,814,400, or roughly 2 million. That’s not a lot. So, really, the solution boils down to generating all those substitutions and testing the constraints for each.

The Straightforward Solution

The mind of an imperative programmer immediately sees the solution as a set of 8 nested loops (there are 8 unique letters in the problem: s, e, n, d, m, o, r, y). Something like this:

for (int s = 0; s < 10; ++s)
    for (int e = 0; e < 10; ++e)
        for (int n = 0; n < 10; ++n)
            for (int d = 0; d < 10; ++d)
                ...

and so on, until y. But then there is the condition that the digits have to be different, so you have to insert a bunch of tests like:

e != s
n != s && n != e
d != s && d != e && d != n

and so on, the last one involving 7 inequalities… Effectively you have replaced the uniqueness condition with 28 new constraints.

This would probably get you through the interview at Microsoft, Google, or Facebook, but really, can’t you do better than that?

The Smart Solution

Before I proceed, I should mention that what follows is almost a direct translation of a Haskell program from the blog post by Justin Le. I strongly encourage everybody to learn some Haskell, but in the meanwhile I’ll be happy to serve as your translator.

The problem with our naive solution is the 28 additional constraints. Well, I guess one could live with that — except that this is just a tiny example of a whole range of constraint satisfaction problems, and it makes sense to figure out a more general approach.

The problem can actually be formulated as a superposition of two separate concerns. One deals with the depth and the other with the breadth of the search for solutions.

Let me touch on the depth issue first. Let’s consider the problem of creating just one substitution of letters with numbers. This could be described as picking 8 digits from a list of 0, 1, …9, one at a time. Once a digit is picked, it’s no longer in the list. We don’t want to hard code the list, so we’ll make it a parameter to our algorithm. Notice that this approach works even if the list contains duplicates, or if the list elements are not easily comparable for equality (for instance, if they are futures). We’ll discuss the list-picking part of the problem in more detail later.

Now let’s talk about breadth: we have to repeat the above process for all possible picks. This is what the 8 nested loops were doing. Except that now we are in trouble because each individual pick is destructive. It removes items from the list — it mutates the list. This is a well-known problem when searching through solution spaces, and the standard remedy is called backtracking. Once you have processed a particular candidate, you put the elements back in the list and try the next one. Which means that you have to keep track of your state, either implicitly on your function’s stack, or in a separate explicit data structure.

Wait a moment! Weren’t we supposed to talk about functional programming? So what’s all this talk about mutation and state? Well, who said you can’t have state in functional programming? Functional programmers have been using the state monad since time immemorial. And mutation is not an issue if you’re using persistent data structures. So fasten your seat belts and make sure your folding trays are in the upright position.

The List Monad

We’ll start with a small refresher in quantum mechanics. As you may remember from school, quantum processes are non-deterministic. You may repeat the same experiment many times and every time get a different result. There is a very interesting view of quantum mechanics called the many-worlds interpretation, in which every experiment gives rise to multiple alternate histories. So if the spin of an electron may be measured as either up or down, there will be one universe in which it’s up, and one in which it’s down.


We’ll use the same approach to solving our puzzle. We’ll create an alternate universe for each digit substitution for a given letter. So we’ll start with 10 universes for the letter s; then we’ll split each of them into ten universes for the letter e, and so on. Of course, most of these universes won’t yield the desired result, so we’ll have to destroy them. I know, it seems kind of wasteful, but in functional programming it’s easy come, easy go. The creation of a new universe is relatively cheap. That’s because new universes are not that different from their parent universes, and they can share almost all of their data. That’s the idea behind persistent data structures. These are the immutable data structures that are “mutated” by cloning. A cloned data structure shares most of its implementation with the parent, except for a small delta. We’ll be using persistent lists described in my earlier post.

Once you internalize the many-worlds approach to programming, the implementation is pretty straightforward. First, we need functions that generate new worlds. Since we are cheap, we’ll only generate the parts that are different. So what’s the difference between all the worlds that we get when selecting the substitution for the letter s? Just the number that we assign to s. There are ten worlds corresponding to the ten possible digits (we’ll deal with the constraints like s being different from zero later). So all we need is a function that generates a list of ten digits. These are our ten universes in a nutshell. They share everything else.

Once you are in an alternate universe, you have to continue with your life. In functional programming, the rest of your life is just a function called a continuation. I know it sounds like a horrible simplification. All your actions, emotions, and hopes reduced to just one function. Well, maybe the continuation just describes one aspect of your life, the computational part, and you can still hold on to your emotions.

So what do our lives look like, and what do they produce? The input is the universe we’re in, in particular the one number that was picked for us. But since we live in a quantum universe, the outcome is a multitude of universes. So a continuation takes a number, and produces a list. It doesn’t have to be a list of numbers, just a list of whatever characterizes the differences between alternate universes. In particular, it could be a list of different solutions to our puzzle — triples of numbers corresponding to “send”, “more”, and “money”. (There is actually only one solution, but that’s beside the point.)

And what’s the very essence of this new approach? It’s the binding of the selection of the universes to the continuation. That’s where the action is. This binding, again, can be expressed as a function. It’s a function that takes a list of universes and a continuation that produces a list of universes. It returns an even bigger list of universes. We’ll call this function for_each, and we’ll make it as generic as possible. We won’t assume anything about the type of the universes that are passed in, or the type of the universes that the continuation k produces. We’ll also make the type of the continuation a template parameter and extract the return type from it using auto and decltype:

template<class A, class F>
auto for_each(List<A> lst, F k) -> decltype(k(lst.front()))
{
    using B = decltype(k(lst.front()).front());
    // This should really be expressed using concepts
    static_assert(std::is_convertible<
        F, std::function<List<B>(A)>>::value,
        "for_each requires a function type List<B>(A)");

    List<List<B>> lstLst = fmap(k, lst);
    return concatAll(lstLst);
}

The function fmap is similar to std::transform. It applies the continuation k to every element of the list lst. Because k itself produces a list, the result is a list of lists, lstLst. The function concatAll concatenates all those lists into one big list.

Congratulations! You have just seen a monad. This one is called the list monad and it’s used to model non-deterministic processes. The monad is actually defined by two functions. One of them is for_each, and here’s the other one:

template<class A>
List<A> yield(A a)
{
    return List<A>(a);
}

It’s a function that returns a singleton list. We use yield when we are done multiplying universes and we just want to return a single value. We use it to create a single-valued continuation. It represents the lonesome boring life, devoid of any choices.

I will later rename these functions to mbind and mreturn, because they are part of any monad, not just the list monad.

The names like for_each or yield have a very imperative ring to them. That’s because, in functional programming, monadic code plays a role similar to imperative code. But neither for_each nor yield are control structures — they are functions. In particular for_each, which sounds and works like a loop, is just a higher-order function; and so is fmap, which is used in its implementation. Of course, at some level the code becomes imperative — fmap can either be implemented recursively or using an actual loop — but the top levels are just declarations of functions. Hence, declarative programming.

There is a slight difference between a loop and a function on lists like for_each: for_each takes a whole list as an argument, while a loop might generate individual items — in this case integers — on the fly. This is not a problem in a lazy functional language like Haskell, where a list is evaluated on demand. The same behavior may be implemented in C++ using streams or lazy ranges. I won’t use it here, since the lists we are dealing with are short, but you can read more about it in my earlier post Getting Lazy with C++.

We are not ready yet to implement the solution to our puzzle, but I’d like to give you a glimpse of what it looks like. For now, think of StateL as just a list. See if it starts making sense (try to look past the usual C++ noise):

StateL<tuple<int, int, int>> solve()
{
    StateL<int> sel = &select<int>;

    return for_each(sel, [=](int s) {
    return for_each(sel, [=](int e) {
    return for_each(sel, [=](int n) {
    return for_each(sel, [=](int d) {
    return for_each(sel, [=](int m) {
    return for_each(sel, [=](int o) {
    return for_each(sel, [=](int r) {
    return for_each(sel, [=](int y) {
        return yield_if(s != 0 && m != 0, [=]() {
            int send  = asNumber(vector<int>{s, e, n, d});
            int more  = asNumber(vector<int>{m, o, r, e});
            int money = asNumber(vector<int>{m, o, n, e, y});
            return yield_if(send + more == money, [=]() {
                return yield(make_tuple(send, more, money));
            });
        });
    });});});});});});});});
}

The first for_each takes a selection of integers, sel, (never mind how we deal with uniqueness); and a continuation, a lambda, that takes one integer, s, and produces a list of solutions — tuples of three integers. This continuation, in turn, calls for_each with a selection for the next letter, e, and another continuation that returns a list of solutions, and so on.

The innermost continuation is a conditional version of yield called yield_if. It checks a condition and produces a zero- or one-element list of solutions. Internally, it calls another yield_if, which calls the ultimate yield. If that final yield is called (and it might not be, if one of the previous conditions fails), it will produce a solution — a triple of numbers. If there is more than one solution, these singleton lists will get concatenated inside for_each while percolating to the top.
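
The body of yield_if isn’t shown in this post, but the behavior just described pins it down. Here’s a minimal sketch of what it could look like (the version in the accompanying code may differ):

template<class F>
auto yield_if(bool condition, F k) -> decltype(k())
{
    // On success, run the continuation; on failure, return an
    // empty list, destroying this universe.
    return condition ? k() : decltype(k())();
}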

In the second part of this post I will come back to the problem of picking unique numbers and introduce the state monad. You can also have a peek at the code on github.

Challenges

  1. Implement for_each and yield for a vector instead of a List. Use the Standard Library transform instead of fmap.
  2. Using the list monad (or your vector monad), write a function that generates all positions on a chessboard as pairs of characters between 'a' and 'h' and numbers between 1 and 8.
  3. Implement a version of for_each (call it repeat) that takes a continuation k of the type function<List<B>()> (notice the void argument). The function repeat calls k for each element of the list lst, but it ignores the element itself.