We have this friendly competition going on between Eric Niebler and myself. He writes some clever C++ template code, and I feel the compulsion to explain it to him in functional terms. Then I write a blog about Haskell or category theory and Eric feels a compulsion to translate it into C++.

Eric is now working on his proposal to rewrite the C++ STL in terms of ranges and I keep reinterpreting his work in terms familiar to functional programmers. Eric’s range comprehensions are a result of some of this back and forth.

• A monad is-an applicative functor
• An applicative functor is-a pointed functor
• A pointed functor is-a functor

## Functor

I have a pet peeve about the use of the word “functor” in C++. People keep calling function objects functors. It’s like calling Luciano Pavarotti an “operator,” because he sings operas. The word functor has a very precise meaning in mathematics — moreover, it’s the branch of mathematics that’s extremely relevant to programming. So hijacking this term to mean a function-like object causes unnecessary confusion.

A functor in functional programming is a generic template, which allows the “lifting” of functions. Let me explain what it means. A generic template takes an arbitrary type as a template argument. So a range (whether lazy or eager) is a generic template because it can be instantiated for any type. You can have a range of integers, a range of vectors, a range of ranges, and so on. (We’ll come back to ranges of ranges later when we talk about monads.)

The “lifting” of functions means this: Give me any function from some type T to some other type U and I can apply this function to a range of T and produce a range of U. You may recognize this kind of lifting in the STL algorithm `std::transform`, which can be used to apply a function to a container. STL containers are indeed functors. Unfortunately, their functorial nature is buried under the noise of iterators. In Eric’s range library, the lifting is done much more cleanly using `view::transform`. Have a look at this example:

```cpp
int total = accumulate(view::iota(1) |
                       view::transform([](int x){return x*x;}) |
                       view::take(10), 0);
```

Here, `view::transform` takes an anonymous function that squares its argument, and lifts this function to the level of ranges. The range created by `view::iota(1)` is piped into it from the left, and the resulting range of squares emerges from it on the right. The (infinite) range is then truncated by `take`‘ing the first 10 elements.

The function `view::iota(1)` is a factory that produces an infinite range of consecutive integers starting from 1. (We’ll come back to range factories later.)

In this form, `view::transform` plays the role of a higher-order function: one that takes a function and returns a function. It almost reaches the level of terseness and elegance of Haskell, where this example would look something like this:

`total = sum $ take 10 $ fmap (\x->x*x) [1..]`

(Traditionally, the flow of data in Haskell is from right to left.) The (higher-order) function `fmap` can be thought of as a “method” of the class `Functor` that does the lifting in Haskell. In C++ there is no overall functor abstraction, so each functor names its lifting function differently — for ranges, it’s `view::transform`.

The intuition behind a functor is that it generates a family of objects that somehow encapsulate values of arbitrary types. This encapsulation can be very concrete or very abstract. For instance, a container simply contains values of a given type. A range provides access to values that may be stored in some other container. A lazy range generates values on demand. A `future`, which is also a functor (or will be, in C++17), describes a value that might not be currently available because it’s being evaluated in a separate thread.

All these objects have one thing in common: they provide means to manipulate the encapsulated values with functions. That’s the only requirement for a functor. It’s not required that a functor provide access to encapsulated values (which may not even exist), although most do. In fact, there is a functor in Haskell (really, a monad) that provides no way of accessing its values other than outputting them to external devices.

## Pointed Functor

A pointed functor is a functor with one additional ability: it lets you lift individual values. Give me a value of any type and I will encapsulate it. In Haskell, the encapsulating function is called `pure` although, as we will see later, in the context of a monad it’s called `return`.

All containers are pointed, because you can always create a singleton container — one that contains only one value. Ranges are more interesting. You can obviously create a range from a singleton container. But you can also create a lazy range from a value using a (generic) function called `view::single`, which doesn’t have a backing container behind it.

There is, however, an alternative way of lifting a value to a range, and that is by repeating it indefinitely. The function that creates such infinite (lazy) ranges is called `view::repeat`. For instance, `view::repeat(1)` creates an infinite series of ones. You might be wondering what use could there be of such a repetitive range. Not much, unless you combine it with other ranges. In general, pointed functors are not very interesting other than as stepping stones towards applicative functors and monads. So let’s move on.

## Applicative Functor

An applicative functor is a pointed functor with one additional ability: it lets you lift multi-argument functions. We already know how to lift a single-argument function using `fmap` (or `transform`, or whatever it’s called for a particular functor).

With multi-argument functions acting on ranges we have two different options corresponding to the two choices for `pure` I mentioned before: `view::single` and `view::repeat`.

The idea, taken from functional languages, is to consider what happens when you provide the first argument to a function of multiple arguments (it’s called partial application). You don’t get back a value. Instead you get something that expects one or more remaining arguments. A thing that expects arguments is called a function (or a function object), so you get back a function of one fewer arguments. In C++ you can’t just call a function with fewer arguments than expected, or you get a compilation error, but there is a (higher-order) function in the Standard Library called `std::bind` that implements partial application.

This kind of transformation from a function of multiple arguments to a function of one argument that returns a function is called currying.

Let’s consider a simple example. We want to apply `std::make_pair` to two ranges: `view::ints(10, 11)` and `view::ints(1, 3)`. To this end, let’s replace `std::make_pair` with the corresponding curried function of one argument returning a function of one argument:

`[](int i) { return [i](int j) { return std::make_pair(i, j); };}`

First, we want to apply this function to the first range. We know how to apply a function to a range: we use `view::transform`.

```cpp
auto partial_app = view::ints(10, 11)
                 | view::transform([](int i) {
                       return [i](int j) { return std::make_pair(i, j); };
                   });
```

What’s the result of this application? Can you guess? Our curried function will be applied to each integer in the range, returning a function that pairs that integer with its argument. So we end up with a range of functions of the form:

`[i](int j) { return std::make_pair(i, j); }`

So far so good — we have just used the functorial property of the range. But now we have to decide how to apply a range of functions to the second range of values. And that’s the essence of the definition of an applicative functor. In Haskell the operation of applying encapsulated functions to encapsulated arguments is denoted by an infix operator `<*>`.

With ranges, there are two basic strategies:

1. We can enumerate all possible combinations — in other words create the cartesian product of the range of functions with the range of values — or
2. Match corresponding functions to corresponding values — in other words, “zip” the two ranges.

The former, when applied to `view::ints(1, 3)`, will yield:

`{(10,1),(10,2),(10,3),(11,1),(11,2),(11,3)}`

and the latter will yield:

`{(10, 1),(11, 2)}`

(When the ranges are not of equal length, zipping stops as soon as the shorter one is exhausted.)

To see that those two choices correspond to the two choices for `pure`, we have to look at some consistency conditions. One of them is that if you encapsulate a single-argument function in a range using `pure` and then apply it to a range of arguments, you should get the same result as if you simply `fmap`ped this function directly over the range of arguments. For the record, I’ll write it here in Haskell:

`pure f <*> xs == fmap f xs`

This is sort of an obvious requirement: You have two different ways of applying a single-argument function to a range, they better give the same result.

Let’s try it with the `view::single` version of `pure`. When acting on a function, it will create a one-element range containing this function. The “all possible combinations” application will just apply this function to all elements of the argument range, which is exactly what `view::transform` would do.

Conversely, if we apply `view::repeat` to the function, we’ll get an infinite range that repeats this function at every position. We have to zip this range with the range of arguments in order to get the same result as `view::transform`. So this implementation of `pure` works with the zippy applicative. Notice that if the argument range is finite the result will also be finite. But this kind of application will also work with infinite ranges thanks to laziness.

So there are two legitimate implementations of the applicative functor for ranges. One uses `view::single` to lift values and uses the all possible combinations strategy to apply a range of functions to a range of arguments. The other uses `view::repeat` to lift values and the zipping application for ranges of functions and arguments. They are both acceptable and have their uses.

Now let’s go back to our original problem of applying a function of multiple arguments to a bunch of ranges. Since we are not doing it in Haskell, currying is not really a practical option.

As it turns out, the second version of applicative has been implemented by Eric as a (higher-order) function `view::zip_with`. This function takes a multi-argument callable object as its first argument, followed by a variadic list of ranges.

There is no corresponding implementation for the combinatorial applicative. I think the most natural interface would be an overload of `view::transform` (or maybe `view::fmap`) with the same signature as `zip_with`. Our example would then look like this:

`view::transform(std::make_pair, view::ints(10, 11), view::ints(1, 3));`

The need for this kind of interface is not as acute because, as we’ll learn next, the combinatorial applicative is supplanted by a more general monadic interface.

## Monad

Monads are applicative functors with one additional ability. There are two equivalent ways of describing it, but let me first explain why this functionality is needed.

The range library comes with a bunch of range factories, such as `view::iota`, `view::ints`, or `view::repeat`. It’s also very likely that users of the library will want to create their own range factories. The problem is: How do you compose existing range factories to obtain new range factories?

Let me give you an example that generated a blog post exchange between me and Eric. The challenge was to generate a list of Pythagorean triples. The idea is to take a cross product of three infinite ranges of integers and select those triples that satisfy the equation x² + y² = z². The cross product of ranges is reminiscent of the “all possible combinations” applicative, and indeed that’s the applicative that can be extended to a monad (the zippy one can’t).

To make this algorithm feasible, we have to organize these ranges so we can (lazily) traverse them. Let’s start with a factory that produces all integers from 1 to infinity. That’s the `view::ints(1)` factory. Then, for each `z` produced by that factory, let’s create another factory `view::ints(1, z)`. This range will provide our `x`s — and it makes no sense to try `x`s that are bigger than `z`s. These values, in turn, will be used in the creation of the third factory, `view::ints(x, z)` that will generate our `y`s. At the end we’ll filter out the triples that don’t satisfy the Pythagorean equation.

Notice how we are feeding the output of one range factory into another range factory. Do you see the problem? We can’t just iterate over an infinite range. We need a way to glue the output side of one range factory to the input side of another range factory without unpacking the range. And that’s what monads are for.

Remember also that there are functors that provide no way of extracting values, or for which extraction is expensive or blocking (as is the case with futures). Being able to compose those kinds of functor factories is often essential, and again, the monad is the answer.

Now let’s pinpoint the type of functionality that would allow us to glue range factories end-to-end. Since ranges are functorial, we can use `view::transform` to apply a factory to a range. After all a factory is just a function. The only problem is that the result of such application is a range of ranges. So, really, all that’s needed is a lazy way of flattening nested ranges. And that’s exactly what Eric’s `view::flatten` does.

With this new flattening power at our disposal, here’s a possible beginning of the solution to the Pythagorean triple problem:

```cpp
view::ints(1) | view::transform([](int z) {
                    return view::ints(1, z) | ...;
                }) | view::flatten
```

However, this combination of `view::transform` and `view::flatten` is so useful that it deserves its own function. In Haskell, this function is called “bind” and is written as an infix operator `>>=`. (And, while we’re at it, `flatten` is called `join`.)

And guess what the combination of `view::transform` and `view::flatten` is called in the range library. This discovery struck me as I was looking at one of Eric’s examples. It’s called `view::for_each`. Here’s the solution to the Pythagorean triple problem using `view::for_each` to bind range factories:

```cpp
auto triples =
  for_each(ints(1), [](int z) {
    return for_each(ints(1, z), [=](int x) {
      return for_each(ints(x, z), [=](int y) {
        return yield_if(x*x + y*y == z*z, std::make_tuple(x, y, z));
      });
    });
  });
```

And here’s the same code in Haskell:

```haskell
triples =
  (>>=) [1..] $ \z ->
  (>>=) [1..z] $ \x ->
  (>>=) [x..z] $ \y ->
  guard (x^2 + y^2 == z^2) >> return (x, y, z)
```

I have purposely reformatted the Haskell code to match the C++ (a more realistic rendition of it is in my post Getting Lazy with C++). The bind operators `>>=` are normally used in infix position, but here I wanted to match them against `for_each`. Haskell’s `return` is the same as `view::single`, which Eric renamed to `yield` inside `for_each`. In this particular case, `yield` is conditional, which in Haskell is expressed using `guard`. The syntax for lambdas is also different, but otherwise the code matches almost perfectly.

This is an amazing and somewhat unexpected convergence. In our Twitter exchange, Eric sheepishly called his `for_each` code imperative. We are used to thinking of `for_each` as synonymous with looping, which is such an iconic imperative construct. But here, `for_each` is the monadic bind — the epitome of functional programming. This puppy is purely functional. It’s an expression that returns a value (a range) and has no side effects.

But what about those loops that do have side effects and don’t return a value? In Haskell, side effects are always encapsulated using monads. The equivalent of a `for_each` loop with side effects would return a monadic object; what we consider side effects would be encapsulated in that object. It’s not the loop that performs side effects, it’s that object. It’s an executable object. In the simplest case, this object contains a function that may be called with the state that is to be modified. For side effects that involve the external world, there is a special monad called the IO monad. You can produce IO objects, you can compose them using monadic bind, but you can’t execute them. Instead you return one such object that combines all the IO of your program from `main` and let the runtime execute it. (At least that’s the theory.)

Is this in any way relevant to an imperative programmer? After all, in C++ you can perform side effects anywhere in your code. The problem is that there are some parts of your code where side effects can kill you. In concurrent programs uncontrolled side effects lead to data races. In Software Transactional Memory (STM, which at some point may become part of C++) side effects may be re-run multiple times when a transaction is retried. There is an urgent need to control side effects and to separate pure functions from their impure brethren. Encapsulating side effects inside monads could be the ticket to extend the usefulness of pure functions inside an imperative language like C++.

To summarize: A monad is an applicative functor with an additional ability, which can be expressed either as a way of flattening a doubly encapsulated object, or as a way of applying a functor factory to an encapsulated object.

In the range library, the first method is implemented through `view::flatten`, and the second through `view::for_each`. Being an applicative functor means that a range can be manipulated using `view::transform` and that any value may be encapsulated using `view::single` or, inside `for_each`, using `yield`.

The ability to apply a range of functions to a range of arguments that is characteristic of an applicative functor falls out of the monadic functionality. For instance, the example from the previous section can be rewritten as:

```cpp
for_each(ints(10, 11), [](int i) {
  return for_each(ints(1, 3), [i](int j) {
    return yield(std::make_pair(i, j));
  });
});
```

## The Mess We’re In

I don’t think the ideas I presented here are particularly difficult. What might be confusing, though, is the many names that are used to describe the same thing. There is a tendency in imperative (and some functional) languages to come up with cute names for identical patterns specialized to different applications. It is also believed that programmers would be scared by terms taken from mathematics. Personally, I think that’s silly. A monad by any other name would smell as sweet, but we wouldn’t be able to communicate about it as easily. Here’s a sampling of various names used in relation to the concepts I talked about:

1. Functor: `fmap`, `transform`, `Select` (LINQ)
2. Pointed functor: `pure`, `return`, `single`, `repeat`, `make_ready_future`, `yield`, `await`
3. Applicative functor: `<*>`, `zip_with`
4. Monad: `>>=`, `bind`, `mbind`, `for_each`, `next`, `then`, `SelectMany` (LINQ)

Part of the problem is the lack of expressive power in C++ to unite such diverse phenomena as ranges and futures. Unfortunately, the absence of unifying ideas adds to the already overwhelming complexity of the language and its libraries. The functional paradigm could be a force capable of building connections between seemingly distant application areas.

## Acknowledgments

I’m grateful to Eric Niebler for reviewing the draft of this blog and correcting several mistakes. The remaining mistakes are all mine.