If you want to understand C++ template metaprogramming (TMP) you have to know functional programming. Seriously. I want you to think of TMP as maximally obfuscated (subset of) Haskell, and I’ll illustrate this point by point. If you don’t know Haskell, don’t worry, I’ll explain the syntax as I go.

The nice thing about single-paradigm languages like Haskell is that they have very simple syntax (think of Lisp, which does everything with just a bunch of parentheses). I will start the Haskell-to-C++ TMP mapping with basics, like functions and recursion, but I’ll try to cover a lot more, including higher-order functions, pattern matching, list comprehension (did you know it was expressible in C++?), and more.

Keep in mind that my Haskell examples are runtime functions operating on runtime data, whereas their C++ TMP equivalents are compile-time templates operating mostly on types. Operations on types are essential in providing correct and efficient implementations of parameterized classes and functions.

By necessity the examples are simple, but the same mapping may be applied to much more complex templates from the C++ Standard Library and Boost. As a bonus, I’ll also explain the hot new thing, variadic templates.

Functional Approach to Functions

How do you implement useful functions if you don’t have mutable variables, if statements, or loops? To a C++ programmer that might seem like an impossible task. But that’s the reality of the C++ compile-time language that forms the basis of TMP. Functional programming to the rescue!

As a warm-up, let’s see how Haskell implements a simple function, the factorial:

fact 0 = 1
fact n = n * fact (n - 1)

The first line states that the factorial of zero is one. The second line defines factorial for a non-zero argument, n (strictly speaking it only works for positive non-zero arguments). It does it using recursion: factorial of n is equal to n times the factorial of n-1. The recursion stops when n is equal to zero, in which case the first definition kicks in. Notice that the definition of the function fact is split into two sub-definitions. So when you call:

fact 4

the first definition is looked up first and, if it doesn’t match the argument (which it doesn’t), the second one comes into play. This is the simplest case of pattern matching: 4 doesn’t match the “pattern” 0, but it matches the pattern n.

Here’s almost exactly the same code expressed in C++ TMP:

template<int n> struct
fact {
    static const int value = n * fact<n - 1>::value;
};

template<> struct
fact<0> { // specialization for n = 0
    static const int value = 1;
};

You might notice how the horrible syntax of C++ TMP obscures the simplicity and elegance of this code. But once you are equipped with the C++/Haskell decoder ring, things become a lot clearer.

Let’s analyze this code. Just like in Haskell, there are two definitions of fact, except that their order is inverted. This is because C++ requires template specialization to follow the template’s general definition (or declaration, as we’ll see later). The pattern matching of arguments in C++ does not follow the order of declarations but rather is based on “best match”. If you instantiate the template with argument zero:

cout << "Factorial of 0 = " << fact<0>::value << endl;

the second pattern, fact<0>, is a better fit. Otherwise the first one, <int n>, is used.

Notice also the weird syntax for the “function call”

fact<n - 1>::value

and for the “return statement”

static const int value = n * fact<n - 1>::value;

This all makes sense if you look at templates as definitions of parameterized types, which was their initial purpose in C++. In that interpretation, we are defining a struct called fact, parameterized by an integer n, whose sole member is a static const integer called value. Moreover, this template is specialized for the case of n equal zero.
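To convince yourself that this really is a compile-time function, here is the factorial template repeated as one self-contained fragment, checked with static_assert (available since C++11); the checks are my addition, not part of the original:

```cpp
// The two "sub-definitions" of fact, exactly as above.
template<int n> struct
fact {
    static const int value = n * fact<n - 1>::value;
};

template<> struct
fact<0> { // specialization for n = 0
    static const int value = 1;
};

// The compiler evaluates these "calls" during compilation:
static_assert(fact<0>::value == 1,  "0! is 1");
static_assert(fact<4>::value == 24, "4! is 24");
```

If either assertion failed, compilation would stop, which is the compile-time analog of a failing unit test.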

Now I want you to forget about what I just said and put on the glasses which make the C++ code look like the corresponding Haskell code.

Here’s another example–this time of a predicate (a function returning a Boolean):

is_zero 0 = True
is_zero x = False

Let’s spice it up a little for C++ and define a predicate on types rather than integers. The following compile-time function returns true only when the type T is a pointer:

template<class T> struct
isPtr {
    static const bool value = false;
};

template<class U> struct
isPtr<U*> {
    static const bool value = true;
};

This time the actual argument to isPtr is first matched against the more specialized pattern, U*, and, if that fails, the general pattern is used.

We can add yet another specialization, which will pattern-match a const pointer:

template<class U> struct
isPtr<U * const> {
    static const bool value = true;
};

These type predicates may be used, for instance, to select more flexible and efficient implementations of parameterized containers. Think of the differences between a vector of values vs. a vector of pointers.
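Here is the whole predicate, both specializations repeated so the fragment stands alone, exercised on a few types; the static_assert checks are my addition:

```cpp
template<class T> struct
isPtr {
    static const bool value = false;
};

template<class U> struct
isPtr<U*> {
    static const bool value = true;
};

template<class U> struct
isPtr<U * const> { // a const pointer, e.g. int* const
    static const bool value = true;
};

static_assert(!isPtr<int>::value,        "int is not a pointer");
static_assert( isPtr<int*>::value,       "matches the U* pattern");
static_assert( isPtr<int* const>::value, "matches the U* const pattern");
static_assert( isPtr<const int*>::value, "pointer to const matches U*");
```

Note the last case: a pointer to const is still matched by U*, with U deduced as const int.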


The basic data structure in functional languages is the list. Haskell’s lists are introduced using square brackets. For instance, a list of three numbers, 1, 2, 3, looks like:

[1, 2, 3]

List processing in functional languages follows the standard pattern: a list is split into head and tail, an operation is performed on the head, and the tail is processed using recursion. The splitting is done by pattern matching: the Haskell pattern being (head:tail) (to be precise, the colon in parentheses represents the cons operation–the creation of a list by prepending an element to an existing list).

Here’s a simple function, count, that calculates the length of a list:

count [] = 0
count (head:tail) = 1 + count tail

The first pattern, [], matches an empty list; the second a non-empty one. Notice that a function call in Haskell doesn’t use parentheses around arguments, so count tail is interpreted as a call to count with the argument tail.

Before C++0x, TMP was severely crippled by the lack of a list primitive. People used separate definitions for a list of zero, one, two, etc., elements, and even used special macros to define them. This is no longer true in C++0x, thanks to variadic templates and template parameter packs. Here’s our Haskell count translated into C++ TMP:

// Just a declaration
template<class... list> struct
count;

template<> struct
count<> {
    static const int value = 0;
};

template<class head, class... tail> struct
count<head, tail...> {
    static const int value = 1 + count<tail...>::value;
};

First we have a declaration (not a definition) of the template count that takes a variable number of type parameters (the keyword class or typename introduces a type parameter). They are packed into a template parameter pack, list.

Once this general declaration is visible, specializations may follow in any order. I arranged them to follow the Haskell example. The first one matches the empty list and returns zero. The second one uses the pattern, <head, tail...>. This pattern will match any non-empty list and split it into the head and the (possibly empty) tail.

To “call” a variadic template, you instantiate it with an arbitrary number of arguments and retrieve its member, value, e.g.,

int n = count<int, char, long>::value; // returns 3

A few words about variadic templates: A variadic template introduces a template parameter pack using the notation class... pack (or int... ipack, etc.). The only thing you may do with a pack is to expand it and pass it to another variadic template. The expansion is done by following the name of the pack with three dots, as in tail... (you’ll see more examples later).

Variadic templates have many applications such as type-safe printf, tuples (objects that store an arbitrary number of differently typed arguments), variants, and many more.

Higher-Order Functions and Closures

The real power of functional programming comes from treating functions as first-class citizens: you may pass functions to other functions and return functions from functions. Functions operating on functions are called higher-order functions. Surprisingly, compile-time C++ seems to have better support for higher-order functions than runtime C++.

Let’s start with a Haskell example. I want to define a function that takes two predicate functions and returns another predicate function that combines the two using logical OR. Here it is in Haskell:

or_combinator f1 f2 =
    \x -> (f1 x) || (f2 x)

The or_combinator returns an anonymous function (the famous “lambda”) that takes one argument, x, calls both f1 and f2 with it, and returns the logical OR of the two results. The return value of or_combinator is this freshly constructed function. I can then call this function with an arbitrary argument. For instance, here I’m checking if 2 is either zero or one (guess what, it isn’t!):

(or_combinator is_zero is_one) 2

I put the parentheses around the function and its arguments for readability, although they are not strictly necessary. 2 is the argument to the function returned by or_combinator.

The lambda that’s returned from or_combinator is actually a closure. It “captures” the two arguments, f1 and f2 passed to or_combinator. They may be used long after the call to or_combinator has returned.

It might take some getting used to it before you are comfortable with functions taking functions and returning functions, but it’s much easier to learn this stuff in Haskell than in the obfuscated C++. Indeed, here’s an almost direct translation of this example:

template<template<class> class f1, template<class> class f2> struct
or_combinator {
    template<class T> struct
    lambda {
        static const bool value = f1<T>::value || f2<T>::value;
    };
};

Since in the metalanguage a function is represented by a template, the template or_combinator takes two such templates as arguments. It “calls” these templates using the standard syntax f<T>::value. Actually, the or_combinator doesn’t call these functions. Instead it defines a new template, which I call lambda, that takes the argument T and calls those functions. This template acts like a closure–it captures the two templates that are the arguments to or_combinator.

Here’s how you may use the or_combinator to combine two tests, isPtr and isConst, and apply the result to the type const int:

std::cout
   << "or_combinator<isPtr, isConst>::lambda<const int>::value = "
   << or_combinator<isPtr, isConst>::lambda<const int>::value 
   << std::endl;

Such logical combinators are essential for predicate composability.
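Putting the pieces together: since the post never shows isConst, the version below is my own sketch of it (a predicate matching const-qualified types); everything else repeats the code above, with static_assert checks added:

```cpp
template<class T> struct isPtr { static const bool value = false; };
template<class U> struct isPtr<U*> { static const bool value = true; };

// isConst is not defined in the text; this is one plausible implementation.
template<class T> struct isConst { static const bool value = false; };
template<class U> struct isConst<const U> { static const bool value = true; };

template<template<class> class f1, template<class> class f2> struct
or_combinator {
    template<class T> struct
    lambda {
        static const bool value = f1<T>::value || f2<T>::value;
    };
};

// The "closure" lambda remembers f1 and f2 and can be applied to any type:
static_assert( or_combinator<isPtr, isConst>::lambda<const int>::value, "const");
static_assert( or_combinator<isPtr, isConst>::lambda<char*>::value,     "pointer");
static_assert(!or_combinator<isPtr, isConst>::lambda<double>::value,    "neither");
```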

Higher-Order Functions Operating on Lists

Once you combine higher-order functions with lists you have a powerful functional language at your disposal. Higher-order functions operating on lists look very much like algorithms. Let me show you some classic examples. Here’s the function (or algorithm), all, that returns true if and only if all elements of a list satisfy a given predicate.

all pred [] = True
all pred (head:tail) = (pred head) && (all pred tail)

By now you should be familiar with all the techniques I used here, like pattern matching or list recursion.

Here’s the same code obfuscated by the C++ syntax:

template<template<class> class predicate, class... list> struct
all;

template<template<class> class predicate> struct
all<predicate> {
    static const bool value = true;
};

template<
    template<class> class predicate, 
    class head, 
    class... tail> struct
all<predicate, head, tail...> {
    static const bool value = 
        predicate<head>::value
        && all<predicate, tail...>::value;
};
Except for the initial declaration required by C++ there is a one-to-one paradigm match between the two implementations.
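To see the one-to-one match in action, here is the complete all, repeated together with the isPtr predicate from before; the test cases at the bottom are my addition:

```cpp
template<class T> struct isPtr { static const bool value = false; };
template<class U> struct isPtr<U*> { static const bool value = true; };

template<template<class> class predicate, class... list> struct
all;

template<template<class> class predicate> struct
all<predicate> {
    static const bool value = true;
};

template<
    template<class> class predicate,
    class head,
    class... tail> struct
all<predicate, head, tail...> {
    static const bool value =
        predicate<head>::value
        && all<predicate, tail...>::value;
};

static_assert( all<isPtr, int*, char*, void*>::value, "all are pointers");
static_assert(!all<isPtr, int*, char>::value,         "char breaks it");
static_assert( all<isPtr>::value, "vacuously true for the empty list");
```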

Another useful algorithm, a veritable workhorse of functional programming, is “fold right” (together with its dual partner, “fold left”). It folds a list while accumulating the results (that’s why in runtime C++ this algorithm is called “accumulate”). Here’s the Haskell implementation:

foldr f init [] = init
foldr f init (head:tail) =
    f head (foldr f init tail)

Function f, which is the first argument to foldr, takes two arguments, the current element of the list and the accumulated value. Its purpose is to process the element and incorporate the result in the accumulator. The new accumulated value is then returned. It is totally up to the client to decide what kind of processing to perform, how it is accumulated, and what kind of value is used. The second argument, init, is the initial value for the accumulator.

Here’s how it works: The result of foldr is generated by acting with f on the head of the list and whatever has been accumulated by processing the tail of the list. The algorithm recurses until the tail is empty, in which case it returns the initial value. At runtime this type of algorithm would make N recursive calls before starting to pop the stack and accumulate the results.

For instance, foldr may be used to sum the elements of a list (so_far is the accumulator, which is initialized to zero):

add_it elem so_far = elem + so_far
sum_it lst = foldr add_it 0 lst

The accumulator function is add_it. If, instead, I wanted to calculate the product of all elements, I’d use a function mult_it and the starting value of one. You get the idea.

Here’s the same algorithm in C++ TMP:

template<template<class, int> class, int, class...> struct
fold_right;

template<template<class, int> class f, int init> struct
fold_right<f, init> {
    static const int value = init;
};

template<template<class, int> class f, int init, class head, class... tail> struct
fold_right<f, init, head, tail...> {
    static const int value = f<head, fold_right<f, init, tail...>::value>::value;
};

Once you understand the Haskell version, this complex code suddenly becomes transparent (if it doesn’t, try squinting ;-) ).
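As an exercise in squinting, here is fold_right at work with a hypothetical accumulator, add_size (my own name, not from the post), which adds up the sizes of the types in the list:

```cpp
template<template<class, int> class, int, class...> struct
fold_right;

template<template<class, int> class f, int init> struct
fold_right<f, init> {
    static const int value = init;
};

template<template<class, int> class f, int init, class head, class... tail> struct
fold_right<f, init, head, tail...> {
    static const int value = f<head, fold_right<f, init, tail...>::value>::value;
};

// The accumulator: incorporate sizeof(T) into the running total.
template<class T, int so_far> struct
add_size {
    static const int value = sizeof(T) + so_far;
};

static_assert(fold_right<add_size, 0, char, char, char>::value == 3, "3 chars");
static_assert(fold_right<add_size, 0>::value == 0, "empty list yields init");
```

Swapping in a different accumulator, say one that multiplies, requires no change to fold_right itself, which is exactly the point of the higher-order formulation.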

Lists of Numbers

Let’s now switch to integers for a moment. Haskell defines a function sum that adds all elements of a list:

sum [] = 0
sum (head:tail) = head + (sum tail)

We can do the same in C++ TMP (in five times as many lines of code):

template<int...> struct
sum;

template<> struct
sum<> {
    static const int value = 0;
};

template<int i, int... tail> struct
sum<i, tail...> {
    static const int value = i + sum<tail...>::value;
};

List Comprehension

Haskell has one more trick up its sleeve for operating on lists without explicit recursion. It’s called list comprehension. It’s a way of defining new lists based on existing lists. The nomenclature and notation are borrowed from Set Theory, where you often encounter definitions such as: S is a set of elements, where… Let’s look at a simple Haskell example:

[x * x | x <- [3, 4, 5]]

This is a set (list) of elements x * x, where x is from the list [3, 4, 5].

Remember our recursive definition of count? Using list comprehension it’s reduced to a one-liner:

count lst = sum [1 | x <- lst]

Here, we create a list of ones, one for each element of the list. Our result is the sum of those ones. To make this definition more amenable to translation into C++, let’s define an auxiliary function one that, for any argument x, returns 1.

one x = 1

Here’s the modified definition of count:

count lst = sum [one x | x <- lst]

Now we are ready to convert this code to C++ TMP:

template<class T> struct
one {
    static const int value = 1;
};

template<class... lst> struct
count {
    static const int value = sum<one<lst>::value...>::value;
};

Here our list is stored in a template parameter pack, lst. If we wanted to expand this pack, we’d use the notation lst..., but that’s not what’s happening here. The ellipsis appears after the pattern containing the pack:

one<lst>::value...

Compare this with the equivalent Haskell:

[one x | x <- lst]

In C++, when the ellipsis follows a pattern that contains a pack, it’s not the pack that’s expanded, but the whole pattern is repeated for each element of the pack. Here, if our list were <int, char, void*>, the pattern would be expanded to:

<one<int>::value, one<char>::value, one<void*>::value>

The subsequent call to sum would be made with those arguments.

Notice that a different positioning of the ellipsis would result in a completely different expansion. This pattern:

one<lst...>::value

would result in a call to one with the whole list of types, which would be an error.

Here’s another example of pattern expansion: a function that counts the number of pointers in a list of types:

template<class... lst> struct
countPtrs {
    static const int value = sum<isPtr<lst>::value ...>::value;
};

In this case the pattern is:

isPtr<lst>::value ...

and it expands into a list of Booleans. (I’m taking advantage of the fact that false is zero and true is one, when converted to integers.)
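Assembling countPtrs with the sum and isPtr templates from earlier gives a complete, checkable fragment; the static_asserts are my addition:

```cpp
template<int...> struct sum;
template<> struct sum<> { static const int value = 0; };
template<int i, int... tail> struct sum<i, tail...> {
    static const int value = i + sum<tail...>::value;
};

template<class T> struct isPtr { static const bool value = false; };
template<class U> struct isPtr<U*> { static const bool value = true; };

template<class... lst> struct
countPtrs {
    // The whole pattern isPtr<lst>::value is repeated per element,
    // yielding a list of Booleans that sum converts to integers.
    static const int value = sum<isPtr<lst>::value ...>::value;
};

static_assert(countPtrs<int*, char, void*>::value == 2, "two pointers");
static_assert(countPtrs<>::value == 0, "empty list");
```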

You may find a more complex practical example in the Gregor, Järvi, and Powell paper (see bibliography).


List comprehension can be used to define some very useful higher-order functions. One such function is map, which takes a list and applies a unary function to each element, resulting in a new list. You might be familiar with the runtime implementation of this algorithm in the C++ STL under the name transform. This is what map looks like in Haskell:

map f lst = [f x | x <- lst]

Here, f is the unary function and lst is the input list. You have to admire the terseness and elegance of this notation.

The first impulse would be to translate it into C++ TMP as:

template<template<class> class f, class... lst> struct
map {
    typedef f<lst>... type;
};

This is surprisingly terse too. The problem is that it doesn’t compile. As far as I know there is no way for a template to “return” a variable list of elements. In my opinion, this is a major language design flaw, but that’s just me.

There are several workarounds, none of them too exciting. One is to define a separate entity called a typelist along the lines of:

template<class... list> struct typelist;

template<class hd, class... tl> struct
typelist<hd, tl...> {
    typedef hd head;
    typedef typelist<tl...> tail;
};

(As a matter of fact I have implemented typelists and related algorithms both in C++ and D.)
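For completeness, here is one way such a typelist can be consumed; the length algorithm is my own sketch, not taken from any of those libraries:

```cpp
template<class... list> struct typelist;

template<class hd, class... tl> struct
typelist<hd, tl...> {
    typedef hd head;
    typedef typelist<tl...> tail;
};

// A hypothetical algorithm over typelists: compute the length recursively.
template<class TL> struct length;

template<> struct
length<typelist<>> {
    static const int value = 0;
};

template<class hd, class... tl> struct
length<typelist<hd, tl...>> {
    static const int value = 1 + length<typelist<tl...>>::value;
};

static_assert(length<typelist<>>::value == 0, "empty typelist");
static_assert(length<typelist<int, char, double>>::value == 3, "three types");
```

Because the typelist is itself a single type, it can be "returned" from a template, which is exactly the workaround the text describes.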

Another approach is to use continuations. Template parameter packs cannot be returned, but they can be passed to variadic templates (after expansion). So how about defining an algorithm like map to take one additional function that would consume the list that is the result of mapping? Such a function is often called a continuation, since it continues the calculation where normally one would return the result. First, let’s do it in Haskell:

map_cont cont f lst = cont [f x | x <- lst]

The function map_cont is just like map except that it takes a continuation, cont, and applies it to the result of mapping. We can test it by defining yet another implementation of count:

count_cont lst = map_cont sum one lst

The continuation here is the function sum that will be applied to the list produced by acting with function one on the list lst. Since this is quite a handful, let me rewrite it in a more familiar notation of runtime C++:

int map_cont(int (*cont)(list), int (*f)(int), list lst) {
    list tmp;
    for (auto it = lst.begin(); it != lst.end(); ++it)
        tmp.push_back(f(*it));
    return cont(tmp);
}

Now for the same thing in compile-time C++:

template<template<class...> class cont, 
         template<class> class f,
         class... lst> struct
map_cont {
    static const int value =
        cont<typename f<lst>::type ...>::value;
};

It’s a one-to-one mapping of Haskell code–and it has very little in common with the iterative runtime C++ implementation. Also notice how loose the typing is in the TMP version as compared with the runtime version. The continuation is declared as a variadic template taking types, function f is declared as taking a type, and the list is a variadic list of types. Nothing is said about return types, except for the constraints that f returns a type (because cont consumes a list of types) and cont returns an int. Actually, this last constraint can be relaxed if we turn integers into types–a standard trick (hack?) in TMP:

template<int n> struct
Int {
    static const int value = n;
};

Loose typing–or “kinding,” as it is called for types of types–is an essential part of compile-time programming. In fact, the popularity of the above trick shows that C++ kinding might already be too strong.
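Here is the Int trick at work, combined with map_cont to reimplement count via a continuation. The names sum_ints and the type-returning variant of one are my own; the post only hints at this combination:

```cpp
template<int n> struct
Int {
    static const int value = n;
};

// A continuation that consumes a list of "boxed" integers and adds them up.
template<class... ts> struct sum_ints;
template<> struct sum_ints<> { static const int value = 0; };
template<class h, class... t> struct sum_ints<h, t...> {
    static const int value = h::value + sum_ints<t...>::value;
};

// one, reworked to "return a type": for any T it yields Int<1>.
template<class T> struct
one {
    typedef Int<1> type;
};

template<template<class...> class cont,
         template<class> class f,
         class... lst> struct
map_cont {
    static const int value =
        cont<typename f<lst>::type ...>::value;
};

// count, reimplemented in continuation-passing style:
static_assert(map_cont<sum_ints, one, int, char, long>::value == 3, "three types");
```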

The D Digression

I’m grateful to Andrei Alexandrescu for reviewing this post. Since he objected to the sentence, “I’m disappointed that the D programming language followed the same path as C++ rather than lead the way,” I feel compelled to support my view with at least one example. Consider various implementations of all.

In Haskell, beside the terse and elegant version I showed before:

all pred [] = True
all pred (head:tail) = (pred head) && (all pred tail)

there is also a slightly more verbose one:

all pred list =
    if null list then True
    else pred (head list) && all pred (tail list)

which translates better into D. The D version (taken from its standard library, Phobos) is not as short as Haskell’s, but follows the same functional paradigm:

template allSatisfy(alias F, T...) {
    static if (T.length == 1)
        alias F!(T[0]) allSatisfy;
    else
        enum bool allSatisfy = F!(T[0]) && allSatisfy!(F, T[1 .. $]);
}

It definitely beats C++, there’s no doubt about it. There are some oddities about it though. Notice that the type tuple, T..., gets the standard list treatment, but the split into the head and tail follows the array/slice notation. The head is T[0] and the tail is an array slice, T[1..$]. Instead of using value for the return value, D uses the “eponymous hack,” as I call it. The template “returns” the value using its own name. It’s a hack because it breaks down if you want to define the equivalent of a “local variable” inside a template. For instance, the following code doesn’t compile:

template allSatisfy(alias F, T...) {
    static if (T.length == 1)
        alias F!(T[0]) allSatisfy;
    else {
        private enum bool tailResult = allSatisfy!(F, T[1..$]);
        enum bool allSatisfy = F!(T[0]) && tailResult;
    }
}

This breaks one of the fundamental properties of any language: decomposability. You want to be able to decompose your calculation into smaller chunks and then combine them together. Of course, you may still use decomposition if you avoid the eponymous hack and just call your return value value. But that means modifying all the calling sites.

By the way, this is how decomposition works in Haskell:

all2 pred [] = True
all2 pred (head:tail) = (pred head) && tailResult
    where tailResult = all2 pred tail

Anyway, the important point is that there is no reason to force the functional programming paradigm at compile time. The following hypothetical syntax would be much easier for programmers who are used to imperative programming:

template allSatisfy(alias Pred, T...) {
    foreach(t; T)
        if (!Pred!(t))
            return false;
    return true;
}

In fact it’s much closer to the parameterized runtime function proposed by Andrei:

bool all(alias pred, Range)(Range r) {
    foreach (e; r) 
        if (!pred(e)) 
            return false;
    return true;
}

This is why I’m disappointed that the D programming language followed the same path as C++ rather than lead the way.


I have argued that some familiarity with Haskell may be really helpful in understanding and designing templates in C++. You might ask why C++ chose such horrible syntax to do compile-time functional programming. Well, it didn’t. The ability to do compile-time calculations in C++ was discovered rather than built into the language. It was a very fruitful discovery, as the subsequent developments, especially the implementation of the Boost MPL, have shown. However, once the functional paradigm and its weird syntax took root in C++ TMP, it stayed there forever. I’m aware of only one effort to rationalize C++ TMP by Daveed Vandevoorde in Reflective Metaprogramming in C++.

I have tested all code in this blog using the Hugs interpreter for Haskell and the GNU C++ compiler v. 4.4.1 with the special switch -std=c++0x. The names I used in the blog might conflict with the standard definitions. For instance, Hugs defines its own map and foldr.

If you want to learn more about template metaprogramming, I recommend two books:

  1. Andrei Alexandrescu, Modern C++ Design
  2. David Abrahams and Aleksey Gurtovoy, C++ Template Metaprogramming

The reference I used for variadic templates was the paper by Douglas Gregor, Jaakko Järvi, and Gary Powell.
