The video of my talk, Haskell and C++ Template Metaprogramming, is now available; and so are the slides. I pretty much covered the material from my last blog post, but many people (including me) find a video presentation easier to follow.

This is also a plug for the Northwest C++ Users Group that meets in Redmond every third Wednesday of the month. If you live in Seattle or on the east side of Lake Washington, check it out. You won’t be disappointed.


If you want to understand C++ template metaprogramming (TMP) you have to know functional programming. Seriously. I want you to think of TMP as maximally obfuscated (subset of) Haskell, and I’ll illustrate this point by point. If you don’t know Haskell, don’t worry, I’ll explain the syntax as I go.

The nice thing about single-paradigm languages like Haskell is that they have very simple syntax (think of Lisp, which does everything with just a bunch of parentheses). I will start the Haskell-to-C++ TMP mapping with the basics, like functions and recursion, and work up to higher-order functions, pattern matching, list comprehension (did you know it was expressible in C++?), and more.

Keep in mind that my Haskell examples are runtime functions operating on runtime data, whereas their C++ TMP equivalents are compile-time templates operating mostly on types. Operations on types are essential to providing correct and efficient implementations of parameterized classes and functions.

By necessity the examples are simple, but the same mapping may be applied to much more complex templates from the C++ Standard Library and Boost. As a bonus, I’ll also explain the hot new thing, variadic templates.

Functional Approach to Functions

How do you implement useful functions if you don’t have mutable variables, if statements, or loops? To a C++ programmer that might seem like an impossible task. But that’s the reality of the C++ compile-time language that forms the basis of TMP. Functional programming to the rescue!

As a warm-up, let’s see how Haskell implements a simple function, the factorial:

fact 0 = 1
fact n = n * fact (n - 1)

The first line states that the factorial of zero is one. The second line defines factorial for a non-zero argument, n (strictly speaking, it only works for positive arguments). It does so using recursion: the factorial of n is equal to n times the factorial of n-1. The recursion stops when n is equal to zero, in which case the first definition kicks in. Notice that the definition of the function fact is split into two sub-definitions. So when you call:

fact 4

the first definition is looked up first and, if it doesn’t match the argument (which it doesn’t), the second one comes into play. This is the simplest case of pattern matching: 4 doesn’t match the “pattern” 0, but it matches the pattern n.

Here’s almost exactly the same code expressed in C++ TMP:

template<int n> struct
fact {
    static const int value = n * fact<n - 1>::value;
};

template<> struct
fact<0> { // specialization for n = 0
    static const int value = 1;
};

You might notice how the horrible syntax of C++ TMP obscures the simplicity and elegance of this code. But once you are equipped with the C++/Haskell decoder ring, things become a lot clearer.

Let’s analyze this code. Just like in Haskell, there are two definitions of fact, except that their order is inverted. This is because C++ requires template specialization to follow the template’s general definition (or declaration, as we’ll see later). The pattern matching of arguments in C++ does not follow the order of declarations but rather is based on “best match”. If you instantiate the template with argument zero:

cout << "Factorial of 0 = " << fact<0>::value << endl;

the second pattern, fact<0>, is a better fit. Otherwise the first one, <int n>, is used.
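
We can verify this at compile time (static_assert is another C++0x feature, which this post already assumes):

static_assert(fact<4>::value == 24, "4! should equal 24");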

Notice also the weird syntax for “function call”

fact<n>::value

and for the “return statement”

static const int value = n * fact<n - 1>::value;

This all makes sense if you look at templates as definitions of parameterized types, which was their initial purpose in C++. In that interpretation, we are defining a struct called fact, parameterized by an integer n, whose sole member is a static const integer called value. Moreover, this template is specialized for the case of n equal zero.

Now I want you to forget about what I just said and put on the glasses which make the C++ code look like the corresponding Haskell code.

Here’s another example–this time of a predicate (a function returning a Boolean):

is_zero 0 = True
is_zero x = False

Let’s spice it up a little for C++ and define a predicate on types rather than integers. The following compile-time function returns true only when the type T is a pointer:

template<class T> struct
isPtr {
    static const bool value = false;
};

template<class U> struct
isPtr<U*> {
    static const bool value = true;
};

This time the actual argument to isPtr is first matched against the more specialized pattern, U*, and, if that fails, the general pattern is used.

We can add yet another specialization, which will pattern-match a const pointer:

template<class U> struct
isPtr<U * const> {
    static const bool value = true;
};
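
A few compile-time sanity checks (again using the C++0x static_assert):

static_assert(!isPtr<int>::value, "int is not a pointer");
static_assert(isPtr<char*>::value, "char* is a pointer");
static_assert(isPtr<char* const>::value, "a const pointer is still a pointer");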

Type predicates like these may be used, for instance, to select more flexible and efficient implementations of parameterized containers. Think of the differences between a vector of values and a vector of pointers.

Lists

The basic data structure in functional languages is the list. Haskell’s lists are introduced using square brackets. For instance, a list of three numbers, 1, 2, 3, looks like:

[1, 2, 3]

List processing in functional languages follows a standard pattern: a list is split into head and tail, an operation is performed on the head, and the tail is processed recursively. The splitting is done by pattern matching: the Haskell pattern is (head:tail) (to be precise, the colon in parentheses represents the cons operation–the creation of a list by prepending an element to an existing list).

Here’s a simple function, count, that calculates the length of a list:

count [] = 0
count (head:tail) = 1 + count tail

The first pattern, [], matches an empty list; the second a non-empty one. Notice that a function call in Haskell doesn’t use parentheses around arguments, so count tail is interpreted as a call to count with the argument tail.

Before C++0x, TMP was severely crippled by the lack of a list primitive. People used separate definitions for lists of zero, one, two, etc., elements, and even resorted to special macros to generate them. This is no longer true in C++0x, thanks to variadic templates and template parameter packs. Here’s our Haskell count translated into C++ TMP:

// Just a declaration
template<class... list> struct
count;

template<> struct
count<> {
    static const int value = 0;
};

template<class head, class... tail> struct
count<head, tail...> {
    static const int value = 1 + count<tail...>::value;
};

First we have a declaration (not a definition) of the template count that takes a variable number of type parameters (the keyword class or typename introduces a type parameter). They are packed into a template parameter pack, list.

Once this general declaration is visible, specializations may follow in any order. I arranged them to follow the Haskell example. The first one matches the empty list and returns zero. The second one uses the pattern <head, tail...>. This pattern will match any non-empty list and split it into the head and the (possibly empty) tail.

To “call” a variadic template, you instantiate it with an arbitrary number of arguments and retrieve its member, value, e.g.,

int n = count<int, char, long>::value; // returns 3

A few words about variadic templates: A variadic template introduces a template parameter pack using the notation class... pack (or int... ipack, etc.). The only thing you may do with a pack is to expand it and pass it to another variadic template. The expansion is done by following the name of the pack with three dots, as in tail.... You’ll see more examples later.

Variadic templates have many applications such as type-safe printf, tuples (objects that store an arbitrary number of differently typed arguments), variants, and many more.

Higher-Order Functions and Closures

The real power of functional programming comes from treating functions as first-class citizens. It means that you may pass functions to other functions and return functions from functions. Functions operating on functions are called higher-order functions. Surprisingly, compile-time C++ seems to have better support for higher-order functions than runtime C++.

Let’s start with a Haskell example. I want to define a function that takes two predicate functions and returns another predicate function that combines the two using logical OR. Here it is in Haskell:

or_combinator f1 f2 = 
    \x -> (f1 x) || (f2 x)

The or_combinator returns an anonymous function (the famous “lambda”) that takes one argument, x, calls both f1 and f2 with it, and returns the logical OR of the two results. The return value of or_combinator is this freshly constructed function. I can then call this function with an arbitrary argument. For instance, here I’m checking if 2 is either zero or one (guess what, it isn’t!):

(or_combinator is_zero is_one) 2

I put the parentheses around the function and its arguments for readability, although they are not strictly necessary. 2 is the argument to the function returned by or_combinator.

The lambda that’s returned from or_combinator is actually a closure. It “captures” the two arguments, f1 and f2, passed to or_combinator. They may be used long after the call to or_combinator has returned.

It might take some getting used to before you are comfortable with functions taking functions and returning functions, but it’s much easier to learn this stuff in Haskell than in the obfuscated C++. Indeed, here’s an almost direct translation of this example:

template<template<class> class f1, template<class> class f2> struct
or_combinator {
    template<class T> struct
    lambda {
        static const bool value = f1<T>::value || f2<T>::value;
    };
};

Since in the metalanguage a function is represented by a template, the template or_combinator takes two such templates as arguments. It “calls” these templates using the standard syntax f<T>::value. Actually, the or_combinator doesn’t call these functions. Instead it defines a new template, which I call lambda, that takes the argument T and calls those functions. This template acts like a closure–it captures the two templates that are the arguments to or_combinator.

Here’s how you may use the or_combinator to combine two tests, isPtr and isConst and apply the result to the type const int:

std::cout 
   << "or_combinator<isPtr, isConst>::lambda<const int>::value = "
   << or_combinator<isPtr, isConst>::lambda<const int>::value 
   << std::endl;

Such logical combinators are essential for predicate composability.
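
By the way, the isConst predicate isn’t defined anywhere in this post; it can be written analogously to isPtr (my sketch):

template<class T> struct
isConst {
    static const bool value = false;
};

template<class U> struct
isConst<const U> {
    static const bool value = true;
};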

Higher-Order Functions Operating on Lists

Once you combine higher-order functions with lists you have a powerful functional language at your disposal. Higher-order functions operating on lists look very much like algorithms. Let me show you some classic examples. Here’s the function (or algorithm), all, that returns true if and only if all elements of a list satisfy a given predicate.

all pred [] = True
all pred (head:tail) = (pred head) && (all pred tail)

By now you should be familiar with all the techniques I used here, like pattern matching or list recursion.

Here’s the same code obfuscated by the C++ syntax:

template<template<class> class predicate, class... list> struct
all;

template<template<class> class predicate> struct
all<predicate> {
    static const bool value = true;
};

template<
    template<class> class predicate, 
    class head, 
    class... tail> struct
all<predicate, head, tail...> {
    static const bool value = 
        predicate<head>::value 
        && all<predicate, tail...>::value;
};

Except for the initial declaration required by C++ there is a one-to-one paradigm match between the two implementations.
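
Here’s all at work, with compile-time checks using the isPtr predicate defined earlier:

static_assert(all<isPtr, int*, char*>::value, "all of them are pointers");
static_assert(!all<isPtr, int*, double>::value, "double is not a pointer");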

Another useful algorithm, a veritable workhorse of functional programming, is “fold right” (together with its dual partner, “fold left”). It folds a list while accumulating the results (that’s why in runtime C++ this algorithm is called “accumulate”). Here’s the Haskell implementation:

foldr f init [] = init
foldr f init (head:tail) =
    f head (foldr f init tail)

Function f, which is the first argument to foldr, takes two arguments, the current element of the list and the accumulated value. Its purpose is to process the element and incorporate the result in the accumulator. The new accumulated value is then returned. It is totally up to the client to decide what kind of processing to perform, how it is accumulated, and what kind of value is used. The second argument, init, is the initial value for the accumulator.

Here’s how it works: The result of foldr is generated by acting with f on the head of the list and whatever has been accumulated by processing the tail of the list. The algorithm recurses until the tail is empty, in which case it returns the initial value. At runtime this type of algorithm would make N recursive calls before starting to pop the stack and accumulate the results.

For instance, foldr may be used to sum the elements of a list (so_far is the accumulator, which is initialized to zero):

add_it elem so_far = elem + so_far
sum_it lst = foldr add_it 0 lst

The accumulator function is add_it. If, instead, I wanted to calculate the product of all elements, I’d use a function mult_it and the starting value of one. You get the idea.

Here’s the same algorithm in C++ TMP:

template<template<class, int> class, int, class...> struct
fold_right;

template<template<class, int> class f, int init> struct
fold_right<f, init> {
    static const int value = init;
};

template<template<class, int> class f, int init, class head, class...tail> struct
fold_right<f, init, head, tail...> {
    static const int value = f<head, fold_right<f, init, tail...>::value>::value;
};

Once you understand the Haskell version, this complex code suddenly becomes transparent (if it doesn’t, try squinting 😉 ).
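
If squinting doesn’t help, a test might. Here’s a hypothetical accumulator, inc, that ignores the element type and bumps the count by one, effectively making fold_right count the list:

template<class T, int so_far> struct
inc {
    static const int value = so_far + 1;
};

static_assert(fold_right<inc, 0, int, char, void*>::value == 3, "three elements");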

Lists of Numbers

Let’s now switch to integers for a moment. Haskell defines a function sum that adds all elements of a list:

sum [] = 0
sum (head:tail) = head + (sum tail)

We can do the same in C++ TMP (in five times as many lines of code):

template<int...> struct
sum;

template<> struct
sum<> {
    static const int value = 0;
};

template<int i, int... tail> struct
sum<i, tail...> {
    static const int value = i + sum<tail...>::value;
};
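
And the usual compile-time check:

static_assert(sum<1, 2, 3>::value == 6, "1 + 2 + 3 is 6");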

List Comprehension

Haskell has one more trick up its sleeve for operating on lists without explicit recursion. It’s called list comprehension. It’s a way of defining new lists based on existing lists. The nomenclature and notation are borrowed from Set Theory, where you often encounter definitions such as: S is a set of elements, where… Let’s look at a simple Haskell example:

[x * x | x <- [3, 4, 5]]

This is a set (list) of elements x * x, where x is from the list [3, 4, 5].

Remember our recursive definition of count? Using list comprehension it’s reduced to a one-liner:

count lst = sum [1 | x <- lst]

Here, we create a list of ones, one for each element of the list. Our result is the sum of those ones. To make this definition more amenable to translation into C++, let’s define an auxiliary function one that, for any argument x, returns 1.

one x = 1

Here’s the modified definition of count:

count lst = sum [one x | x <- lst]

Now we are ready to convert this code to C++ TMP:

template<class T> struct
one {
    static const int value = 1;
};

template<class... lst> struct
count {
    static const int value = sum<one<lst>::value...>::value;
};

Here our list is stored in a template parameter pack, lst. If we wanted to expand this pack, we’d use the notation lst..., but that’s not what’s happening here. The ellipsis appears after the pattern containing the pack:

one<lst>::value...

Compare this with the equivalent Haskell:

[one x | x <- lst]

In C++, when the ellipsis follows a pattern that contains a pack, it’s not the pack that’s expanded, but the whole pattern is repeated for each element of the pack. Here, if our list were <int, char, void*>, the pattern would be expanded to:

<one<int>::value, one<char>::value, one<void*>::value>

The subsequent call to sum would be made with those arguments.

Notice that a different positioning of the ellipsis would result in a completely different expansion. This pattern:

one<lst...>::value

would result in the call to one with the list of types, which would be an error.

Here’s another example of pattern expansion: a function that counts the number of pointers in a list of types:

template<class... lst> struct
countPtrs {
    static const int value = sum<isPtr<lst>::value ...>::value;
};

In this case the pattern is:

isPtr<lst>::value ...

and it expands into a list of Booleans. (I’m taking advantage of the fact that false is zero and true is one, when converted to integers.)
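
For example:

static_assert(countPtrs<int*, char, void*>::value == 2, "two pointers in the list");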

You may find a more complex practical example in the Gregor, Järvi, and Powell paper (see bibliography).

Continuations

List comprehension can be used to define some very useful higher-order functions. One such function is map, which takes a list and applies a unary function to each element, resulting in a new list. You might be familiar with the runtime implementation of this algorithm in the C++ STL under the name transform. This is what map looks like in Haskell:

map f lst = [f x | x <- lst]

Here, f is the unary function and lst is the input list. You have to admire the terseness and elegance of this notation.

The first impulse would be to translate it into C++ TMP as:

template<template<class> class f, class... lst> struct
map {
    typedef f<lst>... type;
};

This is surprisingly terse too. The problem is that it doesn’t compile. As far as I know there is no way for a template to “return” a variable list of elements. In my opinion, this is a major language design flaw, but that’s just me.

There are several workarounds, none of them too exciting. One is to define a separate entity called a typelist along the lines of:

template<class...> struct
typelist; // just a declaration

template<class hd, class... tl> struct
typelist<hd, tl...> {
    typedef hd head;
    typedef typelist<tl...> tail;
};

(As a matter of fact I have implemented typelists and related algorithms both in C++ and D.)

Another approach is to use continuations. Template parameter packs cannot be returned, but they can be passed to variadic templates (after expansion). So how about defining an algorithm like map to take one additional function that would consume the list that is the result of mapping? Such a function is often called a continuation, since it continues the calculation where normally one would return the result. First, let’s do it in Haskell:

map_cont cont f lst = cont [f x | x <- lst]

The function map_cont is just like map except that it takes a continuation, cont, and applies it to the result of mapping. We can test it by defining yet another implementation of count:

count_cont lst = map_cont sum one lst

The continuation here is the function sum that will be applied to the list produced by acting with function one on the list lst. Since this is quite a handful, let me rewrite it in a more familiar notation of runtime C++:

#include <list>
typedef std::list<int> list; // give the pseudo-type a concrete meaning

int map_cont(int (*cont)(list), int (*f)(int), list lst) {
    list tmp;
    for (auto it = lst.begin(); it != lst.end(); ++it)
        tmp.push_back(f(*it)); // push_back keeps the elements in order
    return cont(tmp);
}

Now for the same thing in compile-time C++:

template<template<class...> class cont, 
         template<class> class f,
         class... lst> struct
map_cont {
    static const int value =
        cont<typename f<lst>::type ...>::value;
};

It’s a one-to-one mapping of Haskell code–and it has very little in common with the iterative runtime C++ implementation. Also notice how loose the typing is in the TMP version as compared with the runtime version. The continuation is declared as a variadic template taking types, function f is declared as taking a type, and the list is a variadic list of types. Nothing is said about return types, except for the constraints that f returns a type (because cont consumes a list of types) and cont returns an int. Actually, this last constraint can be relaxed if we turn integers into types–a standard trick (hack?) in TMP:

template<int n> struct
Int {
    static const int value = n;
};
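
To see the pieces click together, here’s a hypothetical reimplementation of count on top of map_cont and Int (the names one_t and sum_ints are mine):

template<class T> struct
one_t {
    typedef Int<1> type; // "return" the number one encoded as a type
};

template<class... ints> struct
sum_ints { // the continuation: unwrap the Ints and sum them
    static const int value = sum<ints::value...>::value;
};

static_assert(map_cont<sum_ints, one_t, int, char, void*>::value == 3, "three types");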

Loose typing–or “kinding,” as it is called for types of types–is an essential part of compile-time programming. In fact the popularity of the above trick shows that C++ kinding might be already too strong.

The D Digression

I’m grateful to Andrei Alexandrescu for reviewing this post. Since he objected to the sentence, “I’m disappointed that the D programming language followed the same path as C++ rather than lead the way,” I feel compelled to support my view with at least one example. Consider various implementations of all.

In Haskell, beside the terse and elegant version I showed before:

all pred [] = True
all pred (head:tail) = (pred head) && (all pred tail)

there is also a slightly more verbose one:

all pred list =
    if null list then True
    else pred (head list) && all pred (tail list)

which translates better into D. The D version (taken from its standard library, Phobos) is not as short as Haskell’s, but follows the same functional paradigm:

template allSatisfy(alias F, T...) {
    static if (T.length == 1)
    {
        alias F!(T[0]) allSatisfy;
    }
    else
    {
        enum bool allSatisfy = F!(T[0]) && allSatisfy!(F, T[1 .. $]);
    }
}

It definitely beats C++, there’s no doubt about it. There are some oddities about it though. Notice that the type tuple, T..., gets the standard list treatment, but the split into the head and tail follows the array/slice notation. The head is T[0] and the tail is an array slice, T[1..$]. Instead of using value for the return value, D uses the “eponymous hack,” as I call it. The template “returns” the value using its own name. It’s a hack because it breaks down if you want to define the equivalent of a “local variable” inside a template. For instance, the following code doesn’t compile:

template allSatisfy(alias F, T...) {
    static if (T.length == 1)
    {
        alias F!(T[0]) allSatisfy;
    }
    else
    {
        private enum bool tailResult = allSatisfy!(F, T[1..$]);
        enum bool allSatisfy = F!(T[0]) && tailResult;
    }
}

This breaks one of the fundamental properties of any language: decomposability. You want to be able to decompose your calculation into smaller chunks and then combine them together. Of course, you may still use decomposition if you forgo the eponymous hack and just call your return value value. But that means modifying all the calling sites.

By the way, this is how decomposition works in Haskell:

all2 pred [] = True
all2 pred (head:tail) = (pred head) && tailResult
    where tailResult = all2 pred tail

Anyway, the important point is that there is no reason to force the functional paradigm on compile-time programming. The following hypothetical syntax would be much easier for programmers who are used to imperative programming:

template allSatisfy(alias Pred, T...) {
    foreach(t; T)
        if (!Pred!(t))
            return false;
    return true;
}

In fact it’s much closer to the parameterized runtime function proposed by Andrei:

bool all(alias pred, Range)(Range r) {
    foreach (e; r) 
        if (!pred(e)) 
            return false;
    return true;
}

This is why I’m disappointed that the D programming language followed the same path as C++ rather than lead the way.

Conclusions

I have argued that some familiarity with Haskell may be really helpful in understanding and designing templates in C++. You might ask why C++ chose such horrible syntax to do compile-time functional programming. Well, it didn’t. The ability to do compile-time calculations in C++ was discovered rather than built into the language. It was a very fruitful discovery, as the subsequent developments, especially the implementation of the Boost MPL, have shown. However, once the functional paradigm and its weird syntax took root in C++ TMP, it stayed there forever. I’m aware of only one effort to rationalize C++ TMP: Daveed Vandevoorde’s Reflective Metaprogramming in C++.

I have tested all code in this blog using the Hugs interpreter for Haskell and the GNU C++ compiler v. 4.4.1 with the special switch -std=c++0x. The names I used in the blog might conflict with the standard definitions. For instance, Hugs defines its own map and foldr.

If you want to learn more about template metaprogramming, I recommend two books:

  1. Andrei Alexandrescu, Modern C++ Design
  2. David Abrahams and Aleksey Gurtovoy, C++ Template Metaprogramming

The reference I used for variadic templates was the paper by Douglas Gregor, Jaakko Järvi, and Gary Powell.


Here’s the video from my recent talk to the Northwest C++ Users Group (NWCPP) about how to translate the data-race free type system into a system of user-defined annotations in C++. I start with the definition of a data race and discuss various ways to eliminate them. Then I describe the ownership system and give a few examples of annotated programs.

Here are the slides from the presentation. They include extensive notes.

By the way, we are looking for speakers at the NWCPP, not necessarily related to C++. We are thinking of changing the charter to include all programming languages. If you are near Seattle on Oct 21 09 (or any third Wednesday of the month), and you are ready to give a 60 min presentation, please contact me.


“I’ve been doing some template metaprogramming lately,” he said nonchalantly.

Why is it funny? Because template metaprogramming is considered really hard. I mean, über-guru-level hard. I’m lucky to be friends with two such gurus: Andrei Alexandrescu, who wrote the seminal “Modern C++ Design,” and Eric Niebler, who implemented the Xpressive library for Boost; so I know the horrors.

But why is template metaprogramming so hard? A big part of it is that C++ templates are rather ill suited for metaprogramming, to put it mildly. They are fine for simple tasks like parameterized containers and some generic algorithms, but not for operations on types or lists of types. To make things worse, C++ doesn’t provide much in terms of reflection, so even a task as simple as deciding whether a given type is a pointer is hard (see the example later). Granted, C++0x offers some improvements, like template parameter packs; but guru-level creds are still required.

So, I’ve been doing some template metaprogramming lately… in D. The D programming language doesn’t have the baggage of compatibility with previous botched attempts, so it makes many things considerably easier on programmers. But before I get to it, I’d like to talk a little about the connection between generic programming and functional programming, give a short intro to functional programming; and then show some examples in C++ and D that involve pattern matching and type lists.

It’s not your father’s language

The key to understanding metaprogramming is to realize that it’s done in a different language than the rest of your program. Both in C++ and D you use a form of functional language for that purpose. First of all, no mutation! If you pass a list of types to a template, it won’t be able to append another type to it. It will have to create a completely new list using the old list and the new type as raw materials.

Frankly, I don’t know why mutation should be disallowed at compile time (all template calculations are done at compile time). In fact, for templates that are used in D mixins, I proposed not to invent a new language but to use a subset of D that included mutation. It worked just fine and made mixins much easier to use (for an example, see my DrDobbs article).

Once you disallow mutation, you’re pretty much stuck with functional paradigm. For instance, you can’t have loops, which require a mutable loop counter or some other mutable state, so you have to use recursion.

You’d think functional programmers would love template metaprogramming; except that they flip over the horrendous syntax of C++ templates. The one thing going for functional programming is that it’s easy to define and implement. You can describe typeless lambda calculus with just a few formulas in operational semantics.

One thing is important though: the metalanguage can’t be strongly typed, because a strongly typed language requires another language to implement generic algorithms on top of it. So, to terminate the succession of meta-meta-meta… languages, there’s a need for either a typeless, or at least dynamically typed, top-level metalanguage. My suspicion is that C++0x concepts failed so miserably because they dragged the metalanguage in the direction of strong typing. The nails in the coffin for C++ concepts were concept maps, the moral equivalent of implicit conversions in strongly-typed languages.

Templates are still not totally typeless. They distinguish between type arguments (introduced by typename or class in C++), template template arguments, and non-type (value) arguments, such as integers. Here’s an example that shows all three kinds:

template<class T, template<class X> class F, int n>

Functional Programming in a Nutshell

“Functions operating on functions”–that’s the gist of functional programming. The rest is syntactic sugar. Some of this sugar is very important. For instance, you want to have built-in integers and lists for data types, and pattern matching for dispatching.

-Functions

Here’s a very simple compile-time function in the C++ template language:

template<class T> 
struct IsPtr {
    static const bool apply = false;
};

If it doesn’t look much like a function to you, here it is in more normal ad-hoc notation:

IsPtr(T) {
    return false;
}

You can “execute” or “call” this meta-function by instantiating the template IsPtr with a type argument and accessing its member apply:

IsPtr<int>::apply;

There is nothing magical about “apply”, you can call it anything (“result” or “value” are other popular identifiers). This particular meta-function returns a Boolean, but any compile-time constant may be returned. What’s more important, any type or a template may be returned. But let’s not get ahead of ourselves.

-Pattern matching

You might be wondering what the use is for a function (I’ll be dropping the “meta-” prefix in what follows) that always returns false and is called IsPtr. Enter the next weapon in the arsenal of functional programmers: pattern matching. What we need here is to be able to match function arguments to different patterns and execute different code depending on the match. In particular, we’d like to return a different value, true, for T matching the pattern T*. In the C++ metalanguage this is done by partial template specialization. It’s enough to define another template of the same name that matches a more specialized pattern, T*:

template<class T>
struct IsPtr<T*> {
    static const bool apply = true;
};

When faced with the call,

IsPtr<int*>::apply

the compiler will first look for specializations of the template IsPtr, starting with the most specialized one. In our case, the argument int* matches the pattern T* so the version returning true will be instantiated. Accessing the apply member of this instantiation will result in the Boolean value true, which is exactly what we wanted. Let me rewrite this example using less obfuscated syntax.

IsPtr(T*) {
    return true;
}
IsPtr(T) { // default case
    return false;
}

D template syntax is slightly less complex than that of C++. The above example will read:

template IsPtr(T) {
    static if (is (T dummy: U*, U))
        enum IsPtr = true;
    else
        enum IsPtr = false;
}
// Compile-time tests
static assert( IsPtr!(int*) );
static assert( !IsPtr!(int) );

As you can see, D offers compile-time if statements and more general pattern matching. The syntax of pattern matching is not as clear as it could be (what’s with the dummy?), but it’s more flexible. Compile-time constants are declared as enums.

There is one little trick (a hack?) that makes the syntax of a “function call” a little cleaner. If, inside the template, you define a member of the same name as the template itself (I call it the “eponymous” member), then you don’t have to use the “apply” syntax. The “call” looks more like a call, except for the exclamation mark before the argument list (a D tradeoff for not using angle brackets). You’ll see later how the eponymous trick fails for more complex cases.

-Lists

The fundamental data structure in all functional languages is a list. Lists are very easy to operate upon using recursive algorithms and, as it turns out, they can be used to define arbitrarily complex data structures. No wonder C++0x felt obliged to introduce a compile-time type list as a primitive. It’s called a template parameter pack and the new syntax is:

template<class... T> struct Foo;

You can instantiate such a template with zero arguments,

Foo<>

one argument,

Foo<int>

or more arguments,

Foo<int, char*, void*>

How do you iterate over a type list? Well, there is no iteration in the metalanguage, so the best you can do is use recursion. To do that, you have to be able to separate the head of the list from its tail. Then you perform the action on the head and call yourself recursively with the tail. The head/tail separation is done using pattern matching.

Let me demonstrate a simple example from the paper Variadic Templates by Gary Powell et al. It calculates the length of a pack using recursion. First, the base case–a zero-length list (preceded by the general declaration that C++ requires):

template<typename... Args>
struct count; // general declaration

template<>
struct count<> {
    static const int value = 0;
};

That is the full specialization of a template, so it will be tried first. Here’s the general case:

template<typename Head, typename... Tail>
struct count<Head, Tail...> {
    static const int value = 1 + count<Tail...>::value;
};

Let’s see what it would look like in “normal” notation:

count() {
    return 0;
}
count(head, tail) {
    return 1 + count(tail);
}

And here’s the D version:

template count(T...) {
    static if (T.length == 0)
        enum count = 0;
    else
        enum count = 1 + count!(T[1..$]);
}
// tests
static assert( count!() == 0);
static assert( count!(int, char*, char[]) == 3);

T... denotes a type tuple, which supports array-like access. To get to the tail of the list, D uses array slicing, where T[1..$] denotes the slice of the array starting from index 1 up to the length of the array (denoted by the dollar sign). I’ll explain the important differences between C++ packs and D tuples (including pack expansion) in the next installment.

Conclusion

When looked upon from the functional perspective, template metaprogramming doesn’t look as intimidating as it seems at first. Knowing this interpretation makes you wonder if there isn’t a better syntax or even a better paradigm for metaprogramming.

I’ll discuss more interesting parts of template metaprogramming in the next installment (this one is getting too big already). In particular, I’ll show examples of higher order meta-functions like Filter or Not and some interesting tricks with type lists.


What is there to reference counting that is not obvious? In any language that supports deterministic destruction and the overloading of the copy constructor and the assignment operator it should be trivial. Or so I thought until I decided to implement a simple ref-counted thread handle in D. Two problems popped up:

  1. How does reference counting interact with garbage collection?
  2. How to avoid data races in a multithreaded environment?

In purely garbage-collected languages, like Java, you don’t implement reference counting, period. Which is pretty bad, if you ask me. GC is great at managing memory, but not so good at managing other resources. When a program runs out of memory, it forces a collection and reclaims unused memory. When it runs out of, say, system thread handles, it doesn’t reclaim unused handles–it just dies. You can’t use GC to manage system handles. So, as far as system resources go, Java forces the programmer to use the moral equivalent of C’s malloc and free. The programmer must free the resources explicitly.

In C++ you have std::shared_ptr for all your reference-counting needs, but you don’t have garbage collection for memory–at least not yet. (There is also Microsoft’s C++/CLI, which mixes the two systems.)

D offers the best of both worlds: GC and deterministic destruction. So let’s use GC to manage memory and reference counting (or other policies, like uniqueness) to manage other limited resources.

First attempt

The key to reference counting is to have a “value” type, like the shared_ptr in C++, that can be cloned and passed between functions at will. Internally this value type must have access to a shared chunk of memory that contains the reference count. In shared_ptr this chunk is a separately allocated integral type–the counter. The important thing is that all clones of the same resource share the same counter. The counter’s value reflects how many clones there are. Copy constructors and assignment operators take care of keeping the count exact. When the count goes to zero, that is the last copy of the resource goes out of scope, the resource is automatically freed (for instance, by calling CloseHandle).
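
In C++, the mechanics might look roughly like this (a bare-bones sketch of my own: it omits the assignment operator and assumes a Windows HANDLE closed with CloseHandle):

struct RcHandle {
    int*   _counter; // heap-allocated count shared by all clones
    HANDLE _h;       // the managed system resource

    explicit RcHandle(HANDLE h) : _counter(new int(1)), _h(h) {}

    RcHandle(RcHandle const & other) // cloning bumps the count
        : _counter(other._counter), _h(other._h) {
        ++*_counter;
    }

    ~RcHandle() { // the last clone releases the resource
        if (--*_counter == 0) {
            CloseHandle(_h);
            delete _counter;
        }
    }
};

(The count updates would, of course, have to be atomic–more on that below.)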

In my first attempt, I decided that the memory allocated for the counter should be garbage-collected. After all, the important thing is to release the handle–the memory will take care of itself.

struct RcHandle {
   shared Counter _counter; // GC'd shared Counter object
   HANDLE _h;
   ~this() { // destructor
      if (_counter.dec() == 0) // access the counter
         CloseHandle(_h);
   }
   // methods that keep the counter up to date
}

RcHandle is a struct, which is a value type in D. Counter is a class, which is a reference type; so _counter really hides a pointer to shared memory.

A few tests later I got a nasty surprise. My program faulted while trying to access an already deallocated counter. How did that happen? How could garbage collector deallocate my counter if I still had a reference to it?

Here’s what I did in my test: I embedded the ref-counted handle inside another garbage-collected object:

class Embedder { // GC'd class object
   RcHandle _rc;
}

When the time came to collect that object (which happened after the program ran to completion), its finalizer was called. Whenever an object contains fields that have non-trivial destructors, the compiler generates a finalizer that calls the destructors of those embedded objects–_rc in this case. The destructor of the ref-counted handle checks the reference count stored in the counter. Unfortunately the counter didn’t exist anymore. Hours of debugging later I had the answer.

What happened is that the garbage collector had two objects on its list: the embedder and the counter. It just so happened that the collector decided to collect those two objects in reverse order: the counter first, then the embedding object. So, by the time it got to the finalizer of the embedding object, the counter was gone!

What I discovered (with the help of other members of the D team who were involved in the discussion) was that there are some limitations on mixing garbage collection with deterministic destruction. There is a general rule:

An object’s destructor must not access any garbage-collected objects embedded in it.

Since the destructor of the ref-counted handle must have access to the counter, the counter must not be garbage-collectible. That means only one thing: it has to be allocated using malloc and explicitly deallocated using free. Which brings us to the second problem.

Concurrency

What can be simpler than an atomic reference count? On most processors you can atomically increment and decrement a memory location. You can even decrement and test the value in one uninterruptible operation. Problem solved! Or is it?

There is one tiny complication–the location that you are atomically modifying might disappear. I know, this is totally counter-intuitive. After all, the management of the counter follows a simple rule: the last to leave the room turns off the light. If the destructor of RcHandle sees the reference count going from one to zero, it knows that no other RcHandle has access to it, and it can safely free the counter. Who can argue with cold logic?

Here’s the troubling scenario: RcHandle is embedded in an object that is visible from two threads:

class Embedder {
   RcHandle _rcH;
}
shared Embedder emb;

Thread 1 tries to overwrite the handle:

RcHandle myHandle1;
emb._rcH = myHandle1;

while Thread 2 tries to copy the same handle to a local variable:

RcHandle myHandle2 = emb._rcH;

Consider the following interleaving:

  1. T2: Load the address of the _counter embedded in _rcH.
  2. T1: Swap emb._rcH with myHandle1
  3. T1: Decrement the counter that was embedded in _rcH. If it’s zero (and it is, in this example), free the counter.
  4. T2: Increment the _counter. Oops! This memory location has just been freed.

The snag is that there is a window between T2 reading the pointer in (1), and incrementing the location it’s pointing to in (4). Within that window, the reference count does not match the actual number of clients having access to the pointer. If T1 happens to do its ugly deed of freeing the counter within that window, the race may turn out deadly. (This problem has been known for some time and there were various proposals to fix it, for instance using DCAS, as in this paper on Lock-Free Reference Counting.)

Should we worry? After all, the C++ shared_ptr also exposes this race and nobody is crying havoc. It turns out that it all boils down to the responsibilities of the shared object (and I’m grateful to Andrei for pointing it out).

A shared object should not willy-nilly expose its implementation to clients

If the clients of Embedder want access to the handle, they should call a synchronized method. Here’s the correct, race-free implementation of the Embedder in the scenario I just described.

class Embedder {
private:
   RcHandle _rcH;
public:
   synchronized RcHandle GetHandle() const { return _rcH; }
   synchronized void SetHandle(RcHandle h) { _rcH = h; }
   ...
}

The method GetHandle copies _rcH and increments its count under the Embedder‘s lock. Another thread calling SetHandle has no chance of interleaving with that action, because it is forced to use the same lock. D2 actually enforces this kind of protection for shared objects, so my original example wouldn’t even compile.

You might be thinking right now that all this is baloney because I’m trying to fit a square peg into a round hole. I’m imposing value semantics on a non-atomic object, and you cannot atomically overwrite a non-atomic object. However, using this logic, you could convince yourself that a double cannot have value semantics (on most common architectures doubles are too large to be atomic). And yet you can safely pass doubles between threads and assign them to each other (which is the same as overwriting). It’s only when you embed a double inside a shared object, you have to protect it from concurrent access. And it’s not the double protecting itself–it’s the shared embedder that is responsible for synchronization. It’s exactly the same with RcHandle.

Conclusions

Just when you thought you knew everything about reference counting, you discover something you haven’t thought about. The take-home message is that mixing garbage collection with deterministic destruction is not a trivial matter, and that a race-free reference-counted object is still vulnerable to races when embedded in another shared object. Something to keep in mind when programming in D or in C++.


Is the Actor model just another name for message passing between threads? In other words, can you consider a Java Thread object with a message queue an Actor? Or is there more to the Actor model? Bartosz investigates.

I’ll start with listing various properties that define the Actor Model. I will discuss implementation options in several languages.

Concurrency

Actors are objects that execute concurrently. Well, sort of. Erlang, for instance, is not an object-oriented language, so we can’t really talk about “objects”. An actor in Erlang is represented by a thing called a Process ID (Pid). But that’s nitpicking. The second part of the statement is more interesting. Strictly speaking, an actor may execute concurrently but at times it will not. For instance, in Scala, actor code may be executed by the calling thread.

Caveats aside, it’s convenient to think of actors as objects with a thread inside.

Message Passing

Actors communicate through message passing. Actors don’t communicate using shared memory (or at least pretend not to). The only way data may be passed between actors is through messages.

Erlang has a primitive send operation denoted by the exclamation mark. To send a message Msg to the process (actor) Pid you write:

Pid ! Msg

The message is copied to the address space of the receiver, so there is no sharing.

If you were to imitate this mechanism in Java, you would create a Thread object with a mailbox (a concurrent message queue), with no public methods other than put and get for passing messages. Enforcing copy semantics in Java is impossible so, strictly speaking, mailboxes should only store built-in types. Note that passing Java Strings is okay, since strings are immutable.
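
For illustration, here’s what such a mailbox could look like in C++0x (the point above is about Java, so treat this as a sketch of the analogous structure):

#include <mutex>
#include <condition_variable>
#include <queue>

template<class T>
class Mailbox {
    std::queue<T> _queue;
    std::mutex _mtx;
    std::condition_variable _nonEmpty;
public:
    void put(T msg) {
        std::lock_guard<std::mutex> lock(_mtx);
        _queue.push(msg);
        _nonEmpty.notify_one();
    }
    T get() { // blocks until a message arrives
        std::unique_lock<std::mutex> lock(_mtx);
        while (_queue.empty())
            _nonEmpty.wait(lock);
        T msg = _queue.front();
        _queue.pop();
        return msg;
    }
};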

-Typed messages

Here’s the first conundrum: in Java, as in any statically typed language, messages have to be typed. If you want to process more than one type of message, it’s not enough to have just one mailbox per actor. In Erlang, which is dynamically typed, one canonical mailbox per actor suffices. In Java, mailboxes have to be abstracted from actors. So an actor may have one mailbox for accepting strings, another for integers, etc. You build actors from those smaller blocks.

But having multiple mailboxes creates another problem: How to block, waiting for messages from more than one mailbox at a time without breaking the encapsulation? And when one of the mailboxes fires, how to retrieve the correct type of a message from the appropriate mailbox? I’ll describe a few approaches.

-Pattern matching

Scala, which is also a statically typed language, uses the power of functional programming to solve the typed-messages problem. The receive statement uses pattern matching, which can match different types. It looks like a switch statement whose case labels are patterns. A pattern may specify the type it expects. You may send a string, or an integer, or a more complex data structure to an actor. A single receive statement inside the actor code may match any of those.

receive {
    case s: String => println("string: "+ s)
    case i: Int => println("integer: "+ i)
    case m => println("unknown: "+ m)
}

In Scala the type of a variable is specified after the colon, so s:String declares the variable s of the type String. The last case is a catch-all.

This is a very elegant solution to the difficult problem of marrying object-oriented programming to functional programming–a task at which Scala excels.

-Casting

Of course, we always have the option of escaping the type system. A mailbox could be just a queue of Objects. When a message is received, the actor could try casting it to each of the expected types in turn or use reflection to find out the type of the message. Here’s what Martin Odersky, the creator of Scala, has to say about it:

The most direct (some would say: crudest) form of decomposition uses the type-test and type-cast instructions available in Java and many other languages.

In the paper he co-authored with Emir and Williams (Matching Objects With Patterns) he gives the following evaluation of this method:

Evaluation: Type-tests and type-casts require zero overhead for the class hierarchy. The pattern matching itself is very verbose, for both shallow and deep patterns. In particular, every match appears as both a type-test and a subsequent type-cast. The scheme raises also the issue that type-casts are potentially unsafe because they can raise ClassCastExceptions. Type-tests and type-casts completely expose representation. They have mixed characteristics with respect to extensibility. On the one hand, one can add new variants without changing the framework (because there is nothing to be done in the framework itself). On the other hand, one cannot invent new patterns over existing variants that use the same syntax as the type-tests and type-casts.

The best one could do in C++ or D is to write generic code that hides casting from the client. Such generic code could use continuations to process messages after they’ve been cast. A continuation is a function that you pass to another function to be executed after that function completes (strictly speaking, a real continuation never returns, so I’m using this word loosely). The above example could be rewritten in C++ as:

void onString(std::string const & s) {
    std::cout << "string: " << s << std::endl;
}
void onInt(int i) {
    std::cout << "integer: " << i << std::endl;
}

receive<std::string, int> (&onString, &onInt);

where receive is a variadic template (available in C++0x). It would do the dynamic casting and call the appropriate function to process the result. The syntax is awkward and less flexible than that of Scala, but it works.
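
Here’s a minimal sketch of how such a receive might do the casting under the hood. The Message, Msg, and dispatch names are mine, and the handlers are assumed to take their arguments by const reference:

#include <iostream>
#include <string>

struct Message { virtual ~Message() {} };

template<class T>
struct Msg : Message { // wraps a payload of type T
    T payload;
    explicit Msg(T const & p) : payload(p) {}
};

// Fallback: no expected type matched the message.
void dispatch(Message const &) { std::cout << "unknown message\n"; }

// Try the first handler's argument type; on failure, recurse into the rest.
template<class T, class... Rest>
void dispatch(Message const & m, void (*handler)(T const &), Rest... rest) {
    if (Msg<T> const * p = dynamic_cast<Msg<T> const *>(&m))
        handler(p->payload);
    else
        dispatch(m, rest...);
}

With handlers declared to take const references, dispatch(msg, &onString, &onInt) walks the list of casts just as described above.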

The use of lambdas might make things a bit clearer. Here’s an example in D using lambdas (function literals), courtesy of Sean Kelly and Jason House:

receive(
    (string s){ writefln("string: %s", s); },
    (int i){ writefln("integer: %s", i); }
);

Interestingly enough, Scala’s receive is a library function with the pattern matching block playing the role of a continuation. Scala has syntactic sugar to make lambdas look like curly-braced blocks of code. Actually, each case statement is interpreted by Scala as a partial function–a function that is not defined for all values (or types) of arguments. The pattern matching part of case becomes the isDefinedAt method of this partial function object, and the code after that becomes its apply method. Of course, partial functions could also be implemented in C++ or D, but with a lot of superfluous awkwardness–lambda notation doesn’t help when partial functions are involved.

-Isolation

Finally, there is the problem of isolation. A message-passing system must be protected from data sharing. As long as the message is a primitive type and is passed by value (or an immutable type passed by reference), there’s no problem. But when you pass a mutable Object as a message, in reality you are passing a reference (a handle) to it. Suddenly your message is shared and may be accessed by more than one thread at a time. You either need additional synchronization outside of the Actor model or risk data races. Languages that are not strictly functional, including Scala, have to deal with this problem. They usually pass this responsibility, conveniently, to the programmer.

-Kilim

Java is not a good language to implement the Actor model. You can extend Java though, and there is one such extension worth mentioning called Kilim by Sriram Srinivasan and Alan Mycroft from Cambridge, UK. Messages in Kilim are restricted to objects with no internal aliasing, which have move semantics. The pre-processor (weaver) checks the structure of messages and generates appropriate Java code for passing them around. I tried to figure out how Kilim deals with waiting on multiple mailboxes, but there isn’t enough documentation available on the Web. The authors mention using the select statement, but never provide any details or examples.

Correction: Sriram was kind enough to provide an example of the use of select:

int n = Mailbox.select(mb0, mb1, .., timeout);

The return value is the index of the mailbox, or -1 for the timeout. Composability is an important feature of the message passing model.

Dynamic Networks

Everything I described so far is common to CSP (Communicating Sequential Processes) and the Actor model. Here’s what makes actors more general:

Connections between actors are dynamic. Unlike processes in CSP, actors may establish communication channels dynamically. They may pass messages containing references to actors (or mailboxes). They can then send messages to those actors. Here’s a Scala example:

receive {
    case (name: String, actor: Actor) =>
        actor ! lookup(name)
}

The original message is a tuple combining a string and an actor object. The receiver sends the result of lookup(name) to the actor it has just learned about. Thus a new communication channel between the receiver and the unknown actor can be established at runtime. (In Kilim the same is possible by passing mailboxes via messages.)

Actors in D

The D programming language with my proposed race-free type system could dramatically improve the safety of message passing. The race-free type system distinguishes between various kinds of sharing and enforces synchronization when necessary. For instance, since an Actor would be shared between threads, it would have to be declared shared. All objects inside a shared actor, including the mailbox, would automatically inherit the shared property. A shared message queue inside the mailbox could only store value types, unique types with move semantics, or reference types that are either immutable or are monitors (provide their own synchronization). These are exactly the types of messages that may be safely passed between actors. Notice that this is more than is allowed in Erlang (value types only) or Kilim (unique types only), but doesn’t include “dangerous” types that even Scala accepts (not to mention Java or C++).

I will discuss message queues in the next installment.


If it weren’t for the multitude of opportunities to shoot yourself in the foot, multithreaded programming would be easy. I’m going to discuss some of these “opportunities” in relation to global variables. I’ll talk about general issues and discuss the ways compilers can detect them. In particular, I’ll show the protections provided by my proposed extensions to the type system.

Global Variables

There are so many ways the sharing of global variables between threads can go wrong, it’s scary.

Let me start with the simplest example: the declaration of a global object of class Foo (in an unspecified language with Java-like syntax).

Foo TheFoo = new Foo;

In C++ or Java, TheFoo would immediately be visible to all threads, even if Foo provided no synchronization whatsoever (strictly speaking Java doesn’t have global variables, but static data members play the same role).

If the programmer doesn’t do anything to protect shared data, the default immediately exposes her to data races.

The D programming language (version 2.0, also known as D2) makes a better choice–global variables are, by default, thread local. That takes away the danger of accidental sharing. If the programmer wants to share a global variable, she has to declare it as such:

shared Foo TheFoo = new shared Foo;

It’s still up to the designer of the class Foo to provide appropriate synchronization.

Currently, the only multithreaded guarantee for shared objects in D2 is the absence of low-level data races on multiprocessors–and even that, only in the safe subset of D. What are low-level data races? Those are the races that break some lock-free algorithms, like the infamous Double-Checked Locking Pattern. If I were to explain this to a Java programmer, I’d say that all data members in a shared object are volatile. This property propagates transitively to all objects the current object has access to.

Still, the following implementation of a shared object in D would most likely be incorrect even with the absence of low-level data races:

class Foo {
    private int[] _arr;
    public void append(int i) {
       _arr ~= i; // array append
    }
}

auto TheFoo = new shared Foo;

The problem is that an array in D has two fields: the length and the pointer to a buffer. In shared Foo, each of them would be updated atomically, but the duo would not. So two threads calling TheFoo.append could interleave their updates in an unpredictable way, possibly leading to loss of data.

My race-free type system goes further–it eliminates all data races, both low- and high-level. The same code would work differently in my scheme. When an object is declared shared, all its methods are automatically synchronized. TheFoo.append would take Foo‘s lock and make the whole append operation atomic. (For the advanced programmer who wants to implement lock-free algorithms my scheme offers a special lockfree qualifier, which I’ll describe shortly.)

Now suppose that you were cautious enough to design your Java/D2 class Foo to be thread safe:

class Foo {
    private int [] _arr;
    public synchronized void append(int i) {
       _arr ~= i; // array append
    }
}

Does it mean your global variable, TheFoo, is safe to use? Not in Java. Consider this:

static Foo TheFoo; // static = global
// Thread 1
TheFoo = new Foo();
// Thread 2
while (TheFoo == null)
    continue;
TheFoo.append(1);

You won’t even know what hit you when your program fails. I will direct the reader to one of my older posts that explains the problems of publication safety on a multiprocessor machine. The bottom line is that, in order to make your program work correctly in Java, you have to declare TheFoo as volatile (or final, if you simply want to prevent such usage). Again, it looks like in Java the defaults are stacked against safe multithreading.
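In C++0x terms, the fix corresponds to making the handle itself atomic. Here’s a sketch of the analogous situation (my illustration, not code from any of the languages above):

#include <atomic>
#include <thread>

struct Foo { void append(int) {} };

std::atomic<Foo*> TheFoo(nullptr); // plays the role of Java's volatile

void writer() { TheFoo.store(new Foo()); } // publish a fully constructed Foo

void reader() {
    Foo * p;
    while ((p = TheFoo.load()) == nullptr) // spin until published
        continue;
    p->append(1); // safe: the atomic load synchronizes with the store
}

int main() {
    std::thread t1(writer), t2(reader);
    t1.join(); t2.join();
}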

This is not a problem in D2, since shared implies volatile.

In my scheme, the default behavior of shared is different. It works like Java’s final. The code that tries to rebind the shared object (re-assign to the handle) would not compile. This is to prevent accidental lock-free programming. (If you haven’t noticed, the code that waits on the handle of TheFoo to switch from null to non-null is lock-free. The handle is not protected by any lock.) Unlike D2, I don’t want to make lock-free programming “easy,” because it isn’t easy. It’s almost like D2 is endorsing lock-free programming by giving the programmer a false sense of security.

So what do you do if you really want to spin on the handle? You declare your object lockfree.

lockfree Foo TheFoo;

lockfree implies shared (it doesn’t make sense otherwise), but it also makes the handle “volatile”. All accesses to it will be made sequentially consistent (on the x86, it means all stores will compile to xchg).

Note that lockfree is shallow–data members of TheFoo don’t inherit the lockfree qualification. Instead, they inherit the implied shared property of TheFoo.

It’s not only object handles that can be made lockfree. Other atomic data types like integers, Booleans, etc., can also be lockfree. A lockfree struct is also possible–it is treated as a tuple all of whose elements are lockfree. There is no atomicity guarantee for the whole struct. Methods can be declared lockfree to turn off default synchronization.

Conclusion

Even the simplest case of sharing a global variable between threads is fraught with danger. My proposal unobtrusively eliminates the most common traps. The defaults are carefully chosen to let beginners avoid the pitfalls of multithreaded programming.


I was using auto_ptrs before they were even available in C++ (I made my own). I wrote several articles about resource management, in which auto_ptr played a prominent role. auto_ptr had always had many flaws, but it was a workhorse of memory management in a language that shuns garbage collection. Many of the flaws have been understood over the years and, in the 0x version of C++, auto_ptr was supplanted by the new improved unique_ptr. Using the latest features of C++, like rvalue references, unique_ptr implements move semantics in a consistent manner. You can now store unique_ptrs in most containers and apply many algorithms to them.

So why am I not jumping for joy? Because I know how much more can be done.

But first let me summarize the idea behind the unique pointer. It is a (smart) pointer that is the only reference to the object it’s pointing to. In particular, you cannot make a copy of a unique pointer–if you could, you’d end up with two references to the same object. You can only move it (hence the move semantics), making the source of the move invalid. With the older auto_ptr the moving was done quietly during assignment or pass-by-value. The problem with that arrangement was that the source auto_ptr would suddenly turn to null, which was sometimes confusing and occasionally led to access violations–as in this example:

void f(auto_ptr<Foo> foo); // pass by value

auto_ptr<Foo> pFoo (new Foo());
f(pFoo); // pass by value nulls the internal pointer
pFoo->method(); // access violation

The improvement provided by unique_ptr is to require an explicit move, to sensitize the programmer to the fact that the source of move becomes invalid. Still, the following code will compile, although the bug is much more prominent:

void f(unique_ptr<Foo> foo);

unique_ptr<Foo> pFoo (new Foo());
f(move(pFoo)); // explicit move
pFoo->method(); // access violation

A bigger problem is that there is no guarantee that a unique_ptr indeed stores the unique reference to an object. To begin with, unique_ptr can be constructed from any pointer. That pointer might have already been aliased, as in:

void f(unique_ptr<Foo> foo) {
    // destructor of foo called at the end of scope
}

Foo * foo = new Foo();
unique_ptr<Foo> upFoo(foo); 
// foo now aliases the contents of upFoo
f(move(upFoo)); // explicit move
foo->method(); // undefined behavior

There is also an obvious backdoor in the form of the method unique_ptr::get, which can be used to spread new aliases. This can be particularly insidious when you have to pass the result of get to a foreign function:

Foo * globalFoo; // a global, defined elsewhere

void f(Foo * pf) {
    globalFoo = pf; // creates a global alias
}

unique_ptr<Foo> pFoo(new Foo());
f(pFoo.get()); // leaks an alias

Finally, it’s possible to create aliases to data members of the object stored in unique_ptr, as well as assign aliased references to its data members. Consider this example:

class Foo {
public:
    ~Foo() { delete _bar; }
    Bar * _bar;
};

Bar * pBar = new Bar();
unique_ptr<Foo> upFoo(new Foo());
upFoo->_bar = pBar;
// pBar is now an alias to a member of Foo
upFoo.reset(); // deletes _bar inside foo
pBar->method(); // undefined behavior

All this adds up to quite a number of ways to shoot yourself in the foot! Still, if the only problems were deterministic access violations, I could live with them (I’m a very disciplined programmer). They are reasonably easy to reproduce and can be debugged using standard methods (code coverage). But now there is a new elephant in the room–multithreading.

The beauty of unique objects is that they can be safely passed (moved) between threads and they never require locking. Indeed, since only one thread at a time can reference them, there is no danger of data races. Except when, as in the examples above, aliases are inadvertently leaked from unique_ptr. Accessing an object through such aliases from more than one thread without locking is a very nasty bug, usually very hard to reproduce.

Type system approach

Okay, so what’s the panacea? I’m glad you asked–the type system, of course! Make unique a type qualifier and all your troubles are gone. Notice that the compiler has no idea about the semantics of some library class called unique_ptr, but it can be made aware of the semantics of the unique type qualifier. What’s more, it can enforce this semantics from cradle to grave–from construction to destruction. (I know this proposal has no chance to be incorporated into C++, but it might be useful in, let’s say, the D programming language.)

You don’t construct a unique pointer from an arbitrary pointer. You just construct the object as unique (I’m still using C++-like syntax):

unique Foo * foo = new unique Foo();

Now that the compiler knows the meaning of unique, it can, for instance, detect if a unique pointer is accessed after having been moved. The compiler might prevent such an invalid pointer from being dereferenced or passed to other functions. Such bugs can be detected at compile time, rather than through access violations at runtime.
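C++ itself can’t perform this check statically, but to illustrate the intent, here’s a minimal sketch of a move-only pointer that at least turns use-after-move into a deterministic assertion–a runtime stand-in for the proposed compile-time error (all names are mine):

#include <cassert>

template<class T>
class UniquePtr {
    T * _p;
public:
    explicit UniquePtr(T * p) : _p(p) {}
    UniquePtr(UniquePtr && other) : _p(other._p) { other._p = nullptr; }
    UniquePtr & operator=(UniquePtr && other) {
        if (this != &other) {
            delete _p;
            _p = other._p;
            other._p = nullptr;
        }
        return *this;
    }
    UniquePtr(UniquePtr const &) = delete;            // no aliasing copies
    UniquePtr & operator=(UniquePtr const &) = delete;
    ~UniquePtr() { delete _p; }
    T * operator->() const {
        assert(_p != nullptr && "use after move");    // caught at runtime
        return _p;
    }
};

The compiler-checked version would reject the offending dereference outright, with no runtime cost.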

Do we need separate syntax for move? It might seem that using the assignment syntax (the equal sign) wouldn’t be so bad, as long as the compiler prevents us from dereferencing the null pointer left behind after a move. However, another problem arises when you start instantiating templates with unique pointers. Some containers and algorithms will work out of the box. Some will refuse to be instantiated because internally they try to access unique pointers after the move. But there is a third category of templates that, although move-correct, will generate unexpected results–like vectors with null unique pointers. Unfortunately, there are situations where the compiler has limited vision (i.e., it cannot do whole-program analysis) and can miss a null unique pointer dereference. For instance, it can’t prevent the use of a vector after a random element has been moved out of it.

Let’s assume that we have a special syntax for move, say :=. Here’s a simple example of its use:

unique Foo * foo1 = new unique Foo();
unique Foo * foo2 := foo1; // move
foo1->method(); // compile-time error: foo1 invalid after move

In C++0x, a lot of containers and algorithms have been rewritten to use the explicit move instead of the assignment. It wouldn’t be a big deal to use := instead. Notice that the compiler would not allow the use of a regular assignment on unique pointers (unless they are rvalues), so the templates that are not ready for move semantics would refuse to get instantiated with unique template parameters.

Obviously, the move operator applied to a non-unique pointer must also work, otherwise we’d have to write separate templates for unique and non-unique template arguments.

We also have to use our special notation when passing unique arguments to functions, as in:

void f(unique Foo *);

unique Foo * foo = new unique Foo();
f(:= foo); // move
foo->method(); // compile-time error: foo invalid after move

When returning an lvalue unique pointer (for instance a member of a data structure), we use the notation return := foo;. The move operator may be elided when returning a local unique pointer, since the source ceases to exist upon the return (in general, move is implied when the source is a unique rvalue).

The biggest strength of the type-based approach is that the compiler can reliably prevent the aliasing of unique pointers. An assignment (as opposed to move) of an (lvalue) unique pointer to either unique or non-unique pointer is impossible. Taking the address of a unique pointer is forbidden too.

That leaves us with an interesting dilemma: How do you call a (library) function that takes a regular pointer if all you have at your disposal is a unique pointer? C++ unique_ptr lets you do it through its method get. But we know how dangerous get is as far as aliasing goes.

If you don’t have the source code for the function you are calling, you probably shouldn’t be calling it with a unique pointer, because you have no idea what this function will do with it (store an alias in a global variable?). If you know the implementer of the function and he/she is willing to donate his/her kidney in case the function leaks aliases, you may risk casting away the uniqueness.

There is however a way for a function to guarantee that it’s alias-safe by declaring its parameters lent. The compiler will check the body of the function to make sure it doesn’t leak aliases to any part of the lent parameter, nor does it store non-unique (and possibly aliased) objects inside the lent object. Of course, the function can only pass the lent parameter (or any sub-part of it) to another function that makes the same guarantee.

It’s not obvious which should be the default: lent or its opposite, owned. If lent were the default, the compiler would be flagging a lot of valid single-threaded code (although it makes sense to assume that methods of a monitor object take lent parameters by default).

The relationship between unique and lent is very similar to that between immutable and const in D. If you declare data as unique or immutable you can safely pass it to a function that declares its parameter as lent or const, respectively. lent guarantees that the parameter will not be aliased, const that it won’t be mutated.

Speaking of immutable–there’s always been a problem with constructing non-trivial immutable objects. The tricky part is that during construction we often need to explicitly modify the object, but we don’t want to leak non-const aliases to it or to its sub-parts. And now we have a new tool at hand to control aliasing–unique pointers. Instead of constructing an immutable object, you may construct a unique object, with all the guarantees about aliasing, and then move it to an immutable pointer. Just like this:

unique Foo * pFoo = new unique Foo();
immutable Foo * imFoo := pFoo;

(Of course, C++ doesn’t have immutable types either, but D does.)

By the way, you can always safely move a unique pointer to any of immutable, const, or regular pointers. Notice that it doesn’t mean that unique is a supertype of, for instance, immutable. You can’t pass unique where immutable is expected (you don’t want read-only aliases to escape to another thread!)–you can only move it.
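Today’s C++ can mimic a special case of this idiom (my analogy, not part of the proposal): mutate the object through a unique_ptr during “construction,” then move the sole reference into a pointer-to-const, after which no mutating alias exists:

#include <memory>

struct Foo {
    int x = 0;
    void mutate() { ++x; }
};

int main() {
    std::unique_ptr<Foo> pFoo(new Foo());
    pFoo->mutate(); // fine: we hold the only reference
    // "Freeze" the object by moving the unique reference away:
    std::shared_ptr<Foo const> imFoo = std::move(pFoo);
    // imFoo->mutate(); // would not compile: the view is now const
}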

A method that promises not to leak aliases to this is declared as lent. The compiler will detect any violations of this promise, as in:

class Foo {
    Bar * _bar;
public:
    Bar * get() lent {
        return _bar; // error: a lent method returning an alias
    }
};

In general, when you are dealing with a unique object, you may only call its lent methods.

Issues

Strict uniqueness imposes constraints on the internal structure of objects. Imagine creating a unique doubly-linked list. A link in such a list must be accessible from two places: from the previous link and from the next link. The scheme that absolutely prevents the creation of aliases to unique objects will not allow the creation of doubly-linked lists–and rightly so! Imagine moving a link out of the list: you’ll end up with a null pointer in the list, and a possible cross-thread aliasing if the (unique) link migrates to another thread.

There is a solution to this problem based on the ownership scheme (like the one used in Guava or GRFJ), which allows cross-aliasing of objects that share the same owner (e.g., all links are owned by the list). Such aliases cannot be leaked because the compiler won’t allow objects owned by a unique object to be moved. But that’s a topic for another blog post.

The use of iterators on unique containers is also highly restricted, but there are other ways of iterating that are inherently more thread-safe.

Conclusion

Any extension to the type system is usually met with some resistance from the user community. While some appreciate the added safety, others complain about added complexity. So it’s very important to be able to limit this complexity to specific areas.

When are unique and lent qualifiers really needed? In C++, at least in my experience, unique_ptr serves mostly as a poor man’s garbage collector. Since memory management permeates all C++ programs, the switch to using unique qualifiers would require a lot of changes even in single-threaded programs. In contrast, in garbage-collected languages, the main usefulness of unique is for multithreaded programming. A message queue that passes large unique objects between threads is more efficient than the one that deep-copies them. And unique objects never require locking.
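As an illustration of that last point, here’s a minimal sketch of such a queue, with C++’s unique_ptr standing in for unique objects (the names are mine):

#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>

// Messages are moved in and out, never copied, so at any moment
// only one thread can reach a given message's payload.
template<class T>
class MsgQueue {
    std::mutex _mtx;
    std::condition_variable _cond;
    std::queue<std::unique_ptr<T>> _q;
public:
    void send(std::unique_ptr<T> msg) {
        {
            std::lock_guard<std::mutex> hold(_mtx);
            _q.push(std::move(msg));
        }
        _cond.notify_one();
    }
    std::unique_ptr<T> receive() {
        std::unique_lock<std::mutex> hold(_mtx);
        _cond.wait(hold, [this]{ return !_q.empty(); });
        std::unique_ptr<T> msg = std::move(_q.front());
        _q.pop();
        return msg;
    }
};

Only the queue’s internal bookkeeping is locked; the payload itself never needs a lock, because single ownership guarantees a single toucher.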

When multithreading is involved, a safer type system doesn’t really add complexity. It translates the hard complexity of shared-memory concurrency into something that is more approachable by mere mortals.

As always, the introduction of new type modifiers may lead to code duplication (as in the case of const and non-const accessors). This problem can be dealt with using qualifier polymorphism–a topic for a whole new blog post.

I understand that this post might be hard to follow. That doesn’t mean that the uniqueness scheme is difficult to understand. My task was not only to explain the hows, but also the whys–and that requires giving a lot of background.

Bibliography

  1. Jonathan Aldrich, Valentin Kostadinov, Craig Chambers, Alias Annotations for Program Understanding. Describes unique and lent annotations in detail.
  2. See also my previous blog on Generic Race-Free Java.

When I read Dan Grossman’s paper, Type-Safe Multithreading in Cyclone, I imagine Dan on a raft in the middle of a tropical cyclone, his threads soaked by the rain, frantically racing to build a shelter. Maybe a type system can’t protect you from the rain, but it sure can protect you from data races–even if the final product looks a little like a Rube Goldberg contraption.

Cyclone, as it turns out, is a safe dialect of C. Dan’s goal was to extend Cyclone’s type system to make it safe in a multithreaded environment. This is not an easy task in a language that uses pointers and manual memory management. The ideas are similar to those in the dialects of Java in my previous posts, but the type-theoretical concepts are pretty advanced.

-Locks

A Cyclone lock is a special type parameterized by name. Defining such entities requires a major type-system tour de force. Little known notions like “existential types,” “dependent types,” and “effect systems” come into play. The major point of all this magic is that you want to use lock names without knowing what they are.

You create a Cyclone lock using the following pseudo-syntax:

let lk<λ> = newlock();

I’ll use Greek letters for symbolic lock names (except for the special name, loc), since the author didn’t propose any concrete syntax and, admittedly, never finished the implementation.

What’s happening behind the scenes is that newlock returns an existential type, which is unpacked into variable lk of type lock_t<λ>. That variable may, for instance, be passed to a (generic) function taking lock_t<λ>.

The beauty of an existential type is that it won’t let the client “guess” the name of the lock. The name is totally opaque–there is not even a type for it. It can only be generated by the system in the call to newlock.

To synchronize a (possibly compound) statement, you precede it with sync(lk), where lk is a lock variable (or an expression that evaluates to such). Continuing the previous example, we may write:

sync(lk) ++*p;

which increments the value pointed to by p under the lock lk.

-Ownership

In an OO language, ownership is the key to a race-free type system. The owner of a given object has to be locked before the object may be accessed.

In a non-OO language, like Cyclone, instead of specifying the owner of data, you specify the lock that protects it. Since you might not have access to the lock variable when you are declaring data, the symbolic lock name is used instead.

Also, since all references in Cyclone are expressed through pointers, lock annotations are associated with pointers. (This is again parallel to annotating object references with their owner objects.)

We can now complete the example above with the annotated definition of p:

int *λ p = new 0;

The pointer p can only be accessed when the lock whose name is λ is held.
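C++ cannot express lock names in its types, but the pairing of data with its protecting lock can be approximated dynamically. Here’s a sketch of the idea (enforcement by encapsulation, not Cyclone’s static guarantee; the names are mine):

#include <mutex>

// Rough analogue of "int *λ p": the datum is reachable only while
// its paired lock is held.
template<class T>
class Locked {
    std::mutex _mtx;
    T _data;
public:
    explicit Locked(T init) : _data(init) {}
    template<class F>
    void with_lock(F f) {
        std::lock_guard<std::mutex> hold(_mtx);
        f(_data); // f gets a T& valid only for the duration of the call
    }
};

// The equivalent of: sync(lk) ++*p;
// Locked<int> p(0);
// p.with_lock([](int & v){ ++v; });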

-Local and Shared data

In Cyclone, thread-local data appear as a special case of shared data. A special lock name loc and a (global) variable nonlock are used to turn off the locking of individual instances. (Compare this to the dummy owner thisThread in the OO approach.)

Lock names are also used to parameterize functions and structs. In that case it might be necessary to encode the “sharability” of the type. A lock name of kind LS protects Shared data. A lock name of kind LU, on the other hand, may be Unsharable, i.e., it could be instantiated with the thread-local loc. LS is a sub-kind of LU.

The main purpose of “kinding” is to ensure that thread-local data can never be shared between threads.

-Example

All this sounds very dry without an actual (however contrived) example. Let’s start with a simple function inc:

void inc<λ:LU>(int *λ p; {λ}) 
{
    *p = *p + 1;
}

The function is parameterized by a lock name parameter λ of the LU kind. It means that it can be instantiated both in the sharing and thread-local context. Lock-name variable λ first appears in the declaration of the pointer parameter p. Data pointed to by p is protected by the lock named λ. That’s not telling a lot–every pointer in Cyclone is protected by some lock (or nonlock).

What makes the difference is that the same name λ appears in the effects part of the declaration. The type system keeps track of which locks are held at any point in the program; this set of locks is the “effect” under which each function may be called. Here, {λ} is an effect that contains one lock name, λ. In other words, the lock that protects p must be held before calling inc. (Of course, other locks might be held too.)

Here’s an example of how inc may be called from another function, inc2.

void inc2<λ:LU>(lock_t<λ> plk, int *λ p ;{}) 
{
    sync(plk) inc(p);
}

The effects part of the declaration of inc2 is empty. No locks need be held before calling it. But when inc is called, the current effects include the name of the lock specified in the declaration of plk (the sync section adds λ to the effects). Since the same lock name appears in the declaration of p, the precondition for calling inc is fulfilled.

A struct containing a pointer must also be an existential type. The actual value of λ is assigned to LkInt when the type is “packed” (created from its constituents).

struct LkInt 
{<λ:LS> 
    lock_t<λ> plk;
    int*λ p; 
};

Here, LkInt (locked integer) only makes sense in a sharing context, so λ is of the strictly sharing kind, LS. That allows LkInts to be passed between threads.

Here’s an example of using a pointer to LkInt:

void g<λ:LU>(struct LkInt *λ s ;{λ}) 
{
    let LkInt{<λ'> .plk=lk, .p=ptr} = *s;
    inc2(lk, ptr);
}

Function g is parameterized by some lock name of the LU kind (so it may be loc). That lock protects the LkInt argument s and must be held by the caller (λ is present in g’s effects).

Now s itself contains a pointer and a lock (of the LS kind). We get at them by unpacking an existential type–splitting it into constituents and giving them new names, lk and ptr. The name of lk is λ’. A shallow copy of s is performed in the process. We can now call inc2, which doesn’t require any effects, and uses lk to lock and increment ptr.

Finally, all these things come together in the example below:

void f(;{}) {
    let lk<λ> = newlock();
    int *λ p1 = new 0;
    int *loc p2 = new 0;
    struct LkInt *loc s = new LkInt{.plk=lk, .p=p1};
    spawn(g, s, sizeof(struct LkInt));
    inc2(lk, p1);
    inc2(nonlock, p2);
}

Here’s the play-by-play:

  • Function f requires no effects.
  • The programmer creates a lock lk with the name λ (unpacks an existential type returned by newlock).
  • She creates a pointer p1 to an integer protected by the lock named λ.
  • Then she creates an unprotected thread-local pointer p2 (using the dummy lock name, loc).
  • She combines the lock lk and the pointer p1 into a struct LkInt. This is an example of “packing” an existential type. Note that both components must share the same lock name. (The pointer to LkInt, s, is thread-local–lock name: loc.)
  • The programmer spawns a thread that executes the function g with the argument s. The argument is passed by a shallow copy, but deep contents (the lock and the pointer) are shared. In fact they must be shared with the kinding LS; otherwise spawn would not compile. This mechanism protects any thread-local data from being shared with a new thread.
  • While the call to g(s) executes in parallel, the programmer increments p1 under the lock lk. Notice that these are the same variables that the other thread operates on. The shared lock enforces mutual exclusion.
  • The increment of thread-local p2 requires no lock, that’s why dummy nonlock is used.

-Limitations

The biggest limitation of the Cyclone system is that it’s static. The association between data and locks is rigid. That makes several useful idioms impossible to implement. There is no way to “move” data between threads. You can either pass a pointer together with its lock (essentially creating a monitor), or pass data by value.

The other serious problem is the overall complexity of the system–especially the “effects” part. When effects have to be included and manipulated in generic code things get a little hairy.

Granted, a lot of annotations can be elided–the defaults “do the right thing” in most cases. Still, what remains is asking a lot from an average programmer.

-Conclusion

I would like to thank Dan Grossman for correcting some of my mistakes and explaining a few subtle points.

In my next installment I’ll describe a more recent race-free extension of C, SharC, which supports a mix of static and dynamic checking.


In my last post I criticized the C++0x implementation of futures for their lack of composability. I now feel obliged to show what I consider “the right way” to do futures.

I use the event library of Concurrent Caml (actually, its version distributed with OCaml) as my model. The ideas behind it are described in J. H. Reppy’s Ph.D. thesis, Higher-order concurrency.

Events and Futures

CML channels and events are more general than futures, but here I’ll concentrate on the subset of their functionality that can be used to implement futures. Below is a useful translation table between CML and C++. (I should add that CML channels, as opposed to C++ futures, are synchronous. In practice it doesn’t matter that much, since promise::set_value–corresponding to channel send–is usually called right before the thread exits.)

CML                                  C++
channel                              promise
event                                future
evt = receive chan                   ftr = prom.get_future()
sync (send chan msg)  (synchronous)  prom.set_value(msg)  (asynchronous)
msg = sync evt                       msg = ftr.get()
choose evt_lst                       ???
evt’ = wrap evt fun                  ???

As you can see, C++ has no equivalents of choose and wrap. I will discuss those next.

choose

CML choose is a function that takes a list of events and creates a new composite event. When the program syncs on this event, it blocks until one of the listed events completes. The combination of sync and choose is similar to Unix select or Windows WaitForMultipleObjects.

An example of the usefulness of choose is a situation when there is more than one algorithm to calculate a result, and the program runs them in parallel until the fastest one completes. Linear search, for instance, is faster for small sets than binary search. Instead of figuring out which one to use for each particular case, it might be cheaper (on a multicore) to run both and stop when the first one is done.

What would choose look like in C++? One possibility is to implement it as a container object. You would push individual futures into it and then call get_future to obtain a composite future. Let me call this object future_or since it does the equivalent of logical OR for futures (the equivalent of AND is called a thread barrier).

  unique_future<int> ftr1 = promise1.get_future();
  unique_future<int> ftr2 = promise2.get_future();
  future_or<int> orf;
  orf.push_back(std::move(ftr1));
  orf.push_back(std::move(ftr2));
  unique_future<int> compositeFut = orf.get_future();
  ...
  int result = compositeFut.get();

Another possible implementation, in C++0x, would be to define a variadic template function future_choose. It would take a variable number of futures and return the composite future. The advantage of such an approach would be to let the compiler infer the types and the number of arguments automatically.

You probably noticed that future_or can only accept futures of one type (the same would be true about the function future_choose). For instance, to create a composite future that returns a string upon get, you must build it from other futures that also return a string (and not an int, for instance).
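Neither future_or nor future_choose exists in any library I know of, but here’s a minimal sketch of how the homogeneous combinator could be built on top of standard futures–one helper thread per input, the first result wins (the names come from this post’s proposal; a real implementation would avoid the extra threads):

#include <atomic>
#include <future>
#include <memory>
#include <thread>
#include <utility>
#include <vector>

template<class T>
std::future<T> future_choose(std::vector<std::future<T>> futures) {
    auto prom = std::make_shared<std::promise<T>>();
    auto done = std::make_shared<std::atomic<bool>>(false);
    for (auto & f : futures) {
        // Each helper waits on one future; the first to finish
        // fulfills the shared promise, the rest find the flag set.
        std::thread([prom, done](std::future<T> fut) {
            T val = fut.get();
            if (!done->exchange(true))
                prom->set_value(std::move(val));
        }, std::move(f)).detach();
    }
    return prom->get_future();
}

With something like this, the linear-vs-binary search race above reduces to launching both searches and waiting on the future returned by future_choose.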

In principle, heterogeneous futures could be combined if they all returned different types through the same variant. Then, after a composite of variant-returning futures returns a variant, program logic would fork depending on the actual type inside the variant. This approach bypasses the static type system and is error prone. CML provides a more elegant solution using wrap.

wrap

The trick is to combine (wrap) a particular future with a function that processes its output. A future that returns an int is combined with a function that takes an int, etc. The result of this combination is a new future usually returning void.

When a wrapped future is synchronized (e.g., by calling wait), it executes the associated function, passing it the result returned by the original future. Just look at this example:

  void handler(Foo const & foo);
  unique_future<Foo> fooFuture = prom.get_future();
  // combine the future with a handler function
  unique_future<void> wrappedFuture = wrap_future(fooFuture, &handler);
  ...
  wrappedFuture.wait();
  // at this point fooFuture has returned a Foo object 
  // and the handler has been executed with that object.

There’s one thing worth mentioning: the compiler will ensure the type correctness of the arguments to wrap_future: the return type of the future must be compatible with the argument type of the function. Mismatch errors will be discovered at compile time. Compare this with the implementation using variants, which does not have this property.
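A possible implementation along these lines, using standard futures (a sketch only; the init-capture requires a post-C++0x compiler):

#include <future>
#include <utility>

template<class T, class F>
std::future<void> wrap_future(std::future<T> fut, F handler) {
    // Deferred: the handler runs in the thread that eventually
    // waits on (or gets) the wrapped future.
    return std::async(std::launch::deferred,
        [fut = std::move(fut), handler]() mutable {
            handler(fut.get()); // types checked at compile time
        });
}

(To feed such wrapped futures into the future_choose sketch above, a void specialization would also be needed, since T val = fut.get() doesn’t compile for void.)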

Of course the usefulness of wrap really comes into play when you try to combine heterogeneous futures using future_choose. You just wrap each future with its own post-processing function (or callable object) and pass them all to future_choose. Wrapped futures are homogeneous–they all return void.

Here’s an example that uses C++0x lambdas (one of them being a closure):

  unique_future<Foo> fooFuture = prom1.get_future();
  unique_future<int> intFuture = prom2.get_future();
  int sum = 0;
  unique_future<void> composite = future_choose(
      wrap_future(fooFuture, [](Foo const & foo){ foo.Commit(); }),
      wrap_future(intFuture, [&sum](int i){ sum += i; }));
  ...
  composite.wait();

Wrapped futures would be useful, e.g., in implementing a server that listens to several communication channels.

What about Haskell?

As I described in my earlier post, unlike CML, Haskell implements asynchronous message passing. The original Concurrent Haskell paper argued against choose. Later however Avik Chaudhuri was able to show that the CML event library can be implemented on top of asynchronous MVars. So there are no theoretical obstacles to asynchronous implementation of futures.

Final remarks

The equivalent of choose called future_select has been proposed for Boost and C++0x, but it was turned down.

Interestingly enough, Microsoft’s Task Parallel Library for .NET implements higher-order concurrency. They have ContinueWith for wrapping futures and WaitAny for choice. I can only assume that it’s easier to implement those on a virtual machine.

I also wonder whether it would be more productive to implement the equivalent of the CML channels and events in C++, and treat futures as a special case.


Bibliography

  1. J. H. Reppy, Higher-order Concurrency. PhD thesis, Cornell University, 1992. Technical Report 92-1852.
  2. Avik Chaudhuri, A Concurrent ML Library in Concurrent Haskell.
