In my previous blog posts I described C++ implementations of two basic functional data structures: a persistent list and a persistent red-black tree. I made an argument that persistent data structures are good for concurrency because of their immutability. In this post I will explain in much more detail the role of immutability in concurrent programming and argue that functional data structures make immutability scalable and composable.
Concurrency in 5 Minutes
To understand the role of functional data structures in concurrent programming we first have to understand concurrent programming. Okay, so maybe one blog post is not enough, but I’ll try my best at mercilessly slashing through the complexities and intricacies of concurrency while brutally ignoring all the details and subtleties.
The archetype for all concurrency is message passing. Without some form of message passing you have no communication between processes, threads, tasks, or whatever your units of execution are. The two parts of “message passing” loosely correspond to data (message) and action (passing). So there is the fiddling with data by one thread, some kind of handover between threads, and then the fiddling with data by another thread. The handover process requires synchronization.
There are two fundamental problems with this picture: Fiddling without proper synchronization leads to data races, and too much synchronization leads to deadlocks.
Communicating Processes
Let’s start with a simpler world and assume that our concurrent participants share no memory — in that case they are called processes. And indeed it might be physically impossible to share memory between isolated units because of distances or hardware protection. In that case messages are just pieces of data that are magically transported between processes. You just put them (serialize, marshall) in a special buffer and tell the system to transmit them to someone else, who then picks them up from the system.
So the problem reduces to the proper synchronization protocols. The theory behind such systems is the good old CSP (Communicating Sequential Processes) from the 1970s. It has subsequently been extended to the Actor Model and has been very successful in Erlang. There are no data races in Erlang because of the isolation of processes, and no traditional deadlocks because there are no locks (although you can have distributed deadlocks when processes are blocked on receiving messages from each other).
The fact that Erlang’s concurrency is process-based doesn’t mean that it’s heavy-weight. The Erlang runtime is quite able to spawn thousands of light-weight user-level processes that, at the implementation level, may share the same address space. Isolation is enforced by the language rather than by the operating system. Banning direct sharing of memory is the key to Erlang’s success as the language for concurrent programming.
So why don’t we stop right there? Because shared memory is so much faster. It’s not a big deal if your messages are integers, but imagine passing a video buffer from one process to another. If you share the same address space (that is, you are passing data between threads rather than processes) why not just pass a pointer to it?
Shared Memory
Shared memory is like a canvas where threads collaborate in painting images, except that they stand on the opposite sides of the canvas and use guns rather than brushes. The only way they can avoid killing each other is if they shout “duck!” before opening fire. This is why I like to think of shared-memory concurrency as the extension of message passing. Even though the “message” is not physically moved, the right to access it must be passed between threads. The message itself can be of arbitrary complexity: it could be a single word of memory or a hash table with thousands of entries.
It’s very important to realize that this transfer of access rights is necessary at every level, starting with a simple write into a single memory location. The writing thread has to send a message “I have written” and the reading thread has to acknowledge it: “I have read.” In standard portable C++ this message exchange might look something like this:
```cpp
std::atomic<bool> x(false);

// thread one
x.store(true, std::memory_order_release);

// thread two
x.load(std::memory_order_acquire);
```
You rarely have to deal with such low-level code because it’s abstracted into higher-level libraries. You would, for instance, use locks for transferring access. A thread that acquires a lock gains unique access to the data structure that’s protected by it. It can freely modify it knowing that nobody else can see it. It’s the release of the lock that makes all those modifications visible to other threads. This release (e.g., mutex::unlock) is then matched with the subsequent acquire (e.g., mutex::lock) by another thread. In reality the locking protocol is more complicated, but at its core it is based on the same principle as message passing, with unlock corresponding to a message send (or, more generally, a broadcast) and lock to a message receive.
The point is, there is no sharing of memory without communication.
Immutable Data
The first rule of synchronization is:
The only time you don’t need synchronization is when the shared data is immutable.
We would like to use as much immutability in implementing concurrency as possible. It’s not only because code that doesn’t require synchronization is faster, but it’s also easier to write, maintain, and reason about. The only problem is that:
No object is born immutable.
Immutable objects never change, but all data, immutable or not, must be initialized before being read. And initialization means mutation. Static global data is initialized before entering main, so we don’t have to worry about it, but everything else goes through a construction phase.
First, we have to answer the question: At what point after initialization is data considered immutable?
Here’s what needs to happen: A thread has to somehow construct the data that is destined to be immutable. Depending on the structure of that data, this could be a very simple or a very complex process. Then the state of that data has to be frozen — no more changes are allowed. But still, before the data can be read by another thread, a synchronization event has to take place. Otherwise the other thread might see partially constructed data. This problem has been extensively discussed in articles about the singleton pattern, so I won’t go into more detail here.
One such synchronization event is the creation of the receiving thread. All data that had been frozen before the new thread was created is seen as immutable by that thread. That’s why it’s okay to pass immutable data as an argument to a thread function.
Another such event is message passing. It is always safe to pass a pointer to immutable data to another thread. The handover always involves the release/acquire protocol (as illustrated in the example above).
All memory writes that happened in the first thread before it released the message become visible to the acquiring thread after it received it.
The act of message passing establishes the “happens-before” relationship for all memory writes prior to it, and all memory reads after it. Again, these low-level details are rarely visible to the programmer, since they are hidden in libraries (channels, mailboxes, message queues, etc.). I’m pointing them out only because there is no protection in the language against the user inadvertently taking affairs into their own hands and messing things up. So creating an immutable object and passing a pointer to it to another thread through whatever message passing mechanism is safe. I also like to think of thread creation as a message passing event — the payload being the arguments to the thread function.
The beauty of this protocol is that, once the handover is done, the second (and the third, and the fourth, and so on…) thread can read the whole immutable data structure over and over again without any need for synchronization. The same is not true for shared mutable data structures! For such structures every read has to be synchronized at a non-trivial performance cost.
However, it can’t be stressed enough that this is just a protocol and any deviation from it may be fatal. There is no language mechanism in C++ that can enforce it.
Clusters
As I argued before, access rights to shared memory have to be tightly controlled. The problem is that shared memory is not partitioned nicely into separate areas, each with its own army, police, and border controls. Even though we understand that an object is frozen after construction and ready to be examined by other threads without synchronization, we have to ask ourselves the question: Where exactly does this object begin and end in memory? And how do we know that nobody else claims writing privileges to any of its parts? After all, in C++ it’s pointers all the way. This is one of the biggest problems faced by imperative programmers trying to harness concurrency — who’s pointing where?
For instance, what does it mean to get access to an immutable linked list? Obviously, it’s not enough that the head of the list never changes, every single element of the list must be immutable as well. In fact, any memory that can be transitively accessed from the head of the list must be immutable. Only then can you safely forgo synchronization when accessing the list, as you would in a single-threaded program. This transitive closure of memory accessible starting from a given pointer is often called a cluster. So when you’re constructing an immutable object, you have to be able to freeze the whole cluster before you can pass it to other threads.
But that’s not all! You must also guarantee that there are no mutable pointers outside of the cluster pointing to any part of it. Such pointers could be inadvertently used to modify the data other threads believe to be immutable.
That means the construction of an immutable object is a very delicate operation. You not only have to make sure you don’t leak any pointers, but you have to inspect every component you use in building your object for potential leaks — you either have to trust all your subcontractors or inspect their code under the microscope. This clearly is no way to build software! We need something that is scalable and composable. Enter…
Functional Data Structures
Functional data structures let you construct new immutable objects by composing existing immutable objects.
Remember, an immutable object is a complete cluster with no pointers sticking out of it, and no mutable pointers poking into it. A sum of such objects is still an immutable cluster. As long as the constructor of a functional data structure doesn’t violate the immutability of its arguments and does not leak mutable pointers to the memory it is allocating itself, the result will be another immutable object.
Of course, it would be nice if immutability were enforced by the type system, as it is in the D language. In C++ we have to replace the type system with discipline, but still, it helps to know exactly what the terms of the immutability contract are. For instance, make sure you pass only (const) references to other immutable objects to the constructor of an immutable object.
Let’s now review the example of the persistent binary tree from my previous post to see how it follows the principles I described above. In particular, let me show you that every Tree forms an immutable cluster, as long as user data is stored in it by value (or is likewise immutable).
The proof proceeds through structural induction, but it’s easy to understand. An empty tree forms an immutable cluster trivially. A non-empty tree is created by combining two other trees. We can assume by the inductive step that both of them form immutable clusters:
```cpp
Tree(Tree const & lft, T val, Tree const & rgt)
```

In particular, there are no external mutating pointers to lft, rgt, or to any of their nodes.
Inside the constructor we allocate a fresh node and pass it the three arguments:
```cpp
Tree(Tree const & lft, T val, Tree const & rgt)
    : _root(std::make_shared<const Node>(lft._root, val, rgt._root))
{}
```
Here _root is a private member of the Tree:

```cpp
std::shared_ptr<const Node> _root;
```

and Node is a private struct defined inside Tree:
```cpp
struct Node
{
    Node(std::shared_ptr<const Node> const & lft
        , T val
        , std::shared_ptr<const Node> const & rgt)
        : _lft(lft), _val(val), _rgt(rgt)
    {}
    std::shared_ptr<const Node> _lft;
    T _val;
    std::shared_ptr<const Node> _rgt;
};
```
Notice that the only reference to the newly allocated Node is stored in _root through a const pointer and is never leaked. Moreover, there are no methods of the tree that either modify it or expose any part of it to modification. Therefore the newly constructed Tree forms an immutable cluster (with the usual caveat that you don’t try to bypass the C++ type system or use other dirty tricks).
As I discussed before, there is some bookkeeping related to reference counting in C++, which is however totally transparent to the user of functional data structures.
Conclusion
Immutable data structures play an important role in concurrency but there’s more to them than meets the eye. In this post I tried to demonstrate how to use them safely and productively. In particular, functional data structures provide a scalable and composable framework for working with immutable objects.
Of course not all problems of concurrency can be solved with immutability and not all immutable objects can be easily created from other immutable objects. The classic example is a doubly-linked list: you can’t add a new element to it without modifying pointers in it. But there is a surprising variety of composable immutable data structures that can be used in C++ without breaking the rules. I will continue describing them in my future blog posts.
December 10, 2013 at 10:58 am
And of course, it is possible to implement a purely functional doubly-linked list, if one really wants to: http://okmij.org/ftp/Haskell/AlgorithmsH1.html#pure-cyclic-list
December 10, 2013 at 2:59 pm
@Franklin: Indeed! Another level of indirection (IntMap, in this case) can solve any problem 😉
December 10, 2013 at 11:21 pm
C++11 allows immutable data to be initialized at compile time through constexpr and literal types at namespace level. However, that means no heap memory, etc., is allowed.
December 11, 2013 at 9:15 am
Good article 😉
December 11, 2013 at 10:44 am
I’m a big fan of unique references’ potential for concurrency:
– passing a unique reference is like ducking behind the canvas — it’s the right of access that is passing.
– every object is born uniquely
– “The only time you don’t need synchronization is when the shared data is immutable.” — you don’t need synchronisation if you’re the sole owner (and you know it statically)
If you don’t know it already, this paper is a nice one: “Uniqueness and Reference Immutability for Safe Parallelism”, Gordon et al. Link: http://homes.cs.washington.edu/~csgordon/papers/oopsla12.pdf
December 11, 2013 at 1:11 pm
@Stephan: I couldn’t agree more. I don’t know if you realize but I’ve been writing about the ownership system with unique_ptr and move semantics ever since I was involved in the design of the D language concurrency.
December 11, 2013 at 2:01 pm
@Bartosz: right, I remember now that I saw something like that on your blog — I’ll need to re-read it to refresh my memory. One detail that I love about uniques is that uniques to encapsulation are like finals (in java) to immutability: if you have a unique ref to something that has only unique refs internally, then you know that a) it’s tree shaped, b) it’s completely encapsulated from the rest of the heap. So encapsulation is achievable by consistently using unique refs, just like immutability is achieved consistently using final refs/fields.
The immutable aggregate allows unconstrained, lock-free sharing, while the encapsulated aggregate allows unconstrained, lock-free mutation (amongst others, you can have plenty of fun with this: change an aggregate’s type? You got it! Want to share it? Sure, cast to immutable, then share!)
I’m actually currently working on a language that somewhat builds on this notion, hence my first comment.
December 27, 2013 at 5:10 pm
Reblogged this on narenbabuin.
January 1, 2014 at 11:00 pm
There are really no good tutorials about concurrency in C++, especially if you compare to Java. Thanks for this great effort and please keep it coming.
January 9, 2014 at 8:20 am
If you are passing the rights to access a shared memory location, I would imagine that implies that the last process to do anything with that allocated memory would have to dispose of it. What about rights to a persistent memory location, like a hard drive? Would it be OK to pass those around? Or are you just talking about volatile memory?
January 9, 2014 at 12:36 pm
@Arturo: If you wanted threads to take turns writing to a file, you could do it using similar approach — for instance, passing around the file handle, making sure only one thread at a time uses it.
January 21, 2014 at 10:55 am
[…] The previous blog post in this series was about functional data structures and concurrency. […]
June 9, 2014 at 10:36 am
[…] trick with functional data structures is that they appear immutable, and therefore require no synchronization when accessed from multiple […]
July 12, 2015 at 11:42 am
[…] from: https://bartoszmilewski.com/2013/12/10/functional-data-structures-and-concurrency-in-c/ […]
November 17, 2017 at 9:21 am
Reblogged this on Mohd Maqbool Alam.