Multithreading
October 3, 2011
C++11 Concurrency Tutorial: 5. Tasks
Posted by Bartosz Milewski under C++, Concurrency, Multicore, Multithreading, Parallelism, Programming, Tutorial | 4 Comments
This video tutorial took a lot of effort because of my inflated expectations. I thought that std::async was a gateway to task-based parallelism; it turned out to be much less than that, since the Standard doesn’t require implementations to schedule async tasks on a thread pool. I blogged about task-based concurrency in The Future of Concurrent Programming and, in the context of Haskell, in Parallel Programming with Hints. And of course there is the problem of the lack of composability of futures. So for the next 10 or so years we’ll have to stick to libraries, such as Microsoft PPL or Intel TBB, or even OpenMP. Or experiment with other languages.
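To make the composability complaint concrete, here is a minimal sketch of my own (not code from the video), using only what C++11 ships: a future has no then, so chaining one task after another forces a blocking get in between.

    #include <future>
    #include <iostream>

    int step1() { return 1; }
    int step2(int x) { return x + 1; }

    int main() {
        std::future<int> f = std::async(std::launch::async, step1);
        // There is no f.then(step2) in C++11: to chain the next task
        // we have to block on get(), tying up this thread until step1 is done.
        int intermediate = f.get();
        std::future<int> g = std::async(std::launch::async, step2, intermediate);
        std::cout << g.get() << '\n';
    }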
September 26, 2011
C++11 Concurrency Tutorial: Part 4
Posted by Bartosz Milewski under C++, Concurrency, Multicore, Multithreading, Parallelism, Programming, Tutorial | 10 Comments
After a two-week break spent at the Intel Developer Forum and StrangeLoop, I finally had the time to record the fourth tutorial in the series. This time I’m showing how futures and promises work together to pass results back from threads. I also show how calling a function asynchronously can be simplified using async. Next time I’ll talk more about async tasks and parallelism.
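Here is a minimal sketch of the two patterns as I understand them: first a hand-rolled promise/future pair, then the same thing compressed into a single std::async call.

    #include <future>
    #include <iostream>
    #include <thread>

    int compute() { return 42; }

    int main() {
        // By hand: the worker thread fulfills the promise,
        // the caller blocks on the matching future.
        std::promise<int> prom;
        std::future<int> fut = prom.get_future();
        std::thread th([&prom] { prom.set_value(compute()); });
        std::cout << fut.get() << '\n';
        th.join();

        // std::async packages the same promise/future plumbing into one call.
        std::future<int> fut2 = std::async(std::launch::async, compute);
        std::cout << fut2.get() << '\n';
    }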
September 12, 2011
C++11 Concurrency Tutorial: 3. Sharing
Posted by Bartosz Milewski under C++, Concurrency, Multithreading, Programming, Tutorial | 5 Comments
The third installment of the tutorial is now online. I’m talking about shared memory concurrency and races, but with a tiny dollop of theory on top. It’s all illustrated with simple examples. Enjoy!
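For flavor, here is the classic kind of race the tutorial deals with, in a small sketch of my own: two threads increment a shared counter, and a mutex is what makes the increment safe.

    #include <iostream>
    #include <mutex>
    #include <thread>

    int counter = 0;          // shared state
    std::mutex counterMutex;  // protects counter

    void increment() {
        for (int i = 0; i < 100000; ++i) {
            // Without this lock the read-modify-write of ++counter
            // is a data race and the final count is unpredictable.
            std::lock_guard<std::mutex> lock(counterMutex);
            ++counter;
        }
    }

    int main() {
        std::thread t1(increment);
        std::thread t2(increment);
        t1.join();
        t2.join();
        std::cout << counter << '\n';  // always 200000 with the lock
    }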
September 6, 2011
C++11 Concurrency Tutorial: Part 2
Posted by Bartosz Milewski under C++, Concurrency, Multithreading, Programming, Tutorial | 2 Comments
The second installment of the tutorial is now online. I’m talking about passing arguments to threads and about move semantics, including the new rvalue references. Enjoy!
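A small sketch of the mechanics discussed here (my example, not the video’s): thread arguments are copied by default, passed by reference only through std::ref, and movable objects can be transferred into the thread with std::move.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <thread>

    void greet(std::string const & name) {
        std::cout << "Hello, " << name << '\n';
    }

    int main() {
        std::string name = "world";
        std::thread t1(greet, name);            // name is copied into the thread
        std::thread t2(greet, std::ref(name));  // a reference is passed instead

        std::string big(1000, 'x');
        std::thread t3([](std::string s) { std::cout << s.size() << '\n'; },
                       std::move(big));         // moved, not copied; big is now empty
        t1.join();
        t2.join();
        t3.join();
    }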
August 29, 2011
C++11 Concurrency Tutorial
Posted by Bartosz Milewski under C++, Concurrency, Multithreading, Parallelism, Programming, Tutorial | 17 Comments
I’m starting a series of hands-on tutorials on concurrent programming using C++11. This first installment jumps right into “Hello Thread!” and a discussion of fork/join parallelism. I also sneaked in some lambdas and even one closure. Enjoy!
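Something along these lines is the natural “Hello Thread!” (a sketch of mine, not necessarily the video’s exact code): the lambda captures a local variable, forming a closure, and join is the join half of fork/join.

    #include <iostream>
    #include <string>
    #include <thread>

    int main() {
        std::string greeting = "Hello Thread!";
        // The lambda captures greeting by reference: a closure.
        std::thread t([&greeting] { std::cout << greeting << '\n'; });
        t.join();  // fork/join: the parent waits for the child to finish
    }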
August 15, 2011
Data Races at the Processor Level
Posted by Bartosz Milewski under Concurrency, Memory Model, Multicore, Multithreading, Parallelism, Programming, x86 | 1 Comment
Back to concurrency — this time at the lowest level. Is it possible to detect a data race by looking at the assembly instructions executing on an x86 multicore processor? Find out on my other blog.
June 27, 2011
The Language of Concurrency Video
Posted by Bartosz Milewski under Atomics, Concurrency, Distributed Programming, Memory Model, Multicore, Multithreading, Parallelism, Programming | 7 Comments
By popular demand, I turned my introductory webinar into a video presentation. The purpose of this 50-minute presentation is to familiarize the viewer with the basic ideas of concurrent programming. Here’s the list of topics:
- Processes vs. Threads
- Multithreading vs. Parallelization
- Shared Memory vs. Message Passing
- Data Races and Atomicity Violations
- Relaxed Memory Models
- Sequential Consistency and DRF Guarantee
- Risks of Concurrency
- Debugging Concurrent Programs
Comments and suggestions for future videos are very welcome.
June 6, 2011
Introduction to Concurrency Webinar
Posted by Bartosz Milewski under Concurrency, Multicore, Multithreading, Parallelism, Programming, webinar | 6 Comments
Tomorrow, June 7th, at 9 a.m. PDT (12 p.m. EDT), I’ll be presenting a webinar, The Language of Concurrency (the same as two weeks ago).
May 21, 2011
Concurrency Webinar
Posted by Bartosz Milewski under Concurrency, Multicore, Multithreading, Parallelism, Programming | 7 Comments
I’m experimenting with new media, so I prepared a one-hour webinar on concurrency. You can’t really say much in an hour, so I’ll just give a broad overview of the domain without going into too much detail. Join me this Tuesday, May 24, 2011, at 12 p.m. PDT. The webinar is free, courtesy of my employer, Corensic. You can preview the slides and, if you’re interested, register on the same page (you don’t have to fill out all the details).
May 20, 2011
Boostcon Day 4
Posted by Bartosz Milewski under Atomics, C++, Concurrency, Memory Model, Monads, Multicore, Multithreading, Programming | 1 Comment
I went to only one talk, not because the rest weren’t interesting (quite the contrary), but because I spent the time working with Joel and Hartmut on rewriting Proto. I think we essentially got it: we have the expression monad implemented, my “compile” function turned out to be the equivalent of a Proto transform (but with much more flexibility), and expression extension produced a little Lambda EDSL with placeholders for arguments and even const terminals. It works like a charm. If you don’t know what I’m talking about, I promise to finish my blog series on monads in C++ real soon now.
The talk I went to was Chris Kohlhoff talking more about Asio, the asynchronous IO library. He was showing how the new features of C++11 make his code simpler, safer, and more flexible without too much effort. In particular he found move semantics extremely helpful in reducing (or, in some cases, eliminating) the need for memory allocation in steady state, an important property when running in an embedded system, for instance. But what I liked most was his approach to solving the inversion of control problem by implementing his own coroutines. Sure, he had to abuse C++ macros, but the code was actually much more readable and reflected the way we think about asynchronous calls.
The idea is that, with coroutines, you write your code in a linear way. You say “read socket asynchronously” and then yield. The flow of control exits the coroutine in the middle, and continues with other tasks. The trick is that the rest of the coroutine becomes your completion handler. When the async call completes, the coroutine is called again, but it starts executing right after the last yield. Your flow of control moves back and forth, but your code looks linear, instead of being fragmented into a bunch of handlers. It makes you wish coroutines were part of the language, as they are, for instance, in C#.
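The underlying trick, as I understand it, is a switch-based stackless coroutine. Here is a minimal sketch with made-up names (coro_t, session, the CORO_ macros); Asio’s actual macros are more elaborate, but the mechanism is the same:

    #include <iostream>

    // One int of state is all a stackless coroutine needs.
    struct coro_t { int state = 0; };

    #define CORO_BEGIN(c) switch ((c).state) { case 0:
    #define CORO_YIELD(c) do { (c).state = __LINE__; return; case __LINE__:; } while (0)
    #define CORO_END(c)   }

    struct session {
        coro_t coro;
        // Called once to start, then re-entered by each completion handler.
        void operator()() {
            CORO_BEGIN(coro);
            std::cout << "issue async read, then yield\n";
            CORO_YIELD(coro);  // control goes back to the event loop here
            std::cout << "read completed; resume right after the yield\n";
            CORO_YIELD(coro);
            std::cout << "write completed; done\n";
            CORO_END(coro);
        }
    };

    int main() {
        session s;
        s();  // runs until the first yield
        s();  // a "completion handler" re-enters after the first yield
        s();  // and again after the second yield
    }

Each CORO_YIELD records the current line number as the resume point and returns; the next call jumps straight to the matching case label, so the body reads linearly even though control leaves and re-enters it.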
By the way, I caught Hans Boehm while he was waiting for the airport shuttle and asked him about memory_order_relaxed. The problem is: can a relaxed load fetch an “out of thin air” value, that is, a value that has never been written by anybody? What I’m getting now is that in practice this will never happen, but it’s very hard to state the requirement formally. In other words: could a malicious chip manufacturer, in cahoots with a compiler writer, come up with a system that fulfills the letter of the C++ memory model and still lets you fetch an out-of-thin-air value? I think the answer is yes, because the language of the Standard is purposely vague on this topic:
(29.3.11) [ Note: The requirements do allow r1 == r2 == 42 in the following example, with x and y initially zero:
    // Thread 1:
    r1 = x.load(memory_order_relaxed);
    if (r1 == 42) y.store(r1, memory_order_relaxed);

    // Thread 2:
    r2 = y.load(memory_order_relaxed);
    if (r2 == 42) x.store(42, memory_order_relaxed);
However, implementations should not allow such behavior.—end note ]
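For the curious, here is the note’s example fleshed out into a compilable program (my scaffolding around the Standard’s snippet). On every real implementation it prints r1 = 0, r2 = 0; the letter of the model would also permit 42/42, with each store justifying the other’s load.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> x(0), y(0);
    int r1 = 0, r2 = 0;

    void thread1() {
        r1 = x.load(std::memory_order_relaxed);
        if (r1 == 42) y.store(r1, std::memory_order_relaxed);
    }

    void thread2() {
        r2 = y.load(std::memory_order_relaxed);
        if (r2 == 42) x.store(42, std::memory_order_relaxed);
    }

    int main() {
        std::thread t1(thread1), t2(thread2);
        t1.join();
        t2.join();
        // The circularity: each store fires only if the other already did,
        // so in practice neither fires and both loads see zero.
        std::printf("r1 = %d, r2 = %d\n", r1, r2);
    }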
