[If you prefer, you may watch the video of my talk on this topic (here are the slides).]
If you thought you were safe from functional programming in your cozy C++ niche, think again! First came lambdas and function objects, and now the monad, camouflaged as std::future. But do not despair, it's all just patterns. You won't find them in the Gang of Four book, but once you see them, they will become obvious.
Let me give you some background: I was very disappointed with the design of C++11 std::future. I described my misgivings in Broken Promises — C++0x Futures, and I also made a few suggestions as to how to fix it in Futures Done Right. Five years went by and, lo and behold, a proposal to improve std::future and related APIs, N3721, was presented to the Standards Committee for discussion. I thought it would be a no-brainer, since the proposal was fixing obvious holes in the original design. A week ago I attended the meetings of the C++ Standards Committee in Issaquah — since it was within driving distance from me — and was I in for a surprise! Apparently some design patterns that form the foundation of functional programming are not obvious to everybody. So now I find myself on the other side of the discussion and will try to explain why the improved design of std::future is right.
Design arguments are not easy. You can’t mathematically prove that one design is better than another, or a certain set of abstractions is better than another — unless you discover some obvious design flaws in one of them. You might have a gut feeling that a particular solution is elegant, but how do you argue about elegance?
Thankfully, when designing a library, there are some well-known and accepted criteria. The most important ones, in my mind, are orthogonality, a.k.a. separation of concerns, and composability. It also helps if the solution has been previously implemented and tested, especially in more than one language. I will argue that this is indeed the case with the extended std::future design. In the process, I will describe some programming patterns that might be new to C++ programmers but have been tried and tested in functional languages. They tend to pop up more and more in imperative languages, especially in connection with concurrency and parallelism.
The Problem
In a nutshell, the problem that std::future is trying to solve is that of returning the result of a computation that's being performed in parallel, or returning the result of an asynchronous call. For instance, you start a computation in a separate thread (or a more general execution agent) and you want to, at some point in time, get back the result of that computation. This is one of the simplest models of concurrency: delegating the execution of a function (a closure) to another thread.
To return a value from one thread to another you need some kind of a communication channel. One thread puts a value in the channel, another picks it up. Instead of providing one channel abstraction, as do ML and Haskell, C++11 splits it into two separate abstractions: the promise and the future. The promise is the push end of the channel, the future is the pull end. (In Rust there are similar objects called Chan and Port.)
The general pattern is for the client to construct a promise, get the future from it using get_future, and start a thread, passing it the promise. When the thread is done, it puts the result in the promise using set_value. In the meanwhile, the calling thread may do some other work and eventually decide to retrieve the result from the future by calling its method get. If the promise has been fulfilled, get returns immediately with the value; otherwise it blocks until the value is available.
This pattern involves some boilerplate code dealing with the promise side of things, so the Standard introduced a shortcut called std::async to simplify it. You call std::async with a plain function (closure) and its result is automatically put into a hidden promise. All the client sees is the future side of the channel. (I am simplifying things by ignoring exception handling and the various modes of starting async.)
The Functor Pattern
Here's the first abstraction: a future is an object that encapsulates a value. By itself, this would be a pretty useless abstraction unless the encapsulation came with some other functionality or restriction. For instance, std::unique_ptr encapsulates a value, but also manages the lifetime of the memory it occupies. A future encapsulates a value, but you might have to block to get it. Functional languages have a very useful pattern for just this kind of situation: the functor pattern (not to be confused with the C++ misnomer for a function object). A functor encapsulates a value of an arbitrary type, plus it lets you act on it with a function.

Notice that the functor doesn't necessarily give you access to the value — instead it lets you modify it. The beauty of it is that, in the case of a future, a functor gives you the means to modify a value that potentially isn't there yet — and it lets you do it without blocking. Of course, behind the scenes, the function (closure) that you provide is stored in the future and only applied when the value is ready and is being accessed using get.
The first part of the fix that was proposed to the Committee was to turn std::future into a functor. Technically, this is done by adding a new method, then:

template<typename F> auto future::then(F&& func) -> future<decltype(func(*this))>;

This method takes a function object func to be applied to the future in question. The result is a new future of the type returned by the function object, decltype(func(*this)).
Things are slightly muddled by the fact that a future encapsulates not only the value to be calculated but also the possibility of an exception. This is why the function passed to then takes the whole future, from which it can extract the value using get, which at that point is guaranteed not to block, but may rethrow an exception. There is an additional proposal, N3865, to introduce another method, next, that would deal only with the value, not the exception. The advantage of next is that it could be called with a regular function unaware of the existence of futures, with no additional boilerplate. For simplicity, I'll be using next in what follows.
The functor pattern makes perfect sense for composing a regular function on top of an asynchronous function (one returning a future), but it's more general than that. Any time you have an object that is parameterized by an arbitrary type, you might be dealing with a functor. In C++, that would be a template class that doesn't impose any restrictions on its template argument. Most containers have this property. In order for a generic class to be a functor it must also support a means to operate on its contents. Most containers in the STL provide this functionality through the algorithm std::transform. For an imperative programmer it might come as a surprise that such disparate things as futures and containers fall under the same functional pattern — a functor.
Unlike in functional languages, in C++ there is no natural reusable expression of the functor pattern, so it's more of a pattern in the head of the programmer. For instance, because of memory-management considerations, std::transform operates on iterators rather than containers — the storage for the target container must be either pre-allocated or allocated on demand through iterator adapters. One could try to provide iterator adapters for futures, so they could be operated on by std::transform, but ultimately the transformation has to act on the internals of the future (i.e., store the function object in it), so it either has to be a method or a friend of the future.
The Monad Pattern
The functor pattern is not enough to provide full composability for futures. The likely scenario is that the user creates a library of future-returning functions, each performing a specific task. He or she then needs the means to combine such functions into more complex tasks. This is, for instance, the case when combining asynchronous operations, such as opening a file and then reading from it. Suppose we have the async_open function that returns a file-handle future:

future<HANDLE> async_open(const string &);

and the async_read function that takes a file handle and returns a future with the buffer filled with data:

future<Buffer> async_read(HANDLE fh);
If you combine the two using next, the result will be a future of a future:

future<future<Buffer>> ffBuf = async_open("foo").next(&async_read);

In order to continue chaining such calls without blocking — for instance, to asynchronously process the buffer — you need a way to collapse the double future to a single future and then call next on it.
The collapsing method, unwrap, is another part of the extended future proposal. When called on a future<future<T>> it returns a future<T>. It lets you chain asynchronous functions using next followed by unwrap.
async_open("foo").next(&async_read).unwrap().next(&async_process);
In functional programming such a collapsing function is called join. The combination of next followed by unwrap (or, in Haskell, fmap followed by join) is so common that it has its own name, bind (in Haskell it's the operator >>=). It might make sense to make bind another method of future (possibly under a different name). [Edit: In fact, the proposal (N3721) is to overload then to automatically perform the unwrap whenever the result is a future of a future. This way then would also work as bind.]
There's one more important usage pattern: a function that may execute asynchronously, but sometimes returns the result immediately. This often happens in recursive algorithms, when the recursion bottoms out. For instance, a parallel tree-traversal function may spawn asynchronous tasks for traversing the children of a node, but when it reaches a leaf, it might want to return the result synchronously. Instead of writing complicated conditional code at each level, it's easier to provide a "fake" future whose contents are immediately available — one whose get method never blocks. Such a fake future, and the function that creates it, make_ready_future, are also part of the proposal.
Together, the methods next (or then) and unwrap, and the function make_ready_future, are easily recognizable by a functional programmer as forming the monad pattern (in Haskell, they would be called, respectively, fmap, join, and return). It's a very general pattern for composing functions that return encapsulated values. Using a monad you may work with such functions directly, rather than unwrapping their results at every step. In the case of futures, this is an important issue, since the "unwrapping" means making a potentially blocking call to get and losing precious opportunities for parallelism. You want to set up as much computation as possible up front and let the system schedule the most advantageous execution.
Combining functions using next, unwrap (or, equivalently, bind), and make_ready_future is equivalent to specifying data dependencies between computations and letting the runtime explore opportunities for parallelism between independent computations.
The Applicative Pattern
The combinators then and next are designed for linear composition: the output of one computation serves as the input for another. A more general pattern requires combining multiple asynchronous sources of data. In functional programming the problem would be described as applying a function to multiple arguments, hence the name "applicative" pattern. A functional programmer would take a multi-argument function and "lift" it to accept futures instead of immediate values.
As expected, in imperative programming things are a little messier. You have to create a barrier for all the input futures, retrieve the values, and then pass them to the multi-argument function or algorithm. The proposal contains a function called when_all that implements the first part of the process — the barrier. It takes either a pair of iterators into a container of futures or a variable number of futures, and returns a future that fires when all the arguments are ready. Conceptually, it performs a logical AND of all the input futures.
The iterator version of when_all returns a future of a vector of futures, while the variadic version returns a future of a tuple of futures. It's up to the client to get the resulting vector or tuple and iterate over it. Because of that, it's not possible to directly chain the results of when_all the way then or next do.
If you're wondering how this kind of chaining is done in a functional language, you have to understand what partial application is. A function of many arguments doesn't have to be applied to all of its arguments at once. You can imagine that applying it to the first argument doesn't yield a value but rather a function of n-1 arguments. In C++11, this can be accomplished by calling std::bind, which takes a multi-parameter function and a value of the first argument, and returns a function object (a closure) that takes the remaining n-1 arguments (actually, you may pass it more than one argument at a time).
In this spirit, you could bind a multi-parameter function to a single future and get a future of a function of n-1 arguments. Then you are left with the problem of applying a future of a function to a future of an argument, and that's exactly what the applicative pattern is all about. In Haskell, the Applicative class defines the operator <*> that applies an encapsulated function to an encapsulated value.
The Monoid Pattern
A very common pattern is to start several computations in parallel and pick the one that finishes first. This is the basis of speculative computation, where you pitch several algorithms against each other. Or you might be waiting for any of a number of asynchronous events, and attend to them as soon as they happen.
At a minimum you would expect a combinator that acts like a logical OR of two futures. A functional programmer would immediately be on the lookout for the monoid pattern. A monoid is equipped with a binary operation and a unit element. If the binary operation on futures picks the one that finishes first, what should the unit future be? A unit combined with any element must give back that same element. Therefore we need a future that would lose the race with any other future. We could call this special future "never." Calling get on such a future would block forever.
In practice, one could slightly relax the definition of the “never” future. It would never return a result, but it could still throw an exception. A future like this could be used to implement a timeout. Pitching it against another future would either let the other future complete, or result in a timeout exception.
This is not the way the future-extension proposal went, though. The proposed combinator is called when_any and it takes either a pair of iterators into a container of futures or a variable number of futures. It returns a future of either a vector or a tuple of futures. It's up to the client to iterate over those futures and find the one (or the ones) that fired, by calling is_ready on each of them.
The advantage of this approach is that the client may still write code to wait for the remaining futures to finish. The disadvantage is that the client is responsible for writing a lot of boilerplate code, which will obscure the program logic.
Performance and Programming Considerations
An objection to using futures as the main vehicle for asynchronous programming was raised in N3896: Library Foundations for Asynchronous Operations. The point is that it's possible for an asynchronous API to have a result ready before the client has had the opportunity to provide the continuation by calling then (or next). This results in unnecessary synchronization, which may negatively impact performance.
The alternative approach is to pass the continuation (the handler) directly to the asynchronous API. This is how a lot of asynchronous APIs are implemented at the lowest level anyway. The two approaches don’t exclude each other, but supporting both at the same time, as proposed in N3896, adds a lot of complexity to the programming model.
From the programmer’s perspective, the continuation passing model of N3896 is probably the hardest to use. The programming model is that of a state machine, with the client responsible for writing handlers for every transition.
Futures provide a useful abstraction by reifying the anticipated values. The programmer can write code as if the values were there. Futures also provide a common language between concurrent, parallel, and asynchronous worlds. It doesn’t matter if a value is to be evaluated by spawning a thread, creating a lightweight execution agent, or by calling an asynchronous API, as long as it’s encapsulated in a future. The compositional and expressional power of futures is well founded in major patterns of functional programming: the functor, the monad, the applicative, and the monoid.
There is another, even more attractive programming model that's been proposed for C++, Resumable Functions, which makes asynchronous code look more like sequential code. This is based on a trick that's well known to Haskell programmers in the form of the "do" notation. In C++, a resumable function would be chopped by the compiler into a series of continuations separated by await keywords. Instead of creating a future and calling then with a lambda function, the programmer would insert await and continue writing code as if the value were available synchronously.
Acknowledgment
I’d like to thank Artur Laksberg for reading the draft of this blog and providing useful feedback.
February 26, 2014 at 5:06 pm
I think this is the first of your “Haskell tutorials” that clicks in my head. Thanks.
February 27, 2014 at 7:29 am
Having implemented part of N3784 for e.g. my bachelor thesis (http://abergmeier.blogspot.de/2014/02/another-first-in-thesis.html) I am somewhat irritated by the different proposals and the lack of public information about the discussion taking place inside of ISO C++.
February 27, 2014 at 1:36 pm
Thanks for this! I am woefully ignorant of Haskell. Seems there’s some worthy reading ahead.
February 27, 2014 at 10:15 pm
"Resumable functions" sounds pretty much like await/async methods from C#.
Well, it's nice to see that mainstream languages are finally starting to include automatic CPS-converters.
February 28, 2014 at 1:45 am
For quite some time as a C++ programmer I’ve ignored functional programming like Haskell or Clojure, now it’s a good time to start thinking in terms of functional programming and how that can be applied to C++. Thanks for the very good read, it was very fruitful and I’ll return to this once again when doing my research on the topic.
March 1, 2014 at 4:55 pm
Typo: in “Such fake future and the function that creates it called make_ready_feature are also part of the proposal”, I think you meant “make_ready_future”.
March 2, 2014 at 12:02 pm
@Paul: Fixed, Thanks!
March 2, 2014 at 1:30 pm
There is another blog post I wrote on this topic two years ago, which explains C++ futures and asynchronous API in terms of the Haskell continuation monad: https://www.fpcomplete.com/blog/2012/06/asynchronous-api-in-c-and-the-continuation-monad .
March 4, 2014 at 9:41 am
Thanks for pointing out the connections to functional programming language constructs; it looks like there’s a cohesive mental model there that’s helpful. On the other hand, it seems like a real game-changer would be the ability to block waiting for a value from another concurrently executing activity without incurring undue overhead. I hope that’s what “await” will bring to the table.
March 13, 2014 at 6:57 pm
@Joker_vD That’s no coincidence, look at the emails of the authors of the linked Resumable Functions proposal.
March 19, 2014 at 1:42 pm
Interesting read, thanks for sharing!
I suggested that we should aim for a composable std::future some 5 years ago when the boost future was being drafted … but I didn’t get much traction: http://lists.boost.org/Archives/boost/2009/01/146797.php
I’ll admit I also have a Haskell background 🙂
March 19, 2014 at 4:15 pm
I’ve been toying with monads in C++ recently as well. The idea was to build a continuation monad on top of the Boost.Asio library and use the ErrorT and ReaderT monad transformers to add error checking and cancelability into the final monad. The toy can be found here
https://github.com/inetic/masio
I haven't yet grokked the Applicative pattern so the implementation I have is also using analogs of the when_all and when_any functions, which is probably where I ran out of ideas.
When cancelability is added, another interesting function from the when_* family that I found useful is "all_or_none", which could be described as a logical AND with short-circuiting (?). What it does is wait for all cancelable futures, as when_all does, but if one fails, it cancels all the rest of them. Is this something that could be modeled nicely in a functional language?
March 20, 2014 at 11:22 am
@inetic: Cancellation is a hard problem in any non-functional language because of resource cleanup. Simon Marlow has a whole chapter about cancellations and timeouts in “Parallel and Concurrent Programming in Haskell.” You might want to look it up.
March 25, 2014 at 5:45 pm
There is “make_ready_future” proposed which is useful to return a future when actually the value is already known at return time.
Will there also be a “maybe_future” which, given a future will just passthrough, but when given a non-future will auto-wrap it into one?
This is useful when calling functions that may or may not return a future. E.g. in Twisted (Python) this is “maybeDeferred”.
March 25, 2014 at 5:51 pm
Regarding cancelation, and FWIW, Twisted allows to provide a “canceler” callback at creation time of a future (Deferred in Twisted land). The “canceler” function will get called when the user cancels the future.
Would it make sense to have:
std::promise p( .. here the canceler callback ..);
std::future f = p.get_future();
and then p.cancel() will make f fail with a canceled_exception.
Probably also p.cancel(exception instance) to allow propagation of canceling details.
April 4, 2014 at 1:02 pm
Do you plan on giving a talk at CPPCon?
April 4, 2014 at 2:23 pm
I’ll be out of the country during CPPCon. But I’m giving a talk at C++ Now in May.
July 1, 2014 at 7:25 am
I'd love to hear your feedback on a small async monad I created for one of my company's to-be-opensourced projects. It is designed to be used with libev to implement co-routines. Since this is a fairly small project I couldn't justify spending too much time polishing it, but it still turned out to be pretty useful. [Embedded gist: deferred.hpp, a non-blocking deferred abstraction with success/fail callbacks, monadic resolved/failed constructors, a then combinator, and an otherwise fallback.]
[…] that makes a functional language do imperative operations is preposterous. On the other hand, Bartosz Milewski is convinced that concurrency in C++ would be improved greatly if it would be structured the […]
February 4, 2015 at 10:23 am
Regarding the performance problems mentioned in N3896:
I think you can resolve some of them with the following construction:
async_open("foo").next(&async_read).unwrap().next(&async_process).execute();
You create a chain of commands, then execute the whole chain with execute() at the end. This style separates the construction of the monadic chain from its execution.
Let me know what you think.
February 4, 2015 at 11:36 am
@Hoang: This is just one use case. Requiring a call to execute would complicate the other use case, where you just start a single task without a continuation (maybe even more common?). Maybe some kind of flag should distinguish between the case when you want the task to start without delay and the case when you want to build an expression tree first and then fire it off with execute. Something like "suspend" or "prepare". E.g. async_open("foo", async_expr::build_suspended).
July 21, 2015 at 5:55 am
@Bartosz: We can solve the syntax complication by using a type conversion trick. For example:
async_open(“foo”) returns an expression tree
.next() method extends the tree
The expression tree will be executed when either:
* The expression tree is cast to std::future or some other type
* The expression tree’s destructor is called before any execution
In many cases, we don’t need to call the execute() method explicitly.
July 21, 2015 at 11:34 am
@Hoang: Hold it right there. Executing anything nontrivial when the destructor is called is a really bad idea.
July 22, 2015 at 8:11 am
@Bartosz: The destructor case is for the scenario when the user creates an async task without casting it to a std::future. E.g.:
// ... ;
async_open("foo");
It means he/she can never retrieve the result or exception from the task. Therefore, it can be executed at any time before the termination of the program without changing the semantics. There could be a workaround for the destructor, for example, a global orphaned-task list.
The key idea here is that laziness in the construction and evaluation of the task improves performance.
August 31, 2015 at 6:06 pm
[…] The addition of monadic operations makes it possible to define an API that attaches continuations to be executed when the result is ready, freeing you from the problem of blocking the current thread (which essentially kills parallelism). C++ is a language that currently has abstractions for futures and promises, but lacks the op…. […]
October 13, 2015 at 9:54 pm
Wonderful write-up. Thanks for your contribution; maybe there’s some hope for C++ in the long run. 🙂
December 8, 2015 at 8:53 am
[…] lot has been written about improved futures. See Dr. Dobb’s, Facebook’s Folly, and monadic futures. I won’t repeat that here but an example of .then() is in order. This example is also […]
January 31, 2016 at 3:49 pm
[…] Finally, this Twitter exchange between Gor Nishanov (one of the proposal authors) and Bartosz Milewski (one of the best writers on the intersection of functional programming and C++) made me feel a little better. Gor Nishanov knows that C++ coroutines are a monad in sheep’s clothing, so hopefully this proposal won’t repeat the mistakes of std::future. […]
May 2, 2016 at 12:52 pm
Isn’t the future/promise pattern more a co-monad than a monad?
https://hackage.haskell.org/package/comonad-5/docs/Control-Comonad.html#v:extract
Where the extract function would be the std::future::get method, and extend would be the future::then method you defined here (with the two arguments flipped). If so, given – as you wrote – that next followed by unwrap is essentially the monadic bind, we now have something that is both a monad and a co-monad(?). Are there any interesting implications from the category theory point of view?
May 2, 2016 at 4:38 pm
It’s actually quite common to have some way of extracting contents from a monad — except for the IO monad. So that alone doesn’t make it a comonad.
I don’t see a good candidate for extend (or, equivalently, duplicate). then doesn’t have the right signature. (Flipped) extend would have to take a future and a function from a future to a value, and it would return another future. And it can’t be a trivial function, because it has to “commute” with extract and have some associativity properties. In Haskell they are:

extend extract = id
extract . extend f = f
extend f . extend g = extend (f . extend g)

May 2, 2016 at 5:19 pm
I’m probably being confused by decltype, but breaking down the signature: it takes as the first argument this, which is a future<A>, so the first part of the signature checks. From the second parameter we can infer that func (the second argument) takes *this (of type future<A>) as an argument and returns some type B, so that part checks as well. Finally, the then function returns a value which, in a haskellish notation, could be written as future<B>, which checks the last requirement of the signature.
May 2, 2016 at 6:02 pm
Oh the vagaries of C++!
First of all, futures have to deal with exceptions. The way it was done was to require the continuation function to deal with exceptions. Therefore, instead of taking the value, the continuation takes the ready future. The value is already there; or not, if an exception was thrown. The continuation has to check for an exception.
The method then was supposed to implement both the functorial map (Haskell’s fmap) and the monadic bind (Haskell’s >>=). The functorial map is supposed to take a function that turns A into B, but because of the exception business it takes a future of A instead. When, on the other hand, you call then with a function that returns a future, it is implemented as a monadic bind. Again, because of the exception business, it takes a future instead of a value. Following this convoluted logic, comonadic extend would take a future of a future and return a future. I guess!
This blog was written many years ago, and I’m not sure where futures stand in the current version of C++. Are they functorial and monadic? I don’t know.
July 29, 2017 at 4:46 pm
If you say Monad, FP people will listen. If you say Pattern, OO people will follow. If you say Monad Pattern, I cannot follow, sorry; definitions really do matter in category theory.