Concurrent Composition
This page explains how to run multiple tasks concurrently using when_all.
Code snippets assume using namespace boost::capy; is in effect.
The Problem
Tasks are sequential by default. When you await multiple tasks:
task<void> sequential()
{
int a = co_await fetch_a(); // Wait for A
int b = co_await fetch_b(); // Then wait for B
int c = co_await fetch_c(); // Then wait for C
// Total time: A + B + C
}
Each task waits for the previous one to complete. For independent operations, this wastes time.
when_all
The when_all function launches multiple tasks concurrently and waits for
all of them to complete:
#include <boost/capy/when_all.hpp>
task<void> concurrent()
{
auto [a, b, c] = co_await when_all(
fetch_a(),
fetch_b(),
fetch_c()
);
// Total time: max(A, B, C)
}
All three fetches run concurrently. The co_await completes when the slowest
one finishes.
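If you want to see the overlap for yourself, a rough timing check is enough. The sketch below assumes the fetch_a/fetch_b/fetch_c tasks from above and an executor that is already driving the coroutine:
#include <chrono>
#include <iostream>
task<void> timed()
{
    auto const start = std::chrono::steady_clock::now();
    co_await when_all(fetch_a(), fetch_b(), fetch_c());
    auto const elapsed = std::chrono::steady_clock::now() - start;
    // Roughly max(A, B, C) rather than A + B + C
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count()
              << " ms\n";
}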
Return Value
when_all returns a tuple of results, with void types filtered out:
// All non-void: get a tuple of all results
auto [x, y] = co_await when_all(
task_returning_int(), // task<int>
task_returning_string() // task<std::string>
);
// x is int, y is std::string
// Mixed with void: void tasks don't contribute
auto [value] = co_await when_all(
task_returning_int(), // task<int>
task_void(), // task<void> - no contribution
task_void() // task<void> - no contribution
);
// value is int (only non-void result)
// All void: returns void
co_await when_all(
task_void(),
task_void()
);
// No tuple, no return value
Results appear in the same order as the input tasks.
Error Handling
Exceptions propagate from child tasks to the parent. When a task throws:
- The exception is captured
- Stop is requested for sibling tasks
- All tasks are allowed to complete (or respond to stop)
- The first exception is rethrown
task<void> handle_errors()
{
try {
co_await when_all(
might_fail(),
another_task(),
third_task()
);
} catch (std::exception const& e) {
// First exception from any child
std::cerr << "Error: " << e.what() << "\n";
}
}
First-Error Semantics
Only the first exception is captured; subsequent exceptions are discarded. This matches the behavior of most concurrent frameworks.
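As an illustration, here is a minimal sketch with two hypothetical children that both throw; the catch block sees only whichever exception was captured first, and the other is discarded:
#include <stdexcept>
#include <iostream>
task<void> fail_a()
{
    throw std::runtime_error("A failed");
    co_return; // unreachable, but makes this function a coroutine
}
task<void> fail_b()
{
    throw std::runtime_error("B failed");
    co_return; // unreachable, but makes this function a coroutine
}
task<void> observe_first_error()
{
    try {
        co_await when_all(fail_a(), fail_b());
    } catch (std::exception const& e) {
        // Only one of the two messages is ever printed here;
        // the other exception was discarded by when_all.
        std::cerr << "Error: " << e.what() << "\n";
    }
}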
Stop Propagation
When an error occurs, when_all requests stop for all sibling tasks. Tasks
that support cancellation can respond by exiting early:
task<void> cancellable_work()
{
auto token = co_await get_stop_token();
for (int i = 0; i < 1000; ++i)
{
if (token.stop_requested())
co_return; // Exit early
co_await do_chunk(i);
}
}
task<void> example()
{
// If failing_task throws, cancellable_work sees stop_requested
co_await when_all(
failing_task(),
cancellable_work()
);
}
Parent Stop Token
when_all forwards the parent’s stop token to children. If the parent is
cancelled, all children see the request:
task<void> parent()
{
// Parent has a stop token from run_async
co_await when_all(
child_a(), // Sees parent's stop token
child_b() // Sees parent's stop token
);
}
std::stop_source source;
run_async(ex, source.get_token())(parent());
// Later: cancel everything
source.request_stop();
Execution Model
All child tasks inherit the parent’s executor affinity:
task<void> parent() // Running on executor ex
{
co_await when_all(
child_a(), // Runs on ex
child_b() // Runs on ex
);
}
Children are launched via dispatch() on the executor, which may run them
inline or queue them depending on the executor implementation.
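To make the inline-or-queue distinction concrete, here is a hedged, standalone sketch. It is not capy's actual executor and all names below are hypothetical; it only illustrates that a dispatch-style launch may run work immediately when the caller is already on the executor, and queue it otherwise:
#include <deque>
#include <functional>
struct toy_executor
{
    std::deque<std::function<void()>>* ready;   // work waiting to be run
    bool caller_is_on_executor;
    void dispatch(std::function<void()> work) const
    {
        if (caller_is_on_executor)
            work();                               // run inline, no scheduling hop
        else
            ready->push_back(std::move(work));    // queue for the executor loop
    }
};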
No Parallelism by Default
With a single-threaded executor, tasks interleave but don’t run truly in parallel:
thread_pool pool(1); // Single thread
run_async(pool.get_executor())(parent());
// Tasks interleave at suspension points, but only one runs at a time
For true parallelism, use a multi-threaded pool:
thread_pool pool(4); // Four threads
run_async(pool.get_executor())(parent());
// Tasks may run on different threads
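One way to observe this is to print the thread id inside each child. This is only a sketch (worker and show_threads are hypothetical names), and the ids may or may not differ on any given run, since the pool is free to schedule both children on the same thread:
#include <iostream>
#include <thread>
task<void> worker(char const* name)
{
    // Output from concurrent children may interleave
    std::cout << name << " runs on thread " << std::this_thread::get_id() << "\n";
    co_return;
}
task<void> show_threads()
{
    co_await when_all(worker("a"), worker("b"));
}
thread_pool pool(4);
run_async(pool.get_executor())(show_threads());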
Example: Parallel HTTP Fetches
task<std::string> fetch(http_client& client, std::string url)
{
co_return co_await client.get(url);
}
task<void> fetch_all(http_client& client)
{
auto [home, about, contact] = co_await when_all(
fetch(client, "https://example.com/"),
fetch(client, "https://example.com/about"),
fetch(client, "https://example.com/contact")
);
std::cout << "Home: " << home.size() << " bytes\n";
std::cout << "About: " << about.size() << " bytes\n";
std::cout << "Contact: " << contact.size() << " bytes\n";
}
When NOT to Use when_all
Use when_all when:
- Operations are independent
- You want to reduce total wait time
- You need all results before proceeding
Do NOT use when_all when:
- Operations depend on each other — use sequential co_await (see the sketch after this list)
- You need results as they complete — consider when_any (not yet available)
- Memory is constrained — concurrent tasks consume more memory
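When one step needs the previous step's result, when_all cannot help; sequential awaits express the dependency directly. A minimal sketch, where fetch_user, fetch_orders, and process are hypothetical:
task<void> dependent()
{
    auto user = co_await fetch_user();           // must finish first...
    auto orders = co_await fetch_orders(user);   // ...because this call needs the user
    process(orders);
}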
Summary
| Feature | Description |
|---|---|
| Purpose | Launch tasks concurrently, wait for all |
| Return type | Tuple of non-void results in input order |
| Error handling | First exception propagated, siblings get stop |
| Affinity | Children inherit parent’s executor |
| Stop propagation | Parent and sibling stop tokens forwarded |
Next Steps
- Cancellation — Stop token propagation
- Thread Pool — Multi-threaded execution
- Executor Affinity — Control where tasks run