Async Mutex
This page explains how to use async_mutex for coroutine-friendly mutual exclusion.
Code snippets assume using namespace boost::capy; is in effect.
The Problem
Standard mutexes block the calling thread. In a coroutine context, this wastes resources—a blocked thread could be running other coroutines. What you need is a mutex that suspends the coroutine instead of blocking the thread.
// BAD: Blocks the thread
std::mutex mtx;

task<void> bad_example()
{
    std::lock_guard lock(mtx); // Thread blocked while waiting
    // ... critical section ...
    co_return;
}

// GOOD: Suspends the coroutine
async_mutex mtx;

task<void> good_example()
{
    co_await mtx.lock(); // Coroutine suspends, thread is free
    // ... critical section ...
    mtx.unlock();
    co_return;
}
What is async_mutex?
An async_mutex provides mutual exclusion for coroutines. When a coroutine
attempts to acquire a locked mutex, it suspends and joins a wait queue. When
the holder unlocks, the next waiter is resumed.
#include <boost/capy/ex/async_mutex.hpp>

async_mutex mtx;

task<void> protected_operation()
{
    co_await mtx.lock();
    // Only one coroutine executes this section at a time
    do_work();
    mtx.unlock();
}
Zero Allocation
The wait queue uses intrusive linking—no heap allocation occurs when waiting. The queue node is stored in the awaiter, which lives on the coroutine frame:
Coroutine Frame: [...state...] [awaiter with queue node]
This makes async_mutex suitable for high-frequency locking scenarios.
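To make this concrete, here is a minimal sketch of what an intrusive lock awaiter can look like. The names and layout are illustrative assumptions, not boost::capy's actual implementation:

#include <coroutine>

// Sketch only: a wait-queue node embedded in the awaiter itself.
struct lock_awaiter
{
    async_mutex* mtx;                  // mutex being acquired
    lock_awaiter* next = nullptr;      // intrusive link to the next waiter
    std::coroutine_handle<> waiter;    // coroutine to resume on unlock

    bool await_ready() noexcept;                            // true if the lock was free
    void await_suspend(std::coroutine_handle<> h) noexcept; // store h, push onto the wait list
    void await_resume() noexcept {}                         // the lock is held when resumed
};

Because the awaiter is a temporary created by the co_await expression, its storage is part of the coroutine frame, so the mutex never allocates a node of its own.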
Basic Usage
lock_guard
The lock_guard class provides RAII semantics:
async_mutex::lock_guard guard = co_await mtx.scoped_lock();
// Move to extend lifetime
async_mutex::lock_guard g2 = std::move(guard);
// Guard unlocks in destructor
A moved-from guard is empty and does not unlock.
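Because the guard releases the mutex in its destructor, the lock is released on every exit path, including exceptions and early returns. A short sketch (do_work() is a placeholder):

task<void> raii_example()
{
    auto guard = co_await mtx.scoped_lock();
    do_work();   // placeholder: even if this throws or we co_return early,
                 // the guard unlocks when it is destroyed
    co_return;
}                // guard goes out of scope here and unlocks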
Query Lock State
Check if the mutex is currently held:
if (mtx.is_locked())
{
// Someone holds the lock
}
This is informational only—the state may change before you act on it.
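In particular, avoid using is_locked() to decide whether to lock: the answer can be stale by the next statement, and skipping the lock when it reports unlocked provides no exclusion. Acquire the mutex unconditionally instead (do_async_work() is a placeholder):

// Fragile: the state may change before you act on it,
// so this check is no substitute for actually taking the lock
if (!mtx.is_locked())
{
    co_await do_async_work(); // placeholder: shared state is not protected here
}

// Robust: acquire the lock; the coroutine suspends only if it must wait
co_await mtx.lock();
co_await do_async_work();
mtx.unlock();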
Thread Safety
async_mutex is NOT thread-safe. It is designed for single-threaded
use where multiple coroutines may contend for a resource.
For multi-threaded scenarios, combine with a strand:
// All access through the same strand
strand s(pool.get_executor());
async_mutex mtx;

task<void> multi_threaded_safe()
{
    co_await run_on(s, [&]() -> task<void> {
        auto guard = co_await mtx.scoped_lock();
        // Now safe: strand serializes, mutex excludes
        co_return;
    }());
}
Example: Protecting Shared State
class shared_counter
{
    async_mutex mtx_;
    int value_ = 0;

public:
    task<void> increment()
    {
        auto guard = co_await mtx_.scoped_lock();
        ++value_;
    }

    task<int> get()
    {
        auto guard = co_await mtx_.scoped_lock();
        co_return value_;
    }
};
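Using the counter from another coroutine is just a matter of awaiting its member tasks; every access to value_ goes through the mutex:

task<void> use_counter(shared_counter& counter)
{
    co_await counter.increment();
    int current = co_await counter.get();
    // Other coroutines may run between the two awaits, but none of them
    // touches value_ while one of the locked sections is executing.
    (void)current;
}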
Example: Serializing I/O
class serial_writer
{
    async_mutex mtx_;
    file& file_;

public:
    explicit serial_writer(file& f) : file_(f) {}

    task<void> write(std::string_view data)
    {
        auto guard = co_await mtx_.scoped_lock();
        // Only one write at a time
        co_await file_.async_write(data);
    }
};
async_mutex vs Strand
Both provide serialization, but differ in scope:
| Feature | async_mutex | Strand |
|---|---|---|
| Scope | Single resource | All operations through the strand |
| Overhead | Per-lock wait | Per-operation dispatch |
| Use case | Fine-grained locking | Coarse-grained serialization |
| Thread safety | Single-threaded only | Multi-threaded safe |
Use async_mutex for protecting specific resources. Use strands for broader
serialization of all operations.
When NOT to Use async_mutex
Use async_mutex when:
- You need fine-grained mutual exclusion
- Lock contention is expected
- Critical sections are short
Do NOT use async_mutex when:
- Operations are multi-threaded — combine with a strand
- Critical sections are long — consider restructuring
- You need condition variable semantics — not yet available
- A strand provides sufficient serialization — simpler is better
Summary
| Feature | Description |
|---|---|
| async_mutex | Non-blocking mutex for coroutines |
| lock() | Awaitable that acquires the mutex |
| unlock() | Releases the mutex |
| scoped_lock() | Returns a lock_guard that unlocks on destruction |
| is_locked() | Query current state |
| lock_guard | RAII wrapper for automatic unlock |
Next Steps
- Strands — Coarse-grained serialization
- Concurrent Composition — Running tasks in parallel
- Thread Pool — Multi-threaded execution