10 changes: 5 additions & 5 deletions doc/modules/ROOT/pages/2.cpp20-coroutines/2a.foundations.adoc
This model has a fundamental constraint: *run-to-completion*. Once a function starts executing, it must run until it returns.

== What Is a Coroutine?

A *coroutine* is a function that can suspend its execution and resume later from exactly where it left off. Think of it as a bookmark in a book of instructions—instead of reading the entire book in one sitting, you can mark your place, do something else, and return to continue reading.

When a coroutine suspends:

* Its local variables and current execution point are saved in the coroutine frame
* Control returns to the caller

When a coroutine resumes:
* Local variables are restored to their previous values
* Execution continues from the suspension point

This capability is implemented through a *coroutine frame*—a heap-allocated block of memory that stores the coroutine's state. Unlike stack frames, coroutine frames persist across suspension points because they live on the heap rather than the stack.


Coroutines also enable:

* *Generators* — Functions that produce sequences of values on demand, computing each value only when requested
* *State machines* — Complex control flow expressed as linear code with suspension points
* *Cooperative multitasking* — Multiple logical tasks interleaved on a single thread

You have now learned what coroutines are and why they exist. In the next section, you will learn the {cpp}20 syntax for creating coroutines.
16 changes: 8 additions & 8 deletions doc/modules/ROOT/pages/2.cpp20-coroutines/2b.syntax.adoc

=== co_yield

The `co_yield` keyword produces a value and suspends the coroutine. This pattern creates *generators*—functions that produce sequences of values one at a time. After yielding a value, the coroutine pauses until someone asks for the next value.


== Awaitables and Awaiters

When you write `co_await expr`, the expression `expr` must be an *awaitable*—something that knows how to suspend and resume a coroutine. The awaitable produces an *awaiter* object that implements three methods:

* `await_ready()` — Returns `true` if the result is immediately available and no suspension is needed
* `await_suspend(handle)` — Called when the coroutine suspends; receives a handle to the coroutine for later resumption
* `await_resume()` — Called when the coroutine resumes; its return value becomes the value of the `co_await` expression

=== Example: Understanding the Awaiter Protocol
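The example collapsed in this diff can be sketched as follows. The `manual_task` and `tracing_awaiter` names are illustrative, not from the original: a coroutine `counter` repeatedly awaits an awaiter that prints each protocol step, while the caller resumes it by hand.

[source,cpp]
----
#include <coroutine>
#include <cstdio>

// Minimal coroutine type whose handle the caller controls (illustrative).
struct manual_task {
    struct promise_type {
        manual_task get_return_object() {
            return {std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() { std::terminate(); }
    };
    std::coroutine_handle<promise_type> handle;
};

// An awaiter that reports each step of the protocol.
struct tracing_awaiter {
    bool await_ready() {
        std::puts("await_ready -> false (suspend)");
        return false;                          // always suspend
    }
    void await_suspend(std::coroutine_handle<>) {
        std::puts("await_suspend (coroutine paused)");
    }
    void await_resume() {
        std::puts("await_resume (coroutine resumed)");
    }
};

manual_task counter() {
    for (int i = 1; i <= 2; ++i) {
        std::printf("counter: i = %d\n", i);
        co_await tracing_awaiter{};            // suspend here; i survives
    }
}

int main() {
    manual_task t = counter();   // runs until the first co_await
    t.handle.resume();           // resume past the first suspension
    t.handle.resume();           // resume past the second; loop then ends
    t.handle.destroy();          // parked at final_suspend; free the frame
}
----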

The variable `i` inside `counter` maintains its value across all these suspension points.

The {cpp} standard library provides two predefined awaiters:

* `std::suspend_always` — `await_ready()` returns `false` (always suspend)
* `std::suspend_never` — `await_ready()` returns `true` (never suspend)

These are useful building blocks for promise types and custom awaitables.

[source,cpp]
----
co_await std::suspend_always{};
co_await std::suspend_never{};
----

You have now learned the three coroutine keywords and how awaitables work. In the next section, you will learn about the promise type and coroutine handle—the machinery that makes coroutines function.
14 changes: 7 additions & 7 deletions doc/modules/ROOT/pages/2.cpp20-coroutines/2c.machinery.adoc
= Part III: Coroutine Machinery

This section explains the promise type and coroutine handle—the core machinery that controls coroutine behavior. You will build a complete generator type by understanding how these pieces work together.

== Prerequisites


== The Promise Type

Every coroutine has an associated *promise type*. This type acts as a controller for the coroutine, defining how it behaves at key points in its lifecycle. The promise type is not something you pass to the coroutine—it is a nested type inside the coroutine's return type that the compiler uses automatically.

The compiler expects to find a type named `promise_type` nested inside your coroutine's return type. If your coroutine returns `Generator<int>`, the compiler looks for `Generator<int>::promise_type`.
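A complete minimal generator can sketch how the pieces fit. This is a simplified illustration, not the document's final type: there is no iterator interface, just hypothetical `next()` and `value()` accessors, and `yield_value` stashes each yielded value in the promise.

[source,cpp]
----
#include <coroutine>
#include <cstdio>
#include <exception>
#include <utility>

template <typename T>
struct Generator {
    struct promise_type {
        T current{};
        Generator get_return_object() {
            return Generator{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }  // frame freed by ~Generator
        std::suspend_always yield_value(T value) {
            current = value;        // stash the yielded value for the consumer
            return {};              // suspend after every co_yield
        }
        void return_void() {}
        void unhandled_exception() { std::terminate(); }
    };

    explicit Generator(std::coroutine_handle<promise_type> h) : handle(h) {}
    Generator(Generator&& other) noexcept : handle(std::exchange(other.handle, {})) {}
    Generator(const Generator&) = delete;
    ~Generator() { if (handle) handle.destroy(); }

    bool next() {                   // advance; false once the coroutine finished
        handle.resume();
        return !handle.done();
    }
    T value() const { return handle.promise().current; }

    std::coroutine_handle<promise_type> handle;
};

Generator<int> squares(int n) {
    for (int i = 1; i <= n; ++i)
        co_yield i * i;
}

int main() {
    auto gen = squares(3);
    while (gen.next())
        std::printf("%d\n", gen.value());
}
----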

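The compiler's rewrite of a coroutine body looks roughly like the following. This is pseudocode only, a simplified sketch that omits frame allocation and parameter copying:

[source,cpp]
----
// Pseudocode -- not real compilable code.
{
    promise_type promise;
    auto return_object = promise.get_return_object();  // created first
    co_await promise.initial_suspend();
    try {
        /* original coroutine body */
    } catch (...) {
        promise.unhandled_exception();
    }
    co_await promise.final_suspend();   // decides whether the frame persists
}
----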
Important observations:

* The return object is created before `initial_suspend()` runs, so it is available even if the coroutine suspends immediately
* `final_suspend()` determines whether the coroutine frame persists after completion—if it returns `suspend_always`, you must manually destroy the coroutine; if it returns `suspend_never`, the frame is destroyed automatically

=== Tracing Promise Behavior

== The Coroutine Handle

A `std::coroutine_handle<>` is a lightweight object that refers to a suspended coroutine.

=== Basic Operations

* `handle()` or `handle.resume()` — Resume the coroutine
* `handle.done()` — Returns `true` if the coroutine has completed
* `handle.destroy()` — Destroy the coroutine frame (frees memory)
* `handle.promise()` — Returns a reference to the promise object (typed handles only)
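A minimal illustration of these operations, using a hypothetical `Job` coroutine type (not from the original text): the caller drives the coroutine with `resume()`, polls `done()`, and frees the frame with `destroy()`.

[source,cpp]
----
#include <coroutine>
#include <cstdio>
#include <exception>

struct Job {
    struct promise_type {
        Job get_return_object() {
            return Job{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() { std::terminate(); }
    };
    std::coroutine_handle<promise_type> handle;
};

Job steps() {
    std::puts("step 1");
    co_await std::suspend_always{};
    std::puts("step 2");
}

int main() {
    auto job = steps();              // suspended at initial_suspend
    while (!job.handle.done())
        job.handle.resume();         // run to the next suspension point
    job.handle.destroy();            // final_suspend kept the frame alive; free it
}
----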

=== Typed vs Untyped Handles

16 changes: 8 additions & 8 deletions doc/modules/ROOT/pages/2.cpp20-coroutines/2d.advanced.adoc
This section covers advanced coroutine topics: symmetric transfer for efficient control flow, coroutine allocation, and exception handling.

== Symmetric Transfer

When a coroutine completes or awaits another coroutine, control must transfer somewhere. The naive approach—simply calling `handle.resume()`—has a problem: each nested coroutine adds a frame to the call stack. With deep nesting, you risk stack overflow.

*Symmetric transfer* solves this by returning a coroutine handle from `await_suspend`. Instead of resuming the target coroutine via a function call, the compiler generates a tail call that transfers control without growing the stack.
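The pattern can be sketched with a minimal task type. `Task` and `FinalAwaiter` here are illustrative names, not the document's types: awaiting a task returns the awaited handle from `await_suspend` (tail-call into the child), and the child's final awaiter returns the stored continuation (tail-call back to the parent).

[source,cpp]
----
#include <coroutine>
#include <cstdio>
#include <exception>
#include <utility>

struct Task {
    struct promise_type {
        std::coroutine_handle<> continuation = std::noop_coroutine();
        int result = 0;
        Task get_return_object() {
            return Task{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        struct FinalAwaiter {
            bool await_ready() noexcept { return false; }
            std::coroutine_handle<> await_suspend(
                std::coroutine_handle<promise_type> h) noexcept {
                return h.promise().continuation;  // symmetric transfer to the waiter
            }
            void await_resume() noexcept {}
        };
        FinalAwaiter final_suspend() noexcept { return {}; }
        void return_value(int v) { result = v; }
        void unhandled_exception() { std::terminate(); }
    };

    explicit Task(std::coroutine_handle<promise_type> h) : handle(h) {}
    Task(Task&& o) noexcept : handle(std::exchange(o.handle, {})) {}
    Task(const Task&) = delete;
    ~Task() { if (handle) handle.destroy(); }

    // Awaiter interface so one Task can co_await another.
    bool await_ready() { return false; }
    std::coroutine_handle<> await_suspend(std::coroutine_handle<> waiting) {
        handle.promise().continuation = waiting;
        return handle;   // tail-call into the awaited task; no stack growth
    }
    int await_resume() { return handle.promise().result; }

    std::coroutine_handle<promise_type> handle;
};

Task inner() { co_return 42; }

Task outer() {
    int v = co_await inner();
    co_return v + 1;
}

int main() {
    Task t = outer();
    t.handle.resume();   // drives the whole chain without nesting stack frames
    std::printf("%d\n", t.handle.promise().result);
}
----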


== Coroutine Allocation

Every coroutine needs memory for its *coroutine frame*—the heap-allocated structure holding local variables, parameters, and suspension state.

=== Default Allocation


You have now learned the complete mechanics of {cpp}20 coroutines:

* *Keywords* — `co_await`, `co_yield`, and `co_return` transform functions into coroutines
* *Promise types* — Control coroutine behavior at initialization, suspension, completion, and error handling
* *Coroutine handles* — Lightweight references for resuming, querying, and destroying coroutines
* *Symmetric transfer* — Efficient control flow without stack accumulation
* *Allocation* — Custom allocation and HALO optimization
* *Exception handling* — Capturing and propagating exceptions across suspension points

These fundamentals prepare you for understanding Capy's `task<T>` type and the IoAwaitable protocol, which build on standard coroutine machinery with executor affinity and stop token propagation.
18 changes: 9 additions & 9 deletions doc/modules/ROOT/pages/3.concurrency/3b.synchronization.adoc
This section introduces the dangers of shared data access and the synchronization tools that prevent data races.

== The Danger: Race Conditions

When multiple threads read the same data, all is well. But when at least one thread writes while others read or write, you have a *data race*. The result is undefined behavior—crashes, corruption, or silent errors.

Consider this code:

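The example collapsed in this diff likely resembles the following reconstruction: two threads incrementing a plain `int` with no synchronization.

[source,cpp]
----
#include <iostream>
#include <thread>

int counter = 0;   // shared and unprotected: this is the bug

void increment()
{
    for (int i = 0; i < 100'000; ++i)
        ++counter;             // read-modify-write, not atomic
}

int main()
{
    std::thread t1(increment);
    std::thread t2(increment);
    t1.join();
    t2.join();
    std::cout << counter << '\n';   // often less than 200000
}
----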

Two threads, each incrementing 100,000 times. You would expect 200,000. But run this repeatedly and you will see different results—180,000, 195,327, maybe occasionally 200,000. Something is wrong.

The `++counter` operation looks atomic—indivisible—but it is not. It actually consists of three steps:

1. Read the current value
2. Add one
3. Write the result back

Between any of these steps, the other thread might execute its own steps. Imagine both threads read `counter` when it is 5. Both add one, getting 6. Both write 6 back. Two increments, but the counter only went up by one. This is a *lost update*, a classic race condition.

The more threads, the more opportunity for races. The faster your processor, the more instructions execute between context switches, potentially hiding the bug—until one critical day in production.

== Mutual Exclusion: Mutexes

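The collapsed example can be sketched like this, deliberately using raw `lock()`/`unlock()` calls since the discussion that follows builds on them:

[source,cpp]
----
#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;
std::mutex counter_mutex;

void increment()
{
    for (int i = 0; i < 100'000; ++i)
    {
        counter_mutex.lock();     // only one thread may pass at a time
        ++counter;
        counter_mutex.unlock();
    }
}

int main()
{
    std::thread t1(increment);
    std::thread t2(increment);
    t1.join();
    t2.join();
    std::cout << counter << '\n';
}
----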

Now the output is always 200,000. The mutex ensures that between `lock()` and `unlock()`, only one thread executes. The increment is now effectively atomic.

But there is a problem with calling `lock()` and `unlock()` directly. If code between them throws an exception, `unlock()` never executes. The mutex stays locked forever, and any thread waiting for it blocks eternally—a *deadlock*.

== Lock Guards: Safety Through RAII

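The RAII version of the counter can be sketched as follows: `std::lock_guard` locks in its constructor and unlocks in its destructor, so the mutex is released on every exit path, including exceptions.

[source,cpp]
----
#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;
std::mutex counter_mutex;

void increment()
{
    for (int i = 0; i < 100'000; ++i)
    {
        std::lock_guard<std::mutex> guard(counter_mutex);  // unlocks on scope exit
        ++counter;
    }   // guard's destructor releases the mutex here, even if an exception is thrown
}

int main()
{
    std::thread t1(increment);
    std::thread t2(increment);
    t1.join();
    t2.join();
    std::cout << counter << '\n';
}
----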

=== Deadlock Prevention Rules

1. *Lock in consistent order* — Define a global ordering for mutexes and always lock in that order
2. *Use std::scoped_lock for multiple mutexes* — Let the library handle deadlock avoidance
3. *Hold locks for minimal time* — Reduce the window for contention
4. *Avoid nested locks when possible* — Simpler designs prevent deadlock by construction
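Rule 2 can be sketched with a hypothetical balance-transfer example: `std::scoped_lock` acquires both mutexes with a deadlock-avoidance algorithm, so even opposite lock orders are safe.

[source,cpp]
----
#include <iostream>
#include <mutex>
#include <thread>

int balance_a = 1000, balance_b = 1000;
std::mutex mutex_a, mutex_b;

void transfer_a_to_b(int amount)
{
    std::scoped_lock lock(mutex_a, mutex_b);   // both locked atomically
    balance_a -= amount;
    balance_b += amount;
}

void transfer_b_to_a(int amount)
{
    std::scoped_lock lock(mutex_b, mutex_a);   // opposite order, still no deadlock
    balance_b -= amount;
    balance_a += amount;
}

int main()
{
    std::thread t1([] { for (int i = 0; i < 1000; ++i) transfer_a_to_b(1); });
    std::thread t2([] { for (int i = 0; i < 1000; ++i) transfer_b_to_a(1); });
    t1.join();
    t2.join();
    std::cout << balance_a + balance_b << '\n';   // total is conserved
}
----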

You have now learned about race conditions, mutexes, lock guards, and deadlocks. In the next section, you will explore advanced synchronization primitives: atomics, condition variables, and shared locks.
22 changes: 11 additions & 11 deletions doc/modules/ROOT/pages/3.concurrency/3c.advanced.adoc
No mutex, no lock guard, yet the result is always 200,000. The `std::atomic<int>` performs each increment as a single indivisible operation.

=== When to Use Atomics

Atomics work best for single-variable operations: counters, flags, simple state. They are faster than mutexes when contention is low. But they cannot protect complex operations involving multiple variables—for that, you need mutexes.

Common atomic types include:

* `std::atomic<bool>` — Thread-safe boolean flag
* `std::atomic<int>` — Thread-safe integer counter
* `std::atomic<T*>` — Thread-safe pointer
* `std::atomic<std::shared_ptr<T>>` — Thread-safe shared pointer ({cpp}20)

Any trivially copyable type can be made atomic.
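A minimal version of the atomic counter described above can be sketched as:

[source,cpp]
----
#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int> counter{0};

void increment()
{
    for (int i = 0; i < 100'000; ++i)
        ++counter;   // atomic read-modify-write; no lock needed
}

int main()
{
    std::thread t1(increment);
    std::thread t2(increment);
    t1.join();
    t2.join();
    std::cout << counter.load() << '\n';
}
----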

== Condition Variables

The worker thread calls `cv.wait()`, which atomically releases the mutex and suspends the thread until it is notified.

=== The Predicate

The lambda `[]{ return ready; }` is the *predicate*. `wait()` will not return until this evaluates to true. This guards against *spurious wakeups*—rare events where a thread wakes without notification. Always use a predicate.

=== Notification Methods

* `notify_one()` — Wake a single waiting thread
* `notify_all()` — Wake all waiting threads

Use `notify_one()` when only one thread needs to proceed (e.g., producer-consumer with single consumer). Use `notify_all()` when multiple threads might need to check the condition (e.g., broadcast events, shutdown signals).
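The pieces above combine into the classic producer-consumer pattern. This is a sketch under assumed names (`queue`, `done`), not the document's original example:

[source,cpp]
----
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::queue<int> items;
bool done = false;

void producer()
{
    for (int i = 1; i <= 3; ++i)
    {
        { std::lock_guard lk(m); items.push(i); }  // modify shared state under the lock
        cv.notify_one();                           // wake the (single) consumer
    }
    { std::lock_guard lk(m); done = true; }
    cv.notify_one();
}

void consumer()
{
    std::unique_lock lk(m);
    for (;;)
    {
        cv.wait(lk, [] { return !items.empty() || done; });  // predicate guards spurious wakeups
        while (!items.empty())
        {
            std::cout << items.front() << '\n';
            items.pop();
        }
        if (done) return;
    }
}

int main()
{
    std::thread c(consumer);
    std::thread p(producer);
    p.join();
    c.join();
}
----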


== Shared Locks: Readers and Writers

Consider a data structure that is read frequently but written rarely. A regular mutex serializes all access—but why block readers from each other? Multiple threads can safely read simultaneously; only writes require exclusive access.

*Shared mutexes* support this pattern:

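A sketch of the reader/writer pattern (the `shared_value` variable and reader count are illustrative assumptions):

[source,cpp]
----
#include <iostream>
#include <mutex>
#include <shared_mutex>
#include <thread>
#include <vector>

std::shared_mutex mtx;
int shared_value = 0;

int reader()
{
    std::shared_lock lock(mtx);   // many readers may hold this concurrently
    return shared_value;
}

void writer(int value)
{
    std::unique_lock lock(mtx);   // exclusive: blocks readers and other writers
    shared_value = value;
}

int main()
{
    std::vector<std::thread> threads;
    threads.emplace_back(writer, 42);
    for (int i = 0; i < 4; ++i)
        threads.emplace_back([] { (void)reader(); });
    for (auto& t : threads)
        t.join();
    std::cout << shared_value << '\n';
}
----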
=== Lock Types

`std::shared_lock`::
Acquires a *shared lock*—multiple threads can hold shared locks simultaneously.

`std::unique_lock` (on shared_mutex)::
Acquires an *exclusive lock*—no other locks (shared or exclusive) can be held.

=== Behavior
