Avoid accidental blocking, tune buffers, and keep thread pools and queues explicit.
Async code can be fast, but it can also be wasteful: extra scheduling overhead, excess context switches, and hidden blocking calls all erode the gains.
This section helps you reason about performance in async systems: where queues form, how buffer sizes affect latency, how to spot blocking work in the wrong place, and how to keep thread pools and execution policies explicit.
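As a taste of what "blocking work in the wrong place" and "explicit thread pools" look like in practice, here is a minimal sketch using Python's asyncio. The names (`blocking_io`, `POOL`, `right`) are illustrative, not from any particular codebase: the point is that blocking work is offloaded to a pool whose size is visible, rather than stalling the event loop or relying on an implicit default executor.

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for any synchronous call: a driver,
# a file read, or a CPU-heavy computation.
def blocking_io() -> str:
    time.sleep(0.1)  # blocks the calling thread for 100 ms
    return "done"

# An explicit, bounded pool: its size is visible and tunable
# instead of being hidden inside a framework default.
POOL = ThreadPoolExecutor(max_workers=4, thread_name_prefix="io")

async def wrong() -> str:
    # Anti-pattern: calling blocking_io() directly here would
    # freeze the entire event loop for the duration of the call.
    return blocking_io()

async def right() -> str:
    # Offload the blocking call to the explicit pool so the
    # event loop stays free to run other coroutines.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(POOL, blocking_io)

async def main() -> tuple[list[str], float]:
    start = time.perf_counter()
    # Ten offloaded calls overlap across the 4 pool threads,
    # so total time is ~0.3 s instead of ~1.0 s sequentially.
    results = await asyncio.gather(*(right() for _ in range(10)))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(f"{len(results)} calls in {elapsed:.2f}s")
```

The same shape applies in other runtimes: the fix is always to name the pool, bound it, and route blocking work to it deliberately.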
The guiding principle is the same as in any performance work: measure first, then optimize what actually matters.
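"Measure first" can start very simply. The sketch below, again using asyncio and entirely hypothetical names (`loop_lag_probe`, `hog`), measures how late the event loop wakes a periodic timer: if wakeups drift far past the requested interval, something is hogging or blocking the loop and is worth investigating before any tuning.

```python
import asyncio
import time

async def loop_lag_probe(interval: float = 0.01, samples: int = 50) -> float:
    """Return the worst observed scheduling lag, in seconds.

    If other tasks block or hog the loop, sleep() wakeups come
    back later than requested; the excess is a rough pressure gauge.
    """
    worst = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        await asyncio.sleep(interval)
        lag = (time.perf_counter() - start) - interval
        worst = max(worst, lag)
    return worst

async def hog() -> None:
    # Simulated offender: blocks the loop in short bursts.
    for _ in range(20):
        time.sleep(0.005)       # blocking: freezes the whole loop
        await asyncio.sleep(0)  # yields briefly between bursts

async def main() -> tuple[float, float]:
    baseline = await loop_lag_probe()
    # Re-run the probe while the hog competes for the loop.
    lagging, _ = await asyncio.gather(loop_lag_probe(), hog())
    return baseline, lagging

baseline, lagging = asyncio.run(main())
print(f"baseline worst lag: {baseline * 1000:.1f} ms")
print(f"with blocking hog:  {lagging * 1000:.1f} ms")
```

A probe like this tells you whether there is a problem at all; only then is it worth reaching for buffers, pool sizes, or scheduling knobs.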