What You'll Learn
Rust's async model centers on a state machine called a [[future|Future]], not OS threads. We'll build an intuition for [[async-await|async fn]], .await, and what the runtime actually does—no heavy frameworks required, just small examples to understand the flow.
A quick metaphor: threads are like "hire more workers," while async is "do other tasks while waiting."
The difference is easier to see in a table:
| Model | Core idea | Best fit |
|---|---|---|
| Threads | Add more workers | CPU work, independent tasks |
| Async | Manage waiting efficiently | Network I/O, file I/O, long-latency work |
async fn and Future
Declaring an [[async-await|async fn]] doesn't run the body immediately; it returns a [[future|Future]], which is a plan that completes later. .await drives the future forward and hands control back to the runtime whenever it has to wait, without blocking the thread.
So calling an async fn does not mean "the work already finished." It means "the work is now described as a future that can be driven later."
```rust
async fn fetch_data(id: u32) -> String {
    format!("result-{id}")
}

fn main() {
    // The future exists, but nothing in its body has run yet.
    let future = fetch_data(42);
    println!("future created but not run: {}", std::any::type_name_of_val(&future));
}
```
This snippet doesn't touch the network, but it proves that async fn returns a Future. To actually run it, you need a runtime.
A Minimal Runtime: block_on
For learning, the futures crate ships executor::block_on.
```rust
use futures::executor::block_on;

async fn double_after_delay(x: u32) -> u32 {
    // Stands in for real async I/O; the control flow is what matters here.
    x * 2
}

fn main() {
    let result = block_on(double_after_delay(5));
    println!("result = {result}");
}
```
block_on blocks the current thread until the future completes. It's great for experiments but not a full application runtime.
You can think of it like this:
- block_on: "wait here until this one future finishes"
- tokio::main: "start a real async runtime that can manage many futures"
async main with tokio::main
tokio is one of the most common runtimes in production. Annotate main with #[tokio::main] and it bootstraps the runtime plus an async entry point for you.
```rust
#[tokio::main]
async fn main() {
    let user = fetch_user().await;
    println!("user = {user}");
}

async fn fetch_user() -> String {
    "mathbong".to_string()
}
```
Even without real network calls, this shows that other futures can run while one awaits. Internally, the runtime keeps worker threads that drive each future's state transitions.
At the beginner level, it is enough to picture the runtime as a manager that decides which future should resume next.
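Under the hood, "deciding which future should resume next" means calling a future's poll method. The std-only sketch below polls a future by hand with a no-op waker (this relies on Waker::noop, which is stable only in recent Rust releases); a future that never awaits completes on the very first poll:

```rust
use std::future::Future;
use std::task::{Context, Poll, Waker};

async fn answer() -> u32 {
    42
}

fn main() {
    // An async fn body compiles to a state machine; Box::pin gives us
    // a pinned future that we are allowed to poll.
    let mut fut = Box::pin(answer());
    let mut cx = Context::from_waker(Waker::noop());
    // A real runtime would loop here, re-polling when the waker fires.
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => println!("ready: {v}"), // prints "ready: 42"
        Poll::Pending => println!("pending: a runtime would park this future"),
    }
}
```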
Switching Mental Models
- Synchronous code holds onto a thread until a function finishes.
- Asynchronous code represents waiting as futures and lets the runtime switch to other work while it waits.
Wrapping CPU-bound code in async doesn't help much; the sweet spot is long-latency operations such as network and file I/O.
So a simple rule of thumb is:
- Mostly waiting -> async is often a good fit
- Mostly computing -> threads or parallelism usually matter more
await Chains and Error Handling
async fn can still return Result and use the ? operator.
```rust
async fn read_config() -> Result<String, std::io::Error> {
    let text = tokio::fs::read_to_string("config.toml").await?;
    Ok(text)
}

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let config = read_config().await?;
    println!("config = {}", config.trim());
    Ok(())
}
```
The ? operator exits the current future when the awaited result is Err, mirroring synchronous error handling.
Choosing a Runtime
- tokio: the go-to for network servers, CLIs, and async file I/O.
- async-std: mimics the standard library API for a friendlier learning curve.
- smol: a lightweight runtime suited to small tools or embedded use.
You don't need a full comparison today—just remember that runtimes schedule futures.
Practice in CodeSandbox
The sandbox below uses CodeSandbox's Rust starter. Move the main code into src/main.rs, then compare cargo check and cargo run so you can read the compiler feedback beside the final output.