The fastest async runtime for Rust, built using fibers/green-threads
MIT License
msrv: 1.80.0 stable
A safe, performant, and efficient async Rust runtime with the same syntax as async Rust and zero-cost ergonomics, based on stackful coroutines.
This library is not ready for production use. Many semantics and APIs are still under development.
This library is currently only available for Linux (contributions for other OSes are welcome). Windows and Mac users can run it in Docker or WSL; the official rust Docker container is sufficient. See features and development stage.
The following is a simple echo server example.
# support lib and async impls
cargo add --git https://github.com/davidzeng0/xx-core.git xx-core
# i/o engine and driver
cargo add --git https://github.com/davidzeng0/xx-pulse.git xx-pulse
In main.rs:
use xx_pulse::{Tcp, TcpListener};
use xx_pulse::impls::TaskExt;
use xx_core::error::{Error, Result};
use xx_core::macros::duration;

#[xx_pulse::main]
async fn main() -> Result<()> {
    let listener = Tcp::bind("127.0.0.1:8080").await?;

    // Operations can also be timed out, here after 5s
    let listener2: Option<Result<TcpListener>> = Tcp::bind("...")
        .timeout(duration!(5 s))
        .await;

    loop {
        let (mut client, _) = listener.accept().await?;

        xx_pulse::spawn(async move {
            let mut buf = [0; 1024];

            loop {
                let n = client.recv(&mut buf, Default::default()).await?;

                if n == 0 {
                    break;
                }

                // Echo back only the bytes that were actually received
                let n = client.send(&buf[..n], Default::default()).await?;

                if n == 0 {
                    break;
                }
            }

            Ok::<_, Error>(())
        }).await;
    }
}
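For comparison, the same echo round-trip can be written with blocking std networking; this std-only sketch (no xx-pulse involved, server address chosen by the OS rather than the :8080 above) is also handy for exercising the protocol:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn main() -> std::io::Result<()> {
    // Bind to an ephemeral port so the example does not clash with :8080.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    // One-shot echo server: accept a single client and echo until EOF.
    let server = thread::spawn(move || -> std::io::Result<()> {
        let (mut client, _) = listener.accept()?;
        let mut buf = [0u8; 1024];

        loop {
            let n = client.read(&mut buf)?;

            if n == 0 {
                break;
            }

            client.write_all(&buf[..n])?;
        }

        Ok(())
    });

    // Client side: send a message and read the echo back.
    let mut stream = TcpStream::connect(addr)?;
    stream.write_all(b"hello")?;

    let mut reply = [0u8; 5];
    stream.read_exact(&mut reply)?;
    assert_eq!(&reply, b"hello");

    drop(stream); // close the connection so the server thread sees EOF
    server.join().unwrap()?;
    Ok(())
}
```

Note how the async version above reads almost identically, minus the `.await` points and the per-connection task spawn.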
Thread local access is safe because of async/await syntax. A compiler error prevents usage of .await in synchronous functions and closures. xx-pulse uses cooperative scheduling, so it is impossible to suspend in a closure without using unsafe. To suspend anyway, see using sync code as if it were async.
#[asynchronous]
async fn try_use_thread_local() {
    THREAD_LOCAL.with(|value| {
        // Ok, cannot cause UB
        use_value_synchronously(value);
    });

    THREAD_LOCAL.with(|value| {
        // Compiler error: cannot use .await in a synchronous function/closure!
        do_async_stuff().await;
    });
}
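The standard library's `thread_local!` uses the same closure-based access pattern; this std-only sketch (the `COUNTER` name and `bump` helper are illustrative, not part of xx-pulse) shows the synchronous closure boundary that the safety argument relies on:

```rust
use std::cell::Cell;

thread_local! {
    // Per-thread counter; Cell gives interior mutability without locks.
    static COUNTER: Cell<u32> = Cell::new(0);
}

fn bump() -> u32 {
    // `with` runs the closure synchronously on the current thread's value.
    // There is no way to suspend inside it, which is exactly the property
    // a cooperative runtime relies on for thread-local safety.
    COUNTER.with(|c| {
        c.set(c.get() + 1);
        c.get()
    })
}

fn main() {
    assert_eq!(bump(), 1);
    assert_eq!(bump(), 2);
}
```

Because the closure can never yield, no other task can observe the thread-local mid-update.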
Available I/O Backends:
Currently supported architectures:
Current and planned features:
std::io, now with 100% less allocations, thread_local lookups, memory copies, and passing of owned buffers