LiveKit real-time SDK and server API for Rust
  • Assembly 53.9%
  • Rust 22.7%
  • C 12.1%
  • C++ 7.8%
  • JavaScript 3%
  • Other 0.3%
Binh Pham 2a87895dc2
Add livekit-wakeword crate with ONNX-based wake word detection (#926)
* Add melspectrogram ONNX model with feature extraction and tests

Introduces the livekit-wakeword crate with a MelspectrogramModel that
extracts mel-scaled spectrogram features from raw i16 PCM audio using
ONNX Runtime. Includes tests verifying output shape and time-frame scaling.

* Add embedding ONNX model with inference and tests

* Reorganize wakeword crate: flatten modules and separate ONNX files

- Move ONNX models from models/ to onnx/ to avoid naming collision
- Flatten src/models/ into top-level src/ modules
- Colocate unit tests with their respective modules
- Remove prelude.rs in favor of direct imports

* Implement WakeWordModel with shared constants and session helpers

- Add WakeWordModel: stateless detection pipeline (mel -> embeddings -> classifier)
- Classifiers loaded dynamically from ONNX files on disk
- Hoist constants (SAMPLE_RATE, MEL_BINS, EMBEDDING_WINDOW, etc.) to lib.rs
- Extract shared session builder helpers to eliminate duplication
- Consolidate tests into a single end-to-end predict test
- Add hey_livekit.onnx classifier model

* Apply cargo fmt formatting

* Auto generate changeset

* Add Apache 2.0 license headers to wakeword source files

* Add README for livekit-wakeword crate

* Scope changeset to livekit-wakeword and add package to knope.toml

* Switch wakeword ONNX backend from ort default to tract

Replace the default ONNX Runtime C++ backend with the pure-Rust
tract backend via ort-tract to avoid upstream test failures from
native library linking issues.

* Fix minimum audio duration in wakeword README (~2s, not ~4s)

* Remove ndarray from public API signatures in livekit-wakeword

Accept plain slices (&[i16], &[f32]) instead of ndarray types in the
public detect/predict methods, keeping ndarray as an internal detail.

* Replace Box<dyn Error> with WakeWordError enum in public API

Add a typed WakeWordError enum (via thiserror) covering Ort, Shape, Io,
and ModelNotFound cases, replacing dynamic error types across all public
methods.

* Add input sample rate resampling and make internal modules private

Accept any common sample rate (16k–384k Hz) via a new sample_rate
parameter on WakeWordModel::new(), resampling to 16 kHz internally
using the resampler crate. Make embedding and melspectrogram modules
pub(crate) since they are implementation details. Change
MelspectrogramModel::detect() to accept &[f32] directly, avoiding
a redundant i16→f32→i16→f32 round-trip when resampling.
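
The i16→f32 step mentioned above is the standard PCM rescaling by 1/32768. A minimal standalone sketch of that conversion (an illustration only, not the crate's actual code):

```rust
/// Convert signed 16-bit PCM samples to f32 in [-1.0, 1.0).
/// Standalone illustration of the i16 -> f32 step referenced above;
/// not the livekit-wakeword implementation itself.
fn pcm_i16_to_f32(samples: &[i16]) -> Vec<f32> {
    samples.iter().map(|&s| s as f32 / 32768.0).collect()
}

fn main() {
    let samples: [i16; 3] = [0, -32768, 16384];
    println!("{:?}", pcm_i16_to_f32(&samples)); // [0.0, -1.0, 0.5]
}
```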

* Move predict test to integration tests and run cargo fmt

* Rename integration test to integration.rs and add license header

* Replace FFT resampler with FIR-based resampler for low-latency streaming

Switch from ResamplerFft to ResamplerFir which accepts arbitrary input
buffer sizes, eliminating chunk management and zero-padding complexity.
Uses 64-sample latency (~1.3ms at 48kHz) with 90dB stopband attenuation.

* Exclude tract backend on aarch64-pc-windows-msvc to fix MSVC build

tract-linalg has ARM64 assembly (.S files) that MSVC cannot compile.
Use a build.rs to auto-detect the target and conditionally enable
ort-tract, falling back to native ONNX Runtime on aarch64-pc-windows-msvc.

* Fix mel spectrogram normalization and add WAV-based integration tests

The Rust mel spectrogram was missing the x/10 + 2 post-processing
normalization from the openWakeWord pipeline, causing near-zero
classifier scores. Added positive/negative WAV sample tests with
a 0.5 threshold to catch regressions.
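
The post-processing step named in this commit is a simple affine rescaling of each mel value. A standalone sketch of that x/10 + 2 normalization (illustrative only, not the crate's actual code):

```rust
/// Illustration of the openWakeWord mel post-processing referenced
/// above: each spectrogram value x is rescaled as x / 10 + 2.
/// Sketch only; not the livekit-wakeword implementation itself.
fn normalize_mel(frame: &mut [f32]) {
    for x in frame.iter_mut() {
        *x = *x / 10.0 + 2.0;
    }
}

fn main() {
    let mut frame = [10.0_f32, -20.0, 0.0];
    normalize_mel(&mut frame);
    println!("{:?}", frame); // [3.0, 0.0, 2.0]
}
```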

* Update changeset to reflect full PR scope

* Run cargo fmt on wakeword tests

---------

Co-authored-by: knope-bot[bot] <152252888+knope-bot[bot]@users.noreply.github.com>
2026-03-11 18:31:45 +08:00
.cargo chore: 'config' is deprecated in favor of config.toml (#312) 2024-02-15 21:26:25 +01:00
.changeset Add livekit-wakeword crate with ONNX-based wake word detection (#926) 2026-03-11 18:31:45 +08:00
.github update readme (#928) 2026-03-05 07:44:29 +01:00
.vscode fix: fix mute/unmute events for LocalTrack. (#799) 2025-12-02 13:57:50 +08:00
examples Upgrade example dependencies (#920) 2026-03-06 15:12:14 -08:00
imgproc chore: release (#881) 2026-02-12 03:36:52 +11:00
libwebrtc chore: release (#892) 2026-02-16 10:27:44 -08:00
livekit fix the video track subscription in single pc mode (#914) 2026-03-03 11:40:10 +08:00
livekit-api Bump jsonwebtoken to v10 to address CVE-2026-25537 (#917) 2026-02-26 09:25:13 -08:00
livekit-ffi Use knope bot for release management (#913) 2026-03-02 09:31:19 +01:00
livekit-ffi-node-bindings Fix node bindings release (#909) 2026-02-23 19:43:18 +01:00
livekit-protocol chore: release (#892) 2026-02-16 10:27:44 -08:00
livekit-runtime Use workspace dependencies & settings (#856) 2026-02-03 10:42:24 +11:00
livekit-uniffi Use workspace dependencies & settings (#856) 2026-02-03 10:42:24 +11:00
livekit-wakeword Add livekit-wakeword crate with ONNX-based wake word detection (#926) 2026-03-11 18:31:45 +08:00
soxr-sys chore: release (#845) 2026-02-08 17:15:48 -08:00
webrtc-sys webrtc-sys: Handle gracefully lack of libva on linux (#924) 2026-03-06 15:16:22 -08:00
yuv-sys chore: release (#881) 2026-02-12 03:36:52 +11:00
.gitignore clean up android .gitignore (#872) 2026-02-08 17:05:27 -08:00
.gitmodules move libyuv/imgproc from theomonnom/mikado (#508) 2024-12-10 21:01:31 +01:00
Cargo.lock Add livekit-wakeword crate with ONNX-based wake word detection (#926) 2026-03-11 18:31:45 +08:00
Cargo.toml Add livekit-wakeword crate with ONNX-based wake word detection (#926) 2026-03-11 18:31:45 +08:00
download_ffi.py Make download script backwards compatible with old tag format (#923) 2026-03-02 13:47:45 +01:00
knope.toml Add livekit-wakeword crate with ONNX-based wake word detection (#926) 2026-03-11 18:31:45 +08:00
LICENSE publish client-sdk-native (#12) 2023-01-02 20:13:48 +01:00
NOTICE Add license headers (#143) 2023-07-27 17:08:39 -07:00
README.md update readme (#928) 2026-03-05 07:44:29 +01:00
rustfmt.toml fix proto compilation & v0.3.2 (#315) 2024-02-28 19:09:22 +01:00


📹🎙️🦀 Rust Client SDK for LiveKit

Use this SDK to add real-time video, audio, and data features to your Rust app. By connecting to LiveKit Cloud or a self-hosted server, you can quickly build applications such as multi-modal AI, live streaming, or video calls with just a few lines of code.


Features

  • Receiving tracks
  • Publishing tracks
  • Data channels
  • Simulcast
  • SVC codecs (AV1/VP9)
  • Adaptive Streaming
  • Dynacast
  • Hardware video enc/dec
    • H.264, H.265 using VideoToolbox (macOS/iOS)
    • H.264 on NVIDIA and AMD GPUs (Linux)
    • H.264, H.265 on NVIDIA Jetson (Linux)
  • Supported Platforms
    • Windows
    • macOS
    • Linux
    • iOS
    • Android

Crates

  • livekit-api: Server APIs and auth token generation
  • livekit: LiveKit real-time SDK
  • livekit-ffi: Internal crate, used to generate bindings for other languages
  • livekit-protocol: LiveKit protocol generated code

When adding the SDK as a dependency to your project, make sure to add the necessary rustflags to your cargo config, otherwise linking may fail.
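
The shape of such a cargo config is sketched below; the required flags themselves are target-specific and deliberately left as a placeholder here, so copy them from this repository's own `.cargo/config.toml` rather than from this sketch.

```toml
# .cargo/config.toml in your project root. Placeholder only:
# substitute the actual per-target flags from this repository's
# .cargo/config.toml for each target you build.
[build]
rustflags = []  # required flags go here
```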

Also, please refer to the list of supported platform toolkits.

Getting started

Currently, Tokio is required to use this SDK; however, we plan to make the async executor runtime-agnostic.

Using Server API

Generating an access token

use livekit_api::access_token;
use std::env;

fn create_token() -> Result<String, access_token::AccessTokenError> {
    let api_key = env::var("LIVEKIT_API_KEY").expect("LIVEKIT_API_KEY is not set");
    let api_secret = env::var("LIVEKIT_API_SECRET").expect("LIVEKIT_API_SECRET is not set");

    let token = access_token::AccessToken::with_api_key(&api_key, &api_secret)
        .with_identity("rust-bot")
        .with_name("Rust Bot")
        .with_grants(access_token::VideoGrants {
             room_join: true,
             room: "my-room".to_string(),
             ..Default::default()
        })
        .to_jwt();
    token
}

Creating a room with RoomService API

use livekit_api::services::room::{CreateRoomOptions, RoomClient};

#[tokio::main]
async fn main() {
    let room_service = RoomClient::new("http://localhost:7880").unwrap();

    let room = room_service
        .create_room("my_room", CreateRoomOptions::default())
        .await
        .unwrap();

    println!("Created room: {:?}", room);
}

Using Real-time SDK

Connect to a Room and listen for events:

use livekit::prelude::*;

#[tokio::main]
async fn main() -> Result<(), RoomError> {
    let (room, mut room_events) = Room::connect(&url, &token).await?;

    while let Some(event) = room_events.recv().await {
        match event {
            RoomEvent::TrackSubscribed { track, publication, participant } => {
                // ...
            }
            _ => {}
        }
    }

    Ok(())
}

Receive video frames of a subscribed track

...
use futures::StreamExt; // this trait is required for iterating on audio & video frames
use livekit::prelude::*;

match event {
    RoomEvent::TrackSubscribed { track, publication, participant } => {
        match track {
            RemoteTrack::Audio(audio_track) => {
                let rtc_track = audio_track.rtc_track();
                let mut audio_stream = NativeAudioStream::new(rtc_track);
                tokio::spawn(async move {
                    // Receive the audio frames in a new task
                    while let Some(audio_frame) = audio_stream.next().await {
                        log::info!("received audio frame - {audio_frame:#?}");
                    }
                });
            },
            RemoteTrack::Video(video_track) => {
                let rtc_track = video_track.rtc_track();
                let mut video_stream = NativeVideoStream::new(rtc_track);
                tokio::spawn(async move {
                    // Receive the video frames in a new task
                    while let Some(video_frame) = video_stream.next().await {
                        log::info!("received video frame - {video_frame:#?}");
                    }
                });
            },
        }
    },
    _ => {}
}

Examples

  • basic room: simple example connecting to a room.
  • wgpu_room: complete example app with video rendering using wgpu and egui.
  • mobile: mobile app targeting iOS and Android
  • play_from_disk: publish audio from a wav file
  • save_to_disk: save received audio to a wav file

Building

MacOS

When building on macOS, the -ObjC linker flag is needed. LiveKit's WebRTC implementation makes use of Objective-C libraries on the Mac. You may get the following error if the app isn't linked with -ObjC:

*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[RTCVideoCodecInfo nativeSdpVideoFormat]: unrecognized selector sent to instance 0x600003bc6660'
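
One common way to pass this flag is through a per-target cargo config entry; the snippet below is a sketch for a typical setup (the target sections are an assumption about your build, not taken from this repository):

```toml
# .cargo/config.toml: pass -ObjC to the linker on macOS targets.
# Sketch only; adjust the target triples to your build.
[target.aarch64-apple-darwin]
rustflags = ["-C", "link-arg=-ObjC"]

[target.x86_64-apple-darwin]
rustflags = ["-C", "link-arg=-ObjC"]
```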

Motivation and Design Goals

LiveKit aims to provide an open source, end-to-end WebRTC stack that works everywhere. We have two goals in mind with this SDK:

  1. Build a standalone, cross-platform LiveKit client SDK for Rustaceans.
  2. Build a common core for other platform-specific SDKs (e.g. Unity, Unreal, iOS, Android)

Regarding (2), we've already developed a number of client SDKs for several platforms and encountered a few challenges in the process:

  • There's a significant amount of business/control logic in our signaling protocol and WebRTC. Currently, this logic needs to be implemented in every new platform we support.
  • Interactions with media devices and encoding/decoding are specific to each platform and framework.
  • For multi-platform frameworks (e.g. Unity, Flutter, React Native), the aforementioned tasks proved to be extremely painful.

Thus, we posited that a Rust SDK, something we wanted to build anyway, could encapsulate all our business logic and platform-specific APIs into a clean set of abstractions and also serve as the foundation for our other SDKs!

We'll first use it as a basis for our Unity SDK (under development), but over time, it will power our other SDKs, as well.


LiveKit Ecosystem
Agents SDKs: Python · Node.js
LiveKit SDKs: Browser · Swift · Android · Flutter · React Native · Rust · Node.js · Python · Unity · Unity (WebGL) · ESP32 · C++
Starter Apps: Python Agent · TypeScript Agent · React App · SwiftUI App · Android App · Flutter App · React Native App · Web Embed
UI Components: React · Android Compose · SwiftUI · Flutter
Server APIs: Node.js · Golang · Ruby · Java/Kotlin · Python · Rust · PHP (community) · .NET (community)
Resources: Docs · Docs MCP Server · CLI · LiveKit Cloud
LiveKit Server OSS: LiveKit server · Egress · Ingress · SIP
Community: Developer Community · Slack · X · YouTube