🚀 Get Started

Integrate AI models with WebAssembly in just a few steps. Build, compile, and deploy secure, high-performance AI experiences.

1. Install WasmAI CLI

Install the WasmAI command-line interface to compile and package AI models as WebAssembly modules.

```shell
npm install -g @wasm-ai/cli
```
2. Convert AI Model to Wasm

Use the CLI to convert your trained model (PyTorch, TensorFlow, ONNX) into a WebAssembly module.

```shell
wasm-ai convert model.onnx --platform browser -o model.wasm
```
3. Implement in Your Project

Load the WebAssembly module in your JavaScript/TypeScript code and run AI inference directly in the browser.

```javascript
// Inside an async function (or a module with top-level await):
const model = await wasmAI.init('./model.wasm');
const result = await model.runInference(input);
```
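If you want to see what module loading looks like without the wasmAI helper, note that any compiled `.wasm` file is a standard WebAssembly binary, so the browser's built-in `WebAssembly` API can load it too. The sketch below is self-contained: instead of a real model file, it inlines a tiny hand-assembled module that exports a single `add` function.

```javascript
// A minimal, hand-assembled WebAssembly module standing in for model.wasm.
// It exports one function: add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, 1 body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// In a browser you would usually stream a real file instead:
//   const { instance } = await WebAssembly.instantiateStreaming(fetch('./model.wasm'));
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);

console.log(instance.exports.add(2, 3)); // 5
```

A real model binary loads the same way; the wrapper's job is the surrounding boilerplate (fetching, instantiating, marshalling inputs and outputs).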
4. Deploy Securely

WasmAI modules are sandboxed by default. Host them on any platform and run inference safely in users' browsers.
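The sandboxing comes from WebAssembly itself: a module has no ambient access to the DOM, network, or filesystem, and can only call functions you explicitly hand it through its import object. A minimal sketch of that capability model, using a tiny hand-assembled module that imports one function (`env.log`) and calls it with 42:

```javascript
// A hand-assembled module that imports env.log(i32) and exports run(),
// which simply calls log(42). It can reach nothing else.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00, // types: (i32)->(), ()->()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,                   // import from "env"
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                         //   "log" as func type 0
  0x03, 0x02, 0x01, 0x01,                                     // function 1 uses type 1
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export "run" = func 1
  0x0a, 0x08, 0x01, 0x06, 0x00,                               // code section, 1 body
  0x41, 0x2a, 0x10, 0x00, 0x0b,                               // i32.const 42, call 0, end
]);

const received = [];
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), {
  // The import object is the module's entire world: anything not listed
  // here simply does not exist from the module's point of view.
  env: { log: (value) => received.push(value) },
});

instance.exports.run();
console.log(received); // [42]
```

Because every capability flows through that import object, the host decides exactly what an untrusted module may touch.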

Need More Help?

Check out the full documentation, join the community, or explore ready-to-use templates in the CLI.