OmniRun
Documentation

Isolated virtual machines for AI agents. One API call to create, one to destroy. Every sandbox boots in under a second and runs its own kernel.

import { Sandbox } from '@omnirun/sdk';

const sandbox = await Sandbox.create('playground', {
  timeout: 300,
});

// sandbox.sandboxId → "sb_a1b2c3d4"

Quick Start

4-step walkthrough

Go from zero to running code in an isolated VM. Each step builds on the previous. The entire flow takes about 60 seconds.

Step 1 -- Install
npm install @omnirun/sdk
Step 2 -- Create & Run
const sandbox = await Sandbox.create('playground');
await sandbox.commands.run('echo "hello"');
Step 3 -- Files
await sandbox.files.write('/tmp/data.csv', content)
Step 4 -- Teardown
await sandbox.kill()
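Put together, the four steps form one short script. A sketch assuming @omnirun/sdk is installed and OMNIRUN_API_KEY is set in your environment; the CSV content and shell command are illustrative.

```typescript
import { Sandbox } from '@omnirun/sdk';

// Step 2: boot an isolated microVM from the 'playground' template
const sandbox = await Sandbox.create('playground', { timeout: 300 });

// Step 3: stage input data inside the sandbox
const csv = 'id,value\n1,42\n';
await sandbox.files.write('/tmp/data.csv', csv);

// Run a command against the uploaded file
await sandbox.commands.run('wc -l /tmp/data.csv');

// Step 4: tear the sandbox down so it stops counting against your limit
await sandbox.kill();
```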

TypeScript SDK

Installation & auth

Install the SDK and set your API key. The SDK automatically reads OMNIRUN_API_KEY from your environment.

All methods return typed responses. The SDK supports both ESM and CommonJS.

Install
npm install @omnirun/sdk
Usage
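A hedged sketch of typical usage, collecting the calls shown throughout these docs (Sandbox.create, commands.run, files.write/read, expose, kill); the template name and command are examples, and the script assumes the SDK is installed with OMNIRUN_API_KEY exported.

```typescript
import { Sandbox } from '@omnirun/sdk';

// Create a sandbox from a built-in template
const sandbox = await Sandbox.create('typescript', { timeout: 300 });

// Run a command and transfer files
await sandbox.commands.run('node --version');
await sandbox.files.write('/tmp/hello.txt', 'hi');
const text = await sandbox.files.read('/tmp/hello.txt');

// Expose a port and get a preview URL
const exposure = await sandbox.expose(3000);
console.log(exposure.url);

// Clean up
await sandbox.kill();
```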

Python SDK

Python client

The Python SDK provides both sync and async interfaces. Install via pip and set your API key.

Install
pip install omnirun
Usage

REST API

Endpoints

All endpoints require a Bearer token. Base URL: https://api.omnirun.io

Method  Endpoint                        Description

Sandboxes
POST    /sandboxes                      Create a new sandbox
GET     /sandboxes                      List active sandboxes
GET     /sandboxes/:id                  Get sandbox status
DELETE  /sandboxes/:id                  Terminate sandbox

Commands
POST    /sandboxes/:id/commands         Run a command
GET     /sandboxes/:id/commands         List executed commands

Files
POST    /sandboxes/:id/files            Upload a file
GET     /sandboxes/:id/files            Read a file (path as query param)

Exposures
POST    /sandboxes/:id/exposures        Expose a port with preview URL
GET     /sandboxes/:id/exposures        List active exposures
DELETE  /sandboxes/:id/exposures/:eid   Remove an exposure

LLM Proxy
POST    /llm/v1/chat/completions        OpenAI-compatible chat completion
GET     /llm/v1/models                  List available models
GET     /llm/v1/usage                   Get spend tracking info

Vault
POST    /vault/init                     Initialize user vault
POST    /vault/credentials              Store a credential
GET     /vault/credentials              List stored credentials

Auth
POST    /auth/magic-link/request        Request a magic link email
POST    /auth/otp/verify                Verify OTP code
GET     /auth/me                        Get current user info
POST    /auth/tokens                    Create an API token

History
GET     /sandboxes/history              List past sandbox sessions
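As a concrete example, creating a sandbox over raw REST can be sketched as follows. The endpoint and Bearer auth come from the table above; the buildCreateSandboxRequest helper and the { template } body shape are assumptions for illustration, not part of the documented API.

```typescript
// Build the request for POST /sandboxes as plain data, so the shape
// is easy to inspect before sending.
function buildCreateSandboxRequest(apiKey: string, template: string) {
  return {
    url: 'https://api.omnirun.io/sandboxes',
    init: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ template }), // assumed body shape
    },
  };
}

const { url, init } = buildCreateSandboxRequest('omr_your_api_key', 'playground');
console.log(url);
// const res = await fetch(url, init); // requires a real omr_ key
```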

Platform

LLM Proxy

Use hundreds of AI models through a single API key with built-in spend tracking and rate limiting. OpenAI-compatible — just change your base URL.

Auth uses the same omr_ API key as sandbox operations. Three endpoints: POST /llm/v1/chat/completions, GET /llm/v1/models, and GET /llm/v1/usage.

// Using the OpenAI SDK
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.omnirun.io/llm/v1",
  apiKey: "omr_your_api_key",
});

const completion = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(completion.choices[0].message.content);
Available models
"openai/gpt-4o-mini"          // Free tier
"anthropic/claude-sonnet-4.6"  // Free tier
"google/gemini-3.1-pro-preview"  // Free tier
"kilo-auto/balanced"         // Free tier
"kilo-auto/free"             // Free tier

// 300+ more models available
// GET /llm/v1/models for full list
Spend tracking
// Free tier: $5.00 credit
// Spend tracked per-request from upstream
// 402 response when cap exceeded

GET /llm/v1/usage
// → { "spendUsedCents": 42,
//     "spendCapCents": 500,
//     "remainingCents": 458 }

Platform

Vault

Secure credential storage with per-user isolation. Store API keys and secrets in your vault, then inject them into sandboxes at creation time.

When vaultInject: true is set, all vault credentials are written to /tmp/.omnirun-env inside the sandbox and automatically sourced by the agent process. Vault sandboxes are exempt from idle auto-kill.

Initialize vault
curl -X POST https://api.omnirun.io/vault/init \
  -H "Authorization: Bearer $OMNIRUN_KEY"

// Auto-created on first login
Store credentials
curl -X POST https://api.omnirun.io/vault/credentials \
  -H "Authorization: Bearer $OMNIRUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{"key": "OPENAI_API_KEY",
       "value": "sk-..."}'
Inject into sandbox
const sandbox = await Sandbox.create('playground', {
  vaultInject: true,
});

// Credentials written to /tmp/.omnirun-env
// Auto-sourced by the sandbox agent

Platform

OpenClaw

The openclaw template deploys an AI agent gateway in an isolated microVM. Connect messaging channels like WhatsApp, Telegram, Discord, and Slack.

All LLM requests route through the OmniRun proxy with per-user spend tracking and a $5 free tier. Use vaultInject: true to inject your own API keys for BYOK mode — set the config provider to "openai" or "anthropic" instead of "omnirun".

After starting the gateway, expose port 3000 and visit the /connect endpoint to scan the WhatsApp QR code. Once paired, messages are handled automatically.

Create OpenClaw sandbox
import { Sandbox } from "@omnirun/sdk";

const sandbox = await Sandbox.create("openclaw", {
  vaultInject: true,
});

// Write OpenClaw config
await sandbox.files.write(
  "/app/config.json",
  JSON.stringify({
    provider: "omnirun",
    model: "openai/gpt-4o-mini",
    channels: ["whatsapp"],
  })
);

// Start the gateway
await sandbox.commands.run("node /app/gateway.js");
OpenClaw config format
{
  "provider": "omnirun",      // Custom OmniRun provider
  "model": "openai/gpt-4o-mini",
  "channels": ["whatsapp"],  // or telegram, discord, slack
  "systemPrompt": "You are a helpful assistant."
}
Available models via proxy
"openai/gpt-4o-mini"          // Default, free tier
"anthropic/claude-sonnet-4.6"  // Free tier
"google/gemini-3.1-pro-preview"  // Free tier
"kilo-auto/balanced"         // Free tier

// BYOK: store your own key in the vault
// and set provider to "openai" or "anthropic"
WhatsApp pairing
// Expose the pairing endpoint
const exposure = await sandbox.expose(3000);

// Visit the URL to scan the QR code
console.log(exposure.url + "/connect");
// → https://abc123.run.omnirun.io/connect

// Once paired, messages flow automatically
// through the LLM proxy and back to WhatsApp

Platform

Limits

Each user can run up to 3 concurrent sandboxes by default. Creating a sandbox beyond this limit returns a 429 status code.

Sandboxes with no activity for 15 minutes are automatically killed. Sandboxes created with vaultInject: true are exempt from idle auto-kill.

Limit behavior
// Concurrent sandbox limit: 3 per user
// Exceeding the limit returns HTTP 429

POST /sandboxes
// → 429 { "error": "concurrent sandbox limit reached" }

// Idle auto-kill: 15 minutes with no activity
// Vault-injected sandboxes are exempt
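A client can branch on these status codes when creating sandboxes. A minimal sketch; the classifyCreateStatus helper and the retry advice are mine, not part of the SDK.

```typescript
// Interpret a sandbox-creation response status per the limits above
type CreateOutcome = 'created' | 'limit-reached' | 'error';

function classifyCreateStatus(status: number): CreateOutcome {
  if (status === 200 || status === 201) return 'created';
  if (status === 429) return 'limit-reached'; // concurrent limit (default 3)
  return 'error';
}

// On 'limit-reached', free capacity by terminating an idle sandbox
// (DELETE /sandboxes/:id) or wait for the 15-minute idle auto-kill.
console.log(classifyCreateStatus(429)); // → "limit-reached"
```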

Platform

Networking

All sandboxes start with default-deny networking. No outbound connections are allowed unless you explicitly configure an allow list.

This prevents data exfiltration and ensures sandboxed code can only reach services you approve.

Read security model →
Configure network allow list
const sandbox = await Sandbox.create('agent', {
  network: {
    allow: [
      'api.openai.com',
      'api.anthropic.com',
      'huggingface.co',
    ]
  }
});

// sandbox can only reach these 3 hosts
// all other outbound traffic is blocked

Platform

File Transfer

Upload and download files using signed, time-bound URLs. All file access is scoped to a single sandbox.

Upload & download
// Write a file to the sandbox
await sandbox.files.write(
  '/tmp/input.json',
  JSON.stringify(data)
);

// Read a file from the sandbox
const output = await sandbox.files.read(
  '/tmp/result.csv'
);

Platform

Templates

Templates define the pre-installed packages and base configuration for a sandbox. Use built-in templates or create your own.

Available templates
// Built-in templates
'playground'   // General purpose
'rust'         // Rust toolchain
'typescript'   // Node.js + TS
'javascript'   // Node.js
'php'          // PHP 8.x
'sql'          // PostgreSQL
'zig'          // Zig compiler

// Custom template
const sb = await Sandbox.create('my-custom-template');

Reference

Security Model

OmniRun uses Firecracker microVMs for hardware-level isolation. Each sandbox runs its own Linux kernel, unlike containers which share the host kernel.

Full security overview →
Isolation stack
// Isolation layers (top → bottom)
'Your Code'        // Userspace
'Guest Kernel'     // Dedicated per sandbox
'microVM'          // Minimal VMM (Firecracker)
'KVM'              // Hardware virtualization
'Host OS'          // Hardened host

Reference

Benchmarks

MicroVM boot times compared to traditional containers and full VMs. OmniRun sandboxes boot in under a second.

OmniRun (Firecracker)    842ms
Docker container         1.2s
Cloud VM (EC2)           ~30s

Median of 1,000 cold-start runs on Hetzner AX102 bare metal.


Reference

Changelog

Recent updates and improvements to the OmniRun platform.

v1.3.0 -- March 2026

LLM Proxy, Vault & Multi-Region

  • LLM Proxy: OpenAI-compatible API at /llm/v1 with per-user spend tracking and $5 cap
  • Vault System: Secure credential storage with per-sandbox injection
  • Auto Vault: Vault automatically created on first login
  • Sandbox Limits: Per-user concurrent limit (default 3) with idle auto-kill (15 min)
  • OpenClaw Template: Pre-configured AI agent sandbox (2 vCPU, 1GB)
  • Channel Connection: /connect page for WhatsApp pairing via OpenClaw
  • Multi-Region: Frankfurt (fra) node with region-prefix preview URLs
  • SDK 0.5.0: LLM class with streaming support
  • CLI 0.7.0: omni llm chat/models/usage commands

v1.2.0 -- March 2026

Zig template + network allow lists

Added Zig playground template. Network allow lists now support wildcard patterns.

v1.1.0 -- February 2026

Python SDK + file transfer

Released Python SDK. Added signed file upload/download URLs.

v1.0.0 -- January 2026

General availability

Initial public release with TypeScript SDK, REST API, and 6 language templates.