Not a container.
A virtual machine.

Every sandbox runs its own Linux kernel. There's no shared surface between workloads.

Isolation model

Five layers of isolation

Your Code

Application runs in userspace inside the guest

Guest Kernel

Dedicated Linux kernel per sandbox — no sharing

Firecracker VMM

Minimal virtual machine monitor — same as AWS Lambda

KVM

Hardware-assisted virtualization via Linux KVM

Host OS

Hardened host with minimal attack surface

Security features

Defense in depth.

Hardware isolation via Firecracker

Same technology behind AWS Lambda and Fargate. Each sandbox runs in its own microVM with dedicated kernel, memory, and CPU allocation.

# Isolation boundary

Container: process namespace

OmniRun: hardware VM boundary

# Kernel

Container: shared host kernel

OmniRun: dedicated guest kernel

Default-deny networking

No outbound connections unless explicitly allowed. Configure per-sandbox allow lists via the API or SDK to control exactly what a sandbox can reach.

// Allow specific domains
const sandbox = await Sandbox.create({
  network: {
    allow: [
      'api.openai.com',
      'huggingface.co'
    ]
  }
})

Signed artifact access

Time-bound URLs for file uploads and downloads. Per-sandbox scoped tokens that expire automatically. No long-lived credentials.

# Upload token

scope: sb_a1b2c3d4

expires: 300s

path: /tmp/upload/*

# Download URL

signed: HMAC-SHA256

ttl: 60s
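The signing scheme above can be sketched in a few lines. The secret handling, URL layout, and function names here are illustrative assumptions, not OmniRun's implementation; in practice the signing secret would be a per-sandbox secret held server-side.

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

// Illustrative per-sandbox signing secret, never sent to clients
const SIGNING_SECRET = 'demo-secret'

// Produce a time-bound URL: path + expiry, authenticated with HMAC-SHA256
function signArtifactUrl(path: string, ttlSeconds: number, now = Date.now()): string {
  const expires = Math.floor(now / 1000) + ttlSeconds
  const payload = `${path}?expires=${expires}`
  const sig = createHmac('sha256', SIGNING_SECRET).update(payload).digest('hex')
  return `${payload}&sig=${sig}`
}

// Reject on elapsed TTL or bad signature
function verifyArtifactUrl(url: string, now = Date.now()): boolean {
  const parts = url.match(/^(.*)&sig=([0-9a-f]+)$/)
  if (!parts) return false
  const [, payload, sig] = parts
  const expMatch = payload.match(/expires=(\d+)$/)
  if (!expMatch || Math.floor(now / 1000) > Number(expMatch[1])) return false
  const expected = createHmac('sha256', SIGNING_SECRET).update(payload).digest('hex')
  if (sig.length !== expected.length) return false
  // Constant-time comparison of the two hex digests
  return timingSafeEqual(Buffer.from(sig), Buffer.from(expected))
}
```

Because the expiry is part of the signed payload, a client cannot extend the TTL without invalidating the signature.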

E2E encrypted credential transfer

Credentials are encrypted client-side and decrypted only inside the VM. They never touch disk in plaintext on our infrastructure.

# Credential flow

1. Client encrypts with sandbox public key

2. Ciphertext sent over TLS

3. Decrypted inside VM memory only

4. Never written to disk

5. Destroyed on sandbox teardown
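The round trip above can be sketched with public-key encryption. RSA-OAEP is an assumption here, as the actual cipher suite is not specified, and the key pair is generated locally purely to illustrate the flow; in the real flow the private key never leaves the sandbox VM.

```typescript
import { generateKeyPairSync, publicEncrypt, privateDecrypt, constants } from 'node:crypto'

// Stand-in for the sandbox's key pair; the private half lives only in the VM
const { publicKey, privateKey } = generateKeyPairSync('rsa', { modulusLength: 2048 })

const secret = 'OPENAI_API_KEY=example-value'

// 1. Client encrypts with the sandbox public key
const ciphertext = publicEncrypt(
  { key: publicKey, padding: constants.RSA_PKCS1_OAEP_PADDING },
  Buffer.from(secret)
)

// 2. Ciphertext travels over TLS; intermediaries see only opaque bytes
// 3-4. Decryption happens only where the private key lives, in VM memory
const plaintext = privateDecrypt(
  { key: privateKey, padding: constants.RSA_PKCS1_OAEP_PADDING },
  ciphertext
).toString()
```

The property this buys: anything between the client and the VM, including the host infrastructure, only ever handles ciphertext.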

Deterministic teardown

VM process killed. LVM snapshot deleted. Network namespace removed. No residual state. No recovery possible. Provably gone.

// Teardown sequence
await sandbox.kill()

// ✓ VM process: SIGKILL
// ✓ LVM snapshot: deleted
// ✓ Network ns: removed
// ✓ Memory: zeroed
// ✓ State: irrecoverable

Per-user credential vault

Each user gets an isolated vault sandbox for credential storage: an air-gapped VM with no network access. Keys are injected into sandboxes at creation time via secure file write, never exposed through the API. Only key names are readable; values stay inside the VM.

# Vault architecture

storage: air-gapped VM

network: none

isolation: per-user sandbox

# Injection flow

1. User stores credential by name

2. Value written to vault VM

3. Sandbox created with key reference

4. Value injected via secure file write

5. API returns key names only
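A minimal in-memory model of the vault contract described above: values are write-only through the API, reads return key names only, and values surface solely on the injection path. The class and the `/run/secrets/` path are illustrative assumptions, not the OmniRun SDK.

```typescript
// Toy model of the vault contract; not the OmniRun implementation
class CredentialVault {
  private store = new Map<string, string>()

  // 1-2. User stores a credential by name
  put(name: string, value: string): void {
    this.store.set(name, value)
  }

  // 5. What the API exposes: key names, never values
  listKeyNames(): string[] {
    return [...this.store.keys()]
  }

  // 3-4. Injection path at sandbox creation, modeling the secure
  // file write into the guest filesystem (path is illustrative)
  injectInto(guestFiles: Map<string, string>, name: string): void {
    const value = this.store.get(name)
    if (value === undefined) throw new Error(`unknown credential: ${name}`)
    guestFiles.set(`/run/secrets/${name}`, value)
  }
}
```

The design choice being modeled: there is no read endpoint for values at all, so the API surface cannot leak them even under a bug in an authorization check.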

Authenticated LLM proxy

All LLM requests route through the OmniRun proxy with per-user authentication. Each request is validated against the user's API token. Per-user spend tracking with configurable caps. Admin keys bypass spend limits with full audit logging. No direct provider key exposure — your OpenAI/Anthropic keys stay in the vault.

# LLM proxy auth flow

1. Request hits OmniRun proxy

2. User token validated

3. Spend checked against cap

4. Provider key injected from vault

5. Request forwarded to provider

# Key exposure

user sees: proxy URL only

provider key: never leaves vault
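The proxy's decision path can be modeled as a single function over the five steps above. The record fields and function names are assumptions for illustration, and the real proxy forwards the request over the network rather than returning a header.

```typescript
import { createHash } from 'node:crypto'

// Illustrative per-user record; field names are assumptions
interface ProxyUser {
  tokenHash: string   // SHA-256 hex digest of the user's API token
  spentUsd: number
  capUsd: number
  isAdmin: boolean
}

function handleLlmRequest(
  user: ProxyUser,
  presentedToken: string,
  estimatedCostUsd: number,
  vaultProviderKey: string
): { forwarded: boolean; authHeader?: string; reason?: string } {
  // 1-2. Validate the user token against its stored hash
  const hash = createHash('sha256').update(presentedToken).digest('hex')
  if (hash !== user.tokenHash) return { forwarded: false, reason: 'invalid token' }

  // 3. Check spend against the cap (admin keys bypass, with audit logging)
  if (!user.isAdmin && user.spentUsd + estimatedCostUsd > user.capUsd) {
    return { forwarded: false, reason: 'spend cap exceeded' }
  }

  // 4-5. Inject the provider key from the vault and forward; the caller
  // only ever sees the proxy URL, never vaultProviderKey
  return { forwarded: true, authHeader: `Bearer ${vaultProviderKey}` }
}
```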

Scoped API tokens

API tokens are SHA-256 hashed at rest. Prefix-based fast lookup with constant-time verification. Tokens can be created, listed, and revoked via the API. Per-token last-used tracking. Magic link and OTP authentication for passwordless access.

# Token properties

hash: SHA-256 at rest

lookup: prefix-based

verify: constant-time

# Lifecycle

create: scoped

rotate: revoke + reissue

revoke: immediate

tracking: per-token last-used
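The token properties above can be sketched as follows. The token layout (a short fixed prefix followed by a random tail) and the 11-character prefix length are illustrative assumptions, not OmniRun's actual format.

```typescript
import { createHash, timingSafeEqual } from 'node:crypto'

// Only the prefix and the SHA-256 digest are stored at rest
interface StoredToken { prefix: string; hash: Buffer; lastUsed?: Date }

const sha256 = (s: string) => createHash('sha256').update(s).digest()

function storeToken(token: string): StoredToken {
  // Assumed layout: 11-char prefix, e.g. `or_ab12cd34`, then a random tail
  return { prefix: token.slice(0, 11), hash: sha256(token) }
}

function verifyToken(presented: string, records: StoredToken[]): boolean {
  // Prefix narrows the candidate set without storing the token itself
  const candidate = records.find(r => r.prefix === presented.slice(0, 11))
  if (!candidate) return false
  // Constant-time comparison of the two fixed-length digests
  const ok = timingSafeEqual(sha256(presented), candidate.hash)
  if (ok) candidate.lastUsed = new Date()   // per-token last-used tracking
  return ok
}
```

Hashing both sides before comparing keeps the inputs to `timingSafeEqual` the same length, so verification time does not depend on where the presented token first differs.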

Compliance

Compliance roadmap

EU Data Residency

Hetzner data centers, Germany

Active

SOC 2 Type II

Audit in progress, expected Q3 2026

In progress

Questions about security?

Request our full security pack, or explore the technical details in our documentation.