Every sandbox runs its own Linux kernel. There is no shared kernel surface between workloads.
Isolation model
Your Code: application runs in userspace inside the guest
Guest Kernel: dedicated Linux kernel per sandbox, no sharing
Firecracker VMM: minimal virtual machine monitor, same as AWS Lambda
KVM: hardware-assisted virtualization via Linux KVM
Host OS: hardened host with minimal attack surface
Security features
OmniRun uses Firecracker, the same technology behind AWS Lambda and Fargate. Each sandbox runs in its own microVM with a dedicated kernel and its own memory and CPU allocation.
# Isolation boundary
Container: process namespace
OmniRun: hardware VM boundary
# Kernel
Container: shared host kernel
OmniRun: dedicated guest kernel
No outbound connections unless explicitly allowed. Configure per-sandbox allow lists via the API or SDK to control exactly what a sandbox can reach.
// Allow specific domains
const sandbox = await Sandbox.create({
  network: {
    allow: [
      'api.openai.com',
      'huggingface.co'
    ]
  }
})

Time-bound URLs for file uploads and downloads. Per-sandbox scoped tokens that expire automatically. No long-lived credentials.
# Upload token
scope: sb_a1b2c3d4
expires: 300s
path: /tmp/upload/*
# Download URL
signed: HMAC-SHA256
ttl: 60s
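A scheme of this shape can be sketched with Node's crypto module. The signing secret, path format, and query parameters below are illustrative assumptions, not OmniRun's actual wire format:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

// Illustrative signing secret; a real service holds this server-side only.
const SIGNING_SECRET = 'example-signing-secret'

// Sign a path with an expiry (ttl in seconds): HMAC-SHA256 over the payload.
function signDownloadUrl(path: string, ttlSeconds: number, nowMs = Date.now()): string {
  const expires = Math.floor(nowMs / 1000) + ttlSeconds
  const payload = `${path}?expires=${expires}`
  const sig = createHmac('sha256', SIGNING_SECRET).update(payload).digest('hex')
  return `${payload}&sig=${sig}`
}

// Verify: recompute the HMAC, compare in constant time, then check expiry.
function verifyDownloadUrl(url: string, nowMs = Date.now()): boolean {
  const match = url.match(/^(.*\?expires=(\d+))&sig=([0-9a-f]+)$/)
  if (!match) return false
  const [, payload, expires, sig] = match
  const expected = createHmac('sha256', SIGNING_SECRET).update(payload).digest('hex')
  const a = Buffer.from(sig, 'hex')
  const b = Buffer.from(expected, 'hex')
  if (a.length !== b.length || !timingSafeEqual(a, b)) return false
  return Number(expires) > Math.floor(nowMs / 1000)
}
```

Because the expiry is inside the signed payload, a client cannot extend a URL's lifetime without invalidating the signature.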
Credentials are encrypted client-side and decrypted only inside the VM. They never touch disk in plaintext on our infrastructure.
# Credential flow
1. Client encrypts with sandbox public key
2. Ciphertext sent over TLS
3. Decrypted inside VM memory only
4. Never written to disk
5. Destroyed on sandbox teardown
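A minimal sketch of the encryption step, assuming RSA-OAEP. The key pair here is generated locally purely for illustration; in the real flow the public key belongs to the sandbox and the private key never leaves the VM:

```typescript
import { generateKeyPairSync, publicEncrypt, privateDecrypt, constants } from 'node:crypto'

// Stand-in key pair generated locally for the sketch. In the real flow the
// public key comes from the sandbox and the private key stays inside the VM.
const { publicKey, privateKey } = generateKeyPairSync('rsa', { modulusLength: 2048 })

// Step 1: client encrypts the credential with the sandbox public key.
const ciphertext = publicEncrypt(
  { key: publicKey, padding: constants.RSA_PKCS1_OAEP_PADDING },
  Buffer.from('sk-example-credential')
)

// Steps 2-3: the ciphertext travels over TLS; only code holding the
// private key (inside the VM) can decrypt it, and only in memory.
const plaintext = privateDecrypt(
  { key: privateKey, padding: constants.RSA_PKCS1_OAEP_PADDING },
  ciphertext
)
```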
VM process killed. LVM snapshot deleted. Network namespace removed. No residual state. No recovery possible. Provably gone.
// Teardown sequence
await sandbox.kill()
// ✓ VM process: SIGKILL
// ✓ LVM snapshot: deleted
// ✓ Network ns: removed
// ✓ Memory: zeroed
// ✓ State: irrecoverable

Each user gets an isolated vault sandbox for credential storage. Credentials are stored in air-gapped VMs with no internet access. Keys are injected into sandboxes at creation time via a secure file write, never exposed through the API. Only key names are readable; values stay inside the VM.
# Vault architecture
storage: air-gapped VM
network: none
isolation: per-user sandbox
# Injection flow
1. User stores credential by name
2. Value written to vault VM
3. Sandbox created with key reference
4. Value injected via secure file write
5. API returns key names only
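The vault contract above (values go in by name, only names ever come back out) can be sketched in-process. This class is purely illustrative, not the OmniRun SDK:

```typescript
// In-process sketch of the vault contract. Not the OmniRun SDK.
class VaultSketch {
  private secrets = new Map<string, string>()

  // Steps 1-2: store a credential by name.
  store(name: string, value: string): void {
    this.secrets.set(name, value)
  }

  // Step 5: the readable API surface returns key names only, never values.
  listNames(): string[] {
    return [...this.secrets.keys()]
  }

  // Steps 3-4: the injection path resolves a value so it can be written
  // into the sandbox filesystem; this path is internal, not API-facing.
  resolveForInjection(name: string): string | undefined {
    return this.secrets.get(name)
  }
}
```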
All LLM requests route through the OmniRun proxy with per-user authentication. Each request is validated against the user's API token. Per-user spend tracking with configurable caps. Admin keys bypass spend limits with full audit logging. No direct provider key exposure — your OpenAI/Anthropic keys stay in the vault.
# LLM proxy auth flow
1. Request hits OmniRun proxy
2. User token validated
3. Spend checked against cap
4. Provider key injected from vault
5. Request forwarded to provider
# Key exposure
user sees: proxy URL only
provider key: never leaves vault
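From the user's side, the flow amounts to building a request against the proxy with the user's own token. The endpoint URL, token format, and header layout below are hypothetical, not OmniRun's documented API:

```typescript
// Build a request against the LLM proxy; the endpoint URL and token
// format are hypothetical, not OmniRun's documented API.
function buildProxyRequest(userToken: string, body: unknown): Request {
  return new Request('https://llm.omnirun.example/v1/chat/completions', {
    method: 'POST',
    headers: {
      // The user's OmniRun token; the provider key stays in the vault
      // and is injected server-side by the proxy.
      Authorization: `Bearer ${userToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  })
}
```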
API tokens are SHA-256 hashed at rest. Prefix-based fast lookup with constant-time verification. Tokens can be created, listed, and revoked via the API. Per-token last-used tracking. Magic link and OTP authentication for passwordless access.
# Token properties
hash: SHA-256 at rest
lookup: prefix-based
verify: constant-time
# Lifecycle
create: scoped
rotate: revoke + reissue
revoke: immediate
tracking: per-token last-used
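The prefix-plus-hash scheme can be sketched as follows; the prefix length and field names are illustrative assumptions:

```typescript
import { createHash, timingSafeEqual } from 'node:crypto'

// Sketch of the storage scheme: a short plaintext prefix for indexed
// lookup plus a SHA-256 hash of the full token stored at rest.
function hashToken(token: string): { prefix: string; hash: Buffer } {
  return {
    prefix: token.slice(0, 8),                         // indexed, fast lookup
    hash: createHash('sha256').update(token).digest(), // stored at rest
  }
}

// Look up by prefix, then verify the full token in constant time.
function verifyToken(presented: string, stored: { prefix: string; hash: Buffer }): boolean {
  if (!presented.startsWith(stored.prefix)) return false
  const candidate = createHash('sha256').update(presented).digest()
  return timingSafeEqual(candidate, stored.hash)       // both digests are 32 bytes
}
```

The prefix narrows the database lookup without revealing the token, while the constant-time digest comparison avoids leaking how many bytes of a guess matched.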
Compliance
Hetzner data centers, Germany
Audit in progress, expected Q3 2026
Request our full security pack, or explore the technical details in our documentation.