AGENT INFRASTRUCTURE · IN PILOT

Give your AI agent
a computer.

Your agent spends 20% of the month doing work and 80% waiting on the LLM. Most platforms charge you for both. ORB only charges for the 20%.

Same code. A fraction of the cost.

See the math  ↓
0.9s
WAKE FROM SLEEP
0
CODE CHANGES
$0
FOR IDLE TIME
01 COST

100 agents. 2 GB each.
Idle 80% of the month.

Monthly bill · public on-demand pricing, 2026-04
GPU-first platform: $24,150
Dev-environment SaaS: $9,803
Sandbox platform: $9,741
Sandbox w/ sleep mode: $2,417
Raw cloud VMs × 100: $2,400
ORB Cloud: $176
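The comparison above reduces to simple arithmetic. A back-of-envelope sketch in Python; the per-GB-hour rate is illustrative only, not a published ORB price. The point is that at any fixed rate, billing only the 20% working slice cuts the bill 5×:

```python
# Back-of-envelope: 100 agents, 2 GB each, working 20% of the month.
# RATE is a hypothetical per-GB-hour price for illustration only.
HOURS_PER_MONTH = 730
AGENTS, GB_EACH = 100, 2
WORKING_FRACTION = 0.20
RATE = 0.006  # $/GB-hour, illustrative

gb_hours_total = AGENTS * GB_EACH * HOURS_PER_MONTH           # 146,000 GB-h
always_on_bill = gb_hours_total * RATE                        # billed 100% of the time
work_only_bill = gb_hours_total * WORKING_FRACTION * RATE     # billed only while working

print(f"always-on: ${always_on_bill:,.0f}/mo")   # ~$876/mo
print(f"work-only: ${work_only_bill:,.0f}/mo")   # ~$175/mo
```

The 13×–137× gaps in the table above come from the combination of this 5× billing-time difference and rate differences between platforms.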

We only bill for the time your agent is
actually working.

When your agent sends an LLM request and starts waiting for the response, we put it to sleep on disk and stop the clock — until the response arrives. Same code, same speed, a different pricing model.

02 HOW IT WORKS

We bill for working,
not waiting.

Watch one agent live through a month. Each block is a slice of time.
[Timeline: one agent across days 1 to 30, each block marked "working" or "waiting on the LLM". Most platforms bill for every second, working or not; ORB bills only the working segments. Same agent, same code, same speed: same per-second rate, different meter.]

When your agent sends an LLM request and starts waiting for the response, ORB notices and puts the agent to sleep on disk. It stays there — using zero memory and costing nothing — until the LLM responds.

Then ORB wakes it back up in 0.9 seconds and forwards the response. Your agent never knows it slept. Your code is unchanged. You only pay for the seconds your agent is actually working.
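The flow described above can be sketched as follows. This is an illustration of the idea, not ORB's implementation: `checkpoint` and `restore` here just pickle a state dict, standing in for snapshotting a full process image to disk.

```python
import pickle
import time

def checkpoint(agent_state: dict) -> bytes:
    """Stand-in for freezing a process image to disk."""
    return pickle.dumps(agent_state)

def restore(snapshot: bytes) -> dict:
    """Stand-in for restoring the process image, memory and all."""
    return pickle.loads(snapshot)

def llm_call_with_sleep(agent_state: dict, send_request, await_response):
    """Bill only the working time around one LLM round trip."""
    billed_seconds = 0.0

    t0 = time.monotonic()
    send_request()                        # working: the meter runs
    billed_seconds += time.monotonic() - t0

    snapshot = checkpoint(agent_state)    # waiting: agent sleeps on disk,
    response = await_response()           # zero memory, zero billing

    t1 = time.monotonic()
    agent_state = restore(snapshot)       # wake: state exactly as it was
    agent_state["last_response"] = response
    billed_seconds += time.monotonic() - t1

    return agent_state, billed_seconds
```

From the agent's point of view, nothing happened: it made a call and got a response. The checkpoint and restore happen underneath it.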

Source releases after the first 10 pilot teams are in production.

03 COMPUTER

Every computer comes with…

computer-01 running
Filesystem
/agent/data/
/agent/packages/
/home/ /tmp/ /usr/
Network
Dedicated IP
Full outbound access
Exposed ports
Runtime
bash, pip, npm, git
Any language
Any framework
State
Memory persists
Files persist
Packages persist
04 DEPLOY

Two API calls.
Or one prompt to your agent.

# 1 — create a computer
curl -X POST https://api.orbcloud.dev/v1/computers \
  -H "Authorization: Bearer $ORB_KEY" \
  -d '{"name": "my-agent", "runtime_mb": 1024}'

# 2 — deploy your agent
curl -X POST https://api.orbcloud.dev/v1/computers/$ID/agents \
  -H "Authorization: Bearer $ORB_KEY" \
  -d '{"task": "Review all open PRs"}'
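The same two calls as one Python script, using only the standard library. The response shape is an assumption: the snippet above doesn't show what the create call returns, so the `id` field read here is hypothetical.

```python
import json
import os
import urllib.request

API = "https://api.orbcloud.dev/v1"

def build_request(path: str, body: dict, key: str):
    """Assemble URL, headers, and JSON body for one API call."""
    headers = {"Authorization": f"Bearer {key}",
               "Content-Type": "application/json"}
    return f"{API}{path}", headers, json.dumps(body).encode()

def post(path: str, body: dict, key: str) -> dict:
    url, headers, data = build_request(path, body, key)
    req = urllib.request.Request(url, data=data, headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def deploy(task: str, key: str) -> dict:
    # 1 -- create a computer (assumes the response carries an "id" field)
    computer = post("/computers", {"name": "my-agent", "runtime_mb": 1024}, key)
    # 2 -- deploy the agent onto it
    return post(f"/computers/{computer['id']}/agents", {"task": task}, key)

if __name__ == "__main__":
    print(deploy("Review all open PRs", os.environ["ORB_KEY"]))
```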
[agent]
name  = "my-agent"
lang  = "python"
entry = "agent.py"

[source]
git    = "https://github.com/you/agent.git"
branch = "main"

[build]
steps = ["pip install -r requirements.txt"]

[backend]
provider = "anthropic"
# Drop ORB into your AI agent's context.
# It registers, builds, and deploys itself.

curl -sL https://docs.orbcloud.dev/llm.txt | claude
Read the full guide →
05 PROOF

What you actually get.

No internal capacity numbers. No infrastructure trivia. Just the things you'll feel.

0.9s
WAKE FROM SLEEP
end-to-end with full memory and network state
20%
OF TIME BILLED
your agent waits the other 80% — for free
2
API CALLS TO DEPLOY
create a computer, ship your agent to it
0
CODE CHANGES
runs your existing agent unmodified
Currently in pilot

Currently onboarding pilot teams.

Direct access to the engineering team building ORB. Pilot cohorts shape the roadmap. Source releases after the first 10 pilot teams are in production.

06 USE CASES

Built for long-running,
stateful workloads.

Coding agents

Claude Code, Codex, Aider. Give each their own computer. Deploy hundreds in parallel.

Orchestrators

CrewAI, LangChain, Composio pipelines. Each gets its own isolated computer.

Research agents

Overnight crawling, summarization, analysis. Computers run 24/7 for pennies.

QA & review

Always-on PR reviewers. Persistent computers watching your repos continuously.

Browser agents

Playwright, browser-use at scale. Dedicated IP per computer.

Sales & outreach

Multi-day campaigns. Agents that personalize, track, and follow up over days.

07 PRICING

Simple, usage-based.

Subscription + usage. No per-seat fees.

Free · $0 · post-pilot plan
Up to 10 computers
Up to 4 GB memory
4 CPU cores
Community support
Request pilot access

Scale · $199/mo + usage · post-pilot
Up to 10,000 computers
Up to 64 GB memory
8 CPU cores
Priority support
Request pilot access

Enterprise · Custom · volume pricing
Unlimited computers
Custom memory
Custom CPU
Dedicated + SLA
Talk to us
Pricing shown is the post-pilot plan.
See the rates →
08 FAQ

Questions you'd ask
before signing up.

What is a computer?
A lightweight, isolated Linux environment with its own filesystem, network, and persistent state. Purpose-built so AI agents get full system access without VM overhead.
Why is it so much cheaper?
We only bill for the seconds your agent is actually working. When your agent sends an LLM request and starts waiting for the response, we put it to sleep on disk and stop billing, so the 80% of the month it sits idle costs nothing. Most platforms keep it in memory and bill the whole time.
Do I need to change my code?
No. Deploy what you already have. Your code calls APIs normally. ORB handles the sleep/wake cycle transparently.
How is this different from other sandbox platforms?
Most agent platforms run each sandbox inside a small virtual machine. Every second the VM exists, you're billed — including the long stretches when your agent is just waiting for the LLM to respond. Even platforms with a sleep mode still create that VM under the hood.

ORB takes a different approach. We run each agent as a regular Linux process on a shared server, then put it to sleep on disk the instant it goes idle. No VM per sandbox, and you only pay for the seconds your agent is actually awake.

For a fleet of 100 typical agents idle 80% of the month, that's roughly $176 on ORB versus $2,400-$24,000 elsewhere. → full comparison
Does the computer lose state?
No. Memory, files, installed packages, running processes, even open browser sessions. Everything is preserved and restored exactly as it was.
How fast does an agent wake up?
0.9 seconds, end to end, with full memory and network state restored. Measured on real coding agents in production. CI tests block any change that makes it slower.
How many computers can I run?
Thousands. Each gets its own isolated environment. Create and destroy via API.
What does the pilot mean for me?
ORB is currently onboarding its first 10 pilot teams. While you're in the pilot, cost is $0 — the post-pilot pricing above is what you'll pay at GA, not during. Pilot teams get direct access to the engineering team (not a sales rep) and shape the roadmap. In return, we ask for honest feedback and a quick note in our launch post if the pilot goes well.
Is ORB open source?
Not yet. Source releases publicly after the first 10 pilot teams are in production. During the pilot, teams get read access to the source under NDA if they want to audit it.

Ship agents to production
this week.

We're onboarding the first 10 pilot teams. Tell us what you're building.