The Code Reviewer
a pool of agents running in the cloud, reviewing pull requests
idle

An AI reviewer, running in the cloud, reading pull requests and catching bugs before they ship.

A pool of autonomous OpenHands agents. Each one watches a set of GitHub repositories. When a pull request opens, GitHub fires a webhook, the agent wakes up in under a second, explores the codebase, and posts a review with real file and line citations.

all asleep
All reviewers are asleep.
3 of 3 agents checkpointed to disk. They cost nothing right now. The moment a pull request opens on a watched repository, GitHub will fire a webhook and the owning agent will wake up in under a second.
how it works

Asleep by default, awake only when there is work.

Each agent is a sandbox parked to disk with zero memory cost while idle. GitHub webhooks wake them up. Reviews take a couple of minutes; the rest of the day they are checkpointed. That is how the whole pool runs for the price of a coffee.
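The back-of-envelope economics look roughly like this. The rates below are made-up placeholders, not Orb's actual pricing; the point is only that idle agents bill for disk, not runtime.

```python
# Illustrative cost model: awake agents bill per active minute, asleep agents
# bill only for checkpoint storage. All rates are assumed placeholders.

RUNTIME_PER_MINUTE = 0.002   # $ per agent-minute while awake (assumed)
DISK_PER_GB_MONTH = 0.10     # $ per GB-month of checkpoint storage (assumed)

def monthly_cost(agents: int, reviews: int, minutes_per_review: float,
                 checkpoint_gb: float) -> float:
    """Runtime scales with reviews actually performed; disk scales with pool size."""
    runtime = reviews * minutes_per_review * RUNTIME_PER_MINUTE
    disk = agents * checkpoint_gb * DISK_PER_GB_MONTH
    return round(runtime + disk, 2)
```

With these placeholder rates, a 3-agent pool doing 30 reviews of 3 minutes each, with 2 GB checkpoints per agent, comes to `monthly_cost(3, 30, 3.0, 2.0)` = 0.78 dollars: almost all of it disk, because the agents are awake for only 90 minutes a month.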

agent-1
simonw/datasette, encode/httpx, pydantic/pydantic-ai, python-poetry/poetry
asleep
agent-2
charmbracelet/bubbletea, charmbracelet/lipgloss, caddyserver/caddy, spf13/cobra
asleep
agent-3
pmndrs/zustand, vercel/swr, tokio-rs/axum, BurntSushi/ripgrep
asleep
what it has done

Reviews posted so far, across the watched repositories.

Each review is a markdown comment on the pull request. Sections: summary, architecture, issues with severity and line numbers, cross-file impact, assessment. The agents are told to cite real files they read and to flag nothing if nothing is wrong.
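The comment layout can be sketched as a small renderer. The section names follow the list above; the input data shape and field names are assumptions for illustration.

```python
# Sketch: render a review dict into the markdown comment posted on the PR.
# Field names are hypothetical; the sections match the ones described above:
# summary, architecture, issues (with severity and line numbers),
# cross-file impact, assessment.

def render_review(review: dict) -> str:
    lines = ["## Summary", review["summary"], "",
             "## Architecture", review["architecture"], "",
             "## Issues"]
    for issue in review["issues"]:
        # each issue cites a real file and line the agent read
        lines.append(f"- **{issue['severity']}** `{issue['file']}:{issue['line']}`: {issue['note']}")
    lines += ["", "## Cross-file impact", review["impact"], "",
              "## Assessment", review["assessment"]]
    return "\n".join(lines)
```

An empty `issues` list simply renders an empty section, matching the instruction to flag nothing if nothing is wrong.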

reviews posted
3
most recent 3h ago
repos watched
12
0 of 3 agents awake now
cloud cost
$6.16
≈ past 30 days on Orb · runtime plus disk
recent reviews

The latest pull requests the pool has read.

Each line is a review posted on GitHub. Click through to read the full markdown comment on the pull request itself.

01
tokio-rs/axum#3731 · by aiqubits · agent-3
This PR adds a single line to `ECOSYSTEM.md` to include the KeyCompute project in the "Project showcase" section. The entry follows the existing format with a GitHub link and description.
1 warning · 2 suggestions
3h ago
02
pydantic/pydantic-ai#5125 · by dmontagu · agent-1
PR #5125 adds OpenTelemetry (OTel) event emission as the default observability surface for online evaluation, along with support for result grouping via `target` names and evaluator versioning. The PR introduces two new private modules (`_otel_emit.py` for OTel event emission and `_online.py` for refactored shared helpers), extends the `EvaluationSink.submit()` protocol with a `target` parameter, adds `emit_otel_events` and `include_baggage` config options, and documents agent integration via an `OnlineEvaluation` capability. This aligns online evals with the existing offline evals approach of using OTel for observability.
request-changes · 1 critical · 4 warnings · 5 suggestions
8h ago
03
pydantic/pydantic-ai#5143 · by DouweM · agent-1
This PR implements native provider tool search for Anthropic (BM25/regex) and OpenAI Responses (server-executed) models, with a local token-matching fallback for other providers. It adds a `ToolSearchTool` builtin, makes `ToolSearch` a public capability with configuration options, and introduces a `managed_by_builtin` attribute on `ToolDefinition` to mark tools as part of a native builtin's corpus.
1 critical · 2 warnings · 2 suggestions
8h ago

A note on what this is and isn’t.

Each review is written by an OpenHands agent — the same open-source software-engineering SDK that powers OpenHands Cloud — given a terminal and a file editor inside a cloned copy of the repository. The agent decides on its own what to read and how to reason about the diff.

The agents run on Orb Cloud, which checkpoints them to disk the moment they are idle and restores them in under a second when a webhook arrives. The language model is connected via LiteLLM, which lets us swap providers without code changes.
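The provider swap amounts to changing one model string, since LiteLLM accepts a `provider/model` prefix in `litellm.completion`. A minimal sketch; the env-var name, default model, and prompt wording are assumptions, and the actual API call is shown only as a comment so the sketch stands alone.

```python
# Sketch: with LiteLLM the provider is just a prefix on the model string,
# so swapping providers is a config change, not a code change.
import os

# e.g. "anthropic/claude-sonnet-4-5" or "openai/gpt-4o"; env var name is assumed
MODEL = os.environ.get("REVIEWER_MODEL", "anthropic/claude-sonnet-4-5")

def review_messages(diff: str) -> list[dict]:
    """Build the chat messages for one review request (prompt wording illustrative)."""
    return [
        {"role": "system",
         "content": "Review this pull request. Cite real files and lines; flag nothing if nothing is wrong."},
        {"role": "user", "content": diff},
    ]

# The actual call would then be:
#   import litellm
#   resp = litellm.completion(model=MODEL, messages=review_messages(diff))
#   review = resp.choices[0].message.content
```

Setting `REVIEWER_MODEL` in the agent's environment is all it takes to move the pool to a different provider; nothing in the review loop changes.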

Reviews are automated opinions. They can be wrong. They are meant to be read, rebutted, or ignored.