Self-Hosting
Run Oz cloud agents on your own infrastructure while keeping Oz's orchestration and observability.
Enterprise feature: Self-hosted Oz cloud agents are available exclusively to teams on an Enterprise plan. To enable self-hosting for your team, contact sales.
Self-hosting allows your team to run Oz cloud agent workloads on your own infrastructure instead of Warp-managed servers. This is ideal for enterprises that need code and execution to remain within their network boundary while still benefiting from Oz's orchestration and visibility model.

How it works
With self-hosting:
Oz orchestrator still manages task lifecycle, observability, and the management experience.
Execution happens on your infrastructure. You run a worker process that connects to Oz and claims tasks routed to it.
Your team controls the compute. The worker runs on machines you provision and manage.
This means you get the same orchestration, session sharing, and team visibility features as Warp-hosted execution, but the agent runs inside your network.
Prerequisites
Before setting up a self-hosted worker, ensure you have:
A machine to run the worker — This can be a VM, a server, or even a local machine running macOS, Linux, or Windows; any machine that runs Docker will do. Linux is recommended for production deployments.
Docker installed — The worker uses Docker to run agent tasks. Verify Docker is installed and running with docker info.
Enterprise plan with self-hosting enabled — Contact sales if self-hosting is not yet enabled for your team.
A team API key — In the Warp app, go to Settings > Platform to create a team-scoped API key.
Task containers require a linux/amd64 or linux/arm64 Docker daemon. The worker host itself can be any OS — Docker Desktop on macOS and Windows runs a Linux VM that satisfies this requirement.
Installation
Install Docker
If Docker is not already installed, follow the official Docker installation guide for your platform.
Verify Docker is running:
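```shell
# Prints daemon details; errors out if the Docker daemon is not reachable.
docker info
```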
Running the worker
The worker is open source. See the oz-agent-worker repository for source code, issues, and contribution guidelines.
There are three ways to run the worker: via Docker (recommended), via go install, or by building from source.
Set your API key
In the Warp app, go to Settings > Platform to create a team API key. Then export it as an environment variable:
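```shell
# Use the key created under Settings > Platform in the Warp app.
export WARP_API_KEY="your-team-api-key"
```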
Option 1: Docker (recommended)
The worker needs access to the Docker daemon to spawn task containers. Mount the host's Docker socket into the container:
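A sketch of the invocation (the image name warpdotdev/oz-agent-worker is an assumption; use the image published in the oz-agent-worker repository):

```shell
# Image name is an assumption; see the oz-agent-worker repo for the real one.
# The socket mount gives the worker access to the host's Docker daemon.
docker run -d \
  --name oz-worker \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WARP_API_KEY="$WARP_API_KEY" \
  warpdotdev/oz-agent-worker \
  --worker-id prod-runner-1
```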
Option 2: Go install
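If you have a Go toolchain installed, you can install the worker binary directly. A sketch, assuming the module path is github.com/warpdotdev/oz-agent-worker (confirm the real path in the repository):

```shell
# Module path is an assumption; check the oz-agent-worker repository.
go install github.com/warpdotdev/oz-agent-worker@latest

# Then start the worker directly:
oz-agent-worker --worker-id prod-runner-1 --api-key "$WARP_API_KEY"
```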
Option 3: Build from source
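A sketch, assuming the repository URL below (the worker is a Go project, so a standard go build applies):

```shell
# Repository URL is an assumption; see the oz-agent-worker repository.
git clone https://github.com/warpdotdev/oz-agent-worker.git
cd oz-agent-worker
go build -o oz-agent-worker .
./oz-agent-worker --worker-id prod-runner-1 --api-key "$WARP_API_KEY"
```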
Worker flags reference
The following flags are available when starting the worker:
Required:
--worker-id — A string identifying this worker. This is the value you pass to --host when routing tasks. Choose something meaningful for your team (e.g., prod-runner-1 or ci-worker). Multiple workers can share the same ID for load balancing (see below).
--api-key or WARP_API_KEY env var — Your team API key for authentication. When running via Docker, pass it as -e WARP_API_KEY="...". When running the binary directly, use --api-key or the environment variable.
Optional:
--log-level — Log verbosity. One of debug, info, warn, error. Defaults to info.
--no-cleanup — Do not remove task containers after execution. Useful for debugging failed tasks; you can inspect the container's filesystem and logs after the run.
-v / --volumes — Mount host directories into task containers. Format: HOST_PATH:CONTAINER_PATH or HOST_PATH:CONTAINER_PATH:MODE (where MODE is ro or rw). Can be specified multiple times.
-e / --env — Set environment variables in task containers. Format: KEY=VALUE (explicit value) or KEY (pass through from host environment). Can be specified multiple times.
Worker IDs starting with warp are reserved and cannot be used. The worker will refuse to start if --worker-id begins with warp.
Example with all flags:
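```shell
# Binary name is an assumption; the flags are those documented above.
oz-agent-worker \
  --worker-id prod-runner-1 \
  --api-key "$WARP_API_KEY" \
  --log-level debug \
  --no-cleanup \
  -v /data/shared:/workspace/shared:ro \
  -e CI=true \
  -e GITHUB_TOKEN
```

Here -e CI=true sets an explicit value in task containers, while -e GITHUB_TOKEN passes the host's GITHUB_TOKEN through.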
When running the worker via Docker, there are two levels of -e flags. Docker's -e passes env vars to the worker container (e.g., WARP_API_KEY). The worker's -e / --env flags pass env vars into the task containers that the worker spawns. Keep these distinct:
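```shell
# Docker's -e (before the image name) sets env vars on the worker container.
# The worker's own -e flags (after the image name) set env vars inside the
# task containers it spawns. Image name is an assumption.
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WARP_API_KEY="$WARP_API_KEY" \
  warpdotdev/oz-agent-worker \
  --worker-id prod-runner-1 \
  -e NODE_ENV=production
```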
Once started, the worker:
Connects to Oz via WebSocket
Waits for tasks routed to this worker ID
Runs each task in an isolated Docker container
Reports status and results back to Oz
Automatically reconnects with exponential backoff if the connection drops
You can run multiple workers with the same --worker-id for redundancy — tasks are distributed across connected workers using round-robin load balancing.
Docker connectivity
The worker uses the standard Docker client discovery mechanism to find the Docker daemon:
DOCKER_HOST environment variable (e.g., unix:///var/run/docker.sock, tcp://localhost:2375)
Default socket (/var/run/docker.sock on Linux, ~/.docker/run/docker.sock for rootless Docker)
Docker context via DOCKER_CONTEXT environment variable
Config file (~/.docker/config.json) for context settings
Additional Docker environment variables:
DOCKER_API_VERSION — Specify the Docker API version
DOCKER_CERT_PATH — Path to TLS certificates
DOCKER_TLS_VERIFY — Enable TLS verification
If the worker itself runs in Docker, you must mount any relevant config files (e.g., ~/.docker/config.json) into the worker container for Docker context and credential discovery to work.
Example: Connecting to a remote Docker daemon
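A sketch (the hostname and cert path are placeholders, and the worker binary name is an assumption):

```shell
# The worker picks up DOCKER_HOST via standard Docker client discovery.
export DOCKER_HOST=tcp://docker-host.internal:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$HOME/.docker/certs"
oz-agent-worker --worker-id prod-runner-1 --api-key "$WARP_API_KEY"
```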
Private Docker registries
The worker automatically uses credentials from your Docker config (~/.docker/config.json) when pulling task images. If your environments use images from a private registry, make sure the worker's host has been authenticated:
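```shell
# Log in once on the worker host; Docker stores the credentials (or a
# credential helper reference) in ~/.docker/config.json.
docker login registry.example.com
```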
When running the worker via Docker, mount the Docker config into the container:
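```shell
# Mount the Docker config read-only so credential discovery works inside the
# worker container. Image name is an assumption.
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$HOME/.docker/config.json":/root/.docker/config.json:ro \
  -e WARP_API_KEY="$WARP_API_KEY" \
  warpdotdev/oz-agent-worker \
  --worker-id prod-runner-1
```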
Sidecar images (the Warp agent binary and dependencies) are pulled from public registries and do not require authentication.
Routing runs to self-hosted workers
To run an Oz cloud agent on your self-hosted worker, specify the --host flag with your worker ID. The --host value must match the --worker-id of a connected worker exactly.
From the CLI
You can combine --host with any other run-cloud flags, such as --environment, --model, --mcp, --skill, --computer-use, and --attach.
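A hedged sketch of a CLI invocation (the run-cloud subcommand name comes from the flag reference above; the exact entry point may differ in your installation):

```shell
# --host must exactly match a connected worker's --worker-id.
warp agent run-cloud \
  --host prod-runner-1 \
  --environment my-env \
  "Fix the failing tests in the billing service"
```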
From scheduled agents
When creating or updating a schedule, specify the host:
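```shell
# Hypothetical sketch; confirm the actual schedule subcommand and flag names
# in the Oz docs. The key detail is passing --host with your worker ID.
warp agent schedule create \
  --host prod-runner-1 \
  --cron "0 9 * * *" \
  "Triage new issues"
```

The subcommand and flag names above are assumptions for illustration; only --host is taken from this page.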
From integrations
When creating or updating an integration, specify the host:
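```shell
# Hypothetical sketch; confirm the actual integration subcommand in the Oz
# docs. The key detail is passing --host with your worker ID.
warp agent integration update <integration-id> --host prod-runner-1
```

The subcommand name above is an assumption for illustration; only --host is taken from this page.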
All tasks created through that integration will be routed to your self-hosted worker.
From the API / SDKs
When creating a run via the Oz Agent API, include worker_host in the config:
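A sketch (the endpoint path and request fields other than config.worker_host are illustrative; see the Oz Agent API reference for the real route):

```shell
# Endpoint path is an assumption; worker_host goes inside the run config.
curl -X POST https://api.warp.dev/v1/agent/runs \
  -H "Authorization: Bearer $WARP_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Fix the failing tests",
    "config": { "worker_host": "prod-runner-1" }
  }'
```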
From the web UI
When creating a run, schedule, or integration in the Oz web app, select your self-hosted worker from the host dropdown.
Environments with self-hosted workers
Self-hosted workers fully support environments. When a task specifies an environment, the worker:
Pulls the Docker image defined in the environment (or falls back to ubuntu:22.04 if none is specified).
Clones the repositories and runs setup commands as configured.
Executes the agent inside the prepared container.
The same environment can be used for both Warp-hosted and self-hosted runs without modification. See Environments for details on creating and configuring environments.
The architecture of the environment's Docker image must match the architecture of the worker's Docker daemon. For example, an arm64 image will not run on a worker with an amd64 Docker daemon.
Musl-based Docker images (such as Alpine Linux) are not supported as task images. The agent runtime requires glibc. Use glibc-based images like Debian, Ubuntu, or the default (non-Alpine) variants of official Docker Hub images.
Monitoring runs
Self-hosted runs have the same observability as Warp-hosted runs:
Management UI — View task status, history, and metadata in the Oz dashboard.
Session sharing — Authorized teammates can attach to running tasks to monitor progress.
APIs and SDKs — Query task history and build monitoring using the Oz Agent API.
Security considerations
Docker socket access — The worker requires access to the Docker daemon to create task containers. When running the worker via Docker, this means mounting /var/run/docker.sock. Ensure appropriate access controls on the host.
Network egress — The worker needs outbound WebSocket connectivity to Oz (wss://oz.warp.dev). No inbound ports need to be opened.
API key management — Store your WARP_API_KEY securely (e.g., in a secrets manager). Avoid hardcoding it in scripts or config files.
Task isolation — Each task runs in its own Docker container. Containers are removed after execution by default (disable with --no-cleanup for debugging).
Volume mounts — If using -v / --volumes, be mindful of what host paths you expose to task containers.
Troubleshooting
Worker won't start
Verify Docker is running: docker info.
Ensure the Docker daemon platform is linux/amd64 or linux/arm64.
Worker won't connect
Verify your API key is correct, not expired, and has team scope.
Regenerate the API key in Settings > Platform if you suspect it is invalid.
Ensure the machine has outbound internet access to Oz.
Check that no firewall rules are blocking WebSocket connections to wss://oz.warp.dev.
Increase log verbosity with --log-level debug to see connection details.
Tasks not being picked up
Confirm the worker is running and connected (check the worker logs).
Verify the --host parameter matches your --worker-id exactly (case-sensitive).
Ensure the worker's team matches the team creating the task.
Task failures
Check Docker is running: docker info.
Review task logs in the Oz dashboard or via session sharing.
Use --no-cleanup to keep the container around for inspection after failure.
Use --log-level debug to see detailed container creation and execution logs.
Ensure the worker machine has sufficient resources (CPU, memory, disk).
If using a custom Docker image, verify it is glibc-based (not Alpine/musl).
Verify the environment image architecture matches the worker's Docker daemon platform (e.g., an amd64 image on an amd64 daemon).
Image pull failures
If using a private registry, ensure Docker credentials are available to the worker (see Private Docker registries).
Try pulling the image manually with docker pull <image> on the worker host to verify access and diagnose authentication issues.
Verify the image exists and the tag is correct.
Check network connectivity to the registry.