---
summary: "Deploy OpenClaw Gateway to a Kubernetes cluster with Kustomize"
read_when:
- You want to run OpenClaw on a Kubernetes cluster
- You want to test OpenClaw in a Kubernetes environment
title: "Kubernetes"
---
# OpenClaw on Kubernetes
A minimal starting point for running OpenClaw on Kubernetes — not a production-ready deployment. It covers the core resources and is meant to be adapted to your environment.
## Why not Helm?
OpenClaw is a single container with some config files. The interesting customization is in agent content (markdown files, skills, config overrides), not infrastructure templating. Kustomize handles overlays without the overhead of a Helm chart. If your deployment grows more complex, a Helm chart can be layered on top of these manifests.
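As a sketch of that overlay workflow, a minimal Kustomize overlay on top of the shipped base might look like this (the overlay directory name and patch file are assumptions for illustration, not files shipped in the repo):

```yaml
# my-overlay/kustomization.yaml -- hypothetical overlay on the shipped base
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../scripts/k8s/manifests    # the Kustomize base in this repo
patches:
  - path: configmap-patch.yaml  # your local overrides, e.g. agent content
```

An overlay like this would be applied with `kubectl apply -k my-overlay/`.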
## Local testing with Kind
If you don't have a cluster, create one locally with [Kind](https://kind.sigs.k8s.io/):
```bash
./scripts/k8s/create-kind.sh # auto-detects docker or podman
./scripts/k8s/create-kind.sh --delete # tear down
```
Then deploy as usual with `./scripts/k8s/deploy.sh`.
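The runtime auto-detection can be sketched roughly like this (a simplified approximation of what `create-kind.sh` might do, not the script itself; `KIND_EXPERIMENTAL_PROVIDER` is the real variable Kind reads to select podman):

```bash
# Pick a container runtime: prefer docker, fall back to podman.
if command -v docker >/dev/null 2>&1; then
  PROVIDER="docker"
elif command -v podman >/dev/null 2>&1; then
  PROVIDER="podman"
else
  PROVIDER=""
  echo "no container runtime found" >&2
fi

# Kind honors this environment variable when running on podman.
export KIND_EXPERIMENTAL_PROVIDER="$PROVIDER"
# kind create cluster --name openclaw   # cluster name is an assumption
```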
## Step by step
### 1) Deploy
**Option A** — API key in environment (one step):
```bash
# Replace with your provider: ANTHROPIC, GEMINI, OPENAI, or OPENROUTER
export <PROVIDER>_API_KEY="..."
./scripts/k8s/deploy.sh
```
The script creates a Kubernetes Secret with the API key and an auto-generated gateway token, then deploys. If the Secret already exists, it preserves the current gateway token and any provider keys not being changed.
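In outline, the Secret creation might look something like this (a hedged sketch: the secret name, namespace, and key names are assumptions about the script, not its actual code; the temp-file approach mirrors the "no secret material in the repo checkout" behavior described below):

```bash
# Render a Secret manifest with an auto-generated gateway token in a temp
# file, then apply it directly to the cluster.
SECRET_FILE="$(mktemp)"
TOKEN="$(openssl rand -hex 32)"   # auto-generated gateway token

cat > "$SECRET_FILE" <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: openclaw-secrets   # assumed name
  namespace: openclaw      # assumed namespace
type: Opaque
stringData:
  OPENCLAW_GATEWAY_TOKEN: "${TOKEN}"
  ANTHROPIC_API_KEY: "${ANTHROPIC_API_KEY:-}"
EOF

# kubectl apply -f "$SECRET_FILE" && rm -f "$SECRET_FILE"
```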
**Option B** — create the secret separately:
```bash
export <PROVIDER>_API_KEY="..."
./scripts/k8s/deploy.sh --create-secret
./scripts/k8s/deploy.sh
```
Use `--show-token` with either command if you want the token printed to stdout for local testing.
The default manifests bind the gateway to loopback inside the pod. That works with `kubectl port-forward`, but a Kubernetes `Service` or Ingress sends traffic to the pod IP, which a loopback-only bind does not accept.
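With the loopback default, access goes through a port-forward, for example (the namespace and deployment name here are assumptions; 18789 is the Control UI port used elsewhere in this setup):

```bash
# Forward the gateway port from the pod to your machine.
# "openclaw" namespace and deployment name are assumptions; adjust to yours.
kubectl -n openclaw port-forward deploy/openclaw 18789:18789
# Then open http://127.0.0.1:18789 locally.
```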
If you want to expose the gateway through an Ingress or load balancer:
- Change the gateway bind in `scripts/k8s/manifests/configmap.yaml` from `loopback` to a non-loopback bind that matches your deployment model
- Keep gateway auth enabled and use a proper TLS-terminated entrypoint
- Configure the Control UI for remote access using the supported web security model (for example HTTPS/Tailscale Serve and explicit allowed origins when needed)
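For illustration only, a TLS-terminated Ingress for that setup could look roughly like this (every name here, including the host, Service, and TLS secret, is an assumption; the repo does not ship an Ingress, and it only works after the bind change above plus a Service in front of the pod):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openclaw
  namespace: openclaw
spec:
  tls:
    - hosts: [openclaw.example.com]
      secretName: openclaw-tls   # certificate managed out of band
  rules:
    - host: openclaw.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: openclaw   # requires a Service and a non-loopback bind
                port:
                  number: 18789
```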
## Re-deploy
```bash
./scripts/k8s/deploy.sh
```
This applies all manifests and restarts the pod to pick up any config or secret changes.
## Teardown
```bash
./scripts/k8s/deploy.sh --delete
```
This deletes the namespace and all resources in it, including the PVC.
## Architecture notes
- The gateway binds to loopback inside the pod by default, so the included setup is for `kubectl port-forward`
- No cluster-scoped resources — everything lives in a single namespace
- Security: `readOnlyRootFilesystem`, `drop: ALL` capabilities, non-root user (UID 1000)
- The default config keeps the Control UI on the safer local-access path: loopback bind plus `kubectl port-forward` to `http://127.0.0.1:18789`
- If you move beyond localhost access, use the supported remote model: HTTPS/Tailscale plus the appropriate gateway bind and Control UI origin settings
- Secrets are generated in a temp directory and applied directly to the cluster — no secret material is written to the repo checkout
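The hardening bullets above correspond to a container `securityContext` along these lines (a sketch consistent with the listed settings, not a verbatim copy of `deployment.yaml`):

```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
```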
## File structure
```
scripts/k8s/
├── deploy.sh # Creates namespace + secret, deploys via kustomize
├── create-kind.sh # Local Kind cluster (auto-detects docker/podman)
└── manifests/
├── kustomization.yaml # Kustomize base
├── configmap.yaml # openclaw.json + AGENTS.md
├── deployment.yaml # Pod spec with security hardening