---
summary: "Model provider overview with example configs + CLI flows"
read_when:
- You need a provider-by-provider model setup reference
- You want example configs or CLI onboarding commands for model providers
title: "Model Providers"
---
# Model providers
This page covers **LLM/model providers** (not chat channels like WhatsApp/Telegram).
For model selection rules, see [/concepts/models](/concepts/models).
## Quick rules
- Model refs use `provider/model` (example: `opencode/claude-opus-4-6`).
- If you set `agents.defaults.models`, it becomes the allowlist (see the sketch after this list).
- CLI helpers: `openclaw setup --wizard`, `openclaw models list`, `openclaw models set <provider/model>`.
- Provider plugins can inject model catalogs via `registerProvider({ catalog })`;
OpenClaw merges that output into `models.providers` before writing
`models.json`.
- Provider manifests can declare `providerAuthEnvVars` so generic env-based
  auth probes do not need to load the plugin runtime. The remaining core env-var
  map covers only non-plugin (core) providers and a few generic-precedence
  cases such as Anthropic API-key-first onboarding.
- Provider plugins can also own provider runtime behavior via
`resolveDynamicModel`, `prepareDynamicModel`, `normalizeResolvedModel`,
`capabilities`, `prepareExtraParams`, `wrapStreamFn`,
`isCacheTtlEligible`, `buildMissingAuthMessage`,
`suppressBuiltInModel`, `augmentModelCatalog`, `isBinaryThinking`,
`supportsXHighThinking`, `resolveDefaultThinkingLevel`,
`isModernModelRef`, `prepareRuntimeAuth`, `resolveUsageAuth`, and
`fetchUsageSnapshot`.
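
As a sketch of the first two rules together: a `provider/model` primary plus an `agents.defaults.models` map that doubles as the allowlist (the model refs and empty per-model entries here are illustrative):

```json5
{
  agents: {
    defaults: {
      model: { primary: "opencode/claude-opus-4-6" },
      // Listing models here also restricts agents to exactly these refs.
      models: {
        "opencode/claude-opus-4-6": {},
        "openai/gpt-5.4": {},
      },
    },
  },
}
```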
## Plugin-owned provider behavior
Provider plugins can now own most provider-specific logic while OpenClaw keeps
the generic inference loop.
Typical split:
- `auth[].run` / `auth[].runNonInteractive`: provider owns onboarding/login
flows for `openclaw onboard`, `openclaw models auth`, and headless setup
- `wizard.onboarding` / `wizard.modelPicker`: provider owns auth-choice labels,
hints, and setup entries in onboarding/model pickers
- `catalog`: provider appears in `models.providers`
- `resolveDynamicModel`: provider accepts model ids not present in the local
static catalog yet
- `prepareDynamicModel`: provider needs a metadata refresh before retrying
dynamic resolution
- `normalizeResolvedModel`: provider needs transport or base URL rewrites
- `capabilities`: provider publishes transcript/tooling/provider-family quirks
- `prepareExtraParams`: provider defaults or normalizes per-model request params
- `wrapStreamFn`: provider applies request headers/body/model compat wrappers
- `isCacheTtlEligible`: provider decides which upstream model ids support prompt-cache TTL
- `buildMissingAuthMessage`: provider replaces the generic auth-store error
with a provider-specific recovery hint
- `suppressBuiltInModel`: provider hides stale upstream rows and can return a
vendor-owned error for direct resolution failures
- `augmentModelCatalog`: provider appends synthetic/final catalog rows after
discovery and config merging
- `isBinaryThinking`: provider owns binary on/off thinking UX
- `supportsXHighThinking`: provider opts selected models into `xhigh`
- `resolveDefaultThinkingLevel`: provider owns default `/think` policy for a
model family
- `isModernModelRef`: provider owns live/smoke preferred-model matching
- `prepareRuntimeAuth`: provider turns a configured credential into a
  short-lived runtime token
- `resolveUsageAuth`: provider resolves usage/quota credentials for `/usage`
and related status/reporting surfaces
- `fetchUsageSnapshot`: provider owns the usage endpoint fetch/parsing while
core still owns the summary shell and formatting
Current bundled examples:
- `anthropic`: Claude 4.6 forward-compat fallback, usage endpoint fetching,
and cache-TTL/provider-family metadata
- `openrouter`: pass-through model ids, request wrappers, provider capability
hints, and cache-TTL policy
- `github-copilot`: forward-compat model fallback, Claude-thinking transcript
hints, runtime token exchange, and usage endpoint fetching
- `openai`: GPT-5.4 forward-compat fallback, direct OpenAI transport
normalization, Codex-aware missing-auth hints, Spark suppression, synthetic
OpenAI/Codex catalog rows, thinking/live-model policy, and
provider-family metadata
- `google` and `google-gemini-cli`: Gemini 3.1 forward-compat fallback and
modern-model matching; Gemini CLI OAuth also owns usage-token parsing and
quota endpoint fetching for usage surfaces
- `moonshot`: shared transport, plugin-owned thinking payload normalization
- `kilocode`: shared transport, plugin-owned request headers, reasoning payload
normalization, Gemini transcript hints, and cache-TTL policy
- `zai`: GLM-5 forward-compat fallback, `tool_stream` defaults, cache-TTL
policy, binary-thinking/live-model policy, and usage auth + quota fetching
- `mistral`, `opencode`, and `opencode-go`: plugin-owned capability metadata
- `byteplus`, `cloudflare-ai-gateway`, `huggingface`, `kimi-coding`,
`minimax-portal`, `modelstudio`, `nvidia`, `qianfan`, `qwen-portal`,
`synthetic`, `together`, `venice`, `vercel-ai-gateway`, and `volcengine`:
plugin-owned catalogs only
- `minimax` and `xiaomi`: plugin-owned catalogs plus usage auth/snapshot logic
The bundled `openai` plugin now owns both provider ids: `openai` and
`openai-codex`.
That covers providers that still fit OpenClaw's normal transports. A provider
that needs a totally custom request executor is a separate, deeper extension
surface.
## API key rotation
- OpenClaw supports generic API key rotation for selected providers.
- Configure multiple keys via:
- `OPENCLAW_LIVE_<PROVIDER>_KEY` (single live override, highest priority)
- `<PROVIDER>_API_KEYS` (comma or semicolon list)
- `<PROVIDER>_API_KEY` (primary key)
- `<PROVIDER>_API_KEY_*` (numbered list, e.g. `<PROVIDER>_API_KEY_1`)
- For Google providers, `GOOGLE_API_KEY` is also included as a fallback.
- Key selection order preserves priority and deduplicates values.
- Requests are retried with the next key only on rate-limit responses (for example `429`, `rate_limit`, `quota`, `resource exhausted`).
- Non-rate-limit failures fail immediately; no key rotation is attempted.
- When all candidate keys fail, the final error is returned from the last attempt.
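
For example, rotating OpenAI keys (a sketch; the same env-var pattern applies to the other rotation-enabled providers):

```bash
# Highest priority: single live override
export OPENCLAW_LIVE_OPENAI_KEY="sk-live-..."
# Comma- or semicolon-separated rotation list
export OPENAI_API_KEYS="sk-first-...;sk-second-..."
# Numbered entries are also picked up
export OPENAI_API_KEY_1="sk-third-..."
```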
## Built-in providers (pi-ai catalog)
OpenClaw ships with the pi-ai catalog. These providers require **no**
`models.providers` config; just set auth + pick a model.
### OpenAI
- Provider: `openai`
- Auth: `OPENAI_API_KEY`
- Optional rotation: `OPENAI_API_KEYS`, `OPENAI_API_KEY_1`, `OPENAI_API_KEY_2`, plus `OPENCLAW_LIVE_OPENAI_KEY` (single override)
- Example models: `openai/gpt-5.4`, `openai/gpt-5.4-pro`
- CLI: `openclaw setup --wizard --auth-choice openai-api-key`
- Default transport is `auto` (WebSocket-first, SSE fallback)
- Override per model via `agents.defaults.models["openai/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
- OpenAI Responses WebSocket warm-up defaults to enabled via `params.openaiWsWarmup` (`true`/`false`)
- OpenAI priority processing can be enabled via `agents.defaults.models["openai/<model>"].params.serviceTier`
- OpenAI fast mode can be enabled per model via `agents.defaults.models["<provider>/<model>"].params.fastMode`
- `openai/gpt-5.3-codex-spark` is intentionally suppressed in OpenClaw because the live OpenAI API rejects it; Spark is treated as Codex-only
```json5
{
agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
}
```
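
To pin the transport behavior described above for one model, the documented params look like this (a sketch forcing SSE and disabling WebSocket warm-up):

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          // transport: "sse", "websocket", or "auto" (the default)
          params: { transport: "sse", openaiWsWarmup: false },
        },
      },
    },
  },
}
```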
### Anthropic
- Provider: `anthropic`
- Auth: `ANTHROPIC_API_KEY` or `claude setup-token`
- Optional rotation: `ANTHROPIC_API_KEYS`, `ANTHROPIC_API_KEY_1`, `ANTHROPIC_API_KEY_2`, plus `OPENCLAW_LIVE_ANTHROPIC_KEY` (single override)
- Example model: `anthropic/claude-opus-4-6`
- CLI: `openclaw setup --wizard --auth-choice token` (paste setup-token) or `openclaw models auth paste-token --provider anthropic`
- Direct API-key models support the shared `/fast` toggle and `params.fastMode`; OpenClaw maps that to Anthropic `service_tier` (`auto` vs `standard_only`)
- Policy note: setup-token support is technical compatibility; Anthropic has blocked some subscription usage outside Claude Code in the past. Verify current Anthropic terms and decide based on your risk tolerance.
- Recommendation: Anthropic API key auth is the safer, recommended path over subscription setup-token auth.
```json5
{
agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```
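
To enable fast mode from config rather than the `/fast` toggle (a sketch assuming `fastMode` is a boolean; OpenClaw maps it to Anthropic `service_tier`, `auto` vs `standard_only`):

```json5
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-6": {
          params: { fastMode: true },
        },
      },
    },
  },
}
```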
### OpenAI Codex
- Provider: `openai-codex`
- Auth: OAuth (ChatGPT)
- Example model: `openai-codex/gpt-5.4`
- CLI: `openclaw setup --wizard --auth-choice openai-codex` or `openclaw models auth login --provider openai-codex`
- Default transport is `auto` (WebSocket-first, SSE fallback)
- Override per model via `agents.defaults.models["openai-codex/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
- Shares the same `/fast` toggle and `params.fastMode` config as direct `openai/*`
- `openai-codex/gpt-5.3-codex-spark` remains available when the Codex OAuth catalog exposes it; entitlement-dependent
- Policy note: OpenAI Codex OAuth is explicitly supported for external tools/workflows like OpenClaw.
```json5
{
agents: { defaults: { model: { primary: "openai-codex/gpt-5.4" } } },
}
```
### OpenCode
- Auth: `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`)
- Zen runtime provider: `opencode`
- Go runtime provider: `opencode-go`
- Example models: `opencode/claude-opus-4-6`, `opencode-go/kimi-k2.5`
- CLI: `openclaw setup --wizard --auth-choice opencode-zen` or `openclaw setup --wizard --auth-choice opencode-go`
```json5
{
agents: { defaults: { model: { primary: "opencode/claude-opus-4-6" } } },
}
```
### Google Gemini (API key)
- Provider: `google`
- Auth: `GEMINI_API_KEY`
- Optional rotation: `GEMINI_API_KEYS`, `GEMINI_API_KEY_1`, `GEMINI_API_KEY_2`, `GOOGLE_API_KEY` fallback, and `OPENCLAW_LIVE_GEMINI_KEY` (single override)
- Example models: `google/gemini-3.1-pro-preview`, `google/gemini-3-flash-preview`
- Compatibility: legacy OpenClaw config using `google/gemini-3.1-flash-preview` is normalized to `google/gemini-3-flash-preview`
- CLI: `openclaw setup --wizard --auth-choice gemini-api-key`
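
Config follows the same shape as the other built-in providers:

```json5
{
  agents: { defaults: { model: { primary: "google/gemini-3.1-pro-preview" } } },
}
```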
### Google Vertex and Gemini CLI
- Providers: `google-vertex`, `google-gemini-cli`
- Auth: Vertex uses gcloud ADC; Gemini CLI uses its OAuth flow
- Caution: Gemini CLI OAuth in OpenClaw is an unofficial integration. Some users have reported Google account restrictions after using third-party clients. Review Google terms and use a non-critical account if you choose to proceed.
- Gemini CLI OAuth is shipped as part of the bundled `google` plugin.
- Enable: `openclaw plugins enable google`
- Login: `openclaw models auth login --provider google-gemini-cli --set-default`
- Note: you do **not** paste a client id or secret into `openclaw.json`. The CLI login flow stores
tokens in auth profiles on the gateway host.
### Z.AI (GLM)
- Provider: `zai`
- Auth: `ZAI_API_KEY`
- Example model: `zai/glm-5`
- CLI: `openclaw setup --wizard --auth-choice zai-api-key`
- Aliases: `z.ai/*` and `z-ai/*` normalize to `zai/*`
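
Example config:

```json5
{
  agents: { defaults: { model: { primary: "zai/glm-5" } } },
}
```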
### Vercel AI Gateway
- Provider: `vercel-ai-gateway`
- Auth: `AI_GATEWAY_API_KEY`
- Example model: `vercel-ai-gateway/anthropic/claude-opus-4.6`
- CLI: `openclaw setup --wizard --auth-choice ai-gateway-api-key`
### Kilo Gateway
- Provider: `kilocode`
- Auth: `KILOCODE_API_KEY`
- Example model: `kilocode/anthropic/claude-opus-4.6`
- CLI: `openclaw setup --wizard --kilocode-api-key <key>`
- Base URL: `https://api.kilo.ai/api/gateway/`
- Expanded built-in catalog includes GLM-5 Free, MiniMax M2.5 Free, GPT-5.2, Gemini 3 Pro Preview, Gemini 3 Flash Preview, Grok Code Fast 1, and Kimi K2.5.
See [/providers/kilocode](/providers/kilocode) for setup details.
### Other bundled provider plugins
- OpenRouter: `openrouter` (`OPENROUTER_API_KEY`)
- Example model: `openrouter/anthropic/claude-sonnet-4-5`
- Kilo Gateway: `kilocode` (`KILOCODE_API_KEY`)
- Example model: `kilocode/anthropic/claude-opus-4.6`
- MiniMax: `minimax` (`MINIMAX_API_KEY`)
- Moonshot: `moonshot` (`MOONSHOT_API_KEY`)
- Kimi Coding: `kimi-coding` (`KIMI_API_KEY` or `KIMICODE_API_KEY`)
- Qianfan: `qianfan` (`QIANFAN_API_KEY`)
- Model Studio: `modelstudio` (`MODELSTUDIO_API_KEY`)
- NVIDIA: `nvidia` (`NVIDIA_API_KEY`)
- Together: `together` (`TOGETHER_API_KEY`)
- Venice: `venice` (`VENICE_API_KEY`)
- Xiaomi: `xiaomi` (`XIAOMI_API_KEY`)
- Vercel AI Gateway: `vercel-ai-gateway` (`AI_GATEWAY_API_KEY`)
- Hugging Face Inference: `huggingface` (`HUGGINGFACE_HUB_TOKEN` or `HF_TOKEN`)
- Cloudflare AI Gateway: `cloudflare-ai-gateway` (`CLOUDFLARE_AI_GATEWAY_API_KEY`)
- Volcengine: `volcengine` (`VOLCANO_ENGINE_API_KEY`)
- BytePlus: `byteplus` (`BYTEPLUS_API_KEY`)
2026-01-10 21:37:38 +01:00
- xAI: `xai` (`XAI_API_KEY`)
- Mistral: `mistral` (`MISTRAL_API_KEY`)
- Example model: `mistral/mistral-large-latest`
- CLI: `openclaw setup --wizard --auth-choice mistral-api-key`
- Groq: `groq` (`GROQ_API_KEY`)
- Cerebras: `cerebras` (`CEREBRAS_API_KEY`)
- GLM models on Cerebras use ids `zai-glm-4.7` and `zai-glm-4.6`.
- OpenAI-compatible base URL: `https://api.cerebras.ai/v1`.
- GitHub Copilot: `github-copilot` (`COPILOT_GITHUB_TOKEN` / `GH_TOKEN` / `GITHUB_TOKEN`)
- Hugging Face Inference example model: `huggingface/deepseek-ai/DeepSeek-R1`; CLI: `openclaw setup --wizard --auth-choice huggingface-api-key`. See [Hugging Face (Inference)](/providers/huggingface).
## Providers via `models.providers` (custom/base URL)
Use `models.providers` (or `models.json`) to add **custom** providers or
OpenAI/Anthropic-compatible proxies.
Many of the bundled provider plugins below already publish a default catalog.
Use explicit `models.providers.<id>` entries only when you want to override the
default base URL, headers, or model list.
### Moonshot AI (Kimi)
Moonshot uses OpenAI-compatible endpoints, so configure it as a custom provider:
- Provider: `moonshot`
- Auth: `MOONSHOT_API_KEY`
- Example model: `moonshot/kimi-k2.5`
Kimi K2 model IDs:
[//]: # "moonshot-kimi-k2-model-refs:start"
- `moonshot/kimi-k2.5`
- `moonshot/kimi-k2-0905-preview`
- `moonshot/kimi-k2-turbo-preview`
- `moonshot/kimi-k2-thinking`
- `moonshot/kimi-k2-thinking-turbo`
[//]: # "moonshot-kimi-k2-model-refs:end"
```json5
{
agents: {
defaults: { model: { primary: "moonshot/kimi-k2.5" } },
},
models: {
mode: "merge",
providers: {
moonshot: {
baseUrl: "https://api.moonshot.ai/v1",
apiKey: "${MOONSHOT_API_KEY}",
api: "openai-completions",
models: [{ id: "kimi-k2.5", name: "Kimi K2.5" }],
},
},
},
}
```
### Kimi Coding
Kimi Coding uses Moonshot AI's Anthropic-compatible endpoint:
- Provider: `kimi-coding`
- Auth: `KIMI_API_KEY`
- Example model: `kimi-coding/k2p5`
```json5
{
env: { KIMI_API_KEY: "sk-..." },
agents: {
defaults: { model: { primary: "kimi-coding/k2p5" } },
},
}
```
### Qwen OAuth (free tier)
Qwen provides OAuth access to Qwen Coder + Vision via a device-code flow.
The bundled provider plugin is enabled by default, so just log in:
```bash
openclaw models auth login --provider qwen-portal --set-default
```
Model refs:
- `qwen-portal/coder-model`
- `qwen-portal/vision-model`
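
To set the default from config rather than `--set-default`:

```json5
{
  agents: { defaults: { model: { primary: "qwen-portal/coder-model" } } },
}
```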
See [/providers/qwen](/providers/qwen) for setup details and notes.
### Volcano Engine (Doubao)
Volcano Engine (火山引擎) provides access to Doubao and other models in China.
- Provider: `volcengine` (coding: `volcengine-plan`)
- Auth: `VOLCANO_ENGINE_API_KEY`
- Example model: `volcengine/doubao-seed-1-8-251228`
- CLI: `openclaw setup --wizard --auth-choice volcengine-api-key`
```json5
{
agents: {
defaults: { model: { primary: "volcengine/doubao-seed-1-8-251228" } },
},
}
```
Available models:
- `volcengine/doubao-seed-1-8-251228` (Doubao Seed 1.8)
- `volcengine/doubao-seed-code-preview-251028`
- `volcengine/kimi-k2-5-260127` (Kimi K2.5)
- `volcengine/glm-4-7-251222` (GLM 4.7)
- `volcengine/deepseek-v3-2-251201` (DeepSeek V3.2 128K)
Coding models (`volcengine-plan`):
- `volcengine-plan/ark-code-latest`
- `volcengine-plan/doubao-seed-code`
- `volcengine-plan/kimi-k2.5`
- `volcengine-plan/kimi-k2-thinking`
- `volcengine-plan/glm-4.7`
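
Coding-plan models are set as primary the same way, using one of the ids above:

```json5
{
  agents: {
    defaults: { model: { primary: "volcengine-plan/glm-4.7" } },
  },
}
```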
### BytePlus (International)
BytePlus ARK provides access to the same models as Volcano Engine for international users.
- Provider: `byteplus` (coding: `byteplus-plan`)
- Auth: `BYTEPLUS_API_KEY`
- Example model: `byteplus/seed-1-8-251228`
- CLI: `openclaw setup --wizard --auth-choice byteplus-api-key`
```json5
{
agents: {
defaults: { model: { primary: "byteplus/seed-1-8-251228" } },
},
}
```
Available models:
- `byteplus/seed-1-8-251228` (Seed 1.8)
- `byteplus/kimi-k2-5-260127` (Kimi K2.5)
- `byteplus/glm-4-7-251222` (GLM 4.7)
Coding models (`byteplus-plan`):
- `byteplus-plan/ark-code-latest`
- `byteplus-plan/doubao-seed-code`
- `byteplus-plan/kimi-k2.5`
- `byteplus-plan/kimi-k2-thinking`
- `byteplus-plan/glm-4.7`
### Synthetic
Synthetic provides Anthropic-compatible models behind the `synthetic` provider:
- Provider: `synthetic`
- Auth: `SYNTHETIC_API_KEY`
- Example model: `synthetic/hf:MiniMaxAI/MiniMax-M2.5`
- CLI: `openclaw setup --wizard --auth-choice synthetic-api-key`
```json5
{
agents: {
defaults: { model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.5" } },
},
models: {
mode: "merge",
providers: {
synthetic: {
baseUrl: "https://api.synthetic.new/anthropic",
apiKey: "${SYNTHETIC_API_KEY}",
api: "anthropic-messages",
models: [{ id: "hf:MiniMaxAI/MiniMax-M2.5", name: "MiniMax M2.5" }],
},
},
},
}
```
### MiniMax
MiniMax is configured via `models.providers` because it uses custom endpoints:
- MiniMax (Anthropic-compatible): `--auth-choice minimax-api`
- Auth: `MINIMAX_API_KEY`
See [/providers/minimax](/providers/minimax) for setup details, model options, and config snippets.
### Ollama
Ollama ships as a bundled provider plugin and uses Ollama's native API:
- Provider: `ollama`
- Auth: None required (local server)
- Example model: `ollama/llama3.3`
- Installation: [https://ollama.com/download](https://ollama.com/download)
```bash
# Install Ollama, then pull a model:
ollama pull llama3.3
```
```json5
{
agents: {
defaults: { model: { primary: "ollama/llama3.3" } },
},
}
```
Ollama is detected locally at `http://127.0.0.1:11434` when you opt in with
`OLLAMA_API_KEY`, and the bundled provider plugin adds Ollama directly to
`openclaw setup --wizard` and the model picker. See [/providers/ollama](/providers/ollama)
for onboarding, cloud/local mode, and custom configuration.
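
The opt-in mirrors the vLLM/SGLang pattern below (a sketch; any value should work when the local server does not enforce auth):

```bash
export OLLAMA_API_KEY="ollama-local"
```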
### vLLM
vLLM ships as a bundled provider plugin for local/self-hosted OpenAI-compatible
servers:
- Provider: `vllm`
- Auth: Optional (depends on your server)
- Default base URL: `http://127.0.0.1:8000/v1`
To opt in to auto-discovery locally (any value works if your server doesn't enforce auth):
```bash
export VLLM_API_KEY="vllm-local"
```
Then set a model (replace with one of the IDs returned by `/v1/models`):
```json5
{
agents: {
defaults: { model: { primary: "vllm/your-model-id" } },
},
}
```
See [/providers/vllm](/providers/vllm) for details.
### SGLang
SGLang ships as a bundled provider plugin for fast self-hosted
OpenAI-compatible servers:
- Provider: `sglang`
- Auth: Optional (depends on your server)
- Default base URL: `http://127.0.0.1:30000/v1`
To opt in to auto-discovery locally (any value works if your server does not
enforce auth):
```bash
export SGLANG_API_KEY="sglang-local"
```
Then set a model (replace with one of the IDs returned by `/v1/models`):
```json5
{
agents: {
defaults: { model: { primary: "sglang/your-model-id" } },
},
}
```
See [/providers/sglang](/providers/sglang) for details.
### Local proxies (LM Studio, vLLM, LiteLLM, etc.)
Example (OpenAI-compatible):
```json5
{
agents: {
defaults: {
model: { primary: "lmstudio/minimax-m2.5-gs32" },
models: { "lmstudio/minimax-m2.5-gs32": { alias: "Minimax" } },
},
},
models: {
providers: {
lmstudio: {
baseUrl: "http://localhost:1234/v1",
apiKey: "LMSTUDIO_KEY",
api: "openai-completions",
models: [
{
id: "minimax-m2.5-gs32",
name: "MiniMax M2.5",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 200000,
maxTokens: 8192,
},
],
},
},
},
}
```
Notes:
- For custom providers, `reasoning`, `input`, `cost`, `contextWindow`, and `maxTokens` are optional.
When omitted, OpenClaw defaults to:
- `reasoning: false`
- `input: ["text"]`
- `cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 }`
- `contextWindow: 200000`
- `maxTokens: 8192`
- Recommended: set explicit values that match your proxy/model limits.
- For `api: "openai-completions"` on non-native endpoints (any non-empty `baseUrl` whose host is not `api.openai.com`), OpenClaw forces `compat.supportsDeveloperRole: false` to avoid provider 400 errors for unsupported `developer` roles.
- If `baseUrl` is empty/omitted, OpenClaw keeps the default OpenAI behavior (which resolves to `api.openai.com`).
- For safety, an explicit `compat.supportsDeveloperRole: true` is still overridden on non-native `openai-completions` endpoints.
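
As a sketch of that override (assuming `compat` sits on the provider entry): even an explicit `true` is rewritten on a non-native host:

```json5
{
  models: {
    providers: {
      lmstudio: {
        baseUrl: "http://localhost:1234/v1",
        api: "openai-completions",
        // Non-native host, so OpenClaw still forces this to false at runtime.
        compat: { supportsDeveloperRole: true },
        models: [{ id: "minimax-m2.5-gs32", name: "MiniMax M2.5" }],
      },
    },
  },
}
```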
## CLI examples
```bash
openclaw setup --wizard --auth-choice opencode-zen
openclaw models set opencode/claude-opus-4-6
openclaw models list
```
See also: [/gateway/configuration](/gateway/configuration) for full configuration examples.