fix: use openai-responses API for self-hosted providers

Self-hosted OpenAI-compatible providers (vLLM, Ollama, etc.) serve the modern
/v1/chat/completions endpoint, not the legacy /v1/completions endpoint.
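
For reference, a minimal request against the modern endpoint looks roughly like
the sketch below; the base URL, API key, and model id are placeholders, not
values from this repository.

```ts
// Sketch of a /v1/chat/completions call to a self-hosted server.
// Base URL, API key, and model id are placeholders.
const res = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer sk-local-placeholder",
  },
  body: JSON.stringify({
    model: "my-local-model",
    messages: [{ role: "user", content: "ping" }],
  }),
});
console.log(res.status); // modern servers answer here; the legacy /v1/completions path may 404
```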

The recent refactoring in commit 3b79494 inadvertently set the API type to
'openai-completions', which targets /v1/completions. This caused 404 errors
with providers such as vLLM that only implement /v1/chat/completions.

This change switches the API type to 'openai-responses', which correctly
targets the /v1/chat/completions endpoint that vLLM and other modern
OpenAI-compatible servers implement.
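
For illustration, a provider entry produced with the corrected API type would
resemble the sketch below; the provider id, base URL, API key, and model id are
made-up examples, and the model entry shape is abbreviated.

```ts
// Illustrative shape of a self-hosted provider config after this change.
// Provider id, baseUrl, apiKey, and model id are hypothetical.
const providers = {
  vllm: {
    baseUrl: "http://localhost:8000/v1",
    api: "openai-responses", // was "openai-completions", which hit /v1/completions
    apiKey: "sk-local-placeholder",
    models: [
      { id: "my-local-model" }, // model entry fields abbreviated; see the diff below
    ],
  },
};
```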

Fixes openclaw/openclaw#50719
OpenClaw 2026-03-20 08:15:42 +07:00
parent 6309b1da6c
commit a39def2079


@@ -67,7 +67,7 @@ function buildOpenAICompatibleSelfHostedProviderConfig(params: {
       ...params.cfg.models?.providers,
       [params.providerId]: {
         baseUrl: params.baseUrl,
-        api: "openai-completions",
+        api: "openai-responses",
         apiKey: params.providerApiKey,
         models: [
           {