fix: use openai-responses API for self-hosted providers
Local OpenAI-compatible providers (vLLM, Ollama, etc.) serve the modern /v1/chat/completions endpoint, not the legacy /v1/completions endpoint. The refactoring in commit 3b79494 inadvertently set the API type to 'openai-completions', which targets /v1/completions and therefore produces 404 errors from providers such as vLLM that only implement /v1/chat/completions.

This change sets the API type to 'openai-responses', which targets the /v1/chat/completions endpoint that vLLM and other modern OpenAI-compatible servers implement.

Fixes openclaw/openclaw#50719
parent 6309b1da6c
commit a39def2079
@@ -67,7 +67,7 @@ function buildOpenAICompatibleSelfHostedProviderConfig(params: {
       ...params.cfg.models?.providers,
       [params.providerId]: {
         baseUrl: params.baseUrl,
-        api: "openai-completions",
+        api: "openai-responses",
         apiKey: params.providerApiKey,
         models: [
           {
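For context, here is a minimal sketch of the kind of provider entry this builder assembles after the change. Only the field names visible in the hunk (baseUrl, api, apiKey, models) and the new "openai-responses" value come from the diff; the interface definition, the provider id "vllm", the base URL, and the model id below are illustrative assumptions, not values from the repository.

// Sketch only: the real config types live in the openclaw codebase. This
// hypothetical interface mirrors the field names visible in the hunk above.
interface SelfHostedProviderConfig {
  baseUrl: string;
  api: "openai-responses" | "openai-completions";
  apiKey?: string;
  models: { id: string }[]; // inner model shape is assumed, not shown in the hunk
}

// Example entry for a local vLLM server; the id, URL, and model name are made up.
const providers: Record<string, SelfHostedProviderConfig> = {
  vllm: {
    baseUrl: "http://localhost:8000/v1",
    api: "openai-responses", // the value this commit switches to
    apiKey: "unused-local-key", // many local servers accept any placeholder token
    models: [{ id: "meta-llama/Llama-3.1-8B-Instruct" }],
  },
};

In this sketch the union type keeps the legacy 'openai-completions' identifier expressible for servers that genuinely only speak /v1/completions, while self-hosted defaults use the chat-oriented value.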