fix: prefer exact provider alias over normalized lookup for image config fallback

Use params.provider (exact key, e.g. "nvidia-api") for direct config
lookup before falling back to findNormalizedProviderValue. This prevents
ambiguity when configs contain both an alias and its canonical name
(e.g. "nvidia-api" and "nvidia"), which could cause the wrong provider
block to be selected and miss the model definition.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: JaiminBhojani
Date: 2026-03-20 23:47:27 +05:30
Parent: 35a29f71f2
Commit: 8d0db12885
2 changed files with 20 additions and 13 deletions

@@ -294,10 +294,11 @@ describe("describeImageWithModel", () => {
     expect(completeMock).toHaveBeenCalledOnce();
   });
 
-  it("matches provider-prefixed model IDs against the original provider alias (#33185)", async () => {
-    // When provider is "nvidia-api", resolvedRef.provider becomes "nvidia" after
-    // normalization, but the user's config stores "nvidia-api/meta-llama". The
-    // lookup must also try the original params.provider prefix.
+  it("prefers exact provider alias over normalized lookup for config fallback (#33185)", async () => {
+    // When provider is "nvidia-api", resolvedRef.provider normalizes to "nvidia".
+    // If the config contains both "nvidia" and "nvidia-api" entries, the exact
+    // params.provider key must be used so the nvidia-api/<model> definition is
+    // found rather than falling into the "nvidia" block.
     resolveModelWithRegistryMock.mockReturnValue({
       provider: "nvidia",
       id: "meta-llama",
@@ -320,6 +321,12 @@ describe("describeImageWithModel", () => {
     const cfg = {
       models: {
         providers: {
+          nvidia: {
+            baseUrl: "https://integrate.api.nvidia.com/v1",
+            apiKey: "nvidia-key", // pragma: allowlist secret
+            api: "openai-completions" as const,
+            models: [],
+          },
           "nvidia-api": {
             baseUrl: "https://integrate.api.nvidia.com/v1",
             apiKey: "nvidia-key", // pragma: allowlist secret

@@ -72,19 +72,19 @@ async function resolveImageRuntime(params: {
   // ID matching which can miss provider-prefixed IDs (e.g. "vllm/Qwen3.5" in
   // config vs "Qwen3.5" after model ref parsing). Check the user's configured
   // model definition for explicit image support so the tool works correctly.
-  // We also match against the original params.provider (pre-normalization) since
-  // configs may use aliases like "nvidia-api/meta/..." while resolvedRef.provider
-  // is normalized to "nvidia".
+  // We prefer the exact params.provider key first so that configs containing
+  // both an alias (e.g. "nvidia-api") and the canonical name ("nvidia") resolve
+  // to the correct block — findNormalizedProviderValue would pick whichever
+  // entry normalizes first, which may be the wrong one.
   if (!model.input?.includes("image")) {
-    const providerConfig = findNormalizedProviderValue(
-      params.cfg?.models?.providers,
-      resolvedRef.provider,
-    );
+    const providers = params.cfg?.models?.providers;
+    const providerConfig =
+      providers?.[params.provider] ?? findNormalizedProviderValue(providers, resolvedRef.provider);
     const configuredModel = providerConfig?.models?.find(
       (m) =>
         m.id === resolvedRef.model ||
-        m.id === `${resolvedRef.provider}/${resolvedRef.model}` ||
-        m.id === `${params.provider}/${resolvedRef.model}`,
+        m.id === `${params.provider}/${resolvedRef.model}` ||
+        m.id === `${resolvedRef.provider}/${resolvedRef.model}`,
     );
     if (configuredModel?.input?.includes("image")) {
       model = {
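The precedence change can be sketched in isolation. The snippet below is a minimal, self-contained illustration, not the real implementation: `normalize` and the `findNormalizedProviderValue` stand-in are hypothetical simplifications (the real helper and config types live elsewhere in the codebase), but they reproduce the ambiguity the fix addresses.

```typescript
type ProviderConfig = { models: { id: string }[] };

// Hypothetical normalizer: strips a trailing "-api" suffix, mimicking how
// "nvidia-api" normalizes to "nvidia" in the commit's example.
const normalize = (name: string): string => name.replace(/-api$/, "");

// Stand-in for findNormalizedProviderValue: returns whichever entry's
// normalized key matches first — ambiguous when both "nvidia" and
// "nvidia-api" are present, since both normalize to "nvidia".
function findNormalizedProviderValue(
  providers: Record<string, ProviderConfig> | undefined,
  provider: string,
): ProviderConfig | undefined {
  if (!providers) return undefined;
  for (const [key, value] of Object.entries(providers)) {
    if (normalize(key) === normalize(provider)) return value;
  }
  return undefined;
}

function resolveProviderConfig(
  providers: Record<string, ProviderConfig> | undefined,
  exactProvider: string,
  normalizedProvider: string,
): ProviderConfig | undefined {
  // The fix: try the exact alias key first; fall back to the
  // normalized lookup only when no exact entry exists.
  return (
    providers?.[exactProvider] ??
    findNormalizedProviderValue(providers, normalizedProvider)
  );
}

const providers = {
  nvidia: { models: [] },
  "nvidia-api": { models: [{ id: "nvidia-api/meta-llama" }] },
};

// Normalized lookup alone lands on whichever entry iterates first
// (here the empty "nvidia" block); the exact "nvidia-api" key wins now.
const cfg = resolveProviderConfig(providers, "nvidia-api", "nvidia");
console.log(cfg?.models[0]?.id); // "nvidia-api/meta-llama"
```

With plain object key order, `findNormalizedProviderValue(providers, "nvidia")` would return the `nvidia` block with no models, which is exactly the "wrong block, missing model definition" failure the commit describes.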