
fix: pass provider extra_kwargs from config to ChatOpenAI#1

Open
swinney wants to merge 1 commit into dev from fix/provider-extra-kwargs-passthrough

Conversation


@swinney swinney commented Apr 10, 2026

Summary

  • Bug: BaseReActAgent._build_provider_config() hardcoded extra_kwargs = {}, silently discarding any extra_kwargs defined in the provider YAML config
  • Fix: Initialize from cfg.get("extra_kwargs", {}) so user-defined kwargs flow through to the LangChain model constructor
  • Impact: Any extra_kwargs in provider config (e.g. extra_body, model_kwargs, stream_options) were being ignored

Root Cause

In src/archi/pipelines/agents/base_react.py:882, _build_provider_config() builds the provider config dict that gets passed through get_model() → get_provider() → ChatOpenAI(**kwargs). The extra_kwargs field is spread into the ChatOpenAI constructor via **self.config.extra_kwargs in each provider's get_chat_model() method.

However, _build_provider_config() was initializing extra = {} instead of reading from the YAML config, so the pipeline between config file → provider → LangChain was broken at this point:

YAML config → _build_provider_config() → ProviderConfig(extra_kwargs={})  ← lost here
                                                              ↓
                                        ChatOpenAI(**{})  ← nothing passed

Example

This config should disable Qwen3's thinking mode when served via vLLM:

providers:
  openai:
    base_url: http://localhost:8000/v1
    extra_kwargs:
      extra_body:
        chat_template_kwargs:
          enable_thinking: false

Before this fix, extra_body never reached ChatOpenAI and thinking output leaked into the response content. After this fix, vLLM correctly receives chat_template_kwargs in the request body.
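To make the kwarg flow visible without a running vLLM server, here is an illustrative stand-alone sketch of the provider pattern described above; StubChatOpenAI is a hypothetical stand-in for langchain_openai.ChatOpenAI that just records its constructor kwargs.

```python
from dataclasses import dataclass, field


@dataclass
class ProviderConfig:
    base_url: str
    extra_kwargs: dict = field(default_factory=dict)


class StubChatOpenAI:
    """Stand-in for langchain_openai.ChatOpenAI, recording its kwargs."""

    def __init__(self, **kwargs):
        self.kwargs = kwargs


def get_chat_model(config: ProviderConfig) -> StubChatOpenAI:
    # Mirrors the provider pattern: extra_kwargs is spread into the
    # model constructor, so extra_body lands in the request body.
    return StubChatOpenAI(base_url=config.base_url, **config.extra_kwargs)


model = get_chat_model(
    ProviderConfig(
        base_url="http://localhost:8000/v1",
        extra_kwargs={
            "extra_body": {"chat_template_kwargs": {"enable_thinking": False}}
        },
    )
)
# model.kwargs["extra_body"] now carries chat_template_kwargs onward
```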

Test plan

  • Deploy with a provider config containing extra_kwargs and verify the kwargs reach the LLM client
  • Verify existing providers without extra_kwargs still work (empty dict fallback)
  • Verify local provider mode override still works alongside user-defined extra_kwargs
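The three bullets above can also be checked at the unit level. A hedged sketch, using a minimal stand-in for the fixed logic (build_extra_kwargs and the "mode" override key are hypothetical names, not the real API):

```python
def build_extra_kwargs(cfg: dict, local_mode: bool = False) -> dict:
    """Minimal stand-in for the fixed logic: config first, overrides second."""
    extra = dict(cfg.get("extra_kwargs", {}))
    if local_mode:
        extra["mode"] = "local"  # hypothetical local-mode override
    return extra


# 1. kwargs from config reach the client
assert build_extra_kwargs({"extra_kwargs": {"extra_body": {"x": 1}}}) == {"extra_body": {"x": 1}}
# 2. empty-dict fallback when the key is absent
assert build_extra_kwargs({}) == {}
# 3. local-mode override coexists with user-defined extra_kwargs
assert build_extra_kwargs({"extra_kwargs": {"a": 1}}, local_mode=True) == {"a": 1, "mode": "local"}
```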

🤖 Generated with Claude Code

_build_provider_config() was initializing extra_kwargs as an empty dict,
silently discarding any extra_kwargs defined in the provider YAML config.
This meant parameters like extra_body, model_kwargs, or any other
ChatOpenAI constructor arguments could not be configured per-provider.

The fix seeds the dict from cfg["extra_kwargs"] before applying
provider-specific overrides (e.g. local_mode), so user-defined kwargs
flow through to the LangChain model constructor.

Discovered while trying to disable Qwen3's thinking mode via:
  providers:
    openai:
      extra_kwargs:
        extra_body:
          chat_template_kwargs:
            enable_thinking: false

The config was parsed correctly but never reached ChatOpenAI because
_build_provider_config replaced it with {}.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
