
fix(payload-filter): resolve parameter filtering bugs for OpenAI-compatible providers #2773

Open

zhchang wants to merge 1 commit into router-for-me:dev from zhchang:fix/openai-compat-parameter-filtering

Conversation


@zhchang zhchang commented Apr 14, 2026

fix(payload-filter): resolve parameter filtering bugs for OpenAI-compatible providers

Fix three critical bugs preventing parameter filtering from working correctly with OpenAI-compatible providers like Grok and DeepSeek:

Bug #1: Executor ordering - ApplyThinking re-adds filtered parameters

  • Move ApplyPayloadConfigWithRoot() to run AFTER ApplyThinking() in openai_compat_executor
  • Ensures filter has final authority over parameters sent to upstream provider
  • Fixes: reasoning_effort re-injected after filter removes it for suffix models (e.g., grok-3(medium))

Bug #2: Protocol mismatch - Responses API not recognized by filter rules

  • Normalize "openai-response" protocol to "openai" in payloadModelRulesMatch()
  • Filter rules with protocol: "openai" now match both Chat Completions and Responses API
  • Fixes: Silent filter failures for Responses API requests

Bug #3: Case-sensitive model matching

  • Make matchModelPattern() case-insensitive by lowercasing both pattern and model
  • Pattern "grok-*" now correctly matches "Grok-3", "GROK-3", etc.
  • Improves robustness for real-world model name variations

Testing

  • Added 19 comprehensive unit tests in payload_helpers_test.go
  • Added 9 QA reproduction tests in test/reasoning_effort_filter_test.go
  • All tests passing (28/28), zero regressions
  • Verified end-to-end with real provider scenarios

Documentation

  • Added FILTER_ARCHITECTURE.md documenting filter system design
  • Provides reference for future maintenance and extensibility

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
@github-actions

This pull request targeted main.

The base branch has been automatically changed to dev.

@github-actions github-actions bot changed the base branch from main to dev April 14, 2026 02:19

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces architectural documentation for the parameter filtering system and implements several fixes to improve its reliability. Key changes include making model pattern matching case-insensitive, normalizing the openai-response protocol to openai for consistent rule application, and reordering the execution pipeline in the OpenAI compatibility executor so that filtering occurs after thinking logic. Feedback was provided to ensure protocol normalization is bi-directional, allowing rules explicitly targeting the aliased protocol to match correctly.

Comment on lines +164 to 174
			if ep := strings.TrimSpace(entry.Protocol); ep != "" && protocol != "" {
				// Normalize "openai-response" to "openai" for matching purposes
				// so filter rules targeting "openai" also match Responses API requests
				normalizedProto := protocol
				if strings.EqualFold(normalizedProto, "openai-response") {
					normalizedProto = "openai"
				}
				if !strings.EqualFold(ep, normalizedProto) {
					continue
				}
			}


Severity: medium

The protocol normalization logic for openai-response is one-way. While it correctly allows rules targeting openai to match openai-response requests, it prevents rules explicitly targeting openai-response from matching because the runtime protocol is normalized to openai before comparison, but the rule's protocol (ep) is not. This means a rule like {Protocol: "openai-response"} will never match. To ensure consistency, both the runtime protocol and the rule's protocol should be normalized, or the comparison should handle the alias on both sides.

			if ep := strings.TrimSpace(entry.Protocol); ep != "" && protocol != "" {
				// Normalize "openai-response" to "openai" for matching purposes
				// so filter rules targeting "openai" also match Responses API requests.
				// We normalize both sides to ensure explicit "openai-response" rules still match.
				normalizedProto := protocol
				if strings.EqualFold(normalizedProto, "openai-response") {
					normalizedProto = "openai"
				}
				normalizedRuleProto := ep
				if strings.EqualFold(normalizedRuleProto, "openai-response") {
					normalizedRuleProto = "openai"
				}
				if !strings.EqualFold(normalizedRuleProto, normalizedProto) {
					continue
				}
			}


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 1ea02768eb


Comment on lines +168 to +171
				if strings.EqualFold(normalizedProto, "openai-response") {
					normalizedProto = "openai"
				}
				if !strings.EqualFold(ep, normalizedProto) {


P2: Normalize both rule and request protocols for response API

The new protocol normalization rewrites only the incoming protocol ("openai-response" → "openai") before comparison, so a rule configured with protocol: "openai-response" can never match anymore. In payloadModelRulesMatch, ep stays "openai-response" while normalizedProto becomes "openai", which silently disables response-specific payload rules that worked before this commit.


return resp, err
}

translated = helps.ApplyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel)


P2: Strip stream after payload rules in responses/compact

For /responses/compact, stream is deleted earlier, but ApplyPayloadConfigWithRoot now runs afterward and can re-add stream via default/override rules (especially wildcard or protocol: openai rules). Because there is no second delete after payload rules, endpoint-specific sanitization can be undone and incompatible payloads can be sent upstream.

