Hi all, thanks for the great work! However, I encountered an issue when migrating an existing project to the codex-lb proxy.
Problem
The Chat Completions endpoint uses `extra="allow"` on `ChatCompletionsRequest`, which transparently passes unknown parameters through to upstream ChatGPT.
When clients send `thinking` (Claude format) or `enable_thinking` (Qwen/DeepSeek format), these are forwarded as-is to ChatGPT's backend-api, which rejects them with:
```
proxy_error_response request_id=... method=POST path=/v1/chat/completions status=502
code=None message=Unsupported parameter: thinking
```
Many projects are built with SDKs targeting one of these providers. When migrating to use codex-lb as a proxy to ChatGPT, users must rewrite their thinking/reasoning parameter handling, which creates friction for adoption.
For example, a project using the OpenAI Python SDK with Qwen:
```python
from openai import OpenAI

client = OpenAI(base_url="https://dashscope.aliyuncs.com/compatible-mode/v1", api_key="...")
client.chat.completions.create(
    model="qwen-plus",
    messages=[{"role": "user", "content": "hello"}],
    extra_body={"enable_thinking": True},
    stream=True,
)
```
Switching `base_url` to codex-lb causes a 502 because `enable_thinking` is passed through to ChatGPT unchanged.
Comparison with ChatMock
ChatMock handles this correctly because `start_upstream_request()` uses a whitelist approach: it manually constructs the upstream payload with only known fields, so unknown client parameters like `thinking` are naturally dropped.
codex-lb instead uses a passthrough approach (`extra="allow"`) for forward compatibility with new OpenAI API features, but this lets incompatible third-party parameters leak through to upstream.
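To make the contrast concrete, here is a minimal sketch of the whitelist idea (hypothetical field set; not ChatMock's actual code): only fields the upstream API is known to accept are copied, so third-party extras never reach ChatGPT.

```python
# Sketch of a whitelist builder (hypothetical; not ChatMock's real implementation).
# Foreign parameters such as "thinking" or "enable_thinking" are simply dropped
# instead of being forwarded upstream and triggering a 502.
KNOWN_FIELDS = {"model", "messages", "stream", "temperature", "max_tokens", "tools"}

def build_upstream_payload(client_payload: dict) -> dict:
    return {k: v for k, v in client_payload.items() if k in KNOWN_FIELDS}

payload = build_upstream_payload({
    "model": "qwen-plus",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
    "enable_thinking": True,  # dropped silently, never sent upstream
})
assert "enable_thinking" not in payload
```

The trade-off is exactly the one described above: a whitelist is robust against foreign parameters but must be updated whenever OpenAI adds a field, while `extra="allow"` is forward-compatible but leaks anything it doesn't recognize.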
Questions
Different providers (Claude, Qwen, DeepSeek) use different parameter names for the same concept, and many users are migrating existing projects built on these SDKs. Are there any plans to normalize the third-party thinking parameters (`thinking`, `enable_thinking`) to ChatGPT's `reasoning` format, so that switching `base_url` to codex-lb "just works" without client code changes?
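One possible shape for such a normalization layer, sketched below. This is a proposal illustration, not an existing codex-lb feature, and the `{"effort": ...}` shape of the upstream `reasoning` field is an assumption here:

```python
# Hypothetical normalization layer (a sketch, not existing codex-lb behavior).
# Pops provider-specific thinking flags and maps them to a single reasoning field.
def normalize_thinking(payload: dict) -> dict:
    payload = dict(payload)
    # Claude style: {"thinking": {"type": "enabled", "budget_tokens": ...}}
    claude = payload.pop("thinking", None)
    # Qwen/DeepSeek style: {"enable_thinking": true}
    qwen = payload.pop("enable_thinking", None)

    enabled = bool(qwen) or (isinstance(claude, dict) and claude.get("type") == "enabled")
    if enabled:
        # The exact upstream "reasoning" shape is an assumption for illustration.
        payload["reasoning"] = {"effort": "medium"}
    return payload
```

With something like this in the request path, the Qwen example above would work unmodified: `enable_thinking` would be consumed by the proxy rather than forwarded, and reasoning would be enabled upstream.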
Thanks!