PR #385 (closing #378) made top_p optional and changed its default to None in LLMConfig. However, the fix only addressed the config layer; the API call layer in openevolve/llm/openai.py still unconditionally includes top_p in the request params.
After #385, self.top_p defaults to None, so the unconditional inclusion puts "top_p": None into the params dict. Whether this causes an error depends on the downstream API client:
- The standard OpenAI SDK may strip None values internally (it uses NOT_GIVEN as its sentinel)
- AWS Bedrock, LiteLLM, and other OpenAI-compatible wrappers may serialize it as "top_p": null in the JSON body, triggering a 400 error
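The failing case can be shown with a self-contained sketch. The names below are illustrative, not the actual openevolve code; the point is what happens when a None-valued top_p is serialized verbatim:

```python
import json

# Hypothetical illustration of the failure mode: top_p is placed in the
# request params unconditionally, even when it defaults to None.
top_p = None  # default after PR #385

params = {
    "model": "example-model",  # placeholder model name
    "temperature": 0.7,
    "top_p": top_p,            # always present, even when None
}

# A wrapper that serializes the params dict verbatim (rather than dropping
# unset values the way the OpenAI SDK's NOT_GIVEN sentinel does) emits
# "top_p": null in the JSON body, which strict providers reject with a 400.
body = json.dumps(params)
print(body)  # the body contains "top_p": null
```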
Reproduction
1. Configure OpenEvolve with an AWS Bedrock endpoint (or any OpenAI-compatible provider that does not strip null values)
2. Do not set top_p in the config (relying on the None default from #385)
3. The request body contains "top_p": null, resulting in a 400 error
Expected behavior
When top_p is None, the key should be omitted from the params dict entirely, rather than being sent as null.
Suggested fix