Conversation
The base branch for this pull request has been automatically changed.
Code Review
This pull request introduces global virtual models, which allow for model name aliases that map to specific upstream identifiers across all providers. The changes include configuration schema updates, sanitization logic, and integration into the Claude, Gemini, and OpenAI API handlers to ensure virtual models are listed and correctly resolved during request execution. The review feedback identifies several opportunities to reduce code duplication by refactoring the virtual model resolution and response-building logic into shared helper methods. Additionally, a redundant nil check was found in the virtual model table compilation logic that should be removed.
```go
if h.Cfg != nil && len(h.Cfg.VirtualModels) > 0 {
	for _, vm := range h.Cfg.VirtualModels {
		models = append(models, map[string]any{
			"id":       vm.Name,
			"object":   "model",
			"created":  int64(0),
			"owned_by": "virtual",
		})
	}
}
```
This block of code for adding virtual models is duplicated in sdk/api/handlers/openai/openai_handlers.go and a similar block exists in sdk/api/handlers/gemini/gemini_handlers.go. To improve maintainability and reduce code duplication, consider creating a helper method on BaseAPIHandler.
Since the model structure for OpenAI/Claude is similar, one helper could serve both. A separate helper could be created for the Gemini-specific format.
For example, in handlers.go:
```go
func (h *BaseAPIHandler) AppendOpenAIVirtualModels(models []map[string]any) []map[string]any {
	if h.Cfg == nil || len(h.Cfg.VirtualModels) == 0 {
		return models
	}
	for _, vm := range h.Cfg.VirtualModels {
		models = append(models, map[string]any{
			"id":       vm.Name,
			"object":   "model",
			"created":  int64(0), // Using int64 for consistency
			"owned_by": "virtual",
		})
	}
	return models
}
```

Then you could simply call `models = h.AppendOpenAIVirtualModels(models)` here and in the OpenAI handler.
```go
// Resolve virtual model before routing
if virtualModel := m.ResolveVirtualModel(req.Model); virtualModel != "" {
	req.Model = virtualModel
}
```
This block for resolving the virtual model is duplicated across Execute, ExecuteCount (L1209-1212), and ExecuteStream (L1245-1248).
To avoid this repetition, you could create a small helper function. Since req is passed by value, the helper would need to return the modified request.
For example:
```go
func (m *Manager) resolveVirtualModelForRequest(req cliproxyexecutor.Request) cliproxyexecutor.Request {
	if virtualModel := m.ResolveVirtualModel(req.Model); virtualModel != "" {
		req.Model = virtualModel
	}
	return req
}
```

Then you could call `req = m.resolveVirtualModelForRequest(req)` at the beginning of each of the three Execute* methods.
```go
if table == nil {
	table = &virtualModelTable{}
}
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1fbd3cf20c
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```go
if h.AuthManager != nil {
	virtualModel := h.AuthManager.ResolveVirtualModel(baseModel)
	if virtualModel != "" {
		resolvedModelName = virtualModel
```
Preserve suffix when replacing virtual model names
When a virtual model matches in getRequestDetails, resolvedModelName is replaced with the upstream model and the original request suffix is dropped. For example, fast(8192) mapped to gpt-5-codex-mini becomes gpt-5-codex-mini, so thinking suffix controls are silently lost for requests routed through this path. This changes model behavior for virtual aliases compared with existing suffix-preserving routing.
```go
normalizedModels = append(normalizedModels, map[string]any{
	"name": "models/" + vm.Name,
```
Keep Gemini list and model-detail responses consistent
This adds virtual models to the Gemini list output, but model-detail lookup still only searches registry-backed models, so a virtual ID returned by listing (for example models/fast) will 404 on GeminiGetHandler. Clients that validate or hydrate model metadata by GET-after-list will treat advertised virtual models as unavailable.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a8de80fdb7
```go
// Execute performs a non-streaming execution using the configured selector and executor.
// It supports multiple providers for the same model and round-robins the starting provider per model.
func (m *Manager) Execute(ctx context.Context, providers []string, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) {
	req = resolveVirtualModelForRequest(m, req)
```
Avoid remapping virtual models after provider selection
getRequestDetails already resolves virtual models and chooses providers from that resolved model, but Execute rewrites req.Model again via resolveVirtualModelForRequest without updating the provider list. With layered/override virtual mappings (for example fast -> gpt-5 and gpt-5 -> claude-opus), selection stays on OpenAI while execution model is switched to Claude, which can produce auth_not_found/invalid-model errors even though the target model is configured.
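A way to keep selection and execution in agreement is to resolve the virtual model exactly once and derive the provider list from the already-resolved name. A self-contained sketch with stand-ins for resolution and provider selection (the names `planRequest` and `providersFor` are hypothetical):

```go
package main

import "fmt"

// resolve is a stand-in for Manager.ResolveVirtualModel.
func resolve(aliases map[string]string, model string) string {
	return aliases[model]
}

// providersFor is a stand-in for provider selection keyed by model.
func providersFor(model string) []string {
	table := map[string][]string{
		"gpt-5":       {"openai"},
		"claude-opus": {"claude"},
	}
	return table[model]
}

// planRequest resolves the virtual model exactly once, then derives the
// provider list from the resolved name, so selection and execution
// cannot diverge under layered mappings.
func planRequest(aliases map[string]string, model string) (string, []string) {
	if upstream := resolve(aliases, model); upstream != "" {
		model = upstream
	}
	return model, providersFor(model)
}

func main() {
	aliases := map[string]string{"fast": "gpt-5"}
	model, providers := planRequest(aliases, "fast")
	fmt.Println(model, providers[0]) // gpt-5 openai
}
```

With this shape, `Execute` and its siblings would receive the already-resolved model and provider list and would not call the resolver again.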
This adds yet another layer for model aliases.
It is helpful when you need to switch a CLI to another model during an active session.