Enhance ChatModelFactory to support ModelConfig and temperature for BrowserChatModel #131

@galaxyeye

Description

Motivation

Currently, the InferenceEngine requires a model temperature, but BrowserChatModel provides no way to pass or configure one. To improve flexibility and support full model configuration, we propose the following enhancements:

Proposed Enhancement

  • Introduce a ModelConfig data class to encapsulate all model parameters, including temperature.
  • Update ChatModelFactory so that wherever a BrowserChatModel is created, it accepts an optional ModelConfig instance; if null is passed, fall back to the current creation logic.
  • Update the cache key to compute a hash from ModelConfig so each unique configuration gets a distinct cached model.

Suggested ModelConfig Structure

data class ModelConfig(
    val baseUrl: String,
    val apiKey: String,
    val organizationId: String,
    val projectId: String,
    val modelName: String,
    val temperature: Double,
    val topP: Double,
    val stop: List<String>,
    val maxTokens: Int,
    val maxCompletionTokens: Int,
    val presencePenalty: Double,
    val frequencyPenalty: Double,
    val logitBias: Map<String, Int>,
    val supportedCapabilities: Set<Capability>,
    val responseFormat: ResponseFormat,
    val responseFormatString: String,
    val strictJsonSchema: Boolean,
    val seed: Int,
    val user: String,
    val strictTools: Boolean,
    val parallelToolCalls: Boolean,
    val store: Boolean,
    val metadata: Map<String, String>,
    val serviceTier: String,
    val returnThinking: Boolean,
    val timeout: Duration,
    val maxRetries: Int,
    val logRequests: Boolean,
    val logResponses: Boolean,
    val customHeaders: Map<String, String>
)
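The factory change could be sketched as follows. This is a minimal, hypothetical sketch, not the actual ChatModelFactory: the trimmed ModelConfig, the BrowserChatModel stub, and the createOrGet/createDefault names are assumptions for illustration. The key point is that a Kotlin data class derives equals()/hashCode() from its properties, so the config itself can serve as the cache key, giving each unique configuration a distinct cached model.

```kotlin
import java.util.concurrent.ConcurrentHashMap

// Trimmed ModelConfig for the sketch; the full data class is proposed above.
// Being a data class, equals()/hashCode() are generated from the properties,
// so two configs with the same values map to the same cache entry.
data class ModelConfig(
    val baseUrl: String,
    val modelName: String,
    val temperature: Double,
)

// Hypothetical stand-in for the real BrowserChatModel.
class BrowserChatModel(val config: ModelConfig?)

object ChatModelFactory {
    private val cache = ConcurrentHashMap<ModelConfig, BrowserChatModel>()

    // Lazily created default model, standing in for the current logic.
    private val default by lazy { BrowserChatModel(null) }

    // Null config falls back to the current default creation logic;
    // otherwise one model is cached per distinct configuration.
    fun createOrGet(config: ModelConfig?): BrowserChatModel =
        if (config == null) default
        else cache.computeIfAbsent(config) { BrowserChatModel(it) }
}

fun main() {
    val a = ModelConfig("https://api.example.com", "gpt-4o", 0.2)
    val b = ModelConfig("https://api.example.com", "gpt-4o", 0.2)
    // Equal config values resolve to the same cached model instance.
    check(ChatModelFactory.createOrGet(a) === ChatModelFactory.createOrGet(b))
    // A different temperature yields a distinct cached model.
    val c = a.copy(temperature = 0.9)
    check(ChatModelFactory.createOrGet(c) !== ChatModelFactory.createOrGet(a))
}
```

Using the whole ModelConfig as the map key avoids hand-rolling a hash over individual parameters; ConcurrentHashMap.computeIfAbsent also keeps the cache thread-safe without explicit locking.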
