feat(kernel-language-model-service): Add language model client #876
@cursoragent review this PR for inconsistencies across docstrings, documentation and naming conventions

Unable to authenticate your request. Please make sure to connect your GitHub account to Cursor.
Add @ocap/kernel-language-model-service — a new package that bridges the kernel with OpenAI-compatible language model backends.

Core types and kernel service:
- `ChatParams` / `ChatResult` / `ChatService` types for request/response shapes
- `makeKernelLanguageModelService`: registers a kernel service object that dispatches chat calls to a provided chat function
- `LANGUAGE_MODEL_SERVICE_NAME` constant

Backends:
- Open /v1 backend with SSE streaming (`makeOpenV1NodejsService`)
- Ollama Node.js backend (`OllamaNodejsService`)

Client API:
- `makeChatClient`: returns an OpenAI-SDK-compatible client object that routes `chat.completions.create()` calls through a CapTP `ChatService`
- `makeSampleClient`: convenience wrapper for single-token sampling

Test utilities:
- `makeMockOpenV1Fetch`: deterministic SSE mock for unit/integration tests

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
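The OpenAI-SDK-compatible client shape described above can be sketched as a plain wrapper around an async `chat` function. This is only an illustration: the real `makeChatClient` routes calls through a CapTP `ChatService` reference via `E()`, and the `ChatParams`/`ChatResult` field shapes shown here are assumptions, not the package's actual types.

```typescript
// Sketch of the makeChatClient pattern: wrap a remote ChatService-style
// `chat` function in an OpenAI-SDK-shaped object. All type shapes here
// are illustrative assumptions; the real client goes through CapTP/E().
type ChatParams = {
  model: string;
  messages: { role: 'system' | 'user' | 'assistant'; content: string }[];
};

type ChatResult = {
  choices: { message: { role: 'assistant'; content: string } }[];
};

const makeChatClientSketch = (
  chat: (params: ChatParams) => Promise<ChatResult>,
) =>
  // Object.freeze stands in for hardening; the package presumably hardens.
  Object.freeze({
    chat: {
      completions: {
        // Mirrors client.chat.completions.create(...) from the OpenAI SDK.
        create: async (params: ChatParams): Promise<ChatResult> => chat(params),
      },
    },
  });

// Usage against a local stub backend:
const client = makeChatClientSketch(async (params) => ({
  choices: [
    {
      message: {
        role: 'assistant',
        content: `echo: ${params.messages[0].content}`,
      },
    },
  ],
}));

const result = await client.chat.completions.create({
  model: 'test-model',
  messages: [{ role: 'user', content: 'Hello' }],
});
console.log(result.choices[0].message.content); // "echo: Hello"
```

The point of the wrapper is ergonomics: vat code written against the OpenAI SDK surface keeps working while every call actually crosses the kernel's service boundary.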
Add an integration test and e2e test that exercise the full kernel → LMS service → Ollama round-trip via a bundled vat.

- `lms-chat-vat`: bundled vat that sends a single chat message and logs the response, used to verify the round-trip through the kernel
- `lms-chat.ts`: shared test helper (`runLmsChatKernelTest`)
- `lms-chat.test.ts`: uses a mock fetch — CI-safe, no network
- `lms-chat.e2e.test.ts`: uses real fetch against local Ollama
- `agents.e2e.test.ts`: json/repl agent e2e tests against local Ollama
- `test/suite.test.ts`: pre-flight check that Ollama is running

Lay out the package to match kernel-test: all vats, helpers, and test files live under `src/`; `test/` holds only the Ollama pre-flight suite.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
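A CI-safe mock fetch of the kind the first test variant injects can be sketched with the standard `Response`/`ReadableStream` globals. The helper name and signature below are illustrative; the package's actual `makeMockOpenV1Fetch` may differ.

```typescript
// Sketch of a deterministic SSE mock fetch (in the spirit of
// makeMockOpenV1Fetch; the real helper's signature may differ). It answers
// any request with an OpenAI-style chat-completions SSE stream emitting the
// configured tokens followed by [DONE], with no network involved.
const makeMockSseFetch = (tokens: string[]): typeof fetch =>
  async (_input, _init) => {
    const events = [
      ...tokens.map(
        (token) =>
          `data: ${JSON.stringify({ choices: [{ delta: { content: token } }] })}\n\n`,
      ),
      'data: [DONE]\n\n',
    ];
    const body = new ReadableStream<Uint8Array>({
      start(controller) {
        const encoder = new TextEncoder();
        for (const event of events) {
          controller.enqueue(encoder.encode(event));
        }
        controller.close();
      },
    });
    return new Response(body, {
      status: 200,
      headers: { 'content-type': 'text/event-stream' },
    });
  };

// Usage: read the mock stream back as text.
const mockFetch = makeMockSseFetch(['Hel', 'lo']);
const response = await mockFetch('http://example.invalid/v1/chat/completions');
const text = await response.text();
console.log(text.includes('"content":"Hel"')); // true
```

Because the mock satisfies `typeof fetch`, it can be injected wherever the backend expects a fetch implementation, which is what keeps the integration variant deterministic and network-free.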
Cursor Bugbot has reviewed your changes and found 1 potential issue.
```typescript
    chat: async (params: ChatParams) => service.chat(params),
    sample: async (params: SampleParams) => service.sample(params),
  });
};
```
New factory function exported but never reachable
Low Severity
makeOllamaNodejsKernelService is exported from ollama/nodejs.ts and its JSDoc says it's "for use with makeKernelLanguageModelService", but it is never re-exported from index.ts and has zero usages anywhere in the codebase. Consumers of the package have no path to this function through the public API.
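A minimal fix for this finding would be a re-export from the package entry point. The path and extension below are assumptions depending on the package's module resolution settings; this is only a sketch:

```typescript
// src/index.ts: surface the Ollama kernel-service adapter through the
// public API (module path and extension are assumptions).
export { makeOllamaNodejsKernelService } from './ollama/nodejs.ts';
```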


feat(kernel-language-model-service): open v1 backend and vat clients

Kernel service (`src/kernel-service.ts`): `makeKernelLanguageModelService` wraps a `chat` (and optional `sample`) function into a flat, hardened object suitable for `kernel.registerKernelServiceObject()`. Return values are plain data — no exos — so they cross the marshal boundary cleanly. Exported as `LANGUAGE_MODEL_SERVICE_NAME = 'languageModelService'`.

Open /v1 backend (`src/open-v1/`): `OpenV1BaseService` posts to `/v1/chat/completions` with an injected `fetch`, supporting both single-response and SSE streaming modes. `makeOpenV1NodejsService` is the Node.js entry point (accepts `baseUrl` and an optional `apiKey`).

Ollama backend (`src/ollama/`): `OllamaNodejsService` extends the Open /v1 base, defaulting to `http://localhost:11434` and adding a `makeInstance({ model })` factory.

Vat clients (`src/client.ts`): `makeChatClient` returns an OpenAI-SDK-shaped object (`client.chat.completions.create(...)`) that routes calls through a CapTP `ChatService` reference via `E()`. `makeSampleClient` is the equivalent for raw token sampling.

Test utilities (`src/test-utils/`): replaces the previous queue-based mock model with `makeMockOpenV1Fetch` — a deterministic SSE mock that emits a configured sequence of token strings, keeping integration tests CI-safe and fast. `makeMockSample` provides the same for the sample path.

Also updates `kernel-test` to use the new chat/sample API: replaces `lms-user-vat`/`lms-queue-vat` with `lms-chat-vat`/`lms-sample-vat` and the corresponding test files.

test(kernel-test-local): kernel-LMS integration tests

Adds two test variants that share a common `runLmsChatKernelTest` helper:

- `lms-chat.test.ts` (CI): injects `makeMockOpenV1Fetch` — no network, runs under `test:dev`.
- `lms-chat.e2e.test.ts` (local): uses real `fetch` against a local Ollama instance, run via `test:e2e:local`.

Both launch a subcluster with `lms-chat-vat` through the kernel, register the LMS kernel service, and assert that the vat logs `lms-chat response: Hello` (or equivalent).

Also adds `agents.e2e.test.ts` (moved from the old layout) exercising json/repl agents end-to-end, and `test/suite.test.ts` as an Ollama pre-flight check.

Note: Medium Risk

Adds new kernel-exposed service surfaces (`chat`/`sample`) plus an OpenAI-compatible HTTP backend with SSE parsing; correctness and type-safety around streaming/non-streaming behavior and marshal-safe hardening are the main risks. Also updates multiple integration tests and local e2e wiring, which may affect CI/local test stability.

Overview

Adds a new language-model service API suitable for kernel registration and vat consumption. `makeKernelLanguageModelService` exposes a hardened `{ chat, sample }` service under the canonical `LANGUAGE_MODEL_SERVICE_NAME`, and new vat-side helpers `makeChatClient`/`makeSampleClient` provide OpenAI-style ergonomics while enforcing model selection.

Introduces new backends and expands Ollama support. Adds an Open `/v1/chat/completions` Node.js backend (`makeOpenV1NodejsService`) with optional API key and SSE streaming support, and extends the Ollama service to support `chat` and raw `sample` (generate with `raw: true`) plus a `makeOllamaNodejsKernelService` adapter.

Reworks tests and test utilities around the new API. Replaces the queue-based test LMS with `makeMockOpenV1Fetch` and `makeMockSample`, adds unit coverage for the new clients/kernel service/Open v1 streaming, and updates `kernel-test` and `kernel-test-local` to run new chat/sample vat round-trip tests (including a local Ollama e2e path and updated build/runner config).

Written by Cursor Bugbot for commit ea9f656.