| STATUS | ID | LABEL | WORKSPACE | ACTIONS |
|---|---|---|---|---|
diff --git a/.gitignore b/.gitignore index 1cc7fc3..37f84da 100644 --- a/.gitignore +++ b/.gitignore @@ -4,6 +4,9 @@ build/ dist/ eggs/ .sisyphus/ +node_modules/ +out/ +vscode-extension/.vscode-test/ # Byte-compiled / optimized / DLL files __pycache__/ diff --git a/README.md b/README.md index 438c0a3..0b29f2b 100644 --- a/README.md +++ b/README.md @@ -133,6 +133,74 @@ http://localhost:8080/myproject/session Open `http://localhost:8080/` in a browser to see a live table of all discovered backends with their status, domains, and links. +## Browser Dashboard + +The browser dashboard is served from the control plane root (`/`) and uses +control-plane APIs directly. + +### Quick start + +```bash +go run . --port 8080 ~/my-projects +``` + +Then open: + +```text +http://localhost:8080/ +``` + +### Features + +- Session list and actions (`ATTACH`, `STOP`, `START`, `DEL`) +- SSE-driven status updates from `/api/events` +- Terminal attach via `/ws/terminal/{session-id}` +- Terminal scrollback hydration via `/api/sessions/{id}/scrollback` +- Chat panel streaming via `/api/sessions/{id}/chat` + +### Screenshot placeholder + +Add dashboard screenshots under a docs assets folder when available, for example: + +```text +docs/assets/browser-dashboard.png +docs/assets/browser-terminal.png +``` + +## VS Code Extension + +The repository includes a VS Code extension under `vscode-extension/`. + +### Install (development) + +```bash +cd vscode-extension +npm install +npm run compile +``` + +Open this repository in VS Code and run **Extension Development Host**. 
+ +### Configure + +- `opencode.controlPlaneUrl` (default: `http://localhost:8080`) +- `opencode.authToken` (optional bearer token) + +### Features + +- Session tree view with connection status +- Session create/attach/stop/restart/delete commands +- Agent chat webview bound to selected session +- Terminal profile backed by control-plane websocket bridge +- Diff integration (`apply preview`, `apply`, `reject`, `clear highlights`) + +Troubleshooting note: terminal attach depends on runtime terminal bridge +prerequisites (session daemon terminal capability + control-plane attach path). +If prerequisites are unavailable in a given environment, terminal attach can fail +with `502`/`503` while session APIs remain functional. + +For detailed extension usage, see `vscode-extension/README.md`. + ## API | Endpoint | Description | @@ -476,12 +544,16 @@ Each probe is a single SSH round-trip. Unreachable hosts show as ○ offline and ├── oc-kill # Kill all opencode serve instances (standalone) ├── internal/ │ ├── config/config.go # Router configuration types, defaults, validation +│ ├── auth/ # Auth + CORS env configuration and middleware integration +│ ├── api/ # Session lifecycle API, SSE events, scrollback APIs │ ├── launcher/launcher.go # Manages opencode serve child processes │ ├── registry/registry.go # Thread-safe backend registry │ ├── scanner/scanner.go # Parallel port scanner + OpenCode probing │ ├── discovery/discovery.go # mDNS advertisement via zeroconf │ ├── proxy/proxy.go # Reverse proxy, routing, dashboard -│ └── tui/ # Remote session TUI (ocr) +│ ├── session/ # Session manager, health checks, circuit breaker +│ └── terminal/ # Terminal websocket handler + bridge +│ └── tui/ # Remote session TUI (ocr) │ ├── app.go # Top-level Bubble Tea model │ ├── components/ │ │ ├── header.go # Search bar, refresh countdown, fleet stats @@ -500,6 +572,9 @@ Each probe is a single SSH round-trip. 
Unreachable hosts show as ○ offline and └── go.sum ``` +For a full control-plane architecture guide (components, data flow, config, +security, failure modes), see `docs/architecture.md`. + ## Autodispatch (OpenClaw + TickTick) OpenCodeRouter is designed to be the service-discovery layer in a **programming task autodispatch pipeline**. An external orchestrator (e.g. OpenClaw) polls a task source (e.g. TickTick), resolves the target project via the router, and dispatches the task to the correct OpenCode instance. diff --git a/docs/architecture.md b/docs/architecture.md new file mode 100644 index 0000000..3f24c94 --- /dev/null +++ b/docs/architecture.md @@ -0,0 +1,472 @@ +# OpenCodeRouter Architecture Guide + +This document describes the control-plane architecture implemented in this worktree. +It focuses on component boundaries, runtime data flow, configuration defaults, +security behavior, and failure/recovery semantics. + +## 1. Scope + +The codebase contains two major runtime surfaces: + +1. **Control Plane server** (`main.go` + `internal/*`) + Hosts discovery, reverse proxy, session lifecycle APIs, SSE, terminal websocket + bridge, browser dashboard assets, and session scrollback endpoints. + +2. **VS Code extension** (`vscode-extension/`) + Uses control-plane HTTP/SSE/WebSocket APIs for session tree, chat, terminal, + and diff workflow integration. + +The architecture below applies to the control plane plus browser and extension +clients. + +## 2. 
System Architecture (ASCII) + +```text + +-----------------------------+ + | VS Code | + | - Session Tree | + | - Chat Webview | + | - Terminal Profile (PTY) | + | - Diff Edit Manager | + +--------------+--------------+ + | + | HTTP / SSE / WS + v ++------------------------------+ +--------------+--------------+ +------------------------------+ +| Browser UI | | OpenCodeRouter | | OpenCode Daemon(s) | +| / (dashboard + terminal) |<------>| Control Plane Server |<------>| opencode serve per session | +| - sessions table | HTTP | - api router | HTTP | - /global/health | +| - SSE indicator | SSE | - sessions handler | | - /session APIs | +| - terminal xterm | WS | - events handler (SSE) | WS | - terminal transport | +| - chat panel | | - terminal ws bridge | | | ++------------------------------+ | - proxy + scanner + registry| +------------------------------+ + | - scrollback cache (JSONL) | + +--------------+--------------+ + | + | local process mgmt + v + +-----------------------------+ + | Session Manager | + | - create/stop/restart | + | - health checks + circuit | + | - attach terminal | + | - event publication | + +-----------------------------+ +``` + +## 3. Runtime Components and Boundaries + +### 3.1 `main.go` (composition root) + +Responsibilities: + +- Parses CLI flags and builds `config.Config` (`internal/config/config.go` defaults). +- Loads auth/cors settings via `auth.LoadFromEnv()`. +- Creates and wires: + - `registry.Registry` + - `scanner.Scanner` + - `proxy.Proxy` + - `session.Manager` + - `api.Router` + - optional mDNS advertiser +- Starts HTTP server and graceful shutdown path. +- Performs startup orphan-process detection and optional cleanup offer + (`--cleanup-orphans` or `OCR_CLEANUP_ORPHANS=1`). + +Boundary notes: + +- `main.go` owns object lifecycle and orchestration only; business behavior lives + in internal packages. +- Startup cleanup is explicit-action only; no silent destructive default. 
+ +### 3.2 `internal/config` + +Responsibilities: + +- Defines static control-plane defaults and validation constraints. +- Provides domain naming helper and outbound IP helper. + +Boundary notes: + +- No HTTP, no process management, no storage logic. + +### 3.3 `internal/auth` + +Responsibilities: + +- Defines auth/cors configuration model. +- Loads env-backed auth settings. +- Middleware integration occurs at API router boundary. + +Boundary notes: + +- Security policy is centralized by middleware; handlers do not duplicate auth + checks. + +### 3.4 `internal/scanner` + `internal/registry` + +Responsibilities: + +- Scanner probes configured local port range for daemon health/project/session info. +- Registry keeps thread-safe backend/session index and stale pruning. + +Boundary notes: + +- Scanner discovers runtime backends and refreshes registry snapshots. +- Registry is shared state for proxy routing and status views. + +### 3.5 `internal/session` manager + +Responsibilities: + +- Session lifecycle API: create/get/list/stop/restart/delete/attach terminal/health. +- Process supervision (`opencode serve` child process start + wait handling). +- Health loop with circuit-breaker behavior: + - threshold: 3 consecutive unhealthy probes by default + - cooldown: 30s by default before next probe + - reset on healthy probe and stop/restart paths +- Publishes session events to event bus. + +Boundary notes: + +- Manager is authoritative state for session lifecycle and health. +- Terminal WS handlers delegate terminal attachment via manager interface. + +### 3.6 `internal/api` router and handlers + +Responsibilities: + +- Mounts REST and SSE endpoints: + - `/api/sessions` and `/api/sessions/{id}/*` + - `/api/events` + - `/api/sessions/{id}/scrollback` + - `/ws/terminal/{session-id}` +- Session handler translates manager errors into stable HTTP status/code payloads + via `internal/errors` mapping. 
+- Event handler converts internal event types (including `session.health_changed`) + into SSE event stream (`session.health`). + +Boundary notes: + +- API layer owns transport contracts (JSON + SSE + WS), not core lifecycle logic. +- Fallback routing delegates to proxy/static UI handler. + +### 3.7 `internal/terminal` bridge + +Responsibilities: + +- Upgrades websocket connections and bridges client <-> daemon terminal streams. +- Validates session existence and health before attach. +- Appends terminal output to scrollback cache for reconnect hydration. + +Boundary notes: + +- Bridge is transport-level; terminal session ownership remains in session manager. + +### 3.8 Browser dashboard (`web/`) + +Responsibilities: + +- Session table and action controls (attach/stop/restart/delete). +- SSE status indicator states (`STREAM_ACTIVE`, `RECONNECTING`, `DISCONNECTED`). +- Terminal reconnect UX with bounded exponential backoff. +- Scrollback hydration before terminal websocket attach. +- Chat panel rendering and streaming support. + +Boundary notes: + +- Browser is a thin API/SSE/WS client; no daemon-direct calls. + +### 3.9 VS Code extension (`vscode-extension/`) + +Responsibilities: + +- Session tree provider with SSE-driven refresh and connection status bar. +- Resilient request path with bounded retry and stale-data fallback. +- Chat webview integration. +- PTY-backed terminal websocket bridge. +- Diff staging/preview/apply/reject workflow. + +Boundary notes: + +- Extension host performs control-plane communication; webview stays message-based. + +## 4. API-Level Data Flow + +### 4.1 Session lifecycle flow + +1. Client `POST /api/sessions` (workspace + optional labels). +2. Sessions handler validates payload and calls manager `Create`. +3. Manager allocates port, launches process, stores session, emits + `session.created` event. +4. Client receives normalized session view with health snapshot. +5. 
Subsequent operations (`stop`, `restart`, `delete`) map to manager methods + and publish corresponding events. + +Error mapping: + +- `WORKSPACE_PATH_REQUIRED`, `WORKSPACE_PATH_INVALID` -> `400` +- `SESSION_ALREADY_EXISTS`, `SESSION_STOPPED` -> `409` +- `SESSION_NOT_FOUND` -> `404` +- `NO_AVAILABLE_SESSION_PORTS` -> `503` +- `TERMINAL_ATTACH_UNAVAILABLE`, `DAEMON_UNHEALTHY` -> `503` + +### 4.2 Terminal data flow + +1. Browser/extension requests websocket upgrade at `/ws/terminal/{session-id}`. +2. Handler checks method, upgrade headers, session existence, and health. +3. Handler calls `AttachTerminal` on manager and starts bridge. +4. Client input is forwarded to daemon terminal stream. +5. Daemon output is forwarded to client and persisted to scrollback cache. +6. On disconnect, client-side reconnect logic controls retry/backoff behavior. + +### 4.3 Agent chat flow + +1. Client `POST /api/sessions/{id}/chat` with prompt payload. +2. Sessions handler creates daemon client for session daemon port. +3. Handler proxies daemon message stream back as SSE-style response chunks. +4. Browser/extension incrementally renders assistant/tool output. + +History path: + +- `GET /api/sessions/{id}/chat` -> daemon message history passthrough. + +### 4.4 Scrollback flow + +1. Terminal output appends entries to JSONL scrollback cache. +2. Client reconnect path requests + `GET /api/sessions/{id}/scrollback?type=terminal_output&limit=...`. +3. Handler applies filtering + offset/limit and returns entries. +4. Client hydrates terminal before opening live websocket. + +## 5. 
Configuration Reference (Defaults + Toggles) + +### 5.1 CLI/config defaults (`internal/config/config.go` + `main.go` flags) + +| Setting | Default | Source | Notes | +|---|---:|---|---| +| listen port | `8080` | `Config.Defaults()` | `--port` | +| listen addr | `0.0.0.0:8080` | `Config.Defaults()` + hostname flag | host by `--hostname` | +| username | OS user | `user.Current()` | `--username` override | +| scan start | `30000` | `Config.Defaults()` | `--scan-start` | +| scan end | `31000` | `Config.Defaults()` | `--scan-end` | +| scan interval | `5s` | `Config.Defaults()` | `--scan-interval` | +| scan concurrency | `20` | `Config.Defaults()` | `--scan-concurrency` | +| probe timeout | `800ms` | `Config.Defaults()` | `--probe-timeout` | +| stale after | `30s` | `Config.Defaults()` | `--stale-after` | +| mDNS enabled | `true` | `Config.Defaults()` | `--mdns` | +| mDNS service type | `_opencode._tcp` | `Config.Defaults()` | static default | +| startup orphan cleanup | `false` | `main.go` | opt-in `--cleanup-orphans` | + +Validation constraints: + +- port ranges must be 1..65535 +- `scan-end >= scan-start` +- `scan-interval >= 1s` +- username cannot be empty + +### 5.2 Session manager defaults (`internal/session/manager.go`) + +| Setting | Default | Notes | +|---|---:|---| +| session port range | `30000..31000` | overridden by manager config from `main.go` (`scan range + 100`) | +| health interval | `10s` | periodic health loop | +| health timeout | `2s` | per-probe context timeout | +| health fail threshold | `3` | opens circuit breaker | +| circuit cooldown | `30s` | next probe delay when circuit open | +| stop timeout | `5s` | graceful stop/kill fallback | +| opencode binary | `opencode` | default process starter command | + +### 5.3 Auth/CORS environment (`internal/auth/config.go`) + +| Env | Default | Meaning | +|---|---|---| +| `OCR_AUTH_ENABLED` | `false` | enables auth middleware gate | +| `OCR_AUTH_BEARER_TOKENS` | empty | CSV list of accepted bearer tokens | 
+| `OCR_AUTH_BASIC` | empty | CSV `user:pass` pairs | +| `OCR_CORS_ALLOW_ORIGINS` | `*` | CSV CORS allow-list | + +Bypass paths default: + +- `/api/health` +- `/api/backends` + +### 5.4 Startup cleanup env toggle (`main.go`) + +| Env | Default | Meaning | +|---|---|---| +| `OCR_CLEANUP_ORPHANS` | off | enables startup SIGTERM cleanup for detected orphan `opencode` listeners in scan range | + +### 5.5 VS Code extension runtime settings + +Defined in `vscode-extension/package.json`: + +| Setting | Default | Meaning | +|---|---|---| +| `opencode.controlPlaneUrl` | `http://localhost:8080` | base URL for control-plane API/SSE/WS | +| `opencode.authToken` | empty | optional bearer token for authenticated control planes | + +## 6. Security Model + +### 6.1 Network binding + +- Default bind is `0.0.0.0` (server reachable on network interfaces). +- For localhost-only operation, run with `--hostname 127.0.0.1`. + +### 6.2 Authentication + +- Optional middleware in front of all API/routes via `auth.Middleware`. +- Supports bearer token and basic auth based on environment configuration. +- Some health/backend endpoints can be bypassed by default path policy. + +### 6.3 CORS + +- Default CORS allow origins: `*`. +- Can be restricted by `OCR_CORS_ALLOW_ORIGINS` CSV values. + +### 6.4 Trust boundaries + +- Browser and extension are untrusted clients from server perspective; all + operations go through HTTP API checks. +- Session manager and daemon process orchestration run server-side only. + +### 6.5 Local process controls + +- Orphan cleanup is opt-in; default behavior is warning-only. +- Cleanup scope is bounded to configured scan port range and `opencode` listener + detection. + +## 7. 
Failure Modes and Recovery Behavior + +### 7.1 Session daemon unavailable + +Symptoms: + +- health probes fail +- terminal attach returns service unavailable + +Behavior: + +- manager marks health unhealthy and can transition session status to `error` +- SSE emits `session.health` +- dashboard row shows error state with start/restart affordance + +Recovery: + +- explicit `restart`/`start` action from UI/extension/API +- no automatic daemon restart behavior + +### 7.2 Port exhaustion for new session + +Symptoms: + +- create session fails with `NO_AVAILABLE_SESSION_PORTS` + +Behavior: + +- API returns descriptive `503` with stable error code + +Recovery: + +- stop/delete existing sessions +- widen configured scan/session port ranges + +### 7.3 SSE disruption + +Symptoms: + +- event stream disconnect/errors + +Behavior: + +- browser indicator transitions to reconnecting/disconnected states +- extension status bar transitions to disconnected/error with retry loop + +Recovery: + +- automatic reconnect loops with bounded delay logic + +### 7.4 Terminal websocket disruption + +Symptoms: + +- terminal websocket close/error + +Behavior: + +- browser terminal prints reconnect message and retries with exponential backoff +- extension terminal bridge performs reconnect strategy per bridge implementation + +Recovery: + +- automatic reconnect path +- user can detach/reattach terminal + +### 7.5 Control-plane API temporarily unavailable (extension) + +Symptoms: + +- session fetch failures / retryable statuses + +Behavior: + +- bounded backoff retries +- stale session data retained and marked as stale +- warning with Retry action + +Recovery: + +- retry from warning action or refresh command + +### 7.6 Startup orphan listeners in scan range + +Symptoms: + +- pre-existing `opencode serve` listeners occupy scan ports + +Behavior: + +- startup warning logs orphan candidates and cleanup hint +- optional cleanup if explicitly enabled + +Recovery: + +- rerun with explicit cleanup toggle 
+- or manually terminate orphan listeners + +## 8. Operational Notes + +1. Browser dashboard and VS Code extension both depend on control-plane API/SSE. +2. Terminal attach requires session terminal connectivity to daemon; environment + limitations can surface as 502/503 attach failures. +3. Scrollback hydration reduces terminal reconnect blind spots by loading cached + output before live websocket starts. +4. mDNS is optional; path-based routing and direct API usage remain available when + mDNS is disabled. + +## 9. File-Level Reference Map + +- Composition root: `main.go` +- Config defaults/validation: `internal/config/config.go` +- Auth/env config: `internal/auth/config.go` +- API router: `internal/api/router.go` +- Session lifecycle API: `internal/api/sessions.go` +- SSE stream: `internal/api/events.go` +- Scrollback API: `internal/api/scrollback.go` +- Session manager core: `internal/session/manager.go` +- Terminal websocket endpoint: `internal/terminal/handler.go` +- Browser client: `web/app.js`, `web/index.html` +- VS Code extension host: `vscode-extension/src/extension.ts` + +## 10. Summary + +OpenCodeRouter implements a control-plane architecture with clear layering: + +- discovery/proxy plane (`scanner`, `registry`, `proxy`) +- session lifecycle and health supervision (`session.Manager`) +- transport adapters (`internal/api`, `internal/terminal`) +- clients (browser dashboard and VS Code extension) + +Task 20–24 capabilities (terminal bridge, chat/diff integration, scrollback +hydration, resilience/error handling, circuit breaker) are represented in this +architecture and documented with concrete runtime defaults and failure behavior. 
diff --git a/go.mod b/go.mod index 4d7d2f3..413e86b 100644 --- a/go.mod +++ b/go.mod @@ -8,6 +8,7 @@ require ( charm.land/lipgloss/v2 v2.0.0 github.com/charmbracelet/x/vt v0.0.0-20260304084025-7dd5c0ab408e github.com/charmbracelet/x/xpty v0.1.3 + github.com/gorilla/websocket v1.5.3 github.com/grandcat/zeroconf v1.0.0 github.com/spf13/cobra v1.8.1 github.com/spf13/viper v1.20.1 diff --git a/go.sum b/go.sum index 36fb5bf..92a17ae 100644 --- a/go.sum +++ b/go.sum @@ -52,6 +52,8 @@ github.com/go-viper/mapstructure/v2 v2.2.1 h1:ZAaOCxANMuZx5RCeg0mBdEZk7DZasvvZIx github.com/go-viper/mapstructure/v2 v2.2.1/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg= +github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= github.com/grandcat/zeroconf v1.0.0 h1:uHhahLBKqwWBV6WZUDAT71044vwOTL+McW0mBJvo6kE= github.com/grandcat/zeroconf v1.0.0/go.mod h1:lTKmG1zh86XyCoUeIHSA4FJMBwCJiQmGfcP2PdzytEs= github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= diff --git a/internal/api/events.go b/internal/api/events.go new file mode 100644 index 0000000..8fb28e1 --- /dev/null +++ b/internal/api/events.go @@ -0,0 +1,300 @@ +package api + +import ( + "context" + "encoding/json" + "io" + "log/slog" + "net/http" + "strconv" + "strings" + "time" + + "opencoderouter/internal/session" +) + +const ( + defaultEventsKeepaliveInterval = 15 * time.Second + defaultEventsRetryInterval = 5 * time.Second +) + +type BackendEvent struct { + Type string + Timestamp time.Time + SessionID string + Data any +} + +type BackendEventSubscribeFunc func(ctx context.Context) (<-chan BackendEvent, func(), error) + +type EventsHandlerConfig struct { + SessionEventBus session.EventBus + 
BackendSubscribe BackendEventSubscribeFunc + Logger *slog.Logger + KeepaliveInterval time.Duration + RetryInterval time.Duration +} + +type EventsHandler struct { + sessionEvents session.EventBus + backendSubscribe BackendEventSubscribeFunc + logger *slog.Logger + keepalive time.Duration + retry time.Duration +} + +type streamEnvelope struct { + Type string `json:"type"` + Source string `json:"source"` + Timestamp string `json:"timestamp"` + SessionID string `json:"sessionId,omitempty"` + Sequence int64 `json:"sequence"` + Payload any `json:"payload,omitempty"` +} + +func NewEventsHandler(cfg EventsHandlerConfig) *EventsHandler { + logger := cfg.Logger + if logger == nil { + logger = slog.Default() + } + + keepalive := cfg.KeepaliveInterval + if keepalive <= 0 { + keepalive = defaultEventsKeepaliveInterval + } + + retry := cfg.RetryInterval + if retry <= 0 { + retry = defaultEventsRetryInterval + } + + return &EventsHandler{ + sessionEvents: cfg.SessionEventBus, + backendSubscribe: cfg.BackendSubscribe, + logger: logger, + keepalive: keepalive, + retry: retry, + } +} + +func (h *EventsHandler) Register(mux *http.ServeMux) { + if h == nil || mux == nil { + return + } + mux.HandleFunc("/api/events", h.handleEvents) +} + +func (h *EventsHandler) handleEvents(w http.ResponseWriter, r *http.Request) { + if r.Method != http.MethodGet { + writeAPIError(w, http.StatusMethodNotAllowed, "method not allowed", "METHOD_NOT_ALLOWED") + return + } + + if h.sessionEvents == nil && h.backendSubscribe == nil { + writeAPIError(w, http.StatusServiceUnavailable, "event stream unavailable", "EVENT_STREAM_UNAVAILABLE") + return + } + + flusher, ok := w.(http.Flusher) + if !ok { + writeAPIError(w, http.StatusInternalServerError, "streaming unsupported", "STREAMING_UNSUPPORTED") + return + } + + var ( + sessionCh <-chan session.Event + sessionUnsubscribe func() + backendCh <-chan BackendEvent + backendUnsubscribe func() + ) + + if h.sessionEvents != nil { + var err error + sessionCh, 
sessionUnsubscribe, err = h.sessionEvents.Subscribe(session.EventFilter{Types: []session.EventType{ + session.EventTypeSessionCreated, + session.EventTypeSessionStopped, + session.EventTypeSessionHealthChanged, + session.EventTypeSessionAttached, + session.EventTypeSessionDetached, + }}) + if err != nil { + h.logger.Warn("failed to subscribe to session events", "error", err) + writeAPIError(w, http.StatusServiceUnavailable, "event stream unavailable", "EVENT_STREAM_UNAVAILABLE") + return + } + } + + if h.backendSubscribe != nil { + var err error + backendCh, backendUnsubscribe, err = h.backendSubscribe(r.Context()) + if err != nil { + if sessionUnsubscribe != nil { + sessionUnsubscribe() + } + h.logger.Warn("failed to subscribe to backend events", "error", err) + writeAPIError(w, http.StatusServiceUnavailable, "event stream unavailable", "EVENT_STREAM_UNAVAILABLE") + return + } + } + + defer func() { + if sessionUnsubscribe != nil { + sessionUnsubscribe() + } + if backendUnsubscribe != nil { + backendUnsubscribe() + } + }() + + w.Header().Set("Content-Type", "text/event-stream") + w.Header().Set("Cache-Control", "no-cache") + w.Header().Set("Connection", "keep-alive") + w.Header().Set("X-Accel-Buffering", "no") + w.WriteHeader(http.StatusOK) + + if err := writeSSERetry(w, flusher, h.retry); err != nil { + return + } + + sequence := parseLastEventID(r.Header.Get("Last-Event-ID")) + ticker := time.NewTicker(h.keepalive) + defer ticker.Stop() + + for { + if sessionCh == nil && backendCh == nil { + return + } + + select { + case <-r.Context().Done(): + return + case ev, ok := <-sessionCh: + if !ok { + sessionCh = nil + continue + } + + sequence++ + envelope := newSessionEnvelope(sequence, ev) + if err := writeSSEJSON(w, flusher, sequence, envelope.Type, envelope); err != nil { + return + } + case ev, ok := <-backendCh: + if !ok { + backendCh = nil + continue + } + + sequence++ + envelope := newBackendEnvelope(sequence, ev) + if err := writeSSEJSON(w, flusher, sequence, 
envelope.Type, envelope); err != nil { + return + } + case <-ticker.C: + if err := writeSSEComment(w, flusher, "keepalive"); err != nil { + return + } + } + } +} + +func newSessionEnvelope(sequence int64, ev session.Event) streamEnvelope { + timestamp := ev.Timestamp().UTC() + if timestamp.IsZero() { + timestamp = time.Now().UTC() + } + + eventType := string(ev.Type()) + if ev.Type() == session.EventTypeSessionHealthChanged { + eventType = "session.health" + } + + return streamEnvelope{ + Type: eventType, + Source: "session", + Timestamp: timestamp.Format(timeLayoutRFC3339Nano), + SessionID: ev.SessionID(), + Sequence: sequence, + Payload: ev, + } +} + +func newBackendEnvelope(sequence int64, ev BackendEvent) streamEnvelope { + timestamp := ev.Timestamp.UTC() + if timestamp.IsZero() { + timestamp = time.Now().UTC() + } + + eventType := strings.TrimSpace(ev.Type) + if eventType == "" { + eventType = "backend.event" + } + + return streamEnvelope{ + Type: eventType, + Source: "backend", + Timestamp: timestamp.Format(timeLayoutRFC3339Nano), + SessionID: strings.TrimSpace(ev.SessionID), + Sequence: sequence, + Payload: ev.Data, + } +} + +func parseLastEventID(raw string) int64 { + raw = strings.TrimSpace(raw) + if raw == "" { + return 0 + } + + id, err := strconv.ParseInt(raw, 10, 64) + if err != nil || id < 0 { + return 0 + } + + return id +} + +func writeSSERetry(w io.Writer, flusher http.Flusher, retry time.Duration) error { + if retry <= 0 { + retry = defaultEventsRetryInterval + } + if _, err := io.WriteString(w, "retry: "+strconv.FormatInt(retry.Milliseconds(), 10)+"\n\n"); err != nil { + return err + } + flusher.Flush() + return nil +} + +func writeSSEComment(w io.Writer, flusher http.Flusher, comment string) error { + comment = strings.ReplaceAll(comment, "\n", " ") + comment = strings.ReplaceAll(comment, "\r", " ") + if _, err := io.WriteString(w, ": "+comment+"\n\n"); err != nil { + return err + } + flusher.Flush() + return nil +} + +func writeSSEJSON(w 
io.Writer, flusher http.Flusher, id int64, event string, payload any) error { + encoded, err := json.Marshal(payload) + if err != nil { + return err + } + + if _, err := io.WriteString(w, "id: "+strconv.FormatInt(id, 10)+"\n"); err != nil { + return err + } + + event = strings.ReplaceAll(event, "\n", " ") + event = strings.ReplaceAll(event, "\r", " ") + if _, err := io.WriteString(w, "event: "+event+"\n"); err != nil { + return err + } + + if _, err := io.WriteString(w, "data: "+string(encoded)+"\n\n"); err != nil { + return err + } + + flusher.Flush() + return nil +} diff --git a/internal/api/events_test.go b/internal/api/events_test.go new file mode 100644 index 0000000..a017a8e --- /dev/null +++ b/internal/api/events_test.go @@ -0,0 +1,368 @@ +package api + +import ( + "bufio" + "context" + "encoding/json" + "io" + "net/http" + "net/http/httptest" + "strconv" + "strings" + "testing" + "time" + + "opencoderouter/internal/session" +) + +type parsedSSEFrame struct { + ID string + Event string + Data []string + Comments []string + Retry string +} + +type parsedStreamEnvelope struct { + Type string `json:"type"` + Source string `json:"source"` + Timestamp string `json:"timestamp"` + SessionID string `json:"sessionId"` + Sequence int64 `json:"sequence"` +} + +func TestEventsHandlerStreamsSessionEventsAndKeepalive(t *testing.T) { + eventBus := session.NewEventBus(16) + + mux := http.NewServeMux() + NewEventsHandler(EventsHandlerConfig{ + SessionEventBus: eventBus, + KeepaliveInterval: 20 * time.Millisecond, + RetryInterval: 10 * time.Millisecond, + }).Register(mux) + + srv := httptest.NewServer(mux) + defer srv.Close() + + resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/events", nil) + if resp.StatusCode != http.StatusOK { + defer resp.Body.Close() + t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK) + } + if contentType := resp.Header.Get("Content-Type"); !strings.HasPrefix(contentType, "text/event-stream") { + defer resp.Body.Close() + 
t.Fatalf("content-type=%q want prefix text/event-stream", contentType) + } + defer resp.Body.Close() + + reader := bufio.NewReader(resp.Body) + retryFrame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool { + return frame.Retry != "" + }) + if retryFrame.Retry == "" { + t.Fatal("expected retry frame") + } + + now := time.Now().UTC() + handle := session.SessionHandle{ID: "s-created", DaemonPort: 30123, WorkspacePath: "/tmp/work", Status: session.SessionStatusActive, CreatedAt: now, LastActivity: now} + if err := eventBus.Publish(session.SessionCreated{At: now, Session: handle}); err != nil { + t.Fatalf("publish session.created: %v", err) + } + + createdFrame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool { + return frame.Event == "session.created" && len(frame.Data) > 0 + }) + if createdFrame.ID != "1" { + t.Fatalf("created id=%q want=1", createdFrame.ID) + } + + var createdPayload parsedStreamEnvelope + decodeSSEDataJSON(t, createdFrame, &createdPayload) + if createdPayload.Type != "session.created" { + t.Fatalf("created type=%q want=session.created", createdPayload.Type) + } + if createdPayload.Source != "session" { + t.Fatalf("created source=%q want=session", createdPayload.Source) + } + if createdPayload.SessionID != "s-created" { + t.Fatalf("created sessionId=%q want=s-created", createdPayload.SessionID) + } + if createdPayload.Sequence != 1 { + t.Fatalf("created sequence=%d want=1", createdPayload.Sequence) + } + + if err := eventBus.Publish(session.SessionHealthChanged{ + At: now.Add(2 * time.Second), + Session: handle, + Previous: session.HealthStatus{State: session.HealthStateHealthy}, + Current: session.HealthStatus{State: session.HealthStateUnhealthy, Error: "probe timeout"}, + }); err != nil { + t.Fatalf("publish session.health: %v", err) + } + + healthFrame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool { + return frame.Event == "session.health" && len(frame.Data) > 0 + }) + 
if healthFrame.ID != "2" { + t.Fatalf("health id=%q want=2", healthFrame.ID) + } + + var healthPayload parsedStreamEnvelope + decodeSSEDataJSON(t, healthFrame, &healthPayload) + if healthPayload.Type != "session.health" { + t.Fatalf("health type=%q want=session.health", healthPayload.Type) + } + if healthPayload.Sequence != 2 { + t.Fatalf("health sequence=%d want=2", healthPayload.Sequence) + } + + keepaliveFrame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool { + for _, comment := range frame.Comments { + if strings.Contains(comment, "keepalive") { + return true + } + } + return false + }) + if len(keepaliveFrame.Comments) == 0 { + t.Fatal("expected keepalive comment frame") + } +} + +func TestEventsHandlerAppliesLastEventIDSequencing(t *testing.T) { + eventBus := session.NewEventBus(16) + + mux := http.NewServeMux() + NewEventsHandler(EventsHandlerConfig{SessionEventBus: eventBus}).Register(mux) + + srv := httptest.NewServer(mux) + defer srv.Close() + + req, err := http.NewRequest(http.MethodGet, srv.URL+"/api/events", nil) + if err != nil { + t.Fatalf("new request: %v", err) + } + req.Header.Set("Last-Event-ID", "41") + + resp, err := srv.Client().Do(req) + if err != nil { + t.Fatalf("request failed: %v", err) + } + if resp.StatusCode != http.StatusOK { + defer resp.Body.Close() + t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK) + } + defer resp.Body.Close() + + reader := bufio.NewReader(resp.Body) + _ = readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool { + return frame.Retry != "" + }) + + now := time.Now().UTC() + handle := session.SessionHandle{ID: "s-stopped", DaemonPort: 30124, WorkspacePath: "/tmp/work", Status: session.SessionStatusStopped, CreatedAt: now, LastActivity: now} + if err := eventBus.Publish(session.SessionStopped{At: now, Session: handle, Reason: "user"}); err != nil { + t.Fatalf("publish session.stopped: %v", err) + } + + stoppedFrame := readSSEUntil(t, reader, 2*time.Second, 
func(frame parsedSSEFrame) bool { + return frame.Event == "session.stopped" && len(frame.Data) > 0 + }) + if stoppedFrame.ID != "42" { + t.Fatalf("stopped id=%q want=42", stoppedFrame.ID) + } + + var stoppedPayload parsedStreamEnvelope + decodeSSEDataJSON(t, stoppedFrame, &stoppedPayload) + if stoppedPayload.Sequence != 42 { + t.Fatalf("stopped sequence=%d want=42", stoppedPayload.Sequence) + } +} + +func TestEventsHandlerStreamsBackendEventsWhenAvailable(t *testing.T) { + backendEvents := make(chan BackendEvent, 4) + + mux := http.NewServeMux() + NewEventsHandler(EventsHandlerConfig{ + BackendSubscribe: func(_ context.Context) (<-chan BackendEvent, func(), error) { + return backendEvents, func() {}, nil + }, + }).Register(mux) + + srv := httptest.NewServer(mux) + defer srv.Close() + + resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/events", nil) + if resp.StatusCode != http.StatusOK { + defer resp.Body.Close() + t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK) + } + defer resp.Body.Close() + + reader := bufio.NewReader(resp.Body) + _ = readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool { + return frame.Retry != "" + }) + + backendEvents <- BackendEvent{ + Type: "backend.updated", + Timestamp: time.Now().UTC(), + Data: map[string]any{"slug": "proj-a", "port": 32000}, + } + + backendFrame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool { + return frame.Event == "backend.updated" && len(frame.Data) > 0 + }) + if backendFrame.ID != "1" { + t.Fatalf("backend id=%q want=1", backendFrame.ID) + } + + var backendPayload parsedStreamEnvelope + decodeSSEDataJSON(t, backendFrame, &backendPayload) + if backendPayload.Source != "backend" { + t.Fatalf("backend source=%q want=backend", backendPayload.Source) + } + if backendPayload.Type != "backend.updated" { + t.Fatalf("backend type=%q want=backend.updated", backendPayload.Type) + } + if backendPayload.Sequence != 1 { + t.Fatalf("backend sequence=%d 
want=1", backendPayload.Sequence) + } +} + +func readSSEUntil(t *testing.T, reader *bufio.Reader, timeout time.Duration, match func(parsedSSEFrame) bool) parsedSSEFrame { + t.Helper() + deadline := time.Now().Add(timeout) + for { + remaining := time.Until(deadline) + if remaining <= 0 { + t.Fatalf("timed out waiting for matching SSE frame after %s", timeout) + } + frame := readSSEFrame(t, reader, remaining) + if match(frame) { + return frame + } + } +} + +func readSSEFrame(t *testing.T, reader *bufio.Reader, timeout time.Duration) parsedSSEFrame { + t.Helper() + type result struct { + frame parsedSSEFrame + err error + } + resultCh := make(chan result, 1) + + go func() { + var frame parsedSSEFrame + for { + line, err := reader.ReadString('\n') + if err != nil { + resultCh <- result{err: err} + return + } + line = strings.TrimRight(line, "\r\n") + if line == "" { + if frame.ID != "" || frame.Event != "" || frame.Retry != "" || len(frame.Data) > 0 || len(frame.Comments) > 0 { + resultCh <- result{frame: frame} + return + } + continue + } + + switch { + case strings.HasPrefix(line, "id:"): + frame.ID = strings.TrimSpace(strings.TrimPrefix(line, "id:")) + case strings.HasPrefix(line, "event:"): + frame.Event = strings.TrimSpace(strings.TrimPrefix(line, "event:")) + case strings.HasPrefix(line, "data:"): + frame.Data = append(frame.Data, strings.TrimSpace(strings.TrimPrefix(line, "data:"))) + case strings.HasPrefix(line, "retry:"): + frame.Retry = strings.TrimSpace(strings.TrimPrefix(line, "retry:")) + case strings.HasPrefix(line, ":"): + frame.Comments = append(frame.Comments, strings.TrimSpace(strings.TrimPrefix(line, ":"))) + } + } + }() + + select { + case res := <-resultCh: + if res.err != nil { + if res.err == io.EOF { + t.Fatal("unexpected EOF while reading SSE frame") + } + t.Fatalf("read SSE frame: %v", res.err) + } + return res.frame + case <-time.After(timeout): + t.Fatalf("timed out reading SSE frame after %s", timeout) + } + + return parsedSSEFrame{} +} + 
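As reviewer context (not part of the patch): the helpers above hand-parse the SSE wire format — `id:`/`event:`/`data:`/`retry:` fields accumulated until a blank line ends the frame. A standalone sketch of that parsing loop, with illustrative names (`frame`, `parseFrames`) that do not appear in the change set:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// frame mirrors the fields the test helper extracts from one SSE event.
type frame struct {
	ID, Event, Retry string
	Data             []string
}

// parseFrames splits an SSE stream on blank lines, collecting the
// id:/event:/data:/retry: fields of each frame as it goes.
func parseFrames(s string) []frame {
	var frames []frame
	var cur frame
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := sc.Text()
		if line == "" {
			// A blank line terminates a frame; skip empty accumulators.
			if cur.ID != "" || cur.Event != "" || cur.Retry != "" || len(cur.Data) > 0 {
				frames = append(frames, cur)
			}
			cur = frame{}
			continue
		}
		switch {
		case strings.HasPrefix(line, "id:"):
			cur.ID = strings.TrimSpace(line[len("id:"):])
		case strings.HasPrefix(line, "event:"):
			cur.Event = strings.TrimSpace(line[len("event:"):])
		case strings.HasPrefix(line, "data:"):
			cur.Data = append(cur.Data, strings.TrimSpace(line[len("data:"):]))
		case strings.HasPrefix(line, "retry:"):
			cur.Retry = strings.TrimSpace(line[len("retry:"):])
		}
	}
	return frames
}

func main() {
	stream := "retry: 3000\n\nid: 1\nevent: session.created\ndata: {\"seq\":1}\n\n"
	for _, f := range parseFrames(stream) {
		fmt.Printf("id=%q event=%q retry=%q data=%v\n", f.ID, f.Event, f.Retry, f.Data)
	}
}
```

The blank-line flush is the part worth getting right: the initial `retry:` frame carries no `id`, `event`, or `data`, so the flush condition must check `Retry` too — the same reason the patched `readSSEFrame` includes `frame.Retry != ""` in its emit check.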
+func decodeSSEDataJSON(t *testing.T, frame parsedSSEFrame, dst any) { + t.Helper() + if len(frame.Data) == 0 { + t.Fatal("expected SSE data lines") + } + joined := strings.Join(frame.Data, "\n") + if err := json.Unmarshal([]byte(joined), dst); err != nil { + t.Fatalf("decode SSE data %q: %v", joined, err) + } +} + +func TestEventsHandlerRejectsUnsupportedMethod(t *testing.T) { + eventBus := session.NewEventBus(4) + + mux := http.NewServeMux() + NewEventsHandler(EventsHandlerConfig{SessionEventBus: eventBus}).Register(mux) + srv := httptest.NewServer(mux) + defer srv.Close() + + resp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/events", nil) + assertErrorShape(t, resp, http.StatusMethodNotAllowed, "METHOD_NOT_ALLOWED") +} + +func TestEventsHandlerParsesInvalidLastEventIDAsZero(t *testing.T) { + eventBus := session.NewEventBus(8) + + mux := http.NewServeMux() + NewEventsHandler(EventsHandlerConfig{SessionEventBus: eventBus}).Register(mux) + srv := httptest.NewServer(mux) + defer srv.Close() + + req, err := http.NewRequest(http.MethodGet, srv.URL+"/api/events", nil) + if err != nil { + t.Fatalf("new request: %v", err) + } + req.Header.Set("Last-Event-ID", "nonsense") + + resp, err := srv.Client().Do(req) + if err != nil { + t.Fatalf("request failed: %v", err) + } + if resp.StatusCode != http.StatusOK { + defer resp.Body.Close() + t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK) + } + defer resp.Body.Close() + + reader := bufio.NewReader(resp.Body) + _ = readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool { + return frame.Retry != "" + }) + + now := time.Now().UTC() + handle := session.SessionHandle{ID: "s-invalid-last-id", DaemonPort: 30125, WorkspacePath: "/tmp/work", Status: session.SessionStatusActive, CreatedAt: now, LastActivity: now} + if err := eventBus.Publish(session.SessionAttached{At: now, Session: handle, AttachedClients: 1, ClientID: "c-1"}); err != nil { + t.Fatalf("publish session.attached: %v", err) 
+ } + + attachedFrame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool { + return frame.Event == "session.attached" && len(frame.Data) > 0 + }) + if attachedFrame.ID != strconv.FormatInt(1, 10) { + t.Fatalf("attached id=%q want=1", attachedFrame.ID) + } +} diff --git a/internal/api/remote_hosts.go b/internal/api/remote_hosts.go new file mode 100644 index 0000000..2c6e4b3 --- /dev/null +++ b/internal/api/remote_hosts.go @@ -0,0 +1,502 @@ +package api + +import ( + "context" + "errors" + "io" + "log/slog" + "net/http" + "strconv" + "strings" + "sync" + "time" + + "opencoderouter/internal/model" + "opencoderouter/internal/remote" + tuiconfig "opencoderouter/internal/tui/config" +) + +const defaultRemoteHostsCacheTTL = 60 * time.Second + +type remoteHostsDiscoverer interface { + Discover(ctx context.Context) ([]model.Host, error) +} + +type remoteHostsProber interface { + ProbeHosts(ctx context.Context, hosts []model.Host) ([]model.Host, error) +} + +type remoteHostsPathSetter interface { + SetSSHConfigPath(path string) +} + +type RemoteHostsHandlerConfig struct { + DiscoveryOptions remote.DiscoveryOptions + ProbeOptions remote.ProbeOptions + CacheTTL time.Duration + Runner remote.Runner + Logger *slog.Logger + + DiscoveryService remoteHostsDiscoverer + ProbeService remoteHostsProber +} + +type RemoteHostsHandler struct { + discovery remoteHostsDiscoverer + probe remoteHostsProber + logger *slog.Logger + cacheTTL time.Duration + + mu sync.RWMutex + lastHosts []model.Host + lastScannedAt time.Time + lastPartial bool + lastWarnings []string + lastConfigPath string +} + +type remoteHostsResponse struct { + Hosts []remoteHostView `json:"hosts"` + Cached bool `json:"cached"` + Stale bool `json:"stale"` + Partial bool `json:"partial"` + LastScan string `json:"lastScan,omitempty"` + Warnings []string `json:"warnings,omitempty"` +} + +type remoteHostView struct { + Name string `json:"name"` + Address string `json:"address"` + User string 
`json:"user,omitempty"` + Label string `json:"label"` + Priority int `json:"priority,omitempty"` + Status string `json:"status"` + LastSeen string `json:"lastSeen,omitempty"` + LastError string `json:"lastError,omitempty"` + OpencodeBin string `json:"opencodeBin,omitempty"` + SessionCount int `json:"sessionCount"` + Projects []remoteProjectView `json:"projects,omitempty"` + ProxyKind string `json:"proxyKind,omitempty"` + ProxyJumpRaw string `json:"proxyJumpRaw,omitempty"` + ProxyCommand string `json:"proxyCommand,omitempty"` + DependsOn []string `json:"dependsOn,omitempty"` + Dependents []string `json:"dependents,omitempty"` + BlockedBy []string `json:"blockedBy,omitempty"` + Transport string `json:"transport,omitempty"` + TransportError string `json:"transportError,omitempty"` +} + +type remoteProjectView struct { + Name string `json:"name"` + Sessions []remoteSessionView `json:"sessions,omitempty"` +} + +type remoteSessionView struct { + ID string `json:"id"` + Project string `json:"project,omitempty"` + Title string `json:"title,omitempty"` + Directory string `json:"directory,omitempty"` + LastActivity string `json:"lastActivity,omitempty"` + Status string `json:"status"` + MessageCount int `json:"messageCount,omitempty"` + Agents []string `json:"agents,omitempty"` + Activity string `json:"activity,omitempty"` +} + +func NewRemoteHostsHandler(cfg RemoteHostsHandlerConfig) *RemoteHostsHandler { + logger := cfg.Logger + if logger == nil { + logger = slog.New(slog.NewTextHandler(io.Discard, nil)) + } + + ttl := cfg.CacheTTL + if ttl <= 0 { + ttl = defaultRemoteHostsCacheTTL + } + + discoverySvc := cfg.DiscoveryService + if discoverySvc == nil { + discoverySvc = remote.NewDiscoveryService(normalizeDiscoveryOptions(cfg.DiscoveryOptions), cfg.Runner, logger) + } + + probeSvc := cfg.ProbeService + if probeSvc == nil { + probeSvc = remote.NewProbeService(normalizeProbeOptions(cfg.ProbeOptions), cfg.Runner, remote.NewCacheStore(ttl), logger) + } + + return 
&RemoteHostsHandler{ + discovery: discoverySvc, + probe: probeSvc, + logger: logger, + cacheTTL: ttl, + } +} + +func (h *RemoteHostsHandler) Register(mux *http.ServeMux) { + if h == nil || mux == nil { + return + } + mux.HandleFunc("/api/remote/hosts", h.handleList) +} + +func (h *RemoteHostsHandler) handleList(w http.ResponseWriter, r *http.Request) { + if r.Method != http.MethodGet { + writeAPIError(w, http.StatusMethodNotAllowed, "method not allowed", "METHOD_NOT_ALLOWED") + return + } + + if h.discovery == nil || h.probe == nil { + writeAPIError(w, http.StatusServiceUnavailable, "remote host services unavailable", "REMOTE_HOSTS_UNAVAILABLE") + return + } + + refresh, err := parseBoolQuery(r, "refresh") + if err != nil { + writeAPIError(w, http.StatusBadRequest, err.Error(), "INVALID_QUERY") + return + } + + sshConfigPath := strings.TrimSpace(r.URL.Query().Get("sshConfigPath")) + if setter, ok := h.discovery.(remoteHostsPathSetter); ok { + setter.SetSSHConfigPath(sshConfigPath) + } else if sshConfigPath != "" { + writeAPIError(w, http.StatusBadRequest, "sshConfigPath override unsupported", "SSH_CONFIG_OVERRIDE_UNSUPPORTED") + return + } + + if !refresh { + if hosts, scannedAt, partial, warnings, ok := h.snapshotIfFresh(sshConfigPath); ok { + writeJSON(w, http.StatusOK, remoteHostsResponse{ + Hosts: toRemoteHostViews(hosts), + Cached: true, + Stale: false, + Partial: partial, + LastScan: formatOptionalTime(scannedAt), + Warnings: append([]string(nil), warnings...), + }) + return + } + } + + hosts, warnings, partial, scanErr := h.scan(r.Context()) + if scanErr != nil { + h.logger.Warn("remote host scan completed with errors", "error", remote.SanitizeLogError(scanErr), "host_count", len(hosts)) + } + + if len(hosts) == 0 && scanErr != nil { + if cachedHosts, scannedAt, cachedPartial, cachedWarnings, ok := h.latestSnapshot(sshConfigPath); ok { + warnings = append(warnings, cachedWarnings...) 
+ partial = partial || cachedPartial + writeJSON(w, http.StatusOK, remoteHostsResponse{ + Hosts: toRemoteHostViews(cachedHosts), + Cached: true, + Stale: true, + Partial: partial, + LastScan: formatOptionalTime(scannedAt), + Warnings: uniqueStrings(warnings), + }) + return + } + + writeAPIError(w, http.StatusServiceUnavailable, "remote host scan failed", "REMOTE_HOST_SCAN_FAILED") + return + } + + h.storeSnapshot(sshConfigPath, hosts, partial, warnings) + writeJSON(w, http.StatusOK, remoteHostsResponse{ + Hosts: toRemoteHostViews(hosts), + Cached: false, + Stale: false, + Partial: partial, + LastScan: formatOptionalTime(time.Now().UTC()), + Warnings: warnings, + }) +} + +func parseBoolQuery(r *http.Request, key string) (bool, error) { + value := strings.TrimSpace(r.URL.Query().Get(key)) + if value == "" { + return false, nil + } + parsed, err := strconv.ParseBool(value) + if err != nil { + return false, errors.New("invalid query boolean: " + key) + } + return parsed, nil +} + +func (h *RemoteHostsHandler) scan(ctx context.Context) ([]model.Host, []string, bool, error) { + hosts, discoverErr := h.discovery.Discover(ctx) + warnings := make([]string, 0) + partial := false + + if discoverErr != nil { + partial = true + warnings = append(warnings, "discovery: "+remote.SanitizeLogError(discoverErr)) + } + + var probeErr error + if len(hosts) > 0 { + hosts, probeErr = h.probe.ProbeHosts(ctx, hosts) + if probeErr != nil { + partial = true + warnings = append(warnings, "probe: "+remote.SanitizeLogError(probeErr)) + } + } + + warnings = uniqueStrings(warnings) + return hosts, warnings, partial, errors.Join(discoverErr, probeErr) +} + +func (h *RemoteHostsHandler) snapshotIfFresh(configPath string) ([]model.Host, time.Time, bool, []string, bool) { + h.mu.RLock() + defer h.mu.RUnlock() + + if len(h.lastHosts) == 0 || h.lastScannedAt.IsZero() { + return nil, time.Time{}, false, nil, false + } + if h.lastConfigPath != configPath { + return nil, time.Time{}, false, nil, false + } 
+ if h.cacheTTL > 0 && time.Since(h.lastScannedAt) > h.cacheTTL { + return nil, time.Time{}, false, nil, false + } + + return cloneHosts(h.lastHosts), h.lastScannedAt, h.lastPartial, append([]string(nil), h.lastWarnings...), true +} + +func (h *RemoteHostsHandler) latestSnapshot(configPath string) ([]model.Host, time.Time, bool, []string, bool) { + h.mu.RLock() + defer h.mu.RUnlock() + + if len(h.lastHosts) == 0 || h.lastScannedAt.IsZero() { + return nil, time.Time{}, false, nil, false + } + if h.lastConfigPath != configPath { + return nil, time.Time{}, false, nil, false + } + + return cloneHosts(h.lastHosts), h.lastScannedAt, h.lastPartial, append([]string(nil), h.lastWarnings...), true +} + +func (h *RemoteHostsHandler) storeSnapshot(configPath string, hosts []model.Host, partial bool, warnings []string) { + h.mu.Lock() + h.lastHosts = cloneHosts(hosts) + h.lastScannedAt = time.Now().UTC() + h.lastPartial = partial + h.lastWarnings = append([]string(nil), warnings...) + h.lastConfigPath = configPath + h.mu.Unlock() +} + +func cloneHosts(hosts []model.Host) []model.Host { + if len(hosts) == 0 { + return nil + } + cloned := make([]model.Host, 0, len(hosts)) + for _, host := range hosts { + cloned = append(cloned, cloneHost(host)) + } + return cloned +} + +func cloneHost(host model.Host) model.Host { + cloned := host + cloned.Projects = cloneProjects(host.Projects) + cloned.JumpChain = append([]model.JumpHop(nil), host.JumpChain...) + cloned.DependsOn = append([]string(nil), host.DependsOn...) + cloned.Dependents = append([]string(nil), host.Dependents...) + cloned.BlockedBy = append([]string(nil), host.BlockedBy...) 
+ return cloned +} + +func cloneProjects(projects []model.Project) []model.Project { + if len(projects) == 0 { + return nil + } + cloned := make([]model.Project, 0, len(projects)) + for _, project := range projects { + copied := model.Project{ + Name: project.Name, + Sessions: append([]model.Session(nil), project.Sessions...), + } + for i := range copied.Sessions { + copied.Sessions[i].Agents = append([]string(nil), copied.Sessions[i].Agents...) + } + cloned = append(cloned, copied) + } + return cloned +} + +func toRemoteHostViews(hosts []model.Host) []remoteHostView { + if len(hosts) == 0 { + return []remoteHostView{} + } + views := make([]remoteHostView, 0, len(hosts)) + for _, host := range hosts { + views = append(views, remoteHostView{ + Name: host.Name, + Address: host.Address, + User: host.User, + Label: host.Label, + Priority: host.Priority, + Status: string(host.Status), + LastSeen: formatOptionalTime(host.LastSeen), + LastError: host.LastError, + OpencodeBin: host.OpencodeBin, + SessionCount: host.SessionCount(), + Projects: toRemoteProjectViews(host.Projects), + ProxyKind: string(host.ProxyKind), + ProxyJumpRaw: host.ProxyJumpRaw, + ProxyCommand: host.ProxyCommand, + DependsOn: append([]string(nil), host.DependsOn...), + Dependents: append([]string(nil), host.Dependents...), + BlockedBy: append([]string(nil), host.BlockedBy...), + Transport: string(host.Transport), + TransportError: host.TransportError, + }) + } + return views +} + +func toRemoteProjectViews(projects []model.Project) []remoteProjectView { + if len(projects) == 0 { + return []remoteProjectView{} + } + views := make([]remoteProjectView, 0, len(projects)) + for _, project := range projects { + views = append(views, remoteProjectView{ + Name: project.Name, + Sessions: toRemoteSessionViews(project.Sessions), + }) + } + return views +} + +func toRemoteSessionViews(sessions []model.Session) []remoteSessionView { + if len(sessions) == 0 { + return []remoteSessionView{} + } + views := 
make([]remoteSessionView, 0, len(sessions)) + for _, session := range sessions { + views = append(views, remoteSessionView{ + ID: session.ID, + Project: session.Project, + Title: session.Title, + Directory: session.Directory, + LastActivity: formatOptionalTime(session.LastActivity), + Status: string(session.Status), + MessageCount: session.MessageCount, + Agents: append([]string(nil), session.Agents...), + Activity: string(session.Activity), + }) + } + return views +} + +func formatOptionalTime(ts time.Time) string { + if ts.IsZero() { + return "" + } + return ts.UTC().Format(timeLayoutRFC3339Nano) +} + +func uniqueStrings(values []string) []string { + if len(values) == 0 { + return nil + } + seen := make(map[string]struct{}, len(values)) + out := make([]string, 0, len(values)) + for _, value := range values { + trimmed := strings.TrimSpace(value) + if trimmed == "" { + continue + } + if _, ok := seen[trimmed]; ok { + continue + } + seen[trimmed] = struct{}{} + out = append(out, trimmed) + } + if len(out) == 0 { + return nil + } + return out +} + +func normalizeDiscoveryOptions(options remote.DiscoveryOptions) remote.DiscoveryOptions { + defaults := tuiconfig.DefaultConfig() + + if len(options.Include) == 0 { + options.Include = append([]string(nil), defaults.Hosts.Include...) + } + if len(options.Ignore) == 0 { + options.Ignore = append([]string(nil), defaults.Hosts.Ignore...) + } + if len(options.Overrides) == 0 { + options.Overrides = hostOverridesFromTUI(defaults.Hosts.Overrides) + } + + return options +} + +func normalizeProbeOptions(options remote.ProbeOptions) remote.ProbeOptions { + defaults := tuiconfig.DefaultConfig() + + if options.MaxParallel <= 0 { + options.MaxParallel = defaults.Polling.MaxParallel + } + if len(options.SessionScanPaths) == 0 { + options.SessionScanPaths = append([]string(nil), defaults.Sessions.ScanPaths...) 
+ } + if len(options.Overrides) == 0 { + options.Overrides = hostOverridesFromTUI(defaults.Hosts.Overrides) + } + + if strings.TrimSpace(options.SSH.ControlMaster) == "" { + options.SSH.ControlMaster = defaults.SSH.ControlMaster + } + if options.SSH.ControlPersist <= 0 { + options.SSH.ControlPersist = defaults.SSH.ControlPersist + } + if strings.TrimSpace(options.SSH.ControlPath) == "" { + options.SSH.ControlPath = defaults.SSH.ControlPath + } + if options.SSH.ConnectTimeout <= 0 { + options.SSH.ConnectTimeout = defaults.SSH.ConnectTimeout + } + + if strings.TrimSpace(options.SortBy) == "" { + options.SortBy = defaults.Sessions.SortBy + } + if options.MaxDisplay <= 0 { + options.MaxDisplay = defaults.Sessions.MaxDisplay + } + if options.ActiveThreshold <= 0 { + options.ActiveThreshold = defaults.Display.ActiveThreshold + } + if options.IdleThreshold <= 0 { + options.IdleThreshold = defaults.Display.IdleThreshold + } + if options.IdleThreshold < options.ActiveThreshold { + options.IdleThreshold = options.ActiveThreshold + } + + return options +} + +func hostOverridesFromTUI(overrides map[string]tuiconfig.HostOverride) map[string]remote.HostOverride { + if len(overrides) == 0 { + return nil + } + converted := make(map[string]remote.HostOverride, len(overrides)) + for alias, override := range overrides { + converted[alias] = remote.HostOverride{ + Label: override.Label, + Priority: override.Priority, + OpencodePath: override.OpencodePath, + ScanPaths: append([]string(nil), override.ScanPaths...), + } + } + return converted +} diff --git a/internal/api/remote_hosts_test.go b/internal/api/remote_hosts_test.go new file mode 100644 index 0000000..e6e0833 --- /dev/null +++ b/internal/api/remote_hosts_test.go @@ -0,0 +1,320 @@ +package api + +import ( + "context" + "errors" + "net/http" + "net/http/httptest" + "testing" + "time" + + "opencoderouter/internal/model" +) + +type fakeRemoteDiscoverer struct { + hosts []model.Host + err error + path string +} + +func (f 
*fakeRemoteDiscoverer) Discover(_ context.Context) ([]model.Host, error) { + return cloneHosts(f.hosts), f.err +} + +func (f *fakeRemoteDiscoverer) SetSSHConfigPath(path string) { + f.path = path +} + +type fakeRemoteProber struct { + hosts []model.Host + err error + calls int +} + +func (f *fakeRemoteProber) ProbeHosts(_ context.Context, hosts []model.Host) ([]model.Host, error) { + f.calls++ + if len(f.hosts) > 0 { + return cloneHosts(f.hosts), f.err + } + return cloneHosts(hosts), f.err +} + +func TestRemoteHostsHandlerReturnsFreshScan(t *testing.T) { + discoverer := &fakeRemoteDiscoverer{ + hosts: []model.Host{{ + Name: "dev-host", + Address: "10.0.0.9", + User: "alice", + Label: "dev-host", + Status: model.HostStatusUnknown, + Transport: model.TransportReady, + Projects: []model.Project{{ + Name: "demo", + Sessions: []model.Session{{ + ID: "s-1", + Title: "Build", + Directory: "/repo", + Status: model.SessionStatusActive, + Activity: model.ActivityActive, + LastActivity: time.Now().Add(-time.Minute), + }}, + }}, + }}, + } + + prober := &fakeRemoteProber{ + hosts: []model.Host{{ + Name: "dev-host", + Address: "10.0.0.9", + User: "alice", + Label: "dev-host", + Status: model.HostStatusOnline, + Transport: model.TransportReady, + Projects: []model.Project{{ + Name: "demo", + Sessions: []model.Session{{ + ID: "s-1", + Title: "Build", + Directory: "/repo", + Status: model.SessionStatusActive, + Activity: model.ActivityActive, + LastActivity: time.Now().Add(-time.Minute), + }}, + }}, + }}, + } + + h := NewRemoteHostsHandler(RemoteHostsHandlerConfig{ + CacheTTL: time.Minute, + DiscoveryService: discoverer, + ProbeService: prober, + }) + + mux := http.NewServeMux() + h.Register(mux) + srv := httptest.NewServer(mux) + defer srv.Close() + + resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?refresh=true&sshConfigPath=%2Ftmp%2Fssh.conf", nil) + if resp.StatusCode != http.StatusOK { + defer resp.Body.Close() + t.Fatalf("status=%d want=%d", 
resp.StatusCode, http.StatusOK) + } + body := decodeResponseJSON[remoteHostsResponse](t, resp.Body) + _ = resp.Body.Close() + + if body.Cached { + t.Fatal("expected uncached response") + } + if body.Stale { + t.Fatal("expected non-stale response") + } + if body.Partial { + t.Fatal("expected full response") + } + if len(body.Hosts) != 1 { + t.Fatalf("hosts len=%d want=1", len(body.Hosts)) + } + if body.Hosts[0].Name != "dev-host" { + t.Fatalf("host name=%q want=dev-host", body.Hosts[0].Name) + } + if body.Hosts[0].Status != string(model.HostStatusOnline) { + t.Fatalf("host status=%q want=%q", body.Hosts[0].Status, model.HostStatusOnline) + } + if body.Hosts[0].SessionCount != 1 { + t.Fatalf("session count=%d want=1", body.Hosts[0].SessionCount) + } + if discoverer.path != "/tmp/ssh.conf" { + t.Fatalf("ssh config path=%q want=%q", discoverer.path, "/tmp/ssh.conf") + } + if prober.calls != 1 { + t.Fatalf("probe calls=%d want=1", prober.calls) + } +} + +func TestRemoteHostsHandlerReturnsCachedWithinTTL(t *testing.T) { + discoverer := &fakeRemoteDiscoverer{hosts: []model.Host{{Name: "alpha", Address: "alpha.local", Label: "alpha", Status: model.HostStatusUnknown}}} + prober := &fakeRemoteProber{hosts: []model.Host{{Name: "alpha", Address: "alpha.local", Label: "alpha", Status: model.HostStatusOnline}}} + + h := NewRemoteHostsHandler(RemoteHostsHandlerConfig{ + CacheTTL: time.Minute, + DiscoveryService: discoverer, + ProbeService: prober, + }) + + mux := http.NewServeMux() + h.Register(mux) + srv := httptest.NewServer(mux) + defer srv.Close() + + first := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts", nil) + if first.StatusCode != http.StatusOK { + defer first.Body.Close() + t.Fatalf("first status=%d want=%d", first.StatusCode, http.StatusOK) + } + _ = first.Body.Close() + + if prober.calls != 1 { + t.Fatalf("probe calls after first=%d want=1", prober.calls) + } + + second := doJSONRequest(t, srv.Client(), http.MethodGet, 
srv.URL+"/api/remote/hosts", nil) + if second.StatusCode != http.StatusOK { + defer second.Body.Close() + t.Fatalf("second status=%d want=%d", second.StatusCode, http.StatusOK) + } + body := decodeResponseJSON[remoteHostsResponse](t, second.Body) + _ = second.Body.Close() + + if !body.Cached { + t.Fatal("expected cached response") + } + if prober.calls != 1 { + t.Fatalf("probe calls after cached response=%d want=1", prober.calls) + } +} + +func TestRemoteHostsHandlerCacheScopedBySSHConfigPath(t *testing.T) { + discoverer := &fakeRemoteDiscoverer{hosts: []model.Host{{Name: "alpha", Address: "alpha.local", Label: "alpha", Status: model.HostStatusUnknown}}} + prober := &fakeRemoteProber{hosts: []model.Host{{Name: "alpha", Address: "alpha.local", Label: "alpha", Status: model.HostStatusOnline}}} + + h := NewRemoteHostsHandler(RemoteHostsHandlerConfig{ + CacheTTL: time.Minute, + DiscoveryService: discoverer, + ProbeService: prober, + }) + + mux := http.NewServeMux() + h.Register(mux) + srv := httptest.NewServer(mux) + defer srv.Close() + + first := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?sshConfigPath=%2Ftmp%2Fa.conf", nil) + if first.StatusCode != http.StatusOK { + defer first.Body.Close() + t.Fatalf("first status=%d want=%d", first.StatusCode, http.StatusOK) + } + _ = first.Body.Close() + + second := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?sshConfigPath=%2Ftmp%2Fa.conf", nil) + if second.StatusCode != http.StatusOK { + defer second.Body.Close() + t.Fatalf("second status=%d want=%d", second.StatusCode, http.StatusOK) + } + body := decodeResponseJSON[remoteHostsResponse](t, second.Body) + _ = second.Body.Close() + if !body.Cached { + t.Fatal("expected same-config request to be served from cache") + } + if prober.calls != 1 { + t.Fatalf("probe calls after cached same-config response=%d want=1", prober.calls) + } + + third := doJSONRequest(t, srv.Client(), http.MethodGet, 
srv.URL+"/api/remote/hosts?sshConfigPath=%2Ftmp%2Fb.conf", nil) + if third.StatusCode != http.StatusOK { + defer third.Body.Close() + t.Fatalf("third status=%d want=%d", third.StatusCode, http.StatusOK) + } + body = decodeResponseJSON[remoteHostsResponse](t, third.Body) + _ = third.Body.Close() + if body.Cached { + t.Fatal("expected different-config request to trigger fresh scan") + } + if prober.calls != 2 { + t.Fatalf("probe calls after different-config response=%d want=2", prober.calls) + } + if discoverer.path != "/tmp/b.conf" { + t.Fatalf("ssh config path=%q want=%q", discoverer.path, "/tmp/b.conf") + } +} + +func TestRemoteHostsHandlerFallsBackToStaleCacheOnFailure(t *testing.T) { + discoverer := &fakeRemoteDiscoverer{hosts: []model.Host{{Name: "alpha", Address: "alpha.local", Label: "alpha", Status: model.HostStatusUnknown}}} + prober := &fakeRemoteProber{hosts: []model.Host{{Name: "alpha", Address: "alpha.local", Label: "alpha", Status: model.HostStatusOnline}}} + + h := NewRemoteHostsHandler(RemoteHostsHandlerConfig{ + CacheTTL: time.Second, + DiscoveryService: discoverer, + ProbeService: prober, + }) + + mux := http.NewServeMux() + h.Register(mux) + srv := httptest.NewServer(mux) + defer srv.Close() + + seed := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?refresh=true", nil) + if seed.StatusCode != http.StatusOK { + defer seed.Body.Close() + t.Fatalf("seed status=%d want=%d", seed.StatusCode, http.StatusOK) + } + _ = seed.Body.Close() + + discoverer.err = errors.New("lookup failed") + discoverer.hosts = nil + prober.err = errors.New("unreachable") + prober.hosts = nil + + resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?refresh=true", nil) + if resp.StatusCode != http.StatusOK { + defer resp.Body.Close() + t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK) + } + body := decodeResponseJSON[remoteHostsResponse](t, resp.Body) + _ = resp.Body.Close() + + if !body.Cached { + 
t.Fatal("expected cached fallback response") + } + if !body.Stale { + t.Fatal("expected stale fallback response") + } + if !body.Partial { + t.Fatal("expected partial fallback response") + } + if len(body.Hosts) != 1 || body.Hosts[0].Name != "alpha" { + t.Fatalf("unexpected fallback hosts: %#v", body.Hosts) + } + if len(body.Warnings) == 0 { + t.Fatal("expected warnings for failed refresh") + } +} + +func TestRemoteHostsHandlerMethodAndValidationErrors(t *testing.T) { + h := NewRemoteHostsHandler(RemoteHostsHandlerConfig{ + DiscoveryService: &fakeRemoteDiscoverer{}, + ProbeService: &fakeRemoteProber{}, + }) + + mux := http.NewServeMux() + h.Register(mux) + srv := httptest.NewServer(mux) + defer srv.Close() + + methodResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/remote/hosts", nil) + assertErrorShape(t, methodResp, http.StatusMethodNotAllowed, "METHOD_NOT_ALLOWED") + + invalidQuery := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?refresh=not-bool", nil) + assertErrorShape(t, invalidQuery, http.StatusBadRequest, "INVALID_QUERY") +} + +func TestRemoteHostsHandlerUnsupportedSSHConfigOverride(t *testing.T) { + h := NewRemoteHostsHandler(RemoteHostsHandlerConfig{ + DiscoveryService: remoteDiscovererNoPathSetter{}, + ProbeService: &fakeRemoteProber{}, + }) + + mux := http.NewServeMux() + h.Register(mux) + srv := httptest.NewServer(mux) + defer srv.Close() + + resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?sshConfigPath=%2Ftmp%2Fssh.conf", nil) + assertErrorShape(t, resp, http.StatusBadRequest, "SSH_CONFIG_OVERRIDE_UNSUPPORTED") +} + +type remoteDiscovererNoPathSetter struct{} + +func (remoteDiscovererNoPathSetter) Discover(_ context.Context) ([]model.Host, error) { + return []model.Host{}, nil +} diff --git a/internal/api/router.go b/internal/api/router.go new file mode 100644 index 0000000..c427970 --- /dev/null +++ b/internal/api/router.go @@ -0,0 +1,69 @@ +package api + +import ( + 
"log/slog" + "net/http" + "time" + + "opencoderouter/internal/auth" + "opencoderouter/internal/cache" + "opencoderouter/internal/remote" + "opencoderouter/internal/session" + "opencoderouter/internal/terminal" +) + +type RouterConfig struct { + SessionManager session.SessionManager + SessionEventBus session.EventBus + BackendEventSubscribe BackendEventSubscribeFunc + AuthConfig auth.Config + ScrollbackCache cache.ScrollbackCache + RemoteDiscovery remote.DiscoveryOptions + RemoteProbe remote.ProbeOptions + RemoteCacheTTL time.Duration + RemoteRunner remote.Runner + RemoteLogger *slog.Logger + Fallback http.Handler +} + +func NewRouter(cfg RouterConfig) http.Handler { + mux := http.NewServeMux() + NewSessionsHandler(SessionsHandlerConfig{SessionManager: cfg.SessionManager, ScrollbackCache: cfg.ScrollbackCache}).Register(mux) + NewEventsHandler(EventsHandlerConfig{ + SessionEventBus: cfg.SessionEventBus, + BackendSubscribe: cfg.BackendEventSubscribe, + }).Register(mux) + NewRemoteHostsHandler(RemoteHostsHandlerConfig{ + DiscoveryOptions: cfg.RemoteDiscovery, + ProbeOptions: cfg.RemoteProbe, + CacheTTL: cfg.RemoteCacheTTL, + Runner: cfg.RemoteRunner, + Logger: cfg.RemoteLogger, + }).Register(mux) + + // Wire up the terminal handler + terminal.NewHandler(terminal.HandlerConfig{ + SessionManager: cfg.SessionManager, + ScrollbackCache: cfg.ScrollbackCache, + }).Register(mux) + + fallback := cfg.Fallback + if fallback == nil { + fallback = http.NotFoundHandler() + } + mux.Handle("/", fallback) + + authCfg := cfg.AuthConfig + defaults := auth.Defaults() + if authCfg.BypassPaths == nil { + authCfg.BypassPaths = defaults.BypassPaths + } + if len(authCfg.CORSAllowedOrigins) == 0 { + authCfg.CORSAllowedOrigins = defaults.CORSAllowedOrigins + } + if authCfg.BasicAuth == nil { + authCfg.BasicAuth = map[string]string{} + } + + return auth.Middleware(mux, authCfg) +} diff --git a/internal/api/router_test.go b/internal/api/router_test.go new file mode 100644 index 0000000..cce4533 
--- /dev/null
+++ b/internal/api/router_test.go
@@ -0,0 +1,171 @@
+package api
+
+import (
+	"context"
+	"net/http"
+	"net/http/httptest"
+	"testing"
+
+	"opencoderouter/internal/auth"
+	"opencoderouter/internal/cache"
+	"opencoderouter/internal/session"
+)
+
+func TestNewRouterMountsSessionRoutesAndFallback(t *testing.T) {
+	mgr := newFakeStatefulSessionManager()
+	workspace := t.TempDir()
+	_, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("seed session: %v", err)
+	}
+
+	fallback := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		if r.URL.Path == "/api/backends" {
+			w.WriteHeader(http.StatusAccepted)
+			return
+		}
+		w.WriteHeader(http.StatusTeapot)
+	})
+
+	h := NewRouter(RouterConfig{SessionManager: mgr, Fallback: fallback})
+	srv := httptest.NewServer(h)
+	defer srv.Close()
+
+	respSessions := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions", nil)
+	if respSessions.StatusCode != http.StatusOK {
+		defer respSessions.Body.Close()
+		t.Fatalf("sessions status=%d want=%d", respSessions.StatusCode, http.StatusOK)
+	}
+	_ = respSessions.Body.Close()
+
+	respFallback := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/backends", nil)
+	if respFallback.StatusCode != http.StatusAccepted {
+		defer respFallback.Body.Close()
+		t.Fatalf("fallback status=%d want=%d", respFallback.StatusCode, http.StatusAccepted)
+	}
+	_ = respFallback.Body.Close()
+}
+
+func TestNewRouterAppliesAuthMiddlewareToSessionEndpoints(t *testing.T) {
+	mgr := newFakeStatefulSessionManager()
+	workspace := t.TempDir()
+	_, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("seed session: %v", err)
+	}
+
+	authCfg := auth.Defaults()
+	authCfg.Enabled = true
+	authCfg.BearerTokens = []string{"secret-token"}
+
+	h := NewRouter(RouterConfig{
+		SessionManager:  mgr,
+		AuthConfig:      authCfg,
+		ScrollbackCache: newRouterTestScrollbackCache(),
+		Fallback:        http.NotFoundHandler(),
+	})
+
+	srv := httptest.NewServer(h)
+	defer srv.Close()
+
+	unauthorized := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions", nil)
+	if unauthorized.StatusCode != http.StatusUnauthorized {
+		defer unauthorized.Body.Close()
+		t.Fatalf("unauthorized status=%d want=%d", unauthorized.StatusCode, http.StatusUnauthorized)
+	}
+	_ = unauthorized.Body.Close()
+
+	req, err := http.NewRequest(http.MethodGet, srv.URL+"/api/sessions", nil)
+	if err != nil {
+		t.Fatalf("new request: %v", err)
+	}
+	req.Header.Set("Authorization", "Bearer secret-token")
+	authorized, err := srv.Client().Do(req)
+	if err != nil {
+		t.Fatalf("authorized request failed: %v", err)
+	}
+	if authorized.StatusCode != http.StatusOK {
+		defer authorized.Body.Close()
+		t.Fatalf("authorized status=%d want=%d", authorized.StatusCode, http.StatusOK)
+	}
+	_ = authorized.Body.Close()
+}
+
+func TestNewRouterKeepsAuthBypassPaths(t *testing.T) {
+	authCfg := auth.Defaults()
+	authCfg.Enabled = true
+	authCfg.BearerTokens = []string{"secret-token"}
+
+	h := NewRouter(RouterConfig{
+		AuthConfig:      authCfg,
+		ScrollbackCache: newRouterTestScrollbackCache(),
+		Fallback: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			if r.URL.Path == "/api/health" {
+				w.WriteHeader(http.StatusOK)
+				return
+			}
+			w.WriteHeader(http.StatusUnauthorized)
+		}),
+	})
+
+	srv := httptest.NewServer(h)
+	defer srv.Close()
+
+	resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/health", nil)
+	if resp.StatusCode != http.StatusOK {
+		defer resp.Body.Close()
+		t.Fatalf("health status=%d want=%d", resp.StatusCode, http.StatusOK)
+	}
+	_ = resp.Body.Close()
+}
+
+func TestNewRouterMountsEventsRoute(t *testing.T) {
+	eventBus := session.NewEventBus(8)
+	h := NewRouter(RouterConfig{
+		SessionEventBus: eventBus,
+		ScrollbackCache: newRouterTestScrollbackCache(),
+		Fallback:        http.NotFoundHandler(),
+	})
+
+	srv := httptest.NewServer(h)
+	defer srv.Close()
+
+	resp, err := srv.Client().Get(srv.URL + "/api/events")
+	if err != nil {
+		t.Fatalf("events request failed: %v", err)
+	}
+	if resp.StatusCode != http.StatusOK {
+		defer resp.Body.Close()
+		t.Fatalf("events status=%d want=%d", resp.StatusCode, http.StatusOK)
+	}
+	if got := resp.Header.Get("Content-Type"); got != "text/event-stream" {
+		defer resp.Body.Close()
+		t.Fatalf("events content-type=%q want=%q", got, "text/event-stream")
+	}
+	_ = resp.Body.Close()
+}
+
+func TestNewRouterMountsRemoteHostsRoute(t *testing.T) {
+	h := NewRouter(RouterConfig{
+		ScrollbackCache: newRouterTestScrollbackCache(),
+		Fallback:        http.NotFoundHandler(),
+	})
+
+	srv := httptest.NewServer(h)
+	defer srv.Close()
+
+	resp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/remote/hosts", nil)
+	assertErrorShape(t, resp, http.StatusMethodNotAllowed, "METHOD_NOT_ALLOWED")
+}
+
+type routerTestScrollbackCache struct{}
+
+func newRouterTestScrollbackCache() *routerTestScrollbackCache { return &routerTestScrollbackCache{} }
+
+func (c *routerTestScrollbackCache) Append(sessionID string, entry cache.Entry) error { return nil }
+func (c *routerTestScrollbackCache) Get(sessionID string, offset, limit int) ([]cache.Entry, error) {
+	return []cache.Entry{}, nil
+}
+func (c *routerTestScrollbackCache) Trim(sessionID string, maxEntries int) error { return nil }
+func (c *routerTestScrollbackCache) Clear(sessionID string) error { return nil }
+func (c *routerTestScrollbackCache) Close() error { return nil }
diff --git a/internal/api/scrollback.go b/internal/api/scrollback.go
new file mode 100644
index 0000000..c660128
--- /dev/null
+++ b/internal/api/scrollback.go
@@ -0,0 +1,105 @@
+package api
+
+import (
+	"errors"
+	"net/http"
+	"strconv"
+	"strings"
+
+	"opencoderouter/internal/cache"
+)
+
+const defaultScrollbackLimit = 1000
+
+type ScrollbackHandler struct {
+	cache cache.ScrollbackCache
+}
+
+type scrollbackQuery struct {
+	offset int
+	limit  int
+	typeV  cache.EntryType
+}
+
+func NewScrollbackHandler(scrollbackCache cache.ScrollbackCache) *ScrollbackHandler {
+	return &ScrollbackHandler{cache: scrollbackCache}
+}
+
+func (h *ScrollbackHandler) HandleGet(w http.ResponseWriter, r *http.Request, sessionID string) {
+	if h == nil || h.cache == nil {
+		writeAPIError(w, http.StatusServiceUnavailable, "scrollback cache unavailable", "SCROLLBACK_UNAVAILABLE")
+		return
+	}
+
+	query, err := parseScrollbackQuery(r)
+	if err != nil {
+		writeAPIError(w, http.StatusBadRequest, err.Error(), "INVALID_SCROLLBACK_QUERY")
+		return
+	}
+
+	entries, err := h.getFiltered(sessionID, query)
+	if err != nil {
+		writeAPIError(w, http.StatusInternalServerError, "failed to read scrollback", "SCROLLBACK_READ_FAILED")
+		return
+	}
+
+	writeJSON(w, http.StatusOK, entries)
+}
+
+func (h *ScrollbackHandler) getFiltered(sessionID string, query scrollbackQuery) ([]cache.Entry, error) {
+	if query.typeV == "" {
+		return h.cache.Get(sessionID, query.offset, query.limit)
+	}
+
+	all, err := h.cache.Get(sessionID, 0, 0)
+	if err != nil {
+		return nil, err
+	}
+
+	filtered := make([]cache.Entry, 0, len(all))
+	for _, entry := range all {
+		if entry.Type == query.typeV {
+			filtered = append(filtered, entry)
+		}
+	}
+
+	if query.offset >= len(filtered) {
+		return []cache.Entry{}, nil
+	}
+
+	end := len(filtered)
+	if query.offset+query.limit < end {
+		end = query.offset + query.limit
+	}
+
+	result := make([]cache.Entry, end-query.offset)
+	copy(result, filtered[query.offset:end])
+	return result, nil
+}
+
+func parseScrollbackQuery(r *http.Request) (scrollbackQuery, error) {
+	q := r.URL.Query()
+	result := scrollbackQuery{limit: defaultScrollbackLimit}
+
+	if raw := strings.TrimSpace(q.Get("limit")); raw != "" {
+		limit, err := strconv.Atoi(raw)
+		if err != nil || limit <= 0 {
+			return scrollbackQuery{}, errors.New("limit must be a positive integer")
+		}
+		result.limit = limit
+	}
+
+	if raw := strings.TrimSpace(q.Get("offset")); raw != "" {
+		offset, err := strconv.Atoi(raw)
+		if err != nil || offset < 0 {
+			return scrollbackQuery{}, errors.New("offset must be a non-negative integer")
+		}
+		result.offset = offset
+	}
+
+	if raw := strings.TrimSpace(q.Get("type")); raw != "" {
+		result.typeV = cache.EntryType(raw)
+	}
+
+	return result, nil
+}
diff --git a/internal/api/scrollback_test.go b/internal/api/scrollback_test.go
new file mode 100644
index 0000000..1bd8727
--- /dev/null
+++ b/internal/api/scrollback_test.go
@@ -0,0 +1,169 @@
+package api
+
+import (
+	"context"
+	"encoding/json"
+	"net/http"
+	"net/http/httptest"
+	"testing"
+	"time"
+
+	"opencoderouter/internal/cache"
+	"opencoderouter/internal/session"
+)
+
+func TestScrollbackEndpointReturnsEntriesWithDefaultLimit(t *testing.T) {
+	mgr := newFakeStatefulSessionManager()
+	sc := newTestScrollbackCache()
+	workspace := t.TempDir()
+	created, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("seed session: %v", err)
+	}
+
+	for i := 0; i < 3; i++ {
+		err := sc.Append(created.ID, cache.Entry{Timestamp: time.Now().UTC(), Type: cache.EntryTypeTerminalOutput, Content: []byte{byte('a' + i)}})
+		if err != nil {
+			t.Fatalf("append: %v", err)
+		}
+	}
+
+	srv := newScrollbackTestServer(t, mgr, sc)
+	defer srv.Close()
+
+	resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+created.ID+"/scrollback", nil)
+	if resp.StatusCode != http.StatusOK {
+		defer resp.Body.Close()
+		t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK)
+	}
+	var entries []cache.Entry
+	if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
+		_ = resp.Body.Close()
+		t.Fatalf("decode: %v", err)
+	}
+	_ = resp.Body.Close()
+	if len(entries) != 3 {
+		t.Fatalf("entries=%d want=3", len(entries))
+	}
+}
+
+func TestScrollbackEndpointSupportsLimitOffsetAndTypeFilter(t *testing.T) {
+	mgr := newFakeStatefulSessionManager()
+	sc := newTestScrollbackCache()
+	workspace := t.TempDir()
+	created, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("seed session: %v", err)
+	}
+
+	seed := []cache.Entry{
+		{Timestamp: time.Now().UTC(), Type: cache.EntryTypeTerminalOutput, Content: []byte("o1")},
+		{Timestamp: time.Now().UTC(), Type: cache.EntryTypeSystemEvent, Content: []byte("s1")},
+		{Timestamp: time.Now().UTC(), Type: cache.EntryTypeTerminalOutput, Content: []byte("o2")},
+		{Timestamp: time.Now().UTC(), Type: cache.EntryTypeTerminalOutput, Content: []byte("o3")},
+	}
+	for _, entry := range seed {
+		if err := sc.Append(created.ID, entry); err != nil {
+			t.Fatalf("append: %v", err)
+		}
+	}
+
+	srv := newScrollbackTestServer(t, mgr, sc)
+	defer srv.Close()
+
+	resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+created.ID+"/scrollback?type=terminal_output&offset=1&limit=1", nil)
+	if resp.StatusCode != http.StatusOK {
+		defer resp.Body.Close()
+		t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK)
+	}
+	var entries []cache.Entry
+	if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
+		_ = resp.Body.Close()
+		t.Fatalf("decode: %v", err)
+	}
+	_ = resp.Body.Close()
+	if len(entries) != 1 {
+		t.Fatalf("entries=%d want=1", len(entries))
+	}
+	if entries[0].Type != cache.EntryTypeTerminalOutput || string(entries[0].Content) != "o2" {
+		t.Fatalf("unexpected filtered entry: %+v", entries[0])
+	}
+}
+
+func TestScrollbackEndpointRejectsInvalidQuery(t *testing.T) {
+	mgr := newFakeStatefulSessionManager()
+	sc := newTestScrollbackCache()
+	workspace := t.TempDir()
+	created, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("seed session: %v", err)
+	}
+
+	srv := newScrollbackTestServer(t, mgr, sc)
+	defer srv.Close()
+
+	resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+created.ID+"/scrollback?limit=abc", nil)
+	assertErrorShape(t, resp, http.StatusBadRequest, "INVALID_SCROLLBACK_QUERY")
+
+	resp2 := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+created.ID+"/scrollback?offset=-1", nil)
+	assertErrorShape(t, resp2, http.StatusBadRequest, "INVALID_SCROLLBACK_QUERY")
+}
+
+func newScrollbackTestServer(t *testing.T, mgr session.SessionManager, sc cache.ScrollbackCache) *httptest.Server {
+	t.Helper()
+	mux := http.NewServeMux()
+	NewSessionsHandler(SessionsHandlerConfig{SessionManager: mgr, ScrollbackCache: sc}).Register(mux)
+	return httptest.NewServer(mux)
+}
+
+type testScrollbackCache struct {
+	bySession map[string][]cache.Entry
+}
+
+func newTestScrollbackCache() *testScrollbackCache {
+	return &testScrollbackCache{bySession: map[string][]cache.Entry{}}
+}
+
+func (c *testScrollbackCache) Append(sessionID string, entry cache.Entry) error {
+	c.bySession[sessionID] = append(c.bySession[sessionID], entry)
+	return nil
+}
+
+func (c *testScrollbackCache) Get(sessionID string, offset, limit int) ([]cache.Entry, error) {
+	entries := c.bySession[sessionID]
+	if offset < 0 {
+		offset = 0
+	}
+	if offset >= len(entries) {
+		return []cache.Entry{}, nil
+	}
+	end := len(entries)
+	if limit > 0 && offset+limit < end {
+		end = offset + limit
+	}
+	out := make([]cache.Entry, end-offset)
+	copy(out, entries[offset:end])
+	return out, nil
+}
+
+func (c *testScrollbackCache) Trim(sessionID string, maxEntries int) error {
+	entries := c.bySession[sessionID]
+	if maxEntries <= 0 {
+		c.bySession[sessionID] = []cache.Entry{}
+		return nil
+	}
+	if len(entries) <= maxEntries {
+		return nil
+	}
+	c.bySession[sessionID] = append([]cache.Entry(nil), entries[len(entries)-maxEntries:]...)
+	return nil
+}
+
+func (c *testScrollbackCache) Clear(sessionID string) error {
+	delete(c.bySession, sessionID)
+	return nil
+}
+
+func (c *testScrollbackCache) Close() error {
+	return nil
+}
diff --git a/internal/api/sessions.go b/internal/api/sessions.go
new file mode 100644
index 0000000..6910220
--- /dev/null
+++ b/internal/api/sessions.go
@@ -0,0 +1,545 @@
+package api
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"io"
+	"log/slog"
+	"net/http"
+	"sort"
+	"strings"
+	"sync"
+
+	"opencoderouter/internal/cache"
+	"opencoderouter/internal/daemon"
+	errorx "opencoderouter/internal/errors"
+	"opencoderouter/internal/session"
+)
+
+type SessionsHandlerConfig struct {
+	SessionManager  session.SessionManager
+	ScrollbackCache cache.ScrollbackCache
+	Logger          *slog.Logger
+}
+
+type SessionsHandler struct {
+	sessions   session.SessionManager
+	scrollback *ScrollbackHandler
+	logger     *slog.Logger
+
+	mu          sync.Mutex
+	attachments map[string][]session.TerminalConn
+}
+
+type createSessionRequest struct {
+	WorkspacePath string            `json:"workspacePath"`
+	Label         string            `json:"label"`
+	Labels        map[string]string `json:"labels"`
+}
+
+type sessionView struct {
+	ID              string                `json:"id"`
+	DaemonPort      int                   `json:"daemonPort"`
+	WorkspacePath   string                `json:"workspacePath"`
+	Status          session.SessionStatus `json:"status"`
+	CreatedAt       string                `json:"createdAt"`
+	LastActivity    string                `json:"lastActivity"`
+	AttachedClients int                   `json:"attachedClients"`
+	Labels          map[string]string     `json:"labels,omitempty"`
+	Health          session.HealthStatus  `json:"health"`
+}
+
+type errorResponse struct {
+	Error string `json:"error"`
+	Code  string `json:"code"`
+}
+
+func NewSessionsHandler(cfg SessionsHandlerConfig) *SessionsHandler {
+	logger := cfg.Logger
+	if logger == nil {
+		logger = slog.Default()
+	}
+
+	return &SessionsHandler{
+		sessions:    cfg.SessionManager,
+		scrollback:  NewScrollbackHandler(cfg.ScrollbackCache),
+		logger:      logger,
+		attachments: make(map[string][]session.TerminalConn),
+	}
+}
+
+func (h *SessionsHandler) Register(mux *http.ServeMux) {
+	if h == nil || mux == nil {
+		return
+	}
+	mux.HandleFunc("/api/sessions", h.handleCollection)
+	mux.HandleFunc("/api/sessions/", h.handleByID)
+}
+
+func (h *SessionsHandler) handleCollection(w http.ResponseWriter, r *http.Request) {
+	if h.sessions == nil {
+		writeAPIError(w, http.StatusServiceUnavailable, "session manager unavailable", "SESSION_MANAGER_UNAVAILABLE")
+		return
+	}
+
+	switch r.Method {
+	case http.MethodPost:
+		h.handleCreate(w, r)
+	case http.MethodGet:
+		h.handleList(w, r)
+	default:
+		writeAPIError(w, http.StatusMethodNotAllowed, "method not allowed", "METHOD_NOT_ALLOWED")
+	}
+}
+
+func (h *SessionsHandler) handleByID(w http.ResponseWriter, r *http.Request) {
+	if h.sessions == nil {
+		writeAPIError(w, http.StatusServiceUnavailable, "session manager unavailable", "SESSION_MANAGER_UNAVAILABLE")
+		return
+	}
+
+	id, action, ok := parseSessionPath(r.URL.Path)
+	if !ok {
+		writeAPIError(w, http.StatusNotFound, "route not found", "NOT_FOUND")
+		return
+	}
+
+	if action == "" {
+		switch r.Method {
+		case http.MethodGet:
+			h.handleGet(w, r, id)
+		case http.MethodDelete:
+			h.handleDelete(w, r, id)
+		default:
+			writeAPIError(w, http.StatusMethodNotAllowed, "method not allowed", "METHOD_NOT_ALLOWED")
+		}
+		return
+	}
+
+	if action == "chat" && r.Method == http.MethodGet {
+		h.handleChatHistory(w, r, id)
+		return
+	}
+
+	if action == "scrollback" && r.Method == http.MethodGet {
+		h.handleScrollback(w, r, id)
+		return
+	}
+
+	if r.Method != http.MethodPost {
+		writeAPIError(w, http.StatusMethodNotAllowed, "method not allowed", "METHOD_NOT_ALLOWED")
+		return
+	}
+
+	switch action {
+	case "stop":
+		h.handleStop(w, r, id)
+	case "start":
+		h.handleStart(w, r, id)
+	case "restart":
+		h.handleRestart(w, r, id)
+	case "attach":
+		h.handleAttach(w, r, id)
+	case "detach":
+		h.handleDetach(w, r, id)
+	case "chat":
+		h.handleChat(w, r, id)
+	default:
+		writeAPIError(w, http.StatusNotFound, "route not found", "NOT_FOUND")
+	}
+}
+
+func (h *SessionsHandler) handleScrollback(w http.ResponseWriter, r *http.Request, id string) {
+	if _, err := h.sessions.Get(id); err != nil {
+		h.writeSessionManagerError(w, err)
+		return
+	}
+	h.scrollback.HandleGet(w, r, id)
+}
+
+func (h *SessionsHandler) handleCreate(w http.ResponseWriter, r *http.Request) {
+	var req createSessionRequest
+	if err := decodeJSONBody(r, &req); err != nil {
+		writeAPIError(w, http.StatusBadRequest, err.Error(), "INVALID_REQUEST_BODY")
+		return
+	}
+
+	opts := session.CreateOpts{
+		WorkspacePath: req.WorkspacePath,
+	}
+	if len(req.Labels) > 0 || strings.TrimSpace(req.Label) != "" {
+		labels := make(map[string]string, len(req.Labels)+1)
+		for k, v := range req.Labels {
+			labels[k] = v
+		}
+		if label := strings.TrimSpace(req.Label); label != "" {
+			if _, exists := labels["label"]; !exists {
+				labels["label"] = label
+			}
+		}
+		opts.Labels = labels
+	}
+
+	handle, err := h.sessions.Create(r.Context(), opts)
+	if err != nil {
+		h.writeSessionManagerError(w, err)
+		return
+	}
+
+	view, err := h.buildSessionView(r.Context(), handle.ID)
+	if err != nil {
+		h.writeSessionManagerError(w, err)
+		return
+	}
+
+	writeJSON(w, http.StatusCreated, view)
+}
+
+func (h *SessionsHandler) handleList(w http.ResponseWriter, r *http.Request) {
+	filter := session.SessionListFilter{}
+
+	if rawStatus := strings.TrimSpace(r.URL.Query().Get("status")); rawStatus != "" {
+		status := session.SessionStatus(rawStatus)
+		if !isValidSessionStatus(status) {
+			writeAPIError(w, http.StatusBadRequest, "invalid status filter", "INVALID_STATUS_FILTER")
+			return
+		}
+		filter.Status = status
+	}
+
+	handles, err := h.sessions.List(filter)
+	if err != nil {
+		h.writeSessionManagerError(w, err)
+		return
+	}
+
+	switch strings.TrimSpace(r.URL.Query().Get("sort")) {
+	case "", "createdAt":
+	case "lastActivity":
+		sort.Slice(handles, func(i, j int) bool {
+			if handles[i].LastActivity.Equal(handles[j].LastActivity) {
+				return handles[i].ID < handles[j].ID
+			}
+			return handles[i].LastActivity.After(handles[j].LastActivity)
+		})
+	default:
+		writeAPIError(w, http.StatusBadRequest, "invalid sort option", "INVALID_SORT")
+		return
+	}
+
+	views := make([]sessionView, 0, len(handles))
+	for _, handle := range handles {
+		health, err := h.sessions.Health(r.Context(), handle.ID)
+		if err != nil {
+			h.logger.Debug("session health lookup failed for list", "session_id", handle.ID, "error", err)
+			health = session.HealthStatus{State: session.HealthStateUnknown}
+		}
+		views = append(views, toSessionView(handle, health))
+	}
+
+	writeJSON(w, http.StatusOK, views)
+}
+
+func (h *SessionsHandler) handleGet(w http.ResponseWriter, r *http.Request, id string) {
+	view, err := h.buildSessionView(r.Context(), id)
+	if err != nil {
+		h.writeSessionManagerError(w, err)
+		return
+	}
+
+	writeJSON(w, http.StatusOK, view)
+}
+
+func (h *SessionsHandler) handleStop(w http.ResponseWriter, r *http.Request, id string) {
+	if err := h.sessions.Stop(r.Context(), id); err != nil {
+		h.writeSessionManagerError(w, err)
+		return
+	}
+
+	view, err := h.buildSessionView(r.Context(), id)
+	if err != nil {
+		h.writeSessionManagerError(w, err)
+		return
+	}
+
+	writeJSON(w, http.StatusOK, view)
+}
+
+func (h *SessionsHandler) handleRestart(w http.ResponseWriter, r *http.Request, id string) {
+	handle, err := h.sessions.Restart(r.Context(), id)
+	if err != nil {
+		h.writeSessionManagerError(w, err)
+		return
+	}
+
+	view := toSessionView(*handle, session.HealthStatus{State: session.HealthStateUnknown})
+	if health, healthErr := h.sessions.Health(r.Context(), id); healthErr == nil {
+		view.Health = health
+	}
+
+	writeJSON(w, http.StatusOK, view)
+}
+
+func (h *SessionsHandler) handleStart(w http.ResponseWriter, r *http.Request, id string) {
+	h.handleRestart(w, r, id)
+}
+
+func (h *SessionsHandler) handleDelete(w http.ResponseWriter, r *http.Request, id string) {
+	if err := h.sessions.Delete(r.Context(), id); err != nil {
h.writeSessionManagerError(w, err) + return + } + + h.clearAttachments(id) + w.WriteHeader(http.StatusNoContent) +} + +func (h *SessionsHandler) handleAttach(w http.ResponseWriter, r *http.Request, id string) { + conn, err := h.sessions.AttachTerminal(r.Context(), id) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + if conn != nil { + h.storeAttachment(id, conn) + } + + view, err := h.buildSessionView(r.Context(), id) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + writeJSON(w, http.StatusOK, view) +} + +func (h *SessionsHandler) handleDetach(w http.ResponseWriter, r *http.Request, id string) { + if _, err := h.sessions.Get(id); err != nil { + h.writeSessionManagerError(w, err) + return + } + + if conn, ok := h.popAttachment(id); ok && conn != nil { + _ = conn.Close() + } + + view, err := h.buildSessionView(r.Context(), id) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + writeJSON(w, http.StatusOK, view) +} + +func (h *SessionsHandler) buildSessionView(ctx context.Context, id string) (sessionView, error) { + handle, err := h.sessions.Get(id) + if err != nil { + return sessionView{}, err + } + + health, err := h.sessions.Health(ctx, id) + if err != nil { + return sessionView{}, err + } + + return toSessionView(*handle, health), nil +} + +func (h *SessionsHandler) storeAttachment(id string, conn session.TerminalConn) { + h.mu.Lock() + h.attachments[id] = append(h.attachments[id], conn) + h.mu.Unlock() +} + +func (h *SessionsHandler) popAttachment(id string) (session.TerminalConn, bool) { + h.mu.Lock() + defer h.mu.Unlock() + + conns := h.attachments[id] + if len(conns) == 0 { + return nil, false + } + + idx := len(conns) - 1 + conn := conns[idx] + if idx == 0 { + delete(h.attachments, id) + } else { + h.attachments[id] = conns[:idx] + } + + return conn, true +} + +func (h *SessionsHandler) clearAttachments(id string) { + h.mu.Lock() + conns := h.attachments[id] + delete(h.attachments, id) + 
h.mu.Unlock() + + for _, conn := range conns { + if conn != nil { + _ = conn.Close() + } + } +} + +func (h *SessionsHandler) writeSessionManagerError(w http.ResponseWriter, err error) { + switch errorx.Code(err) { + case "WORKSPACE_PATH_REQUIRED", "WORKSPACE_PATH_INVALID": + writeAPIError(w, http.StatusBadRequest, errorx.Message(err), errorx.Code(err)) + case "SESSION_ALREADY_EXISTS", "SESSION_STOPPED": + writeAPIError(w, http.StatusConflict, errorx.Message(err), errorx.Code(err)) + case "SESSION_NOT_FOUND", "NO_AVAILABLE_SESSION_PORTS", "TERMINAL_ATTACH_UNAVAILABLE", "DAEMON_UNHEALTHY": + writeAPIError(w, errorx.HTTPStatus(err), errorx.Message(err), errorx.Code(err)) + case "REQUEST_CANCELED", "REQUEST_TIMEOUT": + writeAPIError(w, errorx.HTTPStatus(err), errorx.Message(err), errorx.Code(err)) + default: + h.logger.Error("session handler error", "error", err) + writeAPIError(w, http.StatusInternalServerError, errorx.Message(err), errorx.Code(err)) + } +} + +func parseSessionPath(path string) (id string, action string, ok bool) { + tail := strings.TrimPrefix(path, "/api/sessions/") + tail = strings.TrimSpace(tail) + tail = strings.Trim(tail, "/") + if tail == "" { + return "", "", false + } + + parts := strings.Split(tail, "/") + if len(parts) == 1 { + if parts[0] == "" { + return "", "", false + } + return parts[0], "", true + } + if len(parts) == 2 { + if parts[0] == "" || parts[1] == "" { + return "", "", false + } + return parts[0], parts[1], true + } + + return "", "", false +} + +func toSessionView(handle session.SessionHandle, health session.HealthStatus) sessionView { + return sessionView{ + ID: handle.ID, + DaemonPort: handle.DaemonPort, + WorkspacePath: handle.WorkspacePath, + Status: handle.Status, + CreatedAt: handle.CreatedAt.UTC().Format(timeLayoutRFC3339Nano), + LastActivity: handle.LastActivity.UTC().Format(timeLayoutRFC3339Nano), + AttachedClients: handle.AttachedClients, + Labels: handle.Labels, + Health: health, + } +} + +const 
timeLayoutRFC3339Nano = "2006-01-02T15:04:05.999999999Z07:00" + +func decodeJSONBody(r *http.Request, dst any) error { + dec := json.NewDecoder(r.Body) + dec.DisallowUnknownFields() + if err := dec.Decode(dst); err != nil { + return err + } + if err := dec.Decode(&struct{}{}); !errors.Is(err, io.EOF) { + if err == nil { + return errors.New("request body must contain a single JSON object") + } + return err + } + return nil +} + +func isValidSessionStatus(status session.SessionStatus) bool { + switch status { + case session.SessionStatusUnknown, session.SessionStatusActive, session.SessionStatusIdle, session.SessionStatusStopped, session.SessionStatusError: + return true + default: + return false + } +} + +func writeJSON(w http.ResponseWriter, status int, payload any) { + w.Header().Set("Content-Type", "application/json") + w.WriteHeader(status) + _ = json.NewEncoder(w).Encode(payload) +} + +func writeAPIError(w http.ResponseWriter, status int, message string, code string) { + writeJSON(w, status, errorResponse{Error: message, Code: code}) +} + +func (h *SessionsHandler) handleChat(w http.ResponseWriter, r *http.Request, id string) { + var req struct { + Prompt string `json:"prompt"` + } + if err := decodeJSONBody(r, &req); err != nil { + writeAPIError(w, http.StatusBadRequest, err.Error(), "INVALID_REQUEST_BODY") + return + } + + handle, err := h.sessions.Get(id) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + client, err := daemon.NewDaemonClient(fmt.Sprintf("http://127.0.0.1:%d", handle.DaemonPort), daemon.ClientConfig{}) + if err != nil { + writeAPIError(w, http.StatusInternalServerError, err.Error(), "DAEMON_CLIENT_ERROR") + return + } + + ch, err := client.SendMessage(r.Context(), id, req.Prompt) + if err != nil { + writeAPIError(w, http.StatusInternalServerError, err.Error(), "SEND_MESSAGE_ERROR") + return + } + + w.Header().Set("Content-Type", "text/event-stream") + w.Header().Set("Cache-Control", "no-cache") + 
w.Header().Set("Connection", "keep-alive") + + flusher, ok := w.(http.Flusher) + if !ok { + writeAPIError(w, http.StatusInternalServerError, "streaming unsupported", "STREAMING_UNSUPPORTED") + return + } + + for chunk := range ch { + data, _ := json.Marshal(chunk) + fmt.Fprintf(w, "data: %s\n\n", data) + flusher.Flush() + } +} + +func (h *SessionsHandler) handleChatHistory(w http.ResponseWriter, r *http.Request, id string) { + handle, err := h.sessions.Get(id) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + client, err := daemon.NewDaemonClient(fmt.Sprintf("http://127.0.0.1:%d", handle.DaemonPort), daemon.ClientConfig{}) + if err != nil { + writeAPIError(w, http.StatusInternalServerError, err.Error(), "DAEMON_CLIENT_ERROR") + return + } + + msgs, err := client.GetMessages(r.Context(), id) + if err != nil { + writeAPIError(w, http.StatusInternalServerError, err.Error(), "GET_MESSAGES_ERROR") + return + } + + writeJSON(w, http.StatusOK, msgs) +} diff --git a/internal/api/sessions_test.go b/internal/api/sessions_test.go new file mode 100644 index 0000000..fec0a8c --- /dev/null +++ b/internal/api/sessions_test.go @@ -0,0 +1,577 @@ +package api + +import ( + "bytes" + "context" + "encoding/json" + "errors" + "io" + "net/http" + "net/http/httptest" + "os" + "sort" + "strings" + "sync" + "testing" + "time" + + "opencoderouter/internal/session" +) + +type fakeTerminalConn struct { + mu sync.Mutex + onClose func() + closed bool +} + +func (c *fakeTerminalConn) Read(_ []byte) (int, error) { return 0, io.EOF } + +func (c *fakeTerminalConn) Write(p []byte) (int, error) { return len(p), nil } + +func (c *fakeTerminalConn) Close() error { + c.mu.Lock() + if c.closed { + c.mu.Unlock() + return nil + } + c.closed = true + onClose := c.onClose + c.mu.Unlock() + if onClose != nil { + onClose() + } + return nil +} + +func (c *fakeTerminalConn) Resize(_, _ int) error { return nil } + +type fakeStatefulSessionManager struct { + mu sync.Mutex + sessions 
map[string]session.SessionHandle + health map[string]session.HealthStatus + nextID int + createErr error + listErr error + getErr error + stopErr error + restartErr error + deleteErr error + attachErr error + healthErr error +} + +func newFakeStatefulSessionManager() *fakeStatefulSessionManager { + return &fakeStatefulSessionManager{ + sessions: make(map[string]session.SessionHandle), + health: make(map[string]session.HealthStatus), + } +} + +func (m *fakeStatefulSessionManager) Create(_ context.Context, opts session.CreateOpts) (*session.SessionHandle, error) { + m.mu.Lock() + defer m.mu.Unlock() + + if m.createErr != nil { + return nil, m.createErr + } + if strings.TrimSpace(opts.WorkspacePath) == "" { + return nil, session.ErrWorkspacePathRequired + } + + m.nextID++ + id := "session-" + time.Now().UTC().Format("150405") + "-" + string(rune('a'+m.nextID)) + now := time.Now().UTC() + handle := session.SessionHandle{ + ID: id, + DaemonPort: 32000 + m.nextID, + WorkspacePath: opts.WorkspacePath, + Status: session.SessionStatusActive, + CreatedAt: now, + LastActivity: now, + Labels: cloneLabels(opts.Labels), + } + m.sessions[id] = handle + m.health[id] = session.HealthStatus{State: session.HealthStateHealthy, LastCheck: now} + clone := handle + clone.Labels = cloneLabels(handle.Labels) + return &clone, nil +} + +func (m *fakeStatefulSessionManager) Get(id string) (*session.SessionHandle, error) { + m.mu.Lock() + defer m.mu.Unlock() + + if m.getErr != nil { + return nil, m.getErr + } + handle, ok := m.sessions[id] + if !ok { + return nil, session.ErrSessionNotFound + } + clone := handle + clone.Labels = cloneLabels(handle.Labels) + return &clone, nil +} + +func (m *fakeStatefulSessionManager) List(filter session.SessionListFilter) ([]session.SessionHandle, error) { + m.mu.Lock() + defer m.mu.Unlock() + + if m.listErr != nil { + return nil, m.listErr + } + + out := make([]session.SessionHandle, 0, len(m.sessions)) + for _, handle := range m.sessions { + if 
filter.Status != "" && handle.Status != filter.Status { + continue + } + clone := handle + clone.Labels = cloneLabels(handle.Labels) + out = append(out, clone) + } + sort.Slice(out, func(i, j int) bool { + if out[i].CreatedAt.Equal(out[j].CreatedAt) { + return out[i].ID < out[j].ID + } + return out[i].CreatedAt.Before(out[j].CreatedAt) + }) + return out, nil +} + +func (m *fakeStatefulSessionManager) Stop(_ context.Context, id string) error { + m.mu.Lock() + defer m.mu.Unlock() + + if m.stopErr != nil { + return m.stopErr + } + handle, ok := m.sessions[id] + if !ok { + return session.ErrSessionNotFound + } + handle.Status = session.SessionStatusStopped + handle.LastActivity = time.Now().UTC() + m.sessions[id] = handle + health := m.health[id] + health.State = session.HealthStateUnknown + health.LastCheck = time.Now().UTC() + m.health[id] = health + return nil +} + +func (m *fakeStatefulSessionManager) Restart(_ context.Context, id string) (*session.SessionHandle, error) { + m.mu.Lock() + defer m.mu.Unlock() + + if m.restartErr != nil { + return nil, m.restartErr + } + handle, ok := m.sessions[id] + if !ok { + return nil, session.ErrSessionNotFound + } + handle.Status = session.SessionStatusActive + handle.LastActivity = time.Now().UTC() + m.sessions[id] = handle + health := m.health[id] + health.State = session.HealthStateHealthy + health.LastCheck = time.Now().UTC() + health.Error = "" + m.health[id] = health + clone := handle + clone.Labels = cloneLabels(handle.Labels) + return &clone, nil +} + +func (m *fakeStatefulSessionManager) Delete(_ context.Context, id string) error { + m.mu.Lock() + defer m.mu.Unlock() + + if m.deleteErr != nil { + return m.deleteErr + } + if _, ok := m.sessions[id]; !ok { + return session.ErrSessionNotFound + } + delete(m.sessions, id) + delete(m.health, id) + return nil +} + +func (m *fakeStatefulSessionManager) AttachTerminal(_ context.Context, id string) (session.TerminalConn, error) { + m.mu.Lock() + if m.attachErr != nil { + 
m.mu.Unlock()
+		return nil, m.attachErr
+	}
+	handle, ok := m.sessions[id]
+	if !ok {
+		m.mu.Unlock()
+		return nil, session.ErrSessionNotFound
+	}
+	handle.AttachedClients++
+	handle.LastActivity = time.Now().UTC()
+	m.sessions[id] = handle
+	m.mu.Unlock()
+
+	return &fakeTerminalConn{onClose: func() {
+		m.mu.Lock()
+		defer m.mu.Unlock()
+		handle, ok := m.sessions[id]
+		if !ok {
+			return
+		}
+		if handle.AttachedClients > 0 {
+			handle.AttachedClients--
+		}
+		handle.LastActivity = time.Now().UTC()
+		m.sessions[id] = handle
+	}}, nil
+}
+
+func (m *fakeStatefulSessionManager) Health(_ context.Context, id string) (session.HealthStatus, error) {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+
+	if m.healthErr != nil {
+		return session.HealthStatus{}, m.healthErr
+	}
+	health, ok := m.health[id]
+	if !ok {
+		return session.HealthStatus{}, session.ErrSessionNotFound
+	}
+	return health, nil
+}
+
+func cloneLabels(labels map[string]string) map[string]string {
+	if len(labels) == 0 {
+		return nil
+	}
+	cloned := make(map[string]string, len(labels))
+	for k, v := range labels {
+		cloned[k] = v
+	}
+	return cloned
+}
+
+func newSessionsTestServer(t *testing.T, mgr session.SessionManager) *httptest.Server {
+	t.Helper()
+	mux := http.NewServeMux()
+	NewSessionsHandler(SessionsHandlerConfig{SessionManager: mgr}).Register(mux)
+	return httptest.NewServer(mux)
+}
+
+func doJSONRequest(t *testing.T, client *http.Client, method, url string, body any) *http.Response {
+	t.Helper()
+
+	var reader io.Reader
+	if body != nil {
+		buf, err := json.Marshal(body)
+		if err != nil {
+			t.Fatalf("marshal request body: %v", err)
+		}
+		reader = bytes.NewReader(buf)
+	}
+
+	req, err := http.NewRequest(method, url, reader)
+	if err != nil {
+		t.Fatalf("new request: %v", err)
+	}
+	if body != nil {
+		req.Header.Set("Content-Type", "application/json")
+	}
+
+	resp, err := client.Do(req)
+	if err != nil {
+		t.Fatalf("request failed: %v", err)
+	}
+	return resp
+}
+
+func decodeResponseJSON[T any](t *testing.T,
body io.Reader) T {
+	t.Helper()
+	var out T
+	if err := json.NewDecoder(body).Decode(&out); err != nil {
+		t.Fatalf("decode response: %v", err)
+	}
+	return out
+}
+
+func assertErrorShape(t *testing.T, resp *http.Response, expectedStatus int, expectedCode string) {
+	t.Helper()
+	defer resp.Body.Close()
+	if resp.StatusCode != expectedStatus {
+		t.Fatalf("status=%d want=%d", resp.StatusCode, expectedStatus)
+	}
+
+	var payload map[string]any
+	if err := json.NewDecoder(resp.Body).Decode(&payload); err != nil {
+		t.Fatalf("decode error payload: %v", err)
+	}
+	if got := payload["code"]; got != expectedCode {
+		t.Fatalf("error code=%v want=%s", got, expectedCode)
+	}
+	if _, ok := payload["error"]; !ok {
+		t.Fatalf("expected error field in payload: %#v", payload)
+	}
+	if len(payload) != 2 {
+		t.Fatalf("expected payload shape {error,code}, got %#v", payload)
+	}
+}
+
+func TestSessionsLifecycleEndpoints(t *testing.T) {
+	mgr := newFakeStatefulSessionManager()
+	srv := newSessionsTestServer(t, mgr)
+	defer srv.Close()
+
+	workspace := t.TempDir()
+
+	createResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions", map[string]any{
+		"workspacePath": workspace,
+		"label":         "api-test",
+	})
+	if createResp.StatusCode != http.StatusCreated {
+		defer createResp.Body.Close()
+		t.Fatalf("create status=%d want=%d", createResp.StatusCode, http.StatusCreated)
+	}
+	created := decodeResponseJSON[sessionView](t, createResp.Body)
+	_ = createResp.Body.Close()
+	if created.ID == "" {
+		t.Fatal("expected created session id")
+	}
+	if created.Labels["label"] != "api-test" {
+		t.Fatalf("expected label api-test, got %#v", created.Labels)
+	}
+
+	listResp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions", nil)
+	if listResp.StatusCode != http.StatusOK {
+		defer listResp.Body.Close()
+		t.Fatalf("list status=%d want=%d", listResp.StatusCode, http.StatusOK)
+	}
+	listed := decodeResponseJSON[[]sessionView](t, listResp.Body)
+	_ =
listResp.Body.Close()
+	if len(listed) != 1 || listed[0].ID != created.ID {
+		t.Fatalf("unexpected list response: %#v", listed)
+	}
+
+	getResp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+created.ID, nil)
+	if getResp.StatusCode != http.StatusOK {
+		defer getResp.Body.Close()
+		t.Fatalf("get status=%d want=%d", getResp.StatusCode, http.StatusOK)
+	}
+	detail := decodeResponseJSON[sessionView](t, getResp.Body)
+	_ = getResp.Body.Close()
+	if detail.Health.State != session.HealthStateHealthy {
+		t.Fatalf("expected healthy state, got %s", detail.Health.State)
+	}
+
+	stopResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/stop", nil)
+	if stopResp.StatusCode != http.StatusOK {
+		defer stopResp.Body.Close()
+		t.Fatalf("stop status=%d want=%d", stopResp.StatusCode, http.StatusOK)
+	}
+	stopped := decodeResponseJSON[sessionView](t, stopResp.Body)
+	_ = stopResp.Body.Close()
+	if stopped.Status != session.SessionStatusStopped {
+		t.Fatalf("stop status field=%s want=%s", stopped.Status, session.SessionStatusStopped)
+	}
+
+	restartResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/restart", nil)
+	if restartResp.StatusCode != http.StatusOK {
+		defer restartResp.Body.Close()
+		t.Fatalf("restart status=%d want=%d", restartResp.StatusCode, http.StatusOK)
+	}
+	restarted := decodeResponseJSON[sessionView](t, restartResp.Body)
+	_ = restartResp.Body.Close()
+	if restarted.Status != session.SessionStatusActive {
+		t.Fatalf("restart status field=%s want=%s", restarted.Status, session.SessionStatusActive)
+	}
+
+	stopAgainResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/stop", nil)
+	if stopAgainResp.StatusCode != http.StatusOK {
+		defer stopAgainResp.Body.Close()
+		t.Fatalf("second stop status=%d want=%d", stopAgainResp.StatusCode, http.StatusOK)
+	}
+	_ = stopAgainResp.Body.Close()
+
+	startResp := doJSONRequest(t,
srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/start", nil)
+	if startResp.StatusCode != http.StatusOK {
+		defer startResp.Body.Close()
+		t.Fatalf("start status=%d want=%d", startResp.StatusCode, http.StatusOK)
+	}
+	started := decodeResponseJSON[sessionView](t, startResp.Body)
+	_ = startResp.Body.Close()
+	if started.Status != session.SessionStatusActive {
+		t.Fatalf("start status field=%s want=%s", started.Status, session.SessionStatusActive)
+	}
+
+	attachResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/attach", nil)
+	if attachResp.StatusCode != http.StatusOK {
+		defer attachResp.Body.Close()
+		t.Fatalf("attach status=%d want=%d", attachResp.StatusCode, http.StatusOK)
+	}
+	attached := decodeResponseJSON[sessionView](t, attachResp.Body)
+	_ = attachResp.Body.Close()
+	if attached.AttachedClients != 1 {
+		t.Fatalf("attached clients=%d want=1", attached.AttachedClients)
+	}
+
+	detachResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/detach", nil)
+	if detachResp.StatusCode != http.StatusOK {
+		defer detachResp.Body.Close()
+		t.Fatalf("detach status=%d want=%d", detachResp.StatusCode, http.StatusOK)
+	}
+	detached := decodeResponseJSON[sessionView](t, detachResp.Body)
+	_ = detachResp.Body.Close()
+	if detached.AttachedClients != 0 {
+		t.Fatalf("attached clients after detach=%d want=0", detached.AttachedClients)
+	}
+
+	deleteResp := doJSONRequest(t, srv.Client(), http.MethodDelete, srv.URL+"/api/sessions/"+created.ID, nil)
+	if deleteResp.StatusCode != http.StatusNoContent {
+		defer deleteResp.Body.Close()
+		t.Fatalf("delete status=%d want=%d", deleteResp.StatusCode, http.StatusNoContent)
+	}
+	_ = deleteResp.Body.Close()
+
+	missingResp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+created.ID, nil)
+	assertErrorShape(t, missingResp, http.StatusNotFound, "SESSION_NOT_FOUND")
+}
+
+func TestSessionsCreateValidationErrors(t
*testing.T) {
+	mgr := newFakeStatefulSessionManager()
+	srv := newSessionsTestServer(t, mgr)
+	defer srv.Close()
+
+	invalidReq, err := http.NewRequest(http.MethodPost, srv.URL+"/api/sessions", strings.NewReader(`{"workspacePath":`))
+	if err != nil {
+		t.Fatalf("new request: %v", err)
+	}
+	invalidReq.Header.Set("Content-Type", "application/json")
+	invalidResp, err := srv.Client().Do(invalidReq)
+	if err != nil {
+		t.Fatalf("request failed: %v", err)
+	}
+	assertErrorShape(t, invalidResp, http.StatusBadRequest, "INVALID_REQUEST_BODY")
+
+	mgr.createErr = session.ErrWorkspacePathInvalid
+	badPathResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions", map[string]any{
+		"workspacePath": "/path/does/not/exist",
+	})
+	assertErrorShape(t, badPathResp, http.StatusBadRequest, "WORKSPACE_PATH_INVALID")
+}
+
+func TestSessionsCreatePortExhaustionError(t *testing.T) {
+	mgr := newFakeStatefulSessionManager()
+	srv := newSessionsTestServer(t, mgr)
+	defer srv.Close()
+
+	mgr.createErr = session.ErrNoAvailableSessionPorts
+	resp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions", map[string]any{
+		"workspacePath": t.TempDir(),
+	})
+	assertErrorShape(t, resp, http.StatusServiceUnavailable, "NO_AVAILABLE_SESSION_PORTS")
+}
+
+func TestSessionsListFilterAndSort(t *testing.T) {
+	mgr := newFakeStatefulSessionManager()
+	workspace := t.TempDir()
+	first, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("seed first session: %v", err)
+	}
+	second, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("seed second session: %v", err)
+	}
+	if err := mgr.Stop(context.Background(), first.ID); err != nil {
+		t.Fatalf("seed stop first: %v", err)
+	}
+
+	mgr.mu.Lock()
+	h := mgr.sessions[first.ID]
+	h.LastActivity = time.Now().UTC().Add(-2 * time.Hour)
+	mgr.sessions[first.ID] = h
+	h2 := mgr.sessions[second.ID]
+	h2.LastActivity = time.Now().UTC().Add(-1 * time.Minute)
+	mgr.sessions[second.ID] = h2
+	mgr.mu.Unlock()
+
+	srv := newSessionsTestServer(t, mgr)
+	defer srv.Close()
+
+	filteredResp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions?status=stopped", nil)
+	if filteredResp.StatusCode != http.StatusOK {
+		defer filteredResp.Body.Close()
+		t.Fatalf("filtered status=%d want=%d", filteredResp.StatusCode, http.StatusOK)
+	}
+	filtered := decodeResponseJSON[[]sessionView](t, filteredResp.Body)
+	_ = filteredResp.Body.Close()
+	if len(filtered) != 1 || filtered[0].ID != first.ID {
+		t.Fatalf("unexpected filtered list: %#v", filtered)
+	}
+
+	sortedResp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions?sort=lastActivity", nil)
+	if sortedResp.StatusCode != http.StatusOK {
+		defer sortedResp.Body.Close()
+		t.Fatalf("sorted status=%d want=%d", sortedResp.StatusCode, http.StatusOK)
+	}
+	sortedViews := decodeResponseJSON[[]sessionView](t, sortedResp.Body)
+	_ = sortedResp.Body.Close()
+	if len(sortedViews) != 2 || sortedViews[0].ID != second.ID {
+		t.Fatalf("unexpected sort ordering: %#v", sortedViews)
+	}
+
+	invalidSortResp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions?sort=random", nil)
+	assertErrorShape(t, invalidSortResp, http.StatusBadRequest, "INVALID_SORT")
+}
+
+func TestSessionsMethodAndRouteErrors(t *testing.T) {
+	mgr := newFakeStatefulSessionManager()
+	srv := newSessionsTestServer(t, mgr)
+	defer srv.Close()
+
+	methodResp := doJSONRequest(t, srv.Client(), http.MethodPut, srv.URL+"/api/sessions", nil)
+	assertErrorShape(t, methodResp, http.StatusMethodNotAllowed, "METHOD_NOT_ALLOWED")
+
+	unknownRouteResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/s-1/unknown", nil)
+	assertErrorShape(t, unknownRouteResp, http.StatusNotFound, "NOT_FOUND")
+
+	unknownStartMethod := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/s-1/start", nil)
+
assertErrorShape(t, unknownStartMethod, http.StatusMethodNotAllowed, "METHOD_NOT_ALLOWED")
+
+	mgr.attachErr = session.ErrTerminalAttachDisabled
+	workspace := t.TempDir()
+	created, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("seed session: %v", err)
+	}
+
+	attachResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/attach", nil)
+	assertErrorShape(t, attachResp, http.StatusServiceUnavailable, "TERMINAL_ATTACH_UNAVAILABLE")
+}
+
+func TestSessionsHandlerUnavailableManager(t *testing.T) {
+	srv := newSessionsTestServer(t, nil)
+	defer srv.Close()
+
+	resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions", nil)
+	assertErrorShape(t, resp, http.StatusServiceUnavailable, "SESSION_MANAGER_UNAVAILABLE")
+}
+
+func TestDecodeJSONBodyRejectsTrailingPayload(t *testing.T) {
+	req := httptest.NewRequest(http.MethodPost, "/api/sessions", strings.NewReader(`{"workspacePath":"/tmp"}{}`))
+	var body createSessionRequest
+	err := decodeJSONBody(req, &body)
+	if err == nil {
+		t.Fatal("expected decode error for trailing payload")
+	}
+	if errors.Is(err, io.EOF) {
+		t.Fatalf("expected structured decode error, got EOF")
+	}
+}
+
+func TestMain(m *testing.M) {
+	os.Exit(m.Run())
+}
diff --git a/internal/auth/config.go b/internal/auth/config.go
new file mode 100644
index 0000000..28f6738
--- /dev/null
+++ b/internal/auth/config.go
@@ -0,0 +1,84 @@
+package auth
+
+import (
+	"os"
+	"strings"
+)
+
+const (
+	envAuthEnabled      = "OCR_AUTH_ENABLED"
+	envBearerTokens     = "OCR_AUTH_BEARER_TOKENS"
+	envBasicAuth        = "OCR_AUTH_BASIC"
+	envCORSAllowOrigins = "OCR_CORS_ALLOW_ORIGINS"
+)
+
+type Config struct {
+	Enabled            bool
+	BearerTokens       []string
+	BasicAuth          map[string]string
+	CORSAllowedOrigins []string
+	BypassPaths        map[string]struct{}
+}
+
+func Defaults() Config {
+	return Config{
+		Enabled:      false,
+		BearerTokens: nil,
+		BasicAuth:    map[string]string{},
+
CORSAllowedOrigins: []string{"*"},
+		BypassPaths: map[string]struct{}{
+			"/api/health":   {},
+			"/api/backends": {},
+		},
+	}
+}
+
+func LoadFromEnv() Config {
+	cfg := Defaults()
+
+	if raw := strings.TrimSpace(os.Getenv(envAuthEnabled)); raw != "" {
+		raw = strings.ToLower(raw)
+		cfg.Enabled = raw == "1" || raw == "true" || raw == "yes" || raw == "on"
+	}
+
+	if raw := strings.TrimSpace(os.Getenv(envBearerTokens)); raw != "" {
+		cfg.BearerTokens = splitCSV(raw)
+	}
+
+	if raw := strings.TrimSpace(os.Getenv(envBasicAuth)); raw != "" {
+		pairs := splitCSV(raw)
+		for _, pair := range pairs {
+			parts := strings.SplitN(pair, ":", 2)
+			if len(parts) != 2 {
+				continue
+			}
+			user := strings.TrimSpace(parts[0])
+			pass := strings.TrimSpace(parts[1])
+			if user == "" || pass == "" {
+				continue
+			}
+			cfg.BasicAuth[user] = pass
+		}
+	}
+
+	if raw := strings.TrimSpace(os.Getenv(envCORSAllowOrigins)); raw != "" {
+		origins := splitCSV(raw)
+		if len(origins) > 0 {
+			cfg.CORSAllowedOrigins = origins
+		}
+	}
+
+	return cfg
+}
+
+func splitCSV(raw string) []string {
+	parts := strings.Split(raw, ",")
+	out := make([]string, 0, len(parts))
+	for _, p := range parts {
+		trimmed := strings.TrimSpace(p)
+		if trimmed != "" {
+			out = append(out, trimmed)
+		}
+	}
+	return out
+}
diff --git a/internal/auth/config_test.go b/internal/auth/config_test.go
new file mode 100644
index 0000000..5d80601
--- /dev/null
+++ b/internal/auth/config_test.go
@@ -0,0 +1,78 @@
+package auth
+
+import (
+	"os"
+	"reflect"
+	"testing"
+)
+
+func TestDefaults(t *testing.T) {
+	cfg := Defaults()
+	if cfg.Enabled {
+		t.Fatal("expected auth disabled by default")
+	}
+	if len(cfg.CORSAllowedOrigins) != 1 || cfg.CORSAllowedOrigins[0] != "*" {
+		t.Fatalf("unexpected default CORS origins: %#v", cfg.CORSAllowedOrigins)
+	}
+	if _, ok := cfg.BypassPaths["/api/health"]; !ok {
+		t.Fatal("expected /api/health bypass by default")
+	}
+	if _, ok := cfg.BypassPaths["/api/backends"]; !ok {
+		t.Fatal("expected /api/backends
bypass by default")
+	}
+}
+
+func TestLoadFromEnv(t *testing.T) {
+	t.Setenv(envAuthEnabled, "true")
+	t.Setenv(envBearerTokens, "tok-a, tok-b")
+	t.Setenv(envBasicAuth, "alice:secret,bob:pw")
+	t.Setenv(envCORSAllowOrigins, "https://a.example,https://b.example")
+
+	cfg := LoadFromEnv()
+	if !cfg.Enabled {
+		t.Fatal("expected enabled from env")
+	}
+	if !reflect.DeepEqual(cfg.BearerTokens, []string{"tok-a", "tok-b"}) {
+		t.Fatalf("unexpected tokens: %#v", cfg.BearerTokens)
+	}
+	if got := cfg.BasicAuth["alice"]; got != "secret" {
+		t.Fatalf("unexpected alice password: %q", got)
+	}
+	if got := cfg.BasicAuth["bob"]; got != "pw" {
+		t.Fatalf("unexpected bob password: %q", got)
+	}
+	if !reflect.DeepEqual(cfg.CORSAllowedOrigins, []string{"https://a.example", "https://b.example"}) {
+		t.Fatalf("unexpected CORS origins: %#v", cfg.CORSAllowedOrigins)
+	}
+}
+
+func TestLoadFromEnv_InvalidBasicEntriesIgnored(t *testing.T) {
+	t.Setenv(envBasicAuth, "bad,no-colon,ok:yes,:missing-user,user-only:")
+	cfg := LoadFromEnv()
+	if len(cfg.BasicAuth) != 1 {
+		t.Fatalf("expected 1 valid basic entry, got %d", len(cfg.BasicAuth))
+	}
+	if cfg.BasicAuth["ok"] != "yes" {
+		t.Fatal("expected ok:yes to be parsed")
+	}
+}
+
+func TestSplitCSV(t *testing.T) {
+	got := splitCSV(" a, ,b ,, c ")
+	want := []string{"a", "b", "c"}
+	if !reflect.DeepEqual(got, want) {
+		t.Fatalf("splitCSV mismatch: got %#v want %#v", got, want)
+	}
+}
+
+func TestLoadFromEnv_RespectsUnset(t *testing.T) {
+	_ = os.Unsetenv(envAuthEnabled)
+	_ = os.Unsetenv(envBearerTokens)
+	_ = os.Unsetenv(envBasicAuth)
+	_ = os.Unsetenv(envCORSAllowOrigins)
+
+	cfg := LoadFromEnv()
+	if cfg.Enabled {
+		t.Fatal("expected disabled when unset")
+	}
+}
diff --git a/internal/auth/middleware.go b/internal/auth/middleware.go
new file mode 100644
index 0000000..b357110
--- /dev/null
+++ b/internal/auth/middleware.go
@@ -0,0 +1,190 @@
+package auth
+
+import (
+	"crypto/rand"
+	"crypto/subtle"
+	"encoding/base64"
+	"encoding/hex"
+
"encoding/json"
+	"net/http"
+	"strings"
+	"time"
+)
+
+type RateLimiter interface {
+	Allow(r *http.Request) bool
+}
+
+type NoopRateLimiter struct{}
+
+func (NoopRateLimiter) Allow(_ *http.Request) bool { return true }
+
+func Middleware(next http.Handler, cfg Config) http.Handler {
+	if next == nil {
+		next = http.NotFoundHandler()
+	}
+
+	if cfg.BasicAuth == nil {
+		cfg.BasicAuth = map[string]string{}
+	}
+
+	return withRequestID(withCORS(withAuth(withRateLimit(next, NoopRateLimiter{}), cfg), cfg))
+}
+
+func withRequestID(next http.Handler) http.Handler {
+	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		reqID := strings.TrimSpace(r.Header.Get("X-Request-ID"))
+		if reqID == "" {
+			reqID = newRequestID()
+		}
+		w.Header().Set("X-Request-ID", reqID)
+		next.ServeHTTP(w, r)
+	})
+}
+
+func withRateLimit(next http.Handler, limiter RateLimiter) http.Handler {
+	if limiter == nil {
+		limiter = NoopRateLimiter{}
+	}
+	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		if !limiter.Allow(r) {
+			writeJSONError(w, http.StatusTooManyRequests, "rate_limited", "RATE_LIMITED", "rate limit exceeded")
+			return
+		}
+		next.ServeHTTP(w, r)
+	})
+}
+
+func withCORS(next http.Handler, cfg Config) http.Handler {
+	allowed := cfg.CORSAllowedOrigins
+	if len(allowed) == 0 {
+		allowed = []string{"*"}
+	}
+
+	allowAll := false
+	allowSet := make(map[string]struct{}, len(allowed))
+	for _, origin := range allowed {
+		if origin == "*" {
+			allowAll = true
+			continue
+		}
+		allowSet[origin] = struct{}{}
+	}
+
+	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		origin := strings.TrimSpace(r.Header.Get("Origin"))
+		if origin != "" {
+			if allowAll {
+				w.Header().Set("Access-Control-Allow-Origin", "*")
+			} else if _, ok := allowSet[origin]; ok {
+				w.Header().Set("Access-Control-Allow-Origin", origin)
+				w.Header().Add("Vary", "Origin")
+			}
+			w.Header().Set("Access-Control-Allow-Headers", "Authorization, Content-Type,
X-Request-ID")
+			w.Header().Set("Access-Control-Allow-Methods", "GET, POST, PUT, PATCH, DELETE, OPTIONS")
+		}
+
+		if r.Method == http.MethodOptions {
+			w.WriteHeader(http.StatusNoContent)
+			return
+		}
+
+		next.ServeHTTP(w, r)
+	})
+}
+
+func withAuth(next http.Handler, cfg Config) http.Handler {
+	bypass := cfg.BypassPaths
+	if bypass == nil {
+		bypass = map[string]struct{}{}
+	}
+
+	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		if _, ok := bypass[r.URL.Path]; ok {
+			next.ServeHTTP(w, r)
+			return
+		}
+
+		if !cfg.Enabled {
+			next.ServeHTTP(w, r)
+			return
+		}
+
+		authz := strings.TrimSpace(r.Header.Get("Authorization"))
+		if validateBearer(authz, cfg.BearerTokens) || validateBasic(authz, cfg.BasicAuth) {
+			next.ServeHTTP(w, r)
+			return
+		}
+
+		w.Header().Set("WWW-Authenticate", `Bearer realm="opencoderouter", Basic realm="opencoderouter"`)
+		writeJSONError(w, http.StatusUnauthorized, "unauthorized", "UNAUTHORIZED", "invalid or missing credentials")
+	})
+}
+
+func validateBearer(authz string, tokens []string) bool {
+	if len(tokens) == 0 {
+		return false
+	}
+	if !strings.HasPrefix(strings.ToLower(authz), "bearer ") {
+		return false
+	}
+	token := strings.TrimSpace(authz[len("Bearer "):])
+	if token == "" {
+		return false
+	}
+	for _, allowed := range tokens {
+		if subtle.ConstantTimeCompare([]byte(token), []byte(allowed)) == 1 {
+			return true
+		}
+	}
+	return false
+}
+
+func validateBasic(authz string, users map[string]string) bool {
+	if len(users) == 0 {
+		return false
+	}
+	if !strings.HasPrefix(strings.ToLower(authz), "basic ") {
+		return false
+	}
+	payload := strings.TrimSpace(authz[len("Basic "):])
+	if payload == "" {
+		return false
+	}
+
+	raw, err := base64.StdEncoding.DecodeString(payload)
+	if err != nil {
+		return false
+	}
+
+	parts := strings.SplitN(string(raw), ":", 2)
+	if len(parts) != 2 {
+		return false
+	}
+
+	user := parts[0]
+	pass := parts[1]
+	stored, ok := users[user]
+	if !ok {
+		return false
+	}
+	return
subtle.ConstantTimeCompare([]byte(pass), []byte(stored)) == 1
+}
+
+func writeJSONError(w http.ResponseWriter, status int, errCode, code, msg string) {
+	w.Header().Set("Content-Type", "application/json")
+	w.WriteHeader(status)
+	_ = json.NewEncoder(w).Encode(map[string]string{
+		"error":   errCode,
+		"code":    code,
+		"message": msg,
+	})
+}
+
+func newRequestID() string {
+	b := make([]byte, 12)
+	if _, err := rand.Read(b); err != nil {
+		return hex.EncodeToString([]byte(time.Now().UTC().Format("20060102150405.000000000")))
+	}
+	return hex.EncodeToString(b)
+}
diff --git a/internal/auth/middleware_test.go b/internal/auth/middleware_test.go
new file mode 100644
index 0000000..6dcffc3
--- /dev/null
+++ b/internal/auth/middleware_test.go
@@ -0,0 +1,151 @@
+package auth
+
+import (
+	"encoding/base64"
+	"encoding/json"
+	"net/http"
+	"net/http/httptest"
+	"testing"
+)
+
+func TestMiddleware_ValidBearerTokenPasses(t *testing.T) {
+	cfg := Defaults()
+	cfg.Enabled = true
+	cfg.BearerTokens = []string{"good-token"}
+
+	called := false
+	h := Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		called = true
+		w.WriteHeader(http.StatusOK)
+	}), cfg)
+
+	req := httptest.NewRequest(http.MethodGet, "/api/resolve?name=test", nil)
+	req.Header.Set("Authorization", "Bearer good-token")
+	w := httptest.NewRecorder()
+	h.ServeHTTP(w, req)
+
+	if !called {
+		t.Fatal("expected next handler to be called")
+	}
+	if w.Code != http.StatusOK {
+		t.Fatalf("expected 200, got %d", w.Code)
+	}
+}
+
+func TestMiddleware_InvalidTokenReturns401JSON(t *testing.T) {
+	cfg := Defaults()
+	cfg.Enabled = true
+	cfg.BearerTokens = []string{"good-token"}
+
+	h := Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		w.WriteHeader(http.StatusOK)
+	}), cfg)
+
+	req := httptest.NewRequest(http.MethodGet, "/api/resolve?name=test", nil)
+	req.Header.Set("Authorization", "Bearer wrong-token")
+	w := httptest.NewRecorder()
+	h.ServeHTTP(w, req)
+
+	if w.Code !=
http.StatusUnauthorized {
+		t.Fatalf("expected 401, got %d", w.Code)
+	}
+	if ct := w.Header().Get("Content-Type"); ct != "application/json" {
+		t.Fatalf("expected JSON content type, got %q", ct)
+	}
+
+	var payload map[string]string
+	if err := json.Unmarshal(w.Body.Bytes(), &payload); err != nil {
+		t.Fatalf("expected json body, got error: %v", err)
+	}
+	if payload["error"] != "unauthorized" || payload["code"] != "UNAUTHORIZED" {
+		t.Fatalf("unexpected error payload: %#v", payload)
+	}
+}
+
+func TestMiddleware_ValidBasicAuthPasses(t *testing.T) {
+	cfg := Defaults()
+	cfg.Enabled = true
+	cfg.BasicAuth = map[string]string{"alice": "secret"}
+
+	called := false
+	h := Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		called = true
+		w.WriteHeader(http.StatusOK)
+	}), cfg)
+
+	basic := base64.StdEncoding.EncodeToString([]byte("alice:secret"))
+	req := httptest.NewRequest(http.MethodGet, "/api/resolve?name=test", nil)
+	req.Header.Set("Authorization", "Basic "+basic)
+	w := httptest.NewRecorder()
+	h.ServeHTTP(w, req)
+
+	if !called {
+		t.Fatal("expected next handler to be called")
+	}
+	if w.Code != http.StatusOK {
+		t.Fatalf("expected 200, got %d", w.Code)
+	}
+}
+
+func TestMiddleware_BypassHealthEndpoints(t *testing.T) {
+	cfg := Defaults()
+	cfg.Enabled = true
+	cfg.BearerTokens = []string{"good-token"}
+
+	called := false
+	h := Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		called = true
+		w.WriteHeader(http.StatusOK)
+	}), cfg)
+
+	req := httptest.NewRequest(http.MethodGet, "/api/health", nil)
+	w := httptest.NewRecorder()
+	h.ServeHTTP(w, req)
+
+	if !called {
+		t.Fatal("expected bypass to call next handler")
+	}
+	if w.Code != http.StatusOK {
+		t.Fatalf("expected 200 for bypass endpoint, got %d", w.Code)
+	}
+}
+
+func TestMiddleware_CORSAllowlist(t *testing.T) {
+	cfg := Defaults()
+	cfg.CORSAllowedOrigins = []string{"https://allowed.example"}
+
+	h := Middleware(http.HandlerFunc(func(w
http.ResponseWriter, r *http.Request) {
+		w.WriteHeader(http.StatusOK)
+	}), cfg)
+
+	req := httptest.NewRequest(http.MethodGet, "/api/resolve?name=test", nil)
+	req.Header.Set("Origin", "https://allowed.example")
+	w := httptest.NewRecorder()
+	h.ServeHTTP(w, req)
+
+	if got := w.Header().Get("Access-Control-Allow-Origin"); got != "https://allowed.example" {
+		t.Fatalf("expected allowed origin echoed, got %q", got)
+	}
+
+	reqBlocked := httptest.NewRequest(http.MethodGet, "/api/resolve?name=test", nil)
+	reqBlocked.Header.Set("Origin", "https://blocked.example")
+	wBlocked := httptest.NewRecorder()
+	h.ServeHTTP(wBlocked, reqBlocked)
+	if got := wBlocked.Header().Get("Access-Control-Allow-Origin"); got != "" {
+		t.Fatalf("expected blocked origin to be omitted, got %q", got)
+	}
+}
+
+func TestMiddleware_SetsRequestIDHeader(t *testing.T) {
+	h := Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		w.WriteHeader(http.StatusOK)
+	}), Defaults())
+
+	req := httptest.NewRequest(http.MethodGet, "/", nil)
+	w := httptest.NewRecorder()
+	h.ServeHTTP(w, req)
+
+	if got := w.Header().Get("X-Request-ID"); got == "" {
+		t.Fatal("expected X-Request-ID response header")
+	}
+}
diff --git a/internal/cache/cache.go b/internal/cache/cache.go
new file mode 100644
index 0000000..c0485a0
--- /dev/null
+++ b/internal/cache/cache.go
@@ -0,0 +1,57 @@
+package cache
+
+import (
+	"log/slog"
+	"os"
+	"path/filepath"
+	"strings"
+)
+
+const (
+	defaultMaxEntriesPerSession = 10000
+	defaultMaxTotalSize         = 100 * 1024 * 1024
+)
+
+func NewJSONLCache(cfg CacheConfig) (ScrollbackCache, error) {
+	normalized := normalizeConfig(cfg)
+	if err := os.MkdirAll(normalized.StoragePath, storageDirPerm); err != nil {
+		return nil, err
+	}
+
+	cache := &JSONLCache{
+		config:      normalized,
+		writers:     make(map[string]*sessionWriter),
+		entryCounts: make(map[string]int),
+		lru:         newSessionLRU(),
+		logger:      slog.Default(),
+	}
+	if err := cache.bootstrapFromDisk(); err != nil {
+		return
nil, err
+	}
+
+	cache.mu.Lock()
+	defer cache.mu.Unlock()
+	if err := cache.evictLocked(); err != nil {
+		return nil, err
+	}
+
+	return cache, nil
+}
+
+func normalizeConfig(cfg CacheConfig) CacheConfig {
+	normalized := cfg
+	if normalized.MaxEntriesPerSession <= 0 {
+		normalized.MaxEntriesPerSession = defaultMaxEntriesPerSession
+	}
+	if normalized.MaxTotalSize <= 0 {
+		normalized.MaxTotalSize = defaultMaxTotalSize
+	}
+	if normalized.EvictionPolicy != EvictionPolicyLRU && normalized.EvictionPolicy != EvictionPolicyFIFO {
+		normalized.EvictionPolicy = EvictionPolicyLRU
+	}
+	normalized.StoragePath = strings.TrimSpace(normalized.StoragePath)
+	if normalized.StoragePath == "" {
+		normalized.StoragePath = filepath.Join(".opencode", "scrollback")
+	}
+	return normalized
+}
diff --git a/internal/cache/jsonl.go b/internal/cache/jsonl.go
new file mode 100644
index 0000000..e4a791d
--- /dev/null
+++ b/internal/cache/jsonl.go
@@ -0,0 +1,449 @@
+package cache
+
+import (
+	"bufio"
+	"bytes"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"io"
+	"log/slog"
+	"os"
+	"path/filepath"
+	"sort"
+	"strings"
+	"sync"
+)
+
+var ErrCacheClosed = errors.New("scrollback cache is closed")
+
+const (
+	jsonlExtension  = ".jsonl"
+	storageDirPerm  = 0o755
+	sessionFilePerm = 0o600
+)
+
+type sessionWriter struct {
+	file   *os.File
+	writer *bufio.Writer
+}
+
+type JSONLCache struct {
+	config      CacheConfig
+	mu          sync.Mutex
+	closed      bool
+	writers     map[string]*sessionWriter
+	entryCounts map[string]int
+	lru         *sessionLRU
+	logger      *slog.Logger
+}
+
+// JSONL schema contract:
+// - File path layout: {storagePath}/{sessionID}.jsonl
+// - Each line is one JSON object encoded from Entry.
+// - Entry.Content ([]byte) is serialized by encoding/json as base64 text.
+// - Lines are append-only and chronological for stable replay/hydration.
+func (c *JSONLCache) Append(sessionID string, entry Entry) error {
+	c.mu.Lock()
+	defer c.mu.Unlock()
+
+	if err := c.validateOpenLocked(sessionID); err != nil {
+		return err
+	}
+
+	count, err := c.sessionCountLocked(sessionID)
+	if err != nil {
+		return err
+	}
+
+	line, err := encodeEntryLine(entry)
+	if err != nil {
+		return err
+	}
+
+	writer, err := c.writerLocked(sessionID)
+	if err != nil {
+		return err
+	}
+	if _, err := writer.writer.Write(line); err != nil {
+		_ = c.closeWriterLocked(sessionID)
+		return err
+	}
+	if err := writer.writer.Flush(); err != nil {
+		_ = c.closeWriterLocked(sessionID)
+		return err
+	}
+
+	c.lru.AddSize(sessionID, int64(len(line)))
+	c.markAccessLocked(sessionID)
+	c.entryCounts[sessionID] = count + 1
+
+	if c.config.MaxEntriesPerSession > 0 && c.entryCounts[sessionID] > c.config.MaxEntriesPerSession {
+		if err := c.trimSessionLocked(sessionID, c.config.MaxEntriesPerSession); err != nil {
+			return err
+		}
+	}
+
+	return c.evictLocked()
+}
+
+func (c *JSONLCache) Get(sessionID string, offset, limit int) ([]Entry, error) {
+	c.mu.Lock()
+	defer c.mu.Unlock()
+
+	if err := c.validateOpenLocked(sessionID); err != nil {
+		return nil, err
+	}
+
+	entries, err := c.readEntriesLocked(sessionID)
+	if err != nil {
+		return nil, err
+	}
+	c.markAccessLocked(sessionID)
+
+	if offset < 0 {
+		offset = 0
+	}
+	if offset >= len(entries) {
+		return []Entry{}, nil
+	}
+
+	end := len(entries)
+	if limit > 0 && offset+limit < end {
+		end = offset + limit
+	}
+
+	out := make([]Entry, end-offset)
+	copy(out, entries[offset:end])
+	return out, nil
+}
+
+func (c *JSONLCache) Trim(sessionID string, maxEntries int) error {
+	c.mu.Lock()
+	defer c.mu.Unlock()
+
+	if err := c.validateOpenLocked(sessionID); err != nil {
+		return err
+	}
+	return c.trimSessionLocked(sessionID, maxEntries)
+}
+
+func (c *JSONLCache) Clear(sessionID string) error {
+	c.mu.Lock()
+	defer c.mu.Unlock()
+
+	if err := c.validateOpenLocked(sessionID); err != nil {
+		return err
+
}
+	return c.removeSessionLocked(sessionID)
+}
+
+func (c *JSONLCache) Close() error {
+	c.mu.Lock()
+	defer c.mu.Unlock()
+
+	if c.closed {
+		return nil
+	}
+
+	var closeErr error
+	for sessionID := range c.writers {
+		closeErr = errors.Join(closeErr, c.closeWriterLocked(sessionID))
+	}
+	c.closed = true
+	return closeErr
+}
+
+func (c *JSONLCache) bootstrapFromDisk() error {
+	entries, err := os.ReadDir(c.config.StoragePath)
+	if err != nil {
+		if errors.Is(err, os.ErrNotExist) {
+			return nil
+		}
+		return err
+	}
+
+	sessionIDs := make([]string, 0, len(entries))
+	sizes := make(map[string]int64, len(entries))
+	for _, entry := range entries {
+		if entry.IsDir() {
+			continue
+		}
+		name := entry.Name()
+		if !strings.HasSuffix(name, jsonlExtension) {
+			continue
+		}
+
+		sessionID := strings.TrimSuffix(name, jsonlExtension)
+		if strings.TrimSpace(sessionID) == "" {
+			continue
+		}
+
+		info, infoErr := entry.Info()
+		if infoErr != nil {
+			return infoErr
+		}
+		sizes[sessionID] = info.Size()
+		sessionIDs = append(sessionIDs, sessionID)
+	}
+
+	sort.Strings(sessionIDs)
+	for _, sessionID := range sessionIDs {
+		c.lru.SetSize(sessionID, sizes[sessionID])
+		c.lru.Ensure(sessionID)
+	}
+
+	return nil
+}
+
+func (c *JSONLCache) validateOpenLocked(sessionID string) error {
+	if c.closed {
+		return ErrCacheClosed
+	}
+	if strings.TrimSpace(sessionID) == "" {
+		return fmt.Errorf("sessionID is required")
+	}
+	return nil
+}
+
+func (c *JSONLCache) sessionPath(sessionID string) string {
+	return filepath.Join(c.config.StoragePath, sessionID+jsonlExtension)
+}
+
+func (c *JSONLCache) writerLocked(sessionID string) (*sessionWriter, error) {
+	if writer, ok := c.writers[sessionID]; ok {
+		return writer, nil
+	}
+
+	if err := os.MkdirAll(c.config.StoragePath, storageDirPerm); err != nil {
+		return nil, err
+	}
+
+	file, err := os.OpenFile(c.sessionPath(sessionID), os.O_CREATE|os.O_APPEND|os.O_WRONLY, sessionFilePerm)
+	if err != nil {
+		return nil, err
+	}
+
+	writer := &sessionWriter{
+ file: file, + writer: bufio.NewWriter(file), + } + c.writers[sessionID] = writer + return writer, nil +} + +func (c *JSONLCache) closeWriterLocked(sessionID string) error { + writer, ok := c.writers[sessionID] + if !ok { + return nil + } + delete(c.writers, sessionID) + + flushErr := writer.writer.Flush() + closeErr := writer.file.Close() + return errors.Join(flushErr, closeErr) +} + +func (c *JSONLCache) sessionCountLocked(sessionID string) (int, error) { + if count, ok := c.entryCounts[sessionID]; ok { + return count, nil + } + + entries, err := c.readEntriesLocked(sessionID) + if err != nil { + return 0, err + } + count := len(entries) + c.entryCounts[sessionID] = count + return count, nil +} + +func (c *JSONLCache) markAccessLocked(sessionID string) { + if c.config.EvictionPolicy == EvictionPolicyFIFO { + c.lru.Ensure(sessionID) + return + } + c.lru.Touch(sessionID) +} + +func (c *JSONLCache) readEntriesLocked(sessionID string) ([]Entry, error) { + path := c.sessionPath(sessionID) + entries, err := c.decodeJSONLFile(path, sessionID) + if err != nil { + if errors.Is(err, os.ErrNotExist) { + c.entryCounts[sessionID] = 0 + c.lru.SetSize(sessionID, 0) + return []Entry{}, nil + } + return nil, err + } + + size, err := fileSize(path) + if err != nil { + return nil, err + } + c.lru.SetSize(sessionID, size) + c.entryCounts[sessionID] = len(entries) + return entries, nil +} + +func (c *JSONLCache) decodeJSONLFile(path, sessionID string) ([]Entry, error) { + file, err := os.Open(path) + if err != nil { + return nil, err + } + defer file.Close() + + entries := make([]Entry, 0, 128) + reader := bufio.NewReader(file) + lineNo := 0 + + for { + line, readErr := reader.ReadBytes('\n') + if len(line) > 0 { + lineNo++ + trimmed := bytes.TrimRight(line, "\r\n") + if len(trimmed) > 0 { + var entry Entry + if err := json.Unmarshal(trimmed, &entry); err != nil { + c.logger.Warn("cache skipping malformed JSONL line", "session_id", sessionID, "line", lineNo, "error", err) + } else { 
+ entries = append(entries, entry) + } + } + } + + if errors.Is(readErr, io.EOF) { + break + } + if readErr != nil { + return nil, readErr + } + } + + return entries, nil +} + +func (c *JSONLCache) trimSessionLocked(sessionID string, maxEntries int) error { + if maxEntries <= 0 { + return c.removeSessionLocked(sessionID) + } + + entries, err := c.readEntriesLocked(sessionID) + if err != nil { + return err + } + if len(entries) <= maxEntries { + c.markAccessLocked(sessionID) + return c.evictLocked() + } + + trimmed := entries[len(entries)-maxEntries:] + if err := c.rewriteSessionLocked(sessionID, trimmed); err != nil { + return err + } + + c.entryCounts[sessionID] = len(trimmed) + c.markAccessLocked(sessionID) + return c.evictLocked() +} + +func (c *JSONLCache) rewriteSessionLocked(sessionID string, entries []Entry) error { + if err := c.closeWriterLocked(sessionID); err != nil { + return err + } + + if err := os.MkdirAll(c.config.StoragePath, storageDirPerm); err != nil { + return err + } + + path := c.sessionPath(sessionID) + tmpPath := path + ".tmp" + + file, err := os.OpenFile(tmpPath, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, sessionFilePerm) + if err != nil { + return err + } + + writer := bufio.NewWriter(file) + var totalBytes int64 + writeErr := func() error { + for _, entry := range entries { + line, lineErr := encodeEntryLine(entry) + if lineErr != nil { + return lineErr + } + written, lineErr := writer.Write(line) + if lineErr != nil { + return lineErr + } + totalBytes += int64(written) + } + return writer.Flush() + }() + closeErr := file.Close() + if writeErr != nil || closeErr != nil { + if removeErr := os.Remove(tmpPath); removeErr != nil && !errors.Is(removeErr, os.ErrNotExist) { + c.logger.Debug("failed to remove temporary cache file", "path", tmpPath, "error", removeErr) + return errors.Join(writeErr, closeErr, removeErr) + } + return errors.Join(writeErr, closeErr) + } + + if err := os.Rename(tmpPath, path); err != nil { + if removeErr := 
os.Remove(tmpPath); removeErr != nil && !errors.Is(removeErr, os.ErrNotExist) { + c.logger.Debug("failed to remove temporary cache file after rename error", "path", tmpPath, "error", removeErr) + return errors.Join(err, removeErr) + } + return err + } + + c.lru.SetSize(sessionID, totalBytes) + return nil +} + +func (c *JSONLCache) removeSessionLocked(sessionID string) error { + closeErr := c.closeWriterLocked(sessionID) + removeErr := os.Remove(c.sessionPath(sessionID)) + if removeErr != nil && !errors.Is(removeErr, os.ErrNotExist) { + return errors.Join(closeErr, removeErr) + } + + delete(c.entryCounts, sessionID) + c.lru.Remove(sessionID) + return closeErr +} + +func (c *JSONLCache) evictLocked() error { + if c.config.MaxTotalSize <= 0 { + return nil + } + + for c.lru.TotalSize() > c.config.MaxTotalSize { + sessionID, ok := c.lru.Oldest() + if !ok { + break + } + if err := c.removeSessionLocked(sessionID); err != nil { + return err + } + } + + return nil +} + +func encodeEntryLine(entry Entry) ([]byte, error) { + encoded, err := json.Marshal(entry) + if err != nil { + return nil, err + } + return append(encoded, '\n'), nil +} + +func fileSize(path string) (int64, error) { + info, err := os.Stat(path) + if err != nil { + return 0, err + } + return info.Size(), nil +} diff --git a/internal/cache/jsonl_test.go b/internal/cache/jsonl_test.go new file mode 100644 index 0000000..f75b676 --- /dev/null +++ b/internal/cache/jsonl_test.go @@ -0,0 +1,251 @@ +package cache + +import ( + "encoding/json" + "fmt" + "os" + "path/filepath" + "sync" + "testing" + "time" +) + +func TestJSONLCacheAppendGetRoundTripAndMalformedLine(t *testing.T) { + cache := newTestCache(t, CacheConfig{StoragePath: t.TempDir(), MaxEntriesPerSession: 1000, MaxTotalSize: 64 * 1024 * 1024}) + + sessionID := "roundtrip" + base := time.Unix(1_700_000_000, 0).UTC() + entries := []Entry{ + {Timestamp: base, Type: EntryTypeAgentMessage, Content: []byte("alpha"), Metadata: map[string]any{"idx": 1}}, + 
{Timestamp: base.Add(time.Second), Type: EntryTypeToolCall, Content: []byte("beta"), Metadata: map[string]any{"idx": 2}}, + {Timestamp: base.Add(2 * time.Second), Type: EntryTypeTerminalOutput, Content: []byte("gamma"), Metadata: map[string]any{"idx": 3}}, + } + + for _, entry := range entries { + if err := cache.Append(sessionID, entry); err != nil { + t.Fatalf("append failed: %v", err) + } + } + + file, err := os.OpenFile(filepath.Join(cache.config.StoragePath, sessionID+jsonlExtension), os.O_APPEND|os.O_WRONLY, sessionFilePerm) + if err != nil { + t.Fatalf("open cache file failed: %v", err) + } + if _, err := file.WriteString("{this-is-not-json}\n"); err != nil { + _ = file.Close() + t.Fatalf("write malformed line failed: %v", err) + } + if err := file.Close(); err != nil { + t.Fatalf("close malformed writer failed: %v", err) + } + + all, err := cache.Get(sessionID, 0, 0) + if err != nil { + t.Fatalf("get all failed: %v", err) + } + if len(all) != len(entries) { + t.Fatalf("unexpected entries length: got %d want %d", len(all), len(entries)) + } + for i := range entries { + if !all[i].Timestamp.Equal(entries[i].Timestamp) || all[i].Type != entries[i].Type || string(all[i].Content) != string(entries[i].Content) { + t.Fatalf("entry mismatch at index %d: got %+v want %+v", i, all[i], entries[i]) + } + if fmt.Sprint(all[i].Metadata["idx"]) != fmt.Sprint(entries[i].Metadata["idx"]) { + t.Fatalf("metadata mismatch at index %d: got=%v want=%v", i, all[i].Metadata["idx"], entries[i].Metadata["idx"]) + } + } + + window, err := cache.Get(sessionID, 1, 1) + if err != nil { + t.Fatalf("paged get failed: %v", err) + } + if len(window) != 1 || window[0].Type != entries[1].Type || string(window[0].Content) != string(entries[1].Content) { + t.Fatalf("unexpected paged result: %+v", window) + } +} + +func TestJSONLCacheTrimAndClear(t *testing.T) { + cache := newTestCache(t, CacheConfig{StoragePath: t.TempDir(), MaxEntriesPerSession: 1000, MaxTotalSize: 64 * 1024 * 1024}) + + 
sessionID := "trim-clear" + base := time.Unix(1_700_000_000, 0).UTC() + for i := 0; i < 5; i++ { + entry := Entry{Timestamp: base.Add(time.Duration(i) * time.Second), Type: EntryTypeSystemEvent, Content: []byte(fmt.Sprintf("line-%d", i))} + if err := cache.Append(sessionID, entry); err != nil { + t.Fatalf("append %d failed: %v", i, err) + } + } + + if err := cache.Trim(sessionID, 2); err != nil { + t.Fatalf("trim failed: %v", err) + } + + trimmed, err := cache.Get(sessionID, 0, 0) + if err != nil { + t.Fatalf("get after trim failed: %v", err) + } + if len(trimmed) != 2 { + t.Fatalf("unexpected trim size: got %d want 2", len(trimmed)) + } + if string(trimmed[0].Content) != "line-3" || string(trimmed[1].Content) != "line-4" { + t.Fatalf("trim kept unexpected entries: %+v", trimmed) + } + + if err := cache.Clear(sessionID); err != nil { + t.Fatalf("clear failed: %v", err) + } + + entries, err := cache.Get(sessionID, 0, 0) + if err != nil { + t.Fatalf("get after clear failed: %v", err) + } + if len(entries) != 0 { + t.Fatalf("expected empty result after clear, got %d entries", len(entries)) + } + if _, err := os.Stat(filepath.Join(cache.config.StoragePath, sessionID+jsonlExtension)); !os.IsNotExist(err) { + t.Fatalf("expected cache file to be removed, stat err=%v", err) + } +} + +func TestJSONLCacheLRUEviction(t *testing.T) { + entry := Entry{Timestamp: time.Unix(1_700_000_000, 0).UTC(), Type: EntryTypeTerminalOutput, Content: []byte("payload")} + encoded, err := json.Marshal(entry) + if err != nil { + t.Fatalf("marshal test entry failed: %v", err) + } + lineSize := int64(len(encoded) + 1) + + cache := newTestCache(t, CacheConfig{ + StoragePath: t.TempDir(), + MaxEntriesPerSession: 1000, + MaxTotalSize: (lineSize * 2) + 5, + EvictionPolicy: EvictionPolicyLRU, + }) + + if err := cache.Append("s1", entry); err != nil { + t.Fatalf("append s1 failed: %v", err) + } + if err := cache.Append("s2", entry); err != nil { + t.Fatalf("append s2 failed: %v", err) + } + if _, err := 
cache.Get("s1", 0, 1); err != nil { + t.Fatalf("touch s1 failed: %v", err) + } + if err := cache.Append("s3", entry); err != nil { + t.Fatalf("append s3 failed: %v", err) + } + + if _, err := os.Stat(filepath.Join(cache.config.StoragePath, "s2"+jsonlExtension)); !os.IsNotExist(err) { + t.Fatalf("expected s2 to be evicted, stat err=%v", err) + } + if _, err := os.Stat(filepath.Join(cache.config.StoragePath, "s1"+jsonlExtension)); err != nil { + t.Fatalf("expected s1 to be kept: %v", err) + } + if _, err := os.Stat(filepath.Join(cache.config.StoragePath, "s3"+jsonlExtension)); err != nil { + t.Fatalf("expected s3 to be kept: %v", err) + } + + total := int64(0) + for _, sessionID := range []string{"s1", "s3"} { + info, err := os.Stat(filepath.Join(cache.config.StoragePath, sessionID+jsonlExtension)) + if err != nil { + t.Fatalf("stat %s failed: %v", sessionID, err) + } + total += info.Size() + } + if total > cache.config.MaxTotalSize { + t.Fatalf("total cache size exceeds limit: total=%d limit=%d", total, cache.config.MaxTotalSize) + } +} + +func TestJSONLCacheConcurrentAppend(t *testing.T) { + cache := newTestCache(t, CacheConfig{StoragePath: t.TempDir(), MaxEntriesPerSession: 20000, MaxTotalSize: 64 * 1024 * 1024}) + + const goroutines = 16 + const perWorker = 250 + + var wg sync.WaitGroup + for g := 0; g < goroutines; g++ { + g := g + wg.Add(1) + go func() { + defer wg.Done() + for i := 0; i < perWorker; i++ { + entry := Entry{ + Timestamp: time.Unix(1_700_000_000+int64(i), 0).UTC(), + Type: EntryTypeTerminalOutput, + Content: []byte(fmt.Sprintf("g%d-%d", g, i)), + } + if err := cache.Append("concurrent", entry); err != nil { + t.Errorf("append failed: %v", err) + return + } + } + }() + } + wg.Wait() + + entries, err := cache.Get("concurrent", 0, 0) + if err != nil { + t.Fatalf("get failed: %v", err) + } + expected := goroutines * perWorker + if len(entries) != expected { + t.Fatalf("unexpected entry count: got %d want %d", len(entries), expected) + } +} + +func 
TestJSONLCacheRoundTripPerformanceSmoke(t *testing.T) { + if testing.Short() { + t.Skip("skipping performance smoke test in short mode") + } + + cache := newTestCache(t, CacheConfig{StoragePath: t.TempDir(), MaxEntriesPerSession: 11000, MaxTotalSize: 128 * 1024 * 1024}) + + const count = 10000 + start := time.Now() + for i := 0; i < count; i++ { + entry := Entry{ + Timestamp: time.Unix(1_700_000_000+int64(i), 0).UTC(), + Type: EntryTypeAgentMessage, + Content: []byte("perf"), + } + if err := cache.Append("perf", entry); err != nil { + t.Fatalf("append failed at %d: %v", i, err) + } + } + entries, err := cache.Get("perf", 0, count) + if err != nil { + t.Fatalf("get failed: %v", err) + } + if len(entries) != count { + t.Fatalf("unexpected perf entry count: got %d want %d", len(entries), count) + } + + duration := time.Since(start) + if duration > 8*time.Second { + t.Fatalf("round-trip too slow: %s", duration) + } +} + +func newTestCache(t *testing.T, cfg CacheConfig) *JSONLCache { + t.Helper() + + instance, err := NewJSONLCache(cfg) + if err != nil { + t.Fatalf("create cache failed: %v", err) + } + + cache, ok := instance.(*JSONLCache) + if !ok { + t.Fatalf("unexpected cache type: %T", instance) + } + + t.Cleanup(func() { + if err := cache.Close(); err != nil { + t.Fatalf("close cache failed: %v", err) + } + }) + + return cache +} diff --git a/internal/cache/lru.go b/internal/cache/lru.go new file mode 100644 index 0000000..804e1b4 --- /dev/null +++ b/internal/cache/lru.go @@ -0,0 +1,104 @@ +package cache + +import ( + "container/list" + "sync" +) + +type sessionLRU struct { + mu sync.Mutex + order *list.List + nodes map[string]*list.Element + fileSize map[string]int64 + total int64 +} + +func newSessionLRU() *sessionLRU { + return &sessionLRU{ + order: list.New(), + nodes: make(map[string]*list.Element), + fileSize: make(map[string]int64), + } +} + +func (l *sessionLRU) Ensure(sessionID string) { + l.mu.Lock() + defer l.mu.Unlock() + l.ensureLocked(sessionID) +} + 
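The `sessionLRU` type being introduced here pairs a `container/list` with a key-to-element index and a per-key size map, so eviction can pop the coldest session from the front while a touch moves a session to the back. A minimal standalone sketch of that same recency-plus-size-budget technique follows; the names (`lru`, `touch`, `evict`) are illustrative only, not the repository's API:

```go
package main

import (
	"container/list"
	"fmt"
)

// lru keeps keys in access order: front = coldest, back = most
// recently used. A map from key to *list.Element makes touch O(1).
type lru struct {
	order *list.List
	nodes map[string]*list.Element
	sizes map[string]int64
	total int64
}

func newLRU() *lru {
	return &lru{
		order: list.New(),
		nodes: make(map[string]*list.Element),
		sizes: make(map[string]int64),
	}
}

// touch marks key as most recently used, inserting it if unknown.
func (l *lru) touch(key string) {
	if e, ok := l.nodes[key]; ok {
		l.order.MoveToBack(e)
		return
	}
	l.nodes[key] = l.order.PushBack(key)
}

// setSize records the on-disk size for key and adjusts the running total.
func (l *lru) setSize(key string, size int64) {
	l.total += size - l.sizes[key]
	l.sizes[key] = size
}

// evict removes coldest keys until total fits within limit and
// returns the evicted keys in eviction order.
func (l *lru) evict(limit int64) []string {
	var evicted []string
	for l.total > limit {
		front := l.order.Front()
		if front == nil {
			break
		}
		key := front.Value.(string)
		l.order.Remove(front)
		delete(l.nodes, key)
		l.total -= l.sizes[key]
		delete(l.sizes, key)
		evicted = append(evicted, key)
	}
	return evicted
}

func main() {
	l := newLRU()
	for _, k := range []string{"s1", "s2", "s3"} {
		l.touch(k)
		l.setSize(k, 10)
	}
	l.touch("s1")            // s1 becomes hottest; s2 is now coldest
	fmt.Println(l.evict(20)) // prints [s2]
}
```

Touching on reads (as `markAccessLocked` does under `EvictionPolicyLRU`) is what distinguishes this from FIFO, where `Ensure` records insertion order but never reorders on access.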
+func (l *sessionLRU) Touch(sessionID string) { + l.mu.Lock() + defer l.mu.Unlock() + elem := l.ensureLocked(sessionID) + l.order.MoveToBack(elem) +} + +func (l *sessionLRU) SetSize(sessionID string, size int64) { + l.mu.Lock() + defer l.mu.Unlock() + + if size < 0 { + size = 0 + } + old := l.fileSize[sessionID] + l.fileSize[sessionID] = size + l.total += size - old +} + +func (l *sessionLRU) AddSize(sessionID string, delta int64) { + l.mu.Lock() + defer l.mu.Unlock() + + current := l.fileSize[sessionID] + next := current + delta + if next < 0 { + next = 0 + } + l.fileSize[sessionID] = next + l.total += next - current +} + +func (l *sessionLRU) Remove(sessionID string) { + l.mu.Lock() + defer l.mu.Unlock() + + if elem, ok := l.nodes[sessionID]; ok { + l.order.Remove(elem) + delete(l.nodes, sessionID) + } + if size, ok := l.fileSize[sessionID]; ok { + l.total -= size + delete(l.fileSize, sessionID) + } +} + +func (l *sessionLRU) Oldest() (string, bool) { + l.mu.Lock() + defer l.mu.Unlock() + + front := l.order.Front() + if front == nil { + return "", false + } + key, ok := front.Value.(string) + if !ok || key == "" { + return "", false + } + return key, true +} + +func (l *sessionLRU) TotalSize() int64 { + l.mu.Lock() + defer l.mu.Unlock() + return l.total +} + +func (l *sessionLRU) ensureLocked(sessionID string) *list.Element { + if elem, ok := l.nodes[sessionID]; ok { + return elem + } + elem := l.order.PushBack(sessionID) + l.nodes[sessionID] = elem + return elem +} diff --git a/internal/cache/lru_test.go b/internal/cache/lru_test.go new file mode 100644 index 0000000..c7be067 --- /dev/null +++ b/internal/cache/lru_test.go @@ -0,0 +1,34 @@ +package cache + +import "testing" + +func TestSessionLRUOrderAndSize(t *testing.T) { + lru := newSessionLRU() + lru.Ensure("a") + lru.Ensure("b") + lru.Ensure("c") + + if oldest, ok := lru.Oldest(); !ok || oldest != "a" { + t.Fatalf("unexpected oldest before touch: %q, ok=%v", oldest, ok) + } + + lru.Touch("a") + if oldest, ok 
:= lru.Oldest(); !ok || oldest != "b" { + t.Fatalf("unexpected oldest after touch: %q, ok=%v", oldest, ok) + } + + lru.SetSize("a", 10) + lru.AddSize("b", 20) + lru.SetSize("c", 30) + if total := lru.TotalSize(); total != 60 { + t.Fatalf("unexpected total size: got %d want 60", total) + } + + lru.Remove("b") + if oldest, ok := lru.Oldest(); !ok || oldest != "c" { + t.Fatalf("unexpected oldest after remove: %q, ok=%v", oldest, ok) + } + if total := lru.TotalSize(); total != 40 { + t.Fatalf("unexpected total size after remove: got %d want 40", total) + } +} diff --git a/internal/cache/types.go b/internal/cache/types.go new file mode 100644 index 0000000..5b5fe81 --- /dev/null +++ b/internal/cache/types.go @@ -0,0 +1,47 @@ +package cache + +import "time" + +// EntryType identifies the kind of scrollback record. +type EntryType string + +const ( + EntryTypeAgentMessage EntryType = "agent_message" + EntryTypeToolCall EntryType = "tool_call" + EntryTypeTerminalOutput EntryType = "terminal_output" + EntryTypeFileDiff EntryType = "file_diff" + EntryTypeSystemEvent EntryType = "system_event" +) + +// Entry is a single scrollback item associated with a session. +type Entry struct { + Timestamp time.Time `json:"timestamp"` + Type EntryType `json:"type"` + Content []byte `json:"content"` + Metadata map[string]any `json:"metadata,omitempty"` +} + +// EvictionPolicy controls how entries are evicted when limits are reached. +type EvictionPolicy string + +const ( + EvictionPolicyLRU EvictionPolicy = "LRU" + EvictionPolicyFIFO EvictionPolicy = "FIFO" +) + +// CacheConfig configures scrollback cache limits and local storage location. +type CacheConfig struct { + MaxEntriesPerSession int + MaxTotalSize int64 + EvictionPolicy EvictionPolicy + StoragePath string +} + +// ScrollbackCache defines contract-first persistence APIs for session scrollback. 
+type ScrollbackCache interface { + Append(sessionID string, entry Entry) error + Get(sessionID string, offset, limit int) ([]Entry, error) + Trim(sessionID string, maxEntries int) error + Clear(sessionID string) error + Close() error +} diff --git a/internal/daemon/API_FINDINGS.md b/internal/daemon/API_FINDINGS.md new file mode 100644 index 0000000..75464e7 --- /dev/null +++ b/internal/daemon/API_FINDINGS.md @@ -0,0 +1,82 @@ +# OpenCode Daemon API Findings (Spike) + +Date: 2026-03-05 +Task: 5 — Validate OpenCode daemon API assumptions (spike) +Daemon binary: `opencode 1.2.17` + +## Scope + +This is an integration **spike** only. It validates endpoint assumptions for Task 14 planning and records deltas between assumed contracts and observed daemon behavior. + +## Assumption Matrix + +| # | Assumption | Status | Evidence | +|---|---|---|---| +| 1 | `GET /doc` serves parseable OpenAPI spec with required routes | **CONFIRMED** | `TestSpikeDocEndpointOpenAPI` passed. Log: `openapi=3.1.1 path_count=85`; required routes present (`/event`, `/project/current`, `/session`, `/session/{sessionID}`, `/session/{sessionID}/message`). | +| 2 | `POST /session` response shape matches future `SessionHandle` (`id`, `daemon port`, `workspace path`, `status`, `created`, `last activity`, `attached clients`) | **DENIED** | `TestSpikeCreateSessionShape` log: `map[attached_clients:false created_at:true daemon_port:false id:true last_activity:false status:false workspace_path:true]`. Response has `id`, `directory`, `time`; missing `daemon_port`, `status`, `last_activity`, `attached_clients`. | +| 3 | `GET /session/{id}/messages` lists messages and supports SSE streaming | **DENIED** | `TestSpikeSessionMessagesEndpoints` log: `/messages` returned `text/html;charset=UTF-8` (web shell), not message list/SSE. `GET /session/{id}/message` (singular) returned JSON list (`0 entries` in fresh session). 
| +| 4 | `POST /session/{id}/message` itself streams token SSE response | **DENIED** | `TestSpikePostMessageAndTokenEvents`: endpoint returned `Content-Type: application/json` (single JSON response), while token deltas were observed on `/event` (`message.part.delta`). | +| 5 | `GET /event` emits SSE events for session state changes | **CONFIRMED** | `TestSpikeEventEndpointReceivesSessionUpdates` patched session title and received matching `session.updated` event on `/event`. | +| 6 | Two clients can attach simultaneously and both receive same session SSE updates | **CONFIRMED** | `TestSpikeMultiClientEventStreams` opened two `/event` streams; both observed `session.updated` for same `sessionID`. | +| 7 | `GET /session/{id}` includes file list, working directory, and agent info | **DENIED** | `TestSpikeSessionDetailFields` keys: `[directory id projectID slug time title version]`; booleans: `working_directory=true files=false agent=false`. | + +## Additional Endpoint Deltas (important for Task 14) + +1. **Singular vs plural API paths differ from assumptions** + - API routes are singular (`/session`, `/session/{id}/message`) + - `/sessions` and `/session/{id}/messages` resolved to web UI HTML in observed environment. + +2. **Token streaming source** + - Streaming token deltas (`message.part.delta`) are seen on global SSE endpoint `/event`, not as SSE response body from `POST /session/{id}/message`. + +3. **OpenAPI location** + - `/doc` returns JSON OpenAPI. + - `/openapi.json` returned web shell HTML in this environment. + +4. **Scanner contract mismatch risk** + - Existing scanner expects `/project/current` shape with `name` and `path`. + - Observed payload uses fields such as `worktree` and `sandboxes`; this can cause fallback registration paths unless scanner parsing is updated. + +## Verification Commands (exact) + +```bash +go test -count=1 -v ./internal/daemon/... +go build ./internal/daemon/... 
+PATH="/usr/local/go/bin:/usr/bin:/bin" go test -count=1 -run TestSpikeDocEndpointOpenAPI -v ./internal/daemon/... +``` + +## Test Invocation Output + +```text +=== RUN TestSpikeDocEndpointOpenAPI + spike_test.go:298: /doc openapi=3.1.1 path_count=85 +--- PASS: TestSpikeDocEndpointOpenAPI (5.10s) +=== RUN TestSpikeCreateSessionShape + spike_test.go:325: create-session field coverage=map[attached_clients:false created_at:true daemon_port:false id:true last_activity:false status:false workspace_path:true] +--- PASS: TestSpikeCreateSessionShape (2.65s) +=== RUN TestSpikeSessionMessagesEndpoints + spike_test.go:347: plural messages endpoint status=200 content-type="text/html;charset=UTF-8" body-prefix="..." + spike_test.go:370: singular message endpoint returned 0 entries +--- PASS: TestSpikeSessionMessagesEndpoints (2.82s) +=== RUN TestSpikePostMessageAndTokenEvents + spike_test.go:433: observed message.part.delta events for session=ses_33fe55437ffezbvVy6J33bsFWv (total_event_data_lines=65) +--- PASS: TestSpikePostMessageAndTokenEvents (15.30s) +=== RUN TestSpikeEventEndpointReceivesSessionUpdates + spike_test.go:469: event stream delivered session.updated for session=ses_33fe5185fffes3dFUYysFCoHW0 +--- PASS: TestSpikeEventEndpointReceivesSessionUpdates (3.17s) +=== RUN TestSpikeMultiClientEventStreams + spike_test.go:521: both event clients observed session.updated for session=ses_33fe50bf0ffeGySWMVQ3OSAKoz +--- PASS: TestSpikeMultiClientEventStreams (3.59s) +=== RUN TestSpikeSessionDetailFields + spike_test.go:563: session detail keys=[directory id projectID slug time title version] + spike_test.go:564: session detail capability working_directory=true files=false agent=false +--- PASS: TestSpikeSessionDetailFields (2.62s) +PASS +ok opencoderouter/internal/daemon 35.444s + +=== RUN TestSpikeDocEndpointOpenAPI + spike_test.go:253: spike skipped: opencode binary not available in PATH: exec: "opencode": executable file not found in $PATH +--- SKIP: 
TestSpikeDocEndpointOpenAPI (0.00s) +PASS +ok opencoderouter/internal/daemon 0.306s +``` diff --git a/internal/daemon/client.go b/internal/daemon/client.go new file mode 100644 index 0000000..2006e56 --- /dev/null +++ b/internal/daemon/client.go @@ -0,0 +1,1645 @@ +package daemon + +import ( + "bufio" + "bytes" + "context" + "encoding/base64" + "encoding/json" + "errors" + "fmt" + "io" + "math" + "net" + "net/http" + "net/url" + "strconv" + "strings" + "time" +) + +const ( + defaultClientTimeout = 15 * time.Second + defaultRetryBackoff = 150 * time.Millisecond + defaultStreamBuffer = 64 + defaultStreamIdleTimeout = 2 * time.Second + defaultScannerInitialSize = 64 * 1024 + defaultScannerMaxSize = 1024 * 1024 +) + +type Client struct { + baseURL string + config ClientConfig + httpClient *http.Client +} + +type DaemonClient = Client + +type endpointCandidate struct { + Path string + Query url.Values +} + +type httpResult struct { + StatusCode int + Header http.Header + Body []byte +} + +type sseFrame struct { + ID string + Event string + Data string +} + +type postResult struct { + payload map[string]interface{} + err error +} + +func NewClient(baseURL string, cfg ClientConfig) (*Client, error) { + baseURL = strings.TrimSpace(baseURL) + if baseURL == "" { + return nil, errors.New("base URL is required") + } + baseURL = strings.TrimRight(baseURL, "/") + + if cfg.Timeout <= 0 { + cfg.Timeout = defaultClientTimeout + } + if cfg.MaxRetries < 0 { + cfg.MaxRetries = 0 + } + if cfg.RetryBackoff <= 0 { + cfg.RetryBackoff = defaultRetryBackoff + } + if cfg.StreamBuffer <= 0 { + cfg.StreamBuffer = defaultStreamBuffer + } + if cfg.StreamIdleTimeout <= 0 { + cfg.StreamIdleTimeout = defaultStreamIdleTimeout + } + + httpClient := cfg.HTTPClient + if httpClient == nil { + httpClient = &http.Client{} + } else { + cloned := *httpClient + httpClient = &cloned + } + httpClient.Timeout = cfg.Timeout + + return &Client{ + baseURL: baseURL, + config: cfg, + httpClient: httpClient, + }, nil 
+} + +func NewDaemonClient(baseURL string, cfg ClientConfig) (*DaemonClient, error) { + return NewClient(baseURL, cfg) +} + +func (c *Client) ListSessions(ctx context.Context) ([]DaemonSession, error) { + candidates := []endpointCandidate{ + {Path: "/session"}, + {Path: "/sessions"}, + } + + payload, endpoint, err := c.getJSONFromCandidates(ctx, candidates) + if err != nil { + return nil, fmt.Errorf("list sessions failed: %w", err) + } + + sessions, ok := parseSessionListPayload(payload) + if !ok { + return nil, fmt.Errorf("list sessions failed: unsupported payload from %s", endpoint) + } + + return sessions, nil +} + +func (c *Client) GetSession(ctx context.Context, sessionID string) (DaemonSession, error) { + sessionID = strings.TrimSpace(sessionID) + if sessionID == "" { + return DaemonSession{}, errors.New("session ID is required") + } + + id := url.PathEscape(sessionID) + candidates := []endpointCandidate{ + {Path: "/session/" + id}, + {Path: "/sessions/" + id}, + } + + payload, endpoint, err := c.getJSONFromCandidates(ctx, candidates) + if err != nil { + return DaemonSession{}, fmt.Errorf("get session failed: %w", err) + } + + obj, ok := payload.(map[string]interface{}) + if !ok { + return DaemonSession{}, fmt.Errorf("get session failed: non-object payload from %s", endpoint) + } + + session := parseSessionEntry(obj) + if session.ID == "" { + session.ID = sessionID + } + if session.ID == "" { + return DaemonSession{}, fmt.Errorf("get session failed: missing session id in payload from %s", endpoint) + } + + return session, nil +} + +func (c *Client) GetMessages(ctx context.Context, sessionID string) ([]map[string]interface{}, error) { + sessionID = strings.TrimSpace(sessionID) + if sessionID == "" { + return nil, errors.New("session ID is required") + } + + id := url.PathEscape(sessionID) + candidates := []endpointCandidate{ + {Path: "/session/" + id + "/message"}, + {Path: "/sessions/" + id + "/messages"}, + } + + payload, endpoint, err := 
c.getJSONFromCandidates(ctx, candidates) + if err != nil { + return nil, fmt.Errorf("get messages failed: %w", err) + } + + arr, ok := payload.([]interface{}) + if !ok { + return nil, fmt.Errorf("get messages failed: non-array payload from %s", endpoint) + } + + var msgs []map[string]interface{} + for _, item := range arr { + if m, ok := item.(map[string]interface{}); ok { + msgs = append(msgs, m) + } + } + return msgs, nil +} + +func (c *Client) SendMessage(ctx context.Context, sessionID, prompt string) (<-chan MessageChunk, error) { + sessionID = strings.TrimSpace(sessionID) + prompt = strings.TrimSpace(prompt) + if sessionID == "" { + return nil, errors.New("session ID is required") + } + if prompt == "" { + return nil, errors.New("prompt is required") + } + + streamCtx, cancelStream := context.WithCancel(ctx) + events, err := c.subscribeEventsInternal(streamCtx) + if err != nil { + cancelStream() + fmt.Printf("subscribeEventsInternal failed: %v\n", err) + return c.sendMessageWithoutStream(ctx, sessionID, prompt), nil + } + + out := make(chan MessageChunk, c.config.StreamBuffer) + + go func() { + defer close(out) + defer cancelStream() + + postCh := make(chan postResult, 1) + go func() { + payload, postErr := c.postMessage(ctx, sessionID, prompt) + postCh <- postResult{payload: payload, err: postErr} + close(postCh) + }() + + var ( + postDone bool + sawDelta bool + idleCh <-chan time.Time + idleT *time.Timer + pending string + ) + + resetIdle := func() { + if !postDone || c.config.StreamIdleTimeout <= 0 { + return + } + if idleT == nil { + idleT = time.NewTimer(c.config.StreamIdleTimeout) + idleCh = idleT.C + return + } + if !idleT.Stop() { + select { + case <-idleT.C: + default: + } + } + idleT.Reset(c.config.StreamIdleTimeout) + } + + stopIdle := func() { + if idleT == nil { + return + } + if !idleT.Stop() { + select { + case <-idleT.C: + default: + } + } + idleCh = nil + } + + emit := func(chunk MessageChunk) bool { + select { + case out <- chunk: + return 
true + case <-ctx.Done(): + return false + } + } + + for { + select { + case <-ctx.Done(): + stopIdle() + return + case res, ok := <-postCh: + if !ok { + postCh = nil + continue + } + postCh = nil + postDone = true + if res.err != nil { + emit(MessageChunk{SessionID: sessionID, Type: "error", Error: res.err.Error(), Done: true}) + stopIdle() + return + } + if !sawDelta { + pending = extractMessageText(res.payload) + if pending == "" { + if encoded, marshalErr := json.Marshal(res.payload); marshalErr == nil { + pending = strings.TrimSpace(string(encoded)) + } + } + } + resetIdle() + case ev, ok := <-events: + if !ok { + if sawDelta { + emit(MessageChunk{SessionID: sessionID, Type: "stream.closed", Done: true}) + } else if postDone && pending != "" { + emit(MessageChunk{SessionID: sessionID, Type: "message.final", Delta: pending, Done: true}) + } else if postDone { + emit(MessageChunk{SessionID: sessionID, Type: "stream.closed", Done: true}) + } + stopIdle() + return + } + + if ev.Error != "" { + emit(MessageChunk{SessionID: sessionID, Type: "stream.error", Error: ev.Error, Done: true}) + stopIdle() + return + } + + if !eventMatchesSession(ev, sessionID) { + continue + } + + if isDeltaEvent(ev) { + sawDelta = true + pending = "" + if !emit(MessageChunk{ + SessionID: ev.SessionID, + MessageID: ev.MessageID, + Type: ev.Type, + Delta: ev.Delta, + Timestamp: ev.Timestamp, + RawData: ev.RawData, + Payload: ev.Payload, + }) { + stopIdle() + return + } + resetIdle() + } + + if isTerminalEvent(ev.Type) && (postDone || sawDelta) { + if !sawDelta && pending != "" { + emit(MessageChunk{ + SessionID: sessionID, + Type: "message.final", + Delta: pending, + Done: true, + Timestamp: ev.Timestamp, + RawData: ev.RawData, + Payload: ev.Payload, + }) + stopIdle() + return + } + emit(MessageChunk{ + SessionID: ev.SessionID, + MessageID: ev.MessageID, + Type: ev.Type, + Done: true, + Timestamp: ev.Timestamp, + RawData: ev.RawData, + Payload: ev.Payload, + }) + stopIdle() + return + } + 
case <-idleCh:
+				if pending != "" && !sawDelta {
+					emit(MessageChunk{SessionID: sessionID, Type: "message.final", Delta: pending, Done: true})
+				} else {
+					emit(MessageChunk{SessionID: sessionID, Type: "stream.idle", Done: true})
+				}
+				stopIdle()
+				return
+			}
+		}
+	}()
+
+	return out, nil
+}
+
+func (c *Client) ExecuteCommand(ctx context.Context, sessionID, command string) (CommandResult, error) {
+	sessionID = strings.TrimSpace(sessionID)
+	command = strings.TrimSpace(command)
+	if sessionID == "" {
+		return CommandResult{}, errors.New("session ID is required")
+	}
+	if command == "" {
+		return CommandResult{}, errors.New("command is required")
+	}
+
+	requestBody, err := json.Marshal(ExecuteCommandRequest{Command: command})
+	if err != nil {
+		return CommandResult{}, err
+	}
+
+	id := url.PathEscape(sessionID)
+	candidates := []endpointCandidate{
+		{Path: "/session/" + id + "/command"},
+		{Path: "/session/" + id + "/commands"},
+		{Path: "/command", Query: url.Values{"sessionID": []string{sessionID}}},
+		{Path: "/commands", Query: url.Values{"sessionID": []string{sessionID}}},
+		{Path: "/command", Query: url.Values{"sessionId": []string{sessionID}}},
+		{Path: "/commands", Query: url.Values{"sessionId": []string{sessionID}}},
+	}
+
+	payload, endpoint, err := c.postJSONFromCandidates(ctx, candidates, requestBody)
+	if err != nil {
+		return CommandResult{}, fmt.Errorf("execute command failed: %w", err)
+	}
+
+	obj, ok := payload.(map[string]interface{})
+	if !ok {
+		return CommandResult{}, fmt.Errorf("execute command failed: non-object payload from %s", endpoint)
+	}
+
+	result := parseCommandResultPayload(obj)
+	return result, nil
+}
+
+func (c *Client) ListFiles(ctx context.Context, sessionID, globPattern string) ([]FileInfo, error) {
+	sessionID = strings.TrimSpace(sessionID)
+	globPattern = strings.TrimSpace(globPattern)
+	if sessionID == "" {
+		return nil, errors.New("session ID is required")
+	}
+
+	id := url.PathEscape(sessionID)
+	query := url.Values{}
+	if globPattern != "" {
+		query.Set("glob", globPattern)
+		query.Set("pattern", globPattern)
+	}
+
+	candidates := []endpointCandidate{
+		{Path: "/session/" + id + "/file", Query: cloneValues(query)},
+		{Path: "/session/" + id + "/files", Query: cloneValues(query)},
+		{Path: "/file", Query: mergeValues(cloneValues(query), url.Values{"sessionID": []string{sessionID}})},
+		{Path: "/files", Query: mergeValues(cloneValues(query), url.Values{"sessionID": []string{sessionID}})},
+		{Path: "/file", Query: mergeValues(cloneValues(query), url.Values{"sessionId": []string{sessionID}})},
+		{Path: "/files", Query: mergeValues(cloneValues(query), url.Values{"sessionId": []string{sessionID}})},
+	}
+
+	payload, endpoint, err := c.getJSONFromCandidates(ctx, candidates)
+	if err != nil {
+		return nil, fmt.Errorf("list files failed: %w", err)
+	}
+
+	files, ok := parseFileListPayload(payload)
+	if !ok {
+		return nil, fmt.Errorf("list files failed: unsupported payload from %s", endpoint)
+	}
+	return files, nil
+}
+
+func (c *Client) ReadFile(ctx context.Context, sessionID, filePath string) (FileContent, error) {
+	sessionID = strings.TrimSpace(sessionID)
+	filePath = strings.TrimSpace(filePath)
+	if sessionID == "" {
+		return FileContent{}, errors.New("session ID is required")
+	}
+	if filePath == "" {
+		return FileContent{}, errors.New("file path is required")
+	}
+
+	id := url.PathEscape(sessionID)
+	escapedPath := escapePath(filePath)
+
+	candidates := []endpointCandidate{
+		{Path: "/session/" + id + "/file/" + escapedPath},
+		{Path: "/session/" + id + "/files/" + escapedPath},
+		{Path: "/file/" + escapedPath, Query: url.Values{"sessionID": []string{sessionID}}},
+		{Path: "/files/" + escapedPath, Query: url.Values{"sessionID": []string{sessionID}}},
+		{Path: "/file", Query: url.Values{"sessionID": []string{sessionID}, "path": []string{filePath}}},
+		{Path: "/files", Query: url.Values{"sessionID": []string{sessionID}, "path": []string{filePath}}},
+		{Path: "/file", Query: url.Values{"sessionId": []string{sessionID}, "path": []string{filePath}}},
+		{Path: "/files", Query: url.Values{"sessionId": []string{sessionID}, "path": []string{filePath}}},
+	}
+
+	var lastErr error
+	for _, candidate := range candidates {
+		res, err := c.doRequest(ctx, http.MethodGet, candidate.Path, candidate.Query, nil, map[string]string{"Accept": "application/json"}, true)
+		if err != nil {
+			lastErr = err
+			continue
+		}
+		if isEndpointMismatchStatus(res.StatusCode) {
+			continue
+		}
+		if res.StatusCode < 200 || res.StatusCode >= 300 {
+			lastErr = fmt.Errorf("endpoint %s returned status %d", candidate.Path, res.StatusCode)
+			continue
+		}
+
+		if responseLooksJSON(res) {
+			payload, decodeErr := decodeJSONPayload(res.Body)
+			if decodeErr != nil {
+				lastErr = fmt.Errorf("endpoint %s returned invalid JSON: %w", candidate.Path, decodeErr)
+				continue
+			}
+			switch typed := payload.(type) {
+			case map[string]interface{}:
+				content := parseFileContentPayload(typed, filePath)
+				if len(content.RawBytes) == 0 {
+					content.RawBytes = []byte(content.Content)
+				}
+				return content, nil
+			case string:
+				return FileContent{Path: filePath, Content: typed, RawBytes: []byte(typed)}, nil
+			default:
+				lastErr = fmt.Errorf("endpoint %s returned unsupported payload type", candidate.Path)
+				continue
+			}
+		}
+
+		body := append([]byte(nil), res.Body...)
+		return FileContent{Path: filePath, Content: string(body), RawBytes: body}, nil
+	}
+
+	if lastErr == nil {
+		lastErr = errors.New("no compatible file endpoint")
+	}
+	return FileContent{}, fmt.Errorf("read file failed: %w", lastErr)
+}
+
+func (c *Client) SubscribeEvents(ctx context.Context) (<-chan DaemonEvent, error) {
+	return c.subscribeEventsInternal(ctx)
+}
+
+func (c *Client) Health(ctx context.Context) (HealthResponse, error) {
+	candidates := []endpointCandidate{{Path: "/global/health"}, {Path: "/health"}}
+	payload, endpoint, err := c.getJSONFromCandidates(ctx, candidates)
+	if err != nil {
+		return HealthResponse{}, fmt.Errorf("health check failed: %w", err)
+	}
+
+	obj, ok := payload.(map[string]interface{})
+	if !ok {
+		return HealthResponse{}, fmt.Errorf("health check failed: non-object payload from %s", endpoint)
+	}
+
+	return parseHealthPayload(obj), nil
+}
+
+func (c *Client) Config(ctx context.Context) (DaemonConfig, error) {
+	candidates := []endpointCandidate{{Path: "/config"}, {Path: "/project/config"}}
+	payload, endpoint, err := c.getJSONFromCandidates(ctx, candidates)
+	if err != nil {
+		return DaemonConfig{}, fmt.Errorf("config fetch failed: %w", err)
+	}
+
+	obj, ok := payload.(map[string]interface{})
+	if !ok {
+		return DaemonConfig{}, fmt.Errorf("config fetch failed: non-object payload from %s", endpoint)
+	}
+
+	return DaemonConfig{Raw: cloneMap(obj)}, nil
+}
+
+func (c *Client) sendMessageWithoutStream(ctx context.Context, sessionID, prompt string) <-chan MessageChunk {
+	out := make(chan MessageChunk, 1)
+	go func() {
+		defer close(out)
+		payload, err := c.postMessage(ctx, sessionID, prompt)
+		if err != nil {
+			out <- MessageChunk{SessionID: sessionID, Type: "error", Error: err.Error(), Done: true}
+			return
+		}
+		text := extractMessageText(payload)
+		if text == "" {
+			encoded, _ := json.Marshal(payload)
+			text = string(encoded)
+		}
+		out <- MessageChunk{SessionID: sessionID, Type: "message.final", Delta: text, Done: true}
+	}()
+
+	return out
+}
+
+func (c *Client) postMessage(ctx context.Context, sessionID, prompt string) (map[string]interface{}, error) {
+	requestBody, err := json.Marshal(MessageRequest{Parts: []MessagePart{{Type: "text", Text: prompt}}})
+	if err != nil {
+		return nil, err
+	}
+
+	id := url.PathEscape(sessionID)
+	candidates := []endpointCandidate{
+		{Path: "/session/" + id + "/message"},
+		{Path: "/sessions/" + id + "/messages"},
+		{Path: "/session/" + id + "/messages"},
+	}
+
+	payload, _, err := c.postJSONFromCandidates(ctx, candidates, requestBody)
+	if err != nil {
+		return nil, err
+	}
+
+	switch typed := payload.(type) {
+	case nil:
+		return map[string]interface{}{}, nil
+	case map[string]interface{}:
+		return typed, nil
+	default:
+		return map[string]interface{}{"data": typed}, nil
+	}
+}
+
+func (c *Client) subscribeEventsInternal(ctx context.Context) (<-chan DaemonEvent, error) {
+	resp, err := c.openEventStream(ctx)
+	if err != nil {
+		return nil, err
+	}
+
+	out := make(chan DaemonEvent, c.config.StreamBuffer)
+
+	go func() {
+		defer close(out)
+		defer resp.Body.Close()
+
+		emit := func(ev DaemonEvent) bool {
+			select {
+			case out <- ev:
+				return true
+			case <-ctx.Done():
+				return false
+			}
+		}
+
+		err := readSSEFrames(ctx, resp.Body, func(frame sseFrame) bool {
+			ev := parseDaemonEvent(frame)
+			return emit(ev)
+		})
+		if err != nil && ctx.Err() == nil {
+			emit(DaemonEvent{Type: "stream.error", Error: err.Error()})
+		}
+	}()
+
+	return out, nil
+}
+
+func (c *Client) openEventStream(ctx context.Context) (*http.Response, error) {
+	endpoints := []string{"/event", "/events"}
+	var lastErr error
+
+	for _, endpoint := range endpoints {
+		resp, err := c.openEventEndpoint(ctx, endpoint)
+		if err != nil {
+			lastErr = err
+			continue
+		}
+
+		if isEndpointMismatchStatus(resp.StatusCode) {
+			resp.Body.Close()
+			continue
+		}
+		if resp.StatusCode < 200 || resp.StatusCode >= 300 {
+			body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
+			resp.Body.Close()
+			lastErr = fmt.Errorf("endpoint %s returned status %d body=%s", endpoint, resp.StatusCode, strings.TrimSpace(string(body)))
+			continue
+		}
+
+		contentType := strings.ToLower(resp.Header.Get("Content-Type"))
+		if contentType != "" && !strings.Contains(contentType, "text/event-stream") {
+			resp.Body.Close()
+			lastErr = fmt.Errorf("endpoint %s returned non-SSE content-type %q", endpoint, contentType)
+			continue
+		}
+
+		return resp, nil
+	}
+
+	if lastErr == nil {
+		lastErr = errors.New("no compatible event endpoint")
+	}
+	return nil, lastErr
+}
+
+func (c *Client) openEventEndpoint(ctx context.Context, endpoint string) (*http.Response, error) {
+	attempts := 1
+	if c.config.MaxRetries > 0 {
+		attempts += c.config.MaxRetries
+	}
+
+	var lastErr error
+	for attempt := 0; attempt < attempts; attempt++ {
+		req, err := http.NewRequestWithContext(ctx, http.MethodGet, c.buildURL(endpoint, nil), nil)
+		if err != nil {
+			return nil, err
+		}
+		req.Header.Set("Accept", "text/event-stream")
+		if token := strings.TrimSpace(c.config.AuthToken); token != "" {
+			req.Header.Set("Authorization", "Bearer "+token)
+		}
+
+		resp, err := c.httpClient.Do(req)
+		if err != nil {
+			lastErr = err
+			if attempt >= attempts-1 || ctx.Err() != nil || !isRetryableError(err) {
+				return nil, err
+			}
+			if !sleepBackoff(ctx, c.config.RetryBackoff, attempt+1) {
+				return nil, ctx.Err()
+			}
+			continue
+		}
+
+		if attempt < attempts-1 && isRetryableStatus(resp.StatusCode) {
+			resp.Body.Close()
+			if !sleepBackoff(ctx, c.config.RetryBackoff, attempt+1) {
+				return nil, ctx.Err()
+			}
+			continue
+		}
+
+		return resp, nil
+	}
+
+	if lastErr == nil {
+		lastErr = errors.New("event stream request failed")
+	}
+	return nil, lastErr
+}
+
+func (c *Client) getJSONFromCandidates(ctx context.Context, candidates []endpointCandidate) (interface{}, string, error) {
+	var errs []string
+
+	for _, candidate := range candidates {
+		res, err := c.doRequest(ctx, http.MethodGet, candidate.Path, candidate.Query, nil, map[string]string{"Accept": "application/json"}, true)
+		if err != nil {
+			errs = append(errs, fmt.Sprintf("%s: %v", candidate.Path, err))
+			continue
+		}
+
+		if isEndpointMismatchStatus(res.StatusCode) {
+			continue
+		}
+		if res.StatusCode < 200 || res.StatusCode >= 300 {
+			errs = append(errs, fmt.Sprintf("%s: status %d", candidate.Path, res.StatusCode))
+			continue
+		}
+		if !responseLooksJSON(res) {
+			errs = append(errs, fmt.Sprintf("%s: non-JSON content-type %q", candidate.Path, res.Header.Get("Content-Type")))
+			continue
+		}
+
+		payload, decodeErr := decodeJSONPayload(res.Body)
+		if decodeErr != nil {
+			errs = append(errs, fmt.Sprintf("%s: invalid JSON (%v)", candidate.Path, decodeErr))
+			continue
+		}
+
+		return payload, candidate.Path, nil
+	}
+
+	if len(errs) == 0 {
+		return nil, "", errors.New("no compatible endpoint")
+	}
+	return nil, "", errors.New(strings.Join(errs, "; "))
+}
+
+func (c *Client) postJSONFromCandidates(ctx context.Context, candidates []endpointCandidate, body []byte) (interface{}, string, error) {
+	var errs []string
+
+	for _, candidate := range candidates {
+		res, err := c.doRequest(
+			ctx,
+			http.MethodPost,
+			candidate.Path,
+			candidate.Query,
+			body,
+			map[string]string{"Accept": "application/json", "Content-Type": "application/json"},
+			false,
+		)
+		if err != nil {
+			errs = append(errs, fmt.Sprintf("%s: %v", candidate.Path, err))
+			continue
+		}
+
+		if isEndpointMismatchStatus(res.StatusCode) {
+			continue
+		}
+		if res.StatusCode < 200 || res.StatusCode >= 300 {
+			errs = append(errs, fmt.Sprintf("%s: status %d", candidate.Path, res.StatusCode))
+			continue
+		}
+
+		if len(bytes.TrimSpace(res.Body)) == 0 {
+			return nil, candidate.Path, nil
+		}
+
+		if !responseLooksJSON(res) {
+			errs = append(errs, fmt.Sprintf("%s: non-JSON content-type %q", candidate.Path, res.Header.Get("Content-Type")))
+			continue
+		}
+
+		payload, decodeErr := decodeJSONPayload(res.Body)
+		if decodeErr != nil {
+			errs = append(errs, fmt.Sprintf("%s: invalid JSON (%v)", candidate.Path, decodeErr))
+			continue
+		}
+
+		return payload, candidate.Path, nil
+	}
+
+	if len(errs) == 0 {
+		return nil, "", errors.New("no compatible endpoint")
+	}
+	return nil, "", errors.New(strings.Join(errs, "; "))
+}
+
+func (c *Client) doRequest(
+	ctx context.Context,
+	method string,
+	endpoint string,
+	query url.Values,
+	body []byte,
+	headers map[string]string,
+	retry bool,
+) (*httpResult, error) {
+	attempts := 1
+	if retry && c.config.MaxRetries > 0 {
+		attempts += c.config.MaxRetries
+	}
+
+	var lastErr error
+	for attempt := 0; attempt < attempts; attempt++ {
+		req, err := http.NewRequestWithContext(ctx, method, c.buildURL(endpoint, query), bytes.NewReader(body))
+		if err != nil {
+			return nil, err
+		}
+
+		if token := strings.TrimSpace(c.config.AuthToken); token != "" {
+			req.Header.Set("Authorization", "Bearer "+token)
+		}
+		for key, value := range headers {
+			if strings.TrimSpace(value) == "" {
+				continue
+			}
+			req.Header.Set(key, value)
+		}
+
+		resp, err := c.httpClient.Do(req)
+		if err != nil {
+			lastErr = err
+			if !retry || attempt >= attempts-1 || ctx.Err() != nil || !isRetryableError(err) {
+				return nil, err
+			}
+			if !sleepBackoff(ctx, c.config.RetryBackoff, attempt+1) {
+				return nil, ctx.Err()
+			}
+			continue
+		}
+
+		responseBody, readErr := io.ReadAll(resp.Body)
+		resp.Body.Close()
+		if readErr != nil {
+			lastErr = readErr
+			if !retry || attempt >= attempts-1 || ctx.Err() != nil {
+				return nil, readErr
+			}
+			if !sleepBackoff(ctx, c.config.RetryBackoff, attempt+1) {
+				return nil, ctx.Err()
+			}
+			continue
+		}
+
+		result := &httpResult{StatusCode: resp.StatusCode, Header: resp.Header.Clone(), Body: responseBody}
+		if retry && attempt < attempts-1 && isRetryableStatus(result.StatusCode) {
+			lastErr = fmt.Errorf("retryable status %d", result.StatusCode)
+			if !sleepBackoff(ctx, c.config.RetryBackoff, attempt+1) {
+				return nil, ctx.Err()
+			}
+			continue
+		}
+
+		return result, nil
+	}
+
+	if lastErr == nil {
+		lastErr = errors.New("request failed")
+	}
+	return nil, lastErr
+}
+
+func (c *Client) buildURL(endpoint string, query url.Values) string {
+	u, err := url.Parse(c.baseURL)
+	if err != nil {
+		endpoint = "/" + strings.TrimPrefix(endpoint, "/")
+		if len(query) == 0 {
+			return strings.TrimRight(c.baseURL, "/") + endpoint
+		}
+		return strings.TrimRight(c.baseURL, "/") + endpoint + "?" + query.Encode()
+	}
+
+	basePath := strings.TrimSuffix(u.Path, "/")
+	endpointPath := "/" + strings.TrimPrefix(endpoint, "/")
+	u.Path = basePath + endpointPath
+	if len(query) > 0 {
+		u.RawQuery = query.Encode()
+	} else {
+		u.RawQuery = ""
+	}
+
+	return u.String()
+}
+
+func parseSessionListPayload(payload interface{}) ([]DaemonSession, bool) {
+	var entries []interface{}
+
+	switch typed := payload.(type) {
+	case []interface{}:
+		entries = typed
+	case map[string]interface{}:
+		if list, ok := typed["sessions"].([]interface{}); ok {
+			entries = list
+		} else if nested, ok := typed["data"].(map[string]interface{}); ok {
+			if list, ok := nested["sessions"].([]interface{}); ok {
+				entries = list
+			}
+		} else if firstString(typed, "id", "session_id", "sessionId", "sessionID") != "" {
+			entries = []interface{}{typed}
+		} else {
+			return nil, false
+		}
+	default:
+		return nil, false
+	}
+
+	result := make([]DaemonSession, 0, len(entries))
+	for _, entry := range entries {
+		obj, ok := entry.(map[string]interface{})
+		if !ok {
+			continue
+		}
+		session := parseSessionEntry(obj)
+		if session.ID == "" {
+			continue
+		}
+		result = append(result, session)
+	}
+
+	return result, true
+}
+
+func parseSessionEntry(payload map[string]interface{}) DaemonSession {
+	return DaemonSession{
+		ID:              firstString(payload, "id", "session_id", "sessionId", "sessionID"),
+		Title:           firstString(payload, "title", "name"),
+		Directory:       firstString(payload, "directory", "worktree", "cwd", "workspace_path", "workspacePath"),
+		Status:          firstString(payload, "status", "state"),
+		CreatedAt:       firstTime(payload, "created_at", "createdAt", "created", "time"),
+		LastActivity:    firstTime(payload, "last_activity", "lastActivity", "updated", "updated_at", "updatedAt", "time"),
+		DaemonPort:      firstInt(payload, "daemon_port", "daemonPort", "port"),
+		AttachedClients: firstInt(payload, "attached_clients", "attachedClients"),
+		ProjectID:       firstString(payload, "projectID", "projectId", "project_id"),
+		Slug:            firstString(payload, "slug"),
+		Version:         firstString(payload, "version"),
+		Raw:             cloneMap(payload),
+	}
+}
+
+func parseFileListPayload(payload interface{}) ([]FileInfo, bool) {
+	var entries []interface{}
+
+	switch typed := payload.(type) {
+	case []interface{}:
+		entries = typed
+	case map[string]interface{}:
+		switch {
+		case typed["files"] != nil:
+			list, ok := typed["files"].([]interface{})
+			if !ok {
+				return nil, false
+			}
+			entries = list
+		case typed["data"] != nil:
+			nested, ok := typed["data"].(map[string]interface{})
+			if !ok {
+				return nil, false
+			}
+			if list, ok := nested["files"].([]interface{}); ok {
+				entries = list
+			} else {
+				return nil, false
+			}
+		case firstString(typed, "path", "file", "name") != "":
+			entries = []interface{}{typed}
+		default:
+			return nil, false
+		}
+	default:
+		return nil, false
+	}
+
+	files := make([]FileInfo, 0, len(entries))
+	for _, entry := range entries {
+		obj, ok := entry.(map[string]interface{})
+		if !ok {
+			continue
+		}
+		info := parseFileInfoEntry(obj)
+		if info.Path == "" && info.Name == "" {
+			continue
+		}
+		files = append(files, info)
+	}
+
+	return files, true
+}
+
+func parseFileInfoEntry(payload map[string]interface{}) FileInfo {
+	pathValue := firstString(payload, "path", "file", "filepath", "filePath")
+	nameValue := firstString(payload, "name")
+	if nameValue == "" && pathValue != "" {
+		segments := strings.Split(strings.Trim(pathValue, "/"), "/")
+		if len(segments) > 0 {
+			nameValue = segments[len(segments)-1]
+		}
+	}
+
+	return FileInfo{
+		Path:    pathValue,
+		Name:    nameValue,
+		Size:    firstInt64(payload, "size", "bytes", "length"),
+		IsDir:   firstBool(payload, "is_dir", "isDir", "dir", "directory"),
+		Mode:    firstString(payload, "mode", "permissions"),
+		ModTime: firstTime(payload, "mod_time", "modTime", "modified", "updated_at", "updatedAt"),
+		Raw:     cloneMap(payload),
+	}
+}
+
+func parseFileContentPayload(payload map[string]interface{}, requestedPath string) FileContent {
+	content := firstString(payload, "content", "text", "data")
+	encoding := firstString(payload, "encoding")
+	raw := []byte(content)
+
+	if strings.EqualFold(encoding, "base64") && content != "" {
+		if decoded, err := base64.StdEncoding.DecodeString(content); err == nil {
+			raw = decoded
+			content = string(decoded)
+		}
+	}
+
+	pathValue := firstString(payload, "path", "file", "filePath", "filepath")
+	if pathValue == "" {
+		pathValue = requestedPath
+	}
+
+	return FileContent{
+		Path:     pathValue,
+		Content:  content,
+		Encoding: encoding,
+		RawBytes: raw,
+	}
+}
+
+func parseCommandResultPayload(payload map[string]interface{}) CommandResult {
+	result := CommandResult{
+		ExitCode: firstInt(payload, "exit_code", "exitCode", "code", "status"),
+		Stdout:   firstString(payload, "stdout", "output", "result"),
+		Stderr:   firstString(payload, "stderr", "error"),
+		Raw:      cloneMap(payload),
+	}
+
+	if success, ok := firstBoolWithPresence(payload, "success", "ok"); ok {
+		result.Success = success
+	} else {
+		result.Success = result.ExitCode == 0 && result.Stderr == ""
+	}
+
+	return result
+}
+
+func parseHealthPayload(payload map[string]interface{}) HealthResponse {
+	healthy := false
+	if value, ok := firstBoolWithPresence(payload, "healthy", "ok", "status"); ok {
+		healthy = value
+	}
+
+	return HealthResponse{
+		Healthy: healthy,
+		Version: firstString(payload, "version"),
+		Raw:     cloneMap(payload),
+	}
+}
+
+func parseDaemonEvent(frame sseFrame) DaemonEvent {
+	ev := DaemonEvent{
+		ID:      strings.TrimSpace(frame.ID),
+		Type:    strings.TrimSpace(frame.Event),
+		RawData: frame.Data,
+	}
+
+	trimmed := bytes.TrimSpace([]byte(frame.Data))
+	if len(trimmed) == 0 {
+		if ev.Type == "" {
+			ev.Type = "message"
+		}
+		return ev
+	}
+
+	ev.Data = append([]byte(nil), trimmed...)
+	payload, err := decodeJSONPayload(trimmed)
+	if err != nil {
+		if ev.Type == "" {
+			ev.Type = "message"
+		}
+		return ev
+	}
+
+	obj, ok := payload.(map[string]interface{})
+	if !ok {
+		if ev.Type == "" {
+			ev.Type = "message"
+		}
+		return ev
+	}
+
+	ev.Payload = cloneMap(obj)
+	if ev.Type == "" {
+		ev.Type = firstString(obj, "type", "event", "eventType", "name")
+	}
+	ev.SessionID = firstString(obj, "sessionID", "sessionId", "session_id")
+	if ev.SessionID == "" {
+		if nested := firstNestedMap(obj, "session", "data"); nested != nil {
+			ev.SessionID = firstString(nested, "id", "sessionID", "sessionId", "session_id")
+		}
+	}
+	ev.MessageID = firstString(obj, "messageID", "messageId", "message_id")
+	if ev.MessageID == "" {
+		if nested := firstNestedMap(obj, "message", "data"); nested != nil {
+			ev.MessageID = firstString(nested, "id", "messageID", "messageId", "message_id")
+		}
+	}
+	ev.Timestamp = firstTime(obj, "timestamp", "time", "created_at", "createdAt", "updated_at", "updatedAt")
+	ev.Delta = extractDelta(obj)
+
+	if ev.Type == "" {
+		ev.Type = "message"
+	}
+
+	return ev
+}
+
+func extractMessageText(payload map[string]interface{}) string {
+	if len(payload) == 0 {
+		return ""
+	}
+
+	if s := firstString(payload, "text", "content", "delta", "output", "result"); s != "" {
+		return s
+	}
+
+	if nested := firstNestedMap(payload, "message", "data"); nested != nil {
+		if s := firstString(nested, "text", "content", "delta", "output", "result"); s != "" {
+			return s
+		}
+	}
+
+	if parts, ok := payload["parts"].([]interface{}); ok {
+		for i := len(parts) - 1; i >= 0; i-- {
+			part, ok := parts[i].(map[string]interface{})
+			if !ok {
+				continue
+			}
+			if s := firstString(part, "text", "content", "delta"); s != "" {
+				return s
+			}
+		}
+	}
+
+	return ""
+}
+
+func extractDelta(payload map[string]interface{}) string {
+	if s := firstString(payload, "delta"); s != "" {
+		return s
+	}
+
+	if part := firstNestedMap(payload, "part"); part != nil {
+		if s := firstString(part, "delta", "text", "content"); s != "" {
+			return s
+		}
+	}
+
+	if message := firstNestedMap(payload, "message"); message != nil {
+		if s := firstString(message, "delta", "text", "content"); s != "" {
+			return s
+		}
+		if part := firstNestedMap(message, "part"); part != nil {
+			if s := firstString(part, "delta", "text", "content"); s != "" {
+				return s
+			}
+		}
+	}
+
+	if parts, ok := payload["parts"].([]interface{}); ok {
+		for i := len(parts) - 1; i >= 0; i-- {
+			part, ok := parts[i].(map[string]interface{})
+			if !ok {
+				continue
+			}
+			if s := firstString(part, "delta", "text", "content"); s != "" {
+				return s
+			}
+		}
+	}
+
+	return ""
+}
+
+func eventMatchesSession(ev DaemonEvent, sessionID string) bool {
+	sessionID = strings.TrimSpace(sessionID)
+	if sessionID == "" {
+		return true
+	}
+	return strings.TrimSpace(ev.SessionID) == sessionID
+}
+
+func isDeltaEvent(ev DaemonEvent) bool {
+	if strings.TrimSpace(ev.Delta) != "" {
+		return true
+	}
+	typ := strings.ToLower(strings.TrimSpace(ev.Type))
+	return strings.Contains(typ, "message.part.delta")
+}
+
+func isTerminalEvent(eventType string) bool {
+	switch strings.ToLower(strings.TrimSpace(eventType)) {
+	case "session.idle", "session.error", "message.completed", "message.done", "message.error", "message.stopped", "response.completed", "completion.done":
+		return true
+	default:
+		return false
+	}
+}
+
+func readSSEFrames(ctx context.Context, reader io.Reader, handle func(frame sseFrame) bool) error {
+	scanner := bufio.NewScanner(reader)
+	scanner.Buffer(make([]byte, 0, defaultScannerInitialSize), defaultScannerMaxSize)
+	scanner.Split(splitSSEFrame)
+
+	for scanner.Scan() {
+		select {
+		case <-ctx.Done():
+			return ctx.Err()
+		default:
+		}
+
+		frame, ok := parseSSEFrame(scanner.Bytes())
+		if !ok {
+			continue
+		}
+		if !handle(frame) {
+			return nil
+		}
+	}
+
+	if err := scanner.Err(); err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func splitSSEFrame(data []byte, atEOF bool) (advance int, token []byte, err error) {
+	if atEOF && len(data) == 0 {
+		return 0, nil, nil
+	}
+
+	if idx := bytes.Index(data, []byte("\r\n\r\n")); idx >= 0 {
+		return idx + 4, bytes.Trim(data[:idx], "\r\n"), nil
+	}
+	if idx := bytes.Index(data, []byte("\n\n")); idx >= 0 {
+		return idx + 2, bytes.Trim(data[:idx], "\r\n"), nil
+	}
+
+	if atEOF {
+		return len(data), bytes.Trim(data, "\r\n"), nil
+	}
+
+	return 0, nil, nil
+}
+
+func parseSSEFrame(raw []byte) (sseFrame, bool) {
+	if len(raw) == 0 {
+		return sseFrame{}, false
+	}
+
+	normalized := bytes.ReplaceAll(raw, []byte("\r\n"), []byte("\n"))
+	lines := bytes.Split(normalized, []byte("\n"))
+
+	frame := sseFrame{}
+	dataLines := make([]string, 0, 4)
+
+	for _, lineBytes := range lines {
+		line := strings.TrimRight(string(lineBytes), "\r")
+		if line == "" || strings.HasPrefix(line, ":") {
+			continue
+		}
+
+		key, value, ok := strings.Cut(line, ":")
+		if !ok {
+			continue
+		}
+		value = strings.TrimLeft(value, " ")
+
+		switch key {
+		case "id":
+			frame.ID = value
+		case "event":
+			frame.Event = value
+		case "data":
+			dataLines = append(dataLines, value)
+		}
+	}
+
+	frame.Data = strings.Join(dataLines, "\n")
+	if strings.TrimSpace(frame.ID) == "" && strings.TrimSpace(frame.Event) == "" && strings.TrimSpace(frame.Data) == "" {
+		return sseFrame{}, false
+	}
+
+	return frame, true
+}
+
+func isEndpointMismatchStatus(status int) bool {
+	return status == http.StatusNotFound || status == http.StatusMethodNotAllowed || status == http.StatusNotAcceptable
+}
+
+func responseLooksJSON(res *httpResult) bool {
+	contentType := strings.ToLower(strings.TrimSpace(res.Header.Get("Content-Type")))
+	if strings.Contains(contentType, "application/json") || strings.Contains(contentType, "+json") {
+		return true
+	}
+	trimmed := bytes.TrimSpace(res.Body)
+	if len(trimmed) == 0 {
+		return true
+	}
+	if json.Valid(trimmed) {
+		return true
+	}
+	return false
+}
+
+func decodeJSONPayload(body []byte) (interface{}, error) {
+	decoder := json.NewDecoder(bytes.NewReader(body))
+	decoder.UseNumber()
+	var payload interface{}
+	if err := decoder.Decode(&payload); err != nil {
+		return nil, err
+	}
+	return payload, nil
+}
+
+func sleepBackoff(ctx context.Context, step time.Duration, multiplier int) bool {
+	if step <= 0 {
+		step = defaultRetryBackoff
+	}
+	d := step * time.Duration(multiplier)
+	timer := time.NewTimer(d)
+	defer timer.Stop()
+
+	select {
+	case <-ctx.Done():
+		return false
+	case <-timer.C:
+		return true
+	}
+}
+
+func isRetryableStatus(status int) bool {
+	return status == http.StatusTooManyRequests || status == http.StatusBadGateway || status == http.StatusServiceUnavailable || status == http.StatusGatewayTimeout || status >= 500
+}
+
+func isRetryableError(err error) bool {
+	if err == nil {
+		return false
+	}
+	if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
+		return false
+	}
+	var netErr net.Error
+	if errors.As(err, &netErr) {
+		if netErr.Timeout() {
+			return true
+		}
+		if t, ok := interface{}(netErr).(interface{ Temporary() bool }); ok {
+			return t.Temporary()
+		}
+	}
+	return true
+}
+
+func cloneValues(values url.Values) url.Values {
+	if values == nil {
+		return nil
+	}
+	cloned := url.Values{}
+	for key, vals := range values {
+		copyVals := make([]string, len(vals))
+		copy(copyVals, vals)
+		cloned[key] = copyVals
+	}
+	return cloned
+}
+
+func mergeValues(values ...url.Values) url.Values {
+	merged := url.Values{}
+	for _, item := range values {
+		for key, vals := range item {
+			for _, value := range vals {
+				merged.Add(key, value)
+			}
+		}
+	}
+	return merged
+}
+
+func escapePath(filePath string) string {
+	trimmed := strings.TrimSpace(filePath)
+	trimmed = strings.TrimPrefix(trimmed, "/")
+	segments := strings.Split(trimmed, "/")
+	for i := range segments {
+		segments[i] = url.PathEscape(segments[i])
+	}
+	return strings.Join(segments, "/")
+}
+
+func firstNestedMap(payload map[string]interface{}, keys ...string) map[string]interface{} {
+	for _, key := range keys {
+		value, ok := payload[key]
+		if !ok {
+			continue
+		}
+		if nested, ok := value.(map[string]interface{}); ok {
+			return nested
+		}
+	}
+	return nil
+}
+
+func cloneMap(payload map[string]interface{}) map[string]interface{} {
+	if payload == nil {
+		return nil
+	}
+	cloned := make(map[string]interface{}, len(payload))
+	for key, value := range payload {
+		cloned[key] = value
+	}
+	return cloned
+}
+
+func firstString(payload map[string]interface{}, keys ...string) string {
+	for _, key := range keys {
+		value, ok := payload[key]
+		if !ok {
+			continue
+		}
+		switch typed := value.(type) {
+		case string:
+			if s := strings.TrimSpace(typed); s != "" {
+				return s
+			}
+		case json.Number:
+			if s := strings.TrimSpace(typed.String()); s != "" {
+				return s
+			}
+		case float64:
+			if !math.IsNaN(typed) && !math.IsInf(typed, 0) {
+				return strconv.FormatInt(int64(typed), 10)
+			}
+		case float32:
+			f := float64(typed)
+			if !math.IsNaN(f) && !math.IsInf(f, 0) {
+				return strconv.FormatInt(int64(f), 10)
+			}
+		case int:
+			return strconv.Itoa(typed)
+		case int64:
+			return strconv.FormatInt(typed, 10)
+		}
+	}
+	return ""
+}
+
+func firstInt(payload map[string]interface{}, keys ...string) int {
+	for _, key := range keys {
+		value, ok := payload[key]
+		if !ok {
+			continue
+		}
+		switch typed := value.(type) {
+		case json.Number:
+			if n, err := typed.Int64(); err == nil {
+				return int(n)
+			}
+			if f, err := typed.Float64(); err == nil {
+				return int(f)
+			}
+		case float64:
+			if !math.IsNaN(typed) && !math.IsInf(typed, 0) {
+				return int(typed)
+			}
+		case float32:
+			f := float64(typed)
+			if !math.IsNaN(f) && !math.IsInf(f, 0) {
+				return int(f)
+			}
+		case int:
+			return typed
+		case int64:
+			return int(typed)
+		case string:
+			if n, err := strconv.Atoi(strings.TrimSpace(typed)); err == nil {
+				return n
+			}
+		}
+	}
+	return 0
+}
+
+func firstInt64(payload map[string]interface{}, keys ...string) int64 {
+	for _, key := range keys {
+		value, ok := payload[key]
+		if !ok {
+			continue
+		}
+		switch typed := value.(type) {
+		case json.Number:
+			if n, err := typed.Int64(); err == nil {
+				return n
+			}
+			if f, err := typed.Float64(); err == nil && !math.IsNaN(f) && !math.IsInf(f, 0) {
+				return int64(f)
+			}
+		case float64:
+			if !math.IsNaN(typed) && !math.IsInf(typed, 0) {
+				return int64(typed)
+			}
+		case float32:
+			f := float64(typed)
+			if !math.IsNaN(f) && !math.IsInf(f, 0) {
+				return int64(f)
+			}
+		case int:
+			return int64(typed)
+		case int64:
+			return typed
+		case string:
+			if n, err := strconv.ParseInt(strings.TrimSpace(typed), 10, 64); err == nil {
+				return n
+			}
+		}
+	}
+	return 0
+}
+
+func firstBool(payload map[string]interface{}, keys ...string) bool {
+	b, _ := firstBoolWithPresence(payload, keys...)
+	return b
+}
+
+func firstBoolWithPresence(payload map[string]interface{}, keys ...string) (bool, bool) {
+	for _, key := range keys {
+		value, ok := payload[key]
+		if !ok {
+			continue
+		}
+		switch typed := value.(type) {
+		case bool:
+			return typed, true
+		case string:
+			s := strings.TrimSpace(strings.ToLower(typed))
+			switch s {
+			case "true", "1", "yes", "ok", "healthy", "success":
+				return true, true
+			case "false", "0", "no", "error", "failed", "unhealthy":
+				return false, true
+			}
+		case json.Number:
+			if n, err := typed.Int64(); err == nil {
+				return n != 0, true
+			}
+			if f, err := typed.Float64(); err == nil {
+				return f != 0, true
+			}
+		case float64:
+			if !math.IsNaN(typed) && !math.IsInf(typed, 0) {
+				return typed != 0, true
+			}
+		case int:
+			return typed != 0, true
+		}
+	}
+	return false, false
+}
+
+func firstTime(payload map[string]interface{}, keys ...string) time.Time {
+	for _, key := range keys {
+		value, ok := payload[key]
+		if !ok {
+			continue
+		}
+		timestamp := parseFlexibleTime(value)
+		if !timestamp.IsZero() {
+			return timestamp
+		}
+	}
+	return time.Time{}
+}
+
+func parseFlexibleTime(value interface{}) time.Time {
+	switch typed := value.(type) {
+	case string:
+		s := strings.TrimSpace(typed)
+		if s == "" {
+			return time.Time{}
+		}
+		if ts, err := time.Parse(time.RFC3339Nano, s); err == nil {
+			return ts
+		}
+		if ts, err := time.Parse(time.RFC3339, s); err == nil {
+			return ts
+		}
+		if n, err := strconv.ParseInt(s, 10, 64); err == nil {
+			return unixMaybeMillis(n)
+		}
+	case json.Number:
+		if n, err := typed.Int64(); err == nil {
+			return unixMaybeMillis(n)
+		}
+		if f, err := typed.Float64(); err == nil {
+			return unixMaybeMillis(int64(f))
+		}
+	case float64:
+		if !math.IsNaN(typed) && !math.IsInf(typed, 0) {
+			return unixMaybeMillis(int64(typed))
+		}
+	case int64:
+		return unixMaybeMillis(typed)
+	case int:
+		return unixMaybeMillis(int64(typed))
+	}
+	return time.Time{}
+}
+
+func unixMaybeMillis(value int64) time.Time {
+	if value <= 0 {
+		return time.Time{}
+	}
+	if value > 1_000_000_000_000 {
+		return time.UnixMilli(value)
+	}
+	return time.Unix(value, 0)
+}
diff --git a/internal/daemon/client_test.go b/internal/daemon/client_test.go
new file mode 100644
index 0000000..12f50d4
--- /dev/null
+++ b/internal/daemon/client_test.go
@@ -0,0 +1,537 @@
+package daemon
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/http/httptest"
+	"strings"
+	"sync/atomic"
+	"testing"
+	"time"
+)
+
+func mustNewClient(t *testing.T, baseURL string, cfg ClientConfig) *Client {
+	t.Helper()
+	client, err := NewClient(baseURL, cfg)
+	if err != nil {
+		t.Fatalf("failed to create daemon client: %v", err)
+	}
+	return client
+}
+
+func TestListSessionsPrefersSingularSessionEndpoint(t *testing.T) {
+	var singularHits atomic.Int32
+	var pluralHits atomic.Int32
+
+	mux := http.NewServeMux()
+	mux.HandleFunc("/session", func(w http.ResponseWriter, r *http.Request) {
+		singularHits.Add(1)
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode(map[string]interface{}{
+			"sessions": []map[string]interface{}{
+				{
+					"id":        "ses-1",
+					"directory": "/work/proj",
+					"time":      "2026-03-05T10:00:00Z",
+					"projectID": "proj-1",
+					"slug":      "proj",
+					"version":   "1.2.17",
+				},
+			},
+		})
+	})
+	mux.HandleFunc("/sessions", func(w http.ResponseWriter, r *http.Request) {
+		pluralHits.Add(1)
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode([]map[string]interface{}{})
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second})
+	sessions, err := client.ListSessions(context.Background())
+	if err != nil {
+		t.Fatalf("ListSessions returned error: %v", err)
+	}
+
+	if singularHits.Load() != 1 {
+		t.Fatalf("expected singular endpoint hit once, got %d", singularHits.Load())
+	}
+	if pluralHits.Load() != 0 {
+		t.Fatalf("expected plural endpoint to be skipped, got %d hits", pluralHits.Load())
+	}
+
+	if len(sessions) != 1 {
+		t.Fatalf("expected 1 session, got %d", len(sessions))
+	}
+	if sessions[0].ID != "ses-1" {
+		t.Fatalf("expected session id ses-1, got %q", sessions[0].ID)
+	}
+	if sessions[0].Directory != "/work/proj" {
+		t.Fatalf("expected directory /work/proj, got %q", sessions[0].Directory)
+	}
+	if sessions[0].CreatedAt.IsZero() {
+		t.Fatalf("expected CreatedAt parsed from time field")
+	}
+}
+
+func TestListSessionsFallsBackFromHTMLShellResponse(t *testing.T) {
+	var singularHits atomic.Int32
+	var pluralHits atomic.Int32
+
+	mux := http.NewServeMux()
+	mux.HandleFunc("/session", func(w http.ResponseWriter, r *http.Request) {
+		singularHits.Add(1)
+		w.Header().Set("Content-Type", "text/html; charset=utf-8")
+		_, _ = w.Write([]byte("shell"))
+	})
+	mux.HandleFunc("/sessions", func(w http.ResponseWriter, r *http.Request) {
+		pluralHits.Add(1)
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode([]map[string]interface{}{{"id": "ses-fallback", "directory": "/tmp/fallback"}})
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second})
+	sessions, err := client.ListSessions(context.Background())
+	if err != nil {
+		t.Fatalf("ListSessions returned error: %v", err)
+	}
+
+	if singularHits.Load() != 1 || pluralHits.Load() != 1 {
+		t.Fatalf("expected fallback behavior singular=1 plural=1, got singular=%d plural=%d", singularHits.Load(), pluralHits.Load())
+	}
+	if len(sessions) != 1 || sessions[0].ID != "ses-fallback" {
+		t.Fatalf("unexpected sessions payload: %+v", sessions)
+	}
+}
+
+func TestGetSessionFallbackAndValidationError(t *testing.T) {
+	var singularHits atomic.Int32
+	var pluralHits atomic.Int32
+
+	mux := http.NewServeMux()
+	mux.HandleFunc("/session/ses-42", func(w http.ResponseWriter, r *http.Request) {
+		singularHits.Add(1)
+		w.Header().Set("Content-Type", "text/html; charset=utf-8")
+		_, _ = w.Write([]byte("ui shell"))
+	})
+	mux.HandleFunc("/sessions/ses-42", func(w http.ResponseWriter, r *http.Request) {
+		pluralHits.Add(1)
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode(map[string]interface{}{
+			"id":        "ses-42",
+			"directory": "/work/fallback",
+			"slug":      "fallback",
+		})
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second})
+	session, err := client.GetSession(context.Background(), "ses-42")
+	if err != nil {
+		t.Fatalf("GetSession returned error: %v", err)
+	}
+	if session.ID != "ses-42" {
+		t.Fatalf("expected session id ses-42, got %q", session.ID)
+	}
+	if singularHits.Load() != 1 || pluralHits.Load() != 1 {
+		t.Fatalf("expected
fallback hits singular=1 plural=1, got singular=%d plural=%d", singularHits.Load(), pluralHits.Load())
+	}
+
+	if _, err := client.GetSession(context.Background(), ""); err == nil {
+		t.Fatalf("expected validation error for empty session id")
+	}
+}
+
+func TestSubscribeEventsParsesMultiLineSSEData(t *testing.T) {
+	mux := http.NewServeMux()
+	mux.HandleFunc("/event", func(w http.ResponseWriter, r *http.Request) {
+		w.Header().Set("Content-Type", "text/event-stream")
+		flusher := w.(http.Flusher)
+
+		_, _ = fmt.Fprint(w, "id: 1\n")
+		_, _ = fmt.Fprint(w, "event: message.part.delta\n")
+		_, _ = fmt.Fprint(w, "data: {\"type\":\"message.part.delta\",\"sessionID\":\"ses-1\",\n")
+		_, _ = fmt.Fprint(w, "data: \"part\":{\"delta\":\"hel\"}}\n\n")
+		flusher.Flush()
+
+		_, _ = fmt.Fprint(w, "id: 2\n")
+		_, _ = fmt.Fprint(w, "event: session.updated\n")
+		_, _ = fmt.Fprint(w, "data: {\"type\":\"session.updated\",\"sessionID\":\"ses-1\"}\n\n")
+		flusher.Flush()
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second})
+	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
+	defer cancel()
+
+	events, err := client.SubscribeEvents(ctx)
+	if err != nil {
+		t.Fatalf("SubscribeEvents returned error: %v", err)
+	}
+
+	first := <-events
+	if first.Type != "message.part.delta" {
+		t.Fatalf("expected first type message.part.delta, got %q", first.Type)
+	}
+	if first.ID != "1" {
+		t.Fatalf("expected first id 1, got %q", first.ID)
+	}
+	if first.SessionID != "ses-1" {
+		t.Fatalf("expected first session ses-1, got %q", first.SessionID)
+	}
+	if first.Delta != "hel" {
+		t.Fatalf("expected parsed delta hel, got %q", first.Delta)
+	}
+
+	second := <-events
+	if second.Type != "session.updated" {
+		t.Fatalf("expected second type session.updated, got %q", second.Type)
+	}
+}
+
+func TestSendMessageStreamsChunksFromEventEndpoint(t *testing.T) {
+	startEvents := make(chan struct{})
+	postedBody := make(chan MessageRequest, 1)
+
+	mux := http.NewServeMux()
+	mux.HandleFunc("/event", func(w http.ResponseWriter, r *http.Request) {
+		w.Header().Set("Content-Type", "text/event-stream")
+		flusher := w.(http.Flusher)
+		w.WriteHeader(http.StatusOK)
+		flusher.Flush()
+		<-startEvents
+
+		_, _ = fmt.Fprint(w, "event: message.part.delta\n")
+		_, _ = fmt.Fprint(w, "data: {\"type\":\"message.part.delta\",\"sessionID\":\"other\",\"delta\":\"ignore\"}\n\n")
+		flusher.Flush()
+
+		_, _ = fmt.Fprint(w, "event: message.part.delta\n")
+		_, _ = fmt.Fprint(w, "data: {\"type\":\"message.part.delta\",\"sessionID\":\"ses-1\",\"delta\":\"Hel\"}\n\n")
+		flusher.Flush()
+
+		_, _ = fmt.Fprint(w, "event: message.part.delta\n")
+		_, _ = fmt.Fprint(w, "data: {\"type\":\"message.part.delta\",\"sessionID\":\"ses-1\",\"part\":{\"delta\":\"lo\"}}\n\n")
+		flusher.Flush()
+
+		_, _ = fmt.Fprint(w, "event: session.idle\n")
+		_, _ = fmt.Fprint(w, "data: {\"type\":\"session.idle\",\"sessionID\":\"ses-1\"}\n\n")
+		flusher.Flush()
+	})
+
+	mux.HandleFunc("/session/ses-1/message", func(w http.ResponseWriter, r *http.Request) {
+		defer close(startEvents)
+		var req MessageRequest
+		if err := json.NewDecoder(r.Body).Decode(&req); err == nil {
+			postedBody <- req
+		}
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode(map[string]interface{}{"id": "msg-1"})
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{
+		Timeout:           2 * time.Second,
+		StreamIdleTimeout: 300 * time.Millisecond,
+	})
+
+	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
+	defer cancel()
+
+	chunks, err := client.SendMessage(ctx, "ses-1", "hello")
+	if err != nil {
+		t.Fatalf("SendMessage returned error: %v", err)
+	}
+
+	posted := <-postedBody
+	if len(posted.Parts) != 1 || posted.Parts[0].Text != "hello" {
+		t.Fatalf("unexpected posted request body: %+v", posted)
+	}
+
+	collected := make([]MessageChunk, 0, 4)
+	for {
+		select {
+		case chunk, ok := <-chunks:
+			if !ok {
+				goto done
+			}
+			collected = append(collected, chunk)
+		case <-ctx.Done():
+			t.Fatalf("timed out waiting for streamed chunks")
+		}
+	}
+
+done:
+	deltas := make([]string, 0, 2)
+	var doneChunk MessageChunk
+	for _, chunk := range collected {
+		if chunk.Delta != "" {
+			deltas = append(deltas, chunk.Delta)
+		}
+		if chunk.Done {
+			doneChunk = chunk
+		}
+	}
+
+	if strings.Join(deltas, "") != "Hello" {
+		t.Fatalf("expected streamed deltas to form Hello, got %q (%+v)", strings.Join(deltas, ""), deltas)
+	}
+	if !doneChunk.Done {
+		t.Fatalf("expected terminal done chunk, got %+v", collected)
+	}
+	if doneChunk.Type != "session.idle" {
+		t.Fatalf("expected done chunk type session.idle, got %q", doneChunk.Type)
+	}
+}
+
+func TestHealthHonorsTimeout(t *testing.T) {
+	mux := http.NewServeMux()
+	mux.HandleFunc("/global/health", func(w http.ResponseWriter, r *http.Request) {
+		time.Sleep(120 * time.Millisecond)
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode(map[string]interface{}{"healthy": true})
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{Timeout: 40 * time.Millisecond, MaxRetries: 0})
+	start := time.Now()
+	_, err := client.Health(context.Background())
+	elapsed := time.Since(start)
+
+	if err == nil {
+		t.Fatalf("expected timeout error, got nil")
+	}
+	if !errors.Is(err, context.DeadlineExceeded) && !strings.Contains(strings.ToLower(err.Error()), "timeout") {
+		t.Fatalf("expected timeout/deadline error, got %v", err)
+	}
+	if elapsed >= 300*time.Millisecond {
+		t.Fatalf("expected timeout to return quickly, elapsed=%s", elapsed)
+	}
+}
+
+func TestHealthRetriesTransientFailures(t *testing.T) {
+	var attempts atomic.Int32
+
+	mux := http.NewServeMux()
+	mux.HandleFunc("/global/health", func(w http.ResponseWriter, r *http.Request) {
+		count := attempts.Add(1)
+		if count < 3 
{ + w.WriteHeader(http.StatusServiceUnavailable) + _, _ = w.Write([]byte("retry me")) + return + } + w.Header().Set("Content-Type", "application/json") + _ = json.NewEncoder(w).Encode(map[string]interface{}{"healthy": true, "version": "1.2.17"}) + }) + + server := httptest.NewServer(mux) + defer server.Close() + + client := mustNewClient(t, server.URL, ClientConfig{ + Timeout: 2 * time.Second, + MaxRetries: 2, + RetryBackoff: time.Millisecond, + }) + + health, err := client.Health(context.Background()) + if err != nil { + t.Fatalf("Health returned error: %v", err) + } + if !health.Healthy { + t.Fatalf("expected healthy=true after retries") + } + if attempts.Load() != 3 { + t.Fatalf("expected exactly 3 attempts, got %d", attempts.Load()) + } +} + +func TestExecuteCommandFallbackAndValidationError(t *testing.T) { + var commandHits atomic.Int32 + var commandsHits atomic.Int32 + + mux := http.NewServeMux() + mux.HandleFunc("/session/ses-cmd/command", func(w http.ResponseWriter, r *http.Request) { + commandHits.Add(1) + w.WriteHeader(http.StatusNotFound) + }) + mux.HandleFunc("/session/ses-cmd/commands", func(w http.ResponseWriter, r *http.Request) { + commandsHits.Add(1) + w.Header().Set("Content-Type", "application/json") + _ = json.NewEncoder(w).Encode(map[string]interface{}{"exit_code": 7, "stderr": "boom", "success": false}) + }) + + server := httptest.NewServer(mux) + defer server.Close() + + client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second}) + result, err := client.ExecuteCommand(context.Background(), "ses-cmd", "ls") + if err != nil { + t.Fatalf("ExecuteCommand returned error: %v", err) + } + if result.ExitCode != 7 || result.Success { + t.Fatalf("unexpected command result: %+v", result) + } + if commandHits.Load() != 1 || commandsHits.Load() != 1 { + t.Fatalf("expected fallback hits command=1 commands=1, got command=%d commands=%d", commandHits.Load(), commandsHits.Load()) + } + + if _, err := 
client.ExecuteCommand(context.Background(), "ses-cmd", ""); err == nil { + t.Fatalf("expected validation error for empty command") + } +} + +func TestListFilesAndReadFileFallbackPaths(t *testing.T) { + var singularFilesHits atomic.Int32 + var pluralFilesHits atomic.Int32 + var queryReadHits atomic.Int32 + + mux := http.NewServeMux() + mux.HandleFunc("/session/ses-files/file", func(w http.ResponseWriter, r *http.Request) { + singularFilesHits.Add(1) + w.WriteHeader(http.StatusNotFound) + }) + mux.HandleFunc("/session/ses-files/files", func(w http.ResponseWriter, r *http.Request) { + pluralFilesHits.Add(1) + w.Header().Set("Content-Type", "application/json") + _ = json.NewEncoder(w).Encode(map[string]interface{}{"files": []map[string]interface{}{{"path": "README.md", "size": 6}}}) + }) + mux.HandleFunc("/session/ses-files/file/README.md", func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusNotFound) + }) + mux.HandleFunc("/session/ses-files/files/README.md", func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusNotFound) + }) + mux.HandleFunc("/file/README.md", func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusNotFound) + }) + mux.HandleFunc("/files/README.md", func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusNotFound) + }) + mux.HandleFunc("/file", func(w http.ResponseWriter, r *http.Request) { + if r.URL.Query().Get("sessionID") == "ses-files" && r.URL.Query().Get("path") == "README.md" { + queryReadHits.Add(1) + w.Header().Set("Content-Type", "application/json") + _ = json.NewEncoder(w).Encode(map[string]interface{}{"path": "README.md", "content": "query-read"}) + return + } + w.WriteHeader(http.StatusNotFound) + }) + + server := httptest.NewServer(mux) + defer server.Close() + + client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second}) + files, err := client.ListFiles(context.Background(), "ses-files", "*.md") + if err != nil { + t.Fatalf("ListFiles 
returned error: %v", err) + } + if len(files) != 1 || files[0].Path != "README.md" { + t.Fatalf("unexpected files payload: %+v", files) + } + if singularFilesHits.Load() != 1 || pluralFilesHits.Load() != 1 { + t.Fatalf("expected fallback hits file=1 files=1, got file=%d files=%d", singularFilesHits.Load(), pluralFilesHits.Load()) + } + + read, err := client.ReadFile(context.Background(), "ses-files", "README.md") + if err != nil { + t.Fatalf("ReadFile returned error: %v", err) + } + if read.Content != "query-read" { + t.Fatalf("expected query-read content, got %q", read.Content) + } + if queryReadHits.Load() != 1 { + t.Fatalf("expected query-path fallback hit once, got %d", queryReadHits.Load()) + } +} + +func TestConfigInvalidPayloadReturnsError(t *testing.T) { + mux := http.NewServeMux() + mux.HandleFunc("/config", func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "application/json") + _, _ = w.Write([]byte(`["bad-shape"]`)) + }) + + server := httptest.NewServer(mux) + defer server.Close() + + client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second}) + if _, err := client.Config(context.Background()); err == nil { + t.Fatalf("expected config shape error") + } +} + +func TestExecuteCommandListFilesReadFileAndConfig(t *testing.T) { + mux := http.NewServeMux() + mux.HandleFunc("/session/ses-1/command", func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "application/json") + _ = json.NewEncoder(w).Encode(map[string]interface{}{"exit_code": 0, "stdout": "ok", "success": true}) + }) + mux.HandleFunc("/session/ses-1/file", func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "application/json") + _ = json.NewEncoder(w).Encode(map[string]interface{}{ + "files": []map[string]interface{}{{"path": "README.md", "size": 5, "is_dir": false}}, + }) + }) + mux.HandleFunc("/session/ses-1/file/README.md", func(w http.ResponseWriter, r *http.Request) { + 
w.Header().Set("Content-Type", "text/plain") + _, _ = w.Write([]byte("hello")) + }) + mux.HandleFunc("/config", func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "application/json") + _ = json.NewEncoder(w).Encode(map[string]interface{}{"model": "claude", "provider": "anthropic"}) + }) + + server := httptest.NewServer(mux) + defer server.Close() + + client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second}) + ctx := context.Background() + + cmd, err := client.ExecuteCommand(ctx, "ses-1", "pwd") + if err != nil { + t.Fatalf("ExecuteCommand returned error: %v", err) + } + if !cmd.Success || cmd.ExitCode != 0 || cmd.Stdout != "ok" { + t.Fatalf("unexpected command result: %+v", cmd) + } + + files, err := client.ListFiles(ctx, "ses-1", "*.md") + if err != nil { + t.Fatalf("ListFiles returned error: %v", err) + } + if len(files) != 1 || files[0].Path != "README.md" { + t.Fatalf("unexpected files payload: %+v", files) + } + + file, err := client.ReadFile(ctx, "ses-1", "README.md") + if err != nil { + t.Fatalf("ReadFile returned error: %v", err) + } + if file.Content != "hello" { + t.Fatalf("expected plain-text file content hello, got %q", file.Content) + } + + conf, err := client.Config(ctx) + if err != nil { + t.Fatalf("Config returned error: %v", err) + } + if conf.Raw["model"] != "claude" { + t.Fatalf("unexpected config payload: %+v", conf) + } +} diff --git a/internal/daemon/spike_test.go b/internal/daemon/spike_test.go new file mode 100644 index 0000000..2f6f0ce --- /dev/null +++ b/internal/daemon/spike_test.go @@ -0,0 +1,580 @@ +package daemon + +import ( + "bufio" + "bytes" + "context" + "encoding/json" + "fmt" + "io" + "net" + "net/http" + "os/exec" + "path/filepath" + "runtime" + "strconv" + "strings" + "testing" + "time" +) + +const ( + spikeStartupTimeout = 15 * time.Second + spikeHTTPTimeout = 30 * time.Second +) + +type spikeDaemon struct { + baseURL string + client *http.Client +} + +type openAPIDoc struct { 
+ OpenAPI string `json:"openapi"` + Paths map[string]map[string]interface{} `json:"paths"` +} + +func requireSpikeDaemon(t *testing.T) *spikeDaemon { + t.Helper() + + binaryPath, err := exec.LookPath("opencode") + if err != nil { + t.Skipf("spike skipped: opencode binary not available in PATH: %v", err) + } + + port, err := reservePort() + if err != nil { + t.Skipf("spike skipped: unable to reserve local port: %v", err) + } + + ctx, cancel := context.WithCancel(context.Background()) + cmd := exec.CommandContext(ctx, binaryPath, "serve", "--port", strconv.Itoa(port)) + cmd.Dir = moduleRoot(t) + + var stderr bytes.Buffer + cmd.Stdout = io.Discard + cmd.Stderr = &stderr + + if err := cmd.Start(); err != nil { + cancel() + t.Skipf("spike skipped: failed to start opencode serve: %v", err) + } + + t.Cleanup(func() { + cancel() + _ = cmd.Wait() + }) + + baseURL := fmt.Sprintf("http://127.0.0.1:%d", port) + client := &http.Client{Timeout: 1200 * time.Millisecond} + + deadline := time.Now().Add(spikeStartupTimeout) + for time.Now().Before(deadline) { + req, _ := http.NewRequestWithContext(context.Background(), http.MethodGet, baseURL+"/global/health", nil) + resp, err := client.Do(req) + if err == nil { + body, _ := io.ReadAll(resp.Body) + _ = resp.Body.Close() + + if resp.StatusCode == http.StatusOK { + var health struct { + Healthy bool `json:"healthy"` + } + if json.Unmarshal(body, &health) == nil && health.Healthy { + return &spikeDaemon{ + baseURL: baseURL, + client: &http.Client{Timeout: spikeHTTPTimeout}, + } + } + } + } + time.Sleep(250 * time.Millisecond) + } + + t.Skipf( + "spike skipped: opencode serve did not become healthy at %s within %s; stderr=%q", + baseURL, + spikeStartupTimeout, + trimForLog(stderr.String(), 400), + ) + return nil +} + +func moduleRoot(t *testing.T) string { + t.Helper() + _, filename, _, ok := runtime.Caller(0) + if !ok { + t.Fatal("unable to resolve caller for module root") + } + return 
filepath.Clean(filepath.Join(filepath.Dir(filename), "..", "..")) +} + +func reservePort() (int, error) { + ln, err := net.Listen("tcp", "127.0.0.1:0") + if err != nil { + return 0, err + } + defer ln.Close() + addr, ok := ln.Addr().(*net.TCPAddr) + if !ok { + return 0, fmt.Errorf("unexpected address type %T", ln.Addr()) + } + return addr.Port, nil +} + +func trimForLog(s string, max int) string { + s = strings.TrimSpace(s) + if len(s) <= max { + return s + } + return s[:max] + "..." +} + +func mustCreateSession(t *testing.T, d *spikeDaemon) map[string]interface{} { + t.Helper() + + req, err := http.NewRequestWithContext(context.Background(), http.MethodPost, d.baseURL+"/session", nil) + if err != nil { + t.Fatalf("create-session request build failed: %v", err) + } + + resp, err := d.client.Do(req) + if err != nil { + t.Fatalf("create-session request failed: %v", err) + } + defer resp.Body.Close() + + body, err := io.ReadAll(resp.Body) + if err != nil { + t.Fatalf("create-session response read failed: %v", err) + } + + if resp.StatusCode != http.StatusOK { + t.Fatalf("create-session unexpected status=%d body=%s", resp.StatusCode, string(body)) + } + + var payload map[string]interface{} + if err := json.Unmarshal(body, &payload); err != nil { + t.Fatalf("create-session response is not JSON: %v; body=%s", err, string(body)) + } + + if stringField(payload, "id") == "" { + t.Fatalf("create-session response missing id field: %v", payload) + } + + return payload +} + +func stringField(payload map[string]interface{}, key string) string { + v, ok := payload[key] + if !ok { + return "" + } + s, _ := v.(string) + return s +} + +func hasAnyKey(payload map[string]interface{}, keys ...string) bool { + for _, key := range keys { + if _, ok := payload[key]; ok { + return true + } + } + return false +} + +func waitForEventDataMatch(ctx context.Context, d *spikeDaemon, match func(data string) bool) (matched bool, dataLines []string, err error) { + req, err := 
http.NewRequestWithContext(ctx, http.MethodGet, d.baseURL+"/event", nil) + if err != nil { + return false, nil, err + } + req.Header.Set("Accept", "text/event-stream") + + resp, err := d.client.Do(req) + if err != nil { + if ctx.Err() != nil { + return false, nil, nil + } + return false, nil, err + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusOK { + body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024)) + return false, nil, fmt.Errorf("event stream status=%d body=%s", resp.StatusCode, string(body)) + } + + scanner := bufio.NewScanner(resp.Body) + scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) + + for scanner.Scan() { + line := scanner.Text() + if !strings.HasPrefix(line, "data: ") { + continue + } + data := strings.TrimSpace(strings.TrimPrefix(line, "data: ")) + if data == "" { + continue + } + dataLines = append(dataLines, data) + if match(data) { + return true, dataLines, nil + } + } + + if scanErr := scanner.Err(); scanErr != nil && ctx.Err() == nil { + return false, dataLines, scanErr + } + + return false, dataLines, nil +} + +func patchSessionTitle(t *testing.T, d *spikeDaemon, sessionID, title string) { + t.Helper() + body := strings.NewReader(fmt.Sprintf(`{"title":%q}`, title)) + req, err := http.NewRequestWithContext(context.Background(), http.MethodPatch, d.baseURL+"/session/"+sessionID, body) + if err != nil { + t.Fatalf("patch-session request build failed: %v", err) + } + req.Header.Set("Content-Type", "application/json") + + resp, err := d.client.Do(req) + if err != nil { + t.Fatalf("patch-session request failed: %v", err) + } + defer resp.Body.Close() + + bodyBytes, _ := io.ReadAll(io.LimitReader(resp.Body, 4096)) + if resp.StatusCode != http.StatusOK { + t.Fatalf("patch-session unexpected status=%d body=%s", resp.StatusCode, string(bodyBytes)) + } +} + +func TestSpikeDocEndpointOpenAPI(t *testing.T) { + d := requireSpikeDaemon(t) + + req, err := http.NewRequestWithContext(context.Background(), http.MethodGet, d.baseURL+"/doc", 
nil) + if err != nil { + t.Fatalf("request build failed: %v", err) + } + + resp, err := d.client.Do(req) + if err != nil { + t.Fatalf("GET /doc failed: %v", err) + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusOK { + body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024)) + t.Fatalf("GET /doc unexpected status=%d body=%s", resp.StatusCode, string(body)) + } + + var spec openAPIDoc + if err := json.NewDecoder(resp.Body).Decode(&spec); err != nil { + t.Fatalf("GET /doc returned non-JSON payload: %v", err) + } + + if spec.OpenAPI == "" { + t.Fatalf("openapi version missing in /doc payload") + } + + if len(spec.Paths) == 0 { + t.Fatalf("/doc paths section is empty") + } + + requiredPaths := []string{ + "/event", + "/project/current", + "/session", + "/session/{sessionID}", + "/session/{sessionID}/message", + } + + for _, path := range requiredPaths { + if _, ok := spec.Paths[path]; !ok { + t.Fatalf("required path missing from /doc: %s", path) + } + } + + t.Logf("/doc openapi=%s path_count=%d", spec.OpenAPI, len(spec.Paths)) +} + +func TestSpikeCreateSessionShape(t *testing.T) { + d := requireSpikeDaemon(t) + payload := mustCreateSession(t, d) + + id := stringField(payload, "id") + directory := stringField(payload, "directory") + + if id == "" { + t.Fatalf("session create response missing id: %v", payload) + } + if directory == "" { + t.Fatalf("session create response missing directory: %v", payload) + } + + shape := map[string]bool{ + "id": hasAnyKey(payload, "id"), + "daemon_port": hasAnyKey(payload, "daemonPort", "daemon_port", "port"), + "workspace_path": hasAnyKey(payload, "workspacePath", "workspace_path", "directory"), + "status": hasAnyKey(payload, "status"), + "created_at": hasAnyKey(payload, "createdAt", "created_at", "time"), + "last_activity": hasAnyKey(payload, "lastActivity", "last_activity"), + "attached_clients": hasAnyKey(payload, "attachedClients", "attached_clients"), + } + + t.Logf("create-session field coverage=%v", shape) +} + 
+func TestSpikeSessionMessagesEndpoints(t *testing.T) { + d := requireSpikeDaemon(t) + payload := mustCreateSession(t, d) + sessionID := stringField(payload, "id") + + reqPlural, err := http.NewRequestWithContext(context.Background(), http.MethodGet, d.baseURL+"/session/"+sessionID+"/messages", nil) + if err != nil { + t.Fatalf("plural endpoint request build failed: %v", err) + } + reqPlural.Header.Set("Accept", "text/event-stream") + + respPlural, err := d.client.Do(reqPlural) + if err != nil { + t.Fatalf("GET /session/{id}/messages failed: %v", err) + } + pluralBody, _ := io.ReadAll(io.LimitReader(respPlural.Body, 2048)) + _ = respPlural.Body.Close() + + pluralContentType := respPlural.Header.Get("Content-Type") + t.Logf("plural messages endpoint status=%d content-type=%q body-prefix=%q", respPlural.StatusCode, pluralContentType, trimForLog(string(pluralBody), 220)) + + reqSingular, err := http.NewRequestWithContext(context.Background(), http.MethodGet, d.baseURL+"/session/"+sessionID+"/message", nil) + if err != nil { + t.Fatalf("singular endpoint request build failed: %v", err) + } + + respSingular, err := d.client.Do(reqSingular) + if err != nil { + t.Fatalf("GET /session/{id}/message failed: %v", err) + } + defer respSingular.Body.Close() + + if respSingular.StatusCode != http.StatusOK { + body, _ := io.ReadAll(io.LimitReader(respSingular.Body, 2048)) + t.Fatalf("GET /session/{id}/message unexpected status=%d body=%s", respSingular.StatusCode, string(body)) + } + + var messages []interface{} + if err := json.NewDecoder(respSingular.Body).Decode(&messages); err != nil { + t.Fatalf("GET /session/{id}/message did not return JSON list: %v", err) + } + + t.Logf("singular message endpoint returned %d entries", len(messages)) +} + +func TestSpikePostMessageAndTokenEvents(t *testing.T) { + d := requireSpikeDaemon(t) + payload := mustCreateSession(t, d) + sessionID := stringField(payload, "id") + + eventCtx, cancelEvents := context.WithTimeout(context.Background(), 
30*time.Second) + defer cancelEvents() + + type eventResult struct { + matched bool + lines []string + err error + } + + eventCh := make(chan eventResult, 1) + go func() { + matched, lines, err := waitForEventDataMatch(eventCtx, d, func(data string) bool { + return strings.Contains(data, sessionID) && strings.Contains(data, `"message.part.delta"`) + }) + eventCh <- eventResult{matched: matched, lines: lines, err: err} + }() + + time.Sleep(600 * time.Millisecond) + + body := strings.NewReader(`{"parts":[{"type":"text","text":"Reply with exactly: spike-pong"}]}`) + req, err := http.NewRequestWithContext(context.Background(), http.MethodPost, d.baseURL+"/session/"+sessionID+"/message", body) + if err != nil { + t.Fatalf("POST /session/{id}/message request build failed: %v", err) + } + req.Header.Set("Content-Type", "application/json") + req.Header.Set("Accept", "application/json") + + resp, err := d.client.Do(req) + if err != nil { + t.Skipf("spike skipped token assertion: prompt call failed (%v)", err) + } + defer resp.Body.Close() + + respBody, _ := io.ReadAll(io.LimitReader(resp.Body, 2*1024*1024)) + if resp.StatusCode != http.StatusOK { + t.Fatalf("POST /session/{id}/message unexpected status=%d body=%s", resp.StatusCode, trimForLog(string(respBody), 400)) + } + + if ct := resp.Header.Get("Content-Type"); !strings.Contains(ct, "application/json") { + t.Fatalf("POST /session/{id}/message expected application/json response, got %q", ct) + } + + var messagePayload map[string]interface{} + if err := json.Unmarshal(respBody, &messagePayload); err != nil { + t.Fatalf("POST /session/{id}/message response was not JSON: %v", err) + } + + result := <-eventCh + if result.err != nil { + t.Skipf("spike skipped token assertion: event stream error: %v", result.err) + } + if !result.matched { + t.Skipf("spike skipped token assertion: no message.part.delta event observed for session %s within timeout (events=%d)", sessionID, len(result.lines)) + } + + t.Logf("observed 
message.part.delta events for session=%s (total_event_data_lines=%d)", sessionID, len(result.lines))
+}
+
+func TestSpikeEventEndpointReceivesSessionUpdates(t *testing.T) {
+	d := requireSpikeDaemon(t)
+	payload := mustCreateSession(t, d)
+	sessionID := stringField(payload, "id")
+
+	eventCtx, cancel := context.WithTimeout(context.Background(), 12*time.Second)
+	defer cancel()
+
+	type eventResult struct {
+		matched bool
+		lines   []string
+		err     error
+	}
+
+	eventCh := make(chan eventResult, 1)
+	go func() {
+		matched, lines, err := waitForEventDataMatch(eventCtx, d, func(data string) bool {
+			return strings.Contains(data, sessionID) && strings.Contains(data, `"session.updated"`)
+		})
+		eventCh <- eventResult{matched: matched, lines: lines, err: err}
+	}()
+
+	time.Sleep(500 * time.Millisecond)
+	patchSessionTitle(t, d, sessionID, "spike-event-update")
+
+	result := <-eventCh
+	if result.err != nil {
+		t.Fatalf("event stream failed: %v", result.err)
+	}
+	if !result.matched {
+		t.Fatalf("expected session.updated event for session %s; received %d data lines", sessionID, len(result.lines))
+	}
+
+	t.Logf("event stream delivered session.updated for session=%s", sessionID)
+}
+
+func TestSpikeMultiClientEventStreams(t *testing.T) {
+	d := requireSpikeDaemon(t)
+	payload := mustCreateSession(t, d)
+	sessionID := stringField(payload, "id")
+
+	ctx1, cancel1 := context.WithTimeout(context.Background(), 12*time.Second)
+	defer cancel1()
+	ctx2, cancel2 := context.WithTimeout(context.Background(), 12*time.Second)
+	defer cancel2()
+
+	type eventResult struct {
+		matched bool
+		lines   []string
+		err     error
+	}
+
+	stream1 := make(chan eventResult, 1)
+	stream2 := make(chan eventResult, 1)
+
+	go func() {
+		matched, lines, err := waitForEventDataMatch(ctx1, d, func(data string) bool {
+			return strings.Contains(data, sessionID) && strings.Contains(data, `"session.updated"`)
+		})
+		stream1 <- eventResult{matched: matched, lines: lines, err: err}
+	}()
+
+	go func() {
+		matched, lines, err := waitForEventDataMatch(ctx2, d, func(data string) bool {
+			return strings.Contains(data, sessionID) && strings.Contains(data, `"session.updated"`)
+		})
+		stream2 <- eventResult{matched: matched, lines: lines, err: err}
+	}()
+
+	time.Sleep(900 * time.Millisecond)
+	patchSessionTitle(t, d, sessionID, "spike-multi-client")
+
+	res1 := <-stream1
+	res2 := <-stream2
+
+	if res1.err != nil {
+		t.Fatalf("event stream client #1 failed: %v", res1.err)
+	}
+	if res2.err != nil {
+		t.Fatalf("event stream client #2 failed: %v", res2.err)
+	}
+	if !res1.matched || !res2.matched {
+		t.Fatalf("expected both event clients to receive session update: client1=%t client2=%t", res1.matched, res2.matched)
+	}
+
+	t.Logf("both event clients observed session.updated for session=%s", sessionID)
+}
+
+func TestSpikeSessionDetailFields(t *testing.T) {
+	d := requireSpikeDaemon(t)
+	payload := mustCreateSession(t, d)
+	sessionID := stringField(payload, "id")
+
+	req, err := http.NewRequestWithContext(context.Background(), http.MethodGet, d.baseURL+"/session/"+sessionID, nil)
+	if err != nil {
+		t.Fatalf("GET /session/{id} request build failed: %v", err)
+	}
+
+	resp, err := d.client.Do(req)
+	if err != nil {
+		t.Fatalf("GET /session/{id} request failed: %v", err)
+	}
+	defer resp.Body.Close()
+
+	if resp.StatusCode != http.StatusOK {
+		body, _ := io.ReadAll(io.LimitReader(resp.Body, 2048))
+		t.Fatalf("GET /session/{id} unexpected status=%d body=%s", resp.StatusCode, string(body))
+	}
+
+	body, err := io.ReadAll(resp.Body)
+	if err != nil {
+		t.Fatalf("GET /session/{id} read failed: %v", err)
+	}
+
+	var sessionPayload map[string]interface{}
+	if err := json.Unmarshal(body, &sessionPayload); err != nil {
+		t.Fatalf("GET /session/{id} response was not JSON: %v", err)
+	}
+
+	if stringField(sessionPayload, "id") == "" {
+		t.Fatalf("GET /session/{id} response missing id: %v", sessionPayload)
+	}
+
+	hasWorkingDirectory := hasAnyKey(sessionPayload, "directory", "worktree", "cwd")
+	hasFiles := hasAnyKey(sessionPayload, "files", "fileList", "file_list")
+	hasAgent := hasAnyKey(sessionPayload, "agent", "agents", "agentInfo", "agent_info")
+
+	t.Logf("session detail keys=%v", sortedKeys(sessionPayload))
+	t.Logf("session detail capability working_directory=%t files=%t agent=%t", hasWorkingDirectory, hasFiles, hasAgent)
+}
+
+func sortedKeys(m map[string]interface{}) []string {
+	keys := make([]string, 0, len(m))
+	for key := range m {
+		keys = append(keys, key)
+	}
+	for i := 0; i < len(keys)-1; i++ {
+		for j := i + 1; j < len(keys); j++ {
+			if keys[j] < keys[i] {
+				keys[i], keys[j] = keys[j], keys[i]
+			}
+		}
+	}
+	return keys
+}
diff --git a/internal/daemon/types.go b/internal/daemon/types.go
new file mode 100644
index 0000000..d23e257
--- /dev/null
+++ b/internal/daemon/types.go
@@ -0,0 +1,105 @@
+package daemon
+
+import (
+	"encoding/json"
+	"net/http"
+	"time"
+)
+
+type ClientConfig struct {
+	Timeout           time.Duration
+	MaxRetries        int
+	RetryBackoff      time.Duration
+	AuthToken         string
+	HTTPClient        *http.Client
+	StreamBuffer      int
+	StreamIdleTimeout time.Duration
+}
+
+type DaemonSession struct {
+	ID              string                 `json:"id"`
+	Title           string                 `json:"title,omitempty"`
+	Directory       string                 `json:"directory,omitempty"`
+	Status          string                 `json:"status,omitempty"`
+	CreatedAt       time.Time              `json:"createdAt,omitempty"`
+	LastActivity    time.Time              `json:"lastActivity,omitempty"`
+	DaemonPort      int                    `json:"daemonPort,omitempty"`
+	AttachedClients int                    `json:"attachedClients,omitempty"`
+	ProjectID       string                 `json:"projectID,omitempty"`
+	Slug            string                 `json:"slug,omitempty"`
+	Version         string                 `json:"version,omitempty"`
+	Raw             map[string]interface{} `json:"-"`
+}
+
+type MessagePart struct {
+	Type string `json:"type"`
+	Text string `json:"text"`
+}
+
+type MessageRequest struct {
+	Parts []MessagePart `json:"parts"`
+}
+
+type MessageChunk struct {
+	SessionID string                 `json:"sessionId,omitempty"`
+	MessageID string                 `json:"messageId,omitempty"`
+	Type      string                 `json:"type,omitempty"`
+	Delta     string                 `json:"delta,omitempty"`
+	Done      bool                   `json:"done,omitempty"`
+	Error     string                 `json:"error,omitempty"`
+	Timestamp time.Time              `json:"timestamp,omitempty"`
+	RawData   string                 `json:"rawData,omitempty"`
+	Payload   map[string]interface{} `json:"payload,omitempty"`
+}
+
+type ExecuteCommandRequest struct {
+	Command string `json:"command"`
+}
+
+type CommandResult struct {
+	ExitCode int                    `json:"exitCode"`
+	Success  bool                   `json:"success"`
+	Stdout   string                 `json:"stdout,omitempty"`
+	Stderr   string                 `json:"stderr,omitempty"`
+	Raw      map[string]interface{} `json:"-"`
+}
+
+type FileInfo struct {
+	Path    string                 `json:"path"`
+	Name    string                 `json:"name,omitempty"`
+	Size    int64                  `json:"size,omitempty"`
+	IsDir   bool                   `json:"isDir"`
+	Mode    string                 `json:"mode,omitempty"`
+	ModTime time.Time              `json:"modTime,omitempty"`
+	Raw     map[string]interface{} `json:"-"`
+}
+
+type FileContent struct {
+	Path     string `json:"path"`
+	Content  string `json:"content"`
+	Encoding string `json:"encoding,omitempty"`
+	RawBytes []byte `json:"-"`
+}
+
+type DaemonEvent struct {
+	ID        string                 `json:"id,omitempty"`
+	Type      string                 `json:"type,omitempty"`
+	SessionID string                 `json:"sessionId,omitempty"`
+	MessageID string                 `json:"messageId,omitempty"`
+	Timestamp time.Time              `json:"timestamp,omitempty"`
+	Delta     string                 `json:"delta,omitempty"`
+	RawData   string                 `json:"rawData,omitempty"`
+	Data      json.RawMessage        `json:"data,omitempty"`
+	Payload   map[string]interface{} `json:"payload,omitempty"`
+	Error     string                 `json:"error,omitempty"`
+}
+
+type HealthResponse struct {
+	Healthy bool                   `json:"healthy"`
+	Version string                 `json:"version,omitempty"`
+	Raw     map[string]interface{} `json:"-"`
+}
+
+type DaemonConfig struct {
+	Raw map[string]interface{} `json:"raw"`
+}
diff --git a/internal/errors/errors.go b/internal/errors/errors.go
new file mode 100644
index 0000000..3ff59de
--- /dev/null
+++ b/internal/errors/errors.go
@@ -0,0 +1,101 @@
+package errors
+
+import (
+	"context"
+	stderrors "errors"
+	"net/http"
+
+	"opencoderouter/internal/session"
+)
+
+var (
+	ErrSessionNotFound = stderrors.New("session not found")
+	ErrDaemonUnhealthy = stderrors.New("daemon unhealthy")
+	ErrAuthFailed      = stderrors.New("authentication failed")
+	ErrPortExhausted   = stderrors.New("no available ports")
+)
+
+func HTTPStatus(err error) int {
+	switch {
+	case isSessionNotFound(err):
+		return http.StatusNotFound
+	case isPortExhausted(err):
+		return http.StatusServiceUnavailable
+	case stderrors.Is(err, ErrAuthFailed):
+		return http.StatusUnauthorized
+	case stderrors.Is(err, ErrDaemonUnhealthy), stderrors.Is(err, session.ErrTerminalAttachDisabled):
+		return http.StatusServiceUnavailable
+	case stderrors.Is(err, context.Canceled):
+		return http.StatusRequestTimeout
+	case stderrors.Is(err, context.DeadlineExceeded):
+		return http.StatusGatewayTimeout
+	default:
+		return http.StatusInternalServerError
+	}
+}
+
+func Code(err error) string {
+	switch {
+	case isSessionNotFound(err):
+		return "SESSION_NOT_FOUND"
+	case stderrors.Is(err, session.ErrWorkspacePathRequired):
+		return "WORKSPACE_PATH_REQUIRED"
+	case stderrors.Is(err, session.ErrWorkspacePathInvalid):
+		return "WORKSPACE_PATH_INVALID"
+	case stderrors.Is(err, session.ErrSessionAlreadyExists):
+		return "SESSION_ALREADY_EXISTS"
+	case isPortExhausted(err):
+		return "NO_AVAILABLE_SESSION_PORTS"
+	case stderrors.Is(err, session.ErrSessionStopped):
+		return "SESSION_STOPPED"
+	case stderrors.Is(err, ErrAuthFailed):
+		return "AUTH_FAILED"
+	case stderrors.Is(err, ErrDaemonUnhealthy):
+		return "DAEMON_UNHEALTHY"
+	case stderrors.Is(err, session.ErrTerminalAttachDisabled):
+		return "TERMINAL_ATTACH_UNAVAILABLE"
+	case stderrors.Is(err, context.Canceled):
+		return "REQUEST_CANCELED"
+	case stderrors.Is(err, context.DeadlineExceeded):
+		return "REQUEST_TIMEOUT"
+	default:
+		return "INTERNAL_ERROR"
+	}
+}
+
+func Message(err error) string {
+	switch {
+	case isSessionNotFound(err):
+		return "session not found"
+	case stderrors.Is(err, session.ErrWorkspacePathRequired):
+		return "workspace path is required"
+	case stderrors.Is(err, session.ErrWorkspacePathInvalid):
+		return "workspace path is invalid"
+	case stderrors.Is(err, session.ErrSessionAlreadyExists):
+		return "session already exists"
+	case isPortExhausted(err):
+		return "no available session ports"
+	case stderrors.Is(err, session.ErrSessionStopped):
+		return "session is stopped"
+	case stderrors.Is(err, ErrAuthFailed):
+		return "authentication failed"
+	case stderrors.Is(err, ErrDaemonUnhealthy):
+		return "daemon unhealthy"
+	case stderrors.Is(err, session.ErrTerminalAttachDisabled):
+		return "terminal attachment is unavailable"
+	case stderrors.Is(err, context.Canceled):
+		return "request canceled"
+	case stderrors.Is(err, context.DeadlineExceeded):
+		return "request timeout"
+	default:
+		return "internal server error"
+	}
+}
+
+func isSessionNotFound(err error) bool {
+	return stderrors.Is(err, ErrSessionNotFound) || stderrors.Is(err, session.ErrSessionNotFound)
+}
+
+func isPortExhausted(err error) bool {
+	return stderrors.Is(err, ErrPortExhausted) || stderrors.Is(err, session.ErrNoAvailableSessionPorts)
+}
diff --git a/internal/tui/model/types.go b/internal/model/types.go
similarity index 84%
rename from internal/tui/model/types.go
rename to internal/model/types.go
index 98b84ff..0e97d07 100644
--- a/internal/tui/model/types.go
+++ b/internal/model/types.go
@@ -1,6 +1,10 @@
 package model
 
-import "time"
+import (
+	"time"
+
+	"opencoderouter/internal/registry"
+)
 
 // ActivityState captures a high-level activity bucket for a session.
 type ActivityState string
@@ -16,6 +20,33 @@ const (
 	ActivityUnknown ActivityState = "UNKNOWN"
 )
 
+// SessionState captures lifecycle state for control-plane sessions.
+type SessionState string
+
+const (
+	SessionStateActive  SessionState = "active"
+	SessionStateIdle    SessionState = "idle"
+	SessionStateStopped SessionState = "stopped"
+	SessionStateError   SessionState = "error"
+)
+
+// AttachMode identifies which client surface is attached to a session.
+type AttachMode string
+
+const (
+	AttachModeTerminal AttachMode = "terminal"
+	AttachModeBrowser  AttachMode = "browser"
+	AttachModeVSCode   AttachMode = "vscode"
+)
+
+// DaemonInfo describes a managed OpenCode daemon instance.
+type DaemonInfo struct {
+	Port    int    `json:"port"`
+	PID     int    `json:"pid"`
+	Health  bool   `json:"health"`
+	Version string `json:"version"`
+}
+
 // HostStatus represents remote availability from probe/discovery.
 type HostStatus string
@@ -87,6 +118,12 @@ type Session struct {
 	Activity ActivityState
 }
 
+// BackendSession combines a discovered backend with its sessions.
+type BackendSession struct {
+	Backend  registry.Backend `json:"backend"`
+	Sessions []Session        `json:"sessions"`
+}
+
 // JumpHop represents one hop in a ProxyJump chain.
 type JumpHop struct {
 	// Raw is the original hop string from ssh config.
diff --git a/internal/proxy/proxy.go b/internal/proxy/proxy.go
index ca33d8d..801b699 100644
--- a/internal/proxy/proxy.go
+++ b/internal/proxy/proxy.go
@@ -3,14 +3,15 @@ package proxy
 import (
 	"encoding/json"
 	"fmt"
-	"html/template"
 	"log/slog"
 	"net/http"
 	"net/http/httputil"
 	"net/url"
 	"strings"
+	"sync"
 	"time"
 
+	"opencoderouter/internal/auth"
 	"opencoderouter/internal/config"
 	"opencoderouter/internal/registry"
 )
@@ -22,22 +23,44 @@ import (
 //
 // Unmatched requests get the dashboard.
 type Router struct {
-	registry *registry.Registry
-	cfg      config.Config
-	logger   *slog.Logger
+	registry  *registry.Registry
+	cfg       config.Config
+	logger    *slog.Logger
+	handler   http.Handler
+	uiHandler http.Handler
+
+	wsMu           sync.Mutex
+	wsConnections  map[string]string
+	wsConnSeq      uint64
+	wsPingInterval time.Duration
+}
+
+func writeJSONResponse(w http.ResponseWriter, payload any) {
+	if err := json.NewEncoder(w).Encode(payload); err != nil {
+		slog.Default().Debug("failed to encode JSON response", "error", err)
+	}
 }
 
 // New creates a new Router.
-func New(reg *registry.Registry, cfg config.Config, logger *slog.Logger) *Router {
-	return &Router{
-		registry: reg,
-		cfg:      cfg,
-		logger:   logger,
+func New(reg *registry.Registry, cfg config.Config, logger *slog.Logger, uiHandler http.Handler) *Router {
+	rt := &Router{
+		registry:       reg,
+		cfg:            cfg,
+		logger:         logger,
+		wsConnections:  make(map[string]string),
+		wsPingInterval: defaultWSPingInterval,
+		uiHandler:      uiHandler,
 	}
+	rt.handler = auth.Middleware(http.HandlerFunc(rt.routeRequest), auth.LoadFromEnv())
+	return rt
 }
 
 // ServeHTTP implements http.Handler.
 func (rt *Router) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+	rt.handler.ServeHTTP(w, r)
+}
+
+func (rt *Router) routeRequest(w http.ResponseWriter, r *http.Request) {
 	// Try host-based routing first.
 	if slug := rt.slugFromHost(r.Host); slug != "" {
 		if backend, ok := rt.registry.Lookup(slug); ok {
@@ -46,6 +69,11 @@ func (rt *Router) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 		}
 	}
 
+	if rt.isWSRoute(r.URL.Path) {
+		rt.handleWSProxy(w, r)
+		return
+	}
+
 	// Try path-based routing: /{slug}/...
 	if slug, remainder := rt.slugFromPath(r.URL.Path); slug != "" {
 		if backend, ok := rt.registry.Lookup(slug); ok {
@@ -193,13 +221,13 @@ func (rt *Router) handleAPIBackends(w http.ResponseWriter, r *http.Request) {
 	}
 
 	w.Header().Set("Content-Type", "application/json")
-	json.NewEncoder(w).Encode(items)
+	writeJSONResponse(w, items)
 }
 
 // handleAPIHealth returns the router's own health status.
 func (rt *Router) handleAPIHealth(w http.ResponseWriter, r *http.Request) {
 	w.Header().Set("Content-Type", "application/json")
-	json.NewEncoder(w).Encode(map[string]interface{}{
+	writeJSONResponse(w, map[string]interface{}{
 		"healthy":  true,
 		"username": rt.cfg.Username,
 		"backends": rt.registry.Len(),
@@ -242,7 +270,7 @@ func (rt *Router) handleAPIResolve(w http.ResponseWriter, r *http.Request) {
 		}
 		w.Header().Set("Content-Type", "application/json")
 		w.WriteHeader(http.StatusNotFound)
-		json.NewEncoder(w).Encode(map[string]interface{}{
+		writeJSONResponse(w, map[string]interface{}{
 			"error":  "not_found",
 			"query":  query,
 			"detail": "no backend found for this project",
@@ -251,7 +279,7 @@ func (rt *Router) handleAPIResolve(w http.ResponseWriter, r *http.Request) {
 	}
 
 	w.Header().Set("Content-Type", "application/json")
-	json.NewEncoder(w).Encode(map[string]interface{}{
+	writeJSONResponse(w, map[string]interface{}{
 		"slug":         backend.Slug,
 		"project_name": backend.ProjectName,
 		"project_path": backend.ProjectPath,
@@ -264,108 +292,11 @@ func (rt *Router) handleAPIResolve(w http.ResponseWriter, r *http.Request) {
 	})
 }
 
-// handleDashboard renders an HTML page listing all discovered backends.
+// handleDashboard serves the dashboard UI.
 func (rt *Router) handleDashboard(w http.ResponseWriter, r *http.Request) {
-	backends := rt.registry.All()
-
-	type entry struct {
-		Slug        string
-		ProjectName string
-		ProjectPath string
-		Port        int
-		Version     string
-		Domain      string
-		PathURL     string
-		LastSeen    string
-		Healthy     bool
-	}
-
-	entries := make([]entry, 0, len(backends))
-	for _, b := range backends {
-		entries = append(entries, entry{
-			Slug:        b.Slug,
-			ProjectName: b.ProjectName,
-			ProjectPath: b.ProjectPath,
-			Port:        b.Port,
-			Version:     b.Version,
-			Domain:      rt.cfg.DomainFor(b.Slug),
-			PathURL:     fmt.Sprintf("/%s/", b.Slug),
-			LastSeen:    b.LastSeen.Format(time.RFC3339),
-			Healthy:     b.Healthy(rt.cfg.StaleAfter),
-		})
-	}
-
-	data := struct {
-		Username string
-		Entries  []entry
-		MDNS     bool
-	}{
-		Username: rt.cfg.Username,
-		Entries:  entries,
-		MDNS:     rt.cfg.EnableMDNS,
-	}
-
-	w.Header().Set("Content-Type", "text/html; charset=utf-8")
-	if err := dashboardTmpl.Execute(w, data); err != nil {
-		rt.logger.Error("dashboard render error", "error", err)
+	if rt.uiHandler != nil {
+		rt.uiHandler.ServeHTTP(w, r)
+	} else {
+		http.NotFound(w, r)
 	}
 }
-
-var dashboardTmpl = template.Must(template.New("dashboard").Parse(`
-[removed inline HTML dashboard template; markup not recoverable from this
-extraction. Recoverable content: a header line "User: {{.Username}} · mDNS:
-{{if .MDNS}}enabled{{else}}disabled{{end}} · JSON API"; an {{if .Entries}}
-table with columns Status | Project | Slug | Backend | Domain | Path |
-Version | Last Seen, each row rendering {{if .Healthy}}Healthy{{else}}Stale
-{{end}}, {{.ProjectName}}, {{.Slug}}, 127.0.0.1:{{.Port}}, {{.Domain}},
-{{.PathURL}}, {{.Version}}, {{.LastSeen}}; and an empty state "No OpenCode
-instances discovered yet. Scanning ports…" {{end}}]
-`))
diff --git a/internal/proxy/proxy_test.go b/internal/proxy/proxy_test.go
index 6e420f6..0cbb8e9 100644
--- a/internal/proxy/proxy_test.go
+++ b/internal/proxy/proxy_test.go
@@ -1,12 +1,17 @@
 package proxy
 
 import (
+	"bufio"
 	"encoding/json"
+	"fmt"
 	"io"
 	"log/slog"
+	"net"
 	"net/http"
 	"net/http/httptest"
+	"net/url"
 	"os"
+	"strconv"
 	"strings"
 	"testing"
 	"time"
@@ -27,7 +32,33 @@ func testLogger() *slog.Logger {
 }
 
 func newTestRouter(reg *registry.Registry) *Router {
-	return New(reg, testCfg(), testLogger())
+	mockUI := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		w.Header().Set("Content-Type", "text/html; charset=utf-8")
+		w.WriteHeader(http.StatusOK)
+
+		body := "OpenCode Router testuser"
+		for _, b := range reg.All() {
+			body += " " + b.Slug + " " + fmt.Sprint(b.Port) + " " + b.Version
+		}
+
+		if _, err := w.Write([]byte(body)); err != nil {
+			testLogger().Error("mock ui write failed", "error", err)
+		}
+	})
+	return New(reg, testCfg(), testLogger(), mockUI)
+}
+
+func mustPort(t *testing.T, rawURL string) int {
+	t.Helper()
+	u, err := url.Parse(rawURL)
+	if err != nil {
+		t.Fatalf("failed to parse URL %q: %v", rawURL, err)
+	}
+	port, err := strconv.Atoi(u.Port())
+	if err != nil {
+		t.Fatalf("failed to parse port from %q: %v", rawURL, err)
+	}
+	return port
 }
 
 // ---------------------------------------------------------------------------
@@ -109,7 +140,9 @@ func TestServeHTTP_HostRouting(t *testing.T) {
 	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 		w.Header().Set("X-Backend", "reached")
 		w.WriteHeader(http.StatusOK)
-		w.Write([]byte("hello from backend"))
+		if _, err := w.Write([]byte("hello from backend")); err != nil {
+			t.Fatalf("backend write failed: %v", err)
+		}
 	}))
 	defer backend.Close()
@@ -156,7 +189,9 @@ func TestServeHTTP_PathRouting(t *testing.T) {
 	// Start a fake backend that echoes the received path.
 	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 		w.WriteHeader(http.StatusOK)
-		w.Write([]byte("path=" + r.URL.Path))
+		if _, err := w.Write([]byte("path=" + r.URL.Path)); err != nil {
+			t.Fatalf("backend path write failed: %v", err)
+		}
 	}))
 	defer backend.Close()
@@ -187,6 +222,189 @@ func TestServeHTTP_PathRouting(t *testing.T) {
 	}
 }
 
+func TestWSRouteParsing(t *testing.T) {
+	rt := newTestRouter(registry.New(30*time.Second, testLogger()))
+
+	tests := []struct {
+		name      string
+		path      string
+		wantSlug  string
+		wantRest  string
+		wantMatch bool
+	}{
+		{"valid with nested path", "/ws/proj/echo/path", "proj", "/echo/path", true},
+		{"valid root path", "/ws/proj", "proj", "/", true},
+		{"valid trailing slash", "/ws/proj/", "proj", "/", true},
+		{"missing slug", "/ws/", "", "", false},
+		{"wrong prefix", "/proj/ws/echo", "", "", false},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			slug, rest, ok := rt.wsRoute(tt.path)
+			if ok != tt.wantMatch {
+				t.Fatalf("wsRoute(%q) match=%v, want %v", tt.path, ok, tt.wantMatch)
+			}
+			if slug != tt.wantSlug {
+				t.Fatalf("wsRoute(%q) slug=%q, want %q", tt.path, slug, tt.wantSlug)
+			}
+			if rest != tt.wantRest {
+				t.Fatalf("wsRoute(%q) remainder=%q, want %q", tt.path, rest, tt.wantRest)
+			}
+		})
+	}
+}
+
+func TestServeHTTP_WSRouteRequiresUpgrade(t *testing.T) {
+	reg := registry.New(30*time.Second, testLogger())
+	rt := newTestRouter(reg)
+
+	w := httptest.NewRecorder()
+	req := httptest.NewRequest(http.MethodGet, "/ws/proj/echo", nil)
+	rt.ServeHTTP(w, req)
+
+	if w.Code != http.StatusBadRequest {
+		t.Fatalf("expected 400, got %d", w.Code)
+	}
+	if !strings.Contains(strings.ToLower(w.Body.String()), "upgrade") {
+		t.Fatalf("expected upgrade error message, got %q", w.Body.String())
+	}
+}
+
+func TestServeHTTP_WSRouteInvalidSlug(t *testing.T) {
+	reg := registry.New(30*time.Second, testLogger())
+	rt := newTestRouter(reg)
+
+	w := httptest.NewRecorder()
+	req := httptest.NewRequest(http.MethodGet, "/ws/missing/echo", nil)
+	req.Header.Set("Connection", "Upgrade")
+	req.Header.Set("Upgrade", "websocket")
+	rt.ServeHTTP(w, req)
+
+	if w.Code != http.StatusNotFound {
+		t.Fatalf("expected 404, got %d", w.Code)
+	}
+	if !strings.Contains(w.Body.String(), `backend "missing" not found`) {
+		t.Fatalf("expected clear missing backend message, got %q", w.Body.String())
+	}
+}
+
+func TestServeHTTP_WSRouteProxyAndTrackConnection(t *testing.T) {
+	holdOpen := make(chan struct{})
+	receivedPath := make(chan string, 1)
+
+	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		receivedPath <- r.URL.Path
+
+		hj, ok := w.(http.Hijacker)
+		if !ok {
+			t.Error("response writer does not support hijacking")
+			return
+		}
+
+		conn, rw, err := hj.Hijack()
+		if err != nil {
+			t.Errorf("hijack failed: %v", err)
+			return
+		}
+
+		_, _ = rw.WriteString("HTTP/1.1 101 Switching Protocols\r\n")
+		_, _ = rw.WriteString("Connection: Upgrade\r\n")
+		_, _ = rw.WriteString("Upgrade: websocket\r\n")
+		_, _ = rw.WriteString("Sec-WebSocket-Accept: test\r\n\r\n")
+		_ = rw.Flush()
+
+		<-holdOpen
+		_ = conn.Close()
+	}))
+	defer backend.Close()
+
+	reg := registry.New(30*time.Second, testLogger())
+	reg.Upsert(mustPort(t, backend.URL), "proj", "/home/test/proj", "1.0")
+
+	rt := newTestRouter(reg)
+	srv := httptest.NewServer(rt)
+	defer srv.Close()
+
+	u, err := url.Parse(srv.URL)
+	if err != nil {
+		t.Fatalf("failed to parse server URL: %v", err)
+	}
+
+	conn, err := net.Dial("tcp", u.Host)
+	if err != nil {
+		t.Fatalf("dial failed: %v", err)
+	}
+
+	_, err = fmt.Fprintf(conn,
+		"GET /ws/proj/echo HTTP/1.1\r\nHost: %s\r\nConnection: Upgrade\r\nUpgrade: websocket\r\nSec-WebSocket-Version: 13\r\nSec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==\r\n\r\n",
+		u.Host,
+	)
+	if err != nil {
+		t.Fatalf("failed to write request: %v", err)
+	}
+
+	reader := bufio.NewReader(conn)
+	statusLine, err := reader.ReadString('\n')
+	if err != nil {
+		t.Fatalf("failed to read status line: %v", err)
+	}
+	if !strings.Contains(statusLine, "101") {
+		t.Fatalf("expected 101 response, got %q", statusLine)
+	}
+
+	for {
+		line, err := reader.ReadString('\n')
+		if err != nil {
+			t.Fatalf("failed to read response headers: %v", err)
+		}
+		if line == "\r\n" {
+			break
+		}
+	}
+
+	select {
+	case gotPath := <-receivedPath:
+		if gotPath != "/echo" {
+			t.Fatalf("expected proxied path /echo, got %q", gotPath)
+		}
+	case <-time.After(2 * time.Second):
+		t.Fatal("backend did not receive proxied websocket request")
+	}
+
+	deadline := time.Now().Add(2 * time.Second)
+	tracked := false
+	for time.Now().Before(deadline) {
+		rt.wsMu.Lock()
+		n := len(rt.wsConnections)
+		rt.wsMu.Unlock()
+		if n > 0 {
+			tracked = true
+			break
+		}
+		time.Sleep(10 * time.Millisecond)
+	}
+	if !tracked {
+		t.Fatal("expected websocket connection to be tracked while open")
+	}
+
+	close(holdOpen)
+	_ = conn.Close()
+
+	deadline = time.Now().Add(2 * time.Second)
+	for time.Now().Before(deadline) {
+		rt.wsMu.Lock()
+		n := len(rt.wsConnections)
+		rt.wsMu.Unlock()
+		if n == 0 {
+			return
+		}
+		time.Sleep(10 * time.Millisecond)
+	}
+
+	t.Fatal("expected websocket connection to be untracked after close")
+}
+
 // ---------------------------------------------------------------------------
 // Dashboard (fallback)
 // ---------------------------------------------------------------------------
@@ -263,7 +481,9 @@ func TestAPIHealth(t *testing.T) {
 	}
 
 	var resp map[string]interface{}
-	json.Unmarshal(w.Body.Bytes(), &resp)
+	if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
+		t.Fatalf("unmarshal health response: %v", err)
+	}
 
 	if resp["healthy"] != true {
 		t.Error("expected healthy=true")
@@ -294,7 +514,9 @@ func TestAPIBackends_Empty(t *testing.T) {
 	}
 
 	var items []interface{}
-	json.Unmarshal(w.Body.Bytes(), &items)
+	if err := json.Unmarshal(w.Body.Bytes(), &items); err != nil {
+		t.Fatalf("unmarshal backends response: %v", err)
+	}
 
 	if len(items) != 0 {
 		t.Errorf("expected empty list, got %d items", len(items))
 	}
@@ -311,7 +533,9 @@ func TestAPIBackends_WithEntries(t *testing.T) {
 	rt.ServeHTTP(w, req)
 
 	var items []map[string]interface{}
-	json.Unmarshal(w.Body.Bytes(), &items)
+	if err := json.Unmarshal(w.Body.Bytes(), &items); err != nil {
+		t.Fatalf("unmarshal backends entries response: %v", err)
+	}
 	if len(items) != 1 {
 		t.Fatalf("expected 1 item, got %d", len(items))
 	}
diff --git a/internal/proxy/ws.go b/internal/proxy/ws.go
new file mode 100644
index 0000000..535286a
--- /dev/null
+++ b/internal/proxy/ws.go
@@ -0,0 +1,101 @@
+package proxy
+
+import (
+	"fmt"
+	"net/http"
+	"strings"
+	"sync/atomic"
+	"time"
+)
+
+const defaultWSPingInterval = 30 * time.Second
+
+func (rt *Router) isWSRoute(path string) bool {
+	return path == "/ws" || path == "/ws/" || strings.HasPrefix(path, "/ws/")
+}
+
+func (rt *Router) wsRoute(path string) (slug, remainder string, ok bool) {
+	if !strings.HasPrefix(path, "/ws/") {
+		return "", "", false
+	}
+
+	trimmed := strings.TrimPrefix(path, "/ws/")
+	if trimmed == "" {
+		return "", "", false
+	}
+
+	parts := strings.SplitN(trimmed, "/", 2)
+	slug = parts[0]
+	if slug == "" {
+		return "", "", false
+	}
+
+	remainder = "/"
+	if len(parts) == 2 && parts[1] != "" {
+		remainder = "/" + parts[1]
+	}
+
+	return slug, remainder, true
+}
+
+func (rt *Router) handleWSProxy(w http.ResponseWriter, r *http.Request) {
+	if r.Method != http.MethodGet {
+		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
+		return
+	}
+
+	slug, remainder, ok := rt.wsRoute(r.URL.Path)
+	if !ok {
+		http.Error(w, "invalid websocket route: expected /ws/{backend-slug}/{path...}", http.StatusBadRequest)
+		return
+	}
+
+	if !isWebSocketUpgrade(r) {
+		http.Error(w, "websocket upgrade required", http.StatusBadRequest)
+		return
+	}
+
+	backend, found := rt.registry.Lookup(slug)
+	if !found {
+		http.Error(w, fmt.Sprintf("backend %q not found", slug), http.StatusNotFound)
+		return
+	}
+
+	connID := rt.trackWSConnection(slug)
+	defer rt.untrackWSConnection(connID)
+
+	rt.proxyTo(backend, w, r, remainder)
+}
+
+func isWebSocketUpgrade(r *http.Request) bool {
+	if !headerHasToken(r.Header.Get("Connection"), "upgrade") {
+		return false
+	}
+	if !headerHasToken(r.Header.Get("Upgrade"), "websocket") {
+		return false
+	}
+	return true
+}
+
+func headerHasToken(headerValue, token string) bool {
+	for _, part := range strings.Split(headerValue, ",") {
+		if strings.EqualFold(strings.TrimSpace(part), token) {
+			return true
+		}
+	}
+	return false
+}
+
+func (rt *Router) trackWSConnection(slug string) string {
+	connID := fmt.Sprintf("%s-%d", slug, atomic.AddUint64(&rt.wsConnSeq, 1))
+	rt.wsMu.Lock()
+	rt.wsConnections[connID] = slug
+	rt.wsMu.Unlock()
+	return connID
+}
+
+func (rt *Router) untrackWSConnection(connID string) {
+	rt.wsMu.Lock()
+	delete(rt.wsConnections, connID)
+	rt.wsMu.Unlock()
+}
diff --git a/internal/registry/registry.go b/internal/registry/registry.go
index d0eb449..1a6863a 100644
--- a/internal/registry/registry.go
+++ b/internal/registry/registry.go
@@ -30,6 +30,7 @@ type Registry struct {
 	mu         sync.RWMutex
 	backends   map[string]*Backend // slug → backend
 	byPort     map[int]string      // port → slug (for fast dedup)
+	sessions   map[string]map[string]SessionMetadata
 	staleAfter time.Duration
 	logger     *slog.Logger
 }
@@ -39,6 +40,7 @@ func New(staleAfter time.Duration, logger *slog.Logger) *Registry {
 	return &Registry{
 		backends:   make(map[string]*Backend),
 		byPort:     make(map[int]string),
+		sessions:   make(map[string]map[string]SessionMetadata),
 		staleAfter: staleAfter,
 		logger:     logger,
 	}
@@ -54,6 +56,7 @@ func (r *Registry) Upsert(port int, projectName, projectPath, version string) bo
 	// Check if this port was previously registered under a different slug.
 	if oldSlug, ok := r.byPort[port]; ok && oldSlug != slug {
 		delete(r.backends, oldSlug)
+		delete(r.sessions, oldSlug)
 		r.logger.Info("backend project changed", "port", port, "old_slug", oldSlug, "new_slug", slug)
 	}
@@ -102,6 +105,7 @@ func (r *Registry) Prune() []string {
 		if time.Since(b.LastSeen) > r.staleAfter {
 			delete(r.backends, slug)
 			delete(r.byPort, b.Port)
+			delete(r.sessions, slug)
 			r.logger.Info("backend removed (stale)", "slug", slug, "port", b.Port)
 			removed = append(removed, slug)
 		}
diff --git a/internal/registry/sessions.go b/internal/registry/sessions.go
new file mode 100644
index 0000000..5286328
--- /dev/null
+++ b/internal/registry/sessions.go
@@ -0,0 +1,134 @@
+package registry
+
+import (
+	"sort"
+	"strings"
+	"time"
+)
+
+type SessionMetadata struct {
+	ID              string    `json:"id"`
+	Title           string    `json:"title,omitempty"`
+	Directory       string    `json:"directory,omitempty"`
+	Status          string    `json:"status,omitempty"`
+	LastActivity    time.Time `json:"last_activity,omitempty"`
+	CreatedAt       time.Time `json:"created_at,omitempty"`
+	DaemonPort      int       `json:"daemon_port,omitempty"`
+	AttachedClients int       `json:"attached_clients,omitempty"`
+}
+
+func (r *Registry) UpsertSession(backendSlug string, session SessionMetadata) bool {
+	backendSlug = strings.TrimSpace(backendSlug)
+	session.ID = strings.TrimSpace(session.ID)
+	if backendSlug == "" || session.ID == "" {
+		return false
+	}
+
+	r.mu.Lock()
+	defer r.mu.Unlock()
+
+	if _, ok := r.backends[backendSlug]; !ok {
+		return false
+	}
+
+	backendSessions, ok := r.sessions[backendSlug]
+	if !ok {
+		backendSessions = make(map[string]SessionMetadata)
+		r.sessions[backendSlug] = backendSessions
+	}
+
+	_, existed := backendSessions[session.ID]
+	backendSessions[session.ID] = session
+	return !existed
+}
+
+func (r *Registry) ReplaceSessions(backendSlug string, sessions []SessionMetadata) {
+	backendSlug = strings.TrimSpace(backendSlug)
+	if backendSlug == "" {
+		return
+	}
+
+	r.mu.Lock()
+	defer r.mu.Unlock()
+
+	if _, ok := r.backends[backendSlug]; !ok {
+		return
+	}
+
+	replacement := make(map[string]SessionMetadata, len(sessions))
+	for _, session := range sessions {
+		session.ID = strings.TrimSpace(session.ID)
+		if session.ID == "" {
+			continue
+		}
+		replacement[session.ID] = session
+	}
+
+	if len(replacement) == 0 {
+		delete(r.sessions, backendSlug)
+		return
+	}
+
+	r.sessions[backendSlug] = replacement
+}
+
+func (r *Registry) RemoveSession(backendSlug, sessionID string) bool {
+	backendSlug = strings.TrimSpace(backendSlug)
+	sessionID = strings.TrimSpace(sessionID)
+	if backendSlug == "" || sessionID == "" {
+		return false
+	}
+
+	r.mu.Lock()
+	defer r.mu.Unlock()
+
+	backendSessions, ok := r.sessions[backendSlug]
+	if !ok {
+		return false
+	}
+	if _, ok := backendSessions[sessionID]; !ok {
+		return false
+	}
+
+	delete(backendSessions, sessionID)
+	if len(backendSessions) == 0 {
+		delete(r.sessions, backendSlug)
+	}
+	return true
+}
+
+func (r *Registry) ListSessions(backendSlug string) []SessionMetadata {
+	backendSlug = strings.TrimSpace(backendSlug)
+	if backendSlug == "" {
+		return nil
+	}
+
+	r.mu.RLock()
+	defer r.mu.RUnlock()
+
+	backendSessions, ok := r.sessions[backendSlug]
+	if !ok {
+		return nil
+	}
+
+	result := make([]SessionMetadata, 0, len(backendSessions))
+	for _, session := range backendSessions {
+		result = append(result, session)
+	}
+
+	sort.Slice(result, func(i, j int) bool {
+		return result[i].ID < result[j].ID
+	})
+	return result
+}
+
+func (r *Registry) RemoveSessionsForBackend(backendSlug string) {
+	backendSlug = strings.TrimSpace(backendSlug)
+	if backendSlug == "" {
+		return
+	}
+
+	r.mu.Lock()
+	defer r.mu.Unlock()
+	delete(r.sessions, backendSlug)
+}
diff --git a/internal/registry/sessions_test.go b/internal/registry/sessions_test.go
new file mode 100644
index 0000000..e4df316
--- /dev/null
+++ b/internal/registry/sessions_test.go
@@ -0,0 +1,83 @@
+package registry
+
+import (
+	"testing"
+	"time"
+)
+
+func TestSessionIndex_UpsertListRemove(t *testing.T) {
+	r := New(30*time.Second, testLogger())
+	r.Upsert(4096, "proj", "/home/alice/proj", "1.0")
+
+	created := r.UpsertSession("proj", SessionMetadata{ID: "s-1", Title: "first"})
+	if !created {
+		t.Fatal("expected first upsert to create session")
+	}
+
+	created = r.UpsertSession("proj", SessionMetadata{ID: "s-1", Title: "updated"})
+	if created {
+		t.Fatal("expected second upsert to update existing session")
+	}
+
+	list := r.ListSessions("proj")
+	if len(list) != 1 {
+		t.Fatalf("expected 1 session, got %d", len(list))
+	}
+	if list[0].Title != "updated" {
+		t.Fatalf("expected updated title, got %q", list[0].Title)
+	}
+
+	if !r.RemoveSession("proj", "s-1") {
+		t.Fatal("expected RemoveSession to return true")
+	}
+	if r.RemoveSession("proj", "s-1") {
+		t.Fatal("expected second RemoveSession to return false")
+	}
+	if len(r.ListSessions("proj")) != 0 {
+		t.Fatal("expected no sessions after remove")
+	}
+}
+
+func TestSessionIndex_ReplaceSessionsRemovesMissing(t *testing.T) {
+	r := New(30*time.Second, testLogger())
+	r.Upsert(4096, "proj", "/home/alice/proj", "1.0")
+
+	r.ReplaceSessions("proj", []SessionMetadata{{ID: "a"}, {ID: "b"}})
+	if got := len(r.ListSessions("proj")); got != 2 {
+		t.Fatalf("expected 2 sessions, got %d", got)
+	}
+
+	r.ReplaceSessions("proj", []SessionMetadata{{ID: "b", Title: "keep"}})
+	list := r.ListSessions("proj")
+	if len(list) != 1 {
+		t.Fatalf("expected 1 session after replacement, got %d", len(list))
+	}
+	if list[0].ID != "b" {
+		t.Fatalf("expected only session b to remain, got %q", list[0].ID)
+	}
+}
+
+func TestSessionIndex_RemovedWhenBackendPruned(t *testing.T) {
+	r := New(20*time.Millisecond, testLogger())
+	r.Upsert(4096, "proj", "/home/alice/proj", "1.0")
+	r.UpsertSession("proj", SessionMetadata{ID: "s-1"})
+
+	time.Sleep(40 * time.Millisecond)
+	r.Prune()
+
+	if len(r.ListSessions("proj")) != 0 {
+		t.Fatal("expected sessions to be cleared when backend is pruned")
+	}
+}
+
+func TestSessionIndex_RemovedWhenProjectChangesOnPort(t *testing.T) {
+	r := New(30*time.Second, testLogger())
+	r.Upsert(4096, "old", "/home/alice/old", "1.0")
+	r.UpsertSession("old", SessionMetadata{ID: "s-1"})
+
+	r.Upsert(4096, "new", "/home/alice/new", "1.0")
+
+	if len(r.ListSessions("old")) != 0 {
+		t.Fatal("expected old project sessions removed after port project change")
+	}
+}
diff --git a/internal/remote/discovery.go b/internal/remote/discovery.go
new file mode 100644
index 0000000..2a04fff
--- /dev/null
+++ b/internal/remote/discovery.go
@@ -0,0 +1,679 @@
+package remote
+
+import (
+	"bufio"
+	"context"
+	"errors"
+	"fmt"
+	"io"
+	"log/slog"
+	"net"
+	"net/url"
+	"os"
+	"os/user"
+	"path"
+	"path/filepath"
+	"sort"
+	"strconv"
+	"strings"
+	"time"
+
+	"opencoderouter/internal/model"
+)
+
+type DiscoveryService struct {
+	opts          DiscoveryOptions
+	runner        Runner
+	sshConfigPath string
+	logger        *slog.Logger
+}
+
+const maxSanitizedLogErrorRunes = 320
+
+func NewDiscoveryService(opts DiscoveryOptions, runner Runner, logger *slog.Logger) *DiscoveryService {
+	if runner == nil {
+		runner = ExecRunner{}
+	}
+	if logger == nil {
+		logger = slog.New(slog.NewTextHandler(io.Discard, nil))
+	}
+	sshConfigPath := strings.TrimSpace(opts.SSHConfigPath)
+	if sshConfigPath == "" {
+		sshConfigPath = defaultSSHConfigPath()
+	}
+	return &DiscoveryService{
+		opts:          opts,
+		runner:        runner,
+		sshConfigPath: sshConfigPath,
+		logger:        logger,
+	}
+}
+
+func (s *DiscoveryService) SetSSHConfigPath(path string) {
+	if strings.TrimSpace(path) == "" {
+		s.sshConfigPath = defaultSSHConfigPath()
+		return
+	}
+	s.sshConfigPath = path
+}
+
+func (s *DiscoveryService) Discover(ctx context.Context) ([]model.Host, error) {
+	startedAt := time.Now()
+	s.logger.Debug("starting host discovery",
+		"ssh_config_path", s.sshConfigPath,
+		"include_patterns_count", len(s.opts.Include),
+		"ignore_patterns_count", len(s.opts.Ignore),
+	)
+
+	aliases, err := s.loadHostAliases()
+	if err != nil {
+		s.logger.Error("host discovery failed",
+			"stage", "load_host_aliases",
+			"error", SanitizeLogError(err),
+		)
+		return nil, err
+	}
+	s.logger.Debug("loaded host aliases", "alias_count", len(aliases))
+
+	filtered := filterAliasesWithLogger(aliases, s.opts.Include, s.opts.Ignore, s.logger)
+	s.logger.Debug("discovery aliases after filtering", "filtered_count", len(filtered))
+
+	hosts := make([]model.Host, 0, len(filtered))
+	var probeErrs []error
+
+	for _, alias := range filtered {
+		select {
+		case <-ctx.Done():
+			err := fmt.Errorf("discover canceled: %w", ctx.Err())
+			s.logger.Error("host discovery failed",
+				"stage", "context_canceled",
+				"processed_hosts", len(hosts),
+				"error", SanitizeLogError(err),
+			)
+			return hosts, err
+		default:
+		}
+
+		h, resolveErr := s.resolveHost(ctx, alias)
+		if resolveErr != nil {
+			h = model.Host{
+				Name:      alias,
+				Label:     alias,
+				Status:    model.HostStatusError,
+				LastError: resolveErr.Error(),
+			}
+			probeErrs = append(probeErrs, fmt.Errorf("resolve host %q: %w", alias, resolveErr))
+		}
+
+		if override, ok := s.opts.Overrides[alias]; ok {
+			if override.Label != "" {
+				h.Label = override.Label
+			}
+			h.Priority = override.Priority
+			if override.OpencodePath != "" {
+				h.OpencodeBin = override.OpencodePath
+			}
+		}
+		if h.Label == "" {
+			h.Label = h.Name
+		}
+
+		hosts = append(hosts, h)
+	}
+
+	sort.Slice(hosts, func(i, j int) bool {
+		if hosts[i].Priority != hosts[j].Priority {
+			return hosts[i].Priority > hosts[j].Priority
+		}
+		return hosts[i].Name < hosts[j].Name
+	})
+
+	buildDependencyGraphWithLogger(hosts, s.logger)
+
+	if len(probeErrs) > 0 {
+		joinedErr := errors.Join(probeErrs...)
+		s.logger.Error("host discovery failed",
+			"stage", "resolve_hosts",
+			"host_count", len(hosts),
+			"failure_count", len(probeErrs),
+			"duration", time.Since(startedAt),
+			"error", SanitizeLogError(joinedErr),
+		)
+		return hosts, joinedErr
+	}
+
+	s.logger.Debug("host discovery complete",
+		"host_count", len(hosts),
+		"duration", time.Since(startedAt),
+	)
+
+	return hosts, nil
+}
+
+func (s *DiscoveryService) loadHostAliases() ([]string, error) {
+	s.logger.Debug("reading ssh config for host aliases", "path", s.sshConfigPath)
+
+	configPath, err := expandSSHPath(s.sshConfigPath)
+	if err != nil {
+		s.logger.Error("failed to expand ssh config path", "path", s.sshConfigPath, "error", SanitizeLogError(err))
+		return nil, fmt.Errorf("expand ssh config path %q: %w", s.sshConfigPath, err)
+	}
+
+	b, err := os.ReadFile(configPath)
+	if err != nil {
+		if os.IsNotExist(err) {
+			s.logger.Debug("ssh config file not found", "path", configPath, "alias_count", 0)
+			return nil, nil
+		}
+		s.logger.Error("failed to read ssh config", "path", configPath, "error", SanitizeLogError(err))
+		return nil, fmt.Errorf("read ssh config %q: %w", configPath, err)
+	}
+
+	expandedConfig, err := expandSSHConfigIncludes(configPath, b, nil)
+	if err != nil {
+		s.logger.Error("failed to expand ssh config includes", "path", configPath, "error", SanitizeLogError(err))
+		return nil, fmt.Errorf("expand includes for ssh config %q: %w", configPath, err)
+	}
+
+	aliases := parseSSHConfigHostsWithLogger(string(expandedConfig), s.logger)
+	s.logger.Debug("loaded host aliases from ssh config", "path", configPath, "alias_count", len(aliases))
+	return aliases, nil
+}
+
+func expandSSHConfigIncludes(configPath string, content []byte, visited map[string]struct{}) ([]byte, error) {
+	if visited == nil {
+		visited = make(map[string]struct{})
+	}
+
+	absPath, err := filepath.Abs(configPath)
+	if err != nil {
+		return nil, fmt.Errorf("resolve absolute path %q: %w", configPath, err)
+	}
+	canonicalPath :=
filepath.Clean(absPath) + if evaluatedPath, evalErr := filepath.EvalSymlinks(canonicalPath); evalErr == nil { + canonicalPath = evaluatedPath + } + + if _, seen := visited[canonicalPath]; seen { + return nil, nil + } + visited[canonicalPath] = struct{}{} + + parentDir := filepath.Dir(canonicalPath) + var out strings.Builder + + scanner := bufio.NewScanner(strings.NewReader(string(content))) + for scanner.Scan() { + rawLine := scanner.Text() + line := strings.TrimSpace(rawLine) + + includePatterns := parseSSHIncludePatterns(line) + if len(includePatterns) == 0 { + out.WriteString(rawLine) + out.WriteByte('\n') + continue + } + + for _, includePattern := range includePatterns { + resolvedPattern, resolveErr := resolveSSHIncludePattern(parentDir, includePattern) + if resolveErr != nil { + return nil, fmt.Errorf("resolve include pattern %q in %q: %w", includePattern, canonicalPath, resolveErr) + } + + matches, globErr := filepath.Glob(resolvedPattern) + if globErr != nil { + return nil, fmt.Errorf("expand include pattern %q in %q: %w", includePattern, canonicalPath, globErr) + } + + for _, includePath := range matches { + includeBytes, readErr := os.ReadFile(includePath) + if readErr != nil { + if os.IsNotExist(readErr) { + continue + } + return nil, fmt.Errorf("read included ssh config %q: %w", includePath, readErr) + } + + expandedInclude, includeErr := expandSSHConfigIncludes(includePath, includeBytes, visited) + if includeErr != nil { + return nil, includeErr + } + if len(expandedInclude) == 0 { + continue + } + + out.Write(expandedInclude) + if expandedInclude[len(expandedInclude)-1] != '\n' { + out.WriteByte('\n') + } + } + } + } + + if err := scanner.Err(); err != nil { + return nil, fmt.Errorf("scan ssh config %q: %w", canonicalPath, err) + } + + return []byte(out.String()), nil +} + +func parseSSHIncludePatterns(line string) []string { + fields := strings.Fields(line) + if len(fields) < 2 || !strings.EqualFold(fields[0], "include") { + return nil + } + + 
patterns := make([]string, 0, len(fields)-1) + for _, field := range fields[1:] { + if strings.HasPrefix(field, "#") { + break + } + + pattern := strings.Trim(field, "\"'") + if pattern == "" { + continue + } + + patterns = append(patterns, pattern) + } + + return patterns +} + +func resolveSSHIncludePattern(parentDir, includePattern string) (string, error) { + resolvedPattern, err := expandSSHPath(includePattern) + if err != nil { + return "", err + } + if !filepath.IsAbs(resolvedPattern) { + resolvedPattern = filepath.Join(parentDir, resolvedPattern) + } + + return filepath.Clean(resolvedPattern), nil +} + +func expandSSHPath(path string) (string, error) { + if path == "~" { + home, err := os.UserHomeDir() + if err != nil { + return "", err + } + return home, nil + } + + if strings.HasPrefix(path, "~/") { + home, err := os.UserHomeDir() + if err != nil { + return "", err + } + return filepath.Join(home, path[2:]), nil + } + + return path, nil +} + +func (s *DiscoveryService) resolveHost(ctx context.Context, alias string) (model.Host, error) { + s.logger.Debug("resolving host", "alias", alias) + s.logger.Debug("executing ssh -G", "alias", alias) + + out, err := s.runner.Run(ctx, "ssh", "-G", alias) + if err != nil { + s.logger.Error("failed to resolve host", + "alias", alias, + "error", SanitizeLogError(err), + ) + return model.Host{}, err + } + s.logger.Debug("ssh -G completed", "alias", alias, "output_bytes", len(out)) + + host := model.Host{ + Name: alias, + Address: alias, + User: currentUserName(), + Label: alias, + Status: model.HostStatusUnknown, + } + + scanner := bufio.NewScanner(strings.NewReader(string(out))) + for scanner.Scan() { + line := strings.TrimSpace(scanner.Text()) + if line == "" { + continue + } + parts := strings.Fields(line) + if len(parts) < 2 { + continue + } + + key := strings.ToLower(parts[0]) + value := strings.Join(parts[1:], " ") + switch key { + case "hostname": + host.Address = value + case "user": + host.User = value + case 
"proxyjump": + if value != "" && value != "none" { + host.ProxyJumpRaw = value + host.ProxyKind = model.ProxyKindJump + host.JumpChain = parseProxyJumpWithLogger(value, alias, s.logger) + } + case "proxycommand": + if value != "" && value != "none" { + host.ProxyCommand = value + if host.ProxyKind == "" || host.ProxyKind == model.ProxyKindNone { + host.ProxyKind = model.ProxyKindCommand + } + } + } + } + + if err := scanner.Err(); err != nil { + wrappedErr := fmt.Errorf("parse ssh -G output for %q: %w", alias, err) + s.logger.Error("failed to parse ssh -G output", + "alias", alias, + "error", SanitizeLogError(wrappedErr), + ) + return model.Host{}, wrappedErr + } + + s.logger.Debug("resolved host metadata", + "alias", alias, + "proxy_kind", host.ProxyKind, + "jump_hop_count", len(host.JumpChain), + "has_proxy_command", host.ProxyCommand != "", + ) + + return host, nil +} + +func ParseSSHConfigHosts(content string) []string { + return parseSSHConfigHostsWithLogger(content, nil) +} + +func parseSSHConfigHostsWithLogger(content string, logger *slog.Logger) []string { + if logger != nil { + logger.Debug("starting ssh config host parse", "content_bytes", len(content)) + } + + seen := make(map[string]struct{}) + aliases := make([]string, 0) + + scanner := bufio.NewScanner(strings.NewReader(content)) + for scanner.Scan() { + line := strings.TrimSpace(scanner.Text()) + if line == "" || strings.HasPrefix(line, "#") { + continue + } + + fields := strings.Fields(line) + if len(fields) < 2 || !strings.EqualFold(fields[0], "host") { + continue + } + + for _, candidate := range fields[1:] { + if strings.HasPrefix(candidate, "!") { + continue + } + if strings.ContainsAny(candidate, "*?") { + continue + } + if _, ok := seen[candidate]; ok { + continue + } + seen[candidate] = struct{}{} + aliases = append(aliases, candidate) + } + } + + if logger != nil { + logger.Debug("completed ssh config host parse", "alias_count", len(aliases)) + } + + return aliases +} + +func 
FilterAliases(aliases, includes, ignores []string) []string { + return filterAliasesWithLogger(aliases, includes, ignores, nil) +} + +func filterAliasesWithLogger(aliases, includes, ignores []string, logger *slog.Logger) []string { + if logger != nil { + logger.Debug("filtering host aliases", + "before_count", len(aliases), + "include_patterns_count", len(includes), + "ignore_patterns_count", len(ignores), + ) + } + + if len(includes) == 0 { + includes = []string{"*"} + } + + filtered := make([]string, 0, len(aliases)) + for _, alias := range aliases { + if !matchesAnyGlob(alias, includes) { + continue + } + if matchesAnyGlob(alias, ignores) { + continue + } + filtered = append(filtered, alias) + } + + if logger != nil { + logger.Debug("host alias filtering complete", + "before_count", len(aliases), + "after_count", len(filtered), + ) + } + + return filtered +} + +func matchesAnyGlob(candidate string, patterns []string) bool { + for _, pattern := range patterns { + matched, err := path.Match(pattern, candidate) + if err != nil { + if pattern == candidate { + return true + } + continue + } + if matched { + return true + } + } + return false +} + +func defaultSSHConfigPath() string { + home, err := os.UserHomeDir() + if err != nil || home == "" { + return ".ssh/config" + } + return filepath.Join(home, ".ssh", "config") +} + +func currentUserName() string { + u, err := user.Current() + if err != nil { + return "" + } + return u.Username +} + +func ParseProxyJump(raw string) []model.JumpHop { + return parseProxyJumpWithLogger(raw, "", nil) +} + +func parseProxyJumpWithLogger(raw, alias string, logger *slog.Logger) []model.JumpHop { + parts := strings.Split(raw, ",") + hops := make([]model.JumpHop, 0, len(parts)) + for _, part := range parts { + part = strings.TrimSpace(part) + if part == "" { + continue + } + hop := parseOneHop(part) + hops = append(hops, hop) + } + + if logger != nil { + if alias != "" { + logger.Debug("parsed proxy jump chain", + "alias", alias, + 
"hop_count", len(hops), + ) + } else { + logger.Debug("parsed proxy jump chain", "hop_count", len(hops)) + } + } + + return hops +} + +func parseOneHop(hop string) model.JumpHop { + j := model.JumpHop{Raw: hop} + + if strings.HasPrefix(hop, "ssh://") { + u, err := url.Parse(hop) + if err == nil { + j.Host = u.Hostname() + j.User = u.User.Username() + if p := u.Port(); p != "" { + j.Port, _ = strconv.Atoi(p) + } + return j + } + } + + userHost := hop + if at := strings.LastIndex(hop, "@"); at >= 0 { + j.User = hop[:at] + userHost = hop[at+1:] + } + + host, portStr, err := net.SplitHostPort(userHost) + if err == nil { + j.Host = host + j.Port, _ = strconv.Atoi(portStr) + } else { + j.Host = userHost + } + + return j +} + +func BuildDependencyGraph(hosts []model.Host) { + buildDependencyGraphWithLogger(hosts, nil) +} + +func buildDependencyGraphWithLogger(hosts []model.Host, logger *slog.Logger) { + startedAt := time.Now() + if logger != nil { + logger.Debug("building dependency graph", "host_count", len(hosts)) + } + + aliasIndex := make(map[string]int, len(hosts)) + addressIndex := make(map[string]int, len(hosts)) + for i, h := range hosts { + aliasIndex[h.Name] = i + if h.Address != "" { + addressIndex[h.Address] = i + } + } + + for i := range hosts { + if hosts[i].ProxyKind != model.ProxyKindJump || len(hosts[i].JumpChain) == 0 { + continue + } + + seen := make(map[string]bool) + for hi := range hosts[i].JumpChain { + hop := &hosts[i].JumpChain[hi] + alias := resolveHopAlias(hop.Host, aliasIndex, addressIndex) + if alias == "" { + hop.External = true + continue + } + hop.AliasRef = alias + if !seen[alias] { + seen[alias] = true + hosts[i].DependsOn = append(hosts[i].DependsOn, alias) + } + } + } + + edgeCount := 0 + for i := range hosts { + edgeCount += len(hosts[i].DependsOn) + } + if logger != nil { + logger.Debug("dependency graph edges resolved", "edge_count", edgeCount) + } + + for i := range hosts { + for _, dep := range hosts[i].DependsOn { + if idx, ok := 
aliasIndex[dep]; ok { + hosts[idx].Dependents = appendUnique(hosts[idx].Dependents, hosts[i].Name) + } + } + } + + if logger != nil { + logger.Debug("dependency graph build complete", + "host_count", len(hosts), + "edge_count", edgeCount, + "duration", time.Since(startedAt), + ) + } +} + +func resolveHopAlias(hopHost string, aliasIndex, addressIndex map[string]int) string { + if _, ok := aliasIndex[hopHost]; ok { + return hopHost + } + if idx, ok := addressIndex[hopHost]; ok { + for alias, i := range aliasIndex { + if i == idx { + return alias + } + } + } + return "" +} + +func appendUnique(slice []string, s string) []string { + for _, v := range slice { + if v == s { + return slice + } + } + return append(slice, s) +} + +func SanitizeLogError(err error) string { + if err == nil { + return "" + } + + msg := strings.TrimSpace(err.Error()) + msg = strings.NewReplacer("\r", " ", "\n", " ").Replace(msg) + msg = strings.Join(strings.Fields(msg), " ") + + lower := strings.ToLower(msg) + if idx := strings.Index(lower, "stderr:"); idx >= 0 { + msg = strings.TrimSpace(msg[:idx]) + " stderr: [redacted]" + } + if idx := strings.Index(strings.ToLower(msg), "stdout:"); idx >= 0 { + msg = strings.TrimSpace(msg[:idx]) + " stdout: [redacted]" + } + + runes := []rune(msg) + if len(runes) > maxSanitizedLogErrorRunes { + msg = strings.TrimSpace(string(runes[:maxSanitizedLogErrorRunes-1])) + "…" + } + + return msg +} diff --git a/internal/remote/discovery_test.go b/internal/remote/discovery_test.go new file mode 100644 index 0000000..0b3a5dd --- /dev/null +++ b/internal/remote/discovery_test.go @@ -0,0 +1,270 @@ +package remote + +import ( + "context" + "os" + "path/filepath" + "testing" +) + +type discoveryRunnerMock struct { + byAlias map[string]runResult +} + +type runResult struct { + stdout string + err error +} + +func (m discoveryRunnerMock) Run(_ context.Context, _ string, args ...string) ([]byte, error) { + if len(args) == 0 { + return nil, nil + } + alias := 
args[len(args)-1] + res, ok := m.byAlias[alias] + if !ok { + return []byte(""), nil + } + if res.err != nil { + return nil, res.err + } + return []byte(res.stdout), nil +} + +func TestParseSSHConfigHosts(t *testing.T) { + content := ` +Host * + ForwardAgent no + +Host prod-1 dev-1 backup-1 + User alice + +Host jump-? + User bob + +Host !ignored +` + + hosts := ParseSSHConfigHosts(content) + if len(hosts) != 3 { + t.Fatalf("expected 3 concrete hosts, got %d (%v)", len(hosts), hosts) + } + want := map[string]struct{}{"prod-1": {}, "dev-1": {}, "backup-1": {}} + for _, h := range hosts { + if _, ok := want[h]; !ok { + t.Fatalf("unexpected host alias %q", h) + } + } +} + +func TestLoadHostAliases_IncludeGlobAbsolute(t *testing.T) { + tmpDir := t.TempDir() + mainConfigPath := filepath.Join(tmpDir, "config") + includeDir := filepath.Join(tmpDir, "config.d") + + writeSSHConfigFile(t, mainConfigPath, "Host root-host\n User root\nInclude "+filepath.Join(includeDir, "*.conf")+"\n") + writeSSHConfigFile(t, filepath.Join(includeDir, "one.conf"), "Host include-one\n") + writeSSHConfigFile(t, filepath.Join(includeDir, "two.conf"), "Host include-two\n") + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + svc.SetSSHConfigPath(mainConfigPath) + + aliases, err := svc.loadHostAliases() + if err != nil { + t.Fatalf("load host aliases: %v", err) + } + + assertAliasSet(t, aliases, "root-host", "include-one", "include-two") +} + +func TestLoadHostAliases_IncludeRelativePath(t *testing.T) { + tmpDir := t.TempDir() + mainConfigPath := filepath.Join(tmpDir, "config") + relativeIncludePath := filepath.Join("includes", "relative.conf") + + writeSSHConfigFile(t, mainConfigPath, "Host root-host\nInclude "+relativeIncludePath+"\n") + writeSSHConfigFile(t, filepath.Join(tmpDir, relativeIncludePath), "Host relative-host\n") + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + svc.SetSSHConfigPath(mainConfigPath) + + aliases, err := 
svc.loadHostAliases() + if err != nil { + t.Fatalf("load host aliases: %v", err) + } + + assertAliasSet(t, aliases, "root-host", "relative-host") +} + +func TestLoadHostAliases_IncludeNested(t *testing.T) { + tmpDir := t.TempDir() + mainConfigPath := filepath.Join(tmpDir, "config") + levelOnePath := filepath.Join(tmpDir, "level-one.conf") + levelTwoPath := filepath.Join(tmpDir, "level-two.conf") + + writeSSHConfigFile(t, mainConfigPath, "Host root-host\nInclude "+levelOnePath+"\n") + writeSSHConfigFile(t, levelOnePath, "Host level-one-host\nInclude "+levelTwoPath+"\n") + writeSSHConfigFile(t, levelTwoPath, "Host level-two-host\n") + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + svc.SetSSHConfigPath(mainConfigPath) + + aliases, err := svc.loadHostAliases() + if err != nil { + t.Fatalf("load host aliases: %v", err) + } + + assertAliasSet(t, aliases, "root-host", "level-one-host", "level-two-host") +} + +func TestLoadHostAliases_IncludeCycleSafe(t *testing.T) { + tmpDir := t.TempDir() + mainConfigPath := filepath.Join(tmpDir, "a.conf") + otherConfigPath := filepath.Join(tmpDir, "b.conf") + + writeSSHConfigFile(t, mainConfigPath, "Host cycle-a\nInclude "+otherConfigPath+"\n") + writeSSHConfigFile(t, otherConfigPath, "Host cycle-b\nInclude "+mainConfigPath+"\n") + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + svc.SetSSHConfigPath(mainConfigPath) + + aliases, err := svc.loadHostAliases() + if err != nil { + t.Fatalf("load host aliases: %v", err) + } + + assertAliasSet(t, aliases, "cycle-a", "cycle-b") +} + +func TestLoadHostAliases_IncludeNonexistentGraceful(t *testing.T) { + tmpDir := t.TempDir() + mainConfigPath := filepath.Join(tmpDir, "config") + existingIncludePath := filepath.Join(tmpDir, "existing.conf") + + writeSSHConfigFile(t, mainConfigPath, "Host root-host\nInclude "+filepath.Join(tmpDir, "missing", "*.conf")+"\nInclude "+existingIncludePath+"\n") + writeSSHConfigFile(t, existingIncludePath, 
"Host existing-host\n") + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + svc.SetSSHConfigPath(mainConfigPath) + + aliases, err := svc.loadHostAliases() + if err != nil { + t.Fatalf("load host aliases: %v", err) + } + + assertAliasSet(t, aliases, "root-host", "existing-host") +} + +func TestLoadHostAliases_IncludeExpandsHomeDir(t *testing.T) { + tmpDir := t.TempDir() + homeDir := filepath.Join(tmpDir, "home") + t.Setenv("HOME", homeDir) + + mainConfigPath := filepath.Join(tmpDir, "config") + homeIncludeDir := filepath.Join(homeDir, ".ssh", "config.d") + writeSSHConfigFile(t, filepath.Join(homeIncludeDir, "home.conf"), "Host home-host\n") + writeSSHConfigFile(t, mainConfigPath, "Host root-host\nInclude ~/.ssh/config.d/*.conf\n") + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + svc.SetSSHConfigPath(mainConfigPath) + + aliases, err := svc.loadHostAliases() + if err != nil { + t.Fatalf("load host aliases: %v", err) + } + + assertAliasSet(t, aliases, "root-host", "home-host") +} + +func TestDiscover_WithFilteringAndOverrides(t *testing.T) { + tmpDir := t.TempDir() + sshPath := filepath.Join(tmpDir, "config") + configBody := ` +Host prod-1 dev-1 backup-1 + User alice +` + if err := os.WriteFile(sshPath, []byte(configBody), 0o600); err != nil { + t.Fatalf("write ssh config: %v", err) + } + + opts := DiscoveryOptions{ + Include: []string{"prod-*", "dev-*"}, + Ignore: []string{"backup-*"}, + Overrides: map[string]HostOverride{ + "prod-1": { + Label: "Production 1", + Priority: 1, + OpencodePath: "/usr/local/bin/opencode", + }, + }, + } + + runner := discoveryRunnerMock{byAlias: map[string]runResult{ + "prod-1": {stdout: "hostname 10.0.0.1\nuser deploy\n"}, + "dev-1": {stdout: "hostname 10.0.0.2\nuser dev\n"}, + }} + + svc := NewDiscoveryService(opts, runner, nil) + svc.SetSSHConfigPath(sshPath) + + hosts, err := svc.Discover(context.Background()) + if err != nil { + t.Fatalf("discover returned error: %v", err) + 
} + + if len(hosts) != 2 { + t.Fatalf("expected 2 hosts after filters, got %d", len(hosts)) + } + + if hosts[0].Name != "prod-1" { + t.Fatalf("expected first host to be prod-1 due to priority sort, got %q", hosts[0].Name) + } + if hosts[0].Label != "Production 1" { + t.Fatalf("expected override label, got %q", hosts[0].Label) + } + if hosts[0].OpencodeBin != "/usr/local/bin/opencode" { + t.Fatalf("expected override opencode path, got %q", hosts[0].OpencodeBin) + } +} + +func TestNewDiscoveryService_NilLoggerDefaultsToDiscard(t *testing.T) { + t.Parallel() + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + if svc == nil { + t.Fatal("expected discovery service to be constructed") + } + if svc.logger == nil { + t.Fatal("expected discovery service logger to default to non-nil discard logger") + } +} + +func writeSSHConfigFile(t *testing.T, filePath, body string) { + t.Helper() + + if err := os.MkdirAll(filepath.Dir(filePath), 0o755); err != nil { + t.Fatalf("create config directory %q: %v", filepath.Dir(filePath), err) + } + if err := os.WriteFile(filePath, []byte(body), 0o600); err != nil { + t.Fatalf("write config file %q: %v", filePath, err) + } +} + +func assertAliasSet(t *testing.T, got []string, want ...string) { + t.Helper() + + if len(got) != len(want) { + t.Fatalf("expected %d aliases, got %d (%v)", len(want), len(got), got) + } + + wantSet := make(map[string]struct{}, len(want)) + for _, alias := range want { + wantSet[alias] = struct{}{} + } + + for _, alias := range got { + if _, ok := wantSet[alias]; !ok { + t.Fatalf("unexpected alias %q in %v", alias, got) + } + } +} diff --git a/internal/remote/probe.go b/internal/remote/probe.go new file mode 100644 index 0000000..cbf7c55 --- /dev/null +++ b/internal/remote/probe.go @@ -0,0 +1,834 @@ +package remote + +import ( + "bytes" + "context" + "encoding/json" + "errors" + "fmt" + "io" + "log/slog" + "path/filepath" + "sort" + "strconv" + "strings" + "sync" + "time" + 
"opencoderouter/internal/model" +) + +type cacheEntry struct { + host model.Host + expiresAt time.Time +} + +type CacheStore struct { + mu sync.RWMutex + ttl time.Duration + nowFunc func() time.Time + entries map[string]cacheEntry +} + +func NewCacheStore(ttl time.Duration) *CacheStore { + return &CacheStore{ + ttl: ttl, + nowFunc: time.Now, + entries: make(map[string]cacheEntry), + } +} + +func (c *CacheStore) Get(key string) (model.Host, bool) { + c.mu.RLock() + entry, ok := c.entries[key] + c.mu.RUnlock() + if !ok { + return model.Host{}, false + } + if c.nowFunc().After(entry.expiresAt) { + c.mu.Lock() + delete(c.entries, key) + c.mu.Unlock() + return model.Host{}, false + } + return entry.host, true +} + +func (c *CacheStore) Set(key string, host model.Host) { + c.mu.Lock() + defer c.mu.Unlock() + c.entries[key] = cacheEntry{host: host, expiresAt: c.nowFunc().Add(c.ttl)} +} + +func (c *CacheStore) PurgeExpired() int { + now := c.nowFunc() + removed := 0 + c.mu.Lock() + defer c.mu.Unlock() + for key, entry := range c.entries { + if now.After(entry.expiresAt) { + delete(c.entries, key) + removed++ + } + } + return removed +} + +type ProbeService struct { + opts ProbeOptions + runner Runner + cache *CacheStore + nowFn func() time.Time + logger *slog.Logger +} + +func NewProbeService(opts ProbeOptions, runner Runner, cache *CacheStore, logger *slog.Logger) *ProbeService { + if runner == nil { + runner = ExecRunner{} + } + if logger == nil { + logger = slog.New(slog.NewTextHandler(io.Discard, nil)) + } + return &ProbeService{ + opts: opts, + runner: runner, + cache: cache, + nowFn: time.Now, + logger: logger, + } +} + +func (s *ProbeService) SetNowFunc(nowFn func() time.Time) { + if nowFn == nil { + s.nowFn = time.Now + return + } + s.nowFn = nowFn +} + +type probeJob struct { + index int + host model.Host +} + +type probeResult struct { + index int + host model.Host + err error +} + +const opencodeMissingSentinel = "__OCR_OPENCODE_MISSING__" + +func (s 
*ProbeService) ProbeHosts(ctx context.Context, hosts []model.Host) ([]model.Host, error) { + startedAt := time.Now() + workerCount := s.opts.MaxParallel + if workerCount < 1 { + workerCount = 1 + } + + s.logger.Debug("probe hosts started", + "host_count", len(hosts), + "worker_count", workerCount, + ) + + if len(hosts) == 0 { + s.logger.Debug("probe hosts completed", + "host_count", 0, + "result_count", 0, + "error_count", 0, + "duration_ms", time.Since(startedAt).Milliseconds(), + ) + return nil, nil + } + + if s.cache != nil { + s.cache.PurgeExpired() + } + + jumpProviders := jumpProviderSet(hosts) + if len(jumpProviders) > 0 { + s.transportPreflight(ctx, hosts, jumpProviders) + propagateBlocked(s.logger, hosts) + } + + updated := make([]model.Host, len(hosts)) + copy(updated, hosts) + jobs := make(chan probeJob) + // Buffered to len(hosts) so workers never block sending results: with an + // unbuffered channel, workers could all block on send while this goroutine + // is still dispatching jobs (deadlock), and a context cancel that abandons + // the receive loop would leak blocked workers. + results := make(chan probeResult, len(hosts)) + + for i := 0; i < workerCount; i++ { + go func() { + for job := range jobs { + jobCtx, cancel := s.hostProbeContext(ctx) + h, err := s.probeHost(jobCtx, job.host) + cancel() + results <- probeResult{index: job.index, host: h, err: err} + } + }() + } + + pending := 0 + for i, host := range hosts { + if host.Transport == model.TransportBlocked { + updated[i] = host + s.logger.Debug("probe host skipped blocked", + "host", host.Name, + "blocked_by", host.BlockedBy, + ) + continue + } + if s.cache != nil { + if cached, ok := s.cache.Get(host.Name); ok { + updated[i] = cached + s.logger.Debug("probe cache hit", "host", host.Name) + continue + } + s.logger.Debug("probe cache miss", "host", host.Name) + } + pending++ + jobs <- probeJob{index: i, host: host} + } + close(jobs) + + var probeErrs []error + for i := 0; i < pending; i++ { + select { + case <-ctx.Done(): + err := fmt.Errorf("probe canceled: %w", ctx.Err()) + probeErrs = append(probeErrs, err) + s.logger.Debug("probe host canceled", + "err_kind", errorKind(err), + "error", sanitizeErrorContext(err), + ) + case res := <-results: + updated[res.index] = res.host + if 
res.err != nil { + probeErrs = append(probeErrs, res.err) + } + if s.cache != nil { + s.cache.Set(res.host.Name, res.host) + } + } + } + + s.logger.Debug("probe hosts completed", + "host_count", len(hosts), + "result_count", len(updated), + "error_count", len(probeErrs), + "duration_ms", time.Since(startedAt).Milliseconds(), + ) + + if len(probeErrs) > 0 { + return updated, errors.Join(probeErrs...) + } + return updated, nil +} + +func (s *ProbeService) hostProbeContext(parent context.Context) (context.Context, context.CancelFunc) { + if s.opts.SSH.ConnectTimeout <= 0 { + return parent, func() {} + } + return context.WithTimeout(parent, time.Duration(s.opts.SSH.ConnectTimeout)*time.Second) +} + +func (s *ProbeService) scanPathsForHost(host model.Host) []string { + if override, ok := s.opts.Overrides[host.Name]; ok && len(override.ScanPaths) > 0 { + return override.ScanPaths + } + if len(s.opts.SessionScanPaths) > 0 { + return s.opts.SessionScanPaths + } + return []string{"~"} +} + +func (s *ProbeService) buildRemoteCmd(host model.Host) string { + paths := s.scanPathsForHost(host) + pathList := strings.Join(paths, " ") + + bin := host.OpencodeBin + if bin == "" { + bin = "opencode" + } + + remoteCmd := fmt.Sprintf( + `OC=$(command -v %s 2>/dev/null || echo "$HOME/.opencode/bin/%s"); `+ + `if [ -x "$OC" ]; then `+ + `find %s -maxdepth 2 -name .opencode -type d 2>/dev/null | while IFS= read -r d; do `+ + `(cd "$(dirname "$d")" && "$OC" session list --format json 2>/dev/null); `+ + `done; else printf '%s\n'; fi`, + bin, bin, pathList, opencodeMissingSentinel, + ) + + s.logger.Debug("probe remote command built", + "host", host.Name, + "cmd", sanitizeCommandForLog(remoteCmd, pathList), + ) + + return remoteCmd +} + +func (s *ProbeService) probeHost(ctx context.Context, host model.Host) (model.Host, error) { + startedAt := time.Now() + s.logger.Debug("probe host started", "host", host.Name) + + remoteCmd := s.buildRemoteCmd(host) + args := s.buildSSHArgs(host, remoteCmd) 
+ s.logger.Debug("probe ssh args built", + "host", host.Name, + "arg_count", len(args), + ) + + out, runErr := s.runner.Run(ctx, "ssh", args...) + var sessions []model.Session + var parseErr error + if runErr == nil && strings.TrimSpace(string(out)) != opencodeMissingSentinel { + sessions, parseErr = s.parseSessions(out, host.Name) + } + + result := classifyProbeResult( + host.Name, + out, + runErr, + parseErr, + runErr != nil && isAuthError(host.Name, runErr, s.logger), + ) + if result.err != nil { + host.Status = result.status + host.LastError = result.lastError + s.logger.Error("probe host failed", + "host", host.Name, + "status", host.Status, + "err_kind", result.errKind, + "error", result.logError, + "duration_ms", time.Since(startedAt).Milliseconds(), + ) + return host, result.err + } + + if s.opts.MaxDisplay > 0 && len(sessions) > s.opts.MaxDisplay { + sessions = sessions[:s.opts.MaxDisplay] + } + + host.Projects = groupSessionsByProject(sessions) + host.Status = result.status + host.LastSeen = s.nowFn() + host.LastError = "" + s.logger.Debug("probe host completed", + "host", host.Name, + "status", host.Status, + "sessions", len(sessions), + "duration_ms", time.Since(startedAt).Milliseconds(), + ) + + return host, nil +} + +type probeClassification struct { + status model.HostStatus + lastError string + err error + errKind string + logError string +} + +func classifyProbeResult(hostName string, output []byte, runErr, parseErr error, authRequired bool) probeClassification { + if runErr != nil { + if authRequired { + return probeClassification{ + status: model.HostStatusAuthRequired, + lastError: "password authentication required", + err: fmt.Errorf("probe host %q: auth required", hostName), + errKind: "auth", + logError: "authentication failed", + } + } + return probeClassification{ + status: model.HostStatusOffline, + lastError: runErr.Error(), + err: fmt.Errorf("probe host %q: %w", hostName, runErr), + errKind: errorKind(runErr), + logError: 
sanitizeErrorContext(runErr), + } + } + + if strings.TrimSpace(string(output)) == opencodeMissingSentinel { + err := fmt.Errorf("probe host %q: opencode binary not found", hostName) + return probeClassification{ + status: model.HostStatusOffline, + lastError: "opencode binary not found", + err: err, + errKind: "opencode_missing", + logError: "opencode binary not found", + } + } + + if parseErr != nil { + return probeClassification{ + status: model.HostStatusError, + lastError: parseErr.Error(), + err: fmt.Errorf("parse sessions for %q: %w", hostName, parseErr), + errKind: errorKind(parseErr), + logError: sanitizeErrorContext(parseErr), + } + } + + return probeClassification{status: model.HostStatusOnline} +} + +func (s *ProbeService) buildSSHArgs(host model.Host, remoteCmd string) []string { + args := make([]string, 0, 12) + if s.opts.SSH.BatchMode { + args = append(args, "-o", "BatchMode=yes") + } + if s.opts.SSH.ConnectTimeout > 0 { + args = append(args, "-o", "ConnectTimeout="+strconv.Itoa(s.opts.SSH.ConnectTimeout)) + } + if s.opts.SSH.ControlMaster != "" { + args = append(args, "-o", "ControlMaster="+s.opts.SSH.ControlMaster) + } + if s.opts.SSH.ControlPersist > 0 { + args = append(args, "-o", "ControlPersist="+strconv.Itoa(s.opts.SSH.ControlPersist)) + } + if s.opts.SSH.ControlPath != "" { + args = append(args, "-o", "ControlPath="+s.opts.SSH.ControlPath) + } + args = append(args, host.Name, remoteCmd) + return args +} + +type remoteSession struct { + ID string `json:"id"` + Project string `json:"project"` + Title string `json:"title"` + LastActivity string `json:"last_activity"` + Status string `json:"status"` + MessageCount int `json:"message_count"` + Agents []string `json:"agents"` + Updated json.Number `json:"updated"` + Created json.Number `json:"created"` + Directory string `json:"directory"` + ProjectID string `json:"projectId"` +} + +type remoteEnvelope struct { + Sessions []remoteSession `json:"sessions"` +} + +func (s *ProbeService) 
parseSessions(raw []byte, host string) ([]model.Session, error) { + trimmed := bytes.TrimSpace(raw) + if len(trimmed) == 0 { + s.logger.Debug("parse sessions decoded", + "host", host, + "records", 0, + "sessions", 0, + "raw_bytes", 0, + ) + return nil, nil + } + + var list []remoteSession + + dec := json.NewDecoder(bytes.NewReader(trimmed)) + for dec.More() { + var batch []remoteSession + if err := dec.Decode(&batch); err != nil { + var env remoteEnvelope + if json.Unmarshal(trimmed, &env) == nil { + list = env.Sessions + break + } + s.logger.Error("parse sessions failed", + "host", host, + "err_kind", "parse", + "error", "invalid session payload", + "raw_bytes", len(trimmed), + ) + return nil, err + } + list = append(list, batch...) + } + + now := s.nowFn() + thresholds := model.ActivityThresholds{ + Active: s.opts.ActiveThreshold, + Idle: s.opts.IdleThreshold, + } + + sessions := make([]model.Session, 0, len(list)) + for _, rs := range list { + status := mapSessionStatus(rs.Status) + if status == model.SessionStatusArchived && !s.opts.ShowArchived { + continue + } + lastActivity := resolveTimestamp(rs) + project := resolveProject(rs) + sessions = append(sessions, model.Session{ + ID: rs.ID, + Project: project, + Title: rs.Title, + Directory: rs.Directory, + LastActivity: lastActivity, + Status: status, + MessageCount: rs.MessageCount, + Agents: append([]string(nil), rs.Agents...), + Activity: model.ResolveActivityState(lastActivity, now, thresholds), + }) + } + + sortBy := strings.ToLower(strings.TrimSpace(s.opts.SortBy)) + if sortBy == "last_activity" { + sort.SliceStable(sessions, func(i, j int) bool { + return sessions[i].LastActivity.After(sessions[j].LastActivity) + }) + } + + s.logger.Debug("parse sessions decoded", + "host", host, + "records", len(list), + "sessions", len(sessions), + "raw_bytes", len(trimmed), + ) + return sessions, nil +} + +func resolveTimestamp(rs remoteSession) time.Time { + if rs.LastActivity != "" { + return 
parseTimestamp(rs.LastActivity) + } + if rs.Updated.String() != "" { + if ms, err := rs.Updated.Int64(); err == nil && ms > 0 { + return time.UnixMilli(ms) + } + } + if rs.Created.String() != "" { + if ms, err := rs.Created.Int64(); err == nil && ms > 0 { + return time.UnixMilli(ms) + } + } + return time.Time{} +} + +func resolveProject(rs remoteSession) string { + if rs.Project != "" { + return rs.Project + } + if rs.Directory != "" { + return filepath.Base(rs.Directory) + } + return "" +} + +func groupSessionsByProject(sessions []model.Session) []model.Project { + byName := make(map[string][]model.Session) + for _, session := range sessions { + projectName := session.Project + if strings.TrimSpace(projectName) == "" { + projectName = "(unknown)" + } + byName[projectName] = append(byName[projectName], session) + } + + projects := make([]model.Project, 0, len(byName)) + for name, grouped := range byName { + projects = append(projects, model.Project{Name: name, Sessions: grouped}) + } + sort.Slice(projects, func(i, j int) bool { + return projects[i].Name < projects[j].Name + }) + return projects +} + +func mapSessionStatus(status string) model.SessionStatus { + switch strings.ToLower(strings.TrimSpace(status)) { + case "active", "running": + return model.SessionStatusActive + case "idle": + return model.SessionStatusIdle + case "archived", "closed", "done": + return model.SessionStatusArchived + default: + return model.SessionStatusUnknown + } +} + +func parseTimestamp(value string) time.Time { + if strings.TrimSpace(value) == "" { + return time.Time{} + } + t, err := time.Parse(time.RFC3339, value) + if err != nil { + return time.Time{} + } + return t +} + +func isAuthError(host string, err error, logger *slog.Logger) bool { + if err == nil { + return false + } + msg := strings.ToLower(err.Error()) + authIndicators := []string{ + "permission denied", + "no more authentication methods", + "publickey,password", + "keyboard-interactive", + "too many authentication 
failures", + "authentication failed", + } + for _, indicator := range authIndicators { + if strings.Contains(msg, indicator) { + if logger != nil { + logger.Error("probe auth indicator detected", + "host", host, + "err_kind", "auth", + "error", "authentication failed", + ) + } + return true + } + } + return false +} + +func (s *ProbeService) AuthBootstrapCmd(host model.Host) string { + controlPath := s.opts.SSH.ControlPath + if controlPath == "" { + controlPath = "~/.ssh/ocr-%C" + } + persist := s.opts.SSH.ControlPersist + if persist <= 0 { + persist = 600 + } + timeout := s.opts.SSH.ConnectTimeout + if timeout <= 0 { + timeout = 10 + } + + cmd := fmt.Sprintf( + "ssh -o ControlMaster=yes -o ControlPath=%s -o ControlPersist=%d -o ConnectTimeout=%d -Nf %s", + controlPath, + persist, + timeout, + host.Name, + ) + return cmd +} + +func jumpProviderSet(hosts []model.Host) map[string]bool { + providers := make(map[string]bool) + for _, h := range hosts { + for _, dep := range h.DependsOn { + providers[dep] = true + } + } + return providers +} + +func (s *ProbeService) transportPreflight(ctx context.Context, hosts []model.Host, providers map[string]bool) { + startedAt := time.Now() + s.logger.Debug("transport preflight started", "provider_count", len(providers)) + + type preflightResult struct { + idx int + status model.TransportStatus + err error + dur time.Duration + } + + results := make(chan preflightResult) + count := 0 + for i, h := range hosts { + if !providers[h.Name] { + continue + } + count++ + go func(idx int, host model.Host) { + hostStarted := time.Now() + s.logger.Debug("transport preflight host started", "host", host.Name) + args := s.buildSSHArgs(host, "true") + _, err := s.runner.Run(ctx, "ssh", args...) 
+ if err == nil { + s.logger.Debug("transport preflight host result", + "host", host.Name, + "status", model.TransportReady, + "duration_ms", time.Since(hostStarted).Milliseconds(), + ) + results <- preflightResult{idx: idx, status: model.TransportReady, dur: time.Since(hostStarted)} + return + } + if isAuthError(host.Name, err, s.logger) { + s.logger.Debug("transport preflight host result", + "host", host.Name, + "status", model.TransportAuthRequired, + "err_kind", "auth", + "duration_ms", time.Since(hostStarted).Milliseconds(), + ) + results <- preflightResult{idx: idx, status: model.TransportAuthRequired, err: err, dur: time.Since(hostStarted)} + return + } + s.logger.Debug("transport preflight host result", + "host", host.Name, + "status", model.TransportUnreachable, + "err_kind", errorKind(err), + "duration_ms", time.Since(hostStarted).Milliseconds(), + ) + results <- preflightResult{idx: idx, status: model.TransportUnreachable, err: err, dur: time.Since(hostStarted)} + }(i, h) + } + + readyCount := 0 + failureCount := 0 + for j := 0; j < count; j++ { + res := <-results + hosts[res.idx].Transport = res.status + if res.err != nil { + hosts[res.idx].TransportError = res.err.Error() + failureCount++ + } else { + readyCount++ + } + } + s.logger.Debug("transport preflight completed", + "provider_count", count, + "ready_count", readyCount, + "failure_count", failureCount, + "duration_ms", time.Since(startedAt).Milliseconds(), + ) +} + +func propagateBlocked(logger *slog.Logger, hosts []model.Host) { + if logger == nil { + logger = slog.New(slog.NewTextHandler(io.Discard, nil)) + } + startedAt := time.Now() + blockedCount := 0 + + aliasIndex := make(map[string]int, len(hosts)) + for i, h := range hosts { + aliasIndex[h.Name] = i + } + + for i := range hosts { + if len(hosts[i].DependsOn) == 0 { + continue + } + var blockers []string + for _, dep := range hosts[i].DependsOn { + if idx, ok := aliasIndex[dep]; ok { + if hosts[idx].Transport != model.TransportReady && 
hosts[idx].Transport != model.TransportUnknown { + blockers = append(blockers, dep) + } + } + } + if len(blockers) > 0 { + hosts[i].Transport = model.TransportBlocked + hosts[i].BlockedBy = blockers + hosts[i].TransportError = fmt.Sprintf("blocked by: %s", strings.Join(blockers, ", ")) + blockedCount++ + logger.Debug("host transport blocked by dependency", + "host", hosts[i].Name, + "blocked_by", blockers, + ) + } + } + logger.Debug("dependency block propagation completed", + "host_count", len(hosts), + "blocked_count", blockedCount, + "duration_ms", time.Since(startedAt).Milliseconds(), + ) +} + +func sanitizeCommandForLog(cmd, pathList string) string { + sanitized := cmd + if strings.TrimSpace(pathList) != "" { + sanitized = strings.ReplaceAll(sanitized, pathList, "${value}`);
+  rendered = rendered.replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>');
+  rendered = rendered.replace(/\*([^*]+)\*/g, '<em>$1</em>');
+  rendered = rendered.replace(/\[([^\]]+)\]\((https?:\/\/[^)]+)\)/g, '<a href="$2">$1</a>');
+ return rendered;
+}
+
+function looksLikeDiff(code) {
+ return /^[-+]/m.test(code);
+}
+
+function renderDiffCode(code) {
+ return escapeHtml(code)
+ .split('\n')
+ .map((line) => {
+ if (line.startsWith('+')) {
+        return `<span class="diff-added">${line}</span>`;
+ }
+ if (line.startsWith('-')) {
+        return `<span class="diff-removed">${line}</span>`;
+ }
+ return line;
+ })
+ .join('\n');
+}
+
+function renderCodeBlock(language, code) {
+ const normalizedLanguage = (language || '').toLowerCase();
+ if (normalizedLanguage === 'diff' || looksLikeDiff(code)) {
+ const encoded = encodeURIComponent(code);
+    return `<div class="code-block diff"><pre><code>${renderDiffCode(code)}</code></pre><button class="apply-diff" data-diff="${encoded}">Apply diff</button></div>`;
+  }
+  return `<pre class="code-block"><code>${escapeHtml(code)}</code></pre>`;
+}
+
+function renderMarkdown(markdown) {
+ const normalized = normalizeDiffMarkdown(markdown || '');
+ const codeBlocks = [];
+ const tokenized = normalized.replace(/```([a-zA-Z0-9_-]+)?\n([\s\S]*?)```/g, (_full, language, code) => {
+ const index = codeBlocks.push({ language: language || '', code }) - 1;
+ return `@@CODE_BLOCK_${index}@@`;
+ });
+
+ const lines = tokenized.split('\n');
+ let html = '';
+ let inList = false;
+
+ for (const line of lines) {
+ const codeMatch = line.match(/^@@CODE_BLOCK_(\d+)@@$/);
+ if (codeMatch) {
+ if (inList) {
+        html += '</ul>';
+ inList = false;
+ }
+ const block = codeBlocks[Number(codeMatch[1])];
+ html += renderCodeBlock(block.language, block.code);
+ continue;
+ }
+
+ const trimmed = line.trim();
+ if (!trimmed) {
+ if (inList) {
+        html += '</ul>';
+ inList = false;
+ }
+ continue;
+ }
+
+ const heading = trimmed.match(/^(#{1,6})\s+(.*)$/);
+ if (heading) {
+ if (inList) {
+        html += '</ul>';
+ inList = false;
+ }
+ const level = heading[1].length;
+      html += `<h${level}>${renderInline(trimmed)}</h${level}>`;
+ continue; + } + + // List items: open a <ul> on the first bullet, close it when the run of bullets ends. + const bullet = trimmed.match(/^[-*]\s+(.*)$/); + if (bullet) { + if (!inList) { + html += '<ul>'; + inList = true; + } + html += `<li>${renderInline(bullet[1])}</li>`; + continue; + } + + if (inList) { + html += '</ul>'; + inList = false; + } + html += `<p>${renderInline(trimmed)}</p>`; + } + + if (inList) { + html += '</ul>'; + } + + return html; +} + +function firstString(value, fallback = '') { + if (typeof value === 'string' && value.trim()) { + return value.trim(); + } + return fallback; +} + +function extractToolCall(chunk) { + const type = firstString(chunk.type || ''); + const payload = chunk.payload && typeof chunk.payload === 'object' ? chunk.payload : null; + if (!payload) { + return null; + } + + const payloadType = firstString(payload.type || payload.kind || ''); + const name = firstString(payload.name || payload.tool || payload.toolName || payload.call || 'tool'); + if (!type.toLowerCase().includes('tool') && !payloadType.toLowerCase().includes('tool') && !payload.input && !payload.arguments) { + return null; + } + + return { + name, + input: payload.input || payload.arguments || payload.args || payload.params || payload + }; +} + +function renderToolCall(toolCall) { + const details = document.createElement('details'); + details.className = 'tool-call'; + + const summary = document.createElement('summary'); + summary.textContent = `Tool Call: ${firstString(toolCall.name, 'tool')}`; + details.appendChild(summary); + + const pre = document.createElement('pre'); + pre.textContent = JSON.stringify(toolCall.input, null, 2); + details.appendChild(pre); + + return details; +} + +function renderMessageNode(message) { + const container = document.createElement('section'); + container.className = `message ${message.role}`; + + const header = document.createElement('div'); + header.className = 'message-header'; + header.textContent = message.role.toUpperCase(); + container.appendChild(header); + + const body = document.createElement('div'); + body.className = 'message-body'; + if (message.role === 'assistant') { + body.innerHTML = renderMarkdown(message.content || ''); + } else { + body.textContent = message.content || ''; + } + container.appendChild(body); + + if (message.toolCalls && message.toolCalls.length > 0) { + const tools = 
document.createElement('div'); + tools.className = 'tool-calls'; + for (const toolCall of message.toolCalls) { + tools.appendChild(renderToolCall(toolCall)); + } + container.appendChild(tools); + } + + wireInteractiveElements(container); + return container; +} + +function wireInteractiveElements(root) { + const fileLinks = root.querySelectorAll('.file-ref'); + for (const link of fileLinks) { + link.addEventListener('click', (event) => { + event.preventDefault(); + const target = event.currentTarget; + const path = decodeURIComponent(target.getAttribute('data-file-path') || ''); + const lineRaw = decodeURIComponent(target.getAttribute('data-file-line') || ''); + const line = Number.parseInt(lineRaw, 10); + vscode.postMessage({ + type: 'openFile', + path, + line: Number.isFinite(line) ? line : undefined + }); + }); + } + + const applyButtons = root.querySelectorAll('.apply-diff'); + for (const button of applyButtons) { + button.addEventListener('click', (event) => { + const target = event.currentTarget; + const diff = decodeURIComponent(target.getAttribute('data-diff') || ''); + vscode.postMessage({ type: 'applyDiff', diff }); + }); + } +} + +function renderMessages() { + dom.messages.innerHTML = ''; + for (const message of state.messages) { + dom.messages.appendChild(renderMessageNode(message)); + } + dom.messages.scrollTop = dom.messages.scrollHeight; +} + +function updateHeader() { + if (!state.session) { + dom.title.textContent = 'No session selected'; + } else { + const description = state.session.workspacePath ? ` · ${state.session.workspacePath}` : ''; + dom.title.textContent = `${state.session.label || state.session.id}${description}`; + } + dom.streamState.textContent = state.streaming ? 
'streaming' : 'idle'; + dom.send.disabled = !state.session || state.streaming; +} + +function appendMessage(role, content) { + const message = { + id: makeId(), + role, + content, + toolCalls: [] + }; + state.messages.push(message); + renderMessages(); + return message; +} + +function getMessageById(id) { + return state.messages.find((message) => message.id === id) || null; +} + +function ensureActiveAssistantMessage() { + if (state.activeAssistantId) { + const existing = getMessageById(state.activeAssistantId); + if (existing) { + return existing; + } + } + + const created = appendMessage('assistant', ''); + state.activeAssistantId = created.id; + return created; +} + +function handleChatChunk(chunk) { + if (!chunk || typeof chunk !== 'object') { + return; + } + + if (chunk.error) { + appendMessage('system', String(chunk.error)); + state.activeAssistantId = null; + state.streaming = false; + updateHeader(); + return; + } + + const assistant = ensureActiveAssistantMessage(); + if (typeof chunk.delta === 'string' && chunk.delta.length > 0) { + assistant.content += chunk.delta; + } + + const toolCall = extractToolCall(chunk); + if (toolCall) { + assistant.toolCalls.push(toolCall); + } + + if (chunk.done === true) { + state.activeAssistantId = null; + state.streaming = false; + } + + renderMessages(); + updateHeader(); +} + +function replaceHistory(messages) { + state.messages = []; + if (Array.isArray(messages)) { + for (const item of messages) { + if (!item || typeof item !== 'object') { + continue; + } + state.messages.push({ + id: firstString(item.id, makeId()), + role: ['user', 'assistant', 'system'].includes(item.role) ? item.role : 'assistant', + content: firstString(item.content, ''), + toolCalls: Array.isArray(item.toolCalls) ? 
item.toolCalls : [] + }); + } + } + state.activeAssistantId = null; + renderMessages(); +} + +dom.form.addEventListener('submit', (event) => { + event.preventDefault(); + const prompt = dom.input.value.trim(); + if (!prompt || !state.session || state.streaming) { + return; + } + + appendMessage('user', prompt); + const assistant = appendMessage('assistant', ''); + state.activeAssistantId = assistant.id; + state.streaming = true; + updateHeader(); + + dom.input.value = ''; + vscode.postMessage({ type: 'sendPrompt', prompt }); +}); + +window.addEventListener('message', (event) => { + const msg = event.data; + if (!msg || typeof msg !== 'object') { + return; + } + + switch (msg.type) { + case 'session': + state.session = msg.session || null; + updateHeader(); + if (state.session) { + vscode.postMessage({ type: 'requestHistory' }); + } + break; + case 'chatHistory': + replaceHistory(msg.messages || []); + break; + case 'streamStarted': + state.streaming = true; + updateHeader(); + break; + case 'streamEnded': + state.streaming = false; + state.activeAssistantId = null; + updateHeader(); + break; + case 'chatChunk': + handleChatChunk(msg.chunk || {}); + break; + case 'error': + appendMessage('system', firstString(msg.message, 'Unknown error')); + state.streaming = false; + state.activeAssistantId = null; + updateHeader(); + break; + default: + break; + } +}); + +updateHeader(); +vscode.postMessage({ type: 'ready' }); diff --git a/vscode-extension/opencode-router-0.1.0.vsix b/vscode-extension/opencode-router-0.1.0.vsix new file mode 100644 index 0000000..243257f Binary files /dev/null and b/vscode-extension/opencode-router-0.1.0.vsix differ diff --git a/vscode-extension/package-lock.json b/vscode-extension/package-lock.json new file mode 100644 index 0000000..8322f67 --- /dev/null +++ b/vscode-extension/package-lock.json @@ -0,0 +1,5085 @@ +{ + "name": "opencode-router", + "version": "0.1.0", + "lockfileVersion": 3, + "requires": true, + "packages": { + "": { + "name": 
"opencode-router", + "version": "0.1.0", + "devDependencies": { + "@types/mocha": "^10.0.10", + "@types/node": "^20.16.5", + "@types/vscode": "^1.90.0", + "@typescript-eslint/eslint-plugin": "^8.56.1", + "@typescript-eslint/parser": "^8.56.1", + "@vscode/test-electron": "^2.5.2", + "@vscode/vsce": "^2.31.1", + "eslint": "^10.0.2", + "mocha": "^11.7.5", + "typescript": "^5.6.2" + }, + "engines": { + "vscode": "^1.90.0" + } + }, + "node_modules/@azure/abort-controller": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.1.2.tgz", + "integrity": "sha512-nBrLsEWm4J2u5LpAPjxADTlq3trDgVZZXHNKabeXZtpq3d3AbN/KGO82R87rdDz5/lYB024rtEf10/q0urNgsA==", + "dev": true, + "license": "MIT", + "dependencies": { + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=18.0.0" + } + }, + "node_modules/@azure/core-auth": { + "version": "1.10.1", + "resolved": "https://registry.npmjs.org/@azure/core-auth/-/core-auth-1.10.1.tgz", + "integrity": "sha512-ykRMW8PjVAn+RS6ww5cmK9U2CyH9p4Q88YJwvUslfuMmN98w/2rdGRLPqJYObapBCdzBVeDgYWdJnFPFb7qzpg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-util": "^1.13.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/core-client": { + "version": "1.10.1", + "resolved": "https://registry.npmjs.org/@azure/core-client/-/core-client-1.10.1.tgz", + "integrity": "sha512-Nh5PhEOeY6PrnxNPsEHRr9eimxLwgLlpmguQaHKBinFYA/RU9+kOYVOQqOrTsCL+KSxrLLl1gD8Dk5BFW/7l/w==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-rest-pipeline": "^1.22.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/core-rest-pipeline": { + "version": "1.23.0", + "resolved": 
"https://registry.npmjs.org/@azure/core-rest-pipeline/-/core-rest-pipeline-1.23.0.tgz", + "integrity": "sha512-Evs1INHo+jUjwHi1T6SG6Ua/LHOQBCLuKEEE6efIpt4ZOoNonaT1kP32GoOcdNDbfqsD2445CPri3MubBy5DEQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "@typespec/ts-http-runtime": "^0.3.4", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/core-tracing": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/@azure/core-tracing/-/core-tracing-1.3.1.tgz", + "integrity": "sha512-9MWKevR7Hz8kNzzPLfX4EAtGM2b8mr50HPDBvio96bURP/9C+HjdH3sBlLSNNrvRAr5/k/svoH457gB5IKpmwQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/core-util": { + "version": "1.13.1", + "resolved": "https://registry.npmjs.org/@azure/core-util/-/core-util-1.13.1.tgz", + "integrity": "sha512-XPArKLzsvl0Hf0CaGyKHUyVgF7oDnhKoP85Xv6M4StF/1AhfORhZudHtOyf2s+FcbuQ9dPRAjB8J2KvRRMUK2A==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@typespec/ts-http-runtime": "^0.3.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/identity": { + "version": "4.13.0", + "resolved": "https://registry.npmjs.org/@azure/identity/-/identity-4.13.0.tgz", + "integrity": "sha512-uWC0fssc+hs1TGGVkkghiaFkkS7NkTxfnCH+Hdg+yTehTpMcehpok4PgUKKdyCH+9ldu6FhiHRv84Ntqj1vVcw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.0.0", + "@azure/core-auth": "^1.9.0", + "@azure/core-client": "^1.9.2", + "@azure/core-rest-pipeline": "^1.17.0", + "@azure/core-tracing": "^1.0.0", + "@azure/core-util": "^1.11.0", + "@azure/logger": "^1.0.0", + "@azure/msal-browser": "^4.2.0", + 
"@azure/msal-node": "^3.5.0", + "open": "^10.1.0", + "tslib": "^2.2.0" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/logger": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/@azure/logger/-/logger-1.3.0.tgz", + "integrity": "sha512-fCqPIfOcLE+CGqGPd66c8bZpwAji98tZ4JI9i/mlTNTlsIWslCfpg48s/ypyLxZTump5sypjrKn2/kY7q8oAbA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typespec/ts-http-runtime": "^0.3.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/msal-browser": { + "version": "4.29.0", + "resolved": "https://registry.npmjs.org/@azure/msal-browser/-/msal-browser-4.29.0.tgz", + "integrity": "sha512-/f3eHkSNUTl6DLQHm+bKecjBKcRQxbd/XLx8lvSYp8Nl/HRyPuIPOijt9Dt0sH50/SxOwQ62RnFCmFlGK+bR/w==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/msal-common": "15.15.0" + }, + "engines": { + "node": ">=0.8.0" + } + }, + "node_modules/@azure/msal-common": { + "version": "15.15.0", + "resolved": "https://registry.npmjs.org/@azure/msal-common/-/msal-common-15.15.0.tgz", + "integrity": "sha512-/n+bN0AKlVa+AOcETkJSKj38+bvFs78BaP4rNtv3MJCmPH0YrHiskMRe74OhyZ5DZjGISlFyxqvf9/4QVEi2tw==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.8.0" + } + }, + "node_modules/@azure/msal-node": { + "version": "3.8.8", + "resolved": "https://registry.npmjs.org/@azure/msal-node/-/msal-node-3.8.8.tgz", + "integrity": "sha512-+f1VrJH1iI517t4zgmuhqORja0bL6LDQXfBqkjuMmfTYXTQQnh1EvwwxO3UbKLT05N0obF72SRHFrC1RBDv5Gg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/msal-common": "15.15.0", + "jsonwebtoken": "^9.0.0", + "uuid": "^8.3.0" + }, + "engines": { + "node": ">=16" + } + }, + "node_modules/@eslint-community/eslint-utils": { + "version": "4.9.1", + "resolved": "https://registry.npmjs.org/@eslint-community/eslint-utils/-/eslint-utils-4.9.1.tgz", + "integrity": 
"sha512-phrYmNiYppR7znFEdqgfWHXR6NCkZEK7hwWDHZUjit/2/U0r6XvkDl0SYnoM51Hq7FhCGdLDT6zxCCOY1hexsQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "eslint-visitor-keys": "^3.4.3" + }, + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + }, + "peerDependencies": { + "eslint": "^6.0.0 || ^7.0.0 || >=8.0.0" + } + }, + "node_modules/@eslint-community/regexpp": { + "version": "4.12.2", + "resolved": "https://registry.npmjs.org/@eslint-community/regexpp/-/regexpp-4.12.2.tgz", + "integrity": "sha512-EriSTlt5OC9/7SXkRSCAhfSxxoSUgBm33OH+IkwbdpgoqsSsUg7y3uh+IICI/Qg4BBWr3U2i39RpmycbxMq4ew==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^12.0.0 || ^14.0.0 || >=16.0.0" + } + }, + "node_modules/@eslint/config-array": { + "version": "0.23.3", + "resolved": "https://registry.npmjs.org/@eslint/config-array/-/config-array-0.23.3.tgz", + "integrity": "sha512-j+eEWmB6YYLwcNOdlwQ6L2OsptI/LO6lNBuLIqe5R7RetD658HLoF+Mn7LzYmAWWNNzdC6cqP+L6r8ujeYXWLw==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@eslint/object-schema": "^3.0.3", + "debug": "^4.3.1", + "minimatch": "^10.2.4" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + } + }, + "node_modules/@eslint/config-helpers": { + "version": "0.5.3", + "resolved": "https://registry.npmjs.org/@eslint/config-helpers/-/config-helpers-0.5.3.tgz", + "integrity": "sha512-lzGN0onllOZCGroKJmRwY6QcEHxbjBw1gwB8SgRSqK8YbbtEXMvKynsXc3553ckIEBxsbMBU7oOZXKIPGZNeZw==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@eslint/core": "^1.1.1" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + } + }, + "node_modules/@eslint/core": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@eslint/core/-/core-1.1.1.tgz", + "integrity": "sha512-QUPblTtE51/7/Zhfv8BDwO0qkkzQL7P/aWWbqcf4xWLEYn1oKjdO0gglQBB4GAsu7u6wjijbCmzsUTy6mnk6oQ==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + 
"@types/json-schema": "^7.0.15" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + } + }, + "node_modules/@eslint/object-schema": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/@eslint/object-schema/-/object-schema-3.0.3.tgz", + "integrity": "sha512-iM869Pugn9Nsxbh/YHRqYiqd23AmIbxJOcpUMOuWCVNdoQJ5ZtwL6h3t0bcZzJUlC3Dq9jCFCESBZnX0GTv7iQ==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + } + }, + "node_modules/@eslint/plugin-kit": { + "version": "0.6.1", + "resolved": "https://registry.npmjs.org/@eslint/plugin-kit/-/plugin-kit-0.6.1.tgz", + "integrity": "sha512-iH1B076HoAshH1mLpHMgwdGeTs0CYwL0SPMkGuSebZrwBp16v415e9NZXg2jtrqPVQjf6IANe2Vtlr5KswtcZQ==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@eslint/core": "^1.1.1", + "levn": "^0.4.1" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + } + }, + "node_modules/@humanfs/core": { + "version": "0.19.1", + "resolved": "https://registry.npmjs.org/@humanfs/core/-/core-0.19.1.tgz", + "integrity": "sha512-5DyQ4+1JEUzejeK1JGICcideyfUbGixgS9jNgex5nqkW+cY7WZhxBigmieN5Qnw9ZosSNVC9KQKyb+GUaGyKUA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/@humanfs/node": { + "version": "0.16.7", + "resolved": "https://registry.npmjs.org/@humanfs/node/-/node-0.16.7.tgz", + "integrity": "sha512-/zUx+yOsIrG4Y43Eh2peDeKCxlRt/gET6aHfaKpuq267qXdYDFViVHfMaLyygZOnl0kGWxFIgsBy8QFuTLUXEQ==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@humanfs/core": "^0.19.1", + "@humanwhocodes/retry": "^0.4.0" + }, + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/@humanwhocodes/module-importer": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/@humanwhocodes/module-importer/-/module-importer-1.0.1.tgz", + "integrity": "sha512-bxveV4V8v5Yb4ncFTT3rPSgZBOpCkjfK0y4oVVVJwIuDVBRMDXrPyXRL988i5ap9m9bnyEEjWfm5WkBmtffLfA==", + "dev": true, + "license": 
"Apache-2.0", + "engines": { + "node": ">=12.22" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/nzakas" + } + }, + "node_modules/@humanwhocodes/retry": { + "version": "0.4.3", + "resolved": "https://registry.npmjs.org/@humanwhocodes/retry/-/retry-0.4.3.tgz", + "integrity": "sha512-bV0Tgo9K4hfPCek+aMAn81RppFKv2ySDQeMoSZuvTASywNTnVJCArCZE2FWqpvIatKu7VMRLWlR1EazvVhDyhQ==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">=18.18" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/nzakas" + } + }, + "node_modules/@isaacs/cliui": { + "version": "8.0.2", + "resolved": "https://registry.npmjs.org/@isaacs/cliui/-/cliui-8.0.2.tgz", + "integrity": "sha512-O8jcjabXaleOG9DQ0+ARXWZBTfnP4WNAqzuiJK7ll44AmxGKv/J2M4TPjxjY3znBCfvBXFzucm1twdyFybFqEA==", + "dev": true, + "license": "ISC", + "dependencies": { + "string-width": "^5.1.2", + "string-width-cjs": "npm:string-width@^4.2.0", + "strip-ansi": "^7.0.1", + "strip-ansi-cjs": "npm:strip-ansi@^6.0.1", + "wrap-ansi": "^8.1.0", + "wrap-ansi-cjs": "npm:wrap-ansi@^7.0.0" + }, + "engines": { + "node": ">=12" + } + }, + "node_modules/@isaacs/cliui/node_modules/ansi-styles": { + "version": "6.2.3", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-6.2.3.tgz", + "integrity": "sha512-4Dj6M28JB+oAH8kFkTLUo+a2jwOFkuqb3yucU0CANcRRUbxS0cP0nZYCGjcc3BNXwRIsUVmDGgzawme7zvJHvg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/@isaacs/cliui/node_modules/emoji-regex": { + "version": "9.2.2", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-9.2.2.tgz", + "integrity": "sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==", + "dev": true, + "license": "MIT" + }, + "node_modules/@isaacs/cliui/node_modules/string-width": { + "version": "5.1.2", + "resolved": 
"https://registry.npmjs.org/string-width/-/string-width-5.1.2.tgz", + "integrity": "sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==", + "dev": true, + "license": "MIT", + "dependencies": { + "eastasianwidth": "^0.2.0", + "emoji-regex": "^9.2.2", + "strip-ansi": "^7.0.1" + }, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/@isaacs/cliui/node_modules/wrap-ansi": { + "version": "8.1.0", + "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-8.1.0.tgz", + "integrity": "sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^6.1.0", + "string-width": "^5.0.1", + "strip-ansi": "^7.0.1" + }, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/wrap-ansi?sponsor=1" + } + }, + "node_modules/@pkgjs/parseargs": { + "version": "0.11.0", + "resolved": "https://registry.npmjs.org/@pkgjs/parseargs/-/parseargs-0.11.0.tgz", + "integrity": "sha512-+1VkjdD0QBLPodGrJUeqarH8VAIvQODIbwh9XpP5Syisf7YoQgsJKPNFoqqLQlu+VQ/tVSshMR6loPMn8U+dPg==", + "dev": true, + "license": "MIT", + "optional": true, + "engines": { + "node": ">=14" + } + }, + "node_modules/@types/esrecurse": { + "version": "4.3.1", + "resolved": "https://registry.npmjs.org/@types/esrecurse/-/esrecurse-4.3.1.tgz", + "integrity": "sha512-xJBAbDifo5hpffDBuHl0Y8ywswbiAp/Wi7Y/GtAgSlZyIABppyurxVueOPE8LUQOxdlgi6Zqce7uoEpqNTeiUw==", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/estree": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz", + "integrity": "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/json-schema": { + "version": "7.0.15", + "resolved": 
"https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.15.tgz", + "integrity": "sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA==", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/mocha": { + "version": "10.0.10", + "resolved": "https://registry.npmjs.org/@types/mocha/-/mocha-10.0.10.tgz", + "integrity": "sha512-xPyYSz1cMPnJQhl0CLMH68j3gprKZaTjG3s5Vi+fDgx+uhG9NOXwbVt52eFS8ECyXhyKcjDLCBEqBExKuiZb7Q==", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/node": { + "version": "20.19.37", + "resolved": "https://registry.npmjs.org/@types/node/-/node-20.19.37.tgz", + "integrity": "sha512-8kzdPJ3FsNsVIurqBs7oodNnCEVbni9yUEkaHbgptDACOPW04jimGagZ51E6+lXUwJjgnBw+hyko/lkFWCldqw==", + "dev": true, + "license": "MIT", + "dependencies": { + "undici-types": "~6.21.0" + } + }, + "node_modules/@types/vscode": { + "version": "1.109.0", + "resolved": "https://registry.npmjs.org/@types/vscode/-/vscode-1.109.0.tgz", + "integrity": "sha512-0Pf95rnwEIwDbmXGC08r0B4TQhAbsHQ5UyTIgVgoieDe4cOnf92usuR5dEczb6bTKEp7ziZH4TV1TRGPPCExtw==", + "dev": true, + "license": "MIT" + }, + "node_modules/@typescript-eslint/eslint-plugin": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/eslint-plugin/-/eslint-plugin-8.56.1.tgz", + "integrity": "sha512-Jz9ZztpB37dNC+HU2HI28Bs9QXpzCz+y/twHOwhyrIRdbuVDxSytJNDl6z/aAKlaRIwC7y8wJdkBv7FxYGgi0A==", + "dev": true, + "license": "MIT", + "dependencies": { + "@eslint-community/regexpp": "^4.12.2", + "@typescript-eslint/scope-manager": "8.56.1", + "@typescript-eslint/type-utils": "8.56.1", + "@typescript-eslint/utils": "8.56.1", + "@typescript-eslint/visitor-keys": "8.56.1", + "ignore": "^7.0.5", + "natural-compare": "^1.4.0", + "ts-api-utils": "^2.4.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + 
"@typescript-eslint/parser": "^8.56.1", + "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/parser": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/parser/-/parser-8.56.1.tgz", + "integrity": "sha512-klQbnPAAiGYFyI02+znpBRLyjL4/BrBd0nyWkdC0s/6xFLkXYQ8OoRrSkqacS1ddVxf/LDyODIKbQ5TgKAf/Fg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/scope-manager": "8.56.1", + "@typescript-eslint/types": "8.56.1", + "@typescript-eslint/typescript-estree": "8.56.1", + "@typescript-eslint/visitor-keys": "8.56.1", + "debug": "^4.4.3" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/project-service": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/project-service/-/project-service-8.56.1.tgz", + "integrity": "sha512-TAdqQTzHNNvlVFfR+hu2PDJrURiwKsUvxFn1M0h95BB8ah5jejas08jUWG4dBA68jDMI988IvtfdAI53JzEHOQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/tsconfig-utils": "^8.56.1", + "@typescript-eslint/types": "^8.56.1", + "debug": "^4.4.3" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/scope-manager": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/scope-manager/-/scope-manager-8.56.1.tgz", + "integrity": "sha512-YAi4VDKcIZp0O4tz/haYKhmIDZFEUPOreKbfdAN3SzUDMcPhJ8QI99xQXqX+HoUVq8cs85eRKnD+rne2UAnj2w==", + "dev": true, + "license": "MIT", + "dependencies": { + 
"@typescript-eslint/types": "8.56.1", + "@typescript-eslint/visitor-keys": "8.56.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + } + }, + "node_modules/@typescript-eslint/tsconfig-utils": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/tsconfig-utils/-/tsconfig-utils-8.56.1.tgz", + "integrity": "sha512-qOtCYzKEeyr3aR9f28mPJqBty7+DBqsdd63eO0yyDwc6vgThj2UjWfJIcsFeSucYydqcuudMOprZ+x1SpF3ZuQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/type-utils": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/type-utils/-/type-utils-8.56.1.tgz", + "integrity": "sha512-yB/7dxi7MgTtGhZdaHCemf7PuwrHMenHjmzgUW1aJpO+bBU43OycnM3Wn+DdvDO/8zzA9HlhaJ0AUGuvri4oGg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/types": "8.56.1", + "@typescript-eslint/typescript-estree": "8.56.1", + "@typescript-eslint/utils": "8.56.1", + "debug": "^4.4.3", + "ts-api-utils": "^2.4.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/types": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/types/-/types-8.56.1.tgz", + "integrity": "sha512-dbMkdIUkIkchgGDIv7KLUpa0Mda4IYjo4IAMJUZ+3xNoUXxMsk9YtKpTHSChRS85o+H9ftm51gsK1dZReY9CVw==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, 
+ "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + } + }, + "node_modules/@typescript-eslint/typescript-estree": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/typescript-estree/-/typescript-estree-8.56.1.tgz", + "integrity": "sha512-qzUL1qgalIvKWAf9C1HpvBjif+Vm6rcT5wZd4VoMb9+Km3iS3Cv9DY6dMRMDtPnwRAFyAi7YXJpTIEXLvdfPxg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/project-service": "8.56.1", + "@typescript-eslint/tsconfig-utils": "8.56.1", + "@typescript-eslint/types": "8.56.1", + "@typescript-eslint/visitor-keys": "8.56.1", + "debug": "^4.4.3", + "minimatch": "^10.2.2", + "semver": "^7.7.3", + "tinyglobby": "^0.2.15", + "ts-api-utils": "^2.4.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/utils": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/utils/-/utils-8.56.1.tgz", + "integrity": "sha512-HPAVNIME3tABJ61siYlHzSWCGtOoeP2RTIaHXFMPqjrQKCGB9OgUVdiNgH7TJS2JNIQ5qQ4RsAUDuGaGme/KOA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@eslint-community/eslint-utils": "^4.9.1", + "@typescript-eslint/scope-manager": "8.56.1", + "@typescript-eslint/types": "8.56.1", + "@typescript-eslint/typescript-estree": "8.56.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/visitor-keys": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/visitor-keys/-/visitor-keys-8.56.1.tgz", + 
"integrity": "sha512-KiROIzYdEV85YygXw6BI/Dx4fnBlFQu6Mq4QE4MOH9fFnhohw6wX/OAvDY2/C+ut0I3RSPKenvZJIVYqJNkhEw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/types": "8.56.1", + "eslint-visitor-keys": "^5.0.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + } + }, + "node_modules/@typescript-eslint/visitor-keys/node_modules/eslint-visitor-keys": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-5.0.1.tgz", + "integrity": "sha512-tD40eHxA35h0PEIZNeIjkHoDR4YjjJp34biM0mDvplBe//mB+IHCqHDGV7pxF+7MklTvighcCPPZC7ynWyjdTA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/@typespec/ts-http-runtime": { + "version": "0.3.4", + "resolved": "https://registry.npmjs.org/@typespec/ts-http-runtime/-/ts-http-runtime-0.3.4.tgz", + "integrity": "sha512-CI0NhTrz4EBaa0U+HaaUZrJhPoso8sG7ZFya8uQoBA57fjzrjRSv87ekCjLZOFExN+gXE/z0xuN2QfH4H2HrLQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "http-proxy-agent": "^7.0.0", + "https-proxy-agent": "^7.0.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@vscode/test-electron": { + "version": "2.5.2", + "resolved": "https://registry.npmjs.org/@vscode/test-electron/-/test-electron-2.5.2.tgz", + "integrity": "sha512-8ukpxv4wYe0iWMRQU18jhzJOHkeGKbnw7xWRX3Zw1WJA4cEKbHcmmLPdPrPtL6rhDcrlCZN+xKRpv09n4gRHYg==", + "dev": true, + "license": "MIT", + "dependencies": { + "http-proxy-agent": "^7.0.2", + "https-proxy-agent": "^7.0.5", + "jszip": "^3.10.1", + "ora": "^8.1.0", + "semver": "^7.6.2" + }, + "engines": { + "node": ">=16" + } + }, + "node_modules/@vscode/vsce": { + "version": "2.32.0", + "resolved": 
"https://registry.npmjs.org/@vscode/vsce/-/vsce-2.32.0.tgz", + "integrity": "sha512-3EFJfsgrSftIqt3EtdRcAygy/OJ3hstyI1cDmIgkU9CFZW5C+3djr6mfosndCUqcVYuyjmxOK1xmFp/Bq7+NIg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/identity": "^4.1.0", + "@vscode/vsce-sign": "^2.0.0", + "azure-devops-node-api": "^12.5.0", + "chalk": "^2.4.2", + "cheerio": "^1.0.0-rc.9", + "cockatiel": "^3.1.2", + "commander": "^6.2.1", + "form-data": "^4.0.0", + "glob": "^7.0.6", + "hosted-git-info": "^4.0.2", + "jsonc-parser": "^3.2.0", + "leven": "^3.1.0", + "markdown-it": "^12.3.2", + "mime": "^1.3.4", + "minimatch": "^3.0.3", + "parse-semver": "^1.1.1", + "read": "^1.0.7", + "semver": "^7.5.2", + "tmp": "^0.2.1", + "typed-rest-client": "^1.8.4", + "url-join": "^4.0.1", + "xml2js": "^0.5.0", + "yauzl": "^2.3.1", + "yazl": "^2.2.2" + }, + "bin": { + "vsce": "vsce" + }, + "engines": { + "node": ">= 16" + }, + "optionalDependencies": { + "keytar": "^7.7.0" + } + }, + "node_modules/@vscode/vsce-sign": { + "version": "2.0.9", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign/-/vsce-sign-2.0.9.tgz", + "integrity": "sha512-8IvaRvtFyzUnGGl3f5+1Cnor3LqaUWvhaUjAYO8Y39OUYlOf3cRd+dowuQYLpZcP3uwSG+mURwjEBOSq4SOJ0g==", + "dev": true, + "hasInstallScript": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optionalDependencies": { + "@vscode/vsce-sign-alpine-arm64": "2.0.6", + "@vscode/vsce-sign-alpine-x64": "2.0.6", + "@vscode/vsce-sign-darwin-arm64": "2.0.6", + "@vscode/vsce-sign-darwin-x64": "2.0.6", + "@vscode/vsce-sign-linux-arm": "2.0.6", + "@vscode/vsce-sign-linux-arm64": "2.0.6", + "@vscode/vsce-sign-linux-x64": "2.0.6", + "@vscode/vsce-sign-win32-arm64": "2.0.6", + "@vscode/vsce-sign-win32-x64": "2.0.6" + } + }, + "node_modules/@vscode/vsce-sign-alpine-arm64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-alpine-arm64/-/vsce-sign-alpine-arm64-2.0.6.tgz", + "integrity": 
"sha512-wKkJBsvKF+f0GfsUuGT0tSW0kZL87QggEiqNqK6/8hvqsXvpx8OsTEc3mnE1kejkh5r+qUyQ7PtF8jZYN0mo8Q==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "alpine" + ] + }, + "node_modules/@vscode/vsce-sign-alpine-x64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-alpine-x64/-/vsce-sign-alpine-x64-2.0.6.tgz", + "integrity": "sha512-YoAGlmdK39vKi9jA18i4ufBbd95OqGJxRvF3n6ZbCyziwy3O+JgOpIUPxv5tjeO6gQfx29qBivQ8ZZTUF2Ba0w==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "alpine" + ] + }, + "node_modules/@vscode/vsce-sign-darwin-arm64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-darwin-arm64/-/vsce-sign-darwin-arm64-2.0.6.tgz", + "integrity": "sha512-5HMHaJRIQuozm/XQIiJiA0W9uhdblwwl2ZNDSSAeXGO9YhB9MH5C4KIHOmvyjUnKy4UCuiP43VKpIxW1VWP4tQ==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "darwin" + ] + }, + "node_modules/@vscode/vsce-sign-darwin-x64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-darwin-x64/-/vsce-sign-darwin-x64-2.0.6.tgz", + "integrity": "sha512-25GsUbTAiNfHSuRItoQafXOIpxlYj+IXb4/qarrXu7kmbH94jlm5sdWSCKrrREs8+GsXF1b+l3OB7VJy5jsykw==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "darwin" + ] + }, + "node_modules/@vscode/vsce-sign-linux-arm": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-linux-arm/-/vsce-sign-linux-arm-2.0.6.tgz", + "integrity": "sha512-UndEc2Xlq4HsuMPnwu7420uqceXjs4yb5W8E2/UkaHBB9OWCwMd3/bRe/1eLe3D8kPpxzcaeTyXiK3RdzS/1CA==", + "cpu": [ + "arm" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@vscode/vsce-sign-linux-arm64": { + "version": 
"2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-linux-arm64/-/vsce-sign-linux-arm64-2.0.6.tgz", + "integrity": "sha512-cfb1qK7lygtMa4NUl2582nP7aliLYuDEVpAbXJMkDq1qE+olIw/es+C8j1LJwvcRq1I2yWGtSn3EkDp9Dq5FdA==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@vscode/vsce-sign-linux-x64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-linux-x64/-/vsce-sign-linux-x64-2.0.6.tgz", + "integrity": "sha512-/olerl1A4sOqdP+hjvJ1sbQjKN07Y3DVnxO4gnbn/ahtQvFrdhUi0G1VsZXDNjfqmXw57DmPi5ASnj/8PGZhAA==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@vscode/vsce-sign-win32-arm64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-win32-arm64/-/vsce-sign-win32-arm64-2.0.6.tgz", + "integrity": "sha512-ivM/MiGIY0PJNZBoGtlRBM/xDpwbdlCWomUWuLmIxbi1Cxe/1nooYrEQoaHD8ojVRgzdQEUzMsRbyF5cJJgYOg==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/@vscode/vsce-sign-win32-x64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-win32-x64/-/vsce-sign-win32-x64-2.0.6.tgz", + "integrity": "sha512-mgth9Kvze+u8CruYMmhHw6Zgy3GRX2S+Ed5oSokDEK5vPEwGGKnmuXua9tmFhomeAnhgJnL4DCna3TiNuGrBTQ==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/@vscode/vsce/node_modules/balanced-match": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", + "integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==", + "dev": true, + "license": "MIT" + }, + 
"node_modules/@vscode/vsce/node_modules/brace-expansion": { + "version": "1.1.12", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz", + "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==", + "dev": true, + "license": "MIT", + "dependencies": { + "balanced-match": "^1.0.0", + "concat-map": "0.0.1" + } + }, + "node_modules/@vscode/vsce/node_modules/minimatch": { + "version": "3.1.5", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.5.tgz", + "integrity": "sha512-VgjWUsnnT6n+NUk6eZq77zeFdpW2LWDzP6zFGrCbHXiYNul5Dzqk2HHQ5uFH2DNW5Xbp8+jVzaeNt94ssEEl4w==", + "dev": true, + "license": "ISC", + "dependencies": { + "brace-expansion": "^1.1.7" + }, + "engines": { + "node": "*" + } + }, + "node_modules/acorn": { + "version": "8.16.0", + "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.16.0.tgz", + "integrity": "sha512-UVJyE9MttOsBQIDKw1skb9nAwQuR5wuGD3+82K6JgJlm/Y+KI92oNsMNGZCYdDsVtRHSak0pcV5Dno5+4jh9sw==", + "dev": true, + "license": "MIT", + "bin": { + "acorn": "bin/acorn" + }, + "engines": { + "node": ">=0.4.0" + } + }, + "node_modules/acorn-jsx": { + "version": "5.3.2", + "resolved": "https://registry.npmjs.org/acorn-jsx/-/acorn-jsx-5.3.2.tgz", + "integrity": "sha512-rq9s+JNhf0IChjtDXxllJ7g41oZk5SlXtp0LHwyA5cejwn7vKmKp4pPri6YEePv2PU65sAsegbXtIinmDFDXgQ==", + "dev": true, + "license": "MIT", + "peerDependencies": { + "acorn": "^6.0.0 || ^7.0.0 || ^8.0.0" + } + }, + "node_modules/agent-base": { + "version": "7.1.4", + "resolved": "https://registry.npmjs.org/agent-base/-/agent-base-7.1.4.tgz", + "integrity": "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 14" + } + }, + "node_modules/ajv": { + "version": "6.14.0", + "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.14.0.tgz", + "integrity": 
"sha512-IWrosm/yrn43eiKqkfkHis7QioDleaXQHdDVPKg0FSwwd/DuvyX79TZnFOnYpB7dcsFAMmtFztZuXPDvSePkFw==", + "dev": true, + "license": "MIT", + "dependencies": { + "fast-deep-equal": "^3.1.1", + "fast-json-stable-stringify": "^2.0.0", + "json-schema-traverse": "^0.4.1", + "uri-js": "^4.2.2" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/epoberezkin" + } + }, + "node_modules/ansi-regex": { + "version": "6.2.2", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-6.2.2.tgz", + "integrity": "sha512-Bq3SmSpyFHaWjPk8If9yc6svM8c56dB5BAtW4Qbw5jHTwwXXcTLoRMkpDJp6VL0XzlWaCHTXrkFURMYmD0sLqg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/ansi-regex?sponsor=1" + } + }, + "node_modules/ansi-styles": { + "version": "3.2.1", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-3.2.1.tgz", + "integrity": "sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-convert": "^1.9.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/argparse": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz", + "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==", + "dev": true, + "license": "Python-2.0" + }, + "node_modules/asynckit": { + "version": "0.4.0", + "resolved": "https://registry.npmjs.org/asynckit/-/asynckit-0.4.0.tgz", + "integrity": "sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==", + "dev": true, + "license": "MIT" + }, + "node_modules/azure-devops-node-api": { + "version": "12.5.0", + "resolved": "https://registry.npmjs.org/azure-devops-node-api/-/azure-devops-node-api-12.5.0.tgz", + "integrity": 
"sha512-R5eFskGvOm3U/GzeAuxRkUsAl0hrAwGgWn6zAd2KrZmrEhWZVqLew4OOupbQlXUuojUzpGtq62SmdhJ06N88og==", + "dev": true, + "license": "MIT", + "dependencies": { + "tunnel": "0.0.6", + "typed-rest-client": "^1.8.4" + } + }, + "node_modules/balanced-match": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-4.0.4.tgz", + "integrity": "sha512-BLrgEcRTwX2o6gGxGOCNyMvGSp35YofuYzw9h1IMTRmKqttAZZVU67bdb9Pr2vUHA8+j3i2tJfjO6C6+4myGTA==", + "dev": true, + "license": "MIT", + "engines": { + "node": "18 || 20 || >=22" + } + }, + "node_modules/base64-js": { + "version": "1.5.1", + "resolved": "https://registry.npmjs.org/base64-js/-/base64-js-1.5.1.tgz", + "integrity": "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT", + "optional": true + }, + "node_modules/bl": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/bl/-/bl-4.1.0.tgz", + "integrity": "sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "buffer": "^5.5.0", + "inherits": "^2.0.4", + "readable-stream": "^3.4.0" + } + }, + "node_modules/boolbase": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/boolbase/-/boolbase-1.0.0.tgz", + "integrity": "sha512-JZOSA7Mo9sNGB8+UjSgzdLtokWAky1zbztM3WRLCbZ70/3cTANmQmOdR7y2g+J0e2WXywy1yS468tY+IruqEww==", + "dev": true, + "license": "ISC" + }, + "node_modules/brace-expansion": { + "version": "5.0.4", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-5.0.4.tgz", + "integrity": 
"sha512-h+DEnpVvxmfVefa4jFbCf5HdH5YMDXRsmKflpf1pILZWRFlTbJpxeU55nJl4Smt5HQaGzg1o6RHFPJaOqnmBDg==", + "dev": true, + "license": "MIT", + "dependencies": { + "balanced-match": "^4.0.2" + }, + "engines": { + "node": "18 || 20 || >=22" + } + }, + "node_modules/browser-stdout": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/browser-stdout/-/browser-stdout-1.3.1.tgz", + "integrity": "sha512-qhAVI1+Av2X7qelOfAIYwXONood6XlZE/fXaBSmW/T5SzLAmCgzi+eiWE7fUvbHaeNBQH13UftjpXxsfLkMpgw==", + "dev": true, + "license": "ISC" + }, + "node_modules/buffer": { + "version": "5.7.1", + "resolved": "https://registry.npmjs.org/buffer/-/buffer-5.7.1.tgz", + "integrity": "sha512-EHcyIPBQ4BSGlvjB16k5KgAJ27CIsHY/2JBmCRReo48y9rQ3MaUzWX3KVlBa4U7MyX02HdVj0K7C3WaB3ju7FQ==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT", + "optional": true, + "dependencies": { + "base64-js": "^1.3.1", + "ieee754": "^1.1.13" + } + }, + "node_modules/buffer-crc32": { + "version": "0.2.13", + "resolved": "https://registry.npmjs.org/buffer-crc32/-/buffer-crc32-0.2.13.tgz", + "integrity": "sha512-VO9Ht/+p3SN7SKWqcrgEzjGbRSJYTx+Q1pTQC0wrWqHx0vpJraQ6GtHx8tvcg1rlK1byhU5gccxgOgj7B0TDkQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": "*" + } + }, + "node_modules/buffer-equal-constant-time": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/buffer-equal-constant-time/-/buffer-equal-constant-time-1.0.1.tgz", + "integrity": "sha512-zRpUiDwd/xk6ADqPMATG8vc9VPrkck7T07OIx0gnjmJAnHnTVXNQG3vfvWNuiZIkwu9KrKdA1iJKfsfTVxE6NA==", + "dev": true, + "license": "BSD-3-Clause" + }, + "node_modules/bundle-name": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/bundle-name/-/bundle-name-4.1.0.tgz", + "integrity": 
"sha512-tjwM5exMg6BGRI+kNmTntNsvdZS1X8BFYS6tnJ2hdH0kVxM6/eVZ2xy+FqStSWvYmtfFMDLIxurorHwDKfDz5Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "run-applescript": "^7.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/call-bind-apply-helpers": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz", + "integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/call-bound": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.4.tgz", + "integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind-apply-helpers": "^1.0.2", + "get-intrinsic": "^1.3.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/camelcase": { + "version": "6.3.0", + "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-6.3.0.tgz", + "integrity": "sha512-Gmy6FhYlCY7uOElZUSbxo2UCDH8owEk996gkbrpsgGtrJLM3J7jGxl9Ic7Qwwj4ivOE5AWZWRMecDdF7hqGjFA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/chalk": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + }, + 
"engines": { + "node": ">=4" + } + }, + "node_modules/cheerio": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/cheerio/-/cheerio-1.2.0.tgz", + "integrity": "sha512-WDrybc/gKFpTYQutKIK6UvfcuxijIZfMfXaYm8NMsPQxSYvf+13fXUJ4rztGGbJcBQ/GF55gvrZ0Bc0bj/mqvg==", + "dev": true, + "license": "MIT", + "dependencies": { + "cheerio-select": "^2.1.0", + "dom-serializer": "^2.0.0", + "domhandler": "^5.0.3", + "domutils": "^3.2.2", + "encoding-sniffer": "^0.2.1", + "htmlparser2": "^10.1.0", + "parse5": "^7.3.0", + "parse5-htmlparser2-tree-adapter": "^7.1.0", + "parse5-parser-stream": "^7.1.2", + "undici": "^7.19.0", + "whatwg-mimetype": "^4.0.0" + }, + "engines": { + "node": ">=20.18.1" + }, + "funding": { + "url": "https://github.com/cheeriojs/cheerio?sponsor=1" + } + }, + "node_modules/cheerio-select": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/cheerio-select/-/cheerio-select-2.1.0.tgz", + "integrity": "sha512-9v9kG0LvzrlcungtnJtpGNxY+fzECQKhK4EGJX2vByejiMX84MFNQw4UxPJl3bFbTMw+Dfs37XaIkCwTZfLh4g==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "boolbase": "^1.0.0", + "css-select": "^5.1.0", + "css-what": "^6.1.0", + "domelementtype": "^2.3.0", + "domhandler": "^5.0.3", + "domutils": "^3.0.1" + }, + "funding": { + "url": "https://github.com/sponsors/fb55" + } + }, + "node_modules/chokidar": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-4.0.3.tgz", + "integrity": "sha512-Qgzu8kfBvo+cA4962jnP1KkS6Dop5NS6g7R5LFYJr4b8Ub94PPQXUksCw9PvXoeXPRRddRNC5C1JQUR2SMGtnA==", + "dev": true, + "license": "MIT", + "dependencies": { + "readdirp": "^4.0.1" + }, + "engines": { + "node": ">= 14.16.0" + }, + "funding": { + "url": "https://paulmillr.com/funding/" + } + }, + "node_modules/chownr": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/chownr/-/chownr-1.1.4.tgz", + "integrity": "sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg==", + 
"dev": true, + "license": "ISC", + "optional": true + }, + "node_modules/cli-cursor": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/cli-cursor/-/cli-cursor-5.0.0.tgz", + "integrity": "sha512-aCj4O5wKyszjMmDT4tZj93kxyydN/K5zPWSCe6/0AV/AA1pqe5ZBIw0a2ZfPQV7lL5/yb5HsUreJ6UFAF1tEQw==", + "dev": true, + "license": "MIT", + "dependencies": { + "restore-cursor": "^5.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/cli-spinners": { + "version": "2.9.2", + "resolved": "https://registry.npmjs.org/cli-spinners/-/cli-spinners-2.9.2.tgz", + "integrity": "sha512-ywqV+5MmyL4E7ybXgKys4DugZbX0FC6LnwrhjuykIjnK9k8OQacQ7axGKnjDXWNhns0xot3bZI5h55H8yo9cJg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/cliui": { + "version": "8.0.1", + "resolved": "https://registry.npmjs.org/cliui/-/cliui-8.0.1.tgz", + "integrity": "sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ==", + "dev": true, + "license": "ISC", + "dependencies": { + "string-width": "^4.2.0", + "strip-ansi": "^6.0.1", + "wrap-ansi": "^7.0.0" + }, + "engines": { + "node": ">=12" + } + }, + "node_modules/cliui/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/cliui/node_modules/emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true, + "license": "MIT" + }, + 
"node_modules/cliui/node_modules/string-width": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dev": true, + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/cliui/node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/cockatiel": { + "version": "3.2.1", + "resolved": "https://registry.npmjs.org/cockatiel/-/cockatiel-3.2.1.tgz", + "integrity": "sha512-gfrHV6ZPkquExvMh9IOkKsBzNDk6sDuZ6DdBGUBkvFnTCqCxzpuq48RySgP0AnaqQkw2zynOFj9yly6T1Q2G5Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=16" + } + }, + "node_modules/color-convert": { + "version": "1.9.3", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-1.9.3.tgz", + "integrity": "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-name": "1.1.3" + } + }, + "node_modules/color-name": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz", + "integrity": "sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw==", + "dev": true, + "license": "MIT" + }, + "node_modules/combined-stream": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-1.0.8.tgz", + "integrity": 
"sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==", + "dev": true, + "license": "MIT", + "dependencies": { + "delayed-stream": "~1.0.0" + }, + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/commander": { + "version": "6.2.1", + "resolved": "https://registry.npmjs.org/commander/-/commander-6.2.1.tgz", + "integrity": "sha512-U7VdrJFnJgo4xjrHpTzu0yrHPGImdsmD95ZlgYSEajAn2JKzDhDTPG9kBTefmObL2w/ngeZnilk+OV9CG3d7UA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 6" + } + }, + "node_modules/concat-map": { + "version": "0.0.1", + "resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz", + "integrity": "sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==", + "dev": true, + "license": "MIT" + }, + "node_modules/core-util-is": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.3.tgz", + "integrity": "sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/cross-spawn": { + "version": "7.0.6", + "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz", + "integrity": "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==", + "dev": true, + "license": "MIT", + "dependencies": { + "path-key": "^3.1.0", + "shebang-command": "^2.0.0", + "which": "^2.0.1" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/css-select": { + "version": "5.2.2", + "resolved": "https://registry.npmjs.org/css-select/-/css-select-5.2.2.tgz", + "integrity": "sha512-TizTzUddG/xYLA3NXodFM0fSbNizXjOKhqiQQwvhlspadZokn1KDy0NZFS0wuEubIYAV5/c1/lAr0TaaFXEXzw==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "boolbase": "^1.0.0", + "css-what": "^6.1.0", + "domhandler": "^5.0.2", + "domutils": "^3.0.1", + "nth-check": "^2.0.1" + }, + "funding": { + 
"url": "https://github.com/sponsors/fb55" + } + }, + "node_modules/css-what": { + "version": "6.2.2", + "resolved": "https://registry.npmjs.org/css-what/-/css-what-6.2.2.tgz", + "integrity": "sha512-u/O3vwbptzhMs3L1fQE82ZSLHQQfto5gyZzwteVIEyeaY5Fc7R4dapF/BvRoSYFeqfBk4m0V1Vafq5Pjv25wvA==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">= 6" + }, + "funding": { + "url": "https://github.com/sponsors/fb55" + } + }, + "node_modules/debug": { + "version": "4.4.3", + "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.3.tgz", + "integrity": "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==", + "dev": true, + "license": "MIT", + "dependencies": { + "ms": "^2.1.3" + }, + "engines": { + "node": ">=6.0" + }, + "peerDependenciesMeta": { + "supports-color": { + "optional": true + } + } + }, + "node_modules/decamelize": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/decamelize/-/decamelize-4.0.0.tgz", + "integrity": "sha512-9iE1PgSik9HeIIw2JO94IidnE3eBoQrFJ3w7sFuzSX4DpmZ3v5sZpUiV5Swcf6mQEF+Y0ru8Neo+p+nyh2J+hQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/decompress-response": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/decompress-response/-/decompress-response-6.0.0.tgz", + "integrity": "sha512-aW35yZM6Bb/4oJlZncMH2LCoZtJXTRxES17vE3hoRiowU2kWHaJKFkSBDnDR+cm9J+9QhXmREyIfv0pji9ejCQ==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "mimic-response": "^3.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/deep-extend": { + "version": "0.6.0", + "resolved": "https://registry.npmjs.org/deep-extend/-/deep-extend-0.6.0.tgz", + "integrity": 
"sha512-LOHxIOaPYdHlJRtCQfDIVZtfw/ufM8+rVj649RIHzcm/vGwQRXFt6OPqIFWsm2XEMrNIEtWR64sY1LEKD2vAOA==", + "dev": true, + "license": "MIT", + "optional": true, + "engines": { + "node": ">=4.0.0" + } + }, + "node_modules/deep-is": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/deep-is/-/deep-is-0.1.4.tgz", + "integrity": "sha512-oIPzksmTg4/MriiaYGO+okXDT7ztn/w3Eptv/+gSIdMdKsJo0u4CfYNFJPy+4SKMuCqGw2wxnA+URMg3t8a/bQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/default-browser": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/default-browser/-/default-browser-5.5.0.tgz", + "integrity": "sha512-H9LMLr5zwIbSxrmvikGuI/5KGhZ8E2zH3stkMgM5LpOWDutGM2JZaj460Udnf1a+946zc7YBgrqEWwbk7zHvGw==", + "dev": true, + "license": "MIT", + "dependencies": { + "bundle-name": "^4.1.0", + "default-browser-id": "^5.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/default-browser-id": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/default-browser-id/-/default-browser-id-5.0.1.tgz", + "integrity": "sha512-x1VCxdX4t+8wVfd1so/9w+vQ4vx7lKd2Qp5tDRutErwmR85OgmfX7RlLRMWafRMY7hbEiXIbudNrjOAPa/hL8Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/define-lazy-prop": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/define-lazy-prop/-/define-lazy-prop-3.0.0.tgz", + "integrity": "sha512-N+MeXYoqr3pOgn8xfyRPREN7gHakLYjhsHhWGT3fWAiL4IkAt0iDw14QiiEm2bE30c5XX5q0FtAA3CK5f9/BUg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/delayed-stream": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz", + "integrity": 
"sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.4.0" + } + }, + "node_modules/detect-libc": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/detect-libc/-/detect-libc-2.1.2.tgz", + "integrity": "sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==", + "dev": true, + "license": "Apache-2.0", + "optional": true, + "engines": { + "node": ">=8" + } + }, + "node_modules/diff": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/diff/-/diff-7.0.0.tgz", + "integrity": "sha512-PJWHUb1RFevKCwaFA9RlG5tCd+FO5iRh9A8HEtkmBH2Li03iJriB6m6JIN4rGz3K3JLawI7/veA1xzRKP6ISBw==", + "dev": true, + "license": "BSD-3-Clause", + "engines": { + "node": ">=0.3.1" + } + }, + "node_modules/dom-serializer": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/dom-serializer/-/dom-serializer-2.0.0.tgz", + "integrity": "sha512-wIkAryiqt/nV5EQKqQpo3SToSOV9J0DnbJqwK7Wv/Trc92zIAYZ4FlMu+JPFW1DfGFt81ZTCGgDEabffXeLyJg==", + "dev": true, + "license": "MIT", + "dependencies": { + "domelementtype": "^2.3.0", + "domhandler": "^5.0.2", + "entities": "^4.2.0" + }, + "funding": { + "url": "https://github.com/cheeriojs/dom-serializer?sponsor=1" + } + }, + "node_modules/domelementtype": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/domelementtype/-/domelementtype-2.3.0.tgz", + "integrity": "sha512-OLETBj6w0OsagBwdXnPdN0cnMfF9opN69co+7ZrbfPGrdpPVNBUj02spi6B1N7wChLQiPn4CSH/zJvXw56gmHw==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/fb55" + } + ], + "license": "BSD-2-Clause" + }, + "node_modules/domhandler": { + "version": "5.0.3", + "resolved": "https://registry.npmjs.org/domhandler/-/domhandler-5.0.3.tgz", + "integrity": "sha512-cgwlv/1iFQiFnU96XXgROh8xTeetsnJiDsTc7TYCLFd9+/WNkIqPTxiM/8pSd8VIrhXGTf1Ny1q1hquVqDJB5w==", + "dev": true, + "license": 
"BSD-2-Clause", + "dependencies": { + "domelementtype": "^2.3.0" + }, + "engines": { + "node": ">= 4" + }, + "funding": { + "url": "https://github.com/fb55/domhandler?sponsor=1" + } + }, + "node_modules/domutils": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/domutils/-/domutils-3.2.2.tgz", + "integrity": "sha512-6kZKyUajlDuqlHKVX1w7gyslj9MPIXzIFiz/rGu35uC1wMi+kMhQwGhl4lt9unC9Vb9INnY9Z3/ZA3+FhASLaw==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "dom-serializer": "^2.0.0", + "domelementtype": "^2.3.0", + "domhandler": "^5.0.3" + }, + "funding": { + "url": "https://github.com/fb55/domutils?sponsor=1" + } + }, + "node_modules/dunder-proto": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz", + "integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind-apply-helpers": "^1.0.1", + "es-errors": "^1.3.0", + "gopd": "^1.2.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/eastasianwidth": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/eastasianwidth/-/eastasianwidth-0.2.0.tgz", + "integrity": "sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA==", + "dev": true, + "license": "MIT" + }, + "node_modules/ecdsa-sig-formatter": { + "version": "1.0.11", + "resolved": "https://registry.npmjs.org/ecdsa-sig-formatter/-/ecdsa-sig-formatter-1.0.11.tgz", + "integrity": "sha512-nagl3RYrbNv6kQkeJIpt6NJZy8twLB/2vtz6yN9Z4vRKHN4/QZJIEbqohALSgwKdnksuY3k5Addp5lg8sVoVcQ==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "safe-buffer": "^5.0.1" + } + }, + "node_modules/emoji-regex": { + "version": "10.6.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-10.6.0.tgz", + "integrity": 
"sha512-toUI84YS5YmxW219erniWD0CIVOo46xGKColeNQRgOzDorgBi1v4D71/OFzgD9GO2UGKIv1C3Sp8DAn0+j5w7A==", + "dev": true, + "license": "MIT" + }, + "node_modules/encoding-sniffer": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/encoding-sniffer/-/encoding-sniffer-0.2.1.tgz", + "integrity": "sha512-5gvq20T6vfpekVtqrYQsSCFZ1wEg5+wW0/QaZMWkFr6BqD3NfKs0rLCx4rrVlSWJeZb5NBJgVLswK/w2MWU+Gw==", + "dev": true, + "license": "MIT", + "dependencies": { + "iconv-lite": "^0.6.3", + "whatwg-encoding": "^3.1.1" + }, + "funding": { + "url": "https://github.com/fb55/encoding-sniffer?sponsor=1" + } + }, + "node_modules/end-of-stream": { + "version": "1.4.5", + "resolved": "https://registry.npmjs.org/end-of-stream/-/end-of-stream-1.4.5.tgz", + "integrity": "sha512-ooEGc6HP26xXq/N+GCGOT0JKCLDGrq2bQUZrQ7gyrJiZANJ/8YDTxTpQBXGMn+WbIQXNVpyWymm7KYVICQnyOg==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "once": "^1.4.0" + } + }, + "node_modules/entities": { + "version": "4.5.0", + "resolved": "https://registry.npmjs.org/entities/-/entities-4.5.0.tgz", + "integrity": "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.12" + }, + "funding": { + "url": "https://github.com/fb55/entities?sponsor=1" + } + }, + "node_modules/es-define-property": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz", + "integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-errors": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz", + "integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==", + "dev": true, + "license": "MIT", + "engines": { 
+ "node": ">= 0.4" + } + }, + "node_modules/es-object-atoms": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz", + "integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-set-tostringtag": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz", + "integrity": "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.6", + "has-tostringtag": "^1.0.2", + "hasown": "^2.0.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/escalade": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.2.0.tgz", + "integrity": "sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/escape-string-regexp": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz", + "integrity": "sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.8.0" + } + }, + "node_modules/eslint": { + "version": "10.0.3", + "resolved": "https://registry.npmjs.org/eslint/-/eslint-10.0.3.tgz", + "integrity": "sha512-COV33RzXZkqhG9P2rZCFl9ZmJ7WL+gQSCRzE7RhkbclbQPtLAWReL7ysA0Sh4c8Im2U9ynybdR56PV0XcKvqaQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@eslint-community/eslint-utils": "^4.8.0", + "@eslint-community/regexpp": "^4.12.2", + "@eslint/config-array": "^0.23.3", + 
"@eslint/config-helpers": "^0.5.2", + "@eslint/core": "^1.1.1", + "@eslint/plugin-kit": "^0.6.1", + "@humanfs/node": "^0.16.6", + "@humanwhocodes/module-importer": "^1.0.1", + "@humanwhocodes/retry": "^0.4.2", + "@types/estree": "^1.0.6", + "ajv": "^6.14.0", + "cross-spawn": "^7.0.6", + "debug": "^4.3.2", + "escape-string-regexp": "^4.0.0", + "eslint-scope": "^9.1.2", + "eslint-visitor-keys": "^5.0.1", + "espree": "^11.1.1", + "esquery": "^1.7.0", + "esutils": "^2.0.2", + "fast-deep-equal": "^3.1.3", + "file-entry-cache": "^8.0.0", + "find-up": "^5.0.0", + "glob-parent": "^6.0.2", + "ignore": "^5.2.0", + "imurmurhash": "^0.1.4", + "is-glob": "^4.0.0", + "json-stable-stringify-without-jsonify": "^1.0.1", + "minimatch": "^10.2.4", + "natural-compare": "^1.4.0", + "optionator": "^0.9.3" + }, + "bin": { + "eslint": "bin/eslint.js" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + }, + "funding": { + "url": "https://eslint.org/donate" + }, + "peerDependencies": { + "jiti": "*" + }, + "peerDependenciesMeta": { + "jiti": { + "optional": true + } + } + }, + "node_modules/eslint-scope": { + "version": "9.1.2", + "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-9.1.2.tgz", + "integrity": "sha512-xS90H51cKw0jltxmvmHy2Iai1LIqrfbw57b79w/J7MfvDfkIkFZ+kj6zC3BjtUwh150HsSSdxXZcsuv72miDFQ==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "@types/esrecurse": "^4.3.1", + "@types/estree": "^1.0.8", + "esrecurse": "^4.3.0", + "estraverse": "^5.2.0" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/eslint-visitor-keys": { + "version": "3.4.3", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-3.4.3.tgz", + "integrity": "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^12.22.0 || ^14.17.0 
|| >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/eslint/node_modules/escape-string-regexp": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-4.0.0.tgz", + "integrity": "sha512-TtpcNJ3XAzx3Gq8sWRzJaVajRs0uVxA2YAkdb1jm2YkPz4G6egUFAyA3n5vtEIZefPk5Wa4UXbKuS5fKkJWdgA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/eslint/node_modules/eslint-visitor-keys": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-5.0.1.tgz", + "integrity": "sha512-tD40eHxA35h0PEIZNeIjkHoDR4YjjJp34biM0mDvplBe//mB+IHCqHDGV7pxF+7MklTvighcCPPZC7ynWyjdTA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/eslint/node_modules/ignore": { + "version": "5.3.2", + "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.3.2.tgz", + "integrity": "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 4" + } + }, + "node_modules/espree": { + "version": "11.2.0", + "resolved": "https://registry.npmjs.org/espree/-/espree-11.2.0.tgz", + "integrity": "sha512-7p3DrVEIopW1B1avAGLuCSh1jubc01H2JHc8B4qqGblmg5gI9yumBgACjWo4JlIc04ufug4xJ3SQI8HkS/Rgzw==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "acorn": "^8.16.0", + "acorn-jsx": "^5.3.2", + "eslint-visitor-keys": "^5.0.1" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/espree/node_modules/eslint-visitor-keys": { + "version": "5.0.1", + "resolved": 
"https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-5.0.1.tgz", + "integrity": "sha512-tD40eHxA35h0PEIZNeIjkHoDR4YjjJp34biM0mDvplBe//mB+IHCqHDGV7pxF+7MklTvighcCPPZC7ynWyjdTA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/esquery": { + "version": "1.7.0", + "resolved": "https://registry.npmjs.org/esquery/-/esquery-1.7.0.tgz", + "integrity": "sha512-Ap6G0WQwcU/LHsvLwON1fAQX9Zp0A2Y6Y/cJBl9r/JbW90Zyg4/zbG6zzKa2OTALELarYHmKu0GhpM5EO+7T0g==", + "dev": true, + "license": "BSD-3-Clause", + "dependencies": { + "estraverse": "^5.1.0" + }, + "engines": { + "node": ">=0.10" + } + }, + "node_modules/esrecurse": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz", + "integrity": "sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "estraverse": "^5.2.0" + }, + "engines": { + "node": ">=4.0" + } + }, + "node_modules/estraverse": { + "version": "5.3.0", + "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.3.0.tgz", + "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=4.0" + } + }, + "node_modules/esutils": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/esutils/-/esutils-2.0.3.tgz", + "integrity": "sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/expand-template": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/expand-template/-/expand-template-2.0.3.tgz", + "integrity": 
"sha512-XYfuKMvj4O35f/pOXLObndIRvyQ+/+6AhODh+OKWj9S9498pHHn/IMszH+gt0fBCRWMNfk1ZSp5x3AifmnI2vg==", + "dev": true, + "license": "(MIT OR WTFPL)", + "optional": true, + "engines": { + "node": ">=6" + } + }, + "node_modules/fast-deep-equal": { + "version": "3.1.3", + "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz", + "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==", + "dev": true, + "license": "MIT" + }, + "node_modules/fast-json-stable-stringify": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz", + "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==", + "dev": true, + "license": "MIT" + }, + "node_modules/fast-levenshtein": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/fast-levenshtein/-/fast-levenshtein-2.0.6.tgz", + "integrity": "sha512-DCXu6Ifhqcks7TZKY3Hxp3y6qphY5SJZmrWMDrKcERSOXWQdMhU9Ig/PYrzyw/ul9jOIyh0N4M0tbC5hodg8dw==", + "dev": true, + "license": "MIT" + }, + "node_modules/fd-slicer": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/fd-slicer/-/fd-slicer-1.1.0.tgz", + "integrity": "sha512-cE1qsB/VwyQozZ+q1dGxR8LBYNZeofhEdUNGSMbQD3Gw2lAzX9Zb3uIU6Ebc/Fmyjo9AWWfnn0AUCHqtevs/8g==", + "dev": true, + "license": "MIT", + "dependencies": { + "pend": "~1.2.0" + } + }, + "node_modules/fdir": { + "version": "6.5.0", + "resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz", + "integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12.0.0" + }, + "peerDependencies": { + "picomatch": "^3 || ^4" + }, + "peerDependenciesMeta": { + "picomatch": { + "optional": true + } + } + }, + "node_modules/file-entry-cache": { + "version": "8.0.0", + "resolved": 
"https://registry.npmjs.org/file-entry-cache/-/file-entry-cache-8.0.0.tgz", + "integrity": "sha512-XXTUwCvisa5oacNGRP9SfNtYBNAMi+RPwBFmblZEF7N7swHYQS6/Zfk7SRwx4D5j3CH211YNRco1DEMNVfZCnQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "flat-cache": "^4.0.0" + }, + "engines": { + "node": ">=16.0.0" + } + }, + "node_modules/find-up": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/find-up/-/find-up-5.0.0.tgz", + "integrity": "sha512-78/PXT1wlLLDgTzDs7sjq9hzz0vXD+zn+7wypEe4fXQxCmdmqfGsEPQxmiCSQI3ajFV91bVSsvNtrJRiW6nGng==", + "dev": true, + "license": "MIT", + "dependencies": { + "locate-path": "^6.0.0", + "path-exists": "^4.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/flat": { + "version": "5.0.2", + "resolved": "https://registry.npmjs.org/flat/-/flat-5.0.2.tgz", + "integrity": "sha512-b6suED+5/3rTpUBdG1gupIl8MPFCAMA0QXwmljLhvCUKcUvdE4gWky9zpuGCcXHOsz4J9wPGNWq6OKpmIzz3hQ==", + "dev": true, + "license": "BSD-3-Clause", + "bin": { + "flat": "cli.js" + } + }, + "node_modules/flat-cache": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/flat-cache/-/flat-cache-4.0.1.tgz", + "integrity": "sha512-f7ccFPK3SXFHpx15UIGyRJ/FJQctuKZ0zVuN3frBo4HnK3cay9VEW0R6yPYFHC0AgqhukPzKjq22t5DmAyqGyw==", + "dev": true, + "license": "MIT", + "dependencies": { + "flatted": "^3.2.9", + "keyv": "^4.5.4" + }, + "engines": { + "node": ">=16" + } + }, + "node_modules/flatted": { + "version": "3.3.4", + "resolved": "https://registry.npmjs.org/flatted/-/flatted-3.3.4.tgz", + "integrity": "sha512-3+mMldrTAPdta5kjX2G2J7iX4zxtnwpdA8Tr2ZSjkyPSanvbZAcy6flmtnXbEybHrDcU9641lxrMfFuUxVz9vA==", + "dev": true, + "license": "ISC" + }, + "node_modules/foreground-child": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/foreground-child/-/foreground-child-3.3.1.tgz", + "integrity": 
"sha512-gIXjKqtFuWEgzFRJA9WCQeSJLZDjgJUOMCMzxtvFq/37KojM1BFGufqsCy0r4qSQmYLsZYMeyRqzIWOMup03sw==", + "dev": true, + "license": "ISC", + "dependencies": { + "cross-spawn": "^7.0.6", + "signal-exit": "^4.0.1" + }, + "engines": { + "node": ">=14" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/form-data": { + "version": "4.0.5", + "resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.5.tgz", + "integrity": "sha512-8RipRLol37bNs2bhoV67fiTEvdTrbMUYcFTiy3+wuuOnUog2QBHCZWXDRijWQfAkhBj2Uf5UnVaiWwA5vdd82w==", + "dev": true, + "license": "MIT", + "dependencies": { + "asynckit": "^0.4.0", + "combined-stream": "^1.0.8", + "es-set-tostringtag": "^2.1.0", + "hasown": "^2.0.2", + "mime-types": "^2.1.12" + }, + "engines": { + "node": ">= 6" + } + }, + "node_modules/fs-constants": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/fs-constants/-/fs-constants-1.0.0.tgz", + "integrity": "sha512-y6OAwoSIf7FyjMIv94u+b5rdheZEjzR63GTyZJm5qh4Bi+2YgwLCcI/fPFZkL5PSixOt6ZNKm+w+Hfp/Bciwow==", + "dev": true, + "license": "MIT", + "optional": true + }, + "node_modules/fs.realpath": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz", + "integrity": "sha512-OO0pH2lK6a0hZnAdau5ItzHPI6pUlvI7jMVnxUQRtw4owF2wk8lOSabtGDCTP4Ggrg2MbGnWO9X8K1t4+fGMDw==", + "dev": true, + "license": "ISC" + }, + "node_modules/function-bind": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz", + "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==", + "dev": true, + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/get-caller-file": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz", + "integrity": 
"sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==", + "dev": true, + "license": "ISC", + "engines": { + "node": "6.* || 8.* || >= 10.*" + } + }, + "node_modules/get-east-asian-width": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/get-east-asian-width/-/get-east-asian-width-1.5.0.tgz", + "integrity": "sha512-CQ+bEO+Tva/qlmw24dCejulK5pMzVnUOFOijVogd3KQs07HnRIgp8TGipvCCRT06xeYEbpbgwaCxglFyiuIcmA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/get-intrinsic": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz", + "integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind-apply-helpers": "^1.0.2", + "es-define-property": "^1.0.1", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.1.1", + "function-bind": "^1.1.2", + "get-proto": "^1.0.1", + "gopd": "^1.2.0", + "has-symbols": "^1.1.0", + "hasown": "^2.0.2", + "math-intrinsics": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/get-proto": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz", + "integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==", + "dev": true, + "license": "MIT", + "dependencies": { + "dunder-proto": "^1.0.1", + "es-object-atoms": "^1.0.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/github-from-package": { + "version": "0.0.0", + "resolved": "https://registry.npmjs.org/github-from-package/-/github-from-package-0.0.0.tgz", + "integrity": "sha512-SyHy3T1v2NUXn29OsWdxmK6RwHD+vkj3v8en8AOBZ1wBQ/hCAQ5bAQTD02kW4W9tUp/3Qh6J8r9EvntiyCmOOw==", + 
"dev": true, + "license": "MIT", + "optional": true + }, + "node_modules/glob": { + "version": "7.2.3", + "resolved": "https://registry.npmjs.org/glob/-/glob-7.2.3.tgz", + "integrity": "sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q==", + "deprecated": "Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me", + "dev": true, + "license": "ISC", + "dependencies": { + "fs.realpath": "^1.0.0", + "inflight": "^1.0.4", + "inherits": "2", + "minimatch": "^3.1.1", + "once": "^1.3.0", + "path-is-absolute": "^1.0.0" + }, + "engines": { + "node": "*" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/glob-parent": { + "version": "6.0.2", + "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz", + "integrity": "sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A==", + "dev": true, + "license": "ISC", + "dependencies": { + "is-glob": "^4.0.3" + }, + "engines": { + "node": ">=10.13.0" + } + }, + "node_modules/glob/node_modules/balanced-match": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", + "integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==", + "dev": true, + "license": "MIT" + }, + "node_modules/glob/node_modules/brace-expansion": { + "version": "1.1.12", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz", + "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==", + "dev": true, + "license": "MIT", + "dependencies": { + "balanced-match": "^1.0.0", + "concat-map": "0.0.1" + } + }, + "node_modules/glob/node_modules/minimatch": { + 
"version": "3.1.5", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.5.tgz", + "integrity": "sha512-VgjWUsnnT6n+NUk6eZq77zeFdpW2LWDzP6zFGrCbHXiYNul5Dzqk2HHQ5uFH2DNW5Xbp8+jVzaeNt94ssEEl4w==", + "dev": true, + "license": "ISC", + "dependencies": { + "brace-expansion": "^1.1.7" + }, + "engines": { + "node": "*" + } + }, + "node_modules/gopd": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz", + "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/has-flag": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz", + "integrity": "sha512-sKJf1+ceQBr4SMkvQnBDNDtf4TXpVhVGateu0t918bl30FnbE2m4vNLX+VWe/dpjlb+HugGYzW7uQXH98HPEYw==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=4" + } + }, + "node_modules/has-symbols": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz", + "integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/has-tostringtag": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/has-tostringtag/-/has-tostringtag-1.0.2.tgz", + "integrity": "sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==", + "dev": true, + "license": "MIT", + "dependencies": { + "has-symbols": "^1.0.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/hasown": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz", + 
"integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/he": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/he/-/he-1.2.0.tgz", + "integrity": "sha512-F/1DnUGPopORZi0ni+CvrCgHQ5FyEAHRLSApuYWMmrbSwoN2Mn/7k+Gl38gJnR7yyDZk6WLXwiGod1JOWNDKGw==", + "dev": true, + "license": "MIT", + "bin": { + "he": "bin/he" + } + }, + "node_modules/hosted-git-info": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-4.1.0.tgz", + "integrity": "sha512-kyCuEOWjJqZuDbRHzL8V93NzQhwIB71oFWSyzVo+KPZI+pnQPPxucdkrOZvkLRnrf5URsQM+IJ09Dw29cRALIA==", + "dev": true, + "license": "ISC", + "dependencies": { + "lru-cache": "^6.0.0" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/htmlparser2": { + "version": "10.1.0", + "resolved": "https://registry.npmjs.org/htmlparser2/-/htmlparser2-10.1.0.tgz", + "integrity": "sha512-VTZkM9GWRAtEpveh7MSF6SjjrpNVNNVJfFup7xTY3UpFtm67foy9HDVXneLtFVt4pMz5kZtgNcvCniNFb1hlEQ==", + "dev": true, + "funding": [ + "https://github.com/fb55/htmlparser2?sponsor=1", + { + "type": "github", + "url": "https://github.com/sponsors/fb55" + } + ], + "license": "MIT", + "dependencies": { + "domelementtype": "^2.3.0", + "domhandler": "^5.0.3", + "domutils": "^3.2.2", + "entities": "^7.0.1" + } + }, + "node_modules/htmlparser2/node_modules/entities": { + "version": "7.0.1", + "resolved": "https://registry.npmjs.org/entities/-/entities-7.0.1.tgz", + "integrity": "sha512-TWrgLOFUQTH994YUyl1yT4uyavY5nNB5muff+RtWaqNVCAK408b5ZnnbNAUEWLTCpum9w6arT70i1XdQ4UeOPA==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.12" + }, + "funding": { + "url": "https://github.com/fb55/entities?sponsor=1" + } + }, + "node_modules/http-proxy-agent": { + "version": "7.0.2", + "resolved": 
"https://registry.npmjs.org/http-proxy-agent/-/http-proxy-agent-7.0.2.tgz", + "integrity": "sha512-T1gkAiYYDWYx3V5Bmyu7HcfcvL7mUrTWiM6yOfa3PIphViJ/gFPbvidQ+veqSOHci/PxBcDabeUNCzpOODJZig==", + "dev": true, + "license": "MIT", + "dependencies": { + "agent-base": "^7.1.0", + "debug": "^4.3.4" + }, + "engines": { + "node": ">= 14" + } + }, + "node_modules/https-proxy-agent": { + "version": "7.0.6", + "resolved": "https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-7.0.6.tgz", + "integrity": "sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw==", + "dev": true, + "license": "MIT", + "dependencies": { + "agent-base": "^7.1.2", + "debug": "4" + }, + "engines": { + "node": ">= 14" + } + }, + "node_modules/iconv-lite": { + "version": "0.6.3", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.6.3.tgz", + "integrity": "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==", + "dev": true, + "license": "MIT", + "dependencies": { + "safer-buffer": ">= 2.1.2 < 3.0.0" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/ieee754": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/ieee754/-/ieee754-1.2.1.tgz", + "integrity": "sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "BSD-3-Clause", + "optional": true + }, + "node_modules/ignore": { + "version": "7.0.5", + "resolved": "https://registry.npmjs.org/ignore/-/ignore-7.0.5.tgz", + "integrity": "sha512-Hs59xBNfUIunMFgWAbGX5cq6893IbWg4KnrjbYwX3tx0ztorVgTDA6B2sxf8ejHJ4wz8BqGUMYlnzNBer5NvGg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 4" + } + }, + 
"node_modules/immediate": { + "version": "3.0.6", + "resolved": "https://registry.npmjs.org/immediate/-/immediate-3.0.6.tgz", + "integrity": "sha512-XXOFtyqDjNDAQxVfYxuF7g9Il/IbWmmlQg2MYKOH8ExIT1qg6xc4zyS3HaEEATgs1btfzxq15ciUiY7gjSXRGQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/imurmurhash": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/imurmurhash/-/imurmurhash-0.1.4.tgz", + "integrity": "sha512-JmXMZ6wuvDmLiHEml9ykzqO6lwFbof0GG4IkcGaENdCRDDmMVnny7s5HsIgHCbaq0w2MyPhDqkhTUgS2LU2PHA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.8.19" + } + }, + "node_modules/inflight": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/inflight/-/inflight-1.0.6.tgz", + "integrity": "sha512-k92I/b08q4wvFscXCLvqfsHCrjrF7yiXsQuIVvVE7N82W3+aqpzuUdBbfhWcy/FZR3/4IgflMgKLOsvPDrGCJA==", + "deprecated": "This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.", + "dev": true, + "license": "ISC", + "dependencies": { + "once": "^1.3.0", + "wrappy": "1" + } + }, + "node_modules/inherits": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", + "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==", + "dev": true, + "license": "ISC" + }, + "node_modules/ini": { + "version": "1.3.8", + "resolved": "https://registry.npmjs.org/ini/-/ini-1.3.8.tgz", + "integrity": "sha512-JV/yugV2uzW5iMRSiZAyDtQd+nxtUnjeLt0acNdw98kKLrvuRVyB80tsREOE7yvGVgalhZ6RNXCmEHkUKBKxew==", + "dev": true, + "license": "ISC", + "optional": true + }, + "node_modules/is-docker": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-docker/-/is-docker-3.0.0.tgz", + "integrity": "sha512-eljcgEDlEns/7AXFosB5K/2nCM4P7FQPkGc/DWLy5rmFEWvZayGrik1d9/QIY5nJ4f9YsVvBkA6kJpHn9rISdQ==", + "dev": true, + 
"license": "MIT", + "bin": { + "is-docker": "cli.js" + }, + "engines": { + "node": "^12.20.0 || ^14.13.1 || >=16.0.0" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-extglob": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz", + "integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-fullwidth-code-point": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz", + "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/is-glob": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz", + "integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==", + "dev": true, + "license": "MIT", + "dependencies": { + "is-extglob": "^2.1.1" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-inside-container": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-inside-container/-/is-inside-container-1.0.0.tgz", + "integrity": "sha512-KIYLCCJghfHZxqjYBE7rEy0OBuTd5xCHS7tHVgvCLkx7StIoaxwNW3hCALgEUjFfeRk+MG/Qxmp/vtETEF3tRA==", + "dev": true, + "license": "MIT", + "dependencies": { + "is-docker": "^3.0.0" + }, + "bin": { + "is-inside-container": "cli.js" + }, + "engines": { + "node": ">=14.16" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-interactive": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/is-interactive/-/is-interactive-2.0.0.tgz", + "integrity": 
"sha512-qP1vozQRI+BMOPcjFzrjXuQvdak2pHNUMZoeG2eRbiSqyvbEf/wQtEOTOX1guk6E3t36RkaqiSt8A/6YElNxLQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-path-inside": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/is-path-inside/-/is-path-inside-3.0.3.tgz", + "integrity": "sha512-Fd4gABb+ycGAmKou8eMftCupSir5lRxqf4aD/vd0cD2qc4HL07OjCeuHMr8Ro4CoMaeCKDB0/ECBOVWjTwUvPQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/is-plain-obj": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-2.1.0.tgz", + "integrity": "sha512-YWnfyRwxL/+SsrWYfOpUtz5b3YD+nyfkHvjbcanzk8zgyO4ASD67uVMRt8k5bM4lLMDnXfriRhOpemw+NfT1eA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/is-unicode-supported": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/is-unicode-supported/-/is-unicode-supported-0.1.0.tgz", + "integrity": "sha512-knxG2q4UC3u8stRGyAVJCOdxFmv5DZiRcdlIaAQXAbSfJya+OhopNotLQrstBhququ4ZpuKbDc/8S6mgXgPFPw==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-wsl": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/is-wsl/-/is-wsl-3.1.1.tgz", + "integrity": "sha512-e6rvdUCiQCAuumZslxRJWR/Doq4VpPR82kqclvcS0efgt430SlGIk05vdCN58+VrzgtIcfNODjozVielycD4Sw==", + "dev": true, + "license": "MIT", + "dependencies": { + "is-inside-container": "^1.0.0" + }, + "engines": { + "node": ">=16" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/isarray": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz", + "integrity": 
"sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/isexe": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz", + "integrity": "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==", + "dev": true, + "license": "ISC" + }, + "node_modules/jackspeak": { + "version": "3.4.3", + "resolved": "https://registry.npmjs.org/jackspeak/-/jackspeak-3.4.3.tgz", + "integrity": "sha512-OGlZQpz2yfahA/Rd1Y8Cd9SIEsqvXkLVoSw/cgwhnhFMDbsQFeZYoJJ7bIZBS9BcamUW96asq/npPWugM+RQBw==", + "dev": true, + "license": "BlueOak-1.0.0", + "dependencies": { + "@isaacs/cliui": "^8.0.2" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + }, + "optionalDependencies": { + "@pkgjs/parseargs": "^0.11.0" + } + }, + "node_modules/js-yaml": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.1.tgz", + "integrity": "sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==", + "dev": true, + "license": "MIT", + "dependencies": { + "argparse": "^2.0.1" + }, + "bin": { + "js-yaml": "bin/js-yaml.js" + } + }, + "node_modules/json-buffer": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/json-buffer/-/json-buffer-3.0.1.tgz", + "integrity": "sha512-4bV5BfR2mqfQTJm+V5tPPdf+ZpuhiIvTuAB5g8kcrXOZpTT/QwwVRWBywX1ozr6lEuPdbHxwaJlm9G6mI2sfSQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/json-schema-traverse": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", + "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==", + "dev": true, + "license": "MIT" + }, + "node_modules/json-stable-stringify-without-jsonify": { + "version": "1.0.1", + "resolved": 
"https://registry.npmjs.org/json-stable-stringify-without-jsonify/-/json-stable-stringify-without-jsonify-1.0.1.tgz", + "integrity": "sha512-Bdboy+l7tA3OGW6FjyFHWkP5LuByj1Tk33Ljyq0axyzdk9//JSi2u3fP1QSmd1KNwq6VOKYGlAu87CisVir6Pw==", + "dev": true, + "license": "MIT" + }, + "node_modules/jsonc-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/jsonc-parser/-/jsonc-parser-3.3.1.tgz", + "integrity": "sha512-HUgH65KyejrUFPvHFPbqOY0rsFip3Bo5wb4ngvdi1EpCYWUQDC5V+Y7mZws+DLkr4M//zQJoanu1SP+87Dv1oQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/jsonwebtoken": { + "version": "9.0.3", + "resolved": "https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-9.0.3.tgz", + "integrity": "sha512-MT/xP0CrubFRNLNKvxJ2BYfy53Zkm++5bX9dtuPbqAeQpTVe0MQTFhao8+Cp//EmJp244xt6Drw/GVEGCUj40g==", + "dev": true, + "license": "MIT", + "dependencies": { + "jws": "^4.0.1", + "lodash.includes": "^4.3.0", + "lodash.isboolean": "^3.0.3", + "lodash.isinteger": "^4.0.4", + "lodash.isnumber": "^3.0.3", + "lodash.isplainobject": "^4.0.6", + "lodash.isstring": "^4.0.1", + "lodash.once": "^4.0.0", + "ms": "^2.1.1", + "semver": "^7.5.4" + }, + "engines": { + "node": ">=12", + "npm": ">=6" + } + }, + "node_modules/jszip": { + "version": "3.10.1", + "resolved": "https://registry.npmjs.org/jszip/-/jszip-3.10.1.tgz", + "integrity": "sha512-xXDvecyTpGLrqFrvkrUSoxxfJI5AH7U8zxxtVclpsUtMCq4JQ290LY8AW5c7Ggnr/Y/oK+bQMbqK2qmtk3pN4g==", + "dev": true, + "license": "(MIT OR GPL-3.0-or-later)", + "dependencies": { + "lie": "~3.3.0", + "pako": "~1.0.2", + "readable-stream": "~2.3.6", + "setimmediate": "^1.0.5" + } + }, + "node_modules/jszip/node_modules/readable-stream": { + "version": "2.3.8", + "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.8.tgz", + "integrity": "sha512-8p0AUk4XODgIewSi0l8Epjs+EVnWiK7NoDIEGU0HhE7+ZyY8D1IMY7odu5lRrFXGg71L15KG8QrPmum45RTtdA==", + "dev": true, + "license": "MIT", + "dependencies": { + "core-util-is": "~1.0.0", + "inherits": 
"~2.0.3", + "isarray": "~1.0.0", + "process-nextick-args": "~2.0.0", + "safe-buffer": "~5.1.1", + "string_decoder": "~1.1.1", + "util-deprecate": "~1.0.1" + } + }, + "node_modules/jszip/node_modules/safe-buffer": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz", + "integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==", + "dev": true, + "license": "MIT" + }, + "node_modules/jszip/node_modules/string_decoder": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz", + "integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==", + "dev": true, + "license": "MIT", + "dependencies": { + "safe-buffer": "~5.1.0" + } + }, + "node_modules/jwa": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/jwa/-/jwa-2.0.1.tgz", + "integrity": "sha512-hRF04fqJIP8Abbkq5NKGN0Bbr3JxlQ+qhZufXVr0DvujKy93ZCbXZMHDL4EOtodSbCWxOqR8MS1tXA5hwqCXDg==", + "dev": true, + "license": "MIT", + "dependencies": { + "buffer-equal-constant-time": "^1.0.1", + "ecdsa-sig-formatter": "1.0.11", + "safe-buffer": "^5.0.1" + } + }, + "node_modules/jws": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/jws/-/jws-4.0.1.tgz", + "integrity": "sha512-EKI/M/yqPncGUUh44xz0PxSidXFr/+r0pA70+gIYhjv+et7yxM+s29Y+VGDkovRofQem0fs7Uvf4+YmAdyRduA==", + "dev": true, + "license": "MIT", + "dependencies": { + "jwa": "^2.0.1", + "safe-buffer": "^5.0.1" + } + }, + "node_modules/keytar": { + "version": "7.9.0", + "resolved": "https://registry.npmjs.org/keytar/-/keytar-7.9.0.tgz", + "integrity": "sha512-VPD8mtVtm5JNtA2AErl6Chp06JBfy7diFQ7TQQhdpWOl6MrCRB+eRbvAZUsbGQS9kiMq0coJsy0W0vHpDCkWsQ==", + "dev": true, + "hasInstallScript": true, + "license": "MIT", + "optional": true, + "dependencies": { + "node-addon-api": "^4.3.0", + "prebuild-install": "^7.0.1" + } + }, + "node_modules/keyv": { + 
"version": "4.5.4", + "resolved": "https://registry.npmjs.org/keyv/-/keyv-4.5.4.tgz", + "integrity": "sha512-oxVHkHR/EJf2CNXnWxRLW6mg7JyCCUcG0DtEGmL2ctUo1PNTin1PUil+r/+4r5MpVgC/fn1kjsx7mjSujKqIpw==", + "dev": true, + "license": "MIT", + "dependencies": { + "json-buffer": "3.0.1" + } + }, + "node_modules/leven": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/leven/-/leven-3.1.0.tgz", + "integrity": "sha512-qsda+H8jTaUaN/x5vzW2rzc+8Rw4TAQ/4KjB46IwK5VH+IlVeeeje/EoZRpiXvIqjFgK84QffqPztGI3VBLG1A==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/levn": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/levn/-/levn-0.4.1.tgz", + "integrity": "sha512-+bT2uH4E5LGE7h/n3evcS/sQlJXCpIp6ym8OWJ5eV6+67Dsql/LaaT7qJBAt2rzfoa/5QBGBhxDix1dMt2kQKQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "prelude-ls": "^1.2.1", + "type-check": "~0.4.0" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/lie": { + "version": "3.3.0", + "resolved": "https://registry.npmjs.org/lie/-/lie-3.3.0.tgz", + "integrity": "sha512-UaiMJzeWRlEujzAuw5LokY1L5ecNQYZKfmyZ9L7wDHb/p5etKaxXhohBcrw0EYby+G/NA52vRSN4N39dxHAIwQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "immediate": "~3.0.5" + } + }, + "node_modules/linkify-it": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/linkify-it/-/linkify-it-3.0.3.tgz", + "integrity": "sha512-ynTsyrFSdE5oZ/O9GEf00kPngmOfVwazR5GKDq6EYfhlpFug3J2zybX56a2PRRpc9P+FuSoGNAwjlbDs9jJBPQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "uc.micro": "^1.0.1" + } + }, + "node_modules/locate-path": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-6.0.0.tgz", + "integrity": "sha512-iPZK6eYjbxRu3uB4/WZ3EsEIMJFMqAoopl3R+zuq0UjcAm/MO6KCweDgPfP3elTztoKP3KtnVHxTn2NHBSDVUw==", + "dev": true, + "license": "MIT", + "dependencies": { + "p-locate": "^5.0.0" + }, + "engines": { + "node": ">=10" + }, + 
"funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/lodash.includes": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/lodash.includes/-/lodash.includes-4.3.0.tgz", + "integrity": "sha512-W3Bx6mdkRTGtlJISOvVD/lbqjTlPPUDTMnlXZFnVwi9NKJ6tiAk6LVdlhZMm17VZisqhKcgzpO5Wz91PCt5b0w==", + "dev": true, + "license": "MIT" + }, + "node_modules/lodash.isboolean": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/lodash.isboolean/-/lodash.isboolean-3.0.3.tgz", + "integrity": "sha512-Bz5mupy2SVbPHURB98VAcw+aHh4vRV5IPNhILUCsOzRmsTmSQ17jIuqopAentWoehktxGd9e/hbIXq980/1QJg==", + "dev": true, + "license": "MIT" + }, + "node_modules/lodash.isinteger": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/lodash.isinteger/-/lodash.isinteger-4.0.4.tgz", + "integrity": "sha512-DBwtEWN2caHQ9/imiNeEA5ys1JoRtRfY3d7V9wkqtbycnAmTvRRmbHKDV4a0EYc678/dia0jrte4tjYwVBaZUA==", + "dev": true, + "license": "MIT" + }, + "node_modules/lodash.isnumber": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/lodash.isnumber/-/lodash.isnumber-3.0.3.tgz", + "integrity": "sha512-QYqzpfwO3/CWf3XP+Z+tkQsfaLL/EnUlXWVkIk5FUPc4sBdTehEqZONuyRt2P67PXAk+NXmTBcc97zw9t1FQrw==", + "dev": true, + "license": "MIT" + }, + "node_modules/lodash.isplainobject": { + "version": "4.0.6", + "resolved": "https://registry.npmjs.org/lodash.isplainobject/-/lodash.isplainobject-4.0.6.tgz", + "integrity": "sha512-oSXzaWypCMHkPC3NvBEaPHf0KsA5mvPrOPgQWDsbg8n7orZ290M0BmC/jgRZ4vcJ6DTAhjrsSYgdsW/F+MFOBA==", + "dev": true, + "license": "MIT" + }, + "node_modules/lodash.isstring": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/lodash.isstring/-/lodash.isstring-4.0.1.tgz", + "integrity": "sha512-0wJxfxH1wgO3GrbuP+dTTk7op+6L41QCXbGINEmD+ny/G/eCqGzxyCsh7159S+mgDDcoarnBw6PC1PS5+wUGgw==", + "dev": true, + "license": "MIT" + }, + "node_modules/lodash.once": { + "version": "4.1.1", + "resolved": 
"https://registry.npmjs.org/lodash.once/-/lodash.once-4.1.1.tgz", + "integrity": "sha512-Sb487aTOCr9drQVL8pIxOzVhafOjZN9UU54hiN8PU3uAiSV7lx1yYNpbNmex2PK6dSJoNTSJUUswT651yww3Mg==", + "dev": true, + "license": "MIT" + }, + "node_modules/log-symbols": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/log-symbols/-/log-symbols-4.1.0.tgz", + "integrity": "sha512-8XPvpAA8uyhfteu8pIvQxpJZ7SYYdpUivZpGy6sFsBuKRY/7rQGavedeB8aK+Zkyq6upMFVL/9AW6vOYzfRyLg==", + "dev": true, + "license": "MIT", + "dependencies": { + "chalk": "^4.1.0", + "is-unicode-supported": "^0.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/log-symbols/node_modules/ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-convert": "^2.0.1" + }, + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/log-symbols/node_modules/chalk": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz", + "integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^4.1.0", + "supports-color": "^7.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/chalk?sponsor=1" + } + }, + "node_modules/log-symbols/node_modules/color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "license": "MIT", + "dependencies": { + 
"color-name": "~1.1.4" + }, + "engines": { + "node": ">=7.0.0" + } + }, + "node_modules/log-symbols/node_modules/color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true, + "license": "MIT" + }, + "node_modules/log-symbols/node_modules/has-flag": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/log-symbols/node_modules/supports-color": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", + "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==", + "dev": true, + "license": "MIT", + "dependencies": { + "has-flag": "^4.0.0" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dev": true, + "license": "ISC", + "dependencies": { + "yallist": "^4.0.0" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/markdown-it": { + "version": "12.3.2", + "resolved": "https://registry.npmjs.org/markdown-it/-/markdown-it-12.3.2.tgz", + "integrity": "sha512-TchMembfxfNVpHkbtriWltGWc+m3xszaRD0CZup7GFFhzIgQqxIfn3eGj1yZpfuflzPvfkt611B2Q/Bsk1YnGg==", + "dev": true, + "license": "MIT", + "dependencies": { + "argparse": "^2.0.1", + "entities": "~2.1.0", + "linkify-it": "^3.0.1", + "mdurl": "^1.0.1", + "uc.micro": "^1.0.5" + }, + "bin": { + "markdown-it": "bin/markdown-it.js" + } + }, + 
"node_modules/markdown-it/node_modules/entities": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/entities/-/entities-2.1.0.tgz", + "integrity": "sha512-hCx1oky9PFrJ611mf0ifBLBRW8lUUVRlFolb5gWRfIELabBlbp9xZvrqZLZAs+NxFnbfQoeGd8wDkygjg7U85w==", + "dev": true, + "license": "BSD-2-Clause", + "funding": { + "url": "https://github.com/fb55/entities?sponsor=1" + } + }, + "node_modules/math-intrinsics": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz", + "integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/mdurl": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/mdurl/-/mdurl-1.0.1.tgz", + "integrity": "sha512-/sKlQJCBYVY9Ers9hqzKou4H6V5UWc/M59TH2dvkt+84itfnq7uFOMLpOiOS4ujvHP4etln18fmIxA5R5fll0g==", + "dev": true, + "license": "MIT" + }, + "node_modules/mime": { + "version": "1.6.0", + "resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz", + "integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==", + "dev": true, + "license": "MIT", + "bin": { + "mime": "cli.js" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/mime-db": { + "version": "1.52.0", + "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz", + "integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/mime-types": { + "version": "2.1.35", + "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz", + "integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==", + "dev": true, + "license": "MIT", + "dependencies": { + "mime-db": "1.52.0" + }, + "engines": { + "node": 
">= 0.6" + } + }, + "node_modules/mimic-function": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/mimic-function/-/mimic-function-5.0.1.tgz", + "integrity": "sha512-VP79XUPxV2CigYP3jWwAUFSku2aKqBH7uTAapFWCBqutsbmDo96KY5o8uh6U+/YSIn5OxJnXp73beVkpqMIGhA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/mimic-response": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/mimic-response/-/mimic-response-3.1.0.tgz", + "integrity": "sha512-z0yWI+4FDrrweS8Zmt4Ej5HdJmky15+L2e6Wgn3+iK5fWzb6T3fhNFq2+MeTRb064c6Wr4N/wv0DzQTjNzHNGQ==", + "dev": true, + "license": "MIT", + "optional": true, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/minimatch": { + "version": "10.2.4", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-10.2.4.tgz", + "integrity": "sha512-oRjTw/97aTBN0RHbYCdtF1MQfvusSIBQM0IZEgzl6426+8jSC0nF1a/GmnVLpfB9yyr6g6FTqWqiZVbxrtaCIg==", + "dev": true, + "license": "BlueOak-1.0.0", + "dependencies": { + "brace-expansion": "^5.0.2" + }, + "engines": { + "node": "18 || 20 || >=22" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/minimist": { + "version": "1.2.8", + "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.8.tgz", + "integrity": "sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA==", + "dev": true, + "license": "MIT", + "optional": true, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/minipass": { + "version": "7.1.3", + "resolved": "https://registry.npmjs.org/minipass/-/minipass-7.1.3.tgz", + "integrity": "sha512-tEBHqDnIoM/1rXME1zgka9g6Q2lcoCkxHLuc7ODJ5BxbP5d4c2Z5cGgtXAku59200Cx7diuHTOYfSBD8n6mm8A==", + "dev": true, + "license": "BlueOak-1.0.0", + "engines": { + "node": ">=16 || 14 
>=14.17" + } + }, + "node_modules/mkdirp-classic": { + "version": "0.5.3", + "resolved": "https://registry.npmjs.org/mkdirp-classic/-/mkdirp-classic-0.5.3.tgz", + "integrity": "sha512-gKLcREMhtuZRwRAfqP3RFW+TK4JqApVBtOIftVgjuABpAtpxhPGaDcfvbhNvD0B8iD1oUr/txX35NjcaY6Ns/A==", + "dev": true, + "license": "MIT", + "optional": true + }, + "node_modules/mocha": { + "version": "11.7.5", + "resolved": "https://registry.npmjs.org/mocha/-/mocha-11.7.5.tgz", + "integrity": "sha512-mTT6RgopEYABzXWFx+GcJ+ZQ32kp4fMf0xvpZIIfSq9Z8lC/++MtcCnQ9t5FP2veYEP95FIYSvW+U9fV4xrlig==", + "dev": true, + "license": "MIT", + "dependencies": { + "browser-stdout": "^1.3.1", + "chokidar": "^4.0.1", + "debug": "^4.3.5", + "diff": "^7.0.0", + "escape-string-regexp": "^4.0.0", + "find-up": "^5.0.0", + "glob": "^10.4.5", + "he": "^1.2.0", + "is-path-inside": "^3.0.3", + "js-yaml": "^4.1.0", + "log-symbols": "^4.1.0", + "minimatch": "^9.0.5", + "ms": "^2.1.3", + "picocolors": "^1.1.1", + "serialize-javascript": "^6.0.2", + "strip-json-comments": "^3.1.1", + "supports-color": "^8.1.1", + "workerpool": "^9.2.0", + "yargs": "^17.7.2", + "yargs-parser": "^21.1.1", + "yargs-unparser": "^2.0.0" + }, + "bin": { + "_mocha": "bin/_mocha", + "mocha": "bin/mocha.js" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/mocha/node_modules/balanced-match": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", + "integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==", + "dev": true, + "license": "MIT" + }, + "node_modules/mocha/node_modules/brace-expansion": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.2.tgz", + "integrity": "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "balanced-match": "^1.0.0" + } + }, + 
"node_modules/mocha/node_modules/escape-string-regexp": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-4.0.0.tgz", + "integrity": "sha512-TtpcNJ3XAzx3Gq8sWRzJaVajRs0uVxA2YAkdb1jm2YkPz4G6egUFAyA3n5vtEIZefPk5Wa4UXbKuS5fKkJWdgA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/mocha/node_modules/glob": { + "version": "10.5.0", + "resolved": "https://registry.npmjs.org/glob/-/glob-10.5.0.tgz", + "integrity": "sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg==", + "deprecated": "Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me", + "dev": true, + "license": "ISC", + "dependencies": { + "foreground-child": "^3.1.0", + "jackspeak": "^3.1.2", + "minimatch": "^9.0.4", + "minipass": "^7.1.2", + "package-json-from-dist": "^1.0.0", + "path-scurry": "^1.11.1" + }, + "bin": { + "glob": "dist/esm/bin.mjs" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/mocha/node_modules/has-flag": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/mocha/node_modules/minimatch": { + "version": "9.0.9", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-9.0.9.tgz", + "integrity": "sha512-OBwBN9AL4dqmETlpS2zasx+vTeWclWzkblfZk7KTA5j3jeOONz/tRCnZomUyvNg83wL5Zv9Ss6HMJXAgL8R2Yg==", + "dev": true, + "license": "ISC", + "dependencies": { + "brace-expansion": "^2.0.2" + }, + "engines": { + 
"node": ">=16 || 14 >=14.17" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/mocha/node_modules/strip-json-comments": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-3.1.1.tgz", + "integrity": "sha512-6fPc+R4ihwqP6N/aIv2f1gMH8lOVtWQHoqC4yK6oSDVVocumAsfCqjkXnqiYMhmMwS/mEHLp7Vehlt3ql6lEig==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/mocha/node_modules/supports-color": { + "version": "8.1.1", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-8.1.1.tgz", + "integrity": "sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "has-flag": "^4.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/supports-color?sponsor=1" + } + }, + "node_modules/ms": { + "version": "2.1.3", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", + "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==", + "dev": true, + "license": "MIT" + }, + "node_modules/mute-stream": { + "version": "0.0.8", + "resolved": "https://registry.npmjs.org/mute-stream/-/mute-stream-0.0.8.tgz", + "integrity": "sha512-nnbWWOkoWyUsTjKrhgD0dcz22mdkSnpYqbEjIm2nhwhuxlSkpywJmBo8h0ZqJdkp73mb90SssHkN4rsRaBAfAA==", + "dev": true, + "license": "ISC" + }, + "node_modules/napi-build-utils": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/napi-build-utils/-/napi-build-utils-2.0.0.tgz", + "integrity": "sha512-GEbrYkbfF7MoNaoh2iGG84Mnf/WZfB0GdGEsM8wz7Expx/LlWf5U8t9nvJKXSp3qr5IsEbK04cBGhol/KwOsWA==", + "dev": true, + "license": "MIT", + "optional": true + }, + "node_modules/natural-compare": { + "version": "1.4.0", + "resolved": 
"https://registry.npmjs.org/natural-compare/-/natural-compare-1.4.0.tgz", + "integrity": "sha512-OWND8ei3VtNC9h7V60qff3SVobHr996CTwgxubgyQYEpg290h9J0buyECNNJexkFm5sOajh5G116RYA1c8ZMSw==", + "dev": true, + "license": "MIT" + }, + "node_modules/node-abi": { + "version": "3.87.0", + "resolved": "https://registry.npmjs.org/node-abi/-/node-abi-3.87.0.tgz", + "integrity": "sha512-+CGM1L1CgmtheLcBuleyYOn7NWPVu0s0EJH2C4puxgEZb9h8QpR9G2dBfZJOAUhi7VQxuBPMd0hiISWcTyiYyQ==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "semver": "^7.3.5" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/node-addon-api": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/node-addon-api/-/node-addon-api-4.3.0.tgz", + "integrity": "sha512-73sE9+3UaLYYFmDsFZnqCInzPyh3MqIwZO9cw58yIqAZhONrrabrYyYe3TuIqtIiOuTXVhsGau8hcrhhwSsDIQ==", + "dev": true, + "license": "MIT", + "optional": true + }, + "node_modules/nth-check": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/nth-check/-/nth-check-2.1.1.tgz", + "integrity": "sha512-lqjrjmaOoAnWfMmBPL+XNnynZh2+swxiX3WUE0s4yEHI6m+AwrK2UZOimIRl3X/4QctVqS8AiZjFqyOGrMXb/w==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "boolbase": "^1.0.0" + }, + "funding": { + "url": "https://github.com/fb55/nth-check?sponsor=1" + } + }, + "node_modules/object-inspect": { + "version": "1.13.4", + "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz", + "integrity": "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/once": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz", + "integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==", + "dev": true, + "license": 
"ISC", + "dependencies": { + "wrappy": "1" + } + }, + "node_modules/onetime": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/onetime/-/onetime-7.0.0.tgz", + "integrity": "sha512-VXJjc87FScF88uafS3JllDgvAm+c/Slfz06lorj2uAY34rlUu0Nt+v8wreiImcrgAjjIHp1rXpTDlLOGw29WwQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "mimic-function": "^5.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/open": { + "version": "10.2.0", + "resolved": "https://registry.npmjs.org/open/-/open-10.2.0.tgz", + "integrity": "sha512-YgBpdJHPyQ2UE5x+hlSXcnejzAvD0b22U2OuAP+8OnlJT+PjWPxtgmGqKKc+RgTM63U9gN0YzrYc71R2WT/hTA==", + "dev": true, + "license": "MIT", + "dependencies": { + "default-browser": "^5.2.1", + "define-lazy-prop": "^3.0.0", + "is-inside-container": "^1.0.0", + "wsl-utils": "^0.1.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/optionator": { + "version": "0.9.4", + "resolved": "https://registry.npmjs.org/optionator/-/optionator-0.9.4.tgz", + "integrity": "sha512-6IpQ7mKUxRcZNLIObR0hz7lxsapSSIYNZJwXPGeF0mTVqGKFIXj1DQcMoT22S3ROcLyY/rz0PWaWZ9ayWmad9g==", + "dev": true, + "license": "MIT", + "dependencies": { + "deep-is": "^0.1.3", + "fast-levenshtein": "^2.0.6", + "levn": "^0.4.1", + "prelude-ls": "^1.2.1", + "type-check": "^0.4.0", + "word-wrap": "^1.2.5" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/ora": { + "version": "8.2.0", + "resolved": "https://registry.npmjs.org/ora/-/ora-8.2.0.tgz", + "integrity": "sha512-weP+BZ8MVNnlCm8c0Qdc1WSWq4Qn7I+9CJGm7Qali6g44e/PUzbjNqJX5NJ9ljlNMosfJvg1fKEGILklK9cwnw==", + "dev": true, + "license": "MIT", + "dependencies": { + "chalk": "^5.3.0", + "cli-cursor": "^5.0.0", + "cli-spinners": "^2.9.2", + "is-interactive": "^2.0.0", + "is-unicode-supported": "^2.0.0", + "log-symbols": "^6.0.0", + "stdin-discarder": "^0.2.2", + 
"string-width": "^7.2.0", + "strip-ansi": "^7.1.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/ora/node_modules/chalk": { + "version": "5.6.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-5.6.2.tgz", + "integrity": "sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^12.17.0 || ^14.13 || >=16.0.0" + }, + "funding": { + "url": "https://github.com/chalk/chalk?sponsor=1" + } + }, + "node_modules/ora/node_modules/is-unicode-supported": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/is-unicode-supported/-/is-unicode-supported-2.1.0.tgz", + "integrity": "sha512-mE00Gnza5EEB3Ds0HfMyllZzbBrmLOX3vfWoj9A9PEnTfratQ/BcaJOuMhnkhjXvb2+FkY3VuHqtAGpTPmglFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/ora/node_modules/log-symbols": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/log-symbols/-/log-symbols-6.0.0.tgz", + "integrity": "sha512-i24m8rpwhmPIS4zscNzK6MSEhk0DUWa/8iYQWxhffV8jkI4Phvs3F+quL5xvS0gdQR0FyTCMMH33Y78dDTzzIw==", + "dev": true, + "license": "MIT", + "dependencies": { + "chalk": "^5.3.0", + "is-unicode-supported": "^1.3.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/ora/node_modules/log-symbols/node_modules/is-unicode-supported": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/is-unicode-supported/-/is-unicode-supported-1.3.0.tgz", + "integrity": "sha512-43r2mRvz+8JRIKnWJ+3j8JtjRKZ6GmjzfaE/qiBJnikNnYv/6bagRJ1kUhNk8R5EX/GkobD+r+sfxCPJsiKBLQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + 
"node_modules/p-limit": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-3.1.0.tgz", + "integrity": "sha512-TYOanM3wGwNGsZN2cVTYPArw454xnXj5qmWF1bEoAc4+cU/ol7GVh7odevjp1FNHduHc3KZMcFduxU5Xc6uJRQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "yocto-queue": "^0.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/p-locate": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-5.0.0.tgz", + "integrity": "sha512-LaNjtRWUBY++zB5nE/NwcaoMylSPk+S+ZHNB1TzdbMJMny6dynpAGt7X/tl/QYq3TIeE6nxHppbo2LGymrG5Pw==", + "dev": true, + "license": "MIT", + "dependencies": { + "p-limit": "^3.0.2" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/package-json-from-dist": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/package-json-from-dist/-/package-json-from-dist-1.0.1.tgz", + "integrity": "sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw==", + "dev": true, + "license": "BlueOak-1.0.0" + }, + "node_modules/pako": { + "version": "1.0.11", + "resolved": "https://registry.npmjs.org/pako/-/pako-1.0.11.tgz", + "integrity": "sha512-4hLB8Py4zZce5s4yd9XzopqwVv/yGNhV1Bl8NTmCq1763HeK2+EwVTv+leGeL13Dnh2wfbqowVPXCIO0z4taYw==", + "dev": true, + "license": "(MIT AND Zlib)" + }, + "node_modules/parse-semver": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/parse-semver/-/parse-semver-1.1.1.tgz", + "integrity": "sha512-Eg1OuNntBMH0ojvEKSrvDSnwLmvVuUOSdylH/pSCPNMIspLlweJyIWXCE+k/5hm3cj/EBUYwmWkjhBALNP4LXQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "semver": "^5.1.0" + } + }, + "node_modules/parse-semver/node_modules/semver": { + "version": "5.7.2", + "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.2.tgz", + "integrity": 
"sha512-cBznnQ9KjJqU67B52RMC65CMarK2600WFnbkcaiwWq3xy/5haFJlshgnpjovMVJ+Hff49d8GEn0b87C5pDQ10g==", + "dev": true, + "license": "ISC", + "bin": { + "semver": "bin/semver" + } + }, + "node_modules/parse5": { + "version": "7.3.0", + "resolved": "https://registry.npmjs.org/parse5/-/parse5-7.3.0.tgz", + "integrity": "sha512-IInvU7fabl34qmi9gY8XOVxhYyMyuH2xUNpb2q8/Y+7552KlejkRvqvD19nMoUW/uQGGbqNpA6Tufu5FL5BZgw==", + "dev": true, + "license": "MIT", + "dependencies": { + "entities": "^6.0.0" + }, + "funding": { + "url": "https://github.com/inikulin/parse5?sponsor=1" + } + }, + "node_modules/parse5-htmlparser2-tree-adapter": { + "version": "7.1.0", + "resolved": "https://registry.npmjs.org/parse5-htmlparser2-tree-adapter/-/parse5-htmlparser2-tree-adapter-7.1.0.tgz", + "integrity": "sha512-ruw5xyKs6lrpo9x9rCZqZZnIUntICjQAd0Wsmp396Ul9lN/h+ifgVV1x1gZHi8euej6wTfpqX8j+BFQxF0NS/g==", + "dev": true, + "license": "MIT", + "dependencies": { + "domhandler": "^5.0.3", + "parse5": "^7.0.0" + }, + "funding": { + "url": "https://github.com/inikulin/parse5?sponsor=1" + } + }, + "node_modules/parse5-parser-stream": { + "version": "7.1.2", + "resolved": "https://registry.npmjs.org/parse5-parser-stream/-/parse5-parser-stream-7.1.2.tgz", + "integrity": "sha512-JyeQc9iwFLn5TbvvqACIF/VXG6abODeB3Fwmv/TGdLk2LfbWkaySGY72at4+Ty7EkPZj854u4CrICqNk2qIbow==", + "dev": true, + "license": "MIT", + "dependencies": { + "parse5": "^7.0.0" + }, + "funding": { + "url": "https://github.com/inikulin/parse5?sponsor=1" + } + }, + "node_modules/parse5/node_modules/entities": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/entities/-/entities-6.0.1.tgz", + "integrity": "sha512-aN97NXWF6AWBTahfVOIrB/NShkzi5H7F9r1s9mD3cDj4Ko5f2qhhVoYMibXF7GlLveb/D2ioWay8lxI97Ven3g==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.12" + }, + "funding": { + "url": "https://github.com/fb55/entities?sponsor=1" + } + }, + "node_modules/path-exists": { + "version": "4.0.0", + "resolved": 
"https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", + "integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/path-is-absolute": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz", + "integrity": "sha512-AVbw3UJ2e9bq64vSaS9Am0fje1Pa8pbGqTTsmXfaIiMpnr5DlDhfJOuLj9Sf95ZPVDAUerDfEk88MPmPe7UCQg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/path-key": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz", + "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/path-scurry": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/path-scurry/-/path-scurry-1.11.1.tgz", + "integrity": "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA==", + "dev": true, + "license": "BlueOak-1.0.0", + "dependencies": { + "lru-cache": "^10.2.0", + "minipass": "^5.0.0 || ^6.0.2 || ^7.0.0" + }, + "engines": { + "node": ">=16 || 14 >=14.18" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/path-scurry/node_modules/lru-cache": { + "version": "10.4.3", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-10.4.3.tgz", + "integrity": "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ==", + "dev": true, + "license": "ISC" + }, + "node_modules/pend": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/pend/-/pend-1.2.0.tgz", + "integrity": "sha512-F3asv42UuXchdzt+xXqfW1OGlVBe+mxa2mqI0pg5yAHZPvFmY3Y6drSf/GQ1A86WgWEN9Kzh/WrgKa6iGcHXLg==", + "dev": true, + "license": "MIT" + }, + 
"node_modules/picocolors": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz", + "integrity": "sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==", + "dev": true, + "license": "ISC" + }, + "node_modules/picomatch": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz", + "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/prebuild-install": { + "version": "7.1.3", + "resolved": "https://registry.npmjs.org/prebuild-install/-/prebuild-install-7.1.3.tgz", + "integrity": "sha512-8Mf2cbV7x1cXPUILADGI3wuhfqWvtiLA1iclTDbFRZkgRQS0NqsPZphna9V+HyTEadheuPmjaJMsbzKQFOzLug==", + "deprecated": "No longer maintained. Please contact the author of the relevant native addon; alternatives are available.", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "detect-libc": "^2.0.0", + "expand-template": "^2.0.3", + "github-from-package": "0.0.0", + "minimist": "^1.2.3", + "mkdirp-classic": "^0.5.3", + "napi-build-utils": "^2.0.0", + "node-abi": "^3.3.0", + "pump": "^3.0.0", + "rc": "^1.2.7", + "simple-get": "^4.0.0", + "tar-fs": "^2.0.0", + "tunnel-agent": "^0.6.0" + }, + "bin": { + "prebuild-install": "bin.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/prelude-ls": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/prelude-ls/-/prelude-ls-1.2.1.tgz", + "integrity": "sha512-vkcDPrRZo1QZLbn5RLGPpg/WmIQ65qoWWhcGKf/b5eplkkarX0m9z8ppCat4mlOqUsWpyNuYgO3VRyrYHSzX5g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/process-nextick-args": { + "version": "2.0.1", + "resolved": 
"https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz", + "integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==", + "dev": true, + "license": "MIT" + }, + "node_modules/pump": { + "version": "3.0.4", + "resolved": "https://registry.npmjs.org/pump/-/pump-3.0.4.tgz", + "integrity": "sha512-VS7sjc6KR7e1ukRFhQSY5LM2uBWAUPiOPa/A3mkKmiMwSmRFUITt0xuj+/lesgnCv+dPIEYlkzrcyXgquIHMcA==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "end-of-stream": "^1.1.0", + "once": "^1.3.1" + } + }, + "node_modules/punycode": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.3.1.tgz", + "integrity": "sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/qs": { + "version": "6.15.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.15.0.tgz", + "integrity": "sha512-mAZTtNCeetKMH+pSjrb76NAM8V9a05I9aBZOHztWy/UqcJdQYNsf59vrRKWnojAT9Y+GbIvoTBC++CPHqpDBhQ==", + "dev": true, + "license": "BSD-3-Clause", + "dependencies": { + "side-channel": "^1.1.0" + }, + "engines": { + "node": ">=0.6" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/randombytes": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/randombytes/-/randombytes-2.1.0.tgz", + "integrity": "sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "safe-buffer": "^5.1.0" + } + }, + "node_modules/rc": { + "version": "1.2.8", + "resolved": "https://registry.npmjs.org/rc/-/rc-1.2.8.tgz", + "integrity": "sha512-y3bGgqKj3QBdxLbLkomlohkvsA8gdAiUQlSBJnBhfn+BPxg4bc62d8TcBW15wavDfgexCgccckhcZvywyQYPOw==", + "dev": true, + "license": "(BSD-2-Clause OR MIT OR Apache-2.0)", + "optional": true, + 
"dependencies": { + "deep-extend": "^0.6.0", + "ini": "~1.3.0", + "minimist": "^1.2.0", + "strip-json-comments": "~2.0.1" + }, + "bin": { + "rc": "cli.js" + } + }, + "node_modules/read": { + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/read/-/read-1.0.7.tgz", + "integrity": "sha512-rSOKNYUmaxy0om1BNjMN4ezNT6VKK+2xF4GBhc81mkH7L60i6dp8qPYrkndNLT3QPphoII3maL9PVC9XmhHwVQ==", + "dev": true, + "license": "ISC", + "dependencies": { + "mute-stream": "~0.0.4" + }, + "engines": { + "node": ">=0.8" + } + }, + "node_modules/readable-stream": { + "version": "3.6.2", + "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.2.tgz", + "integrity": "sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "inherits": "^2.0.3", + "string_decoder": "^1.1.1", + "util-deprecate": "^1.0.1" + }, + "engines": { + "node": ">= 6" + } + }, + "node_modules/readdirp": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-4.1.2.tgz", + "integrity": "sha512-GDhwkLfywWL2s6vEjyhri+eXmfH6j1L7JE27WhqLeYzoh/A3DBaYGEj2H/HFZCn/kMfim73FXxEJTw06WtxQwg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 14.18.0" + }, + "funding": { + "type": "individual", + "url": "https://paulmillr.com/funding/" + } + }, + "node_modules/require-directory": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", + "integrity": "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/restore-cursor": { + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/restore-cursor/-/restore-cursor-5.1.0.tgz", + "integrity": "sha512-oMA2dcrw6u0YfxJQXm342bFKX/E4sG9rbTzO9ptUcR/e8A33cHuvStiYOwH7fszkZlZ1z/ta9AAoPk2F4qIOHA==", + 
"dev": true, + "license": "MIT", + "dependencies": { + "onetime": "^7.0.0", + "signal-exit": "^4.1.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/run-applescript": { + "version": "7.1.0", + "resolved": "https://registry.npmjs.org/run-applescript/-/run-applescript-7.1.0.tgz", + "integrity": "sha512-DPe5pVFaAsinSaV6QjQ6gdiedWDcRCbUuiQfQa2wmWV7+xC9bGulGI8+TdRmoFkAPaBXk8CrAbnlY2ISniJ47Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/safe-buffer": { + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", + "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT" + }, + "node_modules/safer-buffer": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", + "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==", + "dev": true, + "license": "MIT" + }, + "node_modules/sax": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/sax/-/sax-1.5.0.tgz", + "integrity": "sha512-21IYA3Q5cQf089Z6tgaUTr7lDAyzoTPx5HRtbhsME8Udispad8dC/+sziTNugOEx54ilvatQ9YCzl4KQLPcRHA==", + "dev": true, + "license": "BlueOak-1.0.0", + "engines": { + "node": ">=11.0.0" + } + }, + "node_modules/semver": { + "version": "7.7.4", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.7.4.tgz", + "integrity": "sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA==", + "dev": 
true, + "license": "ISC", + "bin": { + "semver": "bin/semver.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/serialize-javascript": { + "version": "6.0.2", + "resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-6.0.2.tgz", + "integrity": "sha512-Saa1xPByTTq2gdeFZYLLo+RFE35NHZkAbqZeWNd3BpzppeVisAqpDjcp8dyf6uIvEqJRd46jemmyA4iFIeVk8g==", + "dev": true, + "license": "BSD-3-Clause", + "dependencies": { + "randombytes": "^2.1.0" + } + }, + "node_modules/setimmediate": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/setimmediate/-/setimmediate-1.0.5.tgz", + "integrity": "sha512-MATJdZp8sLqDl/68LfQmbP8zKPLQNV6BIZoIgrscFDQ+RsvK/BxeDQOgyxKKoh0y/8h3BqVFnCqQ/gd+reiIXA==", + "dev": true, + "license": "MIT" + }, + "node_modules/shebang-command": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz", + "integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==", + "dev": true, + "license": "MIT", + "dependencies": { + "shebang-regex": "^3.0.0" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/shebang-regex": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz", + "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/side-channel": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.1.0.tgz", + "integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.3", + "side-channel-list": "^1.0.0", + "side-channel-map": "^1.0.1", + "side-channel-weakmap": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + 
"funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-list": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/side-channel-list/-/side-channel-list-1.0.0.tgz", + "integrity": "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-map": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/side-channel-map/-/side-channel-map-1.0.1.tgz", + "integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-weakmap": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz", + "integrity": "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3", + "side-channel-map": "^1.0.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/signal-exit": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-4.1.0.tgz", + "integrity": "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw==", + "dev": true, + "license": "ISC", + "engines": { + "node": 
">=14" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/simple-concat": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/simple-concat/-/simple-concat-1.0.1.tgz", + "integrity": "sha512-cSFtAPtRhljv69IK0hTVZQ+OfE9nePi/rtJmw5UjHeVyVroEqJXP1sFztKUy1qU+xvz3u/sfYJLa947b7nAN2Q==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT", + "optional": true + }, + "node_modules/simple-get": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/simple-get/-/simple-get-4.0.1.tgz", + "integrity": "sha512-brv7p5WgH0jmQJr1ZDDfKDOSeWWg+OVypG99A/5vYGPqJ6pxiaHLy8nxtFjBA7oMa01ebA9gfh1uMCFqOuXxvA==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT", + "optional": true, + "dependencies": { + "decompress-response": "^6.0.0", + "once": "^1.3.1", + "simple-concat": "^1.0.0" + } + }, + "node_modules/stdin-discarder": { + "version": "0.2.2", + "resolved": "https://registry.npmjs.org/stdin-discarder/-/stdin-discarder-0.2.2.tgz", + "integrity": "sha512-UhDfHmA92YAlNnCfhmq0VeNL5bDbiZGg7sZ2IvPsXubGkiNa9EC+tUTsjBRsYUAz87btI6/1wf4XoVvQ3uRnmQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/string_decoder": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.3.0.tgz", + "integrity": "sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA==", + "dev": true, + "license": "MIT", + "optional": 
true, + "dependencies": { + "safe-buffer": "~5.2.0" + } + }, + "node_modules/string-width": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-7.2.0.tgz", + "integrity": "sha512-tsaTIkKW9b4N+AEj+SVA+WhJzV7/zMhcSu78mLKWSk7cXMOSHsBKFWUs0fWwq8QyK3MgJBQRX6Gbi4kYbdvGkQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "emoji-regex": "^10.3.0", + "get-east-asian-width": "^1.0.0", + "strip-ansi": "^7.1.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/string-width-cjs": { + "name": "string-width", + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dev": true, + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/string-width-cjs/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/string-width-cjs/node_modules/emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true, + "license": "MIT" + }, + "node_modules/string-width-cjs/node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": 
true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/strip-ansi": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-7.2.0.tgz", + "integrity": "sha512-yDPMNjp4WyfYBkHnjIRLfca1i6KMyGCtsVgoKe/z1+6vukgaENdgGBZt+ZmKPc4gavvEZ5OgHfHdrazhgNyG7w==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^6.2.2" + }, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/strip-ansi?sponsor=1" + } + }, + "node_modules/strip-ansi-cjs": { + "name": "strip-ansi", + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/strip-ansi-cjs/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/strip-json-comments": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-2.0.1.tgz", + "integrity": "sha512-4gB8na07fecVVkOI6Rs4e7T6NOTki5EmL7TUduTs6bu3EdnSycntVJ4re8kgZA+wx9IueI2Y11bfbgwtzuE0KQ==", + "dev": true, + "license": "MIT", + "optional": true, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/supports-color": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dev": true, + "license": "MIT", + "dependencies": { + 
"has-flag": "^3.0.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/tar-fs": { + "version": "2.1.4", + "resolved": "https://registry.npmjs.org/tar-fs/-/tar-fs-2.1.4.tgz", + "integrity": "sha512-mDAjwmZdh7LTT6pNleZ05Yt65HC3E+NiQzl672vQG38jIrehtJk/J3mNwIg+vShQPcLF/LV7CMnDW6vjj6sfYQ==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "chownr": "^1.1.1", + "mkdirp-classic": "^0.5.2", + "pump": "^3.0.0", + "tar-stream": "^2.1.4" + } + }, + "node_modules/tar-stream": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/tar-stream/-/tar-stream-2.2.0.tgz", + "integrity": "sha512-ujeqbceABgwMZxEJnk2HDY2DlnUZ+9oEcb1KzTVfYHio0UE6dG71n60d8D2I4qNvleWrrXpmjpt7vZeF1LnMZQ==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "bl": "^4.0.3", + "end-of-stream": "^1.4.1", + "fs-constants": "^1.0.0", + "inherits": "^2.0.3", + "readable-stream": "^3.1.1" + }, + "engines": { + "node": ">=6" + } + }, + "node_modules/tinyglobby": { + "version": "0.2.15", + "resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz", + "integrity": "sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "fdir": "^6.5.0", + "picomatch": "^4.0.3" + }, + "engines": { + "node": ">=12.0.0" + }, + "funding": { + "url": "https://github.com/sponsors/SuperchupuDev" + } + }, + "node_modules/tmp": { + "version": "0.2.5", + "resolved": "https://registry.npmjs.org/tmp/-/tmp-0.2.5.tgz", + "integrity": "sha512-voyz6MApa1rQGUxT3E+BK7/ROe8itEx7vD8/HEvt4xwXucvQ5G5oeEiHkmHZJuBO21RpOf+YYm9MOivj709jow==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=14.14" + } + }, + "node_modules/ts-api-utils": { + "version": "2.4.0", + "resolved": "https://registry.npmjs.org/ts-api-utils/-/ts-api-utils-2.4.0.tgz", + "integrity": "sha512-3TaVTaAv2gTiMB35i3FiGJaRfwb3Pyn/j3m/bfAvGe8FB7CF6u+LMYqYlDh7reQf7UNvoTvdfAqHGmPGOSsPmA==", + 
"dev": true, + "license": "MIT", + "engines": { + "node": ">=18.12" + }, + "peerDependencies": { + "typescript": ">=4.8.4" + } + }, + "node_modules/tslib": { + "version": "2.8.1", + "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz", + "integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==", + "dev": true, + "license": "0BSD" + }, + "node_modules/tunnel": { + "version": "0.0.6", + "resolved": "https://registry.npmjs.org/tunnel/-/tunnel-0.0.6.tgz", + "integrity": "sha512-1h/Lnq9yajKY2PEbBadPXj3VxsDDu844OnaAo52UVmIzIvwwtBPIuNvkjuzBlTWpfJyUbG3ez0KSBibQkj4ojg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.6.11 <=0.7.0 || >=0.7.3" + } + }, + "node_modules/tunnel-agent": { + "version": "0.6.0", + "resolved": "https://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.6.0.tgz", + "integrity": "sha512-McnNiV1l8RYeY8tBgEpuodCC1mLUdbSN+CYBL7kJsJNInOP8UjDDEwdk6Mw60vdLLrr5NHKZhMAOSrR2NZuQ+w==", + "dev": true, + "license": "Apache-2.0", + "optional": true, + "dependencies": { + "safe-buffer": "^5.0.1" + }, + "engines": { + "node": "*" + } + }, + "node_modules/type-check": { + "version": "0.4.0", + "resolved": "https://registry.npmjs.org/type-check/-/type-check-0.4.0.tgz", + "integrity": "sha512-XleUoc9uwGXqjWwXaUTZAmzMcFZ5858QA2vvx1Ur5xIcixXIP+8LnFDgRplU30us6teqdlskFfu+ae4K79Ooew==", + "dev": true, + "license": "MIT", + "dependencies": { + "prelude-ls": "^1.2.1" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/typed-rest-client": { + "version": "1.8.11", + "resolved": "https://registry.npmjs.org/typed-rest-client/-/typed-rest-client-1.8.11.tgz", + "integrity": "sha512-5UvfMpd1oelmUPRbbaVnq+rHP7ng2cE4qoQkQeAqxRL6PklkxsM0g32/HL0yfvruK6ojQ5x8EE+HF4YV6DtuCA==", + "dev": true, + "license": "MIT", + "dependencies": { + "qs": "^6.9.1", + "tunnel": "0.0.6", + "underscore": "^1.12.1" + } + }, + "node_modules/typescript": { + "version": "5.9.3", + "resolved": 
"https://registry.npmjs.org/typescript/-/typescript-5.9.3.tgz", + "integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==", + "dev": true, + "license": "Apache-2.0", + "bin": { + "tsc": "bin/tsc", + "tsserver": "bin/tsserver" + }, + "engines": { + "node": ">=14.17" + } + }, + "node_modules/uc.micro": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/uc.micro/-/uc.micro-1.0.6.tgz", + "integrity": "sha512-8Y75pvTYkLJW2hWQHXxoqRgV7qb9B+9vFEtidML+7koHUFapnVJAZ6cKs+Qjz5Aw3aZWHMC6u0wJE3At+nSGwA==", + "dev": true, + "license": "MIT" + }, + "node_modules/underscore": { + "version": "1.13.8", + "resolved": "https://registry.npmjs.org/underscore/-/underscore-1.13.8.tgz", + "integrity": "sha512-DXtD3ZtEQzc7M8m4cXotyHR+FAS18C64asBYY5vqZexfYryNNnDc02W4hKg3rdQuqOYas1jkseX0+nZXjTXnvQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/undici": { + "version": "7.22.0", + "resolved": "https://registry.npmjs.org/undici/-/undici-7.22.0.tgz", + "integrity": "sha512-RqslV2Us5BrllB+JeiZnK4peryVTndy9Dnqq62S3yYRRTj0tFQCwEniUy2167skdGOy3vqRzEvl1Dm4sV2ReDg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=20.18.1" + } + }, + "node_modules/undici-types": { + "version": "6.21.0", + "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz", + "integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/uri-js": { + "version": "4.4.1", + "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz", + "integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "punycode": "^2.1.0" + } + }, + "node_modules/url-join": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/url-join/-/url-join-4.0.1.tgz", + "integrity": 
"sha512-jk1+QP6ZJqyOiuEI9AEWQfju/nB2Pw466kbA0LEZljHwKeMgd9WrAEgEGxjPDD2+TNbbb37rTyhEfrCXfuKXnA==", + "dev": true, + "license": "MIT" + }, + "node_modules/util-deprecate": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz", + "integrity": "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==", + "dev": true, + "license": "MIT" + }, + "node_modules/uuid": { + "version": "8.3.2", + "resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz", + "integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==", + "dev": true, + "license": "MIT", + "bin": { + "uuid": "dist/bin/uuid" + } + }, + "node_modules/whatwg-encoding": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/whatwg-encoding/-/whatwg-encoding-3.1.1.tgz", + "integrity": "sha512-6qN4hJdMwfYBtE3YBTTHhoeuUrDBPZmbQaxWAqSALV/MeEnR5z1xd8UKud2RAkFoPkmB+hli1TZSnyi84xz1vQ==", + "deprecated": "Use @exodus/bytes instead for a more spec-conformant and faster implementation", + "dev": true, + "license": "MIT", + "dependencies": { + "iconv-lite": "0.6.3" + }, + "engines": { + "node": ">=18" + } + }, + "node_modules/whatwg-mimetype": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/whatwg-mimetype/-/whatwg-mimetype-4.0.0.tgz", + "integrity": "sha512-QaKxh0eNIi2mE9p2vEdzfagOKHCcj1pJ56EEHGQOVxp8r9/iszLUUV7v89x9O1p/T+NlTM5W7jW6+cz4Fq1YVg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + } + }, + "node_modules/which": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz", + "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==", + "dev": true, + "license": "ISC", + "dependencies": { + "isexe": "^2.0.0" + }, + "bin": { + "node-which": "bin/node-which" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/word-wrap": { + 
"version": "1.2.5", + "resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.5.tgz", + "integrity": "sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/workerpool": { + "version": "9.3.4", + "resolved": "https://registry.npmjs.org/workerpool/-/workerpool-9.3.4.tgz", + "integrity": "sha512-TmPRQYYSAnnDiEB0P/Ytip7bFGvqnSU6I2BcuSw7Hx+JSg/DsUi5ebYfc8GYaSdpuvOcEs6dXxPurOYpe9QFwg==", + "dev": true, + "license": "Apache-2.0" + }, + "node_modules/wrap-ansi": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz", + "integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^4.0.0", + "string-width": "^4.1.0", + "strip-ansi": "^6.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/wrap-ansi?sponsor=1" + } + }, + "node_modules/wrap-ansi-cjs": { + "name": "wrap-ansi", + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz", + "integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^4.0.0", + "string-width": "^4.1.0", + "strip-ansi": "^6.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/wrap-ansi?sponsor=1" + } + }, + "node_modules/wrap-ansi-cjs/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + 
"node_modules/wrap-ansi-cjs/node_modules/ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-convert": "^2.0.1" + }, + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/wrap-ansi-cjs/node_modules/color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-name": "~1.1.4" + }, + "engines": { + "node": ">=7.0.0" + } + }, + "node_modules/wrap-ansi-cjs/node_modules/color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true, + "license": "MIT" + }, + "node_modules/wrap-ansi-cjs/node_modules/emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true, + "license": "MIT" + }, + "node_modules/wrap-ansi-cjs/node_modules/string-width": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dev": true, + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + 
"node_modules/wrap-ansi-cjs/node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/wrap-ansi/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/wrap-ansi/node_modules/ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-convert": "^2.0.1" + }, + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/wrap-ansi/node_modules/color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-name": "~1.1.4" + }, + "engines": { + "node": ">=7.0.0" + } + }, + "node_modules/wrap-ansi/node_modules/color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true, + "license": "MIT" + }, + "node_modules/wrap-ansi/node_modules/emoji-regex": { + "version": 
"8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true, + "license": "MIT" + }, + "node_modules/wrap-ansi/node_modules/string-width": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dev": true, + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/wrap-ansi/node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/wrappy": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", + "integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==", + "dev": true, + "license": "ISC" + }, + "node_modules/wsl-utils": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/wsl-utils/-/wsl-utils-0.1.0.tgz", + "integrity": "sha512-h3Fbisa2nKGPxCpm89Hk33lBLsnaGBvctQopaBSOW/uIs6FTe1ATyAnKFJrzVs9vpGdsTe73WF3V4lIsk4Gacw==", + "dev": true, + "license": "MIT", + "dependencies": { + "is-wsl": "^3.1.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/xml2js": { + "version": "0.5.0", + "resolved": "https://registry.npmjs.org/xml2js/-/xml2js-0.5.0.tgz", + "integrity": 
"sha512-drPFnkQJik/O+uPKpqSgr22mpuFHqKdbS835iAQrUC73L2F5WkboIRd63ai/2Yg6I1jzifPFKH2NTK+cfglkIA==", + "dev": true, + "license": "MIT", + "dependencies": { + "sax": ">=0.6.0", + "xmlbuilder": "~11.0.0" + }, + "engines": { + "node": ">=4.0.0" + } + }, + "node_modules/xmlbuilder": { + "version": "11.0.1", + "resolved": "https://registry.npmjs.org/xmlbuilder/-/xmlbuilder-11.0.1.tgz", + "integrity": "sha512-fDlsI/kFEx7gLvbecc0/ohLG50fugQp8ryHzMTuW9vSa1GJ0XYWKnhsUx7oie3G98+r56aTQIUB4kht42R3JvA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=4.0" + } + }, + "node_modules/y18n": { + "version": "5.0.8", + "resolved": "https://registry.npmjs.org/y18n/-/y18n-5.0.8.tgz", + "integrity": "sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA==", + "dev": true, + "license": "ISC", + "engines": { + "node": ">=10" + } + }, + "node_modules/yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true, + "license": "ISC" + }, + "node_modules/yargs": { + "version": "17.7.2", + "resolved": "https://registry.npmjs.org/yargs/-/yargs-17.7.2.tgz", + "integrity": "sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w==", + "dev": true, + "license": "MIT", + "dependencies": { + "cliui": "^8.0.1", + "escalade": "^3.1.1", + "get-caller-file": "^2.0.5", + "require-directory": "^2.1.1", + "string-width": "^4.2.3", + "y18n": "^5.0.5", + "yargs-parser": "^21.1.1" + }, + "engines": { + "node": ">=12" + } + }, + "node_modules/yargs-parser": { + "version": "21.1.1", + "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-21.1.1.tgz", + "integrity": "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw==", + "dev": true, + "license": "ISC", + "engines": { + "node": ">=12" + } + }, + 
"node_modules/yargs-unparser": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/yargs-unparser/-/yargs-unparser-2.0.0.tgz", + "integrity": "sha512-7pRTIA9Qc1caZ0bZ6RYRGbHJthJWuakf+WmHK0rVeLkNrrGhfoabBNdue6kdINI6r4if7ocq9aD/n7xwKOdzOA==", + "dev": true, + "license": "MIT", + "dependencies": { + "camelcase": "^6.0.0", + "decamelize": "^4.0.0", + "flat": "^5.0.2", + "is-plain-obj": "^2.1.0" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/yargs/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/yargs/node_modules/emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true, + "license": "MIT" + }, + "node_modules/yargs/node_modules/string-width": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dev": true, + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/yargs/node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/yauzl": { + "version": "2.10.0", + 
"resolved": "https://registry.npmjs.org/yauzl/-/yauzl-2.10.0.tgz", + "integrity": "sha512-p4a9I6X6nu6IhoGmBqAcbJy1mlC4j27vEPZX9F4L4/vZT3Lyq1VkFHw/V/PUcB9Buo+DG3iHkT0x3Qya58zc3g==", + "dev": true, + "license": "MIT", + "dependencies": { + "buffer-crc32": "~0.2.3", + "fd-slicer": "~1.1.0" + } + }, + "node_modules/yazl": { + "version": "2.5.1", + "resolved": "https://registry.npmjs.org/yazl/-/yazl-2.5.1.tgz", + "integrity": "sha512-phENi2PLiHnHb6QBVot+dJnaAZ0xosj7p3fWl+znIjBDlnMI2PsZCJZ306BPTFOaHf5qdDEI8x5qFrSOBN5vrw==", + "dev": true, + "license": "MIT", + "dependencies": { + "buffer-crc32": "~0.2.3" + } + }, + "node_modules/yocto-queue": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/yocto-queue/-/yocto-queue-0.1.0.tgz", + "integrity": "sha512-rVksvsnNCdJ/ohGc6xgPwyN8eheCxsiLM8mxuE/t/mOVqJewPuO1miLpTHQiRgTKCLexL4MeAFVagts7HmNZ2Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + } + } +} diff --git a/vscode-extension/package.json b/vscode-extension/package.json new file mode 100644 index 0000000..97ab4ff --- /dev/null +++ b/vscode-extension/package.json @@ -0,0 +1,232 @@ +{ + "name": "opencode-router", + "displayName": "OpenCode Control Plane", + "description": "Browse and manage OpenCode sessions from the control plane.", + "version": "0.1.0", + "publisher": "local", + "engines": { + "vscode": "^1.90.0" + }, + "categories": [ + "Other" + ], + "main": "./out/extension.js", + "activationEvents": [ + "onView:opencodeSessions", + "onView:opencodeRemoteHosts", + "onView:opencodeChat", + "onTerminalProfile:opencode.terminalProfile", + "onCommand:opencode.attachSession", + "onCommand:opencode.createSession", + "onCommand:opencode.openChat", + "onCommand:opencode.openTerminal", + "onCommand:opencode.refreshSessions", + "onCommand:opencode.refreshRemoteHosts", + "onCommand:opencode.stopSession", + "onCommand:opencode.restartSession", + 
"onCommand:opencode.deleteSession", + "onCommand:opencode.applyDiffPreview", + "onCommand:opencode.applyLastDiff" + ], + "contributes": { + "viewsContainers": { + "activitybar": [ + { + "id": "opencode", + "title": "OpenCode", + "icon": "resources/opencode.svg" + } + ] + }, + "views": { + "opencode": [ + { + "id": "opencodeSessions", + "name": "Sessions" + }, + { + "id": "opencodeRemoteHosts", + "name": "Remote Hosts" + }, + { + "id": "opencodeChat", + "name": "Agent Chat", + "type": "webview" + } + ] + }, + "commands": [ + { + "command": "opencode.attachSession", + "title": "OpenCode: Attach Session" + }, + { + "command": "opencode.createSession", + "title": "OpenCode: Create Session" + }, + { + "command": "opencode.openChat", + "title": "OpenCode: Open Chat", + "icon": "$(comment-discussion)" + }, + { + "command": "opencode.openTerminal", + "title": "OpenCode: Open Terminal", + "icon": "$(terminal)" + }, + { + "command": "opencode.refreshSessions", + "title": "OpenCode: Refresh Sessions", + "icon": "$(refresh)" + }, + { + "command": "opencode.refreshRemoteHosts", + "title": "OpenCode: Refresh Remote Hosts", + "icon": "$(refresh)" + }, + { + "command": "opencode.stopSession", + "title": "OpenCode: Stop Session" + }, + { + "command": "opencode.restartSession", + "title": "OpenCode: Restart Session" + }, + { + "command": "opencode.deleteSession", + "title": "OpenCode: Delete Session" + }, + { + "command": "opencode.applyDiffPreview", + "title": "OpenCode: Apply Diff Preview" + }, + { + "command": "opencode.applyLastDiff", + "title": "OpenCode: Apply Last Staged Diff" + }, + { + "command": "opencode.rejectLastDiff", + "title": "OpenCode: Reject Last Staged Diff" + }, + { + "command": "opencode.clearDiffHighlights", + "title": "OpenCode: Clear Diff Highlights" + } + ], + "menus": { + "view/title": [ + { + "command": "opencode.createSession", + "when": "view == opencodeSessions", + "group": "navigation@1" + }, + { + "command": "opencode.refreshSessions", + "when": 
"view == opencodeSessions", + "group": "navigation@2" + }, + { + "command": "opencode.refreshRemoteHosts", + "when": "view == opencodeRemoteHosts", + "group": "navigation@1" + }, + { + "command": "opencode.openChat", + "when": "view == opencodeChat", + "group": "navigation@1" + }, + { + "command": "opencode.openTerminal", + "when": "view == opencodeSessions", + "group": "navigation@3" + } + ], + "view/item/context": [ + { + "command": "opencode.attachSession", + "when": "view == opencodeSessions && viewItem == opencodeSession", + "group": "inline@1" + }, + { + "command": "opencode.stopSession", + "when": "view == opencodeSessions && viewItem == opencodeSession", + "group": "navigation@2" + }, + { + "command": "opencode.openChat", + "when": "view == opencodeSessions && viewItem == opencodeSession", + "group": "navigation@2.5" + }, + { + "command": "opencode.openTerminal", + "when": "view == opencodeSessions && viewItem == opencodeSession", + "group": "navigation@2.6" + }, + { + "command": "opencode.restartSession", + "when": "view == opencodeSessions && viewItem == opencodeSession", + "group": "navigation@3" + }, + { + "command": "opencode.deleteSession", + "when": "view == opencodeSessions && viewItem == opencodeSession", + "group": "navigation@4" + } + ] + }, + "terminal": { + "profiles": [ + { + "id": "opencode.terminalProfile", + "title": "OpenCode Terminal", + "icon": "terminal" + } + ] + }, + "configuration": { + "title": "OpenCode", + "properties": { + "opencode.controlPlaneUrl": { + "type": "string", + "default": "http://localhost:8080", + "description": "Base URL for the OpenCode control plane." + }, + "opencode.authToken": { + "type": "string", + "default": "", + "description": "Optional bearer token for control plane API requests." + }, + "opencode.remoteSshConfigPath": { + "type": "string", + "default": "", + "description": "Optional SSH config path override for remote host discovery (for example ~/.ssh/config)." 
+ }, + "opencode.remoteHostsAutoRefreshSeconds": { + "type": "number", + "default": 30, + "minimum": 0, + "description": "Automatic refresh interval for Remote Hosts view in seconds (0 disables auto-refresh)." + } + } + } + }, + "scripts": { + "compile": "tsc -p ./", + "watch": "tsc -watch -p ./", + "lint": "eslint src/**/*.ts", + "test": "npm run compile && node ./out/test/runTest.js", + "package": "npx @vscode/vsce package" + }, + "devDependencies": { + "@types/mocha": "^10.0.10", + "@types/node": "^20.16.5", + "@types/vscode": "^1.90.0", + "@typescript-eslint/eslint-plugin": "^8.56.1", + "@typescript-eslint/parser": "^8.56.1", + "@vscode/test-electron": "^2.5.2", + "@vscode/vsce": "^2.31.1", + "eslint": "^10.0.2", + "mocha": "^11.7.5", + "typescript": "^5.6.2" + } +} diff --git a/vscode-extension/resources/opencode.svg b/vscode-extension/resources/opencode.svg new file mode 100644 index 0000000..425829a --- /dev/null +++ b/vscode-extension/resources/opencode.svg @@ -0,0 +1,5 @@ + diff --git a/vscode-extension/src/chat/ChatWebviewProvider.ts b/vscode-extension/src/chat/ChatWebviewProvider.ts new file mode 100644 index 0000000..7aa61d5 --- /dev/null +++ b/vscode-extension/src/chat/ChatWebviewProvider.ts @@ -0,0 +1,448 @@ +import { randomBytes } from 'crypto'; +import * as path from 'path'; +import * as vscode from 'vscode'; + +export interface ChatSessionTarget { + id: string; + label: string; + workspacePath: string; +} + +interface ChatMessage { + id: string; + role: 'user' | 'assistant' | 'system'; + content: string; + toolCalls: unknown[]; +} + +type InboundMessage = + | { type: 'ready' } + | { type: 'requestHistory' } + | { type: 'sendPrompt'; prompt: string } + | { type: 'openFile'; path: string; line?: number } + | { type: 'applyDiff'; diff: string }; + +type OutboundMessage = + | { type: 'session'; session: ChatSessionTarget | null } + | { type: 'chatHistory'; messages: ChatMessage[] } + | { type: 'streamStarted' } + | { type: 'streamEnded' } + | { type: 
'chatChunk'; chunk: Record' + lines.join('\n') + '';
+ }
+ return originalCode.apply(this, arguments);
+ };
+ marked.use({ renderer });
+}
+
+function processDiffs(text) {
+ if (!text) return '';
+ const lines = text.split('\n');
+ let inDiff = false;
+ let inCodeBlock = false;
+ for (let i = 0; i < lines.length; i++) {
+ if (lines[i].startsWith('```')) {
+ inCodeBlock = !inCodeBlock;
+ if (inDiff) {
+ lines.splice(i, 0, '```');
+ inDiff = false;
+ i++;
+ }
+ continue;
+ }
+ if (!inCodeBlock) {
+ const isDiffLine = lines[i].match(/^[+-] /) && lines[i].length > 2;
+ if (isDiffLine && !inDiff) {
+ lines.splice(i, 0, '```diff');
+ inDiff = true;
+ i++;
+ } else if (!isDiffLine && inDiff && lines[i].trim() !== '') {
+ lines.splice(i, 0, '```');
+ inDiff = false;
+ i++;
+ }
+ }
+ }
+ if (inDiff) lines.push('```');
+ return lines.join('\n');
+}
+
+ const state = {
+ sessions: new Map(),
+ filter: '',
+ sortCol: 'id',
+ sortDesc: false
+ };
+
+ const DOM = {
+ sseIndicator: document.getElementById('sse-indicator'),
+ statOnline: document.getElementById('stat-online'),
+ statTotal: document.getElementById('stat-total'),
+ tbody: document.getElementById('sessions-body'),
+ searchInput: document.getElementById('search-input'),
+ emptyState: document.getElementById('empty-state'),
+ table: document.getElementById('sessions-table'),
+ btnCreate: document.getElementById('btn-create-session'),
+ modal: document.getElementById('modal-overlay'),
+ btnCloseModal: document.getElementById('btn-close-modal'),
+ authModalOverlay: document.getElementById('auth-modal-overlay'),
+ btnCloseAuthModal: document.getElementById('btn-close-auth-modal'),
+ authForm: document.getElementById('auth-form'),
+ inputPassword: document.getElementById('input-password'),
+ authHostLabel: document.getElementById('auth-host-label'),
+ authAgentStatus: document.getElementById('auth-agent-status'),
+ formCreate: document.getElementById('create-session-form'),
+ inputWorkspace: document.getElementById('input-workspace'),
+    inputLabel: document.getElementById('input-label'), // input for the optional session label
+ viewSessions: document.getElementById('view-sessions'),
+ viewTerminal: document.getElementById('view-terminal'),
+ terminalContainer: document.getElementById('terminal-container'),
+ terminalSessionId: document.getElementById('terminal-session-id'),
+ terminalConnectionStatus: document.getElementById('terminal-connection-status'),
+ btnDetachTerminal: document.getElementById('btn-detach-terminal'),
+ chatHistory: document.getElementById('chat-history'),
+ chatForm: document.getElementById('chat-form'),
+ chatInput: document.getElementById('chat-input'),
+ chatContainer: document.getElementById('chat-container'),
+ splitResizer: document.getElementById('split-resizer'),
+ btnSendChat: document.getElementById('btn-send-chat'),
+ tabLocal: document.getElementById('tab-local'),
+ tabRemote: document.getElementById('tab-remote'),
+ viewRemote: document.getElementById('view-remote'),
+ remoteHostsContainer: document.getElementById('remote-hosts-container'),
+ remoteEmptyState: document.getElementById('remote-empty-state'),
+ remoteErrorState: document.getElementById('remote-error-state'),
+ btnRefreshRemote: document.getElementById('btn-refresh-remote'),
+ remoteSearchInput: document.getElementById('remote-search-input')
+ };
+
+ function normalizeSSEtoView(sseSession) {
+ if (!sseSession) return null;
+ return {
+ id: sseSession.ID,
+ workspacePath: sseSession.WorkspacePath,
+ status: sseSession.Status,
+ daemonPort: sseSession.DaemonPort,
+ labels: sseSession.Labels || {}
+ };
+ }
+
+ // Setup EventSource
+ let evtSource = null;
+ let sseReconnectTimeout = null;
+
+ function clearSSEReconnectTimer() {
+ if (sseReconnectTimeout) {
+ clearTimeout(sseReconnectTimeout);
+ sseReconnectTimeout = null;
+ }
+ }
+
+ function setSSEIndicator(mode, detail) {
+ if (!DOM.sseIndicator) return;
+ if (mode === 'connected') {
+ DOM.sseIndicator.textContent = '● STREAM_ACTIVE';
+ DOM.sseIndicator.className = 'pulse-indicator online';
+ return;
+ }
+ if (mode === 'reconnecting') {
+ DOM.sseIndicator.textContent = `● RECONNECTING${detail ? ` (${detail})` : '...'}`;
+ DOM.sseIndicator.className = 'pulse-indicator';
+ return;
+ }
+ DOM.sseIndicator.textContent = `● DISCONNECTED${detail ? ` (${detail})` : ''}`;
+ DOM.sseIndicator.className = 'pulse-indicator';
+ }
+
+ function scheduleSSEReconnect(delayMs, reason) {
+ if (evtSource || sseReconnectTimeout) return;
+ setSSEIndicator('reconnecting', reason || 'retrying');
+ sseReconnectTimeout = setTimeout(() => {
+ sseReconnectTimeout = null;
+ connectSSE();
+ }, delayMs);
+ }
+
+ function connectSSE() {
+ if (evtSource) return;
+ clearSSEReconnectTimer();
+
+ evtSource = new EventSource('/api/events');
+
+ evtSource.onopen = () => {
+ clearSSEReconnectTimer();
+ setSSEIndicator('connected');
+ };
+
+ evtSource.onerror = () => {
+ if (!evtSource) return;
+
+ if (evtSource.readyState === EventSource.CONNECTING) {
+ setSSEIndicator('reconnecting', 'auto');
+ return;
+ }
+
+ const fatal = evtSource.readyState === EventSource.CLOSED;
+ setSSEIndicator(fatal ? 'disconnected' : 'reconnecting', fatal ? 'closed' : 'retrying');
+ evtSource.close();
+ evtSource = null;
+ scheduleSSEReconnect(2000, fatal ? 'closed' : 'retrying');
+ };
+
+ const handleSessionEvent = (e) => {
+ try {
+ const envelope = JSON.parse(e.data);
+ if (!envelope.payload || !envelope.payload.Session) return;
+ const norm = normalizeSSEtoView(envelope.payload.Session);
+
+ // Preserve health if it exists
+ if (state.sessions.has(norm.id)) {
+ const existing = state.sessions.get(norm.id);
+ norm.health = existing.health;
+ }
+
+ state.sessions.set(norm.id, norm);
+ render();
+ } catch (err) {
+ console.error('Failed parsing SSE', err);
+ }
+ };
+
+ evtSource.addEventListener('session.created', handleSessionEvent);
+ evtSource.addEventListener('session.stopped', handleSessionEvent);
+ evtSource.addEventListener('session.attached', handleSessionEvent);
+ evtSource.addEventListener('session.detached', handleSessionEvent);
+
+ evtSource.addEventListener('session.health', (e) => {
+ try {
+ const envelope = JSON.parse(e.data);
+ if (!envelope.payload || !envelope.payload.Session) return;
+ const norm = normalizeSSEtoView(envelope.payload.Session);
+
+ // Apply health
+ if (envelope.payload.Current) {
+ norm.health = envelope.payload.Current;
+ }
+
+ state.sessions.set(norm.id, norm);
+ render();
+ } catch (err) {
+ console.error('Failed parsing health SSE', err);
+ }
+ });
+
+ }
+
+ async function loadInitial() {
+ try {
+ const res = await fetch('/api/sessions');
+ if (!res.ok) throw new Error(`HTTP error! status: ${res.status}`);
+ const data = await res.json();
+ state.sessions.clear();
+ (data || []).forEach(s => state.sessions.set(s.id, s));
+ render();
+ connectSSE();
+ } catch (e) {
+ console.error('Failed to load initial sessions', e);
+ setSSEIndicator('disconnected', 'bootstrap failed');
+ setTimeout(loadInitial, 5000);
+ }
+ }
+
+ function render() {
+ const filterText = state.filter.toLowerCase();
+ let total = 0;
+ let online = 0;
+
+ DOM.tbody.innerHTML = '';
+
+ const sorted = Array.from(state.sessions.values()).sort((a, b) => {
+ let valA = a[state.sortCol] || '';
+ let valB = b[state.sortCol] || '';
+ if (state.sortCol === 'label') {
+ valA = a.labels?.label || a.labels?.name || '';
+ valB = b.labels?.label || b.labels?.name || '';
+ }
+ const res = String(valA).localeCompare(String(valB));
+ return state.sortDesc ? -res : res;
+ });
+
+ let visibleCount = 0;
+
+ sorted.forEach(s => {
+ total++;
+ if (s.status === 'active' || s.status === 'idle') online++;
+
+ const lbl = (s.labels && (s.labels.label || s.labels.name)) || '-';
+ const searchable = `${s.id} ${lbl} ${s.workspacePath}`.toLowerCase();
+ if (filterText && !searchable.includes(filterText)) return;
+
+ visibleCount++;
+ const tr = document.createElement('tr');
+
+ let statusClass = 'error';
+ if (s.status === 'active') statusClass = 'active';
+ if (s.status === 'idle') statusClass = 'idle';
+ if (s.status === 'stopped') statusClass = 'stopped';
+
+ tr.innerHTML = `