diff --git a/.gitignore b/.gitignore index 1cc7fc3..37f84da 100644 --- a/.gitignore +++ b/.gitignore @@ -4,6 +4,9 @@ build/ dist/ eggs/ .sisyphus/ +node_modules/ +out/ +vscode-extension/.vscode-test/ # Byte-compiled / optimized / DLL files __pycache__/ diff --git a/README.md b/README.md index 438c0a3..0b29f2b 100644 --- a/README.md +++ b/README.md @@ -133,6 +133,74 @@ http://localhost:8080/myproject/session Open `http://localhost:8080/` in a browser to see a live table of all discovered backends with their status, domains, and links. +## Browser Dashboard + +The browser dashboard is served from the control plane root (`/`) and uses +control-plane APIs directly. + +### Quick start + +```bash +go run . --port 8080 ~/my-projects +``` + +Then open: + +```text +http://localhost:8080/ +``` + +### Features + +- Session list and actions (`ATTACH`, `STOP`, `START`, `DEL`) +- SSE-driven status updates from `/api/events` +- Terminal attach via `/ws/terminal/{session-id}` +- Terminal scrollback hydration via `/api/sessions/{id}/scrollback` +- Chat panel streaming via `/api/sessions/{id}/chat` + +### Screenshot placeholder + +Add dashboard screenshots under a docs assets folder when available, for example: + +```text +docs/assets/browser-dashboard.png +docs/assets/browser-terminal.png +``` + +## VS Code Extension + +The repository includes a VS Code extension under `vscode-extension/`. + +### Install (development) + +```bash +cd vscode-extension +npm install +npm run compile +``` + +Open this repository in VS Code and run **Extension Development Host**. 
+
+### Configure
+
+- `opencode.controlPlaneUrl` (default: `http://localhost:8080`)
+- `opencode.authToken` (optional bearer token)
+
+### Features
+
+- Session tree view with connection status
+- Session create/attach/stop/restart/delete commands
+- Agent chat webview bound to selected session
+- Terminal profile backed by control-plane websocket bridge
+- Diff integration (`apply preview`, `apply`, `reject`, `clear highlights`)
+
+Troubleshooting note: terminal attach depends on runtime terminal bridge
+prerequisites (session daemon terminal capability + control-plane attach path).
+If prerequisites are unavailable in a given environment, terminal attach can fail
+with `502`/`503` while session APIs remain functional.
+
+For detailed extension usage, see `vscode-extension/README.md`.
+
 ## API
 
 | Endpoint | Description |
@@ -476,12 +544,16 @@ Each probe is a single SSH round-trip. Unreachable hosts show as ○ offline and
 ├── oc-kill # Kill all opencode serve instances (standalone)
 ├── internal/
 │ ├── config/config.go # Router configuration types, defaults, validation
+│ ├── auth/ # Auth + CORS env configuration and middleware integration
+│ ├── api/ # Session lifecycle API, SSE events, scrollback APIs
 │ ├── launcher/launcher.go # Manages opencode serve child processes
 │ ├── registry/registry.go # Thread-safe backend registry
 │ ├── scanner/scanner.go # Parallel port scanner + OpenCode probing
 │ ├── discovery/discovery.go # mDNS advertisement via zeroconf
 │ ├── proxy/proxy.go # Reverse proxy, routing, dashboard
-│ └── tui/ # Remote session TUI (ocr)
+│ ├── session/ # Session manager, health checks, circuit breaker
+│ ├── terminal/ # Terminal websocket handler + bridge
+│ └── tui/ # Remote session TUI (ocr)
 │ ├── app.go # Top-level Bubble Tea model
 │ ├── components/
 │ │ ├── header.go # Search bar, refresh countdown, fleet stats
@@ -500,6 +572,9 @@ Each probe is a single SSH round-trip. 
Unreachable hosts show as ○ offline and └── go.sum ``` +For a full control-plane architecture guide (components, data flow, config, +security, failure modes), see `docs/architecture.md`. + ## Autodispatch (OpenClaw + TickTick) OpenCodeRouter is designed to be the service-discovery layer in a **programming task autodispatch pipeline**. An external orchestrator (e.g. OpenClaw) polls a task source (e.g. TickTick), resolves the target project via the router, and dispatches the task to the correct OpenCode instance. diff --git a/docs/architecture.md b/docs/architecture.md new file mode 100644 index 0000000..3f24c94 --- /dev/null +++ b/docs/architecture.md @@ -0,0 +1,472 @@ +# OpenCodeRouter Architecture Guide + +This document describes the control-plane architecture implemented in this worktree. +It focuses on component boundaries, runtime data flow, configuration defaults, +security behavior, and failure/recovery semantics. + +## 1. Scope + +The codebase contains two major runtime surfaces: + +1. **Control Plane server** (`main.go` + `internal/*`) + Hosts discovery, reverse proxy, session lifecycle APIs, SSE, terminal websocket + bridge, browser dashboard assets, and session scrollback endpoints. + +2. **VS Code extension** (`vscode-extension/`) + Uses control-plane HTTP/SSE/WebSocket APIs for session tree, chat, terminal, + and diff workflow integration. + +The architecture below applies to the control plane plus browser and extension +clients. + +## 2. 
System Architecture (ASCII) + +```text + +-----------------------------+ + | VS Code | + | - Session Tree | + | - Chat Webview | + | - Terminal Profile (PTY) | + | - Diff Edit Manager | + +--------------+--------------+ + | + | HTTP / SSE / WS + v ++------------------------------+ +--------------+--------------+ +------------------------------+ +| Browser UI | | OpenCodeRouter | | OpenCode Daemon(s) | +| / (dashboard + terminal) |<------>| Control Plane Server |<------>| opencode serve per session | +| - sessions table | HTTP | - api router | HTTP | - /global/health | +| - SSE indicator | SSE | - sessions handler | | - /session APIs | +| - terminal xterm | WS | - events handler (SSE) | WS | - terminal transport | +| - chat panel | | - terminal ws bridge | | | ++------------------------------+ | - proxy + scanner + registry| +------------------------------+ + | - scrollback cache (JSONL) | + +--------------+--------------+ + | + | local process mgmt + v + +-----------------------------+ + | Session Manager | + | - create/stop/restart | + | - health checks + circuit | + | - attach terminal | + | - event publication | + +-----------------------------+ +``` + +## 3. Runtime Components and Boundaries + +### 3.1 `main.go` (composition root) + +Responsibilities: + +- Parses CLI flags and builds `config.Config` (`internal/config/config.go` defaults). +- Loads auth/cors settings via `auth.LoadFromEnv()`. +- Creates and wires: + - `registry.Registry` + - `scanner.Scanner` + - `proxy.Proxy` + - `session.Manager` + - `api.Router` + - optional mDNS advertiser +- Starts HTTP server and graceful shutdown path. +- Performs startup orphan-process detection and optional cleanup offer + (`--cleanup-orphans` or `OCR_CLEANUP_ORPHANS=1`). + +Boundary notes: + +- `main.go` owns object lifecycle and orchestration only; business behavior lives + in internal packages. +- Startup cleanup is explicit-action only; no silent destructive default. 
+ +### 3.2 `internal/config` + +Responsibilities: + +- Defines static control-plane defaults and validation constraints. +- Provides domain naming helper and outbound IP helper. + +Boundary notes: + +- No HTTP, no process management, no storage logic. + +### 3.3 `internal/auth` + +Responsibilities: + +- Defines auth/cors configuration model. +- Loads env-backed auth settings. +- Middleware integration occurs at API router boundary. + +Boundary notes: + +- Security policy is centralized by middleware; handlers do not duplicate auth + checks. + +### 3.4 `internal/scanner` + `internal/registry` + +Responsibilities: + +- Scanner probes configured local port range for daemon health/project/session info. +- Registry keeps thread-safe backend/session index and stale pruning. + +Boundary notes: + +- Scanner discovers runtime backends and refreshes registry snapshots. +- Registry is shared state for proxy routing and status views. + +### 3.5 `internal/session` manager + +Responsibilities: + +- Session lifecycle API: create/get/list/stop/restart/delete/attach terminal/health. +- Process supervision (`opencode serve` child process start + wait handling). +- Health loop with circuit-breaker behavior: + - threshold: 3 consecutive unhealthy probes by default + - cooldown: 30s by default before next probe + - reset on healthy probe and stop/restart paths +- Publishes session events to event bus. + +Boundary notes: + +- Manager is authoritative state for session lifecycle and health. +- Terminal WS handlers delegate terminal attachment via manager interface. + +### 3.6 `internal/api` router and handlers + +Responsibilities: + +- Mounts REST and SSE endpoints: + - `/api/sessions` and `/api/sessions/{id}/*` + - `/api/events` + - `/api/sessions/{id}/scrollback` + - `/ws/terminal/{session-id}` +- Session handler translates manager errors into stable HTTP status/code payloads + via `internal/errors` mapping. 
+- Event handler converts internal event types (including `session.health_changed`) + into SSE event stream (`session.health`). + +Boundary notes: + +- API layer owns transport contracts (JSON + SSE + WS), not core lifecycle logic. +- Fallback routing delegates to proxy/static UI handler. + +### 3.7 `internal/terminal` bridge + +Responsibilities: + +- Upgrades websocket connections and bridges client <-> daemon terminal streams. +- Validates session existence and health before attach. +- Appends terminal output to scrollback cache for reconnect hydration. + +Boundary notes: + +- Bridge is transport-level; terminal session ownership remains in session manager. + +### 3.8 Browser dashboard (`web/`) + +Responsibilities: + +- Session table and action controls (attach/stop/restart/delete). +- SSE status indicator states (`STREAM_ACTIVE`, `RECONNECTING`, `DISCONNECTED`). +- Terminal reconnect UX with bounded exponential backoff. +- Scrollback hydration before terminal websocket attach. +- Chat panel rendering and streaming support. + +Boundary notes: + +- Browser is a thin API/SSE/WS client; no daemon-direct calls. + +### 3.9 VS Code extension (`vscode-extension/`) + +Responsibilities: + +- Session tree provider with SSE-driven refresh and connection status bar. +- Resilient request path with bounded retry and stale-data fallback. +- Chat webview integration. +- PTY-backed terminal websocket bridge. +- Diff staging/preview/apply/reject workflow. + +Boundary notes: + +- Extension host performs control-plane communication; webview stays message-based. + +## 4. API-Level Data Flow + +### 4.1 Session lifecycle flow + +1. Client `POST /api/sessions` (workspace + optional labels). +2. Sessions handler validates payload and calls manager `Create`. +3. Manager allocates port, launches process, stores session, emits + `session.created` event. +4. Client receives normalized session view with health snapshot. +5. 
Subsequent operations (`stop`, `restart`, `delete`) map to manager methods + and publish corresponding events. + +Error mapping: + +- `WORKSPACE_PATH_REQUIRED`, `WORKSPACE_PATH_INVALID` -> `400` +- `SESSION_ALREADY_EXISTS`, `SESSION_STOPPED` -> `409` +- `SESSION_NOT_FOUND` -> `404` +- `NO_AVAILABLE_SESSION_PORTS` -> `503` +- `TERMINAL_ATTACH_UNAVAILABLE`, `DAEMON_UNHEALTHY` -> `503` + +### 4.2 Terminal data flow + +1. Browser/extension requests websocket upgrade at `/ws/terminal/{session-id}`. +2. Handler checks method, upgrade headers, session existence, and health. +3. Handler calls `AttachTerminal` on manager and starts bridge. +4. Client input is forwarded to daemon terminal stream. +5. Daemon output is forwarded to client and persisted to scrollback cache. +6. On disconnect, client-side reconnect logic controls retry/backoff behavior. + +### 4.3 Agent chat flow + +1. Client `POST /api/sessions/{id}/chat` with prompt payload. +2. Sessions handler creates daemon client for session daemon port. +3. Handler proxies daemon message stream back as SSE-style response chunks. +4. Browser/extension incrementally renders assistant/tool output. + +History path: + +- `GET /api/sessions/{id}/chat` -> daemon message history passthrough. + +### 4.4 Scrollback flow + +1. Terminal output appends entries to JSONL scrollback cache. +2. Client reconnect path requests + `GET /api/sessions/{id}/scrollback?type=terminal_output&limit=...`. +3. Handler applies filtering + offset/limit and returns entries. +4. Client hydrates terminal before opening live websocket. + +## 5. 
Configuration Reference (Defaults + Toggles) + +### 5.1 CLI/config defaults (`internal/config/config.go` + `main.go` flags) + +| Setting | Default | Source | Notes | +|---|---:|---|---| +| listen port | `8080` | `Config.Defaults()` | `--port` | +| listen addr | `0.0.0.0:8080` | `Config.Defaults()` + hostname flag | host by `--hostname` | +| username | OS user | `user.Current()` | `--username` override | +| scan start | `30000` | `Config.Defaults()` | `--scan-start` | +| scan end | `31000` | `Config.Defaults()` | `--scan-end` | +| scan interval | `5s` | `Config.Defaults()` | `--scan-interval` | +| scan concurrency | `20` | `Config.Defaults()` | `--scan-concurrency` | +| probe timeout | `800ms` | `Config.Defaults()` | `--probe-timeout` | +| stale after | `30s` | `Config.Defaults()` | `--stale-after` | +| mDNS enabled | `true` | `Config.Defaults()` | `--mdns` | +| mDNS service type | `_opencode._tcp` | `Config.Defaults()` | static default | +| startup orphan cleanup | `false` | `main.go` | opt-in `--cleanup-orphans` | + +Validation constraints: + +- port ranges must be 1..65535 +- `scan-end >= scan-start` +- `scan-interval >= 1s` +- username cannot be empty + +### 5.2 Session manager defaults (`internal/session/manager.go`) + +| Setting | Default | Notes | +|---|---:|---| +| session port range | `30000..31000` | overridden by manager config from `main.go` (`scan range + 100`) | +| health interval | `10s` | periodic health loop | +| health timeout | `2s` | per-probe context timeout | +| health fail threshold | `3` | opens circuit breaker | +| circuit cooldown | `30s` | next probe delay when circuit open | +| stop timeout | `5s` | graceful stop/kill fallback | +| opencode binary | `opencode` | default process starter command | + +### 5.3 Auth/CORS environment (`internal/auth/config.go`) + +| Env | Default | Meaning | +|---|---|---| +| `OCR_AUTH_ENABLED` | `false` | enables auth middleware gate | +| `OCR_AUTH_BEARER_TOKENS` | empty | CSV list of accepted bearer tokens | 
+| `OCR_AUTH_BASIC` | empty | CSV `user:pass` pairs | +| `OCR_CORS_ALLOW_ORIGINS` | `*` | CSV CORS allow-list | + +Bypass paths default: + +- `/api/health` +- `/api/backends` + +### 5.4 Startup cleanup env toggle (`main.go`) + +| Env | Default | Meaning | +|---|---|---| +| `OCR_CLEANUP_ORPHANS` | off | enables startup SIGTERM cleanup for detected orphan `opencode` listeners in scan range | + +### 5.5 VS Code extension runtime settings + +Defined in `vscode-extension/package.json`: + +| Setting | Default | Meaning | +|---|---|---| +| `opencode.controlPlaneUrl` | `http://localhost:8080` | base URL for control-plane API/SSE/WS | +| `opencode.authToken` | empty | optional bearer token for authenticated control planes | + +## 6. Security Model + +### 6.1 Network binding + +- Default bind is `0.0.0.0` (server reachable on network interfaces). +- For localhost-only operation, run with `--hostname 127.0.0.1`. + +### 6.2 Authentication + +- Optional middleware in front of all API/routes via `auth.Middleware`. +- Supports bearer token and basic auth based on environment configuration. +- Some health/backend endpoints can be bypassed by default path policy. + +### 6.3 CORS + +- Default CORS allow origins: `*`. +- Can be restricted by `OCR_CORS_ALLOW_ORIGINS` CSV values. + +### 6.4 Trust boundaries + +- Browser and extension are untrusted clients from server perspective; all + operations go through HTTP API checks. +- Session manager and daemon process orchestration run server-side only. + +### 6.5 Local process controls + +- Orphan cleanup is opt-in; default behavior is warning-only. +- Cleanup scope is bounded to configured scan port range and `opencode` listener + detection. + +## 7. 
Failure Modes and Recovery Behavior + +### 7.1 Session daemon unavailable + +Symptoms: + +- health probes fail +- terminal attach returns service unavailable + +Behavior: + +- manager marks health unhealthy and can transition session status to `error` +- SSE emits `session.health` +- dashboard row shows error state with start/restart affordance + +Recovery: + +- explicit `restart`/`start` action from UI/extension/API +- no automatic daemon restart behavior + +### 7.2 Port exhaustion for new session + +Symptoms: + +- create session fails with `NO_AVAILABLE_SESSION_PORTS` + +Behavior: + +- API returns descriptive `503` with stable error code + +Recovery: + +- stop/delete existing sessions +- widen configured scan/session port ranges + +### 7.3 SSE disruption + +Symptoms: + +- event stream disconnect/errors + +Behavior: + +- browser indicator transitions to reconnecting/disconnected states +- extension status bar transitions to disconnected/error with retry loop + +Recovery: + +- automatic reconnect loops with bounded delay logic + +### 7.4 Terminal websocket disruption + +Symptoms: + +- terminal websocket close/error + +Behavior: + +- browser terminal prints reconnect message and retries with exponential backoff +- extension terminal bridge performs reconnect strategy per bridge implementation + +Recovery: + +- automatic reconnect path +- user can detach/reattach terminal + +### 7.5 Control-plane API temporarily unavailable (extension) + +Symptoms: + +- session fetch failures / retryable statuses + +Behavior: + +- bounded backoff retries +- stale session data retained and marked as stale +- warning with Retry action + +Recovery: + +- retry from warning action or refresh command + +### 7.6 Startup orphan listeners in scan range + +Symptoms: + +- pre-existing `opencode serve` listeners occupy scan ports + +Behavior: + +- startup warning logs orphan candidates and cleanup hint +- optional cleanup if explicitly enabled + +Recovery: + +- rerun with explicit cleanup toggle 
+- or manually terminate orphan listeners + +## 8. Operational Notes + +1. Browser dashboard and VS Code extension both depend on control-plane API/SSE. +2. Terminal attach requires session terminal connectivity to daemon; environment + limitations can surface as 502/503 attach failures. +3. Scrollback hydration reduces terminal reconnect blind spots by loading cached + output before live websocket starts. +4. mDNS is optional; path-based routing and direct API usage remain available when + mDNS is disabled. + +## 9. File-Level Reference Map + +- Composition root: `main.go` +- Config defaults/validation: `internal/config/config.go` +- Auth/env config: `internal/auth/config.go` +- API router: `internal/api/router.go` +- Session lifecycle API: `internal/api/sessions.go` +- SSE stream: `internal/api/events.go` +- Scrollback API: `internal/api/scrollback.go` +- Session manager core: `internal/session/manager.go` +- Terminal websocket endpoint: `internal/terminal/handler.go` +- Browser client: `web/app.js`, `web/index.html` +- VS Code extension host: `vscode-extension/src/extension.ts` + +## 10. Summary + +OpenCodeRouter implements a control-plane architecture with clear layering: + +- discovery/proxy plane (`scanner`, `registry`, `proxy`) +- session lifecycle and health supervision (`session.Manager`) +- transport adapters (`internal/api`, `internal/terminal`) +- clients (browser dashboard and VS Code extension) + +Task 20–24 capabilities (terminal bridge, chat/diff integration, scrollback +hydration, resilience/error handling, circuit breaker) are represented in this +architecture and documented with concrete runtime defaults and failure behavior. 
diff --git a/go.mod b/go.mod index 4d7d2f3..413e86b 100644 --- a/go.mod +++ b/go.mod @@ -8,6 +8,7 @@ require ( charm.land/lipgloss/v2 v2.0.0 github.com/charmbracelet/x/vt v0.0.0-20260304084025-7dd5c0ab408e github.com/charmbracelet/x/xpty v0.1.3 + github.com/gorilla/websocket v1.5.3 github.com/grandcat/zeroconf v1.0.0 github.com/spf13/cobra v1.8.1 github.com/spf13/viper v1.20.1 diff --git a/go.sum b/go.sum index 36fb5bf..92a17ae 100644 --- a/go.sum +++ b/go.sum @@ -52,6 +52,8 @@ github.com/go-viper/mapstructure/v2 v2.2.1 h1:ZAaOCxANMuZx5RCeg0mBdEZk7DZasvvZIx github.com/go-viper/mapstructure/v2 v2.2.1/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg= +github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= github.com/grandcat/zeroconf v1.0.0 h1:uHhahLBKqwWBV6WZUDAT71044vwOTL+McW0mBJvo6kE= github.com/grandcat/zeroconf v1.0.0/go.mod h1:lTKmG1zh86XyCoUeIHSA4FJMBwCJiQmGfcP2PdzytEs= github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= diff --git a/internal/api/events.go b/internal/api/events.go new file mode 100644 index 0000000..8fb28e1 --- /dev/null +++ b/internal/api/events.go @@ -0,0 +1,300 @@ +package api + +import ( + "context" + "encoding/json" + "io" + "log/slog" + "net/http" + "strconv" + "strings" + "time" + + "opencoderouter/internal/session" +) + +const ( + defaultEventsKeepaliveInterval = 15 * time.Second + defaultEventsRetryInterval = 5 * time.Second +) + +type BackendEvent struct { + Type string + Timestamp time.Time + SessionID string + Data any +} + +type BackendEventSubscribeFunc func(ctx context.Context) (<-chan BackendEvent, func(), error) + +type EventsHandlerConfig struct { + SessionEventBus session.EventBus + 
BackendSubscribe BackendEventSubscribeFunc + Logger *slog.Logger + KeepaliveInterval time.Duration + RetryInterval time.Duration +} + +type EventsHandler struct { + sessionEvents session.EventBus + backendSubscribe BackendEventSubscribeFunc + logger *slog.Logger + keepalive time.Duration + retry time.Duration +} + +type streamEnvelope struct { + Type string `json:"type"` + Source string `json:"source"` + Timestamp string `json:"timestamp"` + SessionID string `json:"sessionId,omitempty"` + Sequence int64 `json:"sequence"` + Payload any `json:"payload,omitempty"` +} + +func NewEventsHandler(cfg EventsHandlerConfig) *EventsHandler { + logger := cfg.Logger + if logger == nil { + logger = slog.Default() + } + + keepalive := cfg.KeepaliveInterval + if keepalive <= 0 { + keepalive = defaultEventsKeepaliveInterval + } + + retry := cfg.RetryInterval + if retry <= 0 { + retry = defaultEventsRetryInterval + } + + return &EventsHandler{ + sessionEvents: cfg.SessionEventBus, + backendSubscribe: cfg.BackendSubscribe, + logger: logger, + keepalive: keepalive, + retry: retry, + } +} + +func (h *EventsHandler) Register(mux *http.ServeMux) { + if h == nil || mux == nil { + return + } + mux.HandleFunc("/api/events", h.handleEvents) +} + +func (h *EventsHandler) handleEvents(w http.ResponseWriter, r *http.Request) { + if r.Method != http.MethodGet { + writeAPIError(w, http.StatusMethodNotAllowed, "method not allowed", "METHOD_NOT_ALLOWED") + return + } + + if h.sessionEvents == nil && h.backendSubscribe == nil { + writeAPIError(w, http.StatusServiceUnavailable, "event stream unavailable", "EVENT_STREAM_UNAVAILABLE") + return + } + + flusher, ok := w.(http.Flusher) + if !ok { + writeAPIError(w, http.StatusInternalServerError, "streaming unsupported", "STREAMING_UNSUPPORTED") + return + } + + var ( + sessionCh <-chan session.Event + sessionUnsubscribe func() + backendCh <-chan BackendEvent + backendUnsubscribe func() + ) + + if h.sessionEvents != nil { + var err error + sessionCh, 
sessionUnsubscribe, err = h.sessionEvents.Subscribe(session.EventFilter{Types: []session.EventType{ + session.EventTypeSessionCreated, + session.EventTypeSessionStopped, + session.EventTypeSessionHealthChanged, + session.EventTypeSessionAttached, + session.EventTypeSessionDetached, + }}) + if err != nil { + h.logger.Warn("failed to subscribe to session events", "error", err) + writeAPIError(w, http.StatusServiceUnavailable, "event stream unavailable", "EVENT_STREAM_UNAVAILABLE") + return + } + } + + if h.backendSubscribe != nil { + var err error + backendCh, backendUnsubscribe, err = h.backendSubscribe(r.Context()) + if err != nil { + if sessionUnsubscribe != nil { + sessionUnsubscribe() + } + h.logger.Warn("failed to subscribe to backend events", "error", err) + writeAPIError(w, http.StatusServiceUnavailable, "event stream unavailable", "EVENT_STREAM_UNAVAILABLE") + return + } + } + + defer func() { + if sessionUnsubscribe != nil { + sessionUnsubscribe() + } + if backendUnsubscribe != nil { + backendUnsubscribe() + } + }() + + w.Header().Set("Content-Type", "text/event-stream") + w.Header().Set("Cache-Control", "no-cache") + w.Header().Set("Connection", "keep-alive") + w.Header().Set("X-Accel-Buffering", "no") + w.WriteHeader(http.StatusOK) + + if err := writeSSERetry(w, flusher, h.retry); err != nil { + return + } + + sequence := parseLastEventID(r.Header.Get("Last-Event-ID")) + ticker := time.NewTicker(h.keepalive) + defer ticker.Stop() + + for { + if sessionCh == nil && backendCh == nil { + return + } + + select { + case <-r.Context().Done(): + return + case ev, ok := <-sessionCh: + if !ok { + sessionCh = nil + continue + } + + sequence++ + envelope := newSessionEnvelope(sequence, ev) + if err := writeSSEJSON(w, flusher, sequence, envelope.Type, envelope); err != nil { + return + } + case ev, ok := <-backendCh: + if !ok { + backendCh = nil + continue + } + + sequence++ + envelope := newBackendEnvelope(sequence, ev) + if err := writeSSEJSON(w, flusher, sequence, 
envelope.Type, envelope); err != nil { + return + } + case <-ticker.C: + if err := writeSSEComment(w, flusher, "keepalive"); err != nil { + return + } + } + } +} + +func newSessionEnvelope(sequence int64, ev session.Event) streamEnvelope { + timestamp := ev.Timestamp().UTC() + if timestamp.IsZero() { + timestamp = time.Now().UTC() + } + + eventType := string(ev.Type()) + if ev.Type() == session.EventTypeSessionHealthChanged { + eventType = "session.health" + } + + return streamEnvelope{ + Type: eventType, + Source: "session", + Timestamp: timestamp.Format(timeLayoutRFC3339Nano), + SessionID: ev.SessionID(), + Sequence: sequence, + Payload: ev, + } +} + +func newBackendEnvelope(sequence int64, ev BackendEvent) streamEnvelope { + timestamp := ev.Timestamp.UTC() + if timestamp.IsZero() { + timestamp = time.Now().UTC() + } + + eventType := strings.TrimSpace(ev.Type) + if eventType == "" { + eventType = "backend.event" + } + + return streamEnvelope{ + Type: eventType, + Source: "backend", + Timestamp: timestamp.Format(timeLayoutRFC3339Nano), + SessionID: strings.TrimSpace(ev.SessionID), + Sequence: sequence, + Payload: ev.Data, + } +} + +func parseLastEventID(raw string) int64 { + raw = strings.TrimSpace(raw) + if raw == "" { + return 0 + } + + id, err := strconv.ParseInt(raw, 10, 64) + if err != nil || id < 0 { + return 0 + } + + return id +} + +func writeSSERetry(w io.Writer, flusher http.Flusher, retry time.Duration) error { + if retry <= 0 { + retry = defaultEventsRetryInterval + } + if _, err := io.WriteString(w, "retry: "+strconv.FormatInt(retry.Milliseconds(), 10)+"\n\n"); err != nil { + return err + } + flusher.Flush() + return nil +} + +func writeSSEComment(w io.Writer, flusher http.Flusher, comment string) error { + comment = strings.ReplaceAll(comment, "\n", " ") + comment = strings.ReplaceAll(comment, "\r", " ") + if _, err := io.WriteString(w, ": "+comment+"\n\n"); err != nil { + return err + } + flusher.Flush() + return nil +} + +func writeSSEJSON(w 
io.Writer, flusher http.Flusher, id int64, event string, payload any) error { + encoded, err := json.Marshal(payload) + if err != nil { + return err + } + + if _, err := io.WriteString(w, "id: "+strconv.FormatInt(id, 10)+"\n"); err != nil { + return err + } + + event = strings.ReplaceAll(event, "\n", " ") + event = strings.ReplaceAll(event, "\r", " ") + if _, err := io.WriteString(w, "event: "+event+"\n"); err != nil { + return err + } + + if _, err := io.WriteString(w, "data: "+string(encoded)+"\n\n"); err != nil { + return err + } + + flusher.Flush() + return nil +} diff --git a/internal/api/events_test.go b/internal/api/events_test.go new file mode 100644 index 0000000..a017a8e --- /dev/null +++ b/internal/api/events_test.go @@ -0,0 +1,368 @@ +package api + +import ( + "bufio" + "context" + "encoding/json" + "io" + "net/http" + "net/http/httptest" + "strconv" + "strings" + "testing" + "time" + + "opencoderouter/internal/session" +) + +type parsedSSEFrame struct { + ID string + Event string + Data []string + Comments []string + Retry string +} + +type parsedStreamEnvelope struct { + Type string `json:"type"` + Source string `json:"source"` + Timestamp string `json:"timestamp"` + SessionID string `json:"sessionId"` + Sequence int64 `json:"sequence"` +} + +func TestEventsHandlerStreamsSessionEventsAndKeepalive(t *testing.T) { + eventBus := session.NewEventBus(16) + + mux := http.NewServeMux() + NewEventsHandler(EventsHandlerConfig{ + SessionEventBus: eventBus, + KeepaliveInterval: 20 * time.Millisecond, + RetryInterval: 10 * time.Millisecond, + }).Register(mux) + + srv := httptest.NewServer(mux) + defer srv.Close() + + resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/events", nil) + if resp.StatusCode != http.StatusOK { + defer resp.Body.Close() + t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK) + } + if contentType := resp.Header.Get("Content-Type"); !strings.HasPrefix(contentType, "text/event-stream") { + defer resp.Body.Close() + 
t.Fatalf("content-type=%q want prefix text/event-stream", contentType)
+	}
+	defer resp.Body.Close()
+
+	reader := bufio.NewReader(resp.Body)
+	retryFrame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool {
+		return frame.Retry != ""
+	})
+	if retryFrame.Retry == "" {
+		t.Fatal("expected retry frame")
+	}
+
+	now := time.Now().UTC()
+	handle := session.SessionHandle{ID: "s-created", DaemonPort: 30123, WorkspacePath: "/tmp/work", Status: session.SessionStatusActive, CreatedAt: now, LastActivity: now}
+	if err := eventBus.Publish(session.SessionCreated{At: now, Session: handle}); err != nil {
+		t.Fatalf("publish session.created: %v", err)
+	}
+
+	createdFrame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool {
+		return frame.Event == "session.created" && len(frame.Data) > 0
+	})
+	if createdFrame.ID != "1" {
+		t.Fatalf("created id=%q want=1", createdFrame.ID)
+	}
+
+	var createdPayload parsedStreamEnvelope
+	decodeSSEDataJSON(t, createdFrame, &createdPayload)
+	if createdPayload.Type != "session.created" {
+		t.Fatalf("created type=%q want=session.created", createdPayload.Type)
+	}
+	if createdPayload.Source != "session" {
+		t.Fatalf("created source=%q want=session", createdPayload.Source)
+	}
+	if createdPayload.SessionID != "s-created" {
+		t.Fatalf("created sessionId=%q want=s-created", createdPayload.SessionID)
+	}
+	if createdPayload.Sequence != 1 {
+		t.Fatalf("created sequence=%d want=1", createdPayload.Sequence)
+	}
+
+	if err := eventBus.Publish(session.SessionHealthChanged{
+		At:       now.Add(2 * time.Second),
+		Session:  handle,
+		Previous: session.HealthStatus{State: session.HealthStateHealthy},
+		Current:  session.HealthStatus{State: session.HealthStateUnhealthy, Error: "probe timeout"},
+	}); err != nil {
+		t.Fatalf("publish session.health: %v", err)
+	}
+
+	healthFrame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool {
+		return frame.Event == "session.health" && len(frame.Data) > 0
+	})
+
+	if healthFrame.ID != "2" {
+		t.Fatalf("health id=%q want=2", healthFrame.ID)
+	}
+
+	var healthPayload parsedStreamEnvelope
+	decodeSSEDataJSON(t, healthFrame, &healthPayload)
+	if healthPayload.Type != "session.health" {
+		t.Fatalf("health type=%q want=session.health", healthPayload.Type)
+	}
+	if healthPayload.Sequence != 2 {
+		t.Fatalf("health sequence=%d want=2", healthPayload.Sequence)
+	}
+
+	keepaliveFrame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool {
+		for _, comment := range frame.Comments {
+			if strings.Contains(comment, "keepalive") {
+				return true
+			}
+		}
+		return false
+	})
+	if len(keepaliveFrame.Comments) == 0 {
+		t.Fatal("expected keepalive comment frame")
+	}
+}
+
+func TestEventsHandlerAppliesLastEventIDSequencing(t *testing.T) {
+	eventBus := session.NewEventBus(16)
+
+	mux := http.NewServeMux()
+	NewEventsHandler(EventsHandlerConfig{SessionEventBus: eventBus}).Register(mux)
+
+	srv := httptest.NewServer(mux)
+	defer srv.Close()
+
+	req, err := http.NewRequest(http.MethodGet, srv.URL+"/api/events", nil)
+	if err != nil {
+		t.Fatalf("new request: %v", err)
+	}
+	req.Header.Set("Last-Event-ID", "41")
+
+	resp, err := srv.Client().Do(req)
+	if err != nil {
+		t.Fatalf("request failed: %v", err)
+	}
+	if resp.StatusCode != http.StatusOK {
+		defer resp.Body.Close()
+		t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK)
+	}
+	defer resp.Body.Close()
+
+	reader := bufio.NewReader(resp.Body)
+	_ = readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool {
+		return frame.Retry != ""
+	})
+
+	now := time.Now().UTC()
+	handle := session.SessionHandle{ID: "s-stopped", DaemonPort: 30124, WorkspacePath: "/tmp/work", Status: session.SessionStatusStopped, CreatedAt: now, LastActivity: now}
+	if err := eventBus.Publish(session.SessionStopped{At: now, Session: handle, Reason: "user"}); err != nil {
+		t.Fatalf("publish session.stopped: %v", err)
+	}
+
+	stoppedFrame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool {
+		return frame.Event == "session.stopped" && len(frame.Data) > 0
+	})
+	if stoppedFrame.ID != "42" {
+		t.Fatalf("stopped id=%q want=42", stoppedFrame.ID)
+	}
+
+	var stoppedPayload parsedStreamEnvelope
+	decodeSSEDataJSON(t, stoppedFrame, &stoppedPayload)
+	if stoppedPayload.Sequence != 42 {
+		t.Fatalf("stopped sequence=%d want=42", stoppedPayload.Sequence)
+	}
+}
+
+func TestEventsHandlerStreamsBackendEventsWhenAvailable(t *testing.T) {
+	backendEvents := make(chan BackendEvent, 4)
+
+	mux := http.NewServeMux()
+	NewEventsHandler(EventsHandlerConfig{
+		BackendSubscribe: func(_ context.Context) (<-chan BackendEvent, func(), error) {
+			return backendEvents, func() {}, nil
+		},
+	}).Register(mux)
+
+	srv := httptest.NewServer(mux)
+	defer srv.Close()
+
+	resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/events", nil)
+	if resp.StatusCode != http.StatusOK {
+		defer resp.Body.Close()
+		t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK)
+	}
+	defer resp.Body.Close()
+
+	reader := bufio.NewReader(resp.Body)
+	_ = readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool {
+		return frame.Retry != ""
+	})
+
+	backendEvents <- BackendEvent{
+		Type:      "backend.updated",
+		Timestamp: time.Now().UTC(),
+		Data:      map[string]any{"slug": "proj-a", "port": 32000},
+	}
+
+	backendFrame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool {
+		return frame.Event == "backend.updated" && len(frame.Data) > 0
+	})
+	if backendFrame.ID != "1" {
+		t.Fatalf("backend id=%q want=1", backendFrame.ID)
+	}
+
+	var backendPayload parsedStreamEnvelope
+	decodeSSEDataJSON(t, backendFrame, &backendPayload)
+	if backendPayload.Source != "backend" {
+		t.Fatalf("backend source=%q want=backend", backendPayload.Source)
+	}
+	if backendPayload.Type != "backend.updated" {
+		t.Fatalf("backend type=%q want=backend.updated", backendPayload.Type)
+	}
+	if backendPayload.Sequence != 1 {
+		t.Fatalf("backend sequence=%d want=1", backendPayload.Sequence)
+	}
+}
+
+func readSSEUntil(t *testing.T, reader *bufio.Reader, timeout time.Duration, match func(parsedSSEFrame) bool) parsedSSEFrame {
+	t.Helper()
+	deadline := time.Now().Add(timeout)
+	for {
+		remaining := time.Until(deadline)
+		if remaining <= 0 {
+			t.Fatalf("timed out waiting for matching SSE frame after %s", timeout)
+		}
+		frame := readSSEFrame(t, reader, remaining)
+		if match(frame) {
+			return frame
+		}
+	}
+}
+
+func readSSEFrame(t *testing.T, reader *bufio.Reader, timeout time.Duration) parsedSSEFrame {
+	t.Helper()
+	type result struct {
+		frame parsedSSEFrame
+		err   error
+	}
+	resultCh := make(chan result, 1)
+
+	go func() {
+		var frame parsedSSEFrame
+		for {
+			line, err := reader.ReadString('\n')
+			if err != nil {
+				resultCh <- result{err: err}
+				return
+			}
+			line = strings.TrimRight(line, "\r\n")
+			if line == "" {
+				if frame.ID != "" || frame.Event != "" || frame.Retry != "" || len(frame.Data) > 0 || len(frame.Comments) > 0 {
+					resultCh <- result{frame: frame}
+					return
+				}
+				continue
+			}
+
+			switch {
+			case strings.HasPrefix(line, "id:"):
+				frame.ID = strings.TrimSpace(strings.TrimPrefix(line, "id:"))
+			case strings.HasPrefix(line, "event:"):
+				frame.Event = strings.TrimSpace(strings.TrimPrefix(line, "event:"))
+			case strings.HasPrefix(line, "data:"):
+				frame.Data = append(frame.Data, strings.TrimSpace(strings.TrimPrefix(line, "data:")))
+			case strings.HasPrefix(line, "retry:"):
+				frame.Retry = strings.TrimSpace(strings.TrimPrefix(line, "retry:"))
+			case strings.HasPrefix(line, ":"):
+				frame.Comments = append(frame.Comments, strings.TrimSpace(strings.TrimPrefix(line, ":")))
+			}
+		}
+	}()
+
+	select {
+	case res := <-resultCh:
+		if res.err != nil {
+			if res.err == io.EOF {
+				t.Fatal("unexpected EOF while reading SSE frame")
+			}
+			t.Fatalf("read SSE frame: %v", res.err)
+		}
+		return res.frame
+	case <-time.After(timeout):
+		t.Fatalf("timed out reading SSE frame after %s", timeout)
+	}
+
+	return parsedSSEFrame{}
+}
+
+func decodeSSEDataJSON(t *testing.T, frame parsedSSEFrame, dst any) {
+	t.Helper()
+	if len(frame.Data) == 0 {
+		t.Fatal("expected SSE data lines")
+	}
+	joined := strings.Join(frame.Data, "\n")
+	if err := json.Unmarshal([]byte(joined), dst); err != nil {
+		t.Fatalf("decode SSE data %q: %v", joined, err)
+	}
+}
+
+func TestEventsHandlerRejectsUnsupportedMethod(t *testing.T) {
+	eventBus := session.NewEventBus(4)
+
+	mux := http.NewServeMux()
+	NewEventsHandler(EventsHandlerConfig{SessionEventBus: eventBus}).Register(mux)
+	srv := httptest.NewServer(mux)
+	defer srv.Close()
+
+	resp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/events", nil)
+	assertErrorShape(t, resp, http.StatusMethodNotAllowed, "METHOD_NOT_ALLOWED")
+}
+
+func TestEventsHandlerParsesInvalidLastEventIDAsZero(t *testing.T) {
+	eventBus := session.NewEventBus(8)
+
+	mux := http.NewServeMux()
+	NewEventsHandler(EventsHandlerConfig{SessionEventBus: eventBus}).Register(mux)
+	srv := httptest.NewServer(mux)
+	defer srv.Close()
+
+	req, err := http.NewRequest(http.MethodGet, srv.URL+"/api/events", nil)
+	if err != nil {
+		t.Fatalf("new request: %v", err)
+	}
+	req.Header.Set("Last-Event-ID", "nonsense")
+
+	resp, err := srv.Client().Do(req)
+	if err != nil {
+		t.Fatalf("request failed: %v", err)
+	}
+	if resp.StatusCode != http.StatusOK {
+		defer resp.Body.Close()
+		t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK)
+	}
+	defer resp.Body.Close()
+
+	reader := bufio.NewReader(resp.Body)
+	_ = readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool {
+		return frame.Retry != ""
+	})
+
+	now := time.Now().UTC()
+	handle := session.SessionHandle{ID: "s-invalid-last-id", DaemonPort: 30125, WorkspacePath: "/tmp/work", Status: session.SessionStatusActive, CreatedAt: now, LastActivity: now}
+	if err := eventBus.Publish(session.SessionAttached{At: now, Session: handle, AttachedClients: 1, ClientID: "c-1"}); err != nil {
+		t.Fatalf("publish session.attached: %v", err)
+	}
+
+	attachedFrame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool {
+		return frame.Event == "session.attached" && len(frame.Data) > 0
+	})
+	if attachedFrame.ID != strconv.FormatInt(1, 10) {
+		t.Fatalf("attached id=%q want=1", attachedFrame.ID)
+	}
+}
diff --git a/internal/api/remote_hosts.go b/internal/api/remote_hosts.go
new file mode 100644
index 0000000..2c6e4b3
--- /dev/null
+++ b/internal/api/remote_hosts.go
@@ -0,0 +1,502 @@
+package api
+
+import (
+	"context"
+	"errors"
+	"io"
+	"log/slog"
+	"net/http"
+	"strconv"
+	"strings"
+	"sync"
+	"time"
+
+	"opencoderouter/internal/model"
+	"opencoderouter/internal/remote"
+	tuiconfig "opencoderouter/internal/tui/config"
+)
+
+const defaultRemoteHostsCacheTTL = 60 * time.Second
+
+type remoteHostsDiscoverer interface {
+	Discover(ctx context.Context) ([]model.Host, error)
+}
+
+type remoteHostsProber interface {
+	ProbeHosts(ctx context.Context, hosts []model.Host) ([]model.Host, error)
+}
+
+type remoteHostsPathSetter interface {
+	SetSSHConfigPath(path string)
+}
+
+type RemoteHostsHandlerConfig struct {
+	DiscoveryOptions remote.DiscoveryOptions
+	ProbeOptions     remote.ProbeOptions
+	CacheTTL         time.Duration
+	Runner           remote.Runner
+	Logger           *slog.Logger
+
+	DiscoveryService remoteHostsDiscoverer
+	ProbeService     remoteHostsProber
+}
+
+type RemoteHostsHandler struct {
+	discovery remoteHostsDiscoverer
+	probe     remoteHostsProber
+	logger    *slog.Logger
+	cacheTTL  time.Duration
+
+	mu             sync.RWMutex
+	lastHosts      []model.Host
+	lastScannedAt  time.Time
+	lastPartial    bool
+	lastWarnings   []string
+	lastConfigPath string
+}
+
+type remoteHostsResponse struct {
+	Hosts    []remoteHostView `json:"hosts"`
+	Cached   bool             `json:"cached"`
+	Stale    bool             `json:"stale"`
+	Partial  bool             `json:"partial"`
+	LastScan string           `json:"lastScan,omitempty"`
+	Warnings []string         `json:"warnings,omitempty"`
+}
+
+type remoteHostView struct {
+	Name           string              `json:"name"`
+	Address        string              `json:"address"`
+	User           string              `json:"user,omitempty"`
+	Label          string              `json:"label"`
+	Priority       int                 `json:"priority,omitempty"`
+	Status         string              `json:"status"`
+	LastSeen       string              `json:"lastSeen,omitempty"`
+	LastError      string              `json:"lastError,omitempty"`
+	OpencodeBin    string              `json:"opencodeBin,omitempty"`
+	SessionCount   int                 `json:"sessionCount"`
+	Projects       []remoteProjectView `json:"projects,omitempty"`
+	ProxyKind      string              `json:"proxyKind,omitempty"`
+	ProxyJumpRaw   string              `json:"proxyJumpRaw,omitempty"`
+	ProxyCommand   string              `json:"proxyCommand,omitempty"`
+	DependsOn      []string            `json:"dependsOn,omitempty"`
+	Dependents     []string            `json:"dependents,omitempty"`
+	BlockedBy      []string            `json:"blockedBy,omitempty"`
+	Transport      string              `json:"transport,omitempty"`
+	TransportError string              `json:"transportError,omitempty"`
+}
+
+type remoteProjectView struct {
+	Name     string              `json:"name"`
+	Sessions []remoteSessionView `json:"sessions,omitempty"`
+}
+
+type remoteSessionView struct {
+	ID           string   `json:"id"`
+	Project      string   `json:"project,omitempty"`
+	Title        string   `json:"title,omitempty"`
+	Directory    string   `json:"directory,omitempty"`
+	LastActivity string   `json:"lastActivity,omitempty"`
+	Status       string   `json:"status"`
+	MessageCount int      `json:"messageCount,omitempty"`
+	Agents       []string `json:"agents,omitempty"`
+	Activity     string   `json:"activity,omitempty"`
+}
+
+func NewRemoteHostsHandler(cfg RemoteHostsHandlerConfig) *RemoteHostsHandler {
+	logger := cfg.Logger
+	if logger == nil {
+		logger = slog.New(slog.NewTextHandler(io.Discard, nil))
+	}
+
+	ttl := cfg.CacheTTL
+	if ttl <= 0 {
+		ttl = defaultRemoteHostsCacheTTL
+	}
+
+	discoverySvc := cfg.DiscoveryService
+	if discoverySvc == nil {
+		discoverySvc = remote.NewDiscoveryService(normalizeDiscoveryOptions(cfg.DiscoveryOptions), cfg.Runner, logger)
+	}
+
+	probeSvc := cfg.ProbeService
+	if probeSvc == nil {
+		probeSvc = remote.NewProbeService(normalizeProbeOptions(cfg.ProbeOptions), cfg.Runner, remote.NewCacheStore(ttl), logger)
+	}
+
+	return &RemoteHostsHandler{
+		discovery: discoverySvc,
+		probe:     probeSvc,
+		logger:    logger,
+		cacheTTL:  ttl,
+	}
+}
+
+func (h *RemoteHostsHandler) Register(mux *http.ServeMux) {
+	if h == nil || mux == nil {
+		return
+	}
+	mux.HandleFunc("/api/remote/hosts", h.handleList)
+}
+
+func (h *RemoteHostsHandler) handleList(w http.ResponseWriter, r *http.Request) {
+	if r.Method != http.MethodGet {
+		writeAPIError(w, http.StatusMethodNotAllowed, "method not allowed", "METHOD_NOT_ALLOWED")
+		return
+	}
+
+	if h.discovery == nil || h.probe == nil {
+		writeAPIError(w, http.StatusServiceUnavailable, "remote host services unavailable", "REMOTE_HOSTS_UNAVAILABLE")
+		return
+	}
+
+	refresh, err := parseBoolQuery(r, "refresh")
+	if err != nil {
+		writeAPIError(w, http.StatusBadRequest, err.Error(), "INVALID_QUERY")
+		return
+	}
+
+	sshConfigPath := strings.TrimSpace(r.URL.Query().Get("sshConfigPath"))
+	if setter, ok := h.discovery.(remoteHostsPathSetter); ok {
+		setter.SetSSHConfigPath(sshConfigPath)
+	} else if sshConfigPath != "" {
+		writeAPIError(w, http.StatusBadRequest, "sshConfigPath override unsupported", "SSH_CONFIG_OVERRIDE_UNSUPPORTED")
+		return
+	}
+
+	if !refresh {
+		if hosts, scannedAt, partial, warnings, ok := h.snapshotIfFresh(sshConfigPath); ok {
+			writeJSON(w, http.StatusOK, remoteHostsResponse{
+				Hosts:    toRemoteHostViews(hosts),
+				Cached:   true,
+				Stale:    false,
+				Partial:  partial,
+				LastScan: formatOptionalTime(scannedAt),
+				Warnings: append([]string(nil), warnings...),
+			})
+			return
+		}
+	}
+
+	hosts, warnings, partial, scanErr := h.scan(r.Context())
+	if scanErr != nil {
+		h.logger.Warn("remote host scan completed with errors", "error", remote.SanitizeLogError(scanErr), "host_count", len(hosts))
+	}
+
+	if len(hosts) == 0 && scanErr != nil {
+		if cachedHosts, scannedAt, cachedPartial, cachedWarnings, ok := h.latestSnapshot(sshConfigPath); ok {
+			warnings = append(warnings, cachedWarnings...)
+			partial = partial || cachedPartial
+			writeJSON(w, http.StatusOK, remoteHostsResponse{
+				Hosts:    toRemoteHostViews(cachedHosts),
+				Cached:   true,
+				Stale:    true,
+				Partial:  partial,
+				LastScan: formatOptionalTime(scannedAt),
+				Warnings: uniqueStrings(warnings),
+			})
+			return
+		}
+
+		writeAPIError(w, http.StatusServiceUnavailable, "remote host scan failed", "REMOTE_HOST_SCAN_FAILED")
+		return
+	}
+
+	h.storeSnapshot(sshConfigPath, hosts, partial, warnings)
+	writeJSON(w, http.StatusOK, remoteHostsResponse{
+		Hosts:    toRemoteHostViews(hosts),
+		Cached:   false,
+		Stale:    false,
+		Partial:  partial,
+		LastScan: formatOptionalTime(time.Now().UTC()),
+		Warnings: warnings,
+	})
+}
+
+func parseBoolQuery(r *http.Request, key string) (bool, error) {
+	value := strings.TrimSpace(r.URL.Query().Get(key))
+	if value == "" {
+		return false, nil
+	}
+	parsed, err := strconv.ParseBool(value)
+	if err != nil {
+		return false, errors.New("invalid query boolean: " + key)
+	}
+	return parsed, nil
+}
+
+func (h *RemoteHostsHandler) scan(ctx context.Context) ([]model.Host, []string, bool, error) {
+	hosts, discoverErr := h.discovery.Discover(ctx)
+	warnings := make([]string, 0)
+	partial := false
+
+	if discoverErr != nil {
+		partial = true
+		warnings = append(warnings, "discovery: "+remote.SanitizeLogError(discoverErr))
+	}
+
+	var probeErr error
+	if len(hosts) > 0 {
+		hosts, probeErr = h.probe.ProbeHosts(ctx, hosts)
+		if probeErr != nil {
+			partial = true
+			warnings = append(warnings, "probe: "+remote.SanitizeLogError(probeErr))
+		}
+	}
+
+	warnings = uniqueStrings(warnings)
+	return hosts, warnings, partial, errors.Join(discoverErr, probeErr)
+}
+
+func (h *RemoteHostsHandler) snapshotIfFresh(configPath string) ([]model.Host, time.Time, bool, []string, bool) {
+	h.mu.RLock()
+	defer h.mu.RUnlock()
+
+	if len(h.lastHosts) == 0 || h.lastScannedAt.IsZero() {
+		return nil, time.Time{}, false, nil, false
+	}
+	if h.lastConfigPath != configPath {
+		return nil, time.Time{}, false, nil, false
+	}
+	if h.cacheTTL > 0 && time.Since(h.lastScannedAt) > h.cacheTTL {
+		return nil, time.Time{}, false, nil, false
+	}
+
+	return cloneHosts(h.lastHosts), h.lastScannedAt, h.lastPartial, append([]string(nil), h.lastWarnings...), true
+}
+
+func (h *RemoteHostsHandler) latestSnapshot(configPath string) ([]model.Host, time.Time, bool, []string, bool) {
+	h.mu.RLock()
+	defer h.mu.RUnlock()
+
+	if len(h.lastHosts) == 0 || h.lastScannedAt.IsZero() {
+		return nil, time.Time{}, false, nil, false
+	}
+	if h.lastConfigPath != configPath {
+		return nil, time.Time{}, false, nil, false
+	}
+
+	return cloneHosts(h.lastHosts), h.lastScannedAt, h.lastPartial, append([]string(nil), h.lastWarnings...), true
+}
+
+func (h *RemoteHostsHandler) storeSnapshot(configPath string, hosts []model.Host, partial bool, warnings []string) {
+	h.mu.Lock()
+	h.lastHosts = cloneHosts(hosts)
+	h.lastScannedAt = time.Now().UTC()
+	h.lastPartial = partial
+	h.lastWarnings = append([]string(nil), warnings...)
+	h.lastConfigPath = configPath
+	h.mu.Unlock()
+}
+
+func cloneHosts(hosts []model.Host) []model.Host {
+	if len(hosts) == 0 {
+		return nil
+	}
+	cloned := make([]model.Host, 0, len(hosts))
+	for _, host := range hosts {
+		cloned = append(cloned, cloneHost(host))
+	}
+	return cloned
+}
+
+func cloneHost(host model.Host) model.Host {
+	cloned := host
+	cloned.Projects = cloneProjects(host.Projects)
+	cloned.JumpChain = append([]model.JumpHop(nil), host.JumpChain...)
+	cloned.DependsOn = append([]string(nil), host.DependsOn...)
+	cloned.Dependents = append([]string(nil), host.Dependents...)
+	cloned.BlockedBy = append([]string(nil), host.BlockedBy...)
+	return cloned
+}
+
+func cloneProjects(projects []model.Project) []model.Project {
+	if len(projects) == 0 {
+		return nil
+	}
+	cloned := make([]model.Project, 0, len(projects))
+	for _, project := range projects {
+		copied := model.Project{
+			Name:     project.Name,
+			Sessions: append([]model.Session(nil), project.Sessions...),
+		}
+		for i := range copied.Sessions {
+			copied.Sessions[i].Agents = append([]string(nil), copied.Sessions[i].Agents...)
+		}
+		cloned = append(cloned, copied)
+	}
+	return cloned
+}
+
+func toRemoteHostViews(hosts []model.Host) []remoteHostView {
+	if len(hosts) == 0 {
+		return []remoteHostView{}
+	}
+	views := make([]remoteHostView, 0, len(hosts))
+	for _, host := range hosts {
+		views = append(views, remoteHostView{
+			Name:           host.Name,
+			Address:        host.Address,
+			User:           host.User,
+			Label:          host.Label,
+			Priority:       host.Priority,
+			Status:         string(host.Status),
+			LastSeen:       formatOptionalTime(host.LastSeen),
+			LastError:      host.LastError,
+			OpencodeBin:    host.OpencodeBin,
+			SessionCount:   host.SessionCount(),
+			Projects:       toRemoteProjectViews(host.Projects),
+			ProxyKind:      string(host.ProxyKind),
+			ProxyJumpRaw:   host.ProxyJumpRaw,
+			ProxyCommand:   host.ProxyCommand,
+			DependsOn:      append([]string(nil), host.DependsOn...),
+			Dependents:     append([]string(nil), host.Dependents...),
+			BlockedBy:      append([]string(nil), host.BlockedBy...),
+			Transport:      string(host.Transport),
+			TransportError: host.TransportError,
+		})
+	}
+	return views
+}
+
+func toRemoteProjectViews(projects []model.Project) []remoteProjectView {
+	if len(projects) == 0 {
+		return []remoteProjectView{}
+	}
+	views := make([]remoteProjectView, 0, len(projects))
+	for _, project := range projects {
+		views = append(views, remoteProjectView{
+			Name:     project.Name,
+			Sessions: toRemoteSessionViews(project.Sessions),
+		})
+	}
+	return views
+}
+
+func toRemoteSessionViews(sessions []model.Session) []remoteSessionView {
+	if len(sessions) == 0 {
+		return []remoteSessionView{}
+	}
+	views := make([]remoteSessionView, 0, len(sessions))
+	for _, session := range sessions {
+		views = append(views, remoteSessionView{
+			ID:           session.ID,
+			Project:      session.Project,
+			Title:        session.Title,
+			Directory:    session.Directory,
+			LastActivity: formatOptionalTime(session.LastActivity),
+			Status:       string(session.Status),
+			MessageCount: session.MessageCount,
+			Agents:       append([]string(nil), session.Agents...),
+			Activity:     string(session.Activity),
+		})
+	}
+	return views
+}
+
+func formatOptionalTime(ts time.Time) string {
+	if ts.IsZero() {
+		return ""
+	}
+	return ts.UTC().Format(timeLayoutRFC3339Nano)
+}
+
+func uniqueStrings(values []string) []string {
+	if len(values) == 0 {
+		return nil
+	}
+	seen := make(map[string]struct{}, len(values))
+	out := make([]string, 0, len(values))
+	for _, value := range values {
+		trimmed := strings.TrimSpace(value)
+		if trimmed == "" {
+			continue
+		}
+		if _, ok := seen[trimmed]; ok {
+			continue
+		}
+		seen[trimmed] = struct{}{}
+		out = append(out, trimmed)
+	}
+	if len(out) == 0 {
+		return nil
+	}
+	return out
+}
+
+func normalizeDiscoveryOptions(options remote.DiscoveryOptions) remote.DiscoveryOptions {
+	defaults := tuiconfig.DefaultConfig()
+
+	if len(options.Include) == 0 {
+		options.Include = append([]string(nil), defaults.Hosts.Include...)
+	}
+	if len(options.Ignore) == 0 {
+		options.Ignore = append([]string(nil), defaults.Hosts.Ignore...)
+	}
+	if len(options.Overrides) == 0 {
+		options.Overrides = hostOverridesFromTUI(defaults.Hosts.Overrides)
+	}
+
+	return options
+}
+
+func normalizeProbeOptions(options remote.ProbeOptions) remote.ProbeOptions {
+	defaults := tuiconfig.DefaultConfig()
+
+	if options.MaxParallel <= 0 {
+		options.MaxParallel = defaults.Polling.MaxParallel
+	}
+	if len(options.SessionScanPaths) == 0 {
+		options.SessionScanPaths = append([]string(nil), defaults.Sessions.ScanPaths...)
+	}
+	if len(options.Overrides) == 0 {
+		options.Overrides = hostOverridesFromTUI(defaults.Hosts.Overrides)
+	}
+
+	if strings.TrimSpace(options.SSH.ControlMaster) == "" {
+		options.SSH.ControlMaster = defaults.SSH.ControlMaster
+	}
+	if options.SSH.ControlPersist <= 0 {
+		options.SSH.ControlPersist = defaults.SSH.ControlPersist
+	}
+	if strings.TrimSpace(options.SSH.ControlPath) == "" {
+		options.SSH.ControlPath = defaults.SSH.ControlPath
+	}
+	if options.SSH.ConnectTimeout <= 0 {
+		options.SSH.ConnectTimeout = defaults.SSH.ConnectTimeout
+	}
+
+	if strings.TrimSpace(options.SortBy) == "" {
+		options.SortBy = defaults.Sessions.SortBy
+	}
+	if options.MaxDisplay <= 0 {
+		options.MaxDisplay = defaults.Sessions.MaxDisplay
+	}
+	if options.ActiveThreshold <= 0 {
+		options.ActiveThreshold = defaults.Display.ActiveThreshold
+	}
+	if options.IdleThreshold <= 0 {
+		options.IdleThreshold = defaults.Display.IdleThreshold
+	}
+	if options.IdleThreshold < options.ActiveThreshold {
+		options.IdleThreshold = options.ActiveThreshold
+	}
+
+	return options
+}
+
+func hostOverridesFromTUI(overrides map[string]tuiconfig.HostOverride) map[string]remote.HostOverride {
+	if len(overrides) == 0 {
+		return nil
+	}
+	converted := make(map[string]remote.HostOverride, len(overrides))
+	for alias, override := range overrides {
+		converted[alias] = remote.HostOverride{
+			Label:        override.Label,
+			Priority:     override.Priority,
+			OpencodePath: override.OpencodePath,
+			ScanPaths:    append([]string(nil), override.ScanPaths...),
+		}
+	}
+	return converted
+}
diff --git a/internal/api/remote_hosts_test.go b/internal/api/remote_hosts_test.go
new file mode 100644
index 0000000..e6e0833
--- /dev/null
+++ b/internal/api/remote_hosts_test.go
@@ -0,0 +1,320 @@
+package api
+
+import (
+	"context"
+	"errors"
+	"net/http"
+	"net/http/httptest"
+	"testing"
+	"time"
+
+	"opencoderouter/internal/model"
+)
+
+type fakeRemoteDiscoverer struct {
+	hosts []model.Host
+	err   error
+	path  string
+}
+
+func (f *fakeRemoteDiscoverer) Discover(_ context.Context) ([]model.Host, error) {
+	return cloneHosts(f.hosts), f.err
+}
+
+func (f *fakeRemoteDiscoverer) SetSSHConfigPath(path string) {
+	f.path = path
+}
+
+type fakeRemoteProber struct {
+	hosts []model.Host
+	err   error
+	calls int
+}
+
+func (f *fakeRemoteProber) ProbeHosts(_ context.Context, hosts []model.Host) ([]model.Host, error) {
+	f.calls++
+	if len(f.hosts) > 0 {
+		return cloneHosts(f.hosts), f.err
+	}
+	return cloneHosts(hosts), f.err
+}
+
+func TestRemoteHostsHandlerReturnsFreshScan(t *testing.T) {
+	discoverer := &fakeRemoteDiscoverer{
+		hosts: []model.Host{{
+			Name:      "dev-host",
+			Address:   "10.0.0.9",
+			User:      "alice",
+			Label:     "dev-host",
+			Status:    model.HostStatusUnknown,
+			Transport: model.TransportReady,
+			Projects: []model.Project{{
+				Name: "demo",
+				Sessions: []model.Session{{
+					ID:           "s-1",
+					Title:        "Build",
+					Directory:    "/repo",
+					Status:       model.SessionStatusActive,
+					Activity:     model.ActivityActive,
+					LastActivity: time.Now().Add(-time.Minute),
+				}},
+			}},
+		}},
+	}
+
+	prober := &fakeRemoteProber{
+		hosts: []model.Host{{
+			Name:      "dev-host",
+			Address:   "10.0.0.9",
+			User:      "alice",
+			Label:     "dev-host",
+			Status:    model.HostStatusOnline,
+			Transport: model.TransportReady,
+			Projects: []model.Project{{
+				Name: "demo",
+				Sessions: []model.Session{{
+					ID:           "s-1",
+					Title:        "Build",
+					Directory:    "/repo",
+					Status:       model.SessionStatusActive,
+					Activity:     model.ActivityActive,
+					LastActivity: time.Now().Add(-time.Minute),
+				}},
+			}},
+		}},
+	}
+
+	h := NewRemoteHostsHandler(RemoteHostsHandlerConfig{
+		CacheTTL:         time.Minute,
+		DiscoveryService: discoverer,
+		ProbeService:     prober,
+	})
+
+	mux := http.NewServeMux()
+	h.Register(mux)
+	srv := httptest.NewServer(mux)
+	defer srv.Close()
+
+	resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?refresh=true&sshConfigPath=%2Ftmp%2Fssh.conf", nil)
+	if resp.StatusCode != http.StatusOK {
+		defer resp.Body.Close()
+		t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK)
+	}
+	body := decodeResponseJSON[remoteHostsResponse](t, resp.Body)
+	_ = resp.Body.Close()
+
+	if body.Cached {
+		t.Fatal("expected uncached response")
+	}
+	if body.Stale {
+		t.Fatal("expected non-stale response")
+	}
+	if body.Partial {
+		t.Fatal("expected full response")
+	}
+	if len(body.Hosts) != 1 {
+		t.Fatalf("hosts len=%d want=1", len(body.Hosts))
+	}
+	if body.Hosts[0].Name != "dev-host" {
+		t.Fatalf("host name=%q want=dev-host", body.Hosts[0].Name)
+	}
+	if body.Hosts[0].Status != string(model.HostStatusOnline) {
+		t.Fatalf("host status=%q want=%q", body.Hosts[0].Status, model.HostStatusOnline)
+	}
+	if body.Hosts[0].SessionCount != 1 {
+		t.Fatalf("session count=%d want=1", body.Hosts[0].SessionCount)
+	}
+	if discoverer.path != "/tmp/ssh.conf" {
+		t.Fatalf("ssh config path=%q want=%q", discoverer.path, "/tmp/ssh.conf")
+	}
+	if prober.calls != 1 {
+		t.Fatalf("probe calls=%d want=1", prober.calls)
+	}
+}
+
+func TestRemoteHostsHandlerReturnsCachedWithinTTL(t *testing.T) {
+	discoverer := &fakeRemoteDiscoverer{hosts: []model.Host{{Name: "alpha", Address: "alpha.local", Label: "alpha", Status: model.HostStatusUnknown}}}
+	prober := &fakeRemoteProber{hosts: []model.Host{{Name: "alpha", Address: "alpha.local", Label: "alpha", Status: model.HostStatusOnline}}}
+
+	h := NewRemoteHostsHandler(RemoteHostsHandlerConfig{
+		CacheTTL:         time.Minute,
+		DiscoveryService: discoverer,
+		ProbeService:     prober,
+	})
+
+	mux := http.NewServeMux()
+	h.Register(mux)
+	srv := httptest.NewServer(mux)
+	defer srv.Close()
+
+	first := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts", nil)
+	if first.StatusCode != http.StatusOK {
+		defer first.Body.Close()
+		t.Fatalf("first status=%d want=%d", first.StatusCode, http.StatusOK)
+	}
+	_ = first.Body.Close()
+
+	if prober.calls != 1 {
+		t.Fatalf("probe calls after first=%d want=1", prober.calls)
+	}
+
+	second := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts", nil)
+	if second.StatusCode != http.StatusOK {
+		defer second.Body.Close()
+		t.Fatalf("second status=%d want=%d", second.StatusCode, http.StatusOK)
+	}
+	body := decodeResponseJSON[remoteHostsResponse](t, second.Body)
+	_ = second.Body.Close()
+
+	if !body.Cached {
+		t.Fatal("expected cached response")
+	}
+	if prober.calls != 1 {
+		t.Fatalf("probe calls after cached response=%d want=1", prober.calls)
+	}
+}
+
+func TestRemoteHostsHandlerCacheScopedBySSHConfigPath(t *testing.T) {
+	discoverer := &fakeRemoteDiscoverer{hosts: []model.Host{{Name: "alpha", Address: "alpha.local", Label: "alpha", Status: model.HostStatusUnknown}}}
+	prober := &fakeRemoteProber{hosts: []model.Host{{Name: "alpha", Address: "alpha.local", Label: "alpha", Status: model.HostStatusOnline}}}
+
+	h := NewRemoteHostsHandler(RemoteHostsHandlerConfig{
+		CacheTTL:         time.Minute,
+		DiscoveryService: discoverer,
+		ProbeService:     prober,
+	})
+
+	mux := http.NewServeMux()
+	h.Register(mux)
+	srv := httptest.NewServer(mux)
+	defer srv.Close()
+
+	first := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?sshConfigPath=%2Ftmp%2Fa.conf", nil)
+	if first.StatusCode != http.StatusOK {
+		defer first.Body.Close()
+		t.Fatalf("first status=%d want=%d", first.StatusCode, http.StatusOK)
+	}
+	_ = first.Body.Close()
+
+	second := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?sshConfigPath=%2Ftmp%2Fa.conf", nil)
+	if second.StatusCode != http.StatusOK {
+		defer second.Body.Close()
+		t.Fatalf("second status=%d want=%d", second.StatusCode, http.StatusOK)
+	}
+	body := decodeResponseJSON[remoteHostsResponse](t, second.Body)
+	_ = second.Body.Close()
+	if !body.Cached {
+		t.Fatal("expected same-config request to be served from cache")
+	}
+	if prober.calls != 1 {
+		t.Fatalf("probe calls after cached same-config response=%d want=1", prober.calls)
+	}
+
+	third := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?sshConfigPath=%2Ftmp%2Fb.conf", nil)
+	if third.StatusCode != http.StatusOK {
+		defer third.Body.Close()
+		t.Fatalf("third status=%d want=%d", third.StatusCode, http.StatusOK)
+	}
+	body = decodeResponseJSON[remoteHostsResponse](t, third.Body)
+	_ = third.Body.Close()
+	if body.Cached {
+		t.Fatal("expected different-config request to trigger fresh scan")
+	}
+	if prober.calls != 2 {
+		t.Fatalf("probe calls after different-config response=%d want=2", prober.calls)
+	}
+	if discoverer.path != "/tmp/b.conf" {
+		t.Fatalf("ssh config path=%q want=%q", discoverer.path, "/tmp/b.conf")
+	}
+}
+
+func TestRemoteHostsHandlerFallsBackToStaleCacheOnFailure(t *testing.T) {
+	discoverer := &fakeRemoteDiscoverer{hosts: []model.Host{{Name: "alpha", Address: "alpha.local", Label: "alpha", Status: model.HostStatusUnknown}}}
+	prober := &fakeRemoteProber{hosts: []model.Host{{Name: "alpha", Address: "alpha.local", Label: "alpha", Status: model.HostStatusOnline}}}
+
+	h := NewRemoteHostsHandler(RemoteHostsHandlerConfig{
+		CacheTTL:         time.Second,
+		DiscoveryService: discoverer,
+		ProbeService:     prober,
+	})
+
+	mux := http.NewServeMux()
+	h.Register(mux)
+	srv := httptest.NewServer(mux)
+	defer srv.Close()
+
+	seed := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?refresh=true", nil)
+	if seed.StatusCode != http.StatusOK {
+		defer seed.Body.Close()
+		t.Fatalf("seed status=%d want=%d", seed.StatusCode, http.StatusOK)
+	}
+	_ = seed.Body.Close()
+
+	discoverer.err = errors.New("lookup failed")
+	discoverer.hosts = nil
+	prober.err = errors.New("unreachable")
+	prober.hosts = nil
+
+	resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?refresh=true", nil)
+	if resp.StatusCode != http.StatusOK {
+		defer resp.Body.Close()
+		t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK)
+	}
+	body := decodeResponseJSON[remoteHostsResponse](t, resp.Body)
+	_ = resp.Body.Close()
+
+	if !body.Cached {
+		t.Fatal("expected cached fallback response")
+	}
+	if !body.Stale {
+		t.Fatal("expected stale fallback response")
+	}
+	if !body.Partial {
+		t.Fatal("expected partial fallback response")
+	}
+	if len(body.Hosts) != 1 || body.Hosts[0].Name != "alpha" {
+		t.Fatalf("unexpected fallback hosts: %#v", body.Hosts)
+	}
+	if len(body.Warnings) == 0 {
+		t.Fatal("expected warnings for failed refresh")
+	}
+}
+
+func TestRemoteHostsHandlerMethodAndValidationErrors(t *testing.T) {
+	h := NewRemoteHostsHandler(RemoteHostsHandlerConfig{
+		DiscoveryService: &fakeRemoteDiscoverer{},
+		ProbeService:     &fakeRemoteProber{},
+	})
+
+	mux := http.NewServeMux()
+	h.Register(mux)
+	srv := httptest.NewServer(mux)
+	defer srv.Close()
+
+	methodResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/remote/hosts", nil)
+	assertErrorShape(t, methodResp, http.StatusMethodNotAllowed, "METHOD_NOT_ALLOWED")
+
+	invalidQuery := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?refresh=not-bool", nil)
+	assertErrorShape(t, invalidQuery, http.StatusBadRequest, "INVALID_QUERY")
+}
+
+func TestRemoteHostsHandlerUnsupportedSSHConfigOverride(t *testing.T) {
+	h := NewRemoteHostsHandler(RemoteHostsHandlerConfig{
+		DiscoveryService: remoteDiscovererNoPathSetter{},
+		ProbeService:     &fakeRemoteProber{},
+	})
+
+	mux := http.NewServeMux()
+	h.Register(mux)
+	srv := httptest.NewServer(mux)
+	defer srv.Close()
+
+	resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/remote/hosts?sshConfigPath=%2Ftmp%2Fssh.conf", nil)
+	assertErrorShape(t, resp, http.StatusBadRequest, "SSH_CONFIG_OVERRIDE_UNSUPPORTED")
+}
+
+type remoteDiscovererNoPathSetter struct{}
+
+func (remoteDiscovererNoPathSetter) Discover(_ context.Context) ([]model.Host, error) {
+	return []model.Host{}, nil
+}
diff --git a/internal/api/router.go b/internal/api/router.go
new file mode 100644
index 0000000..c427970
--- /dev/null
+++ b/internal/api/router.go
@@ -0,0 +1,69 @@
+package api
+
+import (
+	"log/slog"
+	"net/http"
+	"time"
+
+	"opencoderouter/internal/auth"
+	"opencoderouter/internal/cache"
+	"opencoderouter/internal/remote"
+	"opencoderouter/internal/session"
+	"opencoderouter/internal/terminal"
+)
+
+type RouterConfig struct {
+	SessionManager        session.SessionManager
+	SessionEventBus       session.EventBus
+	BackendEventSubscribe BackendEventSubscribeFunc
+	AuthConfig            auth.Config
+	ScrollbackCache       cache.ScrollbackCache
+	RemoteDiscovery       remote.DiscoveryOptions
+	RemoteProbe           remote.ProbeOptions
+	RemoteCacheTTL        time.Duration
+	RemoteRunner          remote.Runner
+	RemoteLogger          *slog.Logger
+	Fallback              http.Handler
+}
+
+func NewRouter(cfg RouterConfig) http.Handler {
+	mux := http.NewServeMux()
+	NewSessionsHandler(SessionsHandlerConfig{SessionManager: cfg.SessionManager, ScrollbackCache: cfg.ScrollbackCache}).Register(mux)
+	NewEventsHandler(EventsHandlerConfig{
+		SessionEventBus:  cfg.SessionEventBus,
+		BackendSubscribe: cfg.BackendEventSubscribe,
+	}).Register(mux)
+	NewRemoteHostsHandler(RemoteHostsHandlerConfig{
+		DiscoveryOptions: cfg.RemoteDiscovery,
+		ProbeOptions:     cfg.RemoteProbe,
+		CacheTTL:         cfg.RemoteCacheTTL,
+		Runner:           cfg.RemoteRunner,
+		Logger:           cfg.RemoteLogger,
+	}).Register(mux)
+
+	// Wire up the terminal handler
+	terminal.NewHandler(terminal.HandlerConfig{
+		SessionManager:  cfg.SessionManager,
+		ScrollbackCache: cfg.ScrollbackCache,
+	}).Register(mux)
+
+	fallback := cfg.Fallback
+	if fallback == nil {
+		fallback = http.NotFoundHandler()
+	}
+	mux.Handle("/", fallback)
+
+	authCfg := cfg.AuthConfig
+	defaults := auth.Defaults()
+	if authCfg.BypassPaths == nil {
+		authCfg.BypassPaths = defaults.BypassPaths
+	}
+	if len(authCfg.CORSAllowedOrigins) == 0 {
+		authCfg.CORSAllowedOrigins = defaults.CORSAllowedOrigins
+	}
+	if authCfg.BasicAuth == nil {
+		authCfg.BasicAuth = map[string]string{}
+	}
+
+	return auth.Middleware(mux, authCfg)
+}
diff --git a/internal/api/router_test.go b/internal/api/router_test.go
new file mode 100644
index 0000000..cce4533
--- /dev/null
+++ b/internal/api/router_test.go
@@ -0,0 +1,171 @@
+package api
+
+import (
+	"context"
+	"net/http"
+	"net/http/httptest"
+	"testing"
+
+	"opencoderouter/internal/auth"
+	"opencoderouter/internal/cache"
+	"opencoderouter/internal/session"
+)
+
+func TestNewRouterMountsSessionRoutesAndFallback(t *testing.T) {
+	mgr := newFakeStatefulSessionManager()
+	workspace := t.TempDir()
+	_, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("seed session: %v", err)
+	}
+
+	fallback := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		if r.URL.Path == "/api/backends" {
+			w.WriteHeader(http.StatusAccepted)
+			return
+		}
+		w.WriteHeader(http.StatusTeapot)
+	})
+
+	h := NewRouter(RouterConfig{SessionManager: mgr, Fallback: fallback})
+	srv := httptest.NewServer(h)
+	defer srv.Close()
+
+	respSessions := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions", nil)
+	if respSessions.StatusCode != http.StatusOK {
+		defer respSessions.Body.Close()
+		t.Fatalf("sessions status=%d want=%d", respSessions.StatusCode, http.StatusOK)
+	}
+	_ = respSessions.Body.Close()
+
+	respFallback := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/backends", nil)
+	if respFallback.StatusCode != http.StatusAccepted {
+		defer respFallback.Body.Close()
+		t.Fatalf("fallback status=%d want=%d", respFallback.StatusCode, http.StatusAccepted)
+	}
+	_ = respFallback.Body.Close()
+}
+
+func TestNewRouterAppliesAuthMiddlewareToSessionEndpoints(t *testing.T) {
+	mgr := newFakeStatefulSessionManager()
+	workspace := t.TempDir()
+	_, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("seed session: %v", err)
+	}
+
+	authCfg := auth.Defaults()
+	authCfg.Enabled = true
+	authCfg.BearerTokens = []string{"secret-token"}
+
+	h := NewRouter(RouterConfig{
+		SessionManager:  mgr,
+		AuthConfig:      authCfg,
+		ScrollbackCache: newRouterTestScrollbackCache(),
+		Fallback:        http.NotFoundHandler(),
+	})
+
+	srv := httptest.NewServer(h)
+	defer srv.Close()
+
+	unauthorized := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions", nil)
+	if unauthorized.StatusCode != http.StatusUnauthorized {
+		defer unauthorized.Body.Close()
+		t.Fatalf("unauthorized status=%d want=%d", unauthorized.StatusCode, http.StatusUnauthorized)
+	}
+	_ = unauthorized.Body.Close()
+
+	req, err := http.NewRequest(http.MethodGet, srv.URL+"/api/sessions", nil)
+	if err != nil {
+		t.Fatalf("new request: %v", err)
+	}
+	req.Header.Set("Authorization", "Bearer secret-token")
+	authorized, err := srv.Client().Do(req)
+	if err != nil {
+		t.Fatalf("authorized request failed: %v", err)
+	}
+	if authorized.StatusCode != http.StatusOK {
+		defer authorized.Body.Close()
+		t.Fatalf("authorized status=%d want=%d", authorized.StatusCode, http.StatusOK)
+	}
+	_ = authorized.Body.Close()
+}
+
+func TestNewRouterKeepsAuthBypassPaths(t *testing.T) {
+	authCfg := auth.Defaults()
+	authCfg.Enabled = true
+	authCfg.BearerTokens = []string{"secret-token"}
+
+	h := NewRouter(RouterConfig{
+		AuthConfig:      authCfg,
+		ScrollbackCache: newRouterTestScrollbackCache(),
+		Fallback: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			if r.URL.Path == "/api/health" {
+				w.WriteHeader(http.StatusOK)
+				return
+			}
+			w.WriteHeader(http.StatusUnauthorized)
+		}),
+	})
+
+	srv := httptest.NewServer(h)
+	defer srv.Close()
+
+	resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/health", nil)
+	if resp.StatusCode != http.StatusOK {
+		defer resp.Body.Close()
+		t.Fatalf("health status=%d want=%d", resp.StatusCode, http.StatusOK)
+	}
+	_ = resp.Body.Close()
+}
+
+func TestNewRouterMountsEventsRoute(t *testing.T) {
+	eventBus := session.NewEventBus(8)
+	h := NewRouter(RouterConfig{
+		SessionEventBus: eventBus,
+		ScrollbackCache: newRouterTestScrollbackCache(),
+		Fallback:        http.NotFoundHandler(),
+	})
+
+	srv :=
httptest.NewServer(h) + defer srv.Close() + + resp, err := srv.Client().Get(srv.URL + "/api/events") + if err != nil { + t.Fatalf("events request failed: %v", err) + } + if resp.StatusCode != http.StatusOK { + defer resp.Body.Close() + t.Fatalf("events status=%d want=%d", resp.StatusCode, http.StatusOK) + } + if got := resp.Header.Get("Content-Type"); got != "text/event-stream" { + defer resp.Body.Close() + t.Fatalf("events content-type=%q want=%q", got, "text/event-stream") + } + _ = resp.Body.Close() +} + +func TestNewRouterMountsRemoteHostsRoute(t *testing.T) { + h := NewRouter(RouterConfig{ + ScrollbackCache: newRouterTestScrollbackCache(), + Fallback: http.NotFoundHandler(), + }) + + srv := httptest.NewServer(h) + defer srv.Close() + + resp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/remote/hosts", nil) + assertErrorShape(t, resp, http.StatusMethodNotAllowed, "METHOD_NOT_ALLOWED") +} + +type routerTestScrollbackCache struct{} + +func newRouterTestScrollbackCache() *routerTestScrollbackCache { return &routerTestScrollbackCache{} } + +func (c *routerTestScrollbackCache) Append(sessionID string, entry cache.Entry) error { return nil } +func (c *routerTestScrollbackCache) Get(sessionID string, offset, limit int) ([]cache.Entry, error) { + return []cache.Entry{}, nil +} +func (c *routerTestScrollbackCache) Trim(sessionID string, maxEntries int) error { return nil } +func (c *routerTestScrollbackCache) Clear(sessionID string) error { return nil } +func (c *routerTestScrollbackCache) Close() error { return nil } diff --git a/internal/api/scrollback.go b/internal/api/scrollback.go new file mode 100644 index 0000000..c660128 --- /dev/null +++ b/internal/api/scrollback.go @@ -0,0 +1,105 @@ +package api + +import ( + "errors" + "net/http" + "strconv" + "strings" + + "opencoderouter/internal/cache" +) + +const defaultScrollbackLimit = 1000 + +type ScrollbackHandler struct { + cache cache.ScrollbackCache +} + +type scrollbackQuery struct { + offset int 
+ limit int + typeV cache.EntryType +} + +func NewScrollbackHandler(scrollbackCache cache.ScrollbackCache) *ScrollbackHandler { + return &ScrollbackHandler{cache: scrollbackCache} +} + +func (h *ScrollbackHandler) HandleGet(w http.ResponseWriter, r *http.Request, sessionID string) { + if h == nil || h.cache == nil { + writeAPIError(w, http.StatusServiceUnavailable, "scrollback cache unavailable", "SCROLLBACK_UNAVAILABLE") + return + } + + query, err := parseScrollbackQuery(r) + if err != nil { + writeAPIError(w, http.StatusBadRequest, err.Error(), "INVALID_SCROLLBACK_QUERY") + return + } + + entries, err := h.getFiltered(sessionID, query) + if err != nil { + writeAPIError(w, http.StatusInternalServerError, "failed to read scrollback", "SCROLLBACK_READ_FAILED") + return + } + + writeJSON(w, http.StatusOK, entries) +} + +func (h *ScrollbackHandler) getFiltered(sessionID string, query scrollbackQuery) ([]cache.Entry, error) { + if query.typeV == "" { + return h.cache.Get(sessionID, query.offset, query.limit) + } + + all, err := h.cache.Get(sessionID, 0, 0) + if err != nil { + return nil, err + } + + filtered := make([]cache.Entry, 0, len(all)) + for _, entry := range all { + if entry.Type == query.typeV { + filtered = append(filtered, entry) + } + } + + if query.offset >= len(filtered) { + return []cache.Entry{}, nil + } + + end := len(filtered) + if query.offset+query.limit < end { + end = query.offset + query.limit + } + + result := make([]cache.Entry, end-query.offset) + copy(result, filtered[query.offset:end]) + return result, nil +} + +func parseScrollbackQuery(r *http.Request) (scrollbackQuery, error) { + q := r.URL.Query() + result := scrollbackQuery{limit: defaultScrollbackLimit} + + if raw := strings.TrimSpace(q.Get("limit")); raw != "" { + limit, err := strconv.Atoi(raw) + if err != nil || limit <= 0 { + return scrollbackQuery{}, errors.New("limit must be a positive integer") + } + result.limit = limit + } + + if raw := strings.TrimSpace(q.Get("offset")); 
raw != "" { + offset, err := strconv.Atoi(raw) + if err != nil || offset < 0 { + return scrollbackQuery{}, errors.New("offset must be a non-negative integer") + } + result.offset = offset + } + + if raw := strings.TrimSpace(q.Get("type")); raw != "" { + result.typeV = cache.EntryType(raw) + } + + return result, nil +} diff --git a/internal/api/scrollback_test.go b/internal/api/scrollback_test.go new file mode 100644 index 0000000..1bd8727 --- /dev/null +++ b/internal/api/scrollback_test.go @@ -0,0 +1,169 @@ +package api + +import ( + "context" + "encoding/json" + "net/http" + "net/http/httptest" + "testing" + "time" + + "opencoderouter/internal/cache" + "opencoderouter/internal/session" +) + +func TestScrollbackEndpointReturnsEntriesWithDefaultLimit(t *testing.T) { + mgr := newFakeStatefulSessionManager() + sc := newTestScrollbackCache() + workspace := t.TempDir() + created, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace}) + if err != nil { + t.Fatalf("seed session: %v", err) + } + + for i := 0; i < 3; i++ { + err := sc.Append(created.ID, cache.Entry{Timestamp: time.Now().UTC(), Type: cache.EntryTypeTerminalOutput, Content: []byte{byte('a' + i)}}) + if err != nil { + t.Fatalf("append: %v", err) + } + } + + srv := newScrollbackTestServer(t, mgr, sc) + defer srv.Close() + + resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+created.ID+"/scrollback", nil) + if resp.StatusCode != http.StatusOK { + defer resp.Body.Close() + t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK) + } + var entries []cache.Entry + if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil { + _ = resp.Body.Close() + t.Fatalf("decode: %v", err) + } + _ = resp.Body.Close() + if len(entries) != 3 { + t.Fatalf("entries=%d want=3", len(entries)) + } +} + +func TestScrollbackEndpointSupportsLimitOffsetAndTypeFilter(t *testing.T) { + mgr := newFakeStatefulSessionManager() + sc := newTestScrollbackCache() + workspace 
:= t.TempDir() + created, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace}) + if err != nil { + t.Fatalf("seed session: %v", err) + } + + seed := []cache.Entry{ + {Timestamp: time.Now().UTC(), Type: cache.EntryTypeTerminalOutput, Content: []byte("o1")}, + {Timestamp: time.Now().UTC(), Type: cache.EntryTypeSystemEvent, Content: []byte("s1")}, + {Timestamp: time.Now().UTC(), Type: cache.EntryTypeTerminalOutput, Content: []byte("o2")}, + {Timestamp: time.Now().UTC(), Type: cache.EntryTypeTerminalOutput, Content: []byte("o3")}, + } + for _, entry := range seed { + if err := sc.Append(created.ID, entry); err != nil { + t.Fatalf("append: %v", err) + } + } + + srv := newScrollbackTestServer(t, mgr, sc) + defer srv.Close() + + resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+created.ID+"/scrollback?type=terminal_output&offset=1&limit=1", nil) + if resp.StatusCode != http.StatusOK { + defer resp.Body.Close() + t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusOK) + } + var entries []cache.Entry + if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil { + _ = resp.Body.Close() + t.Fatalf("decode: %v", err) + } + _ = resp.Body.Close() + if len(entries) != 1 { + t.Fatalf("entries=%d want=1", len(entries)) + } + if entries[0].Type != cache.EntryTypeTerminalOutput || string(entries[0].Content) != "o2" { + t.Fatalf("unexpected filtered entry: %+v", entries[0]) + } +} + +func TestScrollbackEndpointRejectsInvalidQuery(t *testing.T) { + mgr := newFakeStatefulSessionManager() + sc := newTestScrollbackCache() + workspace := t.TempDir() + created, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace}) + if err != nil { + t.Fatalf("seed session: %v", err) + } + + srv := newScrollbackTestServer(t, mgr, sc) + defer srv.Close() + + resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+created.ID+"/scrollback?limit=abc", nil) + assertErrorShape(t, 
resp, http.StatusBadRequest, "INVALID_SCROLLBACK_QUERY") + + resp2 := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+created.ID+"/scrollback?offset=-1", nil) + assertErrorShape(t, resp2, http.StatusBadRequest, "INVALID_SCROLLBACK_QUERY") +} + +func newScrollbackTestServer(t *testing.T, mgr session.SessionManager, sc cache.ScrollbackCache) *httptest.Server { + t.Helper() + mux := http.NewServeMux() + NewSessionsHandler(SessionsHandlerConfig{SessionManager: mgr, ScrollbackCache: sc}).Register(mux) + return httptest.NewServer(mux) +} + +type testScrollbackCache struct { + bySession map[string][]cache.Entry +} + +func newTestScrollbackCache() *testScrollbackCache { + return &testScrollbackCache{bySession: map[string][]cache.Entry{}} +} + +func (c *testScrollbackCache) Append(sessionID string, entry cache.Entry) error { + c.bySession[sessionID] = append(c.bySession[sessionID], entry) + return nil +} + +func (c *testScrollbackCache) Get(sessionID string, offset, limit int) ([]cache.Entry, error) { + entries := c.bySession[sessionID] + if offset < 0 { + offset = 0 + } + if offset >= len(entries) { + return []cache.Entry{}, nil + } + end := len(entries) + if limit > 0 && offset+limit < end { + end = offset + limit + } + out := make([]cache.Entry, end-offset) + copy(out, entries[offset:end]) + return out, nil +} + +func (c *testScrollbackCache) Trim(sessionID string, maxEntries int) error { + entries := c.bySession[sessionID] + if maxEntries <= 0 { + c.bySession[sessionID] = []cache.Entry{} + return nil + } + if len(entries) <= maxEntries { + return nil + } + c.bySession[sessionID] = append([]cache.Entry(nil), entries[len(entries)-maxEntries:]...) 
+ return nil +} + +func (c *testScrollbackCache) Clear(sessionID string) error { + delete(c.bySession, sessionID) + return nil +} + +func (c *testScrollbackCache) Close() error { + return nil +} diff --git a/internal/api/sessions.go b/internal/api/sessions.go new file mode 100644 index 0000000..6910220 --- /dev/null +++ b/internal/api/sessions.go @@ -0,0 +1,545 @@ +package api + +import ( + "context" + "encoding/json" + "errors" + "fmt" + "io" + "log/slog" + "net/http" + "sort" + "strings" + "sync" + + "opencoderouter/internal/cache" + "opencoderouter/internal/daemon" + errorx "opencoderouter/internal/errors" + "opencoderouter/internal/session" +) + +type SessionsHandlerConfig struct { + SessionManager session.SessionManager + ScrollbackCache cache.ScrollbackCache + Logger *slog.Logger +} + +type SessionsHandler struct { + sessions session.SessionManager + scrollback *ScrollbackHandler + logger *slog.Logger + + mu sync.Mutex + attachments map[string][]session.TerminalConn +} + +type createSessionRequest struct { + WorkspacePath string `json:"workspacePath"` + Label string `json:"label"` + Labels map[string]string `json:"labels"` +} + +type sessionView struct { + ID string `json:"id"` + DaemonPort int `json:"daemonPort"` + WorkspacePath string `json:"workspacePath"` + Status session.SessionStatus `json:"status"` + CreatedAt string `json:"createdAt"` + LastActivity string `json:"lastActivity"` + AttachedClients int `json:"attachedClients"` + Labels map[string]string `json:"labels,omitempty"` + Health session.HealthStatus `json:"health"` +} + +type errorResponse struct { + Error string `json:"error"` + Code string `json:"code"` +} + +func NewSessionsHandler(cfg SessionsHandlerConfig) *SessionsHandler { + logger := cfg.Logger + if logger == nil { + logger = slog.Default() + } + + return &SessionsHandler{ + sessions: cfg.SessionManager, + scrollback: NewScrollbackHandler(cfg.ScrollbackCache), + logger: logger, + attachments: make(map[string][]session.TerminalConn), + } 
+} + +func (h *SessionsHandler) Register(mux *http.ServeMux) { + if h == nil || mux == nil { + return + } + mux.HandleFunc("/api/sessions", h.handleCollection) + mux.HandleFunc("/api/sessions/", h.handleByID) +} + +func (h *SessionsHandler) handleCollection(w http.ResponseWriter, r *http.Request) { + if h.sessions == nil { + writeAPIError(w, http.StatusServiceUnavailable, "session manager unavailable", "SESSION_MANAGER_UNAVAILABLE") + return + } + + switch r.Method { + case http.MethodPost: + h.handleCreate(w, r) + case http.MethodGet: + h.handleList(w, r) + default: + writeAPIError(w, http.StatusMethodNotAllowed, "method not allowed", "METHOD_NOT_ALLOWED") + } +} + +func (h *SessionsHandler) handleByID(w http.ResponseWriter, r *http.Request) { + if h.sessions == nil { + writeAPIError(w, http.StatusServiceUnavailable, "session manager unavailable", "SESSION_MANAGER_UNAVAILABLE") + return + } + + id, action, ok := parseSessionPath(r.URL.Path) + if !ok { + writeAPIError(w, http.StatusNotFound, "route not found", "NOT_FOUND") + return + } + + if action == "" { + switch r.Method { + case http.MethodGet: + h.handleGet(w, r, id) + case http.MethodDelete: + h.handleDelete(w, r, id) + default: + writeAPIError(w, http.StatusMethodNotAllowed, "method not allowed", "METHOD_NOT_ALLOWED") + } + return + } + + if action == "chat" && r.Method == http.MethodGet { + h.handleChatHistory(w, r, id) + return + } + + if action == "scrollback" && r.Method == http.MethodGet { + h.handleScrollback(w, r, id) + return + } + + if r.Method != http.MethodPost { + writeAPIError(w, http.StatusMethodNotAllowed, "method not allowed", "METHOD_NOT_ALLOWED") + return + } + + switch action { + case "stop": + h.handleStop(w, r, id) + case "start": + h.handleStart(w, r, id) + case "restart": + h.handleRestart(w, r, id) + case "attach": + h.handleAttach(w, r, id) + case "detach": + h.handleDetach(w, r, id) + case "chat": + h.handleChat(w, r, id) + default: + writeAPIError(w, http.StatusNotFound, "route 
not found", "NOT_FOUND") + } +} + +func (h *SessionsHandler) handleScrollback(w http.ResponseWriter, r *http.Request, id string) { + if _, err := h.sessions.Get(id); err != nil { + h.writeSessionManagerError(w, err) + return + } + h.scrollback.HandleGet(w, r, id) +} + +func (h *SessionsHandler) handleCreate(w http.ResponseWriter, r *http.Request) { + var req createSessionRequest + if err := decodeJSONBody(r, &req); err != nil { + writeAPIError(w, http.StatusBadRequest, err.Error(), "INVALID_REQUEST_BODY") + return + } + + opts := session.CreateOpts{ + WorkspacePath: req.WorkspacePath, + } + if len(req.Labels) > 0 || strings.TrimSpace(req.Label) != "" { + labels := make(map[string]string, len(req.Labels)+1) + for k, v := range req.Labels { + labels[k] = v + } + if label := strings.TrimSpace(req.Label); label != "" { + if _, exists := labels["label"]; !exists { + labels["label"] = label + } + } + opts.Labels = labels + } + + handle, err := h.sessions.Create(r.Context(), opts) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + view, err := h.buildSessionView(r.Context(), handle.ID) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + writeJSON(w, http.StatusCreated, view) +} + +func (h *SessionsHandler) handleList(w http.ResponseWriter, r *http.Request) { + filter := session.SessionListFilter{} + + if rawStatus := strings.TrimSpace(r.URL.Query().Get("status")); rawStatus != "" { + status := session.SessionStatus(rawStatus) + if !isValidSessionStatus(status) { + writeAPIError(w, http.StatusBadRequest, "invalid status filter", "INVALID_STATUS_FILTER") + return + } + filter.Status = status + } + + handles, err := h.sessions.List(filter) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + switch strings.TrimSpace(r.URL.Query().Get("sort")) { + case "", "createdAt": + case "lastActivity": + sort.Slice(handles, func(i, j int) bool { + if handles[i].LastActivity.Equal(handles[j].LastActivity) { + return 
handles[i].ID < handles[j].ID + } + return handles[i].LastActivity.After(handles[j].LastActivity) + }) + default: + writeAPIError(w, http.StatusBadRequest, "invalid sort option", "INVALID_SORT") + return + } + + views := make([]sessionView, 0, len(handles)) + for _, handle := range handles { + health, err := h.sessions.Health(r.Context(), handle.ID) + if err != nil { + h.logger.Debug("session health lookup failed for list", "session_id", handle.ID, "error", err) + health = session.HealthStatus{State: session.HealthStateUnknown} + } + views = append(views, toSessionView(handle, health)) + } + + writeJSON(w, http.StatusOK, views) +} + +func (h *SessionsHandler) handleGet(w http.ResponseWriter, r *http.Request, id string) { + view, err := h.buildSessionView(r.Context(), id) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + writeJSON(w, http.StatusOK, view) +} + +func (h *SessionsHandler) handleStop(w http.ResponseWriter, r *http.Request, id string) { + if err := h.sessions.Stop(r.Context(), id); err != nil { + h.writeSessionManagerError(w, err) + return + } + + view, err := h.buildSessionView(r.Context(), id) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + writeJSON(w, http.StatusOK, view) +} + +func (h *SessionsHandler) handleRestart(w http.ResponseWriter, r *http.Request, id string) { + handle, err := h.sessions.Restart(r.Context(), id) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + view := toSessionView(*handle, session.HealthStatus{State: session.HealthStateUnknown}) + if health, healthErr := h.sessions.Health(r.Context(), id); healthErr == nil { + view.Health = health + } + + writeJSON(w, http.StatusOK, view) +} + +func (h *SessionsHandler) handleStart(w http.ResponseWriter, r *http.Request, id string) { + h.handleRestart(w, r, id) +} + +func (h *SessionsHandler) handleDelete(w http.ResponseWriter, r *http.Request, id string) { + if err := h.sessions.Delete(r.Context(), id); err != nil { + 
h.writeSessionManagerError(w, err) + return + } + + h.clearAttachments(id) + w.WriteHeader(http.StatusNoContent) +} + +func (h *SessionsHandler) handleAttach(w http.ResponseWriter, r *http.Request, id string) { + conn, err := h.sessions.AttachTerminal(r.Context(), id) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + if conn != nil { + h.storeAttachment(id, conn) + } + + view, err := h.buildSessionView(r.Context(), id) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + writeJSON(w, http.StatusOK, view) +} + +func (h *SessionsHandler) handleDetach(w http.ResponseWriter, r *http.Request, id string) { + if _, err := h.sessions.Get(id); err != nil { + h.writeSessionManagerError(w, err) + return + } + + if conn, ok := h.popAttachment(id); ok && conn != nil { + _ = conn.Close() + } + + view, err := h.buildSessionView(r.Context(), id) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + writeJSON(w, http.StatusOK, view) +} + +func (h *SessionsHandler) buildSessionView(ctx context.Context, id string) (sessionView, error) { + handle, err := h.sessions.Get(id) + if err != nil { + return sessionView{}, err + } + + health, err := h.sessions.Health(ctx, id) + if err != nil { + return sessionView{}, err + } + + return toSessionView(*handle, health), nil +} + +func (h *SessionsHandler) storeAttachment(id string, conn session.TerminalConn) { + h.mu.Lock() + h.attachments[id] = append(h.attachments[id], conn) + h.mu.Unlock() +} + +func (h *SessionsHandler) popAttachment(id string) (session.TerminalConn, bool) { + h.mu.Lock() + defer h.mu.Unlock() + + conns := h.attachments[id] + if len(conns) == 0 { + return nil, false + } + + idx := len(conns) - 1 + conn := conns[idx] + if idx == 0 { + delete(h.attachments, id) + } else { + h.attachments[id] = conns[:idx] + } + + return conn, true +} + +func (h *SessionsHandler) clearAttachments(id string) { + h.mu.Lock() + conns := h.attachments[id] + delete(h.attachments, id) + 
h.mu.Unlock() + + for _, conn := range conns { + if conn != nil { + _ = conn.Close() + } + } +} + +func (h *SessionsHandler) writeSessionManagerError(w http.ResponseWriter, err error) { + switch errorx.Code(err) { + case "WORKSPACE_PATH_REQUIRED", "WORKSPACE_PATH_INVALID": + writeAPIError(w, http.StatusBadRequest, errorx.Message(err), errorx.Code(err)) + case "SESSION_ALREADY_EXISTS", "SESSION_STOPPED": + writeAPIError(w, http.StatusConflict, errorx.Message(err), errorx.Code(err)) + case "SESSION_NOT_FOUND", "NO_AVAILABLE_SESSION_PORTS", "TERMINAL_ATTACH_UNAVAILABLE", "DAEMON_UNHEALTHY": + writeAPIError(w, errorx.HTTPStatus(err), errorx.Message(err), errorx.Code(err)) + case "REQUEST_CANCELED", "REQUEST_TIMEOUT": + writeAPIError(w, errorx.HTTPStatus(err), errorx.Message(err), errorx.Code(err)) + default: + h.logger.Error("session handler error", "error", err) + writeAPIError(w, http.StatusInternalServerError, errorx.Message(err), errorx.Code(err)) + } +} + +func parseSessionPath(path string) (id string, action string, ok bool) { + tail := strings.TrimPrefix(path, "/api/sessions/") + tail = strings.TrimSpace(tail) + tail = strings.Trim(tail, "/") + if tail == "" { + return "", "", false + } + + parts := strings.Split(tail, "/") + if len(parts) == 1 { + if parts[0] == "" { + return "", "", false + } + return parts[0], "", true + } + if len(parts) == 2 { + if parts[0] == "" || parts[1] == "" { + return "", "", false + } + return parts[0], parts[1], true + } + + return "", "", false +} + +func toSessionView(handle session.SessionHandle, health session.HealthStatus) sessionView { + return sessionView{ + ID: handle.ID, + DaemonPort: handle.DaemonPort, + WorkspacePath: handle.WorkspacePath, + Status: handle.Status, + CreatedAt: handle.CreatedAt.UTC().Format(timeLayoutRFC3339Nano), + LastActivity: handle.LastActivity.UTC().Format(timeLayoutRFC3339Nano), + AttachedClients: handle.AttachedClients, + Labels: handle.Labels, + Health: health, + } +} + +const 
timeLayoutRFC3339Nano = "2006-01-02T15:04:05.999999999Z07:00" + +func decodeJSONBody(r *http.Request, dst any) error { + dec := json.NewDecoder(r.Body) + dec.DisallowUnknownFields() + if err := dec.Decode(dst); err != nil { + return err + } + if err := dec.Decode(&struct{}{}); !errors.Is(err, io.EOF) { + if err == nil { + return errors.New("request body must contain a single JSON object") + } + return err + } + return nil +} + +func isValidSessionStatus(status session.SessionStatus) bool { + switch status { + case session.SessionStatusUnknown, session.SessionStatusActive, session.SessionStatusIdle, session.SessionStatusStopped, session.SessionStatusError: + return true + default: + return false + } +} + +func writeJSON(w http.ResponseWriter, status int, payload any) { + w.Header().Set("Content-Type", "application/json") + w.WriteHeader(status) + _ = json.NewEncoder(w).Encode(payload) +} + +func writeAPIError(w http.ResponseWriter, status int, message string, code string) { + writeJSON(w, status, errorResponse{Error: message, Code: code}) +} + +func (h *SessionsHandler) handleChat(w http.ResponseWriter, r *http.Request, id string) { + var req struct { + Prompt string `json:"prompt"` + } + if err := decodeJSONBody(r, &req); err != nil { + writeAPIError(w, http.StatusBadRequest, err.Error(), "INVALID_REQUEST_BODY") + return + } + + handle, err := h.sessions.Get(id) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + client, err := daemon.NewDaemonClient(fmt.Sprintf("http://127.0.0.1:%d", handle.DaemonPort), daemon.ClientConfig{}) + if err != nil { + writeAPIError(w, http.StatusInternalServerError, err.Error(), "DAEMON_CLIENT_ERROR") + return + } + + ch, err := client.SendMessage(r.Context(), id, req.Prompt) + if err != nil { + writeAPIError(w, http.StatusInternalServerError, err.Error(), "SEND_MESSAGE_ERROR") + return + } + + w.Header().Set("Content-Type", "text/event-stream") + w.Header().Set("Cache-Control", "no-cache") + 
w.Header().Set("Connection", "keep-alive") + + flusher, ok := w.(http.Flusher) + if !ok { + writeAPIError(w, http.StatusInternalServerError, "streaming unsupported", "STREAMING_UNSUPPORTED") + return + } + + for chunk := range ch { + data, _ := json.Marshal(chunk) + fmt.Fprintf(w, "data: %s\n\n", data) + flusher.Flush() + } +} + +func (h *SessionsHandler) handleChatHistory(w http.ResponseWriter, r *http.Request, id string) { + handle, err := h.sessions.Get(id) + if err != nil { + h.writeSessionManagerError(w, err) + return + } + + client, err := daemon.NewDaemonClient(fmt.Sprintf("http://127.0.0.1:%d", handle.DaemonPort), daemon.ClientConfig{}) + if err != nil { + writeAPIError(w, http.StatusInternalServerError, err.Error(), "DAEMON_CLIENT_ERROR") + return + } + + msgs, err := client.GetMessages(r.Context(), id) + if err != nil { + writeAPIError(w, http.StatusInternalServerError, err.Error(), "GET_MESSAGES_ERROR") + return + } + + writeJSON(w, http.StatusOK, msgs) +} diff --git a/internal/api/sessions_test.go b/internal/api/sessions_test.go new file mode 100644 index 0000000..fec0a8c --- /dev/null +++ b/internal/api/sessions_test.go @@ -0,0 +1,577 @@ +package api + +import ( + "bytes" + "context" + "encoding/json" + "errors" + "io" + "net/http" + "net/http/httptest" + "os" + "sort" + "strings" + "sync" + "testing" + "time" + + "opencoderouter/internal/session" +) + +type fakeTerminalConn struct { + mu sync.Mutex + onClose func() + closed bool +} + +func (c *fakeTerminalConn) Read(_ []byte) (int, error) { return 0, io.EOF } + +func (c *fakeTerminalConn) Write(p []byte) (int, error) { return len(p), nil } + +func (c *fakeTerminalConn) Close() error { + c.mu.Lock() + if c.closed { + c.mu.Unlock() + return nil + } + c.closed = true + onClose := c.onClose + c.mu.Unlock() + if onClose != nil { + onClose() + } + return nil +} + +func (c *fakeTerminalConn) Resize(_, _ int) error { return nil } + +type fakeStatefulSessionManager struct { + mu sync.Mutex + sessions 
map[string]session.SessionHandle + health map[string]session.HealthStatus + nextID int + createErr error + listErr error + getErr error + stopErr error + restartErr error + deleteErr error + attachErr error + healthErr error +} + +func newFakeStatefulSessionManager() *fakeStatefulSessionManager { + return &fakeStatefulSessionManager{ + sessions: make(map[string]session.SessionHandle), + health: make(map[string]session.HealthStatus), + } +} + +func (m *fakeStatefulSessionManager) Create(_ context.Context, opts session.CreateOpts) (*session.SessionHandle, error) { + m.mu.Lock() + defer m.mu.Unlock() + + if m.createErr != nil { + return nil, m.createErr + } + if strings.TrimSpace(opts.WorkspacePath) == "" { + return nil, session.ErrWorkspacePathRequired + } + + m.nextID++ + id := "session-" + time.Now().UTC().Format("150405") + "-" + string(rune('a'+m.nextID)) + now := time.Now().UTC() + handle := session.SessionHandle{ + ID: id, + DaemonPort: 32000 + m.nextID, + WorkspacePath: opts.WorkspacePath, + Status: session.SessionStatusActive, + CreatedAt: now, + LastActivity: now, + Labels: cloneLabels(opts.Labels), + } + m.sessions[id] = handle + m.health[id] = session.HealthStatus{State: session.HealthStateHealthy, LastCheck: now} + clone := handle + clone.Labels = cloneLabels(handle.Labels) + return &clone, nil +} + +func (m *fakeStatefulSessionManager) Get(id string) (*session.SessionHandle, error) { + m.mu.Lock() + defer m.mu.Unlock() + + if m.getErr != nil { + return nil, m.getErr + } + handle, ok := m.sessions[id] + if !ok { + return nil, session.ErrSessionNotFound + } + clone := handle + clone.Labels = cloneLabels(handle.Labels) + return &clone, nil +} + +func (m *fakeStatefulSessionManager) List(filter session.SessionListFilter) ([]session.SessionHandle, error) { + m.mu.Lock() + defer m.mu.Unlock() + + if m.listErr != nil { + return nil, m.listErr + } + + out := make([]session.SessionHandle, 0, len(m.sessions)) + for _, handle := range m.sessions { + if 
filter.Status != "" && handle.Status != filter.Status { + continue + } + clone := handle + clone.Labels = cloneLabels(handle.Labels) + out = append(out, clone) + } + sort.Slice(out, func(i, j int) bool { + if out[i].CreatedAt.Equal(out[j].CreatedAt) { + return out[i].ID < out[j].ID + } + return out[i].CreatedAt.Before(out[j].CreatedAt) + }) + return out, nil +} + +func (m *fakeStatefulSessionManager) Stop(_ context.Context, id string) error { + m.mu.Lock() + defer m.mu.Unlock() + + if m.stopErr != nil { + return m.stopErr + } + handle, ok := m.sessions[id] + if !ok { + return session.ErrSessionNotFound + } + handle.Status = session.SessionStatusStopped + handle.LastActivity = time.Now().UTC() + m.sessions[id] = handle + health := m.health[id] + health.State = session.HealthStateUnknown + health.LastCheck = time.Now().UTC() + m.health[id] = health + return nil +} + +func (m *fakeStatefulSessionManager) Restart(_ context.Context, id string) (*session.SessionHandle, error) { + m.mu.Lock() + defer m.mu.Unlock() + + if m.restartErr != nil { + return nil, m.restartErr + } + handle, ok := m.sessions[id] + if !ok { + return nil, session.ErrSessionNotFound + } + handle.Status = session.SessionStatusActive + handle.LastActivity = time.Now().UTC() + m.sessions[id] = handle + health := m.health[id] + health.State = session.HealthStateHealthy + health.LastCheck = time.Now().UTC() + health.Error = "" + m.health[id] = health + clone := handle + clone.Labels = cloneLabels(handle.Labels) + return &clone, nil +} + +func (m *fakeStatefulSessionManager) Delete(_ context.Context, id string) error { + m.mu.Lock() + defer m.mu.Unlock() + + if m.deleteErr != nil { + return m.deleteErr + } + if _, ok := m.sessions[id]; !ok { + return session.ErrSessionNotFound + } + delete(m.sessions, id) + delete(m.health, id) + return nil +} + +func (m *fakeStatefulSessionManager) AttachTerminal(_ context.Context, id string) (session.TerminalConn, error) { + m.mu.Lock() + if m.attachErr != nil { + 
m.mu.Unlock() + return nil, m.attachErr + } + handle, ok := m.sessions[id] + if !ok { + m.mu.Unlock() + return nil, session.ErrSessionNotFound + } + handle.AttachedClients++ + handle.LastActivity = time.Now().UTC() + m.sessions[id] = handle + m.mu.Unlock() + + return &fakeTerminalConn{onClose: func() { + m.mu.Lock() + defer m.mu.Unlock() + handle, ok := m.sessions[id] + if !ok { + return + } + if handle.AttachedClients > 0 { + handle.AttachedClients-- + } + handle.LastActivity = time.Now().UTC() + m.sessions[id] = handle + }}, nil +} + +func (m *fakeStatefulSessionManager) Health(_ context.Context, id string) (session.HealthStatus, error) { + m.mu.Lock() + defer m.mu.Unlock() + + if m.healthErr != nil { + return session.HealthStatus{}, m.healthErr + } + health, ok := m.health[id] + if !ok { + return session.HealthStatus{}, session.ErrSessionNotFound + } + return health, nil +} + +func cloneLabels(labels map[string]string) map[string]string { + if len(labels) == 0 { + return nil + } + cloned := make(map[string]string, len(labels)) + for k, v := range labels { + cloned[k] = v + } + return cloned +} + +func newSessionsTestServer(t *testing.T, mgr session.SessionManager) *httptest.Server { + t.Helper() + mux := http.NewServeMux() + NewSessionsHandler(SessionsHandlerConfig{SessionManager: mgr}).Register(mux) + return httptest.NewServer(mux) +} + +func doJSONRequest(t *testing.T, client *http.Client, method, url string, body any) *http.Response { + t.Helper() + + var reader io.Reader + if body != nil { + buf, err := json.Marshal(body) + if err != nil { + t.Fatalf("marshal request body: %v", err) + } + reader = bytes.NewReader(buf) + } + + req, err := http.NewRequest(method, url, reader) + if err != nil { + t.Fatalf("new request: %v", err) + } + if body != nil { + req.Header.Set("Content-Type", "application/json") + } + + resp, err := client.Do(req) + if err != nil { + t.Fatalf("request failed: %v", err) + } + return resp +} + +func decodeResponseJSON[T any](t *testing.T, 
body io.Reader) T { + t.Helper() + var out T + if err := json.NewDecoder(body).Decode(&out); err != nil { + t.Fatalf("decode response: %v", err) + } + return out +} + +func assertErrorShape(t *testing.T, resp *http.Response, expectedStatus int, expectedCode string) { + t.Helper() + defer resp.Body.Close() + if resp.StatusCode != expectedStatus { + t.Fatalf("status=%d want=%d", resp.StatusCode, expectedStatus) + } + + var payload map[string]any + if err := json.NewDecoder(resp.Body).Decode(&payload); err != nil { + t.Fatalf("decode error payload: %v", err) + } + if got := payload["code"]; got != expectedCode { + t.Fatalf("error code=%v want=%s", got, expectedCode) + } + if _, ok := payload["error"]; !ok { + t.Fatalf("expected error field in payload: %#v", payload) + } + if len(payload) != 2 { + t.Fatalf("expected payload shape {error,code}, got %#v", payload) + } +} + +func TestSessionsLifecycleEndpoints(t *testing.T) { + mgr := newFakeStatefulSessionManager() + srv := newSessionsTestServer(t, mgr) + defer srv.Close() + + workspace := t.TempDir() + + createResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions", map[string]any{ + "workspacePath": workspace, + "label": "api-test", + }) + if createResp.StatusCode != http.StatusCreated { + defer createResp.Body.Close() + t.Fatalf("create status=%d want=%d", createResp.StatusCode, http.StatusCreated) + } + created := decodeResponseJSON[sessionView](t, createResp.Body) + _ = createResp.Body.Close() + if created.ID == "" { + t.Fatal("expected created session id") + } + if created.Labels["label"] != "api-test" { + t.Fatalf("expected label api-test, got %#v", created.Labels) + } + + listResp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions", nil) + if listResp.StatusCode != http.StatusOK { + defer listResp.Body.Close() + t.Fatalf("list status=%d want=%d", listResp.StatusCode, http.StatusOK) + } + listed := decodeResponseJSON[[]sessionView](t, listResp.Body) + _ = 
listResp.Body.Close() + if len(listed) != 1 || listed[0].ID != created.ID { + t.Fatalf("unexpected list response: %#v", listed) + } + + getResp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+created.ID, nil) + if getResp.StatusCode != http.StatusOK { + defer getResp.Body.Close() + t.Fatalf("get status=%d want=%d", getResp.StatusCode, http.StatusOK) + } + detail := decodeResponseJSON[sessionView](t, getResp.Body) + _ = getResp.Body.Close() + if detail.Health.State != session.HealthStateHealthy { + t.Fatalf("expected healthy state, got %s", detail.Health.State) + } + + stopResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/stop", nil) + if stopResp.StatusCode != http.StatusOK { + defer stopResp.Body.Close() + t.Fatalf("stop status=%d want=%d", stopResp.StatusCode, http.StatusOK) + } + stopped := decodeResponseJSON[sessionView](t, stopResp.Body) + _ = stopResp.Body.Close() + if stopped.Status != session.SessionStatusStopped { + t.Fatalf("stop status field=%s want=%s", stopped.Status, session.SessionStatusStopped) + } + + restartResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/restart", nil) + if restartResp.StatusCode != http.StatusOK { + defer restartResp.Body.Close() + t.Fatalf("restart status=%d want=%d", restartResp.StatusCode, http.StatusOK) + } + restarted := decodeResponseJSON[sessionView](t, restartResp.Body) + _ = restartResp.Body.Close() + if restarted.Status != session.SessionStatusActive { + t.Fatalf("restart status field=%s want=%s", restarted.Status, session.SessionStatusActive) + } + + stopAgainResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/stop", nil) + if stopAgainResp.StatusCode != http.StatusOK { + defer stopAgainResp.Body.Close() + t.Fatalf("second stop status=%d want=%d", stopAgainResp.StatusCode, http.StatusOK) + } + _ = stopAgainResp.Body.Close() + + startResp := doJSONRequest(t, 
srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/start", nil) + if startResp.StatusCode != http.StatusOK { + defer startResp.Body.Close() + t.Fatalf("start status=%d want=%d", startResp.StatusCode, http.StatusOK) + } + started := decodeResponseJSON[sessionView](t, startResp.Body) + _ = startResp.Body.Close() + if started.Status != session.SessionStatusActive { + t.Fatalf("start status field=%s want=%s", started.Status, session.SessionStatusActive) + } + + attachResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/attach", nil) + if attachResp.StatusCode != http.StatusOK { + defer attachResp.Body.Close() + t.Fatalf("attach status=%d want=%d", attachResp.StatusCode, http.StatusOK) + } + attached := decodeResponseJSON[sessionView](t, attachResp.Body) + _ = attachResp.Body.Close() + if attached.AttachedClients != 1 { + t.Fatalf("attached clients=%d want=1", attached.AttachedClients) + } + + detachResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/detach", nil) + if detachResp.StatusCode != http.StatusOK { + defer detachResp.Body.Close() + t.Fatalf("detach status=%d want=%d", detachResp.StatusCode, http.StatusOK) + } + detached := decodeResponseJSON[sessionView](t, detachResp.Body) + _ = detachResp.Body.Close() + if detached.AttachedClients != 0 { + t.Fatalf("attached clients after detach=%d want=0", detached.AttachedClients) + } + + deleteResp := doJSONRequest(t, srv.Client(), http.MethodDelete, srv.URL+"/api/sessions/"+created.ID, nil) + if deleteResp.StatusCode != http.StatusNoContent { + defer deleteResp.Body.Close() + t.Fatalf("delete status=%d want=%d", deleteResp.StatusCode, http.StatusNoContent) + } + _ = deleteResp.Body.Close() + + missingResp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+created.ID, nil) + assertErrorShape(t, missingResp, http.StatusNotFound, "SESSION_NOT_FOUND") +} + +func TestSessionsCreateValidationErrors(t 
*testing.T) { + mgr := newFakeStatefulSessionManager() + srv := newSessionsTestServer(t, mgr) + defer srv.Close() + + invalidReq, err := http.NewRequest(http.MethodPost, srv.URL+"/api/sessions", strings.NewReader(`{"workspacePath":`)) + if err != nil { + t.Fatalf("new request: %v", err) + } + invalidReq.Header.Set("Content-Type", "application/json") + invalidResp, err := srv.Client().Do(invalidReq) + if err != nil { + t.Fatalf("request failed: %v", err) + } + assertErrorShape(t, invalidResp, http.StatusBadRequest, "INVALID_REQUEST_BODY") + + mgr.createErr = session.ErrWorkspacePathInvalid + badPathResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions", map[string]any{ + "workspacePath": "/path/does/not/exist", + }) + assertErrorShape(t, badPathResp, http.StatusBadRequest, "WORKSPACE_PATH_INVALID") +} + +func TestSessionsCreatePortExhaustionError(t *testing.T) { + mgr := newFakeStatefulSessionManager() + srv := newSessionsTestServer(t, mgr) + defer srv.Close() + + mgr.createErr = session.ErrNoAvailableSessionPorts + resp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions", map[string]any{ + "workspacePath": t.TempDir(), + }) + assertErrorShape(t, resp, http.StatusServiceUnavailable, "NO_AVAILABLE_SESSION_PORTS") +} + +func TestSessionsListFilterAndSort(t *testing.T) { + mgr := newFakeStatefulSessionManager() + workspace := t.TempDir() + first, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace}) + if err != nil { + t.Fatalf("seed first session: %v", err) + } + second, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace}) + if err != nil { + t.Fatalf("seed second session: %v", err) + } + if err := mgr.Stop(context.Background(), first.ID); err != nil { + t.Fatalf("seed stop first: %v", err) + } + + mgr.mu.Lock() + h := mgr.sessions[first.ID] + h.LastActivity = time.Now().UTC().Add(-2 * time.Hour) + mgr.sessions[first.ID] = h + h2 := mgr.sessions[second.ID] 
+ h2.LastActivity = time.Now().UTC().Add(-1 * time.Minute) + mgr.sessions[second.ID] = h2 + mgr.mu.Unlock() + + srv := newSessionsTestServer(t, mgr) + defer srv.Close() + + filteredResp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions?status=stopped", nil) + if filteredResp.StatusCode != http.StatusOK { + defer filteredResp.Body.Close() + t.Fatalf("filtered status=%d want=%d", filteredResp.StatusCode, http.StatusOK) + } + filtered := decodeResponseJSON[[]sessionView](t, filteredResp.Body) + _ = filteredResp.Body.Close() + if len(filtered) != 1 || filtered[0].ID != first.ID { + t.Fatalf("unexpected filtered list: %#v", filtered) + } + + sortedResp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions?sort=lastActivity", nil) + if sortedResp.StatusCode != http.StatusOK { + defer sortedResp.Body.Close() + t.Fatalf("sorted status=%d want=%d", sortedResp.StatusCode, http.StatusOK) + } + sortedViews := decodeResponseJSON[[]sessionView](t, sortedResp.Body) + _ = sortedResp.Body.Close() + if len(sortedViews) != 2 || sortedViews[0].ID != second.ID { + t.Fatalf("unexpected sort ordering: %#v", sortedViews) + } + + invalidSortResp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions?sort=random", nil) + assertErrorShape(t, invalidSortResp, http.StatusBadRequest, "INVALID_SORT") +} + +func TestSessionsMethodAndRouteErrors(t *testing.T) { + mgr := newFakeStatefulSessionManager() + srv := newSessionsTestServer(t, mgr) + defer srv.Close() + + methodResp := doJSONRequest(t, srv.Client(), http.MethodPut, srv.URL+"/api/sessions", nil) + assertErrorShape(t, methodResp, http.StatusMethodNotAllowed, "METHOD_NOT_ALLOWED") + + unknownRouteResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/s-1/unknown", nil) + assertErrorShape(t, unknownRouteResp, http.StatusNotFound, "NOT_FOUND") + + unknownStartMethod := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/s-1/start", nil) + 
assertErrorShape(t, unknownStartMethod, http.StatusMethodNotAllowed, "METHOD_NOT_ALLOWED") + + mgr.attachErr = session.ErrTerminalAttachDisabled + workspace := t.TempDir() + created, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: workspace}) + if err != nil { + t.Fatalf("seed session: %v", err) + } + + attachResp := doJSONRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+created.ID+"/attach", nil) + assertErrorShape(t, attachResp, http.StatusServiceUnavailable, "TERMINAL_ATTACH_UNAVAILABLE") +} + +func TestSessionsHandlerUnavailableManager(t *testing.T) { + srv := newSessionsTestServer(t, nil) + defer srv.Close() + + resp := doJSONRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions", nil) + assertErrorShape(t, resp, http.StatusServiceUnavailable, "SESSION_MANAGER_UNAVAILABLE") +} + +func TestDecodeJSONBodyRejectsTrailingPayload(t *testing.T) { + req := httptest.NewRequest(http.MethodPost, "/api/sessions", strings.NewReader(`{"workspacePath":"/tmp"}{}`)) + var body createSessionRequest + err := decodeJSONBody(req, &body) + if err == nil { + t.Fatal("expected decode error for trailing payload") + } + if errors.Is(err, io.EOF) { + t.Fatalf("expected structured decode error, got EOF") + } +} + +func TestMain(m *testing.M) { + os.Exit(m.Run()) +} diff --git a/internal/auth/config.go b/internal/auth/config.go new file mode 100644 index 0000000..28f6738 --- /dev/null +++ b/internal/auth/config.go @@ -0,0 +1,84 @@ +package auth + +import ( + "os" + "strings" +) + +const ( + envAuthEnabled = "OCR_AUTH_ENABLED" + envBearerTokens = "OCR_AUTH_BEARER_TOKENS" + envBasicAuth = "OCR_AUTH_BASIC" + envCORSAllowOrigins = "OCR_CORS_ALLOW_ORIGINS" +) + +type Config struct { + Enabled bool + BearerTokens []string + BasicAuth map[string]string + CORSAllowedOrigins []string + BypassPaths map[string]struct{} +} + +func Defaults() Config { + return Config{ + Enabled: false, + BearerTokens: nil, + BasicAuth: map[string]string{}, + 
CORSAllowedOrigins: []string{"*"}, + BypassPaths: map[string]struct{}{ + "/api/health": {}, + "/api/backends": {}, + }, + } +} + +func LoadFromEnv() Config { + cfg := Defaults() + + if raw := strings.TrimSpace(os.Getenv(envAuthEnabled)); raw != "" { + raw = strings.ToLower(raw) + cfg.Enabled = raw == "1" || raw == "true" || raw == "yes" || raw == "on" + } + + if raw := strings.TrimSpace(os.Getenv(envBearerTokens)); raw != "" { + cfg.BearerTokens = splitCSV(raw) + } + + if raw := strings.TrimSpace(os.Getenv(envBasicAuth)); raw != "" { + pairs := splitCSV(raw) + for _, pair := range pairs { + parts := strings.SplitN(pair, ":", 2) + if len(parts) != 2 { + continue + } + user := strings.TrimSpace(parts[0]) + pass := strings.TrimSpace(parts[1]) + if user == "" || pass == "" { + continue + } + cfg.BasicAuth[user] = pass + } + } + + if raw := strings.TrimSpace(os.Getenv(envCORSAllowOrigins)); raw != "" { + origins := splitCSV(raw) + if len(origins) > 0 { + cfg.CORSAllowedOrigins = origins + } + } + + return cfg +} + +func splitCSV(raw string) []string { + parts := strings.Split(raw, ",") + out := make([]string, 0, len(parts)) + for _, p := range parts { + trimmed := strings.TrimSpace(p) + if trimmed != "" { + out = append(out, trimmed) + } + } + return out +} diff --git a/internal/auth/config_test.go b/internal/auth/config_test.go new file mode 100644 index 0000000..5d80601 --- /dev/null +++ b/internal/auth/config_test.go @@ -0,0 +1,78 @@ +package auth + +import ( + "os" + "reflect" + "testing" +) + +func TestDefaults(t *testing.T) { + cfg := Defaults() + if cfg.Enabled { + t.Fatal("expected auth disabled by default") + } + if len(cfg.CORSAllowedOrigins) != 1 || cfg.CORSAllowedOrigins[0] != "*" { + t.Fatalf("unexpected default CORS origins: %#v", cfg.CORSAllowedOrigins) + } + if _, ok := cfg.BypassPaths["/api/health"]; !ok { + t.Fatal("expected /api/health bypass by default") + } + if _, ok := cfg.BypassPaths["/api/backends"]; !ok { + t.Fatal("expected /api/backends 
bypass by default") + } +} + +func TestLoadFromEnv(t *testing.T) { + t.Setenv(envAuthEnabled, "true") + t.Setenv(envBearerTokens, "tok-a, tok-b") + t.Setenv(envBasicAuth, "alice:secret,bob:pw") + t.Setenv(envCORSAllowOrigins, "https://a.example,https://b.example") + + cfg := LoadFromEnv() + if !cfg.Enabled { + t.Fatal("expected enabled from env") + } + if !reflect.DeepEqual(cfg.BearerTokens, []string{"tok-a", "tok-b"}) { + t.Fatalf("unexpected tokens: %#v", cfg.BearerTokens) + } + if got := cfg.BasicAuth["alice"]; got != "secret" { + t.Fatalf("unexpected alice password: %q", got) + } + if got := cfg.BasicAuth["bob"]; got != "pw" { + t.Fatalf("unexpected bob password: %q", got) + } + if !reflect.DeepEqual(cfg.CORSAllowedOrigins, []string{"https://a.example", "https://b.example"}) { + t.Fatalf("unexpected CORS origins: %#v", cfg.CORSAllowedOrigins) + } +} + +func TestLoadFromEnv_InvalidBasicEntriesIgnored(t *testing.T) { + t.Setenv(envBasicAuth, "bad,no-colon,ok:yes,:missing-user,user-only:") + cfg := LoadFromEnv() + if len(cfg.BasicAuth) != 1 { + t.Fatalf("expected 1 valid basic entry, got %d", len(cfg.BasicAuth)) + } + if cfg.BasicAuth["ok"] != "yes" { + t.Fatal("expected ok:yes to be parsed") + } +} + +func TestSplitCSV(t *testing.T) { + got := splitCSV(" a, ,b ,, c ") + want := []string{"a", "b", "c"} + if !reflect.DeepEqual(got, want) { + t.Fatalf("splitCSV mismatch: got %#v want %#v", got, want) + } +} + +func TestLoadFromEnv_RespectsUnset(t *testing.T) { + _ = os.Unsetenv(envAuthEnabled) + _ = os.Unsetenv(envBearerTokens) + _ = os.Unsetenv(envBasicAuth) + _ = os.Unsetenv(envCORSAllowOrigins) + + cfg := LoadFromEnv() + if cfg.Enabled { + t.Fatal("expected disabled when unset") + } +} diff --git a/internal/auth/middleware.go b/internal/auth/middleware.go new file mode 100644 index 0000000..b357110 --- /dev/null +++ b/internal/auth/middleware.go @@ -0,0 +1,190 @@ +package auth + +import ( + "crypto/rand" + "crypto/subtle" + "encoding/base64" + "encoding/hex" + 
"encoding/json" + "net/http" + "strings" + "time" +) + +type RateLimiter interface { + Allow(r *http.Request) bool +} + +type NoopRateLimiter struct{} + +func (NoopRateLimiter) Allow(_ *http.Request) bool { return true } + +func Middleware(next http.Handler, cfg Config) http.Handler { + if next == nil { + next = http.NotFoundHandler() + } + + if cfg.BasicAuth == nil { + cfg.BasicAuth = map[string]string{} + } + + return withRequestID(withCORS(withAuth(withRateLimit(next, NoopRateLimiter{}), cfg), cfg)) +} + +func withRequestID(next http.Handler) http.Handler { + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + reqID := strings.TrimSpace(r.Header.Get("X-Request-ID")) + if reqID == "" { + reqID = newRequestID() + } + w.Header().Set("X-Request-ID", reqID) + next.ServeHTTP(w, r) + }) +} + +func withRateLimit(next http.Handler, limiter RateLimiter) http.Handler { + if limiter == nil { + limiter = NoopRateLimiter{} + } + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if !limiter.Allow(r) { + writeJSONError(w, http.StatusTooManyRequests, "rate_limited", "RATE_LIMITED", "rate limit exceeded") + return + } + next.ServeHTTP(w, r) + }) +} + +func withCORS(next http.Handler, cfg Config) http.Handler { + allowed := cfg.CORSAllowedOrigins + if len(allowed) == 0 { + allowed = []string{"*"} + } + + allowAll := false + allowSet := make(map[string]struct{}, len(allowed)) + for _, origin := range allowed { + if origin == "*" { + allowAll = true + continue + } + allowSet[origin] = struct{}{} + } + + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + origin := strings.TrimSpace(r.Header.Get("Origin")) + if origin != "" { + if allowAll { + w.Header().Set("Access-Control-Allow-Origin", "*") + } else if _, ok := allowSet[origin]; ok { + w.Header().Set("Access-Control-Allow-Origin", origin) + w.Header().Add("Vary", "Origin") + } + w.Header().Set("Access-Control-Allow-Headers", "Authorization, Content-Type, 
X-Request-ID") + w.Header().Set("Access-Control-Allow-Methods", "GET, POST, PUT, PATCH, DELETE, OPTIONS") + } + + if r.Method == http.MethodOptions { + w.WriteHeader(http.StatusNoContent) + return + } + + next.ServeHTTP(w, r) + }) +} + +func withAuth(next http.Handler, cfg Config) http.Handler { + bypass := cfg.BypassPaths + if bypass == nil { + bypass = map[string]struct{}{} + } + + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if _, ok := bypass[r.URL.Path]; ok { + next.ServeHTTP(w, r) + return + } + + if !cfg.Enabled { + next.ServeHTTP(w, r) + return + } + + authz := strings.TrimSpace(r.Header.Get("Authorization")) + if validateBearer(authz, cfg.BearerTokens) || validateBasic(authz, cfg.BasicAuth) { + next.ServeHTTP(w, r) + return + } + + w.Header().Set("WWW-Authenticate", `Bearer realm="opencoderouter", Basic realm="opencoderouter"`) + writeJSONError(w, http.StatusUnauthorized, "unauthorized", "UNAUTHORIZED", "invalid or missing credentials") + }) +} + +func validateBearer(authz string, tokens []string) bool { + if len(tokens) == 0 { + return false + } + if !strings.HasPrefix(strings.ToLower(authz), "bearer ") { + return false + } + token := strings.TrimSpace(authz[len("Bearer "):]) + if token == "" { + return false + } + for _, allowed := range tokens { + if subtle.ConstantTimeCompare([]byte(token), []byte(allowed)) == 1 { + return true + } + } + return false +} + +func validateBasic(authz string, users map[string]string) bool { + if len(users) == 0 { + return false + } + if !strings.HasPrefix(strings.ToLower(authz), "basic ") { + return false + } + payload := strings.TrimSpace(authz[len("Basic "):]) + if payload == "" { + return false + } + + raw, err := base64.StdEncoding.DecodeString(payload) + if err != nil { + return false + } + + parts := strings.SplitN(string(raw), ":", 2) + if len(parts) != 2 { + return false + } + + user := parts[0] + pass := parts[1] + stored, ok := users[user] + if !ok { + return false + } + return 
subtle.ConstantTimeCompare([]byte(pass), []byte(stored)) == 1 +} + +func writeJSONError(w http.ResponseWriter, status int, errCode, code, msg string) { + w.Header().Set("Content-Type", "application/json") + w.WriteHeader(status) + _ = json.NewEncoder(w).Encode(map[string]string{ + "error": errCode, + "code": code, + "message": msg, + }) +} + +func newRequestID() string { + b := make([]byte, 12) + if _, err := rand.Read(b); err != nil { + return hex.EncodeToString([]byte(time.Now().UTC().Format("20060102150405.000000000"))) + } + return hex.EncodeToString(b) +} diff --git a/internal/auth/middleware_test.go b/internal/auth/middleware_test.go new file mode 100644 index 0000000..6dcffc3 --- /dev/null +++ b/internal/auth/middleware_test.go @@ -0,0 +1,151 @@ +package auth + +import ( + "encoding/base64" + "encoding/json" + "net/http" + "net/http/httptest" + "testing" +) + +func TestMiddleware_ValidBearerTokenPasses(t *testing.T) { + cfg := Defaults() + cfg.Enabled = true + cfg.BearerTokens = []string{"good-token"} + + called := false + h := Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + called = true + w.WriteHeader(http.StatusOK) + }), cfg) + + req := httptest.NewRequest(http.MethodGet, "/api/resolve?name=test", nil) + req.Header.Set("Authorization", "Bearer good-token") + w := httptest.NewRecorder() + h.ServeHTTP(w, req) + + if !called { + t.Fatal("expected next handler to be called") + } + if w.Code != http.StatusOK { + t.Fatalf("expected 200, got %d", w.Code) + } +} + +func TestMiddleware_InvalidTokenReturns401JSON(t *testing.T) { + cfg := Defaults() + cfg.Enabled = true + cfg.BearerTokens = []string{"good-token"} + + h := Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + }), cfg) + + req := httptest.NewRequest(http.MethodGet, "/api/resolve?name=test", nil) + req.Header.Set("Authorization", "Bearer wrong-token") + w := httptest.NewRecorder() + h.ServeHTTP(w, req) + + if w.Code != 
http.StatusUnauthorized { + t.Fatalf("expected 401, got %d", w.Code) + } + if ct := w.Header().Get("Content-Type"); ct != "application/json" { + t.Fatalf("expected JSON content type, got %q", ct) + } + + var payload map[string]string + if err := json.Unmarshal(w.Body.Bytes(), &payload); err != nil { + t.Fatalf("expected json body, got error: %v", err) + } + if payload["error"] != "unauthorized" || payload["code"] != "UNAUTHORIZED" { + t.Fatalf("unexpected error payload: %#v", payload) + } +} + +func TestMiddleware_ValidBasicAuthPasses(t *testing.T) { + cfg := Defaults() + cfg.Enabled = true + cfg.BasicAuth = map[string]string{"alice": "secret"} + + called := false + h := Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + called = true + w.WriteHeader(http.StatusOK) + }), cfg) + + basic := base64.StdEncoding.EncodeToString([]byte("alice:secret")) + req := httptest.NewRequest(http.MethodGet, "/api/resolve?name=test", nil) + req.Header.Set("Authorization", "Basic "+basic) + w := httptest.NewRecorder() + h.ServeHTTP(w, req) + + if !called { + t.Fatal("expected next handler to be called") + } + if w.Code != http.StatusOK { + t.Fatalf("expected 200, got %d", w.Code) + } +} + +func TestMiddleware_BypassHealthEndpoints(t *testing.T) { + cfg := Defaults() + cfg.Enabled = true + cfg.BearerTokens = []string{"good-token"} + + called := false + h := Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + called = true + w.WriteHeader(http.StatusOK) + }), cfg) + + req := httptest.NewRequest(http.MethodGet, "/api/health", nil) + w := httptest.NewRecorder() + h.ServeHTTP(w, req) + + if !called { + t.Fatal("expected bypass to call next handler") + } + if w.Code != http.StatusOK { + t.Fatalf("expected 200 for bypass endpoint, got %d", w.Code) + } +} + +func TestMiddleware_CORSAllowlist(t *testing.T) { + cfg := Defaults() + cfg.CORSAllowedOrigins = []string{"https://allowed.example"} + + h := Middleware(http.HandlerFunc(func(w 
http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + }), cfg) + + req := httptest.NewRequest(http.MethodGet, "/api/resolve?name=test", nil) + req.Header.Set("Origin", "https://allowed.example") + w := httptest.NewRecorder() + h.ServeHTTP(w, req) + + if got := w.Header().Get("Access-Control-Allow-Origin"); got != "https://allowed.example" { + t.Fatalf("expected allowed origin echoed, got %q", got) + } + + reqBlocked := httptest.NewRequest(http.MethodGet, "/api/resolve?name=test", nil) + reqBlocked.Header.Set("Origin", "https://blocked.example") + wBlocked := httptest.NewRecorder() + h.ServeHTTP(wBlocked, reqBlocked) + if got := wBlocked.Header().Get("Access-Control-Allow-Origin"); got != "" { + t.Fatalf("expected blocked origin to be omitted, got %q", got) + } +} + +func TestMiddleware_SetsRequestIDHeader(t *testing.T) { + h := Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + }), Defaults()) + + req := httptest.NewRequest(http.MethodGet, "/", nil) + w := httptest.NewRecorder() + h.ServeHTTP(w, req) + + if got := w.Header().Get("X-Request-ID"); got == "" { + t.Fatal("expected X-Request-ID response header") + } +} diff --git a/internal/cache/cache.go b/internal/cache/cache.go new file mode 100644 index 0000000..c0485a0 --- /dev/null +++ b/internal/cache/cache.go @@ -0,0 +1,57 @@ +package cache + +import ( + "log/slog" + "os" + "path/filepath" + "strings" +) + +const ( + defaultMaxEntriesPerSession = 10000 + defaultMaxTotalSize = 100 * 1024 * 1024 +) + +func NewJSONLCache(cfg CacheConfig) (ScrollbackCache, error) { + normalized := normalizeConfig(cfg) + if err := os.MkdirAll(normalized.StoragePath, storageDirPerm); err != nil { + return nil, err + } + + cache := &JSONLCache{ + config: normalized, + writers: make(map[string]*sessionWriter), + entryCounts: make(map[string]int), + lru: newSessionLRU(), + logger: slog.Default(), + } + if err := cache.bootstrapFromDisk(); err != nil { + return 
nil, err + } + + cache.mu.Lock() + defer cache.mu.Unlock() + if err := cache.evictLocked(); err != nil { + return nil, err + } + + return cache, nil +} + +func normalizeConfig(cfg CacheConfig) CacheConfig { + normalized := cfg + if normalized.MaxEntriesPerSession <= 0 { + normalized.MaxEntriesPerSession = defaultMaxEntriesPerSession + } + if normalized.MaxTotalSize <= 0 { + normalized.MaxTotalSize = defaultMaxTotalSize + } + if normalized.EvictionPolicy != EvictionPolicyLRU && normalized.EvictionPolicy != EvictionPolicyFIFO { + normalized.EvictionPolicy = EvictionPolicyLRU + } + normalized.StoragePath = strings.TrimSpace(normalized.StoragePath) + if normalized.StoragePath == "" { + normalized.StoragePath = filepath.Join(".opencode", "scrollback") + } + return normalized +} diff --git a/internal/cache/jsonl.go b/internal/cache/jsonl.go new file mode 100644 index 0000000..e4a791d --- /dev/null +++ b/internal/cache/jsonl.go @@ -0,0 +1,449 @@ +package cache + +import ( + "bufio" + "bytes" + "encoding/json" + "errors" + "fmt" + "io" + "log/slog" + "os" + "path/filepath" + "sort" + "strings" + "sync" +) + +var ErrCacheClosed = errors.New("scrollback cache is closed") + +const ( + jsonlExtension = ".jsonl" + storageDirPerm = 0o755 + sessionFilePerm = 0o600 +) + +type sessionWriter struct { + file *os.File + writer *bufio.Writer +} + +type JSONLCache struct { + config CacheConfig + mu sync.Mutex + closed bool + writers map[string]*sessionWriter + entryCounts map[string]int + lru *sessionLRU + logger *slog.Logger +} + +// JSONL schema contract: +// - File path layout: {storagePath}/{sessionID}.jsonl +// - Each line is one JSON object encoded from Entry. +// - Entry.Content ([]byte) is serialized by encoding/json as base64 text. +// - Lines are append-only and chronological for stable replay/hydration. 
+func (c *JSONLCache) Append(sessionID string, entry Entry) error { + c.mu.Lock() + defer c.mu.Unlock() + + if err := c.validateOpenLocked(sessionID); err != nil { + return err + } + + count, err := c.sessionCountLocked(sessionID) + if err != nil { + return err + } + + line, err := encodeEntryLine(entry) + if err != nil { + return err + } + + writer, err := c.writerLocked(sessionID) + if err != nil { + return err + } + if _, err := writer.writer.Write(line); err != nil { + _ = c.closeWriterLocked(sessionID) + return err + } + if err := writer.writer.Flush(); err != nil { + _ = c.closeWriterLocked(sessionID) + return err + } + + c.lru.AddSize(sessionID, int64(len(line))) + c.markAccessLocked(sessionID) + c.entryCounts[sessionID] = count + 1 + + if c.config.MaxEntriesPerSession > 0 && c.entryCounts[sessionID] > c.config.MaxEntriesPerSession { + if err := c.trimSessionLocked(sessionID, c.config.MaxEntriesPerSession); err != nil { + return err + } + } + + return c.evictLocked() +} + +func (c *JSONLCache) Get(sessionID string, offset, limit int) ([]Entry, error) { + c.mu.Lock() + defer c.mu.Unlock() + + if err := c.validateOpenLocked(sessionID); err != nil { + return nil, err + } + + entries, err := c.readEntriesLocked(sessionID) + if err != nil { + return nil, err + } + c.markAccessLocked(sessionID) + + if offset < 0 { + offset = 0 + } + if offset >= len(entries) { + return []Entry{}, nil + } + + end := len(entries) + if limit > 0 && offset+limit < end { + end = offset + limit + } + + out := make([]Entry, end-offset) + copy(out, entries[offset:end]) + return out, nil +} + +func (c *JSONLCache) Trim(sessionID string, maxEntries int) error { + c.mu.Lock() + defer c.mu.Unlock() + + if err := c.validateOpenLocked(sessionID); err != nil { + return err + } + return c.trimSessionLocked(sessionID, maxEntries) +} + +func (c *JSONLCache) Clear(sessionID string) error { + c.mu.Lock() + defer c.mu.Unlock() + + if err := c.validateOpenLocked(sessionID); err != nil { + return err + 
} + return c.removeSessionLocked(sessionID) +} + +func (c *JSONLCache) Close() error { + c.mu.Lock() + defer c.mu.Unlock() + + if c.closed { + return nil + } + + var closeErr error + for sessionID := range c.writers { + closeErr = errors.Join(closeErr, c.closeWriterLocked(sessionID)) + } + c.closed = true + return closeErr +} + +func (c *JSONLCache) bootstrapFromDisk() error { + entries, err := os.ReadDir(c.config.StoragePath) + if err != nil { + if errors.Is(err, os.ErrNotExist) { + return nil + } + return err + } + + sessionIDs := make([]string, 0, len(entries)) + sizes := make(map[string]int64, len(entries)) + for _, entry := range entries { + if entry.IsDir() { + continue + } + name := entry.Name() + if !strings.HasSuffix(name, jsonlExtension) { + continue + } + + sessionID := strings.TrimSuffix(name, jsonlExtension) + if strings.TrimSpace(sessionID) == "" { + continue + } + + info, infoErr := entry.Info() + if infoErr != nil { + return infoErr + } + sizes[sessionID] = info.Size() + sessionIDs = append(sessionIDs, sessionID) + } + + sort.Strings(sessionIDs) + for _, sessionID := range sessionIDs { + c.lru.SetSize(sessionID, sizes[sessionID]) + c.lru.Ensure(sessionID) + } + + return nil +} + +func (c *JSONLCache) validateOpenLocked(sessionID string) error { + if c.closed { + return ErrCacheClosed + } + if strings.TrimSpace(sessionID) == "" { + return fmt.Errorf("sessionID is required") + } + return nil +} + +func (c *JSONLCache) sessionPath(sessionID string) string { + return filepath.Join(c.config.StoragePath, sessionID+jsonlExtension) +} + +func (c *JSONLCache) writerLocked(sessionID string) (*sessionWriter, error) { + if writer, ok := c.writers[sessionID]; ok { + return writer, nil + } + + if err := os.MkdirAll(c.config.StoragePath, storageDirPerm); err != nil { + return nil, err + } + + file, err := os.OpenFile(c.sessionPath(sessionID), os.O_CREATE|os.O_APPEND|os.O_WRONLY, sessionFilePerm) + if err != nil { + return nil, err + } + + writer := &sessionWriter{ 
+ file: file, + writer: bufio.NewWriter(file), + } + c.writers[sessionID] = writer + return writer, nil +} + +func (c *JSONLCache) closeWriterLocked(sessionID string) error { + writer, ok := c.writers[sessionID] + if !ok { + return nil + } + delete(c.writers, sessionID) + + flushErr := writer.writer.Flush() + closeErr := writer.file.Close() + return errors.Join(flushErr, closeErr) +} + +func (c *JSONLCache) sessionCountLocked(sessionID string) (int, error) { + if count, ok := c.entryCounts[sessionID]; ok { + return count, nil + } + + entries, err := c.readEntriesLocked(sessionID) + if err != nil { + return 0, err + } + count := len(entries) + c.entryCounts[sessionID] = count + return count, nil +} + +func (c *JSONLCache) markAccessLocked(sessionID string) { + if c.config.EvictionPolicy == EvictionPolicyFIFO { + c.lru.Ensure(sessionID) + return + } + c.lru.Touch(sessionID) +} + +func (c *JSONLCache) readEntriesLocked(sessionID string) ([]Entry, error) { + path := c.sessionPath(sessionID) + entries, err := c.decodeJSONLFile(path, sessionID) + if err != nil { + if errors.Is(err, os.ErrNotExist) { + c.entryCounts[sessionID] = 0 + c.lru.SetSize(sessionID, 0) + return []Entry{}, nil + } + return nil, err + } + + size, err := fileSize(path) + if err != nil { + return nil, err + } + c.lru.SetSize(sessionID, size) + c.entryCounts[sessionID] = len(entries) + return entries, nil +} + +func (c *JSONLCache) decodeJSONLFile(path, sessionID string) ([]Entry, error) { + file, err := os.Open(path) + if err != nil { + return nil, err + } + defer file.Close() + + entries := make([]Entry, 0, 128) + reader := bufio.NewReader(file) + lineNo := 0 + + for { + line, readErr := reader.ReadBytes('\n') + if len(line) > 0 { + lineNo++ + trimmed := bytes.TrimRight(line, "\r\n") + if len(trimmed) > 0 { + var entry Entry + if err := json.Unmarshal(trimmed, &entry); err != nil { + c.logger.Warn("cache skipping malformed JSONL line", "session_id", sessionID, "line", lineNo, "error", err) + } else { 
+ entries = append(entries, entry) + } + } + } + + if errors.Is(readErr, io.EOF) { + break + } + if readErr != nil { + return nil, readErr + } + } + + return entries, nil +} + +func (c *JSONLCache) trimSessionLocked(sessionID string, maxEntries int) error { + if maxEntries <= 0 { + return c.removeSessionLocked(sessionID) + } + + entries, err := c.readEntriesLocked(sessionID) + if err != nil { + return err + } + if len(entries) <= maxEntries { + c.markAccessLocked(sessionID) + return c.evictLocked() + } + + trimmed := entries[len(entries)-maxEntries:] + if err := c.rewriteSessionLocked(sessionID, trimmed); err != nil { + return err + } + + c.entryCounts[sessionID] = len(trimmed) + c.markAccessLocked(sessionID) + return c.evictLocked() +} + +func (c *JSONLCache) rewriteSessionLocked(sessionID string, entries []Entry) error { + if err := c.closeWriterLocked(sessionID); err != nil { + return err + } + + if err := os.MkdirAll(c.config.StoragePath, storageDirPerm); err != nil { + return err + } + + path := c.sessionPath(sessionID) + tmpPath := path + ".tmp" + + file, err := os.OpenFile(tmpPath, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, sessionFilePerm) + if err != nil { + return err + } + + writer := bufio.NewWriter(file) + var totalBytes int64 + writeErr := func() error { + for _, entry := range entries { + line, lineErr := encodeEntryLine(entry) + if lineErr != nil { + return lineErr + } + written, lineErr := writer.Write(line) + if lineErr != nil { + return lineErr + } + totalBytes += int64(written) + } + return writer.Flush() + }() + closeErr := file.Close() + if writeErr != nil || closeErr != nil { + if removeErr := os.Remove(tmpPath); removeErr != nil && !errors.Is(removeErr, os.ErrNotExist) { + c.logger.Debug("failed to remove temporary cache file", "path", tmpPath, "error", removeErr) + return errors.Join(writeErr, closeErr, removeErr) + } + return errors.Join(writeErr, closeErr) + } + + if err := os.Rename(tmpPath, path); err != nil { + if removeErr := 
os.Remove(tmpPath); removeErr != nil && !errors.Is(removeErr, os.ErrNotExist) { + c.logger.Debug("failed to remove temporary cache file after rename error", "path", tmpPath, "error", removeErr) + return errors.Join(err, removeErr) + } + return err + } + + c.lru.SetSize(sessionID, totalBytes) + return nil +} + +func (c *JSONLCache) removeSessionLocked(sessionID string) error { + closeErr := c.closeWriterLocked(sessionID) + removeErr := os.Remove(c.sessionPath(sessionID)) + if removeErr != nil && !errors.Is(removeErr, os.ErrNotExist) { + return errors.Join(closeErr, removeErr) + } + + delete(c.entryCounts, sessionID) + c.lru.Remove(sessionID) + return closeErr +} + +func (c *JSONLCache) evictLocked() error { + if c.config.MaxTotalSize <= 0 { + return nil + } + + for c.lru.TotalSize() > c.config.MaxTotalSize { + sessionID, ok := c.lru.Oldest() + if !ok { + break + } + if err := c.removeSessionLocked(sessionID); err != nil { + return err + } + } + + return nil +} + +func encodeEntryLine(entry Entry) ([]byte, error) { + encoded, err := json.Marshal(entry) + if err != nil { + return nil, err + } + return append(encoded, '\n'), nil +} + +func fileSize(path string) (int64, error) { + info, err := os.Stat(path) + if err != nil { + return 0, err + } + return info.Size(), nil +} diff --git a/internal/cache/jsonl_test.go b/internal/cache/jsonl_test.go new file mode 100644 index 0000000..f75b676 --- /dev/null +++ b/internal/cache/jsonl_test.go @@ -0,0 +1,251 @@ +package cache + +import ( + "encoding/json" + "fmt" + "os" + "path/filepath" + "sync" + "testing" + "time" +) + +func TestJSONLCacheAppendGetRoundTripAndMalformedLine(t *testing.T) { + cache := newTestCache(t, CacheConfig{StoragePath: t.TempDir(), MaxEntriesPerSession: 1000, MaxTotalSize: 64 * 1024 * 1024}) + + sessionID := "roundtrip" + base := time.Unix(1_700_000_000, 0).UTC() + entries := []Entry{ + {Timestamp: base, Type: EntryTypeAgentMessage, Content: []byte("alpha"), Metadata: map[string]any{"idx": 1}}, + 
{Timestamp: base.Add(time.Second), Type: EntryTypeToolCall, Content: []byte("beta"), Metadata: map[string]any{"idx": 2}}, + {Timestamp: base.Add(2 * time.Second), Type: EntryTypeTerminalOutput, Content: []byte("gamma"), Metadata: map[string]any{"idx": 3}}, + } + + for _, entry := range entries { + if err := cache.Append(sessionID, entry); err != nil { + t.Fatalf("append failed: %v", err) + } + } + + file, err := os.OpenFile(filepath.Join(cache.config.StoragePath, sessionID+jsonlExtension), os.O_APPEND|os.O_WRONLY, sessionFilePerm) + if err != nil { + t.Fatalf("open cache file failed: %v", err) + } + if _, err := file.WriteString("{this-is-not-json}\n"); err != nil { + _ = file.Close() + t.Fatalf("write malformed line failed: %v", err) + } + if err := file.Close(); err != nil { + t.Fatalf("close malformed writer failed: %v", err) + } + + all, err := cache.Get(sessionID, 0, 0) + if err != nil { + t.Fatalf("get all failed: %v", err) + } + if len(all) != len(entries) { + t.Fatalf("unexpected entries length: got %d want %d", len(all), len(entries)) + } + for i := range entries { + if !all[i].Timestamp.Equal(entries[i].Timestamp) || all[i].Type != entries[i].Type || string(all[i].Content) != string(entries[i].Content) { + t.Fatalf("entry mismatch at index %d: got %+v want %+v", i, all[i], entries[i]) + } + if fmt.Sprint(all[i].Metadata["idx"]) != fmt.Sprint(entries[i].Metadata["idx"]) { + t.Fatalf("metadata mismatch at index %d: got=%v want=%v", i, all[i].Metadata["idx"], entries[i].Metadata["idx"]) + } + } + + window, err := cache.Get(sessionID, 1, 1) + if err != nil { + t.Fatalf("paged get failed: %v", err) + } + if len(window) != 1 || window[0].Type != entries[1].Type || string(window[0].Content) != string(entries[1].Content) { + t.Fatalf("unexpected paged result: %+v", window) + } +} + +func TestJSONLCacheTrimAndClear(t *testing.T) { + cache := newTestCache(t, CacheConfig{StoragePath: t.TempDir(), MaxEntriesPerSession: 1000, MaxTotalSize: 64 * 1024 * 1024}) + + 
sessionID := "trim-clear" + base := time.Unix(1_700_000_000, 0).UTC() + for i := 0; i < 5; i++ { + entry := Entry{Timestamp: base.Add(time.Duration(i) * time.Second), Type: EntryTypeSystemEvent, Content: []byte(fmt.Sprintf("line-%d", i))} + if err := cache.Append(sessionID, entry); err != nil { + t.Fatalf("append %d failed: %v", i, err) + } + } + + if err := cache.Trim(sessionID, 2); err != nil { + t.Fatalf("trim failed: %v", err) + } + + trimmed, err := cache.Get(sessionID, 0, 0) + if err != nil { + t.Fatalf("get after trim failed: %v", err) + } + if len(trimmed) != 2 { + t.Fatalf("unexpected trim size: got %d want 2", len(trimmed)) + } + if string(trimmed[0].Content) != "line-3" || string(trimmed[1].Content) != "line-4" { + t.Fatalf("trim kept unexpected entries: %+v", trimmed) + } + + if err := cache.Clear(sessionID); err != nil { + t.Fatalf("clear failed: %v", err) + } + + entries, err := cache.Get(sessionID, 0, 0) + if err != nil { + t.Fatalf("get after clear failed: %v", err) + } + if len(entries) != 0 { + t.Fatalf("expected empty result after clear, got %d entries", len(entries)) + } + if _, err := os.Stat(filepath.Join(cache.config.StoragePath, sessionID+jsonlExtension)); !os.IsNotExist(err) { + t.Fatalf("expected cache file to be removed, stat err=%v", err) + } +} + +func TestJSONLCacheLRUEviction(t *testing.T) { + entry := Entry{Timestamp: time.Unix(1_700_000_000, 0).UTC(), Type: EntryTypeTerminalOutput, Content: []byte("payload")} + encoded, err := json.Marshal(entry) + if err != nil { + t.Fatalf("marshal test entry failed: %v", err) + } + lineSize := int64(len(encoded) + 1) + + cache := newTestCache(t, CacheConfig{ + StoragePath: t.TempDir(), + MaxEntriesPerSession: 1000, + MaxTotalSize: (lineSize * 2) + 5, + EvictionPolicy: EvictionPolicyLRU, + }) + + if err := cache.Append("s1", entry); err != nil { + t.Fatalf("append s1 failed: %v", err) + } + if err := cache.Append("s2", entry); err != nil { + t.Fatalf("append s2 failed: %v", err) + } + if _, err := 
cache.Get("s1", 0, 1); err != nil { + t.Fatalf("touch s1 failed: %v", err) + } + if err := cache.Append("s3", entry); err != nil { + t.Fatalf("append s3 failed: %v", err) + } + + if _, err := os.Stat(filepath.Join(cache.config.StoragePath, "s2"+jsonlExtension)); !os.IsNotExist(err) { + t.Fatalf("expected s2 to be evicted, stat err=%v", err) + } + if _, err := os.Stat(filepath.Join(cache.config.StoragePath, "s1"+jsonlExtension)); err != nil { + t.Fatalf("expected s1 to be kept: %v", err) + } + if _, err := os.Stat(filepath.Join(cache.config.StoragePath, "s3"+jsonlExtension)); err != nil { + t.Fatalf("expected s3 to be kept: %v", err) + } + + total := int64(0) + for _, sessionID := range []string{"s1", "s3"} { + info, err := os.Stat(filepath.Join(cache.config.StoragePath, sessionID+jsonlExtension)) + if err != nil { + t.Fatalf("stat %s failed: %v", sessionID, err) + } + total += info.Size() + } + if total > cache.config.MaxTotalSize { + t.Fatalf("total cache size exceeds limit: total=%d limit=%d", total, cache.config.MaxTotalSize) + } +} + +func TestJSONLCacheConcurrentAppend(t *testing.T) { + cache := newTestCache(t, CacheConfig{StoragePath: t.TempDir(), MaxEntriesPerSession: 20000, MaxTotalSize: 64 * 1024 * 1024}) + + const goroutines = 16 + const perWorker = 250 + + var wg sync.WaitGroup + for g := 0; g < goroutines; g++ { + g := g + wg.Add(1) + go func() { + defer wg.Done() + for i := 0; i < perWorker; i++ { + entry := Entry{ + Timestamp: time.Unix(1_700_000_000+int64(i), 0).UTC(), + Type: EntryTypeTerminalOutput, + Content: []byte(fmt.Sprintf("g%d-%d", g, i)), + } + if err := cache.Append("concurrent", entry); err != nil { + t.Errorf("append failed: %v", err) + return + } + } + }() + } + wg.Wait() + + entries, err := cache.Get("concurrent", 0, 0) + if err != nil { + t.Fatalf("get failed: %v", err) + } + expected := goroutines * perWorker + if len(entries) != expected { + t.Fatalf("unexpected entry count: got %d want %d", len(entries), expected) + } +} + +func 
TestJSONLCacheRoundTripPerformanceSmoke(t *testing.T) { + if testing.Short() { + t.Skip("skipping performance smoke test in short mode") + } + + cache := newTestCache(t, CacheConfig{StoragePath: t.TempDir(), MaxEntriesPerSession: 11000, MaxTotalSize: 128 * 1024 * 1024}) + + const count = 10000 + start := time.Now() + for i := 0; i < count; i++ { + entry := Entry{ + Timestamp: time.Unix(1_700_000_000+int64(i), 0).UTC(), + Type: EntryTypeAgentMessage, + Content: []byte("perf"), + } + if err := cache.Append("perf", entry); err != nil { + t.Fatalf("append failed at %d: %v", i, err) + } + } + entries, err := cache.Get("perf", 0, count) + if err != nil { + t.Fatalf("get failed: %v", err) + } + if len(entries) != count { + t.Fatalf("unexpected perf entry count: got %d want %d", len(entries), count) + } + + duration := time.Since(start) + if duration > 8*time.Second { + t.Fatalf("round-trip too slow: %s", duration) + } +} + +func newTestCache(t *testing.T, cfg CacheConfig) *JSONLCache { + t.Helper() + + instance, err := NewJSONLCache(cfg) + if err != nil { + t.Fatalf("create cache failed: %v", err) + } + + cache, ok := instance.(*JSONLCache) + if !ok { + t.Fatalf("unexpected cache type: %T", instance) + } + + t.Cleanup(func() { + if err := cache.Close(); err != nil { + t.Fatalf("close cache failed: %v", err) + } + }) + + return cache +} diff --git a/internal/cache/lru.go b/internal/cache/lru.go new file mode 100644 index 0000000..804e1b4 --- /dev/null +++ b/internal/cache/lru.go @@ -0,0 +1,104 @@ +package cache + +import ( + "container/list" + "sync" +) + +type sessionLRU struct { + mu sync.Mutex + order *list.List + nodes map[string]*list.Element + fileSize map[string]int64 + total int64 +} + +func newSessionLRU() *sessionLRU { + return &sessionLRU{ + order: list.New(), + nodes: make(map[string]*list.Element), + fileSize: make(map[string]int64), + } +} + +func (l *sessionLRU) Ensure(sessionID string) { + l.mu.Lock() + defer l.mu.Unlock() + l.ensureLocked(sessionID) +} + 
+func (l *sessionLRU) Touch(sessionID string) {
+	l.mu.Lock()
+	defer l.mu.Unlock()
+	elem := l.ensureLocked(sessionID)
+	l.order.MoveToBack(elem)
+}
+
+func (l *sessionLRU) SetSize(sessionID string, size int64) {
+	l.mu.Lock()
+	defer l.mu.Unlock()
+
+	if size < 0 {
+		size = 0
+	}
+	old := l.fileSize[sessionID]
+	l.fileSize[sessionID] = size
+	l.total += size - old
+}
+
+func (l *sessionLRU) AddSize(sessionID string, delta int64) {
+	l.mu.Lock()
+	defer l.mu.Unlock()
+
+	current := l.fileSize[sessionID]
+	next := current + delta
+	if next < 0 {
+		next = 0
+	}
+	l.fileSize[sessionID] = next
+	l.total += next - current
+}
+
+func (l *sessionLRU) Remove(sessionID string) {
+	l.mu.Lock()
+	defer l.mu.Unlock()
+
+	if elem, ok := l.nodes[sessionID]; ok {
+		l.order.Remove(elem)
+		delete(l.nodes, sessionID)
+	}
+	if size, ok := l.fileSize[sessionID]; ok {
+		l.total -= size
+		delete(l.fileSize, sessionID)
+	}
+}
+
+func (l *sessionLRU) Oldest() (string, bool) {
+	l.mu.Lock()
+	defer l.mu.Unlock()
+
+	front := l.order.Front()
+	if front == nil {
+		return "", false
+	}
+	key, ok := front.Value.(string)
+	if !ok || key == "" {
+		return "", false
+	}
+	return key, true
+}
+
+func (l *sessionLRU) TotalSize() int64 {
+	l.mu.Lock()
+	defer l.mu.Unlock()
+	return l.total
+}
+
+func (l *sessionLRU) ensureLocked(sessionID string) *list.Element {
+	if elem, ok := l.nodes[sessionID]; ok {
+		return elem
+	}
+	elem := l.order.PushBack(sessionID)
+	l.nodes[sessionID] = elem
+	return elem
+}
diff --git a/internal/cache/lru_test.go b/internal/cache/lru_test.go
new file mode 100644
index 0000000..c7be067
--- /dev/null
+++ b/internal/cache/lru_test.go
@@ -0,0 +1,34 @@
+package cache
+
+import "testing"
+
+func TestSessionLRUOrderAndSize(t *testing.T) {
+	lru := newSessionLRU()
+	lru.Ensure("a")
+	lru.Ensure("b")
+	lru.Ensure("c")
+
+	if oldest, ok := lru.Oldest(); !ok || oldest != "a" {
+		t.Fatalf("unexpected oldest before touch: %q, ok=%v", oldest, ok)
+	}
+
+	lru.Touch("a")
+	if oldest, ok := lru.Oldest(); !ok || oldest != "b" {
+		t.Fatalf("unexpected oldest after touch: %q, ok=%v", oldest, ok)
+	}
+
+	lru.SetSize("a", 10)
+	lru.AddSize("b", 20)
+	lru.SetSize("c", 30)
+	if total := lru.TotalSize(); total != 60 {
+		t.Fatalf("unexpected total size: got %d want 60", total)
+	}
+
+	lru.Remove("b")
+	if oldest, ok := lru.Oldest(); !ok || oldest != "c" {
+		t.Fatalf("unexpected oldest after remove: %q, ok=%v", oldest, ok)
+	}
+	if total := lru.TotalSize(); total != 40 {
+		t.Fatalf("unexpected total size after remove: got %d want 40", total)
+	}
+}
diff --git a/internal/cache/types.go b/internal/cache/types.go
new file mode 100644
index 0000000..5b5fe81
--- /dev/null
+++ b/internal/cache/types.go
@@ -0,0 +1,47 @@
+package cache
+
+import "time"
+
+// EntryType identifies the kind of scrollback record.
+type EntryType string
+
+const (
+	EntryTypeAgentMessage   EntryType = "agent_message"
+	EntryTypeToolCall       EntryType = "tool_call"
+	EntryTypeTerminalOutput EntryType = "terminal_output"
+	EntryTypeFileDiff       EntryType = "file_diff"
+	EntryTypeSystemEvent    EntryType = "system_event"
+)
+
+// Entry is a single scrollback item associated with a session.
+type Entry struct {
+	Timestamp time.Time      `json:"timestamp"`
+	Type      EntryType      `json:"type"`
+	Content   []byte         `json:"content"`
+	Metadata  map[string]any `json:"metadata,omitempty"`
+}
+
+// EvictionPolicy controls how entries are evicted when limits are reached.
+type EvictionPolicy string
+
+const (
+	EvictionPolicyLRU  EvictionPolicy = "LRU"
+	EvictionPolicyFIFO EvictionPolicy = "FIFO"
+)
+
+// CacheConfig configures scrollback cache limits and local storage location.
+type CacheConfig struct {
+	MaxEntriesPerSession int
+	MaxTotalSize         int64
+	EvictionPolicy       EvictionPolicy
+	StoragePath          string
+}
+
+// ScrollbackCache defines contract-first persistence APIs for session scrollback.
+type ScrollbackCache interface {
+	Append(sessionID string, entry Entry) error
+	Get(sessionID string, offset, limit int) ([]Entry, error)
+	Trim(sessionID string, maxEntries int) error
+	Clear(sessionID string) error
+	Close() error
+}
diff --git a/internal/daemon/API_FINDINGS.md b/internal/daemon/API_FINDINGS.md
new file mode 100644
index 0000000..75464e7
--- /dev/null
+++ b/internal/daemon/API_FINDINGS.md
@@ -0,0 +1,82 @@
+# OpenCode Daemon API Findings (Spike)
+
+Date: 2026-03-05
+Task: 5 — Validate OpenCode daemon API assumptions (spike)
+Daemon binary: `opencode 1.2.17`
+
+## Scope
+
+This is an integration **spike** only. It validates endpoint assumptions for Task 14 planning and records deltas between the assumed contracts and the observed daemon behavior.
+
+## Assumption Matrix
+
+| # | Assumption | Status | Evidence |
+|---|---|---|---|
+| 1 | `GET /doc` serves parseable OpenAPI spec with required routes | **CONFIRMED** | `TestSpikeDocEndpointOpenAPI` passed. Log: `openapi=3.1.1 path_count=85`; required routes present (`/event`, `/project/current`, `/session`, `/session/{sessionID}`, `/session/{sessionID}/message`). |
+| 2 | `POST /session` response shape matches future `SessionHandle` (`id`, `daemon_port`, `workspace_path`, `status`, `created_at`, `last_activity`, `attached_clients`) | **DENIED** | `TestSpikeCreateSessionShape` log: `map[attached_clients:false created_at:true daemon_port:false id:true last_activity:false status:false workspace_path:true]`. Response has `id`, `directory`, `time`; missing `daemon_port`, `status`, `last_activity`, `attached_clients`. |
+| 3 | `GET /session/{id}/messages` lists messages and supports SSE streaming | **DENIED** | `TestSpikeSessionMessagesEndpoints` log: `/messages` returned `text/html;charset=UTF-8` (web shell), not a message list/SSE. `GET /session/{id}/message` (singular) returned a JSON list (`0 entries` in a fresh session). |
+| 4 | `POST /session/{id}/message` itself streams a token SSE response | **DENIED** | `TestSpikePostMessageAndTokenEvents`: endpoint returned `Content-Type: application/json` (single JSON response), while token deltas were observed on `/event` (`message.part.delta`). |
+| 5 | `GET /event` emits SSE events for session state changes | **CONFIRMED** | `TestSpikeEventEndpointReceivesSessionUpdates` patched a session title and received a matching `session.updated` event on `/event`. |
+| 6 | Two clients can attach simultaneously and both receive the same session SSE updates | **CONFIRMED** | `TestSpikeMultiClientEventStreams` opened two `/event` streams; both observed `session.updated` for the same `sessionID`. |
+| 7 | `GET /session/{id}` includes a file list, working directory, and agent info | **DENIED** | `TestSpikeSessionDetailFields` keys: `[directory id projectID slug time title version]`; booleans: `working_directory=true files=false agent=false`. |
+
+## Additional Endpoint Deltas (important for Task 14)
+
+1. **Singular vs plural API paths differ from assumptions**
+   - API routes are singular (`/session`, `/session/{id}/message`)
+   - `/sessions` and `/session/{id}/messages` resolved to web UI HTML in the observed environment.
+
+2. **Token streaming source**
+   - Streaming token deltas (`message.part.delta`) are seen on the global SSE endpoint `/event`, not as an SSE response body from `POST /session/{id}/message`.
+
+3. **OpenAPI location**
+   - `/doc` returns JSON OpenAPI.
+   - `/openapi.json` returned web shell HTML in this environment.
+
+4. **Scanner contract mismatch risk**
+   - The existing scanner expects a `/project/current` shape with `name` and `path`.
+   - The observed payload uses fields such as `worktree` and `sandboxes`; this can push registration onto fallback paths unless the scanner's parsing is updated.
+
+## Verification Commands (exact)
+
+```bash
+go test -count=1 -v ./internal/daemon/...
+go build ./internal/daemon/...
+PATH="/usr/local/go/bin:/usr/bin:/bin" go test -count=1 -run TestSpikeDocEndpointOpenAPI -v ./internal/daemon/...
+```
+
+## Test Invocation Output
+
+```text
+=== RUN   TestSpikeDocEndpointOpenAPI
+    spike_test.go:298: /doc openapi=3.1.1 path_count=85
+--- PASS: TestSpikeDocEndpointOpenAPI (5.10s)
+=== RUN   TestSpikeCreateSessionShape
+    spike_test.go:325: create-session field coverage=map[attached_clients:false created_at:true daemon_port:false id:true last_activity:false status:false workspace_path:true]
+--- PASS: TestSpikeCreateSessionShape (2.65s)
+=== RUN   TestSpikeSessionMessagesEndpoints
+    spike_test.go:347: plural messages endpoint status=200 content-type="text/html;charset=UTF-8" body-prefix="..."
+    spike_test.go:370: singular message endpoint returned 0 entries
+--- PASS: TestSpikeSessionMessagesEndpoints (2.82s)
+=== RUN   TestSpikePostMessageAndTokenEvents
+    spike_test.go:433: observed message.part.delta events for session=ses_33fe55437ffezbvVy6J33bsFWv (total_event_data_lines=65)
+--- PASS: TestSpikePostMessageAndTokenEvents (15.30s)
+=== RUN   TestSpikeEventEndpointReceivesSessionUpdates
+    spike_test.go:469: event stream delivered session.updated for session=ses_33fe5185fffes3dFUYysFCoHW0
+--- PASS: TestSpikeEventEndpointReceivesSessionUpdates (3.17s)
+=== RUN   TestSpikeMultiClientEventStreams
+    spike_test.go:521: both event clients observed session.updated for session=ses_33fe50bf0ffeGySWMVQ3OSAKoz
+--- PASS: TestSpikeMultiClientEventStreams (3.59s)
+=== RUN   TestSpikeSessionDetailFields
+    spike_test.go:563: session detail keys=[directory id projectID slug time title version]
+    spike_test.go:564: session detail capability working_directory=true files=false agent=false
+--- PASS: TestSpikeSessionDetailFields (2.62s)
+PASS
+ok  	opencoderouter/internal/daemon	35.444s
+
+=== RUN   TestSpikeDocEndpointOpenAPI
+    spike_test.go:253: spike skipped: opencode binary not available in PATH: exec: "opencode": executable file not found in $PATH
+--- SKIP: TestSpikeDocEndpointOpenAPI (0.00s)
+PASS
+ok  	opencoderouter/internal/daemon	0.306s
+```
diff --git a/internal/daemon/client.go b/internal/daemon/client.go
new file mode 100644
index 0000000..2006e56
--- /dev/null
+++ b/internal/daemon/client.go
@@ -0,0 +1,1645 @@
+package daemon
+
+import (
+	"bufio"
+	"bytes"
+	"context"
+	"encoding/base64"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"io"
+	"math"
+	"net"
+	"net/http"
+	"net/url"
+	"strconv"
+	"strings"
+	"time"
+)
+
+const (
+	defaultClientTimeout      = 15 * time.Second
+	defaultRetryBackoff       = 150 * time.Millisecond
+	defaultStreamBuffer       = 64
+	defaultStreamIdleTimeout  = 2 * time.Second
+	defaultScannerInitialSize = 64 * 1024
+	defaultScannerMaxSize     = 1024 * 1024
+)
+
+type Client struct {
+	baseURL    string
+	config     ClientConfig
+	httpClient *http.Client
+}
+
+type DaemonClient = Client
+
+type endpointCandidate struct {
+	Path  string
+	Query url.Values
+}
+
+type httpResult struct {
+	StatusCode int
+	Header     http.Header
+	Body       []byte
+}
+
+type sseFrame struct {
+	ID    string
+	Event string
+	Data  string
+}
+
+type postResult struct {
+	payload map[string]interface{}
+	err     error
+}
+
+func NewClient(baseURL string, cfg ClientConfig) (*Client, error) {
+	baseURL = strings.TrimSpace(baseURL)
+	if baseURL == "" {
+		return nil, errors.New("base URL is required")
+	}
+	baseURL = strings.TrimRight(baseURL, "/")
+
+	if cfg.Timeout <= 0 {
+		cfg.Timeout = defaultClientTimeout
+	}
+	if cfg.MaxRetries < 0 {
+		cfg.MaxRetries = 0
+	}
+	if cfg.RetryBackoff <= 0 {
+		cfg.RetryBackoff = defaultRetryBackoff
+	}
+	if cfg.StreamBuffer <= 0 {
+		cfg.StreamBuffer = defaultStreamBuffer
+	}
+	if cfg.StreamIdleTimeout <= 0 {
+		cfg.StreamIdleTimeout = defaultStreamIdleTimeout
+	}
+
+	httpClient := cfg.HTTPClient
+	if httpClient == nil {
+		httpClient = &http.Client{}
+	} else {
+		cloned := *httpClient
+		httpClient = &cloned
+	}
+	httpClient.Timeout = cfg.Timeout
+
+	return &Client{
+		baseURL:    baseURL,
+		config:     cfg,
+		httpClient: httpClient,
+	}, nil
+} + +func NewDaemonClient(baseURL string, cfg ClientConfig) (*DaemonClient, error) { + return NewClient(baseURL, cfg) +} + +func (c *Client) ListSessions(ctx context.Context) ([]DaemonSession, error) { + candidates := []endpointCandidate{ + {Path: "/session"}, + {Path: "/sessions"}, + } + + payload, endpoint, err := c.getJSONFromCandidates(ctx, candidates) + if err != nil { + return nil, fmt.Errorf("list sessions failed: %w", err) + } + + sessions, ok := parseSessionListPayload(payload) + if !ok { + return nil, fmt.Errorf("list sessions failed: unsupported payload from %s", endpoint) + } + + return sessions, nil +} + +func (c *Client) GetSession(ctx context.Context, sessionID string) (DaemonSession, error) { + sessionID = strings.TrimSpace(sessionID) + if sessionID == "" { + return DaemonSession{}, errors.New("session ID is required") + } + + id := url.PathEscape(sessionID) + candidates := []endpointCandidate{ + {Path: "/session/" + id}, + {Path: "/sessions/" + id}, + } + + payload, endpoint, err := c.getJSONFromCandidates(ctx, candidates) + if err != nil { + return DaemonSession{}, fmt.Errorf("get session failed: %w", err) + } + + obj, ok := payload.(map[string]interface{}) + if !ok { + return DaemonSession{}, fmt.Errorf("get session failed: non-object payload from %s", endpoint) + } + + session := parseSessionEntry(obj) + if session.ID == "" { + session.ID = sessionID + } + if session.ID == "" { + return DaemonSession{}, fmt.Errorf("get session failed: missing session id in payload from %s", endpoint) + } + + return session, nil +} + +func (c *Client) GetMessages(ctx context.Context, sessionID string) ([]map[string]interface{}, error) { + sessionID = strings.TrimSpace(sessionID) + if sessionID == "" { + return nil, errors.New("session ID is required") + } + + id := url.PathEscape(sessionID) + candidates := []endpointCandidate{ + {Path: "/session/" + id + "/message"}, + {Path: "/sessions/" + id + "/messages"}, + } + + payload, endpoint, err := 
c.getJSONFromCandidates(ctx, candidates) + if err != nil { + return nil, fmt.Errorf("get messages failed: %w", err) + } + + arr, ok := payload.([]interface{}) + if !ok { + return nil, fmt.Errorf("get messages failed: non-array payload from %s", endpoint) + } + + var msgs []map[string]interface{} + for _, item := range arr { + if m, ok := item.(map[string]interface{}); ok { + msgs = append(msgs, m) + } + } + return msgs, nil +} + +func (c *Client) SendMessage(ctx context.Context, sessionID, prompt string) (<-chan MessageChunk, error) { + sessionID = strings.TrimSpace(sessionID) + prompt = strings.TrimSpace(prompt) + if sessionID == "" { + return nil, errors.New("session ID is required") + } + if prompt == "" { + return nil, errors.New("prompt is required") + } + + streamCtx, cancelStream := context.WithCancel(ctx) + events, err := c.subscribeEventsInternal(streamCtx) + if err != nil { + cancelStream() + fmt.Printf("subscribeEventsInternal failed: %v\n", err) + return c.sendMessageWithoutStream(ctx, sessionID, prompt), nil + } + + out := make(chan MessageChunk, c.config.StreamBuffer) + + go func() { + defer close(out) + defer cancelStream() + + postCh := make(chan postResult, 1) + go func() { + payload, postErr := c.postMessage(ctx, sessionID, prompt) + postCh <- postResult{payload: payload, err: postErr} + close(postCh) + }() + + var ( + postDone bool + sawDelta bool + idleCh <-chan time.Time + idleT *time.Timer + pending string + ) + + resetIdle := func() { + if !postDone || c.config.StreamIdleTimeout <= 0 { + return + } + if idleT == nil { + idleT = time.NewTimer(c.config.StreamIdleTimeout) + idleCh = idleT.C + return + } + if !idleT.Stop() { + select { + case <-idleT.C: + default: + } + } + idleT.Reset(c.config.StreamIdleTimeout) + } + + stopIdle := func() { + if idleT == nil { + return + } + if !idleT.Stop() { + select { + case <-idleT.C: + default: + } + } + idleCh = nil + } + + emit := func(chunk MessageChunk) bool { + select { + case out <- chunk: + return 
true + case <-ctx.Done(): + return false + } + } + + for { + select { + case <-ctx.Done(): + stopIdle() + return + case res, ok := <-postCh: + if !ok { + postCh = nil + continue + } + postCh = nil + postDone = true + if res.err != nil { + emit(MessageChunk{SessionID: sessionID, Type: "error", Error: res.err.Error(), Done: true}) + stopIdle() + return + } + if !sawDelta { + pending = extractMessageText(res.payload) + if pending == "" { + if encoded, marshalErr := json.Marshal(res.payload); marshalErr == nil { + pending = strings.TrimSpace(string(encoded)) + } + } + } + resetIdle() + case ev, ok := <-events: + if !ok { + if sawDelta { + emit(MessageChunk{SessionID: sessionID, Type: "stream.closed", Done: true}) + } else if postDone && pending != "" { + emit(MessageChunk{SessionID: sessionID, Type: "message.final", Delta: pending, Done: true}) + } else if postDone { + emit(MessageChunk{SessionID: sessionID, Type: "stream.closed", Done: true}) + } + stopIdle() + return + } + + if ev.Error != "" { + emit(MessageChunk{SessionID: sessionID, Type: "stream.error", Error: ev.Error, Done: true}) + stopIdle() + return + } + + if !eventMatchesSession(ev, sessionID) { + continue + } + + if isDeltaEvent(ev) { + sawDelta = true + pending = "" + if !emit(MessageChunk{ + SessionID: ev.SessionID, + MessageID: ev.MessageID, + Type: ev.Type, + Delta: ev.Delta, + Timestamp: ev.Timestamp, + RawData: ev.RawData, + Payload: ev.Payload, + }) { + stopIdle() + return + } + resetIdle() + } + + if isTerminalEvent(ev.Type) && (postDone || sawDelta) { + if !sawDelta && pending != "" { + emit(MessageChunk{ + SessionID: sessionID, + Type: "message.final", + Delta: pending, + Done: true, + Timestamp: ev.Timestamp, + RawData: ev.RawData, + Payload: ev.Payload, + }) + stopIdle() + return + } + emit(MessageChunk{ + SessionID: ev.SessionID, + MessageID: ev.MessageID, + Type: ev.Type, + Done: true, + Timestamp: ev.Timestamp, + RawData: ev.RawData, + Payload: ev.Payload, + }) + stopIdle() + return + } + 
case <-idleCh:
+				if pending != "" && !sawDelta {
+					emit(MessageChunk{SessionID: sessionID, Type: "message.final", Delta: pending, Done: true})
+				} else {
+					emit(MessageChunk{SessionID: sessionID, Type: "stream.idle", Done: true})
+				}
+				stopIdle()
+				return
+			}
+		}
+	}()
+
+	return out, nil
+}
+
+func (c *Client) ExecuteCommand(ctx context.Context, sessionID, command string) (CommandResult, error) {
+	sessionID = strings.TrimSpace(sessionID)
+	command = strings.TrimSpace(command)
+	if sessionID == "" {
+		return CommandResult{}, errors.New("session ID is required")
+	}
+	if command == "" {
+		return CommandResult{}, errors.New("command is required")
+	}
+
+	requestBody, err := json.Marshal(ExecuteCommandRequest{Command: command})
+	if err != nil {
+		return CommandResult{}, err
+	}
+
+	id := url.PathEscape(sessionID)
+	candidates := []endpointCandidate{
+		{Path: "/session/" + id + "/command"},
+		{Path: "/session/" + id + "/commands"},
+		{Path: "/command", Query: url.Values{"sessionID": []string{sessionID}}},
+		{Path: "/commands", Query: url.Values{"sessionID": []string{sessionID}}},
+		{Path: "/command", Query: url.Values{"sessionId": []string{sessionID}}},
+		{Path: "/commands", Query: url.Values{"sessionId": []string{sessionID}}},
+	}
+
+	payload, endpoint, err := c.postJSONFromCandidates(ctx, candidates, requestBody)
+	if err != nil {
+		return CommandResult{}, fmt.Errorf("execute command failed: %w", err)
+	}
+
+	obj, ok := payload.(map[string]interface{})
+	if !ok {
+		return CommandResult{}, fmt.Errorf("execute command failed: non-object payload from %s", endpoint)
+	}
+
+	result := parseCommandResultPayload(obj)
+	return result, nil
+}
+
+func (c *Client) ListFiles(ctx context.Context, sessionID, globPattern string) ([]FileInfo, error) {
+	sessionID = strings.TrimSpace(sessionID)
+	globPattern = strings.TrimSpace(globPattern)
+	if sessionID == "" {
+		return nil, errors.New("session ID is required")
+	}
+
+	id := url.PathEscape(sessionID)
+	query := url.Values{}
+	if globPattern != "" {
+		query.Set("glob", globPattern)
+		query.Set("pattern", globPattern)
+	}
+
+	candidates := []endpointCandidate{
+		{Path: "/session/" + id + "/file", Query: cloneValues(query)},
+		{Path: "/session/" + id + "/files", Query: cloneValues(query)},
+		{Path: "/file", Query: mergeValues(cloneValues(query), url.Values{"sessionID": []string{sessionID}})},
+		{Path: "/files", Query: mergeValues(cloneValues(query), url.Values{"sessionID": []string{sessionID}})},
+		{Path: "/file", Query: mergeValues(cloneValues(query), url.Values{"sessionId": []string{sessionID}})},
+		{Path: "/files", Query: mergeValues(cloneValues(query), url.Values{"sessionId": []string{sessionID}})},
+	}
+
+	payload, endpoint, err := c.getJSONFromCandidates(ctx, candidates)
+	if err != nil {
+		return nil, fmt.Errorf("list files failed: %w", err)
+	}
+
+	files, ok := parseFileListPayload(payload)
+	if !ok {
+		return nil, fmt.Errorf("list files failed: unsupported payload from %s", endpoint)
+	}
+	return files, nil
+}
+
+func (c *Client) ReadFile(ctx context.Context, sessionID, filePath string) (FileContent, error) {
+	sessionID = strings.TrimSpace(sessionID)
+	filePath = strings.TrimSpace(filePath)
+	if sessionID == "" {
+		return FileContent{}, errors.New("session ID is required")
+	}
+	if filePath == "" {
+		return FileContent{}, errors.New("file path is required")
+	}
+
+	id := url.PathEscape(sessionID)
+	escapedPath := escapePath(filePath)
+
+	candidates := []endpointCandidate{
+		{Path: "/session/" + id + "/file/" + escapedPath},
+		{Path: "/session/" + id + "/files/" + escapedPath},
+		{Path: "/file/" + escapedPath, Query: url.Values{"sessionID": []string{sessionID}}},
+		{Path: "/files/" + escapedPath, Query: url.Values{"sessionID": []string{sessionID}}},
+		{Path: "/file", Query: url.Values{"sessionID": []string{sessionID}, "path": []string{filePath}}},
+		{Path: "/files", Query: url.Values{"sessionID": []string{sessionID}, "path": []string{filePath}}},
+		{Path: "/file", Query: url.Values{"sessionId": []string{sessionID}, "path": []string{filePath}}},
+		{Path: "/files", Query: url.Values{"sessionId": []string{sessionID}, "path": []string{filePath}}},
+	}
+
+	var lastErr error
+	for _, candidate := range candidates {
+		res, err := c.doRequest(ctx, http.MethodGet, candidate.Path, candidate.Query, nil, map[string]string{"Accept": "application/json"}, true)
+		if err != nil {
+			lastErr = err
+			continue
+		}
+		if isEndpointMismatchStatus(res.StatusCode) {
+			continue
+		}
+		if res.StatusCode < 200 || res.StatusCode >= 300 {
+			lastErr = fmt.Errorf("endpoint %s returned status %d", candidate.Path, res.StatusCode)
+			continue
+		}
+
+		if responseLooksJSON(res) {
+			payload, decodeErr := decodeJSONPayload(res.Body)
+			if decodeErr != nil {
+				lastErr = fmt.Errorf("endpoint %s returned invalid JSON: %w", candidate.Path, decodeErr)
+				continue
+			}
+			switch typed := payload.(type) {
+			case map[string]interface{}:
+				content := parseFileContentPayload(typed, filePath)
+				if len(content.RawBytes) == 0 {
+					content.RawBytes = []byte(content.Content)
+				}
+				return content, nil
+			case string:
+				return FileContent{Path: filePath, Content: typed, RawBytes: []byte(typed)}, nil
+			default:
+				lastErr = fmt.Errorf("endpoint %s returned unsupported payload type", candidate.Path)
+				continue
+			}
+		}
+
+		body := append([]byte(nil), res.Body...)
+		return FileContent{Path: filePath, Content: string(body), RawBytes: body}, nil
+	}
+
+	if lastErr == nil {
+		lastErr = errors.New("no compatible file endpoint")
+	}
+	return FileContent{}, fmt.Errorf("read file failed: %w", lastErr)
+}
+
+func (c *Client) SubscribeEvents(ctx context.Context) (<-chan DaemonEvent, error) {
+	return c.subscribeEventsInternal(ctx)
+}
+
+func (c *Client) Health(ctx context.Context) (HealthResponse, error) {
+	candidates := []endpointCandidate{{Path: "/global/health"}, {Path: "/health"}}
+	payload, endpoint, err := c.getJSONFromCandidates(ctx, candidates)
+	if err != nil {
+		return HealthResponse{}, fmt.Errorf("health check failed: %w", err)
+	}
+
+	obj, ok := payload.(map[string]interface{})
+	if !ok {
+		return HealthResponse{}, fmt.Errorf("health check failed: non-object payload from %s", endpoint)
+	}
+
+	return parseHealthPayload(obj), nil
+}
+
+func (c *Client) Config(ctx context.Context) (DaemonConfig, error) {
+	candidates := []endpointCandidate{{Path: "/config"}, {Path: "/project/config"}}
+	payload, endpoint, err := c.getJSONFromCandidates(ctx, candidates)
+	if err != nil {
+		return DaemonConfig{}, fmt.Errorf("config fetch failed: %w", err)
+	}
+
+	obj, ok := payload.(map[string]interface{})
+	if !ok {
+		return DaemonConfig{}, fmt.Errorf("config fetch failed: non-object payload from %s", endpoint)
+	}
+
+	return DaemonConfig{Raw: cloneMap(obj)}, nil
+}
+
+func (c *Client) sendMessageWithoutStream(ctx context.Context, sessionID, prompt string) <-chan MessageChunk {
+	out := make(chan MessageChunk, 1)
+	go func() {
+		defer close(out)
+		payload, err := c.postMessage(ctx, sessionID, prompt)
+		if err != nil {
+			out <- MessageChunk{SessionID: sessionID, Type: "error", Error: err.Error(), Done: true}
+			return
+		}
+		text := extractMessageText(payload)
+		if text == "" {
+			encoded, _ := json.Marshal(payload)
+			text = string(encoded)
+		}
+		out <- MessageChunk{SessionID: sessionID, Type: "message.final", Delta: text, Done: true}
+	}()
+	return out
+}
+
+func (c *Client) postMessage(ctx context.Context, sessionID, prompt string) (map[string]interface{}, error) {
+	requestBody, err := json.Marshal(MessageRequest{Parts: []MessagePart{{Type: "text", Text: prompt}}})
+	if err != nil {
+		return nil, err
+	}
+
+	id := url.PathEscape(sessionID)
+	candidates := []endpointCandidate{
+		{Path: "/session/" + id + "/message"},
+		{Path: "/sessions/" + id + "/messages"},
+		{Path: "/session/" + id + "/messages"},
+	}
+
+	payload, _, err := c.postJSONFromCandidates(ctx, candidates, requestBody)
+	if err != nil {
+		return nil, err
+	}
+
+	switch typed := payload.(type) {
+	case nil:
+		return map[string]interface{}{}, nil
+	case map[string]interface{}:
+		return typed, nil
+	default:
+		return map[string]interface{}{"data": typed}, nil
+	}
+}
+
+func (c *Client) subscribeEventsInternal(ctx context.Context) (<-chan DaemonEvent, error) {
+	resp, err := c.openEventStream(ctx)
+	if err != nil {
+		return nil, err
+	}
+
+	out := make(chan DaemonEvent, c.config.StreamBuffer)
+
+	go func() {
+		defer close(out)
+		defer resp.Body.Close()
+
+		emit := func(ev DaemonEvent) bool {
+			select {
+			case out <- ev:
+				return true
+			case <-ctx.Done():
+				return false
+			}
+		}
+
+		err := readSSEFrames(ctx, resp.Body, func(frame sseFrame) bool {
+			ev := parseDaemonEvent(frame)
+			return emit(ev)
+		})
+		if err != nil && ctx.Err() == nil {
+			emit(DaemonEvent{Type: "stream.error", Error: err.Error()})
+		}
+	}()
+
+	return out, nil
+}
+
+func (c *Client) openEventStream(ctx context.Context) (*http.Response, error) {
+	endpoints := []string{"/event", "/events"}
+	var lastErr error
+
+	for _, endpoint := range endpoints {
+		resp, err := c.openEventEndpoint(ctx, endpoint)
+		if err != nil {
+			lastErr = err
+			continue
+		}
+
+		if isEndpointMismatchStatus(resp.StatusCode) {
+			resp.Body.Close()
+			continue
+		}
+		if resp.StatusCode < 200 || resp.StatusCode >= 300 {
+			body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
+			resp.Body.Close()
+			lastErr = fmt.Errorf("endpoint %s returned status %d body=%s", endpoint, resp.StatusCode, strings.TrimSpace(string(body)))
+			continue
+		}
+
+		contentType := strings.ToLower(resp.Header.Get("Content-Type"))
+		if contentType != "" && !strings.Contains(contentType, "text/event-stream") {
+			resp.Body.Close()
+			lastErr = fmt.Errorf("endpoint %s returned non-SSE content-type %q", endpoint, contentType)
+			continue
+		}
+
+		return resp, nil
+	}
+
+	if lastErr == nil {
+		lastErr = errors.New("no compatible event endpoint")
+	}
+	return nil, lastErr
+}
+
+func (c *Client) openEventEndpoint(ctx context.Context, endpoint string) (*http.Response, error) {
+	attempts := 1
+	if c.config.MaxRetries > 0 {
+		attempts += c.config.MaxRetries
+	}
+
+	var lastErr error
+	for attempt := 0; attempt < attempts; attempt++ {
+		req, err := http.NewRequestWithContext(ctx, http.MethodGet, c.buildURL(endpoint, nil), nil)
+		if err != nil {
+			return nil, err
+		}
+		req.Header.Set("Accept", "text/event-stream")
+		if token := strings.TrimSpace(c.config.AuthToken); token != "" {
+			req.Header.Set("Authorization", "Bearer "+token)
+		}
+
+		resp, err := c.httpClient.Do(req)
+		if err != nil {
+			lastErr = err
+			if attempt >= attempts-1 || ctx.Err() != nil || !isRetryableError(err) {
+				return nil, err
+			}
+			if !sleepBackoff(ctx, c.config.RetryBackoff, attempt+1) {
+				return nil, ctx.Err()
+			}
+			continue
+		}
+
+		if attempt < attempts-1 && isRetryableStatus(resp.StatusCode) {
+			resp.Body.Close()
+			if !sleepBackoff(ctx, c.config.RetryBackoff, attempt+1) {
+				return nil, ctx.Err()
+			}
+			continue
+		}
+
+		return resp, nil
+	}
+
+	if lastErr == nil {
+		lastErr = errors.New("event stream request failed")
+	}
+	return nil, lastErr
+}
+
+func (c *Client) getJSONFromCandidates(ctx context.Context, candidates []endpointCandidate) (interface{}, string, error) {
+	var errs []string
+
+	for _, candidate := range candidates {
+		res, err := c.doRequest(ctx, http.MethodGet, candidate.Path, candidate.Query, nil, map[string]string{"Accept": "application/json"}, true)
+		if err != nil {
+			errs = append(errs, fmt.Sprintf("%s: %v", candidate.Path, err))
+			continue
+		}
+
+		if isEndpointMismatchStatus(res.StatusCode) {
+			continue
+		}
+		if res.StatusCode < 200 || res.StatusCode >= 300 {
+			errs = append(errs, fmt.Sprintf("%s: status %d", candidate.Path, res.StatusCode))
+			continue
+		}
+		if !responseLooksJSON(res) {
+			errs = append(errs, fmt.Sprintf("%s: non-JSON content-type %q", candidate.Path, res.Header.Get("Content-Type")))
+			continue
+		}
+
+		payload, decodeErr := decodeJSONPayload(res.Body)
+		if decodeErr != nil {
+			errs = append(errs, fmt.Sprintf("%s: invalid JSON (%v)", candidate.Path, decodeErr))
+			continue
+		}
+
+		return payload, candidate.Path, nil
+	}
+
+	if len(errs) == 0 {
+		return nil, "", errors.New("no compatible endpoint")
+	}
+	return nil, "", errors.New(strings.Join(errs, "; "))
+}
+
+func (c *Client) postJSONFromCandidates(ctx context.Context, candidates []endpointCandidate, body []byte) (interface{}, string, error) {
+	var errs []string
+
+	for _, candidate := range candidates {
+		res, err := c.doRequest(
+			ctx,
+			http.MethodPost,
+			candidate.Path,
+			candidate.Query,
+			body,
+			map[string]string{"Accept": "application/json", "Content-Type": "application/json"},
+			false,
+		)
+		if err != nil {
+			errs = append(errs, fmt.Sprintf("%s: %v", candidate.Path, err))
+			continue
+		}
+
+		if isEndpointMismatchStatus(res.StatusCode) {
+			continue
+		}
+		if res.StatusCode < 200 || res.StatusCode >= 300 {
+			errs = append(errs, fmt.Sprintf("%s: status %d", candidate.Path, res.StatusCode))
+			continue
+		}
+
+		if len(bytes.TrimSpace(res.Body)) == 0 {
+			return nil, candidate.Path, nil
+		}
+
+		if !responseLooksJSON(res) {
+			errs = append(errs, fmt.Sprintf("%s: non-JSON content-type %q", candidate.Path, res.Header.Get("Content-Type")))
+			continue
+		}
+
+		payload, decodeErr := decodeJSONPayload(res.Body)
+		if decodeErr != nil {
+			errs = append(errs, fmt.Sprintf("%s: invalid JSON (%v)", candidate.Path, decodeErr))
+			continue
+		}
+
+		return payload, candidate.Path, nil
+	}
+
+	if len(errs) == 0 {
+		return nil, "", errors.New("no compatible endpoint")
+	}
+	return nil, "", errors.New(strings.Join(errs, "; "))
+}
+
+func (c *Client) doRequest(
+	ctx context.Context,
+	method string,
+	endpoint string,
+	query url.Values,
+	body []byte,
+	headers map[string]string,
+	retry bool,
+) (*httpResult, error) {
+	attempts := 1
+	if retry && c.config.MaxRetries > 0 {
+		attempts += c.config.MaxRetries
+	}
+
+	var lastErr error
+	for attempt := 0; attempt < attempts; attempt++ {
+		req, err := http.NewRequestWithContext(ctx, method, c.buildURL(endpoint, query), bytes.NewReader(body))
+		if err != nil {
+			return nil, err
+		}
+
+		if token := strings.TrimSpace(c.config.AuthToken); token != "" {
+			req.Header.Set("Authorization", "Bearer "+token)
+		}
+		for key, value := range headers {
+			if strings.TrimSpace(value) == "" {
+				continue
+			}
+			req.Header.Set(key, value)
+		}
+
+		resp, err := c.httpClient.Do(req)
+		if err != nil {
+			lastErr = err
+			if !retry || attempt >= attempts-1 || ctx.Err() != nil || !isRetryableError(err) {
+				return nil, err
+			}
+			if !sleepBackoff(ctx, c.config.RetryBackoff, attempt+1) {
+				return nil, ctx.Err()
+			}
+			continue
+		}
+
+		responseBody, readErr := io.ReadAll(resp.Body)
+		resp.Body.Close()
+		if readErr != nil {
+			lastErr = readErr
+			if !retry || attempt >= attempts-1 || ctx.Err() != nil {
+				return nil, readErr
+			}
+			if !sleepBackoff(ctx, c.config.RetryBackoff, attempt+1) {
+				return nil, ctx.Err()
+			}
+			continue
+		}
+
+		result := &httpResult{StatusCode: resp.StatusCode, Header: resp.Header.Clone(), Body: responseBody}
+		if retry && attempt < attempts-1 && isRetryableStatus(result.StatusCode) {
+			lastErr = fmt.Errorf("retryable status %d", result.StatusCode)
+			if !sleepBackoff(ctx, c.config.RetryBackoff, attempt+1) {
+				return nil, ctx.Err()
+			}
+			continue
+		}
+
+		return result, nil
+	}
+
+	if lastErr == nil {
+		lastErr = errors.New("request failed")
+	}
+	return nil, lastErr
+}
+
+func (c *Client) buildURL(endpoint string, query url.Values) string {
+	u, err := url.Parse(c.baseURL)
+	if err != nil {
+		endpoint = "/" + strings.TrimPrefix(endpoint, "/")
+		if len(query) == 0 {
+			return strings.TrimRight(c.baseURL, "/") + endpoint
+		}
+		return strings.TrimRight(c.baseURL, "/") + endpoint + "?" + query.Encode()
+	}
+
+	basePath := strings.TrimSuffix(u.Path, "/")
+	endpointPath := "/" + strings.TrimPrefix(endpoint, "/")
+	u.Path = basePath + endpointPath
+	if len(query) > 0 {
+		u.RawQuery = query.Encode()
+	} else {
+		u.RawQuery = ""
+	}
+
+	return u.String()
+}
+
+func parseSessionListPayload(payload interface{}) ([]DaemonSession, bool) {
+	var entries []interface{}
+
+	switch typed := payload.(type) {
+	case []interface{}:
+		entries = typed
+	case map[string]interface{}:
+		if list, ok := typed["sessions"].([]interface{}); ok {
+			entries = list
+		} else if nested, ok := typed["data"].(map[string]interface{}); ok {
+			if list, ok := nested["sessions"].([]interface{}); ok {
+				entries = list
+			}
+		} else if firstString(typed, "id", "session_id", "sessionId", "sessionID") != "" {
+			entries = []interface{}{typed}
+		} else {
+			return nil, false
+		}
+	default:
+		return nil, false
+	}
+
+	result := make([]DaemonSession, 0, len(entries))
+	for _, entry := range entries {
+		obj, ok := entry.(map[string]interface{})
+		if !ok {
+			continue
+		}
+		session := parseSessionEntry(obj)
+		if session.ID == "" {
+			continue
+		}
+		result = append(result, session)
+	}
+
+	return result, true
+}
+
+func parseSessionEntry(payload map[string]interface{}) DaemonSession {
+	return DaemonSession{
+		ID: firstString(payload, "id", "session_id", "sessionId", "sessionID"),
+		Title: firstString(payload, "title", "name"),
+		Directory: firstString(payload, "directory", "worktree", "cwd", "workspace_path", "workspacePath"),
+		Status: firstString(payload, "status", "state"),
+		CreatedAt: firstTime(payload, "created_at", "createdAt", "created", "time"),
+		LastActivity: firstTime(payload, "last_activity", "lastActivity", "updated", "updated_at", "updatedAt", "time"),
+		DaemonPort: firstInt(payload, "daemon_port", "daemonPort", "port"),
+		AttachedClients: firstInt(payload, "attached_clients", "attachedClients"),
+		ProjectID: firstString(payload, "projectID", "projectId", "project_id"),
+		Slug: firstString(payload, "slug"),
+		Version: firstString(payload, "version"),
+		Raw: cloneMap(payload),
+	}
+}
+
+func parseFileListPayload(payload interface{}) ([]FileInfo, bool) {
+	var entries []interface{}
+
+	switch typed := payload.(type) {
+	case []interface{}:
+		entries = typed
+	case map[string]interface{}:
+		switch {
+		case typed["files"] != nil:
+			list, ok := typed["files"].([]interface{})
+			if !ok {
+				return nil, false
+			}
+			entries = list
+		case typed["data"] != nil:
+			nested, ok := typed["data"].(map[string]interface{})
+			if !ok {
+				return nil, false
+			}
+			if list, ok := nested["files"].([]interface{}); ok {
+				entries = list
+			} else {
+				return nil, false
+			}
+		case firstString(typed, "path", "file", "name") != "":
+			entries = []interface{}{typed}
+		default:
+			return nil, false
+		}
+	default:
+		return nil, false
+	}
+
+	files := make([]FileInfo, 0, len(entries))
+	for _, entry := range entries {
+		obj, ok := entry.(map[string]interface{})
+		if !ok {
+			continue
+		}
+		info := parseFileInfoEntry(obj)
+		if info.Path == "" && info.Name == "" {
+			continue
+		}
+		files = append(files, info)
+	}
+
+	return files, true
+}
+
+func parseFileInfoEntry(payload map[string]interface{}) FileInfo {
+	pathValue := firstString(payload, "path", "file", "filepath", "filePath")
+	nameValue := firstString(payload, "name")
+	if nameValue == "" && pathValue != "" {
+		segments := strings.Split(strings.Trim(pathValue, "/"), "/")
+		if len(segments) > 0 {
+			nameValue = segments[len(segments)-1]
+		}
+	}
+
+	return FileInfo{
+		Path: pathValue,
+		Name: nameValue,
+		Size: firstInt64(payload, "size", "bytes", "length"),
+		IsDir: firstBool(payload, "is_dir", "isDir", "dir", "directory"),
+		Mode: firstString(payload, "mode", "permissions"),
+		ModTime: firstTime(payload, "mod_time", "modTime", "modified", "updated_at", "updatedAt"),
+		Raw: cloneMap(payload),
+	}
+}
+
+func parseFileContentPayload(payload map[string]interface{}, requestedPath string) FileContent {
+	content := firstString(payload, "content", "text", "data")
+	encoding := firstString(payload, "encoding")
+	raw := []byte(content)
+
+	if strings.EqualFold(encoding, "base64") && content != "" {
+		if decoded, err := base64.StdEncoding.DecodeString(content); err == nil {
+			raw = decoded
+			content = string(decoded)
+		}
+	}
+
+	pathValue := firstString(payload, "path", "file", "filePath", "filepath")
+	if pathValue == "" {
+		pathValue = requestedPath
+	}
+
+	return FileContent{
+		Path: pathValue,
+		Content: content,
+		Encoding: encoding,
+		RawBytes: raw,
+	}
+}
+
+func parseCommandResultPayload(payload map[string]interface{}) CommandResult {
+	result := CommandResult{
+		ExitCode: firstInt(payload, "exit_code", "exitCode", "code", "status"),
+		Stdout: firstString(payload, "stdout", "output", "result"),
+		Stderr: firstString(payload, "stderr", "error"),
+		Raw: cloneMap(payload),
+	}
+
+	if success, ok := firstBoolWithPresence(payload, "success", "ok"); ok {
+		result.Success = success
+	} else {
+		result.Success = result.ExitCode == 0 && result.Stderr == ""
+	}
+
+	return result
+}
+
+func parseHealthPayload(payload map[string]interface{}) HealthResponse {
+	healthy := false
+	if value, ok := firstBoolWithPresence(payload, "healthy", "ok", "status"); ok {
+		healthy = value
+	}
+
+	return HealthResponse{
+		Healthy: healthy,
+		Version: firstString(payload, "version"),
+		Raw: cloneMap(payload),
+	}
+}
+
+func parseDaemonEvent(frame sseFrame) DaemonEvent {
+	ev := DaemonEvent{
+		ID: strings.TrimSpace(frame.ID),
+		Type: strings.TrimSpace(frame.Event),
+		RawData: frame.Data,
+	}
+
+	trimmed := bytes.TrimSpace([]byte(frame.Data))
+	if len(trimmed) == 0 {
+		if ev.Type == "" {
+			ev.Type = "message"
+		}
+		return ev
+	}
+
+	ev.Data = append([]byte(nil), trimmed...)
+	payload, err := decodeJSONPayload(trimmed)
+	if err != nil {
+		if ev.Type == "" {
+			ev.Type = "message"
+		}
+		return ev
+	}
+
+	obj, ok := payload.(map[string]interface{})
+	if !ok {
+		if ev.Type == "" {
+			ev.Type = "message"
+		}
+		return ev
+	}
+
+	ev.Payload = cloneMap(obj)
+	if ev.Type == "" {
+		ev.Type = firstString(obj, "type", "event", "eventType", "name")
+	}
+	ev.SessionID = firstString(obj, "sessionID", "sessionId", "session_id")
+	if ev.SessionID == "" {
+		if nested := firstNestedMap(obj, "session", "data"); nested != nil {
+			ev.SessionID = firstString(nested, "id", "sessionID", "sessionId", "session_id")
+		}
+	}
+	ev.MessageID = firstString(obj, "messageID", "messageId", "message_id")
+	if ev.MessageID == "" {
+		if nested := firstNestedMap(obj, "message", "data"); nested != nil {
+			ev.MessageID = firstString(nested, "id", "messageID", "messageId", "message_id")
+		}
+	}
+	ev.Timestamp = firstTime(obj, "timestamp", "time", "created_at", "createdAt", "updated_at", "updatedAt")
+	ev.Delta = extractDelta(obj)
+
+	if ev.Type == "" {
+		ev.Type = "message"
+	}
+
+	return ev
+}
+
+func extractMessageText(payload map[string]interface{}) string {
+	if len(payload) == 0 {
+		return ""
+	}
+
+	if s := firstString(payload, "text", "content", "delta", "output", "result"); s != "" {
+		return s
+	}
+
+	if nested := firstNestedMap(payload, "message", "data"); nested != nil {
+		if s := firstString(nested, "text", "content", "delta", "output", "result"); s != "" {
+			return s
+		}
+	}
+
+	if parts, ok := payload["parts"].([]interface{}); ok {
+		for i := len(parts) - 1; i >= 0; i-- {
+			part, ok := parts[i].(map[string]interface{})
+			if !ok {
+				continue
+			}
+			if s := firstString(part, "text", "content", "delta"); s != "" {
+				return s
+			}
+		}
+	}
+
+	return ""
+}
+
+func extractDelta(payload map[string]interface{}) string {
+	if s := firstString(payload, "delta"); s != "" {
+		return s
+	}
+
+	if part := firstNestedMap(payload, "part"); part != nil {
+		if s := firstString(part, "delta", "text", "content"); s != "" {
+			return s
+		}
+	}
+
+	if message := firstNestedMap(payload, "message"); message != nil {
+		if s := firstString(message, "delta", "text", "content"); s != "" {
+			return s
+		}
+		if part := firstNestedMap(message, "part"); part != nil {
+			if s := firstString(part, "delta", "text", "content"); s != "" {
+				return s
+			}
+		}
+	}
+
+	if parts, ok := payload["parts"].([]interface{}); ok {
+		for i := len(parts) - 1; i >= 0; i-- {
+			part, ok := parts[i].(map[string]interface{})
+			if !ok {
+				continue
+			}
+			if s := firstString(part, "delta", "text", "content"); s != "" {
+				return s
+			}
+		}
+	}
+
+	return ""
+}
+
+func eventMatchesSession(ev DaemonEvent, sessionID string) bool {
+	sessionID = strings.TrimSpace(sessionID)
+	if sessionID == "" {
+		return true
+	}
+	return strings.TrimSpace(ev.SessionID) == sessionID
+}
+
+func isDeltaEvent(ev DaemonEvent) bool {
+	if strings.TrimSpace(ev.Delta) != "" {
+		return true
+	}
+	typ := strings.ToLower(strings.TrimSpace(ev.Type))
+	return strings.Contains(typ, "message.part.delta")
+}
+
+func isTerminalEvent(eventType string) bool {
+	switch strings.ToLower(strings.TrimSpace(eventType)) {
+	case "session.idle", "session.error", "message.completed", "message.done", "message.error", "message.stopped", "response.completed", "completion.done":
+		return true
+	default:
+		return false
+	}
+}
+
+func readSSEFrames(ctx context.Context, reader io.Reader, handle func(frame sseFrame) bool) error {
+	scanner := bufio.NewScanner(reader)
+	scanner.Buffer(make([]byte, 0, defaultScannerInitialSize), defaultScannerMaxSize)
+	scanner.Split(splitSSEFrame)
+
+	for scanner.Scan() {
+		select {
+		case <-ctx.Done():
+			return ctx.Err()
+		default:
+		}
+
+		frame, ok := parseSSEFrame(scanner.Bytes())
+		if !ok {
+			continue
+		}
+		if !handle(frame) {
+			return nil
+		}
+	}
+
+	if err := scanner.Err(); err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func splitSSEFrame(data []byte, atEOF bool) (advance int, token []byte, err error) {
+	if atEOF && len(data) == 0 {
+		return 0, nil, nil
+	}
+
+	if idx := bytes.Index(data, []byte("\r\n\r\n")); idx >= 0 {
+		return idx + 4, bytes.Trim(data[:idx], "\r\n"), nil
+	}
+	if idx := bytes.Index(data, []byte("\n\n")); idx >= 0 {
+		return idx + 2, bytes.Trim(data[:idx], "\r\n"), nil
+	}
+
+	if atEOF {
+		return len(data), bytes.Trim(data, "\r\n"), nil
+	}
+
+	return 0, nil, nil
+}
+
+func parseSSEFrame(raw []byte) (sseFrame, bool) {
+	if len(raw) == 0 {
+		return sseFrame{}, false
+	}
+
+	normalized := bytes.ReplaceAll(raw, []byte("\r\n"), []byte("\n"))
+	lines := bytes.Split(normalized, []byte("\n"))
+
+	frame := sseFrame{}
+	dataLines := make([]string, 0, 4)
+
+	for _, lineBytes := range lines {
+		line := strings.TrimRight(string(lineBytes), "\r")
+		if line == "" || strings.HasPrefix(line, ":") {
+			continue
+		}
+
+		key, value, ok := strings.Cut(line, ":")
+		if !ok {
+			continue
+		}
+		value = strings.TrimLeft(value, " ")
+
+		switch key {
+		case "id":
+			frame.ID = value
+		case "event":
+			frame.Event = value
+		case "data":
+			dataLines = append(dataLines, value)
+		}
+	}
+
+	frame.Data = strings.Join(dataLines, "\n")
+	if strings.TrimSpace(frame.ID) == "" && strings.TrimSpace(frame.Event) == "" && strings.TrimSpace(frame.Data) == "" {
+		return sseFrame{}, false
+	}
+
+	return frame, true
+}
+
+func isEndpointMismatchStatus(status int) bool {
+	return status == http.StatusNotFound || status == http.StatusMethodNotAllowed || status == http.StatusNotAcceptable
+}
+
+func responseLooksJSON(res *httpResult) bool {
+	contentType := strings.ToLower(strings.TrimSpace(res.Header.Get("Content-Type")))
+	if strings.Contains(contentType, "application/json") || strings.Contains(contentType, "+json") {
+		return true
+	}
+	trimmed := bytes.TrimSpace(res.Body)
+	if len(trimmed) == 0 {
+		return true
+	}
+	if json.Valid(trimmed) {
+		return true
+	}
+	return false
+}
+
+func decodeJSONPayload(body []byte) (interface{}, error) {
+	decoder := json.NewDecoder(bytes.NewReader(body))
+	decoder.UseNumber()
+	var payload interface{}
+	if err := decoder.Decode(&payload); err != nil {
+		return nil, err
+	}
+	return payload, nil
+}
+
+func sleepBackoff(ctx context.Context, step time.Duration, multiplier int) bool {
+	if step <= 0 {
+		step = defaultRetryBackoff
+	}
+	d := step * time.Duration(multiplier)
+	timer := time.NewTimer(d)
+	defer timer.Stop()
+
+	select {
+	case <-ctx.Done():
+		return false
+	case <-timer.C:
+		return true
+	}
+}
+
+func isRetryableStatus(status int) bool {
+	return status == http.StatusTooManyRequests || status >= 500
+}
+
+func isRetryableError(err error) bool {
+	if err == nil {
+		return false
+	}
+	if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
+		return false
+	}
+	var netErr net.Error
+	if errors.As(err, &netErr) {
+		if netErr.Timeout() {
+			return true
+		}
+		return netErr.Temporary()
+	}
+	return true
+}
+
+func cloneValues(values url.Values) url.Values {
+	if values == nil {
+		return nil
+	}
+	cloned := url.Values{}
+	for key, vals := range values {
+		copyVals := make([]string, len(vals))
+		copy(copyVals, vals)
+		cloned[key] = copyVals
+	}
+	return cloned
+}
+
+func mergeValues(values ...url.Values) url.Values {
+	merged := url.Values{}
+	for _, item := range values {
+		for key, vals := range item {
+			for _, value := range vals {
+				merged.Add(key, value)
+			}
+		}
+	}
+	return merged
+}
+
+func escapePath(filePath string) string {
+	trimmed := strings.TrimSpace(filePath)
+	trimmed = strings.TrimPrefix(trimmed, "/")
+	segments := strings.Split(trimmed, "/")
+	for i := range segments {
+		segments[i] = url.PathEscape(segments[i])
+	}
+	return strings.Join(segments, "/")
+}
+
+func firstNestedMap(payload map[string]interface{}, keys ...string) map[string]interface{} {
+	for _, key := range keys {
+		value, ok := payload[key]
+		if !ok {
+			continue
+		}
+		if nested, ok := value.(map[string]interface{}); ok {
+			return nested
+		}
+	}
+	return nil
+}
+
+func cloneMap(payload map[string]interface{}) map[string]interface{} {
+	if payload == nil {
+		return nil
+	}
+	cloned := make(map[string]interface{}, len(payload))
+	for key, value := range payload {
+		cloned[key] = value
+	}
+	return cloned
+}
+
+func firstString(payload map[string]interface{}, keys ...string) string {
+	for _, key := range keys {
+		value, ok := payload[key]
+		if !ok {
+			continue
+		}
+		switch typed := value.(type) {
+		case string:
+			if s := strings.TrimSpace(typed); s != "" {
+				return s
+			}
+		case json.Number:
+			if s := strings.TrimSpace(typed.String()); s != "" {
+				return s
+			}
+		case float64:
+			if !math.IsNaN(typed) && !math.IsInf(typed, 0) {
+				return strconv.FormatInt(int64(typed), 10)
+			}
+		case float32:
+			f := float64(typed)
+			if !math.IsNaN(f) && !math.IsInf(f, 0) {
+				return strconv.FormatInt(int64(f), 10)
+			}
+		case int:
+			return strconv.Itoa(typed)
+		case int64:
+			return strconv.FormatInt(typed, 10)
+		}
+	}
+	return ""
+}
+
+func firstInt(payload map[string]interface{}, keys ...string) int {
+	for _, key := range keys {
+		value, ok := payload[key]
+		if !ok {
+			continue
+		}
+		switch typed := value.(type) {
+		case json.Number:
+			if n, err := typed.Int64(); err == nil {
+				return int(n)
+			}
+			if f, err := typed.Float64(); err == nil {
+				return int(f)
+			}
+		case float64:
+			if !math.IsNaN(typed) && !math.IsInf(typed, 0) {
+				return int(typed)
+			}
+		case float32:
+			f := float64(typed)
+			if !math.IsNaN(f) && !math.IsInf(f, 0) {
+				return int(f)
+			}
+		case int:
+			return typed
+		case int64:
+			return int(typed)
+		case string:
+			if n, err := strconv.Atoi(strings.TrimSpace(typed)); err == nil {
+				return n
+			}
+		}
+	}
+	return 0
+}
+
+func firstInt64(payload map[string]interface{}, keys ...string) int64 {
+	for _, key := range keys {
+		value, ok := payload[key]
+		if !ok {
+			continue
+		}
+		switch typed := value.(type) {
+		case json.Number:
+			if n, err := typed.Int64(); err == nil {
+				return n
+			}
+			if f, err := typed.Float64(); err == nil && !math.IsNaN(f) && !math.IsInf(f, 0) {
+				return int64(f)
+			}
+		case float64:
+			if !math.IsNaN(typed) && !math.IsInf(typed, 0) {
+				return int64(typed)
+			}
+		case float32:
+			f := float64(typed)
+			if !math.IsNaN(f) && !math.IsInf(f, 0) {
+				return int64(f)
+			}
+		case int:
+			return int64(typed)
+		case int64:
+			return typed
+		case string:
+			if n, err := strconv.ParseInt(strings.TrimSpace(typed), 10, 64); err == nil {
+				return n
+			}
+		}
+	}
+	return 0
+}
+
+func firstBool(payload map[string]interface{}, keys ...string) bool {
+	b, _ := firstBoolWithPresence(payload, keys...)
+	return b
+}
+
+func firstBoolWithPresence(payload map[string]interface{}, keys ...string) (bool, bool) {
+	for _, key := range keys {
+		value, ok := payload[key]
+		if !ok {
+			continue
+		}
+		switch typed := value.(type) {
+		case bool:
+			return typed, true
+		case string:
+			s := strings.TrimSpace(strings.ToLower(typed))
+			switch s {
+			case "true", "1", "yes", "ok", "healthy", "success":
+				return true, true
+			case "false", "0", "no", "error", "failed", "unhealthy":
+				return false, true
+			}
+		case json.Number:
+			if n, err := typed.Int64(); err == nil {
+				return n != 0, true
+			}
+			if f, err := typed.Float64(); err == nil {
+				return f != 0, true
+			}
+		case float64:
+			if !math.IsNaN(typed) && !math.IsInf(typed, 0) {
+				return typed != 0, true
+			}
+		case int:
+			return typed != 0, true
+		}
+	}
+	return false, false
+}
+
+func firstTime(payload map[string]interface{}, keys ...string) time.Time {
+	for _, key := range keys {
+		value, ok := payload[key]
+		if !ok {
+			continue
+		}
+		timestamp := parseFlexibleTime(value)
+		if !timestamp.IsZero() {
+			return timestamp
+		}
+	}
+	return time.Time{}
+}
+
+func parseFlexibleTime(value interface{}) time.Time {
+	switch typed := value.(type) {
+	case string:
+		s := strings.TrimSpace(typed)
+		if s == "" {
+			return time.Time{}
+		}
+		if ts, err := time.Parse(time.RFC3339Nano, s); err == nil {
+			return ts
+		}
+		if ts, err := time.Parse(time.RFC3339, s); err == nil {
+			return ts
+		}
+		if n, err := strconv.ParseInt(s, 10, 64); err == nil {
+			return unixMaybeMillis(n)
+		}
+	case json.Number:
+		if n, err := typed.Int64(); err == nil {
+			return unixMaybeMillis(n)
+		}
+		if f, err := typed.Float64(); err == nil {
+			return unixMaybeMillis(int64(f))
+		}
+	case float64:
+		if !math.IsNaN(typed) && !math.IsInf(typed, 0) {
+			return unixMaybeMillis(int64(typed))
+		}
+	case int64:
+		return unixMaybeMillis(typed)
+	case int:
+		return unixMaybeMillis(int64(typed))
+	}
+	return time.Time{}
+}
+
+func unixMaybeMillis(value int64) time.Time {
+	if value <= 0 {
+		return time.Time{}
+	}
+	if value > 1_000_000_000_000 {
+		return time.UnixMilli(value)
+	}
+	return time.Unix(value, 0)
+}
diff --git a/internal/daemon/client_test.go b/internal/daemon/client_test.go
new file mode 100644
index 0000000..12f50d4
--- /dev/null
+++ b/internal/daemon/client_test.go
@@ -0,0 +1,537 @@
+package daemon
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/http/httptest"
+	"strings"
+	"sync/atomic"
+	"testing"
+	"time"
+)
+
+func mustNewClient(t *testing.T, baseURL string, cfg ClientConfig) *Client {
+	t.Helper()
+	client, err := NewClient(baseURL, cfg)
+	if err != nil {
+		t.Fatalf("failed to create daemon client: %v", err)
+	}
+	return client
+}
+
+func TestListSessionsPrefersSingularSessionEndpoint(t *testing.T) {
+	var singularHits atomic.Int32
+	var pluralHits atomic.Int32
+
+	mux := http.NewServeMux()
+	mux.HandleFunc("/session", func(w http.ResponseWriter, r *http.Request) {
+		singularHits.Add(1)
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode(map[string]interface{}{
+			"sessions": []map[string]interface{}{
+				{
+					"id": "ses-1",
+					"directory": "/work/proj",
+					"time": "2026-03-05T10:00:00Z",
+					"projectID": "proj-1",
+					"slug": "proj",
+					"version": "1.2.17",
+				},
+			},
+		})
+	})
+	mux.HandleFunc("/sessions", func(w http.ResponseWriter, r *http.Request) {
+		pluralHits.Add(1)
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode([]map[string]interface{}{})
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second})
+	sessions, err := client.ListSessions(context.Background())
+	if err != nil {
+		t.Fatalf("ListSessions returned error: %v", err)
+	}
+
+	if singularHits.Load() != 1 {
+		t.Fatalf("expected singular endpoint hit once, got %d", singularHits.Load())
+	}
+	if pluralHits.Load() != 0 {
+		t.Fatalf("expected plural endpoint to be skipped, got %d hits", pluralHits.Load())
+	}
+
+	if len(sessions) != 1 {
+		t.Fatalf("expected 1 session, got %d", len(sessions))
+	}
+	if sessions[0].ID != "ses-1" {
+		t.Fatalf("expected session id ses-1, got %q", sessions[0].ID)
+	}
+	if sessions[0].Directory != "/work/proj" {
+		t.Fatalf("expected directory /work/proj, got %q", sessions[0].Directory)
+	}
+	if sessions[0].CreatedAt.IsZero() {
+		t.Fatalf("expected CreatedAt parsed from time field")
+	}
+}
+
+func TestListSessionsFallsBackFromHTMLShellResponse(t *testing.T) {
+	var singularHits atomic.Int32
+	var pluralHits atomic.Int32
+
+	mux := http.NewServeMux()
+	mux.HandleFunc("/session", func(w http.ResponseWriter, r *http.Request) {
+		singularHits.Add(1)
+		w.Header().Set("Content-Type", "text/html; charset=utf-8")
+		_, _ = w.Write([]byte("shell"))
+	})
+	mux.HandleFunc("/sessions", func(w http.ResponseWriter, r *http.Request) {
+		pluralHits.Add(1)
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode([]map[string]interface{}{{"id": "ses-fallback", "directory": "/tmp/fallback"}})
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second})
+	sessions, err := client.ListSessions(context.Background())
+	if err != nil {
+		t.Fatalf("ListSessions returned error: %v", err)
+	}
+
+	if singularHits.Load() != 1 || pluralHits.Load() != 1 {
+		t.Fatalf("expected fallback behavior singular=1 plural=1, got singular=%d plural=%d", singularHits.Load(), pluralHits.Load())
+	}
+	if len(sessions) != 1 || sessions[0].ID != "ses-fallback" {
+		t.Fatalf("unexpected sessions payload: %+v", sessions)
+	}
+}
+
+func TestGetSessionFallbackAndValidationError(t *testing.T) {
+	var singularHits atomic.Int32
+	var pluralHits atomic.Int32
+
+	mux := http.NewServeMux()
+	mux.HandleFunc("/session/ses-42", func(w http.ResponseWriter, r *http.Request) {
+		singularHits.Add(1)
+		w.Header().Set("Content-Type", "text/html; charset=utf-8")
+		_, _ = w.Write([]byte("ui shell"))
+	})
+	mux.HandleFunc("/sessions/ses-42", func(w http.ResponseWriter, r *http.Request) {
+		pluralHits.Add(1)
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode(map[string]interface{}{
+			"id": "ses-42",
+			"directory": "/work/fallback",
+			"slug": "fallback",
+		})
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second})
+	session, err := client.GetSession(context.Background(), "ses-42")
+	if err != nil {
+		t.Fatalf("GetSession returned error: %v", err)
+	}
+	if session.ID != "ses-42" {
+		t.Fatalf("expected session id ses-42, got %q", session.ID)
+	}
+	if singularHits.Load() != 1 || pluralHits.Load() != 1 {
+		t.Fatalf("expected fallback hits singular=1 plural=1, got singular=%d plural=%d", singularHits.Load(), pluralHits.Load())
+	}
+
+	if _, err := client.GetSession(context.Background(), ""); err == nil {
+		t.Fatalf("expected validation error for empty session id")
+	}
+}
+
+func TestSubscribeEventsParsesMultiLineSSEData(t *testing.T) {
+	mux := http.NewServeMux()
+	mux.HandleFunc("/event", func(w
http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "text/event-stream") + flusher := w.(http.Flusher) + + _, _ = fmt.Fprint(w, "id: 1\n") + _, _ = fmt.Fprint(w, "event: message.part.delta\n") + _, _ = fmt.Fprint(w, "data: {\"type\":\"message.part.delta\",\"sessionID\":\"ses-1\",\n") + _, _ = fmt.Fprint(w, "data: \"part\":{\"delta\":\"hel\"}}\n\n") + flusher.Flush() + + _, _ = fmt.Fprint(w, "id: 2\n") + _, _ = fmt.Fprint(w, "event: session.updated\n") + _, _ = fmt.Fprint(w, "data: {\"type\":\"session.updated\",\"sessionID\":\"ses-1\"}\n\n") + flusher.Flush() + }) + + server := httptest.NewServer(mux) + defer server.Close() + + client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second}) + ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) + defer cancel() + + events, err := client.SubscribeEvents(ctx) + if err != nil { + t.Fatalf("SubscribeEvents returned error: %v", err) + } + + first := <-events + if first.Type != "message.part.delta" { + t.Fatalf("expected first type message.part.delta, got %q", first.Type) + } + if first.ID != "1" { + t.Fatalf("expected first id 1, got %q", first.ID) + } + if first.SessionID != "ses-1" { + t.Fatalf("expected first session ses-1, got %q", first.SessionID) + } + if first.Delta != "hel" { + t.Fatalf("expected parsed delta hel, got %q", first.Delta) + } + + second := <-events + if second.Type != "session.updated" { + t.Fatalf("expected second type session.updated, got %q", second.Type) + } +} + +func TestSendMessageStreamsChunksFromEventEndpoint(t *testing.T) { + startEvents := make(chan struct{}) + postedBody := make(chan MessageRequest, 1) + + mux := http.NewServeMux() + mux.HandleFunc("/event", func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "text/event-stream") + flusher := w.(http.Flusher) + w.WriteHeader(http.StatusOK) + flusher.Flush() + <-startEvents + + _, _ = fmt.Fprint(w, "event: message.part.delta\n") + _, _ = fmt.Fprint(w, 
"data: {\"type\":\"message.part.delta\",\"sessionID\":\"other\",\"delta\":\"ignore\"}\n\n") + flusher.Flush() + + _, _ = fmt.Fprint(w, "event: message.part.delta\n") + _, _ = fmt.Fprint(w, "data: {\"type\":\"message.part.delta\",\"sessionID\":\"ses-1\",\"delta\":\"Hel\"}\n\n") + flusher.Flush() + + _, _ = fmt.Fprint(w, "event: message.part.delta\n") + _, _ = fmt.Fprint(w, "data: {\"type\":\"message.part.delta\",\"sessionID\":\"ses-1\",\"part\":{\"delta\":\"lo\"}}\n\n") + flusher.Flush() + + _, _ = fmt.Fprint(w, "event: session.idle\n") + _, _ = fmt.Fprint(w, "data: {\"type\":\"session.idle\",\"sessionID\":\"ses-1\"}\n\n") + flusher.Flush() + }) + + mux.HandleFunc("/session/ses-1/message", func(w http.ResponseWriter, r *http.Request) { + defer close(startEvents) + var req MessageRequest + if err := json.NewDecoder(r.Body).Decode(&req); err == nil { + postedBody <- req + } + w.Header().Set("Content-Type", "application/json") + _ = json.NewEncoder(w).Encode(map[string]interface{}{"id": "msg-1"}) + }) + + server := httptest.NewServer(mux) + defer server.Close() + + client := mustNewClient(t, server.URL, ClientConfig{ + Timeout: 2 * time.Second, + StreamIdleTimeout: 300 * time.Millisecond, + }) + + ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second) + defer cancel() + + chunks, err := client.SendMessage(ctx, "ses-1", "hello") + if err != nil { + t.Fatalf("SendMessage returned error: %v", err) + } + + posted := <-postedBody + if len(posted.Parts) != 1 || posted.Parts[0].Text != "hello" { + t.Fatalf("unexpected posted request body: %+v", posted) + } + + collected := make([]MessageChunk, 0, 4) + for { + select { + case chunk, ok := <-chunks: + if !ok { + goto done + } + collected = append(collected, chunk) + case <-ctx.Done(): + t.Fatalf("timed out waiting for streamed chunks") + } + } + +done: + deltas := make([]string, 0, 2) + var doneChunk MessageChunk + for _, chunk := range collected { + if chunk.Delta != "" { + deltas = append(deltas, chunk.Delta) 
+		}
+		if chunk.Done {
+			doneChunk = chunk
+		}
+	}
+
+	if strings.Join(deltas, "") != "Hello" {
+		t.Fatalf("expected streamed deltas to form Hello, got %q (%+v)", strings.Join(deltas, ""), deltas)
+	}
+	if !doneChunk.Done {
+		t.Fatalf("expected terminal done chunk, got %+v", collected)
+	}
+	if doneChunk.Type != "session.idle" {
+		t.Fatalf("expected done chunk type session.idle, got %q", doneChunk.Type)
+	}
+}
+
+func TestHealthHonorsTimeout(t *testing.T) {
+	mux := http.NewServeMux()
+	mux.HandleFunc("/global/health", func(w http.ResponseWriter, r *http.Request) {
+		time.Sleep(120 * time.Millisecond)
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode(map[string]interface{}{"healthy": true})
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{Timeout: 40 * time.Millisecond, MaxRetries: 0})
+	start := time.Now()
+	_, err := client.Health(context.Background())
+	elapsed := time.Since(start)
+
+	if err == nil {
+		t.Fatalf("expected timeout error, got nil")
+	}
+	if !errors.Is(err, context.DeadlineExceeded) && !strings.Contains(strings.ToLower(err.Error()), "timeout") {
+		t.Fatalf("expected timeout/deadline error, got %v", err)
+	}
+	if elapsed >= 300*time.Millisecond {
+		t.Fatalf("expected timeout to return quickly, elapsed=%s", elapsed)
+	}
+}
+
+func TestHealthRetriesTransientFailures(t *testing.T) {
+	var attempts atomic.Int32
+
+	mux := http.NewServeMux()
+	mux.HandleFunc("/global/health", func(w http.ResponseWriter, r *http.Request) {
+		count := attempts.Add(1)
+		if count < 3 {
+			w.WriteHeader(http.StatusServiceUnavailable)
+			_, _ = w.Write([]byte("retry me"))
+			return
+		}
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode(map[string]interface{}{"healthy": true, "version": "1.2.17"})
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{
+		Timeout:      2 * time.Second,
+		MaxRetries:   2,
+		RetryBackoff: time.Millisecond,
+	})
+
+	health, err := client.Health(context.Background())
+	if err != nil {
+		t.Fatalf("Health returned error: %v", err)
+	}
+	if !health.Healthy {
+		t.Fatalf("expected healthy=true after retries")
+	}
+	if attempts.Load() != 3 {
+		t.Fatalf("expected exactly 3 attempts, got %d", attempts.Load())
+	}
+}
+
+func TestExecuteCommandFallbackAndValidationError(t *testing.T) {
+	var commandHits atomic.Int32
+	var commandsHits atomic.Int32
+
+	mux := http.NewServeMux()
+	mux.HandleFunc("/session/ses-cmd/command", func(w http.ResponseWriter, r *http.Request) {
+		commandHits.Add(1)
+		w.WriteHeader(http.StatusNotFound)
+	})
+	mux.HandleFunc("/session/ses-cmd/commands", func(w http.ResponseWriter, r *http.Request) {
+		commandsHits.Add(1)
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode(map[string]interface{}{"exit_code": 7, "stderr": "boom", "success": false})
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second})
+	result, err := client.ExecuteCommand(context.Background(), "ses-cmd", "ls")
+	if err != nil {
+		t.Fatalf("ExecuteCommand returned error: %v", err)
+	}
+	if result.ExitCode != 7 || result.Success {
+		t.Fatalf("unexpected command result: %+v", result)
+	}
+	if commandHits.Load() != 1 || commandsHits.Load() != 1 {
+		t.Fatalf("expected fallback hits command=1 commands=1, got command=%d commands=%d", commandHits.Load(), commandsHits.Load())
+	}
+
+	if _, err := client.ExecuteCommand(context.Background(), "ses-cmd", ""); err == nil {
+		t.Fatalf("expected validation error for empty command")
+	}
+}
+
+func TestListFilesAndReadFileFallbackPaths(t *testing.T) {
+	var singularFilesHits atomic.Int32
+	var pluralFilesHits atomic.Int32
+	var queryReadHits atomic.Int32
+
+	mux := http.NewServeMux()
+	mux.HandleFunc("/session/ses-files/file", func(w http.ResponseWriter, r *http.Request) {
+		singularFilesHits.Add(1)
+		w.WriteHeader(http.StatusNotFound)
+	})
+	mux.HandleFunc("/session/ses-files/files", func(w http.ResponseWriter, r *http.Request) {
+		pluralFilesHits.Add(1)
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode(map[string]interface{}{"files": []map[string]interface{}{{"path": "README.md", "size": 6}}})
+	})
+	mux.HandleFunc("/session/ses-files/file/README.md", func(w http.ResponseWriter, r *http.Request) {
+		w.WriteHeader(http.StatusNotFound)
+	})
+	mux.HandleFunc("/session/ses-files/files/README.md", func(w http.ResponseWriter, r *http.Request) {
+		w.WriteHeader(http.StatusNotFound)
+	})
+	mux.HandleFunc("/file/README.md", func(w http.ResponseWriter, r *http.Request) {
+		w.WriteHeader(http.StatusNotFound)
+	})
+	mux.HandleFunc("/files/README.md", func(w http.ResponseWriter, r *http.Request) {
+		w.WriteHeader(http.StatusNotFound)
+	})
+	mux.HandleFunc("/file", func(w http.ResponseWriter, r *http.Request) {
+		if r.URL.Query().Get("sessionID") == "ses-files" && r.URL.Query().Get("path") == "README.md" {
+			queryReadHits.Add(1)
+			w.Header().Set("Content-Type", "application/json")
+			_ = json.NewEncoder(w).Encode(map[string]interface{}{"path": "README.md", "content": "query-read"})
+			return
+		}
+		w.WriteHeader(http.StatusNotFound)
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second})
+	files, err := client.ListFiles(context.Background(), "ses-files", "*.md")
+	if err != nil {
+		t.Fatalf("ListFiles returned error: %v", err)
+	}
+	if len(files) != 1 || files[0].Path != "README.md" {
+		t.Fatalf("unexpected files payload: %+v", files)
+	}
+	if singularFilesHits.Load() != 1 || pluralFilesHits.Load() != 1 {
+		t.Fatalf("expected fallback hits file=1 files=1, got file=%d files=%d", singularFilesHits.Load(), pluralFilesHits.Load())
+	}
+
+	read, err := client.ReadFile(context.Background(), "ses-files", "README.md")
+	if err != nil {
+		t.Fatalf("ReadFile returned error: %v", err)
+	}
+	if read.Content != "query-read" {
+		t.Fatalf("expected query-read content, got %q", read.Content)
+	}
+	if queryReadHits.Load() != 1 {
+		t.Fatalf("expected query-path fallback hit once, got %d", queryReadHits.Load())
+	}
+}
+
+func TestConfigInvalidPayloadReturnsError(t *testing.T) {
+	mux := http.NewServeMux()
+	mux.HandleFunc("/config", func(w http.ResponseWriter, r *http.Request) {
+		w.Header().Set("Content-Type", "application/json")
+		_, _ = w.Write([]byte(`["bad-shape"]`))
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second})
+	if _, err := client.Config(context.Background()); err == nil {
+		t.Fatalf("expected config shape error")
+	}
+}
+
+func TestExecuteCommandListFilesReadFileAndConfig(t *testing.T) {
+	mux := http.NewServeMux()
+	mux.HandleFunc("/session/ses-1/command", func(w http.ResponseWriter, r *http.Request) {
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode(map[string]interface{}{"exit_code": 0, "stdout": "ok", "success": true})
+	})
+	mux.HandleFunc("/session/ses-1/file", func(w http.ResponseWriter, r *http.Request) {
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode(map[string]interface{}{
+			"files": []map[string]interface{}{{"path": "README.md", "size": 5, "is_dir": false}},
+		})
+	})
+	mux.HandleFunc("/session/ses-1/file/README.md", func(w http.ResponseWriter, r *http.Request) {
+		w.Header().Set("Content-Type", "text/plain")
+		_, _ = w.Write([]byte("hello"))
+	})
+	mux.HandleFunc("/config", func(w http.ResponseWriter, r *http.Request) {
+		w.Header().Set("Content-Type", "application/json")
+		_ = json.NewEncoder(w).Encode(map[string]interface{}{"model": "claude", "provider": "anthropic"})
+	})
+
+	server := httptest.NewServer(mux)
+	defer server.Close()
+
+	client := mustNewClient(t, server.URL, ClientConfig{Timeout: 2 * time.Second})
+	ctx := context.Background()
+
+	cmd, err := client.ExecuteCommand(ctx, "ses-1", "pwd")
+	if err != nil {
+		t.Fatalf("ExecuteCommand returned error: %v", err)
+	}
+	if !cmd.Success || cmd.ExitCode != 0 || cmd.Stdout != "ok" {
+		t.Fatalf("unexpected command result: %+v", cmd)
+	}
+
+	files, err := client.ListFiles(ctx, "ses-1", "*.md")
+	if err != nil {
+		t.Fatalf("ListFiles returned error: %v", err)
+	}
+	if len(files) != 1 || files[0].Path != "README.md" {
+		t.Fatalf("unexpected files payload: %+v", files)
+	}
+
+	file, err := client.ReadFile(ctx, "ses-1", "README.md")
+	if err != nil {
+		t.Fatalf("ReadFile returned error: %v", err)
+	}
+	if file.Content != "hello" {
+		t.Fatalf("expected plain-text file content hello, got %q", file.Content)
+	}
+
+	conf, err := client.Config(ctx)
+	if err != nil {
+		t.Fatalf("Config returned error: %v", err)
+	}
+	if conf.Raw["model"] != "claude" {
+		t.Fatalf("unexpected config payload: %+v", conf)
+	}
+}
diff --git a/internal/daemon/spike_test.go b/internal/daemon/spike_test.go
new file mode 100644
index 0000000..2f6f0ce
--- /dev/null
+++ b/internal/daemon/spike_test.go
@@ -0,0 +1,580 @@
+package daemon
+
+import (
+	"bufio"
+	"bytes"
+	"context"
+	"encoding/json"
+	"fmt"
+	"io"
+	"net"
+	"net/http"
+	"os/exec"
+	"path/filepath"
+	"runtime"
+	"strconv"
+	"strings"
+	"testing"
+	"time"
+)
+
+const (
+	spikeStartupTimeout = 15 * time.Second
+	spikeHTTPTimeout    = 30 * time.Second
+)
+
+type spikeDaemon struct {
+	baseURL string
+	client  *http.Client
+}
+
+type openAPIDoc struct {
+	OpenAPI string                            `json:"openapi"`
+	Paths   map[string]map[string]interface{} `json:"paths"`
+}
+
+func requireSpikeDaemon(t *testing.T) *spikeDaemon {
+	t.Helper()
+
+	binaryPath, err := exec.LookPath("opencode")
+	if err != nil {
+		t.Skipf("spike skipped: opencode binary not available in PATH: %v", err)
+	}
+
+	port, err := reservePort()
+	if err != nil {
+		t.Skipf("spike skipped: unable to reserve local port: %v", err)
+	}
+
+	ctx, cancel := context.WithCancel(context.Background())
+	cmd := exec.CommandContext(ctx, binaryPath, "serve", "--port", strconv.Itoa(port))
+	cmd.Dir = moduleRoot(t)
+
+	var stderr bytes.Buffer
+	cmd.Stdout = io.Discard
+	cmd.Stderr = &stderr
+
+	if err := cmd.Start(); err != nil {
+		cancel()
+		t.Skipf("spike skipped: failed to start opencode serve: %v", err)
+	}
+
+	t.Cleanup(func() {
+		cancel()
+		_ = cmd.Wait()
+	})
+
+	baseURL := fmt.Sprintf("http://127.0.0.1:%d", port)
+	client := &http.Client{Timeout: 1200 * time.Millisecond}
+
+	deadline := time.Now().Add(spikeStartupTimeout)
+	for time.Now().Before(deadline) {
+		req, _ := http.NewRequestWithContext(context.Background(), http.MethodGet, baseURL+"/global/health", nil)
+		resp, err := client.Do(req)
+		if err == nil {
+			body, _ := io.ReadAll(resp.Body)
+			_ = resp.Body.Close()
+
+			if resp.StatusCode == http.StatusOK {
+				var health struct {
+					Healthy bool `json:"healthy"`
+				}
+				if json.Unmarshal(body, &health) == nil && health.Healthy {
+					return &spikeDaemon{
+						baseURL: baseURL,
+						client:  &http.Client{Timeout: spikeHTTPTimeout},
+					}
+				}
+			}
+		}
+		time.Sleep(250 * time.Millisecond)
+	}
+
+	t.Skipf(
+		"spike skipped: opencode serve did not become healthy at %s within %s; stderr=%q",
+		baseURL,
+		spikeStartupTimeout,
+		trimForLog(stderr.String(), 400),
+	)
+	return nil
+}
+
+func moduleRoot(t *testing.T) string {
+	t.Helper()
+	_, filename, _, ok := runtime.Caller(0)
+	if !ok {
+		t.Fatal("unable to resolve caller for module root")
+	}
+	return filepath.Clean(filepath.Join(filepath.Dir(filename), "..", ".."))
+}
+
+func reservePort() (int, error) {
+	ln, err := net.Listen("tcp", "127.0.0.1:0")
+	if err != nil {
+		return 0, err
+	}
+	defer ln.Close()
+	addr, ok := ln.Addr().(*net.TCPAddr)
+	if !ok {
+		return 0, fmt.Errorf("unexpected address type %T", ln.Addr())
+	}
+	return addr.Port, nil
+}
+
+func trimForLog(s string, max int) string {
+	s = strings.TrimSpace(s)
+	if len(s) <= max {
+		return s
+	}
+	return s[:max] + "..."
+} + +func mustCreateSession(t *testing.T, d *spikeDaemon) map[string]interface{} { + t.Helper() + + req, err := http.NewRequestWithContext(context.Background(), http.MethodPost, d.baseURL+"/session", nil) + if err != nil { + t.Fatalf("create-session request build failed: %v", err) + } + + resp, err := d.client.Do(req) + if err != nil { + t.Fatalf("create-session request failed: %v", err) + } + defer resp.Body.Close() + + body, err := io.ReadAll(resp.Body) + if err != nil { + t.Fatalf("create-session response read failed: %v", err) + } + + if resp.StatusCode != http.StatusOK { + t.Fatalf("create-session unexpected status=%d body=%s", resp.StatusCode, string(body)) + } + + var payload map[string]interface{} + if err := json.Unmarshal(body, &payload); err != nil { + t.Fatalf("create-session response is not JSON: %v; body=%s", err, string(body)) + } + + if stringField(payload, "id") == "" { + t.Fatalf("create-session response missing id field: %v", payload) + } + + return payload +} + +func stringField(payload map[string]interface{}, key string) string { + v, ok := payload[key] + if !ok { + return "" + } + s, _ := v.(string) + return s +} + +func hasAnyKey(payload map[string]interface{}, keys ...string) bool { + for _, key := range keys { + if _, ok := payload[key]; ok { + return true + } + } + return false +} + +func waitForEventDataMatch(ctx context.Context, d *spikeDaemon, match func(data string) bool) (matched bool, dataLines []string, err error) { + req, err := http.NewRequestWithContext(ctx, http.MethodGet, d.baseURL+"/event", nil) + if err != nil { + return false, nil, err + } + req.Header.Set("Accept", "text/event-stream") + + resp, err := d.client.Do(req) + if err != nil { + if ctx.Err() != nil { + return false, nil, nil + } + return false, nil, err + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusOK { + body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024)) + return false, nil, fmt.Errorf("event stream status=%d body=%s", resp.StatusCode, 
string(body)) + } + + scanner := bufio.NewScanner(resp.Body) + scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) + + for scanner.Scan() { + line := scanner.Text() + if !strings.HasPrefix(line, "data: ") { + continue + } + data := strings.TrimSpace(strings.TrimPrefix(line, "data: ")) + if data == "" { + continue + } + dataLines = append(dataLines, data) + if match(data) { + return true, dataLines, nil + } + } + + if scanErr := scanner.Err(); scanErr != nil && ctx.Err() == nil { + return false, dataLines, scanErr + } + + return false, dataLines, nil +} + +func patchSessionTitle(t *testing.T, d *spikeDaemon, sessionID, title string) { + t.Helper() + body := strings.NewReader(fmt.Sprintf(`{"title":%q}`, title)) + req, err := http.NewRequestWithContext(context.Background(), http.MethodPatch, d.baseURL+"/session/"+sessionID, body) + if err != nil { + t.Fatalf("patch-session request build failed: %v", err) + } + req.Header.Set("Content-Type", "application/json") + + resp, err := d.client.Do(req) + if err != nil { + t.Fatalf("patch-session request failed: %v", err) + } + defer resp.Body.Close() + + bodyBytes, _ := io.ReadAll(io.LimitReader(resp.Body, 4096)) + if resp.StatusCode != http.StatusOK { + t.Fatalf("patch-session unexpected status=%d body=%s", resp.StatusCode, string(bodyBytes)) + } +} + +func TestSpikeDocEndpointOpenAPI(t *testing.T) { + d := requireSpikeDaemon(t) + + req, err := http.NewRequestWithContext(context.Background(), http.MethodGet, d.baseURL+"/doc", nil) + if err != nil { + t.Fatalf("request build failed: %v", err) + } + + resp, err := d.client.Do(req) + if err != nil { + t.Fatalf("GET /doc failed: %v", err) + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusOK { + body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024)) + t.Fatalf("GET /doc unexpected status=%d body=%s", resp.StatusCode, string(body)) + } + + var spec openAPIDoc + if err := json.NewDecoder(resp.Body).Decode(&spec); err != nil { + t.Fatalf("GET /doc returned non-JSON 
payload: %v", err) + } + + if spec.OpenAPI == "" { + t.Fatalf("openapi version missing in /doc payload") + } + + if len(spec.Paths) == 0 { + t.Fatalf("/doc paths section is empty") + } + + requiredPaths := []string{ + "/event", + "/project/current", + "/session", + "/session/{sessionID}", + "/session/{sessionID}/message", + } + + for _, path := range requiredPaths { + if _, ok := spec.Paths[path]; !ok { + t.Fatalf("required path missing from /doc: %s", path) + } + } + + t.Logf("/doc openapi=%s path_count=%d", spec.OpenAPI, len(spec.Paths)) +} + +func TestSpikeCreateSessionShape(t *testing.T) { + d := requireSpikeDaemon(t) + payload := mustCreateSession(t, d) + + id := stringField(payload, "id") + directory := stringField(payload, "directory") + + if id == "" { + t.Fatalf("session create response missing id: %v", payload) + } + if directory == "" { + t.Fatalf("session create response missing directory: %v", payload) + } + + shape := map[string]bool{ + "id": hasAnyKey(payload, "id"), + "daemon_port": hasAnyKey(payload, "daemonPort", "daemon_port", "port"), + "workspace_path": hasAnyKey(payload, "workspacePath", "workspace_path", "directory"), + "status": hasAnyKey(payload, "status"), + "created_at": hasAnyKey(payload, "createdAt", "created_at", "time"), + "last_activity": hasAnyKey(payload, "lastActivity", "last_activity"), + "attached_clients": hasAnyKey(payload, "attachedClients", "attached_clients"), + } + + t.Logf("create-session field coverage=%v", shape) +} + +func TestSpikeSessionMessagesEndpoints(t *testing.T) { + d := requireSpikeDaemon(t) + payload := mustCreateSession(t, d) + sessionID := stringField(payload, "id") + + reqPlural, err := http.NewRequestWithContext(context.Background(), http.MethodGet, d.baseURL+"/session/"+sessionID+"/messages", nil) + if err != nil { + t.Fatalf("plural endpoint request build failed: %v", err) + } + reqPlural.Header.Set("Accept", "text/event-stream") + + respPlural, err := d.client.Do(reqPlural) + if err != nil { + 
+		t.Fatalf("GET /session/{id}/messages failed: %v", err)
+	}
+	pluralBody, _ := io.ReadAll(io.LimitReader(respPlural.Body, 2048))
+	_ = respPlural.Body.Close()
+
+	pluralContentType := respPlural.Header.Get("Content-Type")
+	t.Logf("plural messages endpoint status=%d content-type=%q body-prefix=%q", respPlural.StatusCode, pluralContentType, trimForLog(string(pluralBody), 220))
+
+	reqSingular, err := http.NewRequestWithContext(context.Background(), http.MethodGet, d.baseURL+"/session/"+sessionID+"/message", nil)
+	if err != nil {
+		t.Fatalf("singular endpoint request build failed: %v", err)
+	}
+
+	respSingular, err := d.client.Do(reqSingular)
+	if err != nil {
+		t.Fatalf("GET /session/{id}/message failed: %v", err)
+	}
+	defer respSingular.Body.Close()
+
+	if respSingular.StatusCode != http.StatusOK {
+		body, _ := io.ReadAll(io.LimitReader(respSingular.Body, 2048))
+		t.Fatalf("GET /session/{id}/message unexpected status=%d body=%s", respSingular.StatusCode, string(body))
+	}
+
+	var messages []interface{}
+	if err := json.NewDecoder(respSingular.Body).Decode(&messages); err != nil {
+		t.Fatalf("GET /session/{id}/message did not return JSON list: %v", err)
+	}
+
+	t.Logf("singular message endpoint returned %d entries", len(messages))
+}
+
+func TestSpikePostMessageAndTokenEvents(t *testing.T) {
+	d := requireSpikeDaemon(t)
+	payload := mustCreateSession(t, d)
+	sessionID := stringField(payload, "id")
+
+	eventCtx, cancelEvents := context.WithTimeout(context.Background(), 30*time.Second)
+	defer cancelEvents()
+
+	type eventResult struct {
+		matched bool
+		lines   []string
+		err     error
+	}
+
+	eventCh := make(chan eventResult, 1)
+	go func() {
+		matched, lines, err := waitForEventDataMatch(eventCtx, d, func(data string) bool {
+			return strings.Contains(data, sessionID) && strings.Contains(data, `"message.part.delta"`)
+		})
+		eventCh <- eventResult{matched: matched, lines: lines, err: err}
+	}()
+
+	time.Sleep(600 * time.Millisecond)
+
+	body := strings.NewReader(`{"parts":[{"type":"text","text":"Reply with exactly: spike-pong"}]}`)
+	req, err := http.NewRequestWithContext(context.Background(), http.MethodPost, d.baseURL+"/session/"+sessionID+"/message", body)
+	if err != nil {
+		t.Fatalf("POST /session/{id}/message request build failed: %v", err)
+	}
+	req.Header.Set("Content-Type", "application/json")
+	req.Header.Set("Accept", "application/json")
+
+	resp, err := d.client.Do(req)
+	if err != nil {
+		t.Skipf("spike skipped token assertion: prompt call failed (%v)", err)
+	}
+	defer resp.Body.Close()
+
+	respBody, _ := io.ReadAll(io.LimitReader(resp.Body, 2*1024*1024))
+	if resp.StatusCode != http.StatusOK {
+		t.Fatalf("POST /session/{id}/message unexpected status=%d body=%s", resp.StatusCode, trimForLog(string(respBody), 400))
+	}
+
+	if ct := resp.Header.Get("Content-Type"); !strings.Contains(ct, "application/json") {
+		t.Fatalf("POST /session/{id}/message expected application/json response, got %q", ct)
+	}
+
+	var messagePayload map[string]interface{}
+	if err := json.Unmarshal(respBody, &messagePayload); err != nil {
+		t.Fatalf("POST /session/{id}/message response was not JSON: %v", err)
+	}
+
+	result := <-eventCh
+	if result.err != nil {
+		t.Skipf("spike skipped token assertion: event stream error: %v", result.err)
+	}
+	if !result.matched {
+		t.Skipf("spike skipped token assertion: no message.part.delta event observed for session %s within timeout (events=%d)", sessionID, len(result.lines))
+	}
+
+	t.Logf("observed message.part.delta events for session=%s (total_event_data_lines=%d)", sessionID, len(result.lines))
+}
+
+func TestSpikeEventEndpointReceivesSessionUpdates(t *testing.T) {
+	d := requireSpikeDaemon(t)
+	payload := mustCreateSession(t, d)
+	sessionID := stringField(payload, "id")
+
+	eventCtx, cancel := context.WithTimeout(context.Background(), 12*time.Second)
+	defer cancel()
+
+	type eventResult struct {
+		matched bool
+		lines   []string
+		err     error
+	}
+
+	eventCh := make(chan eventResult, 1)
+	go func() {
+		matched, lines, err := waitForEventDataMatch(eventCtx, d, func(data string) bool {
+			return strings.Contains(data, sessionID) && strings.Contains(data, `"session.updated"`)
+		})
+		eventCh <- eventResult{matched: matched, lines: lines, err: err}
+	}()
+
+	time.Sleep(500 * time.Millisecond)
+	patchSessionTitle(t, d, sessionID, "spike-event-update")
+
+	result := <-eventCh
+	if result.err != nil {
+		t.Fatalf("event stream failed: %v", result.err)
+	}
+	if !result.matched {
+		t.Fatalf("expected session.updated event for session %s; received %d data lines", sessionID, len(result.lines))
+	}
+
+	t.Logf("event stream delivered session.updated for session=%s", sessionID)
+}
+
+func TestSpikeMultiClientEventStreams(t *testing.T) {
+	d := requireSpikeDaemon(t)
+	payload := mustCreateSession(t, d)
+	sessionID := stringField(payload, "id")
+
+	ctx1, cancel1 := context.WithTimeout(context.Background(), 12*time.Second)
+	defer cancel1()
+	ctx2, cancel2 := context.WithTimeout(context.Background(), 12*time.Second)
+	defer cancel2()
+
+	type eventResult struct {
+		matched bool
+		lines   []string
+		err     error
+	}
+
+	stream1 := make(chan eventResult, 1)
+	stream2 := make(chan eventResult, 1)
+
+	go func() {
+		matched, lines, err := waitForEventDataMatch(ctx1, d, func(data string) bool {
+			return strings.Contains(data, sessionID) && strings.Contains(data, `"session.updated"`)
+		})
+		stream1 <- eventResult{matched: matched, lines: lines, err: err}
+	}()
+
+	go func() {
+		matched, lines, err := waitForEventDataMatch(ctx2, d, func(data string) bool {
+			return strings.Contains(data, sessionID) && strings.Contains(data, `"session.updated"`)
+		})
+		stream2 <- eventResult{matched: matched, lines: lines, err: err}
+	}()
+
+	time.Sleep(900 * time.Millisecond)
+	patchSessionTitle(t, d, sessionID, "spike-multi-client")
+
+	res1 := <-stream1
+	res2 := <-stream2
+
+	if res1.err != nil {
+		t.Fatalf("event stream client #1 failed: %v", res1.err)
+	}
+	if res2.err != nil {
+		t.Fatalf("event stream client #2 failed: %v", res2.err)
+	}
+	if !res1.matched || !res2.matched {
+		t.Fatalf("expected both event clients to receive session update: client1=%t client2=%t", res1.matched, res2.matched)
+	}
+
+	t.Logf("both event clients observed session.updated for session=%s", sessionID)
+}
+
+func TestSpikeSessionDetailFields(t *testing.T) {
+	d := requireSpikeDaemon(t)
+	payload := mustCreateSession(t, d)
+	sessionID := stringField(payload, "id")
+
+	req, err := http.NewRequestWithContext(context.Background(), http.MethodGet, d.baseURL+"/session/"+sessionID, nil)
+	if err != nil {
+		t.Fatalf("GET /session/{id} request build failed: %v", err)
+	}
+
+	resp, err := d.client.Do(req)
+	if err != nil {
+		t.Fatalf("GET /session/{id} request failed: %v", err)
+	}
+	defer resp.Body.Close()
+
+	if resp.StatusCode != http.StatusOK {
+		body, _ := io.ReadAll(io.LimitReader(resp.Body, 2048))
+		t.Fatalf("GET /session/{id} unexpected status=%d body=%s", resp.StatusCode, string(body))
+	}
+
+	body, err := io.ReadAll(resp.Body)
+	if err != nil {
+		t.Fatalf("GET /session/{id} read failed: %v", err)
+	}
+
+	var sessionPayload map[string]interface{}
+	if err := json.Unmarshal(body, &sessionPayload); err != nil {
+		t.Fatalf("GET /session/{id} response was not JSON: %v", err)
+	}
+
+	if stringField(sessionPayload, "id") == "" {
+		t.Fatalf("GET /session/{id} response missing id: %v", sessionPayload)
+	}
+
+	hasWorkingDirectory := hasAnyKey(sessionPayload, "directory", "worktree", "cwd")
+	hasFiles := hasAnyKey(sessionPayload, "files", "fileList", "file_list")
+	hasAgent := hasAnyKey(sessionPayload, "agent", "agents", "agentInfo", "agent_info")
+
+	t.Logf("session detail keys=%v", sortedKeys(sessionPayload))
+	t.Logf("session detail capability working_directory=%t files=%t agent=%t", hasWorkingDirectory, hasFiles, hasAgent)
+}
+
+func sortedKeys(m map[string]interface{}) []string {
+	keys := make([]string, 0, len(m))
+	for key := range m {
+		keys = append(keys, key)
} + for i := 0; i < len(keys)-1; i++ { + for j := i + 1; j < len(keys); j++ { + if keys[j] < keys[i] { + keys[i], keys[j] = keys[j], keys[i] + } + } + } + return keys +} diff --git a/internal/daemon/types.go b/internal/daemon/types.go new file mode 100644 index 0000000..d23e257 --- /dev/null +++ b/internal/daemon/types.go @@ -0,0 +1,105 @@ +package daemon + +import ( + "encoding/json" + "net/http" + "time" +) + +type ClientConfig struct { + Timeout time.Duration + MaxRetries int + RetryBackoff time.Duration + AuthToken string + HTTPClient *http.Client + StreamBuffer int + StreamIdleTimeout time.Duration +} + +type DaemonSession struct { + ID string `json:"id"` + Title string `json:"title,omitempty"` + Directory string `json:"directory,omitempty"` + Status string `json:"status,omitempty"` + CreatedAt time.Time `json:"createdAt,omitempty"` + LastActivity time.Time `json:"lastActivity,omitempty"` + DaemonPort int `json:"daemonPort,omitempty"` + AttachedClients int `json:"attachedClients,omitempty"` + ProjectID string `json:"projectID,omitempty"` + Slug string `json:"slug,omitempty"` + Version string `json:"version,omitempty"` + Raw map[string]interface{} `json:"-"` +} + +type MessagePart struct { + Type string `json:"type"` + Text string `json:"text"` +} + +type MessageRequest struct { + Parts []MessagePart `json:"parts"` +} + +type MessageChunk struct { + SessionID string `json:"sessionId,omitempty"` + MessageID string `json:"messageId,omitempty"` + Type string `json:"type,omitempty"` + Delta string `json:"delta,omitempty"` + Done bool `json:"done,omitempty"` + Error string `json:"error,omitempty"` + Timestamp time.Time `json:"timestamp,omitempty"` + RawData string `json:"rawData,omitempty"` + Payload map[string]interface{} `json:"payload,omitempty"` +} + +type ExecuteCommandRequest struct { + Command string `json:"command"` +} + +type CommandResult struct { + ExitCode int `json:"exitCode"` + Success bool `json:"success"` + Stdout string `json:"stdout,omitempty"` + 
Stderr string `json:"stderr,omitempty"` + Raw map[string]interface{} `json:"-"` +} + +type FileInfo struct { + Path string `json:"path"` + Name string `json:"name,omitempty"` + Size int64 `json:"size,omitempty"` + IsDir bool `json:"isDir"` + Mode string `json:"mode,omitempty"` + ModTime time.Time `json:"modTime,omitempty"` + Raw map[string]interface{} `json:"-"` +} + +type FileContent struct { + Path string `json:"path"` + Content string `json:"content"` + Encoding string `json:"encoding,omitempty"` + RawBytes []byte `json:"-"` +} + +type DaemonEvent struct { + ID string `json:"id,omitempty"` + Type string `json:"type,omitempty"` + SessionID string `json:"sessionId,omitempty"` + MessageID string `json:"messageId,omitempty"` + Timestamp time.Time `json:"timestamp,omitempty"` + Delta string `json:"delta,omitempty"` + RawData string `json:"rawData,omitempty"` + Data json.RawMessage `json:"data,omitempty"` + Payload map[string]interface{} `json:"payload,omitempty"` + Error string `json:"error,omitempty"` +} + +type HealthResponse struct { + Healthy bool `json:"healthy"` + Version string `json:"version,omitempty"` + Raw map[string]interface{} `json:"-"` +} + +type DaemonConfig struct { + Raw map[string]interface{} `json:"raw"` +} diff --git a/internal/errors/errors.go b/internal/errors/errors.go new file mode 100644 index 0000000..3ff59de --- /dev/null +++ b/internal/errors/errors.go @@ -0,0 +1,101 @@ +package errors + +import ( + "context" + stderrors "errors" + "net/http" + + "opencoderouter/internal/session" +) + +var ( + ErrSessionNotFound = stderrors.New("session not found") + ErrDaemonUnhealthy = stderrors.New("daemon unhealthy") + ErrAuthFailed = stderrors.New("authentication failed") + ErrPortExhausted = stderrors.New("no available ports") +) + +func HTTPStatus(err error) int { + switch { + case isSessionNotFound(err): + return http.StatusNotFound + case isPortExhausted(err): + return http.StatusServiceUnavailable + case stderrors.Is(err, ErrAuthFailed): + 
return http.StatusUnauthorized + case stderrors.Is(err, ErrDaemonUnhealthy), stderrors.Is(err, session.ErrTerminalAttachDisabled): + return http.StatusServiceUnavailable + case stderrors.Is(err, context.Canceled): + return http.StatusRequestTimeout + case stderrors.Is(err, context.DeadlineExceeded): + return http.StatusGatewayTimeout + default: + return http.StatusInternalServerError + } +} + +func Code(err error) string { + switch { + case isSessionNotFound(err): + return "SESSION_NOT_FOUND" + case stderrors.Is(err, session.ErrWorkspacePathRequired): + return "WORKSPACE_PATH_REQUIRED" + case stderrors.Is(err, session.ErrWorkspacePathInvalid): + return "WORKSPACE_PATH_INVALID" + case stderrors.Is(err, session.ErrSessionAlreadyExists): + return "SESSION_ALREADY_EXISTS" + case isPortExhausted(err): + return "NO_AVAILABLE_SESSION_PORTS" + case stderrors.Is(err, session.ErrSessionStopped): + return "SESSION_STOPPED" + case stderrors.Is(err, ErrAuthFailed): + return "AUTH_FAILED" + case stderrors.Is(err, ErrDaemonUnhealthy): + return "DAEMON_UNHEALTHY" + case stderrors.Is(err, session.ErrTerminalAttachDisabled): + return "TERMINAL_ATTACH_UNAVAILABLE" + case stderrors.Is(err, context.Canceled): + return "REQUEST_CANCELED" + case stderrors.Is(err, context.DeadlineExceeded): + return "REQUEST_TIMEOUT" + default: + return "INTERNAL_ERROR" + } +} + +func Message(err error) string { + switch { + case isSessionNotFound(err): + return "session not found" + case stderrors.Is(err, session.ErrWorkspacePathRequired): + return "workspace path is required" + case stderrors.Is(err, session.ErrWorkspacePathInvalid): + return "workspace path is invalid" + case stderrors.Is(err, session.ErrSessionAlreadyExists): + return "session already exists" + case isPortExhausted(err): + return "no available session ports" + case stderrors.Is(err, session.ErrSessionStopped): + return "session is stopped" + case stderrors.Is(err, ErrAuthFailed): + return "authentication failed" + case 
stderrors.Is(err, ErrDaemonUnhealthy): + return "daemon unhealthy" + case stderrors.Is(err, session.ErrTerminalAttachDisabled): + return "terminal attachment is unavailable" + case stderrors.Is(err, context.Canceled): + return "request canceled" + case stderrors.Is(err, context.DeadlineExceeded): + return "request timeout" + default: + return "internal server error" + } +} + +func isSessionNotFound(err error) bool { + return stderrors.Is(err, ErrSessionNotFound) || stderrors.Is(err, session.ErrSessionNotFound) +} + +func isPortExhausted(err error) bool { + return stderrors.Is(err, ErrPortExhausted) || stderrors.Is(err, session.ErrNoAvailableSessionPorts) +} diff --git a/internal/tui/model/types.go b/internal/model/types.go similarity index 84% rename from internal/tui/model/types.go rename to internal/model/types.go index 98b84ff..0e97d07 100644 --- a/internal/tui/model/types.go +++ b/internal/model/types.go @@ -1,6 +1,10 @@ package model -import "time" +import ( + "time" + + "opencoderouter/internal/registry" +) // ActivityState captures a high-level activity bucket for a session. type ActivityState string @@ -16,6 +20,33 @@ const ( ActivityUnknown ActivityState = "UNKNOWN" ) +// SessionState captures lifecycle state for control-plane sessions. +type SessionState string + +const ( + SessionStateActive SessionState = "active" + SessionStateIdle SessionState = "idle" + SessionStateStopped SessionState = "stopped" + SessionStateError SessionState = "error" +) + +// AttachMode identifies which client surface is attached to a session. +type AttachMode string + +const ( + AttachModeTerminal AttachMode = "terminal" + AttachModeBrowser AttachMode = "browser" + AttachModeVSCode AttachMode = "vscode" +) + +// DaemonInfo describes a managed OpenCode daemon instance. +type DaemonInfo struct { + Port int `json:"port"` + PID int `json:"pid"` + Health bool `json:"health"` + Version string `json:"version"` +} + // HostStatus represents remote availability from probe/discovery. 
type HostStatus string @@ -87,6 +118,12 @@ type Session struct { Activity ActivityState } +// BackendSession combines a discovered backend with its sessions. +type BackendSession struct { + Backend registry.Backend `json:"backend"` + Sessions []Session `json:"sessions"` +} + // JumpHop represents one hop in a ProxyJump chain. type JumpHop struct { // Raw is the original hop string from ssh config. diff --git a/internal/proxy/proxy.go b/internal/proxy/proxy.go index ca33d8d..801b699 100644 --- a/internal/proxy/proxy.go +++ b/internal/proxy/proxy.go @@ -3,14 +3,15 @@ package proxy import ( "encoding/json" "fmt" - "html/template" "log/slog" "net/http" "net/http/httputil" "net/url" "strings" + "sync" "time" + "opencoderouter/internal/auth" "opencoderouter/internal/config" "opencoderouter/internal/registry" ) @@ -22,22 +23,44 @@ import ( // // Unmatched requests get the dashboard. type Router struct { - registry *registry.Registry - cfg config.Config - logger *slog.Logger + registry *registry.Registry + cfg config.Config + logger *slog.Logger + handler http.Handler + uiHandler http.Handler + + wsMu sync.Mutex + wsConnections map[string]string + wsConnSeq uint64 + wsPingInterval time.Duration +} + +func writeJSONResponse(w http.ResponseWriter, payload any) { + if err := json.NewEncoder(w).Encode(payload); err != nil { + slog.Default().Debug("failed to encode JSON response", "error", err) + } } // New creates a new Router. 
-func New(reg *registry.Registry, cfg config.Config, logger *slog.Logger) *Router { - return &Router{ - registry: reg, - cfg: cfg, - logger: logger, +func New(reg *registry.Registry, cfg config.Config, logger *slog.Logger, uiHandler http.Handler) *Router { + rt := &Router{ + registry: reg, + cfg: cfg, + logger: logger, + wsConnections: make(map[string]string), + wsPingInterval: defaultWSPingInterval, + uiHandler: uiHandler, } + rt.handler = auth.Middleware(http.HandlerFunc(rt.routeRequest), auth.LoadFromEnv()) + return rt } // ServeHTTP implements http.Handler. func (rt *Router) ServeHTTP(w http.ResponseWriter, r *http.Request) { + rt.handler.ServeHTTP(w, r) +} + +func (rt *Router) routeRequest(w http.ResponseWriter, r *http.Request) { // Try host-based routing first. if slug := rt.slugFromHost(r.Host); slug != "" { if backend, ok := rt.registry.Lookup(slug); ok { @@ -46,6 +69,11 @@ func (rt *Router) ServeHTTP(w http.ResponseWriter, r *http.Request) { } } + if rt.isWSRoute(r.URL.Path) { + rt.handleWSProxy(w, r) + return + } + // Try path-based routing: /{slug}/... if slug, remainder := rt.slugFromPath(r.URL.Path); slug != "" { if backend, ok := rt.registry.Lookup(slug); ok { @@ -193,13 +221,13 @@ func (rt *Router) handleAPIBackends(w http.ResponseWriter, r *http.Request) { } w.Header().Set("Content-Type", "application/json") - json.NewEncoder(w).Encode(items) + writeJSONResponse(w, items) } // handleAPIHealth returns the router's own health status. 
func (rt *Router) handleAPIHealth(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", "application/json") - json.NewEncoder(w).Encode(map[string]interface{}{ + writeJSONResponse(w, map[string]interface{}{ "healthy": true, "username": rt.cfg.Username, "backends": rt.registry.Len(), @@ -242,7 +270,7 @@ func (rt *Router) handleAPIResolve(w http.ResponseWriter, r *http.Request) { } w.Header().Set("Content-Type", "application/json") w.WriteHeader(http.StatusNotFound) - json.NewEncoder(w).Encode(map[string]interface{}{ + writeJSONResponse(w, map[string]interface{}{ "error": "not_found", "query": query, "detail": "no backend found for this project", @@ -251,7 +279,7 @@ func (rt *Router) handleAPIResolve(w http.ResponseWriter, r *http.Request) { } w.Header().Set("Content-Type", "application/json") - json.NewEncoder(w).Encode(map[string]interface{}{ + writeJSONResponse(w, map[string]interface{}{ "slug": backend.Slug, "project_name": backend.ProjectName, "project_path": backend.ProjectPath, @@ -264,108 +292,11 @@ func (rt *Router) handleAPIResolve(w http.ResponseWriter, r *http.Request) { }) } -// handleDashboard renders an HTML page listing all discovered backends. +// handleDashboard serves the dashboard UI. 
func (rt *Router) handleDashboard(w http.ResponseWriter, r *http.Request) { - backends := rt.registry.All() - - type entry struct { - Slug string - ProjectName string - ProjectPath string - Port int - Version string - Domain string - PathURL string - LastSeen string - Healthy bool - } - - entries := make([]entry, 0, len(backends)) - for _, b := range backends { - entries = append(entries, entry{ - Slug: b.Slug, - ProjectName: b.ProjectName, - ProjectPath: b.ProjectPath, - Port: b.Port, - Version: b.Version, - Domain: rt.cfg.DomainFor(b.Slug), - PathURL: fmt.Sprintf("/%s/", b.Slug), - LastSeen: b.LastSeen.Format(time.RFC3339), - Healthy: b.Healthy(rt.cfg.StaleAfter), - }) - } - - data := struct { - Username string - Entries []entry - MDNS bool - }{ - Username: rt.cfg.Username, - Entries: entries, - MDNS: rt.cfg.EnableMDNS, - } - - w.Header().Set("Content-Type", "text/html; charset=utf-8") - if err := dashboardTmpl.Execute(w, data); err != nil { - rt.logger.Error("dashboard render error", "error", err) + if rt.uiHandler != nil { + rt.uiHandler.ServeHTTP(w, r) + } else { + http.NotFound(w, r) } } - -var dashboardTmpl = template.Must(template.New("dashboard").Parse(` - - - - -OpenCode Router — {{.Username}} - - - -

-</head>
-<body>
-<h1>OpenCode Router</h1>
-<p>User: {{.Username}} · mDNS: {{if .MDNS}}enabled{{else}}disabled{{end}} · <a href="/api/backends">JSON API</a></p>
-{{if .Entries}}
-<table>
-<tr><th>Status</th><th>Project</th><th>Slug</th><th>Backend</th><th>Domain</th><th>Path</th><th>Version</th><th>Last Seen</th></tr>
-{{range .Entries}}
-<tr>
-<td>{{if .Healthy}}Healthy{{else}}Stale{{end}}</td>
-<td>{{.ProjectName}}</td>
-<td>{{.Slug}}</td>
-<td>127.0.0.1:{{.Port}}</td>
-<td>{{.Domain}}</td>
-<td>{{.PathURL}}</td>
-<td>{{.Version}}</td>
-<td>{{.LastSeen}}</td>
-</tr>
-{{end}}
-</table>
-{{else}}
-<p>No OpenCode instances discovered yet. Scanning ports…</p>
-{{end}} - - - - -`)) diff --git a/internal/proxy/proxy_test.go b/internal/proxy/proxy_test.go index 6e420f6..0cbb8e9 100644 --- a/internal/proxy/proxy_test.go +++ b/internal/proxy/proxy_test.go @@ -1,12 +1,17 @@ package proxy import ( + "bufio" "encoding/json" + "fmt" "io" "log/slog" + "net" "net/http" "net/http/httptest" + "net/url" "os" + "strconv" "strings" "testing" "time" @@ -27,7 +32,33 @@ func testLogger() *slog.Logger { } func newTestRouter(reg *registry.Registry) *Router { - return New(reg, testCfg(), testLogger()) + mockUI := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "text/html; charset=utf-8") + w.WriteHeader(http.StatusOK) + + body := "OpenCode Router testuser" + for _, b := range reg.All() { + body += " " + b.Slug + " " + fmt.Sprint(b.Port) + " " + b.Version + } + + if _, err := w.Write([]byte(body)); err != nil { + testLogger().Error("mock ui write failed", "error", err) + } + }) + return New(reg, testCfg(), testLogger(), mockUI) +} + +func mustPort(t *testing.T, rawURL string) int { + t.Helper() + u, err := url.Parse(rawURL) + if err != nil { + t.Fatalf("failed to parse URL %q: %v", rawURL, err) + } + port, err := strconv.Atoi(u.Port()) + if err != nil { + t.Fatalf("failed to parse port from %q: %v", rawURL, err) + } + return port } // --------------------------------------------------------------------------- @@ -109,7 +140,9 @@ func TestServeHTTP_HostRouting(t *testing.T) { backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Header().Set("X-Backend", "reached") w.WriteHeader(http.StatusOK) - w.Write([]byte("hello from backend")) + if _, err := w.Write([]byte("hello from backend")); err != nil { + t.Fatalf("backend write failed: %v", err) + } })) defer backend.Close() @@ -156,7 +189,9 @@ func TestServeHTTP_PathRouting(t *testing.T) { // Start a fake backend that echoes the received path. 
backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.WriteHeader(http.StatusOK) - w.Write([]byte("path=" + r.URL.Path)) + if _, err := w.Write([]byte("path=" + r.URL.Path)); err != nil { + t.Fatalf("backend path write failed: %v", err) + } })) defer backend.Close() @@ -187,6 +222,189 @@ func TestServeHTTP_PathRouting(t *testing.T) { } } +func TestWSRouteParsing(t *testing.T) { + rt := newTestRouter(registry.New(30*time.Second, testLogger())) + + tests := []struct { + name string + path string + wantSlug string + wantRest string + wantMatch bool + }{ + {"valid with nested path", "/ws/proj/echo/path", "proj", "/echo/path", true}, + {"valid root path", "/ws/proj", "proj", "/", true}, + {"valid trailing slash", "/ws/proj/", "proj", "/", true}, + {"missing slug", "/ws/", "", "", false}, + {"wrong prefix", "/proj/ws/echo", "", "", false}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + slug, rest, ok := rt.wsRoute(tt.path) + if ok != tt.wantMatch { + t.Fatalf("wsRoute(%q) match=%v, want %v", tt.path, ok, tt.wantMatch) + } + if slug != tt.wantSlug { + t.Fatalf("wsRoute(%q) slug=%q, want %q", tt.path, slug, tt.wantSlug) + } + if rest != tt.wantRest { + t.Fatalf("wsRoute(%q) remainder=%q, want %q", tt.path, rest, tt.wantRest) + } + }) + } +} + +func TestServeHTTP_WSRouteRequiresUpgrade(t *testing.T) { + reg := registry.New(30*time.Second, testLogger()) + rt := newTestRouter(reg) + + w := httptest.NewRecorder() + req := httptest.NewRequest(http.MethodGet, "/ws/proj/echo", nil) + rt.ServeHTTP(w, req) + + if w.Code != http.StatusBadRequest { + t.Fatalf("expected 400, got %d", w.Code) + } + if !strings.Contains(strings.ToLower(w.Body.String()), "upgrade") { + t.Fatalf("expected upgrade error message, got %q", w.Body.String()) + } +} + +func TestServeHTTP_WSRouteInvalidSlug(t *testing.T) { + reg := registry.New(30*time.Second, testLogger()) + rt := newTestRouter(reg) + + w := httptest.NewRecorder() + req := 
httptest.NewRequest(http.MethodGet, "/ws/missing/echo", nil) + req.Header.Set("Connection", "Upgrade") + req.Header.Set("Upgrade", "websocket") + rt.ServeHTTP(w, req) + + if w.Code != http.StatusNotFound { + t.Fatalf("expected 404, got %d", w.Code) + } + if !strings.Contains(w.Body.String(), `backend "missing" not found`) { + t.Fatalf("expected clear missing backend message, got %q", w.Body.String()) + } +} + +func TestServeHTTP_WSRouteProxyAndTrackConnection(t *testing.T) { + holdOpen := make(chan struct{}) + receivedPath := make(chan string, 1) + + backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + receivedPath <- r.URL.Path + + hj, ok := w.(http.Hijacker) + if !ok { + t.Error("response writer does not support hijacking") + return + } + + conn, rw, err := hj.Hijack() + if err != nil { + t.Errorf("hijack failed: %v", err) + return + } + + _, _ = rw.WriteString("HTTP/1.1 101 Switching Protocols\r\n") + _, _ = rw.WriteString("Connection: Upgrade\r\n") + _, _ = rw.WriteString("Upgrade: websocket\r\n") + _, _ = rw.WriteString("Sec-WebSocket-Accept: test\r\n\r\n") + _ = rw.Flush() + + <-holdOpen + _ = conn.Close() + })) + defer backend.Close() + + reg := registry.New(30*time.Second, testLogger()) + reg.Upsert(mustPort(t, backend.URL), "proj", "/home/test/proj", "1.0") + + rt := newTestRouter(reg) + srv := httptest.NewServer(rt) + defer srv.Close() + + u, err := url.Parse(srv.URL) + if err != nil { + t.Fatalf("failed to parse server URL: %v", err) + } + + conn, err := net.Dial("tcp", u.Host) + if err != nil { + t.Fatalf("dial failed: %v", err) + } + + _, err = fmt.Fprintf(conn, + "GET /ws/proj/echo HTTP/1.1\r\nHost: %s\r\nConnection: Upgrade\r\nUpgrade: websocket\r\nSec-WebSocket-Version: 13\r\nSec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==\r\n\r\n", + u.Host, + ) + if err != nil { + t.Fatalf("failed to write request: %v", err) + } + + reader := bufio.NewReader(conn) + statusLine, err := reader.ReadString('\n') + if err != nil { 
+ t.Fatalf("failed to read status line: %v", err) + } + if !strings.Contains(statusLine, "101") { + t.Fatalf("expected 101 response, got %q", statusLine) + } + + for { + line, err := reader.ReadString('\n') + if err != nil { + t.Fatalf("failed to read response headers: %v", err) + } + if line == "\r\n" { + break + } + } + + select { + case gotPath := <-receivedPath: + if gotPath != "/echo" { + t.Fatalf("expected proxied path /echo, got %q", gotPath) + } + case <-time.After(2 * time.Second): + t.Fatal("backend did not receive proxied websocket request") + } + + deadline := time.Now().Add(2 * time.Second) + tracked := false + for time.Now().Before(deadline) { + rt.wsMu.Lock() + n := len(rt.wsConnections) + rt.wsMu.Unlock() + if n > 0 { + tracked = true + break + } + time.Sleep(10 * time.Millisecond) + } + if !tracked { + t.Fatal("expected websocket connection to be tracked while open") + } + + close(holdOpen) + _ = conn.Close() + + deadline = time.Now().Add(2 * time.Second) + for time.Now().Before(deadline) { + rt.wsMu.Lock() + n := len(rt.wsConnections) + rt.wsMu.Unlock() + if n == 0 { + return + } + time.Sleep(10 * time.Millisecond) + } + + t.Fatal("expected websocket connection to be untracked after close") +} + // --------------------------------------------------------------------------- // Dashboard (fallback) // --------------------------------------------------------------------------- @@ -263,7 +481,9 @@ func TestAPIHealth(t *testing.T) { } var resp map[string]interface{} - json.Unmarshal(w.Body.Bytes(), &resp) + if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil { + t.Fatalf("unmarshal health response: %v", err) + } if resp["healthy"] != true { t.Error("expected healthy=true") @@ -294,7 +514,9 @@ func TestAPIBackends_Empty(t *testing.T) { } var items []interface{} - json.Unmarshal(w.Body.Bytes(), &items) + if err := json.Unmarshal(w.Body.Bytes(), &items); err != nil { + t.Fatalf("unmarshal backends response: %v", err) + } if len(items) != 0 { 
t.Errorf("expected empty list, got %d items", len(items)) } @@ -311,7 +533,9 @@ func TestAPIBackends_WithEntries(t *testing.T) { rt.ServeHTTP(w, req) var items []map[string]interface{} - json.Unmarshal(w.Body.Bytes(), &items) + if err := json.Unmarshal(w.Body.Bytes(), &items); err != nil { + t.Fatalf("unmarshal backends entries response: %v", err) + } if len(items) != 1 { t.Fatalf("expected 1 item, got %d", len(items)) } diff --git a/internal/proxy/ws.go b/internal/proxy/ws.go new file mode 100644 index 0000000..535286a --- /dev/null +++ b/internal/proxy/ws.go @@ -0,0 +1,101 @@ +package proxy + +import ( + "fmt" + "net/http" + "strings" + "sync/atomic" + "time" +) + +const defaultWSPingInterval = 30 * time.Second + +func (rt *Router) isWSRoute(path string) bool { + return path == "/ws" || path == "/ws/" || strings.HasPrefix(path, "/ws/") +} + +func (rt *Router) wsRoute(path string) (slug, remainder string, ok bool) { + if !strings.HasPrefix(path, "/ws/") { + return "", "", false + } + + trimmed := strings.TrimPrefix(path, "/ws/") + if trimmed == "" { + return "", "", false + } + + parts := strings.SplitN(trimmed, "/", 2) + slug = parts[0] + if slug == "" { + return "", "", false + } + + remainder = "/" + if len(parts) == 2 && parts[1] != "" { + remainder = "/" + parts[1] + } + + return slug, remainder, true +} + +func (rt *Router) handleWSProxy(w http.ResponseWriter, r *http.Request) { + if r.Method != http.MethodGet { + http.Error(w, "method not allowed", http.StatusMethodNotAllowed) + return + } + + slug, remainder, ok := rt.wsRoute(r.URL.Path) + if !ok { + http.Error(w, "invalid websocket route: expected /ws/{backend-slug}/{path...}", http.StatusBadRequest) + return + } + + if !isWebSocketUpgrade(r) { + http.Error(w, "websocket upgrade required", http.StatusBadRequest) + return + } + + backend, found := rt.registry.Lookup(slug) + if !found { + http.Error(w, fmt.Sprintf("backend %q not found", slug), http.StatusNotFound) + return + } + + connID := 
rt.trackWSConnection(slug) + defer rt.untrackWSConnection(connID) + + rt.proxyTo(backend, w, r, remainder) +} + +func isWebSocketUpgrade(r *http.Request) bool { + if !headerHasToken(r.Header.Get("Connection"), "upgrade") { + return false + } + if !headerHasToken(r.Header.Get("Upgrade"), "websocket") { + return false + } + return true +} + +func headerHasToken(headerValue, token string) bool { + for _, part := range strings.Split(headerValue, ",") { + if strings.EqualFold(strings.TrimSpace(part), token) { + return true + } + } + return false +} + +func (rt *Router) trackWSConnection(slug string) string { + connID := fmt.Sprintf("%s-%d", slug, atomic.AddUint64(&rt.wsConnSeq, 1)) + rt.wsMu.Lock() + rt.wsConnections[connID] = slug + rt.wsMu.Unlock() + return connID +} + +func (rt *Router) untrackWSConnection(connID string) { + rt.wsMu.Lock() + delete(rt.wsConnections, connID) + rt.wsMu.Unlock() +} diff --git a/internal/registry/registry.go b/internal/registry/registry.go index d0eb449..1a6863a 100644 --- a/internal/registry/registry.go +++ b/internal/registry/registry.go @@ -30,6 +30,7 @@ type Registry struct { mu sync.RWMutex backends map[string]*Backend // slug → backend byPort map[int]string // port → slug (for fast dedup) + sessions map[string]map[string]SessionMetadata staleAfter time.Duration logger *slog.Logger } @@ -39,6 +40,7 @@ func New(staleAfter time.Duration, logger *slog.Logger) *Registry { return &Registry{ backends: make(map[string]*Backend), byPort: make(map[int]string), + sessions: make(map[string]map[string]SessionMetadata), staleAfter: staleAfter, logger: logger, } @@ -54,6 +56,7 @@ func (r *Registry) Upsert(port int, projectName, projectPath, version string) bo // Check if this port was previously registered under a different slug. 
if oldSlug, ok := r.byPort[port]; ok && oldSlug != slug { delete(r.backends, oldSlug) + delete(r.sessions, oldSlug) r.logger.Info("backend project changed", "port", port, "old_slug", oldSlug, "new_slug", slug) } @@ -102,6 +105,7 @@ func (r *Registry) Prune() []string { if time.Since(b.LastSeen) > r.staleAfter { delete(r.backends, slug) delete(r.byPort, b.Port) + delete(r.sessions, slug) r.logger.Info("backend removed (stale)", "slug", slug, "port", b.Port) removed = append(removed, slug) } diff --git a/internal/registry/sessions.go b/internal/registry/sessions.go new file mode 100644 index 0000000..5286328 --- /dev/null +++ b/internal/registry/sessions.go @@ -0,0 +1,134 @@ +package registry + +import ( + "sort" + "strings" + "time" +) + +type SessionMetadata struct { + ID string `json:"id"` + Title string `json:"title,omitempty"` + Directory string `json:"directory,omitempty"` + Status string `json:"status,omitempty"` + LastActivity time.Time `json:"last_activity,omitempty"` + CreatedAt time.Time `json:"created_at,omitempty"` + DaemonPort int `json:"daemon_port,omitempty"` + AttachedClients int `json:"attached_clients,omitempty"` +} + +func (r *Registry) UpsertSession(backendSlug string, session SessionMetadata) bool { + backendSlug = strings.TrimSpace(backendSlug) + session.ID = strings.TrimSpace(session.ID) + if backendSlug == "" || session.ID == "" { + return false + } + + r.mu.Lock() + defer r.mu.Unlock() + + if _, ok := r.backends[backendSlug]; !ok { + return false + } + + backendSessions, ok := r.sessions[backendSlug] + if !ok { + backendSessions = make(map[string]SessionMetadata) + r.sessions[backendSlug] = backendSessions + } + + _, existed := backendSessions[session.ID] + backendSessions[session.ID] = session + return !existed +} + +func (r *Registry) ReplaceSessions(backendSlug string, sessions []SessionMetadata) { + backendSlug = strings.TrimSpace(backendSlug) + if backendSlug == "" { + return + } + + r.mu.Lock() + defer r.mu.Unlock() + + if _, ok := 
r.backends[backendSlug]; !ok { + return + } + + replacement := make(map[string]SessionMetadata, len(sessions)) + for _, session := range sessions { + session.ID = strings.TrimSpace(session.ID) + if session.ID == "" { + continue + } + replacement[session.ID] = session + } + + if len(replacement) == 0 { + delete(r.sessions, backendSlug) + return + } + + r.sessions[backendSlug] = replacement +} + +func (r *Registry) RemoveSession(backendSlug, sessionID string) bool { + backendSlug = strings.TrimSpace(backendSlug) + sessionID = strings.TrimSpace(sessionID) + if backendSlug == "" || sessionID == "" { + return false + } + + r.mu.Lock() + defer r.mu.Unlock() + + backendSessions, ok := r.sessions[backendSlug] + if !ok { + return false + } + if _, ok := backendSessions[sessionID]; !ok { + return false + } + + delete(backendSessions, sessionID) + if len(backendSessions) == 0 { + delete(r.sessions, backendSlug) + } + return true +} + +func (r *Registry) ListSessions(backendSlug string) []SessionMetadata { + backendSlug = strings.TrimSpace(backendSlug) + if backendSlug == "" { + return nil + } + + r.mu.RLock() + defer r.mu.RUnlock() + + backendSessions, ok := r.sessions[backendSlug] + if !ok { + return nil + } + + result := make([]SessionMetadata, 0, len(backendSessions)) + for _, session := range backendSessions { + result = append(result, session) + } + + sort.Slice(result, func(i, j int) bool { + return result[i].ID < result[j].ID + }) + return result +} + +func (r *Registry) RemoveSessionsForBackend(backendSlug string) { + backendSlug = strings.TrimSpace(backendSlug) + if backendSlug == "" { + return + } + + r.mu.Lock() + defer r.mu.Unlock() + delete(r.sessions, backendSlug) +} diff --git a/internal/registry/sessions_test.go b/internal/registry/sessions_test.go new file mode 100644 index 0000000..e4df316 --- /dev/null +++ b/internal/registry/sessions_test.go @@ -0,0 +1,83 @@ +package registry + +import ( + "testing" + "time" +) + +func TestSessionIndex_UpsertListRemove(t 
*testing.T) { + r := New(30*time.Second, testLogger()) + r.Upsert(4096, "proj", "/home/alice/proj", "1.0") + + created := r.UpsertSession("proj", SessionMetadata{ID: "s-1", Title: "first"}) + if !created { + t.Fatal("expected first upsert to create session") + } + + created = r.UpsertSession("proj", SessionMetadata{ID: "s-1", Title: "updated"}) + if created { + t.Fatal("expected second upsert to update existing session") + } + + list := r.ListSessions("proj") + if len(list) != 1 { + t.Fatalf("expected 1 session, got %d", len(list)) + } + if list[0].Title != "updated" { + t.Fatalf("expected updated title, got %q", list[0].Title) + } + + if !r.RemoveSession("proj", "s-1") { + t.Fatal("expected RemoveSession to return true") + } + if r.RemoveSession("proj", "s-1") { + t.Fatal("expected second RemoveSession to return false") + } + if len(r.ListSessions("proj")) != 0 { + t.Fatal("expected no sessions after remove") + } +} + +func TestSessionIndex_ReplaceSessionsRemovesMissing(t *testing.T) { + r := New(30*time.Second, testLogger()) + r.Upsert(4096, "proj", "/home/alice/proj", "1.0") + + r.ReplaceSessions("proj", []SessionMetadata{{ID: "a"}, {ID: "b"}}) + if got := len(r.ListSessions("proj")); got != 2 { + t.Fatalf("expected 2 sessions, got %d", got) + } + + r.ReplaceSessions("proj", []SessionMetadata{{ID: "b", Title: "keep"}}) + list := r.ListSessions("proj") + if len(list) != 1 { + t.Fatalf("expected 1 session after replacement, got %d", len(list)) + } + if list[0].ID != "b" { + t.Fatalf("expected only session b to remain, got %q", list[0].ID) + } +} + +func TestSessionIndex_RemovedWhenBackendPruned(t *testing.T) { + r := New(20*time.Millisecond, testLogger()) + r.Upsert(4096, "proj", "/home/alice/proj", "1.0") + r.UpsertSession("proj", SessionMetadata{ID: "s-1"}) + + time.Sleep(40 * time.Millisecond) + r.Prune() + + if len(r.ListSessions("proj")) != 0 { + t.Fatal("expected sessions to be cleared when backend is pruned") + } +} + +func 
TestSessionIndex_RemovedWhenProjectChangesOnPort(t *testing.T) { + r := New(30*time.Second, testLogger()) + r.Upsert(4096, "old", "/home/alice/old", "1.0") + r.UpsertSession("old", SessionMetadata{ID: "s-1"}) + + r.Upsert(4096, "new", "/home/alice/new", "1.0") + + if len(r.ListSessions("old")) != 0 { + t.Fatal("expected old project sessions removed after port project change") + } +} diff --git a/internal/remote/discovery.go b/internal/remote/discovery.go new file mode 100644 index 0000000..2a04fff --- /dev/null +++ b/internal/remote/discovery.go @@ -0,0 +1,679 @@ +package remote + +import ( + "bufio" + "context" + "errors" + "fmt" + "io" + "log/slog" + "net" + "net/url" + "os" + "os/user" + "path" + "path/filepath" + "sort" + "strconv" + "strings" + "time" + + "opencoderouter/internal/model" +) + +type DiscoveryService struct { + opts DiscoveryOptions + runner Runner + sshConfigPath string + logger *slog.Logger +} + +const maxSanitizedLogErrorRunes = 320 + +func NewDiscoveryService(opts DiscoveryOptions, runner Runner, logger *slog.Logger) *DiscoveryService { + if runner == nil { + runner = ExecRunner{} + } + if logger == nil { + logger = slog.New(slog.NewTextHandler(io.Discard, nil)) + } + sshConfigPath := strings.TrimSpace(opts.SSHConfigPath) + if sshConfigPath == "" { + sshConfigPath = defaultSSHConfigPath() + } + return &DiscoveryService{ + opts: opts, + runner: runner, + sshConfigPath: sshConfigPath, + logger: logger, + } +} + +func (s *DiscoveryService) SetSSHConfigPath(path string) { + if strings.TrimSpace(path) == "" { + s.sshConfigPath = defaultSSHConfigPath() + return + } + s.sshConfigPath = path +} + +func (s *DiscoveryService) Discover(ctx context.Context) ([]model.Host, error) { + startedAt := time.Now() + s.logger.Debug("starting host discovery", + "ssh_config_path", s.sshConfigPath, + "include_patterns_count", len(s.opts.Include), + "ignore_patterns_count", len(s.opts.Ignore), + ) + + aliases, err := s.loadHostAliases() + if err != nil { + 
s.logger.Error("host discovery failed", + "stage", "load_host_aliases", + "error", SanitizeLogError(err), + ) + return nil, err + } + s.logger.Debug("loaded host aliases", "alias_count", len(aliases)) + + filtered := filterAliasesWithLogger(aliases, s.opts.Include, s.opts.Ignore, s.logger) + s.logger.Debug("discovery aliases after filtering", "filtered_count", len(filtered)) + + hosts := make([]model.Host, 0, len(filtered)) + var probeErrs []error + + for _, alias := range filtered { + select { + case <-ctx.Done(): + err := fmt.Errorf("discover canceled: %w", ctx.Err()) + s.logger.Error("host discovery failed", + "stage", "context_canceled", + "processed_hosts", len(hosts), + "error", SanitizeLogError(err), + ) + return hosts, err + default: + } + + h, resolveErr := s.resolveHost(ctx, alias) + if resolveErr != nil { + h = model.Host{ + Name: alias, + Label: alias, + Status: model.HostStatusError, + LastError: resolveErr.Error(), + } + probeErrs = append(probeErrs, fmt.Errorf("resolve host %q: %w", alias, resolveErr)) + } + + if override, ok := s.opts.Overrides[alias]; ok { + if override.Label != "" { + h.Label = override.Label + } + h.Priority = override.Priority + if override.OpencodePath != "" { + h.OpencodeBin = override.OpencodePath + } + } + if h.Label == "" { + h.Label = h.Name + } + + hosts = append(hosts, h) + } + + sort.Slice(hosts, func(i, j int) bool { + if hosts[i].Priority != hosts[j].Priority { + return hosts[i].Priority > hosts[j].Priority + } + return hosts[i].Name < hosts[j].Name + }) + + buildDependencyGraphWithLogger(hosts, s.logger) + + if len(probeErrs) > 0 { + joinedErr := errors.Join(probeErrs...) 
+ s.logger.Error("host discovery failed", + "stage", "resolve_hosts", + "host_count", len(hosts), + "failure_count", len(probeErrs), + "duration", time.Since(startedAt), + "error", SanitizeLogError(joinedErr), + ) + return hosts, joinedErr + } + + s.logger.Debug("host discovery complete", + "host_count", len(hosts), + "duration", time.Since(startedAt), + ) + + return hosts, nil +} + +func (s *DiscoveryService) loadHostAliases() ([]string, error) { + s.logger.Debug("reading ssh config for host aliases", "path", s.sshConfigPath) + + configPath, err := expandSSHPath(s.sshConfigPath) + if err != nil { + s.logger.Error("failed to expand ssh config path", "path", s.sshConfigPath, "error", SanitizeLogError(err)) + return nil, fmt.Errorf("expand ssh config path %q: %w", s.sshConfigPath, err) + } + + b, err := os.ReadFile(configPath) + if err != nil { + if os.IsNotExist(err) { + s.logger.Debug("ssh config file not found", "path", configPath, "alias_count", 0) + return nil, nil + } + s.logger.Error("failed to read ssh config", "path", configPath, "error", SanitizeLogError(err)) + return nil, fmt.Errorf("read ssh config %q: %w", configPath, err) + } + + expandedConfig, err := expandSSHConfigIncludes(configPath, b, nil) + if err != nil { + s.logger.Error("failed to expand ssh config includes", "path", configPath, "error", SanitizeLogError(err)) + return nil, fmt.Errorf("expand includes for ssh config %q: %w", configPath, err) + } + + aliases := parseSSHConfigHostsWithLogger(string(expandedConfig), s.logger) + s.logger.Debug("loaded host aliases from ssh config", "path", configPath, "alias_count", len(aliases)) + return aliases, nil +} + +func expandSSHConfigIncludes(configPath string, content []byte, visited map[string]struct{}) ([]byte, error) { + if visited == nil { + visited = make(map[string]struct{}) + } + + absPath, err := filepath.Abs(configPath) + if err != nil { + return nil, fmt.Errorf("resolve absolute path %q: %w", configPath, err) + } + canonicalPath := 
filepath.Clean(absPath) + if evaluatedPath, evalErr := filepath.EvalSymlinks(canonicalPath); evalErr == nil { + canonicalPath = evaluatedPath + } + + if _, seen := visited[canonicalPath]; seen { + return nil, nil + } + visited[canonicalPath] = struct{}{} + + parentDir := filepath.Dir(canonicalPath) + var out strings.Builder + + scanner := bufio.NewScanner(strings.NewReader(string(content))) + for scanner.Scan() { + rawLine := scanner.Text() + line := strings.TrimSpace(rawLine) + + includePatterns := parseSSHIncludePatterns(line) + if len(includePatterns) == 0 { + out.WriteString(rawLine) + out.WriteByte('\n') + continue + } + + for _, includePattern := range includePatterns { + resolvedPattern, resolveErr := resolveSSHIncludePattern(parentDir, includePattern) + if resolveErr != nil { + return nil, fmt.Errorf("resolve include pattern %q in %q: %w", includePattern, canonicalPath, resolveErr) + } + + matches, globErr := filepath.Glob(resolvedPattern) + if globErr != nil { + return nil, fmt.Errorf("expand include pattern %q in %q: %w", includePattern, canonicalPath, globErr) + } + + for _, includePath := range matches { + includeBytes, readErr := os.ReadFile(includePath) + if readErr != nil { + if os.IsNotExist(readErr) { + continue + } + return nil, fmt.Errorf("read included ssh config %q: %w", includePath, readErr) + } + + expandedInclude, includeErr := expandSSHConfigIncludes(includePath, includeBytes, visited) + if includeErr != nil { + return nil, includeErr + } + if len(expandedInclude) == 0 { + continue + } + + out.Write(expandedInclude) + if expandedInclude[len(expandedInclude)-1] != '\n' { + out.WriteByte('\n') + } + } + } + } + + if err := scanner.Err(); err != nil { + return nil, fmt.Errorf("scan ssh config %q: %w", canonicalPath, err) + } + + return []byte(out.String()), nil +} + +func parseSSHIncludePatterns(line string) []string { + fields := strings.Fields(line) + if len(fields) < 2 || !strings.EqualFold(fields[0], "include") { + return nil + } + + 
patterns := make([]string, 0, len(fields)-1) + for _, field := range fields[1:] { + if strings.HasPrefix(field, "#") { + break + } + + pattern := strings.Trim(field, "\"'") + if pattern == "" { + continue + } + + patterns = append(patterns, pattern) + } + + return patterns +} + +func resolveSSHIncludePattern(parentDir, includePattern string) (string, error) { + resolvedPattern, err := expandSSHPath(includePattern) + if err != nil { + return "", err + } + if !filepath.IsAbs(resolvedPattern) { + resolvedPattern = filepath.Join(parentDir, resolvedPattern) + } + + return filepath.Clean(resolvedPattern), nil +} + +func expandSSHPath(path string) (string, error) { + if path == "~" { + home, err := os.UserHomeDir() + if err != nil { + return "", err + } + return home, nil + } + + if strings.HasPrefix(path, "~/") { + home, err := os.UserHomeDir() + if err != nil { + return "", err + } + return filepath.Join(home, path[2:]), nil + } + + return path, nil +} + +func (s *DiscoveryService) resolveHost(ctx context.Context, alias string) (model.Host, error) { + s.logger.Debug("resolving host", "alias", alias) + s.logger.Debug("executing ssh -G", "alias", alias) + + out, err := s.runner.Run(ctx, "ssh", "-G", alias) + if err != nil { + s.logger.Error("failed to resolve host", + "alias", alias, + "error", SanitizeLogError(err), + ) + return model.Host{}, err + } + s.logger.Debug("ssh -G completed", "alias", alias, "output_bytes", len(out)) + + host := model.Host{ + Name: alias, + Address: alias, + User: currentUserName(), + Label: alias, + Status: model.HostStatusUnknown, + } + + scanner := bufio.NewScanner(strings.NewReader(string(out))) + for scanner.Scan() { + line := strings.TrimSpace(scanner.Text()) + if line == "" { + continue + } + parts := strings.Fields(line) + if len(parts) < 2 { + continue + } + + key := strings.ToLower(parts[0]) + value := strings.Join(parts[1:], " ") + switch key { + case "hostname": + host.Address = value + case "user": + host.User = value + case 
"proxyjump": + if value != "" && value != "none" { + host.ProxyJumpRaw = value + host.ProxyKind = model.ProxyKindJump + host.JumpChain = parseProxyJumpWithLogger(value, alias, s.logger) + } + case "proxycommand": + if value != "" && value != "none" { + host.ProxyCommand = value + if host.ProxyKind == "" || host.ProxyKind == model.ProxyKindNone { + host.ProxyKind = model.ProxyKindCommand + } + } + } + } + + if err := scanner.Err(); err != nil { + wrappedErr := fmt.Errorf("parse ssh -G output for %q: %w", alias, err) + s.logger.Error("failed to parse ssh -G output", + "alias", alias, + "error", SanitizeLogError(wrappedErr), + ) + return model.Host{}, wrappedErr + } + + s.logger.Debug("resolved host metadata", + "alias", alias, + "proxy_kind", host.ProxyKind, + "jump_hop_count", len(host.JumpChain), + "has_proxy_command", host.ProxyCommand != "", + ) + + return host, nil +} + +func ParseSSHConfigHosts(content string) []string { + return parseSSHConfigHostsWithLogger(content, nil) +} + +func parseSSHConfigHostsWithLogger(content string, logger *slog.Logger) []string { + if logger != nil { + logger.Debug("starting ssh config host parse", "content_bytes", len(content)) + } + + seen := make(map[string]struct{}) + aliases := make([]string, 0) + + scanner := bufio.NewScanner(strings.NewReader(content)) + for scanner.Scan() { + line := strings.TrimSpace(scanner.Text()) + if line == "" || strings.HasPrefix(line, "#") { + continue + } + + fields := strings.Fields(line) + if len(fields) < 2 || !strings.EqualFold(fields[0], "host") { + continue + } + + for _, candidate := range fields[1:] { + if strings.HasPrefix(candidate, "!") { + continue + } + if strings.ContainsAny(candidate, "*?") { + continue + } + if _, ok := seen[candidate]; ok { + continue + } + seen[candidate] = struct{}{} + aliases = append(aliases, candidate) + } + } + + if logger != nil { + logger.Debug("completed ssh config host parse", "alias_count", len(aliases)) + } + + return aliases +} + +func 
FilterAliases(aliases, includes, ignores []string) []string { + return filterAliasesWithLogger(aliases, includes, ignores, nil) +} + +func filterAliasesWithLogger(aliases, includes, ignores []string, logger *slog.Logger) []string { + if logger != nil { + logger.Debug("filtering host aliases", + "before_count", len(aliases), + "include_patterns_count", len(includes), + "ignore_patterns_count", len(ignores), + ) + } + + if len(includes) == 0 { + includes = []string{"*"} + } + + filtered := make([]string, 0, len(aliases)) + for _, alias := range aliases { + if !matchesAnyGlob(alias, includes) { + continue + } + if matchesAnyGlob(alias, ignores) { + continue + } + filtered = append(filtered, alias) + } + + if logger != nil { + logger.Debug("host alias filtering complete", + "before_count", len(aliases), + "after_count", len(filtered), + ) + } + + return filtered +} + +func matchesAnyGlob(candidate string, patterns []string) bool { + for _, pattern := range patterns { + matched, err := path.Match(pattern, candidate) + if err != nil { + if pattern == candidate { + return true + } + continue + } + if matched { + return true + } + } + return false +} + +func defaultSSHConfigPath() string { + home, err := os.UserHomeDir() + if err != nil || home == "" { + return ".ssh/config" + } + return filepath.Join(home, ".ssh", "config") +} + +func currentUserName() string { + u, err := user.Current() + if err != nil { + return "" + } + return u.Username +} + +func ParseProxyJump(raw string) []model.JumpHop { + return parseProxyJumpWithLogger(raw, "", nil) +} + +func parseProxyJumpWithLogger(raw, alias string, logger *slog.Logger) []model.JumpHop { + parts := strings.Split(raw, ",") + hops := make([]model.JumpHop, 0, len(parts)) + for _, part := range parts { + part = strings.TrimSpace(part) + if part == "" { + continue + } + hop := parseOneHop(part) + hops = append(hops, hop) + } + + if logger != nil { + if alias != "" { + logger.Debug("parsed proxy jump chain", + "alias", alias, + 
"hop_count", len(hops), + ) + } else { + logger.Debug("parsed proxy jump chain", "hop_count", len(hops)) + } + } + + return hops +} + +func parseOneHop(hop string) model.JumpHop { + j := model.JumpHop{Raw: hop} + + if strings.HasPrefix(hop, "ssh://") { + u, err := url.Parse(hop) + if err == nil { + j.Host = u.Hostname() + j.User = u.User.Username() + if p := u.Port(); p != "" { + j.Port, _ = strconv.Atoi(p) + } + return j + } + } + + userHost := hop + if at := strings.LastIndex(hop, "@"); at >= 0 { + j.User = hop[:at] + userHost = hop[at+1:] + } + + host, portStr, err := net.SplitHostPort(userHost) + if err == nil { + j.Host = host + j.Port, _ = strconv.Atoi(portStr) + } else { + j.Host = userHost + } + + return j +} + +func BuildDependencyGraph(hosts []model.Host) { + buildDependencyGraphWithLogger(hosts, nil) +} + +func buildDependencyGraphWithLogger(hosts []model.Host, logger *slog.Logger) { + startedAt := time.Now() + if logger != nil { + logger.Debug("building dependency graph", "host_count", len(hosts)) + } + + aliasIndex := make(map[string]int, len(hosts)) + addressIndex := make(map[string]int, len(hosts)) + for i, h := range hosts { + aliasIndex[h.Name] = i + if h.Address != "" { + addressIndex[h.Address] = i + } + } + + for i := range hosts { + if hosts[i].ProxyKind != model.ProxyKindJump || len(hosts[i].JumpChain) == 0 { + continue + } + + seen := make(map[string]bool) + for hi := range hosts[i].JumpChain { + hop := &hosts[i].JumpChain[hi] + alias := resolveHopAlias(hop.Host, aliasIndex, addressIndex) + if alias == "" { + hop.External = true + continue + } + hop.AliasRef = alias + if !seen[alias] { + seen[alias] = true + hosts[i].DependsOn = append(hosts[i].DependsOn, alias) + } + } + } + + edgeCount := 0 + for i := range hosts { + edgeCount += len(hosts[i].DependsOn) + } + if logger != nil { + logger.Debug("dependency graph edges resolved", "edge_count", edgeCount) + } + + for i := range hosts { + for _, dep := range hosts[i].DependsOn { + if idx, ok := 
aliasIndex[dep]; ok { + hosts[idx].Dependents = appendUnique(hosts[idx].Dependents, hosts[i].Name) + } + } + } + + if logger != nil { + logger.Debug("dependency graph build complete", + "host_count", len(hosts), + "edge_count", edgeCount, + "duration", time.Since(startedAt), + ) + } +} + +func resolveHopAlias(hopHost string, aliasIndex, addressIndex map[string]int) string { + if _, ok := aliasIndex[hopHost]; ok { + return hopHost + } + if idx, ok := addressIndex[hopHost]; ok { + for alias, i := range aliasIndex { + if i == idx { + return alias + } + } + } + return "" +} + +func appendUnique(slice []string, s string) []string { + for _, v := range slice { + if v == s { + return slice + } + } + return append(slice, s) +} + +func SanitizeLogError(err error) string { + if err == nil { + return "" + } + + msg := strings.TrimSpace(err.Error()) + msg = strings.NewReplacer("\r", " ", "\n", " ").Replace(msg) + msg = strings.Join(strings.Fields(msg), " ") + + lower := strings.ToLower(msg) + if idx := strings.Index(lower, "stderr:"); idx >= 0 { + msg = strings.TrimSpace(msg[:idx]) + " stderr: [redacted]" + } + if idx := strings.Index(strings.ToLower(msg), "stdout:"); idx >= 0 { + msg = strings.TrimSpace(msg[:idx]) + " stdout: [redacted]" + } + + runes := []rune(msg) + if len(runes) > maxSanitizedLogErrorRunes { + msg = strings.TrimSpace(string(runes[:maxSanitizedLogErrorRunes-1])) + "…" + } + + return msg +} diff --git a/internal/remote/discovery_test.go b/internal/remote/discovery_test.go new file mode 100644 index 0000000..0b3a5dd --- /dev/null +++ b/internal/remote/discovery_test.go @@ -0,0 +1,270 @@ +package remote + +import ( + "context" + "os" + "path/filepath" + "testing" +) + +type discoveryRunnerMock struct { + byAlias map[string]runResult +} + +type runResult struct { + stdout string + err error +} + +func (m discoveryRunnerMock) Run(_ context.Context, _ string, args ...string) ([]byte, error) { + if len(args) == 0 { + return nil, nil + } + alias := 
args[len(args)-1] + res, ok := m.byAlias[alias] + if !ok { + return []byte(""), nil + } + if res.err != nil { + return nil, res.err + } + return []byte(res.stdout), nil +} + +func TestParseSSHConfigHosts(t *testing.T) { + content := ` +Host * + ForwardAgent no + +Host prod-1 dev-1 backup-1 + User alice + +Host jump-? + User bob + +Host !ignored +` + + hosts := ParseSSHConfigHosts(content) + if len(hosts) != 3 { + t.Fatalf("expected 3 concrete hosts, got %d (%v)", len(hosts), hosts) + } + want := map[string]struct{}{"prod-1": {}, "dev-1": {}, "backup-1": {}} + for _, h := range hosts { + if _, ok := want[h]; !ok { + t.Fatalf("unexpected host alias %q", h) + } + } +} + +func TestLoadHostAliases_IncludeGlobAbsolute(t *testing.T) { + tmpDir := t.TempDir() + mainConfigPath := filepath.Join(tmpDir, "config") + includeDir := filepath.Join(tmpDir, "config.d") + + writeSSHConfigFile(t, mainConfigPath, "Host root-host\n User root\nInclude "+filepath.Join(includeDir, "*.conf")+"\n") + writeSSHConfigFile(t, filepath.Join(includeDir, "one.conf"), "Host include-one\n") + writeSSHConfigFile(t, filepath.Join(includeDir, "two.conf"), "Host include-two\n") + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + svc.SetSSHConfigPath(mainConfigPath) + + aliases, err := svc.loadHostAliases() + if err != nil { + t.Fatalf("load host aliases: %v", err) + } + + assertAliasSet(t, aliases, "root-host", "include-one", "include-two") +} + +func TestLoadHostAliases_IncludeRelativePath(t *testing.T) { + tmpDir := t.TempDir() + mainConfigPath := filepath.Join(tmpDir, "config") + relativeIncludePath := filepath.Join("includes", "relative.conf") + + writeSSHConfigFile(t, mainConfigPath, "Host root-host\nInclude "+relativeIncludePath+"\n") + writeSSHConfigFile(t, filepath.Join(tmpDir, relativeIncludePath), "Host relative-host\n") + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + svc.SetSSHConfigPath(mainConfigPath) + + aliases, err := 
svc.loadHostAliases() + if err != nil { + t.Fatalf("load host aliases: %v", err) + } + + assertAliasSet(t, aliases, "root-host", "relative-host") +} + +func TestLoadHostAliases_IncludeNested(t *testing.T) { + tmpDir := t.TempDir() + mainConfigPath := filepath.Join(tmpDir, "config") + levelOnePath := filepath.Join(tmpDir, "level-one.conf") + levelTwoPath := filepath.Join(tmpDir, "level-two.conf") + + writeSSHConfigFile(t, mainConfigPath, "Host root-host\nInclude "+levelOnePath+"\n") + writeSSHConfigFile(t, levelOnePath, "Host level-one-host\nInclude "+levelTwoPath+"\n") + writeSSHConfigFile(t, levelTwoPath, "Host level-two-host\n") + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + svc.SetSSHConfigPath(mainConfigPath) + + aliases, err := svc.loadHostAliases() + if err != nil { + t.Fatalf("load host aliases: %v", err) + } + + assertAliasSet(t, aliases, "root-host", "level-one-host", "level-two-host") +} + +func TestLoadHostAliases_IncludeCycleSafe(t *testing.T) { + tmpDir := t.TempDir() + mainConfigPath := filepath.Join(tmpDir, "a.conf") + otherConfigPath := filepath.Join(tmpDir, "b.conf") + + writeSSHConfigFile(t, mainConfigPath, "Host cycle-a\nInclude "+otherConfigPath+"\n") + writeSSHConfigFile(t, otherConfigPath, "Host cycle-b\nInclude "+mainConfigPath+"\n") + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + svc.SetSSHConfigPath(mainConfigPath) + + aliases, err := svc.loadHostAliases() + if err != nil { + t.Fatalf("load host aliases: %v", err) + } + + assertAliasSet(t, aliases, "cycle-a", "cycle-b") +} + +func TestLoadHostAliases_IncludeNonexistentGraceful(t *testing.T) { + tmpDir := t.TempDir() + mainConfigPath := filepath.Join(tmpDir, "config") + existingIncludePath := filepath.Join(tmpDir, "existing.conf") + + writeSSHConfigFile(t, mainConfigPath, "Host root-host\nInclude "+filepath.Join(tmpDir, "missing", "*.conf")+"\nInclude "+existingIncludePath+"\n") + writeSSHConfigFile(t, existingIncludePath, 
"Host existing-host\n") + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + svc.SetSSHConfigPath(mainConfigPath) + + aliases, err := svc.loadHostAliases() + if err != nil { + t.Fatalf("load host aliases: %v", err) + } + + assertAliasSet(t, aliases, "root-host", "existing-host") +} + +func TestLoadHostAliases_IncludeExpandsHomeDir(t *testing.T) { + tmpDir := t.TempDir() + homeDir := filepath.Join(tmpDir, "home") + t.Setenv("HOME", homeDir) + + mainConfigPath := filepath.Join(tmpDir, "config") + homeIncludeDir := filepath.Join(homeDir, ".ssh", "config.d") + writeSSHConfigFile(t, filepath.Join(homeIncludeDir, "home.conf"), "Host home-host\n") + writeSSHConfigFile(t, mainConfigPath, "Host root-host\nInclude ~/.ssh/config.d/*.conf\n") + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + svc.SetSSHConfigPath(mainConfigPath) + + aliases, err := svc.loadHostAliases() + if err != nil { + t.Fatalf("load host aliases: %v", err) + } + + assertAliasSet(t, aliases, "root-host", "home-host") +} + +func TestDiscover_WithFilteringAndOverrides(t *testing.T) { + tmpDir := t.TempDir() + sshPath := filepath.Join(tmpDir, "config") + configBody := ` +Host prod-1 dev-1 backup-1 + User alice +` + if err := os.WriteFile(sshPath, []byte(configBody), 0o600); err != nil { + t.Fatalf("write ssh config: %v", err) + } + + opts := DiscoveryOptions{ + Include: []string{"prod-*", "dev-*"}, + Ignore: []string{"backup-*"}, + Overrides: map[string]HostOverride{ + "prod-1": { + Label: "Production 1", + Priority: 1, + OpencodePath: "/usr/local/bin/opencode", + }, + }, + } + + runner := discoveryRunnerMock{byAlias: map[string]runResult{ + "prod-1": {stdout: "hostname 10.0.0.1\nuser deploy\n"}, + "dev-1": {stdout: "hostname 10.0.0.2\nuser dev\n"}, + }} + + svc := NewDiscoveryService(opts, runner, nil) + svc.SetSSHConfigPath(sshPath) + + hosts, err := svc.Discover(context.Background()) + if err != nil { + t.Fatalf("discover returned error: %v", err) + 
} + + if len(hosts) != 2 { + t.Fatalf("expected 2 hosts after filters, got %d", len(hosts)) + } + + if hosts[0].Name != "prod-1" { + t.Fatalf("expected first host to be prod-1 due to priority sort, got %q", hosts[0].Name) + } + if hosts[0].Label != "Production 1" { + t.Fatalf("expected override label, got %q", hosts[0].Label) + } + if hosts[0].OpencodeBin != "/usr/local/bin/opencode" { + t.Fatalf("expected override opencode path, got %q", hosts[0].OpencodeBin) + } +} + +func TestNewDiscoveryService_NilLoggerDefaultsToDiscard(t *testing.T) { + t.Parallel() + + svc := NewDiscoveryService(DiscoveryOptions{}, discoveryRunnerMock{}, nil) + if svc == nil { + t.Fatal("expected discovery service to be constructed") + } + if svc.logger == nil { + t.Fatal("expected discovery service logger to default to non-nil discard logger") + } +} + +func writeSSHConfigFile(t *testing.T, filePath, body string) { + t.Helper() + + if err := os.MkdirAll(filepath.Dir(filePath), 0o755); err != nil { + t.Fatalf("create config directory %q: %v", filepath.Dir(filePath), err) + } + if err := os.WriteFile(filePath, []byte(body), 0o600); err != nil { + t.Fatalf("write config file %q: %v", filePath, err) + } +} + +func assertAliasSet(t *testing.T, got []string, want ...string) { + t.Helper() + + if len(got) != len(want) { + t.Fatalf("expected %d aliases, got %d (%v)", len(want), len(got), got) + } + + wantSet := make(map[string]struct{}, len(want)) + for _, alias := range want { + wantSet[alias] = struct{}{} + } + + for _, alias := range got { + if _, ok := wantSet[alias]; !ok { + t.Fatalf("unexpected alias %q in %v", alias, got) + } + } +} diff --git a/internal/remote/probe.go b/internal/remote/probe.go new file mode 100644 index 0000000..cbf7c55 --- /dev/null +++ b/internal/remote/probe.go @@ -0,0 +1,834 @@ +package remote + +import ( + "bytes" + "context" + "encoding/json" + "errors" + "fmt" + "io" + "log/slog" + "path/filepath" + "sort" + "strconv" + "strings" + "sync" + "time" + 
"opencoderouter/internal/model" +) + +type cacheEntry struct { + host model.Host + expiresAt time.Time +} + +type CacheStore struct { + mu sync.RWMutex + ttl time.Duration + nowFunc func() time.Time + entries map[string]cacheEntry +} + +func NewCacheStore(ttl time.Duration) *CacheStore { + return &CacheStore{ + ttl: ttl, + nowFunc: time.Now, + entries: make(map[string]cacheEntry), + } +} + +func (c *CacheStore) Get(key string) (model.Host, bool) { + c.mu.RLock() + entry, ok := c.entries[key] + c.mu.RUnlock() + if !ok { + return model.Host{}, false + } + if c.nowFunc().After(entry.expiresAt) { + c.mu.Lock() + delete(c.entries, key) + c.mu.Unlock() + return model.Host{}, false + } + return entry.host, true +} + +func (c *CacheStore) Set(key string, host model.Host) { + c.mu.Lock() + defer c.mu.Unlock() + c.entries[key] = cacheEntry{host: host, expiresAt: c.nowFunc().Add(c.ttl)} +} + +func (c *CacheStore) PurgeExpired() int { + now := c.nowFunc() + removed := 0 + c.mu.Lock() + defer c.mu.Unlock() + for key, entry := range c.entries { + if now.After(entry.expiresAt) { + delete(c.entries, key) + removed++ + } + } + return removed +} + +type ProbeService struct { + opts ProbeOptions + runner Runner + cache *CacheStore + nowFn func() time.Time + logger *slog.Logger +} + +func NewProbeService(opts ProbeOptions, runner Runner, cache *CacheStore, logger *slog.Logger) *ProbeService { + if runner == nil { + runner = ExecRunner{} + } + if logger == nil { + logger = slog.New(slog.NewTextHandler(io.Discard, nil)) + } + return &ProbeService{ + opts: opts, + runner: runner, + cache: cache, + nowFn: time.Now, + logger: logger, + } +} + +func (s *ProbeService) SetNowFunc(nowFn func() time.Time) { + if nowFn == nil { + s.nowFn = time.Now + return + } + s.nowFn = nowFn +} + +type probeJob struct { + index int + host model.Host +} + +type probeResult struct { + index int + host model.Host + err error +} + +const opencodeMissingSentinel = "__OCR_OPENCODE_MISSING__" + +func (s 
*ProbeService) ProbeHosts(ctx context.Context, hosts []model.Host) ([]model.Host, error) { + startedAt := time.Now() + workerCount := s.opts.MaxParallel + if workerCount < 1 { + workerCount = 1 + } + + s.logger.Debug("probe hosts started", + "host_count", len(hosts), + "worker_count", workerCount, + ) + + if len(hosts) == 0 { + s.logger.Debug("probe hosts completed", + "host_count", 0, + "result_count", 0, + "error_count", 0, + "duration_ms", time.Since(startedAt).Milliseconds(), + ) + return nil, nil + } + + if s.cache != nil { + s.cache.PurgeExpired() + } + + jumpProviders := jumpProviderSet(hosts) + if len(jumpProviders) > 0 { + s.transportPreflight(ctx, hosts, jumpProviders) + propagateBlocked(s.logger, hosts) + } + + updated := make([]model.Host, len(hosts)) + copy(updated, hosts) + jobs := make(chan probeJob) + // Buffer results so workers never block on send: with an unbuffered + // channel, job dispatch deadlocks once pending exceeds workerCount + // (all workers stuck sending results while the dispatcher is stuck + // sending jobs, and results are only drained after close(jobs)). + results := make(chan probeResult, len(hosts)) + + for i := 0; i < workerCount; i++ { + go func() { + for job := range jobs { + jobCtx, cancel := s.hostProbeContext(ctx) + h, err := s.probeHost(jobCtx, job.host) + cancel() + results <- probeResult{index: job.index, host: h, err: err} + } + }() + } + + pending := 0 + for i, host := range hosts { + if host.Transport == model.TransportBlocked { + updated[i] = host + s.logger.Debug("probe host skipped blocked", + "host", host.Name, + "blocked_by", host.BlockedBy, + ) + continue + } + if s.cache != nil { + if cached, ok := s.cache.Get(host.Name); ok { + updated[i] = cached + s.logger.Debug("probe cache hit", "host", host.Name) + continue + } + s.logger.Debug("probe cache miss", "host", host.Name) + } + pending++ + jobs <- probeJob{index: i, host: host} + } + close(jobs) + + var probeErrs []error + for i := 0; i < pending; i++ { + select { + case <-ctx.Done(): + err := fmt.Errorf("probe canceled: %w", ctx.Err()) + probeErrs = append(probeErrs, err) + s.logger.Debug("probe host canceled", + "err_kind", errorKind(err), + "error", sanitizeErrorContext(err), + ) + case res := <-results: + updated[res.index] = res.host + if 
res.err != nil { + probeErrs = append(probeErrs, res.err) + } + if s.cache != nil { + s.cache.Set(res.host.Name, res.host) + } + } + } + + s.logger.Debug("probe hosts completed", + "host_count", len(hosts), + "result_count", len(updated), + "error_count", len(probeErrs), + "duration_ms", time.Since(startedAt).Milliseconds(), + ) + + if len(probeErrs) > 0 { + return updated, errors.Join(probeErrs...) + } + return updated, nil +} + +func (s *ProbeService) hostProbeContext(parent context.Context) (context.Context, context.CancelFunc) { + if s.opts.SSH.ConnectTimeout <= 0 { + return parent, func() {} + } + return context.WithTimeout(parent, time.Duration(s.opts.SSH.ConnectTimeout)*time.Second) +} + +func (s *ProbeService) scanPathsForHost(host model.Host) []string { + if override, ok := s.opts.Overrides[host.Name]; ok && len(override.ScanPaths) > 0 { + return override.ScanPaths + } + if len(s.opts.SessionScanPaths) > 0 { + return s.opts.SessionScanPaths + } + return []string{"~"} +} + +func (s *ProbeService) buildRemoteCmd(host model.Host) string { + paths := s.scanPathsForHost(host) + pathList := strings.Join(paths, " ") + + bin := host.OpencodeBin + if bin == "" { + bin = "opencode" + } + + remoteCmd := fmt.Sprintf( + `OC=$(command -v %s 2>/dev/null || echo "$HOME/.opencode/bin/%s"); `+ + `if [ -x "$OC" ]; then `+ + `find %s -maxdepth 2 -name .opencode -type d 2>/dev/null | while IFS= read -r d; do `+ + `(cd "$(dirname "$d")" && "$OC" session list --format json 2>/dev/null); `+ + `done; else printf '%s\n'; fi`, + bin, bin, pathList, opencodeMissingSentinel, + ) + + s.logger.Debug("probe remote command built", + "host", host.Name, + "cmd", sanitizeCommandForLog(remoteCmd, pathList), + ) + + return remoteCmd +} + +func (s *ProbeService) probeHost(ctx context.Context, host model.Host) (model.Host, error) { + startedAt := time.Now() + s.logger.Debug("probe host started", "host", host.Name) + + remoteCmd := s.buildRemoteCmd(host) + args := s.buildSSHArgs(host, remoteCmd) 
+ s.logger.Debug("probe ssh args built", + "host", host.Name, + "arg_count", len(args), + ) + + out, runErr := s.runner.Run(ctx, "ssh", args...) + var sessions []model.Session + var parseErr error + if runErr == nil && strings.TrimSpace(string(out)) != opencodeMissingSentinel { + sessions, parseErr = s.parseSessions(out, host.Name) + } + + result := classifyProbeResult( + host.Name, + out, + runErr, + parseErr, + runErr != nil && isAuthError(host.Name, runErr, s.logger), + ) + if result.err != nil { + host.Status = result.status + host.LastError = result.lastError + s.logger.Error("probe host failed", + "host", host.Name, + "status", host.Status, + "err_kind", result.errKind, + "error", result.logError, + "duration_ms", time.Since(startedAt).Milliseconds(), + ) + return host, result.err + } + + if s.opts.MaxDisplay > 0 && len(sessions) > s.opts.MaxDisplay { + sessions = sessions[:s.opts.MaxDisplay] + } + + host.Projects = groupSessionsByProject(sessions) + host.Status = result.status + host.LastSeen = s.nowFn() + host.LastError = "" + s.logger.Debug("probe host completed", + "host", host.Name, + "status", host.Status, + "sessions", len(sessions), + "duration_ms", time.Since(startedAt).Milliseconds(), + ) + + return host, nil +} + +type probeClassification struct { + status model.HostStatus + lastError string + err error + errKind string + logError string +} + +func classifyProbeResult(hostName string, output []byte, runErr, parseErr error, authRequired bool) probeClassification { + if runErr != nil { + if authRequired { + return probeClassification{ + status: model.HostStatusAuthRequired, + lastError: "password authentication required", + err: fmt.Errorf("probe host %q: auth required", hostName), + errKind: "auth", + logError: "authentication failed", + } + } + return probeClassification{ + status: model.HostStatusOffline, + lastError: runErr.Error(), + err: fmt.Errorf("probe host %q: %w", hostName, runErr), + errKind: errorKind(runErr), + logError: 
sanitizeErrorContext(runErr), + } + } + + if strings.TrimSpace(string(output)) == opencodeMissingSentinel { + err := fmt.Errorf("probe host %q: opencode binary not found", hostName) + return probeClassification{ + status: model.HostStatusOffline, + lastError: "opencode binary not found", + err: err, + errKind: "opencode_missing", + logError: "opencode binary not found", + } + } + + if parseErr != nil { + return probeClassification{ + status: model.HostStatusError, + lastError: parseErr.Error(), + err: fmt.Errorf("parse sessions for %q: %w", hostName, parseErr), + errKind: errorKind(parseErr), + logError: sanitizeErrorContext(parseErr), + } + } + + return probeClassification{status: model.HostStatusOnline} +} + +func (s *ProbeService) buildSSHArgs(host model.Host, remoteCmd string) []string { + args := make([]string, 0, 12) + if s.opts.SSH.BatchMode { + args = append(args, "-o", "BatchMode=yes") + } + if s.opts.SSH.ConnectTimeout > 0 { + args = append(args, "-o", "ConnectTimeout="+strconv.Itoa(s.opts.SSH.ConnectTimeout)) + } + if s.opts.SSH.ControlMaster != "" { + args = append(args, "-o", "ControlMaster="+s.opts.SSH.ControlMaster) + } + if s.opts.SSH.ControlPersist > 0 { + args = append(args, "-o", "ControlPersist="+strconv.Itoa(s.opts.SSH.ControlPersist)) + } + if s.opts.SSH.ControlPath != "" { + args = append(args, "-o", "ControlPath="+s.opts.SSH.ControlPath) + } + args = append(args, host.Name, remoteCmd) + return args +} + +type remoteSession struct { + ID string `json:"id"` + Project string `json:"project"` + Title string `json:"title"` + LastActivity string `json:"last_activity"` + Status string `json:"status"` + MessageCount int `json:"message_count"` + Agents []string `json:"agents"` + Updated json.Number `json:"updated"` + Created json.Number `json:"created"` + Directory string `json:"directory"` + ProjectID string `json:"projectId"` +} + +type remoteEnvelope struct { + Sessions []remoteSession `json:"sessions"` +} + +func (s *ProbeService) 
parseSessions(raw []byte, host string) ([]model.Session, error) { + trimmed := bytes.TrimSpace(raw) + if len(trimmed) == 0 { + s.logger.Debug("parse sessions decoded", + "host", host, + "records", 0, + "sessions", 0, + "raw_bytes", 0, + ) + return nil, nil + } + + var list []remoteSession + + dec := json.NewDecoder(bytes.NewReader(trimmed)) + for dec.More() { + var batch []remoteSession + if err := dec.Decode(&batch); err != nil { + var env remoteEnvelope + if json.Unmarshal(trimmed, &env) == nil { + list = env.Sessions + break + } + s.logger.Error("parse sessions failed", + "host", host, + "err_kind", "parse", + "error", "invalid session payload", + "raw_bytes", len(trimmed), + ) + return nil, err + } + list = append(list, batch...) + } + + now := s.nowFn() + thresholds := model.ActivityThresholds{ + Active: s.opts.ActiveThreshold, + Idle: s.opts.IdleThreshold, + } + + sessions := make([]model.Session, 0, len(list)) + for _, rs := range list { + status := mapSessionStatus(rs.Status) + if status == model.SessionStatusArchived && !s.opts.ShowArchived { + continue + } + lastActivity := resolveTimestamp(rs) + project := resolveProject(rs) + sessions = append(sessions, model.Session{ + ID: rs.ID, + Project: project, + Title: rs.Title, + Directory: rs.Directory, + LastActivity: lastActivity, + Status: status, + MessageCount: rs.MessageCount, + Agents: append([]string(nil), rs.Agents...), + Activity: model.ResolveActivityState(lastActivity, now, thresholds), + }) + } + + sortBy := strings.ToLower(strings.TrimSpace(s.opts.SortBy)) + if sortBy == "last_activity" { + sort.SliceStable(sessions, func(i, j int) bool { + return sessions[i].LastActivity.After(sessions[j].LastActivity) + }) + } + + s.logger.Debug("parse sessions decoded", + "host", host, + "records", len(list), + "sessions", len(sessions), + "raw_bytes", len(trimmed), + ) + return sessions, nil +} + +func resolveTimestamp(rs remoteSession) time.Time { + if rs.LastActivity != "" { + return 
parseTimestamp(rs.LastActivity) + } + if rs.Updated.String() != "" { + if ms, err := rs.Updated.Int64(); err == nil && ms > 0 { + return time.UnixMilli(ms) + } + } + if rs.Created.String() != "" { + if ms, err := rs.Created.Int64(); err == nil && ms > 0 { + return time.UnixMilli(ms) + } + } + return time.Time{} +} + +func resolveProject(rs remoteSession) string { + if rs.Project != "" { + return rs.Project + } + if rs.Directory != "" { + return filepath.Base(rs.Directory) + } + return "" +} + +func groupSessionsByProject(sessions []model.Session) []model.Project { + byName := make(map[string][]model.Session) + for _, session := range sessions { + projectName := session.Project + if strings.TrimSpace(projectName) == "" { + projectName = "(unknown)" + } + byName[projectName] = append(byName[projectName], session) + } + + projects := make([]model.Project, 0, len(byName)) + for name, grouped := range byName { + projects = append(projects, model.Project{Name: name, Sessions: grouped}) + } + sort.Slice(projects, func(i, j int) bool { + return projects[i].Name < projects[j].Name + }) + return projects +} + +func mapSessionStatus(status string) model.SessionStatus { + switch strings.ToLower(strings.TrimSpace(status)) { + case "active", "running": + return model.SessionStatusActive + case "idle": + return model.SessionStatusIdle + case "archived", "closed", "done": + return model.SessionStatusArchived + default: + return model.SessionStatusUnknown + } +} + +func parseTimestamp(value string) time.Time { + if strings.TrimSpace(value) == "" { + return time.Time{} + } + t, err := time.Parse(time.RFC3339, value) + if err != nil { + return time.Time{} + } + return t +} + +func isAuthError(host string, err error, logger *slog.Logger) bool { + if err == nil { + return false + } + msg := strings.ToLower(err.Error()) + authIndicators := []string{ + "permission denied", + "no more authentication methods", + "publickey,password", + "keyboard-interactive", + "too many authentication 
failures", + "authentication failed", + } + for _, indicator := range authIndicators { + if strings.Contains(msg, indicator) { + if logger != nil { + logger.Error("probe auth indicator detected", + "host", host, + "err_kind", "auth", + "error", "authentication failed", + ) + } + return true + } + } + return false +} + +func (s *ProbeService) AuthBootstrapCmd(host model.Host) string { + controlPath := s.opts.SSH.ControlPath + if controlPath == "" { + controlPath = "~/.ssh/ocr-%C" + } + persist := s.opts.SSH.ControlPersist + if persist <= 0 { + persist = 600 + } + timeout := s.opts.SSH.ConnectTimeout + if timeout <= 0 { + timeout = 10 + } + + cmd := fmt.Sprintf( + "ssh -o ControlMaster=yes -o ControlPath=%s -o ControlPersist=%d -o ConnectTimeout=%d -Nf %s", + controlPath, + persist, + timeout, + host.Name, + ) + return cmd +} + +func jumpProviderSet(hosts []model.Host) map[string]bool { + providers := make(map[string]bool) + for _, h := range hosts { + for _, dep := range h.DependsOn { + providers[dep] = true + } + } + return providers +} + +func (s *ProbeService) transportPreflight(ctx context.Context, hosts []model.Host, providers map[string]bool) { + startedAt := time.Now() + s.logger.Debug("transport preflight started", "provider_count", len(providers)) + + type preflightResult struct { + idx int + status model.TransportStatus + err error + dur time.Duration + } + + results := make(chan preflightResult) + count := 0 + for i, h := range hosts { + if !providers[h.Name] { + continue + } + count++ + go func(idx int, host model.Host) { + hostStarted := time.Now() + s.logger.Debug("transport preflight host started", "host", host.Name) + args := s.buildSSHArgs(host, "true") + _, err := s.runner.Run(ctx, "ssh", args...) 
+ if err == nil { + s.logger.Debug("transport preflight host result", + "host", host.Name, + "status", model.TransportReady, + "duration_ms", time.Since(hostStarted).Milliseconds(), + ) + results <- preflightResult{idx: idx, status: model.TransportReady, dur: time.Since(hostStarted)} + return + } + if isAuthError(host.Name, err, s.logger) { + s.logger.Debug("transport preflight host result", + "host", host.Name, + "status", model.TransportAuthRequired, + "err_kind", "auth", + "duration_ms", time.Since(hostStarted).Milliseconds(), + ) + results <- preflightResult{idx: idx, status: model.TransportAuthRequired, err: err, dur: time.Since(hostStarted)} + return + } + s.logger.Debug("transport preflight host result", + "host", host.Name, + "status", model.TransportUnreachable, + "err_kind", errorKind(err), + "duration_ms", time.Since(hostStarted).Milliseconds(), + ) + results <- preflightResult{idx: idx, status: model.TransportUnreachable, err: err, dur: time.Since(hostStarted)} + }(i, h) + } + + readyCount := 0 + failureCount := 0 + for j := 0; j < count; j++ { + res := <-results + hosts[res.idx].Transport = res.status + if res.err != nil { + hosts[res.idx].TransportError = res.err.Error() + failureCount++ + } else { + readyCount++ + } + } + s.logger.Debug("transport preflight completed", + "provider_count", count, + "ready_count", readyCount, + "failure_count", failureCount, + "duration_ms", time.Since(startedAt).Milliseconds(), + ) +} + +func propagateBlocked(logger *slog.Logger, hosts []model.Host) { + if logger == nil { + logger = slog.New(slog.NewTextHandler(io.Discard, nil)) + } + startedAt := time.Now() + blockedCount := 0 + + aliasIndex := make(map[string]int, len(hosts)) + for i, h := range hosts { + aliasIndex[h.Name] = i + } + + for i := range hosts { + if len(hosts[i].DependsOn) == 0 { + continue + } + var blockers []string + for _, dep := range hosts[i].DependsOn { + if idx, ok := aliasIndex[dep]; ok { + if hosts[idx].Transport != model.TransportReady && 
hosts[idx].Transport != model.TransportUnknown { + blockers = append(blockers, dep) + } + } + } + if len(blockers) > 0 { + hosts[i].Transport = model.TransportBlocked + hosts[i].BlockedBy = blockers + hosts[i].TransportError = fmt.Sprintf("blocked by: %s", strings.Join(blockers, ", ")) + blockedCount++ + logger.Debug("host transport blocked by dependency", + "host", hosts[i].Name, + "blocked_by", blockers, + ) + } + } + logger.Debug("dependency block propagation completed", + "host_count", len(hosts), + "blocked_count", blockedCount, + "duration_ms", time.Since(startedAt).Milliseconds(), + ) +} + +func sanitizeCommandForLog(cmd, pathList string) string { + sanitized := cmd + if strings.TrimSpace(pathList) != "" { + sanitized = strings.ReplaceAll(sanitized, pathList, "") + } + sanitized = strings.Join(strings.Fields(sanitized), " ") + if len(sanitized) > 240 { + return sanitized[:240] + "..." + } + return sanitized +} + +func errorKind(err error) string { + if err == nil { + return "" + } + if errors.Is(err, context.Canceled) { + return "canceled" + } + if errors.Is(err, context.DeadlineExceeded) { + return "timeout" + } + + msg := strings.ToLower(err.Error()) + switch { + case strings.Contains(msg, "permission denied"), + strings.Contains(msg, "no more authentication methods"), + strings.Contains(msg, "publickey,password"), + strings.Contains(msg, "keyboard-interactive"), + strings.Contains(msg, "too many authentication failures"), + strings.Contains(msg, "authentication failed"): + return "auth" + case strings.Contains(msg, "could not resolve hostname"): + return "dns" + case strings.Contains(msg, "connection refused"): + return "connection_refused" + case strings.Contains(msg, "no route to host"): + return "no_route" + case strings.Contains(msg, "timed out"): + return "timeout" + case strings.Contains(msg, "invalid character"), strings.Contains(msg, "cannot unmarshal"): + return "parse" + default: + return "probe" + } +} + +func sanitizeErrorContext(err error) 
string { + switch errorKind(err) { + case "auth": + return "authentication failed" + case "dns": + return "hostname resolution failed" + case "connection_refused": + return "connection refused" + case "no_route": + return "no route to host" + case "timeout": + return "connection timeout" + case "canceled": + return "operation canceled" + case "parse": + return "invalid session payload" + default: + return "probe command failed" + } +} + +func (s *ProbeService) MultiHopBootstrapCmds(host model.Host, allHosts []model.Host) []string { + aliasIndex := make(map[string]int, len(allHosts)) + for i, h := range allHosts { + aliasIndex[h.Name] = i + } + + var cmds []string + + for _, hop := range host.JumpChain { + if hop.External || hop.AliasRef == "" { + continue + } + if idx, ok := aliasIndex[hop.AliasRef]; ok { + jumpHost := allHosts[idx] + if jumpHost.Transport == model.TransportAuthRequired || jumpHost.Status == model.HostStatusAuthRequired { + cmds = append(cmds, s.AuthBootstrapCmd(jumpHost)) + } + } + } + + if host.Status == model.HostStatusAuthRequired || host.Transport == model.TransportAuthRequired { + cmds = append(cmds, s.AuthBootstrapCmd(host)) + } + + return cmds +} diff --git a/internal/remote/probe_test.go b/internal/remote/probe_test.go new file mode 100644 index 0000000..936ca42 --- /dev/null +++ b/internal/remote/probe_test.go @@ -0,0 +1,512 @@ +package remote + +import ( + "context" + "errors" + "strings" + "sync" + "testing" + "time" + + "opencoderouter/internal/model" +) + +type probeRunnerMock struct { + mu sync.Mutex + output map[string]string + err map[string]error + runFn map[string]func(context.Context) ([]byte, error) + calls int + lastSSH []string +} + +func (m *probeRunnerMock) Run(ctx context.Context, _ string, args ...string) ([]byte, error) { + m.mu.Lock() + m.calls++ + m.lastSSH = append([]string(nil), args...) 
+ + if len(args) < 2 { + m.mu.Unlock() + return []byte("[]"), nil + } + host := args[len(args)-2] + if runFn := m.runFn[host]; runFn != nil { + m.mu.Unlock() + return runFn(ctx) + } + err := m.err[host] + out, ok := m.output[host] + m.mu.Unlock() + if err != nil { + return nil, err + } + if ok { + return []byte(out), nil + } + return []byte("[]"), nil +} + +func defaultProbeOptions() ProbeOptions { + return ProbeOptions{ + MaxParallel: 2, + SessionScanPaths: nil, + Overrides: nil, + SSH: SSHOptions{ + BatchMode: true, + ConnectTimeout: 10, + }, + SortBy: "last_activity", + ShowArchived: false, + MaxDisplay: 50, + ActiveThreshold: 10 * time.Minute, + IdleThreshold: 24 * time.Hour, + } +} + +func TestProbeHosts_ParsesSessions(t *testing.T) { + opts := defaultProbeOptions() + opts.ShowArchived = false + runner := &probeRunnerMock{ + output: map[string]string{ + "dev-1": `[ + {"id":"s1","project":"alpha","title":"Fix bug","last_activity":"2026-03-01T10:00:00Z","status":"active","message_count":5,"agents":["coder"]}, + {"id":"s2","project":"alpha","title":"Done","last_activity":"2026-03-01T09:00:00Z","status":"archived","message_count":3,"agents":["coder"]}, + {"id":"s3","project":"beta","title":"Investigate","last_activity":"2026-03-01T11:00:00Z","status":"idle","message_count":2,"agents":["oracle"]} + ]`, + }, + err: map[string]error{}, + } + + svc := NewProbeService(opts, runner, NewCacheStore(time.Minute), nil) + hosts, err := svc.ProbeHosts(context.Background(), []model.Host{{Name: "dev-1"}}) + if err != nil { + t.Fatalf("probe hosts failed: %v", err) + } + if len(hosts) != 1 { + t.Fatalf("expected 1 host, got %d", len(hosts)) + } + if hosts[0].Status != model.HostStatusOnline { + t.Fatalf("expected host online, got %s", hosts[0].Status) + } + if len(hosts[0].Projects) != 2 { + t.Fatalf("expected 2 projects, got %d", len(hosts[0].Projects)) + } + + totalSessions := hosts[0].SessionCount() + if totalSessions != 2 { + t.Fatalf("expected archived sessions filtered, 
got %d visible sessions", totalSessions) + } +} + +func TestProbeHosts_PropagatesErrors(t *testing.T) { + opts := defaultProbeOptions() + runner := &probeRunnerMock{ + output: map[string]string{}, + err: map[string]error{ + "prod-1": errors.New("ssh failed"), + }, + } + + svc := NewProbeService(opts, runner, nil, nil) + hosts, err := svc.ProbeHosts(context.Background(), []model.Host{{Name: "prod-1"}}) + if err == nil { + t.Fatalf("expected error, got nil") + } + if len(hosts) != 1 { + t.Fatalf("expected one host result, got %d", len(hosts)) + } + if hosts[0].Status != model.HostStatusOffline { + t.Fatalf("expected offline status, got %s", hosts[0].Status) + } +} + +func TestProbeHosts_CanceledContextPreservesHostMetadata(t *testing.T) { + opts := defaultProbeOptions() + runner := &probeRunnerMock{ + output: map[string]string{}, + err: map[string]error{}, + runFn: map[string]func(context.Context) ([]byte, error){ + "cancel-1": func(_ context.Context) ([]byte, error) { + time.Sleep(25 * time.Millisecond) + return []byte("[]"), nil + }, + }, + } + + svc := NewProbeService(opts, runner, nil, nil) + hostInput := model.Host{ + Name: "cancel-1", + Label: "Cancel Host", + Address: "cancel-1.local", + User: "alice", + Status: model.HostStatusUnknown, + } + + ctx, cancel := context.WithCancel(context.Background()) + cancel() + + hosts, err := svc.ProbeHosts(ctx, []model.Host{hostInput}) + if err == nil { + t.Fatal("expected cancellation error, got nil") + } + if !errors.Is(err, context.Canceled) { + t.Fatalf("expected context canceled error, got %v", err) + } + if len(hosts) != 1 { + t.Fatalf("expected one host result, got %d", len(hosts)) + } + if hosts[0].Name != hostInput.Name { + t.Fatalf("expected host name %q to be preserved, got %q", hostInput.Name, hosts[0].Name) + } + if hosts[0].Label != hostInput.Label { + t.Fatalf("expected host label %q to be preserved, got %q", hostInput.Label, hosts[0].Label) + } + if hosts[0].Address != hostInput.Address { + 
t.Fatalf("expected host address %q to be preserved, got %q", hostInput.Address, hosts[0].Address) + } + if hosts[0].Status != hostInput.Status { + t.Fatalf("expected host status %q to remain unchanged, got %q", hostInput.Status, hosts[0].Status) + } +} + +func TestProbeHosts_PartialFleetProbeRetainsMetadataForAllEntries(t *testing.T) { + opts := defaultProbeOptions() + opts.MaxParallel = 2 + runner := &probeRunnerMock{ + output: map[string]string{}, + err: map[string]error{}, + runFn: map[string]func(context.Context) ([]byte, error){ + "fast-1": func(_ context.Context) ([]byte, error) { + time.Sleep(25 * time.Millisecond) + return []byte("[]"), nil + }, + "slow-1": func(_ context.Context) ([]byte, error) { + time.Sleep(25 * time.Millisecond) + return []byte("[]"), nil + }, + }, + } + + svc := NewProbeService(opts, runner, nil, nil) + hostInput := []model.Host{ + {Name: "fast-1", Label: "Fast Host", Status: model.HostStatusUnknown}, + {Name: "slow-1", Label: "Slow Host", Status: model.HostStatusUnknown}, + } + + ctx, cancel := context.WithCancel(context.Background()) + cancel() + + hosts, err := svc.ProbeHosts(ctx, hostInput) + if err == nil { + t.Fatal("expected cancellation error, got nil") + } + if !errors.Is(err, context.Canceled) { + t.Fatalf("expected context canceled error, got %v", err) + } + if len(hosts) != len(hostInput) { + t.Fatalf("expected %d host results, got %d", len(hostInput), len(hosts)) + } + if hosts[0].Name != hostInput[0].Name { + t.Fatalf("expected host[0] name %q, got %q", hostInput[0].Name, hosts[0].Name) + } + if hosts[0].Label != hostInput[0].Label { + t.Fatalf("expected host[0] label %q, got %q", hostInput[0].Label, hosts[0].Label) + } + if hosts[1].Name != hostInput[1].Name { + t.Fatalf("expected host[1] name %q, got %q", hostInput[1].Name, hosts[1].Name) + } + if hosts[1].Label != hostInput[1].Label { + t.Fatalf("expected host[1] label %q, got %q", hostInput[1].Label, hosts[1].Label) + } +} + +func 
TestProbeHosts_MissingOpencodeClassifiedOffline(t *testing.T) { + opts := defaultProbeOptions() + runner := &probeRunnerMock{ + output: map[string]string{ + "no-opencode": "__OCR_OPENCODE_MISSING__\n", + }, + err: map[string]error{}, + } + + svc := NewProbeService(opts, runner, nil, nil) + hosts, err := svc.ProbeHosts(context.Background(), []model.Host{{Name: "no-opencode"}}) + if err == nil { + t.Fatal("expected missing opencode to return an error") + } + if len(hosts) != 1 { + t.Fatalf("expected one host result, got %d", len(hosts)) + } + if hosts[0].Status != model.HostStatusOffline { + t.Fatalf("expected offline status for missing opencode, got %q", hosts[0].Status) + } + if !strings.Contains(hosts[0].LastError, "opencode") { + t.Fatalf("expected missing-opencode error context, got %q", hosts[0].LastError) + } +} + +func TestProbeHosts_PerHostTimeoutIsolation(t *testing.T) { + opts := defaultProbeOptions() + opts.MaxParallel = 2 + opts.SSH.ConnectTimeout = 1 + + var mu sync.Mutex + sawDeadline := map[string]bool{} + + runner := &probeRunnerMock{ + output: map[string]string{}, + err: map[string]error{}, + runFn: map[string]func(context.Context) ([]byte, error){ + "slow-timeout": func(ctx context.Context) ([]byte, error) { + _, ok := ctx.Deadline() + mu.Lock() + sawDeadline["slow-timeout"] = ok + mu.Unlock() + <-ctx.Done() + return nil, ctx.Err() + }, + "fast-ok": func(ctx context.Context) ([]byte, error) { + _, ok := ctx.Deadline() + mu.Lock() + sawDeadline["fast-ok"] = ok + mu.Unlock() + return []byte(`[]`), nil + }, + }, + } + + svc := NewProbeService(opts, runner, nil, nil) + + type probeOutcome struct { + hosts []model.Host + err error + } + outcomeCh := make(chan probeOutcome, 1) + go func() { + hosts, err := svc.ProbeHosts(context.Background(), []model.Host{{Name: "slow-timeout"}, {Name: "fast-ok"}}) + outcomeCh <- probeOutcome{hosts: hosts, err: err} + }() + + select { + case outcome := <-outcomeCh: + if outcome.err == nil { + t.Fatal("expected timeout 
error for slow host, got nil") + } + if !errors.Is(outcome.err, context.DeadlineExceeded) { + t.Fatalf("expected deadline exceeded in aggregate error, got %v", outcome.err) + } + if len(outcome.hosts) != 2 { + t.Fatalf("expected two host results, got %d", len(outcome.hosts)) + } + if outcome.hosts[0].Name != "slow-timeout" { + t.Fatalf("expected slow host metadata retained, got %q", outcome.hosts[0].Name) + } + if outcome.hosts[0].Status != model.HostStatusOffline { + t.Fatalf("expected slow host offline, got %q", outcome.hosts[0].Status) + } + if outcome.hosts[1].Name != "fast-ok" { + t.Fatalf("expected fast host metadata retained, got %q", outcome.hosts[1].Name) + } + if outcome.hosts[1].Status != model.HostStatusOnline { + t.Fatalf("expected fast host online, got %q", outcome.hosts[1].Status) + } + case <-time.After(2500 * time.Millisecond): + t.Fatal("probe hosts did not return within per-host timeout bound") + } + + mu.Lock() + defer mu.Unlock() + if !sawDeadline["slow-timeout"] { + t.Fatal("expected slow host probe context to have deadline") + } + if !sawDeadline["fast-ok"] { + t.Fatal("expected fast host probe context to have deadline") + } +} + +func TestClassifyProbeResult(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + output []byte + runErr error + parseErr error + authRequired bool + wantStatus model.HostStatus + wantErr bool + wantErrKind string + wantLastError string + }{ + { + name: "auth error", + runErr: errors.New("permission denied"), + authRequired: true, + wantStatus: model.HostStatusAuthRequired, + wantErr: true, + wantErrKind: "auth", + wantLastError: "password authentication required", + }, + { + name: "generic run error", + runErr: context.DeadlineExceeded, + wantStatus: model.HostStatusOffline, + wantErr: true, + wantErrKind: "timeout", + wantLastError: context.DeadlineExceeded.Error(), + }, + { + name: "missing opencode sentinel", + output: []byte(opencodeMissingSentinel + "\n"), + wantStatus: 
model.HostStatusOffline, + wantErr: true, + wantErrKind: "opencode_missing", + wantLastError: "opencode binary not found", + }, + { + name: "parse error", + parseErr: errors.New("invalid character 'o' in literal null"), + wantStatus: model.HostStatusError, + wantErr: true, + wantErrKind: "parse", + wantLastError: "invalid character 'o' in literal null", + }, + { + name: "success", + output: []byte("[]"), + wantStatus: model.HostStatusOnline, + wantErr: false, + }, + } + + for _, tt := range tests { + tt := tt + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + got := classifyProbeResult("test-host", tt.output, tt.runErr, tt.parseErr, tt.authRequired) + if got.status != tt.wantStatus { + t.Fatalf("expected status %q, got %q", tt.wantStatus, got.status) + } + if tt.wantLastError != "" && got.lastError != tt.wantLastError { + t.Fatalf("expected lastError %q, got %q", tt.wantLastError, got.lastError) + } + if tt.wantErr { + if got.err == nil { + t.Fatal("expected non-nil error") + } + if got.errKind != tt.wantErrKind { + t.Fatalf("expected errKind %q, got %q", tt.wantErrKind, got.errKind) + } + return + } + if got.err != nil { + t.Fatalf("expected nil error, got %v", got.err) + } + }) + } +} + +func TestProbeHosts_OpencodeNativeFormat(t *testing.T) { + opts := defaultProbeOptions() + runner := &probeRunnerMock{ + output: map[string]string{ + "dev-2": `[ + {"id":"s1","title":"Fix bug","updated":1772565534745,"created":1772563561839,"projectId":"abc123","directory":"/home/user/DeviceEmulator"}, + {"id":"s2","title":"Add feature","updated":1772565000000,"created":1772560000000,"projectId":"def456","directory":"/home/user/MobiCom"} + ]`, + }, + err: map[string]error{}, + } + + svc := NewProbeService(opts, runner, NewCacheStore(time.Minute), nil) + hosts, err := svc.ProbeHosts(context.Background(), []model.Host{{Name: "dev-2"}}) + if err != nil { + t.Fatalf("probe failed: %v", err) + } + if hosts[0].Status != model.HostStatusOnline { + t.Fatalf("expected online, got 
%s", hosts[0].Status) + } + if len(hosts[0].Projects) != 2 { + t.Fatalf("expected 2 projects, got %d", len(hosts[0].Projects)) + } + found := false + for _, p := range hosts[0].Projects { + if p.Name == "DeviceEmulator" { + found = true + if len(p.Sessions) != 1 || p.Sessions[0].ID != "s1" { + t.Fatalf("unexpected sessions for DeviceEmulator: %+v", p.Sessions) + } + if p.Sessions[0].LastActivity.IsZero() { + t.Fatal("expected non-zero LastActivity from epoch ms") + } + } + } + if !found { + t.Fatal("project DeviceEmulator not found") + } +} + +func TestProbeHosts_MultiArraySweep(t *testing.T) { + opts := defaultProbeOptions() + runner := &probeRunnerMock{ + output: map[string]string{ + "dev-3": `[{"id":"s1","title":"A","updated":1772565534745,"directory":"/home/user/proj-a"}]` + + `[{"id":"s2","title":"B","updated":1772565000000,"directory":"/home/user/proj-b"}]`, + }, + err: map[string]error{}, + } + + svc := NewProbeService(opts, runner, NewCacheStore(time.Minute), nil) + hosts, err := svc.ProbeHosts(context.Background(), []model.Host{{Name: "dev-3"}}) + if err != nil { + t.Fatalf("probe failed: %v", err) + } + if len(hosts[0].Projects) != 2 { + t.Fatalf("expected 2 projects from multi-array, got %d", len(hosts[0].Projects)) + } + total := hosts[0].SessionCount() + if total != 2 { + t.Fatalf("expected 2 sessions total, got %d", total) + } +} + +func TestProbeHosts_UsesCache(t *testing.T) { + opts := defaultProbeOptions() + runner := &probeRunnerMock{ + output: map[string]string{ + "cache-1": `[]`, + }, + err: map[string]error{}, + } + + cache := NewCacheStore(time.Minute) + svc := NewProbeService(opts, runner, cache, nil) + + _, err := svc.ProbeHosts(context.Background(), []model.Host{{Name: "cache-1"}}) + if err != nil { + t.Fatalf("first probe failed: %v", err) + } + _, err = svc.ProbeHosts(context.Background(), []model.Host{{Name: "cache-1"}}) + if err != nil { + t.Fatalf("second probe failed: %v", err) + } + + runner.mu.Lock() + defer runner.mu.Unlock() + if 
runner.calls != 1 { + t.Fatalf("expected runner to be called once due to cache, got %d", runner.calls) + } +} + +func TestNewProbeService_NilLoggerDefaultsToDiscard(t *testing.T) { + t.Parallel() + + svc := NewProbeService(defaultProbeOptions(), &probeRunnerMock{}, nil, nil) + if svc == nil { + t.Fatal("expected probe service to be constructed") + } + if svc.logger == nil { + t.Fatal("expected probe service logger to default to non-nil discard logger") + } +} diff --git a/internal/remote/types.go b/internal/remote/types.go new file mode 100644 index 0000000..2219230 --- /dev/null +++ b/internal/remote/types.go @@ -0,0 +1,68 @@ +package remote + +import ( + "context" + "errors" + "fmt" + "os/exec" + "strings" + "time" +) + +type Runner interface { + Run(ctx context.Context, name string, args ...string) ([]byte, error) +} + +type ExecRunner struct{} + +func (ExecRunner) Run(ctx context.Context, name string, args ...string) ([]byte, error) { + cmd := exec.CommandContext(ctx, name, args...) + out, err := cmd.Output() + if err == nil { + return out, nil + } + + var exitErr *exec.ExitError + if errors.As(err, &exitErr) { + stderr := strings.TrimSpace(string(exitErr.Stderr)) + if stderr != "" { + return nil, fmt.Errorf("run %s %v: %w: %s", name, args, err, stderr) + } + } + + return nil, fmt.Errorf("run %s %v: %w", name, args, err) +} + +type HostOverride struct { + Label string + Priority int + OpencodePath string + ScanPaths []string +} + +type DiscoveryOptions struct { + Include []string + Ignore []string + Overrides map[string]HostOverride + SSHConfigPath string +} + +type SSHOptions struct { + ControlMaster string + ControlPersist int + ControlPath string + BatchMode bool + ConnectTimeout int +} + +type ProbeOptions struct { + MaxParallel int + SessionScanPaths []string + Overrides map[string]HostOverride + SSH SSHOptions + SortBy string + ShowArchived bool + MaxDisplay int + ActiveThreshold time.Duration + IdleThreshold time.Duration +} diff --git 
a/internal/scanner/scanner.go b/internal/scanner/scanner.go index ae5fb59..55104b1 100644 --- a/internal/scanner/scanner.go +++ b/internal/scanner/scanner.go @@ -6,8 +6,11 @@ import ( "fmt" "io" "log/slog" + "math" "net/http" "path/filepath" + "strconv" + "strings" "sync" "time" @@ -131,19 +134,39 @@ func (s *Scanner) probePort(ctx context.Context, port int) { // Step 2: Get project info. project, err := s.getProject(ctx, baseURL) if err != nil { - // Healthy but can't get project — register with minimal info. - s.registry.Upsert(port, fmt.Sprintf("port-%d", port), fmt.Sprintf("/unknown/port-%d", port), health.Version) - return + project = &projectResponse{ + ID: fmt.Sprintf("port-%d", port), + Name: fmt.Sprintf("port-%d", port), + Path: fmt.Sprintf("/unknown/port-%d", port), + } } - projectPath := project.Path + projectPath := strings.TrimSpace(project.Path) if projectPath == "" { - projectPath = "/unknown/" + project.ID + fallbackID := strings.TrimSpace(project.ID) + if fallbackID == "" { + fallbackID = fmt.Sprintf("port-%d", port) + } + projectPath = "/unknown/" + fallbackID + } + projectName := strings.TrimSpace(project.Name) + if projectName == "" { + projectName = filepath.Base(projectPath) } - // Use the last folder name as the display name so it matches the slug. - projectName := filepath.Base(projectPath) s.registry.Upsert(port, projectName, projectPath, health.Version) + + backend, ok := s.registry.LookupByPort(port) + if !ok { + return + } + + sessions, err := s.getSessions(ctx, baseURL) + if err != nil { + s.logger.Debug("session probe failed", "port", port, "error", err) + return + } + s.registry.ReplaceSessions(backend.Slug, sessions) } // getHealth calls GET /global/health on the target. 
@@ -160,7 +183,9 @@ func (s *Scanner) getHealth(ctx context.Context, baseURL string) (*healthRespons defer resp.Body.Close() if resp.StatusCode != http.StatusOK { - io.Copy(io.Discard, resp.Body) + if _, copyErr := io.Copy(io.Discard, resp.Body); copyErr != nil { + s.logger.Debug("health response drain failed", "error", copyErr) + } return nil, fmt.Errorf("health check returned %d", resp.StatusCode) } @@ -185,13 +210,314 @@ func (s *Scanner) getProject(ctx context.Context, baseURL string) (*projectRespo defer resp.Body.Close() if resp.StatusCode != http.StatusOK { - io.Copy(io.Discard, resp.Body) + if _, copyErr := io.Copy(io.Discard, resp.Body); copyErr != nil { + s.logger.Debug("project response drain failed", "error", copyErr) + } return nil, fmt.Errorf("project endpoint returned %d", resp.StatusCode) } - var p projectResponse - if err := json.NewDecoder(resp.Body).Decode(&p); err != nil { + var payload map[string]interface{} + decoder := json.NewDecoder(resp.Body) + decoder.UseNumber() + if err := decoder.Decode(&payload); err != nil { return nil, fmt.Errorf("failed to decode project response: %w", err) } - return &p, nil + + return parseProjectPayload(payload), nil +} + +func (s *Scanner) getSessions(ctx context.Context, baseURL string) ([]registry.SessionMetadata, error) { + endpoints := []string{"/session", "/sessions"} + var lastErr error + + for _, endpoint := range endpoints { + sessions, status, err := s.getSessionsFromEndpoint(ctx, baseURL+endpoint) + if err == nil { + return sessions, nil + } + lastErr = err + if status != http.StatusNotFound { + return nil, err + } + } + + if lastErr == nil { + lastErr = fmt.Errorf("session endpoint unavailable") + } + return nil, lastErr +} + +func (s *Scanner) getSessionsFromEndpoint(ctx context.Context, endpointURL string) ([]registry.SessionMetadata, int, error) { + req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpointURL, nil) + if err != nil { + return nil, 0, err + } + + resp, err := 
s.client.Do(req) + if err != nil { + return nil, 0, err + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusOK { + if _, copyErr := io.Copy(io.Discard, resp.Body); copyErr != nil { + s.logger.Debug("session response drain failed", "error", copyErr) + } + return nil, resp.StatusCode, fmt.Errorf("session endpoint returned %d", resp.StatusCode) + } + + var payload interface{} + decoder := json.NewDecoder(resp.Body) + decoder.UseNumber() + if err := decoder.Decode(&payload); err != nil { + return nil, resp.StatusCode, fmt.Errorf("failed to decode session response: %w", err) + } + + return parseSessionPayload(payload), resp.StatusCode, nil +} + +func parseProjectPayload(payload map[string]interface{}) *projectResponse { + p := &projectResponse{ + ID: firstString(payload, "id", "project_id", "projectId"), + Name: firstString(payload, "name", "project_name", "projectName"), + Path: firstString(payload, "path", "directory", "workspace_path", "workspacePath", "cwd"), + } + + if p.Path == "" { + if v, ok := payload["worktree"]; ok { + p.Path = extractPath(v) + if p.Name == "" { + p.Name = extractName(v) + } + } + } + + if p.Path == "" { + if v, ok := payload["sandboxes"]; ok { + p.Path, p.Name = extractFromSandboxes(v) + } + } + + if p.Name == "" && p.Path != "" { + p.Name = filepath.Base(p.Path) + } + if p.ID == "" { + switch { + case p.Name != "": + p.ID = p.Name + case p.Path != "": + p.ID = filepath.Base(p.Path) + } + } + + return p +} + +func extractFromSandboxes(value interface{}) (string, string) { + items, ok := value.([]interface{}) + if !ok { + return "", "" + } + + for _, item := range items { + path := extractPath(item) + if path == "" { + continue + } + name := extractName(item) + if name == "" { + name = filepath.Base(path) + } + return path, name + } + + return "", "" +} + +func extractPath(value interface{}) string { + switch v := value.(type) { + case string: + return strings.TrimSpace(v) + case map[string]interface{}: + return firstString(v, 
"path", "directory", "cwd", "workspace_path", "workspacePath", "root") + default: + return "" + } +} + +func extractName(value interface{}) string { + switch v := value.(type) { + case map[string]interface{}: + return firstString(v, "name", "project", "project_name", "projectName", "id") + default: + return "" + } +} + +func parseSessionPayload(payload interface{}) []registry.SessionMetadata { + var entries []interface{} + + switch v := payload.(type) { + case []interface{}: + entries = v + case map[string]interface{}: + if list, ok := v["sessions"].([]interface{}); ok { + entries = list + } else if data, ok := v["data"].(map[string]interface{}); ok { + if list, ok := data["sessions"].([]interface{}); ok { + entries = list + } + } else if firstString(v, "id", "session_id", "sessionId", "sessionID") != "" { + entries = []interface{}{v} + } + } + + if len(entries) == 0 { + return nil + } + + result := make([]registry.SessionMetadata, 0, len(entries)) + for _, entry := range entries { + obj, ok := entry.(map[string]interface{}) + if !ok { + continue + } + session := parseSessionEntry(obj) + if session.ID == "" { + continue + } + result = append(result, session) + } + + return result +} + +func parseSessionEntry(payload map[string]interface{}) registry.SessionMetadata { + return registry.SessionMetadata{ + ID: firstString(payload, "id", "session_id", "sessionId", "sessionID"), + Title: firstString(payload, "title", "name"), + Directory: firstString(payload, "directory", "worktree", "cwd", "workspace_path", "workspacePath"), + Status: firstString(payload, "status", "state"), + LastActivity: firstTime(payload, "last_activity", "lastActivity", "updated", "updated_at", "updatedAt"), + CreatedAt: firstTime(payload, "created_at", "createdAt", "created", "time"), + DaemonPort: firstInt(payload, "daemon_port", "daemonPort", "port"), + AttachedClients: firstInt(payload, "attached_clients", "attachedClients"), + } +} + +func firstString(payload map[string]interface{}, keys 
...string) string { + for _, key := range keys { + value, ok := payload[key] + if !ok { + continue + } + switch v := value.(type) { + case string: + if s := strings.TrimSpace(v); s != "" { + return s + } + case json.Number: + if s := strings.TrimSpace(v.String()); s != "" { + return s + } + case float64: + return strconv.FormatInt(int64(v), 10) + case int: + return strconv.Itoa(v) + case int64: + return strconv.FormatInt(v, 10) + } + } + return "" +} + +func firstInt(payload map[string]interface{}, keys ...string) int { + for _, key := range keys { + value, ok := payload[key] + if !ok { + continue + } + switch v := value.(type) { + case json.Number: + if n, err := v.Int64(); err == nil { + return int(n) + } + if f, err := v.Float64(); err == nil { + return int(f) + } + case float64: + return int(v) + case float32: + return int(v) + case int: + return v + case int64: + return int(v) + case string: + if n, err := strconv.Atoi(strings.TrimSpace(v)); err == nil { + return n + } + } + } + return 0 +} + +func firstTime(payload map[string]interface{}, keys ...string) time.Time { + for _, key := range keys { + value, ok := payload[key] + if !ok { + continue + } + t := parseFlexibleTime(value) + if !t.IsZero() { + return t + } + } + return time.Time{} +} + +func parseFlexibleTime(value interface{}) time.Time { + switch v := value.(type) { + case string: + s := strings.TrimSpace(v) + if s == "" { + return time.Time{} + } + if ts, err := time.Parse(time.RFC3339Nano, s); err == nil { + return ts + } + if ts, err := time.Parse(time.RFC3339, s); err == nil { + return ts + } + if n, err := strconv.ParseInt(s, 10, 64); err == nil { + return unixMaybeMillis(n) + } + case json.Number: + if n, err := v.Int64(); err == nil { + return unixMaybeMillis(n) + } + if f, err := v.Float64(); err == nil { + return unixMaybeMillis(int64(f)) + } + case float64: + if math.IsNaN(v) || math.IsInf(v, 0) { + return time.Time{} + } + return unixMaybeMillis(int64(v)) + case int64: + return 
unixMaybeMillis(v) + case int: + return unixMaybeMillis(int64(v)) + } + return time.Time{} +} + +func unixMaybeMillis(value int64) time.Time { + if value <= 0 { + return time.Time{} + } + if value > 1_000_000_000_000 { + return time.UnixMilli(value) + } + return time.Unix(value, 0) } diff --git a/internal/scanner/scanner_test.go b/internal/scanner/scanner_test.go index e6832dc..070f681 100644 --- a/internal/scanner/scanner_test.go +++ b/internal/scanner/scanner_test.go @@ -9,6 +9,7 @@ import ( "os" "strconv" "strings" + "sync" "testing" "time" @@ -24,18 +25,28 @@ func fakeOpenCode(healthy bool, projectName, projectPath, version string) *httpt mux := http.NewServeMux() mux.HandleFunc("/global/health", func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", "application/json") - json.NewEncoder(w).Encode(map[string]interface{}{ + if err := json.NewEncoder(w).Encode(map[string]interface{}{ "healthy": healthy, "version": version, - }) + }); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } }) mux.HandleFunc("/project/current", func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", "application/json") - json.NewEncoder(w).Encode(map[string]interface{}{ + if err := json.NewEncoder(w).Encode(map[string]interface{}{ "id": projectName, "name": projectName, "path": projectPath, - }) + }); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } + }) + mux.HandleFunc("/session", func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "application/json") + if err := json.NewEncoder(w).Encode([]map[string]interface{}{}); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } }) return httptest.NewServer(mux) } @@ -124,10 +135,12 @@ func TestProbePort_NoServer(t *testing.T) { func TestProbePort_HealthOK_ProjectFails(t *testing.T) { mux := http.NewServeMux() mux.HandleFunc("/global/health", func(w http.ResponseWriter, r *http.Request) { - 
json.NewEncoder(w).Encode(map[string]interface{}{ + if err := json.NewEncoder(w).Encode(map[string]interface{}{ "healthy": true, "version": "1.0.0", - }) + }); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } }) mux.HandleFunc("/project/current", func(w http.ResponseWriter, r *http.Request) { w.WriteHeader(http.StatusInternalServerError) @@ -154,7 +167,9 @@ func TestProbePort_HealthOK_ProjectFails(t *testing.T) { func TestProbePort_MalformedHealth(t *testing.T) { mux := http.NewServeMux() mux.HandleFunc("/global/health", func(w http.ResponseWriter, r *http.Request) { - w.Write([]byte("not json")) + if _, err := w.Write([]byte("not json")); err != nil { + t.Logf("write failed: %v", err) + } }) srv := httptest.NewServer(mux) defer srv.Close() @@ -201,14 +216,18 @@ func TestProbePort_HealthNon200(t *testing.T) { func TestProbePort_FallbackToID(t *testing.T) { mux := http.NewServeMux() mux.HandleFunc("/global/health", func(w http.ResponseWriter, r *http.Request) { - json.NewEncoder(w).Encode(map[string]interface{}{"healthy": true, "version": "1.0"}) + if err := json.NewEncoder(w).Encode(map[string]interface{}{"healthy": true, "version": "1.0"}); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } }) mux.HandleFunc("/project/current", func(w http.ResponseWriter, r *http.Request) { - json.NewEncoder(w).Encode(map[string]interface{}{ + if err := json.NewEncoder(w).Encode(map[string]interface{}{ "id": "fallback-id", "name": "", "path": "", - }) + }); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } }) srv := httptest.NewServer(mux) defer srv.Close() @@ -232,6 +251,154 @@ func TestProbePort_FallbackToID(t *testing.T) { } } +func TestProbePort_SessionDiscovery(t *testing.T) { + mux := http.NewServeMux() + mux.HandleFunc("/global/health", func(w http.ResponseWriter, r *http.Request) { + if err := json.NewEncoder(w).Encode(map[string]interface{}{"healthy": true, "version": "1.1.0"}); err != nil 
{ + http.Error(w, err.Error(), http.StatusInternalServerError) + } + }) + mux.HandleFunc("/project/current", func(w http.ResponseWriter, r *http.Request) { + if err := json.NewEncoder(w).Encode(map[string]interface{}{ + "id": "proj", + "name": "proj", + "path": "/home/test/proj", + }); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } + }) + mux.HandleFunc("/session", func(w http.ResponseWriter, r *http.Request) { + if err := json.NewEncoder(w).Encode(map[string]interface{}{ + "sessions": []map[string]interface{}{ + {"id": "s-1", "title": "first", "directory": "/home/test/proj", "status": "active", "attached_clients": 2}, + }, + }); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } + }) + + srv := httptest.NewServer(mux) + defer srv.Close() + + port := extractPort(t, srv.URL) + reg := registry.New(30*time.Second, testLogger()) + sc := New(reg, port, port, 5*time.Second, 1, 2*time.Second, testLogger()) + + sc.probePort(context.Background(), port) + + sessions := reg.ListSessions("proj") + if len(sessions) != 1 { + t.Fatalf("expected 1 session, got %d", len(sessions)) + } + if sessions[0].ID != "s-1" { + t.Fatalf("expected session id s-1, got %q", sessions[0].ID) + } + if sessions[0].AttachedClients != 2 { + t.Fatalf("expected attached clients 2, got %d", sessions[0].AttachedClients) + } +} + +func TestScan_SessionDisappearanceAcrossCycles(t *testing.T) { + var ( + mu sync.RWMutex + sessions = []map[string]interface{}{ + {"id": "s-1", "title": "first"}, + {"id": "s-2", "title": "second"}, + } + ) + + mux := http.NewServeMux() + mux.HandleFunc("/global/health", func(w http.ResponseWriter, r *http.Request) { + if err := json.NewEncoder(w).Encode(map[string]interface{}{"healthy": true, "version": "1.0.0"}); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } + }) + mux.HandleFunc("/project/current", func(w http.ResponseWriter, r *http.Request) { + if err := 
json.NewEncoder(w).Encode(map[string]interface{}{"id": "proj", "name": "proj", "path": "/home/test/proj"}); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } + }) + mux.HandleFunc("/session", func(w http.ResponseWriter, r *http.Request) { + mu.RLock() + defer mu.RUnlock() + copied := make([]map[string]interface{}, len(sessions)) + for i, session := range sessions { + clone := make(map[string]interface{}, len(session)) + for k, v := range session { + clone[k] = v + } + copied[i] = clone + } + if err := json.NewEncoder(w).Encode(copied); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } + }) + + srv := httptest.NewServer(mux) + defer srv.Close() + + port := extractPort(t, srv.URL) + reg := registry.New(30*time.Second, testLogger()) + sc := New(reg, port, port, 5*time.Second, 1, 2*time.Second, testLogger()) + + sc.scan(context.Background()) + if got := len(reg.ListSessions("proj")); got != 2 { + t.Fatalf("expected 2 sessions after first scan, got %d", got) + } + + mu.Lock() + sessions = []map[string]interface{}{{"id": "s-2", "title": "second"}} + mu.Unlock() + + sc.scan(context.Background()) + list := reg.ListSessions("proj") + if len(list) != 1 { + t.Fatalf("expected 1 session after second scan, got %d", len(list)) + } + if list[0].ID != "s-2" { + t.Fatalf("expected remaining session s-2, got %q", list[0].ID) + } +} + +func TestProbePort_ProjectPayloadDeltaWorktreeSandboxes(t *testing.T) { + mux := http.NewServeMux() + mux.HandleFunc("/global/health", func(w http.ResponseWriter, r *http.Request) { + if err := json.NewEncoder(w).Encode(map[string]interface{}{"healthy": true, "version": "1.0"}); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } + }) + mux.HandleFunc("/project/current", func(w http.ResponseWriter, r *http.Request) { + if err := json.NewEncoder(w).Encode(map[string]interface{}{ + "worktree": map[string]interface{}{"path": "/home/test/delta-proj"}, + "sandboxes": 
[]map[string]interface{}{{"path": "/home/test/delta-proj"}}, + }); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } + }) + mux.HandleFunc("/session", func(w http.ResponseWriter, r *http.Request) { + if err := json.NewEncoder(w).Encode(map[string]interface{}{"sessions": []map[string]interface{}{}}); err != nil { + http.Error(w, err.Error(), http.StatusInternalServerError) + } + }) + + srv := httptest.NewServer(mux) + defer srv.Close() + + port := extractPort(t, srv.URL) + reg := registry.New(30*time.Second, testLogger()) + sc := New(reg, port, port, 5*time.Second, 1, 2*time.Second, testLogger()) + + sc.probePort(context.Background(), port) + + b, ok := reg.Lookup("delta-proj") + if !ok { + t.Fatal("expected backend registered from worktree/sandboxes payload") + } + if b.ProjectPath != "/home/test/delta-proj" { + t.Fatalf("expected project path from delta payload, got %q", b.ProjectPath) + } +} + // --------------------------------------------------------------------------- // Full scan cycle with multiple backends // --------------------------------------------------------------------------- diff --git a/internal/session/eventbus.go b/internal/session/eventbus.go new file mode 100644 index 0000000..919b712 --- /dev/null +++ b/internal/session/eventbus.go @@ -0,0 +1,149 @@ +package session + +import ( + "errors" + "sync" +) + +const defaultEventBusBuffer = 64 + +var ( + ErrEventBusClosed = errors.New("session event bus closed") + ErrNilEvent = errors.New("session event is nil") +) + +type eventSubscriber struct { + ch chan Event + filter EventFilter +} + +type ChannelEventBus struct { + mu sync.RWMutex + subscribers map[uint64]*eventSubscriber + nextID uint64 + buffer int + closed bool +} + +func NewEventBus(buffer int) *ChannelEventBus { + if buffer <= 0 { + buffer = defaultEventBusBuffer + } + + return &ChannelEventBus{ + subscribers: make(map[uint64]*eventSubscriber), + buffer: buffer, + } +} + +func (b *ChannelEventBus) 
Subscribe(filter EventFilter) (<-chan Event, func(), error) { + if b == nil { + return nil, nil, ErrEventBusClosed + } + + sub := &eventSubscriber{ + ch: make(chan Event, b.buffer), + filter: filter, + } + + b.mu.Lock() + if b.closed { + b.mu.Unlock() + return nil, nil, ErrEventBusClosed + } + id := b.nextID + b.nextID++ + b.subscribers[id] = sub + b.mu.Unlock() + + var once sync.Once + unsubscribe := func() { + once.Do(func() { + b.unsubscribe(id) + }) + } + + return sub.ch, unsubscribe, nil +} + +func (b *ChannelEventBus) Publish(event Event) error { + if b == nil { + return ErrEventBusClosed + } + if event == nil { + return ErrNilEvent + } + + b.mu.RLock() + defer b.mu.RUnlock() + + if b.closed { + return ErrEventBusClosed + } + + for _, sub := range b.subscribers { + if !eventMatchesFilter(event, sub.filter) { + continue + } + select { + case sub.ch <- event: + default: + } + } + + return nil +} + +func (b *ChannelEventBus) Close() { + if b == nil { + return + } + + b.mu.Lock() + if b.closed { + b.mu.Unlock() + return + } + b.closed = true + + for id, sub := range b.subscribers { + delete(b.subscribers, id) + close(sub.ch) + } + b.mu.Unlock() +} + +func (b *ChannelEventBus) unsubscribe(id uint64) { + if b == nil { + return + } + + b.mu.Lock() + sub, ok := b.subscribers[id] + if ok { + delete(b.subscribers, id) + } + b.mu.Unlock() + + if ok { + close(sub.ch) + } +} + +func eventMatchesFilter(event Event, filter EventFilter) bool { + if filter.SessionID != "" && event.SessionID() != filter.SessionID { + return false + } + if len(filter.Types) == 0 { + return true + } + + et := event.Type() + for _, allowed := range filter.Types { + if et == allowed { + return true + } + } + + return false +} diff --git a/internal/session/eventbus_test.go b/internal/session/eventbus_test.go new file mode 100644 index 0000000..1896eb5 --- /dev/null +++ b/internal/session/eventbus_test.go @@ -0,0 +1,104 @@ +package session + +import ( + "testing" + "time" +) + +func 
TestEventBusSubscribePublishFilterAndUnsubscribe(t *testing.T) { + bus := NewEventBus(8) + + ch, unsubscribe, err := bus.Subscribe(EventFilter{ + SessionID: "s-1", + Types: []EventType{EventTypeSessionCreated}, + }) + if err != nil { + t.Fatalf("subscribe: %v", err) + } + + createdMatch := SessionCreated{ + At: time.Now(), + Session: SessionHandle{ + ID: "s-1", + }, + } + + if err := bus.Publish(createdMatch); err != nil { + t.Fatalf("publish created: %v", err) + } + + select { + case event := <-ch: + if event.Type() != EventTypeSessionCreated { + t.Fatalf("event type = %q, want %q", event.Type(), EventTypeSessionCreated) + } + if event.SessionID() != "s-1" { + t.Fatalf("session id = %q, want s-1", event.SessionID()) + } + case <-time.After(500 * time.Millisecond): + t.Fatal("timed out waiting for matching event") + } + + nonMatchingType := SessionStopped{ + At: time.Now(), + Session: SessionHandle{ + ID: "s-1", + }, + Reason: "test", + } + if err := bus.Publish(nonMatchingType); err != nil { + t.Fatalf("publish non-matching type: %v", err) + } + + select { + case event := <-ch: + t.Fatalf("unexpected event for non-matching type: %v", event) + case <-time.After(100 * time.Millisecond): + } + + nonMatchingSession := SessionCreated{ + At: time.Now(), + Session: SessionHandle{ + ID: "s-2", + }, + } + if err := bus.Publish(nonMatchingSession); err != nil { + t.Fatalf("publish non-matching session: %v", err) + } + + select { + case event := <-ch: + t.Fatalf("unexpected event for non-matching session: %v", event) + case <-time.After(100 * time.Millisecond): + } + + unsubscribe() + + select { + case _, ok := <-ch: + if ok { + t.Fatal("expected channel to be closed after unsubscribe") + } + case <-time.After(500 * time.Millisecond): + t.Fatal("timed out waiting for closed subscriber channel") + } +} + +func TestEventBusCloseAndNilPublish(t *testing.T) { + bus := NewEventBus(2) + + if err := bus.Publish(nil); err == nil { + t.Fatal("expected publishing nil event to fail") + 
} + + bus.Close() + + if _, _, err := bus.Subscribe(EventFilter{}); err == nil { + t.Fatal("expected subscribe on closed bus to fail") + } + + err := bus.Publish(SessionCreated{At: time.Now(), Session: SessionHandle{ID: "s-1"}}) + if err == nil { + t.Fatal("expected publish on closed bus to fail") + } +} diff --git a/internal/session/events.go b/internal/session/events.go new file mode 100644 index 0000000..a48214f --- /dev/null +++ b/internal/session/events.go @@ -0,0 +1,81 @@ +package session + +import "time" + +type EventType string + +const ( + EventTypeSessionCreated EventType = "session.created" + EventTypeSessionStopped EventType = "session.stopped" + EventTypeSessionHealthChanged EventType = "session.health_changed" + EventTypeSessionAttached EventType = "session.attached" + EventTypeSessionDetached EventType = "session.detached" +) + +type Event interface { + Type() EventType + Timestamp() time.Time + SessionID() string +} + +type EventFilter struct { + SessionID string + Types []EventType +} + +type SessionCreated struct { + At time.Time + Session SessionHandle +} + +func (e SessionCreated) Type() EventType { return EventTypeSessionCreated } +func (e SessionCreated) Timestamp() time.Time { return e.At } +func (e SessionCreated) SessionID() string { return e.Session.ID } + +type SessionStopped struct { + At time.Time + Session SessionHandle + Reason string +} + +func (e SessionStopped) Type() EventType { return EventTypeSessionStopped } +func (e SessionStopped) Timestamp() time.Time { return e.At } +func (e SessionStopped) SessionID() string { return e.Session.ID } + +type SessionHealthChanged struct { + At time.Time + Session SessionHandle + Previous HealthStatus + Current HealthStatus +} + +func (e SessionHealthChanged) Type() EventType { return EventTypeSessionHealthChanged } +func (e SessionHealthChanged) Timestamp() time.Time { return e.At } +func (e SessionHealthChanged) SessionID() string { return e.Session.ID } + +type SessionAttached struct { + At 
time.Time + Session SessionHandle + AttachedClients int + ClientID string +} + +func (e SessionAttached) Type() EventType { return EventTypeSessionAttached } +func (e SessionAttached) Timestamp() time.Time { return e.At } +func (e SessionAttached) SessionID() string { return e.Session.ID } + +type SessionDetached struct { + At time.Time + Session SessionHandle + AttachedClients int + ClientID string +} + +func (e SessionDetached) Type() EventType { return EventTypeSessionDetached } +func (e SessionDetached) Timestamp() time.Time { return e.At } +func (e SessionDetached) SessionID() string { return e.Session.ID } + +type EventBus interface { + Subscribe(filter EventFilter) (<-chan Event, func(), error) + Publish(event Event) error +} diff --git a/internal/session/manager.go b/internal/session/manager.go new file mode 100644 index 0000000..9557727 --- /dev/null +++ b/internal/session/manager.go @@ -0,0 +1,1370 @@ +package session + +import ( + "context" + "encoding/json" + "errors" + "fmt" + "io" + "log/slog" + "net" + "net/http" + "os" + "os/exec" + "path/filepath" + "sort" + "strconv" + "strings" + "sync" + "sync/atomic" + "syscall" + "time" + + "opencoderouter/internal/registry" +) + +const ( + defaultSessionPortStart = 30000 + defaultSessionPortEnd = 31000 + defaultHealthCheckInterval = 10 * time.Second + defaultHealthCheckTimeout = 2 * time.Second + defaultHealthFailThreshold = 3 + defaultHealthCircuitCooldown = 30 * time.Second + defaultSessionStopTimeout = 5 * time.Second + defaultSessionOpenCodeBinary = "opencode" +) + +var ( + ErrSessionNotFound = errors.New("session not found") + ErrSessionAlreadyExists = errors.New("session already exists") + ErrWorkspacePathRequired = errors.New("workspace path is required") + ErrWorkspacePathInvalid = errors.New("workspace path is invalid") + ErrNoAvailableSessionPorts = errors.New("no available session ports") + ErrSessionStopped = errors.New("session is stopped") + ErrTerminalAttachDisabled = errors.New("terminal 
attachment is not configured") + errProcessWaitTimeout = errors.New("timeout waiting for session process exit") +) + +type processStarterFn func(binary, workspace string, port int, envVars map[string]string) (sessionProcess, error) +type healthCheckerFn func(ctx context.Context, port int) HealthStatus +type terminalDialerFn func(ctx context.Context, handle SessionHandle) (TerminalConn, error) + +type ManagerConfig struct { + Registry *registry.Registry + EventBus EventBus + Logger *slog.Logger + PortStart int + PortEnd int + HealthCheckInterval time.Duration + HealthCheckTimeout time.Duration + HealthFailThreshold int + HealthCircuitReset time.Duration + StopTimeout time.Duration + EventBuffer int + ProcessStarter processStarterFn + HealthChecker healthCheckerFn + TerminalDialer terminalDialerFn + PortAvailable func(port int) bool + Now func() time.Time +} + +type Manager struct { + mu sync.RWMutex + sessions map[string]*managedSession + registry *registry.Registry + eventBus EventBus + logger *slog.Logger + portStart int + portEnd int + healthCheckInterval time.Duration + healthCheckTimeout time.Duration + healthFailThreshold int + healthCircuitReset time.Duration + stopTimeout time.Duration + processStarter processStarterFn + healthChecker healthCheckerFn + terminalDialer terminalDialerFn + portAvailable func(port int) bool + now func() time.Time + nextSessionSeq uint64 + nextClientSeq uint64 + loopCancel context.CancelFunc + loopStopOnce sync.Once + wg sync.WaitGroup +} + +type managedSession struct { + handle SessionHandle + opts CreateOpts + process sessionProcess + health HealthStatus + healthFails int + nextProbeAt time.Time + expectedStop bool + exitCh chan error +} + +type sessionProcess interface { + PID() int + Signal(sig os.Signal) error + Kill() error + Wait() error +} + +type execProcess struct { + cmd *exec.Cmd +} + +func (p *execProcess) PID() int { + if p == nil || p.cmd == nil || p.cmd.Process == nil { + return 0 + } + return p.cmd.Process.Pid +} 
+ +func (p *execProcess) Signal(sig os.Signal) error { + if p == nil || p.cmd == nil || p.cmd.Process == nil { + return os.ErrProcessDone + } + return p.cmd.Process.Signal(sig) +} + +func (p *execProcess) Kill() error { + if p == nil || p.cmd == nil || p.cmd.Process == nil { + return os.ErrProcessDone + } + return p.cmd.Process.Kill() +} + +func (p *execProcess) Wait() error { + if p == nil || p.cmd == nil { + return nil + } + return p.cmd.Wait() +} + +type managedTerminalConn struct { + TerminalConn + onClose func() + once sync.Once +} + +func (c *managedTerminalConn) Close() error { + var err error + c.once.Do(func() { + if c.TerminalConn != nil { + err = c.TerminalConn.Close() + } + if c.onClose != nil { + c.onClose() + } + }) + return err +} + +func NewManager(cfg ManagerConfig) *Manager { + nowFn := cfg.Now + if nowFn == nil { + nowFn = time.Now + } + + logger := cfg.Logger + if logger == nil { + logger = slog.Default() + } + + portStart := cfg.PortStart + portEnd := cfg.PortEnd + if portStart <= 0 { + portStart = defaultSessionPortStart + } + if portEnd <= 0 { + portEnd = defaultSessionPortEnd + } + if portEnd < portStart { + portStart = defaultSessionPortStart + portEnd = defaultSessionPortEnd + } + + healthCheckInterval := cfg.HealthCheckInterval + if healthCheckInterval <= 0 { + healthCheckInterval = defaultHealthCheckInterval + } + + healthCheckTimeout := cfg.HealthCheckTimeout + if healthCheckTimeout <= 0 { + healthCheckTimeout = defaultHealthCheckTimeout + } + + stopTimeout := cfg.StopTimeout + if stopTimeout <= 0 { + stopTimeout = defaultSessionStopTimeout + } + + healthFailThreshold := cfg.HealthFailThreshold + if healthFailThreshold <= 0 { + healthFailThreshold = defaultHealthFailThreshold + } + + healthCircuitReset := cfg.HealthCircuitReset + if healthCircuitReset <= 0 { + healthCircuitReset = defaultHealthCircuitCooldown + } + + eventBus := cfg.EventBus + if eventBus == nil { + eventBus = NewEventBus(cfg.EventBuffer) + } + + processStarter := 
cfg.ProcessStarter + if processStarter == nil { + processStarter = defaultProcessStarter + } + + portAvailable := cfg.PortAvailable + if portAvailable == nil { + portAvailable = defaultPortAvailable + } + + healthChecker := cfg.HealthChecker + if healthChecker == nil { + healthChecker = defaultHealthChecker(&http.Client{}, nowFn) + } + + terminalDialer := cfg.TerminalDialer + if terminalDialer == nil { + terminalDialer = defaultTerminalDialer + } + + ctx, cancel := context.WithCancel(context.Background()) + + m := &Manager{ + sessions: make(map[string]*managedSession), + registry: cfg.Registry, + eventBus: eventBus, + logger: logger, + portStart: portStart, + portEnd: portEnd, + healthCheckInterval: healthCheckInterval, + healthCheckTimeout: healthCheckTimeout, + healthFailThreshold: healthFailThreshold, + healthCircuitReset: healthCircuitReset, + stopTimeout: stopTimeout, + processStarter: processStarter, + healthChecker: healthChecker, + terminalDialer: terminalDialer, + portAvailable: portAvailable, + now: nowFn, + loopCancel: cancel, + } + + m.wg.Add(1) + go m.healthLoop(ctx) + + return m +} + +func (m *Manager) Create(ctx context.Context, opts CreateOpts) (*SessionHandle, error) { + if ctx == nil { + ctx = context.Background() + } + if err := ctx.Err(); err != nil { + return nil, err + } + id := m.nextSessionID() + return m.createWithID(ctx, id, opts, 0, false) +} + +func (m *Manager) Get(id string) (*SessionHandle, error) { + if m == nil { + return nil, ErrSessionNotFound + } + + m.mu.RLock() + if rec, ok := m.sessions[id]; ok { + // Clone while still holding the read lock so the copy cannot race with writers. + handle := cloneSessionHandle(rec.handle) + m.mu.RUnlock() + return &handle, nil + } + m.mu.RUnlock() + + // Session not cached; sync from the registry once and retry. + m.syncFromRegistry() + + m.mu.RLock() + defer m.mu.RUnlock() + rec, ok := m.sessions[id] + if !ok { + return nil, ErrSessionNotFound + } + handle := cloneSessionHandle(rec.handle) + return &handle, nil +} + +func (m *Manager) List(filter SessionListFilter) ([]SessionHandle, error) { + if m == nil { + return nil, nil + } + +
m.syncFromRegistry() + + m.mu.RLock() + result := make([]SessionHandle, 0, len(m.sessions)) + for _, rec := range m.sessions { + handle := cloneSessionHandle(rec.handle) + if !matchesSessionFilter(handle, filter) { + continue + } + result = append(result, handle) + } + m.mu.RUnlock() + + sort.Slice(result, func(i, j int) bool { + if result[i].CreatedAt.Equal(result[j].CreatedAt) { + return result[i].ID < result[j].ID + } + return result[i].CreatedAt.Before(result[j].CreatedAt) + }) + + return result, nil +} + +func (m *Manager) Stop(ctx context.Context, id string) error { + if ctx == nil { + ctx = context.Background() + } + + proc, exitCh, alreadyStopped, err := m.prepareStop(id) + if err != nil { + return err + } + if alreadyStopped { + return nil + } + + if proc != nil { + if signalErr := proc.Signal(syscall.SIGTERM); signalErr != nil && !isProcessAlreadyDone(signalErr) { + m.logger.Debug("session stop signal error", "session_id", id, "error", signalErr) + } + + waitErr := m.waitForExit(ctx, exitCh, m.stopTimeout) + if errors.Is(waitErr, errProcessWaitTimeout) { + if killErr := proc.Kill(); killErr != nil && !isProcessAlreadyDone(killErr) { + m.logger.Debug("session stop kill error", "session_id", id, "error", killErr) + } + waitErr = m.waitForExit(ctx, exitCh, m.stopTimeout) + } + + if waitErr != nil && !isExpectedExitError(waitErr) { + return waitErr + } + } + + now := m.now() + var snapshot SessionHandle + var publish bool + + m.mu.Lock() + rec, ok := m.sessions[id] + if !ok { + m.mu.Unlock() + return ErrSessionNotFound + } + publish = rec.handle.Status != SessionStatusStopped + rec.handle.Status = SessionStatusStopped + rec.handle.LastActivity = now + rec.handle.AttachedClients = 0 + rec.health = HealthStatus{State: HealthStateUnknown, LastCheck: now} + rec.healthFails = 0 + rec.nextProbeAt = time.Time{} + rec.process = nil + rec.expectedStop =
true + snapshot = cloneSessionHandle(rec.handle) + m.mu.Unlock() + + if publish { + m.publishEvent(SessionStopped{At: now, Session: snapshot, Reason: "stop"}) + } + + return nil +} + +func (m *Manager) Restart(ctx context.Context, id string) (*SessionHandle, error) { + if ctx == nil { + ctx = context.Background() + } + + m.mu.RLock() + rec, ok := m.sessions[id] + if !ok { + m.mu.RUnlock() + return nil, ErrSessionNotFound + } + opts := cloneCreateOpts(rec.opts) + if opts.WorkspacePath == "" { + opts.WorkspacePath = rec.handle.WorkspacePath + } + if len(opts.Labels) == 0 { + opts.Labels = cloneStringMap(rec.handle.Labels) + } + preferredPort := rec.handle.DaemonPort + m.mu.RUnlock() + + if err := m.Stop(ctx, id); err != nil { + return nil, err + } + + return m.createWithID(ctx, id, opts, preferredPort, true) +} + +func (m *Manager) Delete(ctx context.Context, id string) error { + if ctx == nil { + ctx = context.Background() + } + + if err := m.Stop(ctx, id); err != nil { + return err + } + + m.mu.Lock() + if _, ok := m.sessions[id]; !ok { + m.mu.Unlock() + return ErrSessionNotFound + } + delete(m.sessions, id) + m.mu.Unlock() + + return nil +} + +func (m *Manager) AttachTerminal(ctx context.Context, id string) (TerminalConn, error) { + if ctx == nil { + ctx = context.Background() + } + + m.mu.RLock() + _, ok := m.sessions[id] + m.mu.RUnlock() + if !ok { + // Session may have been discovered outside the manager; sync once and retry below. + m.syncFromRegistry() + } + + // Snapshot the handle under the read lock so the dial sees a consistent view. + m.mu.RLock() + rec, ok := m.sessions[id] + if !ok { + m.mu.RUnlock() + return nil, ErrSessionNotFound + } + if rec.handle.Status == SessionStatusStopped { + m.mu.RUnlock() + return nil, ErrSessionStopped + } + handle := cloneSessionHandle(rec.handle) + m.mu.RUnlock() + + conn, err := m.terminalDialer(ctx, handle) + if err != nil { + return nil, err + } + + clientID := m.nextClientID() + now := m.now() + + var attached int + var snapshot SessionHandle + + m.mu.Lock() + rec, ok = m.sessions[id] + if !ok { + m.mu.Unlock() + if closeErr := conn.Close(); closeErr != nil { + m.logger.Debug("failed to close terminal connection after missing session", "session_id", id, "error", closeErr) + } + return nil, ErrSessionNotFound + } + rec.handle.AttachedClients++ + rec.handle.LastActivity = now + attached = rec.handle.AttachedClients + snapshot = cloneSessionHandle(rec.handle) + m.mu.Unlock() + + m.publishEvent(SessionAttached{At: now, Session: snapshot, AttachedClients: attached, ClientID: clientID}) + + wrapped := &managedTerminalConn{ + TerminalConn: conn, + onClose: func() { + m.onTerminalDetached(id, clientID) + }, + } + + return wrapped, nil +} + +func (m *Manager) Health(ctx context.Context, id string) (HealthStatus, error) { + if ctx == nil { + ctx = context.Background() + } + + m.mu.RLock() + rec, ok := m.sessions[id] + if !ok { + m.mu.RUnlock() + m.syncFromRegistry() + m.mu.RLock() + rec, ok = m.sessions[id] + if !ok { + m.mu.RUnlock() + return HealthStatus{}, ErrSessionNotFound + } + } + now := m.now() +
current := rec.health + status := rec.handle.Status + port := rec.handle.DaemonPort + nextProbeAt := rec.nextProbeAt + m.mu.RUnlock() + + if status == SessionStatusStopped || port <= 0 { + if current.LastCheck.IsZero() { + current.LastCheck = now + } + return current, nil + } + + if !nextProbeAt.IsZero() && now.Before(nextProbeAt) { + if current.LastCheck.IsZero() { + current.LastCheck = now + } + return current, nil + } + + next := m.healthChecker(ctx, port) + if next.LastCheck.IsZero() { + next.LastCheck = m.now() + } + + return m.storeHealth(id, next) +} + +func (m *Manager) Close(ctx context.Context) error { + if m == nil { + return nil + } + if ctx == nil { + ctx = context.Background() + } + m.stopBackgroundLoop() + return m.waitForBackground(ctx) +} + +func (m *Manager) Shutdown(ctx context.Context) error { + if m == nil { + return nil + } + if ctx == nil { + ctx = context.Background() + } + + m.stopBackgroundLoop() + + ids := m.sessionIDsSnapshot() + for _, id := range ids { + if err := m.Stop(ctx, id); err != nil && !errors.Is(err, ErrSessionNotFound) { + return err + } + } + + return m.waitForBackground(ctx) +} + +func (m *Manager) createWithID(ctx context.Context, id string, opts CreateOpts, preferredPort int, replace bool) (*SessionHandle, error) { + validatedOpts, err := validateCreateOpts(opts) + if err != nil { + return nil, err + } + + if err := ctx.Err(); err != nil { + return nil, err + } + + port, err := m.allocatePort(preferredPort, id) + if err != nil { + return nil, err + } + + proc, err := m.processStarter(validatedOpts.OpenCodeBinary, validatedOpts.WorkspacePath, port, validatedOpts.EnvVars) + if err != nil { + return nil, fmt.Errorf("start session process: %w", err) + } + + now := m.now() + record := &managedSession{ + handle: SessionHandle{ + ID: id, + DaemonPort: port, + WorkspacePath: validatedOpts.WorkspacePath, + Status: SessionStatusActive, + CreatedAt: now, + LastActivity: now, + Labels: cloneStringMap(validatedOpts.Labels), + }, + 
opts: cloneCreateOpts(validatedOpts), + process: proc, + health: HealthStatus{ + State: HealthStateUnknown, + LastCheck: now, + }, + exitCh: make(chan error, 1), + } + + m.mu.Lock() + if existing, ok := m.sessions[id]; ok { + if !replace { + m.mu.Unlock() + cleanupProcess(proc) + return nil, ErrSessionAlreadyExists + } + if existing.process != nil { + m.mu.Unlock() + cleanupProcess(proc) + return nil, fmt.Errorf("cannot replace running session %q", id) + } + } + m.sessions[id] = record + m.mu.Unlock() + + m.wg.Add(1) + go m.watchProcess(id, proc, record.exitCh) + + if m.registry != nil { + projectName := filepath.Base(validatedOpts.WorkspacePath) + m.registry.Upsert(port, projectName, validatedOpts.WorkspacePath, "") + } + + snapshot := cloneSessionHandle(record.handle) + m.publishEvent(SessionCreated{At: now, Session: snapshot}) + + return &snapshot, nil +} + +func (m *Manager) prepareStop(id string) (sessionProcess, <-chan error, bool, error) { + if m == nil { + return nil, nil, false, ErrSessionNotFound + } + + m.mu.Lock() + defer m.mu.Unlock() + + rec, ok := m.sessions[id] + if !ok { + return nil, nil, false, ErrSessionNotFound + } + + if rec.handle.Status == SessionStatusStopped && rec.process == nil { + return nil, nil, true, nil + } + + rec.expectedStop = true + return rec.process, rec.exitCh, false, nil +} + +func (m *Manager) onTerminalDetached(id string, clientID string) { + now := m.now() + + var attached int + var snapshot SessionHandle + var publish bool + + m.mu.Lock() + rec, ok := m.sessions[id] + if ok { + if rec.handle.AttachedClients > 0 { + rec.handle.AttachedClients-- + } + rec.handle.LastActivity = now + attached = rec.handle.AttachedClients + snapshot = cloneSessionHandle(rec.handle) + publish = true + } + m.mu.Unlock() + + if publish { + m.publishEvent(SessionDetached{At: now, Session: snapshot, AttachedClients: attached, ClientID: clientID}) + } +} + +func (m *Manager) watchProcess(id string, proc sessionProcess, exitCh chan error) { + defer 
m.wg.Done() + + err := proc.Wait() + + select { + case exitCh <- err: + default: + } + close(exitCh) + + m.handleProcessExit(id, err) +} + +func (m *Manager) handleProcessExit(id string, waitErr error) { + now := m.now() + + var prev HealthStatus + var current HealthStatus + var snapshot SessionHandle + var publish bool + + m.mu.Lock() + rec, ok := m.sessions[id] + if !ok { + m.mu.Unlock() + return + } + + rec.process = nil + rec.handle.LastActivity = now + + if rec.expectedStop || rec.handle.Status == SessionStatusStopped { + rec.health = HealthStatus{State: HealthStateUnknown, LastCheck: now} + m.mu.Unlock() + return + } + + prev = rec.health + rec.handle.Status = SessionStatusError + rec.health = HealthStatus{ + State: HealthStateUnhealthy, + LastCheck: now, + Error: processExitMessage(waitErr), + } + current = rec.health + snapshot = cloneSessionHandle(rec.handle) + publish = prev.State != current.State || prev.Error != current.Error + m.mu.Unlock() + + if publish { + m.publishEvent(SessionHealthChanged{At: now, Session: snapshot, Previous: prev, Current: current}) + } +} + +func (m *Manager) storeHealth(id string, next HealthStatus) (HealthStatus, error) { + if next.LastCheck.IsZero() { + next.LastCheck = m.now() + } + + var prev HealthStatus + var snapshot SessionHandle + var publish bool + + m.mu.Lock() + rec, ok := m.sessions[id] + if !ok { + m.mu.Unlock() + return HealthStatus{}, ErrSessionNotFound + } + + prev = rec.health + if next.State == HealthStateHealthy { + rec.healthFails = 0 + rec.nextProbeAt = time.Time{} + } else if next.State == HealthStateUnhealthy { + rec.healthFails++ + if rec.healthFails >= m.healthFailThreshold { + rec.nextProbeAt = next.LastCheck.Add(m.healthCircuitReset) + } + } + rec.health = next + rec.handle.Status = statusFromHealth(rec.handle.Status, next.State) + snapshot = cloneSessionHandle(rec.handle) + publish = prev.State != next.State || prev.Error != next.Error + m.mu.Unlock() + + if publish { + 
m.publishEvent(SessionHealthChanged{At: next.LastCheck, Session: snapshot, Previous: prev, Current: next}) + } + + return next, nil +} + +func (m *Manager) healthLoop(ctx context.Context) { + defer m.wg.Done() + + ticker := time.NewTicker(m.healthCheckInterval) + defer ticker.Stop() + + for { + select { + case <-ctx.Done(): + return + case <-ticker.C: + } + + ids := m.healthCheckSessionIDs() + for _, id := range ids { + probeCtx, cancel := context.WithTimeout(ctx, m.healthCheckTimeout) + _, err := m.Health(probeCtx, id) + cancel() + if err != nil && !errors.Is(err, ErrSessionNotFound) && !errors.Is(err, context.Canceled) && !errors.Is(err, context.DeadlineExceeded) { + m.logger.Debug("session health check error", "session_id", id, "error", err) + } + } + } +} + +func (m *Manager) healthCheckSessionIDs() []string { + m.syncFromRegistry() + + m.mu.RLock() + ids := make([]string, 0, len(m.sessions)) + for id, rec := range m.sessions { + if rec.handle.Status == SessionStatusStopped || rec.handle.DaemonPort <= 0 { + continue + } + ids = append(ids, id) + } + m.mu.RUnlock() + + sort.Strings(ids) + return ids +} + +func (m *Manager) sessionIDsSnapshot() []string { + m.mu.RLock() + ids := make([]string, 0, len(m.sessions)) + for id := range m.sessions { + ids = append(ids, id) + } + m.mu.RUnlock() + sort.Strings(ids) + return ids +} + +func (m *Manager) stopBackgroundLoop() { + m.loopStopOnce.Do(func() { + if m.loopCancel != nil { + m.loopCancel() + } + }) +} + +func (m *Manager) syncFromRegistry() { + if m == nil || m.registry == nil { + return + } + + backends := m.registry.All() + if len(backends) == 0 { + return + } + + now := m.now() + discovered := make(map[string]struct{}, len(backends)*4) + + m.mu.Lock() + defer m.mu.Unlock() + + for _, backend := range backends { + if backend == nil { + continue + } + + backendSessions := m.registry.ListSessions(backend.Slug) + for _, sessionMeta := range backendSessions { + sessionID := strings.TrimSpace(sessionMeta.ID) + if 
sessionID == "" { + continue + } + + discovered[sessionID] = struct{}{} + + workspacePath := strings.TrimSpace(sessionMeta.Directory) + if workspacePath == "" { + workspacePath = backend.ProjectPath + } + + daemonPort := sessionMeta.DaemonPort + if daemonPort <= 0 { + daemonPort = backend.Port + } + if daemonPort <= 0 { + continue + } + + createdAt := sessionMeta.CreatedAt + if createdAt.IsZero() { + createdAt = now + } + + lastActivity := sessionMeta.LastActivity + if lastActivity.IsZero() { + lastActivity = createdAt + } + + status := sessionStatusFromRegistry(sessionMeta.Status) + + labels := map[string]string{ + "source": "registry", + "backend_slug": backend.Slug, + } + if backend.ProjectPath != "" { + labels["backend_path"] = backend.ProjectPath + } + + rec, ok := m.sessions[sessionID] + if ok && rec.process != nil { + continue + } + if ok && !isRegistrySession(rec) { + continue + } + + health := HealthStatus{State: HealthStateUnknown, LastCheck: now} + if ok { + health = rec.health + if health.LastCheck.IsZero() { + health.LastCheck = now + } + } + + m.sessions[sessionID] = &managedSession{ + handle: SessionHandle{ + ID: sessionID, + DaemonPort: daemonPort, + WorkspacePath: workspacePath, + Status: status, + CreatedAt: createdAt, + LastActivity: lastActivity, + AttachedClients: sessionMeta.AttachedClients, + Labels: cloneStringMap(labels), + }, + opts: CreateOpts{ + WorkspacePath: workspacePath, + Labels: cloneStringMap(labels), + }, + health: health, + healthFails: 0, + nextProbeAt: time.Time{}, + expectedStop: true, + exitCh: nil, + } + } + } + + for sessionID, rec := range m.sessions { + if rec.process != nil { + continue + } + if !isRegistrySession(rec) { + continue + } + if _, ok := discovered[sessionID]; !ok { + delete(m.sessions, sessionID) + } + } +} + +func isRegistrySession(rec *managedSession) bool { + if rec == nil { + return false + } + if rec.opts.Labels == nil { + return false + } + return rec.opts.Labels["source"] == "registry" +} + +func 
sessionStatusFromRegistry(raw string) SessionStatus { + switch strings.ToLower(strings.TrimSpace(raw)) { + case "", "active", "running", "online", "ready": + return SessionStatusActive + case "idle", "paused": + return SessionStatusIdle + case "stopped", "offline", "terminated": + return SessionStatusStopped + case "error", "failed", "unhealthy": + return SessionStatusError + default: + return SessionStatusUnknown + } +} + +func (m *Manager) waitForBackground(ctx context.Context) error { + done := make(chan struct{}) + go func() { + m.wg.Wait() + close(done) + }() + + select { + case <-done: + return nil + case <-ctx.Done(): + return ctx.Err() + } +} + +func (m *Manager) waitForExit(ctx context.Context, exitCh <-chan error, timeout time.Duration) error { + if timeout <= 0 { + timeout = defaultSessionStopTimeout + } + timer := time.NewTimer(timeout) + defer timer.Stop() + + select { + case <-ctx.Done(): + return ctx.Err() + case <-timer.C: + return errProcessWaitTimeout + case err, ok := <-exitCh: + if !ok { + return nil + } + return err + } +} + +func (m *Manager) allocatePort(preferredPort int, sessionID string) (int, error) { + used := m.activePorts(sessionID) + + if preferredPort >= m.portStart && preferredPort <= m.portEnd { + if _, inUse := used[preferredPort]; !inUse && m.portAvailable(preferredPort) { + return preferredPort, nil + } + } + + for port := m.portStart; port <= m.portEnd; port++ { + if _, inUse := used[port]; inUse { + continue + } + if !m.portAvailable(port) { + continue + } + return port, nil + } + + return 0, ErrNoAvailableSessionPorts +} + +func (m *Manager) activePorts(ignoreSessionID string) map[int]struct{} { + m.mu.RLock() + used := make(map[int]struct{}, len(m.sessions)) + for id, rec := range m.sessions { + if id == ignoreSessionID { + continue + } + if rec.process == nil || rec.handle.Status == SessionStatusStopped { + continue + } + used[rec.handle.DaemonPort] = struct{}{} + } + m.mu.RUnlock() + return used +} + +func (m *Manager) 
publishEvent(event Event) { + if m == nil || m.eventBus == nil || event == nil { + return + } + if err := m.eventBus.Publish(event); err != nil && !errors.Is(err, ErrEventBusClosed) { + m.logger.Debug("session event publish error", "type", event.Type(), "session_id", event.SessionID(), "error", err) + } +} + +func (m *Manager) nextSessionID() string { + n := atomic.AddUint64(&m.nextSessionSeq, 1) + return "session-" + strconv.FormatUint(n, 10) +} + +func (m *Manager) nextClientID() string { + n := atomic.AddUint64(&m.nextClientSeq, 1) + return "client-" + strconv.FormatUint(n, 10) +} + +func validateCreateOpts(opts CreateOpts) (CreateOpts, error) { + if opts.WorkspacePath == "" { + return CreateOpts{}, ErrWorkspacePathRequired + } + + absPath, err := filepath.Abs(opts.WorkspacePath) + if err != nil { + return CreateOpts{}, fmt.Errorf("%w: %v", ErrWorkspacePathInvalid, err) + } + + info, err := os.Stat(absPath) + if err != nil { + return CreateOpts{}, fmt.Errorf("%w: %v", ErrWorkspacePathInvalid, err) + } + if !info.IsDir() { + return CreateOpts{}, ErrWorkspacePathInvalid + } + + normalized := cloneCreateOpts(opts) + normalized.WorkspacePath = absPath + if normalized.OpenCodeBinary == "" { + normalized.OpenCodeBinary = defaultSessionOpenCodeBinary + } + + return normalized, nil +} + +func defaultProcessStarter(binary, workspace string, port int, envVars map[string]string) (sessionProcess, error) { + cmd := exec.Command(binary, "serve", "--port", strconv.Itoa(port)) + cmd.Dir = workspace + cmd.Stdout = nil + cmd.Stderr = nil + + if len(envVars) > 0 { + env := append([]string{}, os.Environ()...) + env = append(env, envMapToPairs(envVars)...) 
+ cmd.Env = env + } + + if err := cmd.Start(); err != nil { + return nil, err + } + + return &execProcess{cmd: cmd}, nil +} + +func defaultHealthChecker(client *http.Client, now func() time.Time) healthCheckerFn { + if client == nil { + client = &http.Client{} + } + if now == nil { + now = time.Now + } + + return func(ctx context.Context, port int) HealthStatus { + status := HealthStatus{State: HealthStateUnknown, LastCheck: now()} + + url := fmt.Sprintf("http://127.0.0.1:%d/global/health", port) + req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil) + if err != nil { + status.State = HealthStateUnhealthy + status.Error = err.Error() + return status + } + + resp, err := client.Do(req) + if err != nil { + status.State = HealthStateUnhealthy + status.Error = err.Error() + return status + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusOK { + if _, discardErr := io.Copy(io.Discard, resp.Body); discardErr != nil { + slog.Default().Debug("failed to drain health probe response body", "port", port, "status", resp.StatusCode, "error", discardErr) + } + status.State = HealthStateUnhealthy + status.Error = fmt.Sprintf("status %d", resp.StatusCode) + return status + } + + var payload struct { + Healthy bool `json:"healthy"` + } + if err := json.NewDecoder(resp.Body).Decode(&payload); err != nil { + status.State = HealthStateUnhealthy + status.Error = err.Error() + return status + } + + if payload.Healthy { + status.State = HealthStateHealthy + status.Error = "" + return status + } + + status.State = HealthStateUnhealthy + status.Error = "daemon reported unhealthy" + return status + } +} + +func defaultTerminalDialer(_ context.Context, _ SessionHandle) (TerminalConn, error) { + return nil, ErrTerminalAttachDisabled +} + +func defaultPortAvailable(port int) bool { + conn, err := net.DialTimeout("tcp", fmt.Sprintf("127.0.0.1:%d", port), 200*time.Millisecond) + if err != nil { + return true + } + if closeErr := conn.Close(); closeErr != nil { + 
slog.Default().Debug("port probe close failed", "port", port, "error", closeErr) + } + return false +} + +func isExpectedExitError(err error) bool { + if err == nil { + return true + } + if isProcessAlreadyDone(err) { + return true + } + var exitErr *exec.ExitError + return errors.As(err, &exitErr) +} + +func isProcessAlreadyDone(err error) bool { + if err == nil { + return false + } + if errors.Is(err, os.ErrProcessDone) { + return true + } + if errors.Is(err, syscall.ESRCH) { + return true + } + return false +} + +func processExitMessage(err error) string { + if err == nil { + return "session process exited" + } + return err.Error() +} + +func statusFromHealth(current SessionStatus, state HealthState) SessionStatus { + switch state { + case HealthStateUnhealthy: + if current != SessionStatusStopped { + return SessionStatusError + } + case HealthStateHealthy: + if current == SessionStatusError || current == SessionStatusUnknown { + return SessionStatusActive + } + } + return current +} + +func matchesSessionFilter(handle SessionHandle, filter SessionListFilter) bool { + if filter.WorkspacePath != "" && handle.WorkspacePath != filter.WorkspacePath { + return false + } + if filter.Status != "" && handle.Status != filter.Status { + return false + } + if !labelsContain(handle.Labels, filter.LabelSelector) { + return false + } + return true +} + +func labelsContain(labels map[string]string, selector map[string]string) bool { + if len(selector) == 0 { + return true + } + if len(labels) == 0 { + return false + } + for k, v := range selector { + if labels[k] != v { + return false + } + } + return true +} + +func cloneSessionHandle(in SessionHandle) SessionHandle { + out := in + out.Labels = cloneStringMap(in.Labels) + return out +} + +func cloneCreateOpts(in CreateOpts) CreateOpts { + out := in + out.EnvVars = cloneStringMap(in.EnvVars) + out.Labels = cloneStringMap(in.Labels) + return out +} + +func cloneStringMap(in map[string]string) map[string]string { + if len(in) == 
0 { + return nil + } + out := make(map[string]string, len(in)) + for k, v := range in { + out[k] = v + } + return out +} + +func envMapToPairs(env map[string]string) []string { + if len(env) == 0 { + return nil + } + keys := make([]string, 0, len(env)) + for k := range env { + keys = append(keys, k) + } + sort.Strings(keys) + + pairs := make([]string, 0, len(keys)) + for _, key := range keys { + pairs = append(pairs, key+"="+env[key]) + } + return pairs +} + +func cleanupProcess(proc sessionProcess) { + if proc == nil { + return + } + _ = proc.Kill() + go func() { + _ = proc.Wait() + }() +} diff --git a/internal/session/manager_test.go b/internal/session/manager_test.go new file mode 100644 index 0000000..d6264a8 --- /dev/null +++ b/internal/session/manager_test.go @@ -0,0 +1,604 @@ +package session + +import ( + "context" + "encoding/json" + "errors" + "io" + "log/slog" + "net" + "net/http" + "net/http/httptest" + "net/url" + "os" + "strconv" + "strings" + "sync" + "sync/atomic" + "testing" + "time" + + "opencoderouter/internal/registry" +) + +type fakeProcess struct { + mu sync.Mutex + pid int + signals int + kills int + exitCh chan error + once sync.Once +} + +func newFakeProcess(pid int) *fakeProcess { + return &fakeProcess{pid: pid, exitCh: make(chan error, 1)} +} + +func (p *fakeProcess) PID() int { + return p.pid +} + +func (p *fakeProcess) Signal(_ os.Signal) error { + p.mu.Lock() + p.signals++ + p.mu.Unlock() + p.exit(nil) + return nil +} + +func (p *fakeProcess) Kill() error { + p.mu.Lock() + p.kills++ + p.mu.Unlock() + p.exit(nil) + return nil +} + +func (p *fakeProcess) Wait() error { + err, ok := <-p.exitCh + if !ok { + return nil + } + return err +} + +func (p *fakeProcess) exit(err error) { + p.once.Do(func() { + p.exitCh <- err + close(p.exitCh) + }) +} + +func (p *fakeProcess) signalCount() int { + p.mu.Lock() + defer p.mu.Unlock() + return p.signals +} + +type fakeStarter struct { + mu sync.Mutex + nextPID int + byPort map[int]*fakeProcess +} + 
+func newFakeStarter() *fakeStarter { + return &fakeStarter{byPort: make(map[int]*fakeProcess)} +} + +func (s *fakeStarter) start(_ string, _ string, port int, _ map[string]string) (sessionProcess, error) { + s.mu.Lock() + defer s.mu.Unlock() + s.nextPID++ + proc := newFakeProcess(s.nextPID) + s.byPort[port] = proc + return proc, nil +} + +func (s *fakeStarter) processByPort(port int) *fakeProcess { + s.mu.Lock() + defer s.mu.Unlock() + return s.byPort[port] +} + +type fakeTerminalConn struct { + mu sync.Mutex + closed bool +} + +func (c *fakeTerminalConn) Read(_ []byte) (int, error) { + return 0, io.EOF +} + +func (c *fakeTerminalConn) Write(p []byte) (int, error) { + return len(p), nil +} + +func (c *fakeTerminalConn) Close() error { + c.mu.Lock() + c.closed = true + c.mu.Unlock() + return nil +} + +func (c *fakeTerminalConn) Resize(_, _ int) error { + return nil +} + +func testSessionLogger() *slog.Logger { + return slog.New(slog.NewTextHandler(io.Discard, &slog.HandlerOptions{Level: slog.LevelError})) +} + +func newManagerForTest(t *testing.T, cfg ManagerConfig) *Manager { + t.Helper() + if cfg.Logger == nil { + cfg.Logger = testSessionLogger() + } + manager := NewManager(cfg) + t.Cleanup(func() { + shutdownCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second) + defer cancel() + _ = manager.Shutdown(shutdownCtx) + }) + return manager +} + +func waitForEventType(t *testing.T, ch <-chan Event, eventType EventType, timeout time.Duration) Event { + t.Helper() + deadline := time.After(timeout) + for { + select { + case event, ok := <-ch: + if !ok { + t.Fatalf("subscriber channel closed while waiting for %q", eventType) + } + if event.Type() == eventType { + return event + } + case <-deadline: + t.Fatalf("timed out waiting for event %q", eventType) + } + } +} + +func mustPortFromURL(t *testing.T, rawURL string) int { + t.Helper() + parsed, err := url.Parse(rawURL) + if err != nil { + t.Fatalf("parse url: %v", err) + } + _, portStr, err := 
net.SplitHostPort(parsed.Host) + if err != nil { + t.Fatalf("split host/port: %v", err) + } + port, err := strconv.Atoi(portStr) + if err != nil { + t.Fatalf("atoi port: %v", err) + } + return port +} + +func TestManagerCreateStopAndRegistryIntegration(t *testing.T) { + workspace := t.TempDir() + starter := newFakeStarter() + eventBus := NewEventBus(32) + events, unsubscribe, err := eventBus.Subscribe(EventFilter{Types: []EventType{EventTypeSessionCreated, EventTypeSessionStopped}}) + if err != nil { + t.Fatalf("subscribe: %v", err) + } + t.Cleanup(unsubscribe) + + reg := registry.New(30*time.Second, testSessionLogger()) + + manager := newManagerForTest(t, ManagerConfig{ + Registry: reg, + EventBus: eventBus, + ProcessStarter: starter.start, + PortStart: 35100, + PortEnd: 35100, + PortAvailable: func(int) bool { return true }, + HealthCheckInterval: time.Hour, + }) + + handle, err := manager.Create(context.Background(), CreateOpts{WorkspacePath: workspace}) + if err != nil { + t.Fatalf("create: %v", err) + } + + createdEvent := waitForEventType(t, events, EventTypeSessionCreated, time.Second) + created, ok := createdEvent.(SessionCreated) + if !ok { + t.Fatalf("created event type = %T", createdEvent) + } + if created.Session.ID != handle.ID { + t.Fatalf("created event session id = %q, want %q", created.Session.ID, handle.ID) + } + + slug := registry.Slugify(workspace) + backend, ok := reg.Lookup(slug) + if !ok { + t.Fatalf("expected backend %q in registry", slug) + } + if backend.Port != handle.DaemonPort { + t.Fatalf("registry port = %d, want %d", backend.Port, handle.DaemonPort) + } + if backend.ProjectPath != workspace { + t.Fatalf("registry path = %q, want %q", backend.ProjectPath, workspace) + } + + if err := manager.Stop(context.Background(), handle.ID); err != nil { + t.Fatalf("stop: %v", err) + } + + stoppedEvent := waitForEventType(t, events, EventTypeSessionStopped, time.Second) + stopped, ok := stoppedEvent.(SessionStopped) + if !ok { + t.Fatalf("stopped 
event type = %T", stoppedEvent) + } + if stopped.Session.ID != handle.ID { + t.Fatalf("stopped event session id = %q, want %q", stopped.Session.ID, handle.ID) + } + + got, err := manager.Get(handle.ID) + if err != nil { + t.Fatalf("get after stop: %v", err) + } + if got.Status != SessionStatusStopped { + t.Fatalf("status after stop = %q, want %q", got.Status, SessionStatusStopped) + } + + proc := starter.processByPort(handle.DaemonPort) + if proc == nil { + t.Fatal("missing process for created session") + } + if proc.signalCount() == 0 { + t.Fatal("expected stop to signal process") + } +} + +func TestManagerAttachDetachEvents(t *testing.T) { + workspace := t.TempDir() + starter := newFakeStarter() + eventBus := NewEventBus(32) + events, unsubscribe, err := eventBus.Subscribe(EventFilter{Types: []EventType{EventTypeSessionAttached, EventTypeSessionDetached}}) + if err != nil { + t.Fatalf("subscribe: %v", err) + } + t.Cleanup(unsubscribe) + + manager := newManagerForTest(t, ManagerConfig{ + EventBus: eventBus, + ProcessStarter: starter.start, + TerminalDialer: func(context.Context, SessionHandle) (TerminalConn, error) { return &fakeTerminalConn{}, nil }, + PortStart: 35110, + PortEnd: 35110, + PortAvailable: func(int) bool { return true }, + HealthCheckInterval: time.Hour, + }) + + handle, err := manager.Create(context.Background(), CreateOpts{WorkspacePath: workspace}) + if err != nil { + t.Fatalf("create: %v", err) + } + + conn, err := manager.AttachTerminal(context.Background(), handle.ID) + if err != nil { + t.Fatalf("attach terminal: %v", err) + } + + attachedEvent := waitForEventType(t, events, EventTypeSessionAttached, time.Second) + attached, ok := attachedEvent.(SessionAttached) + if !ok { + t.Fatalf("attached event type = %T", attachedEvent) + } + if attached.AttachedClients != 1 { + t.Fatalf("attached clients = %d, want 1", attached.AttachedClients) + } + + afterAttach, err := manager.Get(handle.ID) + if err != nil { + t.Fatalf("get after attach: %v", err) 
+ } + if afterAttach.AttachedClients != 1 { + t.Fatalf("attached clients in handle = %d, want 1", afterAttach.AttachedClients) + } + + if err := conn.Close(); err != nil { + t.Fatalf("close terminal conn: %v", err) + } + + detachedEvent := waitForEventType(t, events, EventTypeSessionDetached, time.Second) + detached, ok := detachedEvent.(SessionDetached) + if !ok { + t.Fatalf("detached event type = %T", detachedEvent) + } + if detached.AttachedClients != 0 { + t.Fatalf("detached clients = %d, want 0", detached.AttachedClients) + } + + afterDetach, err := manager.Get(handle.ID) + if err != nil { + t.Fatalf("get after detach: %v", err) + } + if afterDetach.AttachedClients != 0 { + t.Fatalf("attached clients in handle after detach = %d, want 0", afterDetach.AttachedClients) + } +} + +func TestManagerGetListRestartDelete(t *testing.T) { + starter := newFakeStarter() + manager := newManagerForTest(t, ManagerConfig{ + ProcessStarter: starter.start, + PortStart: 35120, + PortEnd: 35130, + PortAvailable: func(int) bool { return true }, + HealthCheckInterval: time.Hour, + }) + + workspaceA := t.TempDir() + workspaceB := t.TempDir() + + a, err := manager.Create(context.Background(), CreateOpts{WorkspacePath: workspaceA, Labels: map[string]string{"team": "alpha"}}) + if err != nil { + t.Fatalf("create A: %v", err) + } + b, err := manager.Create(context.Background(), CreateOpts{WorkspacePath: workspaceB, Labels: map[string]string{"team": "beta"}}) + if err != nil { + t.Fatalf("create B: %v", err) + } + + gotA, err := manager.Get(a.ID) + if err != nil { + t.Fatalf("get A: %v", err) + } + if gotA.WorkspacePath != workspaceA { + t.Fatalf("workspace A = %q, want %q", gotA.WorkspacePath, workspaceA) + } + + list, err := manager.List(SessionListFilter{LabelSelector: map[string]string{"team": "alpha"}, Status: SessionStatusActive}) + if err != nil { + t.Fatalf("list: %v", err) + } + if len(list) != 1 || list[0].ID != a.ID { + t.Fatalf("filtered list = %#v, want only %q", list, a.ID) 
+ } + + restarted, err := manager.Restart(context.Background(), a.ID) + if err != nil { + t.Fatalf("restart A: %v", err) + } + if restarted.ID != a.ID { + t.Fatalf("restart id = %q, want %q", restarted.ID, a.ID) + } + if restarted.DaemonPort != a.DaemonPort { + t.Fatalf("restart port = %d, want %d", restarted.DaemonPort, a.DaemonPort) + } + + if err := manager.Delete(context.Background(), b.ID); err != nil { + t.Fatalf("delete B: %v", err) + } + + if _, err := manager.Get(b.ID); !errors.Is(err, ErrSessionNotFound) { + t.Fatalf("expected ErrSessionNotFound after delete, got %v", err) + } +} + +func TestManagerPeriodicHealthChecks(t *testing.T) { + workspace := t.TempDir() + starter := newFakeStarter() + eventBus := NewEventBus(64) + events, unsubscribe, err := eventBus.Subscribe(EventFilter{Types: []EventType{EventTypeSessionHealthChanged}}) + if err != nil { + t.Fatalf("subscribe: %v", err) + } + t.Cleanup(unsubscribe) + + var healthy atomic.Bool + healthy.Store(true) + + mux := http.NewServeMux() + mux.HandleFunc("/global/health", func(w http.ResponseWriter, _ *http.Request) { + w.Header().Set("Content-Type", "application/json") + _ = json.NewEncoder(w).Encode(map[string]any{"healthy": healthy.Load()}) + }) + server := httptest.NewServer(mux) + defer server.Close() + + port := mustPortFromURL(t, server.URL) + + manager := newManagerForTest(t, ManagerConfig{ + EventBus: eventBus, + ProcessStarter: starter.start, + PortStart: port, + PortEnd: port, + PortAvailable: func(int) bool { return true }, + HealthCheckInterval: 30 * time.Millisecond, + HealthCheckTimeout: 500 * time.Millisecond, + }) + + handle, err := manager.Create(context.Background(), CreateOpts{WorkspacePath: workspace}) + if err != nil { + t.Fatalf("create: %v", err) + } + + first := waitForEventType(t, events, EventTypeSessionHealthChanged, 2*time.Second) + healthEvent, ok := first.(SessionHealthChanged) + if !ok { + t.Fatalf("health event type = %T", first) + } + if healthEvent.Session.ID != 
handle.ID { + t.Fatalf("health event session id = %q, want %q", healthEvent.Session.ID, handle.ID) + } + if healthEvent.Current.State != HealthStateHealthy { + t.Fatalf("first health state = %q, want %q", healthEvent.Current.State, HealthStateHealthy) + } + + healthy.Store(false) + + second := waitForEventType(t, events, EventTypeSessionHealthChanged, 2*time.Second) + healthEvent2, ok := second.(SessionHealthChanged) + if !ok { + t.Fatalf("health event type = %T", second) + } + if healthEvent2.Current.State != HealthStateUnhealthy { + t.Fatalf("second health state = %q, want %q", healthEvent2.Current.State, HealthStateUnhealthy) + } +} + +func TestManagerHealthCircuitBreakerSkipsProbesUntilCooldownAndResets(t *testing.T) { + workspace := t.TempDir() + starter := newFakeStarter() + + now := time.Date(2026, 3, 6, 12, 0, 0, 0, time.UTC) + nowFn := func() time.Time { return now } + + var checks atomic.Int32 + healthState := HealthStateUnhealthy + + manager := newManagerForTest(t, ManagerConfig{ + ProcessStarter: starter.start, + PortStart: 35135, + PortEnd: 35135, + PortAvailable: func(int) bool { return true }, + HealthCheckInterval: time.Hour, + HealthFailThreshold: 3, + HealthCircuitReset: 30 * time.Second, + Now: nowFn, + HealthChecker: func(_ context.Context, _ int) HealthStatus { + checks.Add(1) + if healthState == HealthStateHealthy { + return HealthStatus{State: HealthStateHealthy, LastCheck: nowFn()} + } + return HealthStatus{State: HealthStateUnhealthy, LastCheck: nowFn(), Error: "daemon unhealthy"} + }, + }) + + handle, err := manager.Create(context.Background(), CreateOpts{WorkspacePath: workspace}) + if err != nil { + t.Fatalf("create: %v", err) + } + + for i := 0; i < 3; i++ { + health, err := manager.Health(context.Background(), handle.ID) + if err != nil { + t.Fatalf("health probe %d: %v", i+1, err) + } + if health.State != HealthStateUnhealthy { + t.Fatalf("health state probe %d = %q, want %q", i+1, health.State, HealthStateUnhealthy) + } + } + if got 
:= checks.Load(); got != 3 {
+		t.Fatalf("health checker calls after threshold = %d, want 3", got)
+	}
+
+	if _, err := manager.Health(context.Background(), handle.ID); err != nil {
+		t.Fatalf("health probe while circuit open: %v", err)
+	}
+	if got := checks.Load(); got != 3 {
+		t.Fatalf("health checker calls while circuit open = %d, want 3", got)
+	}
+
+	now = now.Add(20 * time.Second)
+	if _, err := manager.Health(context.Background(), handle.ID); err != nil {
+		t.Fatalf("health probe before cooldown expires: %v", err)
+	}
+	if got := checks.Load(); got != 3 {
+		t.Fatalf("health checker calls before cooldown expires = %d, want 3", got)
+	}
+
+	now = now.Add(15 * time.Second)
+	if _, err := manager.Health(context.Background(), handle.ID); err != nil {
+		t.Fatalf("health probe after cooldown expires: %v", err)
+	}
+	if got := checks.Load(); got != 4 {
+		t.Fatalf("health checker calls after cooldown expires = %d, want 4", got)
+	}
+
+	healthState = HealthStateHealthy
+	now = now.Add(31 * time.Second)
+	health, err := manager.Health(context.Background(), handle.ID)
+	if err != nil {
+		t.Fatalf("health recovery probe: %v", err)
+	}
+	if health.State != HealthStateHealthy {
+		t.Fatalf("health state after recovery = %q, want %q", health.State, HealthStateHealthy)
+	}
+
+	healthState = HealthStateUnhealthy
+	for i := 0; i < 3; i++ {
+		now = now.Add(time.Second)
+		if _, err := manager.Health(context.Background(), handle.ID); err != nil {
+			t.Fatalf("re-trip health probe %d: %v", i+1, err)
+		}
+	}
+	trippedCalls := checks.Load()
+
+	if _, err := manager.Health(context.Background(), handle.ID); err != nil {
+		t.Fatalf("health probe while re-tripped circuit open: %v", err)
+	}
+	if got := checks.Load(); got != trippedCalls {
+		t.Fatalf("health checker calls during re-tripped open circuit = %d, want %d", got, trippedCalls)
+	}
+
+	now = now.Add(5 * time.Second)
+	if _, err := manager.Restart(context.Background(), handle.ID); err != nil {
+		t.Fatalf("restart: %v", err)
+	}
+
+	if _, err := manager.Health(context.Background(), handle.ID); err != nil {
+		t.Fatalf("health after manual restart: %v", err)
+	}
+	if got := checks.Load(); got != trippedCalls+1 {
+		t.Fatalf("health checker calls after restart = %d, want %d", got, trippedCalls+1)
+	}
+}
+
+func TestManagerCrashDetectionEmitsHealthEvent(t *testing.T) {
+	workspace := t.TempDir()
+	starter := newFakeStarter()
+	eventBus := NewEventBus(32)
+	events, unsubscribe, err := eventBus.Subscribe(EventFilter{Types: []EventType{EventTypeSessionHealthChanged}})
+	if err != nil {
+		t.Fatalf("subscribe: %v", err)
+	}
+	t.Cleanup(unsubscribe)
+
+	manager := newManagerForTest(t, ManagerConfig{
+		EventBus:            eventBus,
+		ProcessStarter:      starter.start,
+		PortStart:           35140,
+		PortEnd:             35140,
+		PortAvailable:       func(int) bool { return true },
+		HealthCheckInterval: time.Hour,
+	})
+
+	handle, err := manager.Create(context.Background(), CreateOpts{WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("create: %v", err)
+	}
+
+	proc := starter.processByPort(handle.DaemonPort)
+	if proc == nil {
+		t.Fatal("missing fake process")
+	}
+	proc.exit(errors.New("boom"))
+
+	event := waitForEventType(t, events, EventTypeSessionHealthChanged, 2*time.Second)
+	healthEvent, ok := event.(SessionHealthChanged)
+	if !ok {
+		t.Fatalf("health event type = %T", event)
+	}
+	if healthEvent.Current.State != HealthStateUnhealthy {
+		t.Fatalf("health state = %q, want %q", healthEvent.Current.State, HealthStateUnhealthy)
+	}
+	if !strings.Contains(healthEvent.Current.Error, "boom") {
+		t.Fatalf("health error = %q, want boom substring", healthEvent.Current.Error)
+	}
+
+	got, err := manager.Get(handle.ID)
+	if err != nil {
+		t.Fatalf("get after crash: %v", err)
+	}
+	if got.Status != SessionStatusError {
+		t.Fatalf("status after crash = %q, want %q", got.Status, SessionStatusError)
+	}
+}
diff --git a/internal/session/types.go b/internal/session/types.go
new file mode 100644
index 0000000..3f6fda5
--- /dev/null
+++ b/internal/session/types.go
@@ -0,0 +1,71 @@
+package session
+
+import (
+	"context"
+	"io"
+	"time"
+)
+
+type SessionStatus string
+
+const (
+	SessionStatusUnknown SessionStatus = "unknown"
+	SessionStatusActive  SessionStatus = "active"
+	SessionStatusIdle    SessionStatus = "idle"
+	SessionStatusStopped SessionStatus = "stopped"
+	SessionStatusError   SessionStatus = "error"
+)
+
+type HealthState string
+
+const (
+	HealthStateUnknown   HealthState = "unknown"
+	HealthStateHealthy   HealthState = "healthy"
+	HealthStateUnhealthy HealthState = "unhealthy"
+)
+
+type CreateOpts struct {
+	WorkspacePath  string
+	OpenCodeBinary string
+	EnvVars        map[string]string
+	Labels         map[string]string
+}
+
+type SessionListFilter struct {
+	WorkspacePath string
+	Status        SessionStatus
+	LabelSelector map[string]string
+}
+
+type SessionHandle struct {
+	ID              string
+	DaemonPort      int
+	WorkspacePath   string
+	Status          SessionStatus
+	CreatedAt       time.Time
+	LastActivity    time.Time
+	AttachedClients int
+	Labels          map[string]string
+}
+
+type HealthStatus struct {
+	State     HealthState
+	LastCheck time.Time
+	Error     string
+}
+
+type TerminalConn interface {
+	io.ReadWriteCloser
+	Resize(cols, rows int) error
+}
+
+type SessionManager interface {
+	Create(ctx context.Context, opts CreateOpts) (*SessionHandle, error)
+	Get(id string) (*SessionHandle, error)
+	List(filter SessionListFilter) ([]SessionHandle, error)
+	Stop(ctx context.Context, id string) error
+	Restart(ctx context.Context, id string) (*SessionHandle, error)
+	Delete(ctx context.Context, id string) error
+	AttachTerminal(ctx context.Context, id string) (TerminalConn, error)
+	Health(ctx context.Context, id string) (HealthStatus, error)
+}
diff --git a/internal/terminal/bridge.go b/internal/terminal/bridge.go
new file mode 100644
index 0000000..0ed2e7b
--- /dev/null
+++ b/internal/terminal/bridge.go
@@ -0,0 +1,363 @@
+package terminal
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"io"
+	"log/slog"
+	"net"
+	"net/http"
+	"strings"
+	"sync"
+	"sync/atomic"
+	"time"
+
+	"opencoderouter/internal/cache"
+	"opencoderouter/internal/session"
+
+	"github.com/gorilla/websocket"
+)
+
+const (
+	defaultBridgePingInterval   = 30 * time.Second
+	defaultBridgeWriteTimeout   = 10 * time.Second
+	defaultBridgeReadBufferSize = 1024
+	defaultBridgeWriteBufferSz  = 1024
+)
+
+type BridgeConfig struct {
+	Logger          *slog.Logger
+	ScrollbackCache cache.ScrollbackCache
+	PingInterval    time.Duration
+	WriteTimeout    time.Duration
+	ReadBufferSize  int
+	WriteBufferSize int
+	CheckOrigin     func(*http.Request) bool
+}
+
+type TerminalBridge struct {
+	logger       *slog.Logger
+	cache        cache.ScrollbackCache
+	pingInterval time.Duration
+	writeTimeout time.Duration
+	upgrader     websocket.Upgrader
+}
+
+type resizeControlMessage struct {
+	Type string `json:"type"`
+	Cols int    `json:"cols"`
+	Rows int    `json:"rows"`
+}
+
+func NewBridge(cfg BridgeConfig) *TerminalBridge {
+	logger := cfg.Logger
+	if logger == nil {
+		logger = slog.Default()
+	}
+
+	pingInterval := cfg.PingInterval
+	if pingInterval < 0 {
+		pingInterval = 0
+	}
+	if pingInterval == 0 {
+		pingInterval = defaultBridgePingInterval
+	}
+
+	writeTimeout := cfg.WriteTimeout
+	if writeTimeout < 0 {
+		writeTimeout = 0
+	}
+	if writeTimeout == 0 {
+		writeTimeout = defaultBridgeWriteTimeout
+	}
+
+	readBufferSize := cfg.ReadBufferSize
+	if readBufferSize <= 0 {
+		readBufferSize = defaultBridgeReadBufferSize
+	}
+
+	writeBufferSize := cfg.WriteBufferSize
+	if writeBufferSize <= 0 {
+		writeBufferSize = defaultBridgeWriteBufferSz
+	}
+
+	checkOrigin := cfg.CheckOrigin
+	if checkOrigin == nil {
+		checkOrigin = func(*http.Request) bool {
+			return true
+		}
+	}
+
+	return &TerminalBridge{
+		logger:       logger,
+		cache:        cfg.ScrollbackCache,
+		pingInterval: pingInterval,
+		writeTimeout: writeTimeout,
+		upgrader: websocket.Upgrader{
+			ReadBufferSize:  readBufferSize,
+			WriteBufferSize: writeBufferSize,
+			CheckOrigin:     checkOrigin,
+		},
+	}
+}
+
+func (b *TerminalBridge) Upgrade(w http.ResponseWriter, r *http.Request) (*websocket.Conn, error) {
+	if b == nil {
+		return nil, errors.New("terminal bridge is nil")
+	}
+	return b.upgrader.Upgrade(w, r, nil)
+}
+
+func (b *TerminalBridge) Bridge(ctx context.Context, wsConn *websocket.Conn, terminalConn session.TerminalConn, sessionID string) error {
+	if b == nil {
+		return errors.New("terminal bridge is nil")
+	}
+	if wsConn == nil {
+		return errors.New("websocket connection is nil")
+	}
+	if terminalConn == nil {
+		return errors.New("terminal connection is nil")
+	}
+	if ctx == nil {
+		ctx = context.Background()
+	}
+
+	b.logger.Info("terminal websocket bridge connected", "session_id", sessionID)
+
+	bridgeCtx, cancel := context.WithCancel(ctx)
+	defer cancel()
+
+	if b.pingInterval > 0 {
+		_ = wsConn.SetReadDeadline(time.Now().Add(2 * b.pingInterval))
+		wsConn.SetPongHandler(func(_ string) error {
+			return wsConn.SetReadDeadline(time.Now().Add(2 * b.pingInterval))
+		})
+	}
+
+	var bytesToBackend atomic.Int64
+	var bytesToClient atomic.Int64
+	var resizeOps atomic.Int64
+
+	var wsWriteMu sync.Mutex
+	writeMessage := func(messageType int, payload []byte) error {
+		wsWriteMu.Lock()
+		defer wsWriteMu.Unlock()
+		if b.writeTimeout > 0 {
+			if err := wsConn.SetWriteDeadline(time.Now().Add(b.writeTimeout)); err != nil {
+				return err
+			}
+		}
+		return wsConn.WriteMessage(messageType, payload)
+	}
+
+	writeControl := func(messageType int, payload []byte) error {
+		wsWriteMu.Lock()
+		defer wsWriteMu.Unlock()
+		deadline := time.Now()
+		if b.writeTimeout > 0 {
+			deadline = deadline.Add(b.writeTimeout)
+		}
+		return wsConn.WriteControl(messageType, payload, deadline)
+	}
+
+	workerCount := 2
+	errCh := make(chan error, 3)
+
+	go func() {
+		errCh <- b.pipeBackendToWS(bridgeCtx, terminalConn, writeMessage, &bytesToClient, sessionID)
+	}()
+
+	go func() {
+		errCh <- b.pipeWSToBackend(bridgeCtx, wsConn, terminalConn, &bytesToBackend, &resizeOps)
+	}()
+
+	if b.pingInterval > 0 {
+		workerCount++
+		go func() {
+			errCh <- b.pingLoop(bridgeCtx, writeControl)
+		}()
+	}
+
+	firstErr := <-errCh
+	cancel()
+	_ = wsConn.Close()
+	_ = terminalConn.Close()
+
+	for i := 1; i < workerCount; i++ {
+		<-errCh
+	}
+
+	if isExpectedBridgeClosure(firstErr) {
+		firstErr = nil
+	}
+
+	b.logger.Info(
+		"terminal websocket bridge closed",
+		"session_id", sessionID,
+		"bytes_to_backend", bytesToBackend.Load(),
+		"bytes_to_client", bytesToClient.Load(),
+		"resize_ops", resizeOps.Load(),
+		"error", firstErr,
+	)
+
+	return firstErr
+}
+
+func (b *TerminalBridge) pipeBackendToWS(
+	_ context.Context,
+	terminalConn session.TerminalConn,
+	writeMessage func(messageType int, payload []byte) error,
+	bytesToClient *atomic.Int64,
+	sessionID string,
+) error {
+	buf := make([]byte, 32*1024)
+	for {
+		n, err := terminalConn.Read(buf)
+		if n > 0 {
+			chunk := append([]byte(nil), buf[:n]...)
+			b.appendTerminalOutput(sessionID, chunk)
+			if writeErr := writeMessage(websocket.BinaryMessage, chunk); writeErr != nil {
+				return writeErr
+			}
+			bytesToClient.Add(int64(n))
+		}
+		if err != nil {
+			return err
+		}
+	}
+}
+
+func (b *TerminalBridge) appendTerminalOutput(sessionID string, chunk []byte) {
+	if b == nil || b.cache == nil || len(chunk) == 0 {
+		return
+	}
+	entry := cache.Entry{
+		Timestamp: time.Now().UTC(),
+		Type:      cache.EntryTypeTerminalOutput,
+		Content:   append([]byte(nil), chunk...),
+		Metadata: map[string]any{
+			"sessionId": sessionID,
+			"bytes":     len(chunk),
+		},
+	}
+	if err := b.cache.Append(sessionID, entry); err != nil {
+		b.logger.Debug("failed to append terminal output scrollback", "session_id", sessionID, "error", err)
+	}
+}
+
+func (b *TerminalBridge) pipeWSToBackend(
+	_ context.Context,
+	wsConn *websocket.Conn,
+	terminalConn session.TerminalConn,
+	bytesToBackend *atomic.Int64,
+	resizeOps *atomic.Int64,
+) error {
+	for {
+		messageType, payload, err := wsConn.ReadMessage()
+		if err != nil {
+			return err
+		}
+
+		switch messageType {
+		case websocket.BinaryMessage:
+			n, writeErr := terminalConn.Write(payload)
+			if n > 0 {
+				bytesToBackend.Add(int64(n))
+			}
+			if writeErr != nil {
+				return writeErr
+			}
+			if n != len(payload) {
+				return io.ErrShortWrite
+			}
+		case websocket.TextMessage:
+			resize, decodeErr := decodeResizeControlMessage(payload)
+			if decodeErr != nil {
+				return decodeErr
+			}
+			if resizeErr := terminalConn.Resize(resize.Cols, resize.Rows); resizeErr != nil {
+				return fmt.Errorf("resize terminal: %w", resizeErr)
+			}
+			resizeOps.Add(1)
+		case websocket.CloseMessage:
+			return nil
+		}
+	}
+}
+
+func (b *TerminalBridge) pingLoop(
+	ctx context.Context,
+	writeControl func(messageType int, payload []byte) error,
+) error {
+	ticker := time.NewTicker(b.pingInterval)
+	defer ticker.Stop()
+
+	for {
+		select {
+		case <-ctx.Done():
+			return nil
+		case <-ticker.C:
+			if err := writeControl(websocket.PingMessage, nil); err != nil {
+				return err
+			}
+		}
+	}
+}
+
+func decodeResizeControlMessage(payload []byte) (resizeControlMessage, error) {
+	var msg resizeControlMessage
+	if err := json.Unmarshal(payload, &msg); err != nil {
+		return resizeControlMessage{}, fmt.Errorf("decode control message: %w", err)
+	}
+	if msg.Type != "resize" {
+		return resizeControlMessage{}, fmt.Errorf("unsupported control message type %q", msg.Type)
+	}
+	if msg.Cols <= 0 || msg.Rows <= 0 {
+		return resizeControlMessage{}, fmt.Errorf("invalid resize dimensions cols=%d rows=%d", msg.Cols, msg.Rows)
+	}
+	return msg, nil
+}
+
+func isWebSocketUpgrade(r *http.Request) bool {
+	if r == nil {
+		return false
+	}
+	if !headerHasToken(r.Header.Get("Connection"), "upgrade") {
+		return false
+	}
+	if !headerHasToken(r.Header.Get("Upgrade"), "websocket") {
+		return false
+	}
+	return true
+}
+
+func headerHasToken(headerValue, token string) bool {
+	for _, part := range strings.Split(headerValue, ",") {
+		if strings.EqualFold(strings.TrimSpace(part), token) {
+			return true
+		}
+	}
+	return false
+}
+
+func isExpectedBridgeClosure(err error) bool {
+	if err == nil {
+		return true
+	}
+	if errors.Is(err, io.EOF) || errors.Is(err, net.ErrClosed) || errors.Is(err, context.Canceled) {
+		return true
+	}
+	if websocket.IsCloseError(err, websocket.CloseNormalClosure, websocket.CloseGoingAway, websocket.CloseNoStatusReceived) {
+		return true
+	}
+	var closeErr *websocket.CloseError
+	if errors.As(err, &closeErr) {
+		switch closeErr.Code {
+		case websocket.CloseNormalClosure, websocket.CloseGoingAway, websocket.CloseNoStatusReceived:
+			return true
+		}
+	}
+	return false
+}
diff --git a/internal/terminal/dialer.go b/internal/terminal/dialer.go
new file mode 100644
index 0000000..7f8f541
--- /dev/null
+++ b/internal/terminal/dialer.go
@@ -0,0 +1,371 @@
+package terminal
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"io"
+	"log/slog"
+	"net/http"
+	"os"
+	"os/exec"
+	"path/filepath"
+	"strconv"
+	"strings"
+	"sync"
+	"syscall"
+	"time"
+
+	"opencoderouter/internal/session"
+
+	"github.com/charmbracelet/x/xpty"
+)
+
+type SessionDialerConfig struct {
+	Logger            *slog.Logger
+	OpenCodeBinary    string
+	AttachArgsBuilder func(handle session.SessionHandle) ([]string, error)
+	CommandFactory    func(binary string, args ...string) *exec.Cmd
+}
+
+type ptySessionConn struct {
+	pty       xpty.Pty
+	cmd       *exec.Cmd
+	logger    *slog.Logger
+	sessionID string
+	closeOnce sync.Once
+	closeErr  error
+}
+
+var errDaemonSessionNotFound = errors.New("daemon session not found for workspace")
+
+func NewSessionDialer(cfg SessionDialerConfig) func(ctx context.Context, handle session.SessionHandle) (session.TerminalConn, error) {
+	logger := cfg.Logger
+	if logger == nil {
+		logger = slog.Default()
+	}
+
+	binary := cfg.OpenCodeBinary
+	if binary == "" {
+		binary = "opencode"
+	}
+
+	argsBuilder := cfg.AttachArgsBuilder
+
+	cmdFactory := cfg.CommandFactory
+	if cmdFactory == nil {
+		cmdFactory = exec.Command
+	}
+
+	return func(_ context.Context, handle session.SessionHandle) (session.TerminalConn, error) {
+		if handle.ID == "" {
+			return nil, errors.New("session id is required")
+		}
+		if handle.DaemonPort <= 0 {
+			return nil, errors.New("daemon port must be greater than zero")
+		}
+		if handle.WorkspacePath == "" {
+			return nil, errors.New("workspace path is required")
+		}
+
+		var (
+			args []string
+			err  error
+		)
+
+		if argsBuilder != nil {
+			args, err = argsBuilder(handle)
+			if err != nil {
+				return nil, err
+			}
+		} else {
+			daemonURL := fmt.Sprintf("http://127.0.0.1:%d", handle.DaemonPort)
+			daemonSessionID, resolveErr := resolveDaemonSessionID(handle.DaemonPort, handle.WorkspacePath)
+			if resolveErr != nil {
+				if errors.Is(resolveErr, errDaemonSessionNotFound) {
+					args = []string{"attach", daemonURL}
+				} else {
+					return nil, resolveErr
+				}
+			} else {
+				args = []string{"attach", daemonURL, "-s", daemonSessionID}
+			}
+		}
+
+		ptyHandle, err := xpty.NewPty(80, 24)
+		if err != nil {
+			return nil, fmt.Errorf("allocate pty: %w", err)
+		}
+
+		cmd := cmdFactory(binary, args...)
+		if cmd == nil {
+			_ = ptyHandle.Close()
+			return nil, errors.New("command factory returned nil command")
+		}
+		cmd.Dir = handle.WorkspacePath
+		cmd.Env = append([]string{}, os.Environ()...)
+		cmd.SysProcAttr = &syscall.SysProcAttr{Setsid: true, Setctty: true, Ctty: 0}
+
+		if err := ptyHandle.Start(cmd); err != nil {
+			_ = ptyHandle.Close()
+			return nil, fmt.Errorf("start attach command: %w", err)
+		}
+
+		if unixPTY, ok := ptyHandle.(*xpty.UnixPty); ok {
+			_ = unixPTY.Slave().Close()
+		}
+
+		conn := &ptySessionConn{
+			pty:       ptyHandle,
+			cmd:       cmd,
+			logger:    logger,
+			sessionID: handle.ID,
+		}
+		return conn, nil
+	}
+}
+
+type daemonSessionRecord struct {
+	ID           string
+	Status       string
+	Workspace    string
+	LastActivity time.Time
+}
+
+func resolveDaemonSessionID(daemonPort int, workspacePath string) (string, error) {
+	daemonURL := fmt.Sprintf("http://127.0.0.1:%d", daemonPort)
+	sessions, err := fetchDaemonSessions(daemonURL)
+	if err != nil {
+		return "", err
+	}
+
+	targetWorkspace := filepath.Clean(workspacePath)
+	matches := make([]daemonSessionRecord, 0, len(sessions))
+	for _, s := range sessions {
+		if s.ID == "" {
+			continue
+		}
+		if s.Workspace == "" {
+			continue
+		}
+		if filepath.Clean(s.Workspace) == targetWorkspace {
+			matches = append(matches, s)
+		}
+	}
+
+	if len(matches) == 0 {
+		return "", fmt.Errorf("%w %q at %s/session; start or resume a daemon session in that workspace first", errDaemonSessionNotFound, workspacePath, daemonURL)
+	}
+
+	best := matches[0]
+	for _, candidate := range matches[1:] {
+		if isBetterSessionCandidate(candidate, best) {
+			best = candidate
+		}
+	}
+
+	return best.ID, nil
+}
+
+func fetchDaemonSessions(daemonURL string) ([]daemonSessionRecord, error) {
+	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
+	defer cancel()
+
+	req, err := http.NewRequestWithContext(ctx, http.MethodGet, daemonURL+"/session", nil)
+	if err != nil {
+		return nil, fmt.Errorf("build daemon session request: %w", err)
+	}
+
+	resp, err := http.DefaultClient.Do(req)
+	if err != nil {
+		return nil, fmt.Errorf("query daemon sessions at %s/session: %w", daemonURL, err)
+	}
+	defer func() { _ = resp.Body.Close() }()
+
+	body, err := io.ReadAll(io.LimitReader(resp.Body, 1<<20))
+	if err != nil {
+		return nil, fmt.Errorf("read daemon sessions response: %w", err)
+	}
+
+	if resp.StatusCode != http.StatusOK {
+		return nil, fmt.Errorf("daemon session query failed at %s/session: status %d", daemonURL, resp.StatusCode)
+	}
+
+	var payload any
+	if err := json.Unmarshal(body, &payload); err != nil {
+		return nil, fmt.Errorf("decode daemon sessions payload: %w", err)
+	}
+
+	array, err := extractSessionArray(payload)
+	if err != nil {
+		return nil, err
+	}
+
+	records := make([]daemonSessionRecord, 0, len(array))
+	for _, item := range array {
+		m, ok := item.(map[string]any)
+		if !ok {
+			continue
+		}
+		records = append(records, daemonSessionRecord{
+			ID:           firstString(m, "id"),
+			Status:       strings.ToLower(firstString(m, "status")),
+			Workspace:    firstString(m, "directory", "workspacePath", "workspace_path", "path", "cwd"),
+			LastActivity: extractLastActivity(m),
+		})
+	}
+
+	return records, nil
+}
+
+func extractSessionArray(payload any) ([]any, error) {
+	if arr, ok := payload.([]any); ok {
+		return arr, nil
+	}
+	if m, ok := payload.(map[string]any); ok {
+		if arr, ok := m["sessions"].([]any); ok {
+			return arr, nil
+		}
+	}
+	return nil, errors.New("decode daemon sessions payload: unexpected shape")
+}
+
+func firstString(m map[string]any, keys ...string) string {
+	for _, key := range keys {
+		v, ok := m[key]
+		if !ok || v == nil {
+			continue
+		}
+		switch value := v.(type) {
+		case string:
+			if trimmed := strings.TrimSpace(value); trimmed != "" {
+				return trimmed
+			}
+		case json.Number:
+			if s := strings.TrimSpace(value.String()); s != "" {
+				return s
+			}
+		}
+	}
+	return ""
+}
+
+func extractLastActivity(m map[string]any) time.Time {
+	for _, key := range []string{"lastActivity", "last_activity", "updated", "created", "createdAt"} {
+		v, ok := m[key]
+		if !ok || v == nil {
+			continue
+		}
+		if ts := parseTimestamp(v); !ts.IsZero() {
+			return ts
+		}
+	}
+	return time.Time{}
+}
+
+func parseTimestamp(v any) time.Time {
+	switch raw := v.(type) {
+	case string:
+		raw = strings.TrimSpace(raw)
+		if raw == "" {
+			return time.Time{}
+		}
+		if ts, err := time.Parse(time.RFC3339Nano, raw); err == nil {
+			return ts
+		}
+		if iv, err := strconv.ParseInt(raw, 10, 64); err == nil {
+			return epochToTime(iv)
+		}
+	case float64:
+		return epochToTime(int64(raw))
+	case json.Number:
+		if iv, err := raw.Int64(); err == nil {
+			return epochToTime(iv)
+		}
+		if fv, err := raw.Float64(); err == nil {
+			return epochToTime(int64(fv))
+		}
+	}
+	return time.Time{}
+}
+
+func epochToTime(v int64) time.Time {
+	if v <= 0 {
+		return time.Time{}
+	}
+	if v > 1_000_000_000_000 {
+		return time.UnixMilli(v)
+	}
+	return time.Unix(v, 0)
+}
+
+func isBetterSessionCandidate(a, b daemonSessionRecord) bool {
+	if sessionStatusRank(a.Status) != sessionStatusRank(b.Status) {
+		return sessionStatusRank(a.Status) > sessionStatusRank(b.Status)
+	}
+	if !a.LastActivity.Equal(b.LastActivity) {
+		return a.LastActivity.After(b.LastActivity)
+	}
+	return a.ID < b.ID
+}
+
+func sessionStatusRank(status string) int {
+	switch status {
+	case "active", "running":
+		return 2
+	case "idle":
+		return 1
+	default:
+		return 0
+	}
+}
+
+func (c *ptySessionConn) Read(p []byte) (int, error) {
+	return c.pty.Read(p)
+}
+
+func (c *ptySessionConn) Write(p []byte) (int, error) {
+	return c.pty.Write(p)
+}
+
+func (c *ptySessionConn) Resize(cols, rows int) error {
+	if cols <= 0 || rows <= 0 {
+		return fmt.Errorf("invalid terminal size %dx%d", cols, rows)
+	}
+	return c.pty.Resize(cols, rows)
+}
+
+func (c *ptySessionConn) Close() error {
+	c.closeOnce.Do(func() {
+		if c.pty != nil {
+			if err := c.pty.Close(); err != nil && !errors.Is(err, os.ErrClosed) {
+				c.closeErr = errors.Join(c.closeErr, err)
+			}
+		}
+
+		if c.cmd != nil && c.cmd.Process != nil {
+			if err := c.cmd.Process.Kill(); !isIgnorableKillError(err) {
+				c.closeErr = errors.Join(c.closeErr, err)
+			}
+		}
+
+		if c.logger != nil && c.closeErr != nil {
+			c.logger.Debug("terminal dialer close completed with errors", "session_id", c.sessionID, "error", c.closeErr)
+		}
+	})
+	return c.closeErr
+}
+
+func isIgnorableKillError(err error) bool {
+	if err == nil {
+		return true
+	}
+	if errors.Is(err, os.ErrProcessDone) {
+		return true
+	}
+	if errors.Is(err, syscall.ESRCH) {
+		return true
+	}
+	return false
+}
diff --git a/internal/terminal/dialer_test.go b/internal/terminal/dialer_test.go
new file mode 100644
index 0000000..99605cb
--- /dev/null
+++ b/internal/terminal/dialer_test.go
@@ -0,0 +1,315 @@
+package terminal
+
+import (
+	"errors"
+	"fmt"
+	"io"
+	"log/slog"
+	"net/http"
+	"net/http/httptest"
+	"net/url"
+	"os"
+	"os/exec"
+	"path/filepath"
+	"strconv"
+	"strings"
+	"testing"
+	"time"
+
+	"opencoderouter/internal/session"
+)
+
+func TestSessionDialerReadWriteResizeClose(t *testing.T) {
+	workspace := t.TempDir()
+	binaryPath := filepath.Join(workspace, "opencode-test-attach.sh")
+	script := "#!/bin/sh\ncat\n"
+	if err := os.WriteFile(binaryPath, []byte(script), 0o755); err != nil {
+		t.Fatalf("write test attach script: %v", err)
+	}
+
+	dialer := NewSessionDialer(SessionDialerConfig{
+		Logger:         slog.New(slog.NewTextHandler(io.Discard, nil)),
+		OpenCodeBinary: binaryPath,
+		AttachArgsBuilder: func(handle session.SessionHandle) ([]string, error) {
+			return []string{}, nil
+		},
+	})
+
+	conn, err := dialer(nil, session.SessionHandle{ID: "session-1", DaemonPort: 1234, WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("dial terminal: %v", err)
+	}
+	defer func() { _ = conn.Close() }()
+
+	in := []byte("hello-terminal\n")
+	if _, err := conn.Write(in); err != nil {
+		t.Fatalf("write terminal input: %v", err)
+	}
+
+	if err := conn.Resize(100, 30); err != nil {
+		t.Fatalf("resize terminal: %v", err)
+	}
+
+	if err := conn.Resize(0, 30); err == nil {
+		t.Fatal("expected invalid resize error")
+	}
+
+	buf := make([]byte, len(in))
+	if err := readFullWithTimeout(conn, buf, 2*time.Second); err != nil {
+		t.Fatalf("read terminal output: %v", err)
+	}
+	echo := string(buf)
+	if echo != "hello-terminal\r" && echo != "hello-terminal\n" {
+		t.Fatalf("terminal echo=%q want either carriage-return or newline form", echo)
+	}
+
+	_ = conn.Close()
+	if err := conn.Close(); err != nil {
+		t.Fatalf("second close terminal conn: %v", err)
+	}
+}
+
+func TestSessionDialerRequiresSessionMetadata(t *testing.T) {
+	dialer := NewSessionDialer(SessionDialerConfig{})
+
+	if _, err := dialer(nil, session.SessionHandle{}); err == nil {
+		t.Fatal("expected error when session id/workspace/daemon missing")
+	}
+
+	if _, err := dialer(nil, session.SessionHandle{ID: "session-1", WorkspacePath: t.TempDir()}); err == nil {
+		t.Fatal("expected error when daemon port missing")
+	}
+
+	if _, err := dialer(nil, session.SessionHandle{ID: "session-1"}); err == nil {
+		t.Fatal("expected error when workspace missing")
+	}
+
+	if _, err := dialer(nil, session.SessionHandle{ID: "session-1", DaemonPort: 1}); err == nil {
+		t.Fatal("expected error when workspace missing")
+	}
+}
+
+func TestSessionDialerCommandFactoryNilCommand(t *testing.T) {
+	dialer := NewSessionDialer(SessionDialerConfig{
+		CommandFactory: func(string, ...string) *exec.Cmd { return nil },
+		AttachArgsBuilder: func(handle session.SessionHandle) ([]string, error) {
+			return []string{}, nil
+		},
+	})
+
+	if _, err := dialer(nil, session.SessionHandle{ID: "session-1", DaemonPort: 1234, WorkspacePath: t.TempDir()}); err == nil {
+		t.Fatal("expected error when command factory returns nil")
+	}
+}
+
+func TestSessionDialerAttachArgsBuilderOverrideBypassesDefaultDaemonArgs(t *testing.T) {
+	workspace := t.TempDir()
+	binaryPath := filepath.Join(workspace, "opencode-test-override-args.sh")
+	script := "#!/bin/sh\ncat\n"
+	if err := os.WriteFile(binaryPath, []byte(script), 0o755); err != nil {
+		t.Fatalf("write override args script: %v", err)
+	}
+
+	var capturedArgs []string
+
+	dialer := NewSessionDialer(SessionDialerConfig{
+		OpenCodeBinary: binaryPath,
+		AttachArgsBuilder: func(handle session.SessionHandle) ([]string, error) {
+			return []string{"custom", "args"}, nil
+		},
+		CommandFactory: func(binary string, args ...string) *exec.Cmd {
+			capturedArgs = append([]string(nil), args...)
+			return exec.Command(binary, args...)
+		},
+	})
+
+	conn, err := dialer(nil, session.SessionHandle{ID: "session-override", DaemonPort: 12345, WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("dial with override args: %v", err)
+	}
+	defer func() { _ = conn.Close() }()
+
+	if len(capturedArgs) != 2 || capturedArgs[0] != "custom" || capturedArgs[1] != "args" {
+		t.Fatalf("override args=%v want [custom args]", capturedArgs)
+	}
+}
+
+func TestResolveDaemonSessionIDByWorkspaceSuccess(t *testing.T) {
+	workspace := "/tmp/workspace-match"
+	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
+		w.Header().Set("Content-Type", "application/json")
+		_, _ = w.Write([]byte(fmt.Sprintf(`[
+			{"id":"daemon-old","directory":"%s","status":"active","lastActivity":"2026-03-06T10:00:00Z"},
+			{"id":"daemon-new","directory":"%s","status":"active","lastActivity":"2026-03-06T11:00:00Z"},
+			{"id":"daemon-other","directory":"/tmp/other","status":"active","lastActivity":"2026-03-06T12:00:00Z"}
+		]`, workspace, workspace)))
+	}))
+	defer server.Close()
+
+	resolved, err := resolveDaemonSessionID(daemonPortFromServer(t, server), workspace)
+	if err != nil {
+		t.Fatalf("resolve daemon session id: %v", err)
+	}
+	if resolved != "daemon-new" {
+		t.Fatalf("resolved daemon session id=%q want=%q", resolved, "daemon-new")
+	}
+}
+
+func TestResolveDaemonSessionIDNoMatch(t *testing.T) {
+	workspace := "/tmp/workspace-missing"
+	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
+		w.Header().Set("Content-Type", "application/json")
+		_, _ = w.Write([]byte(`[{"id":"daemon-other","directory":"/tmp/other","status":"active"}]`))
+	}))
+	defer server.Close()
+
+	_, err := resolveDaemonSessionID(daemonPortFromServer(t, server), workspace)
+	if err == nil {
+		t.Fatal("expected no-match error")
+	}
+	if !errors.Is(err, errDaemonSessionNotFound) {
+		t.Fatalf("error=%v want errDaemonSessionNotFound", err)
+	}
+	if !strings.Contains(err.Error(), "daemon session not found for workspace") {
+		t.Fatalf("error=%q want no-match message", err)
+	}
+}
+
+func TestSessionDialerDefaultArgsUseResolvedDaemonSessionID(t *testing.T) {
+	workspace := t.TempDir()
+	binaryPath := filepath.Join(workspace, "opencode-test-resolved-daemon-id.sh")
+	script := "#!/bin/sh\ncat\n"
+	if err := os.WriteFile(binaryPath, []byte(script), 0o755); err != nil {
+		t.Fatalf("write resolved-id args script: %v", err)
+	}
+
+	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
+		w.Header().Set("Content-Type", "application/json")
+		_, _ = w.Write([]byte(fmt.Sprintf(`[{"id":"daemon-resolved-42","directory":%q,"status":"active","lastActivity":"2026-03-06T11:00:00Z"}]`, workspace)))
+	}))
+	defer server.Close()
+
+	var capturedArgs []string
+	port := daemonPortFromServer(t, server)
+
+	dialer := NewSessionDialer(SessionDialerConfig{
+		OpenCodeBinary: binaryPath,
+		CommandFactory: func(binary string, args ...string) *exec.Cmd {
+			capturedArgs = append([]string(nil), args...)
+			return exec.Command(binary, args...)
+		},
+	})
+
+	conn, err := dialer(nil, session.SessionHandle{ID: "session-1", DaemonPort: port, WorkspacePath: workspace})
+	if err != nil {
+		t.Fatalf("dial with resolved daemon session id: %v", err)
+	}
+	defer func() { _ = conn.Close() }()
+
+	wantURL := fmt.Sprintf("http://127.0.0.1:%d", port)
+	if len(capturedArgs) != 4 || capturedArgs[0] != "attach" || capturedArgs[1] != wantURL || capturedArgs[2] != "-s" || capturedArgs[3] != "daemon-resolved-42" {
+		t.Fatalf("default args=%v want [attach %s -s daemon-resolved-42]", capturedArgs, wantURL)
+	}
+	if strings.Contains(strings.Join(capturedArgs, " "), "session-1") {
+		t.Fatalf("default args should not include control-plane session id, got %v", capturedArgs)
+	}
+}
+
+func TestSessionDialerDefaultArgsFallbackToAttachWithoutSessionWhenNoMatch(t *testing.T) {
+	workspace := t.TempDir()
+	binaryPath := filepath.Join(workspace, "opencode-test-fallback-daemon-id.sh")
+	script := "#!/bin/sh\ncat\n"
+	if err := os.WriteFile(binaryPath, []byte(script), 0o755); err != nil {
+		t.Fatalf("write fallback args script: %v", err)
+	}
+
+	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
+		w.Header().Set("Content-Type", "application/json")
+		_, _ = w.Write([]byte(`[{"id":"daemon-other","directory":"/tmp/other","status":"active"}]`))
+	}))
+	defer server.Close()
+
+	var capturedArgs []string
+	port := daemonPortFromServer(t, server)
+
+	dialer := NewSessionDialer(SessionDialerConfig{
+		OpenCodeBinary: binaryPath,
+		CommandFactory: func(binary string, args ...string) *exec.Cmd {
+			capturedArgs = append([]string(nil), args...)
+			return exec.Command(binary, args...)
+ }, + }) + + conn, err := dialer(nil, session.SessionHandle{ID: "session-1", DaemonPort: port, WorkspacePath: workspace}) + if err != nil { + t.Fatalf("dial with no-match fallback: %v", err) + } + defer func() { _ = conn.Close() }() + + wantURL := fmt.Sprintf("http://127.0.0.1:%d", port) + if len(capturedArgs) != 2 || capturedArgs[0] != "attach" || capturedArgs[1] != wantURL { + t.Fatalf("fallback args=%v want [attach %s]", capturedArgs, wantURL) + } + if strings.Contains(strings.Join(capturedArgs, " "), "-s") { + t.Fatalf("fallback args should not include -s, got %v", capturedArgs) + } +} + +func TestSessionDialerDefaultArgsTransportOrParseErrorStillFails(t *testing.T) { + workspace := t.TempDir() + binaryPath := filepath.Join(workspace, "opencode-test-fallback-error.sh") + script := "#!/bin/sh\ncat\n" + if err := os.WriteFile(binaryPath, []byte(script), 0o755); err != nil { + t.Fatalf("write transport/parse error script: %v", err) + } + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { + w.Header().Set("Content-Type", "application/json") + _, _ = w.Write([]byte(`{"bad":"shape"}`)) + })) + defer server.Close() + + dialer := NewSessionDialer(SessionDialerConfig{OpenCodeBinary: binaryPath}) + _, err := dialer(nil, session.SessionHandle{ID: "session-1", DaemonPort: daemonPortFromServer(t, server), WorkspacePath: workspace}) + if err == nil { + t.Fatal("expected parse-shape error") + } + if errors.Is(err, errDaemonSessionNotFound) { + t.Fatalf("unexpected no-match fallback error classification: %v", err) + } + if !strings.Contains(err.Error(), "decode daemon sessions payload") { + t.Fatalf("error=%q want decode error", err) + } +} + +func daemonPortFromServer(t *testing.T, server *httptest.Server) int { + t.Helper() + + u, err := url.Parse(server.URL) + if err != nil { + t.Fatalf("parse server url: %v", err) + } + + portText := u.Port() + port, err := strconv.Atoi(portText) + if err != nil { + t.Fatalf("parse daemon port 
%q: %v", portText, err) + } + + return port +} + +func readFullWithTimeout(r io.Reader, buf []byte, timeout time.Duration) error { + errCh := make(chan error, 1) + go func() { + _, err := io.ReadFull(r, buf) + errCh <- err + }() + + select { + case err := <-errCh: + return err + case <-time.After(timeout): + return fmt.Errorf("read of %d bytes timed out after %v", len(buf), timeout) + } +} diff --git a/internal/terminal/handler.go b/internal/terminal/handler.go new file mode 100644 index 0000000..b2b300a --- /dev/null +++ b/internal/terminal/handler.go @@ -0,0 +1,173 @@ +package terminal + +import ( + "errors" + "fmt" + "log/slog" + "net/http" + "strings" + "time" + + "opencoderouter/internal/cache" + "opencoderouter/internal/session" +) + +const defaultTerminalRoutePrefix = "/ws/terminal/" + +type HandlerConfig struct { + SessionManager session.SessionManager + ScrollbackCache cache.ScrollbackCache + Bridge *TerminalBridge + Logger *slog.Logger + RoutePrefix string +} + +type Handler struct { + sessions session.SessionManager + bridge *TerminalBridge + logger *slog.Logger + routePrefix string +} + +func NewHandler(cfg HandlerConfig) *Handler { + logger := cfg.Logger + if logger == nil { + logger = slog.Default() + } + + bridge := cfg.Bridge + if bridge == nil { + bridge = NewBridge(BridgeConfig{Logger: logger, ScrollbackCache: cfg.ScrollbackCache}) + } + + routePrefix := strings.TrimSpace(cfg.RoutePrefix) + if routePrefix == "" { + routePrefix = defaultTerminalRoutePrefix + } + if !strings.HasPrefix(routePrefix, "/") { + routePrefix = "/" + routePrefix + } + if !strings.HasSuffix(routePrefix, "/") { + routePrefix += "/" + } + + return &Handler{ + sessions: cfg.SessionManager, + bridge: bridge, + logger: logger, + routePrefix: routePrefix, + } +} + +func (h *Handler) Register(mux *http.ServeMux) { + if h == nil || mux == nil { + return + } + mux.Handle(h.routePrefix, h) +} + +func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) { + if h == nil { + http.Error(w, "terminal handler is not configured",
http.StatusInternalServerError) + return + } + if r.Method != http.MethodGet { + http.Error(w, "method not allowed", http.StatusMethodNotAllowed) + return + } + if !isWebSocketUpgrade(r) { + http.Error(w, "websocket upgrade required", http.StatusBadRequest) + return + } + if h.sessions == nil { + http.Error(w, "session manager unavailable", http.StatusInternalServerError) + return + } + + sessionID, ok := h.sessionIDFromPath(r.URL.Path) + if !ok { + http.Error(w, fmt.Sprintf("invalid terminal websocket route: expected %s{session-id}", h.routePrefix), http.StatusBadRequest) + return + } + + handle, err := h.sessions.Get(sessionID) + if err != nil { + h.writeSessionLookupError(w, sessionID, err) + return + } + if handle.Status == session.SessionStatusStopped { + http.Error(w, fmt.Sprintf("session %q is stopped", sessionID), http.StatusServiceUnavailable) + return + } + + health, err := h.sessions.Health(r.Context(), sessionID) + if err != nil { + h.writeSessionLookupError(w, sessionID, err) + return + } + if health.State == session.HealthStateUnhealthy { + msg := fmt.Sprintf("session %q is unhealthy", sessionID) + if health.Error != "" { + msg = fmt.Sprintf("%s: %s", msg, health.Error) + } + http.Error(w, msg, http.StatusServiceUnavailable) + return + } + + terminalConn, err := h.sessions.AttachTerminal(r.Context(), sessionID) + if err != nil { + h.writeAttachError(w, sessionID, err) + return + } + + wsConn, err := h.bridge.Upgrade(w, r) + if err != nil { + _ = terminalConn.Close() + h.logger.Warn("terminal websocket upgrade failed", "session_id", sessionID, "error", err) + return + } + + start := time.Now() + h.logger.Info("terminal websocket accepted", "session_id", sessionID, "remote_addr", r.RemoteAddr) + bridgeErr := h.bridge.Bridge(r.Context(), wsConn, terminalConn, sessionID) + h.logger.Info("terminal websocket disconnected", "session_id", sessionID, "duration", time.Since(start), "error", bridgeErr) +} + +func (h *Handler) sessionIDFromPath(path string) (string, bool) { + if
!strings.HasPrefix(path, h.routePrefix) { + return "", false + } + + tail := strings.TrimPrefix(path, h.routePrefix) + tail = strings.TrimSpace(tail) + if tail == "" { + return "", false + } + if strings.Contains(tail, "/") { + return "", false + } + + return tail, true +} + +func (h *Handler) writeSessionLookupError(w http.ResponseWriter, sessionID string, err error) { + if errors.Is(err, session.ErrSessionNotFound) { + http.Error(w, fmt.Sprintf("session %q not found", sessionID), http.StatusNotFound) + return + } + + http.Error(w, fmt.Sprintf("failed to resolve session %q: %v", sessionID, err), http.StatusBadGateway) +} + +func (h *Handler) writeAttachError(w http.ResponseWriter, sessionID string, err error) { + if errors.Is(err, session.ErrSessionNotFound) { + http.Error(w, fmt.Sprintf("session %q not found", sessionID), http.StatusNotFound) + return + } + if errors.Is(err, session.ErrSessionStopped) { + http.Error(w, fmt.Sprintf("session %q is stopped", sessionID), http.StatusServiceUnavailable) + return + } + + http.Error(w, fmt.Sprintf("failed to attach terminal for session %q: %v", sessionID, err), http.StatusBadGateway) +} diff --git a/internal/terminal/handler_test.go b/internal/terminal/handler_test.go new file mode 100644 index 0000000..a6f9009 --- /dev/null +++ b/internal/terminal/handler_test.go @@ -0,0 +1,408 @@ +package terminal + +import ( + "bytes" + "context" + "errors" + "io" + "log/slog" + "net" + "net/http" + "net/http/httptest" + "net/url" + "sync" + "testing" + "time" + + "opencoderouter/internal/cache" + "opencoderouter/internal/session" + + "github.com/gorilla/websocket" +) + +type fakeSessionManager struct { + mu sync.Mutex + getFn func(id string) (*session.SessionHandle, error) + healthFn func(ctx context.Context, id string) (session.HealthStatus, error) + attachFn func(ctx context.Context, id string) (session.TerminalConn, error) + getCalls int + healthCalls int + attachCalls int +} + +func (m *fakeSessionManager) Create(context.Context, 
session.CreateOpts) (*session.SessionHandle, error) { + return nil, errors.New("not implemented") +} + +func (m *fakeSessionManager) Get(id string) (*session.SessionHandle, error) { + m.mu.Lock() + m.getCalls++ + fn := m.getFn + m.mu.Unlock() + if fn == nil { + return nil, session.ErrSessionNotFound + } + return fn(id) +} + +func (m *fakeSessionManager) List(session.SessionListFilter) ([]session.SessionHandle, error) { + return nil, errors.New("not implemented") +} + +func (m *fakeSessionManager) Stop(context.Context, string) error { + return errors.New("not implemented") +} + +func (m *fakeSessionManager) Restart(context.Context, string) (*session.SessionHandle, error) { + return nil, errors.New("not implemented") +} + +func (m *fakeSessionManager) Delete(context.Context, string) error { + return errors.New("not implemented") +} + +func (m *fakeSessionManager) AttachTerminal(ctx context.Context, id string) (session.TerminalConn, error) { + m.mu.Lock() + m.attachCalls++ + fn := m.attachFn + m.mu.Unlock() + if fn == nil { + return nil, session.ErrSessionNotFound + } + return fn(ctx, id) +} + +func (m *fakeSessionManager) Health(ctx context.Context, id string) (session.HealthStatus, error) { + m.mu.Lock() + m.healthCalls++ + fn := m.healthFn + m.mu.Unlock() + if fn == nil { + return session.HealthStatus{State: session.HealthStateUnknown}, nil + } + return fn(ctx, id) +} + +func (m *fakeSessionManager) calls() (getCalls, healthCalls, attachCalls int) { + m.mu.Lock() + defer m.mu.Unlock() + return m.getCalls, m.healthCalls, m.attachCalls +} + +type terminalResizeCall struct { + cols int + rows int +} + +type testTerminalConn struct { + net.Conn + + mu sync.Mutex + resizeCalls []terminalResizeCall + closeCalls int + closeOnce sync.Once + closeErr error +} + +type testScrollbackCache struct { + mu sync.Mutex + entries map[string][]cache.Entry +} + +func newTestScrollbackCache() *testScrollbackCache { + return &testScrollbackCache{entries: make(map[string][]cache.Entry)} 
+} + +func (c *testScrollbackCache) Append(sessionID string, entry cache.Entry) error { + c.mu.Lock() + defer c.mu.Unlock() + c.entries[sessionID] = append(c.entries[sessionID], entry) + return nil +} + +func (c *testScrollbackCache) Get(sessionID string, offset, limit int) ([]cache.Entry, error) { + c.mu.Lock() + defer c.mu.Unlock() + s := c.entries[sessionID] + if offset < 0 { + offset = 0 + } + if offset >= len(s) { + return []cache.Entry{}, nil + } + end := len(s) + if limit > 0 && offset+limit < end { + end = offset + limit + } + out := make([]cache.Entry, end-offset) + copy(out, s[offset:end]) + return out, nil +} + +func (c *testScrollbackCache) Trim(sessionID string, maxEntries int) error { + c.mu.Lock() + defer c.mu.Unlock() + s := c.entries[sessionID] + if maxEntries <= 0 { + c.entries[sessionID] = []cache.Entry{} + return nil + } + if len(s) <= maxEntries { + return nil + } + c.entries[sessionID] = append([]cache.Entry(nil), s[len(s)-maxEntries:]...) + return nil +} + +func (c *testScrollbackCache) Clear(sessionID string) error { + c.mu.Lock() + defer c.mu.Unlock() + delete(c.entries, sessionID) + return nil +} + +func (c *testScrollbackCache) Close() error { return nil } + +func newTerminalConnPair() (*testTerminalConn, net.Conn) { + client, backend := net.Pipe() + return &testTerminalConn{Conn: client}, backend +} + +func (c *testTerminalConn) Resize(cols, rows int) error { + c.mu.Lock() + c.resizeCalls = append(c.resizeCalls, terminalResizeCall{cols: cols, rows: rows}) + c.mu.Unlock() + return nil +} + +func (c *testTerminalConn) Close() error { + c.mu.Lock() + c.closeCalls++ + c.mu.Unlock() + c.closeOnce.Do(func() { + c.closeErr = c.Conn.Close() + }) + return c.closeErr +} + +func (c *testTerminalConn) resizeSnapshot() []terminalResizeCall { + c.mu.Lock() + defer c.mu.Unlock() + out := make([]terminalResizeCall, len(c.resizeCalls)) + copy(out, c.resizeCalls) + return out +} + +func (c *testTerminalConn) closeCallCount() int { + c.mu.Lock() + defer 
c.mu.Unlock() + return c.closeCalls +} + +func testTerminalLogger() *slog.Logger { + return slog.New(slog.NewTextHandler(io.Discard, &slog.HandlerOptions{Level: slog.LevelError})) +} + +func wsURL(serverURL, path string) string { + u, _ := url.Parse(serverURL) + u.Scheme = "ws" + u.Path = path + return u.String() +} + +func waitForCondition(t *testing.T, timeout time.Duration, condition func() bool) { + t.Helper() + deadline := time.Now().Add(timeout) + for time.Now().Before(deadline) { + if condition() { + return + } + time.Sleep(10 * time.Millisecond) + } + t.Fatal("condition not met before timeout") +} + +func TestTerminalHandlerInvalidSession(t *testing.T) { + mgr := &fakeSessionManager{ + getFn: func(id string) (*session.SessionHandle, error) { + return nil, session.ErrSessionNotFound + }, + } + + h := NewHandler(HandlerConfig{SessionManager: mgr, Logger: testTerminalLogger()}) + + w := httptest.NewRecorder() + req := httptest.NewRequest(http.MethodGet, "/ws/terminal/missing-session", nil) + req.Header.Set("Connection", "Upgrade") + req.Header.Set("Upgrade", "websocket") + + h.ServeHTTP(w, req) + + if w.Code != http.StatusNotFound { + t.Fatalf("status=%d want=%d", w.Code, http.StatusNotFound) + } +} + +func TestTerminalHandlerUnhealthySession(t *testing.T) { + mgr := &fakeSessionManager{ + getFn: func(id string) (*session.SessionHandle, error) { + return &session.SessionHandle{ID: id, Status: session.SessionStatusActive}, nil + }, + healthFn: func(context.Context, string) (session.HealthStatus, error) { + return session.HealthStatus{State: session.HealthStateUnhealthy, Error: "daemon not reachable"}, nil + }, + } + + h := NewHandler(HandlerConfig{SessionManager: mgr, Logger: testTerminalLogger()}) + + w := httptest.NewRecorder() + req := httptest.NewRequest(http.MethodGet, "/ws/terminal/s-1", nil) + req.Header.Set("Connection", "Upgrade") + req.Header.Set("Upgrade", "websocket") + + h.ServeHTTP(w, req) + + if w.Code != http.StatusServiceUnavailable { + 
t.Fatalf("status=%d want=%d", w.Code, http.StatusServiceUnavailable) + } + _, _, attachCalls := mgr.calls() + if attachCalls != 0 { + t.Fatalf("attach calls=%d want=0", attachCalls) + } +} + +func TestTerminalHandlerRequiresUpgrade(t *testing.T) { + h := NewHandler(HandlerConfig{SessionManager: &fakeSessionManager{}, Logger: testTerminalLogger()}) + + w := httptest.NewRecorder() + req := httptest.NewRequest(http.MethodGet, "/ws/terminal/s-1", nil) + + h.ServeHTTP(w, req) + + if w.Code != http.StatusBadRequest { + t.Fatalf("status=%d want=%d", w.Code, http.StatusBadRequest) + } +} + +func TestTerminalHandlerInvalidRoute(t *testing.T) { + h := NewHandler(HandlerConfig{SessionManager: &fakeSessionManager{}, Logger: testTerminalLogger()}) + + w := httptest.NewRecorder() + req := httptest.NewRequest(http.MethodGet, "/ws/terminal/session-1/extra", nil) + req.Header.Set("Connection", "Upgrade") + req.Header.Set("Upgrade", "websocket") + + h.ServeHTTP(w, req) + + if w.Code != http.StatusBadRequest { + t.Fatalf("status=%d want=%d", w.Code, http.StatusBadRequest) + } +} + +func TestTerminalHandlerStreamingAndResize(t *testing.T) { + terminalConn, backendConn := newTerminalConnPair() + defer backendConn.Close() + scrollback := newTestScrollbackCache() + + mgr := &fakeSessionManager{ + getFn: func(id string) (*session.SessionHandle, error) { + return &session.SessionHandle{ID: id, Status: session.SessionStatusActive}, nil + }, + healthFn: func(context.Context, string) (session.HealthStatus, error) { + return session.HealthStatus{State: session.HealthStateHealthy}, nil + }, + attachFn: func(context.Context, string) (session.TerminalConn, error) { + return terminalConn, nil + }, + } + + bridge := NewBridge(BridgeConfig{Logger: testTerminalLogger(), PingInterval: time.Hour, ScrollbackCache: scrollback}) + h := NewHandler(HandlerConfig{SessionManager: mgr, Bridge: bridge, Logger: testTerminalLogger()}) + + mux := http.NewServeMux() + h.Register(mux) + srv := 
httptest.NewServer(mux) + defer srv.Close() + + client, resp, err := websocket.DefaultDialer.Dial(wsURL(srv.URL, "/ws/terminal/session-1"), nil) + if err != nil { + status := 0 + if resp != nil { + status = resp.StatusCode + } + t.Fatalf("dial websocket failed: %v (status=%d)", err, status) + } + defer client.Close() + + inbound := []byte("ls -la\n") + if err := client.WriteMessage(websocket.BinaryMessage, inbound); err != nil { + t.Fatalf("write binary websocket message: %v", err) + } + + if err := backendConn.SetReadDeadline(time.Now().Add(2 * time.Second)); err != nil { + t.Fatalf("set backend read deadline: %v", err) + } + gotInbound := make([]byte, len(inbound)) + if _, err := io.ReadFull(backendConn, gotInbound); err != nil { + t.Fatalf("read backend inbound data: %v", err) + } + if !bytes.Equal(gotInbound, inbound) { + t.Fatalf("backend inbound=%q want=%q", gotInbound, inbound) + } + + if err := client.WriteMessage(websocket.TextMessage, []byte(`{"type":"resize","cols":120,"rows":40}`)); err != nil { + t.Fatalf("write resize control message: %v", err) + } + + waitForCondition(t, time.Second, func() bool { + calls := terminalConn.resizeSnapshot() + if len(calls) != 1 { + return false + } + return calls[0].cols == 120 && calls[0].rows == 40 + }) + + backendPayload := []byte("\u001b[32mready\u001b[0m\r\n") + if err := backendConn.SetWriteDeadline(time.Now().Add(2 * time.Second)); err != nil { + t.Fatalf("set backend write deadline: %v", err) + } + if _, err := backendConn.Write(backendPayload); err != nil { + t.Fatalf("write backend payload: %v", err) + } + + msgType, msgPayload, err := client.ReadMessage() + if err != nil { + t.Fatalf("read websocket payload: %v", err) + } + if msgType != websocket.BinaryMessage { + t.Fatalf("message type=%d want=%d", msgType, websocket.BinaryMessage) + } + if !bytes.Equal(msgPayload, backendPayload) { + t.Fatalf("websocket payload=%q want=%q", msgPayload, backendPayload) + } + + waitForCondition(t, 2*time.Second, func() bool 
{ + entries, err := scrollback.Get("session-1", 0, 0) + if err != nil || len(entries) == 0 { + return false + } + last := entries[len(entries)-1] + return last.Type == cache.EntryTypeTerminalOutput && bytes.Equal(last.Content, backendPayload) + }) + + if err := client.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(websocket.CloseNormalClosure, "")); err != nil { + t.Fatalf("send close frame: %v", err) + } + if err := client.Close(); err != nil { + t.Fatalf("close websocket client: %v", err) + } + + waitForCondition(t, 2*time.Second, func() bool { + return terminalConn.closeCallCount() > 0 + }) + + getCalls, healthCalls, attachCalls := mgr.calls() + if getCalls == 0 || healthCalls == 0 || attachCalls == 0 { + t.Fatalf("expected manager calls >0, got get=%d health=%d attach=%d", getCalls, healthCalls, attachCalls) + } +} diff --git a/internal/tui/app.go b/internal/tui/app.go index 5a3d0c4..b12b398 100644 --- a/internal/tui/app.go +++ b/internal/tui/app.go @@ -12,14 +12,19 @@ import ( "path/filepath" "strconv" "strings" + "sync" "time" "unicode" + "opencoderouter/internal/model" + "opencoderouter/internal/registry" + "opencoderouter/internal/scanner" "opencoderouter/internal/tui/components" "opencoderouter/internal/tui/config" "opencoderouter/internal/tui/discovery" "opencoderouter/internal/tui/keys" - "opencoderouter/internal/tui/model" + "opencoderouter/internal/tui/local" + tuimodel "opencoderouter/internal/tui/model" "opencoderouter/internal/tui/probe" "opencoderouter/internal/tui/session" "opencoderouter/internal/tui/theme" @@ -38,6 +43,107 @@ type Prober interface { ProbeHosts(ctx context.Context, hosts []model.Host) ([]model.Host, error) } +type localHostProvider interface { + Start() + Stop() + LocalHost() (*model.Host, error) +} + +type scannerLocalHostProvider struct { + adapter *local.Adapter + runScanner func(context.Context) + + mu sync.Mutex + cancel context.CancelFunc + started bool +} + +func newScannerLocalHostProvider(cfg 
config.Config, logger *slog.Logger) localHostProvider { + localLogger := logger.With("component", "local_scanner") + staleAfter := localRegistryStaleAfter(cfg.Polling.Interval) + localRegistry := registry.New(staleAfter, localLogger) + adapter := local.NewAdapter(localRegistry, cfg.Display.ActiveThreshold, cfg.Display.IdleThreshold) + + scanInterval := cfg.Polling.Interval + if scanInterval <= 0 { + scanInterval = 30 * time.Second + } + + concurrency := cfg.Polling.MaxParallel + if concurrency <= 0 { + concurrency = 10 + } + + probeTimeout := minDuration(cfg.Polling.Timeout, localScannerProbeTimeout) + if probeTimeout <= 0 { + probeTimeout = localScannerProbeTimeout + } + + localScanner := scanner.New( + localRegistry, + localScannerPortStart, + localScannerPortEnd, + scanInterval, + concurrency, + probeTimeout, + localLogger, + ) + + return &scannerLocalHostProvider{ + adapter: adapter, + runScanner: localScanner.Run, + } +} + +func (p *scannerLocalHostProvider) Start() { + if p == nil { + return + } + + p.mu.Lock() + if p.started || p.runScanner == nil { + p.mu.Unlock() + return + } + + ctx, cancel := context.WithCancel(context.Background()) + p.cancel = cancel + p.started = true + runScanner := p.runScanner + p.mu.Unlock() + + go runScanner(ctx) +} + +func (p *scannerLocalHostProvider) Stop() { + if p == nil { + return + } + + p.mu.Lock() + cancel := p.cancel + p.cancel = nil + p.started = false + p.mu.Unlock() + + if cancel != nil { + cancel() + } +} + +func (p *scannerLocalHostProvider) LocalHost() (*model.Host, error) { + if p == nil || p.adapter == nil { + return nil, local.ErrNoLocalBackends + } + + host, err := p.adapter.GetLocalHost() + if err != nil { + return nil, err + } + + return &host, nil +} + type appView int const ( @@ -62,8 +168,9 @@ type AppModel struct { logger *slog.Logger program *tea.Program - discovery Discoverer - prober Prober + discovery Discoverer + prober Prober + localHostProvider localHostProvider header components.HeaderBar tree 
components.SessionTreeView @@ -88,8 +195,11 @@ type AppModel struct { } const ( - errorToastTimeout = 5 * time.Second - maxSanitizedErrorRunes = 320 + errorToastTimeout = 5 * time.Second + maxSanitizedErrorRunes = 320 + localScannerPortStart = 49152 + localScannerPortEnd = 65535 + localScannerProbeTimeout = 800 * time.Millisecond ) // NewApp constructs the root model with injected services. @@ -113,22 +223,23 @@ func NewApp(cfg config.Config, discoverer Discoverer, proberSvc Prober, logger * } app := &AppModel{ - cfg: cfg, - theme: th, - keys: keyMap, - logger: appLogger, - discovery: discoverer, - prober: proberSvc, - header: components.NewHeaderBar(th, cfg.Polling.Interval), - tree: components.NewSessionTreeView(th), - inspect: components.NewInspectPanel(th), - footer: components.NewFooterHelpBar(keyMap, th), - toast: components.NewInlineToast(th), - modal: components.NewModalLayer(th), - spinner: components.NewBrailleSpinner(cfg.Display.Animation), - showInspect: true, - activeView: viewTree, - sessionManager: session.NewManager(nil, logger, buildSSHControlOpts(cfg.SSH)), + cfg: cfg, + theme: th, + keys: keyMap, + logger: appLogger, + discovery: discoverer, + prober: proberSvc, + localHostProvider: newScannerLocalHostProvider(cfg, logger), + header: components.NewHeaderBar(th, cfg.Polling.Interval), + tree: components.NewSessionTreeView(th), + inspect: components.NewInspectPanel(th), + footer: components.NewFooterHelpBar(keyMap, th), + toast: components.NewInlineToast(th), + modal: components.NewModalLayer(th), + spinner: components.NewBrailleSpinner(cfg.Display.Animation), + showInspect: true, + activeView: viewTree, + sessionManager: session.NewManager(nil, logger, buildSSHControlOpts(cfg.SSH)), } app.tree.SetActiveSessionLookup(func(sessionID string) bool { if strings.TrimSpace(sessionID) == "" { @@ -165,6 +276,10 @@ func (m *AppModel) SetProgram(p *tea.Program) { // Init starts animation and the first refresh cycle. 
func (m *AppModel) Init() tea.Cmd { + if m.localHostProvider != nil { + go m.localHostProvider.Start() + } + m.nextRefresh = time.Now().Add(m.cfg.Polling.Interval) m.header.SetRefreshDeadline(m.nextRefresh) m.logger.Info("app init", @@ -177,7 +292,7 @@ func (m *AppModel) Init() tea.Cmd { m.header.Init(), m.spinner.Init(), tickCmd(), - m.refreshCmd(), + m.discoverCmd(), ) } @@ -197,27 +312,90 @@ func (m *AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) { cmds = append(cmds, cmd) } - case model.TickMsg: + case tuimodel.TickMsg: refreshDue := !m.nextRefresh.IsZero() && !typed.Now.Before(m.nextRefresh) m.logger.Debug("update message", "message_type", "TickMsg", "refresh_due", refreshDue) if refreshDue { - cmds = append(cmds, m.refreshCmd()) + cmds = append(cmds, m.discoverCmd()) } m.header.SetRefreshDeadline(m.nextRefresh) cmds = append(cmds, tickCmd()) - case model.ProbeResultMsg: + case tuimodel.DiscoveryResultMsg: + m.logger.Debug("update message", "message_type", "DiscoveryResultMsg", "hosts", len(typed.Hosts), "has_error", typed.Err != nil) + m.hosts = m.withLocalHost(typed.Hosts) + m.tree.SetHosts(m.hosts) + m.header.SetStats(calculateFleetStats(m.hosts)) + m.syncInspectSelection() + + if typed.Err != nil { + if toastCmd := m.showErrorToast(typed.Err); toastCmd != nil { + cmds = append(cmds, toastCmd) + } + } + + if len(typed.Hosts) == 0 { + m.nextRefresh = time.Now().Add(m.cfg.Polling.Interval) + m.header.SetRefreshDeadline(m.nextRefresh) + if len(m.hosts) == 0 { + guidance := "No SSH hosts found. Check ~/.ssh/config and Hosts.Include config." 
+ if toastCmd := m.toast.Show(guidance, components.ToastSeverityWarning, errorToastTimeout); toastCmd != nil { + cmds = append(cmds, toastCmd) + } + } + m.resize(m.width, m.height) + break + } + + cmds = append(cmds, m.probeCmd(typed.Hosts)) + + case tuimodel.ProbeResultMsg: m.logger.Debug("update message", "message_type", "ProbeResultMsg", "hosts", len(typed.Hosts), "has_error", typed.Err != nil) if toastCmd := m.applyProbeResult(typed); toastCmd != nil { cmds = append(cmds, toastCmd) } - case model.TerminalOutputMsg: + case tuimodel.SSHErrorMsg: + if typed.Err != nil { + hostName := strings.TrimSpace(typed.Host) + if hostName == "" { + hostName = "unknown-host" + } + if toastCmd := m.showErrorToast(fmt.Errorf("ssh probe failed for %s: %w", hostName, typed.Err)); toastCmd != nil { + cmds = append(cmds, toastCmd) + } + } + + case tuimodel.TransportPreflightMsg: + if len(typed.Hosts) > 0 { + indexByName := make(map[string]int, len(m.hosts)) + for i := range m.hosts { + indexByName[m.hosts[i].Name] = i + } + for _, host := range typed.Hosts { + idx, ok := indexByName[host.Name] + if !ok { + continue + } + m.hosts[idx].Transport = host.Transport + m.hosts[idx].TransportError = host.TransportError + m.hosts[idx].BlockedBy = append([]string(nil), host.BlockedBy...) 
+ } + m.tree.SetHosts(m.hosts) + m.syncInspectSelection() + } + if typed.Err != nil { + if toastCmd := m.showErrorToast(fmt.Errorf("transport preflight failed: %w", typed.Err)); toastCmd != nil { + cmds = append(cmds, toastCmd) + } + } + + case tuimodel.TerminalOutputMsg: if typed.SessionID == m.activeSessionID { m.logger.Debug("terminal output", "session_id", typed.SessionID, "bytes", len(typed.Data)) } - case model.TerminalInputForwardedMsg: + case tuimodel.TerminalInputForwardedMsg: if typed.Err != nil { m.logger.Error("terminal input forwarding failed", "session_id", typed.SessionID, "error", sanitizeError(typed.Err)) m.ensureSessionManager().CleanupClosed() @@ -230,7 +408,7 @@ func (m *AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) { } } - case model.TerminalClosedMsg: + case tuimodel.TerminalClosedMsg: m.logger.Info("terminal closed", "session_id", typed.SessionID, "active_session_id", m.activeSessionID, "has_error", typed.Err != nil) m.ensureSessionManager().CleanupClosed() if typed.SessionID == m.activeSessionID { @@ -249,7 +427,7 @@ func (m *AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) { } } - case model.AttachFinishedMsg: + case tuimodel.AttachFinishedMsg: if typed.Err != nil { m.logger.Info("attach finished", "status", "error") m.logger.Error("session attach failed", "error", sanitizeError(typed.Err)) @@ -261,24 +439,24 @@ func (m *AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) { cmds = append(cmds, toastCmd) } } - cmds = append(cmds, m.refreshCmd()) + cmds = append(cmds, m.discoverCmd()) - case model.ModalConfirmCreateMsg: + case tuimodel.ModalConfirmCreateMsg: if host := m.findHostByName(typed.HostName); host != nil { cmds = append(cmds, m.createSessionCmd(*host, typed.Directory)) } - case model.ModalConfirmNewDirMsg: + case tuimodel.ModalConfirmNewDirMsg: if host := m.findHostByName(typed.HostName); host != nil { cmds = append(cmds, m.createSessionCmd(*host, typed.Directory)) } - case model.ModalConfirmGitCloneMsg: + case 
tuimodel.ModalConfirmGitCloneMsg: if host := m.findHostByName(typed.HostName); host != nil { cmds = append(cmds, m.gitCloneSessionCmd(*host, typed.GitURL)) } - case model.ModalConfirmKillMsg: + case tuimodel.ModalConfirmKillMsg: if host := m.findHostByName(typed.HostName); host != nil { if manager := m.ensureSessionManager(); manager.Get(typed.SessionID) != nil { manager.Remove(typed.SessionID) @@ -291,7 +469,7 @@ func (m *AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) { cmds = append(cmds, m.killSessionCmd(*host, typed.SessionID, typed.Directory, typed.SaveContext)) } - case model.ModalConfirmReloadMsg: + case tuimodel.ModalConfirmReloadMsg: if m.reloadInProgress { break } @@ -307,7 +485,7 @@ func (m *AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) { cmds = append(cmds, m.reloadSessionsCmd(*host, directory)) } - case model.CreateSessionFinishedMsg: + case tuimodel.CreateSessionFinishedMsg: if typed.Err != nil { m.logger.Info("create session finished", "status", "error") m.logger.Error("session create failed", "error", sanitizeError(typed.Err)) @@ -319,9 +497,9 @@ func (m *AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) { cmds = append(cmds, toastCmd) } } - cmds = append(cmds, m.refreshCmd()) + cmds = append(cmds, m.discoverCmd()) - case model.KillSessionFinishedMsg: + case tuimodel.KillSessionFinishedMsg: if typed.Err != nil { m.logger.Info("delete session finished", "status", "error") m.logger.Error("session delete failed", "error", sanitizeError(typed.Err)) @@ -342,9 +520,9 @@ func (m *AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) { } m.resize(m.width, m.height) } - cmds = append(cmds, m.refreshCmd()) + cmds = append(cmds, m.discoverCmd()) - case model.ReloadSessionsFinishedMsg: + case tuimodel.ReloadSessionsFinishedMsg: m.reloadInProgress = false if typed.Err != nil { m.logger.Info("reload sessions finished", "status", "error") @@ -363,9 +541,9 @@ func (m *AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) { } m.resize(m.width, m.height) } - cmds 
= append(cmds, m.refreshCmd()) + cmds = append(cmds, m.discoverCmd()) - case model.GitCloneFinishedMsg: + case tuimodel.GitCloneFinishedMsg: if typed.Err != nil { m.logger.Info("git clone finished", "status", "error") m.logger.Error("session git clone failed", "error", sanitizeError(typed.Err)) @@ -377,7 +555,7 @@ func (m *AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) { cmds = append(cmds, toastCmd) } } - cmds = append(cmds, m.refreshCmd()) + cmds = append(cmds, m.discoverCmd()) case tea.KeyPressMsg: if m.activeView == viewTerminal { @@ -403,7 +581,7 @@ func (m *AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) { sessionID := m.activeSessionID cmds = append(cmds, func() tea.Msg { if err := terminal.WriteInput(input); err != nil { - return model.TerminalInputForwardedMsg{SessionID: sessionID, Err: err} + return tuimodel.TerminalInputForwardedMsg{SessionID: sessionID, Err: err} } return nil }) @@ -427,11 +605,12 @@ func (m *AppModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) { switch { case keys.Matches(typed.String(), m.keys.Quit): m.logger.Debug("update message", "message_type", "KeyPressMsg", "category", "quit") + m.stopLocalHostProvider() m.ensureSessionManager().Shutdown() return m, tea.Quit case keys.Matches(typed.String(), m.keys.Refresh): keyCategory = "refresh" - cmds = append(cmds, m.refreshCmd()) + cmds = append(cmds, m.discoverCmd()) case keys.Matches(typed.String(), m.keys.Search): keyCategory = "search" m.header.FocusSearch() @@ -584,51 +763,63 @@ func (m *AppModel) View() tea.View { return v } -func (m *AppModel) refreshCmd() tea.Cmd { +func (m *AppModel) discoverCmd() tea.Cmd { discoverer := m.discovery - proberSvc := m.prober timeout := m.cfg.Polling.Timeout return func() tea.Msg { startedAt := time.Now() - m.logger.Info("refresh started", "timeout", timeout) + m.logger.Info("discovery started", "timeout", timeout) ctx, cancel := context.WithTimeout(context.Background(), timeout) defer cancel() - hosts, discoverErr := discoverer.Discover(ctx) - 
probed, probeErr := proberSvc.ProbeHosts(ctx, hosts) + hosts, err := discoverer.Discover(ctx) - resultErr := probeErr - if discoverErr != nil { - resultErr = errors.Join(discoverErr, probeErr) + elapsed := time.Since(startedAt) + m.logger.Info("discovery complete", "hosts", len(hosts), "duration", elapsed, "has_error", err != nil) + if err != nil { + m.logger.Error("discovery failed", "error", sanitizeError(err)) } + return tuimodel.DiscoveryResultMsg{Hosts: hosts, Err: err} + } +} + +func (m *AppModel) probeCmd(hosts []model.Host) tea.Cmd { + proberSvc := m.prober + timeout := m.cfg.Polling.Timeout + probeHosts := append([]model.Host(nil), hosts...) + + return func() tea.Msg { + startedAt := time.Now() + m.logger.Info("probe started", "hosts", len(probeHosts), "timeout", timeout) + + ctx, cancel := context.WithTimeout(context.Background(), timeout) + defer cancel() + + probed, probeErr := proberSvc.ProbeHosts(ctx, probeHosts) + elapsed := time.Since(startedAt) - errCount := countRefreshErrors(probed, resultErr) - m.logger.Info("refresh complete", "hosts", len(probed), "duration", elapsed, "errors", errCount) - if resultErr != nil { - m.logger.Error( - "refresh failed", - "error", sanitizeError(resultErr), - "discover_error", discoverErr != nil, - "probe_error", probeErr != nil, - ) - } - - return model.ProbeResultMsg{ + errCount := countRefreshErrors(probed, probeErr) + m.logger.Info("probe complete", "hosts", len(probed), "duration", elapsed, "errors", errCount) + if probeErr != nil { + m.logger.Error("probe failed", "error", sanitizeError(probeErr)) + } + + return tuimodel.ProbeResultMsg{ Hosts: probed, - Err: resultErr, + Err: probeErr, RefreshedAt: time.Now(), } } } -func (m *AppModel) applyProbeResult(msg model.ProbeResultMsg) tea.Cmd { +func (m *AppModel) applyProbeResult(msg tuimodel.ProbeResultMsg) tea.Cmd { hostsBefore := len(m.hosts) errorsBefore := countHostErrors(m.hosts) - m.hosts = append([]model.Host(nil), msg.Hosts...) 
+ m.hosts = m.withLocalHost(msg.Hosts) m.tree.SetHosts(m.hosts) stats := calculateFleetStats(m.hosts) m.header.SetStats(stats) @@ -728,12 +919,78 @@ func calculateFleetStats(hosts []model.Host) components.FleetStats { return stats } +func (m *AppModel) withLocalHost(hosts []model.Host) []model.Host { + merged := append([]model.Host(nil), hosts...) + if m == nil || m.localHostProvider == nil { + return merged + } + + localHost, err := m.localHostProvider.LocalHost() + if err != nil { + if !errors.Is(err, local.ErrNoLocalBackends) { + m.logger.Debug("local host snapshot failed", "error", sanitizeError(err)) + } + return merged + } + if localHost == nil { + return merged + } + + localName := strings.TrimSpace(localHost.Name) + if localName == "" { + return merged + } + + filtered := make([]model.Host, 0, len(merged)) + for _, host := range merged { + if strings.EqualFold(strings.TrimSpace(host.Name), localName) { + continue + } + filtered = append(filtered, host) + } + + return append([]model.Host{*localHost}, filtered...) 
+} + +func (m *AppModel) stopLocalHostProvider() { + if m == nil || m.localHostProvider == nil { + return + } + m.localHostProvider.Stop() +} + +func localRegistryStaleAfter(scanInterval time.Duration) time.Duration { + if scanInterval <= 0 { + return 30 * time.Second + } + + staleAfter := scanInterval * 2 + if staleAfter < 30*time.Second { + return 30 * time.Second + } + + return staleAfter +} + func tickCmd() tea.Cmd { return tea.Tick(time.Second, func(t time.Time) tea.Msg { - return model.TickMsg{Now: t} + return tuimodel.TickMsg{Now: t} }) } +func minDuration(a, b time.Duration) time.Duration { + if a <= 0 { + return b + } + if b <= 0 { + return a + } + if a < b { + return a + } + return b +} + func maxInt(a, b int) int { if a > b { return a @@ -1033,7 +1290,7 @@ func (m *AppModel) createSessionCmd(host model.Host, directory string) tea.Cmd { c := exec.Command("ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=10", "-t", host.Name, remoteCmd) return tea.ExecProcess(c, func(err error) tea.Msg { - return model.CreateSessionFinishedMsg{Err: err} + return tuimodel.CreateSessionFinishedMsg{Err: err} }) } @@ -1105,32 +1362,32 @@ fi`, if saveContext { exportPath, err := defaultSessionExportPath(host.Name, sessionID) if err != nil { - return model.KillSessionFinishedMsg{Err: err} + return tuimodel.KillSessionFinishedMsg{Err: err} } exportJSON, err := runSSHCommand(host.Name, exportRemoteCmd) if err != nil { - return model.KillSessionFinishedMsg{Err: fmt.Errorf("export session %s: %w", sessionID, err)} + return tuimodel.KillSessionFinishedMsg{Err: fmt.Errorf("export session %s: %w", sessionID, err)} } if strings.TrimSpace(string(exportJSON)) == "" { - return model.KillSessionFinishedMsg{Err: fmt.Errorf("export session %s: empty export output", sessionID)} + return tuimodel.KillSessionFinishedMsg{Err: fmt.Errorf("export session %s: empty export output", sessionID)} } if err := os.WriteFile(exportPath, exportJSON, 0o600); err != nil { - return 
model.KillSessionFinishedMsg{Err: fmt.Errorf("save export %q: %w", exportPath, err)} + return tuimodel.KillSessionFinishedMsg{Err: fmt.Errorf("save export %q: %w", exportPath, err)} } savedExportPath = exportPath } if _, err := runSSHCommand(host.Name, deleteRemoteCmd); err != nil { - return model.KillSessionFinishedMsg{Err: fmt.Errorf("delete session %s: %w", sessionID, err), SavedExportPath: savedExportPath} + return tuimodel.KillSessionFinishedMsg{Err: fmt.Errorf("delete session %s: %w", sessionID, err), SavedExportPath: savedExportPath} } if _, err := runSSHCommand(host.Name, cleanupRemoteCmd); err != nil { - return model.KillSessionFinishedMsg{Err: fmt.Errorf("verify remote session process cleanup for %s: %w", sessionID, err), SavedExportPath: savedExportPath} + return tuimodel.KillSessionFinishedMsg{Err: fmt.Errorf("verify remote session process cleanup for %s: %w", sessionID, err), SavedExportPath: savedExportPath} } - return model.KillSessionFinishedMsg{SavedExportPath: savedExportPath} + return tuimodel.KillSessionFinishedMsg{SavedExportPath: savedExportPath} } } @@ -1247,7 +1504,7 @@ printf 'reload:remaining:%%s\n' "$remaining"`, directory) err = fmt.Errorf("%d process(es) remain after reload kill sweep", remainingCount) } - return model.ReloadSessionsFinishedMsg{ + return tuimodel.ReloadSessionsFinishedMsg{ HostName: host.Name, Directory: directory, Err: err, @@ -1272,7 +1529,7 @@ func (m *AppModel) gitCloneSessionCmd(host model.Host, gitURL string) tea.Cmd { c := exec.Command("ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=10", "-t", host.Name, remoteCmd) return tea.ExecProcess(c, func(err error) tea.Msg { - return model.GitCloneFinishedMsg{Err: err} + return tuimodel.GitCloneFinishedMsg{Err: err} }) } diff --git a/internal/tui/app_terminal_test.go b/internal/tui/app_terminal_test.go index f006061..27a4d61 100644 --- a/internal/tui/app_terminal_test.go +++ b/internal/tui/app_terminal_test.go @@ -10,8 +10,9 @@ import ( "sync" "testing" + 
"opencoderouter/internal/model" "opencoderouter/internal/tui/config" - "opencoderouter/internal/tui/model" + tuimodel "opencoderouter/internal/tui/model" "opencoderouter/internal/tui/session" tea "charm.land/bubbletea/v2" @@ -413,7 +414,7 @@ func TestAppTerminalClosedMessageSwitchesToTreeAndCleansManager(t *testing.T) { } _ = terminal.Close() - _, cmd := app.Update(model.TerminalClosedMsg{SessionID: sessionData.ID}) + _, cmd := app.Update(tuimodel.TerminalClosedMsg{SessionID: sessionData.ID}) if cmd != nil { _ = cmd } diff --git a/internal/tui/app_test.go b/internal/tui/app_test.go index f6ad106..2399eef 100644 --- a/internal/tui/app_test.go +++ b/internal/tui/app_test.go @@ -8,12 +8,14 @@ import ( "os" "path/filepath" "strings" + "sync" "testing" "time" + "opencoderouter/internal/model" "opencoderouter/internal/tui/components" "opencoderouter/internal/tui/config" - "opencoderouter/internal/tui/model" + tuimodel "opencoderouter/internal/tui/model" tea "charm.land/bubbletea/v2" ) @@ -51,11 +53,111 @@ func (fakeProber) ProbeHosts(_ context.Context, hosts []model.Host) ([]model.Hos return []model.Host{host}, nil } +type trackingProber struct { + hostsToReturn []model.Host + err error + + calls int + lastHosts []model.Host +} + +func (p *trackingProber) ProbeHosts(_ context.Context, hosts []model.Host) ([]model.Host, error) { + p.calls++ + p.lastHosts = append([]model.Host(nil), hosts...) 
+ if p.hostsToReturn != nil || p.err != nil { + return append([]model.Host(nil), p.hostsToReturn...), p.err + } + return append([]model.Host(nil), hosts...), nil +} + +type fakeLocalHostProvider struct { + mu sync.Mutex + + host *model.Host + err error + + startCalls int + stopCalls int +} + +func (f *fakeLocalHostProvider) Start() { + f.mu.Lock() + f.startCalls++ + f.mu.Unlock() +} + +func (f *fakeLocalHostProvider) Stop() { + f.mu.Lock() + f.stopCalls++ + f.mu.Unlock() +} + +func (f *fakeLocalHostProvider) LocalHost() (*model.Host, error) { + f.mu.Lock() + defer f.mu.Unlock() + if f.host == nil { + return nil, f.err + } + hostCopy := *f.host + return &hostCopy, f.err +} + +type blockingLocalHostProvider struct { + started chan struct{} + release chan struct{} +} + +func newBlockingLocalHostProvider() *blockingLocalHostProvider { + return &blockingLocalHostProvider{ + started: make(chan struct{}), + release: make(chan struct{}), + } +} + +func (b *blockingLocalHostProvider) Start() { + select { + case <-b.started: + default: + close(b.started) + } + <-b.release +} + +func (b *blockingLocalHostProvider) Stop() { + select { + case <-b.release: + default: + close(b.release) + } +} + +func (b *blockingLocalHostProvider) LocalHost() (*model.Host, error) { + return nil, nil +} + +func localHostFixture() model.Host { + return model.Host{ + Name: "localhost", + Label: "localhost (local)", + Status: model.HostStatusOnline, + Projects: []model.Project{{ + Name: "alpha-local", + Sessions: []model.Session{{ + ID: "local-1", + Project: "alpha-local", + Title: "local session", + Activity: model.ActivityActive, + }}, + }}, + } +} + func TestAppSmoke(t *testing.T) { cfg := config.DefaultConfig() cfg.Display.Animation = false app := NewApp(cfg, fakeDiscoverer{hosts: []model.Host{{Name: "dev-1", Label: "dev-1"}}}, fakeProber{}, nil) + app.localHostProvider = &fakeLocalHostProvider{} initCmd := app.Init() if initCmd == nil { t.Fatal("expected init command") @@ -94,6 +196,7 @@ func 
TestNewApp_LoggerPropagated(t *testing.T) { logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{Level: slog.LevelDebug})) app := NewApp(cfg, fakeDiscoverer{}, fakeProber{}, logger) + app.localHostProvider = &fakeLocalHostProvider{} if app.logger == nil { t.Fatal("expected app logger to be initialized") } @@ -104,6 +207,377 @@ func TestNewApp_LoggerPropagated(t *testing.T) { } } +func TestInit_DispatchesDiscoveryCmd(t *testing.T) { + cfg := config.DefaultConfig() + cfg.Display.Animation = false + + app := NewApp(cfg, fakeDiscoverer{hosts: []model.Host{{Name: "dev-1", Label: "dev-1"}}}, fakeProber{}, nil) + app.localHostProvider = &fakeLocalHostProvider{} + initCmd := app.Init() + if initCmd == nil { + t.Fatal("expected init command") + } + + msg, ok := runCmdWithTimeout(initCmd, 200*time.Millisecond) + if !ok { + t.Fatal("init command timed out") + } + + batch, ok := msg.(tea.BatchMsg) + if !ok { + t.Fatalf("init message type = %T, want tea.BatchMsg", msg) + } + if !batchContainsDiscoveryResult(batch) { + t.Fatal("expected init batch to include discoverCmd") + } + if batchContainsProbeResult(batch) { + t.Fatal("init should not dispatch probe result directly") + } +} + +func TestInit_StartsLocalProviderWithoutBlocking(t *testing.T) { + t.Parallel() + + cfg := config.DefaultConfig() + cfg.Display.Animation = false + + app := NewApp(cfg, fakeDiscoverer{}, fakeProber{}, nil) + blockingProvider := newBlockingLocalHostProvider() + app.localHostProvider = blockingProvider + + initDone := make(chan tea.Cmd, 1) + go func() { + initDone <- app.Init() + }() + + select { + case cmd := <-initDone: + if cmd == nil { + t.Fatal("expected init command") + } + case <-time.After(200 * time.Millisecond): + t.Fatal("Init blocked while local provider start was waiting") + } + + select { + case <-blockingProvider.started: + case <-time.After(200 * time.Millisecond): + t.Fatal("expected local provider Start to be invoked asynchronously") + } + + blockingProvider.Stop() +} + 
+func TestUpdate_RefreshKeyDispatchesDiscoveryCmd(t *testing.T) { + cfg := config.DefaultConfig() + cfg.Display.Animation = false + + app := NewApp(cfg, fakeDiscoverer{hosts: []model.Host{{Name: "dev-1", Label: "dev-1"}}}, fakeProber{}, nil) + + _, cmd := app.Update(tea.KeyPressMsg{Code: 'r', Text: "r"}) + if cmd == nil { + t.Fatal("expected refresh key to dispatch command") + } + + msg, ok := runCmdWithTimeout(cmd, 200*time.Millisecond) + if !ok { + t.Fatal("refresh command timed out") + } + + switch typed := msg.(type) { + case tuimodel.DiscoveryResultMsg: + case tea.BatchMsg: + if !batchContainsDiscoveryResult(typed) { + t.Fatalf("expected refresh command batch to include discovery, got %T", msg) + } + default: + t.Fatalf("refresh command message type = %T, want DiscoveryResultMsg or tea.BatchMsg", msg) + } +} + +func TestUpdate_DiscoveryResultDispatchesProbeAndStoresHosts(t *testing.T) { + cfg := config.DefaultConfig() + cfg.Display.Animation = false + + prober := &trackingProber{hostsToReturn: []model.Host{{Name: "dev-1", Label: "dev-1", Status: model.HostStatusOnline}}} + app := NewApp(cfg, fakeDiscoverer{}, prober, nil) + + discovered := []model.Host{{Name: "dev-1", Label: "dev-1", Status: model.HostStatusUnknown}} + _, cmd := app.Update(tuimodel.DiscoveryResultMsg{Hosts: discovered}) + + if len(app.hosts) != 1 || app.hosts[0].Name != "dev-1" { + t.Fatalf("app.hosts after discovery = %#v, want discovered hosts copied", app.hosts) + } + if cmd == nil { + t.Fatal("expected discovery result to dispatch probe command") + } + + msg, ok := runCmdWithTimeout(cmd, 200*time.Millisecond) + if !ok { + t.Fatal("probe dispatch command timed out") + } + + switch typed := msg.(type) { + case tuimodel.ProbeResultMsg: + if len(typed.Hosts) != 1 || typed.Hosts[0].Name != "dev-1" { + t.Fatalf("probe result hosts = %#v, want host dev-1", typed.Hosts) + } + case tea.BatchMsg: + if !batchContainsProbeResult(typed) { + t.Fatalf("expected discovery command batch to include probe 
result, got %T", msg) + } + default: + t.Fatalf("discovery dispatch message type = %T, want ProbeResultMsg or tea.BatchMsg", msg) + } + + if prober.calls != 1 { + t.Fatalf("probe call count = %d, want 1", prober.calls) + } + if len(prober.lastHosts) != 1 || prober.lastHosts[0].Name != "dev-1" { + t.Fatalf("probe input hosts = %#v, want discovered host", prober.lastHosts) + } +} + +func TestUpdate_DiscoveryResultMergesLocalHostWithoutMutatingProbeInput(t *testing.T) { + t.Parallel() + + cfg := config.DefaultConfig() + cfg.Display.Animation = false + + prober := &trackingProber{hostsToReturn: []model.Host{{Name: "remote-1", Label: "remote-1", Status: model.HostStatusOnline}}} + app := NewApp(cfg, fakeDiscoverer{}, prober, nil) + localProvider := &fakeLocalHostProvider{host: hostPtr(localHostFixture())} + app.localHostProvider = localProvider + + discovered := []model.Host{{Name: "remote-1", Label: "remote-1", Status: model.HostStatusUnknown}} + _, cmd := app.Update(tuimodel.DiscoveryResultMsg{Hosts: discovered}) + + if len(app.hosts) != 2 { + t.Fatalf("app.hosts count after local merge = %d, want 2", len(app.hosts)) + } + if app.hosts[0].Name != "localhost" || app.hosts[1].Name != "remote-1" { + t.Fatalf("app.host ordering = [%q, %q], want [localhost, remote-1]", app.hosts[0].Name, app.hosts[1].Name) + } + + if cmd == nil { + t.Fatal("expected discovery result to dispatch probe command") + } + if _, ok := runCmdWithTimeout(cmd, 200*time.Millisecond); !ok { + t.Fatal("probe command timed out") + } + + if prober.calls != 1 { + t.Fatalf("prober call count = %d, want 1", prober.calls) + } + if len(prober.lastHosts) != 1 || prober.lastHosts[0].Name != "remote-1" { + t.Fatalf("probe input hosts = %#v, want only remote host", prober.lastHosts) + } +} + +func TestUpdate_ProbeResultMergesLocalHostIntoTree(t *testing.T) { + t.Parallel() + + cfg := config.DefaultConfig() + cfg.Display.Animation = false + + app := NewApp(cfg, fakeDiscoverer{}, fakeProber{}, nil) + localProvider 
:= &fakeLocalHostProvider{host: hostPtr(localHostFixture())} + app.localHostProvider = localProvider + + remoteHosts := []model.Host{{Name: "remote-1", Label: "remote-1", Status: model.HostStatusOnline}} + _, _ = app.Update(tuimodel.ProbeResultMsg{Hosts: remoteHosts, RefreshedAt: time.Now()}) + + if len(app.hosts) != 2 { + t.Fatalf("app.hosts count after probe merge = %d, want 2", len(app.hosts)) + } + if app.hosts[0].Name != "localhost" || app.hosts[1].Name != "remote-1" { + t.Fatalf("app.host ordering = [%q, %q], want [localhost, remote-1]", app.hosts[0].Name, app.hosts[1].Name) + } + + treeHosts := app.tree.Hosts() + if len(treeHosts) != 2 { + t.Fatalf("tree host count = %d, want 2", len(treeHosts)) + } + if treeHosts[0].Name != "localhost" || treeHosts[1].Name != "remote-1" { + t.Fatalf("tree host ordering = [%q, %q], want [localhost, remote-1]", treeHosts[0].Name, treeHosts[1].Name) + } +} + +func TestUpdate_DiscoveryResultNoRemoteHostsKeepsLocalAndSuppressesGuidanceToast(t *testing.T) { + t.Parallel() + + cfg := config.DefaultConfig() + cfg.Display.Animation = false + + app := NewApp(cfg, fakeDiscoverer{}, fakeProber{}, nil) + localProvider := &fakeLocalHostProvider{host: hostPtr(localHostFixture())} + app.localHostProvider = localProvider + + _, cmd := app.Update(tuimodel.DiscoveryResultMsg{Hosts: nil}) + if cmd != nil { + t.Fatal("expected no command when no remote hosts are discovered") + } + + if len(app.hosts) != 1 || app.hosts[0].Name != "localhost" { + t.Fatalf("app.hosts = %#v, want local host only", app.hosts) + } + if app.toast.Visible() { + t.Fatalf("did not expect no-remote guidance toast when local host is available, got %q", app.toast.View()) + } +} + +func TestUpdate_QuitStopsLocalProvider(t *testing.T) { + t.Parallel() + + cfg := config.DefaultConfig() + cfg.Display.Animation = false + + app := NewApp(cfg, fakeDiscoverer{}, fakeProber{}, nil) + localProvider := &fakeLocalHostProvider{} + app.localHostProvider = localProvider + + _, cmd := 
app.Update(tea.KeyPressMsg{Code: 'q', Text: "q"}) + if cmd == nil { + t.Fatal("expected quit command") + } + quitMsg := cmd() + if _, ok := quitMsg.(tea.QuitMsg); !ok { + t.Fatalf("quit command result = %T, want tea.QuitMsg", quitMsg) + } + + localProvider.mu.Lock() + stopCalls := localProvider.stopCalls + localProvider.mu.Unlock() + if stopCalls != 1 { + t.Fatalf("local provider stop call count = %d, want 1", stopCalls) + } +} + +func TestUpdate_DiscoveryResultNoHostsShowsGuidanceToast(t *testing.T) { + cfg := config.DefaultConfig() + cfg.Display.Animation = false + + app := NewApp(cfg, fakeDiscoverer{}, fakeProber{}, nil) + + _, cmd := app.Update(tuimodel.DiscoveryResultMsg{Hosts: nil}) + if cmd == nil { + t.Fatal("expected zero-host discovery to dispatch toast command") + } + if !app.toast.Visible() { + t.Fatal("expected guidance toast when no hosts discovered") + } + + toastView := strings.ToLower(app.toast.View()) + if !strings.Contains(toastView, "no ssh hosts found") { + t.Fatalf("expected no-hosts guidance toast, got %q", toastView) + } + if !strings.Contains(toastView, "~/.ssh/config") { + t.Fatalf("expected ssh config guidance in toast, got %q", toastView) + } + if !strings.Contains(toastView, "hosts.include") { + t.Fatalf("expected hosts.include guidance in toast, got %q", toastView) + } +} + +func TestUpdate_SSHErrorMsgShowsPerHostToast(t *testing.T) { + cfg := config.DefaultConfig() + cfg.Display.Animation = false + + app := NewApp(cfg, fakeDiscoverer{}, fakeProber{}, nil) + + sshErr := errors.New("permission denied (publickey)") + _, cmd := app.Update(tuimodel.SSHErrorMsg{Host: "jump-1", Err: sshErr}) + if cmd == nil { + t.Fatal("expected SSHErrorMsg handler to return toast command") + } + if !app.toast.Visible() { + t.Fatal("expected SSHErrorMsg to show toast") + } + if app.lastError == nil || !strings.Contains(app.lastError.Error(), "jump-1") { + t.Fatalf("expected lastError to capture host-scoped ssh failure, got %v", app.lastError) + } + + 
toastView := strings.ToLower(app.toast.View()) + if !strings.Contains(toastView, "error:") || !strings.Contains(toastView, "jump-1") || !strings.Contains(toastView, "permission denied") { + t.Fatalf("expected host-scoped ssh error toast, got %q", toastView) + } +} + +func TestUpdate_TransportPreflightMsgUpdatesHostTransportState(t *testing.T) { + cfg := config.DefaultConfig() + cfg.Display.Animation = false + + app := NewApp(cfg, fakeDiscoverer{}, fakeProber{}, nil) + app.hosts = []model.Host{ + {Name: "jump-1", Label: "jump-1", Status: model.HostStatusUnknown, Transport: model.TransportUnknown}, + {Name: "target-1", Label: "target-1", Status: model.HostStatusUnknown, Transport: model.TransportUnknown}, + } + app.tree.SetHosts(app.hosts) + + preflightHosts := []model.Host{ + {Name: "jump-1", Transport: model.TransportAuthRequired, TransportError: "password authentication required"}, + {Name: "target-1", Transport: model.TransportBlocked, BlockedBy: []string{"jump-1"}, TransportError: "blocked by: jump-1"}, + } + + _, _ = app.Update(tuimodel.TransportPreflightMsg{Hosts: preflightHosts}) + + if app.hosts[0].Transport != model.TransportAuthRequired { + t.Fatalf("jump host transport = %q, want %q", app.hosts[0].Transport, model.TransportAuthRequired) + } + if app.hosts[1].Transport != model.TransportBlocked { + t.Fatalf("target host transport = %q, want %q", app.hosts[1].Transport, model.TransportBlocked) + } + if len(app.hosts[1].BlockedBy) != 1 || app.hosts[1].BlockedBy[0] != "jump-1" { + t.Fatalf("target blocked_by = %#v, want [jump-1]", app.hosts[1].BlockedBy) + } + + treeHosts := app.tree.Hosts() + if len(treeHosts) != 2 { + t.Fatalf("tree host count = %d, want 2", len(treeHosts)) + } + if treeHosts[0].Transport != model.TransportAuthRequired || treeHosts[1].Transport != model.TransportBlocked { + t.Fatalf("tree host transports = [%q %q], want [%q %q]", treeHosts[0].Transport, treeHosts[1].Transport, model.TransportAuthRequired, model.TransportBlocked) + } +} + 
+func TestUpdate_ProbeResultStillUpdatesTreeAndDeadline(t *testing.T) { + cfg := config.DefaultConfig() + cfg.Display.Animation = false + + app := NewApp(cfg, fakeDiscoverer{}, fakeProber{}, nil) + + refreshedAt := time.Now().Add(-2 * time.Second).UTC().Round(0) + probedHosts := []model.Host{{ + Name: "dev-1", + Label: "dev-1", + Status: model.HostStatusOnline, + Projects: []model.Project{{ + Name: "alpha", + Sessions: []model.Session{{ + ID: "sess-1", + Project: "alpha", + Title: "session one", + Activity: model.ActivityActive, + }}, + }}, + }} + + _, _ = app.Update(tuimodel.ProbeResultMsg{Hosts: probedHosts, RefreshedAt: refreshedAt}) + + if len(app.hosts) != 1 || app.hosts[0].Name != "dev-1" { + t.Fatalf("app hosts after probe result = %#v, want dev-1", app.hosts) + } + treeHosts := app.tree.Hosts() + if len(treeHosts) != 1 || treeHosts[0].Name != "dev-1" { + t.Fatalf("tree hosts after probe result = %#v, want dev-1", treeHosts) + } + + wantNext := refreshedAt.Add(cfg.Polling.Interval) + if !app.nextRefresh.Equal(wantNext) { + t.Fatalf("nextRefresh = %v, want %v", app.nextRefresh, wantNext) + } +} + func setupReloadSessionsMockSSH(t *testing.T, mode string) (model.Host, string) { t.Helper() @@ -282,7 +756,7 @@ func TestReloadSessionsCmd_Success(t *testing.T) { directory := "/tmp/project-alpha" msg := app.reloadSessionsCmd(host, directory)() - finished, ok := msg.(model.ReloadSessionsFinishedMsg) + finished, ok := msg.(tuimodel.ReloadSessionsFinishedMsg) if !ok { t.Fatalf("message type = %T, want model.ReloadSessionsFinishedMsg", msg) } @@ -347,7 +821,7 @@ func TestReloadSessionsCmd_NoProcessFound(t *testing.T) { directory := "/tmp/project-beta" msg := app.reloadSessionsCmd(host, directory)() - finished, ok := msg.(model.ReloadSessionsFinishedMsg) + finished, ok := msg.(tuimodel.ReloadSessionsFinishedMsg) if !ok { t.Fatalf("message type = %T, want model.ReloadSessionsFinishedMsg", msg) } @@ -372,7 +846,7 @@ func TestReloadSessionsCmd_ResidualProcessRemaining(t 
*testing.T) { directory := "/tmp/project-residual" msg := app.reloadSessionsCmd(host, directory)() - finished, ok := msg.(model.ReloadSessionsFinishedMsg) + finished, ok := msg.(tuimodel.ReloadSessionsFinishedMsg) if !ok { t.Fatalf("message type = %T, want model.ReloadSessionsFinishedMsg", msg) } @@ -400,7 +874,7 @@ func TestReloadSessionsCmd_SSHFailure(t *testing.T) { directory := "/tmp/project-gamma" msg := app.reloadSessionsCmd(host, directory)() - finished, ok := msg.(model.ReloadSessionsFinishedMsg) + finished, ok := msg.(tuimodel.ReloadSessionsFinishedMsg) if !ok { t.Fatalf("message type = %T, want model.ReloadSessionsFinishedMsg", msg) } @@ -425,7 +899,7 @@ func TestReloadSessionsCmd_PermissionDenied(t *testing.T) { directory := "/tmp/project-delta" msg := app.reloadSessionsCmd(host, directory)() - finished, ok := msg.(model.ReloadSessionsFinishedMsg) + finished, ok := msg.(tuimodel.ReloadSessionsFinishedMsg) if !ok { t.Fatalf("message type = %T, want model.ReloadSessionsFinishedMsg", msg) } @@ -450,7 +924,7 @@ func TestKillSessionCmd_SaveContextExportsThenDeletes(t *testing.T) { t.Setenv("HOME", t.TempDir()) msg := app.killSessionCmd(host, "session-1", "/tmp/project-alpha", true)() - finished, ok := msg.(model.KillSessionFinishedMsg) + finished, ok := msg.(tuimodel.KillSessionFinishedMsg) if !ok { t.Fatalf("message type = %T, want model.KillSessionFinishedMsg", msg) } @@ -505,7 +979,7 @@ func TestKillSessionCmd_DeleteWithoutSaveSkipsExport(t *testing.T) { host, argsFile := setupDeleteSessionMockSSH(t, "success") msg := app.killSessionCmd(host, "session-1", "/tmp/project-alpha", false)() - finished, ok := msg.(model.KillSessionFinishedMsg) + finished, ok := msg.(tuimodel.KillSessionFinishedMsg) if !ok { t.Fatalf("message type = %T, want model.KillSessionFinishedMsg", msg) } @@ -541,7 +1015,7 @@ func TestKillSessionCmd_SaveContextExportFailureStopsDelete(t *testing.T) { t.Setenv("HOME", t.TempDir()) msg := app.killSessionCmd(host, "session-1", 
"/tmp/project-alpha", true)() - finished, ok := msg.(model.KillSessionFinishedMsg) + finished, ok := msg.(tuimodel.KillSessionFinishedMsg) if !ok { t.Fatalf("message type = %T, want model.KillSessionFinishedMsg", msg) } @@ -577,7 +1051,7 @@ func TestKillSessionCmd_DeleteFailureReturnsSavedExportPath(t *testing.T) { t.Setenv("HOME", t.TempDir()) msg := app.killSessionCmd(host, "session-1", "/tmp/project-alpha", true)() - finished, ok := msg.(model.KillSessionFinishedMsg) + finished, ok := msg.(tuimodel.KillSessionFinishedMsg) if !ok { t.Fatalf("message type = %T, want model.KillSessionFinishedMsg", msg) } @@ -617,7 +1091,7 @@ func TestKillSessionCmd_CleanupFailureReturnsSavedExportPath(t *testing.T) { t.Setenv("HOME", t.TempDir()) msg := app.killSessionCmd(host, "session-1", "/tmp/project-alpha", true)() - finished, ok := msg.(model.KillSessionFinishedMsg) + finished, ok := msg.(tuimodel.KillSessionFinishedMsg) if !ok { t.Fatalf("message type = %T, want model.KillSessionFinishedMsg", msg) } @@ -767,7 +1241,7 @@ func TestCtrlR_SessionSelectionResolvesParentProjectDirectory(t *testing.T) { } msg := cmd() - confirm, ok := msg.(model.ModalConfirmReloadMsg) + confirm, ok := msg.(tuimodel.ModalConfirmReloadMsg) if !ok { t.Fatalf("message type = %T, want model.ModalConfirmReloadMsg", msg) } @@ -840,7 +1314,7 @@ func TestReloadConfirm_DetachesActiveProjectTerminalsBeforeDispatch(t *testing.T app.activeView = viewTerminal app.activeSessionID = "alpha-2" - _, cmd := app.Update(model.ModalConfirmReloadMsg{HostName: host.Name, Directory: "/srv/work/alpha"}) + _, cmd := app.Update(tuimodel.ModalConfirmReloadMsg{HostName: host.Name, Directory: "/srv/work/alpha"}) if cmd == nil { t.Fatal("expected reload dispatch command after confirmation") } @@ -879,13 +1353,13 @@ func TestReloadConfirm_DispatchesReloadSessionsCmd(t *testing.T) { app, _ := setupReloadWiringApp(t, mockHost) - _, cmd := app.Update(model.ModalConfirmReloadMsg{HostName: mockHost.Name, Directory: 
"/tmp/project-alpha"}) + _, cmd := app.Update(tuimodel.ModalConfirmReloadMsg{HostName: mockHost.Name, Directory: "/tmp/project-alpha"}) if cmd == nil { t.Fatal("expected reloadSessionsCmd dispatch on modal confirm") } msg := cmd() - finished, ok := msg.(model.ReloadSessionsFinishedMsg) + finished, ok := msg.(tuimodel.ReloadSessionsFinishedMsg) if !ok { t.Fatalf("message type = %T, want model.ReloadSessionsFinishedMsg", msg) } @@ -916,7 +1390,7 @@ func TestReloadFinished_SuccessUpdatesToastAndRefresh(t *testing.T) { app, _ := setupReloadWiringApp(t, host) app.reloadInProgress = true - _, cmd := app.Update(model.ReloadSessionsFinishedMsg{ + _, cmd := app.Update(tuimodel.ReloadSessionsFinishedMsg{ HostName: host.Name, Directory: "/srv/work/alpha", KilledCount: 2, @@ -947,8 +1421,8 @@ func TestReloadFinished_SuccessUpdatesToastAndRefresh(t *testing.T) { if len(batch) != 2 { t.Fatalf("batch command length = %d, want 2 (toast + refresh)", len(batch)) } - if !batchContainsProbeResult(batch) { - t.Fatal("expected reload success command batch to include refreshCmd") + if !batchContainsDiscoveryResult(batch) { + t.Fatal("expected reload success command batch to include discoverCmd") } } @@ -958,7 +1432,7 @@ func TestReloadFinished_ErrorUpdatesToastAndRefresh(t *testing.T) { app.reloadInProgress = true reloadErr := errors.New("reload failed: ssh timeout") - _, cmd := app.Update(model.ReloadSessionsFinishedMsg{ + _, cmd := app.Update(tuimodel.ReloadSessionsFinishedMsg{ HostName: host.Name, Directory: "/srv/work/alpha", Err: reloadErr, @@ -992,11 +1466,30 @@ func TestReloadFinished_ErrorUpdatesToastAndRefresh(t *testing.T) { if len(batch) != 2 { t.Fatalf("batch command length = %d, want 2 (toast + refresh)", len(batch)) } - if !batchContainsProbeResult(batch) { - t.Fatal("expected reload error command batch to include refreshCmd") + if !batchContainsDiscoveryResult(batch) { + t.Fatal("expected reload error command batch to include discoverCmd") } } +func 
batchContainsDiscoveryResult(cmds tea.BatchMsg) bool { + for _, cmd := range cmds { + if cmd == nil { + continue + } + + msg, ok := runCmdWithTimeout(cmd, 100*time.Millisecond) + if !ok { + continue + } + + if _, isDiscovery := msg.(tuimodel.DiscoveryResultMsg); isDiscovery { + return true + } + } + + return false +} + func batchContainsProbeResult(cmds tea.BatchMsg) bool { for _, cmd := range cmds { if cmd == nil { @@ -1008,7 +1501,7 @@ func batchContainsProbeResult(cmds tea.BatchMsg) bool { continue } - if _, isProbe := msg.(model.ProbeResultMsg); isProbe { + if _, isProbe := msg.(tuimodel.ProbeResultMsg); isProbe { return true } } @@ -1037,3 +1530,8 @@ func runCmdWithTimeout(cmd tea.Cmd, timeout time.Duration) (tea.Msg, bool) { return nil, false } } + +func hostPtr(host model.Host) *model.Host { + hostCopy := host + return &hostCopy +} diff --git a/internal/tui/components/inspect.go b/internal/tui/components/inspect.go index 1178d2c..26cb038 100644 --- a/internal/tui/components/inspect.go +++ b/internal/tui/components/inspect.go @@ -5,7 +5,7 @@ import ( "strings" "time" - "opencoderouter/internal/tui/model" + "opencoderouter/internal/model" "opencoderouter/internal/tui/theme" lipgloss "charm.land/lipgloss/v2" diff --git a/internal/tui/components/modal.go b/internal/tui/components/modal.go index b6e7482..0dcf339 100644 --- a/internal/tui/components/modal.go +++ b/internal/tui/components/modal.go @@ -5,7 +5,7 @@ import ( "path/filepath" "strings" - "opencoderouter/internal/tui/model" + tuimodel "opencoderouter/internal/tui/model" "opencoderouter/internal/tui/theme" textinput "charm.land/bubbles/v2/textinput" @@ -224,7 +224,7 @@ func (m ModalLayer) Update(msg tea.Msg) (ModalLayer, tea.Cmd) { directory := m.directory m.Close() return m, func() tea.Msg { - return model.ModalConfirmCreateMsg{ + return tuimodel.ModalConfirmCreateMsg{ HostName: hostName, Directory: directory, } @@ -249,7 +249,7 @@ func (m ModalLayer) Update(msg tea.Msg) (ModalLayer, tea.Cmd) { hostName := 
m.hostName m.Close() return m, func() tea.Msg { - return model.ModalConfirmNewDirMsg{ + return tuimodel.ModalConfirmNewDirMsg{ HostName: hostName, Directory: dir, } @@ -274,7 +274,7 @@ func (m ModalLayer) Update(msg tea.Msg) (ModalLayer, tea.Cmd) { hostName := m.hostName m.Close() return m, func() tea.Msg { - return model.ModalConfirmGitCloneMsg{ + return tuimodel.ModalConfirmGitCloneMsg{ HostName: hostName, GitURL: gitURL, } @@ -292,7 +292,7 @@ func (m ModalLayer) Update(msg tea.Msg) (ModalLayer, tea.Cmd) { directory := m.directory m.Close() return m, func() tea.Msg { - return model.ModalConfirmKillMsg{ + return tuimodel.ModalConfirmKillMsg{ HostName: hostName, SessionID: sessionID, Directory: directory, @@ -305,7 +305,7 @@ func (m ModalLayer) Update(msg tea.Msg) (ModalLayer, tea.Cmd) { directory := m.directory m.Close() return m, func() tea.Msg { - return model.ModalConfirmKillMsg{ + return tuimodel.ModalConfirmKillMsg{ HostName: hostName, SessionID: sessionID, Directory: directory, @@ -325,7 +325,7 @@ func (m ModalLayer) Update(msg tea.Msg) (ModalLayer, tea.Cmd) { directory := m.directory m.Close() return m, func() tea.Msg { - return model.ModalConfirmReloadMsg{ + return tuimodel.ModalConfirmReloadMsg{ HostName: hostName, Directory: directory, } diff --git a/internal/tui/components/modal_test.go b/internal/tui/components/modal_test.go index a697217..c18e7cd 100644 --- a/internal/tui/components/modal_test.go +++ b/internal/tui/components/modal_test.go @@ -4,7 +4,7 @@ import ( "strings" "testing" - "opencoderouter/internal/tui/model" + tuimodel "opencoderouter/internal/tui/model" "opencoderouter/internal/tui/theme" tea "charm.land/bubbletea/v2" @@ -45,7 +45,7 @@ func TestModalConfirmReloadEmitsConfirmMessage(t *testing.T) { } msg := cmd() - confirm, ok := msg.(model.ModalConfirmReloadMsg) + confirm, ok := msg.(tuimodel.ModalConfirmReloadMsg) if !ok { t.Fatalf("expected ModalConfirmReloadMsg, got %T", msg) } @@ -99,7 +99,7 @@ func 
TestModalConfirmKillYesEmitsSaveContextTrue(t *testing.T) { } msg := cmd() - confirm, ok := msg.(model.ModalConfirmKillMsg) + confirm, ok := msg.(tuimodel.ModalConfirmKillMsg) if !ok { t.Fatalf("expected ModalConfirmKillMsg, got %T", msg) } @@ -124,7 +124,7 @@ func TestModalConfirmKillNoEmitsSaveContextFalse(t *testing.T) { } msg := cmd() - confirm, ok := msg.(model.ModalConfirmKillMsg) + confirm, ok := msg.(tuimodel.ModalConfirmKillMsg) if !ok { t.Fatalf("expected ModalConfirmKillMsg, got %T", msg) } diff --git a/internal/tui/components/terminal.go b/internal/tui/components/terminal.go index ec7c4e0..f87b633 100644 --- a/internal/tui/components/terminal.go +++ b/internal/tui/components/terminal.go @@ -15,7 +15,8 @@ import ( "syscall" "time" - "opencoderouter/internal/tui/model" + "opencoderouter/internal/model" + tuimodel "opencoderouter/internal/tui/model" tea "charm.land/bubbletea/v2" "github.com/charmbracelet/x/vt" @@ -218,7 +219,7 @@ func (t *SessionTerminal) readLoop() { return } t.logger.Debug("terminal readLoop chunk", "session_id", t.sessionID, "bytes", n, "preview", preview) - t.emit(model.TerminalOutputMsg{SessionID: t.sessionID, Data: chunk}) + t.emit(tuimodel.TerminalOutputMsg{SessionID: t.sessionID, Data: chunk}) } if err != nil { @@ -246,7 +247,7 @@ func (t *SessionTerminal) emulatorReplyLoop() { return } t.logger.Error("terminal emulator reply write failed", "session_id", t.sessionID, "error", writeErr, "bytes", len(reply), "preview", preview) - t.closeWithErr(writeErr) + _ = t.closeWithErr(writeErr) return } t.logger.Debug("terminal emulator reply forwarded", "session_id", t.sessionID, "bytes", len(reply), "preview", preview) @@ -256,7 +257,7 @@ func (t *SessionTerminal) emulatorReplyLoop() { return } t.logger.Error("terminal emulator reply loop error", "session_id", t.sessionID, "error", err) - t.closeWithErr(err) + _ = t.closeWithErr(err) return } } @@ -278,7 +279,7 @@ func (t *SessionTerminal) watchNoOutput(timeout time.Duration) { 
t.logger.Warn("terminal no output watchdog", "session_id", t.sessionID, "timeout", timeout.String(), "pid", pid) note := "\r\n[ocr] Attach has no output yet. SSH is still running. Check ~/.ocr/ocr.log for details.\r\n" _, _ = t.emulator.Write([]byte(note)) - t.emit(model.TerminalOutputMsg{SessionID: t.sessionID}) + t.emit(tuimodel.TerminalOutputMsg{SessionID: t.sessionID}) if pid <= 0 { return @@ -329,7 +330,7 @@ func (t *SessionTerminal) closeWithErr(reason error) error { t.mu.Unlock() t.logger.Info("terminal closed", "session_id", t.sessionID, "error", finalErr) - t.emit(model.TerminalClosedMsg{SessionID: t.sessionID, Err: finalErr}) + t.emit(tuimodel.TerminalClosedMsg{SessionID: t.sessionID, Err: finalErr}) }) return closeErr diff --git a/internal/tui/components/terminal_test.go b/internal/tui/components/terminal_test.go index 74e5b0c..20387ca 100644 --- a/internal/tui/components/terminal_test.go +++ b/internal/tui/components/terminal_test.go @@ -14,7 +14,8 @@ import ( "testing" "time" - "opencoderouter/internal/tui/model" + "opencoderouter/internal/model" + tuimodel "opencoderouter/internal/tui/model" tea "charm.land/bubbletea/v2" ) @@ -69,7 +70,7 @@ if [ "$#" -lt 2 ]; then exit 2 fi shift -exec /bin/sh -lc "$1" +exec /bin/sh -c "$1" ` opencodeScript := `#!/bin/sh set -eu @@ -129,14 +130,14 @@ func waitFor(t *testing.T, timeout time.Duration, desc string, cond func() bool) t.Fatalf("timed out waiting for %s", desc) } -func waitForClosedMsg(t *testing.T, ch <-chan tea.Msg, sessionID string, timeout time.Duration) model.TerminalClosedMsg { +func waitForClosedMsg(t *testing.T, ch <-chan tea.Msg, sessionID string, timeout time.Duration) tuimodel.TerminalClosedMsg { t.Helper() deadline := time.Now().Add(timeout) for time.Now().Before(deadline) { select { case msg := <-ch: - if closed, ok := msg.(model.TerminalClosedMsg); ok && closed.SessionID == sessionID { + if closed, ok := msg.(tuimodel.TerminalClosedMsg); ok && closed.SessionID == sessionID { return closed } 
 		default:
 		}
 	}
 
 	t.Fatalf("timed out waiting for TerminalClosedMsg for session %q", sessionID)
-	return model.TerminalClosedMsg{}
+	return tuimodel.TerminalClosedMsg{}
 }
 
 func newTestSession(sessionID string) model.Session {
diff --git a/internal/tui/components/toast.go b/internal/tui/components/toast.go
index 38e74fe..5f2d9fd 100644
--- a/internal/tui/components/toast.go
+++ b/internal/tui/components/toast.go
@@ -5,7 +5,7 @@ import (
 	"strings"
 	"time"
 
-	"opencoderouter/internal/tui/model"
+	tuimodel "opencoderouter/internal/tui/model"
 	"opencoderouter/internal/tui/theme"
 
 	tea "charm.land/bubbletea/v2"
@@ -58,7 +58,7 @@ func (t *InlineToast) Show(message string, severity ToastSeverity, timeout time.
 	}
 
 	return tea.Tick(timeout, func(_ time.Time) tea.Msg {
-		return model.ToastExpiredMsg{Token: currentToken}
+		return tuimodel.ToastExpiredMsg{Token: currentToken}
 	})
 }
@@ -73,7 +73,7 @@ func (t InlineToast) Visible() bool {
 }
 
 func (t InlineToast) Update(msg tea.Msg) (InlineToast, tea.Cmd) {
-	typed, ok := msg.(model.ToastExpiredMsg)
+	typed, ok := msg.(tuimodel.ToastExpiredMsg)
 	if !ok {
 		return t, nil
 	}
diff --git a/internal/tui/components/tree.go b/internal/tui/components/tree.go
index bad9513..2b27f63 100644
--- a/internal/tui/components/tree.go
+++ b/internal/tui/components/tree.go
@@ -4,7 +4,7 @@ import (
 	"fmt"
 	"strings"
 
-	"opencoderouter/internal/tui/model"
+	"opencoderouter/internal/model"
 	"opencoderouter/internal/tui/theme"
 
 	tea "charm.land/bubbletea/v2"
@@ -191,7 +191,7 @@ func (t *SessionTreeView) rebuild() {
 			if !projectMatchesQuery(host, project, query) {
 				continue
 			}
-			projectCollapsed := t.isProjectCollapsed(host.Name, project.Name)
+			projectCollapsed := t.isProjectCollapsed(host.Name, project)
 			t.rows = append(t.rows, treeRow{kind: treeNodeProject, hostIdx: hi, projectIdx: pi, sessionIdx: -1, projCollapsed: projectCollapsed})
 			if projectCollapsed {
 				continue
 			}
@@ -225,43 +225,13 @@ func (t SessionTreeView) renderRow(row treeRow, selected bool) string {
 		if t.isHostCollapsed(h.Name) {
 			glyph = "▸"
 		}
-		status := string(h.Status)
-		if status == "" {
-			status = string(model.HostStatusUnknown)
-		}
-
-		// Build proxy badge suffix
-		var suffix string
-		if len(h.Dependents) > 0 {
-			suffix += fmt.Sprintf(" [jump for %d]", len(h.Dependents))
-		}
-		if h.Transport == model.TransportBlocked {
-			status = "blocked"
-			if len(h.BlockedBy) > 0 {
-				suffix += fmt.Sprintf(" (via %s)", strings.Join(h.BlockedBy, " → "))
-			}
-		} else if h.ProxyKind == model.ProxyKindJump && h.ProxyJumpRaw != "" {
-			var hops []string
-			for _, hop := range h.JumpChain {
-				if hop.AliasRef != "" {
-					hops = append(hops, hop.AliasRef)
-				} else {
-					hops = append(hops, hop.Host)
-				}
-			}
-			if len(hops) > 0 {
-				suffix += fmt.Sprintf(" via %s", strings.Join(hops, " → "))
-			}
-		} else if h.ProxyKind == model.ProxyKindCommand {
-			suffix += " via ProxyCommand"
-		}
-		line = fmt.Sprintf("%s %s [%s]%s", glyph, h.Label, status, suffix)
+		line = fmt.Sprintf("%s %s", glyph, formatHostLabel(h))
 		line = t.theme.TreeHost.Render(line)
 	case treeNodeProject:
 		p := h.Projects[row.projectIdx]
 		glyph := "▾"
-		if t.isProjectCollapsed(h.Name, p.Name) {
+		if t.isProjectCollapsed(h.Name, p) {
 			glyph = "▸"
 		}
 		line = fmt.Sprintf("  %s %s", glyph, p.Name)
@@ -300,7 +270,7 @@ func (t *SessionTreeView) toggleAtCursor() {
 		h := t.hosts[row.hostIdx]
 		p := h.Projects[row.projectIdx]
 		projectKey := makeProjectKey(h.Name, p.Name)
-		t.collapsedProjects[projectKey] = !t.isProjectCollapsed(h.Name, p.Name)
+		t.collapsedProjects[projectKey] = !t.isProjectCollapsed(h.Name, p)
 	}
 	t.rebuild()
 }
@@ -371,11 +341,11 @@ func (t *SessionTreeView) isHostCollapsed(host string) bool {
 	return collapsed
 }
 
-func (t *SessionTreeView) isProjectCollapsed(host, project string) bool {
-	projectKey := makeProjectKey(host, project)
+func (t *SessionTreeView) isProjectCollapsed(host string, project model.Project) bool {
+	projectKey := makeProjectKey(host, project.Name)
 	collapsed, ok := t.collapsedProjects[projectKey]
 	if !ok {
-		return true
+		return len(project.Sessions) == 0
 	}
 	return collapsed
 }
@@ -424,3 +394,59 @@ func treeMaxInt(a, b int) int {
 	}
 	return b
 }
+
+func formatHostLabel(h model.Host) string {
+	status := string(h.Status)
+	if status == "" {
+		status = string(model.HostStatusUnknown)
+	}
+
+	// Build proxy badge suffix
+	var suffix string
+	if len(h.Dependents) > 0 {
+		suffix += fmt.Sprintf(" [jump for %d]", len(h.Dependents))
+	}
+	if h.Transport == model.TransportBlocked {
+		status = "blocked"
+		if len(h.BlockedBy) > 0 {
+			suffix += fmt.Sprintf(" (via %s)", strings.Join(h.BlockedBy, " → "))
+		}
+	} else if h.ProxyKind == model.ProxyKindJump && h.ProxyJumpRaw != "" {
+		var hops []string
+		for _, hop := range h.JumpChain {
+			if hop.AliasRef != "" {
+				hops = append(hops, hop.AliasRef)
+			} else {
+				hops = append(hops, hop.Host)
+			}
+		}
+		if len(hops) > 0 {
+			suffix += fmt.Sprintf(" via %s", strings.Join(hops, " → "))
+		}
+	} else if h.ProxyKind == model.ProxyKindCommand {
+		suffix += " via ProxyCommand"
+	}
+
+	var countIndicator string
+	if h.Status == model.HostStatusOffline || h.Status == model.HostStatusError {
+		countIndicator = " (offline)"
+	} else {
+		sessionCount := 0
+		for _, p := range h.Projects {
+			sessionCount += len(p.Sessions)
+		}
+
+		if sessionCount > 0 {
+			if sessionCount == 1 {
+				countIndicator = " (1 session)"
+			} else {
+				countIndicator = fmt.Sprintf(" (%d sessions)", sessionCount)
+			}
+		} else {
+			countIndicator = " (no sessions)"
+		}
+	}
+
+	return fmt.Sprintf("%s [%s]%s%s", h.Label, status, suffix, countIndicator)
+}
+
diff --git a/internal/tui/components/tree_test.go b/internal/tui/components/tree_test.go
index 16e26e2..9a9738e 100644
--- a/internal/tui/components/tree_test.go
+++ b/internal/tui/components/tree_test.go
@@ -4,7 +4,7 @@ import (
 	"strings"
 	"testing"
 
-	"opencoderouter/internal/tui/model"
+	"opencoderouter/internal/model"
 	"opencoderouter/internal/tui/theme"
 )
@@ -74,29 +74,123 @@ func TestSessionTreeViewSessionIndicatorHiddenForInactiveSession(t *testing.T) {
 	}
 }
 
-func TestSessionTreeViewProjectsAreCollapsedByDefault(t *testing.T) {
+func TestSessionTreeViewProjectsWithSessionsAreExpandedByDefault(t *testing.T) {
 	tree := NewSessionTreeView(theme.Minimal())
 	tree.SetHosts([]model.Host{
 		{
-			Name:  "host-a",
-			Label: "host-a",
+			Name:   "host-a",
+			Label:  "host-a",
+			Status: model.HostStatusOnline,
 			Projects: []model.Project{
 				{
 					Name: "proj-a",
 					Sessions: []model.Session{
 						{ID: "session-1", Title: "Session One", Activity: model.ActivityActive},
+						{ID: "session-2", Title: "Session Two", Activity: model.ActivityIdle},
 					},
 				},
 			},
 		},
+		{
+			Name:   "host-b",
+			Label:  "host-b",
+			Status: model.HostStatusOnline,
+			Projects: []model.Project{
+				{
+					Name:     "proj-empty",
+					Sessions: []model.Session{},
+				},
+			},
+		},
 	})
 
 	view := tree.View()
-	if !strings.Contains(view, "▸ proj-a") {
-		t.Fatalf("expected collapsed project row by default, got %q", view)
+	if !strings.Contains(view, "host-a [online] (2 sessions)") {
+		t.Errorf("expected host-a to show session count, got %q", view)
+	}
+
+	if !strings.Contains(view, "host-b [online] (no sessions)") {
+		t.Errorf("expected empty host to show empty indicator, got %q", view)
+	}
+
+	if !strings.Contains(view, "▾ proj-a") {
+		t.Errorf("expected expanded project row by default, got %q", view)
+	}
+	if !strings.Contains(view, "Session One") {
+		t.Errorf("expected project sessions to be visible by default, got %q", view)
+	}
+}
+
+func TestFormatHostLabel(t *testing.T) {
+	tests := []struct {
+		name     string
+		host     model.Host
+		expected string
+	}{
+		{
+			name: "offline host",
+			host: model.Host{
+				Name:   "host-offline",
+				Label:  "host-offline",
+				Status: model.HostStatusOffline,
+			},
+			expected: "host-offline [offline] (offline)",
+		},
+		{
+			name: "error host",
+			host: model.Host{
+				Name:   "host-error",
+				Label:  "host-error",
+				Status: model.HostStatusError,
+			},
+			expected: "host-error [error] (offline)",
+		},
+		{
+			name: "zero sessions online",
+			host: model.Host{
+				Name:   "host-empty",
+				Label:  "host-empty",
+				Status: model.HostStatusOnline,
+			},
+			expected: "host-empty [online] (no sessions)",
+		},
+		{
+			name: "one session online",
+			host: model.Host{
+				Name:   "host-one",
+				Label:  "host-one",
+				Status: model.HostStatusOnline,
+				Projects: []model.Project{
+					{
+						Sessions: []model.Session{{ID: "1"}},
+					},
+				},
+			},
+			expected: "host-one [online] (1 session)",
+		},
+		{
+			name: "multiple sessions online",
+			host: model.Host{
+				Name:   "host-many",
+				Label:  "host-many",
+				Status: model.HostStatusOnline,
+				Projects: []model.Project{
+					{
+						Sessions: []model.Session{{ID: "1"}, {ID: "2"}},
+					},
+				},
+			},
+			expected: "host-many [online] (2 sessions)",
+		},
 	}
-
-	if strings.Contains(view, "Session One") {
-		t.Fatalf("expected project sessions to be hidden by default, got %q", view)
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			got := formatHostLabel(tt.host)
+			if got != tt.expected {
+				t.Errorf("formatHostLabel() = %q, want %q", got, tt.expected)
+			}
+		})
 	}
 }
diff --git a/internal/tui/config/config.go b/internal/tui/config/config.go
index 089e425..efa24d4 100644
--- a/internal/tui/config/config.go
+++ b/internal/tui/config/config.go
@@ -114,7 +114,6 @@ func Load(ctx context.Context, filePath string) (Config, error) {
 	if _, err := os.Stat(resolved); err != nil {
 		if errors.Is(err, os.ErrNotExist) {
 			if samePath(resolved, DefaultPath()) {
-				// TODO: add first-run bootstrap workflow to materialize a starter file.
 				return cfg, nil
 			}
 			return cfg, fmt.Errorf("config file %q does not exist", resolved)
diff --git a/internal/tui/discovery/discovery.go b/internal/tui/discovery/discovery.go
index e531c26..c4f13f2 100644
--- a/internal/tui/discovery/discovery.go
+++ b/internal/tui/discovery/discovery.go
@@ -1,66 +1,27 @@
 package discovery
 
 import (
-	"bufio"
 	"context"
-	"errors"
-	"fmt"
 	"io"
 	"log/slog"
-	"net"
-	"net/url"
-	"os"
-	"os/exec"
-	"os/user"
-	"path"
-	"path/filepath"
-	"sort"
-	"strconv"
-	"strings"
-	"time"
 
+	"opencoderouter/internal/model"
+	"opencoderouter/internal/remote"
 	"opencoderouter/internal/tui/config"
-	"opencoderouter/internal/tui/model"
 )
 
-// Runner executes external commands and returns stdout bytes.
-type Runner interface {
-	Run(ctx context.Context, name string, args ...string) ([]byte, error)
-}
-
-// ExecRunner is a Runner backed by os/exec.
-type ExecRunner struct{}
+type Runner = remote.Runner
 
-// Run executes a command, propagating stderr when available.
-func (ExecRunner) Run(ctx context.Context, name string, args ...string) ([]byte, error) {
-	cmd := exec.CommandContext(ctx, name, args...)
-	out, err := cmd.Output()
-	if err == nil {
-		return out, nil
-	}
+type ExecRunner = remote.ExecRunner
 
-	var exitErr *exec.ExitError
-	if errors.As(err, &exitErr) {
-		stderr := strings.TrimSpace(string(exitErr.Stderr))
-		if stderr != "" {
-			return nil, fmt.Errorf("run %s %v: %w: %s", name, args, err, stderr)
-		}
-	}
-
-	return nil, fmt.Errorf("run %s %v: %w", name, args, err)
-}
-
-// DiscoveryService finds SSH hosts and resolves host metadata via `ssh -G`.
 type DiscoveryService struct {
 	cfg           config.Config
 	runner        Runner
 	sshConfigPath string
 	logger        *slog.Logger
+	inner         *remote.DiscoveryService
 }
 
-const maxSanitizedLogErrorRunes = 320
-
-// NewDiscoveryService builds a discovery service for SSH host inventory.
 func NewDiscoveryService(cfg config.Config, runner Runner, logger *slog.Logger) *DiscoveryService {
 	if runner == nil {
 		runner = ExecRunner{}
@@ -68,510 +29,60 @@ func NewDiscoveryService(cfg config.Config, runner Runner, logger *slog.Logger)
 	if logger == nil {
 		logger = slog.New(slog.NewTextHandler(io.Discard, nil))
 	}
+	opts := discoveryOptionsFromConfig(cfg)
+	inner := remote.NewDiscoveryService(opts, runner, logger)
+
 	return &DiscoveryService{
 		cfg:           cfg,
 		runner:        runner,
-		sshConfigPath: defaultSSHConfigPath(),
+		sshConfigPath: opts.SSHConfigPath,
 		logger:        logger,
+		inner:         inner,
 	}
 }
 
-// Discover returns filtered hosts, with address/user resolved from ssh config.
 func (s *DiscoveryService) Discover(ctx context.Context) ([]model.Host, error) {
-	startedAt := time.Now()
-	s.logger.Debug("starting host discovery",
-		"ssh_config_path", s.sshConfigPath,
-		"include_patterns_count", len(s.cfg.Hosts.Include),
-		"ignore_patterns_count", len(s.cfg.Hosts.Ignore),
-	)
-
-	aliases, err := s.loadHostAliases()
-	if err != nil {
-		s.logger.Error("host discovery failed",
-			"stage", "load_host_aliases",
-			"error", sanitizeLogError(err),
-		)
-		return nil, err
-	}
-	s.logger.Debug("loaded host aliases", "alias_count", len(aliases))
-
-	filtered := filterAliasesWithLogger(aliases, s.cfg.Hosts.Include, s.cfg.Hosts.Ignore, s.logger)
-	s.logger.Debug("discovery aliases after filtering", "filtered_count", len(filtered))
-
-	hosts := make([]model.Host, 0, len(filtered))
-	var probeErrs []error
-
-	for _, alias := range filtered {
-		select {
-		case <-ctx.Done():
-			err := fmt.Errorf("discover canceled: %w", ctx.Err())
-			s.logger.Error("host discovery failed",
-				"stage", "context_canceled",
-				"processed_hosts", len(hosts),
-				"error", sanitizeLogError(err),
-			)
-			return hosts, err
-		default:
-		}
-
-		h, resolveErr := s.resolveHost(ctx, alias)
-		if resolveErr != nil {
-			h = model.Host{
-				Name:      alias,
-				Label:     alias,
-				Status:    model.HostStatusError,
-				LastError: resolveErr.Error(),
-			}
-			probeErrs = append(probeErrs, fmt.Errorf("resolve host %q: %w", alias, resolveErr))
-		}
-
-		if override, ok := s.cfg.Hosts.Overrides[alias]; ok {
-			if override.Label != "" {
-				h.Label = override.Label
-			}
-			h.Priority = override.Priority
-			if override.OpencodePath != "" {
-				h.OpencodeBin = override.OpencodePath
-			}
-		}
-		if h.Label == "" {
-			h.Label = h.Name
-		}
-
-		hosts = append(hosts, h)
-	}
-
-	sort.Slice(hosts, func(i, j int) bool {
-		if hosts[i].Priority != hosts[j].Priority {
-			return hosts[i].Priority > hosts[j].Priority
-		}
-		return hosts[i].Name < hosts[j].Name
-	})
-
-	buildDependencyGraphWithLogger(hosts, s.logger)
-
-	if len(probeErrs) > 0 {
-		joinedErr := errors.Join(probeErrs...)
-		s.logger.Error("host discovery failed",
-			"stage", "resolve_hosts",
-			"host_count", len(hosts),
-			"failure_count", len(probeErrs),
-			"duration", time.Since(startedAt),
-			"error", sanitizeLogError(joinedErr),
-		)
-		return hosts, joinedErr
-	}
-
-	s.logger.Debug("host discovery complete",
-		"host_count", len(hosts),
-		"duration", time.Since(startedAt),
-	)
-
-	return hosts, nil
+	s.ensureInner()
+	s.inner.SetSSHConfigPath(s.sshConfigPath)
+	return s.inner.Discover(ctx)
 }
 
-// loadHostAliases reads ~/.ssh/config and extracts concrete Host aliases.
-func (s *DiscoveryService) loadHostAliases() ([]string, error) {
-	s.logger.Debug("reading ssh config for host aliases", "path", s.sshConfigPath)
-
-	b, err := os.ReadFile(s.sshConfigPath)
-	if err != nil {
-		if errors.Is(err, os.ErrNotExist) {
-			s.logger.Debug("ssh config file not found", "path", s.sshConfigPath, "alias_count", 0)
-			return nil, nil
-		}
-		s.logger.Error("failed to read ssh config", "path", s.sshConfigPath, "error", sanitizeLogError(err))
-		return nil, fmt.Errorf("read ssh config %q: %w", s.sshConfigPath, err)
+func (s *DiscoveryService) ensureInner() {
+	if s.inner != nil {
+		return
 	}
-
-	// TODO: support Include directives and multi-file merge semantics from OpenSSH.
-	aliases := parseSSHConfigHostsWithLogger(string(b), s.logger)
-	s.logger.Debug("loaded host aliases from ssh config", "path", s.sshConfigPath, "alias_count", len(aliases))
-	return aliases, nil
+	s.inner = remote.NewDiscoveryService(discoveryOptionsFromConfig(s.cfg), s.runner, s.logger)
 }
 
-// resolveHost runs `ssh -G <alias>` and extracts hostname/user values.
-func (s *DiscoveryService) resolveHost(ctx context.Context, alias string) (model.Host, error) {
-	s.logger.Debug("resolving host", "alias", alias)
-	s.logger.Debug("executing ssh -G", "alias", alias)
-
-	out, err := s.runner.Run(ctx, "ssh", "-G", alias)
-	if err != nil {
-		s.logger.Error("failed to resolve host",
-			"alias", alias,
-			"error", sanitizeLogError(err),
-		)
-		return model.Host{}, err
-	}
-	s.logger.Debug("ssh -G completed", "alias", alias, "output_bytes", len(out))
-
-	host := model.Host{
-		Name:    alias,
-		Address: alias,
-		User:    currentUserName(),
-		Label:   alias,
-		Status:  model.HostStatusUnknown,
-	}
-
-	scanner := bufio.NewScanner(strings.NewReader(string(out)))
-	for scanner.Scan() {
-		line := strings.TrimSpace(scanner.Text())
-		if line == "" {
-			continue
-		}
-		parts := strings.Fields(line)
-		if len(parts) < 2 {
-			continue
-		}
-
-		key := strings.ToLower(parts[0])
-		value := strings.Join(parts[1:], " ")
-		switch key {
-		case "hostname":
-			host.Address = value
-		case "user":
-			host.User = value
-		case "proxyjump":
-			if value != "" && value != "none" {
-				host.ProxyJumpRaw = value
-				host.ProxyKind = model.ProxyKindJump
-				host.JumpChain = parseProxyJumpWithLogger(value, alias, s.logger)
-			}
-		case "proxycommand":
-			if value != "" && value != "none" {
-				host.ProxyCommand = value
-				if host.ProxyKind == "" || host.ProxyKind == model.ProxyKindNone {
-					host.ProxyKind = model.ProxyKindCommand
-				}
-			}
-		}
-	}
-
-	if err := scanner.Err(); err != nil {
-		wrappedErr := fmt.Errorf("parse ssh -G output for %q: %w", alias, err)
-		s.logger.Error("failed to parse ssh -G output",
-			"alias", alias,
-			"error", sanitizeLogError(wrappedErr),
-		)
-		return model.Host{}, wrappedErr
-	}
-
-	s.logger.Debug("resolved host metadata",
-		"alias", alias,
-		"proxy_kind", host.ProxyKind,
-		"jump_hop_count", len(host.JumpChain),
-		"has_proxy_command", host.ProxyCommand != "",
-	)
-
-	return host, nil
-}
-
-// parseSSHConfigHosts extracts non-wildcard `Host` aliases from config text.
 func parseSSHConfigHosts(content string) []string {
-	return parseSSHConfigHostsWithLogger(content, nil)
-}
-
-func parseSSHConfigHostsWithLogger(content string, logger *slog.Logger) []string {
-	if logger != nil {
-		logger.Debug("starting ssh config host parse", "content_bytes", len(content))
-	}
-
-	seen := make(map[string]struct{})
-	aliases := make([]string, 0)
-
-	scanner := bufio.NewScanner(strings.NewReader(content))
-	for scanner.Scan() {
-		line := strings.TrimSpace(scanner.Text())
-		if line == "" || strings.HasPrefix(line, "#") {
-			continue
-		}
-
-		fields := strings.Fields(line)
-		if len(fields) < 2 || !strings.EqualFold(fields[0], "host") {
-			continue
-		}
-
-		for _, candidate := range fields[1:] {
-			if strings.HasPrefix(candidate, "!") {
-				continue
-			}
-			if strings.ContainsAny(candidate, "*?") {
-				continue
-			}
-			if _, ok := seen[candidate]; ok {
-				continue
-			}
-			seen[candidate] = struct{}{}
-			aliases = append(aliases, candidate)
-		}
-	}
-
-	if logger != nil {
-		logger.Debug("completed ssh config host parse", "alias_count", len(aliases))
-	}
-
-	return aliases
-}
-
-// filterAliases applies include/ignore glob lists.
-func filterAliases(aliases, includes, ignores []string) []string {
-	return filterAliasesWithLogger(aliases, includes, ignores, nil)
-}
-
-func filterAliasesWithLogger(aliases, includes, ignores []string, logger *slog.Logger) []string {
-	if logger != nil {
-		logger.Debug("filtering host aliases",
-			"before_count", len(aliases),
-			"include_patterns_count", len(includes),
-			"ignore_patterns_count", len(ignores),
-		)
-	}
-
-	if len(includes) == 0 {
-		includes = []string{"*"}
-	}
-
-	filtered := make([]string, 0, len(aliases))
-	for _, alias := range aliases {
-		if !matchesAnyGlob(alias, includes) {
-			continue
-		}
-		if matchesAnyGlob(alias, ignores) {
-			continue
-		}
-		filtered = append(filtered, alias)
-	}
-
-	if logger != nil {
-		logger.Debug("host alias filtering complete",
-			"before_count", len(aliases),
-			"after_count", len(filtered),
-		)
-	}
-
-	return filtered
-}
-
-// matchesAnyGlob returns true if candidate matches at least one pattern.
-func matchesAnyGlob(candidate string, patterns []string) bool {
-	for _, pattern := range patterns {
-		matched, err := path.Match(pattern, candidate)
-		if err != nil {
-			if pattern == candidate {
-				return true
-			}
-			continue
-		}
-		if matched {
-			return true
-		}
-	}
-	return false
+	return remote.ParseSSHConfigHosts(content)
 }
 
-// defaultSSHConfigPath resolves ~/.ssh/config.
-func defaultSSHConfigPath() string {
-	home, err := os.UserHomeDir()
-	if err != nil || home == "" {
-		return ".ssh/config"
-	}
-	return filepath.Join(home, ".ssh", "config")
-}
-
-// currentUserName returns current username when available.
-func currentUserName() string {
-	u, err := user.Current()
-	if err != nil {
-		return ""
-	}
-	return u.Username
-}
-
-// parseProxyJump splits a comma-separated ProxyJump value into JumpHop structs.
-// Each hop can be: alias, user@host, host:port, user@host:port, or ssh://user@host:port.
-func parseProxyJump(raw string) []model.JumpHop {
-	return parseProxyJumpWithLogger(raw, "", nil)
-}
-
-func parseProxyJumpWithLogger(raw, alias string, logger *slog.Logger) []model.JumpHop {
-	parts := strings.Split(raw, ",")
-	hops := make([]model.JumpHop, 0, len(parts))
-	for _, part := range parts {
-		part = strings.TrimSpace(part)
-		if part == "" {
-			continue
-		}
-		hop := parseOneHop(part)
-		hops = append(hops, hop)
-	}
-
-	if logger != nil {
-		if alias != "" {
-			logger.Debug("parsed proxy jump chain",
-				"alias", alias,
-				"hop_count", len(hops),
-			)
-		} else {
-			logger.Debug("parsed proxy jump chain", "hop_count", len(hops))
-		}
-	}
-
-	return hops
-}
-
-// parseOneHop parses a single ProxyJump hop string into a JumpHop.
-func parseOneHop(hop string) model.JumpHop {
-	j := model.JumpHop{Raw: hop}
-
-	// Handle ssh:// URI scheme
-	if strings.HasPrefix(hop, "ssh://") {
-		u, err := url.Parse(hop)
-		if err == nil {
-			j.Host = u.Hostname()
-			j.User = u.User.Username()
-			if p := u.Port(); p != "" {
-				j.Port, _ = strconv.Atoi(p)
-			}
-			return j
-		}
-	}
-
-	// Handle user@host:port, user@host, host:port, or bare alias
-	userHost := hop
-	if at := strings.LastIndex(hop, "@"); at >= 0 {
-		j.User = hop[:at]
-		userHost = hop[at+1:]
-	}
-
-	host, portStr, err := net.SplitHostPort(userHost)
-	if err == nil {
-		j.Host = host
-		j.Port, _ = strconv.Atoi(portStr)
-	} else {
-		j.Host = userHost
-	}
-
-	return j
-}
-
-// BuildDependencyGraph populates DependsOn/Dependents/AliasRef fields across hosts.
-// It maps JumpChain hops to known aliases and builds the reverse index.
 func BuildDependencyGraph(hosts []model.Host) {
-	buildDependencyGraphWithLogger(hosts, nil)
+	remote.BuildDependencyGraph(hosts)
 }
 
-func buildDependencyGraphWithLogger(hosts []model.Host, logger *slog.Logger) {
-	startedAt := time.Now()
-	if logger != nil {
-		logger.Debug("building dependency graph", "host_count", len(hosts))
-	}
-
-	// Build alias lookup
-	aliasIndex := make(map[string]int, len(hosts))
-	addressIndex := make(map[string]int, len(hosts))
-	for i, h := range hosts {
-		aliasIndex[h.Name] = i
-		if h.Address != "" {
-			addressIndex[h.Address] = i
-		}
-	}
-
-	// Resolve hops and build edges
-	for i := range hosts {
-		if hosts[i].ProxyKind != model.ProxyKindJump || len(hosts[i].JumpChain) == 0 {
-			continue
-		}
-
-		seen := make(map[string]bool)
-		for hi := range hosts[i].JumpChain {
-			hop := &hosts[i].JumpChain[hi]
-			alias := resolveHopAlias(hop.Host, aliasIndex, addressIndex)
-			if alias == "" {
-				hop.External = true
-				continue
-			}
-			hop.AliasRef = alias
-			if !seen[alias] {
-				seen[alias] = true
-				hosts[i].DependsOn = append(hosts[i].DependsOn, alias)
-			}
-		}
-	}
-
-	edgeCount := 0
-	for i := range hosts {
-		edgeCount += len(hosts[i].DependsOn)
-	}
-	if logger != nil {
-		logger.Debug("dependency graph edges resolved", "edge_count", edgeCount)
-	}
-
-	// Build reverse index (Dependents)
-	for i := range hosts {
-		for _, dep := range hosts[i].DependsOn {
-			if idx, ok := aliasIndex[dep]; ok {
-				hosts[idx].Dependents = appendUnique(hosts[idx].Dependents, hosts[i].Name)
-			}
-		}
-	}
-
-	if logger != nil {
-		logger.Debug("dependency graph build complete",
-			"host_count", len(hosts),
-			"edge_count", edgeCount,
-			"duration", time.Since(startedAt),
-		)
+func discoveryOptionsFromConfig(cfg config.Config) remote.DiscoveryOptions {
+	return remote.DiscoveryOptions{
+		Include:       append([]string(nil), cfg.Hosts.Include...),
+		Ignore:        append([]string(nil), cfg.Hosts.Ignore...),
+		Overrides:     hostOverridesFromConfig(cfg.Hosts.Overrides),
+		SSHConfigPath: "",
 	}
 }
 
-// resolveHopAlias tries to match a hop host to a known SSH alias.
-func resolveHopAlias(hopHost string, aliasIndex, addressIndex map[string]int) string {
-	// Direct alias match
-	if _, ok := aliasIndex[hopHost]; ok {
-		return hopHost
+func hostOverridesFromConfig(overrides map[string]config.HostOverride) map[string]remote.HostOverride {
+	if len(overrides) == 0 {
+		return nil
 	}
-	// Address match (hostname resolved)
-	if idx, ok := addressIndex[hopHost]; ok {
-		for alias, i := range aliasIndex {
-			if i == idx {
-				return alias
-			}
+	converted := make(map[string]remote.HostOverride, len(overrides))
+	for alias, override := range overrides {
+		converted[alias] = remote.HostOverride{
+			Label:        override.Label,
+			Priority:     override.Priority,
+			OpencodePath: override.OpencodePath,
+			ScanPaths:    append([]string(nil), override.ScanPaths...),
 		}
 	}
-	return ""
-}
-
-// appendUnique appends s to slice only if not already present.
-func appendUnique(slice []string, s string) []string {
-	for _, v := range slice {
-		if v == s {
-			return slice
-		}
-	}
-	return append(slice, s)
-}
-
-func sanitizeLogError(err error) string {
-	if err == nil {
-		return ""
-	}
-
-	msg := strings.TrimSpace(err.Error())
-	msg = strings.NewReplacer("\r", " ", "\n", " ").Replace(msg)
-	msg = strings.Join(strings.Fields(msg), " ")
-
-	lower := strings.ToLower(msg)
-	if idx := strings.Index(lower, "stderr:"); idx >= 0 {
-		msg = strings.TrimSpace(msg[:idx]) + " stderr: [redacted]"
-	}
-	if idx := strings.Index(strings.ToLower(msg), "stdout:"); idx >= 0 {
-		msg = strings.TrimSpace(msg[:idx]) + " stdout: [redacted]"
-	}
-
-	runes := []rune(msg)
-	if len(runes) > maxSanitizedLogErrorRunes {
-		msg = strings.TrimSpace(string(runes[:maxSanitizedLogErrorRunes-1])) + "…"
-	}
-
-	return msg
+	return converted
 }
diff --git a/internal/tui/local/adapter.go b/internal/tui/local/adapter.go
new file mode 100644
index 0000000..190c0a2
--- /dev/null
+++ b/internal/tui/local/adapter.go
@@ -0,0 +1,179 @@
+package local
+
+import (
+	"errors"
+	"path/filepath"
+	"sort"
+	"strings"
+	"time"
+
+	"opencoderouter/internal/model"
+	"opencoderouter/internal/registry"
+)
+
+const (
+	LocalHostName    = "localhost"
+	LocalHostLabel   = "localhost (local)"
+	localHostAddress = "127.0.0.1"
+)
+
+var ErrNoLocalBackends = errors.New("no local backends discovered")
+
+type Adapter struct {
+	registry *registry.Registry
+	nowFn    func() time.Time
+
+	activeThreshold time.Duration
+	idleThreshold   time.Duration
+}
+
+func NewAdapter(reg *registry.Registry, activeThreshold, idleThreshold time.Duration) *Adapter {
+	if activeThreshold <= 0 {
+		activeThreshold = 10 * time.Minute
+	}
+	if idleThreshold <= 0 {
+		idleThreshold = 24 * time.Hour
+	}
+	if activeThreshold > idleThreshold {
+		activeThreshold = idleThreshold
+	}
+
+	return &Adapter{
+		registry:        reg,
+		nowFn:           time.Now,
+		activeThreshold: activeThreshold,
+		idleThreshold:   idleThreshold,
+	}
+}
+
+func (a *Adapter) GetLocalHost() (model.Host, error) {
+	if a == nil || a.registry == nil {
+		return model.Host{}, errors.New("local registry is not configured")
+	}
+
+	backends := a.registry.All()
+	if len(backends) == 0 {
+		return model.Host{}, ErrNoLocalBackends
+	}
+
+	now := a.now()
+	thresholds := model.ActivityThresholds{Active: a.activeThreshold, Idle: a.idleThreshold}
+
+	projects := make([]model.Project, 0, len(backends))
+	latestSeen := time.Time{}
+
+	for _, backend := range backends {
+		if backend == nil {
+			continue
+		}
+
+		if backend.LastSeen.After(latestSeen) {
+			latestSeen = backend.LastSeen
+		}
+
+		projectName := projectNameFromBackend(backend)
+		metadata := a.registry.ListSessions(backend.Slug)
+
+		sessions := make([]model.Session, 0, len(metadata))
+		for _, sessionMeta := range metadata {
+			sessionID := strings.TrimSpace(sessionMeta.ID)
+			if sessionID == "" {
+				continue
+			}
+
+			directory := strings.TrimSpace(sessionMeta.Directory)
+			if directory == "" {
+				directory = strings.TrimSpace(backend.ProjectPath)
+			}
+
+			lastActivity := sessionMeta.LastActivity
+			if lastActivity.IsZero() {
+				lastActivity = backend.LastSeen
+			}
+
+			title := strings.TrimSpace(sessionMeta.Title)
+			if title == "" {
+				title = sessionID
+			}
+
+			sessions = append(sessions, model.Session{
+				ID:           sessionID,
+				Project:      projectName,
+				Title:        title,
+				Directory:    directory,
+				LastActivity: lastActivity,
+				Status:       mapSessionStatus(sessionMeta.Status),
+				Activity:     model.ResolveActivityState(lastActivity, now, thresholds),
+			})
+		}
+
+		sort.SliceStable(sessions, func(i, j int) bool {
+			if sessions[i].LastActivity.Equal(sessions[j].LastActivity) {
+				return sessions[i].ID < sessions[j].ID
+			}
+			return sessions[i].LastActivity.After(sessions[j].LastActivity)
+		})
+
+		projects = append(projects, model.Project{Name: projectName, Sessions: sessions})
+	}
+
+	if len(projects) == 0 {
+		return model.Host{}, ErrNoLocalBackends
+	}
+
+	sort.Slice(projects, func(i, j int) bool {
+		return projects[i].Name < projects[j].Name
+	})
+
+	return model.Host{
+		Name:     LocalHostName,
+		Address:  localHostAddress,
+		Label:    LocalHostLabel,
+		Status:   model.HostStatusOnline,
+		LastSeen: latestSeen,
+		Projects: projects,
+	}, nil
+}
+
+func (a *Adapter) now() time.Time {
+	if a != nil && a.nowFn != nil {
+		return a.nowFn()
+	}
+	return time.Now()
+}
+
+func projectNameFromBackend(backend *registry.Backend) string {
+	if backend == nil {
+		return "(unknown)"
+	}
+
+	if name := strings.TrimSpace(backend.ProjectName); name != "" {
+		return name
+	}
+
+	if projectPath := strings.TrimSpace(backend.ProjectPath); projectPath != "" {
+		base := strings.TrimSpace(filepath.Base(projectPath))
+		if base != "" && base != "." && base != string(filepath.Separator) {
+			return base
+		}
+	}
+
+	if slug := strings.TrimSpace(backend.Slug); slug != "" {
+		return slug
+	}
+
+	return "(unknown)"
+}
+
+func mapSessionStatus(status string) model.SessionStatus {
+	switch strings.ToLower(strings.TrimSpace(status)) {
+	case "active", "running", "online", "ready":
+		return model.SessionStatusActive
+	case "idle", "paused":
+		return model.SessionStatusIdle
+	case "archived", "closed", "done", "stopped", "terminated", "offline":
+		return model.SessionStatusArchived
+	default:
+		return model.SessionStatusUnknown
+	}
+}
diff --git a/internal/tui/local/adapter_test.go b/internal/tui/local/adapter_test.go
new file mode 100644
index 0000000..3c75c92
--- /dev/null
+++ b/internal/tui/local/adapter_test.go
@@ -0,0 +1,114 @@
+package local
+
+import (
+	"errors"
+	"io"
+	"log/slog"
+	"testing"
+	"time"
+
+	"opencoderouter/internal/model"
+	"opencoderouter/internal/registry"
+)
+
+func TestAdapterGetLocalHostConvertsRegistryEntries(t *testing.T) {
+	t.Parallel()
+
+	now := time.Date(2026, 3, 14, 10, 30, 0, 0, time.UTC)
+	reg := registry.New(2*time.Minute, slog.New(slog.NewTextHandler(io.Discard, nil)))
+
+	reg.Upsert(30000, "alpha", "/work/alpha", "1.0.0")
+	reg.Upsert(30001, "beta", "/work/beta", "1.1.0")
+
+	reg.ReplaceSessions("alpha", []registry.SessionMetadata{
+		{
+			ID:           "sess-active",
+			Title:        "active session",
+			Directory:    "/work/alpha",
+			Status:       "running",
+			LastActivity: now.Add(-2 * time.Minute),
+		},
+		{
+			ID:           "sess-idle",
+			Title:        "idle session",
+			Directory:    "/work/alpha",
+			Status:       "idle",
+			LastActivity: now.Add(-2 * time.Hour),
+		},
+	})
+
+	adapter := NewAdapter(reg, 10*time.Minute, 24*time.Hour)
+	adapter.nowFn = func() time.Time { return now }
+
+	host, err := adapter.GetLocalHost()
+	if err != nil {
+		t.Fatalf("GetLocalHost() error = %v, want nil", err)
+	}
+
+	if host.Name != LocalHostName {
+		t.Fatalf("host.Name = %q, want %q", host.Name, LocalHostName)
+	}
+	if host.Status !=
model.HostStatusOnline { + t.Fatalf("host.Status = %q, want %q", host.Status, model.HostStatusOnline) + } + if len(host.Projects) != 2 { + t.Fatalf("project count = %d, want 2", len(host.Projects)) + } + + if host.Projects[0].Name != "alpha" || host.Projects[1].Name != "beta" { + t.Fatalf("project order = [%q, %q], want [alpha, beta]", host.Projects[0].Name, host.Projects[1].Name) + } + + alpha := host.Projects[0] + if len(alpha.Sessions) != 2 { + t.Fatalf("alpha session count = %d, want 2", len(alpha.Sessions)) + } + + active := findSessionByID(alpha.Sessions, "sess-active") + if active == nil { + t.Fatal("expected sess-active in alpha project") + } + if active.Status != model.SessionStatusActive { + t.Fatalf("sess-active status = %q, want %q", active.Status, model.SessionStatusActive) + } + if active.Activity != model.ActivityActive { + t.Fatalf("sess-active activity = %q, want %q", active.Activity, model.ActivityActive) + } + + idle := findSessionByID(alpha.Sessions, "sess-idle") + if idle == nil { + t.Fatal("expected sess-idle in alpha project") + } + if idle.Status != model.SessionStatusIdle { + t.Fatalf("sess-idle status = %q, want %q", idle.Status, model.SessionStatusIdle) + } + if idle.Activity != model.ActivityIdle { + t.Fatalf("sess-idle activity = %q, want %q", idle.Activity, model.ActivityIdle) + } + + beta := host.Projects[1] + if len(beta.Sessions) != 0 { + t.Fatalf("beta session count = %d, want 0", len(beta.Sessions)) + } +} + +func TestAdapterGetLocalHostNoBackends(t *testing.T) { + t.Parallel() + + reg := registry.New(2*time.Minute, slog.New(slog.NewTextHandler(io.Discard, nil))) + adapter := NewAdapter(reg, 10*time.Minute, 24*time.Hour) + + _, err := adapter.GetLocalHost() + if !errors.Is(err, ErrNoLocalBackends) { + t.Fatalf("GetLocalHost() error = %v, want ErrNoLocalBackends", err) + } +} + +func findSessionByID(sessions []model.Session, id string) *model.Session { + for i := range sessions { + if sessions[i].ID == id { + return &sessions[i] + 
} + } + return nil +} diff --git a/internal/tui/model/messages.go b/internal/tui/model/messages.go index faa2fe8..c7c9785 100644 --- a/internal/tui/model/messages.go +++ b/internal/tui/model/messages.go @@ -2,15 +2,17 @@ package model import "time" +import sharedmodel "opencoderouter/internal/model" + // DiscoveryResultMsg is emitted when host discovery completes. type DiscoveryResultMsg struct { - Hosts []Host + Hosts []sharedmodel.Host Err error } // ProbeResultMsg is emitted after probing all hosts. type ProbeResultMsg struct { - Hosts []Host + Hosts []sharedmodel.Host Err error RefreshedAt time.Time } @@ -33,7 +35,7 @@ type SearchChangedMsg struct { // TransportPreflightMsg is emitted after transport preflight probing completes. type TransportPreflightMsg struct { - Hosts []Host + Hosts []sharedmodel.Host Err error } diff --git a/internal/tui/probe/cache.go b/internal/tui/probe/cache.go index 077185f..7df983f 100644 --- a/internal/tui/probe/cache.go +++ b/internal/tui/probe/cache.go @@ -1,69 +1,13 @@ package probe import ( - "sync" "time" - "opencoderouter/internal/tui/model" + "opencoderouter/internal/remote" ) -type cacheEntry struct { - host model.Host - expiresAt time.Time -} - -// CacheStore is a simple in-memory TTL cache for probe responses. -type CacheStore struct { - mu sync.RWMutex - ttl time.Duration - nowFunc func() time.Time - entries map[string]cacheEntry -} +type CacheStore = remote.CacheStore -// NewCacheStore creates a new in-memory cache with the provided TTL. func NewCacheStore(ttl time.Duration) *CacheStore { - return &CacheStore{ - ttl: ttl, - nowFunc: time.Now, - entries: make(map[string]cacheEntry), - } -} - -// Get retrieves a host from cache if the entry is still valid. 
-func (c *CacheStore) Get(key string) (model.Host, bool) { - c.mu.RLock() - entry, ok := c.entries[key] - c.mu.RUnlock() - if !ok { - return model.Host{}, false - } - if c.nowFunc().After(entry.expiresAt) { - c.mu.Lock() - delete(c.entries, key) - c.mu.Unlock() - return model.Host{}, false - } - return entry.host, true -} - -// Set stores a host in cache and refreshes expiry. -func (c *CacheStore) Set(key string, host model.Host) { - c.mu.Lock() - defer c.mu.Unlock() - c.entries[key] = cacheEntry{host: host, expiresAt: c.nowFunc().Add(c.ttl)} -} - -// PurgeExpired removes expired entries and returns deleted count. -func (c *CacheStore) PurgeExpired() int { - now := c.nowFunc() - removed := 0 - c.mu.Lock() - defer c.mu.Unlock() - for key, entry := range c.entries { - if now.After(entry.expiresAt) { - delete(c.entries, key) - removed++ - } - } - return removed + return remote.NewCacheStore(ttl) } diff --git a/internal/tui/probe/probe.go b/internal/tui/probe/probe.go index 97d9d83..409a8e1 100644 --- a/internal/tui/probe/probe.go +++ b/internal/tui/probe/probe.go @@ -1,60 +1,29 @@ package probe import ( - "bytes" "context" - "encoding/json" - "errors" - "fmt" "io" "log/slog" - "os/exec" - "path/filepath" - "sort" - "strconv" - "strings" "time" + "opencoderouter/internal/model" + "opencoderouter/internal/remote" "opencoderouter/internal/tui/config" - "opencoderouter/internal/tui/model" ) -// Runner executes external commands and returns stdout bytes. -type Runner interface { - Run(ctx context.Context, name string, args ...string) ([]byte, error) -} - -// ExecRunner is a Runner backed by os/exec. -type ExecRunner struct{} +type Runner = remote.Runner -// Run executes a command and preserves stderr details in failures. -func (ExecRunner) Run(ctx context.Context, name string, args ...string) ([]byte, error) { - cmd := exec.CommandContext(ctx, name, args...) 
- out, err := cmd.Output() - if err == nil { - return out, nil - } - - var exitErr *exec.ExitError - if errors.As(err, &exitErr) { - stderr := strings.TrimSpace(string(exitErr.Stderr)) - if stderr != "" { - return nil, fmt.Errorf("run %s %v: %w: %s", name, args, err, stderr) - } - } - return nil, fmt.Errorf("run %s %v: %w", name, args, err) -} +type ExecRunner = remote.ExecRunner -// ProbeService executes per-host SSH probes and converts output to domain models. type ProbeService struct { cfg config.Config runner Runner cache *CacheStore nowFn func() time.Time logger *slog.Logger + inner *remote.ProbeService } -// NewProbeService creates a probe service with worker-pool execution. func NewProbeService(cfg config.Config, runner Runner, cache *CacheStore, logger *slog.Logger) *ProbeService { if runner == nil { runner = ExecRunner{} @@ -62,709 +31,76 @@ func NewProbeService(cfg config.Config, runner Runner, cache *CacheStore, logger if logger == nil { logger = slog.New(slog.NewTextHandler(io.Discard, nil)) } + inner := remote.NewProbeService(probeOptionsFromConfig(cfg), runner, cache, logger) + return &ProbeService{ cfg: cfg, runner: runner, cache: cache, nowFn: time.Now, logger: logger, + inner: inner, } } -type probeJob struct { - index int - host model.Host -} - -type probeResult struct { - index int - host model.Host - err error -} - -// ProbeHosts runs transport preflight for jump providers, then probes all hosts. -// Hosts with unresolved jump dependencies are marked TransportBlocked. 
func (s *ProbeService) ProbeHosts(ctx context.Context, hosts []model.Host) ([]model.Host, error) { - startedAt := time.Now() - workerCount := s.cfg.Polling.MaxParallel - if workerCount < 1 { - workerCount = 1 - } - - s.logger.Debug("probe hosts started", - "host_count", len(hosts), - "worker_count", workerCount, - ) - - if len(hosts) == 0 { - s.logger.Debug("probe hosts completed", - "host_count", 0, - "result_count", 0, - "error_count", 0, - "duration_ms", time.Since(startedAt).Milliseconds(), - ) - return nil, nil - } - - if s.cache != nil { - s.cache.PurgeExpired() - } - - // Phase 1: Transport preflight for jump providers - jumpProviders := jumpProviderSet(hosts) - if len(jumpProviders) > 0 { - s.transportPreflight(ctx, hosts, jumpProviders) - propagateBlocked(s.logger, hosts) - } - - // Phase 2: Session probe (skip blocked hosts) - updated := make([]model.Host, len(hosts)) - jobs := make(chan probeJob) - results := make(chan probeResult) - - for i := 0; i < workerCount; i++ { - go func() { - for job := range jobs { - h, err := s.probeHost(ctx, job.host) - results <- probeResult{index: job.index, host: h, err: err} - } - }() - } - - pending := 0 - for i, host := range hosts { - if host.Transport == model.TransportBlocked { - updated[i] = host - s.logger.Debug("probe host skipped blocked", - "host", host.Name, - "blocked_by", host.BlockedBy, - ) - continue - } - if s.cache != nil { - if cached, ok := s.cache.Get(host.Name); ok { - updated[i] = cached - s.logger.Debug("probe cache hit", "host", host.Name) - continue - } - s.logger.Debug("probe cache miss", "host", host.Name) - } - pending++ - jobs <- probeJob{index: i, host: host} - } - close(jobs) - - var probeErrs []error - for i := 0; i < pending; i++ { - select { - case <-ctx.Done(): - err := fmt.Errorf("probe canceled: %w", ctx.Err()) - probeErrs = append(probeErrs, err) - s.logger.Debug("probe host canceled", - "err_kind", errorKind(err), - "error", sanitizeErrorContext(err), - ) - case res := <-results: - 
updated[res.index] = res.host - if res.err != nil { - probeErrs = append(probeErrs, res.err) - } - if s.cache != nil { - s.cache.Set(res.host.Name, res.host) - } - } - } - - s.logger.Debug("probe hosts completed", - "host_count", len(hosts), - "result_count", len(updated), - "error_count", len(probeErrs), - "duration_ms", time.Since(startedAt).Milliseconds(), - ) - - if len(probeErrs) > 0 { - return updated, errors.Join(probeErrs...) - } - return updated, nil -} - -func (s *ProbeService) scanPathsForHost(host model.Host) []string { - if override, ok := s.cfg.Hosts.Overrides[host.Name]; ok && len(override.ScanPaths) > 0 { - return override.ScanPaths - } - if len(s.cfg.Sessions.ScanPaths) > 0 { - return s.cfg.Sessions.ScanPaths - } - return []string{"~"} -} - -func (s *ProbeService) buildRemoteCmd(host model.Host) string { - paths := s.scanPathsForHost(host) - pathList := strings.Join(paths, " ") - - bin := host.OpencodeBin - if bin == "" { - bin = "opencode" - } - - remoteCmd := fmt.Sprintf( - `OC=$(command -v %s 2>/dev/null || echo "$HOME/.opencode/bin/%s"); `+ - `if [ -x "$OC" ]; then `+ - `find %s -maxdepth 2 -name .opencode -type d 2>/dev/null | while IFS= read -r d; do `+ - `(cd "$(dirname "$d")" && "$OC" session list --format json 2>/dev/null); `+ - `done; fi`, - bin, bin, pathList, - ) - - s.logger.Debug("probe remote command built", - "host", host.Name, - "cmd", sanitizeCommandForLog(remoteCmd, pathList), - ) - - return remoteCmd -} - -func (s *ProbeService) probeHost(ctx context.Context, host model.Host) (model.Host, error) { - startedAt := time.Now() - s.logger.Debug("probe host started", "host", host.Name) - - remoteCmd := s.buildRemoteCmd(host) - args := s.buildSSHArgs(host, remoteCmd) - s.logger.Debug("probe ssh args built", - "host", host.Name, - "arg_count", len(args), - ) - - out, err := s.runner.Run(ctx, "ssh", args...) 
- if err != nil { - if isAuthError(host.Name, err, s.logger) { - host.Status = model.HostStatusAuthRequired - host.LastError = "password authentication required" - s.logger.Error("probe host failed", - "host", host.Name, - "status", host.Status, - "err_kind", "auth", - "error", sanitizeErrorContext(err), - "duration_ms", time.Since(startedAt).Milliseconds(), - ) - return host, fmt.Errorf("probe host %q: auth required", host.Name) - } - host.Status = model.HostStatusOffline - host.LastError = err.Error() - s.logger.Error("probe host failed", - "host", host.Name, - "status", host.Status, - "err_kind", errorKind(err), - "error", sanitizeErrorContext(err), - "duration_ms", time.Since(startedAt).Milliseconds(), - ) - return host, fmt.Errorf("probe host %q: %w", host.Name, err) - } - - sessions, parseErr := s.parseSessions(out, host.Name) - if parseErr != nil { - host.Status = model.HostStatusError - host.LastError = parseErr.Error() - s.logger.Error("probe host failed", - "host", host.Name, - "status", host.Status, - "err_kind", errorKind(parseErr), - "error", sanitizeErrorContext(parseErr), - "duration_ms", time.Since(startedAt).Milliseconds(), - ) - return host, fmt.Errorf("parse sessions for %q: %w", host.Name, parseErr) - } - - if s.cfg.Sessions.MaxDisplay > 0 && len(sessions) > s.cfg.Sessions.MaxDisplay { - sessions = sessions[:s.cfg.Sessions.MaxDisplay] - } - - host.Projects = groupSessionsByProject(sessions) - host.Status = model.HostStatusOnline - host.LastSeen = s.nowFn() - host.LastError = "" - s.logger.Debug("probe host completed", - "host", host.Name, - "status", host.Status, - "sessions", len(sessions), - "duration_ms", time.Since(startedAt).Milliseconds(), - ) - - return host, nil + s.ensureInner() + s.inner.SetNowFunc(s.nowFn) + return s.inner.ProbeHosts(ctx, hosts) } -// buildSSHArgs returns ssh options and target command for a probe call. 
-func (s *ProbeService) buildSSHArgs(host model.Host, remoteCmd string) []string { - args := make([]string, 0, 12) - if s.cfg.SSH.BatchMode { - args = append(args, "-o", "BatchMode=yes") - } - if s.cfg.SSH.ConnectTimeout > 0 { - args = append(args, "-o", "ConnectTimeout="+strconv.Itoa(s.cfg.SSH.ConnectTimeout)) - } - if s.cfg.SSH.ControlMaster != "" { - args = append(args, "-o", "ControlMaster="+s.cfg.SSH.ControlMaster) - } - if s.cfg.SSH.ControlPersist > 0 { - args = append(args, "-o", "ControlPersist="+strconv.Itoa(s.cfg.SSH.ControlPersist)) - } - if s.cfg.SSH.ControlPath != "" { - args = append(args, "-o", "ControlPath="+s.cfg.SSH.ControlPath) - } - args = append(args, host.Name, remoteCmd) - return args -} - -type remoteSession struct { - ID string `json:"id"` - Project string `json:"project"` - Title string `json:"title"` - LastActivity string `json:"last_activity"` - Status string `json:"status"` - MessageCount int `json:"message_count"` - Agents []string `json:"agents"` - // opencode native fields - Updated json.Number `json:"updated"` - Created json.Number `json:"created"` - Directory string `json:"directory"` - ProjectID string `json:"projectId"` -} - -type remoteEnvelope struct { - Sessions []remoteSession `json:"sessions"` -} - -func (s *ProbeService) parseSessions(raw []byte, host string) ([]model.Session, error) { - trimmed := bytes.TrimSpace(raw) - if len(trimmed) == 0 { - s.logger.Debug("parse sessions decoded", - "host", host, - "records", 0, - "sessions", 0, - "raw_bytes", 0, - ) - return nil, nil - } - - var list []remoteSession - - dec := json.NewDecoder(bytes.NewReader(trimmed)) - for dec.More() { - var batch []remoteSession - if err := dec.Decode(&batch); err != nil { - var env remoteEnvelope - if json.Unmarshal(trimmed, &env) == nil { - list = env.Sessions - break - } - s.logger.Error("parse sessions failed", - "host", host, - "err_kind", "parse", - "error", "invalid session payload", - "raw_bytes", len(trimmed), - ) - return nil, err - } - 
list = append(list, batch...) - } - - now := s.nowFn() - thresholds := model.ActivityThresholds{ - Active: s.cfg.Display.ActiveThreshold, - Idle: s.cfg.Display.IdleThreshold, - } - - sessions := make([]model.Session, 0, len(list)) - for _, rs := range list { - status := mapSessionStatus(rs.Status) - if status == model.SessionStatusArchived && !s.cfg.Sessions.ShowArchived { - continue - } - lastActivity := resolveTimestamp(rs) - project := resolveProject(rs) - sessions = append(sessions, model.Session{ - ID: rs.ID, - Project: project, - Title: rs.Title, - Directory: rs.Directory, - LastActivity: lastActivity, - Status: status, - MessageCount: rs.MessageCount, - Agents: append([]string(nil), rs.Agents...), - Activity: model.ResolveActivityState(lastActivity, now, thresholds), - }) - } - - if s.cfg.Sessions.SortBy == "last_activity" { - sort.SliceStable(sessions, func(i, j int) bool { - return sessions[i].LastActivity.After(sessions[j].LastActivity) - }) - } - - s.logger.Debug("parse sessions decoded", - "host", host, - "records", len(list), - "sessions", len(sessions), - "raw_bytes", len(trimmed), - ) - return sessions, nil -} - -func resolveTimestamp(rs remoteSession) time.Time { - if rs.LastActivity != "" { - return parseTimestamp(rs.LastActivity) - } - if rs.Updated.String() != "" { - if ms, err := rs.Updated.Int64(); err == nil && ms > 0 { - return time.UnixMilli(ms) - } - } - if rs.Created.String() != "" { - if ms, err := rs.Created.Int64(); err == nil && ms > 0 { - return time.UnixMilli(ms) - } - } - return time.Time{} -} - -func resolveProject(rs remoteSession) string { - if rs.Project != "" { - return rs.Project - } - if rs.Directory != "" { - return filepath.Base(rs.Directory) - } - return "" -} - -// groupSessionsByProject folds sessions into Project buckets. 
-func groupSessionsByProject(sessions []model.Session) []model.Project { - byName := make(map[string][]model.Session) - for _, session := range sessions { - projectName := session.Project - if strings.TrimSpace(projectName) == "" { - projectName = "(unknown)" - } - byName[projectName] = append(byName[projectName], session) - } - - projects := make([]model.Project, 0, len(byName)) - for name, grouped := range byName { - projects = append(projects, model.Project{Name: name, Sessions: grouped}) - } - sort.Slice(projects, func(i, j int) bool { - return projects[i].Name < projects[j].Name - }) - return projects -} - -// mapSessionStatus converts wire status strings into typed status. -func mapSessionStatus(status string) model.SessionStatus { - switch strings.ToLower(strings.TrimSpace(status)) { - case "active", "running": - return model.SessionStatusActive - case "idle": - return model.SessionStatusIdle - case "archived", "closed", "done": - return model.SessionStatusArchived - default: - return model.SessionStatusUnknown - } -} - -// parseTimestamp parses RFC3339 timestamps and falls back to zero time. -func parseTimestamp(value string) time.Time { - if strings.TrimSpace(value) == "" { - return time.Time{} - } - t, err := time.Parse(time.RFC3339, value) - if err != nil { - return time.Time{} - } - return t -} - -// isAuthError checks whether an SSH error indicates authentication failure -// (as opposed to network unreachability). BatchMode=yes causes ssh to exit with -// specific error messages when password auth is the only option. 
-func isAuthError(host string, err error, logger *slog.Logger) bool { - if err == nil { - return false - } - msg := strings.ToLower(err.Error()) - authIndicators := []string{ - "permission denied", - "no more authentication methods", - "publickey,password", - "keyboard-interactive", - "too many authentication failures", - "authentication failed", - } - for _, indicator := range authIndicators { - if strings.Contains(msg, indicator) { - if logger != nil { - logger.Error("probe auth indicator detected", - "host", host, - "err_kind", "auth", - "error", "authentication failed", - ) - } - return true - } - } - return false -} - -// AuthBootstrapCmd returns the SSH command a user should run to establish a -// ControlMaster connection for a password-protected host. The resulting socket -// is reused by subsequent BatchMode=yes probes. func (s *ProbeService) AuthBootstrapCmd(host model.Host) string { - controlPath := s.cfg.SSH.ControlPath - if controlPath == "" { - controlPath = "~/.ssh/ocr-%C" - } - persist := s.cfg.SSH.ControlPersist - if persist <= 0 { - persist = 600 - } - timeout := s.cfg.SSH.ConnectTimeout - if timeout <= 0 { - timeout = 10 - } - - cmd := fmt.Sprintf( - "ssh -o ControlMaster=yes -o ControlPath=%s -o ControlPersist=%d -o ConnectTimeout=%d -Nf %s", - controlPath, - persist, - timeout, - host.Name, - ) - return cmd + s.ensureInner() + return s.inner.AuthBootstrapCmd(host) } -// jumpProviderSet returns the set of alias names that serve as jump hosts. -func jumpProviderSet(hosts []model.Host) map[string]bool { - providers := make(map[string]bool) - for _, h := range hosts { - for _, dep := range h.DependsOn { - providers[dep] = true - } - } - return providers -} - -// transportPreflight probes jump providers with a lightweight `ssh true` -// to check reachability before running full session probes. 
-func (s *ProbeService) transportPreflight(ctx context.Context, hosts []model.Host, providers map[string]bool) { - startedAt := time.Now() - s.logger.Debug("transport preflight started", "provider_count", len(providers)) - - type preflightResult struct { - idx int - status model.TransportStatus - err error - dur time.Duration - } - - results := make(chan preflightResult) - count := 0 - for i, h := range hosts { - if !providers[h.Name] { - continue - } - count++ - go func(idx int, host model.Host) { - hostStarted := time.Now() - s.logger.Debug("transport preflight host started", "host", host.Name) - args := s.buildSSHArgs(host, "true") - _, err := s.runner.Run(ctx, "ssh", args...) - if err == nil { - s.logger.Debug("transport preflight host result", - "host", host.Name, - "status", model.TransportReady, - "duration_ms", time.Since(hostStarted).Milliseconds(), - ) - results <- preflightResult{idx: idx, status: model.TransportReady, dur: time.Since(hostStarted)} - return - } - if isAuthError(host.Name, err, s.logger) { - s.logger.Debug("transport preflight host result", - "host", host.Name, - "status", model.TransportAuthRequired, - "err_kind", "auth", - "duration_ms", time.Since(hostStarted).Milliseconds(), - ) - results <- preflightResult{idx: idx, status: model.TransportAuthRequired, err: err, dur: time.Since(hostStarted)} - return - } - s.logger.Debug("transport preflight host result", - "host", host.Name, - "status", model.TransportUnreachable, - "err_kind", errorKind(err), - "duration_ms", time.Since(hostStarted).Milliseconds(), - ) - results <- preflightResult{idx: idx, status: model.TransportUnreachable, err: err, dur: time.Since(hostStarted)} - }(i, h) - } - - readyCount := 0 - failureCount := 0 - for j := 0; j < count; j++ { - res := <-results - hosts[res.idx].Transport = res.status - if res.err != nil { - hosts[res.idx].TransportError = res.err.Error() - failureCount++ - } else { - readyCount++ - } - } - s.logger.Debug("transport preflight completed", - 
"provider_count", count, - "ready_count", readyCount, - "failure_count", failureCount, - "duration_ms", time.Since(startedAt).Milliseconds(), - ) -} - -// propagateBlocked marks hosts whose jump dependencies are not ready as TransportBlocked. -func propagateBlocked(logger *slog.Logger, hosts []model.Host) { - if logger == nil { - logger = slog.New(slog.NewTextHandler(io.Discard, nil)) - } - startedAt := time.Now() - blockedCount := 0 - - aliasIndex := make(map[string]int, len(hosts)) - for i, h := range hosts { - aliasIndex[h.Name] = i - } - - for i := range hosts { - if len(hosts[i].DependsOn) == 0 { - continue - } - var blockers []string - for _, dep := range hosts[i].DependsOn { - if idx, ok := aliasIndex[dep]; ok { - if hosts[idx].Transport != model.TransportReady && hosts[idx].Transport != model.TransportUnknown { - blockers = append(blockers, dep) - } - } - } - if len(blockers) > 0 { - hosts[i].Transport = model.TransportBlocked - hosts[i].BlockedBy = blockers - hosts[i].TransportError = fmt.Sprintf("blocked by: %s", strings.Join(blockers, ", ")) - blockedCount++ - logger.Debug("host transport blocked by dependency", - "host", hosts[i].Name, - "blocked_by", blockers, - ) - } - } - logger.Debug("dependency block propagation completed", - "host_count", len(hosts), - "blocked_count", blockedCount, - "duration_ms", time.Since(startedAt).Milliseconds(), - ) +func (s *ProbeService) MultiHopBootstrapCmds(host model.Host, allHosts []model.Host) []string { + s.ensureInner() + return s.inner.MultiHopBootstrapCmds(host, allHosts) } -func sanitizeCommandForLog(cmd, pathList string) string { - sanitized := cmd - if strings.TrimSpace(pathList) != "" { - sanitized = strings.ReplaceAll(sanitized, pathList, "") +func (s *ProbeService) ensureInner() { + if s.inner != nil { + return } - sanitized = strings.Join(strings.Fields(sanitized), " ") - if len(sanitized) > 240 { - return sanitized[:240] + "..." 
+ s.inner = remote.NewProbeService(probeOptionsFromConfig(s.cfg), s.runner, s.cache, s.logger) + if s.nowFn != nil { + s.inner.SetNowFunc(s.nowFn) } - return sanitized } -func errorKind(err error) string { - if err == nil { - return "" - } - if errors.Is(err, context.Canceled) { - return "canceled" - } - if errors.Is(err, context.DeadlineExceeded) { - return "timeout" - } - - msg := strings.ToLower(err.Error()) - switch { - case strings.Contains(msg, "permission denied"), - strings.Contains(msg, "no more authentication methods"), - strings.Contains(msg, "publickey,password"), - strings.Contains(msg, "keyboard-interactive"), - strings.Contains(msg, "too many authentication failures"), - strings.Contains(msg, "authentication failed"): - return "auth" - case strings.Contains(msg, "could not resolve hostname"): - return "dns" - case strings.Contains(msg, "connection refused"): - return "connection_refused" - case strings.Contains(msg, "no route to host"): - return "no_route" - case strings.Contains(msg, "timed out"): - return "timeout" - case strings.Contains(msg, "invalid character"), strings.Contains(msg, "cannot unmarshal"): - return "parse" - default: - return "probe" +func probeOptionsFromConfig(cfg config.Config) remote.ProbeOptions { + return remote.ProbeOptions{ + MaxParallel: cfg.Polling.MaxParallel, + SessionScanPaths: append([]string(nil), cfg.Sessions.ScanPaths...), + Overrides: hostOverridesFromConfig(cfg.Hosts.Overrides), + SSH: remote.SSHOptions{ + ControlMaster: cfg.SSH.ControlMaster, + ControlPersist: cfg.SSH.ControlPersist, + ControlPath: cfg.SSH.ControlPath, + BatchMode: cfg.SSH.BatchMode, + ConnectTimeout: cfg.SSH.ConnectTimeout, + }, + SortBy: cfg.Sessions.SortBy, + ShowArchived: cfg.Sessions.ShowArchived, + MaxDisplay: cfg.Sessions.MaxDisplay, + ActiveThreshold: cfg.Display.ActiveThreshold, + IdleThreshold: cfg.Display.IdleThreshold, } } -func sanitizeErrorContext(err error) string { - switch errorKind(err) { - case "auth": - return 
"authentication failed" - case "dns": - return "hostname resolution failed" - case "connection_refused": - return "connection refused" - case "no_route": - return "no route to host" - case "timeout": - return "connection timeout" - case "canceled": - return "operation canceled" - case "parse": - return "invalid session payload" - default: - return "probe command failed" +func hostOverridesFromConfig(overrides map[string]config.HostOverride) map[string]remote.HostOverride { + if len(overrides) == 0 { + return nil } -} - -// MultiHopBootstrapCmds returns ordered ControlMaster bootstrap commands for a -// host and all its unresolved jump dependencies. -func (s *ProbeService) MultiHopBootstrapCmds(host model.Host, allHosts []model.Host) []string { - aliasIndex := make(map[string]int, len(allHosts)) - for i, h := range allHosts { - aliasIndex[h.Name] = i - } - - var cmds []string - - // First, generate commands for each hop that needs auth (in order) - for _, hop := range host.JumpChain { - if hop.External || hop.AliasRef == "" { - continue + converted := make(map[string]remote.HostOverride, len(overrides)) + for alias, override := range overrides { + converted[alias] = remote.HostOverride{ + Label: override.Label, + Priority: override.Priority, + OpencodePath: override.OpencodePath, + ScanPaths: append([]string(nil), override.ScanPaths...), } - if idx, ok := aliasIndex[hop.AliasRef]; ok { - jumpHost := allHosts[idx] - if jumpHost.Transport == model.TransportAuthRequired || jumpHost.Status == model.HostStatusAuthRequired { - cmds = append(cmds, s.AuthBootstrapCmd(jumpHost)) - } - } - } - - // Then the target host itself if it needs auth - if host.Status == model.HostStatusAuthRequired || host.Transport == model.TransportAuthRequired { - cmds = append(cmds, s.AuthBootstrapCmd(host)) } - - return cmds + return converted } diff --git a/internal/tui/probe/probe_test.go b/internal/tui/probe/probe_test.go index d352353..afb3fd9 100644 --- a/internal/tui/probe/probe_test.go 
+++ b/internal/tui/probe/probe_test.go @@ -7,8 +7,8 @@ import ( "testing" "time" + "opencoderouter/internal/model" "opencoderouter/internal/tui/config" - "opencoderouter/internal/tui/model" ) type probeRunnerMock struct { diff --git a/internal/tui/session/manager.go b/internal/tui/session/manager.go index 5f2dc0f..e6afdfd 100644 --- a/internal/tui/session/manager.go +++ b/internal/tui/session/manager.go @@ -6,8 +6,8 @@ import ( "sort" "sync" + "opencoderouter/internal/model" "opencoderouter/internal/tui/components" - "opencoderouter/internal/tui/model" tea "charm.land/bubbletea/v2" ) diff --git a/internal/tui/session/manager_test.go b/internal/tui/session/manager_test.go index b88b7ff..b2e3efa 100644 --- a/internal/tui/session/manager_test.go +++ b/internal/tui/session/manager_test.go @@ -6,7 +6,7 @@ import ( "sync" "testing" - "opencoderouter/internal/tui/model" + "opencoderouter/internal/model" tea "charm.land/bubbletea/v2" ) diff --git a/main.go b/main.go index e2ea434..0c390cf 100644 --- a/main.go +++ b/main.go @@ -2,23 +2,32 @@ package main import ( "context" + "errors" "flag" "fmt" "io" "log/slog" "net/http" "os" + "os/exec" "os/signal" "path/filepath" + "strconv" + "strings" "syscall" "time" + "opencoderouter/internal/api" + "opencoderouter/internal/auth" + "opencoderouter/internal/cache" "opencoderouter/internal/config" "opencoderouter/internal/discovery" "opencoderouter/internal/launcher" "opencoderouter/internal/proxy" "opencoderouter/internal/registry" "opencoderouter/internal/scanner" + "opencoderouter/internal/session" + "opencoderouter/internal/terminal" ) func main() { @@ -34,6 +43,7 @@ func main() { flag.DurationVar(&cfg.ProbeTimeout, "probe-timeout", cfg.ProbeTimeout, "Timeout for each port probe") flag.DurationVar(&cfg.StaleAfter, "stale-after", cfg.StaleAfter, "Remove backends unseen for this duration") flag.BoolVar(&cfg.EnableMDNS, "mdns", cfg.EnableMDNS, "Enable mDNS service advertisement") + cleanupOrphans := flag.Bool("cleanup-orphans", 
false, "Cleanup likely orphan opencode serve processes in scan range on startup") hostname := flag.String("hostname", "0.0.0.0", "Hostname/IP to bind the router to") flag.Parse() @@ -91,6 +101,9 @@ func main() { "mdns", cfg.EnableMDNS, ) + orphanCleanupEnabled := *cleanupOrphans || envEnabled("OCR_CLEANUP_ORPHANS") + handleStartupOrphanOffer(cfg.ScanPortStart, cfg.ScanPortEnd, orphanCleanupEnabled, logger.With("component", "startup-cleanup")) + // Launch opencode serve instances for any project paths given as args. var lnch *launcher.Launcher if len(projectPaths) > 0 { @@ -112,7 +125,45 @@ func main() { cfg.ProbeTimeout, logger.With("component", "scanner"), ) - rt := proxy.New(reg, cfg, logger.With("component", "proxy")) + uiHandler := http.FileServer(getWebFS()) + rt := proxy.New(reg, cfg, logger.With("component", "proxy"), uiHandler) + + eventBus := session.NewEventBus(100) + scrollbackCache, err := cache.NewJSONLCache(cache.CacheConfig{}) + if err != nil { + logger.Error("failed to initialize scrollback cache", "error", err) + os.Exit(1) + } + defer func() { + if closeErr := scrollbackCache.Close(); closeErr != nil { + logger.Warn("failed to close scrollback cache", "error", closeErr) + } + }() + + sessionMgr := session.NewManager(session.ManagerConfig{ + Registry: reg, + EventBus: eventBus, + Logger: logger.With("component", "session"), + PortStart: cfg.ScanPortStart + 100, // separate range + PortEnd: cfg.ScanPortEnd + 100, + HealthCheckInterval: 10 * time.Second, + HealthCheckTimeout: 2 * time.Second, + StopTimeout: 5 * time.Second, + EventBuffer: 100, + TerminalDialer: terminal.NewSessionDialer(terminal.SessionDialerConfig{ + Logger: logger.With("component", "terminal-dialer"), + }), + }) + + apiRouter := api.NewRouter(api.RouterConfig{ + SessionManager: sessionMgr, + SessionEventBus: eventBus, + AuthConfig: auth.LoadFromEnv(), + ScrollbackCache: scrollbackCache, + RemoteLogger: logger.With("component", "remote-api"), + RemoteCacheTTL: 60 * time.Second, + 
Fallback: rt, + }) var adv *discovery.Advertiser if cfg.EnableMDNS { @@ -154,7 +205,7 @@ func main() { // HTTP server. srv := &http.Server{ Addr: cfg.ListenAddr, - Handler: rt, + Handler: apiRouter, ReadTimeout: 30 * time.Second, WriteTimeout: 120 * time.Second, // long for SSE streaming IdleTimeout: 120 * time.Second, @@ -212,3 +263,171 @@ func main() { logger.Info("OpenCode Router stopped") } + +type orphanProcess struct { + Port int + PID int + Command string +} + +func handleStartupOrphanOffer(scanStart, scanEnd int, cleanup bool, logger *slog.Logger) { + orphans, err := detectLikelyOrphanOpenCodeServes(scanStart, scanEnd) + if err != nil { + if logger != nil { + logger.Debug("orphan detection unavailable", "error", err) + } + return + } + if len(orphans) == 0 { + return + } + + if logger != nil { + logger.Warn( + "detected likely orphan opencode serve processes in configured scan range", + "count", len(orphans), + "scan_range", fmt.Sprintf("%d-%d", scanStart, scanEnd), + "cleanup_hint", "rerun with --cleanup-orphans or OCR_CLEANUP_ORPHANS=1", + ) + for _, orphan := range orphans { + logger.Warn("orphan candidate", "port", orphan.Port, "pid", orphan.PID, "command", orphan.Command) + } + } + + if !cleanup { + return + } + + if logger != nil { + logger.Warn("startup orphan cleanup enabled; sending SIGTERM", "count", len(orphans)) + } + cleanupLikelyOrphans(orphans, logger) +} + +func detectLikelyOrphanOpenCodeServes(scanStart, scanEnd int) ([]orphanProcess, error) { + if scanEnd < scanStart { + return nil, nil + } + + cmd := exec.Command("lsof", "-nP", fmt.Sprintf("-iTCP:%d-%d", scanStart, scanEnd), "-sTCP:LISTEN", "-Fpcn") + output, err := cmd.Output() + if err != nil { + return nil, err + } + return parseLikelyOrphansFromLsofOutput(string(output), scanStart, scanEnd), nil +} + +func parseLikelyOrphansFromLsofOutput(raw string, scanStart, scanEnd int) []orphanProcess { + if scanEnd < scanStart { + return nil + } + + lines := strings.Split(raw, "\n") + var ( + 
currentPID int + currentCmd string + ) + + seen := make(map[string]struct{}) + orphans := make([]orphanProcess, 0) + + for _, lineRaw := range lines { + line := strings.TrimSpace(lineRaw) + if line == "" { + continue + } + + tag := line[0] + value := strings.TrimSpace(line[1:]) + + switch tag { + case 'p': + pid, convErr := strconv.Atoi(value) + if convErr != nil { + currentPID = 0 + continue + } + currentPID = pid + case 'c': + currentCmd = value + case 'n': + if currentPID == 0 { + continue + } + if !strings.Contains(strings.ToLower(currentCmd), "opencode") { + continue + } + port, ok := extractListenPort(value) + if !ok || port < scanStart || port > scanEnd { + continue + } + key := fmt.Sprintf("%d:%d", currentPID, port) + if _, exists := seen[key]; exists { + continue + } + seen[key] = struct{}{} + orphans = append(orphans, orphanProcess{Port: port, PID: currentPID, Command: currentCmd}) + } + } + + return orphans +} + +func cleanupLikelyOrphans(orphans []orphanProcess, logger *slog.Logger) { + seen := make(map[int]struct{}) + for _, orphan := range orphans { + if _, ok := seen[orphan.PID]; ok { + continue + } + seen[orphan.PID] = struct{}{} + + err := syscall.Kill(orphan.PID, syscall.SIGTERM) + if err == nil { + if logger != nil { + logger.Info("sent SIGTERM to likely orphan process", "pid", orphan.PID, "command", orphan.Command) + } + continue + } + if errors.Is(err, syscall.ESRCH) { + if logger != nil { + logger.Debug("orphan process already exited", "pid", orphan.PID) + } + continue + } + if logger != nil { + logger.Warn("failed to terminate likely orphan process", "pid", orphan.PID, "error", err) + } + } +} + +func extractListenPort(addr string) (int, bool) { + addr = strings.TrimSpace(addr) + if addr == "" { + return 0, false + } + if idx := strings.Index(addr, "->"); idx > 0 { + addr = strings.TrimSpace(addr[:idx]) + } + if idx := strings.Index(addr, "("); idx > 0 { + addr = strings.TrimSpace(addr[:idx]) + } + idx := strings.LastIndex(addr, ":") + if idx 
< 0 || idx+1 >= len(addr) { + return 0, false + } + port, err := strconv.Atoi(strings.TrimSpace(addr[idx+1:])) + if err != nil { + return 0, false + } + return port, true +} + +func envEnabled(name string) bool { + v := strings.ToLower(strings.TrimSpace(os.Getenv(name))) + switch v { + case "1", "true", "yes", "on": + return true + default: + return false + } +} diff --git a/main_test.go b/main_test.go new file mode 100644 index 0000000..c6b5e25 --- /dev/null +++ b/main_test.go @@ -0,0 +1,52 @@ +package main + +import ( + "log/slog" + "strings" + "testing" +) + +func TestParseLikelyOrphansFromLsofOutputFiltersToOpencodeAndRange(t *testing.T) { + raw := strings.Join([]string{ + "p101", + "copencode", + "n127.0.0.1:30010", + "n127.0.0.1:30010", + "p202", + "cnode", + "n127.0.0.1:30011", + "p303", + "copencode", + "n127.0.0.1:29999", + "", + }, "\n") + + orphans := parseLikelyOrphansFromLsofOutput(raw, 30000, 30020) + if len(orphans) != 1 { + t.Fatalf("orphans len=%d want=1 (%#v)", len(orphans), orphans) + } + if orphans[0].PID != 101 || orphans[0].Port != 30010 { + t.Fatalf("unexpected orphan %#v", orphans[0]) + } +} + +func TestExtractListenPort(t *testing.T) { + port, ok := extractListenPort("127.0.0.1:31000") + if !ok || port != 31000 { + t.Fatalf("extract simple addr got (%d,%v) want (31000,true)", port, ok) + } + + port, ok = extractListenPort("127.0.0.1:31001->127.0.0.1:51514") + if !ok || port != 31001 { + t.Fatalf("extract connected addr got (%d,%v) want (31001,true)", port, ok) + } + + if _, ok := extractListenPort("nonsense"); ok { + t.Fatal("expected invalid addr to return ok=false") + } +} + +func TestHandleStartupOrphanOfferNoCleanupByDefault(t *testing.T) { + logger := slog.Default() + handleStartupOrphanOffer(31010, 31000, false, logger) +} diff --git a/scripts/dev-setup.sh b/scripts/dev-setup.sh new file mode 100755 index 0000000..002eead --- /dev/null +++ b/scripts/dev-setup.sh @@ -0,0 +1,67 @@ +#!/usr/bin/env bash + +set -euo pipefail + 
+ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)" +PORT="${OPENCODEROUTER_PORT:-8080}" +WORK_ROOT="${OPENCODEROUTER_WORK_ROOT:-/tmp/opencoderouter-dev}" +WS1="${WORK_ROOT}/workspace-a" +WS2="${WORK_ROOT}/workspace-b" + +mkdir -p "${WS1}" "${WS2}" + +touch "${WS1}/README.md" "${WS2}/README.md" + +echo "[dev-setup] root: ${ROOT_DIR}" +echo "[dev-setup] control-plane port: ${PORT}" +echo "[dev-setup] workspace A: ${WS1}" +echo "[dev-setup] workspace B: ${WS2}" + +cleanup() { + local exit_code=$? + if [[ -n "${SERVER_PID:-}" ]] && kill -0 "${SERVER_PID}" 2>/dev/null; then + echo "[dev-setup] stopping control plane (pid=${SERVER_PID})" + kill "${SERVER_PID}" 2>/dev/null || true + wait "${SERVER_PID}" 2>/dev/null || true + fi + if [[ -n "${TAIL_PID:-}" ]] && kill -0 "${TAIL_PID}" 2>/dev/null; then + kill "${TAIL_PID}" 2>/dev/null || true + wait "${TAIL_PID}" 2>/dev/null || true + fi + exit "${exit_code}" +} + +trap cleanup EXIT INT TERM + +LOG_FILE="${WORK_ROOT}/control-plane.log" + +echo "[dev-setup] starting control plane..." +(cd "${ROOT_DIR}" && go run . -port "${PORT}" "${WS1}" "${WS2}") >"${LOG_FILE}" 2>&1 & +SERVER_PID=$! + +for _ in {1..40}; do + if curl -fsS "http://127.0.0.1:${PORT}/api/health" >/dev/null 2>&1; then + echo "[dev-setup] control plane is healthy" + break + fi + sleep 0.25 +done + +echo +echo "OpenCodeRouter dev setup is ready" +echo "--------------------------------" +echo "Dashboard: http://127.0.0.1:${PORT}/" +echo "Sessions: http://127.0.0.1:${PORT}/api/sessions" +echo "Events SSE: http://127.0.0.1:${PORT}/api/events" +echo "Terminal WS: ws://127.0.0.1:${PORT}/ws/terminal/{session-id}" +echo +echo "Useful commands:" +echo " curl -s http://127.0.0.1:${PORT}/api/sessions | jq" +echo " curl -N http://127.0.0.1:${PORT}/api/events" +echo " tail -f ${LOG_FILE}" +echo +echo "Press Ctrl+C to stop everything." + +tail -f "${LOG_FILE}" & +TAIL_PID=$! 
+wait "${SERVER_PID}" diff --git a/tests/integration/deterministic_offline_test.go b/tests/integration/deterministic_offline_test.go new file mode 100644 index 0000000..539ee17 --- /dev/null +++ b/tests/integration/deterministic_offline_test.go @@ -0,0 +1,45 @@ +package integration_test + +import ( + "net/http" + "net/http/httptest" + "testing" + + "opencoderouter/internal/api" + "opencoderouter/internal/auth" + "opencoderouter/internal/session" +) + +func TestDeterministicOfflineIndicator(t *testing.T) { + mgr := newFakeSessionManager() + bus := session.NewEventBus(16) + + // Create a router with our fake manager + r := api.NewRouter(api.RouterConfig{ + SessionManager: mgr, + SessionEventBus: bus, + Fallback: http.NotFoundHandler(), + AuthConfig: auth.Defaults(), + }) + + srv := httptest.NewServer(r) + + // 1. Initial Load (Online) + resp, err := http.Get(srv.URL + "/api/sessions") + if err != nil { + t.Fatalf("Failed initial load: %v", err) + } + if resp.StatusCode != http.StatusOK { + t.Fatalf("Initial load status = %d", resp.StatusCode) + } + resp.Body.Close() + + // 2. Simulate Offline by closing server + srv.Close() + + // 3. 
Attempt to fetch (should fail) + _, err = http.Get(srv.URL + "/api/sessions") + if err == nil { + t.Fatal("Expected error fetching from closed server, got nil") + } +} diff --git a/tests/integration/e2e_test.go b/tests/integration/e2e_test.go new file mode 100644 index 0000000..85812cb --- /dev/null +++ b/tests/integration/e2e_test.go @@ -0,0 +1,816 @@ +package integration_test + +import ( + "bufio" + "bytes" + "context" + "encoding/json" + "io" + "net/http" + "net/http/httptest" + "os" + "os/exec" + "path/filepath" + "runtime" + "sort" + "strconv" + "strings" + "sync" + "testing" + "time" + + "opencoderouter/internal/api" + "opencoderouter/internal/auth" + "opencoderouter/internal/session" +) + +type fakeTerminalConn struct { + mu sync.Mutex + onClose func() + closed bool +} + +func (c *fakeTerminalConn) Read(_ []byte) (int, error) { return 0, io.EOF } +func (c *fakeTerminalConn) Write(p []byte) (int, error) { return len(p), nil } +func (c *fakeTerminalConn) Resize(_, _ int) error { return nil } + +func (c *fakeTerminalConn) Close() error { + c.mu.Lock() + if c.closed { + c.mu.Unlock() + return nil + } + c.closed = true + onClose := c.onClose + c.mu.Unlock() + if onClose != nil { + onClose() + } + return nil +} + +type fakeSessionManager struct { + mu sync.Mutex + sessions map[string]session.SessionHandle + health map[string]session.HealthStatus + nextID int + createErr error +} + +func newFakeSessionManager() *fakeSessionManager { + return &fakeSessionManager{ + sessions: make(map[string]session.SessionHandle), + health: make(map[string]session.HealthStatus), + } +} + +func (m *fakeSessionManager) Create(_ context.Context, opts session.CreateOpts) (*session.SessionHandle, error) { + m.mu.Lock() + defer m.mu.Unlock() + if m.createErr != nil { + return nil, m.createErr + } + + if strings.TrimSpace(opts.WorkspacePath) == "" { + return nil, session.ErrWorkspacePathRequired + } + abs, err := filepath.Abs(opts.WorkspacePath) + if err != nil { + return nil, 
session.ErrWorkspacePathInvalid
+	}
+	if stat, err := os.Stat(abs); err != nil || !stat.IsDir() {
+		return nil, session.ErrWorkspacePathInvalid
+	}
+
+	m.nextID++
+	id := "session-" + time.Now().UTC().Format("150405") + "-" + string(rune('a'+m.nextID))
+	now := time.Now().UTC()
+	h := session.SessionHandle{
+		ID:            id,
+		DaemonPort:    32000 + m.nextID,
+		WorkspacePath: abs,
+		Status:        session.SessionStatusActive,
+		CreatedAt:     now,
+		LastActivity:  now,
+		Labels:        cloneLabels(opts.Labels),
+	}
+	m.sessions[id] = h
+	m.health[id] = session.HealthStatus{State: session.HealthStateHealthy, LastCheck: now}
+	copy := h
+	return &copy, nil
+}
+
+func (m *fakeSessionManager) Get(id string) (*session.SessionHandle, error) {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+	h, ok := m.sessions[id]
+	if !ok {
+		return nil, session.ErrSessionNotFound
+	}
+	copy := h
+	copy.Labels = cloneLabels(h.Labels)
+	return &copy, nil
+}
+
+func (m *fakeSessionManager) List(filter session.SessionListFilter) ([]session.SessionHandle, error) {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+	out := make([]session.SessionHandle, 0, len(m.sessions))
+	for _, h := range m.sessions {
+		if filter.Status != "" && h.Status != filter.Status {
+			continue
+		}
+		copy := h
+		copy.Labels = cloneLabels(h.Labels)
+		out = append(out, copy)
+	}
+	sort.Slice(out, func(i, j int) bool { return out[i].ID < out[j].ID })
+	return out, nil
+}
+
+func (m *fakeSessionManager) Stop(_ context.Context, id string) error {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+	h, ok := m.sessions[id]
+	if !ok {
+		return session.ErrSessionNotFound
+	}
+	h.Status = session.SessionStatusStopped
+	h.LastActivity = time.Now().UTC()
+	m.sessions[id] = h
+	st := m.health[id]
+	st.State = session.HealthStateUnknown
+	st.LastCheck = time.Now().UTC()
+	m.health[id] = st
+	return nil
+}
+
+func (m *fakeSessionManager) Restart(_ context.Context, id string) (*session.SessionHandle, error) {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+	h, ok := m.sessions[id]
+	if !ok {
+		return nil, 
session.ErrSessionNotFound
+	}
+	h.Status = session.SessionStatusActive
+	h.LastActivity = time.Now().UTC()
+	m.sessions[id] = h
+	st := m.health[id]
+	st.State = session.HealthStateHealthy
+	st.LastCheck = time.Now().UTC()
+	m.health[id] = st
+	copy := h
+	return &copy, nil
+}
+
+func (m *fakeSessionManager) Delete(_ context.Context, id string) error {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+	if _, ok := m.sessions[id]; !ok {
+		return session.ErrSessionNotFound
+	}
+	delete(m.sessions, id)
+	delete(m.health, id)
+	return nil
+}
+
+func (m *fakeSessionManager) AttachTerminal(_ context.Context, id string) (session.TerminalConn, error) {
+	m.mu.Lock()
+	h, ok := m.sessions[id]
+	if !ok {
+		m.mu.Unlock()
+		return nil, session.ErrSessionNotFound
+	}
+	h.AttachedClients++
+	h.LastActivity = time.Now().UTC()
+	m.sessions[id] = h
+	m.mu.Unlock()
+
+	return &fakeTerminalConn{onClose: func() {
+		m.mu.Lock()
+		defer m.mu.Unlock()
+		h, ok := m.sessions[id]
+		if !ok {
+			return
+		}
+		if h.AttachedClients > 0 {
+			h.AttachedClients--
+		}
+		h.LastActivity = time.Now().UTC()
+		m.sessions[id] = h
+	}}, nil
+}
+
+func (m *fakeSessionManager) Health(_ context.Context, id string) (session.HealthStatus, error) {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+	h, ok := m.health[id]
+	if !ok {
+		return session.HealthStatus{}, session.ErrSessionNotFound
+	}
+	return h, nil
+}
+
+func cloneLabels(in map[string]string) map[string]string {
+	if len(in) == 0 {
+		return nil
+	}
+	out := make(map[string]string, len(in))
+	for k, v := range in {
+		out[k] = v
+	}
+	return out
+}
+
+func jsonRequest(t *testing.T, client *http.Client, method, url string, payload any) *http.Response {
+	t.Helper()
+	var body io.Reader
+	if payload != nil {
+		buf, err := json.Marshal(payload)
+		if err != nil {
+			t.Fatalf("marshal payload: %v", err)
+		}
+		body = bytes.NewReader(buf)
+	}
+	req, err := http.NewRequest(method, url, body)
+	if err != nil {
+		t.Fatalf("new request: %v", err)
+	}
+	if payload != nil {
+		
req.Header.Set("Content-Type", "application/json") + } + resp, err := client.Do(req) + if err != nil { + t.Fatalf("request failed: %v", err) + } + return resp +} + +type sessionView struct { + ID string `json:"id"` + WorkspacePath string `json:"workspacePath"` + Status session.SessionStatus `json:"status"` + AttachedClients int `json:"attachedClients"` +} + +func decode[T any](t *testing.T, r io.Reader) T { + t.Helper() + var out T + if err := json.NewDecoder(r).Decode(&out); err != nil { + t.Fatalf("decode response: %v", err) + } + return out +} + +func TestE2ESessionLifecycleAndWiring(t *testing.T) { + mgr := newFakeSessionManager() + bus := session.NewEventBus(16) + + r := api.NewRouter(api.RouterConfig{ + SessionManager: mgr, + SessionEventBus: bus, + Fallback: http.NotFoundHandler(), + AuthConfig: auth.Defaults(), + }) + + srv := httptest.NewServer(r) + defer srv.Close() + + ws1 := t.TempDir() + ws2 := t.TempDir() + + create1 := jsonRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions", map[string]any{"workspacePath": ws1}) + if create1.StatusCode != http.StatusCreated { + defer create1.Body.Close() + t.Fatalf("create1 status=%d want=%d", create1.StatusCode, http.StatusCreated) + } + s1 := decode[sessionView](t, create1.Body) + _ = create1.Body.Close() + + create2 := jsonRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions", map[string]any{"workspacePath": ws2}) + if create2.StatusCode != http.StatusCreated { + defer create2.Body.Close() + t.Fatalf("create2 status=%d want=%d", create2.StatusCode, http.StatusCreated) + } + s2 := decode[sessionView](t, create2.Body) + _ = create2.Body.Close() + + if s1.ID == "" || s2.ID == "" || s1.ID == s2.ID { + t.Fatalf("invalid session ids: s1=%q s2=%q", s1.ID, s2.ID) + } + + list := jsonRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions", nil) + if list.StatusCode != http.StatusOK { + defer list.Body.Close() + t.Fatalf("list status=%d want=%d", list.StatusCode, http.StatusOK) + } + all 
:= decode[[]sessionView](t, list.Body) + _ = list.Body.Close() + if len(all) != 2 { + t.Fatalf("list len=%d want=2", len(all)) + } + + attach := jsonRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+s1.ID+"/attach", nil) + if attach.StatusCode != http.StatusOK { + defer attach.Body.Close() + t.Fatalf("attach status=%d want=%d", attach.StatusCode, http.StatusOK) + } + attached := decode[sessionView](t, attach.Body) + _ = attach.Body.Close() + if attached.AttachedClients != 1 { + t.Fatalf("attached clients=%d want=1", attached.AttachedClients) + } + + get2 := jsonRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+s2.ID, nil) + if get2.StatusCode != http.StatusOK { + defer get2.Body.Close() + t.Fatalf("get2 status=%d want=%d", get2.StatusCode, http.StatusOK) + } + state2 := decode[sessionView](t, get2.Body) + _ = get2.Body.Close() + if state2.AttachedClients != 0 { + t.Fatalf("session2 attached clients=%d want=0", state2.AttachedClients) + } + + detach := jsonRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+s1.ID+"/detach", nil) + if detach.StatusCode != http.StatusOK { + defer detach.Body.Close() + t.Fatalf("detach status=%d want=%d", detach.StatusCode, http.StatusOK) + } + detached := decode[sessionView](t, detach.Body) + _ = detach.Body.Close() + if detached.AttachedClients != 0 { + t.Fatalf("detached clients=%d want=0", detached.AttachedClients) + } + + del := jsonRequest(t, srv.Client(), http.MethodDelete, srv.URL+"/api/sessions/"+s1.ID, nil) + if del.StatusCode != http.StatusNoContent { + defer del.Body.Close() + t.Fatalf("delete status=%d want=%d", del.StatusCode, http.StatusNoContent) + } + _ = del.Body.Close() + + missing := jsonRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+s1.ID, nil) + if missing.StatusCode != http.StatusNotFound { + defer missing.Body.Close() + t.Fatalf("missing status=%d want=%d", missing.StatusCode, http.StatusNotFound) + } + _ = missing.Body.Close() + + eventsReq, err 
:= http.NewRequest(http.MethodGet, srv.URL+"/api/events", nil) + if err != nil { + t.Fatalf("events request: %v", err) + } + eventsResp, err := srv.Client().Do(eventsReq) + if err != nil { + t.Fatalf("events call: %v", err) + } + if eventsResp.StatusCode != http.StatusOK { + defer eventsResp.Body.Close() + t.Fatalf("events status=%d want=%d", eventsResp.StatusCode, http.StatusOK) + } + if got := eventsResp.Header.Get("Content-Type"); got != "text/event-stream" { + defer eventsResp.Body.Close() + t.Fatalf("events content-type=%q want=%q", got, "text/event-stream") + } + _ = eventsResp.Body.Close() + + terminalReq, err := http.NewRequest(http.MethodGet, srv.URL+"/ws/terminal/"+s2.ID, nil) + if err != nil { + t.Fatalf("terminal request: %v", err) + } + terminalResp, err := srv.Client().Do(terminalReq) + if err != nil { + t.Fatalf("terminal call: %v", err) + } + if terminalResp.StatusCode != http.StatusBadRequest { + defer terminalResp.Body.Close() + t.Fatalf("terminal status=%d want=%d", terminalResp.StatusCode, http.StatusBadRequest) + } + _ = terminalResp.Body.Close() +} + +func TestE2ERouterAuthMiddleware(t *testing.T) { + mgr := newFakeSessionManager() + ws := t.TempDir() + if _, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: ws}); err != nil { + t.Fatalf("seed create: %v", err) + } + + authCfg := auth.Defaults() + authCfg.Enabled = true + authCfg.BearerTokens = []string{"integration-secret"} + + srv := httptest.NewServer(api.NewRouter(api.RouterConfig{ + SessionManager: mgr, + SessionEventBus: session.NewEventBus(8), + AuthConfig: authCfg, + Fallback: http.NotFoundHandler(), + })) + defer srv.Close() + + unauth := jsonRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions", nil) + if unauth.StatusCode != http.StatusUnauthorized { + defer unauth.Body.Close() + t.Fatalf("unauthorized status=%d want=%d", unauth.StatusCode, http.StatusUnauthorized) + } + _ = unauth.Body.Close() + + req, err := http.NewRequest(http.MethodGet, 
srv.URL+"/api/sessions", nil) + if err != nil { + t.Fatalf("new auth request: %v", err) + } + req.Header.Set("Authorization", "Bearer integration-secret") + authResp, err := srv.Client().Do(req) + if err != nil { + t.Fatalf("authorized request: %v", err) + } + if authResp.StatusCode != http.StatusOK { + defer authResp.Body.Close() + t.Fatalf("authorized status=%d want=%d", authResp.StatusCode, http.StatusOK) + } + _ = authResp.Body.Close() +} + +func TestE2ERealRuntimeGuard(t *testing.T) { + if strings.TrimSpace(os.Getenv("RUN_REAL_DAEMON_E2E")) == "" { + t.Skip("set RUN_REAL_DAEMON_E2E=1 to run daemon-dependent integration checks") + } + if _, err := exec.LookPath("opencode"); err != nil { + t.Skipf("opencode binary unavailable: %v", err) + } + t.Skip("real daemon integration scaffold guard in place; runtime assertions intentionally skipped in default CI") +} + +func TestE2EMainWiringAssumptions(t *testing.T) { + root := repoRoot(t) + contents, err := os.ReadFile(filepath.Join(root, "main.go")) + if err != nil { + t.Fatalf("read main.go: %v", err) + } + text := string(contents) + required := []string{ + "session.NewManager(", + "api.NewRouter(api.RouterConfig{", + "SessionManager: sessionMgr", + "SessionEventBus: eventBus", + "AuthConfig: auth.LoadFromEnv()", + } + for _, needle := range required { + if !strings.Contains(text, needle) { + t.Fatalf("main.go missing wiring marker: %q", needle) + } + } + + routerText, err := os.ReadFile(filepath.Join(root, "internal", "api", "router.go")) + if err != nil { + t.Fatalf("read internal/api/router.go: %v", err) + } + rt := string(routerText) + routes := []string{ + "NewSessionsHandler", + "NewEventsHandler", + "terminal.NewHandler", + "auth.Middleware", + } + for _, needle := range routes { + if !strings.Contains(rt, needle) { + t.Fatalf("router.go missing mount marker: %q", needle) + } + } +} + +func TestE2EErrorShapeOnMissingSession(t *testing.T) { + mgr := newFakeSessionManager() + srv := 
httptest.NewServer(api.NewRouter(api.RouterConfig{ + SessionManager: mgr, + SessionEventBus: session.NewEventBus(8), + AuthConfig: auth.Defaults(), + Fallback: http.NotFoundHandler(), + })) + defer srv.Close() + + resp := jsonRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/does-not-exist", nil) + if resp.StatusCode != http.StatusNotFound { + defer resp.Body.Close() + t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusNotFound) + } + var payload map[string]any + if err := json.NewDecoder(resp.Body).Decode(&payload); err != nil { + _ = resp.Body.Close() + t.Fatalf("decode error payload: %v", err) + } + _ = resp.Body.Close() + if payload["code"] != "SESSION_NOT_FOUND" { + t.Fatalf("code=%v want=%q", payload["code"], "SESSION_NOT_FOUND") + } + if _, ok := payload["error"]; !ok { + t.Fatalf("missing error field in payload: %#v", payload) + } +} + +func TestE2EAttachDetachPathShape(t *testing.T) { + mgr := newFakeSessionManager() + ws := t.TempDir() + h, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: ws}) + if err != nil { + t.Fatalf("seed create: %v", err) + } + + srv := httptest.NewServer(api.NewRouter(api.RouterConfig{ + SessionManager: mgr, + SessionEventBus: session.NewEventBus(8), + AuthConfig: auth.Defaults(), + Fallback: http.NotFoundHandler(), + })) + defer srv.Close() + + attach := jsonRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+h.ID+"/attach", nil) + if attach.StatusCode != http.StatusOK { + defer attach.Body.Close() + t.Fatalf("attach status=%d want=%d", attach.StatusCode, http.StatusOK) + } + attached := decode[sessionView](t, attach.Body) + _ = attach.Body.Close() + + detach := jsonRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+h.ID+"/detach", nil) + if detach.StatusCode != http.StatusOK { + defer detach.Body.Close() + t.Fatalf("detach status=%d want=%d", detach.StatusCode, http.StatusOK) + } + detached := decode[sessionView](t, detach.Body) + _ = 
detach.Body.Close() + + if attached.AttachedClients != 1 || detached.AttachedClients != 0 { + t.Fatalf("unexpected attach/detach shape attached=%d detached=%d", attached.AttachedClients, detached.AttachedClients) + } +} + +func TestE2ECreateSessionPortExhaustionErrorPath(t *testing.T) { + mgr := newFakeSessionManager() + mgr.createErr = session.ErrNoAvailableSessionPorts + + srv := httptest.NewServer(api.NewRouter(api.RouterConfig{ + SessionManager: mgr, + SessionEventBus: session.NewEventBus(8), + AuthConfig: auth.Defaults(), + Fallback: http.NotFoundHandler(), + })) + defer srv.Close() + + resp := jsonRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions", map[string]any{ + "workspacePath": t.TempDir(), + }) + if resp.StatusCode != http.StatusServiceUnavailable { + defer resp.Body.Close() + t.Fatalf("status=%d want=%d", resp.StatusCode, http.StatusServiceUnavailable) + } + var payload map[string]any + if err := json.NewDecoder(resp.Body).Decode(&payload); err != nil { + _ = resp.Body.Close() + t.Fatalf("decode response: %v", err) + } + _ = resp.Body.Close() + if payload["code"] != "NO_AVAILABLE_SESSION_PORTS" { + t.Fatalf("code=%v want=%q", payload["code"], "NO_AVAILABLE_SESSION_PORTS") + } + errText, _ := payload["error"].(string) + if !strings.Contains(strings.ToLower(errText), "port") { + t.Fatalf("error text=%q want descriptive port message", errText) + } +} + +func TestE2EHealthFailureEventPath(t *testing.T) { + mgr := newFakeSessionManager() + bus := session.NewEventBus(16) + + srv := httptest.NewServer(api.NewRouter(api.RouterConfig{ + SessionManager: mgr, + SessionEventBus: bus, + AuthConfig: auth.Defaults(), + Fallback: http.NotFoundHandler(), + })) + defer srv.Close() + + resp := jsonRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/events", nil) + if resp.StatusCode != http.StatusOK { + defer resp.Body.Close() + t.Fatalf("events status=%d want=%d", resp.StatusCode, http.StatusOK) + } + defer resp.Body.Close() + + reader := 
bufio.NewReader(resp.Body) + _ = readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool { + return frame.Retry != "" + }) + + now := time.Now().UTC() + handle := session.SessionHandle{ + ID: "session-health-failure", + DaemonPort: 32042, + WorkspacePath: "/tmp/workspace", + Status: session.SessionStatusError, + CreatedAt: now, + LastActivity: now, + } + if err := bus.Publish(session.SessionHealthChanged{ + At: now, + Session: handle, + Previous: session.HealthStatus{State: session.HealthStateHealthy}, + Current: session.HealthStatus{State: session.HealthStateUnhealthy, Error: "probe timeout"}, + }); err != nil { + t.Fatalf("publish health event: %v", err) + } + + frame := readSSEUntil(t, reader, 2*time.Second, func(frame parsedSSEFrame) bool { + return frame.Event == "session.health" && len(frame.Data) > 0 + }) + + var envelope map[string]any + if err := json.Unmarshal([]byte(strings.Join(frame.Data, "\n")), &envelope); err != nil { + t.Fatalf("decode session.health payload: %v", err) + } + if envelope["type"] != "session.health" { + t.Fatalf("type=%v want=session.health", envelope["type"]) + } + payload, _ := envelope["payload"].(map[string]any) + current, _ := payload["Current"].(map[string]any) + if current["State"] != string(session.HealthStateUnhealthy) { + t.Fatalf("current state=%v want=%q", current["State"], session.HealthStateUnhealthy) + } + if current["Error"] != "probe timeout" { + t.Fatalf("current error=%v want=%q", current["Error"], "probe timeout") + } +} + +func TestE2EOfflineIndicatorBehaviorContract(t *testing.T) { + root := repoRoot(t) + indexBytes, err := os.ReadFile(filepath.Join(root, "web", "index.html")) + if err != nil { + t.Fatalf("read web/index.html: %v", err) + } + appBytes, err := os.ReadFile(filepath.Join(root, "web", "app.js")) + if err != nil { + t.Fatalf("read web/app.js: %v", err) + } + + indexText := string(indexBytes) + appText := string(appBytes) + + checks := []string{ + "● OFFLINE", + "● DISCONNECTED", + "● 
RECONNECTING", + "setSSEIndicator('disconnected', 'bootstrap failed')", + } + + if !strings.Contains(indexText, checks[0]) { + t.Fatalf("index.html missing offline default indicator marker %q", checks[0]) + } + for _, marker := range checks[1:] { + if !strings.Contains(appText, marker) { + t.Fatalf("app.js missing offline/reconnect marker %q", marker) + } + } +} + +func TestE2EMultiSessionIndependenceSanity(t *testing.T) { + mgr := newFakeSessionManager() + ws1 := t.TempDir() + ws2 := t.TempDir() + a, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: ws1}) + if err != nil { + t.Fatalf("seed a: %v", err) + } + b, err := mgr.Create(context.Background(), session.CreateOpts{WorkspacePath: ws2}) + if err != nil { + t.Fatalf("seed b: %v", err) + } + + srv := httptest.NewServer(api.NewRouter(api.RouterConfig{ + SessionManager: mgr, + SessionEventBus: session.NewEventBus(8), + AuthConfig: auth.Defaults(), + Fallback: http.NotFoundHandler(), + })) + defer srv.Close() + + stopA := jsonRequest(t, srv.Client(), http.MethodPost, srv.URL+"/api/sessions/"+a.ID+"/stop", nil) + if stopA.StatusCode != http.StatusOK { + defer stopA.Body.Close() + t.Fatalf("stop a status=%d want=%d", stopA.StatusCode, http.StatusOK) + } + _ = stopA.Body.Close() + + getA := jsonRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+a.ID, nil) + getB := jsonRequest(t, srv.Client(), http.MethodGet, srv.URL+"/api/sessions/"+b.ID, nil) + if getA.StatusCode != http.StatusOK || getB.StatusCode != http.StatusOK { + defer getA.Body.Close() + defer getB.Body.Close() + t.Fatalf("get statuses a=%d b=%d", getA.StatusCode, getB.StatusCode) + } + stateA := decode[sessionView](t, getA.Body) + stateB := decode[sessionView](t, getB.Body) + _ = getA.Body.Close() + _ = getB.Body.Close() + + if stateA.Status != session.SessionStatusStopped { + t.Fatalf("session A status=%s want=%s", stateA.Status, session.SessionStatusStopped) + } + if stateB.Status != session.SessionStatusActive { + 
t.Fatalf("session B status=%s want=%s", stateB.Status, session.SessionStatusActive) + } +} + +func repoRoot(t *testing.T) string { + t.Helper() + _, file, _, ok := runtime.Caller(0) + if !ok { + t.Fatal("runtime.Caller failed") + } + return filepath.Clean(filepath.Join(filepath.Dir(file), "..", "..")) +} + +type parsedSSEFrame struct { + ID string + Event string + Data []string + Comments []string + Retry string +} + +func readSSEUntil(t *testing.T, reader *bufio.Reader, timeout time.Duration, match func(parsedSSEFrame) bool) parsedSSEFrame { + t.Helper() + deadline := time.Now().Add(timeout) + for { + remaining := time.Until(deadline) + if remaining <= 0 { + t.Fatalf("timed out waiting for matching SSE frame after %s", timeout) + } + frame := readSSEFrame(t, reader, remaining) + if match(frame) { + return frame + } + } +} + +func readSSEFrame(t *testing.T, reader *bufio.Reader, timeout time.Duration) parsedSSEFrame { + t.Helper() + type result struct { + frame parsedSSEFrame + err error + } + resultCh := make(chan result, 1) + + go func() { + var frame parsedSSEFrame + for { + line, err := reader.ReadString('\n') + if err != nil { + resultCh <- result{err: err} + return + } + line = strings.TrimRight(line, "\r\n") + if line == "" { + if frame.ID != "" || frame.Event != "" || frame.Retry != "" || len(frame.Data) > 0 || len(frame.Comments) > 0 { + resultCh <- result{frame: frame} + return + } + continue + } + + switch { + case strings.HasPrefix(line, "id:"): + frame.ID = strings.TrimSpace(strings.TrimPrefix(line, "id:")) + case strings.HasPrefix(line, "event:"): + frame.Event = strings.TrimSpace(strings.TrimPrefix(line, "event:")) + case strings.HasPrefix(line, "data:"): + frame.Data = append(frame.Data, strings.TrimSpace(strings.TrimPrefix(line, "data:"))) + case strings.HasPrefix(line, "retry:"): + frame.Retry = strings.TrimSpace(strings.TrimPrefix(line, "retry:")) + case strings.HasPrefix(line, ":"): + frame.Comments = append(frame.Comments, 
strings.TrimSpace(strings.TrimPrefix(line, ":"))) + } + } + }() + + select { + case res := <-resultCh: + if res.err != nil { + if res.err == io.EOF { + t.Fatal("unexpected EOF while reading SSE frame") + } + t.Fatalf("read SSE frame: %v", res.err) + } + if res.frame.ID != "" { + if _, err := strconv.ParseInt(res.frame.ID, 10, 64); err != nil { + t.Fatalf("invalid SSE id format %q: %v", res.frame.ID, err) + } + } + return res.frame + case <-time.After(timeout): + t.Fatalf("timed out reading SSE frame after %s", timeout) + } + + return parsedSSEFrame{} +} diff --git a/vscode-extension/.vscode/launch.json b/vscode-extension/.vscode/launch.json new file mode 100644 index 0000000..fb1ac52 --- /dev/null +++ b/vscode-extension/.vscode/launch.json @@ -0,0 +1,17 @@ +{ + "version": "0.2.0", + "configurations": [ + { + "name": "Run OpenCode Extension", + "type": "extensionHost", + "request": "launch", + "runtimeExecutable": "${execPath}", + "args": [ + "--extensionDevelopmentPath=${workspaceFolder}/vscode-extension" + ], + "outFiles": [ + "${workspaceFolder}/vscode-extension/out/**/*.js" + ] + } + ] +} diff --git a/vscode-extension/README.md b/vscode-extension/README.md new file mode 100644 index 0000000..66e9ba2 --- /dev/null +++ b/vscode-extension/README.md @@ -0,0 +1,160 @@ +# OpenCode Control Plane VS Code Extension + +This extension integrates VS Code with the OpenCodeRouter control plane. +It provides session management, chat, terminal bridging, and diff-apply flows. + +## 1. Requirements + +- VS Code `^1.90.0` +- Running OpenCodeRouter control plane (default `http://localhost:8080`) +- Optional auth token if control-plane auth is enabled + +## 2. Installation + +### Development install + +```bash +cd vscode-extension +npm install +npm run compile +``` + +Then launch via VS Code **Extension Development Host**. 
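The SSE wire format exercised by the Go test helper earlier in this diff (`id:`, `event:`, `data:`, `retry:`, and `:`-prefixed comment fields, terminated by a blank line) can be sketched as a small TypeScript parser. The function name here is illustrative and not part of the codebase:

```typescript
interface SSEFrame {
  id: string;
  event: string;
  data: string[];
  retry: string;
  comments: string[];
}

// Parse a single SSE frame from raw text; a blank line terminates the frame.
// Mirrors the field handling of the Go readSSEFrame test helper above.
function parseSSEFrame(raw: string): SSEFrame {
  const frame: SSEFrame = { id: "", event: "", data: [], retry: "", comments: [] };
  for (const rawLine of raw.split("\n")) {
    const line = rawLine.replace(/\r$/, "");
    if (line === "") break; // end of frame
    if (line.startsWith(":")) {
      frame.comments.push(line.slice(1).trim());
    } else if (line.startsWith("id:")) {
      frame.id = line.slice(3).trim();
    } else if (line.startsWith("event:")) {
      frame.event = line.slice(6).trim();
    } else if (line.startsWith("data:")) {
      frame.data.push(line.slice(5).trim());
    } else if (line.startsWith("retry:")) {
      frame.retry = line.slice(6).trim();
    }
  }
  return frame;
}
```

A frame like `event: session.health` followed by a `data:` line parses into an object whose `data` entries can be joined and JSON-decoded, exactly as the Go test does.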
+
+### Packaged install (optional)
+
+```bash
+cd vscode-extension
+npm install
+npm run compile
+npx @vscode/vsce package
+```
+
+Install the generated `.vsix` using:
+
+- VS Code command palette -> `Extensions: Install from VSIX...`
+
+## 3. Configuration
+
+Settings are under the `opencode` namespace.
+
+| Setting | Type | Default | Description |
+|---|---|---|---|
+| `opencode.controlPlaneUrl` | string | `http://localhost:8080` | Base URL used for API, SSE, and terminal websocket connections |
+| `opencode.authToken` | string | `""` | Optional bearer token added as `Authorization: Bearer <token>` |
+
+## 4. Features
+
+### 4.1 Session tree
+
+- View ID, label, status, and workspace path.
+- Run session lifecycle actions:
+  - create
+  - attach
+  - stop
+  - restart
+  - delete
+- Connection status is shown in a status bar item.
+
+### 4.2 Resilient refresh and event handling
+
+- Initial session fetch uses bounded retry backoff.
+- SSE event stream drives incremental refresh scheduling.
+- On control-plane failures with cached sessions:
+  - sessions are marked stale
+  - a warning prompt offers an explicit `Retry`
+
+### 4.3 Agent chat view
+
+- Session-targeted chat webview in the OpenCode activity container.
+- Uses extension-host transport to avoid webview-side auth/CORS concerns.
+
+### 4.4 Terminal bridge
+
+- `OpenCode Terminal` profile backed by an extension PTY bridge.
+- Session-selected websocket terminal connection.
+- Reconnect/status behavior is handled in the bridge implementation.
+
+### 4.5 Diff integration
+
+- Stage and preview diffs via `vscode.diff`.
+- Apply/reject staged diffs.
+- Clear diff highlights explicitly.
+
+## 5. 
Commands + +Contributed commands: + +- `opencode.attachSession` +- `opencode.createSession` +- `opencode.openChat` +- `opencode.openTerminal` +- `opencode.refreshSessions` +- `opencode.stopSession` +- `opencode.restartSession` +- `opencode.deleteSession` +- `opencode.applyDiffPreview` +- `opencode.applyLastDiff` +- `opencode.rejectLastDiff` +- `opencode.clearDiffHighlights` + +## 6. Keybindings reference + +This extension does **not** contribute default keyboard shortcuts in +`package.json`. + +Use one of: + +- Command palette (`Ctrl/Cmd+Shift+P`) with the command IDs above +- OpenCode activity view title actions and context menus +- User/workspace custom keybindings mapped to command IDs + +Example custom mapping (`keybindings.json`): + +```json +[ + { + "key": "ctrl+alt+o", + "command": "opencode.refreshSessions" + }, + { + "key": "ctrl+alt+t", + "command": "opencode.openTerminal" + } +] +``` + +## 7. Troubleshooting + +### Session tree shows disconnected/error + +- Verify control plane is running at `opencode.controlPlaneUrl`. +- Check token validity if auth is enabled. +- Use `OpenCode: Refresh Sessions` command. + +### Sessions appear as stale + +- The extension kept last successful data due to control-plane request failures. +- Use `Retry` in warning prompt or run refresh command. + +### Terminal connection fails + +- Confirm session is not `stopped` and daemon health is `healthy`. +- Ensure control plane can attach terminal for that session. +- In constrained environments, `/ws/terminal/{id}` may return 502/503 if terminal + attach prerequisites are not available. + +### Chat or diff actions fail + +- Confirm selected session exists and daemon is reachable. +- Check control-plane logs for daemon passthrough errors. 
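The troubleshooting steps above mostly reduce to two inputs: the `opencode.controlPlaneUrl` base URL and the optional `opencode.authToken` bearer token. A hedged sketch of how a request target and headers could be derived from those settings (the helper name is hypothetical, not an extension API):

```typescript
// Hypothetical helper: combine the documented settings into a request target.
// Assumes only what the configuration table states: a base URL plus an
// optional `Authorization: Bearer <token>` header when a token is set.
function controlPlaneRequest(
  baseUrl: string,
  path: string,
  authToken: string
): { url: string; headers: Record<string, string> } {
  const headers: Record<string, string> = { Accept: "application/json" };
  const token = authToken.trim();
  if (token !== "") {
    headers["Authorization"] = `Bearer ${token}`;
  }
  // Strip trailing slashes so `http://localhost:8080/` + `/api/sessions`
  // does not produce a double slash.
  return { url: baseUrl.replace(/\/+$/, "") + path, headers };
}
```

Probing `GET /api/sessions` with these headers is a quick way to distinguish a wrong URL or bad token from the terminal-specific 502/503 failures described above.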
+ +### Extension compile issues + +- Reinstall dependencies: + +```bash +cd vscode-extension +npm install +npm run compile +``` diff --git a/vscode-extension/eslint.config.js b/vscode-extension/eslint.config.js new file mode 100644 index 0000000..bef9862 --- /dev/null +++ b/vscode-extension/eslint.config.js @@ -0,0 +1,27 @@ +const tsParser = require('@typescript-eslint/parser'); +const tsPlugin = require('@typescript-eslint/eslint-plugin'); + +module.exports = [ + { + ignores: ['out/**', 'node_modules/**', '.vscode-test/**'] + }, + { + files: ['src/**/*.ts'], + languageOptions: { + parser: tsParser, + parserOptions: { + ecmaVersion: 2022, + sourceType: 'module', + project: './tsconfig.json', + tsconfigRootDir: __dirname + } + }, + plugins: { + '@typescript-eslint': tsPlugin + }, + rules: { + '@typescript-eslint/no-unused-vars': ['error', { argsIgnorePattern: '^_' }], + '@typescript-eslint/no-explicit-any': 'off' + } + } +]; diff --git a/vscode-extension/media/chat/chat.css b/vscode-extension/media/chat/chat.css new file mode 100644 index 0000000..981340a --- /dev/null +++ b/vscode-extension/media/chat/chat.css @@ -0,0 +1,184 @@ +.chat-root { + display: flex; + flex-direction: column; + height: 100vh; + color: var(--vscode-editor-foreground); + background: var(--vscode-editor-background); + font-family: var(--vscode-font-family); +} + +.chat-header { + display: flex; + justify-content: space-between; + align-items: center; + padding: 10px 12px; + border-bottom: 1px solid var(--vscode-panel-border); + gap: 12px; +} + +.session-title { + font-size: 12px; + white-space: nowrap; + overflow: hidden; + text-overflow: ellipsis; +} + +.stream-state { + text-transform: uppercase; + font-size: 11px; + color: var(--vscode-descriptionForeground); +} + +.messages { + flex: 1; + overflow-y: auto; + padding: 10px; + display: flex; + flex-direction: column; + gap: 10px; +} + +.message { + border: 1px solid var(--vscode-panel-border); + border-radius: 6px; + background: 
var(--vscode-editorWidget-background); +} + +.message.user { + border-color: var(--vscode-textLink-foreground); +} + +.message.system { + border-color: var(--vscode-editorWarning-foreground); +} + +.message-header { + padding: 6px 8px; + font-size: 11px; + color: var(--vscode-descriptionForeground); + border-bottom: 1px solid var(--vscode-panel-border); +} + +.message-body { + padding: 8px; + font-size: 13px; + line-height: 1.45; +} + +.message-body p { + margin: 0 0 8px; +} + +.message-body p:last-child { + margin-bottom: 0; +} + +.message-body pre { + margin: 8px 0; + background: var(--vscode-textCodeBlock-background); + border: 1px solid var(--vscode-panel-border); + border-radius: 6px; + padding: 8px; + overflow-x: auto; +} + +.message-body code { + font-family: var(--vscode-editor-font-family); +} + +.file-ref { + color: var(--vscode-textLink-foreground); + text-decoration: underline; + cursor: pointer; +} + +.tool-calls { + padding: 0 8px 8px; + display: flex; + flex-direction: column; + gap: 6px; +} + +.tool-call { + border: 1px solid var(--vscode-panel-border); + border-radius: 6px; + background: var(--vscode-sideBar-background); +} + +.tool-call summary { + padding: 6px 8px; + cursor: pointer; + font-size: 12px; +} + +.tool-call pre { + margin: 0 8px 8px; +} + +.diff-preview { + border: 1px solid var(--vscode-panel-border); + border-radius: 6px; + margin: 8px 0; + overflow: hidden; +} + +.diff-preview-header { + display: flex; + justify-content: space-between; + align-items: center; + padding: 6px 8px; + background: var(--vscode-sideBar-background); + border-bottom: 1px solid var(--vscode-panel-border); + font-size: 12px; +} + +.diff-preview pre { + margin: 0; + border: 0; + border-radius: 0; +} + +.diff-add { + color: var(--vscode-gitDecoration-addedResourceForeground); +} + +.diff-remove { + color: var(--vscode-gitDecoration-deletedResourceForeground); +} + +.chat-form { + border-top: 1px solid var(--vscode-panel-border); + padding: 10px; + display: 
flex;
+  flex-direction: column;
+  gap: 8px;
+}
+
+.chat-input {
+  resize: vertical;
+  min-height: 56px;
+  background: var(--vscode-input-background);
+  color: var(--vscode-input-foreground);
+  border: 1px solid var(--vscode-input-border);
+  border-radius: 4px;
+  padding: 8px;
+  font-family: var(--vscode-font-family);
+}
+
+.chat-actions {
+  display: flex;
+  justify-content: flex-end;
+}
+
+button {
+  border: 1px solid var(--vscode-button-border, transparent);
+  border-radius: 4px;
+  padding: 4px 10px;
+  cursor: pointer;
+  color: var(--vscode-button-foreground);
+  background: var(--vscode-button-background);
+}
+
+button:hover {
+  background: var(--vscode-button-hoverBackground);
+}
diff --git a/vscode-extension/media/chat/chat.js b/vscode-extension/media/chat/chat.js
new file mode 100644
index 0000000..2c285e2
--- /dev/null
+++ b/vscode-extension/media/chat/chat.js
@@ -0,0 +1,451 @@
+const vscode = acquireVsCodeApi();
+
+const state = {
+  session: null,
+  messages: [],
+  activeAssistantId: null,
+  streaming: false
+};
+
+const dom = {
+  title: document.getElementById('session-title'),
+  streamState: document.getElementById('stream-state'),
+  messages: document.getElementById('messages'),
+  form: document.getElementById('chat-form'),
+  input: document.getElementById('chat-input'),
+  send: document.getElementById('chat-send')
+};
+
+const fileRefPattern = /(^|[\s(])((?:\.{0,2}\/|~\/)?[A-Za-z0-9_./-]+\.(?:ts|tsx|js|jsx|mjs|cjs|go|py|md|json|ya?ml|txt|css|scss|html|sql))(?:\:(\d+))?/g;
+
+function makeId() {
+  return `${Date.now()}-${Math.random().toString(36).slice(2, 10)}`;
+}
+
+function escapeHtml(value) {
+  return value
+    .replace(/&/g, '&amp;')
+    .replace(/</g, '&lt;')
+    .replace(/>/g, '&gt;')
+    .replace(/"/g, '&quot;')
+    .replace(/'/g, '&#39;');
+}
+
+function normalizeDiffMarkdown(input) {
+  if (!input) {
+    return '';
+  }
+
+  const lines = input.split('\n');
+  let inCode = false;
+  let inDiff = false;
+
+  for (let i = 0; i < lines.length; i += 1) {
+    if (lines[i].startsWith('```')) {
+      inCode = !inCode;
+      if (inDiff) {
+        lines.splice(i, 0, '```');
+        i += 1;
+        inDiff = false;
+      }
+      continue;
+    }
+
+    if (inCode) {
+      continue;
+    }
+
+    const isDiffLine = /^[+-] /.test(lines[i]);
+    if (isDiffLine && !inDiff) {
+      lines.splice(i, 0, '```diff');
+      i += 1;
+      inDiff = true;
+      continue;
+    }
+
+    if (!isDiffLine && inDiff && lines[i].trim() !== '') {
+      lines.splice(i, 0, '```');
+      i += 1;
+      inDiff = false;
+    }
+  }
+
+  if (inDiff) {
+    lines.push('```');
+  }
+
+  return lines.join('\n');
+}
+
+function linkifyFileRefs(text) {
+  return text.replace(fileRefPattern, (full, prefix, filePath, line) => {
+    const encodedPath = encodeURIComponent(filePath);
+    const encodedLine = line ? encodeURIComponent(line) : '';
+    const display = line ? `${filePath}:${line}` : filePath;
+    return `${prefix}<a class="file-ref" data-file-path="${encodedPath}" data-file-line="${encodedLine}">${display}</a>`;
+  });
+}
+
+function renderInline(text) {
+  let rendered = escapeHtml(text);
+  rendered = linkifyFileRefs(rendered);
+  rendered = rendered.replace(/`([^`]+)`/g, (_m, value) => `<code>${value}</code>`);
+  rendered = rendered.replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>');
+  rendered = rendered.replace(/\*([^*]+)\*/g, '<em>$1</em>');
+  rendered = rendered.replace(/\[([^\]]+)\]\((https?:\/\/[^)]+)\)/g, '<a href="$2">$1</a>');
+  return rendered;
+}
+
+function looksLikeDiff(code) {
+  return /^[-+]/m.test(code);
+}
+
+function renderDiffCode(code) {
+  return escapeHtml(code)
+    .split('\n')
+    .map((line) => {
+      if (line.startsWith('+')) {
+        return `<span class="diff-add">${line}</span>`;
+      }
+      if (line.startsWith('-')) {
+        return `<span class="diff-remove">${line}</span>`;
+      }
+      return line;
+    })
+    .join('\n');
+}
+
+function renderCodeBlock(language, code) {
+  const normalizedLanguage = (language || '').toLowerCase();
+  if (normalizedLanguage === 'diff' || looksLikeDiff(code)) {
+    const encoded = encodeURIComponent(code);
+    return `<div class="diff-preview">
+      <div class="diff-preview-header">
+        <span>Diff Preview</span>
+        <button class="apply-diff" data-diff="${encoded}">Apply</button>
+      </div>
+      <pre>${renderDiffCode(code)}</pre>
+    </div>`;
+  }
+  return `<pre><code>${escapeHtml(code)}</code></pre>`;
+}
+
+function renderMarkdown(markdown) {
+  const normalized = normalizeDiffMarkdown(markdown || '');
+  const codeBlocks = [];
+  const tokenized = normalized.replace(/```([a-zA-Z0-9_-]+)?\n([\s\S]*?)```/g, (_full, language, code) => {
+    const index = codeBlocks.push({ language: language || '', code }) - 1;
+    return `@@CODE_BLOCK_${index}@@`;
+  });
+
+  const lines = tokenized.split('\n');
+  let html = '';
+  let inList = false;
+
+  for (const line of lines) {
+    const codeMatch = line.match(/^@@CODE_BLOCK_(\d+)@@$/);
+    if (codeMatch) {
+      if (inList) {
+        html += '</ul>';
+        inList = false;
+      }
+      const block = codeBlocks[Number(codeMatch[1])];
+      html += renderCodeBlock(block.language, block.code);
+      continue;
+    }
+
+    const trimmed = line.trim();
+    if (!trimmed) {
+      if (inList) {
+        html += '</ul>';
+        inList = false;
+      }
+      continue;
+    }
+
+    const heading = trimmed.match(/^(#{1,6})\s+(.*)$/);
+    if (heading) {
+      if (inList) {
+        html += '</ul>';
+        inList = false;
+      }
+      const level = heading[1].length;
+      html += `<h${level}>${renderInline(heading[2])}</h${level}>`;
+      continue;
+    }
+
+    const listItem = trimmed.match(/^[-*]\s+(.*)$/);
+    if (listItem) {
+      if (!inList) {
+        html += '<ul>';
+        inList = true;
+      }
+      html += `<li>${renderInline(listItem[1])}</li>`;
+      continue;
+    }
+
+    if (inList) {
+      html += '</ul>';
+      inList = false;
+    }
+    html += `<p>${renderInline(trimmed)}</p>`;
+  }
+
+  if (inList) {
+    html += '</ul>';
+  }
+
+  return html;
+}
+
+function firstString(value, fallback = '') {
+  if (typeof value === 'string' && value.trim()) {
+    return value.trim();
+  }
+  return fallback;
+}
+
+function extractToolCall(chunk) {
+  const type = firstString(chunk.type || '');
+  const payload = chunk.payload && typeof chunk.payload === 'object' ? chunk.payload : null;
+  if (!payload) {
+    return null;
+  }
+
+  const payloadType = firstString(payload.type || payload.kind || '');
+  const name = firstString(payload.name || payload.tool || payload.toolName || payload.call || 'tool');
+  if (!type.toLowerCase().includes('tool') && !payloadType.toLowerCase().includes('tool') && !payload.input && !payload.arguments) {
+    return null;
+  }
+
+  return {
+    name,
+    input: payload.input || payload.arguments || payload.args || payload.params || payload
+  };
+}
+
+function renderToolCall(toolCall) {
+  const details = document.createElement('details');
+  details.className = 'tool-call';
+
+  const summary = document.createElement('summary');
+  summary.textContent = `Tool Call: ${firstString(toolCall.name, 'tool')}`;
+  details.appendChild(summary);
+
+  const pre = document.createElement('pre');
+  pre.textContent = JSON.stringify(toolCall.input, null, 2);
+  details.appendChild(pre);
+
+  return details;
+}
+
+function renderMessageNode(message) {
+  const container = document.createElement('section');
+  container.className = `message ${message.role}`;
+
+  const header = document.createElement('div');
+  header.className = 'message-header';
+  header.textContent = message.role.toUpperCase();
+  container.appendChild(header);
+
+  const body = document.createElement('div');
+  body.className = 'message-body';
+  if (message.role === 'assistant') {
+    body.innerHTML = renderMarkdown(message.content || '');
+  } else {
+    body.textContent = message.content || '';
+  }
+  container.appendChild(body);
+
+  if (message.toolCalls && message.toolCalls.length > 0) {
+    const tools = 
document.createElement('div'); + tools.className = 'tool-calls'; + for (const toolCall of message.toolCalls) { + tools.appendChild(renderToolCall(toolCall)); + } + container.appendChild(tools); + } + + wireInteractiveElements(container); + return container; +} + +function wireInteractiveElements(root) { + const fileLinks = root.querySelectorAll('.file-ref'); + for (const link of fileLinks) { + link.addEventListener('click', (event) => { + event.preventDefault(); + const target = event.currentTarget; + const path = decodeURIComponent(target.getAttribute('data-file-path') || ''); + const lineRaw = decodeURIComponent(target.getAttribute('data-file-line') || ''); + const line = Number.parseInt(lineRaw, 10); + vscode.postMessage({ + type: 'openFile', + path, + line: Number.isFinite(line) ? line : undefined + }); + }); + } + + const applyButtons = root.querySelectorAll('.apply-diff'); + for (const button of applyButtons) { + button.addEventListener('click', (event) => { + const target = event.currentTarget; + const diff = decodeURIComponent(target.getAttribute('data-diff') || ''); + vscode.postMessage({ type: 'applyDiff', diff }); + }); + } +} + +function renderMessages() { + dom.messages.innerHTML = ''; + for (const message of state.messages) { + dom.messages.appendChild(renderMessageNode(message)); + } + dom.messages.scrollTop = dom.messages.scrollHeight; +} + +function updateHeader() { + if (!state.session) { + dom.title.textContent = 'No session selected'; + } else { + const description = state.session.workspacePath ? ` · ${state.session.workspacePath}` : ''; + dom.title.textContent = `${state.session.label || state.session.id}${description}`; + } + dom.streamState.textContent = state.streaming ? 
'streaming' : 'idle'; + dom.send.disabled = !state.session || state.streaming; +} + +function appendMessage(role, content) { + const message = { + id: makeId(), + role, + content, + toolCalls: [] + }; + state.messages.push(message); + renderMessages(); + return message; +} + +function getMessageById(id) { + return state.messages.find((message) => message.id === id) || null; +} + +function ensureActiveAssistantMessage() { + if (state.activeAssistantId) { + const existing = getMessageById(state.activeAssistantId); + if (existing) { + return existing; + } + } + + const created = appendMessage('assistant', ''); + state.activeAssistantId = created.id; + return created; +} + +function handleChatChunk(chunk) { + if (!chunk || typeof chunk !== 'object') { + return; + } + + if (chunk.error) { + appendMessage('system', String(chunk.error)); + state.activeAssistantId = null; + state.streaming = false; + updateHeader(); + return; + } + + const assistant = ensureActiveAssistantMessage(); + if (typeof chunk.delta === 'string' && chunk.delta.length > 0) { + assistant.content += chunk.delta; + } + + const toolCall = extractToolCall(chunk); + if (toolCall) { + assistant.toolCalls.push(toolCall); + } + + if (chunk.done === true) { + state.activeAssistantId = null; + state.streaming = false; + } + + renderMessages(); + updateHeader(); +} + +function replaceHistory(messages) { + state.messages = []; + if (Array.isArray(messages)) { + for (const item of messages) { + if (!item || typeof item !== 'object') { + continue; + } + state.messages.push({ + id: firstString(item.id, makeId()), + role: ['user', 'assistant', 'system'].includes(item.role) ? item.role : 'assistant', + content: firstString(item.content, ''), + toolCalls: Array.isArray(item.toolCalls) ? 
item.toolCalls : [] + }); + } + } + state.activeAssistantId = null; + renderMessages(); +} + +dom.form.addEventListener('submit', (event) => { + event.preventDefault(); + const prompt = dom.input.value.trim(); + if (!prompt || !state.session || state.streaming) { + return; + } + + appendMessage('user', prompt); + const assistant = appendMessage('assistant', ''); + state.activeAssistantId = assistant.id; + state.streaming = true; + updateHeader(); + + dom.input.value = ''; + vscode.postMessage({ type: 'sendPrompt', prompt }); +}); + +window.addEventListener('message', (event) => { + const msg = event.data; + if (!msg || typeof msg !== 'object') { + return; + } + + switch (msg.type) { + case 'session': + state.session = msg.session || null; + updateHeader(); + if (state.session) { + vscode.postMessage({ type: 'requestHistory' }); + } + break; + case 'chatHistory': + replaceHistory(msg.messages || []); + break; + case 'streamStarted': + state.streaming = true; + updateHeader(); + break; + case 'streamEnded': + state.streaming = false; + state.activeAssistantId = null; + updateHeader(); + break; + case 'chatChunk': + handleChatChunk(msg.chunk || {}); + break; + case 'error': + appendMessage('system', firstString(msg.message, 'Unknown error')); + state.streaming = false; + state.activeAssistantId = null; + updateHeader(); + break; + default: + break; + } +}); + +updateHeader(); +vscode.postMessage({ type: 'ready' }); diff --git a/vscode-extension/opencode-router-0.1.0.vsix b/vscode-extension/opencode-router-0.1.0.vsix new file mode 100644 index 0000000..243257f Binary files /dev/null and b/vscode-extension/opencode-router-0.1.0.vsix differ diff --git a/vscode-extension/package-lock.json b/vscode-extension/package-lock.json new file mode 100644 index 0000000..8322f67 --- /dev/null +++ b/vscode-extension/package-lock.json @@ -0,0 +1,5085 @@ +{ + "name": "opencode-router", + "version": "0.1.0", + "lockfileVersion": 3, + "requires": true, + "packages": { + "": { + "name": 
"opencode-router", + "version": "0.1.0", + "devDependencies": { + "@types/mocha": "^10.0.10", + "@types/node": "^20.16.5", + "@types/vscode": "^1.90.0", + "@typescript-eslint/eslint-plugin": "^8.56.1", + "@typescript-eslint/parser": "^8.56.1", + "@vscode/test-electron": "^2.5.2", + "@vscode/vsce": "^2.31.1", + "eslint": "^10.0.2", + "mocha": "^11.7.5", + "typescript": "^5.6.2" + }, + "engines": { + "vscode": "^1.90.0" + } + }, + "node_modules/@azure/abort-controller": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.1.2.tgz", + "integrity": "sha512-nBrLsEWm4J2u5LpAPjxADTlq3trDgVZZXHNKabeXZtpq3d3AbN/KGO82R87rdDz5/lYB024rtEf10/q0urNgsA==", + "dev": true, + "license": "MIT", + "dependencies": { + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=18.0.0" + } + }, + "node_modules/@azure/core-auth": { + "version": "1.10.1", + "resolved": "https://registry.npmjs.org/@azure/core-auth/-/core-auth-1.10.1.tgz", + "integrity": "sha512-ykRMW8PjVAn+RS6ww5cmK9U2CyH9p4Q88YJwvUslfuMmN98w/2rdGRLPqJYObapBCdzBVeDgYWdJnFPFb7qzpg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-util": "^1.13.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/core-client": { + "version": "1.10.1", + "resolved": "https://registry.npmjs.org/@azure/core-client/-/core-client-1.10.1.tgz", + "integrity": "sha512-Nh5PhEOeY6PrnxNPsEHRr9eimxLwgLlpmguQaHKBinFYA/RU9+kOYVOQqOrTsCL+KSxrLLl1gD8Dk5BFW/7l/w==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-rest-pipeline": "^1.22.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/core-rest-pipeline": { + "version": "1.23.0", + "resolved": 
"https://registry.npmjs.org/@azure/core-rest-pipeline/-/core-rest-pipeline-1.23.0.tgz", + "integrity": "sha512-Evs1INHo+jUjwHi1T6SG6Ua/LHOQBCLuKEEE6efIpt4ZOoNonaT1kP32GoOcdNDbfqsD2445CPri3MubBy5DEQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "@typespec/ts-http-runtime": "^0.3.4", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/core-tracing": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/@azure/core-tracing/-/core-tracing-1.3.1.tgz", + "integrity": "sha512-9MWKevR7Hz8kNzzPLfX4EAtGM2b8mr50HPDBvio96bURP/9C+HjdH3sBlLSNNrvRAr5/k/svoH457gB5IKpmwQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/core-util": { + "version": "1.13.1", + "resolved": "https://registry.npmjs.org/@azure/core-util/-/core-util-1.13.1.tgz", + "integrity": "sha512-XPArKLzsvl0Hf0CaGyKHUyVgF7oDnhKoP85Xv6M4StF/1AhfORhZudHtOyf2s+FcbuQ9dPRAjB8J2KvRRMUK2A==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@typespec/ts-http-runtime": "^0.3.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/identity": { + "version": "4.13.0", + "resolved": "https://registry.npmjs.org/@azure/identity/-/identity-4.13.0.tgz", + "integrity": "sha512-uWC0fssc+hs1TGGVkkghiaFkkS7NkTxfnCH+Hdg+yTehTpMcehpok4PgUKKdyCH+9ldu6FhiHRv84Ntqj1vVcw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.0.0", + "@azure/core-auth": "^1.9.0", + "@azure/core-client": "^1.9.2", + "@azure/core-rest-pipeline": "^1.17.0", + "@azure/core-tracing": "^1.0.0", + "@azure/core-util": "^1.11.0", + "@azure/logger": "^1.0.0", + "@azure/msal-browser": "^4.2.0", + 
"@azure/msal-node": "^3.5.0", + "open": "^10.1.0", + "tslib": "^2.2.0" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/logger": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/@azure/logger/-/logger-1.3.0.tgz", + "integrity": "sha512-fCqPIfOcLE+CGqGPd66c8bZpwAji98tZ4JI9i/mlTNTlsIWslCfpg48s/ypyLxZTump5sypjrKn2/kY7q8oAbA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typespec/ts-http-runtime": "^0.3.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/msal-browser": { + "version": "4.29.0", + "resolved": "https://registry.npmjs.org/@azure/msal-browser/-/msal-browser-4.29.0.tgz", + "integrity": "sha512-/f3eHkSNUTl6DLQHm+bKecjBKcRQxbd/XLx8lvSYp8Nl/HRyPuIPOijt9Dt0sH50/SxOwQ62RnFCmFlGK+bR/w==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/msal-common": "15.15.0" + }, + "engines": { + "node": ">=0.8.0" + } + }, + "node_modules/@azure/msal-common": { + "version": "15.15.0", + "resolved": "https://registry.npmjs.org/@azure/msal-common/-/msal-common-15.15.0.tgz", + "integrity": "sha512-/n+bN0AKlVa+AOcETkJSKj38+bvFs78BaP4rNtv3MJCmPH0YrHiskMRe74OhyZ5DZjGISlFyxqvf9/4QVEi2tw==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.8.0" + } + }, + "node_modules/@azure/msal-node": { + "version": "3.8.8", + "resolved": "https://registry.npmjs.org/@azure/msal-node/-/msal-node-3.8.8.tgz", + "integrity": "sha512-+f1VrJH1iI517t4zgmuhqORja0bL6LDQXfBqkjuMmfTYXTQQnh1EvwwxO3UbKLT05N0obF72SRHFrC1RBDv5Gg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/msal-common": "15.15.0", + "jsonwebtoken": "^9.0.0", + "uuid": "^8.3.0" + }, + "engines": { + "node": ">=16" + } + }, + "node_modules/@eslint-community/eslint-utils": { + "version": "4.9.1", + "resolved": "https://registry.npmjs.org/@eslint-community/eslint-utils/-/eslint-utils-4.9.1.tgz", + "integrity": 
"sha512-phrYmNiYppR7znFEdqgfWHXR6NCkZEK7hwWDHZUjit/2/U0r6XvkDl0SYnoM51Hq7FhCGdLDT6zxCCOY1hexsQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "eslint-visitor-keys": "^3.4.3" + }, + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + }, + "peerDependencies": { + "eslint": "^6.0.0 || ^7.0.0 || >=8.0.0" + } + }, + "node_modules/@eslint-community/regexpp": { + "version": "4.12.2", + "resolved": "https://registry.npmjs.org/@eslint-community/regexpp/-/regexpp-4.12.2.tgz", + "integrity": "sha512-EriSTlt5OC9/7SXkRSCAhfSxxoSUgBm33OH+IkwbdpgoqsSsUg7y3uh+IICI/Qg4BBWr3U2i39RpmycbxMq4ew==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^12.0.0 || ^14.0.0 || >=16.0.0" + } + }, + "node_modules/@eslint/config-array": { + "version": "0.23.3", + "resolved": "https://registry.npmjs.org/@eslint/config-array/-/config-array-0.23.3.tgz", + "integrity": "sha512-j+eEWmB6YYLwcNOdlwQ6L2OsptI/LO6lNBuLIqe5R7RetD658HLoF+Mn7LzYmAWWNNzdC6cqP+L6r8ujeYXWLw==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@eslint/object-schema": "^3.0.3", + "debug": "^4.3.1", + "minimatch": "^10.2.4" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + } + }, + "node_modules/@eslint/config-helpers": { + "version": "0.5.3", + "resolved": "https://registry.npmjs.org/@eslint/config-helpers/-/config-helpers-0.5.3.tgz", + "integrity": "sha512-lzGN0onllOZCGroKJmRwY6QcEHxbjBw1gwB8SgRSqK8YbbtEXMvKynsXc3553ckIEBxsbMBU7oOZXKIPGZNeZw==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@eslint/core": "^1.1.1" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + } + }, + "node_modules/@eslint/core": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@eslint/core/-/core-1.1.1.tgz", + "integrity": "sha512-QUPblTtE51/7/Zhfv8BDwO0qkkzQL7P/aWWbqcf4xWLEYn1oKjdO0gglQBB4GAsu7u6wjijbCmzsUTy6mnk6oQ==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + 
"@types/json-schema": "^7.0.15" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + } + }, + "node_modules/@eslint/object-schema": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/@eslint/object-schema/-/object-schema-3.0.3.tgz", + "integrity": "sha512-iM869Pugn9Nsxbh/YHRqYiqd23AmIbxJOcpUMOuWCVNdoQJ5ZtwL6h3t0bcZzJUlC3Dq9jCFCESBZnX0GTv7iQ==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + } + }, + "node_modules/@eslint/plugin-kit": { + "version": "0.6.1", + "resolved": "https://registry.npmjs.org/@eslint/plugin-kit/-/plugin-kit-0.6.1.tgz", + "integrity": "sha512-iH1B076HoAshH1mLpHMgwdGeTs0CYwL0SPMkGuSebZrwBp16v415e9NZXg2jtrqPVQjf6IANe2Vtlr5KswtcZQ==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@eslint/core": "^1.1.1", + "levn": "^0.4.1" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + } + }, + "node_modules/@humanfs/core": { + "version": "0.19.1", + "resolved": "https://registry.npmjs.org/@humanfs/core/-/core-0.19.1.tgz", + "integrity": "sha512-5DyQ4+1JEUzejeK1JGICcideyfUbGixgS9jNgex5nqkW+cY7WZhxBigmieN5Qnw9ZosSNVC9KQKyb+GUaGyKUA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/@humanfs/node": { + "version": "0.16.7", + "resolved": "https://registry.npmjs.org/@humanfs/node/-/node-0.16.7.tgz", + "integrity": "sha512-/zUx+yOsIrG4Y43Eh2peDeKCxlRt/gET6aHfaKpuq267qXdYDFViVHfMaLyygZOnl0kGWxFIgsBy8QFuTLUXEQ==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@humanfs/core": "^0.19.1", + "@humanwhocodes/retry": "^0.4.0" + }, + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/@humanwhocodes/module-importer": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/@humanwhocodes/module-importer/-/module-importer-1.0.1.tgz", + "integrity": "sha512-bxveV4V8v5Yb4ncFTT3rPSgZBOpCkjfK0y4oVVVJwIuDVBRMDXrPyXRL988i5ap9m9bnyEEjWfm5WkBmtffLfA==", + "dev": true, + "license": 
"Apache-2.0", + "engines": { + "node": ">=12.22" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/nzakas" + } + }, + "node_modules/@humanwhocodes/retry": { + "version": "0.4.3", + "resolved": "https://registry.npmjs.org/@humanwhocodes/retry/-/retry-0.4.3.tgz", + "integrity": "sha512-bV0Tgo9K4hfPCek+aMAn81RppFKv2ySDQeMoSZuvTASywNTnVJCArCZE2FWqpvIatKu7VMRLWlR1EazvVhDyhQ==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">=18.18" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/nzakas" + } + }, + "node_modules/@isaacs/cliui": { + "version": "8.0.2", + "resolved": "https://registry.npmjs.org/@isaacs/cliui/-/cliui-8.0.2.tgz", + "integrity": "sha512-O8jcjabXaleOG9DQ0+ARXWZBTfnP4WNAqzuiJK7ll44AmxGKv/J2M4TPjxjY3znBCfvBXFzucm1twdyFybFqEA==", + "dev": true, + "license": "ISC", + "dependencies": { + "string-width": "^5.1.2", + "string-width-cjs": "npm:string-width@^4.2.0", + "strip-ansi": "^7.0.1", + "strip-ansi-cjs": "npm:strip-ansi@^6.0.1", + "wrap-ansi": "^8.1.0", + "wrap-ansi-cjs": "npm:wrap-ansi@^7.0.0" + }, + "engines": { + "node": ">=12" + } + }, + "node_modules/@isaacs/cliui/node_modules/ansi-styles": { + "version": "6.2.3", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-6.2.3.tgz", + "integrity": "sha512-4Dj6M28JB+oAH8kFkTLUo+a2jwOFkuqb3yucU0CANcRRUbxS0cP0nZYCGjcc3BNXwRIsUVmDGgzawme7zvJHvg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/@isaacs/cliui/node_modules/emoji-regex": { + "version": "9.2.2", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-9.2.2.tgz", + "integrity": "sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==", + "dev": true, + "license": "MIT" + }, + "node_modules/@isaacs/cliui/node_modules/string-width": { + "version": "5.1.2", + "resolved": 
"https://registry.npmjs.org/string-width/-/string-width-5.1.2.tgz", + "integrity": "sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==", + "dev": true, + "license": "MIT", + "dependencies": { + "eastasianwidth": "^0.2.0", + "emoji-regex": "^9.2.2", + "strip-ansi": "^7.0.1" + }, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/@isaacs/cliui/node_modules/wrap-ansi": { + "version": "8.1.0", + "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-8.1.0.tgz", + "integrity": "sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^6.1.0", + "string-width": "^5.0.1", + "strip-ansi": "^7.0.1" + }, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/wrap-ansi?sponsor=1" + } + }, + "node_modules/@pkgjs/parseargs": { + "version": "0.11.0", + "resolved": "https://registry.npmjs.org/@pkgjs/parseargs/-/parseargs-0.11.0.tgz", + "integrity": "sha512-+1VkjdD0QBLPodGrJUeqarH8VAIvQODIbwh9XpP5Syisf7YoQgsJKPNFoqqLQlu+VQ/tVSshMR6loPMn8U+dPg==", + "dev": true, + "license": "MIT", + "optional": true, + "engines": { + "node": ">=14" + } + }, + "node_modules/@types/esrecurse": { + "version": "4.3.1", + "resolved": "https://registry.npmjs.org/@types/esrecurse/-/esrecurse-4.3.1.tgz", + "integrity": "sha512-xJBAbDifo5hpffDBuHl0Y8ywswbiAp/Wi7Y/GtAgSlZyIABppyurxVueOPE8LUQOxdlgi6Zqce7uoEpqNTeiUw==", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/estree": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz", + "integrity": "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/json-schema": { + "version": "7.0.15", + "resolved": 
"https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.15.tgz", + "integrity": "sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA==", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/mocha": { + "version": "10.0.10", + "resolved": "https://registry.npmjs.org/@types/mocha/-/mocha-10.0.10.tgz", + "integrity": "sha512-xPyYSz1cMPnJQhl0CLMH68j3gprKZaTjG3s5Vi+fDgx+uhG9NOXwbVt52eFS8ECyXhyKcjDLCBEqBExKuiZb7Q==", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/node": { + "version": "20.19.37", + "resolved": "https://registry.npmjs.org/@types/node/-/node-20.19.37.tgz", + "integrity": "sha512-8kzdPJ3FsNsVIurqBs7oodNnCEVbni9yUEkaHbgptDACOPW04jimGagZ51E6+lXUwJjgnBw+hyko/lkFWCldqw==", + "dev": true, + "license": "MIT", + "dependencies": { + "undici-types": "~6.21.0" + } + }, + "node_modules/@types/vscode": { + "version": "1.109.0", + "resolved": "https://registry.npmjs.org/@types/vscode/-/vscode-1.109.0.tgz", + "integrity": "sha512-0Pf95rnwEIwDbmXGC08r0B4TQhAbsHQ5UyTIgVgoieDe4cOnf92usuR5dEczb6bTKEp7ziZH4TV1TRGPPCExtw==", + "dev": true, + "license": "MIT" + }, + "node_modules/@typescript-eslint/eslint-plugin": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/eslint-plugin/-/eslint-plugin-8.56.1.tgz", + "integrity": "sha512-Jz9ZztpB37dNC+HU2HI28Bs9QXpzCz+y/twHOwhyrIRdbuVDxSytJNDl6z/aAKlaRIwC7y8wJdkBv7FxYGgi0A==", + "dev": true, + "license": "MIT", + "dependencies": { + "@eslint-community/regexpp": "^4.12.2", + "@typescript-eslint/scope-manager": "8.56.1", + "@typescript-eslint/type-utils": "8.56.1", + "@typescript-eslint/utils": "8.56.1", + "@typescript-eslint/visitor-keys": "8.56.1", + "ignore": "^7.0.5", + "natural-compare": "^1.4.0", + "ts-api-utils": "^2.4.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + 
"@typescript-eslint/parser": "^8.56.1", + "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/parser": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/parser/-/parser-8.56.1.tgz", + "integrity": "sha512-klQbnPAAiGYFyI02+znpBRLyjL4/BrBd0nyWkdC0s/6xFLkXYQ8OoRrSkqacS1ddVxf/LDyODIKbQ5TgKAf/Fg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/scope-manager": "8.56.1", + "@typescript-eslint/types": "8.56.1", + "@typescript-eslint/typescript-estree": "8.56.1", + "@typescript-eslint/visitor-keys": "8.56.1", + "debug": "^4.4.3" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/project-service": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/project-service/-/project-service-8.56.1.tgz", + "integrity": "sha512-TAdqQTzHNNvlVFfR+hu2PDJrURiwKsUvxFn1M0h95BB8ah5jejas08jUWG4dBA68jDMI988IvtfdAI53JzEHOQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/tsconfig-utils": "^8.56.1", + "@typescript-eslint/types": "^8.56.1", + "debug": "^4.4.3" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/scope-manager": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/scope-manager/-/scope-manager-8.56.1.tgz", + "integrity": "sha512-YAi4VDKcIZp0O4tz/haYKhmIDZFEUPOreKbfdAN3SzUDMcPhJ8QI99xQXqX+HoUVq8cs85eRKnD+rne2UAnj2w==", + "dev": true, + "license": "MIT", + "dependencies": { + 
"@typescript-eslint/types": "8.56.1", + "@typescript-eslint/visitor-keys": "8.56.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + } + }, + "node_modules/@typescript-eslint/tsconfig-utils": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/tsconfig-utils/-/tsconfig-utils-8.56.1.tgz", + "integrity": "sha512-qOtCYzKEeyr3aR9f28mPJqBty7+DBqsdd63eO0yyDwc6vgThj2UjWfJIcsFeSucYydqcuudMOprZ+x1SpF3ZuQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/type-utils": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/type-utils/-/type-utils-8.56.1.tgz", + "integrity": "sha512-yB/7dxi7MgTtGhZdaHCemf7PuwrHMenHjmzgUW1aJpO+bBU43OycnM3Wn+DdvDO/8zzA9HlhaJ0AUGuvri4oGg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/types": "8.56.1", + "@typescript-eslint/typescript-estree": "8.56.1", + "@typescript-eslint/utils": "8.56.1", + "debug": "^4.4.3", + "ts-api-utils": "^2.4.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/types": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/types/-/types-8.56.1.tgz", + "integrity": "sha512-dbMkdIUkIkchgGDIv7KLUpa0Mda4IYjo4IAMJUZ+3xNoUXxMsk9YtKpTHSChRS85o+H9ftm51gsK1dZReY9CVw==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, 
+ "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + } + }, + "node_modules/@typescript-eslint/typescript-estree": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/typescript-estree/-/typescript-estree-8.56.1.tgz", + "integrity": "sha512-qzUL1qgalIvKWAf9C1HpvBjif+Vm6rcT5wZd4VoMb9+Km3iS3Cv9DY6dMRMDtPnwRAFyAi7YXJpTIEXLvdfPxg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/project-service": "8.56.1", + "@typescript-eslint/tsconfig-utils": "8.56.1", + "@typescript-eslint/types": "8.56.1", + "@typescript-eslint/visitor-keys": "8.56.1", + "debug": "^4.4.3", + "minimatch": "^10.2.2", + "semver": "^7.7.3", + "tinyglobby": "^0.2.15", + "ts-api-utils": "^2.4.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/utils": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/utils/-/utils-8.56.1.tgz", + "integrity": "sha512-HPAVNIME3tABJ61siYlHzSWCGtOoeP2RTIaHXFMPqjrQKCGB9OgUVdiNgH7TJS2JNIQ5qQ4RsAUDuGaGme/KOA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@eslint-community/eslint-utils": "^4.9.1", + "@typescript-eslint/scope-manager": "8.56.1", + "@typescript-eslint/types": "8.56.1", + "@typescript-eslint/typescript-estree": "8.56.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", + "typescript": ">=4.8.4 <6.0.0" + } + }, + "node_modules/@typescript-eslint/visitor-keys": { + "version": "8.56.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/visitor-keys/-/visitor-keys-8.56.1.tgz", + 
"integrity": "sha512-KiROIzYdEV85YygXw6BI/Dx4fnBlFQu6Mq4QE4MOH9fFnhohw6wX/OAvDY2/C+ut0I3RSPKenvZJIVYqJNkhEw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@typescript-eslint/types": "8.56.1", + "eslint-visitor-keys": "^5.0.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + } + }, + "node_modules/@typescript-eslint/visitor-keys/node_modules/eslint-visitor-keys": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-5.0.1.tgz", + "integrity": "sha512-tD40eHxA35h0PEIZNeIjkHoDR4YjjJp34biM0mDvplBe//mB+IHCqHDGV7pxF+7MklTvighcCPPZC7ynWyjdTA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/@typespec/ts-http-runtime": { + "version": "0.3.4", + "resolved": "https://registry.npmjs.org/@typespec/ts-http-runtime/-/ts-http-runtime-0.3.4.tgz", + "integrity": "sha512-CI0NhTrz4EBaa0U+HaaUZrJhPoso8sG7ZFya8uQoBA57fjzrjRSv87ekCjLZOFExN+gXE/z0xuN2QfH4H2HrLQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "http-proxy-agent": "^7.0.0", + "https-proxy-agent": "^7.0.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@vscode/test-electron": { + "version": "2.5.2", + "resolved": "https://registry.npmjs.org/@vscode/test-electron/-/test-electron-2.5.2.tgz", + "integrity": "sha512-8ukpxv4wYe0iWMRQU18jhzJOHkeGKbnw7xWRX3Zw1WJA4cEKbHcmmLPdPrPtL6rhDcrlCZN+xKRpv09n4gRHYg==", + "dev": true, + "license": "MIT", + "dependencies": { + "http-proxy-agent": "^7.0.2", + "https-proxy-agent": "^7.0.5", + "jszip": "^3.10.1", + "ora": "^8.1.0", + "semver": "^7.6.2" + }, + "engines": { + "node": ">=16" + } + }, + "node_modules/@vscode/vsce": { + "version": "2.32.0", + "resolved": 
"https://registry.npmjs.org/@vscode/vsce/-/vsce-2.32.0.tgz", + "integrity": "sha512-3EFJfsgrSftIqt3EtdRcAygy/OJ3hstyI1cDmIgkU9CFZW5C+3djr6mfosndCUqcVYuyjmxOK1xmFp/Bq7+NIg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/identity": "^4.1.0", + "@vscode/vsce-sign": "^2.0.0", + "azure-devops-node-api": "^12.5.0", + "chalk": "^2.4.2", + "cheerio": "^1.0.0-rc.9", + "cockatiel": "^3.1.2", + "commander": "^6.2.1", + "form-data": "^4.0.0", + "glob": "^7.0.6", + "hosted-git-info": "^4.0.2", + "jsonc-parser": "^3.2.0", + "leven": "^3.1.0", + "markdown-it": "^12.3.2", + "mime": "^1.3.4", + "minimatch": "^3.0.3", + "parse-semver": "^1.1.1", + "read": "^1.0.7", + "semver": "^7.5.2", + "tmp": "^0.2.1", + "typed-rest-client": "^1.8.4", + "url-join": "^4.0.1", + "xml2js": "^0.5.0", + "yauzl": "^2.3.1", + "yazl": "^2.2.2" + }, + "bin": { + "vsce": "vsce" + }, + "engines": { + "node": ">= 16" + }, + "optionalDependencies": { + "keytar": "^7.7.0" + } + }, + "node_modules/@vscode/vsce-sign": { + "version": "2.0.9", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign/-/vsce-sign-2.0.9.tgz", + "integrity": "sha512-8IvaRvtFyzUnGGl3f5+1Cnor3LqaUWvhaUjAYO8Y39OUYlOf3cRd+dowuQYLpZcP3uwSG+mURwjEBOSq4SOJ0g==", + "dev": true, + "hasInstallScript": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optionalDependencies": { + "@vscode/vsce-sign-alpine-arm64": "2.0.6", + "@vscode/vsce-sign-alpine-x64": "2.0.6", + "@vscode/vsce-sign-darwin-arm64": "2.0.6", + "@vscode/vsce-sign-darwin-x64": "2.0.6", + "@vscode/vsce-sign-linux-arm": "2.0.6", + "@vscode/vsce-sign-linux-arm64": "2.0.6", + "@vscode/vsce-sign-linux-x64": "2.0.6", + "@vscode/vsce-sign-win32-arm64": "2.0.6", + "@vscode/vsce-sign-win32-x64": "2.0.6" + } + }, + "node_modules/@vscode/vsce-sign-alpine-arm64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-alpine-arm64/-/vsce-sign-alpine-arm64-2.0.6.tgz", + "integrity": 
"sha512-wKkJBsvKF+f0GfsUuGT0tSW0kZL87QggEiqNqK6/8hvqsXvpx8OsTEc3mnE1kejkh5r+qUyQ7PtF8jZYN0mo8Q==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "alpine" + ] + }, + "node_modules/@vscode/vsce-sign-alpine-x64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-alpine-x64/-/vsce-sign-alpine-x64-2.0.6.tgz", + "integrity": "sha512-YoAGlmdK39vKi9jA18i4ufBbd95OqGJxRvF3n6ZbCyziwy3O+JgOpIUPxv5tjeO6gQfx29qBivQ8ZZTUF2Ba0w==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "alpine" + ] + }, + "node_modules/@vscode/vsce-sign-darwin-arm64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-darwin-arm64/-/vsce-sign-darwin-arm64-2.0.6.tgz", + "integrity": "sha512-5HMHaJRIQuozm/XQIiJiA0W9uhdblwwl2ZNDSSAeXGO9YhB9MH5C4KIHOmvyjUnKy4UCuiP43VKpIxW1VWP4tQ==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "darwin" + ] + }, + "node_modules/@vscode/vsce-sign-darwin-x64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-darwin-x64/-/vsce-sign-darwin-x64-2.0.6.tgz", + "integrity": "sha512-25GsUbTAiNfHSuRItoQafXOIpxlYj+IXb4/qarrXu7kmbH94jlm5sdWSCKrrREs8+GsXF1b+l3OB7VJy5jsykw==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "darwin" + ] + }, + "node_modules/@vscode/vsce-sign-linux-arm": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-linux-arm/-/vsce-sign-linux-arm-2.0.6.tgz", + "integrity": "sha512-UndEc2Xlq4HsuMPnwu7420uqceXjs4yb5W8E2/UkaHBB9OWCwMd3/bRe/1eLe3D8kPpxzcaeTyXiK3RdzS/1CA==", + "cpu": [ + "arm" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@vscode/vsce-sign-linux-arm64": { + "version": 
"2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-linux-arm64/-/vsce-sign-linux-arm64-2.0.6.tgz", + "integrity": "sha512-cfb1qK7lygtMa4NUl2582nP7aliLYuDEVpAbXJMkDq1qE+olIw/es+C8j1LJwvcRq1I2yWGtSn3EkDp9Dq5FdA==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@vscode/vsce-sign-linux-x64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-linux-x64/-/vsce-sign-linux-x64-2.0.6.tgz", + "integrity": "sha512-/olerl1A4sOqdP+hjvJ1sbQjKN07Y3DVnxO4gnbn/ahtQvFrdhUi0G1VsZXDNjfqmXw57DmPi5ASnj/8PGZhAA==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@vscode/vsce-sign-win32-arm64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-win32-arm64/-/vsce-sign-win32-arm64-2.0.6.tgz", + "integrity": "sha512-ivM/MiGIY0PJNZBoGtlRBM/xDpwbdlCWomUWuLmIxbi1Cxe/1nooYrEQoaHD8ojVRgzdQEUzMsRbyF5cJJgYOg==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/@vscode/vsce-sign-win32-x64": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/@vscode/vsce-sign-win32-x64/-/vsce-sign-win32-x64-2.0.6.tgz", + "integrity": "sha512-mgth9Kvze+u8CruYMmhHw6Zgy3GRX2S+Ed5oSokDEK5vPEwGGKnmuXua9tmFhomeAnhgJnL4DCna3TiNuGrBTQ==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "SEE LICENSE IN LICENSE.txt", + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/@vscode/vsce/node_modules/balanced-match": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", + "integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==", + "dev": true, + "license": "MIT" + }, + 
"node_modules/@vscode/vsce/node_modules/brace-expansion": { + "version": "1.1.12", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz", + "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==", + "dev": true, + "license": "MIT", + "dependencies": { + "balanced-match": "^1.0.0", + "concat-map": "0.0.1" + } + }, + "node_modules/@vscode/vsce/node_modules/minimatch": { + "version": "3.1.5", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.5.tgz", + "integrity": "sha512-VgjWUsnnT6n+NUk6eZq77zeFdpW2LWDzP6zFGrCbHXiYNul5Dzqk2HHQ5uFH2DNW5Xbp8+jVzaeNt94ssEEl4w==", + "dev": true, + "license": "ISC", + "dependencies": { + "brace-expansion": "^1.1.7" + }, + "engines": { + "node": "*" + } + }, + "node_modules/acorn": { + "version": "8.16.0", + "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.16.0.tgz", + "integrity": "sha512-UVJyE9MttOsBQIDKw1skb9nAwQuR5wuGD3+82K6JgJlm/Y+KI92oNsMNGZCYdDsVtRHSak0pcV5Dno5+4jh9sw==", + "dev": true, + "license": "MIT", + "bin": { + "acorn": "bin/acorn" + }, + "engines": { + "node": ">=0.4.0" + } + }, + "node_modules/acorn-jsx": { + "version": "5.3.2", + "resolved": "https://registry.npmjs.org/acorn-jsx/-/acorn-jsx-5.3.2.tgz", + "integrity": "sha512-rq9s+JNhf0IChjtDXxllJ7g41oZk5SlXtp0LHwyA5cejwn7vKmKp4pPri6YEePv2PU65sAsegbXtIinmDFDXgQ==", + "dev": true, + "license": "MIT", + "peerDependencies": { + "acorn": "^6.0.0 || ^7.0.0 || ^8.0.0" + } + }, + "node_modules/agent-base": { + "version": "7.1.4", + "resolved": "https://registry.npmjs.org/agent-base/-/agent-base-7.1.4.tgz", + "integrity": "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 14" + } + }, + "node_modules/ajv": { + "version": "6.14.0", + "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.14.0.tgz", + "integrity": 
"sha512-IWrosm/yrn43eiKqkfkHis7QioDleaXQHdDVPKg0FSwwd/DuvyX79TZnFOnYpB7dcsFAMmtFztZuXPDvSePkFw==", + "dev": true, + "license": "MIT", + "dependencies": { + "fast-deep-equal": "^3.1.1", + "fast-json-stable-stringify": "^2.0.0", + "json-schema-traverse": "^0.4.1", + "uri-js": "^4.2.2" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/epoberezkin" + } + }, + "node_modules/ansi-regex": { + "version": "6.2.2", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-6.2.2.tgz", + "integrity": "sha512-Bq3SmSpyFHaWjPk8If9yc6svM8c56dB5BAtW4Qbw5jHTwwXXcTLoRMkpDJp6VL0XzlWaCHTXrkFURMYmD0sLqg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/ansi-regex?sponsor=1" + } + }, + "node_modules/ansi-styles": { + "version": "3.2.1", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-3.2.1.tgz", + "integrity": "sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-convert": "^1.9.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/argparse": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz", + "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==", + "dev": true, + "license": "Python-2.0" + }, + "node_modules/asynckit": { + "version": "0.4.0", + "resolved": "https://registry.npmjs.org/asynckit/-/asynckit-0.4.0.tgz", + "integrity": "sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==", + "dev": true, + "license": "MIT" + }, + "node_modules/azure-devops-node-api": { + "version": "12.5.0", + "resolved": "https://registry.npmjs.org/azure-devops-node-api/-/azure-devops-node-api-12.5.0.tgz", + "integrity": 
"sha512-R5eFskGvOm3U/GzeAuxRkUsAl0hrAwGgWn6zAd2KrZmrEhWZVqLew4OOupbQlXUuojUzpGtq62SmdhJ06N88og==", + "dev": true, + "license": "MIT", + "dependencies": { + "tunnel": "0.0.6", + "typed-rest-client": "^1.8.4" + } + }, + "node_modules/balanced-match": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-4.0.4.tgz", + "integrity": "sha512-BLrgEcRTwX2o6gGxGOCNyMvGSp35YofuYzw9h1IMTRmKqttAZZVU67bdb9Pr2vUHA8+j3i2tJfjO6C6+4myGTA==", + "dev": true, + "license": "MIT", + "engines": { + "node": "18 || 20 || >=22" + } + }, + "node_modules/base64-js": { + "version": "1.5.1", + "resolved": "https://registry.npmjs.org/base64-js/-/base64-js-1.5.1.tgz", + "integrity": "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT", + "optional": true + }, + "node_modules/bl": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/bl/-/bl-4.1.0.tgz", + "integrity": "sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "buffer": "^5.5.0", + "inherits": "^2.0.4", + "readable-stream": "^3.4.0" + } + }, + "node_modules/boolbase": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/boolbase/-/boolbase-1.0.0.tgz", + "integrity": "sha512-JZOSA7Mo9sNGB8+UjSgzdLtokWAky1zbztM3WRLCbZ70/3cTANmQmOdR7y2g+J0e2WXywy1yS468tY+IruqEww==", + "dev": true, + "license": "ISC" + }, + "node_modules/brace-expansion": { + "version": "5.0.4", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-5.0.4.tgz", + "integrity": 
"sha512-h+DEnpVvxmfVefa4jFbCf5HdH5YMDXRsmKflpf1pILZWRFlTbJpxeU55nJl4Smt5HQaGzg1o6RHFPJaOqnmBDg==", + "dev": true, + "license": "MIT", + "dependencies": { + "balanced-match": "^4.0.2" + }, + "engines": { + "node": "18 || 20 || >=22" + } + }, + "node_modules/browser-stdout": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/browser-stdout/-/browser-stdout-1.3.1.tgz", + "integrity": "sha512-qhAVI1+Av2X7qelOfAIYwXONood6XlZE/fXaBSmW/T5SzLAmCgzi+eiWE7fUvbHaeNBQH13UftjpXxsfLkMpgw==", + "dev": true, + "license": "ISC" + }, + "node_modules/buffer": { + "version": "5.7.1", + "resolved": "https://registry.npmjs.org/buffer/-/buffer-5.7.1.tgz", + "integrity": "sha512-EHcyIPBQ4BSGlvjB16k5KgAJ27CIsHY/2JBmCRReo48y9rQ3MaUzWX3KVlBa4U7MyX02HdVj0K7C3WaB3ju7FQ==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT", + "optional": true, + "dependencies": { + "base64-js": "^1.3.1", + "ieee754": "^1.1.13" + } + }, + "node_modules/buffer-crc32": { + "version": "0.2.13", + "resolved": "https://registry.npmjs.org/buffer-crc32/-/buffer-crc32-0.2.13.tgz", + "integrity": "sha512-VO9Ht/+p3SN7SKWqcrgEzjGbRSJYTx+Q1pTQC0wrWqHx0vpJraQ6GtHx8tvcg1rlK1byhU5gccxgOgj7B0TDkQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": "*" + } + }, + "node_modules/buffer-equal-constant-time": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/buffer-equal-constant-time/-/buffer-equal-constant-time-1.0.1.tgz", + "integrity": "sha512-zRpUiDwd/xk6ADqPMATG8vc9VPrkck7T07OIx0gnjmJAnHnTVXNQG3vfvWNuiZIkwu9KrKdA1iJKfsfTVxE6NA==", + "dev": true, + "license": "BSD-3-Clause" + }, + "node_modules/bundle-name": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/bundle-name/-/bundle-name-4.1.0.tgz", + "integrity": 
"sha512-tjwM5exMg6BGRI+kNmTntNsvdZS1X8BFYS6tnJ2hdH0kVxM6/eVZ2xy+FqStSWvYmtfFMDLIxurorHwDKfDz5Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "run-applescript": "^7.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/call-bind-apply-helpers": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz", + "integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/call-bound": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.4.tgz", + "integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind-apply-helpers": "^1.0.2", + "get-intrinsic": "^1.3.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/camelcase": { + "version": "6.3.0", + "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-6.3.0.tgz", + "integrity": "sha512-Gmy6FhYlCY7uOElZUSbxo2UCDH8owEk996gkbrpsgGtrJLM3J7jGxl9Ic7Qwwj4ivOE5AWZWRMecDdF7hqGjFA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/chalk": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + }, + 
"engines": { + "node": ">=4" + } + }, + "node_modules/cheerio": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/cheerio/-/cheerio-1.2.0.tgz", + "integrity": "sha512-WDrybc/gKFpTYQutKIK6UvfcuxijIZfMfXaYm8NMsPQxSYvf+13fXUJ4rztGGbJcBQ/GF55gvrZ0Bc0bj/mqvg==", + "dev": true, + "license": "MIT", + "dependencies": { + "cheerio-select": "^2.1.0", + "dom-serializer": "^2.0.0", + "domhandler": "^5.0.3", + "domutils": "^3.2.2", + "encoding-sniffer": "^0.2.1", + "htmlparser2": "^10.1.0", + "parse5": "^7.3.0", + "parse5-htmlparser2-tree-adapter": "^7.1.0", + "parse5-parser-stream": "^7.1.2", + "undici": "^7.19.0", + "whatwg-mimetype": "^4.0.0" + }, + "engines": { + "node": ">=20.18.1" + }, + "funding": { + "url": "https://github.com/cheeriojs/cheerio?sponsor=1" + } + }, + "node_modules/cheerio-select": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/cheerio-select/-/cheerio-select-2.1.0.tgz", + "integrity": "sha512-9v9kG0LvzrlcungtnJtpGNxY+fzECQKhK4EGJX2vByejiMX84MFNQw4UxPJl3bFbTMw+Dfs37XaIkCwTZfLh4g==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "boolbase": "^1.0.0", + "css-select": "^5.1.0", + "css-what": "^6.1.0", + "domelementtype": "^2.3.0", + "domhandler": "^5.0.3", + "domutils": "^3.0.1" + }, + "funding": { + "url": "https://github.com/sponsors/fb55" + } + }, + "node_modules/chokidar": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-4.0.3.tgz", + "integrity": "sha512-Qgzu8kfBvo+cA4962jnP1KkS6Dop5NS6g7R5LFYJr4b8Ub94PPQXUksCw9PvXoeXPRRddRNC5C1JQUR2SMGtnA==", + "dev": true, + "license": "MIT", + "dependencies": { + "readdirp": "^4.0.1" + }, + "engines": { + "node": ">= 14.16.0" + }, + "funding": { + "url": "https://paulmillr.com/funding/" + } + }, + "node_modules/chownr": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/chownr/-/chownr-1.1.4.tgz", + "integrity": "sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg==", + 
"dev": true, + "license": "ISC", + "optional": true + }, + "node_modules/cli-cursor": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/cli-cursor/-/cli-cursor-5.0.0.tgz", + "integrity": "sha512-aCj4O5wKyszjMmDT4tZj93kxyydN/K5zPWSCe6/0AV/AA1pqe5ZBIw0a2ZfPQV7lL5/yb5HsUreJ6UFAF1tEQw==", + "dev": true, + "license": "MIT", + "dependencies": { + "restore-cursor": "^5.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/cli-spinners": { + "version": "2.9.2", + "resolved": "https://registry.npmjs.org/cli-spinners/-/cli-spinners-2.9.2.tgz", + "integrity": "sha512-ywqV+5MmyL4E7ybXgKys4DugZbX0FC6LnwrhjuykIjnK9k8OQacQ7axGKnjDXWNhns0xot3bZI5h55H8yo9cJg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/cliui": { + "version": "8.0.1", + "resolved": "https://registry.npmjs.org/cliui/-/cliui-8.0.1.tgz", + "integrity": "sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ==", + "dev": true, + "license": "ISC", + "dependencies": { + "string-width": "^4.2.0", + "strip-ansi": "^6.0.1", + "wrap-ansi": "^7.0.0" + }, + "engines": { + "node": ">=12" + } + }, + "node_modules/cliui/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/cliui/node_modules/emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true, + "license": "MIT" + }, + 
"node_modules/cliui/node_modules/string-width": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dev": true, + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/cliui/node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/cockatiel": { + "version": "3.2.1", + "resolved": "https://registry.npmjs.org/cockatiel/-/cockatiel-3.2.1.tgz", + "integrity": "sha512-gfrHV6ZPkquExvMh9IOkKsBzNDk6sDuZ6DdBGUBkvFnTCqCxzpuq48RySgP0AnaqQkw2zynOFj9yly6T1Q2G5Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=16" + } + }, + "node_modules/color-convert": { + "version": "1.9.3", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-1.9.3.tgz", + "integrity": "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-name": "1.1.3" + } + }, + "node_modules/color-name": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz", + "integrity": "sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw==", + "dev": true, + "license": "MIT" + }, + "node_modules/combined-stream": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-1.0.8.tgz", + "integrity": 
"sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==", + "dev": true, + "license": "MIT", + "dependencies": { + "delayed-stream": "~1.0.0" + }, + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/commander": { + "version": "6.2.1", + "resolved": "https://registry.npmjs.org/commander/-/commander-6.2.1.tgz", + "integrity": "sha512-U7VdrJFnJgo4xjrHpTzu0yrHPGImdsmD95ZlgYSEajAn2JKzDhDTPG9kBTefmObL2w/ngeZnilk+OV9CG3d7UA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 6" + } + }, + "node_modules/concat-map": { + "version": "0.0.1", + "resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz", + "integrity": "sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==", + "dev": true, + "license": "MIT" + }, + "node_modules/core-util-is": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.3.tgz", + "integrity": "sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/cross-spawn": { + "version": "7.0.6", + "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz", + "integrity": "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==", + "dev": true, + "license": "MIT", + "dependencies": { + "path-key": "^3.1.0", + "shebang-command": "^2.0.0", + "which": "^2.0.1" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/css-select": { + "version": "5.2.2", + "resolved": "https://registry.npmjs.org/css-select/-/css-select-5.2.2.tgz", + "integrity": "sha512-TizTzUddG/xYLA3NXodFM0fSbNizXjOKhqiQQwvhlspadZokn1KDy0NZFS0wuEubIYAV5/c1/lAr0TaaFXEXzw==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "boolbase": "^1.0.0", + "css-what": "^6.1.0", + "domhandler": "^5.0.2", + "domutils": "^3.0.1", + "nth-check": "^2.0.1" + }, + "funding": { + 
"url": "https://github.com/sponsors/fb55" + } + }, + "node_modules/css-what": { + "version": "6.2.2", + "resolved": "https://registry.npmjs.org/css-what/-/css-what-6.2.2.tgz", + "integrity": "sha512-u/O3vwbptzhMs3L1fQE82ZSLHQQfto5gyZzwteVIEyeaY5Fc7R4dapF/BvRoSYFeqfBk4m0V1Vafq5Pjv25wvA==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">= 6" + }, + "funding": { + "url": "https://github.com/sponsors/fb55" + } + }, + "node_modules/debug": { + "version": "4.4.3", + "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.3.tgz", + "integrity": "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==", + "dev": true, + "license": "MIT", + "dependencies": { + "ms": "^2.1.3" + }, + "engines": { + "node": ">=6.0" + }, + "peerDependenciesMeta": { + "supports-color": { + "optional": true + } + } + }, + "node_modules/decamelize": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/decamelize/-/decamelize-4.0.0.tgz", + "integrity": "sha512-9iE1PgSik9HeIIw2JO94IidnE3eBoQrFJ3w7sFuzSX4DpmZ3v5sZpUiV5Swcf6mQEF+Y0ru8Neo+p+nyh2J+hQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/decompress-response": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/decompress-response/-/decompress-response-6.0.0.tgz", + "integrity": "sha512-aW35yZM6Bb/4oJlZncMH2LCoZtJXTRxES17vE3hoRiowU2kWHaJKFkSBDnDR+cm9J+9QhXmREyIfv0pji9ejCQ==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "mimic-response": "^3.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/deep-extend": { + "version": "0.6.0", + "resolved": "https://registry.npmjs.org/deep-extend/-/deep-extend-0.6.0.tgz", + "integrity": 
"sha512-LOHxIOaPYdHlJRtCQfDIVZtfw/ufM8+rVj649RIHzcm/vGwQRXFt6OPqIFWsm2XEMrNIEtWR64sY1LEKD2vAOA==", + "dev": true, + "license": "MIT", + "optional": true, + "engines": { + "node": ">=4.0.0" + } + }, + "node_modules/deep-is": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/deep-is/-/deep-is-0.1.4.tgz", + "integrity": "sha512-oIPzksmTg4/MriiaYGO+okXDT7ztn/w3Eptv/+gSIdMdKsJo0u4CfYNFJPy+4SKMuCqGw2wxnA+URMg3t8a/bQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/default-browser": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/default-browser/-/default-browser-5.5.0.tgz", + "integrity": "sha512-H9LMLr5zwIbSxrmvikGuI/5KGhZ8E2zH3stkMgM5LpOWDutGM2JZaj460Udnf1a+946zc7YBgrqEWwbk7zHvGw==", + "dev": true, + "license": "MIT", + "dependencies": { + "bundle-name": "^4.1.0", + "default-browser-id": "^5.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/default-browser-id": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/default-browser-id/-/default-browser-id-5.0.1.tgz", + "integrity": "sha512-x1VCxdX4t+8wVfd1so/9w+vQ4vx7lKd2Qp5tDRutErwmR85OgmfX7RlLRMWafRMY7hbEiXIbudNrjOAPa/hL8Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/define-lazy-prop": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/define-lazy-prop/-/define-lazy-prop-3.0.0.tgz", + "integrity": "sha512-N+MeXYoqr3pOgn8xfyRPREN7gHakLYjhsHhWGT3fWAiL4IkAt0iDw14QiiEm2bE30c5XX5q0FtAA3CK5f9/BUg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/delayed-stream": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz", + "integrity": 
"sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.4.0" + } + }, + "node_modules/detect-libc": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/detect-libc/-/detect-libc-2.1.2.tgz", + "integrity": "sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==", + "dev": true, + "license": "Apache-2.0", + "optional": true, + "engines": { + "node": ">=8" + } + }, + "node_modules/diff": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/diff/-/diff-7.0.0.tgz", + "integrity": "sha512-PJWHUb1RFevKCwaFA9RlG5tCd+FO5iRh9A8HEtkmBH2Li03iJriB6m6JIN4rGz3K3JLawI7/veA1xzRKP6ISBw==", + "dev": true, + "license": "BSD-3-Clause", + "engines": { + "node": ">=0.3.1" + } + }, + "node_modules/dom-serializer": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/dom-serializer/-/dom-serializer-2.0.0.tgz", + "integrity": "sha512-wIkAryiqt/nV5EQKqQpo3SToSOV9J0DnbJqwK7Wv/Trc92zIAYZ4FlMu+JPFW1DfGFt81ZTCGgDEabffXeLyJg==", + "dev": true, + "license": "MIT", + "dependencies": { + "domelementtype": "^2.3.0", + "domhandler": "^5.0.2", + "entities": "^4.2.0" + }, + "funding": { + "url": "https://github.com/cheeriojs/dom-serializer?sponsor=1" + } + }, + "node_modules/domelementtype": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/domelementtype/-/domelementtype-2.3.0.tgz", + "integrity": "sha512-OLETBj6w0OsagBwdXnPdN0cnMfF9opN69co+7ZrbfPGrdpPVNBUj02spi6B1N7wChLQiPn4CSH/zJvXw56gmHw==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/fb55" + } + ], + "license": "BSD-2-Clause" + }, + "node_modules/domhandler": { + "version": "5.0.3", + "resolved": "https://registry.npmjs.org/domhandler/-/domhandler-5.0.3.tgz", + "integrity": "sha512-cgwlv/1iFQiFnU96XXgROh8xTeetsnJiDsTc7TYCLFd9+/WNkIqPTxiM/8pSd8VIrhXGTf1Ny1q1hquVqDJB5w==", + "dev": true, + "license": 
"BSD-2-Clause", + "dependencies": { + "domelementtype": "^2.3.0" + }, + "engines": { + "node": ">= 4" + }, + "funding": { + "url": "https://github.com/fb55/domhandler?sponsor=1" + } + }, + "node_modules/domutils": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/domutils/-/domutils-3.2.2.tgz", + "integrity": "sha512-6kZKyUajlDuqlHKVX1w7gyslj9MPIXzIFiz/rGu35uC1wMi+kMhQwGhl4lt9unC9Vb9INnY9Z3/ZA3+FhASLaw==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "dom-serializer": "^2.0.0", + "domelementtype": "^2.3.0", + "domhandler": "^5.0.3" + }, + "funding": { + "url": "https://github.com/fb55/domutils?sponsor=1" + } + }, + "node_modules/dunder-proto": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz", + "integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind-apply-helpers": "^1.0.1", + "es-errors": "^1.3.0", + "gopd": "^1.2.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/eastasianwidth": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/eastasianwidth/-/eastasianwidth-0.2.0.tgz", + "integrity": "sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA==", + "dev": true, + "license": "MIT" + }, + "node_modules/ecdsa-sig-formatter": { + "version": "1.0.11", + "resolved": "https://registry.npmjs.org/ecdsa-sig-formatter/-/ecdsa-sig-formatter-1.0.11.tgz", + "integrity": "sha512-nagl3RYrbNv6kQkeJIpt6NJZy8twLB/2vtz6yN9Z4vRKHN4/QZJIEbqohALSgwKdnksuY3k5Addp5lg8sVoVcQ==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "safe-buffer": "^5.0.1" + } + }, + "node_modules/emoji-regex": { + "version": "10.6.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-10.6.0.tgz", + "integrity": 
"sha512-toUI84YS5YmxW219erniWD0CIVOo46xGKColeNQRgOzDorgBi1v4D71/OFzgD9GO2UGKIv1C3Sp8DAn0+j5w7A==", + "dev": true, + "license": "MIT" + }, + "node_modules/encoding-sniffer": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/encoding-sniffer/-/encoding-sniffer-0.2.1.tgz", + "integrity": "sha512-5gvq20T6vfpekVtqrYQsSCFZ1wEg5+wW0/QaZMWkFr6BqD3NfKs0rLCx4rrVlSWJeZb5NBJgVLswK/w2MWU+Gw==", + "dev": true, + "license": "MIT", + "dependencies": { + "iconv-lite": "^0.6.3", + "whatwg-encoding": "^3.1.1" + }, + "funding": { + "url": "https://github.com/fb55/encoding-sniffer?sponsor=1" + } + }, + "node_modules/end-of-stream": { + "version": "1.4.5", + "resolved": "https://registry.npmjs.org/end-of-stream/-/end-of-stream-1.4.5.tgz", + "integrity": "sha512-ooEGc6HP26xXq/N+GCGOT0JKCLDGrq2bQUZrQ7gyrJiZANJ/8YDTxTpQBXGMn+WbIQXNVpyWymm7KYVICQnyOg==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "once": "^1.4.0" + } + }, + "node_modules/entities": { + "version": "4.5.0", + "resolved": "https://registry.npmjs.org/entities/-/entities-4.5.0.tgz", + "integrity": "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.12" + }, + "funding": { + "url": "https://github.com/fb55/entities?sponsor=1" + } + }, + "node_modules/es-define-property": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz", + "integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-errors": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz", + "integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==", + "dev": true, + "license": "MIT", + "engines": { 
+ "node": ">= 0.4" + } + }, + "node_modules/es-object-atoms": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz", + "integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-set-tostringtag": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz", + "integrity": "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.6", + "has-tostringtag": "^1.0.2", + "hasown": "^2.0.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/escalade": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.2.0.tgz", + "integrity": "sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/escape-string-regexp": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz", + "integrity": "sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.8.0" + } + }, + "node_modules/eslint": { + "version": "10.0.3", + "resolved": "https://registry.npmjs.org/eslint/-/eslint-10.0.3.tgz", + "integrity": "sha512-COV33RzXZkqhG9P2rZCFl9ZmJ7WL+gQSCRzE7RhkbclbQPtLAWReL7ysA0Sh4c8Im2U9ynybdR56PV0XcKvqaQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@eslint-community/eslint-utils": "^4.8.0", + "@eslint-community/regexpp": "^4.12.2", + "@eslint/config-array": "^0.23.3", + 
"@eslint/config-helpers": "^0.5.2", + "@eslint/core": "^1.1.1", + "@eslint/plugin-kit": "^0.6.1", + "@humanfs/node": "^0.16.6", + "@humanwhocodes/module-importer": "^1.0.1", + "@humanwhocodes/retry": "^0.4.2", + "@types/estree": "^1.0.6", + "ajv": "^6.14.0", + "cross-spawn": "^7.0.6", + "debug": "^4.3.2", + "escape-string-regexp": "^4.0.0", + "eslint-scope": "^9.1.2", + "eslint-visitor-keys": "^5.0.1", + "espree": "^11.1.1", + "esquery": "^1.7.0", + "esutils": "^2.0.2", + "fast-deep-equal": "^3.1.3", + "file-entry-cache": "^8.0.0", + "find-up": "^5.0.0", + "glob-parent": "^6.0.2", + "ignore": "^5.2.0", + "imurmurhash": "^0.1.4", + "is-glob": "^4.0.0", + "json-stable-stringify-without-jsonify": "^1.0.1", + "minimatch": "^10.2.4", + "natural-compare": "^1.4.0", + "optionator": "^0.9.3" + }, + "bin": { + "eslint": "bin/eslint.js" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + }, + "funding": { + "url": "https://eslint.org/donate" + }, + "peerDependencies": { + "jiti": "*" + }, + "peerDependenciesMeta": { + "jiti": { + "optional": true + } + } + }, + "node_modules/eslint-scope": { + "version": "9.1.2", + "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-9.1.2.tgz", + "integrity": "sha512-xS90H51cKw0jltxmvmHy2Iai1LIqrfbw57b79w/J7MfvDfkIkFZ+kj6zC3BjtUwh150HsSSdxXZcsuv72miDFQ==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "@types/esrecurse": "^4.3.1", + "@types/estree": "^1.0.8", + "esrecurse": "^4.3.0", + "estraverse": "^5.2.0" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/eslint-visitor-keys": { + "version": "3.4.3", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-3.4.3.tgz", + "integrity": "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^12.22.0 || ^14.17.0 
|| >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/eslint/node_modules/escape-string-regexp": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-4.0.0.tgz", + "integrity": "sha512-TtpcNJ3XAzx3Gq8sWRzJaVajRs0uVxA2YAkdb1jm2YkPz4G6egUFAyA3n5vtEIZefPk5Wa4UXbKuS5fKkJWdgA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/eslint/node_modules/eslint-visitor-keys": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-5.0.1.tgz", + "integrity": "sha512-tD40eHxA35h0PEIZNeIjkHoDR4YjjJp34biM0mDvplBe//mB+IHCqHDGV7pxF+7MklTvighcCPPZC7ynWyjdTA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/eslint/node_modules/ignore": { + "version": "5.3.2", + "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.3.2.tgz", + "integrity": "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 4" + } + }, + "node_modules/espree": { + "version": "11.2.0", + "resolved": "https://registry.npmjs.org/espree/-/espree-11.2.0.tgz", + "integrity": "sha512-7p3DrVEIopW1B1avAGLuCSh1jubc01H2JHc8B4qqGblmg5gI9yumBgACjWo4JlIc04ufug4xJ3SQI8HkS/Rgzw==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "acorn": "^8.16.0", + "acorn-jsx": "^5.3.2", + "eslint-visitor-keys": "^5.0.1" + }, + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/espree/node_modules/eslint-visitor-keys": { + "version": "5.0.1", + "resolved": 
"https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-5.0.1.tgz", + "integrity": "sha512-tD40eHxA35h0PEIZNeIjkHoDR4YjjJp34biM0mDvplBe//mB+IHCqHDGV7pxF+7MklTvighcCPPZC7ynWyjdTA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^20.19.0 || ^22.13.0 || >=24" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/esquery": { + "version": "1.7.0", + "resolved": "https://registry.npmjs.org/esquery/-/esquery-1.7.0.tgz", + "integrity": "sha512-Ap6G0WQwcU/LHsvLwON1fAQX9Zp0A2Y6Y/cJBl9r/JbW90Zyg4/zbG6zzKa2OTALELarYHmKu0GhpM5EO+7T0g==", + "dev": true, + "license": "BSD-3-Clause", + "dependencies": { + "estraverse": "^5.1.0" + }, + "engines": { + "node": ">=0.10" + } + }, + "node_modules/esrecurse": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz", + "integrity": "sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "estraverse": "^5.2.0" + }, + "engines": { + "node": ">=4.0" + } + }, + "node_modules/estraverse": { + "version": "5.3.0", + "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.3.0.tgz", + "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=4.0" + } + }, + "node_modules/esutils": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/esutils/-/esutils-2.0.3.tgz", + "integrity": "sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/expand-template": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/expand-template/-/expand-template-2.0.3.tgz", + "integrity": 
"sha512-XYfuKMvj4O35f/pOXLObndIRvyQ+/+6AhODh+OKWj9S9498pHHn/IMszH+gt0fBCRWMNfk1ZSp5x3AifmnI2vg==", + "dev": true, + "license": "(MIT OR WTFPL)", + "optional": true, + "engines": { + "node": ">=6" + } + }, + "node_modules/fast-deep-equal": { + "version": "3.1.3", + "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz", + "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==", + "dev": true, + "license": "MIT" + }, + "node_modules/fast-json-stable-stringify": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz", + "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==", + "dev": true, + "license": "MIT" + }, + "node_modules/fast-levenshtein": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/fast-levenshtein/-/fast-levenshtein-2.0.6.tgz", + "integrity": "sha512-DCXu6Ifhqcks7TZKY3Hxp3y6qphY5SJZmrWMDrKcERSOXWQdMhU9Ig/PYrzyw/ul9jOIyh0N4M0tbC5hodg8dw==", + "dev": true, + "license": "MIT" + }, + "node_modules/fd-slicer": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/fd-slicer/-/fd-slicer-1.1.0.tgz", + "integrity": "sha512-cE1qsB/VwyQozZ+q1dGxR8LBYNZeofhEdUNGSMbQD3Gw2lAzX9Zb3uIU6Ebc/Fmyjo9AWWfnn0AUCHqtevs/8g==", + "dev": true, + "license": "MIT", + "dependencies": { + "pend": "~1.2.0" + } + }, + "node_modules/fdir": { + "version": "6.5.0", + "resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz", + "integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12.0.0" + }, + "peerDependencies": { + "picomatch": "^3 || ^4" + }, + "peerDependenciesMeta": { + "picomatch": { + "optional": true + } + } + }, + "node_modules/file-entry-cache": { + "version": "8.0.0", + "resolved": 
"https://registry.npmjs.org/file-entry-cache/-/file-entry-cache-8.0.0.tgz", + "integrity": "sha512-XXTUwCvisa5oacNGRP9SfNtYBNAMi+RPwBFmblZEF7N7swHYQS6/Zfk7SRwx4D5j3CH211YNRco1DEMNVfZCnQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "flat-cache": "^4.0.0" + }, + "engines": { + "node": ">=16.0.0" + } + }, + "node_modules/find-up": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/find-up/-/find-up-5.0.0.tgz", + "integrity": "sha512-78/PXT1wlLLDgTzDs7sjq9hzz0vXD+zn+7wypEe4fXQxCmdmqfGsEPQxmiCSQI3ajFV91bVSsvNtrJRiW6nGng==", + "dev": true, + "license": "MIT", + "dependencies": { + "locate-path": "^6.0.0", + "path-exists": "^4.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/flat": { + "version": "5.0.2", + "resolved": "https://registry.npmjs.org/flat/-/flat-5.0.2.tgz", + "integrity": "sha512-b6suED+5/3rTpUBdG1gupIl8MPFCAMA0QXwmljLhvCUKcUvdE4gWky9zpuGCcXHOsz4J9wPGNWq6OKpmIzz3hQ==", + "dev": true, + "license": "BSD-3-Clause", + "bin": { + "flat": "cli.js" + } + }, + "node_modules/flat-cache": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/flat-cache/-/flat-cache-4.0.1.tgz", + "integrity": "sha512-f7ccFPK3SXFHpx15UIGyRJ/FJQctuKZ0zVuN3frBo4HnK3cay9VEW0R6yPYFHC0AgqhukPzKjq22t5DmAyqGyw==", + "dev": true, + "license": "MIT", + "dependencies": { + "flatted": "^3.2.9", + "keyv": "^4.5.4" + }, + "engines": { + "node": ">=16" + } + }, + "node_modules/flatted": { + "version": "3.3.4", + "resolved": "https://registry.npmjs.org/flatted/-/flatted-3.3.4.tgz", + "integrity": "sha512-3+mMldrTAPdta5kjX2G2J7iX4zxtnwpdA8Tr2ZSjkyPSanvbZAcy6flmtnXbEybHrDcU9641lxrMfFuUxVz9vA==", + "dev": true, + "license": "ISC" + }, + "node_modules/foreground-child": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/foreground-child/-/foreground-child-3.3.1.tgz", + "integrity": 
"sha512-gIXjKqtFuWEgzFRJA9WCQeSJLZDjgJUOMCMzxtvFq/37KojM1BFGufqsCy0r4qSQmYLsZYMeyRqzIWOMup03sw==", + "dev": true, + "license": "ISC", + "dependencies": { + "cross-spawn": "^7.0.6", + "signal-exit": "^4.0.1" + }, + "engines": { + "node": ">=14" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/form-data": { + "version": "4.0.5", + "resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.5.tgz", + "integrity": "sha512-8RipRLol37bNs2bhoV67fiTEvdTrbMUYcFTiy3+wuuOnUog2QBHCZWXDRijWQfAkhBj2Uf5UnVaiWwA5vdd82w==", + "dev": true, + "license": "MIT", + "dependencies": { + "asynckit": "^0.4.0", + "combined-stream": "^1.0.8", + "es-set-tostringtag": "^2.1.0", + "hasown": "^2.0.2", + "mime-types": "^2.1.12" + }, + "engines": { + "node": ">= 6" + } + }, + "node_modules/fs-constants": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/fs-constants/-/fs-constants-1.0.0.tgz", + "integrity": "sha512-y6OAwoSIf7FyjMIv94u+b5rdheZEjzR63GTyZJm5qh4Bi+2YgwLCcI/fPFZkL5PSixOt6ZNKm+w+Hfp/Bciwow==", + "dev": true, + "license": "MIT", + "optional": true + }, + "node_modules/fs.realpath": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz", + "integrity": "sha512-OO0pH2lK6a0hZnAdau5ItzHPI6pUlvI7jMVnxUQRtw4owF2wk8lOSabtGDCTP4Ggrg2MbGnWO9X8K1t4+fGMDw==", + "dev": true, + "license": "ISC" + }, + "node_modules/function-bind": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz", + "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==", + "dev": true, + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/get-caller-file": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz", + "integrity": 
"sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==", + "dev": true, + "license": "ISC", + "engines": { + "node": "6.* || 8.* || >= 10.*" + } + }, + "node_modules/get-east-asian-width": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/get-east-asian-width/-/get-east-asian-width-1.5.0.tgz", + "integrity": "sha512-CQ+bEO+Tva/qlmw24dCejulK5pMzVnUOFOijVogd3KQs07HnRIgp8TGipvCCRT06xeYEbpbgwaCxglFyiuIcmA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/get-intrinsic": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz", + "integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind-apply-helpers": "^1.0.2", + "es-define-property": "^1.0.1", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.1.1", + "function-bind": "^1.1.2", + "get-proto": "^1.0.1", + "gopd": "^1.2.0", + "has-symbols": "^1.1.0", + "hasown": "^2.0.2", + "math-intrinsics": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/get-proto": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz", + "integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==", + "dev": true, + "license": "MIT", + "dependencies": { + "dunder-proto": "^1.0.1", + "es-object-atoms": "^1.0.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/github-from-package": { + "version": "0.0.0", + "resolved": "https://registry.npmjs.org/github-from-package/-/github-from-package-0.0.0.tgz", + "integrity": "sha512-SyHy3T1v2NUXn29OsWdxmK6RwHD+vkj3v8en8AOBZ1wBQ/hCAQ5bAQTD02kW4W9tUp/3Qh6J8r9EvntiyCmOOw==", + 
"dev": true, + "license": "MIT", + "optional": true + }, + "node_modules/glob": { + "version": "7.2.3", + "resolved": "https://registry.npmjs.org/glob/-/glob-7.2.3.tgz", + "integrity": "sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q==", + "deprecated": "Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me", + "dev": true, + "license": "ISC", + "dependencies": { + "fs.realpath": "^1.0.0", + "inflight": "^1.0.4", + "inherits": "2", + "minimatch": "^3.1.1", + "once": "^1.3.0", + "path-is-absolute": "^1.0.0" + }, + "engines": { + "node": "*" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/glob-parent": { + "version": "6.0.2", + "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz", + "integrity": "sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A==", + "dev": true, + "license": "ISC", + "dependencies": { + "is-glob": "^4.0.3" + }, + "engines": { + "node": ">=10.13.0" + } + }, + "node_modules/glob/node_modules/balanced-match": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", + "integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==", + "dev": true, + "license": "MIT" + }, + "node_modules/glob/node_modules/brace-expansion": { + "version": "1.1.12", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz", + "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==", + "dev": true, + "license": "MIT", + "dependencies": { + "balanced-match": "^1.0.0", + "concat-map": "0.0.1" + } + }, + "node_modules/glob/node_modules/minimatch": { + 
"version": "3.1.5", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.5.tgz", + "integrity": "sha512-VgjWUsnnT6n+NUk6eZq77zeFdpW2LWDzP6zFGrCbHXiYNul5Dzqk2HHQ5uFH2DNW5Xbp8+jVzaeNt94ssEEl4w==", + "dev": true, + "license": "ISC", + "dependencies": { + "brace-expansion": "^1.1.7" + }, + "engines": { + "node": "*" + } + }, + "node_modules/gopd": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz", + "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/has-flag": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz", + "integrity": "sha512-sKJf1+ceQBr4SMkvQnBDNDtf4TXpVhVGateu0t918bl30FnbE2m4vNLX+VWe/dpjlb+HugGYzW7uQXH98HPEYw==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=4" + } + }, + "node_modules/has-symbols": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz", + "integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/has-tostringtag": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/has-tostringtag/-/has-tostringtag-1.0.2.tgz", + "integrity": "sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==", + "dev": true, + "license": "MIT", + "dependencies": { + "has-symbols": "^1.0.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/hasown": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz", + 
"integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/he": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/he/-/he-1.2.0.tgz", + "integrity": "sha512-F/1DnUGPopORZi0ni+CvrCgHQ5FyEAHRLSApuYWMmrbSwoN2Mn/7k+Gl38gJnR7yyDZk6WLXwiGod1JOWNDKGw==", + "dev": true, + "license": "MIT", + "bin": { + "he": "bin/he" + } + }, + "node_modules/hosted-git-info": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-4.1.0.tgz", + "integrity": "sha512-kyCuEOWjJqZuDbRHzL8V93NzQhwIB71oFWSyzVo+KPZI+pnQPPxucdkrOZvkLRnrf5URsQM+IJ09Dw29cRALIA==", + "dev": true, + "license": "ISC", + "dependencies": { + "lru-cache": "^6.0.0" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/htmlparser2": { + "version": "10.1.0", + "resolved": "https://registry.npmjs.org/htmlparser2/-/htmlparser2-10.1.0.tgz", + "integrity": "sha512-VTZkM9GWRAtEpveh7MSF6SjjrpNVNNVJfFup7xTY3UpFtm67foy9HDVXneLtFVt4pMz5kZtgNcvCniNFb1hlEQ==", + "dev": true, + "funding": [ + "https://github.com/fb55/htmlparser2?sponsor=1", + { + "type": "github", + "url": "https://github.com/sponsors/fb55" + } + ], + "license": "MIT", + "dependencies": { + "domelementtype": "^2.3.0", + "domhandler": "^5.0.3", + "domutils": "^3.2.2", + "entities": "^7.0.1" + } + }, + "node_modules/htmlparser2/node_modules/entities": { + "version": "7.0.1", + "resolved": "https://registry.npmjs.org/entities/-/entities-7.0.1.tgz", + "integrity": "sha512-TWrgLOFUQTH994YUyl1yT4uyavY5nNB5muff+RtWaqNVCAK408b5ZnnbNAUEWLTCpum9w6arT70i1XdQ4UeOPA==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.12" + }, + "funding": { + "url": "https://github.com/fb55/entities?sponsor=1" + } + }, + "node_modules/http-proxy-agent": { + "version": "7.0.2", + "resolved": 
"https://registry.npmjs.org/http-proxy-agent/-/http-proxy-agent-7.0.2.tgz", + "integrity": "sha512-T1gkAiYYDWYx3V5Bmyu7HcfcvL7mUrTWiM6yOfa3PIphViJ/gFPbvidQ+veqSOHci/PxBcDabeUNCzpOODJZig==", + "dev": true, + "license": "MIT", + "dependencies": { + "agent-base": "^7.1.0", + "debug": "^4.3.4" + }, + "engines": { + "node": ">= 14" + } + }, + "node_modules/https-proxy-agent": { + "version": "7.0.6", + "resolved": "https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-7.0.6.tgz", + "integrity": "sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw==", + "dev": true, + "license": "MIT", + "dependencies": { + "agent-base": "^7.1.2", + "debug": "4" + }, + "engines": { + "node": ">= 14" + } + }, + "node_modules/iconv-lite": { + "version": "0.6.3", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.6.3.tgz", + "integrity": "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==", + "dev": true, + "license": "MIT", + "dependencies": { + "safer-buffer": ">= 2.1.2 < 3.0.0" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/ieee754": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/ieee754/-/ieee754-1.2.1.tgz", + "integrity": "sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "BSD-3-Clause", + "optional": true + }, + "node_modules/ignore": { + "version": "7.0.5", + "resolved": "https://registry.npmjs.org/ignore/-/ignore-7.0.5.tgz", + "integrity": "sha512-Hs59xBNfUIunMFgWAbGX5cq6893IbWg4KnrjbYwX3tx0ztorVgTDA6B2sxf8ejHJ4wz8BqGUMYlnzNBer5NvGg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 4" + } + }, + 
"node_modules/immediate": { + "version": "3.0.6", + "resolved": "https://registry.npmjs.org/immediate/-/immediate-3.0.6.tgz", + "integrity": "sha512-XXOFtyqDjNDAQxVfYxuF7g9Il/IbWmmlQg2MYKOH8ExIT1qg6xc4zyS3HaEEATgs1btfzxq15ciUiY7gjSXRGQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/imurmurhash": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/imurmurhash/-/imurmurhash-0.1.4.tgz", + "integrity": "sha512-JmXMZ6wuvDmLiHEml9ykzqO6lwFbof0GG4IkcGaENdCRDDmMVnny7s5HsIgHCbaq0w2MyPhDqkhTUgS2LU2PHA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.8.19" + } + }, + "node_modules/inflight": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/inflight/-/inflight-1.0.6.tgz", + "integrity": "sha512-k92I/b08q4wvFscXCLvqfsHCrjrF7yiXsQuIVvVE7N82W3+aqpzuUdBbfhWcy/FZR3/4IgflMgKLOsvPDrGCJA==", + "deprecated": "This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.", + "dev": true, + "license": "ISC", + "dependencies": { + "once": "^1.3.0", + "wrappy": "1" + } + }, + "node_modules/inherits": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", + "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==", + "dev": true, + "license": "ISC" + }, + "node_modules/ini": { + "version": "1.3.8", + "resolved": "https://registry.npmjs.org/ini/-/ini-1.3.8.tgz", + "integrity": "sha512-JV/yugV2uzW5iMRSiZAyDtQd+nxtUnjeLt0acNdw98kKLrvuRVyB80tsREOE7yvGVgalhZ6RNXCmEHkUKBKxew==", + "dev": true, + "license": "ISC", + "optional": true + }, + "node_modules/is-docker": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-docker/-/is-docker-3.0.0.tgz", + "integrity": "sha512-eljcgEDlEns/7AXFosB5K/2nCM4P7FQPkGc/DWLy5rmFEWvZayGrik1d9/QIY5nJ4f9YsVvBkA6kJpHn9rISdQ==", + "dev": true, + 
"license": "MIT", + "bin": { + "is-docker": "cli.js" + }, + "engines": { + "node": "^12.20.0 || ^14.13.1 || >=16.0.0" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-extglob": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz", + "integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-fullwidth-code-point": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz", + "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/is-glob": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz", + "integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==", + "dev": true, + "license": "MIT", + "dependencies": { + "is-extglob": "^2.1.1" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-inside-container": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-inside-container/-/is-inside-container-1.0.0.tgz", + "integrity": "sha512-KIYLCCJghfHZxqjYBE7rEy0OBuTd5xCHS7tHVgvCLkx7StIoaxwNW3hCALgEUjFfeRk+MG/Qxmp/vtETEF3tRA==", + "dev": true, + "license": "MIT", + "dependencies": { + "is-docker": "^3.0.0" + }, + "bin": { + "is-inside-container": "cli.js" + }, + "engines": { + "node": ">=14.16" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-interactive": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/is-interactive/-/is-interactive-2.0.0.tgz", + "integrity": 
"sha512-qP1vozQRI+BMOPcjFzrjXuQvdak2pHNUMZoeG2eRbiSqyvbEf/wQtEOTOX1guk6E3t36RkaqiSt8A/6YElNxLQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-path-inside": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/is-path-inside/-/is-path-inside-3.0.3.tgz", + "integrity": "sha512-Fd4gABb+ycGAmKou8eMftCupSir5lRxqf4aD/vd0cD2qc4HL07OjCeuHMr8Ro4CoMaeCKDB0/ECBOVWjTwUvPQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/is-plain-obj": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-2.1.0.tgz", + "integrity": "sha512-YWnfyRwxL/+SsrWYfOpUtz5b3YD+nyfkHvjbcanzk8zgyO4ASD67uVMRt8k5bM4lLMDnXfriRhOpemw+NfT1eA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/is-unicode-supported": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/is-unicode-supported/-/is-unicode-supported-0.1.0.tgz", + "integrity": "sha512-knxG2q4UC3u8stRGyAVJCOdxFmv5DZiRcdlIaAQXAbSfJya+OhopNotLQrstBhququ4ZpuKbDc/8S6mgXgPFPw==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/is-wsl": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/is-wsl/-/is-wsl-3.1.1.tgz", + "integrity": "sha512-e6rvdUCiQCAuumZslxRJWR/Doq4VpPR82kqclvcS0efgt430SlGIk05vdCN58+VrzgtIcfNODjozVielycD4Sw==", + "dev": true, + "license": "MIT", + "dependencies": { + "is-inside-container": "^1.0.0" + }, + "engines": { + "node": ">=16" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/isarray": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz", + "integrity": 
"sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/isexe": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz", + "integrity": "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==", + "dev": true, + "license": "ISC" + }, + "node_modules/jackspeak": { + "version": "3.4.3", + "resolved": "https://registry.npmjs.org/jackspeak/-/jackspeak-3.4.3.tgz", + "integrity": "sha512-OGlZQpz2yfahA/Rd1Y8Cd9SIEsqvXkLVoSw/cgwhnhFMDbsQFeZYoJJ7bIZBS9BcamUW96asq/npPWugM+RQBw==", + "dev": true, + "license": "BlueOak-1.0.0", + "dependencies": { + "@isaacs/cliui": "^8.0.2" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + }, + "optionalDependencies": { + "@pkgjs/parseargs": "^0.11.0" + } + }, + "node_modules/js-yaml": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.1.tgz", + "integrity": "sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==", + "dev": true, + "license": "MIT", + "dependencies": { + "argparse": "^2.0.1" + }, + "bin": { + "js-yaml": "bin/js-yaml.js" + } + }, + "node_modules/json-buffer": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/json-buffer/-/json-buffer-3.0.1.tgz", + "integrity": "sha512-4bV5BfR2mqfQTJm+V5tPPdf+ZpuhiIvTuAB5g8kcrXOZpTT/QwwVRWBywX1ozr6lEuPdbHxwaJlm9G6mI2sfSQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/json-schema-traverse": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", + "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==", + "dev": true, + "license": "MIT" + }, + "node_modules/json-stable-stringify-without-jsonify": { + "version": "1.0.1", + "resolved": 
"https://registry.npmjs.org/json-stable-stringify-without-jsonify/-/json-stable-stringify-without-jsonify-1.0.1.tgz", + "integrity": "sha512-Bdboy+l7tA3OGW6FjyFHWkP5LuByj1Tk33Ljyq0axyzdk9//JSi2u3fP1QSmd1KNwq6VOKYGlAu87CisVir6Pw==", + "dev": true, + "license": "MIT" + }, + "node_modules/jsonc-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/jsonc-parser/-/jsonc-parser-3.3.1.tgz", + "integrity": "sha512-HUgH65KyejrUFPvHFPbqOY0rsFip3Bo5wb4ngvdi1EpCYWUQDC5V+Y7mZws+DLkr4M//zQJoanu1SP+87Dv1oQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/jsonwebtoken": { + "version": "9.0.3", + "resolved": "https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-9.0.3.tgz", + "integrity": "sha512-MT/xP0CrubFRNLNKvxJ2BYfy53Zkm++5bX9dtuPbqAeQpTVe0MQTFhao8+Cp//EmJp244xt6Drw/GVEGCUj40g==", + "dev": true, + "license": "MIT", + "dependencies": { + "jws": "^4.0.1", + "lodash.includes": "^4.3.0", + "lodash.isboolean": "^3.0.3", + "lodash.isinteger": "^4.0.4", + "lodash.isnumber": "^3.0.3", + "lodash.isplainobject": "^4.0.6", + "lodash.isstring": "^4.0.1", + "lodash.once": "^4.0.0", + "ms": "^2.1.1", + "semver": "^7.5.4" + }, + "engines": { + "node": ">=12", + "npm": ">=6" + } + }, + "node_modules/jszip": { + "version": "3.10.1", + "resolved": "https://registry.npmjs.org/jszip/-/jszip-3.10.1.tgz", + "integrity": "sha512-xXDvecyTpGLrqFrvkrUSoxxfJI5AH7U8zxxtVclpsUtMCq4JQ290LY8AW5c7Ggnr/Y/oK+bQMbqK2qmtk3pN4g==", + "dev": true, + "license": "(MIT OR GPL-3.0-or-later)", + "dependencies": { + "lie": "~3.3.0", + "pako": "~1.0.2", + "readable-stream": "~2.3.6", + "setimmediate": "^1.0.5" + } + }, + "node_modules/jszip/node_modules/readable-stream": { + "version": "2.3.8", + "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.8.tgz", + "integrity": "sha512-8p0AUk4XODgIewSi0l8Epjs+EVnWiK7NoDIEGU0HhE7+ZyY8D1IMY7odu5lRrFXGg71L15KG8QrPmum45RTtdA==", + "dev": true, + "license": "MIT", + "dependencies": { + "core-util-is": "~1.0.0", + "inherits": 
"~2.0.3", + "isarray": "~1.0.0", + "process-nextick-args": "~2.0.0", + "safe-buffer": "~5.1.1", + "string_decoder": "~1.1.1", + "util-deprecate": "~1.0.1" + } + }, + "node_modules/jszip/node_modules/safe-buffer": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz", + "integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==", + "dev": true, + "license": "MIT" + }, + "node_modules/jszip/node_modules/string_decoder": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz", + "integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==", + "dev": true, + "license": "MIT", + "dependencies": { + "safe-buffer": "~5.1.0" + } + }, + "node_modules/jwa": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/jwa/-/jwa-2.0.1.tgz", + "integrity": "sha512-hRF04fqJIP8Abbkq5NKGN0Bbr3JxlQ+qhZufXVr0DvujKy93ZCbXZMHDL4EOtodSbCWxOqR8MS1tXA5hwqCXDg==", + "dev": true, + "license": "MIT", + "dependencies": { + "buffer-equal-constant-time": "^1.0.1", + "ecdsa-sig-formatter": "1.0.11", + "safe-buffer": "^5.0.1" + } + }, + "node_modules/jws": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/jws/-/jws-4.0.1.tgz", + "integrity": "sha512-EKI/M/yqPncGUUh44xz0PxSidXFr/+r0pA70+gIYhjv+et7yxM+s29Y+VGDkovRofQem0fs7Uvf4+YmAdyRduA==", + "dev": true, + "license": "MIT", + "dependencies": { + "jwa": "^2.0.1", + "safe-buffer": "^5.0.1" + } + }, + "node_modules/keytar": { + "version": "7.9.0", + "resolved": "https://registry.npmjs.org/keytar/-/keytar-7.9.0.tgz", + "integrity": "sha512-VPD8mtVtm5JNtA2AErl6Chp06JBfy7diFQ7TQQhdpWOl6MrCRB+eRbvAZUsbGQS9kiMq0coJsy0W0vHpDCkWsQ==", + "dev": true, + "hasInstallScript": true, + "license": "MIT", + "optional": true, + "dependencies": { + "node-addon-api": "^4.3.0", + "prebuild-install": "^7.0.1" + } + }, + "node_modules/keyv": { + 
"version": "4.5.4", + "resolved": "https://registry.npmjs.org/keyv/-/keyv-4.5.4.tgz", + "integrity": "sha512-oxVHkHR/EJf2CNXnWxRLW6mg7JyCCUcG0DtEGmL2ctUo1PNTin1PUil+r/+4r5MpVgC/fn1kjsx7mjSujKqIpw==", + "dev": true, + "license": "MIT", + "dependencies": { + "json-buffer": "3.0.1" + } + }, + "node_modules/leven": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/leven/-/leven-3.1.0.tgz", + "integrity": "sha512-qsda+H8jTaUaN/x5vzW2rzc+8Rw4TAQ/4KjB46IwK5VH+IlVeeeje/EoZRpiXvIqjFgK84QffqPztGI3VBLG1A==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/levn": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/levn/-/levn-0.4.1.tgz", + "integrity": "sha512-+bT2uH4E5LGE7h/n3evcS/sQlJXCpIp6ym8OWJ5eV6+67Dsql/LaaT7qJBAt2rzfoa/5QBGBhxDix1dMt2kQKQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "prelude-ls": "^1.2.1", + "type-check": "~0.4.0" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/lie": { + "version": "3.3.0", + "resolved": "https://registry.npmjs.org/lie/-/lie-3.3.0.tgz", + "integrity": "sha512-UaiMJzeWRlEujzAuw5LokY1L5ecNQYZKfmyZ9L7wDHb/p5etKaxXhohBcrw0EYby+G/NA52vRSN4N39dxHAIwQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "immediate": "~3.0.5" + } + }, + "node_modules/linkify-it": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/linkify-it/-/linkify-it-3.0.3.tgz", + "integrity": "sha512-ynTsyrFSdE5oZ/O9GEf00kPngmOfVwazR5GKDq6EYfhlpFug3J2zybX56a2PRRpc9P+FuSoGNAwjlbDs9jJBPQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "uc.micro": "^1.0.1" + } + }, + "node_modules/locate-path": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-6.0.0.tgz", + "integrity": "sha512-iPZK6eYjbxRu3uB4/WZ3EsEIMJFMqAoopl3R+zuq0UjcAm/MO6KCweDgPfP3elTztoKP3KtnVHxTn2NHBSDVUw==", + "dev": true, + "license": "MIT", + "dependencies": { + "p-locate": "^5.0.0" + }, + "engines": { + "node": ">=10" + }, + 
"funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/lodash.includes": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/lodash.includes/-/lodash.includes-4.3.0.tgz", + "integrity": "sha512-W3Bx6mdkRTGtlJISOvVD/lbqjTlPPUDTMnlXZFnVwi9NKJ6tiAk6LVdlhZMm17VZisqhKcgzpO5Wz91PCt5b0w==", + "dev": true, + "license": "MIT" + }, + "node_modules/lodash.isboolean": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/lodash.isboolean/-/lodash.isboolean-3.0.3.tgz", + "integrity": "sha512-Bz5mupy2SVbPHURB98VAcw+aHh4vRV5IPNhILUCsOzRmsTmSQ17jIuqopAentWoehktxGd9e/hbIXq980/1QJg==", + "dev": true, + "license": "MIT" + }, + "node_modules/lodash.isinteger": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/lodash.isinteger/-/lodash.isinteger-4.0.4.tgz", + "integrity": "sha512-DBwtEWN2caHQ9/imiNeEA5ys1JoRtRfY3d7V9wkqtbycnAmTvRRmbHKDV4a0EYc678/dia0jrte4tjYwVBaZUA==", + "dev": true, + "license": "MIT" + }, + "node_modules/lodash.isnumber": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/lodash.isnumber/-/lodash.isnumber-3.0.3.tgz", + "integrity": "sha512-QYqzpfwO3/CWf3XP+Z+tkQsfaLL/EnUlXWVkIk5FUPc4sBdTehEqZONuyRt2P67PXAk+NXmTBcc97zw9t1FQrw==", + "dev": true, + "license": "MIT" + }, + "node_modules/lodash.isplainobject": { + "version": "4.0.6", + "resolved": "https://registry.npmjs.org/lodash.isplainobject/-/lodash.isplainobject-4.0.6.tgz", + "integrity": "sha512-oSXzaWypCMHkPC3NvBEaPHf0KsA5mvPrOPgQWDsbg8n7orZ290M0BmC/jgRZ4vcJ6DTAhjrsSYgdsW/F+MFOBA==", + "dev": true, + "license": "MIT" + }, + "node_modules/lodash.isstring": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/lodash.isstring/-/lodash.isstring-4.0.1.tgz", + "integrity": "sha512-0wJxfxH1wgO3GrbuP+dTTk7op+6L41QCXbGINEmD+ny/G/eCqGzxyCsh7159S+mgDDcoarnBw6PC1PS5+wUGgw==", + "dev": true, + "license": "MIT" + }, + "node_modules/lodash.once": { + "version": "4.1.1", + "resolved": 
"https://registry.npmjs.org/lodash.once/-/lodash.once-4.1.1.tgz", + "integrity": "sha512-Sb487aTOCr9drQVL8pIxOzVhafOjZN9UU54hiN8PU3uAiSV7lx1yYNpbNmex2PK6dSJoNTSJUUswT651yww3Mg==", + "dev": true, + "license": "MIT" + }, + "node_modules/log-symbols": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/log-symbols/-/log-symbols-4.1.0.tgz", + "integrity": "sha512-8XPvpAA8uyhfteu8pIvQxpJZ7SYYdpUivZpGy6sFsBuKRY/7rQGavedeB8aK+Zkyq6upMFVL/9AW6vOYzfRyLg==", + "dev": true, + "license": "MIT", + "dependencies": { + "chalk": "^4.1.0", + "is-unicode-supported": "^0.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/log-symbols/node_modules/ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-convert": "^2.0.1" + }, + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/log-symbols/node_modules/chalk": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz", + "integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^4.1.0", + "supports-color": "^7.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/chalk?sponsor=1" + } + }, + "node_modules/log-symbols/node_modules/color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "license": "MIT", + "dependencies": { + 
"color-name": "~1.1.4" + }, + "engines": { + "node": ">=7.0.0" + } + }, + "node_modules/log-symbols/node_modules/color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true, + "license": "MIT" + }, + "node_modules/log-symbols/node_modules/has-flag": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/log-symbols/node_modules/supports-color": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", + "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==", + "dev": true, + "license": "MIT", + "dependencies": { + "has-flag": "^4.0.0" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dev": true, + "license": "ISC", + "dependencies": { + "yallist": "^4.0.0" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/markdown-it": { + "version": "12.3.2", + "resolved": "https://registry.npmjs.org/markdown-it/-/markdown-it-12.3.2.tgz", + "integrity": "sha512-TchMembfxfNVpHkbtriWltGWc+m3xszaRD0CZup7GFFhzIgQqxIfn3eGj1yZpfuflzPvfkt611B2Q/Bsk1YnGg==", + "dev": true, + "license": "MIT", + "dependencies": { + "argparse": "^2.0.1", + "entities": "~2.1.0", + "linkify-it": "^3.0.1", + "mdurl": "^1.0.1", + "uc.micro": "^1.0.5" + }, + "bin": { + "markdown-it": "bin/markdown-it.js" + } + }, + 
"node_modules/markdown-it/node_modules/entities": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/entities/-/entities-2.1.0.tgz", + "integrity": "sha512-hCx1oky9PFrJ611mf0ifBLBRW8lUUVRlFolb5gWRfIELabBlbp9xZvrqZLZAs+NxFnbfQoeGd8wDkygjg7U85w==", + "dev": true, + "license": "BSD-2-Clause", + "funding": { + "url": "https://github.com/fb55/entities?sponsor=1" + } + }, + "node_modules/math-intrinsics": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz", + "integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/mdurl": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/mdurl/-/mdurl-1.0.1.tgz", + "integrity": "sha512-/sKlQJCBYVY9Ers9hqzKou4H6V5UWc/M59TH2dvkt+84itfnq7uFOMLpOiOS4ujvHP4etln18fmIxA5R5fll0g==", + "dev": true, + "license": "MIT" + }, + "node_modules/mime": { + "version": "1.6.0", + "resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz", + "integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==", + "dev": true, + "license": "MIT", + "bin": { + "mime": "cli.js" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/mime-db": { + "version": "1.52.0", + "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz", + "integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/mime-types": { + "version": "2.1.35", + "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz", + "integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==", + "dev": true, + "license": "MIT", + "dependencies": { + "mime-db": "1.52.0" + }, + "engines": { + "node": 
">= 0.6" + } + }, + "node_modules/mimic-function": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/mimic-function/-/mimic-function-5.0.1.tgz", + "integrity": "sha512-VP79XUPxV2CigYP3jWwAUFSku2aKqBH7uTAapFWCBqutsbmDo96KY5o8uh6U+/YSIn5OxJnXp73beVkpqMIGhA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/mimic-response": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/mimic-response/-/mimic-response-3.1.0.tgz", + "integrity": "sha512-z0yWI+4FDrrweS8Zmt4Ej5HdJmky15+L2e6Wgn3+iK5fWzb6T3fhNFq2+MeTRb064c6Wr4N/wv0DzQTjNzHNGQ==", + "dev": true, + "license": "MIT", + "optional": true, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/minimatch": { + "version": "10.2.4", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-10.2.4.tgz", + "integrity": "sha512-oRjTw/97aTBN0RHbYCdtF1MQfvusSIBQM0IZEgzl6426+8jSC0nF1a/GmnVLpfB9yyr6g6FTqWqiZVbxrtaCIg==", + "dev": true, + "license": "BlueOak-1.0.0", + "dependencies": { + "brace-expansion": "^5.0.2" + }, + "engines": { + "node": "18 || 20 || >=22" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/minimist": { + "version": "1.2.8", + "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.8.tgz", + "integrity": "sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA==", + "dev": true, + "license": "MIT", + "optional": true, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/minipass": { + "version": "7.1.3", + "resolved": "https://registry.npmjs.org/minipass/-/minipass-7.1.3.tgz", + "integrity": "sha512-tEBHqDnIoM/1rXME1zgka9g6Q2lcoCkxHLuc7ODJ5BxbP5d4c2Z5cGgtXAku59200Cx7diuHTOYfSBD8n6mm8A==", + "dev": true, + "license": "BlueOak-1.0.0", + "engines": { + "node": ">=16 || 14 
>=14.17" + } + }, + "node_modules/mkdirp-classic": { + "version": "0.5.3", + "resolved": "https://registry.npmjs.org/mkdirp-classic/-/mkdirp-classic-0.5.3.tgz", + "integrity": "sha512-gKLcREMhtuZRwRAfqP3RFW+TK4JqApVBtOIftVgjuABpAtpxhPGaDcfvbhNvD0B8iD1oUr/txX35NjcaY6Ns/A==", + "dev": true, + "license": "MIT", + "optional": true + }, + "node_modules/mocha": { + "version": "11.7.5", + "resolved": "https://registry.npmjs.org/mocha/-/mocha-11.7.5.tgz", + "integrity": "sha512-mTT6RgopEYABzXWFx+GcJ+ZQ32kp4fMf0xvpZIIfSq9Z8lC/++MtcCnQ9t5FP2veYEP95FIYSvW+U9fV4xrlig==", + "dev": true, + "license": "MIT", + "dependencies": { + "browser-stdout": "^1.3.1", + "chokidar": "^4.0.1", + "debug": "^4.3.5", + "diff": "^7.0.0", + "escape-string-regexp": "^4.0.0", + "find-up": "^5.0.0", + "glob": "^10.4.5", + "he": "^1.2.0", + "is-path-inside": "^3.0.3", + "js-yaml": "^4.1.0", + "log-symbols": "^4.1.0", + "minimatch": "^9.0.5", + "ms": "^2.1.3", + "picocolors": "^1.1.1", + "serialize-javascript": "^6.0.2", + "strip-json-comments": "^3.1.1", + "supports-color": "^8.1.1", + "workerpool": "^9.2.0", + "yargs": "^17.7.2", + "yargs-parser": "^21.1.1", + "yargs-unparser": "^2.0.0" + }, + "bin": { + "_mocha": "bin/_mocha", + "mocha": "bin/mocha.js" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/mocha/node_modules/balanced-match": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", + "integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==", + "dev": true, + "license": "MIT" + }, + "node_modules/mocha/node_modules/brace-expansion": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.2.tgz", + "integrity": "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "balanced-match": "^1.0.0" + } + }, + 
"node_modules/mocha/node_modules/escape-string-regexp": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-4.0.0.tgz", + "integrity": "sha512-TtpcNJ3XAzx3Gq8sWRzJaVajRs0uVxA2YAkdb1jm2YkPz4G6egUFAyA3n5vtEIZefPk5Wa4UXbKuS5fKkJWdgA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/mocha/node_modules/glob": { + "version": "10.5.0", + "resolved": "https://registry.npmjs.org/glob/-/glob-10.5.0.tgz", + "integrity": "sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg==", + "deprecated": "Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me", + "dev": true, + "license": "ISC", + "dependencies": { + "foreground-child": "^3.1.0", + "jackspeak": "^3.1.2", + "minimatch": "^9.0.4", + "minipass": "^7.1.2", + "package-json-from-dist": "^1.0.0", + "path-scurry": "^1.11.1" + }, + "bin": { + "glob": "dist/esm/bin.mjs" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/mocha/node_modules/has-flag": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/mocha/node_modules/minimatch": { + "version": "9.0.9", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-9.0.9.tgz", + "integrity": "sha512-OBwBN9AL4dqmETlpS2zasx+vTeWclWzkblfZk7KTA5j3jeOONz/tRCnZomUyvNg83wL5Zv9Ss6HMJXAgL8R2Yg==", + "dev": true, + "license": "ISC", + "dependencies": { + "brace-expansion": "^2.0.2" + }, + "engines": { + 
"node": ">=16 || 14 >=14.17" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/mocha/node_modules/strip-json-comments": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-3.1.1.tgz", + "integrity": "sha512-6fPc+R4ihwqP6N/aIv2f1gMH8lOVtWQHoqC4yK6oSDVVocumAsfCqjkXnqiYMhmMwS/mEHLp7Vehlt3ql6lEig==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/mocha/node_modules/supports-color": { + "version": "8.1.1", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-8.1.1.tgz", + "integrity": "sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "has-flag": "^4.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/supports-color?sponsor=1" + } + }, + "node_modules/ms": { + "version": "2.1.3", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", + "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==", + "dev": true, + "license": "MIT" + }, + "node_modules/mute-stream": { + "version": "0.0.8", + "resolved": "https://registry.npmjs.org/mute-stream/-/mute-stream-0.0.8.tgz", + "integrity": "sha512-nnbWWOkoWyUsTjKrhgD0dcz22mdkSnpYqbEjIm2nhwhuxlSkpywJmBo8h0ZqJdkp73mb90SssHkN4rsRaBAfAA==", + "dev": true, + "license": "ISC" + }, + "node_modules/napi-build-utils": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/napi-build-utils/-/napi-build-utils-2.0.0.tgz", + "integrity": "sha512-GEbrYkbfF7MoNaoh2iGG84Mnf/WZfB0GdGEsM8wz7Expx/LlWf5U8t9nvJKXSp3qr5IsEbK04cBGhol/KwOsWA==", + "dev": true, + "license": "MIT", + "optional": true + }, + "node_modules/natural-compare": { + "version": "1.4.0", + "resolved": 
"https://registry.npmjs.org/natural-compare/-/natural-compare-1.4.0.tgz", + "integrity": "sha512-OWND8ei3VtNC9h7V60qff3SVobHr996CTwgxubgyQYEpg290h9J0buyECNNJexkFm5sOajh5G116RYA1c8ZMSw==", + "dev": true, + "license": "MIT" + }, + "node_modules/node-abi": { + "version": "3.87.0", + "resolved": "https://registry.npmjs.org/node-abi/-/node-abi-3.87.0.tgz", + "integrity": "sha512-+CGM1L1CgmtheLcBuleyYOn7NWPVu0s0EJH2C4puxgEZb9h8QpR9G2dBfZJOAUhi7VQxuBPMd0hiISWcTyiYyQ==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "semver": "^7.3.5" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/node-addon-api": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/node-addon-api/-/node-addon-api-4.3.0.tgz", + "integrity": "sha512-73sE9+3UaLYYFmDsFZnqCInzPyh3MqIwZO9cw58yIqAZhONrrabrYyYe3TuIqtIiOuTXVhsGau8hcrhhwSsDIQ==", + "dev": true, + "license": "MIT", + "optional": true + }, + "node_modules/nth-check": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/nth-check/-/nth-check-2.1.1.tgz", + "integrity": "sha512-lqjrjmaOoAnWfMmBPL+XNnynZh2+swxiX3WUE0s4yEHI6m+AwrK2UZOimIRl3X/4QctVqS8AiZjFqyOGrMXb/w==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "boolbase": "^1.0.0" + }, + "funding": { + "url": "https://github.com/fb55/nth-check?sponsor=1" + } + }, + "node_modules/object-inspect": { + "version": "1.13.4", + "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz", + "integrity": "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/once": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz", + "integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==", + "dev": true, + "license": 
"ISC", + "dependencies": { + "wrappy": "1" + } + }, + "node_modules/onetime": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/onetime/-/onetime-7.0.0.tgz", + "integrity": "sha512-VXJjc87FScF88uafS3JllDgvAm+c/Slfz06lorj2uAY34rlUu0Nt+v8wreiImcrgAjjIHp1rXpTDlLOGw29WwQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "mimic-function": "^5.0.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/open": { + "version": "10.2.0", + "resolved": "https://registry.npmjs.org/open/-/open-10.2.0.tgz", + "integrity": "sha512-YgBpdJHPyQ2UE5x+hlSXcnejzAvD0b22U2OuAP+8OnlJT+PjWPxtgmGqKKc+RgTM63U9gN0YzrYc71R2WT/hTA==", + "dev": true, + "license": "MIT", + "dependencies": { + "default-browser": "^5.2.1", + "define-lazy-prop": "^3.0.0", + "is-inside-container": "^1.0.0", + "wsl-utils": "^0.1.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/optionator": { + "version": "0.9.4", + "resolved": "https://registry.npmjs.org/optionator/-/optionator-0.9.4.tgz", + "integrity": "sha512-6IpQ7mKUxRcZNLIObR0hz7lxsapSSIYNZJwXPGeF0mTVqGKFIXj1DQcMoT22S3ROcLyY/rz0PWaWZ9ayWmad9g==", + "dev": true, + "license": "MIT", + "dependencies": { + "deep-is": "^0.1.3", + "fast-levenshtein": "^2.0.6", + "levn": "^0.4.1", + "prelude-ls": "^1.2.1", + "type-check": "^0.4.0", + "word-wrap": "^1.2.5" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/ora": { + "version": "8.2.0", + "resolved": "https://registry.npmjs.org/ora/-/ora-8.2.0.tgz", + "integrity": "sha512-weP+BZ8MVNnlCm8c0Qdc1WSWq4Qn7I+9CJGm7Qali6g44e/PUzbjNqJX5NJ9ljlNMosfJvg1fKEGILklK9cwnw==", + "dev": true, + "license": "MIT", + "dependencies": { + "chalk": "^5.3.0", + "cli-cursor": "^5.0.0", + "cli-spinners": "^2.9.2", + "is-interactive": "^2.0.0", + "is-unicode-supported": "^2.0.0", + "log-symbols": "^6.0.0", + "stdin-discarder": "^0.2.2", + 
"string-width": "^7.2.0", + "strip-ansi": "^7.1.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/ora/node_modules/chalk": { + "version": "5.6.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-5.6.2.tgz", + "integrity": "sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^12.17.0 || ^14.13 || >=16.0.0" + }, + "funding": { + "url": "https://github.com/chalk/chalk?sponsor=1" + } + }, + "node_modules/ora/node_modules/is-unicode-supported": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/is-unicode-supported/-/is-unicode-supported-2.1.0.tgz", + "integrity": "sha512-mE00Gnza5EEB3Ds0HfMyllZzbBrmLOX3vfWoj9A9PEnTfratQ/BcaJOuMhnkhjXvb2+FkY3VuHqtAGpTPmglFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/ora/node_modules/log-symbols": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/log-symbols/-/log-symbols-6.0.0.tgz", + "integrity": "sha512-i24m8rpwhmPIS4zscNzK6MSEhk0DUWa/8iYQWxhffV8jkI4Phvs3F+quL5xvS0gdQR0FyTCMMH33Y78dDTzzIw==", + "dev": true, + "license": "MIT", + "dependencies": { + "chalk": "^5.3.0", + "is-unicode-supported": "^1.3.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/ora/node_modules/log-symbols/node_modules/is-unicode-supported": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/is-unicode-supported/-/is-unicode-supported-1.3.0.tgz", + "integrity": "sha512-43r2mRvz+8JRIKnWJ+3j8JtjRKZ6GmjzfaE/qiBJnikNnYv/6bagRJ1kUhNk8R5EX/GkobD+r+sfxCPJsiKBLQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + 
"node_modules/p-limit": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-3.1.0.tgz", + "integrity": "sha512-TYOanM3wGwNGsZN2cVTYPArw454xnXj5qmWF1bEoAc4+cU/ol7GVh7odevjp1FNHduHc3KZMcFduxU5Xc6uJRQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "yocto-queue": "^0.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/p-locate": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-5.0.0.tgz", + "integrity": "sha512-LaNjtRWUBY++zB5nE/NwcaoMylSPk+S+ZHNB1TzdbMJMny6dynpAGt7X/tl/QYq3TIeE6nxHppbo2LGymrG5Pw==", + "dev": true, + "license": "MIT", + "dependencies": { + "p-limit": "^3.0.2" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/package-json-from-dist": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/package-json-from-dist/-/package-json-from-dist-1.0.1.tgz", + "integrity": "sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw==", + "dev": true, + "license": "BlueOak-1.0.0" + }, + "node_modules/pako": { + "version": "1.0.11", + "resolved": "https://registry.npmjs.org/pako/-/pako-1.0.11.tgz", + "integrity": "sha512-4hLB8Py4zZce5s4yd9XzopqwVv/yGNhV1Bl8NTmCq1763HeK2+EwVTv+leGeL13Dnh2wfbqowVPXCIO0z4taYw==", + "dev": true, + "license": "(MIT AND Zlib)" + }, + "node_modules/parse-semver": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/parse-semver/-/parse-semver-1.1.1.tgz", + "integrity": "sha512-Eg1OuNntBMH0ojvEKSrvDSnwLmvVuUOSdylH/pSCPNMIspLlweJyIWXCE+k/5hm3cj/EBUYwmWkjhBALNP4LXQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "semver": "^5.1.0" + } + }, + "node_modules/parse-semver/node_modules/semver": { + "version": "5.7.2", + "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.2.tgz", + "integrity": 
"sha512-cBznnQ9KjJqU67B52RMC65CMarK2600WFnbkcaiwWq3xy/5haFJlshgnpjovMVJ+Hff49d8GEn0b87C5pDQ10g==", + "dev": true, + "license": "ISC", + "bin": { + "semver": "bin/semver" + } + }, + "node_modules/parse5": { + "version": "7.3.0", + "resolved": "https://registry.npmjs.org/parse5/-/parse5-7.3.0.tgz", + "integrity": "sha512-IInvU7fabl34qmi9gY8XOVxhYyMyuH2xUNpb2q8/Y+7552KlejkRvqvD19nMoUW/uQGGbqNpA6Tufu5FL5BZgw==", + "dev": true, + "license": "MIT", + "dependencies": { + "entities": "^6.0.0" + }, + "funding": { + "url": "https://github.com/inikulin/parse5?sponsor=1" + } + }, + "node_modules/parse5-htmlparser2-tree-adapter": { + "version": "7.1.0", + "resolved": "https://registry.npmjs.org/parse5-htmlparser2-tree-adapter/-/parse5-htmlparser2-tree-adapter-7.1.0.tgz", + "integrity": "sha512-ruw5xyKs6lrpo9x9rCZqZZnIUntICjQAd0Wsmp396Ul9lN/h+ifgVV1x1gZHi8euej6wTfpqX8j+BFQxF0NS/g==", + "dev": true, + "license": "MIT", + "dependencies": { + "domhandler": "^5.0.3", + "parse5": "^7.0.0" + }, + "funding": { + "url": "https://github.com/inikulin/parse5?sponsor=1" + } + }, + "node_modules/parse5-parser-stream": { + "version": "7.1.2", + "resolved": "https://registry.npmjs.org/parse5-parser-stream/-/parse5-parser-stream-7.1.2.tgz", + "integrity": "sha512-JyeQc9iwFLn5TbvvqACIF/VXG6abODeB3Fwmv/TGdLk2LfbWkaySGY72at4+Ty7EkPZj854u4CrICqNk2qIbow==", + "dev": true, + "license": "MIT", + "dependencies": { + "parse5": "^7.0.0" + }, + "funding": { + "url": "https://github.com/inikulin/parse5?sponsor=1" + } + }, + "node_modules/parse5/node_modules/entities": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/entities/-/entities-6.0.1.tgz", + "integrity": "sha512-aN97NXWF6AWBTahfVOIrB/NShkzi5H7F9r1s9mD3cDj4Ko5f2qhhVoYMibXF7GlLveb/D2ioWay8lxI97Ven3g==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.12" + }, + "funding": { + "url": "https://github.com/fb55/entities?sponsor=1" + } + }, + "node_modules/path-exists": { + "version": "4.0.0", + "resolved": 
"https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", + "integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/path-is-absolute": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz", + "integrity": "sha512-AVbw3UJ2e9bq64vSaS9Am0fje1Pa8pbGqTTsmXfaIiMpnr5DlDhfJOuLj9Sf95ZPVDAUerDfEk88MPmPe7UCQg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/path-key": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz", + "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/path-scurry": { + "version": "1.11.1", + "resolved": "https://registry.npmjs.org/path-scurry/-/path-scurry-1.11.1.tgz", + "integrity": "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA==", + "dev": true, + "license": "BlueOak-1.0.0", + "dependencies": { + "lru-cache": "^10.2.0", + "minipass": "^5.0.0 || ^6.0.2 || ^7.0.0" + }, + "engines": { + "node": ">=16 || 14 >=14.18" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/path-scurry/node_modules/lru-cache": { + "version": "10.4.3", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-10.4.3.tgz", + "integrity": "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ==", + "dev": true, + "license": "ISC" + }, + "node_modules/pend": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/pend/-/pend-1.2.0.tgz", + "integrity": "sha512-F3asv42UuXchdzt+xXqfW1OGlVBe+mxa2mqI0pg5yAHZPvFmY3Y6drSf/GQ1A86WgWEN9Kzh/WrgKa6iGcHXLg==", + "dev": true, + "license": "MIT" + }, + 
"node_modules/picocolors": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz", + "integrity": "sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==", + "dev": true, + "license": "ISC" + }, + "node_modules/picomatch": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz", + "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/prebuild-install": { + "version": "7.1.3", + "resolved": "https://registry.npmjs.org/prebuild-install/-/prebuild-install-7.1.3.tgz", + "integrity": "sha512-8Mf2cbV7x1cXPUILADGI3wuhfqWvtiLA1iclTDbFRZkgRQS0NqsPZphna9V+HyTEadheuPmjaJMsbzKQFOzLug==", + "deprecated": "No longer maintained. Please contact the author of the relevant native addon; alternatives are available.", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "detect-libc": "^2.0.0", + "expand-template": "^2.0.3", + "github-from-package": "0.0.0", + "minimist": "^1.2.3", + "mkdirp-classic": "^0.5.3", + "napi-build-utils": "^2.0.0", + "node-abi": "^3.3.0", + "pump": "^3.0.0", + "rc": "^1.2.7", + "simple-get": "^4.0.0", + "tar-fs": "^2.0.0", + "tunnel-agent": "^0.6.0" + }, + "bin": { + "prebuild-install": "bin.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/prelude-ls": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/prelude-ls/-/prelude-ls-1.2.1.tgz", + "integrity": "sha512-vkcDPrRZo1QZLbn5RLGPpg/WmIQ65qoWWhcGKf/b5eplkkarX0m9z8ppCat4mlOqUsWpyNuYgO3VRyrYHSzX5g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/process-nextick-args": { + "version": "2.0.1", + "resolved": 
"https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz", + "integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==", + "dev": true, + "license": "MIT" + }, + "node_modules/pump": { + "version": "3.0.4", + "resolved": "https://registry.npmjs.org/pump/-/pump-3.0.4.tgz", + "integrity": "sha512-VS7sjc6KR7e1ukRFhQSY5LM2uBWAUPiOPa/A3mkKmiMwSmRFUITt0xuj+/lesgnCv+dPIEYlkzrcyXgquIHMcA==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "end-of-stream": "^1.1.0", + "once": "^1.3.1" + } + }, + "node_modules/punycode": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.3.1.tgz", + "integrity": "sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/qs": { + "version": "6.15.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.15.0.tgz", + "integrity": "sha512-mAZTtNCeetKMH+pSjrb76NAM8V9a05I9aBZOHztWy/UqcJdQYNsf59vrRKWnojAT9Y+GbIvoTBC++CPHqpDBhQ==", + "dev": true, + "license": "BSD-3-Clause", + "dependencies": { + "side-channel": "^1.1.0" + }, + "engines": { + "node": ">=0.6" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/randombytes": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/randombytes/-/randombytes-2.1.0.tgz", + "integrity": "sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "safe-buffer": "^5.1.0" + } + }, + "node_modules/rc": { + "version": "1.2.8", + "resolved": "https://registry.npmjs.org/rc/-/rc-1.2.8.tgz", + "integrity": "sha512-y3bGgqKj3QBdxLbLkomlohkvsA8gdAiUQlSBJnBhfn+BPxg4bc62d8TcBW15wavDfgexCgccckhcZvywyQYPOw==", + "dev": true, + "license": "(BSD-2-Clause OR MIT OR Apache-2.0)", + "optional": true, + 
"dependencies": { + "deep-extend": "^0.6.0", + "ini": "~1.3.0", + "minimist": "^1.2.0", + "strip-json-comments": "~2.0.1" + }, + "bin": { + "rc": "cli.js" + } + }, + "node_modules/read": { + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/read/-/read-1.0.7.tgz", + "integrity": "sha512-rSOKNYUmaxy0om1BNjMN4ezNT6VKK+2xF4GBhc81mkH7L60i6dp8qPYrkndNLT3QPphoII3maL9PVC9XmhHwVQ==", + "dev": true, + "license": "ISC", + "dependencies": { + "mute-stream": "~0.0.4" + }, + "engines": { + "node": ">=0.8" + } + }, + "node_modules/readable-stream": { + "version": "3.6.2", + "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.2.tgz", + "integrity": "sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "inherits": "^2.0.3", + "string_decoder": "^1.1.1", + "util-deprecate": "^1.0.1" + }, + "engines": { + "node": ">= 6" + } + }, + "node_modules/readdirp": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-4.1.2.tgz", + "integrity": "sha512-GDhwkLfywWL2s6vEjyhri+eXmfH6j1L7JE27WhqLeYzoh/A3DBaYGEj2H/HFZCn/kMfim73FXxEJTw06WtxQwg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 14.18.0" + }, + "funding": { + "type": "individual", + "url": "https://paulmillr.com/funding/" + } + }, + "node_modules/require-directory": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", + "integrity": "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/restore-cursor": { + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/restore-cursor/-/restore-cursor-5.1.0.tgz", + "integrity": "sha512-oMA2dcrw6u0YfxJQXm342bFKX/E4sG9rbTzO9ptUcR/e8A33cHuvStiYOwH7fszkZlZ1z/ta9AAoPk2F4qIOHA==", + 
"dev": true, + "license": "MIT", + "dependencies": { + "onetime": "^7.0.0", + "signal-exit": "^4.1.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/run-applescript": { + "version": "7.1.0", + "resolved": "https://registry.npmjs.org/run-applescript/-/run-applescript-7.1.0.tgz", + "integrity": "sha512-DPe5pVFaAsinSaV6QjQ6gdiedWDcRCbUuiQfQa2wmWV7+xC9bGulGI8+TdRmoFkAPaBXk8CrAbnlY2ISniJ47Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/safe-buffer": { + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", + "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT" + }, + "node_modules/safer-buffer": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", + "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==", + "dev": true, + "license": "MIT" + }, + "node_modules/sax": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/sax/-/sax-1.5.0.tgz", + "integrity": "sha512-21IYA3Q5cQf089Z6tgaUTr7lDAyzoTPx5HRtbhsME8Udispad8dC/+sziTNugOEx54ilvatQ9YCzl4KQLPcRHA==", + "dev": true, + "license": "BlueOak-1.0.0", + "engines": { + "node": ">=11.0.0" + } + }, + "node_modules/semver": { + "version": "7.7.4", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.7.4.tgz", + "integrity": "sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA==", + "dev": 
true, + "license": "ISC", + "bin": { + "semver": "bin/semver.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/serialize-javascript": { + "version": "6.0.2", + "resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-6.0.2.tgz", + "integrity": "sha512-Saa1xPByTTq2gdeFZYLLo+RFE35NHZkAbqZeWNd3BpzppeVisAqpDjcp8dyf6uIvEqJRd46jemmyA4iFIeVk8g==", + "dev": true, + "license": "BSD-3-Clause", + "dependencies": { + "randombytes": "^2.1.0" + } + }, + "node_modules/setimmediate": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/setimmediate/-/setimmediate-1.0.5.tgz", + "integrity": "sha512-MATJdZp8sLqDl/68LfQmbP8zKPLQNV6BIZoIgrscFDQ+RsvK/BxeDQOgyxKKoh0y/8h3BqVFnCqQ/gd+reiIXA==", + "dev": true, + "license": "MIT" + }, + "node_modules/shebang-command": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz", + "integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==", + "dev": true, + "license": "MIT", + "dependencies": { + "shebang-regex": "^3.0.0" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/shebang-regex": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz", + "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/side-channel": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.1.0.tgz", + "integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.3", + "side-channel-list": "^1.0.0", + "side-channel-map": "^1.0.1", + "side-channel-weakmap": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + 
"funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-list": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/side-channel-list/-/side-channel-list-1.0.0.tgz", + "integrity": "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-map": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/side-channel-map/-/side-channel-map-1.0.1.tgz", + "integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-weakmap": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz", + "integrity": "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3", + "side-channel-map": "^1.0.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/signal-exit": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-4.1.0.tgz", + "integrity": "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw==", + "dev": true, + "license": "ISC", + "engines": { + "node": 
">=14" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/simple-concat": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/simple-concat/-/simple-concat-1.0.1.tgz", + "integrity": "sha512-cSFtAPtRhljv69IK0hTVZQ+OfE9nePi/rtJmw5UjHeVyVroEqJXP1sFztKUy1qU+xvz3u/sfYJLa947b7nAN2Q==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT", + "optional": true + }, + "node_modules/simple-get": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/simple-get/-/simple-get-4.0.1.tgz", + "integrity": "sha512-brv7p5WgH0jmQJr1ZDDfKDOSeWWg+OVypG99A/5vYGPqJ6pxiaHLy8nxtFjBA7oMa01ebA9gfh1uMCFqOuXxvA==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "license": "MIT", + "optional": true, + "dependencies": { + "decompress-response": "^6.0.0", + "once": "^1.3.1", + "simple-concat": "^1.0.0" + } + }, + "node_modules/stdin-discarder": { + "version": "0.2.2", + "resolved": "https://registry.npmjs.org/stdin-discarder/-/stdin-discarder-0.2.2.tgz", + "integrity": "sha512-UhDfHmA92YAlNnCfhmq0VeNL5bDbiZGg7sZ2IvPsXubGkiNa9EC+tUTsjBRsYUAz87btI6/1wf4XoVvQ3uRnmQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/string_decoder": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.3.0.tgz", + "integrity": "sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA==", + "dev": true, + "license": "MIT", + "optional": 
true, + "dependencies": { + "safe-buffer": "~5.2.0" + } + }, + "node_modules/string-width": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-7.2.0.tgz", + "integrity": "sha512-tsaTIkKW9b4N+AEj+SVA+WhJzV7/zMhcSu78mLKWSk7cXMOSHsBKFWUs0fWwq8QyK3MgJBQRX6Gbi4kYbdvGkQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "emoji-regex": "^10.3.0", + "get-east-asian-width": "^1.0.0", + "strip-ansi": "^7.1.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/string-width-cjs": { + "name": "string-width", + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dev": true, + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/string-width-cjs/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/string-width-cjs/node_modules/emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true, + "license": "MIT" + }, + "node_modules/string-width-cjs/node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": 
true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/strip-ansi": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-7.2.0.tgz", + "integrity": "sha512-yDPMNjp4WyfYBkHnjIRLfca1i6KMyGCtsVgoKe/z1+6vukgaENdgGBZt+ZmKPc4gavvEZ5OgHfHdrazhgNyG7w==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^6.2.2" + }, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/chalk/strip-ansi?sponsor=1" + } + }, + "node_modules/strip-ansi-cjs": { + "name": "strip-ansi", + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/strip-ansi-cjs/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/strip-json-comments": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-2.0.1.tgz", + "integrity": "sha512-4gB8na07fecVVkOI6Rs4e7T6NOTki5EmL7TUduTs6bu3EdnSycntVJ4re8kgZA+wx9IueI2Y11bfbgwtzuE0KQ==", + "dev": true, + "license": "MIT", + "optional": true, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/supports-color": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dev": true, + "license": "MIT", + "dependencies": { + 
"has-flag": "^3.0.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/tar-fs": { + "version": "2.1.4", + "resolved": "https://registry.npmjs.org/tar-fs/-/tar-fs-2.1.4.tgz", + "integrity": "sha512-mDAjwmZdh7LTT6pNleZ05Yt65HC3E+NiQzl672vQG38jIrehtJk/J3mNwIg+vShQPcLF/LV7CMnDW6vjj6sfYQ==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "chownr": "^1.1.1", + "mkdirp-classic": "^0.5.2", + "pump": "^3.0.0", + "tar-stream": "^2.1.4" + } + }, + "node_modules/tar-stream": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/tar-stream/-/tar-stream-2.2.0.tgz", + "integrity": "sha512-ujeqbceABgwMZxEJnk2HDY2DlnUZ+9oEcb1KzTVfYHio0UE6dG71n60d8D2I4qNvleWrrXpmjpt7vZeF1LnMZQ==", + "dev": true, + "license": "MIT", + "optional": true, + "dependencies": { + "bl": "^4.0.3", + "end-of-stream": "^1.4.1", + "fs-constants": "^1.0.0", + "inherits": "^2.0.3", + "readable-stream": "^3.1.1" + }, + "engines": { + "node": ">=6" + } + }, + "node_modules/tinyglobby": { + "version": "0.2.15", + "resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz", + "integrity": "sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "fdir": "^6.5.0", + "picomatch": "^4.0.3" + }, + "engines": { + "node": ">=12.0.0" + }, + "funding": { + "url": "https://github.com/sponsors/SuperchupuDev" + } + }, + "node_modules/tmp": { + "version": "0.2.5", + "resolved": "https://registry.npmjs.org/tmp/-/tmp-0.2.5.tgz", + "integrity": "sha512-voyz6MApa1rQGUxT3E+BK7/ROe8itEx7vD8/HEvt4xwXucvQ5G5oeEiHkmHZJuBO21RpOf+YYm9MOivj709jow==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=14.14" + } + }, + "node_modules/ts-api-utils": { + "version": "2.4.0", + "resolved": "https://registry.npmjs.org/ts-api-utils/-/ts-api-utils-2.4.0.tgz", + "integrity": "sha512-3TaVTaAv2gTiMB35i3FiGJaRfwb3Pyn/j3m/bfAvGe8FB7CF6u+LMYqYlDh7reQf7UNvoTvdfAqHGmPGOSsPmA==", + 
"dev": true, + "license": "MIT", + "engines": { + "node": ">=18.12" + }, + "peerDependencies": { + "typescript": ">=4.8.4" + } + }, + "node_modules/tslib": { + "version": "2.8.1", + "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz", + "integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==", + "dev": true, + "license": "0BSD" + }, + "node_modules/tunnel": { + "version": "0.0.6", + "resolved": "https://registry.npmjs.org/tunnel/-/tunnel-0.0.6.tgz", + "integrity": "sha512-1h/Lnq9yajKY2PEbBadPXj3VxsDDu844OnaAo52UVmIzIvwwtBPIuNvkjuzBlTWpfJyUbG3ez0KSBibQkj4ojg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.6.11 <=0.7.0 || >=0.7.3" + } + }, + "node_modules/tunnel-agent": { + "version": "0.6.0", + "resolved": "https://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.6.0.tgz", + "integrity": "sha512-McnNiV1l8RYeY8tBgEpuodCC1mLUdbSN+CYBL7kJsJNInOP8UjDDEwdk6Mw60vdLLrr5NHKZhMAOSrR2NZuQ+w==", + "dev": true, + "license": "Apache-2.0", + "optional": true, + "dependencies": { + "safe-buffer": "^5.0.1" + }, + "engines": { + "node": "*" + } + }, + "node_modules/type-check": { + "version": "0.4.0", + "resolved": "https://registry.npmjs.org/type-check/-/type-check-0.4.0.tgz", + "integrity": "sha512-XleUoc9uwGXqjWwXaUTZAmzMcFZ5858QA2vvx1Ur5xIcixXIP+8LnFDgRplU30us6teqdlskFfu+ae4K79Ooew==", + "dev": true, + "license": "MIT", + "dependencies": { + "prelude-ls": "^1.2.1" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/typed-rest-client": { + "version": "1.8.11", + "resolved": "https://registry.npmjs.org/typed-rest-client/-/typed-rest-client-1.8.11.tgz", + "integrity": "sha512-5UvfMpd1oelmUPRbbaVnq+rHP7ng2cE4qoQkQeAqxRL6PklkxsM0g32/HL0yfvruK6ojQ5x8EE+HF4YV6DtuCA==", + "dev": true, + "license": "MIT", + "dependencies": { + "qs": "^6.9.1", + "tunnel": "0.0.6", + "underscore": "^1.12.1" + } + }, + "node_modules/typescript": { + "version": "5.9.3", + "resolved": 
"https://registry.npmjs.org/typescript/-/typescript-5.9.3.tgz", + "integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==", + "dev": true, + "license": "Apache-2.0", + "bin": { + "tsc": "bin/tsc", + "tsserver": "bin/tsserver" + }, + "engines": { + "node": ">=14.17" + } + }, + "node_modules/uc.micro": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/uc.micro/-/uc.micro-1.0.6.tgz", + "integrity": "sha512-8Y75pvTYkLJW2hWQHXxoqRgV7qb9B+9vFEtidML+7koHUFapnVJAZ6cKs+Qjz5Aw3aZWHMC6u0wJE3At+nSGwA==", + "dev": true, + "license": "MIT" + }, + "node_modules/underscore": { + "version": "1.13.8", + "resolved": "https://registry.npmjs.org/underscore/-/underscore-1.13.8.tgz", + "integrity": "sha512-DXtD3ZtEQzc7M8m4cXotyHR+FAS18C64asBYY5vqZexfYryNNnDc02W4hKg3rdQuqOYas1jkseX0+nZXjTXnvQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/undici": { + "version": "7.22.0", + "resolved": "https://registry.npmjs.org/undici/-/undici-7.22.0.tgz", + "integrity": "sha512-RqslV2Us5BrllB+JeiZnK4peryVTndy9Dnqq62S3yYRRTj0tFQCwEniUy2167skdGOy3vqRzEvl1Dm4sV2ReDg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=20.18.1" + } + }, + "node_modules/undici-types": { + "version": "6.21.0", + "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz", + "integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/uri-js": { + "version": "4.4.1", + "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz", + "integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "punycode": "^2.1.0" + } + }, + "node_modules/url-join": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/url-join/-/url-join-4.0.1.tgz", + "integrity": 
"sha512-jk1+QP6ZJqyOiuEI9AEWQfju/nB2Pw466kbA0LEZljHwKeMgd9WrAEgEGxjPDD2+TNbbb37rTyhEfrCXfuKXnA==", + "dev": true, + "license": "MIT" + }, + "node_modules/util-deprecate": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz", + "integrity": "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==", + "dev": true, + "license": "MIT" + }, + "node_modules/uuid": { + "version": "8.3.2", + "resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz", + "integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==", + "dev": true, + "license": "MIT", + "bin": { + "uuid": "dist/bin/uuid" + } + }, + "node_modules/whatwg-encoding": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/whatwg-encoding/-/whatwg-encoding-3.1.1.tgz", + "integrity": "sha512-6qN4hJdMwfYBtE3YBTTHhoeuUrDBPZmbQaxWAqSALV/MeEnR5z1xd8UKud2RAkFoPkmB+hli1TZSnyi84xz1vQ==", + "deprecated": "Use @exodus/bytes instead for a more spec-conformant and faster implementation", + "dev": true, + "license": "MIT", + "dependencies": { + "iconv-lite": "0.6.3" + }, + "engines": { + "node": ">=18" + } + }, + "node_modules/whatwg-mimetype": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/whatwg-mimetype/-/whatwg-mimetype-4.0.0.tgz", + "integrity": "sha512-QaKxh0eNIi2mE9p2vEdzfagOKHCcj1pJ56EEHGQOVxp8r9/iszLUUV7v89x9O1p/T+NlTM5W7jW6+cz4Fq1YVg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + } + }, + "node_modules/which": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz", + "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==", + "dev": true, + "license": "ISC", + "dependencies": { + "isexe": "^2.0.0" + }, + "bin": { + "node-which": "bin/node-which" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/word-wrap": { + 
"version": "1.2.5", + "resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.5.tgz", + "integrity": "sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/workerpool": { + "version": "9.3.4", + "resolved": "https://registry.npmjs.org/workerpool/-/workerpool-9.3.4.tgz", + "integrity": "sha512-TmPRQYYSAnnDiEB0P/Ytip7bFGvqnSU6I2BcuSw7Hx+JSg/DsUi5ebYfc8GYaSdpuvOcEs6dXxPurOYpe9QFwg==", + "dev": true, + "license": "Apache-2.0" + }, + "node_modules/wrap-ansi": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz", + "integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^4.0.0", + "string-width": "^4.1.0", + "strip-ansi": "^6.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/wrap-ansi?sponsor=1" + } + }, + "node_modules/wrap-ansi-cjs": { + "name": "wrap-ansi", + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz", + "integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^4.0.0", + "string-width": "^4.1.0", + "strip-ansi": "^6.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/wrap-ansi?sponsor=1" + } + }, + "node_modules/wrap-ansi-cjs/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + 
"node_modules/wrap-ansi-cjs/node_modules/ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-convert": "^2.0.1" + }, + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/wrap-ansi-cjs/node_modules/color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-name": "~1.1.4" + }, + "engines": { + "node": ">=7.0.0" + } + }, + "node_modules/wrap-ansi-cjs/node_modules/color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true, + "license": "MIT" + }, + "node_modules/wrap-ansi-cjs/node_modules/emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true, + "license": "MIT" + }, + "node_modules/wrap-ansi-cjs/node_modules/string-width": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dev": true, + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + 
"node_modules/wrap-ansi-cjs/node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/wrap-ansi/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/wrap-ansi/node_modules/ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-convert": "^2.0.1" + }, + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/wrap-ansi/node_modules/color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-name": "~1.1.4" + }, + "engines": { + "node": ">=7.0.0" + } + }, + "node_modules/wrap-ansi/node_modules/color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true, + "license": "MIT" + }, + "node_modules/wrap-ansi/node_modules/emoji-regex": { + "version": 
"8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true, + "license": "MIT" + }, + "node_modules/wrap-ansi/node_modules/string-width": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dev": true, + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/wrap-ansi/node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/wrappy": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", + "integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==", + "dev": true, + "license": "ISC" + }, + "node_modules/wsl-utils": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/wsl-utils/-/wsl-utils-0.1.0.tgz", + "integrity": "sha512-h3Fbisa2nKGPxCpm89Hk33lBLsnaGBvctQopaBSOW/uIs6FTe1ATyAnKFJrzVs9vpGdsTe73WF3V4lIsk4Gacw==", + "dev": true, + "license": "MIT", + "dependencies": { + "is-wsl": "^3.1.0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/xml2js": { + "version": "0.5.0", + "resolved": "https://registry.npmjs.org/xml2js/-/xml2js-0.5.0.tgz", + "integrity": 
"sha512-drPFnkQJik/O+uPKpqSgr22mpuFHqKdbS835iAQrUC73L2F5WkboIRd63ai/2Yg6I1jzifPFKH2NTK+cfglkIA==", + "dev": true, + "license": "MIT", + "dependencies": { + "sax": ">=0.6.0", + "xmlbuilder": "~11.0.0" + }, + "engines": { + "node": ">=4.0.0" + } + }, + "node_modules/xmlbuilder": { + "version": "11.0.1", + "resolved": "https://registry.npmjs.org/xmlbuilder/-/xmlbuilder-11.0.1.tgz", + "integrity": "sha512-fDlsI/kFEx7gLvbecc0/ohLG50fugQp8ryHzMTuW9vSa1GJ0XYWKnhsUx7oie3G98+r56aTQIUB4kht42R3JvA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=4.0" + } + }, + "node_modules/y18n": { + "version": "5.0.8", + "resolved": "https://registry.npmjs.org/y18n/-/y18n-5.0.8.tgz", + "integrity": "sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA==", + "dev": true, + "license": "ISC", + "engines": { + "node": ">=10" + } + }, + "node_modules/yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true, + "license": "ISC" + }, + "node_modules/yargs": { + "version": "17.7.2", + "resolved": "https://registry.npmjs.org/yargs/-/yargs-17.7.2.tgz", + "integrity": "sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w==", + "dev": true, + "license": "MIT", + "dependencies": { + "cliui": "^8.0.1", + "escalade": "^3.1.1", + "get-caller-file": "^2.0.5", + "require-directory": "^2.1.1", + "string-width": "^4.2.3", + "y18n": "^5.0.5", + "yargs-parser": "^21.1.1" + }, + "engines": { + "node": ">=12" + } + }, + "node_modules/yargs-parser": { + "version": "21.1.1", + "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-21.1.1.tgz", + "integrity": "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw==", + "dev": true, + "license": "ISC", + "engines": { + "node": ">=12" + } + }, + 
"node_modules/yargs-unparser": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/yargs-unparser/-/yargs-unparser-2.0.0.tgz", + "integrity": "sha512-7pRTIA9Qc1caZ0bZ6RYRGbHJthJWuakf+WmHK0rVeLkNrrGhfoabBNdue6kdINI6r4if7ocq9aD/n7xwKOdzOA==", + "dev": true, + "license": "MIT", + "dependencies": { + "camelcase": "^6.0.0", + "decamelize": "^4.0.0", + "flat": "^5.0.2", + "is-plain-obj": "^2.1.0" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/yargs/node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/yargs/node_modules/emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true, + "license": "MIT" + }, + "node_modules/yargs/node_modules/string-width": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dev": true, + "license": "MIT", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/yargs/node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/yauzl": { + "version": "2.10.0", + 
"resolved": "https://registry.npmjs.org/yauzl/-/yauzl-2.10.0.tgz", + "integrity": "sha512-p4a9I6X6nu6IhoGmBqAcbJy1mlC4j27vEPZX9F4L4/vZT3Lyq1VkFHw/V/PUcB9Buo+DG3iHkT0x3Qya58zc3g==", + "dev": true, + "license": "MIT", + "dependencies": { + "buffer-crc32": "~0.2.3", + "fd-slicer": "~1.1.0" + } + }, + "node_modules/yazl": { + "version": "2.5.1", + "resolved": "https://registry.npmjs.org/yazl/-/yazl-2.5.1.tgz", + "integrity": "sha512-phENi2PLiHnHb6QBVot+dJnaAZ0xosj7p3fWl+znIjBDlnMI2PsZCJZ306BPTFOaHf5qdDEI8x5qFrSOBN5vrw==", + "dev": true, + "license": "MIT", + "dependencies": { + "buffer-crc32": "~0.2.3" + } + }, + "node_modules/yocto-queue": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/yocto-queue/-/yocto-queue-0.1.0.tgz", + "integrity": "sha512-rVksvsnNCdJ/ohGc6xgPwyN8eheCxsiLM8mxuE/t/mOVqJewPuO1miLpTHQiRgTKCLexL4MeAFVagts7HmNZ2Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + } + } +} diff --git a/vscode-extension/package.json b/vscode-extension/package.json new file mode 100644 index 0000000..97ab4ff --- /dev/null +++ b/vscode-extension/package.json @@ -0,0 +1,232 @@ +{ + "name": "opencode-router", + "displayName": "OpenCode Control Plane", + "description": "Browse and manage OpenCode sessions from the control plane.", + "version": "0.1.0", + "publisher": "local", + "engines": { + "vscode": "^1.90.0" + }, + "categories": [ + "Other" + ], + "main": "./out/extension.js", + "activationEvents": [ + "onView:opencodeSessions", + "onView:opencodeRemoteHosts", + "onView:opencodeChat", + "onTerminalProfile:opencode.terminalProfile", + "onCommand:opencode.attachSession", + "onCommand:opencode.createSession", + "onCommand:opencode.openChat", + "onCommand:opencode.openTerminal", + "onCommand:opencode.refreshSessions", + "onCommand:opencode.refreshRemoteHosts", + "onCommand:opencode.stopSession", + "onCommand:opencode.restartSession", + 
"onCommand:opencode.deleteSession", + "onCommand:opencode.applyDiffPreview", + "onCommand:opencode.applyLastDiff" + ], + "contributes": { + "viewsContainers": { + "activitybar": [ + { + "id": "opencode", + "title": "OpenCode", + "icon": "resources/opencode.svg" + } + ] + }, + "views": { + "opencode": [ + { + "id": "opencodeSessions", + "name": "Sessions" + }, + { + "id": "opencodeRemoteHosts", + "name": "Remote Hosts" + }, + { + "id": "opencodeChat", + "name": "Agent Chat", + "type": "webview" + } + ] + }, + "commands": [ + { + "command": "opencode.attachSession", + "title": "OpenCode: Attach Session" + }, + { + "command": "opencode.createSession", + "title": "OpenCode: Create Session" + }, + { + "command": "opencode.openChat", + "title": "OpenCode: Open Chat", + "icon": "$(comment-discussion)" + }, + { + "command": "opencode.openTerminal", + "title": "OpenCode: Open Terminal", + "icon": "$(terminal)" + }, + { + "command": "opencode.refreshSessions", + "title": "OpenCode: Refresh Sessions", + "icon": "$(refresh)" + }, + { + "command": "opencode.refreshRemoteHosts", + "title": "OpenCode: Refresh Remote Hosts", + "icon": "$(refresh)" + }, + { + "command": "opencode.stopSession", + "title": "OpenCode: Stop Session" + }, + { + "command": "opencode.restartSession", + "title": "OpenCode: Restart Session" + }, + { + "command": "opencode.deleteSession", + "title": "OpenCode: Delete Session" + }, + { + "command": "opencode.applyDiffPreview", + "title": "OpenCode: Apply Diff Preview" + }, + { + "command": "opencode.applyLastDiff", + "title": "OpenCode: Apply Last Staged Diff" + }, + { + "command": "opencode.rejectLastDiff", + "title": "OpenCode: Reject Last Staged Diff" + }, + { + "command": "opencode.clearDiffHighlights", + "title": "OpenCode: Clear Diff Highlights" + } + ], + "menus": { + "view/title": [ + { + "command": "opencode.createSession", + "when": "view == opencodeSessions", + "group": "navigation@1" + }, + { + "command": "opencode.refreshSessions", + "when": 
"view == opencodeSessions", + "group": "navigation@2" + }, + { + "command": "opencode.refreshRemoteHosts", + "when": "view == opencodeRemoteHosts", + "group": "navigation@1" + }, + { + "command": "opencode.openChat", + "when": "view == opencodeChat", + "group": "navigation@1" + }, + { + "command": "opencode.openTerminal", + "when": "view == opencodeSessions", + "group": "navigation@3" + } + ], + "view/item/context": [ + { + "command": "opencode.attachSession", + "when": "view == opencodeSessions && viewItem == opencodeSession", + "group": "inline@1" + }, + { + "command": "opencode.stopSession", + "when": "view == opencodeSessions && viewItem == opencodeSession", + "group": "navigation@2" + }, + { + "command": "opencode.openChat", + "when": "view == opencodeSessions && viewItem == opencodeSession", + "group": "navigation@2.5" + }, + { + "command": "opencode.openTerminal", + "when": "view == opencodeSessions && viewItem == opencodeSession", + "group": "navigation@2.6" + }, + { + "command": "opencode.restartSession", + "when": "view == opencodeSessions && viewItem == opencodeSession", + "group": "navigation@3" + }, + { + "command": "opencode.deleteSession", + "when": "view == opencodeSessions && viewItem == opencodeSession", + "group": "navigation@4" + } + ] + }, + "terminal": { + "profiles": [ + { + "id": "opencode.terminalProfile", + "title": "OpenCode Terminal", + "icon": "terminal" + } + ] + }, + "configuration": { + "title": "OpenCode", + "properties": { + "opencode.controlPlaneUrl": { + "type": "string", + "default": "http://localhost:8080", + "description": "Base URL for the OpenCode control plane." + }, + "opencode.authToken": { + "type": "string", + "default": "", + "description": "Optional bearer token for control plane API requests." + }, + "opencode.remoteSshConfigPath": { + "type": "string", + "default": "", + "description": "Optional SSH config path override for remote host discovery (for example ~/.ssh/config)." 
+ }, + "opencode.remoteHostsAutoRefreshSeconds": { + "type": "number", + "default": 30, + "minimum": 0, + "description": "Automatic refresh interval for Remote Hosts view in seconds (0 disables auto-refresh)." + } + } + } + }, + "scripts": { + "compile": "tsc -p ./", + "watch": "tsc -watch -p ./", + "lint": "eslint src/**/*.ts", + "test": "npm run compile && node ./out/test/runTest.js", + "package": "npx @vscode/vsce package" + }, + "devDependencies": { + "@types/mocha": "^10.0.10", + "@types/node": "^20.16.5", + "@types/vscode": "^1.90.0", + "@typescript-eslint/eslint-plugin": "^8.56.1", + "@typescript-eslint/parser": "^8.56.1", + "@vscode/test-electron": "^2.5.2", + "@vscode/vsce": "^2.31.1", + "eslint": "^10.0.2", + "mocha": "^11.7.5", + "typescript": "^5.6.2" + } +} diff --git a/vscode-extension/resources/opencode.svg b/vscode-extension/resources/opencode.svg new file mode 100644 index 0000000..425829a --- /dev/null +++ b/vscode-extension/resources/opencode.svg @@ -0,0 +1,5 @@ + + + + + diff --git a/vscode-extension/src/chat/ChatWebviewProvider.ts b/vscode-extension/src/chat/ChatWebviewProvider.ts new file mode 100644 index 0000000..7aa61d5 --- /dev/null +++ b/vscode-extension/src/chat/ChatWebviewProvider.ts @@ -0,0 +1,448 @@ +import { randomBytes } from 'crypto'; +import * as path from 'path'; +import * as vscode from 'vscode'; + +export interface ChatSessionTarget { + id: string; + label: string; + workspacePath: string; +} + +interface ChatMessage { + id: string; + role: 'user' | 'assistant' | 'system'; + content: string; + toolCalls: unknown[]; +} + +type InboundMessage = + | { type: 'ready' } + | { type: 'requestHistory' } + | { type: 'sendPrompt'; prompt: string } + | { type: 'openFile'; path: string; line?: number } + | { type: 'applyDiff'; diff: string }; + +type OutboundMessage = + | { type: 'session'; session: ChatSessionTarget | null } + | { type: 'chatHistory'; messages: ChatMessage[] } + | { type: 'streamStarted' } + | { type: 'streamEnded' } + | { 
type: 'chatChunk'; chunk: Record<string, unknown> } + | { type: 'error'; message: string }; + +export class ChatWebviewProvider implements vscode.WebviewViewProvider, vscode.Disposable { + private view?: vscode.WebviewView; + private currentSession: ChatSessionTarget | null = null; + private streamAbort?: AbortController; + + constructor(private readonly extensionUri: vscode.Uri) {} + + dispose(): void { + this.streamAbort?.abort(); + } + + resolveWebviewView(webviewView: vscode.WebviewView): void { + this.view = webviewView; + webviewView.webview.options = { + enableScripts: true, + localResourceRoots: [vscode.Uri.joinPath(this.extensionUri, 'media', 'chat')] + }; + webviewView.webview.html = this.buildHtml(webviewView.webview); + + webviewView.webview.onDidReceiveMessage((msg: InboundMessage) => { + void this.handleMessage(msg); + }); + + webviewView.onDidDispose(() => { + this.streamAbort?.abort(); + this.streamAbort = undefined; + this.view = undefined; + }); + + this.post({ type: 'session', session: this.currentSession }); + } + + async openChat(session?: ChatSessionTarget): Promise<void> { + if (session) { + this.currentSession = session; + } + + await vscode.commands.executeCommand('workbench.view.extension.opencode'); + this.view?.show?.(true); + this.post({ type: 'session', session: this.currentSession }); + if (this.currentSession) { + await this.loadHistory(); + } + } + + private async handleMessage(msg: InboundMessage): Promise<void> { + switch (msg.type) { + case 'ready': + this.post({ type: 'session', session: this.currentSession }); + if (this.currentSession) { + await this.loadHistory(); + } + return; + case 'requestHistory': + await this.loadHistory(); + return; + case 'sendPrompt': + await this.streamPrompt(msg.prompt); + return; + case 'openFile': + await this.openFile(msg.path, msg.line); + return; + case 'applyDiff': + await this.applyDiff(msg.diff); + return; + default: + return; + } + } + + private async loadHistory(): Promise<void> { + if (!this.currentSession) { + return; + 
} + + try { + const response = await this.request(`/api/sessions/${encodeURIComponent(this.currentSession.id)}/chat`, { + method: 'GET', + headers: { Accept: 'application/json' } + }); + + if (!response.ok) { + throw new Error(`GET /api/sessions/{id}/chat failed (${response.status})`); + } + + const payload = (await response.json()) as unknown; + const messages = Array.isArray(payload) + ? payload + .map((entry) => this.normalizeHistoryMessage(entry)) + .filter((entry): entry is ChatMessage => entry !== null) + : []; + + this.post({ type: 'chatHistory', messages }); + } catch (error) { + this.post({ type: 'error', message: `Failed to load chat history: ${this.formatError(error)}` }); + } + } + + private async streamPrompt(prompt: string): Promise<void> { + if (!this.currentSession) { + this.post({ type: 'error', message: 'No session selected for chat.' }); + return; + } + + const trimmed = prompt.trim(); + if (!trimmed) { + return; + } + + this.streamAbort?.abort(); + const controller = new AbortController(); + this.streamAbort = controller; + this.post({ type: 'streamStarted' }); + + try { + const response = await this.request(`/api/sessions/${encodeURIComponent(this.currentSession.id)}/chat`, { + method: 'POST', + headers: { + Accept: 'text/event-stream' + }, + body: JSON.stringify({ prompt: trimmed }), + signal: controller.signal + }); + + if (!response.ok || !response.body) { + throw new Error(`POST /api/sessions/{id}/chat failed (${response.status})`); + } + + const reader = response.body.getReader(); + const decoder = new TextDecoder(); + let buffer = ''; + + while (true) { + const { done, value } = await reader.read(); + if (done) { + break; + } + + buffer += decoder.decode(value, { stream: true }).replace(/\r\n/g, '\n'); + let boundary = buffer.indexOf('\n\n'); + while (boundary >= 0) { + const frame = buffer.slice(0, boundary); + buffer = buffer.slice(boundary + 2); + const parsed = this.parseSSEFrame(frame); + if (parsed) { + this.post({ type: 'chatChunk', 
chunk: parsed }); + } + boundary = buffer.indexOf('\n\n'); + } + } + } catch (error) { + const aborted = error instanceof Error && error.name === 'AbortError'; + if (!aborted) { + this.post({ type: 'error', message: `Chat streaming failed: ${this.formatError(error)}` }); + } + } finally { + if (this.streamAbort === controller) { + this.streamAbort = undefined; + } + this.post({ type: 'streamEnded' }); + } + } + + private parseSSEFrame(frame: string): Record<string, unknown> | null { + const lines = frame.split('\n'); + const dataLines: string[] = []; + for (const line of lines) { + if (line.startsWith('data:')) { + dataLines.push(line.slice(5).trimStart()); + } + } + + if (dataLines.length === 0) { + return null; + } + + const raw = dataLines.join('\n').trim(); + if (!raw) { + return null; + } + + try { + const parsed = JSON.parse(raw) as unknown; + if (parsed && typeof parsed === 'object') { + return this.normalizeChunk(parsed as Record<string, unknown>); + } + return { type: 'message', delta: String(parsed), done: false }; + } catch { + return { type: 'message', delta: raw, done: false }; + } + } + + private normalizeChunk(chunk: Record<string, unknown>): Record<string, unknown> { + const type = this.firstNonEmptyString(chunk.type, chunk.event, 'message') ?? 'message'; + const delta = + this.firstNonEmptyString( + chunk.delta, + this.nestedString(chunk, ['part', 'delta']), + this.nestedString(chunk, ['message', 'part', 'delta']), + this.nestedString(chunk, ['message', 'delta']) + ) ?? ''; + const error = this.firstNonEmptyString(chunk.error, this.nestedString(chunk, ['payload', 'error'])) ?? ''; + const terminalType = ['session.idle', 'session.error', 'message.completed', 'message.done', 'message.error', 'stream.closed']; + const done = Boolean(chunk.done) || terminalType.includes(type.toLowerCase()); + const payload = this.firstObject(chunk.payload, chunk.part, chunk.message) ?? 
chunk; + + return { + type, + delta, + error, + done, + payload, + raw: chunk + }; + } + + private normalizeHistoryMessage(entry: unknown): ChatMessage | null { + if (!entry || typeof entry !== 'object') { + return null; + } + + const message = entry as Record<string, unknown>; + const roleRaw = this.firstNonEmptyString(message.role, message.type, 'assistant') ?? 'assistant'; + const role: 'user' | 'assistant' | 'system' = roleRaw === 'user' || roleRaw === 'assistant' || roleRaw === 'system' ? roleRaw : 'assistant'; + const content = this.extractHistoryText(message); + const toolCalls = this.extractHistoryTools(message); + + return { + id: this.firstNonEmptyString(message.id, message.messageId, message.message_id, `${Date.now()}-${Math.random()}`) ?? + `${Date.now()}-${Math.random()}`, + role, + content, + toolCalls + }; + } + + private extractHistoryText(message: Record<string, unknown>): string { + const direct = this.firstNonEmptyString(message.content, message.text, message.delta); + if (direct) { + return direct; + } + + const parts = Array.isArray(message.parts) ? message.parts : []; + const textParts: string[] = []; + for (const part of parts) { + if (!part || typeof part !== 'object') { + continue; + } + const partObject = part as Record<string, unknown>; + const text = this.firstNonEmptyString(partObject.text, partObject.content, partObject.delta); + if (text) { + textParts.push(text); + } + } + + return textParts.join(''); + } + + private extractHistoryTools(message: Record<string, unknown>): unknown[] { + const tools: unknown[] = []; + const parts = Array.isArray(message.parts) ? message.parts : []; + for (const part of parts) { + if (!part || typeof part !== 'object') { + continue; + } + const partObject = part as Record<string, unknown>; + const type = this.firstNonEmptyString(partObject.type, partObject.kind) ?? 
''; + if (type.toLowerCase().includes('tool')) { + tools.push(partObject); + } + } + return tools; + } + + private async openFile(filePath: string, line?: number): Promise<void> { + const trimmed = filePath.trim(); + if (!trimmed) { + return; + } + + const roots: string[] = []; + if (this.currentSession?.workspacePath) { + roots.push(this.currentSession.workspacePath); + } + const workspaceRoot = vscode.workspace.workspaceFolders?.[0]?.uri.fsPath; + if (workspaceRoot) { + roots.push(workspaceRoot); + } + + const targetPath = path.isAbsolute(trimmed) ? trimmed : roots.length > 0 ? path.join(roots[0], trimmed) : trimmed; + + try { + const document = await vscode.workspace.openTextDocument(vscode.Uri.file(targetPath)); + const editor = await vscode.window.showTextDocument(document, { preview: false }); + const targetLine = Number.isFinite(line) && typeof line === 'number' && line > 0 ? line - 1 : 0; + const position = new vscode.Position(Math.max(targetLine, 0), 0); + editor.selection = new vscode.Selection(position, position); + editor.revealRange(new vscode.Range(position, position), vscode.TextEditorRevealType.InCenter); + } catch (error) { + vscode.window.showErrorMessage(`Unable to open file reference ${trimmed}: ${this.formatError(error)}`); + } + } + + private async applyDiff(diff: string): Promise<void> { + const content = diff.trim(); + if (!content) { + return; + } + await vscode.commands.executeCommand('opencode.applyDiffPreview', { + sessionId: this.currentSession?.id, + diff: content + }); + } + + private async request(pathname: string, init: RequestInit): Promise<Response> { + const headers = new Headers(init.headers ?? 
undefined); + if (!headers.has('Accept')) { + headers.set('Accept', 'application/json'); + } + if (init.body && !headers.has('Content-Type')) { + headers.set('Content-Type', 'application/json'); + } + + const token = this.getAuthToken(); + if (token) { + headers.set('Authorization', `Bearer ${token}`); + } + + return fetch(`${this.getControlPlaneUrl()}${pathname}`, { + ...init, + headers + }); + } + + private getControlPlaneUrl(): string { + const configured = vscode.workspace.getConfiguration('opencode').get('controlPlaneUrl', 'http://localhost:8080'); + return configured.replace(/\/+$/, ''); + } + + private getAuthToken(): string { + return vscode.workspace.getConfiguration('opencode').get('authToken', '').trim(); + } + + private post(message: OutboundMessage): void { + this.view?.webview.postMessage(message); + } + + private firstNonEmptyString(...values: unknown[]): string | undefined { + for (const value of values) { + if (typeof value === 'string' && value.trim()) { + return value.trim(); + } + } + return undefined; + } + + private firstObject(...values: unknown[]): Record | undefined { + for (const value of values) { + if (value && typeof value === 'object' && !Array.isArray(value)) { + return value as Record; + } + } + return undefined; + } + + private nestedString(value: Record, keys: string[]): string | undefined { + let cursor: unknown = value; + for (const key of keys) { + if (!cursor || typeof cursor !== 'object' || Array.isArray(cursor)) { + return undefined; + } + cursor = (cursor as Record)[key]; + } + return this.firstNonEmptyString(cursor); + } + + private formatError(error: unknown): string { + if (error instanceof Error) { + return error.message; + } + return String(error); + } + + private buildHtml(webview: vscode.Webview): string { + const nonce = randomBytes(16).toString('base64'); + const scriptUri = webview.asWebviewUri(vscode.Uri.joinPath(this.extensionUri, 'media', 'chat', 'chat.js')); + const styleUri = 
webview.asWebviewUri(vscode.Uri.joinPath(this.extensionUri, 'media', 'chat', 'chat.css')); + + return ` + + + + + + + OpenCode Chat + + +
+
+
No session selected
+
idle
+
+
+
+ +
+ +
+
+
+
+
+`;
+  }
+}
diff --git a/vscode-extension/src/edits/DiffEditManager.ts b/vscode-extension/src/edits/DiffEditManager.ts
new file mode 100644
index 0000000..8aa92af
--- /dev/null
+++ b/vscode-extension/src/edits/DiffEditManager.ts
@@ -0,0 +1,536 @@
+import * as path from 'path';
+import * as vscode from 'vscode';
+
+export interface DiffCommandPayload {
+  sessionId?: string;
+  diff?: string;
+  source?: string;
+}
+
+interface Hunk {
+  oldStart: number;
+  oldCount: number;
+  newStart: number;
+  newCount: number;
+  lines: string[];
+}
+
+interface FilePatch {
+  oldPath: string;
+  newPath: string;
+  isCreate: boolean;
+  isDelete: boolean;
+  hunks: Hunk[];
+}
+
+interface LineRange {
+  startLine: number;
+  endLine: number;
+}
+
+interface PendingDiffFile {
+  path: string;
+  uri: vscode.Uri;
+  originalSnapshot: string;
+  proposedContent: string;
+  isCreate: boolean;
+  isDelete: boolean;
+  modifiedRanges: LineRange[];
+}
+
+interface PendingDiff {
+  sessionId?: string;
+  source: string;
+  createdAt: number;
+  rawDiff: string;
+  files: PendingDiffFile[];
+}
+
+type WorkspaceResolver = (sessionId?: string) => string | undefined;
+
+const decoder = new TextDecoder();
+const encoder = new TextEncoder();
+
+export class DiffEditManager implements vscode.Disposable {
+  private pending?: PendingDiff;
+  private readonly decorationType: vscode.TextEditorDecorationType;
+  private readonly disposables: vscode.Disposable[] = [];
+  private readonly decorationRanges = new Map<string, vscode.Range[]>();
+  private clearTimer?: NodeJS.Timeout;
+
+  constructor(private readonly resolveWorkspacePath: WorkspaceResolver) {
+    this.decorationType = vscode.window.createTextEditorDecorationType({
+      isWholeLine: true,
+      backgroundColor: new vscode.ThemeColor('editor.wordHighlightStrongBackground'),
+      overviewRulerColor: new vscode.ThemeColor('editorOverviewRuler.modifiedForeground'),
+      overviewRulerLane: vscode.OverviewRulerLane.Right
+    });
+
+    this.disposables.push(
+      vscode.window.onDidChangeVisibleTextEditors(() => {
+ this.applyDecorationsToVisibleEditors(); + }) + ); + } + + dispose(): void { + this.clearPending(false); + if (this.clearTimer) { + clearTimeout(this.clearTimer); + this.clearTimer = undefined; + } + for (const disposable of this.disposables) { + disposable.dispose(); + } + this.decorationType.dispose(); + } + + async stageFromPayload(payload?: DiffCommandPayload): Promise { + const raw = (payload?.diff ?? '').trim(); + if (!raw) { + vscode.window.showWarningMessage('No diff payload was provided.'); + return; + } + + const unifiedDiff = this.extractUnifiedDiff(raw); + const patches = this.parseUnifiedDiff(unifiedDiff); + if (patches.length === 0) { + vscode.window.showWarningMessage('Unable to parse diff payload into file patches.'); + return; + } + + const files = await this.buildPendingFiles(patches, payload?.sessionId); + if (files.length === 0) { + vscode.window.showWarningMessage('No applicable file changes were found in the diff payload.'); + return; + } + + this.pending = { + sessionId: payload?.sessionId, + source: payload?.source ?? 'unknown', + createdAt: Date.now(), + rawDiff: raw, + files + }; + + await this.openPreview(this.pending.files[0], 1, this.pending.files.length); + const selection = await vscode.window.showInformationMessage( + `OpenCode staged ${this.pending.files.length} diff file(s).`, + 'Apply', + 'Reject', + this.pending.files.length > 1 ? 
'Preview All' : 'Preview' + ); + + if (selection === 'Apply') { + await this.applyLastDiff(); + return; + } + + if (selection === 'Reject') { + this.rejectLastDiff(); + return; + } + + if (selection === 'Preview All') { + for (let index = 1; index < this.pending.files.length; index += 1) { + await this.openPreview(this.pending.files[index], index + 1, this.pending.files.length); + } + return; + } + + if (selection === 'Preview') { + await this.openPreview(this.pending.files[0], 1, this.pending.files.length); + } + } + + async applyLastDiff(): Promise { + if (!this.pending) { + vscode.window.showWarningMessage('No staged diff available.'); + return; + } + + const pending = this.pending; + const workspaceEdit = new vscode.WorkspaceEdit(); + const createdOrModifiedForDecorations: PendingDiffFile[] = []; + + for (const file of pending.files) { + if (file.isDelete || file.isCreate) { + continue; + } + const doc = await vscode.workspace.openTextDocument(file.uri); + const lastLine = Math.max(doc.lineCount - 1, 0); + const fullRange = new vscode.Range(0, 0, lastLine, doc.lineAt(lastLine).text.length); + workspaceEdit.replace(file.uri, fullRange, file.proposedContent); + createdOrModifiedForDecorations.push(file); + } + + for (const file of pending.files) { + if (!file.isCreate) { + continue; + } + await vscode.workspace.fs.writeFile(file.uri, encoder.encode(file.proposedContent)); + createdOrModifiedForDecorations.push(file); + } + + const editApplied = await vscode.workspace.applyEdit(workspaceEdit); + if (!editApplied) { + vscode.window.showErrorMessage('Failed to apply staged workspace edits.'); + return; + } + + for (const file of pending.files) { + if (!file.isDelete) { + continue; + } + const choice = await vscode.window.showWarningMessage( + `Delete file from staged diff: ${file.path}?`, + { modal: true }, + 'Delete', + 'Keep' + ); + if (choice !== 'Delete') { + continue; + } + await vscode.workspace.fs.delete(file.uri, { useTrash: true }); + } + + 
this.setDecorations(createdOrModifiedForDecorations); + this.clearPending(false); + vscode.window.showInformationMessage('OpenCode diff applied.'); + } + + rejectLastDiff(): void { + if (!this.pending) { + vscode.window.showWarningMessage('No staged diff available.'); + return; + } + this.clearPending(true); + vscode.window.showInformationMessage('OpenCode staged diff discarded.'); + } + + clearDecorations(): void { + this.decorationRanges.clear(); + if (this.clearTimer) { + clearTimeout(this.clearTimer); + this.clearTimer = undefined; + } + for (const editor of vscode.window.visibleTextEditors) { + editor.setDecorations(this.decorationType, []); + } + } + + private clearPending(clearDecorations: boolean): void { + this.pending = undefined; + if (clearDecorations) { + this.clearDecorations(); + } + } + + private async openPreview(file: PendingDiffFile, index: number, total: number): Promise { + const leftDoc = await vscode.workspace.openTextDocument({ content: file.originalSnapshot }); + const rightDoc = await vscode.workspace.openTextDocument({ content: file.proposedContent }); + const title = `OpenCode Diff ${index}/${total}: ${file.path}`; + await vscode.commands.executeCommand('vscode.diff', leftDoc.uri, rightDoc.uri, title); + } + + private async buildPendingFiles(patches: FilePatch[], sessionId?: string): Promise { + const workspacePath = this.resolveWorkspacePath(sessionId) ?? vscode.workspace.workspaceFolders?.[0]?.uri.fsPath; + const files: PendingDiffFile[] = []; + + for (const patch of patches) { + const candidatePath = patch.isDelete ? patch.oldPath : patch.newPath || patch.oldPath; + const normalizedPath = this.normalizePatchPath(candidatePath); + if (!normalizedPath) { + continue; + } + + const uri = this.resolveFileUri(normalizedPath, workspacePath); + const originalSnapshot = await this.loadOriginalSnapshot(uri, patch.isCreate); + const proposedContent = patch.isDelete ? 
'' : this.applyPatchToContent(originalSnapshot, patch.hunks); + const ranges = patch.isDelete ? [] : this.hunksToRanges(patch.hunks, proposedContent); + + files.push({ + path: normalizedPath, + uri, + originalSnapshot, + proposedContent, + isCreate: patch.isCreate, + isDelete: patch.isDelete, + modifiedRanges: ranges + }); + } + + return files; + } + + private async loadOriginalSnapshot(uri: vscode.Uri, allowMissing: boolean): Promise { + try { + const bytes = await vscode.workspace.fs.readFile(uri); + return decoder.decode(bytes); + } catch (error) { + if (allowMissing) { + return ''; + } + throw error; + } + } + + private resolveFileUri(filePath: string, workspacePath?: string): vscode.Uri { + if (path.isAbsolute(filePath)) { + return vscode.Uri.file(filePath); + } + + if (workspacePath) { + return vscode.Uri.file(path.join(workspacePath, filePath)); + } + + const root = vscode.workspace.workspaceFolders?.[0]?.uri.fsPath; + if (root) { + return vscode.Uri.file(path.join(root, filePath)); + } + + return vscode.Uri.file(filePath); + } + + private extractUnifiedDiff(input: string): string { + const fenced = Array.from(input.matchAll(/```diff\n([\s\S]*?)```/g)).map((match) => match[1]); + if (fenced.length > 0) { + return fenced.join('\n'); + } + return input; + } + + private parseUnifiedDiff(diff: string): FilePatch[] { + const lines = diff.replace(/\r\n/g, '\n').split('\n'); + const patches: FilePatch[] = []; + let current: FilePatch | undefined; + let i = 0; + + const flushCurrent = () => { + if (!current) { + return; + } + if (current.oldPath === '/dev/null') { + current.isCreate = true; + } + if (current.newPath === '/dev/null') { + current.isDelete = true; + } + patches.push(current); + current = undefined; + }; + + while (i < lines.length) { + const line = lines[i]; + if (line.startsWith('diff --git ')) { + flushCurrent(); + const parts = line.trim().split(/\s+/); + current = { + oldPath: this.stripGitPrefix(parts[2] ?? 
''), + newPath: this.stripGitPrefix(parts[3] ?? ''), + isCreate: false, + isDelete: false, + hunks: [] + }; + i += 1; + continue; + } + + if (line.startsWith('--- ')) { + if (!current) { + current = { oldPath: '', newPath: '', isCreate: false, isDelete: false, hunks: [] }; + } + current.oldPath = this.parseDiffHeaderPath(line.slice(4)); + i += 1; + continue; + } + + if (line.startsWith('+++ ')) { + if (!current) { + current = { oldPath: '', newPath: '', isCreate: false, isDelete: false, hunks: [] }; + } + current.newPath = this.parseDiffHeaderPath(line.slice(4)); + i += 1; + continue; + } + + if (line.startsWith('new file mode')) { + if (!current) { + current = { oldPath: '', newPath: '', isCreate: false, isDelete: false, hunks: [] }; + } + current.isCreate = true; + i += 1; + continue; + } + + if (line.startsWith('deleted file mode')) { + if (!current) { + current = { oldPath: '', newPath: '', isCreate: false, isDelete: false, hunks: [] }; + } + current.isDelete = true; + i += 1; + continue; + } + + if (line.startsWith('@@ ')) { + if (!current) { + current = { oldPath: '', newPath: '', isCreate: false, isDelete: false, hunks: [] }; + } + + const parsed = this.parseHunkHeader(line); + if (!parsed) { + i += 1; + continue; + } + + i += 1; + const hunkLines: string[] = []; + while (i < lines.length) { + const next = lines[i]; + if (next.startsWith('diff --git ') || next.startsWith('@@ ')) { + break; + } + if (next.startsWith('--- ') && !hunkLines.some((entry) => entry.startsWith('+') || entry.startsWith('-') || entry.startsWith(' '))) { + break; + } + const prefix = next.slice(0, 1); + if (prefix === ' ' || prefix === '+' || prefix === '-' || prefix === '\\') { + hunkLines.push(next); + } + i += 1; + } + + current.hunks.push({ ...parsed, lines: hunkLines }); + continue; + } + + i += 1; + } + + flushCurrent(); + return patches.filter((patch) => (patch.oldPath || patch.newPath) && patch.hunks.length > 0); + } + + private parseHunkHeader(line: string): Omit | undefined { 
+ const match = line.match(/^@@\s+-(\d+)(?:,(\d+))?\s+\+(\d+)(?:,(\d+))?\s+@@/); + if (!match) { + return undefined; + } + + return { + oldStart: Number.parseInt(match[1], 10), + oldCount: Number.parseInt(match[2] ?? '1', 10), + newStart: Number.parseInt(match[3], 10), + newCount: Number.parseInt(match[4] ?? '1', 10) + }; + } + + private applyPatchToContent(original: string, hunks: Hunk[]): string { + const sourceLines = original.split('\n'); + const output: string[] = []; + let sourceIndex = 0; + + for (const hunk of hunks) { + const targetSourceIndex = Math.max(hunk.oldStart - 1, 0); + if (targetSourceIndex > sourceLines.length) { + throw new Error('Diff hunk exceeds source line count.'); + } + + while (sourceIndex < targetSourceIndex) { + output.push(sourceLines[sourceIndex]); + sourceIndex += 1; + } + + for (const line of hunk.lines) { + const marker = line.slice(0, 1); + const value = line.slice(1); + if (marker === ' ') { + output.push(sourceLines[sourceIndex] ?? value); + sourceIndex += 1; + continue; + } + if (marker === '-') { + sourceIndex += 1; + continue; + } + if (marker === '+') { + output.push(value); + continue; + } + } + } + + while (sourceIndex < sourceLines.length) { + output.push(sourceLines[sourceIndex]); + sourceIndex += 1; + } + + return output.join('\n'); + } + + private hunksToRanges(hunks: Hunk[], proposedContent: string): LineRange[] { + const maxLine = Math.max(proposedContent.split('\n').length, 1); + const ranges: LineRange[] = []; + for (const hunk of hunks) { + const startLine = Math.min(Math.max(hunk.newStart, 1), maxLine); + const span = Math.max(hunk.newCount, 1); + const endLine = Math.min(startLine + span - 1, maxLine); + ranges.push({ startLine, endLine }); + } + return ranges; + } + + private setDecorations(files: PendingDiffFile[]): void { + this.decorationRanges.clear(); + for (const file of files) { + if (file.modifiedRanges.length === 0) { + continue; + } + const ranges = file.modifiedRanges.map((range) => { + const start 
= Math.max(range.startLine - 1, 0); + const end = Math.max(range.endLine - 1, start); + return new vscode.Range(start, 0, end, Number.MAX_SAFE_INTEGER); + }); + this.decorationRanges.set(file.uri.toString(), ranges); + } + + this.applyDecorationsToVisibleEditors(); + + if (this.clearTimer) { + clearTimeout(this.clearTimer); + } + this.clearTimer = setTimeout(() => { + this.clearDecorations(); + }, 30000); + } + + private applyDecorationsToVisibleEditors(): void { + for (const editor of vscode.window.visibleTextEditors) { + const ranges = this.decorationRanges.get(editor.document.uri.toString()) ?? []; + editor.setDecorations(this.decorationType, ranges); + } + } + + private normalizePatchPath(value: string): string { + const trimmed = value.trim(); + if (!trimmed || trimmed === '/dev/null') { + return ''; + } + return this.stripGitPrefix(trimmed); + } + + private parseDiffHeaderPath(value: string): string { + const first = value.trim().split(/\s+/)[0] ?? ''; + return this.stripGitPrefix(first); + } + + private stripGitPrefix(value: string): string { + if (value === '/dev/null') { + return value; + } + if (value.startsWith('a/') || value.startsWith('b/')) { + return value.slice(2); + } + return value; + } +} diff --git a/vscode-extension/src/extension.ts b/vscode-extension/src/extension.ts new file mode 100644 index 0000000..c35fe59 --- /dev/null +++ b/vscode-extension/src/extension.ts @@ -0,0 +1,739 @@ +import * as vscode from 'vscode'; +import { ChatSessionTarget, ChatWebviewProvider } from './chat/ChatWebviewProvider'; +import { DiffEditManager } from './edits/DiffEditManager'; +import { RemoteHostsTreeProvider } from './remote/RemoteHostsTreeProvider'; +import { OpenCodeTerminalBridge } from './terminal/OpenCodeTerminalBridge'; + +type ConnectionState = 'connecting' | 'connected' | 'disconnected' | 'error'; + +interface SessionRecord { + id: string; + label: string; + status: string; + workspacePath: string; + stale?: boolean; +} + +class SessionItem extends 
vscode.TreeItem {
+  constructor(readonly session: SessionRecord) {
+    super(session.stale ? `${session.label} (stale)` : session.label, vscode.TreeItemCollapsibleState.None);
+    this.id = session.id;
+    this.contextValue = 'opencodeSession';
+    this.iconPath = this.statusToIcon(session.status);
+    this.description = session.stale
+      ? `${session.workspacePath || 'n/a'} · stale data`
+      : session.workspacePath || undefined;
+    this.tooltip = session.stale
+      ? `${session.label}\nStatus: ${session.status}\nWorkspace: ${session.workspacePath || 'n/a'}\nData: stale (control plane unavailable)`
+      : `${session.label}\nStatus: ${session.status}\nWorkspace: ${session.workspacePath || 'n/a'}`;
+    this.command = {
+      command: 'opencode.attachSession',
+      title: 'Attach Session',
+      arguments: [this]
+    };
+  }
+
+  private statusToIcon(status: string): vscode.ThemeIcon {
+    switch (status.toLowerCase()) {
+      case 'active':
+        return new vscode.ThemeIcon('play-circle');
+      case 'idle':
+        return new vscode.ThemeIcon('clock');
+      case 'stopped':
+        return new vscode.ThemeIcon('debug-stop');
+      case 'error':
+      case 'errored':
+        return new vscode.ThemeIcon('error');
+      default:
+        return new vscode.ThemeIcon('question');
+    }
+  }
+}
+
+class SessionTreeProvider implements vscode.TreeDataProvider<SessionItem>, vscode.Disposable {
+  private readonly changeEmitter = new vscode.EventEmitter<SessionItem | undefined | null | void>();
+  readonly onDidChangeTreeData = this.changeEmitter.event;
+
+  private sessions: SessionRecord[] = [];
+  private connectionState: ConnectionState = 'disconnected';
+  private sseAbort?: AbortController;
+  private disposed = false;
+  private reconnectDelayMs = 2000;
+  private scheduledRefresh?: NodeJS.Timeout;
+  private staleNoticeVisible = false;
+
+  constructor(
+    private readonly onConnectionStateChanged: (state: ConnectionState, detail?: string) => void
+  ) {}
+
+  dispose(): void {
+    this.disposed = true;
+    this.sseAbort?.abort();
+    if (this.scheduledRefresh) {
+      clearTimeout(this.scheduledRefresh);
+      this.scheduledRefresh = undefined;
+    }
+    this.setConnectionState('disconnected', 'Extension deactivated');
+  }
+
+  getTreeItem(element: SessionItem): vscode.TreeItem {
+    return element;
+  }
+
+  getChildren(): Thenable<SessionItem[]> {
+    return Promise.resolve(this.sessions.map((session) => new SessionItem(session)));
+  }
+
+  async start(): Promise<void> {
+    await this.refresh();
+    void this.startEventLoop();
+  }
+
+  getSessions(): SessionRecord[] {
+    return [...this.sessions];
+  }
+
+  async refresh(): Promise<void> {
+    try {
+      const response = await this.requestWithBackoff('/api/sessions', {
+        method: 'GET',
+        headers: { Accept: 'application/json' }
+      }, 2);
+
+      if (!response.ok) {
+        throw new Error(`GET /api/sessions failed (${response.status})`);
+      }
+
+      const body = (await response.json()) as unknown;
+      this.sessions = Array.isArray(body)
+        ? body
+            .map((entry) => this.normalizeSession(entry))
+            .filter((value): value is SessionRecord => value !== null)
+            .map((session) => ({ ...session, stale: false }))
+        : [];
+
+      this.staleNoticeVisible = false;
+      if (this.connectionState !== 'connected') {
+        this.setConnectionState('connected');
+      }
+      this.changeEmitter.fire();
+    } catch (error) {
+      const detail = this.formatError(error);
+      if (this.sessions.length > 0) {
+        this.sessions = this.sessions.map((session) => ({ ...session, stale: true }));
+        this.setConnectionState('error', `Showing stale data: ${detail}`);
+        this.changeEmitter.fire();
+        this.showStaleDataWarning(detail);
+        return;
+      }
+
+      this.setConnectionState('error', detail);
+      throw error;
+    }
+  }
+
+  async createSession(workspacePath: string, label?: string): Promise<void> {
+    const payload = { workspacePath, ...(label ?
{ label } : {}) }; + const response = await this.request('/api/sessions', { + method: 'POST', + body: JSON.stringify(payload) + }); + + if (!response.ok) { + throw new Error(`POST /api/sessions failed (${response.status})`); + } + await this.refresh(); + } + + async attachSession(item: SessionItem): Promise { + await this.sessionAction(item, 'attach', 'POST'); + } + + async stopSession(item: SessionItem): Promise { + await this.sessionAction(item, 'stop', 'POST'); + } + + async restartSession(item: SessionItem): Promise { + await this.sessionAction(item, 'restart', 'POST'); + } + + async deleteSession(item: SessionItem): Promise { + const response = await this.request(`/api/sessions/${encodeURIComponent(item.session.id)}`, { + method: 'DELETE' + }); + + if (!response.ok && response.status !== 204) { + throw new Error(`DELETE /api/sessions/{id} failed (${response.status})`); + } + await this.refresh(); + } + + private async startEventLoop(): Promise { + while (!this.disposed) { + try { + await this.connectEventStream(); + } catch (error) { + if (!this.disposed) { + this.setConnectionState('error', `SSE error: ${this.formatError(error)}`); + } + } + + if (this.disposed) { + break; + } + + this.setConnectionState('disconnected', `SSE disconnected; retrying in ${Math.floor(this.reconnectDelayMs / 1000)}s`); + await this.delay(this.reconnectDelayMs); + this.reconnectDelayMs = Math.min(this.reconnectDelayMs * 2, 15000); + } + } + + private async connectEventStream(): Promise { + this.setConnectionState('connecting', 'Connecting to /api/events'); + const controller = new AbortController(); + this.sseAbort = controller; + + const response = await this.request('/api/events', { + method: 'GET', + headers: { Accept: 'text/event-stream' }, + signal: controller.signal + }); + + if (!response.ok || !response.body) { + throw new Error(`GET /api/events failed (${response.status})`); + } + + this.reconnectDelayMs = 2000; + this.setConnectionState('connected', 'SSE connected'); + + 
const reader = response.body.getReader(); + const decoder = new TextDecoder(); + let buffer = ''; + + while (!this.disposed) { + const { done, value } = await reader.read(); + if (done) { + break; + } + + buffer += decoder.decode(value, { stream: true }).replace(/\r\n/g, '\n'); + + let boundary = buffer.indexOf('\n\n'); + while (boundary >= 0) { + const chunk = buffer.slice(0, boundary); + buffer = buffer.slice(boundary + 2); + this.handleSseChunk(chunk); + boundary = buffer.indexOf('\n\n'); + } + } + } + + private handleSseChunk(chunk: string): void { + const trimmed = chunk.trim(); + if (!trimmed || trimmed.startsWith(':')) { + return; + } + + const lines = trimmed.split('\n'); + let shouldRefresh = false; + for (const line of lines) { + if (line.startsWith('event:')) { + const eventType = line.slice(6).trim(); + if (eventType.startsWith('session.') || eventType.startsWith('backend.') || eventType === 'message') { + shouldRefresh = true; + } + } + if (line.startsWith('data:')) { + shouldRefresh = true; + } + } + + if (shouldRefresh) { + this.scheduleRefresh(); + } + } + + private scheduleRefresh(): void { + if (this.scheduledRefresh) { + return; + } + + this.scheduledRefresh = setTimeout(() => { + this.scheduledRefresh = undefined; + void this.refresh().catch(() => undefined); + }, 250); + } + + private async sessionAction(item: SessionItem, action: 'attach' | 'stop' | 'restart', method: 'POST'): Promise { + const response = await this.request(`/api/sessions/${encodeURIComponent(item.session.id)}/${action}`, { + method + }); + + if (!response.ok) { + throw new Error(`${method} /api/sessions/{id}/${action} failed (${response.status})`); + } + await this.refresh(); + } + + private normalizeSession(value: unknown): SessionRecord | null { + if (!value || typeof value !== 'object') { + return null; + } + + const candidate = value as Record; + const id = this.firstNonEmptyString(candidate.id, candidate.sessionId, candidate.session_id); + if (!id) { + return null; + } + 
+
+    const label =
+      this.firstNonEmptyString(candidate.label, candidate.name, candidate.sessionName, candidate.session_name) || id;
+    const status = this.firstNonEmptyString(candidate.status, candidate.state) || 'unknown';
+    const workspacePath =
+      this.firstNonEmptyString(candidate.workspacePath, candidate.workspace_path, candidate.path, candidate.projectPath) ||
+      '';
+
+    return {
+      id,
+      label,
+      status,
+      workspacePath
+    };
+  }
+
+  private firstNonEmptyString(...values: unknown[]): string | undefined {
+    for (const value of values) {
+      if (typeof value === 'string' && value.trim()) {
+        return value.trim();
+      }
+    }
+    return undefined;
+  }
+
+  private setConnectionState(state: ConnectionState, detail?: string): void {
+    this.connectionState = state;
+    this.onConnectionStateChanged(state, detail);
+  }
+
+  private getControlPlaneUrl(): string {
+    const cfg = vscode.workspace.getConfiguration('opencode');
+    const configured = cfg.get('controlPlaneUrl', 'http://localhost:8080');
+    return configured.replace(/\/+$/, '');
+  }
+
+  private getAuthToken(): string {
+    return vscode.workspace.getConfiguration('opencode').get('authToken', '').trim();
+  }
+
+  private async request(path: string, init: RequestInit): Promise<Response> {
+    const headers = new Headers(init.headers ?? undefined);
+    if (!headers.has('Accept')) {
+      headers.set('Accept', 'application/json');
+    }
+    if (init.body && !headers.has('Content-Type')) {
+      headers.set('Content-Type', 'application/json');
+    }
+
+    const token = this.getAuthToken();
+    if (token) {
+      headers.set('Authorization', `Bearer ${token}`);
+    }
+
+    return fetch(`${this.getControlPlaneUrl()}${path}`, {
+      ...init,
+      headers
+    });
+  }
+
+  private async requestWithBackoff(path: string, init: RequestInit, maxRetries: number): Promise<Response> {
+    let attempt = 0;
+    let lastError: unknown;
+
+    while (attempt <= maxRetries) {
+      try {
+        const response = await this.request(path, init);
+        if (!this.isRetryableStatus(response.status) || attempt >= maxRetries) {
+          return response;
+        }
+        lastError = new Error(`retryable status ${response.status}`);
+      } catch (error) {
+        lastError = error;
+        if (!this.isRetryableError(error) || attempt >= maxRetries) {
+          throw error;
+        }
+      }
+
+      const delayMs = Math.min(1000 * Math.pow(2, attempt), 4000);
+      await this.delay(delayMs);
+      attempt++;
+    }
+
+    if (lastError instanceof Error) {
+      throw lastError;
+    }
+    throw new Error('request failed');
+  }
+
+  private isRetryableStatus(status: number): boolean {
+    return status === 429 || status >= 500;
+  }
+
+  private isRetryableError(error: unknown): boolean {
+    if (error instanceof Error) {
+      const text = error.message.toLowerCase();
+      return text.includes('fetch') || text.includes('network') || text.includes('timeout') || text.includes('econn');
+    }
+    return true;
+  }
+
+  private showStaleDataWarning(detail: string): void {
+    if (this.staleNoticeVisible) {
+      return;
+    }
+    this.staleNoticeVisible = true;
+    void vscode.window
+      .showWarningMessage(`OpenCode unavailable (${detail}).
Showing stale sessions.`, 'Retry') + .then(async (action) => { + this.staleNoticeVisible = false; + if (action === 'Retry') { + await this.refresh().catch(() => undefined); + } + }); + } + + private formatError(error: unknown): string { + if (error instanceof Error) { + return error.message; + } + return String(error); + } + + private delay(ms: number): Promise { + return new Promise((resolve) => setTimeout(resolve, ms)); + } +} + +class ConnectionStatusBar implements vscode.Disposable { + private readonly item: vscode.StatusBarItem; + + constructor() { + this.item = vscode.window.createStatusBarItem(vscode.StatusBarAlignment.Left, 100); + this.item.command = 'opencode.refreshSessions'; + this.item.tooltip = 'OpenCode control plane status. Click to refresh sessions.'; + this.update('disconnected', 'Not connected'); + this.item.show(); + } + + update(state: ConnectionState, detail?: string): void { + switch (state) { + case 'connected': + this.item.text = '$(plug) OpenCode: Connected'; + this.item.color = undefined; + break; + case 'connecting': + this.item.text = '$(sync~spin) OpenCode: Connecting'; + this.item.color = undefined; + break; + case 'error': + this.item.text = '$(error) OpenCode: Error'; + this.item.color = new vscode.ThemeColor('statusBarItem.errorForeground'); + break; + default: + this.item.text = '$(debug-disconnect) OpenCode: Disconnected'; + this.item.color = undefined; + break; + } + + this.item.tooltip = detail + ? 
`OpenCode control plane status: ${state}\n${detail}\n\nClick to refresh sessions.` + : `OpenCode control plane status: ${state}\n\nClick to refresh sessions.`; + } + + dispose(): void { + this.item.dispose(); + } +} + +export function activate(context: vscode.ExtensionContext): void { + const statusBar = new ConnectionStatusBar(); + const treeProvider = new SessionTreeProvider((state, detail) => statusBar.update(state, detail)); + const remoteHostsProvider = new RemoteHostsTreeProvider( + () => getControlPlaneUrlFromConfig(), + () => getAuthTokenFromConfig() + ); + const chatProvider = new ChatWebviewProvider(context.extensionUri); + const diffEditManager = new DiffEditManager((sessionId) => + treeProvider.getSessions().find((session) => session.id === sessionId)?.workspacePath + ); + + context.subscriptions.push(statusBar, treeProvider, remoteHostsProvider, chatProvider, diffEditManager); + context.subscriptions.push(vscode.window.registerTreeDataProvider('opencodeSessions', treeProvider)); + context.subscriptions.push(vscode.window.registerTreeDataProvider('opencodeRemoteHosts', remoteHostsProvider)); + context.subscriptions.push( + vscode.window.registerWebviewViewProvider('opencodeChat', chatProvider, { + webviewOptions: { retainContextWhenHidden: true } + }) + ); + + context.subscriptions.push( + vscode.commands.registerCommand('opencode.refreshSessions', async () => { + try { + await treeProvider.refresh(); + } catch (error) { + vscode.window.showErrorMessage(`Failed to refresh sessions: ${error instanceof Error ? error.message : String(error)}`); + } + }) + ); + + context.subscriptions.push( + vscode.commands.registerCommand('opencode.refreshRemoteHosts', async () => { + try { + await remoteHostsProvider.refresh(true); + } catch (error) { + vscode.window.showErrorMessage(`Failed to refresh remote hosts: ${error instanceof Error ? 
error.message : String(error)}`); + } + }) + ); + + context.subscriptions.push( + vscode.commands.registerCommand('opencode.createSession', async () => { + const workspaceDefault = vscode.workspace.workspaceFolders?.[0]?.uri.fsPath ?? ''; + const workspacePath = await vscode.window.showInputBox({ + prompt: 'Workspace path for new session', + value: workspaceDefault, + ignoreFocusOut: true + }); + + if (!workspacePath) { + return; + } + + const label = await vscode.window.showInputBox({ + prompt: 'Session label (optional)', + ignoreFocusOut: true + }); + + try { + await treeProvider.createSession(workspacePath, label?.trim() || undefined); + vscode.window.showInformationMessage('OpenCode session created.'); + } catch (error) { + vscode.window.showErrorMessage( + `Failed to create session: ${error instanceof Error ? error.message : String(error)}` + ); + } + }) + ); + + context.subscriptions.push( + vscode.commands.registerCommand('opencode.openChat', async (item?: SessionItem) => { + let target: ChatSessionTarget | undefined; + if (item?.session) { + target = { + id: item.session.id, + label: item.session.label, + workspacePath: item.session.workspacePath + }; + } else { + const picked = await pickSession(treeProvider, 'Select a session for OpenCode chat'); + + if (!picked) { + return; + } + + target = { + id: picked.id, + label: picked.label, + workspacePath: picked.workspacePath + }; + } + + await chatProvider.openChat(target); + }) + ); + + context.subscriptions.push( + vscode.commands.registerCommand('opencode.openTerminal', async (item?: SessionItem) => { + const session = item?.session ?? 
(await pickSession(treeProvider, 'Select a session for OpenCode terminal')); + if (!session) { + return; + } + + const terminal = vscode.window.createTerminal(createTerminalOptions(session)); + terminal.show(true); + }) + ); + + context.subscriptions.push( + vscode.window.registerTerminalProfileProvider('opencode.terminalProfile', { + provideTerminalProfile: async () => { + const session = await pickSession(treeProvider, 'Select a session for OpenCode terminal profile'); + if (!session) { + return undefined; + } + return new vscode.TerminalProfile(createTerminalOptions(session)); + } + }) + ); + + context.subscriptions.push( + vscode.commands.registerCommand('opencode.applyDiffPreview', async (payload?: { sessionId?: string; diff?: string }) => { + await diffEditManager.stageFromPayload({ + sessionId: payload?.sessionId, + diff: payload?.diff, + source: 'chat.applyDiffPreview' + }); + }) + ); + + context.subscriptions.push( + vscode.commands.registerCommand('opencode.applyLastDiff', async () => { + await diffEditManager.applyLastDiff(); + }) + ); + + context.subscriptions.push( + vscode.commands.registerCommand('opencode.rejectLastDiff', () => { + diffEditManager.rejectLastDiff(); + }) + ); + + context.subscriptions.push( + vscode.commands.registerCommand('opencode.clearDiffHighlights', () => { + diffEditManager.clearDecorations(); + }) + ); + + context.subscriptions.push( + vscode.commands.registerCommand('opencode.attachSession', async (item: SessionItem) => { + if (!item) { + return; + } + + try { + await treeProvider.attachSession(item); + vscode.window.showInformationMessage(`Attached to session: ${item.session.label}`); + } catch (error) { + vscode.window.showErrorMessage( + `Failed to attach session: ${error instanceof Error ? 
error.message : String(error)}` + ); + } + }) + ); + + context.subscriptions.push( + vscode.commands.registerCommand('opencode.stopSession', async (item: SessionItem) => { + if (!item) { + return; + } + + try { + await treeProvider.stopSession(item); + vscode.window.showInformationMessage(`Stopped session: ${item.session.label}`); + } catch (error) { + vscode.window.showErrorMessage(`Failed to stop session: ${error instanceof Error ? error.message : String(error)}`); + } + }) + ); + + context.subscriptions.push( + vscode.commands.registerCommand('opencode.restartSession', async (item: SessionItem) => { + if (!item) { + return; + } + + try { + await treeProvider.restartSession(item); + vscode.window.showInformationMessage(`Restarted session: ${item.session.label}`); + } catch (error) { + vscode.window.showErrorMessage( + `Failed to restart session: ${error instanceof Error ? error.message : String(error)}` + ); + } + }) + ); + + context.subscriptions.push( + vscode.commands.registerCommand('opencode.deleteSession', async (item: SessionItem) => { + if (!item) { + return; + } + + const confirmed = await vscode.window.showWarningMessage( + `Delete session \"${item.session.label}\"?`, + { modal: true }, + 'Delete' + ); + + if (confirmed !== 'Delete') { + return; + } + + try { + await treeProvider.deleteSession(item); + vscode.window.showInformationMessage(`Deleted session: ${item.session.label}`); + } catch (error) { + vscode.window.showErrorMessage( + `Failed to delete session: ${error instanceof Error ? error.message : String(error)}` + ); + } + }) + ); + + void treeProvider.start(); + void remoteHostsProvider.start().catch((error) => { + vscode.window.showWarningMessage(`Remote hosts unavailable: ${error instanceof Error ? 
error.message : String(error)}`);
+  });
+}
+
+export function deactivate(): void {
+}
+
+async function pickSession(treeProvider: SessionTreeProvider, placeHolder: string): Promise<SessionRecord | undefined> {
+  if (treeProvider.getSessions().length === 0) {
+    await treeProvider.refresh().catch(() => undefined);
+  }
+
+  const sessions = treeProvider.getSessions();
+  if (sessions.length === 0) {
+    vscode.window.showWarningMessage('No sessions available. Refresh or create a session first.');
+    return undefined;
+  }
+
+  const picked = await vscode.window.showQuickPick(
+    sessions.map((session) => ({
+      label: session.label,
+      description: session.workspacePath || session.id,
+      detail: `${session.status} · ${session.id}`,
+      session
+    })),
+    { placeHolder }
+  );
+
+  return picked?.session;
+}
+
+function createTerminalOptions(session: SessionRecord): vscode.ExtensionTerminalOptions {
+  const controlPlaneUrl = getControlPlaneUrlFromConfig();
+  const authToken = getAuthTokenFromConfig();
+
+  return {
+    name: `OpenCode: ${session.label}`,
+    pty: new OpenCodeTerminalBridge({
+      controlPlaneUrl,
+      authToken,
+      session: {
+        id: session.id,
+        label: session.label
+      }
+    })
+  };
+}
+
+function getControlPlaneUrlFromConfig(): string {
+  const configured = vscode.workspace.getConfiguration('opencode').get('controlPlaneUrl', 'http://localhost:8080');
+  return configured.replace(/\/+$/, '');
+}
+
+function getAuthTokenFromConfig(): string {
+  return vscode.workspace.getConfiguration('opencode').get('authToken', '').trim();
+}
diff --git a/vscode-extension/src/remote/RemoteHostsTreeProvider.ts b/vscode-extension/src/remote/RemoteHostsTreeProvider.ts new file mode 100644 index 0000000..bcff4e4 --- /dev/null +++ b/vscode-extension/src/remote/RemoteHostsTreeProvider.ts @@ -0,0 +1,492 @@
+import * as vscode from 'vscode';
+
+interface RemoteHostsResponse {
+  hosts: RemoteHostRecord[];
+  cached: boolean;
+  stale: boolean;
+  partial: boolean;
+  lastScan?: string;
+  warnings?: string[];
+}
+
+interface RemoteHostRecord 
{ + name: string; + address: string; + user: string; + label: string; + status: string; + sessionCount: number; + lastSeen?: string; + lastError?: string; + transport?: string; + transportError?: string; + projects: RemoteProjectRecord[]; +} + +interface RemoteProjectRecord { + name: string; + sessions: RemoteSessionRecord[]; +} + +interface RemoteSessionRecord { + id: string; + title: string; + directory: string; + status: string; + activity: string; + lastActivity?: string; +} + +type RemoteNode = RemoteInfoItem | RemoteHostItem | RemoteProjectItem | RemoteSessionItem; + +class RemoteInfoItem extends vscode.TreeItem { + constructor(label: string, description?: string, severity: 'info' | 'warning' | 'error' = 'info') { + super(label, vscode.TreeItemCollapsibleState.None); + this.contextValue = 'opencodeRemoteInfo'; + this.description = description; + this.tooltip = description ? `${label}\n${description}` : label; + this.iconPath = severity === 'error' + ? new vscode.ThemeIcon('error') + : severity === 'warning' + ? new vscode.ThemeIcon('warning') + : new vscode.ThemeIcon('info'); + } +} + +class RemoteHostItem extends vscode.TreeItem { + constructor(readonly host: RemoteHostRecord) { + super(host.label || host.name, host.projects.length > 0 ? vscode.TreeItemCollapsibleState.Collapsed : vscode.TreeItemCollapsibleState.None); + this.id = `remote-host:${host.name}`; + this.contextValue = 'opencodeRemoteHost'; + this.iconPath = this.statusToIcon(host.status); + this.description = `${host.address || 'n/a'} · ${host.sessionCount} session${host.sessionCount === 1 ? 
'' : 's'}`; + + const details: string[] = [ + `Host: ${host.name}`, + `Address: ${host.address || 'n/a'}`, + `Status: ${host.status || 'unknown'}` + ]; + if (host.user) { + details.push(`User: ${host.user}`); + } + if (host.lastSeen) { + details.push(`Last seen: ${host.lastSeen}`); + } + if (host.transport) { + details.push(`Transport: ${host.transport}`); + } + if (host.lastError) { + details.push(`Last error: ${host.lastError}`); + } + if (host.transportError) { + details.push(`Transport error: ${host.transportError}`); + } + this.tooltip = details.join('\n'); + } + + private statusToIcon(status: string): vscode.ThemeIcon { + switch (status.toLowerCase()) { + case 'online': + return new vscode.ThemeIcon('vm-active'); + case 'auth_required': + return new vscode.ThemeIcon('key'); + case 'offline': + return new vscode.ThemeIcon('debug-disconnect'); + case 'error': + return new vscode.ThemeIcon('error'); + default: + return new vscode.ThemeIcon('question'); + } + } +} + +class RemoteProjectItem extends vscode.TreeItem { + constructor(readonly host: RemoteHostRecord, readonly project: RemoteProjectRecord) { + super(project.name || '(unnamed project)', project.sessions.length > 0 ? vscode.TreeItemCollapsibleState.Collapsed : vscode.TreeItemCollapsibleState.None); + this.id = `remote-project:${host.name}:${project.name}`; + this.contextValue = 'opencodeRemoteProject'; + this.iconPath = new vscode.ThemeIcon('folder-library'); + this.description = `${project.sessions.length} session${project.sessions.length === 1 ? 
'' : 's'}`; + this.tooltip = `${project.name}\nHost: ${host.name}\nSessions: ${project.sessions.length}`; + } +} + +class RemoteSessionItem extends vscode.TreeItem { + constructor(readonly host: RemoteHostRecord, readonly project: RemoteProjectRecord, readonly session: RemoteSessionRecord) { + super(session.title || session.id, vscode.TreeItemCollapsibleState.None); + this.id = `remote-session:${host.name}:${project.name}:${session.id}`; + this.contextValue = 'opencodeRemoteSession'; + this.iconPath = this.statusToIcon(session.status); + + const relative = formatRelativeTime(session.lastActivity); + this.description = [session.status || 'unknown', relative].filter(Boolean).join(' · '); + this.tooltip = [ + `Session: ${session.title || session.id}`, + `ID: ${session.id}`, + `Host: ${host.name}`, + `Project: ${project.name}`, + `Status: ${session.status || 'unknown'}`, + session.directory ? `Directory: ${session.directory}` : '', + session.activity ? `Activity: ${session.activity}` : '', + session.lastActivity ? 
`Last activity: ${session.lastActivity}` : ''
+    ]
+      .filter(Boolean)
+      .join('\n');
+  }
+
+  private statusToIcon(status: string): vscode.ThemeIcon {
+    switch (status.toLowerCase()) {
+      case 'active':
+        return new vscode.ThemeIcon('play-circle');
+      case 'idle':
+        return new vscode.ThemeIcon('clock');
+      case 'archived':
+        return new vscode.ThemeIcon('archive');
+      default:
+        return new vscode.ThemeIcon('history');
+    }
+  }
+}
+
+export class RemoteHostsTreeProvider implements vscode.TreeDataProvider<RemoteNode>, vscode.Disposable {
+  private readonly changeEmitter = new vscode.EventEmitter<RemoteNode | undefined | void>();
+  readonly onDidChangeTreeData = this.changeEmitter.event;
+
+  private hosts: RemoteHostRecord[] = [];
+  private cached = false;
+  private stale = false;
+  private partial = false;
+  private lastScan = '';
+  private warnings: string[] = [];
+  private lastError = '';
+  private disposed = false;
+  private refreshTimer?: NodeJS.Timeout;
+
+  constructor(
+    private readonly getControlPlaneUrl: () => string,
+    private readonly getAuthToken: () => string
+  ) {}
+
+  dispose(): void {
+    this.disposed = true;
+    if (this.refreshTimer) {
+      clearTimeout(this.refreshTimer);
+      this.refreshTimer = undefined;
+    }
+  }
+
+  getTreeItem(element: RemoteNode): vscode.TreeItem {
+    return element;
+  }
+
+  getChildren(element?: RemoteNode): Thenable<RemoteNode[]> {
+    if (!element) {
+      const items: RemoteNode[] = [];
+
+      if (this.lastError) {
+        items.push(new RemoteInfoItem('Remote scan error', this.lastError, 'error'));
+      } else if (this.stale || this.partial || this.cached || this.lastScan || this.warnings.length > 0) {
+        const statusParts: string[] = [];
+        if (this.stale) {
+          statusParts.push('stale');
+        }
+        if (this.partial) {
+          statusParts.push('partial');
+        }
+        if (this.cached) {
+          statusParts.push('cached');
+        }
+        if (this.lastScan) {
+          statusParts.push(`last scan: ${this.lastScan}`);
+        }
+        const warning = this.warnings[0] ?? 
'';
+        items.push(new RemoteInfoItem('Remote scan status', [statusParts.join(' · '), warning].filter(Boolean).join('\n'), this.stale || this.partial ? 'warning' : 'info'));
+      }
+
+      if (this.hosts.length === 0) {
+        items.push(new RemoteInfoItem('No remote hosts discovered', 'Run refresh after configuring SSH hosts.'));
+      } else {
+        items.push(...this.hosts.map((host) => new RemoteHostItem(host)));
+      }
+
+      return Promise.resolve(items);
+    }
+
+    if (element instanceof RemoteHostItem) {
+      return Promise.resolve(element.host.projects.map((project) => new RemoteProjectItem(element.host, project)));
+    }
+
+    if (element instanceof RemoteProjectItem) {
+      return Promise.resolve(element.project.sessions.map((session) => new RemoteSessionItem(element.host, element.project, session)));
+    }
+
+    return Promise.resolve([]);
+  }
+
+  async start(): Promise<void> {
+    await this.refresh(false);
+  }
+
+  async refresh(force: boolean): Promise<void> {
+    try {
+      const response = await this.request(this.remoteHostsPath(force), {
+        method: 'GET',
+        headers: { Accept: 'application/json' }
+      });
+      if (!response.ok) {
+        throw new Error(`GET /api/remote/hosts failed (${response.status})`);
+      }
+
+      const body = (await response.json()) as unknown;
+      const normalized = this.normalizeResponse(body);
+      this.hosts = normalized.hosts;
+      this.cached = normalized.cached;
+      this.stale = normalized.stale;
+      this.partial = normalized.partial;
+      this.lastScan = normalized.lastScan ?? '';
+      this.warnings = normalized.warnings ?? 
[];
+      this.lastError = '';
+    } catch (error) {
+      this.lastError = this.formatError(error);
+      if (this.hosts.length === 0) {
+        this.cached = false;
+        this.stale = false;
+        this.partial = false;
+        this.lastScan = '';
+        this.warnings = [];
+      }
+      throw error;
+    } finally {
+      this.changeEmitter.fire();
+      this.scheduleAutoRefresh();
+    }
+  }
+
+  private scheduleAutoRefresh(): void {
+    if (this.disposed) {
+      return;
+    }
+
+    if (this.refreshTimer) {
+      clearTimeout(this.refreshTimer);
+      this.refreshTimer = undefined;
+    }
+
+    const seconds = vscode.workspace.getConfiguration('opencode').get('remoteHostsAutoRefreshSeconds', 30);
+    if (!Number.isFinite(seconds) || seconds <= 0) {
+      return;
+    }
+
+    this.refreshTimer = setTimeout(() => {
+      void this.refresh(false).catch(() => undefined);
+    }, Math.max(1, Math.floor(seconds)) * 1000);
+  }
+
+  private remoteHostsPath(force: boolean): string {
+    const params = new URLSearchParams();
+    if (force) {
+      params.set('refresh', 'true');
+    }
+    const sshConfigPath = vscode.workspace
+      .getConfiguration('opencode')
+      .get('remoteSshConfigPath', '')
+      .trim();
+    if (sshConfigPath) {
+      params.set('sshConfigPath', sshConfigPath);
+    }
+    const query = params.toString();
+    return query ? `/api/remote/hosts?${query}` : '/api/remote/hosts';
+  }
+
+  private async request(path: string, init: RequestInit): Promise<Response> {
+    const headers = new Headers(init.headers ?? undefined);
+    if (!headers.has('Accept')) {
+      headers.set('Accept', 'application/json');
+    }
+
+    const token = this.getAuthToken().trim();
+    if (token) {
+      headers.set('Authorization', `Bearer ${token}`);
+    }
+
+    return fetch(`${this.getControlPlaneUrl().replace(/\/+$/, '')}${path}`, {
+      ...init,
+      headers
+    });
+  }
+
+  private normalizeResponse(value: unknown): RemoteHostsResponse {
+    const record = asRecord(value);
+    if (!record) {
+      return {
+        hosts: [],
+        cached: false,
+        stale: false,
+        partial: false,
+        warnings: []
+      };
+    }
+
+    const hosts = Array.isArray(record.hosts)
+      ? 
record.hosts + .map((host) => this.normalizeHost(host)) + .filter((host): host is RemoteHostRecord => host !== null) + : []; + + return { + hosts, + cached: Boolean(record.cached), + stale: Boolean(record.stale), + partial: Boolean(record.partial), + lastScan: readString(record.lastScan), + warnings: readStringArray(record.warnings) + }; + } + + private normalizeHost(value: unknown): RemoteHostRecord | null { + const record = asRecord(value); + if (!record) { + return null; + } + + const name = readString(record.name); + if (!name) { + return null; + } + + const projects = Array.isArray(record.projects) + ? record.projects + .map((project) => this.normalizeProject(project)) + .filter((project): project is RemoteProjectRecord => project !== null) + : []; + + return { + name, + address: readString(record.address), + user: readString(record.user), + label: readString(record.label) || name, + status: readString(record.status) || 'unknown', + sessionCount: readNumber(record.sessionCount), + lastSeen: readString(record.lastSeen), + lastError: readString(record.lastError), + transport: readString(record.transport), + transportError: readString(record.transportError), + projects + }; + } + + private normalizeProject(value: unknown): RemoteProjectRecord | null { + const record = asRecord(value); + if (!record) { + return null; + } + + const name = readString(record.name); + if (!name) { + return null; + } + + const sessions = Array.isArray(record.sessions) + ? 
record.sessions
+          .map((session) => this.normalizeSession(session))
+          .filter((session): session is RemoteSessionRecord => session !== null)
+      : [];
+
+    return {
+      name,
+      sessions
+    };
+  }
+
+  private normalizeSession(value: unknown): RemoteSessionRecord | null {
+    const record = asRecord(value);
+    if (!record) {
+      return null;
+    }
+
+    const id = readString(record.id);
+    if (!id) {
+      return null;
+    }
+
+    return {
+      id,
+      title: readString(record.title),
+      directory: readString(record.directory),
+      status: readString(record.status) || 'unknown',
+      activity: readString(record.activity),
+      lastActivity: readString(record.lastActivity)
+    };
+  }
+
+  private formatError(error: unknown): string {
+    if (error instanceof Error) {
+      return error.message;
+    }
+    return String(error);
+  }
+}
+
+function asRecord(value: unknown): Record<string, unknown> | null {
+  if (!value || typeof value !== 'object' || Array.isArray(value)) {
+    return null;
+  }
+  return value as Record<string, unknown>;
+}
+
+function readString(value: unknown): string {
+  if (typeof value !== 'string') {
+    return '';
+  }
+  return value.trim();
+}
+
+function readNumber(value: unknown): number {
+  if (typeof value === 'number' && Number.isFinite(value)) {
+    return value;
+  }
+  if (typeof value === 'string' && value.trim()) {
+    const parsed = Number(value);
+    if (Number.isFinite(parsed)) {
+      return parsed;
+    }
+  }
+  return 0;
+}
+
+function readStringArray(value: unknown): string[] {
+  if (!Array.isArray(value)) {
+    return [];
+  }
+  return value
+    .map((entry) => (typeof entry === 'string' ? 
entry.trim() : ''))
+    .filter((entry): entry is string => Boolean(entry));
+}
+
+function formatRelativeTime(isoTimestamp: string | undefined): string {
+  if (!isoTimestamp) {
+    return '';
+  }
+  const ts = new Date(isoTimestamp);
+  if (Number.isNaN(ts.getTime())) {
+    return '';
+  }
+
+  const deltaMs = Date.now() - ts.getTime();
+  if (!Number.isFinite(deltaMs)) {
+    return '';
+  }
+  if (deltaMs < 60_000) {
+    return 'just now';
+  }
+  const minutes = Math.floor(deltaMs / 60_000);
+  if (minutes < 60) {
+    return `${minutes}m ago`;
+  }
+  const hours = Math.floor(minutes / 60);
+  if (hours < 24) {
+    return `${hours}h ago`;
+  }
+  const days = Math.floor(hours / 24);
+  return `${days}d ago`;
+}
diff --git a/vscode-extension/src/terminal/OpenCodeTerminalBridge.ts b/vscode-extension/src/terminal/OpenCodeTerminalBridge.ts new file mode 100644 index 0000000..3027818 --- /dev/null +++ b/vscode-extension/src/terminal/OpenCodeTerminalBridge.ts @@ -0,0 +1,245 @@
+import * as vscode from 'vscode';
+
+export interface TerminalSessionTarget {
+  id: string;
+  label: string;
+}
+
+interface BridgeConfig {
+  controlPlaneUrl: string;
+  authToken: string;
+  session: TerminalSessionTarget;
+}
+
+const encoder = new TextEncoder();
+
+export class OpenCodeTerminalBridge implements vscode.Pseudoterminal {
+  private readonly writeEmitter = new vscode.EventEmitter<string>();
+  private readonly closeEmitter = new vscode.EventEmitter<number | void>();
+
+  readonly onDidWrite: vscode.Event<string> = this.writeEmitter.event;
+  readonly onDidClose?: vscode.Event<number | void> = this.closeEmitter.event;
+
+  private socket?: WebSocket;
+  private reconnectTimer?: NodeJS.Timeout;
+  private reconnectAttempts = 0;
+  private closed = false;
+  private openCalled = false;
+  private dimensions?: vscode.TerminalDimensions;
+  private pendingInput: Uint8Array[] = [];
+
+  constructor(private readonly config: BridgeConfig) {}
+
+  open(initialDimensions: vscode.TerminalDimensions | undefined): void {
+    this.openCalled = true;
+    this.dimensions = 
initialDimensions; + this.printStatus(`Opening terminal for session ${this.config.session.label} (${this.config.session.id})...`); + this.connect(); + } + + close(): void { + this.dispose(0); + } + + handleInput(data: string): void { + const payload = encoder.encode(data); + if (this.socket && this.socket.readyState === WebSocket.OPEN) { + this.socket.send(payload); + return; + } + this.pendingInput.push(payload); + } + + setDimensions(dimensions: vscode.TerminalDimensions): void { + this.dimensions = dimensions; + this.sendResize(dimensions); + } + + private connect(): void { + if (this.closed) { + return; + } + + this.clearReconnectTimer(); + + let socket: WebSocket; + try { + socket = this.createSocket(); + } catch (error) { + this.scheduleReconnect(`failed to construct websocket (${this.formatError(error)})`); + return; + } + + this.socket = socket; + + socket.binaryType = 'arraybuffer'; + socket.onopen = () => { + this.reconnectAttempts = 0; + this.printStatus(`Connected to OpenCode terminal for ${this.config.session.label}.`); + if (this.dimensions) { + this.sendResize(this.dimensions); + } + this.flushPendingInput(); + }; + + socket.onmessage = (event: MessageEvent) => { + this.handleSocketMessage(event.data); + }; + + socket.onerror = () => { + this.printStatus('Terminal websocket error detected.'); + }; + + socket.onclose = (event: CloseEvent) => { + if (this.closed) { + return; + } + this.socket = undefined; + const reason = event.reason ? 
` (${event.reason})` : ''; + this.scheduleReconnect(`connection closed${reason}`); + }; + } + + private handleSocketMessage(data: unknown): void { + if (typeof data === 'string') { + this.writeEmitter.fire(data); + return; + } + + if (data instanceof ArrayBuffer) { + this.writeEmitter.fire(new TextDecoder().decode(data)); + return; + } + + if (ArrayBuffer.isView(data)) { + const view = data as ArrayBufferView; + this.writeEmitter.fire(new TextDecoder().decode(view.buffer.slice(view.byteOffset, view.byteOffset + view.byteLength))); + return; + } + + if (data instanceof Blob) { + void data.arrayBuffer().then((buffer) => { + if (!this.closed) { + this.writeEmitter.fire(new TextDecoder().decode(buffer)); + } + }); + } + } + + private sendResize(dimensions: vscode.TerminalDimensions): void { + if (!this.socket || this.socket.readyState !== WebSocket.OPEN) { + return; + } + + if (!dimensions.columns || !dimensions.rows) { + return; + } + + const control = { + type: 'resize', + cols: dimensions.columns, + rows: dimensions.rows + }; + this.socket.send(JSON.stringify(control)); + } + + private flushPendingInput(): void { + if (!this.socket || this.socket.readyState !== WebSocket.OPEN) { + return; + } + for (const payload of this.pendingInput) { + this.socket.send(payload); + } + this.pendingInput = []; + } + + private scheduleReconnect(reason: string): void { + if (this.closed) { + return; + } + + const delays = [1000, 2000, 4000, 8000, 15000]; + const idx = Math.min(this.reconnectAttempts, delays.length - 1); + const delayMs = delays[idx]; + this.reconnectAttempts += 1; + + this.printStatus(`Terminal ${reason}; reconnecting in ${Math.floor(delayMs / 1000)}s...`); + this.reconnectTimer = setTimeout(() => { + this.reconnectTimer = undefined; + if (!this.closed && this.openCalled) { + this.connect(); + } + }, delayMs); + } + + private createSocket(): WebSocket { + const primary = this.buildWebSocketUrl(false); + const authHeader = this.config.authToken ? 
{ Authorization: `Bearer ${this.config.authToken}` } : undefined;
+    const WSAny = WebSocket as unknown as {
+      new (url: string, protocols?: string | string[], options?: { headers?: Record<string, string> }): WebSocket;
+    };
+
+    if (authHeader) {
+      try {
+        return new WSAny(primary, undefined, { headers: authHeader });
+      } catch {
+        // The standard WebSocket constructor takes no options argument; if this
+        // runtime rejects it, fall back to the query-token URL below.
+      }
+    }
+
+    return new WebSocket(this.buildWebSocketUrl(Boolean(this.config.authToken)));
+  }
+
+  private buildWebSocketUrl(includeQueryToken: boolean): string {
+    const base = new URL(this.config.controlPlaneUrl);
+    base.protocol = base.protocol === 'https:' ? 'wss:' : 'ws:';
+    const basePath = base.pathname.replace(/\/+$/, '');
+    base.pathname = `${basePath}/ws/terminal/${encodeURIComponent(this.config.session.id)}`;
+    if (includeQueryToken && this.config.authToken) {
+      base.searchParams.set('access_token', this.config.authToken);
+      base.searchParams.set('authorization', `Bearer ${this.config.authToken}`);
+    }
+    return base.toString();
+  }
+
+  private printStatus(message: string): void {
+    this.writeEmitter.fire(`\r\n[OpenCode] ${message}\r\n`);
+  }
+
+  private clearReconnectTimer(): void {
+    if (this.reconnectTimer) {
+      clearTimeout(this.reconnectTimer);
+      this.reconnectTimer = undefined;
+    }
+  }
+
+  private formatError(error: unknown): string {
+    if (error instanceof Error) {
+      return error.message;
+    }
+    return String(error);
+  }
+
+  private dispose(exitCode: number): void {
+    if (this.closed) {
+      return;
+    }
+    this.closed = true;
+    this.clearReconnectTimer();
+
+    if (this.socket) {
+      this.socket.onopen = null;
+      this.socket.onmessage = null;
+      this.socket.onclose = null;
+      this.socket.onerror = null;
+      if (this.socket.readyState === WebSocket.OPEN || this.socket.readyState === WebSocket.CONNECTING) {
+        this.socket.close(1000, 'terminal disposed');
+      }
+      this.socket = undefined;
+    }
+
+    this.pendingInput = [];
+    this.closeEmitter.fire(exitCode);
+    this.writeEmitter.dispose();
+    this.closeEmitter.dispose();
+  }
+}
diff --git 
a/vscode-extension/src/test/runTest.ts b/vscode-extension/src/test/runTest.ts new file mode 100644 index 0000000..19de5bb --- /dev/null +++ b/vscode-extension/src/test/runTest.ts @@ -0,0 +1,26 @@
+import * as fs from 'node:fs/promises';
+import * as os from 'node:os';
+import * as path from 'node:path';
+
+import { runTests } from '@vscode/test-electron';
+
+async function main(): Promise<void> {
+  const extensionDevelopmentPath = path.resolve(__dirname, '../../');
+  const extensionTestsPath = path.resolve(__dirname, './suite/index');
+  const workspaceDir = await fs.mkdtemp(path.join(os.tmpdir(), 'opencode-router-vscode-tests-'));
+
+  try {
+    await runTests({
+      extensionDevelopmentPath,
+      extensionTestsPath,
+      launchArgs: [workspaceDir, '--disable-extensions', '--disable-workspace-trust']
+    });
+  } finally {
+    await fs.rm(workspaceDir, { recursive: true, force: true });
+  }
+}
+
+void main().catch((error) => {
+  console.error('Failed to run VS Code UI tests', error);
+  process.exit(1);
+});
diff --git a/vscode-extension/src/test/suite/index.ts b/vscode-extension/src/test/suite/index.ts new file mode 100644 index 0000000..27fd657 --- /dev/null +++ b/vscode-extension/src/test/suite/index.ts @@ -0,0 +1,24 @@
+import * as path from 'node:path';
+
+import Mocha from 'mocha';
+
+export function run(): Promise<void> {
+  const mocha = new Mocha({
+    ui: 'bdd',
+    color: true,
+    timeout: 30_000
+  });
+
+  const testsRoot = path.resolve(__dirname);
+  mocha.addFile(path.join(testsRoot, 'remoteHosts.ui.test.js'));
+
+  return new Promise((resolve, reject) => {
+    mocha.run((failures) => {
+      if (failures > 0) {
+        reject(new Error(`${failures} UI test(s) failed.`));
+        return;
+      }
+      resolve();
+    });
+  });
+}
diff --git a/vscode-extension/src/test/suite/remoteHosts.ui.test.ts b/vscode-extension/src/test/suite/remoteHosts.ui.test.ts new file mode 100644 index 0000000..34787be --- /dev/null +++ b/vscode-extension/src/test/suite/remoteHosts.ui.test.ts @@ -0,0 +1,24 @@
+import * as assert from 
'node:assert/strict'; +import * as vscode from 'vscode'; +import { suite, test } from 'mocha'; + +suite('OpenCode Remote Hosts UI', () => { + test('registers and runs remote hosts refresh command', async () => { + const extension = vscode.extensions.getExtension('local.opencode-router'); + assert.ok(extension, 'expected local.opencode-router extension to be discoverable'); + + if (!extension.isActive) { + await extension.activate(); + } + + const activityBarViews = await vscode.commands.getCommands(true); + assert.ok( + activityBarViews.includes('opencode.refreshRemoteHosts'), + 'expected opencode.refreshRemoteHosts command to be registered' + ); + + await assert.doesNotReject(async () => { + await vscode.commands.executeCommand('opencode.refreshRemoteHosts'); + }, 'expected remote hosts refresh command to execute without throwing'); + }); +}); diff --git a/vscode-extension/tsconfig.json b/vscode-extension/tsconfig.json new file mode 100644 index 0000000..c81eec7 --- /dev/null +++ b/vscode-extension/tsconfig.json @@ -0,0 +1,23 @@ +{ + "compilerOptions": { + "module": "commonjs", + "target": "ES2022", + "outDir": "out", + "lib": [ + "ES2022", + "DOM" + ], + "sourceMap": true, + "rootDir": "src", + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true + }, + "exclude": [ + "node_modules", + ".vscode-test" + ], + "include": [ + "src/**/*.ts" + ] +} diff --git a/web.go b/web.go new file mode 100644 index 0000000..a2617df --- /dev/null +++ b/web.go @@ -0,0 +1,18 @@ +package main + +import ( + "embed" + "io/fs" + "net/http" +) + +//go:embed web/* +var webAssets embed.FS + +func getWebFS() http.FileSystem { + fsys, err := fs.Sub(webAssets, "web") + if err != nil { + panic(err) + } + return http.FS(fsys) +} diff --git a/web/app.js b/web/app.js new file mode 100644 index 0000000..37e602d --- /dev/null +++ b/web/app.js @@ -0,0 +1,1201 @@ +document.addEventListener('DOMContentLoaded', () => { + +if (typeof marked !== 'undefined') { + const renderer = new 
marked.Renderer();
+  const originalCode = renderer.code.bind(renderer);
+  renderer.code = function(token) {
+    const text = typeof token === 'string' ? token : token.text;
+    const lang = typeof token === 'string' ? arguments[1] : token.lang;
+
+    if (lang === 'diff' || (!lang && (text.match(/^-[^-]/m) || text.match(/^\+[^+]/m)))) {
+      const lines = text.split('\n').map(line => {
+        if (line.startsWith('+')) return '<span class="diff-add">' + line.replace(/</g, '&lt;').replace(/>/g, '&gt;') + '</span>';
+        if (line.startsWith('-')) return '<span class="diff-del">' + line.replace(/</g, '&lt;').replace(/>/g, '&gt;') + '</span>';
+        return line.replace(/</g, '&lt;').replace(/>/g, '&gt;');
+      });
+      return '<pre class="diff">' + lines.join('\n') + '</pre>';
+    }
+    return originalCode.apply(this, arguments);
+  };
+  marked.use({ renderer });
+}
+
+function processDiffs(text) {
+  if (!text) return '';
+  const lines = text.split('\n');
+  let inDiff = false;
+  let inCodeBlock = false;
+  for (let i = 0; i < lines.length; i++) {
+    if (lines[i].startsWith('```')) {
+      inCodeBlock = !inCodeBlock;
+      if (inDiff) {
+        lines.splice(i, 0, '```');
+        inDiff = false;
+        i++;
+      }
+      continue;
+    }
+    if (!inCodeBlock) {
+      const isDiffLine = lines[i].match(/^[+-] /) && lines[i].length > 2;
+      if (isDiffLine && !inDiff) {
+        lines.splice(i, 0, '```diff');
+        inDiff = true;
+        i++;
+      } else if (!isDiffLine && inDiff && lines[i].trim() !== '') {
+        lines.splice(i, 0, '```');
+        inDiff = false;
+        i++;
+      }
+    }
+  }
+  if (inDiff) lines.push('```');
+  return lines.join('\n');
+}
+
+  const state = {
+    sessions: new Map(),
+    filter: '',
+    sortCol: 'id',
+    sortDesc: false
+  };
+
+  const DOM = {
+    sseIndicator: document.getElementById('sse-indicator'),
+    statOnline: document.getElementById('stat-online'),
+    statTotal: document.getElementById('stat-total'),
+    tbody: document.getElementById('sessions-body'),
+    searchInput: document.getElementById('search-input'),
+    emptyState: document.getElementById('empty-state'),
+    table: document.getElementById('sessions-table'),
+    btnCreate: document.getElementById('btn-create-session'),
+    modal: document.getElementById('modal-overlay'),
+    btnCloseModal: document.getElementById('btn-close-modal'),
+    authModalOverlay: document.getElementById('auth-modal-overlay'),
+    btnCloseAuthModal: document.getElementById('btn-close-auth-modal'),
+    authForm: document.getElementById('auth-form'),
+    inputPassword: document.getElementById('input-password'),
+    authHostLabel: document.getElementById('auth-host-label'),
+    authAgentStatus: document.getElementById('auth-agent-status'),
+    formCreate: document.getElementById('create-session-form'),
+    inputWorkspace: document.getElementById('input-workspace'),
+    inputLabel: 
+      document.getElementById('input-label'), // repurposed to label
+    viewSessions: document.getElementById('view-sessions'),
+    viewTerminal: document.getElementById('view-terminal'),
+    terminalContainer: document.getElementById('terminal-container'),
+    terminalSessionId: document.getElementById('terminal-session-id'),
+    terminalConnectionStatus: document.getElementById('terminal-connection-status'),
+    btnDetachTerminal: document.getElementById('btn-detach-terminal'),
+    chatHistory: document.getElementById('chat-history'),
+    chatForm: document.getElementById('chat-form'),
+    chatInput: document.getElementById('chat-input'),
+    chatContainer: document.getElementById('chat-container'),
+    splitResizer: document.getElementById('split-resizer'),
+    btnSendChat: document.getElementById('btn-send-chat'),
+    tabLocal: document.getElementById('tab-local'),
+    tabRemote: document.getElementById('tab-remote'),
+    viewRemote: document.getElementById('view-remote'),
+    remoteHostsContainer: document.getElementById('remote-hosts-container'),
+    remoteEmptyState: document.getElementById('remote-empty-state'),
+    remoteErrorState: document.getElementById('remote-error-state'),
+    btnRefreshRemote: document.getElementById('btn-refresh-remote'),
+    remoteSearchInput: document.getElementById('remote-search-input')
+  };
+
+  function normalizeSSEtoView(sseSession) {
+    if (!sseSession) return null;
+    return {
+      id: sseSession.ID,
+      workspacePath: sseSession.WorkspacePath,
+      status: sseSession.Status,
+      daemonPort: sseSession.DaemonPort,
+      labels: sseSession.Labels || {}
+    };
+  }
+
+  // Setup EventSource
+  let evtSource = null;
+  let sseReconnectTimeout = null;
+
+  function clearSSEReconnectTimer() {
+    if (sseReconnectTimeout) {
+      clearTimeout(sseReconnectTimeout);
+      sseReconnectTimeout = null;
+    }
+  }
+
+  function setSSEIndicator(mode, detail) {
+    if (!DOM.sseIndicator) return;
+    if (mode === 'connected') {
+      DOM.sseIndicator.textContent = '● STREAM_ACTIVE';
+      DOM.sseIndicator.className =
+        'pulse-indicator online';
+      return;
+    }
+    if (mode === 'reconnecting') {
+      DOM.sseIndicator.textContent = `● RECONNECTING${detail ? ` (${detail})` : '...'}`;
+      DOM.sseIndicator.className = 'pulse-indicator';
+      return;
+    }
+    DOM.sseIndicator.textContent = `● DISCONNECTED${detail ? ` (${detail})` : ''}`;
+    DOM.sseIndicator.className = 'pulse-indicator';
+  }
+
+  function scheduleSSEReconnect(delayMs, reason) {
+    if (evtSource || sseReconnectTimeout) return;
+    setSSEIndicator('reconnecting', reason || 'retrying');
+    sseReconnectTimeout = setTimeout(() => {
+      sseReconnectTimeout = null;
+      connectSSE();
+    }, delayMs);
+  }
+
+  function connectSSE() {
+    if (evtSource) return;
+    clearSSEReconnectTimer();
+
+    evtSource = new EventSource('/api/events');
+
+    evtSource.onopen = () => {
+      clearSSEReconnectTimer();
+      setSSEIndicator('connected');
+    };
+
+    evtSource.onerror = () => {
+      if (!evtSource) return;
+
+      if (evtSource.readyState === EventSource.CONNECTING) {
+        setSSEIndicator('reconnecting', 'auto');
+        return;
+      }
+
+      const fatal = evtSource.readyState === EventSource.CLOSED;
+      setSSEIndicator(fatal ? 'disconnected' : 'reconnecting', fatal ? 'closed' : 'retrying');
+      evtSource.close();
+      evtSource = null;
+      scheduleSSEReconnect(2000, fatal ?
+        'closed' : 'retrying');
+    };
+
+    const handleSessionEvent = (e) => {
+      try {
+        const envelope = JSON.parse(e.data);
+        if (!envelope.payload || !envelope.payload.Session) return;
+        const norm = normalizeSSEtoView(envelope.payload.Session);
+
+        // Preserve health if it exists
+        if (state.sessions.has(norm.id)) {
+          const existing = state.sessions.get(norm.id);
+          norm.health = existing.health;
+        }
+
+        state.sessions.set(norm.id, norm);
+        render();
+      } catch (err) {
+        console.error('Failed parsing SSE', err);
+      }
+    };
+
+    evtSource.addEventListener('session.created', handleSessionEvent);
+    evtSource.addEventListener('session.stopped', handleSessionEvent);
+    evtSource.addEventListener('session.attached', handleSessionEvent);
+    evtSource.addEventListener('session.detached', handleSessionEvent);
+
+    evtSource.addEventListener('session.health', (e) => {
+      try {
+        const envelope = JSON.parse(e.data);
+        if (!envelope.payload || !envelope.payload.Session) return;
+        const norm = normalizeSSEtoView(envelope.payload.Session);
+
+        // Apply health
+        if (envelope.payload.Current) {
+          norm.health = envelope.payload.Current;
+        }
+
+        state.sessions.set(norm.id, norm);
+        render();
+      } catch (err) {
+        console.error('Failed parsing health SSE', err);
+      }
+    });
+
+  }
+
+  async function loadInitial() {
+    try {
+      const res = await fetch('/api/sessions');
+      if (!res.ok) throw new Error(`HTTP error!
+        status: ${res.status}`);
+      const data = await res.json();
+      state.sessions.clear();
+      (data || []).forEach(s => state.sessions.set(s.id, s));
+      render();
+      connectSSE();
+    } catch (e) {
+      console.error('Failed to load initial sessions', e);
+      setSSEIndicator('disconnected', 'bootstrap failed');
+      setTimeout(loadInitial, 5000);
+    }
+  }
+
+  function render() {
+    const filterText = state.filter.toLowerCase();
+    let total = 0;
+    let online = 0;
+
+    DOM.tbody.innerHTML = '';
+
+    const sorted = Array.from(state.sessions.values()).sort((a, b) => {
+      let valA = a[state.sortCol] || '';
+      let valB = b[state.sortCol] || '';
+      if (state.sortCol === 'label') {
+        valA = a.labels?.label || a.labels?.name || '';
+        valB = b.labels?.label || b.labels?.name || '';
+      }
+      const res = String(valA).localeCompare(String(valB));
+      return state.sortDesc ? -res : res;
+    });
+
+    let visibleCount = 0;
+
+    sorted.forEach(s => {
+      total++;
+      if (s.status === 'active' || s.status === 'idle') online++;
+
+      const lbl = (s.labels && (s.labels.label || s.labels.name)) || '-';
+      const searchable = `${s.id} ${lbl} ${s.workspacePath}`.toLowerCase();
+      if (filterText && !searchable.includes(filterText)) return;
+
+      visibleCount++;
+      const tr = document.createElement('tr');
+
+      let statusClass = 'error';
+      if (s.status === 'active') statusClass = 'active';
+      if (s.status === 'idle') statusClass = 'idle';
+      if (s.status === 'stopped') statusClass = 'stopped';
+
+      tr.innerHTML = `
+        ${s.status ? s.status.toUpperCase() : 'UNKNOWN'}
+        ${s.id}
+        ${lbl}
+        ${s.workspacePath}
+
+
+        ${(s.status === 'active' || s.status === 'idle')
+          ? ``
+          : ``
+        }
+
+      `;
+      DOM.tbody.appendChild(tr);
+    });
+
+    DOM.statTotal.textContent = total;
+    DOM.statOnline.textContent = online;
+
+    if (visibleCount === 0) {
+      DOM.table.style.display = 'none';
+      DOM.emptyState.style.display = 'block';
+    } else {
+      DOM.table.style.display = 'table';
+      DOM.emptyState.style.display = 'none';
+    }
+  }
+
+  window.appAction = async (action, id) => {
+    try {
+      if (action === 'attach') {
+        attachTerminal(id);
+        return;
+      }
+
+      const method = action === 'delete' ? 'DELETE' : 'POST';
+      const url = action === 'delete' ? `/api/sessions/${id}` : `/api/sessions/${id}/${action}`;
+
+      const res = await fetch(url, { method });
+      if (!res.ok) {
+        const err = await res.json().catch(() => ({error: 'Unknown error'}));
+        alert(`Action failed: ${err.error || res.statusText}`);
+      } else {
+        if (action === 'delete') {
+          state.sessions.delete(id);
+          render();
+        }
+      }
+    } catch (e) {
+      console.error(e);
+      alert(`Network error during action: ${action}`);
+    }
+  };
+
+  DOM.searchInput.addEventListener('input', (e) => {
+    state.filter = e.target.value;
+    render();
+  });
+
+  document.querySelectorAll('.cyber-table th').forEach(th => {
+    if (th.textContent.includes('ACTIONS')) return;
+    th.style.cursor = 'pointer';
+    th.title = 'Click to sort';
+    th.addEventListener('click', () => {
+      let col = th.textContent.trim().toLowerCase();
+      if (col === 'workspace') col = 'workspacePath'; // header reads WORKSPACE but the session field is workspacePath
+      if (state.sortCol === col) {
+        state.sortDesc = !state.sortDesc;
+      } else {
+        state.sortCol = col;
+        state.sortDesc = false;
+      }
+      render();
+    });
+  });
+
+  // Modal handlers
+  DOM.btnCreate.addEventListener('click', () => {
+    DOM.modal.style.display = 'flex';
+    DOM.inputWorkspace.focus();
+  });
+
+  DOM.btnCloseModal.addEventListener('click', () => {
+    DOM.modal.style.display = 'none';
+  });
+
+  DOM.formCreate.addEventListener('submit', async (e) => {
+    e.preventDefault();
+    const payload = {
+      workspacePath: DOM.inputWorkspace.value,
+      label: DOM.inputLabel.value || undefined
+    };
+
+    try {
+      const res = await fetch('/api/sessions', {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' },
+        body: JSON.stringify(payload)
+      });
+      if (!res.ok) {
+        const err = await res.json().catch(() => ({error: 'Unknown error'}));
+        alert(`Create failed: ${err.error || res.statusText}`);
+        return;
+      }
+      DOM.modal.style.display = 'none';
+      DOM.formCreate.reset();
+
+      // Usually handled by SSE, but if we get a response, we can fetch all or append.
+      // fetch is safer to get the normalized view format.
+      loadInitial();
+    } catch (err) {
+      alert(`Network error creating session`);
+    }
+  });
+
+
+  // Terminal Logic
+  let term = null;
+  let fitAddon = null;
+  let ws = null;
+  let activeTerminalSessionId = null;
+  let reconnectTimeout = null;
+  let reconnectDelay = 1000;
+
+  async function attachTerminal(sessionId) {
+    activeTerminalSessionId = sessionId;
+    DOM.viewSessions.style.display = 'none';
+    DOM.viewTerminal.style.display = 'flex';
+    DOM.terminalSessionId.textContent = sessionId;
+    if (DOM.terminalConnectionStatus) DOM.terminalConnectionStatus.textContent = 'Loading history...';
+
+    if (!term) {
+      term = new Terminal({
+        theme: {
+          background: '#000000',
+          foreground: '#e0e0e0',
+          cursor: '#00ff41'
+        },
+        fontFamily: "'JetBrains Mono', monospace",
+        fontSize: 14
+      });
+      fitAddon = new FitAddon.FitAddon();
+      term.loadAddon(fitAddon);
+      term.open(DOM.terminalContainer);
+
+      term.onData(data => {
+        if (ws && ws.readyState === WebSocket.OPEN) {
+          ws.send(new TextEncoder().encode(data));
+        }
+      });
+
+      term.onResize(size => {
+        if (ws && ws.readyState === WebSocket.OPEN) {
+          ws.send(JSON.stringify({ type: 'resize', cols: size.cols, rows: size.rows }));
+        }
+      });
+
+      window.addEventListener('resize', () => {
+        if (DOM.viewTerminal.style.display === 'flex') {
+          fitAddon.fit();
+        }
+      });
+    }
+
+    term.clear();
+    term.writeln(`\x1b[36m> Loading history...\x1b[0m`);
+    await hydrateTerminalScrollback(sessionId);
+    loadChatHistory(sessionId);
+
+    reconnectDelay = 1000;
+    setTimeout(() => {
+      fitAddon.fit();
+      connectTerminalWS(sessionId);
+    }, 50);
+  }
+
+  function connectTerminalWS(sessionId) {
+    if (activeTerminalSessionId !== sessionId) return;
+
+    if (ws) {
+      ws.close();
+      ws = null;
+    }
+    if (reconnectTimeout) {
+      clearTimeout(reconnectTimeout);
+      reconnectTimeout = null;
+    }
+
+    const protocol = location.protocol === 'https:' ? 'wss:' : 'ws:';
+    ws = new WebSocket(`${protocol}//${location.host}/ws/terminal/${sessionId}`);
+    ws.binaryType = 'arraybuffer';
+
+    ws.onopen = () => {
+      term.writeln(`\x1b[32m> Connected\x1b[0m`);
+      if (DOM.terminalConnectionStatus) DOM.terminalConnectionStatus.textContent = 'Connected';
+      reconnectDelay = 1000; // Reset backoff on success
+      if (term.cols && term.rows) {
+        ws.send(JSON.stringify({ type: 'resize', cols: term.cols, rows: term.rows }));
+      }
+    };
+
+    ws.onmessage = (evt) => {
+      if (evt.data instanceof ArrayBuffer) {
+        term.write(new Uint8Array(evt.data));
+      } else if (evt.data instanceof Blob) {
+        const reader = new FileReader();
+        reader.onload = () => {
+          term.write(new Uint8Array(reader.result));
+        };
+        reader.readAsArrayBuffer(evt.data);
+      } else {
+        term.write(evt.data);
+      }
+    };
+
+    ws.onclose = (e) => {
+      if (activeTerminalSessionId !== sessionId) return;
+      term.writeln(`\r\n\x1b[33m> Disconnected (code: ${e.code}). Reconnecting in ${reconnectDelay}ms...\x1b[0m`);
+      if (DOM.terminalConnectionStatus) DOM.terminalConnectionStatus.textContent = `Reconnecting in ${reconnectDelay}ms...`;
+
+      if (reconnectTimeout) {
+        clearTimeout(reconnectTimeout);
+        reconnectTimeout = null;
+      }
+
+      reconnectTimeout = setTimeout(() => {
+        connectTerminalWS(sessionId);
+      }, reconnectDelay);
+
+      // Exponential backoff capped at 30s
+      reconnectDelay = Math.min(reconnectDelay * 2, 30000);
+    };
+
+    ws.onerror = (e) => {
+      term.writeln(`\r\n\x1b[31m> WebSocket error.\x1b[0m`);
+      if (DOM.terminalConnectionStatus) DOM.terminalConnectionStatus.textContent = 'WebSocket error';
+    };
+  }
+
+  async function hydrateTerminalScrollback(sessionId) {
+    try {
+      const res = await fetch(`/api/sessions/${sessionId}/scrollback?type=terminal_output&limit=1000`);
+      if (!res.ok) {
+        term.writeln(`\x1b[33m> History unavailable (${res.status})\x1b[0m`);
+        return;
+      }
+      const entries = await res.json();
+      if (!Array.isArray(entries) || entries.length === 0) {
+        return;
+      }
+
+      for (const entry of entries) {
+        if (!entry || entry.type !== 'terminal_output') continue;
+        const content = typeof entry.content === 'string' ? atob(entry.content) : '';
+        if (!content) continue;
+        term.write(content);
+      }
+    } catch (error) {
+      term.writeln(`\x1b[33m> Failed to load history\x1b[0m`);
+    }
+  }
+
+  function detachTerminal() {
+    activeTerminalSessionId = null;
+    if (reconnectTimeout) {
+      clearTimeout(reconnectTimeout);
+      reconnectTimeout = null;
+    }
+    if (ws) {
+      ws.close();
+      ws = null;
+    }
+    DOM.viewTerminal.style.display = 'none';
+    DOM.viewSessions.style.display = 'block';
+    DOM.terminalSessionId.textContent = '';
+    if (DOM.terminalConnectionStatus) DOM.terminalConnectionStatus.textContent = '';
+  }
+
+  DOM.btnDetachTerminal.addEventListener('click', detachTerminal);
+
+  // Bootstrap
+  loadInitial();
+
+  const remoteState = {
+    hosts: [],
+    filter: '',
+    isLoading: false,
+    error: null,
+    tunnels: [],
+    expandedProjects: new Set()
+  };
+
+  let remoteRefreshTimer = null;
+
+  function startRemoteRefreshTimer() {
+    stopRemoteRefreshTimer();
+    remoteRefreshTimer = setInterval(() => {
+      fetchRemoteHosts(true);
+    }, 30000);
+  }
+
+  function stopRemoteRefreshTimer() {
+    if (remoteRefreshTimer) {
+      clearInterval(remoteRefreshTimer);
+      remoteRefreshTimer = null;
+    }
+  }
+
+  window.switchTab = function(tabName) {
+    if (tabName === 'local') {
+      if(DOM.tabLocal) DOM.tabLocal.classList.add('tab-active');
+      if(DOM.tabRemote) DOM.tabRemote.classList.remove('tab-active');
+      if(DOM.viewSessions) DOM.viewSessions.style.display = 'block';
+      if(DOM.viewRemote) DOM.viewRemote.style.display = 'none';
+      if(DOM.viewTerminal) DOM.viewTerminal.style.display = 'none';
+      stopRemoteRefreshTimer();
+    } else if (tabName === 'remote') {
+      if(DOM.tabRemote) DOM.tabRemote.classList.add('tab-active');
+      if(DOM.tabLocal) DOM.tabLocal.classList.remove('tab-active');
+      if(DOM.viewRemote) DOM.viewRemote.style.display = 'block';
+      if(DOM.viewSessions) DOM.viewSessions.style.display = 'none';
+      if(DOM.viewTerminal) DOM.viewTerminal.style.display = 'none';
+
+      if (remoteState.hosts.length === 0 && !remoteState.isLoading) {
+        fetchRemoteHosts();
+      }
+      startRemoteRefreshTimer();
+    }
+  };
+
+  if(DOM.tabLocal) DOM.tabLocal.addEventListener('click', () => switchTab('local'));
+  if(DOM.tabRemote) DOM.tabRemote.addEventListener('click', () => switchTab('remote'));
+
+  async function fetchRemoteHosts(isSilentRefresh = false) {
+    if (remoteState.isLoading) return;
+    remoteState.isLoading = true;
+
+    if (!isSilentRefresh && DOM.remoteEmptyState) {
+      DOM.remoteEmptyState.style.display = 'block';
+      if(DOM.remoteHostsContainer) DOM.remoteHostsContainer.innerHTML = '';
+      if(DOM.remoteErrorState) DOM.remoteErrorState.style.display = 'none';
+    }
+
+    try {
+      const qs = isSilentRefresh ? '?refresh=true' : '';
+      const [res, tunnelsRes] = await Promise.all([
+        fetch('/api/remote/hosts' + qs),
+        fetch('/api/remote/tunnels')
+      ]);
+
+      if (!res.ok) throw new Error(`HTTP error! status: ${res.status}`);
+      const data = await res.json();
+      // The API returns { hosts: [...], warnings: [...] } so we handle both cases
+      remoteState.hosts = data.hosts || data || [];
+      remoteState.warnings = data.warnings || [];
+      remoteState.error = null;
+
+      if (tunnelsRes.ok) {
+        remoteState.tunnels = await tunnelsRes.json();
+        remoteState.serves = {};
+        await Promise.all(remoteState.tunnels.filter(t => t.state === 'active').map(async (t) => {
+          try {
+            const sr = await fetch(`/api/remote/serve/${t.hostAlias}`);
+            if (sr.ok) {
+              remoteState.serves[t.hostAlias] = await sr.json();
+            }
+          } catch(e) {}
+        }));
+      } else {
+        remoteState.tunnels = [];
+        remoteState.serves = {};
+      }
+    } catch (e) {
+      console.error('Failed to fetch remote hosts', e);
+      remoteState.error = e.message;
+      if (!isSilentRefresh && DOM.remoteErrorState) {
+        DOM.remoteErrorState.style.display = 'block';
+        if(DOM.remoteEmptyState) DOM.remoteEmptyState.style.display = 'none';
+      }
+    } finally {
+      remoteState.isLoading = false;
+      renderRemoteHosts();
+    }
+  }
+
+  window.toggleProjectExpanded = function(projectId) {
+    if (remoteState.expandedProjects.has(projectId)) {
+      remoteState.expandedProjects.delete(projectId);
+    } else {
+      remoteState.expandedProjects.add(projectId);
+    }
+    const el = document.getElementById(projectId + '-sessions');
+    const toggleEl = document.getElementById(projectId + '-toggle');
+    if (el) {
+      if (remoteState.expandedProjects.has(projectId)) {
+        el.style.display = 'block';
+        if (toggleEl) toggleEl.textContent = '[-]';
+      } else {
+        el.style.display = 'none';
+        if (toggleEl) toggleEl.textContent = '[+]';
+      }
+    }
+  };
+
+
+window.connectHost = async function(hostName) {
+  try {
+    const res = await fetch('/api/remote/tunnel', {
+      method: 'POST',
+      headers: { 'Content-Type': 'application/json' },
+      body: JSON.stringify({ host: hostName, remotePort: 4000 })
+    });
+
+    if (res.status === 401 || res.status === 403 || (!res.ok && (await res.clone().json().catch(()=>({}))).code === 'AUTH_REQUIRED')) {
+      showAuthModal(hostName);
+      return;
+    }
+    if (!res.ok) {
+      const err = await res.json().catch(()=>({message: 'Unknown error'}));
+      if (err.code === 'AUTH_REQUIRED') {
+        showAuthModal(hostName);
+        return;
+      }
+      throw new Error(err.message || 'Failed to connect');
+    }
+
+    startFastRefresh();
+  } catch (e) {
+    alert('Connect failed: ' + e.message);
+  }
+};
+
+window.disconnectHost = async function(tunnelId) {
+  try {
+    const res = await fetch(`/api/remote/tunnel/${tunnelId}`, { method: 'DELETE' });
+    if (!res.ok) throw new Error('Failed to disconnect');
+    fetchRemoteHosts(true);
+  } catch (e) {
+    alert('Disconnect failed: ' + e.message);
+  }
+};
+
+window.startServe = async function(hostName, projectDir) {
+  try {
+    const res = await fetch('/api/remote/serve', {
+      method: 'POST',
+      headers: { 'Content-Type': 'application/json' },
+      body: JSON.stringify({ host: hostName, projectDir: projectDir })
+    });
+    if (!res.ok) {
+      const err = await res.json().catch(()=>({message: 'Unknown error'}));
+      throw new Error(err.message || 'Failed to start serve');
+    }
+    fetchRemoteHosts(true);
+  } catch (e) {
+    alert('Failed to start serve: ' + e.message);
+  }
+};
+
+window.openOpencode = function(hostName, projectDir) {
+  const encProject = encodeURIComponent(projectDir);
+  window.open(`/remote/${hostName}/${encProject}/`, '_blank');
+};
+
+let fastRefreshTimer = null;
+function startFastRefresh() {
+  if (fastRefreshTimer) clearInterval(fastRefreshTimer);
+  fetchRemoteHosts(true);
+  fastRefreshTimer = setInterval(() => {
+    const isConnecting = (remoteState.tunnels || []).some(t => t.state === 'connecting' || t.state === 'pending');
+    if (!isConnecting) {
+      clearInterval(fastRefreshTimer);
+      fastRefreshTimer = null;
+    }
+    fetchRemoteHosts(true);
+  }, 5000);
+}
+
+let currentAuthHost = '';
+
+window.showAuthModal = async function(hostName) {
+  currentAuthHost = hostName;
+  if(DOM.authHostLabel) DOM.authHostLabel.innerText = `> PASSWORD FOR ${hostName}`;
+  if(DOM.inputPassword) DOM.inputPassword.value = '';
+  if(DOM.authModalOverlay) DOM.authModalOverlay.style.display = 'flex';
+  if(DOM.inputPassword) DOM.inputPassword.focus();
+
+  if(DOM.authAgentStatus) {
+    DOM.authAgentStatus.innerText = 'Checking SSH agent...';
+    try {
+      const res = await fetch('/api/remote/auth/agent');
+      if (res.ok) {
+        const data = await res.json();
+        if (data.available) {
+          DOM.authAgentStatus.innerText = `SSH Agent active (Socket: ${data.socketPath})`;
+        } else {
+          DOM.authAgentStatus.innerText = 'SSH Agent not available or no identities.';
+        }
+      } else {
+        DOM.authAgentStatus.innerText = 'SSH Agent status unknown.';
+      }
+    } catch(e) {
+      DOM.authAgentStatus.innerText = 'Error checking SSH agent.';
+    }
+  }
+};
+
+if(DOM.btnCloseAuthModal) {
+  DOM.btnCloseAuthModal.addEventListener('click', () => {
+    if(DOM.authModalOverlay) DOM.authModalOverlay.style.display = 'none';
+  });
+}
+
+if(DOM.authForm) {
+  DOM.authForm.addEventListener('submit', async (e) => {
+    e.preventDefault();
+    const pwd = DOM.inputPassword.value;
+    const btn = document.getElementById('btn-auth-submit');
+    if(btn) btn.disabled = true;
+
+    try {
+      const
+        res = await fetch('/api/remote/auth', {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' },
+        body: JSON.stringify({ host: currentAuthHost, password: pwd })
+      });
+
+      if (!res.ok) {
+        const err = await res.json().catch(()=>({message: 'Auth failed'}));
+        throw new Error(err.message || 'Auth failed');
+      }
+
+      if(DOM.authModalOverlay) DOM.authModalOverlay.style.display = 'none';
+
+      window.connectHost(currentAuthHost);
+
+    } catch (err) {
+      alert(err.message);
+    } finally {
+      if(btn) btn.disabled = false;
+      if(DOM.inputPassword) DOM.inputPassword.value = '';
+    }
+  });
+}
+
+
+window.renderTunnelStatus = function(tunnel) {
+  if (!tunnel) return 'TUN_NONE';
+  switch (tunnel.state) {
+    case 'active': return 'TUN_CONN';
+    case 'connecting': return 'TUN_WAIT';
+    case 'pending': return 'TUN_WAIT';
+    case 'failed': return `TUN_ERR`;
+    case 'closed': return 'TUN_NONE';
+    default: return 'TUN_NONE';
+  }
+};
+
+window.renderTunnelActions = function(tunnel, host) {
+  if (!tunnel || tunnel.state === 'closed' || tunnel.state === 'failed') {
+    let btn = ``;
+    if (host.status === 'auth_required') {
+      btn = `
+        `;
+    }
+    return btn;
+  }
+
+  if (tunnel.state === 'connecting' || tunnel.state === 'pending') {
+    return `
+      `;
+  }
+
+  if (tunnel.state === 'active') {
+    return ``;
+  }
+
+  return '';
+};
+
+  function renderRemoteHosts() {
+    if (!DOM.remoteHostsContainer) return;
+
+    const filterText = remoteState.filter.toLowerCase();
+
+    const visibleHosts = remoteState.hosts.filter(h => {
+      if (!filterText) return true;
+      const searchable = `${h.name} ${h.address} ${h.user} ${h.label || ""}`.toLowerCase();
+      if (searchable.includes(filterText)) return true;
+
+      if (h.projects) {
+        for (const p of h.projects) {
+          if (p.name.toLowerCase().includes(filterText)) return true;
+        }
+      }
+      return false;
+    });
+
+    if (remoteState.hosts.length === 0 && !remoteState.error) {
+      DOM.remoteHostsContainer.innerHTML = '
> NO_REMOTE_HOSTS_FOUND
'; + if(DOM.remoteEmptyState) DOM.remoteEmptyState.style.display = 'none'; + return; + } + + if (visibleHosts.length === 0 && remoteState.hosts.length > 0 && !remoteState.error) { + DOM.remoteHostsContainer.innerHTML = '
> NO_MATCHING_HOSTS
'; + if(DOM.remoteEmptyState) DOM.remoteEmptyState.style.display = 'none'; + return; + } + + if (remoteState.hosts.length > 0 && !remoteState.error) { + if(DOM.remoteEmptyState) DOM.remoteEmptyState.style.display = 'none'; + if(DOM.remoteErrorState) DOM.remoteErrorState.style.display = 'none'; + + let html = ''; + + if (remoteState.warnings && remoteState.warnings.length > 0) { + html += '
'; + remoteState.warnings.forEach(w => { + html += `
> WARNING: ${w}
`; + }); + html += '
'; + } + + visibleHosts.forEach((host, hIdx) => { + let statusClass = host.status === 'online' ? 'active' : 'error'; + let statusText = host.status.toUpperCase(); + if (host.status === 'auth_required') { + statusClass = 'warning'; + statusText = 'AUTH_REQ'; + } + + const tunnel = (remoteState.tunnels || []).find(t => t.hostAlias === host.name || t.remoteHost === host.address || t.remoteHost === host.name); + + let projectsHtml = ''; + if (host.projects && host.projects.length > 0) { + projectsHtml = ` +
+ ${host.projects.map((p, idx) => { + const sessionCount = p.sessions ? p.sessions.length : 0; + const projectId = ('project-' + host.name + '-' + idx).replace(/\W/g, '-'); + const isExpanded = remoteState.expandedProjects.has(projectId); + + let sessionsHtml = ''; + if (sessionCount > 0) { + sessionsHtml = ` +
+ ${p.sessions.map(s => { + const sStatus = (s.status || 'unknown').toLowerCase(); + let sStatusClass = 'error'; + if (sStatus === 'active') sStatusClass = 'active'; + else if (sStatus === 'idle') sStatusClass = 'idle'; + else if (sStatus === 'stopped') sStatusClass = 'stopped'; + + return ` +
+
+ ${s.id} + ${s.directory ? `${s.directory}` : ''} +
+ ${sStatus.toUpperCase()} +
+ `; + }).join('')} +
+ `; + } + + return ` +
+
0 ? `onclick="toggleProjectExpanded('${projectId}')"` : ''}> + ${p.name} + + ${sessionCount} session(s) + ${tunnel && tunnel.state === 'active' ? `` : ''} + ${sessionCount > 0 ? `${isExpanded ? '[-]' : '[+]'}` : ''} + +
+ ${sessionsHtml} +
+ `; + }).join('')} +
+ `; + } + + + let actionBtn = window.renderTunnelActions(tunnel, host); + let openCodeBtn = ''; + let tunnelStatusBadge = window.renderTunnelStatus(tunnel); + + const serveStatus = (remoteState.serves || {})[host.name] || (remoteState.serves || {})[host.address]; + + if (tunnel && tunnel.state === 'active') { + if (serveStatus && serveStatus.state === 'running') { + openCodeBtn = `SERVE_RUNNING`; + } else if (serveStatus && serveStatus.state === 'starting') { + openCodeBtn = `SERVE_STARTING`; + } else { + openCodeBtn = ``; + } + } + +html += ` +
+
+
+ ${host.label || host.name} + ${host.user}@${host.address} +
+
+ ${statusText} + ${tunnelStatusBadge} +
+
+
+
+ OpenCode + ${host.opencode_version || 'Not detected'} +
+
+ Latency + ${host.latency_ms > 0 ? host.latency_ms + 'ms' : '-'} +
+
+ SESSIONS + ${host.sessionCount || 0} +
+ ${tunnel && tunnel.error ? `
> ${tunnel.error}
` : ''} +
+
+ ${actionBtn} + ${openCodeBtn} +
+ ${projectsHtml} +
+ `; + }); + DOM.remoteHostsContainer.innerHTML = html; + } + } + + if (DOM.remoteSearchInput) { + DOM.remoteSearchInput.addEventListener('input', (e) => { + remoteState.filter = e.target.value; + renderRemoteHosts(); + }); + } + + if (DOM.btnRefreshRemote) { + DOM.btnRefreshRemote.addEventListener('click', () => { + fetchRemoteHosts(true); + }); + } + + async function loadChatHistory(sessionId) { + if(DOM.chatHistory) DOM.chatHistory.innerHTML = ''; + try { + const res = await fetch(`/api/sessions/${sessionId}/chat`); + if (!res.ok) return; + const messages = await res.json(); + if (!messages || !Array.isArray(messages)) return; + + messages.forEach(msg => { + let content = ''; + if (msg.parts && Array.isArray(msg.parts)) { + content = msg.parts.map(p => p.text || '').join(''); + } else if (msg.content) { + content = msg.content; + } + + if (msg.role === 'user') { + appendChatMessage('user', content); + } else if (msg.role === 'assistant') { + appendChatMessage('assistant', content); + } + }); + } catch (err) { + console.error('Failed to load chat history', err); + } + } + + function appendChatMessage(role, content) { + const div = document.createElement('div'); + div.className = `chat-message ${role}`; + + if (role === 'assistant') { + try { + div.innerHTML = marked.parse(processDiffs(content || '')); + } catch (e) { + div.textContent = content; + } + } else { + div.textContent = content; + } + + if(DOM.chatHistory) DOM.chatHistory.appendChild(div); + if(DOM.chatHistory) DOM.chatHistory.scrollTop = DOM.chatHistory.scrollHeight; + return div; + } + + if(DOM.chatForm) DOM.chatForm.addEventListener('submit', async (e) => { + e.preventDefault(); + if (!activeTerminalSessionId) return; + + const prompt = DOM.chatInput.value.trim(); + if (!prompt) return; + + DOM.chatInput.value = ''; + DOM.btnSendChat.disabled = true; + + appendChatMessage('user', prompt); + const assistantMsgDiv = appendChatMessage('assistant', '...'); + let currentResponse = ''; + + try { + const 
res = await fetch(`/api/sessions/${activeTerminalSessionId}/chat`, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify({ prompt }) + }); + + if (!res.ok) { + throw new Error(`HTTP error! status: ${res.status}`); + } + + const reader = res.body.getReader(); + const decoder = new TextDecoder(); + let buffer = ''; + + while (true) { + const { value, done } = await reader.read(); + if (done) break; + + buffer += decoder.decode(value, { stream: true }); + const lines = buffer.split('\n'); + buffer = lines.pop(); // keep the incomplete line in buffer + + for (const line of lines) { + if (line.startsWith('data: ')) { + try { + const chunk = JSON.parse(line.slice(6)); + + if (chunk.type === 'message.part.delta' || chunk.type === 'message.final') { + if (chunk.delta) { + currentResponse += chunk.delta; + if (currentResponse === '...') currentResponse = chunk.delta; // clear initial dot + try { + assistantMsgDiv.innerHTML = marked.parse(processDiffs(currentResponse)); + } catch (err) { + assistantMsgDiv.textContent = currentResponse; + } + DOM.chatHistory.scrollTop = DOM.chatHistory.scrollHeight; + } + } else if (chunk.type === 'tool_call' || chunk.type === 'agent.tool_call') { + // Render tool call + const toolCall = document.createElement('div'); + toolCall.className = 'chat-tool-call'; + toolCall.innerHTML = ` +
+ > TOOL: ${chunk.payload?.name || 'unknown'} + [+] +
+
${JSON.stringify(chunk.payload?.input || {}, null, 2)}
+ `; + toolCall.querySelector('.chat-tool-header').addEventListener('click', () => { + toolCall.classList.toggle('expanded'); + }); + assistantMsgDiv.appendChild(toolCall); + DOM.chatHistory.scrollTop = DOM.chatHistory.scrollHeight; + } + } catch (err) { + console.error('Failed to parse SSE line', line, err); + } + } + } + } + } catch (err) { + console.error('Chat error', err); + appendChatMessage('system', `Error: ${err.message}`); + } finally { + DOM.btnSendChat.disabled = false; + DOM.chatInput.focus(); + } + }); + + + // Split pane resizer logic + if(DOM.splitResizer) { + let isResizing = false; + DOM.splitResizer.addEventListener('mousedown', (e) => { + isResizing = true; + DOM.splitResizer.classList.add('resizing'); + document.body.style.cursor = 'col-resize'; + }); + document.addEventListener('mousemove', (e) => { + if (!isResizing) return; + const splitView = DOM.terminalContainer.parentElement.getBoundingClientRect(); + const newWidth = e.clientX - splitView.left; + const percentage = (newWidth / splitView.width) * 100; + if (percentage > 10 && percentage < 90) { + DOM.terminalContainer.style.flex = '0 0 ' + percentage + '%'; + if (DOM.chatContainer) { + DOM.chatContainer.style.flex = '1 1 0%'; + } + if (fitAddon) { + fitAddon.fit(); + } + } + }); + document.addEventListener('mouseup', () => { + if (isResizing) { + isResizing = false; + DOM.splitResizer.classList.remove('resizing'); + document.body.style.cursor = ''; + } + }); + } + + if(DOM.chatInput) DOM.chatInput.addEventListener('keydown', (e) => { + if (e.key === 'Enter' && (e.ctrlKey || e.metaKey)) { + e.preventDefault(); + DOM.chatForm.dispatchEvent(new Event('submit')); + } + }); + +}); diff --git a/web/index.html b/web/index.html new file mode 100644 index 0000000..bcfdd49 --- /dev/null +++ b/web/index.html @@ -0,0 +1,150 @@ + + + + + + OpenCode Fleet Command + + + + + + + +
+
+ +
+
+

[SYS.OP] OpenCode Fleet Command

+ ● OFFLINE +
+
+
+ ONLINE + 0 +
+
+ TOTAL + 0 +
+ +
+
+ + + +
+
+ +
+ +
+ + + + + + + + + + + + + +
STATUSIDLABELWORKSPACEACTIONS
+ +
+
+ +
+
+ + +
+ +
+
+ > SCANNING_REMOTE_FLEET...
+ Awaiting telemetry... +
+
+ +
+
+

> SESSION:

+ Loading history... +
+ +
+
+
+
+
+
+
+
+ + +
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/web/styles.css b/web/styles.css
new file mode 100644
index 0000000..fe6dbd8
--- /dev/null
+++ b/web/styles.css
@@ -0,0 +1,605 @@
+:root {
+  --bg-base: #050505;
+  --bg-surface: #0a0a0c;
+  --bg-panel: #121214;
+  --fg-base: #e0e0e0;
+  --fg-muted: #888888;
+  --accent-primary: #00ff41;
+  --accent-secondary: #00f0ff;
+  --accent-danger: #ff003c;
+  --accent-warning: #ffb000;
+  --font-display: 'Orbitron', sans-serif;
+  --font-mono: 'JetBrains Mono', monospace;
+  --border-color: #333333;
+}
+
+* { box-sizing: border-box; margin: 0; padding: 0; }
+
+body {
+  background-color: var(--bg-base);
+  color: var(--fg-base);
+  font-family: var(--font-mono);
+  font-size: 14px;
+  line-height: 1.5;
+  min-height: 100vh;
+  position: relative;
+  overflow-x: hidden;
+}
+
+.noise-overlay {
+  position: fixed; top: 0; left: 0; right: 0; bottom: 0;
+  pointer-events: none; z-index: 50;
+  background-image: url("data:image/svg+xml,%3Csvg viewBox='0 0 200 200' xmlns='http://www.w3.org/2000/svg'%3E%3Cfilter id='noiseFilter'%3E%3CfeTurbulence type='fractalNoise' baseFrequency='0.65' numOctaves='3' stitchTiles='stitch'/%3E%3C/filter%3E%3Crect width='100%25' height='100%25' filter='url(%23noiseFilter)' opacity='0.05'/%3E%3C/svg%3E");
+}
+
+.grid-overlay {
+  position: fixed; top: 0; left: 0; right: 0; bottom: 0;
+  pointer-events: none; z-index: 0;
+  background-image: linear-gradient(var(--border-color) 1px, transparent 1px),
+    linear-gradient(90deg, var(--border-color) 1px, transparent 1px);
+  background-size: 40px 40px;
+  opacity: 0.15;
+}
+
+.cmd-header {
+  position: relative; z-index: 10;
+  display: flex; justify-content: space-between; align-items: center;
+  padding: 1.5rem 2rem;
+  border-bottom: 1px solid var(--accent-primary);
+  background: rgba(10, 10, 12, 0.8);
+  backdrop-filter: blur(10px);
+}
+
+.cmd-brand h1 {
+  font-family: var(--font-display);
+  font-size: 1.25rem;
+  font-weight: 700;
+  color: var(--accent-primary);
+  text-transform: uppercase;
+  letter-spacing: 2px;
+  margin-bottom: 0.25rem;
+  text-shadow: 0 0 10px rgba(0, 255, 65, 0.3);
+}
+
+.tab-nav {
+  position: relative;
+  z-index: 10;
+  display: flex;
+  gap: 1rem;
+  padding: 1rem 2rem 0;
+  max-width: 1400px;
+  margin: 0 auto;
+}
+
+.cmd-tabs {
+  position: relative;
+  z-index: 10;
+  display: flex;
+  gap: 1rem;
+  padding: 1rem 2rem 0;
+  max-width: 1400px;
+  margin: 0 auto;
+}
+
+.tab-button {
+  background: transparent;
+  color: var(--fg-muted);
+  border: 1px solid var(--border-color);
+  border-bottom: none;
+  padding: 0.75rem 1.5rem;
+  font-family: var(--font-mono);
+  font-weight: bold;
+  cursor: pointer;
+  text-transform: uppercase;
+  transition: all 0.2s;
+}
+
+.tab-button:hover {
+  color: var(--accent-secondary);
+  background: rgba(0, 240, 255, 0.05);
+}
+
+.tab-active {
+  color: var(--accent-primary);
+  border-color: var(--accent-primary);
+  background: rgba(0, 255, 65, 0.05);
+  box-shadow: inset 0 2px 5px rgba(0,255,65,0.1);
+}
+
+.tab-button.active {
+  color: var(--accent-primary);
+  border-color: var(--accent-primary);
+  background: rgba(0, 255, 65, 0.05);
+  box-shadow: inset 0 2px 5px rgba(0,255,65,0.1);
+}
+
+.remote-host-card { position: relative; }
+.tunnel-status { display: inline-flex; }
+.auth-modal { position: fixed; }
+
+.pulse-indicator {
+  display: inline-flex; align-items: center; gap: 6px;
+  font-size: 0.75rem; color: var(--accent-danger); font-weight: bold;
+}
+.pulse-indicator.online { color: var(--accent-primary); }
+.pulse-indicator.online::before {
+  content: ''; display: inline-block; width: 6px; height: 6px;
+  background: var(--accent-primary); border-radius: 50%;
+  box-shadow: 0 0 8px var(--accent-primary);
+  animation: pulse 2s infinite;
+}
+
+@keyframes pulse {
+  0% { opacity: 1; transform: scale(1); }
+  50% { opacity: 0.5; transform: scale(1.5); }
+  100% { opacity: 1; transform: scale(1); }
+}
+
+.cmd-stats { display: flex; gap: 2rem; align-items: center; }
+.stat-box { display: flex; flex-direction: column; align-items: flex-end; }
+.stat-label { font-size: 0.7rem; color: var(--fg-muted); }
+.stat-value { font-size: 1.5rem; font-weight: bold; font-family: var(--font-display); color: var(--fg-base); }
+
+.cyber-button {
+  background: transparent;
+  color: var(--accent-primary);
+  border: 1px solid var(--accent-primary);
+  padding: 0.5rem 1rem;
+  font-family: var(--font-mono);
+  font-weight: bold;
+  cursor: pointer;
+  text-transform: uppercase;
+  position: relative;
+  overflow: hidden;
+  transition: all 0.2s;
+}
+.cyber-button:hover { background: rgba(0, 255, 65, 0.1); box-shadow: 0 0 10px rgba(0,255,65,0.2); }
+.cyber-button.primary { background: var(--accent-primary); color: var(--bg-base); }
+.cyber-button.primary:hover { background: #00cc33; box-shadow: 0 0 15px rgba(0,255,65,0.4); }
+.cyber-button.danger { border-color: var(--accent-danger); color: var(--accent-danger); }
+.cyber-button.danger:hover { background: rgba(255, 0, 60, 0.1); box-shadow: 0 0 10px rgba(255,0,60,0.2); }
+.cyber-button.warning { border-color: var(--accent-warning); color: var(--accent-warning); }
+.cyber-button.warning:hover { background: rgba(255, 176, 0, 0.1); }
+.cyber-button.secondary { border-color: var(--accent-secondary); color: var(--accent-secondary); }
+.cyber-button.secondary:hover { background: rgba(0, 240, 255, 0.1); }
+
+.cmd-main { position: relative; z-index: 10; padding: 2rem; max-width: 1400px; margin: 0 auto; }
+
+.toolbar { margin-bottom: 1.5rem; display: flex; gap: 1rem; }
+.cyber-input {
+  width: 100%; max-width: 400px;
+  background: var(--bg-panel); border: 1px solid var(--border-color);
+  color: var(--accent-secondary); font-family: var(--font-mono);
+  padding: 0.75rem 1rem; outline: none;
+  transition: border-color 0.3s;
+}
+.cyber-input:focus { border-color: var(--accent-secondary); box-shadow: 0 0 8px rgba(0,240,255,0.2); }
+.cyber-input::placeholder { color: var(--fg-muted); }
+
+.table-container {
+  background: rgba(18, 18, 20, 0.8);
+  border: 1px solid var(--border-color);
+  backdrop-filter: blur(5px);
+  overflow-x: auto;
+}
+
+.cyber-table { width: 100%; border-collapse: collapse; text-align: left; }
+.cyber-table th {
+  padding: 1rem; color: var(--fg-muted);
+  border-bottom: 1px solid var(--border-color);
+  font-weight: normal; letter-spacing: 1px;
+}
+.cyber-table td {
+  padding: 1rem; border-bottom: 1px solid rgba(51, 51, 51, 0.5);
+  vertical-align: middle;
+}
+.cyber-table tr:hover td { background: rgba(255, 255, 255, 0.03); }
+
+.status-badge {
+  display: inline-block; padding: 0.25rem 0.5rem;
+  font-size: 0.75rem; font-weight: bold; border-radius: 2px;
+  background: var(--bg-base); border: 1px solid var(--fg-muted); color: var(--fg-muted);
+}
+.status-badge.active { border-color: var(--accent-primary); color: var(--accent-primary); box-shadow: 0 0 5px rgba(0,255,65,0.2); }
+.status-badge.stopped { border-color: var(--accent-warning); color: var(--accent-warning); }
+.status-badge.error { border-color: var(--accent-danger); color: var(--accent-danger); }
+
+.id-col { color: var(--accent-secondary); }
+.workspace-col { color: var(--fg-muted); font-size: 0.85rem; }
+
+.action-group { display: flex; gap: 0.5rem; }
+
+.empty-state { padding: 3rem; text-align: center; color: var(--fg-muted); }
+
+/* Modal */
+.modal-overlay {
+  position: fixed; top: 0; left: 0; right: 0; bottom: 0;
+  background: rgba(0, 0, 0, 0.8); backdrop-filter: blur(5px);
+  z-index: 100; display: flex; justify-content: center; align-items: center;
+}
+.cyber-modal {
+  background: var(--bg-surface); border: 1px solid var(--accent-primary);
+  width: 100%; max-width: 500px; box-shadow: 0 0 20px rgba(0,255,65,0.1);
+}
+.modal-header {
+  padding: 1rem 1.5rem; border-bottom: 1px solid var(--border-color);
+  display: flex; justify-content: space-between; align-items: center;
+}
+.modal-header h2 { font-family: var(--font-display); font-size: 1rem; color: var(--accent-primary); }
+.icon-button {
+  background: none; border: none; color: var(--fg-muted); font-family: var(--font-mono);
+  cursor: pointer; font-size: 1rem;
+}
+.icon-button:hover { color: var(--accent-danger); }
+.modal-body { padding: 1.5rem; }
+.form-group { margin-bottom: 1.5rem; }
+.form-group label { display: block; margin-bottom: 0.5rem; color: var(--fg-muted); font-size: 0.85rem; }
+.form-group .cyber-input { max-width: 100%; }
+.modal-actions { display: flex; justify-content: flex-end; margin-top: 2rem; }
+
+/* Truncate helpers */
+.truncate { max-width: 250px; white-space: nowrap; overflow: hidden; text-overflow: ellipsis; display: inline-block; }
+.status-badge.idle { border-color: var(--accent-secondary); color: var(--accent-secondary); box-shadow: 0 0 5px rgba(0,240,255,0.2); }
+
+/* Tunnel Status UI */
+.tunnel-badge {
+  display: inline-flex;
+  align-items: center;
+  padding: 0.25rem 0.5rem;
+  font-size: 0.75rem;
+  font-weight: bold;
+  border-radius: 2px;
+  background: var(--bg-base);
+  border: 1px solid var(--fg-muted);
+  color: var(--fg-muted);
+  font-family: var(--font-mono);
+  letter-spacing: 0.5px;
+  transition: all 0.2s;
+}
+
+.tunnel-badge.tunnel-none {
+  border-color: var(--fg-muted);
+  color: var(--fg-muted);
+}
+
+.tunnel-badge.tunnel-connecting {
+  border-color: var(--accent-secondary);
+  color: var(--accent-secondary);
+  box-shadow: 0 0 5px rgba(0,240,255,0.2);
+  position: relative;
+}
+
+.tunnel-badge.tunnel-connecting::after {
+  content: '';
+  display: inline-block;
+  width: 1em;
+  text-align: left;
+  animation: tunnelDots 1.5s steps(4, end) infinite;
+}
+
+/* Note: animating `content` in keyframes is a discrete animation and is not
+   supported in every browser; unsupported browsers simply show no dots. */
+@keyframes tunnelDots {
+  0%, 20% { content: ''; }
+  40% { content: '.'; }
+  60% { content: '..'; }
+  80%, 100% { content: '...'; }
+}
+
+.tunnel-badge.tunnel-active {
+  border-color: var(--accent-primary);
+  color: var(--accent-primary);
+  box-shadow: 0 0 5px rgba(0,255,65,0.2);
+}
+
+.tunnel-badge.tunnel-failed {
+  border-color: var(--accent-danger);
+  color: var(--accent-danger);
+  box-shadow: 0 0 5px rgba(255,0,60,0.2);
+}
+
+/* Terminal View */
+.terminal-view {
+  display: flex;
+  flex-direction: column;
+  height: calc(100vh - 120px);
+  padding: 1rem 2rem;
+}
+
+.terminal-header {
+  display: flex;
+  justify-content: space-between;
+  align-items: center;
+  margin-bottom: 1rem;
+  padding-bottom: 1rem;
+  border-bottom: 1px solid var(--border-color);
+}
+
+.terminal-title {
+  font-family: var(--font-display);
+  font-size: 1.25rem;
+  color: var(--fg-muted);
+}
+
+.terminal-title .id-col {
+  color: var(--accent-secondary);
+}
+
+.terminal-container {
+  flex-grow: 1;
+  background: #000;
+  border: 1px solid var(--border-color);
+  padding: 0.5rem;
+  border-radius: 4px;
+  overflow: hidden;
+  box-shadow: inset 0 0 10px rgba(0,0,0,0.8);
+}
+
+/* Chat Panel Styles */
+.split-view {
+  display: flex;
+  flex-direction: row;
+  flex-grow: 1;
+  gap: 1rem;
+  overflow: hidden;
+  min-height: 0;
+}
+
+.terminal-container {
+  flex: 1 1 60%;
+  background: #000;
+  border: 1px solid var(--border-color);
+  padding: 0.5rem;
+  border-radius: 4px;
+  overflow: hidden;
+  box-shadow: inset 0 0 10px rgba(0,0,0,0.8);
+}
+
+.chat-container {
+  flex: 1 1 40%;
+  display: flex;
+  flex-direction: column;
+  border: 1px solid var(--border-color);
+  border-radius: 4px;
+  background: rgba(0, 0, 0, 0.4);
+  overflow: hidden;
+}
+
+.chat-history {
+  flex-grow: 1;
+  overflow-y: auto;
+  padding: 1rem;
+  display: flex;
+  flex-direction: column;
+  gap: 1rem;
+}
+
+.chat-input-wrapper {
+  display: flex;
+  padding: 1rem;
+  border-top: 1px solid var(--border-color);
+  gap: 0.5rem;
+  background: rgba(0, 0, 0, 0.6);
+}
+
+.chat-input-wrapper textarea {
+  flex-grow: 1;
+  resize: none;
+  font-family: var(--font-mono);
+}
+
+.chat-message {
+  padding: 0.75rem;
+  border-radius: 4px;
+  font-family: var(--font-mono);
+  font-size: 0.9rem;
+  line-height: 1.4;
+  border: 1px solid transparent;
+  word-wrap: break-word;
+}
+
+.chat-message.user {
+  border-color: rgba(0, 255, 65, 0.3);
+  background: rgba(0, 255, 65, 0.05);
+  align-self: flex-end;
+  max-width: 90%;
+  border-right: 2px solid var(--accent-primary);
+}
+
+.chat-message.assistant {
+  border-color: rgba(0, 168, 255, 0.3);
+  background: rgba(0, 168, 255, 0.05);
+  align-self: flex-start;
+  max-width: 95%;
+  border-left: 2px solid var(--accent-secondary);
+}
+
+.chat-message.system {
+  border-color: rgba(255, 170, 0, 0.3);
+  background: rgba(255, 170, 0, 0.05);
+  align-self: center;
+  font-style: italic;
+  font-size: 0.8rem;
+  color: var(--accent-warning);
+}
+
+/* Tool call collapsible styling */
+.chat-tool-call {
+  margin-top: 0.5rem;
+  border: 1px solid var(--border-color);
+  background: rgba(0,0,0,0.3);
+  border-radius: 4px;
+  overflow: hidden;
+}
+
+.chat-tool-header {
+  padding: 0.4rem 0.5rem;
+  font-size: 0.8rem;
+  cursor: pointer;
+  background: rgba(0, 168, 255, 0.1);
+  display: flex;
+  justify-content: space-between;
+  align-items: center;
+}
+.chat-tool-header:hover {
+  background: rgba(0, 168, 255, 0.2);
+}
+
+.chat-tool-body {
+  display: none;
+  padding: 0.5rem;
+  font-size: 0.8rem;
+  border-top: 1px solid var(--border-color);
+  background: #050505;
+  white-space: pre-wrap;
+  color: var(--fg-muted);
+}
+.chat-tool-call.expanded .chat-tool-body {
+  display: block;
+}
+
+.chat-message.assistant pre {
+  background: #000;
+  padding: 0.5rem;
+  border: 1px solid var(--border-color);
+  border-radius: 4px;
+  overflow-x: auto;
+  margin: 0.5rem 0;
+}
+.chat-message.assistant code {
+  color: var(--accent-primary);
+}
+
+/* Split Pane Resizer */
+.split-resizer {
+  width: 8px;
+  cursor: col-resize;
+  background-color: var(--bg-panel);
+  border-left: 1px solid var(--border-color);
+  border-right: 1px solid var(--border-color);
+  flex-shrink: 0;
+  transition: background-color 0.2s;
+}
+
+.split-resizer:hover, .split-resizer.resizing {
+  background-color: var(--accent-primary);
+  opacity: 0.5;
+}
+
+.diff-add {
+  color: #a6e22e;
+  background: rgba(166,226,46,0.1);
+  display: inline-block;
+  width: 100%;
+}
+.diff-rm {
+  color: #f92672;
+  background: rgba(249,38,114,0.1);
+  display: inline-block;
+  width: 100%;
+}
+
+/* Remote Hosts Grid */
+.hosts-grid {
+  display: grid;
+  grid-template-columns: repeat(auto-fill, minmax(350px, 1fr));
+  gap: 1.5rem;
+  margin-top: 1rem;
+}
+
+.host-card {
+  background: rgba(18, 18, 20, 0.8);
+  border: 1px solid var(--border-color);
+  border-radius: 4px;
+  padding: 1.25rem;
+  display: flex;
+  flex-direction: column;
+  gap: 1rem;
+  transition: all 0.2s;
+}
+
+.host-card:hover {
+  border-color: var(--accent-secondary);
+  box-shadow: 0 0 15px rgba(0, 240, 255, 0.1);
+  transform: translateY(-2px);
+}
+
+.host-header {
+  display: flex;
+  justify-content: space-between;
+  align-items: flex-start;
+  border-bottom: 1px solid var(--border-color);
+  padding-bottom: 0.75rem;
+}
+
+.host-title {
+  display: flex;
+  flex-direction: column;
+  gap: 0.25rem;
+}
+
+.host-label {
+  font-family: var(--font-display);
+  font-size: 1.1rem;
+  color: var(--accent-secondary);
+  font-weight: bold;
+}
+
+.host-name {
+  font-size: 0.8rem;
+  color: var(--fg-muted);
+}
+
+.host-details {
+  display: flex;
+  flex-direction: column;
+  gap: 0.5rem;
+  font-size: 0.9rem;
+}
+
+.host-detail-row {
+  display: flex;
+  justify-content: space-between;
+}
+
+.host-detail-label {
+  color: var(--fg-muted);
+}
+
+.host-detail-value {
+  color: var(--fg-base);
+  font-weight: bold;
+}
+
+.host-projects {
+  border-top: 1px solid rgba(51, 51, 51, 0.5);
+  padding-top: 0.75rem;
+  margin-top: 0.5rem;
+  display: flex;
+  flex-direction: column;
+  gap: 0.5rem;
+}
+
+.host-project-item {
+  display: flex;
+  justify-content: space-between;
+  font-size: 0.85rem;
+  background: rgba(0, 0, 0, 0.3);
+  padding: 0.5rem;
+  border-radius: 2px;
+}
+
+.project-name {
+  color: var(--accent-primary);
+}
+
+.project-sessions {
+  color: var(--fg-muted);
+}
+
+
+.host-project-item[onclick]:hover {
+  background: rgba(0, 240, 255, 0.05);
+}
+
+.project-session-item:hover {
+  background: rgba(255, 255, 255, 0.03);
+}