Releases: cuemap-dev/cuemap

v0.6.6

05 Mar 16:41

v0.6.1

22 Jan 10:41

[0.6.1] - 2026-01-21

Added

  • Cloud Backup Integration: Native support for backing up project snapshots to AWS S3, Google Cloud Storage, and Azure Blob Storage. Configurable via CLI flags (--cloud-backup, etc.) and managed via new API endpoints (/backup/upload, /backup/download, /backup/list, /backup/:id).
  • Context Expansion API: New /context/expand endpoint that uses the cue co-occurrence graph to suggest related concepts for a given query, enabling "search suggestion" or "related topics" features.
  • Prometheus Metrics: New /metrics endpoint exposing internal system metrics (ingestion rate, latency, memory usage, etc.) for observability.
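As a quick illustration of the new Context Expansion API, the sketch below builds a request URL for the /context/expand endpoint. The endpoint path comes from the notes above; the query-parameter names (`query`, `limit`) are assumptions for illustration, not documented behavior.

```python
from urllib.parse import urlencode

# Hypothetical request builder for /context/expand; the parameter
# names "query" and "limit" are illustrative assumptions.
def build_expand_request(base_url: str, query: str, limit: int = 5) -> str:
    """Build the URL for a context-expansion lookup."""
    params = urlencode({"query": query, "limit": limit})
    return f"{base_url}/context/expand?{params}"

url = build_expand_request("http://localhost:8080", "database indexing")
# url → "http://localhost:8080/context/expand?query=database+indexing&limit=5"
```

The response would then carry related concepts from the cue co-occurrence graph, suitable for driving a "related topics" widget.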

v0.6.0

19 Jan 09:34
[0.6.0] - 2026-01-19

Added

  • Embedded Web UI: A web UI is now embedded into the engine, accessible at http://localhost:8080/ui. The UI provides a modern interface to ingest a URL, file, or text content; perform recalls; and manage your lexicon. It features a physics-based graph visualization of your memory and lexicon.
  • WordNet Expansion: The engine now automatically expands cues with synonyms using WordNet, adding the related cues to your lexicon.
  • Cue Lemmatization: The engine now automatically lemmatizes cues, minimizing noise and improving recall accuracy.
  • More Formats for Self-Learning Agent: The self-learning agent now supports more input formats, including social media exports from WhatsApp and Instagram, as well as Google Takeout exports of Chrome and YouTube.
  • Lexicon Management API: Native support for manual lexicon management.
  • Relevance Compression for LLMs via Grounded Recall API: The new grounded recall API returns a context string of memories related to the query, compressed to fit the token budget supplied by the user.
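A minimal sketch of a grounded-recall request body follows. The token-budget behavior is described above, but the field names (`query`, `token_budget`) are assumptions for illustration, not the engine's documented schema.

```python
import json

# Hypothetical request body for the grounded recall API; the field
# names "query" and "token_budget" are illustrative assumptions.
def build_grounded_recall(query: str, token_budget: int) -> str:
    """Serialize a grounded-recall request with a token budget."""
    if token_budget <= 0:
        raise ValueError("token_budget must be positive")
    return json.dumps({"query": query, "token_budget": token_budget})

body = build_grounded_recall("deployment checklist", 1024)
```

The engine would return a context string trimmed to fit within the requested budget, ready to be injected into an LLM prompt.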

Removed

  • Single Tenant Mode: The engine now runs as a multi-tenant application by default; each tenant has its own memory store, lexicons, and aliases.
  • Cue Expansion with Google or OpenAI Models: To preserve the engine's deterministic behavior, cue expansion now uses only WordNet. GloVe and Ollama support remains available but is disabled by default.

v0.5.0

28 Dec 18:42

[0.5.0] - 2025-12-28

Added (Alias Management & Control)

  • Alias Management API: Native support for manual alias management via POST /aliases, GET /aliases, and POST /aliases/merge.
  • Deterministic Cue Expansion: Strict filtering in alias resolution to prevent "fuzzy" leaks. Aliases are now only expanded if they strictly match the source cue.
  • LLM Context Injection: The engine now resolves existing cues from content before prompting the LLM. These "known cues" are injected into the prompt to guide semantic expansion while maintaining determinism.
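The strict-match rule behind Deterministic Cue Expansion can be sketched as an exact-key lookup in an alias table. The alias data and function below are a toy illustration, not the engine's internals.

```python
# Minimal sketch of strict alias expansion: an alias table maps a
# source cue to its expansions, and expansion happens only on an
# exact match of the source cue (no fuzzy or prefix matching).
ALIASES = {"db": ["database"], "k8s": ["kubernetes"]}

def expand_cue(cue: str) -> list[str]:
    # Exact-match lookup keeps expansion deterministic.
    return [cue] + ALIASES.get(cue, [])

expand_cue("db")   # → ["db", "database"]
expand_cue("dbs")  # no fuzzy leak: → ["dbs"]
```

Because the lookup is an exact dictionary probe, a near-miss like "dbs" never picks up the "db" expansions, which is the "fuzzy leak" the release prevents.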

Added (Brain-Inspired Features)

  • Pattern Completion (Hippocampal CA3): Implemented cue co-occurrence matrix for associative recall. The engine now automatically infers missing cues based on historical co-occurrence at retrieval-time. Features a new disable_pattern_completion query flag for strict matching.
  • Temporal Chunking: Memories are now automatically grouped into "episodes" based on temporal proximity and cue overlap, linked via episode:<id> cues. Supports disable_temporal_chunking at write-time.
  • Salience Bias (Amygdala): Introduced a dynamic salience score for memories, boosted by cue density, reinforcement frequency, and complexity. Salient memories decay slower and rank higher. Supports disable_salience_bias at retrieval.
  • Match Integrity: Each recall result now includes a match_integrity score (0.0 - 1.0) derived from intersection strength, context agreement, and reinforcement counts.
  • Systems Consolidation: New mechanism to periodically merge highly overlapping memories into abstracted summaries. Consolidation is strictly additive; original "Ground Truth" memories are preserved. Summaries can be ignored during retrieval via disable_systems_consolidation.
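The pattern-completion idea above can be illustrated with a toy co-occurrence counter: count cue pairs across stored memories, then infer cues that frequently co-occur with every query cue. The threshold and data here are illustrative, not the engine's actual scoring.

```python
from collections import Counter
from itertools import combinations

# Toy CA3-style pattern completion over a cue co-occurrence matrix.
memories = [
    {"python", "testing", "pytest"},
    {"python", "testing", "ci"},
    {"rust", "cargo"},
]

cooc = Counter()
for cues in memories:
    for a, b in combinations(sorted(cues), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def complete(query: set[str], min_count: int = 2) -> set[str]:
    """Infer cues that co-occurred with every query cue often enough."""
    inferred = set()
    for cand in {b for (a, b) in cooc if a in query}:
        if cand not in query and all(cooc[(q, cand)] >= min_count for q in query):
            inferred.add(cand)
    return inferred

complete({"python"})  # → {"testing"}
```

A `disable_pattern_completion`-style flag would simply skip this inference step and return only the strict matches.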

Added (v0.5 Core)

  • Selective Set Intersection: A new, more exhaustive search strategy that replaces legacy tiered search. It scans the most selective cue list and uses O(1) probes to gather intersection data.
  • Continuous Gradient Scoring: Replaced discrete search tiers with a smooth scoring gradient based on recency and reinforcement frequency.
  • Asynchronous Intelligence Pipeline: Background job system for LLM-based fact extraction, cue proposal, and automatic alias discovery.
  • Explainable AI: Support for the explain=true flag in recall requests, providing detailed breakdowns of intersection, recency, and frequency components.
  • Expanded Chunker Support & Structural Cues: Recursive "Ground Truth" extraction for 17+ formats:
    • Recursive Code Extraction: Python, Rust, TS, JS, Go, Java, PHP now use tree-sitter to capture nested functions, classes, and methods as grounded cues.
    • Markup & Styling: HTML extracts IDs/classes at any depth; CSS captures selectors.
    • Structured Data: CSV (headers), JSON/YAML (keys/indices), and XML (attributes/IDs) now provide full structural metadata.
    • Documents: PDF, Word (DOCX), Excel (XLSX) text extraction.
  • Binary Ingestion: The agent now handles binary files gracefully, computing hashes and extracting text for ingestion.
  • Multi-Tenant Isolation: Full isolation between projects, including independent taxonomies, lexicons, and memory stores.
  • Advanced Text Normalization: Improved NLP normalization that better handles special characters and word boundaries.
  • Lexicon Resolution: Support for training a lexicon from existing memories to map natural language tokens to canonical cues.
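The Selective Set Intersection strategy can be sketched as follows: scan the smallest (most selective) cue list and probe the remaining lists with O(1) set-membership checks. The index data and counting scheme are a simplified illustration of the idea, not the engine's implementation.

```python
# Illustrative selective set intersection: scan the most selective
# posting list, probe the others in O(1), and count per-memory hits.
index = {
    "python":  {1, 2, 3, 5, 8},
    "testing": {2, 3, 9},
    "ci":      {3, 9, 11},
}

def intersect(cues: list[str]) -> dict[int, int]:
    lists = [index[c] for c in cues if c in index]
    if not lists:
        return {}
    seed = min(lists, key=len)  # most selective cue list
    hits = {}
    for mem_id in seed:
        # O(1) membership probes against every cue list
        hits[mem_id] = sum(mem_id in s for s in lists)
    return hits

intersect(["python", "testing", "ci"])  # memory 3 matches all three cues
```

Scanning only the shortest list bounds the work by the rarest cue, which is what makes this more exhaustive than tiered search without a full multi-list scan.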

Changed

  • Memory Storage: Optimized OrderedSet with get_index_of for O(1) recency lookup.
  • Recall Weighting: Intersection scores are now weighted by cue relevance, improving precision for complex queries.
  • Persistence: Enhanced snapshot mechanism with reliable roundtrip verification.
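The `get_index_of` optimization noted above can be illustrated with an insertion-ordered set that keeps a value-to-position map alongside the item list, so recency lookup is O(1) instead of a linear scan. This Python sketch only mirrors the idea; the engine's actual structure will differ in detail.

```python
# Toy insertion-ordered set with an O(1) get_index_of lookup.
class OrderedSet:
    def __init__(self):
        self._pos = {}    # value -> insertion index
        self._items = []  # values in insertion order

    def add(self, value):
        if value not in self._pos:
            self._pos[value] = len(self._items)
            self._items.append(value)

    def get_index_of(self, value):
        # Dictionary probe replaces a linear scan of the item list.
        return self._pos.get(value)

s = OrderedSet()
for v in ["a", "b", "c"]:
    s.add(v)
s.get_index_of("c")  # → 2
```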

Fixed

  • Recall Boundary Issues: Fixed cases where niche items deep in a cue list were missed by tiered search.
  • Reinforcement Precision: Corrected log-frequency scaling to ensure exact reinforcement scores.
  • NLP Tokenization: Fixed edge cases in normalize_text involving punctuation.

Removed

  • Legacy iterative search tiers (TIER_1_DEPTH, TIER_2_DEPTH).
  • Unused BinaryHeap implementation in favor of faster unstable sorting.