Hey @Grivn 👋
We crossed paths on openclaw/openclaw#13991 (the Associative Hierarchical Memory proposal). Your mnemon architecture caught our attention – the LLM-supervised pattern and four-graph knowledge store are philosophically very close to what we've built.
## What We Have
We maintain an OpenClaw fork with a full cognitive memory stack (~150 files, 7 modules). Two of our modules directly overlap with mnemon's approach:
**SYNAPSE** – multi-model debate with graph-based reasoning (RAAC protocol: Reason, Argue, Arbitrate, Conclude). We use cognitive diversity scoring to decide when debate improves output vs. when it's overhead.
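To make the gating idea concrete, here is a minimal sketch of "only debate when the models disagree." The actual SYNAPSE scoring isn't described above, so this stand-in uses mean pairwise Jaccard distance over token sets as the diversity signal; the function names and threshold are illustrative, not our real implementation.

```python
from itertools import combinations


def diversity_score(answers: list[str]) -> float:
    """Mean pairwise Jaccard distance over token sets.

    0.0 = all answers identical, 1.0 = pairwise disjoint vocabularies.
    """
    tokens = [set(a.lower().split()) for a in answers]
    pairs = list(combinations(tokens, 2))
    if not pairs:
        return 0.0

    def dist(a: set[str], b: set[str]) -> float:
        union = a | b
        return 1.0 - len(a & b) / len(union) if union else 0.0

    return sum(dist(a, b) for a, b in pairs) / len(pairs)


def should_debate(answers: list[str], threshold: float = 0.5) -> bool:
    """Gate the RAAC debate: only pay its cost when candidates diverge."""
    return diversity_score(answers) >= threshold
```

In practice the distance metric would be embedding-based rather than lexical, but the gate itself stays this simple: one scalar compared against a tuned threshold.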
**HIPPOCAMPUS** – pre-computed concept index (500 anchors, 9500+ chunks). Instead of runtime vector search, we build the graph at consolidation time and retrieve at O(1). Similar to your importance decay + deduplication, but we front-load the computation.
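The front-loading idea can be sketched in a few lines: all matching cost is paid once at consolidation time, and recall becomes a single dictionary lookup. The keyword-matching heuristic and function names here are illustrative placeholders, not HIPPOCAMPUS internals.

```python
from collections import defaultdict


def build_index(chunks: dict[str, str], anchors: list[str]) -> dict[str, list[str]]:
    """Consolidation-time pass: map each concept anchor to the chunk ids
    whose text mentions it. O(chunks * anchors), paid once."""
    index: dict[str, list[str]] = defaultdict(list)
    for chunk_id, text in chunks.items():
        words = set(text.lower().split())
        for anchor in anchors:
            if anchor in words:
                index[anchor].append(chunk_id)
    return dict(index)


def recall(index: dict[str, list[str]], anchor: str) -> list[str]:
    """Runtime retrieval: one hash lookup, O(1) in the number of chunks."""
    return index.get(anchor, [])
```

A real implementation would key on embedding-derived anchors rather than raw tokens, but the retrieval path keeps this shape: no runtime similarity search, just a lookup into a graph built ahead of time.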
Your four graph types (temporal, entity, semantic, causal) map interestingly to our modules:
| mnemon graph | Our module | Overlap |
|---|---|---|
| Temporal | ENGRAM (episodic timeline) | High – both track event sequences |
| Entity | HIPPOCAMPUS (concept anchors) | Medium – different granularity |
| Semantic | ENGRAM semantic store | High – both vector-based |
| Causal | SYNAPSE (debate chains) | Low – different purpose, potential synergy |
## Research Papers
We've written academic papers for each module:
- ENGRAM: Context compaction as cache eviction (paper)
- CORTEX: Persistent agent identity through persona state
- HIPPOCAMPUS: Pre-computed concept indexing for O(1) retrieval
- LIMBIC: Humor detection via bisociation in embedding space
- SYNAPSE: Multi-model deliberation with cognitive diversity
Happy to share full PDFs if you're interested.
## Collaboration Ideas
- Benchmark comparison – run both systems on the same long-conversation dataset and compare retrieval quality
- Graph type exchange – your causal graph could improve our SYNAPSE reasoning; our pre-computed index could speed up your recall path
- Joint OpenClaw integration – mnemon as external memory + our fork's cognitive layer = comprehensive agent memory
The fork is at globalcaos/tinkerclaw. Would love to exchange notes. 🤝