CortexGraph provides AI assistants with a human-like temporal memory system featuring automatic decay based on cognitive science principles, reinforcement through usage, and an intelligent two-layer storage architecture.
Core Memory Operations
save_memory - Store new memories with tags, entities, source, context, and metadata
search_memory - Search short-term memory with filters for tags, time windows, and score thresholds
search_unified - Search across both short-term and long-term memory with weighted ranking
open_memories - Retrieve specific memories by ID with detailed information and relations
touch_memory - Reinforce memories by updating access time and boosting strength to slow decay
Memory Lifecycle Management
gc - Remove low-scoring memories that have decayed below the forget threshold (with dry-run preview)
promote_memory - Promote high-value memories to permanent long-term storage (Obsidian vault), with auto-detection of promotion candidates
Knowledge Graph & Relations
read_graph - Read the complete knowledge graph with all memories, relations, and statistics
create_relation - Create explicit typed relationships between memories (e.g., "references", "follows_from", "similar_to") with strength scores
Memory Optimization
cluster_memories - Group similar memories using semantic similarity or identify duplicates
consolidate_memories - Merge similar memories into unified entries (algorithmic merging)
Key Features: Temporal decay following the Ebbinghaus forgetting curve, reinforcement learning through natural usage patterns, automatic promotion from short-term (JSONL) to long-term storage (Markdown), smart scoring combining recency, frequency, and importance, and automatic entity extraction from natural language content.
Enables version control and backup operations for memory storage, providing Git integration for tracking changes to memory data and creating backups
Provides integration with Obsidian vaults for long-term memory storage, allowing AI to automatically promote important memories to permanent Markdown files with YAML frontmatter and wikilinks
CortexGraph: Temporal Memory for AI
A Model Context Protocol (MCP) server providing human-like memory dynamics for AI assistants. Memories naturally fade over time unless reinforced through use, mimicking the Ebbinghaus forgetting curve.
About the Name & Version
This project was originally developed as mnemex (published to PyPI up to v0.6.0). In November 2025, it was transferred to Prefrontal Systems and renamed to CortexGraph to better reflect its role within a broader cognitive architecture for AI systems.
Version numbering starts at 0.1.0 for the cortexgraph package to signal a fresh start under the new name, while acknowledging the mature, well-tested codebase (791 tests, 98%+ coverage) inherited from mnemex. The mnemex package remains frozen at v0.6.0 on PyPI.
This versioning approach:
Signals "new package" to PyPI users discovering cortexgraph
Gives room to evolve the brand, API, and organizational integration before 1.0
Maintains continuity: users can migrate from `pip install mnemex` to `pip install cortexgraph`
Reflects that while the code is mature, the cortexgraph identity is just beginning
🚧 ACTIVE DEVELOPMENT - EXPECT BUGS 🚧
This project is under active development and should be considered experimental. You will likely encounter bugs, breaking changes, and incomplete features. Use at your own risk. Please report issues on GitHub, but understand that this is research code, not production-ready software.
Known issues:
API may change without notice between versions
Test coverage is incomplete
📖 New to this project? Start with the ELI5 Guide for a simple explanation of what this does and how to use it.
What is CortexGraph?
CortexGraph gives AI assistants like Claude a human-like memory system.
The Problem
When you chat with Claude, it forgets everything between conversations. You tell it "I prefer TypeScript" or "I'm allergic to peanuts," and three days later, you have to repeat yourself. This is frustrating and wastes time.
What CortexGraph Does
CortexGraph makes AI assistants remember things naturally, just like human memory:
🧠 Remembers what matters - Your preferences, decisions, and important facts
⏰ Forgets naturally - Old, unused information fades away over time (like the Ebbinghaus forgetting curve)
💪 Gets stronger with use - The more you reference something, the longer it's remembered
📦 Saves important things permanently - Frequently used memories get promoted to long-term storage
How It Works (Simple Version)
You talk naturally - "I prefer dark mode in all my apps"
Memory is saved automatically - No special commands needed
Time passes - Memory gradually fades if not used
You reference it again - "Make this app dark mode"
Memory gets stronger - Now it lasts even longer
Important memories promoted - Used 5+ times? Saved permanently to your Obsidian vault
No flashcards. No explicit review. Just natural conversation.
Why It's Different
Most memory systems are dumb:
❌ "Delete after 7 days" (doesn't care if you used it 100 times)
❌ "Keep last 100 items" (throws away important stuff just because it's old)
CortexGraph is smart:
✅ Combines recency (when?), frequency (how often?), and importance (how critical?)
✅ Memories fade naturally like human memory
✅ Frequently used memories stick around longer
✅ You can mark critical things to "never forget"
Technical Overview
This repository contains research, design, and a complete implementation of a short-term memory system that combines:
Novel temporal decay algorithm based on cognitive science
Reinforcement learning through usage patterns
Two-layer architecture (STM + LTM) for working and permanent memory
Smart prompting patterns for natural LLM integration
Git-friendly storage with human-readable JSONL
Knowledge graph with entities and relations
Why CortexGraph?
🔒 Privacy & Transparency
All data stored locally on your machine - no cloud services, no tracking, no data sharing.
Short-term memory:
JSONL (default): Human-readable, git-friendly files (`~/.config/cortexgraph/jsonl/`)
SQLite: Robust database storage for larger datasets (`~/.config/cortexgraph/cortexgraph.db`)
Long-term memory: Markdown files optimized for Obsidian
YAML frontmatter with metadata
Wikilinks for connections
Permanent storage you control
Export: Built-in utility to export memories to Markdown for portability.
You own your data. You can read it, edit it, delete it, or version control it - all without any special tools.
Core Algorithm
The temporal decay scoring function:
$$ \Large \text{score}(t) = (n_{\text{use}})^\beta \cdot e^{-\lambda \cdot \Delta t} \cdot s $$
Where:
$\large n_{\text{use}}$ - Use count (number of accesses)
$\large \beta$ (beta) - Sub-linear use count weighting (default: 0.6)
$\large \lambda = \frac{\ln(2)}{t_{1/2}}$ (lambda) - Decay constant; set via half-life (default: 3 days)
$\large \Delta t$ - Time since last access (seconds)
$\large s$ - Strength parameter $\in [0, 2]$ (importance multiplier)
Thresholds:
$\large \tau_{\text{forget}}$ (default 0.05) — if score < this, forget
$\large \tau_{\text{promote}}$ (default 0.65) — if score ≥ this, promote (or if $\large n_{\text{use}}\ge5$ in 14 days)
Decay Models:
Power‑Law (default): heavier tail; most human‑like retention
Exponential: lighter tail; forgets sooner
Two‑Component: fast early forgetting + heavier tail
See detailed parameter reference, model selection, and worked examples in docs/scoring_algorithm.md.
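As a minimal sketch, here is the exponential model with the default parameters and both thresholds. It mirrors the formula above but is illustrative, not the package's implementation:

```python
import math

# Defaults from the formula above (exponential decay model).
HALF_LIFE_SECONDS = 3 * 24 * 3600          # 3-day half-life
LAMBDA = math.log(2) / HALF_LIFE_SECONDS   # decay constant, ~2.67e-6 per second
BETA = 0.6                                 # sub-linear use-count weighting
FORGET_THRESHOLD = 0.05
PROMOTE_THRESHOLD = 0.65

def score(n_use: int, seconds_since_access: float, strength: float = 1.0) -> float:
    """score(t) = n_use^beta * exp(-lambda * dt) * s"""
    return (n_use ** BETA) * math.exp(-LAMBDA * seconds_since_access) * strength

def decide(n_use: int, seconds_since_access: float, strength: float = 1.0) -> str:
    s = score(n_use, seconds_since_access, strength)
    if s < FORGET_THRESHOLD:
        return "forget"
    if s >= PROMOTE_THRESHOLD:
        return "promote"
    return "keep"

# A memory used once, last touched 3 days ago, sits exactly at the half-life:
print(score(1, 3 * 24 * 3600))    # ~0.500 -> "keep"
print(decide(1, 30 * 24 * 3600))  # ~0.001 -> "forget"
```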
Tuning Cheat Sheet
Balanced (default)
Half-life: 3 days (λ ≈ 2.67e-6)
β = 0.6, τ_forget = 0.05, τ_promote = 0.65, use_count≥5 in 14d
Strength: 1.0 (bump to 1.3–2.0 for critical)
High‑velocity context (ephemeral notes, rapid switching)
Half-life: 12–24 hours (λ ≈ 1.60e-5 to 8.02e-6)
β = 0.8–0.9, τ_forget = 0.10–0.15, τ_promote = 0.70–0.75
Long retention (research/archival)
Half-life: 7–14 days (λ ≈ 1.15e-6 to 5.73e-7)
β = 0.3–0.5, τ_forget = 0.02–0.05, τ_promote = 0.50–0.60
Preference/decision heavy assistants
Half-life: 3–7 days; β = 0.6–0.8
Strength defaults: 1.3–1.5 for preferences; 1.8–2.0 for decisions
Aggressive space control
Raise τ_forget to 0.08–0.12 and/or shorten half-life; schedule weekly GC
Environment template
CORTEXGRAPH_DECAY_LAMBDA=2.673e-6
CORTEXGRAPH_DECAY_BETA=0.6
CORTEXGRAPH_FORGET_THRESHOLD=0.05
CORTEXGRAPH_PROMOTE_THRESHOLD=0.65
CORTEXGRAPH_PROMOTE_USE_COUNT=5
CORTEXGRAPH_PROMOTE_TIME_WINDOW=14
Decision thresholds:
Forget: $\text{score} < 0.05$ → delete memory
Promote: $\text{score} \geq 0.65$ OR $n_{\text{use}} \geq 5$ within 14 days → move to LTM
Key Innovations
1. Temporal Decay with Reinforcement
Unlike traditional caching (TTL, LRU), CortexGraph scores memories continuously by combining recency (exponential decay), frequency (sub-linear use count), and importance (adjustable strength). See Core Algorithm for the mathematical formula. This creates memory dynamics that closely mimic human cognition.
2. Smart Prompting System + Natural Language Activation (v0.6.0+)
Patterns for making AI assistants use memory naturally, now enhanced with automatic entity extraction and importance scoring:
Auto-Enrichment (NEW in v0.6.0)
When you save memories, CortexGraph automatically:
Extracts entities (people, technologies, organizations) using spaCy NER (see the sketch after this list)
Calculates importance/strength based on content markers
Detects save/recall intent from natural language phrases
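As an illustration of the entity-extraction step referenced above, this is standard spaCy usage, not CortexGraph's internal code:

```python
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_entities(text: str) -> list[tuple[str, str]]:
    """Return (entity text, label) pairs found by spaCy NER."""
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]

print(extract_entities("I moved the Acme Corp project from Django to FastAPI."))
# Output varies by model, e.g. [('Acme Corp', 'ORG'), ('Django', 'ORG'), ('FastAPI', 'ORG')]
```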
Auto-Save
Auto-Recall
Auto-Reinforce
Decision Support Tools (v0.6.0+)
Two new tools help Claude decide when to save/recall:
analyze_message - Detects memory-worthy content, suggests entities and strength
analyze_for_recall - Detects recall intent, suggests search queries
No explicit memory commands needed - just natural conversation.
3. Natural Spaced Repetition
Inspired by how concepts naturally reinforce across different contexts (the "Maslow effect" - remembering Maslow's hierarchy better when it appears in history, economics, and sociology classes).
No flashcards. No explicit review sessions. Just natural conversation.
How it works:
Review Priority Calculation - Memories in the "danger zone" (0.15-0.35 decay score) get highest priority
Cross-Domain Detection - Detects when memories are used in different contexts (tag Jaccard similarity <30%)
Automatic Reinforcement - Memories strengthen naturally when used, especially across domains
Blended Search - Review candidates appear in 30% of search results (configurable)
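A compact sketch of the danger-zone priority and the cross-domain check described above; the exact falloff outside the zone is an assumption, not the package's implementation:

```python
def review_priority(score: float, low: float = 0.15, high: float = 0.35) -> float:
    """Highest review priority inside the danger zone; assumed linear falloff outside."""
    if low <= score <= high:
        return 1.0
    distance = (low - score) if score < low else (score - high)
    return max(0.0, 1.0 - distance / low)

def is_cross_domain(tags_a: set[str], tags_b: set[str], threshold: float = 0.30) -> bool:
    """Tag Jaccard similarity below 30% counts as a different context."""
    if not (tags_a and tags_b):
        return True
    jaccard = len(tags_a & tags_b) / len(tags_a | tags_b)
    return jaccard < threshold

print(review_priority(0.25))  # 1.0 -> deep in the danger zone
print(is_cross_domain({"python", "web"}, {"history", "economics"}))  # True
```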
See docs/prompts/ for LLM system prompt templates, usage patterns, and configuration examples that enable natural memory usage.
4. Two-Layer Architecture
Short-term memory (JSONL or SQLite) holds recent, decaying memories; memories that cross the promotion threshold move to long-term storage as Markdown files in an Obsidian vault.
5. Multi-Agent Consolidation Pipeline
Automated memory maintenance through five specialized agents:
The Five Agents:
| Agent | Purpose |
| --- | --- |
| DecayAnalyzer | Find memories at risk of being forgotten (danger zone: 0.15-0.35) |
| ClusterDetector | Group similar memories using embedding similarity |
| SemanticMerge | Intelligently combine clustered memories, preserving unique info |
| LTMPromoter | Move high-value memories to permanent Obsidian storage |
| RelationshipDiscovery | Find cross-domain connections via shared entities |
Key Features:
Dry-run mode: Preview changes without modifying data
Rate limiting: Configurable operations per minute (default: 60)
Audit trail: Every decision tracked via beads issue tracking
Human override: Review and approve decisions before execution
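The pipeline's overall shape can be sketched in a few lines of Python; the class and method names below are illustrative, not CortexGraph's actual API:

```python
import time
from typing import Protocol

class Agent(Protocol):
    """Minimal interface each consolidation agent is assumed to expose."""
    name: str
    def run(self, memories: list[dict], dry_run: bool) -> list[str]: ...

class RateLimiter:
    """Caps pipeline throughput at a configurable operations-per-minute."""
    def __init__(self, ops_per_minute: int = 60):
        self.interval = 60.0 / ops_per_minute
    def wait(self) -> None:
        time.sleep(self.interval)

def run_pipeline(agents: list[Agent], memories: list[dict],
                 dry_run: bool = True, limiter: RateLimiter | None = None) -> None:
    """Run the agents in order; dry-run mode only reports decisions."""
    limiter = limiter or RateLimiter()
    for agent in agents:  # DecayAnalyzer -> ... -> RelationshipDiscovery
        for decision in agent.run(memories, dry_run):
            prefix = "WOULD " if dry_run else ""
            print(f"[{agent.name}] {prefix}{decision}")
            limiter.wait()
```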
For usage examples, CLI invocation, configuration, beads integration, and troubleshooting, see docs/agents.md.
Quick Start
Installation
Recommended: UV Tool Install (from PyPI)
This installs cortexgraph and all 7 CLI commands in an isolated environment.
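Assuming uv is already installed, the typical command is:

```bash
uv tool install cortexgraph
```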
Alternative Installation Methods
For Development (Editable Install)
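A typical sequence looks like this (repository URL omitted; the editable-install command matches the migration note below):

```bash
git clone <repository-url>
cd cortexgraph   # directory name assumed
uv pip install -e .
```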
Configuration
IMPORTANT: Configuration location depends on installation method:
Method 1: .env file (Works for all installation methods)
Create `~/.config/cortexgraph/.env` and edit it with your settings.
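A minimal example using the defaults from the tuning cheat sheet above:

```
# ~/.config/cortexgraph/.env
CORTEXGRAPH_DECAY_LAMBDA=2.673e-6
CORTEXGRAPH_DECAY_BETA=0.6
CORTEXGRAPH_FORGET_THRESHOLD=0.05
CORTEXGRAPH_PROMOTE_THRESHOLD=0.65
CORTEXGRAPH_PROMOTE_USE_COUNT=5
CORTEXGRAPH_PROMOTE_TIME_WINDOW=14
```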
Where cortexgraph looks for .env files:
Primary: `~/.config/cortexgraph/.env` ← use this for `uv tool install` / `uvx`
Fallback: `./.env` (current directory) ← only works for editable installs
MCP Configuration
Recommended: Use absolute path (works everywhere)
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
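A minimal entry looks roughly like this (adjust the command path for your machine):

```json
{
  "mcpServers": {
    "cortexgraph": {
      "command": "/Users/yourusername/.local/bin/cortexgraph"
    }
  }
}
```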
Find your actual path with `which cortexgraph`, then use that path in your config, replacing yourusername with your actual username.
Why absolute path? GUI apps like Claude Desktop don't inherit your shell's PATH configuration (.zshrc, .bashrc). Using the full path ensures it always works.
For development (editable install):
Configuration can be loaded from ./.env in the project directory OR ~/.config/cortexgraph/.env.
Troubleshooting: Command Not Found
If Claude Desktop shows spawn cortexgraph ENOENT errors, the cortexgraph command isn't in Claude Desktop's PATH.
macOS/Linux: GUI apps don't inherit shell PATH
GUI applications on macOS and Linux don't see your shell's PATH configuration (.zshrc, .bashrc, etc.). Claude Desktop only searches:
/usr/local/bin
/opt/homebrew/bin (macOS)
/usr/bin
/bin
/usr/sbin
/sbin
If uv tool install placed cortexgraph in ~/.local/bin/ or another custom location, Claude Desktop can't find it.
Solution: Use absolute path
Update your Claude config with the absolute path, as in the example above, replacing /Users/username/.local/bin/cortexgraph with your actual path from `which cortexgraph`.
Maintenance
Use the maintenance CLI to inspect and compact JSONL storage.
Migrating to UV Tool Install
If you're currently using an editable install (uv pip install -e .), you can switch to the simpler UV tool install:
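One plausible sequence (adjust for where your editable install lives):

```bash
uv pip uninstall cortexgraph   # remove the editable install from its environment
uv tool install cortexgraph   # reinstall from PyPI as an isolated tool
```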
Your data is safe! This only changes how the command is installed. Your memories in ~/.config/cortexgraph/ are untouched.
CLI Commands
The server includes 7 command-line tools.
Visualization
Interactive graph visualization using PyVis.
Features:
Interactive network graph with pan/zoom
Node colors by status (active=blue, promoted=green, archived=gray)
Node size based on use count
Edge colors by relation type
Hover tooltips showing full content, tags, and entities
Physics controls for layout adjustment
The visualization reads directly from your JSONL files and creates a standalone HTML file you can open in any browser.
MCP Tools
14 tools for AI assistants to manage memories:
| Tool | Purpose |
| --- | --- |
| save_memory | Save new memory with tags, entities (auto-enrichment in v0.6.0+) |
| search_memory | Search with filters and scoring (includes review candidates) |
| search_unified | Unified search across STM + LTM |
| touch_memory | Reinforce memory (boost strength) |
|  | Record memory usage for natural spaced repetition |
| analyze_message | ✨ NEW v0.6.0 - Detect memory-worthy content, suggest entities/strength |
| analyze_for_recall | ✨ NEW v0.6.0 - Detect recall intent, suggest search queries |
| gc | Garbage collect low-scoring memories |
| promote_memory | Move to long-term storage |
| cluster_memories | Find similar memories |
| consolidate_memories | Merge similar memories (algorithmic) |
| read_graph | Get entire knowledge graph |
| open_memories | Retrieve specific memories |
| create_relation | Link memories explicitly |
Example: Unified Search
Search across STM and LTM with the CLI.
Example: Reinforce (Touch) Memory
Boost a memory's recency/use count to slow decay.
Example: Promote Memory
Suggest and promote high-value memories to the Obsidian vault.
The tool can auto-detect promotion candidates (as a dry-run preview) or promote a specific memory by ID.
As an MCP tool (request body):
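The exact argument names aren't shown in this README, so treat the shape below as illustrative (standard MCP tools/call format with assumed arguments):

```json
{
  "name": "promote_memory",
  "arguments": {
    "memory_id": "<memory-id>",
    "dry_run": true
  }
}
```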
Example: Consolidate Similar Memories
Find and merge duplicate or highly similar memories to reduce clutter:
Candidates are first auto-detected as a preview; consolidation is then applied to the detected clusters.
The tool will:
Merge content intelligently (preserving unique information)
Combine tags and entities (union)
Calculate strength based on cluster cohesion
Preserve earliest `created_at` and latest `last_used` timestamps
Create tracking relations showing consolidation history
Mathematical Details
Decay Curves
For a memory with $n_{\text{use}}=1$, $s=1.0$, and $\lambda = 2.673 \times 10^{-6}$ (3-day half-life):
| Time | Score | Status |
| --- | --- | --- |
| 0 hours | 1.000 | Fresh |
| 12 hours | 0.891 | Active |
| 1 day | 0.794 | Active |
| 3 days | 0.500 | Half-life |
| 7 days | 0.198 | Decaying |
| 14 days | 0.039 | Near forget |
| 30 days | 0.001 | Forgotten |
Use Count Impact
With $\beta = 0.6$ (sub-linear weighting):
| Use Count | Boost Factor |
| --- | --- |
| 1 | 1.0× |
| 5 | 2.6× |
| 10 | 4.0× |
| 50 | 10.5× |
Frequent access significantly extends retention.
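The boost factors follow directly from $n_{\text{use}}^{\beta}$:

```python
# Boost factor = n_use ** beta, with beta = 0.6
for n in (1, 5, 10, 50):
    print(n, round(n ** 0.6, 1))  # -> 1.0, 2.6, 4.0, 10.5
```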
Documentation
Scoring Algorithm - Complete mathematical model with LaTeX formulas
Smart Prompting - Patterns for natural LLM integration
Architecture - System design and implementation
API Reference - MCP tool documentation
Multi-Agent System - Consolidation agents and pipeline architecture
Bear Integration - Guide to using Bear app as an LTM store
Graph Features - Knowledge graph usage
Use Cases
Personal Assistant (Balanced)
3-day half-life
Remember preferences and decisions
Auto-promote frequently referenced information
Development Environment (Aggressive)
1-day half-life
Fast context switching
Aggressive forgetting of old context
Research / Archival (Conservative)
14-day half-life
Long retention
Comprehensive knowledge preservation
License
AGPL-3.0 License - See LICENSE for details.
This project uses the GNU Affero General Public License v3.0, which requires that modifications to this software be made available as source code when used to provide a network service.
Related Work
Model Context Protocol - MCP specification
Ebbinghaus Forgetting Curve - Cognitive science foundation
Basic Memory - Primary inspiration for the integration layer. CortexGraph extends this concept by adding the Ebbinghaus forgetting curve, temporal decay algorithms, short-term memory in JSONL storage, and natural spaced repetition.
Additional research inspired by: mem0, Neo4j Graph Memory
Citation
If you use this work in research, please cite this repository.
Contributing
Contributions are welcome! See CONTRIBUTING.md for detailed instructions.
🚨 Help Needed: Windows & Linux Testers!
I develop on macOS and need help testing on Windows and Linux. If you have access to these platforms, please:
Try the installation instructions
Run the test suite
Report what works and what doesn't
See the Help Needed section in CONTRIBUTING.md for details.
General Contributions
For all contributors, see CONTRIBUTING.md for:
Platform-specific setup (Windows, Linux, macOS)
Development workflow
Testing guidelines
Code style requirements
Pull request process
Quick start:
Read CONTRIBUTING.md for platform-specific setup
Understand the Architecture docs
Review the Scoring Algorithm
Follow existing code patterns
Add tests for new features
Update documentation
Status
Version: 0.1.0
Status: Research implementation - functional but evolving
Phase 1 (Complete) ✅
14 MCP tools
Temporal decay algorithm
Knowledge graph
Phase 2 (Complete) ✅
JSONL storage
LTM index
Git integration
Smart prompting documentation
Maintenance CLI
Memory consolidation (algorithmic merging)
Phase 3 (Complete) ✅
Multi-Agent Consolidation Pipeline
DecayAnalyzer, ClusterDetector, SemanticMerge, LTMPromoter, RelationshipDiscovery
Scheduler for orchestration
Beads issue tracking integration
Dry-run and rate limiting support
Natural language activation (v0.6.0+)
Auto-enrichment for entity extraction
Future Work
Adaptive decay parameters
Performance benchmarks
LLM-assisted consolidation (optional enhancement)
Built with Claude Code 🤖