Core Concepts
Reasoning
Memory & Retrieval
Agent Types
Design Patterns
Training & Alignment
Frameworks
Tools
Safety & Security
Evaluation
Meta
Biologically-inspired memory architectures for LLM agents go beyond flat vector stores by modeling the dynamic, associative nature of human memory. SYNAPSE introduces a unified episodic-semantic graph with spreading activation, while E-mem uses multi-agent episodic context reconstruction to preserve reasoning integrity over long horizons.
Standard RAG-based agent memory retrieves relevant context via embedding similarity alone. This approach suffers from several limitations: the store is flat and unstructured, associative links between related memories are ignored, and each retrieved chunk arrives stripped of the context that surrounded it.
Biologically-inspired approaches address these limitations by modeling memory as cognitive science suggests humans do – through interconnected networks where activation spreads along associative pathways.
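For reference, the flat vector-store baseline amounts to a nearest-neighbor lookup over embeddings. A minimal sketch (the `rag_retrieve` helper and the toy 2-d vectors are illustrative, not from any of the papers discussed):

```python
import numpy as np

# Toy memory store: each entry is (text, embedding).
memory = [
    ("walked the dog in the park", np.array([0.9, 0.1])),
    ("paid the electricity bill", np.array([0.1, 0.9])),
]

def rag_retrieve(query_emb, store, top_k=1):
    """Plain similarity search: no structure, no associations, no time."""
    scored = sorted(store, key=lambda item: -float(np.dot(query_emb, item[1])))
    return [text for text, _ in scored[:top_k]]

print(rag_retrieve(np.array([1.0, 0.0]), memory))
```

Whatever doesn't score highly against the query embedding is simply invisible, no matter how strongly it is associated with what was retrieved.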
SYNAPSE (arXiv:2601.02744) introduces a brain-inspired memory architecture that unifies episodic and semantic memories in a directed graph with spreading activation dynamics.
The memory graph contains two types of nodes: episodic nodes, which record specific timestamped experiences, and semantic nodes, which represent abstracted concepts.
Three types of edges connect the graph, and each edge carries a weight that scales how much activation spreads across it.
When a query arrives, retrieval proceeds in three steps:

1. Initial activation: every node whose embedding similarity to the query exceeds a threshold receives activation proportional to that similarity.
2. Spreading: for a fixed number of steps, each sufficiently active node pushes energy along its outgoing edges, attenuated by a decay factor and the edge weight; a node keeps the maximum energy it receives.
3. Ranking: the top-k most activated nodes are returned, including nodes that never matched the query directly but were reached through associations.
This emulates the cognitive science model of spreading activation in human memory: when you think of “dog,” activation spreads to “pet,” “bark,” “walk,” and eventually reaches more distant concepts like “veterinarian” or “loyalty.”
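The mechanism is easy to see in isolation. In this toy two-node example (the numbers are illustrative), "veterinarian" has zero similarity to the query but receives activation through its edge from "dog" in a single spreading step:

```python
import numpy as np

# Toy graph: "dog" matches the query directly; "veterinarian" does not,
# but is linked to "dog" and receives spread activation.
embeddings = {
    "dog": np.array([1.0, 0.0]),
    "veterinarian": np.array([0.0, 1.0]),
}
edges = {"dog": [("veterinarian", 0.9)]}  # (target, edge weight)

query = np.array([1.0, 0.0])
decay = 0.85

# Initial activation = query similarity.
activation = {nid: float(np.dot(query, emb)) for nid, emb in embeddings.items()}

# One spreading step: energy flows along edges; a node keeps the max it receives.
for nid, energy in list(activation.items()):
    for target, weight in edges.get(nid, []):
        activation[target] = max(activation[target], energy * decay * weight)

print(activation)  # "veterinarian" rises from 0.0 to 1.0 * 0.85 * 0.9 = 0.765
```

A pure similarity search would never surface "veterinarian" for this query; the associative edge is what carries it into the result set.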
E-mem (Wang et al., 2026, arXiv:2601.21714) shifts from memory preprocessing to episodic context reconstruction, addressing the problem of “destructive de-contextualization” in traditional memory systems.
Traditional memory preprocessing methods (embeddings, graphs, summaries) compress complex sequential dependencies into pre-defined structures. This severs the contextual integrity essential for System 2 (deliberative) reasoning: the temporal order and dependencies between events are flattened away before the agent ever reasons over them.
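A toy illustration of the failure mode (this example is mine, not from the E-mem paper): compressing an ordered dialogue into an unordered collection of facts destroys exactly the sequential dependencies deliberative reasoning needs.

```python
# An ordered interaction log.
log = [
    "user: my order 123 arrived broken",
    "agent: offered replacement",
    "user: declined, asked for refund instead",
    "agent: issued refund",
]

# A "preprocessing" step: keep each turn's content but discard ordering.
summary = {turn.split(":")[1].strip() for turn in log}

# The raw log can answer "what did the user decide AFTER the replacement
# was offered?" -- the unordered summary cannot.
print(sorted(summary))
```

All four facts survive the compression; what is lost is which came after which, and that is precisely what a question about causality or sequence depends on.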
Inspired by biological engrams (the physical traces of memories in neural tissue), E-mem instead stores experience segments uncompressed and deploys multiple agents to reconstruct the relevant episodic context at query time.
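A loose sketch of the storage side of this idea (the `EpisodicStore` class and its method names are hypothetical, not the paper's actual pipeline): segments are kept verbatim in temporal order, and retrieval returns a contiguous window around the best match, so the surrounding context survives intact rather than being pre-compressed away.

```python
import numpy as np

class EpisodicStore:
    """Keeps raw, uncompressed segments; retrieval reconstructs context by
    returning a contiguous temporal window, not an isolated snippet."""

    def __init__(self, window=1):
        self.segments = []    # raw text, in temporal order
        self.embeddings = []  # one vector per segment
        self.window = window

    def add(self, text, embedding):
        self.segments.append(text)
        self.embeddings.append(np.asarray(embedding, dtype=float))

    def reconstruct(self, query_embedding):
        q = np.asarray(query_embedding, dtype=float)
        sims = [float(np.dot(q, e)) for e in self.embeddings]
        best = int(np.argmax(sims))
        lo = max(0, best - self.window)
        hi = min(len(self.segments), best + self.window + 1)
        return self.segments[lo:hi]  # neighbors are preserved verbatim
```

In the full system this reconstruction is performed by reasoning agents rather than a fixed window, but the contrast with preprocessing-based memory is the same: nothing is summarized or restructured before query time.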
```python
import numpy as np
from collections import defaultdict


class SpreadingActivationMemory:
    """SYNAPSE-style episodic-semantic memory with spreading activation."""

    def __init__(self, decay=0.85, threshold=0.1, max_steps=5):
        self.nodes = {}                 # id -> {type, content, embedding}
        self.edges = defaultdict(list)  # id -> [(target_id, edge_type, weight)]
        self.decay = decay
        self.threshold = threshold
        self.max_steps = max_steps

    def add_episodic(self, node_id, content, embedding, timestamp):
        self.nodes[node_id] = {
            "type": "episodic",
            "content": content,
            "embedding": embedding,
            "timestamp": timestamp,
        }

    def add_semantic(self, node_id, concept, embedding):
        self.nodes[node_id] = {
            "type": "semantic",
            "content": concept,
            "embedding": embedding,
        }

    def add_edge(self, source, target, edge_type, weight=1.0):
        self.edges[source].append((target, edge_type, weight))

    def retrieve(self, query_embedding, top_k=10):
        """Spreading activation retrieval."""
        # Step 1: Initial activation from query similarity
        activation = {}
        for nid, node in self.nodes.items():
            sim = np.dot(query_embedding, node["embedding"])
            if sim > self.threshold:
                activation[nid] = sim

        # Step 2: Spread activation through edges
        for step in range(self.max_steps):
            new_activation = dict(activation)
            for nid, energy in activation.items():
                if energy < self.threshold:
                    continue
                for target, etype, weight in self.edges.get(nid, []):
                    spread = energy * self.decay * weight
                    new_activation[target] = max(
                        new_activation.get(target, 0), spread
                    )
            activation = new_activation

        # Step 3: Return top-k activated nodes
        ranked = sorted(activation.items(), key=lambda x: -x[1])
        return [(self.nodes[nid], score) for nid, score in ranked[:top_k]]
```
| Architecture | Memory Type | Retrieval | Preserves Context | Multi-Agent |
|---|---|---|---|---|
| Vector RAG | Flat embeddings | Similarity search | No | No |
| Knowledge Graph | Structured triples | Graph traversal | Partial | No |
| SYNAPSE | Episodic + Semantic graph | Spreading activation | Yes (via edges) | No |
| E-mem | Uncompressed segments | Agent-based reasoning | Yes (uncompressed) | Yes |
Both SYNAPSE and E-mem draw on established models from cognitive science and neuroscience, from spreading-activation accounts of semantic memory to engram research on how memories are physically encoded.