Spreading Activation Memory

Biologically-inspired memory architectures for LLM agents go beyond flat vector stores by modeling the dynamic, associative nature of human memory. SYNAPSE introduces a unified episodic-semantic graph with spreading activation, while E-mem uses multi-agent episodic context reconstruction to preserve reasoning integrity over long horizons.

Beyond Vector Retrieval

Standard RAG-based agent memory uses embedding similarity to retrieve relevant context. This approach suffers from several limitations:

  • Flat structure: memories are stored as isolated chunks, with no associative links between them
  • No multi-hop recall: memories related to the query only indirectly, through intermediate concepts, are missed
  • Weak temporal modeling: the order and recency of events is barely reflected in similarity scores

Biologically-inspired approaches address these limitations by modeling memory as cognitive science suggests humans do – through interconnected networks where activation spreads along associative pathways.
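For contrast, the flat-similarity baseline these approaches improve on can be sketched in a few lines. The 2-D "embeddings" below are hand-picked toy values, purely illustrative:

```python
import numpy as np

def flat_retrieve(query_vec, memory_vecs, memory_texts, top_k=3):
    """Baseline RAG retrieval: rank stored memories by cosine similarity alone."""
    q = query_vec / np.linalg.norm(query_vec)
    M = memory_vecs / np.linalg.norm(memory_vecs, axis=1, keepdims=True)
    sims = M @ q                       # cosine similarity of each memory to the query
    order = np.argsort(-sims)[:top_k]  # highest similarity first
    return [(memory_texts[i], float(sims[i])) for i in order]

# Toy memory store with hand-picked 2-D vectors (illustrative only)
texts = ["dog", "pet", "invoice"]
vecs = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])
print(flat_retrieve(np.array([1.0, 0.1]), vecs, texts, top_k=2))
```

Each memory is scored independently; nothing links "dog" to "pet" beyond their raw vector similarity, which is exactly the limitation the architectures below target.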

SYNAPSE: Unified Episodic-Semantic Memory

SYNAPSE (arXiv:2601.02744) introduces a brain-inspired memory architecture that unifies episodic and semantic memories in a directed graph with spreading activation dynamics.

Graph Architecture

The memory graph contains two types of nodes:

  • Episodic nodes: records of specific experiences, each tagged with a timestamp
  • Semantic nodes: general concepts abstracted from episodes

Three types of edges connect the graph:

  • Temporal edges: link episodes in chronological order
  • Abstraction edges: link specific episodes to the general concepts they instantiate
  • Association edges: link related concepts to one another

Spreading Activation Mechanism

When a query arrives, the retrieval process works as follows:

  1. Embedding injection: The query embedding activates the most similar nodes in the graph
  2. Energy propagation: Activation “energy” spreads outward through edges:
    • Temporal edges propagate based on recency
    • Abstraction edges propagate between specific memories and general concepts
    • Association edges propagate between related concepts
  3. Convergence: After multiple propagation steps, the activation pattern stabilizes
  4. Context assembly: The highest-activated nodes (both episodic and semantic) are assembled into context for the LLM

This emulates the cognitive science model of spreading activation in human memory: when you think of “dog,” activation spreads to “pet,” “bark,” “walk,” and eventually reaches more distant concepts like “veterinarian” or “loyalty.”
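The "dog" example above can be reproduced with a toy associative graph. The edge weights and the decay value are assumptions chosen for illustration, not values from the paper:

```python
# Toy associative graph: neighbors and weights are purely illustrative.
edges = {
    "dog": [("pet", 0.9), ("bark", 0.8), ("walk", 0.7)],
    "pet": [("veterinarian", 0.6)],
    "veterinarian": [("loyalty", 0.5)],
}

def spread(seed, steps=3, decay=0.85):
    """Propagate activation outward from a seed concept.

    Each hop multiplies the incoming energy by decay * edge_weight, and each
    node keeps the strongest activation it has received so far.
    """
    activation = {seed: 1.0}
    for _ in range(steps):
        nxt = dict(activation)
        for node, energy in activation.items():
            for neighbor, weight in edges.get(node, []):
                nxt[neighbor] = max(nxt.get(neighbor, 0.0), energy * decay * weight)
        activation = nxt
    return activation

act = spread("dog")
print(sorted(act.items(), key=lambda kv: -kv[1]))
```

Direct associates like "pet" end up strongly activated, while "loyalty", three hops away, receives only a faint (but nonzero) activation, mirroring the distance-graded recall described above.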

Why Spreading Activation Matters

  • Multi-hop retrieval: memories connected to the query only through intermediate nodes can still be surfaced
  • Associative recall: related episodes and concepts are retrieved together rather than as isolated chunks
  • Blended context: episodic details and semantic abstractions arrive in the same assembled context

E-mem: Multi-Agent Episodic Context Reconstruction

E-mem (Wang et al., 2026, arXiv:2601.21714) shifts from memory preprocessing to episodic context reconstruction, addressing the problem of “destructive de-contextualization” in traditional memory systems.

The De-Contextualization Problem

Traditional memory preprocessing methods (embeddings, graphs, summaries) compress complex sequential dependencies into pre-defined structures. This severs the contextual integrity essential for System 2 (deliberative) reasoning: once an episode has been compressed at write time, the dependencies between its events can no longer be reconstructed at retrieval time.
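A contrived sketch makes the failure mode concrete. The episode and the bag-of-facts summary below are hypothetical, invented only to show what compression loses:

```python
# An ordered interaction trace: events depend on what came before them.
episode = [
    "user asked to book a flight to Tokyo",
    "agent found two options: JAL and ANA",
    "user rejected JAL because of the layover",
    "user chose ANA",
]

# Compressing the episode into a bag of facts keeps the entities
# but severs the ordered, causal structure between events.
summary = {"flight to Tokyo", "JAL", "ANA", "layover"}

def why_was_jal_rejected(memory):
    """Answerable only when the sequential trace survives."""
    if isinstance(memory, list):
        for event in memory:
            if "rejected JAL" in event:
                return event  # the reason lives inside the ordered trace
    return None  # irrecoverable from the de-contextualized summary

print(why_was_jal_rejected(episode))   # the full causal statement
print(why_was_jal_rejected(summary))   # None: the link is gone
```

The summary still "contains" every entity, yet the question about causality has become unanswerable; that is the destructive de-contextualization E-mem avoids by keeping segments uncompressed.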

Hierarchical Multi-Agent Architecture

Inspired by biological engrams (the physical traces of memories in neural tissue), E-mem organizes memory as a hierarchy of agents: a master agent coordinates a pool of assistant agents, each of which holds one uncompressed segment of the interaction history and reasons over it only at retrieval time.

Retrieval Process

  1. Master agent receives a query and formulates a retrieval plan
  2. Relevant assistant agents are activated (analogous to engram activation in neuroscience)
  3. Each activated assistant reasons over its uncompressed memory segment
  4. Assistants extract context-aware evidence (not just raw text chunks)
  5. Master agent aggregates evidence from all assistants into a coherent response
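The five steps above can be sketched as a minimal master/assistant loop. All names are hypothetical, and simple keyword matching stands in for the LLM calls each agent would make; this is a sketch of the pattern, not the authors' implementation:

```python
# Each assistant "owns" one uncompressed slice of the interaction history.
SEGMENTS = {
    "assistant_0": ["user prefers window seats", "user booked flight LH454"],
    "assistant_1": ["LH454 was delayed", "user asked for a refund"],
}

def assistant_answer(segment, query):
    """An activated assistant reasons over its raw segment and returns
    context-aware evidence (keyword overlap stands in for LLM reasoning)."""
    return [line for line in segment if any(w in line for w in query.lower().split())]

def master_retrieve(query):
    # Step 1-2: the master's "plan" here is simply to probe every assistant
    # and keep those whose segments respond (analogous to engram activation).
    evidence = {}
    for name, segment in SEGMENTS.items():
        hits = assistant_answer(segment, query)
        if hits:
            evidence[name] = hits
    # Step 5: aggregate evidence from all activated assistants.
    return [e for hits in evidence.values() for e in hits]

print(master_retrieve("refund for flight"))
```

Because each assistant keeps its segment verbatim, evidence from different parts of the history can be combined at query time without any prior compression step.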

Code Example

import numpy as np
from collections import defaultdict

class SpreadingActivationMemory:
    """SYNAPSE-style episodic-semantic memory with spreading activation."""

    def __init__(self, decay=0.85, threshold=0.1, max_steps=5):
        self.nodes = {}                 # id -> {type, content, embedding, ...}
        self.edges = defaultdict(list)  # id -> [(target_id, edge_type, weight)]
        self.decay = decay              # energy lost per propagation hop
        self.threshold = threshold      # minimum energy required to keep spreading
        self.max_steps = max_steps      # propagation steps before giving up

    def add_episodic(self, node_id, content, embedding, timestamp):
        self.nodes[node_id] = {
            "type": "episodic", "content": content,
            "embedding": embedding, "timestamp": timestamp,
        }

    def add_semantic(self, node_id, concept, embedding):
        self.nodes[node_id] = {
            "type": "semantic", "content": concept,
            "embedding": embedding,
        }

    def add_edge(self, source, target, edge_type, weight=1.0):
        # edge_type is one of "temporal", "abstraction", "association"
        self.edges[source].append((target, edge_type, weight))

    def retrieve(self, query_embedding, top_k=10):
        """Spreading-activation retrieval. Embeddings are assumed to be
        L2-normalized, so the dot product below is cosine similarity."""
        # Step 1: inject initial activation from query similarity
        activation = {}
        for nid, node in self.nodes.items():
            sim = float(np.dot(query_embedding, node["embedding"]))
            if sim > self.threshold:
                activation[nid] = sim

        # Step 2: spread energy along edges until the pattern stabilizes
        for _ in range(self.max_steps):
            new_activation = dict(activation)
            for nid, energy in activation.items():
                if energy < self.threshold:
                    continue  # too weak to propagate further
                for target, _etype, weight in self.edges.get(nid, []):
                    spread = energy * self.decay * weight
                    new_activation[target] = max(
                        new_activation.get(target, 0.0), spread
                    )
            if new_activation == activation:
                break  # converged: no node's activation changed this step
            activation = new_activation

        # Step 3: return the top-k most activated nodes
        ranked = sorted(activation.items(), key=lambda x: -x[1])
        return [(self.nodes[nid], score) for nid, score in ranked[:top_k]]

Comparison of Memory Architectures

Architecture    | Memory Type               | Retrieval             | Preserves Context  | Multi-Agent
----------------|---------------------------|-----------------------|--------------------|------------
Vector RAG      | Flat embeddings           | Similarity search     | No                 | No
Knowledge Graph | Structured triples        | Graph traversal       | Partial            | No
SYNAPSE         | Episodic + semantic graph | Spreading activation  | Yes (via edges)    | No
E-mem           | Uncompressed segments     | Agent-based reasoning | Yes (uncompressed) | Yes

Biological Inspiration

Both SYNAPSE and E-mem draw on established models from cognitive science and neuroscience: SYNAPSE operationalizes the classical spreading-activation account of associative recall, while E-mem mirrors the selective activation of engrams, the physical traces of memories in neural tissue.
