Agent Memory Frameworks

Agent memory frameworks provide persistent, queryable memory infrastructure for AI agents, enabling them to remember user preferences, track conversation history, and build knowledge over time. Unlike simple conversation buffers, these frameworks offer structured storage, semantic retrieval, and temporal reasoning that transform stateless LLM calls into stateful agent systems.

Why Agents Need Memory

LLMs are inherently stateless: each API call starts fresh with no knowledge of previous interactions. Memory frameworks solve this by persisting facts across sessions, retrieving semantically relevant context at query time, scoping memories to individual users or agents, and tracking how facts evolve over time.

Framework Comparison

| Framework | Architecture | Key Strength | Best For |
|---|---|---|---|
| Mem0 | Vector + Knowledge Graph | Most widely adopted, flexible backends | General-purpose agent personalization |
| Zep / Graphiti | Temporal Knowledge Graph | Fact evolution tracking, <200ms retrieval | Temporal reasoning, compliance (SOC2/HIPAA) |
| Letta (MemGPT) | Three-tier OS-inspired | Agent-controlled self-editing memory | Agents managing their own context |
| LangMem | Flat key-value + vector | Deep LangGraph integration, MIT license | LangGraph-native agent systems |
| Motorhead | Redis-backed server | Simple REST API, session management | Lightweight memory for prototypes |

Architecture Deep Dives

Mem0

Mem0 uses a dual-store architecture combining vector databases (Qdrant, Chroma, Milvus, pgvector) with knowledge graphs. An extraction pipeline converts conversations into atomic memory facts scoped to users, sessions, or agents.
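The dual-store idea can be sketched in a few lines: each extracted fact is written both to a vector side (for semantic retrieval) and to a graph side (for relational reasoning), scoped to a user. This is an illustrative sketch, not the real mem0 API; the class and method names are hypothetical.

```python
# Hypothetical sketch of a Mem0-style dual-store write path.
# Names are illustrative, not the mem0 library's API.
from dataclasses import dataclass, field

@dataclass
class DualStore:
    vectors: dict = field(default_factory=dict)   # fact_id -> (user_id, fact text): semantic side
    edges: list = field(default_factory=list)     # (subject, relation, object): graph side

    def add_fact(self, fact_id, user_id, text, triple):
        # Every atomic fact lands in both stores, keyed to its owner.
        self.vectors[fact_id] = (user_id, text)
        self.edges.append(triple)

store = DualStore()
store.add_fact("f1", "user_123", "Prefers Python over JavaScript",
               ("user_123", "prefers_language", "Python"))
```

In the real system, the vector side holds embeddings in a backend like Qdrant or pgvector, and the extraction pipeline (not shown) decides which atomic facts to write.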

Zep / Graphiti

Zep implements a temporal knowledge graph that tracks how facts change over time. It scores 63.8% on the LongMemEval benchmark with sub-200ms retrieval latency, and offers Python, TypeScript, and Go SDKs with SOC2 Type 2 and HIPAA compliance.
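The core temporal-graph idea is that facts carry validity intervals rather than being overwritten, so the graph can answer "what was true at time t?" The sketch below is illustrative only, not the Zep SDK; all names are hypothetical.

```python
# Illustrative sketch of temporal fact tracking (not the Zep/Graphiti API):
# each fact records when it became valid and when it was invalidated.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TemporalFact:
    fact: str
    valid_from: datetime
    invalid_from: Optional[datetime] = None  # None means still valid

def facts_at(facts, when):
    """Return the facts that were valid at the given point in time."""
    return [f.fact for f in facts
            if f.valid_from <= when
            and (f.invalid_from is None or when < f.invalid_from)]

history = [
    TemporalFact("works at Acme", datetime(2022, 1, 1), datetime(2024, 6, 1)),
    TemporalFact("works at Initech", datetime(2024, 6, 1)),
]
```

A query for 2023 returns the Acme fact, while a query for 2025 returns only Initech: the old fact is invalidated, not deleted, which is what enables fact-evolution tracking.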

Letta (MemGPT)

Letta uses a three-tier memory hierarchy inspired by operating systems:

- Core memory: small, always-in-context blocks (e.g., persona and user facts), analogous to RAM
- Recall memory: searchable conversation history
- Archival memory: an external long-term store for facts evicted from context, analogous to disk

Agents actively self-edit their memory blocks, deciding what stays in context versus what gets archived.
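The self-editing loop can be sketched as the agent calling tools that mutate its own memory blocks: append or rewrite an in-context block, or push a fact out to the archive. This is a minimal sketch of the pattern, not the Letta API; the class and method names are hypothetical.

```python
# Illustrative sketch of agent-controlled self-editing memory
# (not the Letta API; names are hypothetical).
class MemoryBlocks:
    def __init__(self):
        # Core blocks are always included in the agent's context window.
        self.core = {"persona": "helpful assistant", "human": ""}
        # Archival memory lives outside the context window.
        self.archive = []

    def core_memory_append(self, block, text):
        """Tool the agent calls to extend an in-context block."""
        self.core[block] += text

    def archival_insert(self, fact):
        """Tool the agent calls to move a fact to long-term storage."""
        self.archive.append(fact)

mem = MemoryBlocks()
mem.core_memory_append("human", "Name: Sam. Prefers concise answers.")
mem.archival_insert("2024-05-01: Sam asked about Rust lifetimes.")
```

The key design point is that the LLM itself decides when to call these tools, trading scarce context space against retrievable archive space.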

LangMem

LangMem uses a flat key-value plus vector architecture with MIT licensing and deep LangGraph integration. Unique features include prompt optimization from conversation data and a background memory manager for automatic extraction.
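A flat key-value + vector layout can be sketched as entries addressed by (namespace, key) for direct reads plus similarity search for fuzzy retrieval. The sketch below is illustrative, not the LangMem API; token overlap stands in for real embeddings.

```python
# Illustrative flat key-value + vector store (not the LangMem API).
# Token overlap is a stand-in for embedding similarity.
class FlatMemoryStore:
    def __init__(self):
        self.items = {}  # (namespace, key) -> memory text

    def put(self, namespace, key, text):
        # Direct key-value write; namespace scopes memories (e.g., per user).
        self.items[(namespace, key)] = text

    def search(self, namespace, query, limit=3):
        # Rank a namespace's memories by similarity to the query.
        q = set(query.lower().split())
        scored = [(len(q & set(t.lower().split())), k, t)
                  for (ns, k), t in self.items.items() if ns == namespace]
        scored.sort(reverse=True)
        return [t for score, _, t in scored[:limit] if score > 0]

store = FlatMemoryStore()
store.put("user_123", "lang", "prefers Python for scripting")
store.put("user_123", "editor", "uses vim with plugins")
```

A search for "which Python language?" in the `user_123` namespace would surface the language preference while ignoring the unrelated editor memory.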

Example: Memory-Enhanced Agent

from mem0 import Memory
 
# Initialize memory with user scoping
memory = Memory()
 
def memory_agent(user_id: str, message: str, llm_client):
    # Retrieve relevant memories for this user
    relevant = memory.search(query=message, user_id=user_id, limit=5)
    memory_context = "\n".join(m["memory"] for m in relevant)
 
    # Generate response with memory context
    response = llm_client.chat(
        messages=[
            {"role": "system", "content": f"User memories:\n{memory_context}"},
            {"role": "user", "content": message}
        ]
    )
 
    # Store new memories from this interaction
    memory.add(
        messages=[
            {"role": "user", "content": message},
            {"role": "assistant", "content": response}
        ],
        user_id=user_id
    )
 
    return response
 
# Memories persist across sessions
memory_agent("user_123", "I prefer Python over JavaScript")
# Later session...
memory_agent("user_123", "What language should I use for this project?")
# Agent recalls the user's Python preference

Selection Criteria

Benchmarks

The LongMemEval benchmark measures an agent's long-term memory capabilities across multi-session conversations, including temporal reasoning and knowledge updates.
