====== Agent Memory Frameworks ======

Agent memory frameworks provide persistent, queryable memory infrastructure for AI agents, enabling them to remember user preferences, track conversation history, and build knowledge over time. Unlike simple conversation buffers, these frameworks offer structured storage, semantic retrieval, and temporal reasoning that transform stateless LLM calls into stateful agent systems.

===== Why Agents Need Memory =====

LLMs are inherently stateless — each API call starts fresh with no knowledge of previous interactions. Memory frameworks solve this by:

  * **Personalizing** responses based on accumulated user preferences and history
  * **Maintaining** context across sessions, days, and weeks
  * **Tracking** how facts evolve over time (temporal reasoning)
  * **Sharing** knowledge across multiple agents in orchestrated systems
  * **Reducing** token costs by storing and retrieving only relevant context

===== Framework Comparison =====

| **Framework** | **Architecture** | **Key Strength** | **Best For** |
| [[https://mem0.ai|Mem0]] | Vector + knowledge graph | Most widely adopted, flexible backends | General-purpose agent personalization |
| [[https://www.getzep.com|Zep / Graphiti]] | Temporal knowledge graph | Fact evolution tracking, <200ms retrieval | Temporal reasoning, compliance (SOC2/HIPAA) |
| [[https://www.letta.com|Letta (MemGPT)]] | Three-tier, OS-inspired | Agent-controlled self-editing memory | Agents managing their own context |
| [[https://github.com/langchain-ai/langmem|LangMem]] | Flat key-value + vector | Deep LangGraph integration, MIT license | LangGraph-native agent systems |
| [[https://github.com/getmetal/motorhead|Motorhead]] | Redis-backed server | Simple REST API, session management | Lightweight memory for prototypes |

===== Architecture Deep Dives =====

==== Mem0 ====

Mem0 uses a dual-store architecture combining vector databases (Qdrant, Chroma, Milvus, pgvector) with knowledge graphs.
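The dual-store idea can be sketched in a few lines of plain Python: a vector store answers "what is semantically similar to this query?" while a graph store answers "what is related to this entity?". This is a toy illustration of the pattern, not Mem0's actual implementation or API; the bag-of-words `embed` function stands in for a real embedding model.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class DualStoreMemory:
    """Illustrative dual store: vector search plus an entity-relation graph."""

    def __init__(self):
        self.vectors = []   # list of (embedding, fact) pairs for semantic search
        self.graph = {}     # entity -> set of (relation, entity) edges

    def add_fact(self, fact, subject=None, relation=None, obj=None):
        self.vectors.append((embed(fact), fact))
        if subject and relation and obj:
            self.graph.setdefault(subject, set()).add((relation, obj))

    def search(self, query, limit=3):
        q = embed(query)
        ranked = sorted(self.vectors, key=lambda p: cosine(q, p[0]), reverse=True)
        return [fact for _, fact in ranked[:limit]]

    def related(self, entity):
        return self.graph.get(entity, set())

mem = DualStoreMemory()
mem.add_fact("user_123 prefers Python over JavaScript",
             subject="user_123", relation="prefers", obj="Python")
mem.add_fact("user_123 works on a data pipeline project")
print(mem.search("user_123 language preference", limit=1))
print(mem.related("user_123"))
```

The two stores complement each other: vector search retrieves fuzzy semantic matches, while the graph supports exact relational lookups such as "everything user_123 prefers".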
An extraction pipeline converts conversations into atomic memory facts scoped to users, sessions, or agents.

==== Zep / Graphiti ====

Zep implements a temporal knowledge graph that tracks how facts change over time. It scores 63.8% on the LongMemEval benchmark with sub-200ms retrieval latency, and offers Python, TypeScript, and Go SDKs with SOC2 Type 2 and HIPAA compliance.

==== Letta (MemGPT) ====

Letta uses a three-tier memory hierarchy inspired by operating systems:

  * **Core memory** — Always in the LLM context window (like RAM)
  * **Recall memory** — Searchable conversation history (like a disk cache)
  * **Archival memory** — Long-term queryable storage (like cold storage)

Agents actively self-edit their memory blocks, deciding what stays in context versus what gets archived.

==== LangMem ====

LangMem uses a flat key-value plus vector architecture with MIT licensing and deep LangGraph integration. Unique features include prompt optimization from conversation data and a background memory manager for automatic extraction.

===== Example: Memory-Enhanced Agent =====

<code python>
from mem0 import Memory

# Initialize memory with user scoping
memory = Memory()

def memory_agent(user_id: str, message: str, llm_client):
    # Retrieve relevant memories for this user
    relevant = memory.search(query=message, user_id=user_id, limit=5)
    memory_context = "\n".join(m["memory"] for m in relevant)

    # Generate a response grounded in the retrieved memories
    response = llm_client.chat(
        messages=[
            {"role": "system", "content": f"User memories:\n{memory_context}"},
            {"role": "user", "content": message},
        ]
    )

    # Store new memories extracted from this interaction
    memory.add(
        messages=[
            {"role": "user", "content": message},
            {"role": "assistant", "content": response},
        ],
        user_id=user_id,
    )
    return response

# Memories persist across sessions
memory_agent("user_123", "I prefer Python over JavaScript")

# Later session...
memory_agent("user_123", "What language should I use for this project?")
# The agent recalls the user's Python preference
</code>

===== Selection Criteria =====

  * **Choose Mem0** when you need battle-tested, widely adopted memory with flexible backend options and multi-agent knowledge sharing
  * **Choose Zep** when temporal reasoning — tracking how facts evolve over time — is critical and you need enterprise compliance
  * **Choose Letta** when agents need active control over their own memory management and dynamic context curation
  * **Choose LangMem** when you are committed to the LangGraph ecosystem and want cost-free, fully owned memory infrastructure
  * **Choose Motorhead** for lightweight prototyping where a simple REST API with Redis-backed sessions is sufficient

===== Benchmarks =====

The **LongMemEval** benchmark measures long-term memory capabilities:

  * Cognee: 81.6% (highest score; vector + knowledge graph with built-in RAG)
  * Zep/Graphiti: 63.8% (best temporal reasoning)
  * Mem0 and Letta: strong in practice, but neither has a published LongMemEval score

===== References =====

  * [[https://vectorize.io/articles/best-ai-agent-memory-systems|Vectorize — Best AI Agent Memory Systems]]
  * [[https://mem0.ai|Mem0 Documentation]]
  * [[https://www.getzep.com|Zep Documentation]]
  * [[https://www.letta.com|Letta (MemGPT) Documentation]]

===== See Also =====

  * [[retrieval_augmented_generation]] — RAG patterns that memory frameworks build on
  * [[embeddings]] — Embedding models powering semantic memory retrieval
  * [[knowledge_graphs]] — Graph-based knowledge storage
  * [[agent_orchestration]] — Shared memory in multi-agent systems
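The three-tier hierarchy described under Letta (MemGPT) above can also be sketched in plain Python. This is a toy illustration of the RAM/disk-cache/cold-storage idea, not Letta's actual SDK: core memory is the only tier the LLM sees on every call, and facts are evicted downward when core memory fills up.

```python
class ThreeTierMemory:
    """Illustrative three-tier memory: core (in-context), recall, archival."""

    def __init__(self, core_limit=3):
        self.core = []               # always in the LLM context window (like RAM)
        self.recall = []             # conversation history (like a disk cache)
        self.archival = []           # long-term storage (like cold storage)
        self.core_limit = core_limit

    def remember(self, fact):
        """Add a fact to core memory, evicting the oldest to archival when full."""
        self.core.append(fact)
        if len(self.core) > self.core_limit:
            self.archival.append(self.core.pop(0))

    def log_turn(self, role, text):
        """Record a conversation turn in recall memory."""
        self.recall.append((role, text))

    def search(self, keyword):
        """Search the lower tiers for context worth paging back into core."""
        kw = keyword.lower()
        hits = [text for _, text in self.recall if kw in text.lower()]
        hits += [fact for fact in self.archival if kw in fact.lower()]
        return hits

    def context_window(self):
        """What the LLM actually sees on every call: core memory only."""
        return list(self.core)

mem = ThreeTierMemory(core_limit=2)
mem.remember("User's name is Ada")
mem.remember("User prefers Python")
mem.remember("User is building a compiler")   # evicts the oldest fact to archival
mem.log_turn("user", "I also like OCaml")
print(mem.context_window())   # only the two most recent core facts
print(mem.search("ada"))      # finds the archived name fact
```

The self-editing step Letta performs is the part this sketch leaves out: in a real agent, the LLM itself decides which facts to `remember`, which to evict, and when to `search` the lower tiers.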