Mem0 is a universal, self-improving memory layer for AI agents and LLM applications that combines vector databases, knowledge graphs, and key-value stores into a hybrid datastore architecture. With over 51,000 GitHub stars, Mem0 delivers 26% higher accuracy and 90% token savings compared to baselines like OpenAI Memory on the LOCOMO benchmark.
| Attribute | Value |
|---|---|
| Repository | github.com/mem0ai/mem0 |
| License | Apache 2.0 |
| Language | Python |
| Stars | 51K+ |
| Category | Agent Memory Layer |
Mem0 employs a two-phase memory pipeline:
Extraction Phase: Ingests the latest user exchange, a rolling long-term summary, and the most recent messages. An LLM extracts concise candidate memories as salient factual phrases. For graph-enhanced Mem0g, an Entity Extractor identifies nodes and a Relations Generator infers labeled edges.
Update Phase: Consolidates memories asynchronously to maintain a coherent, non-redundant store. Background processes refresh summaries without stalling inference.
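The two-phase flow can be sketched in a few lines. This is an illustrative toy, not Mem0's implementation: `extract_candidates` stands in for the LLM extraction call, and the update phase is reduced to exact-duplicate consolidation.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Toy store illustrating the extract-then-update pipeline (simplified sketch)."""
    facts: set = field(default_factory=set)

    def extract_candidates(self, exchange: str) -> list[str]:
        # Stand-in for the LLM extraction phase: split the latest
        # exchange into sentences and treat each as a candidate memory.
        return [s.strip() for s in exchange.split(".") if s.strip()]

    def update(self, candidates: list[str]) -> None:
        # Stand-in for the update phase: consolidate by skipping exact
        # duplicates so the store stays coherent and non-redundant.
        for candidate in candidates:
            if candidate not in self.facts:
                self.facts.add(candidate)


store = MemoryStore()
store.update(store.extract_candidates("I use Python. I like FastAPI."))
store.update(store.extract_candidates("I use Python. I deploy on AWS."))
print(sorted(store.facts))  # "I use Python" is stored only once
```

In the real system the update phase also runs asynchronously and can merge or rewrite near-duplicate memories, not just drop exact matches.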
Mem0 supports multi-level namespaces for scoped persistence: memories can be attached to a `user_id` (long-lived user facts), an `agent_id` (agent-specific knowledge), or a `run_id` (session-scoped context), and retrieval is filtered to the scope supplied at query time.
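A minimal sketch of how such scoped lookup behaves, assuming an in-memory dict keyed by the three IDs (the real Mem0 store applies these filters inside its vector/graph backends):

```python
from collections import defaultdict


class ScopedStore:
    """Illustrative multi-level namespace store; not Mem0's implementation."""

    def __init__(self):
        # Key: (user_id, agent_id, run_id); None means "not scoped at that level".
        self._mem = defaultdict(list)

    def add(self, text, *, user_id=None, agent_id=None, run_id=None):
        self._mem[(user_id, agent_id, run_id)].append(text)

    def search(self, *, user_id=None, agent_id=None, run_id=None):
        # Return memories matching every ID the caller supplied.
        out = []
        for (u, a, r), texts in self._mem.items():
            if user_id is not None and u != user_id:
                continue
            if agent_id is not None and a != agent_id:
                continue
            if run_id is not None and r != run_id:
                continue
            out.extend(texts)
        return out


store = ScopedStore()
store.add("prefers dark mode", user_id="alice")
store.add("session goal: book a flight", user_id="alice", run_id="run-42")
print(store.search(user_id="alice"))                   # both memories
print(store.search(user_id="alice", run_id="run-42"))  # session memory only
```

Scoping this way lets a single deployment keep per-user preferences separate from per-session working context without separate databases.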
On the LOCOMO benchmark, Mem0 achieves 26% higher response accuracy than the OpenAI Memory baseline while using 90% fewer tokens per query.
```python
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o", "temperature": 0},
    },
}

m = Memory.from_config(config)

# Add memories from a conversation
m.add("I prefer Python for backend and React for frontend",
      user_id="alice", metadata={"topic": "preferences"})
m.add("My current project uses FastAPI with PostgreSQL",
      user_id="alice", metadata={"topic": "project"})

# Retrieve relevant memories
memories = m.search("What tech stack does Alice use?", user_id="alice", limit=5)
for mem in memories:
    print(f"[{mem['score']:.2f}] {mem['memory']}")
```