====== Mem0 ======
**Mem0** is a universal, self-improving memory layer for AI agents and LLM applications that combines vector databases, [[knowledge_graphs|knowledge graphs]], and key-value stores into a hybrid datastore architecture.(([[https://github.com/mem0ai/mem0|Mem0 GitHub Repository]])) With over **51,000 GitHub stars**, Mem0 delivers 26% higher accuracy and 90% token savings compared to baselines like OpenAI Memory on the LOCOMO benchmark.(([[https://mem0.ai/research|Mem0 Research Paper]]))
| **Repository** | [[https://github.com/mem0ai/mem0|github.com/mem0ai/mem0]] |
| **License** | Apache 2.0 |
| **Language** | Python |
| **Stars** | 51K+ |
| **Category** | Agent Memory Layer |
===== Key Features =====
* **Hybrid Datastore**: Combines vector stores for similarity search, [[knowledge_graphs|knowledge graphs]] for entity relationships, and key-value stores for metadata filtering
* **Multi-Level Memory**: Short-term, long-term, user-scoped, session-scoped, and agent-scoped memory namespaces
* **Self-Improving**: Dynamic consolidation and rolling summaries for coherent long-horizon reasoning
* **Graph Enhancement (Mem0g)**: Entity-relation graphs for structured recall beyond vector similarity
* **LLM-Agnostic**: Works with any LLM provider, including [[openai|OpenAI]], [[anthropic|Anthropic]], and open-source models
* **Flexible Storage**: Supports [[qdrant|Qdrant]], [[pinecone|Pinecone]], Milvus, Chroma, [[pgvector|pgvector]], Redis, and [[sqlite|SQLite]]
===== Architecture =====
Mem0 employs a two-phase memory pipeline:
**Extraction Phase:** Ingests the latest user exchange, a rolling long-term summary, and the most recent messages. An LLM extracts concise candidate memories as salient factual phrases. For graph-enhanced Mem0g, an Entity Extractor identifies nodes and a Relations Generator infers labeled edges.
**Update Phase:** Consolidates memories asynchronously to maintain a coherent, non-redundant store. Background processes refresh summaries without stalling inference.
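The two phases above can be sketched in plain Python. This is an illustrative model, not Mem0's implementation: ''extract_facts'' stands in for the LLM Extractor, and the consolidator only deduplicates, whereas the real update phase decides between add, update, delete, and no-op operations.

<code python>
# Illustrative sketch of the two-phase extract/update pipeline.
# `extract_facts` stands in for the LLM Extractor; a real system
# would call a language model here.

def extract_facts(message: str) -> list[str]:
    # Extraction phase: split the exchange into salient candidate
    # "facts" (here, naively by sentence).
    return [s.strip() for s in message.split(".") if s.strip()]

def consolidate(store: dict[str, str], candidates: list[str]) -> dict[str, str]:
    # Update phase: add new facts, skip redundant ones. A real
    # consolidator would use embeddings to detect near-duplicates
    # and choose between add / update / delete / no-op.
    for fact in candidates:
        key = fact.lower()
        if key not in store:
            store[key] = fact
    return store

store: dict[str, str] = {}
candidates = extract_facts("Alice prefers Python. Alice uses FastAPI.")
store = consolidate(store, candidates)
print(len(store))  # 2 distinct facts retained
</code>

Because consolidation is keyed on the fact itself, re-ingesting the same exchange leaves the store unchanged, mirroring the non-redundancy goal described above.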
<code>
graph TB
    subgraph Input["Input Processing"]
        Msg[User Message]
        Ctx[Conversation Context]
        Sum[Rolling Summary]
    end
    subgraph Extract["Extraction Phase"]
        LLM[LLM Extractor]
        Entity[Entity Extractor]
        Rel[Relations Generator]
    end
    subgraph Store["Hybrid Datastore"]
        VS[(Vector Store)]
        KG[(Knowledge Graph)]
        KV[(Key-Value Store)]
    end
    subgraph Retrieve["Retrieval Phase"]
        VecSearch[Vector Similarity]
        GraphTraverse[Graph Traversal]
        KVLookup[Key-Value Lookup]
        Rank[Ranking and Fusion]
    end
    subgraph Output["Memory Output"]
        Memories[Ranked Memories]
        Context[Enriched Context]
    end
    Input --> Extract
    LLM --> VS
    Entity --> KG
    Rel --> KG
    Extract --> Store
    Store --> Retrieve
    VecSearch --> Rank
    GraphTraverse --> Rank
    KVLookup --> Rank
    Rank --> Output
</code>
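The "Ranking and Fusion" step merges candidate lists from the three retrieval paths. Mem0's exact fusion strategy is not specified here; reciprocal rank fusion (RRF) is one common technique and serves as an assumption in this sketch.

<code python>
# Illustrative sketch of ranking-and-fusion over the three retrieval
# paths, using reciprocal rank fusion (RRF) -- an assumed strategy,
# not necessarily what Mem0 implements internally.

def rrf(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    # Each item scores 1/(k + rank) per list it appears in; items
    # retrieved by multiple paths accumulate higher fused scores.
    scores: dict[str, float] = {}
    for ranking in ranked_lists:
        for rank, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["uses FastAPI", "prefers Python", "likes React"]
graph_hits = ["prefers Python", "works with PostgreSQL"]
kv_hits = ["prefers Python"]

fused = rrf([vector_hits, graph_hits, kv_hits])
print(fused[0])  # "prefers Python" -- retrieved by all three paths
</code>

A memory surfaced by vector similarity, graph traversal, and a key-value filter simultaneously rises to the top, which is the intuition behind combining the three stores.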
===== Memory Types =====
Mem0 supports multi-level namespaces for scoped persistence:
* **Short-term**: Derived from recent messages; stored as atomic semantic facts to minimize size
* **Long-term**: Consolidated facts across sessions using rolling summaries and graph structures
* **User-scoped**: Isolated to an individual user_id; stores preferences and history
* **Session-scoped**: Temporary, conversation-specific memories
* **Agent-scoped**: Global or shared facts across users and agents
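The scoping model can be illustrated with a plain dictionary keyed on the scope identifiers. Mem0's API takes ''user_id'', ''agent_id'', and ''run_id'' arguments; the storage below is a stand-in, not the library's actual mechanism.

<code python>
# Illustrative sketch of scoped memory namespaces: a dict keyed on
# (user_id, agent_id, run_id) models the isolation between scopes.
from collections import defaultdict

store: dict[tuple, list[str]] = defaultdict(list)

def add(fact: str, user_id=None, agent_id=None, run_id=None):
    store[(user_id, agent_id, run_id)].append(fact)

def search(user_id=None, agent_id=None, run_id=None) -> list[str]:
    return store[(user_id, agent_id, run_id)]

add("prefers Python", user_id="alice")                         # user-scoped
add("session goal: debug API", user_id="alice", run_id="s1")   # session-scoped
add("style guide: PEP 8", agent_id="coder")                    # agent-scoped

print(search(user_id="alice"))  # only Alice's user-scoped facts
</code>

Note that the session-scoped fact does not leak into Alice's user-scoped results, since the ''run_id'' differs.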
===== Performance =====
On the LOCOMO benchmark:(([[https://mem0.ai/research|Mem0 Research Paper]]))
* **Mem0**: 66.9% accuracy vs [[openai|OpenAI]] Memory's 52.9%
* **Mem0g (graph-enhanced)**: 68.4% accuracy
* **p95 latency**: 1.44s (91% lower than baselines)
* **Token savings**: 90% compared to full-context RAG approaches
===== Code Example =====
<code python>
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333}
    },
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o", "temperature": 0}
    }
}

m = Memory.from_config(config)

# Add memories from a conversation
m.add("I prefer Python for backend and React for frontend",
      user_id="alice", metadata={"topic": "preferences"})
m.add("My current project uses FastAPI with PostgreSQL",
      user_id="alice", metadata={"topic": "project"})

# Retrieve relevant memories
memories = m.search("What tech stack does Alice use?",
                    user_id="alice", limit=5)
for mem in memories:
    print(f"[{mem['score']:.2f}] {mem['memory']}")
</code>
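To enable the graph-enhanced (Mem0g-style) path, a ''graph_store'' can be configured alongside the vector store. The sketch below follows the provider names shown in the Mem0 documentation (Neo4j), but the exact config keys may differ between versions; the password is a placeholder.

<code python>
# Hedged sketch: adding a graph_store for entity-relation recall.
# Provider names and config keys follow the Mem0 docs but may vary
# by version; verify against docs.mem0.ai before use.
graph_config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "bolt://localhost:7687",
            "username": "neo4j",
            "password": "your-password",  # placeholder credential
        },
    },
}
# m = Memory.from_config(graph_config)  # requires mem0ai and a running Neo4j
print(sorted(graph_config))  # ['graph_store', 'vector_store']
</code>

With both stores configured, extraction populates the knowledge graph via the Entity Extractor and Relations Generator described in the Architecture section.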
===== See Also =====
* [[memgraph|Memgraph]]
* [[letta|Letta]]
* [[agent_memory_frameworks|Agent Memory Frameworks]]
* [[cognitive_memory_architectures|Cognitive Memory Architectures]]
* [[continual_learning_agents|Continual Learning Agents]]
===== References =====
* [[https://github.com/mem0ai/mem0|Mem0 GitHub Repository]]
* [[https://mem0.ai|Mem0 Official Website]]
* [[https://mem0.ai/research|Mem0 Research Paper]]
* [[https://docs.mem0.ai|Mem0 Documentation]]