====== Weaviate ======

**Weaviate** is an open-source vector database written in **Go** that stores both data objects and their vector embeddings, enabling semantic search, hybrid search, and structured filtering at scale. With over **16,000 stars** on GitHub, it provides a cloud-native, horizontally scalable architecture.

Weaviate combines the power of vector similarity search with traditional structured data management, offering GraphQL and REST APIs, built-in AI model integration for automatic embedding generation, and horizontal scaling to billions of objects.

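To make that combination concrete, here is a toy, pure-Python sketch of filtered vector search: objects pass a structured ''where'' filter first, and the survivors are ranked by cosine similarity. The objects, the filter callback, and the linear scan are all invented for illustration; this is not Weaviate's API or internals.

```python
import math

# Hypothetical in-memory "collection": each object carries structured
# properties plus a precomputed embedding vector.
objects = [
    {"title": "Intro to Go",        "year": 2021, "vector": [0.9, 0.1, 0.0]},
    {"title": "Vector search 101",  "year": 2023, "vector": [0.1, 0.9, 0.2]},
    {"title": "GraphQL in practice","year": 2020, "vector": [0.2, 0.2, 0.9]},
]

def cosine(a, b):
    # Cosine similarity: dot product normalized by both vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vector, where=None, limit=10):
    # Apply the structured filter first, then rank survivors by similarity.
    candidates = [o for o in objects if where is None or where(o)]
    candidates.sort(key=lambda o: cosine(query_vector, o["vector"]), reverse=True)
    return candidates[:limit]

results = search([0.0, 1.0, 0.1], where=lambda o: o["year"] >= 2021, limit=1)
print(results[0]["title"])  # the 2023 object is the closest match that passes the filter
```

A real vector database evaluates the filter against indexes and ranks with an approximate-nearest-neighbor structure instead of a linear scan, but the contract is the same: one query can carry both a vector and a structured constraint.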
===== How It Works =====

Weaviate stores data objects alongside their vector embeddings in an HNSW index — a hierarchical, navigable small-world graph that supports fast approximate nearest-neighbor search.

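The navigation idea behind HNSW can be sketched in a few lines: start from an entry point and greedily hop to whichever neighbor is closest to the query until no neighbor improves. The points and graph below are invented, and a real HNSW index adds multiple stacked layers plus a dynamic candidate list to escape local minima; this single-layer greedy walk only shows the core step.

```python
import math

# Hypothetical 2-D points and a hand-built proximity graph:
# each node links to a few near neighbors.
vectors = {
    "a": (0.0, 0.0), "b": (1.0, 0.0), "c": (2.0, 0.5),
    "d": (3.0, 1.0), "e": (4.0, 1.0),
}
edges = {
    "a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
    "d": ["c", "e"], "e": ["d"],
}

def greedy_search(query, entry="a"):
    current = entry
    while True:
        # Pick the neighbor closest to the query; stop when none improves.
        best = min(edges[current], key=lambda n: math.dist(vectors[n], query))
        if math.dist(vectors[best], query) < math.dist(vectors[current], query):
            current = best
        else:
            return current

print(greedy_search((3.9, 1.1)))  # walks a -> b -> c -> d -> e
```

Each hop discards most of the graph without examining it, which is why HNSW-style search scales to large collections.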
The database supports automatic vectorization through **modules** — pluggable vectorizers that generate embeddings during data ingestion using models like BERT, SBERT, OpenAI, or Cohere. This eliminates the need for a separate embedding pipeline.

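The module idea can be sketched as a registry of vectorizer functions invoked at ingestion time, so callers never embed anything themselves. Everything here (the ''Store'' class, the registry, the hash-based ''toy_vectorizer'') is hypothetical and only mimics the shape of the feature, not Weaviate's actual module interface.

```python
import hashlib

def toy_vectorizer(text, dims=4):
    # Deterministic stand-in for a real embedding model (BERT, OpenAI, ...):
    # derive `dims` pseudo-features in [0, 1] from a hash of the text.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dims]]

# Pluggable registry: swapping the vectorizer changes nothing for callers.
VECTORIZERS = {"toy": toy_vectorizer}

class Store:
    def __init__(self, vectorizer="toy"):
        self.vectorize = VECTORIZERS[vectorizer]
        self.objects = []

    def add(self, properties, text_field="body"):
        # The configured module generates the embedding during ingestion.
        obj = {"properties": properties,
               "vector": self.vectorize(properties[text_field])}
        self.objects.append(obj)
        return obj

store = Store()
obj = store.add({"body": "vector databases store embeddings"})
print(len(obj["vector"]))  # 4
```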
===== Key Features =====

  * **Weaviate Cloud Services (WCS)** — Managed cloud with auto-scaling
  * **Embedded** — In-process for testing and prototyping

| - | |||
| - | ===== References ===== | ||
| - | |||
| - | * [[https:// | ||
| - | * [[https:// | ||
| - | * [[https:// | ||
| - | * [[https:// | ||
| - | * [[https:// | ||
===== See Also =====

  * [[outlines|Outlines — Structured Output via Constrained Decoding]]
  * [[chainlit|Chainlit — Conversational AI Framework]]
| + | |||
| + | ===== References ===== | ||