====== Flowise ======

**Flowise** is an open-source, low-code platform for building AI agents, chatbots, and LLM workflows through a visual drag-and-drop interface. Built on top of [[langchain|LangChain]], it lets developers and non-developers alike compose agents, RAG pipelines, and [[multi_agent_systems|multi-agent systems]] by connecting [[modular|modular]] nodes on a visual canvas, significantly lowering the barrier to creating sophisticated AI workflows.

* **Website:** [[https://flowiseai.com|flowiseai.com]]
* **[[github|GitHub]]:** [[https://github.com/FlowiseAI/Flowise|github.com/FlowiseAI/Flowise]]
* **License:** Apache 2.0

===== Visual Builder =====

The **Node Editor** is the primary interface: users drag pre-built components from a Component Library onto a canvas. Components include:

* **LLMs:** GPT-4o, [[claude|Claude]], Gemini, Llama, Mistral, and other models
* **Vector Databases:** [[pinecone|Pinecone]], [[weaviate|Weaviate]], Chroma, Qdrant, [[milvus|Milvus]]
* **Tools:** Web search APIs, CRM integrations, database connectors, email services
* **Memory Systems:** Short-term conversation memory, long-term vector storage, hybrid options
* **Document Loaders:** PDF, CSV, web scrapers, API connectors

Users configure inputs (chat, API calls, file uploads), processing (LLM chains and tools), and outputs (JSON, text, API payloads), with real-time testing, tracing, and debugging throughout.

===== Three Builder Modes =====

**Chatflow:** Conversational pipelines with memory management, token tracking, caching, and guardrails. Supports [[short_term_memory|short-term memory]] for context, long-term vector storage for knowledge, and [[human_in_the_loop|human-in-the-loop]] approval steps.

**Agentflow:** [[multi_agent_systems|Multi-agent systems]] with supervisor-worker patterns.
Supports native agent-to-agent communication, conflict resolution, dynamic role assignment, and complex orchestration patterns.

**Assistant:** A rapid-prototyping mode for creating AI assistants with minimal configuration.

===== Key Features =====

* **Performance:** Handles 1,000 concurrent connections with 150 ms overhead and a 200-300 MB memory footprint
* **Deployment:** Local, Docker, Kubernetes, or serverless; production-ready API endpoints with authentication and rate limiting
* **Embeddable Widgets:** Chat widgets for embedding in websites; API/SDK access for programmatic control
* **Version Control:** Git-style versioning for flows
* **Real-time Collaboration:** Multiple users can work on a flow simultaneously
* **Cost Optimization:** 40-60% LLM cost savings via intelligent caching, deduplication, and model switching
* **Guardrails:** Content filtering, hallucination detection, and [[human_in_the_loop|human-in-the-loop]] checkpoints
* **Self-hosting:** Can run for as little as $6-8/month, versus $500-5,000 for hosted alternatives

===== Integrations =====

Flowise connects to numerous external services via API endpoints and pre-built nodes:

* LLM providers ([[openai|OpenAI]], [[anthropic|Anthropic]], Google, and local models via [[ollama|Ollama]])
* Vector databases and document stores
* Third-party APIs (CRM, ticketing, analytics platforms)
* No-code platforms such as Bubble.io, via the REST API
* Proxy support for hiding API keys from client applications

===== Use Cases =====

* Customer support chatbots with RAG-powered knowledge bases
* Research agents with multi-step web search and analysis
* Internal copilots for enterprise documentation
* Data analysis pipelines with database integration
* Embedded AI experiences in existing web applications

===== Related Pages =====

* [[langchain|LangChain]]
* [[retrieval_augmented_generation|Retrieval-Augmented Generation]]
* [[tool_integration_patterns|Tool Integration Patterns]]
* [[multi_agent_systems|Multi-Agent Systems]]

===== See Also =====
* [[langflow|Langflow]]
* [[chatdev|ChatDev]]
* [[langgraph|LangGraph]]
* [[github_copilot_vs_windsurf|GitHub Copilot vs Windsurf]]
* [[streamlit_ai|Streamlit for AI]]
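The production API endpoints mentioned under Key Features let external applications query a deployed flow over HTTP. A minimal sketch in Python, assuming a locally running Flowise instance and the ''POST /api/v1/prediction/{id}'' endpoint convention; the host, flow ID, and API key shown are placeholders for your own deployment:

```python
import json
import urllib.request


def build_prediction_request(host, flow_id, question, api_key=None):
    """Assemble the URL, JSON payload, and headers for a prediction call."""
    url = f"{host}/api/v1/prediction/{flow_id}"
    payload = {"question": question}
    headers = {"Content-Type": "application/json"}
    if api_key:
        # Flows can require Bearer auth when API keys are enabled.
        headers["Authorization"] = f"Bearer {api_key}"
    return url, payload, headers


def ask(host, flow_id, question, api_key=None):
    """POST a question to a deployed flow and return the parsed JSON reply."""
    url, payload, headers = build_prediction_request(host, flow_id, question, api_key)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # reply typically includes a "text" field


# Example (placeholder values -- substitute your own deployment details):
# ask("http://localhost:3000", "<chatflow-id>", "What is Flowise?")
```

The same endpoint is what the embeddable chat widget calls under the hood, so a server-side proxy wrapping ''ask()'' is one way to keep API keys out of client applications, as noted under Integrations.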