====== OpenCode ======
**OpenCode** is an open-source, terminal-native AI coding agent that provides intelligent code generation, refactoring, debugging, and documentation assistance. With over **129,000 GitHub stars** and 800+ contributors, it has become one of the most popular open-source alternatives to proprietary coding assistants.
| **Repository** | [[https://github.com/anomalyco/opencode|github.com/anomalyco/opencode]] |
| **License** | MIT |
| **Language** | Go, TypeScript |
| **Stars** | 129K+ |
| **Category** | AI Coding Agent |
===== Key Features =====
* **Multi-Provider Support** -- Connects to 75+ LLMs via Models.dev including OpenAI GPT-4, Anthropic Claude, Google Gemini, and local models via Ollama/LM Studio
* **Terminal-Native TUI** -- Beautiful terminal user interface optimized for power users with session management and real-time feedback
* **LSP Integration** -- Auto-loads Language Server Protocol servers (e.g. Pyright, rust-analyzer, typescript-language-server) for real-time code feedback
* **MCP Support** -- Model Context Protocol servers for extended context such as GitHub repository access
* **Plan and Build Modes** -- Plan mode for analysis/explanation, Build mode for code changes and edits
* **Multi-Session** -- Run parallel agents on the same project with shareable session links
* **GitHub Integration** -- Trigger via /opencode comments in issues and PRs, executing in GitHub Actions runners
* **Offline Mode** -- Full functionality with local models at zero API cost
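As an illustration of the MCP support listed above, an MCP server can be registered in a project-level config file. The sketch below assumes an ''opencode.json'' with an ''mcp'' section; the server name and command are illustrative and the key names should be checked against the current config schema:

<code json>
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "github": {
      "type": "local",
      "command": ["npx", "-y", "@modelcontextprotocol/server-github"]
    }
  }
}
</code>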
===== Architecture =====
OpenCode operates as a modular, provider-agnostic system with several key components:
* **Core Agent Engine** -- Handles natural language processing via connected LLMs with context management
* **LSP Integration Layer** -- Auto-discovers and loads language-specific servers for real-time code feedback, improving LLM accuracy
* **MCP Client** -- Optional remote or local MCP servers for extended context
* **ACP Support** -- Agent Client Protocol for standardized editor/IDE communication
* **TUI Renderer** -- Terminal user interface with interactive chat, session management, and visualization
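Because the agent core is provider-agnostic, swapping the underlying LLM is a configuration change rather than a code change. A minimal sketch of ''opencode.json'', where the ''model'' value is illustrative:

<code json>
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4"
}
</code>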
<code>
graph TB
    subgraph UI["Terminal UI"]
        TUI[TUI Renderer]
        Chat[Interactive Chat]
        Sessions[Session Manager]
    end
    subgraph Agent["Agent Core"]
        Engine[Agent Engine]
        Plan[Plan Mode]
        Build[Build Mode]
    end
    subgraph Providers["LLM Providers"]
        Cloud[Cloud APIs - GPT-4 / Claude / Gemini]
        Local[Local Models - Ollama / LM Studio]
    end
    subgraph Tools["Tool Layer"]
        LSP[LSP Servers]
        MCP[MCP Servers]
        Git[Git Integration]
        FS[Filesystem Tools]
    end
    UI --> Agent
    Agent --> Providers
    Agent --> Tools
    LSP --> Engine
</code>
===== Supported Providers =====
OpenCode is model-agnostic, connecting to providers via Models.dev:
* **Cloud Providers** -- OpenAI (GPT-4 family), Anthropic (Claude Opus/Sonnet), Google (Gemini), with context windows ranging from 8K to 200K tokens depending on the model
* **Local/Free** -- Code Llama, DeepSeek Coder, StarCoder via Ollama or LM Studio
* **Subscriptions** -- Direct login for ChatGPT Plus/Pro, GitHub Copilot
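For the local/free tier, OpenCode can be pointed at an Ollama endpoint. The sketch below assumes Ollama's OpenAI-compatible API on its default port; the provider keys follow the pattern in OpenCode's documentation but are illustrative and should be verified against the current schema:

<code json>
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "http://localhost:11434/v1" },
      "models": { "deepseek-coder": {} }
    }
  }
}
</code>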
Performance scales with model quality: frontier models such as GPT-4 and Claude reach roughly 80-90% of the output quality of commercial tools, while open models reach about 60-70%.
===== Code Example =====
An ''AGENTS.md'' file placed in the project root gives the agent persistent project context:

<code markdown>
# Project: MyApp
Tech Stack: Python 3.12, FastAPI, PostgreSQL
Testing: pytest with coverage
Conventions: type hints, PEP 8, docstrings required
</code>

Basic usage from the shell:

<code bash>
pip install opencode
opencode                             # launches TUI
opencode "refactor to async/await"   # direct command
</code>

For scripted use, OpenCode can be driven headlessly and its JSON output parsed:

<code python>
import json
import subprocess

def run_opencode_headless(prompt, model="claude-sonnet"):
    """Invoke OpenCode non-interactively and parse its JSON output."""
    result = subprocess.run(
        ["opencode", "--headless", "--model", model, prompt],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

output = run_opencode_headless("Add error handling to api/routes.py")
print(output["changes"])
</code>
===== References =====
* [[https://github.com/anomalyco/opencode|OpenCode GitHub Repository]]
* [[https://opencode.ai|OpenCode Official Website]]
* [[https://opencode.ai/docs/github/|OpenCode GitHub Integration Docs]]
===== See Also =====
* [[dify|Dify]] -- Agentic workflow platform
* [[mcp_servers|MCP Servers]] -- Model Context Protocol implementations
* [[langfuse|Langfuse]] -- LLM observability for monitoring agent performance