Core Concepts
Reasoning
Memory & Retrieval
Agent Types
Design Patterns
Training & Alignment
Frameworks
Tools
Safety & Security
Evaluation
Meta
Open Interpreter is an open-source local AI agent that lets large language models run code directly on your machine.1) A fully open-source project with over 63,000 stars on GitHub, it provides a ChatGPT-like terminal experience that can execute Python, JavaScript, Shell, and more — with full access to your local filesystem, internet connection, and installed packages.
Unlike cloud-based code interpreters that run in sandboxed environments, Open Interpreter removes restrictions on runtime, file size, and package availability, giving LLMs direct access to your computer's full capabilities, including data analysis, browser control, media editing, and GUI interaction.2)
Open Interpreter is built from two core components: a Core execution engine and a Terminal Interface. The Core provides a real-time code execution environment in which LLMs control the computer via an exec() function that takes a language identifier and a code string. The Terminal Interface connects to LLMs via LiteLLM, streaming model messages, code blocks, and system outputs as Markdown.3)
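The exec() contract described above — a language identifier plus a code string, routed to the matching runtime — can be sketched as follows. This is a hypothetical illustration of the pattern, not Open Interpreter's actual implementation; the function name `exec_code` and the runner table are assumptions:

```python
import subprocess
import sys

def exec_code(language: str, code: str) -> str:
    """Route a (language, code) pair to the matching runtime and
    return its combined stdout/stderr — a minimal sketch of the
    Core engine's exec() contract."""
    runners = {
        "python": [sys.executable, "-c", code],
        "shell": ["sh", "-c", code],
    }
    if language not in runners:
        raise ValueError(f"unsupported language: {language}")
    result = subprocess.run(runners[language], capture_output=True, text=True)
    return result.stdout + result.stderr

print(exec_code("python", "print(2 + 2)"))   # prints 4
print(exec_code("shell", "echo hello"))      # prints hello
```

The real engine additionally streams output incrementally and keeps runtimes alive between calls, so state (variables, working directory) persists across code blocks within a conversation.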
The system supports any LLM provider through LiteLLM — including OpenAI, Anthropic, local models via LM Studio, and dozens more. Conversations can be persisted, restored, and run asynchronously.
# Install Open Interpreter
# pip install open-interpreter

# Python API usage
from interpreter import interpreter

# Basic conversation
interpreter.chat("What operating system are we on?")

# Configure model
interpreter.model = "gpt-4o"

# Use with Anthropic
interpreter.model = "claude-3-5-sonnet-20240620"
interpreter.chat("Analyze the CSV files in my Downloads folder")

# Local/offline mode with LM Studio
interpreter.offline = True
interpreter.llm.model = "openai/x"
interpreter.llm.api_base = "http://localhost:1234/v1"
interpreter.llm.context_window = 3000
interpreter.llm.max_tokens = 1000
interpreter.chat()
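Conversation persistence amounts to serializing the message history, which Open Interpreter keeps as a list of dicts. The sketch below shows the round-trip with hypothetical helper names (`save_conversation` / `load_conversation` are not part of the library's API), assuming a plain JSON file as the store:

```python
import json
import os
import tempfile

def save_conversation(messages: list, path: str) -> None:
    """Write a message history (list of role/content dicts) to disk."""
    with open(path, "w") as f:
        json.dump(messages, f)

def load_conversation(path: str) -> list:
    """Read a previously saved message history back into memory."""
    with open(path) as f:
        return json.load(f)

# Example history in the role/content shape the library uses
messages = [
    {"role": "user", "type": "message", "content": "What OS are we on?"},
    {"role": "assistant", "type": "message", "content": "You are on Linux."},
]

path = os.path.join(tempfile.gettempdir(), "conversation.json")
save_conversation(messages, path)
restored = load_conversation(path)
assert restored == messages
```

Restoring a session is then a matter of assigning the loaded list back to the interpreter's message history before continuing the chat.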
%%{init: {'theme': 'dark'}}%%
graph TB
User([User]) -->|Natural Language| TI[Terminal Interface]
TI -->|Markdown Stream| User
TI -->|Messages| LM[LiteLLM Router]
LM -->|API Calls| OpenAI[OpenAI API]
LM -->|API Calls| Anthropic[Anthropic API]
LM -->|API Calls| Local[Local Models]
TI -->|Code Request| Core[Core Engine]
Core -->|exec| Python[Python Runtime]
Core -->|exec| JS[JavaScript Runtime]
Core -->|exec| Shell[Shell / Bash]
Core -->|Results| TI
Core -->|File Access| FS[Local Filesystem]
Core -->|Network| Internet[Internet Access]