====== E2B ======
**E2B** is an open-source infrastructure platform that provides secure, isolated cloud sandboxes for running AI-generated code. Built on Firecracker microVMs (the same virtualization technology behind AWS Lambda), E2B lets AI agents execute arbitrary, untrusted code without risking the host system. With more than 11,000 GitHub stars, $21M in Series A funding, and adoption by Fortune 100 companies, it has become a de facto standard for sandboxed code execution in agentic AI systems.
===== Architecture =====
E2B's architecture centers on **Firecracker microVMs** for hardware-level isolation:
* **Firecracker MicroVMs** — Each sandbox runs in its own lightweight virtual machine with a dedicated kernel, root filesystem, and network namespace. MicroVMs boot in under 200ms with minimal attack surface.
* **Jailer Process** — Wraps each Firecracker instance with cgroups, namespaces, and seccomp filters for defense-in-depth isolation.
* **Resource Controls** — Configurable CPU, memory, and storage limits prevent denial-of-service from runaway code.
* **Network Isolation** — Segmented network namespaces with configurable ingress/egress rules for secure internet access.
* **Ephemeral by Default** — Sandboxes auto-destroy on exit, ensuring no state leaks between sessions. Optional persistence for long-running agents (up to 24 hours).
===== Code Interpreter SDK =====
The **Code Interpreter SDK** is E2B's primary interface; it runs a Jupyter server inside each sandbox so that LLM-generated code can be executed statefully:
* Available for Python (''pip install e2b-code-interpreter'') and JavaScript (''npm install @e2b/code-interpreter'')
* Supports Python execution via Jupyter kernel
* Returns execution results including text output, logs, errors, and generated files
* Integrates with OpenAI, Anthropic, and any LLM that supports function calling
* Context manager pattern for automatic sandbox lifecycle management
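As a sketch of the function-calling integration mentioned above, the snippet below defines a tool schema (OpenAI function-calling format) that exposes sandboxed execution to a model and routes the resulting tool call to an executor. The tool name ''execute_python'' and the ''run'' callable are illustrative assumptions, not part of the E2B API; a stub executor stands in for a real sandbox:

<code python>
import json

# Hypothetical tool schema exposing sandboxed execution to the model;
# the name "execute_python" is our choice, not an E2B-defined tool.
EXECUTE_PYTHON_TOOL = {
    "type": "function",
    "function": {
        "name": "execute_python",
        "description": "Run Python code in a secure E2B sandbox and return its output.",
        "parameters": {
            "type": "object",
            "properties": {
                "code": {"type": "string", "description": "Python code to execute"},
            },
            "required": ["code"],
        },
    },
}

def dispatch_tool_call(name: str, arguments: str, run) -> str:
    """Route a model tool call to a sandbox executor.

    `run` stands in for a real sandboxed executor; here it is any
    callable taking a code string and returning its text output.
    """
    if name != "execute_python":
        raise ValueError(f"unknown tool: {name}")
    args = json.loads(arguments)
    return run(args["code"])

# Example with a stub executor in place of a real sandbox:
result = dispatch_tool_call("execute_python", '{"code": "print(1 + 1)"}', lambda c: "2")
</code>

In real use, ''run'' would wrap a sandbox's ''run_code'' call, and the returned text would be appended to the conversation as the tool result.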
===== How Sandboxes Work =====
- **Creation** — ''Sandbox.create()'' provisions a new Firecracker microVM in under 200ms
- **Code Execution** — ''sandbox.run_code()'' sends code to the Jupyter kernel inside the sandbox
- **File Operations** — Read, write, and manage files within the sandbox filesystem
- **Internet Access** — Install packages via pip/npm, make API calls, download data
- **Process Management** — Spawn child processes, run shell commands
- **Teardown** — Sandbox destroys automatically on exit or timeout
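The create-execute-teardown ordering above can be mimicked locally with a context manager. This toy ''FakeSandbox'' is purely illustrative (it is not the E2B SDK); it shows the guarantee the ephemeral model relies on, namely that teardown runs even if execution raises:

<code python>
class FakeSandbox:
    """Toy stand-in for an E2B sandbox, illustrating the ephemeral
    lifecycle: provision on entry, guaranteed teardown on exit."""

    def __init__(self):
        self.events = []

    def __enter__(self):
        self.events.append("create")      # step 1: provision the microVM
        return self

    def run_code(self, code):
        self.events.append("execute")     # steps 2-5: run code, touch files, etc.
        return f"ran: {code}"

    def __exit__(self, exc_type, exc, tb):
        self.events.append("teardown")    # step 6: destroy, even on error
        return False

with FakeSandbox() as sb:
    output = sb.run_code("print('hello')")
# After the block, the sandbox is gone; the event order is always
# create -> execute -> teardown.
</code>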
===== Key Features =====
* **Sub-200ms Boot Time** — Sandboxes boot in under 200ms, fast enough for interactive agent loops
* **Full Linux Environment** — Complete OS with filesystem, networking, and process management
* **Multi-Language Support** — Primary Python support via Jupyter; extensible to JavaScript, shell, and more
* **Secure Isolation** — Hardware-level isolation via Firecracker prevents guest-to-host escapes
* **Scalable** — Spin up hundreds of concurrent sandboxes for parallel agent workloads
* **Widely Adopted** — The JavaScript SDK alone sees 228K+ weekly npm downloads
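The scalability point above usually translates into fan-out code like the following: a thread pool drives many sandboxes concurrently, one per task, so tasks stay isolated from each other. A stub ''run_in_sandbox'' replaces the real SDK call here, since each real call would provision a microVM:

<code python>
from concurrent.futures import ThreadPoolExecutor

def run_in_sandbox(task: str) -> str:
    """Stub for 'create a sandbox, run code, tear it down'.
    In real use this would wrap an E2B sandbox per task."""
    return f"result of {task}"

tasks = [f"task-{i}" for i in range(8)]

# Fan out: each worker handles its own (stubbed) sandbox, so the
# tasks cannot interfere with one another's state.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_in_sandbox, tasks))
</code>

Because each sandbox is a separate microVM, this pattern scales to hundreds of concurrent workers without shared-state hazards on the agent side.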
===== Code Example =====
<code python>
from openai import OpenAI
from e2b_code_interpreter import Sandbox

# Create OpenAI client for code generation
client = OpenAI()

# Define the task
system_prompt = (
    "You are a data analyst. Write Python code to answer questions. "
    "Only output executable code, no explanations."
)
user_prompt = "Generate a bar chart of the top 5 programming languages by popularity in 2025."

# Get code from LLM
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)
code = response.choices[0].message.content

# Execute in secure E2B sandbox
with Sandbox() as sandbox:
    # Install dependencies
    sandbox.run_code("!pip install matplotlib")

    # Run the generated code
    execution = sandbox.run_code(code)
    print("Output:", execution.text)

    # Check for errors
    if execution.error:
        print(f"Error: {execution.error.name}: {execution.error.value}")

    # Access generated files (e.g., chart images)
    for artifact in execution.results:
        print(f"Generated: {artifact}")
</code>
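One practical caveat with the example above: despite the system prompt, models sometimes wrap their answer in Markdown fences, which would break direct execution. A small helper (an assumption of this article, not part of the E2B SDK) can normalize the model output before passing it to ''run_code'':

<code python>
def strip_code_fences(text: str) -> str:
    """Remove a leading/trailing ``` fence pair if the model wrapped
    its answer in Markdown despite instructions."""
    text = text.strip()
    if text.startswith("```"):
        lines = text.splitlines()
        # Drop the opening fence (with its optional language tag).
        lines = lines[1:]
        # Drop the closing fence if present.
        if lines and lines[-1].strip() == "```":
            lines = lines[:-1]
        text = "\n".join(lines)
    return text

cleaned = strip_code_fences("```python\nprint('hi')\n```")
</code>

Applied as ''code = strip_code_fences(code)'' before execution, this keeps already-clean output untouched.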
===== Architecture Diagram =====
<code>
graph TD
    A["AI Agent (your app)"] <--> B["LLM API (GPT-4o / Claude)"]
    A -->|E2B SDK| C["E2B Cloud Platform"]
    C --> D["Firecracker MicroVM"]
    D --> E["Jupyter Server"]
    E --> F["Python Kernel (code execution)"]
    D --> G["Isolated Filesystem"]
    D --> H["Resource Controls (CPU / RAM / Net)"]
</code>
===== References =====
* [[https://github.com/e2b-dev/e2b|E2B GitHub Repository]]
* [[https://e2b.dev/docs|E2B Documentation]]
* [[https://e2b.dev/|E2B Website]]
* [[https://e2b.dev/blog/build-ai-data-analyst-with-sandboxed-code-execution-using-typescript-and-gpt-4o|Build AI Data Analyst with E2B]]
===== See Also =====
* [[modal_compute|Modal]] — Serverless GPU compute for agent workloads
* [[browser_use|Browser-Use]] — AI browser automation
* [[composio|Composio]] — Tool integration platform for agents
* [[autogen_studio|AutoGen Studio]] — Visual multi-agent workflow builder