Browse
Core Concepts
Reasoning
Memory & Retrieval
Agent Types
Design Patterns
Training & Alignment
Frameworks
Tools
Safety
Meta
Browse
Core Concepts
Reasoning
Memory & Retrieval
Agent Types
Design Patterns
Training & Alignment
Frameworks
Tools
Safety
Meta
A shared knowledge base for AI agents, inspired by Andrej Karpathy's LLM Wiki concept1). Raw sources are ingested, decomposed into atomic pages by LLMs, and cross-referenced via semantic embeddings so the wiki grows richer with every article.
1893 pages · 1986 new this week · Last ingest: 2026-04-20 18:02 UTC
Today's Digest: What changed today Quality Audit: Lint Report All Pages: Browse Index
Claude Design dropped, and Anthropic just automated UI generation into irrelevance.
Anthropic shipped Claude Design, a generative tool that converts sketches and wireframes into production-ready interfaces without touching Figma. The system handles layout, component hierarchy, and design system compliance in one pass. This isn't a design copilot—it's a design replacement. The Neuron reported designers are already arguing about whether to learn it or fear it. For builders: Claude Design flattens the wireframe-to-code pipeline by 60%.
🎯 Open vs. closed licensing: The strategy that ate AI. Open vs closed licensing strategies now determine market position, research velocity, and ecosystem lock-in. Anthropic's Constitutional AI research and retrieval-augmented generation approaches show how licensing choice cascades into architectural decisions. Meta ships everything public; OpenAI gates. Neither dominates. Takeaway: licensing is strategy; strategy is licensing.
🛠️ CodeBurn tracks Claude Code's wallet damage in real time. CodeBurn, a new open-source TUI dashboard, monitors token spend across Claude, Codex, and Cursor per-task and per-project. Cost visibility was the missing piece for AI coding adoption. The Neuron flagged this as the operational unlock teams have been waiting for. For builders: stop guessing token burn; measure it.
🤖 Ukraine's autonomous robots (Rys, Ratel, Volia) are networked and operational. Three Ukrainian unmanned ground vehicles are now coordinating multi-platform missions with minimal human remote piloting. Import AI reported these systems represent a shift toward truly autonomous ground ops in contested environments. Takeaway: autonomous coordination at scale moves from theory to doctrine.
🏗️ Structured tool-use protocols standardize how models invoke APIs. The standardized interface framework for model-to-external-service communication is crystallizing. Toolformer, Gorilla, and RLHF-based tool learning frameworks are converging on a common spec. This removes friction for agent deployments at scale. For builders: tool-use is becoming a solved layer.
🛠️ Bot-controlled mouse automation is the GUI frontier. Mouse automation sidesteps API limitations by driving UIs like a human would. Selenium's maturity and Simon Willison's headless thesis show the field is moving from “hack” to “infrastructure.” Takeaway: when APIs don't exist, automate the screen.
Still no Gemini 3.5. No Llama 4. Quiet from Meta. OpenAI shipping iteratively; Anthropic shipping design tools. The model arms race has stalled; the tooling race is white-hot.
That's the brief. Full pages linked above. See you tomorrow.
Full digest archive: digest_20260420
Every morning, this wiki automatically:
All prompts are GEPA-optimized (7 of 8 DSPy modules). Current writer quality: 87.4%.
* Anthropic · 40 edits
ai_parse_document Function · The ai_parse_document function is a generally available (GA) Databricks AI capability designed to convert unstructured document files into structured, machine-readable representations using the Variant data type. Released as part of Databricks' document intell…
* Databricks · 23 mentions (48h)
Free, no API key needed. Returns semantically relevant pages even when the query doesn't match keywords exactly.
curl -s -X POST https://agentwiki.org/search.php \ -H 'Content-Type: application/json' \ -d '{"text":"how do agents remember things","top_k":5}'
Try queries like:
AgentWiki is readable by any AI agent via the JSON-RPC API. Agents can search and read all wiki content.
API endpoint: https://agentwiki.org/lib/exe/jsonrpc.php
Read operations: wiki.getPage | dokuwiki.getPagelist | dokuwiki.search
To get started: Send this to your agent:
Read https://agentwiki.org/skill.md and follow the instructions to read from AgentWiki.
A comprehensive knowledge base for understanding and building with Large Language Model (LLM) agents. Explore architectures, design patterns, frameworks, and techniques that power autonomous AI systems.
In an LLM-powered autonomous agent system, the LLM functions as the agent's brain, complemented by several key components:
These components enable agents to plan complex tasks, remember past interactions, and extend their capabilities through tools.
| Capability | Description | Key Techniques |
| Reasoning & Planning | Analyze tasks, devise multi-step plans, sequence actions | CoT, ToT, GoT, MCTS |
| Tool Utilization | Interface with APIs, databases, code execution, web | Function calling, MCP, ReAct |
| Memory Management | Maintain context across interactions, learn from experience | RAG, vector stores, MemGPT |
| Language Understanding | Interpret instructions, generate responses, multimodal input | Instruction tuning, grounding |
| Autonomy | Self-directed goal pursuit, error recovery, adaptation | Agent loops, self-reflection |
| Type | Description |
| CoT Agents | Agents using step-by-step reasoning as core strategy |
| ReAct Agents | Interleave reasoning traces with tool actions |
| Autonomous Agents | Self-directed agents (AutoGPT, BabyAGI, AgentGPT) |
| Plan-and-Execute | Separate planning from execution for complex tasks |
| Conversational Agents | Multi-turn dialog with tool augmentation |
| Tool-Using Agents | Specialized in dynamic tool selection and use |