Tool integration patterns define the standard approaches for connecting AI agents to external tools, services, and APIs1). These patterns address how tools are described to agents, how agents discover and select them, how invocations are structured, how results feed back into reasoning, and how errors are handled. Well-designed tool integration is essential for building agents that operate effectively in real-world environments.
The most widely adopted pattern is native function calling, in which the LLM provider exposes a built-in mechanism for structured tool invocation.
This pattern is implemented by OpenAI (function calling / tool use), Anthropic (tool use), Google (function calling), and most major providers2). It replaced earlier fragile approaches based on prompt engineering and regex parsing.
The following example shows how to define a tool registry, let the model select tools, and execute them:
Tool definition, selection, and execution pattern:

from openai import OpenAI
import json

client = OpenAI()

# Tool registry: map names to callable functions
def search_database(query: str, limit: int = 5) -> str:
    return json.dumps({"results": [f"Result for '{query}' #{i}" for i in range(limit)]})

TOOL_REGISTRY = {"search_database": search_database}

TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_database",
        "description": "Search the product database",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search query"},
                "limit": {"type": "integer", "description": "Max results"},
            },
            "required": ["query"],
        },
    },
}]

# Send request and execute any tool calls the model returns
messages = [{"role": "user", "content": "Find laptops under $500"}]
response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)

# Append the assistant's tool-call message once, then one tool result per call
assistant_message = response.choices[0].message
messages.append(assistant_message)
for tool_call in assistant_message.tool_calls or []:
    fn = TOOL_REGISTRY[tool_call.function.name]
    result = fn(**json.loads(tool_call.function.arguments))
    messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": result})

# Get final response incorporating tool results
final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)
print(final.choices[0].message.content)
The Model Context Protocol (MCP) extends the function calling pattern into a full client-server protocol, standardizing how agents discover and invoke tools exposed by external servers.
MCP is particularly powerful for enterprise environments where tools span multiple services and need centralized management3). By 2025, it became the dominant standard for AI-tool connectivity. In practice, tool-using agents integrate with diverse enterprise systems such as Slack, GitHub, Linear, QuickBooks, Stripe, and Meta Ads to execute real workflows4).
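To make the client-server shape concrete, the sketch below mimics MCP's JSON-RPC `tools/list` (discovery) and `tools/call` (invocation) methods with an in-process handler. The tool name and handler are invented for illustration; a real server would use the official MCP SDK and communicate over stdio or HTTP.

```python
import json

# Hypothetical in-process "server" illustrating the shape of MCP's
# JSON-RPC methods; not the real SDK.
TOOLS = {
    "get_weather": {
        "description": "Get current weather for a city",
        "inputSchema": {"type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"]},
        "handler": lambda args: f"Sunny in {args['city']}",
    },
}

def handle_request(raw: str) -> str:
    req = json.loads(raw)
    if req["method"] == "tools/list":          # client discovers available tools
        result = {"tools": [{"name": n, "description": t["description"],
                             "inputSchema": t["inputSchema"]}
                            for n, t in TOOLS.items()]}
    elif req["method"] == "tools/call":        # client invokes one tool
        tool = TOOLS[req["params"]["name"]]
        text = tool["handler"](req["params"]["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "Method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

listing = handle_request(json.dumps(
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}))
call = handle_request(json.dumps(
    {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
     "params": {"name": "get_weather", "arguments": {"city": "Oslo"}}}))
print(listing)
print(call)
```

The key design point is that the agent never links against tool code directly: it only sees the advertised schemas, which is what allows tools to be managed centrally.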
Plugin systems treat tools as modular, dynamically loadable components.
Plugin architectures enable extensibility without modifying the core agent, supporting marketplace-style tool distribution.
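A minimal sketch of this idea: tools self-register into a shared registry via a decorator, so a new tool can ship as a separate module that the agent loads at runtime without any change to its core. The registry and tool names here are illustrative, not from any particular framework.

```python
import importlib
from typing import Callable, Dict

# Hypothetical plugin registry shared between the agent core and plugins
PLUGINS: Dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator: register a function as a named tool."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        PLUGINS[name] = fn
        return fn
    return register

@tool("echo")
def echo(text: str) -> str:
    return text

def load_plugin(module_name: str) -> None:
    # Importing a plugin module runs its @tool decorators,
    # adding its tools to the shared registry.
    importlib.import_module(module_name)

print(sorted(PLUGINS))
print(PLUGINS["echo"]("hi"))
```

Because registration is a side effect of import, a marketplace-style distribution model reduces to publishing importable modules.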
The ReAct (Reasoning + Acting) pattern interleaves reasoning and tool use in an iterative loop.
ReAct grounds agent responses in real-world data and is the foundation for most modern agent frameworks. It naturally supports multi-step tool use, error recovery, and adaptive planning. See ReAct Prompting for the original paper by Yao et al., 2022.
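A toy version of the loop, using a scripted stand-in for the model so it runs offline. The Thought/Action/Observation format follows Yao et al.; the tool, city, and population figure are invented for illustration.

```python
import re

def lookup_population(city: str) -> str:
    data = {"Oslo": "about 700,000"}          # stub tool with canned data
    return data.get(city, "unknown")

TOOLS = {"lookup_population": lookup_population}

# Scripted model responses standing in for real LLM calls
SCRIPTED_MODEL = iter([
    "Thought: I need the population of Oslo.\n"
    "Action: lookup_population[Oslo]",
    "Thought: I have the answer.\n"
    "Final Answer: Oslo has about 700,000 residents.",
])

def model(prompt: str) -> str:
    return next(SCRIPTED_MODEL)

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = model(transcript)              # reason
        transcript += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        m = re.search(r"Action: (\w+)\[(.*?)\]", step)
        if m:                                 # act, then feed observation back
            obs = TOOLS[m.group(1)](m.group(2))
            transcript += f"\nObservation: {obs}"
    return "no answer"

answer = react("What is the population of Oslo?")
print(answer)
```

The observation is appended to the transcript before the next model call, which is what lets the agent revise its plan based on real tool output.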
Tool selection determines how agents choose which tool to use for a given request.
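One simple heuristic, shown below as a sketch with invented tool names, is to score each tool's description by word overlap with the request and pick the best match; production agents more commonly delegate selection to the model itself or use embedding similarity over tool descriptions.

```python
# Hypothetical catalog: tool name -> natural-language description
TOOLS = {
    "search_flights": "Search for airline flights between two cities",
    "book_hotel": "Reserve a hotel room for given dates",
    "convert_currency": "Convert an amount between two currencies",
}

def select_tool(request: str) -> str:
    """Pick the tool whose description shares the most words with the request."""
    words = set(request.lower().split())
    def score(item):
        name, description = item
        return len(words & set(description.lower().split()))
    return max(TOOLS.items(), key=score)[0]

choice = select_tool("Find flights from Oslo to Berlin")
print(choice)
```

Lexical overlap breaks down on paraphrases ("plane tickets" vs. "flights"), which is why embedding-based retrieval over tool descriptions is the usual upgrade path.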
Robust tool integration requires systematic error handling, so that unknown tools, malformed arguments, and transient failures are surfaced to the agent rather than crashing the loop.
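These concerns can be sketched as a defensive execution wrapper: validate the tool name, parse arguments safely, retry transient failures with exponential backoff, and return errors as structured text the model can reason about. The function and tool names are illustrative.

```python
import json
import time

def execute_tool(registry, name, raw_args, retries: int = 2) -> str:
    """Run a tool defensively; always return a string result for the model."""
    if name not in registry:
        return json.dumps({"error": f"Unknown tool: {name}"})
    try:
        args = json.loads(raw_args)
    except json.JSONDecodeError as exc:
        return json.dumps({"error": f"Malformed arguments: {exc}"})
    for attempt in range(retries + 1):
        try:
            return registry[name](**args)
        except Exception as exc:
            if attempt == retries:            # retries exhausted: report, don't raise
                return json.dumps({"error": f"{name} failed: {exc}"})
            time.sleep(2 ** attempt * 0.01)   # exponential backoff between attempts

def flaky(query: str) -> str:
    raise TimeoutError("upstream timeout")    # simulated failing tool

print(execute_tool({"flaky": flaky}, "flaky", '{"query": "x"}'))
print(execute_tool({}, "missing", "{}"))
```

Returning the error as a tool result, instead of raising, lets the agent decide whether to retry with different arguments, pick another tool, or tell the user what went wrong.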