Tool use is a critical capability for Large Language Model (LLM) agents, enabling them to interact with external systems, access up-to-date information, and perform actions beyond their inherent knowledge1). This functionality allows LLMs to handle complex tasks that require real-time data retrieval or specific operations. Tool use represents a fundamental paradigm shift from passive LLM processing to active decision-making: agents determine which actions to take rather than simply processing inputs. Key related concepts include agent autonomy, tool calling, function calling, memory management, and planning2).
```python
import json
from openai import OpenAI

client = OpenAI()

# Define tools with JSON Schema for the model
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}]

# Tool implementation registry
def get_weather(location: str, unit: str = "celsius") -> str:
    # In production, call a real weather API here
    return json.dumps({"location": location, "temp": 22, "unit": unit, "condition": "sunny"})

TOOL_REGISTRY = {"get_weather": get_weather}

def run_with_tools(user_message: str) -> str:
    """Complete tool-use loop: send message, execute tools, return final response."""
    messages = [{"role": "user", "content": user_message}]
    response = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=tools
    )
    msg = response.choices[0].message
    # If the model wants to call tools, execute them
    while msg.tool_calls:
        messages.append(msg)
        for call in msg.tool_calls:
            func = TOOL_REGISTRY[call.function.name]
            args = json.loads(call.function.arguments)
            result = func(**args)
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
        response = client.chat.completions.create(
            model="gpt-4o", messages=messages, tools=tools
        )
        msg = response.choices[0].message
    return msg.content

print(run_with_tools("What is the weather in Paris?"))
```
The emergence of models specifically designed for agentic and tool-heavy tasks represents a significant advancement in agent capabilities. MiniMax M2.5 and M2.7 models have gained prominence for their effectiveness in local agent development, providing reliable tool integration and function execution without reliance on external cloud services6). These models are preferred by developers building self-contained agent systems that prioritize tool reliability and local inference.
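Local inference servers generally expose an OpenAI-compatible API, so the same tool-calling loop can be retargeted simply by changing the client's base URL. The sketch below is illustrative only: the endpoint URL, port, and model name are placeholders, not defaults of any particular server.

```python
from openai import OpenAI

# Sketch: point the client at a local OpenAI-compatible server
# (e.g. vLLM, llama.cpp server, LM Studio). The URL, port, and
# model name below are placeholders -- adjust for your setup.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # whatever model name the local server exposes
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=tools,          # the same tool schemas defined earlier
)
```

Because only the client configuration changes, the tool registry and execution loop shown above carry over unchanged between cloud and local backends.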
Harness engineering represents a technical shift where performance gains are derived from the surrounding infrastructure and toolsets rather than solely from model weights7). This approach emphasizes building complex agentic environments with extensive tool ecosystems. Organizations like Meta have demonstrated this principle by constructing systems with 16 hidden tools, while projects like OpenClaw showcase the value of extensive codebases supporting LLM agents. This paradigm suggests that future performance improvements will increasingly come from intelligent tool orchestration and environmental design rather than model scaling alone.
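One concrete piece of harness engineering is keeping tool definitions in the environment rather than hand-writing JSON schemas: the harness derives each tool's schema from its implementation and dispatches the model's calls back to it. The sketch below is a minimal illustration of that idea, assuming a registry structure of our own design (the `Harness` class and `tool_schema` helper are hypothetical, not from any specific framework).

```python
import inspect
import json

# Map Python annotations to JSON Schema types for schema generation
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(func):
    """Build an OpenAI-style function schema from a function's signature."""
    sig = inspect.signature(func)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> required parameter
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": (func.__doc__ or "").strip(),
            "parameters": {"type": "object", "properties": props, "required": required},
        },
    }

class Harness:
    """Registers tools, exposes their schemas, and dispatches model-issued calls."""
    def __init__(self):
        self.registry = {}

    def tool(self, func):
        self.registry[func.__name__] = func
        return func

    def schemas(self):
        return [tool_schema(f) for f in self.registry.values()]

    def dispatch(self, name: str, arguments: str) -> str:
        # Execute the named tool with the model's JSON arguments
        result = self.registry[name](**json.loads(arguments))
        return json.dumps(result)

harness = Harness()

@harness.tool
def get_weather(location: str, unit: str = "celsius") -> dict:
    """Get the current weather for a location."""
    return {"location": location, "temp": 22, "unit": unit}

print(harness.dispatch("get_weather", '{"location": "Paris"}'))
```

Because the schema is generated from the signature, adding a tool to the harness is a one-decorator change, which is what makes large tool ecosystems (dozens of tools) maintainable.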