Tool Calling Patterns
Generate function schemas from Pydantic models and dispatch tool calls.
I've seen AI assistants look up weather, run code, and search the web. How do they do that?
Tool calling. You give the LLM a list of functions it can invoke. When it decides a tool would help, it returns a structured request instead of plain text. Your code executes the tool and feeds the result back.
The pattern has three parts:
# 1. Define tools as Pydantic models
from pydantic import BaseModel

class WeatherArgs(BaseModel):
    city: str
    units: str = "celsius"

class CalculateArgs(BaseModel):
    expression: str
# 2. Build tool schemas for the LLM
def get_tool_schema(name: str, model: type[BaseModel]) -> dict:
    return {
        "type": "function",
        "function": {
            "name": name,
            "parameters": model.model_json_schema(),
        },
    }

tools = [
    get_tool_schema("get_weather", WeatherArgs),
    get_tool_schema("calculate", CalculateArgs),
]
The LLM sees the schema and knows exactly what arguments each tool expects.
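To see what the LLM actually receives, you can print a generated schema. This is a sketch that reuses the `WeatherArgs` and `get_tool_schema` definitions from above; note how Pydantic marks `city` as required (it has no default) while `units` carries its default value:

```python
from pydantic import BaseModel

class WeatherArgs(BaseModel):
    city: str
    units: str = "celsius"

def get_tool_schema(name: str, model: type[BaseModel]) -> dict:
    return {
        "type": "function",
        "function": {"name": name, "parameters": model.model_json_schema()},
    }

schema = get_tool_schema("get_weather", WeatherArgs)
params = schema["function"]["parameters"]

# "city" has no default, so it is the only required field
print(params["required"])                        # ['city']
print(params["properties"]["units"]["default"])  # celsius
```

The required/optional distinction comes straight from the model definition, so the schema and your validation logic can never drift apart.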
What does the LLM actually return when it wants to call a tool?
A structured object — not text. Something like:
tool_call = {
"name": "get_weather",
"arguments": {"city": "Tokyo", "units": "celsius"}
}
Your code then dispatches it to the right function:
# 3. Dispatch validated calls to your own functions
#    (get_weather and calculate are your implementations)
def dispatch_tool(tool_call: dict) -> str:
    name = tool_call["name"]
    args = tool_call["arguments"]
    if name == "get_weather":
        validated = WeatherArgs(**args)  # raises ValidationError on bad args
        return get_weather(validated.city, validated.units)
    elif name == "calculate":
        validated = CalculateArgs(**args)
        return calculate(validated.expression)
    else:
        return f"Unknown tool: {name}"
Notice the Pydantic validation step — WeatherArgs(**args) ensures the LLM sent valid arguments before you execute anything.
Why validate the arguments if the LLM saw the schema?
Because LLMs make mistakes. They might hallucinate a wrong type, miss a required field, or add extra parameters. Pydantic catches wrong types and missing fields before your code runs, and if you configure your models with extra="forbid", it rejects unexpected parameters too (by default, extra fields are silently ignored). Think of it as a safety net — the schema guides the LLM, and validation catches anything it gets wrong.
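Here is a sketch of that safety net in action. One caveat worth knowing: Pydantic's default is to ignore extra fields, so this version of WeatherArgs adds extra="forbid" to reject them:

```python
from pydantic import BaseModel, ConfigDict, ValidationError

class WeatherArgs(BaseModel):
    model_config = ConfigDict(extra="forbid")  # reject unexpected parameters
    city: str
    units: str = "celsius"

# Missing the required "city" field — caught before any tool runs
try:
    WeatherArgs(**{"units": "celsius"})
except ValidationError as e:
    print(f"caught {e.error_count()} error")  # caught 1 error

# A hallucinated extra parameter is rejected too (thanks to extra="forbid")
try:
    WeatherArgs(**{"city": "Tokyo", "country": "Japan"})
except ValidationError:
    print("extra field rejected")
```

Either failure raises before get_weather ever runs, which is exactly where you want bad arguments to stop.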
What happens after the tool runs?
You send the tool's result back to the LLM so it can continue:
# The conversation flow:
# 1. User: "What's the weather in Tokyo?"
# 2. LLM: tool_call("get_weather", {"city": "Tokyo"})
# 3. Your code: result = dispatch_tool(tool_call)
# 4. Send result back to LLM
# 5. LLM: "The temperature in Tokyo is 22°C"
The LLM uses the tool result to generate its final answer. It doesn't call the API itself — your code is the middleman.
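The whole round trip can be sketched without calling a real API, using a stubbed tool-call object in place of an actual LLM response. The get_weather function here is a hypothetical stand-in for a real weather lookup:

```python
from pydantic import BaseModel

class WeatherArgs(BaseModel):
    city: str
    units: str = "celsius"

def get_weather(city: str, units: str) -> str:
    # Stand-in for a real weather API call
    return f"22 degrees {units} in {city}"

def dispatch_tool(tool_call: dict) -> str:
    if tool_call["name"] == "get_weather":
        validated = WeatherArgs(**tool_call["arguments"])
        return get_weather(validated.city, validated.units)
    return f"Unknown tool: {tool_call['name']}"

# Step 2: the LLM returns a structured request (stubbed here)...
tool_call = {"name": "get_weather", "arguments": {"city": "Tokyo"}}

# Step 3: ...your code validates and executes it...
result = dispatch_tool(tool_call)
print(result)  # 22 degrees celsius in Tokyo

# Step 4: append result to the conversation and send it back to the LLM,
# which then produces the final natural-language answer (step 5)
```

Note that "units" was omitted from the arguments and the model's default filled it in, which is one more thing the Pydantic layer handles for free.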