research_and_extract asked the model to search and parse — but the model still couldn't run code. What if you want it to do arithmetic reliably instead of hallucinating a calculation?
I'd wire in a Python function somehow? But I'm not sure how the agent knows when to call it. Do I have to detect "math question" myself and route it?
No routing code needed. You register the function directly on the agent with @agent.tool_plain and attach a docstring — the model reads that docstring and decides when to call the tool:
```python
from pydantic_ai import Agent

def agent_with_tool(prompt: str) -> str:
    agent = Agent(model)

    @agent.tool_plain
    def add(a: int, b: int) -> int:
        """Add two integers."""
        return a + b

    result = agent.run_sync(prompt)
    return result.output
```

So add never appears in my prompt — the model infers from the docstring that "What is 3 + 4?" maps to add(3, 4). It calls the function, gets 7, and wraps an answer around that?
Precisely. The tool call happens inside the agent's reasoning loop. The model emits a tool-call event, PydanticAI intercepts it, runs add(3, 4), injects 7 back into context, and the model continues to produce its final response string. You never see the intermediate step unless you inspect the run history.
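That intercept-run-inject loop can be sketched in plain Python. This is a stdlib simulation of the shape of the loop, not PydanticAI's real internals: `fake_model` is a stub standing in for the LLM, hard-coded to request add(3, 4) once and then answer.

```python
def fake_model(messages):
    """Stand-in for the LLM: first requests a tool call, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "name": "add", "args": {"a": 3, "b": 4}}
    result = next(m["content"] for m in messages if m["role"] == "tool")
    return {"type": "text", "content": f"The answer is {result}."}

def run_agent(prompt, tools):
    """The reasoning loop: intercept tool calls, run them, inject results."""
    messages = [{"role": "user", "content": prompt}]
    while True:
        event = fake_model(messages)
        if event["type"] == "tool_call":
            # The runtime runs real Python and puts the return value
            # back into the conversation context.
            value = tools[event["name"]](**event["args"])
            messages.append({"role": "tool", "content": value})
        else:
            return event["content"]

answer = run_agent("What is 3 + 4?", {"add": lambda a, b: a + b})
# answer == "The answer is 7."
```

The `messages` list is the "run history" mentioned above: the intermediate tool call and its result live there, even though only the final string is returned.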
The agent is picking the tool — I just defined it. That's a completely different mental model from writing if "add" in prompt. I hand it a function and it decides.
And that scales. One tool, two tools, ten tools — the agent always picks based on docstrings. The docstring is the contract: be precise about what the function does and what types it expects. Vague docstrings produce wrong calls. The minimal call site is two lines once the tool is registered:
```python
result = agent.run_sync(prompt)
return result.output
```

So the quality of my tool is half docstring, half logic. A bad description and the model ignores the tool or misuses it — even if the Python is correct.
Exactly. Write the docstring for the model, not for yourself. "Add two integers" is clear. "Does the thing" is useless. Ship the docstring first, then the body.
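Why the docstring carries so much weight: the model never sees the function body, only a schema derived from the name, signature, and docstring. Here is a stdlib sketch of that derivation — PydanticAI builds the real schema with Pydantic, so this only illustrates the idea:

```python
import inspect

def tool_schema(fn):
    """Build the tool description a model would see: name, docstring, params."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            name: p.annotation.__name__ for name, p in sig.parameters.items()
        },
    }

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

schema = tool_schema(add)
# schema["description"] is "Add two integers." — this string is the model's
# only guide to when the tool applies, which is why vague docstrings fail.
```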
@agent.tool_plain — Register a Function as an Agent Tool

Decorate any function with @agent.tool_plain immediately after creating the agent. The docstring tells the model when and how to call it:
```python
agent = Agent(model)

@agent.tool_plain
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

result = agent.run_sync(prompt)
return result.output
```

How it works: when the model decides a tool call is needed, PydanticAI intercepts it, runs add(a, b) in Python, injects the return value back into context, and the model produces its final string.
Docstring quality matters. A precise docstring ("Add two integers") produces reliable calls; a vague one produces missed or wrong calls.
tool_plain vs tool: tool_plain receives only the function arguments — right for pure functions. Use tool when the function needs agent context.
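The plain-vs-context distinction can be pictured as a dispatch rule: pure tools get only the model's arguments, context tools also receive a per-run state object first. The names below (`RunCtx`, `call_tool`) are hypothetical, a stdlib sketch of the idea rather than PydanticAI's API:

```python
from dataclasses import dataclass

@dataclass
class RunCtx:
    user_id: str  # example of per-run state a context-aware tool might need

def call_tool(fn, args, ctx, needs_ctx):
    """Dispatch: context tools get ctx as their first argument, pure tools don't."""
    return fn(ctx, **args) if needs_ctx else fn(**args)

def add(a: int, b: int) -> int:            # pure function: tool_plain territory
    return a + b

def greet(ctx: RunCtx, name: str) -> str:  # needs run state: tool territory
    return f"Hello {name}, from user {ctx.user_id}"

ctx = RunCtx(user_id="u42")
plain_result = call_tool(add, {"a": 3, "b": 4}, ctx, needs_ctx=False)    # 7
ctx_result = call_tool(greet, {"name": "Ada"}, ctx, needs_ctx=True)
```

The rule of thumb survives the sketch: if the function is pure, keep it plain; reach for the context-taking form only when the tool must read run state.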