Ask an LLM to add 142 and 358. What do you think it returns, and why should that worry you?
It might say 500, 499, 501 — it is pattern-matching on what "looks right," not actually doing the math?
Exactly. Language models predict tokens probabilistically; they are not calculators. For precise computation you hand the agent a real Python function as a tool, and the agent calls that function whenever exact arithmetic is needed. The decorator that wires it up is @agent.tool_plain:
```python
agent = Agent(model)

@agent.tool_plain
def add(a: int, b: int) -> int:
    return a + b

result = agent.run_sync("What is 17 + 25?").output
```
The decorator registers add on that specific agent. When the model decides addition is needed, PydanticAI executes the function and feeds the result back into the conversation.
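To see what "registers add on that specific agent" means mechanically, here is a toy sketch of the decorator pattern itself; ToyAgent is a made-up stand-in, not PydanticAI's Agent, and a real agent would let the model decide when to invoke the stored function:

```python
# Toy sketch of the registration mechanics (not PydanticAI itself):
# a decorator method that stores the function on one specific instance.
class ToyAgent:
    def __init__(self):
        self.tools = {}

    def tool_plain(self, fn):
        self.tools[fn.__name__] = fn  # bind the tool to *this* agent
        return fn

agent = ToyAgent()

@agent.tool_plain
def add(a: int, b: int) -> int:
    return a + b

# In PydanticAI the model triggers the call; here we invoke it directly.
print(agent.tools["add"](17, 25))  # 42
```

Because tool_plain is a method on the instance, a second ToyAgent would start with an empty tools dict, which is exactly why the decorator must come after the agent is created.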
The function runs in Python, not inside the LLM? And the model decides when to call it?
Exactly the split. The model handles language and intent — reading "17 + 25" and recognizing arithmetic. Python handles the precision — the function returns 42 deterministically, every time. The two pieces each do what they are best at:
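The "Python handles the precision" half of that split is easy to demonstrate: unlike probabilistic token sampling, a plain function gives one exact answer no matter how many times it runs.

```python
def add(a: int, b: int) -> int:
    return a + b

# Deterministic: a thousand calls with the same inputs
# collapse to a single result.
results = {add(17, 25) for _ in range(1000)}
print(results)  # {42}
```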
```python
def agent_with_calculator(prompt: str) -> str:
    agent = Agent(model)

    @agent.tool_plain
    def add(a: int, b: int) -> int:
        return a + b

    return agent.run_sync(prompt).output
```
Order matters: create the agent first, then decorate. The decorator attaches the function to that instance.
What if a prompt doesn't need the tool — like "What is the capital of France?" Does the agent still try to call add?
No. The agent reads the prompt, decides whether any registered tool applies, and only calls the one it needs. For a non-arithmetic question it answers directly from its own knowledge. Tools are capabilities the agent may use — not steps it must run.
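That "may use, not must run" behavior can be sketched with a toy router; the regex below is a crude stand-in for the model's judgment (a real agent makes this decision from language understanding, not pattern matching), and the answer function is hypothetical:

```python
import re

def add(a: int, b: int) -> int:
    return a + b

def answer(prompt: str) -> str:
    # Stand-in for the model deciding whether a registered tool applies.
    m = re.search(r"(\d+)\s*\+\s*(\d+)", prompt)
    if m:
        # Tool applies: delegate exact arithmetic to Python.
        return str(add(int(m.group(1)), int(m.group(2))))
    # No tool applies: answer "from the model's own knowledge".
    return "answered directly, no tool call"

print(answer("What is 17 + 25?"))               # 42
print(answer("What is the capital of France?"))  # answered directly, no tool call
```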
So this is the same idea as search being built into Perplexity — but now I decide what Python capability the agent can reach for?
Exactly. @agent.tool_plain lets you extend any agent with exact, deterministic Python. Arithmetic today; database lookups, date math, and file reads in real systems. Write agent_with_calculator(prompt) now: create the agent, register add, return the response.
TL;DR: register a Python function; the agent calls it when it needs exact computation.
- agent = Agent(model) — create the agent first
- @agent.tool_plain — the decorator attaches the function to that instance

| Job | Runs in |
|---|---|
| Reading language, intent | Model |
| Exact arithmetic, lookups | Python tool |
Decorator order: always agent = ... before @agent.tool_plain — the decorator binds to that specific instance.