With one registered tool, the agent had one decision to make — call it or not. What do you think happens when the agent has two tools to pick from?
It would need to choose which tool matches the question? Like, read the prompt and pick the right function?
Exactly. That choice is the agent's job, not yours. You register both tools with @agent.tool_plain and the model decides which one fits each prompt. "What is 6 times 7" triggers multiply; "Add 54 and 46" triggers add. No branching logic in your code:
```python
from pydantic_ai import Agent

agent = Agent(model)  # model configured earlier, e.g. an OpenAI model name

@agent.tool_plain
def add(a: int, b: int) -> int:
    return a + b

@agent.tool_plain
def multiply(a: int, b: int) -> int:
    return a * b
```
The agent figures out from the prompt whether to call add or multiply? No `if "multiply" in prompt` from me?
No branching. The model reads the prompt, sees both tool names and their signatures, and picks the one whose semantics match. "Product of," "times," and "multiplied by" all route to multiply. The model understands intent — not just keywords:
```python
def agent_two_tools(prompt: str) -> str:
    agent = Agent(model)

    @agent.tool_plain
    def add(a: int, b: int) -> int:
        return a + b

    @agent.tool_plain
    def multiply(a: int, b: int) -> int:
        return a * b

    return agent.run_sync(prompt).output
```
What if the prompt is ambiguous? "Combine 3 and 4" — does the agent still pick one?
The model makes a judgment call — usually add for "combine." When you need tighter control, add a docstring to the tool function. PydanticAI passes docstrings to the model as part of the tool description, so """Use for multiplication or 'times' questions.""" sharpens the model's choice between overlapping tools.
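To see why the docstring helps, it's worth looking at what the model actually receives for each tool. A minimal stdlib sketch of that metadata — the name, typed parameters, and docstring-as-description. This is an illustration of the idea, not PydanticAI's actual schema format, and the docstring wordings here are made up:

```python
import inspect

def add(a: int, b: int) -> int:
    """Use for sums, totals, or 'combine' questions."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Use for products or 'times' questions."""
    return a * b

def tool_description(fn) -> dict:
    """Rough sketch of the metadata a framework sends to the model
    for each registered tool: name, typed parameters, and the
    docstring as the tool's description."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "parameters": {name: param.annotation.__name__
                       for name, param in sig.parameters.items()},
        "description": inspect.getdoc(fn),
    }

print(tool_description(multiply))
```

With only names, the model has `multiply(a, b)` to go on; with the docstring, an ambiguous prompt like "combine 3 and 4" has an explicit description to match against.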
So real agents — a search tool, an email tool, a calendar tool — work exactly this way, agents picking the right capability per request?
That is the core of agentic behavior — reasoning about which capability to use, in what order. Same pattern, bigger toolkit. Write agent_two_tools(prompt) now: register add and multiply, return the agent's response.
TL;DR: register every capability; the agent picks the right one per prompt.
@agent.tool_plain decorators — one per capability

| Only names | Names + docstrings |
|---|---|
| multiply(a, b) | """Use for products or 'times'.""" |
| Model guesses intent | Model matches description |
Toolkits scale linearly — add one decorator and the agent considers one more capability.
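"No branching in your code" still resolves to a dispatch at runtime — it's just performed by the framework from the model's choice, not by your keyword checks. A hypothetical stdlib sketch of that step (not PydanticAI internals): the model returns a tool name plus JSON arguments, and the runtime looks the tool up in a registry.

```python
def add(a: int, b: int) -> int:
    return a + b

def multiply(a: int, b: int) -> int:
    return a * b

# The framework keeps every registered tool in a registry;
# one new decorator means one new entry the model can consider.
TOOLS = {fn.__name__: fn for fn in (add, multiply)}

def execute_tool_call(name: str, args: dict) -> int:
    """Run the tool the model selected: look up its name, apply its args."""
    return TOOLS[name](**args)

# Pretend the model answered "What is 6 times 7" with this call:
print(execute_tool_call("multiply", {"a": 6, "b": 7}))  # 42
```

Adding a third tool is one more registry entry — which is exactly why toolkits scale linearly.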