Yesterday: 2 tools, sequential. Today: 3 tools, the agent picks the right one for each task.
from pydantic_ai import Agent

agent = Agent(model)  # `model` was configured earlier in the lesson

@agent.tool_plain
def add(a: int, b: int) -> int:
    """Return the sum of two integers."""
    return a + b

@agent.tool_plain
def multiply(a: int, b: int) -> int:
    """Return the product of two integers."""
    return a * b

@agent.tool_plain
def subtract(a: int, b: int) -> int:
    """Return a minus b."""
    return a - b

tasks = [
    ("What is 9 plus 4?", "13"),
    ("What is 6 times 5?", "30"),
    ("What is 20 minus 7?", "13"),
]

for prompt, expected in tasks:
    out = agent.run_sync(prompt).output
    print(f"{prompt} → {out.strip()} (expected {expected})")

Same agent, same toolset. The model reads each prompt and picks add, multiply, or subtract based on the wording.
Right. The agent doesn't know in advance which tool fits — it reads the docstrings, matches them to the prompt's intent, and dispatches. Good docstrings → good routing.
What if the model picks the wrong tool?
It happens. Two recoveries: (1) sharper docstrings ("Return the difference of two integers (a minus b)" is clearer than "subtract"); (2) sharper prompts ("What is 20 minus 7? Use the subtract tool." forces the choice). For lesson tasks we expect the model to pick correctly on these unambiguous prompts. Real-world tool routing needs the eval suite from L19 to catch the cases where it doesn't.
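One way to catch misrouting automatically is to check which tool the model actually called, not just the final answer. A minimal sketch, assuming pydantic_ai's message API (all_messages() and ToolCallPart, available in recent versions):

from pydantic_ai.messages import ModelResponse, ToolCallPart

result = agent.run_sync("What is 20 minus 7?")
# Collect the name of every tool the model called during the run.
called = [
    part.tool_name
    for message in result.all_messages()
    if isinstance(message, ModelResponse)
    for part in message.parts
    if isinstance(part, ToolCallPart)
]
assert "subtract" in called, f"expected subtract, model called {called}"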
The agent now has a menu of tools. For each new prompt, the model reads the wording, matches it against the available tools, calls the best fit, and turns the result into an answer.
The model uses three things to pick:
1. The tool's name: multiply is a stronger match for "multiply" than mul is.
2. The tool's docstring, which is sent to the model as the tool's description.
3. The parameter schema: argument names and types.
All three matter. The single biggest win is a clear, unambiguous docstring.
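Concretely, here is roughly what the model sees for subtract. This is a sketch in the OpenAI-style tool-spec shape; the exact wire format depends on the provider, but name, description (from the docstring), and parameter schema are always present:

# Sketch of the tool spec the model receives (shape illustrative):
subtract_spec = {
    "name": "subtract",                  # the function name
    "description": "Return a minus b.",  # pulled from the docstring
    "parameters": {                      # built from the type hints
        "type": "object",
        "properties": {
            "a": {"type": "integer"},
            "b": {"type": "integer"},
        },
        "required": ["a", "b"],
    },
}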
# Bad: ambiguous, vague
@agent.tool_plain
def sub(a, b):
    return a - b

# Good: specific, unambiguous
@agent.tool_plain
def subtract(a: int, b: int) -> int:
    """Return a minus b. The first argument is the minuend, the second is the subtrahend."""
    return a - b

The second version tells the model exactly which arg is which. That is critical for non-commutative operations like subtraction and division.
The classic failure is swapped arguments: the model calls subtract with (7, 20) instead of (20, 7). Mitigation: make the docstring explicit about positions, or use keyword-only parameters with self-describing names.
Each task costs three LLM calls (request, tool call, final answer). Three tasks today = ~9 quota slots. Budget accordingly.
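A minimal sketch of that mitigation. The bare * makes the parameters keyword-only, and the descriptive names put the "which value goes where" signal directly into the schema the model sees. The name subtract_checked and the parameter names are illustrative, not from the lesson:

# Keyword-only, self-describing parameters: tool-call arguments always
# bind by name, so the names themselves document the order.
@agent.tool_plain
def subtract_checked(*, minuend: int, subtrahend: int) -> int:
    """Return minuend - subtrahend."""
    return minuend - subtrahend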
Three small math prompts, one for each tool. Verification asserts each task's expected answer appears somewhere in the output.
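A sketch of that verification, reusing the tasks list and agent from above. The substring check is deliberately loose, since model phrasing varies:

for prompt, expected in tasks:
    out = agent.run_sync(prompt).output
    # Loose check: the expected answer only has to appear in the text.
    assert expected in out, f"{prompt!r}: expected {expected}, got {out!r}"
    print(f"PASS: {prompt} → {out.strip()}")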