Schema yesterday, real call today. The smallest tool-call cycle: prompt → model picks tool → Python executes → model finishes.
from pydantic_ai import Agent

agent = Agent(model)  # `model`: whichever model you already configured

@agent.tool_plain
def add(a: int, b: int) -> int:
    """Return the sum of two integers."""
    return a + b

result = agent.run_sync("What is 7 plus 8?")
print(result.output)
That's it? I expected more wiring.
Pydantic-AI handles the round trips for you. Under the hood: send prompt + tool schemas → model responds with tool-call request → library executes add(7, 8) → result fed back to model → model produces final text. From your side, one run_sync call. The library hid two model calls and one Python call inside it.
What does result.output actually contain?
A natural-language answer like "7 plus 8 equals 15." — the model's wording, with 15 from the tool. If you want just the integer, you parse it out with regex or use output_type= (week 4). Today: assert the string contains "15".
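To make that check concrete, a minimal sketch, assuming the agent and tool defined above are already in scope:

result = agent.run_sync("What is 7 plus 8?")
assert "15" in result.output   # the model's wording varies; the digits should not
print(result.output)           # e.g. "7 plus 8 equals 15."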
user prompt + tool schema
↓
LLM call #1
↓
tool-call signal: add(a=7, b=8)
↓
Python: add(7, 8) → 15
↓
LLM call #2 (with tool result)
↓
"7 plus 8 equals 15."
Two LLM calls per tool-using turn, not one. That's why tool-calling lessons cost 2-3 quota slots each — each run_sync here makes the model talk to itself once via your tool.
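If you want to see the hidden round trip for yourself, the run result keeps the full message history. A rough sketch; the exact part class names can vary between Pydantic-AI versions:

# Peek at the conversation Pydantic-AI assembled during the run.
result = agent.run_sync("What is 7 plus 8?")
for message in result.all_messages():
    for part in message.parts:
        print(type(part).__name__)
# Expect roughly: UserPromptPart, ToolCallPart, ToolReturnPart, TextPart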
@agent.tool_plain vs @agent.tool
@agent.tool_plain — the function takes only the arguments the model passes. No agent context.
@agent.tool — the function gets a RunContext first argument, with access to any dependencies you've configured.
For the week-1 lessons, every tool is @agent.tool_plain: the cleanest, most minimal demonstration.
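For contrast, a sketch of what an @agent.tool could look like. The UserDeps dataclass and greet tool here are made-up illustrations, not part of this lesson's exercise:

from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

@dataclass
class UserDeps:            # hypothetical dependency container
    user_name: str

ctx_agent = Agent(model, deps_type=UserDeps)

@ctx_agent.tool
def greet(ctx: RunContext[UserDeps]) -> str:
    """Greet the current user by name."""
    return f"Hello, {ctx.deps.user_name}!"

result = ctx_agent.run_sync("Greet me.", deps=UserDeps(user_name="Ada"))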
From your function:
@agent.tool_plain
def add(a: int, b: int) -> int:
    """Return the sum of two integers."""
    return a + b

Pydantic-AI auto-builds the schema:
{
  "name": "add",
  "description": "Return the sum of two integers.",
  "parameters": {
    "type": "object",
    "properties": {
      "a": {"type": "integer"},
      "b": {"type": "integer"}
    },
    "required": ["a", "b"]
  }
}
Docstring → description. Type hints → property types. The schema you built by hand yesterday is what the decorator generates.
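The same mapping extends to richer signatures. A hedged sketch (exact schema details depend on your Pydantic-AI version): a parameter with a default typically drops out of "required", and per-argument descriptions can be picked up from a structured docstring:

@agent.tool_plain
def power(base: int, exponent: int = 2) -> int:
    """Raise a number to a power.

    Args:
        base: the number to raise.
        exponent: the power to raise it to (defaults to squaring).
    """
    return base ** exponent

# Expected schema, roughly: both properties typed "integer",
# "required" listing only "base", and the Args lines feeding
# per-property descriptions.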
A tool call is at least 2 LLM calls under the hood. The model might also call the tool more than once if it wants to verify (e.g., add(7, 8), then a check via add(8, 7)). Plan for 2-4 quota slots per tool-using prompt this week.
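If quota is tight, you can also cap the hidden round trips. A sketch assuming Pydantic-AI's usage-limits support (argument names may differ slightly across versions):

from pydantic_ai.usage import UsageLimits

# Hard-cap the number of model requests so a chatty tool loop can't burn quota.
result = agent.run_sync(
    "What is 7 plus 8?",
    usage_limits=UsageLimits(request_limit=4),
)
print(result.usage())   # how many requests/tokens this run actually spent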