Yesterday's pseudocode, now in real code. Pydantic-AI's Agent wraps the loop for you: your job is to register tools and set an iteration cap.
```python
from pydantic_ai import Agent

agent = Agent(model)

@agent.tool_plain
def add(a: int, b: int) -> int:
    """Return the sum of two integers."""
    return a + b

@agent.tool_plain
def multiply(a: int, b: int) -> int:
    """Return the product of two integers."""
    return a * b

result = agent.run_sync(
    "Compute 2 + 3, then multiply that by 4. Use the tools step by step.",
)
print(result.output)  # contains "20"
print("messages:", len(result.all_messages()))
```

This looks the same as L4 (multi-step tool calls). What's new?
The mental shift. L4 framed it as "two tool calls in one prompt". L17 frames it as "an agent loop ran two iterations". result.all_messages() lets you inspect each iteration — the user prompt, each tool call, each tool result, the final answer. Same code, deeper visibility.
Where does the iteration cap live?
Pydantic-AI enforces a default request limit so a run can't loop forever. For a real agent you'd configure it explicitly via usage_limits. Today's task only needs 2 iterations, so the default is fine. Tomorrow we'll add a third tool and watch the model pick among them; that's where loop visibility starts paying off.
```python
agent = Agent(model)

@agent.tool_plain
def tool_a(...): ...

@agent.tool_plain
def tool_b(...): ...

result = agent.run_sync(prompt)
# pydantic-AI ran the loop until the model emitted a final answer
```

The library handles the loop itself: sending the prompt, executing each tool call the model requests, appending each result to the conversation, and stopping once the model replies with plain text.
```python
for msg in result.all_messages():
    # each msg has parts: text, tool calls, tool returns
    print(type(msg).__name__)
```

Message types you'll see:

ModelRequest — what was sent (your prompt + previous tool results)
ModelResponse — what came back (tool call requests OR final text)

For a 2-iteration loop on "2 + 3 then * 4":
add(2, 3) → 5
multiply(5, 4) → 20

Three LLM calls under the hood. Two tool calls. One run_sync from your perspective.
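That trace can be sketched as a hand-rolled loop with a scripted stand-in for the model. Everything here is hypothetical (fake_model, transcript, the tuple shapes are illustrative, not pydantic-AI's actual internals); the point is only to show where the three "LLM calls" and two tool calls come from.

```python
# Minimal sketch of the agent loop, with no real LLM involved.
tools = {"add": lambda a, b: a + b, "multiply": lambda a, b: a * b}

def fake_model(transcript):
    # Scripted "model": request add, then multiply, then finish.
    returns = [t for t in transcript if t[0] == "tool_return"]
    if len(returns) == 0:
        return ("call", "add", (2, 3))
    if len(returns) == 1:
        return ("call", "multiply", (returns[0][1], 4))
    return ("final", str(returns[1][1]), None)

transcript = [("user", "Compute 2 + 3, then multiply that by 4.")]
llm_calls = 0
while True:
    llm_calls += 1                      # one "LLM call" per loop iteration
    kind, name, args = fake_model(transcript)
    if kind == "final":
        answer = name                   # final text ends the loop
        break
    transcript.append(("tool_call", name, args))
    transcript.append(("tool_return", tools[name](*args)))

print(answer)     # "20"
print(llm_calls)  # 3: two tool-call rounds plus one final answer
```

Swap fake_model for a real model and you have the loop pydantic-AI runs on your behalf.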
```python
from pydantic_ai.usage import UsageLimits

limits = UsageLimits(request_limit=10)
result = agent.run_sync(prompt, usage_limits=limits)
```

For lessons we accept the defaults. For production you'd set explicit caps.
If the model calls tools forever (rare, but possible with confused tasks), pydantic-AI raises once the request limit is hit. Catch it, log it, and pick a safer prompt or a smaller toolset. The loop will terminate one way or another: by completion or by cap.
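The cap-or-complete guarantee can be sketched without the library (hypothetical names throughout; in real pydantic-AI the cap comes from usage_limits and the raised exception is the library's own, not a RuntimeError):

```python
# Sketch: a "confused" model that requests a tool on every turn
# and never emits a final answer, so only the cap can stop it.
def confused_model(_transcript):
    return ("call", "add", (1, 1))

def run_loop(model, request_limit=10):
    transcript = []
    for _ in range(request_limit):
        kind, name, args = model(transcript)
        if kind == "final":
            return name                       # terminated by completion
        transcript.append(("tool_return", args[0] + args[1]))
    raise RuntimeError(f"request limit of {request_limit} exceeded")

try:
    run_loop(confused_model)
except RuntimeError as exc:
    print("caught:", exc)  # log, then adjust the prompt or toolset
```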
"Compute 2 + 3, then multiply that by 4." Two tools. Final answer 20. Verification asserts "20" in the output.