research_and_extract from yesterday chains search and extraction. Your agent needs to count words in abstract summaries — but language models are unreliable at arithmetic. How do you give the agent a reliable word counter?
Define a tool function the agent can call. @agent.tool_plain wraps a Python function so the model can invoke it mid-response. The agent asks for the word count; Python does the math.
Exactly. The agent picks the tool when it needs it:
from pydantic_ai import Agent

agent = Agent(model)

@agent.tool_plain
def word_count(text: str) -> int:
    return len(text.split())

result = agent.run_sync(prompt).output

How does the model know the tool exists? I didn't mention it in the prompt.
The @agent.tool_plain decorator registers the function with the agent at construction. The model receives the tool's name and docstring as part of its context — it can choose to call it or not based on the prompt. If you ask the agent "How many words is this text?" it will call word_count. If you ask for a summary, it likely won't.
So I'm the tool author and the model is the caller. I define what the tool does; the agent decides when to use it. That's the division of labour for Week 4.
The model is picking the tool — you just defined it. That's the agentic pattern.
I could define a citation_count tool, an outlier_check tool, an effect_size_calculator — any Python function the model can invoke when it decides it needs to. The agent gets more capable with each tool I add.
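One of those, outlier_check, might look like the sketch below. This is a hypothetical implementation (the two-standard-deviation threshold is an assumption), shown as a plain function so it reads on its own; in the agent it would carry the @agent.tool_plain decorator:

```python
import statistics

def outlier_check(values: list[float]) -> list[float]:
    """Return the values lying more than two standard deviations from the mean."""
    if len(values) < 2:
        return []  # not enough data to estimate spread
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []  # all values identical, no outliers
    return [v for v in values if abs(v - mean) > 2 * stdev]
```

Like word_count, it hands the model a computation it cannot do reliably on its own.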
Adding a new tool is always the same pattern:
@agent.tool_plain
def sentence_count(text: str) -> int:
    """Count sentences in text."""
    return len([s for s in text.split(".") if s.strip()])

Tools are called with the model's best-guess arguments. Validate parameter values for complex lookups.
agent = Agent(model)

@agent.tool_plain
def word_count(text: str) -> int:
    return len(text.split())

result = agent.run_sync(prompt).output

Why @agent.tool_plain works: the decorator registers the function with the agent. The model receives the tool's name and signature in its context and can call it when relevant. tool_plain is for functions with no PydanticAI context argument — the simplest tool form.
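To make "name and signature in its context" concrete, here is a rough plain-Python illustration (not PydanticAI's actual implementation) of how a tool description can be derived from a function's type hints and docstring:

```python
import inspect

def describe_tool(fn) -> dict:
    """Build a rough tool description from a function's signature and docstring."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        # Map each parameter name to the name of its annotated type.
        "parameters": {
            name: getattr(p.annotation, "__name__", str(p.annotation))
            for name, p in sig.parameters.items()
        },
        "returns": getattr(sig.return_annotation, "__name__", str(sig.return_annotation)),
    }

def word_count(text: str) -> int:
    """Count the number of words in text."""
    return len(text.split())
```

Something shaped like this description is what the model sees, which is why type hints and docstrings on tool functions matter.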
Add a docstring to each @agent.tool_plain function — the model uses it to decide when to call the tool:
@agent.tool_plain
def word_count(text: str) -> int:
    """Count the number of words in text."""
    return len(text.split())

Clear docstrings reduce tool selection errors on ambiguous prompts.