agent_with_tool from yesterday gives the agent one tool. For abstract quality checking, you need both word count and character count — different constraints for different publication venues. How do you give the agent two tools?
Define two @agent.tool_plain functions on the same agent. The agent picks the relevant one based on the prompt. If I ask for character count, it calls char_count; if I ask for word count, it calls word_count.
Exactly. Two decorators, one agent:
```python
from pydantic_ai import Agent

agent = Agent(model)  # model and prompt defined elsewhere

@agent.tool_plain
def char_count(text: str) -> int:
    return len(text)

@agent.tool_plain
def word_count(text: str) -> int:
    return len(text.split())

result = agent.run_sync(prompt).output
```

What if the prompt is ambiguous and could match either tool? Which does the agent pick?
The model picks based on the prompt semantics. "How many characters?" → char_count. "How many words?" → word_count. Ambiguous prompts like "How long is this?" could go either way. For production use, be specific in your prompts and add a docstring to each tool — the docstring is part of the model's context for tool selection.
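The selection step can be sketched, very loosely, as matching the prompt against each tool's docstring. This is a toy stand-in for the model's reasoning, not how Pydantic AI works internally; `TOOLS` and `pick_tool` are illustrative names invented for this sketch.

```python
# Toy stand-in for model-driven tool selection: score each tool's
# docstring against the prompt and call the best match. A real model
# reasons over names and docstrings; this is only an analogy.
def char_count(text: str) -> int:
    """Count characters in text."""
    return len(text)

def word_count(text: str) -> int:
    """Count words in text."""
    return len(text.split())

TOOLS = [char_count, word_count]

def pick_tool(prompt: str):
    # Score by overlap between prompt words and docstring words.
    prompt_words = set(prompt.lower().split())
    def score(tool):
        doc_words = set((tool.__doc__ or "").lower().split())
        return len(prompt_words & doc_words)
    return max(TOOLS, key=score)

tool = pick_tool("How many characters are in this abstract?")  # picks char_count
```

The sketch makes the mechanism concrete: the only signal the dispatcher has is the prompt and each tool's name and docstring, which is why clear docstrings matter.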
So the docstring is the tool's job description, and the model is the dispatcher. I write the functions; the agent decides which specialist to call. That's the research team analogy from the voice bible — calling in a statistician mid-analysis.
The model is picking the tool — you just defined them. Add three tools tomorrow and the agent's capability expands without code changes to the dispatch logic.
I could add a citation_format tool, an effect_size_calc tool, a p_value_check tool. Each one is a Python function I already know how to write. The agent becomes a research assistant that knows when to use each.
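One of those, effect_size_calc, might look like the following. This is a hypothetical sketch: the function name comes from the dialogue above, but the choice of Cohen's d with a pooled standard deviation is an assumption about what such a tool would compute.

```python
import statistics

# Hypothetical effect_size_calc tool: Cohen's d for two independent
# samples, using the pooled standard deviation.
def effect_size_calc(group_a: list[float], group_b: list[float]) -> float:
    """Compute Cohen's d effect size between two samples."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd
```

Registering it would take nothing more than the `@agent.tool_plain` decorator on top, same as the counting tools.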
A third tool would register the same way:
```python
@agent.tool_plain
def sentence_count(text: str) -> int:
    """Count sentences in text."""
    return len([s for s in text.split(".") if s.strip()])
```

Start with two. Verify the agent picks correctly on your real prompts before adding more. Tool selection accuracy degrades with many similar-sounding tools; distinct names and clear docstrings matter.
```python
from pydantic_ai import Agent

agent = Agent(model)  # model and prompt defined elsewhere

@agent.tool_plain
def char_count(text: str) -> int:
    """Count characters in text."""
    return len(text)

@agent.tool_plain
def word_count(text: str) -> int:
    """Count words in text."""
    return len(text.split())

result = agent.run_sync(prompt).output
```

The model uses the function name and docstring to decide which tool to call. Distinct names and clear docstrings improve selection accuracy when tools have overlapping purposes.
| Scenario | Pattern |
|---|---|
| One type of measurement | Single @agent.tool_plain |
| Multiple measurement types | Two tools — agent picks the relevant one |
| Agent must choose | Distinct names + clear docstrings matter |
Tip: Test on ambiguous prompts to verify the agent selects correctly before adding more tools.