Your agent calls an AI model. What if the prompt asks it to compute something — like 'how many words is this abstract'? The model estimates word counts rather than counting precisely. How do you give it an exact counter?
Register a Python word-count function as a tool. The agent calls the model, the model decides to use the tool, the tool runs Python and returns the exact count.
Exactly. @agent.tool_plain registers a function as a tool the agent can call. You define the tool inside the function, decorate it, and the agent decides when to invoke it based on the prompt:
def agent_with_tool(prompt: str) -> str:
    agent = Agent(model)

    @agent.tool_plain
    def add(a: int, b: int) -> int:
        return a + b

    result = agent.run_sync(prompt)
    return result.output

What if the prompt doesn't require the tool? Does the agent always call it?
The agent decides. If the prompt doesn't need the tool, the model answers directly without calling it. Tools are available — not mandatory. That's the 'tool-use' model: you provide capabilities, the agent decides when to apply them:
def agent_with_tool(prompt: str) -> str:
    agent = Agent(model)

    @agent.tool_plain
    def add(a: int, b: int) -> int:
        """Add two integers."""
        return a + b

    result = agent.run_sync(prompt)
    output = result.output
    print(f"Output: {output}")
    return output

The agent is picking which tool to call based on the prompt. I just defined the tool and wrote the question. The decision-making is the agent's job.
Tool docstrings matter — they're how the model understands what each tool does. A clear docstring on the tool function ("""Add two integers.""") is the difference between reliable tool selection and random invocation. Write docstrings as if explaining the tool to a fast-but-literal assistant.
So the model reads my docstring and decides whether to call my function. The function signature and docstring are the tool's API spec.
Precisely. Vague docstrings produce vague tool selection. Specific docstrings — 'Count the number of words in a text string and return an integer' — produce reliable selection.
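For instance, the word-count tool from the opening question might look like this. This is a sketch: the function body and the whitespace-splitting rule are assumptions; only the docstring wording comes from the example above. It would be registered with @agent.tool_plain exactly like add:

```python
def word_count(text: str) -> int:
    """Count the number of words in a text string and return an integer."""
    # "Words" here are whitespace-separated tokens, so punctuation
    # stays attached to its word.
    return len(text.split())

# word_count("the quick brown fox") -> 4
```

With this docstring the model has an exact counter to call instead of estimating, and it knows precisely what the tool returns.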
@agent.tool_plain pattern

agent = Agent(model)

@agent.tool_plain
def my_tool(a: int, b: int) -> int:
    """Describe what this tool does."""
    return a + b

result = agent.run_sync(prompt)
print(result.output)

The model reads the docstring to decide when to call the tool. Write clear, specific docstrings: "Add two integers and return their sum" is better than "Add." Vague docstrings produce unreliable tool selection.
The agent calls tools only when the prompt requires them. Tools are available — not mandatory. If the prompt doesn't need the tool, the model answers directly without invoking it.
You can decorate multiple inner functions with @agent.tool_plain. The agent selects among them based on docstrings and the prompt context.
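As a sketch of that, here are two hypothetical tool bodies with contrasting docstrings. Inside agent_with_tool each would get its own @agent.tool_plain decorator, exactly as above; the docstrings are what let the model choose between them. The function names and bodies are illustrative assumptions, not from the source:

```python
def word_count(text: str) -> int:
    """Count the number of words (whitespace-separated tokens) in a text string."""
    return len(text.split())

def char_count(text: str) -> int:
    """Count the number of characters in a text string, including spaces."""
    return len(text)
```

A prompt like "how many words is this abstract" should route to word_count, while "how many characters is this string" should route to char_count, with the selection driven entirely by the docstrings and signatures.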