You have told the agent what to return. Now what if the agent needs to call a function you wrote — a calculation, a lookup, a formatter — during the run?
I'd define the function and pass it to the agent somehow? Like passing a system_prompt but for a callable?
@agent.tool_plain is the decorator. Define the agent first, decorate the function, and the agent decides when to call it. Here is the full pattern next to Day 3's baseline:
from pydantic_ai import Agent

# Day 3 — no tools
def run_agent(prompt: str) -> str:
    return Agent(model).run_sync(prompt).output

# Day 26 — with a tool
def agent_with_tool(prompt: str) -> str:
    agent = Agent(model)

    @agent.tool_plain
    def word_count(text: str) -> int:
        return len(text.split())

    return agent.run_sync(prompt).output

The agent decides when to call word_count? I do not call it myself?
Correct. You define the tool; the agent decides whether the prompt requires it. If the prompt says 'how many words are in this text?', the agent calls word_count and uses the result. If the prompt does not need a word count, word_count is never called.
So I can give the agent access to my own Python functions — date calculations, rate lookups, formatting helpers — and it uses them when relevant.
Exactly. The agent picks the tool — you just define it. That is the Week 4 shift: from functions that call the model to agents that call your functions.
I just gave an AI agent access to my own Python logic. That is a real tool, not a chatbot.
And the tool is just a Python function — testable, reusable, type-annotated. The @agent.tool_plain decorator is the only new piece. The function underneath is plain Python.
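Because the body is plain Python, the tool can be exercised with ordinary assertions before any agent ever sees it — a minimal sketch:

```python
# word_count is just a function; no agent, model, or decorator is
# needed to test it.
def word_count(text: str) -> int:
    return len(text.split())

assert word_count("one two three") == 3
assert word_count("") == 0             # str.split() yields no tokens for ""
assert word_count("  spaced   out ") == 2  # repeated whitespace collapses
```

Keeping the tool testable in isolation means a failing agent run can be debugged in two halves: is the tool wrong, or did the agent call it with the wrong arguments?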
@agent.tool_plain for Custom Tools

agent = Agent(model)

@agent.tool_plain
def word_count(text: str) -> int:
    return len(text.split())

result = agent.run_sync(prompt).output

The agent receives a description of each tool derived from the function name, type annotations, and docstring. It selects the tool when the prompt's intent matches the tool's description:

- 'How many words are in this text?' → word_count is called
- A prompt that needs no word count → word_count is not called

@agent.tool_plain only registers the function as a tool. The function body is standard Python — no special return type, no framework-specific logic. You can test it independently of the agent.
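Those three ingredients — name, annotations, docstring — can all be read back with the standard library. This sketch of a `describe_tool` helper is my own approximation of the kind of metadata a framework derives, not pydantic-ai's actual schema builder:

```python
import inspect
from typing import get_type_hints

def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

def describe_tool(fn) -> dict:
    # Rough approximation of the tool metadata shown to the model:
    # name, parameter types, return type, and docstring.
    hints = get_type_hints(fn)
    return {
        "name": fn.__name__,
        "parameters": {p: hints[p].__name__
                       for p in inspect.signature(fn).parameters},
        "returns": hints["return"].__name__,
        "description": inspect.getdoc(fn),
    }

print(describe_tool(word_count))
```

This is also why descriptive names, precise annotations, and a one-line docstring matter: they are the only signal the model has when deciding whether the tool fits the prompt.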