agent_two_tools let the agent pick the right tool from two options — the agent decided, you just defined the menu. What if the agent's first job is to go out and find the raw material before any extraction happens?
Like a two-step process — one agent searches the web, a second one pulls the structured fact out of whatever the first one returned. I've been building toward this since Week 1.
Exactly. The search agent uses perplexity/sonar — a model with live internet access. You hand it a topic, it returns a narrative. Then a second agent with result_type=Fact reads that narrative and fills out a Pydantic form:
```python
from pydantic import BaseModel

class Fact(BaseModel):
    fact: str
    source: str
```

If the search agent already returns a detailed narrative, why do I need a second agent? Can't I just format the narrative directly?
You could — but you'd get prose, not structure. The extraction agent enforces the form. result_type=Fact means the LLM has to return exactly two fields: a one-sentence fact and a named source. No prose, no rambling. Structure is how you make a pipeline reliable enough to hand to a colleague:
```python
def research_extract_format(topic: str) -> str:
    search_agent = Agent(model)
    extract_agent = Agent(model, result_type=Fact)
    narrative = search_agent.run_sync(topic).output
    fact = extract_agent.run_sync(narrative).output
    report = f"# {topic}\n\n**Finding:** {fact.fact}\n\n**Source:** {fact.source}\n"
    print(report)
    return report
```

Week 1 was run_sync. Week 2 was result_type=Fact. Week 3 was chaining agents. Week 4 added search. This function uses all four. One function, four weeks of technique.
And the output is a Markdown report — something you can paste into a strategy memo, drop into a Notion page, or pipe into a dashboard. The agent is picking the content; you're controlling the shape.
My COO asked me last week how I compiled competitive intel so fast. I told him it was a Python function. He asked for the source. I showed him the Source: field. He stopped asking questions.
That's the point. Reliable structure — fact plus named source — is what separates a research tool from a hallucination risk. Validate the source field before you ship anything to an exec. The model is confident; confidence is not the same as accuracy.
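One way to make that validation automatic is to put it inside the model itself, so a blank or placeholder source never reaches the report. This is a minimal sketch, not part of the lesson's code: the validator name and the specific placeholder strings it rejects are illustrative assumptions, using Pydantic v2's `field_validator`.

```python
from pydantic import BaseModel, field_validator

class Fact(BaseModel):
    fact: str
    source: str

    @field_validator("source")
    @classmethod
    def source_must_be_named(cls, v: str) -> str:
        # Reject empty or placeholder attributions before the report ships.
        if not v.strip() or v.strip().lower() in {"unknown", "n/a", "none"}:
            raise ValueError("source must name a real attribution")
        return v
```

With this in place, an extraction that comes back without a usable source raises a validation error instead of silently producing a report with an empty `Source:` line.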
The capstone composes every technique from the track into a single research_extract_format(topic) call.
Stage 1 — Search: search_agent = Agent(model) using the ai-search lesson type routes to perplexity/sonar, a model with live web access. .run_sync(topic).output returns a sourced narrative string.
Stage 2 — Extract: extract_agent = Agent(model, result_type=Fact) forces the LLM to return exactly two fields. Fact.fact is a one-sentence finding; Fact.source is a named attribution. The model cannot return prose — the Pydantic contract enforces the shape.
Stage 3 — Format: String interpolation assembles a Markdown report. f"# {topic}\n\n**Finding:** {fact.fact}\n\n**Source:** {fact.source}\n" is deterministic — no AI involved in formatting.
A single agent asked to "search and extract" often conflates the tasks. Splitting search and extraction gives you an auditable intermediate: the raw narrative string. If the report looks wrong, you can inspect the narrative before blaming extraction.
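The audit pattern itself can be sketched independently of any agent framework. In this illustration, the stage functions are stubs standing in for the real search and extraction agents (`run_pipeline` and the stub names are assumptions, not the lesson's code); the point is that every stage's raw output lands in a trace you can inspect.

```python
def run_pipeline(topic, search, extract, fmt):
    # Record each stage's raw output so a bad report can be traced
    # to search or to extraction, not guessed at.
    trace = {"topic": topic}
    trace["narrative"] = search(topic)            # Stage 1: auditable intermediate
    trace["fact"] = extract(trace["narrative"])   # Stage 2: structured extraction
    trace["report"] = fmt(topic, trace["fact"])   # Stage 3: deterministic formatting
    return trace

# Stub stages, purely for demonstration.
def narrative_stub(topic):
    return f"Reports say {topic} grew 12% last quarter (Reuters)."

def extract_stub(narrative):
    return {"fact": narrative.split(" (")[0] + ".", "source": "Reuters"}

def fmt_stub(topic, fact):
    return f"# {topic}\n\n**Finding:** {fact['fact']}\n\n**Source:** {fact['source']}\n"

trace = run_pipeline("Acme revenue", narrative_stub, extract_stub, fmt_stub)
```

If `trace["report"]` looks wrong, `trace["narrative"]` tells you whether the search stage or the extraction stage is to blame.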