search_the_web returns a long paragraph of context. For a proposal, you often need just one fact — a rate, a statistic, a source you can cite. How do you extract a typed fact from that paragraph?
Chain the search result into an extraction agent — like Day 25's research_and_extract from the automation track. Search first, extract second.
Exactly the same chain pattern. The search agent produces context; the extraction agent produces a typed object:
from pydantic import BaseModel
class Fact(BaseModel):
    fact: str
    source: str

Does the extraction agent also search the web? Or does it only process the text I pass it?
The extraction agent uses the standard model — no web search. It receives the search result string you pass to .run_sync() and extracts from that. The Sonar model handles searching; the standard model handles structuring. Each agent does one job:
def research_and_extract(topic: str) -> dict:
    context = search_the_web(topic)
    result = Agent(model, output_type=Fact).run_sync(context)
    return result.output.model_dump()

Search agent → extraction agent → typed dict. That's search plus structured output in one pipeline. The fact and the source come back as separate fields.
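A minimal sketch of the typed end of that pipeline, with no API calls — the field values below are invented placeholders standing in for what the extraction agent would return from a real search result:

```python
from pydantic import BaseModel

class Fact(BaseModel):
    fact: str
    source: str

# Placeholder values standing in for the extraction agent's output.
extracted = Fact(
    fact="Example statistic pulled from the search context.",
    source="https://example.com/report",
)
print(extracted.model_dump())
# → {'fact': 'Example statistic pulled from the search context.', 'source': 'https://example.com/report'}
```

Because the result is a plain dict with known keys, downstream code can read `['fact']` and `['source']` directly instead of parsing prose.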
And you can cite the source in your proposal. Not just 'the AI said so' — an actual source field from the extraction.
I searched the web and structured the result in eight lines. No browser, no copy-paste, no reformatting.
Still verify the source field before citing it in client work — Sonar's citations are usually reliable but not infallible. The pipeline accelerates research; the judgment check stays with you.
Chain two agents: the first searches, the second structures:
context = search_the_web(topic)  # Perplexity Sonar
result = Agent(model, output_type=Fact).run_sync(context)  # standard model
return result.output.model_dump()  # {'fact': '...', 'source': '...'}

The search agent returns a long, prose-heavy response. The extraction agent receives that prose and structures the most relevant fact and source from it.
Each agent does one job. Separating them keeps each instruction set focused and the output reliable.
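That separation can be sketched with stubs — both function bodies below are placeholders; in the real pipeline the first is the Sonar-backed search_the_web and the second wraps Agent(model, output_type=Fact).run_sync:

```python
def search_stub(topic: str) -> str:
    # Stands in for the Sonar-backed search agent: returns prose context.
    return f"Long prose about {topic}. Key figure: 42% (source: example.com)."

def extract_stub(context: str) -> dict:
    # Stands in for the extraction agent: turns prose into a typed dict.
    return {"fact": "Key figure: 42%", "source": "example.com"}

def research_and_extract(topic: str) -> dict:
    context = search_stub(topic)   # step 1: search produces prose
    return extract_stub(context)   # step 2: extraction produces structure

print(research_and_extract("freelance rates"))
# → {'fact': 'Key figure: 42%', 'source': 'example.com'}
```

Swapping either stub for its real agent changes nothing about the pipeline's shape: prose in the middle, a typed dict at the end.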
The source field from Sonar-grounded answers is usually accurate but not guaranteed. Verify before including in client deliverables.