batch_word_counts ran the same agent over a list of prompts and measured verbosity. What happens when the prompt asks for something that wasn't in the model's training data — a competitor's pricing announced last week?
The model guesses. Or says it doesn't know. Static training means the answer is already stale the moment it's deployed.
Right. For current information — benchmarks, pricing changes, regulatory updates — you need a model with live web access. The sandbox preamble selects Perplexity/sonar automatically when lesson_type is ai-search. The code looks identical to Day 3:
def search_the_web(query: str) -> str:
    result = Agent(model).run_sync(query)
    return result.output
Identical? No extra import, no search API key, no URL list? I'm not calling a search endpoint myself?
The model does the retrieval. Perplexity/sonar runs live web searches as part of its inference — you write a query, it returns a sourced answer. Same pattern as run_agent, different model under the hood. You never touch the search API:
result = Agent(model).run_sync("What is OKR vs KPI?")
print(result.output)  # sourced explanation with current framing
The agent is picking the tool — I just defined them. My COO asked who sent such a clean meeting follow-up. I told him it was a Python function. Now I can give him a competitive intel brief the same way.
One function call for live research. The string comes back with the model's synthesis of current sources — not a cached answer from 18 months ago. Paste it into a QBR deck or pipe it to the next agent.
So search_the_web becomes the first step in a research pipeline — search, then extract a structured finding, then format a memo. That's three agents chained off a single query.
Exactly the pattern for the lessons ahead. The next challenge chains this search output into a Pydantic extractor — one agent retrieves, the second extracts a typed Fact with a source citation. Two agents, one sourced intel report.
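A minimal sketch of that chaining shape, with stub functions standing in for the two live Agent calls (the Fact fields, the stubbed return format, and the function names are illustrative assumptions, not the real lesson code):

```python
from dataclasses import dataclass

@dataclass
class Fact:
    claim: str
    source: str

# Stub standing in for the live perplexity/sonar agent (agent 1).
def search_the_web(query: str) -> str:
    return f"{query}: revenue grew 12% (source: example.com)"

# Stub standing in for the Pydantic extractor agent (agent 2).
def extract_fact(text: str) -> Fact:
    claim, _, rest = text.partition(" (source: ")
    return Fact(claim=claim, source=rest.rstrip(")"))

def intel_report(query: str) -> Fact:
    raw = search_the_web(query)  # step 1: retrieve a sourced string
    return extract_fact(raw)     # step 2: extract a typed, cited Fact
```

In the real pipeline each stub is replaced by an Agent(...).run_sync call; only the plumbing between them is shown here.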
ai-search Gives the Agent Live Web Access
The sandbox preamble checks lesson_type and sets OPENROUTER_MODEL before your code runs. For ai-search, it routes to perplexity/sonar — a model that performs retrieval-augmented generation with live web searches baked into each inference call.
What changes: the model, not your code. Agent(model).run_sync(query).output is identical to Day 3. Perplexity handles the search, the source selection, and the synthesis internally.
What you get back: a string answer synthesising current sources — suitable for piping into a Pydantic extractor or formatting as a report. The model may include inline citations in the output.
Do not add system_prompt="Search the web for…" — Perplexity already retrieves. Adding a search instruction wastes tokens and can suppress the model's own retrieval framing. Pass the query directly.
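For orientation, the preamble's routing step amounts to something like the sketch below. Only the ai-search → perplexity/sonar mapping is stated in this lesson; the dict name, the fallback model, and the function name are assumptions for illustration.

```python
import os

# Sketch of the sandbox preamble's model routing, under assumed names.
LESSON_MODELS = {
    "ai-search": "perplexity/sonar",  # the one mapping given in this lesson
}
DEFAULT_MODEL = "openai/gpt-4o-mini"  # assumed fallback, not from the lesson

def select_model(lesson_type: str) -> str:
    model = LESSON_MODELS.get(lesson_type, DEFAULT_MODEL)
    os.environ["OPENROUTER_MODEL"] = model  # set before user code runs
    return model
```

Your code then reads OPENROUTER_MODEL and passes it to Agent(model), which is why nothing in search_the_web changes between lessons.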