Three weeks of agents processing text you give them. Your literature review needs to cover the last 12 months. Your keyword searches miss papers that use different terminology. What's the step that closes that gap?
A search agent. The ai-search model uses Perplexity under the hood — web search built in. The agent queries recent papers on a topic, not just processes a text I already have.
The call is identical to run_agent. The model swap is handled by the sandbox when lesson_type is "ai-search" — perplexity/sonar runs automatically:
def search_the_web(query: str) -> str:
    return Agent(model).run_sync(query).output

This looks exactly like run_agent from Day 3. What's different?
The model variable is bound to perplexity/sonar in this lesson — the sandbox injects a search-capable model instead of a basic language model. The code is identical; the capability changes with the model. That's the separation: your code says Agent(model); the model selection is external configuration.
So the abstraction pays off here. I wrote run_agent once and the search version is structurally the same. The infra decides what model to use, not the function.
The Perplexity response includes citations — usually as inline references or a sources section. That raw string is the input to the extraction step on Day 25. The agent is picking papers based on the query; you verify the relevance.
I can schedule this. Run it every Friday on my three main topics, get the new papers automatically. That's my Monday morning reading list, generated without touching Google Scholar.
The weekly sweep is a loop:
topics = ["meta-analysis", "open data", "replication crisis"]
for topic in topics:
    result = search_the_web(topic)
    print(result[:150])

Chain it with the Day 25 extractor and it becomes a structured table of findings.
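The Friday schedule is one crontab entry. A sketch that assumes the loop above lives in a script called weekly_sweep.py (the filename and paths are placeholders, not part of the lesson):

```shell
# min hour dom mon dow: 08:00 every Friday (dow 5)
0 8 * * 5 python /path/to/weekly_sweep.py >> /path/to/reading_list.txt
```

Appending to a file means the Monday reading list accumulates; point it wherever you actually read.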
def search_the_web(query: str) -> str:
    return Agent(model).run_sync(query).output

The model variable is bound to perplexity/sonar in ai-search lessons — web search is built into the model, not an extra API call. The response string includes citations and source references.
Frame queries as research questions for best results:

"What are the 5 most recent papers on early childhood vocabulary intervention?"
"Key findings in systematic reviews of open-access citation effects 2023-2025"

Perplexity responds best to specific, scoped queries:

"5 most recent RCTs on early childhood vocabulary intervention 2023-2025"
"systematic reviews of open-access citation effects since 2022"

Vague queries return broad summaries; specific queries return citable findings.