You automated lead classification last week. Before a sales call, what's the one thing you still do manually?
Google the company. I check their recent funding, their team size, what they do. Fifteen minutes per lead.
search_the_web(query) — same code you wrote in Week 1, different model. When lesson_type is ai-search, the sandbox injects a Perplexity model with live web access. The agent searches the web and returns grounded results automatically:
```python
from pydantic_ai import Agent

def search_the_web(query: str) -> str:
    # `model` is injected by the sandbox based on lesson_type
    result = Agent(model).run_sync(query)
    return result.output
```

The code is identical to run_agent. What's actually different?
The model. ai-basic uses openrouter/auto — a standard LLM that knows what it was trained on. ai-search uses perplexity/sonar — a model with live web search built in. The API surface is identical; the capability is different. Here's the full function:
```python
def search_the_web(query: str) -> str:
    result = Agent(model).run_sync(query)
    output = result.output
    print(f"Search result: {output[:80]}...")
    return output
```

The agent is picking the tool — I just defined the query. It does the Googling for me automatically.
An assistant who Googles the company before you call them — automated research before every outreach. Fifteen minutes per lead becomes two seconds per lead.
I run this on every hot lead from my summarize_and_classify pipeline before I make the call. Full background brief, no manual research.
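That pipeline step might look like the sketch below. The lead shape (`company`, `label` keys) and the `brief_hot_leads` helper are assumptions for illustration; `search_the_web` is stubbed so the sketch is self-contained:

```python
def search_the_web(query: str) -> str:
    # Stub standing in for the lesson's Perplexity-backed function.
    return f"[grounded web results for: {query}]"

def brief_hot_leads(leads: list[dict]) -> list[dict]:
    # Hypothetical lead shape from summarize_and_classify: {"company": ..., "label": ...}
    for lead in leads:
        if lead["label"] == "hot":
            lead["brief"] = search_the_web(
                f"Recent funding, team size, and product of {lead['company']}"
            )
    return leads
```

Cold leads skip the search entirely, so you only pay the web-search latency for calls you're actually going to make.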
Perplexity Sonar returns grounded results with source citations embedded in the text. The output is longer than a basic LLM response — plan for 100-300 words. In the capstone you'll pass this output to a structured extractor to pull the key fact and source into a typed record.
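A typed record for that extraction might look like the sketch below. The field names are assumptions; the capstone defines the actual schema:

```python
from pydantic import BaseModel

class ResearchBrief(BaseModel):
    # Hypothetical fields -- the capstone specifies the real ones.
    key_fact: str   # the single most relevant finding
    source: str     # where the grounded result cited it

# Placeholder values, purely for illustration.
brief = ResearchBrief(key_fact="example fact", source="example source")
```

Passing a model like this as the agent's output type is how PydanticAI turns free-text search results into validated, typed data.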
ai-search Model

Same PydanticAI code, different model selected by lesson_type:

```python
# lesson_type: 'ai-search' → perplexity/sonar
result = Agent(model).run_sync(query)
return result.output  # grounded, web-sourced response
```

| lesson_type | model | capability |
|---|---|---|
| ai-basic | openrouter/auto | standard text |
| ai-search | perplexity/sonar | live web search |
| ai-tools | openrouter/auto-exacto | tool calling |
Search results are longer than basic LLM responses. Truncate or extract structured data before passing downstream.
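A minimal truncation helper might look like this (the `truncate_result` name and the 300-character default are assumptions, chosen to match the 100-300-word guidance above only loosely — tune the limit to your downstream step):

```python
def truncate_result(text: str, limit: int = 300) -> str:
    # Cut long search output at a word boundary before passing it downstream.
    if len(text) <= limit:
        return text
    return text[:limit].rsplit(" ", 1)[0] + "..."
```

For anything more than logging, prefer the structured-extraction route: pulling a typed record out of the search text keeps downstream steps from depending on arbitrary cut points.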