Three weeks in, you can classify a list of leads and extract typed records. One gap: before a call you still spend 15 minutes manually Googling the company. That's the ai-search lesson type — same code, different model, live web access.
The code doesn't change? I just get a different model that can search the web?
Exactly. lesson_type: 'ai-search' injects a Perplexity model with web search built in. Agent(model).run_sync(query).output — identical to Week 1. The model does the searching; you get a grounded response string. This week adds tools on top: define a Python function with @agent.tool_plain and the agent calls it when the prompt needs it. The capstone chains search, structured extraction, and formatting into one research pipeline.
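The "same code, different model" point can be sketched with a stub. This is an illustrative mock, not the real pydantic-ai `Agent`: `FakeAgent` and `FakeResult` are invented stand-ins, and only the `lesson_type` → model mapping comes from the lesson itself.

```python
from dataclasses import dataclass

# lesson_type -> model mapping, as listed in the lesson
MODELS = {
    "ai-basic": "openrouter/auto",
    "ai-search": "perplexity/sonar",
    "ai-tools": "openrouter/auto-exacto",
}

@dataclass
class FakeResult:
    output: str  # mirrors the .output attribute the lesson reads

class FakeAgent:
    """Mock with the same calling shape: Agent(model).run_sync(query).output."""
    def __init__(self, model: str):
        self.model = model

    def run_sync(self, query: str) -> FakeResult:
        # A real ai-search model would do live web search here; we just echo.
        return FakeResult(output=f"[{self.model}] answer for: {query}")

# Swapping the lesson type changes the model string, nothing else.
print(FakeAgent(MODELS["ai-search"]).run_sync("What does Acme Corp do?").output)
```

The point the stub makes: moving from ai-basic to ai-search is a one-string change in the constructor; the call site is untouched.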
When would I give the agent a tool instead of just putting the logic in the prompt?
When the computation is deterministic — arithmetic, counting, lookups — and you need a guaranteed result. Asking an LLM to add 47 and 93 is risky. Giving it an add(a, b) tool that returns a + b is exact. Tools are for precision; prompts are for judgment.
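A minimal sketch of the precision argument, using plain Python. The tool functions are exact by construction; the dictionary dispatch below is a toy stand-in for the model's tool choice (in the lesson this registration happens via `@agent.tool_plain`, and the agent, not a lookup table, decides which tool the prompt needs).

```python
def add(a: int, b: int) -> int:
    """Deterministic arithmetic: always exact, unlike sampled model output."""
    return a + b

def count_words(text: str) -> int:
    """Deterministic counting, another job that belongs in a tool."""
    return len(text.split())

# Toy dispatch standing in for the agent's tool selection.
TOOLS = {"add": add, "count_words": count_words}

def call_tool(name: str, *args):
    return TOOLS[name](*args)

print(call_tool("add", 47, 93))          # exact, every time
print(call_tool("count_words", "grounded web results"))
```

This is the two-tool shape from `agent_two_tools`: several deterministic functions registered, one picked per prompt.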
Functions this week:
- search_the_web(query) — ai-search model, Perplexity sonar, grounded results
- research_and_extract(topic) — search then structured extraction with result_type=Fact
- agent_with_tool(prompt) — ai-tools model, @agent.tool_plain for deterministic computation
- agent_two_tools(prompt) — two tools, agent picks one
- research_extract_format(topic) — capstone: search + extract + format

Model mapping:
- ai-basic → openrouter/auto (standard text tasks)
- ai-search → perplexity/sonar (web search)
- ai-tools → openrouter/auto-exacto (tool calling)
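The capstone's shape, search then extract then format, can be sketched as a chain of plain functions. Everything below is stubbed: the real `search_the_web` is an ai-search Agent call and the real extraction uses `result_type=Fact`; the `Fact` fields, the stub return values, and the parsing are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    claim: str
    source: str

def search_the_web(topic: str) -> str:
    # Stub for the ai-search call; a real run returns grounded web text.
    return f"{topic} raised $10M (example.com)"

def extract_fact(raw: str) -> Fact:
    # Stub for structured extraction with result_type=Fact.
    claim, _, source = raw.rpartition(" (")
    return Fact(claim=claim, source=source.rstrip(")"))

def format_report(fact: Fact) -> str:
    return f"- {fact.claim} [{fact.source}]"

def research_extract_format(topic: str) -> str:
    # Capstone chain: each stage feeds the next.
    return format_report(extract_fact(search_the_web(topic)))

print(research_extract_format("Acme Corp"))
```

The design point is that each stage has one job and a typed hand-off, so swapping a stub for a real Agent call changes one function, not the pipeline.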