Three weeks of agents that process text you provide. What's missing if you want to research something that happened last month?
The model's training data has a cutoff. For recent publications I'd need a model that can search the web in real time — not just recall from training.
That's the perplexity/sonar model — real-time web search built into the model call. The ai-search lesson type wires it automatically in the sandbox. Your code is identical to run_agent from Day 3 — you write the research question, the model searches and responds with current results.
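The call shape described above can be sketched like this. This is pseudocode, not a tested implementation — `Agent` and `run_agent` stand in for whatever the Day 3 sandbox actually provides, and only the model name changes:

```
# pseudocode: same shape as Day 3's run_agent, only the model string differs
agent = Agent("perplexity/sonar")   # real-time web-search model
answer = run_agent(agent, "What retrieval papers came out last month?")
print(answer)  # synthesised answer with citations, not raw search hits
```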
If the model searches the web, do I need to handle pagination or result limits?
No. The model abstracts the search entirely — you get a synthesised answer, not raw search results to paginate through. You ask a research question in plain text; you get a plain-text answer with citations. Week 4 builds on that: search first, then extract structured findings, then add tools, then assemble the capstone.
search_the_web(query) — call a real-time search model
research_and_extract(topic) — search → structured Pydantic extraction
agent_with_tool — register @agent.tool_plain and let the agent decide when to use it
agent_two_tools — two tools, agent picks which one
research_extract_format — the full capstone: search → extract list[Finding] → markdown output

Goal: by Day 28 you have a self-contained research assistant that searches recent literature and formats findings as a reading list.
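The extract-and-format half of the capstone can be sketched without any model call. A minimal, self-contained version, assuming a `Finding` record with title, source, and summary fields — the real lessons use a Pydantic model populated by the search model, but a stdlib dataclass and a toy line parser show the same shape:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    source: str
    summary: str

def extract_findings(raw_answer: str) -> list[Finding]:
    # Toy stand-in for structured extraction: the real lesson has the
    # model emit Findings directly; here we parse "title | source | summary" lines.
    findings = []
    for line in raw_answer.strip().splitlines():
        title, source, summary = (part.strip() for part in line.split("|"))
        findings.append(Finding(title, source, summary))
    return findings

def format_reading_list(findings: list[Finding]) -> str:
    # Final step of the capstone: structured findings -> markdown reading list.
    lines = ["# Reading list", ""]
    for f in findings:
        lines.append(f"- **{f.title}** ({f.source}): {f.summary}")
    return "\n".join(lines)

raw = "Paper A | arXiv | Shows X\nPaper B | ACL | Shows Y"
print(format_reading_list(extract_findings(raw)))
```

Swapping the toy parser for a search-model call that returns `list[Finding]` gives the full search → extract → format pipeline.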