In Week 3, everything stayed inside the Python process — agent in, string out. Week 4 changed that. The model started reaching outward: querying the live web, picking tools at runtime, extracting structured facts from search results. Which day felt most different?
Day 24. I wrote Agent(model).run_sync(query).output — identical to Day 3 — and it came back with sourced facts from the live internet. No search code. The ai-search lesson type swapped in perplexity/sonar and the model did the rest.
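The Day 24 point can be sketched without the library. Everything below — the run_sync stub, the model ids, the canned answers — is illustrative, not the lesson's actual code; it only mirrors the shape of the call to show that the call site is fixed and the model id carries the capability.

```python
# Illustrative stub, not Pydantic AI itself: the calling code never
# changes; which model id you pass decides whether the answer comes
# from weights or from live search.
def run_sync(model: str, query: str) -> str:
    # A real Agent would call the provider here; this stub only
    # mirrors the shape of the call.
    if model == "perplexity/sonar":
        return f"[sourced, live-search answer to: {query}]"
    return f"[weights-only answer to: {query}]"

# Same line of code both times; only the model string differs.
plain = run_sync("openai:gpt-4o", "Who holds the marathon world record?")
searched = run_sync("perplexity/sonar", "Who holds the marathon world record?")
```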
Exactly the design. Your code doesn't change — the model selection does the work. Day 25 added result_type=Fact on top. Two agents, two jobs: one fetches, one structures.
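The two-agent split — one fetches, one structures — can be sketched with plain functions standing in for the agents. The canned search text and the Fact fields here are made up for illustration; in the lesson, a model constrained by result_type=Fact does the structuring.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    claim: str
    source: str

def fetch(query: str) -> str:
    # Stand-in for the search-backed agent (perplexity/sonar in the
    # lesson); returns canned text instead of making a live call.
    return "Mount Everest is 8,849 m tall (nepal.gov survey)"

def structure(raw: str) -> Fact:
    # Stand-in for the second agent, whose result_type=Fact forces the
    # model's output into this schema; here we parse the text directly.
    claim, _, source = raw.rpartition(" (")
    return Fact(claim=claim, source=source.rstrip(")"))

# Two jobs, two steps: fetch raw text, then structure it.
fact = structure(fetch("How tall is Mount Everest?"))
```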
Days 26 and 27 flipped it — tools the agent picks, not me. @agent.tool_plain, two functions, and the model chose based on the prompt. Day 28 chained everything: search, extract, format. Four weeks of patterns in one function.
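The tool-selection pattern from Days 26 and 27 can be sketched library-free. In the lesson the tools are registered with @agent.tool_plain and the model reads the prompt and chooses; here a keyword check stands in for the model's choice, and the tool names and outputs are invented.

```python
# Library-free sketch of model-driven tool selection. Tool names and
# return values are made up for illustration.
def current_time(prompt: str) -> str:
    return "12:00 UTC"

def current_weather(prompt: str) -> str:
    return "clear, 18 degrees"

TOOLS = {"weather": current_weather, "time": current_time}

def run(prompt: str) -> str:
    # The model would decide which tool fits the prompt; a keyword
    # match stands in for that decision here.
    for keyword, tool in TOOLS.items():
        if keyword in prompt.lower():
            return tool(prompt)
    return "no tool needed"

answer = run("What's the weather like right now?")
```

Day 28's chain is then just composition: feed run's output through an extract step and a formatter inside one function.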
That's the shape of the whole track in one lesson. Let's confirm it landed.