Four weeks of agent calls, result_type, pipelines, web search, and tools. How does Day 3's first run_sync look from here?
Tiny. Writing Agent(model).run_sync(prompt).output on Day 3 felt like the whole lesson. Now it is one line inside a pipeline that searches, extracts, and formats a full report.
That is the arc — the call stayed the same; the composition around it grew. Which pattern clicked hardest?
result_type. Replacing regex and .split() with a Pydantic class made me realize I was writing parsers I didn't need to. One class definition and the model fills it.
That is the cleanest win of the track. Let's close Week 4.
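The result_type win praised above can be sketched without an API key. The schema below is illustrative (ArticleSummary and its fields are not from the track), and a plain dict stands in for the model's structured reply so the validation step runs offline; with a live agent you would pass the class as result_type and read the filled instance from .output, as the transcript does.

```python
from pydantic import BaseModel


class ArticleSummary(BaseModel):
    title: str
    key_points: list[str]


# In the track, the class itself replaces hand-written parsing:
#   agent = Agent(model, result_type=ArticleSummary)
#   summary = agent.run_sync(prompt).output
# Here a dict stands in for the model's structured reply.
raw = {"title": "Agents in Four Weeks", "key_points": ["run_sync", "result_type"]}
summary = ArticleSummary.model_validate(raw)

print(summary.title)       # typed attribute access — no regex, no .split()
```

The point of the pattern is that validation and parsing collapse into the class definition: a malformed reply raises a ValidationError instead of silently producing bad fields.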
Week 4 added live web search via Perplexity and custom Python tools via @agent.tool_plain. Swapping the model changed the capability without changing your code — Agent(model).run_sync(query).output now reads the live web. Adding a tool lets the agent reach for exact Python whenever the prompt calls for computation.
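A tool of the kind this recap describes can be sketched as plain Python. The function below is hypothetical (days_until is not from the track), and the @agent.tool_plain registration is shown only in a comment so the sketch runs without an agent or API key.

```python
from datetime import date


# With a live agent this would be registered via the decorator
# the lesson names:
#   @agent.tool_plain
def days_until(target_iso: str) -> int:
    """Return the exact number of days from today until an ISO date."""
    return (date.fromisoformat(target_iso) - date.today()).days
```

When a prompt asks "how many days until 2026-01-01?", the agent calls the function instead of estimating — the "exact Python" behavior the recap points to.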
The whole track fits into one function: a search agent retrieves, an extraction agent enforces a Pydantic schema, and a plain Python f-string assembles the report. Every week you wrote a different shape around the same run_sync primitive — string output, typed output, two-agent chain, batch, search, tool use.
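That one-function shape can be sketched with stand-ins for the two agent calls — fake_search, fake_extract, and the Facts schema are hypothetical placeholders, not the track's code — so the composition itself runs offline:

```python
from pydantic import BaseModel


class Facts(BaseModel):
    # The extraction agent's result_type in the real pipeline.
    topic: str
    points: list[str]


def fake_search(query: str) -> str:
    # Stand-in for the search agent: Agent(search_model).run_sync(query).output
    return f"Raw web text about {query}."


def fake_extract(text: str) -> Facts:
    # Stand-in for the extraction agent, which would enforce the schema.
    return Facts(topic="agents", points=["search", "extract", "format"])


def research_report(query: str) -> str:
    raw = fake_search(query)       # 1. retrieve
    facts = fake_extract(raw)      # 2. enforce the Pydantic schema
    bullets = "\n".join(f"- {p}" for p in facts.points)
    return f"Report: {facts.topic}\n{bullets}"  # 3. plain f-string assembly


print(research_report("agents"))
```

Swapping either stand-in for a real run_sync call changes a line, not the shape — which is the track's closing argument.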
Next up in the explorers track, agents start pulling data from external services on demand — the pattern you learned here scales directly into retrieval-augmented workflows.