Before writing any code, take stock of what the track covered. Week 1 was basic agent calls and chains. Week 2 was result_type and Pydantic models. Week 3 was multi-agent pipelines and batch processing. Week 4 added live web search and Python tools. What do you think a capstone that uses all four weeks looks like?
A pipeline with a search agent, then an extraction agent, then a formatting step? Web query in, structured report out?
Exactly the shape. Three steps, three different tools. Step one — a search agent queries the live web. Step two — an extraction agent with result_type=Fact parses the result into named fields. Step three — a plain Python f-string assembles the lines. Here is the three-step function:
from pydantic import BaseModel
from pydantic_ai import Agent

class Fact(BaseModel):
    fact: str
    source: str

def research_report(query: str) -> str:
    # Step one: a plain agent handles the live web search (model configured earlier in the track).
    search_agent = Agent(model)
    search_result = search_agent.run_sync(query).output
    # Step two: a focused agent parses the result into the Fact schema.
    extract_agent = Agent(model, result_type=Fact, system_prompt="Extract the most important fact and its source.")
    extracted = extract_agent.run_sync(search_result).output
    # Step three: plain Python assembles the report; no third API call needed.
    lines = ["Query: " + query, "Fact: " + extracted.fact, "Source: " + extracted.source]
    report = "\n".join(lines)
    print(f"Agent: {report}")
    return report
The final step isn't an agent at all? Just an f-string?
That is the design lesson of the whole track. Use AI where you need intelligence — search, extraction, classification. Use Python where you don't — formatting, arithmetic, assembling strings. Mixing them here would be wasteful: the model already filled extracted.fact and extracted.source, so the formatting is a line of f-string, not a third API call.
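To make the split concrete, here is a minimal runnable sketch of just the Python half: a Fact instance (the values are made up) formatted with a plain f-string, no API call involved.

```python
from pydantic import BaseModel

class Fact(BaseModel):
    fact: str
    source: str

# Pretend the extraction agent already returned this typed instance.
extracted = Fact(fact="Water boils at 100 C at sea level", source="example.org")

# Pure Python formatting: no model call, no latency, no cost.
report = f"Fact: {extracted.fact}\nSource: {extracted.source}"
print(report)
```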
Two agents, each doing one thing well — search, then extract — then Python does the last step?
Exactly. One focused agent per job beats one overloaded agent. The search agent is tuned for retrieval; the extract agent is tuned for schema compliance. Each responsibility stays narrow, each result stays reliable. The final shape of the three-line core:
search_result = search_agent.run_sync(query).output
extracted = extract_agent.run_sync(search_result).output
lines = ["Query: " + query, "Fact: " + extracted.fact, "Source: " + extracted.source]
So the whole month fits into one function. And wrapping it in a list comprehension from Week 3 would generate a hundred reports in one script?
That is the payoff of learning the patterns cleanly. Each step is small and focused. Chain them and you have a research assistant. Loop over a list of queries and you have a briefing generator. Write research_report(query) now — you have earned this one.
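The batch pattern from Week 3 can be sketched with a stub standing in for research_report; the real function calls the live API, so a hypothetical placeholder keeps the example runnable:

```python
# Stub standing in for the real research_report, which would call the agents.
def research_report(query: str) -> str:
    return f"Query: {query}\nFact: ...\nSource: ..."

queries = ["solar capacity 2024", "EU AI Act status", "fusion milestones"]

# One list comprehension turns the pipeline into a briefing generator.
reports = [research_report(q) for q in queries]
print(len(reports))
```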
TL;DR: search → extract → format. Each step uses a different tool.
| Step | Technique | Introduced |
|---|---|---|
| Search | Agent(model).run_sync(query).output | Week 1 & 4 |
| Extract | result_type=Fact → .model_dump() | Week 2 |
| Format | f-string on a Pydantic instance | Week 1 |
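The Extract row mentions .model_dump() from Week 2; on a Pydantic v2 model it converts the typed instance into a plain dict, which is useful when you want to serialize rather than format. A quick sketch with made-up values:

```python
from pydantic import BaseModel

class Fact(BaseModel):
    fact: str
    source: str

fact = Fact(fact="Example fact", source="example.org")

# model_dump() returns a plain dict, ready for json.dumps or a database row.
print(fact.model_dump())  # {'fact': 'Example fact', 'source': 'example.org'}
```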
Use AI where you need intelligence; use Python everywhere else.