Four weeks. Every pattern from Day 3 to Day 27. Walk me through the capstone before writing it.
Call search_the_web with the topic. Define class Finding(BaseModel): claim: str; source: str; year: int. Create an agent with output_type=list[Finding] and run it on the search result. Format the findings as a numbered markdown list. Return the markdown string.
Perfect. The structured output is a list of Pydantic objects — result.output is a list[Finding]. Iterate it with a list comprehension to format each finding as a markdown line:
from pydantic import BaseModel
from pydantic_ai import Agent

def research_extract_format(topic: str) -> str:
    search_result = search_the_web(topic)

    class Finding(BaseModel):
        claim: str
        source: str
        year: int

    agent = Agent(model, output_type=list[Finding])
    result = agent.run_sync(f"Extract 3 key findings with source and year: {search_result}")
    lines = [f"{i+1}. {f.claim} ({f.source}, {f.year})" for i, f in enumerate(result.output)]
    print(f"Formatted {len(lines)} findings")
    return "\n".join(lines)

What if the search result doesn't contain three findings with clear years?
The model fills what it can. If a year isn't in the text, it may estimate one or fall back to a default — so for thesis use, verify every year against the original source. The list[Finding] type enforces the shape (every object carries all three fields); the count of three comes from the prompt, not the type:
def research_extract_format(topic: str) -> str:
    search_result = search_the_web(topic)

    class Finding(BaseModel):
        claim: str
        source: str
        year: int

    agent = Agent(model, output_type=list[Finding])
    result = agent.run_sync(f"Extract 3 key findings with source and year from: {search_result}")
    lines = [f"{i+1}. {f.claim} ({f.source}, {f.year})" for i, f in enumerate(result.output)]
    markdown = "\n".join(lines)
    print(f"Findings:\n{markdown}")
    return markdown

One function call. I ask a research question, the agent searches the web, extracts three structured findings, and formats them as a reading list I can paste into my literature notes. Three hours of abstract scanning replaced by one Python function.
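One defensive variant worth sketching (a hypothetical refinement, not part of the capstone code): declaring year as optional lets the model return null instead of guessing, and the formatter can render "n.d." (no date) for those entries. A stdlib dataclass stands in for the Pydantic Finding here, and format_findings is an illustrative helper, so the formatting logic runs on its own:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    claim: str
    source: str
    year: Optional[int] = None  # None when the source text gives no year

def format_findings(findings: list[Finding]) -> str:
    # Render "n.d." instead of a guessed year when the year is missing
    lines = [
        f"{i+1}. {f.claim} ({f.source}, {f.year if f.year is not None else 'n.d.'})"
        for i, f in enumerate(findings)
    ]
    return "\n".join(lines)

findings = [
    Finding("Claim A", "Source A", 2020),
    Finding("Claim B", "Source B"),  # no year in the source text
]
print(format_findings(findings))
# → 1. Claim A (Source A, 2020)
#   2. Claim B (Source B, n.d.)
```

The same idea carries over to the Pydantic model by writing year: int | None = None, which makes the unknown-year case explicit rather than silently invented.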
You've built a self-contained research assistant: search → extract → format. Each step is a pattern you built separately across four weeks. The capstone is just the assembly.
I started Week 1 thinking 'AI pipelines' meant something complicated. It's a function that calls three other functions.
The infrastructure is sophisticated. What you write is simple — because you understood each building block before assembling them. That's the point of the track.
def research_extract_format(topic: str) -> str:
    search_result = search_the_web(topic)

    class Finding(BaseModel):
        claim: str
        source: str
        year: int

    agent = Agent(model, output_type=list[Finding])
    result = agent.run_sync(f"Extract 3 findings: {search_result}")
    lines = [f"{i+1}. {f.claim} ({f.source}, {f.year})" for i, f in enumerate(result.output)]
    return "\n".join(lines)

search_the_web (Day 24) → list[Finding] extraction (Days 10+25) → list comprehension format (Days 19+21)
Verify years and sources against original papers before citing in research work.
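The assembly itself can be seen in miniature with stubs in place of the live pieces (both functions below are hypothetical stand-ins: search_the_web returns canned text instead of searching, and extract_findings fakes the agent's structured extraction). The point is only that the pipeline is three function calls in sequence:

```python
def search_the_web(topic: str) -> str:
    # Stub for the Day 24 tool: canned text instead of a live search
    return f"Results about {topic}: finding one; finding two; finding three"

def extract_findings(text: str) -> list[dict]:
    # Stub for the agent's structured extraction (list[Finding] in the real code)
    return [{"claim": part.strip(), "source": "stub", "year": 2024}
            for part in text.split(":")[1].split(";")]

def research_extract_format(topic: str) -> str:
    search_result = search_the_web(topic)       # search
    findings = extract_findings(search_result)  # extract
    lines = [f"{i+1}. {f['claim']} ({f['source']}, {f['year']})"
             for i, f in enumerate(findings)]   # format
    return "\n".join(lines)

print(research_extract_format("memory"))
# → 1. finding one (stub, 2024)
#   2. finding two (stub, 2024)
#   3. finding three (stub, 2024)
```

Swapping the stubs for the real tool and the real agent is the only change between this sketch and the capstone.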