Before any code, take stock. Week 1 typed decisions. Week 2 memory and multi-turn history. Week 3 orchestration over compressed state. Week 4 retrieval, critic scores, multi-field plans. What does a capstone using every week look like?
A function that takes a goal and a history list, searches the web for context, builds a typed plan, and returns a response plus updated history?
Exactly the shape. The Plan model carries the structured output. Start with the class:
```python
from pydantic import BaseModel
from typing import Literal

class Plan(BaseModel):
    steps: list[str]
    priority: Literal["low", "medium", "high"]
    estimated_actions: int
```

Search, plan, reply, history — all four steps in one function, using every week's pattern exactly once?
That is the design lesson. Each step does one thing well — retrieval, typed planning, stateful reply, memory update. The orchestrator threads them together with plain Python variables. No framework, no global state, just function outputs flowing into the next call.
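Before any orchestration, the Plan model can be sanity-checked on its own — Pydantic enforces the field types and the priority vocabulary at the boundary, the same Week 1 guarantee the planner agent will rely on. A minimal sketch:

```python
from pydantic import BaseModel, ValidationError
from typing import Literal

class Plan(BaseModel):
    steps: list[str]
    priority: Literal["low", "medium", "high"]
    estimated_actions: int

# A well-formed plan round-trips through model_dump, exactly as the
# orchestrator will do with the planner's structured output.
plan = Plan(steps=["search", "summarize"], priority="high", estimated_actions=2)
print(plan.model_dump())

# An out-of-vocabulary priority is rejected before it can reach the assistant.
try:
    Plan(steps=["search"], priority="urgent", estimated_actions=1)
except ValidationError:
    print("rejected: priority must be low, medium, or high")
```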
Two agents: one planner, one assistant. Planner sees context and returns a Plan. Assistant sees history and plan and returns the actual reply?
Two narrow agents, shared state through plain values. Same pattern you built in Week 3 — specialists sharing context — scaled up to include retrieval and history. The assistant never retrieves; the planner never tracks history. Each does one thing. The full orchestrator:
```python
def full_agent_run(goal: str, history: list) -> dict:
    results = search(goal, count=3)
    context = "\n".join(r["snippet"] for r in results)
    planner = Agent(model, result_type=Plan, system_prompt=f"Context:\n{context}")
    plan = planner.run_sync(goal).output.model_dump()
    assistant = Agent(model, system_prompt=f"Prior turns: {history}. Plan: {plan}")
    reply = assistant.run_sync(goal).output
    out = {"reply": reply, "plan": plan, "history": history + [goal, reply]}
    print(f"Agent capstone: {out}")
    return out
```

So the capstone is the whole track in one function — typed planning, memory, retrieval, multi-turn state — all composed with plain dicts and lists?
The whole track in one function. This is the stateful multi-turn assistant you started the month aiming for. Plain Python wiring, narrow agents, clean memory, rich plans. Ship it.
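To see the stateful loop in action without burning API calls, the same wiring can be exercised with deterministic stand-ins — a sketch where `fake_search`, `fake_plan`, and `fake_reply` are hypothetical stubs replacing the real search tool and the two agents, so only the state flow is under test:

```python
# Sketch: same shape as full_agent_run, with LLM and web search stubbed
# out so the history-threading pattern is visible and testable.

def fake_search(goal: str, count: int = 3) -> list[dict]:
    # Stand-in for the Week 4 search tool.
    return [{"snippet": f"fact {i} about {goal}"} for i in range(count)]

def fake_plan(goal: str, context: str) -> dict:
    # Stand-in for the typed planner agent's model_dump() output.
    return {"steps": [f"research {goal}", "draft reply"],
            "priority": "medium", "estimated_actions": 2}

def fake_reply(goal: str, history: list, plan: dict) -> str:
    # Stand-in for the assistant agent; each turn adds two history entries.
    return f"reply to {goal!r} (turn {len(history) // 2 + 1})"

def full_agent_run(goal: str, history: list) -> dict:
    results = fake_search(goal, count=3)
    context = "\n".join(r["snippet"] for r in results)
    plan = fake_plan(goal, context)
    reply = fake_reply(goal, history, plan)
    return {"reply": reply, "plan": plan, "history": history + [goal, reply]}

# Two turns: the second call receives the first call's updated history.
turn1 = full_agent_run("learn pydantic", [])
turn2 = full_agent_run("now quiz me", turn1["history"])
print(len(turn2["history"]))  # 4: two goals, two replies
```

The point of the stub version is that nothing about the wiring changes — plain values in, plain dict out — whether the steps are fakes or real agents.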
TL;DR: search, plan, reply, update history — four steps, four patterns, one dict out.
- `search(goal, count=3)` — retrieval (Week 4)
- `result_type=Plan` — typed plan (Weeks 1 and 4)
- `system_prompt` with history + plan (Week 2)
- `history + [goal, reply]` — stateful update (Week 2)

| Week | Role in capstone |
|---|---|
| Week 1 | Typed planner |
| Week 2 | Multi-turn history |
| Week 3 | Narrow specialists |
| Week 4 | Retrieval and rich plan |
Clean composition of narrow agents is the payoff of the whole track.