Four weeks. From counting results to a deployable research assistant. Looking back, which pattern surprised you most?
The split of responsibility in the structured brief — let the agent produce summary and keywords, but overwrite the sources list with real URLs from Python. I wouldn't have thought to treat sources as ground truth separate from the agent's output.
That instinct — knowing when to trust the model and when to trust your own data — is the real RAG skill. Let's confirm a few last details before you close out the track.
One thing I want to nail — when I make a Pydantic result_type with a field like sources: list, does Pydantic care what's in the list? Or is list always accepted?
Plain list is untyped — any list passes validation. Typed as list[str] it enforces string elements. For this capstone we used plain list because we overwrite the field with our own URLs anyway — but in production code list[str] is the stricter, safer choice.
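A minimal sketch of that difference, assuming Pydantic v2 (the model names `LooseBrief` and `StrictBrief` are illustrative, not from the capstone):

```python
from pydantic import BaseModel, ValidationError

class LooseBrief(BaseModel):
    sources: list       # untyped container: element types are not checked

class StrictBrief(BaseModel):
    sources: list[str]  # typed container: every element must be a string

# Any list passes the untyped field, even with mixed element types.
loose = LooseBrief(sources=["https://example.com", 42, {"not": "a url"}])

# The typed field rejects non-string elements.
try:
    StrictBrief(sources=["https://example.com", {"not": "a url"}])
except ValidationError:
    print("rejected: non-string element in sources")
```

Since the capstone overwrites `sources` with real URLs after the agent runs, the loose version is harmless there; anywhere the model's own list is kept, the typed version catches bad output at validation time instead of downstream.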
Five retrieval-augmented functions:
- retrieve_and_answer — minimal RAG: search + agent answer
- structured_research_brief — Pydantic Brief, sources overwritten from URLs
- multi_query_brief — fan-out extend, flat merged context
- confidence_rated_answer — Literal confidence combined with free-form answer
- research_assistant — capstone: multi-query + dedup + Report + overwritten sources

Key patterns: context goes before the question in the prompt. "Answer from context" keeps the model grounded. extend for merging, set for dedup, Pydantic for structure, .model_dump() for the final dict.
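The capstone patterns can be sketched together in one function. This is a hedged outline, not the track's actual code: `fake_search` stands in for the real retrieval call, and the agent's answer is stubbed with a placeholder string, but the data flow (extend to merge, set to dedup, Pydantic Report, overwritten sources, `.model_dump()`) mirrors the summary above:

```python
from pydantic import BaseModel

class Report(BaseModel):
    summary: str
    sources: list

def fake_search(query: str) -> list[dict]:
    # Stand-in for the real search tool; returns passages with URLs.
    corpus = {
        "rag basics": [{"text": "RAG pairs retrieval with generation.",
                        "url": "https://a.example"}],
        "rag eval":   [{"text": "Grounding is checked against context.",
                        "url": "https://a.example"},
                       {"text": "Confidence can be rated per answer.",
                        "url": "https://b.example"}],
    }
    return corpus.get(query, [])

def research_assistant(queries: list[str], question: str) -> dict:
    passages: list[dict] = []
    for q in queries:                      # fan-out: one search per sub-query
        passages.extend(fake_search(q))    # extend → flat merged context

    urls = sorted({p["url"] for p in passages})  # set → deduplicated URLs

    # Context goes before the question; "Answer from context" grounds the model.
    prompt = ("Answer from context.\n\nContext:\n"
              + "\n".join(p["text"] for p in passages)
              + f"\n\nQuestion: {question}")

    # A real agent call on `prompt` would go here; stubbed for the sketch.
    report = Report(summary="(agent answer would go here)", sources=[])
    report.sources = urls                  # overwrite with ground-truth URLs

    return report.model_dump()             # final plain dict

result = research_assistant(["rag basics", "rag eval"], "What is RAG?")
```

The overwrite on `sources` is the responsibility split discussed above: the agent never gets the chance to hallucinate a URL, because Python owns that field.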