Week 1 agents return strings. Strings are flexible — but a string that sometimes reads "positive", sometimes "Positive.", and occasionally "The tone is positive" breaks every downstream sort. What's missing?
A schema. In my codebook I'd define the exact categories — high, medium, low — and coders must stay within them. The equivalent here is telling the model it must return exactly one of those strings.
That's result_type. result_type=Literal["high","medium","low"] means the model must return exactly one of those three strings or the call fails. Setting result_type to a Pydantic model means the model must return JSON that validates against your schema. Week 2 is codebook-compliant extraction.
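The validation the two result_type options buy you can be seen with plain Pydantic, no agent call needed. A minimal sketch — the Urgency alias and the Paper model with its title/year fields are illustrative, not from the course; with PydanticAI you would pass these same types as result_type:

```python
from typing import Literal
from pydantic import BaseModel, TypeAdapter, ValidationError

# Literal schema: exactly one of three strings is valid output.
Urgency = Literal["high", "medium", "low"]
adapter = TypeAdapter(Urgency)

assert adapter.validate_python("high") == "high"
try:
    # Free-form text like Week 1's "The tone is positive" fails here.
    adapter.validate_python("Positive.")
except ValidationError:
    print("rejected")

# BaseModel schema: the returned dict must carry these typed fields.
class Paper(BaseModel):
    title: str
    year: int

paper = Paper.model_validate({"title": "On Agents", "year": 2024})
print(paper.year)
```

The point is that "high" vs "High." is no longer a judgment call downstream: anything outside the schema raises a ValidationError at the boundary.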
And if the model can't fill in a required field — say journal is not mentioned in the abstract — does the call fail?
PydanticAI will retry the model with a correction prompt. If after retries the field is still missing, the call raises a validation error. That's the trade-off: higher reliability, occasionally more latency on hard cases. Week 2 builds the structured extraction layer that Week 4's capstone depends on.
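The missing-field behaviour is ordinary Pydantic validation underneath. A sketch, assuming a hypothetical PaperMeta model: a missing required field is exactly the error PydanticAI's correction prompt retries against, while marking the field Optional lets extraction succeed when the journal genuinely isn't in the abstract:

```python
from typing import Optional
from pydantic import BaseModel, ValidationError

class PaperMeta(BaseModel):
    title: str                      # required: missing -> validation error
    journal: Optional[str] = None   # optional: absent journal is allowed

# Missing required field: raises, which is what triggers the retry loop.
try:
    PaperMeta.model_validate({"journal": "Nature"})
except ValidationError as e:
    print("validation failed:", e.error_count(), "error(s)")

# Optional field: extraction succeeds even when journal isn't mentioned.
meta = PaperMeta.model_validate({"title": "On Agents"})
print(meta.journal)  # None
```

Choosing between required and Optional is a codebook decision: required fields enforce completeness at the cost of retries and occasional failures; Optional fields accept gaps and push the "missing" handling downstream.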
- extract_contact: Pydantic model output with .model_dump()
- classify_urgency: Literal output for guaranteed one-of-N strings
- extract_action_items: list output for bullet-point extraction
- summarize_and_classify: summarise → classify with Literal (two-agent chain)
- triage_ticket: multi-field Pydantic model for structured research task extraction

Goal: a triage function that extracts a structured task from any reviewer comment or co-author request.
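A sketch of what the multi-field schema behind triage_ticket might look like — the ResearchTask name and its fields are assumptions, not the course's solution; in PydanticAI this class would be the agent's result_type:

```python
from typing import Literal
from pydantic import BaseModel

# Illustrative multi-field schema for a structured research task.
class ResearchTask(BaseModel):
    summary: str
    urgency: Literal["high", "medium", "low"]
    requester: str
    action_items: list[str]

# Validate a dict of the shape the model would be asked to produce.
task = ResearchTask.model_validate({
    "summary": "Revise methods section per Reviewer 2",
    "urgency": "high",
    "requester": "Reviewer 2",
    "action_items": ["clarify sampling", "add robustness check"],
})
print(task.model_dump())
```

Because every exercise above funnels through a schema like this, the Week 4 capstone can sort, filter, and route tasks on typed fields instead of parsing prose.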