Week 1's agents return strings. A string is fine for reading, but useless in a data pipeline. What's the problem with classify_sentiment returning 'positive'?
If I put 200 classifications in a spreadsheet, I can count them. But if the model returns 'mostly positive' on one response and 'positive' on another — they won't group correctly. I need exact, typed values.
That's exactly the problem result_type solves. Define a Pydantic model or a Literal type, pass it to Agent(model, result_type=...), and the API enforces that shape at the response level — not just with a prompt instruction. You get a typed Python object back, not a string you have to parse.
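The exactness this buys is visible in plain Pydantic before any agent is involved — a Literal-typed field rejects anything outside the allowed labels. A minimal sketch (the model and field names here are illustrative, not from the lesson):

```python
from typing import Literal

from pydantic import BaseModel, ValidationError

# Illustrative model: the same kind of type you would pass as result_type
class Sentiment(BaseModel):
    label: Literal["positive", "negative", "neutral"]

# An exact label validates cleanly
ok = Sentiment(label="positive")

# A drifted label like "mostly positive" is rejected, not silently kept
try:
    Sentiment(label="mostly positive")
    drifted_accepted = True
except ValidationError:
    drifted_accepted = False
```

Passing a type like this as result_type means the same constraint is enforced on the model's response, so 'mostly positive' can never reach your spreadsheet.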
So the LLM always gets it right when I use result_type? The structured output is always valid?
No — and that's the misconception worth correcting now. result_type constrains the response format, but the model can still fill a field incorrectly or hallucinate a value. Structured output means the shape is enforced, not that the content is accurate. Validate a sample of your extractions manually before trusting the pipeline output in your thesis.
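That manual validation step doesn't need tooling — hand-label a small sample, then compute an agreement rate against what the pipeline extracted. A sketch (the labels below are made up for illustration):

```python
# Hypothetical spot check: a hand-labeled sample vs. the pipeline's output
gold = ["positive", "negative", "positive", "neutral"]
extracted = ["positive", "negative", "negative", "neutral"]

# Fraction of extractions that match the manual labels
agreement = sum(g == e for g, e in zip(gold, extracted)) / len(gold)
```

If agreement on the sample is low, the shape was enforced but the content wasn't trustworthy — exactly the failure mode result_type cannot catch.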
- result_type=Contact — Pydantic BaseModel with two typed fields
- result_type=Literal["high","medium","low"] — constrained label instead of free text
- result_type=list[str] — extract a list of items from text
- result_type=Ticket — three-field extraction for triage

Goal: by Day 14 you can extract typed, structured data from any text input — the foundation of an AI-powered data pipeline.
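The four shapes above can be sketched as plain type definitions — the exercises define their own fields, so the field names here are assumptions:

```python
from typing import Literal

from pydantic import BaseModel

# Two typed fields (hypothetical field names)
class Contact(BaseModel):
    name: str
    email: str

# Constrained label instead of free text
Priority = Literal["high", "medium", "low"]

# Three-field extraction for triage (hypothetical field names)
class Ticket(BaseModel):
    title: str
    priority: Priority
    summary: str

# Each shape — Contact, Priority, list[str], Ticket — can be passed as result_type
contact = Contact(name="Ada Lovelace", email="ada@example.com")
ticket = Ticket(title="Login fails", priority="high", summary="500 error on submit")
```

Any of these, passed as Agent(model, result_type=...), gives you back a typed object with exactly these fields instead of a string to parse.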