Week 1 returned strings. Week 2 returned typed objects. What's the biggest change in how you think about agent output now?
I stopped treating the output as text to parse. It's a Python object — a dict, a list, a Literal string. I just use it directly.
That shift is the whole point. When you tell the agent what shape to return, it has to return that shape. The framework validates it — your code never needs to parse a wall of text.
The misconception I had was that the LLM always gets it right. Now I know: result_type doesn't make it always right — it makes failures loud instead of silent.
Exactly. Structured output doesn't prevent the model from being wrong. It prevents wrong output from silently passing through. Let's see what stuck from both weeks.
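The "loud instead of silent" behavior above can be sketched with plain Pydantic, independent of any agent framework. This is a minimal illustration of the validation idea, assuming Pydantic v2 is installed; the `Sentiment` schema and its field names are hypothetical, chosen only for the example.

```python
from typing import Literal
from pydantic import BaseModel, ValidationError

# Hypothetical schema an agent might be asked to return (names are illustrative).
class Sentiment(BaseModel):
    label: Literal["positive", "negative", "neutral"]
    confidence: float

# Well-shaped output validates into a typed object you use directly -- no parsing.
ok = Sentiment.model_validate({"label": "positive", "confidence": 0.92})
assert ok.label == "positive"

# Malformed output fails loudly with a ValidationError instead of
# slipping through as a plausible-looking wall of text.
try:
    Sentiment.model_validate({"label": "great!!", "confidence": 0.92})
except ValidationError as exc:
    print("validation failed:", exc.error_count(), "error(s)")
```

The key point the dialogue makes is visible here: validation does not stop the model from producing a wrong label, but it guarantees the wrong label raises an exception at the boundary rather than flowing silently into downstream code.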