Day 6 used .strip().lower() to normalise a free-text sentiment label. That works — but you are doing cleanup work after the model. What if the model was constrained to return only valid values?
You'd need a way to tell it the exact options. Like a dropdown — pick one, no freeform allowed.
result_type=Literal["high", "medium", "low"] is exactly that. The model gets a schema with three valid options — it must pick one:
from typing import Literal

from pydantic_ai import Agent

def classify_urgency(text: str) -> str:
    return Agent(model, result_type=Literal["high", "medium", "low"]).run_sync(text).output

No .strip().lower() needed? What if the model returns something outside the three options?
PydanticAI validates the output against the schema before returning it. If the model produces something outside the Literal, PydanticAI raises an error rather than returning a garbage value. Compare the two approaches:
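That validation step is ordinary Pydantic validation, so you can see the mechanism without calling a model at all. A minimal sketch using pydantic.TypeAdapter against the same Literal — TypeAdapter here is just to illustrate the check; PydanticAI wires this up for you internally:

```python
from typing import Literal

from pydantic import TypeAdapter, ValidationError

# The same closed set of labels the agent is constrained to.
urgency = TypeAdapter(Literal["high", "medium", "low"])

print(urgency.validate_python("high"))   # an in-set value passes through unchanged

try:
    urgency.validate_python("very high")  # an out-of-set value is rejected
except ValidationError:
    print("rejected: not one of high/medium/low")
```

An in-set value passes through unchanged; anything else raises ValidationError, which is the loud failure described below.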
# Day 6 — soft constraint, cleanup required
agent = Agent(model, system_prompt="Respond with one word: positive, neutral, or negative.")
result = agent.run_sync(text)
result.output.strip().lower()  # still might return 'very positive'
# Day 11 — hard constraint, framework enforces it
Agent(model, result_type=Literal["high", "medium", "low"]).run_sync(text).output

So I am not trusting the model to stay in bounds — the framework is enforcing it. That changes everything for production code.
Welcome to structured output. The LLM doesn't always get it right on the first try — but with result_type, when it comes back wrong it fails loudly instead of silently. Loud failures are much easier to debug than silent bad data.
A three-tier urgency classifier in two lines, with type enforcement. That is a proper CRM triage function.
And notice: no Pydantic class needed for a Literal. One import, one type hint. Keep it simple when the output is a small fixed set.
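For contrast, here is roughly where that line sits: a bare Literal for a single constrained value, a Pydantic model only once the output has more than one field. The Triage model below is a hypothetical example, not from the lesson:

```python
from typing import Literal

from pydantic import BaseModel

# One constrained value: a bare Literal is enough.
Urgency = Literal["high", "medium", "low"]

# Several fields: now a model earns its keep.
class Triage(BaseModel):
    urgency: Urgency
    summary: str

ticket = Triage(urgency="high", summary="Customer locked out of account")
print(ticket.urgency)  # 'high'
```

The Literal still does the constraining inside the model, so the urgency field gets the same hard guarantee either way.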
result_type=Literal for Constrained Strings

from typing import Literal

result = Agent(model, result_type=Literal["high", "medium", "low"]).run_sync(text)
result.output  # 'high', 'medium', or 'low' — guaranteed

Day 6 used a tight system_prompt plus .strip().lower() to normalise the output. That approach is a soft constraint — the model could still return "very high" or "critical". Literal is a hard constraint enforced at the framework level: PydanticAI validates the model's output against the Literal schema before returning it, and invalid output raises an error instead of passing through. Use Literal whenever your downstream code depends on exact string matching.
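That exact-string-matching point is the practical payoff: once the value set is closed, downstream code can key off it directly. A hypothetical sketch — the SLA numbers and the route function are illustrative, not part of the lesson:

```python
from typing import Literal

Urgency = Literal["high", "medium", "low"]

# Keying a dict off the labels is safe only because the framework
# guarantees the classifier's output is in this closed set.
SLA_HOURS = {"high": 1, "medium": 8, "low": 48}

def route(urgency: Urgency) -> int:
    # No .get() fallback, no normalisation — exact match is guaranteed upstream.
    return SLA_HOURS[urgency]

print(route("high"))  # 1
```

With the Day 6 soft constraint, this lookup would need a fallback for values like "very high"; with the hard constraint, a plain indexed lookup is enough.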