Yesterday's extract_contact returned a dict because a Pydantic model forced the shape. What if the output isn't a structured object — just one of three strings?
Like labelling an inbound request as high, medium, or low priority before it hits the triage queue. You only want those three words — nothing else.
That's exactly where result_type=Literal["high", "medium", "low"] earns its place. You import Literal from typing and hand it to the Agent the same way you handed it a Pydantic model — the LLM must pick from your list, not compose a response:
```python
from typing import Literal

from pydantic_ai import Agent

def classify_urgency(text: str) -> str:
    agent = Agent(model, result_type=Literal["high", "medium", "low"])
    result = agent.run_sync(text)
    return result.output
```

Wait — no .strip().lower() this time? Yesterday's classify_sentiment needed that chain to survive the model returning "Positive." with a capital and a period.
Right distinction. With a plain system_prompt, the model returns free text you then normalize. With result_type=Literal[...], PydanticAI validates the output against the enum before your code ever sees it. If the model drifts — returns "High" or "MEDIUM" — PydanticAI rejects it and retries. You get back exactly "high", "medium", or "low". No defensive string handling needed.
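Under the hood, that validate-and-reject step amounts to an exact membership check against the Literal's declared values. Here is a minimal sketch using only the standard library — the names Urgency and validate_label are illustrative, not part of PydanticAI's API:

```python
from typing import Literal, get_args

Urgency = Literal["high", "medium", "low"]
ALLOWED = get_args(Urgency)  # ("high", "medium", "low")

def validate_label(raw: str) -> str:
    # Exact match only: "High" or "MEDIUM" is rejected outright,
    # which is what triggers a retry rather than silent normalization.
    if raw not in ALLOWED:
        raise ValueError(f"{raw!r} is not one of {ALLOWED}")
    return raw
```

The key design point: rejection, not correction. The validator never guesses what the model meant; it makes the model try again.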
So the LLM literally has to return one of my three options. That's the difference between a tool I trust in a pipeline and one I have to babysit.
And that's why Week 2 is about structure. A sentiment classifier that returns "Somewhat negative, with caveats" breaks every router downstream. A Literal classifier that returns "low" never does.
I'm already thinking about the request queue. High goes to a human immediately, medium gets batched, low auto-replies. All I need is this one function at the intake point.
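That three-way split becomes a plain dictionary lookup once the label is guaranteed. A hypothetical router — the queue names here are invented for illustration:

```python
def route(urgency: str) -> str:
    # Safe to index directly: the Literal result_type guarantees
    # that urgency is exactly one of these three keys.
    routes = {
        "high": "human_review",    # escalate immediately
        "medium": "batch_queue",   # handled in the next batch run
        "low": "auto_reply",       # canned response, no human needed
    }
    return routes[urgency]
```

No default branch, no fuzzy matching: the upstream validation is what makes the bare `routes[urgency]` lookup safe.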
Which is exactly where it belongs. One clean label at the intake gate is worth more than a paragraph of nuance that no downstream code can consume. Here's the complete function:
```python
from typing import Literal

from pydantic_ai import Agent

def classify_urgency(text: str) -> str:
    agent = Agent(model, result_type=Literal["high", "medium", "low"])
    result = agent.run_sync(text)
    return result.output
```

Next up: result_type=list[str] — when the output isn't one label but a whole list of items.
## result_type=Literal vs system_prompt for Label Output

Two ways to get a short label from an agent — only one is safe in a pipeline.
- **system_prompt approach** — asks the model to return one word. Works most of the time. Fails when the model adds punctuation or capitalization. You add .strip().lower() to catch the drift.
- **result_type=Literal["high", "medium", "low"] approach** — PydanticAI validates the output against the enum on every call. Non-matching output triggers an automatic retry. Your code receives exactly one of the declared strings.
| Scenario | Pattern |
|---|---|
| Three or fewer fixed options, used in routing logic | result_type=Literal[...] |
| Open-ended label where the set is too large to enumerate | system_prompt + .strip().lower() |
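For the second row, the defensive chain looks something like this — a sketch of the normalization the text describes, with a trailing-period strip added as an assumption since .strip() alone won't remove punctuation:

```python
def normalize_label(raw: str) -> str:
    # Catch common drift in free-text output: surrounding whitespace,
    # capitalization, and a trailing period like "Positive."
    return raw.strip().lower().rstrip(".")
```

It handles the drift you anticipate; anything else still slips through, which is exactly why this pattern is reserved for label sets too large to enumerate.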
Literal lives in the standard library — from typing import Literal. No extra install needed.