Foundations day 12 used json.loads on a free-text response asking for JSON. Pydantic-AI has a cleaner pattern: declare a pydantic model, pass it as output_type, and the library handles request shape, parsing, and validation.
```python
from pydantic import BaseModel
from pydantic_ai import Agent

class Verdict(BaseModel):
    sentiment: str
    confidence: float

result = Agent(model, output_type=Verdict).run_sync(
    'Classify the sentiment of "I love it" as positive, negative, or neutral. Provide a confidence score from 0 to 1.'
)

print(result.output)             # Verdict(sentiment='positive', confidence=0.95)
print(result.output.sentiment)   # 'positive'
print(result.output.confidence)  # 0.95
```

No json.loads, no try/except, no manual key-presence check. The library does it all from the type definition.
Right. The pydantic model is the schema, the validator, and the parser all at once. If the model returns malformed output, pydantic-AI retries automatically (up to its default cap). If retries are exhausted, you get an error you can catch.
When would I still hand-roll JSON?
When you need cross-model compatibility (some smaller models don't support structured outputs well), when the schema is dynamic (built at runtime), or in environments without pydantic. For 95% of production code in pydantic-AI, output_type= is the cleaner default.
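For contrast, the hand-rolled pattern being replaced looks roughly like this. A minimal sketch: `raw` stands in for the model's free-text response, and the field names are illustrative.

```python
import json

raw = '{"sentiment": "positive", "confidence": 0.95}'  # stands in for an LLM response

try:
    data = json.loads(raw)
except json.JSONDecodeError:
    data = None  # in real code: re-prompt the model or give up

# Manual key-presence and type checks that output_type= makes unnecessary
if data is not None and "sentiment" in data and "confidence" in data:
    sentiment = str(data["sentiment"])
    confidence = float(data["confidence"])
```

Every one of those steps (parse, check keys, coerce types, decide what to do on failure) is code you no longer write or maintain.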
The general pattern with output_type=:

```python
from pydantic import BaseModel
from pydantic_ai import Agent

class MyOutput(BaseModel):
    field_a: str
    field_b: int
    field_c: list[str]

result = Agent(model, output_type=MyOutput).run_sync(prompt)
result.output  # already a MyOutput instance
```

Access fields as attributes: result.output.your_field instead of result.output['your_field']. This replaces the entire json.loads + key-check + retry-on-failure pattern from Foundations.
| Situation | Use |
|---|---|
| Multi-field, multi-type structured output | output_type= |
| Production reliability requirements | output_type= |
| Cross-model script (some models don't support typed output well) | Manual JSON |
| Schema built dynamically from config | Manual JSON or model factory |
| Quick scripts, single-field response | Either works |
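The "model factory" row in the table can be handled with pydantic's create_model, which builds a BaseModel subclass at runtime. A sketch, assuming the field spec arrives from config (the names here are illustrative):

```python
from pydantic import create_model

# Field spec as it might arrive from config: name -> (type, default or required)
spec = {"name": (str, ...), "age": (int, ...)}  # ... marks a required field

DynamicOutput = create_model("DynamicOutput", **spec)

# The generated class validates like any hand-written model,
# so it can be passed as output_type= just the same.
instance = DynamicOutput(name="Ada", age=36)
```

This keeps typed validation even when the schema isn't known until runtime, so "dynamic schema" doesn't have to mean falling all the way back to manual JSON.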
```python
# One field
class Sentiment(BaseModel):
    label: str

# Multiple fields, mixed types
class Extraction(BaseModel):
    name: str
    age: int
    skills: list[str]

# Nested
class Address(BaseModel):
    street: str
    city: str

class Person(BaseModel):
    name: str
    address: Address

# Constrained
from pydantic import Field

class Score(BaseModel):
    value: int = Field(ge=1, le=10)  # 1-10 inclusive
```

Pydantic's full type system is available: Optional, Union, Literal, Enum, validators. The model has to satisfy them all to be considered valid.
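As a sketch of those richer constraints (the class and validator names here are illustrative), a Literal field combined with a bounded float and a field_validator:

```python
from typing import Literal
from pydantic import BaseModel, Field, field_validator

class StrictVerdict(BaseModel):
    # The model must return exactly one of these strings
    sentiment: Literal["positive", "negative", "neutral"]
    confidence: float = Field(ge=0.0, le=1.0)  # bounded to [0, 1]

    @field_validator("sentiment", mode="before")
    @classmethod
    def normalize(cls, v):
        # Tolerate casing differences like "Positive" before validation
        return v.lower() if isinstance(v, str) else v

v = StrictVerdict(sentiment="Positive", confidence=0.95)
```

If the model's output violates any of these (an out-of-vocabulary label, a confidence of 1.3), validation fails and pydantic-AI's retry machinery kicks in, exactly as with a plain model.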
A run_sync with typed output is one LLM call (or a few, if pydantic-AI's auto-retry kicks in). Same cost ballpark as untyped. The wins are in code clarity and reliability, not cost.
Define a small pydantic model with two fields. Get a typed output. Inspect both fields. No manual parsing.