A model that returns prose is fine for humans. For programs you want structure — JSON your code can parse, fields you can index. Two ways to ask:
```python
# 1. Plain string output, ask the model for JSON
import json

from pydantic_ai import Agent

prompt = 'Classify the sentiment of "I love it" and return JSON: {"sentiment": "positive" or "negative" or "neutral"}. Return just the JSON, no other text.'
result = Agent(model).run_sync(prompt)
parsed = json.loads(result.output.strip())
print(parsed["sentiment"])
```
```python
# 2. pydantic_ai's typed output (cleaner — auto-parsed and validated)
from pydantic import BaseModel
from pydantic_ai import Agent

class Verdict(BaseModel):
    sentiment: str

result = Agent(model, output_type=Verdict).run_sync('Classify "I love it" as positive, negative, or neutral.')
print(result.output.sentiment)  # already a Verdict, no manual parsing
```

The pydantic version skips the json.loads step entirely?
Right — pydantic_ai handles the parse + validation under the hood. For lessons we'll use both shapes; pydantic typed output is the cleaner production pattern.
For any LLM output your code will actually use (as opposed to merely display), you want structure. Two approaches:
Approach 1: JSON in the prompt, parsed with json.loads.

```python
import json

prompt = 'Classify ... and return JSON: {"sentiment": ...}. Return only the JSON.'
result = Agent(model).run_sync(prompt)
parsed = json.loads(result.output.strip())
```

Works with any model. The model occasionally slips and includes prose; handle parse failures with try/except (day 12), as in the sketch below.
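A minimal sketch of that handling, assuming the prompt and result from the snippet above (day 12 covers retries and fallbacks properly):

```python
import json

try:
    parsed = json.loads(result.output.strip())
    sentiment = parsed["sentiment"]
except (json.JSONDecodeError, KeyError):
    # The model wrapped the JSON in prose, or returned a different shape than requested.
    sentiment = None  # fall back, retry, or log, depending on what the caller needs
```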
Approach 2: pydantic_ai typed output (output_type=YourModel).

```python
from pydantic import BaseModel

class Verdict(BaseModel):
    sentiment: str

result = Agent(model, output_type=Verdict).run_sync(prompt)
# result.output is a Verdict instance, fields directly accessible
print(result.output.sentiment)
```

The library handles JSON request → response parsing → validation. If the model returns malformed JSON, pydantic raises a clear error.
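Validation also covers off-spec values, not just malformed JSON. A minimal standalone sketch of that check, assuming pydantic v2 (model_validate); no agent call is needed to see it:

```python
from typing import Literal

from pydantic import BaseModel, ValidationError

class Verdict(BaseModel):
    # Constrain the field so anything outside the three labels fails validation
    sentiment: Literal["positive", "negative", "neutral"]

print(Verdict.model_validate({"sentiment": "positive"}))  # ok
try:
    Verdict.model_validate({"sentiment": 1})  # wrong type: raises ValidationError
except ValidationError as exc:
    print(exc)
```

When the same constrained model is passed as output_type, pydantic_ai applies this validation to the model's reply rather than leaving it to your parsing code.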
| Need | Approach |
|---|---|
| Quick script, simple shape | Approach 1 (JSON in prompt + json.loads) |
| Production, multiple fields, validation | Approach 2 (output_type=YourModel) |
| Cross-model compatibility | Approach 1 (works on any model) |
For this lesson we use approach 1 (it teaches the parsing concept). For day 13 (schema in prompt) we extend it. For real production you'd reach for approach 2.
"Here's your answer: {\"sentiment\": \"positive\"}" → json.loads fails on the whole string. Mitigation: explicit "return only the JSON, no other text".{"sentiment": "positive", "confidence": 0.9} when you only asked for one field. Mitigation: validate or use pydantic typed output that strips unknown fields.{"sentiment": 1} (int instead of string). Pydantic catches this; raw json.loads won't.A model that returns prose is fine for humans. For programs you want structure — JSON your code can parse, fields you can index. Two ways to ask: