L11 retried on regex mismatch. Today: any custom predicate. The pattern is the same; the validator is whatever Python check you need.
```python
import json

from pydantic_ai import Agent


def valid(parsed):
    if not isinstance(parsed, dict):
        return False, "output is not a JSON object"
    needed = ("red", "green", "blue", "total")
    for k in needed:
        if k not in parsed:
            return False, f"missing key {k!r}"
    s = parsed["red"] + parsed["green"] + parsed["blue"]
    if s != parsed["total"]:
        return False, f"red+green+blue ({s}) != total ({parsed['total']})"
    return True, "ok"


base = (
    'Generate a JSON object with keys "red", "green", "blue" (each a positive integer 1-9) '
    'and "total" equal to red+green+blue. Return only the JSON.'
)
prompt = base
agent = Agent(model)  # model as configured earlier; build the agent once, not per attempt
for attempt in range(3):
    out = agent.run_sync(prompt).output.strip()
    try:
        parsed = json.loads(out)
    except json.JSONDecodeError:
        prompt = base + f'\nThe previous answer {out!r} was not valid JSON. Return ONLY a JSON object, no prose.'
        continue
    ok, why = valid(parsed)
    if ok:
        break
    prompt = base + f'\nThe previous answer was rejected: {why}. Try again, ensure red+green+blue equals total.'
else:
    # Report the raw output: parsed may be unbound (or stale) if a JSON decode failed.
    raise ValueError(f"could not produce valid output after 3 attempts: {out!r}")

print(parsed)
```

The validator returns (ok, why). The retry message includes why so the model knows what to fix.
Right. Custom predicates can express anything: numeric consistency, set membership, length bounds, cross-field equality, schema match. The discipline is the same — pure-Python check, capped retry, specific feedback.
Why not pydantic-AI's output_type= with a model validator?
You can — model_validator(mode='after') does this exact thing inside pydantic. We hand-roll today to make the pattern visible. Once you see the loop, switch to typed outputs in production. They're shorter and the auto-retry uses the same mechanism.
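For reference, a minimal sketch of that typed-output version, assuming the same model object as above; ColourSum and its validator are illustrative names. On a validation failure, pydantic-AI feeds the ValidationError text back to the model and retries, up to the agent's retries budget:

```python
from pydantic import BaseModel, model_validator
from pydantic_ai import Agent


class ColourSum(BaseModel):
    red: int
    green: int
    blue: int
    total: int

    @model_validator(mode="after")
    def check_total(self):
        # Same cross-field rule as the hand-rolled valid() above.
        s = self.red + self.green + self.blue
        if s != self.total:
            raise ValueError(f"red+green+blue ({s}) != total ({self.total})")
        return self


# retries=2 allows two correction rounds, three attempts total, like range(3) above.
agent = Agent(model, output_type=ColourSum, retries=2)
result = agent.run_sync("Generate red, green, blue (each 1-9) and their total as JSON.")
print(result.output)  # a validated ColourSum instance
```

If the retries run out, the agent raises rather than returning a bad object: the same loud failure as the hand-rolled raise.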
```python
for attempt in range(MAX):
    out = call(prompt)
    parsed = parse_or_default(out)
    ok, reason = validate(parsed)
    if ok:
        break
    prompt = clarify(prompt, reason)
else:
    raise ValueError("all attempts produced invalid output")
```

What can validate check? Anything you can write as Python:
| Check | Code |
|---|---|
| Required keys present | `all(k in parsed for k in NEEDED)` |
| Cross-field consistency | `parsed['a'] + parsed['b'] == parsed['total']` |
| Numeric range | `1 <= parsed['score'] <= 10` |
| String format | `re.fullmatch(PATTERN, parsed['code'])` |
| Set membership | `parsed['label'] in ALLOWED_LABELS` |
| Custom rule | `parsed['end'] > parsed['start']` (date range) |
The validator returning (bool, reason) makes the retry feedback specific.
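To make that concrete, here is an illustrative validator combining several rows of the table into one (bool, reason) check. The field names, ALLOWED_LABELS, and CODE_PATTERN are hypothetical:

```python
import re

ALLOWED_LABELS = {"bug", "feature", "question"}  # hypothetical label set
CODE_PATTERN = r"[A-Z]{3}-\d{4}"                 # hypothetical ticket-code format


def validate(parsed):
    # Each failed check returns early with a reason the retry prompt can quote.
    if not isinstance(parsed, dict):
        return False, "output is not a JSON object"
    missing = [k for k in ("score", "code", "label") if k not in parsed]
    if missing:
        return False, f"missing keys: {missing}"
    if not isinstance(parsed["score"], int) or not 1 <= parsed["score"] <= 10:
        return False, f"score {parsed['score']!r} is not an integer in 1-10"
    if not isinstance(parsed["code"], str) or not re.fullmatch(CODE_PATTERN, parsed["code"]):
        return False, f"code {parsed['code']!r} does not match {CODE_PATTERN}"
    if parsed["label"] not in ALLOWED_LABELS:
        return False, f"label {parsed['label']!r} not in {sorted(ALLOWED_LABELS)}"
    return True, "ok"
```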
```python
try:
    parsed = json.loads(out)
except json.JSONDecodeError:
    # Layer 1: not valid JSON at all — retry with stricter format request
    ...
ok, reason = validate(parsed)
if not ok:
    # Layer 2: valid JSON but wrong shape — retry with content-specific feedback
    ...
```

Different feedback for different failures. "Not JSON" needs a structural retry; "red+green+blue != total" needs an arithmetic retry.
After MAX attempts, raise. Silent failure (returning a default, swallowing the error) makes broken prompts invisible. Loud failure forces you to fix the prompt.
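As a sketch of what loud failure looks like at the call site, where generate_colour_sum is a hypothetical wrapper around the retry loop above:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

try:
    result = generate_colour_sum()  # hypothetical wrapper around the retry loop above
except ValueError as exc:
    # Log and re-raise: substituting a default here would hide the broken
    # prompt from both monitoring and the person who can fix it.
    log.error("structured-output prompt needs fixing: %s", exc)
    raise
```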
Why hand-roll the loop at all instead of reaching straight for output_type=? Learning. The pattern is the same as pydantic-AI's internal auto-retry — same loop, same feedback shape. Once you've written the loop yourself, you understand what output_type= is doing under the hood. Then in production, prefer output_type= for the brevity.
A generated JSON with an arithmetic constraint (sum of three colour fields = total). Validate. Retry on mismatch. Bind the validated dict to parsed.