Same shape as Auto-Beg L24 (helpers for tool calls), but the body is Agent(model).run_sync(...). Wrap it once, name it well, reuse:
from pydantic_ai import Agent

# `model` is defined earlier in the script, as in the tool-call helpers.

def ask(prompt, system=None):
    if system:
        result = Agent(model, system_prompt=system).run_sync(prompt)
    else:
        result = Agent(model).run_sync(prompt)
    return result.output

answer = ask("What is the capital of France?")
print(answer)

brief = ask(
    "Summarise: Python is a high-level language used for many things.",
    system="You write tight, single-sentence summaries."
)
print(brief)

Why bother — Agent(model).run_sync(p).output is one line.
Three reasons. (1) Repetition — once you have 5+ LLM calls in one script, the boilerplate adds up. (2) Centralised options — when you want to change the system prompt or default settings across the whole script, you edit one function. (3) Resilience — the helper can include retry, parse-failure recovery, cost tracking. Every call site benefits without changes.
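Reason (3) is easiest to see in code. Here is a minimal sketch of cost tracking folded into the helper, assuming pydantic-ai exposes token counts via result.usage() with a total_tokens field (names vary across versions, so verify against your install):

# Sketch only: one global counter, updated inside the helper so
# every call site contributes without knowing tracking exists.
total_tokens = 0

def ask_tracked(prompt, system=None):
    global total_tokens
    if system:
        result = Agent(model, system_prompt=system).run_sync(prompt)
    else:
        result = Agent(model).run_sync(prompt)
    total_tokens += result.usage().total_tokens or 0
    return result.output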
What kinds of helpers are useful?
A small kit:
- ask(prompt, system=None) — plain text in, plain text out
- ask_json(prompt, system=None) — same, but parses JSON and returns a dict
- ask_structured(prompt, schema) — uses pydantic typed output (a sketch follows the other helpers below)
- classify(text, labels) — closed set of labels, returns one

Most of your scripts use ask and ask_json. The other two come up later.
def ask(prompt, system=None):
    if system:
        return Agent(model, system_prompt=system).run_sync(prompt).output
    return Agent(model).run_sync(prompt).output

Used for any prompt where you just want the model's text response.
import json

def ask_json(prompt, system=None, max_attempts=2):
    extra = "Reply with ONLY a valid JSON object. No prose around it."
    if system:
        sys_prompt = f"{system}\n\n{extra}"
    else:
        sys_prompt = extra
    for attempt in range(max_attempts):
        try:
            text = Agent(model, system_prompt=sys_prompt).run_sync(prompt).output
            cleaned = text.strip().strip("`").strip()
            if cleaned.startswith("json"):
                cleaned = cleaned[len("json"):]  # drop the tag left by a ```json fence
            return json.loads(cleaned.strip())
        except json.JSONDecodeError:
            if attempt == max_attempts - 1:
                raise
            sys_prompt = f"{sys_prompt}\nThe previous response was not valid JSON. Try again, JSON only, no prose."
    raise RuntimeError("unreachable")

Wraps the parse-failure pattern from L12. Every call site gets retry-on-bad-JSON for free.
def classify(text, labels, system=None):
    label_str = " / ".join(f'"{lbl}"' for lbl in labels)
    prompt = f"Classify the input as exactly one of: {label_str}. Reply with only that single word.\n\nInput: {text}"
    out = ask(prompt, system).strip().strip(".").strip().lower()
    for lbl in labels:
        if out == lbl.lower():
            return lbl  # return the caller's label, not the model's casing
    raise ValueError(f"unexpected label: {out!r}")

Validates the response is one of the allowed labels — catches drift before it propagates.
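That covers three of the four helpers in the kit. For completeness, here is a minimal sketch of ask_structured, assuming the output_type parameter used by recent pydantic-ai releases (older versions called it result_type; check yours):

from pydantic import BaseModel

def ask_structured(prompt, schema, system=None):
    # schema is a pydantic BaseModel subclass; pydantic-ai validates
    # the model's reply against it and returns a typed instance.
    if system:
        agent = Agent(model, system_prompt=system, output_type=schema)
    else:
        agent = Agent(model, output_type=schema)
    return agent.run_sync(prompt).output

class ActionItems(BaseModel):
    items: list[str]

todo = ask_structured("List the action items in: send the draft by Friday.", ActionItems)
print(todo.items)

Typed output shifts validation from your code to the library: the call either returns an instance matching the schema or fails loudly.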
A tool call is deterministic — same args, same response. An LLM call is not — same prompt, slightly different response each time. Helpers give you one place to add the resilience layer (retry on bad JSON, validate against a label set, fall back to a simpler model on failure). Without helpers, every call site needs the same defensive code.
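The fallback mentioned above fits the same one-place pattern. A sketch, where fallback_model is a hypothetical second model name, not something defined earlier in this script:

def ask_resilient(prompt, system=None, fallback_model=None):
    # Try the primary model; on any failure, retry once on the fallback.
    try:
        return ask(prompt, system)
    except Exception:
        if fallback_model is None:
            raise
        if system:
            return Agent(fallback_model, system_prompt=system).run_sync(prompt).output
        return Agent(fallback_model).run_sync(prompt).output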
for email in emails:
    summary = ask(f"Summarise in one sentence:\n\n{email['body']}")
    sentiment = classify(email['body'], ["positive", "neutral", "negative"])
    extracted = ask_json(f"Extract action items as JSON {{items: [...]}} from:\n\n{email['body']}")
    print(f"{email['subject']}: {sentiment} — {summary}")
    for item in extracted.get("items", []):
        print(f" - {item}")

Reads like English. The plumbing — system prompts, JSON parsing, label validation — is all in helpers above.