You want the model to answer any question you pass in at runtime. What is the smallest Python program that can do that?
I've seen Agent(...) in examples, but I don't know what it needs. A prompt? A model object? Some setup call first?
An Agent wraps a model and gives you a .run_sync(prompt) method. Pass it any string, it returns a result object with the answer inside. The smallest call looks like this:
```python
from pydantic_ai import Agent

result = Agent(model).run_sync("What is Python?")
print(result.output)
```

So result is not the text itself — it has an .output attribute that holds the string? That feels like an extra layer.
It is, but a deliberate one. The result object also carries token usage, tool calls, and timing. For now you only need .output, which is the plain string the model returned. Wrapping it in a function gives you this:
```python
def run_agent(prompt: str) -> str:
    result = Agent(model).run_sync(prompt)
    return result.output
```

Every call hits a real LLM? Not a cached answer or a canned test string?
Every call is live. The model reads your prompt, generates tokens, streams them back. Two calls with the same prompt can return different phrasings — that is the nature of sampling. Test cases here check for shape, not exact matches.
So the same function works for a factual question, a translation, a summary — anything I put into the prompt?
One function, any prompt, a live answer. You have just built the smallest useful piece of an AI application: a bridge from a user's question to a model's answer.
TL;DR: Agent(model).run_sync(prompt) returns a result object; read .output for the string.
- `Agent(model)` — wraps a configured OpenRouter model
- `.run_sync(prompt)` — blocks until the model finishes generating
- `.output` — the plain string the model returned

| Need | Use |
|---|---|
| Plain text answer | .output |
| Typed fields | Pydantic model as output_type |
| Async callers | .run(prompt) instead of .run_sync |
The model variable is already configured by the preamble — you never import it.