The very first agent call on Day 3 was three steps — Agent(model).run_sync(query).output. What do you think changes today when the goal is live web search?
Probably a different API? A tool registration? Some kind of search client you wire in?
Neither. You write the exact same code, character for character:
```python
def web_search(query: str) -> str:
    result = Agent(model).run_sync(query)
    return result.output
```

The only thing that changed is the model the preamble points to. For this lesson, lesson_type is "ai-search", which wires model to perplexity/sonar — a model with live web retrieval built in at the infrastructure level.
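That wiring is worth seeing once. Here is a minimal sketch of what a preamble like this might do — a plain lesson_type-to-model lookup. The real preamble is infrastructure code you never touch, and every name and model ID below is an assumption for illustration:

```python
# Hypothetical preamble sketch: map lesson_type to an OpenRouter model ID.
# The dictionary keys come from the lesson; the exact ID strings are assumptions.
MODEL_BY_LESSON = {
    "ai-basic": "meta-llama/llama-3.1-8b-instruct",
    "ai-search": "perplexity/sonar",
    "ai-tools": "deepseek/deepseek-chat",
}

def resolve_model(lesson_type: str) -> str:
    """Return the model ID the lesson's Agent(model) calls should use."""
    return MODEL_BY_LESSON[lesson_type]

model = resolve_model("ai-search")
print(model)  # perplexity/sonar
```

The point is that your function body never branches on lesson_type — the lookup happens once, before your code runs.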
The code is identical and the agent now reads the internet in real time?
Identical. PydanticAI is model-agnostic — Agent(model).run_sync(query) works whether the backend is a plain LLM, a search-augmented model, or anything else OpenRouter exposes. When you ask "what is the latest Python release," the Perplexity model retrieves live sources, synthesizes an answer, and hands back a plain string. Your code never sees the retrieval step.
And I never touch the wiring? The preamble sets the model based on lesson_type?
Exactly. By the time your function runs, model already points at the right backend. You write Agent(model) and the infrastructure decides what "model" means for this lesson. The abstraction pays off now:
```python
# Same shape, different backends
Agent(model).run_sync("What is 2+2?")           # Day 3 — plain LLM
Agent(model).run_sync("Latest Python release")  # Today — search + LLM
```

So I can ask current-events questions, get up-to-date answers, and the string comes back the same way as any other agent response?
Exactly. Current events, live prices, latest library versions — anything that would be stale in a training-cutoff model now answers live. Write web_search(query) — it should feel very familiar.
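The contract web_search relies on is small enough to mock in a few lines. This is a stand-in, not the real PydanticAI library — it only mimics the call shape (Agent(model).run_sync(query).output) to show that the function body is backend-independent:

```python
from dataclasses import dataclass

# Stand-ins that mimic the call shape used in the lesson.
# Not PydanticAI — just the contract web_search depends on.
@dataclass
class RunResult:
    output: str

class Agent:
    def __init__(self, model: str):
        self.model = model

    def run_sync(self, query: str) -> RunResult:
        # A real backend would call the LLM (or search + LLM) here;
        # we echo the model and query for illustration.
        return RunResult(output=f"[{self.model}] answer to: {query}")

def web_search(query: str) -> str:
    model = "perplexity/sonar"  # in the lesson, the preamble sets this
    return Agent(model).run_sync(query).output

print(web_search("Latest Python release"))
```

Swap the model string and the body of web_search stays byte-for-byte identical — that is the whole lesson in miniature.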
TL;DR: same function, new model — web search comes for free.
lesson_type: "ai-search" — the preamble points model at perplexity/sonar. Call shape: Agent(model).run_sync(query).output.

| lesson_type | model | Capability |
|---|---|---|
| ai-basic | llama 3.1 | General LLM |
| ai-search | perplexity/sonar | LLM + live web |
| ai-tools | deepseek/v3 | LLM + tool use |
The abstraction — Agent(model) — is why swapping backends costs zero code changes.