Yesterday you ran 2 turns. Today: 3, with the model accumulating context across all of them.
```python
from pydantic_ai import Agent

agent = Agent(model)  # `model` as configured in yesterday's lesson

turns = [
    "What's the capital of France?",
    "What about Germany?",
    "Are these in the same continent?",
]

history = []
for q in turns:
    result = agent.run_sync(q, message_history=history)
    print(f"USER: {q}")
    print(f"ASSISTANT: {result.output}\n")
    history = result.all_messages()
```

Turn 3's question, "are these in the same continent?", is meaningless without turns 1 and 2. The model resolves "these" by looking at the prior turns it was given.
The pattern from yesterday, generalized to N turns:

```python
history = []
for user_input in turns:
    result = agent.run_sync(user_input, message_history=history)
    history = result.all_messages()  # the full transcript so far, for the next turn
    use(result.output)
```

Each iteration sends the new question together with the accumulated history, then replaces `history` with the full message list returned by the run.
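The accumulation can be seen without calling a real model. Here is a plain-Python stub; the name `fake_run` and the role/content dict shape are illustrative, not pydantic_ai's actual message types:

```python
def fake_run(question: str, history: list[dict]) -> list[dict]:
    """Stand-in for agent.run_sync: returns the transcript grown by one turn."""
    reply = f"(answer to: {question})"  # a real model would generate this
    return history + [
        {"role": "user", "content": question},
        {"role": "assistant", "content": reply},
    ]

history: list[dict] = []
for q in ["What's the capital of France?",
          "What about Germany?",
          "Are these in the same continent?"]:
    history = fake_run(q, history)

# every turn adds one user and one assistant message
print(len(history))  # 6
```

Three turns leave six messages: each pass through the loop appends one user/assistant pair, which is exactly why the payload you re-send grows linearly with the conversation.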
The payoff of multi-turn is contextual references:

- *Germany* is a fresh noun, but *What about* only makes sense relative to a prior question
- *these* resolves to the entities from turns 1-2
- *that* refers to the most recent topic

Without history, the model would be confused. With history, it follows naturally.
Long histories cost tokens (you re-send everything every turn) and can confuse the model when irrelevant context bleeds into a new topic. Two recovery strategies: trim old messages, or summarize them into a shorter context.
For week-3 lessons we keep histories short and don't worry about trimming. Production multi-turn systems (chatbots, agents) handle this; AI Intermediate covers it.
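Not needed for this lesson, but a minimal trimming sketch looks like this, again on plain role/content dicts (a production version would also preserve any system message and budget by tokens rather than message count):

```python
def trim_history(history: list[dict], max_messages: int = 4) -> list[dict]:
    """Keep only the most recent messages, dropping the oldest first."""
    return history[-max_messages:]

long_history = [{"role": "user", "content": f"q{i}"} for i in range(10)]
short = trim_history(long_history)
print(len(short))            # 4
print(short[0]["content"])   # q6
```

Slicing from the end keeps the turns the model is most likely to need for pronoun resolution, at the cost of forgetting older topics entirely.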
The message list has two roles:
| Role | Source |
|---|---|
| `user` | Each `run_sync(prompt)` adds the prompt as a user message |
| `assistant` | Each model reply is added as an assistant message |
A third role, system, sets the model's persona and rules. That's day 17.
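Laid out as a generic role-tagged transcript (the common chat format; pydantic_ai wraps these in richer message objects, and the reply texts here are illustrative), the lesson's conversation alternates strictly:

```python
transcript = [
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "Paris."},
    {"role": "user", "content": "What about Germany?"},
    {"role": "assistant", "content": "Berlin."},
    {"role": "user", "content": "Are these in the same continent?"},
    {"role": "assistant", "content": "Yes, both are in Europe."},
]

# even indices are user turns, odd indices are assistant turns;
# a system message (day 17) would sit before all of them
roles = [m["role"] for m in transcript]
print(roles)
```

The strict user/assistant alternation is what lets the model line up each answer with the question that produced it.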