You have a numbered fact list and a goal the user wants the agent to pursue. Where inside the call do you put the memory so the agent reads it first?
In the user prompt before the goal? Or is there a special slot for background knowledge?
There is — system_prompt. Anything passed as system_prompt arrives before the user message, framing how the model reads the goal. Build the prompt from the context string at call time:
```python
agent = Agent(model, system_prompt=f"Known facts:\n{context}")
result = agent.run_sync(goal)
print(result.output)
```

So the system prompt is the agent's briefing, and each goal is a task that runs against that same briefing?
Exactly. The system_prompt sets the world; the user message sets the task. If the facts change, rebuild the agent. If the goal changes, keep the agent and call again. The full injector:
```python
def agent_with_context(context: str, goal: str) -> str:
    agent = Agent(model, system_prompt=f"Known facts:{chr(10)}{context}")
    result = agent.run_sync(goal)
    return result.output
```

Why chr(10) again? Can't I just use a plain newline inside the f-string?
Both work. Using chr(10) keeps the pattern consistent with the context builder from Day 10, so memory flows through the same character on both sides. In production you would pick one and stick with it.
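A quick check shows the two spellings are interchangeable — `chr(10)` is just the ASCII newline character, so both f-strings produce byte-identical prompts (the fact list below is made up for illustration):

```python
# chr(10) is the ASCII newline, so both spellings build the same string.
context = "1. The server runs Python 3.12\n2. Deploys happen on Fridays"

prompt_a = f"Known facts:\n{context}"
prompt_b = f"Known facts:{chr(10)}{context}"

print(prompt_a == prompt_b)  # True
```

The only difference is readability, which is why consistency with the Day 10 builder is the deciding factor rather than behavior.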
And the same context can drive any goal I pass in — one briefing, many tasks?
One briefing, many tasks. This is how memory-augmented agents scale: the context carries what the agent knows, the goal carries what the user wants, and the model blends them into a grounded answer.
TL;DR: pass the fact list as system_prompt; pass the user's goal as the run_sync argument.
- system_prompt — the agent's static briefing
- run_sync(goal) — the per-call dynamic task
- f"Known facts:{chr(10)}{context}" — the injection pattern

| Slot | Purpose | Lifetime |
|---|---|---|
| system_prompt | stable context | per agent |
| run_sync(...) argument | per-call goal | per call |
Separating static and dynamic inputs keeps the agent's reasoning clean and reproducible.