Two topics: "Python async" and "Python threading". You want a one-paragraph comparison that grounds itself in real search snippets, not just the model's prior knowledge. What shape does that function take?
Two search() calls, collect the top snippets from each, then pass them both to the agent in a single prompt asking for a comparison?
Exactly. The agent sees both sets side by side and synthesizes. Getting the snippets ready is Week 1 work; the synthesis is Week 2. The shape:
```python
results_a = search(query_a, count=3)
results_b = search(query_b, count=3)
snips_a = " ".join(r["snippet"] for r in results_a)
snips_b = " ".join(r["snippet"] for r in results_b)
prompt = f"Compare {query_a} vs {query_b}. A: {snips_a} B: {snips_b}"
result = Agent(model).run_sync(prompt)
print(result.output)
```

The agent doesn't research — it reads what you retrieved and compares.
So the agent gets the actual web snippets inline in the prompt, not just the two topic names? That changes what "comparison" means.
Completely. Without retrieval, the agent only has training-data priors — possibly stale, definitely generic. With retrieval, the agent compares the live, current web's take on each topic. Same agent, different input — and the output improves dramatically. Full function:
```python
def compare_two_queries(query_a: str, query_b: str) -> str:
    results_a = search(query_a, count=3)
    results_b = search(query_b, count=3)
    snips_a = " ".join(r["snippet"] for r in results_a)
    snips_b = " ".join(r["snippet"] for r in results_b)
    prompt = f"Compare {query_a} vs {query_b}. A: {snips_a} B: {snips_b}"
    return Agent(model).run_sync(prompt).output
```

Why " ".join(...) to flatten the snippets into one big string? Couldn't I pass the list directly?
The agent's input is a single prompt string, not a Python list. Joining flattens a list of snippets into one paragraph — space-separated is the simplest, readable choice. Some prompts benefit from bullets; for comparison prose, a flat paragraph works fine because the agent can parse sentence structure itself.
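A minimal, standalone sketch of the two flattening choices, using hypothetical hard-coded snippet dicts in place of real search() results (same shape: a list of dicts with a "snippet" key):

```python
# Hypothetical snippets, shaped like search() results.
results_a = [
    {"snippet": "Async uses an event loop."},
    {"snippet": "Tasks yield cooperatively."},
]

# Flat paragraph: simplest shape for comparison prose.
flat = " ".join(r["snippet"] for r in results_a)
print(flat)  # Async uses an event loop. Tasks yield cooperatively.

# Bulleted alternative: makes item boundaries visible to the agent.
bullets = "\n".join(f"- {r['snippet']}" for r in results_a)
print(bullets)
```

Either string drops straight into the f-string prompt; the only difference is how much structure the agent sees.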
So two API calls to search, one to the agent — three network round-trips for a real, grounded comparison?
Exactly. This is the first true retrieval-augmented synthesis you've written — two retrievals feeding one reasoning step. Week 4's capstone scales this to many queries and adds structured output, but the shape you just wrote is the template.
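One hedged sketch of how the two-query shape might generalize — build_comparison_prompt is a hypothetical helper, not the capstone's actual code, and it only covers the prompt-building half (the search() and Agent calls stay as in the function above):

```python
def build_comparison_prompt(snippets_by_query: dict[str, list[str]]) -> str:
    """Flatten N topics' snippets into one comparison prompt (sketch)."""
    sections = [
        f"{query}: {' '.join(snips)}"
        for query, snips in snippets_by_query.items()
    ]
    return "Compare these topics. " + " ".join(sections)

prompt = build_comparison_prompt({
    "Python async": ["Event loop.", "Cooperative tasks."],
    "Python threading": ["OS threads.", "GIL limits CPU parallelism."],
})
```

Same template: retrieve per topic, flatten per topic, one reasoning step over the whole string.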
TL;DR: retrieve both topics' snippets, flatten into one prompt, let the agent synthesize.
- search() calls — one per topic
- " ".join(...) — flatten a list of snippets into a paragraph
- Agent(model) call — synthesis over both halves

| Stage | Type |
|---|---|
| results_a | list[dict] |
| snips_a | str |
| prompt | str |
Agents read strings — everything upstream reshapes data into a single prompt string.