Day 10 summarized just the top result. What changes when you want every snippet summarized — three results in, three summaries out?
A loop? One agent call per result, collected into a list. The agent call doesn't change — I just run it more than once.
Exactly the pattern. The agent is reused — the for-loop iterates the inputs, not the agents. The minimal shape:
```python
results = search("openai releases", count=3)
summaries = []
for r in results:
    prompt = f"Summarize in one sentence: {r['snippet']}"
    summaries.append(Agent(model).run_sync(prompt).output)
print(summaries)
```
Three snippets, three agent calls, three summaries. Each call is independent; the loop is just scheduling.
So this is three network round-trips, one per result? That could add up fast if the count is 20.
Exactly — and that is the real-world tradeoff. A sequential loop means latency and cost grow linearly with count. For five to ten snippets it is fine; for hundreds you'd want async concurrency or a summarizer that takes multiple inputs in one prompt. For now the straight loop keeps the code readable and lets you see exactly where the work happens:
```python
def batch_summarize_results(query: str, count: int) -> list:
    results = search(query, count=count)
    summaries = []
    for r in results:
        prompt = f"Summarize in one sentence: {r['snippet']}"
        summaries.append(Agent(model).run_sync(prompt).output)
    return summaries
```
Could I write this as a list comprehension instead — `[Agent(model).run_sync(...).output for r in results]`?
You absolutely can. The comprehension is terser; the explicit loop is easier to annotate with prints or error handling. Both produce the same list in the same order. For a batch with side effects — like a print per iteration — the explicit loop reads better; for pure transforms, the comprehension wins.
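To see the equivalence concretely, here are both forms side by side. The `summarize` stub is a hypothetical placeholder for `Agent(model).run_sync(...).output`, so the comparison runs without a live model:

```python
# Hypothetical stub standing in for Agent(model).run_sync(prompt).output,
# so the loop/comprehension comparison runs without a live model.
def summarize(snippet: str) -> str:
    return f"summary of {snippet}"

results = [{"snippet": "a"}, {"snippet": "b"}, {"snippet": "c"}]

# Explicit loop: easy to annotate with prints or per-item error handling.
loop_summaries = []
for r in results:
    loop_summaries.append(summarize(r["snippet"]))

# Comprehension: same list, same order, one expression.
comp_summaries = [summarize(r["snippet"]) for r in results]

assert loop_summaries == comp_summaries  # identical output, preserved order
```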
And because each summary is independent, I can return them in the same order as the search results — first summary matches first result.
Exactly. Order is preserved for free because the for-loop walks results in order and append adds to the end. One input list, one output list, one-to-one alignment. The batch pattern is the workhorse of the rest of this track.
TL;DR: one agent, many prompts — a for-loop keeps order and pays one round-trip per item.
- `for r in results` — one iteration per search result
- `.append(...)` — preserves order

| Form | When |
|---|---|
| for loop | side effects (prints, error handling) |
| comprehension | pure transform, one expression |
For count > 20, async concurrency is the next upgrade — but the loop is the baseline every batch starts from.
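A minimal sketch of that async upgrade, assuming the agent exposes an awaitable `run` alongside `run_sync`; a stub summarizer keeps it runnable here:

```python
import asyncio

# Hypothetical stub standing in for an awaitable agent call
# (e.g. `await Agent(model).run(prompt)` if the agent exposes one).
async def summarize(snippet: str) -> str:
    await asyncio.sleep(0)  # stands in for the network round-trip
    return f"summary of {snippet}"

async def batch_summarize(snippets: list[str]) -> list[str]:
    # gather launches every call concurrently and returns results
    # in input order, so the one-to-one alignment still holds
    return await asyncio.gather(*(summarize(s) for s in snippets))

summaries = asyncio.run(batch_summarize(["a", "b", "c"]))
```

The round-trips now overlap instead of queueing, while the ordering guarantee from the sequential loop is unchanged.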