You have a sentiment classifier that takes one string. Now you have a hundred strings to classify. What is the naive first try?
A loop calling the agent on each text? Or a list comprehension with agent.run_sync(t) inside?
Both work — but there is a subtlety that matters at 100 texts. Where do you create the agent? Inside the comprehension, or before it? Watch:
# Wasteful — a new agent per item
[Agent(model, system_prompt=...).run_sync(t).output for t in texts]
# Correct — one agent, reused
agent = Agent(model, system_prompt="Classify sentiment. Reply with exactly one word: positive, neutral, or negative.")
[agent.run_sync(t).output.strip().lower() for t in texts]
The first version works, but it creates a fresh Agent for every single text? That is 100 setups for 100 items?
Exactly. Agent creation allocates memory, holds the model reference, and stores the system prompt. Creating it once and reusing it is free — the Agent is stateless between run_sync calls, so there is nothing to leak or corrupt. Moving the creation out of the loop is the one optimization every batch needs:
def batch_classify(texts: list[str]) -> list[str]:
    agent = Agent(model, system_prompt="Classify sentiment. Reply with exactly one word: positive, neutral, or negative.")
    return [agent.run_sync(t).output.strip().lower() for t in texts]
And .strip().lower() inside the comprehension — same reason as Day 6, defensive normalization per item?
Same reason. One noisy response out of a hundred breaks your downstream code; normalizing every item keeps comparisons clean. The comprehension handles the repetition; the transformation chain does the cleaning per item.
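The normalization step can be seen in isolation with a few canned replies standing in for agent.run_sync(t).output (no real agent involved here):

```python
# Raw model replies are noisy: stray capitals, whitespace, newlines.
raw_outputs = ["Positive", " negative\n", "NEUTRAL "]

# The same per-item chain used in the comprehension cleans each one.
labels = [o.strip().lower() for o in raw_outputs]
print(labels)  # ['positive', 'negative', 'neutral']
```

Without the chain, a comparison like `label == "positive"` silently fails on `"Positive"`.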
And the output length always equals the input length — one label per text, in order?
That is the guarantee a list comprehension gives you — same length, same order, one-to-one mapping between input and output. Write batch_classify(texts) now: one agent, one comprehension, normalized outputs.
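A minimal, testable sketch of the whole pattern, with a hypothetical StubAgent standing in for pydantic-ai's Agent so no model call is needed — the structure (one agent, one comprehension, normalized outputs) is the point:

```python
class StubAgent:
    """Stand-in for Agent: returns a fixed noisy reply per call."""
    def run_sync(self, text: str):
        class Result:
            output = " Positive "   # deliberately messy, like a real reply
        return Result()

def batch_classify(texts: list[str]) -> list[str]:
    agent = StubAgent()  # one agent, created once, reused for every item
    return [agent.run_sync(t).output.strip().lower() for t in texts]

labels = batch_classify(["great service", "it was fine", "never again"])
print(labels)  # ['positive', 'positive', 'positive'] — one label per input, in order
```

Swapping StubAgent for the real Agent changes nothing about the shape: input length equals output length, order preserved.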
TL;DR: build the agent once, reuse it across every item.
agent = Agent(model, system_prompt=...) — once, outside the loop.
.strip().lower() — normalize each output for safe matching.
| Creations for 100 texts | Agent creation cost |
|---|---|
| Inside the loop | 100× |
| Outside the loop | 1× |
Agent is stateless between calls — reuse is always safe.
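The table above can be made concrete by counting constructions, again with a hypothetical counting stub in place of Agent:

```python
created = 0  # global construction counter

class CountingAgent:
    """Stub that counts how many times it is constructed."""
    def __init__(self):
        global created
        created += 1
    def run_sync(self, text: str):
        class Result:
            output = "neutral"
        return Result()

texts = ["some review"] * 100

# Inside the loop: one construction per item.
created = 0
[CountingAgent().run_sync(t).output for t in texts]
inside_count = created

# Outside the loop: one construction total.
created = 0
agent = CountingAgent()
[agent.run_sync(t).output for t in texts]
outside_count = created

print(inside_count, outside_count)  # 100 1
```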