Your shortest_response uses min(key=len) to pick one response. What if you want to measure all of them — a word count for every response in the batch?
Same list comprehension pattern, but instead of collecting the strings I compute len(output.split()) for each one. Returns a list of ints.
Exactly. One comprehension:
```python
return [len(Agent(model).run_sync(p).output.split()) for p in prompts]
```

Each element is the word count of the corresponding agent response. Here's the full function:
```python
def batch_word_counts(prompts: list) -> list:
    counts = [len(Agent(model).run_sync(p).output.split()) for p in prompts]
    print(f"Word counts: {counts}")
    return counts
```

Is there a risk that Agent(model) inside the comprehension creates a new agent object for every call? Is that wasteful?
It creates a new Agent wrapper each time, but the underlying model object is the same injected model — no reconnection, no state. For a short list it's fine. If you're looping thousands of times, extract the agent outside the comprehension. For 10-30 prompts, it doesn't matter.
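A minimal sketch of the extracted-agent variant. Since running it would otherwise require a live model, `StubAgent` below is a stand-in with the same `run_sync(...).output` shape as the real agent, not pydantic-ai itself:

```python
from types import SimpleNamespace

class StubAgent:
    """Stand-in for the real Agent: echoes the prompt back as the output."""
    def __init__(self, model=None):
        self.model = model

    def run_sync(self, prompt):
        # Real agents call the model here; the stub returns a canned response
        return SimpleNamespace(output=f"echo: {prompt}")

def batch_word_counts(prompts: list) -> list:
    # Agent created once, outside the comprehension
    agent = StubAgent("fake-model")
    counts = [len(agent.run_sync(p).output.split()) for p in prompts]
    return counts

print(batch_word_counts(["hello there", "one two three"]))  # [3, 4]
```

Swapping the stub for a real `Agent(model)` changes nothing else in the function.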
I'm measuring the verbosity of the model across different question types. Market research on my own AI.
Useful for prompt engineering — if one phrasing consistently produces shorter, denser answers, use that phrasing in production. You've built a prompt quality measurement tool in one line.
I'm chaining these like functions now. Run a batch, measure outputs, pick the best. That's a real evaluation loop.
You now have the full batch toolkit: classify a list, pick the minimum, measure all. Week 4 adds search and tool-calling agents, then the capstone applies everything to a list of 10 inbound lead emails — classify, extract, sort. You're ready.
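The evaluation loop described above (run a batch, measure outputs, pick the best) can be sketched end to end. `StubAgent` and its canned replies are illustrative stand-ins so the example runs without a model:

```python
from types import SimpleNamespace

class StubAgent:
    """Stand-in for a real agent: maps each prompt to a canned reply."""
    REPLIES = {
        "What is Python?": "Python is a programming language.",
        "Explain Python briefly.": "A popular language.",
    }

    def run_sync(self, prompt):
        return SimpleNamespace(output=self.REPLIES.get(prompt, ""))

agent = StubAgent()
prompts = ["What is Python?", "Explain Python briefly."]

# Run the batch and collect every output string
outputs = [agent.run_sync(p).output for p in prompts]
# Measure: word count per response
counts = [len(o.split()) for o in outputs]
# Pick the best: shortest response wins
best = min(outputs, key=len)

print(counts)  # [5, 3]
print(best)    # A popular language.
```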
```python
counts = [len(Agent(model).run_sync(p).output.split()) for p in prompts]
```

Each element is len(output.split()) for the corresponding agent response.
Agent(model).run_sync(p).output.split() chains four operations in one expression:

- Agent(model) — create agent wrapper
- .run_sync(p) — call the model
- .output — get the string
- .split() — list of words

Then len() counts the words.
For lists over ~50 items, extract Agent(model) outside the comprehension to avoid creating a new wrapper object per iteration.
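The chained expression can be unpacked into named steps. The `agent` object here is a hypothetical stand-in with the same `run_sync(...).output` interface, so the example runs as-is:

```python
from types import SimpleNamespace

# Hypothetical agent exposing the same run_sync(...).output shape
agent = SimpleNamespace(
    run_sync=lambda p: SimpleNamespace(output="four words right here")
)

result = agent.run_sync("any prompt")  # .run_sync(p) — call the model
text = result.output                   # .output — get the string
words = text.split()                   # .split() — list of words
count = len(words)                     # len() counts the words
print(count)  # 4
```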