Day 4 counted words in one agent response. You have five proposal prompts and you want to know how verbose the model is for each — before deciding which prompt generates the most concise output. How do you get all five counts at once?
Same comprehension pattern, but instead of classify_urgency(t) it's len(Agent(model).run_sync(p).output.split()). One expression, one list.
Exactly right. You've internalised the batch pattern:
def batch_word_counts(prompts: list) -> list:
    return [len(Agent(model).run_sync(p).output.split()) for p in prompts]

That's three nested method calls in one expression — is that readable?
It's readable if you know each step: .run_sync(p) runs the agent, .output gets the string, .split() splits into words, len() counts them. The whole expression is one line because each step produces a value the next step consumes. If you find it hard to read, break it out:
result = Agent(model).run_sync(p)
count = len(result.output.split())

So I can pick the prompt that generates the most concise response — by word count — and use that one for production. That's quality control on prompts.
Prompt quality control by word count — that's a real technique. Shorter outputs are often sharper outputs for extraction tasks. You've just built a lightweight prompt benchmark.
Five prompts, five counts, one list. I can sort it, find the minimum, compare across days — all standard Python list operations on the results.
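Those list operations can be sketched directly. This is a minimal illustration of the selection step, assuming the counts have already been collected; the numbers and prompt labels here are stand-ins, not real agent output:

```python
# Illustrative data standing in for real batch_word_counts() results.
prompts = ["p1", "p2", "p3", "p4", "p5"]
counts = [120, 85, 140, 60, 95]

# Position of the most concise output, then the prompt that produced it.
best_index = counts.index(min(counts))
best_prompt = prompts[best_index]

# Pair counts with prompts and sort to rank all five, shortest first.
ranked = sorted(zip(counts, prompts))
```

`min`, `index`, `sorted`, and `zip` are all standard library operations — nothing agent-specific is needed once the counts are in a plain list.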
That's the Week 3 payoff. Every agent output is a value in a Python collection. Every collection operation is available. The AI is just a function that transforms strings — and you know how to work with strings.
counts = [len(Agent(model).run_sync(p).output.split()) for p in prompts]

Three operations chained per item:

- .run_sync(p).output — run the agent, get the string
- .split() — split into words (on any whitespace)
- len(...) — count the words

The result is a list[int] — one word count per prompt.
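The counting half of the chain works on any string, so it can be seen in isolation by substituting a hard-coded response for the agent call (the sentence below is invented for illustration):

```python
# A stand-in for what .run_sync(p).output would return.
output = "The quarterly proposal covers budget, staffing, and timeline."

words = output.split()  # splits on any run of whitespace
count = len(words)      # number of whitespace-separated tokens
```

Note that `.split()` counts tokens, not dictionary words — `"budget,"` with its comma is one token — which is fine for comparing verbosity across prompts.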
If the one-liner is hard to read, break it out:
counts = []
for p in prompts:
    result = Agent(model).run_sync(p)
    words = result.output.split()
    counts.append(len(words))

Both produce identical results. The comprehension is more concise; the loop is easier to debug.
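To check that equivalence without a live model, a small stub can stand in for the real pydantic-ai Agent. The stub classes and canned responses below are illustrative, not part of the library — only the `.run_sync(p).output` shape mirrors the text:

```python
class FakeResult:
    """Stands in for an agent run result; .output holds the response text."""
    def __init__(self, output: str):
        self.output = output

class FakeAgent:
    """Illustrative stub: returns a canned response per prompt."""
    def __init__(self, responses: dict):
        self.responses = responses

    def run_sync(self, prompt: str) -> FakeResult:
        return FakeResult(self.responses[prompt])

responses = {
    "short": "Budget approved.",
    "long": "The budget has been reviewed and approved by the committee.",
}
agent = FakeAgent(responses)

# Comprehension version.
counts = [len(agent.run_sync(p).output.split()) for p in responses]

# Loop version — same result, easier to step through in a debugger.
counts_loop = []
for p in responses:
    result = agent.run_sync(p)
    counts_loop.append(len(result.output.split()))
```

Swapping `FakeAgent` for the real `Agent(model)` changes nothing else in either version, which is the point: the batch pattern only depends on `.run_sync(p).output` returning a string.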