Day 4's word_count_of_output measured one response. Yesterday's list comprehension ran one agent over many prompts. How do you combine them — count words in all responses from a list of prompts?
[len(Agent(model).run_sync(p).output.split()) for p in prompts] — inline the word count inside the list comprehension. Each item in the result is an integer word count for that prompt.
That's it — one line, combining everything from Weeks 1 and 2. The pattern chains four operations inside the comprehension: run_sync(p) calls the model, .output gets the string, .split() splits into words, len() counts them:
def batch_word_counts(prompts: list) -> list:
    return [len(Agent(model).run_sync(p).output.split()) for p in prompts]

Should I add a print statement inside the comprehension to track progress?
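To see the shape of that chain without a live model call, here's a minimal sketch where canned strings stand in for Agent(model).run_sync(p).output — the responses below are made up for illustration:

```python
def word_counts(responses: list[str]) -> list[int]:
    # Same .split() -> len() chain as batch_word_counts,
    # with the model call stubbed out by canned strings.
    return [len(r.split()) for r in responses]

responses = [
    "The capital of France is Paris.",
    "Yes.",
    "Water boils at 100 degrees Celsius at sea level.",
]
print(word_counts(responses))  # [6, 1, 9]
```

The second response's count of 1 is exactly the kind of suspiciously short answer the batch check is meant to surface.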
You can't cleanly add a print inside a list comprehension — but you can print after. Add a summary print before the return. For long batches where you want per-prompt progress updates, convert to a regular for loop — list comprehensions are concise but don't support per-iteration side effects:
def batch_word_counts(prompts: list) -> list:
    counts = [len(Agent(model).run_sync(p).output.split()) for p in prompts]
    print(f"Word counts: {counts}")
    return counts

I can quality-check a batch of AI responses in one call — any count under 10 words flags a potentially bad response, and I know exactly which prompt index to investigate.
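As a sketch of that for-loop conversion, here's a hypothetical batch_word_counts_verbose. It takes the model call as a plain callable so the progress logic can be tried without a live agent — in real use you'd pass something like lambda p: Agent(model).run_sync(p).output:

```python
def batch_word_counts_verbose(prompts, run):
    """Per-prompt progress via a regular for loop.

    `run` is any callable that maps a prompt string to response text.
    """
    counts = []
    for i, p in enumerate(prompts, start=1):
        n = len(run(p).split())            # same chain: response -> words -> count
        print(f"[{i}/{len(prompts)}] {n} words")  # per-iteration progress line
        counts.append(n)
    return counts

# Example with a stub standing in for the model call:
lengths = batch_word_counts_verbose(["first prompt", "second"], run=lambda p: p.upper())
print(lengths)  # [2, 1]
```

The loop body is the same chain as the comprehension; the only thing gained is the room for a print between iterations.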
Index-aligned outputs are the key advantage. counts[i] tells you the word count for prompts[i]. A low count at index 12 means the 13th prompt produced a suspiciously short response — go look at that one first.
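A quick illustration of that index alignment, with made-up counts — enumerate gives you the flagged indices directly:

```python
counts = [22, 18, 3, 40]  # hypothetical results from batch_word_counts

# Indices whose responses came back under 10 words:
flagged = [i for i, c in enumerate(counts) if c < 10]
print(flagged)  # [2]  -> go inspect prompts[2] first
```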
Three weeks ago I thought 'AI pipeline' meant something from a sci-fi movie. It's a list comprehension with a model call inside.
The infrastructure is sophisticated. What you write is simple — because the preamble handles the configuration and the PydanticAI API is well-designed. You write the logic; the framework handles the model.
counts = [len(Agent(model).run_sync(p).output.split()) for p in prompts]

run_sync(p) → .output → .split() → len() — all applied in the comprehension. Each step transforms the value from the previous step. This is the same chain as Day 4, applied across a list.
Returns list[int] — one integer per prompt. Use zip(prompts, counts) to pair each prompt with its word count for inspection or export.
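For example, with hypothetical prompts and counts, zip lines them up into a readable report:

```python
prompts = ["Define recursion", "Name the largest planet", "Explain HTTP caching"]
counts = [34, 1, 52]  # hypothetical batch_word_counts output

# zip pairs each prompt with its count, preserving index alignment.
for p, c in zip(prompts, counts):
    print(f"{c:>4}  {p}")
```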
Filter with [(p, c) for p, c in zip(prompts, counts) if c < 10] to surface responses under 10 words — those are candidates for re-prompting. Pair with Day 20's min to find the single worst response.
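Putting the filter and min together on made-up data — the prompts and counts here are illustrative:

```python
prompts = ["Q1", "Q2", "Q3"]
counts = [25, 4, 31]  # hypothetical word counts

# Candidates for re-prompting: responses under 10 words.
short = [(p, c) for p, c in zip(prompts, counts) if c < 10]
print(short)  # [('Q2', 4)]

# Single worst response: min over the pairs, keyed on the count.
worst = min(zip(prompts, counts), key=lambda pair: pair[1])
print(worst)  # ('Q2', 4)
```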