Yesterday batch_classify looped through a list and returned a label for every item. What if you want a single answer from that batch — the most concise one — not all of them?
After batch_classify I'd have a list of strings. I'd just scan them manually for the shortest... except that doesn't scale past about ten items.
Python gives you min(iterable, key=len) for exactly that. Run all your prompts through the agent first, collect the outputs, then pick the shortest in one expression:
agent = Agent(model)
outputs = [agent.run_sync(p).output for p in prompts]
return min(outputs, key=len)
Why would I want the shortest output specifically? Shorter doesn't mean better.
Not in general — but in routing and triage it does. If you're asking five candidate prompts "summarise this decision in one sentence" and one agent run produces a tighter answer, shortest is a fast heuristic for most concise. You can always swap key=len for a custom scorer later:
def shortest_response(prompts: list[str]) -> str:
    agent = Agent(model)
    outputs = [agent.run_sync(p).output for p in prompts]
    return min(outputs, key=len)
So the list comprehension does the batch run and min does the selection — two lines of actual logic. I'm chaining these like functions now.
Exactly. The pattern separates concerns: generation lives in the comprehension, selection lives in min. Swap min for max, a sort, or a filter and the generation step doesn't change at all.
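The swap is easiest to see with canned outputs standing in for real agent runs (the three strings below are made up for illustration; in practice they would come from the comprehension over agent.run_sync calls):

```python
# Generation step (stubbed here with fixed strings instead of agent calls).
outputs = [
    "The team chose Postgres.",
    "After a long review, the team settled on Postgres for its maturity.",
    "Postgres won.",
]

# Selection step: swap the selector without touching generation.
shortest = min(outputs, key=len)    # most concise
longest = max(outputs, key=len)     # most detailed
ranked = sorted(outputs, key=len)   # every output, tersest first
```

The generation list never changes; only the one-line selection expression does.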
I've been manually reading five AI outputs and picking the cleanest one. I could have written min(outputs, key=len) and gone for lunch.
Build shortest_response(prompts) — run each prompt through the same agent, collect the outputs, and return the shortest one. The next step is a full word-count audit across the same batch.
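A word-count audit can reuse the same collect-then-select shape. A minimal sketch, assuming the outputs are already collected (the word_count_audit helper name is hypothetical, and str.split is used as a rough word tokenizer):

```python
def word_count_audit(outputs: list[str]) -> dict[str, int]:
    # Map each output to its whitespace-separated word count
    # so the whole batch can be compared at a glance.
    return {text: len(text.split()) for text in outputs}

audit = word_count_audit(["Ship it now.", "We should probably wait for QA."])
```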
min(..., key=len)
Pattern: collect all outputs first, then select.
agent = Agent(model)
outputs = [agent.run_sync(p).output for p in prompts]
return min(outputs, key=len)
Why reuse one Agent instance? Creating Agent(model) once and calling run_sync multiple times is cleaner than constructing a new agent per prompt. Configuration (system_prompt, result_type) stays consistent across all calls.
min(..., key=len) picks the element with the fewest characters — a practical proxy for "most concise" in summarisation and routing tasks. The key function is swappable: key=lambda s: s.count('.') would pick fewest sentences instead.
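That swap in action, on two made-up outputs (counting '.' is only a rough sentence proxy; it would miscount abbreviations and decimals):

```python
outputs = [
    "It works. Tests pass. Shipping.",
    "Everything passed, so we shipped it today.",
]

# key=len would pick the first string (31 chars vs 42),
# but the sentence-count key prefers the second (1 '.' vs 3).
fewest_sentences = min(outputs, key=lambda s: s.count("."))
```

Two different keys, two different winners from the same batch, which is exactly why the key function is the part worth customising.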