Monday morning: 40 unread client messages, all needing urgency labels before you decide what to open first. Right now that is 40 minutes of triage. What is the Python version?
A list comprehension — [classify_urgency(t) for t in texts]. Same agent call, every item in the list, one line.
Exactly. Compare the two approaches:
# Without batch — one at a time, manual
for text in texts:
    label = classify_urgency(text)
    print(label)

# With batch — all at once, one return value
def batch_classify(texts: list) -> list:
    return [classify_urgency(t) for t in texts]

Does the model remember previous calls? If the first email says 'this is urgent', does that influence how it classifies the second?
No memory between calls. Each classify_urgency(t) is a fresh, independent call — the model sees only that text. That is usually what you want for triage: each message judged on its own merits, not coloured by what came before.
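That independence can be seen directly by logging what each call receives. A toy sketch — classify_urgency is stubbed with a keyword check here, since the real version is a live agent call:

```python
calls_seen = []

def classify_urgency(text: str) -> str:
    # Stub standing in for the real agent call.
    calls_seen.append(text)  # record exactly what this invocation saw
    return "high" if "urgent" in text.lower() else "low"

labels = [classify_urgency(t) for t in ["Urgent: invoice overdue", "Weekly newsletter"]]
# Each invocation received only its own text — no shared history between calls.
```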
def batch_classify(texts: list) -> list:
    labels = [classify_urgency(t) for t in texts]
    return labels

So 40 emails, 40 labels, zero reading time. I pick from the 'high' group first and ignore the 'low' group until I have spare capacity.
And you just built a better inbox triage than any email client on the market, in three lines of Python.
I'm chaining these like functions now. Brief goes in, urgency list comes out.
The list comprehension pattern generalises: swap classify_urgency for summarize_text or extract_action_items and you have batch versions of any agent call. The pattern is always the same — the agent is the only variable.
labels = [classify_urgency(t) for t in texts]

Each call is independent — no shared state or memory between calls. This pattern works for any agent function that takes a string and returns a value.
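That "agent is the only variable" idea can be made explicit by passing the agent function itself as a parameter. A hypothetical batch_apply helper, not part of the lesson's code:

```python
from typing import Callable, List

def batch_apply(fn: Callable[[str], object], texts: List[str]) -> list:
    """Run any string-in, value-out agent function over a whole list."""
    return [fn(t) for t in texts]

# Usage: batch_apply(classify_urgency, texts), batch_apply(summarize_text, texts),
# batch_apply(ai_pipeline, texts) — only the function changes.
```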
Swap the function to batch any agent operation:

[summarize_text(t) for t in texts] — batch summaries
[extract_action_items(t) for t in texts] — batch extraction
[ai_pipeline(t) for t in texts] — batch full pipeline

Each call is a live API request. For long lists, add a print to track progress — the comprehension runs calls sequentially and can take several seconds for 20+ items.
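The progress printing suggested above is easiest as the loop form of the comprehension, logging before each call. A sketch with classify_urgency stubbed so it runs offline:

```python
def classify_urgency(text: str) -> str:
    # Stub for the real agent call.
    return "high" if "urgent" in text.lower() else "low"

def batch_classify_verbose(texts: list) -> list:
    """Same batch as the comprehension, written as a loop so each call logs progress."""
    labels = []
    for i, text in enumerate(texts, start=1):
        print(f"[{i}/{len(texts)}] classifying...")
        labels.append(classify_urgency(text))
    return labels
```

The comprehension and the loop produce identical results; the loop just gives you a place to print between calls.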