Your classify_urgency handles one ticket. You have 30. What's the manual version of running it 30 times?
A for loop. But I'd rather write it in one line — a list comprehension.
Exactly. One line:
return [classify_urgency(t) for t in texts]

That's your SDR team's hourly triage meeting replaced by a list comprehension. Here's the full function:
def batch_classify(texts: list) -> list:
    results = [classify_urgency(t) for t in texts]
    print(f"Classified {len(results)} texts")
    return results

This calls the model once per text? 30 texts means 30 model calls?
Yes — 30 independent calls, one per text. PydanticAI is synchronous here so they run sequentially. For a list of 10-30 short tickets, total runtime is 10-30 seconds — fast enough for a morning batch job.
So the output is a list of 'high', 'medium', 'low' strings in the same order as the input list?
Same order as the input. Index 0 of results is the classification of index 0 of texts. That ordering matters when you want to zip(texts, results) to pair each ticket with its label.
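The order-preserving pairing can be sketched end to end. Here classify_urgency is a hypothetical stub standing in for the real model call, so the whole thing runs without an API key:

```python
# Minimal sketch of order-preserving batch classification.
# classify_urgency is a stub standing in for the real model call.
def classify_urgency(text: str) -> str:
    keywords = ("outage", "down", "urgent")
    return "high" if any(k in text.lower() for k in keywords) else "low"

def batch_classify(texts: list) -> list:
    results = [classify_urgency(t) for t in texts]
    print(f"Classified {len(results)} texts")
    return results

tickets = ["Production outage on checkout", "Question about invoice"]
labels = batch_classify(tickets)

# zip relies on preserved order: labels[i] belongs to tickets[i].
for ticket, label in zip(tickets, labels):
    print(f"{label}: {ticket}")
```

Because the list comprehension walks texts front to back, zip never mispairs a ticket with another ticket's label.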
I'm running 10 inbound emails through it now — the same triage my SDR team does, except the AI finishes in 15 seconds.
The LLM doesn't always get it right — that's why you validate. For urgency routing, a false positive (marking low-urgency as high) wastes time; a false negative (missing a critical outage) loses customers. For production use, add a review step for high-urgency outputs before auto-routing.
results = [classify_urgency(t) for t in texts]

List comprehensions preserve input order. results[i] is always the classification of texts[i]. Use zip(texts, results) to pair each input with its label.
Calls are sequential — one model call per text. For 10-30 inputs this is fast enough for a morning batch job. For hundreds of inputs, add concurrency.
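One common way to add that concurrency is a thread pool, since each call spends most of its time waiting on the network. This is a sketch, not the tutorial's code; classify_urgency is again a stub for the real model call:

```python
from concurrent.futures import ThreadPoolExecutor

# Stub standing in for the real (network-bound) model call.
def classify_urgency(text: str) -> str:
    keywords = ("outage", "down", "urgent")
    return "high" if any(k in text.lower() for k in keywords) else "low"

def batch_classify_concurrent(texts: list, max_workers: int = 8) -> list:
    # executor.map preserves input order, just like the list comprehension,
    # so results[i] still corresponds to texts[i].
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(classify_urgency, texts))

labels = batch_classify_concurrent(["Site is down!", "Feature request"])
```

executor.map is a drop-in here precisely because it keeps input order; switching from the sequential comprehension changes throughput, not the contract.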
For high-stakes routing (urgency, billing flags), add a human review step for edge cases. Batch classification is fast — not infallible.
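That review step can be as simple as filtering the paired results into a human queue before anything auto-routes. A minimal sketch, assuming results already holds the labels in input order:

```python
def review_queue(texts: list, results: list) -> list:
    # Hold high-urgency items for a human pass before auto-routing.
    return [
        (text, label)
        for text, label in zip(texts, results)
        if label == "high"
    ]

queue = review_queue(
    ["Checkout outage", "Billing question"],
    ["high", "low"],
)
```

Everything not in the queue can route automatically; only the high-stakes slice costs human time.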