Weeks 1 and 2 gave you functions that process one piece of text at a time. But your survey dataset has 200 open responses. Running each one manually isn't an option. What changes?
A loop. Or a list comprehension. I call the same function on each item in the list — the AI part doesn't change, the iteration does.
Exactly that. Batch processing with AI agents is just batch processing with Python — [classify_urgency(t) for t in texts]. The function you already built works on one item. The list comprehension runs it on all of them. This week you'll also explore running two agents on the same text and chaining a pipeline that returns a structured dict of results.
Won't running 200 agent calls be slow? Or expensive?
Each call is fast — a few seconds for a short text. 200 calls is a few minutes, not hours. For large batches you'd use async or parallelism — that's outside this track's scope. For thesis-scale data (200–1000 items), sequential batch is sufficient and easy to debug when something goes wrong.
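The batch pattern in this exchange can be sketched with a stub standing in for the agent-backed classifier (this `classify_urgency` is a hypothetical placeholder using a keyword check, not the real pydantic-ai version built in Week 2):

```python
# Stub standing in for the Week 2 agent-backed classifier; the real
# version would call Agent(model).run_sync(text).output instead.
def classify_urgency(text: str) -> str:
    return "high" if "urgent" in text.lower() else "low"

# A small batch of survey-style responses (illustrative data).
responses = [
    "Urgent: the export button crashes the app",
    "Minor typo on the settings page",
    "URGENT - cannot log in at all",
]

# Batch processing: the same single-item function, mapped over a list.
labels = [classify_urgency(t) for t in responses]
print(labels)  # ['high', 'low', 'high']
```

The key point survives the stub: nothing about the function changes when you go from one item to 200; only the iteration around it does.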
This week's operations:
[classify_urgency(t) for t in texts] — first batch operation
min(..., key=len) — reduction across a list of agent outputs
[len(Agent(model).run_sync(p).output.split()) for p in prompts] — batch word counts
{summary, urgency} dict — structured output from a chained pipeline
Goal: by Day 21 you can run AI agents over lists of inputs — the foundation for the Day 28 capstone.
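The reduction, batch word count, and structured-dict operations listed above can be sketched without live agent calls. The list `outputs` below is a hypothetical stand-in for a list of `Agent(model).run_sync(p).output` strings, and `summarize` is a stubbed placeholder for a summarizer agent:

```python
# Hypothetical agent outputs; in the real exercises each string would
# come from Agent(model).run_sync(p).output.
outputs = ["a short summary", "an even longer generated summary", "brief"]

# Reduction across a list of agent outputs: keep the shortest string.
shortest = min(outputs, key=len)

# Batch word counts: one count per output, via a list comprehension.
word_counts = [len(o.split()) for o in outputs]

# A chained pipeline step bundles two results into a structured dict.
def summarize(text: str) -> str:
    # Stub for a summarizer agent: truncate instead of calling a model.
    return text[:20]

def classify_urgency(text: str) -> str:
    # Stub for the urgency classifier from Week 2.
    return "high" if "urgent" in text.lower() else "low"

def analyze(text: str) -> dict:
    return {"summary": summarize(text), "urgency": classify_urgency(text)}

print(shortest)     # 'brief'
print(word_counts)  # [3, 5, 1]
print(analyze("Urgent: dashboard loads slowly on Mondays")["urgency"])  # 'high'
```

Each pattern is plain Python over a list; swapping the stubs back for real agent calls changes nothing about the iteration or the dict structure.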