Week 2 agents return structured data for one input. Your corpus has 200 abstracts. Running them one at a time through summarize_and_classify is still 200 manual function calls. What's the scaling step?
A list comprehension calling the agent for each text in a list. One loop, 200 calls, 200 structured outputs. That's inter-rater reliability at scale — same coding criteria applied to every abstract.
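A minimal sketch of that scaling step. The `summarize_and_classify` stub below stands in for the real Week 2 agent call (which would hit a model API); only the loop shape matters here.

```python
def summarize_and_classify(text: str) -> dict:
    # Stub: a real version would send `text` to the agent and
    # return its structured output.
    return {
        "summary": text[:40],
        "label": "methods" if "method" in text.lower() else "other",
    }

abstracts = [
    "This paper proposes a new method for topic modelling.",
    "A narrative review of qualitative interview studies.",
]

# One loop: the same coding criteria applied to every abstract.
results = [summarize_and_classify(a) for a in abstracts]
```

Swap the stub for the real agent function and `abstracts` for your 200-item corpus; the comprehension itself doesn't change.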
That's Week 3. Batch processing, multi-step pipelines, and comparing parallel agent personas — like running the same abstract through two different coding schemas to check consistency. By the end of the week, you'll have the building blocks for the Week 4 capstone search-and-extract pipeline.
How long does a 200-abstract batch take? Is it sequential or parallel?
The list comprehension runs sequentially — each call blocks until the response arrives. For 200 abstracts at ~1 second each, roughly 3 minutes. For true parallelism, you'd use asyncio.gather — that's beyond this track's scope. Three minutes for 200 abstracts beats a weekend of manual reading by any measure.
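To see the sequential behaviour concretely, here is a sketch with a fake agent call that sleeps to simulate network latency (0.01 s here instead of ~1 s, so it runs quickly):

```python
import time

def fake_agent_call(text: str) -> str:
    time.sleep(0.01)  # stand-in for per-call API latency
    return text.upper()

texts = ["first abstract", "second abstract", "third abstract"]

start = time.perf_counter()
# Each call blocks until the previous one returns.
results = [fake_agent_call(t) for t in texts]
elapsed = time.perf_counter() - start
# Total time ≈ (number of texts) × (per-call latency):
# sequential cost grows linearly with corpus size.
```

At ~1 second per real call, the same arithmetic gives roughly 200 seconds for 200 abstracts.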
- `compare_tones`: two agents with different system prompts on the same text
- `ai_pipeline`: summarise → classify, returns dict
- `batch_classify`: list comprehension for 200 abstracts
- `shortest_response`: batch + `min(key=len)` to pick the most concise summary
- `batch_word_counts`: batch word count audit across all agent outputs

Goal: a batch pipeline that classifies 200 open-text survey responses into themes in one loop.
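A hedged sketch of the `compare_tones` idea: two agents with different system prompts applied to the same text. The `make_agent` factory below is an assumption, a stub for whatever agent constructor the course provides; a real version would send the system prompt plus the text to a model.

```python
def make_agent(system_prompt: str):
    """Hypothetical factory: returns a callable 'agent' bound to one persona."""
    def agent(text: str) -> str:
        # Stub response; a real agent would call the model API here.
        return f"[{system_prompt}] {text}"
    return agent

def compare_tones(text: str) -> dict:
    # Same input, two coding schemas -- a consistency check in miniature.
    formal = make_agent("Summarise in formal academic register")
    casual = make_agent("Summarise in plain conversational language")
    return {"formal": formal(text), "casual": casual(text)}

out = compare_tones("Participants reported mixed feelings about remote work.")
```

Disagreement between the two outputs is the signal: it flags abstracts where the coding schema, not the text, is driving the result.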