In Week 2, every lesson returned one thing — a Pydantic model, a Literal label, a list. Week 3 multiplied that. compare_tones ran two agents and returned a dict. What changed in how you thought about the code?
I stopped thinking of each agent call as the answer and started thinking of it as one step in a longer flow. On Day 17 I had two agents with different system prompts returning into the same dict. That felt like building a small product, not just calling a function.
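The Day 17 pattern can be sketched like this. A real version would use pydantic_ai.Agent with different system prompts; here a small StubAgent (an assumption, not the lesson's code) stands in for the model call so the shape is runnable offline:

```python
from types import SimpleNamespace


class StubAgent:
    """Stand-in for pydantic_ai.Agent; a real agent would call the model."""

    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt

    def run_sync(self, prompt: str):
        # Deterministic echo so the example runs without an API key.
        return SimpleNamespace(output=f"[{self.system_prompt}] {prompt}")


formal = StubAgent("Respond in a formal tone.")
casual = StubAgent("Respond in a casual tone.")


def compare_tones(prompt: str) -> dict[str, str]:
    # Two agents, one prompt, both answers collected into a single dict.
    return {
        "formal": formal.run_sync(prompt).output,
        "casual": casual.run_sync(prompt).output,
    }


print(compare_tones("Explain recursion."))
```

The dict is the point: the function's value is no longer one model answer but a small product assembled from several.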
That shift — from one call to one step — is the whole point of Week 3. ai_pipeline on Day 18 formalised it: summarise first, then classify the summary, return both values together. Each agent does one job and hands off.
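The summarise-then-classify hand-off might look like the sketch below. The stub agents and their canned behaviour are assumptions standing in for real pydantic-ai model calls; what matters is the shape, with the second agent receiving the first agent's output:

```python
from types import SimpleNamespace


class StubAgent:
    """Stand-in for pydantic_ai.Agent; fn fakes the model's behaviour."""

    def __init__(self, system_prompt: str, fn):
        self.system_prompt = system_prompt
        self.fn = fn

    def run_sync(self, prompt: str):
        return SimpleNamespace(output=self.fn(prompt))


# Deterministic stand-ins for the two model calls (assumed behaviour).
summariser = StubAgent("Summarise in one sentence.",
                       lambda t: " ".join(t.split()[:5]))
classifier = StubAgent("Label as positive, negative, or neutral.",
                       lambda s: "negative" if "error" in s else "neutral")


def ai_pipeline(text: str) -> dict[str, str]:
    summary = summariser.run_sync(text).output   # step 1: summarise the raw text
    label = classifier.run_sync(summary).output  # step 2: classify the summary
    return {"summary": summary, "label": label}


print(ai_pipeline("The billing error caused three support tickets this week."))
```

Each agent does one job; the chaining lives in plain Python, not in either prompt.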
And then Day 19 took that single agent and ran it across a whole list with a list comprehension. batch_classify is just [agent.run_sync(t).output for t in texts] — the same call, repeated cleanly. I'm chaining these like functions now.
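The batch shape is the one-liner quoted above, wrapped in a function. The stub classifier here is an assumption so the example runs offline; a real pydantic-ai agent's run_sync would replace it unchanged:

```python
from types import SimpleNamespace


class StubAgent:
    def run_sync(self, prompt: str):
        # Stand-in classifier; a real agent would call the model here.
        return SimpleNamespace(
            output="question" if prompt.endswith("?") else "statement"
        )


agent = StubAgent()


def batch_classify(texts: list[str]) -> list[str]:
    # The same single-item call, repeated cleanly over the whole list.
    return [agent.run_sync(t).output for t in texts]


print(batch_classify(["Is it raining?", "It is raining."]))
# → ['question', 'statement']
```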
Days 20 and 21 pushed the batch idea further. shortest_response used min() with key=len to pick the tightest answer from multiple prompts. batch_word_counts counted words in each agent output without storing the strings at all — just the counts. Let's see what landed.
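Both Day 20 and Day 21 ideas can be sketched together. Again a stub (assumed behaviour: it just uppercases the prompt) stands in for the real agent; the techniques being shown are min() with key=len over a generator, and counting words without keeping the response strings:

```python
from types import SimpleNamespace


class StubAgent:
    def run_sync(self, prompt: str):
        # Stand-in: echoes the prompt uppercased; a real agent calls the model.
        return SimpleNamespace(output=prompt.upper())


agent = StubAgent()


def shortest_response(prompts: list[str]) -> str:
    # min() with key=len picks the tightest answer; the generator means
    # responses are compared one at a time, never held in a list.
    return min((agent.run_sync(p).output for p in prompts), key=len)


def batch_word_counts(texts: list[str]) -> list[int]:
    # Keep only the counts; each response string is discarded immediately.
    return [len(agent.run_sync(t).output.split()) for t in texts]


print(shortest_response(["tell me a long story", "hi"]))  # → HI
print(batch_word_counts(["one two", "a b c"]))            # → [2, 3]
```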