Five days ago you had never called Agent(model).run_sync(prompt). How does the pattern feel now?
Less scary than expected. I kept waiting for the complicated part and it never arrived — it is just a Python call that returns a string.
That is the design. The model does the hard cognitive work. Your code wraps the call, reads .output, and moves on. Which pattern clicked hardest?
The chain on Day 7 — summarize, pass the summary into classify, strip-and-lower the label. Once you see that two agent calls are just two Python variables, the whole idea of an AI pipeline stops feeling magic.
That is exactly the right reaction. Let's see what stuck.
Week 1 covered five shapes of the same call. Agent(model).run_sync(prompt) returns a result object; .output is a plain Python string. From that one primitive, you extracted word counts, built a summarizer with a system_prompt, constrained a classifier to one word, and chained two agents into a summarize-then-classify function.
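The chain from Day 7 can be sketched in a few lines. This is a minimal sketch with the model call stubbed out by a canned fake, so the control flow is visible without an API key; in real code, `call_model` would be `Agent(model).run_sync(prompt).output`.

```python
# Stub standing in for Agent(model).run_sync(prompt).output.
# The canned replies are hypothetical; a real model generates them.
def call_model(prompt: str) -> str:
    if prompt.startswith("Summarize"):
        return "A short summary of the article."
    return " Sports \n"  # classifiers often return stray whitespace

def summarize_then_classify(text: str) -> str:
    summary = call_model(f"Summarize in one sentence: {text}")  # agent call 1
    label = call_model(f"Classify into one word: {summary}")    # agent call 2
    return label.strip().lower()                                # Python cleanup

print(summarize_then_classify("Some long article text..."))  # → sports
```

The two agent calls are just two Python variables, `summary` and `label`; everything between them is ordinary string plumbing.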
The important idea of this week is the separation of labor. The model handles language. Python handles everything else: counting words, stripping whitespace, lowercasing, branching. When you start mixing those — asking the model to sort numbers or count characters — accuracy drops. When you let the model shape language and let Python handle structure, the code stays small and the results stay reliable.
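Python's half of that bargain is deterministic string work the model is unreliable at. A small sketch, with the raw label and summary as hypothetical stand-ins for what `.output` would return:

```python
raw_label = "  Positive\n"           # imagine this came from .output
label = raw_label.strip().lower()    # normalize in Python, not in the prompt

summary = "Markets rose sharply on Tuesday."
n_words = len(summary.split())       # count in Python; models miscount

# Branching also belongs in Python, not in the prompt.
if label == "positive" and n_words <= 10:
    verdict = "short positive summary"
else:
    verdict = "needs review"

print(label, n_words, verdict)  # → positive 5 short positive summary
```

Every line after the model call is exact and repeatable, which is why the pipeline stays reliable.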
Week 2 keeps the same call and adds a new capability: instead of a raw string, result_type lets the agent return a typed Pydantic object. Same pattern, stricter output.
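As a preview of what a typed result looks like, here is a sketch using a Pydantic model directly. The `Classification` schema and the `raw` payload are hypothetical; in Week 2 the agent would parse the model's response into this object for you (note the parameter is `result_type` in some pydantic-ai versions and `output_type` in others).

```python
from pydantic import BaseModel

class Classification(BaseModel):
    label: str
    confidence: float

# Hypothetical payload standing in for what the agent would hand back
# once the agent is constructed with this result type.
raw = {"label": "sports", "confidence": 0.92}
parsed = Classification.model_validate(raw)

print(parsed.label)  # a typed attribute, not a string to re-parse
```

Instead of stripping and lowercasing a raw string, your code reads validated fields with known types.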