Thirty days of agent calls, result_type, chains, batches, live search, and custom tools. Rate yourself on the same six questions from Day 1 — the gap between today and then is the point of the whole month.
I remember being a 1 on most of these. Literal, result_type, @agent.tool_plain were words I had never seen. Now I reach for them without thinking.
And the capstone on Day 28 — how did wiring a search agent into an extraction agent feel compared with your first run_sync on Day 3?
Much smaller than I expected. Once the patterns click, three agents in one function is shorter than my first attempt at one.
That is the shift the track tries to land — from "AI is magic" to "an agent is a function that returns a value." Rate honestly and compare with Day 1.
On Day 1 you rated your confidence on six agent skills: calling run_sync, shaping output with system_prompt, using a Pydantic model as result_type, constraining with Literal, chaining multiple agents, and registering a Python tool. Today you rate the same six.
The functions you wrote this month — run_agent, summarize_text, classify_urgency, triage_ticket, batch_word_counts, research_report — are real agents. Not demos. They call live LLMs, return validated structures, and compose into pipelines that search the web and extract typed facts.
The mental model transfers: an agent is a network call that takes text and returns a value whose shape you chose. The specific patterns in this track are examples; the shape is what you keep. Next in the explorers track, agents pull data from external services on demand — same run_sync, bigger context.
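That mental model can be sketched in plain Python with a stub standing in for the network call — no pydantic-ai, no API key. Everything below (`fake_llm`, the `Ticket` shape, `classify_urgency`) is a hypothetical illustration of the shape, not the track's actual code:

```python
from dataclasses import dataclass
from typing import Literal

def fake_llm(prompt: str) -> str:
    # Hypothetical stub standing in for a live LLM call. In the track,
    # agent.run_sync() makes the network call; here we fake the answer
    # so the pattern is visible offline.
    return "high" if "outage" in prompt.lower() else "low"

@dataclass
class Ticket:
    # The value's shape, chosen by you -- playing the role that a
    # Pydantic result_type with a Literal field plays in the track.
    urgency: Literal["low", "high"]

def classify_urgency(text: str) -> Ticket:
    # "An agent is a function that takes text and returns a value
    # whose shape you chose": call the model, validate, return typed data.
    raw = fake_llm(text)
    if raw not in ("low", "high"):
        raise ValueError(f"unexpected label: {raw}")
    return Ticket(urgency=raw)
```

Swapping `fake_llm` for a real agent call changes none of the surrounding code — which is why chaining and batching felt small by Day 28: the callers only ever see typed values.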