This track adds external data to your agent toolkit — web search, retrieval, and batch processing. Before the first lesson, where would you place your current comfort with combining AI calls and live data?
I can call an agent once and read .output. Chaining search results into an agent, working with embeddings, or caching results — all of that is new territory for me.
That is exactly the right starting point. Week 1 keeps things quiet on the AI side and teaches the search() primitive on its own. Once you can read search results cleanly, Week 2 introduces the AI layer that reasons over them.
So by the end of the track I should be able to write a function that takes a question and returns a researched answer?
That is the capstone. Rate yourself on the six dimensions below — this is your baseline, so that on Day 30 you can see exactly how far the four weeks have moved you.
You already know how to call an agent. This track layers external data onto that skill so agents can reason about information outside their training data.
Week 1 — search() on its own: count results, extract URLs, format snippets, build summary dicts. No AI yet — the goal is to read search results fluently before adding intelligence on top.
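The Week 1 drills are pure data handling, so they can be sketched without any live calls. The sketch below assumes — as the course's `search()` is not specified here — that each result is a dict with "title", "url", and "snippet" keys; the `results` list stands in for a real `search()` response.

```python
# Illustrative Week 1 drill: reading search results as plain data.
# `results` stands in for the output of the course's search() helper,
# ASSUMED here to be a list of dicts with title/url/snippet keys.
results = [
    {"title": "Intro to RAG", "url": "https://example.com/rag",
     "snippet": "Retrieval-augmented generation combines search with LLMs."},
    {"title": "Embeddings 101", "url": "https://example.com/emb",
     "snippet": "Embeddings map text to vectors."},
]

count = len(results)                        # count results
urls = [r["url"] for r in results]          # extract URLs
formatted = [f'{r["title"]}: {r["snippet"]}' for r in results]  # format snippets

# build a summary dict keyed by URL
summary = {r["url"]: r["title"] for r in results}

print(count)    # 2
print(urls[0])  # https://example.com/rag
```

Nothing here touches a model — the point of Week 1 is exactly this fluency with result shapes before any intelligence is layered on.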
Week 2 — search plus Agent: summarize a top result, classify each snippet, batch-process a list of results, extract structured facts.
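The Week 2 batch pattern can be sketched with a stub in place of the live model. `StubAgent` below is a hypothetical stand-in for the course's Agent — only the idea of calling it and reading `.output` comes from the lesson text; the real API may differ. The stub uses a toy keyword rule so the loop itself is runnable.

```python
# Illustrative Week 2 pattern: batch-classifying snippets with an agent.
# StubAgent is a FAKE stand-in for the course's Agent class; a real
# lesson would make a live model call inside run().
class StubAgent:
    def run(self, prompt: str):
        # Toy classifier: keyword match instead of a model call.
        label = "technical" if "vector" in prompt.lower() else "general"
        return type("Result", (), {"output": label})()

agent = StubAgent()
snippets = [
    "Embeddings map text to vectors.",
    "The conference starts on Tuesday.",
]

# Batch-process: one agent call per snippet, collecting each .output
labels = [agent.run(f"Classify this snippet: {s}").output for s in snippets]
print(labels)  # ['technical', 'general']
```

The shape to notice is the list comprehension: one prompt per result, one `.output` per call, results collected into a plain list you can reason about afterwards.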
Week 3 — embeddings, semantic similarity, caching, deduplication, cross-query comparison. The retrieval side gets sharper.
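The Week 3 ideas — similarity and deduplication — reduce to one formula, cosine similarity, applied pairwise. The sketch below uses tiny hand-written 3-dimensional vectors in place of real embeddings; the two near-duplicate sentences get nearly identical vectors by construction, so the dedup loop drops one of them.

```python
import math

# Cosine similarity: dot product over the product of vector norms.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy "embeddings": the first two texts are paraphrases, so their
# vectors are deliberately close; the third is unrelated.
docs = {
    "Embeddings map text to vectors.":          [0.90, 0.10, 0.00],
    "Text is mapped to vectors by embeddings.": [0.88, 0.12, 0.02],
    "The conference starts on Tuesday.":        [0.00, 0.20, 0.95],
}

# Deduplicate: keep a doc only if it is not near-identical
# (similarity >= 0.98) to any doc already kept.
kept = []
for text, vec in docs.items():
    if all(cosine(vec, docs[k]) < 0.98 for k in kept):
        kept.append(text)

print(len(kept))  # 2 — the paraphrase was dropped
```

Caching follows the same logic in reverse: before embedding or searching, check whether a sufficiently similar query has been answered already.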
Week 4 — retrieval-augmented agents: structured briefs, multi-query synthesis, confidence-scored answers, and a full research-assistant capstone.
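The capstone shape — question in, researched answer out — can be sketched end to end with both sides faked. `fake_search` and `fake_agent` below are hypothetical stand-ins (the course's real `search()` and Agent are live calls); what the sketch shows is only the wiring: retrieve, format the sources into context, synthesize.

```python
# Illustrative Week 4 pipeline: retrieve -> build context -> synthesize.
# fake_search and fake_agent are STUBS; the real capstone swaps in
# live web search and a live agent call.
def fake_search(query):
    return [{"url": "https://example.com/a",
             "snippet": f"Facts about {query}."}]

def fake_agent(prompt):
    # A real agent would reason over the prompt; the stub just
    # reports how many sources it was handed.
    return f"Answer based on {prompt.count('http')} source(s)."

def research(question):
    results = fake_search(question)                              # retrieve
    context = "\n".join(f'{r["url"]}: {r["snippet"]}'
                        for r in results)                        # format
    return fake_agent(
        f"Using these sources:\n{context}\nAnswer: {question}")  # synthesize

print(research("vector databases"))  # Answer based on 1 source(s).
```

Every Week 4 lesson elaborates one stage of this three-step skeleton: multi-query synthesis widens the retrieve step, structured briefs and confidence scores shape the synthesize step.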
Every lesson runs in the Vercel Sandbox against real infrastructure — live web search, live agent calls. The patterns you write here are the ones powering production research tools.
Create a free account to get started. Paid plans unlock all tracks.