I finished AI Foundations. I can call an LLM, get JSON, and hold a multi-turn chat. What's left?
Foundations taught you how to ask an LLM things. This track teaches you how to give it tools, examples, and validation — and trust it to compose. The big shift: from generating text to triggering actions.
What changes in the code?
A handful of patterns. Tool calling — define a Python function, register it as a tool, the model decides when to call it. Chain-of-thought — coax the model to reason step-by-step before answering. Chained prompts — output of step N feeds step N+1, a small pipeline. Output validation — pure-Python checks before you act on the LLM's response. Self-critique — the model reviews its own draft. Moderation — safe/unsafe routing. Evals — input/expected pairs that catch regressions when you tweak a prompt. Agent loop — while-not-done with tool calls.
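To make the first of those concrete, here is a minimal tool-calling sketch in pydantic-ai, the library Foundations used. The model string and the add tool are illustrative placeholders, not exercises from the track:

```python
from pydantic_ai import Agent

# Any model from Foundations works; this string is just an example.
agent = Agent("openai:gpt-4o-mini")

@agent.tool_plain
def add(a: int, b: int) -> int:
    """Add two integers. The signature and docstring become the tool schema."""
    return a + b

# The model decides whether and when to call add(); we never call it ourselves.
result = agent.run_sync("What is 17 + 25? Use the add tool.")
print(result.output)
```

Every other pattern in the list is a variation on this shape: plain Python on one side, a model deciding what to do with it on the other.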
And by week 4?
You'll be writing a small agent that picks among multiple tools — including a real Composio tool that touches Gmail or Sheets — and grading its behaviour with an eval suite. Same minimum-demonstration discipline: tiny generic inputs, no "build a customer support bot". The patterns are the point.
AI Patterns is track 8 of 9. Foundations gave you the LLM as a generator. Patterns gives you the LLM as a participant — picking tools, reasoning, validating itself, being measured.
Week 1 — Tool calling and chain-of-thought. What a tool schema looks like, single tool call, multi-step tool calls, chain-of-thought prompting, few-shot CoT.
Week 2 — Chains and validation. Chained prompts, classify→branch pipelines, deterministic output validation, self-critique, moderation, eval criteria.
Week 3 — Agents and evals. Agent concept, agent loop in code, multi-tool agent, tiny eval suite, iteration via evals, a synthesis combining 5+ primitives.
Week 4 — Production patterns. Typed outputs via pydantic, retry-on-bad-output extended, eval scoring rubrics, multi-step planning chains, agent + real Composio tool, final integration synthesis.
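As a taste of how week 2's deterministic validation and week 4's retry-on-bad-output fit together, here is one possible sketch. The model string, the three-tag constraint, and the cap of three attempts are all assumptions made for illustration:

```python
from pydantic_ai import Agent

agent = Agent("openai:gpt-4o-mini")

def valid_tags(text: str) -> bool:
    """Deterministic, pure-Python check: exactly three non-empty lowercase tags."""
    tags = [t.strip() for t in text.split(",")]
    return len(tags) == 3 and all(t and t.islower() for t in tags)

prompt = "Give exactly three lowercase, comma-separated tags for: 'a recipe for lentil soup'."
output = ""
for _ in range(3):  # retry-on-bad-output, capped so it always terminates
    output = agent.run_sync(prompt).output
    if valid_tags(output):
        break
    prompt += "\nThat answer was rejected. Return ONLY three lowercase tags separated by commas."

print(output)
```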
Write small Python scripts that: define tool schemas the LLM can call; run single- and multi-step tool-calling cycles; apply chain-of-thought to harder reasoning problems; build chained-prompt pipelines; validate LLM output deterministically before acting on it; run an agent loop that selects tools and terminates; and write a small eval suite to catch regressions when iterating on prompts.
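Of those, the eval suite is the least familiar shape, so here is one minimal version. The classify() helper and the sentiment cases are hypothetical stand-ins for whatever prompt you happen to be iterating on:

```python
from pydantic_ai import Agent

agent = Agent(
    "openai:gpt-4o-mini",
    system_prompt="Reply with exactly one word: positive or negative.",
)

def classify(text: str) -> str:
    """The thing under test: a single prompt whose wording you keep tweaking."""
    return agent.run_sync(f"Sentiment of: {text!r}").output.strip().lower()

# Input/expected pairs. Re-run after every prompt tweak to catch regressions.
CASES = [
    ("I loved it", "positive"),
    ("Never again", "negative"),
    ("Best purchase this year", "positive"),
]

passed = sum(classify(text) == expected for text, expected in CASES)
print(f"{passed}/{len(CASES)} cases passed")
```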
AI Foundations completed (or equivalent). Comfortable with Agent(model).run_sync, result.output, result.usage(), result.all_messages(), output_type=YourPydanticModel. Python Foundations and Automation Foundations strongly recommended — pure-Python tool functions and Composio actions appear here.
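If those names feel rusty, this is roughly the level of fluency assumed; the model string and the City model are placeholders:

```python
from pydantic import BaseModel
from pydantic_ai import Agent

class City(BaseModel):
    name: str
    country: str

agent = Agent("openai:gpt-4o-mini", output_type=City)

result = agent.run_sync("Name one city in Japan.")
print(result.output)          # a validated City instance
print(result.usage())         # token counts for the run
print(result.all_messages())  # the full message exchange
```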
Rate each statement honestly on the 1-5 scale. The same prompts come back on day 30 to mark your delta.