Before any code, you take a one-minute read on where you stand. This track is about agents that remember — programs that hold history across calls, compress state when it grows, and reason over conversation instead of isolated prompts. What does that sound like to you?
I have called LLMs and gotten answers back, but each call felt like amnesia — no memory of the last turn. I have not written anything that keeps context alive.
That is the gap this track closes. A single call is a function; a stateful agent is a loop that feeds its own output back as input to the next call. Same Agent, bigger pattern.
And the state lives where? In a database, in the process, somewhere else entirely?
Usually in a Python list or dict you pass between calls. The model never manages memory for you — your code does. This track shows you every shape of that list: plain history, keyword recall, compressed summaries, structured observations.
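That loop-plus-list pattern can be sketched in a few lines. This is a minimal illustration, not the track's actual code: `call_model` is a hypothetical stand-in for whatever LLM call you use, and the message shape is an assumption.

```python
# Minimal sketch of a stateful agent loop: the history lives in a
# plain Python list that YOUR code carries between calls -- the model
# never manages memory for you.

def call_model(messages: list[dict]) -> str:
    # Hypothetical stand-in: a real implementation would send
    # `messages` to an LLM and return its reply.
    return f"(reply to: {messages[-1]['content']})"

def chat_turn(history: list[dict], user_input: str) -> str:
    history.append({"role": "user", "content": user_input})
    reply = call_model(history)  # the model sees the full history
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
chat_turn(history, "My name is Ada.")
chat_turn(history, "What is my name?")  # earlier turns ride along
print(len(history))  # four messages: two user, two assistant
```

Each call gets the whole list, so turn two can answer questions about turn one. Swap the stub for a real model call and the structure stays the same.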
So by the end I will be writing multi-turn agents that remember what came before, not just one-shot answers?
Exactly that. Rate yourself now — the same six questions return on Day 30 so you can measure the shift.
Thirty days of stateful, multi-turn agents. No prior state management needed — you start with one call per function and scale up.
Literal, list[str], Pydantic observations, ranking, two-call refinement. system_prompt injection, compression, message_history. Every code lesson runs a live model inside Vercel Sandbox. No mocks in your dialog, no canned responses — the agent you call is real.
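To give a flavor of one of those techniques, here is a hedged sketch of history compression: once the list grows past a threshold, older turns are folded into a single summary entry. The summarizer below is a naive placeholder; in the track, an agent would ask the model to write the summary.

```python
# Sketch of compressed summaries: keep the newest turns verbatim and
# collapse everything older into one summary string. The truncation
# here is a stand-in for a model-written summary.

def compress(history: list[str], keep_last: int = 2) -> list[str]:
    if len(history) <= keep_last:
        return history  # nothing old enough to compress
    summary = "summary: " + " | ".join(h[:20] for h in history[:-keep_last])
    return [summary] + history[-keep_last:]

history = ["turn one ...", "turn two ...", "turn three ...", "turn four ..."]
history = compress(history)
print(len(history))  # 3: one summary plus the two newest turns
```

The agent then passes this shorter list on the next call, trading detail for a context window that never overflows.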
Create a free account to get started. Paid plans unlock all tracks.