Five memory shapes in five days — numbered context, keyword recall, system_prompt injection, compressed summaries, and message_history. How do these fit together for you?
Like layers. The context is the raw fact store. Recall trims it. Injection hands it to the agent. Compression shrinks it. Multi-turn history tracks a conversation on top of it all.
That is the stack. The quiz probes the shapes — what chr(10).join produces, what any() versus all() does, and what the stateful_response dict looks like.
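Those two quiz points can be checked in a few lines. A minimal sketch (the fact strings are made up for illustration):

```python
# chr(10) is the newline character, so chr(10).join behaves exactly
# like "\n".join — useful inside f-string expressions, where older
# Pythons disallow backslash escapes.
facts = ["User likes tea", "User is in Lisbon"]
context = chr(10).join(facts)
assert context == "User likes tea\nUser is in Lisbon"

# any() matches when at least one keyword hits; all() requires every one.
line = "User likes tea"
keywords = ["tea", "coffee"]
assert any(kw.lower() in line.lower() for kw in keywords)
assert not all(kw.lower() in line.lower() for kw in keywords)
```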
Anything that trips most folks?
One question gives you a history list and a new input and asks for the updated_history. The order is history + [new_input, reply] — the input comes before the reply, reflecting the order events happened.
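That ordering rule is easy to verify in code. A hedged sketch — the function body here is a stand-in (the real version would call a model), but the history arithmetic is the point:

```python
def stateful_response(history, new_input):
    # Stand-in for a real model call; only the history ordering matters here.
    reply = f"echo: {new_input}"
    return {
        "response": reply,
        # Input first, then reply — the order the events happened.
        "updated_history": history + [new_input, reply],
    }

result = stateful_response(["hi", "echo: hi"], "how are you?")
assert result["updated_history"] == [
    "hi", "echo: hi", "how are you?", "echo: how are you?"
]
```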
Five memory shapes, each a cleanly separable piece of the stateful pattern:
- build_context_string — enumerate(facts, start=1) joined by newline
- recall_facts — any(kw.lower() in line.lower()) filter over split lines
- agent_with_context — context injected via system_prompt
- compress_memory — summarizer agent shrinks long contexts past a threshold
- stateful_response — returns {response, updated_history} so state flows through calls

Key insight: all memory lives in your Python code. The model only ever sees what you hand it on this call.
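The first two shapes are pure Python and can be sketched end to end. The function names come from the quiz; the bodies and example facts below are my assumptions about their behavior:

```python
def build_context_string(facts):
    # Numbered context: one "N. fact" per line, newline-joined.
    return chr(10).join(f"{i}. {fact}" for i, fact in enumerate(facts, start=1))

def recall_facts(context, keywords):
    # Keep only lines where at least one keyword matches, case-insensitively.
    return [
        line for line in context.split("\n")
        if any(kw.lower() in line.lower() for kw in keywords)
    ]

facts = ["User prefers dark mode", "User's name is Ada", "Deadline is Friday"]
ctx = build_context_string(facts)
assert ctx.split("\n")[0] == "1. User prefers dark mode"
assert recall_facts(ctx, ["deadline"]) == ["3. Deadline is Friday"]
```

The recalled (or compressed) string is then what agent_with_context hands to the model as its system_prompt — the model never sees anything you don't pass on that call.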