Today's lesson exercises five primitives from this track on a tiny generic input. Five chunks. Three eval cases. One tool call. The point isn't the size — it's the composition.
Pipeline: redact → hash-keyed cache lookup → top-k retrieval → generate (or log unanswered via Tasks) → score.
What composes: chunk store, retrieval, PII redaction, response caching, and a tool call — one pass, no new concepts.
Why Tasks instead of Sheets?
Tasks auto-provisions a default list per user — no setup. Sheets requires an existing spreadsheet to append to. Tasks gives us a portable proof-the-tool-fired primitive with no preconditions.
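To sketch that "proof the tool fired" idea without any external account, a local stand-in logger is enough. Everything here is a hypothetical stand-in: the real lesson appends to a Google Tasks list, while this version just records that the call happened.

```python
# Hypothetical stand-in for the Tasks append: it only proves the tool fired,
# with no preconditions (no spreadsheet, no account, no setup).
unanswered_log = []

def tool_log_unanswered(query: str) -> None:
    # The lesson's version inserts into the user's default Tasks list;
    # appending locally keeps the "tool fired" fact checkable in tests.
    unanswered_log.append(query)

tool_log_unanswered("what is the refund policy?")
assert len(unanswered_log) == 1  # proof the tool fired
```

The point of the stand-in is that swapping it back for the real Tasks call changes nothing upstream: the pipeline only depends on the function firing.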
Five primitives, one script.
# Inputs (in the practice)
chunks = [...5 strings...]
eval_cases = [...3 (query, expected) pairs...]

# Pipeline
import hashlib

store = build_store(chunks)
cache = {}
for query, expected in eval_cases:
    clean_query = redact(query)  # PII scrub before hashing or retrieval
    key = hashlib.sha256(clean_query.encode()).hexdigest()
    if key in cache:
        answer = cache[key]
    else:
        top_id, top_score = top_k(clean_query, store, k=1)[0]
        if top_score < 0.3:
            tool_log_unanswered(clean_query)  # Tasks append
            answer = "out-of-corpus"
        else:
            context = store[top_id]["text"]
            answer = generate(clean_query, context)
        cache[key] = answer
    score(answer, expected)

The v1 north star, in one synthesis lesson: primitives compose. You can swap in real embeddings, a vector DB, and a different tool — the shape stays exactly the same. The kit you have can build any AI-automation script.
The lesson is not "build a knowledge bot". Five chunks is far too small for that. The lesson is proving that five of this track's concepts plug together correctly on a minimal demonstration. From here, the same composition extends to 5,000 chunks, real users, production data — without any new concepts.
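One hedged illustration of that swap: a cosine-similarity `top_k` with the exact same signature the pipeline already calls. The `embed` function here is a hypothetical stand-in (a character-frequency vector) for a real embedding model — replacing it changes nothing about the call site.

```python
import math

def embed(text):
    # Hypothetical stand-in for a real embedding model: a 26-dim
    # character-frequency vector, just enough to make cosine runnable.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def top_k(query, store, k=1):
    # Same signature as before: returns [(id, score), ...] best-first.
    q = embed(query)
    scored = sorted(((d["id"], cosine(q, embed(d["text"]))) for d in store),
                    key=lambda pair: pair[1], reverse=True)
    return scored[:k]

store = [{"id": 0, "text": "refund policy"}, {"id": 1, "text": "zzz zzz"}]
best_id, best_score = top_k("refunds", store, k=1)[0]
```

Because the signature is unchanged, the pipeline loop above runs against this retrieval without a single edit — which is the point of the composition.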
Example assertion: redact_calls >= 3 (one per query). Four assertions, one for each primitive.
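A minimal sketch of that first assertion: wrap a hypothetical `redact` with a call counter, run it over the queries, and assert the count. The counter and the trivial redact body are assumptions for illustration.

```python
redact_calls = 0

def redact(query):
    # Hypothetical stand-in PII scrub; the counter is what the test checks.
    global redact_calls
    redact_calls += 1
    return query.strip()

queries = ["q1", "q2", "q3"]
cleaned = [redact(q) for q in queries]
assert redact_calls >= 3  # one redact call per eval query
```

The other three assertions follow the same pattern: instrument the primitive, run the pipeline, assert it fired.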