Your context string has 80 numbered facts. The agent only needs the ones about memory and disk. How do you pull just those without an AI call?
Split the context on newlines and check each line for a keyword? Plain Python, no model?
Exactly — keyword recall is a string operation, not an agent task. Split, lowercase, match, collect. The shape is a simple list comprehension with any():
```python
matches = [
    line for line in context.split("\n")
    if any(kw.lower() in line.lower() for kw in keywords)
]
```

And any() returns True as soon as one keyword matches — so it is OR logic, broader recall than matching every keyword?
OR by default. any() casts a wider net, all() narrows to AND. For exploration you want broader; for precision you switch to all(). The full function:
```python
def recall_facts(context: str, keywords: list) -> list:
    lines = context.split("\n")
    return [
        line for line in lines
        if any(kw.lower() in line.lower() for kw in keywords)
    ]
```

Why lowercase both sides? Does the context come in with mixed case?
Numbered facts often contain capitalized nouns — "Memory spike" versus a query for "memory." Lowercasing both the line and each keyword makes the match case-insensitive without touching the original context. The returned lines keep their original case.
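A minimal sketch of that behavior, using a few made-up facts: the match is case-insensitive, but the returned lines keep their original capitalization.

```python
# Hypothetical three-fact context; "Memory" and "Disk" are capitalized on purpose.
context = "1. Memory spike at 02:00\n2. Disk usage at 91%\n3. CPU idle"
keywords = ["memory", "disk"]

# Lowercase both the line and the keyword for comparison only;
# the original line is what lands in the result.
matches = [
    line for line in context.split("\n")
    if any(kw.lower() in line.lower() for kw in keywords)
]
print(matches)  # ['1. Memory spike at 02:00', '2. Disk usage at 91%']
```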
So this is the retrieval step — filter big context down to the relevant slice before sending anything to the model?
Retrieval before inference. Cheap string filtering trims the prompt, keeps token spend low, and gives the agent only the facts it needs. Plain Python, zero AI cost.
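The retrieval-then-prompt flow can be sketched end to end. The 80-fact context and the prompt wrapper here are hypothetical stand-ins, not part of the original lesson:

```python
def recall_facts(context: str, keywords: list) -> list:
    return [
        line for line in context.split("\n")
        if any(kw.lower() in line.lower() for kw in keywords)
    ]

# Hypothetical 80-fact context: topics cycle memory/disk/network/cpu.
topics = ["memory", "disk", "network", "cpu"] * 20
context = "\n".join(f"{i}. fact about {topic}"
                    for i, topic in enumerate(topics, start=1))

# Filter first, then build the prompt from only the relevant slice.
relevant = recall_facts(context, ["memory", "disk"])
prompt = "Answer using only these facts:\n" + "\n".join(relevant)
print(len(relevant))  # 40 of the 80 lines survive the filter
```

Half the context never reaches the model, which is the whole point: the string filter costs nothing, and every token in the prompt is relevant.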
TL;DR: split the context on newlines, filter by any(kw.lower() in line.lower()), keep the matching lines.
- `context.split("\n")` — one line per fact
- `any(...)` — OR match, broad recall
- `.lower()` on both sides — case-insensitive

any() vs all():

| Function | Logic | Recall |
|---|---|---|
| `any()` | OR | Broad |
| `all()` | AND | Narrow |
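The broad-versus-narrow difference is easiest to see on the same lines with the same keywords (the facts below are made up for illustration):

```python
lines = [
    "1. Memory spike on disk node",  # contains both keywords
    "2. Memory spike at 02:00",      # memory only
    "3. Disk usage at 91%",          # disk only
]
keywords = ["memory", "disk"]

# any() keeps a line if at least one keyword matches (OR).
broad = [l for l in lines if any(k in l.lower() for k in keywords)]
# all() keeps a line only if every keyword matches (AND).
narrow = [l for l in lines if all(k in l.lower() for k in keywords)]

print(len(broad), len(narrow))  # 3 1
```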
Retrieval before inference — filter first, send only the relevant slice to the agent.