compare_tones from yesterday runs two parallel agents and returns both outputs. For abstract triage, you need both the summary and the urgency classification together — one call, two results. How do you compose the pipeline functions you already have?
I'm chaining these like functions now. Run Agent A to summarise, pass the summary to Agent B for urgency classification, collect both results into a dict. It's the same pipeline pattern from the Python track, except the steps are AI calls.
Exactly. Two agents, one pipeline, one dict:
from typing import Literal

def ai_pipeline(text: str) -> dict:
    summary_agent = Agent(model, system_prompt="Summarize in 2 sentences.")
    summary = summary_agent.run_sync(text).output
    classify_agent = Agent(model, result_type=Literal["high", "medium", "low"])
    urgency = classify_agent.run_sync(summary).output
    return {"summary": summary, "urgency": urgency}

Why classify the summary rather than the original text for urgency? Urgency seems like it could come from the full context.
The summary distils the core claim. Classifying urgency from a full abstract risks the model weighting irrelevant sections — methodology, dataset size, limitations. Feeding the two-sentence summary focuses the classifier on the research finding, which is the urgency signal you care about.
So the chain improves the classifier's signal-to-noise ratio, not just the word count. The pipeline is doing pre-processing that a human analyst would do instinctively.
That's the architectural insight. Each step narrows the input to what the next step needs.
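The narrowing idea is the same as plain function composition: each stage's output is the next stage's smaller input. A toy non-AI analogue (the keyword rule here is illustrative, not how a model classifies):

```python
def summarise(text: str) -> str:
    # Stand-in for Agent A: keep only the first sentence.
    return text.split(".")[0] + "."

def classify(summary: str) -> str:
    # Stand-in for Agent B: it only ever sees the summary,
    # never the methodology or limitations sections.
    return "high" if "urgent" in summary.lower() else "low"

abstract = "Urgent recall of batch 7. Methods: survey of 200 sites."
label = classify(summarise(abstract))   # -> "high"
```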
Two hundred abstracts, one pipeline call each, a dict with summary and urgency for every one. That's my systematic review first pass automated.
The batch run is a list comprehension:
results = [ai_pipeline(abstract) for abstract in corpus]
high = [r for r in results if r["urgency"] == "high"]

The urgency label is a first-pass signal. Review the "high" pile yourself — the model may misread a limitation paragraph as urgent. Verify before acting.
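Once each abstract has its dict, the triage filter is plain list work. A sketch with hard-coded dicts standing in for real ai_pipeline output:

```python
# Hard-coded stand-ins for ai_pipeline output over a small corpus.
results = [
    {"summary": "Outbreak confirmed in region X.", "urgency": "high"},
    {"summary": "Minor dataset curation update.", "urgency": "low"},
    {"summary": "New resistant strain reported.", "urgency": "high"},
]

high = [r for r in results if r["urgency"] == "high"]
print(len(high), "of", len(results), "flagged for manual review")
```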
from typing import Literal

def ai_pipeline(text: str) -> dict:
    # Step 1: Summarise
    summary_agent = Agent(model, system_prompt="Summarize in 2 sentences.")
    summary = summary_agent.run_sync(text).output
    # Step 2: Classify urgency of the summary
    classify_agent = Agent(model, result_type=Literal["high", "medium", "low"])
    urgency = classify_agent.run_sync(summary).output
    return {"summary": summary, "urgency": urgency}
Returning {"summary": ..., "urgency": ...} keeps both outputs accessible without separate function calls. Pass the whole dict to a Sheet row (append_row) or unpack individual fields for different downstream consumers — the pipeline is the dict, not the individual strings.
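One way the dict can fan out downstream — sketched here with the standard-library csv module rather than a Sheets client (append_row belongs to whatever Sheets helper you're using; the row shape is the point):

```python
import csv
import io

# Stand-ins for ai_pipeline output; each dict becomes one row.
results = [
    {"summary": "Outbreak confirmed in region X.", "urgency": "high"},
    {"summary": "Minor dataset curation update.", "urgency": "low"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["summary", "urgency"])
writer.writeheader()
writer.writerows(results)
print(buf.getvalue())
```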