You have summarize_text from Day 5 and classify_sentiment from Day 6. A 200-word abstract is too long for a reliable single-word classification. What do you do?
Summarise first — get a two-sentence version — then classify the summary. The classification agent gets a cleaner input and produces a more reliable label.
That's the two-agent pipeline. The output of summarize_text becomes the input to classify_sentiment. One function, two agent calls in sequence:
def summarize_then_classify(text: str) -> str:
    summary = summarize_text(text)
    label = classify_sentiment(summary)
    return label

Why does summarising first improve classification? The model should be able to handle long text directly.
Models handle long text, but classification accuracy drops with noise. A two-sentence summary removes filler and focuses on the core claim — the classifier sees less ambiguity. It also models a real peer-review workflow: abstract → summary → relevance label. Each step reduces the information to what the next step needs:
def summarize_then_classify(text: str) -> str:
    summary = summarize_text(text)
    label = classify_sentiment(summary)
    print(f"Summary: {summary[:60]}... → {label}")
    return label

Each step is already tested separately. Chaining them is just calling two functions in order. This is no different from chaining any two Python functions.
Exactly. AI agents are functions. The fact that they call a model inside doesn't change how you compose them. You already know function composition from Python basics — this is the same pattern.
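To make the point concrete, here is a minimal sketch of the same composition pattern using plain string functions instead of agents (the `compose` helper and the example functions are illustrative, not part of the lesson's code):

```python
from typing import Callable

def compose(f: Callable[[str], str], g: Callable[[str], str]) -> Callable[[str], str]:
    """Return a function that applies f first, then g — the same shape
    as summarize-then-classify."""
    return lambda x: g(f(x))

# Two ordinary string functions chain exactly like two agent calls would.
trim_then_shout = compose(str.strip, str.upper)
print(trim_then_shout("  hello  "))  # → HELLO
```

Swapping `str.strip` and `str.upper` for `summarize_text` and `classify_sentiment` gives the pipeline from this lesson; nothing about the composition changes.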
I thought AI pipelines would be this complicated framework thing. It's literally label = classify_sentiment(summarize_text(text)).
The complexity is in prompt engineering and reliability — getting each step to behave consistently on your domain. The code structure is simple. The quality of each step determines the quality of the chain.
def summarize_then_classify(text: str) -> str:
    summary = summarize_text(text)        # Agent 1: summarise
    label = classify_sentiment(summary)   # Agent 2: classify
    return label

Each agent has a single job. The summariser reduces noise; the classifier receives a clean, focused input. Chaining avoids asking one agent to do two different things at once, which degrades reliability.
summarize_text and classify_sentiment were written in earlier lessons. Chains are just function calls — each step is independently testable.
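One way to exercise that independent testability is to substitute cheap stubs for the real agents. The sketch below is an assumption, not the lesson's code: the stub logic is invented, and the chain takes its agents as parameters so tests can swap them in without any model calls.

```python
# Invented stubs standing in for the real summarize_text / classify_sentiment,
# so the chain's wiring can be tested offline.
def fake_summarize(text: str) -> str:
    return text.split(".")[0] + "."       # crude "summary": first sentence only

def fake_classify(summary: str) -> str:
    return "positive" if "improve" in summary else "neutral"

# Passing the agents as parameters (a deviation from the lesson's fixed
# signature) makes each step swappable in a test.
def summarize_then_classify(text: str,
                            summarize=fake_summarize,
                            classify=fake_classify) -> str:
    summary = summarize(text)
    return classify(summary)

print(summarize_then_classify("Results improve. Lots of detail follows."))  # → positive
```

In production you would pass the real agents (or leave them as the defaults); in tests, the fakes verify the chain's plumbing in milliseconds.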
Return the final label, not the intermediate summary. If the caller also needs the summary, return a dict instead: {"summary": summary, "label": label}.
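A sketch of that dict-returning variant follows. The agent bodies here are trivial stand-ins so the snippet runs on its own; in the lesson you would use the real `summarize_text` and `classify_sentiment` from Days 5 and 6.

```python
# Stand-ins for the Day 5 / Day 6 agents (not the real implementations).
def summarize_text(text: str) -> str:
    return text[:40]

def classify_sentiment(summary: str) -> str:
    return "positive"

def summarize_then_classify(text: str) -> dict:
    """Return the intermediate summary alongside the final label."""
    summary = summarize_text(text)
    label = classify_sentiment(summary)
    return {"summary": summary, "label": label}

result = summarize_then_classify("A long abstract describing an experiment.")
print(result["label"])   # caller can read either piece
```

Returning a dict keeps the function's contract explicit: callers that only need the label read `result["label"]`, and nothing is lost for callers that also want the summary.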