Your classify_sentiment classifies a full support ticket. But what if the ticket is 500 words and you want to classify the core complaint, not the preamble?
Run summarize_text first to get the two-sentence core, then classify the summary. The classifier is working on cleaner signal.
That's exactly the pattern. summarize_text(text) → classify_sentiment(summary). Two functions, one chain. The output of Agent A is the input of Agent B:
summary = summarize_text(text)
sentiment = classify_sentiment(summary)
return sentiment

So there are two separate model calls happening here? That costs twice the tokens?
Two calls, yes. The first agent summarises — it reads the whole text. The second agent classifies — it reads two sentences. The second call is cheap because the first did the compression. Total token cost is less than classifying the raw 500 words. Here's the full function:
def summarize_then_classify(text: str) -> str:
    summary = summarize_text(text)
    sentiment = classify_sentiment(summary)
    print(f"Summary: {summary[:60]}... Sentiment: {sentiment}")
    return sentiment

I'm hiring a summariser and a classifier and wiring them in series. It's an org chart in Python.
A two-agent pipeline is exactly that: specialised workers in sequence. The summariser doesn't know about sentiment; the classifier doesn't know about long-form text. Each does one thing well.
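The two specialised agents can be sketched as thin prompt wrappers around a single model call. Everything below is illustrative: `llm` is a hypothetical helper standing in for whatever client you actually use, and its canned replies exist only so the chain runs offline.

```python
def llm(prompt: str) -> str:
    # Stand-in for a real model call; swap in your provider's client here.
    # The canned replies below are purely for offline demonstration.
    if prompt.startswith("Summarize"):
        return "The customer's order arrived broken. They want a refund."
    return "negative"

def summarize_text(text: str) -> str:
    # Agent A: knows only how to compress; nothing about sentiment.
    return llm(f"Summarize the following in two sentences:\n{text}")

def classify_sentiment(text: str) -> str:
    # Agent B: knows only how to label; expects short input.
    return llm(
        "Classify the sentiment of this text as one word "
        f"(positive/negative/neutral):\n{text}"
    )
```

With a real `llm`, each function is just a prompt plus a call; the specialisation lives entirely in the prompt text.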
I'm chaining these like functions now. The output of one is the input of the next. That's a real process in Python.
This chain is the foundation for the Week 4 capstone: classify lead emails, extract structured records, sort by fit tier. The pattern you wrote today is that sales pipeline, just without the Pydantic layer. Build it in your head now so the capstone is familiar.
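To get a feel for the capstone's shape, here is the same chaining pattern stretched to three stages. Every name in this sketch (`classify_lead`, `extract_record`, `FIT_ORDER`) is an illustrative assumption with stub logic in place of model calls, not the capstone's actual API:

```python
def classify_lead(email: str) -> str:
    # Stage 1: label the email (stub logic stands in for a model call).
    return "hot" if "buy" in email.lower() else "cold"

def extract_record(email: str, tier: str) -> dict:
    # Stage 2: turn the email plus its label into a structured record.
    return {"email": email, "tier": tier}

# Hypothetical sort order: hottest leads first.
FIT_ORDER = {"hot": 0, "warm": 1, "cold": 2}

def triage_leads(emails: list[str]) -> list[dict]:
    # Stage 3: chain the stages, then sort by fit tier.
    records = [extract_record(e, classify_lead(e)) for e in emails]
    return sorted(records, key=lambda r: FIT_ORDER[r["tier"]])
```

The wiring is identical to summarise-then-classify: each stage's output is the next stage's input, with a plain Python sort at the end.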
Agent A output → Agent B input. The output of any agent call is a Python string — any function that takes a string can be chained next.
summary = summarize_text(text)           # Agent A: compress
sentiment = classify_sentiment(summary)  # Agent B: label
return sentiment

Specialisation. A classifier tuned for one-word output is more reliable than an agent asked to both summarise and classify in one pass. Two focused agents beat one overloaded one.
Summarise first, classify the summary. The second call processes two sentences instead of 500 words, so it's cheaper and faster.
Any function that returns a string can feed any function that takes a string. The chain can be as long as needed — compress → classify → extract → format is four agents linked the same way.
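That "string out, string in" rule means chains compose mechanically. A minimal sketch, with four stub stages standing in for four agent calls and a hypothetical `chain` helper doing the wiring:

```python
from functools import reduce

def chain(*stages):
    # Compose str -> str functions left to right into one callable.
    return lambda text: reduce(lambda acc, f: f(acc), stages, text)

# Stub stages standing in for four agent calls.
def compress(t: str) -> str:
    # Pretend to summarise by truncating.
    return t[:40]

def classify(t: str) -> str:
    # Pretend to label by appending a tag.
    return f"{t} [label: neutral]"

def extract(t: str) -> str:
    # Pull the label back out of the tagged string.
    return t.split("[label:")[-1].rstrip("] ").strip()

def fmt(t: str) -> str:
    # Format the final answer.
    return f"Sentiment: {t}"

pipeline = chain(compress, classify, extract, fmt)
```

Swapping any stub for a real agent function changes nothing about the wiring; the chain only cares that strings go in and strings come out.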