Day 7 chained summarize_text and classify_sentiment — but the classification was free text. How do you make the final output a Literal-constrained label?
Replace classify_sentiment with an agent that uses result_type=Literal["relevant", "background", "unrelated"]. The summarise step stays the same — free text output fed into the classifier.
Exactly. The chain is identical to Day 7's pattern — but the second agent uses result_type=Literal instead of a system prompt constraint. The first agent's output becomes the second agent's input, and the final result is API-enforced:
from typing import Literal

from pydantic_ai import Agent

def summarize_and_classify(text: str) -> str:
    summary = summarize_text(text)
    agent = Agent(model, result_type=Literal["relevant", "background", "unrelated"])
    result = agent.run_sync(summary)
    return result.output

Why not put the summarisation and classification into one agent call?
You could — but chaining is cleaner. Each agent has a single job. The summariser produces clean text; the classifier makes a single decision. Debugging is easier: if the classification is wrong, you can print the intermediate summary to see what the classifier received. Monolithic prompts hide where things go wrong:
def summarize_and_classify(text: str) -> str:
    summary = summarize_text(text)
    agent = Agent(model, result_type=Literal["relevant", "background", "unrelated"])
    result = agent.run_sync(summary)
    label = result.output
    print(f"Summary: {summary[:50]}... → {label}")  # inspect what the classifier saw
    return label

I can now classify 40 abstracts into 'relevant', 'background', or 'unrelated' for my literature review — two agents, one function call per abstract, structured output every time.
That's the research assistant pattern. Summarise first to reduce noise. Classify on the clean summary with a constrained label. The output feeds directly into a spreadsheet column — no parsing, no free text to clean up.
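The spreadsheet handoff can be sketched as a plain CSV-writing loop. The abstract texts below are placeholders, and `summarize_and_classify` is stubbed to a fixed label here so the loop itself is runnable; the real version is the two-agent chain above.

```python
import csv
import io

def summarize_and_classify(text: str) -> str:
    # Stub standing in for the summarise-then-classify chain.
    return "relevant"

abstracts = ["Abstract 1 text...", "Abstract 2 text..."]

# Write one row per abstract: the text and its constrained label.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["abstract", "label"])
for abstract in abstracts:
    writer.writerow([abstract, summarize_and_classify(abstract)])

rows = buffer.getvalue().splitlines()
```

Because the label is one of three known strings, the `label` column needs no cleanup before filtering or sorting in the spreadsheet.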
Three hours of abstract triage, now a 40-iteration loop. My advisor is going to ask how I got through my literature review so fast.
Tell her you automated the triage pass and validated the shortlist manually. That's an honest and defensible methodology note.
def summarize_and_classify(text: str) -> str:
    summary = summarize_text(text)  # free text summary
    agent = Agent(model, result_type=Literal["relevant", "not_relevant"])
    return agent.run_sync(summary).output

Summaries remove noise. A 300-word raw response with scattered relevance signals is harder to classify reliably than a two-sentence summary that focuses on the core claim.
Each function call in the chain makes one API call. A two-step chain uses two calls — budget for that in rate-limited contexts.
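One way to budget for the doubled call count is a fixed delay per chain invocation. The 60-requests-per-minute limit below is a hypothetical figure, not any specific provider's quota; with two API calls per abstract, it works out to one abstract every two seconds.

```python
import time

CALLS_PER_ABSTRACT = 2     # one summarise call + one classify call
REQUESTS_PER_MINUTE = 60   # hypothetical provider rate limit

# Seconds to wait between abstracts so the chain stays under the limit.
delay = 60 / (REQUESTS_PER_MINUTE / CALLS_PER_ABSTRACT)

def throttled(abstracts, process):
    # Run `process` (e.g. summarize_and_classify) over each abstract,
    # pausing between iterations to respect the rate limit.
    results = []
    for abstract in abstracts:
        results.append(process(abstract))
        time.sleep(delay)
    return results
```

A real run would likely want exponential backoff on 429 responses as well; this sketch only spaces requests evenly.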
Add a third step to route: if "relevant", add to a review queue; if "not_relevant", archive. The chain grows by appending function calls, not by modifying existing ones.
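That third routing step can be sketched as one more function appended to the chain. The keyword-based `classify` below is a stub standing in for the two-step `summarize_and_classify`, so the routing logic is runnable on its own; the queue and archive are plain lists for illustration.

```python
review_queue: list[str] = []
archive: list[str] = []

def classify(text: str) -> str:
    # Stub: the real chain would call summarize_and_classify(text).
    return "relevant" if "transformer" in text.lower() else "not_relevant"

def route_abstract(text: str) -> str:
    # Third chain step: branch on the constrained label.
    label = classify(text)
    if label == "relevant":
        review_queue.append(text)
    else:
        archive.append(text)
    return label
```

Because the label is Literal-constrained, the branch is exhaustive by construction — there is no third string the `if` could silently miss.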