Your summarize_text constrains length. What if you need a label — not a summary — something you can use as a filter condition in code?
Like tagging customer feedback as positive or negative? I'd want a single word I could use in an if-statement.
A system prompt that says 'Respond with exactly one word: positive, negative, or neutral' is a classifier. The model is forced to pick from your vocabulary, not generate free text:
agent = Agent(model, system_prompt="Respond with exactly one word: positive, negative, or neutral.")
return agent.run_sync(text).output.strip().lower()

Why .strip().lower()? Shouldn't the model just return the word I asked for?
Models sometimes add a trailing newline or capitalise the first letter. .strip() removes surrounding whitespace and .lower() normalises case, so 'Positive\n' becomes 'positive'. Here's the full function:
def classify_sentiment(text: str) -> str:
    agent = Agent(model, system_prompt="Respond with exactly one word: positive, negative, or neutral.")
    output = agent.run_sync(text).output.strip().lower()
    print(f"Sentiment: {output}")
    return output

I'm writing a job description that makes the AI act like a very constrained intern — only allowed to say three words.
That's the right mental model. A classifier is an agent with a very short vocabulary. The system prompt is the constraint that makes it reliable enough to use in production logic.
I can run all my support tickets through this and filter by sentiment before I read them. Hot ones first.
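That triage idea can be sketched in plain Python. The classify_sentiment call below is a stand-in stub using a keyword heuristic (my own assumption, so the example runs offline without an API key); in practice you would use the agent-backed function above. The ticket texts are invented for illustration.

```python
# Stub standing in for the agent-backed classify_sentiment above,
# so this sketch runs without a model call.
def classify_sentiment(text: str) -> str:
    negative_words = ("angry", "broken", "refund", "terrible")
    if any(word in text.lower() for word in negative_words):
        return "negative"
    return "positive"

tickets = [
    "Love the new dashboard, thanks!",
    "My export is broken and I am angry.",
    "Please process my refund, this is terrible.",
]

# Hot (negative) tickets first, everything else after.
hot = [t for t in tickets if classify_sentiment(t) == "negative"]
rest = [t for t in tickets if classify_sentiment(t) != "negative"]
queue = hot + rest
print(queue[0])  # "My export is broken and I am angry."
```

The classifier's one-word output is what makes the list comprehension possible: the == comparison only works because the label is drawn from a known vocabulary.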
.strip().lower() is a normalisation habit — build it in from the start. Any time you use model output in an if-statement or as a dict key, normalise it first. The LLM doesn't always get it right, and defensive normalisation is cheaper than debugging a comparison that fails because of a capital letter.
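One way to harden this further (my own extension, not part of the lesson's code) is to check the normalised output against the allowed vocabulary and fall back to a default when the model says something unexpected:

```python
ALLOWED = {"positive", "negative", "neutral"}

def normalise_label(raw: str, default: str = "neutral") -> str:
    """Strip whitespace, lowercase, and reject anything outside the vocabulary."""
    label = raw.strip().lower()
    return label if label in ALLOWED else default

print(normalise_label("Positive\n"))    # "positive"
print(normalise_label("I feel great"))  # falls back to "neutral"
```

The fallback means a misbehaving model degrades to a safe default instead of leaking an arbitrary string into your if-statements.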
A tight system prompt with an enumerated vocabulary turns an LLM into a classifier:
agent = Agent(model, system_prompt="Respond with exactly one word: positive, negative, or neutral.")
return agent.run_sync(text).output.strip().lower()

.strip().lower() is defensive normalisation. Models sometimes add trailing newlines or capitalise. If the output goes into a comparison or a dict key, always normalise first.
The LLM doesn't always follow instructions perfectly — normalisation catches the common failures. For stricter guarantees, Week 2 introduces result_type=Literal[...].