summarize_text used a loose system prompt — two sentences, any phrasing. What if you need the agent to return exactly one word from a fixed set? A sentiment tag, a priority flag, a status label?
I'd make the system prompt stricter — 'respond with one word only'. But the model might still add punctuation or capitalise differently.
Exactly. So you also normalise the output. A tight system prompt plus .strip().lower() gives you a reliable, comparable label:
agent = Agent(model, system_prompt="Classify the sentiment. Respond with one word: positive, neutral, or negative.")
result = agent.run_sync(text)
return result.output.strip().lower()
Why .strip() first and then .lower()? Does order matter?
Strip removes whitespace and newlines at the edges; models sometimes add a trailing newline. Lower normalises case; models sometimes capitalise the response word. Strip first, lower second keeps the logic clear, and because strip only touches edge whitespace while lower only touches letters, the two operations commute, so order doesn't cause bugs here.
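The order-independence is easy to verify with plain Python (the sample string is made up, but it mimics the trailing newline and capitalisation a model might add):

```python
# A messy-but-valid model response: leading spaces, capital letter, trailing newline.
raw = "  Positive\n"

# Both orders produce the same clean, comparable label.
assert raw.strip().lower() == "positive"
assert raw.lower().strip() == "positive"
```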
def classify_sentiment(text: str) -> str:
agent = Agent(model, system_prompt="Classify the sentiment. Respond with one word: positive, neutral, or negative.")
return agent.run_sync(text).output.strip().lower()
So this is the pattern for any classification: tight system prompt, strip+lower on the output. I can use this for lead quality, project urgency, invoice status.
You're already three steps ahead. That's the right instinct — this is a general pattern. Week 2 replaces .strip().lower() with result_type=Literal[...], which is even tighter. But for now you understand why the normalisation step exists.
I wrote a classifier in five lines without training a model or reading a paper.
That's the whole pitch. Pre-trained model, structured prompt, Python string ops. No ML degree required.
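To reuse the pattern across label sets (lead quality, urgency, invoice status), the normalisation step can be pulled into a small helper. This is a sketch, not part of the lesson's code: `normalise_label` and `PRIORITY` are hypothetical names, and the `Agent` call itself would be unchanged from `classify_sentiment`:

```python
def normalise_label(raw: str, allowed: frozenset[str]) -> str:
    """Strip and lowercase a raw model response, then check it
    against the allowed label set. Raises if the model strayed."""
    label = raw.strip().lower()
    if label not in allowed:
        raise ValueError(f"unexpected label: {label!r}")
    return label

# Hypothetical label set for a project-urgency classifier.
PRIORITY = frozenset({"low", "medium", "high"})

# A messy-but-valid response is cleaned into a comparable label.
print(normalise_label("  High\n", PRIORITY))  # high
```

The explicit membership check is the one thing the lesson's five-liner skips: it turns a model that ignores the prompt into a loud error instead of a silently wrong label.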
A system_prompt shapes every response, but models can still vary whitespace and capitalisation. Two operations clean that up:
.strip() — removes leading/trailing whitespace and newlines
.lower() — normalises case
agent = Agent(model, system_prompt="Respond with one word: positive, neutral, or negative.")
result = agent.run_sync(text).output.strip().lower()
# Returns 'positive', 'neutral', or 'negative' — always comparable
Week 2 will introduce result_type=Literal[...] which enforces constraints at the type level. This week's pattern is useful when you want a soft constraint without full structured output.
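As a small preview of that type-level idea, the standard library's `typing` module can already name the allowed set. This sketch uses only `Literal` and `get_args` and does not touch any agent framework:

```python
from typing import Literal, get_args

# The three labels as a type; in Week 2 the agent enforces this directly.
Sentiment = Literal["positive", "neutral", "negative"]

# get_args() recovers the allowed values at runtime.
ALLOWED = set(get_args(Sentiment))

# Until then, a normalised output can be checked by hand:
label = " Negative\n".strip().lower()
assert label in ALLOWED
```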