Second task verb: infer. Given some text, pick a label — sentiment, category, intent, etc. Closed-set output: the model picks from a list you provide.
review = "The package arrived broken. Total waste of money."
prompt = f'Classify the sentiment of this review as exactly one of: positive, negative, neutral.\n\nReview: "{review}"\n\nReply with just the label, no explanation.'
result = Agent(model).run_sync(prompt)
print(result.output.strip())

Why "reply with just the label"?
Without it the model often pads — "This review is negative because...". You'd then have to extract the label from prose. Asking for the label-only output is one constraint that turns the answer into something your code can use directly.
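Even with the instruction, it pays to validate the reply against the closed set before using it. A minimal sketch (the function and fallback scan are illustrative, not part of any library):

```python
VALID_LABELS = {"positive", "negative", "neutral"}

def parse_label(raw: str) -> str:
    """Normalize the model's reply and validate it against the closed set."""
    label = raw.strip().lower()
    if label in VALID_LABELS:
        return label
    # Fallback: the model padded despite the instruction; scan for a known label.
    for candidate in VALID_LABELS:
        if candidate in label:
            return candidate
    raise ValueError(f"unrecognized label: {raw!r}")

print(parse_label("Negative\n"))  # -> negative
```

The fallback scan rescues replies like "This review is negative because...", while genuinely unusable output fails loudly instead of silently flowing downstream.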
Given text, pick from a list of labels. The classic three:
positive / negative / neutral
bug / feature_request / question / praise
buy / compare / complain / learn

prompt = f'''Classify the following as exactly one of: {label_list}.
Text: "{text}"
Reply with just the label, no explanation.'''

Three pieces: the closed label set, the text to classify, and the output-format constraint.
Without an explicit list, the model invents labels ("a bit angry", "mildly disappointed"). With a closed set, it picks from your options.
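Keeping the labels in one Python list means the prompt and your validation code can never drift apart. A small sketch (the label set and example text are made up):

```python
# Single source of truth for the closed set.
labels = ["bug", "feature_request", "question", "praise"]
label_list = ", ".join(labels)

text = "The export button does nothing when I click it."
prompt = f'''Classify the following as exactly one of: {label_list}.
Text: "{text}"
Reply with just the label, no explanation.'''

print(prompt)
```

Change the list and both the prompt and any membership check pick up the new labels automatically.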
.strip().lower()

Real model output sometimes adds whitespace or capitalization variants:
label = result.output.strip().lower()
# normalizes "Negative\n" / "NEGATIVE" / "negative " all to "negative"

Useful before checking which label was picked.
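Once normalized, the label can drive ordinary control flow, for example routing with a plain dict. The handler names below are made up for illustration:

```python
def escalate(review: str) -> str:
    return f"escalating: {review}"

def thank(review: str) -> str:
    return f"thanking: {review}"

def file_away(review: str) -> str:
    return f"filed: {review}"

# Map each label in the closed set to an action.
handlers = {"negative": escalate, "positive": thank, "neutral": file_away}

label = "Negative\n".strip().lower()  # stand-in for a model reply
print(handlers[label]("The package arrived broken."))  # -> escalating: The package arrived broken.
```

This is the payoff of the closed set: because the model can only answer with one of your labels, a dict lookup is all the dispatch logic you need.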
Extract is a flavour of infer:
f"Extract the person's full name from this sentence. Reply with just the name.\n\nSentence: \"{text}\""

Same shape — closed-format constraint, single-token answer where possible. For complex extraction (multiple fields), structured output (day 11) is the better tool.
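Building the extract prompt follows the same pattern, and a light sanity check catches replies that look like prose instead of a name. A sketch, with the sentence and the word-count threshold chosen arbitrarily for illustration and the model reply simulated:

```python
text = "After the meeting, Maria Gonzalez signed the contract."
prompt = (
    "Extract the person's full name from this sentence. "
    "Reply with just the name.\n\n"
    f'Sentence: "{text}"'
)

raw = "Maria Gonzalez\n"  # stand-in for the model's reply
name = raw.strip()
# A real name is a few words at most; a longer reply suggests the model padded.
assert name and len(name.split()) <= 5, "reply looks like prose, not a name"
print(name)  # -> Maria Gonzalez
```

The same normalize-then-check shape as classification, just with a format constraint ("a short name") instead of a closed label set.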