Day 6's classify_sentiment used a system prompt to request one word. Yesterday's result_type=Contact used Pydantic to enforce a schema. Can you apply the same enforcement to a label?
result_type=Literal["high", "medium", "low"] — instead of free text, the API enforces that the output is exactly one of those three values. No .strip().lower() needed.
Precisely. Literal from the typing module defines an enumerated type — the API rejects any output that isn't in the set. You get exactly "high", "medium", or "low" back — guaranteed:
def classify_urgency(text: str) -> str:
    agent = Agent(model, result_type=Literal["high", "medium", "low"])
    result = agent.run_sync(text)
    return result.output

Where does Literal come from? It wasn't in the Week 1 preamble.
The sandbox preamble injects Literal from Python's typing module. Same as BaseModel, Agent, and model — you use them without importing them. The pattern is the same as result_type=Contact — just replace the Pydantic class with the Literal type:
def classify_urgency(text: str) -> str:
    agent = Agent(model, result_type=Literal["high", "medium", "low"])
    result = agent.run_sync(text)
    label = result.output
    print(f"Urgency: {label}")
    return label

This is the Likert scale analogy from the voice bible. I define the response options, the model can only pick one, and I get the raw label back with no parsing needed.
Right. The difference from Day 6: Day 6's constraint was in the prompt — best-effort. result_type=Literal is API-level — the model's response is validated before you receive it. Use Literal when the label set is fixed and exhaustive.
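What that API-level check amounts to can be sketched in plain Python. This is a standalone illustration, not pydantic-ai's internals: the membership test below stands in for the schema validation the library applies before run_sync returns, and validate_label is a hypothetical helper, not part of the lesson's code.

```python
from typing import Literal, get_args

Urgency = Literal["high", "medium", "low"]

def validate_label(raw: str) -> str:
    # get_args extracts the Literal's value set; anything outside it
    # is rejected rather than passed along as free text.
    if raw not in get_args(Urgency):
        raise ValueError(f"{raw!r} is not one of {get_args(Urgency)}")
    return raw
```

Contrast this with the Day 6 approach, where an off-script reply like "Urgent!" would sail through and need cleanup downstream.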
My survey questionnaire uses 5-point Likert scales. I could classify open responses into 'strongly agree' through 'strongly disagree' with Literal in ten minutes.
The scale maps directly. Define Literal["strongly_agree", "agree", "neutral", "disagree", "strongly_disagree"] and the model assigns one label per response. That's a legitimate coding strategy for open-text items when disclosed in the methodology.
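Because the labels come back as exact strings, they also map cleanly onto the numeric 1–5 scale survey analysis expects. A stdlib-only sketch, assuming the conventional scoring where strongly_agree = 5 (the score mapping is an illustration, not something the lesson defines):

```python
from typing import Literal, get_args

Agreement = Literal["strongly_agree", "agree", "neutral",
                    "disagree", "strongly_disagree"]

# Score each label 1-5, highest agreement highest: the Literal lists
# labels from strongly_agree down, so reverse before enumerating.
SCORES = {label: score
          for score, label in enumerate(reversed(get_args(Agreement)), start=1)}

def score_response(label: Agreement) -> int:
    # Safe dict lookup: the Literal result type guarantees the key exists.
    return SCORES[label]
```

With a Literal-typed classifier upstream, no response can arrive with a label the scoring table doesn't know.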
result_type=Literal[...]

    agent = Agent(model, result_type=Literal["high", "medium", "low"])
    result = agent.run_sync(text)
    return result.output  # guaranteed to be one of the three values

System prompt: best-effort — model should return one word. Literal: API-enforced — model must return a value from the set. No .strip().lower() needed.
Literal is injected by the sandbox preamble alongside BaseModel, Agent, and model. You use it without importing from typing.
Use Literal when the label set is fixed, exhaustive, and feeds downstream logic that requires exact string equality — for example, routing high-urgency items to a follow-up queue.
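The routing case can be sketched directly: because the label is guaranteed to be one of the three values, plain equality checks are reliable. The queue names here are illustrative, not from the lesson.

```python
from typing import Literal

Urgency = Literal["high", "medium", "low"]

def route(label: Urgency) -> str:
    # Exact string equality is safe: the Literal result type rules out
    # variants like "High" or "high." that a prompt-only approach allows.
    if label == "high":
        return "follow_up_queue"
    return "standard_queue"
```
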