classify_urgency returned one string — high, medium, or low — because the answer is always a single choice. What if the answer is inherently plural? A meeting with six open commitments?
Yesterday with classify_urgency I constrained the output to three options. But for action items there's no fixed number — the list is as long as the meeting was disorganised.
Right. For a variable-length list of strings you swap the Literal for list[str]. The agent returns a proper Python list — every item a separate string, no parsing, no splitting:
def extract_action_items(text: str) -> list[str]:
    agent = Agent(model, result_type=list[str])
    result = agent.run_sync(text)
    return result.output

Wait — result.output is already a list? I don't call .model_dump() the way I did with the Pydantic model?
No model_dump(). That method only exists on Pydantic model instances. When result_type is a plain Python type, result.output is that type directly — the rule in one comparison:
# Pydantic model → call .model_dump()
result = Agent(model, result_type=Contact).run_sync(text)
data = result.output.model_dump()  # dict

# list[str] → use .output as-is
result = Agent(model, result_type=list[str]).run_sync(text)
data = result.output  # list

So the LLM is returning a structured Python list, not a markdown bullet string I have to parse. That's the difference between a tool I can use in a pipeline and a chatbot I have to babysit.
Exactly. And because it is a real list, you can immediately index it, filter it, count it, or pass it into the next step in code. The model does the extraction; your code does the logic. No text parsing glue in between.
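To make that concrete, here is a minimal sketch of the downstream logic. The `items` list is a hypothetical stand-in for the `result.output` an extractor like the one above might return; everything below it is plain Python, no parsing glue.

```python
# Assume `items` stands in for result.output from the extractor.
items = [
    "Send the revised budget to finance by Friday",
    "Book a room for the Q3 review",
    "Follow up with Dana about the vendor contract",
]

count = len(items)                                  # count them
first = items[0]                                    # index them
friday = [i for i in items if "by Friday" in i]     # filter them
```

Because the model returned a real list, the filtering and counting is ordinary list code, not regex work on a markdown blob.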
I just spent forty minutes last Tuesday copy-pasting action items out of a Teams transcript into a spreadsheet. One function call and I get a list back. I'm furious and relieved at the same time.
That forty minutes is now zero lines of manual work. Next you will chain this extractor into a classifier — pulling the list and then tagging each item by owner or urgency in a single pipeline.
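One way that chained pipeline could look in outline. Both functions here are deterministic stubs for illustration only; in the real version each would wrap an agent call (`result_type=list[str]` for the extractor, a `Literal` for the classifier):

```python
def extract_action_items(text: str) -> list[str]:
    # Stub standing in for the Agent(model, result_type=list[str]) call.
    return [line.strip("- ") for line in text.splitlines() if line.startswith("-")]

def classify_urgency(item: str) -> str:
    # Stub standing in for the Literal["high", "medium", "low"] classifier.
    return "high" if "today" in item.lower() else "low"

def triage(text: str) -> list[tuple[str, str]]:
    # Chain the two steps: extract the list, then tag each item.
    return [(item, classify_urgency(item)) for item in extract_action_items(text)]
```

The shape is the point: because each step returns typed Python values, chaining is a comprehension, with no text parsing between the stages.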
result_type=list[str] Returns a List Directly

result.output is typed to match result_type exactly. When result_type=list[str], the output is a Python list — iterate it, index it, or pass it straight to the next function.
No model_dump() here. That method only exists on Pydantic model instances. list[str], int, and Literal values come back as native Python — no conversion step.
System prompt tip: you can guide the extraction quality with system_prompt="Return each action item as a concise sentence starting with a verb." — the list structure is guaranteed by result_type; the phrasing is shaped by the prompt.