Day 10 extracted two named fields from text. What if you need the model to extract a variable number of items — like all the action items from a meeting transcript?
result_type=list[str] — instead of a Pydantic model, I pass the list type directly. The model returns a Python list of strings I can iterate over immediately.
Exactly. result_type=list[str] tells the API to return a JSON array of strings — no Pydantic class definition needed. result.output is already a list[str]:
def extract_action_items(text: str) -> list[str]:
    agent = Agent(model, result_type=list[str])
    result = agent.run_sync(text)
    return result.output

How does the model know what counts as an action item vs. background context? The system prompt is empty.
The model infers from the result_type and the user prompt. For better precision, you can add a system prompt that defines what qualifies as an action item. But for most research notes, list[str] with a clear user prompt ('Extract all action items from:') gives reliable results:
def extract_action_items(text: str) -> list[str]:
    agent = Agent(model, result_type=list[str])
    result = agent.run_sync(f"Extract all action items from this text: {text}")
    items = result.output
    print(f"Found {len(items)} action items")
    return items

After every advisor meeting I paste the transcript in, call this, and get a clean task list. No more scrolling back through a wall of text to find what I agreed to do.
The extraction quality depends on the clarity of the transcript and the user prompt. Dense, informal text produces noisier results. Structure your meeting notes — even brief notes — and the extraction improves significantly.
My advisor emails me a bullet list after every meeting anyway. Now I can extract tasks from her emails instead of re-reading them.
The same pattern applies to emails, transcripts, paper abstracts, or any text with embedded items. result_type=list[str] is the simplest extraction shape when the number of items is variable.
result_type=list[str]

agent = Agent(model, result_type=list[str])
result = agent.run_sync(f"Extract all action items: {text}")
return result.output  # already a list[str]

Use list[str] when the number of items is variable and each item is a plain string. The model produces a JSON array; the API validates the type before returning.
Prefix the text with an extraction instruction in the user prompt: "Extract all action items from the following text:". The system prompt can reinforce format: "Return a JSON list of concise action items."
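As a small sketch of that advice, the instruction prefix can live in a plain helper so every call uses the same wording (build_extraction_prompt is a hypothetical name, not part of any library API):

```python
def build_extraction_prompt(text: str) -> str:
    """Prefix raw text with an explicit extraction instruction."""
    return f"Extract all action items from the following text:\n\n{text}"

# The agent call then becomes:
#   result = agent.run_sync(build_extraction_prompt(transcript))
```

Keeping the instruction in one place means every transcript, email, or abstract gets the same consistent prompt.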
result.output is a Python list you can iterate, filter, or pass to a spreadsheet writer. No string splitting or parsing needed.
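For example, because result.output is already a plain Python list, downstream handling needs no parsing at all — this hypothetical helper filters out blank entries and writes one action item per CSV row:

```python
import csv

def save_action_items(items: list[str], path: str) -> None:
    # Drop empty or whitespace-only strings, then write one item per row.
    rows = [[item.strip()] for item in items if item.strip()]
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
```

From there the file opens directly in any spreadsheet tool.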