classify_urgency from yesterday returns exactly one string. Your post-meeting notes have five action items scattered across three paragraphs. In your current workflow, how do you extract them?
Read the transcript, highlight anything that sounds like a task, add it to my task manager. Fifteen minutes per meeting. For a co-author collaboration that's three meetings a week.
result_type=list[str] tells the model to return a Python list of strings — one item per action. You get a list you can loop over, store, or pass to GOOGLETASKS_INSERT_TASK:
def extract_action_items(text: str) -> list[str]:
    result = Agent(model, result_type=list[str]).run_sync(text)
    return result.output

How does the model know what counts as an action item? I didn't give it a definition.
The model's training includes meeting notes and task extraction — it has strong priors for action-item language. For precision on your specific domain, add a system prompt: "Extract action items as a list. Include only tasks with a clear owner or deadline." The result_type=list[str] enforces the output shape; the system prompt guides the content.
So the list type handles the format and the system prompt handles the domain knowledge. Two controls, both tunable independently.
Exactly. Try it without a system prompt first — the defaults are surprisingly good for standard meeting notes. Add the prompt when you find false positives.
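Until the prompt is tuned, a manual review pass is the cheapest guard against false positives. A minimal sketch; the helper name and sample items are illustrative, not from the lesson:

```python
def review(items: list[str]) -> list[str]:
    """Print extracted items for a human skim, then drop blanks and duplicates."""
    for i, item in enumerate(items, 1):
        print(f"{i}. {item}")
    seen: set[str] = set()
    kept: list[str] = []
    for item in items:
        if item and item not in seen:  # drop empties and exact repeats
            seen.add(item)
            kept.append(item)
    return kept

# A duplicate and a blank slipped through extraction; review trims both.
cleaned = review(["Send revised draft", "Send revised draft", "", "Book room"])
print(cleaned)  # ['Send revised draft', 'Book room']
```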
Meeting transcript in, task list out. Pass the list to append_row and every meeting auto-populates the shared tasks sheet. That's the co-author coordination pipeline I've been building manually.
The pipeline bridge is two lines:
items = extract_action_items(transcript)
for item in items:
print(f"Task: {item}")

One risk: the model may split a compound task into two items. Review the list before marking tasks complete.
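Put together, the bridge looks like this sketch. extract_action_items is stubbed with a bullet-line filter so the flow runs standalone, and append_row (the sheet helper mentioned above) is stubbed with an in-memory list; swap both for the real versions.

```python
rows: list[list[str]] = []  # stand-in for the shared tasks sheet

def append_row(row: list[str]) -> None:
    rows.append(row)

def extract_action_items(transcript: str) -> list[str]:
    # Stub for the Agent(model, result_type=list[str]) call:
    # treat each bulleted line as one action item.
    return [line[2:].strip() for line in transcript.splitlines()
            if line.startswith("- ")]

transcript = """Notes from Tuesday's sync.
- Maria sends the revised draft by Friday
- Sam books the conference room"""

for item in extract_action_items(transcript):
    append_row(["TODO", item])

print(rows)
```

Each extracted item becomes one row, so the sheet grows by exactly as many rows as the model found tasks.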
def extract_action_items(text: str) -> list[str]:
    result = Agent(model, result_type=list[str]).run_sync(text)
    return result.output  # a Python list of strings

agent = Agent(
    model,
    system_prompt="Extract action items as a list. Include only tasks with a clear owner or deadline.",
    result_type=list[str]
)

result_type enforces the shape; system_prompt guides the content. Both are optional and independent.

result_type for precision

Add system_prompt alongside result_type=list[str] for domain-specific extraction: system_prompt="Extract only tasks with explicit owners or deadlines. Ignore background context." The result_type enforces the list shape; the system prompt guides what goes in the list.
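What the shape guarantee buys downstream: code can loop, index, and count without defensive parsing. A toy guard (plain Python, no model) showing the kind of check result_type=list[str] performs for you:

```python
def ensure_list_of_str(value) -> list[str]:
    # Mimics the shape check: reject anything that isn't a list of strings.
    if not isinstance(value, list) or not all(isinstance(x, str) for x in value):
        raise TypeError("expected list[str]")
    return value

tasks = ensure_list_of_str(["Send draft", "Book room"])
print(len(tasks))  # 2
```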