You have search_the_web, research_and_extract, and agent_with_tool. What does your pre-call research workflow look like when they chain together?
Search for background on the company. Extract the key fact and source. Format both into a one-paragraph brief I can read in 30 seconds before the call.
That's the capstone. Three stages, one function:
```python
bg = search_the_web(topic)
fact_data = research_and_extract(topic)
report = "Research brief: " + fact_data["fact"] + " Source: " + fact_data["source"]
return report
```

Why call search_the_web if research_and_extract already does that internally?
Good catch — search_the_web is redundant there. research_and_extract handles the search internally, so the capstone calls it directly for the structured data and formats the result with string concatenation. Here's the full function, with research_and_extract's search-and-extract steps written out:
```python
from pydantic import BaseModel

class Fact(BaseModel):
    fact: str
    source: str

def research_extract_format(topic: str) -> str:
    bg = search_the_web(topic)
    result = Agent(model, result_type=Fact).run_sync(bg)
    fact_data = result.output.model_dump()
    report = "Research brief: " + fact_data["fact"] + " Source: " + fact_data["source"]
    print("Report: " + report[:80])
    return report
```

Search the web, extract the key fact, format the brief. Four weeks. From 'what is Agent(model)' to 'I have a research assistant in Python.' That's real leverage.
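Stubbed end to end, the same three stages can be exercised without a model. Everything below except the pipeline shape is an assumption: search_the_web and the Agent call are faked so the structure runs on its own.

```python
from pydantic import BaseModel

class Fact(BaseModel):
    fact: str
    source: str

def search_the_web(topic: str) -> str:
    # Stand-in for the real grounded-search call.
    return topic + " launched a new product line last quarter."

def extract_fact(prose: str) -> Fact:
    # Stand-in for Agent(model, result_type=Fact).run_sync(prose).output
    return Fact(fact=prose, source="stubbed search result")

def research_extract_format(topic: str) -> str:
    bg = search_the_web(topic)                  # 1. web search
    fact_data = extract_fact(bg).model_dump()   # 2. structured extraction
    return ("Research brief: " + fact_data["fact"]
            + " Source: " + fact_data["source"])  # 3. formatted string

print(research_extract_format("Acme"))
```

Swapping the two stubs for the real search call and the real Agent gives back the capstone function unchanged — the three-stage shape is the part worth keeping.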
Fifteen minutes of pre-call research per lead. If you call 10 leads a week, that's 150 minutes back. Every week.
The agent is doing the research, the extraction, and the formatting. I designed the pipeline — I just don't do the work anymore.
The capstone is a starting point. Extend it: add a Lead model that extracts company, deal size, and fit tier. Sort the leads. Connect the output to your CRM via the Composio track. The scaffold is solid — what you build on it is up to you.
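One way the Lead extension could look — the field names, the tier labels, and prioritize are assumptions sketched from the description above, with the extraction step omitted so the sorting is runnable on its own:

```python
from pydantic import BaseModel

class Lead(BaseModel):
    company: str
    deal_size: int      # estimated deal size in dollars
    fit: str            # fit tier; labels here are an assumption

# Sort order for the assumed tier labels.
FIT_RANK = {"high": 0, "medium": 1, "low": 2}

def prioritize(leads: list[Lead]) -> list[Lead]:
    # Best-fit tier first; within a tier, biggest deal first.
    return sorted(leads, key=lambda l: (FIT_RANK[l.fit], -l.deal_size))

leads = [
    Lead(company="Acme", deal_size=50_000, fit="medium"),
    Lead(company="Globex", deal_size=120_000, fit="high"),
    Lead(company="Initech", deal_size=80_000, fit="high"),
]
ranked = prioritize(leads)
print([l.company for l in ranked])  # ['Globex', 'Initech', 'Acme']
```

In the full extension, each Lead would come out of the same Agent call as Fact does — just with result_type=Lead — and prioritize would run over the whole list before your call block.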
```python
bg = search_the_web(topic)                             # 1. web search
result = Agent(model, result_type=Fact).run_sync(bg)   # 2. extract structured fact
fact_data = result.output.model_dump()
report = "Research brief: " + fact_data["fact"] + " Source: " + fact_data["source"]
return report                                          # 3. formatted string
```

- search_the_web → grounded prose (ai-search model)
- Agent(result_type=Fact) → typed record with fact + source
- Swap Fact for a full Lead model (company, deal_size, fit)
- batch_classify across a list of topics
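The batch idea from the last bullet can be sketched as a plain loop over topics. The names below (batch_research, the one-line stub) are hypothetical, not the course's batch_classify:

```python
def research_extract_format(topic: str) -> str:
    # One-line stub for the real search -> extract -> format pipeline,
    # so the batch loop is runnable on its own.
    return "Research brief: " + topic + " is expanding. Source: (stub)"

def batch_research(topics: list[str]) -> dict[str, str]:
    # One 30-second brief per lead, keyed by topic.
    return {topic: research_extract_format(topic) for topic in topics}

briefs = batch_research(["Acme", "Globex", "Initech"])
for topic, brief in briefs.items():
    print(topic + ": " + brief)
```

Run it once before a call block and you have every brief in one place instead of fifteen minutes of research per lead.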