Week 2's agents batch-process existing text. Your literature search still starts with you typing into Google Scholar and reading whatever comes back. What's missing to close that loop?
Search capability. The agent needs to pull recent papers, not just process text I give it. ai-search uses Perplexity under the hood — web search built in. And tools let the agent call a function I define, so it can do computation the model alone can't do reliably.
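The tool idea can be sketched in ordinary Python: a tool is just a function the agent may call when the model alone would be unreliable, such as exact arithmetic over extracted numbers. This is a minimal sketch, not the lesson's code; `mean_effect_size` is a hypothetical helper, and the pydantic-ai registration is shown only as a comment so the snippet runs without an API key.

```python
# A tool is a plain Python function the agent can invoke for computation
# the model can't do reliably on its own (here: exact arithmetic).
# mean_effect_size is a hypothetical example, not from the lesson.

def mean_effect_size(values: list[float]) -> float:
    """Return the arithmetic mean of a list of effect sizes."""
    if not values:
        raise ValueError("values must be non-empty")
    return sum(values) / len(values)

# With pydantic-ai the wiring would look roughly like:
#   agent = Agent("openai:gpt-4o")
#   agent.tool_plain(mean_effect_size)
# The agent then decides when to call the function; the function itself
# stays deterministic and testable.

print(mean_effect_size([1.0, 2.0, 3.0]))
```

The point of the pattern: the model chooses *when* to call the tool, but the tool's output is deterministic, so the computation is exactly as reliable as the Python you wrote.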
This week: search agents that query the literature, extraction agents that parse findings into structured data, tool agents that call Python functions, and the capstone that chains all three into a research question → literature summary pipeline.
The capstone uses ai-search and defines a Finding Pydantic model. How does the structured extraction know which fields to fill from a web-search result?
The search agent returns a string — citations, summaries, sources. The extraction agent receives that string as input and fills in claim, authors, year, and journal from it, exactly like the structured extraction pattern from Week 2. The model doing the search and the model doing the extraction are separate agent calls, each doing what it does best.
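A minimal sketch of what that Finding model might look like, assuming the field names given in the text (claim, authors, year, journal); the sample values are placeholders, not real citations, and the actual capstone model may differ. The model's schema is what tells the extraction agent which fields to fill; structurally the result is ordinary pydantic validation, as in Week 2.

```python
from pydantic import BaseModel

# Sketch of the Finding schema the extraction agent fills in.
# Field names follow the lesson text; this is an assumption, not
# the capstone's exact model.
class Finding(BaseModel):
    claim: str
    authors: str
    year: int
    journal: str

# The extraction agent receives the search agent's plain-text answer
# and returns a validated instance of this model:
sample = Finding(
    claim="Example claim extracted from the search result",
    authors="Doe et al.",
    year=2021,
    journal="Placeholder Journal",
)
print(sample.year)
```

Because search and extraction are separate agent calls, the extraction step never sees the web; it only sees the string the search agent produced, and the schema constrains what it must pull out of that string.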
- search_the_web: Perplexity/sonar search agent, returns string with sources
- research_and_extract: search → extract Fact(fact, source) with Pydantic
- agent_with_tool: one @agent.tool_plain function the agent can call
- agent_two_tools: two tools, agent picks the right one
- research_extract_format: capstone — search → extract Finding → markdown mini-review

Goal: research_extract_format(topic) takes a research question and returns a markdown literature summary with structured findings grouped by year.
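The final formatting step of the capstone is plain Python: group the structured findings by year and render markdown. A minimal sketch of just that step, with Finding reduced to a dataclass so it runs standalone; the search and extraction agent calls are out of scope here, and `format_findings` is a hypothetical name for the formatting half of research_extract_format.

```python
from collections import defaultdict
from dataclasses import dataclass

# Standalone stand-in for the capstone's Pydantic Finding model.
@dataclass
class Finding:
    claim: str
    authors: str
    year: int
    journal: str

def format_findings(topic: str, findings: list[Finding]) -> str:
    """Render findings as a markdown mini-review, grouped by year (newest first)."""
    by_year: dict[int, list[Finding]] = defaultdict(list)
    for f in findings:
        by_year[f.year].append(f)
    lines = [f"# Literature summary: {topic}", ""]
    for year in sorted(by_year, reverse=True):
        lines.append(f"## {year}")
        for f in by_year[year]:
            lines.append(f"- {f.claim} ({f.authors}, *{f.journal}*)")
        lines.append("")
    return "\n".join(lines)

demo = format_findings(
    "example topic",
    [
        Finding("Claim A", "Doe et al.", 2021, "Journal One"),
        Finding("Claim B", "Roe et al.", 2020, "Journal Two"),
    ],
)
print(demo)
```

Keeping this step as a pure function means the only nondeterministic parts of the pipeline are the two agent calls; the summary layout itself is testable without any model in the loop.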