Three weeks in, you can call agents, constrain their output, chain them, and batch them. One capability remains: what if the agent needs to look something up before it answers? Or call a function you wrote?
Web search and tools. I assumed those required a separate library or a complex setup.
The ai-search lesson type switches the model to Perplexity's Sonar — web search is built in. You call .run_sync() exactly as before; the model decides when to search. Tools are just Python functions decorated with @agent.tool_plain — the agent calls them when it judges they're relevant.
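The registration-and-dispatch pattern behind tools can be sketched in plain Python without any API calls. This is a minimal stand-in, not the real library: the class and method names (`Agent`, `tool_plain`, `run_sync`) mirror pydantic-ai's API for familiarity, but here the model's "decision" is faked by matching a tool name in the prompt.

```python
from datetime import datetime, timezone

class Agent:
    """Toy stand-in for an agent that owns a registry of tools."""

    def __init__(self):
        self.tools = {}

    def tool_plain(self, fn):
        # Register the function under its own name. A real agent shows the
        # model each tool's name and docstring; the model decides when to call.
        self.tools[fn.__name__] = fn
        return fn

    def run_sync(self, prompt: str) -> str:
        # Fake the model's choice: call a tool if its name appears in the
        # prompt, otherwise answer without one.
        for name, fn in self.tools.items():
            if name in prompt:
                return fn()
        return "no tool needed"

agent = Agent()

@agent.tool_plain
def current_time() -> str:
    """Return the current UTC time as an ISO 8601 string."""
    return datetime.now(timezone.utc).isoformat()

print(agent.run_sync("use current_time please"))  # ISO timestamp
print(agent.run_sync("just say hi"))              # "no tool needed"
```

The point of the sketch: you never call the tool yourself. You register it, hand control to `run_sync`, and dispatch happens inside the agent.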
The agent decides when to call the tool? I don't control that?
You define the tool; the agent decides whether to use it. That's the Week 4 mindset shift: from writing functions the model fills to writing functions the model calls. The capstone is an AI proposal writer that searches the web for context, extracts typed scope items, and formats them into a proposal section. One function that cuts a four-hour drafting session down to twenty minutes.
- search_the_web(query): lesson_type: "ai-search" with the Perplexity Sonar model
- research_and_extract(topic): search, then extract a structured Fact with Pydantic
- agent_with_tool(prompt): one @agent.tool_plain function the agent can call
- agent_two_tools(prompt): two tools; the agent picks which to use
- research_extract_format(topic): capstone; search, extract scope items, format a proposal

Goal: by end of week you can build an agent that searches the web, calls custom tools, and extracts typed scope items from a brief.
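The capstone's three-stage shape, search then extract then format, can be sketched end to end with the stdlib. Everything here is illustrative: `search_stub` stands in for the Sonar web search, and the hand-rolled parser stands in for the typed-extraction step, where the real lesson would have the agent return validated Pydantic objects.

```python
from dataclasses import dataclass

@dataclass
class ScopeItem:
    """Typed scope item the extraction step produces."""
    title: str
    hours: int

def search_stub(topic: str) -> str:
    # Stand-in for the web-search stage; returns canned research notes.
    return f"Notes on {topic}: design (10h); build (25h)."

def extract(notes: str) -> list[ScopeItem]:
    # Stand-in for typed extraction: parse "title (Nh)" chunks into
    # ScopeItem objects. A real agent would do this with a Pydantic schema.
    items = []
    for chunk in notes.split(": ", 1)[1].split(";"):
        chunk = chunk.strip().rstrip(".")
        title, hours = chunk.rsplit("(", 1)
        items.append(ScopeItem(title.strip(), int(hours.rstrip("h)"))))
    return items

def format_proposal(items: list[ScopeItem]) -> str:
    # Formatting stage: turn typed items into a proposal section.
    lines = ["Scope of Work:"]
    total = 0
    for item in items:
        lines.append(f"- {item.title}: {item.hours}h")
        total += item.hours
    lines.append(f"Total estimate: {total}h")
    return "\n".join(lines)

proposal = format_proposal(extract(search_stub("website redesign")))
print(proposal)  # Scope of Work: ... Total estimate: 35h
```

Swapping the two stubs for a Sonar-backed search call and an agent that extracts `ScopeItem` objects gives the week's `research_extract_format(topic)` shape.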