Your thesis needs three papers published after 2024. How do you find them today?
Google Scholar, arXiv, maybe Semantic Scholar — three tabs, two hours, and I'm not sure I found everything relevant.
Agent(model).run_sync("Find three recent papers on survey methodology published after 2024") with the perplexity/sonar model does the search inside the call. The ai-search lesson type wires the search model automatically — your code is identical to Day 3's run_agent:
def search_the_web(query: str) -> str:
result = Agent(model).run_sync(query)
return result.output

It searches the actual web, not just its training data? There's nothing special to configure?
Perplexity's sonar model has live search built in. The sandbox wires it automatically when the lesson type is ai-search. You write the research question; the model searches and synthesises. The response includes citations inline. Your code is three lines — the same three lines from Day 3:
def search_the_web(query: str) -> str:
result = Agent(model).run_sync(query)
output = result.output
print(f"Search result: {output[:100]}...")
return output

I can search for recent methodology papers, extract findings, and format a reading list — all in Python functions I've already built. The capstone is just connecting them.
Day 28's capstone does exactly that. search_the_web is the first step — it produces the raw search result text that feeds into the extraction and formatting steps. Write a good research question and the rest of the pipeline has good input to work with.
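The shape of that pipeline can be sketched offline. In this sketch, search_the_web is stubbed out so the example runs without a model call (in the lesson it calls Agent(model).run_sync), and extract_findings and format_reading_list are hypothetical helper names, not functions the course defines:

```python
def search_the_web(query: str) -> str:
    # Stub: stands in for the live Agent(model).run_sync(query) call.
    return (
        "1. Smith (2025), 'Adaptive Survey Weighting'\n"
        "2. Lee (2025), 'Nonresponse Bias Revisited'\n"
        "3. Okafor (2024), 'Mixed-Mode Panels'"
    )

def extract_findings(raw: str) -> list[str]:
    # Keep each non-empty result line from the raw search text.
    return [line.strip() for line in raw.splitlines() if line.strip()]

def format_reading_list(items: list[str]) -> str:
    return "Reading list:\n" + "\n".join(f"- {item}" for item in items)

raw = search_the_web("survey methodology papers published after 2024")
reading_list = format_reading_list(extract_findings(raw))
print(reading_list)
```

The point is the data flow, not the helpers: each step takes plain text in and hands plain text (or a list) on, which is why a good research question at the top matters so much.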
My university library charges for database access. This model has read every paper published this month and I'm just calling a Python function.
The model synthesises and cites — but verify key claims against the original sources before citing them in your thesis. Real-time search models can still hallucinate citations. Use this for discovery and triage; use the original papers for citation.
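One cheap triage step is to pull the cited URLs out of the response so you can open and check each source by hand. This is a minimal sketch, assuming citations appear as plain URLs in the answer text; the regex is deliberately simplistic and real citation formats vary:

```python
import re

def extract_cited_urls(answer: str) -> list[str]:
    # Naive URL pattern for triage only; stops at whitespace and closing brackets.
    return re.findall(r"https?://[^\s\)\]]+", answer)

# Hypothetical answer text in the inline-citation style described above.
answer = (
    "Recent work includes Smith (2025) [1] and Lee (2025) [2].\n"
    "[1] https://arxiv.org/abs/2501.00001\n"
    "[2] https://example.org/papers/lee-2025"
)
urls = extract_cited_urls(answer)
for url in urls:
    print(url)  # open each one and confirm the paper actually exists
```

A listed URL that resolves is still only the start of verification: the paper must also say what the summary claims it says.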
ai-search lesson type

The ai-search lesson type configures the sandbox with a Brave Search tool the agent can call automatically.
def search_the_web(query: str) -> str:
result = Agent(model).run_sync(query)
return result.output

The function body is identical to run_agent. The difference is lesson_type: "ai-search" in the lesson metadata, which injects the search tool. The agent decides when to call it.
If the query needs current information, the agent calls the search tool and incorporates the results. If the query is factual and already in the model's training data, the agent answers directly. You don't control which path is taken.
result.output is a plain string — the agent's final answer, which may or may not include search citations depending on the model.