You've read 40 abstracts for your literature review. How long did that take?
About three hours — reading, note-taking, deciding which ones were relevant to my methodology. And I'm still not confident I caught every key paper.
One agent call handles the first pass. Agent(model).run_sync("Is this abstract about survey methodology? Answer yes or no: " + abstract).output gives you the verdict as a string. Loop that over 40 abstracts and you have a pre-filtered list in two minutes:
def run_agent(prompt: str) -> str:
    result = Agent(model).run_sync(prompt)
    return result.output

Where does model come from? I didn't import anything or set an API key.
The sandbox preamble injects both Agent and model before your code runs — like a university library account that's already active when you log in. You just use them. The entire first agent is four lines: define the function, create the agent, call it, return .output:
def run_agent(prompt: str) -> str:
    result = Agent(model).run_sync(prompt)
    output = result.output
    print(f"Agent output: {output}")  # optional: log the response
    return output

That's it? Abstract in, relevance verdict out — from a live model — in four lines?
Every call is live. No cache, no canned answer. Two calls with the same prompt can return different phrasings because models sample, they don't look up. Test cases check that you got a non-empty string, not exact words. Shape matters; wording is the model's job.
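A test in that spirit checks the shape of the reply, not its wording. Here is a minimal sketch, with run_agent stubbed out so it runs outside the sandbox; the canned return value is an assumption standing in for whatever phrasing a live model would produce:

```python
def run_agent(prompt: str) -> str:
    # Stand-in for Agent(model).run_sync(prompt).output; a live model
    # would return a different phrasing on every call.
    return "Yes, this abstract describes survey methodology."

reply = run_agent(
    "Is this abstract about survey methodology? Answer yes or no: ..."
)

# Shape checks: a non-empty string, not exact words.
assert isinstance(reply, str)
assert reply.strip() != ""
```

The assertions pass no matter how the model phrases its answer, which is the point: test the contract, not the prose.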
I've been spending three hours on abstract triage when the answer was literally Agent(model).run_sync(prompt).output the whole time.
The agent handles the triage pass. You still read the shortlisted abstracts and write the methodology yourself. The agent is a research assistant — it filters and summarises, but your thesis is your intellectual work.
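The triage pass itself is a short loop. A sketch, with run_agent stubbed on a keyword so the example runs anywhere; in the sandbox, run_agent would be the live function defined earlier, and the sample abstracts are invented for illustration:

```python
def run_agent(prompt: str) -> str:
    # Stand-in for Agent(model).run_sync(prompt).output. This stub keys on
    # a single word; a live model would read the whole abstract.
    return "yes" if "questionnaire" in prompt.lower() else "no"

def triage(abstracts: list[str]) -> list[str]:
    """Return only the abstracts the agent flags as relevant."""
    shortlist = []
    for abstract in abstracts:
        verdict = run_agent(
            "Is this abstract about survey methodology? "
            "Answer yes or no: " + abstract
        )
        # Accept "yes", "Yes.", "Yes, because..." — shape, not exact wording.
        if verdict.strip().lower().startswith("yes"):
            shortlist.append(abstract)
    return shortlist

papers = [
    "We piloted a questionnaire measuring response bias in phone surveys.",
    "We train a transformer on protein folding trajectories.",
]
print(triage(papers))  # only the first abstract survives the filter
```

The startswith check is deliberately loose: models rarely answer with a bare "yes", so matching the prefix keeps the filter robust to phrasing.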
How Agent(model).run_sync(prompt).output Works

Three moving parts, one line.
Agent(model) wraps your configured model in an agent object that knows how to call it. The sandbox injects both Agent and model — no imports needed.
.run_sync(prompt) sends the prompt to the model synchronously and blocks until the response arrives. It returns a result object, not the string directly.
.output extracts the plain string response from the result object.
Chaining keeps single-use calls concise. When you need to inspect intermediate results — word count, token usage, tool calls — break the chain into named variables.
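The two styles side by side, as a sketch; a stub stands in for the sandbox-injected Agent and model so the example runs anywhere, and its return string is invented for illustration:

```python
class _StubResult:
    """Mimics the result object: holds the model's text in .output."""
    def __init__(self, output: str):
        self.output = output

class _StubAgent:
    """Stand-in for the sandbox-injected Agent."""
    def __init__(self, model):
        self.model = model

    def run_sync(self, prompt: str) -> _StubResult:
        return _StubResult("Relevant: the abstract describes a survey design.")

Agent, model = _StubAgent, "stub-model"

# Chained form — fine for a single-use call:
verdict = Agent(model).run_sync("Summarise this abstract: ...").output

# Broken-apart form — named variables let you inspect intermediates:
result = Agent(model).run_sync("Summarise this abstract: ...")
output = result.output
word_count = len(output.split())
print(f"{word_count} words: {output}")
```

Both forms hit the model once; the only difference is whether the result object gets a name you can poke at before extracting .output.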