Two hundred abstracts, one weekend. In your current workflow, how do you decide which ones make the cut for your systematic review?
I read each one. I know the inclusion criteria cold — the decision is fast. The reading is the bottleneck. Copy-paste into ChatGPT helps, but that's still five clicks per abstract and no audit trail.
Call the model from Python and you get a string back — one you can store, print, or pass to the next step. One function call per abstract:
result = Agent(model).run_sync("Does this abstract study early childhood cognition? Answer yes or no.")
print(result.output)

Where does model come from? I didn't import anything.
The sandbox preamble injects Agent and model before your code runs — the same way a lab has pipettes already on the bench. You don't configure the model. You call it. The entire function is three lines:
def run_agent(prompt: str) -> str:
    result = Agent(model).run_sync(prompt)
    return result.output

That's it? Abstract in, triage decision out — from a live model — in three lines?
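Scaled up, bulk triage is one call per abstract in a loop. A minimal sketch — the StubAgent below stands in for the sandbox-injected Agent(model) (which isn't available outside the sandbox), and the abstracts are placeholders:

```python
# Sketch of bulk triage. In the sandbox, Agent and model are injected;
# StubAgent here only mimics the .run_sync(prompt).output shape.
class StubResult:
    def __init__(self, output: str):
        self.output = output

class StubAgent:
    def run_sync(self, prompt: str) -> StubResult:
        # A real agent would call the live model; the stub returns a canned decision.
        return StubResult("yes")

def run_agent(agent, prompt: str) -> str:
    return agent.run_sync(prompt).output

abstracts = ["Abstract A ...", "Abstract B ..."]  # placeholders
agent = StubAgent()
question = "Does this abstract study early childhood cognition? Answer yes or no."
decisions = [run_agent(agent, f"{question}\n\n{a}") for a in abstracts]
print(decisions)
```

The loop body is the same three-line function from above; only the prompt changes per abstract.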
Every call is live. No cache, no canned answer. Test cases check that you got a non-empty string, not exact words — the model's phrasing varies, the return type does not. Shape matters; wording is the model's job.
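That shape check — non-empty string, any wording — is small enough to write yourself. A sketch (the function name is mine, not part of the lesson):

```python
def looks_like_valid_output(output) -> bool:
    # Shape, not wording: any non-empty string passes, regardless of phrasing.
    return isinstance(output, str) and output.strip() != ""

print(looks_like_valid_output("Yes, this abstract qualifies."))  # True
print(looks_like_valid_output(""))                               # False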
I've been spending a weekend reading abstracts when the answer was literally Agent(model).run_sync(prompt).output the whole time.
The agent handles bulk triage. Your job becomes reviewing the borderline cases — the work only you can do. That's the division of labour.
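One way to make that division concrete — a sketch, and the normalisation rule is my assumption, not part of the lesson: accept a clean yes/no, and route anything else to a human.

```python
def route(decision: str) -> str:
    # Normalise the model's answer; anything that isn't a clean yes/no
    # goes to human review — the borderline cases only you can judge.
    d = decision.strip().lower().rstrip(".")
    if d in ("yes", "no"):
        return d
    return "review"

print(route("Yes."))                            # yes
print(route("no"))                              # no
print(route("Possibly, if we count infancy."))  # review
```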
Why Agent(model).run_sync(prompt).output Works

Agent(model) wraps your configured model. Agent and model are injected by the sandbox preamble — no imports needed.
.run_sync(prompt) sends the prompt and blocks until the response arrives. Returns an AgentRunResult object.
.output is the string field on that result.
Use .output, not .data (an older API name). Every lesson uses .output.
Agent and model are injected by the sandbox preamble before your code runs. You never import them or configure the model. Every lesson in this track uses the same Agent(model) construction — the model changes per lesson type; your code does not.
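The audit trail the copy-paste workflow lacked falls out of the same structure: record each decision as you go. A standard-library sketch — the file name and columns are mine:

```python
import csv

def log_decisions(rows, path="triage_log.csv"):
    # rows: (abstract_id, decision) pairs. Writing them to CSV gives you
    # a reviewable record of every triage call — the missing audit trail.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["abstract_id", "decision"])
        writer.writerows(rows)

log_decisions([("A001", "yes"), ("A002", "review")])
```

Each run leaves a file you can hand to a co-reviewer, which the five-clicks-per-abstract workflow never could.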