After a two-hour exec sync, what's your workflow before you can write a single follow-up?
Copy the transcript into ChatGPT, type a prompt, read the output, copy it back. Three windows, five clicks, no audit trail.
That works until you do it twenty times a week, or need the summary piped into a spreadsheet automatically. Call the model from Python and you get a string back — one you can format, store, or pass to the next step in code:
result = Agent(model).run_sync("Summarise this transcript in two sentences.")
print(result.output)

Hold on — Agent(model)? Where does model come from? I didn't import anything.
The sandbox preamble injects Agent and model before your code runs — like office infrastructure that's already on when you arrive. You never configure the model yourself; you just call it. The entire function is four lines:
def run_agent(prompt: str) -> str:
    result = Agent(model).run_sync(prompt)
    return result.output

That's it? Transcript in, summary out — from a live model — in four lines?
Every call is live. No cache, no canned answer. Two calls with the same prompt can return different phrasings because models sample, they don't look up. Test cases check that you got a non-empty string, not exact words. Shape matters; wording is the model's job.
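A shape check can be sketched without a live model. Here fake_output is a hypothetical stand-in for what a real call returns, since actual phrasing varies from run to run:

```python
def check_summary(summary: str) -> None:
    # Shape checks only: a non-empty string, never exact wording
    assert isinstance(summary, str)
    assert summary.strip() != ""

# Hypothetical stand-in for a live response; real phrasing differs each call
fake_output = "The team agreed on Q3 priorities. Ana will circulate the budget draft."
check_summary(fake_output)
print("shape ok")
```

The same check passes no matter how the model words the summary, which is exactly why tests assert shape rather than content.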
I've been spending three minutes finding the right ChatGPT tab when the answer was literally Agent(model).run_sync(prompt).output the whole time.
The tab is still useful for exploration. Python gives you a repeatable function you can call from a script, a dashboard, a scheduled job — the difference between a tool you use and a tool you build with.
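For instance, a scheduled job could map run_agent over a batch of transcripts and write the results to CSV. This is a sketch with a placeholder run_agent, since the sandbox-injected Agent and model aren't available outside the lesson environment:

```python
import csv
import io

def run_agent(prompt: str) -> str:
    # Placeholder for Agent(model).run_sync(prompt).output in the sandbox
    return f"[summary of {len(prompt)}-char prompt]"

transcripts = [
    "Exec sync: Q3 budget, two blockers, one decision on hiring.",
    "Vendor call: renewal terms, pricing pushback, next steps.",
]

# Summarise each transcript and collect rows for a spreadsheet
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["transcript", "summary"])
for t in transcripts:
    summary = run_agent(f"Summarise this transcript in two sentences.\n\n{t}")
    writer.writerow([t, summary])

print(buf.getvalue())
```

Swap the placeholder for the real four-line run_agent and the rest of the pipeline stays identical — that's the repeatability the copy-paste workflow can't give you.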
How Agent(model).run_sync(prompt).output Works

Three moving parts, one line.
Agent(model) wraps your configured model in a function-like object. The sandbox preamble injects both Agent and model — no imports needed.
.run_sync(prompt) sends the prompt and blocks until the response arrives. No async/await. Returns an AgentRunResult object (not a string).
.output is the string field on that result.
Use .output. Not .data (older API), not .result (does not exist). Both raise AttributeError. Every lesson in this track uses .output.
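The difference is easy to see with a minimal stand-in that mirrors the one field this track relies on (FakeResult is hypothetical, for illustration only):

```python
from dataclasses import dataclass

@dataclass
class FakeResult:
    # The one field these lessons use on the run result
    output: str

r = FakeResult(output="Two-sentence summary.")
print(r.output)  # the string you want

for wrong in ("data", "result"):
    try:
        getattr(r, wrong)  # neither attribute exists here
    except AttributeError:
        print(f".{wrong} raises AttributeError")
```

If your code crashes with AttributeError right after a model call, check the attribute name before anything else — it's almost always this.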