You have a customer interview transcript sitting in your inbox. How long does it take you to pull the key insights out of it right now?
Honestly? 20-30 minutes if I'm thorough. I'm usually not thorough because I don't have 20 minutes. The chat window version is faster but there's no audit trail — I copy the output and lose the prompt.
`Agent(model).run_sync(prompt).output` — that's the whole extraction. One call. The model reads the text and returns what you'd have spent 20 minutes writing yourself. The prompt lives in your code, the output is a Python string you can store, format, or pipe to the next step:
```python
def run_agent(prompt: str) -> str:
    result = Agent(model).run_sync(prompt)
    return result.output
```

Where does `model` come from? I didn't import anything — the code just references it.
The sandbox injects `Agent` and `model` before your code runs — like electricity in an office building. You never configure the model yourself; you just call it. The function body is four lines, including the print statement that shows you what came back:
```python
def run_agent(prompt: str) -> str:
    result = Agent(model).run_sync(prompt)
    output = result.output
    print(f"Agent output: {output}")
    return output
```

It wrote a full summary in one call? I thought building this would take a team of engineers and months of setup.
The mystique is mostly marketing. AI is a function you call with a prompt. The hard part is knowing what to ask — which is a founder skill you already have.
I've been spending three minutes finding the right ChatGPT tab when the answer was `Agent(model).run_sync(prompt).output` the whole time.
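The "electricity in the building" injection can be sketched with a stub: your function's source runs in a namespace where `Agent` and `model` already exist, so no import is needed. Everything below (`StubAgent`, the stub output string) is illustrative — the real `Agent` class and `model` object come from the platform, not this sketch.

```python
from types import SimpleNamespace

# Stand-in for the platform's Agent class (illustration only).
class StubAgent:
    def __init__(self, model):
        self.model = model

    def run_sync(self, prompt: str):
        # A real call would hit the model; here we just return an
        # object with the .output field the lesson's code reads.
        return SimpleNamespace(output=f"stub summary of: {prompt}")

user_code = """
def run_agent(prompt: str) -> str:
    result = Agent(model).run_sync(prompt)
    return result.output
"""

# The sandbox executes your code with Agent and model already in scope.
namespace = {"Agent": StubAgent, "model": "stub-model"}
exec(user_code, namespace)

print(namespace["run_agent"]("Summarize this interview"))
# prints: stub summary of: Summarize this interview
```

The point of the sketch: `run_agent` never imports anything, yet `Agent` and `model` resolve — because the namespace it executes in was pre-populated.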
Test cases check that you returned a non-empty string — not exact words. Two calls with the same prompt can return different phrasings because models sample. Shape matters; wording is the model's job.
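The check the tests run can be sketched like this — my guess at the validation logic (`looks_valid` is a hypothetical name, not the actual harness): any non-empty string passes, regardless of wording.

```python
# Hypothetical version of the lesson's pass/fail check:
# shape (non-empty string), not exact words.
def looks_valid(output) -> bool:
    return isinstance(output, str) and len(output.strip()) > 0

assert looks_valid("Top insight: users churn during onboarding.")  # any phrasing passes
assert not looks_valid("")       # empty string fails
assert not looks_valid("   ")    # whitespace-only fails
assert not looks_valid(None)     # wrong type fails
```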
`Agent(model).run_sync(prompt).output` — three parts, one line:

- `Agent(model)` — wraps your injected model in a callable object. The sandbox provides `model`; you never configure it.
- `.run_sync(prompt)` — sends the prompt and blocks until the response arrives. No async/await. Returns an `AgentRunResult` object (not a string).
- `.output` — the string field on that result.

Use `.output`. Not `.data` (older API), not `.result` (does not exist). Both raise `AttributeError`. Every lesson in this track uses `.output`.
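To see why only `.output` works, here is a minimal stand-in for the result object (`FakeRunResult` is a sketch; the real `AgentRunResult` comes from the library and carries more fields):

```python
# Minimal stand-in mimicking the one attribute the lesson relies on.
class FakeRunResult:
    def __init__(self, output: str):
        self.output = output

result = FakeRunResult("Summary: the customer wants faster onboarding.")
print(result.output)   # the string you return from run_agent

# The two wrong names both blow up the same way:
for wrong_name in ("data", "result"):
    try:
        getattr(result, wrong_name)
    except AttributeError:
        print(f".{wrong_name} raises AttributeError")
```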
`Agent`, `model`, `os`, `json`, and common stdlib modules are pre-loaded. You only write the function body.