The automation track gave you Gmail sends, Calendar events, and Sheets reads on demand. One gap: none of those actions read a customer email and decide what it says. That's where AI slots in — not as magic, but as a function you call with a prompt.
So it's literally a function call? I expected something more complicated — a model config, an API key, training data.
One line: Agent(model).run_sync(prompt).output. Pass text in, get text back. The sandbox configures the model automatically — you only write the logic. This week every lesson is one new thing you can do with that call: count its output, constrain it with a system prompt, lock it to a single word, chain two calls in sequence.
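The shape of that call can be sketched without the real model. This is a minimal stand-in, not the sandbox's actual implementation: `StubAgent` and its canned reply are invented for illustration, and in the sandbox `Agent` and `model` are pre-loaded and hit a real model.

```python
# Stub mimicking the Agent(model).run_sync(prompt).output shape.
# StubAgent and its canned reply are invented for illustration only;
# the sandbox's pre-loaded Agent sends the prompt to a real model.

class _Result:
    def __init__(self, output: str):
        self.output = output


class StubAgent:
    def __init__(self, model):
        self.model = model

    def run_sync(self, prompt: str) -> _Result:
        # A real agent would send `prompt` to the model here.
        return _Result(f"Summary of: {prompt}")


model = "stub-model"  # in the sandbox, this is configured for you

reply = StubAgent(model).run_sync("Customer asks about a refund").output
print(type(reply).__name__)  # the call returns a plain string
print(len(reply.split()))    # first lesson: count the output
```

Text in, text out: everything downstream (counting, constraining, chaining) operates on that returned string.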
But how do I know the output is reliable? If I'm automating a workflow, I can't have the AI just say whatever it wants.
That question is exactly where Week 2 starts. This week you build the muscle: call the model, read the output, measure it, shape it. Reliability comes from structure — and you can't add structure until you've run the call a few times and seen what comes back.
Agent(model).run_sync(prompt).output is a function call that returns a string. The AI is infrastructure you call — not a product you use.
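Because the call is just a function returning a string, chaining two calls is plain function composition: the first call's .output becomes part of the second call's prompt. A sketch under the same assumption as before, with an invented stub standing in for the real pre-loaded Agent:

```python
# Chaining: feed one call's output into the next call's prompt.
# StubAgent and its canned responses are invented for illustration;
# in the sandbox you call Agent(model).run_sync(...).output directly.

class _Result:
    def __init__(self, output: str):
        self.output = output


class StubAgent:
    def __init__(self, model):
        self.model = model

    def run_sync(self, prompt: str) -> _Result:
        # Canned logic standing in for a real model call.
        if prompt.startswith("Classify:"):
            return _Result("complaint")
        return _Result("Drafted reply for a complaint")


agent = StubAgent("stub-model")

email = "My order arrived broken."
category = agent.run_sync(f"Classify: {email}").output           # call 1
reply = agent.run_sync(f"Reply to this {category}: {email}").output  # call 2
print(category, "->", reply)
```

The second call never sees the raw email's classification logic; it only sees the first call's string. That handoff is the whole chaining pattern.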
Agent, model, BaseModel, and standard library modules are pre-loaded. You write the function body only.