The automation track gave you email sends, Calendar events, and Salesforce writes on a schedule. One gap: none of those actions read a transcript and decide what it says. That's where AI slots in — not as magic, but as a function you call with a prompt.
So it's literally a function call? I expected something more complicated — a model config, an API key, a dozen parameters.
One line: Agent(model).run_sync(prompt).output. Pass text in, get text back. The sandbox configures the model automatically — you only write the logic. This week, each lesson adds one new thing you can do with that call: count its output, constrain it with a system prompt, lock it to a single word, chain two calls in sequence.
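To show the shape of that one-line call without a live model, here is a minimal sketch. The `Agent` class below is a stand-in stub that echoes a canned reply — the real sandbox class wraps an actual model — but the calling pattern, `Agent(model).run_sync(prompt).output`, matches the lesson, and the model name `"gpt-4o"` is just an illustrative placeholder.

```python
class _Result:
    """Holds the model's reply; .output is the plain string you work with."""
    def __init__(self, output: str):
        self.output = output


class Agent:
    """Stub standing in for the sandbox's Agent: echoes instead of calling a model."""
    def __init__(self, model: str):
        self.model = model

    def run_sync(self, prompt: str) -> _Result:
        # The real class sends `prompt` to the model; this stub fakes the reply.
        return _Result(f"Canned reply to: {prompt}")


# The whole week's pattern in one line: text in, text out.
text = Agent("gpt-4o").run_sync("Summarise this transcript").output
print(text)
```

Because `.output` is an ordinary string, everything you already know about strings applies: `len(text)`, `text.split()`, slicing, and so on.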
But how do I know the output is right? If I'm automating a workflow, I can't have the AI just say whatever it wants.
That question is exactly where Week 2 starts. This week you build the muscle: call the model, read the output, measure it, shape it. Reliability comes from structure — and you can't add structure until you've run the call a few times and seen what comes back.
- Agent(model).run_sync(prompt) and .output.split()
- a system_prompt that turns any agent into a summariser

Goal: by Friday you can call an AI model, process its output as a string, and wire two calls together in one function.
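The Friday goal — process output as a string, then wire two calls together — can be sketched offline. The `Agent` below is again a stub faking the sandbox's class, and `system_prompt` as a constructor keyword is an assumption about the sandbox's API; the point is the chaining shape: call one agent, feed its `.output` in as the next prompt, then measure the result with `.split()`.

```python
class _Result:
    def __init__(self, output: str):
        self.output = output


class Agent:
    """Stub: no live model. system_prompt is an assumed constructor keyword."""
    def __init__(self, model: str, system_prompt: str = ""):
        self.system_prompt = system_prompt

    def run_sync(self, prompt: str) -> _Result:
        # Fake a summariser: a system_prompt mentioning "summar" trims the
        # prompt to its first five words; otherwise return canned draft text.
        if "summar" in self.system_prompt.lower():
            return _Result(" ".join(prompt.split()[:5]))
        return _Result(f"Draft notes about: {prompt}")


def summarise_notes(topic: str) -> str:
    # Call 1: draft some notes.
    draft = Agent("gpt-4o").run_sync(topic).output
    # Call 2: chain — the first call's output becomes the second's prompt.
    summariser = Agent("gpt-4o", system_prompt="You are a summariser.")
    return summariser.run_sync(draft).output


summary = summarise_notes("quarterly sales transcript")
print(summary, "->", len(summary.split()), "words")
```

Swapping the stub for the sandbox's real `Agent` leaves `summarise_notes` unchanged — the chaining logic only ever touches plain strings.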