In the last track you automated Gmail, Sheets, and Calendar. One gap remains: those automations can move data, but none of them can read a client email and decide what it says. That's where AI slots in — not as magic, but as a function you call with a prompt.
So it's literally one function? I assumed I'd need a model config, API keys, a dozen parameters.
One line: Agent(model).run_sync(prompt).output. Pass a client brief in, get text back. The sandbox configures the model automatically — you only write the logic. This week every lesson adds one new capability: count the words in the response, constrain it with a system prompt, lock it to a single sentiment word, chain two calls in sequence.
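A minimal sketch of the first capability: counting the words in a response. The Agent call itself needs the sandbox-configured model, so it appears as a comment, and a stand-in response string (invented here for illustration) takes its place:

```python
# In the sandbox the real call would be:
#   response = Agent(model).run_sync("Summarise this client brief: ...").output
# Stand-in string so the string handling can run anywhere:
response = "Client wants a redesign of the checkout flow by March."

# .split() with no arguments splits on any run of whitespace
word_count = len(response.split())
print(word_count)  # → 10
```

The point of the exercise is that once the output is a plain string, everything you already know about strings applies to it.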
But how do I know the output is actually correct? If I'm piping this into a proposal doc, I can't have the AI say whatever it wants.
That question is exactly where Week 2 starts. This week you build the muscle: call the model, read the output, measure it, shape it. Reliability comes from structure — and you can't add structure until you've run the call a few times and seen what comes back.
This week's building blocks:
- Agent(model).run_sync(prompt) and .output.split()
- a system_prompt that turns any agent into a summariser

Goal: by the end of the week you can call an AI model, process its output as a string, and wire two calls together in one function.
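The chaining goal above can be sketched as one function that feeds the first call's output into the second prompt. The run() helper below is a stub standing in for Agent(model).run_sync(prompt).output, and its canned replies are invented so the chaining logic runs without a model:

```python
def run(prompt: str) -> str:
    # Stub for the sandbox call Agent(model).run_sync(prompt).output.
    # Canned replies (invented) so the flow can be tested offline.
    if prompt.startswith("Summarise"):
        return "Client is unhappy with repeated delivery delays."
    return "negative"

def summarise_then_classify(brief: str) -> str:
    # Call 1: compress the brief to one sentence
    summary = run(f"Summarise in one sentence: {brief}")
    # Call 2: constrain the follow-up to a single sentiment word
    return run(f"Reply with exactly one word, positive or negative: {summary}")

print(summarise_then_classify("Long client email text..."))  # → negative
```

Swapping the stub for the real sandbox call leaves the function body unchanged, which is the whole idea: the chaining logic lives in your code, not in the model.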