You have 3 channels you want to post a status update to. Three different channels, same message body. The shape:

```python
channels = ["#status", "#daily", "#updates"]
for channel in channels:
    toolset.execute_action(Action.SLACK_SEND_MESSAGE, {
        "channel": channel,
        "text": "build green",
    })
```

For loop over the list. The tool call inside is the same shape each time; only `channel` varies.
Same as Python L13 (for loops) — just with a tool call as the body?
Exactly. A tool call is just another expression. for doesn't care what's inside.
What if one fails partway through? The first one posted, the second crashes — now I have partial state.
Right, batch + errors gets tricky. Two strategies:
```python
# Strategy A: stop on first failure (default behaviour: the exception propagates)
for channel in channels:
    toolset.execute_action(...)  # if this raises, the loop ends

# Strategy B: continue, collect failures
failures = []
for channel in channels:
    try:
        toolset.execute_action(...)
    except Exception as e:
        failures.append((channel, str(e)))
if failures:
    print(f"failed: {failures}")
```

Strategy A is fine when you want loud, immediate failure, or when later items depend on earlier ones. Strategy B is right when the items are independent and you'd rather process as many as possible and log the rest.
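A runnable sketch of Strategy B. Here `send_message` is a hypothetical stub standing in for `toolset.execute_action`, rigged so one channel fails:

```python
channels = ["#status", "#daily", "#updates"]

def send_message(channel: str, text: str) -> None:
    # Stub standing in for the real tool call; fails for one channel
    if channel == "#daily":
        raise RuntimeError("channel is archived")

failures = []
for channel in channels:
    try:
        send_message(channel, "build green")
    except Exception as e:
        failures.append((channel, str(e)))

if failures:
    print(f"failed: {failures}")
# failed: [('#daily', 'channel is archived')]
```

The loop finishes all three channels; only the failing one lands in `failures`, ready for a log line or a retry pass.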
```python
for item in items:
    do_action(item)
```

That's it. The list might come from a tool read ("all unread emails"), a hardcoded list, or a file. Each iteration performs one action.
```python
# Same arg shape, different value
for email in ["a@x", "b@y", "c@z"]:
    send_invite(email)

# Iterate a list of dicts
for user in users:
    notify(user["name"], user["email"])

# Iterate the result of a previous tool call
emails = fetch_emails(20)
for email in emails:
    classify_and_route(email)
```

Batch jobs often re-run on the same list (cron retries, manual re-runs). Combine with the dedup pattern from week 2 L14:
```python
seen = set(load_already_processed())
for item in items:
    if item["id"] in seen:
        continue  # skip already-processed
    process(item)
    seen.add(item["id"])
save_already_processed(seen)
```

Now re-running a half-completed batch finishes the rest without re-doing the done ones. Production batch jobs always have this safety net.
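The same pattern as a self-contained sketch. The "already processed" store is an in-memory set here; `load_already_processed` and `save_already_processed` are hypothetical stand-ins for whatever file or DB read/write real code would use. Item `"a"` is pretended to have succeeded on a previous run:

```python
processed_log = {"a"}  # pretend "a" succeeded on a previous run

def load_already_processed():
    return processed_log

def save_already_processed(seen):
    processed_log.update(seen)

items = [{"id": "a"}, {"id": "b"}, {"id": "c"}]
ran = []

def process(item):
    ran.append(item["id"])  # stand-in for the real action

seen = set(load_already_processed())
for item in items:
    if item["id"] in seen:
        continue  # skip already-processed
    process(item)
    seen.add(item["id"])
save_already_processed(seen)

print(ran)  # ['b', 'c'] -- "a" was skipped
```

The re-run does work only for `"b"` and `"c"`; the saved log now covers all three, so a third run would do nothing.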
```python
import time

for item in items:
    for attempt in range(3):
        try:
            process(item)
            break  # success: stop retrying this item
        except Exception:
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s
    else:
        print(f"gave up on {item}")
```

Nested loop. Outer = items. Inner = retries per item. The `else` on the inner `for` runs only if the loop finished without a `break` (all 3 attempts failed): log it and move to the next item rather than crashing the whole batch.
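To see the `for`/`else` mechanics fire, here's a runnable sketch with a hypothetical `process` rigged to succeed immediately for one item, succeed on the second try for another, and always fail for a third (backoff shortened to milliseconds so the demo runs instantly):

```python
import time

attempts = {}  # item -> number of tries so far

def process(item):
    attempts[item] = attempts.get(item, 0) + 1
    if item == "flaky" and attempts[item] < 2:
        raise RuntimeError("transient error")
    if item == "broken":
        raise RuntimeError("permanent error")

gave_up = []
for item in ["ok", "flaky", "broken"]:
    for attempt in range(3):
        try:
            process(item)
            break  # success: stop retrying this item
        except Exception:
            time.sleep(0.001 * 2 ** attempt)  # shortened backoff for the demo
    else:
        gave_up.append(item)

print(attempts)  # {'ok': 1, 'flaky': 2, 'broken': 3}
print(gave_up)   # ['broken']
```

`"ok"` breaks out on the first try, `"flaky"` on the second; only `"broken"` exhausts all three attempts and reaches the `else`, yet the batch still completes.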
Critical question for batch jobs: should one failure stop the whole batch, or should you process every item you can?
| Mode | When to use |
|---|---|
| Stop on first failure | Items must succeed in order; later items depend on earlier ones |
| Continue, collect failures | Items are independent; "do as many as possible" |
Some tools accept multiple items in one call, e.g. `send_bulk_email([...])` or `update_many_rows([...])`. When a bulk variant exists, prefer one call with a list over a Python loop with N calls: fewer round trips, and often the only way to stay within rate limits.
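To make the round-trip difference concrete, here's a sketch where a counter stands in for the network; `send_email` and `send_bulk_email` are hypothetical names, not a real API:

```python
round_trips = 0

def send_email(addr):
    global round_trips
    round_trips += 1  # one network call per item

def send_bulk_email(addrs):
    global round_trips
    round_trips += 1  # one network call for the whole list

recipients = ["a@x", "b@y", "c@z"]

for addr in recipients:  # loop: N round trips
    send_email(addr)
loop_trips = round_trips

round_trips = 0
send_bulk_email(recipients)  # bulk: 1 round trip
bulk_trips = round_trips

print(loop_trips, bulk_trips)  # 3 1
```

Same three emails, one third of the network calls; with hundreds of items the gap is what trips rate limits.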