safe_compute_cpl handles bad data without crashing. You've also got rank_channels_by_cpl, channel_summary, group_by_channel — every piece sitting separately. What happens if you chain them into a single function?
I've been running them one at a time — safe CPL first, then group, then rank. Put them in sequence inside one function and that's the whole quarterly report.
Exactly. A pipeline is functions called in the right order, where each output feeds the next input. Five stages:
# Stage 1: guard each campaign's CPL with safe_compute_cpl
# Stage 2: drop campaigns where CPL is 0.0
# Stage 3: group by channel, avg CPL and total leads per group
# Stage 4: sort channels ascending → ranked_channels
# Stage 5: flag channels where avg_cpl > 1.25 × overall avg

Stage five — what's the "overall avg"? The average of the channel averages, or the average across every individual campaign?
Across every individual campaign. If paid search has 40 campaigns and affiliate has 2, averaging the channel averages would let the small channel swing the baseline. Sum all valid CPLs, divide by count. That's the portfolio number your VP actually cares about — and the 1.25× threshold is the same band you wrote in categorize_performance back on Day 7, just applied at channel level now.
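To see why the baseline choice matters, here's a quick sketch using the 40-vs-2 campaign split from above — the CPL values themselves are made up for illustration:

```python
# Hypothetical CPLs: 40 paid-search campaigns vs. 2 affiliate campaigns.
cpls = {
    "paid_search": [10.0] * 40,   # 40 campaigns at $10 CPL
    "affiliate": [50.0, 60.0],    # 2 campaigns at much higher CPL
}

# Average of channel averages: the tiny channel gets equal weight.
channel_avgs = [sum(v) / len(v) for v in cpls.values()]
avg_of_avgs = sum(channel_avgs) / len(channel_avgs)   # (10 + 55) / 2 = 32.5

# Overall average across every individual campaign: weighted by count.
all_cpls = [c for v in cpls.values() for c in v]
overall = sum(all_cpls) / len(all_cpls)               # 510 / 42 ≈ 12.14

print(avg_of_avgs, round(overall, 2))
```

Two affiliate campaigns nearly triple the baseline under the first method; the per-campaign average keeps them proportional to their actual share of the portfolio.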
Same logic, bigger scope. Here's the full thing:
def quarterly_report(campaigns: list) -> dict:
    # Stage 1: guard each campaign's CPL with safe_compute_cpl
    for c in campaigns:
        c["cpl"] = safe_compute_cpl(c)
    # Stage 2: drop campaigns where CPL is 0.0
    valid = [c for c in campaigns if c["cpl"] > 0.0]
    if not valid:
        return {"ranked_channels": [], "underperformers": [], "overall_avg_cpl": 0.0}
    overall_avg_cpl = round(sum(c["cpl"] for c in valid) / len(valid), 2)
    # Stage 3: group by channel, avg CPL and total leads per group
    groups = {}
    for c in valid:
        ch = c.get("channel", "unknown")
        groups.setdefault(ch, []).append(c)
    channel_rows = []
    for ch, clist in groups.items():
        avg = round(sum(c["cpl"] for c in clist) / len(clist), 2)
        total_leads = sum(c.get("leads", 0) for c in clist)
        channel_rows.append({"channel": ch, "avg_cpl": avg, "total_leads": total_leads})
    # Stage 4: sort channels ascending by avg CPL
    ranked = sorted(channel_rows, key=lambda x: x["avg_cpl"])
    # Stage 5: flag channels above 1.25× the overall baseline
    underperformers = [r["channel"] for r in ranked if r["avg_cpl"] > overall_avg_cpl * 1.25]
    print(f"Report: {len(ranked)} channels, {len(underperformers)} underperformer(s), overall CPL ${overall_avg_cpl}")
    return {"ranked_channels": ranked, "underperformers": underperformers, "overall_avg_cpl": overall_avg_cpl}

You narrated the whole pipeline before you typed a line. That's the Week 4 version of you.
I can point at every line and say which day it came from. safe_compute_cpl is Day 27, rank_channels_by_cpl is Day 26 — the capstone is literally thirty lessons assembled.
That's the point. A capstone isn't a new problem — it's proof the pieces compose. Every function you wrote was already part of this pipeline. You just hadn't connected them yet.
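A quick smoke test makes the composition concrete. The `safe_compute_cpl` stub below is a stand-in for the Day 27 helper (assumed behavior: spend ÷ leads, `0.0` on any error), the campaign data is invented, and `quarterly_report` is repeated from above — minus the `print` — so the snippet runs standalone:

```python
def safe_compute_cpl(campaign: dict) -> float:
    # Stand-in stub for the Day 27 helper: spend / leads, 0.0 on bad data.
    try:
        return round(campaign["spend"] / campaign["leads"], 2)
    except (KeyError, TypeError, ZeroDivisionError):
        return 0.0

def quarterly_report(campaigns: list) -> dict:
    # Same pipeline as above, repeated here (print omitted) for a standalone run.
    for c in campaigns:
        c["cpl"] = safe_compute_cpl(c)
    valid = [c for c in campaigns if c["cpl"] > 0.0]
    if not valid:
        return {"ranked_channels": [], "underperformers": [], "overall_avg_cpl": 0.0}
    overall_avg_cpl = round(sum(c["cpl"] for c in valid) / len(valid), 2)
    groups = {}
    for c in valid:
        groups.setdefault(c.get("channel", "unknown"), []).append(c)
    channel_rows = []
    for ch, clist in groups.items():
        avg = round(sum(c["cpl"] for c in clist) / len(clist), 2)
        total_leads = sum(c.get("leads", 0) for c in clist)
        channel_rows.append({"channel": ch, "avg_cpl": avg, "total_leads": total_leads})
    ranked = sorted(channel_rows, key=lambda x: x["avg_cpl"])
    underperformers = [r["channel"] for r in ranked if r["avg_cpl"] > overall_avg_cpl * 1.25]
    return {"ranked_channels": ranked, "underperformers": underperformers, "overall_avg_cpl": overall_avg_cpl}

campaigns = [
    {"channel": "paid_search", "spend": 500.0, "leads": 50},  # CPL 10.0
    {"channel": "paid_search", "spend": 600.0, "leads": 50},  # CPL 12.0
    {"channel": "affiliate", "spend": 300.0, "leads": 10},    # CPL 30.0
    {"channel": "email", "spend": 200.0, "leads": 0},         # CPL 0.0 → dropped
]

report = quarterly_report(campaigns)
print(report)
```

With this data the overall average is (10 + 12 + 30) / 3 = 17.33, the threshold is 21.66, and affiliate at 30.0 gets flagged — the zero-lead email campaign never reaches the grouping stage.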
A pipeline is functions chained so each output is the next function's input.
| Stage | Operation | Tool |
|---|---|---|
| Guard | Add CPL, handle errors | safe_compute_cpl |
| Filter | Drop zero-CPL rows | list comprehension |
| Group | Bucket by channel | dict.setdefault |
| Rank | Sort ascending by avg CPL | sorted(key=...) |
| Flag | Mark channels > 1.25× baseline | threshold comparison |
Overall avg CPL: computed across all valid individual campaigns — not the average of channel averages. Prevents small channels from distorting the baseline.
Underperformer flag: avg_cpl > overall_avg_cpl * 1.25 — same 25% band as categorize_performance (Day 7), now applied at channel level.
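The Group stage's `dict.setdefault` pattern is worth isolating, since it replaces the usual "check if key exists, then append" dance. A minimal sketch with invented rows:

```python
rows = [
    {"channel": "paid_search", "cpl": 10.0},
    {"channel": "email", "cpl": 8.0},
    {"channel": "paid_search", "cpl": 12.0},
]

groups = {}
for r in rows:
    # setdefault returns the existing list for this key,
    # or inserts an empty list and returns it — one line, no if-check.
    groups.setdefault(r["channel"], []).append(r)

print({ch: len(v) for ch, v in groups.items()})  # {'paid_search': 2, 'email': 1}
```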
Amir's VP of Marketing wants a quarterly campaign report that flags which channels are burning budget. Write `quarterly_report(campaigns)` that takes a list of campaign dicts (each with `"channel"`, `"spend"`, and `"leads"` keys). For each campaign, compute CPL using `safe_compute_cpl` (which returns `0.0` on any error). Drop campaigns with CPL of `0.0`. Group the rest by channel, compute avg CPL and total leads per channel, and sort channels ascending by avg CPL. Compute overall avg CPL across all valid campaigns. Flag any channel whose avg CPL exceeds `1.25 × overall_avg_cpl` as an underperformer. Return `{"ranked_channels": [{"channel": ..., "avg_cpl": ..., "total_leads": ...}], "underperformers": [channel_name, ...], "overall_avg_cpl": float}` with all float values rounded to 2 decimal places. Return `{"ranked_channels": [], "underperformers": [], "overall_avg_cpl": 0.0}` if no valid campaigns exist.