Your parse_campaign_csv from Day 19 splits each line on commas and pulls values by position. It works — until a campaign name like "Q3, Big Push" arrives. What does split(",") do with that?
It splits on every comma, so the quoted campaign name becomes two fragments instead of one. The column positions shift and everything after it is wrong.
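The fragmentation is easy to see with a made-up line:

```python
import csv
import io

line = '"Q3, Big Push",social,2000,75'

# Naive split: the comma inside the quotes fragments the name into two fields
fields = line.split(",")  # ['"Q3', ' Big Push"', 'social', '2000', '75']

# The csv module respects the quotes and keeps the name whole
parsed = next(csv.reader(io.StringIO(line)))  # ['Q3, Big Push', 'social', '2000', '75']
```

Four columns of data come back as five fragments from split, and every column after the name is shifted by one.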
That's the exact failure mode that csv.DictReader was built to prevent. It follows the CSV spec — quoted fields, embedded commas, newlines inside quotes, all handled. And instead of pulling values by index, it gives you each row as a dict keyed by the header names from the first line. The setup is two imports and one wrap:
```python
import csv
import io

reader = csv.DictReader(io.StringIO(csv_text))
for row in reader:
    print(row)  # {'name': 'Email Blast', 'channel': 'email', 'spend': '1250.0', 'leads': '50'}
```

Why does it need io.StringIO? Can't I just pass the string directly to DictReader?
DictReader expects something it can iterate line by line — typically a file object, though any iterable of strings works. Pass a raw string and Python iterates it character by character, which turns the parse into garbage. io.StringIO wraps a plain string in a file-like interface without touching disk. Think of it as slipping the CSV text into a file-shaped envelope so Python's file-reading tools see what they expect. Once you have reader, each row is a dict — but every value arrives as a string, so numeric fields need explicit conversion:
```python
import csv
import io

def load_campaigns_from_csv(csv_text: str) -> list:
    """Parse CSV text into a list of campaign dicts with numeric fields converted."""
    reader = csv.DictReader(io.StringIO(csv_text))
    campaigns = []
    for row in reader:
        campaigns.append({
            "name": row["name"],
            "channel": row["channel"],
            "spend": float(row["spend"]),
            "leads": int(row["leads"]),
        })
    print(f"Loaded {len(campaigns)} campaigns")
    return campaigns
```

I get a proper list of dicts — exactly what channel_summary expects. I never have to count column positions or remember that index 2 is spend and index 3 is leads.
Positional indexing is the OFFSET formula of Python. Technically valid, immediately confusing to everyone including you three months later. Named dict keys are the column headers your data already has — use them.
And if the export adds or reorders columns, my code still works because I'm asking for row["spend"], not row[2].
That robustness is the point. CSV exports from Salesforce, HubSpot, and GA4 all ship slightly different column orders depending on the saved view. Code that names its columns survives a re-export; code that counts positions silently produces garbage.
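That survival claim is easy to check with a sketch; the helper spend_by_name below is made up for illustration, not part of the lesson's code:

```python
import csv
import io

def spend_by_name(csv_text: str) -> dict:
    """Map each campaign name to its spend, regardless of column order."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["name"]: float(row["spend"]) for row in reader}

# Same data, two different column orders
export_a = "name,channel,spend,leads\nAds,email,100.0,5"
export_b = "spend,leads,name,channel\n100.0,5,Ads,email"
```

Both exports produce the same mapping, because the code asks for columns by name rather than by position.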
**csv.DictReader + io.StringIO**

csv.DictReader reads the first row as headers and yields each subsequent row as a dict.
```python
reader = csv.DictReader(io.StringIO(csv_text))
```
io.StringIO wraps a string into a file-like object — required because DictReader expects an iterable of lines, not a raw string.
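To see why the wrapper matters, compare iterating the raw string with iterating the StringIO wrapper (toy two-column data):

```python
import csv
import io

text = "name,spend\nAds,100"

# Raw string: iteration yields one character per "line", so the parse is garbage
rows_wrong = [r for r in csv.reader(text)]

# StringIO: iteration yields real lines, so the parse works
rows_right = [r for r in csv.reader(io.StringIO(text))]
```

The first "row" parsed from the raw string is just the single character ['n']; the wrapped version yields the two real rows.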
| Step | What happens |
|---|---|
| First row | Read as field names (header) |
| Each subsequent row | Yielded as dict keyed by header names |
| All values | Arrive as str — convert with float() / int() |
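The last table row is the one that bites in practice; a minimal check with made-up values:

```python
import csv
import io

row = next(csv.DictReader(io.StringIO("spend,leads\n1250.0,50")))

# Every field arrives as a str, even the obviously numeric ones
spend = float(row["spend"])  # explicit conversion to 1250.0
leads = int(row["leads"])    # explicit conversion to 50
```

Skip the conversion and arithmetic like row["spend"] * 2 silently does string repetition instead of math.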
**Why not split(',')?**

Quoted fields like "Q3, Big Push" contain commas. split breaks them; DictReader handles them per the CSV spec.
Ayaan exports the weekly campaign report from HubSpot as CSV text. The export sometimes contains campaign names with commas inside quotes — `"Q3, Big Push"` — which breaks the manual `split(',')` approach. Write `load_campaigns_from_csv(csv_text)` that uses `csv.DictReader` and `io.StringIO` to parse the CSV text into a list of dicts, converting `spend` to `float` and `leads` to `int`. The CSV always has a header row with columns `name`, `channel`, `spend`, `leads`. Return the list ready to pass into `channel_summary`.