rank_groups_by_satisfaction from yesterday assumes every response has a numeric satisfaction value. What happens when a respondent skips that question or enters "N/A" instead of a number?
The float() call inside score_per_response would raise a ValueError. The whole pipeline crashes on one bad row.
is_valid_response guards against this upstream. But real Qualtrics exports are messier than ideal — sometimes a field exists but holds unexpected content. try/except is your final safety net. You wrap the risky code, name the exception types you expect, and return a safe default when they fire:
```python
responses = [{"satisfaction": "4.0"}, {"satisfaction": "N/A"}, {}]

try:
    scores = [float(r["satisfaction"]) for r in responses]
    avg = sum(scores) / len(scores)
except (KeyError, ValueError, ZeroDivisionError):
    avg = 0.0

print(avg)  # 0.0 — float("N/A") raised ValueError, so the except block set the default
```
You can catch multiple exception types in one except? And execution continues after the except block?
Exactly. except (KeyError, ValueError, ZeroDivisionError): catches any of those three. After the except block runs, the function continues normally — the caller gets 0.0 and never sees the exception:
```python
def safe_compute_avg(responses: list) -> float:
    """Compute average satisfaction with graceful error handling."""
    try:
        scores = [float(r["satisfaction"]) for r in responses]
        if not scores:
            return 0.0
        avg = sum(scores) / len(scores)
        print(f"Average: {avg:.2f}")
        return avg
    except (KeyError, ValueError, ZeroDivisionError):
        print("Error computing avg — returning 0.0")
        return 0.0
```
The thesis pipeline stays alive even when a respondent typed garbage into the satisfaction field. One bad row doesn't invalidate the rest.
Your "if respondent skipped this question, mark N/A" cleaning protocol, except now Python handles it instead of you.
I'm wrapping the same averaging formula I've been using since Week 2 — sum(scores) / len(scores) — but now it's safe for messy real data.
Only catch exceptions you understand and can handle. Catching Exception (the base class) hides all errors including bugs in your own code. Name the specific types — KeyError for missing keys, ValueError for bad conversions, ZeroDivisionError for empty lists. Let everything else surface.
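To see why catching Exception is dangerous, here is a small sketch (the function name and the deliberate typo are hypothetical, invented for illustration): a bug in our own code raises NameError, and a broad except makes it look like ordinary bad data, while a specific except list lets it surface.

```python
def buggy_avg(scores):
    total = sum(scores)
    return totl / len(scores)  # typo: "totl" instead of "total" raises NameError

# Too broad: Exception swallows the bug, so it masquerades as a data problem
try:
    buggy_avg([4.0, 5.0])
except Exception:
    print("swallowed: the typo never surfaces")

# Specific types: NameError is not in the tuple, so the bug escapes and gets noticed
try:
    buggy_avg([4.0, 5.0])
except (KeyError, ValueError, ZeroDivisionError):
    print("bad data")  # not reached: NameError is not in the list
except NameError as exc:
    print(f"bug surfaced: {exc}")
```

With the specific except list, the second try prints the NameError instead of mislabeling it as a data problem — exactly the "let everything else surface" behavior described above.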
```python
try:
    risky_code()
except (KeyError, ValueError, ZeroDivisionError):
    handle_error()
```
- No exception raised: the try block runs normally and the except block is skipped.
- One of the listed exceptions fires: execution jumps to the except block.
- Either way, the code after the try/except continues.

| Situation | Exception |
|---|---|
| Missing dict key | KeyError |
| float("N/A") | ValueError |
| sum([]) / len([]) | ZeroDivisionError |
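Each of these three exception types can be triggered in isolation. A minimal demo that fires each one in turn and records which handler ran:

```python
seen = []

try:
    {}["satisfaction"]          # missing dict key
except KeyError:
    seen.append("KeyError")

try:
    float("N/A")                # bad string-to-number conversion
except ValueError:
    seen.append("ValueError")

try:
    sum([]) / len([])           # empty list: len([]) is 0, so division fails
except ZeroDivisionError:
    seen.append("ZeroDivisionError")

print(seen)  # ['KeyError', 'ValueError', 'ZeroDivisionError']
```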
Return None when the result is simply absent; return a sentinel value like -1.0 or float('nan') when the caller needs to distinguish "computation failed" from "no data provided". Document which convention you use.
Noor is running your thesis pipeline on a real Qualtrics export that contains some rows with missing satisfaction values and some with text entries like 'N/A'. Write `safe_compute_avg(responses)` that computes the average satisfaction score from a list of response dicts, returning 0.0 if any KeyError, ValueError, or ZeroDivisionError occurs.