A monitoring dashboard needs the error rate over the last N log entries, refreshed as new logs stream in. Given a flat list and a window size, how do you produce one rate per window?
Same sliding-window structure from Day 14 — but instead of yielding max, I compute errors-over-total per window. Slice, count errors, divide, round.
Exactly. Reuse the window math, swap the aggregation. For each start index, take the slice, count how many logs have level == "ERROR", divide by window size:
window = logs[i:i+size]
errors = sum(1 for log in window if log.get("level") == "ERROR")
rate = round(errors / size, 2)

sum(1 for ... if ...) is the idiomatic count-matching-items form.
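As a quick sanity check, here is that per-window computation run on a small hand-made log list (the dicts and their levels are illustrative, not from a real system):

```python
logs = [
    {"level": "INFO"}, {"level": "ERROR"}, {"level": "ERROR"},
    {"level": "INFO"}, {"level": "WARN"}, {"level": "ERROR"},
]
size = 3
i = 0  # first window position

window = logs[i:i + size]  # the first three entries
errors = sum(1 for log in window if log.get("level") == "ERROR")
rate = round(errors / size, 2)
print(errors, rate)  # 2 errors out of 3 -> 0.67
```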
Why round(..., 2)? Won't the raw float be fine?
A raw float prints with up to 17 significant digits of noise. 0.16666666666666666 on a dashboard looks like a bug; 0.17 is the signal. Always round at the presentation boundary, so the caller gets clean, comparable numbers.
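The rounding point is easy to see in a REPL; 1/6 is the two-errors-in-three-logs case from above:

```python
raw = 1 / 6
print(raw)            # full repr noise: 0.16666666666666666
print(round(raw, 2))  # what the dashboard should show: 0.17
```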
What if size is zero or the list is shorter than the window?
Both guard cases reduce to one thing: no valid window, return an empty list. That drops the division-by-zero risk and keeps the function composable:
if not logs or size <= 0 or size > len(logs):
    return []

One guard line, three edge cases covered.
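Putting the guard and the per-window metric together, a minimal sketch of the full function might look like this (the sample logs are illustrative):

```python
def windowed_error_rate(logs, size):
    # No valid window: empty input, non-positive size, or oversized window.
    if not logs or size <= 0 or size > len(logs):
        return []
    rates = []
    for i in range(len(logs) - size + 1):
        window = logs[i:i + size]
        errors = sum(1 for log in window if log.get("level") == "ERROR")
        rates.append(round(errors / size, 2))
    return rates

logs = [{"level": "ERROR"}, {"level": "INFO"}, {"level": "ERROR"}]
print(windowed_error_rate(logs, 2))  # [0.5, 0.5]
print(windowed_error_rate(logs, 5))  # [] -- oversized window hits the guard
```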
So this is the analytics generalisation of Day 14. The window math is identical; only the metric changes.
And that's the whole point of the sliding-window pattern. Once you own the window loop, you plug in any aggregation — max, mean, rate, threshold — and you have a live analytics tool.
TL;DR: one outer loop for the window positions, one inner metric — here, an error rate rounded to 2 decimals.
sum(1 for x in xs if cond) — count matches without building an intermediate list
round(x, 2) — presentation-safe precision
[] — the guard return for invalid windows

| Metric | Inner expression |
|---|---|
| max | max(window) |
| mean | sum(window) / size |
| error rate | round(errors / size, 2) |
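One way to make "swap the aggregation" concrete is to pass the inner metric in as a function. This `sliding_apply` helper and the metric lambdas are illustrative names, not part of the lesson's API:

```python
def sliding_apply(xs, size, metric):
    # Same guard and window loop; the metric is pluggable.
    if not xs or size <= 0 or size > len(xs):
        return []
    return [metric(xs[i:i + size]) for i in range(len(xs) - size + 1)]

nums = [3, 1, 4, 1, 5]
print(sliding_apply(nums, 3, max))                        # [4, 4, 5]
print(sliding_apply(nums, 3, lambda w: sum(w) / len(w)))  # windowed means
```

The table's three metrics all fit this shape; only the callable changes.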
Write `windowed_error_rate(logs, size)` that returns a list of error rates — one per sliding window of length `size` over `logs`. Each rate is `errors_in_window / size` rounded to 2 decimals. Return `[]` for empty input, non-positive size, or oversized window.