Twenty-eight days. Every function you wrote. Today you chain them into one: raw text in, config in, formatted summary out. What's the overall shape of the pipeline?
Five steps. Split into lines, parse each into a dict, apply the level and limit config, count by level, format as Total: N plus one line per level sorted alphabetically. Standard pipeline.
Exactly. Each step is a transformation that hands its output to the next. The first pass collects clean log dicts:
```python
lines = [l for l in raw_text.strip().splitlines() if l.strip()]
logs = []
for line in lines:
    parts = line.split(" ", 2)
    if len(parts) >= 3:
        rest = parts[2].split(" ", 1)
        logs.append({
            "timestamp": f"{parts[0]} {parts[1]}",
            "level": rest[0],
            "message": rest[1] if len(rest) > 1 else "",
        })
```

Malformed lines are silently skipped — three parts minimum.
Why not import the Day-3 parse function and reuse it?
In a real codebase you would. For this lesson we embed the logic inside one testable function so the whole pipeline is self-contained. Either style works; the composability story is identical.
And filter-then-limit, same as Day 25?
Same order, same pattern. Then a frequency map with dict.get(k, 0) + 1, and a summary builder that prints Total then one line per level in sorted order:
summary = [f"Total: {len(logs)}"]
for level in sorted(counts):
summary.append(f"{level}: {counts[level]}")
print("\n".join(summary))Every pattern from the last four weeks lives in this one function. Parse, filter, count, sort, format, join. It works.
One function, every shape. Run it, swap the config, watch the report change. That's a real tool — configurable, composable, shippable.
TL;DR: split → parse → filter → limit → count → format. Each step transforms the running state.
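Put together, one possible end-to-end sketch — not the only valid shape, and assuming the `config` keys `level` and `limit` described above — reads:

```python
def run_log_report(raw_text, config):
    # 1. Split into non-empty lines.
    lines = [l for l in raw_text.strip().splitlines() if l.strip()]

    # 2. Parse each line into a dict; silently skip malformed lines.
    logs = []
    for line in lines:
        parts = line.split(" ", 2)
        if len(parts) >= 3:
            rest = parts[2].split(" ", 1)
            logs.append({
                "timestamp": f"{parts[0]} {parts[1]}",
                "level": rest[0],
                "message": rest[1] if len(rest) > 1 else "",
            })

    # 3. Apply config: filter by level first, then cap the row count.
    if config.get("level"):
        logs = [log for log in logs if log["level"] == config["level"]]
    if config.get("limit") is not None:
        logs = logs[:config["limit"]]

    # 4. Count by level, then format the summary.
    counts = {}
    for log in logs:
        counts[log["level"]] = counts.get(log["level"], 0) + 1
    summary = [f"Total: {len(logs)}"]
    for level in sorted(counts):
        summary.append(f"{level}: {counts[level]}")
    return "\n".join(summary)
```

For example, `run_log_report("2024-01-15 09:00:01 INFO ok", {"level": None, "limit": None})` returns `"Total: 1\nINFO: 1"`.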
- splitlines + filter — drop blanks
- dict.get(k, 0) + 1 — safe frequency count
- "\n".join(summary) — final string

| Week | Pattern used here |
|---|---|
| 1 | str.split with maxsplit, f-strings |
| 2 | frequency map |
| 3 | streaming line cleanup |
| 4 | config application, summary join |
Write `run_log_report(raw_text, config)` that parses each non-empty line of `raw_text` into a log dict, applies `config` (keys `level` and `limit`, may be `None`), counts by level, and returns `"Total: N\nLEVEL: n\n..."` with levels sorted alphabetically.
Tap each step for scaffolded hints.
No blank-editor panic.