Tell me honestly — what happened when you sent the script to your teammates?
Two people tried it. One didn't know she needed to change the file path at the top of the script. She opened it, didn't see any input prompt, ran it anyway, and got a FileNotFoundError because she didn't have the file at the path I had hardcoded. The other one got as far as the input() call, saw a blinking cursor with no text, and assumed it crashed. They both gave up. I felt terrible.
That is not a Python problem. That is an interface problem. Your logic is solid — three weeks of work has produced a script that actually does what Diane asked for. What it lacks is every surface that a tool exposes to its users: how it accepts input, what it prints when it is working, how it reports errors, how it tells someone what arguments it needs. A working script and a usable tool are two different things.
So argparse is the input interface, logging is the output interface. Those two fix most of what went wrong.
Exactly. argparse turns python analyze.py into python analyze.py app.log --level ERROR --since 2026-03-31 with automatic --help text, type validation, and error messages that tell the user what they did wrong. logging turns your print() calls into filterable, timestamped output that Diane can run normally or your teammate can run with --verbose to see exactly what the script is doing step by step. pprint, timeit, and glob round out the week. The capstone is a CLI log analyzer that a teammate can actually hand off to someone else.
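That invocation can be sketched in a few lines of argparse. This is a hypothetical skeleton, not the capstone's actual parser — the argument names simply mirror the example command above:

```python
import argparse

# Minimal sketch of the interface described above.
parser = argparse.ArgumentParser(
    description="Summarize entries in a log file."
)
parser.add_argument("logfile", help="path to the log file to analyze")
parser.add_argument(
    "--level",
    default="INFO",
    choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"],
    help="minimum severity to include in the report",
)
parser.add_argument(
    "--since", help="only include entries on or after this date (YYYY-MM-DD)"
)

# Passing a list stands in for the real command line (sys.argv[1:]).
args = parser.parse_args(["app.log", "--level", "ERROR"])
print(args.logfile, args.level)
```

Run with `-h` and argparse generates the help text for free; pass `--level BOGUS` and it exits with a message naming the valid choices — exactly the error reporting Maya's teammates never got from `input()`.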
I've been thinking about the --verbose flag. If I add that, I want the normal run to only show the final report, and --verbose to show every step — which files were loaded, how many entries were parsed, what the filter removed. Is that how real tools work?
That is exactly how real tools work. You just described the logging.basicConfig(level=logging.WARNING) vs logging.basicConfig(level=logging.DEBUG) pattern, before I taught it. Three weeks ago you were hardcoding file paths. Now you are thinking about the user experience of a tool you are about to build.
A command-line tool has three surfaces: how it accepts input, how it produces output, and how it behaves when integrated with other tools. argparse handles input — it parses sys.argv, validates argument types, enforces required vs optional, generates help text, and exits with a meaningful error message when the user passes something wrong. The difference between input() and argparse is the difference between a script that needs its author present and a tool that runs unattended in a cron job.
logging handles output in a structured way. Unlike print(), log messages have a severity level (DEBUG, INFO, WARNING, ERROR, CRITICAL) and a destination (console, file, or both). The caller controls what they see by setting the minimum level: in production, set it to WARNING to suppress noise; while debugging, set it to DEBUG to see everything. The logger hierarchy means library code and application code can coexist without their output colliding — your script uses logging.getLogger(__name__), and each component gets its own named channel.
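The named-channel idea in miniature — a sketch, where "urllib3" stands in for any chatty third-party library:

```python
import logging

# The caller sets the floor once; every logger inherits it.
logging.basicConfig(level=logging.WARNING)

# __name__ gives this module its own channel in the hierarchy.
logger = logging.getLogger(__name__)

logger.debug("parsed 512 entries")            # suppressed: below WARNING
logger.warning("2 entries had no timestamp")  # emitted

# A library's channel can be quieted independently of your own:
logging.getLogger("urllib3").setLevel(logging.ERROR)
```

That last line is the payoff of the hierarchy: you silence one component without touching any other logger's output.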
pprint is the microscope: it formats nested Python objects — dicts of lists of dicts — with indentation and line breaks that make structure visible. timeit is the stopwatch: it runs a callable N times and reports the total elapsed time, smoothing out scheduling noise. glob is filesystem search by pattern — glob.glob('/var/logs/**/*.log', recursive=True) does what pathlib's rglob did in Week 1, but in a single function call, without constructing a Path object.
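All three utilities in miniature — the report dict is a toy stand-in for the analyzer's real output:

```python
import glob
import pprint
import timeit

# pprint: indentation and line breaks make nested structure visible.
report = {
    "app.log": {"errors": 3, "warnings": [12, 45]},
    "db.log": {"errors": 0, "warnings": []},
}
pprint.pprint(report, indent=2)

# timeit: total elapsed time for N runs of a callable.
elapsed = timeit.timeit(lambda: sum(range(1000)), number=10_000)
print(f"{elapsed:.4f}s for 10,000 runs")

# glob: pattern-based search; ** recurses when recursive=True.
log_files = glob.glob("/var/logs/**/*.log", recursive=True)
```

On a machine without /var/logs, glob simply returns an empty list rather than raising — a quieter failure mode than the hardcoded path that tripped up Maya's teammate.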
Taken together, Week 4 closes the gap between working and shippable. The log analyzer Maya built over three weeks has correct logic and solid module coverage. This week gives it a face — the argument parser, the log output, the help text — so that anyone on the ops team can run it on any log file, on any day, without reading the source code.