I pulled the production logs from last Thursday. I can see your print statements. print("HERE") appears eleven times in a file that handles payment enrichment.
I know. I found the bug. It was a race condition in cache invalidation — completely unrelated to the place I was printing. But the prints helped me see the data flowing through. I needed to see what order_data looked like at each stage.
You needed to see what order_data looked like, so you added eleven prints and re-ran eleven times. Notice the pattern: you were asking eleven specific questions you formed before you had evidence. pdb lets you ask any question — not the eleven you thought to ask, but every question, in any order, without re-running anything. Print debugging interviews one witness at a time. The debugger puts you at the crime scene with all witnesses simultaneously.
I've opened pdb once. The terminal just showed (Pdb) and I had no idea what to type. I closed it and went back to print statements because at least I understood what I was getting.
Four commands cover 90% of debugging sessions. n for next line, s to step into a function, c to continue to the next breakpoint, p to print any variable or expression. You can also call functions, modify variables, and inspect the full call stack. The entry point is one line: breakpoint(). Add it where you want to stop, run the program, and the interpreter halts and hands you control. You get to look at everything — not just what you thought to print.
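A minimal sketch of that entry point (the function and dict shape are illustrative, not from the production code; the PYTHONBREAKPOINT=0 guard is set so the demo runs to completion instead of halting):

```python
import os

# Demo guard: PYTHONBREAKPOINT=0 makes breakpoint() a no-op, so this file
# runs straight through. Remove the line to get a live (Pdb) prompt.
os.environ["PYTHONBREAKPOINT"] = "0"

def enrich(order):
    # Hypothetical enrichment step; the dict shape is illustrative.
    breakpoint()  # execution halts here; at (Pdb) try: n, s, c, p order
    order["enriched"] = True
    return order

print(enrich({"id": 1})["enriched"])  # True
```

With the guard removed, the same script drops you into (Pdb) inside enrich, where n, s, c, and p work exactly as described above.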
If I had used breakpoint() last Thursday — set it before the enrichment step — I could have inspected the cache state directly? Without re-running anything?
Without re-running anything. And if the bug was intermittent — happening on 3% of orders — you could have guarded the breakpoint with a condition: if order['status'] == 'missing': breakpoint(). The debugger only halts when the condition fires. You examine the exact order that triggers the bug, with the full program state in front of you. Not the state you guessed at with prints. The actual state.
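That guard looks like this in context (the batch of orders is hypothetical; again, PYTHONBREAKPOINT=0 keeps the demo from actually halting):

```python
import os
os.environ["PYTHONBREAKPOINT"] = "0"  # demo only: don't actually halt

orders = [                 # hypothetical batch; shapes are illustrative
    {"id": 1, "status": "ok"},
    {"id": 2, "status": "missing"},
    {"id": 3, "status": "ok"},
]

hits = 0
for order in orders:
    if order["status"] == "missing":
        breakpoint()       # halts only for the order that triggers the bug
        hits += 1          # counting the stops, just for the demo

print(hits)  # 1
```

Only the one bad order would have stopped the program; the 97% of healthy orders flow through untouched.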
That is the difference. Print statements are guesses about what I think I need to see. The debugger shows me what actually is. And this week also covers stack traces and logging — so I am replacing the entire print-debugging workflow, not just adding one tool.
By Day 28, you will have the complete toolkit: pdb for interactive debugging, structured logging to replace the production prints, stack trace reading for post-mortem analysis, and inspect and dis for when you need to see exactly how Python is interpreting your code. The 47 print statements come out. They get replaced by a logging configuration that produces structured output you can filter by level, by module, and by context. What you ship to production should help you debug production, not just tell you that code ran.
Every developer who has used Python for more than a week has used print debugging. It works. It reveals information that was not visible before. And it has a ceiling that every developer hits eventually: you can only see what you thought to look at.
The structural problem with print debugging is that it requires you to form hypotheses before you have evidence. You add print(data) because you think data might be wrong. You add print("HERE") because you think execution might not be reaching that line. Every print statement is a question you thought to ask before you understood the bug. But bugs are not caused by the things you suspected. They are caused by the things you did not suspect. Print debugging gives you the answers to the questions you already had. It does not give you access to the questions you had not thought of yet.
pdb inverts this. When the interpreter halts at a breakpoint, you can inspect any variable in any frame of the call stack. You can call functions. You can modify values. You can evaluate expressions that did not exist in the original code. You can trace backward through the call history using where and u and d. You are not limited to the questions you thought of before running the program — you can ask new questions as the evidence develops. This is what Arjun does with a single call (breakpoint()) that Priya tries to approximate with eleven print statements.
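To make that concrete, here is a pdb session driven non-interactively (the function is a toy; the commands fed to stdin are exactly what you would type at the (Pdb) prompt — including an expression, p x * 2, that never appears in the source):

```python
import io
import pdb

def inner(x):
    y = x * 2          # a place to stop and look around
    return y

# Script the session: 'p' evaluates expressions, 'where' shows the stack,
# 'c' continues. These are typed at (Pdb) in an interactive session.
commands = io.StringIO("p x\np x * 2\nwhere\nc\n")
out = io.StringIO()
dbg = pdb.Pdb(stdin=commands, stdout=out, nosigint=True, readrc=False)
result = dbg.runcall(inner, 21)   # pdb stops at the first line of inner

print(result)                     # 42
print("21" in out.getvalue())     # True: 'p x' printed the argument
```

The point of the scripted stdin is only to make the session reproducible here; interactively you would type the same commands, and invent new ones as the evidence develops.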
Structured logging is the production-safe version of the same discipline. print() writes to stdout, untagged, unfiltered, unsearchable. A properly configured logging setup writes to a structured stream with level, timestamp, module name, and any context you attach. In production, you can set DEBUG logging on a specific module while leaving everything else at WARNING. You can filter the log aggregator by module=enrichment level=DEBUG and see only the lines relevant to your investigation. This is not a tool for big companies — it is a tool for any codebase that runs in an environment you cannot attach a debugger to.
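A minimal sketch of that per-module setup (the "enrichment" and "billing" logger names are illustrative; output goes to a string here only so the result is easy to show — in a real service it would be a stream or file handler feeding the aggregator):

```python
import io
import logging

# Capture records in a string for the demo; timestamps omitted so the
# output is deterministic. A real config would add %(asctime)s.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))

root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.WARNING)               # everything defaults to WARNING

enrich_log = logging.getLogger("enrichment") # opt this one module into DEBUG
enrich_log.setLevel(logging.DEBUG)

other_log = logging.getLogger("billing")     # inherits WARNING from root

enrich_log.debug("cache state for order=%s: %s", 42, "stale")
other_log.debug("this is filtered out")      # below billing's effective level
other_log.warning("this still appears")

print(stream.getvalue())
```

The DEBUG line from enrichment and the WARNING line from billing come through; billing's DEBUG line is filtered — which is exactly the "DEBUG on one module, WARNING everywhere else" discipline described above.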
inspect and dis round out the toolkit for the cases where the bug is not in the logic you wrote but in how Python is interpreting it. inspect.signature tells you exactly what parameters a function expects and what their defaults are. dis.dis shows you the bytecode Python compiled from your source — which is how you confirm that a performance optimization actually changed the generated instructions, not just the source text. These are specialist tools, but the specialist who knows when to use them is faster than the one who is limited to print statements.
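Both tools fit in a few lines (the function is a toy standing in for real code):

```python
import dis
import inspect

def enrich(order, retries=3):
    # Toy function; the names are illustrative.
    return {**order, "enriched": True}

# inspect.signature: the exact parameters and their defaults.
sig = inspect.signature(enrich)
print(sig)  # (order, retries=3)

# dis: the bytecode CPython compiled from the source. Comparing this
# before and after an "optimization" shows whether the generated
# instructions actually changed.
ops = [instr.opname for instr in dis.get_instructions(enrich)]
print(len(ops) > 0)  # True
```

dis.dis(enrich) prints the same instruction stream in a human-readable listing; dis.get_instructions is the programmatic form, convenient when you want to diff two versions of a function.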