You mentioned the retry decorator you wrote last week. If someone hands you that decorator as a callable at runtime — no source file, just the object — how do you find out what parameters it takes?
I would... look at the source file? Or check the editor. I have never needed to inspect a function I did not write.
You will. The moment you write a framework that receives callbacks, or a test helper that introspects function signatures, or a CLI that reads parameter names to build its help text. The answer is inspect:
import inspect
def process_payment(amount: float, currency: str = "USD", retry: bool = False) -> dict:
    """Process a payment and return the transaction record."""
    ...
sig = inspect.signature(process_payment)
for name, param in sig.parameters.items():
    has_default = param.default is not inspect.Parameter.empty
    print(f"{name}: default={param.default if has_default else 'none'}, annotation={param.annotation}")
inspect.Parameter.empty is the sentinel for "no default." param.kind tells you whether a parameter is positional-only, positional-or-keyword, keyword-only, *args, or **kwargs. Five kinds, five different behaviors when constructing a call dynamically.
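All five kinds can be read straight off a Signature. A minimal sketch; the function demo is hypothetical, written only to exercise every kind:

```python
import inspect

# Hypothetical function covering all five parameter kinds.
def demo(a, /, b, *args, c=1, **kwargs):
    ...

sig = inspect.signature(demo)
# Parameter.kind is an enum; .name gives the readable kind.
kinds = {name: p.kind.name for name, p in sig.parameters.items()}
print(kinds)
```

Positional-only syntax (`/`) requires Python 3.8+.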
iscoroutinefunction — that is how a framework knows whether to await a callback. I have had bugs where I passed an async function to something that expected sync, and the return value was an unawaited coroutine.
That is the exact bug inspect.iscoroutinefunction prevents. FastAPI checks your view function before deciding how to call it. When you write async def and FastAPI dispatches the request, it does not assume — it inspects. Now dis:
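The shape of that dispatch decision can be sketched in a few lines. This is the pattern, not FastAPI's actual code, and the handler names are illustrative:

```python
import asyncio
import inspect

# Sketch of how a framework might dispatch a callback it did not write:
# inspect first, then call accordingly.
def sync_handler():
    return "sync result"

async def async_handler():
    return "async result"

def dispatch(callback):
    if inspect.iscoroutinefunction(callback):
        # An async def callback returns a coroutine; it must be awaited.
        return asyncio.run(callback())
    return callback()

print(dispatch(sync_handler))   # sync result
print(dispatch(async_handler))  # async result
```

Without the iscoroutinefunction check, dispatch(async_handler) would return an unawaited coroutine object instead of "async result".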
import dis
def add_with_default(x, y=10):
    return x + y
dis.dis(add_with_default)
Output:
2 RESUME 0
3 LOAD_FAST 0 (x)
LOAD_FAST 1 (y)
BINARY_OP 0 (+)
RETURN_VALUE
A stack machine. LOAD_FAST pushes the local variable onto the stack. BINARY_OP pops both operands, adds, pushes the result. RETURN_VALUE returns the top of the stack.
Four instructions for return x + y. Now let me show you the practical case — why list comprehensions are faster than append loops:
import dis
def version_a(items):
    result = []
    for item in items:
        result.append(item * 2)
    return result
def version_b(items):
    return [item * 2 for item in items]
version_a emits an attribute lookup (LOAD_ATTR, or LOAD_METHOD on 3.7 through 3.11) on every iteration to resolve result.append on the list object. version_b uses LIST_APPEND — a single bytecode that CPython handles at the interpreter level without an attribute lookup. The comprehension is faster not because it is more concise — it uses a different instruction.
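You can check the timing claim yourself with timeit. The exact numbers depend on machine and Python version, so none are asserted here; what is certain is that both versions return the same result:

```python
import timeit

def version_a(items):
    result = []
    for item in items:
        result.append(item * 2)
    return result

def version_b(items):
    return [item * 2 for item in items]

data = list(range(1000))

# Both functions produce identical output; only the bytecode differs.
assert version_a(data) == version_b(data)

# Timings vary by machine and interpreter version.
t_a = timeit.timeit(lambda: version_a(data), number=2000)
t_b = timeit.timeit(lambda: version_b(data), number=2000)
print(f"append loop:   {t_a:.3f}s")
print(f"comprehension: {t_b:.3f}s")
```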
I have been telling people list comprehensions are faster because they are "more Pythonic." I had the conclusion right but the mechanism completely wrong.
The conclusion was correct. dis makes the mechanism precise. And dis.get_instructions returns structured objects instead of printed text:
instructions = list(dis.get_instructions(version_a))
# Each Instruction has: opname, argval, offset, starts_line
count = len(instructions)
You can count instructions, filter by opname, detect LOAD_ATTR calls to measure attribute lookup cost — without parsing strings.
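For instance, a small sketch that confirms the LIST_APPEND claim across versions. Before Python 3.12 a comprehension compiles to its own nested code object stored in co_consts, so the helper below recurses into nested code objects rather than only scanning the top level:

```python
import dis

def version_a(items):
    result = []
    for item in items:
        result.append(item * 2)
    return result

def version_b(items):
    return [item * 2 for item in items]

def opnames(code):
    # Collect opnames, recursing into nested code objects (pre-3.12
    # comprehensions live in co_consts as separate code objects).
    names = [ins.opname for ins in dis.get_instructions(code)]
    for const in code.co_consts:
        if hasattr(const, "co_code"):
            names.extend(opnames(const))
    return names

a_ops = opnames(version_a.__code__)
b_ops = opnames(version_b.__code__)
print("LIST_APPEND" in b_ops)                                   # True
print(any(op in ("LOAD_ATTR", "LOAD_METHOD") for op in a_ops))  # True
```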
The code object also exposes metadata directly. co_varnames, co_argcount, co_consts. I saw co_consts includes the docstring because docstrings are string constants pushed and immediately popped.
Almost. You remembered the co_consts part correctly, but the push-and-pop is a common misconception: for a function body, the compiler recognizes a leading string-literal statement, stores it as co_consts[0], and emits no bytecode for it at all. Its only purpose is making the string accessible via __doc__. It is a magic comment that Python stores as a constant.
Three years of docstrings and I never knew they were constants that never even execute. I thought they were stored somewhere special.
They are stored in func.__doc__. CPython copies the constant into that attribute when the function object is created. The docstring is compiled data, not compiled code.
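A quick check with a throwaway function: on current CPython the docstring sits in co_consts[0], is copied to __doc__ at function creation, and no instruction ever loads it.

```python
import dis

def documented():
    """A distinctive docstring used only for this demonstration."""
    return 42

# The docstring lives in co_consts[0], and CPython copies it to
# __doc__ when the function object is created.
print(documented.__code__.co_consts[0] == documented.__doc__)  # True

# No instruction ever loads it: it is stored, not executed.
loaded = [ins.argval for ins in dis.get_instructions(documented)
          if ins.opname == "LOAD_CONST"]
print(documented.__doc__ in loaded)  # False
```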
When would I reach for dis in practice — not just to understand Python, but on an actual bug?
Three cases. First, when two versions of the same logic produce different timings and you cannot explain why from the source. dis shows you which version generates fewer or cheaper instructions. Second, when a closure captures the wrong variable and you need to see exactly which variables are in co_freevars. Third, when you are writing a framework and need to understand what the Python runtime will do with a function before calling it. inspect.iscoroutinefunction, inspect.signature, and dis.get_instructions together give you a complete picture of any callable at runtime.
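The second case, closure capture, looks like this. A minimal sketch with hypothetical names:

```python
# All three handlers close over the same variable i, so each sees its
# final value, not the value at definition time.
def make_handlers():
    handlers = []
    for i in range(3):
        def handler():
            return i
        handlers.append(handler)
    return handlers

handlers = make_handlers()
print([h() for h in handlers])           # [2, 2, 2], not [0, 1, 2]
print(handlers[0].__code__.co_freevars)  # ('i',) -- the shared capture

# Binding the value as a default argument captures it immediately.
def make_fixed():
    return [lambda i=i: i for i in range(3)]

print([h() for h in make_fixed()])       # [0, 1, 2]
```

co_freevars names exactly which variables a function borrows from an enclosing scope, which is how you confirm the shared capture without guessing from source.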
inspect answers "what does this function accept and return." dis answers "what did Python actually compile this into." Both are ways of asking Python to explain itself rather than inferring from source code.
That is the framing for all four tools this week. pdb asks "what is the state right now." Traceback reading asks "how did we get here." Logging asks "what was happening before this." Inspect and dis ask "is Python doing what I think I told it to do." Each is a different distance from the running code, and each is irreplaceable when the others cannot answer the question.
Tomorrow is the capstone. Every tool from all four weeks on one piece of broken code. I recognize exactly what that means now in a way I did not four weeks ago — profiling to find the bottleneck first, pdb to step through the failure, structured logging to give it observability, and inspect or dis if the behavior still does not make sense from the source.
The order matters: measure before you optimize, observe before you fix, understand before you refactor. Four weeks ago you optimized the database loop that took 0.8 seconds while a 29-second problem waited untouched. Tomorrow you will not make that mistake.
The inspect.signature implementation. inspect.signature() wraps inspect._signature_from_callable(), which reads the function's __code__ object and __annotations__ dict to build a Signature object. For each parameter, it reads co_varnames[:co_argcount] for positional parameters, co_varnames[co_argcount:co_argcount+co_kwonlyargcount] for keyword-only parameters, and __defaults__ or __kwdefaults__ for default values. The Parameter.empty sentinel is a class-level singleton used instead of None because None is a valid default. inspect.iscoroutinefunction checks CO_COROUTINE in func.__code__.co_flags — a bitmask set by the compiler when a function is defined with async def.
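Those slices can be verified directly on a code object. A sketch with a hypothetical function:

```python
import inspect

# Hypothetical function: two positional, two keyword-only parameters.
def sample(a, b, *, c, d=4):
    ...

code = sample.__code__
positional = code.co_varnames[:code.co_argcount]
kwonly = code.co_varnames[code.co_argcount:
                          code.co_argcount + code.co_kwonlyargcount]
print(positional)             # ('a', 'b')
print(kwonly)                 # ('c', 'd')
print(sample.__kwdefaults__)  # {'d': 4}

# The coroutine flag the compiler sets for async def:
async def task():
    ...
print(bool(task.__code__.co_flags & inspect.CO_COROUTINE))  # True
```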
How dis accesses bytecode. dis.dis() and dis.get_instructions() both decode func.__code__.co_code — the raw bytecode as a bytes object — through the private helper dis._get_instructions_bytes(). Each instruction is two bytes, one for the opcode and one for the argument, with EXTENDED_ARG prefixes for arguments that do not fit in a byte (and, since 3.11, inline CACHE entries interleaved with real instructions). dis.get_instructions() yields Instruction named tuples with opname, opcode, arg, argval (the resolved value of the argument), argrepr (human-readable), offset, and starts_line. The line number information comes from co_linetable (Python 3.10+) or co_lnotab — a compressed mapping of bytecode offset ranges to source line numbers.
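The raw bytes can be decoded by hand using dis.opname, which maps opcode numbers back to mnemonic names:

```python
import dis

def add(x, y):
    return x + y

raw = add.__code__.co_code
# Wordcode: every instruction occupies two bytes, opcode then argument
# (on 3.11+ the byte stream also includes inline CACHE entries).
print(len(raw) % 2 == 0)  # True

# dis.opname is a list indexed by opcode number.
pairs = [(dis.opname[raw[i]], raw[i + 1]) for i in range(0, len(raw), 2)]
for name, arg in pairs[:4]:
    print(name, arg)
```

The printed opcodes differ across Python versions, which is exactly why dis.get_instructions is the stable interface for anything beyond a quick look.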
LOAD_ATTR versus LIST_APPEND. In version_a (append loop), the compiler emits LOAD_FAST to load the list, then LOAD_ATTR (LOAD_METHOD on 3.7 through 3.11) to look up .append, then calls it with the value. That attribute lookup runs on every iteration: it searches the type's __dict__ and applies the descriptor protocol to bind the method. In version_b (list comprehension), the compiler uses a dedicated LIST_APPEND opcode that CPython handles with a direct C-level append, with no attribute lookup and no descriptor protocol. This per-iteration saving is why comprehensions show a measurable throughput advantage over append loops, and the gap grows with the number of iterations.
co_consts and the docstring. A function's co_consts tuple holds the literal constants used in the function body, including None (the implicit return value), numeric and string literals, and the docstring. The compiler recognizes a leading string-literal statement, places it as the first element of co_consts, and emits no instructions for it; when the function object is created, CPython copies co_consts[0] into func.__doc__. (Module docstrings differ: there the compiler does emit a store into __doc__.) This is why inspect.getdoc() works without re-parsing source — the string is already stored on the function object as a compiled artifact.