You guessed first, *rest = items at the end of yesterday's lesson. Where did that come from?
Stack Overflow, two months ago. Someone was splitting a product list — grab the first SKU and handle the rest differently. I copied the pattern and moved on without understanding what * was actually doing.
Today you understand it. Unpacking is already part of your workflow — every time you wrote something like name, sku, price = row in Track 1, that was unpacking. What we're doing today is making it intentional, and extending it to the starred version that absorbs any number of elements.
So it's not new — I was doing it without noticing it was a named feature. Same thing as discovering I'd been making tuples every time I wrote return x, y.
Exactly the same. The basic form: Python assigns values from the right side to variable names on the left, position by position. One-to-one match. When the delivery truck pulls up with five boxes and you have five dock workers, each worker gets one box in the order they're standing:
product = ("Widget-A", "SKU-1001", 24.99, 150, "Hardware")
name, sku, price, stock, category = product
print(name) # Widget-A
print(category)  # Hardware
Instead of product[0], product[1], product[2] scattered everywhere. The variable names document the intent.
That's the real value. When I review code that says product[2], I have to count back to the tuple definition to know it's the price. When I see _, _, price, _, _ = product, I know instantly — you wanted the price and nothing else.
The underscores — that's the "I don't care about this" convention? I've seen it but assumed it was a naming preference.
It's a strong Python convention. You can technically use _ as a real variable, but every experienced developer reads it as "throwaway." Using it communicates intent to the reader and to linters.
product = ("Widget-A", "SKU-1001", 24.99, 150, "Hardware")
name, _, price, _, category = product # only care about name, price, category
print(f"{name} ({category}): ${price:.2f}")
# Widget-A (Hardware): $24.99
What happens when the count doesn't match? Three variables, five elements in the tuple?
ValueError: too many values to unpack (expected 3). Python checks the count before assigning anything — it does not silently drop extras. This is strict by design. If you want to capture the first two and ignore everything else, you have to say so explicitly:
product = ("Widget-A", "SKU-1001", 24.99, 150, "Hardware")
name, sku, *_ = product  # name and sku captured, rest discarded
The *_ absorbs everything that doesn't have a named variable and throws it away? So that's the "take the rest" worker who doesn't care what's in the boxes?
Exactly. And the starred variable doesn't have to be at the end. You saw the first, *middle, last form at the end of yesterday's session — the star can sit in any position, absorbing everything between its neighbors:
skus = ["SKU-1001", "SKU-2042", "SKU-3300", "SKU-4010", "SKU-5005"]
first, *batch, last = skus
print(first) # SKU-1001
print(batch) # ['SKU-2042', 'SKU-3300', 'SKU-4010']
print(last)  # SKU-5005
And batch is always a list — even if the source was a tuple?
Always a list. The starred result is always a list regardless of what the source sequence was. One starred variable per unpacking expression — you can't have two.
Right, two star workers fighting over who gets the middle boxes. That would be chaos.
Syntactically illegal chaos. Python raises a SyntaxError at parse time if you try it.
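Because the failure happens at parse time, a plain try around the assignment cannot catch it: the file fails to compile before any line runs. Passing the statement to compile() as a string makes the error observable; a small sketch (the exact error message varies by Python version):

```python
# Two starred targets in one unpacking target list is illegal.
# compile() parses the source string, so the SyntaxError is catchable here.
try:
    compile("a, *b, *c = [1, 2, 3, 4]", "<demo>", "exec")
except SyntaxError as exc:
    print(type(exc).__name__)  # SyntaxError
```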
Okay. Let me tie this back to the actual function today. split_product takes a five-field product tuple and returns a dict. I unpack all five fields and build the dict from named variables.
Try it before I show you. You know the shape — (name, sku, price, stock, category).
Straightforward.
def split_product(product: tuple) -> dict:
    name, sku, price, stock, category = product
    return {
        "name": name,
        "sku": sku,
        "price": price,
        "stock": stock,
        "category": category,
    }
That's it. No magic numbers, no counting indices. The variable names at the unpacking line are the documentation for what each position means.
Is there a way to do this in one shot without the intermediate variables? I'm thinking there might be a zip-based version.
There is, and you guessed right:
def split_product(product: tuple) -> dict:
    keys = ("name", "sku", "price", "stock", "category")
    return dict(zip(keys, product))
zip() pairs each key with the corresponding value positionally, and dict() builds from those pairs. But for a five-field tuple with named fields, the explicit unpack version is clearer — the reader sees every field name without hunting for the keys tuple. Use zip when the field count is large or variable.
So the rule is: if the names are obvious and the count is small, unpack explicitly. If you're converting large structured records with a known schema, zip scales better.
That's the practical heuristic. One last edge case I want you to see before tomorrow. What does this produce:
product = ("Widget-A", "SKU-1001", 24.99)
a, b, c, d = product
ValueError: not enough values to unpack (expected 4, got 3). Four variables, three elements — Python raises before assigning anything.
Right. And that's the other direction of the count-mismatch error — the count can run over or under. Python is strict in both directions. No guessing, no truncating, no padding with None.
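Both directions can be demonstrated in one place. The messages shown in the comments are what current CPython prints, and nothing is assigned before the error:

```python
values = ("Widget-A", "SKU-1001", 24.99)

try:
    a, b = values              # 2 targets, 3 elements: too many values
except ValueError as exc:
    print(exc)                 # too many values to unpack (expected 2)

try:
    a, b, c, d = values        # 4 targets, 3 elements: not enough values
except ValueError as exc:
    print(exc)                 # not enough values to unpack (expected 4, got 3)
```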
Good. I'd rather have a loud error than a silent None I don't notice until the report runs.
Tomorrow we meet del and the is versus == distinction — two sharp edges around how Python manages names and identities. After what you've built this week with tuples, sets, and unpacking, those will feel small but they're the kind of detail that explains confusing bugs. Including at least one I watched you spend a morning on in Track 1.
Was it the date comparison thing? Where I checked if result is None but the result was an empty list and I thought it was None?
Not that one. Tomorrow you'll recognize it.
Tuple unpacking assigns values from a sequence to multiple variables in a single statement, positionally. The variable count on the left must match the element count on the right unless a starred variable is used.
The _ convention marks discarded variables. It is not enforced by Python but is universally read as "I am deliberately ignoring this value."
The starred variable (*name) absorbs any number of remaining elements into a list. It can appear anywhere in the unpacking expression — start, middle, or end — and there can be only one per expression.
# Basic unpacking
name, sku, price = ("Widget-A", "SKU-1001", 24.99)
# Discard specific fields
name, _, price, _, _ = ("Widget-A", "SKU-1001", 24.99, 150, "Hardware")
# Starred: capture first and last, collect rest
first_sku, *batch, last_sku = ["S001", "S002", "S003", "S004", "S005"]
# batch = ['S002', 'S003', 'S004']
# Starred at end: discard everything after the first two
name, sku, *_ = ("Widget-A", "SKU-1001", 24.99, 150, "Hardware")
Pitfall 1: Count mismatch raises ValueError in both directions. Too many values (a, b = (1, 2, 3)) and too few values (a, b, c = (1, 2)) both raise ValueError. Python checks before assigning — no partial results.
Pitfall 2: Starred result is always a list, not the source type. Unpacking a tuple with *rest gives rest as a list, not a tuple. The star syntax always collects into a list.
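A short check makes this pitfall concrete; convert back with tuple() if the source type matters:

```python
data = ("a", "b", "c", "d")    # source is a tuple
first, *rest = data
print(type(rest))              # <class 'list'>, not tuple
print(rest)                    # ['b', 'c', 'd']
rest = tuple(rest)             # convert explicitly if a tuple is required
```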
Pitfall 3: _ is a real variable name. In interactive REPLs, _ holds the last expression result, and internationalized code conventionally binds _ to the gettext translation function. The convention is strong but not enforced — if your code genuinely uses _ as a variable, the reader will be confused.
Unpacking works with any iterable, not just tuples and lists — you can unpack strings, generator expressions, and dictionary views. Nested unpacking is also valid: (a, b), c = (1, 2), 3. This pattern appears in loops over list-of-tuples: for name, sku, price in products: unpacks each row on every iteration, which is cleaner than for row in products: name = row[0]; sku = row[1].
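The claims above, condensed into a runnable sketch (the two-row products list is invented for illustration):

```python
# Any iterable unpacks, not just tuples and lists.
x, y, z = "abc"                        # strings: x='a', y='b', z='c'
low, high = (n * n for n in (2, 3))    # generator expression: 4 and 9

# Nested unpacking mirrors the shape of the right-hand side.
(a, b), c = (1, 2), 3                  # a=1, b=2, c=3

# Loop unpacking: each row is unpacked on every iteration.
products = [("Widget-A", "SKU-1001", 24.99),
            ("Widget-B", "SKU-2042", 12.50)]
for name, sku, price in products:
    print(f"{sku}: {name} at ${price:.2f}")
```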