You have a million parsed log entries to post to an external API, but the API rate-limits at a thousand per request. How do you split the list into 1k-sized pieces without writing a for loop with an index counter?
I'd use `range` with a step argument. `range(0, len(items), size)` gives starting indices spaced `size` apart. Then slice.
Exactly. And wrap it in a list comprehension for the one-liner:
```python
[items[i:i+size] for i in range(0, len(items), size)]
```

Each `items[i:i+size]` gives one chunk. The last chunk may be smaller — Python slicing never raises `IndexError`; it just returns what's there.
That's it? I expected this to be harder.
The elegance is in `range` with a step. `range(0, 7, 3)` yields `0, 3, 6` — three starting indices for a list of 7 with chunk size 3. Slice each one: `[0:3]`, `[3:6]`, `[6:9]` (which tail-trims to `[6:7]`).
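That stepping behavior is easy to check at the REPL. A quick demonstration with a 7-element list and chunk size 3:

```python
items = ["a", "b", "c", "d", "e", "f", "g"]
size = 3

# range(0, 7, 3) yields the starting index of each chunk
starts = list(range(0, len(items), size))
print(starts)  # [0, 3, 6]

# Slicing past the end is safe: items[6:9] returns just items[6:7]
chunks = [items[i:i + size] for i in starts]
print(chunks)  # [['a', 'b', 'c'], ['d', 'e', 'f'], ['g']]
```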
What if items is empty? Does this blow up?
`range(0, 0, size)` is empty, so the comprehension produces no chunks — you get `[]`. Empty input handled for free, no guard clause needed:
```python
def chunk_list(items, size):
    return [items[i:i+size] for i in range(0, len(items), size)]
```

And this same shape works for any batched I/O — API calls, database inserts, file writes, parallel processing. Change `items`, change `size`, the pattern carries.
That's exactly why this is in the robust-tools week. Batching is the universal lever for rate limits, memory caps, and partial-failure recovery. One line now, applied everywhere later.
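A minimal sketch of that batched-I/O shape, assuming a hypothetical `post_batch` stand-in for a real API client:

```python
def chunk_list(items, size):
    """Split items into sub-lists of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def post_batch(batch):
    # Hypothetical stand-in for a real API call (e.g. an HTTP POST)
    print(f"posting {len(batch)} entries")

entries = list(range(2500))  # pretend these are parsed log entries
for batch in chunk_list(entries, 1000):
    post_batch(batch)  # three calls: 1000, 1000, then 500 entries
```

Swapping `post_batch` for a bulk-insert or file-write function changes the target without touching the chunking logic.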
## `range(start, stop, step)` for batching

**TL;DR:** `[items[i:i+size] for i in range(0, len(items), size)]` is the canonical Python chunker.

- `range(0, N, step)` — every `step`th index from 0
- `items[i:i+size]` tail-trims safely
- `range(0, 0, step) == []`

| Target | Call |
|---|---|
| API batch post | `for batch in chunk_list(rows, 1000): post(batch)` |
| DB bulk insert | same loop, swap `post` for `insert_many` |
| Rate-limit pause | `for b in ...: post(b); sleep(0.1)` |
Pair with `enumerate(..., start=1)` for progress tracking.
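A sketch of that progress-tracking pairing (the row counts here are illustrative):

```python
def chunk_list(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]

rows = list(range(250))
batches = chunk_list(rows, 100)
total = len(batches)

for n, batch in enumerate(batches, start=1):
    # start=1 gives human-friendly 1-based batch numbers
    print(f"batch {n}/{total}: {len(batch)} rows")
```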
Write `chunk_list(items, size)` that returns a list of sub-lists where each sub-list holds at most `size` items from `items`. The last chunk may be smaller. Empty `items` returns `[]`. Use a one-line list comprehension with `range` stepping by `size`.