A queue is the dual of a list — instead of all-at-once iteration, you take one item, process it, then take the next. The producer and consumer can be separate. The consumer can crash and restart. New items can arrive mid-flight.
```python
from collections import deque

queue = deque([
    {"id": "a", "status": "pending"},
    {"id": "b", "status": "pending"},
    {"id": "c", "status": "pending"},
])

final_state = []
while queue:
    item = queue.popleft()      # take from the front (FIFO)
    item["status"] = "done"     # mark as processed
    final_state.append(item)

print(final_state)
```

Expected: each item moves from pending to done, in arrival order.
Why a deque instead of a list?
list.pop(0) works but is O(n) — every pop shifts every remaining element. deque.popleft() is O(1). For three items it doesn't matter; for thousands it's the difference between fast and slow.
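To feel the difference, here is a rough timing sketch (absolute numbers depend on your machine; the ratio is what matters):

```python
from collections import deque
from timeit import timeit

n = 50_000

def drain_list():
    xs = list(range(n))
    while xs:
        xs.pop(0)        # O(n) per pop: shifts every remaining element

def drain_deque():
    xs = deque(range(n))
    while xs:
        xs.popleft()     # O(1) per pop

t_list = timeit(drain_list, number=1)
t_deque = timeit(drain_deque, number=1)
print(f"list : {t_list:.4f}s")
print(f"deque: {t_deque:.4f}s")
```

Draining the list is quadratic overall; draining the deque is linear, so the gap widens as n grows.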
And in production, where would the queue actually live?
Three common shapes:

- a managed queue service (SQS, Redis Streams) — visibility timeouts and multiple consumers built in
- pending rows in Postgres with a status column — simpler, one consumer, easy to query
- rows in a Google Sheet with a status column — lowest friction, fine for small automations

The in-memory deque is the shape. Different stores swap in cleanly because the API is the same: take front, process, mark done.
```python
while queue:
    item = take_from_front(queue)
    try:
        process(item)
        mark_done(item)
    except Exception:
        mark_failed(item)
        # decide: retry? send to dead-letter? skip?
```

Four conceptual operations: take_from_front, process, mark_done, mark_failed. Different queue stores implement these differently; the structure is universal.
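Those four operations can be filled in against the in-memory deque. A sketch — the simulated failure and the done/failed lists are mine, for illustration:

```python
from collections import deque

queue = deque([{"id": "a"}, {"id": "b"}])
done, failed = [], []

def take_from_front(q):
    return q.popleft()

def process(item):
    if item["id"] == "b":          # simulate one item failing
        raise ValueError("boom")
    return item

def mark_done(item):
    item["status"] = "done"
    done.append(item)

def mark_failed(item):
    item["status"] = "failed"
    failed.append(item)            # here: just record it, no retry

while queue:
    item = take_from_front(queue)
    try:
        process(item)
        mark_done(item)
    except Exception:
        mark_failed(item)

print(done, failed)
```

Swapping the deque for a database table means reimplementing these four functions; the loop stays identical.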
First-in-first-out is the natural fit for most event streams — events arrive in order; you want to handle them in arrival order. LIFO (stack) is occasionally useful (most-recent-first for user-facing operations) but rarely the right shape for unattended processing.
```python
from collections import deque

q = deque()
q.append(item)       # add to right (back)
q.popleft()          # take from left (front) — FIFO

# alternate operations:
q.appendleft(item)   # add to front (rare)
q.pop()              # take from back — LIFO
```

deque is double-ended, but for queue-style use, stick to append + popleft.
```python
# iteration — fine when the input is fully known up front
for item in items:
    process(item)
```

```python
# queue — when items can be ADDED while processing
q = deque(initial_items)
while q:
    item = q.popleft()
    new_items = process(item)
    q.extend(new_items)   # processing produced more work — append it
```

The queue shape lets process add work mid-flight: dispatching events that beget more events, breadth-first traversal, retry queues. Iteration can't do this.
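Breadth-first traversal is the classic case where processing one item yields more items. A sketch over a toy graph (the adjacency list is made up for illustration):

```python
from collections import deque

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

def bfs(start):
    seen = {start}
    q = deque([start])
    order = []
    while q:
        node = q.popleft()
        order.append(node)
        for neighbor in graph[node]:   # processing produces new work
            if neighbor not in seen:
                seen.add(neighbor)
                q.append(neighbor)
    return order

print(bfs("a"))  # nodes in arrival (breadth-first) order
```

A plain for loop over graph's keys couldn't do this: the work list grows while you're consuming it.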
Production queue items typically move through states:
```
pending -> in_progress -> done
                       \-> failed -> [retry] or [dead-letter]
```
For today's lesson: only pending and done. failed and dead-letter come in week 4.
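One way to keep those transitions honest is an explicit transition table. A sketch — the states are the ones above; the `ALLOWED` table and `advance` helper are mine:

```python
ALLOWED = {
    "pending": {"in_progress"},
    "in_progress": {"done", "failed"},
    "failed": {"pending"},   # retry puts it back in the queue
    "done": set(),           # terminal
}

def advance(item, new_status):
    current = item["status"]
    if new_status not in ALLOWED[current]:
        raise ValueError(f"illegal transition: {current} -> {new_status}")
    item["status"] = new_status
    return item

item = {"id": "a", "status": "pending"}
advance(item, "in_progress")
advance(item, "done")
print(item["status"])  # done
```

Any attempt to move done back to pending raises instead of silently corrupting state.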
Real queues (SQS, Redis Streams) implement a visibility timeout: when you take an item, it's marked invisible for N seconds. If you don't mark_done within N, it becomes visible again — another consumer (or your retry) can pick it up.
This is the production solution to "consumer crashed mid-process". The Sheet equivalent: write in_progress with a timestamp; on startup, find rows older than N minutes and reset to pending.
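The same recovery idea, sketched in memory (rows as dicts; the field names and N_MINUTES are assumptions, not from any particular Sheets API):

```python
from datetime import datetime, timedelta, timezone

N_MINUTES = 10

def reset_stale(rows, now=None):
    """On startup: any in_progress row older than N minutes goes back to pending."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(minutes=N_MINUTES)
    for row in rows:
        if row["status"] == "in_progress" and row["claimed_at"] < cutoff:
            row["status"] = "pending"   # visible again, like a visibility timeout
    return rows

now = datetime.now(timezone.utc)
rows = [
    {"id": "a", "status": "in_progress", "claimed_at": now - timedelta(minutes=30)},
    {"id": "b", "status": "in_progress", "claimed_at": now},
]
reset_stale(rows, now=now)
print([r["status"] for r in rows])  # ['pending', 'in_progress']
```

Row "a" was claimed 30 minutes ago, so it's reset; row "b" was just claimed and is left alone.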
With a single consumer, popleft is safe; there is no contention. With multiple consumers on a shared store, you need an atomic claim (in Postgres: UPDATE ... WHERE id = (SELECT id FROM q WHERE status='pending' LIMIT 1 FOR UPDATE SKIP LOCKED)). Real queue services do this for you. For most automation projects, one consumer is plenty. Don't reach for distributed-queue tooling unless you've measured a need.