functools Advanced: singledispatch, cache, and Dispatch by Type
Replace type-checking chains with singledispatch: register different handlers for each type. Add cache for expensive calculations. Type-aware pipelines ready to compose.
You gave me code that makes me want to scream. A process_order() function with if/elif cascading down: if type(order).__name__ == 'StandardOrder', do this. elif type(order).__name__ == 'PriorityOrder', do that. elif type(order).__name__ == 'SubscriptionOrder', do something else. And every time someone adds a new order type, the chain gets longer. There is no pattern here. It is just... a wall of branching.
That is the wall you're about to demolish with singledispatch. Remember yesterday — we talked about itertools pipelines, lazy evaluation, feeding the output of one stage into the next? Today is the conductor's technique for routing those stages. singledispatch is cueing different sections of the orchestra based on what is playing.
You mean the function figures out which version to run based on the type of the first argument?
Exactly. You write a base implementation of process_order(). Then you register handlers for StandardOrder, PriorityOrder, SubscriptionOrder — each one with a single @process_order.register(OrderType) decorator. The function automatically routes to the right handler based on the type it receives. No if/elif chains. No type(x).__name__ checks. Clean dispatch.
So singledispatch is like... a switch statement that Python writes for me?
Better than a switch statement. A switch statement checks a single value. singledispatch looks at the type of the first argument and routes based on that type hierarchy. If you pass a PriorityOrder, it calls the registered PriorityOrder handler. If you pass something it doesn't recognize, it falls back to the base implementation.
This is exactly what yesterday's iterator pipeline needed — not one function handling all order types, but one entry point that delegates to specialized handlers.
That is exactly right. Now watch how it works.
You define the base function with @singledispatch:
from functools import singledispatch

@singledispatch
def process_order(order):
    """Default handler: raise an error for unknown types."""
    raise NotImplementedError(f'No handler for {type(order).__name__}')
Then you register a handler for StandardOrder:
@process_order.register(StandardOrder)
def _(order):
    """Route StandardOrder to this handler."""
    shipping_cost = calculate_shipping(order.weight)
    return {
        'order_id': order.order_id,
        'subtotal': order.amount,
        'shipping': shipping_cost,
        'expedited': False
    }
And another for PriorityOrder:
@process_order.register(PriorityOrder)
def _(order):
    """Route PriorityOrder to this handler."""
    shipping_cost = calculate_shipping(order.weight) * 1.5  # Premium shipping
    return {
        'order_id': order.order_id,
        'subtotal': order.amount,
        'shipping': shipping_cost,
        'expedited': True
    }
Now when you call process_order(order), Python looks at the type of order, matches it against the registered types, and calls the right handler.
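Put together, the pieces above run like this. This is a minimal, self-contained sketch: the order classes and calculate_shipping() are stand-ins I defined for illustration, since the lesson does not show their definitions.

```python
from dataclasses import dataclass
from functools import singledispatch

# Hypothetical stand-ins for the lesson's order classes and shipping helper.
@dataclass
class StandardOrder:
    order_id: str
    amount: float
    weight: float

@dataclass
class PriorityOrder:
    order_id: str
    amount: float
    weight: float

def calculate_shipping(weight):
    return weight * 5.0  # flat-rate stub instead of the real API call

@singledispatch
def process_order(order):
    """Default handler: raise an error for unknown types."""
    raise NotImplementedError(f'No handler for {type(order).__name__}')

@process_order.register(StandardOrder)
def _(order):
    return {'order_id': order.order_id, 'shipping': calculate_shipping(order.weight), 'expedited': False}

@process_order.register(PriorityOrder)
def _(order):
    return {'order_id': order.order_id, 'shipping': calculate_shipping(order.weight) * 1.5, 'expedited': True}

print(process_order(StandardOrder('A1', 20.0, 2.0)))  # expedited: False, shipping 10.0
print(process_order(PriorityOrder('B2', 35.0, 2.0)))  # expedited: True, shipping 15.0
```

The same call site, process_order(order), reaches different handlers purely because the argument's type differs.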
So the function name is always process_order, but the implementation changes based on the type of the first argument?
Yes. Each registered implementation is conventionally named _, because the name is never called directly: process_order.register stores the reference, so the throwaway name just signals that it does not matter. The singledispatch decorator replaces the original function with a dispatcher that routes to the right implementation.
What happens if I pass a type that is not registered?
It calls the base implementation — the one decorated with @singledispatch. If the base implementation raises NotImplementedError, the caller sees that error. If the base implementation has a real default behavior, that runs instead.
Can you register handlers for parent classes? Like, if StandardOrder and PriorityOrder both inherit from Order, can I register a handler for Order and have it apply to both?
Yes. If you register a handler for Order, any subclass without a more specific handler dispatches to it, so both StandardOrder and PriorityOrder would use the Order handler. Likewise, a handler registered for StandardOrder matches StandardOrder and its subclasses. If neither a type nor any of its ancestors is registered, dispatch falls back to the base function.
So if I have a class hierarchy StandardOrder → Order, and I register handlers for StandardOrder and Order, does StandardOrder use the StandardOrder handler or the Order handler?
The most specific match wins. StandardOrder instances use the StandardOrder handler; plain Order instances use the Order handler. The dispatcher checks for an exact type match first, then walks the MRO (Method Resolution Order) from the most specific class outward, looking for the nearest registered ancestor.
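You can see the MRO fallback directly. In this sketch (toy classes, not the lesson's real ones), RushOrder has no handler of its own, so it inherits the Order handler; StandardOrder has both available and gets the more specific one.

```python
from functools import singledispatch

class Order: ...
class StandardOrder(Order): ...
class RushOrder(Order): ...   # no handler registered for this subclass

@singledispatch
def describe(order):
    return 'unknown'          # base implementation: the last resort

@describe.register(Order)
def _(order):
    return 'generic order'    # catches any Order subclass without its own handler

@describe.register(StandardOrder)
def _(order):
    return 'standard order'   # exact match beats the Order handler

print(describe(StandardOrder()))  # 'standard order'
print(describe(RushOrder()))      # 'generic order' (falls back along the MRO)
print(describe(42))               # 'unknown' (nothing registered for int)
```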
That is cleaner than inheritance in regular classes. You are not locked into a hierarchy — you register handlers as you need them.
That is the whole point. You write the base function once, the handler stubs once, then you register new order types without touching the original code. You are not modifying the function — you are extending the dispatcher.
So if someone adds a new order type in six months, they just write a new @process_order.register(NewOrderType) handler and drop it in. The old code does not care.
Exactly. No merge conflicts on the massive if/elif chain. No refactoring the whole function. Just add a new handler, register it, and move on.
Okay, so singledispatch is about clean routing based on type. But you mentioned cache earlier. That is functools too, right?
cache is simpler than singledispatch but just as powerful. It memoizes function results. If you call a function with the same arguments, cache returns the cached result instead of running the function again.
Let me show you. calculate_shipping() is expensive — it calls an external API:
import requests
from functools import cache

@cache
def calculate_shipping(weight):
    """Call an expensive API to get shipping cost."""
    response = requests.get(f'https://shipping.example.com/cost?weight={weight}')
    return response.json()['cost']
First call: process_order(order1) → calls calculate_shipping(2.5) → API responds → cost = 12.50
Second call: process_order(order2) → calls calculate_shipping(2.5) → cache hit → returns 12.50 immediately
No second API call.
So cache just stores the result and returns it if the same arguments come in again?
Yes. Python hashes the arguments and looks them up in an internal dictionary. If the arguments are in the cache, you get the cached result. If not, the function runs and the result is cached.
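You can watch this happening with a call counter. A toy function, not the lesson's shipping API, and it assumes Python 3.9+ for functools.cache:

```python
from functools import cache

calls = 0

@cache
def slow_square(n):
    global calls
    calls += 1      # counts real executions, not cache hits
    return n * n

slow_square(4)      # runs the body
slow_square(4)      # same argument: cache hit, body skipped
slow_square(5)      # new argument: runs the body again
print(calls)        # 2
```

Two distinct argument values, so the body ran exactly twice no matter how many times it was called.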
What if the function is called with different arguments?
Each argument combination is cached separately. calculate_shipping(2.5) and calculate_shipping(5.0) are two different cache entries. Both are stored.
Does cache have a limit? Like, if you cache too many values, does it run out of memory?
That is the difference between cache and lru_cache. cache has no limit — it stores everything forever. lru_cache has a maximum size and evicts old entries when the limit is reached. You learned lru_cache in Intermediate Python. cache is simpler — no size limit, just pure memoization.
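The eviction behavior is easy to observe with a tiny lru_cache and its cache_info() counters. A toy example with an assumed maxsize of 2:

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def lookup(key):
    return key.upper()

lookup('a')   # miss: cached
lookup('b')   # miss: cache now full
lookup('c')   # miss: evicts 'a', the least recently used entry
lookup('a')   # miss again: 'a' was evicted, so it is recomputed
info = lookup.cache_info()
print(info.misses, info.currsize)  # 4 2
```

With functools.cache the fourth call would have been a hit, because nothing is ever evicted; the trade-off is unbounded memory.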
When would you use cache instead of lru_cache?
When you know the set of distinct arguments is small and bounded, cache is fine. When arguments can vary without bound, or only recently used values are worth keeping, use lru_cache with a size limit so memory stays bounded. And if every call arrives with unique arguments, caching buys you nothing at all: every lookup misses and the cache only grows, so skip it entirely.
For calculate_shipping(weight), you might only ever see 10 different weights. cache is fine. But if weight can be any float with infinite precision, lru_cache with a size limit is safer.
So I can put @cache on calculate_shipping(), and then in the singledispatch handlers, when I call calculate_shipping(order.weight), it automatically uses the cache?
That is exactly the pattern. singledispatch routes to the right handler, and inside each handler, @cache on the utility functions prevents redundant work. The conductor (singledispatch) routes the sections, and the musicians (cached functions) remember the passages they have played.
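Here is the combined pattern in one sketch. The classes and the "API" counter are hypothetical stand-ins; the point is that the second order, with the same weight, never triggers a second expensive call.

```python
from dataclasses import dataclass
from functools import cache, singledispatch

@dataclass
class StandardOrder:
    order_id: str
    weight: float

@dataclass
class PriorityOrder:
    order_id: str
    weight: float

api_calls = 0

@cache
def calculate_shipping(weight):
    global api_calls
    api_calls += 1            # stands in for the expensive API round trip
    return weight * 5.0

@singledispatch
def process_order(order):
    raise NotImplementedError(f'No handler for {type(order).__name__}')

@process_order.register(StandardOrder)
def _(order):
    return {'order_id': order.order_id, 'shipping': calculate_shipping(order.weight)}

@process_order.register(PriorityOrder)
def _(order):
    return {'order_id': order.order_id, 'shipping': calculate_shipping(order.weight) * 1.5}

process_order(StandardOrder('A1', 2.5))
process_order(PriorityOrder('B2', 2.5))   # same weight: cache hit, no second "API call"
print(api_calls)  # 1
```

Dispatch chose two different handlers, but the cached utility underneath did the expensive work only once.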
So tomorrow is the quiz for the week, and then Day 29 is the final lesson? What is Day 29?
Day 29 is where you put it all together. Context managers from Monday, iterators from Tuesday, itertools from Wednesday, singledispatch and cache from today — all of it. One final pipeline that combines every pattern you learned this week. You read that complex pull request at the start of week 4? Next week, you write one.
We are building the full pipeline?
You are building it. The one that processes orders from multiple sources, routes them based on type, applies validation with a context manager, caches the expensive computations, and yields the results lazily. The whole thing, woven together.
That is the thing I could not do when the week started. I was looking at a wall of nested abstractions. Now I can see how every piece fits.
That is fluency. Not just reading the pieces. Understanding how they compose. Today you learned singledispatch and cache. Tomorrow you learn one more concept, and then the final project ties them together. You have earned this.