Python functools: partial, lru_cache, and reduce
Pre-fill function arguments with functools.partial. Cache results with lru_cache decorator. Fold sequences with reduce for rare but critical transforms.
I built a cache decorator yesterday. It worked but felt brittle — I had to track maxsize myself, and I wasn't sure if it was thread-safe. This morning I was reading the standard library docs and I found functools.lru_cache. It's basically what I built, but better.
Yes. The stdlib had it all along. You needed to understand the concept first. Now that you've built one from scratch, you know what lru_cache actually does under the hood.
So I wasted a day reinventing the wheel?
You learned how the wheel works. That's worth more than using one you don't understand. Now let's look at lru_cache and two other functools you should know about.
What's in functools?
Three main patterns: partial — pre-fill function arguments. lru_cache — memoize function results. reduce — fold a sequence into a single value. Less common than the first two, but important to recognize when you see it.
Let me start with lru_cache since I already know what it does. What does it look like?
Simple:
from functools import lru_cache
@lru_cache(maxsize=128)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
The decorator does the caching. Every time you call fibonacci(5), it checks the cache first — no recalculation. Once the cache holds 128 entries, adding a new one evicts the least recently used entry (that's the "LRU" in the name).
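To make the eviction policy visible, here's a tiny sketch — a hypothetical square function (not from the lesson) with maxsize=2, small enough to force evictions:

```python
from functools import lru_cache

calls = []

@lru_cache(maxsize=2)
def square(n):
    calls.append(n)  # record actual computations, not cache hits
    return n * n

square(1)  # miss: computed
square(2)  # miss: computed
square(1)  # hit: served from cache
square(3)  # miss: cache full, evicts 2 (least recently used)
square(2)  # miss again: 2 was evicted, so it's recomputed

print(calls)  # [1, 2, 3, 2]
```

Note that 2 gets evicted rather than 1, even though 1 was cached first — the hit on square(1) made it the more recently used entry.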
That's literally what my decorator did. So lru_cache handles: argument hashing, cache lookup, cache eviction, and thread safety.
Your decorator handled some of that. lru_cache handles all of it. And it gives you methods to inspect the cache:
fibonacci(10)  # recursion fills the cache for n = 0 through 10
fibonacci(5)   # cache hit
fibonacci(10)  # cache hit
info = fibonacci.cache_info()
print(info)  # CacheInfo(hits=10, misses=11, maxsize=128, currsize=11) — recursive calls count too
fibonacci.cache_clear() # Clear everything
So every cached function gets cache_info() and cache_clear() methods added to it by the decorator?
Exactly. The decorator wraps your function and adds those methods. Your original function is still there — lru_cache just stands in front.
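You can see that wrapping directly. A small sketch with a hypothetical double function: lru_cache copies the original function's metadata onto the wrapper and exposes the uncached original as __wrapped__:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def double(n):
    return n * 2

# The wrapper keeps the original function's metadata...
print(double.__name__)        # double
# ...and exposes the uncached original via __wrapped__
print(double.__wrapped__(5))  # 10 — bypasses the cache entirely
print(double(5))              # 10 — goes through the cache
```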
Okay so that's basically my decorator but bulletproof. What about partial?
Partial is different. It pre-fills function arguments. Imagine you have a function that takes multiple parameters:
def apply_discount(price, discount_pct):
    return price * (1 - discount_pct / 100)
apply_discount(100, 10) # 10% off $100 = $90
Now you want a function that always applies a 10% discount, but you don't want to type 10 every time:
from functools import partial
apply_10_pct = partial(apply_discount, discount_pct=10)
apply_10_pct(100) # Still $90, but shorter to write
So partial creates a new function with some arguments already filled in. You call the new function with the remaining arguments.
Right. It's a factory without writing a factory function. Instead of:
def apply_10_pct(price):
    return apply_discount(price, discount_pct=10)
You write:
apply_10_pct = partial(apply_discount, discount_pct=10)
Same outcome, one line.
What if you need multiple discount rates?
Make one partial per rate:
apply_10_pct = partial(apply_discount, discount_pct=10)
apply_20_pct = partial(apply_discount, discount_pct=20)
prices = [100, 200, 150]
result = [apply_10_pct(p) for p in prices] # All get 10% off
Partial is useful when you pass functions as arguments — like to map or filter:
prices = [100, 200, 150]
discounted = list(map(apply_10_pct, prices))
# Without partial, you'd need a lambda:
# discounted = list(map(lambda p: apply_discount(p, 10), prices))
So partial is "make a new function with some arguments locked in so I don't have to write a lambda."
That's exactly what it is. Way less noise than a lambda, and it's reusable.
What about reduce? That sounds like a database thing.
Reduce is a fold operation. It takes a function and a sequence, and it combines all elements into a single value using that function repeatedly.
from functools import reduce
from operator import add
numbers = [1, 2, 3, 4, 5]
result = reduce(add, numbers)
print(result) # 15 — (((1 + 2) + 3) + 4) + 5
Why would I use reduce instead of sum()? sum does the same thing and is clearer.
For addition, yes. sum() is the right tool. But reduce works with any function. Imagine you need to combine dictionaries:
from functools import reduce
dicts = [{"a": 1}, {"b": 2}, {"c": 3}]
merged = reduce(lambda d1, d2: {**d1, **d2}, dicts)
print(merged)  # {'a': 1, 'b': 2, 'c': 3}
You can't sum() dicts. But you can reduce them.
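Another thing sum() can't do: multiply everything together. Since Python 3.8, math.prod covers this particular case, but it shows the shape:

```python
from functools import reduce
from operator import mul

numbers = [1, 2, 3, 4, 5]
product = reduce(mul, numbers)  # same fold, different binary operation
print(product)  # 120 — (((1 * 2) * 3) * 4) * 5
```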
That's... pretty niche.
It is. Most of the time you want sum(), max(), min(), or a plain loop. But when you need to fold a sequence with a custom two-argument function, reduce is the tool. And more importantly, when you see reduce in code, you need to know what it does — it's combining all elements into one.
How does reduce actually work? What's the sequence?
It takes the first two elements, applies the function, and gets a result. That result becomes the first argument for the next call, paired with the next element:
reduce(add, [1, 2, 3, 4])
# Step 1: add(1, 2) = 3
# Step 2: add(3, 3) = 6
# Step 3: add(6, 4) = 10
You can also provide an initial value:
reduce(add, [1, 2, 3, 4], 10)
# Step 1: add(10, 1) = 11
# Step 2: add(11, 2) = 13
# Step 3: add(13, 3) = 16
# Step 4: add(16, 4) = 20
So the function always takes two arguments — the accumulator and the current element?
Yes. And it returns the new accumulator for the next iteration. By the end, you have one value — the final accumulator.
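That loop can be sketched by hand. This simplified my_reduce is not the real C implementation (for one, it can't distinguish "no initial value" from an initial value of None), but it shows the accumulator mechanics:

```python
from operator import add

def my_reduce(func, sequence, initial=None):
    """Simplified sketch of reduce's accumulator loop."""
    it = iter(sequence)
    acc = next(it) if initial is None else initial  # seed the accumulator
    for item in it:
        acc = func(acc, item)  # accumulator in, new accumulator out
    return acc

print(my_reduce(add, [1, 2, 3, 4]))      # 10
print(my_reduce(add, [1, 2, 3, 4], 10))  # 20
```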
Okay so functools.partial for argument pre-filling, lru_cache for memoization that I already understand, and reduce for rare folding operations. That's three tools. Why are they all in one module?
Because they're all higher-order function utilities — they take functions as arguments or return functions. The module is a grab-bag of functional programming tools. Most of the time you only need partial and lru_cache.
I can use partial to create specialized versions of discount functions instead of writing lambdas or wrapper functions. And lru_cache gives me memoization without building it myself.
Exactly. You're thinking like someone who reads the stdlib and asks "what tools are available before I build my own."
One thing though — with partial, I'm creating a new function reference. Does the original function still work the same way?
Yes. Partial creates a wrapper around the original. The original is untouched. Both exist:
from functools import partial
def greet(greeting, name):
    return f"{greeting}, {name}!"
hello = partial(greet, "Hello")
bye = partial(greet, "Goodbye")
print(greet("Hi", "Priya"))  # "Hi, Priya!" — original still works
print(hello("Priya"))        # "Hello, Priya!" — greeting pre-filled positionally
print(bye("Priya"))          # "Goodbye, Priya!" — another partial
# Careful: partial(greet, greeting="Hello") would break here — calling
# hello("Priya") would pass "Priya" positionally to greeting too, a TypeError.
So I can create multiple specializations of the same function and use whichever one fits the context.
That's the whole idea. And the original function signature is still there — it hasn't changed, you've just created convenience versions.
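A partial object also remembers what it wraps and what it pre-filled, which helps when debugging. A quick sketch reusing the discount example:

```python
from functools import partial

def apply_discount(price, discount_pct):
    return price * (1 - discount_pct / 100)

apply_10_pct = partial(apply_discount, discount_pct=10)

# A partial exposes its target and its locked-in arguments
print(apply_10_pct.func is apply_discount)  # True
print(apply_10_pct.args)                    # ()
print(apply_10_pct.keywords)                # {'discount_pct': 10}
```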
All right, let me think about the challenge. I need to build a function that creates tax calculators using partial. That's the pattern.
Create a function that returns a partial function. When you call it with a tax rate, you get back a function that calculates price plus tax for that rate.
So if I call create_tax_calculator(0.10), I get back a function that adds 10% tax to any price?
That's exactly what you'll build. When the tests call tax_10(100), they're calling that partial function with just the price. The tax rate is already locked in.
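As a sketch of the shape — the helper name _price_with_tax is made up here, the challenge's actual starter code may differ, and it assumes "price plus tax" means price * (1 + rate):

```python
from functools import partial

def _price_with_tax(price, rate):
    return price * (1 + rate)

def create_tax_calculator(rate):
    # Lock in the rate; the returned partial takes just the price
    return partial(_price_with_tax, rate=rate)

tax_10 = create_tax_calculator(0.10)
print(round(tax_10(100), 2))  # 110.0
```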
Okay I see it. partial is perfect for this.
One more thing before you start: notice that functools lets you compose these tools. You could have a function decorated with lru_cache that internally uses partial functions. Or use reduce to combine results from cached functions. They work together.
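A quick sketch of that composition with a hypothetical power function: the partials are thin fronts, so they all share the one cache on power:

```python
from functools import lru_cache, partial

@lru_cache(maxsize=None)
def power(base, exponent):
    return base ** exponent

# Both partials route through the same cached function
square = partial(power, exponent=2)
cube = partial(power, exponent=3)

print(square(4))  # 16 — miss, computed and cached
print(cube(4))    # 64 — miss
print(square(4))  # 16 — hit: same (4, exponent=2) key as before
print(power.cache_info().hits)  # 1
```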
So lru_cache for "remember answers you've already given," partial for "pre-fill arguments," reduce for "combine many values into one."
And knowing when not to use them. lru_cache makes sense for expensive computations. partial makes sense when you're repeatedly using the same subset of arguments. reduce makes sense when you're actually folding a sequence — otherwise use sum/max/min.
I went from building my own cache decorator yesterday to understanding three functools patterns today. This is how the stdlib makes sense.
And tomorrow: we're going deeper. You've been writing for loops all month. Tomorrow you'll learn what a for loop actually is under the hood — and why understanding it changes how you write Python.