A loop that looks up the same `tasklist_id` for every item:

```python
for item in items:  # 100 items
    lists = toolset.execute_action(Action.GOOGLETASKS_LIST_TASK_LISTS, {})  # 100 calls
    tasklist_id = lists["items"][0]["id"]
    create_task(tasklist_id, item)
```

The first line of the loop body hits the same API 100 times for the same answer. Cache it.
```python
_cache = {}

def get_tasklist_id():
    if "tasklist_id" not in _cache:
        lists = toolset.execute_action(Action.GOOGLETASKS_LIST_TASK_LISTS, {})
        # distinct name so it doesn't shadow the work items iterated below
        task_lists = lists.get("items") or lists.get("response_data", {}).get("items", [])
        _cache["tasklist_id"] = task_lists[0]["id"]
    return _cache["tasklist_id"]

for item in items:  # 100 items
    tasklist_id = get_tasklist_id()  # 1 API call total
    create_task(tasklist_id, item)
```

First call: 1 API hit. Calls 2-100: dict lookups. That's 99 quota slots saved and roughly 99 network round-trips of latency avoided.
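When the repeated call sits inside a single loop, there's an even simpler fix with no cache at all: hoist the lookup above the loop. A minimal sketch of the same example:

```python
# Fetch once, before iterating: same effect as the cache for this scope.
lists = toolset.execute_action(Action.GOOGLETASKS_LIST_TASK_LISTS, {})
tasklist_id = lists["items"][0]["id"]

for item in items:  # 100 items, 1 API call total
    create_task(tasklist_id, item)
```

The cache version earns its keep when the lookup is buried inside helpers called from several places, where hoisting isn't an option.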
When is caching safe? What if the data changes mid-run?
Cache only invariants for the duration of one run — things you'd be surprised to see change (the list below gives examples).
Don't cache rapidly-changing data — unread email count, recent messages, current tasks. The whole point of reading those is to get fresh values.
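Side by side, assuming the `get_tasklist_id` helper from above and using `Action.X` as a stand-in for whatever read action your script calls:

```python
# Invariant for this run -> cache it.
tasklist_id = get_tasklist_id()

# Fresh data -> hit the API every time; caching would defeat the purpose.
unread = toolset.execute_action(Action.X, {"query": "is:unread"})
```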
Just a dict — no expiry, no eviction?
For one run: yes, just a dict. The script terminates; the cache vanishes with it. For long-lived processes you'd add TTLs, but scripts like these are short-lived, so a dict is enough.
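If you do end up in a long-lived process, the smallest useful upgrade is a per-key TTL. A sketch, not something these short scripts need:

```python
import time

_ttl_cache = {}  # key -> (expires_at, value)

def cached_call_ttl(key, fetch_fn, ttl_seconds=300):
    now = time.monotonic()
    entry = _ttl_cache.get(key)
    if entry is None or entry[0] < now:  # missing or expired: refetch
        _ttl_cache[key] = (now + ttl_seconds, fetch_fn())
    return _ttl_cache[key][1]
```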
Back in the single-run world, the generic version of the pattern:

```python
_cache = {}

def cached_call(key, fetch_fn):
    if key not in _cache:
        _cache[key] = fetch_fn()
    return _cache[key]
```

Four lines. `key` identifies the value (`"tasklist_id"`, `f"emails:{query}"`); `fetch_fn` is the actual API call.
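Usage at a call site, with the task-list lookup from earlier:

```python
# The fetch is a zero-argument callable, so it only runs on a cache miss.
tasklist_id = cached_call(
    "tasklist_id",
    lambda: toolset.execute_action(Action.GOOGLETASKS_LIST_TASK_LISTS, {})["items"][0]["id"],
)
```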
Invariant for the run — and the shorter the script, the more things qualify:

- `tasklist_id` — won't change in 5 minutes
- `spreadsheet_id` from SEARCH_SPREADSHEETS — same
- the result of LIST_CALENDARS — same
- `os.environ.get("USER_EMAIL")` — same

The task-list lookup again, this time hardened:

```python
def get_tasklist_id():
    if "tasklist_id" not in _cache:
        result = toolset.execute_action(Action.GOOGLETASKS_LIST_TASK_LISTS, {})
        task_lists = result.get("items") or result.get("response_data", {}).get("items", [])
        if not task_lists:
            raise RuntimeError("no task lists found")
        _cache["tasklist_id"] = task_lists[0]["id"]
    return _cache["tasklist_id"]
```

Validate before storing. A cache that holds None because the API returned an empty list breaks every subsequent call site silently.
For cached results that depend on arguments:

```python
def cached_search(query):
    key = f"search:{query}"
    if key not in _cache:
        _cache[key] = toolset.execute_action(Action.X, {"query": query})
    return _cache[key]

cached_search("is:unread")   # API call
cached_search("is:starred")  # API call (different key)
cached_search("is:unread")   # cache hit
```

Key schema: `"verb:argv"` — readable for debugging, unique enough for reasonable inputs.
For scripts where every quota slot matters:

```python
def cached_call(key, fetch_fn):
    if key in _cache:
        log("cache_hit", key=key)  # log() is whatever structured logger the script already uses
        return _cache[key]
    log("cache_miss", key=key)
    _cache[key] = fetch_fn()
    return _cache[key]
```

One line per call site shows you the hit rate. If you're seeing cache_miss more often than expected, your key isn't normalized correctly (e.g., trailing whitespace, case).
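Normalizing inside the key builder closes that gap. A minimal sketch — whether lowercasing is safe depends on what the cached query syntax treats as case-sensitive:

```python
def make_key(verb, arg):
    # strip() and casefold() so "is:unread " and "IS:UNREAD" share one entry
    return f"{verb}:{arg.strip().casefold()}"

make_key("search", "is:unread ")  # -> "search:is:unread"
make_key("search", "IS:UNREAD")   # -> "search:is:unread"
```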