Tool A returns this:

```json
[{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}, {"id": 3, "name": "gamma"}]
```

Tool B's create call wants this:

```json
{"items": ["alpha", "beta", "gamma"], "count": 3}
```

The data is the same. The shape is different. Step 2 of the chain is the reshape.
```python
rows = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}, {"id": 3, "name": "gamma"}]
names = [r["name"] for r in rows]
payload = {"items": names, "count": len(names)}
```

A list comprehension extracts the one field, then a dict literal builds the target shape.
Pure Python — no APIs at all in the middle?
Right. The reshape is just code you'd write in Python Foundations: comprehensions, dict-building, `len()`. The skill is recognizing that step 2 is the reshape, not another API call. New programmers reach for an API when they should reach for `[r["name"] for r in rows]`.
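To make the three steps concrete, here is a minimal sketch of a full chain. `fetch_rows` and `create_items` are hypothetical stand-ins for the two real tool calls; only the middle step is the point:

```python
# Hypothetical stand-ins for the two tool calls; a real chain hits APIs here.
def fetch_rows():
    # Step 1: Tool A's output shape (list of dicts)
    return [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}, {"id": 3, "name": "gamma"}]

def create_items(payload):
    # Step 3: Tool B's input shape (dict with "items" and "count")
    return f"created {payload['count']} items"

# Step 2: the reshape -- pure Python, no API call in the middle
rows = fetch_rows()
names = [r["name"] for r in rows]
payload = {"items": names, "count": len(names)}

print(create_items(payload))  # created 3 items
```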
Why is this a separate lesson?
Because the reflex of asking "what shape does B want, and how do I get there from A's shape?" is the most common reason chains break. Yesterday's chain was simple: extract one field. Real chains reshape lists into dicts, dicts into lists, group, filter, project, and rename. Today covers just the simplest of those transforms, but the pattern is the same.
Different tools were designed by different teams over different decades. Their input/output shapes don't agree:

- `[{id, subject, from, body}, ...]` (list of dicts)
- `[[col1, col2, col3], [col1, col2, col3]]` (list of lists)
- `{summary, start_datetime, ...}` (a single dict per call)
- `{tasklist_id, title}` (one at a time)

The reshape is unavoidable. The good news: it's pure Python.
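The reverse direction comes up too: when a tool hands back a list of lists (a header row plus data rows, Sheets-style), zipping each row against the header rebuilds a list of dicts. A sketch, with made-up column names:

```python
# Hypothetical Sheets-style response: header row first, then data rows
sheet = [
    ["id", "name", "status"],
    [1, "alpha", "active"],
    [2, "beta", "archived"],
]

# Split off the header, then pair column names with each row's values
header, *data = sheet
records = [dict(zip(header, row)) for row in data]
# [{'id': 1, 'name': 'alpha', 'status': 'active'},
#  {'id': 2, 'name': 'beta', 'status': 'archived'}]
```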
1. Extract a field (list → list)

```python
names = [r["name"] for r in rows]
# ["alpha", "beta", "gamma"]
```

2. Project to a smaller shape (list of dicts → list of dicts)

```python
projection = [{"name": r["name"], "id": r["id"]} for r in rows]
```

3. List of dicts → list of lists (for Sheets append)

```python
rows_for_sheet = [[r["id"], r["name"]] for r in rows]
```

4. Aggregate (list → single dict)

```python
payload = {"items": [r["name"] for r in rows], "count": len(rows)}
```

`.get` on every field

Real API responses sometimes drop fields. Reshape defensively:
```python
names = [r.get("name", "(unnamed)") for r in rows]
```

One missing `name` field crashes the comprehension if you wrote `r["name"]`. With `.get(...)`, the script keeps working.
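To see the difference, run both versions against a row set where one dict is missing `name` (the data here is made up):

```python
rows = [{"id": 1, "name": "alpha"}, {"id": 2}, {"id": 3, "name": "gamma"}]

# r["name"] raises KeyError on the second row
try:
    names = [r["name"] for r in rows]
except KeyError as e:
    print("crashed on missing field:", e)  # crashed on missing field: 'name'

# r.get(...) substitutes a placeholder and keeps going
names = [r.get("name", "(unnamed)") for r in rows]
print(names)  # ['alpha', '(unnamed)', 'gamma']
```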
Don't be surprised if step 2 of a chain is 10 lines of Python and step 1 + step 3 are 2 lines each. The reshape is where business logic actually lives — "only items with status=active", "group by category", "normalize the date format". The tool calls are just I/O.
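A sketch of what that business logic can look like in practice: filter to active items, then group names by category. The field names and data here are assumptions for illustration, not from any real API:

```python
from collections import defaultdict

rows = [
    {"name": "alpha", "status": "active", "category": "tools"},
    {"name": "beta", "status": "archived", "category": "tools"},
    {"name": "gamma", "status": "active", "category": "docs"},
]

# Filter: keep only active items
active = [r for r in rows if r["status"] == "active"]

# Group: category -> list of names
by_category = defaultdict(list)
for r in active:
    by_category[r["category"]].append(r["name"])

print(dict(by_category))  # {'tools': ['alpha'], 'docs': ['gamma']}
```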
A quick assertion catches reshape bugs before they hit the API:
```python
payload = {"items": names, "count": len(names)}
assert isinstance(payload["items"], list)
assert isinstance(payload["count"], int)
```

It fails locally with a clear message instead of producing a confusing 400 from the next API.
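For example, a classic reshape bug is forgetting the comprehension and passing the raw dicts straight through. Tightening the check to element types catches it immediately (sample data assumed):

```python
rows = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]

# Buggy reshape: forgot to extract the names, items is still a list of dicts
payload = {"items": rows, "count": len(rows)}

try:
    assert all(isinstance(item, str) for item in payload["items"]), \
        "items must be a list of strings, got dicts"
except AssertionError as e:
    print(e)  # items must be a list of strings, got dicts
```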