Five lessons, one primitive, five shapes of output — an int, a str, a list of strs, a list of formatted strs, and a summary dict. How does the pattern feel at this point?
The skeleton is the same every time. Call search(), then use len(), indexing, a comprehension, or a dict literal. Once you see that, every new function is a variation on the same four moves.
That recognition is Week 1's payoff. The skeleton is now in your head, and the quiz confirms the corners are solid before Week 2 wires an agent onto it.
The part I want to double-check is count versus len(results). I almost returned count directly on Day 3 before catching myself — is that really a real-world difference or a pedagogical trick?
A real difference. Niche queries return fewer hits than you asked for; popular ones return exactly what you asked for. Reporting count instead of len(results) is the difference between "what I requested" and "what exists" — and dashboards built on the wrong one quietly mislead readers.
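The count-versus-len distinction can be sketched in a few lines. The `search()` stub below is hypothetical, standing in for the course's sync primitive; its behavior (a niche query returning fewer hits than requested) is the scenario described above.

```python
# Sketch of "what I requested" vs "what exists". `search` is a stub here,
# not the real primitive; it mimics a niche query returning fewer hits.
def search(query, count=10):
    hits = 3 if query == "obscure topic" else count  # niche query falls short
    return [
        {"title": f"t{i}", "url": f"https://example.com/{i}", "snippet": f"s{i}"}
        for i in range(hits)
    ]

def search_count_results(query, count=10):
    results = search(query, count=count)
    return len(results)  # report what exists, never the requested count

requested = 10
found = search_count_results("obscure topic", count=requested)
```

Here `requested` stays 10 while `found` is 3; returning `count` directly would have hidden the shortfall.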
Five search utility functions built this week:
- search_count_results — len(results) from a single call
- search_first_snippet — results[0]['snippet'] for the top hit
- extract_all_urls — list comprehension over url keys
- format_search_results — f-string inside a comprehension
- search_summary_dict — dict combining count, top URL, titles

Key patterns: the search(query, count=n) primitive is sync (no await), each result is a dict with title/url/snippet, indexing + comprehensions are the main reshaping moves, and one API call should feed every derived value.
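The five utilities above can be sketched together against a stubbed primitive. The `search()` stub is an assumption (the real primitive comes from the course); it returns the title/url/snippet dicts the lessons describe, and `search_summary_dict` shows the one-call-feeds-everything pattern.

```python
# Minimal sketch of the five Week-1 utilities. `search` is a stub standing
# in for the course's sync primitive; each result is a title/url/snippet dict.
def search(query, count=5):
    return [
        {"title": f"Result {i}", "url": f"https://example.com/{i}",
         "snippet": f"Snippet {i} for {query}"}
        for i in range(count)
    ]

def search_count_results(query, count=5):
    return len(search(query, count=count))          # int

def search_first_snippet(query):
    return search(query)[0]["snippet"]              # str

def extract_all_urls(query, count=5):
    return [r["url"] for r in search(query, count=count)]   # list of strs

def format_search_results(query, count=5):
    return [f"{r['title']}: {r['url']}"             # list of formatted strs
            for r in search(query, count=count)]

def search_summary_dict(query, count=5):
    results = search(query, count=count)  # one call feeds every derived value
    return {                                        # summary dict
        "count": len(results),
        "top_url": results[0]["url"],
        "titles": [r["title"] for r in results],
    }
```

Every function is the same skeleton: one `search()` call, then len(), indexing, a comprehension, or a dict literal to reshape the result.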