Five days, five functions. make_table_row, summarize_group, parse_respondent_csv, load_respondents_from_csv, respondents_to_json. Which moment felt like a genuine upgrade?
load_respondents_from_csv replacing my manual split logic. I've been doing that by hand in R scripts for two years and never thought to look for a standard library solution.
Every language has a CSV reader that handles quoted fields. The manual split approach was a stepping stone — it showed you what DictReader automates. What about the JSON serialisation?
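A minimal sketch of that difference. The course's load_respondents_from_csv isn't shown here, so the column names and data are illustrative; the point is only what csv.DictReader does that a bare split cannot.

```python
import csv
import io

# CSV with a quoted field containing an embedded comma -- the case that
# breaks manual split logic.
raw = 'id,name,group\n1,"Smith, Jane",control\n2,"Lee, Sam",treatment\n'

# Manual split mangles the quoted name into two fields:
manual = raw.splitlines()[1].split(",")
# → ['1', '"Smith', ' Jane"', 'control']  (four fields instead of three)

# DictReader parses the quoting correctly and maps header -> value:
rows = list(csv.DictReader(io.StringIO(raw)))
# rows[0]["name"] is the intact string "Smith, Jane"
```

The same quoting rules cover embedded newlines and escaped quotes, which is why reaching for the standard library beats patching the split approach case by case.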
respondents_to_json completed the pipeline I could see in my head from Day 1: CSV text in, JSON summary out. That's the supplementary file for every paper I write from now on.
Supplementary materials generated by the same code that computed the stats. Reviewer 2's replication request just got a lot less terrifying.
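The pipeline shape, sketched end to end. This is a hypothetical stand-in, not the course's respondents_to_json: the column names (group, score) and the summary statistics are assumptions chosen to show CSV text going in and a JSON supplementary file coming out of the same code path.

```python
import csv
import io
import json
from statistics import mean

def summarize_csv(csv_text: str) -> str:
    """CSV text in, JSON summary out -- illustrative columns: group, score."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    groups: dict[str, list[float]] = {}
    for row in rows:
        groups.setdefault(row["group"], []).append(float(row["score"]))
    summary = {
        g: {"n": len(scores), "mean": round(mean(scores), 2)}
        for g, scores in groups.items()
    }
    # indent=2 keeps the supplementary file human-readable for reviewers
    return json.dumps(summary, indent=2)

demo = "group,score\ncontrol,3.0\ncontrol,5.0\ntreatment,4.0\n"
print(summarize_csv(demo))
```

Because the JSON is produced by the same function that computed the statistics, rerunning the script on the raw CSV reproduces the supplementary file exactly, which is the replication property the exchange above is pointing at.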
Week 4 capstone is next. Ready to wire it all together.
Six questions — four from this week, two reviewing Weeks 1–2. Review questions marked [REVIEW].