A new paper appeared on arXiv that fits your inclusion criteria. In your current workflow, what's the next five minutes?
read_range from Day 17 can pull the existing rows. For the new paper, I open the browser, find the citations Sheet, scroll to the last empty row, type the date, author, journal, and status. And if I forget, the paper falls out of the review.
That's the reproducibility gap. GOOGLESHEETS_SPREADSHEETS_VALUES_APPEND appends a row without opening a browser. You define the row as a list — date, author, journal, status — and the Sheet updates immediately. The paper can't fall out of the review if the script logs it the moment you find it:
result = toolset.execute_action(
    Action.GOOGLESHEETS_SPREADSHEETS_VALUES_APPEND,
    {"spreadsheet_id": sheet_id, "range": range_a1, "values": [values]},
)

Why is values wrapped in an extra list — [values]? If values is already a list like ["2026-04-12", "Smith 2024", "JPSP", "cited"], why the outer bracket?
The API treats values as a list of rows — each row is itself a list. So to append one row, you pass a list containing that one row: [["2026-04-12", "Smith 2024", "JPSP", "cited"]]. Two rows would be [row1, row2], where row1 and row2 are each lists of cell values. The double nesting is how the API supports batch appends in a single call.
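A minimal sketch of the two payload shapes, in plain Python with no API call (the second row here is invented example data):

```python
# One row: "values" is a list containing a single row list.
single = [["2026-04-12", "Smith 2024", "JPSP", "cited"]]

# Two rows in one batch append: still one outer list, now holding two row lists.
batch = [
    ["2026-04-12", "Smith 2024", "JPSP", "cited"],
    ["2026-04-13", "Lee 2025", "Psych Science", "screening"],
]

# Every element of "values" must itself be a list of cell values.
assert all(isinstance(row, list) for row in single + batch)
```

Either shape can be passed straight into the "values" key of the action's parameters.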
So I could wire this into an arXiv alert — new paper arrives, one function call logs it to the Sheet before I even read the abstract. That's my Week 4 capstone taking shape.
The literature search that can't drop a paper. That's reproducible methodology:
def append_row(sheet_id: str, range_a1: str, values: list) -> dict:
    result = toolset.execute_action(
        Action.GOOGLESHEETS_SPREADSHEETS_VALUES_APPEND,
        {"spreadsheet_id": sheet_id, "range": range_a1, "values": [values]},
    )
    print(f"Appended row to {range_a1}")
    return result

Two papers fell out of my last systematic review because I forgot to log them. This function solves that permanently.
One safety note: GOOGLESHEETS_SPREADSHEETS_VALUES_APPEND appends without deduplication — calling it twice logs the same paper twice. If you're building an automated ingestion pipeline, keep a local set of already-logged DOIs and skip papers that are already in it.
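A minimal sketch of that guard, assuming each incoming paper is a dict with a doi field and that an append_row wrapper like the one above exists; the set lives only as long as the script, so a long-running pipeline would persist it to disk:

```python
logged_dois: set[str] = set()

def log_paper(paper: dict) -> bool:
    """Append the paper once; skip it if its DOI was already logged."""
    if paper["doi"] in logged_dois:
        return False  # duplicate; do not append a second row
    # append_row(sheet_id, "Sheet1!A:D",
    #            [paper["date"], paper["author"], paper["journal"], paper["status"]])
    logged_dois.add(paper["doi"])
    return True
```

Calling log_paper twice with the same paper appends once and returns False the second time, which is exactly the behavior the raw append action lacks.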
toolset.execute_action(
    Action.GOOGLESHEETS_SPREADSHEETS_VALUES_APPEND,
    {
        "spreadsheet_id": sheet_id,
        "range": "Sheet1!A:D",
        "values": [["2026-04-12", "Smith 2024", "JPSP", "cited"]],
    },
)

values is a list of rows. One row append: [[col1, col2, col3]]. Two rows: [[row1col1, row1col2], [row2col1, row2col2]].
The range tells Sheets which columns to use. Sheet1!A:D appends at the first empty row in columns A–D. You can also use Sheet1!A1 — Sheets finds the next empty row automatically.
Tip: Always match the number of values in each row to the width of your range. A short row leaves its trailing cells blank, and an overlong row can write into columns beyond the range you named — either way the misalignment happens without an error.
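One way to enforce that, sketched as a hypothetical helper that normalizes a row to the range width before appending:

```python
def fit_row(row: list, width: int) -> list:
    """Pad a short row with empty strings, or trim a long one, to exactly `width` cells."""
    return (row + [""] * width)[:width]

# A three-value row fit to a four-column range (A:D) gains one blank cell.
padded = fit_row(["2026-04-12", "Smith 2024", "JPSP"], 4)
```

Running each row through fit_row(row, 4) before passing it to the append action keeps every entry aligned with columns A–D.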