Real production systems have dashboards — Grafana, Datadog, custom apps — where you watch metrics over time. For automations, a Google Sheet works as a poor man's dashboard. Append a row per run: timestamp, op count, errors, total ms. Open the Sheet to see the trend.
from datetime import datetime, timezone

# Run summary computed during the work
run_summary = {
    "timestamp": datetime.now(timezone.utc).isoformat(),  # utcnow() is deprecated since Python 3.12
    "ops": 5,
    "errors": 1,
    "total_ms": 142,
}
# Find the dashboard sheet (naively: first search result; in practice,
# search by name or hard-code the spreadsheet ID)
ss = toolset.execute_action(Action.GOOGLESHEETS_SEARCH_SPREADSHEETS, {"query": ""})
files = ss.get("files", []) or ss.get("response_data", {}).get("files", [])
if not files:
    raise RuntimeError("no spreadsheets found")
spreadsheet_id = files[0]["id"]
row = [run_summary["timestamp"], run_summary["ops"], run_summary["errors"], run_summary["total_ms"]]
result = toolset.execute_action(Action.GOOGLESHEETS_SHEET_APPEND_GOOGLE_SHEET_ROW, {
    "spreadsheet_id": spreadsheet_id,
    "sheet_name": "Sheet1",
    "values": [[str(c) for c in row]],
})
print(result.get("updates", {}).get("updatedRange")
      or result.get("response_data", {}).get("updates", {}).get("updatedRange"))

After a week of daily runs, you've got 7 rows — open the Sheet, see whether errors are trending up and whether total_ms is creeping. Sort, filter, and chart with Sheets' built-in tools.
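The run summary above arrives pre-filled; here is a minimal sketch of how those numbers might be accumulated during the run (the zero-arg operations passed in are placeholders for real work):

```python
import time
from datetime import datetime, timezone

def run_with_summary(ops):
    """Execute each operation, counting errors and tracking wall time."""
    start = time.perf_counter()
    errors = 0
    for op in ops:
        try:
            op()
        except Exception:
            errors += 1  # count the failure but keep the run going
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ops": len(ops),
        "errors": errors,
        "total_ms": int((time.perf_counter() - start) * 1000),
    }

# Three placeholder ops; the second one raises
summary = run_with_summary([lambda: None, lambda: 1 / 0, lambda: None])
```

The dict it returns has exactly the shape the append code above expects.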
This isn't a real dashboard, right?
Right. It's the mental model of a dashboard with no infrastructure. For a personal automation that runs once a day, this is plenty: append a row, open the Sheet when you want to see how things are going. For a system with hundreds of runs per minute, you'd want real time-series tooling.
What's the difference between this and the structured logs from day 16?
Logs are per-event (one line per step, per item). Dashboard rows are per-run (one summary across all steps in this run). Different granularity, different purpose. You'd often emit both — logs for debugging individual events, dashboard rows for trend-over-time.
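To make the two granularities concrete, a hedged sketch of emitting both from one run (the logger name and `process` helper are illustrative, not from the original):

```python
import json
import logging

logging.basicConfig(format="%(message)s")
log = logging.getLogger("automation")
log.setLevel(logging.INFO)

def process(items, work):
    """Per-event structured log lines, plus one per-run summary at the end."""
    errors = 0
    for item in items:
        try:
            work(item)
            log.info(json.dumps({"event": "item_done", "item": item}))      # per-event
        except Exception as exc:
            errors += 1
            log.info(json.dumps({"event": "item_failed", "item": item,
                                 "error": str(exc)}))                       # per-event
    return {"ops": len(items), "errors": errors}  # per-run dashboard row

summary = process([1, 2, 0], lambda n: 1 / n)  # the third item fails
```

The log lines go to your log destination for debugging; the returned summary becomes the appended Sheet row.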
A dashboard is fundamentally a time-series view of a few key metrics. For automations, that means a handful of per-run numbers, one row each run.
For low-frequency automations (daily, hourly), this is sufficient. The infrastructure cost is zero (you have Sheets); the analysis tooling is built into the destination.
| timestamp | ops | errors | total_ms | dlq_size | last_error |
|-----------|-----|--------|----------|----------|------------|
The rule of thumb: anything you'd want to glance at to know if today's run was healthy. Don't dump every metric — five columns is plenty.
# right — one row per run, history preserved
append_row(timestamp, ops, errors, total_ms)
# wrong — overwrites the same cell each run, no history
update_cell("A1", str(total_ms))

Append means you can always look at the trend. Update means you only have the most recent value, which is much less useful.
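The same append-vs-update contrast as a runnable sketch, using an in-memory CSV buffer in place of Sheets (the helper is illustrative):

```python
import csv
import io

def append_run_row(buf, timestamp, ops, errors, total_ms):
    """Append one summary row; every prior run stays visible."""
    writer = csv.writer(buf)
    writer.writerow([timestamp, ops, errors, total_ms])

buf = io.StringIO()
append_run_row(buf, "2024-01-01T00:00:00", 5, 1, 142)
append_run_row(buf, "2024-01-02T00:00:00", 5, 0, 130)

# Two runs, two rows: the history is there to trend over
rows = buf.getvalue().strip().splitlines()
```

An update-in-place version would leave you with a single number and no way to ask "is this getting worse?".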
Once you have a few rows, Sheets' chart feature graphs them in about 30 seconds: select the columns, choose Insert > Chart, and pick a line chart with timestamp on the x-axis. Now you have a dashboard that refreshes automatically as new rows append.
The Sheet-as-dashboard breaks down at scale: hundreds of runs per minute means hundreds of appended rows per minute, and a spreadsheet stops being a sane place to look.
Real tooling: Datadog, Grafana, Prometheus. They all have free tiers. The pattern stays the same — emit metrics, view trends — just at a different scale.
Most production automations emit both: per-event logs and per-run dashboard rows. Logs answer "what went wrong on this specific event?"; dashboard rows answer "is the system healthy on average?". You need both.
A common antipattern: emit so many metrics that nobody reads any of them. Pick the 3-5 columns that matter for triage — total ops, errors, latency. Add columns later if you find yourself wishing you had them; don't add them speculatively.
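One cheap way to hold the few-columns line is a fixed schema that the row builder enforces (the column names mirror the table above; the helper itself is illustrative):

```python
# The 3-5 columns that matter for triage; extend only when you miss one
TRIAGE_COLUMNS = ["timestamp", "ops", "errors", "total_ms"]

def dashboard_row(summary):
    """Build a row in fixed column order; fail loudly if a metric is missing."""
    missing = [c for c in TRIAGE_COLUMNS if c not in summary]
    if missing:
        raise ValueError(f"summary missing columns: {missing}")
    return [str(summary[c]) for c in TRIAGE_COLUMNS]

row = dashboard_row({"timestamp": "2024-01-01T00:00:00",
                     "ops": 5, "errors": 1, "total_ms": 142})
```

Adding a column is then a deliberate one-line schema change rather than an ad-hoc extra cell.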