A function with a known bug:
```python
def average(numbers):
    return sum(numbers) / len(numbers)
```

Looks fine? What input crashes it?
An empty list. `len([])` is 0, so `sum([]) / 0` is a `ZeroDivisionError`.
Right. Now: how do you know a future change to `average` doesn't re-introduce this bug? Or break the basic case `average([1, 2, 3]) == 2.0`? You can't keep that whole truth table in your head as the codebase grows.
The fix is to write code that calls the function on known inputs and asserts the expected output. If `average` ever stops returning 2.0 for `[1, 2, 3]`, the assertion fails and you know.
```python
def test_average_basic():
    assert average([1, 2, 3]) == 2.0

def test_average_empty_raises():
    try:
        average([])
    except ZeroDivisionError:
        return  # OK
    assert False, "expected ZeroDivisionError"
```

Tests are just functions that assert?
At their core, yes. The framework (unittest, pytest) gives you organisation, discovery, parametrisation, fixtures — but the underlying mechanic is always: "call the code, assert the result, fail loud if it's wrong."
When do I write them?
Three triggers. (1) When you fix a bug — write the test that would have caught it. (2) When you write a new function with non-trivial logic — test the happy path and one edge case. (3) When you refactor — write tests against the old behaviour first, then refactor with confidence. The goal isn't 100% coverage; it's: if I break something, will I know?
A test is a function that calls your code on a known input and asserts the expected output. The point: when behaviour drifts (you or someone else changes the code), the test fails loud instead of the bug shipping silently.
| Without tests | With tests |
|---|---|
| Manual check after each change | Automated check on every commit |
| Confidence drops as code grows | Confidence stays steady |
| Refactoring is scary | Refactoring is the test suite running green |
| Bugs ship and you find out from users | Bugs caught before merge |
1. Reproduce the bug. When you fix something, the first thing you write is the failing test:

```python
def test_average_empty_does_not_silently_return_zero():
    try:
        average([])
    except ZeroDivisionError:
        return
    assert False, "expected ZeroDivisionError"
```

2. Cover the happy path. When you write a new function:
```python
def test_average_basic():
    assert average([1, 2, 3]) == 2.0
```

3. Pin behaviour before a refactor. Before changing how `average` works, write tests against the current behaviour. Then refactor. Tests fail → you broke something. Tests pass → you didn't.
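Trigger 3 can be sketched end to end: pin the current behaviour with the two tests, then swap the internals and re-run them. The choice of `math.fsum` here is illustrative, not from the lesson — any refactor that keeps both tests green would do.

```python
import math

def average(numbers):
    # Refactored internals: math.fsum is more accurate for floats, and the
    # division by len(numbers) still raises ZeroDivisionError on [].
    return math.fsum(numbers) / len(numbers)

def test_average_basic():
    assert average([1, 2, 3]) == 2.0

def test_average_empty_raises():
    try:
        average([])
    except ZeroDivisionError:
        return  # OK
    assert False, "expected ZeroDivisionError"

# Run the pinned tests against the refactored version.
test_average_basic()
test_average_empty_raises()
print("refactor preserved behaviour")
```

If the refactor had silently changed the empty-list behaviour (say, returning 0 instead of raising), `test_average_empty_raises` would fail immediately.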
What `assert` does: `assert condition, message`. If `condition` is truthy → nothing happens. If falsy → it raises `AssertionError` with `message`. That's the entire test mechanic.
Why `unittest`? You can test with raw asserts — that's what the dialogue showed. The standard library's `unittest` adds:

- test discovery (finds each `test_*` function automatically)
- assertion helpers (`assertEqual`, `assertRaises`, etc.)

Note: The Python series uses `unittest` rather than `pytest` because `pytest` isn't available in this in-browser Python runtime. The mental model is the same; the syntax is slightly different. If you've seen `pytest` examples online, the translation is mechanical.
Given the buggy average function, write the test that catches the empty-list bug. Print the test name when you do.