You want to test add(a, b) against five input pairs. Naive way:

```python
class TestAdd(unittest.TestCase):
    def test_a(self):
        self.assertEqual(add(1, 2), 3)

    def test_b(self):
        self.assertEqual(add(0, 0), 0)

    def test_c(self):
        self.assertEqual(add(-1, 1), 0)

    def test_d(self):
        self.assertEqual(add(10, 20), 30)

    def test_e(self):
        self.assertEqual(add(-5, -5), -10)
```

Five copies of the same line with different numbers. Feels wrong.
It is. The test logic is identical — only the data differs. pytest has @pytest.mark.parametrize; for unittest the standard pattern is subTest:

```python
import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_pairs(self):
        cases = [
            (1, 2, 3),
            (0, 0, 0),
            (-1, 1, 0),
            (10, 20, 30),
            (-5, -5, -10),
        ]
        for a, b, expected in cases:
            with self.subTest(a=a, b=b):
                self.assertEqual(add(a, b), expected)

result = unittest.main(argv=[''], exit=False, verbosity=0)
print("ran:", result.result.testsRun, "failures:", len(result.result.failures))
```

What does self.subTest(a=a, b=b) do?
Each iteration becomes its own "sub-test." If one assertion fails, the loop keeps going — you see all the failing pairs in one run, not just the first one. Without subTest, the first failure stops the loop and you only know about pair #1.
And the kwargs to subTest — those are just labels for the failure message?
Right. When a sub-test fails, you'll see something like FAIL: test_pairs (a=10, b=20). Helpful for figuring out which of the five inputs broke.
Running the same test logic against many inputs — without N copies of the same code.
subTest — the unittest pattern

```python
class TestAdd(unittest.TestCase):
    def test_pairs(self):
        cases = [
            (1, 2, 3),
            (0, 0, 0),
            (-1, 1, 0),
            (10, 20, 30),
            (-5, -5, -10),
        ]
        for a, b, expected in cases:
            with self.subTest(a=a, b=b):
                self.assertEqual(add(a, b), expected)
```

What subTest gives you:

- Without subTest, the first assertEqual failure stops the loop. With it, every iteration is independent — the runner reports all five failing pairs at once.
- The kwargs (a=a, b=b) appear in the failure message — you immediately see which input broke.

pytest's version (for context)

Most real Python projects use pytest, which has a more concise version:
```python
import pytest

@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
])
def test_add(a, b, expected):
    assert add(a, b) == expected
```

Same idea, different syntax. Each tuple becomes a separate test. pytest is not available in our in-browser runtime; the subTest pattern works in pure stdlib.
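If you want pytest's one-test-per-tuple behaviour without pytest, one stdlib sketch is to generate a test method per case with setattr. This is an illustration, not the pytest mechanism itself, and the generated names (test_add_0, test_add_1, ...) are my own convention:

```python
import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    pass

# One generated method per case, so the runner counts and reports
# each tuple as a separate test, similar in spirit to parametrize.
cases = [(1, 2, 3), (0, 0, 0), (-1, 1, 0)]
for i, (a, b, expected) in enumerate(cases):
    def make_test(a=a, b=b, expected=expected):
        # Default-argument trick binds each case's values at definition
        # time, so every generated test sees its own (a, b, expected).
        def test(self):
            self.assertEqual(add(a, b), expected)
        return test
    setattr(TestAdd, f"test_add_{i}", make_test())

suite = unittest.TestLoader().loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("ran:", result.testsRun)
```

Compared with subTest, this makes each case selectable and countable as its own test, at the cost of a little metaprogramming.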
Write your cases as (input, expected) pairs. Parametrise.

Good parametrised tests cover:

- a typical case (add(1, 2) == 3)
- the zero case (add(0, 0) == 0)
- opposite signs cancelling (add(-1, 1) == 0)

The table grows over time. That's fine — it's the cheapest way to keep regressions caught.
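The growing-table idea in code: keep the cases in one module-level list so adding a regression case is a one-line append. A minimal sketch — the CASES name and the big-int row are my own invented example of a later addition:

```python
import unittest

def add(a, b):
    return a + b

# The case table is just data: appending a regression is a one-line change.
CASES = [
    (1, 2, 3),              # typical case
    (0, 0, 0),              # zero/identity
    (-1, 1, 0),             # opposite signs cancelling
    (2**31, 2**31, 2**32),  # edge case added later: big integers
]

class TestAdd(unittest.TestCase):
    def test_table(self):
        for a, b, expected in CASES:
            with self.subTest(a=a, b=b):
                self.assertEqual(add(a, b), expected)

suite = unittest.TestLoader().loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("ok:", result.wasSuccessful())
```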