Automated testing is an essential part of your development process.
Although writing tests at first may look like it prolongs the development process, it saves you a lot of time in the long run.
Well-written tests decrease the possibility of something breaking in a production environment by ensuring your code is doing what you expected. Tests also help you cover marginal cases and make refactoring easier.
In this article, we'll look at how to use pytest, so you'll be able to use it on your own to improve your development process and follow more advanced pytest tutorials.
Objectives
By the end of this article, you'll be able to:
- Explain what pytest is and how you can use it
- Write a test with pytest on your own
- Follow more complicated tutorials that use pytest
- Prepare data and/or files that you need for a test
- Parametrize a test
- Mock functionality you need for a test
Why pytest
Although often overlooked, testing is so vital that Python comes with its own built-in testing framework called unittest. Writing tests in unittest can be complicated, though, so in recent years, the pytest framework has become the standard.
Some significant advantages of pytest are:
- requires less boilerplate code, making your test suites more readable
- uses plain `assert` statements rather than unittest's `assertSomething` methods (e.g., `assertEqual`, `assertTrue`)
- its fixture system simplifies setting up and tearing down test state
- supports a functional approach to writing tests
- has a large, community-maintained plugin ecosystem
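To see the difference in assertion style, here's a minimal sketch (the `add` function and test names are our own, not from the sample project) comparing unittest's method-based assertions with pytest's plain `assert`:

```python
import unittest


def add(a, b):
    return a + b


# unittest style: assertions are methods inherited from TestCase
class TestAddUnittest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)


# pytest style: a plain function with a plain assert statement;
# on failure, pytest rewrites the assert to report both compared values
def test_add():
    assert add(2, 3) == 5
```

Both tests check the same thing; the pytest version simply needs no class and no special assertion methods.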
Getting Started
Since this is a guide rather than a tutorial, we've prepared a simple FastAPI application that you can refer to as you're going through this article. You can clone it from GitHub.
On the basics branch, our API has 4 endpoints (defined in main.py) that use functions from calculations.py to return the result of performing a basic arithmetic operation (`+`, `-`, `*`, `/`) on two integers.
On the advanced_topics branch, two more functionalities are added:

- `CalculationsStoreJSON` (inside store_calculations.py) - a class that allows you to store and retrieve calculations to/from a JSON file
- `get_number_fact` (inside number_facts.py) - makes a call to a remote API to retrieve a fact about a certain number
No knowledge of FastAPI is required to understand this article.
We'll use the basics branch for the first part of this article.
Create and activate the virtual environment and install the requirements:
```sh
$ python3.10 -m venv venv
$ source venv/bin/activate
(venv)$ pip install -r requirements.txt
```
Organizing and Naming
To organize your tests, you can use three possibilities, all of which are used in the example project:
Organized in | Example
---|---
Python package (a folder including an `__init__.py` file) | "test_calculations"
Module | test_commutative_operations.py
Class | `TestCalculationEndpoints`
When it comes to best practices for organizing tests, each programmer has their own preferences.
The purpose of this article is not to show best practices but, instead, to show you all possibilities.
pytest will discover tests on its own if you abide by the following conventions:

- you add your tests to a file whose name starts with `test_` or ends with `_test.py` (e.g., test_foo.py or foo_test.py)
- you prefix test functions with `test_` (e.g., `def test_foo()`)
- if you're using classes, you add your tests as methods to a class whose name starts with `Test` (e.g., `class TestFoo`)

Tests that don't follow this naming convention simply won't be found, so be careful with your naming.
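A minimal file illustrating these conventions (the names here are made up for illustration):

```python
# test_example.py -- discovered because the filename starts with "test_"


def test_addition():
    # discovered: the function name starts with "test_"
    assert 1 + 1 == 2


class TestNumbers:
    # discovered: the class name starts with "Test"
    def test_multiplication(self):
        assert 2 * 3 == 6


def addition_helper():
    # NOT collected as a test: no "test_" prefix
    return 1 + 1
```

Running `pytest` against this file would collect and run the first two, while `addition_helper` stays available as an ordinary helper.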
It's worth noting that the naming convention can be changed on the command line or in a configuration file.
Test Anatomy
Let's see what the `test_return_sum` test function (in the test_calculation_endpoints.py file) looks like:
```python
# tests/test_endpoints/test_calculation_endpoints.py


def test_return_sum(self):
    # Arrange
    test_data = {
        "first_val": 10,
        "second_val": 8
    }
    client = TestClient(app)

    # Act
    response = client.post("/sum/", json=test_data)

    # Assert
    assert response.status_code == 200
    assert response.json() == 18
```
Each test function, according to the pytest documentation, consists of four steps:

- Arrange - where you prepare everything for your test (`test_data = {"first_val": 10, "second_val": 8}`)
- Act - the singular, state-changing action that kicks off the behavior you want to test (`client.post("/sum/", json=test_data)`)
- Assert - compares the result of the Act with the desired result (`assert response.json() == 18`)
- Cleanup - where the test-specific data gets cleaned up (usually seen in tests of more complicated features; you can find an example in our tips)
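Here's how the four steps can look in a self-contained sketch that includes an explicit Cleanup step (the `save_record` function and file names are hypothetical, not part of the sample app):

```python
import json
import tempfile
from pathlib import Path


# hypothetical function under test: writes a dict to a JSON file
def save_record(path, record):
    path.write_text(json.dumps(record))


def test_save_record():
    # Arrange: prepare a temporary directory and the input data
    tmp_dir = tempfile.mkdtemp()
    target = Path(tmp_dir) / "record.json"
    record = {"first_val": 10, "second_val": 8}

    # Act: the single state-changing call under test
    save_record(target, record)

    # Assert: compare the actual result with the expected one
    assert json.loads(target.read_text()) == record

    # Cleanup: remove the test-specific file and directory
    target.unlink()
    Path(tmp_dir).rmdir()
```

In real pytest code the Cleanup step is usually handled by a fixture (covered later in this article) rather than written inline like this.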
Running Tests
pytest gives you a lot of control as to which tests you want to run:
- all the tests
- specific package
- specific module
- specific class
- specific test
- tests corresponding to a specific keyword
Let's see how this works...
If you're following along with our sample application, pytest is already installed if you installed the requirements. For your own projects, pytest can be installed like any other package with pip:

```sh
(venv)$ pip install pytest
```
Running All the Tests
Running the `pytest` command will simply run all the tests that pytest can find:
```
(venv)$ python -m pytest

=============================== test session starts ===============================
platform darwin -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/michael/repos/testdriven/pytest_for_beginners_test_project
plugins: anyio-3.6.1
collected 8 items

tests/test_calculations/test_anticommutative_operations.py ..               [ 25%]
tests/test_calculations/test_commutative_operations.py ..                   [ 50%]
tests/test_endpoints/test_calculation_endpoints.py ....                     [100%]

================================ 8 passed in 5.19s ================================
```
pytest will inform you how many tests are found and which modules the tests were found in. In our example app, pytest found 8 tests, and they all passed.
At the bottom of the message, you can see how many tests passed/failed.
Incorrect Naming Pattern
As already discussed, tests that don't abide by the proper naming convention will simply not be found. Wrongly named tests don't produce any error, so you need to be mindful of that.
For example, if you rename the `TestCalculationEndpoints` class to `CalculationEndpointsTest`, all the tests inside it simply won't run:
```
=============================== test session starts ===============================
platform darwin -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/michael/repos/testdriven/pytest_for_beginners_test_project
plugins: anyio-3.6.1
collected 4 items

tests/test_calculations/test_anticommutative_operations.py ..               [ 50%]
tests/test_calculations/test_commutative_operations.py ..                   [100%]

================================ 4 passed in 0.15s ================================
```
Change the name back to `TestCalculationEndpoints` before moving on.
Failing Test
Your test won't always pass on the first try.
Corrupt the expected output in the `assert` statement in `test_calculate_sum` to see what the output for a failing test looks like:
```python
# tests/test_calculations/test_commutative_operations.py


def test_calculate_sum():
    calculation = calculate_sum(5, 3)
    assert calculation == 7  # whoops, a mistake
```
Run the test. You should see something similar to:
```
=============================== test session starts ===============================
platform darwin -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/michael/repos/testdriven/pytest_for_beginners_test_project
plugins: anyio-3.6.1
collected 8 items

tests/test_calculations/test_anticommutative_operations.py ..               [ 25%]
tests/test_calculations/test_commutative_operations.py F.                   [ 50%]
tests/test_endpoints/test_calculation_endpoints.py ....                     [100%]

==================================== FAILURES =====================================
_______________________________ test_calculate_sum ________________________________

    def test_calculate_sum():
        calculation = calculate_sum(5, 3)
>       assert calculation == 7
E       assert 8 == 7

tests/test_calculations/test_commutative_operations.py:8: AssertionError
============================= short test summary info =============================
FAILED tests/test_calculations/test_commutative_operations.py::test_calculate_sum
=========================== 1 failed, 7 passed in 0.26s ===========================
```
At the bottom of the message, you can see a short test summary info section. This tells you which test failed and where. In this case, the actual output -- `8` -- doesn't match the expected one -- `7`.
If you scroll a little higher, the failing test is displayed in detail, so it's easier to pinpoint what went wrong (helpful with more complex tests).
Fix this test before moving on.
Running Tests in a Specific Package or Module
To run a specific package or module, you just need to add a full relative path to the specific test set to the pytest command.
For a package:
```sh
(venv)$ python -m pytest tests/test_calculations
```
This command will run all the tests inside the "tests/test_calculations" package.
For a module:
```sh
(venv)$ python -m pytest tests/test_calculations/test_commutative_operations.py
```
This command will run all the tests inside the tests/test_calculations/test_commutative_operations.py module.
The output of both will be similar to the previous one, except the number of executed tests will be smaller.
Running Tests in a Specific Class
To access a specific class in pytest, you need to write a relative path to its module, followed by the class after `::`:

```sh
(venv)$ python -m pytest tests/test_endpoints/test_calculation_endpoints.py::TestCalculationEndpoints
```

This command will execute all the tests inside the `TestCalculationEndpoints` class.
Running a Specific Test
You can access a specific test the same way as the class, with two colons after the relative path, followed by the test name:
```sh
(venv)$ python -m pytest tests/test_calculations/test_commutative_operations.py::test_calculate_sum
```
If the function you wish to run is inside a class, a single test needs to be run in the following form:

```
relative_path_to_module::TestClass::test_method
```
For example:
```sh
(venv)$ python -m pytest tests/test_endpoints/test_calculation_endpoints.py::TestCalculationEndpoints::test_return_sum
```
Running Tests by Keyword
Now, let's say you only want to run tests dealing with division. Since we included the word "dividend" in the test names for tests that deal with division, you can run just those tests like so:

```sh
(venv)$ python -m pytest -k "dividend"
```
So, 2 out of 8 tests will run:
```
=============================== test session starts ===============================
platform darwin -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/michael/repos/testdriven/pytest_for_beginners_test_project
plugins: anyio-3.6.1
collected 8 items / 6 deselected / 2 selected

tests/test_calculations/test_anticommutative_operations.py .                [ 50%]
tests/test_endpoints/test_calculation_endpoints.py .                        [100%]

========================= 2 passed, 6 deselected in 0.18s =========================
```
Those are not the only ways to select a specific subset of tests. Refer to the official documentation for more info.
pytest Flags Worth Remembering
pytest includes many flags; you can list all of them with the `pytest --help` command.
Among the most useful are:
- `pytest -v` increases verbosity by one level, and `pytest -vv` increases it by two levels. For example, when using parametrization (running the same test multiple times with different inputs/outputs), running just `pytest` informs you how many test versions passed and how many failed, while adding `-v` also outputs which parameters were used. If you add `-vv`, you'll see each test version with its input parameters. You can see a much more detailed example in the pytest docs.
- `pytest --lf` re-runs only the tests that failed during the last run. If there were no failures, all the tests will run.
- Adding the `-x` flag causes pytest to exit instantly on the first error or failed test.
Parametrizing
We covered the basics and are now moving to more advanced topics.
If you're following along with the repo, switch the branch from basics to advanced_topics (`git checkout advanced_topics`).
Sometimes, a single example input for your test will suffice, but there are also many occasions when you'll want to test multiple inputs -- e.g., emails, passwords, etc.

You can add multiple inputs and their respective outputs with parametrization via the `@pytest.mark.parametrize` decorator.

For example, with anti-commutative operations, the order of the numbers passed matters. It would be smart to cover more cases to ensure that the function works correctly for all of them:
```python
# tests/test_calculations/test_anticommutative_operations.py
import pytest

from calculations import calculate_difference


@pytest.mark.parametrize(
    "first_value, second_value, expected_output",
    [
        (10, 8, 2),
        (8, 10, -2),
        (-10, -8, -2),
        (-8, -10, 2),
    ]
)
def test_calculate_difference(first_value, second_value, expected_output):
    calculation = calculate_difference(first_value, second_value)
    assert calculation == expected_output
```
`@pytest.mark.parametrize` has a strictly structured form:

- You pass two arguments to the decorator:
  - a string of comma-separated parameter names
  - a list of tuples of parameter values, where each value's position corresponds to the position of its parameter name in the string
- You pass the parameter names as arguments to the test function (these are matched by name, not by position)
If you run that test, it will run 4 times, each time with different inputs and expected output:
```
(venv)$ python -m pytest -v tests/test_calculations/test_anticommutative_operations.py::test_calculate_difference

=============================== test session starts ===============================
platform darwin -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/michael/repos/testdriven/pytest_for_beginners_test_project
plugins: anyio-3.6.1
collected 4 items

tests/test_calculations/test_anticommutative_operations.py::test_calculate_difference[10-8-2] PASSED [ 25%]
tests/test_calculations/test_anticommutative_operations.py::test_calculate_difference[8-10--2] PASSED [ 50%]
tests/test_calculations/test_anticommutative_operations.py::test_calculate_difference[-10--8--2] PASSED [ 75%]
tests/test_calculations/test_anticommutative_operations.py::test_calculate_difference[-8--10-2] PASSED [100%]

================================ 4 passed in 0.01s ================================
```
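Conceptually, each tuple in the list becomes one separate test invocation. A plain-Python sketch of the same data-driven idea (with a stand-in for `calculate_difference`, since we're outside the project here) would be a loop -- though unlike parametrize, a loop stops at the first failure and reports all cases as a single test:

```python
# stand-in for calculate_difference from the project's calculations.py
def calculate_difference(first_value, second_value):
    return first_value - second_value


cases = [
    (10, 8, 2),
    (8, 10, -2),
    (-10, -8, -2),
    (-8, -10, 2),
]

# parametrize runs one reported test per tuple; this loop mimics the data flow only
for first_value, second_value, expected_output in cases:
    assert calculate_difference(first_value, second_value) == expected_output
```

That per-case isolation and reporting is the main reason to prefer the decorator over a hand-rolled loop.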
Fixtures
It's a good idea to move the Arrange (and consequently Cleanup) step to a separate fixture function when the Arrange step is exactly the same in multiple tests or if it's so complicated that it hurts tests' readability.
Creation
A function is marked as a fixture with the `@pytest.fixture` decorator.
The old version of `TestCalculationEndpoints` had a step for creating a `TestClient` in each method.

For example:
```python
# tests/test_endpoints/test_calculation_endpoints.py


def test_return_sum(self):
    test_data = {
        "first_val": 10,
        "second_val": 8
    }
    client = TestClient(app)
    response = client.post("/sum/", json=test_data)
    assert response.status_code == 200
    assert response.json() == 18
```
In the advanced_topics branch, you'll see that the method now looks much cleaner:
```python
# tests/test_endpoints/test_calculation_endpoints.py


def test_return_sum(self, test_app):
    test_data = {
        "first_val": 10,
        "second_val": 8
    }
    response = test_app.post("/sum/", json=test_data)
    assert response.status_code == 200
    assert response.json() == 18
```
The other two methods were left as they were, so you can compare the approaches (don't mix styles like that in real life; it makes no sense).
`test_return_sum` now uses a fixture called `test_app` that you can see in the conftest.py file:
```python
# tests/conftest.py
import pytest
from starlette.testclient import TestClient

from main import app


@pytest.fixture(scope="module")
def test_app():
    client = TestClient(app)
    return client
```
What's going on?
- The `@pytest.fixture()` decorator marks the function `test_app` as a fixture. When pytest reads that module, it adds the function to a list of fixtures. Test functions can then use any fixture from that list.
- This fixture is a simple function that returns a `TestClient`, so test API calls can be performed.
- Test function arguments are compared against the list of fixtures. If an argument's name matches a fixture's name, the fixture is resolved and its return value is passed to the test function as that argument.
- The test function uses the result of the fixture the same way as any other variable value.
Another important thing to notice is that the test function is not passed the fixture itself but the fixture's return value.
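The fixture above only sets state up. When a fixture also needs to perform the Cleanup step, a common pattern is to `yield` the value instead of returning it; everything after the `yield` runs during teardown. Here's a sketch with a made-up resource (not from the sample repo):

```python
import pytest


@pytest.fixture
def fake_connection():
    # Arrange: set up the resource (a dict standing in for a real connection)
    connection = {"open": True}
    yield connection  # the test body runs here, receiving this value
    # Cleanup: runs after the test that requested the fixture finishes
    connection["open"] = False
```

A test that takes `fake_connection` as an argument gets the dict, and pytest closes it afterwards regardless of whether the test passed or failed.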
Scope
Fixtures are created when first requested by a test, but they are destroyed based on their scope. After a fixture is destroyed, it needs to be invoked again if required by another test, so you need to be mindful of the scope with time-expensive fixtures (e.g., ones making API calls).
There are five possible scopes, from the narrowest to the broadest:
Scope | Description
---|---
function (default) | The fixture is destroyed at the end of the test.
class | The fixture is destroyed during the teardown of the last test in the class.
module | The fixture is destroyed during the teardown of the last test in the module.
package | The fixture is destroyed during the teardown of the last test in the package.
session | The fixture is destroyed at the end of the test session.
To change the scope in the previous example, you just need to set the `scope` parameter:
```python
# tests/conftest.py
import pytest
from starlette.testclient import TestClient

from main import app


@pytest.fixture(scope="function")  # scope changed
def test_app():
    client = TestClient(app)
    return client
```
How important it is to define the smallest possible scope depends on how time-consuming the fixture is. Creating a `TestClient` isn't very time-consuming, so narrowing the scope doesn't shorten the test run. But, for example, running 10 tests that use a fixture calling an external API can be very time-consuming, so it's probably best to use the `module` scope there.
Temporary Files
When your production code has to deal with files, your tests will as well.
To avoid interference between multiple test files (or even with the rest of the app) and to avoid an extra cleanup step, it's best to use a unique temporary directory.
In the sample app, we stored all the operations performed on a JSON file for future analysis. Now, since you definitely don't want to alter a production file during test runs, you need to create a separate, temporary JSON file.
The code to be tested can be found in store_calculations.py:
```python
# store_calculations.py
import json


class CalculationsStoreJSON:
    def __init__(self, json_file_path):
        self.json_file_path = json_file_path
        with open(self.json_file_path / "calculations.json", "w") as file:
            json.dump([], file)

    def add(self, calculation):
        with open(self.json_file_path / "calculations.json", "r+") as file:
            calculations = json.load(file)
            calculations.append(calculation)
            file.seek(0)
            json.dump(calculations, file)

    def list_operation_usages(self, operation):
        with open(self.json_file_path / "calculations.json", "r") as file:
            calculations = json.load(file)
        return [
            calculation for calculation in calculations
            if calculation["operation"] == operation
        ]
```
Notice that upon initializing `CalculationsStoreJSON`, you have to provide a `json_file_path` where your JSON file will be stored. This can be any valid path on disk; you pass the path the same way in production code and in tests.
Fortunately, pytest provides a number of built-in fixtures, one of which, `tmp_path`, we can use in this case:
```python
# tests/test_advanced/test_calculations_storage.py
from store_calculations import CalculationsStoreJSON


def test_correct_calculations_listed_from_json(tmp_path):
    store = CalculationsStoreJSON(tmp_path)
    calculation_with_multiplication = {"value_1": 2, "value_2": 4, "operation": "multiplication"}

    store.add(calculation_with_multiplication)

    assert store.list_operation_usages("multiplication") == [
        {"value_1": 2, "value_2": 4, "operation": "multiplication"}
    ]
```
This test checks whether, upon saving a calculation to a JSON file with the `CalculationsStoreJSON.add()` method, we can retrieve a list of certain operations with `CalculationsStoreJSON.list_operation_usages()`.
We passed the `tmp_path` fixture to this test; it returns a path (`pathlib.Path`) object that points to a temporary directory inside the base temporary directory.
When using `tmp_path`, pytest creates:

- a base temporary directory
- a temporary directory (inside the base directory) that's unique to each test function invocation
It's worth noting that, to help with debugging, pytest creates a new base temporary directory during each test session, while old base directories are removed after 3 sessions.
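If you're curious what `tmp_path` actually hands your test, a rough stdlib approximation looks like this (pytest additionally keeps old directories around between sessions, which this sketch does not):

```python
import tempfile
from pathlib import Path

# roughly what the tmp_path fixture provides: a pathlib.Path pointing
# at a fresh, test-specific directory that's safe to write to
with tempfile.TemporaryDirectory() as tmp_dir:
    tmp_path = Path(tmp_dir)
    (tmp_path / "calculations.json").write_text("[]")
    assert (tmp_path / "calculations.json").read_text() == "[]"
# the directory is removed when the context manager exits
```

Unlike this context manager, pytest defers removal so you can inspect the files of recent failing runs.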
Monkeypatching
With monkeypatching, you dynamically modify the behavior of a piece of code at runtime without actually changing the source code.
Although it's not necessarily limited to testing, in pytest monkeypatching is used to modify the behavior of code inside the unit under test. It usually replaces expensive function calls, like HTTP calls to APIs, with pre-defined dummy behavior that's fast and easy to control.
For example, instead of making a call to a real API to get a response, you return some hardcoded response that's used inside tests.
Let's take a deeper look. In our app, there's a function that returns a fact about some number that's retrieved from a public API:
```python
# number_facts.py
import requests


def get_number_fact(number):
    url = f"http://numbersapi.com/{number}?json"
    response = requests.get(url)

    json_resp = response.json()
    if json_resp["found"]:
        return json_resp["text"]

    return "No fact about this number."
```
You don't want to call the API during your tests because:
- it's slow
- it's error-prone (the API can be down, you may have a poor internet connection, ...)
In this case, you want to mock the response, so it returns the part we're interested in without actually making the HTTP request:
```python
# tests/test_advanced/test_number_facts.py
import requests

from number_facts import get_number_fact


class MockedResponse:
    def __init__(self, json_body):
        self.json_body = json_body

    def json(self):
        return self.json_body


def mock_get(*args, **kwargs):
    return MockedResponse({
        "text": "7 is the number of days in a week.",
        "found": "true",
    })


def test_get_number_fact(monkeypatch):
    monkeypatch.setattr(requests, "get", mock_get)

    number = 7
    fact = "7 is the number of days in a week."

    assert get_number_fact(number) == fact
```
A lot is happening here:
- pytest's built-in monkeypatch fixture is used in the test function.
- Using `monkeypatch.setattr`, we overrode the `get` function of the `requests` package with our own function, `mock_get`. All calls to `requests.get` inside the app code will now actually call `mock_get` during the execution of this test.
- The `mock_get` function returns a `MockedResponse` instance whose `json_body` is set to the value we assigned inside `mock_get` (`{"text": "7 is the number of days in a week.", "found": "true"}`).
- Each time the test is invoked, instead of executing `requests.get("http://numbersapi.com/7?json")` as in the production code (`get_number_fact`), a `MockedResponse` with a hardcoded fact is returned.
This way, you can still verify the behavior of your function (getting a fact about a number from an API response) without really calling the API.
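Stripped of pytest, the core idea behind `monkeypatch.setattr` is just temporary attribute replacement with guaranteed restoration. A stdlib-only sketch (the `api` namespace and `describe` function are invented for illustration):

```python
import types

# a stand-in object with an "expensive" function attribute
api = types.SimpleNamespace(get_fact=lambda n: "(slow network call)")


def describe(n):
    return api.get_fact(n)


# what monkeypatch.setattr does in spirit: swap the attribute for the
# duration of the test, then restore the original during teardown
original = api.get_fact
api.get_fact = lambda n: f"{n} is the number of days in a week."
try:
    assert describe(7) == "7 is the number of days in a week."
finally:
    api.get_fact = original  # monkeypatch undoes the patch automatically
```

The advantage of the fixture over this manual try/finally is that pytest guarantees the restoration for you, even when the test fails.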
Conclusion
There are a number of reasons why pytest has become the standard in the past few years, most notably:

- It simplifies the writing of tests.
- Thanks to its comprehensive output, it's easy to pinpoint which tests failed and why.
- It provides solutions for repetitive or complicated test preparation, creating files for testing purposes, and test isolation.
pytest offers much more than what we covered in this article.
Their documentation includes helpful how-to guides that cover in-depth most of what we skimmed here. They also provide a number of examples.
pytest also comes with an extensive list of plugins, which you can use to extend pytest functionalities.
Here are a few you might find useful:
- pytest-cov adds support for checking code coverage.
- pytest-django adds a set of valuable tools for testing Django applications.
- pytest-xdist allows you to run tests in parallel, thus shortening the time tests need to run.
- pytest-randomly runs tests in random order, preventing them from accidentally being dependent on each other.
- pytest-asyncio makes it easier to test asynchronous programs.
- pytest-mock provides a mocker fixture that's a wrapper around the standard unittest mock package along with additional utilities.
This article should have helped you understand how the pytest library works and what it's possible to accomplish with it. However, understanding just how pytest works and how testing works are not the same. Learning to write meaningful tests takes practice and understanding of what you expect your code to do.