Pytest: More Advanced Features for Easier Testing

Written by Dan Sackett on September 9, 2014

With the number of features that pytest provides, I wanted to make a third post to note some of the miscellaneous features.

I've already covered the basics and fixtures but pytest has many random features that you might find useful. What follows are some of the more interesting topics.

Doomed tests

A lot of times, we know that a test is going to fail. In those cases, we either want to modify the test or modify the code. Still, a known failure clutters the results and can hide real regressions, so pytest gives us tools to handle these cases.

Those tools are skip and xfail.

A skip means that you expect your test to pass unless the environment (e.g. wrong Python interpreter, missing dependency) prevents it from running. An xfail means that your test can run but you expect it to fail because there is an implementation problem. Let's see an example.

import pytest
import sys

@pytest.mark.skipif(sys.platform != 'win32', reason="requires windows")
def test_func_skipped():
    """Test the function"""
    assert 0

@pytest.mark.xfail
def test_func_xfailed():
    """Test the function"""
    assert 0

And when we run it:

$ py.test -s tests/ 
================================================= test session starts =================================================
platform linux2 -- Python 2.7.3 -- py-1.4.23 -- pytest-2.6.1
collected 2 items 

tests/ sx

======================================== 1 skipped, 1 xfailed in 0.20 seconds =========================================

We ran both tests and one was skipped (because I'm not on a Windows system) and one was xfailed because we knew it wouldn't work. Let's talk about the skip first.

With skipping tests, we give the mark a condition. If the condition evaluates to true, the test will not run and will be marked as skipped. This is perfect for tests that might require specific versions of modules and software. Repeating the same condition across a number of tests can be cumbersome though, so let's create a decorator that gives us this condition in one place.

import pytest
import sys

windows = pytest.mark.skipif(sys.platform != 'win32', reason="requires windows")

@windows
def test_func_skipped():
    """Test the function"""
    assert 0

We can apply the @windows decorator to any test function and now we have that condition portable and DRY. One more cool thing to do with skips is importorskip.

docutils = pytest.importorskip("docutils", minversion="0.3")

This time, rather than creating a decorator, importorskip imports docutils and returns the module; if the import fails, the test (or the whole module, when called at import time) is skipped. I also specified the optional minversion parameter to require a minimum version of the module.
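As a quick sketch of how that looks in a test file (the assertion is just an illustration):

import pytest

# Skips every test in this file if docutils is missing or older than 0.3
docutils = pytest.importorskip("docutils", minversion="0.3")

def test_docutils_version():
    # importorskip returns the imported module itself, ready to use
    assert hasattr(docutils, "__version__")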

Let's move on to xfail tests.

We can do the same kinds of things with xfail, including decorators and conditions:

import pytest
import sys

@pytest.mark.xfail(sys.version_info >= (3,3), reason="python3.3 api changes")
def test_func_xfailed():
    """Test the function"""
    assert 0

Here's a good roundup of some of the other things we can do with xfail:

import pytest
xfail = pytest.mark.xfail

@xfail
def test_hello():
    assert 0

@xfail(run=False)
def test_hello2():
    assert 0

@xfail("hasattr(os, 'sep')")
def test_hello3():
    assert 0

@xfail(reason="bug 110")
def test_hello4():
    assert 0

@xfail('pytest.__version__[0] != "17"')
def test_hello5():
    assert 0

def test_hello6():
    pytest.xfail("reason")

def test_hello7():
    x = []
    x[1] = 1

You'll notice we can specify run=False so a test is marked xfail without being executed at all. We can also use a string expression as a condition to decide whether the test is expected to fail. Even within a test itself, we can trigger an xfail imperatively with pytest.xfail('reason'). Xfail is versatile and can really help you write smart tests based on the system and environment you are working in.

As a final note on xfail and skip, notice that you can specify a reason. When you run the tests, you do not normally see these reasons. To see them, run the following:

py.test -rxs

This will show extra information about xfails and skips.

Parametrizing Tests

I showed you how to parametrize fixtures, but we can also parametrize tests individually. In some cases we only need to create a one-off test and a fixture doesn't make sense since it won't be reused. Instead we can mark the test for parametrizing and do the following:

import pytest

@pytest.mark.parametrize('input, expected', [
    ('2 + 3', 5),
    ('6 - 4', 2),
    pytest.mark.xfail(('5 + 2', 8)),
])
def test_equations(input, expected):
    """Test that equation works"""
    assert eval(input) == expected

In this example, note a few things: the first argument to parametrize is a comma-separated string naming the test's arguments, each tuple in the list supplies one set of values, and an individual parameter set can be wrapped in pytest.mark.xfail when we expect it to fail. Just like that, we can truly test a module with varying inputs to ensure that it is functioning correctly in many cases. This saves us from writing a new test for each case.
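As a side note (my own addition, not part of the original example), parametrize decorators can also be stacked, in which case pytest runs the test for every combination of values:

import pytest

# Stacked parametrize decorators produce the cross-product:
# 2 values of x times 3 values of y = 6 test cases.
@pytest.mark.parametrize('x', [0, 1])
@pytest.mark.parametrize('y', [2, 4, 8])
def test_combinations(x, y):
    assert x * y == y * x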

Monkeypatching and Mocking

Sometimes tests need functionality that depends on global settings or that calls code that cannot be easily tested, such as network access. The monkeypatch function argument helps us here. It allows us to set or delete an attribute, dictionary item, or environment variable, or to modify sys.path for importing.

Let's see an example of this in use:

import os

def test_func(monkeypatch):
    # Replace os.getcwd for the duration of this test only
    monkeypatch.setattr(os, "getcwd", lambda: "/")
    assert os.getcwd() == "/"

In a test we may not want to get the current working directory of our test so it makes sense to provide an expected value for the location of the actual code. By using monkeypatch, we can set the getcwd function on the os module to return /. Now in our test we have a path we expect and can carry on as if we were running it live on the server.

The above example uses setattr but monkeypatch gives us more than that.
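Here's a quick sketch of the other helpers; the names and values below are made up, but the methods are pytest's own:

def test_monkeypatch_helpers(monkeypatch):
    monkeypatch.setenv('APP_ENV', 'testing')          # set an environment variable
    monkeypatch.delenv('APP_DEBUG', raising=False)    # delete one, ignoring absence
    settings = {'debug': True}
    monkeypatch.setitem(settings, 'debug', False)     # patch a dictionary entry
    monkeypatch.delitem(settings, 'debug')            # or remove it entirely
    monkeypatch.syspath_prepend('/tmp/fake_modules')  # prepend to sys.path for imports
    # Every change is automatically undone when the test finishes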

As well, I recommend installing Mock to work with tests.

Mock allows us to create fake objects that can act as the real thing. This isn't a tutorial about Mock, so I won't get into all of the things you can do, but a common use case with pytest might be to set the value of a monkeypatch to a mock.

import os
from mock import Mock

def test_func(monkeypatch):
    cwd = Mock(return_value='/')
    monkeypatch.setattr(os, 'getcwd', cwd)
    assert os.getcwd() == '/'

It's essentially the same thing, but imagine wanting to return a full object rather than a plain value. We can build the mocked object and have a patched module function return it.
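For instance, here's a minimal sketch (my own example, using os.stat as a convenient target) where the mock returns an object carrying just the attribute the test cares about:

import os
from mock import Mock

def test_file_size(monkeypatch):
    # Build a fake result object with only the attribute our code reads
    fake_stat = Mock()
    fake_stat.st_size = 1024

    # os.stat now returns our fake object for any path
    monkeypatch.setattr(os, 'stat', Mock(return_value=fake_stat))

    assert os.stat('/any/path').st_size == 1024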

xUnit-Style Setup and Teardown

With XUnit style testing, we are always setting up and tearing down tests. Pytest does support this style of testing if you choose to use it.

def setup_module(module):
    """Run once at the start of a test module (file)"""

def teardown_module(module):
    """Run once at the end of a test module (file)"""

class TestClass:
    @classmethod
    def setup_class(cls):
        """Run once before any test method in the class"""

    @classmethod
    def teardown_class(cls):
        """Run once after all test methods in the class"""

    def setup_method(self, method):
        """Run before each test method"""

    def teardown_method(self, method):
        """Run after each test method"""

def setup_function(function):
    """Run before each test function"""

def teardown_function(function):
    """Run after each test function"""

As you can see, there are a number of setup and teardown functions for all cases you may need.
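To make that concrete, here's a small sketch (my own example) of the class-level hooks sharing state across a class's tests:

class TestRecorder:
    @classmethod
    def setup_class(cls):
        cls.calls = []  # created once, before any test in the class runs

    def setup_method(self, method):
        self.calls.append(method.__name__)  # runs before each test

    def test_first(self):
        assert self.calls[-1] == 'test_first'

    @classmethod
    def teardown_class(cls):
        cls.calls = []  # cleaned up after the last test in the class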

Config Files

There are a lot of options for running pytest, so typing them every time can be annoying. Luckily, you can use config files to set up your standard options so every py.test call has them preloaded.

When pytest is run, it will search for the first instance of pytest.ini, setup.cfg, or tox.ini. Once it finds one, it looks for a [pytest] block; if found, it loads those options and then moves on to the tests. If the file is missing the pytest block, it continues searching. Note that you cannot chain configs.

Where should these live?

This is up to you. Pytest starts from the command-line arguments (or the current working directory) and looks upward through parent directories until it finds a file that matches. I like to keep my pytest.ini in the root of the application personally. As an example config, see the following:

[pytest]
addopts = -rsx --tb=short
norecursedirs = node_modules
python_files = check_*.py
python_classes = Check
python_functions = check

The above config file will add the -rsx option for skip and xfail test information as well as set the traceback format to a short format for easier reading. This will be applied to every py.test call so we don't need to type them any more.

We also set the norecursedirs to node_modules so pytest will not look in that directory for tests.

For the python_* options, we specify that we want pytest to collect files matching check_*.py, classes that start with Check, and functions that start with check.

One other thing to talk about is conftest.py files.

You can create a file called conftest.py, place your fixtures, settings manipulations, etc. in it, and it will be loaded before the tests in its directory and any subdirectories. This is a good way to keep your tests and fixtures in separate places, and it allows fixtures to become global for use across different test files.
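For example, a minimal conftest.py might look like this (the fixture itself is a made-up example):

# conftest.py -- loaded automatically; test files need no import
import pytest

@pytest.fixture
def sample_user():
    """A shared fixture available to every test below this directory"""
    return {'name': 'alice', 'active': True}

Any test in the same directory or below can then simply accept sample_user as an argument.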


Finally, I'll make a note that there are all kinds of plugins for pytest. Many developers have written plugins that integrate pytest with popular Python modules and frameworks. To get the full list of plugins, check out the pytest website.
