|Version 11 (modified by jarrod.millman, 6 years ago)|
CAUTION: This page is in the process of being migrated from an older page. Until this process is completed, information found here may be incorrect.
SciPy uses the NumPy testing system, which is based on the unit testing framework offered by unittest.py. Our goal is that every module and package in SciPy should have a thorough set of unit tests. These tests should exercise the full functionality of a given routine as well as its robustness to erroneous or unexpected input arguments. The best time to write the tests is when the module itself is being written. In addition, whenever a new bug is found in a routine, a new test for that specific case should be written and added to the test suite to prevent that bug from creeping back in unnoticed.
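To illustrate the bug-capture idea, here is a minimal sketch of such a regression test using plain unittest (which the NumPy testing system is built on). The routine clip_fraction() and the bug it describes are made up for this example:

```python
import unittest

# Hypothetical example: suppose a bug report showed that clip_fraction()
# (a made-up routine for this sketch) returned values above 1.0 for
# inputs slightly larger than 1. After fixing the routine, a test that
# pins down the exact failing input keeps the bug from creeping back in.

def clip_fraction(x):
    """Clamp x into the closed interval [0.0, 1.0]."""
    return min(max(x, 0.0), 1.0)

class TestClipFractionRegression(unittest.TestCase):
    def test_reported_overflow_case(self):
        # the exact input from the (hypothetical) original bug report
        self.assertEqual(clip_fraction(1.0000001), 1.0)

# run the suite programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestClipFractionRegression)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The key point is that the test encodes the exact input from the bug report, so any reintroduction of the bug fails loudly.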
To run SciPy's full test suite, use the following:
>>> import scipy
>>> scipy.test()
The test method may take two arguments: the first specifies the level of testing and the second the verbosity. The higher the level, the more tests will be run. If the verbosity is 1 or less, the tests just show information messages about the tests that are run; if it is greater than 1, they also provide warnings about missing tests. So if you want to run every test and get messages about which modules don't have tests:
>>> scipy.test(level=10, verbosity=2)   # or:
>>> scipy.test(10, 2)
Finally, if you are only interested in testing a subset of SciPy, for example the integrate module, use the following:

>>> import scipy.integrate
>>> scipy.integrate.test()
The rest of this page will give you a basic idea of how to add unit tests to modules in SciPy. It is extremely important for us to have extensive unit testing, since this code is used by scientists and researchers and is developed by a large number of people spread across the world. So, if you are writing a package that you'd like to become part of SciPy, please write the tests as you develop the package. Also, since much of SciPy is legacy code that was originally written without unit tests, there are still several modules that don't have tests yet. To see which modules still lack tests, use:
>>> scipy.test(level=0, verbosity=2)    # or:
>>> scipy.test(0, 2)
Please feel free to choose one of these modules and develop tests for it, either after or even while you read through this introduction.
Writing your own tests
Ideally, every Python module, extension module, or subpackage in the SciPy package directory should have a corresponding test_<name>.py file. This file should define classes derived from NumpyTestCase (or unittest.TestCase) with names starting with test_. The methods of these classes whose names start with bench_, check_, or test_ are passed on to the unittest machinery. In addition, the value of the first optional argument of these methods determines the level of the corresponding test (the default level is 1).
Suppose you have a SciPy module scipy/xxx/yyy.py containing a function zzz(). To test it, you would start by creating a test module called test_yyy.py. This test file should include a class that tests zzz(), with test methods exercising various aspects of zzz(). Within these test methods, assert statements check whether some condition holds; if an assert fails, the test fails. The NumpyTest().run() call actually runs the test suite. A minimal example of a test_yyy.py file that implements tests for the SciPy package module scipy.xxx.yyy is shown below:
import sys
from numpy.testing import *

set_package_path()
# import xxx symbols
from xxx.yyy import zzz
restore_path()

# Optional:
#   set_local_path()   # import modules located in the same directory as this file
#   restore_path()

class test_zzz(NumpyTestCase):
    def check_simple(self, level=1):
        assert zzz() == 'Hello from zzz'
    #...

if __name__ == "__main__":
    NumpyTest().run()
NumpyTestCase is derived from unittest.TestCase and basically only adds one method, measure(self, code_str, times=1). Note that all classes that inherit from the TestCase class are picked up by the test runner when using testall. For more detailed information on defining test classes, see the official documentation for the Python unit testing framework.
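Since the underlying machinery is plain unittest, the discovery rule can be demonstrated with a standalone sketch (the class and method names here are made up for illustration): the test loader collects methods by name prefix and ignores everything else.

```python
import unittest

# Plain-unittest sketch of the discovery rule described above: the test
# loader collects the methods whose names start with its prefix
# (by default "test") and ignores everything else.

class TestNames(unittest.TestCase):
    def test_alpha(self):
        self.assertTrue(True)

    def test_beta(self):
        self.assertEqual(1 + 1, 2)

    def helper(self):
        # no "test" prefix, so the loader skips this method
        pass

names = unittest.defaultTestLoader.getTestCaseNames(TestNames)
# names -> ['test_alpha', 'test_beta']
```

This is the same mechanism the NumPy test runner builds on, extended there to also pick up the bench_ and check_ prefixes.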
The tests/ directory
Rather than keeping the code and the tests in the same directory, we put all the tests for a given subpackage in a tests/ subdirectory. For our example, if it doesn't already exist, you will need to create a tests/ directory in scipy/xxx/. The path for test_yyy.py is then scipy/xxx/tests/test_yyy.py.
Once scipy/xxx/tests/test_yyy.py is written, it's possible to run the tests by going to the tests/ directory and typing:

python test_yyy.py
Or if you add scipy/xxx/tests/ to the Python path, you could run the tests interactively in the interpreter like this:
>>> import test_yyy
>>> test_yyy.test()
The __init__.py file
Usually, however, adding the tests/ directory to the Python path isn't desirable. Instead it is better to invoke the tests straight from the module xxx. To this end, simply place the following two lines at the end of your package's __init__.py file:
from numpy.testing import NumpyTest
test = NumpyTest().test
Now you can do the following to test your module:
>>> import scipy
>>> scipy.xxx.test()
Also, when invoking the entire SciPy test suite, your tests will be found and run:
>>> import scipy
>>> scipy.test()   # your tests are included and run automatically!
The numpy.testing package holds a few helper routines. Several were discussed in the previous section. A few others are verbose variants of assert that give feedback about what error occurred. One of them, assert_array_equal(), is useful for comparing values in two arrays. The library of assert functions should grow to test for other general cases.
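For example, assert_array_equal() and assert_almost_equal() from numpy.testing can be used directly, outside of any test class; this minimal sketch shows what happens on a match and on a mismatch:

```python
import numpy as np
from numpy.testing import assert_array_equal, assert_almost_equal

# assert_array_equal checks shape and every element, and raises an
# AssertionError with a diagnostic message when the arrays differ.
assert_array_equal(np.array([1, 2, 3]), np.array([1, 2, 3]))  # passes silently

try:
    assert_array_equal(np.array([1, 2, 3]), np.array([1, 2, 4]))
    raised = False
except AssertionError:
    raised = True  # the error message pinpoints the mismatching elements

# assert_almost_equal tolerates small floating-point rounding error,
# which a plain == comparison would not
assert_almost_equal(0.1 + 0.2, 0.3, decimal=7)
```

Using these verbose asserts instead of a bare assert on array equality gives far more informative failure messages.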
Ideas for future improvements
- Speed testing: It would be nice to have speed-testing facilities so that we could keep track of whether new code hurts or helps performance; they could provide some simple benchmarking facilities as well. To do this correctly, we'd need something like start_timer() and stop_timer() calls that could be placed in the check_xxx() methods. This would allow the setup and error-checking code at the beginning and end of a check method to be excluded from the timing. If these functions weren't used in a check method, the entire method would be timed.
- Regression testing and database: If each test result could be stored in a database, including timing information, we could see how speed changes over time (using a reference machine).
- Web output of test results: I haven't looked much at the TestRunner machinery, but I imagine it is possible to collect information from the test suite and write it out to an HTML file. We should do this on several architectures every evening on the SciPy site.
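The start_timer()/stop_timer() idea described in the speed-testing item above (hypothetical names, never part of the actual API) could be sketched with a context manager, so that only the code inside the timed region counts toward the benchmark:

```python
import time
from contextlib import contextmanager

# Hypothetical sketch of the start_timer()/stop_timer() idea: only the
# code inside the timed region counts toward the benchmark, so the setup
# and error-checking code in a check method stays out of the measurement.

class BenchTimer:
    def __init__(self):
        self.elapsed = 0.0

    @contextmanager
    def timed(self):
        start = time.perf_counter()
        try:
            yield
        finally:
            self.elapsed += time.perf_counter() - start

timer = BenchTimer()
data = list(range(10000))       # setup code: not timed
with timer.timed():             # only this block is measured
    total = sum(data)
# timer.elapsed now holds just the time spent summing
```

A check method that never enters a timed region would fall back to being timed as a whole, as the item above suggests.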