Writing Python Regression Tests
-------------------------------
Skip Montanaro
(skip@mojam.com)

Introduction

If you add a new module to Python or modify the functionality of an existing
module, you should write one or more test cases to exercise that new
functionality. The mechanics of how the test system operates are fairly
straightforward. When a test case is run, the output is compared with the
expected output that is stored in .../Lib/test/output. If the test runs to
completion and the actual and expected outputs match, the test succeeds; if
not, it fails. If an ImportError is raised, the test is not run.

You will be writing unit tests (isolated tests of functions and objects
defined by the module) using white box techniques. Unlike black box
testing, where you only have the external interfaces to guide your test case
writing, in white box testing you can see the code being tested and tailor
your test cases to exercise it more completely. In particular, you will be
able to refer to the C and Python code in the CVS repository when writing
your regression test cases.

Executing Test Cases
If you are writing test cases for module spam, you need to create a file
in .../Lib/test named test_spam.py and an expected output file in
.../Lib/test/output named test_spam ("..." represents the top-level
directory in the Python source tree, the directory containing the configure
script). From the top-level directory, generate the initial version of the
test output file by executing:
./python Lib/test/regrtest.py -g test_spam.py
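
As a concrete illustration, a minimal test_spam.py might look something like
the sketch below. The spam module and its add() function are hypothetical
stand-ins used only for this example; test_support, verbose and TestFailed
are the framework pieces described elsewhere in this document:

    from test_support import verbose, TestFailed
    import spam                          # hypothetical module under test

    print 'testing spam.add'             # captured and compared by regrtest
    if spam.add(2, 2) != 4:              # assumed function, for illustration
        raise TestFailed, 'spam.add(2, 2) != 4'
    if verbose:
        print 'extra diagnostics, shown only in verbose runs'

Running regrtest.py with the -g flag as shown above captures whatever the
script prints and stores it as the expected output file
(.../Lib/test/output/test_spam in this example).
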
Any time you modify test_spam.py you need to generate a new expected
output file. Don't forget to desk check the generated output to make sure
it's really what you expected to find! To run a single test after modifying
a module, simply run regrtest.py without the -g flag:

./python Lib/test/regrtest.py test_spam.py

While debugging a regression test, you can of course execute it
independently of the regression testing framework and see what it prints:
./python Lib/test/test_spam.py
To run the entire test suite, make the "test" target at the top level:

make test

On non-Unix platforms where make may not be available, you can simply
execute the two runs of regrtest (optimized and non-optimized) directly:

./python Lib/test/regrtest.py
./python -O Lib/test/regrtest.py

Test cases generate output based upon values computed by the test code.
When executed, regrtest.py compares the actual output generated by executing
the test case with the expected output and reports success or failure. It
stands to reason that if the actual and expected outputs are to match, they
must not contain any machine dependencies. This means your test cases
should not print out absolute machine addresses (e.g. the return value of
the id() builtin function) or floating point numbers with large numbers of
significant digits (unless you understand what you are doing!).
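
For instance (a small sketch, not drawn from any existing test), printing a
rounded value or the result of a comparison keeps the output stable, while
printing id() results or full-precision floats would not:

    result = 2.0 / 3.0
    print round(result, 6)                  # limited precision is portable
    print abs(result - 0.666667) < 1e-6     # prints 0 or 1, machine-independent
    # avoid: print id(result)               # absolute addresses differ per run
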
Test Case Writing Tips
Writing good test cases is a skilled task and is too complex to discuss in
detail in this short document. Many books have been written on the subject.
object-oriented software revolution, so doesn't cover that subject at all.
Unfortunately, it is very expensive (about $100 new). If you can borrow it
or find it used (around $20), I strongly urge you to pick up a copy.

The most important goal when writing test cases is to break things. A test
case that doesn't uncover a bug is much less valuable than one that does.
In designing test cases you should pay attention to the following (a sketch
pulling several of these points together follows the list):
* Your test cases should exercise all the functions and objects defined
in the module, not just the ones meant to be called by users of your
module. This may require you to write test code that uses the module
in ways you don't expect (explicitly calling internal functions, for
example - see test_atexit.py).
* You should consider any boundary values that may tickle exceptional
conditions (e.g. if you were writing regression tests for division,
you might well want to generate tests with numerators and denominators
at the limits of floating point and integer numbers on the machine
performing the tests as well as a denominator of zero).
* You should exercise as many paths through the code as possible. This
may not always be possible, but is a goal to strive for. In
particular, when considering if statements (or their equivalent), you
want to create test cases that exercise both the true and false
branches. For loops, you should create test cases that exercise the
loop zero, one and multiple times.
* You should test with obviously invalid input. If you know that a
function requires an integer input, try calling it with other types of
objects to see how it responds.
* You should test with obviously out-of-range input. If the domain of a
function is only defined for positive integers, try calling it with a
negative integer.
* If you are going to fix a bug that wasn't uncovered by an existing
test, try to write a test case that exposes the bug (preferably before
fixing it).
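
The sketch below pulls several of these points together for a small div()
helper. The helper is defined inline purely for illustration; in a real
regression test you would exercise functions imported from the module under
test:

    from test_support import TestFailed
    import sys

    def div(a, b):                      # stand-in for the code under test
        return a / b

    # boundary values: operands at the platform's integer limit, printed in
    # a machine-independent way
    print div(sys.maxint, sys.maxint) == 1
    # exceptional condition: a zero denominator must raise ZeroDivisionError
    try:
        div(1, 0)
    except ZeroDivisionError:
        print 'zero denominator refused'
    else:
        raise TestFailed, 'div(1, 0) did not raise ZeroDivisionError'
    # obviously invalid input: a string where a number is required
    try:
        div('1', 2)
    except TypeError:
        print 'bad operand type refused'
    else:
        raise TestFailed, "div('1', 2) did not raise TypeError"
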
Regression Test Writing Rules
Each test case is different. There is no "standard" form for a Python
regression test case, though there are some general rules (illustrated in
the sketch after this list):
* If your test case detects a failure, raise TestFailed (found in
test_support).
* Import everything you'll need as early as possible.
* If you'll be importing objects from a module that is at least
partially platform-dependent, only import those objects you need for
the current test case to avoid spurious ImportError exceptions that
prevent the test from running to completion.
* Print all your test case results using the print statement. For
non-fatal errors, print an error message (or omit a successful
completion print) to indicate the failure, but proceed instead of
raising TestFailed.
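
A sketch that follows these rules might look like this; spam, open_can and
label are hypothetical names used only for illustration:

    # import everything needed up front, and only the objects this test
    # uses from a module that is partially platform-dependent
    from test_support import TestFailed
    from spam import open_can, label

    can = open_can()
    if can is None:
        # fatal: nothing further can be tested, so raise TestFailed
        raise TestFailed, 'open_can() returned None'
    if label(can) != 'SPAM':
        # non-fatal: print a message (the resulting output mismatch flags
        # the failure) and proceed instead of raising TestFailed
        print 'unexpected label:', label(can)
    print 'basic can handling OK'
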
Miscellaneous
There is a test_support module you can import from your test case. It
provides the following useful objects:
* TestFailed - raise this exception when your regression test detects a
failure.
* findfile(file) - you can call this function to locate a file somewhere
along sys.path or in the Lib/test tree - see test_linuxaudiodev.py for
an example of its use.
* verbose - you can use this variable to control print output. Many
modules use it. Search for "verbose" in the test_*.py files to see
lots of examples.
* fcmp(x,y) - you can call this function to compare two floating point
numbers when you expect them to only be approximately equal within a
fuzz factor (test_support.FUZZ, which defaults to 1e-6); a brief usage
sketch follows this list.
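
A brief usage sketch of these helpers follows. The data file name is
hypothetical, and fcmp is assumed to behave like the builtin cmp, returning
0 when its arguments compare approximately equal:

    from test_support import verbose, findfile, fcmp, TestFailed

    # fuzzy float comparison (assumption: 0 means an approximate match)
    if fcmp(0.1 + 0.2, 0.3) != 0:
        raise TestFailed, 'fcmp(0.1 + 0.2, 0.3)'

    # locate a data file along sys.path or in the Lib/test tree
    # ('spamdata.txt' is a made-up name for this example)
    datafile = findfile('spamdata.txt')
    if verbose:
        print 'using data file', datafile
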
Python and C statement coverage results are currently available at
http://www.musi-cal.com/~skip/python/Python/dist/src/
As of this writing (July, 2000) these results are being generated nightly.
You can refer to the summaries and the test coverage output files to see
where coverage is adequate or lacking and write test cases to beef up the
coverage.