Today I was playing with python-coverage, which seems to be the tool of choice for code coverage measurement in Python. Since I am constantly hacking on Jockey’s test suite and want to strive for perfection and cover everything, it sounded like something worthwhile.
First I tried to use it as documented:
python /usr/share/python-support/python-coverage/coverage.py -x tests/run
which just caused the tests not to run at all, for no immediately obvious reason (it worked fine with real Python modules in apport). However, it gets much nicer once you stop wrapping it around the command-line call and instead integrate it into the test suite code itself:
[… run all the tests … ]
(which is more or less what I committed).
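The in-process approach can be sketched like this (a minimal sketch using the current coverage API’s Coverage class; older coverage.py versions expose the same operations as module-level functions, and the test case and fib() function here are made up just to have something to measure):

```python
import io
import unittest

import coverage

# Start measuring before the test code is defined and run,
# then report afterwards.
cov = coverage.Coverage()
cov.start()

def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

class FibTest(unittest.TestCase):
    """Stand-in for the real test suite."""
    def test_fib(self):
        self.assertEqual(fib(6), 8)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(FibTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)

cov.stop()

# show_missing=True adds the "Missing" column with untouched line numbers
buf = io.StringIO()
cov.report(show_missing=True, file=buf)
print(buf.getvalue())
```

Because measurement starts and stops inside the process, no wrapper script is needed and the report covers exactly what the tests exercised.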
This will run all the tests and give me a report on how much code they covered, plus a list of the code lines which weren’t touched. For example:
Name               Stmts   Exec   Cover   Missing
jockey/handlers      248    242     97%   402-405, 420-421, 514
Also, the exclude() interface is much more flexible than putting #pragmas all over the place (which unfortunately do not seem to really work anyway).
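The exclude() call takes a regular expression, so whole classes of lines can be excluded once, centrally, instead of annotating each one. A minimal sketch (using the current API’s Coverage.exclude(); the regexes and the apply_handler() function are made up for illustration):

```python
import io

import coverage

cov = coverage.Coverage()
# Any line matching one of these regexes is excluded from the report,
# replacing scattered "# pragma: no cover" comments.
cov.exclude(r'raise NotImplementedError')
cov.exclude(r'if __name__ == .__main__.:')

def apply_handler(available):
    # hypothetical function, just to have something to measure
    if available:
        return 'enabled'
    raise NotImplementedError  # matches an exclude regex, never counted as missing

cov.start()
status = apply_handler(True)
cov.stop()

buf = io.StringIO()
cov.report(show_missing=True, file=buf)
print(buf.getvalue())
```

The unreached raise line matches an excluded pattern, so it does not show up in the Missing column.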
Now, off to fixing everything to get 100%. I was surprised how many little bugs I found and fixed while completing the tests. Test suites FTW!!