Python tools

Pylint

Pylint checks Python code for errors and also checks it against a coding standard based on PEP 8. To run Pylint on all Python files in the workspace and write the output in a machine-readable format, use the following in Execute shell (note: this does not recurse into subdirectories):

pylint *.py --msg-template="{path}:{line}: [{msg_id}({symbol}), {obj}] {msg}" > pylint.txt || true

Pylint exits with a non-zero status whenever it finds any issues, which would cause the build to fail automatically. This behaviour is usually not desirable, so appending || true forces the shell step to exit successfully and stops Pylint's exit code from failing the build.
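To try the command out, any Python file in the workspace will do. As a minimal sketch, the hypothetical example.py below contains the kind of issues Pylint typically reports, such as an unused import and missing docstrings:

# example.py - a deliberately imperfect module for trying out the command above
import os  # never used, so Pylint reports an unused-import warning


def greet(name):  # missing docstrings are reported as convention messages
    print("Hello, " + name)


greet("Jenkins")

With the --msg-template shown above, each issue then appears in pylint.txt as a single line of the form path:line: [msg_id(symbol), obj] msg.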

To publish the Pylint results in a graph, choose Report Violations as a post-build action. In the pylint row, enter the output file under XML filename pattern, e.g. **/pylint.txt. You can also adjust the thresholds (the numbers on the left) to set how many issues Pylint needs to detect before the job's weather icon becomes stormy or the build becomes unstable.

As builds complete, a graph appears on the job page showing the number of Pylint violations across builds. Click on the graph to see a colour-coded version showing the severity of the violations. Here you can also see the files that caused issues in the most recent build; click on a file name to see its source code with the violations highlighted. To see the violations for a specific build, click on the graph on that build's page.

Python Unit Tests (Pytest or Nosetests)

Pytest and Nose both run unit tests written in Python. To use one of them and publish the results in a machine-readable format, use one of the following commands in Execute shell:

nosetests tests.py --with-xunit --xunit-file=unittest.xml || true

py.test --junitxml unittest.xml tests.py || true

Here tests.py is your file containing the unit tests. If even one test fails, the test runner exits with a non-zero status and the build fails, so if you don't want this behaviour, || true is required at the end of the command. Nosetests is recommended as it produces more useful error messages, but py.test also works.
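As a minimal sketch, a tests.py along the following lines (the file and test names are just placeholders) works unchanged with either runner, since both nose and pytest can collect standard unittest test cases:

import unittest


class TestArithmetic(unittest.TestCase):
    """A couple of trivial tests to demonstrate the published report."""

    def test_addition(self):
        self.assertEqual(1 + 1, 2)

    def test_subtraction_fails(self):
        # deliberately wrong, so the report shows at least one failure
        self.assertEqual(5 - 3, 1)


if __name__ == "__main__":
    unittest.main()

Running either command above against this file should produce a unittest.xml recording one passed and one failed test, which is what gets published in the next step.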

To publish the results in a graph, choose Publish xUnit test result report as a post-build action, then click Add and select JUnit. Under JUnit Pattern, enter the file produced above, e.g. unittest.xml. Below this, you can also set the thresholds for the number of skipped or failed tests allowed before the build becomes unstable or fails.

As builds complete and tests run, a colour-coded graph is produced on the job page showing the number of successful, skipped and failed tests. Click on the graph to see the test files that were run and the tests that failed in the most recent build. Click on a particular failed test to see its error message and stack trace. The same can be done for a particular build from that build's page.