These will be published progressively here during the unit.
Material on the theory of testing is mostly drawn from:
For material on the Python testing frameworks, see the Python documentation for the unittest, doctest and Hypothesis libraries. More information on them, and on test-driven development in Python, is available in the following places:
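As a reminder of what the standard-library frameworks look like, here is a minimal sketch combining doctest examples with a unittest test case. (The temperature-conversion function is invented for illustration, not taken from the unit material.)

```python
import unittest

def celsius_to_fahrenheit(c):
    """Convert a temperature from Celsius to Fahrenheit.

    Runnable examples, checked by doctest:

    >>> celsius_to_fahrenheit(0)
    32.0
    >>> celsius_to_fahrenheit(100)
    212.0
    """
    return c * 9 / 5 + 32

class TestConversion(unittest.TestCase):
    def test_freezing_point(self):
        self.assertEqual(celsius_to_fahrenheit(0), 32.0)

    def test_fixed_point(self):
        # -40 degrees is the same temperature in both scales
        self.assertAlmostEqual(celsius_to_fahrenheit(-40), -40.0)
```

Running `python -m doctest -v yourfile.py` checks the docstring examples, and `python -m unittest yourfile.py` runs the test case. Hypothesis-style property-based tests have a similar shape, but generate their inputs automatically rather than using fixed values.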
Although it uses the Java language and Java libraries, a good, practical guide to test-driven development is:
It’s not directly used in the unit, but can be a useful source for seeing how test-driven development is done in practice. Another very useful source on good software practices – one which covers testing, but also many other aspects of software development – is:
There is no need to buy it or read it, but the unit material will sometimes draw on the authors’ way of explaining things.
Roy Osherove provides a good set of guidelines to bear in mind when reviewing your own (or someone else’s) unit tests.
In the lectures and workshops, we examine coverage criteria for tests, but do not focus on particular tools that measure the code coverage of an existing set of tests. Code coverage tools exist for nearly every language, however, and for your own interest, you may wish to experiment with some. For Python, the coverage tool (found here) is commonly used; for Java, one of the best-known is
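To give a feel for what statement-coverage measurement involves, here is a toy sketch using Python’s built-in sys.settrace hook to record which lines of a function actually execute. Real tools like coverage are far more sophisticated; the classify function and the hard-coded line count are invented for illustration.

```python
import sys

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

executed = set()

def tracer(frame, event, arg):
    # Record the line number of every line executed inside classify().
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)            # exercises only the "non-negative" path
sys.settrace(None)

BODY_LINES = 3                          # the if and the two return statements
coverage = len(executed) / BODY_LINES   # 2 of 3 lines executed
```

With only this one test input, the "negative" branch is never run, so statement coverage is about 67% – exactly the kind of gap a coverage report makes visible.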
We do not cover the use of particular tools that generate visualizations of call graphs for a body of code, but these exist for many languages. If you are interested, you may wish to investigate
java-callgraph, which can generate call graphs for Java programs.
In lectures, we mention two mutation testing packages, PIT for Java, and mutpy for Python. Mutation testing packages for many other languages (together with blog posts and papers on the topic of mutation testing) are listed at https://github.com/theofidry/mutation-testing.
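To illustrate the idea behind these tools, here is a hand-rolled sketch (not using PIT or mutpy): we apply a single mutation to a function by hand and ask whether the test suite notices. The functions and the deliberately weak suite are invented for illustration.

```python
def add(a, b):
    return a + b

def add_mutant(a, b):
    # A typical arithmetic-operator mutation: + replaced by -
    return a - b

def run_suite(fn):
    # A deliberately weak suite: 2 + 0 == 2 also holds for 2 - 0,
    # so this check alone cannot distinguish the mutant.
    return fn(2, 0) == 2

original_passes = run_suite(add)           # the suite passes on the original
mutant_killed = not run_suite(add_mutant)  # but the mutant survives
```

A surviving mutant like this one suggests a missing test: adding a check such as `fn(2, 3) == 5` would kill it. Mutation testing tools automate generating many such mutants and report which ones your suite fails to kill.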
The book Test-Driven Development in Python provides an excellent guide to test-driven development of web applications in Python. It is available to read for free online, as well as being available in hard-copy and e-book versions. There is far more material on test-driven web development than we will need for this unit, but it may provide helpful background reading.
For the topics of quality management, standards, risk management, and metrics, the material is largely taken from:
Any later edition is also fine.
For the topics of formal methods and formal specification, the unit briefly covers use of the Alloy specification language and model analysis tool, described in the following book:
You do not need to purchase the book; the material on the Alloy website will be sufficient for our purposes.
For students who are interested – although this goes beyond what we cover in the unit – the freely available textbooks in the series “Software Foundations” provide an excellent guide to using formal methods to improve the reliability of software systems. However, working through those would require a whole semester, and we only have 2 weeks to spend on the topic. (Some more excellent resources on formal methods are listed at https://avigad.github.io/formal_methods_in_education/.)
Students may also be interested in the Dafny language, which includes a Hoare logic–style verifier as part of its compiler.
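Dafny’s requires/ensures clauses have no direct Python equivalent, but the flavour of Hoare-style pre- and postconditions can be loosely mimicked with runtime assertions. This is a sketch of the idea only, not how Dafny itself works; the integer_sqrt function is invented for illustration.

```python
def integer_sqrt(n):
    # "requires": the precondition a Dafny verifier would check statically
    assert n >= 0, "precondition violated: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # "ensures": the postcondition; checked here at runtime, whereas
    # Dafny's verifier would prove it holds for all valid inputs
    assert r * r <= n < (r + 1) * (r + 1)
    return r
```

The key difference: Dafny proves such conditions at compile time for every possible input, whereas Python assertions only check the particular inputs that occur when the program runs.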
Alloy is well-documented, works well for analysing object-oriented systems, and has a freely available graphical tool for analysing models (available at https://alloytools.org/download.html). It is possible to get started using the tool quite quickly, even before knowing a great deal about the language. In addition to the textbook, there is an online tutorial on using Alloy here: https://alloytools.org/tutorials/online/. An alternative tutorial is available at https://www.doc.ic.ac.uk/project/examples/2007/271j/suprema_on_alloy/Web/.
To install the Alloy analyzer:
Download version 4.2 of the Analyzer from the main Alloy website, at https://alloy.mit.edu, under “download”.
If you have a Mac, you can use the .dmg installer file; if you use Linux or Windows, you need to download the .jar file.
You will need to ensure you have a Java runtime installed. On Windows or Mac, if you are not sure whether you do, you can visit https://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html and download the Java SE Runtime Environment 8u211 appropriate for your system.
On Linux systems: a Java runtime can typically be installed using your distribution’s package manager. For instance, on recent versions of Ubuntu, sudo apt install openjdk-8-jre will install one.
On a Mac, double-click the .dmg file and follow the instructions.
On Linux or Windows: once you have installed a Java runtime, simply double-click the .jar file you have downloaded, and the analyzer should launch.
Alloy quick reference sheets
For Alloy syntax all on one page, try visiting https://thomasalspaugh.org/pub/fnd/alloy.html (web page), or viewing https://www.monperrus.net/martin/alloy-quick-ref.pdf (PDF).
If you’re unsure how to express something in Alloy, these may suggest some possibilities.
These are not required reading for the unit, but may be useful for your self-study or in completing assessments.
For solid coverage of how classes, objects and methods work in Python, I’d suggest looking at chapters 16 and 17 of the interactive course “How To Think Like a Computer Scientist”:
For students who are already very familiar with how classes, objects and methods work in other languages, the official Python tutorial has a shorter explanation of how they work in Python, here:
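As a quick reminder of the mechanics those resources cover, here is a minimal class with a constructor, instance attributes and a method. (The BankAccount example is invented for illustration and is not drawn from either resource.)

```python
class BankAccount:
    """A small example class: state lives in instance attributes."""

    def __init__(self, owner, balance=0):
        # __init__ is the constructor; `self` is the instance being created
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        """Add a positive amount to the balance and return the new balance."""
        if amount <= 0:
            raise ValueError("deposit amount must be positive")
        self.balance += amount
        return self.balance

account = BankAccount("Ada")   # calls __init__
account.deposit(50)            # method call; `account` is passed as self
```

Each instance gets its own `owner` and `balance`; methods are just functions that receive the instance as their first argument.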
Techniques used include:
and many others.
The website for the 10th edition of Ian Sommerville’s Software Engineering text contains a number of excellent case studies relevant to testing and quality assurance. They illustrate issues related to:
They include a description of the Ariane 5 launch accident (in which an Ariane 5 rocket exploded on its maiden flight), the flight control system for the Airbus 340 jet airliner, and several safety- or privacy-critical systems.
Erik Dietrich provides guidelines on creating your own code review checklist, including suggestions on what items should be used by a code author versus a code reviewer.
Some specific checklists you may wish to investigate include:
NASA’s Office of Safety and Mission Assurance produced a Software Formal Inspections Guidebook (1993) which you may find useful.
Coding standards and guidelines are often used as the basis for code review checklists.
Some you may wish to investigate include:
the Linux kernel coding style documentation, which recommends, inter alia, that you print off a copy of the GNU Coding Standards and burn it without reading it.
US National Weather Service Office of Hydrologic Development, Software Development Standards and Guidelines
the “MISRA C” guidelines, a set of 127 guidelines for the use of C in safety-critical systems, developed by the UK’s Motor Industry Software Reliability Association.
Google’s Style Guides, for languages including:
A number of real-world and sample SQA plans are available on the web. Some of them assume a fair amount of background knowledge, but are still useful references for seeing how such plans are typically structured.
Although we do not cover them in detail in this unit, many tools are available for analysing code and computing code metrics. Some examples:
PMD is an extensible, cross-language, static code analyser. It scans source code in Java and other languages and looks for potential problems like:
Custom detection rules can be written using XML and/or Java. PMD also provides the ability to calculate code metrics.
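To give a feel for how such analysers work, here is a toy sketch using Python’s standard-library ast module to flag empty exception handlers – roughly the kind of “empty catch block” rule PMD ships for Java. The rule and the source snippet are invented for illustration; PMD itself works on a richer representation than this.

```python
import ast

SOURCE = """
try:
    risky_operation()
except Exception:
    pass
"""

def find_empty_handlers(source):
    """Return line numbers of except blocks whose body is just `pass`."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler)
        and len(node.body) == 1
        and isinstance(node.body[0], ast.Pass)
    ]

findings = find_empty_handlers(SOURCE)  # flags the handler on line 4
```

Real static analysers apply hundreds of rules like this over parsed source, which is why (as with PMD) custom rules are usually expressed against the syntax tree rather than raw text.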
FindBugs is software originally developed by the University of Maryland, which uses static analysis to look for bugs in Java code. Similar to PMD, it allows custom “bug filters” to be written.
Coverity comprises a suite of static and dynamic software analysis tools, marketed by Synopsys. In addition to providing the software commercially, Synopsys offers free analysis to open-source projects via its “Coverity Scan” static analysis service.
You may also find it interesting to read “A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World”, an article by the (academic) authors of Coverity on the process of commercializing their static bug-finding tool from a research project.