There are various frameworks for writing tests (eunit, ct, etc.) and even for automatically generating tests (e.g. QuickCheck), but how do you know when you have done "enough" testing? Test adequacy metrics measure how extensively your test set exercises your system. The 'cover' tool included in OTP implements a very simple metric, line coverage: a test set that doesn't even execute some parts of the code is certainly not testing the system thoroughly. However, simply executing every line still doesn't ensure that the system is "well tested". This talk will cover two more advanced metrics: MC/DC analysis, provided by the Smother tool, and mutation testing, supported by the mu2 tool.
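A minimal sketch of the two ideas, written in Python for brevity (the talk's tools, Smother and mu2, target Erlang, and the function and test values here are invented for illustration): a test set can achieve 100% line coverage yet fail to exercise a compound decision, which a mutant of that decision exposes.

```python
# A single decision with two conditions (hypothetical example).
def grant_access(is_admin, has_token):
    if is_admin or has_token:
        return "allowed"
    return "denied"

# These two calls execute every line, so a line-coverage tool
# would report 100% coverage:
assert grant_access(True, True) == "allowed"
assert grant_access(False, False) == "denied"

# A mutant of the kind a mutation-testing tool might generate,
# replacing 'or' with 'and':
def grant_access_mutant(is_admin, has_token):
    if is_admin and has_token:
        return "allowed"
    return "denied"

# The line-coverage-adequate tests cannot tell the mutant apart
# from the original -- the mutant "survives":
assert grant_access_mutant(True, True) == "allowed"
assert grant_access_mutant(False, False) == "denied"

# An MC/DC-style test case, chosen so that is_admin alone flips
# the decision's outcome, kills the mutant:
assert grant_access(True, False) == "allowed"        # original
assert grant_access_mutant(True, False) == "denied"  # mutant detected
```

A surviving mutant signals an input combination the test set never distinguishes; MC/DC-adequate test sets, which require each condition to independently affect the decision, tend to kill such mutants.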
Talk objectives:
Give an introduction to the purpose and application of MC/DC analysis and mutation analysis of test sets.
Target audience:
Anyone developing tests or responsible for testing/acceptance of Erlang systems.
Ramsay Taylor is a Research Associate (post-doc) at the University of Sheffield, working on the PROWESS project. After a few years working in safety-critical systems for aviation he returned to academia to undertake a PhD in formal verification. Upon completion he began researching techniques for testing and verifying Erlang programs. These days he works in two areas: test adequacy for Erlang (testing other people's test sets) and state machine inference (trying to automatically determine what other people's Erlang code actually does). GitHub: ramsay-t