By analyzing the error reports, and the code changes required to fix faults, in a number of development-stage and released software systems ranging in size from one thousand to one million lines of code, the authors have discovered that the overwhelming majority of failures are triggered by the interaction of a relatively small number of input parameters. This contradicts the intuitive expectation that faults remaining in released systems should be far more difficult to expose, on the grounds that they are most likely caused by complex interactions of input data. The reported study found that the failure-triggering fault interaction (FTFI) number was never more than six, and was generally around four.
Assuming that this holds for all faults in the software being tested, it should be possible (given the FTFI number, n) to perform effectively exhaustive testing by covering all combinations of values for every n-tuple of parameters, where n << k, the total number of input parameters, provided that each parameter takes only a small set of discrete values. Pseudoexhaustive testing could then be attained by partition testing all n-way combinations of equivalence classes.
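The combinatorial savings are easy to see with a small calculation. The following sketch uses hypothetical numbers (not taken from the paper): k = 20 parameters, v = 3 equivalence classes per parameter, and an FTFI number of n = 4. Since any fixed n-tuple of parameters has v^n value combinations, and a single test covers exactly one combination per n-tuple, at least v^n tests are required; this is the naive lower bound, and it is independent of k.

```python
# Hypothetical example values, not figures from the reviewed paper.
k = 20   # total number of input parameters
v = 3    # discrete values (equivalence classes) per parameter
n = 4    # assumed FTFI number

exhaustive = v ** k    # tests needed to try every full combination
naive_bound = v ** n   # naive lower bound for n-way coverage

print(f"exhaustive: {exhaustive:,}")   # roughly 3.5 billion tests
print(f"n-way lower bound: {naive_bound}")  # only 81 tests
```

The contrast (billions of exhaustive tests versus a double-digit lower bound) is what makes the strategy practical for small FTFI numbers.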
The authors provide a simple example to show that the number of test cases required for pseudoexhaustive testing is practical for small FTFI numbers, and they obtain a naive lower bound on the number of test data sets required. The automatic test-case generation tools currently available are capable of producing the required data sets, although the actual number of test cases they generate will be a small multiple of the naive bound. An analysis is also provided showing how this testing strategy should be implemented given the available testing resources.
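To illustrate why generated test sets land within a small multiple of the naive bound, here is a minimal greedy sketch for the pairwise case (n = 2). This is a hypothetical toy generator, not the tools the authors discuss; real tools use far more sophisticated covering-array constructions. Each round it picks the candidate test that covers the most not-yet-covered parameter-value pairs.

```python
from itertools import combinations, product

def pairwise_tests(values_per_param):
    """Greedy sketch of pairwise (n = 2) test generation.

    values_per_param: list of value lists, one per parameter.
    Returns a list of tests (tuples) covering every pair of
    values across every pair of parameters.
    """
    k = len(values_per_param)
    # All (param_i, param_j, value_i, value_j) pairs still to cover.
    uncovered = {
        (i, j, vi, vj)
        for i, j in combinations(range(k), 2)
        for vi in values_per_param[i]
        for vj in values_per_param[j]
    }
    tests = []
    while uncovered:
        # Choose the full assignment covering the most uncovered pairs.
        best, best_gain = None, -1
        for cand in product(*values_per_param):
            gain = sum(
                1 for (i, j, vi, vj) in uncovered
                if cand[i] == vi and cand[j] == vj
            )
            if gain > best_gain:
                best, best_gain = cand, gain
        tests.append(best)
        uncovered = {
            (i, j, vi, vj) for (i, j, vi, vj) in uncovered
            if not (best[i] == vi and best[j] == vj)
        }
    return tests

# Four parameters with three values each: 81 exhaustive tests,
# but the pairwise lower bound is only 3^2 = 9.
suite = pairwise_tests([["a", "b", "c"]] * 4)
print(len(suite))  # a small multiple of the naive bound of 9
```

The greedy suite sits between the lower bound of 9 and the exhaustive 81, consistent with the reviewers' observation that practical tools produce a small multiple of the naive bound.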
This is an exciting new strategy for test design, but more empirical study is needed to provide reliable data on fault interaction across a range of application areas. More practical studies of the use of automatic test-case generation tools in this manner are also required. This is an interesting, well-written, and easy-to-follow paper that is well worth reading by anyone interested in improving the quality of software testing.