There have been relatively few studies on how effectively different fault-finding strategies work in combination. The authors propose a model to estimate this combined efficacy. They observe that researchers tend to assume statistical independence for repeated applications of the same strategy, and suggest that, in practice, such repetitions should find fewer faults than independence would predict. On the other hand, they prove that when the expected efficacies of different strategies are equal, the most diverse choice of strategies gives the best results. They verify their model through empirical studies on a railroad signaling system. The paper is innovative and interesting.
The paper has a serious limitation, however. Although the authors argue against the assumption of statistical independence, the main proofs are in fact based on that very assumption. They support their case by invoking the law of diminishing returns, under which the probability that a strategy fails increases each time it is applied. Unfortunately, this law need not hold in general, because people learn from experience. Consider a program with only three input cases, exactly one of which triggers the fault. Suppose the strategy is to execute one test case chosen at random. A sensible tester will not repeat a case already tried, so when the strategy is applied the first, second, and third time, the probabilities that it fails to find the fault are two-thirds, one-half, and zero, respectively. In such circumstances, the main proofs, and hence the main results of the paper, no longer apply.
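The counterexample above can be checked with a short sketch. It models a sensible tester as one who samples test cases without replacement; the function name and parameters are illustrative, not from the paper under review:

```python
from fractions import Fraction

def failure_probabilities(num_cases, num_faulty=1):
    """Conditional probability that the k-th application of the
    strategy fails to find the fault, given that all earlier
    applications failed. A sensible tester never repeats a case,
    so each attempt draws uniformly from the untried cases."""
    probs = []
    remaining = num_cases
    for _ in range(num_cases):
        # Of the untried cases, (remaining - num_faulty) miss the fault.
        probs.append(Fraction(remaining - num_faulty, remaining))
        remaining -= 1  # the tried (non-faulty) case is excluded next time
    return probs

# Three input cases, one of which is at fault: 2/3, 1/2, 0.
print(failure_probabilities(3))
```

The failure probability falls with each repetition rather than rising, which is the opposite of what the law of diminishing returns, as used in the proofs, requires.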