Computing Reviews
Prioritizing tests for software fault diagnosis
Gonzalez-Sanchez A., Piel É., Abreu R., Gross H., van Gemund A. Software--Practice & Experience 41(10): 1105-1129, 2011. Type: Article
Date Reviewed: Dec 7 2011

This paper addresses the prioritization of regression tests--that is, determining which tests should be conducted early in the testing process and which ones should be deferred until later, if conducted at all. Typically, tests are prioritized so that those tests covering the largest number of modules are executed first. This increases the failure detection rate by inducing failures early in the testing process, thereby reducing the cost of testing by requiring fewer tests overall.
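The traditional scheme described above amounts to a greedy "additional coverage" ordering. As a minimal sketch (the test and module names here are invented for illustration):

```python
# Hypothetical coverage data: test name -> set of modules it exercises.
coverage = {
    "t1": {"m1", "m2", "m3"},
    "t2": {"m2"},
    "t3": {"m3", "m4"},
}

def prioritize_by_coverage(coverage):
    """Greedy 'additional coverage' ordering: repeatedly pick the test
    that covers the most modules not yet covered by earlier tests."""
    order, covered = [], set()
    remaining = dict(coverage)
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

print(prioritize_by_coverage(coverage))  # ['t1', 't3', 't2']
```

Note that this ordering is fixed before any test is run, which is precisely the property the authors argue against.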

However, the authors argue that this traditional approach to regression testing is suboptimal. Yes, it decreases the cost of testing, but it increases the total cost of the testing and debugging process. Once a test encounters a failure, debugging begins with a fault localization step to find the faults that induced that failure. Whereas conducting the tests is often automated, fault localization is a manual process and hence considerably more expensive to carry out. A failure resulting from a test with high coverage provides little guidance on where to look to find the fault--many candidate modules could have caused the failure. As a result, the more expensive fault localization step takes longer because the debugger has to manually weed through the many candidate modules.

The authors propose an alternative Bayesian statistical approach that emphasizes not coverage but finding likely candidate modules with faults. Unlike the traditional approach in which the order of tests is predetermined, their technique is dynamic in that the next test conducted depends on the results of the previous tests. Their approach begins by constructing a static table showing which modules are covered by which tests. This table is used to select which test is to be conducted next, based on the results of previous tests. If the previous tests did not fail, then the modules that have already been tested have a lower priority for being selected for the next test. On the other hand, if a failure is encountered, then the next test is selected to differentiate between the candidates identified in the previous test.
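The dynamic selection loop described above can be sketched as follows. This is a simplified illustration, not the paper's exact algorithm: it assumes a single fault, deterministic failure detection, and invented test and module names, and it uses a "best bisection of probability mass" heuristic as a stand-in for the authors' selection criterion.

```python
def diagnose(coverage, run_test, modules):
    """Single-fault sketch: maintain a posterior over which module is
    faulty, pick the test whose covered probability mass is closest to
    0.5 (an information-gain proxy), and update on each pass/fail."""
    p = {m: 1.0 / len(modules) for m in modules}
    remaining = dict(coverage)
    while remaining and sum(v > 0 for v in p.values()) > 1:
        # Choose the test that best bisects the current probability mass.
        best = min(remaining,
                   key=lambda t: abs(sum(p[m] for m in remaining[t]) - 0.5))
        covered = remaining.pop(best)
        failed = run_test(best)
        # Bayes-style update with perfect detection assumed: a failure
        # implicates the covered modules, a pass exonerates them.
        for m in p:
            if (m in covered) != failed:
                p[m] = 0.0
        total = sum(p.values())
        if total:
            p = {m: v / total for m, v in p.items()}
    return max(p, key=p.get)

cov = {"t1": {"m1", "m2", "m3"}, "t2": {"m2"}, "t3": {"m3", "m4"}}
# Simulated oracle: a test fails iff it covers the (hidden) faulty module.
print(diagnose(cov, lambda t: "m4" in cov[t], {"m1", "m2", "m3", "m4"}))  # m4
```

The key contrast with the coverage-based sketch is that the choice of each test depends on the outcomes observed so far, which is what lets the technique home in on a small candidate set quickly.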

The authors present simulation results comparing their technique with several others. They show that their technique decreases the monetary cost of the testing and debugging process by as much as 60 percent. However, this comes at the expense of quality in that failure detection performance is reduced. Another limitation is that the effectiveness of their technique is rigorously demonstrated for only single-fault cases.

The paper builds in a logical sequence, providing an introductory section, a simple example to illustrate the subject, an explanation of the authors’ approach, and extensive simulation results that show its cost effectiveness. However, no paper is perfect. The authors’ initial equation for total cost, which serves as the basis of the whole paper, seems to mix units for the monetary cost and time cost in an inappropriate way. In addition, the equation includes the factor α relating the relative costs of testing and localization. But the explanation of α is unclear: α is defined as a ratio, but it’s not clear what is in the numerator and what is in the denominator. Finally, the first equation in section 2.3 reverses the terms before and after the | symbol in the conditional probability.
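For reference, Bayes' rule relates the two orderings of a conditional probability; in generic notation (the paper's own symbols may differ), with d a diagnosis hypothesis and o an observed test outcome:

```latex
% Generic Bayes' rule: posterior over the diagnosis given the observation,
% expressed via the likelihood of the observation given the diagnosis.
P(d \mid o) = \frac{P(o \mid d)\, P(d)}{P(o)}
```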

Whereas this paper adds to the rich literature on testing, this literature is often not used in practice. Most testing is neither automated nor based on well-defined criteria; it is conducted in an ad hoc order and not in accordance with a well-defined metrics-based priority scheme as this paper assumes.

Reviewer: A. E. Salwin | Review #: CR139654 (1205-0491)
  Editor Recommended
Testing And Debugging (D.2.5)
Quality Assurance (K.6.4 ...)
Other reviews under "Testing And Debugging": Date
Software defect removal
Dunn R., McGraw-Hill, Inc., New York, NY, 1984. Type: Book (9789780070183131)
Mar 1 1985
On the optimum checkpoint selection problem
Toueg S., Babaoglu O. SIAM Journal on Computing 13(3): 630-649, 1984. Type: Article
Mar 1 1985
Software testing management
Royer T., Prentice-Hall, Inc., Upper Saddle River, NJ, 1993. Type: Book (9780135329870)
Mar 1 1994

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2024 ThinkLoud®