Computing Reviews
Breeding software test cases with genetic algorithms
Berndt D., Fisher J., Johnson L., Pinglikar J., Watkins A. Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS’03), Track 9, Big Island, HI, Jan 6-9, 2003, 338.1. Type: Proceedings
Date Reviewed: Jun 3 2004

This paper considers the use of genetic algorithms (GAs) to generate test cases that are concentrated in error-prone regions of code. The idea is to use a fitness function that combines the concepts of novelty, proximity, and severity: we want to be able to locate data regions that produce faults, but we also want to locate as many different faults as possible. For the test code considered, the method does appear to be reasonably successful.
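To make the idea concrete, here is a minimal sketch of a fitness function of this shape. It is purely illustrative: the weights, field names, and distance measure are my assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: the paper's exact fitness function is not
# reproduced here. All names and weights below are assumptions.
import math

def fitness(candidate, fossil_record,
            w_novelty=1.0, w_proximity=1.0, w_severity=1.0):
    """Score a candidate test input: reward inputs far from past tests
    (novelty), near inputs that previously exposed faults (proximity),
    and associated with severe failures (severity)."""
    if fossil_record:
        # Novelty: distance to the nearest previously tried input.
        novelty = min(math.dist(candidate["input"], past["input"])
                      for past in fossil_record)
        # Proximity: closeness to the nearest fault-revealing input.
        fault_points = [past["input"] for past in fossil_record if past["fault"]]
        proximity = (1.0 / (1.0 + min(math.dist(candidate["input"], p)
                                      for p in fault_points))
                     if fault_points else 0.0)
    else:
        novelty, proximity = 1.0, 0.0
    severity = candidate.get("severity", 0.0)
    return w_novelty * novelty + w_proximity * proximity + w_severity * severity
```

The tension the authors exploit is visible in the two terms: novelty pushes the population away from the fossil record, while proximity pulls it back toward regions where faults have already been found.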

There are some logistical problems that the paper does not emphasize. First, the fitness function uses the fossil record (past test data sets), along with information about whether each recorded data set detected a fault. Newly generated data sets are added to the fossil record as the GA produces them, which means that, in practice, every generated data set must be checked to determine whether or not it uncovers a fault. The way in which the case study code was seeded with faults meant that this was not an issue; however, it required knowing in advance where the defects to be uncovered were. Second, the novelty component of the fitness function is somewhat naive for floating-point data (and, indeed, for integer data covering the whole integer range), in that simply scaling the data will automatically generate novelty.
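The scaling objection can be illustrated in a few lines. Assuming, for the sake of the example, that novelty is measured as the distance to the nearest fossil-record point (the paper's exact metric is not reproduced here), a trivial rescaling of already tried inputs scores as highly novel:

```python
# Illustration of the novelty criticism (the novelty metric here is an
# assumption, not the paper's): if novelty is distance to the nearest
# fossil-record point, merely rescaling old inputs manufactures "novelty".
import math

def novelty(point, fossil_record):
    """Distance from a candidate input to its nearest past test input."""
    return min(math.dist(point, past) for past in fossil_record)

fossil = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]      # past test inputs
scaled = [(10 * x, 10 * y) for (x, y) in fossil]   # same data, rescaled

for p in scaled:
    print(p, novelty(p, fossil))  # every rescaled point scores high novelty
```

Each rescaled point lies far from everything in the fossil record, so the GA is rewarded for exploring the same structure at a different magnitude rather than for finding genuinely new behavior.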

The paper contains a confusing example of chromosomes consisting of three 12-bit binary integers; the coding used is a mystery, since 229 is 000011100101 in 12-bit binary, and this string does not appear anywhere in the two strings given. Table 1 uses a lot of space to say very little, and the labeling of kn and kp in figures 5 and 6 seems to be at odds with both the text (page 9) and what is said about figure 7.
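The arithmetic behind the encoding puzzle is easy to check:

```python
# 229 written as a 12-bit binary string (the chromosome width the paper uses).
print(format(229, "012b"))  # 000011100101
```

Whatever encoding the authors used for their chromosome strings, it is not the plain fixed-width binary representation one would expect from the text.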

I would recommend this paper only to readers wishing to gauge the full extent of research into the use of genetic algorithms for unit testing; the more general reader would probably do best to avoid it.

Reviewer:  T. Hopkins Review #: CR129707 (0412-1535)
Program Verification (I.2.2)
Self-Modifying Machines (F.1.1)
Testing Tools (D.2.5)
Testing And Debugging (D.2.5)
Other reviews under "Program Verification": Date
Extraction of redundancy-free programs from constructive natural deduction proofs
Takayama Y. Journal of Symbolic Computation 12(1): 29-69, 1991. Type: Article
Oct 1 1992
PROUST: an automatic debugger for PASCAL programs
Johnson W., Soloway E. BYTE 10(4): 179-190, 1985. Type: Article
Aug 1 1985
Modular Verification of Software Components in C
 IEEE Transactions on Software Engineering 30(6): 388-402, 2004. Type: Article
Jan 6 2005

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2024 ThinkLoud®