Computing Reviews
Automatic generation of random self-checking test cases
Bird D., Munoz C. IBM Systems Journal 22(3): 229-245, 1983. Type: Article
Date Reviewed: Aug 1 1985

In their paper, Bird and Munoz describe an approach to the testing of several large systems. Their methodology starts with the syntax-based test case generators first described by Hanford [1] and incorporates more recent ideas from the work of Bazzichi and Spadafora [2]. Bird and Munoz, however, have resolved the primary limitation of those syntax-based generators and have found an approach to the testing of executable code. Although their tool requires recoding for each application under test, they explain that the changes are minimal, and thus they have created a flexible testing system.

The paper begins by reviewing the significant work others have completed in the area of automatic test case generation, providing an accurate picture of the current state of the field. Next, Bird and Munoz describe the functionality of their tool, along with its limitations. The test case generator creates syntactically and semantically correct test cases that are self-checking, using random statement selection with a biasing factor (weighting). The final two-thirds of the paper describe two detailed examples of the generator's use, in a graphics application and in a sort/merge application.
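To make the mechanism concrete, here is a minimal sketch (in Python, using an invented toy grammar and an eval-based oracle; it is not the authors' code) of how weighted random selection over grammar productions can yield self-checking test cases:

    import random

    # Toy grammar: each nonterminal maps to (production, weight) pairs.
    # The weight is the "biasing factor" that skews random selection.
    GRAMMAR = {
        "expr": [(["term", "+", "expr"], 2),   # favor longer test cases
                 (["term"], 1)],
        "term": [(["num"], 3),
                 (["(", "expr", ")"], 1)],
    }

    def generate(symbol="expr"):
        """Expand a symbol by weighted random choice among its productions."""
        if symbol == "num":
            return str(random.randint(0, 9))
        if symbol not in GRAMMAR:
            return symbol                      # terminal: emit as-is
        productions, weights = zip(*GRAMMAR[symbol])
        chosen = random.choices(productions, weights=weights)[0]
        return "".join(generate(s) for s in chosen)

    def self_checking_case():
        """Pair a random input with its expected result, so the test case
        verifies itself when run against the system under test."""
        source = generate()
        expected = eval(source)    # reference oracle for the toy language
        return f"assert ({source}) == {expected}"

    for _ in range(3):
        print(self_checking_case())

Each emitted case carries its own expected answer, which is what makes it self-checking: running the case against an implementation requires no external judge.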

This paper is a significant step forward in the automatic test case generation field, providing an excellent description of the random, automatic generation of self-checking code. Bird and Munoz also accurately describe the limitations of this tool, and provide a very good analysis of possible uses for automatic test case generators.

The only major drawback of the paper is its lack of analytical discussion of the test results. Given the automatic generation of test cases, accurate mean-time-between-failures (MTBF) statistics can be gathered and analyzed. Coverage statistics can also be gathered by placing meters in the test generator. Both numbers can be used for general reliability comparisons, possibly leading to software reliability standards. Unfortunately, the paper discusses neither statistic.
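For illustration, the bookkeeping the paper omits is not difficult to add; the following sketch (Python, with invented failure times and meter counts, not data from the paper) shows how MTBF and generator coverage figures could be derived from an automated test run:

    # Hypothetical failure log: cumulative run time (hours) at which each
    # failure was observed during automated testing.
    failure_times = [12.0, 30.5, 47.0, 90.0]
    total_run_hours = 120.0

    # MTBF = total operating time / number of observed failures.
    mtbf = total_run_hours / len(failure_times)
    print(f"MTBF: {mtbf:.1f} hours")           # MTBF: 30.0 hours

    # Hypothetical meters: how often each grammar production fired during
    # generation; a zero count exposes an unexercised production.
    meter_counts = {"expr -> term + expr": 210,
                    "expr -> term": 95,
                    "term -> ( expr )": 0}
    exercised = sum(1 for count in meter_counts.values() if count > 0)
    coverage = exercised / len(meter_counts)
    print(f"Production coverage: {coverage:.0%}")   # Production coverage: 67%

Figures of this kind would have supported the reliability comparisons the review calls for.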

In general, this paper is a further step forward in the field of automated testing. It should be read by anyone interested in the state of the art of software testing.

Reviewer: Richard A. Baker, Jr.
Review #: CR123624
1) Hanford, K. V. Automatic generation of test cases. IBM Syst. J. 9 (1970), 242–257. See CR 12, 5 (May 1971), Rev. 21,221.
2) Bazzichi, F.; Spadafora, I. An automatic generator for compiler testing. IEEE Trans. Softw. Eng. SE-8 (1982), 343–353.
Category: Testing Tools (D.2.5 ...)
Other reviews under "Testing Tools":

Program testing by specification mutation
Budd T., Gopal A. Information Systems 10(1): 63-73, 1985. Type: Article
Date Reviewed: Feb 1 1986

SEES--a software testing environment support system
Roussopoulos N., Yeh R. (ed) IEEE Transactions on Software Engineering SE-11(4): 355-366, 1985. Type: Article
Date Reviewed: Apr 1 1986

Selecting software test data using data flow information
Rapps S., Weyuker E. IEEE Transactions on Software Engineering SE-11(4): 367-375, 1985. Type: Article
Date Reviewed: May 1 1986