Computing Reviews
Does automated unit test generation really help software testers? A controlled empirical study
Fraser G., Staats M., McMinn P., Arcuri A., Padberg F. ACM Transactions on Software Engineering and Methodology 24(4): 1-49, 2015. Type: Article
Date Reviewed: Oct 22 2015

If you are a developer of an automated test generation tool, you may want to know how this type of software affects the testing process. Automated test generation tools are generally evaluated by the percentage of code their generated test cases cover. One question the authors wanted to answer is whether code coverage is a good metric for this purpose. They designed a study comparing how well users could detect faults with tests generated by EvoSuite (an automated unit test generator) versus manually written unit tests. It turned out that even though EvoSuite generated tests with higher code coverage, each group found about the same number of defects.
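To make the coverage-versus-fault-detection point concrete, here is a minimal JUnit sketch of my own (an invented illustration, not an example from the paper). The generated-style tests achieve full branch coverage of a seeded boundary fault yet still pass, because their assertions merely record the implementation's current output; the manual test encodes the intended behavior and therefore fails:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical class under test (invented for illustration).
    class PriceCalculator {
        // Intended behavior: a 10% discount for order totals of 100 or more.
        // Seeded fault: the boundary case of exactly 100 gets no discount.
        static double total(double unitPrice, int quantity) {
            double sum = unitPrice * quantity;
            return (sum > 100) ? sum * 0.9 : sum; // should be sum >= 100
        }
    }

    public class PriceCalculatorTest {

        // Generated-style tests: together they cover both branches, yet each
        // assertion only records the observed output, so the fault is missed.
        @Test
        public void generatedTest0() {
            assertEquals(100.0, PriceCalculator.total(10.0, 10), 0.001); // false branch
        }

        @Test
        public void generatedTest1() {
            assertEquals(180.0, PriceCalculator.total(10.0, 20), 0.001); // true branch
        }

        // Manual test: encodes the intended discount at the boundary, so it
        // fails on the faulty implementation even though it adds no coverage.
        @Test
        public void manualBoundaryTest() {
            assertEquals(90.0, PriceCalculator.total(10.0, 10), 0.001);
        }
    }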

The purpose of the paper is to describe the study and its results, and to relate them to research in the area of automated test generation. It contains nine sections: “Introduction,” “Study Design,” “Results: Initial Study,” “Results: Replication Study,” “Discussion” (interpreting the results), “Background and Exit Questionnaires,” “Implications for Future Work,” “Related Work,” and “Conclusions.” Overall, it is well organized and extremely detailed. The most interesting parts are the sections that interpret the results and point to future research.

In addition to examining code coverage during testing, the researchers wanted to understand how automated test generation affects testers' ability to detect faults, how many tests mismatch the intended behavior of the class under test, and how well the produced test suites detect regression faults.

From the exit questionnaires, they learned that most users in the EvoSuite group wanted to use the generated tests even when those tests were of poor quality. One conclusion was that a combination of manual and automated tests is needed, and that the manual tests should somehow inform the automated ones, using a technique that has yet to be developed. Furthermore, test automation tools should generate tests that users can easily understand and trust: the time saved by generating tests automatically was consumed by analyzing the tests the tool produced.
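On readability and trust, it helps to picture the style of test such tools typically emit. The following is a hypothetical sketch in the EvoSuite idiom (invented, not output from the study): the opaque names and magic values exercise the code, but give the tester little basis for judging whether the asserted behavior is actually the intended one.

    import static org.junit.Assert.assertEquals;

    import java.util.Stack;

    import org.junit.Test;

    // Hypothetical machine-generated test; names follow the tool-generated style.
    public class Stack_ESTest {
        @Test
        public void test0() {
            Stack<Integer> stack0 = new Stack<Integer>();
            Integer integer0 = stack0.push(42);
            Integer integer1 = stack0.pop();
            // Passes, but nothing here documents what behavior was intended.
            assertEquals(integer0, integer1);
        }
    }

Deciding whether such a test captures intended behavior or merely observed behavior is precisely the analysis effort that, per the study, consumed the time the tool saved.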

For researchers, this paper is relevant for the questions it raises about the current state of automated test generation and for its suggestions for future research. General readers may find that the introduction and conclusion are enough to give them a basic understanding of the intent of this research.

Reviewer: Julia Yousefi | Review #: CR143881 (1601-0062)
 
Testing And Debugging (D.2.5)
 
Other reviews under "Testing And Debugging":
Software defect removal
Dunn R., McGraw-Hill, Inc., New York, NY, 1984. Type: Book (9789780070183131)
Mar 1 1985
On the optimum checkpoint selection problem
Toueg S., Babaoglu O. SIAM Journal on Computing 13(3): 630-649, 1984. Type: Article
Mar 1 1985
Software testing management
Royer T., Prentice-Hall, Inc., Upper Saddle River, NJ, 1993. Type: Book (9780135329870)
Mar 1 1994
