Computing Reviews

Does automated unit test generation really help software testers? A controlled empirical study
Fraser G., Staats M., McMinn P., Arcuri A., Padberg F. ACM Transactions on Software Engineering and Methodology 24(4): 1-49, 2015. Type: Article
Date Reviewed: 10/22/15

If you are a developer of an automated test generation tool, you may want to know how this type of software impacts the testing process. In general, automated test generation software is evaluated by the percentage of code covered by the tests it produces. One question the authors wanted to answer is whether code coverage is a good metric for this purpose. They designed a study comparing how well users could detect faults using tests generated by EvoSuite (an automated unit test generator) versus manually written unit tests. It turned out that even though EvoSuite generated tests with higher code coverage, each group found about the same number of defects.
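To make the distinction between coverage and fault detection concrete, consider a minimal, hypothetical JUnit 4 sketch (the class, names, and fault below are illustrative, not drawn from the study). A generated-style test can cover every branch of a faulty method while its assertion merely records the faulty output, so it passes; a manually written test encodes the intended behavior and catches the bug.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test: the discount logic contains a fault
// (the specification says the discount is capped at 50%, but the code caps it at 60%).
class PriceCalculator {
    static int discountedPrice(int price, int discountPercent) {
        if (discountPercent > 60) {      // fault: the spec says the cap is 50
            discountPercent = 60;
        }
        return price - (price * discountPercent) / 100;
    }
}

public class PriceCalculatorTest {

    // Generated-style test: it exercises both branches (full branch coverage),
    // but its assertion simply records the observed, faulty output,
    // so it passes and the fault goes undetected.
    @Test
    public void test0() {
        int result = PriceCalculator.discountedPrice(100, 70);
        assertEquals(40, result);        // encodes the bug as "expected"
    }

    // Manually written test: it encodes the intended behavior from the spec,
    // so it fails on the faulty implementation and reveals the defect.
    @Test
    public void discountIsCappedAtFiftyPercent() {
        int result = PriceCalculator.discountedPrice(100, 70);
        assertEquals(50, result);        // spec: never more than 50% off
    }
}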

The purpose of the paper is to describe the study (and its results) and how it relates to research in the area of automated test generation. It contains nine sections: “Introduction,” “Study Design,” “Results: Initial Study,” “Results: Replication Study,” “Discussion” (interpreting the results), “Background and Exit Questionnaires,” “Implications for Future Work,” “Related Work,” and “Conclusions.” Overall, it is well organized and extremely detailed. The most interesting parts are the sections that interpret the results and provide direction for future research.

In addition to examining code coverage during testing, the researchers also wanted to understand how automated test generation affects testers' ability to detect faults, how many of the tests mismatch the intended behavior of the class under test, and how well the produced test suites detect regression faults.

From the exit questionnaires, they learned that most users in the EvoSuite group wanted to use the generated tests even when the tests were bad. One conclusion was that a combination of manual and automated tests is needed, and that the manual tests should somehow inform the automated ones via a technique that has yet to be developed. Furthermore, test generation tools should produce tests that users can easily understand and trust: the time saved by generating tests automatically was consumed by analyzing the tests the tool produced.
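As a hypothetical illustration of that readability gap (again, not an example from the paper), compare a generated-style test, with its opaque name and unexplained values, to the same scenario written for a human reader:

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class StackTestExamples {

    // Generated-style: opaque name, magic values, no hint of the scenario,
    // so the tester must reverse-engineer what is actually being checked.
    @Test
    public void test12() {
        java.util.ArrayDeque<Integer> d = new java.util.ArrayDeque<>();
        d.push(-1);
        d.push(-1);
        d.pop();
        assertTrue(d.size() == 1);
    }

    // The same scenario written for a human reader: the name and comments
    // state the intended behavior, so the assertion can be trusted at a glance.
    @Test
    public void popRemovesOnlyTheTopElement() {
        java.util.ArrayDeque<Integer> stack = new java.util.ArrayDeque<>();
        stack.push(-1);   // bottom
        stack.push(-1);   // top
        stack.pop();      // should remove exactly one element
        assertTrue("one element should remain", stack.size() == 1);
    }
}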

For the researcher, this paper is relevant for the questions it raises about the current state of automated test generation and for its suggestions for future research. General readers may find the introduction and conclusion sufficient for a basic understanding of the intent of this research.

Reviewer: Julia Yousefi. Review #: CR143881 (1601-0062)
