Computing Reviews

Software testing with code-based test generators: data and lessons learned from a case study with an industrial software component
Braione P., Denaro G., Mattavelli A., Vivanti M., Muhammad A. Software Quality Journal 22(2): 311-333, 2014. Type: Article
Date Reviewed: 07/23/14

This paper is an interesting indictment of the current state of test generators for software programs. The authors provide a very good overview of available test generation methodologies and survey more than 20 years of research experience in the field.

Unfortunately, the actual study exhibits the limitations of the field. The software tested is a path-computation module for robotics, available in four versions of between 300 and 1,000 lines of C code, containing a total of seven defects. The automated tests could not themselves identify errors because the defects did not cause the code to crash. Instead, experts had to review each test output manually to judge whether the test case had passed or failed.

These limitations have plagued the field of automated testing since its inception. The tools do not scale to the size of modern software components. They do not provide complete out-of-the-box solutions, instead requiring significant manual intervention and adjustment. One of the packages evaluated here only supports C#, so the authors had to port the original code to C#. This is a field still working on small, hundreds-of-lines test subjects while the rest of the industry builds million-line software systems.

Read this paper for a good overview of the state of the art. It is interesting both for its completeness and its limitations.

Reviewer:  Elliot Jaffe Review #: CR142540 (1410-0868)

Reproduction in whole or in part without permission is prohibited.   Copyright 2024 ComputingReviews.com™