Model-based testing has grown in popularity, and the ability to automatically generate and trace models from use cases could provide significant benefits. The aToucan system, built at the Simula Research Laboratory, delivers such automation, though it requires a more formal approach to writing use cases; for example, keywords must be used to specify control structures. The paper's main focus is the transformation rules, which take an instance of UCMeta, the intermediate model used by aToucan, and generate a system-level, state-based model. aToucan draws on an impressive array of technologies, including the Eclipse Modeling Framework, the Kermeta metaprogramming environment, and the Stanford parser for natural language processing.
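To give a flavor of the keyword restriction, a use case step in such a template might read as follows (a hypothetical sketch in the spirit of keyword-restricted use case writing; the exact keywords and sentence patterns are those defined in the paper, not reproduced here):

```
1. The system VALIDATES THAT the entered PIN is correct.
2. IF the PIN is invalid THEN
       the system displays an error message.
   ENDIF
```

Restricting steps to patterns like these is what allows the tool to parse control flow unambiguously, at the cost of the free-form prose many use case authors prefer.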
Section 6 discusses an informal evaluation of the system-level, state-based models generated by aToucan. Use cases must have correct preconditions and postconditions, or the quality of the generated model suffers. The inability to model sequences of use case executions is also said to sometimes degrade the generated models. Human experts had to add two transitions to the generated model shown in Appendix B; this human refinement, however, is reported to have taken only 40 minutes.
The case for the automation provided by aToucan would have been stronger had actual results of model-based testing been reported, and those experienced with agile approaches to software development are likely to question the wisdom of making use case writing more formal. Despite these criticisms, this paper is recommended to the software engineering community.