The acceptance testing of large software systems is difficult and time-consuming because of the sheer volume of user requirements, especially when numerous users and service providers are involved. In this paper, the authors present an automatic approach to analyzing a natural language requirements specification and generating a test plan. They adopt the standard Reference Model of Open Distributed Processing (RM-ODP) to specify a system from the enterprise, computational, information, engineering, and technology viewpoints. They use linguistic techniques to annotate the requirements specification. Then, they apply requirements clustering algorithms to compute the similarities among requirements and group them into clusters. Finally, they produce a test plan using pattern-matching techniques. They have implemented the proposal as an open-source prototype tool set known as Test Optimization by Requirements Clustering (TORC).
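The paper's own algorithms are not reproduced in this review, but the core clustering step it describes — computing pairwise similarities between natural language requirements and grouping similar ones — can be sketched in a few lines of Python. The tokenizer, cosine measure over term-frequency vectors, similarity threshold, and greedy grouping below are illustrative assumptions, not the authors' TORC implementation.

```python
# Illustrative sketch of similarity-based requirements clustering.
# NOTE: the tokenizer, threshold, and greedy grouping are assumptions
# for illustration; they are not the algorithms used in TORC.
import math
from collections import Counter

def tokenize(text):
    # naive tokenizer: lowercase words, punctuation stripped
    return [w.lower().strip(".,") for w in text.split()]

def cosine(a, b):
    # cosine similarity between two term-frequency vectors (Counters)
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def cluster_requirements(reqs, threshold=0.3):
    # greedily assign each requirement to the first cluster whose
    # representative (first member) is similar enough, else start a new one
    vecs = [Counter(tokenize(r)) for r in reqs]
    clusters = []  # each cluster is a list of requirement indices
    for i, vec in enumerate(vecs):
        for c in clusters:
            if cosine(vec, vecs[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

reqs = [
    "The user shall log in with a password",
    "The system shall verify the user password at login",
    "Reports shall be exported as PDF",
]
print(cluster_requirements(reqs))  # → [[0, 1], [2]]
```

The two login-related requirements share enough vocabulary to land in one cluster, while the reporting requirement forms its own; a test plan generator could then derive one test group per cluster rather than one per requirement.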
This paper presents an excellent proposal for composing test plans automatically from large requirements specifications, and supports it with effective techniques. In real life, more and more user requirements are documented in semiformal notations such as the Unified Modeling Language (UML), which comprises diagrams in widely differing formats. It would be nice to extend the current work to cover semiformal specifications such as UML sequence diagrams and class diagrams, especially for the computational and information viewpoints. On the one hand, the analysis of UML diagrams would help avoid the problem of interpreting natural language specifications. On the other hand, because such diagrams are developed manually from the user requirements specifications, they may not necessarily be consistent with the original requirements. The authors may need to compare the pros and cons of these specification analysis approaches in practical projects.