Computing Reviews
An experimental study of fault detection in user requirements documents
Schneider G., Martin J., Tsai W. ACM Transactions on Software Engineering and Methodology 1(2): 188-204, 1992. Type: Article
Date Reviewed: Mar 1 1993

The authors propose a way to detect faults in a user requirements document (URD)--a user's description of the functionality and performance of a software product--by carrying out N formal inspections of the document in parallel. The authors hypothesize that the N separate inspection teams do not significantly duplicate each other's efforts, so that the total number of faults detected will be much higher than the number found by any one team during a single inspection.

To verify their hypothesis, they carried out a controlled experiment in which nine teams of computer science graduate students formally inspected a requirements document describing a software system for real-time track control for railroads. The original document was written by a knowledgeable railroad engineer. Before the experiment, the document was independently inspected and reviewed by 40 students and by the authors in order to obtain as correct a document as possible. That document was then seeded with errors, including ones discovered by the preliminary reviewers, and turned over to the nine teams. Each team member reviewed the document individually, and then each team met as a group to discuss and debate the faults found by the individuals. Although the proportion of the faults found by a single team averaged 35 percent (with a maximum of 50 percent for the best team), the global coverage, that is, the proportion of faults found by at least one team, was 78 percent. (Apparently, the nine teams did not unearth any faults not found in the preliminary inspection.)
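A rough calculation puts these figures in perspective (the arithmetic is mine, not the authors'): had the nine teams detected faults independently of one another, each at the 35 percent average rate, the expected global coverage would have been 1 - (1 - 0.35)^9, or about 98 percent; the observed 78 percent therefore implies considerable overlap among the teams' findings. A minimal sketch of that calculation:

    # Illustrative only: assumes per-team detections are independent events,
    # an assumption the experiment's own data evidently contradict.
    p_team = 0.35    # average per-team detection rate reported in the paper
    n_teams = 9      # number of parallel inspection teams
    expected = 1 - (1 - p_team) ** n_teams
    print(f"expected coverage if independent: {expected:.1%}")  # about 97.9%
    print("observed global coverage: 78%")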

The paper describes the review process: “Following this individual review, [the team] met…for a formal group review, also lasting about two hours. In this…meeting, each team member would identify what he or she believed to be a problem with the existing URD, which would then be discussed and debated. If the team agreed that this did indeed represent a requirements fault, then they filled out a Fault Report….” From this description, it appears that the interactions among team members during the meeting did not lead to the discovery of additional faults, so the faults found by a team were just the aggregate of the faults found by its members individually.
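On that reading, a team's fault set is simply the union of its members' individual findings, and the global coverage is the union across all nine teams. A minimal sketch of this model (the fault identifiers are hypothetical; the paper reports only percentages):

    # Hypothetical fault IDs, purely to illustrate the union structure.
    members = [{1, 2, 5}, {2, 3}, {1, 4}]   # one team's individual findings
    team = set().union(*members)            # team result: {1, 2, 3, 4, 5}
    teams = [team, {2, 6}, {4, 7, 8}]       # results of several teams
    found_by_any = set().union(*teams)      # faults found by at least one team
    print(sorted(found_by_any))             # [1, 2, 3, 4, 5, 6, 7, 8]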

The paper did not convince me that the costs of N-fold inspection are worth the benefits. If a single team cannot do better, on the average, than a 35 percent fault detection rate, then our ability to review requirements documents is sadly limited, and I wonder whether raising the rate to 78 percent is really worth it. The possibility remains that more skilled inspection teams, or teams spending more time on the project, would have detected nearly all of the faults. But were the single-team detection rates on the order of 98 percent rather than 35 percent, the results of the experiment described here would probably not apply. Indeed, the results of N-version programming are not encouraging; Nancy Leveson reports that common errors usually crop up in the products of independent programming teams (see, for example, her work with J. C. Knight [1]). The strongest argument for N-fold inspection is that since each requirements error costs so much more to fix later on, finding even one additional fault is worth the price.

Reviewer: P. Abrahams
Review #: CR124024
[1] Knight, J. C. and Leveson, N. G. An experimental evaluation of the assumption of independence in multiversion programming. IEEE Trans. Softw. Eng. SE-12, 1 (Jan. 1986), 96–109.
 
Categories: Methodologies (D.2.1); Programming Teams (D.2.9); Software Development (K.6.3); Tools (D.2.1)
Other reviews under "Methodologies":

Multilevel specification of real time systems. Gabrielian A., Franklin M. Communications of the ACM 34(5): 50-60, 1991. Type: Article. (May 1 1992)
Software requirements. Davis A. Prentice-Hall, Inc., Upper Saddle River, NJ, 1993. Type: Book (9780138057633). (Nov 1 1994)
The automated production control documentation system. Trammell C., Binder L., Snyder C. ACM Transactions on Software Engineering and Methodology 1(1): 81-94, 1992. Type: Article. (Mar 1 1993)
