Computing Reviews
Simulated annealing for improving software quality prediction
Bouktif S., Sahraoui H., Antoniol G. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation (GECCO ’06, Seattle, Washington, Jul. 8-12, 2006), 1893-1900. Type: Proceedings
Date Reviewed: Nov 8 2006

We all know that the opinions of several experts are better than the opinion of just one. This paper proves it. The focus is on Bayesian classifiers and on a method for developing a prediction algorithm.

The paper is often hard to comprehend. Consider, for example, this passage from section 2: “By reusing existing models, we reuse the common domain knowledge represented by versatile contexts, and by guiding the adaptation via the context of specific data, we consider the specific knowledge represented by D(c).” This seems to mean that we can use existing estimation models, such as COCOMO, and then weight their results according to our confidence in each model relative to the others. The algorithm in section 2 also omits the important assumption of conditional independence, which is mentioned only in section 3, after the reader has already struggled through the algorithm. The authors do not discuss the implications of this assumption, except to say that without it the math is very hard. You bet it is.
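To see why that assumption matters, here is a minimal naive Bayes sketch (my own Python illustration, not the authors' algorithm; the metric names and smoothing are made-up assumptions). Conditional independence is what lets the joint likelihood of the attributes factor into a product of per-attribute terms; drop it, and you are estimating the full joint distribution instead.

```python
from collections import defaultdict

def train_naive_bayes(samples):
    """samples: list of (attribute_dict, class_label) pairs."""
    class_counts = defaultdict(int)
    attr_counts = defaultdict(int)  # (class, attribute, value) -> count
    for attrs, label in samples:
        class_counts[label] += 1
        for attr, value in attrs.items():
            attr_counts[(label, attr, value)] += 1
    return class_counts, attr_counts

def predict(class_counts, attr_counts, attrs):
    total = sum(class_counts.values())
    best_label, best_score = None, 0.0
    for label, count in class_counts.items():
        score = count / total  # prior P(c)
        for attr, value in attrs.items():
            # Conditional independence: multiply per-attribute
            # likelihoods P(x_i | c), here with Laplace smoothing.
            score *= (attr_counts[(label, attr, value)] + 1) / (count + 2)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy usage: predict whether a module is fault-prone from two metrics.
data = [({"size": "large", "coupling": "high"}, "faulty"),
        ({"size": "small", "coupling": "low"}, "clean"),
        ({"size": "large", "coupling": "low"}, "faulty"),
        ({"size": "small", "coupling": "high"}, "clean")]
counts = train_naive_bayes(data)
print(predict(*counts, {"size": "large", "coupling": "high"}))  # "faulty"
```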

The punch line is to use many experts, always more than one, for every domain, and to weight your confidence in each. Then iterate your predictions to convergence. This is quite similar to Barry Boehm’s famous wideband Delphi estimation method [1].
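A back-of-the-envelope sketch of this weight-and-iterate scheme (again my own Python, not the authors' method; the revision rule, weights, and tolerance are illustrative assumptions):

```python
def combine_experts(predictions, weights, tol=1e-6, max_rounds=100):
    """predictions: per-expert numeric estimates;
    weights: confidence in each expert (need not sum to 1)."""
    total = sum(weights)
    estimate = sum(p * w for p, w in zip(predictions, weights)) / total
    for _ in range(max_rounds):
        # Each expert revises halfway toward the consensus, a crude
        # stand-in for a Delphi round in which experts reconsider.
        predictions = [(p + estimate) / 2 for p in predictions]
        new_estimate = sum(p * w for p, w in zip(predictions, weights)) / total
        if abs(new_estimate - estimate) < tol:
            break
        estimate = new_estimate
    return estimate

# Three "experts" estimating defect density, with more confidence
# placed on the second; the consensus converges to 5.5.
print(combine_experts([4.0, 6.5, 5.0], [1.0, 2.0, 1.0]))
```

Under this simple averaging rule the consensus is just the confidence-weighted mean; the iteration earns its keep only when experts revise by some richer rule, as they do in wideband Delphi.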

The math in section 4 seems correct, but I did not, and could not, follow it completely. The 19 metrics listed in table 3 as attributes in the experiments are insightful and useful. Table 4 gives the reader leads on tools that can be used as expert predictors. I wish the authors had defined stress. They define stability as “changes to source code from release to release,” which contrasts with the definition I prefer: a measure of performance.

My advice is to use the paper as a reference if and when you need to make profound predictions about a software project. Skip the math and employ the algorithm.

Reviewer: Larry Bernstein | Review #: CR133533 (0711-1123)
1) Boehm, B.W. Software engineering economics. Prentice Hall PTR, Upper Saddle River, NJ, 1981.
Category: Product Metrics (D.2.8)
 
Other reviews under "Product Metrics":

Communication Metrics for Software Development
Dutoit A., Bruegge B. IEEE Transactions on Software Engineering 24(8): 615-628, 1998. Type: Article. Reviewed: Oct 1 1998

Analyzing Data Sets with Missing Data: An Empirical Evaluation of Imputation Methods and Likelihood-Based Methods
Myrtveit I., Stensrud E., Olsson U. IEEE Transactions on Software Engineering 27(11): 999-1013, 2001. Type: Article. Reviewed: Jul 2 2002

The Optimal Class Size for Object-Oriented Software
El Emam K., Benlarbi S., Goel N., Melo W., Lounis H., Rai S. IEEE Transactions on Software Engineering 28(5): 494-509, 2002. Type: Article. Reviewed: Jan 3 2003
