Computing Reviews
Kernel density classification and boosting: an L2 analysis
Di Marzio M., Taylor C. Statistics and Computing 15(2): 113-123, 2005. Type: Article
Date Reviewed: Apr 28 2006

Boosting is a relatively new learning technique that has become very attractive to researchers in machine learning and statistical pattern recognition. The general idea of boosting is to build a classifier ensemble incrementally, adding one classifier at a time. The classifier that joins the ensemble at each step is trained on a data set selectively sampled from the training data. The sampling distribution starts out uniform and is updated at each step to increase the likelihood of the objects misclassified at the previous step, so that “difficult” data points progressively receive more weight.
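To make this reweighting scheme concrete, the following is a minimal discrete AdaBoost-style sketch in Python; the decision-stump weak learner, the two-Gaussian toy data, and all names in it are illustrative assumptions, not material from the paper under review.

```python
import numpy as np

def train_stump(X, y, w):
    """Weighted decision stump: pick the feature, threshold, and polarity
    that minimize the weighted training error."""
    best = (np.inf, 0, 0.0, 1)  # (error, feature, threshold, polarity)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(s * (X[:, j] - t) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, t, s)
    return best

def adaboost(X, y, rounds=20):
    """Discrete AdaBoost with labels y in {-1, +1}: weights start uniform
    and are increased on points the current weak learner misclassifies."""
    n = len(y)
    w = np.full(n, 1.0 / n)               # uniform initial distribution
    ensemble = []
    for _ in range(rounds):
        err, j, t, s = train_stump(X, y, w)
        err = max(err, 1e-12)             # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)  # weak learner's vote weight
        pred = np.where(s * (X[:, j] - t) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)    # up-weight misclassified points
        w /= w.sum()                      # renormalize to a distribution
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for alpha, j, t, s in ensemble:
        score += alpha * np.where(s * (X[:, j] - t) > 0, 1, -1)
    return np.sign(score)

# Toy usage: two Gaussian blobs labeled -1 and +1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.r_[np.full(50, -1), np.full(50, 1)]
model = adaboost(X, y)
print("training accuracy:", (predict(model, X) == y).mean())
```

The exponential update w_i <- w_i * exp(-alpha * y_i * h(x_i)) is exactly the step described above: the distribution is tilted toward the objects the previous classifier got wrong.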

The authors propose an algorithm for boosting kernel density estimation, and show that boosting kernel classifiers reduces the bias, with an overall reduction in error. From this main result, a series of conclusions explaining the good performance of boosted kernel classifiers is derived.
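The “L2 analysis” of the title refers to the mean integrated squared error (MISE) of a density estimate, which splits into an integrated squared bias and an integrated variance; in generic notation (mine, not copied from the paper), with \hat f estimating a true density f:

```latex
\[
\mathrm{MISE}(\hat f)
  = \mathbb{E}\!\int \bigl(\hat f(x) - f(x)\bigr)^{2}\,dx
  = \int \mathrm{Bias}^{2}\bigl[\hat f(x)\bigr]\,dx
  + \int \mathrm{Var}\bigl[\hat f(x)\bigr]\,dx .
\]
```

An estimator that shrinks the bias term without inflating the variance term by as much therefore lowers the overall L2 error; this is the sense in which the bias reduction established by the authors translates into an overall reduction in error.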

The paper is structured into six sections. The general framework of nonparametric density estimation using kernel functions is briefly presented in the introductory section. Since, for the two-class discrimination problem with equal a priori probabilities, the Bayes classifier reduces to thresholding the difference g(x) between the two class density functions at zero, the bias and variance of its estimate near the decision boundary g(x0) = 0 are evaluated next, under the assumption that the same kernel function is used to estimate both densities. In the third section, BoostKDC, a boosting algorithm for kernel density classification, is proposed, and it is proven that boosting the kernel classifier reduces the bias, with an overall reduction in error. The overfitting effect in boosting is also investigated, and cross-validation is proposed as a possible way to prevent it. The analysis performed in the fourth section yields convincing explanations of the bias reduction that boosting achieves in kernel discrimination. In the final section, comparisons of boosting with simple kernel methods are discussed and supported by simulations and experimental tests.
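As an illustration of the setting just described, the sketch below boosts a two-class kernel classifier: each round fits weighted Gaussian kernel density estimates for the two classes, classifies by the sign of their difference (the estimate of g(x)), and up-weights the misclassified points. This is a simplified stand-in, not the authors' BoostKDC; the AdaBoost-style weight update, the fixed bandwidth h, and the toy data are assumptions made for illustration.

```python
import numpy as np

def weighted_kde(points, data, weights, h):
    """Weighted Gaussian kernel density estimate, evaluated at `points`
    (one-dimensional data, for simplicity)."""
    u = (points[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return (k * weights[None, :]).sum(axis=1) / (weights.sum() * h)

def boosted_kernel_classifier(x, y, h=0.4, rounds=10):
    """Boost the kernel classifier sign(f_plus - f_minus): each round fits
    class-wise weighted KDEs, then up-weights misclassified points."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    models = []
    for _ in range(rounds):
        def g(points, w=w.copy()):   # freeze this round's weights
            f_plus = weighted_kde(points, x[y == 1], w[y == 1], h)
            f_minus = weighted_kde(points, x[y == -1], w[y == -1], h)
            return f_plus - f_minus  # estimate of g(x); classify by sign
        pred = np.sign(g(x))
        err = max(w[pred != y].sum(), 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        models.append((alpha, g))
        w *= np.exp(-alpha * y * pred)   # up-weight mistakes
        w /= w.sum()
    def classify(points):
        return np.sign(sum(a * np.sign(g(points)) for a, g in models))
    return classify

# Toy usage: two overlapping Gaussian classes with equal priors.
rng = np.random.default_rng(1)
x = np.r_[rng.normal(-1.0, 1.0, 100), rng.normal(1.0, 1.0, 100)]
y = np.r_[np.full(100, -1), np.full(100, 1)]
clf = boosted_kernel_classifier(x, y)
print("training accuracy:", (clf(x) == y).mean())
```

Consistent with the overfitting discussion summarized above, the number of rounds and the bandwidth would in practice be chosen by cross-validation rather than fixed as they are here.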

Reviewer: L. State | Review #: CR132726 (0702-0196)
Categories:
Statistical (I.5.1)
Classifier Design and Evaluation (I.5.2)
Statistical Computing (G.3)
Design Methodology (I.5.2)
Probability and Statistics (G.3)
Other reviews under "Statistical":

A formulation and comparison of two linear feature selection techniques applicable to statistical classification
Young D., Odell P. Pattern Recognition 17(3): 331-337, 1984. Type: Article. Date reviewed: Mar 1 1985.

Remarks on some statistical properties of the minimum spanning forest
Dubes R., Hoffman R. Pattern Recognition 19(1): 49-53, 1986. Type: Article. Date reviewed: Dec 1 1987.

Statistical pattern recognition--early development and recent progress
Chen C. International Journal of Pattern Recognition and Artificial Intelligence 1(1): 43-51, 1987. Type: Article. Date reviewed: May 1 1988.
