To mitigate the curse of dimensionality, the central obstacle for localized classification techniques, the authors combine localization with suitable dimension reduction. The underlying assumption is that not all observations or predictors are equally informative about class membership. The authors show that powerful localized classifiers can be obtained even in high dimensions when combined with an appropriate dimension reduction technique, whereas plain localized classification without dimension reduction performs rather poorly in this setting.
To demonstrate the improvement over the global classifier, a localized logistic regression (LLR) method is developed. First, the global classifier is localized by introducing observation weights into the model, based on distances in the kernel-transformed predictor space. Second, the dimension is reduced by selecting relevant predictors using a localized version of the Wald statistic. Three tuning parameters are introduced to control the degree of localization, the threshold for predictor selection, and the strength of penalization. A grid search combined with leave-one-out cross-validation is employed to find optimal values of these three tuning parameters across various data structures. Application of the proposed procedure to a variety of simulated and real data sets yields promising results. In summary, the paper is well written and easy to follow; readers interested in classification and discrimination will find it worth reading.
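The localization step can be illustrated with a minimal sketch: for each query point, observations are weighted by a kernel on their distance to the query, and a penalized logistic regression is fit on those weights. This is not the authors' implementation; the function and parameter names are hypothetical, a Gaussian kernel on Euclidean distance in the original predictor space stands in for the kernel-transformed distances, and the Wald-statistic predictor selection is omitted. The bandwidth and the inverse penalty strength `C` correspond to two of the three tuning parameters mentioned above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def llr_predict(X_train, y_train, x0, bandwidth=1.0, C=1.0):
    """Predict the class of query point x0 with a localized logistic model.

    Observations are weighted by a Gaussian kernel on their squared
    Euclidean distance to x0 (a simplification of the paper's
    kernel-transformed distances). `bandwidth` controls the degree of
    localization; `C` is the inverse penalization strength.
    """
    d2 = np.sum((X_train - x0) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))  # Gaussian kernel weights
    model = LogisticRegression(C=C)
    model.fit(X_train, y_train, sample_weight=w)  # weighted (local) fit
    return model.predict(x0.reshape(1, -1))[0]

# Toy illustration: two well-separated Gaussian clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(llr_predict(X, y, np.array([2.0, 2.0])))
```

In practice the bandwidth, selection threshold, and penalty would be chosen jointly, e.g. by the leave-one-out cross-validated grid search the authors describe.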