Applications of pattern recognition methods include the differentiation of blood cells, the recognition of numerals, and image analysis. One such method is support vector machine training, in which the m-dimensional input vectors are mapped nonlinearly into a k-dimensional feature space. Linear classifiers are then constructed in the feature space. The training data consists of sample input vectors, together with a classification of each into one of two classes.
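As an illustration of this setting, the following minimal sketch (in Python, using scikit-learn) trains a two-class classifier with a nonlinear kernel; the RBF kernel, its parameter, and the synthetic data are assumptions chosen only for illustration, not details taken from the paper.

    # Minimal sketch: two-class training with a nonlinear kernel (assumed RBF).
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))                    # sample input vectors (m = 2 here)
    y = (X[:, 0]**2 + X[:, 1]**2 > 1.0).astype(int)  # two classes, not linearly separable in input space

    clf = SVC(kernel="rbf", gamma=1.0)               # nonlinear map induced by the RBF kernel
    clf.fit(X, y)                                    # linear classifier constructed in the feature space
    print("training accuracy:", clf.score(X, y))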
The paper starts by showing that training can be done in the subspace of the feature space generated by the images of the training data. It then describes an efficient method for finding a basis for this subspace. The final section of the paper is devoted to testing and timing the method on some standard datasets.
Both the presentation of the theory and the test results are quite clear. Abe could make the theory more concise by making greater use of the properties of the singular value decomposition (SVD) that he presents. The method for finding the subspace basis is a clever use of a modified Cholesky decomposition of the kernel matrix, that is, the matrix of inner products between the training images.
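To make the basis-finding step concrete, here is a minimal sketch, assuming a greedy pivoted Cholesky factorization with an ad hoc tolerance; this is in the same spirit as, but not necessarily identical to, the modified Cholesky decomposition described in the paper.

    # Sketch: pick training points whose feature-space images form a basis for
    # the subspace spanned by all training images, via a pivoted Cholesky
    # factorization of the kernel matrix K, K[i, j] = k(x_i, x_j).
    import numpy as np

    def basis_indices(K, tol=1e-10):
        n = K.shape[0]
        d = np.diag(K).astype(float)      # residual squared norms of the images
        L = np.zeros((n, 0))
        idx = []
        while True:
            p = int(np.argmax(d))
            if d[p] <= tol:               # remaining images already lie in the current span
                return idx
            col = (K[:, p] - L @ L[p, :]) / np.sqrt(d[p])   # new Cholesky column
            L = np.column_stack([L, col])
            d = d - col**2
            idx.append(p)

    # Example with an RBF kernel: a duplicated point adds no new direction.
    X = np.array([[0.0], [1.0], [1.0], [2.0]])
    K = np.exp(-(X - X.T)**2)
    print(basis_indices(K))               # three indices; the duplicate is skipped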