Computing Reviews
Large-scale kernel machines (Neural Information Processing)
Bottou L., Chapelle O., DeCoste D., Weston J., The MIT Press, 2007. 416 pp. Type: Book (9780262026253)
Date Reviewed: Apr 10 2008

Kernel-based techniques represent a major development in machine learning algorithms. They include support vector machines (SVMs), Bayes point machines, kernel principal component analysis, and Gaussian processes. SVMs are a family of supervised learning methods used for classification and regression. They extend to nonlinear models the generalized portrait algorithm developed by Vladimir Vapnik. SVMs map input vectors into a higher-dimensional space, where a maximal separating hyperplane is constructed. Their main drawback is the complexity of the training algorithm, which is at least quadratic in the number of training examples.
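To make the quadratic bottleneck concrete, here is a minimal sketch (my illustration, not taken from the book; it assumes NumPy and scikit-learn are installed) that trains a nonlinear SVM. The solver works with an n-by-n kernel matrix, so time and memory grow at least quadratically in the number of examples n.

    # Minimal sketch; assumes NumPy and scikit-learn (not used in the book itself).
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n, d = 2000, 20                      # n examples, d features
    X = rng.standard_normal((n, d))
    y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(n))

    # Kernel SVM training works with the n-by-n kernel matrix,
    # so its cost grows at least quadratically with n.
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X, y)
    print(clf.score(X, y))

Doubling n roughly quadruples the kernel matrix alone, which is precisely the scaling problem the book addresses.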

As it becomes easier to collect large datasets, more ambitious learning systems become theoretically possible, and the computational cost of the learning algorithms becomes the bottleneck. Efficient algorithms are needed whose running time scales linearly with the size of the dataset, while retaining enough statistical efficiency.
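In the spirit of the stochastic gradient methods the book covers, one way to reach linear scaling is to train a linear SVM by stochastic gradient descent on the hinge loss; the Pegasos-style sketch below (a hypothetical illustration, not the authors' code) costs O(n * d) per pass over the data, that is, linear in the number of examples.

    # Hypothetical Pegasos-style SGD for a linear SVM (hinge loss, L2 regularization).
    # One pass over the data costs O(n * d): linear in the number of examples.
    import numpy as np

    def sgd_linear_svm(X, y, lam=1e-4, epochs=5):
        n, d = X.shape
        w = np.zeros(d)
        t = 0
        for _ in range(epochs):
            for i in np.random.permutation(n):
                t += 1
                eta = 1.0 / (lam * t)          # decreasing step size
                margin = y[i] * X[i].dot(w)
                w *= 1.0 - eta * lam           # shrink: gradient of the L2 regularizer
                if margin < 1.0:               # hinge loss is active for this example
                    w += eta * y[i] * X[i]
        return w

Because each update touches a single example, the total work grows with the dataset size rather than with its square, which is the kind of trade-off between optimization cost and statistical efficiency that the book examines.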

This book discusses a set of algorithms for solving practical learning problems on large datasets. The authors investigate computationally efficient ways to extract statistical features from large datasets using kernel machines. The learning algorithms covered range from well-understood techniques to more novel or controversial approaches.

The book also narrows the gap between sound theory and practically efficient algorithms that lack solid analysis. Topics covered include fast implementations of known algorithms, approximations that are amenable to theoretical guarantees, and algorithms that perform well in practice but are difficult to analyze theoretically.

The book is very well written. It is organized by grouping machine learning algorithms of the same type together. The target audience includes professionals and senior graduate students working in data mining and machine learning.

Reviewer:  Jun Liu Review #: CR135464 (0902-0143)
 
Learning (I.2.6)
General (F.2.0)
General (I.2.0)
Systems And Information Theory (H.1.1)
Probability And Statistics (G.3)
Other reviews under "Learning":

Learning in parallel networks: simulating learning in a probabilistic system
Hinton G. (ed), BYTE 10(4): 265-273, 1985. Type: Article (reviewed Nov 1 1985)

Macro-operators: a weak method for learning
Korf R., Artificial Intelligence 26(1): 35-77, 1985. Type: Article (reviewed Feb 1 1986)

Inferring (mal) rules from pupils’ protocols
Sleeman D., Progress in artificial intelligence, Orsay, France, 1985. Type: Proceedings (reviewed Dec 1 1985)

more...
