Computing Reviews

Multilinear subspace learning: dimensionality reduction of multidimensional data
Lu H., Plataniotis K., Venetsanopoulos A., Chapman & Hall/CRC, Boca Raton, FL, 2013. 296 pp. Type: Book
Date Reviewed: 10/20/14

This book provides the background to approach tensor-based dimensionality reduction of large datasets and gives an overview of current research in this area of machine learning. While in part organized in a manner similar to (but much more structured than) a literature review, this work also provides a guided introduction or refresher for established and accepted methods as a preparation for the new material.

The content is organized into two parts: “Fundamentals and Foundations” and “Algorithms and Applications.” Three short appendices add background and application notes. The first part begins with an introduction to well-known linear methods for dimensionality reduction, briefly addressing principal component analysis (PCA), independent component analysis (ICA), linear discriminant analysis (LDA), canonical correlation analysis (CCA), and partial least squares (PLS), first individually, and then under a comprehensive unified treatment. The first part then presents established multilinear subspace methods, comprising a summary introduction to multilinear algebra, tensor decompositions, and tensor projections.
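To fix ideas for readers unfamiliar with the linear baseline the book starts from, the following is a minimal sketch of PCA via eigendecomposition of the sample covariance. It is illustrative only; the function name and data are hypothetical and not taken from the book.

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k principal components.

    X: (n_samples, n_features) data matrix; k: target dimension.
    """
    Xc = X - X.mean(axis=0)           # center the data
    C = Xc.T @ Xc / (len(X) - 1)      # sample covariance matrix
    vals, vecs = np.linalg.eigh(C)    # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]          # top-k eigenvectors as columns
    return Xc @ W                     # reduced representation

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Z = pca(X, 3)                         # 100 samples reduced to 3 features
```

The multilinear methods in the book's second part generalize exactly this kind of projection from vectors to tensors.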

Moving on to transforms in higher dimensions, the book confronts the problem of measuring sample scatter in that domain, which it addresses by providing results for between-class and within-class scatter. This part also puts into perspective both the history and the current research in multilinear subspace learning, and provides insightful notes on the algorithmic and computational aspects of the discipline.
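For orientation, the vector-space versions of the two scatter measures mentioned above can be sketched as follows; the tensor-based generalizations in the book reduce to this familiar form when samples are vectorized. The function name and data are hypothetical.

```python
import numpy as np

def scatter_matrices(X, y):
    """Between-class (Sb) and within-class (Sw) scatter of vector samples.

    X: (n_samples, n_features) data matrix; y: integer class labels.
    """
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * diff @ diff.T    # class mean vs. global mean
        Sw += (Xc - mc).T @ (Xc - mc)    # samples vs. their class mean
    return Sb, Sw

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = rng.integers(0, 3, size=60)
Sb, Sw = scatter_matrices(X, y)
```

A useful sanity check is the standard decomposition: between-class plus within-class scatter equals the total scatter of the centered data.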

The second part of the book is a review of current research and results in the multidimensional equivalents of PCA, multilinear discriminant analysis, multilinear ICA, CCA, and PLS. It comprises applications in pattern recognition, data mining, and other specialized examples.

Appendix A is a quick refresher on results in linear algebra, probability, and constrained optimization. Appendices B and C are notes on data, preprocessing, and software resources.

The book draws on results originally from the communities of numerical algebra, signal processing, and psychometrics. It advocates and explains the use of multidimensional models for the reduction of large, complex datasets. These multilinear models, when compared with well-established single-dimensional alternatives, are simpler insofar as the associated transformations from source data to reduced data require the estimation of fewer parameters (under favorable circumstances, orders of magnitude fewer). By virtue of providing transformations that are less tunable, and therefore more constrained, multilinear models run a lower risk of overfitting and ease certain computational requirements.
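The parameter-count argument can be made concrete with a back-of-the-envelope comparison. The figures below are illustrative and not taken from the book: reducing 100x100 samples (say, images) to a 10x10 representation with a single linear map on vectorized data, versus a mode-wise multilinear projection with one small matrix per mode.

```python
# Illustrative parameter counts (assumed sizes, not from the book).
I1, I2 = 100, 100   # dimensions of each input sample
P1, P2 = 10, 10     # dimensions of the reduced representation

# Linear map on vectorized data: one (I1*I2) x (P1*P2) matrix.
linear_params = (I1 * I2) * (P1 * P2)

# Multilinear (mode-wise) projection: U1 of size I1 x P1, U2 of size I2 x P2.
multilinear_params = I1 * P1 + I2 * P2

print(linear_params, multilinear_params)  # 1000000 vs. 2000
```

Under these assumed sizes, the multilinear model needs 500 times fewer parameters, which is the sense in which the book's "orders of magnitude" claim should be read.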

Experimentally inclined readers will probably like this book, while mathematically oriented readers may not. Practitioners will appreciate that the presentation of the subject matter is goal oriented rather than axiomatic, aiming at evangelization and understanding rather than emphasizing rigor. This does not imply by any means a shoddy presentation; on the contrary, the material, even when ponderous, is properly introduced, justified, and explained. But when forced to choose between accessibility and rigor, the authors generally prefer accessibility, yet never at the expense of correctness. As a result, the content is artfully structured for a specialized audience of new researchers and bleeding-edge practitioners. The treatment builds an overarching framework and provides an analytical reader with a well-expressed taxonomy grounded in historical development and in similarity of content and goals. Thus packaged, current research is endowed with instant meaning and purpose, the derivation of which would initially elude a newcomer to this complex and articulated branch of machine learning. The structure that this book builds can allow a neophyte to avoid much of the initial confusion and wasted effort needed to classify unfamiliar work and to distinguish what may or may not be useful to one's intents and interests.

However, the book, while rich in structure, is light on proofs and often skimps on as-yet-unessential details. While this is an exquisitely enriched literature review that is almost good enough to use as an auxiliary graduate textbook (there are no exercises, of course), not much low-level detail is included in the presentation. This comes across as a deliberate choice rather than a flaw, and it helps keep the roughly 260 pages of body text focused on concepts and structure.

Overall, this book is built to be read as a rich and yet accessible introduction, and not as a comprehensive reference. The latter function is left to a vast bibliography, appropriately comprising about one-tenth of the page count, which will likely be explored with greater advantage after gaining the conceptual understanding that is enthusiastically promoted in the body of the work.

Reviewer:  A. Squassabia Review #: CR142849 (1501-0032)

Reproduction in whole or in part without permission is prohibited.   Copyright 2024 ComputingReviews.com™