Computing Reviews
Elements of dimensionality reduction and manifold learning
Ghojogh B., Crowley M., Karray F., Ghodsi A., Springer International Publishing, Cham, Switzerland, 2023. 606 pp. Type: Book (9783031106019)
Date Reviewed: Nov 29 2024

Structured data is often organized in multidimensional, tabular form. Dimensionality reduction techniques transform such data so that the target representation has fewer dimensions than the source. The book gives readers the foundational material behind these techniques. Readers are expected to have prior knowledge of linear algebra, functional analysis, optimization, probability theory, and neural networks.
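
For a concrete, if simplified, picture of what such a transformation does, consider the following Python sketch (mine, not the book's), which uses scikit-learn's PCA on synthetic data to map 500 rows of 10-dimensional tabular data down to two dimensions:

    # Minimal sketch: reduce 10-dimensional tabular data to 2 dimensions.
    # The data here is synthetic; any (n_samples, n_features) array works.
    import numpy as np
    from sklearn.decomposition import PCA

    X = np.random.default_rng(0).normal(size=(500, 10))  # source: 500 rows, 10 dimensions
    Y = PCA(n_components=2).fit_transform(X)              # target: 500 rows, 2 dimensions
    print(X.shape, "->", Y.shape)                         # (500, 10) -> (500, 2)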

The first part of the book introduces the background needed to understand dimensionality reduction. Chapter 2 introduces eigenvalue problems and their solutions in linear algebra. Chapter 3 discusses approximations of eigenfunctions. Section 3.7 observes that linear algorithms cannot properly handle nonlinear patterns in data; one way to handle such data is to change the linear algorithm, and the other is to transform the data so that the transformed data exhibits linear patterns. Chapter 4 presents background on optimization. The motivation is that dimensionality reduction problems can be cast as optimization problems, so the whole toolbox for solving optimization problems becomes available for dimensionality reduction. The chapter gives general solution guidelines for first-order, second-order, and distributed optimization problems.
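
The optimization view can be made concrete with a small sketch of my own (not code from the book): the problem of maximizing w^T S w subject to ||w|| = 1, which underlies variance-preserving projections such as PCA, is solved by the eigenvalue problem S w = lambda w, so the optimum is the leading eigenvector of the covariance matrix S.

    # Sketch: a constrained optimization problem solved as an eigenvalue problem.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 5)) @ rng.normal(size=(5, 5))  # correlated synthetic data
    S = np.cov(X, rowvar=False)                               # 5 x 5 sample covariance

    eigvals, eigvecs = np.linalg.eigh(S)                      # solves S w = lambda w
    w = eigvecs[:, -1]                                        # maximizer of w^T S w with ||w|| = 1
    print(w @ S @ w, "==", eigvals[-1])                       # objective value = largest eigenvalue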

The first chapter classifies dimensionality reduction methods into spectral, probability-based, and neural network-based approaches. Parts 2 through 4 are each devoted to one of these categories.

The major chapter of Part 2 is chapter 10, which presents a unified framework for spectral dimensionality reduction techniques; maximum variance unfolding is the manifold learning technique chosen to anchor the framework. For the details of the individual techniques, readers can refer to the earlier chapters of Part 2. Chapter 5 begins with linear projection/reconstruction-based principal component analysis (PCA); the variations include kernel PCA, dual PCA, and a supervised analysis, and the chapter ends with a facial recognition application using eigenfaces. Chapter 6 describes various methods for Fisher discriminant analysis (FDA); Section 6.6 contrasts FDA with PCA, noting that FDA seeks directions along which the classes are separated, which PCA does not consider. Chapter 7 categorizes multidimensional scaling (MDS) into classical, metric, and non-metric, and explains each category along with some special cases. Chapter 8 describes locally linear embedding, which is particularly useful for nonlinear data that is locally linear. Chapter 9 deals with data graphs; the technique presented uses the Laplacian matrix of the adjacency matrix of the graph underlying the data. Part 2 also has chapter 11, which covers spectral metric learning.
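
As a rough illustration of the Laplacian-based embedding of chapter 9 (my own sketch, assuming a k-nearest-neighbor adjacency matrix rather than whatever graph construction the book prescribes):

    # Sketch: embed data using eigenvectors of the graph Laplacian L = D - W.
    import numpy as np
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(2)
    X = rng.normal(size=(100, 3))
    dist = cdist(X, X)                                  # pairwise distances
    k = 10
    W = np.zeros((100, 100))
    neighbors = np.argsort(dist, axis=1)[:, 1:k + 1]    # k nearest neighbors, excluding self
    for i, nbrs in enumerate(neighbors):
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                              # symmetric adjacency matrix

    L = np.diag(W.sum(axis=1)) - W                      # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    Y = eigvecs[:, 1:3]                                 # skip the constant eigenvector; 2-D embedding
    print(Y.shape)                                      # (100, 2)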

Chapter 12 is on probabilistic PCA. Probability theory provides the concept of inference: with knowledge of some data points and a probability distribution, other data points can be inferred, which is the basis of generative models in machine learning. The chapter looks at maximum likelihood and variational inference techniques for carrying out factor analysis and probabilistic PCA. Chapter 13 extends chapter 12 with more techniques for component analysis using metric learning. Chapter 14 describes various methods for generating projections randomly. Chapter 15 is on sufficient dimension reduction; its introduction classifies the techniques into inverse regression methods, forward regression methods, and kernel dimension reduction, and the details of these techniques follow. Chapters 16 and 17 describe probabilistic techniques involving neighborhood concepts and manifolds.
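
The random projections of chapter 14 are perhaps the simplest to sketch (this is my own minimal example, assuming a Gaussian projection matrix; the book covers several variants):

    # Sketch: Gaussian random projection from 500 to 50 dimensions.
    # Pairwise distances are approximately preserved (Johnson-Lindenstrauss).
    import numpy as np

    rng = np.random.default_rng(3)
    n, d, k = 1000, 500, 50
    X = rng.normal(size=(n, d))
    R = rng.normal(size=(d, k)) / np.sqrt(k)      # random projection matrix
    Y = X @ R                                     # reduced representation
    print(X.shape, "->", Y.shape)                 # (1000, 500) -> (1000, 50)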

Chapter 18 presents the restricted Boltzmann machine and deep belief networks for dimensionality reduction. Chapter 19 on deep learning, chapter 20 on generative variational autoencoders, and chapter 21 on generative adversarial networks describe how the techniques can be applied to solve dimensionality reduction problems.
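
As a toy illustration of the neural network approach (an assumed autoencoder architecture of my own, not one taken from the book), the bottleneck layer of a network trained to reconstruct its input serves as the low-dimensional representation:

    # Sketch: a small autoencoder in PyTorch; the 2-unit bottleneck is the embedding.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 2))
    decoder = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 20))
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    X = torch.randn(256, 20)                      # toy 20-dimensional data
    for _ in range(200):                          # minimize reconstruction error
        opt.zero_grad()
        loss = nn.functional.mse_loss(decoder(encoder(X)), X)
        loss.backward()
        opt.step()

    Z = encoder(X).detach()                       # 256 x 2 low-dimensional codes
    print(Z.shape)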

The entire book is full of mathematical equations, with only a few algorithms. Professionals looking for practical usage may find that the book cannot be applied directly because it lacks empirical analysis; however, with the mathematical insight the book provides, one can adapt these techniques. The book can be used in academia at the graduate level, and with its many references it can serve researchers as well.

Reviewer: Maulik A. Dave
Review #: CR147850
Learning (I.2.6)
 
Other reviews under "Learning":
Learning in parallel networks: simulating learning in a probabilistic system. Hinton G. (ed), BYTE 10(4): 265-273, 1985. Type: Article. (Nov 1 1985)
Macro-operators: a weak method for learning. Korf R., Artificial Intelligence 26(1): 35-77, 1985. Type: Article. (Feb 1 1986)
Inferring (mal) rules from pupils’ protocols. Sleeman D., Progress in artificial intelligence, Orsay, France, 1985. Type: Proceedings. (Dec 1 1985)
