Computing Reviews

Sparse modeling: theory, algorithms, and applications
Rish I., Grabarnik G., CRC Press, Inc., Boca Raton, FL, 2014. 253 pp. Type: Book
Date Reviewed: 05/22/15

The authors themselves admit in several instances that the literature on sparse modeling is so vast that no single book can possibly encompass it entirely. Nevertheless, the book is written as a reasonably ample survey, with enough literature cited at the end of each chapter for further and deeper reading.

Among the subjects treated, the authors introduce sparsity as a fruitful modeling assumption: even when only a few measurements are available, many of the remaining entries of the measured variable can be treated as zeros of a large matrix. Those few measurements may therefore still support significant inferences, thanks to the principle of parsimony that nature follows; the authors quote William of Ockham, Agatha Christie, and Ptolemy on this point. In particular, it is possible to recover an unobserved high-dimensional signal **x** from a low-dimensional, even noisy, observation **y**.

The least absolute shrinkage and selection operator (LASSO) algorithm is mentioned as one of the points of departure for the discussion of solving sparse systems. The importance of convexity is briefly discussed: only by transforming the problem into a convex one can we hope for a solution faster and easier than what the sparse recovery problem normally is, namely NP-hard, which may make it infeasible to solve as originally stated. The l1 norm is used to enforce both sparsity and convexity; theorems about it are mentioned, and some of them are proved.
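As a concrete illustration of the LASSO objective, min over x of ½‖Ax − y‖² + λ‖x‖₁, here is a minimal sketch solved by iterative soft-thresholding (ISTA); the dimensions, random seed, λ, and sparse vector below are illustrative choices of mine, not examples from the book:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: shrinks each entry toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(A, y, lam, n_iter=1000):
    # Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Illustrative data: recover a 3-sparse x from 50 noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[[3, 40, 120]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ista_lasso(A, y, lam=0.05)
```

With noiseless data and a small λ, the three largest entries of `x_hat` land on the true support, illustrating the recovery the book discusses.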

Concerning theorems, among the most important mentioned is the Nyquist-Shannon sampling theorem, which states that perfect signal recovery from discrete samples is possible when the sampling frequency is greater than twice the signal bandwidth. Surprisingly, this bound is not tight empirically; more often than not, far fewer measurements suffice. A theorem of Candès is mentioned that essentially states that if a signal x ∈ C^N is k-sparse, then it can be recovered exactly from almost any random subset of its Fourier spectrum of size proportional to the sparsity k and to log N.
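In symbols, and hedging on constants (C below is a generic absolute constant, not a value given in the review), the l1-minimization form of this result can be sketched as:

```latex
% Schematic statement of the Fourier-sampling recovery guarantee.
\min_{x \in \mathbb{C}^N} \|x\|_1
\quad \text{subject to} \quad
\hat{x}(\omega) = \hat{x}_0(\omega), \quad \omega \in \Omega
```

This program recovers the k-sparse signal \(x_0\) exactly, with high probability, whenever the random sample set satisfies \(|\Omega| \ge C\,k \log N\).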

Other important concepts discussed are mutual coherence, spark, and restricted isometry property, including their importance and how they hold.
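As a concrete illustration of two of these concepts (the tiny 2×3 dictionary below is a toy example of my own, not from the book): mutual coherence is the largest absolute inner product between distinct normalized columns, and the spark is the smallest number of linearly dependent columns.

```python
import numpy as np
from itertools import combinations

def mutual_coherence(A):
    # Largest |<a_i, a_j>| over distinct normalized columns a_i, a_j.
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

def spark(A, tol=1e-10):
    # Smallest number of linearly dependent columns (brute force;
    # only feasible for tiny dictionaries).
    n = A.shape[1]
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                return k
    return n + 1  # no dependent subset: columns in general position

# Toy dictionary: e1, e2, and their (unnormalized) sum.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(mutual_coherence(A))   # 1/sqrt(2), approx. 0.707
print(spark(A))              # 3: only all three columns together are dependent
```

Low coherence and high spark are the regimes in which the recovery guarantees discussed in the book hold.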

A concise selection of algorithms is offered for sparse recovery problems: univariate thresholding, matching pursuit (MP), orthogonal matching pursuit (OMP), least-squares orthogonal matching pursuit (LS-OMP), least angle regression (LARS), coordinate descent for LASSO, the proximal algorithm, the accelerated proximal algorithm, the fast iterative shrinkage-thresholding algorithm (FISTA), the elastic net, the fused LASSO, the group LASSO with l1/l2 penalty, and the simultaneous LASSO with l1/l∞ penalty. Loss functions beyond the Gaussian one, drawn from other exponential families, are also explored.
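Of the greedy methods in that list, OMP is perhaps the simplest to state: repeatedly pick the atom most correlated with the current residual, then re-fit the coefficients on the selected support by least squares. A minimal numpy sketch (the dictionary, seed, and sparse vector are my own illustrative choices):

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily select k columns (atoms) of A,
    # re-fitting the coefficients on the selected support at every step.
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Illustrative run: a 3-sparse signal observed through 30 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100))
A /= np.linalg.norm(A, axis=0)          # normalize the atoms
x_true = np.zeros(100)
x_true[[7, 21, 60]] = [2.0, -1.0, 3.0]
y = A @ x_true
x_hat = omp(A, y, k=3)
```

LS-OMP, mentioned in the same list, refines this greedy selection step; FISTA and the proximal methods instead attack the convex l1-penalized objective directly.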

Sparse graphical models are also discussed, along with methods for learning them, such as the projected gradient approach. Matrix factorization is approached through dictionary learning methods, among others.

This work is not only an excellent introductory book for branching off into aspects of sparse modeling; it is also suitable for advanced students, since it contains an appendix with some of the mathematical background needed to learn from this book, including topics such as eigentheory, the discrete Fourier transform, and subgaussian random variables. I very much recommend this book to researchers and students alike.

Reviewer: Arturo Ortiz-Tapia. Review #: CR143462 (1508-0687)
