While the conjugate gradient method is a popular iterative approach to solving large sparse linear systems, it is not biased toward finding good sparse solutions. This paper presents a variant called sparse conjugate directions pursuit (SCDP), which finds a good sparse solution when one exists and otherwise converges to a nonsparse solution. The basic approach is to increase the number of potentially nonzero variables in the solution by one on each iteration until the least-squares residual error has been reduced to a satisfactory level.
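To make the general scheme concrete, the following is a minimal sketch of a greedy pursuit of this kind: grow the support by one index per iteration and refit until the residual is small. It illustrates only the generic grow-by-one strategy the paper builds on; the actual SCDP algorithm maintains conjugate search directions rather than re-solving a least-squares subproblem at every step, and the function and parameter names here are illustrative, not from the paper.

```python
import numpy as np

def greedy_pursuit(A, b, tol=1e-8, max_nonzeros=None):
    """Greedy sparse solve of A x ~ b: add one potentially nonzero
    variable per iteration until the least-squares residual norm
    drops below tol. (Illustrative sketch only, not the SCDP updates.)"""
    m, n = A.shape
    if max_nonzeros is None:
        max_nonzeros = n
    x = np.zeros(n)
    support = []          # indices of variables allowed to be nonzero
    residual = b.copy()
    while np.linalg.norm(residual) > tol and len(support) < max_nonzeros:
        # pick the column most correlated with the current residual
        scores = np.abs(A.T @ residual)
        scores[support] = -np.inf   # never reselect a chosen column
        support.append(int(np.argmax(scores)))
        # least-squares fit restricted to the current support
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = b - A @ x
    return x, support
```

On a noiseless overdetermined system with a genuinely sparse solution, this loop typically terminates after selecting only a few columns, which is the behavior the paper seeks from SCDP at much lower per-iteration cost.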
After explaining the SCDP method, the paper shows how it is applied to the machine-learning task of finding a least squares support vector machine model for classification problems. The results of many experiments are reported, in which SCDP is compared with several other approaches, including other sparse model-generating methods such as matching pursuit (MP), least angle regression (LARS), and the least absolute shrinkage and selection operator (LASSO). Overall, the experimental results show SCDP to be quite competitive with, and in many respects superior to, these other methods.
While designed for overdetermined sparse linear systems, SCDP is also effective in solving underdetermined ones. The paper is well written, but readers should have an understanding of conjugate direction and conjugate gradient methods to fully appreciate the ideas presented. Those who want to find sparse solutions to linear systems will want to look at the SCDP method and see what it can do for them.