The authors consider a class of methods for solving the general affine rank minimization (ARM) problem. Mathematically, the problem is described in terms of a linear operator A that maps an m × n matrix X to a p-vector y. The goal is to find a matrix X of rank k ≪ min{m,n} such that the residual y − A(X) is minimized in the Euclidean norm. Particular cases of the problem arise in quantum tomography, in matrix completion (also known as the Netflix problem), and, in a very special form, in principal component analysis (PCA). The problem is NP-hard, so all practical algorithms are approximation algorithms.
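To make the setup concrete, here is a minimal numerical sketch of the ARM objective. It assumes one common representation of the linear operator: A is a stack of p sensing matrices A_i, with the i-th measurement given by the inner product ⟨A_i, X⟩. The Gaussian sensing model and all dimensions are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p, k = 8, 6, 20, 2

# A rank-k ground-truth matrix X (product of two thin factors).
X = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))

# The linear operator A, represented as p sensing matrices (hypothetical Gaussian model).
A = rng.standard_normal((p, m, n))

def apply_A(A, X):
    """Apply the linear map: (A(X))_i = <A_i, X> = trace(A_i^T X)."""
    return np.tensordot(A, X, axes=([1, 2], [0, 1]))

y = apply_A(A, X)                              # the p-vector of measurements
residual = np.linalg.norm(y - apply_A(A, X))   # Euclidean-norm objective; 0 at the true X
```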
The paper separates the approximation methods into two classes. The first is convex relaxations: the rank constraint is replaced by a constraint on the nuclear norm of X, the sum of all of its singular values. That change allows one to employ well-understood techniques from convex constrained optimization.
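The basic building block of most nuclear-norm solvers is the proximal operator of the nuclear norm, which soft-thresholds the singular values. The following is a generic sketch of that operator, not a specific algorithm from the paper:

```python
import numpy as np

def nuclear_norm(X):
    """Nuclear norm: the sum of all singular values of X."""
    return np.linalg.svd(X, compute_uv=False).sum()

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm.

    Soft-thresholds each singular value by tau, which shrinks (and for
    small singular values, zeroes) them, reducing the rank of X.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

For example, applying svt with tau = 1 to diag(3, 1, 0.5) zeroes the two smaller singular values, leaving a rank-one matrix.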
The second class consists of greedy algorithms that retain the nonconvex nature of the problem. These algorithms iteratively refine a rank-k solution based on local information. Hard thresholding methods, the topic of this paper, are a subset of this class.
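The prototypical member of this class is matrix iterative hard thresholding: take a gradient step on the least-squares residual, then project back onto the rank-k matrices via a truncated SVD. The sketch below assumes the same sensing-matrix representation of A as above and a conservative step size derived from the operator norm; it illustrates the general scheme, not any particular variant from the paper.

```python
import numpy as np

def hard_threshold(X, k):
    """Best rank-k approximation of X via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def iht(A, y, k, iters=300):
    """Matrix iterative hard thresholding for min ||y - A(X)|| over rank-k X.

    A is a (p, m, n) stack of sensing matrices; y is the p-vector of
    measurements. Step size mu = 1/L, where L is the squared operator
    norm of A viewed as a p x (mn) matrix (a standard, conservative choice).
    """
    p, m, n = A.shape
    mu = 1.0 / np.linalg.norm(A.reshape(p, -1), ord=2) ** 2
    X = np.zeros((m, n))
    for _ in range(iters):
        r = y - np.tensordot(A, X, axes=([1, 2], [0, 1]))   # residual y - A(X)
        grad = np.tensordot(r, A, axes=(0, 0))              # adjoint step A*(r)
        X = hard_threshold(X + mu * grad, k)                # project to rank k
    return X
```

Note that the iterate stays rank-k throughout, which is exactly what distinguishes this class from the convex relaxations: no convex surrogate is involved, at the price of convergence guarantees that hold only under restricted isometry-type conditions.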
In 30 densely written pages, the paper gives a nice overview of thresholding methods, including the mathematical development, conditions for convergence, and comparisons. The bibliography is helpful for any researcher wanting to understand more about these methods and their applications.