Compressed sensing (CS) is a new set of techniques that promise good, or even exact, reconstruction of signals from a very small number of measurements. When x is a large, sparse vector (that is, x has few nonzero coefficients in some basis), the observation is y = Φ x + e, where Φ is a “short, fat” sensing matrix (far fewer rows than columns) and e is measurement error.
The goal is to reconstruct x from y, a problem with widespread applications. Several well-known algorithms exist for recovering (or estimating) a sparse x from y, and several techniques exist for constructing an appropriate sensing matrix Φ.
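To make the model concrete, here is a toy Python sketch of the observation y = Φx + e (noiseless, so e = 0) and a single matching-pursuit step that recovers a 1-sparse x. The dimensions, the Gaussian Φ, and the single-step recovery are illustrative choices of mine, not taken from the paper.

```python
import random

random.seed(0)
m, n = 4, 8  # 4 measurements of an 8-dimensional signal

# Illustrative random Gaussian sensing matrix (m rows, n columns)
phi = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]

# A 1-sparse signal: exactly one nonzero coefficient
x = [0.0] * n
x[3] = 2.5

# Noiseless observation y = Φx
y = [sum(phi[i][j] * x[j] for j in range(n)) for i in range(m)]

def column(j):
    return [phi[i][j] for i in range(m)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# One matching-pursuit step: pick the column most correlated with y
# (normalized by column length), then estimate its coefficient.
scores = [abs(dot(column(j), y)) / dot(column(j), column(j)) ** 0.5
          for j in range(n)]
best = max(range(n), key=lambda j: scores[j])
coeff = dot(column(best), y) / dot(column(best), column(best))

print(best, round(coeff, 6))  # recovers the support index and its value
```

For a 1-sparse noiseless signal one such step already finds the support exactly; real recovery algorithms iterate this idea (or solve an ℓ1 minimization) for larger supports and noisy y.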
This paper proposes a new reconstruction algorithm called similar sensing matrix pursuit. The basic idea is to exploit similarities among the columns of Φ, choosing one representative column for each set of similar columns; the representatives form the similar sensing matrix. Working with this smaller matrix can reduce the time complexity of recovery, although in some of the paper's examples recovery performance suffered.
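As I understand the column-grouping idea, it amounts to clustering columns whose pairwise correlation is high and keeping one per cluster. The sketch below uses a simple greedy rule with a correlation threshold; both the rule and the threshold are my assumptions, not the paper's actual similarity criterion.

```python
def norm(v):
    return sum(a * a for a in v) ** 0.5

def similarity(u, v):
    # Absolute normalized inner product (cosine similarity, sign ignored)
    return abs(sum(a * b for a, b in zip(u, v))) / (norm(u) * norm(v))

def representatives(columns, threshold=0.9):
    """Greedily group similar columns; return one representative per group.
    (A sketch of the idea; the paper's grouping rule may differ.)"""
    reps = []    # representative columns
    groups = []  # indices of the original columns each representative stands for
    for j, col in enumerate(columns):
        for k, rep in enumerate(reps):
            if similarity(col, rep) >= threshold:
                groups[k].append(j)
                break
        else:
            reps.append(col)
            groups.append([j])
    return reps, groups

# Toy example: columns 0 and 2 are nearly parallel, so they share a representative
cols = [[1.0, 0.0], [0.0, 1.0], [0.99, 0.01]]
reps, groups = representatives(cols)
print(len(reps), groups)
```

The representatives form a matrix with fewer columns than Φ, which is the source of the claimed speedup; the grouping also discards information, which is consistent with the degraded performance in some examples.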
I was fooled by the title. From a computational perspective, CS is a time-space tradeoff: less space is used for the data, at the cost of computation time spent reconstructing the original signal. One obvious difficulty is that the sensing matrix itself requires significant space, especially if its construction depends on a random process. One thrust of current research is to find algorithms that compute rather than store Φ; this is how I think of a “deterministic” sensing matrix, and an informal convenience poll of other CS researchers confirmed my thinking.
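As one illustration of computing rather than storing Φ, every matrix entry can be derived deterministically from a small seed, so only the seed and the dimensions need to be kept. This seeded-PRNG construction is my own sketch, not a scheme discussed in the paper.

```python
import random

def phi_entry(seed, i, j):
    # Derive entry (i, j) from the seed alone; nothing is stored.
    # (Illustrative construction; mixing constants are arbitrary.)
    rng = random.Random(seed * 1_000_003 + i * 1_009 + j)
    return rng.gauss(0, 1)

def measure(x, seed, m):
    # Compute y = Φx, generating each entry of Φ on demand.
    n = len(x)
    return [sum(phi_entry(seed, i, j) * x[j] for j in range(n))
            for i in range(m)]

# The same seed reproduces the same matrix, so the sensing side and the
# reconstruction side agree without ever transmitting or storing Φ itself.
y1 = measure([1.0, 0.0, 2.0], seed=42, m=2)
y2 = measure([1.0, 0.0, 2.0], seed=42, m=2)
print(y1 == y2)
```

The tradeoff is exactly the one noted above: space for Φ shrinks to a single seed, but every use of Φ pays the cost of regenerating its entries.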