For the solution of very high-dimensional linear algebra problems, much current research involves dimension reduction using norm-approximating projections. The Johnson–Lindenstrauss lemma [1] establishes the existence of projections that map a given space of dimension n to a subspace of dimension m ≪ n while approximately preserving the 2-norm of vectors to within a given relative error of ±ε. Moreover, the map can be linear: an m-by-n matrix. For example, a solution to the least-squares problem Ax=b, with an n-by-d matrix A and a right-hand side b, can be approximated by a least-squares solution to the m equations PAx=Pb, where P is an m-by-n norm-approximating projection. Although m, which depends on ε and d, can be significantly smaller than n, it will still be large, and the time to compute the product PA will be large if both P and A are dense (hence the desire for sparse projections).
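To make the sketch-and-solve idea concrete, here is a minimal illustration in Python (NumPy), using a dense Gaussian random projection as the norm-approximating map; the problem sizes and the choice of a Gaussian projection are assumptions for demonstration only, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdetermined least-squares problem: n equations, d unknowns, n >> d.
n, d, m = 10_000, 20, 400          # m is the reduced dimension, m << n
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Dense Gaussian projection, a simple JL-style norm-approximating map.
# Scaling by 1/sqrt(m) makes E[||Px||^2] = ||x||^2.
P = rng.standard_normal((m, n)) / np.sqrt(m)

x_exact,  *_ = np.linalg.lstsq(A, b, rcond=None)       # solve Ax = b
x_sketch, *_ = np.linalg.lstsq(P @ A, P @ b, rcond=None)  # solve PAx = Pb

print(np.linalg.norm(A @ x_exact  - b))   # optimal residual
print(np.linalg.norm(A @ x_sketch - b))   # slightly larger residual
```

The sketched solve works with an m-by-d system instead of an n-by-d one, but note that forming the product PA here costs O(mnd) time because P is dense, which is exactly the cost the sparse projections below are designed to avoid.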
OSNAP stands for oblivious sparse norm-approximating projections. These projections are oblivious in that the matrix is drawn at random without reference to the data: its entries are independent, identically distributed random variables. They are sparse because they have only a few nonzero entries per column; in fact, in some examples presented here, each column has only one nonzero entry. The authors compare their methods with work by others and give asymptotic running times for these methods.
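As a rough illustration of the one-nonzero-per-column case, the following sketch builds a CountSketch-style sparse embedding: each column receives a single ±1 entry at a uniformly random row. This is my simplified stand-in, not the paper's exact distribution (the authors' construction uses hash functions with limited independence); the SciPy usage and sizes are likewise assumptions.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)
n, m = 10_000, 400

# One nonzero per column: row chosen uniformly, value a random sign.
# (With s nonzeros per column, entries would instead be +-1/sqrt(s).)
rows  = rng.integers(0, m, size=n)          # target row for each column
signs = rng.choice([-1.0, 1.0], size=n)     # random +-1 value
P = sparse.csr_matrix((signs, (rows, np.arange(n))), shape=(m, n))

# Applying P to a dense n-by-d matrix touches each entry of A once,
# so it costs O(nnz(A)) = O(n*d) time versus O(m*n*d) for a dense P.
A  = rng.standard_normal((n, 20))
PA = P @ A
print(PA.shape)   # (400, 20)
```

The speedup over the dense case comes entirely from the sparsity of P: the product PA amounts to adding each signed row of A into one of the m output rows.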
The paper is well written and the methods are clearly explained. There is an extensive bibliography. This version of the paper is from the proceedings of a conference, and the proofs are only sketched.