The authors consider a generally unsolvable system of linear equations $Ax = b$, where $A \in \mathbb{R}^{m\times n}$ is a given matrix of observations and $b \in \mathbb{R}^m$ is a real vector of corresponding values, all taken from a given dataset. To obtain an approximate solution of this system, one usually minimizes an error residual $\min_x \sum_{i=1}^{m} \rho\big((Ax-b)_i\big)$, where typically $\rho$ is the absolute value function or its square. In order to deemphasize outliers and avoid the nondifferentiability of the robust absolute value error residual, a popular residual considered in the literature is the Huber M-estimator cost function, namely
$$\rho(t) = \begin{cases} \tfrac{1}{2}t^2, & |t| \le \gamma,\\[2pt] \gamma|t| - \tfrac{1}{2}\gamma^2, & |t| > \gamma,\end{cases}$$
where $\gamma$ is some positive number. Because the function switches at $|t| = \gamma$ from being quadratic to linear, special methods have been developed for the linear error minimization problem with the Huber M-estimator.
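As a concrete illustration (not part of the review itself), the piecewise Huber cost above can be sketched in a few lines of NumPy; the function name `huber` is an assumption for illustration. Note that the two branches agree at $|t| = \gamma$, where both evaluate to $\tfrac{1}{2}\gamma^2$, so the function is continuous (and continuously differentiable) at the switch point.

```python
import numpy as np

def huber(t, gamma):
    """Huber M-estimator cost: quadratic for |t| <= gamma, linear beyond.

    Illustrative sketch of the standard Huber function; not the authors' code.
    """
    t = np.asarray(t, dtype=float)
    quadratic = 0.5 * t**2                       # inner (small-residual) branch
    linear = gamma * np.abs(t) - 0.5 * gamma**2  # outer (outlier) branch
    return np.where(np.abs(t) <= gamma, quadratic, linear)
```

The linear growth of the outer branch is what deemphasizes outliers relative to the squared residual, while the quadratic inner branch avoids the nondifferentiability of the absolute value at zero.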
The authors give a simple quadratic programming formulation of the Huber M-estimator cost function in one dimension, and then set up the corresponding convex quadratic program for the linear estimator with $\rho$ as given above. Using parametric perturbation results from linear programming, the authors show that, for all values of the parameter $\gamma \ge \bar\gamma$ for some $\bar\gamma > 0$, a Huber linear estimate is just an ordinary least squares estimate. On the other hand, for all sufficiently small positive values of $\gamma$, the Huber estimates depend linearly on $\gamma$ and converge to a least 1-norm solution.
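The large-$\gamma$ behavior can be checked numerically. The sketch below minimizes the Huber objective directly with SciPy's general-purpose `minimize` (rather than the authors' quadratic programming formulation, which is an assumption we do not reproduce here) and compares the result with an ordinary least squares solution; for $\gamma$ larger than every residual, the two coincide because the cost is purely quadratic on the relevant region. The data here are synthetic and purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 3))   # synthetic observation matrix
b = rng.standard_normal(30)        # synthetic right-hand side

def huber_cost(x, gamma):
    """Total Huber residual of Ax - b (illustrative, not the authors' QP)."""
    t = A @ x - b
    return np.sum(np.where(np.abs(t) <= gamma,
                           0.5 * t**2,
                           gamma * np.abs(t) - 0.5 * gamma**2))

gamma = 100.0  # large enough that every residual lies in the quadratic zone
x_huber = minimize(lambda x: huber_cost(x, gamma), np.zeros(3)).x
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
# For such gamma the Huber estimate matches the least squares estimate.
```

For small $\gamma$ the same objective is dominated by its linear branches, which is the mechanism behind the convergence to a least 1-norm solution noted in the review.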
The authors show that their direct convex quadratic formulation is considerably faster than previously proposed methods, such as Huber’s Gauss-Seidel method, Smola’s dual formulation, Madsen and Nielsen’s Newton-type method, and Li’s conjugate gradient method.