This paper describes a technical enhancement for solving machine learning problems with multiple blocks of variables, and it is a solid contribution to methods that improve the efficiency of machine learning.
The purpose of the proposed research is to develop and evaluate a linearized alternating direction method, extended with parallel splitting and an adaptive penalty, for problems with multi-block variables. The essence of the method is to allow “the penalty parameter to be unbounded” and to prove “the sufficient and necessary conditions for global convergence,” along with an optimality measure that reveals the method’s convergence rate. The method is also equipped with tools to linearize part of the objective function, which allows it to handle more difficult variants of the problem; it works well for low-rank recovery and sparse representation problems.
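To make the reviewed scheme concrete, the following is a minimal, hypothetical sketch of a linearized alternating direction iteration with parallel splitting and an adaptive (here capped, geometrically growing) penalty, applied to a toy two-block problem: minimize ||x1||_1 + 0.5||x2||^2 subject to x1 + x2 = b. The function name, parameter choices (eta, rho, beta_max), and the simple penalty-update rule are illustrative assumptions, not the paper’s exact algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (entrywise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ladm_parallel_toy(b, n_iter=2000, beta=0.1, rho=1.05, beta_max=10.0, eta=2.5):
    """Sketch: linearized ADM with parallel splitting and adaptive penalty.

    Solves min ||x1||_1 + 0.5*||x2||^2  s.t.  x1 + x2 = b.
    eta must exceed (number of blocks) * ||A_i||^2 = 2 here, since A_i = I.
    """
    x1 = np.zeros_like(b)
    x2 = np.zeros_like(b)
    lam = np.zeros_like(b)                      # Lagrange multiplier
    for _ in range(n_iter):
        # Predicted multiplier from the current constraint residual.
        lam_hat = lam + beta * (x1 + x2 - b)
        # Parallel (Jacobi-style) linearized proximal updates: both
        # blocks use only the previous iterate, so they could run
        # concurrently -- the "parallel splitting" idea.
        x1_new = soft_threshold(x1 - lam_hat / (beta * eta), 1.0 / (beta * eta))
        v2 = x2 - lam_hat / (beta * eta)
        x2_new = (beta * eta) * v2 / (1.0 + beta * eta)  # prox of 0.5*||.||^2
        x1, x2 = x1_new, x2_new
        # Dual ascent on the constraint, then adaptive penalty growth
        # (capped here; the paper also analyzes the unbounded case).
        lam = lam + beta * (x1 + x2 - b)
        beta = min(beta * rho, beta_max)
    return x1, x2

b = np.array([3.0, -0.5])
x1, x2 = ladm_parallel_toy(b)
```

For this toy problem the closed-form optimum is x1 = soft_threshold(b, 1) and x2 = b - x1, which the iteration approaches as the constraint residual vanishes.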
The paper covers the proposed method exhaustively, with a detailed outline of the enabling formulas. The method is evaluated on a range of application domains, including latent low-rank representation, nonnegative matrix completion, and pathway analysis of breast cancer data. Seven appendices present detailed proofs of the theorems.
Very well organized and structured, with clear positioning of the proposed approach relative to related work, this paper makes good reading for scholars and engineers interested in advanced machine learning methods.