Multi-objective optimization has long been an important research topic in both academia and industry, with many real-world applications. A significant extension is constrained vector-valued optimization, in which the partial order is induced by an arbitrary closed convex cone rather than the nonnegative orthant, and which is therefore more difficult to handle.
This paper considers a projected gradient method for constrained vector-valued optimization problems. The authors deliberately avoid scalarization, and their reasons for doing so are well explained and justified. Their approach builds on three related problems: the steepest descent method for unconstrained scalar-valued problems, the projected gradient method for constrained scalar-valued problems, and the steepest descent method for unconstrained vector-valued problems. In short, the generalization proceeds along two directions, from the corresponding scalar-valued problems and from the corresponding unconstrained problems, and these two directions are in effect combined. The iterative steps of the proposed method are given in a specific form (a schematic version of the underlying template is sketched below). Two cases are distinguished according to the objective function: the nonconvex case and the convex case. The methods themselves also come in exact and inexact versions, the latter being variants of the proposed method. Convergence results are stated and proved for each case. As the authors caution, special attention must be paid to the variation of the step-size scaling factors, which is needed, for technical reasons, to obtain the convergence results in the convex case. The authors also remark on the price to be paid for using the proposed method instead of applying the projected gradient method to a prespecified scalarization.
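For orientation, recall the classical projected gradient iteration for a constrained scalar-valued problem $\min_{x \in C} f(x)$, with $C$ closed and convex:
$$x^{k+1} = P_C\bigl(x^k - t_k \nabla f(x^k)\bigr), \qquad t_k > 0,$$
where $P_C$ denotes the orthogonal projection onto $C$ and $t_k$ is a step size. The method under review generalizes this template to the vector-valued setting: the gradient step is replaced by a descent direction compatible with the ordering cone, obtained from a subproblem whose exact form is specified in the paper and is not reproduced here.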
The paper is very well written and clearly presents the methodology for solving constrained vector-valued optimization problems. It makes a solid contribution to the field.