Constrained optimization refers to minimizing (or maximizing) an objective function subject to constraints on the admissible values of the decision variables. In a practical constrained optimization problem, there may be multiple decision variables and multiple objective functions to be optimized simultaneously. Such vector optimization problems arise in many areas today, including engineering design and computational finance. Although well-known techniques exist for certain classes of constrained optimization problems, such as linear programming and nonlinear programming, other approaches must be investigated when those techniques do not apply.
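To make the definition concrete, the following sketch solves a small single-objective instance by brute force: minimize f(x, y) = (x - 1)^2 + (y - 2)^2 subject to x + y <= 2. The problem, grid bounds, and step size are illustrative choices, not taken from the paper under review.

```python
def constrained_grid_search():
    """Minimize f(x, y) = (x - 1)**2 + (y - 2)**2 subject to x + y <= 2
    by exhaustive search over a grid, keeping only feasible points.
    (Illustrative only; real solvers exploit problem structure.)"""
    f = lambda x, y: (x - 1) ** 2 + (y - 2) ** 2
    best = None
    for i in range(-20, 61):          # x ranges over [-1, 3] in steps of 0.05
        for j in range(-20, 61):      # y ranges over [-1, 3] in steps of 0.05
            x, y = i / 20, j / 20
            if x + y <= 2:            # the constraint: skip infeasible points
                if best is None or f(x, y) < best[0]:
                    best = (f(x, y), x, y)
    return best                       # (best value, best x, best y)
```

Note that the unconstrained minimizer (1, 2) violates the constraint (1 + 2 > 2), so the constrained minimum lies on the boundary x + y = 2, at (0.5, 1.5) with value 0.5.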
In this paper, Jahn improves a random search method due to Graef and Younes for computing global approximate solutions to multiobjective constrained optimization problems of arbitrary structure. The improvement is twofold. First, a smaller set of more important solutions is selected from the possibly very large number of randomly generated ones. This is achieved by relaxing the minimality requirement on the objective functions and adding a backward iteration to enhance the self-learning nature of the random search. Second, the minimal solutions corresponding to these more important solutions are classified into subdivisions so as to refine them further. Three benchmark bicriterial minimization problems from the literature, two with two variables and one with three variables, are analyzed using the proposed method. The results are quite encouraging in terms of reducing the number of solution points obtained, but they would be stronger had the author reported the computational effort required by the improvement and compared the results with other methods in the literature.
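The basic ingredient of such a method can be sketched as follows: sample candidate points at random, evaluate all objectives, and keep only the points that no other sample dominates. The `eps` parameter below is a hypothetical stand-in for the relaxed minimality requirement (a larger `eps` makes domination harder, so more near-minimal points survive); Jahn's backward iteration and subdivision refinement are not reproduced here. The test problem is Schaffer's classic bicriteria example, not one of the three benchmarks from the paper.

```python
import random

def dominates(fa, fb, eps=0.0):
    """True if objective vector fa dominates fb. With eps > 0 the
    comparison is relaxed: fa must beat fb by at least eps in every
    objective, so fewer points are discarded as dominated."""
    return (all(a + eps <= b for a, b in zip(fa, fb))
            and any(a + eps < b for a, b in zip(fa, fb)))

def random_search_pareto(objectives, lo, hi, n_samples=2000, eps=0.0, seed=0):
    """Plain random search on the box [lo, hi]: sample points uniformly,
    evaluate all objectives, and return the eps-nondominated samples."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n_samples):
        x = rng.uniform(lo, hi)
        pts.append((x, tuple(f(x) for f in objectives)))
    # Keep a point only if no other sample (eps-)dominates it.
    return [p for p in pts
            if not any(dominates(q[1], p[1], eps) for q in pts if q is not p)]

# Schaffer's bicriteria problem: f1(x) = x^2, f2(x) = (x - 2)^2 on [-5, 5].
front = random_search_pareto([lambda x: x ** 2, lambda x: (x - 2) ** 2], -5, 5)
```

By construction the returned set is internally nondominated, and it is typically far smaller than the raw sample, which illustrates the kind of reduction in solution points that the paper reports.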