In linearly constrained optimization problems, two types of constraints may be distinguished: simple bounds (upper and/or lower) on individual variables, and general linear equality or inequality constraints involving several variables. In methods based on an active-set strategy, one attempts to identify the subset of these constraints that are satisfied as equalities at the optimum, and to solve the corresponding reduced equality-constrained problem. The tentatively identified active set will, in general, change from stage to stage during the iteration; certain auxiliary quantities required to generate the next iterate, which depend on the active set, must change correspondingly. One could treat simple bounds on the same footing as general constraints, but there are potential advantages to recognizing their presence when a significant number of them enter the active set at any stage: variables whose values are fixed by active bound constraints need not be determined in the course of solving (normally approximately) the reduced problem at that stage.
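As a minimal illustration of the reduced-problem idea (not the authors' algorithm), the following sketch, with hypothetical data, minimizes a quadratic subject to lower bounds: with an active set of bounds tentatively fixed, the bounded variables drop out and only the free components are solved for.

```python
import numpy as np

# Hypothetical example: minimize 0.5 x'Hx + c'x subject to x >= 0,
# with a tentatively identified active set of bound constraints.
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
c = np.array([-1.0, 5.0, -1.0])

active = [1]                                  # x[1] held at its lower bound 0
free = [i for i in range(3) if i not in active]

# Variables fixed by active bounds need not be determined: solve the
# reduced equality-constrained problem in the free variables only.
x = np.zeros(3)
x[free] = np.linalg.solve(H[np.ix_(free, free)], -c[free])

# The gradient vanishes on the free components; its component on the
# active bound is the Lagrange multiplier, whose sign confirms (or
# refutes) the tentative active set.
g = H @ x + c
```

Here the reduced system is only 2-by-2 even though the full problem has three variables, which is the advantage the review notes when many bounds are active.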
The auxiliary quantities involve the gradient and Hessian of the objective function whose extremum subject to the active constraints is sought, or approximations thereto. Common to a broad class of methods and problems is the need for additional information that is most easily obtained from a suitable factorization of a matrix specified by the tentative active set, from a factorization of a reduced Hessian matrix, and from quantities related thereto. This paper is devoted to the selection of suitable factorizations, and to the design and implementation of an algorithm for updating these factorizations when the active set changes. Stable and efficient updating techniques are proposed for the four canonical changes: adjoining or deleting a simple bound or a general linear constraint; other changes can be treated as a sequence of such canonical changes. The implementation is designed for moderate numbers of active constraints with no special sparsity structure beyond that characteristic of simple bounds. Some variants of and alternatives to the proposed techniques are discussed briefly.
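The flavor of such updates (again only a hedged sketch with invented data, standing in for the paper's own techniques) can be seen with SciPy's QR update routines: when a constraint is adjoined, its normal is appended as a column of the active-constraint matrix and the factorization is repaired by rotations rather than recomputed; deletion is handled analogously.

```python
import numpy as np
from scipy.linalg import qr, qr_insert, qr_delete

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))      # columns: normals of active general constraints
Q, R = qr(A)                         # initial factorization

# Adjoining a constraint: append its normal as a new (last) column and
# restore triangularity of R by an O(n^2) update, not a refactorization.
a_new = rng.standard_normal(6)
Q1, R1 = qr_insert(Q, R, a_new, 3, which='col')

# Deleting a constraint: remove column 1 and repair the factorization.
Q2, R2 = qr_delete(Q1, R1, 1, which='col')
```

Each update costs O(n^2) work against O(n^3) for factorizing afresh, which is the efficiency motivation for the four canonical updates described in the review.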
Apart from a few minor misprints, the paper is clearly and carefully written. With the possible exception of the case of linear objective functions (the linear programming problem), such techniques seem likely to be part of many future state-of-the-art codes. The authors have apparently produced such codes for quadratic and nonlinear programming problems.