This paper addresses two issues with the LogitBoost method for boosting weak classifiers. The first is an invariance property of the logistic loss: adding the same constant to every component of the classifier output leaves the loss unchanged, so the classifier that minimizes the loss is not unique. The second is that the logistic loss has a dense Hessian matrix, which makes computing the gain from splitting a tree node computationally expensive.
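The invariance in question is easy to verify numerically. The sketch below (my own illustration, not code from the paper) evaluates the multiclass logistic loss, i.e. the negative log-likelihood of the softmax probabilities, and checks that shifting every component of the score vector by a constant leaves it unchanged:

```python
import numpy as np

def multiclass_logistic_loss(F, y):
    """Negative log-likelihood of the softmax probabilities.

    F : (n_samples, n_classes) array of classifier scores.
    y : (n_samples,) array of integer class labels.
    """
    # Subtracting the row-wise max is the usual numerical-stability
    # trick; it is itself an instance of the invariance demonstrated here.
    Z = F - F.max(axis=1, keepdims=True)
    log_p = Z - np.log(np.exp(Z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].sum()

rng = np.random.default_rng(0)
F = rng.normal(size=(5, 3))            # toy scores: 5 samples, 3 classes
y = rng.integers(0, 3, size=5)

loss = multiclass_logistic_loss(F, y)
shifted = multiclass_logistic_loss(F + 7.0, y)  # add a constant everywhere
assert np.isclose(loss, shifted)        # loss is unchanged
```

Because the softmax probabilities depend only on differences between components, any constant shift cancels, which is exactly why the minimizer is determined only up to such a shift.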
The paper addresses the first problem by imposing a constraint that fixes the value of the invariant, reducing the number of degrees of freedom by one. The second is addressed by splitting each tree node using only a pair of components of the classifying vector at a time, with possibly different pairs at each step of the algorithm; this simplifies the Hessian and thus the computations.
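To see why the pairwise restriction helps, consider an update that adds a step t to one component and subtracts it from another (the variable names and step size below are illustrative, and the constraint shown is the common sum-to-zero choice; this is a sketch of the idea, not the authors' exact procedure). Along the direction e_r − e_s the per-sample Hessian of the logistic loss collapses to a scalar, so the split gain needs no dense-matrix work, and the update preserves a sum-to-zero constraint:

```python
import numpy as np

def softmax(F):
    Z = F - F.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def pairwise_gain(F, Y, r, s):
    """Newton-step gain for an update along the direction e_r - e_s.

    For the softmax Hessian H = diag(p) - p p^T, the quadratic form
    d^T H d with d = e_r - e_s equals p_r(1-p_r) + p_s(1-p_s) + 2 p_r p_s,
    a scalar per sample.  Y holds one-hot targets.
    """
    P = softmax(F)
    g = ((Y[:, r] - P[:, r]) - (Y[:, s] - P[:, s])).sum()
    h = (P[:, r] * (1 - P[:, r]) + P[:, s] * (1 - P[:, s])
         + 2 * P[:, r] * P[:, s]).sum()
    return g * g / h   # gain proportional to the Newton decrement

rng = np.random.default_rng(1)
F = rng.normal(size=(8, 4))
F -= F.mean(axis=1, keepdims=True)          # start on the sum-to-zero subspace
Y = np.eye(4)[rng.integers(0, 4, size=8)]   # one-hot labels

t = 0.25                                    # illustrative step size
F_new = F.copy()
F_new[:, 0] += t                            # f_r += t
F_new[:, 1] -= t                            # f_s -= t
assert np.allclose(F_new.sum(axis=1), 0.0)  # constraint preserved
```

The key point is that opposite-signed updates on a pair of components move only within the constrained subspace, and the gain computation touches a single 2×2 block of the Hessian rather than the full dense matrix.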
The algorithm is described in detail in the paper, and a nice picture illustrates the kinds of trees that the various boosting algorithms discussed in the paper produce. The authors also consider an alternative approach that uses a diagonal approximation of the Hessian, but this turns out to degrade performance. They compare their algorithm, which they call AOSO-LogitBoost, with ABC-LogitBoost on the tests used by Li [1], and find that it mostly outperforms ABC-LogitBoost.