This paper addresses how a learning system can cope with a changing environment. For example, a system that has learned a linear function must handle changes in the intercept and slope of the function, or even changes in the function's shape. The first kind of change is drift, which can be managed by shifting the learned parameters. In the second case, the domain can be dynamically divided into regions, within each of which the learner faces an essentially linear problem.
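The distinction can be made concrete with a small sketch. The following is not taken from the paper; the region boundaries, coefficients, and function names are made-up numbers purely to illustrate the two ideas: a piecewise-linear predictor over a divided domain, and drift handled by shifting a region's parameters in place.

```python
# Each region of the domain keeps its own (slope, intercept).
# All values here are illustrative, not from the reviewed paper.
regions = [
    (0.0, 5.0, 2.0, 1.0),     # on [0, 5):  y = 2x + 1
    (5.0, 10.0, -1.0, 16.0),  # on [5, 10): y = -x + 16
]

def predict(x):
    """Route x to its region and apply that region's linear model."""
    for lo, hi, slope, intercept in regions:
        if lo <= x < hi:
            return slope * x + intercept
    raise ValueError("x outside the modelled domain")

def apply_drift(region_index, delta_slope, delta_intercept):
    """Drift is absorbed by shifting the learned parameters in place,
    without changing the division of the domain."""
    lo, hi, s, b = regions[region_index]
    regions[region_index] = (lo, hi, s + delta_slope, b + delta_intercept)
```

A shape change, by contrast, would require altering the list of regions itself rather than the parameters inside one region.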
In this paper, this division of the domain is maintained as a binary tree, which is modified as changes appear in the input stream; changes are detected through increases in the error rate. Each leaf of the tree corresponds to a subdomain, and the system maintains a window of past observations at each leaf. As prediction errors accumulate, the system discards old observations and learns new parameters for the leaf.
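The per-leaf mechanism can be sketched as follows. This is a minimal illustration of the idea, not the paper's algorithm: the class name, window size, error threshold, and purge-half-the-window policy are all assumptions made for the sketch.

```python
from collections import deque

class Leaf:
    """Hypothetical sketch of a per-leaf sliding window; names and
    thresholds are illustrative, not taken from the reviewed paper."""

    def __init__(self, max_window=50, error_threshold=1.0):
        self.window = deque(maxlen=max_window)  # recent (x, y) pairs
        self.error_threshold = error_threshold
        self.slope, self.intercept = 0.0, 0.0

    def _fit(self):
        # Ordinary least squares on the current window.
        xs = [x for x, _ in self.window]
        ys = [y for _, y in self.window]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        if sxx > 0:
            self.slope = sum((x - mx) * (y - my)
                             for x, y in self.window) / sxx
        self.intercept = my - self.slope * mx

    def observe(self, x, y):
        err = abs((self.slope * x + self.intercept) - y)
        self.window.append((x, y))
        if err > self.error_threshold and len(self.window) > 4:
            # Error spike: discard the oldest half of the window so the
            # refit reflects the new regime rather than the old one.
            for _ in range(len(self.window) // 2):
                self.window.popleft()
        self._fit()
```

Feeding such a leaf points from y = 2x + 1 and then switching to y = 2x + 5 shows the intended behaviour: the error spike triggers purging of stale observations, and the refitted line converges to the new intercept.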
In addition, and this is the main contribution of the paper, the nodes are monitored with a view to restructuring the tree when needed. Two scenarios must be handled. In the first, a leaf may need to be split when the change in its region is not mere drift but a change in the function's shape. In the second, two sibling leaves may need to be merged into a single leaf at their parent node.
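The two restructuring operations can be sketched structurally. The criteria below (splitting a subdomain at its midpoint, merging when the children's linear models agree within a tolerance) are assumptions chosen for illustration; the paper's actual monitoring tests are not reproduced here.

```python
class Node:
    """Hypothetical node of the binary domain tree; criteria and
    field names are illustrative, not from the reviewed paper."""

    def __init__(self, lo, hi, slope=0.0, intercept=0.0):
        self.lo, self.hi = lo, hi          # subdomain covered by this node
        self.slope, self.intercept = slope, intercept
        self.left = self.right = None      # children; None means leaf

    def is_leaf(self):
        return self.left is None

def split(leaf):
    """Scenario 1: replace a leaf's single linear model with two
    children, one per half of its subdomain, when one line no longer
    fits the whole region."""
    mid = (leaf.lo + leaf.hi) / 2
    leaf.left = Node(leaf.lo, mid, leaf.slope, leaf.intercept)
    leaf.right = Node(mid, leaf.hi, leaf.slope, leaf.intercept)

def try_merge(parent, tol=0.1):
    """Scenario 2: collapse two leaf children back into the parent
    when their linear models have become indistinguishable."""
    l, r = parent.left, parent.right
    if (l and r and l.is_leaf() and r.is_leaf()
            and abs(l.slope - r.slope) < tol
            and abs(l.intercept - r.intercept) < tol):
        parent.slope = (l.slope + r.slope) / 2
        parent.intercept = (l.intercept + r.intercept) / 2
        parent.left = parent.right = None
        return True
    return False
```

Splitting copies the parent's model into both halves as a starting point; each half then relearns from its own window, and a later merge undoes the split if the halves converge to the same line.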
The paper reports comparisons with four other learning algorithms that adapt to change: IB1, LR, M5, and MLP. While similar to the other algorithms in predictive performance, SAIRT (the authors' name for their algorithm) is faster, and its models are smaller and thus easier to understand. The paper is clearly written, with full details of the algorithms.