Decision tables are succinct representations of decision trees, which one can think of as multiple nested IF-THEN-ELSE constructs. Decision tables have been used for years in many software systems to choose the proper solution to a problem with many input variables. Rapid searching of a decision table requires translating it into an optimum or near-optimum decision tree. Previous papers on the subject have identified ways of constructing optimum decision trees, but have relied on one of two somewhat costly processing techniques.
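The equivalence the review opens with can be sketched in a few lines. The table, condition names, and actions below are invented for illustration; the point is only that a decision table (a flat list of rules) and a decision tree (nested IF-THEN-ELSE) encode the same logic:

```python
# A toy decision table: each rule maps a combination of condition
# values to an action. The conditions and actions are hypothetical.
RULES = [
    # (is_member, large_order) -> action
    ((True,  True),  "free_shipping"),
    ((True,  False), "discount"),
    ((False, True),  "standard_shipping"),
    ((False, False), "no_offer"),
]

def lookup_table(is_member, large_order):
    """Linear scan of the decision table."""
    for conditions, action in RULES:
        if conditions == (is_member, large_order):
            return action
    raise ValueError("no matching rule")

def lookup_tree(is_member, large_order):
    """The same logic as nested IF-THEN-ELSE, i.e., a decision tree."""
    if is_member:
        return "free_shipping" if large_order else "discount"
    else:
        return "standard_shipping" if large_order else "no_offer"
```

The tree form answers a query by testing each condition at most once, which is why translating a table into a good tree speeds up searching.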
In this paper, the author proposes two new variable selection criteria for constructing near-optimum decision trees. These two criteria use the “potential” or “complexity” of the table to develop decision trees in less time than previous algorithms. He then shows that these two new selection criteria produce much more compact trees and reduce processing times. They do not, however, always create an optimum tree. Moreover, for certain very special tables, the previously known “loss” selection criterion does better, since those tables favor it.
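The general shape of such criterion-driven tree construction can be sketched as a greedy recursion: at each node, score every remaining condition variable and split on the best one. The heuristic below (minimize the larger branch) is a simple stand-in chosen for illustration; it is not the author's “potential,” “complexity,” or “loss” criterion:

```python
def build_tree(rules, variables):
    """Greedily build a decision tree from a decision table.

    rules: list of (condition_dict, action) pairs over boolean
    conditions; variables: list of condition names still untested.
    The selection heuristic is illustrative only.
    """
    if not rules:
        return None  # no rule covers this branch
    actions = {action for _, action in rules}
    if len(actions) == 1:
        return actions.pop()  # all remaining rules agree: make a leaf

    # Hypothetical selection criterion: prefer the variable whose
    # test splits the remaining rules most evenly.
    def split_cost(var):
        true_side = sum(1 for conds, _ in rules if conds[var])
        return max(true_side, len(rules) - true_side)

    var = min(variables, key=split_cost)
    rest = [v for v in variables if v != var]
    return {
        "test": var,
        True:  build_tree([r for r in rules if r[0][var]], rest),
        False: build_tree([r for r in rules if not r[0][var]], rest),
    }
```

Swapping in a different `split_cost` is exactly where selection criteria like the author's would plug in; the surrounding recursion stays the same, which is why the choice of criterion dominates both tree compactness and construction time.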
The author’s purpose in writing the paper is not stated, but it appears to be to extend his own work in decision table theory. There seems to be some current research in the area, but of the 18 references, only five were less than 10 years old, and the newest two (1982, 1985) were to the author’s own work.
The paper is exceptionally well written considering the number of mathematical proofs given, but it suffers from the assumption that the reader is already well versed in the earlier references. I would recommend it only to students doing serious research in decision tree theory.