Error-minimized extreme learning machines (EM-ELMs) were introduced to improve upon the learning time of feed-forward neural networks [1]. EM-ELMs are constructed by an incremental process that adds hidden nodes to the network in a way that allows the output-layer weights to be computed with simple matrix algebra. In the original scheme, the weights of the added nodes are chosen at random (the random strategy). An alternative approach, the input strategy, selects the weights of new hidden nodes from among the best of the input examples. Note that the random strategy permits the exploration of parts of the search space that may not be represented in the input examples.
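To make the construction concrete, the following is a minimal sketch of the incremental growth loop under both strategies. It is an illustration, not the authors' implementation: the activation function, growth step, and stopping tolerance are assumptions, and for brevity the output weights are re-solved by least squares at every step, whereas EM-ELM proper updates them incrementally to achieve its speed advantage.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grow_elm(X, T, max_nodes=20, step=1, tol=1e-3, strategy="random", rng=None):
    """Grow a single-hidden-layer network one batch of nodes at a time.

    strategy="random": hidden weights drawn at random (EM-ELM style).
    strategy="input":  hidden weights copied from training examples
                       (a stand-in for the paper's input strategy).
    NOTE: output weights are re-solved from scratch here; real EM-ELM
    updates the pseudoinverse incrementally instead.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    W = np.empty((0, d))      # hidden weights, one row per hidden node
    b = np.empty(0)           # hidden biases
    beta = None               # output-layer weights
    for _ in range(0, max_nodes, step):
        if strategy == "random":
            W_new = rng.standard_normal((step, d))
        else:  # "input": centre new nodes on randomly chosen examples
            W_new = X[rng.integers(0, n, size=step)]
        W = np.vstack([W, W_new])
        b = np.concatenate([b, rng.standard_normal(step)])
        H = sigmoid(X @ W.T + b)                  # n x (#nodes) hidden output
        beta, *_ = np.linalg.lstsq(H, T, rcond=None)
        err = np.linalg.norm(H @ beta - T) / np.sqrt(n)
        if err < tol:                             # stop once error is small
            break
    return W, b, beta
```

Because the output layer is linear in the hidden-node activations, each step reduces to a linear least-squares problem; this is the "simple matrix algebra" that both strategies share, which is why the two can be implemented at equivalent computational cost.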
This paper compares the two approaches. As part of this exploration, the authors show that the two approaches can be implemented in such a way that their computational costs are equivalent, allowing them to concentrate on the quality of the learning.
The comparison used 20 benchmark datasets, ten for classification and ten for regression. In two cases corresponding to easy problems, both approaches learn the problem completely. In the remaining cases, with the exception of the stock dataset, the results are either comparable or the input method is superior. The authors attribute the exceptional behavior on the stock data to a strong tendency to over-fit in that case. The reported results also include the number of hidden nodes introduced by each approach for each problem. In their conclusion, the authors suggest reasons why the input method may be more effective.