Computing Reviews
Comparing error minimized extreme learning machines and support vector sequential feed-forward neural networks
Romero E., Alquézar R. Neural Networks 25, 122-129, 2012. Type: Article
Date Reviewed: Mar 2 2012

Error minimized extreme learning machines (EM-ELMs) were introduced to improve the training time of feed-forward neural networks [1]. An EM-ELM is built by an incremental process that adds hidden nodes to the network in a way that allows the output-layer weights to be computed with simple matrix algebra. In the original scheme, the added nodes are given random weights (the random strategy). An alternative, the input strategy, is to create hidden nodes based on a selection of the best of the input examples. Note that the random strategy permits the exploration of parts of the search space that may not be represented in the input examples.
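The random strategy can be illustrated with a minimal sketch of an extreme learning machine (this is an assumption-laden illustration, not the authors' EM-ELM implementation): hidden-node weights are drawn at random, and only the output-layer weights are fitted, via a single least-squares solve, which is the "simple matrix algebra" step.

```python
import numpy as np

def elm_fit(X, y, n_hidden, rng=None):
    """Fit a basic ELM regressor: random hidden layer, least-squares output layer.

    This is a hedged sketch of the random strategy only; the incremental
    node-addition and input-strategy variants discussed in the paper are
    not reproduced here.
    """
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights, closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: regress y = x^2 on [0, 1] with 20 random hidden nodes.
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = X.ravel() ** 2
W, b, beta = elm_fit(X, y, n_hidden=20, rng=0)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

Because only the output layer is trained, and by a direct solve rather than iterative gradient descent, training cost is dominated by one matrix factorization, which is the source of the speed advantage claimed for ELMs.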

This paper compares the two approaches. As part of this exploration, the authors show that the two approaches can be implemented in such a way that their computational costs are equivalent. Thus, the authors can concentrate on the quality of the learning.

The comparison used 20 benchmark datasets: ten for classification and ten for regression. In two cases corresponding to easy problems, both approaches learn the problem completely. In the remaining cases, with the exception of the stock dataset, the results are either comparable or the input method is superior; the authors attribute the stock exception to a strong tendency to overfit on that dataset. The reported results also give the number of hidden nodes introduced by each approach for each problem. In their conclusion, the authors suggest reasons why the input method may be more effective.

Reviewer:  J. P. E. Hodgson Review #: CR139936 (1207-0731)
[1] Huang, G. B.; Zhu, Q. Y.; Siew, C. K. Extreme learning machine: theory and applications. Neurocomputing 70 (2006), 489-501.
Categories: Learning (I.2.6); Connectionism And Neural Nets (I.2.6 ...); Neural Nets (C.1.3 ...); Neural Nets (I.5.1 ...)
Other reviews under "Learning":
- Learning in parallel networks: simulating learning in a probabilistic system. Hinton G. (ed), BYTE 10(4): 265-273, 1985. Type: Article (reviewed Nov 1 1985)
- Macro-operators: a weak method for learning. Korf R., Artificial Intelligence 26(1): 35-77, 1985. Type: Article (reviewed Feb 1 1986)
- Inferring (mal)rules from pupils' protocols. Sleeman D., Progress in artificial intelligence, Orsay, France, 1985. Type: Proceedings (reviewed Dec 1 1985)

Reproduction in whole or in part without permission is prohibited. Copyright 1999-2024 ThinkLoud®