Computing Reviews
Advanced algorithms for neural networks
Masters T., John Wiley & Sons, Inc., New York, NY, 1995. Type: Book (9780471105886)
Date Reviewed: May 1 1996

A variety of algorithms already known to the neural networks community have, thus far, not seen widespread acceptance among developers. Masters presents these algorithms and also manages to introduce some state-of-the-art algorithms still in the early stages of development.

Masters presents all the algorithms well. The accompanying graphics illustrate their behavior and associated formulas. C++ source code for the algorithms presented is included on the accompanying disk, along with the source and executables of two programs:

  • PNN, which implements the network models presented in the book: the probabilistic neural network (PNN), the generalized regression neural network (GRNN), and the Gram-Charlier neural network (GCNN)

  • MLFN2, an updated version of the multilayer feedforward network (MLFN) program included in Masters’s previous book [1]

The book addresses a problem that is common to several neural network models, especially MLFN: the long training periods needed to reach acceptable performance.

Chapter 1, “Deterministic Optimization,” is devoted to deterministic algorithms that efficiently find the nearest minimum. The problem is that their chances of falling into a local minimum are very high, so they must be combined with some stochastic method. Masters examines the deterministic Levenberg-Marquardt, conjugate-gradient, and line-minimization algorithms, comparing them with the traditional backpropagation algorithm.
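The review does not reproduce Masters’s code, but the role of line minimization in a deterministic descent can be sketched. Everything below (the toy objective, the function names, the constants) is illustrative and not taken from the book:

```cpp
#include <cmath>
#include <utility>

// Toy objective: f(x, y) = (x - 1)^2 + 10 * (y - 2)^2, minimum at (1, 2).
static double f(double x, double y) {
    return (x - 1.0) * (x - 1.0) + 10.0 * (y - 2.0) * (y - 2.0);
}
static void grad(double x, double y, double& gx, double& gy) {
    gx = 2.0 * (x - 1.0);
    gy = 20.0 * (y - 2.0);
}

// Steepest descent with a backtracking line minimization along -gradient.
// A deterministic method like this converges quickly, but only to the
// minimum nearest the starting point.
std::pair<double, double> descend(double x, double y, int iters = 500) {
    for (int i = 0; i < iters; ++i) {
        double gx, gy;
        grad(x, y, gx, gy);
        double step = 1.0;
        // Backtrack until the step yields a sufficient decrease (Armijo rule).
        while (f(x - step * gx, y - step * gy) >
                   f(x, y) - 0.5 * step * (gx * gx + gy * gy) &&
               step > 1e-12)
            step *= 0.5;
        x -= step * gx;
        y -= step * gy;
    }
    return {x, y};
}
```

The backtracking test here is a crude stand-in for the more sophisticated line-minimization routines the chapter discusses; started inside the wrong basin, a loop like this would settle into a local minimum, which is exactly the weakness the stochastic methods of chapter 2 address.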

Chapter 2, “Stochastic Optimization,” begins with a brief but excellent review of simulated annealing, along with an algorithm designed by Masters for optimizations that rely on random numbers. The detailed discussion of random number generation can be skipped by readers who are content to treat the perturbation and random-generator parts of the stochastic methods as black boxes.
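The “black box” structure of such a method can be sketched as a generic simulated annealing loop; the objective, cooling schedule, and constants below are illustrative, not Masters’s:

```cpp
#include <cmath>
#include <random>

// Multimodal toy objective: a parabola with superimposed ripples, so a
// purely deterministic descent would be trapped in a local minimum.
static double energy(double x) {
    return x * x + 4.0 * std::sin(5.0 * x) + 4.0;
}

// Generic simulated annealing: Gaussian perturbations, Metropolis
// acceptance, geometric cooling. Seeded for reproducibility.
double anneal(double x0, unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::normal_distribution<double> perturb(0.0, 1.0);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    double x = x0, best = x0, temp = 10.0;
    for (int i = 0; i < 20000; ++i) {
        double cand = x + perturb(rng) * std::sqrt(temp);
        double delta = energy(cand) - energy(x);
        // Always accept improvements; accept uphill moves with
        // probability exp(-delta / temp), which shrinks as temp falls.
        if (delta < 0.0 || unif(rng) < std::exp(-delta / temp))
            x = cand;
        if (energy(x) < energy(best)) best = x;
        temp *= 0.9995;  // geometric cooling schedule
    }
    return best;
}
```

The perturbation generator and the acceptance test are the two parts a reader can treat as black boxes: swapping in a different random generator or schedule changes the run, not the structure of the method.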

Chapter 3, “Hybrid Training Algorithms,” introduces two hybrid alternatives to the algorithms presented thus far: simple deterministic/stochastic alternation and stochastic smoothing with gradient hinting.

Masters’s preference for PNN is clearly shown in chapter 4, “Probabilistic Neural Networks I: Introduction,” and chapter 5, “Probabilistic Neural Networks II: Advanced Techniques,” where he presents the basics of PNN and a brilliant discussion of methods such as Parzen windows and the optimization of multiple-sigma models.
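The Parzen-window idea at the heart of PNN is easy to sketch. The one-dimensional, single-sigma version below is illustrative only, and is far simpler than the multiple-sigma models Masters optimizes:

```cpp
#include <cmath>
#include <vector>

// Parzen-window estimate of one class's density at x: an average of
// Gaussian kernels centered on that class's training points. This is
// what each summation unit of a PNN computes (1-D, single sigma).
double parzenDensity(const std::vector<double>& pts, double x, double sigma) {
    const double root2pi = std::sqrt(2.0 * std::acos(-1.0));
    double sum = 0.0;
    for (double p : pts) {
        double d = (x - p) / sigma;
        sum += std::exp(-0.5 * d * d);
    }
    return sum / (pts.size() * sigma * root2pi);
}

// PNN decision rule: choose the class with the largest estimated density.
int pnnClassify(const std::vector<std::vector<double>>& classes,
                double x, double sigma) {
    int best = 0;
    double bestDensity = -1.0;
    for (std::size_t c = 0; c < classes.size(); ++c) {
        double d = parzenDensity(classes[c], x, sigma);
        if (d > bestDensity) { bestDensity = d; best = static_cast<int>(c); }
    }
    return best;
}
```

Training is just storage of the patterns, which is why PNN avoids the long training periods that plague MLFN; the cost moves to classification time and to choosing sigma.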

Chapter 6 presents GRNN. The author’s strategy for implementing GRNN is consistent with the strategies used for PNN, which makes the algorithm extremely good for general function mapping applications. The presentation of the formulas in this chapter, however, is not as clear as in the preceding chapters. A graphic simulation of the behavior of GRNN offers a way to arrive at an intuitive understanding of the model, but for developers and researchers, a detailed discussion along the lines of those about PNN would have been more effective.
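The mapping GRNN computes can be sketched as a kernel-weighted average of the training targets, which is what the network evaluates in parallel form; the one-dimensional version below is illustrative, not the book’s implementation:

```cpp
#include <cmath>
#include <vector>

// GRNN prediction: a Gaussian-kernel-weighted average of the stored
// training targets. Training points near x dominate the estimate.
double grnnPredict(const std::vector<double>& xs,
                   const std::vector<double>& ys,
                   double x, double sigma) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < xs.size(); ++i) {
        double d = (x - xs[i]) / sigma;
        double w = std::exp(-0.5 * d * d);  // kernel weight for sample i
        num += w * ys[i];
        den += w;
    }
    return num / den;
}
```

The parallel with PNN is direct: the same kernel machinery that estimates a class density there produces a smooth regression surface here, which is why the two implementations can share so much strategy.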

Chapter 7 presents GCNN in a didactic way. It begins with an intuitive description of the model, followed by the original implementation that Moon Kim published in 1992. A good feature of this, the best chapter of the book, is the comparative evaluation of the pure GCNN and Edgeworth’s modification.

Masters devotes chapter 8, “Dimension Reduction and Orthogonalization,” entirely to methods for excluding redundant data and to algorithms that preprocess input patterns to produce orthogonal ones.
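One standard way to produce orthogonal patterns, consistent with the chapter’s topic though not necessarily Masters’s exact algorithm, is modified Gram-Schmidt:

```cpp
#include <cmath>
#include <vector>

using Vec = std::vector<double>;

static double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Modified Gram-Schmidt: turn a set of input vectors into an orthonormal
// set, removing the redundancy (correlation) between patterns.
std::vector<Vec> orthonormalize(std::vector<Vec> v) {
    for (std::size_t i = 0; i < v.size(); ++i) {
        for (std::size_t j = 0; j < i; ++j) {
            // Subtract the component of v[i] along each earlier vector.
            double proj = dot(v[i], v[j]);
            for (std::size_t k = 0; k < v[i].size(); ++k)
                v[i][k] -= proj * v[j][k];
        }
        double norm = std::sqrt(dot(v[i], v[i]));
        for (double& x : v[i]) x /= norm;  // scale to unit length
    }
    return v;
}
```

A vector whose norm collapses to nearly zero during this process is (almost) a linear combination of the earlier ones, which is precisely the redundant data the chapter is concerned with excluding.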

One of Masters’s original contributions is chapter 9, “Assessing Generalization Ability,” which addresses the problem of verification and validation of neural networks. He reveals how the reliability of test results can be measured. He also presents an aggressive but technically sound method for using the same dataset for both training and validation.
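The flavor of reusing one dataset for both training and validation can be illustrated with leave-one-out estimation; the 1-nearest-neighbor classifier here is a stand-in for illustration, not the method Masters analyzes:

```cpp
#include <cmath>
#include <vector>

// Leave-one-out error estimate for a 1-nearest-neighbor classifier:
// each point is classified by the nearest of the OTHER points, so every
// sample serves for both training and validation, yet the held-out
// point never influences its own prediction.
double looErrorRate(const std::vector<double>& xs,
                    const std::vector<int>& labels) {
    int errors = 0;
    for (std::size_t i = 0; i < xs.size(); ++i) {
        double bestDist = 1e300;
        int predicted = -1;
        for (std::size_t j = 0; j < xs.size(); ++j) {
            if (j == i) continue;  // hold sample i out
            double d = std::fabs(xs[i] - xs[j]);
            if (d < bestDist) { bestDist = d; predicted = labels[j]; }
        }
        if (predicted != labels[i]) ++errors;
    }
    return static_cast<double>(errors) / xs.size();
}
```

The aggressiveness lies in the reuse; the technical soundness lies in the holdout, since no sample ever votes on itself.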

Chapter 10, “Using the PNN Program,” is a reference manual for the PNN program included on the accompanying disk. The appendix, “Disk Contents, Hardware and Software Requirements, Installation Steps,” explains how to install the software from the disk.

The bibliography reflects a good review of the state of the art of neural networks and the classic works required to understand the book fully.

The book’s cover, fonts, illustrations, and size, together with the well-written, fully documented code on the accompanying disk, make this an excellent package.

Reviewer: Jose M. Ramirez
Review #: CR119294 (9605-0338)
[1] Masters, T. Signal and image processing with neural networks. Wiley, New York, 1994.
Neural Nets (I.5.1 ... )
Algorithms (I.5.3 ... )
C++ (D.3.2 ... )
Classifier Design And Evaluation (I.5.2 ... )
Other reviews under "Neural Nets":
Synergetic computers and cognition
Haken H. (ed), Springer-Verlag New York, Inc., New York, NY, 1991. Type: Book (9780387530307)
Oct 1 1992
Code recognition and set selection with neural networks
Jeffries C., Birkhäuser Boston Inc., Cambridge, MA, 1991. Type: Book (9780817635855)
Jun 1 1993
Fast learning and invariant object recognition
Souček B. (ed), Wiley-Interscience, New York, NY, 1992. Type: Book (9780471574309)
Nov 1 1992

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2024 ThinkLoud®