Computing Reviews
Artificial neural networks: a practical course
da Silva I., Spatti D., Flauzino R., Liboni L., dos Reis Alves S., Springer International Publishing, New York, NY, 2016. 307 pp. Type: Book (978-3-319-43161-1)
Date Reviewed: Apr 13 2017

Artificial neural networks constitute one of the areas within the computing discipline of soft computing. Their architecture and use are based on the operation of a biological neural system; as such, there is an inherent parallelism in the processing of the input data. The types of problems to which they have been applied include autonomous control systems, computer vision, image analysis, forecasting, and pattern recognition. Inherently nonlinear behavior can also be treated with neural networks.

This book would be very good for advanced undergraduate students, first-year graduate students, or anyone wishing to learn about neural networks independently. Originally published in Portuguese in Brazil, this is the English translation of the text. Even though the authors’ first language is not English, the translation is very readable; in only a few spots did Portuguese words survive in table headings.

The book has two parts of text followed by five appendices. The first part is the more important of the two and contains ten chapters. The first chapter describes biological neurons and how they function as models for artificial neurons and their operation. Chapter 2 has a general description of the components and architectural features of artificial neural networks and presents an overview of the training of a neural network. The remaining eight chapters go into details about different architectures of well-known artificial neural networks. These chapters are generally organized by describing any specific implementations of the artificial neurons used in the network, the architecture of the network, and the training process used. All of the algorithms employed in the training process are presented in pseudocode. There is no preference for any high-level language for implementing the algorithms.

These eight chapters describe architectures beginning with the simplest and continuing to the more elaborate, an approach that also follows the historical evolution of the study of artificial neural networks. Chapters 3 and 4 describe the perceptron and ADALINE single-layer networks, respectively. Chapter 5 presents the multilayer perceptron (MLP) network; it is the longest chapter in the book and introduces hidden layers and backpropagation. Chapter 6 describes the radial basis function (RBF) architecture in a short chapter that depends heavily on comparisons with the MLP network. Hopfield networks are introduced in chapter 7; the feedback of outputs to network inputs, associative memory, and the stability of dynamic networks are among the new concepts. Competitive learning processes and self-organizing maps are introduced in chapter 8 on Kohonen networks. Chapter 9 treats two architectures, learning vector quantization (LVQ) and counter-propagation. It is unclear why they were not placed in two separate chapters; they are sufficiently different to warrant separation (single layer versus two layers, single training process versus two-stage training). The connection between them is that both use competitive training algorithms, and the only practical example at the end of the chapter employs the LVQ architecture. The last chapter in the first part describes adaptive resonance theory (ART). The authors identify five topologies of ART architectures but analyze only the simplest, with binary inputs and unsupervised training; references to the literature are given for the other topologies. Among the new topics in architecture and training algorithms are separate sets of weights for the feedforward and feedback directions and the dynamic character of the second layer, in which neurons are added to recognize new classes of results.
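Since the book presents its training algorithms in pseudocode and leaves the choice of implementation language to the reader, a minimal sketch of the single-layer perceptron training rule described in chapter 3 might look like the following in Python. (The function names and the AND-gate data are illustrative assumptions, not taken from the book.)

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=100):
    """Train a single-layer perceptron (Rosenblatt rule).

    X: (n_samples, n_features) inputs; y: labels in {-1, +1}.
    A constant bias input of 1 is appended to each sample.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w >= 0 else -1
            if pred != target:
                w += lr * target * xi   # update weights only on misclassification
                errors += 1
        if errors == 0:                 # converged: training set fully separated
            break
    return w

def predict(w, X):
    """Classify samples with the trained weight vector."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.where(Xb @ w >= 0, 1, -1)

# Usage: a linearly separable AND-gate problem with bipolar labels
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w = train_perceptron(X, y)
```

For separable data such as this, the perceptron convergence theorem guarantees the loop terminates; the multilayer and competitive architectures in later chapters replace this simple update with more elaborate training processes.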

The second part of the book is a set of ten brief chapters describing practical projects presumably undertaken in the authors’ laboratory; there are no references to the research literature. They serve, as the authors state, as models and motivators for projects that readers may wish to study: for example, grading coffee, identifying adulterants in powdered coffee, computer traffic analysis, electric power quality, stock market forecasting, disease diagnostics, robotics control, grading tomatoes, and pattern classification. The most theoretical application is a somewhat artificial problem on constrained optimization using Hopfield networks. Among these applications, the most common architecture was the MLP, used in six of the nine remaining practical projects. The LVQ, ART, Kohonen, and RBF architectures were each used only once, with the RBF compared to the MLP in the pattern classification example.

Particular attention should be paid to the exercises at the end of each of the ten chapters in the first part and to the demanding practical examples at the end of chapters 3 through 10. The exercises thoroughly test the readers’ understanding of the descriptive material. The practical examples address the training and use of the architecture in the chapter. Instructors are free to choose the computational tools they prefer. Each chapter has at least one practical example, with chapters 5 and 6 having three and two, respectively. Data for the practical examples are either embedded in the chapters or in one of the appendices. Electronic versions of the data are available from the authors at the website cited in the book, in both plain text and Excel spreadsheet formats.

Reviewer: Anthony J. Duben. Review #: CR145198 (1706-0339)
Self-Modifying Machines (F.1.1 ...)
Connectionism And Neural Nets (I.2.6 ...)
Applications And Expert Systems (I.2.1)

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2024 ThinkLoud®