Computing Reviews
Neural networks and statistical learning
Du K., Swamy M., Springer Publishing Company, Incorporated, London, UK, 2014. 850 pp. Type: Book (978-1-4471-5570-6)
Date Reviewed: Aug 6 2014

Encyclopedic is probably the most apt description of Du and Swamy’s book on neural networks and statistical learning. Each chapter is a brief introduction to a topic that would merit a full monograph of its own. Depending on a chapter’s length and breadth, there are usually no fewer than 100, and often 200, references to the primary literature. The book is organized into 25 chapters and two appendices. Fortunately, there is a glossary of abbreviations at the beginning of the book. The chapters are sequenced in topical groups.

Chapters 1 and 2 present an outline of the basic properties of neural networks, their analogous behavior to physiological neural systems, and machine learning. The properties of perceptron-based neural networks are developed in chapters 3 to 5. Chapters 4 and 5 emphasize multilayer perceptron networks, structural optimization, and second-order learning.

Hopfield networks are discussed in chapters 6 (optimization) and 7 (associative memory). Chapters 8 and 9 are on clustering: chapter 8 discusses models and algorithms, and chapter 9 covers a set of topics centered on underutilization in clustering. Chapters 10 and 11 complete the treatment of the neural networks themselves. Chapter 10 presents networks based on radial basis functions, and chapter 11 returns to multilayer perceptrons in its discussion of recurrent neural networks.

The book then shifts to statistical techniques in chapters 12 through 20. The statistical techniques are presented in the context of their use with neural networks. The topic of chapter 12 is principal component analysis (PCA). The subject of chapter 13 is nonnegative matrix factorization, unsupervised learning for factoring a matrix into two matrices where both have nonnegative elements. Independent component analysis (ICA) for extracting latent components from observations is covered in chapter 14. Chapter 15 continues with linear discriminant analysis (LDA) for statistical pattern recognition. Kernel methods are treated in chapters 16 and 17, with support vector machines (SVMs) the principal topic in chapter 16. Probabilistic models incorporating Markov methods connect chapters 18, on reinforcement learning, and 19, on Bayesian learning. Chapter 20 discusses combining multiple learning strategies.
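The nonnegative matrix factorization the authors cover in chapter 13 can be illustrated concretely. The following is a minimal sketch of my own (not drawn from the book's MATLAB examples), using the classic multiplicative update rules to factor a nonnegative matrix V into nonnegative factors W and H:

```python
# Minimal NMF sketch: approximate a nonnegative matrix V by W @ H,
# with both factors nonnegative, via multiplicative update rules.
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((6, 5))           # data matrix, entries in [0, 1)
rank = 2
W = rng.random((6, rank)) + 0.1  # nonnegative initial factors
H = rng.random((rank, 5)) + 0.1
eps = 1e-9                       # guards against division by zero

err0 = np.linalg.norm(V - W @ H)  # initial reconstruction error
for _ in range(200):
    # Each elementwise update keeps the factors nonnegative and does
    # not increase the Frobenius reconstruction error.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
err = np.linalg.norm(V - W @ H)   # well below err0 after convergence
```

Because the updates only multiply by nonnegative ratios, W and H stay nonnegative throughout, which is what distinguishes NMF from an unconstrained low-rank factorization such as truncated SVD.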

Chapters 21 and 22 discuss the principles of fuzzy sets and fuzzy logic and their application in neuro-fuzzy systems. Chapters 23 to 25 are on applications: parallel implementation of neural networks in hardware, biometrics (for example, face recognition), and data mining on the web using data described in the Extensible Markup Language (XML). The first appendix covers mathematical topics, and the second lists databases of sample material for study.

The encyclopedic character of this book has the benefit of pulling together a wealth of information in one organized place. On the other hand, it also means that readers must have superb library resources at their disposal. Despite its length of over 800 pages, there is simply not enough space in the text to develop any topic in suitable detail, especially since the book is being marketed as a textbook.

There are problems at the end of each chapter, but there is not enough development or explanation in the narrative of the chapters to allow students to develop solutions. There is enough material here for several substantial books, especially if they are to be used as textbooks for students.

Another problem is the availability of the MATLAB code for the examples in the text. When I tried to access the code from the Springer website, I clicked the button for supplementary material and Microsoft OneDrive appeared with the message: “This item might not exist or is no longer available.” In conclusion, this book is a valiant attempt. It would have a place in a library as a secondary reference.

Reviewer: Anthony J. Duben. Review #: CR142594 (1411-0941)
Learning (I.2.6)
Neural Nets (I.5.1 ...)
Self-Modifying Machines (F.1.1 ...)
Other reviews under "Learning":
- Learning in parallel networks: simulating learning in a probabilistic system. Hinton G. (ed), BYTE 10(4): 265-273, 1985. Type: Article (Nov 1 1985)
- Macro-operators: a weak method for learning. Korf R., Artificial Intelligence 26(1): 35-77, 1985. Type: Article (Feb 1 1986)
- Inferring (mal) rules from pupils’ protocols. Sleeman D., Progress in artificial intelligence, Orsay, France, 1985. Type: Proceedings (Dec 1 1985)

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2024 ThinkLoud®