Computing Reviews
Deep learning neural networks: design and case studies
Graupe D., World Scientific Publishing Co, Inc., River Edge, NJ, 2016. 200 pp. Type: Book
Date Reviewed: Apr 26 2017

Deep learning is a field of study that is gaining much attention in both academia and industry; hence, textbooks and monographs on this subject are increasingly in demand. At first glance, this book by D. Graupe seems well suited for the reader seeking a brief treatment of this complex topic, with a special focus on design issues and case studies; the content of the book, however, is not as comprehensive as the title promises.

A look at publication statistics on deep learning yields impressive results: the number of publications in this field increased ten-fold in ten years (according to Scopus), growing exponentially from year to year. The reason for this success is simple: deep learning has become the go-to solution for a broad range of problems that, only a few years ago, could have been solved by human brains alone. The application potential of deep learning is immense, and big IT companies are already giving us a taste of mass-consumer artificial intelligence: semantic multimedia search, accurate speech recognition, natural language processing, and assistive technologies for our cars, homes, and appliances. Without any doubt, this technological explosion calls for new experts who are eager to acquire actionable knowledge in deep learning.

Deep learning is a class of machine learning algorithms mostly based on the principles of artificial neural networks. Fittingly, Graupe's book starts with some basic concepts of neural networks, with special emphasis on back-propagation, a popular learning algorithm that adapts the parameters of a neural network in order to tune it toward a desired behavior. These introductory chapters are certainly not intended to provide a comprehensive foundation in these topics; however, they are objectively too shallow (and rich in typographical errors). The reader is strongly advised to build her own background in artificial neural networks from textbooks that offer a systematic, precise, and clear explanation of neural architectures and learning schemes.
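To give the uninitiated reader a flavor of what back-propagation does, here is a minimal sketch of mine (not code from the book): a tiny 2-2-1 network trained on the XOR task. The network size, learning rate, and iteration count are illustrative assumptions chosen only for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: XOR, a classic task that a single neuron cannot solve.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Randomly initialized weights for a 2-2-1 network (an arbitrary choice).
    W1 = rng.normal(size=(2, 2)); b1 = np.zeros((1, 2))
    W2 = rng.normal(size=(2, 1)); b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0  # learning rate, chosen for illustration
    for _ in range(5000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)      # hidden activations
        out = sigmoid(h @ W2 + b2)    # network output

        # Backward pass: propagate the error toward the input,
        # applying the chain rule layer by layer.
        d_out = (out - y) * out * (1 - out)   # gradient at the output layer
        d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

        # Gradient-descent parameter updates.
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(out.round(2))  # with a favorable initialization, approaches [0, 1, 1, 0]

Deep architectures apply this same chain-rule machinery across many more layers, which is where the training difficulties the field has wrestled with arise.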

After this preamble, the book eventually gets to the core of the topic by presenting some well-known deep learning architectures, such as convolutional neural networks, deep Boltzmann machines, and large memory storage and retrieval (LAMSTAR). Well, LAMSTAR is not so well known, but it is one of Graupe's own creations and is described in depth in this book. Unfortunately, the level of detail devoted to LAMSTAR (more than 40 pages) is not matched by the description of the other, more widely known, deep neural networks (20 pages overall, which is not sufficient even for a fair overview of these architectures). This makes the book very specific to LAMSTAR; alas, the title does not reflect this! Also, a substantial chapter is devoted entirely to comparing LAMSTAR with other deep learning architectures. The chapter is structured as a report, with a rather repetitive structure and a final discussion: useful for preparing experiments, but not really engaging. Finally, the book ends with almost 100 pages (!) of source code (in different programming languages) showing the solutions given by some of Graupe's students to the case studies reported in the previous chapter. Honestly, I could not help skipping that section.

To conclude, the book is an interesting option if you need a detailed reference to the LAMSTAR deep learning architecture. For a more general treatment of deep learning, however, current literature offers good alternatives with broader coverage and a more rigorous approach.


Reviewer: Corrado Mencar. Review #: CR145221 (1707-0432)
Categories: Learning (I.2.6); Self-Modifying Machines (F.1.1 ...)
Other reviews under "Learning":
Learning in parallel networks: simulating learning in a probabilistic system. Hinton G. (ed), BYTE 10(4): 265-273, 1985. Type: Article. Reviewed: Nov 1 1985
Macro-operators: a weak method for learning. Korf R., Artificial Intelligence 26(1): 35-77, 1985. Type: Article. Reviewed: Feb 1 1986
Inferring (mal) rules from pupils' protocols. Sleeman D., Progress in artificial intelligence, Orsay, France, 1985. Type: Proceedings. Reviewed: Dec 1 1985
