Computing Reviews

Deep learning: methods and applications
Deng L., Yu D. Foundations and Trends in Signal Processing 7(3-4): 197-387, 2014. Type: Article
Date Reviewed: 11/13/15

Deep learning encompasses “machine learning techniques, where many layers of information processing stages ... are exploited for unsupervised feature learning and for pattern analysis/classification” (p. 217). The resulting hierarchical architectures are inspired by how our brains are thought to work [1]. They have led to the resurgence of neural networks within the artificial intelligence (AI) community since the 2006 breakthroughs led by three renowned researchers [2,3,4].
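
To make the "layers of information processing stages" idea concrete, here is a minimal NumPy sketch of a forward pass through a stack of nonlinear layers (my own illustration, not from the book; the layer sizes and the tanh nonlinearity are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, biases):
    """Pass a batch of inputs through a stack of nonlinear layers."""
    h = x
    for W, b in zip(weights, biases):
        h = np.tanh(h @ W + b)  # one "information processing stage"
    return h

# Three stacked stages: 64 raw inputs -> 32 -> 16 -> 8 features.
sizes = [64, 32, 16, 8]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.normal(size=(5, 64))          # a batch of 5 raw input vectors
features = forward(x, weights, biases)
print(features.shape)                 # (5, 8)
```

Each stage re-represents the output of the previous one, which is the sense in which deeper layers compute increasingly abstract features of the raw signal.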

Two researchers from Microsoft Research, Li Deng and Dong Yu, have written a detailed, yet not self-contained, survey of the state of the art in deep learning, with a special focus on the applications where deep learning is currently at the cutting edge. After a brief, conventional introduction, they summarize the historical development of deep learning techniques from their roots in feed-forward neural networks and back-propagation, the algorithm used since the 1980s to learn the parameters of such networks. They also draw a parallel between the history of neural networks and Gartner’s Hype Cycle, which represents the prototypical stages of development, maturity, and adoption of a technology. In Gartner’s consulting jargon, the authors argue, deep learning put artificial neural networks (ANNs) on the “slope of enlightenment” after the “trough of disillusionment” caused by the limitations of back-propagation, on their way to a “plateau of productivity” that, in their opinion, has not yet been reached.
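
Since back-propagation figures so prominently in that history, a minimal sketch of the algorithm may help. This is my own NumPy illustration on a one-hidden-layer regression network with a toy sine-fitting task, not an example from the book; all sizes and rates are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: fit y = sin(x) on [-2, 2].
x = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(x)

# One hidden layer of 16 tanh units.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    loss = 0.5 * np.mean((y_hat - y) ** 2)

    # Backward pass: propagate the error derivative layer by layer.
    dy = (y_hat - y) / len(x)
    dW2 = h.T @ dy
    db2 = dy.sum(axis=0)
    dpre = (dy @ W2.T) * (1 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ dpre
    db1 = dpre.sum(axis=0)

    # Gradient-descent update of all parameters.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.5f}")
```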

Deng and Yu survey different classes of deep learning neural networks before delving into the specific details of the applications where deep learning has started to yield fruitful results. They categorize existing proposals into three broad classes: deep networks for unsupervised or generative learning; deep networks for supervised learning; and hybrid deep networks, which combine supervised and unsupervised components. In just 16 pages, they provide an excellent overview of the field, albeit one fully understandable only by those already acquainted with recent developments in neural networks. This survey chapter is followed by three separate chapters in which representative instances of each class are analyzed in more detail, namely, deep autoencoders, pre-trained deep neural networks, and deep stacking networks. From a pedagogical point of view, however, placing a detailed academic survey before any representative instance is shown in full detail seems somewhat counterproductive, especially for those readers who might benefit the most from such a state-of-the-art survey: graduate students.
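
As a concrete picture of the first of those representative instances, the following is a minimal, untrained deep autoencoder in NumPy (an illustrative sketch under my own assumptions about layer sizes and nonlinearities, not the book's formulation). The encoder compresses the input to a low-dimensional code and the decoder reconstructs it; training, for example, the layer-wise pre-training followed by fine-tuning that the book discusses, would minimize the reconstruction error printed at the end:

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric encoder/decoder: 64 -> 32 -> 8 (code) -> 32 -> 64.
sizes = [64, 32, 8]
enc = [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
       for m, n in zip(sizes, sizes[1:])]
dec_sizes = sizes[::-1]                # mirror the encoder's shape
dec = [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
       for m, n in zip(dec_sizes, dec_sizes[1:])]

def autoencode(x):
    h = x
    for W, b in enc:                   # compress to an 8-d code
        h = np.tanh(h @ W + b)
    code = h
    for W, b in dec:                   # reconstruct the 64-d input
        h = np.tanh(h @ W + b)
    return code, h

x = rng.uniform(-1, 1, size=(10, 64))  # a batch of 10 input vectors
code, x_hat = autoencode(x)
print(code.shape, "reconstruction MSE:", np.mean((x - x_hat) ** 2))
```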

The second half of the monograph focuses on selected applications of deep networks to real-world problems. Given the authors’ research background, it is not surprising that they start with speech recognition, a key area where deep learning has succeeded. They also include specific chapters on using deep networks for language modeling in natural language processing; information retrieval (semantic hashing and deep-structured semantic modeling); object recognition in computer vision (another outstanding application of deep networks); and, finally, multimodal and multi-task learning (for example, combining modalities such as text and images, or speech and images, and cross-lingual speech recognition).

This survey gives an excellent walk-through of the most recent developments within the ANN community, a subfield that is currently in the spotlight given recent successes in key application areas (speech recognition and computer vision, in particular). On the plus side, it provides a wealth of information and puts hundreds of bibliographic references in context, which is certainly valuable for researchers in the field. On the minus side, it is written as an academic survey and it is not self-contained, so it might not fit the bill for those interested in starting their research on deep learning. Unfortunately, although the authors have put a lot of effort into describing what works and what doesn’t, including specific details of successful network configurations for many different applications, ANN design (that is, how to adjust ANN hyper-parameters) remains an art, without ready-made recipes and mostly inscrutable for the layman.
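
To give a flavor of that trial-and-error process, here is a minimal hyper-parameter sweep using scikit-learn (my own addition, with arbitrarily chosen search values on a toy task; the book prescribes no such recipe):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(300, 1))
y = np.sin(X).ravel()

# The "art": theory does not pick these values; we simply try them.
param_grid = {
    "hidden_layer_sizes": [(8,), (32,), (32, 32)],
    "learning_rate_init": [0.001, 0.01],
    "alpha": [1e-4, 1e-2],             # L2 regularization strength
}
search = GridSearchCV(MLPRegressor(max_iter=2000, random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

Nothing in the grid is principled; which combination wins depends entirely on the data at hand, which is precisely the point: there is no ready-made recipe.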


1) Ballard, D. H. Brain computation as hierarchical abstraction. MIT Press, Cambridge, MA, 2015.

2) Hinton, G.; Osindero, S.; Teh, Y.-W. A fast learning algorithm for deep belief nets. Neural Computation 18, 7 (2006), 1527-1554.

Reviewer: Fernando Berzal
Review #: CR143948 (1602-0143)
