Computing Reviews
A survey on deep learning: algorithms, techniques, and applications
Pouyanfar S., Sadiq S., Yan Y., Tian H., Tao Y., Reyes M., Shyu M., Chen S., Iyengar S.  ACM Computing Surveys 51 (5): 1-36, 2018. Type: Article
Date Reviewed: Oct 16 2020

Deep learning (DL) algorithms, characterized by mapping from input to output (labels or classes) with multiple hidden layers in between, have revived the excitement of artificial intelligence (AI) and brought it closer to its initial vision of building intelligent machines. The major DL network configurations include recursive neural networks (RvNNs), recurrent neural networks (RNNs), convolutional neural networks (CNNs), and deep generative networks such as deep belief networks (DBNs), deep Boltzmann machines (DBMs), generative adversarial networks (GANs), and variational autoencoders (VAEs).
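The core idea of mapping an input to an output through multiple hidden layers can be sketched in a few lines of plain Python; the weights, layer sizes, and ReLU activation below are illustrative choices, not taken from the survey:

```python
def dense(x, weights, biases, activation=None):
    """One fully connected layer: y = activation(W.x + b)."""
    y = [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
         for row, b in zip(weights, biases)]
    if activation is not None:
        y = [activation(v) for v in y]
    return y

def relu(v):
    """Rectified linear unit, a common hidden-layer activation."""
    return max(0.0, v)

def forward(x, layers):
    """Pass the input through each layer in turn (a deep feedforward net)."""
    for weights, biases, activation in layers:
        x = dense(x, weights, biases, activation)
    return x

# One hidden layer plus an output layer, mapping a 2-dim input to one score.
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0], relu),   # hidden layer
    ([[1.0, 1.0]],              [0.0],      None),   # output layer
]
print(forward([2.0, 1.0], layers))  # [2.5]
```

Real DL systems differ in scale and in how the weights are learned (backpropagation with gradient descent), but every configuration listed above builds on this layered-mapping idea.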

Many variations of RNNs and CNNs have been applied to natural language processing (NLP) tasks such as sentiment analysis, translation, paraphrase identification, text summarization, and question answering (QA). CNNs have been dominant in visual data processing tasks such as image classification, resulting in many variations, for example, LeNet-5, VGGNet, GoogLeNet, ResNet, ResNeXt, AlexNet, and so on, with varying numbers of inner network layers to enhance accuracy levels. In addition, CNN variations such as R-CNN, Fast R-CNN, and “you only look once” (YOLO) are used for object recognition in images. For autonomous driving and medical applications, fully convolutional networks (FCNs) or Mask R-CNN are used for semantic segmentation tasks to achieve pixel-level understanding of images, while recurrent convolutional networks (RCNs) show better performance for video processing.
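The building block shared by all of these CNN variants is the convolution (strictly, cross-correlation) of an image with a small learned kernel. A minimal sketch, with a hand-picked vertical-edge kernel standing in for a learned filter:

```python
def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image and
    take a weighted sum at each position (the core CNN layer operation)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = []
    for i in range(oh):
        row = []
        for j in range(ow):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A bright left column; the kernel responds strongly to the vertical edge.
image = [[9, 0, 0],
         [9, 0, 0],
         [9, 0, 0]]
kernel = [[1, -1],
          [1, -1]]
print(conv2d(image, kernel))  # [[18, 0], [18, 0]]
```

Architectures like VGGNet or ResNet stack hundreds of such filtered maps with nonlinearities, pooling, and (in ResNet's case) skip connections; the variants differ mainly in how these layers are arranged.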

Automatic speech recognition (ASR) technologies that apply to speech recognition (or speech transcription), speech enhancement, and phone and music classification tasks have advanced through DBNs, deep RNNs, deep LSTMs, and hybrid models combining RNNs and CNNs. Applications include speech sentiment analysis and speech enhancement.
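What makes RNNs and LSTMs suitable for speech is that they carry a hidden state across time steps, so each acoustic frame is interpreted in the context of the frames before it. A minimal scalar-state sketch (the weights and tanh update are illustrative; an LSTM adds gated memory on top of this idea):

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=1.0, b=0.0):
    """One recurrent update: new state = tanh(w_h*h + w_x*x + b)."""
    return math.tanh(w_h * h + w_x * x + b)

def run_rnn(sequence):
    """Fold a sequence (e.g. acoustic frames) into a final hidden state."""
    h = 0.0
    for x in sequence:
        h = rnn_step(h, x)
    return h

print(run_rnn([0.2, 0.4, 0.1]))
```

Because the same weights are reused at every step, the network handles sequences of any length; the gating in LSTMs exists to keep such state useful over long sequences, where a plain tanh recurrence tends to forget.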

Although DNNs “improve the learning performance, broaden the scopes of applications, and simplify the calculation process,” they require extremely long training times and are sensitive to training data size and model parameters. “To simplify the implementation process and boost the system-level development,” efforts have focused on advanced techniques and frameworks that “combine the implementation of modularized DL algorithms, optimization techniques, distribution techniques, and support to infrastructures.” These frameworks include TensorFlow, Theano, MXNet, Torch, Caffe, DL4j, CNTK, and Neon.
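The “modularized DL algorithms” these frameworks provide amount to treating each layer as a composable module. A toy pure-Python sketch of that design (the `Scale`, `Shift`, and `Sequential` classes are invented for illustration and mimic, very loosely, the layer-chaining style of frameworks like Torch or TensorFlow's Keras API):

```python
class Scale:
    """A toy 'layer' module: multiplies its input by a fixed factor."""
    def __init__(self, factor):
        self.factor = factor
    def __call__(self, x):
        return x * self.factor

class Shift:
    """Another toy module: adds a constant offset."""
    def __init__(self, offset):
        self.offset = offset
    def __call__(self, x):
        return x + self.offset

class Sequential:
    """Chains modules in order, the composition pattern DL frameworks use."""
    def __init__(self, *modules):
        self.modules = modules
    def __call__(self, x):
        for module in self.modules:
            x = module(x)
        return x

model = Sequential(Scale(2.0), Shift(1.0))
print(model(3.0))  # 7.0
```

The frameworks' real value lies in what this sketch omits: automatic differentiation through the chain, GPU kernels, and the distribution support the survey highlights.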

The survey serves as a good summary for AI model developers who are considering DNN solutions for given tasks in different domains. However, it remains uncertain whether these trained DNNs can be reused beyond the tasks for which they were trained. What should be the ultimate goal of DNN research and development? Is improving accuracy and reducing errors through experiments and competition the only goal? One goal should be a catalog of trained DL networks that can be reused for similar tasks, regardless of the domain, as long as the datasets are of the same type. So far, the bulk of problem solving in DNNs is devoted to finding the proper hyperparameters and fine-tuning the networks (that is, optimizing the number of layers, reducing errors, and shortening the training time), but there is no good guide other than trial-and-error experiments.
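That trial-and-error tuning is, in practice, often a brute-force search over hyperparameter combinations. A minimal grid-search sketch, where `validation_error` is a hypothetical stand-in for an actual train-and-evaluate run:

```python
import itertools

def validation_error(num_layers, learning_rate):
    """Hypothetical scoring rule standing in for training a model and
    measuring its error on a validation set."""
    return abs(num_layers - 3) * 0.1 + abs(learning_rate - 0.01) * 5.0

def grid_search(layer_options, lr_options):
    """Try every combination and keep the lowest-error configuration."""
    best = None
    for num_layers, lr in itertools.product(layer_options, lr_options):
        err = validation_error(num_layers, lr)
        if best is None or err < best[0]:
            best = (err, num_layers, lr)
    return best

print(grid_search([2, 3, 4], [0.1, 0.01, 0.001]))  # (0.0, 3, 0.01)
```

The sketch also makes the reviewer's complaint concrete: each grid cell costs a full training run, and nothing but exhaustive (or randomized) search tells the developer which cell to try.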

Reviewer:  Soon Ae Chun Review #: CR147084
Learning (I.2.6 )
Algorithms (I.5.3 ... )
Ecs (C.2.0 ... )
Parallel Algorithms (G.1.0 ... )
Theory Of Computation (F )
Other reviews under "Learning": Date
Variational Bayesian learning theory
Nakajima S., Watanabe K., Sugiyama M.,  Cambridge University Press, New York, NY, 2019. 558 pp. Type: Book (978-1-107076-15-0)
Sep 10 2020
Adversarial machine learning
Joseph A., Nelson B., Rubinstein B., Tygar J.,  Cambridge University Press, New York, NY, 2019. 338 pp. Type: Book (978-1-107043-46-6)
Sep 8 2020
Explainable AI: interpreting, explaining and visualizing deep learning
Samek W., Montavon G., Vedaldi A., Hansen L., Muller K.,  Springer International Publishing, New York, NY, 2019. 452 pp. Type: Book (978-3-030289-53-9)
Jul 24 2020
