Computing Reviews
Machine learning with TensorFlow
Shukla N., Manning Publications Co., Greenwich, CT, 2018. 272 pp. Type: Book (978-1-617293-87-0)
Date Reviewed: May 23 2019

Machine learning with TensorFlow is a short book laid out in three parts. The first part (about 50 pages) gets the reader up to speed with today’s artificial intelligence (AI) and machine learning community. It also covers TensorFlow essentials, from introducing tensors to visualizing data with the TensorBoard tool. The book assumes intermediate to advanced knowledge of the Python programming language, a prerequisite for getting the most out of the material.
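The book introduces tensors using TensorFlow itself; as a rough stand-in (my own sketch, not a listing from the book), the core idea of a tensor's rank and shape can be illustrated with NumPy:

```python
import numpy as np

# Tensors generalize scalars, vectors, and matrices: a rank-0 tensor
# holds a single number, a rank-1 tensor a list, a rank-2 tensor a grid.
scalar = np.array(5.0)                      # rank 0
vector = np.array([1.0, 2.0, 3.0])          # rank 1
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])             # rank 2

print(scalar.ndim, vector.ndim, matrix.ndim)  # prints: 0 1 2
print(matrix.shape)                           # prints: (2, 2)
```

The same rank/shape vocabulary carries over directly to TensorFlow tensors, which is why the book spends its opening pages on it.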

The second part (about 80 pages) turns its attention to traditional learning algorithms like linear regression, classification, clustering, and hidden Markov models (HMMs).

The third and longest part of the book (100-plus pages) delves into neural networks by getting the reader to understand four neural network algorithms: autoencoders, reinforcement learning, convolutional neural networks (CNNs), and recurrent neural networks (RNNs). The last topic (RNNs) is further explored by constructing sequence-to-sequence models in natural language processing (NLP) for chatbots.

Throughout the book, Python is used to manipulate the tensors to conform to the specific algorithm under study. The mathematics is kept to a minimum, which may be a boon for someone itching to get the code running as quickly as possible. Because of this, however, the book is best used as supplementary material in a machine learning class, where theoretical aspects can be coupled with the practical work of quickly getting the book’s algorithms running in code.

By far the best part of the book is the manner in which each learning algorithm is introduced; the author does a marvelous job of demonstrating how an algorithm works before going into the details. This is most apparent in the chapters on autoencoders, reinforcement learning, and CNNs/RNNs. To balance things out, however, I should note that the introductory material on long short-term memory (LSTM) networks is rather sketchy and not as detailed as the rest.

A final reason I liked the book as a practical approach to machine learning: the author makes explicit certain aspects that normally stay hidden when more abstract libraries are used. A good example is encountered right at the beginning of the book, when the author defines a cost function for a simple linear regression model. This cost function is subsequently minimized during training using the TensorFlow gradient descent optimizer. Watching this happen explicitly in code gives readers a deeper appreciation for how cost functions work during training. With a higher-level library such as scikit-learn (sklearn), by contrast, this cost function magic happens in the background, invisible to the reader. The explicit approach is preferable, especially when learning material as complex as a low-level machine learning library.
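To make the point concrete, here is a minimal pure-Python sketch (my own, not the book’s TensorFlow listing) of the loop the author exposes: a mean-squared-error cost for a one-parameter model y = w·x, minimized by hand-written gradient descent updates of the kind TensorFlow’s optimizer performs internally:

```python
def train(xs, ys, lr=0.01, epochs=1000):
    """Fit y = w * x by explicit gradient descent on the MSE cost."""
    w = 0.0
    n = len(xs)
    for _ in range(epochs):
        # Cost: J(w) = (1/n) * sum((w*x - y)^2)
        # Gradient: dJ/dw = (2/n) * sum((w*x - y) * x)
        grad = (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad  # the update step an optimizer would apply for us
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated by y = 2x
print(round(train(xs, ys), 3))  # prints 2.0
```

Nothing here is hidden: the cost, its gradient, and the update rule are all visible, which is exactly the pedagogical benefit the review describes.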


Reviewer: Vijay Gurbani | Review #: CR146579 (1908-0304)
Categories: Learning (I.2.6); Python (D.3.2 ...)
Other reviews under "Learning":
Learning in parallel networks: simulating learning in a probabilistic system
Hinton G. (ed) BYTE 10(4): 265-273, 1985. Type: Article
Nov 1 1985
Macro-operators: a weak method for learning
Korf R. Artificial Intelligence 26(1): 35-77, 1985. Type: Article
Feb 1 1986
Inferring (mal) rules from pupils’ protocols
Sleeman D. Progress in artificial intelligence, Orsay, France, 1985. Type: Proceedings
Dec 1 1985
