Computing Reviews
Hyperparameter optimization in machine learning
Agrawal T., Apress, New York, NY, 2020. 188 pp. Type: Book (978-1-484265-78-9)
Date Reviewed: Dec 2 2022

The book explores a variety of optimization algorithms, ranging from brute-force approaches like grid search and random search (and their distributed variants), to more sophisticated ones like the Hyperband algorithm. Bayesian optimization, which can learn from prior observations, is discussed next. The book finishes with the captivating subject of automated machine learning (autoML). The transitions between chapters, each debating the shortcomings of one approach and the necessity of the next, are exemplary. The book jumps immediately into the subject and doesn't let up until the very end. The author keeps a firm grasp on the material, moving from a detailed description of what hyperparameter tuning is to effective ways of using it.
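To make the contrast between the brute-force methods concrete, both can be sketched in a few lines of plain Python. This is a toy illustration, not the book's code: a simple analytic function stands in for a real cross-validated model score, and all names are my own.

```python
import itertools
import random

# Toy objective: a stand-in for validation loss as a function of two
# hyperparameters (learning rate and regularization strength).
def objective(lr, reg):
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

def grid_search(lrs, regs):
    # Brute force: evaluate every combination on a fixed grid.
    return min(itertools.product(lrs, regs),
               key=lambda p: objective(*p))

def random_search(n_trials, seed=0):
    # Sample hyperparameter settings at random instead of exhaustively;
    # often finds a good region with far fewer evaluations.
    rng = random.Random(seed)
    trials = [(rng.uniform(0.0, 1.0), rng.uniform(0.0, 0.1))
              for _ in range(n_trials)]
    return min(trials, key=lambda p: objective(*p))

best_grid = grid_search([0.01, 0.1, 0.5], [0.001, 0.01, 0.1])
best_rand = random_search(50)
print(best_grid)  # (0.1, 0.01) -- the grid point closest to the optimum
```

Grid search cost grows multiplicatively with each added hyperparameter, which is the shortcoming that motivates random search and, later, the adaptive methods the book covers.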

Unlike previous books on hyperparameter optimization, this one includes a thorough step-by-step implementation without diverging into a detailed discussion of the underlying concepts; this is also one of its drawbacks. The book fails to explain several topics, including Gaussian processes and random forest surrogate functions, which hampers the reading experience. The tree-structured Parzen estimator (TPE), however, is presented quite effectively. Other probability distributions are noticeably missing, including the Poisson, Bernoulli, geometric, and negative binomial distributions.
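The TPE idea singled out above can be summarized in a short sketch: split past trials into a "good" fraction and the rest, fit a density to each group, and propose the candidate where the ratio of good density to bad density is highest. The following is a simplified one-dimensional illustration in plain Python (my own toy code, not the book's implementation; the kernel bandwidth and sampling scheme are arbitrary choices).

```python
import math
import random

def kde(points, bandwidth=0.1):
    # 1-D Gaussian kernel density estimate over observed points.
    def density(x):
        norm = len(points) * bandwidth * math.sqrt(2 * math.pi)
        return sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2)
                   for p in points) / norm
    return density

def tpe_suggest(history, gamma=0.25, n_candidates=64, rng=None):
    # history: list of (x, loss) pairs already evaluated.
    rng = rng or random.Random(0)
    history = sorted(history, key=lambda t: t[1])
    n_good = max(1, int(gamma * len(history)))
    good = [x for x, _ in history[:n_good]]  # l(x): density of good trials
    bad = [x for x, _ in history[n_good:]]   # g(x): density of the rest
    l, g = kde(good), kde(bad)
    # Sample candidates near the good points and pick the one that
    # maximizes l(x)/g(x), i.e. most "good-like" relative to "bad-like".
    cands = [rng.gauss(rng.choice(good), 0.1) for _ in range(n_candidates)]
    return max(cands, key=lambda x: l(x) / (g(x) + 1e-12))

# Toy run: minimize (x - 0.3)^2, starting from 10 random trials.
def loss(x):
    return (x - 0.3) ** 2

rng = random.Random(1)
hist = [(x, loss(x)) for x in (rng.uniform(0, 1) for _ in range(10))]
for _ in range(20):
    x = tpe_suggest(hist, rng=rng)
    hist.append((x, loss(x)))
print(round(min(hist, key=lambda t: t[1])[0], 2))  # best x found so far
```

Real TPE (as in the Hyperopt library) uses adaptive Parzen estimators over structured search spaces, but the good/bad density-ratio mechanism is the same.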

A prerequisite for comprehending the presented material is a solid understanding of machine learning and the Python programming language. The book will therefore be most useful to scholars and professionals working on machine learning models. Readers looking for hands-on help improving the performance of their models are the best fit; conceptual derivations should not be expected.

Reviewer:  Niraj Singh Review #: CR147518 (2302-0016)
Learning (I.2.6)
 
Other reviews under "Learning":
Learning in parallel networks: simulating learning in a probabilistic system
Hinton G. (ed) BYTE 10(4): 265-273, 1985. Type: Article
Nov 1 1985
Macro-operators: a weak method for learning
Korf R. Artificial Intelligence 26(1): 35-77, 1985. Type: Article
Feb 1 1986
Inferring (mal) rules from pupils’ protocols
Sleeman D. Progress in artificial intelligence, Orsay, France, 1985. Type: Proceedings
Dec 1 1985

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2024 ThinkLoud®