Computing Reviews

Limit theorems for simulation-based optimization via random search
Chia Y., Glynn P. ACM Transactions on Modeling and Computer Simulation 23(3): 1-18, 2013. Type: Article
Date Reviewed: 10/08/13

It is important to know whether a random search for an optimal solution converges. Furthermore, if it does converge, it is important to be able to determine the rate of convergence and the limiting distribution of the error.

A broad range of applications can be modeled as finding the optimal solution of an objective function α(θ), where θ ∈ Λ ⊆ R^d. However, for many real-world applications, the analytic form of α(θ) is unknown; only a random variable X(θ) whose expectation is α(θ) can be observed. This is due to incomplete information in modeling and possibly some stochastic factors. Therefore, random search has been applied to estimate the optimum in practice. In this paper, n independent simulation replications are assumed. In each of them, the noisy observation X(θ) of α(θ) is evaluated at m sample exploration points θ_1, θ_2, ..., θ_m. Then the maximum over i = 1, ..., m of the sample means (1/n) Σ_{j=1}^{n} X_j(θ_i) yields the empirical estimate of the maximum of α(θ).
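The following is a minimal sketch of this simple (non-adaptive) random-search estimator, assuming a hypothetical noisy simulator simulate(theta) whose expectation is the unknown objective α(θ); the function names and the quadratic test objective are illustrative, not from the paper.

```python
import numpy as np

def simulate(theta, rng):
    # Hypothetical example: alpha(theta) = -theta**2, observed with Gaussian noise.
    return -theta**2 + rng.normal(scale=0.5)

def random_search(m, n, low=-2.0, high=2.0, seed=0):
    """Draw m exploration points, run n independent replications at each,
    and return the point whose sample mean is largest."""
    rng = np.random.default_rng(seed)
    thetas = rng.uniform(low, high, size=m)  # exploration points theta_1, ..., theta_m
    means = np.array([
        np.mean([simulate(t, rng) for _ in range(n)])  # sample mean (1/n) sum_j X_j(theta_i)
        for t in thetas
    ])
    best = np.argmax(means)
    return thetas[best], means[best]

theta_hat, alpha_hat = random_search(m=200, n=50)
print(theta_hat, alpha_hat)  # estimated maximizer and estimated maximum
```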

The authors of this paper present and prove several theorems that answer the questions about random search stated at the beginning of this review. Among these theorems, the most significant is the large-sample limit theory for simple (non-adaptive) simulation-based random search for the optimizer of an objective function. That is, under three general assumptions, the maximum of the sample means (1/n) Σ_{j=1}^{n} X_j(θ_i) converges to the maximum of α(θ) when (log m)/n approaches zero and m·n approaches infinity, as stated in Theorem 2.1(i). The paper also provides insight into the rate of convergence and the limiting distribution of the random error associated with such search methods, as described in Theorem 3.2.
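As a small numerical illustration (my own allocation rule, not one prescribed by the paper): with a total simulation budget N = m·n, splitting it as m ≈ N^(2/3) exploration points and n ≈ N^(1/3) replications drives (log m)/n to zero while m·n grows, so the sufficient condition of Theorem 2.1(i) is met.

```python
import numpy as np

# Illustrative budget split: m = ceil(N^(2/3)), n = ceil(N^(1/3)).
# The printed ratio (log m)/n shrinks as the total budget N grows.
for N in [10**3, 10**4, 10**5, 10**6]:
    m = int(np.ceil(N ** (2 / 3)))
    n = int(np.ceil(N ** (1 / 3)))
    print(f"N={N:>8}  m={m:>6}  n={n:>4}  (log m)/n = {np.log(m) / n:.3f}")
```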

Although this is a theoretical paper requiring a strong mathematics background to understand the details, practitioners in the fields of data mining, machine learning, modeling, and simulation will find it worthwhile reading. The theorems presented provide guidance on how to set the number of exploration points (m) and estimation replications (n) for a simulator, and on how to assess the resulting trade-offs in terms of convergence rate and, more importantly, limiting distribution.

Reviewer:  Chenyi Hu Review #: CR141626 (1312-1117)
