Computing Reviews
Comparison of group screening strategies for factorial experiments
Dean A., Lewis S. Computational Statistics & Data Analysis 39(3): 287-297, 2002. Type: Article
Date Reviewed: Jun 18 2003

An important aim of experimentation is to identify design factors and settings that achieve a required mean performance, while minimizing performance variability due to varying noise factors. An initial screening experiment must then be performed to identify factors of substantive influence for further investigation.

A simulation tool for comparing two-stage group screening strategies is presented in this paper. The work is interesting and will be useful for experimenters, since the simulation software can be used to investigate the consequences of using different alternatives; a screening strategy can then be adopted that is satisfactory in terms of the proportion of errors made and the number of active factorial effects. Users of the tool can also make informed trade-offs between the number of effects missed and the total amount of experimentation required when deciding on a strategy.

The group screening strategies studied in the paper are of the conventional type, which considers only main effects at the first stage of the experiment; the possibility of important interactions is ignored, and factors are sent forward to stage two if their main effects are found to be active. An alternative strategy, which screens on two-factor interactions as well as main effects at stage two, is also considered. This alternative strategy is discussed by the authors in a technical report, in press, from the University of Southampton (where one of the authors is based). It is a pity that this discussion is not accessible through a more readily available medium.
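To make the classical strategy concrete, the following sketch simulates one replicate of two-stage, main-effects-only group screening. It is a minimal illustration written for this review, not the authors' software; the group size, effect distribution, noise level, decision threshold, and run-count bookkeeping are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def simulate_classical_screening(n_factors=24, group_size=4, p_active=0.1,
                                 active_mean=2.0, noise_sd=1.0, threshold=1.0):
    # True main effects: a small fraction of factors are active (hypothetical mixture model).
    active = rng.random(n_factors) < p_active
    effects = np.where(active, rng.normal(active_mean, 0.5, n_factors), 0.0)

    # Stage 1: factors are pooled into groups; only the grouped main effect
    # (sum of the group's individual effects, plus noise) is observed.
    groups = np.arange(n_factors).reshape(-1, group_size)
    runs_stage1 = len(groups) + 1  # crude run count for a minimal stage-1 design
    group_estimate = effects[groups].sum(axis=1) + rng.normal(0, noise_sd, len(groups))
    forwarded = groups[np.abs(group_estimate) > threshold].ravel()

    # Stage 2: individual main effects of the forwarded factors are estimated;
    # interactions are ignored, as in the classical strategy.
    runs_stage2 = len(forwarded) + 1
    estimates = effects[forwarded] + rng.normal(0, noise_sd, len(forwarded))
    declared = set(forwarded[np.abs(estimates) > threshold])

    missed = sum(1 for i in np.flatnonzero(active) if i not in declared)
    return {"runs": runs_stage1 + runs_stage2,
            "n_active": int(active.sum()),
            "n_missed": missed}

reps = [simulate_classical_screening() for _ in range(1000)]
print("average total runs:", np.mean([r["runs"] for r in reps]))
print("average proportion of active effects missed:",
      np.mean([r["n_missed"] / max(r["n_active"], 1) for r in reps]))

Repeating such replicates under different settings is, in spirit, the kind of comparison the authors' tool automates; the numbers above depend entirely on the assumed settings.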

This paper first presents some details of the simulation software, and then gives an example of how the software can be used to compare the interaction and classical group screening strategies for an experiment. At the end of the paper, the authors assert that, in all the examples they have examined (except for one kind of case), the classical screening strategy does not perform as well as interaction screening under the criteria of minimizing the proportions of active main effects and of active interactions that are incorrectly screened out at stage one. The authors believe that the classical main effect screening strategy requires many fewer observations, on average, than their interaction screening strategy precisely because of the number of active effects it misses.
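The performance gap described here comes from the stage-one forwarding rule. The fragment below, again written for this review rather than taken from the paper, contrasts the two rules on one set of simulated stage-one estimates; the effect sizes and the threshold are illustrative assumptions.

import itertools
import numpy as np

rng = np.random.default_rng(1)
n_groups, threshold = 6, 1.0

# Hypothetical stage-1 estimates of grouped main effects and grouped
# two-factor interactions (in a real experiment these come from the design).
group_main = rng.normal(0, 1, n_groups)
group_int = {pair: rng.normal(0, 1)
             for pair in itertools.combinations(range(n_groups), 2)}

# Classical rule: a group is forwarded only if its main effect looks active.
classical = {g for g in range(n_groups) if abs(group_main[g]) > threshold}

# Interaction rule: additionally forward both groups of any grouped
# interaction that looks active, so fewer active interactions are screened out.
interaction = set(classical)
for (g, h), estimate in group_int.items():
    if abs(estimate) > threshold:
        interaction.update((g, h))

print("groups forwarded under classical screening:  ", sorted(classical))
print("groups forwarded under interaction screening:", sorted(interaction))

Forwarding more groups is also what drives up the average number of observations at stage two, which is the trade-off the authors quantify.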

Given the paper's title, the reader might expect a thorough discussion of the best strategy for each case. However, only one paragraph, at the end of the paper, addresses this topic. The main part of the paper focuses on the software for simulating both strategies, so that readers (who are, hopefully, future users of the tool) can make their own comparisons and come to their own conclusions. Perhaps a better title for this paper would have been "A tool (or an algorithm) for comparing group screening strategies."

The tool is available on a Web site hosted by the university with which one of the authors is affiliated. The ten-page paper is long enough for readers to understand the authors' work, and since interested readers can access the tool via the Web, further detail is unnecessary. It would have been interesting, however, if the authors had provided pointers to related work. Only six bibliographic references are given, two of them on the basics of the design and analysis of experiments, and none of them on other screening strategies or similar tools, leaving me to wonder whether this is the only tool of its kind.

Reviewer: Natalia Juristo. Review #: CR127813 (0310-1116)
Experimental Design (G.3 ... )
Stochastic Processes (G.3 ... )
Other reviews under "Experimental Design":
Weighted tests of homogeneity for testing the number of components in a mixture. Susko E. Computational Statistics & Data Analysis 41(3-4): 367-378, 2003. Type: Article. Date Reviewed: May 28 2003
A graphical method for evaluating slope-rotatability in axial directions for second order response surface designs. Jang D. Computational Statistics & Data Analysis 39(3): 343-349, 2002. Type: Article. Date Reviewed: Jun 12 2003
Diagnostics for conditional heteroscedasticity models: some simulation results. Tsui A. Mathematics and Computers in Simulation 64(1): 113-119, 2004. Type: Article. Date Reviewed: Apr 2 2004
more...
