When conducting a test that requires a large, or even moderately sized, sample, the sample often consists of more than one subpopulation, which can obscure the true result for each subpopulation. A number of tests for homogeneity exist. Part of the motivation for the method described in this paper is that the likelihood ratio statistic for a test of H0: m = m0 against HA: m > m0 (where m denotes the number of component subpopulations) does not satisfy the regularity conditions of large-sample likelihood theory, and so generally does not have a chi-squared distribution. There are ways around this difficulty, for example by simulating the null distribution, but this can be computationally intensive.
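The paper itself does not give this example, but the simulation-based workaround mentioned above can be sketched as a Monte Carlo (parametric-bootstrap) p-value: fit the model under H0, simulate many datasets from that fit, and compare the observed statistic against the simulated ones. The sketch below, with hypothetical function names, uses a simple pure-Python setting — testing homogeneity of Poisson rates across subgroups — purely to show why the approach is computationally intensive (every p-value requires refitting the statistic on hundreds or thousands of simulated datasets):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplication method; adequate for the small rates used here.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def lr_stat(groups):
    """Likelihood-ratio statistic for H0: all groups share one Poisson rate."""
    def group_ll(s, n):
        # Maximized Poisson log-likelihood (constants dropped); 0*log(0) := 0.
        lam = s / n
        return s * math.log(lam) - n * lam if s > 0 else 0.0
    total = sum(sum(g) for g in groups)
    n_total = sum(len(g) for g in groups)
    ll_full = sum(group_ll(sum(g), len(g)) for g in groups)
    ll_pooled = group_ll(total, n_total)
    return 2.0 * (ll_full - ll_pooled)

def mc_pvalue(groups, n_sim=1000, seed=0):
    """Monte Carlo p-value: simulate the null distribution instead of
    relying on a (possibly invalid) chi-squared approximation."""
    rng = random.Random(seed)
    observed = lr_stat(groups)
    pooled = sum(sum(g) for g in groups) / sum(len(g) for g in groups)
    hits = 0
    for _ in range(n_sim):
        sim = [[poisson(pooled, rng) for _ in g] for g in groups]
        if lr_stat(sim) >= observed:
            hits += 1
    # Add-one correction keeps the p-value strictly positive.
    return (hits + 1) / (n_sim + 1)
```

Each call to `mc_pvalue` refits the statistic `n_sim` times, so the cost scales linearly with the number of simulations and with the cost of one fit; for models that require iterative fitting (e.g. mixtures via EM), this multiplies quickly, which is the computational burden the weighted tests are meant to avoid.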
The author proposes weighted homogeneity tests, which are less computationally intensive. He presents several such tests and provides illustrative examples to demonstrate their usefulness. One example shows how the tests might be used to conduct a meta-analysis; another applies them to a medical problem, assessing the effectiveness of beta-blockers. Based on these examples, the author concludes that the tests described in the study are computationally efficient.
The study provides an alternative to existing tests for homogeneity. This is especially important in follow-up analyses of medical data, such as new drug trials, where results often fail to reach clear statistical significance overall but, when analyzed by subpopulation, reveal important findings that might otherwise be lost.