Mixture models address the fact that many datasets have an internal structure that is better described by more than one probability distribution. The two (or more) component distributions may overlap to a greater or lesser degree, for example depending on how many standard deviations apart their means lie. Unsupervised learning means that the parameters describing this mixing can be estimated by algorithms backed by theorems that guarantee convergence to a meaningful solution.
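To make this concrete, the sketch below fits a two-component, one-dimensional Gaussian mixture with the expectation-maximization (EM) algorithm, the standard workhorse for unsupervised mixture estimation. It is a minimal illustration only; the function name, toy data, and parameter choices are assumptions for this example, not drawn from the book under review.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50, seed=0):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Initialize mixing weights, means, and variances
    w = np.array([0.5, 0.5])
    mu = rng.choice(x, 2, replace=False)
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Toy data: two subpopulations several standard deviations apart
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
w, mu, var = em_gmm_1d(x)
```

On well-separated data like this, the recovered means land near the true subpopulation centers and the weights near the true mixing proportions, which is the "something meaningful" the convergence guarantees promise.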
This work presents both the mathematical theory and computational implementations, applied to several problems whose data are composed of mixed subsets or subpopulations. For example, several chapters make the case for facial expression recognition: each part of a face must be recognized independently of the precise geometric configuration of a given person's features. Another is letter recognition: the same letter can be traced with varying strokes, each characterized by a statistical set of parameters, and on top of that, each letter must be discernible from the other elements of the alphabet.
This curated collection of research presents much of the state of the art of automatic mixture modeling. The chapters are divided by framework: Gaussian-based models, generalized Gaussian models, data clustering, and data segmentation. Thorough proofs are included for the different formulae, and the case studies represent both the current challenges and successes and the types of problems for which mixture modeling is useful.
Thus, this book can be taken as a review of the subject. It is also a very good starting point for understanding mixture modeling and even for setting up new research. I strongly recommend this work for researchers and advanced undergraduate and graduate students of computer science and applied probability.