This paper presents an optimization technique, a modified kernel clustering method, that improves the performance of kernel clustering.
The proposed research extends kernel clustering based on four main assumptions: (1) “two kinds of kernel clustering criteria and two kinds of growing ways of kernels,” (2) no restriction on the “computation of kernel center and width,” (3) alternatives to Euclidean distance for computing distances, and (4) the acknowledged importance of selecting the initial kernel centers. These assumptions allow the authors to engineer a solution that generates more feasible kernels. The method is applied to several non-linear classifiers, including the radial basis function neural network (RBFNN), the k-nearest-neighbor classifier (KNNC), and multiple kernel learning (MKL). Thorough experiments over 29 real-life datasets (for example, sensor data, images, and medical databases), seven classifiers, and four clustering methods are described in detail to demonstrate the benefits of the modified kernel clustering method.
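Since the review does not reproduce the algorithmic details, the following minimal Python sketch only illustrates a generic kernel clustering pass with a pluggable distance function and a simple per-kernel center and width estimate; the growing criteria, the center/width rules, and the initialization strategy of the reviewed method are assumptions here, not taken from the paper.

```python
# Illustrative sketch only (not the paper's algorithm): a generic kernel
# clustering pass with a pluggable distance function and a simple per-kernel
# center/width update. The growing criteria, center/width rules, and
# initialization strategy of the reviewed method are assumptions here.
import numpy as np

def kernel_clustering(X, k, distance=None, n_iter=20, seed=0):
    """Cluster X (n_samples x n_features) into k kernels (center, width)."""
    rng = np.random.default_rng(seed)
    if distance is None:
        # Default to Euclidean distance; the paper also considers alternatives.
        distance = lambda A, c: np.linalg.norm(A - c, axis=-1)

    # Initial centers: random samples; the review stresses that this choice matters.
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)

    for _ in range(n_iter):
        # Assign each sample to the nearest kernel center under the chosen distance.
        d = np.stack([distance(X, c) for c in centers], axis=1)  # shape (n, k)
        labels = d.argmin(axis=1)
        # Update each kernel center from its current members.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)

    # One common width choice: the mean distance of members to their center.
    widths = np.array([
        distance(X[labels == j], centers[j]).mean() if np.any(labels == j) else 0.0
        for j in range(k)
    ])
    return centers, widths, labels
```

In such a setting, the resulting centers and widths could, for instance, seed the hidden units of an RBFNN, which is one of the classifiers the paper evaluates.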
The paper presents the technical details of the proposed methods and juxtaposes them with the classic kernel clustering method. A detailed account of the experiments, including an in-depth analysis of clustering behavior across the different datasets, classifiers, and clustering methods, along with the reasons behind it, contributes substantially to the quality of the research. It is good reading for scholars and engineers interested in automated data and information organization and analysis, and more precisely in the use of clustering.