Computing Reviews

A constrained growing grid neural clustering model
Hung C. Applied Intelligence 43(1): 15-31, 2015. Type: Article
Date Reviewed: 08/24/15

The growing grid (GG) algorithm reduces the dimensional complexity of a dataset irrespective of its topological structure. The model adds units at a fixed rate and refines its topology in two phases: a growing stage with a larger learning rate and a fine-tuning stage with a smaller one. This paper proposes a constrained variant of GG that addresses this complexity.
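To make the two-stage schedule concrete, here is a minimal Python sketch of a GG-style training loop. It assumes a rectangular grid, a 4-connected neighborhood, and illustrative parameter names; it is a sketch of the generic GG scheme, not the paper's implementation.

```python
# Hypothetical sketch of the two-stage growing grid (GG) schedule:
# a growing stage with a larger learning rate and periodic growth,
# then a fine-tuning stage with a smaller rate and no growth.
# Parameter names and the growth criterion are illustrative assumptions.
import numpy as np

def train_gg(data, rows=2, cols=2, grow_steps=5, epochs=20,
             lr_grow=0.5, lr_fine=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))   # unit weight vectors
    err = np.zeros((rows, cols))              # accumulated quantization error

    def epoch(lr):
        for x in rng.permutation(data):
            # best matching unit (BMU): unit whose weight is closest to x
            d = np.linalg.norm(weights - x, axis=2)
            r, c = np.unravel_index(d.argmin(), d.shape)
            err[r, c] += d[r, c]
            # update BMU and its 4-connected grid neighbors toward x
            for dr, dc in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
                rr, cc = r + dr, c + dc
                if 0 <= rr < weights.shape[0] and 0 <= cc < weights.shape[1]:
                    weights[rr, cc] += lr * (x - weights[rr, cc])

    # growing stage: larger learning rate, grid grows at a fixed rate
    for _ in range(grow_steps):
        epoch(lr_grow)
        r, c = np.unravel_index(err.argmax(), err.shape)  # worst unit
        # insert a new column next to it (row insertion is analogous)
        nxt = min(c + 1, weights.shape[1] - 1)
        new = (weights[:, c:c+1] + weights[:, nxt:nxt+1]) / 2
        weights = np.concatenate([weights[:, :c+1], new, weights[:, c+1:]], axis=1)
        err = np.zeros(weights.shape[:2])

    # fine-tuning stage: smaller learning rate, no growth
    for _ in range(epochs):
        epoch(lr_fine)
    return weights
```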

According to the author, “The constrained GG [CGG] emphasizes the effect of the lateral connections between output units in a grid, and neutralizes the effect on the distance between the input vector and neighbors of the best matching unit (BMU).” To test the constrained neural learning rule, he further extends GG into fine-tuned CGG, which constrains input vectors only at the fine-tuning stage, and full CGG, which constrains them at both the growing and fine-tuning stages. CGG is a new approach to implementing the GG algorithm, and it is the paper’s core contribution.
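The paper gives the exact form of the constrained rule; as an illustration only, one plausible reading of the quoted description (an assumption on my part, not the author's stated equation) is that a BMU's neighbors follow their lateral connection to the BMU rather than the raw input-to-neighbor distance:

```python
# Hedged sketch of a constrained neighbor update in the spirit of the
# quoted rule: the BMU still moves toward the input, while its neighbors
# move toward the BMU's weight (the lateral connection), neutralizing the
# direct input-to-neighbor distance term. The exact form is an assumption;
# consult the paper for the author's actual learning rule.
import numpy as np

def constrained_update(weights, x, bmu, neighbors, lr):
    """weights: (n_units, dim); bmu: BMU index; neighbors: BMU's grid neighbors."""
    # BMU itself still moves toward the input vector
    weights[bmu] += lr * (x - weights[bmu])
    # neighbors follow the lateral connection to the BMU
    for n in neighbors:
        weights[n] += lr * (weights[bmu] - weights[n])
    return weights
```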

For evaluation, “the average quantization error (AQE), error entropy, average map scope, BMU activation rate (BAR), and topographic error (TE) are evaluated for the explanation ability, smooth hyper surface, border effect, BMU activation, and topology preservation of the GG algorithm, respectively.” The author tests on 15 datasets, comparing the self-organizing map (SOM), constrained SOM, GG, fine-tuned CGG, and full CGG. Using a t-test to assess statistical significance, he shows that the full CGG model outperforms the models that use the original learning rule, which makes this paper an interesting read for those working in this area.
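Two of the quoted metrics have standard definitions that are easy to sketch: AQE is the mean input-to-BMU distance, and TE is the fraction of inputs whose best and second-best matching units are not adjacent on the grid. The grid shape and 4-connectivity below are illustrative assumptions, not details taken from the paper.

```python
# Sketch of two standard map-quality metrics on a rectangular grid:
# AQE (average quantization error) and TE (topographic error).
import numpy as np

def aqe_and_te(data, weights, grid_shape):
    flat = weights.reshape(-1, weights.shape[-1])
    qe, te = [], 0
    for x in data:
        d = np.linalg.norm(flat - x, axis=1)
        first, second = np.argsort(d)[:2]   # best and second-best units
        qe.append(d[first])                  # quantization error for x
        r1, c1 = np.unravel_index(first, grid_shape)
        r2, c2 = np.unravel_index(second, grid_shape)
        # TE counts inputs whose top-two units are not 4-adjacent
        if abs(r1 - r2) + abs(c1 - c2) > 1:
            te += 1
    return np.mean(qe), te / len(data)
```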

The author further identifies five issues for future work:

(1) how the approach can be applied to neural gas (NG), growing neural gas (GNG), GNG with utility (GNG-U), grow when required (GWR), dynamic adaptive self-organizing hybrid (DASH), and so on;

(2) exploring and using receiver operating characteristics and other datasets;

(3) using a twist-free mechanism to avoid building a twisted topological map;

(4) learning length and effectiveness of topological preservation; and

(5) having online and offline specifications of algorithms.

Furthermore, the author proposes applying and comparing constrained neural learning in both offline and online mapping algorithms.

Reviewer: Lalit Saxena. Review #: CR143720 (1511-0982)
