Growing self-organizing map

A growing self-organizing map (GSOM) is a growing variant of a self-organizing map (SOM). The GSOM was developed to address the issue of identifying a suitable map size in the SOM. It starts with a minimal number of nodes (usually four) and grows new nodes on the boundary based on a heuristic. Through a value called the spread factor (SF), the data analyst can control the growth of the GSOM.
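As an illustration (the numbers here are illustrative only), the spread factor enters the algorithm through the growth threshold $GT = -D \times \ln(SF)$ defined in the initialization phase below: for a data set of dimension $D = 10$, choosing $SF = 0.9$ gives $GT \approx 1.05$, so accumulated node errors exceed the threshold quickly and the map grows readily, while $SF = 0.1$ gives $GT \approx 23.0$ and produces a much more compact map.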

All the starting nodes of the GSOM are boundary nodes, i.e. each node has the freedom to grow in its own direction at the beginning (Fig. 1). New nodes are grown from the boundary nodes. Once a node is selected for growing, new nodes are grown at all of its free neighboring positions. The figure shows the three possible node growth options for a rectangular GSOM.

Node growth options in GSOM: (a) one new node, (b) two new nodes and (c) three new nodes.

The algorithm

The GSOM process is as follows (a simplified code sketch of the growing phase is given after the list):

  1. Initialization phase:
    1. Initialize the weight vectors of the starting nodes (usually four) with random numbers between 0 and 1.
    2. Calculate the growth threshold ($GT$) for the given data set of dimension $D$ according to the spread factor ($SF$) using the formula $GT = -D \times \ln(SF)$.
  2. Growing Phase:
    1. Present input to the network.
    2. Determine the weight vector that is closest to the input vector mapped to the current feature map (winner), using Euclidean distance (similar to the SOM). This step can be summarized as: find $w'$ such that $\left| v - w' \right| \le \left| v - w_q \right| \ \forall q \in N$, where $v$, $w$ are the input and weight vectors respectively, $q$ is the position vector for nodes and $N$ is the set of natural numbers.
    3. The weight vector adaptation is applied only to the neighborhood of the winner and the winner itself. The neighborhood is a set of neurons around the winner, but in the GSOM the starting neighborhood selected for weight adaptation is smaller compared to the SOM (localized weight adaptation). The amount of adaptation (learning rate) is also reduced exponentially over the iterations. Even within the neighborhood, weights that are closer to the winner are adapted more than those further away. The weight adaptation can be described by $w_j(k+1) = \begin{cases} w_j(k) & \text{if } j \notin N_{k+1} \\ w_j(k) + LR(k) \times (x_k - w_j(k)) & \text{if } j \in N_{k+1} \end{cases}$ where the learning rate $LR(k)$, $k \in \mathbb{N}$, is a sequence of positive parameters converging to zero as $k \to \infty$; $w_j(k)$ and $w_j(k+1)$ are the weight vectors of node $j$ before and after the adaptation, and $N_{k+1}$ is the neighbourhood of the winning neuron at the $(k+1)$th iteration. The decreasing value of $LR(k)$ in the GSOM depends on the number of nodes existing in the map at time $k$.
    4. Increase the error value of the winner (the error value is the difference between the input vector and the winner's weight vector).
    5. When $TE_i > GT$ (where $TE_i$ is the total error of node $i$ and $GT$ is the growth threshold): grow new nodes if $i$ is a boundary node, or distribute the weights to neighbors if $i$ is a non-boundary node.
    6. Initialize the new node weight vectors to match the neighboring node weights.
    7. Initialize the learning rate ($LR$) to its starting value.
    8. Repeat steps 2 – 7 until all inputs have been presented and node growth is reduced to a minimum level.
  3. Smoothing phase.
    1. Reduce learning rate and fix a small starting neighborhood.
    2. Find the winner and adapt the weights of the winner and neighbors in the same way as in the growing phase.
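
The growing phase lends itself to a compact illustration. The following Python sketch is a minimal, simplified rendering of the listed steps, not a reference implementation: the learning-rate schedule, the error-distribution factor of 0.5, and the initialization of new node weights by copying the parent node are illustrative assumptions (the published algorithm derives new weights from the surrounding nodes and resets the learning rate after growth), and the smoothing phase is omitted.

```python
import numpy as np

def train_gsom(data, sf=0.5, lr0=0.1, epochs=50, seed=0):
    """Sketch of GSOM training on `data`, an array of shape (n, D)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    gt = -dim * np.log(sf)              # growth threshold: GT = -D * ln(SF)

    # Initialization phase: four starting nodes with weights random in [0, 1].
    nodes = {(x, y): rng.random(dim) for x in (0, 1) for y in (0, 1)}
    errors = {pos: 0.0 for pos in nodes}

    def neighbors(pos):
        x, y = pos
        return [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]

    # Growing phase.
    for epoch in range(epochs):
        lr = lr0 * np.exp(-3.0 * epoch / epochs)  # exponentially decaying rate
        for v in data:
            # Winner: the node whose weight vector is closest to the input.
            winner = min(nodes, key=lambda p: np.linalg.norm(v - nodes[p]))
            # Localized adaptation: the winner and its existing neighbors.
            for p in [winner] + [q for q in neighbors(winner) if q in nodes]:
                nodes[p] += lr * (v - nodes[p])
            # Accumulate the winner's quantization error.
            errors[winner] += np.linalg.norm(v - nodes[winner])
            if errors[winner] > gt:
                free = [q for q in neighbors(winner) if q not in nodes]
                if free:
                    # Boundary node: grow new nodes at all free positions.
                    for q in free:
                        nodes[q] = nodes[winner].copy()  # simplified init
                        errors[q] = 0.0
                else:
                    # Non-boundary node: distribute error to the neighbors.
                    for q in neighbors(winner):
                        errors[q] += 0.5 * errors[winner]
                errors[winner] = 0.0
    return nodes
```

Keeping the map as a dictionary keyed by integer grid coordinates makes boundary detection trivial: a node is a boundary node exactly when at least one of its four neighboring positions is still free.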
Approximation of a spiral with noise by 1D SOM (the upper row) and GSOM (the lower row) with 50 (the first column) and 100 (the second column) nodes. The fraction of variance unexplained is: a) 4.68% (SOM, 50 nodes); b) 1.69% (SOM, 100 nodes); c) 4.20% (GSOM, 50 nodes); d) 2.32% (GSOM, 100 nodes). The initial approximation for the SOM was an equidistribution of nodes on a segment of the first principal component with the same variance as the data set. The initial approximation for the GSOM was the mean point.[1]

Applications

The GSOM can be used for many preprocessing tasks in data mining, for nonlinear dimensionality reduction, for approximation of principal curves and manifolds, and for clustering and classification. It often gives a better representation of the data geometry than the SOM (see the classical benchmark for principal curves in the figure above).
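
As a hypothetical usage example, the `train_gsom` sketch given after the algorithm could be applied to a noisy spiral loosely resembling the benchmark in the figure above; the sampling density and noise level chosen here are arbitrary.

```python
import numpy as np

# Sample a noisy spiral and rescale it to the unit square, matching the
# random-in-[0, 1] weight initialization used by the sketch above.
t = np.linspace(0, 4 * np.pi, 500)
spiral = np.column_stack([t * np.cos(t), t * np.sin(t)])
spiral += np.random.default_rng(1).normal(scale=0.5, size=spiral.shape)
spiral = (spiral - spiral.min(0)) / (spiral.max(0) - spiral.min(0))

nodes = train_gsom(spiral, sf=0.5)
print(f"GSOM grew to {len(nodes)} nodes")
```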

References

  1. The illustration was prepared using free software: E.M. Mirkes, Principal Component Analysis and Self-Organizing Maps: applet, University of Leicester, 2011.

