Beginner’s Guide to Self-Organizing Maps


With the rapid recent progress in intelligent and artificially intelligent systems, the use of neural networks has become widespread. Neural networks are well suited to solving complex real-world problems: they can learn and model nonlinear, complex relationships between inputs and outputs, generalize and make inferences, and reveal hidden relationships, patterns, and predictions in the data they process. Neural networks use processing inspired by the human brain as the basis for algorithms that can model and understand complex prediction problems. There are several types of neural networks, each with its own unique use. The self-organizing map (SOM), also known as the Kohonen map, is one such variant.

In this article, we will discuss a type of neural network for unsupervised learning known as the self-organizing map. Here is a list of the main points we will cover.


Contents

  1. What is a self-organizing map?
  2. Uses of self-organizing maps
  3. The architecture and operation of self-organizing maps
  4. Advantages and disadvantages of self-organizing maps
  5. Implementing Self-Organizing Maps Using Python

What is a self-organizing map?

A self-organizing map, also known as a SOM, was first proposed by Teuvo Kohonen. It is an unsupervised neural network trained to produce a low-dimensional, discretized representation of the input space of the training samples, known as a map, and it therefore serves as a method of dimensionality reduction. Self-organizing maps differ from other artificial neural networks in that they apply competitive learning rather than error-correction learning such as backpropagation with gradient descent, and they use a neighborhood function to preserve the topological properties of the input space.

Self-organizing maps were initially used mainly for data visualization, but they have since been applied to a variety of problems, including as an approximate solution to the traveling salesman problem. Map units, or neurons, usually form a two-dimensional grid, so the mapping projects a high-dimensional space onto a plane. The map preserves the relative distances between points: points that are close to each other in the input space are mapped to nearby map units. Self-organizing maps can thus serve as a cluster-analysis tool for high-dimensional data. Self-organizing maps can also generalize: the network can recognize or characterize inputs it has never seen before, with a new input being assigned to the map unit whose weight vector best matches it.

Uses of self-organizing maps

Self-organizing maps have the advantage of preserving the structural information of the training data and are not inherently linear. Applying principal component analysis (PCA) to high-dimensional data can simply result in information loss when the dimensionality is cut in half. If the data has many dimensions and every dimension is informative, self-organizing maps can be much more useful than PCA for dimensionality reduction. One example is seismic facies analysis, which generates groups based on the identification of distinguishing characteristics; this method finds organizations of entities in the dataset and forms organized relational clusters.

However, these clusters may or may not have physical analogues, so a calibration method is needed to relate them to reality, and self-organizing maps get the job done: the calibration defines the correspondence between the groups and the measured physical properties. Another important preprocessing step that can be performed with self-organizing maps is text clustering, a method that helps to verify how a given text can be converted into a mathematical representation for further analysis and processing. Exploratory data analysis and visualization are also among the most important applications of self-organizing maps.


The architecture and operation of self-organizing maps

Self-organizing maps consist of two important layers: the input layer and the output layer, also known as the feature map. Each data point in the dataset competes for representation on the map. The mapping procedure begins by initializing the weight vectors. A random sample vector is then selected, and the map is searched for the weight vector that best represents that sample. Each weight vector has neighboring weights that are close to it on the grid. The winning weight is rewarded by being moved closer to the chosen sample vector, which helps the map grow and form different shapes; most commonly, the units form a square or hexagonal grid in a 2D feature space. This whole process is repeated many times, typically more than 1,000 iterations.

Self-organizing maps do not use backpropagation with SGD to update their weights; this unsupervised ANN uses competitive learning, which involves three processes: competition, cooperation, and adaptation. Each neuron of the output layer holds a weight vector of dimension n, where n is the input dimension. The distance between each output-layer neuron's weight vector and the input is computed, and the neuron with the shortest distance is declared the winner, known as the best matching unit (BMU). Updating the winning neuron's weight vector, together with those of its cooperative neighbors, is known as adaptation. After selecting the BMU and its neighbors, their weight vectors are updated; the farther a neuron on the grid is from the BMU, the less its weights are adjusted.

To put it simply, learning proceeds as follows:

  • Each node is examined to find the weights most similar to the input vector. The winning node is commonly referred to as the best matching unit (BMU).
  • The neighborhood of the BMU is then calculated. The number of neighbors tends to decrease over time.
  • The winning weight is rewarded by being moved closer to the sample vector, and its neighbors are moved toward the sample vector as well. The closer a node is to the BMU, the more its weights are changed, and the farther a neighbor is from the BMU, the less it learns.
  • Repeat from step one for N iterations (a minimal NumPy sketch of this loop follows below).
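To make these steps concrete, here is a minimal NumPy sketch of the training loop. It is an illustration only: the grid size, the exponential decay schedules, and the Gaussian neighborhood are assumptions made for this example, not the exact choices a library such as MiniSom makes.

import numpy as np

def train_som(data, grid=(9, 9), n_iter=1000, lr0=0.5, sigma0=1.5):
    """Train a toy SOM on `data`, an (n_samples, n_features) array."""
    rng = np.random.default_rng(0)
    # Initialize the weight vectors randomly
    weights = rng.random((grid[0], grid[1], data.shape[1]))
    # Grid coordinates of every neuron, used by the neighborhood function
    coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                  indexing='ij'), axis=-1)
    for t in range(n_iter):
        lr = lr0 * np.exp(-t / n_iter)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)  # shrinking neighborhood radius
        x = data[rng.integers(len(data))]     # pick a random sample
        # Find the best matching unit (smallest Euclidean distance)
        bmu = np.unravel_index(
            np.argmin(np.linalg.norm(weights - x, axis=-1)), grid)
        # Pull the BMU and its neighbors toward the sample; nodes
        # farther from the BMU on the grid learn less
        grid_dist = np.linalg.norm(coords - np.asarray(bmu), axis=-1)
        h = np.exp(-grid_dist ** 2 / (2 * sigma ** 2))  # Gaussian neighborhood
        weights += lr * h[..., None] * (x - weights)
    return weights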

Advantages and disadvantages of self-organizing maps

Self-organizing maps have several advantages and disadvantages; some of them are as follows:

Advantages

  • Data can be easily interpreted and understood thanks to techniques such as dimensionality reduction and grid clustering.
  • Self-organizing maps can handle several types of classification problems while providing a useful, intelligent summary of the data at the same time.

Disadvantages

  • A SOM does not build a generative model of the data, i.e., the model does not understand how the data is generated.
  • Self-organizing maps do not work well with categorical data, and even worse with mixed data types.
  • Training is comparatively slow, and the map is difficult to keep up to date against slowly evolving data.

Implementing Self-Organizing Maps Using Python

Self-organizing maps can easily be implemented in Python using the MiniSom library and NumPy. Below is an example of a self-organizing map built on the Iris dataset; here we load it via scikit-learn and standardize the features. We will see how to use MiniSom to cluster this dataset.

!pip install minisom

from minisom import MiniSom
import numpy as np
from sklearn import datasets

# Load the Iris dataset (here via scikit-learn) and standardize each feature
iris = datasets.load_iris()
data = (iris.data - np.mean(iris.data, axis=0)) / np.std(iris.data, axis=0)
target = iris.target

# Initializing the neurons and training
n_neurons = 9
m_neurons = 9
som = MiniSom(n_neurons, m_neurons, data.shape[1], sigma=1.5, learning_rate=.5,
              neighborhood_function='gaussian', random_seed=0)

som.pca_weights_init(data)  # initialize the weights along the first two principal components
som.train(data, 1000, verbose=True)  # train for 1000 iterations
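As a quick check after training, MiniSom can report the quantization error, i.e., the average distance between each sample and its best matching unit; lower values mean the map fits the data more closely:

# Average distance between each sample and its best matching unit
print('quantization error:', som.quantization_error(data))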

To visualize the result of our training, we can plot the distance map, or U-matrix, using a pseudo-color plot where the neurons of the map are displayed as an array of cells and the color represents the distance between a neuron's weights and those of its neighbors. On top of the pseudo-color we can add markers that represent the samples mapped to specific cells:
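A sketch of such a plot, reusing the som, data, and target variables from the training code above (the marker and color choices are arbitrary):

import matplotlib.pyplot as plt

plt.figure(figsize=(9, 9))
# distance_map() returns, for each neuron, the normalized distance
# between its weight vector and those of its neighbors (the U-matrix)
plt.pcolor(som.distance_map().T, cmap='bone_r')
plt.colorbar()

# Overlay one marker per sample at its winning neuron, styled by class
markers = ['o', 's', 'D']
colors = ['C0', 'C1', 'C2']
for x, label in zip(data, target):
    w = som.winner(x)  # coordinates of the sample's best matching unit
    plt.plot(w[0] + .5, w[1] + .5, markers[label], markerfacecolor='None',
             markeredgecolor=colors[label], markersize=12, markeredgewidth=2)
plt.show()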

To understand how the samples are distributed across the map, we can use a scatter plot where each point represents the coordinates of a sample's winning neuron. A random offset can be added to avoid overlaps between points within the same cell:
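A minimal sketch, again assuming the variables from the training code (the jitter magnitude of 0.8 cells is an arbitrary choice):

# Coordinates of the winning neuron for every sample
w_x, w_y = zip(*[som.winner(d) for d in data])
w_x = np.array(w_x, dtype=float)
w_y = np.array(w_y, dtype=float)

# Add a small random offset so points in the same cell do not overlap
w_x += (np.random.rand(len(w_x)) - .5) * .8
w_y += (np.random.rand(len(w_y)) - .5) * .8

plt.figure(figsize=(7, 7))
plt.scatter(w_x + .5, w_y + .5, s=50, alpha=.6, c=target)
plt.show()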

To understand which neurons on the map are activated most often, we can create another pseudo-color plot that reflects the neuron activation frequencies:

plt.figure(figsize=(7, 7))
# activation_response() counts how many times each neuron was the winner
frequencies = som.activation_response(data)
plt.pcolor(frequencies.T, cmap='Blues')
plt.colorbar()
plt.show()

Conclusion

Self-organizing maps are unique in their own right and offer a wide range of uses in the field of artificial neural networks and deep learning. A SOM projects data onto a low-dimensional grid for unsupervised clustering, which makes it very useful for dimensionality reduction. Its unique training architecture also lends itself to implementing clustering techniques.

In this article, we’ve understood self-organizing maps, their uses, architecture, and how they work. We also discussed the pros and cons of self-organizing maps and explored an example of their implementation in Python. I would like to encourage the reader to explore and implement this topic further because of its learning potential.

Happy learning!

