Important Introduction To Dimensionality Reduction (2021)

Introduction

The data at the heart of machine learning typically comes with many variables spread across many dimensions. This complexity, which arises from having too many factors, complicates the final classification. As the variables increase, the number of features grows, and it becomes increasingly difficult to visualize and work with the training set. One way to simplify the problem and make models less dependent on extensive data is dimensionality reduction.

  1. What is Dimensionality Reduction? 
  2. Why is Dimensionality Reduction important? 
  3. Components of Dimensionality Reduction 
  4. Methods of Dimensionality Reduction 

1. What is Dimensionality Reduction? 

As discussed in the introduction, having too many variables makes it difficult to visualize the data and then work with the training set. However, many of these variables or features are often correlated and can therefore be removed without losing much information. This is where dimensionality reduction algorithms are useful: they reduce the number of random variables by extracting a set of principal variables.

Sometimes the data being worked on is a dataset with a hundred columns, or it could be a distribution of data points that roughly fit a sphere in three-dimensional space. The function of dimensionality reduction is to reduce the number of columns from a hundred to, say, thirty, or to convert the three-dimensional sphere into a simpler two-dimensional circle. 

The purpose of dimensionality reduction is to ease the burden that high dimensionality brings: a whole range of problems arises when working with data in many dimensions that simply does not exist in lower dimensions. Each additional feature complicates the model and increases the chance of overfitting. When a large number of features is used to train a machine learning model, the model becomes more and more dependent on the data it was trained on, which means it can perform poorly on real data.
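
To make the "hundred columns down to thirty" idea concrete, here is a minimal sketch using scikit-learn's PCA. The dataset is synthetic and the column counts are only illustrative, not taken from the article.

```python
# Sketch: reducing a wide dataset from 100 columns to 30 with PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 100))              # 500 samples, 100 features

pca = PCA(n_components=30)                   # keep 30 principal components
X_reduced = pca.fit_transform(X)

print(X.shape)                               # (500, 100)
print(X_reduced.shape)                       # (500, 30)
print(pca.explained_variance_ratio_.sum())   # fraction of variance retained
```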

2. Why is Dimensionality Reduction important? 

To better understand why dimensionality reduction is important, consider a task as simple as sorting email, where an algorithm needs to classify each email as spam or not. The task can draw on a number of features, such as the email's title (whether it is generic or specific), the contents of the email, or whether the email is based on a template. Many of these features overlap, and dimensionality reduction can collapse the redundant ones while still separating spam from important emails.

Another example is a classification problem that depends on both rainfall and humidity. Since the two features are highly correlated, they can be collapsed into one underlying feature. In many such problems, the number of dimensions can be reduced, turning the problem into a simpler one, as the short sketch below illustrates. 
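
The following is a small sketch of how two strongly correlated features (here labelled rainfall and humidity) can be collapsed into a single component with PCA. The synthetic data and the degree of correlation are assumptions made purely for illustration.

```python
# Sketch: collapsing two correlated features into one component.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
rainfall = rng.normal(size=1000)
humidity = 0.9 * rainfall + 0.1 * rng.normal(size=1000)  # strongly correlated
X = np.column_stack([rainfall, humidity])                 # 2 features

pca = PCA(n_components=1)
X_1d = pca.fit_transform(X)                               # 1 combined feature

# Most of the variance survives in the single underlying feature.
print(pca.explained_variance_ratio_[0])
```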

A 3-dimensional problem can be difficult to visualize, while a 2-dimensional problem can easily be mapped onto a 2D plane, and a 1-dimensional problem can be represented with just a simple line. There are a number of other advantages that make dimensionality reduction important: 

  • Model accuracy improves because there is less misleading or irrelevant data.
  • Fewer dimensions require far less computing power, and since there is less data, the algorithm trains faster. 
  • Less data requires less storage space.
  • Lower-dimensional data works with algorithms that are unsuitable for high-dimensional data.
  • Fewer features means less noise and fewer redundant variables. 

3. Components of Dimensionality Reduction 

Dimensionality reduction has two main components: 

  • Feature selection: This is the process of extracting, from the full set of features or variables, a subset that can be used to model the problem. Feature selection can be done with filter, wrapper, or embedded methods. 
  • Feature extraction: This is used to transform data from a higher-dimensional space into a lower-dimensional space, much like the earlier example of reducing features from three dimensions to two for simplicity. (A small code sketch contrasting the two components follows this list.) 
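
Here is a hedged sketch contrasting the two components: a filter-style feature selection (scikit-learn's SelectKBest) next to feature extraction (PCA). The Iris dataset and the choice of two output features are illustrative assumptions, not part of the article.

```python
# Sketch: feature selection keeps a subset of the original columns,
# feature extraction builds new columns from combinations of all of them.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)            # 4 original features

# Feature selection (filter method): score each column, keep the best 2.
selector = SelectKBest(score_func=f_classif, k=2)
X_selected = selector.fit_transform(X, y)

# Feature extraction: project onto 2 new combined features.
extractor = PCA(n_components=2)
X_extracted = extractor.fit_transform(X)

print(X_selected.shape, X_extracted.shape)   # (150, 2) (150, 2)
```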

4. Methods of Dimensionality Reduction 

Some of the common dimensionality reduction techniques include: 

  • Principal Component Analysis (PCA): This method is commonly used with continuous data. It maps data from a higher-dimensional space to a lower-dimensional one so that the variance of the mapped data is maximized. In other words, it projects the data onto the directions of greatest variance, and those directions become the principal components. 
  • Linear Discriminant Analysis (LDA): This projects data in such a way that class separability is maximized. Points from the same class are projected close together, while points from different classes are spaced far apart. (Both PCA and LDA are sketched in code after this list.) 
  • Generalized Discriminant Analysis (GDA): GDA is an effective approach for extracting non-linear features. 
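
Below is a minimal sketch of PCA and LDA applied to the same data, assuming scikit-learn; the Iris dataset is used only as a convenient example. GDA (kernel discriminant analysis) has no single standard scikit-learn class, so it is not shown.

```python
# Sketch: PCA (unsupervised, maximum variance) vs LDA (supervised,
# maximum class separability) on the same dataset.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# PCA keeps the directions along which the data varies the most.
X_pca = PCA(n_components=2).fit_transform(X)

# LDA keeps the directions that best separate the known classes.
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)  # (150, 2) (150, 2)
```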

Conclusion

This introduction to dimensionality reduction makes a few things clear at a fundamental level. Machine learning algorithms often perform better with a smaller number of inputs. Dimensionality reduction is concerned with reducing the input features so that the algorithms are simpler to train, and there are a number of methods for doing so. 

There are no right or wrong ways of learning AI and ML technologies – the more, the better! These valuable resources can be the starting point for your journey into Artificial Intelligence and Machine Learning. Does pursuing AI and ML interest you? If you want to step into the world of emerging tech, you can accelerate your career with these Machine Learning and AI Courses by Jigsaw Academy.
