Normalization is a data preparation technique that is frequently used in machine learning. It refers to the process of transforming the columns of a dataset onto a common scale. Not every dataset needs to be normalized for machine learning; normalization is only required when the ranges of the features differ.
Normalization techniques in Machine Learning
If you’re new to data science and machine learning, you’ve probably wondered what feature normalization is and how it works.
The most widely used types of normalization in machine learning are:
- Min-Max Scaling – Subtract the column’s minimum value from each value and divide by the range (maximum minus minimum). Each rescaled column then has a minimum value of 0 and a maximum value of 1.
- Standardization Scaling – The term “standardization” refers to centering a variable at zero and scaling its variance to one. The procedure is to subtract the column’s mean from each observation and then divide by the standard deviation:
The features are rescaled so that they have the properties of a standard normal distribution, with a mean of zero and a standard deviation of one, as the sketch below illustrates.
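Here is a minimal sketch of both formulas in NumPy, applied to a single made-up column of values:

```python
import numpy as np

x = np.array([25.0, 40.0, 55.0, 80.0])  # made-up example column

# Min-max scaling: (x - min) / (max - min) -> values in [0, 1]
x_minmax = (x - x.min()) / (x.max() - x.min())

# Standardization: (x - mean) / std -> mean 0, standard deviation 1
x_standard = (x - x.mean()) / x.std()

print(x_minmax)    # approximately [0.   0.27 0.55 1.  ]
print(x_standard)  # zero-mean, unit-variance version of x
```

In practice you would typically use scikit-learn’s MinMaxScaler and StandardScaler, which apply exactly these transformations column by column.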
Normalization and standardization
Normalization and standardization are not the same thing. Standardization refers to setting the mean to zero and the standard deviation to one, while normalization in machine learning means mapping the data into the range [0, 1] (or some other range), or simply projecting the data onto the unit sphere.
Some machine learning algorithms benefit from normalization and standardization, particularly those that rely on Euclidean distance. For example, in k-nearest neighbors (KNN), if one of the variables is in the 1,000s and another is in the 0.1s, the first variable will strongly dominate the distance. In this scenario, normalization and standardization are beneficial.
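A small illustration with made-up points shows how a large-scale feature dominates the Euclidean distance, and how standardization evens things out:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Feature 1 is in the thousands, feature 2 is in the tenths.
a = np.array([1000.0, 0.5])
b = np.array([3000.0, 0.1])

# The raw distance is driven almost entirely by feature 1.
print(np.linalg.norm(a - b))  # ~2000.0

# After standardizing, both features contribute to the distance.
X = np.vstack([a, b, [2000.0, 0.9]])  # a few points so the std is meaningful
X_scaled = StandardScaler().fit_transform(X)
print(np.linalg.norm(X_scaled[0] - X_scaled[1]))
```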
Standardization is also appropriate when a machine learning method assumes the data comes from a normal distribution, as in linear discriminant analysis (LDA).
Normalization and standardization also come in handy when using linear models and interpreting their coefficients as variable importance. If one of the variables has values in the 100s and the other has values in the 0.01s, the coefficient logistic regression finds for the second variable will most likely be much bigger than the one it finds for the first.
This does not mean the second variable is more important; its coefficient simply has to be large to compensate for the variable’s small scale. Normalization and standardization change the coordinate system so that all variables have the same scale, making linear model coefficients comparable and interpretable.
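The following hedged sketch uses synthetic data and ordinary linear regression for simplicity (the same scale effect appears with logistic regression): both features carry the same amount of signal, but the small-scale feature needs a much larger raw coefficient to express it.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(100, 30, n)      # values in the hundreds
x2 = rng.normal(0.01, 0.003, n)  # values around 0.01
# Each feature contributes one "unit" of signal to the target.
y = x1 / 30 + x2 / 0.003 + rng.normal(0, 0.5, n)
X = np.column_stack([x1, x2])

print(LinearRegression().fit(X, y).coef_)
# -> roughly [0.03, 333]: the tiny-scale feature gets a huge coefficient.

print(LinearRegression().fit(StandardScaler().fit_transform(X), y).coef_)
# -> roughly [1, 1]: after standardization the coefficients are comparable.
```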
When to use normalization and standardization
- Normalization is a good approach when you don’t know the distribution of your data, or when you know it is not Gaussian. It is useful when your features have different scales and the technique you’re employing, such as k-nearest neighbors or artificial neural networks, makes no assumptions about the distribution of your data.
- Standardization assumes that your data follows a Gaussian (bell curve) distribution. This isn’t strictly required, but the approach tends to work better when the attribute distribution is Gaussian. Standardization is useful when your features have different scales and the technique you’re using (such as logistic regression, linear regression, or linear discriminant analysis) benefits from standardized, roughly Gaussian inputs.
As we stated before, not every dataset needs to be normalized for machine learning. Normalization is only required when the ranges of the features differ.
Consider a dataset with two features: age and income, where age ranges from 0 to 80 years and income ranges from 0 to 80,000 dollars and up. Income is roughly 1,000 times larger than age, so the ranges of these two features are vastly different.
Because of its larger values, income will naturally influence the result more when we perform further analysis, such as multivariate linear regression, even though it is not necessarily a better predictor. We therefore normalize the data so that all of the variables fall in the same range.
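A minimal sketch of the age/income example with min-max normalization (the sample values below are made up):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.array([[25, 20_000],
                 [45, 80_000],
                 [70, 35_000]], dtype=float)  # columns: age, income

scaled = MinMaxScaler().fit_transform(data)
print(scaled)  # both columns now lie in [0, 1], so neither dominates
```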
- We also normalize training data to make model optimization easier. Ensuring that the various features have similar value ranges (feature scaling) lets gradient descent converge faster, as the sketch below illustrates.
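The following hedged sketch uses synthetic age/income-like data and plain NumPy batch gradient descent: with wildly different feature scales the learning rate has to be tiny to avoid divergence, so after a fixed number of steps the loss is still far from the noise floor, whereas standardized features converge quickly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([rng.uniform(0, 80, n),        # age-like feature
                     rng.uniform(0, 80_000, n)])   # income-like feature
y = 2.0 * X[:, 0] + 0.001 * X[:, 1] + rng.normal(0, 1, n)

def final_loss(X, y, lr, iters=1000):
    """Plain batch gradient descent on least squares; return the final MSE."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return np.mean((X @ w - y) ** 2)

# Raw features: the step size must be tiny (about 1e-10 here) to stay stable,
# so the small-scale feature barely gets updated within the iteration budget.
print(final_loss(X, y, lr=1e-10))

# Standardized features (with y centered to drop the intercept) tolerate a
# normal learning rate and reach roughly the noise floor in the same budget.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
print(final_loss(X_std, y - y.mean(), lr=0.1))
```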