Explanation: Min-Max Normalization is a technique used to scale features to a fixed range, typically [0, 1], via x' = (x - x_min) / (x_max - x_min). This transformation is particularly useful for algorithms that are sensitive to the scale of the input data, such as gradient descent-based models. By putting every feature on the same scale, it ensures that each feature contributes proportionately to the model and reduces the bias caused by features with very different ranges. Min-Max Normalization is especially suitable when the data has a known, bounded range, which makes it a good fit for neural networks and distance-based algorithms like k-NN.
Option A: Z-score Standardization rescales data to have a mean of 0 and a standard deviation of 1, which is better suited to approximately normally distributed data. It does not confine values to a specific range such as [0, 1].
Option C: One-Hot Encoding converts categorical variables into binary vectors. It is not applicable for scaling numerical data.
Option D: Logarithmic Transformation is used to reduce skewness in data and is not designed to map values into a fixed range.
Option E: Ordinal Encoding converts categorical data into integers based on their ordinal rank, which is unrelated to numerical feature scaling.
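As a quick illustration, here is a minimal Python sketch of the formula above; the function name min_max_normalize and the sample ages/incomes values are hypothetical, chosen only to show two features on very different scales.

```python
# Minimal sketch of Min-Max Normalization: x' = (x - min) / (max - min).
# Maps every value of a feature into [0, 1].
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    if hi == lo:                      # guard: a constant feature would divide by zero
        return [0.0 for _ in values]
    return [(x - lo) / (hi - lo) for x in values]

# Hypothetical data: a small-scale feature vs. a large-scale feature.
ages = [18, 25, 40, 60]
incomes = [20_000, 35_000, 80_000, 120_000]
print(min_max_normalize(ages))     # [0.0, 0.1666..., 0.5238..., 1.0]
print(min_max_normalize(incomes))  # [0.0, 0.15, 0.6, 1.0]
```

After normalization both features occupy the same [0, 1] range, so a distance-based model such as k-NN no longer lets the income column dominate simply because its raw values are larger.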