Lesson 9.3: Regularization – L1 (Lasso), L2 (Ridge)
🔹 What is Regularization?

Regularization is a technique that prevents overfitting by adding a penalty for model complexity to the loss function.

- Helps models generalize better to unseen data.
- Commonly used in linear and logistic regression.
🔹 Types of Regularization

- L1 Regularization (Lasso)
  - Adds the absolute values of the coefficients to the loss function.
  - Can shrink some coefficients exactly to zero, performing feature selection.

  Cost = Loss + \lambda \sum |w_i|

- L2 Regularization (Ridge)
  - Adds the squared values of the coefficients to the loss function.
  - Reduces coefficients but does not set them to zero.

  Cost = Loss + \lambda \sum w_i^2

- λ → Regularization strength (higher → more penalty)
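To make the two cost functions concrete, here is a minimal NumPy sketch; the weight vector `w`, the base loss value, and the strength `lam` are made-up illustrative numbers, not values from the lesson:

```python
import numpy as np

def l1_cost(loss, w, lam):
    # Lasso: Cost = Loss + lambda * sum(|w_i|)
    return loss + lam * np.sum(np.abs(w))

def l2_cost(loss, w, lam):
    # Ridge: Cost = Loss + lambda * sum(w_i^2)
    return loss + lam * np.sum(w ** 2)

w = np.array([0.5, -1.2, 0.0, 3.0])     # hypothetical coefficients
print(l1_cost(loss=2.0, w=w, lam=0.1))  # 2.0 + 0.1 * 4.7   ≈ 2.47
print(l2_cost(loss=2.0, w=w, lam=0.1))  # 2.0 + 0.1 * 10.69 ≈ 3.069
```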
🔹 Example

- alpha → Regularization parameter (scikit-learn's name for λ; see the sketch below)
- Ridge → Shrinks coefficients
- Lasso → Shrinks coefficients and selects important features
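A minimal scikit-learn sketch of the points above; the synthetic dataset and the choice alpha=1.0 are illustrative assumptions, not values from the lesson:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso

# Synthetic data: 10 features, only 3 of which carry signal.
X, y = make_regression(n_samples=200, n_features=10,
                       n_informative=3, noise=10.0, random_state=42)

ridge = Ridge(alpha=1.0).fit(X, y)  # alpha plays the role of lambda
lasso = Lasso(alpha=1.0).fit(X, y)

print("Ridge coef_:", ridge.coef_)  # all shrunk, none exactly zero
print("Lasso coef_:", lasso.coef_)  # uninformative coefficients pushed toward or exactly to 0
```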
🔹 Advantages

- Reduces overfitting.
- Lasso → Performs automatic feature selection.
- Improves model stability and generalization.
🔹 Disadvantages

- Requires tuning of the regularization parameter (see the cross-validation sketch below).
- May underfit if the penalty is too high.
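One common way to handle the tuning burden is cross-validation over a grid of penalty strengths. A sketch using scikit-learn's RidgeCV and LassoCV; the alpha grid and the data are illustrative assumptions:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV, LassoCV

X, y = make_regression(n_samples=200, n_features=10,
                       n_informative=3, noise=10.0, random_state=42)

# Cross-validation selects alpha: too low risks overfitting,
# too high risks the underfitting mentioned above.
ridge = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0]).fit(X, y)
lasso = LassoCV(cv=5).fit(X, y)  # LassoCV builds its own alpha grid by default

print("Chosen Ridge alpha:", ridge.alpha_)
print("Chosen Lasso alpha:", lasso.alpha_)
```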
✅ Quick Recap:

- Regularization → Adds a penalty for model complexity.
- L1 (Lasso) → Shrinks & selects features.
- L2 (Ridge) → Shrinks coefficients without zeroing them.
