Lesson 7.11: Evaluation Metrics for Classification (Accuracy, Precision, Recall, F1, ROC, AUC)
🔹 Why Evaluate Classification Models?
- Classification models predict discrete classes.
- Metrics help assess how well the model predicts and how it handles errors.
🔹 Common Metrics
- Accuracy
  - Percentage of correctly predicted instances.
  - $Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$
  - TP = True Positive, TN = True Negative, FP = False Positive, FN = False Negative
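A minimal sketch of this calculation, done both by hand and with scikit-learn (assuming scikit-learn is installed; the toy labels below are made up for illustration):

```python
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # actual classes (toy data)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # model predictions (toy data)

# confusion_matrix returns [[TN, FP], [FN, TP]] for binary labels {0, 1}
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

print((tp + tn) / (tp + tn + fp + fn))  # manual formula -> 0.75
print(accuracy_score(y_true, y_pred))   # same result from sklearn -> 0.75
```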
- Precision
  - Measures the correctness of positive predictions.
  - $Precision = \frac{TP}{TP + FP}$
  - High precision → few false positives
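A minimal sketch with scikit-learn's `precision_score` (toy labels, assumed for illustration):

```python
from sklearn.metrics import precision_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]  # toy actual classes
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]  # toy predictions: TP=2, FP=1

print(precision_score(y_true, y_pred))  # TP / (TP + FP) = 2/3 ≈ 0.667
```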
- Recall (Sensitivity)
  - Measures the ability to identify actual positives.
  - $Recall = \frac{TP}{TP + FN}$
  - High recall → few false negatives
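The same toy labels illustrate recall (a sketch, assuming scikit-learn is available):

```python
from sklearn.metrics import recall_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]  # toy actual classes
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]  # toy predictions: TP=2, FN=2

print(recall_score(y_true, y_pred))  # TP / (TP + FN) = 2/4 = 0.5
```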
- F1 Score
  - Harmonic mean of precision and recall.
  - $F1 = 2 \cdot \frac{Precision \cdot Recall}{Precision + Recall}$
  - Balances precision and recall
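Continuing the same toy example, F1 combines the precision (≈ 0.667) and recall (0.5) computed above (illustrative sketch):

```python
from sklearn.metrics import f1_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]  # toy actual classes
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]  # precision ≈ 0.667, recall = 0.5

print(f1_score(y_true, y_pred))  # 2 * P * R / (P + R) ≈ 0.571
```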
- ROC Curve (Receiver Operating Characteristic)
  - Plots the True Positive Rate (recall) against the False Positive Rate.
  - Shows model performance at different classification thresholds
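A minimal sketch of how ROC points come from predicted probabilities, using scikit-learn's `roc_curve` (the scores below are toy values, assumed for illustration):

```python
from sklearn.metrics import roc_curve

y_true = [0, 0, 1, 1]             # toy actual classes
y_scores = [0.1, 0.4, 0.35, 0.8]  # toy predicted probabilities of class 1

# Each threshold yields one (FPR, TPR) point on the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_scores)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th}  FPR={f:.2f}  TPR={t:.2f}")
```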
- AUC (Area Under the Curve)
  - Measures the model's overall ability to distinguish between classes.
  - AUC closer to 1 → better model
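Using the same toy scores as the ROC sketch above, AUC can be computed directly (a sketch, assuming scikit-learn is available):

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]             # toy actual classes
y_scores = [0.1, 0.4, 0.35, 0.8]  # toy predicted probabilities of class 1

# AUC: probability a random positive is ranked above a random negative
print(roc_auc_score(y_true, y_scores))  # 0.75 for this toy data
```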
🔹 Quick Recap Table
| Metric | Purpose |
|---|---|
| Accuracy | Overall correct predictions |
| Precision | Correctness of positive predictions |
| Recall | Ability to detect actual positives |
| F1 Score | Balance between precision and recall |
| ROC Curve | Performance at various thresholds |
| AUC | Overall class separation ability |
✅ Quick Summary:
- Use accuracy when classes are balanced.
- Use precision, recall, and F1 when classes are imbalanced.
- Use ROC and AUC to evaluate the model's ability to discriminate between classes.
