Lesson 13.2: Bias and Fairness in Machine Learning
🔹 What is Bias in ML?
Bias occurs when a machine learning model produces systematically unfair outcomes for certain groups or individuals.
- Causes: unbalanced datasets, poor feature selection, or algorithmic limitations.
- Example: a hiring model favoring one gender due to biased training data.
🔹 Types of Bias
- Sampling Bias → Training data is not representative of the population.
- Measurement Bias → Incorrect or inconsistent data collection.
- Algorithmic Bias → Model design unintentionally favors certain outcomes.
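Sampling bias, in particular, can often be caught before training by comparing group proportions in the dataset against known population shares. The sketch below illustrates this check; the group names, population shares, and skewed sample are invented for illustration.

```python
# Minimal sketch: detect sampling bias by comparing each group's
# share of the training sample against its assumed population share.
# All names and numbers here are illustrative, not real data.
from collections import Counter

population_share = {"group_a": 0.50, "group_b": 0.50}  # assumed true shares

training_labels = ["group_a"] * 80 + ["group_b"] * 20  # a skewed sample
counts = Counter(training_labels)
total = sum(counts.values())

for group, true_share in population_share.items():
    sample_share = counts[group] / total
    gap = sample_share - true_share
    print(f"{group}: sample={sample_share:.2f}, "
          f"population={true_share:.2f}, gap={gap:+.2f}")
```

A large gap between sample and population share is a signal to re-collect or re-weight the data before training.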
🔹 Ensuring Fairness
- Use balanced and diverse datasets.
- Apply bias-detection metrics (e.g., demographic parity, equal opportunity).
- Regularly audit models for fairness.
- Avoid using sensitive features (such as gender or race) unless ethically justified.
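The two metrics named above can be computed directly from a model's predictions. Demographic parity compares positive-prediction rates across groups; equal opportunity compares true-positive rates. Below is a hedged sketch using invented toy predictions, labels, and group assignments.

```python
# Sketch of two bias-detection metrics:
#  - demographic parity gap: difference in positive-prediction rates
#  - equal opportunity gap:  difference in true-positive rates
# The toy predictions, labels, and groups below are invented.

def positive_rate(preds):
    return sum(preds) / len(preds) if preds else 0.0

def true_positive_rate(preds, labels):
    # predictions restricted to examples whose true label is positive
    positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(positives) / len(positives) if positives else 0.0

# toy data: 1 = favorable prediction / favorable outcome
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 0, 0, 1, 1, 0, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

by_group = {}
for g in set(groups):
    idx = [i for i, gi in enumerate(groups) if gi == g]
    p = [preds[i] for i in idx]
    y = [labels[i] for i in idx]
    by_group[g] = (positive_rate(p), true_positive_rate(p, y))

dp_gap = abs(by_group["a"][0] - by_group["b"][0])
eo_gap = abs(by_group["a"][1] - by_group["b"][1])
print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equal opportunity gap:  {eo_gap:.2f}")
```

A gap near zero on either metric suggests similar treatment across groups; in practice, audits compute these on held-out data and track them over time.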
🔹 Key Takeaways
- Bias can lead to unethical decisions and discrimination.
- Fair ML ensures trust, transparency, and responsible AI.
- Regular evaluation and corrective measures are essential.
✅ Quick Recap:
- Bias → Unfair model behavior due to data or algorithm issues.
- Fairness → Balanced datasets, auditing, ethical feature use.
- Goal → Build trustworthy and equitable ML models.
