📚 Quick Study Guide: AI Fairness Metrics
- ✨ Definition of Fairness: Context-dependent, often relates to equal treatment or outcomes for different demographic groups.
- 📊 Common Metrics Categories: Individual Fairness (similar individuals treated similarly) and Group Fairness (e.g., demographic parity, equal opportunity, equalized odds).
- ⚖️ Demographic Parity (Statistical Parity): Requires the proportion of positive predictions to be the same across protected groups. Mathematically, $P(\hat{Y}=1|A=a) = P(\hat{Y}=1|A=b)$ for any two values $a$, $b$ of the protected attribute $A$.
- 🎯 Equal Opportunity: Requires equal true positive rates (TPR) across protected groups, ensuring the model correctly identifies positive cases (e.g., qualified candidates) at the same rate for all groups. $P(\hat{Y}=1|Y=1, A=a) = P(\hat{Y}=1|Y=1, A=b)$.
- ✅ Equalized Odds: A stronger condition, requiring both equal true positive rates (TPR) AND equal false positive rates (FPR) across protected groups. $P(\hat{Y}=1|Y=1, A=a) = P(\hat{Y}=1|Y=1, A=b)$ AND $P(\hat{Y}=1|Y=0, A=a) = P(\hat{Y}=1|Y=0, A=b)$.
- 📈 Predictive Parity (Predictive Value Parity): Requires equal positive predictive value (PPV) across groups: among those predicted positive, the actual proportion of positives should be the same. $P(Y=1|\hat{Y}=1, A=a) = P(Y=1|\hat{Y}=1, A=b)$.
- 🤝 Sufficiency (Conditional Use Accuracy Equality): Requires equal positive predictive value (PPV) AND equal negative predictive value (NPV) across groups. $P(Y=1|\hat{Y}=1, A=a) = P(Y=1|\hat{Y}=1, A=b)$ AND $P(Y=0|\hat{Y}=0, A=a) = P(Y=0|\hat{Y}=0, A=b)$.
- 🤹 Trade-offs: It's often impossible to satisfy all fairness metrics simultaneously (e.g., Kleinberg's Impossibility Theorem). The choice of metric depends on the application's context and ethical priorities.
- 🛡️ Protected Attributes: Characteristics like race, gender, age, religion, disability, etc., that are legally or ethically protected from discrimination.
- 🛠️ Mitigation Strategies: Techniques to reduce bias, categorized into pre-processing (e.g., re-weighting data), in-processing (e.g., adding regularization to the model), and post-processing (e.g., adjusting classification thresholds).
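The group-fairness definitions above can be checked directly from predictions. Here is a minimal sketch in Python; the records (group labels `"a"`/`"b"`, true labels, predictions) are synthetic and purely illustrative:

```python
# Synthetic records: (protected_group, y_true, y_pred). Purely illustrative.
data = [
    ("a", 1, 1), ("a", 1, 0), ("a", 0, 1), ("a", 0, 0),
    ("b", 1, 1), ("b", 1, 1), ("b", 0, 0), ("b", 0, 0),
]

def positive_rate(records):
    """P(Y_hat=1): share of positive predictions (demographic parity)."""
    preds = [p for _, _, p in records]
    return sum(preds) / len(preds)

def tpr(records):
    """P(Y_hat=1 | Y=1): true positive rate (equal opportunity)."""
    pos = [p for _, y, p in records if y == 1]
    return sum(pos) / len(pos)

def fpr(records):
    """P(Y_hat=1 | Y=0): false positive rate (with TPR, equalized odds)."""
    neg = [p for _, y, p in records if y == 0]
    return sum(neg) / len(neg)

def ppv(records):
    """P(Y=1 | Y_hat=1): positive predictive value (predictive parity)."""
    actual = [y for _, y, p in records if p == 1]
    return sum(actual) / len(actual)

group_a = [r for r in data if r[0] == "a"]
group_b = [r for r in data if r[0] == "b"]

# Demographic parity holds here (0.5 vs 0.5), but equal opportunity does not
# (TPR 0.5 vs 1.0) -- one metric can pass while another fails.
print(positive_rate(group_a), positive_rate(group_b))  # 0.5 0.5
print(tpr(group_a), tpr(group_b))                      # 0.5 1.0
```

This also illustrates the trade-off point above: the same predictions satisfy one criterion while violating another.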
📝 Practice Quiz: Fairness Metrics in AI Ethics
1. Which fairness metric requires the proportion of positive predictions to be the same across different protected groups?
   - A. Equal Opportunity
   - B. Equalized Odds
   - C. Demographic Parity
   - D. Predictive Parity
2. If a model satisfies "Equal Opportunity," what specific rate is equalized across protected groups?
   - A. True Positive Rate (TPR)
   - B. False Positive Rate (FPR)
   - C. True Negative Rate (TNR)
   - D. Positive Predictive Value (PPV)
3. Which of the following is the strongest fairness criterion, requiring equal True Positive Rates AND equal False Positive Rates across groups?
   - A. Demographic Parity
   - B. Equal Opportunity
   - C. Equalized Odds
   - D. Predictive Parity
4. Consider a loan application model. If the model exhibits "Predictive Parity," what does this imply?
   - A. The proportion of loans approved is the same for all groups.
   - B. Among those predicted to repay the loan, the actual repayment rate is the same for all groups.
   - C. The model correctly identifies loan repayers at the same rate across groups.
   - D. The model incorrectly denies loans at the same rate across groups.
5. The impossibility theorem of Kleinberg et al. suggests that it is generally impossible to satisfy which combination of fairness criteria simultaneously when base rates (prevalence of the positive outcome) differ between groups?
   - A. Demographic Parity and Equal Opportunity
   - B. Equal Opportunity and Equalized Odds
   - C. Predictive Parity and Equal Opportunity
   - D. Calibration within groups, balance for the positive class, and balance for the negative class.
6. Which of the following is considered a "post-processing" technique for mitigating bias in an AI model?
   - A. Re-sampling the training data to balance protected groups.
   - B. Adding a fairness regularization term to the model's loss function during training.
   - C. Adjusting the classification threshold differently for various protected groups.
   - D. Using an adversarial debiasing framework during model training.
7. A college admissions algorithm is found to have a lower True Negative Rate (TNR) for applicants from a specific minority group compared to the majority group. Which fairness metric is directly violated by this observation?
   - A. Demographic Parity
   - B. Equal Opportunity
   - C. Equalized Odds (specifically the FPR component, since TNR = 1 − FPR)
   - D. Predictive Parity
Answers
1. C
2. A
3. C
4. B
5. D
6. C
7. C
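The post-processing strategy tested in question 6 can be sketched in a few lines: choose a separate decision threshold per group so that true positive rates match. The scores, labels, and target TPR below are invented for illustration; this is a toy sketch, not a production debiasing method:

```python
# Post-processing sketch: pick a per-group decision threshold so that
# true positive rates match across groups. All data here is synthetic.

def tpr_at_threshold(scores, labels, t):
    """TPR when predicting positive for score >= t."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    return sum(s >= t for s in pos) / len(pos)

def pick_threshold(scores, labels, target_tpr):
    """Largest threshold whose TPR still meets the target."""
    best = min(scores)
    for t in sorted(set(scores)):
        if tpr_at_threshold(scores, labels, t) >= target_tpr:
            best = t
    return best

# Group a's scores run systematically higher than group b's for the same labels.
scores_a, labels_a = [0.9, 0.8, 0.4, 0.2], [1, 1, 0, 0]
scores_b, labels_b = [0.7, 0.5, 0.3, 0.1], [1, 1, 0, 0]

# A single global threshold of 0.8 yields TPR 1.0 for group a but 0.0 for b;
# per-group thresholds (0.8 and 0.5) equalize TPR at 1.0 for both.
t_a = pick_threshold(scores_a, labels_a, target_tpr=1.0)  # 0.8
t_b = pick_threshold(scores_b, labels_b, target_tpr=1.0)  # 0.5
```

Note the design trade-off: per-group thresholds equalize TPR but may themselves be legally or ethically contested, since they apply different decision rules to different groups.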