diane316 8h ago • 0 views

Multiple Choice Questions on Fairness Metrics in AI Ethics

Hey everyone! 👋 Diving into AI ethics can feel pretty complex, especially with all the different fairness metrics out there. But understanding them is super crucial for building responsible AI. I've put together a quick study guide and some practice questions to help us nail down the core concepts. Let's get smart about fairness! 🧠
💻 Computer Science & Technology


1 Answer

✅ Best Answer

📚 Quick Study Guide: AI Fairness Metrics

  • Definition of Fairness: Context-dependent, often relates to equal treatment or outcomes for different demographic groups.
  • 🗂️ Common Metric Categories: Individual Fairness (similar individuals treated similarly) and Group Fairness (e.g., demographic parity, equal opportunity, equalized odds).
  • ⚖️ Demographic Parity (Statistical Parity): Requires the proportion of positive predictions to be the same across protected groups. Mathematically, $P(\hat{Y}=1|A=a) = P(\hat{Y}=1|A=b)$ for any two values $a$, $b$ of the protected attribute $A$.
  • 🎯 Equal Opportunity: Requires equal true positive rates (TPR) across protected groups. Focuses on ensuring that the model correctly identifies positive cases (e.g., qualified candidates) at the same rate for all groups. $P(\hat{Y}=1|Y=1, A=a) = P(\hat{Y}=1|Y=1, A=b)$.
  • Equalized Odds: A stronger condition, requiring both equal true positive rates (TPR) AND equal false positive rates (FPR) across protected groups. $P(\hat{Y}=1|Y=1, A=a) = P(\hat{Y}=1|Y=1, A=b)$ AND $P(\hat{Y}=1|Y=0, A=a) = P(\hat{Y}=1|Y=0, A=b)$.
  • 📈 Predictive Parity (Predictive Value Parity): Requires equal positive predictive value (PPV) across groups. This means that among those predicted positive, the actual proportion of positives is the same. $P(Y=1|\hat{Y}=1, A=a) = P(Y=1|\hat{Y}=1, A=b)$.
  • 🤝 Sufficiency (Conditional Use Accuracy Equality): Requires equal positive predictive value (PPV) AND equal negative predictive value (NPV) across groups. $P(Y=1|\hat{Y}=1, A=a) = P(Y=1|\hat{Y}=1, A=b)$ AND $P(Y=0|\hat{Y}=0, A=a) = P(Y=0|\hat{Y}=0, A=b)$.
  • 🤹 Trade-offs: It is generally impossible to satisfy multiple fairness metrics simultaneously when base rates differ between groups (e.g., Kleinberg's impossibility theorem). The choice of metric depends on the application's context and ethical priorities.
  • 🛡️ Protected Attributes: Characteristics like race, gender, age, religion, disability, etc., that are legally or ethically protected from discrimination.
  • 🛠️ Mitigation Strategies: Techniques to reduce bias, categorized into pre-processing (e.g., re-weighting data), in-processing (e.g., adding regularization to the model), and post-processing (e.g., adjusting classification thresholds).
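The group-fairness metrics above are easy to compute directly from predictions. Here's a minimal sketch (the function name and toy data are my own, not from any particular library) that computes the per-group quantities behind demographic parity (selection rate), equal opportunity (TPR), equalized odds (TPR + FPR), and predictive parity (PPV):

```python
import numpy as np

def group_fairness_metrics(y_true, y_pred, group):
    """Per-group selection rate, TPR, FPR, and PPV for binary predictions."""
    results = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        results[g] = {
            # Demographic parity compares this across groups: P(Y_hat=1 | A=g)
            "selection_rate": yp.mean(),
            # Equal opportunity compares TPR: P(Y_hat=1 | Y=1, A=g)
            "tpr": yp[yt == 1].mean() if (yt == 1).any() else np.nan,
            # Equalized odds additionally compares FPR: P(Y_hat=1 | Y=0, A=g)
            "fpr": yp[yt == 0].mean() if (yt == 0).any() else np.nan,
            # Predictive parity compares PPV: P(Y=1 | Y_hat=1, A=g)
            "ppv": yt[yp == 1].mean() if (yp == 1).any() else np.nan,
        }
    return results

# Toy example with two groups 'a' and 'b' (hypothetical data)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'])
metrics = group_fairness_metrics(y_true, y_pred, group)
```

Comparing, say, `metrics['a']['tpr']` against `metrics['b']['tpr']` tells you how far the model is from equal opportunity on this data.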

📝 Practice Quiz: Fairness Metrics in AI Ethics

  1. Which fairness metric requires the proportion of positive predictions to be the same across different protected groups?

    1. Equal Opportunity
    2. Equalized Odds
    3. Demographic Parity
    4. Predictive Parity
  2. If a model satisfies "Equal Opportunity," what specific rates are equalized across protected groups?

    1. True Positive Rate (TPR)
    2. False Positive Rate (FPR)
    3. True Negative Rate (TNR)
    4. Positive Predictive Value (PPV)
  3. What is the strongest fairness criterion among the following, requiring equal True Positive Rates AND equal False Positive Rates across groups?

    1. Demographic Parity
    2. Equal Opportunity
    3. Equalized Odds
    4. Predictive Parity
  4. Consider a loan application model. If the model exhibits "Predictive Parity," what does this imply?

    1. The proportion of loans approved is the same for all groups.
    2. Among those predicted to repay the loan, the actual repayment rate is the same for all groups.
    3. The model correctly identifies loan repayers at the same rate across groups.
    4. The model incorrectly denies loans at the same rate across groups.
  5. Kleinberg's Impossibility Theorem suggests that it is generally impossible to satisfy which combination of fairness criteria simultaneously when base rates (prevalence of the positive outcome) differ between groups?

    1. Demographic Parity and Equal Opportunity
    2. Equal Opportunity and Equalized Odds
    3. Predictive Parity and Equal Opportunity
    4. Calibration within groups, balance for the positive class (related to Equal Opportunity), and balance for the negative class.
  6. Which of the following is considered a "post-processing" technique for mitigating bias in an AI model?

    1. Re-sampling the training data to balance protected groups.
    2. Adding a fairness regularization term to the model's loss function during training.
    3. Adjusting the classification threshold differently for various protected groups.
    4. Using an adversarial debiasing framework during model training.
  7. A college admissions algorithm is found to have a lower True Negative Rate (TNR) for applicants from a specific minority group compared to the majority group. Which fairness metric is directly violated by this observation?

    1. Demographic Parity
    2. Equal Opportunity
    3. Equalized Odds (specifically, the FPR part, since TNR = 1 - FPR)
    4. Predictive Parity
Answers

1. C
2. A
3. C
4. B
5. D
6. C
7. C
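To make question 6 concrete, here is a minimal sketch of a post-processing repair (the function name and scores are hypothetical, not a standard library API): it picks a separate classification threshold per group so each group's selection rate lands near a target, which pushes the model toward demographic parity without retraining.

```python
import numpy as np

def per_group_thresholds(scores, group, target_rate):
    """For each group, choose the score threshold whose selection rate
    (fraction of scores >= threshold) is closest to target_rate."""
    thresholds = {}
    for g in np.unique(group):
        s = np.sort(scores[group == g])
        best_t, best_gap = None, np.inf
        # Candidate thresholds: each observed score in the group
        for t in s:
            gap = abs((s >= t).mean() - target_rate)
            if gap < best_gap:
                best_t, best_gap = t, gap
        thresholds[g] = best_t
    return thresholds

# Hypothetical risk scores for two groups
scores = np.array([0.2, 0.4, 0.6, 0.8, 0.1, 0.3, 0.5, 0.7])
group  = np.array(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'])
th = per_group_thresholds(scores, group, target_rate=0.5)
# Each group now selects roughly half its members under its own threshold.
```

Note the trade-off the study guide mentions: equalizing selection rates this way can move TPR/FPR apart, so the "right" post-processing target depends on which fairness criterion the application prioritizes.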
