Quick Study Guide
- NLP bias arises when models learn from skewed or unrepresentative training data.
- Common examples include gender bias in pronoun (coreference) resolution and racial bias in sentiment analysis.
- Mitigation techniques include data augmentation, bias detection algorithms, and fairness-aware training (see the first sketch after this list).
- Evaluation should report fairness metrics, such as equal opportunity, alongside accuracy (the second sketch below shows the computation).
- Real-world harms include discriminatory hiring algorithms and biased loan-approval decisions.
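To make the data-augmentation bullet concrete, here is a minimal sketch of counterfactual (gender-swap) augmentation: each training sentence gets a gender-swapped twin so gendered words appear in both contexts equally often. The `GENDER_SWAPS` table, the `swap_gendered_terms` helper, and the corpus are all illustrative toy examples, not a real pipeline; production systems need part-of-speech-aware handling of ambiguous words like "her" (him vs. his).

```python
# A minimal sketch of counterfactual data augmentation for gender bias.
# The swap table and corpus below are toy examples (assumptions).
GENDER_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "man": "woman", "woman": "man",
    "father": "mother", "mother": "father",
}

def swap_gendered_terms(sentence: str) -> str:
    """Return a copy of the sentence with gendered terms swapped."""
    return " ".join(GENDER_SWAPS.get(tok, tok) for tok in sentence.lower().split())

corpus = [
    "the doctor said he was ready",
    "the nurse said she was busy",
]

# Train on the original sentences plus their gender-swapped twins.
augmented = corpus + [swap_gendered_terms(s) for s in corpus]
for s in augmented:
    print(s)
```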
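And for the fairness-metric bullet, a minimal sketch of equal opportunity: compare true-positive rates (TPR) across demographic groups, where parity means qualified members of each group get a positive prediction at the same rate. The group labels, predictions, and variable names are hypothetical toy values.

```python
# A minimal sketch of the equal-opportunity metric: compare true-positive
# rates across groups. All labels below are toy values (assumptions).
def true_positive_rate(y_true, y_pred):
    """TPR = fraction of actual positives the model predicts positive."""
    hits = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(hits) / len(hits) if hits else 0.0

# Hypothetical ground truth and predictions, split by a sensitive attribute.
y_true_a, y_pred_a = [1, 1, 0, 1], [1, 0, 0, 1]   # group A
y_true_b, y_pred_b = [1, 1, 1, 0], [1, 1, 1, 0]   # group B

gap = abs(true_positive_rate(y_true_a, y_pred_a)
          - true_positive_rate(y_true_b, y_pred_b))
print(f"equal-opportunity gap: {gap:.2f}")  # 0.00 means parity between groups
```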
Practice Quiz
1. Which of the following is a common source of bias in NLP models?
A. Perfectly balanced datasets
B. Unbiased algorithms
C. Biased training data
D. Random noise
2. In pronoun resolution, gender bias can manifest as:
A. Correctly identifying all pronouns
B. Incorrectly associating pronouns with genders
C. Ignoring pronouns entirely
D. Always using gender-neutral pronouns
3. Sentiment analysis models can exhibit racial bias by:
A. Accurately assessing sentiment for all races
B. Assigning different sentiment scores based on race
C. Ignoring race in sentiment assessment
D. Providing sentiment scores in multiple languages
4. Which technique can help mitigate bias in NLP models?
A. Using smaller datasets
B. Data augmentation
C. Ignoring outliers
D. Using simpler models
5. Which of the following is a fairness metric used to evaluate NLP models?
A. Accuracy
B. Precision
C. Equal opportunity
D. Recall
6. A real-world consequence of biased NLP in hiring algorithms is:
A. Increased diversity
B. Discriminatory hiring practices
C. More efficient hiring
D. Reduced workload for HR
7. How can fairness-aware training help reduce bias?
A. By ignoring sensitive attributes
B. By explicitly minimizing bias during training
C. By using only positive examples
D. By increasing the model's complexity
Answers
1. C
2. B
3. B
4. B
5. C
6. B
7. B