Ethics & Bias

Responsible AI Development

Part 4: Applications · 26 slides

The AI That Penalized Women: Amazon's experimental hiring tool (built from 2014, scrapped by 2018) learned from 10 years of resumes submitted mostly by male engineers. The result: it scored candidates from 1 to 5 stars and downgraded any resume containing the word "women's", as in "women's chess club captain".

Prerequisites

  • Understanding of how ML models learn from data
  • Week 6: Pre-trained models and their training data
  • Awareness of societal impact of AI systems

Overview

Build responsible NLP systems: detect and mitigate bias, and ensure fairness across groups.

Learning Objectives

  • Identify sources of bias in NLP systems (data, model, deployment)
  • Apply bias detection methods (WEAT, demographic parity)
  • Understand fairness metrics and their trade-offs
  • Evaluate responsible AI frameworks and guidelines
  • Design mitigation strategies for biased systems
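One of the detection methods named above, WEAT, can be sketched in a few lines. The snippet below is a minimal illustration using NumPy and toy two-dimensional vectors; in practice the target and attribute sets would be words looked up in pre-trained embeddings such as GloVe, and the function names here are illustrative, not from any particular library.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """s(w, A, B): mean similarity of w to attribute set A minus to set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Cohen's-d-style WEAT effect size for target sets X, Y and attributes A, B."""
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Toy setup: X-words lean toward attribute A, Y-words toward B.
A = [np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0])]
X = [np.array([0.9, 0.1])]
Y = [np.array([0.1, 0.9])]
effect = weat_effect_size(X, Y, A, B)  # ≈ 1.414: a strong measured association
```

A positive effect size means the X targets associate more with attribute set A than the Y targets do; values near zero indicate little measured bias.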

Key Topics

Bias detection
Fairness metrics
Debiasing methods
Responsible AI

Key Concepts

Training data bias: Historical biases encoded in text corpora
Representation bias: Underrepresentation of minority groups
WEAT: Word Embedding Association Test for measuring bias
Demographic parity: Equal positive-prediction rates across groups
Equalized odds: Equal error rates (true- and false-positive rates) across groups
Model cards: Documentation for responsible AI deployment
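The two fairness metrics above reduce to simple gap computations over model predictions. The sketch below, using NumPy and a hypothetical binary group attribute, is illustrative rather than taken from any fairness library; note the trade-off it makes visible: a classifier can have a small demographic parity gap while still failing equalized odds, and vice versa.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)|: positive-prediction rate gap."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equalized_odds_gap(y_true, y_pred, group):
    """Worst-case gap in error rates: max of TPR and FPR differences."""
    gaps = []
    for label in (1, 0):  # label=1 gives the TPR gap, label=0 the FPR gap
        rates = [y_pred[(group == g) & (y_true == label)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy data: 4 examples per group, binary predictions and labels.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

dp = demographic_parity_gap(y_pred, group)          # 0.25
eo = equalized_odds_gap(y_true, y_pred, group)      # 0.5 (driven by the FPR gap)
```

Here the positive-prediction rates differ by 0.25 between groups, but the false-positive rates differ by 0.5, so which metric you optimize changes which mitigation you would choose.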

Key Visualizations

AI Ethics Landscape
Bias Sources Flowchart
Fairness Metrics Comparison
Ethics

Resources