Fairness
Fairness in AI recognizes that AI systems distribute both benefits and harms, so their impact requires careful consideration. Two common framings are individual fairness, where similar individuals receive similar predictions, and group fairness, which requires that groups, often defined by legally protected characteristics such as race, be treated equally. Because these principles can conflict (satisfying one may violate the other), fairness must be defined case by case rather than through a single universal metric. Researchers also distinguish underlying worldviews: WYSIWYG ("what you see is what you get"), which assumes the data accurately reflects reality, and WAE ("we're all equal"), which attributes differences in observed outcomes to structural bias. Understanding these definitions is essential for ethical AI decision-making.
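As a concrete illustration of group fairness, one widely used metric is the demographic parity difference: the gap in positive-prediction rates between two groups. The sketch below is a minimal, self-contained example with made-up data; the function name and the group labels "A" and "B" are illustrative assumptions, not part of any standard API.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels ("A" or "B") aligned with predictions
    """
    rates = {}
    for g in ("A", "B"):
        # Collect predictions belonging to group g and compute its
        # positive-prediction rate.
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return abs(rates["A"] - rates["B"])

# Illustrative data: group A receives positive predictions 75% of the
# time, group B only 25%, giving a demographic parity gap of 0.5.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A gap of 0 would indicate that both groups receive positive predictions at the same rate; whether that is the right fairness criterion for a given system depends on the worldview adopted, as the entry above notes.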