Campus Event Calendar

Event Entry

What and Who

Operationalizing Fairness for Responsible Machine Learning

Preethi Lahoti
MMCI
Promotionskolloquium (doctoral colloquium)
AG 1, AG 2, AG 3, INET, AG 4, AG 5, D6, SWS, RG1, MMCI  
Public Audience
English

Date, Time and Location

Friday, 20 May 2022
16:15
60 Minutes
Virtual talk
Saarbrücken

Abstract

As machine learning (ML) is increasingly used for decision making in scenarios that impact humans, there is a growing awareness of its potential for unfairness. A large body of recent work has focused on proposing formal notions of fairness in ML, as well as approaches to mitigate unfairness. However, there is a disconnect between the ML fairness literature and what is needed to operationalize fairness in practice.


This dissertation brings ML fairness closer to real-world applications by developing new models and methods that address key challenges in operationalizing fairness in practice. 

* First, we tackle a key assumption in the group fairness literature: that sensitive demographic attributes such as race and gender are known upfront and can readily be used in model training to mitigate unfairness. In practice, factors such as privacy and regulation often prohibit ML models from collecting or using protected attributes in decision making. To address this challenge, we introduce the novel notion of computationally-identifiable errors and propose Adversarially Reweighted Learning (ARL), an optimization method that seeks to improve worst-case performance over unobserved groups without requiring access to the protected attributes in the dataset (see the first sketch following this list).

* Second, we argue that while group fairness notions are a desirable fairness criterion, they are fundamentally limited because they reduce fairness to an average statistic over pre-identified protected groups. In practice, automated decisions are made at the level of individuals and can adversely impact individual people irrespective of the group statistic. We advance the paradigm of individual fairness by proposing iFair (individually fair representations), an optimization approach for learning a low-dimensional latent representation of the data with two goals: to encode the data as well as possible, while removing any information about protected attributes from the transformed representation (see the second sketch following this list).

* Third, we advance the individual fairness paradigm, which requires that similar individuals receive similar outcomes. However, similarity metrics computed over the observed feature space can be brittle and are inherently limited in their ability to accurately capture similarity between individuals. To address this, we introduce the novel notion of fairness graphs to capture nuanced expert knowledge about which individuals should be treated similarly with respect to the ML objective. We cast the problem of individual fairness as a graph embedding problem and propose PFR (pairwise fair representations), a method to learn a unified pairwise fair representation of the data (see the third sketch following this list).

* Fourth, we tackle the challenge that production data keeps evolving after model deployment. As a consequence, despite the best efforts in training a fair model, deployed ML systems remain prone to a variety of unforeseen failure risks. We propose Risk Advisor, a model-agnostic meta-learner that predicts potential failure risks and, by leveraging the information-theoretic notions of aleatoric and epistemic uncertainty, gives guidance on the sources of uncertainty inducing those risks (see the fourth sketch following this list).
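
The first sketch illustrates the adversarial reweighting idea behind ARL: an adversary scores examples using only non-protected features and shifts weight toward regions where the learner errs, while the learner minimizes the reweighted loss. This is a minimal Python/PyTorch illustration, not the dissertation's reference implementation; the network sizes, the 10-dimensional feature space, and the softmax-based weight normalization are assumptions made for brevity.

import torch
import torch.nn as nn

learner = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Linear(10, 1)   # sees only non-protected features, never the protected attribute

opt_learner = torch.optim.Adam(learner.parameters(), lr=1e-3)
opt_adversary = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss(reduction="none")

def arl_step(x, y):
    # x: (batch, 10) float features; y: (batch,) float labels in {0, 1}
    # The adversary turns its scores into per-example weights (they sum to roughly twice the batch size).
    w = 1.0 + x.size(0) * torch.softmax(adversary(x).squeeze(-1), dim=0)
    per_example = bce(learner(x).squeeze(-1), y)

    # The learner minimizes the reweighted loss ...
    opt_learner.zero_grad()
    (w.detach() * per_example).mean().backward()
    opt_learner.step()

    # ... while the adversary maximizes it, steering weight toward
    # computationally-identifiable high-error regions without using protected attributes.
    opt_adversary.zero_grad()
    (-(w * per_example.detach()).mean()).backward()
    opt_adversary.step()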
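
The second sketch illustrates the two competing goals behind individually fair representations: reconstruct the data well while carrying little information about the protected attribute. The linear encoder/decoder, the 10-to-4 dimensionality, and the covariance-based leakage penalty are simplifications assumed here; the actual iFair objective is formulated differently.

import torch
import torch.nn as nn

encoder = nn.Linear(10, 4)    # low-dimensional latent representation z
decoder = nn.Linear(4, 10)    # reconstructs the (non-protected) input features

def representation_loss(x, protected, lam=1.0):
    # x: (batch, 10) non-protected features; protected: (batch,) float protected attribute
    z = encoder(x)
    recon = ((decoder(z) - x) ** 2).mean()        # goal 1: encode the data as well as possible
    # goal 2: remove protected information; a simple covariance penalty between each latent
    # dimension and the protected attribute stands in for that goal in this sketch
    zc = z - z.mean(dim=0)
    pc = protected - protected.mean()
    leak = (zc * pc.unsqueeze(1)).mean(dim=0).abs().sum()
    return recon + lam * leak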
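
The third sketch illustrates embedding data under a fairness graph: pairs of individuals that experts connect in the graph are pulled together in the learned representation, while the representation still preserves the data. The linear projection P, the dimensions, and the trade-off weight gamma are assumptions for illustration; the exact PFR optimization differs.

import torch

P = torch.randn(10, 4, requires_grad=True)   # learnable projection; optimize with any gradient method

def pfr_style_loss(X, pairs, gamma=0.5):
    # X: (n, 10) features; pairs: list of fairness-graph edges (i, j) given by expert knowledge
    Z = X @ P                                        # low-dimensional embedding
    recon = ((Z @ P.T - X) ** 2).mean()              # preserve the data as well as possible
    i, j = zip(*pairs)
    closeness = ((Z[list(i)] - Z[list(j)]) ** 2).sum(dim=1).mean()   # connected pairs stay close
    return (1 - gamma) * recon + gamma * closeness   # trade off utility against pairwise fairness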
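
The fourth sketch illustrates a model-agnostic meta-learner for failure risk: an ensemble is fit on the deployed model's past mistakes, and the entropy of its averaged prediction is split into an aleatoric part (average member entropy, reflecting inherent noise) and an epistemic part (disagreement across members, reflecting model uncertainty). The choice of gradient-boosted trees and the subsampling used to diversify the ensemble are assumptions made for this illustration, not the dissertation's exact construction.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def fit_risk_advisor(X, model_was_wrong, n_members=5):
    # X: (n, d) features; model_was_wrong: (n,) binary labels marking the base model's errors.
    # Subsampling with different seeds diversifies the ensemble of meta-learners.
    return [GradientBoostingClassifier(subsample=0.7, random_state=k).fit(X, model_was_wrong)
            for k in range(n_members)]

def uncertainty(members, X, eps=1e-12):
    probs = np.stack([m.predict_proba(X)[:, 1] for m in members])   # (members, n) error probabilities
    mean = probs.mean(axis=0)
    entropy = lambda p: -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    total = entropy(mean)                       # total predictive uncertainty
    aleatoric = entropy(probs).mean(axis=0)     # inherent, irreducible noise
    epistemic = total - aleatoric               # uncertainty attributable to the model / training data
    return mean, aleatoric, epistemic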

Extensive experiments on a variety of real-world and synthetic datasets show that our proposed methods are viable in practice. 

Contact

Larisa Ivanova
+49 681 9325 5002

Virtual Meeting Details

Meeting link and passcode are visible to logged-in users only.

Larisa Ivanova, 05/11/2022 16:11 -- Created document.