Campus Event Calendar

Event Entry

What and Who

Fairness for Sequential Decision Making Algorithms

Hoda Heidari
ETH Zurich
SWS Colloquium

Hoda Heidari is a Post-doctoral Fellow at the Machine Learning Institute at ETH
Zurich, working under the supervision of Prof. Andreas Krause. She received her PhD
in Computer and Information Science from the University of Pennsylvania where
she was advised by Prof. Michael Kearns and Prof. Ali Jadbabaie. Her research
interests are broadly in algorithmic economics and societal aspects of AI.
In particular, she is interested in fairness considerations in online markets and data-driven
decision making systems.
SWS, RG1, MMCI  
MPI Audience
English

Date, Time and Location

Monday, 12 November 2018
10:30
90 Minutes
E1 5
029
Saarbrücken

Abstract

Fairness considerations in settings where decisions are made by supervised learning algorithms (e.g., criminal risk assessment) have recently received considerable attention. As the fairness literature continues to expand, mostly around this canonical learning task, it is important to recognize that many real-world applications of ML fall outside the category of supervised, one-shot learning. In this presentation, I will talk about two scenarios in which algorithmic decisions are made sequentially and over time. I will argue that in such settings, being fair---at a minimum---requires decisions to be "consistent" across individuals who arrive at different time steps; that is, similar individuals must be treated similarly. I will then talk about how such consistency constraints affect learning.

In the first part of the talk, I will introduce a generic sequential decision making framework in which, at each time step, the learning algorithm receives data corresponding to a new individual (e.g., a new job application) and must make an irrevocable decision about him/her (e.g., whether to hire the applicant) based on the observations it has made so far. I propose a general framework for post-processing the predictions made by a black-box learning model so that the resulting sequence of outcomes is guaranteed to be consistent. I show, both theoretically and via simulations, that imposing consistency constraints will not significantly slow down learning.
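(The abstract does not spell out the post-processing procedure. As a rough illustration only, the Python sketch below shows one way a "similar individuals are treated similarly" consistency constraint could be imposed on black-box scores that arrive online. The Lipschitz-style constraint, the Euclidean distance, the [0, 1] score range, and all names are assumptions made here for illustration, not the algorithm presented in the talk.)

    import numpy as np

    class ConsistentPostProcessor:
        """Illustrative sketch: clip each new black-box score so that it stays
        within lipschitz * distance of every previously issued score, keeping
        decisions 'consistent' across individuals who arrive over time.
        (Hypothetical sketch; not the method from the talk.)"""

        def __init__(self, lipschitz=1.0):
            self.lipschitz = lipschitz
            self.history = []  # list of (features, final_score) pairs

        def decide(self, x, raw_score):
            # Feasible score interval implied by past decisions, assuming
            # scores live in [0, 1]:
            # |score(x) - score(x')| <= lipschitz * ||x - x'|| for all past x'.
            x = np.asarray(x, dtype=float)
            lo, hi = 0.0, 1.0
            for x_past, s_past in self.history:
                d = np.linalg.norm(x - x_past)
                lo = max(lo, s_past - self.lipschitz * d)
                hi = min(hi, s_past + self.lipschitz * d)
            score = float(np.clip(raw_score, lo, hi))  # irrevocable decision
            self.history.append((x, score))
            return score

By the triangle inequality, the feasible interval is never empty as long as all earlier scores were produced the same way, so such a post-processor can always return a decision that is consistent with the past.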

In the second part of the talk, I will focus on fairness considerations in a particular type of market---namely, combinatorial prediction markets---where traders can submit limit orders on various security bundles, and the market maker is tasked with executing these orders in a fair manner. The main challenge in running such a market is that executing one order can potentially change the price of every other order in the book. I define the notion of a "fair trading path", which at a high level guarantees that similar orders are executed similarly: no order executes at a price above its limit, and every order executes when its market price falls below its limit price. I present a market algorithm that respects these fairness conditions and evaluate it using real combinatorial predictions made during the 2008 U.S. Presidential election.
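(Again as a rough illustration only: the sketch below demonstrates the two fairness conditions on a toy two-outcome LMSR market maker with a single security, not the combinatorial-bundle setting of the talk. The liquidity parameter b, the sweep-based book processing, and all names are assumptions.)

    import math

    class ToyLMSRMarket:
        """Toy two-outcome LMSR market maker, used only to illustrate the
        fairness conditions on limit orders: never execute above the limit,
        and execute whenever the market price is at or below the limit."""

        def __init__(self, b=100.0):
            self.b = b            # liquidity parameter
            self.q = [0.0, 0.0]   # outstanding shares per outcome

        def price(self, outcome):
            e = [math.exp(qi / self.b) for qi in self.q]
            return e[outcome] / sum(e)

        def execute_buy(self, outcome, limit_price, quantity):
            # Fairness condition 1: no execution at a price above the limit.
            if self.price(outcome) >= limit_price:
                return 0.0
            # Shares that move the price exactly up to the limit (0 < limit < 1).
            other = 1 - outcome
            to_limit = (self.b * math.log(limit_price / (1.0 - limit_price))
                        + self.q[other] - self.q[outcome])
            filled = min(quantity, to_limit)
            self.q[outcome] += filled
            return filled

    def process_book(market, orders):
        # Fairness condition 2: keep sweeping the book so that any order whose
        # market price has fallen below its limit gets (partially) executed.
        progress = True
        while progress:
            progress = False
            for order in orders:
                if order["remaining"] > 1e-9:
                    filled = market.execute_buy(order["outcome"], order["limit"],
                                                order["remaining"])
                    if filled > 1e-9:
                        order["remaining"] -= filled
                        progress = True

In this toy version, two opposing orders (say, a buy on outcome 0 with limit 0.6 and a buy on outcome 1 with limit 0.5) are filled alternately along the resulting trading path until one is exhausted or neither price sits below its limit; the combinatorial case, where one execution reprices every bundle in the book, is what makes the problem in the talk hard.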

I will conclude by comparing my work with previous papers on fairness for online learning and outlining directions for future work.

Contact

Gretchen Gravelle
0681-93039102
--email hidden

Video Broadcast

Yes
Kaiserslautern
G26
112
Passcode not visible (logged-in users only)

Gretchen Gravelle, 10/29/2018 13:48 -- Created document.