Max-Planck-Institut für Informatik
What and Who
Title:Fairness for Sequential Decision Making Algorithms
Speaker:Hoda Heidari
coming from:ETH Zurich
Speaker's Bio:Hoda Heidari is a Post-doctoral Fellow at the Machine Learning Institute at ETH Zurich, working under the supervision of Prof. Andreas Krause. She received her PhD in Computer and Information Science from the University of Pennsylvania, where she was advised by Prof. Michael Kearns and Prof. Ali Jadbabaie. Her research interests are broadly in algorithmic economics and the societal aspects of AI. In particular, she is interested in fairness considerations in online markets and data-driven decision making systems.

Event Type:SWS Colloquium
Visibility:SWS, RG1, MMCI
Level:MPI Audience
Language:English
Date, Time and Location
Date:Monday, 12 November 2018
Time:10:30
Duration:90 Minutes
Location:Saarbrücken
Building:E1 5
Room:029
Abstract
Fairness considerations in settings where decisions are made by supervised learning algorithms (e.g. criminal risk assessment) have received considerable attention recently. As the fairness literature continues to expand, mostly around this canonical learning task, it is important to recognize that many real-world applications of ML fall outside the category of supervised, one-shot learning. In this presentation, I will talk about two scenarios in which algorithmic decisions are made sequentially and over time. I will argue that in such settings, being fair requires, at a minimum, that decisions be "consistent" across individuals who arrive at different time steps; that is, similar individuals must be treated similarly. I will then talk about how such consistency constraints affect learning.

In the first part of the talk, I will introduce a generic sequential decision making framework, in which at each time step the learning algorithm receives data corresponding to a new individual (e.g. a new job applicant) and must make an irrevocable decision about him/her (e.g. whether to hire the applicant) based on the observations it has made so far. I propose a general framework for post-processing the predictions made by a black-box learning model, so that the resulting sequence of outcomes is guaranteed to be consistent. I show, both theoretically and via simulations, that imposing consistency constraints will not significantly slow down learning.
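
The abstract does not spell out the actual post-processing rule, so the Python sketch below is only a hypothetical illustration of the general idea: each arriving individual's black-box score is projected into the range permitted by a Lipschitz-style consistency constraint against all previously decided individuals. The constant L, the [0, 1] score range, and the Euclidean metric are assumptions for illustration, not details taken from the talk.

import numpy as np

def consistent_decision(x_new, raw_score, history, L=1.0, threshold=0.5):
    """Post-process a black-box score so that the stored decision scores of
    similar individuals never differ by more than L times their distance,
    i.e. similar individuals arriving at different times are treated similarly.

    history: list of (features, adjusted_score) pairs for individuals already
             decided; it is updated in place so later arrivals see this decision.
    """
    lower, upper = 0.0, 1.0
    for x_past, s_past in history:
        d = np.linalg.norm(np.asarray(x_new) - np.asarray(x_past))
        lower = max(lower, s_past - L * d)
        upper = min(upper, s_past + L * d)
    # [lower, upper] is non-empty: every stored score already satisfies the
    # pairwise constraint, and the triangle inequality carries it over to x_new.
    adjusted = min(max(raw_score, lower), upper)
    history.append((x_new, adjusted))
    return adjusted >= threshold, adjusted

Thresholding the adjusted score then yields an irrevocable accept/reject decision whose consistency across time steps holds by construction.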

In the second part of the talk, I will focus on fairness considerations in a particular type of market, namely combinatorial prediction markets, where traders can submit limit orders on various security bundles and the market maker is tasked with executing these orders in a fair manner. The main challenge in running such a market is that executing one order can potentially change the price of every other order in the book. I define the notion of a "fair trading path", which at a high level guarantees that similar orders are executed similarly: no order executes at a price above its limit, and every order executes when its market price falls below its limit price. I present a market algorithm that respects these fairness conditions, and evaluate it using real combinatorial predictions made during the 2008 U.S. Presidential election.
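
The market algorithm from the talk is not reproduced here; as a rough sketch under assumed field names, the check below simply encodes the two stated conditions of a fair trading path over a recorded sequence of order events.

from dataclasses import dataclass
from typing import Iterable

@dataclass
class OrderEvent:
    order_id: str
    limit_price: float    # trader's limit price for the security bundle
    market_price: float   # bundle price quoted by the market maker at this step
    executed: bool        # whether the order was filled at this step

def is_fair_trading_path(events: Iterable[OrderEvent]) -> bool:
    """Check the two fairness conditions along a trading path:
    (1) no order executes at a price above its limit, and
    (2) every order executes when its market price falls below its limit."""
    for e in events:
        if e.executed and e.market_price > e.limit_price:
            return False  # violates (1): executed above the limit price
        if not e.executed and e.market_price < e.limit_price:
            return False  # violates (2): left unexecuted below the limit price
    return True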

I will conclude by comparing my work with previous papers on fairness for online learning and by outlining directions for future work.
Contact
Name(s):Gretchen Gravelle
Phone:0681-93039102
EMail:--email address not disclosed on the web
Video Broadcast
Video Broadcast:Yes
To Location:Kaiserslautern
To Building:G26
To Room:112
Tags, Category, Keywords and additional notes
Note:
Attachments, File(s):

Created:Gretchen Gravelle/MPI-SWS, 10/29/2018 01:37 PM
Last modified:Uwe Brahm/MPII/DE, 11/12/2018 07:01 AM