Max-Planck-Institut für Informatik
What and Who
Title:Accountability in the Governance of Machine Learning
Speaker:Joshua Kroll
coming from:School of Information, University of California, Berkeley
Speaker's Bio:Joshua A. Kroll is a computer scientist studying the relationship between governance, public policy, and computer systems. As a Postdoctoral Research Scholar at the School of Information at the University of California, Berkeley, his research focuses on how technology fits within a human-driven, normative context and how it satisfies goals driven by ideals such as fairness, accountability, transparency, and ethics. He is most interested in the governance of automated decision-making systems, especially those using machine learning. His paper "Accountable Algorithms" in the University of Pennsylvania Law Review received the Future of Privacy Forum's Privacy Papers for Policymakers Award in 2017.

Joshua's previous work spans accountable algorithms, cryptography, software security, formal methods, Bitcoin, and the technical aspects of cybersecurity policy. He also spent two years working on cryptography and internet security at the web performance and security company Cloudflare. Joshua holds a PhD in computer science from Princeton University, where he received the National Science Foundation Graduate Research Fellowship in 2011.

Event Type:SWS Colloquium
Visibility:SWS, RG1, MMCI
Level:MPI Audience
Language:English
Date, Time and Location
Date:Monday, 7 May 2018
Time:10:30
Duration:90 Minutes
Location:Saarbrücken
Building:E1 5
Room:029
Abstract
As software systems, especially those based on machine learning and data analysis, become ever more deeply ingrained in modern society and take on increasingly powerful roles in shaping people's lives, concerns have been raised about the fairness, equity, and other values embedded in these systems. Many definitions of "fairness" have been proposed, and the technical definitions capture a variety of desirable statistical invariants. However, such invariants may not address fairness for all stakeholders, may be in tension with each other or with other desirable properties, and may not be recognized by people as capturing the correct notion of fairness. In addition, fairness requirements are, in practice, often enacted by prohibiting a set of practices considered unfair rather than by fully modeling a particular definition of fairness.
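
For concreteness, here is a minimal Python sketch of two such statistical invariants, demographic parity and equalized odds, evaluated on synthetic data. The data and function names are illustrative only and are not drawn from the talk; the example simply shows how one invariant can hold while another is violated.

    # Illustrative sketch of two common statistical fairness invariants.
    # All data below is synthetic and for illustration only.

    def demographic_parity_gap(y_pred, group):
        """|P(pred=1 | group 0) - P(pred=1 | group 1)| for binary groups."""
        rate = lambda g: sum(p for p, s in zip(y_pred, group) if s == g) / group.count(g)
        return abs(rate(0) - rate(1))

    def equalized_odds_gap(y_true, y_pred, group):
        """Max gap in true-positive and false-positive rates across groups."""
        def rate(g, label):
            pairs = [(p, t) for p, t, s in zip(y_pred, y_true, group)
                     if s == g and t == label]
            return sum(p for p, _ in pairs) / len(pairs)
        tpr_gap = abs(rate(0, 1) - rate(1, 1))
        fpr_gap = abs(rate(0, 0) - rate(1, 0))
        return max(tpr_gap, fpr_gap)

    # Toy predictions over two groups with different base rates: this
    # classifier satisfies one invariant while violating the other.
    y_true = [1, 1, 0, 0, 1, 0, 0, 0]
    y_pred = [1, 1, 0, 0, 1, 1, 0, 0]
    group  = [0, 0, 0, 0, 1, 1, 1, 1]
    print(demographic_parity_gap(y_pred, group))      # 0.0  -> parity holds
    print(equalized_odds_gap(y_true, y_pred, group))  # 1/3  -> odds gap remains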

For these reasons, we attack the goal of producing fair systems from a different starting point. We argue that a focus on accountability and transparency in the design of a computer system is a stronger basis for reasoning about fairness. We outline a research agenda in responsible system design based on this approach, addressing both technical and non-technical open questions. Technology can help realize human values - including fairness - in computer systems, but only if it is supported by appropriate organizational best practices and a new approach to the system design life cycle.

As a first step toward realizing this agenda, we present a cryptographic protocol for accountable algorithms, which uses a combination of commitments and zero-knowledge proofs to construct audit logs for automated decision-making systems that are publicly verifiable for integrity. Such logs comprise an integral record of the behavior of a computer system, providing evidence for future interrogation, oversight, and review while also providing immediate public assurance of important procedural regularity properties, such as the guarantee that all decisions were made under the same policy. Further, the existence of such evidence provides a strong incentive for the system's designers to capture the right human values by making deviations from those values apparent and undeniable. Finally, we describe how such logs can be extended to demonstrate the existence of key fairness and transparency properties in machine-learning settings. For example, we consider how to demonstrate that a model was trained on particular data, that it operates without considering particular sensitive inputs, or that it satisfies particular fairness invariants of the type considered in the machine-learning fairness literature. This approach leads to a better, more complete, and more flexible outcome from the perspective of preventing unfairness.
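
For intuition, the following is a minimal Python sketch of the commitment layer that such a protocol builds on. The zero-knowledge proof components are omitted, and the policy, records, and function names are hypothetical; this is not the protocol presented in the talk, only the commit-and-verify pattern underlying publicly verifiable audit logs.

    # Minimal sketch of the commitment layer of an accountable-algorithms
    # audit log. The real protocol adds zero-knowledge proofs so the policy
    # itself need not be revealed; this sketch shows only hash commitments
    # and their verification. All policy and record contents are hypothetical.
    import hashlib, secrets

    def commit(data: bytes) -> tuple[bytes, bytes]:
        """Hash commitment: returns (commitment, opening nonce)."""
        nonce = secrets.token_bytes(32)
        return hashlib.sha256(nonce + data).digest(), nonce

    def verify(commitment: bytes, nonce: bytes, data: bytes) -> bool:
        return hashlib.sha256(nonce + data).digest() == commitment

    # The decision-maker commits once to its policy and publishes the commitment.
    policy = b"score = 2*income - 3*debt; approve if score > 0"
    policy_commitment, policy_nonce = commit(policy)

    # Each decision record binds the inputs and outcome to the committed policy.
    audit_log = []
    def decide(income: int, debt: int) -> bool:
        approved = 2 * income - 3 * debt > 0
        record = f"income={income},debt={debt},approved={approved}".encode()
        c, n = commit(record)
        audit_log.append((c, n, record, policy_commitment))
        return approved

    decide(50, 10)
    decide(20, 30)

    # An auditor checks that every record opens correctly and references the
    # same policy commitment, i.e. all decisions were made under one fixed
    # policy (the procedural-regularity property described above).
    assert all(verify(c, n, r) and pc == policy_commitment
               for c, n, r, pc in audit_log)
    print("audit log verified: all decisions under the committed policy")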
Contact
Name(s):Claudia Richter
Phone:9303 9103
EMail:(not disclosed on the web)
Video Broadcast
Video Broadcast:Yes
To Location:Kaiserslautern
To Building:G26
To Room:113
Meeting ID:
Tags, Category, Keywords and additional notes
Note:
Attachments, File(s):

Created:
Claudia Richter/MPI-SWS, 04/30/2018 10:57 AM
Last modified:
Uwe Brahm/MPII/DE, 05/07/2018 07:01 AM