Campus Event Calendar

Event Entry

What and Who

Accountability in the Governance of Machine Learning

Joshua Kroll
School of Information, University of California, Berkeley
SWS Colloquium

Joshua A. Kroll is a computer scientist studying the relationship between governance, public policy, and computer systems. As a Postdoctoral Research Scholar at the School of Information at the University of California, Berkeley, he investigates how technology fits within a human-driven, normative context and how it satisfies goals driven by ideals such as fairness, accountability, transparency, and ethics. He is most interested in the governance of automated decision-making systems, especially those using machine learning. His paper "Accountable Algorithms" in the University of Pennsylvania Law Review received the Future of Privacy Forum's Privacy Papers for Policymakers Award in 2017.

Joshua's previous work spans accountable algorithms, cryptography, software security, formal methods, Bitcoin, and the technical aspects of cybersecurity policy. He also spent two years working on cryptography and internet security at the web performance and security company Cloudflare. Joshua holds a PhD in computer science from Princeton University, where he received the National Science Foundation Graduate Research Fellowship in 2011.
SWS, RG1, MMCI  
MPI Audience
English

Date, Time and Location

Monday, 7 May 2018
10:30
90 Minutes
Building E1 5, Room 029
Saarbrücken

Abstract

As software systems, especially those based on machine learning and data analysis, become ever more deeply ingrained in modern society and take on increasingly powerful roles in shaping people's lives, concerns have been raised about the fairness, equity, and other embedded values of these systems. Many definitions of "fairness" have been proposed, and the technical definitions capture a variety of desirable statistical invariants. However, such invariants may not address fairness for all stakeholders, may be in tension with each other or with other desirable properties, and may not be recognized by people as capturing the correct notion of fairness. In addition, requirements that serve fairness are, in practice, often enacted by prohibiting a set of practices considered unfair rather than by fully modeling a particular definition of fairness.
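
As an aside (not part of the talk's abstract), one widely cited statistical invariant of this kind is demographic parity: positive decisions should occur at the same rate in every group. A minimal Python sketch, assuming binary decisions and two hypothetical groups "a" and "b"; the function name and data are illustrative, not from the talk:

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between groups "a" and "b".

    decisions: 0/1 model outputs; groups: group labels aligned with decisions.
    """
    def rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / len(members) if members else 0.0
    return abs(rate("a") - rate("b"))

# Hypothetical data: group "a" is approved half the time, group "b" always,
# so the invariant is violated by a gap of 0.5.
print(demographic_parity_gap([1, 0, 1, 1], ["a", "a", "b", "b"]))  # 0.5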

For these reasons, we attack the goal of producing fair systems from a different starting point. We argue that a focus on accountability and transparency in the design of a computer system is a stronger basis for reasoning about fairness. We outline a research agenda in responsible system design based on this approach, addressing both technical and non-technical open questions. Technology can help realize human values, including fairness, in computer systems, but only if it is supported by appropriate organizational best practices and a new approach to the system design life cycle.

As a first step toward realizing this agenda, we present a cryptographic protocol for accountable algorithms, which uses a combination of commitments and zero-knowledge proofs to construct audit logs for automated decision-making systems that are publicly verifiable for integrity. Such logs comprise an integral record of the behavior of a computer system, supplying evidence for future interrogation, oversight, and review while also providing immediate public assurance of important procedural regularity properties, such as the guarantee that all decisions were made under the same policy. Further, the existence of such evidence gives the system's designers a strong incentive to capture the right human values, because deviations from those values become apparent and undeniable. Finally, we describe how such logs can be extended to demonstrate key fairness and transparency properties in machine-learning settings: for example, that a model was trained on particular data, that it operates without considering particular sensitive inputs, or that it satisfies particular fairness invariants of the type considered in the machine-learning fairness literature. This approach leads to a better, more complete, and more flexible outcome from the perspective of preventing unfairness.
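
To make the commitment component concrete, here is a minimal Python sketch, assuming a hash-based commitment over a decision policy encoded as bytes; the full protocol described in the talk also uses zero-knowledge proofs, which are omitted here, and all names below are illustrative:

import hashlib
import os

def commit(policy_bytes):
    """Bind to a policy without revealing it; publish the digest, keep the nonce."""
    nonce = os.urandom(32)  # blinding randomness hides the policy until opening
    return hashlib.sha256(nonce + policy_bytes).hexdigest(), nonce

def verify(digest, nonce, policy_bytes):
    """Anyone can check an opened commitment against the published digest."""
    return hashlib.sha256(nonce + policy_bytes).hexdigest() == digest

# Publishing the digest before any decisions are made, then opening it later,
# supports the procedural-regularity claim that every decision used this policy.
digest, nonce = commit(b"approve if score >= 0.7")
assert verify(digest, nonce, b"approve if score >= 0.7")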

Contact

Claudia Richter
9303 9103

Video Broadcast

Yes
Kaiserslautern
Building G26, Room 113
