Campus Event Calendar

Event Entry

What and Who

Understanding and Controlling Leakage in Machine Learning

Tribhuvanesh Orekondy
Max-Planck-Institut für Informatik - D2
AG 1, AG 3, AG 4, RG1, MMCI, AG 2, INET, AG 5, SWS  
Public Audience

Date, Time and Location

Friday, 18 December 2020
60 Minutes
Virtual talk


Machine learning models are being increasingly adopted in a variety of real-world scenarios. However, the privacy and confidentiality implications for users in such scenarios are not well understood. In this talk, I will present our work on better understanding and controlling leakage of information, in three parts. The first part of the talk is motivated by the fact that individuals disseminate a massive number of personal photos, many of which unintentionally reveal a wide range of private information. To address this, I will present our work on visual privacy, which attempts to identify privacy leakage in visual data. The second part of my talk analyzes privacy leakage during training of ML models, such as in federated learning, where individuals contribute to the training of a model by sharing parameter updates. The third part of my talk focuses on leakage of information at inference time, presenting our work on model functionality stealing threats, where an adversary exploits black-box access to a victim ML model to inexpensively create a replica model. In each part, I also discuss our work on mitigating leakage of sensitive information, to enable widespread adoption of ML techniques in real-world scenarios.


Connie Balzert
+49 681 9325 2000

Tags, Category, Keywords and additional notes

Join Zoom Meeting

Meeting ID: 912 1129 4790
Passcode: 678558

Connie Balzert, 12/07/2020 11:38 -- Created document.