Machine learning (ML) is undergoing rapid development and deployment in an ever-growing list of industries. Every stage of the modern ML pipeline, from crowd-sourced data collection to online prediction interfaces, is accompanied by a plethora of security and privacy challenges.
Florian will give an overview of these challenges and illustrate some of his recent work exploring attacks on, and defenses for, deployed ML models:
1) How to abuse the rich prediction interfaces of ML models deployed in the cloud to reverse-engineer model parameters or infer properties of the training data.
2) How to efficiently protect the privacy and integrity of machine learning computations with trusted hardware.
3) What can be done to protect against adversarial examples in realistic threat models (usually not much!), and what this means for recent proposals on "perceptual" ad-blocking.
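To give a flavor of point 1, here is a minimal, hypothetical sketch of a model-extraction attack against a cloud prediction API. It assumes the simplest possible setting: the deployed model is a logistic regression that returns exact class-1 confidence scores. An attacker who can query it on chosen inputs can invert the sigmoid to recover logits and then solve a linear system for the hidden weights. The `query` function below stands in for the remote API; real attacks must contend with rounded confidences and more complex model families.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5  # number of input features

# Secret model parameters held by the (hypothetical) cloud service.
w_true = rng.normal(size=d)
b_true = 0.3

def query(x):
    # Stand-in for the cloud prediction API: returns the class-1
    # confidence of a logistic regression, sigma(w.x + b).
    return 1.0 / (1.0 + np.exp(-(x @ w_true + b_true)))

# Attack: query d+1 chosen points, invert the sigmoid to get logits
# (sigma^{-1}(p) = log(p / (1 - p))), then solve for (w, b) exactly.
X = rng.normal(size=(d + 1, d))
p = query(X)
logits = np.log(p / (1 - p))
A = np.hstack([X, np.ones((d + 1, 1))])  # last column recovers the bias
params = np.linalg.solve(A, logits)
w_rec, b_rec = params[:-1], params[-1]
```

With exact confidences, `w_rec` and `b_rec` match the secret parameters up to floating-point error, which is why production APIs that expose high-precision scores are attractive extraction targets.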
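For point 3, the canonical way to construct an adversarial example is to perturb an input in the direction that most increases the model's loss. The toy sketch below (my illustration, not from the talk) applies the sign of the gradient to flip the decision of a linear classifier, where the gradient of the score with respect to the input is just the weight vector `w`.

```python
import numpy as np

# Toy linear classifier: predicts class 1 iff w.x > 0.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])

def predict(x):
    return int(x @ w > 0)

# Fast-gradient-sign-style perturbation: the gradient of the score
# w.r.t. x is w, so step each coordinate by eps in the direction
# that pushes the score toward the opposite class.
eps = 0.5
if predict(x) == 1:
    x_adv = x - eps * np.sign(w)
else:
    x_adv = x + eps * np.sign(w)
```

Here `x` is classified as 0 but `x_adv` is classified as 1, even though no coordinate moved by more than `eps`. For deep networks the same idea uses the backpropagated loss gradient, and defending against such small-norm perturbations in realistic threat models remains largely open.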