Max-Planck-Institut für Informatik


What and Who
Title: Characterizing the Space of Adversarial Examples in Machine Learning
Speaker: Nicolas Papernot
Coming from: Pennsylvania State University
Speaker's Bio: Nicolas Papernot is a PhD student in Computer Science and Engineering working with Professor Patrick McDaniel at the Pennsylvania State University. His research interests lie at the intersection of computer security, privacy, and machine learning. He is supported by a Google PhD Fellowship in Security and received a best paper award at ICLR 2017. He is also a co-author of CleverHans, an open-source library widely adopted in the technical community to benchmark machine learning in adversarial settings. In 2016, he received his M.S. in Computer Science and Engineering from the Pennsylvania State University and his M.S. in Engineering Sciences from the École Centrale de Lyon.

Event Type: SWS Colloquium
Visibility: SWS, RG1, MMCI
Level: AG Audience
Date, Time and Location
Date: Thursday, 22 March 2018
Duration: 90 Minutes
Building: E1 5
Abstract: There is growing recognition that machine learning (ML) exposes new security and privacy vulnerabilities in software systems, yet the technical community's understanding of the nature and extent of these vulnerabilities remains limited. In this talk, I explore the threat model space of ML algorithms and systematically examine the vulnerabilities that result from the poor generalization of ML models when they are presented with inputs manipulated by adversaries. This characterization of the threat space prompts an investigation of defenses that exploit the lack of reliable confidence estimates in the predictions made. In particular, we introduce a promising new approach to defensive measures tailored to the structure of deep learning. Through this research, we expose connections between the resilience of ML to adversaries, model interpretability, and training data privacy.
Name(s): Claudia Richter
Phone: 9303 9103
EMail: --email address not disclosed on the web
Video Broadcast
Video Broadcast: Yes
To Location: Kaiserslautern
To Building: G26
To Room: 111
Created: Claudia Richter/MPI-SWS, 03/08/2018 12:19 PM
Last modified: Uwe Brahm/MPII/DE, 03/12/2018 07:01 AM
  • Claudia Richter, 03/08/2018 12:23 PM -- Created document.