Max-Planck-Institut für Informatik
What and Who
Title: Adversarial Machine Learning: Are We Playing the Wrong Game?
Speaker: David Evans
coming from: University of Virginia
Speaker's Bio: David Evans (https://www.cs.virginia.edu/evans/) is a Professor of
Computer Science at the University of Virginia where he leads the
Security Research Group. He is the author of an open computer science
textbook and a children's book on combinatorics and computability. He is
Program Co-Chair for ACM Conference on Computer and Communications
Security (CCS) 2017, and previously was Program Co-Chair for the 31st
(2009) and 32nd (2010) IEEE Symposia on Security and Privacy (where he
initiated the SoK papers). He has SB, SM and PhD degrees in Computer
Science from MIT and has been a faculty member at the University of
Virginia since 1999.
Event Type: CISPA Distinguished Lecture Series
Visibility: D1, D2, D3, D4, D5, SWS, RG1, MMCI
Level: Public Audience
Language: English
Date, Time and Location
Date: Monday, 10 July 2017
Time: 11:00
Duration: 60 Minutes
Location: Saarbrücken
Building: E9 1
Room: Lecture Hall
Abstract
Machine learning classifiers are increasingly popular for security applications, and often achieve outstanding performance in testing. When deployed, however, classifiers can be thwarted by motivated adversaries who adaptively construct adversarial examples that exploit flaws in the classifier's model. Much work on adversarial examples has focused on finding small distortions to inputs that fool a classifier, and previous defenses have been both ineffective and very expensive in practice.

In this talk, I'll describe a new and very simple strategy, feature squeezing, that can be used to harden classifiers by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different inputs in the original space into a single sample. Adversarial examples can then be detected by comparing the model's predictions on the original and squeezed samples.

In practice, of course, adversaries are not limited to small distortions in a particular metric space. Indeed, in security applications like malware detection it may be possible to make large changes to an input without disrupting its intended malicious behavior. I'll report on an evolutionary framework we have developed to search for such adversarial examples, which can automatically find evasive variants against state-of-the-art classifiers. This suggests that work on adversarial machine learning needs a better definition of adversarial examples, and that we need to make progress towards understanding how classifiers and oracles perceive samples differently.
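
As a rough illustration of the feature-squeezing idea summarized in the abstract, here is a minimal sketch in Python. It is not the implementation from the talk: the bit-depth squeezer, the L1 threshold, and the toy predict function are all placeholder assumptions chosen only to make the example self-contained.

```python
import numpy as np

def reduce_bit_depth(x, bits=4):
    # Squeeze feature values in [0, 1] down to 2**bits discrete levels,
    # coalescing many nearby inputs into a single squeezed sample.
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def looks_adversarial(predict, x, threshold=1.0, bits=4):
    # Compare the model's prediction on the original input with its
    # prediction on the squeezed input; a large L1 gap suggests the
    # input was perturbed to exploit the model.
    p_original = predict(x)
    p_squeezed = predict(reduce_bit_depth(x, bits))
    return float(np.abs(p_original - p_squeezed).sum()) > threshold

if __name__ == "__main__":
    # Toy "model": softmax over a fixed random linear layer (placeholder only).
    rng = np.random.default_rng(0)
    W = rng.normal(size=(784, 10))

    def predict(x):
        logits = x.reshape(-1) @ W
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    x = rng.random(784)  # stand-in for a normalized input image
    print(looks_adversarial(predict, x))
```

The design choice mirrors the abstract: a legitimate input and its squeezed version should yield similar predictions, so a noticeable prediction gap is treated as evidence of adversarial perturbation.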
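The evolutionary search for evasive variants mentioned above can likewise be sketched as a generic genetic loop. This is only an illustration under assumed interfaces, not the framework from the talk: mutate, classifier_score, and preserves_behavior are hypothetical callables standing in for domain-specific mutation operators, the target classifier, and a behavioral oracle (e.g., a sandbox check that a malware variant still runs as intended).

```python
import random

def evolve_evasive_variant(seed, mutate, classifier_score, preserves_behavior,
                           pop_size=20, generations=50, benign_threshold=0.5):
    # Generic evolutionary search: repeatedly mutate surviving variants,
    # discard any whose intended behavior no longer holds (checked by the
    # oracle), and select for the lowest classifier "maliciousness" score.
    population = [seed]
    for _ in range(generations):
        candidates = [mutate(random.choice(population)) for _ in range(pop_size)]
        candidates = [c for c in candidates if preserves_behavior(c)]
        if not candidates:
            continue  # this generation produced nothing viable; try again
        candidates.sort(key=classifier_score)
        population = candidates[: max(1, pop_size // 2)]
        if classifier_score(population[0]) < benign_threshold:
            return population[0]  # classified benign but still behaves as intended
    return None  # no evasive variant found within the search budget
```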
Contact
Name(s): Sabine Nermerich
Phone: 302-71911
EMail: --email address not disclosed on the web
Video Broadcast
Video Broadcast: No
To Location:
Tags, Category, Keywords and additional notes
Note:
Attachments, File(s):
Created:
Sabine Nermerich/AG4/MPII/DE, 07/03/2017 03:03 PM
Last modified:
Uwe Brahm/MPII/DE, 07/10/2017 07:01 AM
  • Sabine Nermerich, 07/03/2017 03:05 PM -- Created document.