Campus Event Calendar

Event Entry

What and Who

Adversarial Machine Learning: Are We Playing the Wrong Game?

David Evans
University of Virginia
CISPA Distinguished Lecture Series

David Evans (https://www.cs.virginia.edu/evans/) is a Professor of
Computer Science at the University of Virginia where he leads the
Security Research Group. He is the author of an open computer science
textbook and a children's book on combinatorics and computability. He is
Program Co-Chair for the ACM Conference on Computer and Communications
Security (CCS) 2017, and previously was Program Co-Chair for the 31st
(2009) and 32nd (2010) IEEE Symposia on Security and Privacy (where he
initiated the SoK papers). He has SB, SM and PhD degrees in Computer
Science from MIT and has been a faculty member at the University of
Virginia since 1999.
AG 1, AG 2, AG 3, AG 4, AG 5, SWS, RG1, MMCI  
Public Audience
English

Date, Time and Location

Monday, 10 July 2017
11:00
60 Minutes
E9 1
Lecture Hall
Saarbrücken

Abstract

Machine learning classifiers are increasingly popular for security
applications, and often achieve outstanding performance in testing. When
deployed, however, classifiers can be thwarted by motivated adversaries
who adaptively construct adversarial examples that exploit flaws in the
classifier's model. Much work on adversarial examples has focused on
finding small distortions to inputs that fool a classifier. Previous
defenses have been both ineffective and very expensive in practice. In
this talk, I'll describe a new, very simple strategy, feature squeezing,
that can be used to harden classifiers by detecting adversarial
examples. Feature squeezing reduces the search space available to an
adversary by coalescing samples that correspond to many different inputs
in the original space into a single sample. Adversarial examples can be
detected by comparing the model's predictions on the original and
squeezed sample. In practice, of course, adversaries are not limited to
small distortions in a particular metric space. Indeed, in security
applications like malware detection it may be possible to make large
changes to an input without disrupting its intended malicious behavior.
I'll report on an evolutionary framework we have developed to search for
such adversarial examples that can automatically find evasive variants
against state-of-the-art classifiers. This suggests that work on
adversarial machine learning needs a better definition of adversarial
examples, and must make progress towards understanding how classifiers
and oracles perceive samples differently.
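
As a rough illustration of the detection idea sketched in the abstract,
the following minimal Python sketch compares a model's predictions on an
original and a squeezed input. The squeezer shown (bit-depth reduction),
the predict_fn interface, and the threshold value are assumptions for
illustration only, not the implementation presented in the talk.

    import numpy as np

    def bit_depth_squeeze(x, bits=4):
        # One possible squeezer: round each feature (assumed in [0, 1])
        # to 2**bits levels, so many nearby inputs collapse into the
        # same squeezed sample.
        levels = 2 ** bits - 1
        return np.round(x * levels) / levels

    def looks_adversarial(predict_fn, x, threshold=1.0):
        # Compare the model's predictions on the original and squeezed
        # input; a large disagreement (L1 distance between the two
        # probability vectors) suggests the input may be adversarial.
        p_original = predict_fn(x)
        p_squeezed = predict_fn(bit_depth_squeeze(x))
        return np.sum(np.abs(p_original - p_squeezed)) > threshold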
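
The evolutionary search for evasive variants can likewise be pictured as
the loop below. The functions mutate, oracle_is_malicious, and
classifier_score are hypothetical placeholders, and the 0.5 evasion
threshold is arbitrary; the framework described in the talk works on
concrete malware samples and differs in its details.

    import random

    def evolutionary_evasion(seed, mutate, classifier_score,
                             oracle_is_malicious,
                             population_size=50, generations=100):
        # Repeatedly mutate a malicious seed, keep only variants that the
        # oracle still judges malicious, and prefer those the classifier
        # scores as most benign.
        population = [seed]
        for _ in range(generations):
            candidates = [mutate(random.choice(population))
                          for _ in range(population_size)]
            # Discard variants that lost the intended malicious behavior.
            candidates = [c for c in candidates if oracle_is_malicious(c)]
            if not candidates:
                continue
            # Lower classifier score is assumed to mean "more benign".
            candidates.sort(key=classifier_score)
            population = candidates[:population_size]
            # Stop once a variant evades the classifier (placeholder 0.5).
            if classifier_score(population[0]) < 0.5:
                return population[0]
        return None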

Contact

Sabine Nermerich
302-71911

Sabine Nermerich, 07/03/2017 15:05 -- Created document.