Campus Event Calendar

Event Entry

What and Who

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation

Dana Drachsler Cohen
ETH Zurich
SWS Colloquium

Dana Drachsler Cohen is a postdoc in the Secure, Reliable, and Intelligent Systems Lab in the Computer Science Department at ETH Zurich. Her research interests span program synthesis, machine learning, security, and computer networks.
AG 1, AG 2, AG 3, INET, AG 4, AG 5, SWS, RG1, MMCI  
AG Audience
English

Date, Time and Location

Monday, 15 October 2018
10:30 (60 minutes)
Building G26, Room 111
Kaiserslautern

Abstract

In this talk, I will present AI2, a sound and scalable analyzer for deep neural networks. Based on overapproximation, AI2 can automatically prove safety properties (e.g., robustness) of realistic neural networks (e.g., convolutional neural networks). The key insight behind AI2 is to phrase reasoning about the safety and robustness of neural networks in terms of classic abstract interpretation, making it possible to leverage decades of advances in that area. To this end, I will introduce abstract transformers that capture the behavior of fully connected and convolutional layers with rectified linear unit (ReLU) activations, as well as max pooling layers. This makes it possible to handle real-world neural networks, which are often built out of these types of layers. I will also empirically demonstrate that (i) AI2 is precise enough to prove useful specifications (e.g., robustness), (ii) AI2 can be used to certify the effectiveness of state-of-the-art defenses for neural networks, and (iii) AI2 is significantly faster than existing analyzers based on symbolic analysis, which often take hours to verify even simple fully connected networks.
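To make the layer-by-layer idea concrete: the analyzer pushes an overapproximation of a whole set of inputs through the network using one abstract transformer per layer, then checks the specification against the resulting output bounds. The hypothetical Python sketch below illustrates this with the simplest sound domain, elementwise interval bounds; AI2 itself builds on more precise zonotope-style domains, and every function name and the toy network here are invented purely for illustration.

import numpy as np

def affine_bounds(lo, hi, W, b):
    # Sound interval bounds for x -> W @ x + b given lo <= x <= hi.
    # Positive weights take the input's lower/upper bound directly;
    # negative weights swap them (standard interval arithmetic).
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    # Abstract transformer for ReLU: ReLU is monotone, so applying it
    # to both interval endpoints stays sound.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def forward(layers, x):
    # Concrete forward pass (ReLU on all but the output layer).
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def certify(layers, x, eps, label):
    # Attempt to prove that every point within an L-infinity ball of
    # radius eps around x is classified as `label`. True is a proof;
    # False means "could not conclude", not "unsafe".
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = relu_bounds(lo, hi)
    # Robust if the worst-case score of `label` still beats the
    # best-case score of every other class.
    return all(lo[label] > hi[j] for j in range(len(lo)) if j != label)

# Toy two-layer network, invented for demonstration only.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]
x = np.array([0.5, -0.2, 0.1])
label = int(np.argmax(forward(layers, x)))
print(certify(layers, x, eps=0.01, label=label))

Because the domain overapproximates, a True result is a genuine proof of local robustness, while False only means the analysis could not conclude; more precise abstract domains, as in AI2, reduce such inconclusive answers.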

Contact

Susanne Girard

Video Broadcast

Yes
Saarbrücken, Building E1 5, Room 029

Susanne Girard, 10/10/2018 15:50 -- Created document.