In the SafeAI project at the SRI Lab, ETH Zurich, we explore new methods and systems that make Artificial Intelligence (AI) systems, such as deep neural networks, more robust, safe, and interpretable. Our work sits at the intersection of machine learning, optimization, and symbolic reasoning. For example, we recently introduced new approaches and systems that certify and train deep neural networks with symbolic methods, illustrated in the figure below.

[Figure: AI2 overview]
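To make the certification idea concrete, below is a minimal sketch using the simplest abstract domain, intervals (boxes). It is illustrative only and is not the AI2 or ERAN implementation; all function names and shapes are assumptions. The L∞ ball of radius eps around an input is propagated through the network as a box, and robustness is certified if the lower bound of the true class's logit exceeds the upper bounds of all other logits.

    # Illustrative sketch of interval (box) certification; not the AI2/ERAN code.
    import torch

    def interval_linear(lb, ub, weight, bias):
        # Sound interval bounds for the affine map x -> x @ weight.T + bias.
        center, radius = (lb + ub) / 2, (ub - lb) / 2
        new_center = center @ weight.t() + bias
        new_radius = radius @ weight.abs().t()
        return new_center - new_radius, new_center + new_radius

    def certify_box(layers, x, eps, true_label):
        # Returns True if every input in the L-infinity ball of radius eps
        # around x is classified as true_label (a sufficient, not necessary, check).
        lb, ub = x - eps, x + eps
        for weight, bias in layers[:-1]:
            lb, ub = interval_linear(lb, ub, weight, bias)
            lb, ub = torch.relu(lb), torch.relu(ub)  # ReLU is monotone
        lb, ub = interval_linear(lb, ub, *layers[-1])
        # Certified if the true logit's lower bound beats every other upper bound.
        worst_other = max(float(ub[j]) for j in range(ub.numel()) if j != true_label)
        return float(lb[true_label]) > worst_other

    # Example usage with a random two-layer network (purely illustrative):
    layers = [(torch.randn(8, 4), torch.randn(8)),
              (torch.randn(3, 8), torch.randn(3))]
    print(certify_box(layers, torch.randn(4), eps=0.01, true_label=0))

More precise abstract domains (e.g., zonotopes or the domain of the POPL'19 paper) follow the same propagate-and-check pattern but lose less precision per layer.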

Downloads

The code for DiffAI (ICML'18) is available at
https://github.com/eth-sri/diffai.
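The idea behind DiffAI is to make such abstract bounds part of the training objective. The sketch below is illustrative only (it is not the DiffAI code or API) and reuses the hypothetical interval_linear helper from the certification sketch above: it forms worst-case logits from the interval bounds and, because the bounds are differentiable functions of the network weights, the resulting loss can be minimized with ordinary gradient descent.

    # Illustrative sketch of a differentiable worst-case (robust) loss;
    # not the DiffAI implementation. Assumes interval_linear from above.
    import torch
    import torch.nn.functional as F

    def robust_loss(layers, x, y, eps):
        # Propagate the eps-box around x through the network.
        lb, ub = x - eps, x + eps
        for weight, bias in layers[:-1]:
            lb, ub = interval_linear(lb, ub, weight, bias)
            lb, ub = torch.relu(lb), torch.relu(ub)
        lb, ub = interval_linear(lb, ub, *layers[-1])
        # Worst-case logits: lower bound for the true class, upper bound for the rest.
        worst = ub.clone()
        worst[y] = lb[y]
        # All operations above are differentiable, so this term can be added to
        # the standard training loss and optimized with gradient descent.
        return F.cross_entropy(worst.unsqueeze(0), torch.tensor([y]))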

The code for ERAN (ETH Robustness Analyzer for Neural Networks; NIPS'18, POPL'19) is available at
https://github.com/eth-sri/eran.

Publications

An Abstract Domain for Certifying Neural Networks, POPL 2019
Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev

Fast and Effective Robustness Certification, NIPS 2018
Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev

Differentiable Abstract Interpretation for Provably Robust Neural Networks, ICML 2018
Matthew Mirman, Timon Gehr, Martin Vechev

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, IEEE S&P 2018
Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev

Talks

Safe Deep Learning: progress and open problems,
ETH Workshop on Dependable and Secure Software Systems 2018

AI2: AI Safety and Robustness with Abstract Interpretation,
Machine Learning meets Formal Methods, FLOC 2018