In the SafeAI project at the SRI Lab, ETH Zurich, we explore new methods and systems that make Artificial Intelligence (AI) systems, such as deep neural networks, more robust, safe, and interpretable. Our work sits at the intersection of machine learning, optimization, and symbolic reasoning. For example, among other results, we recently introduced new approaches and systems that can certify and train deep neural networks with symbolic methods, as illustrated in the figure below.

[Figure: AI2 overview]


The code for DiffAI (ICML'18) is available at

The code for ERAN (ETH Robustness Analyzer for Neural Networks; NIPS'18, POPL'19) is available at

The code for DL2 (Deep Learning with Differentiable Logic; ICML'19) is available at
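To give a flavor of the certification idea behind these systems, here is a minimal sketch of interval ("Box") bound propagation, the simplest abstract domain used in neural network certifiers. This is an illustrative simplification, not the implementation used in DiffAI or ERAN (which rely on more precise domains such as zonotopes and DeepPoly); the function names and the pure-Python formulation are our own.

```python
# Interval (Box) bound propagation through an affine layer and a ReLU.
# Given per-input lower/upper bounds, we compute sound bounds on every
# output neuron; if a safety property holds on these bounds, it holds
# for every concrete input in the box. (Illustrative sketch only.)

def affine_bounds(lo, hi, W, b):
    """Propagate interval bounds through y = W x + b.

    A positive weight maps the input's lower bound to the output's lower
    bound; a negative weight swaps them.
    """
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        new_lo.append(l)
        new_hi.append(u)
    return new_lo, new_hi

def relu_bounds(lo, hi):
    """Propagate interval bounds through ReLU(x) = max(0, x)."""
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]

# Example: inputs x0, x1 each perturbed within [0, 1], one affine neuron
# y = x0 - x1 followed by ReLU.
lo, hi = affine_bounds([0.0, 0.0], [1.0, 1.0], [[1.0, -1.0]], [0.0])
rlo, rhi = relu_bounds(lo, hi)
```

Because interval analysis treats each input independently, the bounds can be loose (here `y` is bounded by [-1, 1] even though tighter relational reasoning is possible); much of the work listed below is about recovering such lost precision while keeping certification fast.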


k-ReLU: Beyond Neuron-Level Convex Relaxations for Certification, NeurIPS 2019
Gagandeep Singh, Rupanshu Ganvir, Markus Püschel, Martin Vechev

Certifying Geometric Robustness of Neural Networks, NeurIPS 2019
Mislav Balunovic, Maximilian Baader, Gagandeep Singh, Timon Gehr, Martin Vechev

DL2: Training and Querying Neural Networks with Logic, ICML 2019
Marc Fischer, Mislav Balunovic, Dana Drachsler-Cohen, Timon Gehr, Ce Zhang, Martin Vechev

Boosting Robustness Certification of Neural Networks, ICLR 2019
Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev

An Abstract Domain for Certifying Neural Networks, POPL 2019
Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev

Fast and Effective Robustness Certification, NIPS 2018
Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev

Differentiable Abstract Interpretation for Provably Robust Neural Networks, ICML 2018
Matthew Mirman, Timon Gehr, Martin Vechev

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, IEEE S&P 2018
Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev


Safe and Robust Deep Learning,
Waterloo ML + Security + Verification Workshop 2019

Safe and Robust Deep Learning,
University of Edinburgh, Robust Artificial Intelligence for Neurorobotics 2019

Safe Deep Learning: progress and open problems,
ETH Workshop on Dependable and Secure Software Systems 2018

AI2: AI Safety and Robustness with Abstract Interpretation,
Machine Learning meets Formal Methods, FLOC 2018