In the SafeAI project at the SRI Lab, ETH Zurich, we explore new methods and systems that make Artificial Intelligence (AI) systems such as deep neural networks more robust, safe, and interpretable. Our work sits at the intersection of machine learning, optimization, and symbolic reasoning. For example, we recently introduced new approaches and systems that certify and train deep neural networks with symbolic methods, as illustrated in the figure below.

[Figure: AI2 overview]

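As a rough illustration of the certification idea sketched above (and not the actual AI2/ERAN implementation), the Python snippet below propagates an L-infinity ball around an input through a tiny ReLU network using the interval (box) abstract domain and checks whether the predicted class can provably not change. The toy network, input point, and epsilon are made up for the example.

```python
import numpy as np

def affine_bounds(lb, ub, W, b):
    # Sound interval propagation through the affine map x -> W @ x + b.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lb + Wn @ ub + b, Wp @ ub + Wn @ lb + b

def certify_linf(x, eps, layers, target):
    # Certify that every input in the L-infinity ball of radius eps
    # around x is classified as `target` by the ReLU network `layers`.
    lb, ub = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lb, ub = affine_bounds(lb, ub, W, b)
        if i < len(layers) - 1:                      # ReLU on hidden layers
            lb, ub = np.maximum(lb, 0.0), np.maximum(ub, 0.0)
    # Robust iff the target logit's lower bound beats every other
    # logit's upper bound.
    return all(lb[target] > ub[j] for j in range(len(lb)) if j != target)

# Toy two-layer network with random weights, purely for illustration.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 2)), np.zeros(4)),
          (rng.normal(size=(3, 4)), np.zeros(3))]
x = np.array([0.5, -0.2])

# Concrete prediction for x, then try to certify a small L-infinity ball.
h = np.maximum(layers[0][0] @ x + layers[0][1], 0.0)
pred = int(np.argmax(layers[1][0] @ h + layers[1][1]))
print(certify_linf(x, eps=0.01, layers=layers, target=pred))
```

Tighter abstract domains such as zonotopes (NeurIPS'18) and DeepPoly (POPL'19) follow the same propagate-then-check pattern while computing much tighter bounds.
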
Downloads

The code for DiffAI (ICML'18) is available at
https://github.com/eth-sri/diffai.
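
The idea behind DiffAI (differentiable abstract interpretation) is to make the abstract interpretation itself differentiable, so the network can be trained against the worst case implied by the abstraction. The PyTorch sketch below is a simplified, hypothetical rendering of that idea with the interval domain; it does not use the DiffAI API, and the architecture, epsilon, and mixing weight kappa are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoxLinearReLUNet(nn.Module):
    def __init__(self, dims=(2, 16, 3)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(d_in, d_out) for d_in, d_out in zip(dims, dims[1:]))

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i < len(self.layers) - 1:
                x = F.relu(x)
        return x

    def forward_box(self, lb, ub):
        # Interval propagation written with ordinary tensor ops, so every
        # step stays on the autograd tape and the bounds are differentiable.
        for i, layer in enumerate(self.layers):
            W, b = layer.weight, layer.bias
            Wp, Wn = W.clamp(min=0), W.clamp(max=0)
            lb, ub = (lb @ Wp.t() + ub @ Wn.t() + b,
                      ub @ Wp.t() + lb @ Wn.t() + b)
            if i < len(self.layers) - 1:
                lb, ub = F.relu(lb), F.relu(ub)
        return lb, ub

def robust_loss(model, x, y, eps, kappa=0.5):
    # Mix standard cross-entropy with cross-entropy on "worst-case" logits:
    # lower bound for the true class, upper bound for all other classes.
    clean = F.cross_entropy(model(x), y)
    lb, ub = model.forward_box(x - eps, x + eps)
    worst = torch.where(F.one_hot(y, ub.shape[1]).bool(), lb, ub)
    return kappa * clean + (1 - kappa) * F.cross_entropy(worst, y)

# One illustrative optimization step on random data.
model = BoxLinearReLUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 2), torch.randint(0, 3, (8,))
loss = robust_loss(model, x, y, eps=0.05)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

Because the bounds are computed with ordinary tensor operations, the worst-case loss is differentiable and can be minimized with standard optimizers.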

The code for ERAN (ETH Robustness Analyzer for Neural Networks; NeurIPS'18, POPL'19) is available at
https://github.com/eth-sri/eran.

The code for DL2 (Deep Learning with Differentiable Logic; ICML'19) is available at
https://github.com/eth-sri/dl2.
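
DL2 translates logical constraints over network outputs into differentiable loss terms that can be minimized alongside the usual training objective. The sketch below shows a heavily simplified version of such a translation; it is not the library's actual API, and the constraint and translation rules are illustrative only.

```python
import torch

def loss_leq(a, b):
    # Constraint a <= b  ->  penalty max(a - b, 0).
    return torch.clamp(a - b, min=0)

def loss_and(*losses):
    # Conjunction: the combined loss is zero only if every conjunct's
    # loss is zero, i.e. every conjunct is satisfied.
    return sum(losses)

# Example constraint on a batch of logits z: "the probability of class 0
# stays below 0.3 AND the probability of class 1 stays below 0.3".
z = torch.randn(8, 3, requires_grad=True)
p = torch.softmax(z, dim=1)
constraint_loss = loss_and(loss_leq(p[:, 0], torch.tensor(0.3)),
                           loss_leq(p[:, 1], torch.tensor(0.3))).mean()
constraint_loss.backward()     # gradients can steer training toward
print(float(constraint_loss))  # satisfying the logical constraint
```

Because each constraint maps to a non-negative loss that is zero exactly when the constraint holds, gradient descent can push the network toward satisfying it during training.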

Publications

Certify or Predict: Boosting Certified Robustness with Compositional Architectures, ICLR 2021
Mark Niklas Müller, Mislav Balunovic, Martin Vechev

Scaling Polyhedral Neural Network Verification on GPUs, MLSys 2021
Christoph Müller, Francois Serre, Gagandeep Singh, Markus Püschel, Martin Vechev

Efficient Certification of Spatial Robustness, AAAI 2021
Anian Ruoss, Maximilian Baader, Mislav Balunovic, Martin Vechev

Learning Certified Individually Fair Representations, NeurIPS 2020
Anian Ruoss, Mislav Balunovic, Marc Fischer, Martin Vechev

Certified Defense to Image Transformations via Randomized Smoothing, NeurIPS 2020
Marc Fischer, Maximilian Baader, Martin Vechev

Adversarial Attacks on Probabilistic Autoregressive Forecasting Models, ICML 2020
Raphaël Dang-Nhu, Gagandeep Singh, Pavol Bielik, Martin Vechev

Adversarial Robustness for Code, ICML 2020
Pavol Bielik, Martin Vechev

Adversarial Training and Provable Defenses: Bridging the Gap, ICLR 2020
Oral presentation
Mislav Balunovic, Martin Vechev

Scalable Inference of Symbolic Adversarial Examples, arXiv 2020
Dimitar I. Dimitrov, Gagandeep Singh, Martin Vechev

Universal Approximation with Certified Networks, ICLR 2020
Maximilian Baader, Matthew Mirman, Martin Vechev

Robustness Certification of Generative Models, arXiv 2020
Matthew Mirman, Timon Gehr, Martin Vechev

Beyond the Single Neuron Convex Barrier for Neural Network Certification, NeurIPS 2019
Gagandeep Singh, Rupanshu Ganvir, Markus Püschel, Martin Vechev

Certifying Geometric Robustness of Neural Networks, NeurIPS 2019
Mislav Balunovic, Maximilian Baader, Gagandeep Singh, Timon Gehr, Martin Vechev

Online Robustness Training for Deep Reinforcement Learning, arXiv 2019
Marc Fischer, Matthew Mirman, Steven Stalder, Martin Vechev

DL2: Training and Querying Neural Networks with Logic, ICML 2019
Marc Fischer, Mislav Balunovic, Dana Drachsler-Cohen, Timon Gehr, Ce Zhang, Martin Vechev

Boosting Robustness Certification of Neural Networks, ICLR 2019
Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev

An Abstract Domain for Certifying Neural Networks, POPL 2019
Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev

Fast and Effective Robustness Certification, NeurIPS 2018
Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev

Differentiable Abstract Interpretation for Provably Robust Neural Networks, ICML 2018
Matthew Mirman, Timon Gehr, Martin Vechev

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, IEEE S&P 2018
Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev

Talks

Safe and Robust Deep Learning,
Waterloo ML + Security + Verification Workshop 2019

Safe and Robust Deep Learning,
University of Edinburgh, Robust Artificial Intelligence for Neurorobotics 2019

Safe Deep Learning: progress and open problems,
ETH Workshop on Dependable and Secure Software Systems 2018

AI2: AI Safety and Robustness with Abstract Interpretation,
Machine Learning meets Formal Methods, FLOC 2018