In the SafeAI project at the SRI Lab, ETH Zurich, we develop new methods and systems that make artificial intelligence (AI) systems such as deep neural networks more robust, safe, and interpretable. Our work sits at the intersection of machine learning, optimization, and symbolic reasoning. Among other results, we recently introduced new approaches and systems that certify and train deep neural networks using symbolic methods. Following the success of this work, our ETH spin-off LatticeFlow is building the world's first Trustworthy AI Platform, which enables organizations to develop and deploy robust AI models.



The mission of our ETH spin-off LatticeFlow is to enable organizations to deliver safe, robust, and reliable AI systems. To this end, LatticeFlow collaborates with leading enterprises such as Swiss Federal Railways (SBB) and Siemens, and with government agencies such as the US Army and Germany's Federal Office for Information Security (BSI). You can read more about our vision and product on TechCrunch and ETH News.


The code for DiffAI (ICML'18) is available at

The code for ERAN (ETH Robustness Analyzer for Neural Networks; NeurIPS'18, POPL'19) is available at

The code for DL2 (Deep Learning with Differentiable Logic; ICML'19) is available at
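The exact algorithms live in the repositories above. As a rough, self-contained sketch of the interval (Box) abstract domain that the simplest analyses in DiffAI and ERAN build on — function names here are illustrative, not the tools' actual APIs — bounds on the input region are pushed layer by layer through the network, and the output bounds decide robustness:

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    # Split weights by sign so each output bound uses the right input bound.
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps bounds to bounds directly.
    return np.maximum(lo, 0), np.maximum(hi, 0)

def certify_box(x, eps, layers, label):
    # layers: list of (W, b) pairs, with ReLU between consecutive layers.
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_linear(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = interval_relu(lo, hi)
    # Robust: the correct label's lower bound beats every other upper bound.
    return all(lo[label] > hi[j] for j in range(len(lo)) if j != label)
```

If `certify_box` returns `True`, every input in the L∞ ball of radius `eps` around `x` is classified as `label`; a `False` result is inconclusive, since the interval domain over-approximates. DiffAI's key idea is that this propagation is differentiable, so the worst-case bounds can be used directly as a training loss.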
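DL2 compiles logical constraints over network outputs into differentiable loss terms. A minimal sketch of that translation on plain floats (the real system operates on tensors so gradients flow through the loss; the names below are illustrative, not DL2's API):

```python
# DL2-style translation: a formula's loss is zero exactly when it is satisfied.

def leq(a, b):
    # Constraint "a <= b": penalize the amount of violation.
    return max(a - b, 0.0)

def land(*losses):
    # Conjunction: every constraint must hold, so violations add up.
    return sum(losses)

def lor(*losses):
    # Disjunction: the product vanishes as soon as one disjunct holds.
    p = 1.0
    for l in losses:
        p *= l
    return p

# Example formula: "y0 >= y1 + 1, or both outputs are non-positive".
y0, y1 = 2.0, 1.5
loss = lor(leq(y1 + 1.0, y0),
           land(leq(y0, 0.0), leq(y1, 0.0)))
# loss > 0 here: neither disjunct is satisfied
```

Training then minimizes the usual task loss plus this constraint loss, steering the network toward outputs that satisfy the formula.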


Robustness Certification for Point Cloud Models, ICCV 2021
Tobias Lorenz, Anian Ruoss, Mislav Balunovic, Gagandeep Singh, Martin Vechev

PRIMA: Precise and General Neural Network Certification via Multi-Neuron Convex Relaxations, arXiv 2021
Mark Niklas Müller, Gleb Makarchuk, Gagandeep Singh, Markus Püschel, Martin Vechev

Certify or Predict: Boosting Certified Robustness with Compositional Architectures, ICLR 2021
Mark Niklas Müller, Mislav Balunovic, Martin Vechev

Scaling Polyhedral Neural Network Verification on GPUs, MLSys 2021
Christoph Müller, Francois Serre, Gagandeep Singh, Markus Püschel, Martin Vechev

Efficient Certification of Spatial Robustness, AAAI 2021
Anian Ruoss, Maximilian Baader, Mislav Balunovic, Martin Vechev

Learning Certified Individually Fair Representations, NeurIPS 2020
Anian Ruoss, Mislav Balunovic, Marc Fischer, Martin Vechev

Certified Defense to Image Transformations via Randomized Smoothing, NeurIPS 2020
Marc Fischer, Maximilian Baader, Martin Vechev

Adversarial Attacks on Probabilistic Autoregressive Forecasting Models, ICML 2020
Raphaël Dang-Nhu, Gagandeep Singh, Pavol Bielik, Martin Vechev

Adversarial Robustness for Code, ICML 2020
Pavol Bielik, Martin Vechev

Adversarial Training and Provable Defenses: Bridging the Gap, ICLR 2020
Oral presentation
Mislav Balunovic, Martin Vechev

Scalable Inference of Symbolic Adversarial Examples, arXiv 2020
Dimitar I. Dimitrov, Gagandeep Singh, Martin Vechev

Universal Approximation with Certified Networks, ICLR 2020
Maximilian Baader, Matthew Mirman, Martin Vechev

Robustness Certification of Generative Models, arXiv 2020
Matthew Mirman, Timon Gehr, Martin Vechev

Beyond the Single Neuron Convex Barrier for Neural Network Certification, NeurIPS 2019
Gagandeep Singh, Rupanshu Ganvir, Markus Püschel, Martin Vechev

Certifying Geometric Robustness of Neural Networks, NeurIPS 2019
Mislav Balunovic, Maximilian Baader, Gagandeep Singh, Timon Gehr, Martin Vechev

Online Robustness Training for Deep Reinforcement Learning, arXiv 2019
Marc Fischer, Matthew Mirman, Steven Stalder, Martin Vechev

DL2: Training and Querying Neural Networks with Logic, ICML 2019
Marc Fischer, Mislav Balunovic, Dana Drachsler-Cohen, Timon Gehr, Ce Zhang, Martin Vechev

Boosting Robustness Certification of Neural Networks, ICLR 2019
Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev

An Abstract Domain for Certifying Neural Networks, POPL 2019
Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev

Fast and Effective Robustness Certification, NeurIPS 2018
Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev

Differentiable Abstract Interpretation for Provably Robust Neural Networks, ICML 2018
Matthew Mirman, Timon Gehr, Martin Vechev

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, IEEE S&P 2018
Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev


Safe and Robust Deep Learning,
Waterloo ML + Security + Verification Workshop 2019

Safe and Robust Deep Learning,
University of Edinburgh, Robust Artificial Intelligence for Neurorobotics 2019

Safe Deep Learning: progress and open problems,
ETH Workshop on Dependable and Secure Software Systems 2018

AI2: AI Safety and Robustness with Abstract Interpretation,
Machine Learning meets Formal Methods, FLOC 2018