In the AI2 project at the SRI Lab, ETH Zurich, we explore new methods and systems for reasoning about the safety of AI systems, including deep neural networks. Concretely, we have introduced new approaches and tools based on abstract interpretation for certifying and training deep neural networks. The figure below shows the high-level flow:

Figure: AI2 overview
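To illustrate the core idea, here is a minimal sketch of abstract interpretation with the interval (box) domain: an input region is propagated through the layers of a small network so that the resulting output bounds soundly over-approximate all concrete outputs. The network weights and the input region below are hypothetical example values, not taken from the project; the actual AI2 system also supports more precise domains such as zonotopes.

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    """Propagate a box [lo, hi] through an affine layer y = W @ x + b.

    Splitting W into its positive and negative parts gives the tightest
    interval bounds for an affine map.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it can be applied to both bounds directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Tiny two-layer network with hypothetical fixed weights.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

# L-infinity ball of radius 0.1 around the input point (0.5, 0.5),
# e.g. modeling an adversarial perturbation region.
x = np.array([0.5, 0.5])
eps = 0.1
lo, hi = x - eps, x + eps

lo, hi = interval_relu(*interval_linear(lo, hi, W1, b1))
lo, hi = interval_linear(lo, hi, W2, b2)
print(lo, hi)  # sound lower/upper bounds on the network output
```

If the property of interest (e.g. a fixed classification) holds for the entire output box, it is certified to hold for every input in the region; because the bounds are differentiable in the weights, the same computation can also be used as a training signal, which is the idea behind DiffAI.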


The code for DiffAI (ICML'18) is available at


Fast and Effective Robustness Certification, NIPS 2018
Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev

Differentiable Abstract Interpretation for Provably Robust Neural Networks, ICML 2018
Matthew Mirman, Timon Gehr, Martin Vechev

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, IEEE S&P 2018
Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev


AI2: AI Safety and Robustness with Abstract Interpretation, Machine Learning meets Formal Methods, FLOC 2018