Neural Network Robustness from a Probabilistic Perspective
Explore the intriguing behavior of neural networks, the discovery of adversarial examples, the challenges of training robust networks, and the shift toward non-adversarial notions of robustness. Learn about local and global robustness specifications and a probabilistic view of keeping neural networks resilient to small input changes.
Presentation Transcript
Robustness of Neural Networks: A Probabilistic and Practical Perspective. Ravi Mangal, Aditya V. Nori, Alessandro Orso
The Discovery of Adversarial Examples Szegedy et al. (2013) and Goodfellow et al. (2014) observed a curious phenomenon: imperceptibly small perturbations to an input can cause a trained network to change its prediction.
What the heck is going on? Why do neural networks exhibit such behavior? How can we train neural networks to be robust? How can we check if a trained neural network is robust?
How to specify robustness? Small changes in input ⟹ small changes in output. Local Robustness: ∀x. ‖x − x₀‖ ≤ ε ⟹ f(x₀) = f(x). Global Robustness: ∀x, x′. ‖x − x′‖ ≤ ε ⟹ f(x) = f(x′)
How to specify robustness? Small changes in input ⟹ small changes in output. Local Robustness: ∀x. ‖x − x₀‖ ≤ ε ⟹ f(x₀) = f(x). Global Robustness: ∀x, x′. ‖x − x′‖ ≤ ε ⟹ f(x) ≈ f(x′)
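The local robustness specification above is universally quantified, so sampling can refute it but never prove it. Still, an empirical check is a useful first step. Below is a minimal sketch (not from the presentation) that tests a classifier `f` at a point `x₀` by sampling perturbations from an L∞ ball; the toy threshold classifier at the end is hypothetical.

```python
import numpy as np

def locally_robust(f, x0, eps, n_samples=1000, seed=0):
    """Empirically test local robustness of classifier `f` at `x0`:
    sample inputs x with ||x - x0||_inf <= eps and check the predicted
    label never changes. Sampling can only find counterexamples; it
    cannot prove the universally quantified property."""
    rng = np.random.default_rng(seed)
    y0 = f(x0)
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x0.shape)  # point in L_inf ball
        if f(x0 + delta) != y0:
            return False  # counterexample: label flipped nearby
    return True  # no violation found among the samples

# Toy 1-D threshold "classifier" (hypothetical):
f = lambda x: int(x.sum() > 0.5)
print(locally_robust(f, np.array([0.9]), eps=0.1))   # deep inside a class region
print(locally_robust(f, np.array([0.51]), eps=0.1))  # near the decision boundary
```

At `x₀ = 0.9` every perturbed input stays on the same side of the threshold, so the check passes; at `x₀ = 0.51` the ball straddles the decision boundary and a label flip is found.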
A Perspective Shift: From Adversarial to Non-Adversarial. Local and global robustness are motivated by security considerations: inputs are generated by malicious entities, and no adversarial examples are tolerated. Our thesis: the non-adversarial setting is useful and understudied. Inputs are generated by natural, non-malicious entities, so a network that only rarely exhibits non-robust behavior can be tolerated.
A Probabilistic View*: Sample x, x′ s.t. ‖x − x′‖ ≤ ε; with high probability, f(x) ≈ f(x′). *Michael A. Nielsen, "Neural Networks and Deep Learning", Determination Press, 2015
Non-Adversarial, Probabilistic Robustness: With high probability, small input changes imply small output changes: Pr_{x,x′∼D}[ ‖x − x′‖ ≤ ε ⟹ f(x) ≈ f(x′) ] ≥ 1 − δ
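Because the probabilistic robustness property quantifies over pairs drawn from the input distribution, it can be estimated by Monte Carlo sampling. A minimal sketch, assuming a pair sampler for the (hypothetical) distribution D and a toy threshold classifier; neither is from the presentation:

```python
import numpy as np

def estimate_prob_robustness(f, sample_pair, eps, n=10000, seed=0):
    """Monte Carlo estimate of Pr[ f(x) = f(x') ] over pairs (x, x')
    with ||x - x'|| <= eps. `sample_pair(rng, eps)` is assumed to draw
    such a pair from the input distribution D."""
    rng = np.random.default_rng(seed)
    agree = 0
    for _ in range(n):
        x, x2 = sample_pair(rng, eps)
        agree += (f(x) == f(x2))
    return agree / n

# Hypothetical distribution: x uniform on [0, 1], x' a nearby point.
def sample_pair(rng, eps):
    x = rng.uniform(0.0, 1.0, size=1)
    return x, x + rng.uniform(-eps, eps, size=1)

f = lambda x: int(x.sum() > 0.5)
est = estimate_prob_robustness(f, sample_pair, eps=0.05)
print(round(est, 3))  # close to 0.975 for this toy threshold classifier
```

For this toy example the label can only change when the pair straddles the 0.5 threshold, which happens with probability ε/2 = 0.025, so the estimate concentrates near 0.975.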
Sketch of an Algorithm Combine statistical and symbolic techniques Statistical Sample inputs Symbolic Perform backwards abstract interpretation to find input regions to sample from
Extra: Sketch of an Algorithm (diagram): Backwards Abstract Interpreter produces regions to sample from; a Sampler draws inputs from those regions; an Estimate Updater maintains the running robustness estimate.
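The pipeline on this slide can be sketched end to end. This is a heavily simplified illustration, not the authors' algorithm: the "backwards abstract interpreter" below is a stub that returns a fixed interval region rather than propagating output constraints backwards through a network, and the classifier is a hypothetical toy.

```python
import numpy as np

def backwards_abstract_interpreter(f):
    """Stub: return interval regions (lo, hi) of the input space worth
    sampling from. A real implementation would propagate output
    constraints backwards through the network's layers."""
    return [(0.0, 1.0)]

def run_pipeline(f, eps, rounds=2000, seed=0):
    """Combine the slide's three components: region inference (symbolic),
    a sampler (statistical), and a running estimate updater."""
    rng = np.random.default_rng(seed)
    regions = backwards_abstract_interpreter(f)
    agree, total = 0, 0
    for _ in range(rounds):
        lo, hi = regions[rng.integers(len(regions))]
        x = rng.uniform(lo, hi)            # Sampler: draw from a region
        x2 = x + rng.uniform(-eps, eps)    # nearby input within eps
        agree += (f(np.array([x])) == f(np.array([x2])))  # Estimate Updater
        total += 1
    return agree / total  # estimated probability of robust behavior

f = lambda x: int(x.sum() > 0.5)  # hypothetical toy classifier
print(run_pipeline(f, eps=0.05))
```

Restricting sampling to regions proposed by the symbolic step is what distinguishes this from plain Monte Carlo: effort is spent where non-robust behavior is plausible rather than uniformly over the whole input space.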