Deflecting Adversarial Attacks with Pixel Deflection
Aaditya Prakash, Nick Moran, Solomon Garber, Antonella DiLillo, James Storer
2018-01-26 — arXiv
Submitted 3 months ago by iamaaditya to machine_learning
Abstract
CNNs are poised to become integral parts of many critical systems. Despite their robustness to natural variations, image pixel values can be manipulated, via small, carefully crafted, imperceptible perturbations, to cause a model to misclassify images. We present an algorithm to process an image so that classification accuracy is significantly preserved in the presence of such adversarial manipulations. Image classifiers tend to be robust to natural noise, and adversarial attacks tend to be agnostic to object location. These observations motivate our strategy, which leverages model robustness to defend against adversarial perturbations by forcing the image to match natural image statistics. Our algorithm locally corrupts the image by redistributing pixel values via a process we term pixel deflection. A subsequent wavelet-based denoising operation softens this corruption, as well as some of the adversarial changes. We demonstrate experimentally that the combination of these techniques enables the effective recovery of the true class, against a variety of robust attacks. Our results compare favorably with current state-of-the-art defenses, without requiring retraining or modifying the CNN.
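The local corruption step the abstract describes can be sketched as follows. This is a minimal illustration of pixel deflection, not the authors' implementation (see the linked repository for that): a number of randomly chosen pixels are each replaced by the value of a randomly chosen neighbor within a small window. The paper additionally uses class activation maps to bias which pixels get deflected, and follows this step with BayesShrink wavelet denoising; both are omitted here. The parameter names `deflections` and `window` are illustrative choices.

```python
import numpy as np

def pixel_deflection(img, deflections=200, window=10, rng=None):
    """Return a copy of img (H x W x C array) with `deflections` pixels
    each overwritten by a random neighbor within a `window`-pixel box."""
    rng = np.random.default_rng(rng)
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(deflections):
        # Pick a random target pixel (uniformly; the paper weights this
        # choice using class activation maps).
        r = int(rng.integers(0, h))
        c = int(rng.integers(0, w))
        # Pick a random neighbor within the window, clipped to the image.
        r2 = int(np.clip(r + rng.integers(-window, window + 1), 0, h - 1))
        c2 = int(np.clip(c + rng.integers(-window, window + 1), 0, w - 1))
        # Deflect: overwrite the target with the neighbor's value.
        out[r, c] = img[r2, c2]
    return out
```

Because each deflected pixel takes the value of a real nearby pixel, the corruption stays close to local image statistics, which is what lets the subsequent wavelet denoising smooth it out along with much of the adversarial perturbation.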

Comments


iamaaditya Brandeis University - 3 months ago

Code: https://github.com/iamaaditya/pixel-deflection

Blog (Jupyter notebook): https://iamaaditya.github.io/2018/02/demo-for-pixel-deflection/

dec Northwestern University - 3 months ago

Note that markdown is allowed in comments!