DeepCloak: a tool for automatically defending DNNs against adversarial examples

DeepCloak: Masking Deep Neural Network Models for Robustness against Adversarial Samples

GitHub: DeepCloak

Paper: ICLR 2017 Workshop

Poster

Abstract

Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial samples: maliciously perturbed inputs crafted to yield incorrect model outputs. Such attacks can severely undermine DNN systems, particularly in security-sensitive settings. It has been observed that an adversary can easily generate adversarial samples by applying small perturbations to irrelevant feature dimensions that are unnecessary for the current classification task. To overcome this problem, we introduce a defensive mechanism called DeepCloak. By identifying and removing unnecessary features in a DNN model, DeepCloak limits the capacity an attacker can use to generate adversarial samples and therefore increases the model's robustness against such inputs. Compared with other defensive approaches, DeepCloak is easy to implement and computationally efficient. Experimental results show that DeepCloak improves the performance of state-of-the-art DNN models against adversarial samples.
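To illustrate the idea of removing attack-sensitive features, here is a minimal NumPy sketch (not the paper's implementation): it ranks features by how much their activations shift between clean inputs and their adversarial counterparts, then builds a mask that zeroes out the most sensitive fraction. The function name, the fixed masking fraction, and the toy data are illustrative assumptions.

```python
import numpy as np

def deepcloak_mask(clean_acts, adv_acts, frac=0.1):
    """Hypothetical sketch of DeepCloak-style masking.

    clean_acts, adv_acts: (n_samples, n_features) activations of some
    intermediate layer for clean inputs and their adversarial versions.
    Returns a 0/1 mask that removes the `frac` most attack-sensitive
    features (those whose activations shift most under perturbation).
    """
    # Average absolute activation shift per feature.
    diff = np.abs(clean_acts - adv_acts).mean(axis=0)
    k = int(len(diff) * frac)
    mask = np.ones_like(diff)
    # Zero out the k features with the largest shift.
    mask[np.argsort(diff)[-k:]] = 0.0
    return mask

# Toy example: 4 samples, 10 features; feature 3 is heavily perturbed.
rng = np.random.default_rng(0)
clean = rng.normal(size=(4, 10))
adv = clean.copy()
adv[:, 3] += 5.0
mask = deepcloak_mask(clean, adv, frac=0.1)
# mask[3] is 0: the most attack-sensitive feature is removed.
```

In the full pipeline, such a mask would be inserted as an extra layer before the classification layer, so the masked features can no longer be exploited when crafting adversarial samples.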


Citations

@article{gao2017deepcloak,
  title={DeepCloak: Masking Deep Neural Network Models for Robustness against Adversarial Samples},
  author={Gao, Ji and Wang, Beilun and Qi, Yanjun},
  journal={arXiv preprint arXiv:1702.06763},
  year={2017}
}

Support or Contact

Having trouble with our tools? Please contact Ji Gao and we’ll help you sort it out.
