1-Evasion

This category of tools aims to automatically assess the robustness of a classifier against adversarial inputs. It currently includes two tools:


Blackbox Generation of Adversarial Text Sequences

Title: Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers


Paper: arXiv

Published at the 1st Deep Learning and Security Workshop, co-located with the 39th IEEE Symposium on Security and Privacy.

GitHub: [Coming]

Talk slides: URL

Abstract

Although various techniques have been proposed to generate adversarial samples for white-box attacks on text, little attention has been paid to a black-box attack, which is a more realistic scenario. In this paper, we present a novel algorithm, DeepWordBug, to effectively generate small text perturbations in a black-box setting that force a deep-learning classifier to misclassify a text input. We develop novel scoring strategies to find the most important words to modify such that the deep classifier makes a wrong prediction. Simple character-level transformations are applied to the highest-ranked words in order to minimize the edit distance of the perturbation. We evaluated DeepWordBug on two real-world text datasets: Enron spam emails and IMDB movie reviews. Our experimental results indicate that DeepWordBug can reduce the classification accuracy from 99% to around 40% on Enron data and from 87% to about 26% on IMDB. Also, our experimental results strongly demonstrate that the generated adversarial sequences from a deep-learning model can similarly evade other deep models.
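The sketch below illustrates this black-box attack pattern in Python; it is not the authors' implementation. It assumes a black-box classify(text) function that returns a mapping from labels to probabilities, and the leave-one-out scorer, the edit budget, and the helper names are illustrative assumptions (the paper proposes several scoring functions and transformations).

```python
import random
import string

def importance_scores(words, classify, true_label):
    """Leave-one-out scoring: a word matters if removing it lowers the
    classifier's confidence in the true label. (Illustrative assumption;
    the paper defines several black-box scoring strategies.)"""
    base = classify(" ".join(words))[true_label]
    scores = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        scores.append(base - classify(" ".join(reduced))[true_label])
    return scores

def perturb(word):
    """Apply one character-level edit (swap, substitute, delete, or insert),
    keeping the perturbation within a small edit distance of the original."""
    if len(word) < 3:
        return word
    i = random.randrange(1, len(word) - 1)        # leave first/last character intact
    c = random.choice(string.ascii_lowercase)
    op = random.choice(["swap", "sub", "del", "ins"])
    if op == "swap":
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    if op == "sub":
        return word[:i] + c + word[i + 1:]
    if op == "del":
        return word[:i] + word[i + 1:]
    return word[:i] + c + word[i:]                # insert

def black_box_text_attack(text, classify, true_label, budget=5):
    """Perturb the `budget` highest-scoring words and return the adversarial text."""
    words = text.split()
    scores = importance_scores(words, classify, true_label)
    ranked = sorted(range(len(words)), key=scores.__getitem__, reverse=True)
    for i in ranked[:budget]:
        words[i] = perturb(words[i])
    return " ".join(words)
```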

Citations

@article{gao2018black,
  title={Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers},
  author={Gao, Ji and Lanchantin, Jack and Soffa, Mary Lou and Qi, Yanjun},
  journal={arXiv preprint arXiv:1801.04354},
  year={2018}
}

Support or Contact

Having trouble with our tools? Please contact Ji Gao and we’ll help you sort it out.


A Tool for Automatically Evading Classifiers for PDF Malware Detection

Paper: Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers, NDSS 2016

More information is available at EvadeML.org.

The tool uses evolutionary techniques to simulate an adversary's efforts to evade the target classifier.

GitHub: EvadePDFClassifiers

Presentation

Abstract

Machine learning is widely used to develop classifiers for security tasks. However, the robustness of these methods against motivated adversaries is uncertain. In this work, we propose a generic method to evaluate the robustness of classifiers under attack. The key idea is to stochastically manipulate a malicious sample to find a variant that preserves the malicious behavior but is classified as benign by the classifier. We present a general approach to search for evasive variants and report on results from experiments using our techniques against two PDF malware classifiers, PDFrate and Hidost. Our method is able to automatically find evasive variants for both classifiers for all of the 500 malicious seeds in our study. Our results suggest a general method for evaluating classifiers used in security applications, and raise serious doubts about the effectiveness of classifiers based on superficial features in the presence of adversaries.
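The following is a minimal sketch of this kind of stochastic, oracle-guided search. The actual tool applies genetic programming to PDF object trees; here the mutate, behaves_maliciously, and malice_score interfaces, as well as the population and threshold parameters, are assumptions made for illustration only.

```python
import random

def evolutionary_evasion(seed, mutate, behaves_maliciously, malice_score,
                         population_size=40, generations=100, benign_threshold=0.5):
    """Stochastic search for an evasive variant of `seed`.

    Assumed (hypothetical) interfaces:
      mutate(sample)              -> randomly edited copy (e.g. insert/delete/replace a PDF object)
      behaves_maliciously(sample) -> True if an oracle (e.g. a sandbox) confirms that
                                     the malicious behavior is preserved
      malice_score(sample)        -> the target classifier's maliciousness score in [0, 1]
    """
    population = [seed]
    for _ in range(generations):
        # Generate candidate variants by mutating current survivors.
        candidates = [mutate(random.choice(population)) for _ in range(population_size)]
        # Discard variants that no longer behave maliciously; if none survive,
        # fall back to the previous population.
        viable = [v for v in candidates if behaves_maliciously(v)] or population
        # Prefer variants the classifier scores as most benign.
        viable.sort(key=malice_score)
        if malice_score(viable[0]) < benign_threshold:
            return viable[0]                          # evasive variant found
        population = viable[:max(1, population_size // 4)]
    return None                                       # search budget exhausted
```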


Citations

@inproceedings{xu2016automatically,
  title={Automatically evading classifiers},
  author={Xu, Weilin and Qi, Yanjun and Evans, David},
  booktitle={Proceedings of the 2016 Network and Distributed Systems Symposium},
  year={2016}
}

Support or Contact

Having trouble with our tools? Please contact Weilin and we’ll help you sort it out.