Combinatorial Masking-Based Adversarial Text Generation

Title: MCTSBug: Generating Adversarial Text Sequences via Monte Carlo Tree Search and Homoglyph Attack

Paper: arXiv (online)

GitHub: Coming soon

Preliminary Abstract

Crafting adversarial examples on discrete inputs like text sequences is fundamentally different from generating such examples for continuous inputs like images. This paper tries to answer the question: under a black-box setting, can we automatically create adversarial examples that effectively fool deep learning text classifiers with imperceptible changes? Our answer is a firm yes. Previous efforts mostly relied on gradient evidence, and they are less effective either because automatically finding the nearest-neighbor word (with respect to meaning) is difficult or because they rely heavily on hand-crafted linguistic rules. We instead use Monte Carlo tree search (MCTS) to find the few most important words to perturb, and we perform a homoglyph attack by replacing one character in each selected word with a symbol of identical shape. Our novel algorithm, which we call MCTSBug, is both black-box and extremely effective. Our experimental results indicate that MCTSBug can fool deep learning classifiers at a success rate of 95% on seven large-scale benchmark datasets, by perturbing only a few characters. Surprisingly, MCTSBug, without relying on gradient information at all, is more effective than the gradient-based white-box baseline. Thanks to the nature of the homoglyph attack, the generated adversarial perturbations are almost imperceptible to the human eye.
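To make the homoglyph substitution step concrete, below is a minimal Python sketch. It is not the paper's implementation: the `HOMOGLYPHS` map is a small assumed subset of look-alike Unicode characters, and the word indices passed to `perturb_sentence` are placeholders for the selection that MCTSBug would obtain from Monte Carlo tree search.

```python
# Minimal sketch of the homoglyph perturbation step (illustrative only;
# this character map is a small assumed subset, not the paper's full table).
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic small a
    "c": "\u0441",  # Cyrillic small es
    "e": "\u0435",  # Cyrillic small ie
    "i": "\u0456",  # Cyrillic small Byelorussian-Ukrainian i
    "o": "\u043e",  # Cyrillic small o
    "p": "\u0440",  # Cyrillic small er
}

def perturb_word(word: str) -> str:
    """Replace the first replaceable character in `word` with a look-alike symbol."""
    for idx, ch in enumerate(word):
        if ch.lower() in HOMOGLYPHS:
            return word[:idx] + HOMOGLYPHS[ch.lower()] + word[idx + 1:]
    return word  # no homoglyph available; leave the word unchanged

def perturb_sentence(tokens: list[str], important_indices: list[int]) -> list[str]:
    """Apply the homoglyph swap only to the selected important words."""
    return [perturb_word(tok) if i in important_indices else tok
            for i, tok in enumerate(tokens)]

# Example: perturb the two "most important" words (indices hard-coded here;
# in MCTSBug they would come from the Monte Carlo tree search).
tokens = "this movie is a great film".split()
print(" ".join(perturb_sentence(tokens, [4, 5])))
```

The output string looks identical to the input to a human reader, but the swapped Cyrillic characters map to different (typically out-of-vocabulary) tokens for the classifier, which is what makes the perturbation effective while remaining imperceptible.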

Citations

@article{gao2018mctsbug,
  title={MCTSBug: Generating Adversarial Text Sequences via Monte Carlo Tree Search and Homoglyph Attack},
  author={Gao, Ji and Lanchantin, Jack and Qi, Yanjun},
  year={2018}
}

Support or Contact

Having trouble with our tools? Please contact Ji Gao and we’ll help you sort it out.
