Since 2021, blog posts on this research front have moved to our group blog at https://qdata.github.io/qdata-page/.

This is our legacy website (maintained until 2020), http://TrustworthyMachineLearning.org/, which introduces a suite of tools we designed for making machine learning secure and trustworthy. The project provides toolboxes for five main tasks (organized as entries in the navigation menu).



Scope of problems our tools aim to tackle

Classifiers based on machine learning algorithms have shown promising results for many security tasks including malware classification and network intrusion detection, but classic machine learning algorithms are not designed to operate in the presence of adversaries. Intelligent and adaptive adversaries may actively manipulate the information they present in attempts to evade a trained classifier, leading to a competition between the designers of learning systems and attackers who wish to evade them. This project is developing automated techniques for predicting how well classifiers will resist the evasions of adversaries, along with general methods to automatically harden machine-learning classifiers against adversarial evasion attacks.
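As one classic illustration of such an evasion attack (not specific to our tools), a gradient-based attacker perturbs an input in the direction that increases a differentiable classifier's loss. Below is a minimal sketch of the fast gradient sign method (FGSM), assuming a PyTorch classifier with inputs scaled to [0, 1]:

```python
# Minimal sketch of a gradient-based evasion attack (FGSM),
# assuming `model` is a differentiable PyTorch classifier with inputs in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Return a perturbed copy of x that the fixed model is more likely to misclassify."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step within an L-infinity budget in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```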

Important tasks

At the junction of machine learning and computer security, this project provides toolboxes for five main tasks, summarized in the table below. Our system aims to let a classifier designer understand how a model's classification performance degrades under evasion attacks, enabling better-informed and more secure design choices. The framework is general and scalable, and takes advantage of the latest advances in machine learning and computer security.
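For instance, one simple way to quantify such degradation is to sweep the attacker's perturbation budget and record accuracy at each strength. A hypothetical sketch (where `attack_fn` could be the FGSM sketch above):

```python
# Hypothetical sketch: measure how accuracy degrades as the attack budget grows.
# `attack_fn(model, x, y, epsilon)` can be any evasion attack, e.g. FGSM above.
import torch

@torch.no_grad()
def _num_correct(model, x, y):
    return (model(x).argmax(dim=1) == y).float().sum().item()

def accuracy_under_attack(model, loader, attack_fn, epsilons=(0.0, 0.01, 0.03, 0.1)):
    results = {}
    for eps in epsilons:
        correct, total = 0.0, 0
        for x, y in loader:
            x_eval = x if eps == 0.0 else attack_fn(model, x, y, epsilon=eps)
            correct += _num_correct(model, x_eval, y)
            total += y.numel()
        results[eps] = correct / total  # accuracy at this attack strength
    return results
```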

[Figure: project timeline]

We categorize the topics into subtasks and list our selected works in the following table; a usage sketch for TextAttack (row 1) and a sketch of the feature-squeezing detector (row 4) follow the table.

| No. | Tool Category | Paper Title | Venue | Software |
| --- | --- | --- | --- | --- |
| 1 | Evade NLP Machine Learning | TextAttack: A Framework for Adversarial Attacks in Natural Language Processing | EMNLP 2020 | GitHub |
| 2 | Evade Machine Learning | Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers | NDSS 2016 | GitHub |
| 3 | Evade NLP Machine Learning | Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers | DeepSecure Workshop 2018 | GitHub |
| 4 | Detect Adversarial Attacks | Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks | NDSS 2018 | GitHub |
| 5 | Defense against Adversarial Attacks | DeepCloak: Masking Deep Neural Network Models for Robustness against Adversarial Samples | ICLR Workshop 2017 | GitHub |
| 6 | Visualize Adversarial Attacks | Adversarial-Playground: A Visualization Suite for Adversarial Samples | VizSec 2017 | GitHub |
| 7 | Theorems of Adversarial Examples | A Theoretical Framework for Robustness of (Deep) Classifiers Against Adversarial Samples | ICLR Workshop 2017 | |
| 8 | Trustworthy via Interpretation | Deep Motif Dashboard | ICLR Workshop 2017 | |
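As a concrete example for row 1, the following sketch follows the TextAttack documentation to run the TextFooler recipe against a pretrained sentiment model. The model and dataset names are standard TextAttack / Hugging Face identifiers, and the exact API may differ slightly across TextAttack versions:

```python
# Sketch based on the TextAttack documentation: attack a pretrained IMDB
# sentiment classifier with the TextFooler recipe.
import transformers
from textattack import Attacker
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

model = transformers.AutoModelForSequenceClassification.from_pretrained(
    "textattack/bert-base-uncased-imdb")
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "textattack/bert-base-uncased-imdb")
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

attack = TextFoolerJin2019.build(model_wrapper)
dataset = HuggingFaceDataset("imdb", split="test")
Attacker(attack, dataset).attack_dataset()
```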

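Row 4's detector compares a model's predictions on an input before and after "squeezing" it (e.g., reducing color bit depth); a large change suggests an adversarial example. A simplified sketch, assuming a PyTorch image classifier (the paper combines several squeezers and tunes the threshold per dataset):

```python
# Simplified sketch of the feature-squeezing detection idea,
# assuming a PyTorch image classifier with inputs in [0, 1].
import torch

def reduce_bit_depth(x, bits=4):
    """Squeeze inputs in [0, 1] to a coarser color depth."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def is_adversarial(model, x, threshold=1.0):
    """Flag inputs whose prediction moves too much after squeezing."""
    with torch.no_grad():
        p_orig = torch.softmax(model(x), dim=1)
        p_squeezed = torch.softmax(model(reduce_bit_depth(x)), dim=1)
    # Legitimate inputs change little under squeezing; adversarial ones change a lot.
    return (p_orig - p_squeezed).abs().sum(dim=1) > threshold
```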
Contact

Have questions or suggestions? Feel free to reach out on Twitter or by email.

Thanks for reading!