securemachinelearning.org is up and running!
08 Jun 2017
The website securemachinelearning.org introduces updates to a suite of tools we have developed for making machine learning secure and robust.
Scope of problems our tools aim to tackle
Classifiers based on machine learning algorithms have shown promising results for many security tasks including malware classification and network intrusion detection, but classic machine learning algorithms are not designed to operate in the presence of adversaries. Intelligent and adaptive adversaries may actively manipulate the information they present in attempts to evade a trained classifier, leading to a competition between the designers of learning systems and attackers who wish to evade them. This project is developing automated techniques for predicting how well classifiers will resist the evasions of adversaries, along with general methods to automatically harden machine-learning classifiers against adversarial evasion attacks.
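To make the notion of an evasion attack concrete, here is a minimal, hypothetical sketch (not taken from our released toolboxes) of a gradient-based evasion attack in the style of the fast gradient sign method against a differentiable classifier; the model `net` and the labeled batch `(x, y)` are assumptions for illustration.

```python
# Hypothetical sketch of a gradient-based evasion attack (FGSM-style).
# `net` is any differentiable PyTorch classifier; x, y are a test batch.
import torch
import torch.nn.functional as F

def fgsm_evade(net, x, y, epsilon=0.1):
    """Perturb x within an L-infinity budget `epsilon` so that it is
    more likely to be misclassified by `net` (i.e., evades the classifier)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(net(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```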
Five important tasks
At the junction between machine learning and computer security, this project provides toolboxes for five main tasks, as shown in the following table. Our system aims to allow a classifier designer to understand how the classification performance of a model degrades under evasion attacks, enabling better-informed and more secure design choices. The framework is general and scalable, and takes advantage of the latest advances in machine learning and computer security.
| No. | Tool Name | Short Description |
|-----|-----------|--------------------|
| 1 | Evade Machine Learning | Tools we designed to automatically evade classifiers |
| 2 | Detect Adversarial Attacks | Tools we designed for detecting adversarial examples in deep neural networks |
| 3 | Defense against Adversarial Attacks | Tools we designed for defending against adversarial examples in deep neural networks |
| 4 | Visualize Adversarial Attacks | Tools we designed for visualizing adversarial examples |
| 5 | Theorems of Adversarial Machine Learning | Theorems we proposed for understanding adversarial examples in machine learning |
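As a rough illustration of what "understanding how classification performance degrades under evasion attacks" can look like in practice, the following hypothetical sketch measures test accuracy at increasing perturbation budgets; `fgsm_evade` (from the sketch above), `net`, and `test_loader` are assumptions for illustration, not part of the released toolboxes.

```python
# Hypothetical sketch: evaluate how accuracy degrades as the evasion budget grows.
import torch

def accuracy_under_attack(net, test_loader, epsilons=(0.0, 0.05, 0.1, 0.2)):
    results = {}
    for eps in epsilons:
        correct, total = 0, 0
        for x, y in test_loader:
            # Attack each batch at the current budget (eps = 0 means clean inputs).
            x_adv = fgsm_evade(net, x, y, epsilon=eps) if eps > 0 else x
            with torch.no_grad():
                pred = net(x_adv).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
        results[eps] = correct / total  # fraction still classified correctly
    return results
```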
Thanks for reading!