Reliable Applications VI: Robustness (2)
| Presenter | Papers | Paper URL | Our Slides |
|-----------|--------|-----------|------------|
| Tianlu | Robustness of classifiers: from adversarial to random noise, NIPS16 [^1] | PDF | |
| Anant | Blind Attacks on Machine Learners, NIPS16 [^2] | | |
| | Data Noising as Smoothing in Neural Network Language Models (Ng), ICLR17 [^3] | | |
| | The Robustness of Estimator Composition, NIPS16 [^4] | | |

[^4]: **The Robustness of Estimator Composition, NIPS16.** We formalize notions of robustness for composite estimators via the notion of a breakdown point. A composite estimator successively applies two (or more) estimators: on data decomposed into disjoint parts, it applies the first estimator on each part, then the second estimator on the outputs of the first estimator. And so on, if the composition is of more than two estimators. Informally, the breakdown point is the minimum fraction of data points which, if significantly modified, will also significantly modify the output of the estimator, so it is typically desirable to have a large breakdown point. Our main result shows that, under mild conditions on the individual estimators, the breakdown point of the composite estimator is the product of the breakdown points of the individual estimators. We also demonstrate several scenarios, ranging from regression to statistical testing, where this analysis is easy to apply, useful in understanding worst-case robustness, and sheds powerful insights onto the associated data analysis.
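The product rule can be seen in a small sketch. The median-of-medians composite below is our illustrative example, not from the paper: the median of 5 values breaks when 3 are corrupted (breakdown 3/5), so the composite breaks once 3 points in each of 3 parts are corrupted, i.e. a fraction 9/25 = (3/5) × (3/5) of the data.

```python
import statistics

def composite_estimate(data, num_parts):
    """Composite estimator: split the data into disjoint parts, apply the
    median to each part, then the median to the per-part medians."""
    size = len(data) // num_parts
    parts = [data[i * size:(i + 1) * size] for i in range(num_parts)]
    return statistics.median(statistics.median(p) for p in parts)

clean = list(range(25))                      # 5 parts of 5 points each
print(composite_estimate(clean, 5))          # 12, a central value

# Worst-case corruption: a majority of points (3) in a majority of
# parts (3) -- 9/25 of the data, the product (3/5) * (3/5).
broken = clean[:]
for part in range(3):
    for j in range(3):
        broken[part * 5 + j] = 10**9
print(composite_estimate(broken, 5))         # 1000000000: estimator broken
```

Note that a spread-out corruption of the same 9 points (at most 2 per part) would leave every part's median, and hence the composite, intact; the breakdown point is a worst-case notion.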

[^3]: **Data Noising as Smoothing in Neural Network Language Models (Ng), ICLR17.** Data noising is an effective technique for regularizing neural network models. While noising is widely adopted in application domains such as vision and speech, commonly used noising primitives have not been developed for discrete sequence-level settings such as language modeling. In this paper, we derive a connection between input noising in neural network language models and smoothing in n-gram models. Using this connection, we draw upon ideas from smoothing to develop effective noising schemes. We demonstrate performance gains when applying the proposed schemes to language modeling and machine translation. Finally, we provide empirical analysis validating the relationship between noising and smoothing.
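As a concrete illustration of input noising for discrete sequences, here is a minimal sketch of unigram noising (the function name and signature are ours, not the paper's): each token is independently replaced, with probability `gamma`, by a draw from the unigram distribution.

```python
import random

def unigram_noise(tokens, unigram_probs, gamma, rng=random):
    """Replace each token, with probability gamma, by a sample from the
    unigram distribution -- input noising that acts like n-gram smoothing."""
    vocab = list(unigram_probs)
    weights = [unigram_probs[w] for w in vocab]
    noised = []
    for tok in tokens:
        if rng.random() < gamma:
            noised.append(rng.choices(vocab, weights=weights)[0])
        else:
            noised.append(tok)
    return noised

sentence = ["the", "cat", "sat"]
probs = {"the": 0.5, "cat": 0.25, "sat": 0.25}
print(unigram_noise(sentence, probs, gamma=0.0))  # unchanged: ['the', 'cat', 'sat']
```

With `gamma = 0` the input passes through untouched; as `gamma` grows, training targets are increasingly smoothed toward the unigram distribution.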

[^1]: **Robustness of classifiers: from adversarial to random noise, NIPS16.** Several recent works have shown that state-of-the-art classifiers are vulnerable to worst-case (i.e., adversarial) perturbations of the data points. On the other hand, it has been empirically observed that these same classifiers are relatively robust to random noise. In this paper, we propose to study a semi-random noise regime that generalizes both the random and worst-case noise regimes. We propose the first quantitative analysis of the robustness of nonlinear classifiers in this general noise regime. We establish precise theoretical bounds on the robustness of classifiers in this general regime, which depend on the curvature of the classifier’s decision boundary. Our bounds confirm and quantify the empirical observations that classifiers satisfying curvature constraints are robust to random noise. Moreover, we quantify the robustness of classifiers in terms of the subspace dimension in the semi-random noise regime, and show that our bounds remarkably interpolate between the worst-case and random noise regimes. We perform experiments and show that the derived bounds provide very accurate estimates when applied to various state-of-the-art deep neural networks and datasets. This result suggests bounds on the curvature of the classifiers’ decision boundaries that we support experimentally, and more generally offers important insights onto the geometry of high dimensional classification problems.
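The gap between the two extreme regimes is easy to check numerically for a linear classifier (the setup below is our toy illustration, not the paper's experiment): the worst-case perturbation needs norm equal to the distance to the boundary, while a uniformly random direction needs roughly a factor sqrt(d) more.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000
w = rng.normal(size=d)
w /= np.linalg.norm(w)            # unit normal of the decision boundary
margin = 1.0                      # classifier f(x) = w @ x + margin, so f(0) = 1

adv_norm = margin                 # worst case: move distance `margin` along -w
rand_norms = []
for _ in range(2000):
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                  # uniformly random direction
    rand_norms.append(margin / abs(w @ u))  # norm needed along u to cross f = 0
print(adv_norm, np.median(rand_norms))      # random noise costs ~sqrt(d) more
```

Semi-random noise, where the perturbation is the worst case within a random subspace of dimension m, interpolates between these two endpoints as m varies.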

[^2]: **Blind Attacks on Machine Learners, NIPS16.** The importance of studying the robustness of learners to malicious data is well established. While much work has been done establishing both robust estimators and effective data injection attacks when the attacker is omniscient, the ability of an attacker to provably harm learning while having access to little information is largely unstudied. We study the potential of a “blind attacker” to provably limit a learner’s performance by data injection attack without observing the learner’s training set or any parameter of the distribution from which it is drawn. We provide examples of simple yet effective attacks in two settings: firstly, where an “informed learner” knows the strategy chosen by the attacker, and secondly, where a “blind learner” knows only the proportion of malicious data and some family to which the malicious distribution chosen by the attacker belongs. For each attack, we analyze minimax rates of convergence and establish lower bounds on the learner’s minimax risk, exhibiting limits on a learner’s ability to learn under data injection attack even when the attacker is “blind”.
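The flavor of a blind data-injection attack can be sketched numerically (the Gaussian setup and shift below are our illustrative choices, not from the paper): the attacker injects points from a fixed shifted distribution without ever seeing the training set, and a naive mean estimator is biased by roughly alpha / (1 + alpha) times the shift, where alpha is the injected fraction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, shift = 10_000, 0.2, 5.0
clean = rng.normal(loc=0.0, size=n)                    # honest samples
injected = rng.normal(loc=shift, size=int(alpha * n))  # chosen blindly by attacker
data = np.concatenate([clean, injected])

# A naive learner (the sample mean) is pulled toward the attacker's
# distribution even though the attack never looked at the clean data.
print(clean.mean())   # near 0
print(data.mean())    # near alpha / (1 + alpha) * shift
```

The paper's results are stronger than this sketch suggests: they lower-bound the minimax risk, i.e. they limit what *any* learner can achieve under such injection, not just the sample mean.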