Reliable17: Secure Machine Learning
22 Feb 2017

Tags: Reliable, Secure, Privacy, Cryptography

| Presenter | Papers | Paper URL | Our Slides |
| --------- | ------ | --------- | ---------- |
| Tobin | Summary of a few papers on machine learning and cryptography (e.g., Learning to Protect Communications with Adversarial Neural Cryptography) [^1] | | |
| Tobin | Privacy Aware Learning (NIPS12) [^2] | | |
| Tobin | Can Machine Learning Be Secure? (2006) | | |

[^1]: *Learning to Protect Communications with Adversarial Neural Cryptography*, Abadi & Andersen, arXiv 2016: "We ask whether neural networks can learn to use secret keys to protect information from other neural networks. Specifically, we focus on ensuring confidentiality properties in a multiagent system, and we specify those properties in terms of an adversary. Thus, a system may consist of neural networks named Alice and Bob, and we aim to limit what a third neural network named Eve learns from eavesdropping on the communication between Alice and Bob. We do not prescribe specific cryptographic algorithms to these neural networks; instead, we train end-to-end, adversarially. We demonstrate that the neural networks can learn how to perform forms of encryption and decryption, and also how to apply these operations selectively in order to meet confidentiality goals."
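The adversarial objective the abstract describes can be made concrete with a small sketch. The loss structure below follows the shape of Abadi & Andersen's formulation (Bob minimizes reconstruction error; Alice and Bob jointly also push Eve toward the error of random guessing, i.e. half the bits wrong), but the function names, the {0, 1} bit encoding, and the toy message length are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def l1_dist(p, p_hat):
    """Mean per-bit L1 error between plaintext p and a reconstruction."""
    return np.abs(p - p_hat).mean()

def eve_loss(p, eve_out):
    # Eve simply minimizes her own reconstruction error.
    return l1_dist(p, eve_out)

def alice_bob_loss(p, bob_out, eve_out):
    # Alice/Bob want Bob accurate AND Eve no better (and no worse!)
    # than random guessing: a per-bit error of 0.5. A confidently
    # wrong Eve still leaks information, so deviation in either
    # direction from 0.5 is penalized quadratically.
    bob_err = l1_dist(p, bob_out)
    eve_err = l1_dist(p, eve_out)
    return bob_err + (0.5 - eve_err) ** 2 / 0.25

p = np.array([0., 1., 1., 0.])  # toy 4-bit plaintext
# Bob perfect, Eve reduced to coin-flipping: the ideal outcome, loss 0.
ideal = alice_bob_loss(p, p, np.full(4, 0.5))
```

Note the symmetric penalty around 0.5: an Eve who decodes every bit exactly wrong is as bad for Alice and Bob as one who decodes every bit exactly right, since either output is trivially invertible.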

[^2]: *Privacy Aware Learning*, John C. Duchi, Michael I. Jordan, Martin J. Wainwright, NIPS 2012: "We study statistical risk minimization problems under a privacy model in which the data is kept confidential even from the learner. In this local privacy framework, we establish sharp upper and lower bounds on the convergence rates of statistical estimation procedures. As a consequence, we exhibit a precise tradeoff between the amount of privacy the data preserves and the utility, as measured by convergence rate, of any statistical estimator or learning procedure."
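The "local privacy" model in this abstract means each data point is randomized before the learner ever sees it. A minimal sketch of the idea, using classical randomized response for a single bit (a standard ε-locally-private mechanism, not the paper's specific estimators), shows the privacy–utility tradeoff: as ε shrinks, the debiasing factor blows up and the estimator's variance grows.

```python
import numpy as np

def randomize(bit, eps, rng):
    """epsilon-locally-private randomized response: report the true bit
    with probability e^eps / (1 + e^eps), otherwise flip it."""
    p_true = np.exp(eps) / (1.0 + np.exp(eps))
    return bit if rng.random() < p_true else 1 - bit

def debias_mean(reports, eps):
    """Unbiased estimate of the true mean from privatized reports:
    E[report] = (2p - 1) * mu + (1 - p), so invert that affine map."""
    p = np.exp(eps) / (1.0 + np.exp(eps))
    return (np.mean(reports) - (1 - p)) / (2 * p - 1)

rng = np.random.default_rng(0)
bits = rng.random(20000) < 0.3  # private data with true mean 0.3
reports = [randomize(int(b), 2.0, rng) for b in bits]  # learner sees only these
est = debias_mean(reports, 2.0)  # close to 0.3 despite the noise
```

The learner recovers the population mean from noisy reports alone; the factor 1/(2p - 1) in `debias_mean` is what degrades the convergence rate as ε decreases, which is the tradeoff the paper quantifies sharply.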