Theoretical


Recent Readings for Analyzing Theoretical Properties of Deep Neural Networks (since 2017): Index of Posts

| No. | Read Date | Title and Information | We Read | @ |
|-----|-----------|------------------------|---------|---|
| 1 | 2019, Dec, 12 | deep2reproduce 2019 Fall - Analysis papers | 2019-fall Students | deep2reproduce |
| 2 | 2017, Sep, 14 | Theoretical17 VI - More about Behaviors of DNN | 2017-W4 | |
| 3 | 2017, Sep, 12 | Theoretical17 V - More about Behaviors of DNN | 2017-W4 | |
| 4 | 2017, Sep, 7 | Theoretical17 IV - Investigating Behaviors of DNN | 2017-W3 | |
| 5 | 2017, Sep, 5 | Theoretical17 III - Investigating Behaviors of DNN | 2017-W3 | |
| 6 | 2017, Aug, 24 | Theoretical17 II - Ganguli - Theoretical Neuroscience and Deep Learning DLSS16 | 2017-W1 | |


Here is a detailed list of posts!



[1]: deep2reproduce 2019 Fall - Analysis papers


analysis generalization forgetting training optimization subspace informax normalization sample-selection
| Team Index | Title & Link | Tags | Our Slide |
|------------|--------------|------|-----------|
| T2 | Empirical Study of Example Forgetting During Deep Neural Network Learning | sample selection, forgetting | OurSlide |
| T29 | Select Via Proxy: Efficient Data Selection For Training Deep Networks | sample selection | OurSlide |
| T9 | How SGD Selects the Global Minima in Over-parameterized Learning | optimization | OurSlide |
| T10 | Escaping Saddles with Stochastic Gradients | optimization | OurSlide |
| T13 | To What Extent Do Different Neural Networks Learn the Same Representation | subspace | OurSlide |
| T19 | On the Information Bottleneck Theory of Deep Learning | informax | OurSlide |
| T20 | Visualizing the Loss Landscape of Neural Nets | normalization | OurSlide |
| T21 | Using Pre-Training Can Improve Model Robustness and Uncertainty | training, analysis | OurSlide |
| T24 | Norm Matters: Efficient and Accurate Normalization Schemes in Deep Networks | normalization | OurSlide |

[2]: Theoretical17 VI - More about Behaviors of DNN


understanding black-box Expressive generalization
| Presenter | Papers | Paper URL | Our Slides |
|-----------|--------|-----------|------------|
| SE | Equivariance Through Parameter-Sharing, ICML17 | PDF | |
| SE | Why Deep Neural Networks for Function Approximation?, ICLR17 | PDF | |
| SE | Geometry of Neural Network Loss Surfaces via Random Matrix Theory, ICML17 | PDF | |
| | Sharp Minima Can Generalize For Deep Nets, ICML17 | PDF | |

[3]: Theoretical17 V - More about Behaviors of DNN


understanding black-box Memorization InfoMax Expressive
| Presenter | Papers | Paper URL | Our Slides |
|-----------|--------|-----------|------------|
| Ceyer | A Closer Look at Memorization in Deep Networks, ICML17 | PDF | PDF |
| | On the Expressive Efficiency of Overlapping Architectures of Deep Learning | DLSSpdf + video | |
| Mutual Information | Opening the Black Box of Deep Neural Networks via Information | URL + video | |
| ChaoJiang | Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity, NIPS16 | PDF | PDF |

[4]: Theoretical17 IV - Investigating Behaviors of DNN


understanding black-box Parsimonious Associative memory
| Presenter | Papers | Paper URL | Our Slides |
|-----------|--------|-----------|------------|
| Beilun | Learning Deep Parsimonious Representations, NIPS16 | PDF | PDF |
| Jack | Dense Associative Memory for Pattern Recognition, NIPS16 | PDF + video | PDF |

[5]: Theoretical17 III - Investigating Behaviors of DNN


understanding black-box generalization Expressive
| Presenter | Papers | Paper URL | Our Slides |
|-----------|--------|-----------|------------|
| Rita | On the Expressive Power of Deep Neural Networks | PDF | PDF |
| Arshdeep | Understanding Deep Learning Requires Rethinking Generalization, ICLR17 | PDF | PDF |
| Tianlu | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, ICLR17 | PDF | PDF |

[6]: Theoretical17 II - Ganguli - Theoretical Neuroscience and Deep Learning DLSS16


neuroscience visualizing brain

Ganguli - Theoretical Neuroscience and Deep Learning

| Presenter | Papers | Paper URL | Our Slides |
|-----------|--------|-----------|------------|
| DLSS16 | | video | |
| DLSS17 | | video + slide | |
| DLSS17 | Deep Learning in the Brain | DLSS17 + Video | |


