QData Deep Learning Undergraduate Reading Group
  • Tags

  • art
  • attention
  • backprop
  • cnns
  • cv
  • distillation
  • explainability
  • gans
  • generative-models
  • iclr
  • lottery-tickets
  • lstms
  • mnist
  • neural-networks
  • nlp
  • object-detection
  • optimization
  • rnns
  • softmax
  • statistics
  • style-transfer
  • theory
  • transformers
  • universal-approximation-theorem

Deep Q Networks

Deep Q Networks apply advances in deep supervised learning to Reinforcement Learning environments with large state spaces. We went over a quick intro to RL and tabular Q-learning before discussing the advancements of the original...
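
For reference, here is a minimal sketch of the tabular Q-learning update we reviewed before moving on to DQN; the `env` interface (`reset`, `step`, `sample_action`) and the hyperparameters are placeholders, not code from the paper or from our session.

```python
import numpy as np

def tabular_q_learning(env, n_states, n_actions,
                       episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).
    DQN swaps the table for a neural network and adds experience replay and a target net.
    Assumes env exposes reset() -> state, step(a) -> (next_state, reward, done),
    and sample_action() -> random action; all of these are illustrative."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy exploration
            if np.random.rand() < epsilon:
                action = env.sample_action()
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done = env.step(action)
            target = reward if done else reward + gamma * np.max(Q[next_state])
            Q[state, action] += alpha * (target - Q[state, action])
            state = next_state
    return Q
```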

Jake Grigsby // 26 Apr 2020

Image Super-Resolution Using Deep Convolutional Networks

The super-resolution convolutional neural network (SRCNN) was introduced as a new deep learning method for single-image super-resolution, in contrast to traditional methods such as sparse coding. The authors also show that conventional sparse-coding-based SR methods can be...
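
As a rough illustration of what the deep learning side looks like, here is a sketch of a three-stage SRCNN-style network (patch extraction, non-linear mapping, reconstruction) in PyTorch; the 9-1-5 kernel sizes and channel counts follow the commonly cited configuration but should be treated as assumptions here, not a faithful reimplementation.

```python
import torch.nn as nn

class SRCNNSketch(nn.Module):
    """Sketch of an SRCNN-style model: input is a bicubic-upscaled low-resolution
    image, output is the predicted high-resolution image (same spatial size)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction / representation
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.net(x)
```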

Alan Zheng // 19 Apr 2020

Neural Turing Machines

Neural Turing Machines extend recurrent network architectures with an external memory bank that can be read from and written to at each time step. This process is fully differentiable and can be trained like any...
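
A hedged sketch of the piece that makes this possible, the differentiable (content-based) read: compare a controller-emitted key against every memory row, turn the similarities into soft attention weights, and read out a weighted sum. Variable names and the sharpness parameter here are illustrative, not the paper's notation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def content_addressed_read(memory, key, beta=5.0):
    """memory: (N, M) bank of N slots; key: (M,) query from the controller.
    Cosine similarities -> sharpened softmax weights -> soft read as a weighted sum.
    Every operation is smooth, so gradients flow through the read during training."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = softmax(beta * sims)     # attention over memory rows
    return weights @ memory, weights   # read vector and the weighting used
```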

Jake Grigsby // 19 Apr 2020

Distilling the Knowledge in a Neural Network

NeurIPS 2014

We chose this paper as it describes the foundations of network compression, a departure from some of the previous papers we’ve read that are more domain-specific. The approach that the authors used addresses what knowledge...

Kevin Ivey // 05 Apr 2020

You Only Look Once: Unified, Real-Time Object Detection

CVPR 2016

We chose this paper since YOLO is the most widely used technique in real-time object detection, which is an important application of computer vision. The paper was relatively easy to understand as one of its...

Eli Lifland // 22 Mar 2020

Statistical Modeling: The Two Cultures

Statistical Science, Vol. 16, No. 3 (Aug., 2001)

We took a break from machine learning papers to review our statistical foundations. This paper discusses the tradeoffs between data models and algorithmic models as seen by an eminent statistician. It parallels the contrast between...

Jack Morris // 01 Mar 2020

Attention is All You Need

NeurIPS 2017

We chose this paper since it kicked off a revolution in NLP architectures, introducing attention-based transformers. While the paper provided a good technical approach, it didn’t have the best visualizations to help us understand how...

Eli Lifland // 23 Feb 2020

Visualizing and Understanding Convolutional Networks

ECCV 2014

We chose this paper to give us context: it is a buoy in the sea of image-recognition papers and improvements that followed the remarkable results of the 2012 ImageNet challenge. This...

Jack Morris // 16 Feb 2020

Natural Language Processing (Almost) from Scratch

JMLR 2011

Before we delve into learning about the state-of-the-art in NLP, we wanted to get a strong foundation. This NLP from scratch paper explains how you can go about training neural networks that perform well on...

Jake Grigsby, Kevin Ivey, Eli Lifland, Jack Morris, Yu Yang, Jeffrey Yoo, Alan Zheng // 09 Feb 2020

A Neural Algorithm of Artistic Style

IEEE 2016

The style transfer paper is unique among seminal deep learning papers. The authors are more active in neuroscience than in machine learning, but showed the amazing results you can get when...

Jack Morris // 19 Jan 2020

Generative Adversarial Networks (GANS)

NIPS 2014

This week, we had two presentations: GANs (presented by Alan Zheng) and Style Transfer (presented by Jack Morris). We thought that the GANs paper was as good of a place as any for us to...

Alan Zheng // 19 Jan 2020

The Lottery Ticket Hypothesis

ICLR 2019

For our first academic paper reading as a group, we chose The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks by Frankle and Carbin. We chose this paper to try to make a smooth transition...

Jack Morris // 01 Dec 2019

Overview of Deep Learning

Neural Networks and Deep Learning, Chapter 6

This was our final reading from Neural Networks and Deep Learning. This chapter explains the basics of convolutional networks and goes on to discuss the ImageNet competition and the amazing progress in image recognition....

Alan Zheng // 17 Nov 2019

Training Deep Neural Networks

Neural Networks and Deep Learning, Chapter 5

This week, we read Training Deep Neural Networks, Chapter 5 of the textbook. This chapter outlines some of the fundamental problems in training deep neural networks, like the vanishing and exploding gradient problems. This chapter...
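
Below is a small numerical illustration (our own toy example, not something from the chapter) of why the vanishing gradient problem happens: with sigmoid units the backpropagated signal picks up a factor of roughly w * sigma'(z) per layer, and since sigma'(z) <= 0.25 the product tends to collapse toward zero as depth grows.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Track the magnitude of a gradient signal flowing backwards through a deep
# chain of sigmoid units. Each layer multiplies in a factor w * sigma'(z);
# with sigma'(z) <= 0.25 and modest weights, the product shrinks geometrically.
rng = np.random.default_rng(0)
grad = 1.0
for layer in range(1, 31):
    w = rng.normal(0.0, 1.0)        # illustrative weight
    z = rng.normal(0.0, 1.0)        # illustrative pre-activation
    s = sigmoid(z)
    grad *= w * s * (1.0 - s)       # chain-rule factor contributed by this layer
    if layer % 10 == 0:
        print(f"after {layer} layers: |grad| ~ {abs(grad):.2e}")
```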

Jack Morris // 10 Nov 2019

The Universal Approximation Theorem

Neural Networks and Deep Learning, Chapter 4

This chapter gives a visual proof for the Universal Approximation Theorem, based on the sigmoid activation function. It’s an explorable explanation full of animations that start from a basic step function and show you how...
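
The gist of the construction, written out as a tiny sketch (the numbers are ours, just for illustration): a sigmoid with a large weight is effectively a step function, two steps make a rectangular bump, and enough bumps can approximate any continuous function.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def approx_step(x, step_at, w=50.0):
    """sigmoid(w * (x - step_at)) with a large weight w is nearly a step at x = step_at."""
    return sigmoid(w * (x - step_at))

def bump(x, left, right, height):
    """Difference of two near-steps gives a rectangular bump; sums of bumps approximate f."""
    return height * (approx_step(x, left) - approx_step(x, right))

x = np.linspace(0.0, 1.0, 5)          # [0, 0.25, 0.5, 0.75, 1.0]
print(approx_step(x, 0.5))            # ~[0, 0, 0.5, 1, 1]: essentially a step at 0.5
print(bump(x, 0.25, 0.75, 2.0))       # ~[0, 1, 2, 1, 0]: a height-2 bump between 0.25 and 0.75
```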

Jeffrey Yoo // 03 Nov 2019

Improving the way neural networks learn

Neural Networks and Deep Learning, Chapter 3

This chapter explains the basics of the deep neural network learning process. It explains the cross-entropy loss function in depth and gives an intuition for the learning slowdown across layers. It also covers overfitting vs....
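
For a quick reminder of the main point, here is the cross-entropy cost for a single sigmoid output and a toy comparison of its gradient against the quadratic cost; this is our own restatement of the chapter's argument, with made-up numbers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(a, y):
    """C = -[y ln a + (1 - y) ln(1 - a)] for a sigmoid output a and binary target y."""
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

# For a single weight w with input x:
#   quadratic cost:     dC/dw = (a - y) * sigma'(z) * x   -> tiny when the neuron saturates
#   cross-entropy cost: dC/dw = (a - y) * x               -> the sigma'(z) factor cancels
x, y, w, b = 1.0, 0.0, 6.0, 0.0        # badly initialized: output starts near 1, target is 0
z = w * x + b
a = sigmoid(z)
print(cross_entropy(a, y))             # large cost, as expected
print((a - y) * a * (1 - a) * x)       # quadratic-cost gradient ~ 0.002 (learning slowdown)
print((a - y) * x)                     # cross-entropy gradient ~ 1.0 (no slowdown)
```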

Kevin Ivey // 27 Oct 2019

How the backpropagation algorithm works

Neural Networks and Deep Learning, Chapter 2

This chapter is the most theoretical of the six chapters in Neural Networks and Deep Learning. It derives the backpropagation equations and provides strong mathematical intuitions for why they actually make sense. (These...
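
As a companion to the chapter, here is a compact reconstruction of the algorithm for a fully connected sigmoid network with quadratic cost; it loosely follows the book's conventions, but it is our own sketch, so treat the details as assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(x, y, weights, biases):
    """One forward/backward pass for a fully connected sigmoid net with quadratic cost.
    weights[l] has shape (n_out, n_in); x, y, and biases[l] are 1-D arrays.
    Returns the gradients of the cost with respect to every weight matrix and bias."""
    # Forward pass, caching every activation.
    activation, activations = x, [x]
    for W, b in zip(weights, biases):
        activation = sigmoid(W @ activation + b)
        activations.append(activation)

    # Backward pass: delta_L = (a_L - y) * sigma'(z_L), then propagate with W^T.
    nabla_W = [np.zeros_like(W) for W in weights]
    nabla_b = [np.zeros_like(b) for b in biases]
    delta = (activations[-1] - y) * activations[-1] * (1.0 - activations[-1])
    nabla_W[-1], nabla_b[-1] = np.outer(delta, activations[-2]), delta
    for l in range(2, len(weights) + 1):
        a = activations[-l]                                # equals sigmoid(z) at this layer
        delta = (weights[-l + 1].T @ delta) * a * (1.0 - a)
        nabla_W[-l], nabla_b[-l] = np.outer(delta, activations[-l - 1]), delta
    return nabla_W, nabla_b
```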

Eli Lifland // 12 Oct 2019

Recognizing Handwritten Digits

Neural Networks and Deep Learning, Chapter 1

We started with Neural Networks and Deep Learning in the hope that after reading this short textbook, our team members would share a common foundation in the most basic concepts behind deep learning and neural...

Jack Morris // 21 Sep 2019