In many real world Machine Learning tasks, in particular those with perceptual input, such as vision and speech, the mapping from raw data to the output is often a complicated function with many factors of variation. Prior to 2010, achieving decent performance on such tasks required significant effort to engineer hand-crafted features. Deep Learning algorithms instead aim to learn feature hierarchies, with features at higher levels in the hierarchy formed by the composition of lower level features. This automatic feature learning has been demonstrated to uncover underlying structure in the data, leading to state-of-the-art results in vision, speech, and rapidly in other domains as well. (A small illustrative sketch of this layer-by-layer composition appears below, after the reading list.)

This course aims to cover the basics of Deep Learning and some of the underlying theory, with a particular focus on supervised Deep Learning and a good coverage of unsupervised methods.

Instructors: Shubhendu Trivedi and Risi Kondor

Mondays and Wednesdays, 3.00pm-4.20pm, Ryerson 277.

Prerequisites:
- Graduate Machine Learning courses at the level of STAT 37710/CMSC 35400 or TTIC 31020 (STAT 27725/CMSC 25400 should be OK).
- Familiarity with basic Probability Theory, Linear Algebra, Calculus.
- Programming proficiency in Python (although you should be fine if you have extensive experience in some other high level language).

Additional Resources, Suggested Readings, etc.
- Suggested Reading: Chapter 1 of Goodfellow, Bengio and Courville.
- Suggested Tutorial: Python/NumPy Tutorial.
- Classic Paper: McCulloch and Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity".
- Classic Paper: Rosenblatt, "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain".
- Connectionist History: A Sociological Study of the Official History of the Perceptrons Controversy.
- Connectionist History: A Brief History of Connectionism.
- Suggested Reading: Chapter 5 of Goodfellow, Bengio and Courville.
- Suggested Reading: A Visual Proof that Neural Nets Can Compute Any Function, by Michael Nielsen.
- Suggested Reading: Representational Power of Feedforward Networks (Cybenko, Barron, Folklore, Kolmogorov), by Matus Telgarsky.
- Classic Paper: Rumelhart, Hinton and Williams, "Learning Internal Representations by Error Propagation".
- Classic Paper: Hinton and Nowlan, "How Learning Can Guide Evolution".
- Suggested Reading: Chapter 6 of Goodfellow, Bengio and Courville.
- Suggested Reading: Chapter 2 of Nielsen.
- Suggested Reading: Calculus on Computational Graphs, Chris Olah.
- Suggested Reading: Andrej Karpathy's notes (linked below in "Additional Linkage").
- Suggested Reading: Chapter 7 of Goodfellow, Bengio and Courville.
- Suggested Reading: Sparsity and the LASSO, Tibshirani and Wasserman.
- Suggested Reading: Regularization and Complexity Control in Neural Networks, Chris Bishop.
- Suggested Reading: Bagging Regularizes, Poggio et al.
- Suggested Reading: Dropout: A Simple Way to Prevent Neural Networks from Overfitting, Srivastava et al.
- Suggested Reading: Adversarial Perturbations of Deep Neural Networks, Warde-Farley and Goodfellow.
- Further Reading: Training with Noise is Equivalent to Tikhonov Regularization, Chris Bishop.
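
To make the idea of feature hierarchies a little more concrete, here is a minimal, purely illustrative NumPy sketch (not course material; the layer sizes and random weights are placeholder assumptions rather than anything from the course). Each layer applies a simple transformation to the output of the layer below it, so the overall mapping from raw input to output is a composition of learned features; in practice the weights would be learned by backpropagation rather than drawn at random.

```python
# Illustrative sketch only: a small feedforward network showing how
# higher-level features are compositions of lower-level ones.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Placeholder weights; in a real model these are learned from data.
W1, b1 = 0.01 * rng.standard_normal((784, 128)), np.zeros(128)  # raw input -> low-level features
W2, b2 = 0.01 * rng.standard_normal((128, 64)), np.zeros(64)    # low-level -> higher-level features
W3, b3 = 0.01 * rng.standard_normal((64, 10)), np.zeros(10)     # higher-level features -> class scores

def forward(x):
    h1 = relu(x @ W1 + b1)   # first feature layer
    h2 = relu(h1 @ W2 + b2)  # features composed from the layer below
    return h2 @ W3 + b3      # the whole map is a composition of simple functions

x = rng.standard_normal((1, 784))  # a stand-in for raw perceptual input (e.g. image pixels)
print(forward(x).shape)            # (1, 10)
```

The Python/NumPy Tutorial listed above covers the array operations used here.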