Description
In this course, you will:
- Understand the theory behind principal components analysis (PCA)
- Know why PCA is useful for dimensionality reduction, visualization, de-correlation, and denoising
- Derive the PCA algorithm by hand
- Write the code for PCA
- Understand the theory behind t-SNE
- Use t-SNE in code
- Understand the limitations of PCA and t-SNE
- Understand the theory behind autoencoders
- Write an autoencoder in Theano and TensorFlow
- Understand how stacked autoencoders are used in deep learning
- Write a stacked denoising autoencoder in Theano and TensorFlow
- Understand the theory behind restricted Boltzmann machines (RBMs)
- Understand why RBMs are hard to train
- Understand the contrastive divergence algorithm used to train RBMs
- Write your own RBM and deep belief network (DBN) in Theano and TensorFlow
- Visualize and interpret the features learned by autoencoders and RBMs
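As a taste of the hands-on material, the PCA algorithm you will derive and code can be sketched in a few lines of NumPy via the eigendecomposition of the covariance matrix (a minimal illustration, not the course's own code):

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)               # center each feature
    C = np.cov(Xc, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)  # eigh: C is symmetric
    order = np.argsort(eigvals)[::-1]     # sort by descending variance
    W = eigvecs[:, order[:k]]             # top-k eigenvectors as columns
    return Xc @ W                         # projected, decorrelated data

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca(X, 2)
print(Z.shape)  # (100, 2)
```

The columns of `Z` are mutually uncorrelated and sorted by variance, which is exactly why PCA is useful for de-correlation and dimensionality reduction.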
Syllabus:
- Principal Components Analysis
- t-SNE (t-distributed Stochastic Neighbor Embedding)
- Autoencoders
- Restricted Boltzmann Machines
- The Vanishing Gradient Problem
- Extras + Visualizing what features a neural network has learned
- Applications to NLP (Natural Language Processing)
- Applications to Recommender Systems
- Theano and TensorFlow Basics Review
- Setting Up Your Environment (FAQ by Student Request)
- Extra Help With Python Coding for Beginners (FAQ by Student Request)
- Effective Learning Strategies for Machine Learning (FAQ by Student Request)