Description
In this course, you will learn about:
- Hyperparameter Tuning
- TensorFlow
- Hyperparameter Optimization
- Deep Learning
Syllabus:
1. Practical Aspects of Deep Learning
- Train / Dev / Test sets
- Bias / Variance
- Basic Recipe for Machine Learning
- Regularization
- Why Regularization Reduces Overfitting?
- Dropout Regularization
- Understanding Dropout
- Other Regularization Methods
- Normalizing Inputs
- Vanishing / Exploding Gradients
- Weight Initialization for Deep Networks
- Numerical Approximation of Gradients
- Gradient Checking
- Gradient Checking Implementation Notes
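
To make the regularization topics in this section concrete, here is a minimal NumPy sketch of inverted dropout, the variant taught in the course; the function name, shapes, and keep probability are illustrative assumptions, not taken from the course assignments.

```python
import numpy as np

def dropout_forward(a, keep_prob=0.8, rng=np.random.default_rng(0)):
    # Inverted dropout: keep each unit with probability keep_prob,
    # then divide by keep_prob so the expected activation is unchanged
    # and no rescaling is needed at test time.
    mask = rng.random(a.shape) < keep_prob  # 1 where a unit is kept
    return (a * mask) / keep_prob, mask     # mask is reused in backprop

# Example: activations for a layer of 4 units over 3 training examples
a1 = np.ones((4, 3))
a1_dropped, mask = dropout_forward(a1, keep_prob=0.8)
```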
2. Optimization Algorithms
- Mini-batch Gradient Descent
- Understanding Mini-batch Gradient Descent
- Exponentially Weighted Averages
- Understanding Exponentially Weighted Averages
- Bias Correction in Exponentially Weighted Averages
- Gradient Descent with Momentum
- RMSprop
- Adam Optimization Algorithm
- Learning Rate Decay
- The Problem of Local Optima
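
The optimizers in this section build on each other: momentum and RMSprop each maintain an exponentially weighted average, and Adam combines both with the bias correction listed above. Here is a minimal sketch of one Adam update; the hyperparameter values are the commonly used defaults, assumed here rather than quoted from the lectures.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad       # EWA of gradients (momentum)
    v = beta2 * v + (1 - beta2) * grad**2    # EWA of squared gradients (RMSprop)
    m_hat = m / (1 - beta1**t)               # bias correction for early steps
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Example: minimize f(w) = w^2 from w = 5; the gradient is 2w
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):          # t starts at 1 for bias correction
    w, m, v = adam_step(w, 2.0 * w, m, v, t, lr=0.1)
print(w)  # close to 0
```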
3. Hyperparameter Tuning, Batch Normalization and Programming Frameworks
- Using an Appropriate Scale to Pick Hyperparameters
- Hyperparameters Tuning in Practice: Pandas vs. Caviar
- Normalizing Activations in a Network
- Fitting Batch Norm into a Neural Network
- Why does Batch Norm work?
- Batch Norm at Test Time
- Softmax Regression
- Training a Softmax Classifier
- Deep Learning Frameworks
- TensorFlow
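
As a small taste of the softmax material in this section, here is a numerically stable NumPy sketch of softmax regression's output layer; the column-per-example layout follows the course's usual convention, but the variable names are illustrative.

```python
import numpy as np

def softmax(z):
    # Subtracting the per-column max leaves the output unchanged
    # (softmax is shift-invariant) but prevents overflow in np.exp.
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

# Example: logits for 3 classes over 2 examples; each column sums to 1
logits = np.array([[2.0, 1.0],
                   [1.0, 3.0],
                   [0.1, 0.2]])
probs = softmax(logits)
```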