Description
Learn how to process speech and analyse text using cutting-edge natural language processing techniques. Create probabilistic and deep learning models, such as hidden Markov models and recurrent neural networks, to teach the computer to perform tasks like speech recognition and machine translation, among others!
Syllabus:
Course 1: Introduction to Natural Language Processing
Intro to NLP
- Learn the main techniques used in natural language processing.
- Get familiar with the terminology and the topics covered in the class.
- Build your first application with IBM Watson.
Text Processing
- See how text gets processed in order to use it in models.
- Learn techniques such as tokenization, stemming, and lemmatization.
- Get started with part of speech tagging and named entity recognition.
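The following is a minimal sketch of these preprocessing steps using NLTK; the example sentence and the resource downloads are illustrative assumptions (resource names can differ slightly by NLTK version), not part of the course materials.

```python
# A minimal sketch of common text-processing steps with NLTK.
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer, WordNetLemmatizer

# Resource names may vary slightly across NLTK versions.
nltk.download("punkt")
nltk.download("wordnet")
nltk.download("averaged_perceptron_tagger")

text = "The striped bats were hanging on their feet."

# Tokenization: split raw text into word tokens.
tokens = word_tokenize(text)

# Stemming: crude suffix stripping (e.g. "hanging" -> "hang").
stems = [PorterStemmer().stem(t) for t in tokens]

# Lemmatization: dictionary-based reduction to base forms (e.g. "feet" -> "foot").
lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]

# Part-of-speech tagging: label each token with its grammatical role.
pos_tags = nltk.pos_tag(tokens)

print(stems, lemmas, pos_tags, sep="\n")
```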
Part of Speech Tagging with Hidden Markov Models
- Learn how hidden Markov models are defined.
- Train HMMs with the Viterbi and the Baum-Welch algorithms.
- Use HMMs to build a part of speech tagging model.
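Below is a minimal sketch of Viterbi decoding for a toy HMM tagger; the two-tag model, its transition and emission probabilities, and the example sentence are made-up illustrative values, not numbers from the course.

```python
# A minimal sketch of Viterbi decoding for an HMM part-of-speech tagger.
import numpy as np

tags = ["NOUN", "VERB"]
# start[i]: P(tag_i at position 0); trans[i][j]: P(tag_j | tag_i)
start = np.array([0.6, 0.4])
trans = np.array([[0.3, 0.7],
                  [0.8, 0.2]])
# emit[tag][word]: P(word | tag) for a tiny vocabulary
emit = {"NOUN": {"dogs": 0.5, "bark": 0.1},
        "VERB": {"dogs": 0.1, "bark": 0.6}}

def viterbi(sentence):
    n, k = len(sentence), len(tags)
    score = np.zeros((n, k))            # best path probability ending in each tag
    back = np.zeros((n, k), dtype=int)  # backpointers for recovering the best path
    score[0] = start * [emit[t].get(sentence[0], 1e-6) for t in tags]
    for i in range(1, n):
        for j in range(k):
            cand = score[i - 1] * trans[:, j] * emit[tags[j]].get(sentence[i], 1e-6)
            back[i, j] = np.argmax(cand)
            score[i, j] = np.max(cand)
    # Trace back the most probable tag sequence.
    path = [int(np.argmax(score[-1]))]
    for i in range(n - 1, 0, -1):
        path.append(back[i, path[-1]])
    return [tags[j] for j in reversed(path)]

print(viterbi(["dogs", "bark"]))  # expected: ['NOUN', 'VERB']
```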
Project: Part of Speech Tagging
- Compare the performance of various techniques for tagging parts of speech in sentences, such as table lookups, n-grams, and hidden Markov models.
- This project demonstrates text processing techniques that allow you to create a model for tagging parts of speech. Using probabilistic graphical models, you will start with a simple lookup table and gradually add more complexity to improve the model. Finally, you'll use a Python package to create and train a tagger with a hidden Markov model, and you'll be able to compare the results of all of these models on a dataset of sentences.
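As a flavor of the simplest baseline mentioned above, here is a minimal sketch of a lookup-table tagger that assigns each word its most frequent tag; the tiny tagged corpus is an illustrative assumption, not the project's dataset.

```python
# A minimal sketch of a lookup-table tagger: each word gets the tag it
# appears with most often in a tagged corpus.
from collections import Counter, defaultdict

tagged_corpus = [[("the", "DET"), ("dog", "NOUN"), ("barks", "VERB")],
                 [("the", "DET"), ("dog", "NOUN"), ("runs", "VERB")]]

# Count how often each word appears with each tag.
counts = defaultdict(Counter)
for sentence in tagged_corpus:
    for word, tag in sentence:
        counts[word][tag] += 1

# The lookup table maps each word to its single most frequent tag.
table = {word: tag_counts.most_common(1)[0][0] for word, tag_counts in counts.items()}

def tag(sentence, default="NOUN"):
    """Tag each word via table lookup; unseen words fall back to a default tag."""
    return [(w, table.get(w, default)) for w in sentence]

print(tag(["the", "dog", "barks"]))
```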
Course 2: Computing with Natural Language
Feature Extraction and Embeddings
- Learn to extract features from text.
- Learn the most widely used embedding algorithms, such as Word2Vec and GloVe.
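For a concrete taste, here is a minimal sketch of training Word2Vec embeddings with gensim (version 4 argument names); the toy corpus and all hyperparameters are illustrative assumptions.

```python
# A minimal sketch of training word embeddings with gensim's Word2Vec.
from gensim.models import Word2Vec

corpus = [["natural", "language", "processing", "is", "fun"],
          ["deep", "learning", "models", "process", "language"],
          ["word", "embeddings", "map", "words", "to", "vectors"]]

# Train a small skip-gram model on the tokenized corpus.
model = Word2Vec(sentences=corpus, vector_size=50, window=3,
                 min_count=1, sg=1, epochs=50, seed=42)

# Each word now has a dense vector; similar words end up close together.
vector = model.wv["language"]
print(vector.shape)                       # (50,)
print(model.wv.most_similar("language"))  # nearest neighbours in embedding space
```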
Modeling
- Learn about the main uses of deep learning models in NLP.
- Learn about machine translation, topic models, and sentiment analysis.
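As one example of these uses, here is a minimal sentiment-analysis sketch with a small recurrent model in Keras; the dataset choice (the IMDB reviews bundled with Keras) and all hyperparameters are illustrative assumptions, not the course's setup.

```python
# A minimal sketch of sentiment analysis with a small recurrent model in Keras.
import tensorflow as tf
from tensorflow.keras import layers

num_words, max_len = 10000, 200

# IMDB reviews come pre-tokenized as integer word indices.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=num_words)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_len)
x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_len)

# Embedding -> LSTM -> sigmoid gives a binary positive/negative prediction.
model = tf.keras.Sequential([
    layers.Embedding(num_words, 64),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```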
Deep Learning Attention
- Learn about attention, the advanced deep learning method powering applications like Google Translate.
- Learn about additive and multiplicative attention in applications like machine translation, text summarization, and image captioning.
- Learn about cutting-edge deep learning models like the Transformer that extend the use of attention to eliminate the need for RNNs.
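Here is a minimal numpy sketch of scaled dot-product attention, the core operation inside the Transformer; all shapes and values are illustrative.

```python
# A minimal numpy sketch of scaled dot-product attention.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attend over values V using query/key similarity: softmax(QK^T / sqrt(d)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # attention distribution over positions
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 query positions, dimension 4
K = rng.normal(size=(3, 4))   # 3 key positions
V = rng.normal(size=(3, 4))   # 3 value vectors

context, weights = scaled_dot_product_attention(Q, K, V)
print(context.shape, weights.shape)   # (2, 4) (2, 3)
```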
Information Systems
- Learn about information extraction and information retrieval systems.
- Learn about question answering and its applications.
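As a small illustration of the retrieval side, here is a sketch that ranks documents against a query with TF-IDF and cosine similarity using scikit-learn; the document set and query are illustrative assumptions.

```python
# A minimal sketch of information retrieval: rank documents against a query
# using TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = ["Hidden Markov models are used for part of speech tagging.",
             "Recurrent neural networks can translate English text to French.",
             "Speech recognition converts raw audio into transcribed text."]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

query = "how do neural networks translate text"
query_vector = vectorizer.transform([query])

# Rank documents by cosine similarity to the query and return the best match.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
best = scores.argmax()
print(scores, documents[best])
```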
Project: Machine Translation
Create a deep neural network that will be used as part of a machine translation pipeline from start to finish. When finished, your pipeline will accept English text as input and return the French translation. You will be able to compare the performance of various recurrent neural network architectures.
To begin, you will preprocess the data by converting the text to sequences of integers. Then you'll create a number of deep learning models for translating the text into French. As a final step, you will run these models on an English test set in order to evaluate their performance.
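Below is a minimal sketch of that preprocess-then-model flow in Keras; the two-sentence parallel corpus, the hand-rolled word index, the layer sizes, and the training settings are illustrative assumptions (real data would also need padding to equal lengths), not the project's dataset or required architecture.

```python
# A minimal sketch: convert text to integer sequences, then train a small
# recurrent model that predicts a French word for each English position.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

english = ["the cat is small", "the dog is big"]
french = ["le chat est petit", "le chien est grand"]

def to_sequences(texts):
    """Build a word index and convert each sentence to a sequence of integers."""
    vocab = {w: i + 1 for i, w in enumerate(sorted({w for t in texts for w in t.split()}))}
    seqs = np.array([[vocab[w] for w in t.split()] for t in texts])
    return seqs, len(vocab) + 1   # +1 reserves index 0 for padding

x, en_vocab = to_sequences(english)
y, fr_vocab = to_sequences(french)

# Embedding -> GRU -> per-timestep softmax over the French vocabulary.
model = tf.keras.Sequential([
    layers.Embedding(en_vocab, 32),
    layers.GRU(64, return_sequences=True),
    layers.TimeDistributed(layers.Dense(fr_vocab, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y[..., np.newaxis], epochs=5, verbose=0)
print(model.predict(x).shape)   # (2, sequence_length, french_vocab_size)
```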
Course 3: Communicating with Natural Language
Intro to Voice User Interfaces
- Learn the basics of how computers understand spoken words.
- Get familiar with the most common VUI applications.
- Set up your AWS account and build an Alexa skill with an existing template.
Alexa History Skill
- Learn the basics of Amazon AWS.
- Create your own fully functional Alexa skill using Amazon’s API.
- Deploy your skill for everyone to use.
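For orientation, here is a minimal sketch of a skill's request handler using the ASK SDK for Python; the welcome message and handler are illustrative assumptions, and a real skill also needs its interaction model configured in the Alexa developer console.

```python
# A minimal sketch of an Alexa skill backend with the ASK SDK for Python,
# intended to run as an AWS Lambda function.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type

class LaunchRequestHandler(AbstractRequestHandler):
    """Respond when the user opens the skill without a specific intent."""
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        speech = "Welcome! Ask me about a historical event."  # illustrative text
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())

# Lambda entry point that Alexa invokes with each request.
lambda_handler = sb.lambda_handler()
```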
Introduction to Speech Recognition
- Learn the pipeline used for speech recognition.
- Learn to process and extract features from sound signals.
- Learn to build probabilistic and machine learning language models in order to extract words and grammar from sound signals.
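Here is a minimal sketch of the feature-extraction step using librosa to compute MFCCs; a synthetic sine wave stands in for real recorded speech, and the parameters are illustrative.

```python
# A minimal sketch of extracting MFCC features from a sound signal with librosa.
import numpy as np
import librosa

sr = 16000                                   # sample rate in Hz
t = np.linspace(0, 1.0, sr, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 440 * t)   # 1 second of a 440 Hz tone

# Mel-frequency cepstral coefficients: a compact spectral representation
# commonly used as input features for speech-recognition models.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
print(mfcc.shape)   # (13, number_of_frames)
```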
Project: Speech Recognizer
Create a deep neural network that will be used as part of a complete automatic speech recognition (ASR) pipeline. The model will transform raw audio into feature representations, which will then be converted into transcribed text.
You'll start by looking into a dataset that will be used to train and test your models. Your algorithm will begin by converting any raw audio to feature representations commonly used for ASR. Then, you'll train neural networks to map these features to transcribed text.
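Below is a minimal sketch of the kind of acoustic model this describes: a recurrent network in Keras that maps a sequence of audio feature frames to per-frame character probabilities. All dimensions are illustrative assumptions, and the real pipeline also needs a CTC-style loss to align frames with the transcribed text.

```python
# A minimal sketch of a recurrent acoustic model: feature frames in,
# per-frame character probabilities out.
import tensorflow as tf
from tensorflow.keras import layers

n_features = 13     # e.g. MFCCs per frame
n_chars = 29        # alphabet + space + blank; an illustrative size

model = tf.keras.Sequential([
    layers.Input(shape=(None, n_features)),             # variable-length utterances
    layers.Bidirectional(layers.GRU(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(n_chars, activation="softmax")),
])
model.summary()
```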