Description
In this course, you will learn:
- Job-ready skills employers need in generative AI, machine learning, deep learning, NLP applications, and large language models, in just six months.
- Create and deploy generative AI apps, agents, and chatbots with Python libraries and frameworks such as Flask, SciPy, scikit-learn, Keras, and PyTorch.
- Key generative AI architectures and NLP models, and how to apply approaches such as prompt engineering, model training, and fine-tuning.
- Use transformers like BERT and LLMs like GPT for NLP tasks, together with techniques and frameworks such as RAG and LangChain.
Syllabus:
1. Introduction to Artificial Intelligence (AI)
- Describe AI and its key concepts.
- Showcase how AI applications and use cases can alter our lives and work.
- Recognize AI's potential and impact on businesses and jobs.
- Describe the challenges, limitations, and ethical considerations with AI.
2. Generative AI: Introduction and Applications
- Describe and distinguish generative AI from discriminative AI.
- Describe the capabilities of generative AI and its real-world applications.
- Identify the uses of generative AI across various domains and industries.
- Investigate popular generative AI models and tools for text, code, image, audio, and video generation.
3. Generative AI: Prompt Engineering Basics
- Explain the meaning and importance of prompt engineering in generative AI models.
- Apply best practices for developing prompts and look at examples of effective prompts.
- Use popular prompt engineering tools and approaches to write excellent prompts.
- Investigate commonly used tools for prompt engineering.
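The prompt-writing practices above can be sketched in a few lines of Python. This is a generic, illustrative template (the role, task, and format strings are invented examples, not material from the course):

```python
# A minimal illustration of common prompt-engineering practices:
# give the model a role, context, an explicit task, and a required
# output format. The template is a generic sketch, not tied to any
# specific tool covered in the course.

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a structured prompt from its parts."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Respond as: {output_format}"
    )

prompt = build_prompt(
    role="a concise technical support assistant",
    context="The user runs Python 3.11 on Linux.",
    task="Explain why 'pip install' may fail with a permissions error.",
    output_format="a numbered list of at most three causes",
)
print(prompt)
```

Separating the parts like this makes it easy to iterate on one element (say, the output format) while holding the rest of the prompt fixed.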
4. Python for Data Science, AI & Development
- Learn Python, the most widely used programming language for data science and software development.
- Apply Python programming logic, including variables, data structures, branching, loops, functions, objects, and classes.
- Show that you can use Python libraries like Pandas and NumPy, as well as write code in Jupyter Notebooks, effectively.
- Use APIs and Python modules such as Beautiful Soup to access and scrape data from the web.
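A small sketch of the data-handling skills this module covers: NumPy for vectorized numeric work and pandas for tabular data (web scraping with Beautiful Soup is omitted here to keep the example self-contained; the data is invented):

```python
import numpy as np
import pandas as pd

# NumPy: vectorized arithmetic instead of explicit loops.
temps_c = np.array([20.0, 22.5, 19.0])
temps_f = temps_c * 9 / 5 + 32  # Celsius to Fahrenheit, elementwise

# pandas: build a table and compute a summary statistic.
df = pd.DataFrame({"city": ["Oslo", "Lima", "Pune"], "temp_c": temps_c})
mean_temp = df["temp_c"].mean()
print(temps_f, mean_temp)
```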
5. Developing AI Applications with Python and Flask
- Describe the steps and processes required to create a Python application, including the application development lifecycle.
- Create Python modules, run unit tests, and package applications while following PEP8 coding best practices.
- Explain Flask's features and deploy web apps with the Flask framework.
- Create and deploy an AI-based application on a web server using IBM Watson AI Libraries and Flask.
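A minimal Flask app of the kind this module builds toward. The route and payload are illustrative placeholders, not the course's actual application:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # A simple endpoint a deployment can probe.
    return jsonify(status="ok")

# In production this would run behind a WSGI server; for local
# checks, Flask's built-in test client is enough.
client = app.test_client()
print(client.get("/health").get_json())
```

The same pattern (a route function returning JSON) extends naturally to an endpoint that calls an AI library and returns its result.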
6. Building Generative AI-Powered Applications with Python
- Explain the fundamental concepts behind generative AI models, AI technologies, and AI platforms like IBM Watson and Hugging Face.
- Integrate and improve large language models (LLMs) using RAG technology to add intelligence to apps and chatbots.
- Use Python modules such as Flask and Gradio to develop web applications that interact with generative AI models.
- Create generative AI-powered applications and chatbots with generative AI models, Python, and other frameworks.
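A sketch of a chat handler that a Gradio UI could wrap. The `fake_llm` function is an invented stand-in for a real generative model call; only the wiring pattern is the point:

```python
def fake_llm(prompt: str) -> str:
    # Placeholder: a real app would call an LLM here (e.g. via
    # Hugging Face or watsonx).
    return f"You said: {prompt}"

def respond(message: str, history: list) -> str:
    """Chat handler with the (message, history) signature that
    Gradio's ChatInterface expects."""
    # The history could be folded into the prompt for context;
    # kept simple here.
    return fake_llm(message)

# Wiring it into a web UI would look roughly like:
#   import gradio as gr
#   gr.ChatInterface(respond).launch()
print(respond("hello", []))
```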
7. Data Analysis with Python
- Write Python code to clean and prepare data for analysis, including managing missing values, formatting, normalizing, and binning data.
- Use libraries like Pandas, NumPy, and SciPy to conduct exploratory data analysis and apply analytical approaches to real-world datasets.
- Use dataframes to manipulate data, summarize it, understand its distribution, perform correlation, and build data pipelines.
- Create and evaluate regression models with machine learning. Use the scikit-learn library to make predictions and decisions.
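The steps above can be compressed into one small workflow: handle missing values and bin a column with pandas, then fit a scikit-learn regression. The dataset is synthetic and illustrative:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "size_sqm": [50.0, 75.0, np.nan, 120.0, 90.0],
    "price": [100.0, 150.0, 160.0, 240.0, 180.0],
})

# Fill the missing size with the column mean, then bin sizes.
df["size_sqm"] = df["size_sqm"].fillna(df["size_sqm"].mean())
df["size_bin"] = pd.cut(df["size_sqm"], bins=3,
                        labels=["small", "medium", "large"])

# Fit price ~ size and predict for a new observation.
model = LinearRegression().fit(df[["size_sqm"]], df["price"])
pred = model.predict(pd.DataFrame({"size_sqm": [100.0]}))[0]
print(round(pred, 1))
```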
8. Machine Learning with Python
- Master job-ready machine learning skills in Python in just six weeks, including how to use scikit-learn to build, test, and evaluate models.
- How to apply data preparation strategies and manage the bias-variance tradeoff to improve model performance.
- How to apply key machine learning methods, such as linear regression, decision trees, and SVM, to classification and regression tasks.
- How to assess model performance with metrics, cross-validation, and hyperparameter tuning to ensure accuracy and reliability.
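The evaluation workflow in this module can be sketched with scikit-learn: a pipeline, cross-validation, and a small hyperparameter grid search on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Scale features, then fit an SVM classifier.
pipe = make_pipeline(StandardScaler(), SVC())
scores = cross_val_score(pipe, X, y, cv=5)  # 5-fold accuracy estimates

# Tune the regularization strength C of the SVM step.
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1.0, 10.0]}, cv=5).fit(X, y)
print(scores.mean(), grid.best_params_)
```

Putting scaling inside the pipeline matters: it is refit on each training fold, so no information from the validation fold leaks into preprocessing.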
9. Introduction to Deep Learning & Neural Networks with Keras
- Describe a neural network, a deep learning model, and how they differ.
- Show an understanding of unsupervised deep learning models like autoencoders and restricted Boltzmann machines.
- Show an understanding of supervised deep learning models like convolutional neural networks and recurrent networks.
- Create deep learning models and networks with the Keras library.
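A minimal Keras model of the kind built in this module: a small fully connected network for a ten-class problem. The layer sizes and input shape (a flattened 28x28 image) are illustrative:

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(784,)),          # e.g. a flattened 28x28 image
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Forward pass on random data to check the output shape.
out = model.predict(np.random.rand(2, 784), verbose=0)
print(out.shape)
```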
10. Generative AI and LLMs: Architecture and Data Preparation
- Differentiate between generative AI architectures and models, including RNNs, Transformers, VAEs, GANs, and Diffusion Models.
- Explain how LLMs like GPT, BERT, BART, and T5 are employed in language processing.
- Use NLP packages like NLTK, spaCy, BertTokenizer, and XLNetTokenizer to preprocess raw textual data.
- Build an NLP data loader with PyTorch to perform tokenization, numericalization, and padding on text data.
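The data-loader step can be sketched end to end: whitespace tokenization, numericalization against a vocabulary, and padding inside a PyTorch `DataLoader` collate function. The vocabulary and texts are toy data:

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

texts = ["the cat sat", "the dog chased the cat quickly"]
vocab = {"<pad>": 0, "the": 1, "cat": 2, "sat": 3, "dog": 4,
         "chased": 5, "quickly": 6}

def numericalize(text):
    # Map each whitespace token to its vocabulary id.
    return torch.tensor([vocab[tok] for tok in text.split()])

def collate(batch):
    # Pad every sequence in the batch to the longest one.
    return pad_sequence([numericalize(t) for t in batch],
                        batch_first=True, padding_value=vocab["<pad>"])

loader = DataLoader(texts, batch_size=2, collate_fn=collate)
batch = next(iter(loader))
print(batch.shape)
```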
11. Gen AI Foundational Models for NLP & Language Understanding
- Explain how to utilize one-hot encoding, bag-of-words, embedding, and embedding bags to turn words into features.
- Create and use word2vec models for contextual embedding.
- Create and train a simple language model using a neural network.
- Use N-gram and sequence-to-sequence models to classify documents, analyze texts, and transform sequences.
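Turning words into features with an embedding bag, as described above, can be sketched in PyTorch. The vocabulary size, dimensions, and token ids are illustrative:

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, num_classes = 20, 8, 2

# EmbeddingBag averages the word vectors in each "bag" (document)
# by default, giving one fixed-size feature vector per document.
bag = nn.EmbeddingBag(vocab_size, embed_dim)
classifier = nn.Linear(embed_dim, num_classes)

# Two toy documents as rows of token ids; each row is one bag.
docs = torch.tensor([[1, 2, 3, 4], [5, 6, 7, 8]])
features = bag(docs)           # shape (2, embed_dim)
logits = classifier(features)  # shape (2, num_classes)
print(logits.shape)
```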
12. Generative AI Language Modeling with Transformers
- Explain the concept of attention mechanisms in transformers, including their role in capturing contextual information.
- Describe language modeling with the decoder-based GPT and encoder-based BERT.
- Implement positional encoding, masking, attention mechanism, document classification, and create LLMs like GPT and BERT.
- Use transformer-based models and PyTorch functions for text classification, language translation, and modeling.
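Two of the core pieces this module implements, sinusoidal positional encoding and scaled dot-product attention, can be sketched directly in PyTorch. Shapes here are toy values:

```python
import math
import torch

def positional_encoding(seq_len, d_model):
    # Sinusoidal encoding: sin on even dims, cos on odd dims.
    pos = torch.arange(seq_len).unsqueeze(1).float()
    i = torch.arange(0, d_model, 2).float()
    angles = pos / torch.pow(10000.0, i / d_model)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe

def attention(q, k, v):
    # Scaled dot-product attention: softmax(QK^T / sqrt(d)) V.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    return torch.softmax(scores, dim=-1) @ v

x = torch.randn(1, 5, 16) + positional_encoding(5, 16)
out = attention(x, x, x)  # self-attention over the sequence
print(out.shape)
```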
13. Generative AI Engineering and Fine-Tuning Transformers
- Gain the sought-after, job-ready skills businesses need for working with transformer-based LLMs in generative AI engineering, in just one week.
- How to perform parameter-efficient fine-tuning (PEFT) using LoRA and QLoRA.
- How to use pretrained transformers for language tasks and fine-tune them for specific tasks.
- How to load pretrained models, run inference, and train models with Hugging Face.
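The LoRA idea behind PEFT can be shown from scratch: freeze a pretrained weight and learn only a low-rank update `B @ A`. In the course this is done with Hugging Face's PEFT library; this toy PyTorch version only shows the mechanics:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r=4, alpha=8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze pretrained weights
            p.requires_grad = False
        # Low-rank factors: only these are trained. B starts at zero,
        # so the adapted layer initially matches the base layer.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(32, 32))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # far fewer than the 32*32 + 32 base parameters
```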
14. Generative AI Advanced Fine-Tuning for LLMs
- Gain in-demand generative AI engineering skills in fine-tuning LLMs that employers are actively looking for, in just two weeks.
- Instruction-tuning and reward modeling with Hugging Face, plus using LLMs as policies in RLHF.
- Direct preference optimization (DPO) with the partition function and Hugging Face, and how to create an optimal solution to a DPO problem.
- How to use proximal policy optimization (PPO) with Hugging Face to create a scoring function and perform dataset tokenization.
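The DPO objective mentioned above has a compact form that can be sketched directly: prefer the chosen response over the rejected one, measured relative to a frozen reference model. The log-probabilities below are invented scalars:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """-log sigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r)))."""
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    return -F.logsigmoid(beta * margin)

# When the policy prefers the chosen response more strongly than the
# reference does, the margin is positive and the loss is small.
loss = dpo_loss(torch.tensor(-1.0), torch.tensor(-3.0),
                torch.tensor(-2.0), torch.tensor(-2.5))
print(loss.item())
```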
15. Fundamentals of AI Agents Using RAG and LangChain
- Gain in-demand, job-ready skills for building AI agents with RAG and LangChain, in under eight hours.
- How to improve prompt design by combining the principles of in-context learning with advanced prompt engineering methodologies.
- LangChain's key principles include tools, components, chat models, chains, and agents.
- How to use RAG, PyTorch, Hugging Face, LLMs, and LangChain technologies for various applications.
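The retrieval step at the heart of RAG can be shown from scratch: embed documents and a query as vectors, rank by cosine similarity, and build an augmented prompt. The course uses LangChain and learned embeddings; this toy version substitutes term-count vectors to stay self-contained:

```python
import numpy as np

def tokenize(text):
    # Lowercase and keep only letters and spaces (toy normalization).
    return "".join(c for c in text.lower() if c.isalpha() or c == " ").split()

docs = [
    "The capital of France is Paris.",
    "Photosynthesis converts light into chemical energy.",
    "The Eiffel Tower is in Paris, France.",
]
vocab = sorted({w for d in docs for w in tokenize(d)})

def embed(text):
    # Term-count vector over a shared vocabulary (stand-in for a
    # learned embedding model).
    words = tokenize(text)
    return np.array([words.count(w) for w in vocab], dtype=float)

doc_vecs = np.array([embed(d) for d in docs])
query = "Where is the Eiffel Tower?"
q = embed(query)

# Cosine similarity between the query and every document.
sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
best = docs[int(sims.argmax())]

prompt = f"Answer using only this context:\n{best}\n\nQuestion: {query}"
print(best)
```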
16. Project: Generative AI Applications with RAG and LangChain
- Gain hands-on experience by developing your own real-world AI application, which you may discuss in interviews.
- Get hands-on experience loading documents with LangChain and applying text splitting techniques with RAG and LangChain to improve model responsiveness.
- Create and configure a vector database to store document embeddings, then write a retriever to retrieve document segments based on queries.
- Create a simple Gradio interface for model interaction, then use LangChain and an LLM to build a QA bot that can answer questions from loaded documents.
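The text-splitting step this project applies before embedding can be sketched as fixed-size chunks with overlap, so context is not lost at chunk boundaries. LangChain provides ready-made splitters; the toy version below just shows the mechanics:

```python
def split_text(text, chunk_size=20, overlap=5):
    """Split `text` into character chunks that overlap by `overlap`."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

chunks = split_text("abcdefghijklmnopqrstuvwxyz" * 2)
print(len(chunks), [len(c) for c in chunks])
```

Each chunk's last `overlap` characters reappear at the start of the next chunk, which is what keeps sentences spanning a boundary retrievable.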