Description
In this course, you will:
- Understand how prompt design influences ChatGPT outputs.
- Master key LLM controls (system messages, temperature, top_p, max_tokens, penalties), as shown in the first sketch after this list.
- Learn the different types of prompts (instruction, few-shot, chain-of-thought, role, etc.); a few-shot example follows this list.
- Grasp tokens, cost, and latency trade-offs for efficiency.
- Design, test, and iterate prompts across multiple use cases (summarization, coding, data extraction, customer support, content generation).
- Build a library of reusable prompt templates.
- Apply chaining methods to connect multiple AI steps into workflows (see the chaining sketch after this list).
- Use tools and APIs (ChatGPT Playground, LangChain, PromptLayer) to automate workflows.
- Measure prompts with qualitative and quantitative metrics (accuracy, F1, BLEU/ROUGE, user satisfaction); a scoring sketch follows this list.
- Run A/B testing to compare prompt variations.
- Optimize for cost and latency in real deployments.
- Understand why hallucinations happen and how to mitigate them.
- Implement guardrails (refusal prompts, style constraints, profanity/PII filters); a PII-redaction sketch follows this list.
- Apply legal, privacy, and safety considerations when deploying AI in production.
- Add logging, caching, and observability for scaling.
- Plan failover strategies and human-in-the-loop safeguards.
- Trim token counts and few-shot examples to keep prompts lean and efficient.
- Explore prompt tuning vs. instruction tuning.
- Learn retrieval-augmented generation (RAG) basics, illustrated in the final sketch below.
- Experiment with multimodal prompts (text + image).
- Get an intro to RLHF and future LLM research directions.
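To make the LLM controls concrete, here is a minimal sketch using the official `openai` Python client (v1 interface); the model name and parameter values are illustrative placeholders, not recommended settings.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works
    messages=[
        # The system message sets role and tone before the user speaks.
        {"role": "system", "content": "You are a concise technical editor."},
        {"role": "user", "content": "Explain temperature in one paragraph."},
    ],
    temperature=0.2,        # lower = more deterministic sampling
    top_p=1.0,              # nucleus-sampling cutoff; tune this OR temperature
    max_tokens=150,         # hard cap on completion length (cost and latency)
    presence_penalty=0.0,   # >0 nudges the model toward new topics
    frequency_penalty=0.3,  # >0 discourages verbatim repetition
)
print(response.choices[0].message.content)
```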
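The few-shot pattern from the prompt-types bullet is just a conversation seeded with worked examples before the real input. A sketch with invented labels and examples:

```python
# Few-shot sentiment classification: two worked examples, then the real query.
few_shot_messages = [
    {"role": "system", "content": "Classify the sentiment as positive or negative."},
    {"role": "user", "content": "The battery lasts all day."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "It crashed twice during setup."},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "Setup was painless and fast."},  # the real input
]
# Pass few_shot_messages as the messages argument of the call shown above.
```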
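Chaining simply feeds one call's output into the next. The `complete()` helper below is hypothetical (a thin wrapper over the same client call), and the sample document is invented; the point is the summarize-then-translate hand-off.

```python
from openai import OpenAI

client = OpenAI()

def complete(prompt: str) -> str:
    """Hypothetical helper: one user message in, model text out."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

document = "Q3 revenue grew 12% while churn fell; the team shipped two features."

# Step 1: summarize the source text.
summary = complete(f"Summarize in one sentence:\n\n{document}")

# Step 2: the first step's output becomes the second step's input.
print(complete(f"Translate into French:\n\n{summary}"))
```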
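Quantitative measurement can start small: score prompt outputs against a labeled test set. This sketch uses scikit-learn; the gold labels and predictions are toy data standing in for real model outputs.

```python
from sklearn.metrics import accuracy_score, f1_score

# Gold labels vs. what the prompt actually produced on a tiny test set.
gold = ["positive", "negative", "positive", "negative", "positive"]
pred = ["positive", "negative", "negative", "negative", "positive"]

print("accuracy:", accuracy_score(gold, pred))            # 0.8
print("f1:", f1_score(gold, pred, pos_label="positive"))  # 0.8
```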
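One guardrail from the list above, in miniature: a regex filter that redacts obvious PII before text is logged or sent to a model. The patterns are deliberately simple examples, not production-grade detection.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact_pii(text: str) -> str:
    """Mask emails and US-style phone numbers with placeholder tags."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

print(redact_pii("Reach me at jane@example.com or 555-867-5309."))
# -> Reach me at [EMAIL] or [PHONE].
```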
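Finally, the RAG bullet in one breath: retrieve relevant context, then place it in the prompt. The toy retriever below ranks documents by word overlap; real systems swap in vector search, but the prompt-assembly step is the same idea.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))[:k]

docs = [
    "Temperature controls randomness in token sampling.",
    "top_p restricts sampling to the most probable tokens.",
    "max_tokens caps the length of the completion.",
]
question = "What does temperature control?"
context = "\n".join(retrieve(question, docs))

# Retrieved passages are prepended so the model answers from them, not memory.
rag_prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(rag_prompt)  # send rag_prompt through the chat client shown earlier
```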