Description
In this course, you will:
- Learn how to solve large computational problems using High Performance Computing (HPC) systems.
- Gain a basic understanding of the Bash command-line environment found on GNU/Linux and other Unix-like systems.
- Get an introduction to HPC systems and supercomputers.
- Learn the basic components of an HPC system.
- Explore the HPC software stack.
- Work with HPC job schedulers and batch systems (PBS and Slurm, with demos).
- Get an introduction to parallel programming (OpenMP, MPI and GPU coding).
Syllabus:
1. Supercomputers and HPC clusters
- A little bit of supercomputing history
- Supercomputing examples
- HPC cluster computers
- Benefits of using cluster computing
2. Components of an HPC system
- Components of an HPC cluster
- Login node(s)
- Compute node(s)
- Master node(s)
- Storage node(s)
3. HPC software stack
- Access to HPC
- Data Transfer
- HPC software list
- HPC software modules (see the short example after this list)
- Job Schedulers
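
To give a flavour of the software-modules topic, here is a short shell sketch assuming a cluster that uses Environment Modules or Lmod; the module name `gcc/12.2.0` is only an illustrative assumption and will differ between sites.

```bash
# List the software modules available on the cluster
module avail

# Load a compiler module (version string is illustrative; check `module avail` first)
module load gcc/12.2.0

# Show what is currently loaded, then unload everything again
module list
module purge
```
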
4. PBS - Portable Batch System
- Introduction to PBS
- PBS basic commands
- PBS `qsub`
- PBS `qstat`
- PBS `qdel`
- PBS `qalter`
- PBS job states
- PBS variables
- A simple PBS job script (sketched below)
- PBS interactive jobs
- PBS arrays
- PBS Matlab example
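
As a preview of the PBS material, here is a minimal sketch of a batch job script. The job name, resource selection and walltime are illustrative placeholders, and the `-l select=...` syntax shown is PBS Pro style; other PBS variants use slightly different resource requests, so check your site's documentation.

```bash
#!/bin/bash
#PBS -N hello_pbs              # job name
#PBS -l select=1:ncpus=4       # one chunk with 4 CPU cores (site-specific)
#PBS -l walltime=00:10:00      # 10-minute time limit
#PBS -j oe                     # merge stdout and stderr into one file

# Run from the directory the job was submitted from
cd "$PBS_O_WORKDIR"

echo "Running on host: $(hostname)"
echo "Job ID: $PBS_JOBID"
```

Such a script would be submitted with `qsub`, monitored with `qstat` and removed with `qdel`, mirroring the commands listed above.
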
5. Slurm - Workload Manager
- Introduction to Slurm
- Slurm commands
- A simple Slurm job (sketched below)
- Slurm distributed MPI and GPU jobs
- Slurm multi-threaded OpenMP jobs
- Slurm interactive jobs
- Slurm array jobs
- Slurm job dependencies
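
A comparable Slurm batch script looks roughly like the sketch below; the job name, resource requests and output file pattern are assumptions to adapt to your cluster.

```bash
#!/bin/bash
#SBATCH --job-name=hello_slurm
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=00:10:00
#SBATCH --output=hello_%j.out   # %j expands to the job ID

echo "Running on host: $(hostname)"
echo "Job ID: $SLURM_JOB_ID"
```

It would be submitted with `sbatch`, monitored with `squeue` and cancelled with `scancel`.
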
6. Parallel programming - OpenMP
- OpenMP
- OpenMP basics
- OpenMP - clauses
- OpenMP - worksharing constructs
- OpenMP - Hello world! (sketched below)
- OpenMP - reduction and parallel `for` loop
- OpenMP - section parallelization
- OpenMP - vector addition
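
For the OpenMP topics, the usual starting point is a "Hello world!" plus a reduction over a parallel `for` loop, roughly as sketched below (compile with an OpenMP-capable compiler, e.g. `gcc -fopenmp`).

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    /* "Hello world!": each thread in the team prints its ID */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }

    /* Parallel for loop with a reduction: sum of 0..999 */
    long sum = 0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 1000; i++) {
        sum += i;
    }
    printf("sum = %ld\n", sum);  /* expected: 499500 */
    return 0;
}
```
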
7. Parallel programming - MPI
- MPI - Message Passing Interface
- MPI program structure
- MPI - Hello world! (sketched below)
- MPI send/receive
- MPI `ping-pong` send and receive example
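
The MPI topics build on a program of roughly this shape, combining the "Hello world!" and a simple send/receive (a sketch; compile with `mpicc` and launch with `mpirun` or the cluster's preferred launcher).

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);

    /* Simple point-to-point message: rank 0 sends an integer to rank 1 */
    if (size >= 2) {
        int value;
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", value);
        }
    }

    MPI_Finalize();
    return 0;
}
```
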
8. Parallel programming - GPU and CUDA
- GPUs - graphics processing units
- GPU Programming - CUDA
- CUDA - Hello world!
- CUDA - vector addition demo (sketched below)
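
The GPU section works toward a CUDA vector-addition kernel along these lines (a sketch compiled with `nvcc`; the array size and launch configuration are illustrative, and managed memory is used only to keep the example short).

```cuda
#include <cstdio>
#include <cuda_runtime.h>

/* Each GPU thread adds one pair of elements */
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    /* Unified (managed) memory avoids explicit host/device copies */
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f (expected 3.0)\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```
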