In this four-day online workshop, you will learn how to accelerate your applications with OpenACC, CUDA C/C++ and CUDA Python on NVIDIA GPUs.
The workshop combines the Fundamentals of Accelerated Computing lectures (OpenACC, CUDA C/C++ and CUDA Python), each targeting a single GPU, with the lecture Accelerating CUDA C++ Applications with Multiple GPUs.
The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. The exercises will be done on a fully configured GPU-accelerated workstation in the cloud.
The workshop is co-organised by Leibniz Supercomputing Centre (LRZ), Erlangen National High Performance Computing Center (NHR@FAU) and NVIDIA Deep Learning Institute (DLI). NVIDIA DLI offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning.
All instructors are NVIDIA certified University Ambassadors.
1st day: Fundamentals of Accelerated Computing with OpenACC
2nd day: Fundamentals of Accelerated Computing with CUDA C/C++
This lecture teaches the fundamental tools and techniques for accelerating C/C++ applications to run on massively parallel GPUs with CUDA. You'll learn how to write and launch CUDA kernels, configure their parallel execution, optimise memory migration between the CPU and the GPU, and apply the whole workflow to a new task: accelerating a fully functional but CPU-only particle simulator for observable, massive performance gains. At the end of the lecture, you will be able to create new GPU-accelerated applications on your own.
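The workflow described above can be sketched in a few lines of CUDA C++. The following is a minimal, hypothetical vector-addition example (not taken from the course material): it uses unified (managed) memory so that data migrates between CPU and GPU on demand, and an execution configuration of 256-thread blocks covering all elements.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each thread computes one element of the sum.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];  // guard against out-of-range threads
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified memory: accessible from both CPU and GPU, migrated on demand.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Execution configuration: enough 256-thread blocks to cover n elements.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU before reading results

    printf("c[0] = %f\n", c[0]);  // expect 3.000000 (1.0 + 2.0)
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compiled with `nvcc`, this already exercises the full cycle the lecture teaches: allocate, initialise, launch, synchronise, verify, free.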
3rd day: Fundamentals of Accelerated Computing with CUDA Python
This lecture explores how to use Numba, the just-in-time, type-specialising Python function compiler, to accelerate Python programs to run on massively parallel NVIDIA GPUs.
Upon completion, you’ll be able to use Numba to compile and launch CUDA kernels to accelerate your Python applications on NVIDIA GPUs.
4th day: Accelerating CUDA C++ Applications with Multiple GPUs
Computationally intensive CUDA C++ applications in high-performance computing, data science, bioinformatics, and deep learning can be accelerated by using multiple GPUs, which can increase throughput and/or decrease total runtime. When combined with the concurrent overlap of computation and memory transfers, computation can be scaled across multiple GPUs without increasing the cost of memory transfers. For organisations with multi-GPU servers, whether in the cloud or on NVIDIA DGX systems, these techniques enable you to achieve peak performance from GPU-accelerated applications. It is also important to implement these single-node, multi-GPU techniques before scaling your applications across multiple nodes.
This lecture covers how to write CUDA C++ applications that efficiently and correctly utilise all available GPUs in a single node, dramatically improving the performance of your applications and making the most cost-effective use of systems with multiple GPUs.
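As an illustration of the pattern the lecture covers, here is a hypothetical sketch (not from the course material) of spreading a data-parallel job over all GPUs in a node. The kernel, function names and even chunking scheme are assumptions for illustration; the key idea is that `cudaMemcpyAsync` and kernel launches on a stream return immediately, so the loop keeps every GPU busy at once and transfers overlap with computation.

```cuda
#include <cuda_runtime.h>

// Hypothetical kernel: doubles each element of its chunk.
__global__ void process(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

void runOnAllGpus(float *host, int n) {
    int numGpus = 0;
    cudaGetDeviceCount(&numGpus);
    int chunk = n / numGpus;  // assumes n divides evenly, for brevity

    for (int d = 0; d < numGpus; ++d) {
        cudaSetDevice(d);  // subsequent calls target GPU d
        float *dev;
        cudaMalloc(&dev, chunk * sizeof(float));
        cudaStream_t s;
        cudaStreamCreate(&s);
        // All three operations are asynchronous: the loop moves on to the
        // next GPU while this one copies and computes.
        cudaMemcpyAsync(dev, host + d * chunk, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, s);
        process<<<(chunk + 255) / 256, 256, 0, s>>>(dev, chunk);
        cudaMemcpyAsync(host + d * chunk, dev, chunk * sizeof(float),
                        cudaMemcpyDeviceToHost, s);
        // (freeing dev and destroying s after synchronisation omitted here)
    }
    // Wait for every device to finish before using the results.
    for (int d = 0; d < numGpus; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
    }
}
```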
After you are accepted, please create an account at https://learn.nvidia.com/join
Ensure your laptop / PC will run smoothly by going to http://websocketstest.com/
Make sure that WebSockets work for you: under Environment, "WebSockets supported" should read Yes, and under WebSockets (Port 80), the Data Receive, Send and Echo tests should all show Yes.
If there are issues with WebSockets, try updating your browser.
The NVIDIA Deep Learning Institute delivers hands-on training for developers, data scientists, and engineers. The program is designed to help you get started with training, optimising, and deploying neural networks to solve real-world problems across diverse industries such as self-driving cars, healthcare, online services, and robotics.
Technical background and basic C/C++ programming skills.
For the 3rd day, basic knowledge of Python (see https://www.python.org/about/gettingstarted/) and NumPy (https://numpy.org/) is required.
English
Dr. Momme Allalen (LRZ), Dr. Sebastian Kuckuk (NHR@FAU), Dr. Volker Weinberg (LRZ)
The course is open and free of charge for academic participants from the Member States of the European Union (EU) and Associated Countries to the Horizon 2020 programme.
Please register with your official e-mail address to prove your affiliation.
See Withdrawal
For registration for LRZ courses and workshops we use the service edoobox from Etzensperger Informatik AG (www.edoobox.com). Etzensperger Informatik AG acts as processor and we have concluded a Data Processing Agreement with them.
Online Course | GPU Programming Workshop |
Number | hdli1w24 |
Available places | 34 |
Date | 03.02.2025 – 06.02.2025 |
Price | EUR 0.00 |
Location | ONLINE |
Room | |
Registration deadline | 27.01.2025 23:59 |
[email protected] |
No. | Date | Time | Teacher | Location | Room | Description |
---|---|---|---|---|---|---|
1 | 03.02.2025 | 10:00 – 17:00 | Volker Weinberg | ONLINE | | Lecture |
2 | 04.02.2025 | 10:00 – 17:00 | Momme Allalen | ONLINE | | Lecture |
3 | 05.02.2025 | 10:00 – 17:00 | Sebastian Kuckuk | ONLINE | | Lecture |
4 | 06.02.2025 | 10:00 – 17:00 | Momme Allalen | ONLINE | | Lecture |