Parallel Computing 101: Part 2
Time: Tuesday, 10 November 2020, 2:30pm - 6:30pm EST
Description: This tutorial provides a comprehensive overview of parallel computing, emphasizing those aspects most relevant to users. It is suitable for new users, managers, students, and anyone seeking an overview of parallel computing. It discusses hardware-software interaction, with an emphasis on standards, portability, and systems that are widely available.
The tutorial surveys basic parallel computing concepts, using examples drawn from engineering, scientific, and machine learning problems. These examples illustrate the use of MPI on distributed-memory systems, OpenMP on shared-memory systems, MPI+OpenMP on hybrid systems, and CUDA and compiler directives on GPUs and accelerators. It discusses numerous parallelization and load-balancing approaches, as well as software engineering and performance-improvement aspects, including state-of-the-art tools.
The tutorial helps attendees make intelligent decisions by covering the primary options that are available, explaining how the different components work together and their most suitable uses. Extensive pointers to web-based resources are provided to facilitate follow-up studies.