Number of hours
- Lectures 36.0
- Projects -
- Tutorials -
- Internship -
- Laboratory works -
- Written tests -
ECTS
ECTS 6.0
Goal(s)
Today, parallel computing is omnipresent across a large spectrum of computing platforms, from processor cores and GPUs up to supercomputers and cloud platforms. The largest supercomputers gather millions of processing units and are heading towards Exascale (a quintillion, or 10^18, floating-point operations per second; see http://top500.org). While parallel processing initially targeted scientific computing, it is now part of many domains, such as Big Data analytics and Deep Learning. But making efficient use of these parallel resources requires a deep understanding of architectures, systems, parallel programming models, and parallel algorithms.
This class will progressively enable attendees to master advanced parallel processing. It prepares students to pursue an M2R internship on related topics in an academic or industrial research team, to then pursue a PhD in computer science, or to work in companies on parallel or large-scale systems.
Bruno RAFFIN
Content(s)
This class is organized around two main blocks:
Overview of parallel systems:
Introduction to parallelism from GPU to supercomputers.
Hardware and system considerations for parallel processing (multi-core architectures, process and thread handling, cache efficiency, remote data access, atomic instructions).
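As a small illustration of the thread-handling and atomic-update issues listed above, the following Python sketch (illustrative only, not course material) protects a shared counter with a lock, playing the role that atomic instructions or critical sections play in C/C++:

```python
import threading

def locked_increments(n_threads=4, n_iters=10000):
    """Increment a shared counter from several threads.

    The lock stands in for the atomic instructions discussed above:
    without it, the read-modify-write updates from different threads
    can interleave and lose increments.
    """
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(n_iters):
            with lock:  # critical section: one thread at a time
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

With the lock in place the result is deterministic: 4 threads times 10000 iterations always yields 40000.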
Parallel programming: message passing, one-sided communications, task-based programming, work stealing based runtimes (MPI, Cilk, TBB, OpenMP).
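The fork-join, task-based style that Cilk, TBB, and OpenMP tasks provide can be sketched in Python with futures (a hypothetical toy example; the `cutoff` parameter is there to control task granularity):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, pool, cutoff=1000):
    """Divide-and-conquer sum in the fork-join style of Cilk/TBB/OpenMP tasks.

    Below the cutoff we sum sequentially (granularity control); above it
    we "spawn" the left half as a task, recurse on the right half, then
    "sync" by waiting on the future.
    """
    if len(data) <= cutoff:
        return sum(data)
    mid = len(data) // 2
    left = pool.submit(parallel_sum, data[:mid], pool, cutoff)  # spawn
    right = parallel_sum(data[mid:], pool, cutoff)
    return left.result() + right                                # sync

with ThreadPoolExecutor(max_workers=4) as pool:
    total = parallel_sum(list(range(4000)), pool)
```

In Cilk or TBB a work-stealing runtime would schedule these tasks over the cores; here a plain thread pool stands in for it.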
Modeling of parallel programs and platforms. Locality, granularity, memory space, communications.
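As a standard illustration of such cost models (not this course's specific notation), Amdahl's law bounds the speedup S(p) on p processors when only a fraction f of the work can be parallelized:

```latex
S(p) \;=\; \frac{T(1)}{T(p)} \;\le\; \frac{1}{(1 - f) + f/p},
\qquad
\lim_{p \to \infty} S(p) \;=\; \frac{1}{1 - f}.
```

For example, f = 0.9 caps the achievable speedup at 10, whatever the number of processors; this is why granularity and the sequential fraction matter so much in practice.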
Parallel algorithms, collective communications, topology aware algorithms.
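A classic collective algorithm is the recursive-doubling allreduce; the sketch below simulates its exchange pattern sequentially over p ranks (illustrative code, not an MPI implementation):

```python
def allreduce_recursive_doubling(values):
    """Simulate a recursive-doubling allreduce over p ranks (p a power of two).

    In round k, rank i exchanges its partial sum with partner i XOR 2^k
    and adds the received value; after log2(p) rounds every rank holds
    the global sum. This is the classic O(log p) collective pattern.
    """
    p = len(values)
    assert p & (p - 1) == 0, "p must be a power of two"
    current = list(values)
    step = 1
    while step < p:
        # each rank i pairs with partner i ^ step and sums both values
        current = [current[i] + current[i ^ step] for i in range(p)]
        step <<= 1
    return current
```

With 4 ranks holding [1, 2, 3, 4], two rounds suffice and every rank ends up with 10.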
Scheduling: list algorithms; partitioning techniques. Application to resource management (PBS, LSF, SGE, OAR).
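Greedy list scheduling, the simplest of the list algorithms mentioned above, can be sketched as follows (illustrative Python; Graham's classic result shows this heuristic is within a factor 2 - 1/m of the optimal makespan on m machines):

```python
import heapq

def list_schedule(durations, m):
    """Greedy list scheduling: assign each job, in list order, to the
    currently least-loaded of m machines.

    Returns the resulting makespan and the machine assigned to each job.
    """
    loads = [(0.0, machine) for machine in range(m)]
    heapq.heapify(loads)
    assignment = []
    for d in durations:
        load, machine = heapq.heappop(loads)  # least-loaded machine
        assignment.append(machine)
        heapq.heappush(loads, (load + d, machine))
    makespan = max(load for load, _ in loads)
    return makespan, assignment
```

For jobs [3, 3, 2, 2, 2] on 2 machines the greedy order yields a makespan of 7, while the optimum is 6 ({3, 3} and {2, 2, 2}), illustrating the approximation gap.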
Large scale scientific computing: parallelization of Lagrangian and Eulerian solvers, parallel data analytics and scientific visualization.
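Parallelizing an Eulerian (grid-based) solver typically relies on domain decomposition with halo (ghost-cell) exchange; the sketch below simulates this sequentially for a 1D Jacobi-style diffusion step and can be checked against the undecomposed update (illustrative code, not from the course):

```python
def jacobi_step(u):
    """One Jacobi/heat-diffusion step on a 1D grid with fixed boundaries."""
    return [u[0]] + [(u[i - 1] + u[i + 1]) / 2 for i in range(1, len(u) - 1)] + [u[-1]]

def jacobi_step_decomposed(u, parts=2):
    """Same step, computed subdomain by subdomain with one-cell halos.

    Each subdomain copies its neighbours' boundary cells (the "halo
    exchange", done with messages in a real MPI solver), updates its
    interior, and the pieces are concatenated back.
    """
    n = len(u)
    bounds = [(k * n) // parts for k in range(parts + 1)]
    out = []
    for k in range(parts):
        lo, hi = bounds[k], bounds[k + 1]
        left = u[lo - 1] if lo > 0 else None    # ghost cell from left neighbour
        right = u[hi] if hi < n else None       # ghost cell from right neighbour
        padded = ([left] if left is not None else []) + u[lo:hi] \
                 + ([right] if right is not None else [])
        off = 1 if left is not None else 0      # offset of lo inside padded
        for j, i in enumerate(range(lo, hi)):
            if i == 0 or i == n - 1:
                out.append(u[i])                # fixed boundary condition
            else:
                out.append((padded[j + off - 1] + padded[j + off + 1]) / 2)
    return out
```

Since each subdomain only needs one cell from each neighbour per step, communication volume grows with the surface of the subdomains while computation grows with their volume, which is the key to scalability.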
AI and HPC: how parallelism is used at different levels to accelerate machine learning and deep learning using supercomputers.
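Data parallelism, the most common way deep learning uses supercomputers, splits each batch across workers and averages their gradients (an allreduce in practice). A toy sketch for a 1-D linear model (hypothetical names, illustrative only):

```python
def grad_mse(w, xs, ys):
    """Gradient of the mean squared error of the 1-D linear model y = w*x."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def data_parallel_grad(w, xs, ys, workers=2):
    """Each worker computes the gradient on its shard of the batch, then
    the gradients are averaged (an allreduce in a real system).

    With equal-sized shards this reproduces the full-batch gradient exactly.
    """
    n = len(xs)
    shard = n // workers
    grads = [grad_mse(w, xs[k * shard:(k + 1) * shard],
                         ys[k * shard:(k + 1) * shard])
             for k in range(workers)]
    return sum(grads) / workers
```

Because the per-shard gradients are averaged, scaling to more workers changes where the arithmetic happens but not (up to floating-point rounding) the result of a synchronous gradient step.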
Functional parallel programming:
We propose to study a clean and modern approach to the design of parallel algorithms: functional programming. Functional languages are known to provide a different and cleaner programming experience, shifting the focus from "how to do it" to "what to do".
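For instance, expressing a computation as a pure map followed by a reduction with an associative operator specifies only "what to do"; a runtime is then free to evaluate the reduction as a balanced tree of depth O(log n) in parallel. A sequential Python sketch of that tree shape (illustrative only, not course material):

```python
from functools import reduce

def tree_reduce(op, xs):
    """Reduce a non-empty list with an associative operator by halving.

    This is the shape a functional runtime can evaluate in parallel
    (depth O(log n)), instead of the sequential left fold (depth O(n)).
    """
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return op(tree_reduce(op, xs[:mid]), tree_reduce(op, xs[mid:]))

# "what to do": a pure map followed by an associative reduction
squares_sum = tree_reduce(lambda a, b: a + b, [x * x for x in range(1, 9)])

# the sequential left fold computes the same value, but with depth O(n)
seq_sum = reduce(lambda a, b: a + b, [x * x for x in range(1, 9)])
```

Associativity of the operator is exactly what licenses the change of evaluation order, which is why cost models for such programs reason about reduction trees rather than loops.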
Students should have some basic knowledge of parallel programming (some MPI or OpenMP, for instance) and experience in at least one low-level language (typically C or C++). No specific skills in systems, processor architecture, or theoretical models are required beyond the basic training that any computer science M1 student should have received. Students should have a taste for experimenting with advanced computer systems and be ready to be exposed to a few theoretical models (mainly cost models for reasoning about parallel algorithms).
The exam is given in English only.
The course exists in the following branches:
- Curriculum - Master in Computer Science - Semester 9 (this course is given in English only)
Course ID : WMM9MO59
Course language(s): English