Abdelkader Baggag, Professeur agrégé / Associate Professor
Pavillon Adrien-Pouliot, room 2935
Tel. 418 656-2869
Fax 418 656-5343
Fields of interest

  • High-performance computing
  • Krylov-based methods for the solution of large linear systems
  • Robust preconditioners
  • The eXtended finite element method for the treatment of discontinuities
  • The discontinuous Galerkin method
  • Scientific computing in general
Teaching in high-performance computing

In this course, I teach the essentials researchers need to take advantage of the power of modern machines, and I make an extra effort to guide them through parallelizing, debugging and optimizing their own codes.

Parallel algorithms rarely scale to a machine's peak speed, and the programming burden on parallel machines remains heavy. Applications must therefore be programmed to exploit parallelism as efficiently as possible. Today, the responsibility for achieving the vision of scalable parallelism rests with the application developers.

This course presents the state of the art in parallel computing and links theory to applications through demonstrations and hands-on training. It should be of interest to engineers, programmers and code developers.

Course Modules and Description:

  • Parallel Computing Explained: This module is an introduction to parallel computing. It provides a resource that is useful to both beginners and more experienced users.
  • Introduction to Parallel Programming using MPI: This module is an introduction to parallel programming with the Message Passing Interface (MPI), a standard library of subroutines and function calls used to implement message-passing programs.
  • Intermediate MPI + MPI-2: This module covers “intermediate” topics in MPI, including the MPI-2 extensions, and expands on the knowledge acquired in the previous module.
  • Multi-level Parallel Programming: This module describes hybrid programming, combining OpenMP within compute nodes and MPI across nodes.
  • Applications:
  1. Domain Decomposition Techniques
  2. Parallel Implementation of the Discontinuous Galerkin Method for Aeroacoustics (Explicit)
  3. Parallel Iterative Solvers and Preconditioners (Implicit)
  4. Parallel mathematical libraries

Where applicable, each module begins with a list of objectives; includes an introduction describing the best-suited applications or algorithms, typical uses, detailed descriptions of routines, and many examples; and ends with a test that relates back to the objectives.
