Parallel Debugging with DDT


ARM DDT Parallel Debugger

For parallel debugging, SciNet has DDT ("Distributed Debugging Tool") installed on all our clusters. DDT is a powerful, GUI-based commercial debugger by ARM (formerly by Allinea). It supports the programming languages C, C++, and Fortran, and the parallel programming paradigms MPI, OpenMP, and CUDA. DDT can also be very useful for serial programs. DDT provides an intuitive graphical user interface, but it does need graphics support, so make sure to use the '-X' or '-Y' argument to your ssh command so that the X11 graphics can find their way back to your screen ("X forwarding").
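For example, from your own machine you might log in to Niagara with X forwarding enabled as follows (replace 'myusername' with your own user name):

ssh -Y myusername@niagara.scinet.utoronto.ca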

The currently installed version of DDT on Niagara is 18.2. The DDT license allows up to a total of 128 processes to be debugged simultaneously (shared among all users).

To use DDT, ssh in with X forwarding enabled, load your usual compiler and MPI modules, compile your code with '-g', and load the ddt module:

module load ddt
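For example, for an MPI code written in C, the full sequence might look something like this (the module versions and file names are only placeholders for illustration; use the ones appropriate for your own code):

module load intel/2018.2 openmpi/3.1.0
mpicc -g -O0 mpi_example.c -o mpi_example
module load ddt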

You can then start ddt with one of the following commands:

ddt

ddt <executable compiled with -g flag>

ddt <executable compiled with -g flag> <arguments>

ddt -n <numprocs> <executable compiled with -g flag> <arguments>
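For instance, to debug a hypothetical MPI executable called mpi_example on 4 processes with a single input-file argument, one could start DDT as:

ddt -n 4 ./mpi_example input.txt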

The first time you run DDT, it will set up configuration files. It puts these in the hidden directory $SCRATCH/.allinea.

Note that most users will debug on the login nodes of the cluster (nia-login0{1-3,5-7}), but this is only appropriate if the number of MPI processes and threads is small and the memory usage is not too large. If your debugging requires more resources, you should run it through the queue. On Niagara, an interactive debug session (see below) will suit most debugging purposes.

ARM MAP Parallel Profiler

MAP is a parallel (MPI) performance analyser with a graphical interface. It is part of the same module as DDT, so you need to load the ddt module to use MAP (together, DDT and MAP form the ARM Forge bundle).

It has a job startup interface similar to that of DDT.

To be more precise, MAP is a sampling profiler with adaptive sampling rates that keep the volume of collected data under control. Samples are aggregated at all levels to preserve the key features of a run without drowning in data. A folding code and stack viewer allows you to zoom in on the time spent on individual lines and to zoom back out to see the big picture across nested routines. MAP measures memory usage, floating-point computations, MPI usage, and I/O.

The maximum number of MPI processes that our MAP license supports is 64 (shared simultaneously among all users).

MAP supports both interactive and batch modes for gathering profile data.

Interactive profiling with MAP

Startup is much the same as for DDT:

map

map <executable compiled with -g flag>

map <executable compiled with -g flag> <arguments>

map -n <numprocs> <executable compiled with -g flag> <arguments>
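For instance, to profile the same hypothetical mpi_example executable on 4 processes:

map -n 4 ./mpi_example input.txt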

After you have started the code and it has run to completion, MAP will show the results. It will also save these results in a file with the extension .map. This allows you to load the result again into the graphical user interface at a later time.

Non-interactive profiling with MAP

It is also possible to run map non-interactively by passing the -profile flag, e.g.

map -profile -n <numprocs> <executable compiled with -g flag> <arguments>

For instance, this could be used in a batch job with a job script like the following:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --time=1:00:00
#SBATCH --job-name=mpi_job
#SBATCH --output=mpi_output_%j.txt
#SBATCH --mail-type=FAIL

# load the same modules that were used to compile the code
module load intel/2018.2
module load openmpi/3.1.0
module load ddt

# profile the code non-interactively on 64 MPI processes (2 nodes x 32 tasks)
map -profile -n 64 ./mpi_example
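Assuming this script is saved as, say, mpi_map_job.sh (the name is only for illustration), it would be submitted in the usual way:

sbatch mpi_map_job.sh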

This will just create the .map file, which you can inspect after the job has finished with

map MAPFILE
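MAP generates the name of the .map file from the executable name, the number of processes, and a time stamp; for the hypothetical example above, opening it might look something like (the file name shown is only illustrative):

map mpi_example_64p_2019-01-14_12-00.map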

Parallel Debugging and Profiling in an Interactive Session on Niagara

By requesting a job from the 'debug' partition on Niagara, you can get access to at most 4 nodes, i.e., a total of 160 physical cores (or 320 virtual cores, using hyper-threading), for your exclusive, interactive use. Starting from a Niagara login node, you would request a debug session with the following command:

debugjob <numberofnodes>

where <numberofnodes> is 1, 2, 3, or 4. The session will last 60, 45, 30, or 15 minutes, respectively, depending on the number of nodes requested.
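For example, to get two nodes (80 cores) for 45 minutes:

debugjob 2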

This command will get you a prompt on a compute node (or on the 'head' node if you've asked for more than one node). Reload any modules that your application needs (e.g. module load intel openmpi), as well as the ddt module.
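For example, matching the modules used in the job script above:

module load intel/2018.2 openmpi/3.1.0
module load ddt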

Note that on compute nodes, $HOME is read-only, so unless your code is on $SCRATCH, you cannot recompile it (with '-g') in the debug session; this should have been done on a login node.

If the time restrictions of these debugjobs are too strict, you need to request nodes from the regular queue. In that case, you will want to make sure that X11 graphics gets forwarded properly.