Mist

| Mist | |
|---|---|
| Installed | Dec 2019 |
| Operating System | Red Hat Enterprise Linux 7.6 |
| Number of Nodes | 54 IBM AC922 |
| Interconnect | Mellanox EDR InfiniBand |
| RAM/Node | 256 GB |
| GPUs/Node | 4 NVIDIA V100-SXM2-32GB |
| Login/Devel Node | mist.scinet.utoronto.ca |
| Vendor Compilers | IBM XL |
| Queue Submission | Slurm |
Warning
Mist is in an early-user/beta testing phase. All instructions below are temporary and subject to change.
Specifications
The Mist cluster is a GPU cluster of 54 IBM AC922 servers, each with 32 IBM POWER9 cores and 4 NVIDIA V100-SXM2-32GB GPUs connected by NVLink. Each node of the cluster has 256 GB of RAM. The nodes are interconnected with EDR InfiniBand, providing GPU-Direct RDMA capability.
Getting started on Mist
Mist is currently in a testing phase. The Mist login node, mist-login01, can be accessed via the Niagara cluster:
ssh -Y MYCCUSERNAME@niagara.scinet.utoronto.ca
ssh -Y mist-login01
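If you log in this way frequently, the two hops can be combined into a single command with OpenSSH's ProxyJump option (-J, available in OpenSSH 7.3 and later); this is a convenience sketch, not an official access method:

ssh -Y -J MYCCUSERNAME@niagara.scinet.utoronto.ca MYCCUSERNAME@mist-login01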
Storage
The filesystem for Mist is shared with the Niagara cluster. See Niagara Storage for more details.
Loading software modules
You have two options for running code on Mist: use existing software, or compile your own. This section focuses on the former.
Other than essentials, all installed software is made available using module commands. These modules set environment variables (PATH, etc.), allowing multiple, conflicting versions of a given package to be available. A detailed explanation of the module system can be found on the modules page.
Common module subcommands are:
- module load <module-name>: load the default version of a particular software.
- module load <module-name>/<module-version>: load a specific version of a particular software.
- module purge: unload all currently loaded modules.
- module spider (or module spider <module-name>): list available software packages.
- module avail: list loadable software packages.
- module list: list loaded modules.
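For example, a typical sequence for setting up a clean environment (using the cuda module described below) is:

module purge                # unload all currently loaded modules
module spider cuda          # find out which cuda versions are available
module load cuda/10.2.89    # load a specific version
module list                 # confirm what is loaded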
Along with modifying common environment variables, such as PATH and LD_LIBRARY_PATH, these modules also create a SCINET_MODULENAME_ROOT environment variable, which can be used to access commonly needed software directories, such as /include and /lib.
There are handy abbreviations for the module commands: ml is the same as module list, and ml <module-name> is the same as module load <module-name>.
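For instance (using the gcc module described below):

ml                  # same as: module list
ml gcc/9.2.0        # same as: module load gcc/9.2.0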
Tips for loading software
- We advise against loading modules in your .bashrc. This can lead to very confusing behaviour under certain circumstances. Our guidelines for .bashrc files can be found here.
- Instead, load modules by hand when needed, or by sourcing a separate script.
- Load run-specific modules inside your job submission script (see the sketch after this list).
- Short names give default versions; e.g. cuda → cuda/10.1.243. It is usually better to be explicit about the versions, for future reproducibility.
- Modules often require other modules to be loaded first. Solve these dependencies by using module spider.
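As an illustration of the last few tips, a job submission script can load explicit module versions itself. This is only a sketch: my_gpu_app is a hypothetical binary, and the exact Slurm directives (GPU request syntax, partitions) for Mist may differ.

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:4        # assumed syntax for requesting the node's 4 GPUs
#SBATCH --time=01:00:00

module purge
module load cuda/10.2.89    # explicit version, for future reproducibility
./my_gpu_app                # hypothetical application binary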
Available compilers and interpreters
- The cuda module has to be loaded first for GPU software.
- For most compiled software, one should use the GNU compilers (gcc for C, g++ for C++, and gfortran for Fortran). Loading an at (IBM Advance Toolchain) or gcc module makes these available.
- The IBM XL compiler suite (xlc_r, xlc++_r, xlf_r) is also available, if you load one of the xl modules.
- To compile MPI code, you must additionally load an openmpi or spectrummpi module (see the example after this list).
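For example, a GCC-based MPI build might look as follows; hello_mpi.c is a hypothetical source file, and the exact compiler/MPI module combination may differ:

module load gcc/9.2.0 openmpi/4.0.2
mpicc hello_mpi.c -o hello_mpi      # use mpic++ for C++, mpif90 for Fortran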
CUDA
The currently installed CUDA Toolkits are 10.1.243 and 10.2.89:
module load cuda/<version>
The current NVIDIA driver version is 440.33.01.
Documentation and API reference information for the CUDA Toolkit can be found here: http://docs.nvidia.com/cuda/index.html
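A minimal compile sketch, where hello.cu is a hypothetical source file; the V100 GPUs have compute capability 7.0, hence -arch=sm_70:

module load cuda/10.2.89
nvcc -arch=sm_70 hello.cu -o hello
./hello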
GNU Compilers
A core GCC 7.4.0 is loaded automatically when a CUDA module is loaded. More recent versions of the GNU Compiler Collection (C/C++/Fortran), with enhancements for the POWER9 CPU, are provided by the IBM Advance Toolchain and GCC modules: at/11.0 provides GCC 7.4.1, at/12.0 provides GCC 8.2.0, and a gcc/9.2.0 module is available for the newest GCC.
More information about the IBM Advance Toolchain can be found here: https://developer.ibm.com/linuxonpower/advance-toolchain/
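For example, to build with the Advance Toolchain GCC tuned for the POWER9 CPU (hello.c is a hypothetical source file):

module load at/12.0
gcc -O2 -mcpu=power9 hello.c -o hello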
IBM XL Compilers
To load the native IBM xlc/xlc++ and xlf (Fortran) compilers, run:
module load xl/16.1.1
IBM XL Compilers are enabled for use with NVIDIA GPUs, including support for OpenMP GPU offloading and integration with NVIDIA's nvcc command to compile host-side code for the POWER9 CPU.
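For example, an OpenMP GPU-offload build might look like the following sketch; hello_omp.c is a hypothetical source file, -qsmp=omp enables OpenMP, and -qoffload enables offloading to the GPUs (remember that the cuda module has to be loaded for GPU software):

module load xl/16.1.1
module load cuda/10.2.89
xlc_r -qsmp=omp -qoffload hello_omp.c -o hello_omp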
Information about the IBM XL Compilers can be found at the following links:
OpenMPI
The openmpi/4.0.2 module is available for different compilers, including GCC and XL. The spectrummpi/10.03 module provides IBM Spectrum MPI.
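Since the MPI modules depend on a compiler module being loaded first, use module spider to find the right combination, e.g.:

module spider openmpi/4.0.2    # lists which compiler modules (gcc, xl, ...) must be loaded first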
PGI
To load the PGI compiler and its own OpenMPI environment, run:
module load pgi/19.10
module load openmpi/3.1.3-pgi-19.10
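For example, an OpenACC build targeting the V100 GPUs might look like this sketch, where hello_acc.c is a hypothetical source file and cc70 selects compute capability 7.0:

pgcc -acc -ta=tesla:cc70 hello_acc.c -o hello_acc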