Niagara Quickstart

Niagara
Installed Jan 2018
Operating System CentOS 7.4
Number of Nodes 1500 nodes (60,000 cores)
Interconnect Mellanox Dragonfly+
RAM/Node 188 GiB / 202 GB
Cores/Node 40 (80 hyperthreads)
Login/Devel Node niagara.scinet.utoronto.ca
Vendor Compilers icc (C), icpc (C++), ifort (Fortran)
Queue Submission Slurm

Specifications

The Niagara cluster is a large cluster of 1500 Lenovo SD350 servers, each with 40 Intel "Skylake" cores at 2.4 GHz. The peak performance of the cluster is 3.02 PFlops delivered / 4.6 PFlops theoretical (which would have placed it at #42 on the TOP500 in Nov 2017). Each node has 188 GiB / 202 GB of RAM (at least 4 GiB/core for user jobs). Being designed for large parallel workloads, it has a fast interconnect consisting of EDR InfiniBand in a Dragonfly+ topology with Adaptive Routing. The compute nodes are accessed through a queueing system that allows jobs with a minimum of 15 minutes and a maximum of 12 or 24 hours, and favours large jobs.

Using Niagara: Logging in

If you are new to SciNet and belong to a group whose primary PI does not have a RAC allocation, you will first need to follow the old route of requesting a SciNet Consortium Account on the CCDB site to gain access to Niagara.

Otherwise, as with all SciNet and CC (Compute Canada) compute systems, access to Niagara is done via ssh (secure shell) only. Just open a terminal window (e.g. MobaXTerm on Windows), then ssh into the Niagara login nodes with your CC credentials:

$ ssh -Y MYCCUSERNAME@niagara.scinet.utoronto.ca

or

$ ssh -Y MYCCUSERNAME@niagara.computecanada.ca
  • The Niagara login nodes are where you develop, edit, compile, prepare and submit jobs.
  • These login nodes are not part of the Niagara compute cluster, but have the same architecture, operating system, and software stack.
  • The optional -Y is needed to open windows from the Niagara command-line onto your local X server.
  • To run on Niagara's compute nodes, you must submit a batch job.

If you cannot log in, be sure first to check the System Status on this site's front page.

Locating your directories

home and scratch

You have a home and scratch directory on the system, whose locations will be given in the form

$HOME=/home/g/groupname/myccusername

$SCRATCH=/scratch/g/groupname/myccusername

For example:

 nia-login07:~$ pwd
 /home/s/scinet/rzon
 nia-login07:~$ cd $SCRATCH
 nia-login07:rzon$ pwd
 /scratch/s/scinet/rzon

NOTE: home is read-only on compute nodes.

project and archive

Users from groups with RAC storage allocation will also have a project and/or archive directory.

$PROJECT=/project/g/groupname/myccusername

$ARCHIVE=/archive/g/groupname/myccusername

NOTE: Currently archive space is available only via HPSS

IMPORTANT: Future-proof your scripts

Use the environment variables (HOME, SCRATCH, PROJECT, ARCHIVE) instead of the actual paths! The paths may change in the future.
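For example, in a job script, refer to storage by variable rather than by literal path (a minimal sketch; myproject is a placeholder directory name):

 cd $SCRATCH/myproject                          # portable; survives path changes
 # not: cd /scratch/g/groupname/myccusername/myproject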

Data Management

Migration to Niagara

Migration for Existing Users of the GPC

Niagara is replacing the General Purpose Cluster (GPC) and the Tightly Coupled Cluster (TCS) at SciNet. The TCS was decommissioned last fall, and the compute nodes of the GPC were decommissioned on April 21, 2018, while the storage attached to the GPC will be decommissioned on May 30, 2018.

Active GPC Users got access to the new system, Niagara, on April 9, 2018.

Users' home and project folders were last copied over from the GPC to Niagara on April 5th, 2018, except for files whose names start with a period that were in their home directories (these files were never synced).

It is the user's responsibility to copy over data generated on the GPC after April 5th, 2018.

Data stored in scratch has also not been transferred automatically. Users are to clean up their scratch space on the GPC as much as possible (remember, it is temporary data!). Then they can transfer what they need using the datamover nodes.

To enable this transfer, there will be a short period during which you can have access to Niagara as well as to the GPC storage resources. This period will end on May 30, 2018.

To copy substantial amounts of data (i.e., more than 10 GB), please use the datamover nodes: gpc-logindm01 and gpc-logindm02 on the GPC, and nia-dm1 and nia-dm2 on Niagara. For instance, to copy a directory abc from your GPC scratch to your Niagara scratch directory, you can do the following:

 $ ssh CCUSERNAME@niagara.computecanada.ca
 $ ssh nia-dm1
 $ scp -r SCINETUSERNAME@gpc-logindm01:\$SCRATCH/abc $SCRATCH/abc

For many users, CCUSERNAME and SCINETUSERNAME will be the same. Make sure you use the backslash (\) before the first $SCRATCH; it causes the value of scratch on the remote node (i.e., here, gpc-logindm01) to be used. Note that gpc-logindm01 will ask for your SciNet password.

You can also go the other way:

 $ ssh SCINETUSERNAME@login.scinet.utoronto.ca
 $ ssh gpc-logindm01
 $ scp -r $SCRATCH/abc CCUSERNAME@nia-dm1:\$SCRATCH/abc

Again, pay attention to the backslash in front of the last occurrence of $SCRATCH.

If you are using rsync, we advise against using the -a flag, and if using cp, refrain from using the -a and -p flags. A sketch of such a transfer follows.
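For instance, a hedged sketch of an rsync version of the transfer above, run from gpc-logindm01 (-r recurses and -v reports progress, while attribute-preserving flags such as -a are deliberately omitted):

 $ rsync -rv $SCRATCH/abc CCUSERNAME@nia-dm1:\$SCRATCH/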

For Non-GPC Users

Those of you new to SciNet, but with 2018 RAC allocations on Niagara, will have your accounts created and ready for you to log in.

New, non-RAC users: we are still working out the procedure to get access. If you can't wait, for now, you can follow the old route of requesting a SciNet Consortium Account on the CCDB site.

Moving data

Move amounts less than 10GB through the login nodes.

  • Only the Niagara login nodes are visible from outside SciNet.
  • Use scp or rsync to niagara.scinet.utoronto.ca or niagara.computecanada.ca (no difference).
  • This will time out for amounts larger than about 10GB.

Move amounts larger than 10GB through the datamover nodes.

  • From a Niagara login node, ssh to nia-datamover1 or nia-datamover2.
  • Transfers must originate from this datamover.
  • The other side (e.g. your machine) must be reachable from the outside.
  • If you do this often, consider using Globus, a web-based tool for data transfer.

Moving data to HPSS/Archive/Nearline using the scheduler.

  • HPSS is a tape-based storage solution, and is SciNet's nearline a.k.a. archive facility.
  • Storage space on HPSS is allocated through the annual Compute Canada RAC allocation.

Storage and quotas

 location   quota                                block size   expiration time   backed up   on login nodes   on compute nodes
 $HOME      100 GB per user                      1 MB         -                 yes         yes              read-only
 $SCRATCH   25 TB per user (dynamic per group)   16 MB        2 months          no          yes              yes
 $PROJECT   by group allocation                  16 MB        -                 yes         yes              yes
 $ARCHIVE   by group allocation                  -            -                 dual-copy   no               no
 $BBUFFER   ?                                    1 MB         very short        no          ?                ?

The dynamic scratch group quota grows with the number of users in the group:

  • up to 4 users per group: 50 TB
  • up to 11 users per group: 125 TB
  • up to 28 users per group: 250 TB
  • up to 60 users per group: 400 TB
  • above 60 users per group: 500 TB

File/Ownership Management (ACL)

  • By default, at SciNet, users within the same group already have read permission to each other's files (not write)
  • You may use access control list (ACL) to allow your supervisor (or another user within your group) to manage files for you (i.e., create, move, rename, delete), while still retaining your access and permission as the original owner of the files/directories. You may also let users in other groups or whole other groups access (read, execute) your files using this same mechanism.

Using mmputacl/mmgetacl

  • You may use GPFS's native mmputacl and mmgetacl commands. The advantages are that you can set "control" permission and that both POSIX and NFS v4 style ACLs are supported. You will first need to create a /tmp/supervisor.acl file with the following contents:
user::rwxc
group::----
other::----
mask::rwxc
user:[owner]:rwxc
user:[supervisor]:rwxc
group:[othergroup]:r-xc

Then issue the following 2 commands:

1) $ mmputacl -i /tmp/supervisor.acl /project/g/group/[owner]
2) $ mmputacl -d -i /tmp/supervisor.acl /project/g/group/[owner]
   (every *new* file/directory inside [owner] will by default inherit [supervisor] ownership as well as
   [owner] ownership, i.e., ownership of both, for files/directories created by [supervisor])

   $ mmgetacl /project/g/group/[owner]
   (to determine the current ACL attributes)

   $ mmdelacl -d /project/g/group/[owner]
   (to remove any previously set ACL)

   $ mmeditacl /project/g/group/[owner]
   (to create or change a GPFS access control list)
   (for this command to work set the EDITOR environment variable: export EDITOR=/usr/bin/vi)

NOTES:

  • mmputacl will not overwrite the original linux group permissions for a directory when copied to another directory that already has ACLs, hence the "#effective:r-x" note you may see from time to time with mmgetacl. If you want to give rwx permissions to everyone in your group, you should simply rely on the plain unix 'chmod g+rwx' command. You may do that before or after copying the original material to another folder with the ACLs.
  • In the case of PROJECT, your group's supervisor will need to set proper ACL to the /project/G/GROUP level in order to let users from other groups access your files.
  • ACL won't let you give away permissions to files or directories that do not belong to you.
  • We highly recommend that you never give write permission to other users on the top level of your home directory (/home/G/GROUP/[owner]), since that would seriously compromise your privacy, in addition to disabling ssh key authentication, among other things. If necessary, make specific sub-directories under your home directory so that other users can manipulate/access files there.

For more information on using mmputacl or mmgetacl see their man pages.

Recursive ACL script

You may use/adapt this sample bash script to recursively add or remove ACL attributes using GPFS built-in commands; a sketch of such a script is shown below.

Courtesy of Agata Disks (http://csngwinfo.in2p3.fr/mediawiki/index.php/GPFS_ACL)
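For illustration, here is a minimal sketch of such a recursive helper, assuming the ACL file from the example above (this is a hypothetical adaptation, not the original linked script):

 #!/bin/bash
 # Recursively apply an ACL file to a directory tree with GPFS commands.
 # Directories get both an access ACL and a default (inherited) ACL;
 # regular files get an access ACL only.
 ACLFILE=/tmp/supervisor.acl
 TARGET="$1"
 if [ -z "$TARGET" ]; then
     echo "Usage: $0 <directory>" >&2
     exit 1
 fi
 find "$TARGET" -type d | while read -r d; do
     mmputacl -i "$ACLFILE" "$d"        # access ACL
     mmputacl -d -i "$ACLFILE" "$d"     # default ACL, inherited by new content
 done
 find "$TARGET" -type f | while read -r f; do
     mmputacl -i "$ACLFILE" "$f"
 done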

Scratch Disk Purging Policy

In order to ensure that there is always significant space available for running jobs, we automatically delete files in /scratch that have not been accessed or modified for more than 2 months as of the actual deletion day, the 15th of each month. Note that we recently changed the cut-off reference to MostRecentOf(atime, ctime). This policy is subject to revision depending on its effectiveness. More details about the purging process, and how users can check whether their files will be deleted, follow. If you have files scheduled for deletion, you should move them to a more permanent location, such as your departmental server or your /project space (for PIs who have either been allocated disk space by the RAC or have bought disk space).

On the first of each month, a list of files scheduled for purging is produced, and an email notification is sent to each user on that list. Furthermore, on or about the 12th of each month a second scan produces a more current assessment, and another email notification is sent. This way users can double-check that they have indeed taken care of all the files they needed to relocate before the purging deadline. Those files will be automatically deleted on the 15th of the same month unless they have been accessed or relocated in the interim. If you have files scheduled for deletion, they will be listed in a file in /scratch/t/todelete/current, which has your userid and groupid in the filename. For example, if user xxyz wants to check whether they have files scheduled for deletion, they can issue the following command on a system which mounts /scratch (e.g. a SciNet login node): ls -1 /scratch/t/todelete/current |grep xxyz. In the example below, the name of this file indicates that user xxyz is part of group abc, has 9,560 files scheduled for deletion, and that they take up 1.0 TB of space:

 [xxyz@nia-login03 ~]$ ls -1 /scratch/t/todelete/current |grep xxyz
 -rw-r----- 1 xxyz     root       1733059 Jan 17 11:46 3110001___xxyz_______abc_________1.00T_____9560files

The file itself contains a list of all files scheduled for deletion (in the last column) and can be viewed with standard commands like more/less/cat - e.g. more /scratch/t/todelete/current/3110001___xxyz_______abc_________1.00T_____9560files

Similarly, you can also check the other users in your group by using the ls command with grep on your group name. For example, ls -1 /scratch/t/todelete/current |grep abc will list the other users in the same group as xxyz who have files to be purged on the 15th. Members of the same group have access to each other's contents.

NOTE: Preparing these assessments takes several hours. If you change the access/modification time of a file in the interim, that will not be detected until the next cycle. A way for you to get immediate feedback is to use the 'ls -lu' command on the file to verify the atime and 'ls -lc' for the ctime. If the file's atime/ctime has been updated in the meantime, it will no longer be deleted on the upcoming purge date of the 15th.
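For example (bigfile.dat is a placeholder name):

 $ ls -lu bigfile.dat   # lists the access time (atime)
 $ ls -lc bigfile.dat   # lists the change time (ctime)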

How much Disk Space Do I have left?

The /scinet/niagara/bin/diskUsage command, available on the login nodes and datamovers, provides information on the home, scratch, project and archive file systems in a number of ways: for instance, how much disk space is being used by yourself and your group (with the -a option), how much your usage has changed over a certain period ("delta information"), or plots of your usage over time. Please see the usage help below for more details.

Usage: diskUsage [-h|-?] [-a] [-u <user>] [-de|-plot]
       -h|-?: help
       -a: list usages of all members of the group
       -u <user>: as another user in your group
       -de: include delta information
       -plot: create plots of disk usages

Did you know that you can check which of your directories have more than 1000 files with the /scinet/niagara/bin/topUserDirOver1000list command and which have more than 1GB of material with the /scinet/niagara/bin/topUserDirOver1GBlist command?

Note:

  • information on usage and quota is only updated every 3 hours!

I/O Tips

  • $HOME, $SCRATCH, and $PROJECT all use the parallel file system called GPFS.
  • Your files can be seen on all Niagara login and compute nodes.
  • GPFS is a high-performance file system which provides rapid reads and writes to large data sets in parallel from many nodes.
  • But accessing data sets which consist of many small files leads to poor performance.
  • Avoid reading and writing lots of small amounts of data to disk.
  • Many small files on the system waste space and are slower to access, read and write; see the bundling sketch after this list.
  • Write data out in binary: it is faster and takes less space.
  • The Burst Buffer is better for I/O-heavy jobs and for speeding up checkpoints.
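As a sketch of the many-small-files advice, results consisting of numerous small files can be bundled into a single archive before storage or transfer (paths are placeholders):

 $ tar czf results.tar.gz results/   # one large file instead of many small ones
 $ tar tzf results.tar.gz | head     # inspect the archive contents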

Loading Software Modules

Other than essentials, all installed software is made available using module commands. These modules set environment variables (PATH, etc.). This allows multiple, conflicting versions of a given package to be available. The command module spider shows the available software.

For example:

nia-login07:~$ module spider
---------------------------------------------------
The following is a list of the modules currently available:
---------------------------------------------------
  CCEnv: CCEnv

  NiaEnv: NiaEnv/2018a

  anaconda2: anaconda2/5.1.0

  anaconda3: anaconda3/5.1.0

  autotools: autotools/2017
    autoconf, automake, and libtool 

  boost: boost/1.66.0

  cfitsio: cfitsio/3.430

  cmake: cmake/3.10.2 cmake/3.10.3

  ...

Common module subcommands are:

  • module load <module-name>

    use particular software

  • module purge

    remove currently loaded modules

  • module spider

    (or module spider <module-name>)

    list available software packages

  • module avail

    list loadable software packages

  • module list

    list loaded modules

On Niagara, there are really two software stacks:

  1. A Niagara software stack tuned and compiled for this machine. This stack is available by default; if it is not, it can be reloaded with

    module load NiaEnv
  2. The same software stack available on Compute Canada's General Purpose clusters Graham and Cedar, compiled (for now) for a previous generation of CPUs:

    module load CCEnv

    If you want the same default modules loaded as on Cedar and Graham, then also run module load StdEnv afterwards.

Note: the *Env modules are sticky; remove them with --force, as in the example below.
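For example:

 nia-login07:~$ module purge           # the sticky NiaEnv module survives this
 nia-login07:~$ module --force purge   # removes sticky modules as well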

Tips for loading software

  • We advise against loading modules in your .bashrc.

    This could lead to very confusing behaviour under certain circumstances.

  • The default .bashrc and .bash_profile files on Niagara can be found here

  • Instead, load modules by hand when needed, or by sourcing a separate script.

  • Load run-specific modules inside your job submission script.

  • Short names give default versions; e.g. intel → intel/2018.2.

    It is usually better to be explicit about the versions, for future reproducibility.

  • Handy abbreviations:

 
  ml → module list
  ml NAME → module load NAME  # if NAME is an existing module
  ml X → module X
  • Modules sometimes require other modules to be loaded first.

Solve these dependencies by using module spider.

Module spider

Oddly named, the module subcommand spider is the search-and-advice facility for modules.

Suppose one wanted to load the openmpi module. Upon trying to load the module, one may get the following message:

nia-login07:~$ module load openmpi
Lmod has detected the error:  These module(s) exist but cannot be loaded as requested: "openmpi"
   Try: "module spider openmpi" to see how to load the module(s).

So while that load fails, the command's output gives advice on how to proceed; following it, the next command would be:

nia-login07:~$ module spider openmpi
------------------------------------------------------------------------------------------------------
  openmpi:
------------------------------------------------------------------------------------------------------
     Versions:
        openmpi/2.1.3
        openmpi/3.0.1
        openmpi/3.1.0

------------------------------------------------------------------------------------------------------
  For detailed information about a specific "openmpi" module (including how to load the modules) use
  the module's full name.
  For example:

     $ module spider openmpi/3.1.0
------------------------------------------------------------------------------------------------------

This gives more detailed suggestions on using the spider command. Following the advice again, one would type:

nia-login07:~$ module spider openmpi/3.1.0
------------------------------------------------------------------------------------------------------
  openmpi: openmpi/3.1.0
------------------------------------------------------------------------------------------------------
    You will need to load all module(s) on any one of the lines below before the "openmpi/3.1.0"
    module is available to load.

      NiaEnv/2018a  gcc/7.3.0
      NiaEnv/2018a  intel/2018.2

These are concrete instructions on how to load this particular openmpi module. Following these leads to a successful loading of the module.

nia-login07:~$ module load NiaEnv/2018a  intel/2018.2   # note: NiaEnv is usually already loaded
nia-login07:~$ module load openmpi/3.1.0
nia-login07:~$ module list
Currently Loaded Modules:
  1) NiaEnv/2018a (S)   2) intel/2018.2   3) openmpi/3.1.0

  Where:
   S:  Module is Sticky, requires --force to unload or purge

Running Commercial Software

  • Possibly, but you have to bring your own license for it.
  • SciNet and Compute Canada have an extremely large and broad user base of thousands of users, so we cannot provide licenses for everyone's favorite software.
  • Thus, the only commercial software installed on Niagara is software that can benefit everyone: Compilers, math libraries and debuggers.
  • That means no Matlab, Gaussian, IDL, etc.
  • Open source alternatives like Octave, Python, R are available.
  • We are happy to help you to install commercial software for which you have a license.
  • In some cases, if you have a license, you can use software in the Compute Canada stack.

Compiling on Niagara: Example

Suppose one wants to compile an application from two C source files, appl.c and module.c, which use the GNU Scientific Library (GSL). This is an example of how this would be done:

nia-login07:~$ module list
Currently Loaded Modules:
  1) NiaEnv/2018a (S)
  Where:
   S:  Module is Sticky, requires --force to unload or purge

nia-login07:~$ module load intel/2018.2 gsl/2.4

nia-login07:~$ ls
appl.c module.c

nia-login07:~$ icc -c -O3 -xHost -o appl.o appl.c
nia-login07:~$ icc -c -O3 -xHost -o module.o module.c
nia-login07:~$ icc  -o appl module.o appl.o -lgsl -mkl

nia-login07:~$ ./appl

Note:

  • The optimization flags -O3 -xHost allow the Intel compiler to use instructions specific to the CPU architecture that is present (instead of compiling for more generic x86_64 CPUs).
  • The GSL requires a CBLAS implementation, one of which is contained in the Intel Math Kernel Library (MKL). Linking with this library is easy when using the Intel compiler; it just requires the -mkl flag.
  • If compiling with gcc, the optimization flags would be -O3 -march=native. To link with the MKL, it is suggested to use the MKL link line advisor; a sketch of a gcc build follows.
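For comparison, a hypothetical gcc version of the same build (the MKL link flag shown is only illustrative; generate the exact link line for your setup with the MKL link line advisor):

 nia-login07:~$ module load gcc/7.3.0 gsl/2.4
 nia-login07:~$ gcc -c -O3 -march=native -o appl.o appl.c
 nia-login07:~$ gcc -c -O3 -march=native -o module.o module.c
 nia-login07:~$ gcc -o appl module.o appl.o -lgsl -lmkl_rt   # illustrative MKL link flag
 nia-login07:~$ ./appl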

Testing

You really should test your code before you submit it to the cluster, both to check that it is correct and to find out what kind of resources you need.

  • Small test jobs can be run on the login nodes.

    Rule of thumb: a couple of minutes, taking at most about 1-2 GB of memory, and a couple of cores.

  • You can run the ddt debugger on the login nodes after module load ddt.

  • For short tests that do not fit on a login node, or for which you need a dedicated node, request an
    interactive debug job with the salloc command

    nia-login07:~$ salloc -pdebug --nodes N --time=1:00:00
    

    where N is the number of nodes. The duration of your interactive debug session can be at most one hour, it can use at most 4 nodes, and each user can only have one such session at a time.

    Alternatively, on Niagara, you can use the command

    nia-login07:~$ debugjob N
    

    where N is the number of nodes. If N=1, this gives an interactive session of one hour; when N=4 (the maximum), it gives you 30 minutes.

    Finally, if your debugjob process takes more than 1 hour, you can request an interactive job from the regular queue. Note, however, that this may take some time to run, since it will be part of the regular queue, and will be run when the scheduler decides.

    nia-login07:~$ salloc --nodes N --time=M:00:00
    

    where N is again the number of nodes, and M is the number of hours you wish the job to run.


Testing with Graphics: X-forwarding

If you need to use graphics while testing your code, e.g. when using a debugger such as DDT or DDD, you will need to follow these steps:

You will need two terminals in order to achieve this:

  1. In the first terminal:
    • ssh to niagara.scinet.utoronto.ca and issue your salloc command
    • wait until your resources are allocated and you are assigned the nodes
    • take note of the node you are logged in to, i.e. the head node, let's say niaWXYZ
    $ ssh niagara.scinet.utoronto.ca
    USER@nia-login07:~$ salloc --nodes 5 --time=2:00:00
    
    salloc: Granted job allocation 141862
    salloc: Waiting for resource configuration
    salloc: Nodes nia1265 are ready for job
    
    [USER@nia1265 ~]$
    
    
  2. In the second terminal:
    • ssh into niagara.scinet.utoronto.ca, now using the -X flag in the ssh command
    • after that, ssh -X niaWXYZ, i.e. ssh with the -X flag into the head node of the job
    • on niaWXYZ you should be able to use graphics, which will be redirected by X-forwarding to your local terminal
    $ ssh -X niagara.scinet.utoronto.ca
    USER@nia-login07:~$ ssh -X nia1265
    [USER@nia1265 ~]$ xclock
    


  Observations:
    • If you are using ssh from a Windows machine, you need to have an X server; a good option is MobaXterm, which comes with an X server included.
    • On Mac OS, substitute -Y for -X.
    • Instead of using two terminals, you could just use screen to request the resources, detach the session, and then ssh into the head node directly; a sketch follows.
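A minimal sketch of the screen-based alternative (assuming a one-node allocation; niaWXYZ stands for whatever head node you are given):

 $ ssh -X niagara.scinet.utoronto.ca
 USER@nia-login07:~$ screen                            # start a detachable session
 USER@nia-login07:~$ salloc --nodes 1 --time=1:00:00
 [USER@niaWXYZ ~]$                                     # note the head node, detach with Ctrl-a d
 USER@nia-login07:~$ ssh -X niaWXYZ                    # from the login shell, outside screen
 [USER@niaWXYZ ~]$ xclock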

Submitting jobs

Niagara uses SLURM as its job scheduler.

You submit jobs from a login node by passing a script to the sbatch command:

 nia-login07:~$ sbatch jobscript.sh

This puts the job in the queue. It will run on the compute nodes in due course.

Jobs will run under their group's RRG allocation, or, if the group has none, under a RAS allocation (previously called 'default' allocation).

Keep in mind:

  • Scheduling is by node, so in multiples of 40 cores.
  • For users with an allocation, the maximum walltime is 24 hours; for those without an allocation, it is 12 hours.
  • Jobs must write to your scratch or project directory (home is read-only on compute nodes).
  • Compute nodes have no internet access. Download any data you need beforehand on a login node.

SLURM nomenclature: jobs, nodes, tasks, cpus, cores, threads

SLURM, the job scheduler used on Niagara, has a somewhat different way of referring to things like MPI processes and threads. The SLURM nomenclature is reflected in the names of the scheduler options (i.e., resource requests), and SLURM strictly enforces those requests, so it is important to get this right.

  • job: a scheduled piece of work for which specific resources were requested. SLURM term: job; related options: sbatch, salloc.
  • node: basic computing component with several cores (40 for Niagara) that share memory. SLURM term: node; related options: --nodes -N.
  • mpi process: one of a group of running programs using the Message Passing Interface for parallel computing. SLURM term: task; related options: --ntasks -n, --ntasks-per-node.
  • core or physical cpu: a fully functional independent physical execution unit.
  • logical cpu: an execution unit that the operating system can assign work to; operating systems can be configured to overload physical cores with multiple logical cpus using hyperthreading. SLURM term: cpu; related option: --cpus-per-task.
  • thread: one of possibly multiple simultaneous execution paths within a program, which can share memory. Related options: --cpus-per-task and OMP_NUM_THREADS.
  • hyperthread: a thread run in a collection of threads that is larger than the number of physical cores.

Scheduling by Node

  • On many systems that use SLURM, the scheduler will deduce from the specified number of tasks and cpus-per-node what resources should be allocated. On Niagara, this is a bit different.
  • All job resource requests on Niagara are scheduled as a multiple of nodes.
  • The nodes that your jobs run on are exclusively yours.
    • No other users are running anything on them.
    • You can ssh into them to see how things are going.
  • Whatever your requests to the scheduler, they will always be translated into a multiple of nodes allocated to your job.
  • Memory requests to the scheduler are of no use: your job always gets N x 202 GB of RAM, where N is the number of nodes.
  • You should try to use all the cores on the nodes allocated to your job. Since there are 40 cores per node, your job should use N x 40 cores. If this is not the case, we will contact you to help you optimize your workflow.

Hyperthreading: Logical CPUs vs. cores

Hyperthreading, a technology that leverages more of the physical hardware by pretending there are twice as many logical cores as real ones, is enabled on Niagara, so the OS and scheduler see 80 logical cpus.

Using 80 logical cpus vs. 40 real cores typically gives about a 5-10% speedup (your mileage may vary).

Because Niagara is scheduled by node, hyperthreading is actually fairly easy to use:

  • Ask for a certain number of nodes N for your job.
  • You know that you get 40xN cores, so you will use (at least) a total of 40xN MPI processes or threads (mpirun, srun, and the OS will automatically spread these over the real cores).
  • But you should also test whether running 80xN MPI processes or threads gives you any speedup.
  • Regardless, your usage will be counted as 40xN x (walltime in years).

Limits

There are limits to the size and duration of your jobs, the number of jobs you can run, and the number of jobs you can have queued. It matters whether a user is part of a group with a Resources for Research Groups allocation or not. It also matters in which 'partition' the job runs. 'Partitions' are SLURM-speak for use cases. You specify the partition with the -p parameter to sbatch or salloc; if you do not specify one, your job will run in the compute partition, which is the most common case.

  • Compute jobs with an allocation (partition: compute): at most 50 running and 1000 submitted jobs (incl. running); job size from 1 node (40 cores) to 1000 nodes (40000 cores); walltime from 15 minutes to 24 hours.
  • Compute jobs without allocation ("default") (partition: compute): at most 50 running and 200 submitted jobs; job size from 1 node (40 cores) to 20 nodes (800 cores); walltime from 15 minutes to 12 hours.
  • Testing or troubleshooting (partition: debug): at most 1 running and 1 submitted job; job size from 1 node (40 cores) to 4 nodes (160 cores); maximum walltime 1 hour.
  • Archiving or retrieving data in HPSS (partition: archivelong): at most 2 running jobs per user (max 5 total) and 10 submitted jobs per user; walltime from 15 minutes to 72 hours.
  • Inspecting archived data, small archival actions in HPSS (partition: archiveshort): at most 2 running and 10 submitted jobs per user; walltime from 15 minutes to 1 hour.

Within these limits, jobs will still have to wait in the queue. The waiting time depends on many factors, such as the allocation amount, how much allocation was used in the recent past, the number of nodes and the walltime requested, and how many other jobs are waiting in the queue.

SLURM Accounts

To be able to prioritize jobs based on groups and allocations, the SLURM scheduler uses the concept of accounts. Each group that has a Resources for Research Groups (RRG) or Research Platforms and Portals (RPP) allocation (awarded through an annual competition by Compute Canada) has an account that starts with rrg- or rpp-. SLURM assigns a 'fairshare' priority to these accounts based on the size of the award in core-years. Groups without an RRG or RPP can use Niagara through the so-called Rapid Access Service (RAS), and have an account that starts with def-.

On Niagara, most users will only ever use one account, and those users do not need to specify the account to SLURM. However, users that are part of collaborations may be able to use multiple accounts, i.e., that of their sponsor and that of their collaborator, which means they need to select the right account when running jobs.

To select the account, just add

   #SBATCH -A [account]

to the job script, or use the -A [account] option of salloc or debugjob.

To see which accounts you have access to, or what their names are, use the command

   sshare -U

Passing Variables to Job Submission Scripts

It is possible to pass values through environment variables into your SLURM submission scripts. To pass along all variables already defined in your shell, just add the following directive to the submission script:

#SBATCH --export=ALL

and you will have access to any predefined environment variable.

A better way is to specify explicitly which variables you want to pass into the submission script:

sbatch --export=i=15,j='test' jobscript.sbatch

You can even set the job name and output files using environment variables, e.g.:

i="simulation"
j=14
sbatch --job-name=$i.$j.run --output=$i.$j.out --export=i=$i,j=$j jobscript.sbatch

(The latter only works on the command line; you cannot use environment variables in #SBATCH lines in the job script.)
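Inside jobscript.sbatch, the passed variables are then available as ordinary environment variables. A minimal sketch (myprogram is a placeholder):

 #!/bin/bash
 #SBATCH --nodes=1
 #SBATCH --time=0:15:00
 # i and j were passed on the sbatch command line via --export
 echo "running case i=$i j=$j"
 ./myprogram "$i" "$j"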

Command line arguments:

Command line arguments can also be used, in the same way as for shell scripts. All command line arguments given to sbatch that follow the job script name will be passed to the job script. In fact, SLURM will not look at any of these arguments, so you must place all sbatch arguments before the script name, e.g.:

sbatch  -p debug  jobscript.sbatch  FirstArgument SecondArgument ...

In this example, -p debug is interpreted by SLURM, while in your submission script you can access FirstArgument, SecondArgument, etc., by referring to $1, $2, ....
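Correspondingly, a minimal sketch of how the submission script would pick up these arguments:

 #!/bin/bash
 #SBATCH --nodes=1
 #SBATCH --time=0:15:00
 echo "first argument:  $1"    # FirstArgument
 echo "second argument: $2"    # SecondArgument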

Email Notification

Email notification works, but you need to add the email address and the type of notification you want to receive in your submission script, e.g.:

   #SBATCH --mail-user=YOUR.email.ADDRESS
   #SBATCH --mail-type=ALL

Example submission script (MPI)

 #!/bin/bash
 #SBATCH --nodes=8
 #SBATCH --ntasks=320
 #SBATCH --time=1:00:00
 #SBATCH --job-name mpi_job
 #SBATCH --output=mpi_output_%j.txt
 
 cd $SLURM_SUBMIT_DIR
 
 module load intel/2018.2
 module load openmpi/3.1.0
 
 mpirun ./mpi_example
 # or "srun ./mpi_example"

Submit this script with the command:

   nia-login07:~$ sbatch mpi_job.sh

  • The first line indicates that this is a bash script.
  • Lines starting with #SBATCH go to SLURM.
  • sbatch reads these lines as a job request (which it gives the name mpi_job).
  • In this case, SLURM looks for 8 nodes with 40 cores on which to run 320 tasks, for 1 hour.
  • Note that the mpirun flag "--ppn" (processes per node) is ignored.
  • Once it has found such nodes, it runs the script:
    • change to the submission directory;
    • load modules;
    • run the mpi_example application.
  • To use hyperthreading, just change --ntasks=320 to --ntasks=640, and add --bind-to none to the mpirun command (the latter is necessary for OpenMPI only, not when using IntelMPI).

Example submission script (OpenMP)

 #!/bin/bash
 #SBATCH --nodes=1
 #SBATCH --cpus-per-task=40
 #SBATCH --time=1:00:00
 #SBATCH --job-name openmp_job
 #SBATCH --output=openmp_output_%j.txt
 
 cd $SLURM_SUBMIT_DIR
 
 module load intel/2018.2
 
 export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
 
 ./openmp_example
 # or "srun ./openmp_example"

Submit this script with the command:

   nia-login07:~$ sbatch openmp_job.sh

  • The first line indicates that this is a bash script.
  • Lines starting with #SBATCH go to SLURM.
  • sbatch reads these lines as a job request (which it gives the name openmp_job).
  • In this case, SLURM looks for one node with 40 cores to run one task, for 1 hour.
  • Once it has found such a node, it runs the script:
    • change to the submission directory;
    • load modules;
    • set an environment variable;
    • run the openmp_example application.
  • To use hyperthreading, just change --cpus-per-task=40 to --cpus-per-task=80.

Monitoring queued jobs

Once the job is in the queue, there are some commands you can use to monitor its progress:

  • squeue or qsum to show the job queue (squeue -u $USER for just your jobs);
  • squeue -j JOBID to get information on a specific job (alternatively, scontrol show job JOBID, which is more verbose);
  • squeue --start -j JOBID to get an estimate for when a job will run; these tend not to be very accurate predictions;
  • scancel -i JOBID to cancel the job;
  • sinfo -pcompute to look at available nodes;
  • jobperf JOBID to get an instantaneous view of the cpu and memory usage of the nodes of the job while it is running;
  • sacct to get information on your recent jobs.

More utilities like those that were available on the GPC are under development.

Visualization

Information about how to use visualization tools on Niagara is available on the Visualization page.

Further information

Useful sites

Support

  • support@scinet.utoronto.ca
  • niagara@computecanada.ca