Teach

From SciNet Users Documentation
Teach Cluster
[Image: IBM iDataPlex DX360 M4]
Installed: Oct 2018 (original hardware Feb 2013)
Operating System: Linux (CentOS 7.4)
Number of Nodes: 42
Interconnect: InfiniBand (QDR)
RAM/Node: 64 GB
Cores/Node: 16
Login/Devel Node: teach01 (from teach.scinet)
Vendor Compilers: icc/gcc
Queue Submission: Slurm

Teaching Cluster

SciNet has assembled some older compute hardware into a small cluster provided primarily for teaching purposes. It is configured similarly to the production Niagara system, but uses repurposed hardware. Questions about its use, or problem reports, should be sent to support@scinet.utoronto.ca.


Specifications

The cluster consists of 42 repurposed x86_64 nodes, each with two eight-core Intel Xeon E5-2650 (Sandy Bridge) 2.0 GHz CPUs and 64 GB of RAM. The nodes are interconnected with 2.6:1 blocking QDR InfiniBand for MPI communications and for disk I/O to the SciNet Niagara filesystems. In total this cluster contains 672 x86_64 cores.

Login/Devel Node

Login via ssh with your SciNet account to teach.scinet.utoronto.ca, which will bring you directly to teach01, the gateway/devel node for this cluster. From teach01 you can compile, run short tests, and submit your jobs to the queue.
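
For example, with USERNAME standing in for your own SciNet account name:

  ssh USERNAME@teach.scinet.utoronto.ca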

Software Modules

Software on Teach is provided through environment modules, similar to Niagara. To see which modules are available, run:

module avail
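
To use a package, load its module first. As a sketch (the module names below are illustrative; the actual names and versions on Teach may differ), loading a compiler and MPI stack and checking what is loaded might look like:

  module load gcc openmpi
  module list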

Submit a Job

Teach uses Slurm as its job scheduler. More advanced details of how to interact with the scheduler can be found on the Slurm page.

You submit jobs from a login node by passing a script to the sbatch command:

teach01:~scratch$ sbatch jobscript.sh

This puts the job in the queue. It will run on the compute nodes in due course.
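
Once submitted, standard Slurm commands can be used to monitor or cancel your jobs (USERNAME and JOBID are placeholders for your account name and the job ID reported by sbatch):

  squeue -u USERNAME
  scancel JOBID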

In most cases, you will want to submit from your $SCRATCH directory, so that the output of your compute job can be written out (note that $HOME is read-only on the compute nodes). A minimal example job script is sketched below.
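
As an illustration only, not a prescribed template (the job name, time limit, module names, and program name are assumptions; the 16 tasks per node simply matches the 16 cores per node listed above), a minimal jobscript.sh for an MPI program might look like:

  #!/bin/bash
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=16
  #SBATCH --time=01:00:00
  #SBATCH --job-name=test_job
  #SBATCH --output=test_job_%j.out

  # Run from the directory the job was submitted from
  cd $SLURM_SUBMIT_DIR

  # Load the same modules used to compile the program (illustrative names)
  module load gcc openmpi

  # Launch the MPI program; replace ./my_program with your own executable
  mpirun ./my_program

It would then be submitted from $SCRATCH with sbatch jobscript.sh, as shown above.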