Teach

From SciNet Users Documentation
Teach Cluster
[Image: IBM iDataPlex dx360 M4]
Installed: Oct 2018 (hardware originally installed Feb 2013)
Operating System: Linux (CentOS 7.4)
Number of Nodes: 42
Interconnect: InfiniBand (QDR)
RAM/Node: 64 GB
Cores/Node: 16
Login/Devel Node: teach01 (from teach.scinet)
Vendor Compilers: icc/gcc
Queue Submission: Slurm

Teaching Cluster

SciNet has assembled some older compute hardware into a small cluster provided primarily for teaching purposes. It is configured similarly to the production Niagara system, but uses repurposed hardware. This system should not be used for production work; accordingly, the queuing policies are designed to provide fast job turnover and to limit the amount of resources one person can use at a time. Questions about its use, and reports of problems, should be sent to support@scinet.utoronto.ca.

Specifications

The cluster consists of 42 repurposed x86_64 nodes, each with 16 cores (from two eight-core Intel Xeon (Sandy Bridge) E5-2650 CPUs) running at 2.0GHz, with 64GB of RAM per node. The nodes are interconnected with 2.6:1 blocking QDR InfiniBand for MPI communications and for disk I/O to the SciNet Niagara filesystems. In total, this cluster contains 672 cores.

Login/Devel Node

Login via ssh with your SciNet account to teach.scinet.utoronto.ca, which will bring you directly to teach01, the gateway/devel node for this cluster. From teach01 you can compile, do short tests, and submit your jobs to the queue. The first time you log in to the Teach cluster, please make sure to check that the login node's ssh key fingerprint matches. See here for how to do this.

Interactive jobs

The login node teach01 is shared between students of a number of different courses. Use this node to develop and compile code, to run short tests, and to submit computations to the scheduler.

For an interactive session on a compute node of the Teach cluster, use the 'debugjob' command:

teach01:~$ debugjob -n C

where C is the number of cores. An interactive session defaults to four hours when using at most one node (C<=16), and shortens to 60 minutes when using four nodes (i.e., 48<C<=64), which is the maximum number of nodes allowed for an interactive session by debugjob.
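For example, to get an interactive session with eight cores (half a node):

teach01:~$ debugjob -n 8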

For a short interactive session on one or more dedicated compute nodes of the Teach cluster, use the 'debugjob' command as follows:

teach01:~$ debugjob N

where N is the number of nodes. On the Teach cluster, this is equivalent to debugjob -n 16*N. The positive integer N can be at most 4.

If no arguments are given to debugjob, it allocates a single core on a Teach compute node.
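For example, the following requests two dedicated nodes (i.e., 32 cores) for an interactive session:

teach01:~$ debugjob 2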

Submit a Job

Teach uses SLURM as its job scheduler. More advanced details of how to interact with the scheduler can be found on the Slurm page.

You submit jobs from a login node by passing a script to the sbatch command:

teach01:~scratch$ sbatch jobscript.sh

This puts the job in the queue. It will run on the compute nodes in due course.
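Once submitted, you can monitor or cancel your job with the standard SLURM commands, e.g. (JOBID is the job id reported by sbatch):

teach01:~scratch$ squeue -u $USER     # list your queued and running jobs
teach01:~scratch$ scancel JOBID       # cancel the job with the given id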

It is worth mentioning some differences between the Niagara and Teach clusters:

  • On the Teach cluster, $HOME is writable on the compute nodes. On Niagara, $HOME is read-only on the compute nodes, so in most cases, you will want to submit from your $SCRATCH directory.
  • Each Teach cluster node has two CPUs with 8 cores each, for a total of 16 cores per node (there is no hyperthreading). Make sure to adjust the flags --ntasks-per-node or --ntasks together with --nodes accordingly for the examples found on the Slurm page (see also the sample script after this list).
  • The current SLURM configuration of the Teach cluster allocates compute resources by core, as opposed to by node. That means your tasks might land on nodes that have other jobs running, i.e., they might share the node. If you want to avoid that, add the following directive to your submission script: #SBATCH --exclusive. This forces your job to use its compute nodes exclusively.
  • The maximum walltime is currently set to 4 hours.
  • There are two queues (partitions) available: the compute queue and the debug queue. Their usage limits are listed in the table below.
  • 7 of the Teach compute nodes have more memory than the 64GB default: 5 of them have 128GB and 2 of them 256GB. To run a big-memory job on these nodes, add the following directive to your submission script: #SBATCH --constraint=m128G. Replace m128G with m256G if you want your job to run exclusively on the 256GB nodes.
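As a concrete illustration of these points, here is a minimal sketch of a job script for the Teach cluster. The job name and the executable ./mpi_program are placeholders, and the module versions are those of the 2018a software stack listed further below:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16   # Teach nodes have 16 cores each, no hyperthreading
#SBATCH --time=01:00:00        # walltime; at most 4 hours on Teach
#SBATCH --job-name=mpi_example

# load a compiler and MPI library from the software stack
module load intel/2018.4 intelmpi/2018.4

# run the MPI application (./mpi_program is a placeholder for your executable)
mpirun ./mpi_program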

Limits

There are limits to the size and duration of your jobs, the number of jobs you can run, and the number of jobs you can have queued. It also matters in which 'partition' the job runs. 'Partitions' are SLURM-speak for use cases. You specify the partition with the -p parameter to sbatch or salloc; if you do not specify one, your job will run in the compute partition, which is the most common case.

Usage                                    Partition  Running jobs  Submitted jobs (incl. running)  Min. job size  Max. job size        Min. walltime  Max. walltime
Interactive testing or troubleshooting   debug      1             1                               1 core         4 nodes (64 cores)   N/A            4 hours
Compute jobs                             compute    6             12                              1 core         8 nodes (128 cores)  15 minutes     4 hours

Within these limits, jobs may still have to wait in the queue. Although there are no allocations on the teach cluster, the waiting time still depends on many factors, such as the number of nodes and the walltime, how many other jobs are waiting in the queue, and whether a job can fill an otherwise unused spot in the schedule.
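For example, to submit a job to the debug partition instead of the default compute partition:

teach01:~scratch$ sbatch -p debug jobscript.sh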

Jupyter Hub

Some courses, like the Summer School, use Jupyter notebooks. In those cases, some (or all) of the large-memory compute nodes are dedicated as JupyterHub nodes.

To connect to these, you must first set up an ssh tunnel from your local computer to the jupyterhub node in the SciNet datacenter. On a local terminal on your computer (i.e., not logged into SciNet), use the following command:

ssh -L8888:jupyterhub7:8000 teach.scinet.utoronto.ca -N

Instead of jupyterhub7, you can also choose jupyterhub1, jupyterhub2, jupyterhub3, jupyterhub4, jupyterhub5, or jupyterhub6.

Note: It turns out that for many computers, in particular for Macs, this ssh command should be the first ssh connection to teach.scinet.utoronto.ca, i.e., you cannot already have another ssh session to Teach running on your computer.

Also note that this command will seem to 'hang', but the tunnel will have been established.
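If you would rather not keep a terminal window occupied, OpenSSH can put the tunnel in the background after authenticating by adding the standard -f flag:

ssh -N -f -L8888:jupyterhub7:8000 teach.scinet.utoronto.ca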

Next, open your browser and go to https://localhost:8888, where you can log in to the JupyterHub.

Note: You will likely have to tell your browser to trust this site.

Software Modules

Other than essentials, all installed software is made available using module commands. These modules set environment variables (PATH, etc.), allowing multiple, conflicting versions of a given package to be available. A detailed explanation of the module system can be found on the modules page.

Common module subcommands are:

  • module load <module-name>: load the default version of a particular software.
  • module load <module-name>/<module-version>: load a specific version of a particular software.
  • module purge: unload all currently loaded modules.
  • module spider (or module spider <module-name>): list available software packages.
  • module avail: list loadable software packages.
  • module list: list loaded modules.
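For example, to load the GCC compiler and Open MPI from the software stack listed below, and then verify what is loaded:

teach01:~$ module load gcc/7.3.0 openmpi/3.1.1
teach01:~$ module list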

Along with modifying common environment variables, such as PATH and LD_LIBRARY_PATH, these modules also create a SCINET_MODULENAME_ROOT environment variable, which can be used to access commonly needed software directories, such as the package's /include and /lib.
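For example, after loading the gsl module, you could compile against it roughly as follows (mycode.c is a hypothetical source file, and SCINET_GSL_ROOT is assumed to follow the naming pattern described above):

teach01:~$ module load gcc gsl
teach01:~$ gcc -I$SCINET_GSL_ROOT/include -L$SCINET_GSL_ROOT/lib mycode.c -lgsl -lgslcblas -o mycode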

There are handy abbreviations for the module commands. ml is the same as module list, and ml <module-name> is the same as module load <module-name>.

Module Versions (2018a) Description
anaconda2 5.1.0
Built to complement the rich, open source Python community, the Anaconda platform provides an enterprise-ready data analytics platform that empowers companies to adopt a modern open data science analytics architecture
anaconda3 5.2.0
Built to complement the rich, open source Python community, the Anaconda platform provides an enterprise-ready data analytics platform; anaconda3 is the Python 3 version
astral 4.7.12
ASTRAL is a tool for estimating an unrooted species tree given a set of unrooted gene trees
bcftools 1.8
BCFtools is a set of utilities that manipulate variant calls in the Variant Call Format (VCF) and its binary counterpart BCF
bedtools 2.27.1
The BEDTools utilities allow one to address common genomics tasks such as finding feature overlaps and computing coverage
blast+ 2.7.1
Basic Local Alignment Search Tool, or BLAST, is an algorithm for comparing primary biological sequence information, such as the amino-acid sequences of different proteins or the nucleotides of DNA sequences
boost 1.67.0  1.66.0
Boost provides free peer-reviewed portable C++ source libraries
bowtie2 2.3.4.3
Bowtie 2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences
bwa 0.7.17
Burrows-Wheeler Aligner (BWA) is an efficient program that aligns relatively short nucleotide sequences against a long reference sequence such as the human genome
bwameth 0.4.0
Fast and accurate alignment of BS-Seq reads
cmake 3.12.3
CMake, the cross-platform, open-source build system
cutadapt 2.1
Cutadapt finds and removes adapter sequences, primers, poly-A tails and other types of unwanted sequence from your high-throughput sequencing reads
deeptools 3.2.1-anaconda2
deepTools is a suite of python tools particularly developed for the efficient analysis of high-throughput sequencing data, such as ChIP-seq, RNA-seq or MNase-seq
dexseq 1.24.4
Inference of differential exon usage in RNA sequencing
fastqc 0.11.8
FastQC is a quality control application for high throughput sequence data
fftw 3.3.7
FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data
gcc 7.3.0
The GNU Compiler Collection for C, C++, and Fortran
gdb 8.1
The GNU Project Debugger
git-annex 2.8.1
git-annex allows managing files with git, without checking the file contents into git
gmp 6.1.2
GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers
gnu-parallel 20180322
GNU parallel is a shell tool for executing (usually serial) jobs in parallel
gnuplot 5.2.2
Portable interactive, function plotting utility
gsl 2.4
The GNU Scientific Library (GSL) is a numerical library for C and C++
hdf5 1.8.20  1.10.4
HDF5 is a data model, library, and file format for storing and managing data
hisat2 2.1.0
HISAT2 is a fast and sensitive alignment program for mapping next-generation sequencing reads (both DNA and RNA) against the general human population (as well as against a single reference genome)
htseq 0.11.1-anaconda2  0.11.1
A framework to process and analyze data from high-throughput sequencing (HTS) assays
htslib 1.8
A C library for reading/writing high-throughput sequencing data
intel 2018.4
Intel compilers suite for C, C++, and Fortran, including the MKL, TBB, IPP, DAAL, and PSTL libraries
intelmpi 2018.4
Intel MPI library with compiler wrappers for C, C++, and Fortran
java 1.8.0_201
Java Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers
lmdb 0.9.22
OpenLDAP's Lightning Memory-Mapped Database (LMDB) library
methyldackel 0.4.0
A (mostly) universal methylation extractor for BS-seq experiments
metilene 0.2.7
Fast and sensitive detection of differential DNA methylation
miso 0.5.4
A probabilistic framework that quantitates the expression level of alternatively spliced genes from RNA-Seq data, and identifies differentially regulated isoforms or exons across samples
mkl 2018.4
Intel Math Kernel Library
netcdf 4.6.1
NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data
openmpi 3.1.1
The Open MPI Project is an open source MPI-2 implementation
oprofile 1.3.0
OProfile is a system-wide profiler for Linux systems, capable of profiling all running code at low overhead
orthofinder 2.2.7
Program for identifying orthologous protein sequence families
pgplot 5.2.2-x
Graphics subroutine library for C/C++ and Fortran
prinseq 0.20.4
A bioinformatics tool to PRe-process and show INformation of SEQuence data
python 3.6.8
Python is a programming language that lets you work more quickly and integrate your systems more effectively
r 3.5.1  3.5.0
R is a free software environment for statistical computing and graphics
rarray 1.2
Library for runtime multi-dimensional arrays in C++
raxml 8.2.12
RAxML search algorithm for maximum likelihood based inference of phylogenetic trees
samtools 1.8
SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format
singularity 2.6.1
Singularity is a portable application stack packaging and runtime utility.
sqlite 3.23.0
SQLite: SQL Database Engine in a C Library
stringtie 1.3.5
StringTie is a fast and highly efficient assembler of RNA-Seq alignments into potential transcripts
tbb 2019.4
Intel(R) Threading Building Blocks (Intel(R) TBB) lets you easily write parallel C++ programs that take full advantage of multicore performance, that are portable, composable and have future-proof scalability
trimgalore 0.6.0
A wrapper tool around Cutadapt and FastQC to consistently apply quality and adapter trimming to FastQ files, with some extra functionality for MspI-digested RRBS-type (Reduced Representation Bisulfite-Seq) libraries
upsetr 1.3.3
R implementation of the UpSet set visualization technique published by Lex, Gehlenborg, et al
valgrind 3.14.0
Valgrind provides debugging and profiling tools