Teach
Teach Cluster | |
---|---|
Installed | (orig Feb 2013), Oct 2018 |
Operating System | Linux (CentOS 7.4) |
Number of Nodes | 42 |
Interconnect | InfiniBand (QDR) |
RAM/Node | 64 GB |
Cores/Node | 16 |
Login/Devel Node | teach01 (from teach.scinet) |
Vendor Compilers | icc/gcc |
Queue Submission | Slurm |
Teaching Cluster
SciNet has assembled some older compute hardware into a small cluster provided primarily for teaching purposes. It is configured similarly to the production Niagara system, but uses repurposed hardware. This system should not be used for production work; accordingly, the queuing policies are designed to provide fast job turnover and to limit the amount of resources any one person can use at a time. Questions about its use, or reports of problems, should be sent to support@scinet.utoronto.ca.
Specifications
The cluster consists of 42 repurposed x86_64 nodes, each with 16 cores (from two octal-core Intel Xeon (Sandy Bridge) E5-2650 CPUs) running at 2.0GHz, with 64GB of RAM per node. The nodes are interconnected with 2.6:1 blocking QDR InfiniBand for MPI communications and disk I/O to the SciNet Niagara filesystems. In total, this cluster contains 672 cores.
Login/Devel Node
Teach runs CentOS 7, which is a type of Linux. You will need to be somewhat familiar with Linux systems to work on Teach. If you are not, it will be worth your time to review our Introduction to Linux Shell class.
As with all SciNet and Alliance (formerly Compute Canada) systems, access to Teach is done via SSH (secure shell) only. Open a terminal window (e.g. using PuTTY or MobaXTerm on Windows), and type
ssh -Y USERNAME@teach.scinet.utoronto.ca
This will bring you directly to the command line of teach01, the gateway/devel node for this cluster. From teach01 you can compile, do short tests, and submit your jobs to the queue. The first time you log in to the Teach cluster, please make sure to check that the login node's SSH key fingerprint matches. See here for how.
The login node teach01 is shared between students of a number of different courses. Use this node to develop and compile code, to run short tests, and to submit computations to the scheduler (see below).
Note that there are two kinds of accounts active on the Teach cluster: personal accounts that are maintained in the Compute Canada Database, and temporary accounts that start with the course code followed by the word 'student' and a number. For the latter, passwords can be changed using the changePassword command.
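For example, a student with a temporary account would run the command on the login node and follow its prompts (a usage sketch, assuming the command prompts interactively for the new password):
teach01:~$ changePassword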
Software Modules
Other than essentials, all installed software is made available using module commands. These modules set environment variables (PATH, etc.), allowing multiple, conflicting versions of a given package to be available. A detailed explanation of the module system can be found on the modules page.
Common module subcommands are:
- module load <module-name> : load the default version of a particular software.
- module load <module-name>/<module-version> : load a specific version of a particular software.
- module purge : unload all currently loaded modules.
- module spider (or module spider <module-name>) : list available software packages.
- module avail : list loadable software packages.
- module list : list loaded modules.
For example, to make the GNU compilers (gcc, g++ and gfortran) available, you should type
module load gcc
while the Intel compilers (icc, icpc and ifort) can be loaded by
module load intel
Along with modifying common environment variables, such as PATH and LD_LIBRARY_PATH, these modules also create a SCINET_MODULENAME_ROOT environment variable, which can be used to access commonly needed directories of that software package, such as its /include and /lib subdirectories.
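For instance, after loading the gsl module, you could use the corresponding variable when compiling and linking. This is a sketch only: it assumes the variable for the gsl module follows the SCINET_<NAME>_ROOT pattern described above (i.e., SCINET_GSL_ROOT), and mycode.c is a placeholder source file:
teach01:~$ module load gcc gsl
teach01:~$ gcc -I$SCINET_GSL_ROOT/include mycode.c -L$SCINET_GSL_ROOT/lib -lgsl -lgslcblas -o mycode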
There are handy abbreviations for the module commands: ml is the same as module list, and ml <module-name> is the same as module load <module-name>.
A list of available software modules can be found below.
Interactive jobs
For an interactive session on a compute node of the Teach cluster that gives access to non-shared resources, use the 'debugjob' command:
teach01:~$ debugjob -n C
where C is the number of cores. An interactive session defaults to four hours when using at most one node (C<=16), and becomes 60 minutes when using four nodes (i.e., 48<C<=64), which is the maximum number of nodes allowed for an interactive session by debugjob.
For a short interactive session on dedicated compute nodes of the Teach cluster, use the 'debugjob' command as follows:
teach01:~$ debugjob N
where N is the number of nodes. On the Teach cluster, this is equivalent to debugjob -n 16*N. The positive integer N can be at most 4.
If no arguments are given to debugjob, it allocates a single core on a Teach compute node.
There are limits on the resources you can get with a debugjob, and how long you can get them. No debugjob can run longer than four hours or use more than 64 cores, and each user can only run one at a time. For longer computations, jobs must be submitted to the scheduler.
Submit a Job
Teach uses SLURM as its job scheduler. More advanced details of how to interact with the scheduler can be found on the Slurm page.
You submit jobs from a login node by passing a script to the sbatch command:
teach01:~scratch$ sbatch jobscript.sh
This puts the job in the queue. It will run on the compute nodes in due course.
It is worth mentioning some differences between the Niagara and Teach clusters (a sample job script illustrating these points follows the list):
- $HOME is read-only on the compute nodes, so in most cases, you will want to submit jobs from your $SCRATCH directory.
- Each Teach cluster node has two CPUs with 8 cores each, for a total of 16 cores per node (there is no hyperthreading). Make sure to adjust the flags --ntasks-per-node or --ntasks together with --nodes accordingly for the examples found on the Slurm page.
- The current Slurm configuration of the Teach cluster allocates compute resources by core, as opposed to by node. That means your tasks might land on nodes that have other jobs running, i.e., they might share the node. If you want to avoid that, add the following directive to your submission script: #SBATCH --exclusive. This forces your job to use its compute nodes exclusively.
- The maximum walltime is currently set to 4 hours.
- There are two queues available: the compute queue and the debug queue. Their usage limits are listed in the table below.
- 7 of the Teach compute nodes have more memory than the 64GB default: 5 of them have 128GB and 2 of them have 256GB. To run a big-memory job on these nodes, add the following directive to your submission script: #SBATCH --constraint=m128G. Replace m128G with m256G if you want your job to run exclusively on the 256GB nodes.
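Putting these points together, here is a minimal sketch of a job script for the Teach cluster (the job name, output file, module choice, and application ./my_program are placeholders to adapt; submit it from $SCRATCH with sbatch as shown above):
#!/bin/bash
#SBATCH --nodes=1              # Teach nodes have 16 cores each
#SBATCH --ntasks-per-node=16   # no hyperthreading on Teach
#SBATCH --time=1:00:00         # walltime; at most 4 hours on Teach
#SBATCH --job-name=example
#SBATCH --output=example_%j.out

cd $SLURM_SUBMIT_DIR           # the directory from which the job was submitted

module load gcc openmpi        # load the software your program needs

mpirun ./my_program            # run the (placeholder) MPI application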
Limits
There are limits to the size and duration of your jobs, the number of jobs you can run, and the number of jobs you can have queued. It also matters in which 'partition' the job runs. 'Partitions' are SLURM-speak for use cases. You specify the partition with the -p parameter to sbatch or salloc; if you do not specify one, your job will run in the compute partition, which is the most common case.
Usage | Partition | Running jobs | Submitted jobs (incl. running) | Min. size of jobs | Max. size of jobs | Min. walltime | Max. walltime |
---|---|---|---|---|---|---|---|
Interactive testing or troubleshooting | debug | 1 | 1 | 1 core | 4 nodes (64 cores) | N/A | 4 hours |
Compute jobs | compute | 6 | 12 | 1 core | 8 nodes (128 cores) | 15 minutes | 4 hours |
Within these limits, jobs may still have to wait in the queue. Although there are no allocations on the teach cluster, the waiting time still depends on many factors, such as the number of nodes and the walltime, how many other jobs are waiting in the queue, and whether a job can fill an otherwise unused spot in the schedule.
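While your jobs wait, you can inspect their place in the queue, or cancel them, with the standard Slurm commands (these are generic Slurm tools, not Teach-specific):
teach01:~$ squeue -u $USER    # list your queued and running jobs
teach01:~$ scancel JOBID      # cancel a job; JOBID is taken from the squeue output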
Running Jupyter on a Teach Compute Node
1. To be able to run Jupyter on a compute node, you must first (a) install it inside a virtual environment, (b) redirect Jupyter's runtime directory, which normally lives in $HOME, to a writable location, and (c) create a little helper script called notebook.sh that will be used to start the Jupyter server in step 2. These are the commands you should use for the installation (which you need to do only once, on the Teach login node):
(a) Create virtual env
$ module load python/3.9.10
$ virtualenv --system-site-packages $HOME/.virtualenvs/jupteach
$ source $HOME/.virtualenvs/jupteach/bin/activate
$ pip install jupyter jupyterlab
$ deactivate
You can choose another directory than $HOME/.virtualenvs/jupteach for where to create the virtual environment, but you need to be consistent and use the same directory everywhere below.
(b) Make a writable 'runtime' directory for Jupyter.
$ mkdir -p $HOME/.local/share/jupyter/runtime
$ mv -f $HOME/.local/share/jupyter/runtime $SCRATCH/jupyter_runtime || mkdir $SCRATCH/jupyter_runtime
$ ln -sT $SCRATCH/jupyter_runtime $HOME/.local/share/jupyter/runtime
(c) Create a launch script to use on the compute nodes:
$ cat > $HOME/.virtualenvs/jupteach/bin/notebook.sh <<EOF
#!/bin/bash
source \$HOME/.virtualenvs/jupteach/bin/activate
export XDG_DATA_HOME=\$SCRATCH/.share
export XDG_CACHE_HOME=\$SCRATCH/.cache
export XDG_CONFIG_HOME=\$SCRATCH/.config
export XDG_RUNTIME_DIR=\$SCRATCH/.runtime
export JUPYTER_CONFIG_DIR=\$SCRATCH/.config/.jupyter
jupyter \${1:-notebook} --ip \$(hostname -f) --no-browser --notebook-dir=\$PWD
EOF
$ chmod +x $HOME/.virtualenvs/jupteach/bin/notebook.sh
2. To run the Jupyter server on a compute node, start an interactive session with the debugjob command and then launch the Jupyter server:
$ debugjob -n 16    # use fewer if you need fewer cores
$ cd $SCRATCH       # $HOME is read-only on compute nodes, so move to $SCRATCH
$ $HOME/.virtualenvs/jupteach/bin/notebook.sh    # add the argument "lab" to start JupyterLab instead
Make sure you note down (a) the name of the compute node that you were allocated (it starts with "teach" followed by a 2-digit number), (b) the port number that appears after the compute node's name, following the colon (usually this is 8888, but it can be another, higher number); this is the PORT, and (c) the last URL that notebook.sh tells you to use to connect.
3. The Jupyter server runs on a Teach compute node, which is not accessible from the internet. To connect to it, open a different terminal on your own computer and reconnect to the Teach cluster with a port-forwarding tunnel to the compute node on which Jupyter is running:
$ ssh -LPORT:teachXX:PORT -o ControlMaster=no USERNAME@teach.scinet.utoronto.ca -N
where teachXX is to be replaced by the name of the compute node (point (a) above), PORT is to be replaced by the port number that notebook.sh showed (point (b) above), and USERNAME should be your Teach account username. This command will appear to just "hang" there; it only serves to forward port PORT (usually 8888) on your computer to port PORT on the compute node.
4. Finally, point your browser to the URL that the notebook.sh command printed out (point (c) above), i.e., the one with 127.0.0.1 in it.
Available Modules
=== TeachEnv/2018a ===
Module | Version(s) | Documentation | Description |
---|---|---|---|
anaconda2 | 5.1.0 | Python | Deprecated. Built to complement the rich, open source Python community, the Anaconda platform provides an enterprise-ready data analytics platform that empowers companies to adopt a modern open data science analytics architecture |
anaconda3 | 5.2.0 | Python | Deprecated. Built to complement the rich, open source Python community, the Anaconda platform provides an enterprise-ready data analytics platform that empowers companies to adopt a modern open data science analytics architecture |
arm-forge | 21.0.3 | Parallel Debugging with DDT | Arm Forge is the complete toolsuite for software development - with everything needed to debug, profile, optimize, edit and build C, C++ and Fortran applications on Linux for high performance - from single threads through to complex parallel HPC codes with MPI, OpenMP, threads or CUDA. |
armadillo | 11.4.3 | | Armadillo is an open-source C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions. |
astral | 4.7.12 5.7.1 | | ASTRAL is a tool for estimating an unrooted species tree given a set of unrooted gene trees |
autotools | 2018b | | The standard GNU build tools: autoconf, automake and libtool |
bcftools | 1.8 | | SAMtools is a suite of programs for interacting with high-throughput sequencing data |
beagle-lib | 3.1.2 | | beagle-lib is a high-performance library that can perform the core calculations at the heart of most Bayesian and Maximum Likelihood phylogenetics packages |
bedtools | 2.27.1 | | The BEDTools utilities allow one to address common genomics tasks such as finding feature overlaps and computing coverage |
blast+ | 2.7.1 2.10.1 | | Basic Local Alignment Search Tool, or BLAST, is an algorithm for comparing primary biological sequence information, such as the amino-acid sequences of different proteins or the nucleotides of DNA sequences |
boost | 1.66.0 1.67.0 1.70.0 1.76.0 1.78.0 | | Boost provides free peer-reviewed portable C++ source libraries, emphasizing libraries that work well with the C++ Standard Library |
bowtie2 | 2.3.4.3 | | Bowtie 2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences |
bwa | 0.7.17 | | Burrows-Wheeler Aligner (BWA) is an efficient program that aligns relatively short nucleotide sequences against a long reference sequence such as the human genome |
bwameth | 0.4.0 | | Fast and accurate alignment of BS-Seq reads |
catch | 2.11.1 | | C++ test framework for unit-tests, TDD and BDD using C++11 and later |
cmake | 3.12.3 | | CMake, the cross-platform, open-source build system |
cuda | 8.0.61 | | |
cutadapt | 2.1 2.10 | | Cutadapt finds and removes adapter sequences, primers, poly-A tails and other types of unwanted sequence from your high-throughput sequencing reads |
dcm2niix | 1.0.20200331 | | dcm2niix is designed to convert neuroimaging data from the DICOM format to the NIfTI format |
ddd | 3.3.12 | | GNU DDD is a graphical front-end for command-line debuggers such as GDB, DBX, WDB, Ladebug, JDB, XDB, the Perl debugger, the bash debugger bashdb, the GNU Make debugger remake, or the Python debugger pydb |
deeptools | 3.2.1-anaconda2 | | deepTools is a suite of python tools particularly developed for the efficient analysis of high-throughput sequencing data, such as ChIP-seq, RNA-seq or MNase-seq |
dejagnu | 1.6.2 | | DejaGnu is a framework for testing other programs |
dexseq | 1.24.4 | | Inference of differential exon usage in RNA sequencing |
doxygen | 1.8.17 | | Doxygen is a documentation system for C++, C, Java, Objective-C, Python, IDL (Corba and Microsoft flavors), Fortran, VHDL, PHP, C#, and to some extent D |
eigen | 3.4.0 | | Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms. |
expect | 5.45.4 | | Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc. |
fastqc | 0.11.8 | | FastQC is a quality control application for high throughput sequence data |
fftw | 3.3.7 3.3.10 | | FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) |
gcc | 4.9.4 7.3.0 9.2.0 12.2.0 | Teach | The GNU Compiler Collection for C, C++, and Fortran |
gdb | 8.1 10.2 | Performance And Debugging Tools: Teach | GDB, the GNU Project debugger, allows you to see what is going on 'inside' another program while it executes -- or what another program was doing at the moment it crashed. |
git | 2.30.1 | | Git is a free and open source distributed version control system. |
git-annex | 2.8.1 2.20.1 8.20200618 | | |
gmp | 6.1.2 | | GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers |
gnu-parallel | 20180322 | Running Serial Jobs on Teach | GNU parallel is a shell tool for executing (usually serial) jobs in parallel |
gnuplot | 5.2.2 5.4.5 | | Portable interactive, function plotting utility |
go | 1.13 1.17.5 | | Go is an open source programming language that makes it easy to build simple, reliable, and efficient software |
googletest | 1.10.0 | | Google's C++ test framework |
graphviz | 2.40.1 | | Graphviz is open source graph visualization software |
gromacs | 2016.5 | GROMACS | GROMACS is a versatile package to perform molecular dynamics |
gsl | 2.4 2.7 2.7.1 | | The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers |
hdf5 | 1.8.20 1.10.4 1.10.7 1.10.9 | HDF5 | HDF5 is a data model, library, and file format for storing and managing data |
hisat2 | 2.1.0 | | HISAT2 is a fast and sensitive alignment program for mapping next-generation sequencing reads (both DNA and RNA) against the general human population (as well as against a single reference genome) |
htseq | 0.11.1-anaconda2 0.11.1 | | A framework to process and analyze data from high-throughput sequencing (HTS) assays |
htslib | 1.8 | | A C library for reading/writing high-throughput sequencing data |
intel | 2018.4 2020u1 | Teach | Intel compilers suite for C, C++, and Fortran, including the MKL, TBB, IPP, DAAL, and PSTL libraries |
intelmpi | 2018.4 2020u1 | Teach | |
intelpython3 | 2020u1 | Python | Deprecated |
java | 1.8.0_201 | | Java Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers |
julia | 1.1.1 1.8.5 | | A high-level, high-performance dynamic language for technical computing |
kallisto | 0.46.1 | | kallisto is a program for quantifying abundances of transcripts from bulk and single-cell RNA-Seq data, or more generally of target sequences using high-throughput sequencing reads |
lmdb | 0.9.22 0.9.23 | | OpenLDAP's Lightning Memory-Mapped Database (LMDB) library |
macs | 3.0.0a5 | | |
methyldackel | 0.4.0 | | A (mostly) universal methylation extractor for BS-seq experiments |
metilene | 0.2.7 | | Fast and sensitive detection of differential DNA methylation |
miso | 0.5.4 | | A probabilistic framework that quantitates the expression level of alternatively spliced genes from RNA-Seq data, and identifies differentially regulated isoforms or exons across samples |
mkl | 2018.4 2022.1.0 | Teach | Intel Math Kernel Library. |
mrbayes | 3.2.7 | | |
nano | 7.1 | | A beginner-friendly text editor for the terminal. |
netcdf | 4.6.1 4.8.0 4.8.1 4.9.0 | | NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data |
openmpi | 3.1.1 4.1.2 4.1.4 | Teach | An open source Message Passing Interface implementation |
oprofile | 1.3.0 | | OProfile is a system-wide profiler for Linux systems, capable of profiling all running code at low overhead |
orthofinder | 2.2.7 | | Program for identifying orthologous protein sequence families |
partitionfinder | 2.1.1 | | |
pgplot | 5.2.2-x 5.2.2 | | Graphics subroutine library for C/C++ and Fortran. |
plink | 1.07 1.9 1.90b6 | | |
prinseq | 0.20.4 | | A bioinformatics tool to PRe-process and show INformation of SEQuence data |
python | 3.6.8 3.8.5 3.9.10 | Python | Python is a programming language that lets you work more quickly and integrate your systems more effectively |
r | 3.4.3-anaconda5.1.0 3.5.0 3.5.1 3.6.3 | | R is a free software environment for statistical computing and graphics |
rarray | 1.2 2.1.1 2.2.1 2.3.0 2.4.0 | | Library for runtime multi-dimensional arrays in C++ |
raxml | 8.2.12 | | RAxML search algorithm for maximum likelihood based inference of phylogenetic trees |
salmon | 1.4.0 | | Salmon is a wicked-fast program to produce highly-accurate, transcript-level quantification estimates from RNA-seq data |
samtools | 1.8 | | SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format |
scons | 3.0.5 | | SCons is a software construction tool |
singularity | 2.6.1 | Singularity | Singularity is a portable application stack packaging and runtime utility |
sqlite | 3.23.0 | | SQLite: SQL Database Engine in a C Library |
stringtie | 1.3.5 | | StringTie is a fast and highly efficient assembler of RNA-Seq alignments into potential transcripts |
tbb | 2019.4 | | Intel(R) Threading Building Blocks (Intel(R) TBB) lets you easily write parallel C++ programs that take full advantage of multicore performance, that are portable, composable and have future-proof scalability |
texinfo | 6.7 | | Texinfo is the official documentation format of the GNU project |
texlive | 2019 | | High-quality typesetting programs TeX and LaTeX |
trimgalore | 0.6.0 | | A wrapper tool around Cutadapt and FastQC to consistently apply quality and adapter trimming to FastQ files, with some extra functionality for MspI-digested RRBS-type (Reduced Representation Bisulfite-Seq) libraries |
upsetr | 1.3.3 | | R implementation of the UpSet set visualization technique published by Lex, Gehlenborg, et al |
valgrind | 3.14.0 3.20.0 | Introduction To Performance | Valgrind provides debugging and profiling tools |
visit | 2.13.1 2.13.2 | Visualization | |
vmd | 1.9.4a38 | Visualization | VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting |