Trillium Quickstart
| Trillium | |
|---|---|
| Installed | Aug 2025 |
| Operating System | Rocky Linux 9.6 |
| Number of Nodes | 1284 nodes (240,768 cores) |
| Interconnect | Mellanox Dragonfly+ |
| RAM/Node | 768 GB |
| Cores/Node | 192 (CPU nodes) and 96 (GPU nodes) |
| Login/Devel Node | trillium.scinet.utoronto.ca |
| Queue Submission | Slurm |
System Overview
The Trillium system is a state-of-the-art high performance computing platform, consisting of three main components:
1. CPU Subcluster
- ~240,000 cores across homogeneous CPU nodes
- Non-blocking 400 Gb/s NDR InfiniBand interconnect
- Ideal for large-scale parallel workloads
2. GPU Subcluster
- 61 GPU nodes, each with 4 x NVIDIA H100 (SXM) GPUs
- 800 Gb/s bandwidth per node (200 Gb/s per GPU) over InfiniBand
- Optimized for AI/ML and accelerated science workloads
- Note: This subcluster is in high demand and not ideal for training extremely large models (multi-100B parameters)
3. Storage System
- Unified 29 PB VAST NVMe storage for all workloads
- No tiering — all flash-based for consistent performance
- Accessible via POSIX or S3 under a unified namespace
Specifications
The Trillium cluster comprises two types of nodes:
| nodes | cores | available memory | CPU | GPU |
|---|---|---|---|---|
| 1224 | 192 | 768 GB DDR5 | 2 x AMD EPYC 9655 (Zen 5) @ 2.6 GHz, 384 MB L3 cache | |
| 60 | 96 | 768 GB DDR5 | 1 x AMD EPYC 9654 (Zen 4) @ 2.4 GHz, 384 MB L3 cache | 4 x NVIDIA H100 SXM (80 GB memory) |
Each node has 768 GB of RAM. Being designed for large parallel workloads, the cluster has a fast interconnect consisting of NDR InfiniBand in a Dragonfly+ topology with Adaptive Routing. The compute nodes are accessed through a queueing system that allows jobs with a minimum duration of 15 minutes and a maximum of 24 hours.
Storage System
Trillium features a unified high-performance storage system based on the VAST platform, with no tiering. It serves the following directories:
- `/home` – For personal files and configurations.
- `/scratch` – High-speed, temporary storage for job data.
- `/project` – Shared storage for project teams and collaborations.
The storage is accessible via the NDR InfiniBand fabric for maximum performance across all workloads.
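On other SciNet systems these locations are exposed through environment variables such as $HOME, $SCRATCH, and $PROJECT; assuming Trillium follows the same convention (an assumption worth verifying once logged in), a typical session might start like this:
$ echo $SCRATCH                 # show the path to your scratch space
$ cd $SCRATCH                   # run jobs from scratch rather than home
$ mkdir -p $SCRATCH/myrun       # create a working directory (name is a placeholder)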
Getting started on Trillium
Access to Trillium is not enabled automatically for everyone with a Digital Research Alliance of Canada (formerly Compute Canada) account, but anyone with an active Alliance account can have it enabled. If you are new to SciNet, or your supervisor/PI does not hold a current Alliance RAC allocation, you will need to request access on the Access Systems page on the CCDB site. After clicking the "I request access" button, it usually takes only one or two business days for access to be granted.
You can check if you already have Trillium access by attempting to log in. If you receive a "Permission denied" error (and your SSH key is correctly set up), you may need to opt in.
Please read this document carefully. The FAQ is also a useful resource. If at any time you require assistance, or if something is unclear, please do not hesitate to contact us.
Logging in
Trillium runs Rocky Linux 9.6, a Linux distribution, so you will need to be familiar with Linux systems to work on Trillium. If you are not, it is worth your time to review our Introduction to Linux Shell class.
As with all SciNet and Alliance (formerly Compute Canada) compute systems, access to Trillium is done via SSH (secure shell) only and authentication is only allowed via SSH keys. Please refer to this page to generate your SSH key pair and make sure you use them securely.
Open a terminal window (e.g. Connecting with PuTTY on Windows or Connecting with MobaXTerm), then SSH into the Trillium login nodes with your Alliance (formerly Compute Canada) credentials:
$ ssh -i /path/to/ssh_private_key -Y MYALLIANCEUSERNAME@trillium.scinet.utoronto.ca
- The Trillium login nodes are where you develop, edit, compile, prepare and submit jobs.
- These login nodes are not part of the Trillium compute cluster, but have the same architecture, operating system, and software stack.
- The optional `-Y` enables X11 forwarding, allowing graphical programs to open windows on your local computer.
- To run on Trillium compute nodes, you must submit a batch job.
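If you log in frequently, you may find it convenient to define a host alias in your local SSH configuration. This is a minimal sketch; the alias name trillium and the key path are placeholders to adapt:
# ~/.ssh/config (on your local computer)
Host trillium
    HostName trillium.scinet.utoronto.ca
    User MYALLIANCEUSERNAME
    IdentityFile /path/to/ssh_private_key
With this in place, ssh trillium (optionally with -Y) is equivalent to the full command above.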
If you cannot log in, be sure to first check the System Status on this site's front page.
Note: We plan to add browser access to Trillium via Open OnDemand in the future. In the meantime you can still access our existing Open OnDemand deployment by following the instructions in our quickstart guide.
Software Environment
Trillium uses the environment modules system to manage compilers, libraries, and other software packages. Modules dynamically modify your environment (e.g., PATH, LD_LIBRARY_PATH) so you can access different versions of software without conflicts.
A detailed explanation can be found on the modules page.
Commonly used module commands:
- `module load <module-name>` – Load the default version of a software package.
- `module load <module-name>/<module-version>` – Load a specific version.
- `module purge` – Unload all currently loaded modules.
- `module avail` – List available modules that can be loaded.
- `module list` – Show currently loaded modules.
- `module spider` or `module spider <module-name>` – Search for available modules and their versions.
Handy abbreviations are available:
- `ml` – Equivalent to `module list`.
- `ml <module-name>` – Equivalent to `module load <module-name>`.
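For example, a typical interactive sequence might look as follows (the module name and version here are illustrative; use module avail or module spider to see what is actually installed on Trillium):
$ module purge                  # start from a clean environment
$ module load gcc/13.3          # load a specific compiler version (illustrative)
$ ml                            # show what is now loaded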
Tips for Loading Software
Properly managing your software environment is key to avoiding conflicts and ensuring reproducibility. Here are some best practices:
- Avoid loading modules in your `.bashrc` file. Doing so can cause unexpected behavior, particularly in non-interactive environments like batch jobs or remote shells. For more information, see our .bashrc guidelines.
- Instead, load modules manually or from a separate script. This approach gives you more control and helps keep environments clean.
- Load required modules inside your job submission script. This ensures that your job runs with the expected software environment, regardless of your interactive shell settings (see the example script after this list).
- Be explicit about module versions. Short names like `gcc` will load the system default (e.g., `gcc/12.3`), which may change in the future. Specify full versions (e.g., `gcc/13.3`) for long-term reproducibility.
- Resolve dependencies with `module spider`. Some modules depend on others. Use `module spider <module-name>` to discover which modules are required and how to load them in the correct order. For more, see Using `module spider`.
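To make these practices concrete, here is a minimal sketch of a job submission script. The module versions, job parameters, and executable name are illustrative placeholders, not Trillium-specific recommendations; check module spider for what is actually installed:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=192
#SBATCH --time=01:00:00
#SBATCH --job-name=example_run
# Start from a clean environment, then load explicit versions
module purge
module load gcc/13.3            # placeholder version; verify with module spider
srun ./my_application           # placeholder executable
Submit the script with sbatch, e.g. sbatch example_run.sh.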
Using Commercial Software
You may be able to use commercial software on Trillium, but there are a few important considerations:
- Bring your own license. You can use commercial software on Trillium if you have a valid license. If the software requires a license server, you can connect to it securely using SSH tunneling (see the sketch after this list).
- SciNet and the Alliance (formerly Compute Canada) do not provide user-specific licenses. Due to the large and diverse user base, we cannot provide licenses for individual or specialized commercial packages.
- Freely available commercial tools. Some widely useful commercial tools are available system-wide, such as compilers, math libraries, and debuggers.
- Software not available (unless you bring your own license): tools like MATLAB, Gaussian, and IDL are not provided centrally. If you have your own license, you are welcome to install and use them.
- Open-source alternatives are available. Consider using freely available tools such as Python, R, and Octave, which are well-supported and widely used on the system.
- We're here to help. If you have a valid license and need help installing commercial software, feel free to contact us; we'll assist where possible.
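As a sketch of the SSH tunneling approach mentioned above: suppose your institution's license server is license.example.com and listens on port 27000 (both placeholders; many license managers also use a second, vendor-specific port). From a machine that can reach the license server, a reverse tunnel makes that port available on the Trillium login node:
$ ssh -R 27000:license.example.com:27000 MYALLIANCEUSERNAME@trillium.scinet.utoronto.ca
On the Trillium login node, you would then point the software's license setting at localhost:27000.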
A list of commercial software currently installed on Trillium (for which you must supply a license to use) is available on the Commercial Software page.
Technical Details
Cooling and Energy Efficiency
Trillium is fully direct liquid cooled using warm water (35–40 °C input), resulting in:
- PUE below 1.03 (high energy efficiency)
- Use of closed-loop dry fluid coolers, avoiding evaporative towers and new water usage
- Heat reuse: Trillium supplies excess heat to nearby facilities to minimize climate impact
Storage System
The VAST high-performance file system consists of a unified 29 PB NVMe-backed storage pool, with:
- 29 PB effective capacity (deduplicated via VAST)
- 16.7 PB raw flash capacity
- 714 GB/s read bandwidth, 275 GB/s write bandwidth
- 10 million read IOPS, 2 million write IOPS
- POSIX and S3 access protocols under a unified namespace
- 48 C-Boxes and 14 D-Boxes for data services
Backup and Archive Storage
An additional 114 PB HPSS tape-based archive is available for nearline storage:
- Dual-copy archive across geographically separate libraries
- Used for both backup and archival purposes
- Backups are managed using Atempo backup software
Testing and Debugging
Before submitting your job to the cluster, it's important to test your code to ensure correctness and determine the resources it requires.
- Lightweight tests can be run directly on the login nodes. As a rule of thumb, these should:
- Run in under a few minutes
- Use no more than 1–2 GB of memory
- Use only 1–2 CPU cores
- You can also run the DDT debugger on the login nodes after loading it with:
module load ddt-cpu
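For example, to launch a small MPI program under DDT with four processes (the executable name is a placeholder, and DDT's graphical interface requires X11 forwarding):
$ ddt -n 4 ./my_application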
- For short tests that exceed login node limits or require dedicated resources, request an interactive debug job using the `debugjob` command:
tri-login01:~$ debugjob --clean N
Replace N with the number of nodes (1 to 4). If N=1, you will get 1 hour of interactive time; with N=4 (the maximum), you will get 22 minutes.
The --clean flag is optional but recommended, as it starts the session with no modules loaded, better mimicking the clean environment of batch jobs.
- If your test job requires more time than allowed by `debugjob`, you can request an interactive session from the regular queue using `salloc`:
tri-login01:~$ salloc --nodes=N --time=M:00:00 --x11
- `N` is the number of nodes
- `M` is the number of hours the job should run
- `--x11` is required for graphical applications (e.g., when using DDT or DDD)
Note: Jobs submitted with salloc may take longer to start, as they are scheduled like any other batch job. See the Testing with graphics page for more information on graphical testing options.