<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://docs.scinet.utoronto.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Nolta</id>
	<title>SciNet Users Documentation - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://docs.scinet.utoronto.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Nolta"/>
	<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php/Special:Contributions/Nolta"/>
	<updated>2026-04-30T18:59:17Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.12</generator>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7670</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7670"/>
		<updated>2026-04-29T13:21:43Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in the CC software stack.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
&lt;br /&gt;
Licenses are provided by [https://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://account.cmc.ca/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must [https://www.cmc.ca/support-form/ contact CMC] and tell them you want to use the Ansys tools on Trillium, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
Then create the file &amp;lt;code&amp;gt;~/.licenses/ansys.lic&amp;lt;/code&amp;gt; containing:&lt;br /&gt;
&lt;br /&gt;
 setenv(&amp;quot;ANSYSLMD_LICENSE_FILE&amp;quot;, &amp;quot;6624@nia-cmc&amp;quot;)&lt;br /&gt;
 setenv(&amp;quot;ANSYSLI_SERVERS&amp;quot;, &amp;quot;2325@nia-cmc&amp;quot;)&lt;br /&gt;
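One way to create this file from the shell is sketched below; /tmp/demo-licenses is a stand-in for the real ~/.licenses directory so the commands can be tried anywhere.

```shell
# Sketch: write the two setenv lines into ansys.lic.
# LIC_DIR is a demo stand-in for ~/.licenses.
LIC_DIR="/tmp/demo-licenses"
mkdir -p "$LIC_DIR"
cat > "$LIC_DIR/ansys.lic" <<'EOF'
setenv("ANSYSLMD_LICENSE_FILE", "6624@nia-cmc")
setenv("ANSYSLI_SERVERS", "2325@nia-cmc")
EOF
cat "$LIC_DIR/ansys.lic"
```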
&lt;br /&gt;
You may also use a license server from one of the general-purpose (GP) clusters (Rorqual, Narval, Fir, Nibi), since Trillium doesn't have its own server. More information can be found here:&lt;br /&gt;
&lt;br /&gt;
https://docs.alliancecan.ca/wiki/Ansys#Configuring_your_license_file&lt;br /&gt;
&lt;br /&gt;
=Running on Trillium=&lt;br /&gt;
&lt;br /&gt;
Load the following modules (current as of Apr 2026; newer versions may be available):&lt;br /&gt;
&lt;br /&gt;
 module load StdEnv/2023&lt;br /&gt;
 module load ansys/2025R2.04&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but not to run it; jobs must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory, $SCRATCH/.mw, and soft-link $HOME/.mw to it:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This fools Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands need to be run only once.&lt;br /&gt;
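The commands above can also be made safe to re-run; here is a runnable sketch in which /tmp/demo-scratch and /tmp/demo-home stand in for the real $SCRATCH and $HOME:

```shell
# Demo stand-ins for the real $SCRATCH and $HOME.
SCRATCH_DIR="/tmp/demo-scratch"
HOME_DIR="/tmp/demo-home"
mkdir -p "$SCRATCH_DIR/.mw" "$HOME_DIR"
# Link only if nothing is there yet, so re-running is harmless.
[ -e "$HOME_DIR/.mw" ] || ln -s "$SCRATCH_DIR/.mw" "$HOME_DIR/.mw"
# A file written via $HOME_DIR/.mw actually lands on scratch:
touch "$HOME_DIR/.mw/probe"
ls "$SCRATCH_DIR/.mw"
```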
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=192&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module load StdEnv/2023&lt;br /&gt;
module load ansys/2025R2.04&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk '{print $1 &amp;quot;:&amp;quot; 192}' | paste -s -d ':')&lt;br /&gt;
ansys252 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
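For illustration, the machines-string pipeline from the script above can be tried outside Slurm by substituting a fixed host list for the srun call; tri0001 and tri0002 are made-up hostnames, and 192 matches the cores requested per node in the SBATCH header:

```shell
# Stand-in for: srun hostname -s (which prints one line per task).
hosts() { printf 'tri0001\ntri0001\ntri0002\n'; }
# Deduplicate hosts, append the per-node core count, join with ':'.
machines=$(hosts | sort | uniq | awk '{print $1 ":" 192}' | paste -s -d ':')
echo "$machines"   # tri0001:192:tri0002:192
```

The result is a string of host:cores pairs joined by colons, which is the shape the script passes to the -machines option.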
&lt;br /&gt;
==Custom license server==&lt;br /&gt;
&lt;br /&gt;
To use your own license server for Ansys, you'll need to open SSH tunnels to the required ports and point Ansys at the tunnels.&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh tri-gw -fNL 1055:license.server:1055&lt;br /&gt;
ssh tri-gw -fNL 1056:license.server:1056&lt;br /&gt;
ssh tri-gw -fNL 2325:license.server:2325&lt;br /&gt;
&lt;br /&gt;
export ANSYSLI_SERVERS=2325@localhost&lt;br /&gt;
export ANSYSLMD_LICENSE_FILE=1055@localhost&lt;br /&gt;
export LM_LICENSE_FILE=1055@localhost&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure your firewall allows connections from the following IP addresses and confirm with your license provider which ports should be used with the ssh tunnel:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# 142.150.188.{58..67}&lt;br /&gt;
142.150.188.58&lt;br /&gt;
142.150.188.59&lt;br /&gt;
142.150.188.60&lt;br /&gt;
142.150.188.61&lt;br /&gt;
142.150.188.62&lt;br /&gt;
142.150.188.63&lt;br /&gt;
142.150.188.64&lt;br /&gt;
142.150.188.65&lt;br /&gt;
142.150.188.66&lt;br /&gt;
142.150.188.67&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=7571</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=7571"/>
		<updated>2026-03-06T15:33:10Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot;, &amp;quot;Partial&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up3 | Trillium|https://docs.alliancecan.ca/wiki/Trillium_Quickstart}}&lt;br /&gt;
|{{Up3 | OnDemand|https://docs.alliancecan.ca/wiki/Trillium_Open_OnDemand_Quickstart}}&lt;br /&gt;
|{{Up | Globus |Globus}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up | HPSS|HPSS}}&lt;br /&gt;
|{{Up | Balam|Balam}}&lt;br /&gt;
|{{Up | S4H | S4H}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up | Teach|Teach}}&lt;br /&gt;
|{{Up3 | File system|https://docs.alliancecan.ca/wiki/Trillium_Quickstart#Storage}}&lt;br /&gt;
|{{Up3 | External Network|https://docs.alliancecan.ca/wiki/Trillium_Quickstart#Logging_in}} &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Fri Feb 20, 2026, 11:35 pm:''' Power glitch, ~480 compute nodes rebooted. Regional power quality has been quite poor lately ([https://www.yorkregion.com/news/road-salt-blamed-for-power-outages/article_1a36d25d-5f97-56ee-a0c7-c49c7b732d38.html 1],&lt;br /&gt;
[https://www.yorkregion.com/news/power-company-executive-responds-to-york-region-outages/article_c4d072e7-2892-5c9c-8deb-ac5e1936779c.html 2]).&lt;br /&gt;
&lt;br /&gt;
'''Thu Feb 19, 2026, 3:00 pm:''' Systems restored. Please report issues to support@scinet.utoronto.ca.&lt;br /&gt;
&lt;br /&gt;
'''Tue Feb 17, 2026, 8:40 am:''' Power outage at the data centre.  Cooling issues have developed as a result.  Major systems (Trillium, S4H) are expected to be down until sometime Thursday. Login nodes and file systems will remain accessible.&lt;br /&gt;
&lt;br /&gt;
'''Mon Feb 16, 2026, 8:40 pm:''' Electricity is unstable in the data centre area due to severe snowfall.&lt;br /&gt;
&lt;br /&gt;
'''Thu Jan 29, 2026, 1:40 pm:''' All services are operational again.&lt;br /&gt;
&lt;br /&gt;
'''Thu Jan 29, 2026, 12:00 pm:''' The Trillium and Open OnDemand compute nodes are operational again. We are still working on bringing Balam, Neptune and S4H nodes up again.&lt;br /&gt;
&lt;br /&gt;
'''Thu Jan 29, 2026, 10:00 am:''' There was a power glitch at the data centre overnight. The login nodes are accessible but the compute nodes are down.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: --&amp;gt;&lt;br /&gt;
[[Previous messages]]&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [https://docs.alliancecan.ca/wiki/Trillium_Quickstart Trillium Quickstart]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7562</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7562"/>
		<updated>2026-03-04T18:30:03Z</updated>

		<summary type="html">&lt;p&gt;Nolta: /* Custom license server */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in the CC software stack.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
&lt;br /&gt;
Licenses are provided by [https://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://account.cmc.ca/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must [https://www.cmc.ca/support-form/ contact CMC] and tell them you want to use the Ansys tools on Trillium, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
Then create the file &amp;lt;code&amp;gt;~/.licenses/ansys.lic&amp;lt;/code&amp;gt; containing:&lt;br /&gt;
&lt;br /&gt;
 setenv(&amp;quot;ANSYSLMD_LICENSE_FILE&amp;quot;, &amp;quot;6624@nia-cmc&amp;quot;)&lt;br /&gt;
 setenv(&amp;quot;ANSYSLI_SERVERS&amp;quot;, &amp;quot;2325@nia-cmc&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
You may also use a license server from one of the general-purpose (GP) clusters (Rorqual, Narval, Fir, Nibi), since Trillium doesn't have its own server. More information can be found here:&lt;br /&gt;
&lt;br /&gt;
https://docs.alliancecan.ca/wiki/Ansys#Configuring_your_license_file&lt;br /&gt;
&lt;br /&gt;
=Running on Trillium=&lt;br /&gt;
&lt;br /&gt;
Load the following modules (current as of Sep 2025; newer versions may be available):&lt;br /&gt;
&lt;br /&gt;
 module load StdEnv/2023&lt;br /&gt;
 module load ansys/2025R1.02&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but not to run it; jobs must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory, $SCRATCH/.mw, and soft-link $HOME/.mw to it:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This fools Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands need to be run only once.&lt;br /&gt;
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=192&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module load StdEnv/2023&lt;br /&gt;
module load ansys/2025R1.02&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk '{print $1 &amp;quot;:&amp;quot; 192}' | paste -s -d ':')&lt;br /&gt;
ansys251 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Custom license server==&lt;br /&gt;
&lt;br /&gt;
To use your own license server for Ansys, you'll need to open SSH tunnels to the required ports and point Ansys at the tunnels.&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh tri-gw -fNL 1055:license.server:1055&lt;br /&gt;
ssh tri-gw -fNL 1056:license.server:1056&lt;br /&gt;
ssh tri-gw -fNL 2325:license.server:2325&lt;br /&gt;
&lt;br /&gt;
export ANSYSLI_SERVERS=2325@localhost&lt;br /&gt;
export ANSYSLMD_LICENSE_FILE=1055@localhost&lt;br /&gt;
export LM_LICENSE_FILE=1055@localhost&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure your firewall allows connections from the following IP addresses and confirm with your license provider which ports should be used with the ssh tunnel:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# 142.150.188.{58..67}&lt;br /&gt;
142.150.188.58&lt;br /&gt;
142.150.188.59&lt;br /&gt;
142.150.188.60&lt;br /&gt;
142.150.188.61&lt;br /&gt;
142.150.188.62&lt;br /&gt;
142.150.188.63&lt;br /&gt;
142.150.188.64&lt;br /&gt;
142.150.188.65&lt;br /&gt;
142.150.188.66&lt;br /&gt;
142.150.188.67&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=7559</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=7559"/>
		<updated>2026-02-21T16:18:30Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot;, &amp;quot;Partial&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up3 | Trillium|https://docs.alliancecan.ca/wiki/Trillium_Quickstart}}&lt;br /&gt;
|{{Up3 | OnDemand|https://docs.alliancecan.ca/wiki/Trillium_Open_OnDemand_Quickstart}}&lt;br /&gt;
|{{Up | Globus |Globus}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up | HPSS|HPSS}}&lt;br /&gt;
|{{Up | Balam|Balam}}&lt;br /&gt;
|{{Up | S4H | S4H}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up | Teach|Teach}}&lt;br /&gt;
|{{Up3 | File system|https://docs.alliancecan.ca/wiki/Trillium_Quickstart#Storage}}&lt;br /&gt;
|{{Up3 | External Network|https://docs.alliancecan.ca/wiki/Trillium_Quickstart#Logging_in}} &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Fri Feb 20, 2026, 11:35 pm:''' Power glitch, ~480 compute nodes rebooted. Regional power quality has been quite poor lately ([https://www.yorkregion.com/news/road-salt-blamed-for-power-outages/article_1a36d25d-5f97-56ee-a0c7-c49c7b732d38.html 1],&lt;br /&gt;
[https://www.yorkregion.com/news/power-company-executive-responds-to-york-region-outages/article_c4d072e7-2892-5c9c-8deb-ac5e1936779c.html 2]).&lt;br /&gt;
&lt;br /&gt;
'''Thu Feb 19, 2026, 3:00 pm:''' Systems restored. Please report issues to support@scinet.utoronto.ca.&lt;br /&gt;
&lt;br /&gt;
'''Tue Feb 17, 2026, 8:40 am:''' Power outage at the data centre.  Cooling issues have developed as a result.  Major systems (Trillium, S4H) are expected to be down until sometime Thursday. Login nodes and file systems will remain accessible.&lt;br /&gt;
&lt;br /&gt;
'''Mon Feb 16, 2026, 8:40 pm:''' Electricity is unstable in the data centre area due to severe snowfall.&lt;br /&gt;
&lt;br /&gt;
'''Thu Jan 29, 2026, 1:40 pm:''' All services are operational again.&lt;br /&gt;
&lt;br /&gt;
'''Thu Jan 29, 2026, 12:00 pm:''' The Trillium and Open OnDemand compute nodes are operational again. We are still working on bringing Balam, Neptune and S4H nodes up again.&lt;br /&gt;
&lt;br /&gt;
'''Thu Jan 29, 2026, 10:00 am:''' There was a power glitch at the data centre overnight. The login nodes are accessible but the compute nodes are down.  &lt;br /&gt;
&lt;br /&gt;
'''Fri Jan 16, 2026, 11:00 pm:''' HPSS is back online, and accessible via the alliancecan#hpss Globus endpoint. &lt;br /&gt;
&lt;br /&gt;
'''Thu Jan 15, 2026, 10:00 pm:''' HPSS will undergo maintenance on Friday morning, Jan 16, 2026, including the alliancecan#hpss Globus endpoint. &lt;br /&gt;
&lt;br /&gt;
'''Tue Jan 6, 2026, 10:15 am:''' OnDemand has been fixed and is working again.&lt;br /&gt;
&lt;br /&gt;
'''Mon Jan 5, 2026, 9:00 pm:''' The authentication mechanism of OnDemand is not working.&lt;br /&gt;
&lt;br /&gt;
'''Wed Dec 31, 2025, 12:40 pm:''' We believe the problem has now been resolved.  Please let us know if you still experience login problems or aborted jobs.&lt;br /&gt;
&lt;br /&gt;
'''Tue Dec 30, 2025, 2:10 pm:''' We are experiencing problems with authentication, resulting in failed logins, OOD errors, and aborted jobs (with &amp;quot;prolog error&amp;quot;).  Please bear with us, as we are very short-staffed during the holiday break.  We will post updates here.&lt;br /&gt;
&lt;br /&gt;
'''Wed Dec 3, 2025, 11:30 am:''' Open OnDemand is fully operational again.&lt;br /&gt;
&lt;br /&gt;
'''Sat Nov 29, 2025, 12:40 am:''' There has been a problem with the water chiller. Some systems are offline.&lt;br /&gt;
&lt;br /&gt;
'''Wed Nov 5, 2025, 12:55 pm:''' Balam is back online.&lt;br /&gt;
&lt;br /&gt;
'''Wed Nov 5, 2025, 10:00 am:''' Open OnDemand is back online.&lt;br /&gt;
&lt;br /&gt;
'''Tue Nov 4, 2025, 11:00 pm:''' Most of the work is done, data movers, Globus, and HPSS are back online. Remaining services will be worked on tomorrow.&lt;br /&gt;
&lt;br /&gt;
'''Tue Nov 4, 2025, 8:30 am:''' Scheduled network maintenance. Trillium cluster is *not* affected.&lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 21, 2025, 5:30 pm:''' Balam maintenance finished.&lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 21, 2025, 7:00 am:''' Balam maintenance day.&lt;br /&gt;
&lt;br /&gt;
'''Wed Oct 15, 2025, 3:55 pm:''' Trillium inbound connections through trillium.alliancecan.ca or trillium.scinet.utoronto.ca are working again.&lt;br /&gt;
&lt;br /&gt;
'''Wed Oct 15, 2025, 3:05 pm:''' Trillium is experiencing external network issues for incoming traffic. Please try: ssh USERNAME@tri-login01.scinet.utoronto.ca in the meantime.&lt;br /&gt;
&lt;br /&gt;
'''Mon Oct 06, 2025, 8:00 pm:''' HPSS is fully functional. You may submit archive jobs from the Trillium login nodes, datamovers and robots.&lt;br /&gt;
&lt;br /&gt;
'''Fri Oct 03, 2025, 6:30 pm:''' HPSS is back online, and already accessible via the alliancecan#hpss Globus endpoint. The directory tree now follows the other Alliance clusters. We're still working on job submission via Slurm.&lt;br /&gt;
&lt;br /&gt;
'''Wed Oct 01, 2025, 12:00 am:''' Niagara compute nodes are now unavailable for regular users. The login nodes will remain available for a while to allow a few last data transfers, although transfers from the Niagara file systems to Trillium are best done on nia-dm1.scinet.utoronto.ca.&lt;br /&gt;
&lt;br /&gt;
'''Wed Oct 01, 2025, 9:30 am:''' HPSS is down for scheduled maintenance, including the alliancecan#hpss Globus endpoint.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: --&amp;gt;&lt;br /&gt;
[[Previous messages]]&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [https://docs.alliancecan.ca/wiki/Trillium_Quickstart Trillium Quickstart]&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=7556</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=7556"/>
		<updated>2026-02-20T19:09:58Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot;, &amp;quot;Partial&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up3 | Trillium|https://docs.alliancecan.ca/wiki/Trillium_Quickstart}}&lt;br /&gt;
|{{Up3 | OnDemand|https://docs.alliancecan.ca/wiki/Trillium_Open_OnDemand_Quickstart}}&lt;br /&gt;
|{{Up | Globus |Globus}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up | HPSS|HPSS}}&lt;br /&gt;
|{{Up | Balam|Balam}}&lt;br /&gt;
|{{Up | S4H | S4H}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up | Teach|Teach}}&lt;br /&gt;
|{{Up3 | File system|https://docs.alliancecan.ca/wiki/Trillium_Quickstart#Storage}}&lt;br /&gt;
|{{Up3 | External Network|https://docs.alliancecan.ca/wiki/Trillium_Quickstart#Logging_in}} &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Fri Feb 20, 2026, 11:35 pm:''' Power glitch, ~480 compute nodes rebooted.&lt;br /&gt;
&lt;br /&gt;
'''Thu Feb 19, 2026, 3:00 pm:''' Systems restored. Please report issues to support@scinet.utoronto.ca.&lt;br /&gt;
&lt;br /&gt;
'''Tue Feb 17, 2026, 8:40 am:''' Power outage at the data centre.  Cooling issues have developed as a result.  Major systems (Trillium, S4H) are expected to be down until sometime Thursday. Login nodes and file systems will remain accessible.&lt;br /&gt;
&lt;br /&gt;
'''Mon Feb 16, 2026, 8:40 pm:''' Electricity is unstable in the data centre area due to severe snowfall.&lt;br /&gt;
&lt;br /&gt;
'''Thu Jan 29, 2026, 1:40 pm:''' All services are operational again.&lt;br /&gt;
&lt;br /&gt;
'''Thu Jan 29, 2026, 12:00 pm:''' The Trillium and Open OnDemand compute nodes are operational again. We are still working on bringing Balam, Neptune and S4H nodes up again.&lt;br /&gt;
&lt;br /&gt;
'''Thu Jan 29, 2026, 10:00 am:''' There was a power glitch at the data centre overnight. The login nodes are accessible but the compute nodes are down.  &lt;br /&gt;
&lt;br /&gt;
'''Fri Jan 16, 2026, 11:00 pm:''' HPSS is back online, and accessible via the alliancecan#hpss Globus endpoint. &lt;br /&gt;
&lt;br /&gt;
'''Thu Jan 15, 2026, 10:00 pm:''' HPSS will undergo maintenance on Friday morning, Jan 16, 2026, including the alliancecan#hpss Globus endpoint. &lt;br /&gt;
&lt;br /&gt;
'''Tue Jan 6, 2026, 10:15 am:''' OnDemand has been fixed and is working again.&lt;br /&gt;
&lt;br /&gt;
'''Mon Jan 5, 2026, 9:00 pm:''' The authentication mechanism of OnDemand is not working.&lt;br /&gt;
&lt;br /&gt;
'''Wed Dec 31, 2025, 12:40 pm:''' We believe the problem has now been resolved.  Please let us know if you still experience login problems or aborted jobs.&lt;br /&gt;
&lt;br /&gt;
'''Tue Dec 30, 2025, 2:10 pm:''' We are experiencing problems with authentication, resulting in failed logins, OOD errors, and aborted jobs (with &amp;quot;prolog error&amp;quot;).  Please bear with us, as we are very short-staffed during the holiday break.  We will post updates here.&lt;br /&gt;
&lt;br /&gt;
'''Wed Dec 3, 2025, 11:30 am:''' Open OnDemand is fully operational again.&lt;br /&gt;
&lt;br /&gt;
'''Sat Nov 29, 2025, 12:40 am:''' There has been a problem with the water chiller. Some systems are offline.&lt;br /&gt;
&lt;br /&gt;
'''Wed Nov 5, 2025, 12:55 pm:''' Balam is back online.&lt;br /&gt;
&lt;br /&gt;
'''Wed Nov 5, 2025, 10:00 am:''' Open OnDemand is back online.&lt;br /&gt;
&lt;br /&gt;
'''Tue Nov 4, 2025, 11:00 pm:''' Most of the work is done, data movers, Globus, and HPSS are back online. Remaining services will be worked on tomorrow.&lt;br /&gt;
&lt;br /&gt;
'''Tue Nov 4, 2025, 8:30 am:''' Scheduled network maintenance. Trillium cluster is *not* affected.&lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 21, 2025, 5:30 pm:''' Balam maintenance finished.&lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 21, 2025, 7:00 am:''' Balam maintenance day.&lt;br /&gt;
&lt;br /&gt;
'''Wed Oct 15, 2025, 3:55 pm:''' Trillium inbound connections through trillium.alliancecan.ca or trillium.scinet.utoronto.ca are working again.&lt;br /&gt;
&lt;br /&gt;
'''Wed Oct 15, 2025, 3:05 pm:''' Trillium is experiencing external network issues for incoming traffic. Please try: ssh USERNAME@tri-login01.scinet.utoronto.ca in the meantime.&lt;br /&gt;
&lt;br /&gt;
'''Mon Oct 06, 2025, 8:00 pm:''' HPSS is fully functional. You may submit archive jobs from the Trillium login nodes, datamovers and robots.&lt;br /&gt;
&lt;br /&gt;
'''Fri Oct 03, 2025, 6:30 pm:''' HPSS is back online, and already accessible via the alliancecan#hpss Globus endpoint. The directory tree now follows the other Alliance clusters. We're still working on job submission via Slurm.&lt;br /&gt;
&lt;br /&gt;
'''Wed Oct 01, 2025, 12:00 am:''' Niagara compute nodes are now unavailable for regular users. The login nodes will remain available for a while to allow a few last data transfers, although transfers from the Niagara file systems to Trillium are best done on nia-dm1.scinet.utoronto.ca.&lt;br /&gt;
&lt;br /&gt;
'''Wed Oct 01, 2025, 9:30 am:''' HPSS is down for scheduled maintenance, including the alliancecan#hpss Globus endpoint.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: --&amp;gt;&lt;br /&gt;
[[Previous messages]]&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [https://docs.alliancecan.ca/wiki/Trillium_Quickstart Trillium Quickstart]&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7184</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7184"/>
		<updated>2025-10-15T13:51:28Z</updated>

		<summary type="html">&lt;p&gt;Nolta: /* Custom license server */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in the CC software stack.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
&lt;br /&gt;
Licenses are provided by [https://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://account.cmc.ca/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must [https://www.cmc.ca/support-form/ contact CMC] and tell them you want to use the Ansys tools on Trillium, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
Then create the file &amp;lt;code&amp;gt;~/.licenses/ansys.lic&amp;lt;/code&amp;gt; containing:&lt;br /&gt;
&lt;br /&gt;
 setenv(&amp;quot;ANSYSLMD_LICENSE_FILE&amp;quot;, &amp;quot;6624@nia-cmc&amp;quot;)&lt;br /&gt;
 setenv(&amp;quot;ANSYSLI_SERVERS&amp;quot;, &amp;quot;2325@nia-cmc&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
You may also use a license server from the other GP clusters (Rorqual, Narval, Fir, Nibi), since Trillium doesn't have its own server. More information can be found here:&lt;br /&gt;
&lt;br /&gt;
https://docs.alliancecan.ca/wiki/Ansys#Configuring_your_license_file&lt;br /&gt;
&lt;br /&gt;
=Running on Trillium=&lt;br /&gt;
&lt;br /&gt;
Load the following modules (current as of Sep 2025, newer versions may be available):&lt;br /&gt;
&lt;br /&gt;
 module load StdEnv/2023&lt;br /&gt;
 module load ansys/2025R1.02&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but not to run it: jobs must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory called $SCRATCH/.mw, and create a soft link from $HOME/.mw to $SCRATCH/.mw:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This fools Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands only need to be run once.&lt;br /&gt;
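Note that the mkdir/ln pair above errors out if run a second time. A hedged, idempotent sketch of the same setup (an illustration, not the wiki's exact recipe; on Trillium $SCRATCH is predefined, and here it is defaulted to a temporary directory only so the sketch runs anywhere):

```shell
# Idempotent variant of the .mw setup (assumption: SCRATCH may already be set).
SCRATCH="${SCRATCH:-$(mktemp -d)}"   # fall back to a temp dir off-cluster
mkdir -p "$SCRATCH/.mw"              # -p: no error if the directory exists
# Create the link only if $HOME/.mw is not already present.
[ -e "$HOME/.mw" ] || ln -s "$SCRATCH/.mw" "$HOME/.mw"
```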
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=192&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module load StdEnv/2023&lt;br /&gt;
module load ansys/2025R1.02&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk -v n=&amp;quot;$SLURM_NTASKS_PER_NODE&amp;quot; '{print $1 &amp;quot;:&amp;quot; n}' | paste -s -d ':')&lt;br /&gt;
ansys251 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
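The machines pipeline above collapses the per-task hostnames reported by srun into a colon-separated host:count string. A minimal sketch with made-up hostnames standing in for the srun output (the 192 matches the --ntasks-per-node request above):

```shell
# Hedged sketch: the same sort|uniq|awk|paste pipeline, fed fixed fake hostnames.
machines=$(printf 'tri0002\ntri0001\ntri0001\n' \
    | sort | uniq | awk '{print $1 ":" 192}' | paste -s -d ':')
echo "$machines"    # prints tri0001:192:tri0002:192
```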
&lt;br /&gt;
==Custom license server==&lt;br /&gt;
&lt;br /&gt;
To use your own license server for Ansys, you'll need to open SSH tunnels to the required ports and point Ansys at the tunnels.&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh tri-gw -fNL 1055:license.server:1055&lt;br /&gt;
ssh tri-gw -fNL 1056:license.server:1056&lt;br /&gt;
ssh tri-gw -fNL 2325:license.server:2325&lt;br /&gt;
&lt;br /&gt;
export ANSYSLI_SERVERS=2325@localhost&lt;br /&gt;
export ANSYSLMD_LICENSE_FILE=1055@localhost&lt;br /&gt;
export LM_LICENSE_FILE=1055@localhost&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure your firewall allows connections from the following IP addresses, and confirm with your license provider which ports should be used for the SSH tunnels:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
142.150.188.58&lt;br /&gt;
142.150.188.60&lt;br /&gt;
142.150.188.61&lt;br /&gt;
142.150.188.62&lt;br /&gt;
142.150.188.63&lt;br /&gt;
142.150.188.64&lt;br /&gt;
142.150.188.65&lt;br /&gt;
142.150.188.66&lt;br /&gt;
142.150.188.67&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7127</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7127"/>
		<updated>2025-09-26T19:44:57Z</updated>

		<summary type="html">&lt;p&gt;Nolta: /* Getting a license */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in the CC software stack.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
&lt;br /&gt;
Licenses are provided by [https://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://account.cmc.ca/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must [https://www.cmc.ca/support-form/ contact CMC] and tell them you want to use the Ansys tools on Trillium, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
Then create the file &amp;lt;code&amp;gt;~/.licenses/ansys.lic&amp;lt;/code&amp;gt; containing:&lt;br /&gt;
&lt;br /&gt;
 setenv(&amp;quot;ANSYSLMD_LICENSE_FILE&amp;quot;, &amp;quot;6624@nia-cmc&amp;quot;)&lt;br /&gt;
 setenv(&amp;quot;ANSYSLI_SERVERS&amp;quot;, &amp;quot;2325@nia-cmc&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
You may also use a license server from the other GP clusters (Rorqual, Narval, Fir, Nibi), since Trillium doesn't have its own server. More information can be found here:&lt;br /&gt;
&lt;br /&gt;
https://docs.alliancecan.ca/wiki/Ansys#Configuring_your_license_file&lt;br /&gt;
&lt;br /&gt;
=Running on Trillium=&lt;br /&gt;
&lt;br /&gt;
Load the following modules (current as of Sep 2025, newer versions may be available):&lt;br /&gt;
&lt;br /&gt;
 module load StdEnv/2023&lt;br /&gt;
 module load ansys/2025R1.02&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but not to run it: jobs must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory called $SCRATCH/.mw, and create a soft link from $HOME/.mw to $SCRATCH/.mw:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This fools Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands only need to be run once.&lt;br /&gt;
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=192&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module load StdEnv/2023&lt;br /&gt;
module load ansys/2025R1.02&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk -v n=&amp;quot;$SLURM_NTASKS_PER_NODE&amp;quot; '{print $1 &amp;quot;:&amp;quot; n}' | paste -s -d ':')&lt;br /&gt;
ansys251 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Custom license server==&lt;br /&gt;
&lt;br /&gt;
To use your own license server for Ansys, you'll need to open SSH tunnels to the required ports and point Ansys at the tunnels.&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh tri-gw -fNL 1055:license.server:1055&lt;br /&gt;
ssh tri-gw -fNL 1056:license.server:1056&lt;br /&gt;
ssh tri-gw -fNL 2325:license.server:2325&lt;br /&gt;
&lt;br /&gt;
export ANSYSLI_SERVERS=2325@localhost&lt;br /&gt;
export ANSYSLMD_LICENSE_FILE=1055@localhost&lt;br /&gt;
export LM_LICENSE_FILE=1055@localhost&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure your firewall allows connections from the following IP addresses, and confirm with your license provider which ports should be used for the SSH tunnels:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
142.150.188.58&lt;br /&gt;
142.150.188.61&lt;br /&gt;
142.150.188.62&lt;br /&gt;
142.150.188.63&lt;br /&gt;
142.150.188.64&lt;br /&gt;
142.150.188.65&lt;br /&gt;
142.150.188.66&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7124</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7124"/>
		<updated>2025-09-26T19:42:45Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in the CC software stack.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
&lt;br /&gt;
Licenses are provided by [https://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://account.cmc.ca/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must [https://www.cmc.ca/support-form/ contact CMC] and tell them you want to use the Ansys tools on Trillium, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
You may also use a license server from the other GP clusters (Rorqual, Narval, Fir, Nibi), since Trillium doesn't have its own server. More information can be found here:&lt;br /&gt;
&lt;br /&gt;
https://docs.alliancecan.ca/wiki/Ansys#Configuring_your_license_file&lt;br /&gt;
&lt;br /&gt;
=Running on Trillium=&lt;br /&gt;
&lt;br /&gt;
Load the following modules (current as of Sep 2025, newer versions may be available):&lt;br /&gt;
&lt;br /&gt;
 module load StdEnv/2023&lt;br /&gt;
 module load ansys/2025R1.02&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but not to run it: jobs must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory called $SCRATCH/.mw, and create a soft link from $HOME/.mw to $SCRATCH/.mw:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This fools Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands only need to be run once.&lt;br /&gt;
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=192&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module load StdEnv/2023&lt;br /&gt;
module load ansys/2025R1.02&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk -v n=&amp;quot;$SLURM_NTASKS_PER_NODE&amp;quot; '{print $1 &amp;quot;:&amp;quot; n}' | paste -s -d ':')&lt;br /&gt;
ansys251 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Custom license server==&lt;br /&gt;
&lt;br /&gt;
To use your own license server for Ansys, you'll need to open SSH tunnels to the required ports and point Ansys at the tunnels.&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh tri-gw -fNL 1055:license.server:1055&lt;br /&gt;
ssh tri-gw -fNL 1056:license.server:1056&lt;br /&gt;
ssh tri-gw -fNL 2325:license.server:2325&lt;br /&gt;
&lt;br /&gt;
export ANSYSLI_SERVERS=2325@localhost&lt;br /&gt;
export ANSYSLMD_LICENSE_FILE=1055@localhost&lt;br /&gt;
export LM_LICENSE_FILE=1055@localhost&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure your firewall allows connections from the following IP addresses, and confirm with your license provider which ports should be used for the SSH tunnels:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
142.150.188.58&lt;br /&gt;
142.150.188.61&lt;br /&gt;
142.150.188.62&lt;br /&gt;
142.150.188.63&lt;br /&gt;
142.150.188.64&lt;br /&gt;
142.150.188.65&lt;br /&gt;
142.150.188.66&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7121</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7121"/>
		<updated>2025-09-26T19:41:03Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in the CC software stack.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
&lt;br /&gt;
Licenses are provided by [https://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://account.cmc.ca/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must [https://www.cmc.ca/support-form/ contact CMC] and tell them you want to use the Ansys tools on Trillium, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
You may also use a license server from the other GP clusters (Rorqual, Narval, Fir, Nibi), since Trillium doesn't have its own server. More information can be found here:&lt;br /&gt;
&lt;br /&gt;
https://docs.computecanada.ca/wiki/ANSYS#Configuring_your_own_license_file&lt;br /&gt;
&lt;br /&gt;
=Running on Trillium=&lt;br /&gt;
&lt;br /&gt;
Load the following modules (current as of Sep 2025, newer versions may be available):&lt;br /&gt;
&lt;br /&gt;
 module load StdEnv/2023&lt;br /&gt;
 module load ansys/2025R1.02&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but not to run it: jobs must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory called $SCRATCH/.mw, and create a soft link from $HOME/.mw to $SCRATCH/.mw:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This fools Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands only need to be run once.&lt;br /&gt;
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=192&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module load StdEnv/2023&lt;br /&gt;
module load ansys/2025R1.02&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk -v n=&amp;quot;$SLURM_NTASKS_PER_NODE&amp;quot; '{print $1 &amp;quot;:&amp;quot; n}' | paste -s -d ':')&lt;br /&gt;
ansys251 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Custom license server==&lt;br /&gt;
&lt;br /&gt;
To use your own license server for Ansys, you'll need to open SSH tunnels to the required ports and point Ansys at the tunnels.&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh tri-gw -fNL 1055:license.server:1055&lt;br /&gt;
ssh tri-gw -fNL 1056:license.server:1056&lt;br /&gt;
ssh tri-gw -fNL 2325:license.server:2325&lt;br /&gt;
&lt;br /&gt;
export ANSYSLI_SERVERS=2325@localhost&lt;br /&gt;
export ANSYSLMD_LICENSE_FILE=1055@localhost&lt;br /&gt;
export LM_LICENSE_FILE=1055@localhost&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure your firewall allows connections from the following IP addresses, and confirm with your license provider which ports should be used for the SSH tunnels:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
142.150.188.58&lt;br /&gt;
142.150.188.61&lt;br /&gt;
142.150.188.62&lt;br /&gt;
142.150.188.63&lt;br /&gt;
142.150.188.64&lt;br /&gt;
142.150.188.65&lt;br /&gt;
142.150.188.66&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7118</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7118"/>
		<updated>2025-09-26T19:40:03Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in the CC software stack.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
&lt;br /&gt;
Licenses are provided by [https://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://account.cmc.ca/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must [https://cmcmicrosystems.formtitan.com/SupportForm contact CMC] and tell them you want to use the Ansys tools on Trillium, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
You may also use a license server from the other GP clusters (Rorqual, Narval, Fir, Nibi), since Trillium doesn't have its own server. More information can be found here:&lt;br /&gt;
&lt;br /&gt;
https://docs.computecanada.ca/wiki/ANSYS#Configuring_your_own_license_file&lt;br /&gt;
&lt;br /&gt;
=Running on Trillium=&lt;br /&gt;
&lt;br /&gt;
Load the following modules (current as of Sep 2025, newer versions may be available):&lt;br /&gt;
&lt;br /&gt;
 module load StdEnv/2023&lt;br /&gt;
 module load ansys/2025R1.02&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but not to run it: jobs must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory called $SCRATCH/.mw, and create a soft link from $HOME/.mw to $SCRATCH/.mw:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This fools Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands only need to be run once.&lt;br /&gt;
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=192&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module load StdEnv/2023&lt;br /&gt;
module load ansys/2025R1.02&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk -v n=&amp;quot;$SLURM_NTASKS_PER_NODE&amp;quot; '{print $1 &amp;quot;:&amp;quot; n}' | paste -s -d ':')&lt;br /&gt;
ansys251 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Custom license server==&lt;br /&gt;
&lt;br /&gt;
To use your own license server for Ansys, you'll need to open SSH tunnels to the required ports and point Ansys at the tunnels.&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh tri-gw -fNL 1055:license.server:1055&lt;br /&gt;
ssh tri-gw -fNL 1056:license.server:1056&lt;br /&gt;
ssh tri-gw -fNL 2325:license.server:2325&lt;br /&gt;
&lt;br /&gt;
export ANSYSLI_SERVERS=2325@localhost&lt;br /&gt;
export ANSYSLMD_LICENSE_FILE=1055@localhost&lt;br /&gt;
export LM_LICENSE_FILE=1055@localhost&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure your firewall allows connections from the following IP addresses, and confirm with your license provider which ports should be used for the SSH tunnels:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
142.150.188.58&lt;br /&gt;
142.150.188.61&lt;br /&gt;
142.150.188.62&lt;br /&gt;
142.150.188.63&lt;br /&gt;
142.150.188.64&lt;br /&gt;
142.150.188.65&lt;br /&gt;
142.150.188.66&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7115</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=7115"/>
		<updated>2025-09-26T19:25:56Z</updated>

		<summary type="html">&lt;p&gt;Nolta: update cmc account url&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in both the Niagara and CC software stacks.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
Licenses are provided by [https://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://account.cmc.ca/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must [https://cmcmicrosystems.formtitan.com/SupportForm contact CMC] and tell them you want to use the Ansys tools on Niagara, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
After confirming with CMC and your institution that your username is authorized to claim a license, you will need to establish an SSH tunnel in your submission script in order to access the license server from the Niagara compute nodes (which have no direct access to the internet). Please obtain the proper server name and ports from CMC or directly from your institution, so you can fill in the SSH tunnel settings in the script template further below.&lt;br /&gt;
&lt;br /&gt;
You may also use a license server from the other GP clusters (Beluga/Narval, Cedar or Graham), since Niagara doesn't have its own server. More information can be found here:&lt;br /&gt;
&lt;br /&gt;
https://docs.computecanada.ca/wiki/ANSYS#Configuring_your_own_license_file&lt;br /&gt;
&lt;br /&gt;
=Running on Niagara=&lt;br /&gt;
&lt;br /&gt;
Commercial software modules are not available by default, and require a 'module use' command:&lt;br /&gt;
&lt;br /&gt;
 module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
 module load ansys/2023r1&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but not to run it: jobs must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory called $SCRATCH/.mw, and create a soft link from $HOME/.mw to $SCRATCH/.mw:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This fools Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands only need to be run once.&lt;br /&gt;
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
module load ansys/2023r1&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk '{print $1 &amp;quot;:&amp;quot; 40}' | paste -s -d ':')&lt;br /&gt;
ansys231 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Custom license server==&lt;br /&gt;
&lt;br /&gt;
To use your own license server for Ansys, you'll need to open SSH tunnels to the required ports and point Ansys at the tunnels.&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh nia-gw -fNL 1055:license.server:1055&lt;br /&gt;
ssh nia-gw -fNL 1056:license.server:1056&lt;br /&gt;
ssh nia-gw -fNL 2325:license.server:2325&lt;br /&gt;
&lt;br /&gt;
export ANSYSLI_SERVERS=2325@localhost&lt;br /&gt;
export ANSYSLMD_LICENSE_FILE=1055@localhost&lt;br /&gt;
export LM_LICENSE_FILE=1055@localhost&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure your firewall allows connections from the following IP addresses, and confirm with your license provider which ports should be used for the SSH tunnels:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
142.150.188.71&lt;br /&gt;
142.150.188.72&lt;br /&gt;
142.150.188.73&lt;br /&gt;
142.150.188.74&lt;br /&gt;
142.150.188.75&lt;br /&gt;
142.150.188.76&lt;br /&gt;
142.150.188.77&lt;br /&gt;
142.150.188.78&lt;br /&gt;
142.1.174.227&lt;br /&gt;
142.1.174.228&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Running using the CC installation=&lt;br /&gt;
&lt;br /&gt;
==Ansys 19.0==&lt;br /&gt;
To access the CC software stack you must unload the Niagara stack.&lt;br /&gt;
&lt;br /&gt;
 module load CCEnv StdEnv&lt;br /&gt;
 module load ansys/19.0&lt;br /&gt;
&lt;br /&gt;
You can run the script given in the previous section by substituting the previous module commands with the above two.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=7037</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=7037"/>
		<updated>2025-09-17T12:57:23Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot;, &amp;quot;Partial&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Down3 | Trillium|https://docs.alliancecan.ca/wiki/Trillium_Quickstart}}&lt;br /&gt;
|{{Down | Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Down | Teach|Teach}}&lt;br /&gt;
|{{Down | Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Down | OnDemand|Open_OnDemand_Quickstart}}&lt;br /&gt;
|{{Down | Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Down | File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Down | Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Down | HPSS|HPSS}}&lt;br /&gt;
|{{Down | Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Down | External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Partial| Globus |Globus}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Down | Balam|Balam}}&lt;br /&gt;
|{{Down | Cvmfs|Using_modules}}&lt;br /&gt;
|{{Down | Mist|Mist}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Tue Sep 16, 2025, 5:45 pm:''' Unfortunately, we cannot bring all systems up yet, because we are waiting for a spare part for the cooling system that will arrive tomorrow.  In the meantime, we will try to bring Trillium up, but only the login nodes of the other systems.&lt;br /&gt;
&lt;br /&gt;
'''Tue Sep 16, 2025, from 7:00 am to 5:00 pm (EDT):''' The SciNet datacentre will undergo maintenance of several critical parts of the centre.  This will require a full shutdown of all SciNet systems (Trillium, Niagara, Mist, HPSS, Rouge, Teach, as well as hosted equipment). This will also be the time that the Mist cluster gets decommissioned. &lt;br /&gt;
&lt;br /&gt;
'''Fri Sep 12 22:03:17 EDT 2025:''' HPSS software and OS upgrades are finished.&lt;br /&gt;
&lt;br /&gt;
'''Tue Sep  9 17:05:38 EDT 2025:''' Starting tomorrow, Sep/10, and for the following 3 days HPSS will be down for software and OS upgrades. We will strive to finish sooner, at which time we will make the system available to users again.&lt;br /&gt;
&lt;br /&gt;
===Mist/Niagara Decommissioning Schedule===&lt;br /&gt;
&lt;br /&gt;
'''September 4, 2025'''&lt;br /&gt;
* Niagara reduced to 863 compute nodes.&lt;br /&gt;
&lt;br /&gt;
'''September 9, 2025'''&lt;br /&gt;
* Niagara's Open OnDemand decommissioned.&lt;br /&gt;
* Brief data centre connection outage at 9 AM EDT&lt;br /&gt;
* Niagara reduced to 647 compute nodes at end of day.&lt;br /&gt;
&lt;br /&gt;
'''September 11, 2025'''&lt;br /&gt;
* Trillium Open OnDemand goes live.&lt;br /&gt;
&lt;br /&gt;
'''September 16, 2025'''&lt;br /&gt;
* '''Full-day data centre maintenance'''&lt;br /&gt;
* Niagara reduced to 431 compute nodes.&lt;br /&gt;
* Mist decommissioned.&lt;br /&gt;
&lt;br /&gt;
'''September 24, 2025'''&lt;br /&gt;
* Niagara reduced to 215 compute nodes.&lt;br /&gt;
&lt;br /&gt;
'''September 30, 2025'''&lt;br /&gt;
* Niagara decommissioned.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: --&amp;gt;&lt;br /&gt;
[[Previous messages]]&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [https://docs.alliancecan.ca/wiki/Trillium_Quickstart Trillium Quickstart]&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=SSH&amp;diff=5810</id>
		<title>SSH</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=SSH&amp;diff=5810"/>
		<updated>2024-08-18T13:03:59Z</updated>

		<summary type="html">&lt;p&gt;Nolta: /* Two-Factor authentication */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SSH (secure shell) is the only way to log into the systems at SciNet.   It opens a secure, encrypted connection between your computer and SciNet.  If you have a Linux or Mac OSX machine, you already have SSH installed; if you have a Windows machine, you will have to install additional software before logging into SciNet.&lt;br /&gt;
&lt;br /&gt;
{{Note|SSH keys are now the only way to authenticate to most SciNet systems, and passwords are not accepted.}}&lt;br /&gt;
&lt;br /&gt;
==SSH For Linux or Mac OS X Users==&lt;br /&gt;
&lt;br /&gt;
===Simple Login===&lt;br /&gt;
&lt;br /&gt;
To login to SciNet's Niagara cluster, open a terminal window and type:&lt;br /&gt;
&lt;br /&gt;
 ssh USERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
&lt;br /&gt;
replacing USERNAME with your SciNet username.   Once done, you will be logged into one of the login nodes at the SciNet data centre, as if you had a terminal on those machines open on your desktop. &lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;***Note***&amp;lt;/big&amp;gt;''' if you chose a custom SSH key name, &amp;lt;i&amp;gt;i.e.&amp;lt;/i&amp;gt; something other than the default names: &amp;lt;code&amp;gt;id_dsa&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;id_ecdsa&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;id_ed25519&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;id_rsa&amp;lt;/code&amp;gt;, you will need to use the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option of ssh to specify the path to your private key via:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -i /path/to/key USERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More details about custom SSH key names can be found [[#Custom SSH Keys names|here]].&lt;br /&gt;
&lt;br /&gt;
Note that if your username is the same on both the machine you're logging in from and the SciNet machines, you can drop the &amp;lt;tt&amp;gt;USERNAME@&amp;lt;/tt&amp;gt;, as SSH by default will try the username on the machine you are logging in from.&lt;br /&gt;
&lt;br /&gt;
===Copying Files===&lt;br /&gt;
&lt;br /&gt;
The SSH protocol can be used for more than logging in remotely; it can also be used to copy files between machines.&lt;br /&gt;
&lt;br /&gt;
To copy '''small''' files from your home computer to a subdirectory of your &amp;lt;tt&amp;gt;/scratch&amp;lt;/tt&amp;gt; directory at SciNet, you would type the following from a terminal on your computer:&lt;br /&gt;
&lt;br /&gt;
 scp filetocopy.txt USERNAME@niagara.scinet.utoronto.ca:/scratch/USERNAME/some_subdirectory/&lt;br /&gt;
&lt;br /&gt;
Note that soon the location of your scratch directory will change, and you will have to type:&lt;br /&gt;
&lt;br /&gt;
 scp filetocopy.txt USERNAME@niagara.scinet.utoronto.ca:/scratch/G/GROUPNAME/USERNAME/some_subdirectory/&lt;br /&gt;
&lt;br /&gt;
Similarly, to copy files back into your current directory, you would type&lt;br /&gt;
&lt;br /&gt;
 scp USERNAME@niagara.scinet.utoronto.ca:/scratch/G/GROUPNAME/USERNAME/my_dirs/myfile.txt . &lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;***Note***&amp;lt;/big&amp;gt;''' if you chose a custom SSH key name, &amp;lt;i&amp;gt;i.e.&amp;lt;/i&amp;gt; something other than the default names: &amp;lt;code&amp;gt;id_dsa&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;id_ecdsa&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;id_ed25519&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;id_rsa&amp;lt;/code&amp;gt;, you will need to use the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option of scp and specify the path to your private key before the file paths via:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp -i /path/to/key filetocopy.txt USERNAME@niagara.scinet.utoronto.ca:/scratch/USERNAME/some_subdirectory/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More details about custom SSH key names can be found [[#Custom SSH Keys names|here]].&lt;br /&gt;
&lt;br /&gt;
The [[Niagara_Quickstart#Moving_data | Data Management]] wiki page has much more information on doing large transfers efficiently.&lt;br /&gt;
&lt;br /&gt;
==SSH for Windows Users==&lt;br /&gt;
&lt;br /&gt;
To use SSH on Windows, you will have to install SSH software.   SciNet recommends, roughly in order of preference:&lt;br /&gt;
&lt;br /&gt;
* [http://www.cygwin.com/ Cygwin] is an entire linux-like environment for Windows.   Using something like Cygwin is highly recommended if you are going to be interacting a lot with linux systems, as it will give you a development environment very similar to that on the systems you'll be using.   Download and run &amp;lt;tt&amp;gt;setup.exe&amp;lt;/tt&amp;gt;, and install any packages you think you'll need.  Once this is done, you will have icons for terminals, including one saying something like &amp;quot;X11&amp;quot;.  From either of these, you'll be able to type &amp;lt;tt&amp;gt;ssh user@niagara.scinet.utoronto.ca&amp;lt;/tt&amp;gt; as above; if you think you will need to pop up windows from SciNet machines (&amp;lt;i&amp;gt;e.g.&amp;lt;/i&amp;gt;, for displaying data or using [[Performance_And_Debugging_Tools:_GPC | Profiling Tools]]), you'll need to use the X11 terminal and type &amp;lt;tt&amp;gt;ssh -Y user@niagara.scinet.utoronto.ca&amp;lt;/tt&amp;gt;.   Other ssh tools such as &amp;lt;tt&amp;gt;scp&amp;lt;/tt&amp;gt; will work as above.&lt;br /&gt;
* [http://mobaxterm.mobatek.net/en/ MobaXterm] is a tabbed ssh client with some Cygwin tools all wrapped up into one executable.&lt;br /&gt;
* [http://sshwindows.sourceforge.net/ OpenSSH For Windows] installs only those parts of Cygwin necessary to run SSH.  Again, once installed, opening up one of the new terminals allows you to use SSH as in the Linux/Mac OSX section above, but X11 forwarding for displaying windows may not work.&lt;br /&gt;
* [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY] is one of the better stand-alone SSH programs for windows.  It is a small download, and is enough to get you logged into the SciNet machines.  For advanced use like X11 forwarding however, you are better off using Cygwin.   A related program, [http://the.earth.li/~sgtatham/putty/latest/x86/pscp.exe PSCP], can be used to copy files using a graphical user interface. &amp;lt;br/&amp;gt; '''WARNING:''' Make sure you download putty from the official website, because there are &amp;quot;trojanized&amp;quot; versions of putty around that will send your login information to a site in Russia (as reported [http://blogs.cisco.com/security/trojanized-putty-software here]).&lt;br /&gt;
&lt;br /&gt;
===Copying Files===&lt;br /&gt;
&lt;br /&gt;
To transfer files to Niagara in Windows it is recommended to use the tool called [https://winscp.net/eng/index.php WinSCP]. Setting up a connection to Niagara using your SSH key can be done by following the steps in this [https://www.exavault.com/blog/import-ssh-keys-winscp link].&lt;br /&gt;
&lt;br /&gt;
===Copying Files with scp===&lt;br /&gt;
If you want to use &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt; in Windows (e.g. in the [http://mobaxterm.mobatek.net/en/ MobaXterm] terminal) and your SSH key was generated in MobaXterm/PuTTY, i.e. your private key has a &amp;lt;code&amp;gt;.ppk&amp;lt;/code&amp;gt; extension, then you will need to convert your private key to the OpenSSH format. A good guide on how to do this can be found [https://www.simplified.guide/putty/convert-ppk-to-ssh-key here].&lt;br /&gt;
&lt;br /&gt;
==X11 Forwarding==&lt;br /&gt;
&lt;br /&gt;
If during your login session you will only need to be typing and reading text, the techniques described above will suffice.&lt;br /&gt;
However, if in a session you will need to display graphics &amp;amp;mdash; such as plotting data on the SciNet machines or using our [[Performance_And_Debugging_Tools:_Niagara | performance profiling tools]] &amp;amp;mdash; you can use SSH's very powerful ability to forward several different types of data over one connection.&lt;br /&gt;
To enable &amp;quot;X11 forwarding&amp;quot; over this SSH connection, add the option &amp;lt;tt&amp;gt;-Y&amp;lt;/tt&amp;gt; to your command, &lt;br /&gt;
&lt;br /&gt;
 ssh -Y USERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
&lt;br /&gt;
* Both Windows and Mac OS users will need to install an additional program to get X-forwarding working, usually referred to as an &amp;quot;X server&amp;quot;, which interprets the forwarded graphics data and displays it on the local computer.&lt;br /&gt;
* Mac OS users need to install [https://www.xquartz.org XQuartz].&lt;br /&gt;
* Windows users can opt for [https://mobaxterm.mobatek.net MobaXterm], an ssh client that already includes an X server.&lt;br /&gt;
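&lt;br /&gt;
To verify that X11 forwarding is working, you can try launching a simple X client from the login node, for example &amp;lt;tt&amp;gt;xclock&amp;lt;/tt&amp;gt; (assuming it is installed there); a small clock window should appear on your local screen:&lt;br /&gt;
&lt;br /&gt;
 ssh -Y USERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
 xclock&lt;br /&gt;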
&lt;br /&gt;
&lt;br /&gt;
==SSH Keys==&lt;br /&gt;
&lt;br /&gt;
[[SSH | SSH]] has an alternative to passwords to authenticate your login; you can generate a key file on a trusted machine and tell a remote machine to trust logins from a machine that presents that key.   This can be both convenient and secure, and may be necessary for some tasks (such as connecting directly to compute nodes to use [[Visualization | some visualization packages]]). Here we describe how to set up keys for logging into SciNet.&lt;br /&gt;
&lt;br /&gt;
In addition to using passwords to [http://en.wikipedia.org/wiki/Authentication authenticate] users, one can use cryptographically secure keys to guarantee that a login request is coming from a trusted account on a remote machine, and automatically allow such requests.   Done properly, this is as secure as requiring a password, but can be more convenient, and is necessary for some operations.  &lt;br /&gt;
&lt;br /&gt;
===How SSH keys work===&lt;br /&gt;
&lt;br /&gt;
SSH relies on [http://en.wikipedia.org/wiki/Public-key_cryptography public key cryptography] for its encryption.  These cryptosystems have a private key, which must be kept secret, and a public key, which may be disseminated freely.   In these systems, anyone may use the public key to encode a message; but only the owner of the private key can decode the message.  This can also be used to verify identities; if someone is claiming to be Alice, the owner of some private key, Bob can send Alice a message encoded with Alice's well-known public key.  If the person claiming to be Alice can then tell Bob what the message really was, then that person at the very least has access to Alice's private key.&lt;br /&gt;
&lt;br /&gt;
To use keys for authentication, you need to:&lt;br /&gt;
&lt;br /&gt;
* Generate a key pair (private and public)&lt;br /&gt;
* Copy the public key to a remote site, and add it to the list of authorized keys&lt;br /&gt;
* Ensure permissions are set properly&lt;br /&gt;
* Test to make sure it works&lt;br /&gt;
&lt;br /&gt;
===Generating an SSH key pair===&lt;br /&gt;
&lt;br /&gt;
The first stage is to create an SSH key pair. On Linux &amp;amp; MacOS (and Windows, with [https://mobaxterm.mobatek.net/ MobaXterm] terminal)  this is done using the &amp;lt;tt&amp;gt;ssh-keygen&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
&lt;br /&gt;
 ssh-keygen -t ed25519&lt;br /&gt;
&lt;br /&gt;
If that doesn't work, try:&lt;br /&gt;
&lt;br /&gt;
 ssh-keygen -t rsa -b 4096&lt;br /&gt;
&lt;br /&gt;
This will prompt you for two pieces of information: where to save the key, and a passphrase for the key.  The passphrase is like a password, but rather than letting you in to some particular account, it allows you to use the key you've generated to log into other systems.  &lt;br /&gt;
&lt;br /&gt;
The default location to save the private key is in &amp;lt;tt&amp;gt;${HOME}/.ssh/id_&amp;lt;i&amp;gt;type&amp;lt;/i&amp;gt;&amp;lt;/tt&amp;gt; (where &amp;lt;tt&amp;gt;&amp;lt;i&amp;gt;type&amp;lt;/i&amp;gt;&amp;lt;/tt&amp;gt; is &amp;lt;tt&amp;gt; ed25519&amp;lt;/tt&amp;gt; for an Ed25519 key or &amp;lt;tt&amp;gt;rsa&amp;lt;/tt&amp;gt; for an RSA key); unless you have some specific reason for placing it elsewhere, use this option. The public key will be &amp;lt;tt&amp;gt;id_&amp;lt;i&amp;gt;type&amp;lt;/i&amp;gt;.pub&amp;lt;/tt&amp;gt; in the same directory. &lt;br /&gt;
&lt;br /&gt;
While the default names for the keys are sufficient for most use cases, it is worth noting that you are free to name them as you wish. For example, you might want to include a string referring to your name and where you plan on using this key, like &amp;lt;tt&amp;gt;js_cc_ed25519&amp;lt;/tt&amp;gt; (and &amp;lt;tt&amp;gt;js_cc_ed25519.pub&amp;lt;/tt&amp;gt;), if your name is John Smith and you created a key of &amp;lt;tt&amp;gt;ed25519&amp;lt;/tt&amp;gt; type on your laptop to be used on {{Alliance}} HPC systems. This is useful to distinguish amongst several keys you might need on the same computer (for example, different keys to access different clusters).&lt;br /&gt;
&lt;br /&gt;
Your passphrase can be any string, of any length.   It is best not to make it the same as any of your passwords. A reasonably strong passphrase usually consists of one to three short sentences of a few words each, written with the proper punctuation in place. Make sure it is unique and memorable to you. Do not use any popular catch phrases, jingles, or song choruses; they could easily be found on the web and catalogued in a database for a brute-force attack. &lt;br /&gt;
&lt;br /&gt;
A sample session of generating a key would go like this:&lt;br /&gt;
&lt;br /&gt;
 $ ssh-keygen -t ed25519&lt;br /&gt;
 Generating public/private ed25519 key pair.&lt;br /&gt;
 Enter file in which to save the key (/home/USERNAME/.ssh/id_ed25519): &lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/USERNAME/.ssh/id_ed25519.&lt;br /&gt;
 Your public key has been saved in /home/USERNAME/.ssh/id_ed25519.pub.&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 SHA256:EajOndriRmLpl1qKg03FDhnc0EzRaApdBTygEbpQZrA USERNAME@HOSTNAME&lt;br /&gt;
 The key's randomart image is:&lt;br /&gt;
 +--[ED25519 256]--+&lt;br /&gt;
 |+=*X=*...        |&lt;br /&gt;
 |oB+ O o  .       |&lt;br /&gt;
 |E. * o  .        |&lt;br /&gt;
 |..+ +    .       |&lt;br /&gt;
 |.  B . .S        |&lt;br /&gt;
 |  = = o          |&lt;br /&gt;
 |.= o.+           |&lt;br /&gt;
 |o.oo* .          |&lt;br /&gt;
 |..o=..           |&lt;br /&gt;
 +----[SHA256]-----+&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Don't Use Passphraseless Keys!====&lt;br /&gt;
&lt;br /&gt;
If you do not specify a passphrase, you will have a completely &amp;quot;exposed&amp;quot; private key.  '''This is a terrible idea.'''   If you then use this key for anything it means that anyone who sits down at your desk, or anyone who borrows or steals your laptop, can login to anywhere you use that key (good guesses could come from just looking at your history) without needing any password, and could do anything they wanted with your account or data.  Don't use passphraseless keys.&lt;br /&gt;
&lt;br /&gt;
====Don't Copy Your Private Key to Other Systems!====&lt;br /&gt;
&lt;br /&gt;
A private key should never leave the computer that was used to generate it. If you have a personal computer at home, a laptop and a desktop at work, for example, make sure to repeat the process for generating a public-private key pair on each of these systems. Insert a comment into each key so that you know where the key pair was created and how it was named; this is useful for distinguishing keys from different devices when inspecting the public keys. For example, issue the following commands:&lt;br /&gt;
&lt;br /&gt;
 $ ssh-keygen -t ed25519 -C &amp;quot;js@mylaptop js_cc&amp;quot; -f $HOME/.ssh/js_cc_ed25519&lt;br /&gt;
 $ ssh-keygen -t ed25519 -C &amp;quot;js@mydesktop js_cc&amp;quot; -f $HOME/.ssh/js_cc_ed25519&lt;br /&gt;
 $ ssh-keygen -t ed25519 -C &amp;quot;js@workdesktop js_cc&amp;quot; -f $HOME/.ssh/js_cc_ed25519&lt;br /&gt;
&lt;br /&gt;
for keys created on your personal laptop, desktop and work desktop, respectively (replace the suggested &amp;quot;js&amp;quot; abbreviations above, and any other reference to the John Smith name, with your own). See the [https://docs.scinet.utoronto.ca/index.php/SSH_keys#Multiple_SSH_keys Multiple SSH keys] section for other examples.&lt;br /&gt;
&lt;br /&gt;
Private keys are very powerful. In the wrong hands, they can be used to impersonate you on every system you use those keys to access. Copying them from the location where they were generated to other places also increases the number of systems that the key can be used to compromise. In addition, leaving the private key where it was created protects you against [https://en.wikipedia.org/wiki/Man-in-the-middle_attack man-in-the-middle attacks,] which impersonate the systems you want to copy your keys to and steal your private key in the process.&lt;br /&gt;
&lt;br /&gt;
===Uploading the Public Key to CCDB===&lt;br /&gt;
&lt;br /&gt;
Use your {{Alliance}} credentials to visit the following site:&lt;br /&gt;
&lt;br /&gt;
 https://ccdb.computecanada.ca/ssh_authorized_keys&lt;br /&gt;
&lt;br /&gt;
and follow the instructions there to upload your public key. By using the 'cat' command as follows:&lt;br /&gt;
&lt;br /&gt;
 cat ~/.ssh/id_ed25519.pub&lt;br /&gt;
&lt;br /&gt;
you can print your public key in plain text, then select and copy it from your screen into the CCDB web page above.&lt;br /&gt;
&lt;br /&gt;
Keys uploaded to the CCDB are available to all clusters across the {{DigitalResearchAllianceOfCanada}}.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;~/.ssh&amp;lt;/tt&amp;gt; Directory Permissions===&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;tt&amp;gt;SSH&amp;lt;/tt&amp;gt; is very fussy about file permissions; your &amp;lt;tt&amp;gt;~/.ssh&amp;lt;/tt&amp;gt; directory must only be accessible by you, and your various key files must not be writable (or in some cases, readable) by anyone else.  Sometimes users accidentally reset file permissions while editing these files, and logins then fail.   If you look at the &amp;lt;tt&amp;gt;~/.ssh&amp;lt;/tt&amp;gt; directory itself, it should not be accessible by anyone else:&lt;br /&gt;
&lt;br /&gt;
 $ ls -ld ~/.ssh&lt;br /&gt;
 drwx------ 2 USERNAME GROUPNAME 7 Aug  9 15:43 /home/.../.ssh&lt;br /&gt;
&lt;br /&gt;
To fix your permissions, use the following command:&lt;br /&gt;
&lt;br /&gt;
 chmod -R go= ~/.ssh/&lt;br /&gt;
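&lt;br /&gt;
Equivalently, you can set the permissions explicitly; assuming the default key names, this would be:&lt;br /&gt;
&lt;br /&gt;
 chmod 700 ~/.ssh&lt;br /&gt;
 chmod 600 ~/.ssh/id_ed25519&lt;br /&gt;
 chmod 644 ~/.ssh/id_ed25519.pub&lt;br /&gt;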
&lt;br /&gt;
===Testing Your Key===&lt;br /&gt;
&lt;br /&gt;
Now you should be able to login to the remote system (say, SciNet):&lt;br /&gt;
&lt;br /&gt;
 $ ssh USERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
 Enter passphrase for key '/home/USERNAME/.ssh/id_ed25519': &lt;br /&gt;
 Last login: Tue Aug 17 11:24:48 2010 from HOSTNAME&lt;br /&gt;
 [...]&lt;br /&gt;
 nia-login07-$&lt;br /&gt;
&lt;br /&gt;
If this is indeed the absolute first time you are trying to access Niagara, for example, please make sure you are actually accessing Niagara by double-checking Niagara's login node ssh key fingerprint as instructed [https://docs.scinet.utoronto.ca/index.php/SSH_Changes_in_May_2019 here.] This check is important to avoid being a victim of [https://en.wikipedia.org/wiki/Man-in-the-middle_attack man-in-the-middle attacks.] As a reference, you can check host key fingerprints for other {{Alliance}} systems at this [https://docs.alliancecan.ca/wiki/SSH_host_keys link.]&lt;br /&gt;
&lt;br /&gt;
If you get the message below, you may need to log out of your GNOME session and log back in, since &amp;lt;tt&amp;gt;ssh-agent&amp;lt;/tt&amp;gt; needs to be restarted to pick up the new key and its passphrase.&lt;br /&gt;
&lt;br /&gt;
 $ ssh USERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
 Agent admitted failure to sign using the key.&lt;br /&gt;
&lt;br /&gt;
===(Optional) Using &amp;lt;tt&amp;gt;ssh-agent&amp;lt;/tt&amp;gt; to Remember Your Key===&lt;br /&gt;
&lt;br /&gt;
But now you've just replaced having to type a password for login with having to type a passphrase for your key; what have you gained?  &lt;br /&gt;
&lt;br /&gt;
It turns out that there's an automated way to manage ssh &amp;quot;identities&amp;quot;, using the &amp;lt;tt&amp;gt;ssh-agent&amp;lt;/tt&amp;gt; command, which should automatically be running on newer Linux or macOS machines.   You can add keys to this agent for the duration of your login using the &amp;lt;tt&amp;gt;ssh-add&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
&lt;br /&gt;
 $ ssh-add&lt;br /&gt;
 Enter passphrase for /home/USERNAME/.ssh/id_ed25519: &lt;br /&gt;
 Identity added: /home/USERNAME/.ssh/id_ed25519 (/home/USERNAME/.ssh/id_ed25519)&lt;br /&gt;
&lt;br /&gt;
and then logins will not require the passphrase, as &amp;lt;tt&amp;gt;ssh-agent&amp;lt;/tt&amp;gt; will provide access to the key.&lt;br /&gt;
&lt;br /&gt;
When you log out of your home computer, the ssh agent will close, and next time you log in, you will have to &amp;lt;tt&amp;gt;ssh-add&amp;lt;/tt&amp;gt; your key.  You can also set a timeout of (say) an hour by using &amp;lt;tt&amp;gt;ssh-add -t 3600&amp;lt;/tt&amp;gt;.  This minimizes the number of times you have to type your passphrase, while still maintaining some degree of key security.&lt;br /&gt;
&lt;br /&gt;
You can list the fingerprints of all identities currently held by the agent with the '-l' option: 'ssh-add -l'. To delete all identities from the agent, type 'ssh-add -D'.&lt;br /&gt;
&lt;br /&gt;
=== Custom SSH Keys names ===&lt;br /&gt;
&lt;br /&gt;
If you use a custom name for your SSH key pair, you will need to create or modify the config file &amp;lt;tt&amp;gt;~/.ssh/config&amp;lt;/tt&amp;gt; to contain something similar to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host niagara&lt;br /&gt;
HostName niagara.scinet.utoronto.ca&lt;br /&gt;
User YOUR_LOGIN&lt;br /&gt;
IdentityFile ~/.ssh/ssh_privatekey_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then authenticate by typing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh niagara&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also use the -i option of ssh to specify the path to your private key via:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -i /path/to/key USERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Multiple SSH keys ===&lt;br /&gt;
&lt;br /&gt;
It's recommended to have different ssh keys for each service, specific role, or domain. For example, to have separate keys for niagara and graham, first generate two new keys:&lt;br /&gt;
&lt;br /&gt;
 $ ssh-keygen -t ed25519 -f ~/.ssh/id_niagara  -C &amp;quot;Key for Niagara&amp;quot;&lt;br /&gt;
 ...&lt;br /&gt;
 $ ssh-keygen -t ed25519 -f ~/.ssh/id_graham   -C &amp;quot;Key for Graham&amp;quot;&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Make sure to use different file names for each key. Next, modify your &amp;lt;tt&amp;gt;~/.ssh/config&amp;lt;/tt&amp;gt; file, adding &amp;lt;tt&amp;gt;IdentityFile&amp;lt;/tt&amp;gt; directives:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host niagara&lt;br /&gt;
  User myusername&lt;br /&gt;
  HostName niagara.scinet.utoronto.ca&lt;br /&gt;
  IdentityFile ~/.ssh/id_niagara&lt;br /&gt;
&lt;br /&gt;
Host graham&lt;br /&gt;
  User myusername&lt;br /&gt;
  HostName graham.computecanada.ca&lt;br /&gt;
  IdentityFile ~/.ssh/id_graham&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now when you log in with the shortcuts&lt;br /&gt;
&lt;br /&gt;
 $ ssh niagara&lt;br /&gt;
&lt;br /&gt;
or&lt;br /&gt;
&lt;br /&gt;
 $ ssh graham&lt;br /&gt;
&lt;br /&gt;
different keys will be used.&lt;br /&gt;
&lt;br /&gt;
===Examples===&lt;br /&gt;
&lt;br /&gt;
====Copying a file from another cluster to Niagara====&lt;br /&gt;
&lt;br /&gt;
For convenience, assume you have already uploaded your public key to CCDB, so you can access all clusters of the {{DigitalResearchAllianceOfCanada}} with that key. Your modified &amp;lt;tt&amp;gt;~/.ssh/config&amp;lt;/tt&amp;gt; file (on your Linux laptop, for example) could then use the same IdentityFile directive for all clusters. In this example, let's say Graham and Niagara on {{theAlliance}} use the same key, as does a remote cluster that is not part of {{theAlliance}}:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host niagara&lt;br /&gt;
  User myusername&lt;br /&gt;
  HostName niagara.scinet.utoronto.ca&lt;br /&gt;
  IdentityFile ~/.ssh/id_ed25519&lt;br /&gt;
&lt;br /&gt;
Host graham&lt;br /&gt;
  User myusername&lt;br /&gt;
  HostName graham.computecanada.ca&lt;br /&gt;
  IdentityFile ~/.ssh/id_ed25519&lt;br /&gt;
&lt;br /&gt;
Host remote_cluster&lt;br /&gt;
  User myusername&lt;br /&gt;
  HostName remote_cluster.other_domain.ca&lt;br /&gt;
  IdentityFile ~/.ssh/id_ed25519&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Suppose you want to log in to Niagara and copy there a file that is on Graham, or one on remote_cluster. How would you do it? Under the assumptions above, you could proceed as follows:&lt;br /&gt;
&lt;br /&gt;
1) On your laptop, the first step is to load your private key into the ssh-agent daemon:&lt;br /&gt;
&lt;br /&gt;
  $ ssh-add /home/USERNAME/.ssh/id_ed25519&lt;br /&gt;
  Enter passphrase for /home/USERNAME/.ssh/id_ed25519: &lt;br /&gt;
  Identity added: /home/USERNAME/.ssh/id_ed25519 (/home/USERNAME/.ssh/id_ed25519)&lt;br /&gt;
&lt;br /&gt;
This is very convenient, since you only need to do it once per work session, and it is strongly encouraged.&lt;br /&gt;
&lt;br /&gt;
2) Access Niagara and enable forwarding of the authentication agent connection (ssh-agent):&lt;br /&gt;
&lt;br /&gt;
  $ ssh -A niagara&lt;br /&gt;
&lt;br /&gt;
3) Use the secure copy command to copy the file from Graham to Niagara:&lt;br /&gt;
&lt;br /&gt;
  $ scp -p graham.computecanada.ca:/path/to/file.txt .&lt;br /&gt;
&lt;br /&gt;
The '-p' option is commonly used to preserve the time stamps and other file attributes of the copy; it is not mandatory. Use the '-r' option if you want to copy a directory recursively, though it may be better to create a tarball first if the directory contains many small files. Note that if you want to use an alias for graham in the scp command, as follows:&lt;br /&gt;
&lt;br /&gt;
  $ scp -p graham:/path/to/file.txt .&lt;br /&gt;
&lt;br /&gt;
you would have to copy your modified ~/.ssh/config file above from your laptop to Niagara first.&lt;br /&gt;
&lt;br /&gt;
4) Use the secure copy command to copy the file from remote_cluster to Niagara:&lt;br /&gt;
&lt;br /&gt;
  $ scp -p remote_cluster.other_domain.ca:/path/to/file.txt .&lt;br /&gt;
&lt;br /&gt;
This assumes you have copied your public key from your laptop to the remote_cluster first:&lt;br /&gt;
&lt;br /&gt;
  $ ssh-copy-id -i ~/.ssh/id_ed25519.pub remote_cluster&lt;br /&gt;
&lt;br /&gt;
or in the absence of ssh-copy-id command:&lt;br /&gt;
&lt;br /&gt;
  $ cat ~/.ssh/id_ed25519.pub | ssh remote_cluster &amp;quot;cat &amp;gt;&amp;gt; ~/.ssh/authorized_keys&amp;quot;&lt;br /&gt;
&lt;br /&gt;
and also assumes remote_cluster is not part of {{theAlliance}}, and therefore does not have a central mechanism for managing users' ssh keys like CCDB.&lt;br /&gt;
&lt;br /&gt;
5) Use the '-A' option deliberately, and only for the shortest time the copying task needs. The rationale is that if Niagara were compromised, an attacker could use your forwarded ssh-agent connection to access other systems. It is better to exit and log back in without the option:&lt;br /&gt;
&lt;br /&gt;
  $ exit&lt;br /&gt;
  $ ssh niagara&lt;br /&gt;
&lt;br /&gt;
=== Best Practice Summary ===&lt;br /&gt;
&lt;br /&gt;
 * Create one key pair for each computer you work on and give it a meaningful name; a comment can also help identify the device on which it was created.&lt;br /&gt;
 * Protect each of your private keys with a strong passphrase. We recommend fifteen characters or more.&lt;br /&gt;
 * Do not share your private keys.&lt;br /&gt;
 * Never copy your private keys to other systems.&lt;br /&gt;
 * Create one key pair for each different service, role or domain, and name them accordingly.&lt;br /&gt;
 * Do not create key pairs on shared systems such as HPC clusters.&lt;br /&gt;
&lt;br /&gt;
==SSH Tunnels==&lt;br /&gt;
&lt;br /&gt;
A less commonly used technique for setting up SSH communication is the construction of an SSH tunnel.  This can be useful if, for example, your code needs to access an external software license server from a Niagara compute node.  You can read about setting up SSH tunnels on Niagara [[SSH_Tunneling|here]].&lt;br /&gt;
&lt;br /&gt;
==Multifactor authentication==&lt;br /&gt;
&lt;br /&gt;
Multifactor authentication via Duo or a YubiKey is now required for all users.&lt;br /&gt;
Please visit the [https://ccdb.alliancecan.ca/multi_factor_authentications CCDB] to enrol;&lt;br /&gt;
more information can be found [https://docs.alliancecan.ca/wiki/Multifactor_authentication here].&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=5663</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=5663"/>
		<updated>2024-06-02T20:23:28Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot;, &amp;quot;Partial&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up   |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Up   |Mist|Mist}}&lt;br /&gt;
|{{Up   |Teach|Teach}}&lt;br /&gt;
|{{Up   |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up   |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up   |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Up   |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up   |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up   |HPSS|HPSS}}&lt;br /&gt;
|{{Up   |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up   |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up   |Globus |Globus}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up   |Balam|Balam}}&lt;br /&gt;
|{{Down |CCEnv|Using_modules}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Sunday, Jun 2, 12:00 PM EDT''' CCEnv modules missing, investigating.&lt;br /&gt;
&lt;br /&gt;
'''Wednesday May 29, 5:50 PM EDT''' Niagara compute nodes are up.  &lt;br /&gt;
&lt;br /&gt;
'''Wednesday May 29, 4:40 PM EDT''' Niagara compute nodes are coming up.  &lt;br /&gt;
&lt;br /&gt;
'''Wednesday May 29, 4 PM EDT''' Niagara login nodes and jupyterhub are up; file system is now accessible.  &lt;br /&gt;
&lt;br /&gt;
'''Wednesday May 29, 2 PM EDT''' Electricians are checking and testing all junction boxes and connectors under the raised floor for safety.  Some systems are expected to be back up later today (storage, login nodes), and compute systems will be powered up as soon as it is deemed safe.&lt;br /&gt;
&lt;br /&gt;
'''Tuesday May 28, 3 PM EDT''' Cleaning crews are at the datacentre, to pump the water and install dryers.  Once the floors are dry, we need to inspect all electrical boxes to ensure safety.  We do not expect to have a fully functional datacentre before Thursday, although we hope to be able to turn on the storage and login nodes sometime tomorrow, if circumstances permit.  Apologies, and thank you for your patience.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Tuesday May 28, 7 AM EDT''' A water mains break outside our datacentre has caused extensive flooding, and all systems have been shut down preventatively. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Friday May 17, 10 PM EDT - Saturday May 18, 2 AM EDT:''' The external network will be unavailable for maintenance. Running and queued jobs on the systems will not be affected.&lt;br /&gt;
&lt;br /&gt;
'''Tuesday May 14, 6:45 PM EDT:''' All systems are recovered now.&lt;br /&gt;
&lt;br /&gt;
'''Tuesday May 14, 5 PM EDT:''' Power loss at the datacentre resulted in loss of cooling.  Systems are being restored.&lt;br /&gt;
&lt;br /&gt;
'''Friday May 3, 10 PM EDT - Saturday May 4, 2 AM EDT:''' The external network will be unavailable for maintenance. Running and queued jobs on the systems will not be affected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: --&amp;gt;&lt;br /&gt;
[[Previous messages]]&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=5433</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=5433"/>
		<updated>2024-01-29T20:35:05Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in both the Niagara and CC software stacks.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
Licenses are provided by [https://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://www.cmc.ca/en/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must [https://cmcmicrosystems.formtitan.com/SupportForm contact CMC] and tell them you want to use the Ansys tools on Niagara, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
After CMC and your institution confirm that your username is authorized to claim a license, you will need to establish an SSH tunnel in your submission script in order to access the license server from Niagara compute nodes (which don't have direct access to the internet). Please obtain the proper server name and ports from CMC or directly from your institution, so you can fill in the SSH tunnel settings in the script template below.&lt;br /&gt;
&lt;br /&gt;
You may also use a license server from one of the other general-purpose (GP) clusters (Beluga, Narval, Cedar or Graham), since Niagara doesn't have its own server. More information can be found here:&lt;br /&gt;
&lt;br /&gt;
https://docs.computecanada.ca/wiki/ANSYS#Configuring_your_own_license_file&lt;br /&gt;
&lt;br /&gt;
=Running on Niagara=&lt;br /&gt;
&lt;br /&gt;
Commercial software modules are not available by default, and require a 'module use' command:&lt;br /&gt;
&lt;br /&gt;
 module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
 module load ansys/2023r1&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but not to submit it; the job must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory called $SCRATCH/.mw, and create a soft link from $HOME/.mw to $SCRATCH/.mw:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This redirection makes Ansys write to $SCRATCH/.mw while believing it is writing to $HOME/.mw.  These commands only need to be run once.&lt;br /&gt;
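The same redirection can be sketched with throwaway directories, so the behaviour is easy to verify anywhere (the temporary paths below merely stand in for $SCRATCH and $HOME, which are the real locations on Niagara):&lt;br /&gt;

```shell
# Stand-ins for $SCRATCH and $HOME (illustrative; on Niagara use the real ones).
SCRATCH_DIR=$(mktemp -d)
HOME_DIR=$(mktemp -d)

mkdir "$SCRATCH_DIR/.mw"                    # real, writable directory
ln -s "$SCRATCH_DIR/.mw" "$HOME_DIR/.mw"    # soft link from the "home" side

# Writing through the link lands in the scratch-side directory.
echo hello > "$HOME_DIR/.mw/out.txt"
cat "$SCRATCH_DIR/.mw/out.txt"              # prints: hello
```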
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
module load ansys/2023r1&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk '{print $1 &amp;quot;:&amp;quot; 40}' | paste -s -d ':')&lt;br /&gt;
ansys231 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
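To see the string that ends up in the -machines argument, the pipeline from the script can be run with sample hostnames in place of the srun output (nia0001/nia0002 are illustrative node names):&lt;br /&gt;

```shell
# Duplicate hostnames (one per task) are collapsed by sort | uniq,
# awk appends ":40" (tasks per node) to each node, and paste joins
# the lines with ':' into a single machines string.
machines=$(printf 'nia0001\nnia0001\nnia0002\n' \
    | sort | uniq | awk '{print $1 ":" 40}' | paste -s -d ':')
echo "$machines"    # prints: nia0001:40:nia0002:40
```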
==Custom license server==&lt;br /&gt;
&lt;br /&gt;
To use your own license server for Ansys, you'll need to open up ssh tunnels to the required ports,&lt;br /&gt;
and point Ansys to the tunnels.&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh nia-gw -fNL 1055:license.server:1055&lt;br /&gt;
ssh nia-gw -fNL 1056:license.server:1056&lt;br /&gt;
ssh nia-gw -fNL 2325:license.server:2325&lt;br /&gt;
&lt;br /&gt;
export ANSYSLI_SERVERS=2325@localhost&lt;br /&gt;
export ANSYSLMD_LICENSE_FILE=1055@localhost&lt;br /&gt;
export LM_LICENSE_FILE=1055@localhost&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure your firewall allows connections from the following IP addresses and confirm with your license provider which ports should be used with the ssh tunnel:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
142.150.188.71&lt;br /&gt;
142.150.188.72&lt;br /&gt;
142.150.188.73&lt;br /&gt;
142.150.188.74&lt;br /&gt;
142.150.188.75&lt;br /&gt;
142.150.188.76&lt;br /&gt;
142.150.188.77&lt;br /&gt;
142.150.188.78&lt;br /&gt;
142.1.174.227&lt;br /&gt;
142.1.174.228&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Running using the CC installation=&lt;br /&gt;
&lt;br /&gt;
==Ansys 19.0==&lt;br /&gt;
To access the CC software stack you must unload the Niagara stack.&lt;br /&gt;
&lt;br /&gt;
 module load CCEnv StdEnv&lt;br /&gt;
 module load ansys/19.0&lt;br /&gt;
&lt;br /&gt;
You can run the script given in the previous section by substituting the previous module commands with the above two.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=5163</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=5163"/>
		<updated>2023-10-27T17:10:51Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot;, &amp;quot;Partial&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{up   |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{up   |Mist|Mist}}&lt;br /&gt;
|{{Up   |Teach|Teach}}&lt;br /&gt;
|{{up   |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up   |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up   |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Up   |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up   |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up   |HPSS|HPSS}}&lt;br /&gt;
|{{Up   |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up   |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up   |Globus |Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 27 11:16 AM EDT:''' SSH keys are gradually being restored, estimated to complete by 1:15 PM.&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 27, 2023, 8:00 EDT:''' SSH key login authentication with CCDB keys is currently not working, on many Alliance systems.  It appears this started last night. Issue is being investigated.&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 26, 2023, 12:35 EDT:''' Mist login node is accessible again.&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 26, 2023, 12:05 EDT:''' Mist login node is under maintenance and temporarily inaccessible to users.&lt;br /&gt;
&lt;br /&gt;
'''Wed Oct 25 7:54 PM EDT:''' slurm-*.out now outputs job info for last array job.&lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 24 12:00 AM EDT:''' network appears to be up&lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 24 11:32 AM EDT:''' campus network issues&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 05, 2023, 12:05 PM EDT:''' Niagara scheduler is back online.&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 05, 2023, 11:50 AM EDT:''' Niagara scheduler is temporarily under maintenance for security updates. &lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 31, 2023, 12:00 PM EDT - Fri Nov 3, 2023, 12:00 PM EDT:''' Three-day reservation for the &amp;quot;Niagara at Scale&amp;quot; event. Only &amp;quot;Niagara at Scale&amp;quot; projects will run on the compute nodes. Users are encouraged to submit small and short jobs that could run before this event.  Throughout the event, users can still log in, access their data, and submit jobs, but these jobs will not run until after the event. Note that the debugjob queue will remain available to everyone as well.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: --&amp;gt;&lt;br /&gt;
[[Previous messages]]&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=5160</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=5160"/>
		<updated>2023-10-27T15:18:18Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot;, &amp;quot;Partial&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{up   |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{up   |Mist|Mist}}&lt;br /&gt;
|{{Up   |Teach|Teach}}&lt;br /&gt;
|{{up   |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up   |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up   |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Up   |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up   |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up   |HPSS|HPSS}}&lt;br /&gt;
|{{Partial   |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up   |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up   |Globus |Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 27 11:16 AM EDT:''' SSH keys are gradually being restored, estimated to complete by 1:15 PM.&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 27, 2023, 8:00 EDT:''' SSH key login authentication with CCDB keys is currently not working, on many Alliance systems.  It appears this started last night. Issue is being investigated.&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 26, 2023, 12:35 EDT:''' Mist login node is accessible again.&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 26, 2023, 12:05 EDT:''' Mist login node is under maintenance and temporarily inaccessible to users.&lt;br /&gt;
&lt;br /&gt;
'''Wed Oct 25 7:54 PM EDT:''' slurm-*.out now outputs job info for last array job.&lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 24 12:00 AM EDT:''' network appears to be up&lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 24 11:32 AM EDT:''' campus network issues&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 05, 2023, 12:05 PM EDT:''' Niagara scheduler is back online.&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 05, 2023, 11:50 AM EDT:''' Niagara scheduler is temporarily under maintenance for security updates. &lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 31, 2023, 12:00 PM EDT - Fri Nov 3, 2023, 12:00 PM EDT:''' Three-day reservation for the &amp;quot;Niagara at Scale&amp;quot; event. Only &amp;quot;Niagara at Scale&amp;quot; projects will run on the compute nodes. Users are encouraged to submit small and short jobs that could run before this event.  Throughout the event, users can still log in, access their data, and submit jobs, but these jobs will not run until after the event. Note that the debugjob queue will remain available to everyone as well.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: --&amp;gt;&lt;br /&gt;
[[Previous messages]]&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=5139</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=5139"/>
		<updated>2023-10-24T18:18:26Z</updated>

		<summary type="html">&lt;p&gt;Nolta: /* System Status */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot;, &amp;quot;Partial&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{up   |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{up   |Mist|Mist}}&lt;br /&gt;
|{{Up   |Teach|Teach}}&lt;br /&gt;
|{{up   |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up   |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up   |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Up   |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up   |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up   |HPSS|HPSS}}&lt;br /&gt;
|{{Up   |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up   |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up   |Globus |Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 24 12:00 AM EDT:''' network appears to be up&lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 24 11:32 AM EDT:''' campus network issues&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 05, 2023, 12:05 PM EDT:''' Niagara scheduler is back online.&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 05, 2023, 11:50 AM EDT:''' Niagara scheduler is temporarily under maintenance for security updates. &lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 31, 2023, 12:00 PM EDT - Fri Nov 3, 2023, 12:00 PM EDT:''' Three-day reservation for the &amp;quot;Niagara at Scale&amp;quot; event. Only &amp;quot;Niagara at Scale&amp;quot; projects will run on the compute nodes. Users are encouraged to submit small and short jobs that could run before this event.  Throughout the event, users can still log in, access their data, and submit jobs, but these jobs will not run until after the event. Note that the debugjob queue will remain available to everyone as well.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: --&amp;gt;&lt;br /&gt;
[[Previous messages]]&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=5136</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=5136"/>
		<updated>2023-10-24T16:37:06Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot;, &amp;quot;Partial&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{up   |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{up   |Mist|Mist}}&lt;br /&gt;
|{{Up   |Teach|Teach}}&lt;br /&gt;
|{{up   |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up   |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up   |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Up   |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up   |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up   |HPSS|HPSS}}&lt;br /&gt;
|{{Up   |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Partial   |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up   |Globus |Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 24 12:00 AM EDT:''' network appears to be up&lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 24 11:32 AM EDT:''' campus network issues&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 05, 2023, 12:05 PM EDT:''' Niagara scheduler is back online.&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 05, 2023, 11:50 AM EDT:''' Niagara scheduler is temporarily under maintenance for security updates. &lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 31, 2023, 12:00 PM EDT - Fri Nov 3, 2023, 12:00 PM EDT:''' Three-day reservation for the &amp;quot;Niagara at Scale&amp;quot; event. Only &amp;quot;Niagara at Scale&amp;quot; projects will run on the compute nodes. Users are encouraged to submit small and short jobs that could run before this event.  Throughout the event, users can still log in, access their data, and submit jobs, but these jobs will not run until after the event. Note that the debugjob queue will remain available to everyone as well.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: --&amp;gt;&lt;br /&gt;
[[Previous messages]]&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=5133</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=5133"/>
		<updated>2023-10-24T15:33:08Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot;, &amp;quot;Partial&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{up   |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{up   |Mist|Mist}}&lt;br /&gt;
|{{Up   |Teach|Teach}}&lt;br /&gt;
|{{up   |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up   |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up   |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Up   |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up   |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up   |HPSS|HPSS}}&lt;br /&gt;
|{{Up   |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Down   |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up   |Globus |Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 24 11:32:22 AM EDT:''' Unknown connection issues.&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 05, 2023, 12:05 PM EDT:''' Niagara scheduler is back online.&lt;br /&gt;
&lt;br /&gt;
'''Thu Oct 05, 2023, 11:50 AM EDT:''' Niagara scheduler is temporarily under maintenance for security updates. &lt;br /&gt;
&lt;br /&gt;
'''Tue Oct 31, 2023, 12:00 PM EDT - Fri Nov 3, 2023, 12:00 PM EDT:''' Three-day reservation for the &amp;quot;Niagara at Scale&amp;quot; event. Only &amp;quot;Niagara at Scale&amp;quot; projects will run on the compute nodes. Users are encouraged to submit small and short jobs that could run before this event.  Throughout the event, users can still log in, access their data, and submit jobs, but these jobs will not run until after the event. Note that the debugjob queue will remain available to everyone as well.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: --&amp;gt;&lt;br /&gt;
[[Previous messages]]&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=4866</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=4866"/>
		<updated>2023-05-27T15:42:52Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot;, &amp;quot;Partial&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Up |Mist|Mist}}&lt;br /&gt;
|{{Up |Teach|Teach}}&lt;br /&gt;
|{{Up |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Down |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |HPSS|HPSS}}&lt;br /&gt;
|{{Up |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |Globus |Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Sat May 27, 2023, 11:18AM EDT:''' Filesystem issues, investigating.&lt;br /&gt;
&lt;br /&gt;
'''Wed May 24, 2023, 11:40AM EDT:''' Mist login node is accessible again.&lt;br /&gt;
&lt;br /&gt;
'''Wed May 24, 2023, 11:10 AM EDT:''' Mist login node is under maintenance and temporarily inaccessible to users.&lt;br /&gt;
&lt;br /&gt;
'''Mon May 15, 2023, 10:08 AM EDT:''' Rebooting Mist login node again.&lt;br /&gt;
&lt;br /&gt;
'''Mon May 15, 2023, 09:15 AM EDT:''' Rebooting Mist login node.&lt;br /&gt;
&lt;br /&gt;
'''Mon May 01, 2023, 04:00 PM EDT:''' Done rebooting nia-login nodes.&lt;br /&gt;
&lt;br /&gt;
'''Mon May 01, 2023, 12:00 PM EDT:''' Rebooting all nia-login nodes one at a time.&lt;br /&gt;
&lt;br /&gt;
'''Mon May 01, 2023, 11:00 AM EDT:''' nia-login07 is going to be rebooted.&lt;br /&gt;
&lt;br /&gt;
'''Thu Apr 20, 2023, 12:05 PM EDT:''' Mist login node is accessible again.&lt;br /&gt;
&lt;br /&gt;
'''Thu Apr 20, 2023, 11:30 AM EDT:''' Mist login node is under maintenance and temporarily inaccessible to users.&lt;br /&gt;
&lt;br /&gt;
'''Thu Apr 20, 2023, 8:27 AM EDT:''' Intermittent file system issues. We are investigating.  For now (10:45 AM), the file systems appear operational.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: --&amp;gt;&lt;br /&gt;
[[Previous messages]]&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=3821</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=3821"/>
		<updated>2022-05-16T13:20:50Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in both the Niagara and CC software stacks.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
Licenses are provided by [https://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://www.cmc.ca/en/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must [https://cmcmicrosystems.formtitan.com/SupportForm contact CMC] and tell them you want to use the Ansys tools on Niagara, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
After confirming with CMC and your institution that your username is authorized to claim a license, you will need to establish an SSH tunnel in your submission script in order to access the license server from the Niagara compute nodes (which don't have direct access to the internet). Please obtain the proper server name and ports from CMC or directly from your institution, so that you can fill in the SSH tunnel settings in the script template further below.&lt;br /&gt;
&lt;br /&gt;
You may also use a license server from one of the other GP clusters (Beluga/Narval, Cedar or Graham), since Niagara doesn't have its own server. More information can be found here:&lt;br /&gt;
&lt;br /&gt;
https://docs.computecanada.ca/wiki/ANSYS#Configuring_your_own_license_file&lt;br /&gt;
&lt;br /&gt;
=Running on Niagara=&lt;br /&gt;
&lt;br /&gt;
Commercial software modules are not available by default, and require a 'module use' command:&lt;br /&gt;
&lt;br /&gt;
 module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
 module load ansys/2021r2&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but you cannot use them to submit your job.  The job must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory called $SCRATCH/.mw, and create a soft link from $HOME/.mw to $SCRATCH/.mw:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This will fool Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands only need to be run once.&lt;br /&gt;
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
module load ansys/2021r2&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk '{print $1 &amp;quot;:&amp;quot; 40}' | paste -s -d ':')&lt;br /&gt;
ansys212 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Custom license server==&lt;br /&gt;
&lt;br /&gt;
To use your own license server for Ansys, you'll need to open up ssh tunnels to the required ports,&lt;br /&gt;
and point Ansys to the tunnels.&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh nia-gw -fNL 1055:license.server:1055&lt;br /&gt;
ssh nia-gw -fNL 1056:license.server:1056&lt;br /&gt;
ssh nia-gw -fNL 2325:license.server:2325&lt;br /&gt;
&lt;br /&gt;
export ANSYSLI_SERVERS=2325@localhost&lt;br /&gt;
export ANSYSLMD_LICENSE_FILE=1055@localhost&lt;br /&gt;
export LM_LICENSE_FILE=1055@localhost&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure your firewall allows connections from the following IP addresses:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
142.150.188.71&lt;br /&gt;
142.150.188.72&lt;br /&gt;
142.150.188.73&lt;br /&gt;
142.150.188.74&lt;br /&gt;
142.150.188.75&lt;br /&gt;
142.150.188.76&lt;br /&gt;
142.150.188.77&lt;br /&gt;
142.150.188.78&lt;br /&gt;
142.1.174.227&lt;br /&gt;
142.1.174.228&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Running using the CC installation=&lt;br /&gt;
&lt;br /&gt;
==Ansys 19.0==&lt;br /&gt;
To access the CC software stack you must unload the Niagara stack.&lt;br /&gt;
&lt;br /&gt;
 module load CCEnv StdEnv&lt;br /&gt;
 module load ansys/19.0&lt;br /&gt;
&lt;br /&gt;
You can run the script given in the previous section by substituting the previous module commands with the above two.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=3818</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=3818"/>
		<updated>2022-05-12T20:06:48Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in both the Niagara and CC software stacks.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
Licenses are provided by [https://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://www.cmc.ca/en/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must [https://cmcmicrosystems.formtitan.com/SupportForm contact CMC] and tell them you want to use the Ansys tools on Niagara, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
After confirming with CMC and your institution that your username is authorized to claim a license, you will need to establish an SSH tunnel in your submission script in order to access the license server from the Niagara compute nodes (which don't have direct access to the internet). Please obtain the proper server name and ports from CMC or directly from your institution, so that you can fill in the SSH tunnel settings in the script template further below.&lt;br /&gt;
&lt;br /&gt;
You may also use a license server from one of the other GP clusters (Beluga/Narval, Cedar or Graham), since Niagara doesn't have its own server. More information can be found here:&lt;br /&gt;
&lt;br /&gt;
https://docs.computecanada.ca/wiki/ANSYS#Configuring_your_own_license_file&lt;br /&gt;
&lt;br /&gt;
=Running on Niagara=&lt;br /&gt;
&lt;br /&gt;
Commercial software modules are not available by default, and require a 'module use' command:&lt;br /&gt;
&lt;br /&gt;
 module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
 module load ansys/2021r2&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but you cannot use them to submit your job.  The job must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory called $SCRATCH/.mw, and create a soft link from $HOME/.mw to $SCRATCH/.mw:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This will fool Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands only need to be run once.&lt;br /&gt;
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
module load ansys/2021r2&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk '{print $1 &amp;quot;:&amp;quot; 40}' | paste -s -d ':')&lt;br /&gt;
ansys212 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Custom license server==&lt;br /&gt;
&lt;br /&gt;
To use your own license server for Ansys, you'll need to open up ssh tunnels to the required ports,&lt;br /&gt;
and point Ansys to the tunnels.&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh nia-gw -fNL 1055:license.server:1055&lt;br /&gt;
ssh nia-gw -fNL 1056:license.server:1056&lt;br /&gt;
ssh nia-gw -fNL 2325:license.server:2325&lt;br /&gt;
&lt;br /&gt;
export ANSYSLI_SERVERS=2325@localhost&lt;br /&gt;
export ANSYSLMD_LICENSE_FILE=1055@localhost&lt;br /&gt;
export LM_LICENSE_FILE=1055@localhost&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Running using the CC installation=&lt;br /&gt;
&lt;br /&gt;
==Ansys 19.0==&lt;br /&gt;
To access the CC software stack you must unload the Niagara stack.&lt;br /&gt;
&lt;br /&gt;
 module load CCEnv StdEnv&lt;br /&gt;
 module load ansys/19.0&lt;br /&gt;
&lt;br /&gt;
You can run the script given in the previous section by substituting the previous module commands with the above two.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=3752</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=3752"/>
		<updated>2022-04-29T15:21:18Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in both the Niagara and CC software stacks.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
Licenses are provided by [https://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://www.cmc.ca/en/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must [https://cmcmicrosystems.formtitan.com/SupportForm contact CMC] and tell them you want to use the Ansys tools on Niagara, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
After confirming with CMC and your institution that your username is authorized to claim a license, you will need to establish an SSH tunnel in your submission script in order to access the license server from the Niagara compute nodes (which don't have direct access to the internet). Please obtain the proper server name and ports from CMC or directly from your institution, so that you can fill in the SSH tunnel settings in the script template further below.&lt;br /&gt;
&lt;br /&gt;
You may also use a license server from one of the other GP clusters (Beluga/Narval, Cedar or Graham), since Niagara doesn't have its own server. More information can be found here:&lt;br /&gt;
&lt;br /&gt;
https://docs.computecanada.ca/wiki/ANSYS#Configuring_your_own_license_file&lt;br /&gt;
&lt;br /&gt;
=Running on Niagara=&lt;br /&gt;
&lt;br /&gt;
Commercial software modules are not available by default, and require a 'module use' command:&lt;br /&gt;
&lt;br /&gt;
 module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
 module load ansys/2021r2&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but you cannot use them to submit your job.  The job must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory called $SCRATCH/.mw, and create a soft link from $HOME/.mw to $SCRATCH/.mw:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This will fool Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands only need to be run once.&lt;br /&gt;
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
module load ansys/2021r2&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk '{print $1 &amp;quot;:&amp;quot; 40}' | paste -s -d ':')&lt;br /&gt;
ansys212 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Running using the CC installation=&lt;br /&gt;
&lt;br /&gt;
==Ansys 19.0==&lt;br /&gt;
To access the CC software stack you must unload the Niagara stack.&lt;br /&gt;
&lt;br /&gt;
 module load CCEnv StdEnv&lt;br /&gt;
 module load ansys/19.0&lt;br /&gt;
&lt;br /&gt;
You can run the script given in the previous section by substituting the previous module commands with the above two.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=3737</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=3737"/>
		<updated>2022-04-26T14:38:35Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in both the Niagara and CC software stacks.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
Licenses are provided by [http://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://www.cmc.ca/en/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must contact CMC and tell them you want to use the Ansys tools on Niagara, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
After confirming with CMC and your institution that your username is authorized to claim a license, you will need to establish an SSH tunnel in your submission script in order to access the license server from the Niagara compute nodes (which don't have direct access to the internet). Please obtain the proper server name and ports from CMC or directly from your institution, so that you can fill in the SSH tunnel settings in the script template further below.&lt;br /&gt;
&lt;br /&gt;
You may also use a license server from one of the other GP clusters (Beluga/Narval, Cedar or Graham), since Niagara doesn't have its own server. More information can be found here:&lt;br /&gt;
&lt;br /&gt;
https://docs.computecanada.ca/wiki/ANSYS#Configuring_your_own_license_file&lt;br /&gt;
&lt;br /&gt;
=Running on Niagara=&lt;br /&gt;
&lt;br /&gt;
Commercial software modules are not available by default, and require a 'module use' command:&lt;br /&gt;
&lt;br /&gt;
 module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
 module load ansys/2021r2&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but you cannot use them to submit your job.  The job must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory called $SCRATCH/.mw, and create a soft link from $HOME/.mw to $SCRATCH/.mw:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This will fool Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands only need to be run once.&lt;br /&gt;
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
module load ansys/2021r2&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk '{print $1 &amp;quot;:&amp;quot; 40}' | paste -s -d ':')&lt;br /&gt;
ansys212 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Running using the CC installation=&lt;br /&gt;
&lt;br /&gt;
==Ansys 19.0==&lt;br /&gt;
To access the CC software stack you must unload the Niagara stack.&lt;br /&gt;
&lt;br /&gt;
 module load CCEnv StdEnv&lt;br /&gt;
 module load ansys/19.0&lt;br /&gt;
&lt;br /&gt;
You can run the script given in the previous section by substituting the previous module commands with the above two.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=3653</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=3653"/>
		<updated>2022-03-21T18:48:38Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in both the Niagara and CC software stacks.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
Licenses are provided by [http://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://www.cmc.ca/en/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must contact CMC and tell them you want to use the Ansys tools on Niagara, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
=Running on Niagara=&lt;br /&gt;
&lt;br /&gt;
Commercial software modules are not available by default, and require a 'module use' command:&lt;br /&gt;
&lt;br /&gt;
 module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
 module load ansys/2021r2&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but you cannot use them to submit your job.  The job must be submitted through the scheduler.&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory called $SCRATCH/.mw, and create a soft link from $HOME/.mw to $SCRATCH/.mw:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This will fool Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands only need to be run once.&lt;br /&gt;
&lt;br /&gt;
==Running ansys==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
module load ansys/2021r2&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk '{print $1 &amp;quot;:&amp;quot; 40}' | paste -s -d ':')&lt;br /&gt;
ansys212 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
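The machines pipeline in the script above converts the list of hostnames allocated by Slurm into the colon-separated host:cpus string that the -machines flag expects. As a minimal sketch of that transformation, with a hard-coded host list standing in for the srun output (the node names are made up):

```shell
# Simulate the output of `srun hostname -s` for a two-node job
# (hypothetical node names) and build the host:cpus list expected
# by the -machines flag.
hosts='nia0001
nia0001
nia0002'
machines=$(printf '%s\n' "$hosts" | sort | uniq | awk '{print $1 ":" 40}' | paste -s -d ':')
echo "$machines"   # nia0001:40:nia0002:40
```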
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Running using the CC installation=&lt;br /&gt;
&lt;br /&gt;
==Ansys 19.0==&lt;br /&gt;
To access the CC software stack you must unload the Niagara stack.&lt;br /&gt;
&lt;br /&gt;
 module load CCEnv StdEnv&lt;br /&gt;
 module load ansys/19.0&lt;br /&gt;
&lt;br /&gt;
You can run the script given in the previous section by substituting the previous module commands with the above two.&lt;br /&gt;
&lt;br /&gt;
More information can be found here: https://docs.computecanada.ca/wiki/ANSYS &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=SSH&amp;diff=3524</id>
		<title>SSH</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=SSH&amp;diff=3524"/>
		<updated>2022-02-01T21:17:53Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SSH (secure shell) is the only way to log into the systems at SciNet.   It opens a secure, encrypted connection between your computer and SciNet.  If you have a Linux or Mac OS X machine, you already have SSH installed; if you have a Windows machine, you will have to install additional software before logging into SciNet.&lt;br /&gt;
&lt;br /&gt;
{{Note|SSH keys are now the only way to authenticate to most SciNet systems, and passwords are not accepted.}}&lt;br /&gt;
&lt;br /&gt;
==SSH For Linux or Mac OS X Users==&lt;br /&gt;
&lt;br /&gt;
===Simple Login===&lt;br /&gt;
&lt;br /&gt;
To login to SciNet's Niagara cluster, open a terminal window and type:&lt;br /&gt;
&lt;br /&gt;
 ssh USERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
&lt;br /&gt;
replacing USERNAME with your SciNet username.   Once done, you will be logged into the login nodes at the SciNet data centre, as if you had a terminal on those machines open on your desktop. &lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;***Note***&amp;lt;/big&amp;gt;''' if you chose a custom SSH key name, &amp;lt;i&amp;gt;i.e.&amp;lt;/i&amp;gt; something other than the default names: &amp;lt;code&amp;gt;id_dsa&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;id_ecdsa&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;id_ed25519&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;id_rsa&amp;lt;/code&amp;gt;, you will need to use the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option of ssh to specify the path to your private key via:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -i /path/to/key USERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More details about custom SSH key names can be found [[#Custom SSH Keys names|here]].&lt;br /&gt;
&lt;br /&gt;
Note that if your username is the same on both the machine you're logging in from and the SciNet machines, you can drop the &amp;lt;tt&amp;gt;USERNAME@&amp;lt;/tt&amp;gt;, as SSH by default will try to use the username on the machine you are logging in from.&lt;br /&gt;
&lt;br /&gt;
===Copying Files===&lt;br /&gt;
&lt;br /&gt;
The SSH protocol can be used for more than logging in remotely; it can also be used to copy files between machines.&lt;br /&gt;
&lt;br /&gt;
To copy '''small''' files from your home computer to a subdirectory of your &amp;lt;tt&amp;gt;/scratch&amp;lt;/tt&amp;gt; directory at SciNet, you would type from a terminal on your computer&lt;br /&gt;
&lt;br /&gt;
 scp filetocopy.txt USERNAME@niagara.scinet.utoronto.ca:/scratch/USERNAME/some_subdirectory/&lt;br /&gt;
&lt;br /&gt;
Note that soon the location of your scratch directory will change, and you will have to type:&lt;br /&gt;
&lt;br /&gt;
 scp filetocopy.txt USERNAME@niagara.scinet.utoronto.ca:/scratch/G/GROUPNAME/USERNAME/some_subdirectory/&lt;br /&gt;
&lt;br /&gt;
Similarly, to copy files back into your current directory, you would type&lt;br /&gt;
&lt;br /&gt;
 scp USERNAME@niagara.scinet.utoronto.ca:/scratch/G/GROUPNAME/USERNAME/my_dirs/myfile.txt . &lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;big&amp;gt;***Note***&amp;lt;/big&amp;gt;''' if you chose a custom SSH key name, &amp;lt;i&amp;gt;i.e.&amp;lt;/i&amp;gt; something other than the default names: &amp;lt;code&amp;gt;id_dsa&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;id_ecdsa&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;id_ed25519&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;id_rsa&amp;lt;/code&amp;gt;, you will need to use the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option of scp and specify the path to your private key before the file paths via:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp -i /path/to/key filetocopy.txt USERNAME@niagara.scinet.utoronto.ca:/scratch/USERNAME/some_subdirectory/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More details about custom SSH key names can be found [[#Custom SSH Keys names|here]].&lt;br /&gt;
&lt;br /&gt;
The [[Niagara_Quickstart#Moving_data | Data Management]] wiki page has much more information on doing large transfers efficiently.&lt;br /&gt;
&lt;br /&gt;
==SSH for Windows Users==&lt;br /&gt;
&lt;br /&gt;
To use SSH on Windows, you will have to install SSH software.   SciNet recommends, roughly in order of preference:&lt;br /&gt;
&lt;br /&gt;
* [http://www.cygwin.com/ Cygwin] is an entire Linux-like environment for Windows.   Using something like Cygwin is highly recommended if you are going to be interacting a lot with Linux systems, as it will give you a development environment very similar to that on the systems you'll be using.   Download and run &amp;lt;tt&amp;gt;setup.exe&amp;lt;/tt&amp;gt;, and install any packages you think you'll need.  Once this is done, you will have icons for terminals, including one saying something like &amp;quot;X11&amp;quot;.  From either of these, you'll be able to type &amp;lt;tt&amp;gt;ssh user@niagara.scinet.utoronto.ca&amp;lt;/tt&amp;gt; as above; if you think you will need to pop up windows from SciNet machines (&amp;lt;i&amp;gt;e.g.&amp;lt;/i&amp;gt;, for displaying data or using [[Performance_And_Debugging_Tools:_GPC | Profiling Tools]]), you'll need to use the X11 terminal and type &amp;lt;tt&amp;gt;ssh -Y user@niagara.scinet.utoronto.ca&amp;lt;/tt&amp;gt;.   Other ssh tools such as &amp;lt;tt&amp;gt;scp&amp;lt;/tt&amp;gt; will work as above.&lt;br /&gt;
* [http://mobaxterm.mobatek.net/en/ MobaXterm] is a tabbed ssh client with some Cygwin tools all wrapped up into one executable.&lt;br /&gt;
* [http://sshwindows.sourceforge.net/ OpenSSH For Windows] installs only those parts of Cygwin necessary to run SSH.  Again, once installed, opening up one of the new terminals allows you to use SSH as in the Linux/Mac OSX section above, but X11 forwarding for displaying windows may not work.&lt;br /&gt;
* [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY] is one of the better stand-alone SSH programs for Windows.  It is a small download, and is enough to get you logged into the SciNet machines.  For advanced use like X11 forwarding, however, you are better off using Cygwin.   A related program, [http://the.earth.li/~sgtatham/putty/latest/x86/pscp.exe PSCP], can be used to copy files using a graphical user interface. &amp;lt;br/&amp;gt; '''WARNING:''' Make sure you download PuTTY from the official website, because there are &amp;quot;trojanized&amp;quot; versions of PuTTY around that will send your login information to a site in Russia (as reported [http://blogs.cisco.com/security/trojanized-putty-software here]).&lt;br /&gt;
&lt;br /&gt;
===Copying Files===&lt;br /&gt;
&lt;br /&gt;
To transfer files to Niagara in Windows it is recommended to use the tool called [https://winscp.net/eng/index.php WinSCP]. Setting up a connection to Niagara using your SSH key can be done by following the steps in this [https://www.exavault.com/blog/import-ssh-keys-winscp link].&lt;br /&gt;
&lt;br /&gt;
==X11 Forwarding==&lt;br /&gt;
&lt;br /&gt;
If during your login session you will only need to be typing and reading text, the techniques described above will suffice.&lt;br /&gt;
However, if in a session you will need to be displaying graphics &amp;amp;mdash; such as plotting data on the SciNet machines or using our [[Performance_And_Debugging_Tools:_Niagara | performance profiling tools]] &amp;amp;mdash; you can use SSH's very powerful ability to forward several different types of data over one connection.&lt;br /&gt;
To enable &amp;quot;X11 forwarding&amp;quot; over this SSH connection, add the option &amp;lt;tt&amp;gt;-Y&amp;lt;/tt&amp;gt; to your command, &lt;br /&gt;
&lt;br /&gt;
 ssh -Y USERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
&lt;br /&gt;
* Both Windows and Mac OS users will need to install an additional program, usually referred to as an &amp;quot;X server&amp;quot;, which interprets the forwarded graphics data and displays it on the local computer.&lt;br /&gt;
* Mac OS users need to install [https://www.xquartz.org XQuartz]&lt;br /&gt;
* Windows users can opt for installing [https://mobaxterm.mobatek.net MobaXterm], an SSH client which already includes an X server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==SSH Keys==&lt;br /&gt;
&lt;br /&gt;
[[SSH | SSH]] has an alternative to passwords to authenticate your login; you can generate a key file on a trusted machine and tell a remote machine to trust logins that present that key.   This can be both convenient and secure, and may be necessary for some tasks (such as connecting directly to compute nodes to use [[Visualization | some visualization packages]]). Here we describe how to set up keys for logging into SciNet.&lt;br /&gt;
&lt;br /&gt;
In addition to using passwords to [http://en.wikipedia.org/wiki/Authentication authenticate] users, one can use cryptographically secure keys to guarantee that a login request is coming from a trusted account on a remote machine, and automatically allow such requests.   Done properly, this is as secure as requiring a password, but can be more convenient, and is necessary for some operations.  &lt;br /&gt;
&lt;br /&gt;
===How SSH keys work===&lt;br /&gt;
&lt;br /&gt;
SSH relies on [http://en.wikipedia.org/wiki/Public-key_cryptography public key cryptography] for its encryption.  These cryptosystems have a private key, which must be kept secret, and a public key, which may be disseminated freely.   In these systems, anyone may use the public key to encode a message; but only the owner of the private key can decode the message.  This can also be used to verify identities; if someone is claiming to be Alice, the owner of some private key, Bob can send Alice a message encoded with Alice's well-known public key.  If the person claiming to be Alice can then tell Bob what the message really was, then that person at the very least has access to Alice's private key.&lt;br /&gt;
&lt;br /&gt;
To use keys for authentication, you need to:&lt;br /&gt;
&lt;br /&gt;
* Generate a key pair (private and public)&lt;br /&gt;
* Copy the public key to a remote site, and add it to the list of authorized keys&lt;br /&gt;
* Ensure permissions are set properly&lt;br /&gt;
* Test to make sure it works&lt;br /&gt;
&lt;br /&gt;
===Generating an SSH key pair===&lt;br /&gt;
&lt;br /&gt;
The first stage is to create an SSH key pair. On Linux &amp;amp; MacOS (and Windows, with [https://mobaxterm.mobatek.net/ MobaXterm] terminal)  this is done using the &amp;lt;tt&amp;gt;ssh-keygen&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
&lt;br /&gt;
 ssh-keygen -t ed25519&lt;br /&gt;
&lt;br /&gt;
If that doesn't work, try:&lt;br /&gt;
&lt;br /&gt;
 ssh-keygen -t rsa -b 4096&lt;br /&gt;
&lt;br /&gt;
This will prompt you for two pieces of information: where to save the key, and a passphrase for the key.  The passphrase is like a password, but rather than letting you in to some particular account, it allows you to use the key you've generated to log into other systems.  &lt;br /&gt;
&lt;br /&gt;
The default location to save the private key is in &amp;lt;tt&amp;gt;${HOME}/.ssh/id_&amp;lt;i&amp;gt;type&amp;lt;/i&amp;gt;&amp;lt;/tt&amp;gt; (where &amp;lt;tt&amp;gt;&amp;lt;i&amp;gt;type&amp;lt;/i&amp;gt;&amp;lt;/tt&amp;gt; is &amp;lt;tt&amp;gt; ed25519&amp;lt;/tt&amp;gt; for an Ed25519 key or &amp;lt;tt&amp;gt;rsa&amp;lt;/tt&amp;gt; for an RSA key); unless you have some specific reason for placing it elsewhere, use this option. The public key will be &amp;lt;tt&amp;gt;id_&amp;lt;i&amp;gt;type&amp;lt;/i&amp;gt;.pub&amp;lt;/tt&amp;gt; in the same directory. &lt;br /&gt;
&lt;br /&gt;
While the default names for the keys are sufficient for most use cases, it is worth noting that you are free to name them as you wish. For example, you might want to include a string referring to your name and where you plan on using this key, like &amp;lt;tt&amp;gt;js_cc_ed25519&amp;lt;/tt&amp;gt; (and &amp;lt;tt&amp;gt;js_cc_ed25519.pub&amp;lt;/tt&amp;gt;), if your name is John Smith and you created a key of &amp;lt;tt&amp;gt;ed25519&amp;lt;/tt&amp;gt; type on your laptop to be used on Compute Canada (CC) HPC systems. This is useful for distinguishing among the several keys you might need on the same computer (for example, different keys to access different clusters).&lt;br /&gt;
&lt;br /&gt;
Your passphrase can be any string, of any length.   It is best not to make it the same as any of your passwords. A reasonably strong passphrase usually consists of one to three short sentences, written with the proper punctuation in place. Make sure it is unique and memorable to you. Do not use any popular catch phrases, jingles, or song choruses; they could easily be found on the web and catalogued in a database for a brute-force attack. &lt;br /&gt;
&lt;br /&gt;
A sample session of generating a key would go like this:&lt;br /&gt;
&lt;br /&gt;
 $ ssh-keygen -t ed25519&lt;br /&gt;
 Generating public/private ed25519 key pair.&lt;br /&gt;
 Enter file in which to save the key (/home/USERNAME/.ssh/id_ed25519): &lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/USERNAME/.ssh/id_ed25519.&lt;br /&gt;
 Your public key has been saved in /home/USERNAME/.ssh/id_ed25519.pub.&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 SHA256:EajOndriRmLpl1qKg03FDhnc0EzRaApdBTygEbpQZrA USERNAME@HOSTNAME&lt;br /&gt;
 The key's randomart image is:&lt;br /&gt;
 +--[ED25519 256]--+&lt;br /&gt;
 |+=*X=*...        |&lt;br /&gt;
 |oB+ O o  .       |&lt;br /&gt;
 |E. * o  .        |&lt;br /&gt;
 |..+ +    .       |&lt;br /&gt;
 |.  B . .S        |&lt;br /&gt;
 |  = = o          |&lt;br /&gt;
 |.= o.+           |&lt;br /&gt;
 |o.oo* .          |&lt;br /&gt;
 |..o=..           |&lt;br /&gt;
 +----[SHA256]-----+&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Don't Use Passphraseless Keys!====&lt;br /&gt;
&lt;br /&gt;
If you do not specify a passphrase, you will have a completely &amp;quot;exposed&amp;quot; private key.  '''This is a terrible idea.'''   If you then use this key for anything it means that anyone who sits down at your desk, or anyone who borrows or steals your laptop, can login to anywhere you use that key (good guesses could come from just looking at your history) without needing any password, and could do anything they wanted with your account or data.  Don't use passphraseless keys.&lt;br /&gt;
&lt;br /&gt;
====Don't Copy Your Private Key to Other Systems!====&lt;br /&gt;
&lt;br /&gt;
A private key should never leave the computer on which it was generated. If you have a personal computer at home, a laptop and a desktop at work, for example, make sure to repeat the process for generating a public-private key pair on each of these systems. Insert a comment into the key so that you know where the key pair was created and how it was named; this is useful for distinguishing keys coming from different devices when inspecting the public keys. For example, issue the following commands:&lt;br /&gt;
&lt;br /&gt;
 $ ssh-keygen -t ed25519 -C &amp;quot;js@mylaptop js_cc&amp;quot; -f $HOME/.ssh/js_cc_ed25519&lt;br /&gt;
 $ ssh-keygen -t ed25519 -C &amp;quot;js@mydesktop js_cc&amp;quot; -f $HOME/.ssh/js_cc_ed25519&lt;br /&gt;
 $ ssh-keygen -t ed25519 -C &amp;quot;js@workdesktop js_cc&amp;quot; -f $HOME/.ssh/js_cc_ed25519&lt;br /&gt;
&lt;br /&gt;
for keys created on your personal laptop, desktop and work desktop, respectively (replace the suggested &amp;quot;js&amp;quot; abbreviations above, and any other reference to the name John Smith, with your own). See the [https://docs.scinet.utoronto.ca/index.php/SSH_keys#Multiple_SSH_keys Multiple SSH keys] section for other examples.&lt;br /&gt;
&lt;br /&gt;
Private keys are very powerful. In the wrong hands, they can be used to impersonate you on every system you access with them. Copying them from the location where they were generated to other places also increases the number of systems that the key can be used to compromise. In addition, leaving the private key where it was created protects you against [https://en.wikipedia.org/wiki/Man-in-the-middle_attack man-in-the-middle attacks,] which impersonate the systems you want to copy your keys to and steal your private key in the process.&lt;br /&gt;
&lt;br /&gt;
===Uploading the Public Key to CCDB===&lt;br /&gt;
&lt;br /&gt;
Use your Compute Canada credentials to visit the following site:&lt;br /&gt;
&lt;br /&gt;
 https://ccdb.computecanada.ca/ssh_authorized_keys&lt;br /&gt;
&lt;br /&gt;
and follow the instructions there to upload your public key. Using the &amp;lt;tt&amp;gt;cat&amp;lt;/tt&amp;gt; command as follows:&lt;br /&gt;
&lt;br /&gt;
 cat ~/.ssh/id_ed25519.pub&lt;br /&gt;
&lt;br /&gt;
you can display the plain-text contents of your public key, then select and copy it into the CCDB web page above.&lt;br /&gt;
&lt;br /&gt;
Keys uploaded to the CCDB are available on all clusters across Compute Canada.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;~/.ssh&amp;lt;/tt&amp;gt; Directory Permissions===&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;tt&amp;gt;SSH&amp;lt;/tt&amp;gt; is very fussy about file permissions; your &amp;lt;tt&amp;gt;~/.ssh&amp;lt;/tt&amp;gt; directory must only be accessible by you, and your various key files must not be writable (or in some cases, readable) by anyone else.  Sometimes users accidentally reset file permissions while editing these files, and problems happen.   If you look at the &amp;lt;tt&amp;gt;~/.ssh&amp;lt;/tt&amp;gt; directory itself, it should not be accessible by anyone else:&lt;br /&gt;
&lt;br /&gt;
 $ ls -ld ~/.ssh&lt;br /&gt;
 drwx------ 2 USERNAME GROUPNAME 7 Aug  9 15:43 /home/.../.ssh&lt;br /&gt;
&lt;br /&gt;
To fix your permissions, use the following command:&lt;br /&gt;
&lt;br /&gt;
 chmod -R go= ~/.ssh/&lt;br /&gt;
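If you prefer to set the modes explicitly rather than strip them with the blanket command above, this sketch rehearses the permissions SSH expects on a throwaway directory (substitute your real &amp;lt;tt&amp;gt;~/.ssh&amp;lt;/tt&amp;gt; directory and key names):

```shell
# Rehearse the permissions SSH expects, on a throwaway directory
# standing in for ~/.ssh.
demo=$(mktemp -d)/dot_ssh
mkdir -p "$demo"
touch "$demo/id_ed25519" "$demo/id_ed25519.pub"
chmod 700 "$demo"                  # directory: accessible by you only
chmod 600 "$demo/id_ed25519"       # private key: owner read/write only
chmod 644 "$demo/id_ed25519.pub"   # public key: world-readable is fine
stat -c '%a %n' "$demo" "$demo/id_ed25519" "$demo/id_ed25519.pub"
```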
&lt;br /&gt;
===Testing Your Key===&lt;br /&gt;
&lt;br /&gt;
Now you should be able to login to the remote system (say, SciNet):&lt;br /&gt;
&lt;br /&gt;
 $ ssh USERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
 Enter passphrase for key '/home/USERNAME/.ssh/id_ed25519': &lt;br /&gt;
 Last login: Tue Aug 17 11:24:48 2010 from HOSTNAME&lt;br /&gt;
 [...]&lt;br /&gt;
 nia-login07-$&lt;br /&gt;
&lt;br /&gt;
If this is indeed the absolute first time you are trying to access Niagara, for example, please make sure you are actually accessing Niagara by double-checking Niagara's login node ssh key fingerprint as instructed [https://docs.scinet.utoronto.ca/index.php/SSH_Changes_in_May_2019 here.] This check is important to avoid being a victim of [https://en.wikipedia.org/wiki/Man-in-the-middle_attack man-in-the-middle attacks.] As a reference, you can check host key fingerprints for other Compute Canada systems at this [https://docs.computecanada.ca/wiki/SSH_host_keys link.]&lt;br /&gt;
&lt;br /&gt;
If you get the message below, you may need to log out of your GNOME session and log back in, since &amp;lt;tt&amp;gt;ssh-agent&amp;lt;/tt&amp;gt; needs to be restarted to pick up the new passphrase-protected SSH key.&lt;br /&gt;
&lt;br /&gt;
 $ ssh USERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
 Agent admitted failure to sign using the key.&lt;br /&gt;
&lt;br /&gt;
===(Optional) Using &amp;lt;tt&amp;gt;ssh-agent&amp;lt;/tt&amp;gt; to Remember Your Key===&lt;br /&gt;
&lt;br /&gt;
But now you've just replaced having to type a password for login with having to type a passphrase for your key; what have you gained?  &lt;br /&gt;
&lt;br /&gt;
It turns out that there's an automated way to manage ssh &amp;quot;identities&amp;quot;, using the &amp;lt;tt&amp;gt;ssh-agent&amp;lt;/tt&amp;gt; command, which should automatically be running on newer Linux or macOS machines.   You can add keys to this agent for the duration of your login using the &amp;lt;tt&amp;gt;ssh-add&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
&lt;br /&gt;
 $ ssh-add&lt;br /&gt;
 Enter passphrase for /home/USERNAME/.ssh/id_ed25519: &lt;br /&gt;
 Identity added: /home/USERNAME/.ssh/id_ed25519 (/home/USERNAME/.ssh/id_ed25519)&lt;br /&gt;
&lt;br /&gt;
and then logins will not require the passphrase, as &amp;lt;tt&amp;gt;ssh-agent&amp;lt;/tt&amp;gt; will provide access to the key.&lt;br /&gt;
&lt;br /&gt;
When you log out of your home computer, the ssh agent will close, and next time you log in, you will have to &amp;lt;tt&amp;gt;ssh-add&amp;lt;/tt&amp;gt; your key.  You can also set a timeout of (say) an hour by using &amp;lt;tt&amp;gt;ssh-add -t 3600&amp;lt;/tt&amp;gt;.  This minimizes the number of times you have to type your passphrase, while still maintaining some degree of key security.&lt;br /&gt;
&lt;br /&gt;
You can list the fingerprints of all identities currently held by the agent with &amp;lt;tt&amp;gt;ssh-add -l&amp;lt;/tt&amp;gt;. To delete all identities from the agent, type &amp;lt;tt&amp;gt;ssh-add -D&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Custom SSH Keys names ===&lt;br /&gt;
&lt;br /&gt;
If you use a custom name for your SSH key pair, you will need to create or modify the configuration file &amp;lt;tt&amp;gt;~/.ssh/config&amp;lt;/tt&amp;gt; to contain something similar to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host niagara&lt;br /&gt;
HostName niagara.scinet.utoronto.ca&lt;br /&gt;
User YOUR_LOGIN&lt;br /&gt;
IdentityFile ~/.ssh/ssh_privatekey_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then authenticate by typing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh niagara&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also use the -i option of ssh to specify the path to your private key via:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -i /path/to/key USERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Multiple SSH keys ===&lt;br /&gt;
&lt;br /&gt;
It's recommended to have different SSH keys for each service, specific role, or domain. For example, to have separate keys for Niagara and Graham, first generate two new keys:&lt;br /&gt;
&lt;br /&gt;
 $ ssh-keygen -t ed25519 -f ~/.ssh/id_niagara  -C &amp;quot;Key for Niagara&amp;quot;&lt;br /&gt;
 ...&lt;br /&gt;
 $ ssh-keygen -t ed25519 -f ~/.ssh/id_graham   -C &amp;quot;Key for Graham&amp;quot;&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Make sure to use different file names for each key. Next, modify your &amp;lt;tt&amp;gt;~/.ssh/config&amp;lt;/tt&amp;gt; file, adding &amp;lt;tt&amp;gt;IdentityFile&amp;lt;/tt&amp;gt; directives:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host niagara&lt;br /&gt;
  User myusername&lt;br /&gt;
  HostName niagara.scinet.utoronto.ca&lt;br /&gt;
  IdentityFile ~/.ssh/id_niagara&lt;br /&gt;
&lt;br /&gt;
Host graham&lt;br /&gt;
  User myusername&lt;br /&gt;
  HostName graham.computecanada.ca&lt;br /&gt;
  IdentityFile ~/.ssh/id_graham&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now when you log in with the shortcuts&lt;br /&gt;
&lt;br /&gt;
 $ ssh niagara&lt;br /&gt;
&lt;br /&gt;
or&lt;br /&gt;
&lt;br /&gt;
 $ ssh graham&lt;br /&gt;
&lt;br /&gt;
different keys will be used.&lt;br /&gt;
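You can confirm which key ssh will offer for an alias without opening a connection, since &amp;lt;tt&amp;gt;ssh -G&amp;lt;/tt&amp;gt; prints the fully resolved configuration for a host. A sketch using a throwaway config file (the file contents here are illustrative):

```shell
# Write a throwaway config file, then ask ssh which options it would
# resolve for the alias; -G prints the configuration without connecting.
cfg=$(mktemp)
printf 'Host niagara\n  User myusername\n  HostName niagara.scinet.utoronto.ca\n  IdentityFile ~/.ssh/id_niagara\n' > "$cfg"
ssh -G -F "$cfg" niagara | grep -E '^(user|hostname|identityfile) '
```

The same check against your real configuration is simply &amp;lt;tt&amp;gt;ssh -G niagara&amp;lt;/tt&amp;gt;.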
&lt;br /&gt;
===Examples===&lt;br /&gt;
&lt;br /&gt;
====Copying a file on another cluster to Niagara====&lt;br /&gt;
&lt;br /&gt;
For convenience, assume you have already uploaded your public key to CCDB, so you can access all Compute Canada clusters with that key. Your modified &amp;lt;tt&amp;gt;~/.ssh/config&amp;lt;/tt&amp;gt; file (on your Linux laptop,&lt;br /&gt;
for example) could then use the same IdentityFile directive for all clusters. In this example, Graham and Niagara on Compute Canada use the same key, as does a remote cluster that is not part of Compute Canada:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host niagara&lt;br /&gt;
  User myusername&lt;br /&gt;
  HostName niagara.scinet.utoronto.ca&lt;br /&gt;
  IdentityFile ~/.ssh/id_ed25519&lt;br /&gt;
&lt;br /&gt;
Host graham&lt;br /&gt;
  User myusername&lt;br /&gt;
  HostName graham.computecanada.ca&lt;br /&gt;
  IdentityFile ~/.ssh/id_ed25519&lt;br /&gt;
&lt;br /&gt;
Host remote_cluster&lt;br /&gt;
  User myusername&lt;br /&gt;
  HostName remote_cluster.other_domain.ca&lt;br /&gt;
  IdentityFile ~/.ssh/id_ed25519&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Suppose you want to log into Niagara and copy there a file that lives on Graham, or on remote_cluster. How would you do it? Based on the assumptions above, you could proceed as follows:&lt;br /&gt;
&lt;br /&gt;
1) On your laptop, the first step is to load your private key into the ssh-agent daemon:&lt;br /&gt;
&lt;br /&gt;
  $ ssh-add /home/USERNAME/.ssh/id_ed25519&lt;br /&gt;
  Enter passphrase for /home/USERNAME/.ssh/id_ed25519: &lt;br /&gt;
  Identity added: /home/USERNAME/.ssh/id_ed25519 (/home/USERNAME/.ssh/id_ed25519)&lt;br /&gt;
&lt;br /&gt;
This only needs to be done once per work session, which makes it very convenient, and it is strongly encouraged.&lt;br /&gt;
&lt;br /&gt;
2) Access Niagara and enable forwarding of the authentication agent connection (ssh-agent):&lt;br /&gt;
&lt;br /&gt;
  $ ssh -A niagara&lt;br /&gt;
&lt;br /&gt;
3) Use the secure copy command to copy the file from Graham to Niagara:&lt;br /&gt;
&lt;br /&gt;
  $ scp -p graham.computecanada.ca:/path/to/file.txt .&lt;br /&gt;
&lt;br /&gt;
The '-p' option is commonly used to preserve the date stamp and other file attributes as they are copied; it is not mandatory. Use the '-r' option if you want to copy a directory recursively, though if the directory contains many small files it may be better to create a tarball first. Note that if you&lt;br /&gt;
want to use an alias for graham in the scp command as follows:&lt;br /&gt;
&lt;br /&gt;
  $ scp -p graham:/path/to/file.txt .&lt;br /&gt;
&lt;br /&gt;
you would have to copy your modified ~/.ssh/config file above from your laptop to Niagara first.&lt;br /&gt;
&lt;br /&gt;
4) Use the secure copy command to copy the file from remote_cluster to Niagara:&lt;br /&gt;
&lt;br /&gt;
  $ scp -p remote_cluster.other_domain.ca:/path/to/file.txt .&lt;br /&gt;
&lt;br /&gt;
This assumes you have copied your public key from your laptop to the remote_cluster first:&lt;br /&gt;
&lt;br /&gt;
  $ ssh-copy-id -i ~/.ssh/id_ed25519.pub remote_cluster&lt;br /&gt;
&lt;br /&gt;
or, in the absence of the ssh-copy-id command:&lt;br /&gt;
&lt;br /&gt;
  $ cat ~/.ssh/id_ed25519.pub | ssh remote_cluster &amp;quot;cat &amp;gt;&amp;gt; ~/.ssh/authorized_keys&amp;quot;&lt;br /&gt;
&lt;br /&gt;
and also assumes remote_cluster is not part of Compute Canada, and therefore has no central mechanism for managing users' ssh keys like CCDB.&lt;br /&gt;
&lt;br /&gt;
5) The '-A' option should be used deliberately, for this copying task only, and for the shortest time needed. The rationale is that if Niagara were compromised, an attacker could use your forwarded ssh-agent connection to access other systems as you. It is better to exit and log back in without any extra options:&lt;br /&gt;
&lt;br /&gt;
  $ exit&lt;br /&gt;
  $ ssh niagara&lt;br /&gt;
&lt;br /&gt;
=== Best Practice Summary ===&lt;br /&gt;
&lt;br /&gt;
* Create one key pair for each computer you work on and give it a meaningful name. In addition, a comment can help identify the device where it was created.&lt;br /&gt;
* Protect each of your private keys with a strong passphrase. We recommend fifteen characters or more.&lt;br /&gt;
* Do not share your private keys.&lt;br /&gt;
* Never copy your private keys to other systems.&lt;br /&gt;
* Create one key pair for each different service, role or domain, and name them accordingly.&lt;br /&gt;
* Do not create key pairs on shared systems like HPC clusters.&lt;br /&gt;
&lt;br /&gt;
==SSH Tunnels==&lt;br /&gt;
&lt;br /&gt;
A more-obscure technique for setting up SSH communication is the construction of an SSH tunnel.  This can be useful if, for example, your code needs to access an external software license server from a Niagara compute node.  You can read about setting up SSH tunnels on Niagara [[SSH_Tunneling|here]].&lt;br /&gt;
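As a minimal sketch of the general idea (the license server host and port below are assumptions for illustration, not a real SciNet service), a local forward can be declared in your ~/.ssh/config; connections to the local port are then relayed through Niagara to the target host:

```
Host niagara-tunnel
  HostName niagara.scinet.utoronto.ca
  User myusername
  # relay local port 6624 through Niagara to a hypothetical license server
  LocalForward 6624 license.example.com:6624
```

Running &amp;lt;tt&amp;gt;ssh -N niagara-tunnel&amp;lt;/tt&amp;gt; then holds the tunnel open; the page linked above covers the compute-node specifics.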
&lt;br /&gt;
==Two-Factor authentication==&lt;br /&gt;
&lt;br /&gt;
As a protection for you and for your data and programs, you may use two-factor authentication when connecting to Niagara through SSH. This is optional.&lt;br /&gt;
&lt;br /&gt;
===What is Two-Factor authentication?===&lt;br /&gt;
&lt;br /&gt;
According to [https://en.wikipedia.org/wiki/Multi-factor_authentication Wikipedia], Multi-factor authentication is an authentication method in which a computer user is granted access only after successfully presenting two or more pieces of evidence (or factors) to an authentication mechanism: knowledge (something the user and only the user knows), possession (something the user and only the user has), and inherence (something the user and only the user is).&lt;br /&gt;
&lt;br /&gt;
Two-factor authentication (also known as 2FA) is a type, or subset, of multi-factor authentication. It is a method of confirming users' claimed identities by using a combination of two of the following factors: 1) something they know, 2) something they have, or 3) something they are.&lt;br /&gt;
&lt;br /&gt;
A good example of two-factor authentication is the withdrawing of money from an ATM; only the correct combination of a bank card (something the user possesses) and a PIN (something the user knows) allows the transaction to be carried out.&lt;br /&gt;
&lt;br /&gt;
Two other examples are to supplement a user-controlled password with a one-time password (OTP) or code generated or received by an authenticator (e.g. a security token or smartphone) that only the user possesses.&lt;br /&gt;
&lt;br /&gt;
Two-step verification or two-step authentication is a method of confirming a user's claimed identity by utilizing something they know (password) and a second factor other than something they have or something they are. An example of a second step is the user repeating back something that was sent to them through an out-of-band mechanism (such as a code sent over SMS), or a number generated by an app that is common to the user and the authentication system.&lt;br /&gt;
&lt;br /&gt;
===Benefits of Two-Factor authentication (2FA)===&lt;br /&gt;
&lt;br /&gt;
2FA delivers an extra layer of protection for user accounts that, while not impregnable, significantly decreases the risk of unauthorized access and system breaches. Users benefit from increased security because gaining access to an account requires far more resources from an attacker.&lt;br /&gt;
&lt;br /&gt;
If you already follow basic password security measures, two-factor authentication will make it much more difficult for cyber criminals to breach your account, because the second authentication factor is hard to obtain remotely; an attacker would have to be physically much closer to you. This drastically reduces their chances of success.&lt;br /&gt;
&lt;br /&gt;
A hacker may also gain access to your computer itself; this is, unfortunately, quite common. They can plant malware such as a keylogger, which transmits all your keyboard activity, or malware that gives them full remote access to your machine. Such a hacker can easily capture your passwords, but it is virtually impossible for the same hacker to also obtain your second factor.&lt;br /&gt;
&lt;br /&gt;
We encourage all our users to set up two-factor authentication; it is for your own protection. Setup instructions are available [[Two-Factor_setup|here]].&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3503</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3503"/>
		<updated>2022-01-29T21:46:17Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Up |Mist|Mist}}&lt;br /&gt;
|{{Up |Teach|Teach}}&lt;br /&gt;
|{{Up |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Up |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up|HPSS|HPSS}}&lt;br /&gt;
|{{Up |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |Globus |Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Sat Jan 29 16:45:38 EST 2022&amp;lt;/b&amp;gt; Fibre repaired.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sat 29 Jan 2022 11:22:27 EST&amp;lt;/b&amp;gt; Fibre repair is underway.  Expect to have connectivity restored later today.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Fri 28 Jan 2022 07:35:01 EST&amp;lt;/b&amp;gt; The fibre optics cable that connects the SciNet datacentre was severed by uncoordinated digging at York University.  We expect repairs to happen as soon as possible.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Thu Jan 27 12:46 EST PM 2022&amp;lt;/b&amp;gt; Network issues to and from the datacentre. We are investigating.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sun Jan 23 11:05 EST AM 2022&amp;lt;/b&amp;gt; Filesystem issues appear to have resolved.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sun Jan 23 10:30 EST AM 2022&amp;lt;/b&amp;gt; Filesystem issues -- investigating.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sat Jan 8 11:42 EST AM 2022&amp;lt;/b&amp;gt; The emergency maintenance is complete. Systems are up and available.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Fri Jan 7 14:34 EST PM 2022&amp;lt;/b&amp;gt; The SciNet shutdown is in progress. Systems are expected back on Saturday, Jan 8.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3473</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3473"/>
		<updated>2022-01-23T16:07:32Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Up |Mist|Mist}}&lt;br /&gt;
|{{Up |Teach|Teach}}&lt;br /&gt;
|{{Partial |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Up |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up|HPSS|HPSS}}&lt;br /&gt;
|{{Up |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |Globus |Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sun Jan 23 11:05 EST AM 2022&amp;lt;/b&amp;gt; Filesystem issues appear to have resolved.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sun Jan 23 10:30 EST AM 2022&amp;lt;/b&amp;gt; Filesystem issues -- investigating.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sat Jan 8 11:42 EST AM 2022&amp;lt;/b&amp;gt; The emergency maintenance is complete. Systems are up and available.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Fri Jan 7 14:34 EST PM 2022&amp;lt;/b&amp;gt; The SciNet shutdown is in progress. Systems are expected back on Saturday, Jan 8.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3470</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3470"/>
		<updated>2022-01-23T16:07:21Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Up |Mist|Mist}}&lt;br /&gt;
|{{Up |Teach|Teach}}&lt;br /&gt;
|{{Partial |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Down |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up|HPSS|HPSS}}&lt;br /&gt;
|{{Up |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |Globus |Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sun Jan 23 11:05 EST AM 2022&amp;lt;/b&amp;gt; Filesystem issues appear to have resolved.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sun Jan 23 10:30 EST AM 2022&amp;lt;/b&amp;gt; Filesystem issues -- investigating.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sat Jan 8 11:42 EST AM 2022&amp;lt;/b&amp;gt; The emergency maintenance is complete. Systems are up and available.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Fri Jan 7 14:34 EST PM 2022&amp;lt;/b&amp;gt; The SciNet shutdown is in progress. Systems are expected back on Saturday, Jan 8.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3467</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3467"/>
		<updated>2022-01-23T15:56:59Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Up |Mist|Mist}}&lt;br /&gt;
|{{Up |Teach|Teach}}&lt;br /&gt;
|{{Partial |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Down |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up|HPSS|HPSS}}&lt;br /&gt;
|{{Up |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |Globus |Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sun Jan 23 10:45 EST AM 2022&amp;lt;/b&amp;gt; Filesystem issues -- investigating.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sat Jan 8 11:42 EST AM 2022&amp;lt;/b&amp;gt; The emergency maintenance is complete. Systems are up and available.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Fri Jan 7 14:34 EST PM 2022&amp;lt;/b&amp;gt; The SciNet shutdown is in progress. Systems are expected back on Saturday, Jan 8.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://education.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH#SSH Keys|SSH keys]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3217</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3217"/>
		<updated>2021-09-15T21:38:07Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Up |Mist|Mist}}&lt;br /&gt;
|{{Up |Teach|Teach}}&lt;br /&gt;
|{{Up |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{up |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Up |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |HPSS|HPSS}}&lt;br /&gt;
|{{Up |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up|Globus|Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Wed Sep 15 17:35 2021&amp;lt;/b&amp;gt;: filesystem issues resolved&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Wed Sep 15 16:39 2021&amp;lt;/b&amp;gt;: filesystem issues&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Mon Sep 13 13:15:07 EDT 2021&amp;lt;/b&amp;gt; HPSS is back online.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Fri Sep 10 17:57:23 EDT 2021&amp;lt;/b&amp;gt; HPSS is offline due to unscheduled maintenance.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Wed Aug 18 16:13:42 EDT 2021&amp;lt;/b&amp;gt; The HPSS upgrade is complete.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;HPSS Downtime August 17th and 18th, 2021 (Tuesday and Wednesday):&amp;lt;/b&amp;gt; We'll be upgrading the HPSS software to version 8.3, along with all the clients (htar/hsi, vfs and Globus/dsi)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://support.scinet.utoronto.ca/education/browse.php SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3216</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3216"/>
		<updated>2021-09-15T21:03:27Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Up |Mist|Mist}}&lt;br /&gt;
|{{Up |Teach|Teach}}&lt;br /&gt;
|{{Up |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{up |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Down |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |HPSS|HPSS}}&lt;br /&gt;
|{{Up |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up|Globus|Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Wed Sep 15 16:39 2021&amp;lt;/b&amp;gt;: filesystem issues&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Mon Sep 13 13:15:07 EDT 2021&amp;lt;/b&amp;gt; HPSS is back online.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Fri Sep 10 17:57:23 EDT 2021&amp;lt;/b&amp;gt; HPSS is offline due to unscheduled maintenance.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Wed Aug 18 16:13:42 EDT 2021&amp;lt;/b&amp;gt; The HPSS upgrade is complete.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;HPSS Downtime August 17th and 18th, 2021 (Tuesday and Wednesday):&amp;lt;/b&amp;gt; We'll be upgrading the HPSS software to version 8.3, along with all the clients (htar/hsi, vfs and Globus/dsi)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://support.scinet.utoronto.ca/education/browse.php SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3213</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3213"/>
		<updated>2021-09-15T21:02:04Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Up |Mist|Mist}}&lt;br /&gt;
|{{Up |Teach|Teach}}&lt;br /&gt;
|{{Up |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{up |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Down |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |HPSS|HPSS}}&lt;br /&gt;
|{{Up |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up|Globus|Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Wed Sep 15 16:50 2021&amp;lt;/b&amp;gt;: filesystem issues&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Mon Sep 13 13:15:07 EDT 2021&amp;lt;/b&amp;gt; HPSS is back online.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Fri Sep 10 17:57:23 EDT 2021&amp;lt;/b&amp;gt; HPSS is offline due to unscheduled maintenance.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Wed Aug 18 16:13:42 EDT 2021&amp;lt;/b&amp;gt; The HPSS upgrade is complete.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;HPSS Downtime August 17th and 18th, 2021 (Tuesday and Wednesday):&amp;lt;/b&amp;gt; We'll be upgrading the HPSS software to version 8.3, along with all the clients (htar/hsi, vfs and Globus/dsi)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://support.scinet.utoronto.ca/education/browse.php SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3212</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3212"/>
		<updated>2021-09-15T21:01:29Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up |Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Up |Mist|Mist}}&lt;br /&gt;
|{{Up |Teach|Teach}}&lt;br /&gt;
|{{Up |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{up |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Down |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |HPSS|HPSS}}&lt;br /&gt;
|{{Up |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up|Globus|Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Wed Sep 15 4:50 2021&amp;lt;/b&amp;gt;: filesystem issues&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Mon Sep 13 13:15:07 EDT 2021&amp;lt;/b&amp;gt; HPSS is back online.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Fri Sep 10 17:57:23 EDT 2021&amp;lt;/b&amp;gt; HPSS is offline due to unscheduled maintenance.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Wed Aug 18 16:13:42 EDT 2021&amp;lt;/b&amp;gt; The HPSS upgrade is complete.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;HPSS Downtime August 17th and 18th, 2021 (Tuesday and Wednesday):&amp;lt;/b&amp;gt; We'll be upgrading the HPSS software to version 8.3, along with all the clients (htar/hsi, vfs and Globus/dsi)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://support.scinet.utoronto.ca/education/browse.php SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Modules for Mist]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Docker&amp;diff=3147</id>
		<title>Docker</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Docker&amp;diff=3147"/>
		<updated>2021-07-17T18:03:03Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Docker is not available on SciNet's clusters.&lt;br /&gt;
However, you can use [http://singularity.hpcng.org singularity] to run commands in docker images.&lt;br /&gt;
&lt;br /&gt;
== Pulling a docker image ==&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;$ singularity pull docker://alpine:latest&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This pulls the '''alpine:latest''' image from Docker Hub and converts it to singularity's image format,&lt;br /&gt;
saving it as a file named '''alpine_latest.sif'''.&lt;br /&gt;
&lt;br /&gt;
As this requires an internet connection to work, it can only be done on the login nodes, and not in job scripts.&lt;br /&gt;
&lt;br /&gt;
You can also pull from other docker registries, e.g.:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;nowiki&amp;gt;$ singularity pull docker://quay.io/biocontainers/samtools:1.13--h8c37831_0&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates an image file named '''samtools_1.13--h8c37831_0.sif'''.&lt;br /&gt;
&lt;br /&gt;
== Running a command inside an image ==&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;$ singularity exec alpine_latest.sif cat /etc/alpine-release&lt;br /&gt;
3.14.0&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;$ singularity exec samtools_1.13--h8c37831_0.sif samtools --version&lt;br /&gt;
samtools 1.13&lt;br /&gt;
Using htslib 1.13&lt;br /&gt;
...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Binding directories ==&lt;br /&gt;
&lt;br /&gt;
Like docker, singularity containers have their own filesystems, and can't see files on the host system by default.&lt;br /&gt;
So for example, your scratch space is not visible inside the container:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;$ singularity exec alpine_latest.sif ls $SCRATCH&lt;br /&gt;
ls: /scratch/g/group/user: No such file or directory&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To access directories on the host system, you need to bind them into the container:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;$ singularity exec --bind=&amp;quot;$SCRATCH&amp;quot; alpine_latest.sif ls $SCRATCH&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can change the bound directory name inside the container:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;nowiki&amp;gt;$ singularity exec --bind=&amp;quot;$SCRATCH:/data&amp;quot; alpine_latest.sif ls /data&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To bind multiple directories, use a comma separated list:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;nowiki&amp;gt;$ singularity exec --bind=&amp;quot;$SCRATCH:/data,$PROJECT&amp;quot; alpine_latest.sif ls /data $PROJECT&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Unlike docker, singularity containers are read-only. Files may only be written to host directories.&lt;br /&gt;
&lt;br /&gt;
Also unlike docker, singularity will automatically bind your home directory. To disable this, use the '''--no-home''' option.&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Docker&amp;diff=3146</id>
		<title>Docker</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Docker&amp;diff=3146"/>
		<updated>2021-07-16T15:04:29Z</updated>

		<summary type="html">&lt;p&gt;Nolta: Created page with &amp;quot; Docker is not available on SciNet's clusters. However, you can use [http://singularity.hpcng.org singularity] to run docker images.  == Fetching a docker image ==   &amp;lt;nowiki&amp;gt;$...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Docker is not available on SciNet's clusters.&lt;br /&gt;
However, you can use [http://singularity.hpcng.org singularity] to run docker images.&lt;br /&gt;
&lt;br /&gt;
== Fetching a docker image ==&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;$ singularity pull docker://alpine:latest&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This pulls the '''alpine:latest''' image from Docker Hub and converts it to singularity's image format,&lt;br /&gt;
saving it as a file named '''alpine_latest.sif'''.&lt;br /&gt;
&lt;br /&gt;
As this requires an internet connection to work, it can only be done on the login nodes, and not in job scripts.&lt;br /&gt;
&lt;br /&gt;
You can also pull from other docker registries, e.g.:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;nowiki&amp;gt;$ singularity pull docker://quay.io/biocontainers/samtools:1.13--h8c37831_0&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates an image file named '''samtools_1.13--h8c37831_0.sif'''.&lt;br /&gt;
&lt;br /&gt;
== Running a command inside the image ==&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;$ singularity exec alpine_latest.sif cat /etc/alpine-release&lt;br /&gt;
3.14.0&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;$ singularity exec samtools_1.13--h8c37831_0.sif samtools --version&lt;br /&gt;
samtools 1.13&lt;br /&gt;
Using htslib 1.13&lt;br /&gt;
...&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3059</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3059"/>
		<updated>2021-06-05T19:19:58Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up|Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Up|Mist|Mist}}&lt;br /&gt;
|{{Up |Teach|Teach}}&lt;br /&gt;
|{{Up |Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up |Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Up |File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Up |Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up |HPSS|HPSS}}&lt;br /&gt;
|{{Up |Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up |Globus|Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Jun 5, 2021, 3:10 PM EDT:&amp;lt;/b&amp;gt; File system issues resolved.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Jun 5, 2021, 11:15 AM EDT:&amp;lt;/b&amp;gt; File system issues. We are investigating.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;May 27, 2021:&amp;lt;/b&amp;gt; Datamover addresses have changed to improve high bandwidth connectivity and cybersecurity. The new addresses are 142.1.174.227 for nia-datamover1.scinet.utoronto.ca, and 142.1.174.228 for nia-datamover2.scinet.utoronto.ca.&lt;br /&gt;
&lt;br /&gt;
If you have jobs that need to connect to a software license server using an ssh tunnel through nia-gw (which actually resolves to datamover1 or datamover2), you may need to ask the system administrators of that license server to allow incoming connections from the new addresses above.&lt;br /&gt;
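&lt;br /&gt;
For example, such a tunnel could be set up with a command of the following form (the license server hostname and port here are placeholders for your own server's details):&lt;br /&gt;
&lt;br /&gt;
 ssh -N -L 1055:license.example.edu:1055 nia-gw&lt;br /&gt;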
&lt;br /&gt;
&amp;lt;b&amp;gt;June 9th to 10th, 2021:&amp;lt;/b&amp;gt; The SciNet datacentre will have a scheduled maintenance shutdown.  Niagara, Mist, Rouge, HPSS, login nodes, the file systems, and hosted systems will all be offline during the shutdown starting at 7AM EDT on Wednesday June 9th. We expect the system to be back up on the morning of Friday June 11th.  Check here for updates.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://support.scinet.utoronto.ca/education/browse.php SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3022</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=3022"/>
		<updated>2021-05-27T01:28:51Z</updated>

		<summary type="html">&lt;p&gt;Nolta: /* System Status */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Down|Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Down|Mist|Mist}}&lt;br /&gt;
|{{Down|Teach|Teach}}&lt;br /&gt;
|{{Down|Rouge|Rouge}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Down|Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Down|Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Down|File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Down|Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Down|HPSS|HPSS}}&lt;br /&gt;
|{{Down|Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Down|External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Down|Globus|Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Announcement: On June 7th and 8th, 2021, the SciNet datacentre will have a scheduled maintenance shutdown.  Niagara, Mist, HPSS, login nodes, the file systems, and hosted systems will all be offline during the shutdown. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://support.scinet.utoronto.ca/education/browse.php SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=2943</id>
		<title>Ansys</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Ansys&amp;diff=2943"/>
		<updated>2021-02-11T18:00:52Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.ansys.com/ Ansys] engineering simulation tools are installed in both the Niagara and CC software stacks.&lt;br /&gt;
&lt;br /&gt;
=Getting a license=&lt;br /&gt;
Licenses are provided by [http://www.cmc.ca CMC Microsystems]. Canadian students and faculty can register at [https://www.cmc.ca/en/MyAccount/GetAccount.aspx this page].&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you must contact CMC and tell them you want to use the Ansys tools on Niagara, and give them your SciNet username.&lt;br /&gt;
&lt;br /&gt;
=Running using the Niagara installation=&lt;br /&gt;
&lt;br /&gt;
==Ansys 2020R2==&lt;br /&gt;
Commercial modules can only be accessed using the 'module use' command.&lt;br /&gt;
&lt;br /&gt;
 module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
 module load ansys/2020r2&lt;br /&gt;
&lt;br /&gt;
Programs available:&lt;br /&gt;
&lt;br /&gt;
* fluent&lt;br /&gt;
* ansysedt&lt;br /&gt;
* mapdl&lt;br /&gt;
* ...&lt;br /&gt;
You can use the Ansys graphical tools to set up your problem, but you cannot use them to submit your job; the job must be submitted to the scheduler.&lt;br /&gt;
&lt;br /&gt;
==Setting up your .mw directory==&lt;br /&gt;
&lt;br /&gt;
Ansys will attempt to write to your $HOME/.mw directory.  This will work when you are testing your workflow on the login nodes, because they can write to $HOME.  However, recall that the compute nodes cannot write to the /home filesystem.  If you attempt to run Ansys from a compute node using the default configuration, it will fail because Ansys cannot write to $HOME/.mw.&lt;br /&gt;
&lt;br /&gt;
The solution is to create an alternative directory, $SCRATCH/.mw, and make a soft link from $HOME/.mw to it:&lt;br /&gt;
 mkdir $SCRATCH/.mw&lt;br /&gt;
 ln -s $SCRATCH/.mw $HOME/.mw&lt;br /&gt;
This will fool Ansys into thinking it is writing to $HOME/.mw, when in fact it is writing to $SCRATCH/.mw.  These commands only need to be run once.&lt;br /&gt;
&lt;br /&gt;
==Running ansys202==&lt;br /&gt;
&lt;br /&gt;
Example submission script for a job running on 1 node, with max walltime of 11 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
#SBATCH --time=11:00:00&lt;br /&gt;
#SBATCH --job-name test&lt;br /&gt;
&lt;br /&gt;
module use /scinet/niagara/software/commercial/modules&lt;br /&gt;
module load ansys/2020r2&lt;br /&gt;
&lt;br /&gt;
# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
machines=$(srun bash -c 'hostname -s' | sort | uniq | awk '{print $1 &amp;quot;:&amp;quot; 40}' | paste -s -d ':')&lt;br /&gt;
ansys202 -b -j JOBNAME -dis -machines &amp;quot;$machines&amp;quot; -i ansys.in&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
INPUTFILE=input.jou&lt;br /&gt;
fluent 2ddp -t &amp;quot;$PBS_NP&amp;quot; -cnf=&amp;quot;$PBS_NODEFILE&amp;quot; -mpi=intel -pib -pcheck -g -i &amp;quot;$INPUTFILE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Running using the CC installation=&lt;br /&gt;
&lt;br /&gt;
==Ansys 19.0==&lt;br /&gt;
To access the CC software stack you must unload the Niagara stack.&lt;br /&gt;
&lt;br /&gt;
 module load CCEnv StdEnv&lt;br /&gt;
 module load ansys/19.0&lt;br /&gt;
&lt;br /&gt;
You can run the script given in the previous section by substituting the previous module commands with the above two.&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Installing_your_own_Python_Modules&amp;diff=2879</id>
		<title>Installing your own Python Modules</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Installing_your_own_Python_Modules&amp;diff=2879"/>
		<updated>2020-12-05T17:26:43Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are many optional and conflicting packages for Python that users could potentially want (see e.g. http://pypi.python.org/pypi). Therefore, users need to install these additional packages locally in their home directories.  In fact, there is no choice, as users do not have permissions to install packages system-wide.&lt;br /&gt;
&lt;br /&gt;
Python provides a number of ways to install packages, the most common of which are the &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;conda&amp;lt;/tt&amp;gt; commands.  By default, these commands install packages in the same directory as the one in which the python executable lives, &lt;br /&gt;
but python provides a number of ways for users to install libraries in their home directories instead.  &lt;br /&gt;
&lt;br /&gt;
One way to do this with &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt; is the &amp;lt;tt&amp;gt;--user&amp;lt;/tt&amp;gt; option, but you shouldn't use it. That approach is now mostly superseded by virtual environments, and we do not recommend the &amp;lt;tt&amp;gt;--user&amp;lt;/tt&amp;gt; option as it can interfere with other Python environments.&lt;br /&gt;
&lt;br /&gt;
Virtual environments are a standard in Python to create isolated Python environments. This is useful when certain modules or certain versions of modules are not available in the default python environment.&lt;br /&gt;
&lt;br /&gt;
Virtual environments can be used either with the [[Python#Regular_Python | regular python modules]] or the [[Python#Intel_Python | intelpython/anaconda]] modules.&lt;br /&gt;
&lt;br /&gt;
== Using Virtualenv in Regular Python ==&lt;br /&gt;
&lt;br /&gt;
===Creation===&lt;br /&gt;
First load a python module, e.g.&lt;br /&gt;
&lt;br /&gt;
    module load NiaEnv/2019b python/3.6&lt;br /&gt;
&lt;br /&gt;
or&lt;br /&gt;
&lt;br /&gt;
    module load NiaEnv/2019b python/3.8&lt;br /&gt;
&lt;br /&gt;
Then create a directory for the virtual environments.&lt;br /&gt;
One can put a virtual environment anywhere, but this directory structure is recommended:&lt;br /&gt;
&lt;br /&gt;
    mkdir ~/.virtualenvs&lt;br /&gt;
    cd ~/.virtualenvs&lt;br /&gt;
&lt;br /&gt;
Now we create our first virtualenv, called &amp;lt;code&amp;gt;myenv&amp;lt;/code&amp;gt; (choose any name you like):&lt;br /&gt;
&lt;br /&gt;
    virtualenv --system-site-packages ~/.virtualenvs/myenv&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;--system-site-packages&amp;quot; flag will use the system-installed versions of packages rather than installing them anew (the list of these packages can be found on the [[Python]] wiki page).  This will result in fewer files created in your virtual environment.  After that you can activate that virtual environment:&lt;br /&gt;
&lt;br /&gt;
    source ~/.virtualenvs/myenv/bin/activate &lt;br /&gt;
&lt;br /&gt;
As you are in the virtualenv now, you can just type &amp;lt;code&amp;gt;pip install &amp;lt;required module&amp;gt;&amp;lt;/code&amp;gt; to install any module into your virtual environment.  &lt;br /&gt;
&lt;br /&gt;
To go back to the normal python installation simply type &lt;br /&gt;
&lt;br /&gt;
    deactivate&lt;br /&gt;
&lt;br /&gt;
===Command line and job usage===&lt;br /&gt;
&lt;br /&gt;
You need to activate the appropriate environment every time you log in, and at the start of all your job scripts.  However, the installation of packages only needs to be done once.  In the NiaEnv/2019b stack, it is *not* necessary to load the python module before activating the environment, while in the NiaEnv/2018a stack, you need to load the python module before activating the environment.  &lt;br /&gt;
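&lt;br /&gt;
For example, a minimal job script using a virtual environment could start as follows (the environment and script names are illustrative):&lt;br /&gt;
&lt;br /&gt;
    #!/bin/bash&lt;br /&gt;
    #SBATCH --nodes=1&lt;br /&gt;
    #SBATCH --time=1:00:00&lt;br /&gt;
    source ~/.virtualenvs/myenv/bin/activate&lt;br /&gt;
    python myscript.py&lt;br /&gt;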
&lt;br /&gt;
===Usage of your virtual environment by others===&lt;br /&gt;
&lt;br /&gt;
Sharing a virtual environment with another user is easy. As long as the directory containing the virtual environment is readable by that other user (which on Niagara is the default when that user is in the same group as the directory), then they simply have to source the activate file in the bin directory of that environment, e.g.&lt;br /&gt;
&lt;br /&gt;
    source /home/g/group/user/.virtualenvs/myenv/bin/activate&lt;br /&gt;
&lt;br /&gt;
===Usage in the Jupyter Hub===&lt;br /&gt;
&lt;br /&gt;
You can use your virtual environment in Niagara's [[Jupyter_Hub]], but there are two additional steps required to make the JupyterHub aware of your environment and offer it as one of the possible &amp;quot;kernels&amp;quot; for new notebooks.&lt;br /&gt;
&lt;br /&gt;
After having activated your environment, execute the following two commands&lt;br /&gt;
&lt;br /&gt;
    pip install ipykernel&lt;br /&gt;
    python -m ipykernel install --name NAME --user&lt;br /&gt;
    venv2jup&lt;br /&gt;
&lt;br /&gt;
The first command installs the packages needed to interface with jupyter as a kernel, the second puts an entry in the &amp;lt;tt&amp;gt;.share/jupyter&amp;lt;/tt&amp;gt; directory, in which the jupyterhub looks for possible kernels, and the final command corrects some paths and checks that everything is set up properly. This procedure works for NiaEnv/2019b, but may fail for NiaEnv/2018a.&lt;br /&gt;
&lt;br /&gt;
For conda environments that were installed in .conda/envs, the jupyter notebook should pick them up automatically.&lt;br /&gt;
&lt;br /&gt;
== Using Virtual Environments in Intelpython/Anaconda ==&lt;br /&gt;
&lt;br /&gt;
===Creation===&lt;br /&gt;
&lt;br /&gt;
One can use the same kind of virtual environments for the intelpython and conda modules as for regular modules. However,&lt;br /&gt;
environments are built into Anaconda, see [https://conda.io/docs/user-guide/tasks/manage-environments.html].  These &amp;quot;conda environments&amp;quot; are not the same as regular virtual environments, as they can contain general packages, such as compilers.  The latter feature means that conda environments are much more flexible, but also that they do not cooperate well with other software modules on Niagara.  Therefore, you should always use regular virtual environments and pip on Niagara and not conda, unless you have a good reason not to. &lt;br /&gt;
&lt;br /&gt;
First, you just need to load a conda-like module, e.g.&lt;br /&gt;
&lt;br /&gt;
    module load NiaEnv/2019b intelpython3&lt;br /&gt;
&lt;br /&gt;
Then, you create a virtual environment&lt;br /&gt;
&lt;br /&gt;
    conda create -n myPythonEnv python=3.6&lt;br /&gt;
&lt;br /&gt;
(conda puts the environment in the directory &amp;lt;tt&amp;gt;$HOME/.conda/envs/myPythonEnv&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Next, you activate your conda environment:&lt;br /&gt;
&lt;br /&gt;
    source activate myPythonEnv&lt;br /&gt;
&lt;br /&gt;
At this point you are in your own environment and can just do the installation of any package that you need, e.g.&lt;br /&gt;
&lt;br /&gt;
    pip install myFAVpackage&lt;br /&gt;
&lt;br /&gt;
or&lt;br /&gt;
    conda install myFAVpackage&lt;br /&gt;
&lt;br /&gt;
To go back to the normal python installation, type &lt;br /&gt;
    &lt;br /&gt;
    source deactivate&lt;br /&gt;
&lt;br /&gt;
===Command line and job usage===&lt;br /&gt;
&lt;br /&gt;
You need to load the intelpython/anaconda module and activate the appropriate environment every time you log in, and at the start of all your job scripts.  However, the installation of packages only needs to be done once. &lt;br /&gt;
&lt;br /&gt;
===Usage in the Jupyter Hub===&lt;br /&gt;
&lt;br /&gt;
You can use conda environments in Niagara's [[Jupyter_Hub]]. If they were installed in .conda/envs, the jupyter notebook should pick them up automatically.&lt;br /&gt;
&lt;br /&gt;
==Installing the Scientific Python Suite==&lt;br /&gt;
&lt;br /&gt;
For many scientific codes the packages ''numpy'', ''scipy'', ''matplotlib'', ''pandas'' and ''ipython'' are used.  Versions of these are already in the python modules (except for the regular python modules in the NiaEnv/2018a stack).&lt;br /&gt;
 &lt;br /&gt;
However, if you need different versions, you could start your virtual environment without &amp;lt;tt&amp;gt;--system-site-packages&amp;lt;/tt&amp;gt;.  In that case, for regular python modules, please install versions of packages with an &amp;lt;tt&amp;gt;intel-&amp;lt;/tt&amp;gt; prefix, if they exist, so that you will get the most optimized version of the package.&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=2753</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=2753"/>
		<updated>2020-08-15T02:37:24Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Down|Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Down|HPSS|HPSS}}&lt;br /&gt;
|{{Down|Mist|Mist}}&lt;br /&gt;
|{{Down|Teach|Teach}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Down|Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Down|Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Down|File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Down|Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Down|Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Down|External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Down|Globus|Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;August 14, 2020, 21:04 EST:&amp;lt;/b&amp;gt; Tomorrow's /scratch purge has been postponed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;August 14, 2020, 21:00 EST:&amp;lt;/b&amp;gt; Staff are at the datacenter. Looks like one of the pumps has a seal that is leaking badly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;August 14, 2020, 20:37 EST:&amp;lt;/b&amp;gt; We seem to be undergoing a thermal shutdown at the datacenter.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;August 14, 2020, 20:20 EST:&amp;lt;/b&amp;gt; Network problems to niagara/mist. We are investigating.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[SOSCIP_GPU | SOSCIP GPU cluster]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://support.scinet.utoronto.ca/education/browse.php SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=2676</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=2676"/>
		<updated>2020-06-29T17:03:52Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Down|Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Down|HPSS|HPSS}}&lt;br /&gt;
|{{Down|Mist|Mist}}&lt;br /&gt;
|{{Down|Teach|Teach}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Down|Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Down|Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Down|File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|{{Down|Burst Buffer|Burst_Buffer}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Down|Login Nodes|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Down|External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Down|Globus|Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;June 29, 2020, 12:30 PM:&amp;lt;/b&amp;gt; A power outage caused a thermal shutdown.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;June 20, 2020, 10:24 PM:&amp;lt;/b&amp;gt; File systems are back up.  Unfortunately, all running jobs would have died and users are asked to resubmit them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;June 20, 2020, 9:48 PM:&amp;lt;/b&amp;gt; An issue with the file systems is causing trouble.  We are investigating the cause.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;June 15, 2020, 10:30 PM:&amp;lt;/b&amp;gt; A &amp;lt;b&amp;gt;power glitch&amp;lt;/b&amp;gt; caused some compute nodes to be rebooted: jobs running at the time may have failed; users are asked to resubmit these jobs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[SOSCIP_GPU | SOSCIP GPU cluster]]&lt;br /&gt;
* [[Mist| Mist Power 9 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://support.scinet.utoronto.ca/education/browse.php SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[SSH#Two-Factor_authentication|Two-Factor Authentication]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=User_Ramdisk&amp;diff=2522</id>
		<title>User Ramdisk</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=User_Ramdisk&amp;diff=2522"/>
		<updated>2020-02-26T14:25:01Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;On the Niagara nodes a 'ramdisk' is available: up to 70 percent of a node's 202GB of RAM may be used as a temporary file system.  This is particularly useful in the early stages of migrating desktop-computing codes to a High Performance Computing platform such as Niagara, especially codes that perform a lot of file I/O (input/output).  Heavy I/O is a bottleneck in large-scale computing, especially on parallel file systems (such as the GPFS used on Niagara), since files are synchronized across the whole network.&lt;br /&gt;
&lt;br /&gt;
Ramdisk is much faster than real disk, and is especially beneficial for codes that perform a lot of small I/O operations, since the ramdisk requires no network traffic.  However, each node sees only its own ramdisk and cannot see the files on those of other nodes.  Nor can you see the ramdisks of the compute nodes from the login nodes. To track progress on a ramdisk, you have to [[SSH]] into the respective compute node.&lt;br /&gt;
&lt;br /&gt;
= Using Ramdisk =&lt;br /&gt;
&lt;br /&gt;
To use the ramdisk, create, write to, and read from files in /dev/shm/ just as you would in $SCRATCH.  Only the amount of RAM needed to store the files is taken up by the temporary file system. Thus if you have 40 serial jobs each requiring 1 GB of RAM, and 2GB is taken up by various OS services, you would still have approximately 140GB available to use as ramdisk on a 202GB node (the 70-percent limit). However, if each of those jobs were then to write 7 GB of data to the ramdisk (280GB in total), this would exceed the available memory and your jobs would crash.&lt;br /&gt;
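The memory accounting above can be sketched with a few lines of shell arithmetic; the figures (202GB node, 2GB for OS services, 40 serial jobs of 1GB each) are the hypothetical ones from this paragraph, not values queried from a node:

```shell
node_ram_gb=202   # total RAM on a Niagara node
os_gb=2           # taken up by OS services
njobs=40          # serial jobs on the node
job_gb=1          # RAM per job

# The ramdisk may use at most 70 percent of the node's RAM...
ramdisk_cap_gb=$(( node_ram_gb * 70 / 100 ))
# ...and no more than the RAM not already in use.
free_gb=$(( node_ram_gb - os_gb - njobs * job_gb ))

# The usable ramdisk is the smaller of the two limits.
usable_gb=$(( free_gb < ramdisk_cap_gb ? free_gb : ramdisk_cap_gb ))
echo "usable ramdisk: ${usable_gb}GB"   # about 140GB
```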
&lt;br /&gt;
Note that when using the ramdisk:&lt;br /&gt;
* At the start of your job, you can copy frequently accessed files to ramdisk (''stage in''). If there are many such files, it is beneficial to put them in a tar file.&lt;br /&gt;
* Periodically copy the output files from ramdisk to /scratch or /project, and do so once more at the end of the job (''stage out'').&lt;br /&gt;
* It is very important to delete your files from ramdisk at the end of your job.  If you do not do this, the next user of that node will have less RAM available than they might expect, and this might kill their job. &lt;br /&gt;
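The stage-in/stage-out pattern described in these notes can be sketched as a self-contained round trip. A temporary directory stands in for /dev/shm so the sketch can run anywhere; on Niagara you would use a directory such as /dev/shm/$USER/workdir instead, and the `tr` command is a stand-in for the real computation:

```shell
# Stand-ins for the ramdisk and the job's submit directory.
ramdisk=$(mktemp -d)
submit_dir=$(mktemp -d)

# Prepare a tarred input file, as the stage-in note above suggests.
echo "input data" > "$submit_dir/1.in"
tar -C "$submit_dir" -cf "$submit_dir/input.tar" 1.in

# Stage in: unpack the inputs on the "ramdisk".
tar -C "$ramdisk" -xf "$submit_dir/input.tar"

# Stand-in for the real computation: produce an output file.
tr a-z A-Z < "$ramdisk/1.in" > "$ramdisk/1.out"

# Stage out: pack the outputs and copy them back to the submit dir.
tar -C "$ramdisk" -cf "$submit_dir/out.tar" 1.out

# Clean up the "ramdisk" so the next job sees the full memory.
rm -rf "$ramdisk"
```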
&lt;br /&gt;
== A simple example ==&lt;br /&gt;
&lt;br /&gt;
A simple script using the ramdisk for 40 serial jobs in a 4 hour window might look like this:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
#SBATCH --time=4:00:00&lt;br /&gt;
#SBATCH --job-name ramdisk-example&lt;br /&gt;
&lt;br /&gt;
workdir=&amp;quot;/dev/shm/$USER/workdir&amp;quot;&lt;br /&gt;
&lt;br /&gt;
mkdir -p $workdir&lt;br /&gt;
&lt;br /&gt;
cp $SLURM_SUBMIT_DIR/* $workdir&lt;br /&gt;
&lt;br /&gt;
cd $workdir&lt;br /&gt;
&lt;br /&gt;
for ((i=1;i&amp;lt;=40;i++)); do&lt;br /&gt;
  ./executable &amp;lt; $i.in &amp;gt; $i.out &amp;amp;&lt;br /&gt;
done&lt;br /&gt;
wait&lt;br /&gt;
&lt;br /&gt;
tar cf $SLURM_SUBMIT_DIR/out.tar *.out&lt;br /&gt;
rm -r $workdir&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Often collections of serial jobs are run on the ramdisk; see the [[Running_Serial_Jobs_on_Niagara | serial run wiki page]] for more details.&lt;br /&gt;
&lt;br /&gt;
==A more complex example==&lt;br /&gt;
&lt;br /&gt;
A more complete script, using the ramdisk in a 1-day OpenMP job that saves output periodically, might look like this:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --cpus-per-task=40&lt;br /&gt;
#SBATCH --time=24:00:00&lt;br /&gt;
#SBATCH --job-name ramdisk-test&lt;br /&gt;
&lt;br /&gt;
#Job parameters:&lt;br /&gt;
execname=job          # name of the executable&lt;br /&gt;
input_tar=input.tar   # tar file with input files and executables&lt;br /&gt;
output_tar=out.tar    # file in which to store output&lt;br /&gt;
input_subdir=indir    # sub-directory (within input_tar) with input files&lt;br /&gt;
output_subdir=outdir  # sub-directory to contain the output files&lt;br /&gt;
poll_period=60        # how often to check for job completion (in seconds)&lt;br /&gt;
save_period=120       # how often to save output (in minutes)&lt;br /&gt;
&lt;br /&gt;
#Track how long everything takes.&lt;br /&gt;
date&lt;br /&gt;
&lt;br /&gt;
#Copy to ramdisk&lt;br /&gt;
echo &amp;quot;Stage-in: copying files to ramdisk directory /dev/shm/$USER&amp;quot;&lt;br /&gt;
mkdir -p /dev/shm/$USER/$output_subdir&lt;br /&gt;
cd /dev/shm/$USER&lt;br /&gt;
cp $SLURM_SUBMIT_DIR/$input_tar .&lt;br /&gt;
tar xf $input_tar&lt;br /&gt;
rm -rf $input_tar&lt;br /&gt;
&lt;br /&gt;
#Track how long everything takes.&lt;br /&gt;
echo -n &amp;quot;Stage-in completed on &amp;quot;&lt;br /&gt;
date&lt;br /&gt;
&lt;br /&gt;
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK&lt;br /&gt;
&lt;br /&gt;
#Run on ramdisk&lt;br /&gt;
echo &amp;quot;Starting job&amp;quot;&lt;br /&gt;
./$execname $input_subdir $output_subdir &amp;amp;&lt;br /&gt;
# Store the process id in $pid so we may check if it's still running:&lt;br /&gt;
pid=$!&lt;br /&gt;
&lt;br /&gt;
#Note:&lt;br /&gt;
# 1. The above launching command is appropriate for a multi-threaded (OpenMP) application.&lt;br /&gt;
# 2. Ramdisk MPI jobs are limited to 1 node as /dev/shm is not shared across nodes.&lt;br /&gt;
# 3. For serial jobs, you'd want to start 40 jobs at the same time instead, e.g.&lt;br /&gt;
#     mkdir -p $output_subdir/1&lt;br /&gt;
#     ./$execname ${input_subdir}/1 ${output_subdir}/1 &amp;amp;&lt;br /&gt;
#     pid=$!&lt;br /&gt;
#     mkdir -p $output_subdir/2&lt;br /&gt;
#     ./$execname ${input_subdir}/2 ${output_subdir}/2 &amp;amp;&lt;br /&gt;
#     pid=$pid,$!&lt;br /&gt;
#&lt;br /&gt;
#     etc.&lt;br /&gt;
#  &lt;br /&gt;
#     mkdir -p $output_subdir/40&lt;br /&gt;
#     ./$execname ${input_subdir}/40 ${output_subdir}/40 &amp;amp;&lt;br /&gt;
#     pid=$pid,$!&lt;br /&gt;
&lt;br /&gt;
#Track how long everything takes.&lt;br /&gt;
echo -n &amp;quot;Job started on &amp;quot;&lt;br /&gt;
date&lt;br /&gt;
&lt;br /&gt;
function save_results {    &lt;br /&gt;
    echo -n &amp;quot;Copying from directory $output_subdir to file $SLURM_SUBMIT_DIR/$output_tar on &amp;quot;&lt;br /&gt;
    date&lt;br /&gt;
    tar cf $output_tar $output_subdir/*&lt;br /&gt;
    cp $output_tar $SLURM_SUBMIT_DIR&lt;br /&gt;
    echo -n &amp;quot;Copying of output complete on &amp;quot;&lt;br /&gt;
    date&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function cleanup_ramdisk {&lt;br /&gt;
    echo -n &amp;quot;Cleaning up ramdisk directory /dev/shm/$USER on &amp;quot;&lt;br /&gt;
    date&lt;br /&gt;
    rm -rf /dev/shm/$USER&lt;br /&gt;
    echo -n &amp;quot;done at &amp;quot;&lt;br /&gt;
    date&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function trap_term {&lt;br /&gt;
    echo -n &amp;quot;Trapped term (soft kill) signal on &amp;quot;&lt;br /&gt;
    date&lt;br /&gt;
    save_results&lt;br /&gt;
    cleanup_ramdisk&lt;br /&gt;
    exit&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function interruptible_sleep {&lt;br /&gt;
    # waits for a number of seconds&lt;br /&gt;
    # argument 1 = number of seconds&lt;br /&gt;
    # note: just doing `sleep $1' would not be interruptible!&lt;br /&gt;
    for m in `seq $1`; do  &lt;br /&gt;
        sleep 1&lt;br /&gt;
    done&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function is_running {&lt;br /&gt;
    # check if one or more processes are running &lt;br /&gt;
    # argument 1 = a comma-separated list of PIDs (no spaces)&lt;br /&gt;
    ps -p $1 -o pid= | wc -l&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
#trap the termination signal, and call the function 'trap_term' when &lt;br /&gt;
# that happens, so results may be saved.&lt;br /&gt;
trap &amp;quot;trap_term&amp;quot; TERM&lt;br /&gt;
&lt;br /&gt;
#number of pollings per save period (rounded down):&lt;br /&gt;
npoll=$(($save_period*60/$poll_period))&lt;br /&gt;
&lt;br /&gt;
#polling and saving loop&lt;br /&gt;
running=$(is_running $pid)&lt;br /&gt;
while [ $running -gt 0 ]&lt;br /&gt;
do&lt;br /&gt;
    for n in `seq $npoll`&lt;br /&gt;
    do&lt;br /&gt;
        interruptible_sleep $poll_period&lt;br /&gt;
        running=$(is_running $pid)&lt;br /&gt;
        if [ $running -eq 0 ]; then&lt;br /&gt;
            break&lt;br /&gt;
        fi&lt;br /&gt;
    done&lt;br /&gt;
    save_results&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
#Done&lt;br /&gt;
cleanup_ramdisk&lt;br /&gt;
&lt;br /&gt;
echo -n &amp;quot;Job finished cleanly on &amp;quot;&lt;br /&gt;
date&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Notes with this script:&lt;br /&gt;
* The script assumes that the tar file &amp;lt;tt&amp;gt;input.tar&amp;lt;/tt&amp;gt; contains the executable &amp;lt;tt&amp;gt;job&amp;lt;/tt&amp;gt; and the input files in a subdirectory called &amp;lt;tt&amp;gt;indir&amp;lt;/tt&amp;gt; (with further subdirectories, 1 through 40, for the case of 40 serial jobs).&lt;br /&gt;
* The executable is supposed to take the locations of the input and output directory as arguments.&lt;br /&gt;
* The trap command makes sure that the results get saved and the ramdisk gets cleaned up even when the job gets killed before the end of the script is reached.  &amp;lt;tt&amp;gt;trap&amp;lt;/tt&amp;gt; is a bash construct that executes the given command when the script receives, in this case, a TERM signal.  The TERM signal is sent by the scheduler 30 seconds before your time is up.&lt;br /&gt;
* You could also [[Using_Signals|trap signals in your C, C++ or FORTRAN codes]].&lt;br /&gt;
* All files are kept in a subdirectory of &amp;lt;tt&amp;gt;/dev/shm&amp;lt;/tt&amp;gt;. This makes the clean up simpler, and keeps things tidy when doing small test jobs on the development nodes.&lt;br /&gt;
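The polling loop at the heart of this script can be reduced to a minimal, self-contained sketch. A short `sleep` stands in for the real computation and a counter stands in for `save_results`; the `is_running` test is the same as in the script above:

```shell
# Background "computation" whose PID we poll, as in the script above.
sleep 2 &
pid=$!

# Same test as the script's is_running function: count matching PIDs.
is_running() { ps -p "$1" -o pid= | wc -l; }

saves=0
while [ "$(is_running "$pid")" -gt 0 ]; do
    sleep 1                 # a poll_period of 1 second
    saves=$((saves + 1))    # stand-in for save_results
done
echo "performed $saves interim saves"
```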
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
--[[User:Rzon|Rzon]] 18 June 2010 (UTC)&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Mist&amp;diff=2486</id>
		<title>Mist</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Mist&amp;diff=2486"/>
		<updated>2020-02-12T22:01:43Z</updated>

		<summary type="html">&lt;p&gt;Nolta: /* Available compilers and interpreters */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox Computer&lt;br /&gt;
|image=[[Image:Mist.jpg|center|300px|thumb]]&lt;br /&gt;
|name=Mist&lt;br /&gt;
|installed=Dec 2019&lt;br /&gt;
|operatingsystem= Red Hat Enterprise Linux 7.6 &lt;br /&gt;
|loginnode= mist.scinet.utoronto.ca&lt;br /&gt;
|nnodes=  54 IBM AC922&lt;br /&gt;
|rampernode= 256 GB  &lt;br /&gt;
|gpuspernode=4 V100-SMX2-32GB&lt;br /&gt;
|interconnect=Mellanox EDR&lt;br /&gt;
|vendorcompilers= NVCC, IBM XL&lt;br /&gt;
|queuetype=Slurm&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
=Specifications=&lt;br /&gt;
Mist is a SciNet-[[#SOSCIP Users |SOSCIP]] joint GPU cluster consisting of 54 IBM AC922 servers. Each node of the cluster has 32 IBM Power9 cores, 256GB RAM, and 4 NVIDIA V100-SMX2-32GB GPUs connected by NVLink. The cluster has an InfiniBand EDR interconnect providing GPU-Direct RDMA capability.&lt;br /&gt;
&lt;br /&gt;
= Getting started on Mist =&lt;br /&gt;
In the near future, Mist will be accessible directly:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -Y MYCCUSERNAME@mist.scinet.utoronto.ca&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For now, however, you must access the Mist login node '''mist-login01''' via the Niagara cluster:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -Y MYCCUSERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
ssh -Y mist-login01&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Storage ==&lt;br /&gt;
The filesystem for Mist is shared with the Niagara cluster. See [https://docs.scinet.utoronto.ca/index.php/Niagara_Quickstart#Your_various_directories Niagara Storage] for more details.&lt;br /&gt;
&lt;br /&gt;
= Loading software modules =&lt;br /&gt;
&lt;br /&gt;
You have two options for running code on Mist: use existing software, or compile your own.  This section focuses on the former.&lt;br /&gt;
&lt;br /&gt;
Other than essentials, all installed software is made available [[Using_modules | using module commands]]. These modules set environment variables (PATH, etc.), allowing multiple, conflicting versions of a given package to be available.  A detailed explanation of the module system can be [[Using_modules | found on the modules page]].&lt;br /&gt;
&lt;br /&gt;
Common module subcommands are:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module load &amp;lt;module-name&amp;gt;&amp;lt;/code&amp;gt;: load the default version of a particular software package.&lt;br /&gt;
* &amp;lt;code&amp;gt;module load &amp;lt;module-name&amp;gt;/&amp;lt;module-version&amp;gt;&amp;lt;/code&amp;gt;: load a specific version of a particular software package.&lt;br /&gt;
* &amp;lt;code&amp;gt;module purge&amp;lt;/code&amp;gt;: unload all currently loaded modules.&lt;br /&gt;
* &amp;lt;code&amp;gt;module spider&amp;lt;/code&amp;gt; (or &amp;lt;code&amp;gt;module spider &amp;lt;module-name&amp;gt;&amp;lt;/code&amp;gt;): list available software packages.&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;: list loadable software packages.&lt;br /&gt;
* &amp;lt;code&amp;gt;module list&amp;lt;/code&amp;gt;: list loaded modules.&lt;br /&gt;
&lt;br /&gt;
Along with modifying common environment variables such as PATH and LD_LIBRARY_PATH, these modules also create a SCINET_MODULENAME_ROOT environment variable, which can be used to access commonly needed software directories, such as /include and /lib.&lt;br /&gt;
&lt;br /&gt;
There are handy abbreviations for the module commands. &amp;lt;code&amp;gt;ml&amp;lt;/code&amp;gt; is the same as &amp;lt;code&amp;gt;module list&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;ml &amp;lt;module-name&amp;gt;&amp;lt;/code&amp;gt; is the same as &amp;lt;code&amp;gt;module load &amp;lt;module-name&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
== Tips for loading software ==&lt;br /&gt;
&lt;br /&gt;
* We advise '''''against''''' loading modules in your .bashrc.  This can lead to very confusing behaviour under certain circumstances.  Our guidelines for .bashrc files can be found [[bashrc guidelines|here]].&lt;br /&gt;
* Instead, load modules by hand when needed, or by sourcing a separate script.&lt;br /&gt;
* Load run-specific modules inside your job submission script.&lt;br /&gt;
* Short names give default versions; e.g. &amp;lt;code&amp;gt;cuda&amp;lt;/code&amp;gt; → &amp;lt;code&amp;gt;cuda/10.1.243&amp;lt;/code&amp;gt;. It is usually better to be explicit about the versions, for future reproducibility.&lt;br /&gt;
* Modules often require other modules to be loaded first.  Solve these dependencies by using [[Using_modules#Module_spider | &amp;lt;code&amp;gt;module spider&amp;lt;/code&amp;gt;]].&lt;br /&gt;
&lt;br /&gt;
= Available compilers and interpreters =&lt;br /&gt;
* The &amp;lt;tt&amp;gt;cuda&amp;lt;/tt&amp;gt; module has to be loaded first for GPU software.&lt;br /&gt;
* For most compiled software, one should use the GNU compilers (&amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; for C, &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt; for C++, and &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt; for Fortran). Loading the &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; module makes these available.&lt;br /&gt;
* The IBM XL compiler suite (&amp;lt;tt&amp;gt;xlc_r, xlc++_r, xlf_r&amp;lt;/tt&amp;gt;) is also available, if you load one of the &amp;lt;tt&amp;gt;xl&amp;lt;/tt&amp;gt; modules.&lt;br /&gt;
* To compile MPI code, you must additionally load an &amp;lt;tt&amp;gt;openmpi&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;spectrum-mpi&amp;lt;/tt&amp;gt; module.&lt;br /&gt;
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
&lt;br /&gt;
The currently installed CUDA Toolkits are '''10.1.243''' and '''10.2.89 (default)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/&amp;lt;version&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
*A compiler (GCC, XL or PGI) module must be loaded in order to use CUDA to build any code.&lt;br /&gt;
The current NVIDIA driver version is 440.33.01.&lt;br /&gt;
&lt;br /&gt;
===GNU Compilers ===&lt;br /&gt;
&lt;br /&gt;
Available GCC modules are:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gcc/7.5.0&lt;br /&gt;
gcc/8.3.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== IBM XL Compilers ===&lt;br /&gt;
&lt;br /&gt;
To load the native IBM xlc/xlc++ and xlf (Fortran) compilers, run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load xl/16.1.1.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IBM XL Compilers are enabled for use with NVIDIA GPUs, including support for OpenMP GPU offloading and integration with NVIDIA's nvcc command to compile host-side code for the POWER9 CPU. Information about the IBM XL Compilers can be found at the following links:[https://www.ibm.com/support/knowledgecenter/SSXVZZ_16.1.1/com.ibm.compilers.linux.doc/welcome.html IBM XL C/C++], &lt;br /&gt;
[https://www.ibm.com/support/knowledgecenter/SSAT4T_16.1.1/com.ibm.compilers.linux.doc/welcome.html IBM XL Fortran]&lt;br /&gt;
&lt;br /&gt;
=== OpenMPI ===&lt;br /&gt;
The &amp;lt;tt&amp;gt;openmpi/&amp;lt;version&amp;gt;&amp;lt;/tt&amp;gt; module is available with different compilers, including GCC and XL. The &amp;lt;tt&amp;gt;spectrum-mpi/&amp;lt;version&amp;gt;&amp;lt;/tt&amp;gt; module provides IBM Spectrum MPI.&lt;br /&gt;
&lt;br /&gt;
=== PGI ===&lt;br /&gt;
To load the PGI compiler and its own OpenMPI environment, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load pgi/19.10&lt;br /&gt;
module load pgi-openmpi/3.1.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Software =&lt;br /&gt;
== Anaconda (Python) ==&lt;br /&gt;
Anaconda is a popular distribution of the Python programming language. It contains several common Python libraries, such as SciPy and NumPy, as pre-built packages, which eases installation. Anaconda is provided as the '''anaconda3''' module.&lt;br /&gt;
&lt;br /&gt;
To use Anaconda, users need to load the module and create a conda environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
conda create -n myPythonEnv python=3.7&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: By default, conda environments are located in '''$HOME/.conda/envs'''. The cache (downloaded tarballs and packages) is under '''$HOME/.conda/pkgs'''. Users may run into disk quota problems if too many environments are created. To clean the conda cache, '''please run &amp;quot;conda clean -y --all&amp;quot; and &amp;quot;rm -rf $HOME/.conda/pkgs/*&amp;quot; after installing packages'''.&lt;br /&gt;
&lt;br /&gt;
To activate the conda environment: (should be activated before running python)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
source activate myPythonEnv&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that you SHOULD NOT use '''conda activate myPythonEnv''' to activate the environment; this leads to all sorts of problems.  Once the environment is activated, users can update or install packages via '''conda''' or '''pip''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install  &amp;lt;package_name&amp;gt; (preferred way to install packages)&lt;br /&gt;
pip install &amp;lt;package_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To deactivate:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
source deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To remove a conda environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda remove --name myPythonEnv --all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To verify that the environment was removed, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda info --envs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting Python Job ===&lt;br /&gt;
A single-gpu job example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
#SBATCH --time=1:00:0&lt;br /&gt;
#SBATCH -A &amp;lt;SOSCIP_PROJECT_ID&amp;gt; #For SOSCIP projects only&lt;br /&gt;
&lt;br /&gt;
module load anaconda3&lt;br /&gt;
source activate myPythonEnv&lt;br /&gt;
python code.py ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== CuPy ==&lt;br /&gt;
[https://cupy.chainer.org CuPy] is an open-source matrix library accelerated with NVIDIA CUDA. It also uses CUDA-related libraries, including cuBLAS, cuDNN, cuRand, cuSolver, cuSPARSE, cuFFT and NCCL, to make full use of the GPU architecture. CuPy is an implementation of a NumPy-compatible multi-dimensional array on CUDA, consisting of the core multi-dimensional array class, cupy.ndarray, and many functions on it. It supports a subset of the numpy.ndarray interface.&lt;br /&gt;
&lt;br /&gt;
CuPy can be installed into any conda environment. The Python packages numpy, six and fastrlock are required; cuDNN and NCCL are optional.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3/2019.10 cuda/10.2.89 gcc/7.5.0 cudnn/7.6.5.32  nccl/2.5.6 &lt;br /&gt;
conda create -n cupy-env python=3.7 numpy six fastrlock&lt;br /&gt;
source activate cupy-env&lt;br /&gt;
CFLAGS=&amp;quot;-I$SCINET_CUDNN_ROOT/include -I$SCINET_NCCL_ROOT/include -I$SCINET_CUDA_ROOT/include&amp;quot; LDFLAGS=&amp;quot;-L$SCINET_CUDNN_ROOT/lib64 -L$SCINET_NCCL_ROOT/lib&amp;quot; CUDA_PATH=$SCINET_CUDA_ROOT pip install cupy&lt;br /&gt;
#building/installing CuPy will take a few minutes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== IBM Watson Machine Learning Community Edition (PowerAI) ==&lt;br /&gt;
[https://developer.ibm.com/linuxonpower/deep-learning-powerai/releases/ IBM Watson Machine Learning Community Edition (PowerAI)] contains many popular ML packages, including TensorFlow, PyTorch, XGBoost and RAPIDS. It is distributed through the IBM Conda channel. To install packages from PowerAI, users need to specify the IBM Conda channel when using Anaconda.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
&lt;br /&gt;
conda create --name wmlce_env -c https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda &amp;lt;package_name&amp;gt; (e.g. powerai, tensorflow-gpu, keras, pytorch, powerai-rapids, py-xgboost-gpu,  etc)&lt;br /&gt;
&lt;br /&gt;
source activate wmlce_env &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
*The WML CE Early Access Conda channel (https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda-early-access/) makes new versions of frameworks available in advance of formal WML CE releases. Easy upgrade between packages in the main and Early Access channels is not guaranteed. Using a separate conda environment for Early Access packages is recommended.&lt;br /&gt;
&lt;br /&gt;
== NAMD ==&lt;br /&gt;
[http://www.ks.uiuc.edu/Research/namd/ NAMD] is a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems.&lt;br /&gt;
=== v2.13 ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/10.2.89 gcc/7.5.0 fftw/3.3.8 spectrum-mpi/10.3.1  namd/2.13&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Running with one process per node====&lt;br /&gt;
An example job script (using 1 node, '''one process per node''', 32 CPU threads per process + 4 GPUs per process):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=20:00&lt;br /&gt;
#SBATCH --gpus-per-node=4&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH -p compute_full_node&lt;br /&gt;
&lt;br /&gt;
module load cuda/10.2.89 gcc/7.5.0 fftw/3.3.8 spectrum-mpi/10.3.1  namd/2.13&lt;br /&gt;
scontrol show hostnames &amp;gt; nodelist-$SLURM_JOB_ID&lt;br /&gt;
&lt;br /&gt;
`which charmrun` -npernode 1 -hostfile nodelist-$SLURM_JOB_ID `which namd2` +setcpuaffinity +pemap 0-127:4 +idlepoll +ppn 32 +p $((32*SLURM_NTASKS)) stmv.namd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Running with one process per GPU ====&lt;br /&gt;
NAMD may scale better with '''one process per GPU'''. Please run your own benchmarks.&lt;br /&gt;
An example job script (using 1 node, '''one process per GPU''', 8 CPU threads per process):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=20:00&lt;br /&gt;
#SBATCH --gpus-per-node=4&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH -p compute_full_node&lt;br /&gt;
&lt;br /&gt;
module load cuda/10.2.89 gcc/7.5.0 fftw/3.3.8 spectrum-mpi/10.3.1  namd/2.13&lt;br /&gt;
scontrol show hostnames &amp;gt; nodelist-$SLURM_JOB_ID&lt;br /&gt;
&lt;br /&gt;
`which charmrun` -npernode 4 -hostfile nodelist-$SLURM_JOB_ID `which namd2` +setcpuaffinity +pemap 0-127:4 +idlepoll +ppn 8 +p $((8*SLURM_NTASKS)) stmv.namd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== PyTorch ==&lt;br /&gt;
=== Installing from IBM Conda Channel ===&lt;br /&gt;
The easiest way to install PyTorch on Mist is from IBM's Conda channel. Users need to prepare a conda environment with Python 3.6 or 3.7 and install PyTorch from that channel.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
conda create -n pytorch_env python=3.7&lt;br /&gt;
source activate pytorch_env&lt;br /&gt;
conda install -c https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/ pytorch &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once the installation finishes, please clean the cache:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda clean -y --all&lt;br /&gt;
rm -rf $HOME/.conda/pkgs/*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RAPIDS ==&lt;br /&gt;
The [https://rapids.ai RAPIDS] is a suite of open source software libraries that gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. The RAPIDS data science framework includes a collection of libraries: '''cuDF(GPU DataFrames)''', '''cuML(GPU Machine Learning Algorithms)''', '''cuStrings(GPU String Manipulation)''', etc.&lt;br /&gt;
&lt;br /&gt;
=== Installing from IBM Conda Channel ===&lt;br /&gt;
The easiest way to install RAPIDS on Mist is from IBM's Conda channel. Users need to prepare a conda environment with Python 3.6 or 3.7 and install powerai-rapids from that channel.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
conda create -n rapids_env python=3.7&lt;br /&gt;
source activate rapids_env&lt;br /&gt;
conda install -c https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/ powerai-rapids&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once the installation finishes, please clean the cache:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda clean -y --all&lt;br /&gt;
rm -rf $HOME/.conda/pkgs/*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TensorFlow and Keras ==&lt;br /&gt;
=== Installing from IBM Conda Channel ===&lt;br /&gt;
The easiest way to install TensorFlow and Keras on Mist is from IBM's Conda channel. Users need to prepare a conda environment with Python 3.6 or 3.7 and install tensorflow-gpu from that channel.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
conda create -n tf_env python=3.7&lt;br /&gt;
source activate tf_env&lt;br /&gt;
conda install -c https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/ tensorflow-gpu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once the installation finishes, please clean the cache:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda clean -y --all&lt;br /&gt;
rm -rf $HOME/.conda/pkgs/*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Testing and debugging =&lt;br /&gt;
You should test your code before submitting it to the cluster, both to verify that it is correct and to find out what resources you need.&lt;br /&gt;
* Small test jobs can be run on the login node.  Rule of thumb: tests should run for no more than a couple of minutes, take at most about 1-2GB of memory, and use no more than one GPU and a few cores.&lt;br /&gt;
&amp;lt;!-- * You can run the [[Parallel Debugging with DDT|DDT]] debugger on the login nodes after &amp;lt;code&amp;gt;module load ddt&amp;lt;/code&amp;gt;. --&amp;gt;&lt;br /&gt;
* For short tests that do not fit on a login node, or for which you need a dedicated node, request an interactive debug job with the debugjob command:&lt;br /&gt;
 mist-login01:~$ debugjob --clean -g G&lt;br /&gt;
where G is the number of GPUs. If G=1, this gives an interactive session for 2 hours; G=4 gets you a single node with 4 GPUs for 30 minutes; and G=8 (the maximum) gets you 2 nodes, each with 4 GPUs, for 30 minutes.  The &amp;lt;tt&amp;gt;--clean&amp;lt;/tt&amp;gt; argument is optional but recommended, as it starts the session without any modules loaded, thus mimicking more closely what happens when you submit a job script.&lt;br /&gt;
&lt;br /&gt;
= Submitting jobs =&lt;br /&gt;
Once you have compiled and tested your code or workflow on the Mist login nodes, and confirmed that it behaves correctly, you are ready to submit jobs to the cluster.  Your jobs will run on some of Mist's 53 compute nodes.  When and where your job runs is determined by the scheduler.&lt;br /&gt;
&lt;br /&gt;
Mist uses SLURM as its job scheduler. It is configured to allow only '''Single-GPU jobs''' and '''Full-node jobs (4 GPUs per node)'''.&lt;br /&gt;
&lt;br /&gt;
You submit jobs from a login node by passing a script to the sbatch command:&lt;br /&gt;
&lt;br /&gt;
mist-login01:scratch$ sbatch jobscript.sh&lt;br /&gt;
&lt;br /&gt;
This puts the job in the queue. It will run on the compute nodes in due course. In most cases, you should not submit from your $HOME directory, but rather from your $SCRATCH directory, so that the output of your compute job can be written out ($HOME is read-only on the compute nodes).&lt;br /&gt;
&lt;br /&gt;
Example job scripts can be found below.&lt;br /&gt;
Keep in mind:&lt;br /&gt;
* Scheduling is by single GPU or by full node, so request either 1 GPU or all 4 GPUs of a node.&lt;br /&gt;
* Your job's maximum walltime is 24 hours. &lt;br /&gt;
* Jobs must write their output to your scratch or project directory (home is read-only on compute nodes).&lt;br /&gt;
* Compute nodes have no internet access.&lt;br /&gt;
* Your job script will not remember the modules you have loaded, so it needs to contain &amp;quot;module load&amp;quot; commands for all the required modules (see examples below). &lt;br /&gt;
== SOSCIP Users ==&lt;br /&gt;
*[https://www.soscip.org SOSCIP] is a consortium that brings together industrial partners and academic researchers, providing them with sophisticated advanced computing technologies and expertise to solve social, technical and business challenges across sectors and drive economic growth.&lt;br /&gt;
&lt;br /&gt;
If you are working on a SOSCIP project, please contact [mailto:soscip-support@scinet.utoronto.ca soscip-support@scinet.utoronto.ca] to have your user account added to the SOSCIP project accounts. SOSCIP users need to submit jobs with an additional SLURM flag to get higher priority:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH -A &amp;lt;SOSCIP_PROJECT_ID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Single-GPU job script ==&lt;br /&gt;
For a single-GPU job, each job receives a quarter of a node: 1 GPU, 8 CPU cores (32 hardware threads), and ~58GB of CPU memory. '''Users should never request CPUs or memory explicitly.''' If running an MPI program, set --ntasks to the number of MPI ranks. NVIDIA Multi-Process Service (MPS) is suggested when running multiple MPI ranks on one GPU.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
#SBATCH --time=1:00:0&lt;br /&gt;
#SBATCH -A &amp;lt;SOSCIP_PROJECT_ID&amp;gt; #For SOSCIP projects only&lt;br /&gt;
&lt;br /&gt;
module load anaconda3&lt;br /&gt;
source activate conda_env&lt;br /&gt;
python code.py ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Full-node job script ==&lt;br /&gt;
Multi-GPU jobs must request a minimum of one full node. You need to specify the &amp;quot;compute_full_node&amp;quot; partition in order to get all resources on a node.&lt;br /&gt;
*An example for a 2-node, 8-rank OpenMPI job: (Each rank binds to 1 GPU and 8 physical CPU cores in this case)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --gpus-per-node=4&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH -p compute_full_node&lt;br /&gt;
#SBATCH -A &amp;lt;SOSCIP_PROJECT_ID&amp;gt; #For SOSCIP projects only&lt;br /&gt;
&lt;br /&gt;
module load cuda/10.2.89 gcc/8.3.0 openmpi/3.1.5&lt;br /&gt;
&lt;br /&gt;
mpirun -bind-to core -map-by slot:PE=8 -report-bindings ./program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Support =&lt;br /&gt;
&lt;br /&gt;
SciNet inquiries:&lt;br /&gt;
* [mailto:support@scinet.utoronto.ca support@scinet.utoronto.ca]&lt;br /&gt;
* [mailto:niagara@computecanada.ca niagara@computecanada.ca]&lt;br /&gt;
&lt;br /&gt;
SOSCIP inquiries:&lt;br /&gt;
*[mailto:soscip-support@scinet.utoronto.ca soscip-support@scinet.utoronto.ca]&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Mist&amp;diff=2472</id>
		<title>Mist</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Mist&amp;diff=2472"/>
		<updated>2020-02-12T17:24:36Z</updated>

		<summary type="html">&lt;p&gt;Nolta: /* Full-node job script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox Computer&lt;br /&gt;
|image=[[Image:Mist.jpg|center|300px|thumb]]&lt;br /&gt;
|name=Mist&lt;br /&gt;
|installed=Dec 2019&lt;br /&gt;
|operatingsystem= Red Hat Enterprise Linux 7.6 &lt;br /&gt;
|loginnode= mist.scinet.utoronto.ca&lt;br /&gt;
|nnodes=  54 IBM AC922&lt;br /&gt;
|rampernode= 256 GB  &lt;br /&gt;
|gpuspernode=4 V100-SMX2-32GB&lt;br /&gt;
|interconnect=Mellanox EDR&lt;br /&gt;
|vendorcompilers= IBM XL&lt;br /&gt;
|queuetype=Slurm&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
= Warning =&lt;br /&gt;
&lt;br /&gt;
'''Mist is in an early-user/beta testing phase. All instructions below are temporary and subject to change.'''&lt;br /&gt;
&lt;br /&gt;
=Specifications=&lt;br /&gt;
The Mist cluster is a GPU cluster of 54 IBM AC922 servers, each with 32 IBM Power9 cores and 4 NVIDIA V100-SMX2-32GB GPUs connected by NVLink. Each node of the cluster has 256GB RAM. The nodes are connected with InfiniBand EDR, providing GPU-Direct RDMA capability.&lt;br /&gt;
&lt;br /&gt;
= Getting started on Mist =&lt;br /&gt;
&lt;br /&gt;
Mist is currently in a testing phase. The Mist login node '''mist-login01''' can be accessed via the Niagara cluster:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -Y MYCCUSERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
ssh -Y mist-login01&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Storage ==&lt;br /&gt;
The filesystem for Mist is shared with the Niagara cluster. See [https://docs.scinet.utoronto.ca/index.php/Niagara_Quickstart#Your_various_directories Niagara Storage] for more details.&lt;br /&gt;
&lt;br /&gt;
= Loading software modules =&lt;br /&gt;
&lt;br /&gt;
You have two options for running code on Mist: use existing software, or compile your own.  This section focuses on the former.&lt;br /&gt;
&lt;br /&gt;
Other than essentials, all installed software is made available [[Using_modules | using module commands]]. These modules set environment variables (PATH, etc.), allowing multiple, conflicting versions of a given package to be available.  A detailed explanation of the module system can be [[Using_modules | found on the modules page]].&lt;br /&gt;
&lt;br /&gt;
Common module subcommands are:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module load &amp;lt;module-name&amp;gt;&amp;lt;/code&amp;gt;: load the default version of a particular software.&lt;br /&gt;
* &amp;lt;code&amp;gt;module load &amp;lt;module-name&amp;gt;/&amp;lt;module-version&amp;gt;&amp;lt;/code&amp;gt;: load a specific version of a particular software.&lt;br /&gt;
* &amp;lt;code&amp;gt;module purge&amp;lt;/code&amp;gt;: unload all currently loaded modules.&lt;br /&gt;
* &amp;lt;code&amp;gt;module spider&amp;lt;/code&amp;gt; (or &amp;lt;code&amp;gt;module spider &amp;lt;module-name&amp;gt;&amp;lt;/code&amp;gt;): list available software packages.&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;: list loadable software packages.&lt;br /&gt;
* &amp;lt;code&amp;gt;module list&amp;lt;/code&amp;gt;: list loaded modules.&lt;br /&gt;
&lt;br /&gt;
Along with modifying common environment variables, such as PATH and LD_LIBRARY_PATH, these modules also create a SCINET_MODULENAME_ROOT environment variable, which can be used to access commonly needed software directories, such as /include and /lib.&lt;br /&gt;
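&lt;br /&gt;
For example, such a variable can be used when compiling against a library (an illustrative sketch; substitute the modules and file names for your own code):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/10.2.89 gcc/8.3.0&lt;br /&gt;
gcc -I$SCINET_CUDA_ROOT/include -L$SCINET_CUDA_ROOT/lib64 -lcudart -o myprog myprog.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;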
&lt;br /&gt;
There are handy abbreviations for the module commands. &amp;lt;code&amp;gt;ml&amp;lt;/code&amp;gt; is the same as &amp;lt;code&amp;gt;module list&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;ml &amp;lt;module-name&amp;gt;&amp;lt;/code&amp;gt; is the same as &amp;lt;code&amp;gt;module load &amp;lt;module-name&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
== Tips for loading software ==&lt;br /&gt;
&lt;br /&gt;
* We advise '''''against''''' loading modules in your .bashrc.  This can lead to very confusing behaviour under certain circumstances.  Our guidelines for .bashrc files can be found [[bashrc guidelines|here]].&lt;br /&gt;
* Instead, load modules by hand when needed, or by sourcing a separate script.&lt;br /&gt;
* Load run-specific modules inside your job submission script.&lt;br /&gt;
* Short names give default versions; e.g. &amp;lt;code&amp;gt;cuda&amp;lt;/code&amp;gt; → &amp;lt;code&amp;gt;cuda/10.1.243&amp;lt;/code&amp;gt;. It is usually better to be explicit about the versions, for future reproducibility.&lt;br /&gt;
* Modules often require other modules to be loaded first.  Solve these dependencies by using [[Using_modules#Module_spider | &amp;lt;code&amp;gt;module spider&amp;lt;/code&amp;gt;]].&lt;br /&gt;
&lt;br /&gt;
= Available compilers and interpreters =&lt;br /&gt;
* The &amp;lt;tt&amp;gt;cuda&amp;lt;/tt&amp;gt; module has to be loaded first for GPU software.&lt;br /&gt;
* For most compiled software, one should use the GNU compilers (&amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; for C, &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt; for C++, and &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt; for Fortran). Loading an &amp;lt;tt&amp;gt;at&amp;lt;/tt&amp;gt; (IBM Advance Toolchain) or &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; module makes these available. &lt;br /&gt;
* The IBM XL compiler suite (&amp;lt;tt&amp;gt;xlc_r, xlc++_r, xlf_r&amp;lt;/tt&amp;gt;) is also available, if you load one of the &amp;lt;tt&amp;gt;xl&amp;lt;/tt&amp;gt; modules.&lt;br /&gt;
* To compile MPI code, you must additionally load an &amp;lt;tt&amp;gt;openmpi&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;spectrum-mpi&amp;lt;/tt&amp;gt; module.&lt;br /&gt;
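&lt;br /&gt;
For example, a simple MPI program could be compiled as follows (an illustrative sketch; choose the compiler and MPI module versions appropriate for your code):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load gcc/8.3.0 openmpi/3.1.5&lt;br /&gt;
mpicc -O2 -o hello_mpi hello_mpi.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;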
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
&lt;br /&gt;
The currently installed CUDA Toolkits are '''10.1.243''' and '''10.2.89 (default)'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/&amp;lt;version&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
*A compiler (GCC, XL or PGI) module must be loaded in order to use CUDA to build any code.&lt;br /&gt;
The current NVIDIA driver version is 440.33.01.&lt;br /&gt;
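&lt;br /&gt;
A minimal CUDA build might look like this (an illustrative sketch; &amp;lt;tt&amp;gt;sm_70&amp;lt;/tt&amp;gt; is the compute capability of the V100 GPUs):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/10.2.89 gcc/8.3.0&lt;br /&gt;
nvcc -arch=sm_70 -O2 -o saxpy saxpy.cu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;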
&lt;br /&gt;
===GNU Compilers ===&lt;br /&gt;
&lt;br /&gt;
Available GCC modules are:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gcc/7.5.0&lt;br /&gt;
gcc/8.3.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== IBM XL Compilers ===&lt;br /&gt;
&lt;br /&gt;
To load the native IBM xlc/xlc++ and xlf (Fortran) compilers, run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load xl/16.1.1.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IBM XL Compilers are enabled for use with NVIDIA GPUs, including support for OpenMP GPU offloading and integration with NVIDIA's nvcc command to compile host-side code for the POWER9 CPU. Information about the IBM XL Compilers can be found at the following links: [https://www.ibm.com/support/knowledgecenter/SSXVZZ_16.1.1/com.ibm.compilers.linux.doc/welcome.html IBM XL C/C++], &lt;br /&gt;
[https://www.ibm.com/support/knowledgecenter/SSAT4T_16.1.1/com.ibm.compilers.linux.doc/welcome.html IBM XL Fortran]&lt;br /&gt;
&lt;br /&gt;
=== OpenMPI ===&lt;br /&gt;
The &amp;lt;tt&amp;gt;openmpi/&amp;lt;version&amp;gt;&amp;lt;/tt&amp;gt; module is available with different compilers, including GCC and XL. The &amp;lt;tt&amp;gt;spectrum-mpi/&amp;lt;version&amp;gt;&amp;lt;/tt&amp;gt; module provides IBM Spectrum MPI.&lt;br /&gt;
&lt;br /&gt;
=== PGI ===&lt;br /&gt;
To load PGI compiler and its own OpenMPI environment, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load pgi/19.10&lt;br /&gt;
module load pgi-openmpi/3.1.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Software =&lt;br /&gt;
== Anaconda (Python) ==&lt;br /&gt;
Anaconda is a popular distribution of the Python programming language. It contains several common Python libraries such as SciPy and NumPy as pre-built packages, which eases installation. Anaconda is provided as the module '''anaconda3'''.&lt;br /&gt;
&lt;br /&gt;
To use Anaconda, load the module and create a conda environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
conda create -n myPythonEnv python=3.7&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: By default, conda environments are located in '''$HOME/.conda/envs''', and the cache (downloaded tarballs and packages) is under '''$HOME/.conda/pkgs'''. You may run into disk quota problems if too many environments are created. To clean the conda cache, '''please run &amp;quot;conda clean -y --all&amp;quot; and &amp;quot;rm -rf $HOME/.conda/pkgs/*&amp;quot; after installing packages'''.&lt;br /&gt;
&lt;br /&gt;
To activate the conda environment '''(it must be activated before running python)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
source activate myPythonEnv&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that you SHOULD NOT use '''conda activate myPythonEnv''' to activate the environment; this leads to all sorts of problems. Once the environment is activated, you can update or install packages via '''conda''' or '''pip''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install  &amp;lt;package_name&amp;gt; (preferred way to install packages)&lt;br /&gt;
pip install &amp;lt;package_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To deactivate:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
source deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To remove a conda environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda remove --name myPythonEnv --all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To verify that the environment was removed, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda info --envs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting a Python Job ===&lt;br /&gt;
A single-gpu job example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
#SBATCH --time=1:00:0&lt;br /&gt;
#SBATCH -A &amp;lt;SOSCIP_PROJECT_ID&amp;gt; #For SOSCIP projects only&lt;br /&gt;
&lt;br /&gt;
module load anaconda3&lt;br /&gt;
source activate myPythonEnv&lt;br /&gt;
python code.py ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== CuPy ==&lt;br /&gt;
[https://cupy.chainer.org CuPy] is an open-source matrix library accelerated with NVIDIA CUDA. It also uses CUDA-related libraries including cuBLAS, cuDNN, cuRand, cuSolver, cuSPARSE, cuFFT and NCCL to make full use of the GPU architecture. CuPy is an implementation of a NumPy-compatible multi-dimensional array on CUDA. CuPy consists of the core multi-dimensional array class, cupy.ndarray, and many functions on it. It supports a subset of the numpy.ndarray interface.&lt;br /&gt;
&lt;br /&gt;
CuPy can be installed into any conda environment. The Python packages numpy, six, and fastrlock are required; cuDNN and NCCL are optional.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3/2019.10 cuda/10.2.89 gcc/7.5.0 cudnn/7.6.5.32  nccl/2.5.6 &lt;br /&gt;
conda create -n cupy-env python=3.7 numpy six fastrlock&lt;br /&gt;
source activate cupy-env&lt;br /&gt;
CFLAGS=&amp;quot;-I$SCINET_CUDNN_ROOT/include -I$SCINET_NCCL_ROOT/include -I$SCINET_CUDA_ROOT/include&amp;quot; LDFLAGS=&amp;quot;-L$SCINET_CUDNN_ROOT/lib64 -L$SCINET_NCCL_ROOT/lib&amp;quot; CUDA_PATH=$SCINET_CUDA_ROOT pip install cupy&lt;br /&gt;
#building/installing CuPy will take a few minutes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
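&lt;br /&gt;
To check the installation afterwards (a small illustrative test; it must run where a GPU is available, e.g. inside a debugjob session):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
python -c &amp;quot;import cupy as cp; x = cp.arange(5); print((2 * x).sum())&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;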
&lt;br /&gt;
== IBM Watson Machine Learning Community Edition (PowerAI) ==&lt;br /&gt;
[https://developer.ibm.com/linuxonpower/deep-learning-powerai/releases/ IBM Watson Machine Learning Community Edition (PowerAI)] contains many popular ML packages, including TensorFlow, PyTorch, XGBoost and RAPIDS. It is distributed through the IBM Conda channel. To install packages from PowerAI, you need to specify the IBM Conda channel when using Anaconda.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
&lt;br /&gt;
conda create --name wmlce_env -c https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda &amp;lt;package_name&amp;gt; (e.g. powerai, tensorflow-gpu, keras, pytorch, powerai-rapids, py-xgboost-gpu,  etc)&lt;br /&gt;
&lt;br /&gt;
source activate wmlce_env &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
*The WML CE Early Access Conda channel (https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda-early-access/) makes new versions of frameworks available in advance of formal WML CE releases. Easy upgrade between packages in the main and Early Access channels is not guaranteed. Using a separate conda environment for Early Access packages is recommended.&lt;br /&gt;
&lt;br /&gt;
== NAMD ==&lt;br /&gt;
[http://www.ks.uiuc.edu/Research/namd/ NAMD] is a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems.&lt;br /&gt;
=== v2.13 ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/10.2.89 gcc/7.5.0 fftw/3.3.8 spectrum-mpi/10.3.1  namd/2.13&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Running with one process per node====&lt;br /&gt;
An example of the job script (using 1 node, '''one process per node''',  32 CPU threads per process + 4 GPUs per process):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=20:00&lt;br /&gt;
#SBATCH --gpus-per-node=4&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH -p compute_full_node&lt;br /&gt;
&lt;br /&gt;
module load cuda/10.2.89 gcc/7.5.0 fftw/3.3.8 spectrum-mpi/10.3.1  namd/2.13&lt;br /&gt;
scontrol show hostnames &amp;gt; nodelist-$SLURM_JOB_ID&lt;br /&gt;
&lt;br /&gt;
`which charmrun` -npernode 1 -hostfile nodelist-$SLURM_JOB_ID `which namd2` +setcpuaffinity +pemap 0-127:4 +idlepoll +ppn 32 +p $((32*SLURM_NTASKS)) stmv.namd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Running with one process per GPU ====&lt;br /&gt;
NAMD may scale better when using '''one process per GPU'''. Please run your own benchmarks.&lt;br /&gt;
An example of the job script (using 1 node, '''one process per GPU''',  8 CPU threads per process):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=20:00&lt;br /&gt;
#SBATCH --gpus-per-node=4&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH -p compute_full_node&lt;br /&gt;
&lt;br /&gt;
module load cuda/10.2.89 gcc/7.5.0 fftw/3.3.8 spectrum-mpi/10.3.1  namd/2.13&lt;br /&gt;
scontrol show hostnames &amp;gt; nodelist-$SLURM_JOB_ID&lt;br /&gt;
&lt;br /&gt;
`which charmrun` -npernode 4 -hostfile nodelist-$SLURM_JOB_ID `which namd2` +setcpuaffinity +pemap 0-127:4 +idlepoll +ppn 8 +p $((8*SLURM_NTASKS)) stmv.namd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== PyTorch ==&lt;br /&gt;
=== Installing from IBM Conda Channel ===&lt;br /&gt;
The easiest way to install PyTorch on Mist is via IBM's Conda channel. Prepare a conda environment with Python 3.6 or 3.7 and install PyTorch from the channel:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
conda create -n pytorch_env python=3.7&lt;br /&gt;
source activate pytorch_env&lt;br /&gt;
conda install -c https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/ pytorch &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once the installation finishes, please clean the cache:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda clean -y --all&lt;br /&gt;
rm -rf $HOME/.conda/pkgs/*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RAPIDS ==&lt;br /&gt;
[https://rapids.ai RAPIDS] is a suite of open-source software libraries that gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. The RAPIDS data science framework includes a collection of libraries: '''cuDF (GPU DataFrames)''', '''cuML (GPU machine learning algorithms)''', '''cuStrings (GPU string manipulation)''', etc.&lt;br /&gt;
&lt;br /&gt;
=== Installing from IBM Conda Channel ===&lt;br /&gt;
The easiest way to install RAPIDS on Mist is via IBM's Conda channel. Prepare a conda environment with Python 3.6 or 3.7 and install powerai-rapids from the channel:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
conda create -n rapids_env python=3.7&lt;br /&gt;
source activate rapids_env&lt;br /&gt;
conda install -c https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/ powerai-rapids&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once the installation finishes, please clean the cache:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda clean -y --all&lt;br /&gt;
rm -rf $HOME/.conda/pkgs/*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TensorFlow and Keras ==&lt;br /&gt;
=== Installing from IBM Conda Channel ===&lt;br /&gt;
The easiest way to install TensorFlow and Keras on Mist is via IBM's Conda channel. Prepare a conda environment with Python 3.6 or 3.7 and install tensorflow-gpu from the channel:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
conda create -n tf_env python=3.7&lt;br /&gt;
source activate tf_env&lt;br /&gt;
conda install -c https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/ tensorflow-gpu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once the installation finishes, please clean the cache:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda clean -y --all&lt;br /&gt;
rm -rf $HOME/.conda/pkgs/*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Testing and debugging =&lt;br /&gt;
You should test your code before you submit it to the cluster, to verify that it is correct and to determine what resources you need.&lt;br /&gt;
* Small test jobs can be run on the login node.  Rule of thumb: tests should run no more than a couple of minutes, take at most about 1-2GB of memory, and use no more than one GPU and a few cores.&lt;br /&gt;
&amp;lt;!-- * You can run the [[Parallel Debugging with DDT|DDT]] debugger on the login nodes after &amp;lt;code&amp;gt;module load ddt&amp;lt;/code&amp;gt;. --&amp;gt;&lt;br /&gt;
* For short tests that do not fit on a login node, or for which you need a dedicated node, request an interactive debug job with the debugjob command:&lt;br /&gt;
 mist-login01:~$ debugjob --clean -g G&lt;br /&gt;
where G is the number of GPUs. G=1 gives an interactive session for 2 hours, G=4 gets you a single node with 4 GPUs for 30 minutes, and G=8 (the maximum) gets you 2 nodes, each with 4 GPUs, for 30 minutes.  The &amp;lt;tt&amp;gt;--clean&amp;lt;/tt&amp;gt; argument is optional but recommended, as it starts the session without any modules loaded, mimicking more closely what happens when you submit a job script.&lt;br /&gt;
&lt;br /&gt;
= Submitting jobs =&lt;br /&gt;
Once you have compiled and tested your code or workflow on the Mist login nodes, and confirmed that it behaves correctly, you are ready to submit jobs to the cluster.  Your jobs will run on some of Mist's 53 compute nodes.  When and where your job runs is determined by the scheduler.&lt;br /&gt;
&lt;br /&gt;
Mist uses SLURM as its job scheduler. It is configured to allow only '''Single-GPU jobs''' and '''Full-node jobs (4 GPUs per node)'''.&lt;br /&gt;
&lt;br /&gt;
You submit jobs from a login node by passing a script to the sbatch command:&lt;br /&gt;
&lt;br /&gt;
mist-login01:scratch$ sbatch jobscript.sh&lt;br /&gt;
&lt;br /&gt;
This puts the job in the queue. It will run on the compute nodes in due course. In most cases, you should not submit from your $HOME directory, but rather, from your $SCRATCH directory, so that the output of your compute job can be written out (as mentioned above, $HOME is read-only on the compute nodes).&lt;br /&gt;
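&lt;br /&gt;
You can monitor and manage queued jobs with the standard SLURM commands, for example:&lt;br /&gt;
&lt;br /&gt;
 mist-login01:scratch$ squeue -u $USER&lt;br /&gt;
 mist-login01:scratch$ scancel JOBID&lt;br /&gt;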
&lt;br /&gt;
Example job scripts can be found below.&lt;br /&gt;
Keep in mind:&lt;br /&gt;
* Scheduling is by single GPU or by full node, so request either 1 GPU or 4 GPUs per node.&lt;br /&gt;
* Your job's maximum walltime is 24 hours. &lt;br /&gt;
* Jobs must write their output to your scratch or project directory (home is read-only on compute nodes).&lt;br /&gt;
* Compute nodes have no internet access.&lt;br /&gt;
* Your job script will not remember the modules you have loaded, so it needs to contain &amp;quot;module load&amp;quot; commands for all the required modules (see examples below). &lt;br /&gt;
== SOSCIP Users ==&lt;br /&gt;
If you are working on a SOSCIP project, please contact soscip-support@scinet.utoronto.ca to have your user account added to the SOSCIP project accounts. SOSCIP users need to submit jobs with an additional SLURM flag:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH -A &amp;lt;SOSCIP_PROJECT_ID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Single-GPU job script ==&lt;br /&gt;
For a single-GPU job, each job receives a quarter of a node: 1 GPU, 8 CPU cores (32 hardware threads), and ~58GB of CPU memory. '''Users should never request CPUs or memory explicitly.''' If running an MPI program, set --ntasks to the number of MPI ranks. NVIDIA Multi-Process Service (MPS) is suggested when running multiple MPI ranks on one GPU.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
#SBATCH --time=1:00:0&lt;br /&gt;
#SBATCH -A &amp;lt;SOSCIP_PROJECT_ID&amp;gt; #For SOSCIP projects only&lt;br /&gt;
&lt;br /&gt;
module load anaconda3&lt;br /&gt;
source activate conda_env&lt;br /&gt;
python code.py ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Full-node job script ==&lt;br /&gt;
Multi-GPU jobs must request a minimum of one full node. You need to specify the &amp;quot;compute_full_node&amp;quot; partition in order to get all resources on a node.&lt;br /&gt;
*An example for a 2-node, 8-rank OpenMPI job: (Each rank binds to 1 GPU and 8 physical CPU cores in this case)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --gpus-per-node=4&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH -p compute_full_node&lt;br /&gt;
#SBATCH -A &amp;lt;SOSCIP_PROJECT_ID&amp;gt; #For SOSCIP projects only&lt;br /&gt;
&lt;br /&gt;
module load cuda/10.2.89 gcc/8.3.0 openmpi/3.1.5&lt;br /&gt;
&lt;br /&gt;
mpirun -bind-to core -map-by slot:PE=8 -report-bindings ./program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Mist&amp;diff=2471</id>
		<title>Mist</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Mist&amp;diff=2471"/>
		<updated>2020-02-12T17:22:54Z</updated>

		<summary type="html">&lt;p&gt;Nolta: /* PGI */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox Computer&lt;br /&gt;
|image=[[Image:Mist.jpg|center|300px|thumb]]&lt;br /&gt;
|name=Mist&lt;br /&gt;
|installed=Dec 2019&lt;br /&gt;
|operatingsystem= Red Hat Enterprise Linux 7.6 &lt;br /&gt;
|loginnode= mist.scinet.utoronto.ca&lt;br /&gt;
|nnodes=  54 IBM AC922&lt;br /&gt;
|rampernode= 256 GB  &lt;br /&gt;
|gpuspernode=4 V100-SMX2-32GB&lt;br /&gt;
|interconnect=Mellanox EDR&lt;br /&gt;
|vendorcompilers= IBM XL&lt;br /&gt;
|queuetype=Slurm&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
= Warning =&lt;br /&gt;
&lt;br /&gt;
'''Mist is in an early-user/beta testing phase. All instructions below are temporary and subject to change.'''&lt;br /&gt;
&lt;br /&gt;
=Specifications=&lt;br /&gt;
The Mist cluster is a GPU cluster of 54 IBM AC922 servers, each with 32 IBM Power9 cores and 4 NVIDIA V100-SMX2-32GB GPUs connected by NVLink. Each node of the cluster has 256GB RAM. The nodes are connected with InfiniBand EDR, providing GPU-Direct RDMA capability.&lt;br /&gt;
&lt;br /&gt;
= Getting started on Mist =&lt;br /&gt;
&lt;br /&gt;
Mist is currently in a testing phase. The Mist login node '''mist-login01''' can be accessed via the Niagara cluster:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -Y MYCCUSERNAME@niagara.scinet.utoronto.ca&lt;br /&gt;
ssh -Y mist-login01&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Storage ==&lt;br /&gt;
The filesystem for Mist is shared with the Niagara cluster. See [https://docs.scinet.utoronto.ca/index.php/Niagara_Quickstart#Your_various_directories Niagara Storage] for more details.&lt;br /&gt;
&lt;br /&gt;
= Loading software modules =&lt;br /&gt;
&lt;br /&gt;
You have two options for running code on Mist: use existing software, or compile your own.  This section focuses on the former.&lt;br /&gt;
&lt;br /&gt;
Other than essentials, all installed software is made available [[Using_modules | using module commands]]. These modules set environment variables (PATH, etc.), allowing multiple, conflicting versions of a given package to be available.  A detailed explanation of the module system can be [[Using_modules | found on the modules page]].&lt;br /&gt;
&lt;br /&gt;
Common module subcommands are:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module load &amp;lt;module-name&amp;gt;&amp;lt;/code&amp;gt;: load the default version of a particular software.&lt;br /&gt;
* &amp;lt;code&amp;gt;module load &amp;lt;module-name&amp;gt;/&amp;lt;module-version&amp;gt;&amp;lt;/code&amp;gt;: load a specific version of a particular software.&lt;br /&gt;
* &amp;lt;code&amp;gt;module purge&amp;lt;/code&amp;gt;: unload all currently loaded modules.&lt;br /&gt;
* &amp;lt;code&amp;gt;module spider&amp;lt;/code&amp;gt; (or &amp;lt;code&amp;gt;module spider &amp;lt;module-name&amp;gt;&amp;lt;/code&amp;gt;): list available software packages.&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;: list loadable software packages.&lt;br /&gt;
* &amp;lt;code&amp;gt;module list&amp;lt;/code&amp;gt;: list loaded modules.&lt;br /&gt;
&lt;br /&gt;
Along with modifying common environment variables, such as PATH and LD_LIBRARY_PATH, these modules also create a SCINET_MODULENAME_ROOT environment variable, which can be used to access commonly needed software directories, such as /include and /lib.&lt;br /&gt;
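&lt;br /&gt;
For example, such a variable can be used when compiling against a library (an illustrative sketch; substitute the modules and file names for your own code):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/10.2.89 gcc/8.3.0&lt;br /&gt;
gcc -I$SCINET_CUDA_ROOT/include -L$SCINET_CUDA_ROOT/lib64 -lcudart -o myprog myprog.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;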
&lt;br /&gt;
There are handy abbreviations for the module commands. &amp;lt;code&amp;gt;ml&amp;lt;/code&amp;gt; is the same as &amp;lt;code&amp;gt;module list&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;ml &amp;lt;module-name&amp;gt;&amp;lt;/code&amp;gt; is the same as &amp;lt;code&amp;gt;module load &amp;lt;module-name&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
== Tips for loading software ==&lt;br /&gt;
&lt;br /&gt;
* We advise '''''against''''' loading modules in your .bashrc.  This can lead to very confusing behaviour under certain circumstances.  Our guidelines for .bashrc files can be found [[bashrc guidelines|here]].&lt;br /&gt;
* Instead, load modules by hand when needed, or by sourcing a separate script.&lt;br /&gt;
* Load run-specific modules inside your job submission script.&lt;br /&gt;
* Short names give default versions; e.g. &amp;lt;code&amp;gt;cuda&amp;lt;/code&amp;gt; → &amp;lt;code&amp;gt;cuda/10.1.243&amp;lt;/code&amp;gt;. It is usually better to be explicit about the versions, for future reproducibility.&lt;br /&gt;
* Modules often require other modules to be loaded first.  Solve these dependencies by using [[Using_modules#Module_spider | &amp;lt;code&amp;gt;module spider&amp;lt;/code&amp;gt;]].&lt;br /&gt;
&lt;br /&gt;
= Available compilers and interpreters =&lt;br /&gt;
* The &amp;lt;tt&amp;gt;cuda&amp;lt;/tt&amp;gt; module has to be loaded first for GPU software.&lt;br /&gt;
* For most compiled software, one should use the GNU compilers (&amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; for C, &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt; for C++, and &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt; for Fortran). Loading an &amp;lt;tt&amp;gt;at&amp;lt;/tt&amp;gt; (IBM Advance Toolchain) or &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; module makes these available. &lt;br /&gt;
* The IBM XL compiler suite (&amp;lt;tt&amp;gt;xlc_r, xlc++_r, xlf_r&amp;lt;/tt&amp;gt;) is also available, if you load one of the &amp;lt;tt&amp;gt;xl&amp;lt;/tt&amp;gt; modules.&lt;br /&gt;
* To compile MPI code, you must additionally load an &amp;lt;tt&amp;gt;openmpi&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;spectrum-mpi&amp;lt;/tt&amp;gt; module.&lt;br /&gt;
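&lt;br /&gt;
For example, a simple MPI program could be compiled as follows (an illustrative sketch; choose the compiler and MPI module versions appropriate for your code):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load gcc/8.3.0 openmpi/3.1.5&lt;br /&gt;
mpicc -O2 -o hello_mpi hello_mpi.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;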
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
&lt;br /&gt;
The currently installed CUDA Toolkits are '''10.1.243''' and '''10.2.89 (default)'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/&amp;lt;version&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
*A compiler (GCC, XL or PGI) module must be loaded in order to use CUDA to build any code.&lt;br /&gt;
The current NVIDIA driver version is 440.33.01.&lt;br /&gt;
&lt;br /&gt;
===GNU Compilers ===&lt;br /&gt;
&lt;br /&gt;
Available GCC modules are:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gcc/7.5.0&lt;br /&gt;
gcc/8.3.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== IBM XL Compilers ===&lt;br /&gt;
&lt;br /&gt;
To load the native IBM xlc/xlc++ and xlf (Fortran) compilers, run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load xl/16.1.1.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
IBM XL Compilers are enabled for use with NVIDIA GPUs, including support for OpenMP GPU offloading and integration with NVIDIA's nvcc command to compile host-side code for the POWER9 CPU. Information about the IBM XL Compilers can be found at the following links:[https://www.ibm.com/support/knowledgecenter/SSXVZZ_16.1.1/com.ibm.compilers.linux.doc/welcome.html IBM XL C/C++], &lt;br /&gt;
[https://www.ibm.com/support/knowledgecenter/SSAT4T_16.1.1/com.ibm.compilers.linux.doc/welcome.html IBM XL Fortran]&lt;br /&gt;
&lt;br /&gt;
=== OpenMPI ===&lt;br /&gt;
The &amp;lt;tt&amp;gt;openmpi/&amp;lt;version&amp;gt;&amp;lt;/tt&amp;gt; module is available for several compilers, including GCC and XL. The &amp;lt;tt&amp;gt;spectrum-mpi/&amp;lt;version&amp;gt;&amp;lt;/tt&amp;gt; module provides IBM Spectrum MPI.&lt;br /&gt;
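For example, to use IBM Spectrum MPI with the GNU compilers (versions as used in the NAMD example below):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load gcc/7.5.0 spectrum-mpi/10.3.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;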
&lt;br /&gt;
=== PGI ===&lt;br /&gt;
To load the PGI compiler and its own OpenMPI environment, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load pgi/19.10&lt;br /&gt;
module load pgi-openmpi/3.1.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Software =&lt;br /&gt;
== Anaconda (Python) ==&lt;br /&gt;
Anaconda is a popular distribution of the Python programming language. It contains several common Python libraries such as SciPy and NumPy as pre-built packages, which eases installation. Anaconda is provided by the '''anaconda3''' modules.&lt;br /&gt;
&lt;br /&gt;
To set up a local Python environment, load the module and create a conda environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
conda create -n myPythonEnv python=3.7&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
*Note: By default, conda environments are located in '''$HOME/.conda/envs''', and the cache (downloaded tarballs and packages) is under '''$HOME/.conda/pkgs'''. You may run into disk-quota problems if you create too many environments. To clean the conda cache, '''run &amp;quot;conda clean -y --all&amp;quot; and &amp;quot;rm -rf $HOME/.conda/pkgs/*&amp;quot; after installing packages'''.&lt;br /&gt;
&lt;br /&gt;
To activate the conda environment '''(do this before running python)''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
source activate myPythonEnv&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that you SHOULD NOT use '''conda activate myPythonEnv''' to activate the environment; this leads to all sorts of problems. Once the environment is activated, you can update or install packages via '''conda''' or '''pip''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install  &amp;lt;package_name&amp;gt; (preferred way to install packages)&lt;br /&gt;
pip install &amp;lt;package_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To deactivate:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
source deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To remove a conda environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda remove --name myPythonEnv --all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To verify that the environment was removed, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda info --envs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting a Python Job ===&lt;br /&gt;
An example single-GPU job script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
#SBATCH --time=1:00:0&lt;br /&gt;
#SBATCH -A &amp;lt;SOSCIP_PROJECT_ID&amp;gt; #For SOSCIP projects only&lt;br /&gt;
&lt;br /&gt;
module load anaconda3&lt;br /&gt;
source activate myPythonEnv&lt;br /&gt;
python code.py ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== CuPy ==&lt;br /&gt;
[https://cupy.chainer.org CuPy] is an open-source matrix library accelerated with NVIDIA CUDA. It uses CUDA-related libraries, including cuBLAS, cuDNN, cuRAND, cuSOLVER, cuSPARSE, cuFFT and NCCL, to make full use of the GPU architecture. CuPy implements a NumPy-compatible multi-dimensional array on CUDA: it consists of the core array class, cupy.ndarray, and many functions operating on it, and supports a subset of the numpy.ndarray interface.&lt;br /&gt;
&lt;br /&gt;
CuPy can be installed into any conda environment. The Python packages numpy, six and fastrlock are required; cuDNN and NCCL are optional.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3/2019.10 cuda/10.2.89 gcc/7.5.0 cudnn/7.6.5.32  nccl/2.5.6 &lt;br /&gt;
conda create -n cupy-env python=3.7 numpy six fastrlock&lt;br /&gt;
source activate cupy-env&lt;br /&gt;
CFLAGS=&amp;quot;-I$SCINET_CUDNN_ROOT/include -I$SCINET_NCCL_ROOT/include -I$SCINET_CUDA_ROOT/include&amp;quot; LDFLAGS=&amp;quot;-L$SCINET_CUDNN_ROOT/lib64 -L$SCINET_NCCL_ROOT/lib&amp;quot; CUDA_PATH=$SCINET_CUDA_ROOT pip install cupy&lt;br /&gt;
#building/installing CuPy will take a few minutes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== IBM Watson Machine Learning Community Edition (PowerAI) ==&lt;br /&gt;
[https://developer.ibm.com/linuxonpower/deep-learning-powerai/releases/ IBM Watson Machine Learning Community Edition (PowerAI)] contains many popular ML packages, including TensorFlow, PyTorch, XGBoost and RAPIDS. It is distributed through IBM's Conda channel, which you must specify when installing these packages with Anaconda.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
&lt;br /&gt;
conda create --name wmlce_env -c https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda &amp;lt;package_name&amp;gt; (e.g. powerai, tensorflow-gpu, keras, pytorch, powerai-rapids, py-xgboost-gpu,  etc)&lt;br /&gt;
&lt;br /&gt;
source activate wmlce_env &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
*The WML CE Early Access Conda channel (https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda-early-access/) makes new versions of frameworks available in advance of formal WML CE releases. Easy upgrade between packages in the main and Early Access channels is not guaranteed. Using a separate conda environment for Early Access packages is recommended.&lt;br /&gt;
&lt;br /&gt;
== NAMD ==&lt;br /&gt;
[http://www.ks.uiuc.edu/Research/namd/ NAMD] is a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems.&lt;br /&gt;
=== v2.13 ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/10.2.89 gcc/7.5.0 fftw/3.3.8 spectrum-mpi/10.3.1  namd/2.13&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Running with one process per node====&lt;br /&gt;
An example job script (1 node, '''one process per node''', 32 CPU threads per process, 4 GPUs per process):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=20:00&lt;br /&gt;
#SBATCH --gpus-per-node=4&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH -p compute_full_node&lt;br /&gt;
&lt;br /&gt;
module load cuda/10.2.89 gcc/7.5.0 fftw/3.3.8 spectrum-mpi/10.3.1  namd/2.13&lt;br /&gt;
scontrol show hostnames &amp;gt; nodelist-$SLURM_JOB_ID&lt;br /&gt;
&lt;br /&gt;
`which charmrun` -npernode 1 -hostfile nodelist-$SLURM_JOB_ID `which namd2` +setcpuaffinity +pemap 0-127:4 +idlepoll +ppn 32 +p $((32*SLURM_NTASKS)) stmv.namd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Running with one process per GPU ====&lt;br /&gt;
NAMD may scale better with '''one process per GPU'''; please run your own benchmarks.&lt;br /&gt;
An example job script (1 node, '''one process per GPU''', 8 CPU threads per process):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=20:00&lt;br /&gt;
#SBATCH --gpus-per-node=4&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH -p compute_full_node&lt;br /&gt;
&lt;br /&gt;
module load cuda/10.2.89 gcc/7.5.0 fftw/3.3.8 spectrum-mpi/10.3.1  namd/2.13&lt;br /&gt;
scontrol show hostnames &amp;gt; nodelist-$SLURM_JOB_ID&lt;br /&gt;
&lt;br /&gt;
`which charmrun` -npernode 4 -hostfile nodelist-$SLURM_JOB_ID `which namd2` +setcpuaffinity +pemap 0-127:4 +idlepoll +ppn 8 +p $((8*SLURM_NTASKS)) stmv.namd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== PyTorch ==&lt;br /&gt;
=== Installing from IBM Conda Channel ===&lt;br /&gt;
The easiest way to install PyTorch on Mist is from IBM's Conda channel. Prepare a conda environment with Python 3.6 or 3.7, then install PyTorch from that channel:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
conda create -n pytorch_env python=3.7&lt;br /&gt;
source activate pytorch_env&lt;br /&gt;
conda install -c https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/ pytorch &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once the installation finishes, please clean the cache:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda clean -y --all&lt;br /&gt;
rm -rf $HOME/.conda/pkgs/*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RAPIDS ==&lt;br /&gt;
[https://rapids.ai RAPIDS] is a suite of open-source software libraries for executing end-to-end data science and analytics pipelines entirely on GPUs. The RAPIDS data science framework includes a collection of libraries: '''cuDF (GPU DataFrames)''', '''cuML (GPU machine learning algorithms)''', '''cuStrings (GPU string manipulation)''', etc.&lt;br /&gt;
&lt;br /&gt;
=== Installing from IBM Conda Channel ===&lt;br /&gt;
The easiest way to install RAPIDS on Mist is from IBM's Conda channel. Prepare a conda environment with Python 3.6 or 3.7, then install the powerai-rapids package from that channel:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
conda create -n rapids_env python=3.7&lt;br /&gt;
source activate rapids_env&lt;br /&gt;
conda install -c https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/ powerai-rapids&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once the installation finishes, please clean the cache:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda clean -y --all&lt;br /&gt;
rm -rf $HOME/.conda/pkgs/*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TensorFlow and Keras ==&lt;br /&gt;
=== Installing from IBM Conda Channel ===&lt;br /&gt;
The easiest way to install TensorFlow and Keras on Mist is from IBM's Conda channel. Prepare a conda environment with Python 3.6 or 3.7, then install the tensorflow-gpu package from that channel:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load anaconda3&lt;br /&gt;
conda create -n tf_env python=3.7&lt;br /&gt;
source activate tf_env&lt;br /&gt;
conda install -c https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/ tensorflow-gpu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once the installation finishes, please clean the cache:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda clean -y --all&lt;br /&gt;
rm -rf $HOME/.conda/pkgs/*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Testing and debugging =&lt;br /&gt;
Test your code before submitting it to the cluster, both to confirm that it is correct and to determine what resources it needs.&lt;br /&gt;
* Small test jobs can be run on the login node.  Rule of thumb: tests should run no more than a couple of minutes, take at most about 1-2GB of memory, and use no more than one GPU and a few cores.&lt;br /&gt;
&amp;lt;!-- * You can run the [[Parallel Debugging with DDT|DDT]] debugger on the login nodes after &amp;lt;code&amp;gt;module load ddt&amp;lt;/code&amp;gt;. --&amp;gt;&lt;br /&gt;
* For short tests that do not fit on a login node, or for which you need a dedicated node, request an interactive debug job with the &amp;lt;tt&amp;gt;debugjob&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
 mist-login01:~$ debugjob --clean -g G&lt;br /&gt;
where G is the number of GPUs. G=1 gives an interactive session for 2 hours, G=4 gets you a single node with 4 GPUs for 30 minutes, and G=8 (the maximum) gets you two nodes with 4 GPUs each for 30 minutes.  The &amp;lt;tt&amp;gt;--clean&amp;lt;/tt&amp;gt; argument is optional but recommended, as it starts the session without any modules loaded, thus mimicking more closely what happens when you submit a job script.&lt;br /&gt;
&lt;br /&gt;
= Submitting jobs =&lt;br /&gt;
Once you have compiled and tested your code or workflow on the Mist login nodes, and confirmed that it behaves correctly, you are ready to submit jobs to the cluster.  Your jobs will run on some of Mist's 53 compute nodes.  When and where your job runs is determined by the scheduler.&lt;br /&gt;
&lt;br /&gt;
Mist uses SLURM as its job scheduler. It is configured to allow only '''Single-GPU jobs''' and '''Full-node jobs (4 GPUs per node)'''.&lt;br /&gt;
&lt;br /&gt;
You submit jobs from a login node by passing a script to the sbatch command:&lt;br /&gt;
&lt;br /&gt;
mist-login01:scratch$ sbatch jobscript.sh&lt;br /&gt;
&lt;br /&gt;
This puts the job in the queue. It will run on the compute nodes in due course. In most cases, you should not submit from your $HOME directory, but rather, from your $SCRATCH directory, so that the output of your compute job can be written out (as mentioned above, $HOME is read-only on the compute nodes).&lt;br /&gt;
&lt;br /&gt;
Example job scripts can be found below.&lt;br /&gt;
Keep in mind:&lt;br /&gt;
* Scheduling is by single GPU or by full node, so request either 1 GPU or 4 GPUs per node.&lt;br /&gt;
* Your job's maximum walltime is 24 hours. &lt;br /&gt;
* Jobs must write their output to your scratch or project directory (home is read-only on compute nodes).&lt;br /&gt;
* Compute nodes have no internet access.&lt;br /&gt;
* Your job script will not remember the modules you have loaded, so it needs to contain &amp;quot;module load&amp;quot; commands for all required modules (see the examples below). &lt;br /&gt;
== SOSCIP Users ==&lt;br /&gt;
If you are working on a SOSCIP project, please contact soscip-support@scinet.utoronto.ca to have your user account added to SOSCIP project accounts. SOSCIP users need to submit jobs with an additional SLURM flag:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH -A &amp;lt;SOSCIP_PROJECT_ID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Single-GPU job script ==&lt;br /&gt;
A single-GPU job receives a quarter of a node: 1 GPU, 8 CPU cores (32 hardware threads), and ~58GB of CPU memory. '''Never request CPUs or memory explicitly.''' If running an MPI program, set --ntasks to the number of MPI ranks. Using the NVIDIA Multi-Process Service (MPS) is suggested when running multiple MPI ranks on one GPU.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --gpus-per-node=1&lt;br /&gt;
#SBATCH --time=1:00:0&lt;br /&gt;
#SBATCH -A &amp;lt;SOSCIP_PROJECT_ID&amp;gt; #For SOSCIP projects only&lt;br /&gt;
&lt;br /&gt;
module load anaconda3&lt;br /&gt;
source activate conda_env&lt;br /&gt;
python code.py ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
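If you do run several MPI ranks on the single GPU, a minimal sketch of using MPS in a job script looks like the following (consult NVIDIA's MPS documentation for the details of your setup):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvidia-cuda-mps-control -d            # start the MPS control daemon&lt;br /&gt;
mpirun -np 4 ./program                # ranks share the GPU through MPS&lt;br /&gt;
echo quit | nvidia-cuda-mps-control   # shut the daemon down&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;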
&lt;br /&gt;
== Full-node job script ==&lt;br /&gt;
Multi-GPU jobs must request a minimum of one full node. Specify the &amp;quot;compute_full_node&amp;quot; partition in order to get all resources on a node.&lt;br /&gt;
*An example for a 2-node, 8-rank OpenMPI job: (Each rank binds to 1 GPU and 8 physical CPU cores in this case)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --gpus-per-node=4&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH -p compute_full_node&lt;br /&gt;
#SBATCH -A &amp;lt;SOSCIP_PROJECT_ID&amp;gt; #For SOSCIP projects only&lt;br /&gt;
&lt;br /&gt;
module load cuda/10.2.89 gcc/7.5.0 openmpi/4.0.2&lt;br /&gt;
&lt;br /&gt;
mpirun -bind-to core -map-by slot:PE=8 -report-bindings ./program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=2464</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=2464"/>
		<updated>2020-02-11T20:16:35Z</updated>

		<summary type="html">&lt;p&gt;Nolta: /* System Status */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up|Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Up|HPSS|HPSS}}&lt;br /&gt;
|{{Up|SOSCIP&amp;amp;nbsp;GPU|SOSCIP_GPU}}&lt;br /&gt;
|{{Up|P8|P8}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up|Teach|Teach}}&lt;br /&gt;
|{{Up|Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up|Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Up|File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up|External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up|Globus|Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt; Feb 11, 2020, 2:00PM: &amp;lt;/b&amp;gt; The niagara compute nodes were accidentally rebooted, killing all running jobs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt; Feb 10, 2020, 19:00PM: &amp;lt;/b&amp;gt; HPSS is back to normal.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt; Jan 30, 2020, 12:01PM: &amp;lt;/b&amp;gt; We are having an issue with HPSS, in which the disk-cache is full. We put a reservation on the whole system (Globus, plus archive and vfs queues), until it has had a chance to clear some space on the cache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt; Jan 21, 2020, 4:05PM: &amp;lt;/b&amp;gt;   There was a partial power outage that took down a large number of the compute nodes.  If your job died during this period, please resubmit.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Jan 13, 2020, 7:35 PM:&amp;lt;/b&amp;gt; Maintenance finished.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Jan 13, 2020, 8:20 AM:&amp;lt;/b&amp;gt; The announced maintenance downtime started (see below).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Jan 9 2020, 11:30 AM:&amp;lt;/b&amp;gt; External ssh connectivity restored, issue related to the university network.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Jan 9 2020, 9:24 AM:&amp;lt;/b&amp;gt; We received reports of users having trouble connecting into the SciNet data centre; we're investigating.  Systems are up and running and jobs are fine.&amp;lt;p&amp;gt;&lt;br /&gt;
As a work around, in the meantime, it appears to be possible to log into graham, cedar or beluga, and then ssh to niagara.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Downtime announcement:&amp;lt;/b&amp;gt;&lt;br /&gt;
To prepare for the upcoming expansion of Niagara, there will be a&lt;br /&gt;
one-day maintenance shutdown on &amp;lt;b&amp;gt;January 13th 2020, starting at 8 am&lt;br /&gt;
EST&amp;lt;/b&amp;gt;.  There will be no access to Niagara, Mist, HPSS or teach, nor&lt;br /&gt;
to their file systems during this time.&lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[SOSCIP_GPU | SOSCIP GPU cluster]]&lt;br /&gt;
* [[P8|Experimental Power 8 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://support.scinet.utoronto.ca/education/browse.php SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Globus&amp;diff=2364</id>
		<title>Globus</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Globus&amp;diff=2364"/>
		<updated>2019-11-21T20:00:31Z</updated>

		<summary type="html">&lt;p&gt;Nolta: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Globus is a service for fast, reliable, secure data movement. Designed specifically for researchers, Globus has an easy-to-use interface with background monitoring features that automate the management of file transfers between any two resources, whether they are at Compute Canada, another supercomputing facility, a campus cluster, lab server, desktop or laptop.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== Copying files between Compute Canada sites using the Globus web interface ==&lt;br /&gt;
&lt;br /&gt;
The procedure of using the [http://globus.computecanada.ca/ Globus web interface] for transferring data between different Compute Canada sites (&amp;quot;endpoints&amp;quot; in Globus), including those at SciNet, is well described in the [https://docs.computecanada.ca/wiki/Globus Compute Canada Globus documentation].&lt;br /&gt;
&lt;br /&gt;
At SciNet, there are three endpoints:&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;tt&amp;gt;computecanada#niagara&amp;lt;/tt&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
This endpoint gives access to the files on your Niagara $HOME.  &amp;lt;br&amp;gt;To get easy access to your files on your $SCRATCH in the Globus web interface, it can be helpful to add a so-called softlink in your home directory to the scratch directory, by issuing the following command on the command line on Niagara (once):&amp;lt;br/&amp;gt;&amp;lt;tt&amp;gt; $ ln -sn $SCRATCH $HOME/scratch&amp;lt;/tt&amp;gt;&amp;lt;br/&amp;gt;A similar soft-link can be useful for your $PROJECT directory, if you have one on Niagara.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;tt&amp;gt;computecanada#hpss&amp;lt;/tt&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
This endpoint gives access to your [[HPSS]] space, if you have access to it.  [[HPSS]] is a tape-backed hierarchical storage system that provides a significant portion of the allocated storage space at SciNet.  The Globus endpoint is not the only way to interact with HPSS, and may not be the appropriate method for your use case; please read the [[HPSS]] page before using this endpoint. &lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;tt&amp;gt;computecanada#bgq&amp;lt;/tt&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
This endpoint gives access to the files on your SciNet SOSCIP $HOME, if you have access to one of the SOSCIP systems hosted at SciNet, i.e., the [[BGQ]] or the [[SOSCIP GPU]] cluster.   To get easy access in the Globus web interface to your SOSCIP $SCRATCH files, it can be helpful to add a so-called softlink in your home directory to your scratch directory, by issuing the following command on the command line on Niagara (once):&amp;lt;br/&amp;gt;&amp;lt;tt&amp;gt; $ ln -sn $SCRATCH $HOME/scratch&amp;lt;/tt&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Copying files between Compute Canada sites and your personal computer using the Globus web interface ==&lt;br /&gt;
&lt;br /&gt;
The procedure of using the [http://globus.computecanada.ca/ Globus web interface] for transferring data between Compute Canada sites (including those at SciNet) and your own personal computer, is well described in the [https://docs.computecanada.ca/wiki/Globus Compute Canada Globus documentation]. &lt;br /&gt;
&lt;br /&gt;
Essentially, you create an &amp;quot;endpoint for your personal computer&amp;quot;, then you can use the web interface to transfer to one of the SciNet endpoints listed above.&lt;br /&gt;
&lt;br /&gt;
== Copying Files to Niagara From the Linux Command-line ==&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Install globus CLI ===&lt;br /&gt;
&lt;br /&gt;
Requires python 2.7. On the machine you're transferring from:&lt;br /&gt;
&lt;br /&gt;
 $ virtualenv venv-globus&lt;br /&gt;
 $ source ./venv-globus/bin/activate&lt;br /&gt;
 $ pip install globus-cli&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Login to globus ===&lt;br /&gt;
&lt;br /&gt;
 $ globus login&lt;br /&gt;
&lt;br /&gt;
You should see:&lt;br /&gt;
&lt;br /&gt;
 Please log into Globus here:&lt;br /&gt;
 ---------------------------&lt;br /&gt;
 https://auth.globus.org/v2/oauth2/...&lt;br /&gt;
 ---------------------------&lt;br /&gt;
 &lt;br /&gt;
 Enter the resulting Authorization Code here:&lt;br /&gt;
&lt;br /&gt;
Visit the URL in a web browser, choose &amp;quot;Compute Canada&amp;quot; as your organization, and enter your Niagara username/password.&lt;br /&gt;
&lt;br /&gt;
=== Step 3: Create a personal endpoint ===&lt;br /&gt;
&lt;br /&gt;
 $ globus endpoint create --personal my-endpoint-name&lt;br /&gt;
&lt;br /&gt;
Replace &amp;lt;code&amp;gt;my-endpoint-name&amp;lt;/code&amp;gt; with a name of your choice.&lt;br /&gt;
&lt;br /&gt;
You should see:&lt;br /&gt;
&lt;br /&gt;
 Message:     Endpoint created successfully&lt;br /&gt;
 Endpoint ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&lt;br /&gt;
 Setup Key:   yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy&lt;br /&gt;
&lt;br /&gt;
Save this info as we'll need it later.&lt;br /&gt;
&lt;br /&gt;
=== Step 4: Get Globus Connect Personal ===&lt;br /&gt;
&lt;br /&gt;
 $ wget https://downloads.globus.org/globus-connect-personal/linux/stable/globusconnectpersonal-latest.tgz&lt;br /&gt;
 $ tar -xzf globusconnectpersonal-latest.tgz&lt;br /&gt;
 $ cd globusconnectpersonal-x.y.z&lt;br /&gt;
&lt;br /&gt;
=== Step 5: Setup your endpoint ===&lt;br /&gt;
&lt;br /&gt;
 $ ./globusconnectpersonal -setup yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy&lt;br /&gt;
&lt;br /&gt;
replacing &amp;quot;yyyy...&amp;quot; with the Setup Key from 'globus endpoint create'. You should see something like:&lt;br /&gt;
&lt;br /&gt;
 Configuration directory: /home/username/.globusonline/lta&lt;br /&gt;
 Contacting relay.globusonline.org:2223&lt;br /&gt;
 Done!&lt;br /&gt;
&lt;br /&gt;
=== Step 6: Configure your endpoint (optional) ===&lt;br /&gt;
&lt;br /&gt;
By default globus only allows transfers to/from your home directory.&lt;br /&gt;
Edit &amp;lt;code&amp;gt;~/.globusonline/lta/config-paths&amp;lt;/code&amp;gt; and add a line for any other directories you need, e.g.:&lt;br /&gt;
&lt;br /&gt;
 /path/to/data/,0,1&lt;br /&gt;
&lt;br /&gt;
=== Step 7: Start Globus Connect ===&lt;br /&gt;
&lt;br /&gt;
 $ ./globusconnectpersonal -start &amp;amp;&lt;br /&gt;
&lt;br /&gt;
=== Step 8: Set some convenience variables ===&lt;br /&gt;
&lt;br /&gt;
 $ my_endpoint=&amp;quot;xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&amp;quot;&lt;br /&gt;
 $ niagara_endpoint=&amp;quot;77506016-4a51-11e8-8f88-0a6d4e044368&amp;quot;&lt;br /&gt;
&lt;br /&gt;
replacing &amp;quot;xxxx...&amp;quot; with the Endpoint ID from 'globus endpoint create'.&lt;br /&gt;
&lt;br /&gt;
=== Step 9: Activate the Niagara endpoint ===&lt;br /&gt;
&lt;br /&gt;
 $ globus endpoint activate --myproxy --myproxy-lifetime=1000 $niagara_endpoint&lt;br /&gt;
&lt;br /&gt;
=== Step 10: Start a transfer ===&lt;br /&gt;
&lt;br /&gt;
 $ globus transfer --recursive $my_endpoint:/path/to/data $niagara_endpoint:/scratch/g/group/username/data&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
	<entry>
		<id>https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=2363</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.scinet.utoronto.ca/index.php?title=Main_Page&amp;diff=2363"/>
		<updated>2019-11-19T17:40:27Z</updated>

		<summary type="html">&lt;p&gt;Nolta: /* System Status */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| style=&amp;quot;border-spacing:10px; width: 95%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:1em; padding-top:.1em; border:2px solid #0645ad; background-color:#f6f6f6; border-radius:7px&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
==System Status==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Use &amp;quot;Up&amp;quot; or &amp;quot;Down&amp;quot;; these are templates. --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;width:100%&amp;quot; &lt;br /&gt;
|{{Up|Niagara|Niagara_Quickstart}}&lt;br /&gt;
|{{Up|HPSS|HPSS}}&lt;br /&gt;
|{{Up|SOSCIP&amp;amp;nbsp;GPU|SOSCIP_GPU}}&lt;br /&gt;
|{{Up|P8|P8}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up|Teach|Teach}}&lt;br /&gt;
|{{Up|Jupyter Hub|Jupyter_Hub}}&lt;br /&gt;
|{{Up|Scheduler|Niagara_Quickstart#Submitting_jobs}}&lt;br /&gt;
|{{Up| File system|Niagara_Quickstart#Storage_and_quotas}}&lt;br /&gt;
|-&lt;br /&gt;
|{{Up|External Network|Niagara_Quickstart#Logging_in}} &lt;br /&gt;
|{{Up|Globus|Globus}}&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;!-- Current Messages: --&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt; &amp;lt;b&amp;gt;Fri, Nov 15 2019, 11:00 PM (EST)&amp;lt;/b&amp;gt;  Niagara and most of the main systems are now available. &lt;br /&gt;
&amp;lt;/p&amp;gt;&amp;lt;p&amp;gt; &amp;lt;b&amp;gt;Fri, Nov 15 2019, 7:50 PM (EST)&amp;lt;/b&amp;gt;  SOSCIP GPU cluster is up and accessible.  Work on the other systems continues.&lt;br /&gt;
&amp;lt;/p&amp;gt;&amp;lt;p&amp;gt; &amp;lt;b&amp;gt;Fri, Nov 15 2019, 5:00 PM (EST)&amp;lt;/b&amp;gt;  Infrastructure maintenance done, upgrades still in process.&lt;br /&gt;
&amp;lt;/p&amp;gt;&amp;lt;p&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Fri, Nov 15 2019, 7:00 AM (EST)&amp;lt;/b&amp;gt;  Maintenance shutdown of the SciNet data centre has started.  Note: scratch purging has been postponed until Nov 17.&amp;lt;br/&amp;gt; &lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Announcement:&amp;lt;/b&amp;gt; &lt;br /&gt;
The SciNet datacentre will undergo a maintenance shutdown on&lt;br /&gt;
Friday November 15th 2019, from 7 am to 11 pm (EST), with no access&lt;br /&gt;
to any of the SciNet systems (Niagara, P8, SGC, HPSS, Teach cluster,&lt;br /&gt;
or the filesystems) during that time. &lt;br /&gt;
&amp;lt;!--  When removing system status entries, please archive them to: https://docs.scinet.utoronto.ca/index.php/Previous_messages --&amp;gt;&lt;br /&gt;
{|style=&amp;quot;border-spacing: 10px;width: 100%&amp;quot;&lt;br /&gt;
|valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== QuickStart Guides ==&lt;br /&gt;
* [[Niagara Quickstart]]&lt;br /&gt;
* [[HPSS | HPSS archival storage]]&lt;br /&gt;
* [[SOSCIP_GPU | SOSCIP GPU cluster]]&lt;br /&gt;
* [[P8|Experimental Power 8 GPU cluster]]&lt;br /&gt;
* [[Teach|Teach cluster]]&lt;br /&gt;
* [[FAQ | FAQ (frequently asked questions)]]&lt;br /&gt;
* [[Acknowledging SciNet]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;margin: 1em; padding:1em; padding-top:.1em; border:2px solid #000; background-color:#fff; border-radius:7px; width: 49.5%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Tutorials, Manuals, etc. ==&lt;br /&gt;
* [https://courses.scinet.utoronto.ca SciNet education material]&lt;br /&gt;
* [https://www.youtube.com/c/SciNetHPCattheUniversityofToronto SciNet's YouTube channel]&lt;br /&gt;
* [[Modules specific to Niagara|Software Modules specific to Niagara]] &lt;br /&gt;
* [[Commercial software]]&lt;br /&gt;
* [[Burst Buffer]]&lt;br /&gt;
* [[SSH Tunneling]]&lt;br /&gt;
* [[Visualization]]&lt;br /&gt;
* [[Running Serial Jobs on Niagara]]&lt;br /&gt;
* [[Jupyter Hub]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Nolta</name></author>
	</entry>
</feed>