FAQ
The Basics
Whom do I contact for support?
Whom do I contact if I have problems or questions about how to use the SciNet systems?
E-mail <support@scinet.utoronto.ca>
In your email, please include the following information:
- your username on SciNet
- the cluster that your question pertains to (Niagara, Mist, Rouge, ...; SciNet is not a cluster!),
- any relevant error messages
- the commands you typed before the errors occurred
- the path to your code (if applicable)
- the location of the job scripts (if applicable)
- the directory from which it was submitted (if applicable)
- a description of what it is supposed to do (if applicable)
- if your problem is about connecting to SciNet, the type of computer you are connecting from.
Note that your password should never, never, never be sent to us, even if your question is about your account.
Avoid sending email only to specific individuals at SciNet. Your chances of a quick reply increase significantly if you email our team! (support@scinet.utoronto.ca)
I have a CCDB account, but I can't login to Niagara. How can I get access?
If you have an active CCDB/Alliance account but you do not have access to Niagara yet, go to the opt-in page on the CCDB site. After clicking the "Join" button, it usually takes only one or two business days for access to be granted.
How can I reset the password for my Alliance (formerly Compute Canada) account?
You can reset your password for your Alliance (formerly Compute Canada) account here:
https://ccdb.alliancecan.ca/security/forgot
How can I change or reset the password for my SciNet account?
To reset your password at SciNet, please go to the Password reset page. Note that SciNet accounts are only necessary for non-Alliance resources, such as those of SOSCIP.
Connecting to Niagara
Do you have a recommended ssh program that will allow SciNet access from Windows machines?
The SSH programs we recommend for Windows users are:
- MobaXterm is a tabbed ssh client with some Cygwin tools, including ssh and X, all wrapped up into one executable.
- Git Bash is an implementation of git which comes with a terminal emulator.
- PuTTY - this is a terminal for Windows that connects via ssh. It is a quick install and will get you up and running quickly.
WARNING: Make sure you download PuTTY from the official website, because there are "trojanized" versions of PuTTY around that will send your login information to a site in Russia (as reported here).
To set up your passphrase-protected ssh key with PuTTY, see here.
- CygWin - this is a whole Linux-like environment for Windows, which also includes an X window server so that you can display remote windows on your desktop. Make sure you include openssh and the X window system in the installation for full functionality. This is recommended if you will be doing a lot of work on Linux machines, as it makes a very similar environment available on your computer.
To set up your ssh keys, follow the Linux instructions in the SSH keys page.
My ssh key does not work! WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
If your key doesn't work, you should still be able to log in using your password and investigate the problem. For example, if during a login session you get a message similar to the one below, just follow the instructions and delete the offending key on line 3 of your known_hosts file (you can use vi to jump to that line with ESC plus : plus 3). It only means that you may have logged in from your home computer to SciNet in the past, and that key is obsolete.
$ ssh USERNAME@niagara.scinet.utoronto.ca
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
53:f9:60:71:a8:0b:5d:74:83:52:fe:ea:1a:9e:cc:d3.
Please contact your system administrator.
Add correct host key in /home/<user>/.ssh/known_hosts to get rid of this message.
Offending key in /home/<user>/.ssh/known_hosts:3
RSA host key for niagara.scinet.utoronto.ca has changed and you have requested
strict checking.
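Rather than editing known_hosts by hand, you can also remove the stale entry with ssh-keygen's -R option, which deletes all keys stored for a given hostname:

$ ssh-keygen -R niagara.scinet.utoronto.ca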
Can't get graphics: "Can't open display/DISPLAY is not set"
To use graphics on SciNet machines and have it displayed on your machine, you need to have an X server running on your computer (an X server is the standard way graphics is done on Linux). Once an X server is running, you can log in with the "-Y" option to ssh ("-X" sometimes also works).
How to get an X server running on your computer depends on the operating system. On Linux machines with a graphical interface, X will already be running. On Windows, the easiest solution is to use MobaXterm, which comes with an X server (alternatives, such as Cygwin with the X11 server installed, or PuTTY+Xming, can also work, but are a bit more work to set up). For Macs, you will need to install XQuartz.
Remote graphics stops working after a while: "Can't open display"
If you still cannot get graphics, or it works only for a while and then suddenly it "can't open display localhost:....", your X11 graphics connection may have timed out (Macs seem to be particularly prone to this). You'll have to tell your own computer to keep the connection alive and not to time out the X11 graphics connection.
The following should fix it. The ssh configuration settings are in a file called /etc/ssh/ssh_config (or /etc/ssh_config in older OS X versions, or $HOME/.ssh/config for specific users). In the config file, find (or create) the section "Host *" (meaning all hosts) and add the following lines:
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
    ForwardX11 yes
    ForwardX11Trusted yes
    ForwardX11Timeout 596h
(The Host * is only needed if there was no Host section yet to append these settings to.)
If this does not resolve it, try again with "ssh -vvv -Y ....". The "-vvv" option spews out a lot of diagnostic messages. Look for anything resembling a timeout, and let us know (support@scinet.utoronto.ca).
Can't forward X: "Warning: No xauth data; using fake authentication data", or "X11 connection rejected because of wrong authentication."
I used to be able to forward X11 windows from SciNet to my home machine, but now I'm getting these messages; what's wrong?
Answer:
This very likely means that ssh/xauth can't update your ${HOME}/.Xauthority file.
The simplest possible reason for this is that you've filled your 100GB /home quota and so can't write anything to your home directory. Use
$ diskUsage
to check how close you are to your quota on ${HOME}.
Alternately, this could mean your .Xauthority file has become broken/corrupted/confused somehow, in which case you can delete that file; when you next log in you'll get a similar warning message about creating .Xauthority, but things should work.
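For instance:

$ rm ${HOME}/.Xauthority

Then log out and log back in with ssh -Y, and a fresh .Xauthority file will be created.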
Why am I getting the error "Permission denied (publickey,gssapi-with-mic,password)"?
In most cases, the "Permission denied" error is caused by incorrect permissions on the (hidden) .ssh directory. SSH is used for logging in as well as for copying the standard error and output files after a job.
For security reasons, the .ssh directory should be readable and writable only by you; if it has read permission for everybody, authentication fails. You can change this by
chmod 700 ~/.ssh
And to be sure, also do
chmod 600 ~/.ssh/id_rsa ~/.ssh/authorized_keys
How do I use VNC to connect to Niagara?
The use case here is that you want to start vncserver on a compute node and have it display remotely on your home computer.
On Niagara this presents two sets of issues:
- the compute nodes are not accessible from the internet; hence, you cannot connect directly from your home computer to them. You will need to establish an ssh tunnel via a login node.
- compute nodes have $HOME mounted read-only, which presents some challenges when starting the VNC server, since it wants to change several dot files in your account.
The recipe below complements the general set of instructions already on the Alliance (formerly Compute Canada) wiki for Windows clients using Cygwin:
https://docs.scinet.utoronto.ca/index.php/VNC
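As a rough sketch of the tunnel step (the compute node name niaXXXX is a placeholder; vncserver reports the actual display number when it starts, and display :1 corresponds to port 5901):

ssh -L 5901:niaXXXX:5901 USERNAME@niagara.scinet.utoronto.ca

You would then point the VNC client on your own computer at localhost:5901.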
Environment
I changed my .bashrc/.bash_profile and now nothing works
The default startup scripts provided by SciNet, and guidelines for them, can be found here. Certain things - like sourcing /etc/profile and /etc/bashrc are required for various SciNet routines to work!
If the situation is so bad that you cannot even log in, please email support.
Could I have my login shell changed to (t)csh?
The login shell used on our systems is bash. While tcsh is available, we do not support it as the default login shell at present. So "chsh" will not work, but you can always run tcsh interactively. Also, (t)csh scripts will be executed correctly, provided that they have the correct "shebang" #!/bin/tcsh at the top.
Can I work in a Jupyter Notebook?
Yes, a Niagara Jupyter Hub is available for use. See this page for details.
Compiling your Code
How do I link against the Intel Math Kernel Library?
If you need to link to the Intel Math Kernel Library (MKL) with the Intel compilers, just add the
-mkl
flag. There are in fact three flavours: -mkl=sequential, -mkl=parallel and -mkl=cluster, for the serial version, the threaded version and the mpi version, respectively. (Note: The cluster version is available only when using the intelmpi module and mpi compilation wrappers.)
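For instance, a minimal sketch (it assumes the intel module is loaded and an illustrative source file mycode.c):

icc -O2 -mkl=sequential mycode.c -o mycode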
If you need to link in the Intel Math Kernel Library (MKL) libraries to gcc/gfortran/c++, you are well advised to use the Intel(R) Math Kernel Library Link Line Advisor for help in devising the list of libraries to link with your code.
Note that this gives the link line for the command line. When using this in Makefiles, replace $MKLPATH by ${MKLROOT}.
Note too that, unless the integer arguments you will be passing to the MKL libraries are actually 64-bit integers rather than the normal int or INTEGER types, you want to specify 32-bit integers (lp64).
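For reference, a typical Link Line Advisor result for gcc with the sequential, lp64 variant of MKL on 64-bit Linux looks like the line below; treat it as a sketch, since the exact library list depends on your MKL version, and verify it with the Advisor:

gcc mycode.c -m64 -I${MKLROOT}/include -L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -ldl -o mycode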
Testing your Code
How can I run MATLAB / IDL / Gaussian / my favourite commercial software at SciNet?
Because SciNet serves such a disparate group of user communities, there is just no way we can buy licenses for everyone's commercial package. The only commercial software we have purchased is that which in principle can benefit everyone -- fast compilers and math libraries.
If your research group requires a commercial package that you already have or are willing to buy licenses for, contact us at support@scinet.utoronto.ca and we can work together to find out if it is feasible to implement the package's licensing arrangement on the SciNet clusters, and if so, what the best way is to do it. Several commercial packages have already been installed; you can see the list here.
Note that it is important that you contact us before installing commercially licensed software on SciNet machines, even if you have a way to do it in your own directory without requiring sysadmin intervention. It puts us in a very awkward position if someone is found to be running unlicensed or invalidly licensed software on our systems, so we need to be aware of what is being installed where.
Also note that MATLAB is somewhat of a special case. See the MATLAB page for more information.
Can I run something for a short time on the login nodes?
I am in the process of playing around with the MPI calls in my code to get it to work. I do a lot of tests and each of them takes a couple of seconds only. Can I do this on the login nodes?
Answer:
Yes, as long as it's very brief (a few minutes). People use the login nodes for their work, and you don't want to bog them down. Testing a real code can chew up a lot more resources than compiling, etc.
Once you have run some short test jobs, you should request an interactive job and run your tests there, either in the regular compute queue or in the debug queue that is reserved for this purpose.
How do I run a longer (but still shorter than an hour) test job quickly ?
On Niagara there is a high-turnover short queue called debug that is designed for this purpose. You can use it by adding
#SBATCH -p debug
to your submission script. This is for testing your code only; do not use the debug queue for production runs.
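A minimal debug submission script could look as follows (node count, time limit, and program name are illustrative):

#!/bin/bash
#SBATCH -p debug
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40
#SBATCH --time=00:30:00
#SBATCH --job-name my_test

cd $SLURM_SUBMIT_DIR
./my_test_code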
What does code scaling mean?
Please see A Performance Primer.
What do you mean by throughput?
Please see A Performance Primer.
Here is a simple example:
Suppose you need to do 10 computations. Say each of these runs for 1 day on 40 cores, but they take "only" 18 hours on 80 cores. What is the fastest way to get all 10 computations done - as 40-core jobs or as 80-core jobs? Let us assume you have 2 nodes at your disposal. The answer, after some simple arithmetic, is that running your 10 jobs as 40-core jobs (two at a time, five rounds of one day each) will take 5 days, whereas running them as 80-core jobs (one at a time, ten runs of 18 hours) would take 7.5 days. Draw your own conclusions...
Submitting your jobs
How do I charge jobs to my RAC allocation?
Please see the accounting section of the Slurm page.
How can I automatically resubmit a job?
Commonly you may have a job that you know will take longer to run than what is permissible in the queue. As long as your program contains checkpoint or restart capability, you can have one job automatically submit the next. In the following example it is assumed that the program finishes before the 24 hour limit and then resubmits itself by logging into one of the login nodes.
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=24:00:00
#SBATCH --job-name my_job

# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is the directory the job was submitted from
cd $SLURM_SUBMIT_DIR

# YOUR CODE HERE
./run_my_code

# RESUBMIT UP TO 10 TIMES
num=$NUM
if [ "$num" -lt 10 ]; then
    num=$(($num+1))
    ssh -t nia-login01 "cd $SLURM_SUBMIT_DIR; sbatch --export=NUM=$num script_name.sh"
fi
To start the chain, submit the first job with:
sbatch --export=NUM=0 script_name.sh
You can alternatively use Job dependencies through the queuing system which will not start one job until another job has completed.
If your job can't be made to automatically stop before the 24 hour queue window, but it does write out checkpoints, you can use the timeout command to stop the program while you still have time to resubmit; for instance
timeout 1410m ./run_my_code argument1 argument2
will run the program for 23.5 hours (1410 minutes), and then send it a SIGTERM signal to exit the program.
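If you want the script to resubmit only when the run was actually cut short, you can check timeout's exit status: GNU timeout returns 124 when the time limit was hit. A sketch, reusing the login node and script names from the resubmission example above:

timeout 1410m ./run_my_code argument1 argument2
if [ $? -eq 124 ]; then
    # the 23.5-hour limit was reached; resubmit from the latest checkpoint
    ssh -t nia-login01 "cd $SLURM_SUBMIT_DIR; sbatch script_name.sh"
fi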
How can I pass in arguments to my submission script?
If you wish to make your scripts more generic, you can use Slurm's ability to pass environment variables into your script as arguments. See this page.
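For example (the variable names INFILE and NSTEPS are purely illustrative):

sbatch --export=ALL,INFILE=data01.txt,NSTEPS=1000 myscript.sh

Inside myscript.sh the values are then available as ordinary environment variables:

./run_my_code $INFILE $NSTEPS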
Scheduling and Priority
Why did squeue --start say it would take 3 hours for my job to start before, and now it says my job will start in 10 hours?
Please look at the How do priorities work/why did that job jump ahead of mine in the queue? page.
How do priorities work/why did that job jump ahead of mine in the queue?
The queueing system used on SciNet machines is a Priority Queue. Jobs enter the queue at the back of the queue, and slowly make their way to the front as those ahead of them are run; but a job that enters the queue with a higher priority can `cut in line'.
The main factor which determines priority is whether or not the user belongs to a group (PI) that has an Alliance (formerly Compute Canada) RAC allocation. These are competitively allocated grants of computer time; there is a call for proposals in the fall of every calendar year. Users in groups with an allocation have higher priorities, in an attempt to make sure that they can use the amount of computer time the committees granted them. Their priority decreases as they approach their allotted usage over the current window of time; by the time they have exhausted that allotted usage, their priority is the same as that of users in groups with no allocation ("RAS", or `default' users). Default groups (and hence the users in those groups) share a fixed, low priority.
This priority system is called `fairshare'; the scheduler attempts to make sure everyone has their fair share of the machines, where the share that's fair has been determined by the allocation committee. The fairshare window is a rolling window of one week; that is, any time you have a job in the queue, the fairshare calculation of its priority is given by how much of your allocation of the machine has been used in the last 7 days.
A particular allocation might have some fraction of Niagara - say 4% of the machine (if the PI had been allocated 2400 core-years on Niagara). The allocations have labels (called `Resource Allocation Proposal Identifiers', or RAPIs) that look something like
rrg-abc-ab
where rrg (or rpp) indicates an allocation, abc is the group name, and the suffix specifies which of the allocations granted to the PI is to be used. These can be specified on a job-by-job basis. On Niagara, one adds the line
#SBATCH -A RAPI
to your script. If the allocation to charge isn't specified, a default is used; each user has such a default, which can be changed at the SciNet portal where one changes one's password.
A job's priority is determined primarily by the fairshare priority of the allocation it is being charged to; the previous 7 days' worth of use under that allocation is calculated and compared to the allocated fraction (here, 4%) of the machine over that window (here, 7 days). The fairshare priority is a decreasing function of the allocation left; if there is no allocation left (e.g., jobs running under that allocation have already used 403,200 CPU hours (2400 cores x 7 days x 24 hours) in the past 7 days), the priority is the same as that of a user with no granted allocation. (This last part has been the topic of some debate; as the machine gets more utilized, it will probably be the case that we allow users in RAC groups who have greatly overused their quota to have their priorities drop below that of unallocated groups, to give the unallocated groups some chance to run on our increasingly crowded system; this would have no undue effect on our allocated groups, as they would still be able to use the amount of resources they had been allocated by the committees.) Note that all jobs charging the same allocation get the same fairshare priority.
There are other factors that go into calculating priority, but fairshare is the most significant. Other factors include
- length of time waiting in the queue (measured in units of the requested runtime). A queued job gains priority as it sits in the queue, to avoid job starvation.
- User adjustment of priorities ( See below ).
The major effect of these subdominant terms is to shuffle the order of jobs running under the same allocation.
How do we manage job priorities within our research group?
Obviously, managing shared resources within a large group - whether it is conference funding or CPU time - takes some doing.
It's important to note that the fairshare periods are intentionally kept quite short - just one week long. So, for example, let us say that your resource allocation amounts to about 10% of the machine. Then for someone to use up the group's whole week's allotment in one day, they'd have to use 70% of the machine in that one day - which is unlikely to happen by accident. If that does happen, those using the same allocation as the person who used 70% of the machine over the one day will suffer by having much lower priority for their jobs, but only for the next 6 days - and even then, if there are idle CPUs, they'll still be able to compute.
There are online tools, both the CCDB and my.SciNet, for seeing how the allocation is being used, and those people who are in charge in your group will be able to use that information to manage users, telling them to dial it down or up. We know that managing a large research group is hard, and we want to make sure we provide you the information you need to do your job effectively.
When will my job be executed? "sinfo" shows that some nodes are idle but my job still does not get executed
The answer to your question is not easy, because the scheduler is a dynamic system. Among other things, there are two main factors contributing to this lack of accuracy as to WHEN a job will start:
1) Suppose you issue a query, and it tells you your job will start in 6 hours. If, in the meantime, a couple of users with higher priority than you submit some jobs, those jobs will jump the line and start ahead of yours, all of a sudden increasing your wait time to 10 or 20 hours. This is not a bad thing. This is exactly how fairshare is expected to work, and you should not be surprised or upset. Without an allocation there will always be somebody with higher priority than you who can come in at any time and "ruin" your day. On the other hand, in all fairness, the longer your job remains in the queue, the higher its priority becomes, up to a point at which it doesn't matter if the King of the Universe shows up: your job will start.
2) The other factor is that many people don't take the time to make a better estimate of how long their jobs take to execute. A good practice is to keep track of how long similar jobs took last time, and add 20-30% to the requested time for the next submission, just in case. However, many people just submit jobs with the maximum of 24 hours (we have no control over this, despite our education efforts), and the scheduler has no way to know that a job will finish sooner, so it uses the 24 hours to estimate WHEN your job will start. As you can imagine, many jobs will end sooner, so in all fairness the scheduler will immediately move the line along; this favours everyone, including you, and with this new information, if you query again, it may tell you that your job will start in 2 hours or 30 minutes instead of 6 hours.
As for your question about seeing idle nodes with "sinfo" while your job does not start: those nodes are not there for you, pure and simple. They are idle because they are being held for jobs with higher priority, some of them requiring 100 or 500 nodes, for instance. Until that node count is accumulated, they will sit idle, waiting for the higher-priority job to start. A clever user such as yourself, having asked us about this dynamic, may take advantage of the situation by submitting jobs of 1 or 5 hours and squeezing into those gaps. That is called backfill, and we have users who have become masters of this art, milking everything they can from the system.
Bottom line, the estimated start time gives you more of an order of magnitude as to when your job will be executed (in 50 hours or in 5 hours?), and you do the best with that information.
Regardless, the best strategy is to always keep jobs in the queue, so they accrue priority over time, and not to ask for 24 hours if you know your job will only take 10 hours to finish.
My resource group has a default allocation of ~52 cores. What does this exactly mean? Does this imply that my group can use on average 52 cores per rolling week without losing priority?
~52 cores means your group's average daily allowance for the whole year, sustained. Users within your group may slice and dice that in many different ways. For instance:
- 2 jobs of 16 hours on 1 node each ==> 2*16*40=1280 ~52*24
- 8 jobs of 4 hours on 1 node each ==> 8*4*40=1280 ~52*24
Your whole group combined can do this every day of the year, and your priority won't be reduced. In fact, if you submit many smaller jobs in any dimension (1 hour and/or 1 node) and keep them in the queue most of the time, you will likely milk more than ~52 cores per day out of the system by taking advantage of backfill opportunities.
On the other hand, there will be times when a user in the group may need to run a much larger job than the group's daily allowance. The scheduler will accommodate that the first time around. For instance:
- 1 job of 24 hours on 10 nodes ==> 1*24*10*40=9600 ~7.7*(52*24)
that is, over 7 times the daily allowance for the group. Consequently, once the job starts, the priority for everyone else in the group will fall, quite a lot in fact, and the next job from anybody in the group, including the original submitter, may have to wait a whole week before it runs. In other words, the scheduler acts in a very fair manner to ensure it tracks and maintains that ~52-core daily average over time.
Running your jobs
My job can't write to /home
My code works fine when I test on the Niagara login nodes, but when I submit a job it fails. What's wrong?
Answer:
As discussed elsewhere, /home is mounted read-only on the compute nodes; you can only write to /home from the login nodes. In general, to run jobs you can read from /home but you'll have to write to /scratch (or, if you were allocated space through the RAC process, on /project). More information on SciNet filesystems can be found on our Data Management page.
Can I use hybrid codes consisting of MPI and OpenMP?
Yes.
How do I run serial jobs?
Niagara is a parallel computing resource, and SciNet's priority will always be parallel jobs. Having said that, if you can make efficient use of the resources using serial jobs and get good science done, that's good too, and we're happy to help you.
The Niagara nodes each have 40 processing cores, and making efficient use of these nodes means using all forty cores. As a result, serial jobs must be bundled together so that they run in multiples of 40 at a time.
It depends on the nature of your job what the best strategy is. Several approaches are presented on the serial page.
Why can't I request only a single cpu for my job on Niagara?
On Niagara, jobs are allocated by whole node - that is, in chunks of 40 processors. If you want to run a job that requires only one processor, you need to bundle the jobs into groups of 40, so as not to waste the other 39 cores; a rough sketch follows. See the serial run page for more information on how this is accomplished.
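As a sketch of the bundling idea (program and file names are illustrative; the serial run page describes more robust approaches):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=1:00:00
#SBATCH --job-name serial_bundle

cd $SLURM_SUBMIT_DIR

# launch 40 serial tasks, one per core, and wait for all of them to finish
for i in $(seq 1 40); do
    ./my_serial_code input.$i > output.$i &
done
wait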
If you are unable to bundle your jobs into groups of 40, you should consider running on Narval, Beluga, Graham, or Cedar, also part of the Alliance, instead of Niagara.
How do I use the ramdisk on Niagara?
To use the ramdisk, create and read to/write from files in /dev/shm/.. just as one would on (e.g.) ${SCRATCH}. Only the amount of RAM needed to store the files will be taken up by the temporary file system; thus if you have 40 serial jobs each requiring 1 GB of RAM, and 1 GB is taken up by various OS services, you would still have approximately 160 GB available to use as ramdisk on a ~202 GB node. However, if each of those 40 jobs were to write 8 GB of data to the RAM disk (320 GB in total), this would exceed the available memory and your job would likely crash.
It is very important to delete your files from ramdisk at the end of your job. If you do not do this, the next user to use that node will have less RAM available than she might expect, and this might kill her job.
More details on how to setup your script to use the ramdisk can be found on the Ramdisk page.
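Within a job script, the pattern is simply to stage files in, compute, and clean up (file and program names here are illustrative):

# stage input onto the ramdisk, run, then clean up
cp ${SCRATCH}/input.dat /dev/shm/
./my_code /dev/shm/input.dat
# always remove your files from the ramdisk before the job ends
rm -f /dev/shm/input.dat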
How can I run a job longer than 24 hours?
The Niagara queue has a run-time limit of 24 hours. This is pretty typical for systems of its size; larger systems commonly have shorter run limits. The limits are there to ensure that every user gets a fair share of the system (so that no one user ties up lots of nodes for a long time), and for safety (so that if one memory board in one node fails in the middle of a very long job, you haven't lost a month's worth of work).
Since many of us have simulations that require more than that much time, most widely-used scientific applications have "checkpoint-restart" functionality, where every so often the complete state of the calculation is stored as a checkpoint file, and one can restart a simulation from one of these. In fact, these restart files tend to be quite useful for a number of purposes.
If your job will take longer, you will have to submit your job in multiple parts, restarting from a checkpoint each time. In this way, one can run a simulation much longer than the queue limit. In fact, one can even write job scripts which automatically re-submit themselves until a run is completed, using automatic resubmission.
Errors in running jobs
Monitoring jobs in the queue
How can I check the memory usage from my jobs?
On many occasions it can be really useful to take a look at how much memory your job is using while it is running. There are a couple of ways to do so:
1) Use some of the command-line utilities we have developed, e.g. the jobperf utility, which allows you to check the job's performance and the head node's utilization.
2) SSH into the nodes where your job is running and check memory usage and system stats right there, for instance with the 'top' or 'free' commands on those nodes.
Also, it is always a good idea, and strongly encouraged, to inspect the output generated by your job submissions. The output file is named JobName-jobIdNumber.out, where JobName is the name you gave to the job (via the '--job-name' Slurm flag) and jobIdNumber is the id number of the job. If no job name is given, the JobName will be "slurm". This file is saved in the working directory after the job is finished.
Can I run cron jobs on login nodes to monitor my jobs?
No, we do not permit cron jobs to be run by users. To monitor the status of your jobs using a cron job running on your own machine, use the command
ssh myusername@niagara.scinet.utoronto.ca "squeue -u myusername"
or some variation of this command. Of course, you will need to have SSH keys set up on the machine running the cron job, so that password entry won't be necessary.
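An illustrative crontab entry for your own machine (the log file name is arbitrary), checking the queue once an hour:

0 * * * * ssh myusername@niagara.scinet.utoronto.ca "squeue -u myusername" >> $HOME/niagara_queue.log 2>&1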
How does one check the amount of used CPU-hours in a project, and how does one get statistics for each user in the project?
This information is available on the SciNet portal. See also SciNet Usage Reports.
Usage
How do I compute the core-years usage of my code?
The "core-years" quantity is a way to account for the time your code runs, by considering the total number of cores and time used, accounting for the total number of hours in a year. For instance if your code uses HH hours, in NN nodes, where each node has CC cores, then "core-years" can be computed as follow:
HH*(NN*CC)/(365*24)
If you have several independent instances (batches) running on different nodes, with BB the number of batches and each batch lasting HH hours, then your core-years usage can be computed as
BB*HH*(NN*CC)/(365*24)
On the Niagara system, each node has 40 cores, so CC will always be 40.
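As an illustration with made-up numbers: a single run of 12 hours on 4 Niagara nodes uses 12*(4*40)/(365*24) = 1920/8760 ≈ 0.22 core-years.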
How much have I been running?
You can get information about your SciNet resource usage by visiting the SciNet Usage Reports page. The CCDB and my.SciNet sites contain similar information.
Data on SciNet disks
How do I find out my disk usage?
The standard Unix/Linux utilities for finding the amount of disk space used by a directory are very slow, and notoriously inefficient on the GPFS filesystems that we run on the SciNet systems. There are utilities that very quickly report your disk usage:
The diskUsage command, available on the login nodes and datamovers, provides information in a number of ways on the home, scratch, and project file systems. For instance, it can show how much disk space is being used by yourself and your group (with the -a option), how much your usage has changed over a certain period ("delta information"), or plots of your usage over time. This information is updated every 3 hours.
More information about these filesystems is available at the Data Management page.
How do I transfer data to/from SciNet?
All incoming connections to SciNet go through relatively low-speed connections to the niagara.scinet gateways, so using scp to copy files the same way you ssh in is not an effective way to move lots of data. Better tools are described in our page on Moving data.
My group works with data files of size 1-2 GB. Is this too large to transfer by scp to niagara.scinet.utoronto.ca ?
Generally, occasional transfers of data of less than 10GB are perfectly acceptable to do through the login nodes. See Moving data.
How can I check if I have files in /scratch that are scheduled for automatic deletion?
Please see Scratch Disk Purging Policy
How to allow my supervisor to manage files for me using ACL-based commands?
Please see File/Ownership Management
Can I transfer files between BGQ and HPSS?
Yes, however for now you'll need to do this in 2 steps:
- transfer from BGQ to Niagara SCRATCH
- then from Niagara SCRATCH to HPSS
Miscellaneous
How do I acknowledge SciNet?
Visit our Acknowledging SciNet page for direction on how to thank us.
Keep 'em Coming!
Next question, please
Send your question to <support@scinet.utoronto.ca>; we'll answer it asap!