Data Management
Understanding the various file systems, and how to use them properly, is critical to optimizing your workflow and being a good SciNet citizen. This page describes the Niagara file systems and explains how each should be used.
Performance
The file systems on SciNet, with the exception of archive, are GPFS, a high-performance file system which provides rapid reads and writes of large datasets in parallel from many nodes. As a consequence of this design, however, the file system performs quite poorly when accessing datasets that consist of many small files. For instance, you will find that reading data in from one 16MB file is enormously faster than from 400 40KB files. Such small files are also quite wasteful of space, as the block size for the scratch and project file systems is 16MB. Keep this in mind when planning your input/output strategy for runs on SciNet.
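One common way to avoid the many-small-files problem is to bundle them into a single archive before they land on GPFS. The sketch below is purely illustrative (all file and directory names are made up) and shows the tar round-trip:

```shell
# Illustrative only: pack many small output files into one tar archive so
# that GPFS deals with a single large file instead of hundreds of tiny ones.
workdir=$(mktemp -d) && cd "$workdir"
mkdir results
for i in 1 2 3; do echo "data $i" > "results/run_$i.dat"; done

# Bundle the whole directory into a single compressed archive:
tar czf results.tar.gz results

# Later, unpack it where the files are actually needed (e.g. on ramdisk):
mkdir unpacked && tar xzf results.tar.gz -C unpacked
```

The same pattern also speeds up transfers and reduces wasted blocks, since only one 16MB-block-aligned file is stored instead of many partially filled blocks.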
For instance, if you run multi-process jobs, having each process write to a file of its own is not a scalable I/O solution. A directory gets locked by the first process accessing it, so all other processes have to wait. Not only does the code become considerably less parallel, but chances are the file system will time out while waiting for your other processes, causing your program to crash mysteriously. Consider using MPI-IO (part of the MPI-2 standard), which allows files to be opened simultaneously by different processes, or dedicating a single process to I/O, to which all other processes send their data and which subsequently writes that data to a single file.
Purpose of each file system
Niagara, Mist and Rouge access several different file systems. Note that not all of these file systems are available to all users.
/home ($HOME)
/home is intended primarily for individual user files, common software or small datasets used by others in the same group, provided it does not exceed individual quotas. Otherwise you may consider /scratch or /project. /home is read-only on the compute nodes and has daily backups.
/scratch ($SCRATCH)
/scratch is to be used primarily for temporary or transient files: checkpoint dumps, the results of your computations and simulations, or any material that can be easily recreated or reacquired. You may also use scratch for any intermediate step in your workflow, provided it does not induce too much I/O (Input/Output) or too many small files on this disk-based storage pool; otherwise you should consider the burst buffer (/bb). Once you have your final results, those you want to keep for the long term, you may migrate them to /project or /archive. /scratch is purged on a regular basis and has no backups.
/project ($PROJECT)
/project is available to groups whose PIs have a storage allocation, and is intended for common group software, large static datasets, or any material that would be very costly for the group to reacquire or regenerate, associated with jobs currently running on niagara or mist. Material on /project is expected to remain relatively immutable over time; you should think of $PROJECT as if it were $STATIC. Temporary or transient files should be kept on scratch, not project. High data turnover induces stress and unnecessary consumption of tapes on the TSM backup system, long after the material has been deleted, due to backup retention policies and the extra versions kept of the same file. Even renaming top directories is enough to trick the system into assuming a completely new directory tree has been created and the old one deleted, so think carefully about your naming convention ahead of time, and stick with it. Users abusing the project file system and using it as scratch will be flagged and contacted. Note that on niagara /project is only available to groups with a RAC allocation.
/bb ($BBUFFER)
/bb, the burst buffer, is a very fast, very high performance alternative to /scratch, made of solid-state drives (SSDs). You may request this resource if you anticipate a lot of IOPS (I/O operations per second) or when you notice your job is not performing well running on scratch or project because of I/O (Input/Output) bottlenecks. See here for more details.
/archive ($ARCHIVE)
/archive is available to groups whose PIs have a storage allocation; on niagara it is the 'nearline' storage pool. It is used to temporarily offload semi-active material from any of the above file systems. In practice, users offload/recall material as part of their regular workflow, or when they hit their quotas on scratch or project. That material can remain on HPSS for a few months to a few years. Note that on niagara /archive is only available to groups with a RAC allocation.
/dev/shm (RAM)
On the Niagara nodes a ramdisk is available. Ramdisk is much faster than real disk, and faster than the burst buffer. Up to 70 percent of the RAM on the node (i.e. 202GB) may be used as a temporary local file system. This is particularly useful in the early stages of migrating desktop-computing codes to an HPC platform such as Niagara, especially those that use a lot of file I/O (Input/Output). Heavy file I/O is a bottleneck in large-scale computing, especially on parallel file systems (such as the GPFS used on Niagara), since files are synchronized across the whole network.
$SLURM_TMPDIR (RAM)
For consistency with the general purpose clusters Cedar and Graham, the environment variable $SLURM_TMPDIR will be set on Niagara compute jobs. Note that this variable will point to ramdisk, not to local hard drives. The $SLURM_TMPDIR directory will be empty when your job starts, and its contents are deleted after the job has finished.
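As a hedged sketch of how $SLURM_TMPDIR is typically used (the application name "my_app" and the file names are placeholders, not real SciNet software), a job script that stages data into the ramdisk and copies results back might look like this; the script is written to a file here so it can be submitted with sbatch:

```shell
# Sketch of a Niagara job script using the $SLURM_TMPDIR ramdisk.
# "my_app" and the data file names are placeholders for illustration.
cat > ramdisk_job.sh <<'EOF'
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Stage the input into the ramdisk and compute there:
cp "$SCRATCH/input.dat" "$SLURM_TMPDIR/"
cd "$SLURM_TMPDIR"
./my_app input.dat > output.dat

# Copy the results back before the job ends;
# $SLURM_TMPDIR is wiped once the job finishes.
cp output.dat "$SCRATCH/"
EOF
# Submit from a login node with:
#   sbatch ramdisk_job.sh
```

Copying results out before the job ends is essential, since nothing in $SLURM_TMPDIR survives the job.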
Per-job temporary burst buffer space ($BB_JOB_DIR)
For every job on Niagara, the scheduler creates a temporary directory on the burst buffer called $BB_JOB_DIR. The $BB_JOB_DIR directory will be empty when your job starts, and its contents are deleted after the job has finished. This directory is accessible from all nodes of a job.
$BB_JOB_DIR is intended as a place for applications that generate many small temporary files or that create files that are accessed very frequently (i.e., high IOPS applications), but that do not fit in ramdisk.
It should be emphasized that if the temporary files do fit in ramdisk, then that is generally a better location for them, as both the bandwidth and the IOPS of ramdisk far exceed those of the burst buffer. To use ramdisk, you can either directly access /dev/shm or use the environment variable $SLURM_TMPDIR.
Note that Niagara compute nodes have no local disks, so $SLURM_TMPDIR lives in memory (ramdisk), in contrast to the general purpose systems of the Alliance (formerly Compute Canada), i.e., Cedar, Graham, Beluga and Narval, where this variable points to a directory on a node-local ssd disk.
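A minimal sketch of a job that directs a high-IOPS application at the per-job burst-buffer space follows ("my_app" and its --tmpdir flag are placeholders, not a real SciNet application); as above, the script is written to a file for submission:

```shell
# Sketch of a job script using the per-job burst buffer $BB_JOB_DIR.
# "my_app" and its --tmpdir flag are placeholders for illustration.
cat > bb_job.sh <<'EOF'
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --time=02:00:00

# $BB_JOB_DIR is created by the scheduler, shared by all nodes of the
# job, and deleted when the job ends, so results must be copied out.
export TMPDIR="$BB_JOB_DIR"
./my_app --tmpdir "$BB_JOB_DIR" input.dat
cp "$BB_JOB_DIR"/final_output.dat "$SCRATCH"/
EOF
# Submit with:  sbatch bb_job.sh
```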
Quotas and purging
You should familiarize yourself with the various file systems, what purpose they serve, and how to properly use them. This table summarizes the various file systems.
| location | quota | block size | expiration time | backed up | on login nodes | on compute nodes |
|---|---|---|---|---|---|---|
| $HOME | 100 GB / 250,000 files per user | 1 MB | | yes | yes | read-only |
| $SCRATCH | 25 TB / 6,000,000 files per user (provided the group quota below is not reached) | 16 MB | 2 months | no | yes | yes |
| | groups of up to 4 users: 50 TB for the group | | | | | |
| | groups of up to 11 users: 125 TB for the group | | | | | |
| | groups of up to 28 users: 250 TB for the group | | | | | |
| | groups of up to 60 users: 400 TB for the group | | | | | |
| | groups with over 60 users: 500 TB for the group | | | | | |
| $PROJECT | by group allocation | 16 MB | | yes | yes | yes |
| $ARCHIVE | by group allocation | | | dual-copy | no | no |
| $BBUFFER | 10 TB per user | 1 MB | very short | no | yes | yes |
- Inode vs. Space quota (PROJECT and SCRATCH)
- dynamic quota per group (SCRATCH)
- Compute nodes do not have local storage.
- Archive space is on HPSS, and is not accessible on the Niagara login, compute, or datamover nodes.
- Backup means a recent snapshot, not a replica of all data or of every version that ever existed.
- $BBUFFER stands for the Burst Buffer, a faster parallel storage tier for temporary data.
How much disk space do I have left?
The /scinet/niagara/bin/diskUsage command, available on the login nodes and datamovers, provides information on the home, scratch, project and archive file systems in a number of ways: how much disk space is being used by you and your group (with the -a option), how much your usage has changed over a certain period ("delta information"), and plots of your usage over time. Please see the usage help below for more details.
Usage: diskUsage [-h|-?] [-a] [-u <user>]
  -h|-?     : help
  -a        : list the usage of all members of the group
  -u <user> : show the usage of another user in your group
Did you know that you can check which of your directories have more than 1000 files with the /scinet/niagara/bin/topUserDirOver1000list command and which have more than 1GB of material with the /scinet/niagara/bin/topUserDirOver1GBlist command?
Note: information on usage and quota is only updated every 3 hours!
Scratch Disk Purging Policy
In order to ensure that there is always sufficient space available for running jobs, on the 15th of each month we automatically delete files in /scratch that have not been accessed or modified for more than 2 months. Note that we recently changed the reference time to be MostRecentOf(atime, ctime). This policy is subject to revision depending on its effectiveness. More details about the purging process, and how users can check whether their files will be deleted, follow. If you have files scheduled for deletion you should move them to a more permanent location, such as your departmental server, your /project space, or HPSS (for PIs who have been allocated storage space by the RAC on project or HPSS).
On the first of each month, a list of files scheduled for purging is produced, and an email notification is sent to each user on that list. Users also get a shell notification on every login to Niagara. Furthermore, on or about the 12th of each month a second scan produces a more current assessment and another email notification is sent. This way users can double-check that they have indeed taken care of all the files they needed to relocate before the purging deadline. Those files will be automatically deleted on the 15th of the same month unless they have been accessed or relocated in the interim.

If you have files scheduled for deletion, they will be listed in a file in /scratch/t/todelete/current, which has your userid and groupid in the filename. For example, if user xxyz wants to check whether they have files scheduled for deletion, they can issue the following command on a system which mounts /scratch (e.g. a Niagara login node): ls -1 /scratch/t/todelete/current | grep xxyz. In the example below, the name of this file indicates that user xxyz is part of group abc, has 9,560 files scheduled for deletion, and they take up 1.0TB of space:
[xxyz@nia-login03 ~]$ ls -1 /scratch/t/todelete/current |grep xxyz -rw-r----- 1 xxyz root 1733059 Jan 17 11:46 3110001___xxyz_______abc_________1.00T_____9560files
The file itself contains a list of all files scheduled for deletion (in the last column) and can be viewed with standard commands like more/less/cat - e.g.
more /scratch/t/todelete/current/3110001___xxyz_______abc_________1.00T_____9560files
[_inode information__] [uidNumber] [__________atime__________] [__________ctime__________] [size] [_____file_path_____]
659919349 1268424780 0 -u 3199999 -a2019-26-11 08:49:27.745412 -c2019-26-11 08:49:27.739630 -s 234 -- /gpfs/fs0/scratch/...
Similarly, you can also check all other users in your group by using the ls command with grep on your group. For example: ls -1 /scratch/t/todelete/current | grep abc. That will list all other users in the same group as xxyz who have files to be purged on the 15th. Members of the same group have access to each other's contents.
If you access/read/move/delete some of the candidates between the 1st and the 11th, there won't be any changes in the assessment until the 12th.
If there was an assessment file up until the 11th, but no longer on the 12th, it's because you don't have anything to be purged anymore.
If you access/read/move/delete some of the candidates after the 12th, then you have to check yourself to confirm that your files won't be purged on the 15th (see below).
NOTE: Preparing these assessments takes several hours. If you change the access/modification time of a file in the interim, that will not be detected until the next cycle. A way to get immediate feedback is to use 'ls -lu' on the file to verify the atime and 'ls -lc' for the ctime. If the file's atime/ctime has been updated in the meantime, it will no longer be deleted come the purging date on the 15th.
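For example, you can inspect and refresh the relevant timestamps like this (the file here is a throwaway temporary file, purely for illustration):

```shell
# Inspect the timestamps the purger looks at:
f=$(mktemp)
ls -lu "$f"    # -u lists the access time (atime)
ls -lc "$f"    # -c lists the change time (ctime)

# Reading the file, or an explicit 'touch -a', updates its atime
# so it is no longer a purge candidate:
sleep 1
touch -a "$f"
ls -lu "$f"
```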
Purging on niagara is final. Purged files can not be recovered.
Backup Policy
Our backup is based on versions, not on date or age:
- In general we keep the 2 most recent versions of a file, one per day, as long as it exists on the file system. Once a file is deleted, we expire the oldest version from the backup and keep the most recent for 60 days. After that grace period, the last remaining version is expired as well.
- We may have 1 or 2 versions of a file on the backup for over 10 years, provided the original has never been deleted from the file system.
- On the other hand, we may have no backup at all if the user created the file in the morning and deleted it in the afternoon, since the backup system never had a chance to capture it (it runs once a day, around midnight).
- And the user may have generated several versions of a file during the day; only the most recent one before the backup runs will be captured for that day.
Moving data
Data for analysis and final results need to be moved to and from Niagara. There are several ways to accomplish this.
Using rsync/scp
Move amounts less than 10GB through the login nodes.
- Niagara login nodes and datamovers are visible from outside SciNet.
- Use scp or rsync to niagara.scinet.utoronto.ca or niagara.computecanada.ca (no difference).
- This will time out for amounts larger than about 10GB.
Move amounts larger than 10GB through the datamover nodes.
- From a Niagara login node, ssh to nia-datamover1 or nia-datamover2. From there you can transfer to or from Niagara.
- Alternatively, you may also login/scp/rsync directly to the datamovers from the outside:
  nia-datamover1.scinet.utoronto.ca
  nia-datamover2.scinet.utoronto.ca
- If you do this often, consider using Globus, a web-based tool for data transfer.
Note that you can only connect 4 times in a 2-minute window to the login nodes or the datamover nodes. So bundle your transfers, i.e., specify multiple files to be copied as arguments to scp or rsync, or copy whole directories, or zip/tar the files up and unzip/untar them on the other end.
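A minimal sketch of such bundling follows (the file names and the remote path are illustrative placeholders; the scp line is shown as a comment since it requires cluster credentials):

```shell
# Pack many small files into one archive before the transfer:
mkdir -p results && touch results/a.dat results/b.dat
tar czf bundle.tar.gz results

# One connection, one file (remote path is a placeholder):
# scp bundle.tar.gz USERNAME@nia-datamover1.scinet.utoronto.ca:/scratch/g/group/USERNAME/
# ...and unpack on the other end with:  tar xzf bundle.tar.gz
```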
If you want to transfer smaller files between other Compute Canada clusters and Niagara, use the SSH agent forwarding flag -A when logging into the other cluster. For example, to copy files to Niagara from Cedar, first log in with agent forwarding:
ssh -A USERNAME@cedar.computecanada.ca
then perform the copy:
[USERNAME@cedar5 ~]$ scp file USERNAME@niagara.computecanada.ca:/scratch/g/group/USERNAME/
Using Globus
Please check the comprehensive Globus documentation.
The Niagara endpoint is "computecanada#niagara".
Moving data to HPSS/Archive/Nearline
HPSS is for long-term storage of data.
- HPSS is a tape-based storage solution, and is SciNet's nearline a.k.a. archive facility.
- Storage space on HPSS is allocated through the annual Compute Canada RAC allocation.
File/Ownership Management (ACL)
- By default, at SciNet, users within the same group already have read permission to each other's files (but not write).
- You may use access control lists (ACLs) to allow your supervisor (or another user within your group) to manage files for you (i.e., create, move, rename, delete), while still retaining your access and permissions as the original owner of the files/directories. You may also let users in other groups, or entire other groups, access (read, execute) your files using this same mechanism.
Using mmputacl/mmgetacl
- You may use GPFS's native mmputacl and mmgetacl commands. The advantages are that you can set "control" permission and that both POSIX and NFS v4 style ACLs are supported. You will first need to create a /tmp/supervisor.acl template with the following contents, as needed:
user::rwxc
group::----
other::----
mask::rwxc
user:[owner]:rwxc
user:[supervisor]:rwxc    #read and WRITE permissions for the supervisor (may not be necessary)
group:[othergroup]:r-xc   #read-ONLY permissions for members of other groups (recommended)
Then issue the following 2 commands:
1) $ mmputacl -i /tmp/supervisor.acl /project/g/group/[owner]
2) $ mmputacl -d -i /tmp/supervisor.acl /project/g/group/[owner]
   (every *new* file/directory inside [owner] will inherit [supervisor] ownership by default as well as [owner] ownership, i.e., ownership of both by default, for files/directories created by [supervisor])

$ mmgetacl /project/g/group/[owner]
  (to determine the current ACL attributes)

$ mmdelacl -d /project/g/group/[owner]
  (to remove any previously set ACL)

$ mmeditacl /project/g/group/[owner]
  (to create or change a GPFS access control list; for this command to work, set the EDITOR environment variable: export EDITOR=/usr/bin/vi)
If you want to apply ACL to a folder deep in the tree, as in /project/g/group/owner/dir1/subdir2/subdir3, you will need to also apply ACL to every individual path above the subdir3 level, as in:
$ mmputacl -i /tmp/supervisor.acl /project/g/group/owner
$ mmputacl -i /tmp/supervisor.acl /project/g/group/owner/dir1
$ mmputacl -i /tmp/supervisor.acl /project/g/group/owner/dir1/subdir2
$ mmputacl -i /tmp/supervisor.acl /project/g/group/owner/dir1/subdir2/subdir3
$ mmputacl -d -i /tmp/supervisor.acl /project/g/group/owner/dir1/subdir2/subdir3
In addition, you'll need to ask your PI to apply ACL to the group level:
$ mmputacl -i /tmp/supervisor.acl /project/g/group
NOTES:
- There is no GPFS built-in command to recursively add or remove ACL attributes on existing files. You'll need to use the -i option as above for each file or directory individually. The "Recursive ACL script" section provides a sample bash script you may adapt for that purpose.
- mmputacl will not overwrite the original Linux group permissions for a directory when copied to another directory that already has ACLs, hence the "#effective:r-x" note you may see from time to time with mmgetacl. If you want to give rwx permissions to everyone in your group, simply rely on the plain Unix 'chmod g+rwx' command. You may do that before or after copying the original material to another folder with the ACLs.
- The only latitude you have is with the "w" permission: you may or may not want to let the collaborator/supervisor write to your folder. As for "r-xc", you have no choice; this combination must always be applied.
- In the case of PROJECT, your group's supervisor will need to set proper ACL to the /project/G/GROUP level in order to let users from other groups access your files.
- ACL's won't let you give away permissions to files or directories that do not belong to you.
- We highly recommend that you never give write permission to other users on the top level of your home directory (/home/G/GROUP/[owner]), since that would seriously compromise your privacy, in addition to disabling ssh key authentication, among other things. If necessary, create specific sub-directories under your home directory so that other users can manipulate/access files there.
- Just a reminder: setfacl/getfacl only work on cedar/graham/beluga, since those clusters use Lustre. On niagara you have to use the mm* commands, which are specific to GPFS: mmputacl, mmgetacl, mmdelacl, mmeditacl.
For more information on using mmputacl or mmgetacl see their man pages.
Recursive ACL script
You may use/adapt this sample bash script to recursively add or remove ACL attributes using gpfs built-in commands
Courtesy of Agata Disks (http://csngwinfo.in2p3.fr/mediawiki/index.php/GPFS_ACL)
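In the same spirit as that script, a minimal sketch of such a recursive helper might look as follows. This is an illustration under stated assumptions, not the linked script itself: it assumes the ACL template file already exists and that mmputacl is available on the system.

```shell
# Apply an ACL template to every directory and file under a top directory.
# mmputacl has no recursive option, so we walk the tree with find.
apply_acl_recursive() {
    local template="$1" top="$2"
    # Directories get both the access ACL and the default (-d) ACL,
    # so newly created content inherits it:
    find "$top" -type d -print0 | while IFS= read -r -d '' dir; do
        mmputacl -i "$template" "$dir"
        mmputacl -d -i "$template" "$dir"
    done
    # Plain files only get the access ACL:
    find "$top" -type f -print0 | while IFS= read -r -d '' file; do
        mmputacl -i "$template" "$file"
    done
}
# Usage:  apply_acl_recursive /tmp/supervisor.acl /project/g/group/owner
```

The -print0/read -d '' pairing keeps the walk safe for file names containing spaces or newlines.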