User Ramdisk

From SciNet Users Documentation

On the Niagara nodes a `ramdisk' is available: up to 70 percent of a node's RAM (202GB in total) may be used as a temporary file system. This is particularly useful in the early stages of migrating desktop-computing codes to a High Performance Computing platform such as Niagara, especially codes that perform a lot of file I/O (Input/Output). Heavy I/O is a bottleneck in large-scale computing, especially on parallel file systems (such as the GPFS used on Niagara), since files are synchronized across the whole network.

Ramdisk is much faster than real disk, and is especially beneficial for codes that perform many small I/O operations, since the ramdisk requires no network traffic. However, each node sees only its own ramdisk and cannot see files on those of other nodes. Nor can you see the ramdisks of the compute nodes from the login nodes. To track progress on a ramdisk, you would have to SSH into the respective compute node.

Using Ramdisk

To use the ramdisk, create, read, and write files in /dev/shm/ just as you would in $SCRATCH. Only the amount of RAM needed to store the files is taken up by the temporary file system. Thus if you have 40 serial jobs each requiring 1GB of RAM, and 2GB is taken up by various OS services, you would still have approximately 140GB available to use as ramdisk on a 202GB node. However, if your jobs write more data than this to the ramdisk, the node will run out of memory and your job will crash.
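As a minimal sketch (the directory and file names are hypothetical), the following shows that files placed under /dev/shm consume RAM only for what is actually stored, and that cleaning up releases it:

```shell
# Create a private ramdisk directory, write a 10MB file into it,
# and check how much space it actually occupies.
mkdir -p /dev/shm/$USER/demo
dd if=/dev/zero of=/dev/shm/$USER/demo/testfile bs=1M count=10 status=none
du -sh /dev/shm/$USER/demo   # roughly 10M: only stored data uses RAM
df -h /dev/shm               # total size reflects the configured cap
rm -rf /dev/shm/$USER/demo   # always clean up after yourself
```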

Note that when using the ramdisk:

  • At the start of your job, copy frequently accessed files to the ramdisk (stage in). If there are many such files, it is beneficial to put them in a tar file.
  • Periodically copy the output files from the ramdisk to /scratch or /project, and do so again at the end of the job (stage out).
  • It is very important to delete your files from the ramdisk at the end of your job. If you do not do this, the next user of that node will have less RAM available than they might expect, and this might kill their job.
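The stage-in step can be sketched as follows (file names are hypothetical); bundling many small files into one tar file means a single transfer to the node instead of many:

```shell
# Hypothetical stage-in: pack small input files into one tar file,
# then unpack it on the node-local ramdisk in a single step.
mkdir -p inputs_demo
for i in 1 2 3; do echo "data $i" > inputs_demo/file$i.txt; done
tar cf inputs.tar inputs_demo

mkdir -p /dev/shm/$USER
tar xf inputs.tar -C /dev/shm/$USER   # stage in onto the ramdisk
ls /dev/shm/$USER/inputs_demo

rm -rf /dev/shm/$USER/inputs_demo inputs.tar inputs_demo  # clean up
```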

A simple example

A simple script using the ramdisk for 40 serial jobs in a 4 hour window might look like this:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40
#SBATCH --time=4:00:00
#SBATCH --job-name ramdisk-example

workdir=/dev/shm/$USER    # node-local ramdisk directory
mkdir -p $workdir

cp $SLURM_SUBMIT_DIR/* $workdir

cd $workdir

for ((i=1;i<=40;i++)); do
  # input files assumed to be named 1.in, ..., 40.in
  ./executable < $i.in > $i.out &
done
wait    # wait for all 40 background jobs to finish

tar cf $SLURM_SUBMIT_DIR/out.tar *.out
rm -rf $workdir    # clean up the ramdisk

Collections of serial jobs are often run on the ramdisk; see the serial run wiki page for more details.

A more complex example

A more complete script, using the ramdisk in a 1-day OpenMP job that saves output periodically, might look like this:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --cpus-per-task=40
#SBATCH --time=24:00:00
#SBATCH --job-name ramdisk-test

#Job parameters:
execname=job          # name of the executable
input_tar=input.tar   # tar file with input files and executables
output_tar=out.tar    # file in which to store output
input_subdir=indir    # sub-directory (within input_tar) with input files
output_subdir=outdir  # sub-directory to contain output files
poll_period=60        # how often to check for job completion (in seconds)
save_period=120       # how often to save output (in minutes)

#Track how long everything takes.
echo -n "Script started on "
date

#Copy to ramdisk
echo "Stage-in: copying files to ramdisk directory /dev/shm/$USER"
mkdir -p /dev/shm/$USER/$output_subdir
cd /dev/shm/$USER
cp $SLURM_SUBMIT_DIR/$input_tar .
tar xf $input_tar
rm -rf $input_tar

#Track how long everything takes.
echo -n "Stage-in completed on "
date


#Run on ramdisk
echo "Starting job"
./$execname $input_subdir $output_subdir &
# Store the process id in $pid so we may check if it's still running:
pid=$!

# 1. The above launching command is appropriate for a multi-threaded (OpenMP) application.
# 2. Ramdisk MPI jobs are limited to 1 node as /dev/shm is not shared across nodes.
# 3. For serial jobs, you'd want to start 40 jobs at the same time instead, e.g.
#     mkdir -p $output_subdir/1
#     ./$execname ${input_subdir}/1 ${output_subdir}/1 &
#     pid=$!
#     mkdir -p $output_subdir/2
#     ./$execname ${input_subdir}/2 ${output_subdir}/2 &
#     pid=$pid,$!
#     etc.
#     mkdir -p $output_subdir/40
#     ./$execname ${input_subdir}/40 ${output_subdir}/40 &
#     pid=$pid,$!

#Track how long everything takes.
echo -n "Job started on "
date

function save_results {
    echo -n "Copying from directory $output_subdir to file $SLURM_SUBMIT_DIR/$output_tar on "
    date
    tar cf $output_tar $output_subdir/*
    cp $output_tar $SLURM_SUBMIT_DIR
    echo -n "Copying of output complete on "
    date
}

function cleanup_ramdisk {
    echo -n "Cleaning up ramdisk directory /dev/shm/$USER on "
    date
    rm -rf /dev/shm/$USER
    echo -n "done at "
    date
}

function trap_term {
    echo -n "Trapped term (soft kill) signal on "
    date
    save_results
    cleanup_ramdisk
    exit
}

function interruptible_sleep {
    # waits for a number of seconds
    # argument 1 = number of seconds
    # note: just doing `sleep $1' would not be interruptible!
    for m in `seq $1`; do
        sleep 1
    done
}

function is_running {
    # check if one or more processes are still running
    # argument 1 = a comma-separated list of PIDs (no spaces)
    ps -p $1 -o pid= | wc -l
}

#trap the termination signal, and call the function 'trap_term' when 
# that happens, so results may be saved.
trap "trap_term" TERM

#number of pollings per save period (rounded down):
npoll=$((save_period*60/poll_period))

#polling and saving loop
running=$(is_running $pid)
while [ $running -gt 0 ]; do
    for n in `seq $npoll`; do
        interruptible_sleep $poll_period
        running=$(is_running $pid)
        if [ $running -eq 0 ]; then
            break
        fi
    done
    # save intermediate results (the final save happens below)
    if [ $running -gt 0 ]; then
        save_results
    fi
done

#Save the final results and clean up the ramdisk
save_results
cleanup_ramdisk

echo -n "Job finished cleanly on "
date

Notes on this script:

  • The script assumes that the tar file input.tar contains the executable job and the input files in a subdirectory called indir (with further numbered subdirectories in the case of serial jobs).
  • The executable is supposed to take the locations of the input and output directory as arguments.
  • The trap command makes sure that the results get saved and the ramdisk gets flushed even when the job gets killed before the end of the script is reached. trap is a bash construct that executes the given command when the script receives, in this case, a TERM signal. The TERM signal is sent by the scheduler 30 seconds before your time is up.
  • You could also trap signals in your C, C++ or FORTRAN codes.
  • All files are kept in a subdirectory of /dev/shm. This makes the clean up simpler, and keeps things tidy when doing small test jobs on the development nodes.
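The trap mechanism can also be tried in isolation with a few lines of bash (no scheduler needed); this hypothetical demo shows that the handler runs when the script receives TERM, so cleanup code still executes on a soft kill:

```shell
# Run a throwaway bash script that traps TERM, then send it TERM:
# the handler fires and the script keeps running instead of dying silently.
output=$(bash -c 'trap "echo saving results and cleaning up" TERM
                  kill -TERM $$      # simulate the scheduler soft kill
                  echo script continues after the handler')
echo "$output"
```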