HPSS-by-pomes

From SciNet Users Documentation

Packing up large data sets and putting them on HPSS

(Pomés group recommendations)

HPSS has the following limitations:

  • Hundreds of thousands of small files can be offloaded rapidly, but can take weeks or months to recall
  • No individual file can exceed 1 TB
  • You must verify the integrity of your data throughout the data preparation and offload process

With these limitations in mind, we have developed the following protocol for efficiently offloading data from scratch to HPSS.
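Before packing anything, it is worth checking the 1 TB limit up front. A simple scan (the path below is only an example; point it at your own data set) will list any files that are too large to offload as-is:

```shell
# list any individual files larger than 1 TB (path is an example)
find "$SCRATCH/mydata" -type f -size +1T
```

If this prints anything, those files must be split before they can go to HPSS.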


1. Identify the subdirectories that contain > 1,000 files.

a. Create a directory called DU/ and place the following script in that directory:

#!/bin/bash
# du.sh -- report the file count and total size of each directory
# one level up; run from inside DU/
for i in ../*/; do
    n=$(find "$i" | wc -l)
    s=$(du -hs "$i" | awk '{print $1}')
    echo "$(basename "$i") $n $s"
done > my.du.dirs

b. chmod +x du.sh

c. nohup ./du.sh & (This step may require hours or days to complete)

d. Now my.du.dirs will contain a listing of the number of files and the total size of each directory.

e. Identify the directories with many files and copy the DU/ directory there and then run du.sh again. Continue this process until you have a good understanding of which directories actually contain large numbers of files.
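Once my.du.dirs is populated, a quick numeric sort shows at a glance where the file counts are concentrated; for example:

```shell
# column 2 of my.du.dirs is the file count; show the ten worst offenders
sort -k2,2 -rn my.du.dirs | head
```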

2. Create tar files for these directories.

a. This should be scripted to ensure that your tarballs are completely written.

b. Never script the removal of the original files.

c. Here is an example script:

for i in dir1 dir2 dir3; do
    tar -cf ${i}.tar ${i}
    echo "tar $i returned $?"
done > my.tar.results

d. Note that $? must be evaluated by the very next command after tar; even inserting a single additional echo statement will overwrite it and break the test.
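The following toy session illustrates why: every command overwrites $?, so any command placed between tar and the test hides tar's exit status.

```shell
false                 # sets $? to 1
echo "extra command"  # succeeds, resetting $? to 0
echo $?               # prints 0 -- the failure above is no longer visible
```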

e. Once you are sure that the tar command was successful (return code zero), you can delete the originals. Do not script the deletion: a typo in an rm -rf command can be very costly. If you must script a removal, it is safer to do it like this:

mkdir TRASH; for i in $list; do mv "$i" TRASH/; done

then inspect TRASH and remove it manually. Note that even this can be costly if you make a mistake: files with the same name will silently overwrite one another inside TRASH.
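If your mv supports it (GNU coreutils and BSD mv both do), the -n (no-clobber) flag guards against exactly that overwrite; a skipped file stays where it was, so nothing is silently lost:

```shell
mkdir -p TRASH
# -n refuses to overwrite an existing file in TRASH; a name collision
# leaves the source file in place instead of destroying the old copy
for i in $list; do mv -n "$i" TRASH/; done
```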


3. Now upload to HPSS using HSI.

You should have fewer than 10,000 files per TB of uploaded data. If that is not the case, go back and pack up your data further before proceeding.
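A rough way to check this ratio (assuming GNU du for the byte count; the path is a placeholder for your data set):

```shell
files=$(find "$SCRATCH/mydata" | wc -l)
bytes=$(du -sb "$SCRATCH/mydata" | awk '{print $1}')   # -b: total size in bytes (GNU du)
# files per TB of data; aim for under 10,000
awk -v f="$files" -v b="$bytes" 'BEGIN { printf "%.0f files per TB\n", f/(b/1e12) }'
```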

a. We recommend creating a new directory structure on HPSS for your fully packed data. You may have put other things on HPSS in the past, and a clean directory structure is a good way to mark this data as the final copy. Here, we use FULL_DATA/.

b. An example of an HPSS offload script follows:

#!/bin/bash
#SBATCH -t 72:00:00
#SBATCH -p archivelong 
#SBATCH -N 1
#SBATCH -J offload
#SBATCH --mail-type=ALL

## scratch files:  $SCRATCH/mydata
## HPSS files:     $ARCHIVE/FULL_DATA/mydata

trap "echo 'Job script not completed';exit 129" TERM INT

/usr/local/bin/hsi  -v <<EOF1
mkdir -p FULL_DATA
cd FULL_DATA
cput -Rpuh $SCRATCH/mydata
end
EOF1

status=$?
if [ $status -ne 0 ]; then
    echo 'HSI returned non-zero code.'
    /scinet/niagara/bin/exit2msg $status
    exit $status
else
    echo 'TRANSFER SUCCESSFUL'
fi

trap - TERM INT

c. After the above script has completed, check the output to ensure that your transfer was successful. If there were errors, or if the job timed out, simply run the script again (cput is conditional and will skip files that were already transferred). If you continue to get the same errors, contact support@scinet.utoronto.ca.
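The script's echo statements end up in the job's output file, so a grep tells you at a glance how the transfer went. This sketch assumes Slurm's default output file naming (slurm-<jobid>.out); adjust the pattern if you set -o in your job script:

```shell
# look for the success/failure markers echoed by the offload script
grep -E 'TRANSFER SUCCESSFUL|non-zero' slurm-*.out
```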

4. Check HPSS data against the original

Now you must retrieve your data back to scratch so that you can check it against the original copy on scratch, which we have not yet deleted.

a. Run diskUsage to ensure that you have space in your allocation to recall the data to scratch. If the recall will bring you close to your limit, advise your other group members how much space you will be recalling in case another user is also planning a large data recall.

b. An example of an HPSS recall script follows:

#!/bin/bash
#SBATCH -t 72:00:00
#SBATCH -p archivelong
#SBATCH -N 1
#SBATCH -J recall
#SBATCH --mail-type=ALL

## original scratch files:  $SCRATCH/mydata
## HPSS files:     $ARCHIVE/FULL_DATA/mydata
## new copy scratch files:  $SCRATCH/RETREIVED_MODULES/FULL_DATA/mydata

mkdir -p $SCRATCH/RETREIVED_MODULES/FULL_DATA

trap "echo 'Job script not completed';exit 129" TERM INT

/usr/local/bin/hsi  -v <<EOF1
lcd $SCRATCH/RETREIVED_MODULES/FULL_DATA/
cget -Rpuh $ARCHIVE/FULL_DATA/mydata
end
EOF1

status=$?
if [ $status -ne 0 ]; then
    echo 'HSI returned non-zero code.'
    /scinet/niagara/bin/exit2msg $status
    exit $status
else
    echo 'TRANSFER SUCCESSFUL'
fi

trap - TERM INT

5. Run an md5sum over the entire directory

Now that you have both the original and the copy recalled from HPSS, run an md5sum over the entire directory tree. An example check follows; run this script under nohup from one of the datamover nodes:

#!/bin/bash
# Compare the original tree ($WAS) against the copy recalled from HPSS ($IS).

COMPUTE_DIFFERENCES=1
dir=$(pwd)
WAS=$SCRATCH/mydata
IS=$SCRATCH/RETREIVED_MODULES/FULL_DATA/mydata

if ((COMPUTE_DIFFERENCES)); then
    cd $WAS
    find . > ${dir}/tmp.was
    echo "find on was returned $?"

    cd $IS
    find . > ${dir}/tmp.is
    echo "find on is returned $?"

    cd $dir

    # Compare the sorted file listings; diff exits non-zero if they differ.
    sort tmp.was > a
    sort tmp.is > b
    if ! diff a b; then
        echo "FILE LISTS DIFFER!"
        exit 1
    fi
fi

# Compare checksums file by file.
while read -r i; do
    if [ -f "${WAS}/${i}" ]; then
        was=$(md5sum "${WAS}/$i" | awk '{print $1}')
        is=$(md5sum "${IS}/$i" | awk '{print $1}')
        if [ "$was" != "$is" ]; then
            echo "FILES DIFFER -- $i $was $is"
        else
            echo "OK for $i"
        fi
    fi
done < tmp.was

a. When that is done, grep for DIFFER in the output (in nohup.out, since you ran this script under nohup). Any match means there is a problem; contact SciNet support.
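Since grep exits non-zero when nothing matches, the check can be made explicit:

```shell
if grep DIFFER nohup.out; then
    echo "Mismatches found -- contact SciNet support"
else
    echo "All files verified"
fi
```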

b. If everything succeeded, you can delete the copies that you recalled from HPSS to scratch. You can also delete your original copy on scratch if you wish, since you now have a complete, verified copy on SciNet HPSS.


BACK TO HPSS