                        "*": "===Recursive ACL script ===\nYou may use/adapt one of the following bash scripts to recursively add or remove ACL attributes using gpfs built-in commands\n\n====Courtesy of Gabriel Devenyi====\n\n<pre>\n#!/bin/bash\n# USAGE\n#     - on one directory:     ./set_acl.sh aclfile dir_name\n#     - on more directories:  ./set_acl.sh aclfile dir1_name dir2_name ...\n#\n# Based on a contributed script by Gabriel Devenyi.\n#\n\nset -euo pipefail\n \naclfile=\"$1\"\nshift\n\nfor dir in \"$@\"\ndo\n    find \"${dir}\" -type d -exec mmputacl -i \"${aclfile}\" {} \\; -exec mmputacl -d -i \"${aclfile}\" {} \\; \n    find \"${dir}\" -type f -exec mmputacl -i \"${aclfile}\" {} \\; \ndone\n</pre> \n\n====Courtesy of Agata Disks====\n\n(http://csngwinfo.in2p3.fr/mediawiki/index.php/GPFS_ACL)\n\nThis script is a bit more verbose and precise in its error messages\n\n<pre>\n#!/bin/bash\n# USAGE\n#     - on one directory:     ./set_acl.sh dir_name\n#     - on more directories:  ./set_acl.sh 'dir_nam*'\n#\n\n# Path of the file that contains the ACL\nACL_FILE_PATH=/agatadisks/data/acl_file.acl\n\n# Directories onto the ACLs have to be set\ndirs=$1\n\n# Recursive function that sets ACL to files and directories\nset_acl () {\n  curr_dir=$1\n  for args in $curr_dir/*\n  do\n    if [ -f $args ]; then\n      echo \"ACL set on file $args\"\n      mmputacl -i $ACL_FILE_PATH $args\n      if [ $? -ne 0 ]; then\n        echo \"ERROR: ACL not set on $args\"\n        exit -1\n      fi\n    fi\n    if [ -d $args ]; then\n      # Set Default ACL in directory\n      mmputacl -i $ACL_FILE_PATH $args -d\n      if [ $? -ne 0 ]; then\n        echo \"ERROR: Default ACL not set on $args\"\n        exit -1\n      fi\n      echo \"Default ACL set on directory $args\"\n      # Set ACL in directory\n      mmputacl -i $ACL_FILE_PATH $args\n      if [ $? -ne 0 ]; then\n        echo \"ERROR: ACL not set on $args\"\n        exit -1\n      fi\n      echo \"ACL set on directory $args\"\n      set_acl $args\n    fi\n  done\n}\nfor dir in $dirs\ndo\n  if [ ! -d $dir ]; then\n    echo \"ERROR: $dir is not a directory\"\n    exit -1\n  fi\n  set_acl $dir\ndone\nexit 0\n\n</pre>\n\n[[Data Management#File/Ownership Management (ACL) |  BACK TO Data Management]]"
[[Data Management#File/Ownership Management (ACL) |  BACK TO Data Management]]

= Rouge =
                        "*": "{{Infobox Computer\n|image=[[File:Amd1.jpeg|center|300px|thumb]] \n|name=Rouge\n|installed=March 2021\n|operatingsystem= Linux (Centos 7.6)\n|loginnode= rouge-login01\n|nnodes=20 \n|gpuspernode=8 MI50-32GB\n|rampernode=512 GB\n|corespernode=48 \n|interconnect=Infiniband (2xEDR)\n|vendorcompilers=rocm/gcc\n|queuetype=slurm\n}}\n\n= Specifications=\n\nThe Rouge cluster was donated to the University of Toronto by AMD as part of their [https://www.amd.com/en/corporate/hpc-fund#:~:text=The%20goal%20of%20the%20AMD,potential%20threats%20to%20global%20health COVID-19 HPC Fund ] support program.  The cluster consists of 20 x86_64 nodes each with a single AMD EPYC 7642 48-Core CPU running at 2.3GHz with 512GB of RAM and 8 Radeon Instinct MI50 GPUs per node.\n \nThe nodes are interconnected with 2xHDR100 Infiniband for internode communications and disk I/O to the SciNet Niagara filesystems.  In total this cluster contains 960 CPU cores and 160 GPUs. \n\nAccess and support requests should be sent to '''support@scinet.utoronto.ca'''.\n\n= Getting started on Rouge =\n\n<!-- \nRouge can be accessed directly.\n<pre>\nssh -Y MYCCUSERNAME@rouge.scinet.utoronto.ca\n-->\n\n\nRouge login node '''rouge-login01''' can be accessed via the Niagara cluster.\n<pre>\nssh -Y MYCCUSERNAME@niagara.scinet.utoronto.ca\nssh -Y rouge-login01\n</pre>\n\n== Storage ==\n\nThe filesystem for Rouge is currently shared with Niagara cluster. See [https://docs.scinet.utoronto.ca/index.php/Niagara_Quickstart#Your_various_directories Niagara Storage] for more details.\n\n= Loading software modules =\n\nYou have two options for running code on : use existing software, or compile your own.  This section focuses on the former.\n\nOther than essentials, all installed software is made available [[Using_modules | using module commands]]. These modules set environment variables (PATH, etc.), allowing multiple, conflicting versions of a given package to be available.  A detailed explanation of the module system can be [[Using_modules | found on the modules page]].\n\nCommon module subcommands are:\n\n* <code>module load <module-name></code>: load the default version of a particular software.\n* <code>module load <module-name>/<module-version></code>: load a specific version of a particular software.\n* <code>module purge</code>: unload all currently loaded modules.\n* <code>module spider</code> (or <code>module spider <module-name></code>): list available software packages.\n* <code>module avail</code>: list loadable software packages.\n* <code>module list</code>: list loaded modules.\n\nAlong with modifying common environment variables, such as PATH, and LD_LIBRARY_PATH, these modules also create a SCINET_MODULENAME_ROOT environment variable, which can be used to access commonly needed software directories, such as /include and /lib.\n\nThere are handy abbreviations for the module commands. <code>ml</code> is the same as <code>module list</code>, and <code>ml <module-name></code> is the same as <code>module load <module-name></code>.\n\n= Available compilers and interpreters =\n\n* The <tt>Rocm</tt> module has to be loaded first for GPU software.\n* To compile mpi code, you must additionally load an <tt>openmpi</tt> module.\n\n=== ROCm ===\n\nThe current installed ROCm Tookit is '''4.1.0'''\n<pre>\nmodule load rocm/<version>\n</pre>\n*A compiler (GCC or rocm-clang) module must be loaded in order to use ROCm to build any code.\n\nThe current AMD driver version is 5.9.15.  
=== Other Compilers and Tools ===

Available compiler modules are:

<code>gcc/10.3.0</code> GNU Compiler Collection

<code>rocm-clang/4.1.0</code> Clang

<code>hipify-clang/12.0.0</code> Tool for translating CUDA sources into HIP sources

<code>aocc/3.0.0</code> AMD Optimizing C/C++ Compiler (Clang-based)

=== OpenMPI ===
The <tt>openmpi/<version></tt> module is available with different compilers.

= Software =

== Singularity Containers ==
<pre>
/scinet/rouge/amd/containers/gromacs.rocm401.ubuntu18.sif
/scinet/rouge/amd/containers/lammps.rocm401.ubuntu18.sif
/scinet/rouge/amd/containers/namd.rocm401.ubuntu18.sif
/scinet/rouge/amd/containers/openmm.rocm401.ubuntu18.sif
</pre>

== GROMACS ==
The HIP version of GROMACS 2020.3 (which performs better than the OpenCL version) is provided by AMD in a container. Currently it is suggested to use a single GPU for all simulations.
Job example:
<pre>
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --gpus-per-node=1

export SINGULARITY_HOME=$SLURM_SUBMIT_DIR

singularity exec -B /home -B /scratch --env OMP_PLACES=cores /scinet/rouge/amd/containers/gromacs.rocm401.ubuntu18.sif gmx mdrun -pin off -ntmpi 1 -ntomp 6 ......

# Setting '-ntomp 4' might give better performance; do your own benchmark. Do not set it larger than 6 for a single-GPU job.
# If you see the warning 'GPU update with domain decomposition lacks substantial testing and should be used with caution.', add '-update cpu' to override it.
</pre>

== NAMD ==
The HIP version of NAMD (3.0a) is provided by AMD in a container. Currently it is suggested to use a single GPU for all simulations.
Job example:
<pre>
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --gpus-per-node=1

export SINGULARITY_HOME=$SLURM_SUBMIT_DIR

singularity exec -B /home -B /scratch --env LD_LIBRARY_PATH=/opt/rocm/lib:/.singularity.d/libs /scinet/rouge/amd/containers/namd.rocm401.ubuntu18.sif namd2 +idlepoll +p 12 stmv.namd
# Do not set the +p flag larger than 12; there are only 6 cores (12 threads) per single-GPU job.
</pre>

== PyTorch ==
Install PyTorch into a Python virtual environment:
<pre>
module load python gcc
mkdir -p ~/.virtualenvs
virtualenv --system-site-packages ~/.virtualenvs/pytorch-rocm
source ~/.virtualenvs/pytorch-rocm/bin/activate
pip3 install torch -f https://download.pytorch.org/whl/rocm4.0.1/torch_stable.html
pip3 install ninja && pip3 install 'git+https://github.com/pytorch/vision.git@v0.9.1'
</pre>
Run a PyTorch job with a single GPU:
<pre>
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --gpus-per-node=1

module load python gcc
source ~/.virtualenvs/pytorch-rocm/bin/activate
python code.py
</pre>

= Testing and debugging =

You should test your code before you submit it to the cluster, both to check that it is correct and to find out what resources it needs.
* Small test jobs can be run on the login node.  Rule of thumb: tests should run no more than a couple of minutes, take at most about 1-2 GB of memory, and use no more than one GPU and a few cores.

* For short tests that do not fit on a login node, or for which you need a dedicated node, request an interactive debug job with the <tt>debugjob</tt> command:

<pre>
rouge-login01:~$ debugjob --clean -g G=1
</pre>

where G is the number of GPUs.  If G=1, this gives an interactive session for 2 hours, whereas G=4 gets you a node with 4 GPUs for 30 minutes, and G=8 (the maximum) gets you a full node with 8 GPUs for 30 minutes.  The <tt>--clean</tt> argument is optional but recommended, as it starts the session without any modules loaded, thus mimicking more closely what happens when you submit a job script.
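For example, a quick sanity check inside such an interactive session might look like the following sketch (the module names and test program are placeholders):

<pre>
# Inside the debugjob session, on a compute node:
module load gcc rocm        # load a compiler and the ROCm stack
rocm-smi                    # confirm the allocated GPU is visible
./my_test_program           # a short run of your own code (placeholder name)
</pre>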
= Submitting jobs =
Once you have compiled and tested your code or workflow on the Rouge login node, and confirmed that it behaves correctly, you are ready to submit jobs to the cluster.  Your jobs will run on one of Rouge's 20 compute nodes.  When and where your job runs is determined by the scheduler.

Rouge uses SLURM as its job scheduler.

You submit jobs from a login node by passing a script to the sbatch command:

<pre>
rouge-login01:scratch$ sbatch jobscript.sh
</pre>

This puts the job in the queue. It will run on the compute nodes in due course. In most cases, you should not submit from your $HOME directory, but rather from your $SCRATCH directory, so that the output of your compute job can be written out ($HOME is read-only on the compute nodes).

Example job scripts can be found below.
Keep in mind:
* Scheduling is by GPU, with 6 CPU cores per GPU.
* Your job's maximum walltime is 24 hours.
* Jobs must write their output to your scratch or project directory (home is read-only on compute nodes).
* Compute nodes have no internet access.
* Your job script will not remember the modules you have loaded, so it needs to contain "module load" commands for all the required modules (see examples below).

== Single-GPU job script ==
A single-GPU job gets 1/8 of a node, i.e. 1 GPU, 6 CPU cores (12 threads), and ~64 GB of CPU memory. '''Users should never request CPUs or memory explicitly.''' If running an MPI program, set --ntasks to the number of MPI ranks. '''Do NOT set --ntasks for non-MPI programs.'''

<pre>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gpus-per-node=1
#SBATCH --time=1:00:00

module load <modules you need>
Run your program
</pre>

== Full-node job script ==
'''If you are not sure your program can run on multiple GPUs, please follow the single-GPU job instructions above or contact SciNet support.'''

Multi-GPU jobs should ask for a minimum of one full node (8 GPUs). Users need to specify the "compute_full_node" partition in order to get all resources on a node.
* An example for a 1-node job:
<pre>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gpus-per-node=8
#SBATCH --ntasks=8     # this only affects MPI jobs
#SBATCH --time=1:00:00
#SBATCH -p compute_full_node

module load <modules you need>
Run your program
</pre>
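As a concrete illustration of the template above, a full-node MPI job running one rank per GPU could look like the following sketch (the module versions and application name are placeholders):

<pre>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gpus-per-node=8
#SBATCH --ntasks=8              # one MPI rank per GPU
#SBATCH --time=1:00:00
#SBATCH -p compute_full_node

module load gcc/10.3.0 rocm/4.1.0 openmpi
mpirun -np 8 ./my_mpi_gpu_app   # placeholder application name
</pre>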