GROMACS
Revision as of 13:30, 30 October 2020
GROMACS 2018.6
GROMACS is a versatile molecular dynamics package. A thorough treatment of GROMACS can be found on the Compute Canada page. Here is a sample Niagara run script:
#!/bin/bash
#
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=10
#SBATCH --cpus-per-task=4
#SBATCH --time=11:00:00
#SBATCH --job-name test

module load intel/2019u3
module load intelmpi/2019u3
module load gromacs/2018.6

# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is the directory the job was submitted from
cd $SLURM_SUBMIT_DIR

export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"

srun gmx_mpi mdrun -deffnm md
The above script requests one node and runs GROMACS in hybrid MPI/OpenMP mode: 10 MPI ranks per node, each with 4 OpenMP threads.
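As a quick sanity check of the decomposition (a sketch; the 40-cores-per-node figure is Niagara's standard node size), the product of ranks and threads should fill a full node:

```shell
#!/bin/bash
# Sanity check: MPI ranks x OpenMP threads should equal the cores on a node.
# The values mirror the #SBATCH directives in the sample script above.
ntasks_per_node=10   # --ntasks-per-node
cpus_per_task=4      # --cpus-per-task
cores_per_node=40    # a Niagara node has 40 physical cores

total=$(( ntasks_per_node * cpus_per_task ))
echo "Using ${total} of ${cores_per_node} cores per node"
```

If the product is below 40, cores sit idle; if above, ranks and threads oversubscribe the node and performance usually suffers.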
Note that GROMACS is well-suited to running on GPUs, which Niagara does not have. Running on Mist is recommended. Alternatively, Compute Canada systems that have GPUs, such as Graham and Cedar, are also an option.
GROMACS on Mist GPU cluster
See details on the Mist page: Gromacs on Mist