Co-array Fortran on Niagara

Versions 12 and higher of the Intel Fortran compiler, and version 5.1 and up of the GNU Fortran compiler, support almost all of Co-array Fortran, and are installed on Niagara.

This page will briefly sketch how to compile and run Co-array Fortran programs using these compilers.

Example

Here is an example of a Co-array Fortran program:

program Hello_World
  implicit none
  integer :: i      ! Local variable
  integer :: num[*] ! Scalar coarray: every image has its own copy of num
  if (this_image() == 1) then
    write(*,'(a)') 'Enter a number: '
    read(*,*) num
    ! Distribute the number to the other images (remote writes)
    do i = 2, num_images()
      num[i] = num
    end do
  end if
  sync all ! Barrier to make sure the data has arrived
  ! I/O from all images
  write(*,'(a,i0,a,i0)') 'Hello ',num,' from image ', this_image()
end program Hello_World

(Adapted from [1]).

How you compile, link and run Co-array Fortran programs differs depending on whether you will run the program on a single node (with 40 cores on Niagara) or on several nodes, and on which compiler you are using, Intel or GNU.

Intel compiler instructions for Coarray Fortran

Loading necessary modules

First, you need to load the module for version 12 or greater of the Intel compilers, as well as Intel MPI.

module load intel/2018.2 intelmpi/2018.2

There are two modes in which the Intel compiler supports Coarray Fortran:

1. Single node usage

2. Multiple node usage

The way you compile and run for these two cases is different. However, we are working on making Coarray Fortran compilation and running more uniform between these two cases, as well as with the, as yet experimental, gfortran coarray support. See Uniformized Usage below.

Note: For multiple node usage, it makes sense that you have to load the Intel MPI module, since Intel's implementation of Co-array Fortran uses MPI. However, the Intel MPI module is needed even for single-node usage, just in order to link successfully.

Single node usage

Compilation

ifort -O3 -xHost -coarray=shared -c [sourcefile] -o [objectfile]

Linking

ifort -coarray=shared [objectfile] -o [executable]
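
For instance, assuming the Hello_World example above has been saved as hello_world.f90 (a hypothetical filename), the compile and link steps would look like this:

ifort -O3 -xHost -coarray=shared -c hello_world.f90 -o hello_world.o
ifort -coarray=shared hello_world.o -o hello_world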

Running

To run this co-array program on one node with 80 images (an image is Coarray Fortran's equivalent of what OpenMP calls a thread and MPI calls a process), you simply put

./[executable]

in your job submission script. The reason that this gives 80 images is that HyperThreading is enabled on the Niagara nodes, which makes it seem to the system as if there are 80 computing units per node, even though physically there are only 40.

To control the number of images, you can change the FOR_COARRAY_NUM_IMAGES environment variable:

export FOR_COARRAY_NUM_IMAGES=2
./[executable]

This can be useful for testing.
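
For example, a quick test of the Hello_World program with 2 images, using the hypothetical executable name hello_world from above and run inside an interactive job, might produce a session like the following (the order of the 'Hello' lines can vary from run to run):

export FOR_COARRAY_NUM_IMAGES=2
./hello_world
Enter a number: 
7
Hello 7 from image 1
Hello 7 from image 2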

An example submission script would look as follows:

#!/bin/bash
# SLURM submission script for SciNet Niagara (Intel Coarray Fortran)
#
#SBATCH --nodes=1
#SBATCH --time=1:00:00
#SBATCH --cpus-per-task=40
#SBATCH --job-name test

# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from
cd $SLURM_SUBMIT_DIR

# LOAD MODULES THAT THE APPLICATION WAS COMPILED WITH
module load intel/2018.2 intelmpi/2018.2

# RUN THE APPLICATION WITH 80 IMAGES
export FOR_COARRAY_NUM_IMAGES=80
./[executable]
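
Assuming this script has been saved as coarray_job.sh (a hypothetical name), it is submitted in the usual way:

sbatch coarray_job.sh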


Multiple node usage

For the newer Intel compilers, please read over the following link: [2]

module load NiaEnv/2019b  intel/2019u4 intelmpi/2019u4

Compilation

ifort -O3 -xHost -coarray=distributed -c [sourcefile] -o [objectfile]

Linking

ifort -coarray=distributed [objectfile] -o [executable]

Running

Because distributed Co-array Fortran is based on MPI, we need to launch the MPI processes on different nodes. The defaults will work on Niagara; however, the number of images will be equal to the number of nodes times the number of tasks per node (nodes * ntasks-per-node). For example, the script below requests 4 nodes with 40 tasks per node, so it runs with 160 images.

An example submission script would look as follows:

#!/bin/bash
#
#SBATCH --nodes=4
#SBATCH --time=1:00:00
#SBATCH --ntasks-per-node=40
#SBATCH --job-name test

# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from
cd $SLURM_SUBMIT_DIR

# LOAD MODULES THAT THE APPLICATION WAS COMPILED WITH
module load NiaEnv/2019b  intel/2019u4 intelmpi/2019u4

# EXECUTION; the number of images will be nodes*ntasks-per-node (here 4*40 = 160)
./[executable]

You can provide a configuration file using the ifort option -coarray-config-file=file.cfg, which allows you to provide your own MPI parameters, including the number of tasks per host and the total number of tasks.
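
As an illustration only, such a configuration file holds mpiexec-style Intel MPI options; a minimal sketch, with purely hypothetical task counts and executable name, could look like the line below. Consult the Intel documentation linked above for the exact syntax supported by your compiler version.

-perhost 40 -n 160 ./hello_world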



Uniformized Usage

If you load the additional module

module load caf/intel/any

you get access to a compilation and linking wrapper called caf and a wrapper for running the application called cafrun.

Compilation

caf -O3 -xHost -c [sourcefile] -o [objectfile]

Linking

caf [objectfile] -o [executable]

Running

To run this co-array program on one node with 40 images, you simply put

cafrun ./[executable]

This runs 40 images (one per physical core), not 80.

To control the number of images, you can change the run command to

cafrun -np 2 ./[executable]

This can be useful for testing.

To control the number of images per node, add the -N [images-per-node] option.

Note: currently, the uniformized mode doesn't explicitly utilize optimization opportunities offered by the single node mode, although it will work on one node.
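
Putting the pieces together, a hypothetical end-to-end session for the Hello_World example with the wrappers (file names, image counts and node counts are just for illustration, and the job must of course request matching resources) might look like this:

module load intel/2018.2 intelmpi/2018.2 caf/intel/any
caf -O3 -xHost -c hello_world.f90 -o hello_world.o
caf hello_world.o -o hello_world
cafrun -np 80 -N 40 ./hello_world    # 80 images, 40 per node, i.e. 2 nodes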


GNU compiler instructions for Coarray Fortran

Coarray Fortran is supported in the GNU compiler suite (GCC) starting from version 5.1. To implement coarrays, it uses the OpenCoarrays library, which in turn uses OpenMPI (or at least, that is how it has been set up on the GPC).

Issues seem to exist with the gcc/OpenCoarrays Fortran compilers, particularly with multidimensional arrays. We are still investigating the cause, but for now, the Coarray Fortran support in gcc should be considered experimental.

Loading necessary modules

First, you need to load the module for version 5.2 or greater of the GNU compilers (version 5.1 would've worked, but we skipped that release on the GPC), as well as OpenMPI.

module load gcc/5.2.0 openmpi/gcc/1.8.3 use.experimental caf/gcc/5.2.0-openmpi

The caf/gcc/5.2.0-openmpi module comes with a compilation and linking wrapper called caf and a wrapper for running the application called cafrun.

Compilation

caf -O3 -march=native -c [sourcefile] -o [objectfile]

Linking

caf [objectfile] -o [executable]

Running

To run this co-array program on one node with 8 images (again, an image is Coarray Fortran's equivalent of what OpenMP calls a thread and MPI calls a process), you simply put

cafrun ./[executable]

in your job submission script. In contrast with the Intel compiler, this does not run 16 images, but only 8. The reason is that the gcc/OpenCoarrays implementation uses MPI, and MPI is not aware of HyperThreading.

To control the number of images, you can change the run command to

cafrun -np 2 ./[executable]

This can be useful for testing, or to exploit HyperThreading.

An example submission script would look as follows:

#!/bin/bash
# MOAB/Torque submission script for SciNet GPC (GCC Coarray Fortran)
#
#PBS -l nodes=1:ppn=8,walltime=1:00:00
#PBS -N test

# DIRECTORY TO RUN - $PBS_O_WORKDIR is directory job was submitted from
cd $PBS_O_WORKDIR

# LOAD MODULES THAT THE APPLICATION WAS COMPILED WITH
module load gcc/5.2.0 openmpi/gcc/1.8.3

# RUN WITH 16 IMAGES ON 1 NODE
cafrun -np 16 ./[executable]

Multiple node usage

Because the GNU implementation of Coarray Fortran in the gcc/5.2.0 module is based on MPI, running on multiple nodes is no different from the single-node usage. An example multi-node submission script would look as follows:

#!/bin/bash
# MOAB/Torque submission script for SciNet GPC (GCC Coarray Fortran on multiple nodes)
#
#PBS -l nodes=4:ppn=8,walltime=1:00:00
#PBS -N test

# DIRECTORY TO RUN - $PBS_O_WORKDIR is directory job was submitted from
cd $PBS_O_WORKDIR

# LOAD MODULES THAT THE APPLICATION WAS COMPILED WITH
module load gcc/5.2.0 openmpi/gcc/1.8.3

# EXECUTION with 32 images (nodes*ppn)
cafrun -np 32 ./[executable]
