HybridX on P7

How to compile HybridX

The following script assumes the HybridX code is located under "$HOME/HybridCode". It compiles the package with GCC 4.8 and OpenMPI 1.6.5 and installs it to the "build-p7/install" directory inside the HybridX source tree. See the script below and modify it if you want to make changes:

#!/bin/sh

# Load modules
module load gcc/4.8.1 cmake/2.8.8 openmpi/1.6.5-gcc

# Package details
base=$HOME/HybridCode
pkg=HybridX

cd $base/$pkg

# Variables for installation
src=$base/$pkg
bld=$base/$pkg/build-p7

# Start from scratch each time this script is executed
rm -rf $bld
mkdir -p $bld
cd $bld

# Run cmake
cmake $src
# cmake -DBOOST_ROOT=${SCINET_BOOST_DIR} $src

# Compile and install
gmake
gmake install
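
If the HybridX CMake configuration does not set an install prefix of its own, "gmake install" may not place the files under "build-p7/install". In that case, replace the plain cmake call in the script above with something like the line below; this is only a sketch, assuming standard CMake behaviour:

# Variant of the cmake step above: set the install prefix explicitly so
# that "gmake install" puts the files under build-p7/install
cmake -DCMAKE_INSTALL_PREFIX=$bld/install $src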

When the compilation completes successfully, you should find the HybridX executable under "$HOME/HybridCode/HybridX/build-p7/install/bin". You can then use this binary to run HybridX simulations, for example with the following job script:

#!/bin/bash
##===================================
## P7 Load Leveler Submission Script
##===================================
##
## Don't change these parameters unless you really know what you are doing
##
##@ environment = MP_INFOLEVEL=0; MP_USE_BULK_XFER=yes; MP_BULK_MIN_MSG_SIZE=64K; \
##                MP_EAGER_LIMIT=64K; MP_DEBUG_ENABLE_AFFINITY=no
##
##===================================
## Avoid core dumps
## @ core_limit   = 0
##===================================
## Job specific
##===================================
#
# @ job_name = hybridx-isotropic
# @ job_type = parallel
# @ class = verylong
# @ output = $(jobid).out
# @ error = $(jobid).err
# @ wall_clock_limit = 01:00:00
# @ node = 4
# @ tasks_per_node = 128
# @ queue
#
#===================================

# Load modules
module purge
module load gcc/4.8.1
module load openmpi/1.6.5-gcc

# HybridX folders
export hybrid_root=$HOME/HybridCode/HybridX/build-p7/install
export hybrid_bin=${hybrid_root}/bin
export hybrid_run=$HOME/HybridCode/run

# Go to case folder
cd $hybrid_run/isotropic-p7

# Total MPI ranks should match the request above: node * tasks_per_node = 4 * 128 = 512
mpirun -np 512 ${hybrid_bin}/Hybrid -i isotropic.input 2>&1 | tee log.hybridx.isotropic

Notes:

  1. P7 processors support 4 hardware threads per core, so you can increase the number of tasks per node accordingly.
  2. The scheduler is the same LoadLeveler as on the Blue Gene, so the same commands apply (llq, llsubmit, etc.); see the example after this list.
  3. LoadLeveler writes the results to the output file specified in the job details, so you do not need the tee command shown in the example above.
  4. The P7 cluster shares the same file system with the Blue Gene, so be careful with your disk usage there.
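
For reference, a typical LoadLeveler workflow is sketched below. The file name "hybridx-isotropic.ll" is only an example; use whatever name you saved the job script above under.

# Submit the job script (example file name)
llsubmit hybridx-isotropic.ll

# Check the status of your jobs
llq -u $USER

# Cancel a job if needed, using the job id reported by llq
llcancel <jobid>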