STAR-CCM+

The STAR-CCM+ engineering simulation tool is installed on the Compute Canada (CC) software stack.

Getting a license

Licenses are provided by Siemens.

You will need to create a $HOME/.licenses/starccm.lic file with the following contents:

SERVER 127.0.0.1 ANY 1999
USE_SERVER
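
A minimal way to create this file from a login-node shell (assuming the $HOME/.licenses directory may not exist yet):

mkdir -p $HOME/.licenses
cat > $HOME/.licenses/starccm.lic << 'EOF'
SERVER 127.0.0.1 ANY 1999
USE_SERVER
EOF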

and establish the ssh tunnel to the license server at Siemens, as shown in the submission script template further below. Also be sure to define the license environment variables before loading the modules.

Running using the CC installation

STAR-CCM+ 12.04.011

STAR-CCM+ is only available on the CC software stack. On Niagara you must run the following module commands:

module load CCEnv
module load StdEnv
module load starccm/12.04.011-R8
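
If you are unsure which versions are installed, you can query the module system first; this is a standard Lmod lookup, shown here as a sketch, and not specific to STAR-CCM+:

module load CCEnv StdEnv
module avail starccm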

Setting up your .star-12.04.011 directory

STAR-CCM+ version 12.04.011 will attempt to write to your $HOME/.star-12.04.011 directory. This will work when you are testing your workflow on the login nodes, because they can write to $HOME. However, recall that the compute nodes cannot write to the /home filesystem. If you attempt to run STAR-CCM+ from a compute node using the default configuration, it will fail because STAR-CCM+ cannot write to $HOME/.star-12.04.011.

The solution is to create an alternative directory called $SCRATCH/.star-12.04.011, and create a soft link at $HOME/.star-12.04.011 that points to $SCRATCH/.star-12.04.011:

mkdir $SCRATCH/.star-12.04.011
ln -s $SCRATCH/.star-12.04.011 $HOME/.star-12.04.011

This will fool STAR-CCM+ into thinking it is writing to $HOME/.star-12.04.011, when in fact it is writing to $SCRATCH/.star-12.04.011. This only needs to be done once, but it must be repeated for each version of STAR-CCM+ you run.
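
For example, if you use the starccm/13.06.012-R8 module loaded in the script below, and assuming it follows the same .star-<version> naming pattern, the setup would be:

mkdir $SCRATCH/.star-13.06.012
ln -s $SCRATCH/.star-13.06.012 $HOME/.star-13.06.012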

Running STAR-CCM+

Example submission script for a job running on 2 nodes, with a maximum walltime of 30 minutes:

#!/bin/bash
#SBATCH --time=0-00:30        # Time limit: d-hh:mm
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=40
#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=END

# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is the directory the job was submitted from
cd $SLURM_SUBMIT_DIR

# Set up the license server tunnel first, prior to loading the modules
ssh nia-gw -L 1999:flex.cd-adapco.com:1999 -L 2099:flex.cd-adapco.com:2099 -N -f
export LM_PROJECT='jx+5qeNKQxSa/3dDGkEfDA'
export CDLMD_LICENSE_FILE="1999@127.0.0.1"

# Load the modules and take note of the STAR-CCM+ version selected
module load CCEnv
module load StdEnv
module load starccm/13.06.012-R8

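# Generate a machinefile listing the hosts allocated to this job,
# in the format expected by STAR-CCM+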
slurm_hl2hl.py --format STAR-CCM+ > $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID

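# Total core count for the run: all tasks across nodes times cpus per task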
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

# Workaround for intermittent license failures: for some reason it can take several attempts to connect.
# Try up to five times to start starccm+, checking the exit status (143 on failure, 0 on success).
i=1
RET=-1
while [ $i -le 5 ] && [ $RET -ne 0 ]; do
        [ $i -eq 1 ] || sleep 5
        echo "Attempt number: $i"
        starccm+ -power -np $NCORE -podkey $LM_PROJECT -machinefile $SLURM_SUBMIT_DIR/machinefile_$SLURM_JOB_ID -batch $SLURM_SUBMIT_DIR/your-simulation-file.java $SLURM_SUBMIT_DIR/your-simulation-file.sim > $SLURM_JOB_ID.results
        RET=$?
        i=$((i+1))
done
exit $RET
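
To submit the job, save the script (starccm_job.sh here is just an example filename) and queue it with sbatch; the solver output will appear in a <jobid>.results file in the submission directory:

sbatch starccm_job.sh
squeue -u $USER    # check the job's status in the queue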