Gurobi

From SciNet Users Documentation

The Gurobi optimization solver (for linear and mixed-integer programming) is installed in the Niagara software stack.

Gurobi solves problems of the form: given a set of linear inequality/equality constraints, Ax >= b, where A is a matrix and x and b are vectors, find the vector x (within given bounds) that maximizes or minimizes a linear objective function f(x).

Such models are common in scientific computing, engineering, and business. If some or all of the variables x are restricted to integer values, the problem becomes a Mixed Integer Program (MIP), which is considerably harder to solve. Gurobi, like other solvers (such as "linprog" and "intlinprog" in MATLAB), solves such LP/MIP problems efficiently. Gurobi also implements efficient multi-threading (and, depending on the license, distributed computation), so that large models parallelize and scale easily.
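As a concrete illustration (a made-up toy model, not part of the Niagara documentation), the LP "maximize x + 2y subject to x + y <= 4 and 0 <= x <= 2, y >= 0" can be written in Gurobi's LP file format:

```
\ toy.lp -- a small illustrative LP model
Maximize
  x + 2 y
Subject To
  c0: x + y <= 4
Bounds
  0 <= x <= 2
End
```

Saving this as toy.lp and running "gurobi_cl toy.lp" solves it with the command-line tool; the optimum here is x = 0, y = 4, with objective value 8.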

Getting a license

The University of Toronto has a free academic license to use Gurobi. Access to the license is granted by loading the Gurobi module.

Running using the Niagara installation

Gurobi 11.0.1

To access commercially licensed modules on Niagara, one must first add the commercial module directory with the 'module use' command:

module load NiaEnv/2022a
module use /scinet/niagara/software/commercial/modules
module load gurobi/11.0.1

Using Gurobi

To use Gurobi from C++, include "gurobi_c++.h" in your source file and use the following compilation/linking flags:

CXXLIB=-L ${SCINET_GUROBI_LIB} -lgurobi_g++5.2 -lgurobi110 -fopenmp
CXXINC=-I ${SCINET_GUROBI_INC} -fopenmp

Full documentation of Gurobi's APIs can be found in the Gurobi Reference Manual on the Gurobi website.
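As a minimal sketch of the C++ API (an illustrative toy model, not part of this page), the following program builds and solves a small LP. All Gurobi calls used here (GRBEnv, GRBModel, addVar, setObjective, addConstr, optimize) are standard parts of the Gurobi C++ API:

```cpp
// toy_lp.cpp -- minimal Gurobi C++ sketch:
// maximize x + 2y  s.t.  x + y <= 4,  0 <= x <= 2,  y >= 0.
#include "gurobi_c++.h"
#include <iostream>

int main() {
    try {
        GRBEnv env;            // picks up the license provided by the module
        GRBModel model(env);

        // Continuous variables with lower bound, upper bound, objective
        // coefficient (set later via setObjective), type, and name.
        GRBVar x = model.addVar(0.0, 2.0, 0.0, GRB_CONTINUOUS, "x");
        GRBVar y = model.addVar(0.0, GRB_INFINITY, 0.0, GRB_CONTINUOUS, "y");

        model.setObjective(x + 2 * y, GRB_MAXIMIZE);
        model.addConstr(x + y <= 4, "c0");

        model.optimize();

        std::cout << "objective: "
                  << model.get(GRB_DoubleAttr_ObjVal) << std::endl;
    } catch (GRBException &e) {
        std::cerr << "Gurobi error " << e.getErrorCode() << ": "
                  << e.getMessage() << std::endl;
        return 1;
    }
    return 0;
}
```

With the flags above it can be compiled as: g++ ${CXXINC} toy_lp.cpp ${CXXLIB} -o toy_lp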

Running Gurobi

Example submission script for a job running on one node, with a maximum walltime of 11 hours:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=40
#SBATCH --time=11:00:00
#SBATCH --job-name test

module load NiaEnv/2022a
module use /scinet/niagara/software/commercial/modules
module load gurobi/11.0.1

# DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is directory job was submitted from
cd $SLURM_SUBMIT_DIR

# If you are using OpenMP
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

./mycode
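By default Gurobi uses all cores it can detect; to cap the solver's thread count at the cores Slurm actually allocated, you can set Gurobi's Threads parameter from the SLURM_CPUS_PER_TASK variable exported in the script above. A sketch (the helper name is hypothetical; GRB_IntParam_Threads is the real Gurobi parameter):

```cpp
// In your Gurobi program: limit solver threads to the Slurm allocation.
#include "gurobi_c++.h"
#include <cstdlib>

void set_threads_from_slurm(GRBModel &model) {
    const char *n = std::getenv("SLURM_CPUS_PER_TASK");
    if (n != nullptr) {
        // GRB_IntParam_Threads caps the number of parallel solver threads.
        model.set(GRB_IntParam_Threads, std::atoi(n));
    }
}
```

Call this on your model before model.optimize(); outside of a Slurm job the variable is unset and Gurobi keeps its default behaviour.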