

COMSOL 5.3

Running COMSOL in parallel

COMSOL can run a job on many cores in parallel (shared-memory processing, or multithreading) and on many physical nodes (distributed computing through MPI). A good strategy is to combine both modes to maximize the benefit of parallelization: request several cores on each of a number of nodes, so that COMSOL can use MPI across nodes and multithreading within each node. Cluster computing requires a floating network license (provided by POLIMI).
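As a sketch of how such a hybrid request maps onto COMSOL's two levels of parallelism (the numbers below are illustrative assumptions, not site defaults):

```shell
#!/bin/sh
# Illustrative hybrid layout; the values are assumptions, not site defaults.
N=4   # nodes requested          (select=N)
C=8   # cores per node           (ncpus=C)
P=1   # MPI processes per node   (mpiprocs=P)

RANKS=$((N * P))     # distributed (MPI) processes across the cluster
THREADS=$((C / P))   # shared-memory threads available to each MPI process
echo "MPI ranks: $RANKS"
echo "Threads per rank: $THREADS"
```

With mpiprocs=1, each node hosts a single MPI process that multithreads over all its cores, which is usually a sensible starting point for COMSOL.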

Four ways to run a cluster job

  1. Submit cluster-enabled batch job via PBS script - Requires completed and saved model mph file
  2. Branch off cluster-enabled batch jobs from a COMSOL GUI process started on the masternode - allows model work in the GUI and batch-job submission to PBS from within it, with only limited command-line proficiency needed. With the Cluster Sweep feature a single batch job can be submitted from the COMSOL GUI, and you can continue working in the GUI while the cluster job computes in the background.
  3. Start a cluster-enabled COMSOL desktop GUI on the masternode and work interactively with cluster jobs
  4. Start the COMSOL Desktop GUI as a client on a local PC or Mac and connect to a cluster-enabled COMSOL server on the masternode and work interactively

At present only methods 1 and 2 are supported.

Environment and Documentation

The command to set the COMSOL environment is: module load comsol/5.3.0

The link to the documentation is: http://masternode.chem.polimi.it/comsol53

PBS jobfile

#!/bin/bash
#
# Set Job execution shell
#PBS -S /bin/bash
 
# Set Job name: <jobname>=jobcomsol 
#PBS -N jobcomsol
 
# Set the execution queue: <queue name> is one of 
# gandalf, merlino, default, morgana, covenant
#PBS -q <queue name>
 
# Set mail addresses that will receive mail from PBS about job
# Can be a list of addresses separated by commas (,)
#PBS -M <polimi.it or mail.polimi.it email address only>
 
# Set events for mail from PBS about job
#PBS -m abe
 
# Job re-run (yes or no)
#PBS -r n
 
# Set standard output file 
#PBS -o jobcomsol.out
 
# Set standard error file 
#PBS -e jobcomsol.err
 
# Resource request: N nodes, C cores (cpus) per node, P MPI processes per node
#PBS -l select=N:ncpus=C:mpiprocs=P
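# Example (illustrative values, adapt them to your queue limits):
# 2 nodes with 12 cores each and 1 MPI process per node, i.e. COMSOL runs
# 2 MPI ranks, each with 12 shared-memory threads:
#   #PBS -l select=2:ncpus=12:mpiprocs=1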
 
# Pass environment to job
#PBS -V
 
# Change to submission directory
cd $PBS_O_WORKDIR
 
# Command to launch the application and its parameters
 
module load comsol/5.3.0
 
export inputfile="<name-of-model-input-mph-file>.mph"
export outputfile="<name-of-output-mph-file>.mph"
 
echo "---------------------------------------------------------------------"
echo "---Starting job at: `date`"
echo
echo "------Current working directory is `pwd`"
np=$(wc -l < $PBS_NODEFILE)
echo "------Running on ${np} processes (cores) on the following nodes:"
cat $PBS_NODEFILE
echo "----Parallel comsol run"
comsol -clustersimple batch -mpiarg -rmk -mpiarg pbs \
       -inputfile $inputfile -outputfile $outputfile -batchlog jobcomsol.log
echo "-----job finished at `date`"
echo "---------------------------------------------------------------------"
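Assuming the script above has been saved as jobcomsol.pbs (the filename is an arbitrary choice), the job would be submitted and monitored with the standard PBS commands:

```shell
# Submit the jobfile to the batch system (filename is an assumption)
qsub jobcomsol.pbs

# Check the job status (Q = queued, R = running)
qstat -u $USER

# After completion, the solved model is in the output mph file and the
# solver messages are in jobcomsol.log
```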
comsol_5.3.1511877454.txt.gz · Last modified: 2017/11/28 14:57 by druido