
New Zealand eScience Infrastructure

Slurm User Training 2014

Slurm Workload Manager Hands-On
In this hands-on session we are going to set up submit scripts for different kinds of usage: serial, OpenMP, MPI, job arrays, and job dependencies. We would also like to encourage you to spend this time porting your LoadLeveler scripts to Slurm.

Estimated time: 30 minutes

Requirements
- Pan cluster account.
- Laptop with an SSH client.

To Do
1. Log in to build-wm after logging in to the Pan cluster:

       ssh build-wm
2. Copy the content of /share/training/slurm to your project directory (example below).
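   For example (the project directory /project/uoaXXXX is a placeholder; use your own):

       cp -r /share/training/slurm /project/uoaXXXX/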
3. Edit serial.sl (sketch below):
   - change the job name
   - ask for 2 GB of memory
   - change the working directory
   - ask for 2 h walltime
   - add your project account
   - submit the job
   - review the queue status
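   A minimal serial.sl along these lines, following the options list at the end of this sheet; the account uoa99999, directory /project/uoaXXXX and executable my_serial_prog are placeholders:

       #!/bin/bash
       #SBATCH -J serial-test            # Job name
       #SBATCH -A uoa99999               # Project account (placeholder)
       #SBATCH -D /project/uoaXXXX       # Working directory (placeholder)
       #SBATCH --time=02:00:00           # 2 h walltime
       #SBATCH --mem-per-cpu=2048        # 2 GB of memory (in MB)

       srun ./my_serial_prog             # placeholder executable

   Submit and review the queue with:

       sbatch serial.sl
       squeue -u $USER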
4. Edit openmp.sl (sketch below):
   - ask for 12 cores
   - add email notification
   - add your email address
   - submit the job
   - review the queue status
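   A sketch of openmp.sl with those changes; the walltime and the executable my_openmp_prog are placeholders, and OMP_NUM_THREADS is taken from Slurm's $SLURM_CPUS_PER_TASK so the thread count matches the allocation:

       #!/bin/bash
       #SBATCH -J openmp-test            # Job name
       #SBATCH -A uoa99999               # Project account (placeholder)
       #SBATCH --time=01:00:00           # Walltime (placeholder)
       #SBATCH --cpus-per-task=12        # 12 cores for the OpenMP threads
       #SBATCH --mail-type=ALL           # Email notification
       #SBATCH --mail-user=username@nesi.org.nz   # Your email address

       export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
       srun ./my_openmp_prog             # placeholder executable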
5. Edit mpi.sl (sketch below):
   - ask for 24 cores
   - ask for 3 days walltime
   - submit the job
   - review the queue status
   - change the walltime to 10 min
   - submit the job again
   - review the queue status
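   One way mpi.sl could look after the first round of edits (my_mpi_prog is a placeholder); for the second submission, change --time to 00:10:00:

       #!/bin/bash
       #SBATCH -J mpi-test               # Job name
       #SBATCH -A uoa99999               # Project account (placeholder)
       #SBATCH --time=3-00:00:00         # 3 days walltime
       #SBATCH --ntasks=24               # 24 MPI tasks (cores)

       srun ./my_mpi_prog                # srun launches one task per core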
6. Edit array.sl (sketch below):
   - ask for a 100-task job array
   - ask for 2 cores
   - ask for 10 minute walltime
   - submit the job
   - review the queue status
   - cancel the last 80 jobs in the array
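   A possible array.sl; each of the 100 tasks sees its own $SLURM_ARRAY_TASK_ID (my_prog is a placeholder):

       #!/bin/bash
       #SBATCH -J array-test             # Job name
       #SBATCH -A uoa99999               # Project account (placeholder)
       #SBATCH --time=00:10:00           # 10 minute walltime
       #SBATCH --cpus-per-task=2         # 2 cores per array task
       #SBATCH --array=1-100             # 100-task job array

       srun ./my_prog $SLURM_ARRAY_TASK_ID   # placeholder executable

   To cancel the last 80 tasks, something like the following should work, where JOB_ID is the array's job ID as shown by squeue:

       scancel JOB_ID_[21-100]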
7. Submit preprocessing.sl.
8. Submit mpi.sl with a dependency on the previous job:

       sbatch --dependency afterok:JOB_ID mpi.sl

9. Submit postprocessing.sl with a dependency on the previous job (a chained example follows).
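   Putting steps 7-9 together, a sketch that captures each job ID from sbatch's "Submitted batch job NNNN" output line:

       pre=$(sbatch preprocessing.sl | awk '{print $4}')
       mpi=$(sbatch --dependency afterok:$pre mpi.sl | awk '{print $4}')
       sbatch --dependency afterok:$mpi postprocessing.sl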
10. Execute the interactive command line:

       interactive -A PROJECT_ACCOUNT
Quick Reference

Slurm commands
    sacct        Extract accounting information from the cluster.
    scancel      Job deletion.
    sinfo        Show status information about the cluster.
    squeue       Status listing of jobs and queues.
    sview        GUI to view job, node and partition information.
    smap         CLI to view job, node and partition information.
    sbatch       Command-line interface to submit jobs.
    interactive  Allocate HPC resources for interactive usage.

Commonly used SLURM variables
    $SLURM_JOBID           Job ID
    $SLURM_JOB_NODELIST    e.g. sb[004,006]
    $SLURM_NNODES          Number of nodes
    $TMP_DIR               Local filesystem
    $SCRATCH_DIR           Shared filesystem
    $SHM_DIR               Local RAM filesystem
    $SLURM_SUBMIT_DIR      Directory the job was submitted from
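As an illustration of these variables, a sketch of a job that stages data through the node-local filesystem (my_prog and the file names are placeholders; $TMP_DIR and $SCRATCH_DIR are the site-provided locations listed above):

    #!/bin/bash
    #SBATCH -J staging-demo           # Job name
    #SBATCH -A uoa99999               # Project account (placeholder)
    #SBATCH --time=00:10:00

    # Stage input from the submit directory to fast node-local disk.
    cp $SLURM_SUBMIT_DIR/input.dat $TMP_DIR/
    cd $TMP_DIR
    srun $SLURM_SUBMIT_DIR/my_prog input.dat

    # Copy results back to the shared filesystem, tagged by job ID.
    mkdir -p $SCRATCH_DIR/job-$SLURM_JOBID
    cp output.dat $SCRATCH_DIR/job-$SLURM_JOBID/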
Commonly used SLURM options
    #SBATCH -J FLEXPART-WRF           # Job name
    #SBATCH -A uoa99999               # Project account
    #SBATCH --time=00:10:00           # Walltime
    #SBATCH --mem-per-cpu=1024        # Memory per CPU (in MB)
    #SBATCH -D /project/uoaXXXX       # Working directory
    #SBATCH -o job-%j.%N.out          # Standard output (optional)
    #SBATCH -e job-%j.%N.err          # Standard error (optional)
    #SBATCH --mail-type=ALL           # Email notification
    #SBATCH --mail-user=username@nesi.org.nz
    #SBATCH -C sb                     # Constraint: sb=Sandybridge, wm=Westmere
    #SBATCH --cpus-per-task=8         # 8 OpenMP threads
    #SBATCH --ntasks=96               # Number of MPI tasks
    #SBATCH --nodes=2-4               # Number of nodes
    #SBATCH --array=1-500             # Array definition
    #SBATCH --dependency afterok:JOB_ID

Reference
- Slurm Training Slides
- Slurm Submit Script Templates
- CeR wiki page
- Slurm Rosetta Stone (translate from LoadLeveler to Slurm)
- Slurm Cheat Sheet

June 2014, NeSI (New Zealand eScience Infrastructure)
email: support@nesi.org.nz   web: www.nesi.org.nz
