====Slurm sbatch/srun====

Slurm jobs may be submitted by:

1. Slurm batch scripts submitted by ''sbatch''
2. PBS batch scripts submitted by ''qsub''
3. Slurm interactive jobs submitted by ''srun''
4. Slurm interactive and graphical jobs submitted by [[ portal_login_new | OpenOnDemand ]]

Essential slurm subcommands and available values are described in [[ selecting_resources | Selecting Resources ]]. The same constraints apply regardless of the source of the commands.
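As a small illustration of that last point, the same resource request can be given either as ''#SBATCH'' directives inside a batch script or as options on the ''srun'' command line. The node, task, and time values below are placeholders for this sketch, not site defaults:
<code>
#!/bin/bash
# batch form: resources requested with #SBATCH directives, submitted with sbatch
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=1:00:00
srun hostname

# interactive form: the same request given directly on the command line
# srun --nodes=1 --ntasks-per-node=4 --time=1:00:00 --pty /bin/bash
</code>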
Basic slurm commands are:
<csv>
slurm, use
sbatch, submit batch job
srun, submit interactive job
squeue, show job queue
</csv>
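A typical sequence with these commands looks like the following; the script name and job ID are only illustrative:
<code>
$ sbatch myjob.sh        # submit a batch script; slurm replies with the job ID
Submitted batch job 1234567
$ squeue -u $USER        # list your pending and running jobs
$ scancel 1234567        # cancel the job if it is no longer needed
</code>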
A basic slurm batch script for MPI (2 full nodes) follows. It should begin with a "#!" line such as "#!/bin/sh" or "#!/bin/bash", which slurm requires.

For MPI jobs of more than one node, a ''machinefile'' listing the allocated hosts is needed; it is passed to ''mpirun'' in the script below.
<code>
#!/bin/sh
#SBATCH --job-name=mpi
#SBATCH --output=zzz.slurm
#SBATCH --partition comp06
#SBATCH --qos comp
#SBATCH --nodes=2
#SBATCH --tasks-per-node=32
#SBATCH --cpus-per-task=1
#SBATCH --time=6:00:00
module purge
module load intel/
mpirun -np $SLURM_NTASKS -machinefile /scratch/${SLURM_JOB_ID}/machinefile_${SLURM_JOB_ID} ./<your-mpi-program>
</code>
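To run the script above, save it to a file and submit it with ''sbatch''. The file name and job ID below are illustrative; the output file name comes from the ''--output'' line of the script:
<code>
$ sbatch mpi_job.sh
Submitted batch job 1234568
$ squeue -j 1234568      # ST column shows PD (pending) or R (running)
$ cat zzz.slurm          # program output collected by --output=zzz.slurm
</code>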
A similar interactive job with one node:
<code>
srun --nodes 1 --ntasks-per-node=1 --time=1:00:00 --pty /bin/bash
</code>
All the slurm options between ''srun'' and ''--pty /bin/bash'' have the same meaning as the corresponding ''#SBATCH'' options in a batch script.
Then the trailing ''--pty /bin/bash'' starts an interactive shell on the allocated node.
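A session started this way behaves like an ordinary login shell on the allocated compute node; the node name below is made up, and the job ends when the shell exits:
<code>
$ srun --nodes 1 --ntasks-per-node=1 --time=1:00:00 --pty /bin/bash
$ hostname               # commands now run on the allocated compute node
c1405
$ exit                   # leaving the shell releases the node and ends the job
</code>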
A PBS compatibility layer will run simple PBS scripts under slurm. Basic PBS commands that can be interpreted as slurm commands will be translated.
<code>
$ cat qcp2.sh
#!/bin/bash
#PBS -q cloud72
#PBS -l walltime=00:10:00
#PBS -l nodes=1:ppn=2
sleep 5
echo $HOSTNAME

$ qsub qcp2.sh
1430970
$ cat qcp2.sh.o1430970
c1331
$
</code>
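For comparison, a native slurm version of the same small job would look roughly like the sketch below. The ''--qos cloud'' pairing is an assumption made by analogy with the ''comp06''/''comp'' and ''tres72''/''tres'' pairs used elsewhere on this page:
<code>
#!/bin/bash
#SBATCH --partition cloud72
#SBATCH --qos cloud
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --time=00:10:00
sleep 5
echo $HOSTNAME
</code>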
A bioinformatics and large data example follows.
In this script you have 1) slurm commands 2) job setup 3) go to the scratch directory and create a ''tmprun'' subdirectory 4) run the program in scratch 5) copy the results back to the submit directory and, only if the copy succeeded, delete the scratch copy.
If you don't understand the last step, do the copy back and delete manually, since a misapplied ''rm -rf'' can destroy data you want to keep.
<code>
#!/bin/bash
#SBATCH --partition tres72
#SBATCH --qos tres
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=32
#SBATCH --time=72:00:00
#
module load python/
source /
conda activate tbprofiler
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
#
cd /scratch/$SLURM_JOB_ID
mkdir -p tmprun
cd tmprun
#
FILE=/
trimmomatic PE -threads $SLURM_CPUS_PER_TASK ${FILE}F.fq ${FILE}R.fq \
Kausttrim-Unpaired.fq ILLUMINACLIP: \
TRAILING:3 SLIDINGWINDOW:
#
cd ..
rsync -av tmprun $SLURM_SUBMIT_DIR/
if [ $? -eq 0 ]; then
rm -rf tmprun
fi
</code>
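The scratch-and-copy-back portion of that script is a general pattern that can be reused with other programs. Stripped to a skeleton (the scratch path is a placeholder for whatever scratch directory your job uses), it is:
<code>
cd /scratch/$SLURM_JOB_ID              # work in job-local scratch space
mkdir -p tmprun
cd tmprun
# ... run the real work here ...
cd ..
rsync -av tmprun $SLURM_SUBMIT_DIR/    # copy results back to the submit directory
if [ $? -eq 0 ]; then                  # delete the scratch copy only if the copy succeeded
    rm -rf tmprun
fi
</code>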