Slurm Queues Pinnacle/Karpinski

Queues, or Slurm “partitions”, are:

| pinnacle partition | description | time limit | number of nodes | other |
| --- | --- | --- | --- | --- |
| comp01 | 192 GB nodes | 1 hr | 48 | full node usage required |
| comp06 | 192 GB nodes | 6 hr | 44 | full node usage required |
| comp72 | 192 GB nodes | 72 hr | 40 | full node usage required |
| gpu06 | gpu nodes | 6 hr | 19 | gpu usage required / full node usage required |
| gpu72 | gpu nodes | 72 hr | 19 | gpu usage required / full node usage required |
| himem06 | 768 GB nodes | 6 hr | 6 | >192 GB memory usage required / full node usage required |
| himem72 | 768 GB nodes | 72 hr | 6 | >192 GB memory usage required / full node usage required |
| cloud72 | virtual machines/containers/single-processor jobs | 72 hr | 3 | for non-intensive computing, up to 4 cores |
| karpinski partition | description | time limit | number of nodes |
| --- | --- | --- | --- |
| csce72 | 32 GB nodes | 72 hr | 18 |
| cscloud72 | virtual machines/containers/single-processor jobs | 72 hr | 18 |
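
For reference, a minimal batch script selecting one of these partitions might look like the sketch below. The job name, node count, and program are illustrative assumptions; the task count assumes the 32-core compute nodes described in the condo property list further down, since the comp queues require full node usage.

#!/bin/bash
#SBATCH --job-name=example_job     # illustrative name
#SBATCH --partition=comp06         # 192 GB nodes, 6 hr limit
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32       # full node usage is required on comp queues
#SBATCH --time=06:00:00

srun ./my_program                  # placeholder executable

For light or single-processor work, cloud72 is the intended partition (up to 4 cores), e.g. --partition=cloud72 with --ntasks=1.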

Condo queues are:

| pinnacle partition | description | time limit | number of nodes | other |
| --- | --- | --- | --- | --- |
| condo | condo nodes | none | 25 | authorization required |
| pcon06 | public use of condo nodes | 6 hr | 25 | |
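
Current time limits and node counts for any partition can be checked with a standard Slurm query; a quick sketch:

sinfo -p condo,pcon06 -o "%P %l %D %N"   # partition, time limit, node count, node list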

Condo nodes require specification of a sufficient set of Slurm properties. The available property choices are:

* gpu or not: 0gpu / 1v100 / 2v100
* processor: i6130 / a7351 / i6128
* equivalently, memory: 192gb / 256gb / 768gb
* equivalently, cores: 32c / 32c / 24c
* local drive: nvme, or no specification
* research group: fwang, equivalent to 2v100/i6130/768gb/32c/nvme
* research group: tkaman, equivalent to 0gpu/i6130/192gb/32c
* research group: aja, equivalent to 0gpu/i6130/192gb/32c or 0gpu/i6128/768gb/24c

Examples:

#SBATCH --constraint=2v100
#SBATCH --constraint=fwang
#SBATCH --constraint=768gb&0gpu
#SBATCH --constraint=256gb
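
Putting the pieces together, a condo job script might combine the partition with a constraint as sketched below. The job name, task count, wall time, and program are illustrative assumptions; note that if a constraint containing & is given on the sbatch command line rather than in a #SBATCH directive, it must be quoted to protect it from the shell.

#!/bin/bash
#SBATCH --job-name=condo_example   # illustrative name
#SBATCH --partition=condo          # authorization required; no queue time limit
#SBATCH --constraint=0gpu&192gb    # non-GPU 192 GB condo nodes
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32       # 32c nodes per the property list above
#SBATCH --time=24:00:00            # illustrative wall time

srun ./my_program                  # placeholder executable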
