See Selecting Resources for help on choosing the best node/queue for your work.
Updates:
* tres288 queue added with a 288-hour (12-day) maximum time limit
* tres72 time limit changed to 288 hours, the same as tres288; retained for existing scripts
* csce-k2-72 queue added for the new csce Pinnacle-2 nodes
Pinnacle queues, or Slurm “partitions”, are:
pinnacle partition | description | time limit | cores per node | number of nodes | other |
---|---|---|---|---|---|
comp01 | 192 GB nodes | 1 hr | 32 | 48 | full node usage required |
comp06 | 192 GB nodes | 6 hr | 32 | 44 | full node usage required |
comp72 | 192 GB nodes | 72 hr | 32 | 40 | full node usage required |
gpu06 | gpu nodes | 6 hr | 32 | 19 | gpu usage required/full node usage required |
gpu72 | gpu nodes | 72 hr | 32 | 19 | gpu usage required/full node usage required |
himem06 | 768 GB nodes | 6 hr | 24 | 6 | >192 GB memory usage required/full node usage required |
himem72 | 768 GB nodes | 72 hr | 24 | 6 | >192 GB memory usage required/full node usage required |
cloud72 | virtual machines/containers/single processor jobs | 72 hr | 32 | 3 | for non-intensive computing up to 4 cores |
tres72 | 64 GB nodes | 72 hr | 32 | 23 | Trestles nodes with Pinnacle operating system |
tres288 | 64 GB nodes | 288 hr | 32 | 23 | Trestles nodes with Pinnacle operating system |
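As an illustration, a batch script for the comp72 partition might look like the following sketch. The job name and executable are placeholders, and the requested time must stay within the partition's 72-hour limit.

#!/bin/bash
#SBATCH --job-name=example            # placeholder job name
#SBATCH --partition=comp72            # 192 GB nodes, 72 hr limit
#SBATCH --nodes=1                     # comp queues require full node usage
#SBATCH --ntasks-per-node=32          # use all 32 cores on the node
#SBATCH --time=72:00:00               # at or below the partition time limit

cd $SLURM_SUBMIT_DIR
./myprogram                           # placeholder executable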
karpinski partition | description | time limit | cores per node | number of nodes |
---|---|---|---|---|
csce72 | 32 GB nodes | 72 hr | 8 | 18 |
csce-k2-72 | 256 GB nodes | 72 hr | 64 | 6 |
cscloud72 | virtual machines/containers/single processor jobs | 72 hr | 8 | 18 |
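The cloud queues (cloud72 and cscloud72) are the exception to full node usage: they accept single-processor jobs. A single-core job sketch, again with a placeholder executable, could be as simple as:

#!/bin/bash
#SBATCH --partition=cloud72           # non-intensive computing, up to 4 cores
#SBATCH --ntasks=1                    # single core; no full-node requirement
#SBATCH --time=24:00:00

cd $SLURM_SUBMIT_DIR
./my_light_task                       # placeholder executable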
Condo queues are:
pinnacle partition | description | time limit | number of nodes | other |
---|---|---|---|---|
condo | condo nodes | none | 25 | authorization and appropriate properties required |
pcon06 | public use of condo nodes | 6 hr | 25 | appropriate properties required |
Condo nodes require specification of a sufficient set of Slurm properties. Note that condo/pcon06 jobs running on the wrong nodes through lack of specified properties will be canceled without notice, and non-gpu jobs running on gpu nodes may be canceled without notice. Property choices available are:
* gpu or not: 0gpu / 1v100 / 2v100 / 1a100 / 4a100
* processor: i6130 / a7351 / i6128; equivalently by memory: 192gb / 256gb / 768gb; equivalently by cores: 32c / 32c / 24c
* local drive: nvme / no specification
* research group fwang: equivalent to 0gpu / i6130|i6230 / 768gb / 32c|40c / nvme
* research group tkaman: equivalent to 2v100 / i6130 / 192gb / 32c
* research group aja: equivalent to 0gpu / i6128 / 192gb|768gb / 24c
examples:
#SBATCH --constraint=2v100
#SBATCH --constraint=fwang
#SBATCH --constraint=768gb&0gpu
#SBATCH --constraint=256gb
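Putting the pieces together, a public-use condo job could combine the pcon06 partition with a constraint. This is a sketch with a placeholder executable; the constraint must match nodes that actually exist in the condo pool.

#!/bin/bash
#SBATCH --partition=pcon06            # public use of condo nodes, 6 hr limit
#SBATCH --constraint=1v100            # request a node with one V100 gpu
#SBATCH --nodes=1
#SBATCH --time=6:00:00

cd $SLURM_SUBMIT_DIR
./my_gpu_program                      # placeholder executable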
A script is available to show idle nodes, like this (in this case, 2 nodes are idle in the 1-hour comp queue, none in the 6-hour or 72-hour comp queues, but nodes are available in the gpu, himem, csce, and csce cloud queues). Sufficient idle nodes in your queue of interest do not guarantee that your job will start immediately, but it usually will.
$ idle_pinnacle_nodes.sh
n01=2 n06=0 n72=0 g06=1 g72=1 h06=2 h72=2 c72=16 l72=16
condo aja=2 wang=0 mich=2 kama=0
$
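If the script is unavailable, standard Slurm commands give similar information. For example, sinfo can list the idle node count per partition (the partition names here are taken from the tables above):

$ sinfo -p comp01,comp06,comp72,gpu72 -t idle -o "%P %D"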