Pinnacle/Karpinski Queues
Queues, or Slurm "partitions", are:
| pinnacle partition | description | time limit | number of nodes | other |
|---|---|---|---|---|
| comp01 | 192 GB nodes | 1 hr | 48 | full node usage required |
| comp06 | 192 GB nodes | 6 hr | 44 | full node usage required |
| comp72 | 192 GB nodes | 72 hr | 40 | full node usage required |
| gpu06 | gpu nodes | 6 hr | 19 | gpu usage required/full node usage required |
| gpu72 | gpu nodes | 72 hr | 19 | gpu usage required/full node usage required |
| himem06 | 768 GB nodes | 6 hr | 6 | >192 GB memory usage required/full node usage required |
| himem72 | 768 GB nodes | 72 hr | 6 | >192 GB memory usage required/full node usage required |
| cloud72 | virtual machines/containers/single processor jobs | 72 hr | 3 | for non-intensive computing up to 4 cores |
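As a sketch of how these partitions are used, a minimal batch script requesting one of the public Pinnacle partitions might look like the following (the job name and program are placeholders; comp06 provides 192 GB nodes for up to 6 hours and requires full-node usage):

```shell
#!/bin/bash
#SBATCH --job-name=example        # placeholder job name
#SBATCH --partition=comp06        # 192 GB standard nodes, 6 hr time limit
#SBATCH --nodes=1                 # full node usage is required on comp partitions
#SBATCH --time=06:00:00           # must not exceed the partition time limit

./my_program                      # placeholder for the actual workload
```

Submit with `sbatch script.sh`; jobs exceeding the partition's time limit will be rejected or killed at the limit.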
| karpinski partition | description | time limit | number of nodes |
|---|---|---|---|
| csce72 | 32 GB nodes | 72 hr | 18 |
| cscloud72 | virtual machines/containers/single processor jobs | 72 hr | 18 |
Condo queues are:
| pinnacle partition | description | time limit | number of nodes | other |
|---|---|---|---|---|
| condo | condo nodes | none | 25 | authorization required |
| pcon06 | public use of condo nodes | 6 hr | 25 | |
Condo nodes require specifying a sufficient set of Slurm node properties (constraints). The available property choices are:
gpu or not: 0gpu/1v100/2v100
processor: i6130/a7351/i6128
memory (equivalent to the processor choice, respectively): 192gb/256gb/768gb
cores (equivalent to the processor choice, respectively): 32c/32c/24c
local drive: nvme/no specification
research group: fwang, equivalent to 2v100/i6130/768gb/32c/nvme
research group: tkaman, equivalent to 0gpu/i6130/192gb/32c
research group: aja, equivalent to 0gpu/i6130/192gb/32c or 0gpu/i6128/768gb/24c
examples:
#SBATCH --constraint=2v100
#SBATCH --constraint=fwang
#SBATCH --constraint=768gb&0gpu
#SBATCH --constraint=256gb
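Putting the pieces together, a condo job script combines a condo partition with a constraint. A sketch, assuming a user authorized for the fwang group's nodes (the time request and program are placeholders):

```shell
#!/bin/bash
#SBATCH --partition=condo         # condo partition: authorization required, no time limit
#SBATCH --constraint=fwang        # equivalent to 2v100/i6130/768gb/32c/nvme
#SBATCH --nodes=1
#SBATCH --time=24:00:00           # no partition limit, but a time estimate is good practice

./my_program                      # placeholder for the actual workload
```

Note that when passing a combined constraint such as `768gb&0gpu` on the command line rather than in an `#SBATCH` directive, quote it (`sbatch --constraint='768gb&0gpu' ...`) so the shell does not interpret the `&`.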
