Five or six Slurm parameters must be specified to select a computational resource and run a job; the sixth, constraint, applies only to the condo and pcon06 partitions. Additional Slurm parameters are optional.
| Parameter | Description |
|---|---|
| partition | A group of similar compute nodes (except condo and pcon06, which contain mixed node types) |
| time | The wall-clock run time limit for the job. Request a little more time than you expect to need, since the job is killed when it reaches the limit. The maximum depends on the partition (see the Max hours column below, e.g. 72 hours for partitions ending in 72). |
| nodes | The number of nodes to allocate. 1 unless your program uses MPI. |
| tasks-per-node | The number of processes per node to allocate. 1 unless your program uses MPI. |
| cpus-per-task | The number of hardware threads to allocate per process. |
| constraint | Sub-partition selection for partitions condo and pcon06. |
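Taken together, these parameters map onto `#SBATCH` directives in a batch script. The sketch below is illustrative only: the partition, time limit, and executable are placeholder values, and note that Slurm spells the tasks-per-node option `--ntasks-per-node`.

```bash
#!/bin/bash
#SBATCH --partition=comp72       # partition: a group of similar compute nodes
#SBATCH --time=70:00:00          # time: wall-clock limit, with margin under the 72-hour maximum
#SBATCH --nodes=1                # nodes: 1 unless the program uses MPI
#SBATCH --ntasks-per-node=1      # tasks-per-node: processes per node, 1 unless the program uses MPI
#SBATCH --cpus-per-task=32       # cpus-per-task: hardware threads per process
# constraint is only needed for the condo and pcon06 partitions, for example:
# #SBATCH --constraint=<node-type>   (placeholder; see the note on constraint below)

./my_program                     # placeholder executable
```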
The partitions are:
| Partition | Max hours | Max nodes | Max tasks | Max cpus/task | Num nodes | Node type | GPUs per node | Description |
|---|---|---|---|---|---|---|---|---|
| cloud72 | 72 | 1 | 1 | 2 | 3 | Xeon 6130/32c/192GB | 0 | shared queue for test jobs and low-effort tasks such as compiling and editing |
| comp72 | 72 | n/a | 32* | 32* | 45 | Dual Xeon 6130/32c/192GB | 0 | shared queue for long-term full-node computation |
| comp06 | 6 | n/a | 32* | 32* | 45 | Dual Xeon 6130/32c/192GB | 0 | same as comp72 except 6-hour limit and higher priority |
| comp01 | 1 | n/a | 32* | 32* | 47 | Dual Xeon 6130/32c/192GB | 0 | same as comp06 except 1-hour limit and higher priority |
| tres72 | 288 | n/a | 32* | 32* | 111 | Dual AMD 6136/32c/64GB | 0 | shared queue for very long-term full-node computation |
| tres288 | 288 | n/a | 32* | 32* | 111 | Dual AMD 6136/32c/64GB | 0 | synonym for tres72 with expanded 288-hour limit |
| gpu72 | 72 | n/a | 32* | 32* | 19 | Dual Xeon 6130/32c/192GB | 1 V100 | shared queue for long-term full-node single GPU computation |
| gpu06 | 6 | n/a | 32* | 32* | 19 | Dual Xeon 6130/32c/192GB | 1 V100 | same as gpu72 except 6-hour limit and higher priority |
| himem72 | 72 | n/a | 24* | 24* | 6 | Dual Xeon 6128/24c/768GB | 0 | shared queue for long-term full-node high-memory |
| himem06 | 6 | n/a | 24* | 24* | 6 | Dual Xeon 6128/24c/768GB | 0 | same as himem72 except 6-hour limit and higher priority |
| acomp06 | 6 | n/a | 64* | 64* | 1 | Dual AMD 7543/64c/1024GB | 0 | shared queue for medium-term full-node 64-core computation |
| agpu72 | 72 | n/a | 64* | 64* | 16 | Dual AMD 7543/64c/1024GB | 1 A100 | shared queue for medium-term full-node 64-core single GPU computation |
| agpu06 | 6 | n/a | 64* | 64* | 18 | Dual AMD 7543/64c/1024GB | 1 A100 | same as agpu72 except 6-hour limit and higher priority |
| qgpu72 | 72 | n/a | 64* | 64* | 4 | Dual AMD 7543/64c/1024GB | 4 A100 | shared queue for medium-term full-node 64-core quad GPU computation |
| qgpu06 | 6 | n/a | 64* | 64* | 4 | Dual AMD 7543/64c/1024GB | 4 A100 | same as qgpu72 except 6-hour limit and higher priority |
| csce72 | 72 | n/a | 64* | 64* | 14 | Dual AMD 7543/64c/1024GB | 0 | For CSCE users |
| condo | n/a | n/a | n/a | n/a | 190 | n/a | n/a | n/a |
| pcon06 | 6 | 1 | n/a | n/a | 190 | n/a | n/a | n/a |
Notes:

- The cloud72 partition is a shared (non-full-node) queue for test jobs and low-effort tasks such as compiling and editing, limited to 1 task and 2 cpus.
- The gpu and himem partitions are reserved for jobs that (a) use the GPU or (b) use more than 180 GB of memory, respectively.
- comp, gpu, and himem nodes are reserved for full-node jobs. For the few specific exceptions that need more than 1 core but less than a full node, contact hpc-support@listserv.uark.edu.
- For the condo and pcon06 partitions and the constraint parameter, contact hpc-support@listserv.uark.edu.
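As a further sketch, and given the full-node policy above, a single-GPU job on gpu72 might request all 32 cores of one node along the lines of the script below. The module name and executable are placeholders, and whether an explicit `--gres` request is required depends on local Slurm configuration.

```bash
#!/bin/bash
#SBATCH --partition=gpu72        # dual Xeon 6130 nodes with one V100 each, 72-hour limit
#SBATCH --time=24:00:00          # well under the 72-hour partition maximum
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=32       # full node, per the full-node policy above
#SBATCH --gres=gpu:1             # explicit GPU request; may be unnecessary if GPUs are granted by default

module load cuda                 # placeholder module name; check locally available modules
./my_gpu_program                 # placeholder executable
```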