=== Equipment/Selecting Resources ===

We describe the resources available at AHPCC and how to select the best one for your computing job.
Computing resources are presently divided into four clusters that use separate schedulers.
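
As a quick orientation, you can list the queues (partitions) visible from a cluster's login node. This is only a sketch: it assumes a SLURM-based cluster, and a cluster running a different scheduler has its own equivalent (for example, ''qstat -q'' under PBS/Torque).

<code bash>
# List the partitions (queues) and their states on this cluster.
# SLURM assumed; run from the cluster's login node.
sinfo --summarize
</code>
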
There are 7 public high-memory compute nodes, each with two Xeon Gold 6126 processors (24 cores) and 768 GB of memory, which use the queues ''...''.
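
A minimal request for one of these high-memory nodes might look like the following sketch. SLURM is assumed, ''himem-queue'' is a placeholder rather than an actual AHPCC queue name, and ''my_large_memory_program'' stands in for your executable.

<code bash>
#!/bin/bash
# Sketch of a request for one high-memory node (SLURM assumed).
# "himem-queue" is a placeholder; substitute the actual queue name.
#SBATCH --partition=himem-queue
#SBATCH --nodes=1
#SBATCH --ntasks=24        # all 24 cores of the node
#SBATCH --mem=700G         # most of the node's 768 GB
#SBATCH --time=06:00:00

./my_large_memory_program  # placeholder executable
</code>
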
There are 19 public GPU nodes, similar to the standard compute nodes but with a single NVIDIA Tesla V100 GPU, which use the queues ''...''.
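
Requesting one of these GPU nodes might look like this sketch; again SLURM is assumed, and ''gpu-queue'' and ''my_gpu_program'' are placeholders.

<code bash>
#!/bin/bash
# Sketch of a single-GPU job (SLURM assumed).
# "gpu-queue" is a placeholder; substitute the actual queue name.
#SBATCH --partition=gpu-queue
#SBATCH --nodes=1
#SBATCH --gres=gpu:1       # each of these nodes has one V100
#SBATCH --time=06:00:00

./my_gpu_program           # placeholder executable
</code>
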
+ | |||
There are 25 condo nodes: 20 Wang nodes (standard compute nodes that also have local NVMe drives); one Alverson standard compute node; one Alverson high-memory compute node; one Kaman high-memory compute node with two V100s; and two Bernhard nodes, each with two AMD 7351 processors and 256 GB of memory. These use the queues ''...''.

These efficient-use requirements do not apply to condo owners on their own nodes.
+ | |||

== Overall Recommendations ==
We recommend the following clusters depending on the needs of your program and on system load.
These are rules of thumb that do not cover every possible situation; contact hpc-support@listserv.uark.edu with questions. A sample job script applying these rules is shown after the lists below.

+ | |||
  * GPU-capable
    * use ''...''
  * not GPU-capable
    * 1 to 12 cores and up to 24 GB memory: use ''...''
    * 1 to 16 cores and up to 32 GB memory: use ''...''
    * up to 32 cores and up to 64 GB memory: use ''...''
    * more than 64 GB shared memory, or all 32 cores: use ''...''
    * more than 192 GB shared memory: use ''...''
    * more than 32 cores: use ''...''

+ | |||
Discretionary cases:
  * anything requiring two or more ''...''
  * 1 node, 32 cores, and less than 192 GB of memory: use ''...''
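
As a worked example of the rules above, a 12-core job needing less than 24 GB of memory might be submitted with a script like this sketch. SLURM is assumed, ''small-queue'' is a placeholder for whichever queue the first non-GPU rule recommends, and ''my_program'' stands in for your executable.

<code bash>
#!/bin/bash
# Sketch: a 12-core job needing under 24 GB, matching the first
# non-GPU rule of thumb above (SLURM assumed).
# "small-queue" is a placeholder; substitute the recommended queue.
#SBATCH --partition=small-queue
#SBATCH --nodes=1
#SBATCH --ntasks=12
#SBATCH --mem=24G
#SBATCH --time=01:00:00

srun ./my_program          # placeholder executable
</code>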