=== Equipment/Selecting Resources ===
We describe the resources available at AHPCC and how to select the best one for your computing job.
Computing resources are presently divided into four clusters that use separate schedulers. This will be condensed in the future, as all logins will be moved to ''pinnacle.uark.edu'' and all schedulers are migrated to Slurm, with one or multiple Slurm schedulers to be determined.
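For reference, a login to the consolidated host would look like the sketch below; ''pinnacle.uark.edu'' is taken from the paragraph above, and ''username'' is a placeholder for your AHPCC account name.
<code bash>
# connect to the common login host (username is a placeholder)
ssh username@pinnacle.uark.edu
</code>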
== Overall Recommendations ==
We recommend the following clusters depending on the needs of your program and system load.
These are rules of thumb that do not cover every possible situation; contact hpc-support@listserv.uark.edu with questions. Here "memory" refers to the shared memory of one node.
  
  * GPU-capable
    * use ''pinnacle'' GPU queues
  * not GPU-capable
    * 1 to 12 cores and up to 24 GB memory: use ''razor-1''
    * 1 to 16 cores and up to 32 GB memory: use ''razor-2''
    * up to 32 cores and up to 64 GB memory: use ''trestles'', though low core-count jobs will be slow compared with the Intel nodes
    * more than 64 GB shared memory, or all 32 cores: use ''pinnacle'' standard ''comp01/comp06/comp72''
    * more than 192 GB shared memory: use ''pinnacle'' ''himem06/himem72'' or high-memory ''razor/trestles'' nodes
    * more than 32 cores: use multiple ''pinnacle'' nodes in the standard ''comp01/comp06/comp72'' queues
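As a concrete sketch of what a submission under these rules might look like, the Slurm batch script below requests all 32 cores of one ''pinnacle'' standard node. The queue name ''comp72'' comes from the list above; the 72-hour limit is an assumption inferred from the queue name, and the job name and executable are placeholders.
<code bash>
#!/bin/bash
#SBATCH --job-name=example       # placeholder job name
#SBATCH --partition=comp72       # pinnacle standard queue from the list above
#SBATCH --nodes=1                # one pinnacle node
#SBATCH --ntasks-per-node=32     # all 32 cores of the node
#SBATCH --time=72:00:00          # assumed limit, inferred from the queue name

cd "$SLURM_SUBMIT_DIR"           # run from the directory sbatch was invoked in
./my_program                     # placeholder executable
</code>
Submit the script with ''sbatch script.sh'' and check its status with ''squeue -u $USER''.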
  
Discretionary cases:
  * anything requiring two or more ''razor/trestles'' nodes: ''pinnacle'' standard ''comp01/comp06/comp72'' will run much faster but will probably start the job more slowly because of the job queue.
  * 1 node, 32 cores, and less than 192 GB memory: use ''pinnacle'' standard, or ''trestles'' if memory is less than 64 GB. ''pinnacle'' will run much faster but will probably start the job more slowly because of the job queue.
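Because the tradeoff in both cases is run speed versus queue wait, it can help to look at the current load before deciding. One quick check uses standard Slurm commands, with the queue names assumed from the list above:
<code bash>
# node availability (idle/mixed/allocated) in the pinnacle standard queues
sinfo -p comp01,comp06,comp72
# jobs currently waiting in those queues
squeue -p comp01,comp06,comp72 --state=PENDING
</code>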