====Torque Queues Trestles/Razor====
These ''torque'' scheduler queues are deprecated: the ''torque'' scheduler will be replaced by ''slurm'', and most Razor 12-core nodes will be retired.
  
**Razor** general use queues are
<csv>
queue,CPU,memory/node,max PBS spec,max PBS time,notes,Maui partitions
debug12core,2x Intel X5670 2.93 GHz,24GB,nodes=2:ppn=12,walltime=0:0:30:00,dedicated,rz
med12core,2x Intel X5670 2.93 GHz,24GB,nodes=24:ppn=12,walltime=3:00:00:00,node pool shared,rm
tiny12core,2x Intel X5670 2.93 GHz,24GB,nodes=24:ppn=12,walltime=0:06:00:00,node pool shared,rt/rm
debug16core,2x Intel E5-2670 2.6 GHz,32GB,nodes=2:ppn=16,walltime=0:0:30:00,dedicated,yd
tiny16core,2x Intel E5-2670 2.6 GHz,32GB,nodes=18:ppn=16,walltime=0:06:00:00,node pool shared,yt/ym
med16core,2x Intel E5-2670 2.6 GHz,32GB,nodes=18:ppn=16,walltime=3:00:00:00,node pool shared,ym
gpu16core,2x Intel E5-2630V3 2.4 GHz/2xK40c,64GB,nodes=1:ppn=16,walltime=3:00:00:00,gpu jobs only,gt
mem512GB64core,4x AMD 6276 2.3 GHz,512GB,nodes=2:ppn=64,walltime=3:00:00:00,>64GB shared memory only,yc
mem768GB32core,4x Intel E5-4640 2.4 GHz,768GB,nodes=2:ppn=32,walltime=3:00:00:00,>512GB shared memory only,yb
nebula,nebula cloud,,,,,
</csv>
Maui partitions can be used with the ''shownodes'' command to estimate which nodes are immediately available; see [[ xx ]].
The high-volume production queues for most usage are the tiny/med 12core/16core queues.
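As a sketch, a job script for the ''tiny12core'' queue might look like the following. The queue name and resource limits come from the table above; the job name, core count, and program path (''my_program'') are placeholders you would replace with your own.

```shell
#!/bin/bash
#PBS -q tiny12core
#PBS -N example_job
# Request at most nodes=24:ppn=12 on this queue; 2 nodes x 12 cores here.
#PBS -l nodes=2:ppn=12
# tiny12core allows up to 6 hours of walltime.
#PBS -l walltime=06:00:00

# Torque starts the job in $HOME; change to the submission directory.
cd "$PBS_O_WORKDIR"

# my_program is a placeholder executable; 24 ranks = 2 nodes x 12 cores.
mpirun -np 24 ./my_program
```

Submit with ''qsub myjob.pbs''; exceeding the queue's node or walltime limits will cause the submission to be rejected.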

**Trestles** general use queues are (the first three queues are production queues in quantity)
<csv>
queue,CPU,memory/node,max PBS spec,max PBS time,notes
q10m32c,4x AMD 6136 2.4 GHz,64GB,nodes=4:ppn=32,walltime=0:0:10:00,or "qtraining" dedicated
q30m32c,4x AMD 6136 2.4 GHz,64GB,nodes=128:ppn=32,walltime=0:0:30:00,node pool shared
q06h32c,4x AMD 6136 2.4 GHz,64GB,nodes=128:ppn=32,walltime=0:06:00:00,node pool shared
q72h32c,4x AMD 6136 2.4 GHz,64GB,nodes=64:ppn=32,walltime=3:00:00:00,node pool shared
nebula,nebula cloud,,,,
</csv>
The high-volume production queues for most usage are q06h32c/q72h32c.
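A minimal Trestles sketch, using the ''q06h32c'' limits from the table above (again, the job name and ''my_program'' are placeholders):

```shell
#!/bin/bash
#PBS -q q06h32c
#PBS -N example_job
# Trestles nodes have 32 cores; this queue allows up to nodes=128:ppn=32.
#PBS -l nodes=4:ppn=32
# q06h32c allows up to 6 hours of walltime.
#PBS -l walltime=06:00:00

cd "$PBS_O_WORKDIR"

# 128 ranks = 4 nodes x 32 cores; my_program is a placeholder executable.
mpirun -np 128 ./my_program
```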

"node pool shared" on tiny/med or q30m/q06h/q72h means that those queues allocate jobs from a common pool of identical nodes, with some nodes dedicated to the shorter queues.

For a complete listing of all defined queues and their properties on each cluster, use the ''**qstat -q**'' command.
torque_queues.1580326518.txt.gz · Last modified: 2020/01/29 19:35 by root