  - pubcondo06: condo nodes public use, 6 hour limit, various constraints required, 23 nodes
  - cloud72: virtual machines and containers, usually single processor, 72 hour limit, 1 node
  - condo: condo nodes, no time limit, authorization required, various constraints required, 23 nodes
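A partition from the list above is selected with the standard Slurm ''--partition'' directive. A minimal header sketch (the wall time shown is just the partition maximum and the single-task layout is only an illustration, not a site requirement):

<code>
#!/bin/bash
#SBATCH --partition=cloud72      # one of the partitions listed above
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1      # cloud jobs are usually single processor
#SBATCH --time=72:00:00          # must fit within the partition's time limit
</code>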
  
Basic commands are, with transition from Torque/PBS/Maui:
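As an illustrative sketch (not a complete mapping), the usual Torque-to-Slurm command equivalents are:

<code>
qsub job.sh     ->  sbatch job.sh     # submit a batch job
qstat -u $USER  ->  squeue -u $USER   # list your jobs
qdel <jobid>    ->  scancel <jobid>   # cancel a job
pbsnodes -a     ->  sinfo -N          # show node and partition state
</code>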
The tail of an example MPI batch script (the hash-bang and ''#SBATCH'' header lines precede this and are not shown here):
<code>
cd $SLURM_SUBMIT_DIR
module load intel/18.0.1 impi/18.0.1 mkl/18.0.1
mpirun -np $SLURM_NTASKS -machinefile /tmp/machinefile_${SLURM_JOB_ID} ./mympiexe -inputfile MA4um.mph -outputfile MA4um-output.mph
</code>
Notes:
  - A leading hash-bang (/bin/sh, /bin/bash, or /bin/tcsh) is optional in Torque but required in Slurm; pbs2slurm.sh inserts it.
  - Use full nodes only on public Pinnacle (tasks-per-node=32 standard, 24 himem), except in the cloud partition.  All jobs should use either all the cores or more than 64 GB of memory; otherwise use Trestles.  If the job is there for the memory, allocate all the cores anyway.  Valid condo (not pubcondo) jobs may subdivide the nodes (tasks-per-node equal to an integer divisor of 32 or 24).
  - Slurm doesn't autogenerate a machinefile like Torque.  We have the prologue generate ''/tmp/machinefile_${SLURM_JOB_ID}''.  It differs from the Torque machinefile in that it has 1 entry per host instead of ''ncores'' entries per host.  Slurm does define a variable with the total number of cores, ''$SLURM_NTASKS'', good for most MPI jobs (see the sketch after these notes).
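A minimal full-node MPI sketch combining notes 2 and 3 (the node count, wall time, and program name are placeholders; a partition line from the list above would also be needed):

<code>
#!/bin/bash
#SBATCH --nodes=2                 # full nodes only on public Pinnacle
#SBATCH --ntasks-per-node=32      # 32 on standard nodes, 24 on himem
#SBATCH --time=06:00:00           # placeholder wall time
cd $SLURM_SUBMIT_DIR
# machinefile has 1 entry per host; $SLURM_NTASKS carries the total core count
mpirun -np $SLURM_NTASKS -machinefile /tmp/machinefile_${SLURM_JOB_ID} ./myprog
</code>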
  
  
Node Constraints in Condo Queues
  * Wang (20 nodes)  fwang,0gpu,nvme  :  ''--constraint 0gpu&192gb&nvme''
  * Alverson (2 nodes)  aja,0gpu,192gb or 768gb
  * Kaman (1 node)  tkaman,2v100,768gb
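A hedged sketch of how these constraint strings are used in a condo job; only the Wang constraint above comes from this page, the other directives are placeholders:

<code>
#SBATCH --partition=condo
#SBATCH --constraint=0gpu&192gb&nvme   # features joined with & must all be present
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
</code>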