How to use the Pinnacle Cluster

Equipment

Pinnacle has 98 compute nodes. GPU and GPU-ready nodes are Dell R740; the other nodes are Dell R640. There is no user-side difference between R740 (GPU-ready) and R640 nodes.
There are 75 public nodes: 49 standard compute nodes with 192 GB of memory and no GPU, 19 nodes with 192 GB and one V100 GPU, and 7 nodes with 768 GB and no GPU.

There are 23 condo nodes: 20 Wang nodes (standard compute nodes with NVMe drives), 2 Alverson nodes (one standard 192 GB node and one 768 GB node), and 1 Kaman node (768 GB with two V100 GPUs).

Standard nodes have Gold 6130 CPUs with 32 cores at 2.1 GHz. The 768 GB nodes have Gold 6126 CPUs with 24 cores at 2.6 GHz: fewer but faster cores for better performance on often poorly threaded bioinformatics applications.

Login

pinnacle.uark.edu
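
Log in with ssh; for example (substitute your own username for rfeynman, which is used throughout this page as an example):

ssh rfeynman@pinnacle.uark.edu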

Scheduler

We are transitioning to CentOS 7 and the Slurm scheduler, starting with Pinnacle; all nodes and clusters will eventually be transitioned.

Queues (Slurm “partitions”) are:

  1. comp72: standard compute nodes, 72 hour limit, 40 nodes
  2. comp06: standard compute nodes, 6 hour limit, 44 nodes
  3. comp01: standard compute nodes, 1 hour limit, 48 nodes
  4. gpu72: GPU nodes, 72 hour limit, 19 nodes
  5. gpu06: GPU nodes, 6 hour limit, 19 nodes
  6. himem72: 768 GB nodes, 72 hour limit, 7 nodes
  7. himem06: 768 GB nodes, 6 hour limit, 7 nodes
  8. pubcondo06: condo nodes for public use, 6 hour limit, various constraints required, 23 nodes
  9. cloud72: virtual machines and containers, usually single processor, 72 hour limit, 1 node
  10. condo: condo nodes, no time limit, authorization required, various constraints required, 23 nodes
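
Current partition limits, node counts, and node states can be checked on the login node with sinfo; the format string below is just one possible choice:

sinfo                                  # summary of all partitions
sinfo -p comp72 -o "%P %l %D %t %N"    # partition, time limit, node count, state, node list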

Basic commands, with their Torque/PBS/Maui equivalents for users transitioning, are:

Torque/PBS            Slurm                 Function
qsub                  sbatch                submit <job file>
qstat                 squeue                list all queued jobs
qstat -u rfeynman     squeue -u rfeynman    list queued jobs for user rfeynman
qdel                  scancel               cancel <job#>
shownodes -l -n       sinfo                 node status
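
For example, a typical session looks like this (the job script name and job number are illustrative):

sbatch myjob.slurm            # submit; prints a job number such as "Submitted batch job 123456"
squeue -u rfeynman            # list your queued and running jobs
scancel 123456                # cancel the job if needed
sinfo                         # show partition and node status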

We have a conversion script, /share/apps/bin/pbs2slurm.sh <infile>, which should do about 95% of the script conversion. Please report errors made by the script so that we can improve it. Here are some sample Slurm scripts.
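
For example, assuming the converted script is written to standard output (file names here are illustrative):

/share/apps/bin/pbs2slurm.sh oldjob.pbs > oldjob.slurm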

#!/bin/bash
#SBATCH --partition comp06
#SBATCH --nodes=2
#SBATCH --tasks-per-node=32
#SBATCH --time=6:00:00
cd $SLURM_SUBMIT_DIR
module load intel/18.0.1 impi/18.0.1 mkl/18.0.1
mpirun -np $SLURM_NTASKS -machinefile /tmp/machinefile_${SLURM_JOB_ID} ./mympiexe -inputfile MA4um.mph -outputfile MA4um-output.mph
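
If this script is saved as, say, mympi.slurm, it is submitted with:

sbatch mympi.slurm

With --nodes=2 and --tasks-per-node=32, $SLURM_NTASKS should work out to 2 × 32 = 64 MPI ranks.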

Notes:

  1. The leading hash-bang line (#!/bin/sh, #!/bin/bash, or #!/bin/tcsh) is optional in Torque but required in Slurm; pbs2slurm.sh inserts it
  2. Use full nodes only on public Pinnacle (tasks-per-node=32 on standard nodes and 24 on himem nodes), except in the cloud partition. All jobs should use either all the cores of a node or more than 64 GB of memory; otherwise use trestles. If you need a node for its memory, allocate all of its cores anyway. Valid condo (not pubcondo) jobs may subdivide nodes (tasks-per-node equal to an integer divisor of 32 or 24)
  3. Slurm doesn't autogenerate a machinefile like Torque. We have the prologue generate /tmp/machinefile_${SLURM_JOB_ID}. It differs from the Torque machinefile in that it has one entry per host instead of ncores entries per host. Slurm does define a variable with the total number of cores, $SLURM_NTASKS, which is good for most MPI jobs (see the sketch after these notes)
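
As a sketch of the difference (host names are illustrative): for a 2-node, 32-core-per-node job, the Torque machinefile would contain 64 lines, each host name repeated 32 times, while /tmp/machinefile_${SLURM_JOB_ID} on Pinnacle contains only

node0001
node0002

and the total rank count is passed separately with -np $SLURM_NTASKS, as in the script above.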

Another script:

#!/bin/bash
#SBATCH --partition condo
#SBATCH --constraint nvme
#SBATCH --nodes=1
#SBATCH --tasks-per-node=32
#SBATCH --time=144:00:00
#SBATCH --job-name=MOLPRO_lscr

cd $SLURM_SUBMIT_DIR
cp $SLURM_SUBMIT_DIR/mpr*inp /local_scratch/$SLURM_JOB_ID/
cd /local_scratch/$SLURM_JOB_ID
module load mkl/14.0.3 intel/14.0.3 impi/5.1.1
/home/trr007/molpro/molprop_2015_1_linux_x86_64_i8/bin/molpro -n 4/4:8 mpr_qm_region.inp -d /local_scratch/$SLURM_JOB_ID -W /local_scratch/$SLURM_JOB_ID
rm -f sf_*TMP* fort*
rsync -av m* $SLURM_SUBMIT_DIR/

Notes:

  1. condo/pubcondo06 jobs require a constraint sufficient to specify the node (see the list below)
  2. As on razor/trestles, scratch directories /scratch/$SLURM_JOB_ID and /local_scratch/$SLURM_JOB_ID are auto-created by the prolog
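
The general pattern, as in the MOLPRO script above, is sketched below (file and program names are illustrative):

cd /local_scratch/$SLURM_JOB_ID                 # run in node-local scratch
cp $SLURM_SUBMIT_DIR/myinput.dat .              # copy inputs in
./myprogram myinput.dat                         # run the job
rsync -av myoutput* $SLURM_SUBMIT_DIR/          # copy results back before the job ends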

Node Constraints in Condo Queues

  • Wang (20 nodes): fwang, 0gpu, nvme : --constraint 0gpu&192gb&nvme
  • Alverson (2 nodes): aja, 0gpu, 192gb or 768gb
  • Kaman (1 node): tkaman, 2v100, 768gb
  • requesting a non-GPU condo node: --constraint 0gpu&192gb or --constraint 0gpu&768gb (high memory use required for use as pubcondo)
  • requesting the GPU condo node: --constraint 2v100&768gb (dual-GPU use required for use as pubcondo)
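
For example, the relevant header lines for a pubcondo06 job on a 768 GB non-GPU condo node might look like this (a sketch; combine with the other #SBATCH lines shown in the scripts above, and quote the constraint if passing it on the sbatch command line):

#SBATCH --partition pubcondo06
#SBATCH --constraint 0gpu&768gb
#SBATCH --tasks-per-node=24
#SBATCH --time=6:00:00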