Application Software

Locating and using software has been made a little more complicated by some at-the-time reasonable decisions made 50 years ago for Unix: /usr/local for applications, $PATH to find an executable, $LD_LIBRARY_PATH to find dynamic link libraries, and the file ~/.cshrc to set up these variables. These environment variables continue today in Linux and Mac, while Windows combines the two PATH variables. Also important were software packages intended to be used as infrastructure for complete applications, which did not need to be copied into the code of every project. These "shared libraries", such as MPI and FFTW, were specified by source code interfaces. At the time, nearly everyone used one computer and one compiler, so a source interface corresponded directly to one binary interface. Today there are many types of computers and compilers, and modern applications often define a binary interface, or "ABI", to avoid compatibility issues.

Today on HPC systems, the MPI implementation must be heavily customized for each site and its network fabric. Almost all multi-node programs depend on MPI, and there are three popular implementations of MPI (Open MPI, MVAPICH2, and Intel MPI). There are about six compilers that are reasonably popular (GNU gcc, Intel proprietary, Intel open LLVM, NVIDIA/PGI, AMD LLVM, and the not-installed stock LLVM). MVAPICH2 and Intel MPI are binary compatible (thus a de facto ABI). Probably all the LLVM-based compilers are binary compatible with each other (though that does not make Open MPI and MVAPICH2 builds compatible), so there are about 8 to 12 binary versions of MPI, not counting the updates for each that come out 2 or 3 times a year.

With thousands of applications (most of which have multiple versions), it is obviously impractical, just from name collisions, to put every executable in /usr/local/bin. It is also impractical (unless you use only one application) to semipermanently set up these variables in ~/.bashrc or ~/.cshrc. There are several ways to handle this.
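
For illustration only, this is the kind of manual setup that would otherwise have to go into ~/.bashrc for every single application; the paths and the application name are hypothetical, and modules (below) automate exactly this bookkeeping:

# hypothetical manual setup for one application and one version
export PATH=/usr/local/someapp-1.2.3/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/someapp-1.2.3/lib:$LD_LIBRARY_PATH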

Modules

Almost all HPC centers use "modules" software to help manage versioning. This was originally Environment Modules, which at most centers, including this one, has been replaced by the upward-compatible rewrite Lmod. The primary use is to manage $PATH, $LD_LIBRARY_PATH, and other environment variables over a large number of applications. Unfortunately the name is easily confused with the unrelated packaged programs of modular languages such as Python and R (e.g. Python modules).

Module command syntax for most uses is relatively simple: load/unload to invoke/remove a module, purge to unload all modules, list to show loaded modules, help for a summary, and spider for searching. We share some examples below for our three sources of software and their module definitions ("modulefiles"). There is a complete list of modulefiles in the text file /share/apps/bin/modulelist, which can be grepped.
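
As a minimal sketch of a typical session (fftw is used here only as an example name; check /share/apps/bin/modulelist or module spider for what is actually installed):

# search for a package
module spider fftw
grep -i fftw /share/apps/bin/modulelist

# load it, check what is loaded, then clean up
module load fftw        # hypothetical module name
module list
module unload fftw
module purge            # remove everything that is loaded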

Locally written modulefiles

There are currently about 660 locally written modulefiles, some of which have "smart" capability to select from multiple software builds compiled for the computer loading the module. Most parallel programs require the selection of a compiler and an MPI version. We usually recommend the following compiler versions (select only one, usually, with exceptions noted below; a quick check of the result follows the list):

module load gcc/11.2.1
#synonym gnu also works; latest GNU compiler from "CentOS 7 Development Tools"; provides gcc/g++/gfortran

module load intel/21.2.0
#synonym intelcompiler also works; provides both the Intel proprietary icc/icpc/ifort and the Intel LLVM icx/icpx/ifx

module load nvhpc/22.7
#synonym PGI also works; NVIDIA/PGI compiler, available equally as nvc/nvc++/nvfortran and pgcc/pgc++/pgf77/pgf90/pgf95/pgfortran

module load aocc/3.0
#AMD LLVM compiler; provides clang/clang++/flang
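
After loading one of these, it is worth confirming that the expected compiler is first on your $PATH (shown here for gcc; the version printed depends on the module loaded):

module load gcc/11.2.1
which gcc
gcc --version
module list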

We recommend the following MPI versions. Definitely select only one (though at runtime mvapich2 and impi should be equivalent):

openmpi/4.1.4
#with gcc, intel, nvhpc

mvapich2/2.3.7
#with gcc, intel

impi/17.0.7
#with gcc, intel

In combination we recommend the following (load the compiler first, then the MPI, so that the correct libraries are loaded); a concrete example follows the list.

module load { gcc/11.2.1 | intel/21.2.0 | nvhpc/22.7 } openmpi/4.1.4

module load { gcc/11.2.1 | intel/21.2.0 } mvapich2/2.3.7

module load { gcc/11.2.1 | intel/21.2.0 } impi/17.0.7
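
For instance, picking the first combination and building a trivial MPI program with the wrapper compiler that the MPI module puts on $PATH (hello.c is a hypothetical source file):

module purge
module load gcc/11.2.1 openmpi/4.1.4
mpicc -o hello hello.c    # mpicc wraps gcc with the Open MPI include and library paths
mpirun -np 4 ./hello      # quick interactive test on 4 ranks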

There are a couple of situations where you would want multiple compilers loaded (but the first compiler loaded, together with the MPI version, determines which MPI build is used).

(1) Most C++ compilers use the GNU C++ include files and libraries. For a program that uses a lot of relatively recent C++ (LAMMPS is one), you will want a recent gcc loaded to provide those libraries.

This works with the Intel proprietary icpc compiler:

module load intel/17.0.7 openmpi/4.1.4 gcc/11.2.1

If you don't add the third module, icpc will use the libraries from the default CentOS g++ 4.8.5, which is quite old and probably can't compile LAMMPS at all.
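
A quick way to see which GNU toolchain icpc will pick up is to check which g++ is first on $PATH after loading the modules (this assumes icpc follows the g++ found in the environment, as described above):

module load intel/17.0.7 openmpi/4.1.4 gcc/11.2.1
which g++          # should point at the gcc/11.2.1 module, not /usr/bin/g++
g++ --version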

(2) LLVM-based compilers (aocc/3.0.0 and intel/21.2.0 icx) try to auto-detect the g++ libraries but don't do it quite correctly:

module load aocc/3.0.0
clang++ -v
AMD clang version 12.0.0 (CLANG: AOCC_3.0.0-Build#78 2020_12_10) (based on LLVM Mirror.Version.12.0.0)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /opt/AMD/aocc-compiler-3.0.0/bin
Found candidate GCC installation: /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7
Found candidate GCC installation: /opt/rh/devtoolset-8/root/usr/lib/gcc/x86_64-redhat-linux/8
Found candidate GCC installation: /opt/rh/devtoolset-9/root/usr/lib/gcc/x86_64-redhat-linux/9
Found candidate GCC installation: /usr/lib/gcc/x86_64-redhat-linux/4.8.2
Found candidate GCC installation: /usr/lib/gcc/x86_64-redhat-linux/4.8.5
Selected GCC installation: /opt/rh/devtoolset-9/root/usr/lib/gcc/x86_64-redhat-linux/9

So it picks the devtoolset-9 libraries in spite of devtoolset-10 and devtoolset-11 being available:

$ ls /opt/rh
devtoolset-10  devtoolset-11  devtoolset-3  devtoolset-7  devtoolset-8  devtoolset-9  

If devtoolset-9 (gcc 9.3.1) is new enough, then that's OK; otherwise see the sketch below.
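
If devtoolset-9 is not new enough for your code, clang-based compilers can usually be pointed at a specific GCC installation with the --gcc-toolchain option; this is a sketch only, and the exact flag spelling can vary between LLVM versions:

module load aocc/3.0.0
clang++ --gcc-toolchain=/opt/rh/devtoolset-11/root/usr -v    # request the devtoolset-11 libraries instead of the auto-selected ones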

(3) Sometimes Intel MKL will link back to the Intel compiler's libraries when it uses Intel OpenMP instead of GNU OpenMP. This should work (a quick check follows below):

module load gcc/11.2.1 mkl/20.0.4 openmpi/4.1.4 intel/17.0.7
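
One way to confirm which OpenMP runtime a binary ended up with is to inspect it with ldd (myprog is a hypothetical executable; libiomp5 is the Intel OpenMP runtime, libgomp the GNU one):

ldd ./myprog | grep -Ei 'iomp|gomp'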