Application Software

Locating and using software is made a little more complicated by decisions, reasonable at the time, made roughly 50 years ago for Unix: /usr/local for applications, $PATH to find an executable, $LD_LIBRARY_PATH to find dynamic link libraries, and the file ~/.cshrc to set up these variables. These environment variables persist today in Linux and macOS, while Windows combines the two PATH variables into one. Also important were software packages intended to serve as infrastructure for complete applications, which did not need to be copied into the code of every project. These “shared libraries”, such as MPI and FFTW, were specified by source code interfaces. At the time, nearly everyone used one computer and one compiler, so a source interface corresponded directly to one binary interface. Today there are many types of computers and compilers, and modern applications often have a defined binary interface, or “ABI”, to avoid compatibility issues.

Today on HPC systems, the MPI implementation must be heavily customized for each site and its network fabric. Almost all multi-node programs depend on MPI, and there are three popular implementations (Open MPI, MVAPICH2, and Intel MPI). There are about six compilers that are reasonably popular (GNU gcc, Intel proprietary, Intel open LLVM, NVIDIA/PGI, AMD LLVM, and the stock LLVM, which is not installed). MVAPICH2 and Intel MPI are binary compatible (a de facto ABI). The LLVM-based compilers are probably all binary compatible with each other (though Open MPI and MVAPICH2 builds remain incompatible), so there are roughly 8 to 12 binary versions of MPI, not counting the updates for each that come out two or three times a year.

With thousands of applications (most of which have multiple versions), name collisions alone make it impractical to put every executable in /usr/local/bin. It is also impractical (unless you use only one application) to set up these variables semipermanently in ~/.bashrc or ~/.cshrc. There are several ways to handle this.

Modules

Almost all HPC centers use “modules” software to help manage versioning. This was originally Environment Modules, which at most centers, including this one, has been replaced by an upward-compatible rewrite, Lmod. Its primary use is to manage $PATH, $LD_LIBRARY_PATH, and other environment variables across a large number of applications. Unfortunately the name is easily confused with the unrelated packaged programs of modular languages such as Python and R (e.g., Python modules).
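To see how a modulefile manipulates these variables, Lmod can display a module's effect without loading it. A minimal sketch, using a compiler module named later on this page (the paths it prints are site-specific):

module show gcc/11.2.1
# prints the prepend-path operations the module would apply to $PATH, $LD_LIBRARY_PATH, etc.

echo $PATH                # before
module load gcc/11.2.1
echo $PATH                # after: the compiler's bin directory is now prepended
module unload gcc/11.2.1  # restores the previous $PATH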

Module command syntax for most uses is relatively simple: load/unload to invoke/remove a module, purge to unload all modules, list to show loaded modules, help for usage information, and spider for searching. Examples for our three sources of software and module definitions (“modulefiles”) are given below. There is a complete list of modulefiles in the text file /share/apps/bin/modulelist, which can be grepped.
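A typical interactive session might look like the sketch below (FFTW is used only as an illustrative search term; check what is actually installed with spider or the modulelist file):

module spider fftw                        # search all available modulefiles for FFTW
module load gcc/11.2.1 openmpi/4.1.4      # load a compiler and an MPI implementation
module list                               # show what is currently loaded
module unload openmpi/4.1.4               # remove a single module
module purge                              # remove everything loaded
grep -i fftw /share/apps/bin/modulelist   # search the plain-text list of modulefiles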

Locally written modulefiles

There are currently about 660 locally written modulefiles, some of which have a “smart” capability to select among multiple software builds compiled for the computer loading the module. Most parallel programs require the selection of a compiler and an MPI version. We usually recommend the following compiler versions (select only one, usually, with exceptions noted below):

module load gcc/11.2.1
# the synonym gnu also works; latest GNU compiler from "CentOS 7 Development Tools"; enables gcc/g++/gfortran

module load intel/21.2.0
# the synonym intelcompiler also works; provides both the Intel proprietary icc/icpc/ifort and the Intel LLVM icx/icpx/ifx

module load nvhpc/22.7
# the synonym PGI also works; the NVIDIA/PGI compiler provides both nvc/nvc++/nvfortran and pgcc/pgc++/pgf77/pgf90/pgf95/pgfortran

module load aocc/3.0
# AMD LLVM compiler; enables clang/clang++/flang
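After loading a compiler module, it is worth confirming that the expected compiler is the one found first on $PATH. A quick check, using one of the modules above:

module load gcc/11.2.1
which gcc        # should resolve to the module's installation, not the system default
gcc --version    # confirms the version that $PATH now selects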

We recommend the following MPI versions. Definitely select only one (though at runtime mvapich2 and impi should be equivalent):

module load openmpi/4.1.4
# with gcc, intel, nvhpc

module load mvapich2/2.3.7
# with gcc, intel

module load impi/17.0.7
# with gcc, intel

In combination we recommend (load the compiler first, then the MPI, so that the correct libraries are found):

{ gcc/11.2.1 | intel/21.2.0 | nvhpc/22.7 } openmpi/4.1.4

{ gcc/11.2.1 | intel/21.2.0 } mvapich2/2.3.7

{ gcc/11.2.1 | intel/21.2.0 } impi/17.0.7
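As an end-to-end sketch of one recommended combination (the source file hello_mpi.c and the rank count are placeholders for your own program and job):

module purge
module load gcc/11.2.1
module load openmpi/4.1.4
mpicc -O2 -o hello hello_mpi.c   # mpicc wraps the gcc loaded above
mpirun -np 4 ./hello             # run 4 MPI ranks, inside a job allocation on an HPC system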