Compiling and MPI

Compiler Modules

Gnu

(all GNU compilers from devtoolset or AHPCC; gcc/g++/gfortran) gcc/4.8.5, 7.3.1, 8.3.1, 8.4.0, 9.3.1

Version 4.8.5 is the base gcc rpm installation that ships with Centos 7. Its module does nothing except satisfy the compiler-module prerequisite of the MPI modules (see below). It is out-of-date and we don't recommend it for most purposes. Versions 7.3.1, 8.3.1, and 9.3.1 are Red Hat devtoolset distributions that also include a debugger, Eclipse, and other tools.

Version 8.4.0 is a gcc/g++/gfortran distribution compiled by us. It is just the compiler and is less complete than the devtoolset distributions. It is most useful as a runtime module for a program (particularly anaconda) that needs a reasonably up-to-date libstdc++.so.6; devtoolset does not update that library, which can be a problem when a linked library asks for a newer CXXABI version.
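As a quick check (a sketch using standard gcc and binutils tools), you can list the CXXABI versions provided by the libstdc++.so.6 that a loaded gcc will link against:

$ module load gcc/8.4.0
$ strings $(gcc -print-file-name=libstdc++.so.6) | grep CXXABI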

Gnu7, Gnu8

OpenHPC versions of the gnu compilers: gnu7/7.3.0 and gnu8/8.3.0. These are prerequisites to the OHPC MPI modules and do have an updated libstdc++.so.6. We only have the application program rpms installed for gnu8. Applications dependent on the compiler alone are gsl, hdf5, metis, openblas, plasma, and scotch. Applications dependent on both compiler and MPI are boost, hypre, mfem, mumps, netcdf-cxx, netcdf-fortran, omb, opencoarrays, petsc, phdf5, pnetcdf, ptscotch, mpi4py, scalapack, slepc, superlu, and trilinos.
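A typical load sequence for this stack might look like the following (a sketch; the compiler and MPI module names follow the OpenHPC layout listed under MPI Modules below, and loading phdf5 without a version is assumed to pick the default):

$ module load gnu8/8.3.0 gnu8/openmpi3/3.1.4 phdf5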

Intel

(all Intel compilers; icc/icpc/ifort) intel/14.0.3, 16.0.1, 17.0.4, 17.0.7, 18.0.1, 18.0.2, 19.0.4, 19.0.5, 20.0.1, 20.0.4. Multiple versions are kept mostly for backwards compatibility. With modern Intel hardware, the latest Intel compiler is usually recommended. We recommend 14.0.3 for trestles AMD Bulldozer nodes and 19.0.5 for Pinnacle AMD Epyc nodes.
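For instance (versions taken from the recommendations above):

$ module load intel/14.0.3    # on trestles (AMD Bulldozer)
$ module load intel/19.0.5    # on Pinnacle (AMD Epyc)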

PGI

(pgcc, pgc++, pgf95) PGI/2016, 2016.5, 2017, 18.4, 18.10, 19.4. Superseded by the NVidia HPC SDK.

NVidia

nvhpc/20.7, the HPC SDK, loads the LLVM-based Cuda compiler (nvcc) as well as the Cuda libraries, the PGI-based compilers (pgcc, pgc++, pgf95), and an OpenMPI-based MPI built with PGI (mpicc, mpic++, mpif90). The modules cuda/5.5 through cuda/11.1 load only nvcc and the Cuda libraries.
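A minimal usage sketch (saxpy.c is a hypothetical source file; -acc is the standard PGI OpenACC flag):

$ module load nvhpc/20.7
$ nvcc --version
$ pgcc -acc -o saxpy saxpy.c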

Oracle

sunstudio/12.6 loads (suncc, sunCC, sunf95).

Clang

clang 3.4.2-9 is the older base version installed by rpm. clang/5.0.1 (clang, clang++; no flang) is the devtoolset-7 version.

AMD

aocc/2.3.0 is AMD's LLVM-based compiler (clang, clang++, flang).

MPI Modules

OpenMPI and MVAPICH are compiled with the compilers they will be used with. Intel MPI and Platform MPI include runtime support so they work with either compiler. It would be possible to build compiler-specific MPI versions for PGI/Oracle/Clang, but we have not done so.
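To see which underlying compiler a given MPI wrapper will invoke, the wrappers can report their own configuration (-show is the MVAPICH2/Intel MPI spelling, --showme the OpenMPI one):

$ mpicc -show       # MVAPICH2, Intel MPI
$ mpicc --showme    # OpenMPI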

OpenHPC

Modules gnu8/openmpi3/3.1.4, gnu8/mvapich2/2.3.2, and gnu8/impi/2019.9.304 are available and have the prerequisite gnu8/8.3.0. Modules intel/openmpi3/3.1.4, intel/mvapich2/2.3.2, and intel/impi/2019.9.304 are available and have the prerequisite intel/19.0.5.

AHPCC installed

Modules mvapich2/2.1, 2.2, 2.3.2, and 2.3.4 and modules openmpi/1.8.8, 2.0.1, 3.0.3, 4.0.4, and 4.1.0 are available and have a prerequisite of a gcc or intel compiler.
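For example (versions chosen from the lists above; the compiler must be loaded before the MPI module):

$ module load gcc/9.3.1 mvapich2/2.3.4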

Commercial

Modules impi/5.0.0, 5.1.1, 5.1.2, 17.0.4, 17.0.7, 18.0.1, 18.0.2, 19.0.4, 19.0.5, 20.0.1, and 20.0.4 and platform_mpi/9.1.2 are available and have a prerequisite of a gcc or intel compiler.

NVidia HPCx

Modules hpcx/openmpi-2.4.1.0, openmpi-2.8.0, openmpi-mt-2.4, and openmpi-mt-2.8 are available and have a prerequisite of a gcc compiler.

Notes.1

The AHPCC version of IMPI and the OpenHPC version behave differently. AHPCC follows the Intel installation default for IMPI with the Intel compiler: “mpiicc, mpiicpc, mpiifort” call the Intel compilers, while “mpicc, mpic++, mpif90” call the GNU compilers. In OpenHPC both sets of commands call the Intel compilers. This can give very different results in a script. (In OpenMPI/MVAPICH2, only “mpicc, mpic++, mpif90” are defined, and they call whichever compiler was loaded before MPI.)

$ module reset;module load intel/19.0.5 intel/impi/2019.9.304
$ mpicc -v
icc version 19.0.5.281 (gcc version 4.8.5 compatibility)
$ module reset;module load intel/19.0.5 impi/19.0.5
$ mpicc -v
gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) 
$
Notes.2

The Intel icpc compiler uses header files from the GNU compiler, so by default it picks up the old header files from the 4.8.5 compiler. C++ codes such as LAMMPS commonly expect modern header files. If using icpc for its performance benefits, a newer gcc module should also be loaded, and loaded last (which in module terms means first in the search paths). The MPI modules expect exactly one compiler to be preloaded, so the load order is significant. This will load Intel icpc together with modern header files:

module load intel/19.0.5 mvapich2/2.3.2 gcc/9.3.1

At the time the MPI module loads, the compiler is defined as intel, so the intel version of mvapich2 is selected.
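You can then confirm which GNU headers icpc will use (compare with the compatibility line shown in Notes.1):

$ icpc -v    # should now report gcc 9.3.1 compatibility rather than 4.8.5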

Notes.3

In a reversal of the previous case, you may find that the Intel OpenMP libraries are needed with GNU, especially if threaded Intel MKL functions are called. In a pure gcc compilation, the GNU OpenMP library libgomp should suffice and is loaded automatically for an OpenMP program. But Intel's libirc and libiomp5 may be needed, especially when called by MKL, and they are supplied only by the Intel compiler modules. A module command similar to this may be necessary for the system to find the libraries:

module load gcc/8.3.1 openmpi/4.1.0 mkl/19.0.5 intel/19.0.5
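After the program is built, a runtime check like the following can confirm that the Intel libraries resolve (a sketch; ./myprog is a hypothetical executable):

$ ldd ./myprog | grep -E 'libiomp5|libirc|libmkl'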