Optimization/Making your code faster

Here we focus on compiling someone else's code on Linux for scientific computing. Writing your own code expands the problem considerably; for that, see the free textbooks and supplemental material at https://theartofhpc.com/.

Around 2015 this was a simpler exercise: one compiler (Intel's proprietary compiler) was the best in most situations. Now there are five or six compilers, each with somewhat different options. There are three major MPI variants, each of which can be combined with each compiler. And usually you need to do at least a little custom compiling for each hardware platform you plan to run on. The major factors in making your code faster are described below.

Compilers
Optimization levels

Fortunately, these are usually the same with every compiler: -O0 (no optimization) through -O3 (aggressive optimization), with -O2 a common safe default.
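
A minimal sketch, assuming GCC and a source file code.c (the same -O flags also work with Clang and the Intel compilers):

    gcc -O0 code.c -o code   # no optimization; fastest compile, best for debugging
    gcc -O2 code.c -o code   # standard optimizations; a safe default
    gcc -O3 code.c -o code   # aggressive optimizations such as auto-vectorization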

Target Architectures

Each compiler can generate code tuned for a target architecture. Examples for AHPCC hardware: trestles=bulldozer, various older Intel E5 condo nodes, Pinnacle-1=skylake-avx512, Pinnacle-2=mostly Zen 2. The five or so similar generations of Intel E5 processors are mostly distinguished by their floating-point capability: nehalem (SSE4.2), sandybridge/ivybridge (AVX), haswell/broadwell (AVX2).
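
As a sketch, these are the GCC -march values corresponding to the AHPCC hardware above (other compilers use different flags, for example Intel's -x options, so check your compiler's documentation):

    gcc -O3 -march=bdver1 code.c          # trestles (AMD Bulldozer)
    gcc -O3 -march=nehalem code.c         # oldest E5-class nodes (SSE4.2)
    gcc -O3 -march=sandybridge code.c     # Sandy Bridge / Ivy Bridge nodes (AVX)
    gcc -O3 -march=haswell code.c         # Haswell / Broadwell nodes (AVX2)
    gcc -O3 -march=skylake-avx512 code.c  # Pinnacle-1 (Skylake with AVX-512)
    gcc -O3 -march=znver2 code.c          # Pinnacle-2 (AMD Zen 2)
    gcc -O3 -march=native code.c          # tune for whatever host you compile on

Note that a binary built with -march for a newer instruction set will crash with an illegal-instruction error on older hardware, so build for the oldest node type you plan to run on, or build separate binaries.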

PRACE has a good document, https://prace-ri.eu/wp-content/uploads/Best-Practice-Guide_AMD.pdf, with examples matching their (Zen 1) hardware; modify processor-specific values and floating-point levels accordingly. It is from 2019, so recent developments in Clang are not covered well.

OpenMP

Compilers' automatic parallelization is usually not very good, so good OpenMP performance requires directives in the code. In addition, a compiler option is generally necessary to enable OpenMP at all.
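
A sketch of the usual enabling flags, assuming these common compilers (consult each compiler's manual to be sure):

    gcc   -fopenmp code.c   # GCC
    clang -fopenmp code.c   # LLVM Clang and AOCC
    icx   -qopenmp code.c   # Intel oneAPI (classic icc also uses -qopenmp)

    # At run time, set the thread count before launching:
    export OMP_NUM_THREADS=8
    ./a.out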

Optimized Libraries

Where possible, it is best to use standard libraries for low-level numerical calculations. Some are highly optimized, even coded in assembler, and much faster than high-level-language equivalents. Beware that “configure” scripts often default to slow “reference” versions, particularly for BLAS/LAPACK.

These include Intel MKL, OpenBLAS, AMD AOCL (BLIS and libFLAME), and FFTW. A sketch of pointing a build at one of them follows.
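
As an illustrative sketch, here is how a typical autoconf-style build might be pointed at OpenBLAS instead of the reference BLAS/LAPACK. The --with-blas/--with-lapack option names and the install path are assumptions that vary by package, so check ./configure --help for your code:

    # Hypothetical example; adjust option names and paths to your package and system
    ./configure --with-blas="-L/opt/openblas/lib -lopenblas" \
                --with-lapack="-lopenblas"
    make -j8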

MPI Versions