==== Environment Modules, .bashrc ====
  
The [[http://modules.sourceforge.net/|Modules]] package is supplied on the system to set up the user's environment variables to run a choice of the needed programs and versions. The most important of these variables are ''$PATH'', telling the system where to find executable files such as ''matlab'' or ''mpirun'', and ''$LD_LIBRARY_PATH'', telling the system where to find shared libraries that an executable calls. You can manipulate the environment variables yourself instead of calling modules, but there is no advantage in doing so.
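For example (a sketch; the path in the second command is illustrative, not an actual location on this system), you can inspect exactly what a module would change with ''module show'', or set the variables by hand:
<code>
$ module show gcc/4.7.2             # prints the PATH/LD_LIBRARY_PATH changes this module makes
$ export PATH=/some/dir/bin:$PATH   # the manual equivalent; illustrative path only
</code>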
A typical session, listing the available modules and then loading a set:
<code>
$ module avail
gcc/4.7.2     impi/5.0.0    intel/14.0.3  module-git    modules       null          openmpi/1.8.8 use.own
$ module purge
$ module load intel/14.0.3 impi/5.1.1
$ module list
Currently Loaded Modulefiles:
  1) intel/14.0.3   2) impi/5.1.1
$ module purge
$ module load intel/14.0.3 mkl/14.0.3 mvapich2/2.1
</code>
  
''module avail'' can be slow to load. A quicker way to see only the top-level names is to list the modulefiles directory directly (this works on either cluster; the listing below is from razor):
<code>
$ ls /share/apps/modulefiles
bwa       fftw       hmmer        miRDeep    ncl          PGI       rDock         tcltk
</code>
''ls -R /share/apps/modulefiles | more'' gives a full list of every file.
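To search the full list for a single package, note that ''module avail'' writes to standard error rather than standard output, so a pipe needs a redirect first (a sketch):
<code>
$ module avail 2>&1 | grep -i openmpi   # 2>&1 because module avail prints to stderr
</code>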
  
The default ''.bashrc'' loaded with a new account includes three ''module'' loads:
<code>
$ cat ~/.bashrc
</code>
  
-''​.bashrc''​ is sourced at the beginning of each interactive job. There is a similar file ''​.bash_profile''​ sourced at the beginning of each non-interactive job.  In our setup, we source ''​.bashrc''​ from ''​.bash_profile''​ so that the files are effectively the same, thus reducing the maintenance effort. ​   Interactive or batch is determined in ''​.bashrc''​ by ''​[ -z "​$PS1"​ ] && return''​ which drops out of the loop on batch runs, so commands following that are for interactive sessions only, like setting the value of the prompt ''​$PS1''​. ​ Commands towards the top of ``.bashrc`` are for both interactive and batch.+''​.bashrc''​ is sourced at the beginning of each interactive job. There is a similar file ''​.bash_profile''​ sourced at the beginning of each non-interactive job.  In our setup, we source ''​.bashrc''​ from ''​.bash_profile''​ so that the files are effectively the same, thus reducing the maintenance effort. ​   Interactive or batch is determined in ''​.bashrc''​ by ''​[ -z "​$PS1"​ ] && return''​ which drops out of the loop on batch runs, so commands following that are for interactive sessions only, like setting the value of the prompt ''​$PS1''​. ​ Commands towards the top of ``.bashrc`` are for both interactive and batch.  You can add commands to ''​.bash_profile''​ for batch jobs only. 

For csh users, module commands should operate identically under ''tcsh'', but they are untested.
  
Here are some recommended module/.bashrc setups for different cases:
  
** You run only one program, or all the programs you run use the same modules, or each uses different modules that don't conflict **
  
Put the ''module load ...'' in ''.bashrc'', above the ''[ -z ...'' line. The same environment will then be loaded from ''.bashrc'' for every interactive session, batch job, and MPI program if any. Many modules, for instance ''R'', ''matlab'', and ''python'', can be assumed not to conflict, though most of the very many combinations have not been tested. Modules that definitely do conflict are MPI modules (only one may safely be used at a time) and multiple versions of the same program, like ''gcc/4.7.1'' and ''gcc/4.9.1''. Multiple compilers, such as ''gcc'' and ''intel'', usually don't conflict, but for MVAPICH2 and OpenMPI the compiler module is used to load the MPI module, so multiple compiler modules should be loaded in this order: (1) the compiler module you want to use with MPI, (2) the MPI module, (3) any additional compiler module. In some cases, gnu MPI programs using MKL libraries may need a library only available from the Intel compiler, so a combination like ''module load gcc/4.7.2 mkl/14.0.3 openmpi/1.8.8; module load intel/14.0.3'' may be necessary. The last ''intel/14.0.3'' may show a harmless warning message.
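Following the ordering above, a session might look like this (a sketch using module versions that appear on this page):
<code>
$ module purge
$ module load gcc/4.7.2       # (1) compiler to pair with MPI
$ module load openmpi/1.8.8   # (2) MPI built with that compiler
$ module load intel/14.0.3    # (3) additional compiler, loaded last
</code>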
  
** You use different (conflicting) modules for different programs, but only run single-node batch jobs **
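One common approach for this case (a sketch, not necessarily the exact setup recommended here) is to keep ''.bashrc'' minimal and instead purge and load modules inside each batch script:
<code>
#!/bin/bash
module purge                       # drop anything loaded by .bashrc
module load gcc/4.7.2 mkl/14.0.3   # only the modules this program needs
./my_program                       # hypothetical executable
</code>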
** You use different (conflicting) modules for different programs and run multi-node (MPI) batch jobs **
  
This is a more difficult case. The first two solutions won't always work. If a module is set in a batch script using multiple nodes, the module definitely applies to the MPI processes running on the first or "master" compute node (usually the first and lowest-numbered assigned node in our batch configuration), but it does not necessarily apply to the "slave" compute nodes, depending on how each MPI type issues remote threads. Multiple nodes imply MPI is being used, and the solution varies by MPI type. A certain form of the ''mpirun'' statement is required for each MPI type. See the [[MPI|MPI]] article for more details.
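For example (a sketch; the program name is hypothetical, and the authoritative forms are given in the MPI article), OpenMPI's ''mpirun'' can forward environment variables to the slave nodes with ''-x'':
<code>
$ mpirun -np 32 -x PATH -x LD_LIBRARY_PATH ./my_program   # ./my_program is hypothetical
</code>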