Standard software environments

This site replaces the former Compute Canada documentation site, and is now being managed by the Digital Research Alliance of Canada.

For questions about migration to different standard environments, please see Migration to the new standard environment.

What are standard software environments?

Our software environments are provided through a set of modules which allow you to switch between different versions of software packages. These modules are organized in a tree structure with the trunk made up of typical utilities provided by any Linux environment. Branches are compiler versions and sub-branches are versions of MPI or CUDA.
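As an illustrative sketch of this hierarchy (the module names and versions below are examples and may differ on your cluster), you can explore the tree with Lmod commands:

```shell
# List the modules visible at the current level of the hierarchy
module avail

# Loading a compiler reveals its branch, e.g. the MPI implementations
# built with that compiler (versions shown here are illustrative)
module load gcc/12.3
module avail openmpi

# Loading an MPI module then exposes its sub-branch of dependent software
module load openmpi/4.1.5
```

Because the tree is hierarchical, modules built against a different compiler or MPI library stay hidden until the matching branch is loaded.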

Standard environments identify combinations of specific compiler and MPI modules that are used most commonly by our team to build other software. These combinations are grouped in modules named StdEnv.

As of February 2023, there are four such standard environments, versioned 2023, 2020, 2018.3 and 2016.4, with each new version incorporating major improvements. Only versions 2020 and 2023 are actively supported.

This page describes these changes and explains why you should upgrade to a more recent version.

In general, new versions of software packages are installed with the newest software environment.

StdEnv/2023

This is the most recent iteration of our software environment. It uses GCC 12.3.0, Intel 2023.1, and Open MPI 4.1.5 as defaults.

To activate this environment, use the command

[name@server ~]$ module load StdEnv/2023

Performance improvements

The minimum CPU instruction set supported by this environment is AVX2, or more generally, x86-64-v3. Even the compatibility layer which provides basic Linux commands is compiled with optimisations for this instruction set.
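A quick way to check whether a machine meets this requirement is to look for the avx2 flag in the CPU feature list (a minimal sketch; /proc/cpuinfo is Linux-specific):

```shell
# Look for the avx2 flag among the CPU features (Linux-specific);
# StdEnv/2023 binaries will not run on CPUs lacking AVX2.
if grep -q avx2 /proc/cpuinfo 2>/dev/null; then
  avx2="yes"
else
  avx2="no"
fi
echo "AVX2 supported: ${avx2}"
```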

Changes of default modules

GCC becomes the default compiler instead of Intel. We compile with Intel only those software packages which are known to perform better with the Intel compilers. CUDA becomes an add-on to Open MPI, rather than the other way around; that is, the CUDA-aware MPI is loaded at run time if CUDA is loaded. This allows many MPI libraries to be shared between the CUDA and non-CUDA branches.

The following core modules have seen their default version upgraded:

  • GCC 9.3 => GCC 12.3
  • OpenMPI 4.0.3 => OpenMPI 4.1.5
  • Intel compilers 2020 => 2023
  • Intel MKL 2020 => Flexiblas 3.3.1 (with MKL 2023 or BLIS 0.9.0)
  • CUDA 11 => CUDA 12
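After loading the environment, you can confirm which defaults are active (a sketch; the exact version strings depend on the cluster):

```shell
module load StdEnv/2023
gcc --version | head -n 1      # expect a GCC 12.3.x version string
mpirun --version | head -n 1   # expect Open MPI 4.1.5
module list                    # shows all modules loaded by default
```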

StdEnv/2020

This iteration of our software environment introduced the most changes up to that point. It uses GCC 9.3.0, Intel 2020.1, and Open MPI 4.0.3 as defaults.

To activate this environment, use the command

[name@server ~]$ module load StdEnv/2020

Performance improvements

Binaries compiled with the Intel compiler now automatically support both the AVX2 and AVX512 instruction sets. In technical terms, these are multi-architecture binaries, also known as fat binaries. This means that when running on a cluster such as Cedar or Graham, which have multiple generations of processors, you no longer have to manually load one of the arch modules when using software packages built with the Intel compiler.

Many software packages which were previously installed either with GCC or with Intel are now installed at a lower level of the software hierarchy, which makes the same module visible irrespective of which compiler is loaded. For example, this is the case for many bioinformatics packages as well as the R modules, which previously required loading the gcc module. This was made possible by introducing CPU-architecture-specific optimizations at a level of the software hierarchy below the compiler level.

We also installed a more recent version of the GNU C Library, which introduces optimizations in some mathematical functions. This has increased the requirement on the version of the Linux Kernel (see below).

Change in the compatibility layer

Another enhancement for the 2020 release was a change in tools for our compatibility layer. The compatibility layer is between the operating system and all other software packages. This layer is designed to ensure that compilers and scientific applications will work whether they run on CentOS, Ubuntu, or Fedora. For the 2016.4 and 2018.3 versions, we used the Nix package manager, while for the 2020 version, we used Gentoo Prefix.

Change in kernel requirement

Versions 2016.4 and 2018.3 required Linux kernel version 2.6.32 or more recent, which supported CentOS versions starting with CentOS 6. The 2020 version requires Linux kernel 3.10 or more recent; it therefore no longer supports CentOS 6 and requires CentOS 7 instead. Other distributions usually ship much more recent kernels, so you probably do not need to change your distribution if you are using this standard environment on something other than CentOS.
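You can check the running kernel against this requirement with uname (a minimal sketch; the parsing assumes the usual major.minor version numbering):

```shell
# Compare the running kernel version against the 3.10 minimum
# required by StdEnv/2020 and newer.
kernel=$(uname -r)           # e.g. 3.10.0-1160.el7.x86_64
major=${kernel%%.*}
rest=${kernel#*.}
minor=${rest%%.*}
if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }; then
  echo "kernel ${kernel} meets the StdEnv/2020 requirement"
else
  echo "kernel ${kernel} is too old for StdEnv/2020"
fi
```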

Module extensions

With the 2020 environment, we started installing more Python extensions inside of their corresponding core modules. For example, we installed PyQt5 inside of the qt/5.12.8 module so that it supports multiple versions of Python. The module system has also been adjusted so you can find such extensions. For example, if you run

[name@server ~]$ module spider pyqt5

it will tell you that you can get this by loading the qt/5.12.8 module.
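Once module spider has pointed you at the providing module, loading it makes the extension importable (a sketch; the python module version here is an assumption and may differ on your cluster):

```shell
# Load the Qt module that provides the PyQt5 extension, plus a Python
# module; the python version shown is illustrative.
module load StdEnv/2020 qt/5.12.8 python/3.8
python -c "import PyQt5; print(PyQt5.__name__)"
```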

StdEnv/2018.3

Deprecated

This environment is no longer supported.



This is the second version of our software environment. It was released in 2018 with the deployment of Béluga, and shortly after the deployment of Niagara. Defaults were upgraded to GCC 7.3.0, Intel 2018.3, and Open MPI 3.1.2. This is the first version to support AVX512 instructions.

To activate this environment, use the command

[name@server ~]$ module load StdEnv/2018.3

StdEnv/2016.4

Deprecated

This environment is no longer supported.



This is the initial version of our software environment, released in 2016 with the deployment of Cedar and Graham. It features GCC 5.4.0 and Intel 2016.4 as default compilers, and Open MPI 2.1.1 as its default implementation of MPI. Most of the software compiled with this environment does not support the AVX512 instructions provided by the Skylake processors on Béluga and Niagara, as well as on the most recent additions to Cedar and Graham.

To activate this environment, use the command

[name@server ~]$ module load StdEnv/2016.4