Accessing CVMFS

From CC Doc


Compute Canada provides repositories of software and data via a file system called CERN Virtual Machine File System (CVMFS). On Compute Canada systems, CVMFS is already set up for you, so the repositories are automatically available for your use. For more information on using the Compute Canada software environment, please refer to the Available software, Using modules, Python, R, and Installing software in your home directory pages.

The purpose of this page is to describe how you can install and configure CVMFS on your computer or cluster, so that you can access the same repositories (and software environment) on your system that are available on Compute Canada systems. (If you are a Compute Canada staff member, refer to the internal documentation.)

The software environment described on this page has been presented at Practices and Experience in Advanced Research Computing 2019 (PEARC 2019).

Before you start[edit]


Please subscribe to announcements to remain informed of important changes regarding the Compute Canada software environment and CVMFS, and fill out the registration form. If use of our software environment contributes to your research, please acknowledge it according to these guidelines. (We appreciate it if you also cite our paper).

Subscribe to announcements[edit]

Occasionally, changes are made to CVMFS or to the software and other content provided by the Compute Canada CVMFS repositories; these changes may affect users or require administrators to take action to ensure uninterrupted access to the repositories. To receive important but infrequent notifications about such changes, subscribe to the mailing list by emailing and then replying to the confirmation email you subsequently receive. (Compute Canada staff can alternatively subscribe here.)

Terms of use and support[edit]

The CVMFS client software is provided by CERN. The Compute Canada CVMFS repositories are provided by Compute Canada without any warranty. Compute Canada reserves the right to limit or block your access to the CVMFS repositories and software environment if you violate applicable terms of use (such as, by way of example and without limitation, sections 3.5 or 3.11), or at our discretion.

CVMFS requirements[edit]

For a single system[edit]

To install CVMFS on an individual system, such as your laptop or desktop, you will need:

  • A supported operating system (see installation).
  • Support for FUSE.
  • Approximately 50 GB of available local storage for the cache. (It is filled only based on usage, and a larger or smaller cache may suit different situations; for light use on a personal computer, ~5-10 GB may suffice. See cache settings for more details.)
  • Outbound HTTP access to the internet.
    • Or at least outbound HTTP access to one or more local proxy servers.

If your system lacks FUSE support or local storage, or has limited network connectivity or other restrictions, you may be able to use some alternative approaches.

For multiple systems[edit]

If multiple CVMFS clients are deployed, for example in a cluster, laboratory, campus or other site, each system must meet the above requirements, and the following considerations apply as well:

  • We recommend that you deploy forward caching HTTP proxy servers (such as Squid) at your site, especially if you have a large number of clients.
    • Note that if you have only one such proxy server it will be a single point of failure for your site. Generally you should have at least two local proxies at your site, and potentially additional nearby or regional proxies as backups.
  • It is recommended to synchronize the identity of the cvmfs service account across all client nodes (e.g. using LDAP or other means).
    • This facilitates use of an alien cache and should be done before CVMFS is installed. Even if you do not anticipate using an alien cache at this time, it is easier to synchronize the accounts initially than to change them later.
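As a sketch of what the proxy recommendation above translates to in client configuration: assuming two hypothetical site proxies (the hostnames and port below are placeholders; adjust them to your site), the clients' /etc/cvmfs/default.local could contain:

```shell
# Hypothetical example for /etc/cvmfs/default.local: two load-balanced site
# proxies, falling back to a direct connection only if both are unreachable.
# proxy1/proxy2.example.org and port 3128 are placeholders for your site.
CVMFS_HTTP_PROXY="http://proxy1.example.org:3128|http://proxy2.example.org:3128;DIRECT"
```

The `|` separator groups proxies for load balancing, while `;` separates fallback groups; see the CVMFS_HTTP_PROXY documentation referenced later on this page for the full syntax.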

Software environment requirements[edit]

Minimal requirements[edit]

  • Supported operating systems:
    • Linux: kernel 2.6.32 or newer.
    • Windows: Windows Subsystem for Linux version 2, with a Linux distribution that meets the requirement above.
    • macOS: only through a virtual machine.
  • CPU: x86 CPU supporting at least one of the SSE3, AVX, AVX2 or AVX512 instruction sets.

Optimal requirements[edit]

  • Scheduler: Slurm or Torque, for tight integration with OpenMPI applications.
  • Network interconnect: Ethernet, InfiniBand or OmniPath, for parallel applications.
  • GPU: NVidia GPU with CUDA drivers (7.5 or newer) installed, for CUDA-enabled applications. (See below for caveats about CUDA.)
  • As few Linux packages installed as possible (fewer packages reduce the odds of conflicts).

Installing CVMFS[edit]

If you wish to use Ansible, a CVMFS client role is provided as-is, for basic minimal configuration of a CVMFS client on an RPM-based system. Otherwise, use the following instructions.


It is recommended that the local CVMFS cache (located at /var/lib/cvmfs by default, configurable via the CVMFS_CACHE_BASE setting) be on a dedicated filesystem so that the storage usage of CVMFS is not shared with that of other applications. Accordingly, you should provision that filesystem before installing CVMFS.


Follow the instructions for your operating system below to install CVMFS. These instructions have been tested on the following distributions:

  • CentOS 6, CentOS 7, CentOS 8
  • Fedora 29, Fedora 32
  • Debian 9
  • Ubuntu 18.04

When installing packages you may be prompted to accept some GPG keys. You should ensure that their fingerprints match these expected values:

  • CernVM key: 70B9 8904 8820 8E31 5ED4 5208 230D 389D 8AE4 5CE7
  • Compute Canada CVMFS key one: C0C4 0F04 70A3 6AF2 7CC4 4D5A 3B9F C55A CF21 4CFC
  • Compute Canada CVMFS key two: DDCD 3C84 ACDF 133F 4BEC FBFA 49DE 2015 FF55 B476
CentOS:

  • Install the CERN YUM repository and GPG key:
[name@server ~]$ sudo yum install
  • Install the Compute Canada YUM repository and GPG keys:
[name@server ~]$ sudo yum install
  • Install the CVMFS client and configuration packages from those YUM repositories:
[name@server ~]$ sudo yum install cvmfs cvmfs-config-default cvmfs-config-computecanada cvmfs-auto-setup

Fedora:

  • Install the default configuration package:
[name@server ~]$ sudo dnf install
  • Download the CVMFS client RPM for your operating system from and install it with dnf (or yum).
    • Since a yum repository for CVMFS is not available for this operating system, you will need to periodically check for updates to the CVMFS client and default configuration and install them manually.
  • Apply the initial client setup:
[name@server ~]$ sudo cvmfs_config setup
  • Install the Compute Canada YUM repository and GPG keys:
[name@server ~]$ sudo dnf install
  • Install the Compute Canada CVMFS configuration from that YUM repository:
[name@server ~]$ sudo dnf install cvmfs-config-computecanada

Debian / Ubuntu:

  • Follow the instructions here to add the CERN apt repository.
  • Install the CVMFS client from that repository:
[name@server ~]$ sudo apt-get install cvmfs cvmfs-config-default
  • Apply the initial client setup:
[name@server ~]$ sudo cvmfs_config setup
  • Download and install the Compute Canada CVMFS configuration package:
  [name@server ~]$ wget
  [name@server ~]$ sudo dpkg -i cvmfs-config-computecanada-latest.all.deb
  • Since an apt repository is not available for this package, make sure you are subscribed to be informed of updates.

Other RPM-based distributions:

As these operating systems are RPM-based, following the same instructions as for Fedora should work.

  • For Windows, you first need Windows Subsystem for Linux, version 2. As of this writing (July 2019), this is supported only in a developer version of Windows. The instructions for installing it are here.
  • Once it is installed, install the Linux distribution of your choice and follow the appropriate instructions above for that distribution.
  • Under WSL2 with Ubuntu, /dev/fuse is usable only by root, which prevents CVMFS from working properly. To fix this, run
[name@server ~]$ sudo chmod go+rw /dev/fuse

For more information refer to the quickstart guide.


Do not create any CVMFS configuration files ending with .conf. In order to avoid collisions with upstream configuration sources, all locally-applied configuration must be in .local files. See structure of /etc/cvmfs for more information.

In particular, create the file /etc/cvmfs/default.local, with at least the following minimal configuration:

  • CVMFS_REPOSITORIES is a comma-separated list of the repositories to use.
  • CVMFS_QUOTA_LIMIT is the amount of local cache space in MB for CVMFS to use; set it to about 15% less than the size of your local cache filesystem.
  • If you have proxy servers, specify them with CVMFS_HTTP_PROXY. See the documentation about this parameter, including syntax, examples, and use of load-balancing groups and round-robin DNS.
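Putting the settings above together, a minimal /etc/cvmfs/default.local might look like the following sketch. The repository names, cache size and proxy setting are illustrative placeholders; substitute your own values:

```shell
# Minimal illustrative /etc/cvmfs/default.local; repository names and
# cache size are placeholders -- substitute your own values.
CVMFS_REPOSITORIES=repo1.example.org,repo2.example.org
# Cache quota in MB: roughly 15% below a 40 GB cache filesystem.
CVMFS_QUOTA_LIMIT=34000
# Use DIRECT only if clients can reach the internet without a proxy;
# otherwise list your proxy servers here.
CVMFS_HTTP_PROXY=DIRECT
```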

For more information on client configuration see the quickstart guide and client parameters documentation.


  • Validate the configuration:
[name@server ~]$ sudo cvmfs_config chksetup
  • Make sure to address any warnings or errors that are reported.
  • Check that the repositories are OK:
[name@server ~]$ cvmfs_config probe

If you encounter problems, this debugging guide may help.

Enabling our environment in your session[edit]

Once you have mounted the CVMFS repository, enabling our environment in your sessions is as simple as running

[name@server ~]$ source /cvmfs/

The above command will not run anything if your user ID is below 1000. This is a safeguard, because you should not rely on our software environment for privileged operations. If you nevertheless want to enable our environment, first define the environment variable FORCE_CC_CVMFS=1 with the command

[name@server ~]$ export FORCE_CC_CVMFS=1

or, if you want it to always be active, create the file $HOME/.force_cc_cvmfs in your home folder with

[name@server ~]$ touch $HOME/.force_cc_cvmfs

If, on the contrary, you want to avoid enabling our environment, you can define SKIP_CC_CVMFS=1 or create the file $HOME/.skip_cc_cvmfs to ensure that the environment is never enabled in a given account.

Customizing your environment[edit]

By default, enabling our environment will automatically detect a number of features of your system, and load default modules. You can control the default behaviour by defining specific environment variables prior to enabling the environment. These are described below.

Environment variables[edit]


This variable identifies a cluster. It is used to send some information to the system logs and to define behaviour related to licensed software. By default, its value is computecanada. Set this variable if you want system logs tailored to the name of your system.


This environment variable is used to identify the set of CPU instructions supported by the system. By default, it will be automatically detected based on /proc/cpuinfo. However if you want to force a specific one to be used, you can define it before enabling the environment. The supported instruction sets for our software environment are:

  • sse3
  • avx
  • avx2
  • avx512


This environment variable is used to identify the type of interconnect supported by the system. By default, it will be automatically detected based on the presence of /sys/module/opa_vnic (for Intel OmniPath) or /sys/module/ib_core (for InfiniBand). The fall-back value is ethernet. The supported values are

  • omnipath
  • infiniband
  • ethernet

The value of this variable determines which transport protocols OpenMPI uses.
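The auto-detection described above can be sketched as follows. This is only an illustration of the documented behaviour, not the actual profile script, and the variable name `detected` is a placeholder:

```shell
# Sketch of the interconnect auto-detection described above (illustrative;
# not the actual profile code, and the variable name is a placeholder).
if [ -d /sys/module/opa_vnic ]; then
  detected=omnipath      # Intel OmniPath driver loaded
elif [ -d /sys/module/ib_core ]; then
  detected=infiniband    # InfiniBand core module loaded
else
  detected=ethernet      # fall-back value
fi
echo "$detected"
```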


This environment variable is used to hide or show some versions of our CUDA modules, according to the required version of the NVidia drivers, as documented here. If not defined, it is detected based on the files found under /usr/lib64/nvidia.

For backward-compatibility reasons, if no library is found under /usr/lib64/nvidia, we assume that the installed driver is sufficient for CUDA 10.2. This is because this feature was introduced just as CUDA 11.0 was released.

Defining RSNT_CUDA_DRIVER_VERSION=0.0 will hide all versions of CUDA.


This environment variable allows you to define locations of local module trees, which will automatically be meshed into our central tree. To use it, define

[name@server ~]$ export RSNT_LOCAL_MODULEPATHS=/opt/software/easybuild/modules

and then install your EasyBuild recipe using

[name@server ~]$ eb --installpath /opt/software/easybuild <your recipe>.eb

This will use our module naming scheme to install your recipe locally, and the resulting module will be picked up by the module hierarchy. For example, if the recipe uses the iompi,2018.3 toolchain, the module becomes available after loading the intel/2018.3 and openmpi/3.1.2 modules.


This environment variable defines which modules are loaded by default. If it is left undefined, our environment will define it to load the StdEnv module, which will load by default a version of the Intel compiler, and a version of OpenMPI.


This is an environment variable used by Lmod to define the default version of modules and aliases. You can define your own modulerc file and add it to the environment variable MODULERCFILE. This will take precedence over what is defined in our environment.

System paths[edit]

While our software environment strives to be as independent from the host operating system as possible, there are a number of system paths that are taken into account by our environment to facilitate interaction with tools installed on the host operating system. Below are some of these paths.


If this path exists, it will automatically be added to the default MODULEPATH. This allows the use of our software environment while also maintaining locally installed modules.


If this path exists, it will automatically be added to the default MODULEPATH. This allows the use of our software environment while also allowing installation of modules inside of home directories.

/opt/software/slurm/bin, /opt/software/bin, /opt/slurm/bin[edit]

These paths are all automatically added to the default PATH. This allows your own executables to be added to the search path.

Installing software locally[edit]

Since June 2020, we support installing additional modules locally and having them discovered by our central hierarchy. This was discussed and implemented in this issue.

To do so, first identify a path where you want to install local software, for example /opt/software/easybuild. Make sure that folder exists. Then export the environment variable RSNT_LOCAL_MODULEPATHS:

[name@server ~]$ export RSNT_LOCAL_MODULEPATHS=/opt/software/easybuild/modules

If you want this branch of the software hierarchy to be found by your users, we recommend you define this environment variable in the cluster's common profile. Then, install the software packages you want using EasyBuild:

[name@server ~]$ eb --installpath /opt/software/easybuild <some easyconfig recipe>

This will install the piece of software locally, using the hierarchical layout driven by our module naming scheme. It will also be automatically found when users load our compiler, MPI and CUDA modules.


Use of software environment by system administrators[edit]

System administrators (or users managing their own personal system) who perform diagnostic operations on CVMFS, or privileged system operations, should ensure that their session does not depend on the Compute Canada software environment when performing any such operations. For example, if you attempt to update CVMFS using YUM while your session uses a Python module loaded from CVMFS, YUM may run using that module and lose access to it during the update, and the update may become deadlocked. Similarly, if your environment depends on CVMFS and you reconfigure CVMFS in a way that temporarily interrupts access to CVMFS, your session may hang. (When these precautions are taken, in most cases CVMFS can be updated and reconfigured without interrupting access to CVMFS for users, because the update or reconfiguration itself will complete successfully.)
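As a quick sanity check before such maintenance, you can verify that nothing from /cvmfs is on your search path. This is a minimal sketch; a thorough check would also cover LD_LIBRARY_PATH, PYTHONPATH, and any loaded modules:

```shell
# Quick sanity check before CVMFS maintenance: list any PATH entries under
# /cvmfs, or print a message if there are none. (A full check would also
# inspect LD_LIBRARY_PATH, PYTHONPATH, and loaded modules.)
echo "$PATH" | tr ':' '\n' | grep '^/cvmfs' || echo "PATH does not use /cvmfs"
```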

Compute Canada configuration repository[edit]

If you already have CVMFS installed and configured to use other repositories (such as CERN's), and your CVMFS client configuration relies on a configuration repository, be aware that the cvmfs-config-computecanada package sets up and enables the Compute Canada configuration repository. Since clients can only use a single configuration repository, this may conflict with your use of any other configuration repository and potentially break your pre-existing CVMFS client configuration. (The Compute Canada CVMFS configuration repository is a central source of configuration that makes all other Compute Canada CVMFS repositories available. It provides all site-independent client configuration required for Compute Canada usage and allows client configuration updates to be propagated automatically. Its contents can be seen in /cvmfs/ .)

Software packages that are not available[edit]

On Compute Canada systems, a number of commercial software packages are made available to authorized users according to the terms of the license owners, but they are not available outside of Compute Canada systems, and following the instructions on this page will not grant you access to them. This includes for example the Intel and Portland Group compilers. While the modules for the Intel and PGI compilers are available, you will only have access to the redistributable parts of these packages, usually the shared objects. These are sufficient to run software packages compiled with these compilers, but not to compile new software.

CUDA location[edit]

For CUDA-enabled software packages, our software environment relies on having driver libraries installed in the path /usr/lib64/nvidia. However, on some platforms, recent NVidia drivers will install libraries in /usr/lib64 instead. Because it is not possible to add /usr/lib64 to the LD_LIBRARY_PATH without also pulling in all system libraries (which may have incompatibilities with our software environment), we recommend that you create symbolic links in /usr/lib64/nvidia pointing to the installed NVidia libraries. The script below creates the needed symbolic links (set NVIDIA_DRV_VER to the driver version you have):

File :

# Set NVIDIA_DRV_VER to your installed driver version before running.
nv_pkg=( "nvidia-driver" "nvidia-driver-libs" "nvidia-driver-cuda" "nvidia-driver-cuda-libs" "nvidia-driver-NVML" "nvidia-driver-NvFBCOpenGL" "nvidia-modprobe" )
yum -y install "${nv_pkg[@]/%/-${NVIDIA_DRV_VER}}"
for file in $(rpm -ql "${nv_pkg[@]}"); do
  [ "${file%/*}" = '/usr/lib64' ] && [ ! -d "${file}" ] && \
    ln -snf "$file" "${file%/*}/nvidia/${file##*/}"
done


Our software environment is designed to use RUNPATH. Defining LD_LIBRARY_PATH is not recommended and can lead to the environment not working.

Missing libraries[edit]

Because we do not define LD_LIBRARY_PATH, and because our libraries are not installed in default Linux locations, binary packages, such as Anaconda, will often not find libraries that they would usually expect. Please see our documentation on Installing binary packages.


For some applications, dbus needs to be installed; it must be installed locally, on the host operating system.
