GROMACS

General

GROMACS is a versatile package to perform molecular dynamics for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers.

Strengths

  • GROMACS provides very high performance compared to many other molecular dynamics packages.
  • Since GROMACS 4.6, it offers excellent CUDA-based GPU acceleration on GPUs with NVIDIA compute capability >= 2.0 (e.g. Fermi or later).
  • GROMACS comes with a large selection of flexible tools for trajectory analysis.
  • GROMACS can be run in parallel, using either the standard MPI (Message Passing Interface) communication protocol, or via its own "Thread MPI" library for single-node workstations.
  • GROMACS is Free Software, available under the GNU Lesser General Public License (LGPL), version 2.1.

Weak points

  • To achieve very high simulation speed, GROMACS does little additional analysis or data collection on the fly. It can therefore be a challenge to obtain somewhat non-standard information about the simulated system from a GROMACS simulation.
  • Different versions of GROMACS may differ significantly in simulation methods and default parameters. Reproducing the results of older versions with a newer version may not be straightforward.
  • The additional tools and utilities that come with GROMACS are not always of the highest quality: they may contain bugs and may implement poorly documented methods. Reconfirming the results of such tools with independent methods is always a good idea.

GPU support

The top part of any GROMACS log file describes the configuration, and in particular whether your version has GPU support compiled in. GROMACS will automatically use any GPUs it finds.
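
If your run wrote its log to md.log (the name depends on how you invoke mdrun), one quick way to check this is to search the log header for GPU-related lines, for example:

$ grep -i gpu md.log | head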

GROMACS uses both CPUs and GPUs; it relies on a reasonable balance between CPU and GPU performance.

The new neighbor-list structure required the introduction of a new variable called "cutoff-scheme" in the mdp file. The behavior of older GROMACS versions corresponds to the value "group", while you must switch it to "Verlet" to use GPU acceleration.
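
For example, the mdp file for a GPU run would contain a line like this (all other settings omitted):

; select the Verlet cutoff-scheme, required for GPU acceleration
; (the old behavior corresponds to "cutoff-scheme = group")
cutoff-scheme = Verlet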

Quickstart Guide

This section summarizes configuration details.

Environment Modules

The following GROMACS versions have been installed:

  • gromacs/2016.3
  • gromacs/5.1.4
  • gromacs/5.0.7
  • gromacs/4.6.7

They have been compiled with Intel compilers, using Intel MKL and Open MPI 2.0.2 libraries from the default environment, and are available in single and double precision.

These modules can be loaded by using the module load gromacs/2016.3 command.

These versions are also available with GPU support, albeit only in single precision. In order to load the GPU-enabled version of GROMACS, the cuda module needs to be loaded first:

$ module load cuda
$ module load gromacs/2016.3

For more information on Environment Modules, please refer to the Using modules page.

Suffixes

GROMACS 5.x and 2016.x

GROMACS 5 and newer releases consist of only four binaries that contain the full functionality. All GROMACS tools from previous versions have been implemented as sub-commands of the gmx binaries (see the example after the list below). Please refer to GROMACS 5.0 Tool Changes and the GROMACS documentation manual for your version.

  • gmx - single precision GROMACS with OpenMP (threading) but without MPI.
  • gmx_mpi - single precision GROMACS with OpenMP and MPI.
  • gmx_d - double precision GROMACS with OpenMP but without MPI.
  • gmx_mpi_d - double precision GROMACS with OpenMP and MPI.
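
For example, analysis tools that used to be separate binaries in GROMACS 4.x are now called as sub-commands of gmx (the file names below are placeholders):

$ gmx energy  -f md.edr -o energy.xvg          # formerly g_energy
$ gmx trjconv -f md.xtc -s md.tpr -o out.gro   # formerly trjconv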

GROMACS 4.6.7

  • The double precision binaries have the suffix _d.
  • The parallel single and double precision mdrun binaries are:
      • mdrun_mpi
      • mdrun_mpi_d

Submission Scripts

Please refer to the page "Running jobs" for help on using the SLURM workload manager.

Serial Job

Here's a simple job script for serial mdrun:


File : serial_gromacs_job.sh

#!/bin/bash
#SBATCH --time 0-0:30         # time limit (D-HH:MM)
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
  
module load gromacs/2016.3
gmx mdrun -deffnm em


This will run the simulation of the molecular system in the file em.tpr.
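
The script can be submitted to the scheduler with sbatch:

$ sbatch serial_gromacs_job.sh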

MPI Job

A job script for mdrun using 4 MPI processes:


File : mpi_gromacs_job.sh

#!/bin/bash
#SBATCH --ntasks 4               # number of MPI processes
#SBATCH --mem 4000               # memory limit per node (megabytes)
#SBATCH --time 0:30:00           # time limit (D-HH:MM:ss)
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
  
module load gromacs/2016.3
srun gmx_mpi mdrun -deffnm md



Hybrid MPI/OpenMP Job

A job script for mdrun using 8 MPI tasks and 2 OpenMP threads per MPI task:


File : hybrid_gromacs_job.sh

#!/bin/bash
#SBATCH --ntasks 8               # number of MPI processes
#SBATCH --cpus-per-task 2        # number of OpenMP threads per MPI process
#SBATCH --mem-per-cpu 1000       # memory limit per CPU core (megabytes)
#SBATCH --time 0:30:00           # time limit (D-HH:MM:ss)
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
  
module load gromacs/2016.3
srun gmx_mpi mdrun -deffnm md


GPU Job

A job script for mdrun using 4 OpenMP threads and one GPU:

File : gpu_gromacs_job.sh

#!/bin/bash
#SBATCH --gres=gpu:1             # request 1 GPU as "generic resource"
#SBATCH --cpus-per-task 4        # number of OpenMP threads per MPI process
#SBATCH --mem-per-cpu 1000       # memory limit per CPU core (megabytes)
#SBATCH --time 0:30:00           # time limit (D-HH:MM:ss)
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
  
module load cuda gromacs/2016.3
gmx mdrun -ntomp "${SLURM_CPUS_PER_TASK:-1}" -deffnm md


GPU-MPI Job

A job script for mdrun using 1 node, 2 GPUs, 6 MPI tasks per node and 2 OpenMP threads per MPI task:

File : gpu_mpi_gromacs_job.sh

#!/bin/bash
#SBATCH --nodes=1                # number of nodes
#SBATCH --gres=gpu:2             # request 2 GPUs per node
#SBATCH --ntasks-per-node=6      # request 6 MPI tasks per node
#SBATCH --cpus-per-task=2        # 2 OpenMP threads per MPI process
#SBATCH --mem-per-cpu 1000       # memory limit per CPU core (megabytes)
#SBATCH --time 1:00:00           # time limit (D-HH:MM:ss)
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
  
module load  cuda  gromacs/2016.3
mpiexec gmx_mpi mdrun -deffnm md


Notes on running GROMACS on GPUs
  • The new national systems (Cedar and Graham) have differently configured GPU nodes:
      • Cedar has 4 GPUs and 24 CPU cores per node
      • Graham has 2 GPUs and 32 CPU cores per node
    Therefore one needs to use different settings to make use of all GPUs and CPU cores in a node:
      • Cedar: --gres=gpu:4 --ntasks-per-node=8 --cpus-per-task=3
      • Graham: --gres=gpu:2 --ntasks-per-node=8 --cpus-per-task=4
    Of course the simulated system needs to be large enough to utilize these resources (a full-node job-script header for Graham is sketched after these notes).
  • GROMACS imposes a number of constraints on the choice of the number of GPUs, MPI tasks (ranks) and OpenMP threads. For GROMACS 2016.3 the constraints are:
      • The value of --ntasks-per-node always needs to be a multiple of the number of GPUs (--gres=gpu:).
      • GROMACS will not start GPU runs with only 1 OpenMP thread per MPI task, unless forced by setting the -ntomp option. According to the developers, the optimal number of --cpus-per-task is between 2 and 6.
  • Avoid requesting a larger fraction of a node's CPUs and memory than the fraction of its GPUs you have requested.
  • While the developers of the SLURM scheduler consider srun the preferred replacement for mpiexec/mpirun when starting MPI jobs, we have seen evidence of jobs failing on startup when two jobs using srun are started on the same compute node. At this time we therefore recommend using mpiexec, especially when utilizing only part of a node.
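
As an illustration of the Graham settings above, the header of a full-node GPU job script might look like the following sketch; the memory and time values are placeholders that you would adjust for your own simulation.

File : gpu_full_node_graham_job.sh

#!/bin/bash
#SBATCH --nodes=1                # use one full GPU node
#SBATCH --gres=gpu:2             # both GPUs of a Graham GPU node
#SBATCH --ntasks-per-node=8      # 8 MPI tasks, a multiple of the number of GPUs
#SBATCH --cpus-per-task=4        # 4 OpenMP threads per MPI task (8 x 4 = 32 cores)
#SBATCH --mem-per-cpu 1000       # memory limit per CPU core (megabytes); adjust as needed
#SBATCH --time 1:00:00           # time limit (D-HH:MM:ss); adjust as needed
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"

module load cuda gromacs/2016.3
mpiexec gmx_mpi mdrun -deffnm md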

Usage

More content for this section will be added at a later time.

System Preparation

In order to run a GROMACS simulation, one needs to create a tpr file (portable binary run input file). This file contains the starting structure of the simulation, the molecular topology and all the simulation parameters.

Tpr files are created with the gmx grompp command (or simply grompp for GROMACS versions older than 5.0). For this, one needs the following files:

  • The coordinate file with the starting structure. GROMACS can read the starting structure from various file formats, such as .gro, .pdb or .cpt (GROMACS checkpoint).
  • The (system) topology (.top) file. It defines which force field is used and how the force-field parameters are applied to the simulated system. Often the topologies for individual parts of the simulated system (e.g. molecules) are placed in separate .itp files and included in the .top file using a #include directive.
  • The run-parameter (.mdp) file. See the GROMACS user guide for a detailed description of the options.
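
For example, assuming these files are called md.mdp, conf.gro and topol.top (placeholder names), a run input file md.tpr could be created as follows:

$ module load gromacs/2016.3
$ gmx grompp -f md.mdp -c conf.gro -p topol.top -o md.tpr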

Tpr files are portable, that is, they can be grompp'ed on one machine, copied over to a different machine and used as an input file for mdrun. However, one should always use the same GROMACS version for both grompp and mdrun: although mdrun is able to use tpr files that have been created with an older version of grompp, this can lead to unexpected simulation results.

Running Simulations

Analyzing Results

Common pitfalls

Links

Biomolecular simulation