Running jobs

From CC Doc


This article is a draft

This is not a complete article: it is a work in progress intended for eventual publication, and should not necessarily be considered factual or authoritative.



Overview

What's a job?

On computers we are most often familiar with graphical user interfaces (GUIs). There are windows, menus, buttons; we click here and there and the system responds. On Compute Canada servers the environment is different. To begin with, you control it by typing, not clicking. This is called a command line interface. Furthermore, a program you would like to run may not begin immediately; instead it may be placed in a waiting list and started only when the necessary CPU cores become available. Without such a waiting list, jobs would interfere with one another and performance would suffer.

You prepare a small text file called a job script that basically says what program to run, where to get the input, and where to put the output. You submit this job script to a piece of software called the scheduler which decides when and where it will run. Once the job has finished you can retrieve the results of the calculation. Normally there is no interaction between you and the program while the job is running, although you can check on its progress if you wish.

Here's a very simple job script:

File : simple_job.sh

#!/bin/bash
#SBATCH --time=1:00
echo 'Hello, world!'


It runs only the program echo; there is no input, and the output goes to a default location.
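Assuming the script above is saved as simple_job.sh, it could be submitted and its output inspected roughly as follows (the job ID shown is illustrative; yours will differ):

```shell
# Submit the script; sbatch replies with the ID assigned to the job.
sbatch simple_job.sh
# Submitted batch job 123456

# By default, Slurm writes the job's output to slurm-<jobid>.out
# in the directory the job was submitted from.
cat slurm-123456.out
```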

The job scheduler

The job scheduler is a piece of software that behaves like the conductor of an orchestra, as it has multiple responsibilities: it must

  • maintain a database of all jobs that were submitted until they have finished,
  • respect certain conditions (limits, priorities),
  • ensure that each available resource is assigned to only one job at a time,
  • decide which jobs to run and on which compute nodes,
  • launch them on those nodes,
  • clean them up once they have finished, and
  • maintain logs for accounting and troubleshooting.

On Compute Canada systems, these responsibilities are handled by the Slurm Workload Manager.

Requesting resources

Your job script must explicitly ask for the resources needed to run your job. The two principal resources associated with a job are the time needed to complete the task and the number of processors. You can optionally request other resources such as the amount of memory per processor, or special types of processors like GPUs.

It is important to specify these parameters accurately. If the requests are too large, the job may wait longer than necessary before it starts, and while running it will prevent others from using those resources. If they are too small, the job may be killed for exceeding its requested time or memory.
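As a sketch, resource requests are written as #SBATCH directives at the top of the job script; the values below are illustrative and should be adjusted to what your program actually needs:

```shell
#!/bin/bash
#SBATCH --time=0-03:00        # run-time limit (D-HH:MM)
#SBATCH --cpus-per-task=4     # number of CPU cores
#SBATCH --mem-per-cpu=2048M   # memory per core
```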

Submitting a job with SLURM

Submitting an MPI job

This example job launches four MPI processes, each with 1024 MB of memory, and a run-time limit of 5 minutes. The output filename will include the name of the first node used and the job ID.


File : simple_mpi_job.sh

#!/bin/bash
#
#SBATCH --ntasks 4               # number of MPI tasks
#SBATCH --partition mpi          # partition (site-specific)
#SBATCH --mem-per-cpu 1024       # memory (in MB) per process
#SBATCH --output slurm.%N.%j.out # STDOUT (%N = first node, %j = job ID)
#SBATCH --time 0:05:00           # time limit (HH:MM:SS)

mpirun ./program.x


The job is submitted with:

sbatch simple_mpi_job.sh

Submitting a GPU job

This example is a serial GPU job with one GPU allocated, a memory limit of 1024 MB, and a run-time limit of 5 minutes. The output filename will include the name of the first node used and the job ID.


File : simple_gpu_job.sh

#!/bin/bash
#
#SBATCH --ntasks 1                # number of tasks
#SBATCH --partition gpu           # partition (site-specific)
#SBATCH --mem 1024                # memory (in MB) for the job
#SBATCH --output slurm.%N.%j.out  # STDOUT (%N = first node, %j = job ID)
#SBATCH --time 0:05:00            # time limit (HH:MM:SS)
#SBATCH --gres=gpu:1              # request one GPU ("generic resource")

nvidia-smi


The job is submitted with:

sbatch simple_gpu_job.sh

Monitoring jobs

To see all jobs:

squeue 

To list the jobs of a specific user:

squeue -u <username>

Only running jobs:

squeue -u <username> -t RUNNING

Only pending jobs:

squeue -u <username> -t PENDING

Jobs in a given partition:

squeue -u <username> -p <partition>
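The columns squeue prints can also be customized with an output-format string; the %-codes below are squeue's own format specifiers (job ID, partition, job name, state, elapsed time, and node count):

```shell
# Show a custom set of columns, each with a fixed width.
squeue -u <username> -o "%.10i %.9P %.20j %.8T %.10M %.6D"
```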

Show detailed information for a specific job:

scontrol show job -dd <jobid>

Show statistics for a completed job:

sacct -j <jobid> --format=JobID,JobName,MaxRSS,Elapsed
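Accounting data is also useful for tuning future resource requests: comparing requested memory with peak usage shows whether the job asked for more than it needed. ReqMem and MaxRSS are standard sacct fields:

```shell
# Compare requested memory (ReqMem) against peak usage (MaxRSS)
# to decide whether the next submission can request less.
sacct -j <jobid> --format=JobID,ReqMem,MaxRSS,Elapsed,State
```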

Controlling jobs

To cancel one specific job:

scancel <jobid>

Cancel all jobs of a user:

scancel -u <username>

Cancel all pending jobs for user:

scancel -t PENDING -u <username>

External links

  • A "Rosetta stone" mapping commands and directives from PBS/Torque, SGE, LSF, and LoadLeveler, to SLURM.
  • Some SLURM tutorial materials: