# MATLAB

There are two ways of using MATLAB on Compute Canada clusters.

- Running MATLAB directly

This approach requires you to have access to a MATLAB license. You may either:

- Run MATLAB on Cedar or Béluga, both of which have a license available for any student, professor or academic researcher.
- Use an external license, i.e. one owned by your institution, faculty, department, or lab. See *Using an external license* below.

- Running a compiled MATLAB application

This method requires compiling your code into a binary using the MATLAB Compiler (`mcc`). You can then run that binary executable using the appropriate MATLAB Runtime.

More details about these approaches are provided below.

# Using an external license

Compute Canada is a hosting provider for MATLAB. This means that we have MATLAB installed on our clusters and can allow you to access an external license to run computations on our infrastructure. Arrangements have already been made with several Canadian institutions to make this automatic. To see if you already have access to a license, carry out the following test:

```
[name@cluster ~]$ module load matlab/2018a
[name@cluster ~]$ matlab -nodisplay -nojvm -r "fprintf('%s\n', license()); exit"

                          < M A T L A B (R) >
                Copyright 1984-2018 The MathWorks, Inc.
                 R2018a (9.4.0.813654) 64-bit (glnxa64)
                           February 23, 2018

987654
```

If any license number is printed, you're okay. Be sure to run this test on each cluster on which you want to use MATLAB; you may get different results.

If you get the message *This version is newer than the version of the license.dat file and/or network license manager on the server machine*, try an older version of MATLAB in the `module load` line.

Otherwise, either your institution does not have a MATLAB license, does not allow its use in this way, or no arrangements have yet been made. Find out who administers the MATLAB license at your institution (faculty, department) and contact them or your Mathworks account manager to know if you are allowed to use the license in this way.

If you are allowed, then some technical configuration will be required. Create a file similar to the following example:

**File :**matlab.lic

```
# MATLAB license passcode file
SERVER <ip address> ANY <port>
USE_SERVER
```

Put this file in the `$HOME/.licenses/` directory where the IP address and port number correspond to the values for your campus license server. Next you will need to ensure that the license server on your campus is reachable by our compute nodes. This will require our technical team to get in touch with the technical people managing your license software. Please write to technical support so that we can arrange this for you.
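The setup above can be sketched as follows; the address 10.20.30.40 and port 27000 are hypothetical placeholders that must be replaced with the values for your campus license server:

```shell
# Create the directory where MATLAB looks for license files
mkdir -p "$HOME/.licenses"

# Write the license passcode file; the SERVER address and port below
# are hypothetical -- substitute your campus license server's values.
cat > "$HOME/.licenses/matlab.lic" <<'EOF'
# MATLAB license passcode file
SERVER 10.20.30.40 ANY 27000
USE_SERVER
EOF
```

If your system provides the `nc` utility, `nc -z -w5 10.20.30.40 27000` offers a quick reachability check; a timeout suggests a firewall is blocking the connection, which is exactly the case where our technical team needs to coordinate with your license administrators.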


# Preparing your `.matlab` folder

Because the /home directory is accessible in read-only mode on the compute nodes of some clusters, users should create a `.matlab` symbolic link that ensures the MATLAB profile and job data are written to the /scratch space instead.

```
[name@cluster ~]$ cd $HOME
[name@cluster ~]$ if [ -d ".matlab" ]; then mv .matlab scratch/; else mkdir -p scratch/.matlab; fi
[name@cluster ~]$ ln -sn scratch/.matlab .matlab
```

# Available Toolboxes

To see a list of the MATLAB toolboxes available with the license and cluster you're using, you can use the following command:

```
[name@cluster ~]$ module load matlab
[name@cluster ~]$ matlab -nodisplay -nojvm -r "fprintf('%s\n', license()); ver; exit"
```

# Running a MATLAB code

**Important:** Any MATLAB computation longer than a short test of about 5 minutes must be submitted to the scheduler. For instructions on using the scheduler, please see the Running jobs page.

Consider the following example code:

**File :**cosplot.m

```
function cosplot()
% MATLAB file example to approximate a sawtooth
% with a truncated Fourier expansion.
nterms = 5;
fourbypi = 4.0/pi;
np = 100;
y(1:np) = pi/2.0;
x(1:np) = linspace(-2.0*pi, 2*pi, np);
for k = 1:nterms
    twokm = 2*k - 1;
    y = y - fourbypi*cos(twokm*x)/twokm^2;
end
plot(x, y)
print -dpsc matlab_test_plot.ps
quit
end
```

Here is a simple Slurm script that you can use to run `cosplot.m`:

**File :**matlab_slurm.sl

```
#!/bin/bash -l
#SBATCH --job-name=matlab_test
#SBATCH --account=def-someprof # adjust this to match the accounting group you are using to submit jobs
#SBATCH --time=0-03:00 # adjust this to match the walltime of your job
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1 # adjust this if you are using parallel commands
#SBATCH --mem=4000 # adjust this according to the memory requirement per node you need
#SBATCH --mail-user=you@youruniversity.ca # adjust this to match your email address
#SBATCH --mail-type=ALL
# Choose a version of MATLAB by loading a module:
module load matlab/2018a
# Remove -singleCompThread below if you are using parallel commands:
matlab -nodisplay -singleCompThread -r "cosplot"
```

Submit the job using `sbatch`:

`[name@server ~]$ sbatch matlab_slurm.sl`

Do not use the `-singleCompThread` option if you request more than one core with `--cpus-per-task`. You should also ensure that the size of your MATLAB parpool matches the number of cores you are requesting.

Each time you run MATLAB, it will create a file like `java.log.12345` unless you supply the `-nojvm` option. However, using `-nojvm` may interfere with certain plotting functions. For further information on the command line options `-nodisplay`, `-singleCompThread`, `-nojvm`, and `-r`, see MATLAB (Linux) on the MathWorks website.
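One way to keep the pool size matched to your resource request is to read Slurm's CPU count from the environment inside your MATLAB code. This is only a sketch; note that `SLURM_CPUS_PER_TASK` is defined only inside a running job:

```matlab
% Start a pool sized to the Slurm allocation; SLURM_CPUS_PER_TASK
% is set by Slurm inside the job (from --cpus-per-task).
ncores = str2double(getenv('SLURM_CPUS_PER_TASK'));
parpool('local', ncores);
```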

# Running multiple parallel MATLAB jobs simultaneously

There is a known issue when two (or more) parallel MATLAB jobs are initializing their `parpool` simultaneously: multiple new MATLAB instances are trying to read and write to the same `.dat` file in the `$HOME/.matlab/local_cluster_jobs/R*` folder, which corrupts the local parallel profile used by other MATLAB jobs. To fix the corrupted profile, delete the `local_cluster_jobs` folder when no job is running.

There are two definitive solutions:

- Making sure only one MATLAB job at a time starts its `parpool`. There are many possible technical solutions, but none is perfect:
  - using a lock file (which may remain locked if a previous job has failed),
  - using random delays (which may be equal or almost equal, and still cause the corruption),
  - using always-increasing delays (which waste compute time),
  - using the Slurm options `--begin` or `--dependency=after:JOBID` to control the start time (which increases wait time in the queue).
- Making sure each MATLAB job creates its local parallel profile in a unique location of the filesystem.

In your MATLAB code:

**File :**parallel_main.m

```
% Create a "local" cluster object
local_cluster = parcluster('local')
% Modify the JobStorageLocation to $SLURM_TMPDIR
local_cluster.JobStorageLocation = getenv('SLURM_TMPDIR')
% Start the parallel pool
parpool(local_cluster);
```
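For reference, a submission script for this example might look like the following sketch; the account name and resource values are placeholders to adjust:

```shell
# Write a hypothetical submission script for parallel_main.m;
# adjust the account, walltime, cores and memory to your needs.
cat > parallel_main.sl <<'EOF'
#!/bin/bash -l
#SBATCH --account=def-someprof
#SBATCH --time=0-01:00
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8000
module load matlab/2018a
# SLURM_TMPDIR is set by Slurm inside the job, so each job gets
# its own JobStorageLocation and the profiles cannot collide.
matlab -nodisplay -r "parallel_main"
EOF

# Basic syntax check of the generated script
bash -n parallel_main.sl
```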

References:

- FAS Research Computing, MATLAB Parallel Computing Toolbox simultaneous job problem
- MathWorks, ... from multiple MATLAB sessions that use a shared preference directory
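Among the workarounds listed above, the `--dependency=after:JOBID` option needs the job ID of the first submission; `sbatch` prints it as `Submitted batch job <id>`, which a wrapper script can parse. A sketch, shown with a captured example string so it can run without a scheduler (the script names are hypothetical):

```shell
# In a real wrapper you would use: first=$(sbatch job1.sl)
first="Submitted batch job 12345"

# Extract the numeric job ID (4th word of sbatch's message)
jobid=$(echo "$first" | awk '{print $4}')

# The second job is then held until the first one has started:
echo "sbatch --dependency=after:${jobid} job2.sl"
```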

# Using the Compiler and Runtime libraries

**Important:** Like any other intensive job, you must always run MCR code within a job submitted to the scheduler. For instructions on using the scheduler, please see the Running jobs page.

You can also compile your code using MATLAB Compiler, which is included among the modules hosted by Compute Canada. See documentation for the compiler on the MathWorks website. At the moment, `mcc` is provided for versions 2014a, 2018a and later.

To compile the `cosplot.m` example given above, you would use the command

`[name@yourserver ~]$ mcc -m -R -nodisplay cosplot.m`

This will produce a binary named `cosplot`, as well as a wrapper script. To run the binary on Compute Canada servers, you will only require the binary. The wrapper script named `run_cosplot.sh` will not work as is on our servers because MATLAB assumes that some libraries can be found in specific locations. Instead, we provide a different wrapper script called `run_mcr_binary.sh` which sets the correct paths.

On one of our servers, load an MCR module corresponding to the MATLAB version you used to build the executable:

`[name@server ~]$ module load mcr/R2018a`

Run the following command:

`[name@server ~]$ setrpaths.sh --path cosplot`

then, in your submission script (**not on the login nodes**), use your binary as follows:
`run_mcr_binary.sh cosplot`

You will only need to run the `setrpaths.sh` command once for each compiled binary. The `run_mcr_binary.sh` script will instruct you to run it if it detects that this has not been done.
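Putting these steps together, a minimal submission script for the compiled binary could look like this sketch (the account and resource values are placeholders):

```shell
# Write a hypothetical submission script for the compiled cosplot
# binary; adjust the account, walltime and memory to your needs.
cat > mcr_job.sl <<'EOF'
#!/bin/bash -l
#SBATCH --account=def-someprof
#SBATCH --time=0-00:30
#SBATCH --mem=4000
# Load the MCR version matching the MATLAB used to compile:
module load mcr/R2018a
# The wrapper script sets the library paths the binary needs:
run_mcr_binary.sh cosplot
EOF

bash -n mcr_job.sl   # basic syntax check
```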

# Using the MATLAB Parallel Server

MATLAB Distributed Computing Server (MDCS) became MATLAB Parallel Server in recent years. This technology is only worthwhile if you need more workers in your parallel MATLAB job than there are CPU cores on a single compute node. While the regular MATLAB installation (see above sections) allows you to run parallel jobs within one node (up to 64 workers per job, depending on which node and cluster), MATLAB Parallel Server is the licensed MathWorks solution for running a parallel job on more than one node. For the moment, the use of MATLAB Parallel Server is only supported on the Béluga cluster.

This solution allows the submission of MATLAB parallel jobs from a local MATLAB interface on your computer. Some configuration is required in order to submit jobs remotely to the Slurm scheduler. Please follow instructions in the sections below.

## Slurm plugin for MATLAB

- Have MATLAB R2020a or newer installed on your computer, **including the Parallel Computing Toolbox**.
- Go to the MathWorks Slurm Plugin page, **download and run** the `*.mlpkginstall` file (i.e. click on the blue *Download* button on the right side, just above the *Overview* tab).
- Enter your MathWorks credentials; if the configuration wizard does not start, run the following in MATLAB: `parallel.cluster.generic.runProfileWizard()`
- Give these responses to the configuration wizard:
  - Select **Unix** (which is usually the only choice)
  - Shared location: **No**
  - Cluster host: **beluga.computecanada.ca**
  - Username (optional): enter your Compute Canada username (the identity file can be set later, if needed)
  - Remote job storage: **/scratch** (or a unique sub-directory, for example /scratch/tmp_matlab)
  - Maximum number of workers: **960**
  - MATLAB installation: both local and remote versions must match:
    - For local R2020a: **/cvmfs/restricted.computecanada.ca/easybuild/software/2020/Core/matlab/2020a**
    - For local R2020b: **/cvmfs/restricted.computecanada.ca/easybuild/software/2020/Core/matlab/2020b**
  - License type: **Network license manager**
  - Profile Name: **beluga**
- Click on *Create* and *Finish* to finalize the profile.

## Edit the plugin once installed

In MATLAB, go to the `nonshared` folder (i.e. run the following in the MATLAB terminal):

```
cd(fullfile(matlabshared.supportpkg.getSupportPackageRoot, 'parallel', 'slurm', 'nonshared'))
```

Then:

- Open the **independentSubmitFcn.m** file; around line #97 is the line
  `additionalSubmitArgs = sprintf('--ntasks=1 --cpus-per-task=%d', cluster.NumThreads);`
  Replace this line with
  `additionalSubmitArgs = ccSBATCH().getSubmitArgs();`
- Open the **communicatingSubmitFcn.m** file; around line #103 is the line
  `additionalSubmitArgs = sprintf('--ntasks=%d --cpus-per-task=%d', environmentProperties.NumberOfTasks, cluster.NumThreads);`
  Replace this line with
  `additionalSubmitArgs = ccSBATCH().getSubmitArgs();`

Restart MATLAB and go back to your home directory:

```
cd(getenv('HOME'))
```

## Validation

**Do not** use the built-in validation tool in the *Cluster Profile Manager*. Instead, you should try the `TestParfor` example, along with a proper `ccSBATCH.m` script file:

- Download and extract the code samples from GitHub at https://github.com/ComputeCanada/matlab-parallel-server-samples.
- In MATLAB, go to the newly extracted `TestParfor` directory.
- Follow the instructions in https://github.com/ComputeCanada/matlab-parallel-server-samples/blob/master/README.md.

Note: When the `ccSBATCH.m` is in your current working directory, you may try the *Cluster Profile Manager* validation tool, but only the first two tests will work. Other tests are not yet supported.