

ANSYS is a software suite for engineering simulation and 3-D design. It includes packages such as ANSYS Fluent and ANSYS CFX.


Compute Canada is a hosting provider for ANSYS. This means that we have the ANSYS software installed on our clusters, but we do not provide a generic license accessible to everyone. However, many institutions, faculties, and departments already have licenses that can be used on our clusters. Once the legal aspects of licensing are worked out, some technical aspects remain: the license server on your end must be reachable from our compute nodes, which requires our technical team to coordinate with the people managing your license server. In some cases this has already been done; you should then be able to load the ANSYS modules, and they should find their license automatically. If this is not the case, please contact our Technical support so that we can arrange this for you.

Available modules are: fluent/16.1, ansys/16.2.3, ansys/17.2, ansys/18.1, ansys/18.2, ansys/19.1, ansys/19.2, ansys/2019R2, ansys/2019R3.


The full ANSYS documentation (for the latest version) can be accessed by following these steps:

  1. connect with TigerVNC as described in VDI Nodes
  2. open a terminal window and start workbench:
    • module load CcEnv StdEnv/2016.4 ansys
    • runwb2
  3. in the upper pulldown menu click the sequence:
    • Help -> ANSYS Workbench Help
  4. once the ANSYS Help page appears click:
    • Home

Configuring your own license file[edit]

Our module for ANSYS is designed to look for license information in a few places. One of those places is your home folder. If you have your own license server, you can store the information needed to access it in the following format:

File : ansys.lic

setenv("ANSYSLMD_LICENSE_FILE", "<port>@<hostname>")
setenv("ANSYSLI_SERVERS", "<port>@<hostname>")

Put this file in the folder $HOME/.licenses/. Before an ANSYS license server can be reached from any Compute Canada system, firewall configuration changes will likely need to be made; please contact our Technical support to arrange this. Several ANSYS license servers have already been configured for use, such as the free SHARCNET CFD license or the non-free CMC license; these may be specified using the settings shown in the following table:

License    Cluster                ANSYSLMD_LICENSE_FILE    ANSYSLI_SERVERS
CMC        beluga                 6624@                    2325@
CMC        cedar                  6624@                    2325@
CMC        graham                 6624@                    2325@
SHARCNET   beluga/cedar/graham
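As a concrete sketch, the ansys.lic file can be created from the shell. The port numbers and hostname below are hypothetical placeholders, not a real license server:

```shell
# Sketch only: create $HOME/.licenses/ansys.lic with hypothetical
# placeholder values -- replace 1055, 2325, and ansys-lic.example.ca
# with the ports and hostname of your actual license server.
mkdir -p "$HOME/.licenses"
cat > "$HOME/.licenses/ansys.lic" <<'EOF'
setenv("ANSYSLMD_LICENSE_FILE", "1055@ansys-lic.example.ca")
setenv("ANSYSLI_SERVERS", "2325@ansys-lic.example.ca")
EOF
```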

In some situations you may also need to obtain an XML file from the institution that operates the license server, to ensure that ANSYS on the Compute Canada clusters gives priority to the right kind of license. For example, to choose a research license instead of a teaching license, a file named license.preferences.xml would be placed in the directory $HOME/.ansys/v195/licensing/, assuming you are using the ansys/2019R3 module.
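That placement step can be sketched as follows, assuming the ansys/2019R3 module (whose internal version directory is v195):

```shell
# Sketch: create the per-user licensing preferences directory for the
# v195 (ansys/2019R3) release, then copy in the XML file obtained from
# your license administrator, if it is present in the current directory.
mkdir -p "$HOME/.ansys/v195/licensing"
if [ -f license.preferences.xml ]; then
    cp license.preferences.xml "$HOME/.ansys/v195/licensing/"
fi
```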

Cluster Batch Job Submission[edit]

The ANSYS software suite comes with multiple implementations of MPI to support parallel computation. Unfortunately, none of them supports our Slurm scheduler. For this reason, we need special instructions for each ANSYS package on how to start a parallel job. In the sections below, we give examples of submission scripts for some of the packages. If one is not covered and you want us to investigate and help you start it, please contact our Technical support.

ANSYS Fluent[edit]

Typically you would use the following procedure for running Fluent on one of the Compute Canada clusters:

  • Prepare your Fluent job using Fluent from the "ANSYS Workbench" on your Desktop machine up to the point where you would run the calculation.
  • Export the "case" file "File > Export > Case..." or find the folder where Fluent saves your project's files. The "case" file will often have a name like FFF-1.cas.gz.
  • If you already have data from a previous calculation that you want to continue, export a "data" file as well (File > Export > Data...) or find it in the same project folder (FFF-1.dat.gz).
  • Transfer the "case" file (and, if needed, the "data" file) to a directory on the project or scratch filesystem on the cluster. When exporting, you can save the file(s) under a more descriptive name than FFF-1.*, or rename them when uploading.
  • Now you need to create a "journal" file. Its purpose is to load the case file (and optionally the data file), run the solver, and finally write the results. See the examples below, and remember to adjust the filenames and the desired number of iterations.
  • Adapt the Fluent jobscript below to your needs.
  • After running the job you can download the "data" file and import it back into Fluent with File > Import > Data....

File :

#!/bin/bash
#SBATCH --time=00-06:00       # Time limit dd-hh:mm
#SBATCH --nodes=2             # Number of compute nodes
#SBATCH --cpus-per-task=32    # Number of cores per node
#SBATCH --ntasks-per-node=1   # Do not change
#SBATCH --mem=0               # All memory on full nodes
module load ansys/2019R3

# Write the list of allocated hosts in the format Fluent expects:
slurm_hl2hl.py --format ANSYS-FLUENT > machinefile
# Total number of cores across all allocated nodes:
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

fluent 3d -t $NCORE -cnf=machinefile -mpi=intel -affinity=0 -g -i fluent_3.jou
File : fluent_3.jou

; ===========================
; lines beginning with a semicolon are comments

; Read only the case file:
/file/read-case  FFF-1.cas.gz

; Run the solver for this many steps:
/solve/iterate 1000

; Overwrite output files by default
/file/confirm-overwrite n

; Write the output data-file:
/file/write-data  FFF-out.dat.gz

; Write simulation report to file (optional):
/report/summary y "My_Simulation_Report.txt"

; Exit fluent:
exit
File : fluent_3.jou

; ===========================
; lines beginning with a semicolon are comments

; Read both case and data files (FFF-1.cas.gz & FFF-1.dat.gz):
/file/read-case-data  FFF-1.cas.gz

; Run the solver for this many steps:
/solve/iterate 1000

; Write both case and data files (FFF-out.cas.gz & FFF-out.dat.gz):
/file/write-case-data  FFF-out.cas.gz

; Write simulation report to file (optional):
/report/summary y "My_Simulation_Report.txt"

; Exit fluent:
exit
File : fluent_transient.jou

; ====================================================
; lines beginning with a semicolon are comments

; Read only the input case file:
/file/read-case         "FFF-transient-inp.cas.gz"
; In case of a continuation, you need to read both "*.cas" and "*.dat" files:
; /file/read-case-data  "FFF-transient-inp.cas.gz"

; ##### settings for Transient simulation :  ######
; # Set the magnitude of the (physical) time step (delta-t)
/solve/set/time-step   0.0001

; # Set the maximum number of iterations per time step:
/solve/set/max-iterations-per-time-step   20

; # Set the number of iterations for which convergence monitors are reported:
/solve/set/reporting-interval   1

; ##### End of settings for Transient simulation. ######

; Initialize using the hybrid initialization method:
/solve/initialize/hyb-initialization

; Perform unsteady iterations for a specified number of time steps:
/solve/dual-time-iterate   1000

; write the output (both "FFF-transient-out.cas.gz" and "FFF-transient-out.dat.gz"):
/file/write-case-data    "FFF-transient-out.cas.gz"

; Write simulation report to file (optional):
/report/summary y "Report_Transient_Simulation.txt"

; Exit fluent:
exit

Fluent journal files can include essentially any command from Fluent's text user interface (TUI); such commands can be used to change simulation parameters like temperature, pressure, and flow speed. This lets you run a series of simulations under different conditions from a single case file, changing only the parameters in the journal file. Refer to the Fluent User's Guide for more information and a full list of available commands.
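One way to exploit this is to stamp out a family of journal files from a template with the shell. In this sketch, the placeholder name VELOCITY and the idea of an inlet-velocity sweep are hypothetical; the TUI command that would actually set the velocity is case-specific and is only indicated by a comment:

```shell
# Sketch: generate one journal per parameter value from a template.
# VELOCITY is a made-up placeholder; the TUI command that sets the
# inlet velocity depends on your case setup and is not shown.
cat > template.jou <<'EOF'
/file/read-case FFF-1.cas.gz
; ... set the inlet velocity to VELOCITY m/s with the appropriate TUI command ...
/solve/iterate 500
/file/write-data FFF-out-VELOCITY.dat.gz
exit
EOF

for v in 5 10 20; do
    sed "s/VELOCITY/${v}/g" template.jou > "fluent_v${v}.jou"
done
```

Each generated journal can then be submitted as a separate batch job using the jobscript above.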


File :

#!/bin/bash
#SBATCH --time=00-06:00       # Time limit dd-hh:mm
#SBATCH --nodes=2             # Number of compute nodes
#SBATCH --cpus-per-task=32    # Number of cores per node
#SBATCH --ntasks-per-node=1   # Do not change
#SBATCH --mem=0               # All memory on full nodes
module load ansys/2019R3

# Build the host list in the format CFX expects for -par-dist:
nodes=$(slurm_hl2hl.py --format ANSYS-CFX)
cfx5solve -def YOURFILE.def -start-method "Intel MPI Distributed Parallel" -par-dist $nodes  <other options>

Note that you may see the following error in your output file: /etc/tmi.conf: No such file or directory. It does not seem to affect the computation.

Site Specific Usage[edit]

Sharcnet License[edit]

The SHARCNET ANSYS CFD license consists of 25 aa_r_cfd seats and 512 aa_r_hpc cores. It can be used by any Compute Canada user on any Compute Canada system for the purpose of publishable academic research. Individual users are limited to running a maximum of 3 simultaneous jobs (3 aa_r_cfd) and consuming up to 128 cores (128 aa_r_hpc), if available. Tokens are served on a first-come, first-served basis. Due to the limited license size and number of users, a job may fail to start if insufficient license resources are available at runtime (especially during the day); in that case the job will need to be resubmitted, or restarted later if it is being used interactively. If guaranteed token access is required, open a ticket and request a quote for the quantity needed; prices will be at cost plus applicable taxes.

License Server File[edit]

To use the SHARCNET ANSYS license, configure your ansys.lic file as follows (unless you are running on a SHARCNET system such as graham or gra-vdi):

[gra-login1:~/.licenses] cat ansys.lic
setenv("ANSYSLI_SERVERS", "")

Check License Server[edit]

It is possible to query the license server to gain insight into its current utilization and license availability. While the steps below are shown in the context of checking the SHARCNET license server, they can in principle be used to check any server specified in your ansys.lic file:

module load ansys

1) To check the number of license seats in use by all users (out of the 25 total):

lmutil lmstat -c $ANSYSLMD_LICENSE_FILE -a | grep "Users of aa_r_cfd"

2) To check the number of cores in use by all users (out of the 512 total):

lmutil lmstat -c $ANSYSLMD_LICENSE_FILE -a | grep "Users of aa_r_hpc"

3) To check the number of jobs currently running under your username:

lmutil lmstat -c $ANSYSLMD_LICENSE_FILE -a | grep ", s" | grep -v licenses | grep $USER | wc -l
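The raw lmstat output can also be reduced to a single number with a small pipeline. The sample line below is canned for illustration (lmutil itself requires access to the license server), but it follows the usual FlexLM lmstat feature-line format:

```shell
# Sketch: extract the in-use token count from a typical FlexLM lmstat
# feature line. The sample text is canned; in practice it would come
# from "lmutil lmstat -c $ANSYSLMD_LICENSE_FILE -a".
sample='Users of aa_r_hpc:  (Total of 512 licenses issued;  Total of 64 licenses in use)'
in_use=$(printf '%s\n' "$sample" | sed -n 's/.*Total of \([0-9][0-9]*\) licenses in use.*/\1/p')
echo "$in_use"
```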

Remote Visualization[edit]

First install the TigerVNC client on your desktop as described in VNC.
Once connected, start ANSYS as follows; the SHARCNET license server will be used by default:

  1. Connect with TigerVNC
  2. module load SnEnv
  3. module load ansys
  4. fluent|cfx|runwb2|icemcfd|apdl
  5. Press y and hit enter to accept the two license conditions
  6. When asked whether to use the CMC license server, input n and hit enter

Enable Additive[edit]

To enable ANSYS additive manufacturing (AM) in your project for use on Compute Canada systems do the following:

Start Workbench

  1. Connect with TigerVNC
  2. module load SnEnv ansys/2019R3
  3. export ANSYSLMD_LICENSE_FILE=FlexlmPortNumber@YourLicenceServerIpAddress
  4. export ANSYSLI_SERVERS=LicensingInterconnectPortNumber@YourLicenceServerIpAddress
  5. cd to the directory where your test.wbpj file is located and start the workbench gui: runwb2
  6. Press y and hit enter to accept the two license conditions
  7. When asked whether to use the CMC license server, input n and hit enter

Install Extension

  1. click Extensions -> Install Extension
  2. specify the following /paths/to/filename then click Open:
    • /cvmfs/

Load Extension

  1. click Extensions -> Manage Extensions and tick Additive Wizard then click Close

Run Additive[edit]

On gra-vdi[edit]

AM can be run on gra-vdi for up to 24 hours with 8 cores using either 1) the Workbench or 2) the command line approach, as follows:

1) Workbench

  • start workbench as explained in the Enable Additive section above
  • click File -> Open and select test.wbpj then click Open
  • click View -> reset workspace if you get a grey screen
  • start Mechanical, clear the Solution(s), un-tick Distributed, specify 8 cores
  • under Solve -> My Computer -> Advanced Properties -> Additional Command Line Arguments enter: -mpi ibmmpi
  • click File -> Save Project
  • click Solve

2) Command Line

Simulation and directory preparation:

  • open the simulation in Workbench, clear the solution(s), save, exit
  • remove any stale lock files: rm -f *_files/.lock
  • kill any undead processes from previous runs: pkill -e -u $USER -f "ansys"
  • backup the project directory: (a=$PWD; cd ..; cp -a $a $a-bkup1)

Run simulation (with gra-vdi local ansys module):

  • module reset; module load SnEnv
  • export ANSYSLMD_LICENSE_FILE=FlexlmPortNumber@YourLicenceServerIpAddress
  • export ANSYSLI_SERVERS=LicensingInterconnectPortNumber@YourLicenceServerIpAddress
  • export PATH=/opt/sharcnet/ansys/2019R3/v195/Framework/bin/Linux64:$PATH
  • runwb2 -B -F work_bench.wbpj -E "Update();Save(Overwrite=True)"

Run the simulation (with Compute Canada ansys module):

  • module reset; module load CcEnv StdEnv
  • export ANSYSLMD_LICENSE_FILE=FlexlmPortNumber@YourLicenceServerIpAddress
  • export ANSYSLI_SERVERS=LicensingInterconnectPortNumber@YourLicenceServerIpAddress
  • export PATH=/cvmfs/$PATH
  • runwb2 -B -F work_bench.wbpj -E "Update();Save(Overwrite=True)"

Check core utilization:

  • open another terminal and run top -u $USER -H to verify 8 cores running ~100%
