ANSYS


Introduction

ANSYS is a software suite for engineering simulation and 3-D design. It includes packages such as ANSYS Fluent and ANSYS CFX.

Licensing

Compute Canada is a hosting provider for ANSYS. This means that we have the ANSYS software installed on our clusters, but we do not provide a generic license accessible to everyone. However, many institutions, faculties, and departments already have licenses that can be used on our clusters. Beyond the legal aspects of licensing, there are technical requirements: the license server on your side must be reachable from our compute nodes, which requires our technical team to coordinate with the technical people managing your license server. In some cases this has already been done; you should then be able to load the ANSYS modules, and the software should find its license automatically. If this is not the case, please contact our Technical support so that we can arrange this for you.

Available modules are: fluent/16.1, ansys/16.2.3, ansys/17.2, ansys/18.1, ansys/18.2, ansys/19.1, ansys/19.2, ansys/2019R2, ansys/2019R3.

Configuring your own license file

Our module for ANSYS is designed to look for license information in a few places. One of those places is your home folder. If you have your own license server, you can store the information needed to access it in a file with the following format:


File : ansys.lic

setenv("ANSYSLMD_LICENSE_FILE", "<port>@<hostname>")
setenv("ANSYSLI_SERVERS", "<port>@<hostname>")


Put this file in the folder $HOME/.licenses/. Before an ANSYS license server can be reached from any Compute Canada system, firewall configuration changes will likely need to be made; please contact our Technical support to arrange this. Several ANSYS license servers have already been configured for use, such as the free SHARCNET CFD license and the non-free CMC license; these may be specified using the settings shown in the following table:

Server    Cluster(s)      ANSYSLMD_LICENSE_FILE      ANSYSLI_SERVERS
CMC       cedar           6624@199.241.162.97        2325@199.241.162.97
CMC       beluga          6624@132.219.136.89        2325@132.219.136.89
CMC       graham          6624@206.12.126.25         2325@206.12.126.25
SHARCNET  beluga/graham   1055@license3.sharcnet.ca  2325@license3.sharcnet.ca
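As a concrete sketch, the SHARCNET settings from the table above can be written into $HOME/.licenses/ansys.lic from the command line; substitute the server entry appropriate for your cluster and license:

```shell
# Create the license pointer file that the ANSYS modules look for.
# The values below are the SHARCNET server from the table; replace them
# with your own license server's port@hostname if you have one.
mkdir -p "$HOME/.licenses"
cat > "$HOME/.licenses/ansys.lic" << 'EOF'
setenv("ANSYSLMD_LICENSE_FILE", "1055@license3.sharcnet.ca")
setenv("ANSYSLI_SERVERS", "2325@license3.sharcnet.ca")
EOF
```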

In some situations you may also need to obtain an XML file from the institution which operates the license server, to ensure that ANSYS on the Compute Canada clusters gives priority to the right kind of license. For example, to choose a research license instead of a teaching license, a file named license.preferences.xml would be placed into the directory $HOME/.ansys/v195/licensing/, assuming you are using the ansys/2019R3 module.
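A minimal sketch of placing such a file, assuming the ansys/2019R3 module (whose internal version directory is v195) and that license.preferences.xml has been obtained from your institution:

```shell
# The version directory matches the module release: ansys/2019R3 uses v195.
mkdir -p "$HOME/.ansys/v195/licensing"
# Copy the preference file supplied by your license administrator, if present.
if [ -f license.preferences.xml ]; then
    cp license.preferences.xml "$HOME/.ansys/v195/licensing/"
fi
```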

Running ANSYS software in parallel on Compute Canada servers

The ANSYS software suite comes with multiple implementations of MPI to support parallel computation. Unfortunately, none of them supports our Slurm scheduler. For this reason, we need special instructions for each ANSYS package on how to start a parallel job. In the sections below, we give examples of submission scripts for some of the packages. If one is not covered and you want us to investigate and help you start it, please contact our Technical support.

ANSYS Fluent

Typically you would use the following procedure for running Fluent on one of the Compute Canada clusters:

  • Prepare your Fluent job using Fluent from the "ANSYS Workbench" on your desktop machine, up to the point where you would run the calculation.
  • Export the "case" file via File > Export > Case..., or find the folder where Fluent saves your project's files. The "case" file will often have a name like FFF-1.cas.gz.
  • If you already have data from a previous calculation which you want to continue, export a "data" file as well (File > Export > Data...) or find it in the same project folder (FFF-1.dat.gz).
  • Transfer the "case" file (and, if needed, the "data" file) to a directory on the project or scratch filesystem on the cluster. When exporting, you can save the file(s) under a more instructive name than FFF-1.* or rename them when uploading.
  • Now you need to create a "journal" file. Its purpose is to load the case file (and optionally the data file), run the solver, and finally write the results. See the examples below and remember to adjust the filenames and the desired number of iterations.
  • Adapt the Fluent job script below to your needs.
  • After running the job, you can download the "data" file and import it back into Fluent with File > Import > Data....


File : fluent_job.sh

#!/bin/bash
#SBATCH --time=00-06:00       # Time limit dd-hh:mm
#SBATCH --nodes=2             # Number of compute nodes
#SBATCH --cpus-per-task=32    # Number of cores per node
#SBATCH --ntasks-per-node=1   # Do not change
#SBATCH --mem=0               # All memory on full nodes
module load ansys/2019R2

slurm_hl2hl.py --format ANSYS-FLUENT > machinefile
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

fluent 3d -t $NCORE -cnf=machinefile -mpi=intel -affinity=0 -g -i fluent_3.jou
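With the directives in this script (2 nodes, 1 task per node, 32 CPUs per task), the NCORE arithmetic works out as follows; this standalone sketch sets by hand the variables Slurm would export inside a job:

```shell
# Outside a job, emulate the environment Slurm would provide for this script:
SLURM_NTASKS=2           # 2 nodes x 1 task per node
SLURM_CPUS_PER_TASK=32   # cores requested per task
# Total core count passed to fluent via -t:
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))
echo "$NCORE"   # 64 cores in total
```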
File : fluent_3.jou

; EXAMPLE FLUENT JOURNAL FILE
; ===========================
; lines beginning with a semicolon are comments

; Read only the case file:
/file/read-case  FFF-1.cas.gz

; Run the solver for this many steps:
/solve/iterate 1000

; write the output data-file:
/file/write-data  FFF-out.dat.gz
 
; Write simulation report to file (optional):
/report/summary y "My_Simulation_Report.txt"

; Exit Fluent:
exit
File : fluent_3.jou

; EXAMPLE FLUENT JOURNAL FILE
; ===========================
; lines beginning with a semicolon are comments

; Read both case and data files (FFF-1.cas.gz & FFF-1.dat.gz):
/file/read-case-data  FFF-1.cas.gz

; Run the solver for this many steps:
/solve/iterate 1000

; Write both case and data files (FFF-out.cas.gz & FFF-out.dat.gz):
/file/write-case-data  FFF-out.cas.gz

; Write simulation report to file (optional):
/report/summary y "My_Simulation_Report.txt"

; Exit Fluent:
exit
File : fluent_transient.jou

; EXAMPLE FLUENT JOURNAL FILE FOR TRANSIENT SIMULATION
; ====================================================
; lines beginning with a semicolon are comments

; Read only the input case file:
/file/read-case         "FFF-transient-inp.cas.gz"
; In case of a continuation, you need to read both "*.cas" and "*.dat" files:
; /file/read-case-data  "FFF-transient-inp.cas.gz"

; ##### settings for Transient simulation :  ######
; # Set the magnitude of the (physical) time step (delta-t)
/solve/set/time-step   0.0001

; # Set the number of time steps for a transient simulation:
/solve/set/max-iterations-per-time-step   20

; # Set the number of iterations for which convergence monitors are reported:
/solve/set/reporting-interval   1

; ##### End of settings for Transient simulation. ######

; Initialize using the hybrid initialization method:
/solve/initialize/hyb-initialization

; Perform unsteady iterations for a specified number of time steps:
/solve/dual-time-iterate   1000

; write the output (both "FFF-transient-out.cas.gz" and "FFF-transient-out.dat.gz"):
/file/write-case-data    "FFF-transient-out.cas.gz"

; Write simulation report to file (optional):
/report/summary y "Report_Transient_Simulation.txt"

; Exit Fluent:
exit


Fluent journal files can include essentially any command from Fluent's Text User Interface (TUI); such commands can be used to change simulation parameters like temperature, pressure and flow speed. This allows you to run a series of simulations under different conditions with a single case file, by changing only the parameters in the journal file. Refer to the Fluent User's Guide for more information and a list of all commands that can be used.
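As an illustrative sketch of such a parameter series (filenames and values are hypothetical), a shell loop can generate one journal per condition from TUI commands like those used above, here varying the transient time step:

```shell
# Generate one journal file per time step to run a parameter series
# from a single case file. Filenames and step values are illustrative.
for dt in 0.0001 0.0002 0.0005; do
    cat > "transient_dt${dt}.jou" << EOF
; Auto-generated journal: time step ${dt}
/file/read-case FFF-transient-inp.cas.gz
/solve/set/time-step ${dt}
/solve/initialize/hyb-initialization
/solve/dual-time-iterate 1000
/file/write-case-data FFF-out-dt${dt}.cas.gz
exit
EOF
done
```

Each generated journal can then be passed to fluent with -i in its own job submission.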

ANSYS CFX

File : mysub.sh

#!/bin/bash
#SBATCH --time=00-06:00       # Time limit dd-hh:mm
#SBATCH --nodes=2             # Number of compute nodes
#SBATCH --cpus-per-task=32    # Number of cores per node
#SBATCH --ntasks-per-node=1   # Do not change
#SBATCH --mem=0               # All memory on full nodes
module load ansys/2019R2

nodes=$(slurm_hl2hl.py --format ANSYS-CFX)
cfx5solve -def YOURFILE.def -start-method "Intel MPI Distributed Parallel" -par-dist $nodes  <other options>


Note that you may see the following error in your output file: /etc/tmi.conf: No such file or directory. It does not appear to affect the computation.