Alliance Doc - User contributions: Tyson
Visualization (2023-02-27, Tyson: Remove old SHARCNET links)
<hr />
<div><languages /><br />
[[Category:Software]]<br />
<translate><br />
= Popular visualization packages = <!--T:1--><br />
<br />
=== ParaView === <!--T:2--><br />
[http://www.paraview.org ParaView] is a general-purpose 3D scientific visualization tool. It is open-source and compiles on all popular platforms (Linux, Windows, Mac), understands a large number of input file formats, provides multiple rendering modes, supports Python scripting, and can scale up to tens of thousands of processors for rendering of very large datasets.<br />
<br />
<!--T:148--><br />
* [[ParaView|Using ParaView on Alliance systems]]<br />
* [http://www.paraview.org/documentation ParaView official documentation]<br />
* [http://www.paraview.org/gallery ParaView gallery]<br />
* [http://www.paraview.org/Wiki/ParaView ParaView wiki]<br />
* [http://www.paraview.org/Wiki/ParaView/Python_Scripting ParaView Python scripting]<br />
<br />
=== VisIt === <!--T:3--><br />
Similar to ParaView, [https://wci.llnl.gov/simulation/computer-codes/visit/ VisIt] is an open-source, general-purpose 3D scientific data analysis and visualization tool that scales from interactive analysis on laptops to very large HPC projects on tens of thousands of processors.<br />
<br />
<!--T:149--><br />
* [[VisIt|Using VisIt on Alliance systems]]<br />
* [https://visit-dav.github.io/visit-website VisIt website]<br />
* [https://visit-dav.github.io/visit-website/examples VisIt gallery]<br />
* [http://www.visitusers.org VisIt user community wiki]<br />
* [http://www.visitusers.org/index.php?title=VisIt_Tutorial VisIt tutorials] along with [http://www.visitusers.org/index.php?title=Tutorial_Data sample datasets]<br />
<br />
=== VMD === <!--T:4--><br />
[http://www.ks.uiuc.edu/Research/vmd VMD] is an open-source molecular visualization program for displaying, animating, and analyzing large biomolecular systems in 3D. It supports scripting in Tcl and Python and runs on a variety of platforms (MacOS X, Linux, Windows). It reads many molecular data formats using an extensible plugin system and supports a number of different molecular representations.<br />
<br />
<!--T:150--><br />
* [[VMD|Using VMD on Alliance systems]]<br />
* [http://www.ks.uiuc.edu/Research/vmd/current/ug VMD User's Guide]<br />
<br />
=== VTK === <!--T:5--><br />
The Visualization Toolkit (VTK) is an open-source package for 3D computer graphics, image processing, and visualization. The toolkit includes a C++ class library as well as several interfaces for interpreted languages such as Tcl/Tk, Java, and Python. VTK was the basis for many excellent visualization packages including ParaView and VisIt.<br />
<br />
<!--T:151--><br />
* [[VTK|Using VTK on Alliance systems]]<br />
* [https://itk.org/Wiki/VTK/Tutorials VTK tutorials]<br />
<br />
=== YT === <!--T:152--><br />
YT is a Python library for analyzing and visualizing volumetric, multi-resolution data. Initially developed for astrophysical simulation data, it can handle uniform and multi-resolution data on Cartesian, curvilinear, and unstructured meshes, as well as particle data.<br />
<br />
<!--T:153--><br />
* [[yt|Using YT on Alliance systems]]<br />
<br />
= Visualization on Alliance systems = <!--T:6--><br />
<br />
<!--T:154--><br />
There are many options for remote visualization on our systems. In general, whenever possible, for interactive rendering we recommend '''client-server visualization''' on interactive or high-priority nodes, and for non-interactive visualization we recommend '''off-screen batch jobs''' on regular compute nodes.<br />
<br />
<!--T:155--><br />
Other, ''less efficient'' options are X11-forwarding and VNC. For some packages these are the only available remote GUI options.<br />
<br />
=== Client-server interactive visualization === <!--T:156--><br />
<br />
<!--T:157--><br />
In the client-server mode, supported by both ParaView and VisIt, all data will be processed remotely on the cluster, using either CPU or GPU rendering, while you interact with your visualization through a familiar GUI client on your local computer. You can find the details of setting up client-server visualization in [[ParaView]] and [[VisIt]] pages.<br />
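The general shape of this workflow, sketched here for ParaView with placeholder names (the account <code>def-someprof</code>, the node name <code>nodeXYZ</code>, and the cluster address are illustrative; exact steps, module versions, and ports are documented on the [[ParaView]] page):<br />

```shell
## On the cluster: request an interactive allocation, then start the server.
salloc --time=1:00:00 --ntasks=4 --mem-per-cpu=3600M --account=def-someprof
module load paraview
mpiexec pvserver --force-offscreen-rendering
# pvserver reports the node and the port it listens on (11111 by default)

## On your local computer: tunnel that port through the login node ...
ssh -L 11111:nodeXYZ:11111 username@cluster.example.ca
## ... then point the ParaView client at localhost:11111
```
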
<br />
=== Remote windows with X11-forwarding === <!--T:158--><br />
<br />
<!--T:159--><br />
In general, X11-forwarding should be avoided for any heavy graphics, as it requires many round trips and is much slower than VNC (below). However, in some cases you can connect via ssh with X11. Below we show how you would do this on our clusters. We assume you have an X-server installed on your local computer.<br />
<br />
<!--T:160--><br />
<tabs><br />
<tab name="Cedar, Graham and Béluga"><br />
<br />
<!--T:161--><br />
Connect to the cluster with the <code>-X/-Y</code> flag for X11-forwarding. You can start your graphical application on the login node (small visualizations)<br />
<br />
<!--T:162--><br />
module load vmd<br />
vmd<br />
<br />
<!--T:163--><br />
or you can request interactive resources on a compute node (large visualizations)<br />
<br />
<!--T:164--><br />
salloc --time=1:00:0 --ntasks=1 --mem=3500 --account=def-someprof --x11<br />
<br />
<!--T:165--><br />
: and, once the job is running, start your graphical application inside the job<br />
<br />
<!--T:166--><br />
module load vmd<br />
vmd<br />
<br />
<!--T:167--><br />
</tab><br />
<tab name="Niagara"><br />
<br />
<!--T:168--><br />
Since runtime is limited on the login nodes, you might want to request a testing job in order to have more time for exploring and visualizing your data. On the plus side, you will have access to 40 cores on each of the requested nodes. To perform an interactive visualization session this way, please follow these steps:<br />
<br />
<!--T:169--><br />
<ol><br />
<li> ssh into niagara.scinet.utoronto.ca with the <code>-X/-Y</code> flag for X11-forwarding.</li><br />
<li> Request an interactive job, i.e.</li><br />
debugjob<br />
This will connect you to a compute node, say <code>niaXYZW</code>.<br />
<li> Run your visualization program, e.g. VMD: </li><br />
<br />
<!--T:170--><br />
module load vmd<br />
vmd<br />
<br />
<!--T:171--><br />
<li> Exit the debug session.</li><br />
</ol><br />
<br />
<!--T:172--><br />
</tab><br />
</tabs><br />
<br />
=== Remote off-screen windows via Xvfb === <!--T:176--> <br />
<br />
<!--T:177--><br />
Some applications insist on displaying graphical output, but you don't actually need to see it since the results are saved to a file.<br />
In that case the job can run as a regular batch job, using either the CPU or the GPU for 3D rendering. To enable this, run<br />
the application with the X virtual framebuffer (Xvfb) in a job script as follows:<br />
<br />
<!--T:178--><br />
xvfb-run <name-of-application><br />
<br />
<!--T:179--><br />
if using the CPU for rendering or<br />
<br />
<!--T:180--><br />
xvfb-run vglrun -d egl <name-of-application><br />
<br />
<!--T:181--><br />
if using the GPU for rendering, in which case you need to reserve one GPU with Slurm; see [[Using_GPUs_with_Slurm|Using GPUs with Slurm]].<br />
Note that, depending on the workload, the GPU may not necessarily be faster than the CPU, so it is important to benchmark before<br />
committing to the more expensive GPU.<br />
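Putting this together, a minimal batch script for CPU rendering might look like the following sketch. The account name and the application are placeholders; adjust the resources and modules to your case.<br />

```shell
#!/bin/bash
#SBATCH --time=0:30:0
#SBATCH --ntasks=1
#SBATCH --mem=3500M
#SBATCH --account=def-someprof    # replace with your account

# Load whatever module provides your application, then render off-screen;
# the results are written to file, no display is needed.
module load <name-of-module>
xvfb-run <name-of-application>
```
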
<br />
=== Start a remote desktop via VNC === <!--T:92--><br />
<br />
<!--T:93--><br />
Frequently, it may be useful to start up graphical user interfaces for various software packages like Matlab. Doing so over X11-forwarding can result in a very slow connection to the server. Instead, we recommend using VNC to start and connect to a remote desktop. For more information, please see [[VNC|the article on VNC]].<br />
<br />
= Visualization training = <!--T:7--><br />
<br />
<!--T:138--><br />
Please [mailto:support@tech.alliancecan.ca let us know] if you would like to see a visualization workshop at your institution.<br />
<br />
=== Full- or half-day workshops === <!--T:9--><br />
* [https://docs.alliancecan.ca/mediawiki/images/5/5d/Visit201606.pdf VisIt workshop slides] from HPCS'2016 in Edmonton by <i>Marcelo Ponce</i> and <i>Alex Razoumov</i><br />
* [https://docs.computecanada.ca/mediawiki/images/6/6c/Paraview201707.pdf ParaView workshop slides] from July 2017 by <i>Alex Razoumov</i><br />
* [https://support.scinet.utoronto.ca/~mponce/courses/ss2016/ss2016_visualization-I.pdf Gnuplot, xmgrace, remote visualization tools (X-forwarding and VNC), python's matplotlib] slides by <i>Marcelo Ponce</i> (SciNet/UofT) from Ontario HPC Summer School 2016<br />
* [https://support.scinet.utoronto.ca/~mponce/courses/ss2016/ss2016_visualization-II.pdf Brief overview of ParaView & VisIt] slides by <i>Marcelo Ponce</i> (SciNet/UofT) from Ontario HPC Summer School 2016<br />
<br />
=== Webinars and other short presentations === <!--T:10--><br />
<br />
<!--T:173--><br />
[https://westgrid.github.io/trainingMaterials/tools/visualization/ WestGrid's visualization training materials page] has embedded video recordings and slides from the following webinars:<br />
<br />
<!--T:174--><br />
* YT series: “Using YT for analysis and visualization of volumetric data” (Part 1) and “Working with data objects in YT” (Part 2)<br />
* “Scientific visualization with Plotly”<br />
* “Novel Visualization Techniques from the 2017 Visualize This Challenge”<br />
* “Data Visualization on Compute Canada’s Supercomputers” contains recipes and demos of running client-server ParaView and batch ParaView scripts on both CPU and GPU partitions of Cedar and Graham<br />
* “Using ParaViewWeb for 3D Visualization and Data Analysis in a Web Browser”<br />
* “Scripting and other advanced topics in VisIt visualization”<br />
* “CPU-based rendering with OSPRay”<br />
* “3D graphs with NetworkX, VTK, and ParaView”<br />
* “Graph visualization with Gephi”<br />
<br />
<!--T:175--><br />
Other visualization presentations:<br />
<br />
<!--T:16--><br />
* [https://oldwiki.scinet.utoronto.ca/wiki/images/5/51/Remoteviz.pdf Remote Graphics on SciNet's GPC system (Client-Server and VNC)] slides by <i>Ramses van Zon</i> (SciNet/UofT) from October 2015 SciNet User Group Meeting<br />
* [https://support.scinet.utoronto.ca/education/go.php/242/file_storage/index.php/download/1/files%5B%5D/6399/ VisIt Basics], slides by <i>Marcelo Ponce</i> (SciNet/UofT) from February 2016 SciNet User Group Meeting<br />
* [https://oldwiki.scinet.utoronto.ca/wiki/images/e/ea/8_ComplexNetworks.pdf Intro to Complex Networks Visualization, with Python], slides by <i>Marcelo Ponce</i> (SciNet/UofT)<br />
* [https://oldwiki.scinet.utoronto.ca/wiki/images/9/9c/Tkinter.pdf Introduction to GUI Programming with Tkinter], from Sept.2014 by <i>Erik Spence</i> (SciNet/UofT)<br />
<br />
= Tips and tricks = <!--T:11--><br />
<br />
<!--T:19--><br />
This section describes visualization workflows not included in the workshop/webinar slides above. It is meant to be user-editable, so please feel free to add your cool visualization scripts and workflows here so that everyone can benefit from them.<br />
<br />
= Regional visualization pages = <!--T:12--><br />
<br />
== [http://www.scinet.utoronto.ca SciNet HPC at the University of Toronto] == <!--T:13--><br />
* [https://docs.scinet.utoronto.ca/index.php/Visualization Visualization in Niagara]<br />
* [https://oldwiki.scinet.utoronto.ca/wiki/index.php/Software_and_Libraries#anchor_viz visualization software]<br />
* [https://oldwiki.scinet.utoronto.ca/wiki/index.php/VNC VNC]<br />
* [https://oldwiki.scinet.utoronto.ca/wiki/index.php/Visualization_Nodes visualization nodes]<br />
* [https://oldwiki.scinet.utoronto.ca/wiki/index.php/Knowledge_Base:_Tutorials_and_Manuals#Visualization further resources and viz-tech talks]<br />
* [https://oldwiki.scinet.utoronto.ca/wiki/index.php/Using_Paraview using ParaView]<br />
<br />
= How to get visualization help = <!--T:15--><br />
Please contact [[Technical support]].<br />
</translate></div>Julia (2023-02-23, Tyson: Markup path too)
<hr />
<div><languages /><br />
<br />
<translate><br />
<!--T:1--><br />
[[Category:Software]]<br />
[[Category:Pages with video links]]<br />
<br />
<!--T:2--><br />
[https://julialang.org Julia] is a programming language that was designed for performance, ease of use and portability. It is available as a [[Utiliser_des_modules/en | module]] on Compute Canada clusters.<br />
<br />
= Compiling packages = <!--T:3--><br />
<br />
<!--T:4--><br />
The first time you add a package to a Julia project (using <code>Pkg.add</code> or the package mode), the package will be downloaded, installed in <code>~/.julia</code>, and pre-compiled. The same package can be added to different projects, in which case the data in <code>~/.julia</code> will be reused. Different versions of the same package can be added to different projects; the required package versions will coexist in <code>~/.julia</code>. (Compared to Python, Julia projects replace “virtual environments” while avoiding code duplication.)<br />
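As a brief sketch of this project-based workflow (the project directory name <code>myproject</code> and the package <code>Example</code> are arbitrary choices for illustration):<br />

```julia
using Pkg
Pkg.activate("myproject")   # creates or re-uses the project at ./myproject
Pkg.add("Example")          # installed into ~/.julia, recorded in myproject/Project.toml
Pkg.status()                # lists the packages of the currently active project
```
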
<br />
<!--T:38--><br />
From Julia 1.6 onwards, Julia packages include their binary dependencies (such as libraries). There is therefore no need to load any software modules, and we recommend against doing so.<br />
<br />
<!--T:39--><br />
With Julia 1.5 and earlier, you may run into problems if a package depends on system-provided binaries. For instance, [https://github.com/JuliaIO/JLD.jl JLD] depends on a system-provided HDF5 library. On a personal computer, Julia attempts to install such a dependency using [https://en.wikipedia.org/wiki/Yum_(software) yum] or [https://en.wikipedia.org/wiki/APT_(Debian) apt] with [https://en.wikipedia.org/wiki/Sudo sudo]. This will not work on a Compute Canada cluster; instead, some extra information must be provided to allow Julia's package manager (Pkg) to find the HDF5 library.<br />
<br />
<!--T:5--><br />
$ module load gcc/7.3.0 hdf5 julia/1.4.1<br />
$ julia<br />
julia> using Libdl<br />
julia> push!(Libdl.DL_LOAD_PATH, ENV["HDF5_DIR"] * "/lib")<br />
julia> using Pkg<br />
julia> Pkg.add("JLD")<br />
julia> using JLD<br />
<br />
<!--T:6--><br />
If we were to omit the <code>Libdl.DL_LOAD_PATH</code> line from the above example, it would happen to work on Graham because Graham has HDF5 installed system-wide. It would fail on Cedar because Cedar does not. The best practice on ''any'' Compute Canada system, though, is that shown above: Load the appropriate [[Utiliser_des_modules/en | module]] first, and use the environment variable defined by the module (<code>HDF5_DIR</code> in this example) to extend <code>Libdl.DL_LOAD_PATH</code>. This will work uniformly on all systems.<br />
<br />
<!--T:49--><br />
Note that JLD is superseded by [https://juliapackages.com/p/jld2 JLD2], which no longer relies on a system-installed HDF5 library, making it more portable.<br />
<br />
= Package files and storage quotas = <!--T:7--><br />
<br />
<!--T:50--><br />
Installing Julia packages in your home directory will create large numbers of files. For example, starting from an empty <code>~/.julia</code> directory (no packages installed), installing just the <code>Gadfly.jl</code> plotting package will result in around 96 MB of data spread over some 37,000 files (7% of the total number of files allowed by your home directory quota). If you install a large number of Julia packages, you may exceed your quota.<br />
<br />
<!--T:51--><br />
To avoid this issue, you can store your personal Julia “depot” (containing packages, registries, pre-compiled files, etc.) in a different location, such as your project space. For example, user <code>alice</code>, a member of the <code>def-bob</code> project, could add the following to her <code>~/.bashrc</code> file:<br />
<br />
<!--T:52--><br />
export JULIA_DEPOT_PATH="/project/def-bob/alice/julia:$JULIA_DEPOT_PATH"<br />
<br />
<!--T:53--><br />
This will use the <code>/project/def-bob/alice/julia</code> directory preferentially. Files in <code>~/.julia</code> will still be considered, and <code>~/.julia</code> will still be used for some files such as your command history. When moving your depot to a different location, it is better to remove your existing <code>~/.julia</code> depot first if you have one:<br />
<br />
<!--T:54--><br />
$ rm -rf $HOME/.julia<br />
<br />
<!--T:56--><br />
Alternatively, one can create a [[Singularity]] image with a chosen version of Julia and a selection of packages, with JULIA_DEPOT_PATH redirected inside the container. This does mean that you lose the advantage of the Compute Canada optimized Julia modules. However, the potentially very large set of small files is then bundled inside a single container file (.sif), potentially improving I/O performance. Reproducibility is also improved, since the container will run anywhere as-is. Containers are also useful if you want to test Julia nightly builds without altering your local Julia installation, or when you need to bundle your own specific dependencies, because building the container gives you complete control over its contents.<br />
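A minimal definition file along these lines might look like the following sketch; the base image tag, the depot location <code>/opt/julia-depot</code>, and the package are illustrative assumptions, not a tested recipe.<br />

```
Bootstrap: docker
From: julia:1.8

%environment
    # Redirect the Julia depot into the container
    export JULIA_DEPOT_PATH=/opt/julia-depot

%post
    export JULIA_DEPOT_PATH=/opt/julia-depot
    # Pre-install the packages you need at build time
    julia -e 'using Pkg; Pkg.add("Example")'
```
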
<br />
= Available versions = <!--T:9--> <br />
<br />
<!--T:10--><br />
We have removed earlier versions of Julia (< 1.0) because the old package manager created vast numbers of small files, which in turn caused performance issues on the parallel file systems. Please use Julia 1.4 or newer.<br />
<br />
<!--T:11--><br />
{{Command<br />
|module spider julia<br />
|result=<br />
--------------------------------------------------------<br />
julia: julia/1.4.1<br />
--------------------------------------------------------<br />
[...]<br />
You will need to load all module(s) on any one of the lines below before the "julia/1.4.1" module is available to load.<br />
<br />
<!--T:12--><br />
nixpkgs/16.09 gcc/7.3.0<br />
[...]<br />
}}<br />
{{Command<br />
|module load gcc/7.3.0 julia/1.4.1<br />
}}<br />
<br />
== Porting code from Julia 0.x to 1.x == <!--T:13--><br />
<br />
<!--T:14--><br />
In the summer of 2018 the Julia developers released version 1.0, in which they stabilized the language API and removed deprecated (outdated) functionality.<br />
To help updating Julia programs for version 1.0, the developers also released version 0.7.0. <br />
Julia 0.7.0 contains all the new functionality of 1.0 as well as the outdated functionalities from 0.x versions, which will give [https://en.wikipedia.org/wiki/Deprecation deprecation warnings] when used.<br />
Code that runs in Julia 0.7 without warnings should be compatible with Julia 1.0.<br />
<br />
= Using PyCall.jl to call Python from Julia = <!--T:40--><br />
<br />
<!--T:41--><br />
Julia can interface with Python code using PyCall.jl. When using PyCall.jl, set the <code>PYTHON</code> environment variable to the python executable in your virtual Python environment. On our clusters, we recommend using virtual Python environments as described in our [[Python#Creating_and_using_a_virtual_environment|Python documentation]]. After activating a virtual Python environment, you can use it in PyCall.jl:<br />
<br />
<!--T:42--><br />
$ source "$HOME/myenv/bin/activate"<br />
(myenv) $ julia<br />
[...]<br />
julia> using Pkg, PyCall<br />
julia> ENV["PYTHON"] = joinpath(ENV["VIRTUAL_ENV"], "bin", "python")<br />
julia> Pkg.build("PyCall")<br />
<br />
<!--T:43--><br />
We strongly advise against the default PyCall.jl behaviour, which is to use a Miniconda distribution inside your Julia environment. Anaconda and similar distributions [[Anaconda | are not suitable on our clusters]].<br />
<br />
<!--T:55--><br />
Note that if you do not create a virtual environment as shown above, PyCall will default to the operating system's Python installation, which is never what you want. It will then invoke Conda.jl, and will fail to recognize the correct path unless you rebuild with <code>ENV["PYTHON"]=""</code>. In addition, apart from incompatibilities with the software stack, the Miniconda installer creates a large number of files inside <code>JULIA_DEPOT_PATH</code>. If that is <code>~/.julia</code>, the default, you can run into performance and quota issues.<br />
<br />
<!--T:44--><br />
See the [https://github.com/JuliaPy/PyCall.jl PyCall.jl documentation] for details.<br />
<br />
= Running Julia with multiple processes on clusters = <!--T:17--><br />
<br />
<!--T:18--><br />
The following is an example of running a parallel Julia code that computes pi using 100 cores across multiple nodes on a cluster.<br />
<br />
<!--T:19--><br />
{{File<br />
|name=run_julia_pi.sh<br />
|lang="bash"<br />
|contents=<br />
#!/bin/bash<br />
#SBATCH --ntasks=100<br />
#SBATCH --cpus-per-task=1<br />
#SBATCH --mem-per-cpu=1024M<br />
#SBATCH --time=0-00:10<br />
<br />
<!--T:20--><br />
srun hostname -s > hostfile<br />
sleep 5<br />
julia --machine-file ./hostfile ./pi_p.jl 1000000000000<br />
}}<br />
<br />
<!--T:15--><br />
In this example, the command<br />
srun hostname -s > hostfile<br />
<br />
<!--T:21--><br />
generates a list of names of the nodes allocated and writes it to the text file hostfile. Then the command<br />
<br />
<!--T:16--><br />
julia --machine-file ./hostfile ./pi_p.jl 1000000000000<br />
<br />
<!--T:22--><br />
starts one main Julia process and 100 worker processes on the nodes specified in the hostfile and runs the program pi_p.jl in parallel.<br />
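The program <code>pi_p.jl</code> itself is not shown above. A hypothetical sketch of such a program, using the Distributed standard library and a Monte Carlo estimate (the actual <code>pi_p.jl</code> may use a different algorithm), could look like:<br />

```julia
using Distributed

# Total number of samples, passed on the command line (e.g. 1000000000000)
const n = parse(Int, ARGS[1])

# Each worker counts how many random points fall inside the unit quarter-circle
@everywhere function count_hits(samples)
    hits = 0
    for _ in 1:samples
        x, y = rand(), rand()
        hits += (x^2 + y^2 <= 1.0)
    end
    return hits
end

# Split the work evenly across all workers and reduce the partial counts
samples_per_worker = div(n, nworkers())
total = sum(pmap(count_hits, fill(samples_per_worker, nworkers())))
println("pi is approximately ", 4 * total / (samples_per_worker * nworkers()))
```
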
<br />
= Running Julia with MPI = <!--T:25--><br />
<br />
<!--T:26--><br />
You must make sure Julia's MPI is configured to use our MPI libraries. If you are using Julia MPI 0.19 or earlier, run the following:<br />
<br />
<!--T:27--><br />
module load StdEnv julia<br />
export JULIA_MPI_BINARY=system<br />
export JULIA_MPI_PATH=$EBROOTOPENMPI<br />
export JULIA_MPI_LIBRARY=$EBROOTOPENMPI/lib64/libmpi.so<br />
export JULIA_MPI_ABI=OpenMPI<br />
export JULIA_MPIEXEC=$EBROOTOPENMPI/bin/mpiexec<br />
<br />
<!--T:28--><br />
Then start Julia and inside it run:<br />
<br />
<!--T:29--><br />
import Pkg;<br />
Pkg.add("MPI")<br />
<br />
If you are using Julia MPI 0.20 or later, run the following. Note that this appends an <code>[MPIPreferences]</code> section to your <code>.julia/environments/vX.Y/LocalPreferences.toml</code> file; if such a section already exists, it should be removed manually first.<br />
<br />
module load julia<br />
<br />
mkdir -p .julia/environments/v${EBVERSIONJULIA%.*}<br />
<br />
cat >> .julia/environments/v${EBVERSIONJULIA%.*}/LocalPreferences.toml << EOF<br />
[MPIPreferences]<br />
_format = "1.0"<br />
abi = "OpenMPI"<br />
binary = "system"<br />
libmpi = "${EBROOTOPENMPI}/lib64/libmpi.so"<br />
mpiexec = "${EBROOTOPENMPI}/bin/mpiexec"<br />
EOF<br />
<br />
Then start Julia and inside it run:<br />
<br />
import Pkg<br />
Pkg.add("MPIPreferences")<br />
Pkg.add("MPI")<br />
<br />
<!--T:30--><br />
To use afterwards, run (with two processes in this example):<br />
<br />
<!--T:31--><br />
module load StdEnv julia<br />
mpirun -np 2 julia hello.jl<br />
<br />
<!--T:32--><br />
The hello.jl code here is:<br />
<br />
<!--T:33--><br />
using MPI<br />
MPI.Init()<br />
comm = MPI.COMM_WORLD<br />
print("Hello world, I am rank $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))\n")<br />
MPI.Barrier(comm)<br />
<br />
= Configuring Julia's threading behaviour = <!--T:35--><br />
You can restrict the number of threads Julia can use by setting <code>JULIA_NUM_THREADS=k</code>; for example, a single process in a job with 12 CPUs per task could use k=12.<br />
Setting the number of threads to the number of processors is a typical choice (although see [[Scalability]] for a discussion).<br />
In addition, one can 'pin' threads to cores by setting<br />
<code>JULIA_EXCLUSIVE</code> to anything non-zero. As per the [https://docs.julialang.org/en/v1/manual/environment-variables/#JULIA_EXCLUSIVE documentation], this takes control of thread scheduling away from the OS and pins threads to cores (sometimes referred to as 'green' threads with affinity). Depending on the computation the threads execute, this can improve performance when you have precise information on cache access patterns, or when the OS would otherwise use unwelcome scheduling patterns. Setting JULIA_EXCLUSIVE works only if your job has exclusive access to the compute nodes (all available CPU cores were allocated to your job). Since Slurm already pins processes and threads to CPU cores, asking Julia to re-pin threads may not lead to any performance improvement.<br />
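In a Slurm job script, a common pattern is to match the Julia thread count to the allocated CPUs; the account name and the script <code>my_threaded_code.jl</code> below are placeholders.<br />

```shell
#!/bin/bash
#SBATCH --cpus-per-task=12
#SBATCH --mem-per-cpu=1024M
#SBATCH --time=0-01:00
#SBATCH --account=def-someprof    # replace with your account

module load julia
# One Julia thread per CPU core allocated to the task
export JULIA_NUM_THREADS=$SLURM_CPUS_PER_TASK
julia my_threaded_code.jl
```
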
<br />
<!--T:36--><br />
Related is the variable [https://docs.julialang.org/en/v1/manual/environment-variables/#JULIA_THREAD_SLEEP_THRESHOLD JULIA_THREAD_SLEEP_THRESHOLD], which controls the number of nanoseconds after which a spinning thread is put to sleep. The string value "infinite" indicates that spinning threads never sleep. Lowering this threshold can be useful if many threads contend frequently for a shared resource, where it is preferable to schedule out spinning threads more quickly; under heavy contention, spinning only increases CPU load. Conversely, when a resource is only very infrequently contended, lower latency can result from preventing threads from sleeping, that is, setting the threshold to infinity.<br />
<br />
<!--T:37--><br />
It goes without saying that these values should be tuned only after accurately profiling any contention issues. Given the high pace at which Julia, and especially its threading subsystem Base.Threads, evolves, always consult the documentation to ensure that changing the default configuration has only the expected effect.<br />
<br />
= Videos = <!--T:23--><br />
<br />
<!--T:24--><br />
A series of online seminars produced by SHARCNET:<br />
* [https://youtu.be/gKxs0L2Ac4I Julia: A first perspective] (47 minutes)<br />
* [https://youtu.be/-QuqSOUbY6Q Julia: A second perspective] (57 minutes)<br />
* [https://youtu.be/HWLV6oTmfO8 Julia: A third perspective - parallel computing explained] (65 minutes)<br />
* Julia: Parallel computing revisited (available soon)<br />
<br />
</translate></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Julia&diff=130059Julia2023-02-23T17:48:40Z<p>Tyson: Newer Julia MPI requires different way to set backend MPI info</p>
<hr />
<div><languages /><br />
<br />
<translate><br />
<!--T:1--><br />
[[Category:Software]]<br />
[[Category:Pages with video links]]<br />
<br />
<!--T:2--><br />
[https://julialang.org Julia] is a programming language that was designed for performance, ease of use and portability. It is is available as a [[Utiliser_des_modules/en | module]] on Compute Canada clusters.<br />
<br />
= Compiling packages = <!--T:3--><br />
<br />
<!--T:4--><br />
The first time you add a package to a Julia project (using <code>Pkg.add</code> or the package mode), the package will be downloaded, installed in <code>~/.julia</code>, and pre-compiled. The same package can be added to different projects, in which case the data in <code>~/.julia</code> will be reused. Different versions of the same package can be added to different projects; the required package versions will coexist in <code>~/.julia</code>. (Compared to Python, Julia projects replace “virtual environments” while avoiding code duplication.)<br />
<br />
<!--T:38--><br />
From Julia 1.6 onwards, Julia packages include their binary dependencies (such as libraries). There is therefore no need to load any software module, and we recommend not to.<br />
<br />
<!--T:39--><br />
With Julia 1.5 and earlier, you may run into problems if a package depends on system-provided binaries. For instance, [https://github.com/JuliaIO/JLD.jl JLD] depends on a system-provided HDF5 library. On a personal computer, Julia attempts to install such a dependency using [https://en.wikipedia.org/wiki/Yum_(software) yum] or [https://en.wikipedia.org/wiki/APT_(Debian) apt] with [https://en.wikipedia.org/wiki/Sudo sudo]. This will not work on a Compute Canada cluster; instead, some extra information must be provided to allow Julia's package manager (Pkg) to find the HDF5 library.<br />
<br />
<!--T:5--><br />
$ module load gcc/7.3.0 hdf5 julia/1.4.1<br />
$ julia<br />
julia> using Libdl<br />
julia> push!(Libdl.DL_LOAD_PATH, ENV["HDF5_DIR"] * "/lib")<br />
julia> using Pkg<br />
julia> Pkg.add("JLD")<br />
julia> using JLD<br />
<br />
<!--T:6--><br />
If we were to omit the <code>Libdl.DL_LOAD_PATH</code> line from the above example, it would happen to work on Graham because Graham has HDF5 installed system-wide. It would fail on Cedar because Cedar does not. The best practice on ''any'' Compute Canada system, though, is that shown above: Load the appropriate [[Utiliser_des_modules/en | module]] first, and use the environment variable defined by the module (<code>HDF5_DIR</code> in this example) to extend <code>Libdl.DL_LOAD_PATH</code>. This will work uniformly on all systems.<br />
<br />
<!--T:49--><br />
Note that JLD is superseded by [https://juliapackages.com/p/jld2 JLD2], which no longer relies on a system installed HDF5 library, making it more portable.<br />
<br />
= Package files and storage quotas = <!--T:7--><br />
<br />
<!--T:50--><br />
Installing Julia packages in your home directory will create large numbers of files. For example, starting from an empty <code>~/.julia</code> directory (no packages installed), installing just the <code>Gadfly.jl</code> plotting package will result in around 96M and 37000 files (7% of the total number of files allowed by your home directory quota). If you install a large number of Julia packages, you may exceed your quota.<br />
<br />
<!--T:51--><br />
To avoid this issue, you can store your personal Julia “depot” (containing packages, registries, pre-compiled files, etc.) in a different location, such as your project space. For example, user <code>alice</code>, a member of the <code>def-bob</code> project, could add the following to her <code>~/.bashrc</code> file:<br />
<br />
<!--T:52--><br />
export JULIA_DEPOT_PATH="/project/def-bob/alice/julia:$JULIA_DEPOT_PATH"<br />
<br />
<!--T:53--><br />
This will use the <code>/project/def-bob/alice/julia</code> directory preferentially. Files in <code>~/.julia</code> will still be considered, and <code>~/.julia</code> will still be used for some files such as your command history. When moving your depot to a different location, it is better to remove your existing <code>~/.julia</code> depot first if you have one:<br />
<br />
<!--T:54--><br />
$ rm -rf $HOME/.julia<br />
<br />
<!--T:56--><br />
Alternatively, one can create a [[Singularity]] image with a chosen version of Julia and a selection of packages, and JULIA_DEPOT_PATH redirected inside the container. This does mean that you lose the advantage of the Compute Canada optimized Julia modules. However, your container now contains the potentially very large set of small files inside 1 container file (.sif), potentially improving IO performance. Reproducibility is also improved, the container will run anywhere as-is. Another use case is if you want to test Julia nightly builds without altering your local Julia installation, or when you need to bundle your own specific dependencies, because the container creation gives you complete control at creation.<br />
<br />
= Available versions = <!--T:9--> <br />
<br />
<!--T:10--><br />
We have removed earlier versions of Julia (< 1.0) because the old package manager was creating vast numbers of small files which in turn caused performance issues on the parallel file systems. Please start using Julia 1.4, or newer versions.<br />
<br />
<!--T:11--><br />
{{Command<br />
|module spider julia<br />
|result=<br />
--------------------------------------------------------<br />
julia: julia/1.4.1<br />
--------------------------------------------------------<br />
[...]<br />
You will need to load all module(s) on any one of the lines below before the "julia/1.4.1" module is available to load.<br />
<br />
<!--T:12--><br />
nixpkgs/16.09 gcc/7.3.0<br />
[...]<br />
}}<br />
{{Command<br />
|module load gcc/7.3.0 julia/1.4.1<br />
}}<br />
<br />
== Porting code from Julia 0.x to 1.x == <!--T:13--><br />
<br />
<!--T:14--><br />
In the summer of 2018, the Julia developers released version 1.0, in which they stabilized the language API and removed deprecated (outdated) functionality.<br />
To help with updating Julia programs for version 1.0, the developers also released version 0.7.0.<br />
Julia 0.7.0 contains all the new functionality of 1.0 as well as the outdated functionality from the 0.x versions, which gives [https://en.wikipedia.org/wiki/Deprecation deprecation warnings] when used.<br />
Code that runs in Julia 0.7 without warnings should be compatible with Julia 1.0.<br />
<br />
= Using PyCall.jl to call Python from Julia = <!--T:40--><br />
<br />
<!--T:41--><br />
Julia can interface with Python code using PyCall.jl. When using PyCall.jl, set the <code>PYTHON</code> environment variable to the python executable in your virtual Python environment. On our clusters, we recommend using virtual Python environments as described in our [[Python#Creating_and_using_a_virtual_environment|Python documentation]]. After activating a virtual Python environment, you can use it in PyCall.jl:<br />
<br />
<!--T:42--><br />
$ source "$HOME/myenv/bin/activate"<br />
(myenv) $ julia<br />
[...]<br />
julia> using Pkg, PyCall<br />
julia> ENV["PYTHON"] = joinpath(ENV["VIRTUAL_ENV"], "bin", "python")<br />
julia> Pkg.build("PyCall")<br />
<br />
<!--T:43--><br />
We strongly advise against the default PyCall.jl behaviour, which is to use a Miniconda distribution inside your Julia environment. Anaconda and similar distributions [[Anaconda | are not suitable on our clusters]].<br />
<br />
<!--T:55--><br />
Note that if you do not create a virtual environment as shown above, PyCall will default to the operating system's Python installation, which is never what you want. It will invoke Conda.jl, but fail to recognize the correct path unless you rebuild with <code>ENV["PYTHON"]=""</code>. In addition, apart from incompatibilities with the software stack, the Miniconda installer creates a large number of files inside <code>JULIA_DEPOT_PATH</code>. If that is <code>~/.julia</code>, the default, you can run into performance and quota issues.<br />
<br />
<!--T:44--><br />
See the [https://github.com/JuliaPy/PyCall.jl PyCall.jl documentation] for details.<br />
<br />
= Running Julia with multiple processes on clusters = <!--T:17--><br />
<br />
<!--T:18--><br />
The following is an example of running a parallel Julia code that computes π using 100 cores across nodes on a cluster.<br />
<br />
<!--T:19--><br />
{{File<br />
|name=run_julia_pi.sh<br />
|lang="bash"<br />
|contents=<br />
#!/bin/bash<br />
#SBATCH --ntasks=100<br />
#SBATCH --cpus-per-task=1<br />
#SBATCH --mem-per-cpu=1024M<br />
#SBATCH --time=0-00:10<br />
<br />
<!--T:20--><br />
srun hostname -s > hostfile<br />
sleep 5<br />
julia --machine-file ./hostfile ./pi_p.jl 1000000000000<br />
}}<br />
<br />
<!--T:15--><br />
In this example, the command<br />
 srun hostname -s > hostfile<br />
<br />
<!--T:21--><br />
generates a list of names of the nodes allocated and writes it to the text file hostfile. Then the command<br />
<br />
<!--T:16--><br />
 julia --machine-file ./hostfile ./pi_p.jl 1000000000000<br />
<br />
<!--T:22--><br />
starts one main Julia process and 100 worker processes on the nodes specified in the hostfile and runs the program pi_p.jl in parallel.<br />
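As an illustration, the hostfile is simply a list of short hostnames, one line per task. Outside of a job you can mimic the <code>srun</code> step; the node names below are made-up placeholders:<br />

```shell
# Mimic `srun hostname -s > hostfile` with placeholder node names;
# --machine-file expects one short hostname per line, repeated once per process.
printf '%s\n' node001 node001 node002 node002 > hostfile
cat hostfile
```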
<br />
= Running Julia with MPI = <!--T:25--><br />
<br />
<!--T:26--><br />
You must make sure Julia's MPI is configured to use our MPI libraries. If you are using Julia MPI 0.19 or earlier, run the following:<br />
<br />
<!--T:27--><br />
module load StdEnv julia<br />
export JULIA_MPI_BINARY=system<br />
export JULIA_MPI_PATH=$EBROOTOPENMPI<br />
export JULIA_MPI_LIBRARY=$EBROOTOPENMPI/lib64/libmpi.so<br />
export JULIA_MPI_ABI=OpenMPI<br />
export JULIA_MPIEXEC=$EBROOTOPENMPI/bin/mpiexec<br />
<br />
<!--T:28--><br />
Then start Julia and inside it run:<br />
<br />
<!--T:29--><br />
import Pkg;<br />
Pkg.add("MPI")<br />
<br />
If you are using Julia MPI 0.20 or later, run the following. Note that this appends an <code>[MPIPreferences]</code> section to your .julia/environments/vX.Y/LocalPreferences.toml file; if such a section already exists, remove it manually first:<br />
<br />
module load julia<br />
<br />
mkdir -p .julia/environments/v${EBVERSIONJULIA%.*}<br />
<br />
cat >> .julia/environments/v${EBVERSIONJULIA%.*}/LocalPreferences.toml << EOF<br />
[MPIPreferences]<br />
_format = "1.0"<br />
abi = "OpenMPI"<br />
binary = "system"<br />
libmpi = "${EBROOTOPENMPI}/lib64/libmpi.so"<br />
mpiexec = "${EBROOTOPENMPI}/bin/mpiexec"<br />
EOF<br />
<br />
Then start Julia and inside it run:<br />
<br />
import Pkg<br />
Pkg.add("MPIPreferences")<br />
Pkg.add("MPI")<br />
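The <code>${EBVERSIONJULIA%.*}</code> parameter expansion used in the commands above maps the full version of the loaded Julia module to the <code>major.minor</code> form that names the environment directory. A quick illustration; the version string here is an example, the julia module sets the real <code>EBVERSIONJULIA</code>:<br />

```shell
# ${VAR%.*} removes the shortest trailing ".suffix", so a full version
# like 1.8.5 becomes the major.minor form used by .julia/environments.
EBVERSIONJULIA=1.8.5   # example value; the julia module sets the real one
echo "${EBVERSIONJULIA%.*}"
echo ".julia/environments/v${EBVERSIONJULIA%.*}"
```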
<br />
<!--T:30--><br />
To use MPI afterwards, run (with two processes in this example):<br />
<br />
<!--T:31--><br />
module load StdEnv julia<br />
mpirun -np 2 julia hello.jl<br />
<br />
<!--T:32--><br />
The hello.jl code here is:<br />
<br />
<!--T:33--><br />
using MPI<br />
MPI.Init()<br />
comm = MPI.COMM_WORLD<br />
print("Hello world, I am rank $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))\n")<br />
MPI.Barrier(comm)<br />
<br />
= Configuring Julia's threading behaviour = <!--T:35--><br />
You can restrict the number of threads Julia can use by setting <code>JULIA_NUM_THREADS=k</code>; for example, a single process in a job with <code>--cpus-per-task=12</code> could use k=12.<br />
Setting the number of threads to the number of allocated processors is a typical choice (although see [[Scalability]] for a discussion).<br />
In addition, one can 'pin' threads to cores by setting<br />
JULIA_EXCLUSIVE to anything non-zero. As per the [https://docs.julialang.org/en/v1/manual/environment-variables/#JULIA_EXCLUSIVE documentation], this takes control of thread scheduling away from the OS and pins threads to cores (sometimes referred to as 'green' threads with affinity). Depending on the computation the threads execute, this can improve performance when one has precise information on cache access patterns or wants to avoid unwelcome scheduling patterns used by the OS. Setting JULIA_EXCLUSIVE works only if your job has exclusive access to the compute nodes (all available CPU cores were allocated to your job). Since SLURM already pins processes and threads to CPU cores, asking Julia to re-pin threads may not lead to any performance improvement.<br />
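In a job script, a portable way to follow this advice is to derive the thread count from the scheduler's allocation rather than hard-coding it. A minimal sketch, assuming <code>SLURM_CPUS_PER_TASK</code> is set by Slurm inside the job; the fallback of 1 applies elsewhere:<br />

```shell
# Derive Julia's thread count from the Slurm allocation; default to 1
# when SLURM_CPUS_PER_TASK is unset (e.g. outside a job).
export JULIA_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
# Uncomment to pin threads to cores (only sensible with exclusive node access):
# export JULIA_EXCLUSIVE=1
echo "JULIA_NUM_THREADS=$JULIA_NUM_THREADS"
```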
<br />
<!--T:36--><br />
Related is the variable [https://docs.julialang.org/en/v1/manual/environment-variables/#JULIA_THREAD_SLEEP_THRESHOLD JULIA_THREAD_SLEEP_THRESHOLD], which controls the number of nanoseconds after which a spinning thread is put to sleep. The string value <code>infinite</code> disables sleeping entirely. Lowering this variable can be useful when many threads contend frequently for a shared resource, where it is preferable to schedule spinning threads out more quickly; under heavy contention, spinning only increases CPU load. Conversely, when a resource is only very infrequently contended, lower latency can result from preventing threads from sleeping, that is, setting the threshold to infinite.<br />
<br />
<!--T:37--><br />
It goes without saying that these values should only be configured after one has accurately profiled any contention issues. Given the high pace at which Julia, and especially its threading subsystem Base.Threads, evolves, one should always consult the documentation to ensure that changing the default configuration has only the expected effect.<br />
<br />
= Videos = <!--T:23--><br />
<br />
<!--T:24--><br />
A series of online seminars produced by SHARCNET:<br />
* [https://youtu.be/gKxs0L2Ac4I Julia: A first perspective] (47 minutes)<br />
* [https://youtu.be/-QuqSOUbY6Q Julia: A second perspective] (57 minutes)<br />
* [https://youtu.be/HWLV6oTmfO8 Julia: A third perspective - parallel computing explained] (65 minutes)<br />
* Julia: Parallel computing revisited (available soon)<br />
<br />
</translate></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_Nix&diff=118191Using Nix2022-07-25T18:23:45Z<p>Tyson: Page seems stable. Drop draft.</p>
<hr />
<div>= Overview =<br />
<br />
[https://nixos.org/nix/ Nix] is a software building and composition system that allows users to manage their own persistent software environments. It is only available on SHARCNET systems (i.e., graham and legacy).<br />
<br />
* Supports one-off, per-project, and per-user usage of compositions<br />
* Compositions can be built, installed, upgraded, downgraded, and removed as a user<br />
* Operations either succeed or fail while leaving everything intact (operations are atomic)<br />
* Extremely easy to add and share compositions<br />
<br />
Currently, Nix builds software in a generic manner (e.g., without AVX2 or AVX512 vector instruction support), so module-loaded software should be preferred for longer-running simulations when it exists.<br />
<br />
'''NOTE:''' The message <code>failed to lock thread to CPU XX</code> is a harmless warning that can be ignored.<br />
<br />
== Enabling and disabling the nix environment ==<br />
<br />
The user’s current nix environment is enabled by loading the nix module. This creates some ''.nix*'' files and sets some environment variables.<br />
<br />
<source lang="bash">[name@cluster:~]$ module load nix</source><br />
It is disabled by unloading the nix module. This unsets the environment variables but leaves the ''.nix*'' files alone.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix</source><br />
== Completely resetting the nix environment ==<br />
<br />
Most per-user operations can be undone with the <code>--rollback</code> option (i.e., <code>nix-env --rollback</code> or <code>nix-channel --rollback</code>). Sometimes, though, it is useful to reset Nix entirely. This is done by unloading the module, erasing all user-related Nix files, and then reloading the module.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix<br />
[name@cluster:~]$ rm -fr ~/.nix-profile ~/.nix-defexpr ~/.nix-channels ~/.config/nixpkgs<br />
[name@cluster:~]$ rm -fr /nix/var/nix/profiles/per-user/$USER /nix/var/nix/gcroots/per-user/$USER<br />
[name@cluster:~]$ module load nix</source><br />
= Existing compositions =<br />
<br />
The <code>nix search</code> command can be used to locate already available compositions<br />
<br />
<source lang="bash">[user@cluster:~]$ nix search git<br />
...<br />
* nixpkgs.git (git-minimal-2.19.3)<br />
Distributed version control system<br />
...</source><br />
Pro tips include<br />
<br />
* you need to specify <code>-u</code> after upgrading your channel (this will take a while)<br />
* the search string is actually a regular expression and multiple ones are ANDed together<br />
<br />
Often our usage of a composition is a one-off, per-project, or all-the-time situation. Nix supports all three of these cases.<br />
<br />
== One offs ==<br />
<br />
If you just want to use a composition once, the easiest way is to use the <code>nix run</code> command. This command starts a shell in which <code>PATH</code> has been extended to include the specified composition.<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.git<br />
[user@cluster:~]$ git<br />
[user@cluster:~]$ exit</source><br />
Note that this does not protect the composition from being garbage collected overnight (i.e., the composition is only guaranteed to remain available for your use until the nightly cleanup in the early morning hours). Pro tips include<br />
<br />
* you can specify more than one composition in the same <code>nix run</code> command<br />
* you can specify a command instead of a shell with <code>-c &lt;cmd&gt; &lt;args&gt; ...</code><br />
<br />
== Per-project ==<br />
<br />
If you want to use a program for a specific project, the easiest way is with the <code>nix build</code> command. This command creates a symbolic link (by default named <code>result</code>) from which you can access the program's ''bin'' directory to run it.<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.git<br />
[user@cluster:~]$ ./result/bin/git</source><br />
Note that (currently) the composition will only be protected from overnight garbage collection if you output the symlink into your ''home'' directory and do not rename or move it. Pro tips include<br />
<br />
* you can specify the output symlink name with the <code>-o &lt;name&gt;</code> option<br />
* add the ''bin'' directory to your <code>PATH</code> to not have to type it in every time<br />
<br />
== Per-user ==<br />
<br />
Loading the <code>nix</code> module adds the per-user common ''~/.nix-profile/bin'' directory to your <code>PATH</code>. You can add and remove compositions from this directory with the <code>nix-env</code> command<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --install --attr nixpkgs.git<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3</source><br />
<source lang="bash">[user@cluster:~]$ nix-env --uninstall git-minimal<br />
uninstalling 'git-minimal-2.19.3'<br />
[user@cluster:~]$ nix-env --query</source><br />
Each command actually creates a new generation, so all prior generations remain and can be used<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --list-generations<br />
1 2020-07-29 13:10:03<br />
2 2020-07-29 13:11:52 (current)<br />
[user@cluster:~]$ nix-env --switch-generation 1<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3<br />
[user@cluster:~]$ nix-env --switch-generation 2<br />
[user@cluster:~]$ nix-env --query</source><br />
Pro tips include<br />
<br />
* <code>nix-env --rollback</code> moves back one generation<br />
* <code>nix-env --delete-generations &lt;time&gt;</code> deletes environments older than <code>&lt;time&gt;</code> (e.g., <code>30d</code>)<br />
* see our [[Using Nix: nix-env|nix-env page]] for a much more in-depth discussion of using <code>nix-env</code><br />
<br />
= Creating compositions =<br />
<br />
Often we require our own unique composition. A basic example would be to bundle all the binaries from multiple existing compositions in a common ''bin'' directory (e.g., <code>make</code>, <code>gcc</code>, and <code>ld</code> to build a simple C program). A more complex example would be to bundle Python with a set of Python libraries by wrapping the Python executables with shell scripts that set <code>PYTHONPATH</code> for the Python libraries before running the real Python binaries.<br />
<br />
All of these have a common format: you write a Nix expression in a <code>.nix</code> file that composes together existing compositions, and then tell the above commands to use it with the <code>-f &lt;nix file&gt;</code> option. For example, if the file <code>python.nix</code> contains an expression for a python environment, you can create a per-project ''bin'' directory with<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build -f python.nix -o python<br />
[user@cluster:~]$ ./python/bin/python</source><br />
The nix expression you put in the file generally<br />
<br />
* does <code>with import &lt;nixpkgs&gt; {}</code> to bring the set of nixpkgs into scope<br />
* calls an existing composition function with a list of space-separated components to include<br />
<br />
The template for doing the second of these follows below, as it differs slightly across the various ecosystems.<br />
<br />
A pro tip is<br />
<br />
* there are many [https://nixos.org/nixpkgs/manual/#chap-language-support supported languages and frameworks] but only a few are described here; send us an email if you would like a missing supported one added<br />
<br />
== Generic ==<br />
<br />
Nixpkgs provides a <code>buildEnv</code> function that does a basic composition of compositions (by combining their ''bin'', ''lib'', etc. directories). The list of packages is the same as used before minus the leading <code>nixpkgs</code> prefix, as it was imported (e.g., <code>git</code> instead of <code>nixpkgs.git</code>).<br />
<br />
<source lang="nix">with import <nixpkgs> {};<br />
buildEnv {<br />
name = "my environment";<br />
paths = [<br />
... list of compositions ...<br />
];<br />
}</source><br />
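For example, the template can be filled in and built as a per-project composition like this. The package names and the file name <code>tools.nix</code> are just illustrative choices; check <code>nix search</code> for what is available:<br />

```shell
# Write a small composition that bundles git and GNU make into one bin/
# directory, then build it with `nix build` (left commented out here;
# run it on the cluster with the nix module loaded).
cat > tools.nix <<'EOF'
with import <nixpkgs> {};
buildEnv {
  name = "my-environment";
  paths = [
    git
    gnumake
  ];
}
EOF
# nix build -f tools.nix -o tools   # afterwards: ./tools/bin/git, ./tools/bin/make
grep -c 'buildEnv' tools.nix
```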
== Python ==<br />
<br />
Nixpkgs provides the following python related attributes<br />
<br />
* <code>python&lt;major&gt;&lt;minor&gt;</code> - a composition providing the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> - the set of python compositions using the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.withPackages</code> - wraps python with <code>PYTHONPATH</code> set to a given set of python packages<br />
<br />
We can use the first of these directly to use the programs provided by python compositions<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run python36.pkgs.spambayes<br />
[user@cluster:~]$ sb_filter.py --help<br />
[user@cluster:~]$ exit</source><br />
and the last in a <code>.nix</code> file to create a python composition that enables a given set of libraries (e.g., a <code>python</code> command we can run that has access to the given set of python packages)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
python.withPackages (packages:<br />
with packages; [<br />
... list of python packages ...<br />
]<br />
)</source><br />
Some pro tips are<br />
<br />
* the aliases <code>python</code> and <code>python&lt;major&gt;</code> give default <code>python&lt;major&gt;&lt;minor&gt;</code> versions<br />
* the aliases <code>python&lt;major&gt;&lt;minor&gt;Packages</code> are short for <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> (including default version variants)<br />
* the function <code>python&lt;major&gt;&lt;minor&gt;.pkgs.buildPythonPackage</code> can be used to build your own python packages<br />
<br />
== R ==<br />
<br />
Nixpkgs provides the following R related attributes<br />
<br />
* <code>R</code> - a composition providing R<br />
* <code>rstudio</code> - a composition providing RStudio<br />
* <code>rPackages</code> - the set of R packages<br />
* <code>rWrapper</code> - a composition that wraps R with <code>R_LIBS</code> set to a minimal set of R packages<br />
* <code>rstudioWrapper</code> - a composition that wraps RStudio with <code>R_LIBS</code> set to a minimal set of R packages<br />
<br />
We can use <code>rPackages</code> directly to examine the content of R packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build rPackages.exams -o exams<br />
[user@cluster:~]$ cat exams/library/exams/NEWS<br />
[user@cluster:~]$ exit</source><br />
and the latter two can be overridden in a <code>.nix</code> file to create R and RStudio wrappers enabling a given set of R libraries (e.g., an <code>R</code> or <code>rstudio</code> command we can run that has access to the given set of R packages)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
rWrapper.override {<br />
packages = with rPackages; [<br />
... list of R packages ...<br />
];<br />
}</source><br />
A pro tip is<br />
<br />
* the function <code>rPackages.buildRPackage</code> can be used to build your own R packages<br />
<br />
== Haskell ==<br />
<br />
Nixpkgs provides the following haskell related attributes<br />
<br />
* <code>haskell.compiler.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - composition providing the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - the set of haskell packages compiled by the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.ghc.withPackages</code> - composition wrapping ghc to enable the given packages<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.ghc.withHoogle</code> - composition wrapping ghc to enable the given packages with hoogle and documentation indices<br />
<br />
We can use the first directly to use programs provided by haskell packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run haskell.packages.ghc864.pandoc<br />
[user@cluster:~]$ pandoc --help</source><br />
and the last two can be used in a <code>.nix</code> file to create a ghc environment enabling a given set of haskell packages (e.g., a <code>ghci</code> we can run that has access to the given set of packages)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
haskell.packages.ghc864.ghc.withPackages (packages:<br />
with packages; [<br />
... list of Haskell packages ...<br />
]<br />
)</source><br />
Some pro tips are<br />
<br />
* the alias <code>haskellPackages</code> gives a default <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code><br />
* the attributes in <code>haskell.lib</code> contains a variety of useful attributes for tweaking haskell packages (e.g., enabling profiling, etc.)<br />
* the upstream maintainer has a useful [https://www.youtube.com/watch?v=KLhkAEk8I20 youtube video] on how to fix broken haskell packages<br />
<br />
== Emacs ==<br />
<br />
Nixpkgs provides the following emacs related attributes (append a <code>Ng</code> suffix for older versions of nixpkgs, e.g., <code>emacs25Ng</code> and <code>emacs25PackagesNg</code>)<br />
<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;</code> - a composition providing the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages</code> - the set of emacs packages for the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> - composition wrapping emacs to enable the given packages<br />
<br />
We can use the second directly to examine the content of packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.emacs25Packages.magit -o magit<br />
[user@cluster:~]$ cat magit/share/emacs/site-lisp/elpa/magit*/AUTHORS.md<br />
[user@cluster:~]$ exit</source><br />
and the last one can be used in a <code>.nix</code> file to create a composition giving emacs with the given set of packages enabled<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
emacs25Packages.emacsWithPackages (packages:<br />
with packages; [<br />
... list of emacs packages ...<br />
]<br />
)</source><br />
Some pro tips are<br />
<br />
* the aliases <code>emacs</code> and <code>emacsPackages</code> give a default <code>emacs&lt;major&gt;&lt;minor&gt;</code> and <code>emacs&lt;major&gt;&lt;minor&gt;Packages</code> version<br />
* the aliases <code>emacs&lt;major&gt;&lt;minor&gt;WithPackages</code> are short for <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> (including default version variants)<br />
<br />
[[Category:Software]]</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=SSH_tunnelling&diff=113457SSH tunnelling2022-03-30T19:57:51Z<p>Tyson: Seems cedar needs the FQDN to avoid the firewall on the omnipath route it /etc/hosts</p>
<hr />
<div><languages/><br />
<translate><br />
<br />
<!--T:53--><br />
''Parent page: [[SSH]]''<br />
<br />
=What is SSH tunnelling?= <!--T:1--><br />
<br />
<!--T:2--><br />
SSH tunnelling is a method to use a gateway computer to connect two<br />
computers that cannot connect directly.<br />
<br />
<!--T:3--><br />
In the context of Compute Canada, SSH tunnelling is necessary in certain cases,<br />
because compute nodes on [[Niagara]], [[Béluga]] and [[Graham]] do not have direct access to<br />
the internet, nor can the compute nodes be contacted directly from the internet.<br />
<br />
<!--T:4--><br />
The following use cases require SSH tunnels:<br />
<br />
<!--T:5--><br />
* Running commercial software on a compute node that needs to contact a license server over the internet;<br />
* Running [[Visualization|visualization software]] on a compute node that needs to be contacted by client software on a user's local computer;<br />
* Running a [[Jupyter | Jupyter Notebook]] on a compute node that needs to be contacted by the web browser on a user's local computer;<br />
* Connecting to the Cedar database server from somewhere other than the Cedar head node, e.g., your desktop.<br />
<br />
<!--T:6--><br />
In the first case, the license server is outside of<br />
the compute cluster and is rarely under a user's control, whereas<br />
in the other cases, the server is on the compute node but the<br />
challenge is to connect to it from the outside. We will therefore<br />
consider these two situations below.<br />
<br />
<!--T:54--><br />
While not strictly required to use SSH tunnelling, you may wish to be familiar with [[SSH Keys|SSH key pairs]].<br />
<br />
= Contacting a license server from a compute node = <!--T:7--><br />
<br />
<!--T:8--><br />
{{Panel<br />
|title=What's a port?<br />
|panelstyle=SideCallout<br />
|content=<br />
A port is a number used to distinguish streams of communication <br />
from one another. You can think of it as loosely analogous to a radio frequency <br />
or a channel. Many port numbers are reserved, by rule or by convention, for <br />
certain types of traffic. See <br />
[https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers List of TCP and UDP port numbers] for more.<br />
}}<br />
<br />
<!--T:9--><br />
Certain commercially-licensed programs must connect to a license server machine <br />
somewhere on the internet via a predetermined port. If the compute node where <br />
the program is running has no access to the internet, then a ''gateway server'' <br />
which does have access must be used to forward communications on that port, <br />
from the compute node to the license server. To enable this, one must set up <br />
an ''SSH tunnel''. Such an arrangement is also called ''port forwarding''.<br />
<br />
<!--T:10--><br />
In most cases, creating an SSH tunnel in a batch job requires only two or <br />
three commands in your job script. You will need the following information:<br />
<br />
<!--T:11--><br />
* The IP address or the name of the license server (here LICSERVER).<br />
* The port number of the license service (here LICPORT). <br />
<br />
<!--T:12--><br />
You should obtain this information from whoever maintains the license server.<br />
That server also must allow connections from the login nodes; for<br />
Niagara, the outgoing IP address will either be 142.150.188.131 or 142.150.188.132.<br />
<br />
<!--T:13--><br />
With this information, one can now set up the SSH tunnel. For<br />
Graham, an alternative solution is to request a firewall exception<br />
for license server LICSERVER and its specific port LICPORT.<br />
<br />
<!--T:14--><br />
The gateway server on Niagara is nia-gw. On Graham, you need<br />
to pick one of the login nodes (gra-login1, 2, ...). Let us call the<br />
gateway node GATEWAY. You also need to choose the port number on the<br />
compute node to use (here COMPUTEPORT).<br />
<br />
<!--T:15--><br />
The SSH command to issue in the job script is then:<br />
<br />
<!--T:16--><br />
<source lang="bash"><br />
ssh GATEWAY -L COMPUTEPORT:LICSERVER:LICPORT -N -f<br />
</source><br />
<br />
<!--T:17--><br />
In this command, the string following the -L option specifies the port forwarding: connections to port COMPUTEPORT on the compute node are forwarded to port LICPORT on LICSERVER.<br />
* -N tells SSH not to open a shell on the GATEWAY,<br />
* -f tells SSH to run in the background, allowing the job script to continue past this SSH command.<br />
<br />
<!--T:18--><br />
A further command to add to the job script should tell the software<br />
that the license server is on port COMPUTEPORT on the server<br />
'localhost'. The term 'localhost' is the standard name by which a computer refers to itself. It is to be taken literally and should not be replaced with your computer's name. Exactly how to inform your software to use this port on 'localhost' will<br />
depend on the specific application and the type of license server,<br />
but often it is simply a matter of setting an environment variable in<br />
the job script like<br />
<br />
<!--T:19--><br />
<source lang="bash"><br />
export MLM_LICENSE_FILE=COMPUTEPORT@localhost<br />
</source><br />
<br />
== Example job script== <!--T:20--><br />
<br />
<!--T:21--><br />
The following job script sets up an SSH tunnel to contact licenseserver.institution.ca at port 9999.<br />
<br />
<!--T:22--><br />
<source lang="bash"><br />
#!/bin/bash<br />
#SBATCH --nodes 1<br />
#SBATCH --ntasks 40<br />
#SBATCH --time 3:00:00<br />
<br />
<!--T:23--><br />
REMOTEHOST=licenseserver.institution.ca<br />
REMOTEPORT=9999<br />
LOCALHOST=localhost<br />
for ((i=0; i<10; ++i)); do<br />
LOCALPORT=$(shuf -i 1024-65535 -n 1)<br />
ssh nia-gw -L $LOCALPORT:$REMOTEHOST:$REMOTEPORT -N -f && break<br />
done || { echo "Giving up forwarding license port after $i attempts..."; exit 1; }<br />
export MLM_LICENSE_FILE=$LOCALPORT@$LOCALHOST<br />
<br />
<!--T:24--><br />
module load thesoftware/2.0<br />
mpirun thesoftware ..... <br />
</source><br />
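The <code>for</code> loop in this script picks a random unprivileged local port with <code>shuf</code> and retries up to ten times in case the chosen port is already in use. The port-selection step can be sketched in isolation:<br />

```shell
# shuf draws one integer uniformly from the unprivileged port range,
# mirroring the LOCALPORT selection in the job script above.
LOCALPORT=$(shuf -i 1024-65535 -n 1)
if [ "$LOCALPORT" -ge 1024 ] && [ "$LOCALPORT" -le 65535 ]; then
    echo "candidate local port looks valid"
fi
```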
<br />
= Connecting to a program running on a compute node= <!--T:25--><br />
<br />
<!--T:26--><br />
</translate></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=SSH_tunnelling&diff=113456SSH tunnelling2022-03-30T18:59:22Z<p>Tyson: Don't need the int.cedar.computecanada.ca domain for cedar and it isn't correct for non-cedar machines either</p>
<hr />
<div><languages/><br />
<translate><br />
<br />
<!--T:53--><br />
''Parent page: [[SSH]]''<br />
<br />
=What is SSH tunnelling?= <!--T:1--><br />
<br />
<!--T:2--><br />
SSH tunnelling is a method to use a gateway computer to connect two<br />
computers that cannot connect directly.<br />
<br />
<!--T:3--><br />
In the context of Compute Canada, SSH tunnelling is necessary in certain cases,<br />
because compute nodes on [[Niagara]], [[Béluga]] and [[Graham]] do not have direct access to<br />
the internet, nor can the compute nodes be contacted directly from the internet.<br />
<br />
<!--T:4--><br />
The following use cases require SSH tunnels:<br />
<br />
<!--T:5--><br />
* Running commercial software on a compute node that needs to contact a license server over the internet;<br />
* Running [[Visualization|visualization software]] on a compute node that needs to be contacted by client software on a user's local computer;<br />
* Running a [[Jupyter | Jupyter Notebook]] on a compute node that needs to be contacted by the web browser on a user's local computer;<br />
* Connecting to the Cedar database server from somewhere other than the Cedar head node, e.g., your desktop.<br />
<br />
<!--T:6--><br />
In the first case, the license server is outside of<br />
the compute cluster and is rarely under a user's control, whereas<br />
in the other cases, the server is on the compute node but the<br />
challenge is to connect to it from the outside. We will therefore<br />
consider these two situations below.<br />
<br />
<!--T:54--><br />
While not strictly required to use SSH tunnelling, you may wish to be familiar with [[SSH Keys|SSH key pairs]].<br />
<br />
= Contacting a license server from a compute node = <!--T:7--><br />
<br />
<!--T:8--><br />
{{Panel<br />
|title=What's a port?<br />
|panelstyle=SideCallout<br />
|content=<br />
A port is a number used to distinguish streams of communication <br />
from one another. You can think of it as loosely analogous to a radio frequency <br />
or a channel. Many port numbers are reserved, by rule or by convention, for <br />
certain types of traffic. See <br />
[https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers List of TCP and UDP port numbers] for more.<br />
}}<br />
<br />
<!--T:9--><br />
Certain commercially-licensed programs must connect to a license server machine <br />
somewhere on the internet via a predetermined port. If the compute node where <br />
the program is running has no access to the internet, then a ''gateway server'' <br />
which does have access must be used to forward communications on that port, <br />
from the compute node to the license server. To enable this, one must set up <br />
an ''SSH tunnel''. Such an arrangement is also called ''port forwarding''.<br />
<br />
<!--T:10--><br />
In most cases, creating an SSH tunnel in a batch job requires only two or <br />
three commands in your job script. You will need the following information:<br />
<br />
<!--T:11--><br />
* The IP address or the name of the license server (here LICSERVER).<br />
* The port number of the license service (here LICPORT). <br />
<br />
<!--T:12--><br />
You should obtain this information from whoever maintains the license server.<br />
That server also must allow connections from the login nodes; for<br />
Niagara, the outgoing IP address will either be 142.150.188.131 or 142.150.188.132.<br />
<br />
<!--T:13--><br />
With this information, one can now setup the SSH tunnel. For<br />
Graham, an alternative solution is to request a firewall exception<br />
for license server LICSERVER and its specific port LICPORT.<br />
<br />
<!--T:14--><br />
The gateway server on Niagara is nia-gw. On Graham, you need<br />
to pick one of the login nodes (gra-login1, 2, ...). Let us call the<br />
gateway node GATEWAY. You also need to choose the port number on the<br />
compute node to use (here COMPUTEPORT).<br />
<br />
<!--T:15--><br />
The SSH command to issue in the job script is then:<br />
<br />
<!--T:16--><br />
<source lang="bash"><br />
ssh GATEWAY -L COMPUTEPORT:LICSERVER:LICPORT -N -f<br />
</source><br />
<br />
<!--T:17--><br />
In this command, the -L option specifies the port forwarding: connections to port COMPUTEPORT on the compute node are forwarded to port LICPORT on LICSERVER.<br />
* -N tells SSH not to open a shell on the GATEWAY;<br />
* -f tells SSH to run in the background, allowing the job script to continue on past this SSH command.<br />
<br />
<!--T:18--><br />
A further command to add to the job script should tell the software<br />
that the license server is on port COMPUTEPORT on the server<br />
'localhost'. The term 'localhost' is the standard name by which a computer refers to itself. It is to be taken literally and should not be replaced with your computer's name. Exactly how to inform your software to use this port on 'localhost' will<br />
depend on the specific application and the type of license server,<br />
but often it is simply a matter of setting an environment variable in<br />
the job script like<br />
<br />
<!--T:19--><br />
<source lang="bash"><br />
export MLM_LICENSE_FILE=COMPUTEPORT@localhost<br />
</source><br />
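Before starting the licensed program, it can be useful to confirm that something is actually listening on the tunnel's local end. The following is a sketch of such a check (our illustration, not part of the official instructions); it assumes a bash shell, and <code>COMPUTEPORT=6091</code> is a made-up example value.<br />

```shell
# Sketch: probe the local end of the SSH tunnel before starting the
# licensed program. Uses bash's built-in /dev/tcp pseudo-device.
tunnel_up() {
    # Succeeds only if a TCP connection to 127.0.0.1:$1 can be opened.
    (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

COMPUTEPORT=6091   # example value; use the port chosen for your tunnel
if tunnel_up "$COMPUTEPORT"; then
    export MLM_LICENSE_FILE="$COMPUTEPORT@localhost"
else
    echo "nothing is listening on port $COMPUTEPORT yet" >&2
fi
```

If the probe fails, the SSH command creating the tunnel has likely not started or has exited.<br />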
<br />
== Example job script== <!--T:20--><br />
<br />
<!--T:21--><br />
The following job script sets up an SSH tunnel to contact licenseserver.institution.ca at port 9999.<br />
<br />
<!--T:22--><br />
<source lang="bash"><br />
#!/bin/bash<br />
#SBATCH --nodes 1<br />
#SBATCH --ntasks 40<br />
#SBATCH --time 3:00:00<br />
<br />
<!--T:23--><br />
REMOTEHOST=licenseserver.institution.ca<br />
REMOTEPORT=9999<br />
LOCALHOST=localhost<br />
for ((i=0; i<10; ++i)); do<br />
LOCALPORT=$(shuf -i 1024-65535 -n 1)<br />
ssh nia-gw -L $LOCALPORT:$REMOTEHOST:$REMOTEPORT -N -f && break<br />
done || { echo "Giving up forwarding license port after $i attempts..."; exit 1; }<br />
export MLM_LICENSE_FILE=$LOCALPORT@$LOCALHOST<br />
<br />
<!--T:24--><br />
module load thesoftware/2.0<br />
mpirun thesoftware ..... <br />
</source><br />
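The port-selection step in this script can be tried on its own: <code>shuf</code> draws one random port from the unprivileged range, and the <code>ssh ... && break</code> line in the loop retries with a new candidate if binding the previous one failed.<br />

```shell
# Draw one random candidate port from the unprivileged range (1024-65535),
# exactly as the job script above does before each tunnel attempt.
LOCALPORT=$(shuf -i 1024-65535 -n 1)
echo "$LOCALPORT"
```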
<br />
= Connecting to a program running on a compute node= <!--T:25--><br />
<br />
<!--T:26--><br />
SSH tunnelling can also be used in the context of Compute Canada to allow a user's computer to connect to a compute node on a cluster through an encrypted tunnel that is routed via the login node of this cluster. This technique allows graphical output of applications like a [[Jupyter | Jupyter Notebook]] or [[Visualization|visualization software]] to be displayed transparently on the user's local workstation even while they are running on a cluster's compute node. When connecting to a database server where the connection is only possible through the head node, SSH tunnelling can be used to bind an external port to the database server.<br />
<br />
<!--T:32--><br />
There is Network Address Translation (NAT) on both Graham and Cedar allowing users to access the internet from the compute nodes. On Graham however, access is blocked by default at the firewall. Contact [[Technical support|technical support]] if you need to have a specific port opened, supplying the IP address or range of addresses which should be allowed to use that port.<br />
<br />
== From Linux or macOS == <!--T:51--><br />
<br />
<!--T:52--><br />
On a Linux or macOS system, we recommend using the [https://sshuttle.readthedocs.io sshuttle] Python package.<br />
<br />
<!--T:34--><br />
On your computer, open a new terminal window and run the following sshuttle command to create the tunnel.<br />
<br />
<!--T:35--><br />
{{Command<br />
|prompt=[name@my_computer ]$<br />
|sshuttle --dns -Nr userid@machine_name}}<br />
<br />
<!--T:36--><br />
Then, copy and paste the application's URL into your browser. If your application is a <br />
[[Jupyter#Starting_Jupyter_Notebook|Jupyter notebook]], for example, you are given a URL with a token:<br />
<pre><br />
http://cdr544.int.cedar.computecanada.ca:8888/?token=7ed7059fad64446f837567e32af8d20efa72e72476eb72ca<br />
</pre><br />
<br />
== From Windows == <!--T:37--> <br />
<br />
<!--T:38--><br />
An SSH tunnel can be created from Windows using [[Connecting with MobaXTerm|MobaXTerm]] as follows.<br />
<br />
<!--T:39--><br />
Open two sessions in MobaXTerm. <br />
<br />
<!--T:40--><br />
*Session 1 should be a connection to a cluster. Start your job there following the instructions for your application, such as [[Jupyter#Starting_Jupyter_Notebook|Jupyter Notebook]]. You should be given a URL that includes a host name and a port, such as <code>cdr544.int.cedar.computecanada.ca:8888</code> for example.<br />
<br />
<!--T:41--><br />
*Session 2 should be a local terminal in which we will set up the SSH tunnel. Run the following command, replacing this example host name with the one from the URL you received in Session 1. <br />
<br />
<!--T:42--><br />
{{Command<br />
|prompt=[name@my_computer ]$<br />
| ssh -L 8888:cdr544:8888 someuser@cedar.computecanada.ca}}<br />
<br />
<!--T:43--><br />
This command forwards connections to '''local port''' 8888 (the first number) to port 8888 on cdr544, the '''remote port'''.<br />
The local port number does not ''need'' to match the remote port number, but using the same number for both is conventional and reduces confusion.<br />
<br />
<!--T:44--><br />
Modify the URL you were given in Session 1 by replacing the host name with <code>localhost</code>. <br />
Again using an example from [[Jupyter#Starting_Jupyter_Notebook|Jupyter Notebook]], this would be the URL to paste into a browser:<br />
<pre><br />
http://localhost:8888/?token=7ed7059fad64446f837567e32af8d20efa72e72476eb72ca<br />
</pre><br />
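If you prefer not to edit the URL by hand, the host part can be swapped for <code>localhost</code> with a one-line <code>sed</code> substitution (a sketch; the token below is the made-up example value from above).<br />

```shell
# Replace the host name in the Jupyter URL with localhost, keeping the
# port and the token intact.
url='http://cdr544.int.cedar.computecanada.ca:8888/?token=7ed7059fad64446f837567e32af8d20efa72e72476eb72ca'
echo "$url" | sed -E 's#^(https?://)[^:/]+#\1localhost#'
# prints: http://localhost:8888/?token=7ed7059fad64446f837567e32af8d20efa72e72476eb72ca
```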
<br />
== Example for connecting to a database server on Cedar from your desktop == <!--T:46--><br />
<br />
<!--T:55--><br />
An SSH tunnel can be created from your desktop to database servers PostgreSQL or MySQL using the following commands respectively:<br />
<br />
<!--T:47--><br />
<pre> <br />
ssh -L PORT:cedar-pgsql-vm.int.cedar.computecanada.ca:5432 someuser@cedar.computecanada.ca<br />
ssh -L PORT:cedar-mysql-vm.int.cedar.computecanada.ca:3306 someuser@cedar.computecanada.ca<br />
</pre><br />
<br />
<!--T:48--><br />
These commands connect port PORT on your local host to the PostgreSQL or MySQL database server respectively. The port number you choose (PORT) should be greater than 1023 (lower ports are privileged) and not bigger than 32768 (2^15). In this example, "someuser" is your Compute Canada username. The difference between this connection and an ordinary SSH connection is that you can now use another terminal to connect to the database server directly from your desktop. On your desktop, run one of these commands for PostgreSQL or MySQL as appropriate:<br />
<br />
<!--T:49--><br />
<pre> <br />
psql -h 127.0.0.1 -p PORT -U <your username> -d <your database><br />
mysql -h 127.0.0.1 -P PORT -u <your username> -p <br />
</pre><br />
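Putting the pieces together, here is a small sketch (our illustration, not an official helper; <code>someuser</code> and <code>somedb</code> are placeholders) that picks a local port in the advised range and prints the matching tunnel and client commands for PostgreSQL.<br />

```shell
# Pick a local port between 1024 and 32768 and print the two commands
# to run in separate terminals; nothing here actually connects to Cedar.
PORT=$(shuf -i 1024-32768 -n 1)
echo "ssh -L ${PORT}:cedar-pgsql-vm.int.cedar.computecanada.ca:5432 someuser@cedar.computecanada.ca"
echo "psql -h 127.0.0.1 -p ${PORT} -U someuser -d somedb"
```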
<br />
<!--T:50--><br />
MySQL requires a password; it is stored in the ".my.cnf" file in your home directory on Cedar. <br />
The database connection will remain open as long as the SSH connection remains open.<br />
<br />
</translate></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_Nix&diff=107121Using Nix2021-12-01T17:33:25Z<p>Tyson: Neither encourage nor discourage users from contacting us about what they want for Nix</p>
<hr />
<div>{{Draft}}<br />
<br />
= Overview =<br />
<br />
[https://nixos.org/nix/ Nix] is a software building and composition system that allows users to manage their own persistent software environments. It is only available on SHARCNET systems (i.e., Graham and legacy SHARCNET clusters).<br />
<br />
* Supports one-off, per-project, and per-user usage of compositions<br />
* Compositions can be built, installed, upgraded, downgraded, and removed as a user<br />
* Operations either succeed or fail, leaving everything intact (operations are atomic).<br />
* Extremely easy to add and share compositions<br />
<br />
Currently, Nix builds software in a generic manner (e.g., without AVX2 or AVX512 vector instruction support), so module-loaded software should be preferred for longer-running simulations when it exists.<br />
<br />
'''NOTE:''' The message <code>failed to lock thread to CPU XX</code> is a harmless warning that can be ignored.<br />
<br />
== Enabling and disabling the nix environment ==<br />
<br />
The user’s current nix environment is enabled by loading the nix module. This creates some ''.nix*'' files and sets some environment variables.<br />
<br />
<source lang="bash">[name@cluster:~]$ module load nix</source><br />
It is disabled by unloading the nix module. This unsets the environment variables but leaves the ''.nix*'' files alone.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix</source><br />
== Completely resetting the nix environment ==<br />
<br />
Most per-user operations can be undone with the <code>--rollback</code> option (i.e., <code>nix-env --rollback</code> or <code>nix-channel --rollback</code>). Sometimes it is useful to entirely reset nix though. This is done by unloading the module, erasing all user related nix files, and then reloading the module file.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix<br />
[name@cluster:~]$ rm -fr ~/.nix-profile ~/.nix-defexpr ~/.nix-channels ~/.config/nixpkgs<br />
[name@cluster:~]$ rm -fr /nix/var/nix/profiles/per-user/$USER /nix/var/nix/gcroots/per-user/$USER<br />
[name@cluster:~]$ module load nix</source><br />
= Existing compositions =<br />
<br />
The <code>nix search</code> command can be used to locate already available compositions<br />
<br />
<source lang="bash">[user@cluster:~]$ nix search git<br />
...<br />
* nixpkgs.git (git-minimal-2.19.3)<br />
Distributed version control system<br />
...</source><br />
Pro tips include<br />
<br />
* you need to specify <code>-u</code> after upgrading your channel (this will take a while)<br />
* the search string is actually a regular expression and multiple ones are ANDed together<br />
<br />
Often our usage of a composition is a one-off, a per-project, or an all-the-time situation. Nix supports all three of these cases.<br />
<br />
== One offs ==<br />
<br />
If you just want to use a composition once, the easiest way is to use the <code>nix run</code> command. This command will start a shell in which <code>PATH</code> has been extended to include the specified composition<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.git<br />
[user@cluster:~]$ git<br />
[user@cluster:~]$ exit</source><br />
Note that this does not protect the composition from being garbage collected overnight (e.g., the composition is only guaranteed to be around temporarily for your use until sometime in the wee hours of the morning). Pro tips include<br />
<br />
* you can specify more than one composition in the same <code>nix run</code> command<br />
* you can specify a command instead of a shell with <code>-c &lt;cmd&gt; &lt;args&gt; ...</code><br />
<br />
== Per-project ==<br />
<br />
If you want to use a program for a specific project, the easiest way is with the <code>nix build</code> command. This command will create a symbolic link (by default named <code>result</code>) from which you can access the program's ''bin'' directory to run it.<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.git<br />
[user@cluster:~]$ ./result/bin/git</source><br />
Note that (currently) the composition will only be protected from overnight garbage collection if you output the symlink into your ''home'' directory and do not rename or move it. Pro tips include<br />
<br />
* you can specify the output symlink name with the <code>-o &lt;name&gt;</code> option<br />
* add the ''bin'' directory to your <code>PATH</code> to not have to type it in every time<br />
<br />
== Per-user ==<br />
<br />
Loading the <code>nix</code> module adds the per-user common ''~/.nix-profile/bin'' directory to your <code>PATH</code>. You can add and remove compositions from this directory with the <code>nix-env</code> command<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --install --attr nixpkgs.git<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3</source><br />
<source lang="bash">[user@cluster:~]$ nix-env --uninstall git-minimal<br />
uninstalling 'git-minimal-2.19.3'<br />
[user@cluster:~]$ nix-env --query</source><br />
Each command actually creates a new generation, so all prior generations remain and can be used<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --list-generations<br />
1 2020-07-29 13:10:03<br />
2 2020-07-29 13:11:52 (current)<br />
[user@cluster:~]$ nix-env --switch-generation 1<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3<br />
[user@cluster:~]$ nix-env --switch-generation 2<br />
[user@cluster:~]$ nix-env --query</source><br />
Pro tips include<br />
<br />
* <code>nix-env --rollback</code> moves back one generation<br />
* <code>nix-env --delete-generations &lt;time&gt;</code> deletes environments older than <code>&lt;time&gt;</code> (e.g., <code>30d</code>)<br />
* see our [[Using Nix: nix-env|nix-env page]] for a much more in-depth discussion of using <code>nix-env</code><br />
<br />
= Creating compositions =<br />
<br />
Often we require our own unique composition. A basic example would be to bundle all the binaries from multiple existing compositions in a common ''bin'' directory (e.g., <code>make</code>, <code>gcc</code>, and <code>ld</code> to build a simple C program). A more complex example would be to bundle python with a set of python libraries by wrapping the python executables with shell scripts to set <code>PYTHONPATH</code> for the python libraries before running the real python binaries.<br />
<br />
All of these have a common format. You write a nix expression in a <code>.nix</code> file that composes together existing compositions and then you tell the above commands to use that file with the <code>-f &lt;nix file&gt;</code> option. For example, if the file <code>python.nix</code> contains an expression for a python environment, you can create a per-project ''bin'' directory with<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build -f python.nix -o python<br />
[user@cluster:~]$ ./python/bin/python</source><br />
The nix expression you put in the file generally<br />
<br />
* does <code>with import &lt;nixpkgs&gt; {}</code> to bring the set of nixpkgs into scope<br />
* calls an existing composition functions with a list of space-separated components to include<br />
<br />
The template for doing the second of these follows below, as it differs slightly across the various ecosystems.<br />
<br />
A pro tip is<br />
<br />
* there are many [https://nixos.org/nixpkgs/manual/#chap-language-support languages and frameworks supported] but only a few are described here; send us an email if you would like a missing supported one added here<br />
<br />
== Generic ==<br />
<br />
Nixpkgs provides a <code>buildEnv</code> function that does a basic composition of compositions (by combining their ''bin'', ''lib'', etc. directories). The package names are the same as used before minus the leading <code>nixpkgs</code> prefix, as it was imported (e.g., <code>git</code> instead of <code>nixpkgs.git</code>).<br />
<br />
<source lang="nix">with import <nixpkgs> {};<br />
buildEnv {<br />
name = "my-environment";<br />
paths = [<br />
... list of compositions ...<br />
];<br />
}</source><br />
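As a concrete illustration (our example, not from the upstream documentation), the following expression bundles <code>git</code>, <code>gnumake</code>, and <code>gcc</code> into one composition whose combined ''bin'' directory contains all three:<br />

```nix
# Hypothetical example composition; save as tools.nix and build with:
#   nix build -f tools.nix -o tools
with import <nixpkgs> {};
buildEnv {
  name = "my-build-tools";
  paths = [
    git       # version control
    gnumake   # the make build tool
    gcc       # C/C++ compiler
  ];
}
```

After building, <code>./tools/bin</code> holds symlinks to all three programs.<br />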
== Python ==<br />
<br />
Nixpkgs provides the following python related attributes<br />
<br />
* <code>python&lt;major&gt;&lt;minor&gt;</code> - a composition providing the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> - the set of python compositions using the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.withPackages</code> - wraps python with <code>PYTHONPATH</code> set to a given set of python packages<br />
<br />
We can use the second of these directly to use the programs provided by python compositions<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.python36.pkgs.spambayes<br />
[user@cluster:~]$ sb_filter.py --help<br />
[user@cluster:~]$ exit</source><br />
and the last in a <code>.nix</code> file to create a python composition that enables a given set of libraries (e.g., a <code>python</code> command we can run and access the given set of python packages from)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
python.withPackages (packages:<br />
with packages; [<br />
... list of python packages ...<br />
]<br />
)</source><br />
Some pro tips are<br />
<br />
* the aliases <code>python</code> and <code>python&lt;major&gt;</code> give default <code>python&lt;major&gt;&lt;minor&gt;</code> versions<br />
* the aliases <code>python&lt;major&gt;&lt;minor&gt;Packages</code> are short for <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> (including default version variants)<br />
* the function <code>python&lt;major&gt;&lt;minor&gt;.pkgs.buildPythonPackage</code> can be used to build your own python packages<br />
<br />
== R ==<br />
<br />
Nixpkgs provides the following R related attributes<br />
<br />
* <code>R</code> - a composition providing R<br />
* <code>rstudio</code> - a composition providing RStudio<br />
* <code>rPackages</code> - the set of R packages<br />
* <code>rWrapper</code> - a composition that wraps R with <code>R_LIBS</code> set to a minimal set of R packages<br />
* <code>rstudioWrapper</code> - a composition that wraps RStudio with <code>R_LIBS</code> set to a minimal set of R packages<br />
<br />
We can use <code>rPackages</code> directly to examine the content of R packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.rPackages.exams -o exams<br />
[user@cluster:~]$ cat exams/library/exams/NEWS<br />
[user@cluster:~]$ exit</source><br />
and the latter two can be overridden in a <code>.nix</code> file to create R or RStudio compositions that enable a given set of R libraries (e.g., an <code>R</code> or <code>rstudio</code> command we can run and access the given set of R packages from)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
rWrapper.override {<br />
packages = with rPackages; [<br />
... list of R packages ...<br />
];<br />
}</source><br />
A pro tip is<br />
<br />
* the function <code>rPackages.buildRPackage</code> can be used to build your own R packages<br />
<br />
== Haskell ==<br />
<br />
Nixpkgs provides the following haskell related attributes<br />
<br />
* <code>haskell.compiler.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - composition providing the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - the set of haskell packages compiled by the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.ghc.withPackages</code> - composition wrapping ghc to enable the given packages<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.ghc.withHoogle</code> - composition wrapping ghc to enable the given packages with hoogle and documentation indices<br />
<br />
We can use the first directly to use programs provided by haskell packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.haskell.packages.ghc864.pandoc<br />
[user@cluster:~]$ pandoc --help</source><br />
and the last two in a <code>.nix</code> file to create a ghc environment that enables a given set of haskell packages (e.g., a <code>ghci</code> command we can run and access the given set of packages from)<br />
<br />
<pre>with import &lt;nixpkgs&gt; { };<br />
haskell.packages.ghc864.ghc.withPackages (packages:<br />
with packages; [<br />
... list of Haskell packages ...<br />
]<br />
)</pre><br />
Some pro tips are<br />
<br />
* the alias <code>haskellPackages</code> gives a default <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code><br />
* the attributes in <code>haskell.lib</code> contains a variety of useful attributes for tweaking haskell packages (e.g., enabling profiling, etc.)<br />
* the upstream maintainer has a useful [https://www.youtube.com/watch?v=KLhkAEk8I20 youtube video] on how to fix broken haskell packages<br />
<br />
== Emacs ==<br />
<br />
Nixpkgs provides the following emacs related attributes (append a <code>Ng</code> suffix for older versions of nixpkgs, e.g., <code>emacs25Ng</code> and <code>emacs25PackagesNg</code>)<br />
<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;</code> - a composition providing the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages</code> - the set of emacs packages for the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> - composition wrapping emacs to enable the given packages<br />
<br />
We can use the second of these directly to examine the content of packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.emacs25Packages.magit -o magit<br />
[user@cluster:~]$ cat magit/share/emacs/site-lisp/elpa/magit*/AUTHORS.md<br />
[user@cluster:~]$ exit</source><br />
and the last one in a <code>.nix</code> file to create a composition giving emacs with the given set of packages enabled<br />
<br />
<pre>with import &lt;nixpkgs&gt; { };<br />
emacs25Packages.emacsWithPackages (packages:<br />
with packages; [<br />
... list of emacs packages ...<br />
]<br />
)</pre><br />
Some pro tips are<br />
<br />
* the aliases <code>emacs</code> and <code>emacsPackages</code> give a default <code>emacs&lt;major&gt;&lt;minor&gt;</code> and <code>emacsPackages&lt;major&gt;&lt;minor&gt;</code> version<br />
* the aliases <code>emacs&lt;major&gt;&lt;minor&gt;WithPackages</code> are short for <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> (including default version variants)<br />
<br />
[[Category:Software]]</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=ARM_software&diff=96614ARM software2021-03-08T18:24:45Z<p>Tyson: Morning after minor grammar fixups... :)</p>
<hr />
<div><languages /><br />
[[Category:Software]] [[Category:Pages with video links]]<br />
<translate><br />
= Introduction = <!--T:1--><br />
<br />
<!--T:2--><br />
[https://www.arm.com/products/development-tools/hpc-tools/cross-platform/forge/ddt ARM DDT] (formerly known as Allinea DDT) is a powerful commercial parallel debugger with a graphical user interface. It can be used to debug serial, MPI, multi-threaded, and CUDA programs, or any combination of the above, written in C, C++, and FORTRAN. [https://www.arm.com/products/development-tools/hpc-tools/cross-platform/forge/map MAP], an efficient parallel profiler, is another very useful tool from ARM (formerly Allinea).<br />
<br />
<!--T:3--><br />
The following modules are available on Graham:<br />
* ddt-cpu, for CPU debugging and profiling;<br />
* ddt-gpu, for GPU or mixed CPU/GPU debugging.<br />
<br />
<!--T:38--><br />
The following module is available on Niagara:<br />
* ddt<br />
<br />
<!--T:39--><br />
As this is a GUI application, log in using <code>ssh -Y</code>, and use an [[SSH|SSH client]] like [[Connecting with MobaXTerm|MobaXTerm]] (Windows) or [https://www.xquartz.org/ XQuartz] (Mac) to ensure proper X11 tunnelling.<br />
<br />
<!--T:4--><br />
Both DDT and MAP are normally used interactively through their GUI, which is normally accomplished using the <code>salloc</code> command (see below for details). MAP can also be used non-interactively, in which case it can be submitted to the scheduler with the <code>sbatch</code> command.<br />
<br />
<!--T:5--><br />
The current license limits the use of DDT/MAP to a maximum of 512 CPU cores across all users at any given time, while DDT-GPU is limited to 8 GPUs.<br />
<br />
= Usage = <!--T:6--><br />
== CPU-only code, no GPUs ==<br />
<br />
<!--T:7--><br />
1. Allocate the node or nodes on which to do the debugging or profiling. This will open a shell session on the allocated node.<br />
<br />
<!--T:8--><br />
salloc --x11 --time=0-1:00 --mem-per-cpu=4G --ntasks=4<br />
<br />
<!--T:9--><br />
2. Load the appropriate module, for example<br />
<br />
<!--T:10--><br />
module load ddt-cpu<br />
<br />
<!--T:13--><br />
3. Run the ddt or map command.<br />
<br />
<!--T:14--><br />
ddt path/to/code<br />
map path/to/code<br />
<br />
<!--T:15--><br />
:: Make sure the MPI implementation is the default OpenMPI in the DDT/MAP application window before pressing the ''Run'' button. If this is not the case, press the ''Change'' button next to the ''Implementation:'' string and select the correct option from the drop-down menu. Also, specify the desired number of CPU cores in this window.<br />
<br />
<!--T:16--><br />
4. When done, exit the shell to terminate the allocation.<br />
<br />
<!--T:34--><br />
IMPORTANT: The current versions of DDT and OpenMPI have a compatibility issue which breaks an important feature of DDT: displaying message queues (available from the "Tools" drop-down menu). There is a workaround: before running DDT, you have to execute the following command:<br />
<br />
<!--T:35--><br />
$ export OMPI_MCA_pml=ob1<br />
<br />
<!--T:36--><br />
Be aware that the above workaround can make your MPI code run slower, so only use this trick when debugging.<br />
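<br />
Since this setting is only needed while debugging, you can also apply it to a single command instead of exporting it for the whole shell session; a small sketch (here <code>env</code> stands in for the real <code>ddt path/to/code</code> invocation):<br />
<br />
<source lang="bash">
# Export for the whole shell session (remember to unset when done):
export OMPI_MCA_pml=ob1
# ... run ddt here ...
unset OMPI_MCA_pml

# Or set the variable for a single command only, so that nothing
# persists in the shell afterwards; `env` stands in for ddt:
OMPI_MCA_pml=ob1 env | grep '^OMPI_MCA_pml='
</source>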
<br />
== CUDA code == <!--T:17--><br />
<br />
<!--T:18--><br />
1. Allocate the node or nodes on which to do the debugging or profiling with <code>salloc</code>. This will open a shell session on the allocated node. <br />
<br />
<!--T:19--><br />
salloc --x11 --time=0-1:00 --mem-per-cpu=4G --ntasks=1 --gres=gpu:1<br />
<br />
<!--T:20--><br />
2. Load the appropriate module, for example<br />
<br />
<!--T:21--><br />
module load ddt-gpu<br />
<br />
<!--T:22--><br />
:: This may fail with a suggestion to load an older version of OpenMPI first. In this case, reload the OpenMPI module with the suggested command, and then reload the ddt-gpu module.<br />
<br />
<!--T:23--><br />
module load openmpi/2.0.2<br />
module load ddt-gpu<br />
<br />
<!--T:24--><br />
3. Ensure a cuda module is loaded.<br />
<br />
<!--T:25--><br />
module load cuda<br />
<br />
<!--T:26--><br />
4. Run the ddt command.<br />
<br />
<!--T:27--><br />
ddt path/to/code<br />
<br />
<!--T:40--><br />
If DDT complains about a mismatch between the CUDA driver and toolkit versions, set the following environment variable to the CUDA version reported in the error message, then run DDT again, e.g.<br />
<br />
<!--T:41--><br />
export ALLINEA_FORCE_CUDA_VERSION=10.1<br />
<br />
<!--T:28--><br />
5. When done, exit the shell to terminate the allocation.<br />
<br />
== Using VNC to fix the lag == <!--T:51--><br />
<br />
<!--T:52--><br />
[[File:DDT-VNC-1.png|400px|thumb|right|DDT on '''gra-vdi.computecanada.ca''']]<br />
[[File:DDT-VNC-2.png|400px|thumb|right|Program on '''graham.computecanada.ca''']]<br />
<br />
<!--T:53--><br />
The instructions above use X11 forwarding. X11 is very sensitive to packet latency. As a result, unless you happen to be on the same campus as the computer cluster, the ddt interface will likely be laggy and frustrating to use. This can be fixed by running ddt under VNC.<br />
<br />
<!--T:54--><br />
To do this, follow the directions on our [[VNC|VNC page]] to set up a VNC session. If your VNC session is on a compute node, you can directly start your program under ddt as above. If your VNC session is on a login node, or you are using the Graham VDI node, you need to manually launch the job as follows. From the ddt startup screen,<br />
<br />
<!--T:55--><br />
* pick the ''manually launch backend yourself'' job start option,<br />
* enter the appropriate information for your job and press the ''listen'' button, and<br />
* press the ''help'' button to the right of ''waiting for you to start the job...''.<br />
<br />
<!--T:56--><br />
This will then give you the command you need to run to start your job. Allocate a job on the cluster and start your program as directed. An example of doing this would be (where $USER is your username and $PROGRAM ... is the command that starts your program)<br />
<br />
<!--T:57--><br />
<source lang="bash">[name@cluster-login:~]$ salloc ...<br />
[name@cluster-node:~]$ /cvmfs/restricted.computecanada.ca/easybuild/software/2020/Core/allinea/20.2/bin/forge-client --ddtsessionfile /home/$USER/.allinea/session/gra-vdi3-1 $PROGRAM ...<br />
</source><br />
<br />
= Known issues = <!--T:33--><br />
<br />
<!--T:42--><br />
On Graham, if you are experiencing issues getting X11 to work, change the permissions on your home directory so that only you have access.<br />
<br />
<!--T:43--><br />
First, check (and record if needed) current permissions with<br />
<br />
<!--T:44--><br />
ls -ld /home/$USER<br />
<br />
<!--T:45--><br />
The output should begin with:<br />
<br />
<!--T:46--><br />
drwx------<br />
<br />
<!--T:47--><br />
If some of the dashes are replaced by letters, that means your group and other users have read, write (unlikely), or execute permissions on your directory. <br />
<br />
<!--T:48--><br />
This command will work to remove read and execute permissions for group and other users:<br />
<br />
<!--T:49--><br />
chmod go-rx /home/$USER<br />
<br />
<!--T:50--><br />
After you are done using DDT, you can restore the permissions to what they were, if you like (assuming you recorded them). More information on how to do this can be found on the [[Sharing_data]] page.<br />
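<br />
If you want to rehearse the permission change safely first, you can try it on a scratch directory; a sketch using a test directory in place of your home directory (<code>stat -c %a</code> prints octal permissions on Linux):<br />
<br />
<source lang="bash">
# Create a test directory with group/other access, standing in for $HOME
mkdir -p testdir
chmod 755 testdir             # drwxr-xr-x

# Record the current permissions in octal so they can be restored later
saved=$(stat -c %a testdir)   # 755

# Remove read and execute permissions for group and other users
chmod go-rx testdir           # now drwx------ (700)

# Later, restore the recorded permissions
chmod "$saved" testdir
</source>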
<br />
= See also = <!--T:37--><br />
* [https://www.youtube.com/watch?v=YsF5KMr9uEQ "Code profiling on Graham"], video, 54 minutes.<br />
* [https://www.sharcnet.ca/help/index.php/Parallel_Debugging_with_DDT A short DDT tutorial.]<br />
<br />
</translate></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=File:DDT-VNC-2.png&diff=96423File:DDT-VNC-2.png2021-03-03T19:26:16Z<p>Tyson: Tyson uploaded a new version of File:DDT-VNC-2.png</p>
<hr />
<div>== Summary ==<br />
Client side of remote ddt session.</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=ARM_software&diff=96422ARM software2021-03-03T19:11:49Z<p>Tyson: Use VNC to fix the laggy interface issue</p>
<hr />
<div><languages /><br />
[[Category:Software]] [[Category:Pages with video links]]<br />
<translate><br />
= Introduction = <!--T:1--><br />
<br />
<!--T:2--><br />
[https://www.arm.com/products/development-tools/hpc-tools/cross-platform/forge/ddt ARM DDT] (formerly known as Allinea DDT) is a powerful commercial parallel debugger with a graphical user interface. It can be used to debug serial, MPI, multi-threaded, and CUDA programs, or any combination of the above, written in C, C++, and FORTRAN. [https://www.arm.com/products/development-tools/hpc-tools/cross-platform/forge/map MAP]—an efficient parallel profiler—is another very useful tool from ARM (formerly Allinea).<br />
<br />
<!--T:3--><br />
The following modules are available on Graham:<br />
* ddt-cpu, for CPU debugging and profiling;<br />
* ddt-gpu, for GPU or mixed CPU/GPU debugging.<br />
<br />
<!--T:38--><br />
The following module is available on Niagara:<br />
* ddt<br />
</translate></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=File:DDT-VNC-1.png&diff=96421File:DDT-VNC-1.png2021-03-03T19:10:15Z<p>Tyson: Tyson uploaded a new version of File:DDT-VNC-1.png</p>
<hr />
<div>== Summary ==<br />
Server side of starting a manual ddt session.</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=ParaView&diff=89964ParaView2020-09-23T18:15:23Z<p>Tyson: Marked this version for translation</p>
<hr />
<div><languages /><br />
[[Category:Software]]<br />
__FORCETOC__<br />
<translate><br />
= Client-server visualization = <!--T:1--><br />
<br />
<!--T:2--><br />
'''NOTE 1:''' An important setting in ParaView's preferences is ''Render View -> Remote/Parallel Rendering Options -> Remote Render Threshold.'' If you leave it at the default (20MB) or similar, small scenes will be rendered on your computer's GPU and rotation with the mouse will be fast, but anything modestly intensive that still falls under the threshold will be shipped to your computer, and (depending on your connection) visualization might be slow. If you set it to 0MB, all rendering (including rotation) will be remote, so you will really be using the cluster resources for everything; this is good for large data processing but not so good for interactivity. Experiment with the threshold to find a suitable value.<br />
<br />
<!--T:3--><br />
'''NOTE 2:''' ParaView requires the same major version on the local client and the remote host; this prevents incompatibility that typically shows as a failed handshake when establishing the client-server connection. For example, to use ParaView server version 5.5.2 on the cluster, you need client version 5.5.x on your computer.<br />
<br />
<!--T:4--><br />
Please use the tabs below to select the remote system.<br />
<br />
<!--T:5--><br />
<tabs><br />
<br />
<tab name="Cedar,Graham,Béluga"><br />
== Client-server visualization on Cedar, Graham and Béluga == <!--T:6--><br />
<br />
<!--T:91--><br />
On Cedar / Graham / Béluga, you can do client-server rendering on both CPUs (in software) and GPUs (hardware acceleration). Due to additional complications with GPU rendering, we strongly recommend starting with CPU-only visualization, allocating as many cores as necessary to your rendering. The easiest way to estimate the number of necessary cores is to look at the amount of memory that you think you will need for your rendering and divide it by ~3.5 GB/core. For example, a 40GB dataset (that you load into memory at once, e.g. a single timestep) would require at least 12 cores just to hold the data. Since software rendering is CPU-intensive, we do not recommend allocating more than 4GB/core. In addition, it is important to allocate some memory for filters and data processing (e.g. a structured to unstructured dataset conversion will increase your memory footprint by ~3X); depending on your workflow, you may want to start this rendering with 32 cores or 64 cores. If your ParaView server gets killed when processing these data, you will need to increase the number of cores.<br />
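<br />
The core-count estimate described above is a simple division, rounding up; for example, for a 40GB dataset at ~3.5 GB of memory per core (this gives the minimum just to hold the data, before any filter overhead):<br />
<br />
<source lang="bash">
# ceil(40 / 3.5) = 12 cores as a lower bound for a 40GB dataset
awk 'BEGIN { mem = 40; percore = 3.5; n = int(mem/percore); if (n*percore < mem) n++; print n }'
</source>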
<br />
=== CPU-based visualization === <!--T:10--><br />
<br />
<!--T:11--><br />
You can also do interactive client-server ParaView rendering on cluster CPUs. For some types of rendering, modern CPU-based libraries such as OSPRay and OpenSWR offer performance quite similar to GPU-based rendering. Also, since the ParaView server uses MPI for distributed-memory processing, for very large datasets one can do parallel rendering on a large number of CPU cores, either on a single node, or scattered across multiple nodes.<br />
<br />
<!--T:12--><br />
1. First, install on your computer the same ParaView version as the one available on the cluster you will be using; log into Cedar or Graham and start a serial CPU interactive job.<br />
<br />
<!--T:13--><br />
{{Command|salloc --time{{=}}1:00:0 --ntasks{{=}}1 --account{{=}}def-someprof}}<br />
<br />
<!--T:14--><br />
:The job should automatically start on one of the CPU interactive nodes.<br />
<br />
<!--T:15--><br />
2. At the prompt that is now running inside your job, load the offscreen ParaView module and start the server.<br />
<br />
<!--T:16--><br />
{{Command|module load paraview-offscreen/5.5.2}}<br />
{{Command|pvserver --mesa-swr-avx2 --force-offscreen-rendering<br />
|result=<br />
Waiting for client...<br />
Connection URL: cs://cdr774.int.cedar.computecanada.ca:11111<br />
Accepting connection(s): cdr774.int.cedar.computecanada.ca:11111<br />
}}<br />
<br />
<!--T:17--><br />
:The <code>--mesa-swr-avx2</code> flag is important for much faster software rendering with the OpenSWR library. Wait for the server to be ready to accept client connection.<br />
<br />
<!--T:18--><br />
3. Make a note of the node (in this case cdr774) and the port (usually 11111) and in another terminal on your computer (on Mac/Linux; in Windows use a terminal emulator) link the port 11111 on your computer and the same port on the compute node (make sure to use the correct compute node).<br />
<br />
<!--T:19--><br />
{{Command|prompt=[name@computer $]|ssh <username>@cedar.computecanada.ca -L 11111:cdr774:11111}}<br />
<br />
<!--T:20--><br />
4. Start ParaView on your computer, go to ''File -> Connect'' (or click on the green ''Connect'' button in the toolbar) and click on ''Add Server.'' You will need to point ParaView to your local port 11111, so you can do something like name = cedar, server type = Client/Server, host = localhost, port = 11111; click ''Configure'', select ''Manual'' and click ''Save.''<br />
:Once the remote is added to the configuration, simply select the server from the list and click on ''Connect.'' The first terminal window that read ''Accepting connection'' will now read ''Client connected.''<br />
<br />
<!--T:21--><br />
5. Open a file in ParaView (it will point you to the remote filesystem) and visualize it as usual.<br />
<br />
<!--T:22--><br />
'''NOTE:''' An important setting in ParaView's preferences is ''Render View -> Remote/Parallel Rendering Options -> Remote Render Threshold.'' If you leave it at the default (20MB) or similar, small scenes will be rendered on your computer's GPU and rotation with the mouse will be fast, but anything modestly intensive that still falls under the threshold will be shipped to your computer, and (depending on your connection) visualization might be slow. If you set it to 0MB, all rendering (including rotation) will be remote, so you will really be using the cluster resources for everything; this is good for large data processing but not so good for interactivity. Experiment with the threshold to find a suitable value.<br />
<br><br />
If you want to do parallel rendering on multiple CPUs, start a parallel job; don't forget to specify the correct maximum walltime limit.<br />
<br />
<!--T:24--><br />
{{Command|salloc --time{{=}}0:30:0 --ntasks{{=}}8 --account{{=}}def-someprof}}<br />
<br />
<!--T:25--><br />
Start the ParaView server with <code>srun</code>.<br />
<br />
<!--T:26--><br />
{{Commands<br />
|module load paraview-offscreen/5.5.2<br />
|srun pvserver --mesa --force-offscreen-rendering<br />
}}<br />
<br />
<!--T:27--><br />
The <code>--mesa-swr-avx2</code> flag does not seem to have any effect when running in parallel, so we replaced it with the more generic <code>--mesa</code> flag to (hopefully) enable automatic detection of the best software rendering option.<br />
<br />
<!--T:28--><br />
To check that you are doing parallel rendering, you can pass your visualization through the Process Id Scalars filter and then colour it by "process id".<br />
<br />
=== GPU-based ParaView visualization === <!--T:29--><br />
<br />
<!--T:30--><br />
Cedar and Graham have a number of interactive GPU nodes that can be used for remote client-server visualization.<br />
<br />
<!--T:31--><br />
1. First, install on your computer the same version as the one available on the cluster you will be using; log into Cedar or Graham and start a serial GPU interactive job.<br />
<br />
<!--T:32--><br />
{{Command|salloc --time{{=}}1:00:0 --ntasks{{=}}1 --gres{{=}}gpu:1 --account{{=}}def-someprof}}<br />
<br />
<!--T:33--><br />
:The job should automatically start on one of the GPU interactive nodes.<br />
2. At the prompt that is now running inside your job, load the ParaView GPU+EGL module, change your display variable so that ParaView does not attempt to use the X11 rendering context, and start the ParaView server.<br />
<br />
<!--T:34--><br />
{{Commands<br />
|module load paraview-offscreen-gpu/5.4.0<br />
|unset DISPLAY<br />
}}<br />
{{Command|pvserver<br />
|result=<br />
Waiting for client...<br />
Connection URL: cs://cdr347.int.cedar.computecanada.ca:11111<br />
Accepting connection(s): cdr347.int.cedar.computecanada.ca:11111<br />
}}<br />
<br />
<!--T:35--><br />
:Wait for the server to be ready to accept client connection.<br />
<br />
<!--T:36--><br />
3. Make a note of the node (in this case ''cdr347'') and the port (usually 11111) and in another terminal on your computer (on Mac/Linux; in Windows use a terminal emulator), link the port 11111 on your computer and the same port on the compute node (make sure to use the correct compute node).<br />
<br />
<!--T:37--><br />
{{Command|prompt=[name@computer $]|ssh <username>@cedar.computecanada.ca -L 11111:cdr347:11111}}<br />
<br />
<!--T:38--><br />
4. Start ParaView on your computer, go to ''File -> Connect'' (or click on the green ''Connect'' button on the toolbar) and click on ''Add Server.'' You will need to point ParaView to your local port 11111, so you can do something like name = cedar, server type = Client/Server, host = localhost, port = 11111; click on ''Configure'', select ''Manual'' and click on ''Save.''<br />
:Once the remote is added to the configuration, simply select the server from the list and click on ''Connect.'' The first terminal window that read ''Accepting connection'' will now read ''Client connected.''<br />
<br />
<!--T:39--><br />
5. Open a file in ParaView (it will point you to the remote filesystem) and visualize it as usual.<br />
<br />
</tab><br />
<tab name="Niagara"><br />
== Client-server visualization on Niagara== <!--T:40--><br />
<br />
<!--T:42--><br />
Niagara does not have GPUs; therefore, you are limited to software rendering. With ParaView, you need to explicitly specify one of the mesa flags to tell it not to use OpenGL hardware acceleration, e.g.<br />
<br />
<!--T:43--><br />
{{Commands<br />
|module load paraview<br />
|paraview --mesa-swr<br />
}}<br />
<br />
<!--T:44--><br />
or use one of the flags below.<br />
<br />
<!--T:45--><br />
To access [https://docs.scinet.utoronto.ca/index.php/Niagara_Quickstart#Testing interactive resources on Niagara], you will need to start a <code>debugjob</code>. Here are the steps:<br />
<br />
<!--T:46--><br />
<ol><br />
<li> Launch an interactive job (debugjob).</li><br />
<br />
<!--T:47--><br />
{{Command|debugjob}}<br />
<br />
<!--T:48--><br />
<li> After getting a compute node, let's say niaXYZW, load the ParaView module and start a ParaView server.</li><br />
<br />
<!--T:49--><br />
{{Commands<br />
|module load paraview<br />
|pvserver --mesa-swr-avx2<br />
}}<br />
<br />
<!--T:50--><br />
The <code>--mesa-swr-avx2</code> flag has been reported to offer faster software rendering using the OpenSWR library.<br />
<br />
<!--T:51--><br />
<li> Now, you have to wait a few seconds for the server to be ready to accept client connections.</li><br />
<br />
<!--T:52--><br />
Waiting for client...<br />
Connection URL: cs://niaXYZW.scinet.local:11111<br />
Accepting connection(s): niaXYZW.scinet.local:11111<br />
<br />
<!--T:53--><br />
<li> Open a new terminal without closing your debugjob, and SSH into Niagara using the following command:</li><br />
<br />
<!--T:54--><br />
{{Command|prompt=[name@computer $]|ssh YOURusername@niagara.scinet.utoronto.ca -L11111:niaXYZW:11111 -N}}<br />
<br />
<!--T:55--><br />
this will establish a tunnel mapping the port 11111 in your computer (<code>localhost</code>) to the port 11111 on the Niagara's compute node, <code>niaXYZW</code>, where the ParaView server will be waiting for connections.<br />
<br />
<!--T:56--><br />
<li> Start ParaView on your local computer, go to ''File -> Connect'' and click on ''Add Server.''<br />
You will need to point ParaView to your local port <code>11111</code>, so you can do something like</li><br />
name = niagara<br />
server type = Client/Server<br />
host = localhost<br />
port = 11111<br />
then click on ''Configure'', select ''Manual'' and click on ''Save.''<br />
<br />
<!--T:57--><br />
<li> Once the remote server is added to the configuration, simply select the server from the list and click on ''Connect.''<br />
The first terminal window that read <code>Accepting connection...</code> will now read <code>Client connected</code>.<br />
<br />
<!--T:58--><br />
<li> Open a file in ParaView (it will point you to the remote filesystem) and visualize it as usual.<br />
<br />
<!--T:59--><br />
</ol><br />
<br />
=== Multiple CPUs === <!--T:60--><br />
<br />
<!--T:61--><br />
For performing parallel rendering using multiple CPUs, <code>pvserver</code> should be run using <code>srun</code>, i.e. either submit a job script or request a job using<br />
<br />
<!--T:62--><br />
{{Commands<br />
|salloc --ntasks{{=}}N*40 --nodes{{=}}N --time{{=}}1:00:00<br />
|module load paraview<br />
|srun pvserver --mesa<br />
}}<br />
<br />
<!--T:63--><br />
:where you need to replace <code>N</code> with the number of nodes and <code>N*40</code> with a single number, the product of the two (each Niagara node has 40 cores).<br />
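<br />
For example, two Niagara nodes (40 cores each) correspond to <code>--nodes=2 --ntasks=80</code>; the product can be computed in the shell:<br />
<br />
<source lang="bash">
N=2                    # number of nodes
NTASKS=$(( N * 40 ))   # 40 cores per Niagara node
# prints: salloc --ntasks=80 --nodes=2 --time=1:00:00
echo "salloc --ntasks=$NTASKS --nodes=$N --time=1:00:00"
</source>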
<br />
</tab><br />
<tab name="Cloud VM"><br />
== Client-server visualization on a cloud == <!--T:64--><br />
<br />
=== Prerequisites === <!--T:66--><br />
<br />
<!--T:67--><br />
The [[Cloud Quick Start|Cloud Quick Start Guide]] explains how to launch a new virtual machine (VM). Once you log into the VM, you will need to install some additional packages to be able to compile ParaView or VisIt. For example, on a CentOS VM you can type<br />
<br />
<!--T:68--><br />
{{Commands|prompt=[name@VM $]<br />
|sudo yum install xauth wget gcc gcc-c++ ncurses-devel python-devel libxcb-devel<br />
|sudo yum install patch imake libxml2-python mesa-libGL mesa-libGL-devel<br />
|sudo yum install mesa-libGLU mesa-libGLU-devel bzip2 bzip2-libs libXt-devel zlib-devel flex byacc<br />
|sudo ln -s /usr/include/GL/glx.h /usr/local/include/GL/glx.h<br />
}}<br />
<br />
<!--T:69--><br />
If you have your own private-public SSH key pair (as opposed to the cloud key), you may want to copy the public key to the VM to simplify logins, by issuing the following command on your computer<br />
<br />
<!--T:70--><br />
{{Command|prompt=[name@computer $]|cat ~/.ssh/id_rsa.pub {{!}} ssh -i ~/.ssh/cloudwestkey.pem centos@vm.ip.address 'cat >>.ssh/authorized_keys'}}<br />
<br />
=== Compiling with OSMesa === <!--T:71--><br />
<br />
<!--T:72--><br />
Since the VM does not have access to a GPU (most Arbutus VMs don't), we need to compile ParaView with OSMesa support so that it can do offscreen (software) rendering. The default configuration of OSMesa will enable OpenSWR (Intel's software rasterization library to run OpenGL). What you will end up with is a ParaView server that uses OSMesa for offscreen CPU-based rendering without X but with both <code>llvmpipe</code> (older and slower) and <code>SWR</code> (newer and faster) drivers built. We recommend using SWR.<br />
<br />
<!--T:73--><br />
Back on the VM, compile <code>cmake</code>:<br />
<br />
<!--T:74--><br />
{{Commands|prompt=[name@VM $]<br />
|wget https://cmake.org/files/v3.7/cmake-3.7.0.tar.gz<br />
|unpack and cd there<br />
|./bootstrap<br />
|make<br />
|sudo make install<br />
}}<br />
<br />
<!--T:75--><br />
Next, compile <code>llvm</code>:<br />
<source lang="console"><br />
cd<br />
wget http://releases.llvm.org/3.9.1/llvm-3.9.1.src.tar.xz<br />
unpack and cd there<br />
mkdir -p build && cd build<br />
cmake \<br />
-DCMAKE_BUILD_TYPE=Release \<br />
-DLLVM_BUILD_LLVM_DYLIB=ON \<br />
-DLLVM_ENABLE_RTTI=ON \<br />
-DLLVM_INSTALL_UTILS=ON \<br />
-DLLVM_TARGETS_TO_BUILD:STRING=X86 \<br />
..<br />
make<br />
sudo make install<br />
</source><br />
<br />
<!--T:77--><br />
Next, compile Mesa with OSMesa:<br />
<source lang="console"><br />
cd<br />
wget ftp://ftp.freedesktop.org/pub/mesa/mesa-17.0.0.tar.gz<br />
unpack and cd there<br />
./configure \<br />
--enable-opengl --disable-gles1 --disable-gles2 \<br />
--disable-va --disable-xvmc --disable-vdpau \<br />
--enable-shared-glapi \<br />
--disable-texture-float \<br />
--enable-gallium-llvm --enable-llvm-shared-libs \<br />
--with-gallium-drivers=swrast,swr \<br />
--disable-dri \<br />
--disable-egl --disable-gbm \<br />
--disable-glx \<br />
--disable-osmesa --enable-gallium-osmesa<br />
make<br />
sudo make install<br />
</source><br />
<br />
<!--T:79--><br />
Next, compile the ParaView server:<br />
<source lang="console"><br />
cd<br />
wget http://www.paraview.org/files/v5.2/ParaView-v5.2.0.tar.gz<br />
unpack and cd there<br />
mkdir -p build && cd build<br />
cmake \<br />
-DCMAKE_BUILD_TYPE=Release \<br />
-DCMAKE_INSTALL_PREFIX=/home/centos/paraview \<br />
-DPARAVIEW_USE_MPI=OFF \<br />
-DPARAVIEW_ENABLE_PYTHON=ON \<br />
-DPARAVIEW_BUILD_QT_GUI=OFF \<br />
-DVTK_OPENGL_HAS_OSMESA=ON \<br />
-DVTK_USE_OFFSCREEN=ON \<br />
-DVTK_USE_X=OFF \<br />
..<br />
make<br />
make install<br />
</source><br />
<br />
=== Client-server mode === <!--T:81--> <br />
<br />
<!--T:82--><br />
You are now ready to start ParaView server on the VM with SWR rendering:<br />
<source lang="console"><br />
./paraview/bin/pvserver --mesa-swr-avx2<br />
</source><br />
<br />
<!--T:97--><br />
Back on your computer, organize an SSH tunnel from the local port 11111 to the VM's port 11111:<br />
<source lang="console"><br />
ssh centos@vm.ip.address -L 11111:localhost:11111<br />
</source><br />
<br />
<!--T:86--><br />
Finally, start the ParaView client on your computer and connect to localhost:11111. If successful, you should be able to open files on the remote VM. During rendering in the console you should see the message ''SWR detected AVX2.''<br />
</tab><br />
</tabs><br />
<br />
= Remote VNC desktop on Graham VDI nodes = <!--T:87--> <br />
<br />
<!--T:96--><br />
For small interactive visualizations requiring up to 256GB memory and 16 cores, you can use Graham's VDI nodes. Unlike client-server visualizations, on the VDI nodes you'll be using VNC remote desktop. Here are the steps:<br />
<br />
<!--T:92--><br />
1. Connect to gra-vdi with TigerVNC, as described in [https://docs.computecanada.ca/wiki/VNC#VDI_Nodes VDI nodes].<br />
<br />
<!--T:94--><br />
2. Open a terminal window and run the following commands (the [[Using Nix: nix-env|nix-env install command]] only needs to be run initially and when upgrading):<br />
<br />
<!--T:95--><br />
module load nix<br />
nix-env --install --attr nixpkgs.paraview<br />
paraview<br />
<br />
<!--T:98--><br />
The normal paraview 5.5.2 module can also be used on gra-vdi, but it only provides software rendering:<br />
<br />
<!--T:99--><br />
module load CcEnv StdEnv<br />
module load paraview/5.5.2<br />
paraview<br />
<br />
= Batch rendering = <!--T:88--><br />
<br />
<!--T:89--><br />
For large-scale and automated visualization, we strongly recommend switching from interactive client-server to off-screen batch visualization. ParaView supports Python scripting, so you can script your workflow and submit it as a regular, possibly parallel production job on a cluster. If you need any help with this, please contact [[Technical support]].<br />
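As a rough sketch of such a job, the script below submits an 8-task off-screen rendering run (assumptions: <code>visualize.py</code> is a hypothetical ParaView Python script, e.g. recorded with ''Tools -> Start Trace'' in the ParaView GUI; the account name and module version are placeholders to adapt):<br />

```shell
#!/bin/bash
#SBATCH --time=0:30:0
#SBATCH --ntasks=8
#SBATCH --account=def-someprof   # placeholder: use your own allocation
module load paraview-offscreen/5.5.2
# pvbatch executes a ParaView Python script off-screen; --mesa mirrors the
# software-rendering flag used with pvserver elsewhere on this page.
srun pvbatch --mesa visualize.py
```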
</translate></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=ParaView&diff=89961ParaView2020-09-23T15:41:33Z<p>Tyson: Wording improvement</p>
<hr />
<div><languages /><br />
[[Category:Software]]<br />
__FORCETOC__<br />
<translate><br />
= Client-server visualization = <!--T:1--><br />
<br />
<!--T:2--><br />
'''NOTE 1:''' An important setting in ParaView's preferences is ''Render View -> Remote/Parallel Rendering Options -> Remote Render Threshold.'' If you set it to default (20MB) or similar, small rendering will be done on your computer's GPU, the rotation with a mouse will be fast, but anything modestly intensive (under 20MB) will be shipped to your computer and (depending on your connection) visualization might be slow. If you set it to 0MB, all rendering will be remote including rotation, so you will really be using the cluster resources for everything, which is good for large data processing but not so good for interactivity. Experiment with the threshold to find a suitable value.<br />
<br />
<!--T:3--><br />
'''NOTE 2:''' ParaView requires the same major version on the local client and the remote host; this prevents incompatibility that typically shows as a failed handshake when establishing the client-server connection. For example, to use ParaView server version 5.5.2 on the cluster, you need client version 5.5.x on your computer.<br />
<br />
<!--T:4--><br />
Please use the tabs below to select the remote system.<br />
<br />
<!--T:5--><br />
<tabs><br />
<br />
<tab name="Cedar,Graham,Béluga"><br />
== Client-server visualization on Cedar, Graham and Béluga == <!--T:6--><br />
<br />
<!--T:91--><br />
On Cedar / Graham / Béluga, you can do client-server rendering on both CPUs (in software) and GPUs (hardware acceleration). Due to additional complications with GPU rendering, we strongly recommend starting with CPU-only visualization, allocating as many cores as necessary to your rendering. The easiest way to estimate the number of necessary cores is to look at the amount of memory that you think you will need for your rendering and divide it by ~3.5 GB/core. For example, a 40GB dataset (that you load into memory at once, e.g. a single timestep) would require at least 12 cores just to hold the data. Since software rendering is CPU-intensive, we do not recommend allocating more than 4GB/core. In addition, it is important to allocate some memory for filters and data processing (e.g. a structured to unstructured dataset conversion will increase your memory footprint by ~3X); depending on your workflow, you may want to start this rendering with 32 cores or 64 cores. If your ParaView server gets killed when processing these data, you will need to increase the number of cores.<br />
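The rule of thumb above is easy to script; for example, in a shell (a minimal sketch — the ~3.5 GB/core figure is the one quoted above, and integer arithmetic is used to round up):<br />

```shell
# Estimate the minimum number of cores needed to hold data_gb of data
# at ~3.5 GB per core: ceil(data_gb / 3.5), computed in tenths of a GB
# to avoid floating point.
data_gb=40
cores=$(( (data_gb * 10 + 34) / 35 ))
echo "A ${data_gb} GB dataset needs at least ${cores} cores just to hold the data"
```

For the 40 GB example this yields 12 cores, matching the estimate above; allocate more cores to leave room for filters and data processing.<br />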
<br />
=== CPU-based visualization === <!--T:10--><br />
<br />
<!--T:11--><br />
You can also do interactive client-server ParaView rendering on cluster CPUs. For some types of rendering, modern CPU-based libraries such as OSPRay and OpenSWR offer performance quite similar to GPU-based rendering. Also, since the ParaView server uses MPI for distributed-memory processing, for very large datasets one can do parallel rendering on a large number of CPU cores, either on a single node, or scattered across multiple nodes.<br />
<br />
<!--T:12--><br />
1. First, install on your computer the same ParaView version as the one available on the cluster you will be using; log into Cedar or Graham and start a serial CPU interactive job.<br />
<br />
<!--T:13--><br />
{{Command|salloc --time{{=}}1:00:0 --ntasks{{=}}1 --account{{=}}def-someprof}}<br />
<br />
<!--T:14--><br />
:The job should automatically start on one of the CPU interactive nodes.<br />
<br />
<!--T:15--><br />
2. At the prompt that is now running inside your job, load the offscreen ParaView module and start the server.<br />
<br />
<!--T:16--><br />
{{Command|module load paraview-offscreen/5.5.2}}<br />
{{Command|pvserver --mesa-swr-avx2 --force-offscreen-rendering<br />
|result=<br />
Waiting for client...<br />
Connection URL: cs://cdr774.int.cedar.computecanada.ca:11111<br />
Accepting connection(s): cdr774.int.cedar.computecanada.ca:11111<br />
}}<br />
<br />
<!--T:17--><br />
:The <code>--mesa-swr-avx2</code> flag is important for much faster software rendering with the OpenSWR library. Wait for the server to be ready to accept a client connection.<br />
<br />
<!--T:18--><br />
3. Make a note of the node (in this case cdr774) and the port (usually 11111) and in another terminal on your computer (on Mac/Linux; in Windows use a terminal emulator) link the port 11111 on your computer and the same port on the compute node (make sure to use the correct compute node).<br />
<br />
<!--T:19--><br />
{{Command|prompt=[name@computer $]|ssh <username>@cedar.computecanada.ca -L 11111:cdr774:11111}}<br />
<br />
<!--T:20--><br />
4. Start ParaView on your computer, go to ''File -> Connect'' (or click on the green ''Connect'' button in the toolbar) and click on ''Add Server.'' You will need to point ParaView to your local port 11111, so you can do something like name = cedar, server type = Client/Server, host = localhost, port = 11111; click ''Configure'', select ''Manual'' and click ''Save.''<br />
:Once the remote is added to the configuration, simply select the server from the list and click on ''Connect.'' The first terminal window that read ''Accepting connection'' will now read ''Client connected.''<br />
<br />
<!--T:21--><br />
5. Open a file in ParaView (it will point you to the remote filesystem) and visualize it as usual.<br />
<br />
<!--T:22--><br />
'''NOTE:''' An important setting in ParaView's preferences is ''Render View -> Remote/Parallel Rendering Options -> Remote Render Threshold.'' If you set it to default (20MB) or similar, small rendering will be done on your computer's GPU, the rotation with a mouse will be fast, but anything modestly intensive (under 20MB) will be shipped to your computer and (depending on your connection) visualization might be slow. If you set it to 0MB, all rendering will be remote including rotation, so you will really be using the cluster resources for everything, which is good for large data processing but not so good for interactivity. Experiment with the threshold to find a suitable value.<br />
<br><br />
If you want to do parallel rendering on multiple CPUs, start a parallel job; don't forget to specify the correct maximum walltime limit.<br />
<br />
<!--T:24--><br />
{{Command|salloc --time{{=}}0:30:0 --ntasks{{=}}8 --account{{=}}def-someprof}}<br />
<br />
<!--T:25--><br />
Start the ParaView server with <code>srun</code>.<br />
<br />
<!--T:26--><br />
{{Commands<br />
|module load paraview-offscreen/5.5.2<br />
|srun pvserver --mesa --force-offscreen-rendering<br />
}}<br />
<br />
<!--T:27--><br />
The <code>--mesa-swr-avx2</code> flag does not seem to have any effect when running in parallel, so we replaced it with the more generic <code>--mesa</code> to (hopefully) enable automatic detection of the best software rendering option.<br />
<br />
<!--T:28--><br />
To check that you are doing parallel rendering, you can pass your visualization through the Process Id Scalars filter and then colour it by "process id".<br />
<br />
=== GPU-based ParaView visualization === <!--T:29--><br />
<br />
<!--T:30--><br />
Cedar and Graham have a number of interactive GPU nodes that can be used for remote client-server visualization.<br />
<br />
<!--T:31--><br />
1. First, install on your computer the same version as the one available on the cluster you will be using; log into Cedar or Graham and start a serial GPU interactive job.<br />
<br />
<!--T:32--><br />
{{Command|salloc --time{{=}}1:00:0 --ntasks{{=}}1 --gres{{=}}gpu:1 --account{{=}}def-someprof}}<br />
<br />
<!--T:33--><br />
:The job should automatically start on one of the GPU interactive nodes.<br />
2. At the prompt that is now running inside your job, load the ParaView GPU+EGL module, change your display variable so that ParaView does not attempt to use the X11 rendering context, and start the ParaView server.<br />
<br />
<!--T:34--><br />
{{Commands<br />
|module load paraview-offscreen-gpu/5.4.0<br />
|unset DISPLAY<br />
}}<br />
{{Command|pvserver<br />
|result=<br />
Waiting for client...<br />
Connection URL: cs://cdr347.int.cedar.computecanada.ca:11111<br />
Accepting connection(s): cdr347.int.cedar.computecanada.ca:11111<br />
}}<br />
<br />
<!--T:35--><br />
:Wait for the server to be ready to accept a client connection.<br />
<br />
<!--T:36--><br />
3. Make a note of the node (in this case ''cdr347'') and the port (usually 11111) and in another terminal on your computer (on Mac/Linux; in Windows use a terminal emulator), link the port 11111 on your computer and the same port on the compute node (make sure to use the correct compute node).<br />
<br />
<!--T:37--><br />
{{Command|prompt=[name@computer $]|ssh <username>@cedar.computecanada.ca -L 11111:cdr347:11111}}<br />
<br />
<!--T:38--><br />
4. Start ParaView on your computer, go to ''File -> Connect'' (or click on the green ''Connect'' button on the toolbar) and click on ''Add Server.'' You will need to point ParaView to your local port 11111, so you can do something like name = cedar, server type = Client/Server, host = localhost, port = 11111; click on ''Configure'', select ''Manual'' and click on ''Save.''<br />
:Once the remote is added to the configuration, simply select the server from the list and click on ''Connect.'' The first terminal window that read ''Accepting connection'' will now read ''Client connected.''<br />
<br />
<!--T:39--><br />
5. Open a file in ParaView (it will point you to the remote filesystem) and visualize it as usual.<br />
<br />
</tab><br />
<tab name="Niagara"><br />
== Client-server visualization on Niagara== <!--T:40--><br />
<br />
<!--T:42--><br />
Niagara does not have GPUs; therefore, you are limited to software rendering. With ParaView, you need to explicitly specify one of the Mesa flags to tell it not to use OpenGL hardware acceleration, e.g.<br />
<br />
<!--T:43--><br />
{{Commands<br />
|module load paraview<br />
|paraview --mesa-swr<br />
}}<br />
<br />
<!--T:44--><br />
or use one of the flags below.<br />
<br />
<!--T:45--><br />
To access [https://docs.scinet.utoronto.ca/index.php/Niagara_Quickstart#Testing interactive resources on Niagara], you will need to start a <code>debugjob</code>. Here are the steps:<br />
<br />
<!--T:46--><br />
<ol><br />
<li> Launch an interactive job (debugjob).</li><br />
<br />
<!--T:47--><br />
{{Command|debugjob}}<br />
<br />
<!--T:48--><br />
<li> After getting a compute node, let's say niaXYZW, load the ParaView module and start a ParaView server.</li><br />
<br />
<!--T:49--><br />
{{Commands<br />
|module load paraview<br />
|pvserver --mesa-swr-avx2<br />
}}<br />
<br />
<!--T:50--><br />
The <code>--mesa-swr-avx2</code> flag has been reported to offer faster software rendering using the OpenSWR library.<br />
<br />
<!--T:51--><br />
<li> Now, you have to wait a few seconds for the server to be ready to accept client connections.</li><br />
<br />
<!--T:52--><br />
Waiting for client...<br />
Connection URL: cs://niaXYZW.scinet.local:11111<br />
Accepting connection(s): niaXYZW.scinet.local:11111<br />
<br />
<!--T:53--><br />
<li> Open a new terminal without closing your debugjob, and SSH into Niagara using the following command:</li><br />
<br />
<!--T:54--><br />
{{Command|prompt=[name@computer $]|ssh YOURusername@niagara.scinet.utoronto.ca -L11111:niaXYZW:11111 -N}}<br />
<br />
<!--T:55--><br />
This will establish a tunnel mapping port 11111 on your computer (<code>localhost</code>) to port 11111 on Niagara's compute node <code>niaXYZW</code>, where the ParaView server will be waiting for connections.<br />
<br />
<!--T:56--><br />
<li> Start ParaView on your local computer, go to ''File -> Connect'' and click on ''Add Server.''<br />
You will need to point ParaView to your local port <code>11111</code>, so you can do something like</li><br />
name = niagara<br />
server type = Client/Server<br />
host = localhost<br />
port = 11111<br />
then click on ''Configure'', select ''Manual'' and click on ''Save.''<br />
<br />
<!--T:57--><br />
<li> Once the remote server is added to the configuration, simply select the server from the list and click on ''Connect.''<br />
The first terminal window that read <code>Accepting connection...</code> will now read <code>Client connected</code>.<br />
<br />
<!--T:58--><br />
<li> Open a file in ParaView (it will point you to the remote filesystem) and visualize it as usual.<br />
<br />
<!--T:59--><br />
</ol><br />
<br />
=== Multiple CPUs === <!--T:60--><br />
<br />
<!--T:61--><br />
For performing parallel rendering using multiple CPUs, <code>pvserver</code> should be run using <code>srun</code>, i.e. either submit a job script or request a job using<br />
<br />
<!--T:62--><br />
{{Commands<br />
|salloc --ntasks{{=}}N*40 --nodes{{=}}N --time{{=}}1:00:00<br />
|module load paraview<br />
|srun pvserver --mesa<br />
}}<br />
<br />
<!--T:63--><br />
:where you need to replace <code>N</code> with the number of nodes and <code>N*40</code> with the total number of tasks, i.e. the product of the two numbers.<br />
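Since Slurm does not evaluate arithmetic expressions in its options, compute the product yourself; for example (a sketch assuming 40 cores per Niagara node):<br />

```shell
# Request N full Niagara nodes (40 cores each); --ntasks must be given
# as the literal product, not as an expression.
nodes=2
ntasks=$(( nodes * 40 ))
echo "salloc --ntasks=${ntasks} --nodes=${nodes} --time=1:00:00"
```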
<br />
</tab><br />
<tab name="Cloud VM"><br />
== Client-server visualization on a cloud == <!--T:64--><br />
<br />
=== Prerequisites === <!--T:66--><br />
<br />
<!--T:67--><br />
The [[Cloud Quick Start|Cloud Quick Start Guide]] explains how to launch a new virtual machine (VM). Once you log into the VM, you will need to install some additional packages to be able to compile ParaView or VisIt. For example, on a CentOS VM you can type<br />
<br />
<!--T:68--><br />
{{Commands|prompt=[name@VM $]<br />
|sudo yum install xauth wget gcc gcc-c++ ncurses-devel python-devel libxcb-devel<br />
|sudo yum install patch imake libxml2-python mesa-libGL mesa-libGL-devel<br />
|sudo yum install mesa-libGLU mesa-libGLU-devel bzip2 bzip2-libs libXt-devel zlib-devel flex byacc<br />
|sudo ln -s /usr/include/GL/glx.h /usr/local/include/GL/glx.h<br />
}}<br />
<br />
<!--T:69--><br />
If you have your own private-public SSH key pair (as opposed to the cloud key), you may want to copy the public key to the VM to simplify logins, by issuing the following command on your computer<br />
<br />
<!--T:70--><br />
{{Command|prompt=[name@computer $]|cat ~/.ssh/id_rsa.pub {{!}} ssh -i ~/.ssh/cloudwestkey.pem centos@vm.ip.address 'cat >>.ssh/authorized_keys'}}<br />
<br />
=== Compiling with OSMesa === <!--T:71--><br />
<br />
<!--T:72--><br />
Since the VM does not have access to a GPU (most Arbutus VMs don't), we need to compile ParaView with OSMesa support so that it can do offscreen (software) rendering. The default configuration of OSMesa will enable OpenSWR (Intel's software rasterization library to run OpenGL). What you will end up with is a ParaView server that uses OSMesa for offscreen CPU-based rendering without X but with both <code>llvmpipe</code> (older and slower) and <code>SWR</code> (newer and faster) drivers built. We recommend using SWR.<br />
<br />
<!--T:73--><br />
Back on the VM, compile <code>cmake</code>:<br />
<br />
<!--T:74--><br />
{{Commands|prompt=[name@VM $]<br />
|wget https://cmake.org/files/v3.7/cmake-3.7.0.tar.gz<br />
|tar xvfz cmake-3.7.0.tar.gz<br />
|cd cmake-3.7.0<br />
|./bootstrap<br />
|make<br />
|sudo make install<br />
}}<br />
<br />
<!--T:75--><br />
Next, compile <code>llvm</code>:<br />
<source lang="console"><br />
cd<br />
wget http://releases.llvm.org/3.9.1/llvm-3.9.1.src.tar.xz<br />
tar xvf llvm-3.9.1.src.tar.xz<br />
cd llvm-3.9.1.src<br />
mkdir -p build && cd build<br />
cmake \<br />
-DCMAKE_BUILD_TYPE=Release \<br />
-DLLVM_BUILD_LLVM_DYLIB=ON \<br />
-DLLVM_ENABLE_RTTI=ON \<br />
-DLLVM_INSTALL_UTILS=ON \<br />
-DLLVM_TARGETS_TO_BUILD:STRING=X86 \<br />
..<br />
make<br />
sudo make install<br />
</source><br />
<br />
<!--T:77--><br />
Next, compile Mesa with OSMesa:<br />
<source lang="console"><br />
cd<br />
wget ftp://ftp.freedesktop.org/pub/mesa/mesa-17.0.0.tar.gz<br />
tar xvfz mesa-17.0.0.tar.gz<br />
cd mesa-17.0.0<br />
./configure \<br />
--enable-opengl --disable-gles1 --disable-gles2 \<br />
--disable-va --disable-xvmc --disable-vdpau \<br />
--enable-shared-glapi \<br />
--disable-texture-float \<br />
--enable-gallium-llvm --enable-llvm-shared-libs \<br />
--with-gallium-drivers=swrast,swr \<br />
--disable-dri \<br />
--disable-egl --disable-gbm \<br />
--disable-glx \<br />
--disable-osmesa --enable-gallium-osmesa<br />
make<br />
sudo make install<br />
</source><br />
<br />
<!--T:79--><br />
Next, compile the ParaView server:<br />
<source lang="console"><br />
cd<br />
wget http://www.paraview.org/files/v5.2/ParaView-v5.2.0.tar.gz<br />
tar xvfz ParaView-v5.2.0.tar.gz<br />
cd ParaView-v5.2.0<br />
mkdir -p build && cd build<br />
cmake \<br />
-DCMAKE_BUILD_TYPE=Release \<br />
-DCMAKE_INSTALL_PREFIX=/home/centos/paraview \<br />
-DPARAVIEW_USE_MPI=OFF \<br />
-DPARAVIEW_ENABLE_PYTHON=ON \<br />
-DPARAVIEW_BUILD_QT_GUI=OFF \<br />
-DVTK_OPENGL_HAS_OSMESA=ON \<br />
-DVTK_USE_OFFSCREEN=ON \<br />
-DVTK_USE_X=OFF \<br />
..<br />
make<br />
make install<br />
</source><br />
<br />
=== Client-server mode === <!--T:81--> <br />
<br />
<!--T:82--><br />
You are now ready to start ParaView server on the VM with SWR rendering:<br />
<source lang="console"><br />
./paraview/bin/pvserver --mesa-swr-avx2<br />
</source><br />
<br />
<!--T:97--><br />
Back on your computer, organize an SSH tunnel from the local port 11111 to the VM's port 11111:<br />
<source lang="console"><br />
ssh centos@vm.ip.address -L 11111:localhost:11111<br />
</source><br />
<br />
<!--T:86--><br />
Finally, start the ParaView client on your computer and connect to localhost:11111. If successful, you should be able to open files on the remote VM. During rendering, the console should show the message ''SWR detected AVX2''.<br />
</tab><br />
</tabs><br />
<br />
= Remote VNC desktop on Graham VDI nodes = <!--T:87--> <br />
<br />
<!--T:96--><br />
For small interactive visualizations requiring up to 256GB memory and 16 cores, you can use Graham's VDI nodes. Unlike client-server visualizations, on the VDI nodes you'll be using VNC remote desktop. Here are the steps:<br />
<br />
<!--T:92--><br />
1. Connect to gra-vdi as described in [https://docs.computecanada.ca/wiki/VNC#VDI_Nodes TigerVNC].<br />
<br />
<!--T:94--><br />
2. Open a terminal window and run the following commands (the [[Using Nix: nix-env|nix-env install command]] only needs to be run initially and when upgrading):<br />
<br />
<!--T:95--><br />
module load nix<br />
nix-env --install --attr nixpkgs.paraview<br />
paraview<br />
<br />
The normal paraview 5.5.2 module can also be used on gra-vdi, but it only provides software rendering:<br />
<br />
module load CcEnv StdEnv<br />
module load paraview/5.5.2<br />
paraview<br />
<br />
= Batch rendering = <!--T:88--><br />
<br />
<!--T:89--><br />
For large-scale and automated visualization, we strongly recommend switching from interactive client-server to off-screen batch visualization. ParaView supports Python scripting, so you can script your workflow and submit it as a regular, possibly parallel production job on a cluster. If you need any help with this, please contact [[Technical support]].<br />
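As a rough sketch of such a job, the script below submits an 8-task off-screen rendering run (assumptions: <code>visualize.py</code> is a hypothetical ParaView Python script, e.g. recorded with ''Tools -> Start Trace'' in the ParaView GUI; the account name and module version are placeholders to adapt):<br />

```shell
#!/bin/bash
#SBATCH --time=0:30:0
#SBATCH --ntasks=8
#SBATCH --account=def-someprof   # placeholder: use your own allocation
module load paraview-offscreen/5.5.2
# pvbatch executes a ParaView Python script off-screen; --mesa mirrors the
# software-rendering flag used with pvserver elsewhere on this page.
srun pvbatch --mesa visualize.py
```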
</translate></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=ParaView&diff=89911ParaView2020-09-22T03:31:57Z<p>Tyson: gra-vdi directions for hardware OpenGL ParaView</p>
<hr />
<div><languages /><br />
[[Category:Software]]<br />
__FORCETOC__<br />
<translate><br />
= Client-server visualization = <!--T:1--><br />
<br />
<!--T:2--><br />
'''NOTE 1:''' An important setting in ParaView's preferences is ''Render View -> Remote/Parallel Rendering Options -> Remote Render Threshold.'' If you set it to default (20MB) or similar, small rendering will be done on your computer's GPU, the rotation with a mouse will be fast, but anything modestly intensive (under 20MB) will be shipped to your computer and (depending on your connection) visualization might be slow. If you set it to 0MB, all rendering will be remote including rotation, so you will really be using the cluster resources for everything, which is good for large data processing but not so good for interactivity. Experiment with the threshold to find a suitable value.<br />
<br />
<!--T:3--><br />
'''NOTE 2:''' ParaView requires the same major version on the local client and the remote host; this prevents incompatibility that typically shows as a failed handshake when establishing the client-server connection. For example, to use ParaView server version 5.5.2 on the cluster, you need client version 5.5.x on your computer.<br />
<br />
<!--T:4--><br />
Please use the tabs below to select the remote system.<br />
<br />
<!--T:5--><br />
<tabs><br />
<br />
<tab name="Cedar,Graham,Béluga"><br />
== Client-server visualization on Cedar, Graham and Béluga == <!--T:6--><br />
<br />
<!--T:91--><br />
On Cedar / Graham / Béluga, you can do client-server rendering on both CPUs (in software) and GPUs (hardware acceleration). Due to additional complications with GPU rendering, we strongly recommend starting with CPU-only visualization, allocating as many cores as necessary to your rendering. The easiest way to estimate the number of necessary cores is to look at the amount of memory that you think you will need for your rendering and divide it by ~3.5 GB/core. For example, a 40GB dataset (that you load into memory at once, e.g. a single timestep) would require at least 12 cores just to hold the data. Since software rendering is CPU-intensive, we do not recommend allocating more than 4GB/core. In addition, it is important to allocate some memory for filters and data processing (e.g. a structured to unstructured dataset conversion will increase your memory footprint by ~3X); depending on your workflow, you may want to start this rendering with 32 cores or 64 cores. If your ParaView server gets killed when processing these data, you will need to increase the number of cores.<br />
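The rule of thumb above is easy to script; for example, in a shell (a minimal sketch — the ~3.5 GB/core figure is the one quoted above, and integer arithmetic is used to round up):<br />

```shell
# Estimate the minimum number of cores needed to hold data_gb of data
# at ~3.5 GB per core: ceil(data_gb / 3.5), computed in tenths of a GB
# to avoid floating point.
data_gb=40
cores=$(( (data_gb * 10 + 34) / 35 ))
echo "A ${data_gb} GB dataset needs at least ${cores} cores just to hold the data"
```

For the 40 GB example this yields 12 cores, matching the estimate above; allocate more cores to leave room for filters and data processing.<br />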
<br />
=== CPU-based visualization === <!--T:10--><br />
<br />
<!--T:11--><br />
You can also do interactive client-server ParaView rendering on cluster CPUs. For some types of rendering, modern CPU-based libraries such as OSPRay and OpenSWR offer performance quite similar to GPU-based rendering. Also, since the ParaView server uses MPI for distributed-memory processing, for very large datasets one can do parallel rendering on a large number of CPU cores, either on a single node, or scattered across multiple nodes.<br />
<br />
<!--T:12--><br />
1. First, install on your computer the same ParaView version as the one available on the cluster you will be using; log into Cedar or Graham and start a serial CPU interactive job.<br />
<br />
<!--T:13--><br />
{{Command|salloc --time{{=}}1:00:0 --ntasks{{=}}1 --account{{=}}def-someprof}}<br />
<br />
<!--T:14--><br />
:The job should automatically start on one of the CPU interactive nodes.<br />
<br />
<!--T:15--><br />
2. At the prompt that is now running inside your job, load the offscreen ParaView module and start the server.<br />
<br />
<!--T:16--><br />
{{Command|module load paraview-offscreen/5.5.2}}<br />
{{Command|pvserver --mesa-swr-avx2 --force-offscreen-rendering<br />
|result=<br />
Waiting for client...<br />
Connection URL: cs://cdr774.int.cedar.computecanada.ca:11111<br />
Accepting connection(s): cdr774.int.cedar.computecanada.ca:11111<br />
}}<br />
<br />
<!--T:17--><br />
:The <code>--mesa-swr-avx2</code> flag is important for much faster software rendering with the OpenSWR library. Wait for the server to be ready to accept a client connection.<br />
<br />
<!--T:18--><br />
3. Make a note of the node (in this case cdr774) and the port (usually 11111) and in another terminal on your computer (on Mac/Linux; in Windows use a terminal emulator) link the port 11111 on your computer and the same port on the compute node (make sure to use the correct compute node).<br />
<br />
<!--T:19--><br />
{{Command|prompt=[name@computer $]|ssh <username>@cedar.computecanada.ca -L 11111:cdr774:11111}}<br />
<br />
<!--T:20--><br />
4. Start ParaView on your computer, go to ''File -> Connect'' (or click on the green ''Connect'' button in the toolbar) and click on ''Add Server.'' You will need to point ParaView to your local port 11111, so you can do something like name = cedar, server type = Client/Server, host = localhost, port = 11111; click ''Configure'', select ''Manual'' and click ''Save.''<br />
:Once the remote is added to the configuration, simply select the server from the list and click on ''Connect.'' The first terminal window that read ''Accepting connection'' will now read ''Client connected.''<br />
<br />
<!--T:21--><br />
5. Open a file in ParaView (it will point you to the remote filesystem) and visualize it as usual.<br />
<br />
<!--T:22--><br />
'''NOTE:''' An important setting in ParaView's preferences is ''Render View -> Remote/Parallel Rendering Options -> Remote Render Threshold.'' If you set it to default (20MB) or similar, small rendering will be done on your computer's GPU, the rotation with a mouse will be fast, but anything modestly intensive (under 20MB) will be shipped to your computer and (depending on your connection) visualization might be slow. If you set it to 0MB, all rendering will be remote including rotation, so you will really be using the cluster resources for everything, which is good for large data processing but not so good for interactivity. Experiment with the threshold to find a suitable value.<br />
<br><br />
If you want to do parallel rendering on multiple CPUs, start a parallel job; don't forget to specify the correct maximum walltime limit.<br />
<br />
<!--T:24--><br />
{{Command|salloc --time{{=}}0:30:0 --ntasks{{=}}8 --account{{=}}def-someprof}}<br />
<br />
<!--T:25--><br />
Start the ParaView server with <code>srun</code>.<br />
<br />
<!--T:26--><br />
{{Commands<br />
|module load paraview-offscreen/5.5.2<br />
|srun pvserver --mesa --force-offscreen-rendering<br />
}}<br />
<br />
<!--T:27--><br />
The <code>--mesa-swr-avx2</code> flag does not seem to have any effect when running in parallel, so we replaced it with the more generic <code>--mesa</code> to (hopefully) enable automatic detection of the best software rendering option.<br />
<br />
<!--T:28--><br />
To check that you are doing parallel rendering, you can pass your visualization through the Process Id Scalars filter and then colour it by "process id".<br />
<br />
=== GPU-based ParaView visualization === <!--T:29--><br />
<br />
<!--T:30--><br />
Cedar and Graham have a number of interactive GPU nodes that can be used for remote client-server visualization.<br />
<br />
<!--T:31--><br />
1. First, install on your computer the same version as the one available on the cluster you will be using; log into Cedar or Graham and start a serial GPU interactive job.<br />
<br />
<!--T:32--><br />
{{Command|salloc --time{{=}}1:00:0 --ntasks{{=}}1 --gres{{=}}gpu:1 --account{{=}}def-someprof}}<br />
<br />
<!--T:33--><br />
:The job should automatically start on one of the GPU interactive nodes.<br />
2. At the prompt that is now running inside your job, load the ParaView GPU+EGL module, change your display variable so that ParaView does not attempt to use the X11 rendering context, and start the ParaView server.<br />
<br />
<!--T:34--><br />
{{Commands<br />
|module load paraview-offscreen-gpu/5.4.0<br />
|unset DISPLAY<br />
}}<br />
{{Command|pvserver<br />
|result=<br />
Waiting for client...<br />
Connection URL: cs://cdr347.int.cedar.computecanada.ca:11111<br />
Accepting connection(s): cdr347.int.cedar.computecanada.ca:11111<br />
}}<br />
<br />
<!--T:35--><br />
:Wait for the server to be ready to accept a client connection.<br />
<br />
<!--T:36--><br />
3. Make a note of the node (in this case ''cdr347'') and the port (usually 11111) and in another terminal on your computer (on Mac/Linux; in Windows use a terminal emulator), link the port 11111 on your computer and the same port on the compute node (make sure to use the correct compute node).<br />
<br />
<!--T:37--><br />
{{Command|prompt=[name@computer $]|ssh <username>@cedar.computecanada.ca -L 11111:cdr347:11111}}<br />
<br />
<!--T:38--><br />
4. Start ParaView on your computer, go to ''File -> Connect'' (or click on the green ''Connect'' button on the toolbar) and click on ''Add Server.'' You will need to point ParaView to your local port 11111, so you can do something like name = cedar, server type = Client/Server, host = localhost, port = 11111; click on ''Configure'', select ''Manual'' and click on ''Save.''<br />
:Once the remote is added to the configuration, simply select the server from the list and click on ''Connect.'' The first terminal window that read ''Accepting connection'' will now read ''Client connected.''<br />
<br />
<!--T:39--><br />
5. Open a file in ParaView (it will point you to the remote filesystem) and visualize it as usual.<br />
<br />
</tab><br />
<tab name="Niagara"><br />
== Client-server visualization on Niagara== <!--T:40--><br />
<br />
<!--T:42--><br />
Niagara does not have GPUs; therefore, you are limited to software rendering. With ParaView, you need to explicitly specify one of the Mesa flags to tell it not to use OpenGL hardware acceleration, e.g.<br />
<br />
<!--T:43--><br />
{{Commands<br />
|module load paraview<br />
|paraview --mesa-swr<br />
}}<br />
<br />
<!--T:44--><br />
or use one of the flags below.<br />
<br />
<!--T:45--><br />
To access [https://docs.scinet.utoronto.ca/index.php/Niagara_Quickstart#Testing interactive resources on Niagara], you will need to start a <code>debugjob</code>. Here are the steps:<br />
<br />
<!--T:46--><br />
<ol><br />
<li> Launch an interactive job (debugjob).</li><br />
<br />
<!--T:47--><br />
{{Command|debugjob}}<br />
<br />
<!--T:48--><br />
<li> After getting a compute node, let's say niaXYZW, load the ParaView module and start a ParaView server.</li><br />
<br />
<!--T:49--><br />
{{Commands<br />
|module load paraview<br />
|pvserver --mesa-swr-avx2<br />
}}<br />
<br />
<!--T:50--><br />
The <code>--mesa-swr-avx2</code> flag has been reported to offer faster software rendering using the OpenSWR library.<br />
<br />
<!--T:51--><br />
<li> Now, you have to wait a few seconds for the server to be ready to accept client connections.</li><br />
<br />
<!--T:52--><br />
Waiting for client...<br />
Connection URL: cs://niaXYZW.scinet.local:11111<br />
Accepting connection(s): niaXYZW.scinet.local:11111<br />
<br />
<!--T:53--><br />
<li> Open a new terminal without closing your debugjob, and SSH into Niagara using the following command:</li><br />
<br />
<!--T:54--><br />
{{Command|prompt=[name@computer $]|ssh YOURusername@niagara.scinet.utoronto.ca -L11111:niaXYZW:11111 -N}}<br />
<br />
<!--T:55--><br />
This will establish a tunnel mapping port 11111 on your computer (<code>localhost</code>) to port 11111 on Niagara's compute node <code>niaXYZW</code>, where the ParaView server will be waiting for connections.<br />
<br />
<!--T:56--><br />
<li> Start ParaView on your local computer, go to ''File -> Connect'' and click on ''Add Server.''<br />
You will need to point ParaView to your local port <code>11111</code>, so you can do something like</li><br />
name = niagara<br />
server type = Client/Server<br />
host = localhost<br />
port = 11111<br />
then click on ''Configure'', select ''Manual'' and click on ''Save.''<br />
<br />
<!--T:57--><br />
<li> Once the remote server is added to the configuration, simply select the server from the list and click on ''Connect.''<br />
The first terminal window that read <code>Accepting connection...</code> will now read <code>Client connected</code>.<br />
<br />
<!--T:58--><br />
<li> Open a file in ParaView (it will point you to the remote filesystem) and visualize it as usual.<br />
<br />
<!--T:59--><br />
</ol><br />
<br />
=== Multiple CPUs === <!--T:60--><br />
<br />
<!--T:61--><br />
For performing parallel rendering using multiple CPUs, <code>pvserver</code> should be run using <code>srun</code>, i.e. either submit a job script or request a job using<br />
<br />
<!--T:62--><br />
{{Commands<br />
|salloc --ntasks{{=}}N*40 --nodes{{=}}N --time{{=}}1:00:00<br />
|module load paraview<br />
|srun pvserver --mesa<br />
}}<br />
<br />
<!--T:63--><br />
:where you need to replace <code>N</code> with the number of nodes and <code>N*40</code> with a single number equal to their product, since each Niagara node has 40 cores.<br />
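The arithmetic above can be sketched concretely. Assuming a hypothetical 3-node request (Niagara nodes have 40 cores each), the numbers work out as follows; the snippet only prints the command you would run:

```shell
# Compute the --ntasks value from the node count.
NODES=3
NTASKS=$((NODES * 40))   # 3 nodes x 40 cores = 120 tasks
echo "salloc --ntasks=${NTASKS} --nodes=${NODES} --time=1:00:00"
```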
<br />
</tab><br />
<tab name="Cloud VM"><br />
== Client-server visualization on a cloud == <!--T:64--><br />
<br />
=== Prerequisites === <!--T:66--><br />
<br />
<!--T:67--><br />
The [[Cloud Quick Start|Cloud Quick Start Guide]] explains how to launch a new virtual machine (VM). Once you log into the VM, you will need to install some additional packages to be able to compile ParaView or VisIt. For example, on a CentOS VM you can type<br />
<br />
<!--T:68--><br />
{{Commands|prompt=[name@VM $]<br />
|sudo yum install xauth wget gcc gcc-c++ ncurses-devel python-devel libxcb-devel<br />
|sudo yum install patch imake libxml2-python mesa-libGL mesa-libGL-devel<br />
|sudo yum install mesa-libGLU mesa-libGLU-devel bzip2 bzip2-libs libXt-devel zlib-devel flex byacc<br />
|sudo ln -s /usr/include/GL/glx.h /usr/local/include/GL/glx.h<br />
}}<br />
<br />
<!--T:69--><br />
If you have your own private-public SSH key pair (as opposed to the cloud key), you may want to copy the public key to the VM to simplify logins, by issuing the following command on your computer<br />
<br />
<!--T:70--><br />
{{Command|prompt=[name@computer $]|cat ~/.ssh/id_rsa.pub {{!}} ssh -i ~/.ssh/cloudwestkey.pem centos@vm.ip.address 'cat >>.ssh/authorized_keys'}}<br />
<br />
=== Compiling with OSMesa === <!--T:71--><br />
<br />
<!--T:72--><br />
Since the VM does not have access to a GPU (most Arbutus VMs don't), we need to compile ParaView with OSMesa support so that it can do offscreen (software) rendering. The default configuration of OSMesa will enable OpenSWR (Intel's software rasterization library to run OpenGL). What you will end up with is a ParaView server that uses OSMesa for offscreen CPU-based rendering without X but with both <code>llvmpipe</code> (older and slower) and <code>SWR</code> (newer and faster) drivers built. We recommend using SWR.<br />
<br />
<!--T:73--><br />
Back on the VM, compile <code>cmake</code>:<br />
<br />
<!--T:74--><br />
{{Commands|prompt=[name@VM $]<br />
|wget https://cmake.org/files/v3.7/cmake-3.7.0.tar.gz<br />
|tar xzf cmake-3.7.0.tar.gz<br />
|cd cmake-3.7.0<br />
|./bootstrap<br />
|make<br />
|sudo make install<br />
}}<br />
<br />
<!--T:75--><br />
Next, compile <code>llvm</code>:<br />
<source lang="console"><br />
cd<br />
wget http://releases.llvm.org/3.9.1/llvm-3.9.1.src.tar.xz<br />
tar xJf llvm-3.9.1.src.tar.xz<br />
cd llvm-3.9.1.src<br />
mkdir -p build && cd build<br />
cmake \<br />
-DCMAKE_BUILD_TYPE=Release \<br />
-DLLVM_BUILD_LLVM_DYLIB=ON \<br />
-DLLVM_ENABLE_RTTI=ON \<br />
-DLLVM_INSTALL_UTILS=ON \<br />
-DLLVM_TARGETS_TO_BUILD:STRING=X86 \<br />
..<br />
make<br />
sudo make install<br />
</source><br />
<br />
<!--T:77--><br />
Next, compile Mesa with OSMesa:<br />
<source lang="console"><br />
cd<br />
wget ftp://ftp.freedesktop.org/pub/mesa/mesa-17.0.0.tar.gz<br />
tar xzf mesa-17.0.0.tar.gz<br />
cd mesa-17.0.0<br />
./configure \<br />
--enable-opengl --disable-gles1 --disable-gles2 \<br />
--disable-va --disable-xvmc --disable-vdpau \<br />
--enable-shared-glapi \<br />
--disable-texture-float \<br />
--enable-gallium-llvm --enable-llvm-shared-libs \<br />
--with-gallium-drivers=swrast,swr \<br />
--disable-dri \<br />
--disable-egl --disable-gbm \<br />
--disable-glx \<br />
--disable-osmesa --enable-gallium-osmesa<br />
make<br />
sudo make install<br />
</source><br />
<br />
<!--T:79--><br />
Next, compile the ParaView server:<br />
<source lang="console"><br />
cd<br />
wget http://www.paraview.org/files/v5.2/ParaView-v5.2.0.tar.gz<br />
tar xzf ParaView-v5.2.0.tar.gz<br />
cd ParaView-v5.2.0<br />
mkdir -p build && cd build<br />
cmake \<br />
-DCMAKE_BUILD_TYPE=Release \<br />
-DCMAKE_INSTALL_PREFIX=/home/centos/paraview \<br />
-DPARAVIEW_USE_MPI=OFF \<br />
-DPARAVIEW_ENABLE_PYTHON=ON \<br />
-DPARAVIEW_BUILD_QT_GUI=OFF \<br />
-DVTK_OPENGL_HAS_OSMESA=ON \<br />
-DVTK_USE_OFFSCREEN=ON \<br />
-DVTK_USE_X=OFF \<br />
..<br />
make<br />
make install<br />
</source><br />
<br />
=== Client-server mode === <!--T:81--> <br />
<br />
<!--T:82--><br />
You are now ready to start the ParaView server on the VM with SWR rendering:<br />
<source lang="console"><br />
./paraview/bin/pvserver --mesa-swr-avx2<br />
</source><br />
<br />
<!--T:97--><br />
Back on your computer, organize an SSH tunnel from the local port 11111 to the VM's port 11111:<br />
<source lang="console"><br />
ssh centos@vm.ip.address -L 11111:localhost:11111<br />
</source><br />
<br />
<!--T:86--><br />
Finally, start the ParaView client on your computer and connect to localhost:11111. If successful, you should be able to open files on the remote VM. During rendering, the console should print the message ''SWR detected AVX2''.<br />
</tab><br />
</tabs><br />
<br />
= Remote VNC desktop on Graham VDI nodes = <!--T:87--> <br />
<br />
<!--T:96--><br />
For small interactive visualizations requiring up to 256GB memory and 16 cores, you can use Graham's VDI nodes. Unlike client-server visualizations, on the VDI nodes you'll be using VNC remote desktop. Here are the steps:<br />
<br />
<!--T:92--><br />
1. Connect to gra-vdi with TigerVNC, as described in [https://docs.computecanada.ca/wiki/VNC#VDI_Nodes VNC: VDI nodes].<br />
<br />
<!--T:94--><br />
2. Open a terminal window and run the following commands (the [[Using Nix: nix-env|nix-env install command]] only needs to be run once, or when a newer version is desired):<br />
<br />
<!--T:95--><br />
<source lang="bash">module load nix<br />
nix-env --install --attr nixpkgs.paraview<br />
paraview</source><br />
<br />
The normal paraview/5.5.2 module can also be used on gra-vdi, but it only provides software rendering:<br />
<br />
<source lang="bash">module load CcEnv StdEnv<br />
module load paraview/5.5.2<br />
paraview</source><br />
<br />
= Batch rendering = <!--T:88--><br />
<br />
<!--T:89--><br />
For large-scale and automated visualization, we strongly recommend switching from interactive client-server to off-screen batch visualization. ParaView supports Python scripting, so you can script your workflow and submit it as a regular, possibly parallel production job on a cluster. If you need any help with this, please contact [[Technical support]].<br />
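As a sketch, a minimal Slurm job script for such a batch rendering might look like the following; the script name <code>render.py</code> and the resource values are hypothetical placeholders you would adapt to your own workflow:<br />
<br />
<source lang="bash">#!/bin/bash<br />
#SBATCH --ntasks=4<br />
#SBATCH --time=00:30:00<br />
#SBATCH --mem-per-cpu=2G<br />
module load paraview<br />
# render.py is your own ParaView Python script, e.g., recorded with the GUI's trace tool<br />
srun pvbatch render.py</source><br />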
</translate></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_Nix&diff=88400Using Nix2020-08-16T01:41:25Z<p>Tyson: Missing ghc attribute in haskell attribute chains</p>
<hr />
<div>{{Draft}}<br />
<br />
= Overview =<br />
<br />
[https://nixos.org/nix/ Nix] is a software building and composition system that allows users to manage their own persistent software environments. At the moment it is only available on SHARCNET systems (i.e., graham and legacy). If you would like this to change, let us know (it requires some coordination, but isn’t too difficult to do).<br />
<br />
* Supports one-off, per-project, and per-user usage of compositions<br />
* Compositions can be built, installed, upgraded, downgraded, and removed as a user<br />
* Operations either succeed or fail while leaving everything intact (operations are atomic)<br />
* Extremely easy to add and share compositions<br />
<br />
Currently, Nix builds software in a generic manner (e.g., without AVX2 or AVX512 vector instruction support), so module-loaded software should be preferred for longer-running simulations when it exists.<br />
<br />
'''NOTE:''' The message <code>failed to lock thread to CPU XX</code> is a harmless warning that can be ignored.<br />
<br />
== Enabling and disabling the nix environment ==<br />
<br />
The user’s current nix environment is enabled by loading the nix module. This creates some ''.nix*'' files and sets some environment variables.<br />
<br />
<source lang="bash">[name@cluster:~]$ module load nix</source><br />
It is disabled by unloading the nix module. This unsets the environment variables but leaves the ''.nix*'' files alone.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix</source><br />
== Completely resetting the nix environment ==<br />
<br />
Most per-user operations can be undone with the <code>--rollback</code> option (i.e., <code>nix-env --rollback</code> or <code>nix-channel --rollback</code>). Sometimes it is useful to entirely reset nix though. This is done by unloading the module, erasing all user related nix files, and then reloading the module file.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix<br />
[name@cluster:~]$ rm -fr ~/.nix-profile ~/.nix-defexpr ~/.nix-channels ~/.config/nixpkgs<br />
[name@cluster:~]$ rm -fr /nix/var/nix/profiles/per-user/$USER /nix/var/nix/gcroots/per-user/$USER<br />
[name@cluster:~]$ module load nix</source><br />
= Existing compositions =<br />
<br />
The <code>nix search</code> command can be used to locate already available compositions<br />
<br />
<source lang="bash">[user@cluster:~]$ nix search git<br />
...<br />
* nixpkgs.git (git-minimal-2.19.3)<br />
Distributed version control system<br />
...</source><br />
Pro tips include<br />
<br />
* you need to specify <code>-u</code> after upgrading your channel (this will take a while)<br />
* the search string is actually a regular expression and multiple ones are ANDed together<br />
<br />
Often our usage of a composition is either a one-off, a per-project, or an all-the-time situation. Nix supports all three of these cases.<br />
<br />
== One offs ==<br />
<br />
If you just want to use a composition once, the easiest way is to use the <code>nix run</code> command. This command will start a shell in which <code>PATH</code> has been extended to include the specified composition<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.git<br />
[user@cluster:~]$ git<br />
[user@cluster:~]$ exit</source><br />
Note that this does not protect the composition from being garbage collected overnight (i.e., the composition is only guaranteed to remain available until the nightly cleanup in the early-morning hours). Pro tips include<br />
<br />
* you can specify more than one composition in the same <code>nix run</code> command<br />
* you can specify a command instead of a shell with <code>-c &lt;cmd&gt; &lt;args&gt; ...</code><br />
<br />
== Per-project ==<br />
<br />
If you want to use a program for a specific project, the easiest way is with the <code>nix build</code> command. This command will create a symbolic link (by default named <code>result</code>) from which you can access the program's ''bin'' directory to run it.<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.git<br />
[user@cluster:~]$ ./result/bin/git</source><br />
Note that (currently) the composition will only be protected from overnight garbage collection if you output the symlink into your ''home'' directory and do not rename or move it. Pro tips include<br />
<br />
* you can specify the output symlink name with the <code>-o &lt;name&gt;</code> option<br />
* add the ''bin'' directory to your <code>PATH</code> to not have to type it in every time<br />
<br />
== Per-user ==<br />
<br />
Loading the <code>nix</code> module adds the per-user common ''~/.nix-profile/bin'' directory to your <code>PATH</code>. You can add and remove compositions from this directory with the <code>nix-env</code> command<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --install --attr nixpkgs.git<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3</source><br />
<source lang="bash">[user@cluster:~]$ nix-env --uninstall git-minimal<br />
uninstalling 'git-minimal-2.19.3'<br />
[user@cluster:~]$ nix-env --query</source><br />
Each command actually creates a new version, so all prior versions remain and can be used<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --list-generations<br />
1 2020-07-29 13:10:03<br />
2 2020-07-29 13:11:52 (current)<br />
[user@cluster:~]$ nix-env --switch-generation 1<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3<br />
[user@cluster:~]$ nix-env --switch-generation 2<br />
[user@cluster:~]$ nix-env --query</source><br />
Pro tips include<br />
<br />
* <code>nix-env --rollback</code> moves back one generation<br />
* <code>nix-env --delete-generations &lt;time&gt;</code> deletes environments older than <code>&lt;time&gt;</code> (e.g., <code>30d</code>)<br />
* see our [[Using Nix: nix-env|nix-env page]] for a much more in-depth discussion of using <code>nix-env</code><br />
<br />
= Creating compositions =<br />
<br />
Often we require our own unique composition. A basic example would be to bundle all the binaries from multiple existing compositions in a common ''bin'' directory (e.g., <code>make</code>, <code>gcc</code>, and <code>ld</code> to build a simple C program). A more complex example would be to bundle python with a set of python libraries by wrapping the python executables with shell scripts to set <code>PYTHONPATH</code> for the python libraries before running the real python binaries.<br />
<br />
All of these have a common format. You write a nix expression in a <code>.nix</code> file that composes together existing compositions and then you tell the above commands to use that with the <code>-f &lt;nix file&gt;</code> option. For example, say the file <code>python.nix</code> has an expression for a python environment in it, you can create a per-project ''bin'' directory with<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build -f python.nix -o python<br />
[user@cluster:~]$ ./python/bin/python</source><br />
The nix expression you put in the file generally<br />
<br />
* does <code>with import &lt;nixpkgs&gt; {}</code> to bring the set of nixpkgs into scope<br />
* calls an existing composition function with a list of space-separated components to include<br />
<br />
The template for doing the second of these follows below, as it differs slightly across the various ecosystems.<br />
<br />
A pro tip is<br />
<br />
* there are many [https://nixos.org/nixpkgs/manual/#chap-language-support languages and frameworks supported] but only a few are described here; send us an email if you would like a missing supported one added<br />
<br />
== Generic ==<br />
<br />
Nixpkgs provides a <code>buildEnv</code> function that does a basic composition of compositions (by combining their ''bin'', ''lib'', etc. directories). The list of packages is the same as used before, minus the leading <code>nixpkgs</code> prefix since it was imported (e.g., <code>git</code> instead of <code>nixpkgs.git</code>).<br />
<br />
<source lang="nix">with import <nixpkgs> {};<br />
buildEnv {<br />
  name = "my-environment";  # Nix store names cannot contain spaces<br />
paths = [<br />
... list of compositions ...<br />
];<br />
}</source><br />
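For example, a hypothetical <code>vcs.nix</code> bundling the version-control tools used earlier on this page might read (the file name and package choices are illustrative):<br />
<br />
<source lang="nix">with import <nixpkgs> {};<br />
buildEnv {<br />
  name = "vcs-tools";  # arbitrary name, no spaces allowed<br />
  paths = [<br />
    git                # note: no leading nixpkgs. prefix here<br />
    mercurial<br />
  ];<br />
}</source><br />
Built as before with <code>nix build -f vcs.nix -o vcs</code>, this gives a combined <code>./vcs/bin</code> directory.<br />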
== Python ==<br />
<br />
Nixpkgs provides the following python related attributes<br />
<br />
* <code>python&lt;major&gt;&lt;minor&gt;</code> - a composition providing the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> - the set of python compositions using the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.withPackages</code> - wraps python with <code>PYTHONPATH</code> set to a given set of python packages<br />
<br />
We can use the second of these directly to use the programs provided by python compositions<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.python36.pkgs.spambayes<br />
[user@cluster:~]$ sb_filter.py --help<br />
[user@cluster:~]$ exit</source><br />
and the third in a <code>.nix</code> file to create a python composition that enables a given set of libraries (e.g., a <code>python</code> command we can run and access the given set of python packages from)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
python.withPackages (packages:<br />
with packages; [<br />
... list of python packages ...<br />
]<br />
)</source><br />
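As a concrete sketch, a hypothetical <code>python.nix</code> enabling NumPy and Requests would look like:<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
python36.withPackages (packages:<br />
  with packages; [<br />
    numpy     # package names come from python36.pkgs<br />
    requests<br />
  ]<br />
)</source><br />
then build and run it as shown earlier with <code>nix build -f python.nix -o python</code> and <code>./python/bin/python</code>.<br />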
Some pro tips are<br />
<br />
* the aliases <code>python</code> and <code>python&lt;major&gt;</code> give default <code>python&lt;major&gt;&lt;minor&gt;</code> versions<br />
* the aliases <code>python&lt;major&gt;&lt;minor&gt;Packages</code> are short for <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> (including default version variants)<br />
* the function <code>python&lt;major&gt;&lt;minor&gt;.pkgs.buildPythonPackage</code> can be used to build your own python packages<br />
<br />
== R ==<br />
<br />
Nixpkgs provides the following R related attributes<br />
<br />
* <code>R</code> - a composition providing R<br />
* <code>rstudio</code> - a composition providing RStudio<br />
* <code>rPackages</code> - the set of R packages<br />
* <code>rWrapper</code> - a composition that wraps R with <code>R_LIBS</code> set to a minimal set of R packages<br />
* <code>rstudioWrapper</code> - a composition that wraps RStudio with <code>R_LIBS</code> set to a minimal set of R packages<br />
<br />
We can use <code>rPackages</code> directly to examine the content of R packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.rPackages.exams -o exams<br />
[user@cluster:~]$ cat exams/library/exams/NEWS</source><br />
and the latter two can be overridden in a <code>.nix</code> file to create R and RStudio compositions that enable a given set of R packages (e.g., an <code>R</code> or <code>rstudio</code> command we can run and access the given set of R packages from)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
rWrapper.override {<br />
packages = with rPackages; [<br />
... list of R packages ...<br />
];<br />
}</source><br />
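For instance, a hypothetical <code>r.nix</code> enabling ggplot2 would be:<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
rWrapper.override {<br />
  packages = with rPackages; [<br />
    ggplot2   # package names come from rPackages<br />
  ];<br />
}</source><br />
Building this with <code>nix build -f r.nix -o r</code> gives an <code>./r/bin/R</code> that can load the package.<br />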
A pro tip is<br />
<br />
* the function <code>rPackages.buildRPackage</code> can be used to build your own R packages<br />
<br />
== Haskell ==<br />
<br />
Nixpkgs provides the following haskell related attributes<br />
<br />
* <code>haskell.compiler.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - composition providing the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - the set of haskell packages compiled by the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.ghc.withPackages</code> - composition wrapping ghc to enable the given packages<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.ghc.withHoogle</code> - composition wrapping ghc to enable the given packages with hoogle and documentation indices<br />
<br />
We can use the second directly to use programs provided by haskell packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.haskell.packages.ghc864.pandoc<br />
[user@cluster:~]$ pandoc --help</source><br />
and the last two in a <code>.nix</code> file to create a ghc environment that enables a given set of haskell packages (e.g., a <code>ghci</code> we can run and access the given set of packages from)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
haskell.packages.ghc864.ghc.withPackages (packages:<br />
  with packages; [<br />
    ... list of Haskell packages ...<br />
  ]<br />
)</source><br />
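As a concrete sketch, a hypothetical <code>ghc.nix</code> enabling the lens package would be:<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
haskell.packages.ghc864.ghc.withPackages (packages:<br />
  with packages; [<br />
    lens    # package names come from haskell.packages.ghc864<br />
  ]<br />
)</source><br />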
Some pro tips are<br />
<br />
* the alias <code>haskellPackages</code> gives a default <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code><br />
* the attributes in <code>haskell.lib</code> contains a variety of useful attributes for tweaking haskell packages (e.g., enabling profiling, etc.)<br />
* the upstream maintainer has a useful [https://www.youtube.com/watch?v=KLhkAEk8I20 youtube video] on how to fix broken haskell packages<br />
<br />
== Emacs ==<br />
<br />
Nixpkgs provides the following emacs related attributes (append a <code>Ng</code> suffix for older versions of nixpkgs, e.g., <code>emacs25Ng</code> and <code>emacs25PackagesNg</code>)<br />
<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;</code> - a composition providing the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages</code> - the set of emacs packages for the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> - composition wrapping emacs to enable the given packages<br />
<br />
We can use the second directly to examine the content of packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.emacs25Packages.magit -o magit<br />
[user@cluster:~]$ cat magit/share/emacs/site-lisp/elpa/magit*/AUTHORS.md</source><br />
and the last one in a <code>.nix</code> file to create a composition giving emacs with the given set of packages enabled<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
emacs25Packages.emacsWithPackages (packages:<br />
  with packages; [<br />
    ... list of emacs packages ...<br />
  ]<br />
)</source><br />
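For example, a hypothetical <code>emacs.nix</code> enabling the magit package examined above would be:<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
emacs25Packages.emacsWithPackages (packages:<br />
  with packages; [<br />
    magit   # package names come from emacs25Packages<br />
  ]<br />
)</source><br />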
Some pro tips are<br />
<br />
* the aliases <code>emacs</code> and <code>emacsPackages</code> give a default <code>emacs&lt;major&gt;&lt;minor&gt;</code> and <code>emacs&lt;major&gt;&lt;minor&gt;Packages</code> version<br />
* the aliases <code>emacs&lt;major&gt;&lt;minor&gt;WithPackages</code> are short for <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> (including default version variants)<br />
<br />
[[Category:Software]]</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_Nix:_nix-env&diff=88299Using Nix: nix-env2020-08-12T22:52:36Z<p>Tyson: Use composition instead of package to emphasis nix is actually a software composition system and not a package manager</p>
<hr />
<div>{{Draft}}<br />
<br />
This page details using the legacy <code>nix-env</code> command to manage a per-user environment. For an overview of Nix, start with our [[Using Nix|using nix page]].<br />
<br />
= Querying, installing and removing compositions =<br />
<br />
The <code>nix-env</code> command is used to manage your per-user Nix environment. It is actually a legacy command that has not yet been replaced by a newer <code>nix &lt;command&gt;</code> command.<br />
<br />
== What do I have installed and what can I install ==<br />
<br />
Let's first see what we currently have installed.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --query</source><br />
Now let's see what is available. We request the attribute paths (an unambiguous way of specifying an existing composition) and the descriptions too (scroll to the right to see them). This takes a bit of time as it visits a lot of small files; especially over NFS, it can be a good idea to pipe the output to a file and grep that in the future.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --query --available --attr-path --description</source><br />
The newer <code>nix search</code> command is often a better way to locate compositions as it saves a cache so subsequent invocations are quite fast.<br />
<br />
== Installing compositions ==<br />
<br />
Let's say that we need a newer version of git than provided by default. First let's check what our OS comes with.<br />
<br />
<source lang="bash">[name@cluster:~]$ git --version<br />
[name@cluster:~]$ which git</source><br />
Let’s tell Nix to install its version in our environment.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --install --attr nixpkgs.git<br />
[name@cluster:~]$ nix-env --query</source><br />
Let's check out what we have now (it may be necessary to tell bash to forget remembered executable locations with <code>hash -r</code> so it notices the new one).<br />
<br />
<source lang="bash">[name@cluster:~]$ git --version<br />
[name@cluster:~]$ which git</source><br />
== Removing compositions ==<br />
<br />
For completeness, let's add in the other usual version-control suspects.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --install --attr nixpkgs.subversion nixpkgs.mercurial<br />
[name@cluster:~]$ nix-env --query</source><br />
Actually, we probably don’t really want subversion any more. Let’s remove that.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --uninstall subversion<br />
[name@cluster:~]$ nix-env --query</source><br />
= Environments =<br />
<br />
Nix keeps referring to user environments. Each time we install or remove compositions, we create a new environment based on the previous one.<br />
<br />
== Switching between previous environments ==<br />
<br />
This means the previous environments still exist and we can switch back to them at any point. Let’s say we changed our mind and want subversion back. It’s trivial to restore the previous environment.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --rollback<br />
[name@cluster:~]$ nix-env --query</source><br />
Of course we may want to do more than just move to the previous environment. We can get a list of all our environments so far and then jump directly to whatever one we want. Let’s undo the rollback.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --list-generations<br />
[name@cluster:~]$ nix-env --switch-generation 4<br />
[name@cluster:~]$ nix-env --query</source><br />
== Operations are atomic ==<br />
<br />
Due to the atomic property of Nix environments, we can’t be left halfway through installing/updating compositions. They either succeed and create us a new environment or leave us with the previous one intact.<br />
<br />
Let’s go back to the start when we just had Nix itself and install the one true GNU distributed version control system tla. Don’t let it complete though. Hit it with <code>CTRL+c</code> partway through.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --switch-generation 1<br />
[name@cluster:~]$ nix-env --install --attr nixpkgs.tla<br />
CTRL+c</source><br />
Nothing bad happens. The operation didn’t complete so it has no effect on the environment whatsoever.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --query<br />
[name@cluster:~]$ nix-env --list-generations</source><br />
== Nix only does things once ==<br />
<br />
The install and remove commands take the current environment and create a new environment with the changes. This works regardless of which environment we are currently in. Let’s create a new environment from our original environment by just adding git and mercurial.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --list-generations<br />
[name@cluster:~]$ nix-env --install --attr nixpkgs.git nixpkgs.mercurial<br />
[name@cluster:~]$ nix-env --list-generations</source><br />
Notice how much faster it was to install git and mercurial the second time? That is because the software already existed in the local Nix store from the previous installs, so Nix just reused it.<br />
<br />
== Garbage collection ==<br />
<br />
Nix periodically goes through and removes any software not accessible from any existing environments. This means we have to explicitly delete environments we don't want anymore so Nix is able to reclaim the space. We can delete specific environments or any that are sufficiently old.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --delete-generations 30d</source><br />
<br />
[[Category:Software]]</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_Nix&diff=88298Using Nix2020-08-12T22:50:21Z<p>Tyson: Correct sub-page links</p>
<hr />
<div>{{Draft}}<br />
<br />
= Overview =<br />
<br />
[https://nixos.org/nix/ Nix] is a software building and composition system that allows users to manage their own persistent software environments. At the moment it is only available on SHARCNET systems (i.e., graham and legacy). If you would like this to change, let us know (it requires some coordination, but isn’t too difficult to do).<br />
<br />
* Supports one-off, per-project, and per-user usage of compositions<br />
* Compositions can be built, installed, upgraded, downgraded, and removed as a user<br />
* Operations either succeed or fail leaving everything intact (operations are atomic).<br />
* Extremely easy to add and share compositions<br />
<br />
Currently nix is building software in a generic manner (e.g., without AVX2 or AVX512 vector instructions support), so module loaded software should be preferred for longer running simulations when it exists.<br />
<br />
'''NOTE:''' The message <code>failed to lock thread to CPU XX</code> is a harmless warning that can be ignored.<br />
<br />
== Enabling and disabling the nix environment ==<br />
<br />
The user’s current nix environment is enabled by loading the nix module. This creates some ''.nix*'' files and sets some environment variables.<br />
<br />
<source lang="bash">[name@cluster:~]$ module load nix</source><br />
It is disabled by unloading the nix module. This unsets the environment variables but leaves the ''.nix*'' files alone.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix</source><br />
== Completely resetting the nix environment ==<br />
<br />
Most per-user operations can be undone with the <code>--rollback</code> option (i.e., <code>nix-env --rollback</code> or <code>nix-channel --rollback</code>). Sometimes it is useful to entirely reset nix though. This is done by unloading the module, erasing all user related nix files, and then reloading the module file.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix<br />
[name@cluster:~]$ rm -fr ~/.nix-profile ~/.nix-defexpr ~/.nix-channels ~/.config/nixpkgs<br />
[name@cluster:~]$ rm -fr /nix/var/nix/profiles/per-user/$USER /nix/var/nix/gcroots/per-user/$USER<br />
[name@cluster:~]$ module load nix</source><br />
= Existing compositions =<br />
<br />
The <code>nix search</code> command can be used to locate already available compositions<br />
<br />
<source lang="bash">[user@cluster:~]$ nix search git<br />
...<br />
* nixpkgs.git (git-minimal-2.19.3)<br />
Distributed version control system<br />
...</source><br />
Pro tips include<br />
<br />
* you need to specify <code>-u</code> after upgrading your channel (this will take awhile)<br />
* the search string is actually a regular expression and multiple ones are ANDed together<br />
<br />
Often our usage of a composition is either a one-off, a per-project, or an all the time situations. Nix supports all three of these cases.<br />
<br />
== One offs ==<br />
<br />
If you just want to use a composition once, the easiest was is to use the <code>nix run</code> command. This command will start a shell in which <code>PATH</code> has been extended to include the specified composition<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkg.git<br />
[user@cluster:~]$ git<br />
[user@cluster:~]$ exit</source><br />
Note that this does not protect the composition from being garbage collected overnight (e.g., the composition is only guaranteed to be around temporarily for your use until sometime in the wee-morning hours). Pro tips include<br />
<br />
* you can specify more than one composition in the same <code>nix run</code> command<br />
* you can specify a command instead of a shell with <code>-c &lt;cmd&gt; &lt;args&gt; ...</code><br />
<br />
== Per-project ==<br />
<br />
If you want to use a program for a specific project, the easiest way is with the <code>nix build</code> command. This command will create a symbolic link (by default named <code>result</code>) from which you can access the program's ''bin'' directory to run it.<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.git<br />
[user@cluster:~]$ ./result/bin/git</source><br />
Note that (currently) the composition will only be protected from overnight garbage collection if you output the symlink into your ''home'' directory and do not rename or move it. Pro tips include<br />
<br />
* you can specify the output symlink name with the <code>-o &lt;name&gt;</code> option<br />
* add the ''bin'' directory to your <code>PATH</code> to not have to type it in every time<br />
<br />
== Per-user ==<br />
<br />
Loading the <code>nix</code> module adds the per-user common ''~/.nix-profile/bin'' directory to your <code>PATH</code>. You can add and remove compositions from this directory with the <code>nix-env</code> command<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --install --attr nixpkgs.git<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3</source><br />
<source lang="bash">[user@cluster:~]$ nix-env --uninstall git-minimal<br />
uninstalling 'git-minimal-2.19.3'<br />
[user@cluster:~]$ nix-env --query</source><br />
Each command actually creates a new generation, so all prior generations remain and can be used<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --list-generations<br />
1 2020-07-29 13:10:03<br />
2 2020-07-29 13:11:52 (current)<br />
[user@cluster:~]$ nix-env --switch-generation 1<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3<br />
[user@cluster:~]$ nix-env --switch-generation 2<br />
[user@cluster:~]$ nix-env --query</source><br />
Pro tips include<br />
<br />
* <code>nix-env --rollback</code> moves back one generation<br />
* <code>nix-env --delete-generations &lt;time&gt;</code> deletes environments older than <code>&lt;time&gt;</code> (e.g., <code>30d</code>)<br />
* see our [[Using Nix: nix-env|nix-env page]] for a much more in-depth discussion of using <code>nix-env</code><br />
<br />
= Creating compositions =<br />
<br />
Often we require our own unique composition. A basic example would be to bundle all the binaries from multiple existing compositions in a common ''bin'' directory (e.g., <code>make</code>, <code>gcc</code>, and <code>ld</code> to build a simple C program). A more complex example would be to bundle python with a set of python libraries by wrapping the python executables with shell scripts that set <code>PYTHONPATH</code> for the python libraries before running the real python binaries.<br />
<br />
All of these have a common format. You write a nix expression in a <code>.nix</code> file that composes together existing compositions, and then you tell the above commands to use it with the <code>-f &lt;nix file&gt;</code> option. For example, if the file <code>python.nix</code> contains an expression for a python environment, you can create a per-project ''bin'' directory with<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build -f python.nix -o python<br />
[user@cluster:~]$ ./python/bin/python</source><br />
The nix expression you put in the file generally<br />
<br />
* does <code>with import &lt;nixpkgs&gt; {}</code> to bring the set of nixpkgs into scope<br />
* calls an existing composition function with a list of space-separated components to include<br />
<br />
The template for the second of these is given below, as it differs slightly across the various ecosystems.<br />
<br />
A pro tip is<br />
<br />
* there are many [https://nixos.org/nixpkgs/manual/#chap-language-support supported languages and frameworks] but only a few are described here; send us an email if you would like a missing supported one added here<br />
<br />
== Generic ==<br />
<br />
Nixpkgs provides a <code>buildEnv</code> function that does a basic composition of compositions (by combining their ''bin'', ''lib'', etc. directories). The list of packages is the same as used before minus the leading <code>nixpkgs</code> prefix, as it was imported (e.g., <code>git</code> instead of <code>nixpkgs.git</code>).<br />
<br />
<source lang="nix">with import <nixpkgs> {};<br />
buildEnv {<br />
  name = "my-environment";<br />
  paths = [<br />
    ... list of compositions ...<br />
  ];<br />
}</source><br />
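For a concrete illustration of the generic template, a hypothetical <code>cdev.nix</code> could bundle a C toolchain for building the simple C program mentioned earlier (this sketch assumes the attributes <code>gcc</code>, <code>gnumake</code>, and <code>binutils</code> are available in your nixpkgs channel)<br />
<br />
<source lang="nix">with import <nixpkgs> {};<br />
# Combine the bin directories of a C toolchain into a single composition.<br />
buildEnv {<br />
  name = "c-dev-env";<br />
  paths = [<br />
    gcc       # C compiler<br />
    gnumake   # make<br />
    binutils  # ld and friends<br />
  ];<br />
}</source><br />
It could then be built with <code>nix build -f cdev.nix -o cdev</code> and used as <code>./cdev/bin/gcc</code>, etc.<br />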
== Python ==<br />
<br />
Nixpkgs provides the following python related attributes<br />
<br />
* <code>python&lt;major&gt;&lt;minor&gt;</code> - a composition providing the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> - the set of python compositions using the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.withPackages</code> - wraps python with <code>PYTHONPATH</code> set to a given set of python packages<br />
<br />
We can use the second of these directly to run the programs provided by python compositions<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.python36.pkgs.spambayes<br />
[user@cluster:~]$ sb_filter.py --help<br />
[user@cluster:~]$ exit</source><br />
and the last in a <code>.nix</code> file to create a python composition that enables a given set of libraries (e.g., a <code>python</code> command that we can run and that can access the given set of python packages)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
python.withPackages (packages:<br />
  with packages; [<br />
    ... list of python packages ...<br />
  ]<br />
)</source><br />
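For example, a hypothetical <code>python.nix</code> (as in the per-project example earlier) enabling the numpy and scipy packages, assuming both attributes exist in your channel, would be<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
# A python interpreter able to import numpy and scipy.<br />
python.withPackages (packages:<br />
  with packages; [<br />
    numpy<br />
    scipy<br />
  ]<br />
)</source><br />
which can be built with <code>nix build -f python.nix -o python</code> and run as <code>./python/bin/python</code>.<br />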
Some pro tips are<br />
<br />
* the aliases <code>python</code> and <code>python&lt;major&gt;</code> give default <code>python&lt;major&gt;&lt;minor&gt;</code> versions<br />
* the aliases <code>python&lt;major&gt;&lt;minor&gt;Packages</code> are short for <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> (including default version variants)<br />
* the function <code>python&lt;major&gt;&lt;minor&gt;.pkgs.buildPythonPackage</code> can be used to build your own python packages<br />
<br />
== R ==<br />
<br />
Nixpkgs provides the following R related attributes<br />
<br />
* <code>R</code> - a composition providing R<br />
* <code>rstudio</code> - a composition providing RStudio<br />
* <code>rPackages</code> - the set of R packages<br />
* <code>rWrapper</code> - a composition that wraps R with <code>R_LIBS</code> set to a minimal set of R packages<br />
* <code>rstudioWrapper</code> - a composition that wraps RStudio with <code>R_LIBS</code> set to a minimal set of R packages<br />
<br />
We can use <code>rPackages</code> directly to examine the content of R packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.rPackages.exams -o exams<br />
[user@cluster:~]$ cat exams/library/exams/NEWS</source><br />
and the last two can be overridden in a <code>.nix</code> file to create R and RStudio compositions that enable a given set of R libraries (e.g., an <code>R</code> or <code>rstudio</code> command that we can run and that can access the given set of R packages)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
rWrapper.override {<br />
  packages = with rPackages; [<br />
    ... list of R packages ...<br />
  ];<br />
}</source><br />
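For example, a hypothetical <code>r.nix</code> enabling the ggplot2 package (assuming the <code>ggplot2</code> attribute exists in your channel) would be<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
# An R command with ggplot2 on its library path.<br />
rWrapper.override {<br />
  packages = with rPackages; [<br />
    ggplot2<br />
  ];<br />
}</source><br />
which can be built with <code>nix build -f r.nix -o R</code> and run as <code>./R/bin/R</code>.<br />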
A pro tip is<br />
<br />
* the function <code>rPackages.buildRPackage</code> can be used to build your own R packages<br />
<br />
== Haskell ==<br />
<br />
Nixpkgs provides the following haskell related attributes<br />
<br />
* <code>haskell.compiler.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - composition providing the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - the set of haskell packages compiled by the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.withPackages</code> - composition wrapping ghc to enable the given packages<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.withHoogle</code> - composition wrapping ghc to enable the given packages with hoogle and documentation indices<br />
<br />
We can use the second of these directly to run programs provided by haskell packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.haskell.packages.ghc864.pandoc<br />
[user@cluster:~]$ pandoc --help</source><br />
and the last two in a <code>.nix</code> file to create a ghc environment that enables a given set of haskell packages (e.g., a <code>ghci</code> that we can run and that can access the given set of packages)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
haskell.packages.ghc864.withPackages (packages:<br />
  with packages; [<br />
    ... list of Haskell packages ...<br />
  ]<br />
)</source><br />
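For example, a hypothetical <code>ghc.nix</code> enabling the aeson and vector packages (assuming both attributes exist in your channel) would be<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
# A ghc/ghci able to import Data.Aeson and Data.Vector.<br />
haskell.packages.ghc864.withPackages (packages:<br />
  with packages; [<br />
    aeson<br />
    vector<br />
  ]<br />
)</source><br />
which can be built with <code>nix build -f ghc.nix -o ghc</code> and run as <code>./ghc/bin/ghci</code>.<br />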
Some pro tips are<br />
<br />
* the alias <code>haskellPackages</code> gives a default <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code><br />
* the set <code>haskell.lib</code> contains a variety of useful functions for tweaking haskell packages (e.g., enabling profiling, etc.)<br />
* the upstream maintainer has a useful [https://www.youtube.com/watch?v=KLhkAEk8I20 youtube video] on how to fix broken haskell packages<br />
<br />
== Emacs ==<br />
<br />
Nixpkgs provides the following emacs related attributes (append a <code>Ng</code> suffix for older versions of nixpkgs, e.g., <code>emacs25Ng</code> and <code>emacs25PackagesNg</code>)<br />
<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;</code> - a composition providing the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages</code> - the set of emacs packages for the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> - composition wrapping emacs to enable the given packages<br />
<br />
We can use the second directly to examine the content of packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.emacs25Packages.magit -o magit<br />
[user@cluster:~]$ cat magit/share/emacs/site-lisp/elpa/magit*/AUTHORS.md</source><br />
and the last one in a <code>.nix</code> file to create a composition giving emacs with the given set of packages enabled<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
emacs25Packages.emacsWithPackages (packages:<br />
  with packages; [<br />
    ... list of emacs packages ...<br />
  ]<br />
)</source><br />
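For example, a hypothetical <code>emacs.nix</code> enabling the magit package would be<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
# An emacs with magit available.<br />
emacs25Packages.emacsWithPackages (packages:<br />
  with packages; [<br />
    magit<br />
  ]<br />
)</source><br />
which can be built with <code>nix build -f emacs.nix -o emacs</code> and run as <code>./emacs/bin/emacs</code>.<br />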
Some pro tips are<br />
<br />
* the aliases <code>emacs</code> and <code>emacsPackages</code> give a default <code>emacs&lt;major&gt;&lt;minor&gt;</code> and <code>emacs&lt;major&gt;&lt;minor&gt;Packages</code> version<br />
* the aliases <code>emacs&lt;major&gt;&lt;minor&gt;WithPackages</code> are short for <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> (including default version variants)<br />
<br />
[[Category:Software]]</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_Nix&diff=88279Using Nix2020-08-12T22:11:35Z<p>Tyson: </p>
<hr />
<div>{{Draft}}<br />
<br />
= Overview =<br />
<br />
[https://nixos.org/nix/ Nix] is a software building and composition system that allows users to manage their own persistent software environments. At the moment it is only available on SHARCNET systems (i.e., graham and legacy). If you would like this to change, let us know (it requires some coordination, but isn’t too difficult to do).<br />
<br />
* Supports one-off, per-project, and per-user usage of compositions<br />
* Compositions can be built, installed, upgraded, downgraded, and removed as a user<br />
* Operations either succeed or fail, leaving everything intact (operations are atomic)<br />
* Extremely easy to add and share compositions<br />
<br />
Currently nix builds software in a generic manner (e.g., without AVX2 or AVX512 vector instruction support), so module-loaded software should be preferred for longer-running simulations when it exists.<br />
<br />
'''NOTE:''' The message <code>failed to lock thread to CPU XX</code> is a harmless warning that can be ignored.<br />
<br />
== Enabling and disabling the nix environment ==<br />
<br />
The user’s current nix environment is enabled by loading the nix module. This creates some ''.nix*'' files and sets some environment variables.<br />
<br />
<source lang="bash">[name@cluster:~]$ module load nix</source><br />
It is disabled by unloading the nix module. This unsets the environment variables but leaves the ''.nix*'' files alone.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix</source><br />
[[Category:Software]]</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=SSH_tunnelling&diff=88033SSH tunnelling2020-08-06T14:51:24Z<p>Tyson: Remove -n as implied by -f and not given in further example</p>
<hr />
<div><languages/><br />
<translate><br />
<br />
<!--T:53--><br />
''Parent page: [[SSH]]''<br />
<br />
=What is SSH tunnelling?= <!--T:1--><br />
<br />
<!--T:2--><br />
SSH tunnelling is a method to use a gateway computer to connect two<br />
computers that cannot connect directly.<br />
<br />
<!--T:3--><br />
In the context of Compute Canada, SSH tunnelling is necessary in certain cases,<br />
because compute nodes on [[Niagara]], [[Béluga]] and [[Graham]] do not have direct access to<br />
the internet, nor can the compute nodes be contacted directly from the internet.<br />
<br />
<!--T:4--><br />
The following use cases require SSH tunnels:<br />
<br />
<!--T:5--><br />
* Running commercial software on a compute node that needs to contact a license server over the internet;<br />
* Running [[Visualization|visualization software]] on a compute node that needs to be contacted by client software on a user's local computer;<br />
* Running a [[Jupyter | Jupyter Notebook]] on a compute node that needs to be contacted by the web browser on a user's local computer;<br />
* Connecting to the Cedar database server from somewhere other than the Cedar head node, e.g., your desktop.<br />
<br />
<!--T:6--><br />
In the first case, the license server is outside of<br />
the compute cluster and is rarely under a user's control, whereas<br />
in the other cases, the server is on the compute node but the<br />
challenge is to connect to it from the outside. We will therefore<br />
consider these two situations below.<br />
<br />
<!--T:54--><br />
While not strictly required to use SSH tunnelling, you may wish to be familiar with [[SSH Keys|SSH key pairs]].<br />
<br />
= Contacting a license server from a compute node = <!--T:7--><br />
<br />
<!--T:8--><br />
{{Panel<br />
|title=What's a port?<br />
|panelstyle=SideCallout<br />
|content=<br />
A port is a number used to distinguish streams of communication <br />
from one another. You can think of it as loosely analogous to a radio frequency <br />
or a channel. Many port numbers are reserved, by rule or by convention, for <br />
certain types of traffic. See <br />
[https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers List of TCP and UDP port numbers] for more.<br />
}}<br />
<br />
<!--T:9--><br />
Certain commercially-licensed programs must connect to a license server machine <br />
somewhere on the internet via a predetermined port. If the compute node where <br />
the program is running has no access to the internet, then a ''gateway server'' <br />
which does have access must be used to forward communications on that port, <br />
from the compute node to the license server. To enable this, one must set up <br />
an ''SSH tunnel''. Such an arrangement is also called ''port forwarding''.<br />
<br />
<!--T:10--><br />
In most cases, creating an SSH tunnel in a batch job requires only two or <br />
three commands in your job script. You will need the following information:<br />
<br />
<!--T:11--><br />
* The IP address or the name of the license server (here LICSERVER).<br />
* The port number of the license service (here LICPORT). <br />
<br />
<!--T:12--><br />
You should obtain this information from whoever maintains the license server.<br />
That server also must allow connections from the login nodes; for<br />
Niagara, the outgoing IP address will either be 142.150.188.131 or 142.150.188.132.<br />
<br />
<!--T:13--><br />
With this information, one can now set up the SSH tunnel. For<br />
Graham, an alternative solution is to request a firewall exception<br />
for license server LICSERVER and its specific port LICPORT.<br />
<br />
<!--T:14--><br />
The gateway server on Niagara is nia-gw. On Graham, you need<br />
to pick one of the login nodes (gra-login1, 2, ...). Let us call the<br />
gateway node GATEWAY. You also need to choose the port number on the<br />
compute node to use (here COMPUTEPORT).<br />
<br />
<!--T:15--><br />
The SSH command to issue in the job script is then:<br />
<br />
<!--T:16--><br />
<source lang="bash"><br />
ssh GATEWAY -L COMPUTEPORT:LICSERVER:LICPORT -N -f<br />
</source><br />
<br />
<!--T:17--><br />
In this command, the string following the -L parameter specifies the port forwarding information.<br />
* -N tells SSH not to open a shell on the GATEWAY<br />
* -f tells SSH to run in the background, allowing the job script to proceed past this SSH command (implies -n too).<br />
<br />
<!--T:18--><br />
A further command to add to the job script should tell the software<br />
that the license server is on port COMPUTEPORT on the server<br />
'localhost'. The term 'localhost' is the standard name by which a computer refers to itself. It is to be taken literally and should not be replaced with your computer's name. Exactly how to inform your software to use this port on 'localhost' will<br />
depend on the specific application and the type of license server,<br />
but often it is simply a matter of setting an environment variable in<br />
the job script like<br />
<br />
<!--T:19--><br />
<source lang="bash"><br />
export MLM_LICENSE_FILE=COMPUTEPORT@localhost<br />
</source><br />
<br />
== Example job script== <!--T:20--><br />
<br />
<!--T:21--><br />
The following job script sets up an SSH tunnel to contact licenseserver.institution.ca at port 9999.<br />
<br />
<!--T:22--><br />
<source lang="bash"><br />
#!/bin/bash<br />
#SBATCH --nodes 1<br />
#SBATCH --ntasks 40<br />
#SBATCH --time 3:00:00<br />
<br />
<!--T:23--><br />
REMOTEHOST=licenseserver.institution.ca<br />
REMOTEPORT=9999<br />
LOCALHOST=localhost<br />
for ((i=0; i<10; ++i)); do<br />
LOCALPORT=$(shuf -i 1024-65535 -n 1)<br />
ssh nia-gw -L $LOCALPORT:$REMOTEHOST:$REMOTEPORT -N -f && break<br />
done || { echo "Giving up forwarding license port after $i attempts..."; exit 1; }<br />
export MLM_LICENSE_FILE=$LOCALPORT@$LOCALHOST<br />
<br />
<!--T:24--><br />
module load thesoftware/2.0<br />
mpirun thesoftware ..... <br />
</source><br />
<br />
= Connecting to a program running on a compute node= <!--T:25--><br />
<br />
<!--T:26--><br />
SSH tunnelling can also be used in the context of Compute Canada to allow a user's computer to connect to a compute node on a cluster through an encrypted tunnel that is routed via the login node of this cluster. This technique allows graphical output of applications like a [[Jupyter | Jupyter Notebook]] or [[Visualization|visualization software]] to be displayed transparently on the user's local workstation even while they are running on a cluster's compute node. When connecting to a database server where the connection is only possible through the head node, SSH tunnelling can be used to bind an external port to the database server.<br />
<br />
<!--T:32--><br />
There is Network Address Translation (NAT) on both Graham and Cedar allowing users to access the internet from the compute nodes. On Graham however, access is blocked by default at the firewall. Contact [[Technical support|technical support]] if you need to have a specific port opened, supplying the IP address or range of addresses which should be allowed to use that port.<br />
<br />
== From Linux or MacOS X == <!--T:51--><br />
<br />
<!--T:52--><br />
On a Linux or MacOS X system, we recommend using the [https://sshuttle.readthedocs.io sshuttle] Python package.<br />
<br />
<!--T:34--><br />
On your computer, open a new terminal window and run the following sshuttle command to create the tunnel.<br />
<br />
<!--T:35--><br />
{{Command<br />
|prompt=[name@my_computer ]$<br />
|sshuttle --dns -Nr userid@machine_name}}<br />
<br />
<!--T:36--><br />
Then, copy and paste the application's URL into your browser. If your application is a <br />
[[Jupyter#Starting_Jupyter_Notebook|Jupyter notebook]], for example, you are given a URL with a token:<br />
<pre><br />
http://cdr544.int.cedar.computecanada.ca:8888/?token=7ed7059fad64446f837567e32af8d20efa72e72476eb72ca<br />
</pre><br />
<br />
== From Windows == <!--T:37--> <br />
<br />
<!--T:38--><br />
An SSH tunnel can be created from Windows using [[Connecting with MobaXTerm|MobaXTerm]] as follows.<br />
<br />
<!--T:39--><br />
Open two sessions in MobaXTerm. <br />
<br />
<!--T:40--><br />
*Session 1 should be a connection to a cluster. Start your job there following the instructions for your application, such as [[Jupyter#Starting_Jupyter_Notebook|Jupyter Notebook]]. You should be given a URL that includes a host name and a port, such as <code>cdr544.int.cedar.computecanada.ca:8888</code>.<br />
<br />
<!--T:41--><br />
*Session 2 should be a local terminal in which we will set up the SSH tunnel. Run the following command, replacing this example host name with the one from the URL you received in Session 1. <br />
<br />
<!--T:42--><br />
{{Command<br />
|prompt=[name@my_computer ]$<br />
| ssh -L 8888:cdr544.int.cedar.computecanada.ca:8888 someuser@cedar.computecanada.ca}}<br />
<br />
<!--T:43--><br />
This command forwards connections to '''local port''' 8888 on your machine to port 8888 on cdr544.int.cedar.computecanada.ca, the '''remote port'''.<br />
The local port number (the first one) does not ''need'' to match the remote port number (the second one), but matching them is conventional and reduces confusion.<br />
<br />
<!--T:44--><br />
Modify the URL you were given in Session 1 by replacing the host name with <code>localhost</code>. <br />
Again using an example from [[Jupyter#Starting_Jupyter_Notebook|Jupyter Notebook]], this would be the URL to paste into a browser:<br />
<pre><br />
http://localhost:8888/?token=7ed7059fad64446f837567e32af8d20efa72e72476eb72ca<br />
</pre><br />
<br />
== Example for connecting to a database server on Cedar from your desktop == <!--T:46--><br />
<br />
<!--T:55--><br />
An SSH tunnel can be created from your desktop to the PostgreSQL or MySQL database servers using the following commands, respectively:<br />
<br />
<!--T:47--><br />
<pre> <br />
ssh -L PORT:cedar-pgsql-vm.int.cedar.computecanada.ca:5432 someuser@cedar.computecanada.ca<br />
ssh -L PORT:cedar-mysql-vm.int.cedar.computecanada.ca:3306 someuser@cedar.computecanada.ca<br />
</pre><br />
<br />
<!--T:48--><br />
These commands connect port number PORT on your local host to PostgreSQL or MySQL database servers respectively. The port number you choose (PORT) should not be bigger than 32768 (2^15). In this example, "someuser" is your Compute Canada username. The difference between this connection and an ordinary SSH connection is that you can now use another terminal to connect to the database server directly from your desktop. On your desktop, run one of these commands for PostgreSQL or MySQL as appropriate:<br />
<br />
<!--T:49--><br />
<pre> <br />
psql -h 127.0.0.1 -p PORT -U <your username> -d <your database><br />
mysql -h 127.0.0.1 -P PORT -u <your username> -p <br />
</pre><br />
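As a quick sanity check on the port rule above, the range can be verified in the shell before opening the tunnel (a sketch; the PORT value here is only an example):<br />

```shell
# Sketch: verify a chosen local PORT is unprivileged (>= 1024) and
# within the suggested upper bound (<= 32768) before tunnelling.
PORT=15432   # example value only
if [ "$PORT" -ge 1024 ] && [ "$PORT" -le 32768 ]; then
    echo "PORT $PORT is acceptable"
else
    echo "pick a different PORT" >&2
fi
```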
<br />
<!--T:50--><br />
MySQL requires a password; it is stored in your ".my.cnf" located in your home directory on Cedar. <br />
The database connection will remain open as long as the SSH connection remains open.<br />
<br />
</translate></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=SSH_tunnelling&diff=88032SSH tunnelling2020-08-06T14:23:06Z<p>Tyson: Update compute node forwarding script to pick a random local port with retry (required to support more than one instance on a node)</p>
<hr />
<div><languages/><br />
<translate><br />
<br />
<!--T:53--><br />
''Parent page: [[SSH]]''<br />
<br />
=What is SSH tunnelling?= <!--T:1--><br />
<br />
<!--T:2--><br />
SSH tunnelling is a method to use a gateway computer to connect two<br />
computers that cannot connect directly.<br />
<br />
<!--T:3--><br />
In the context of Compute Canada, SSH tunnelling is necessary in certain cases,<br />
because compute nodes on [[Niagara]], [[Béluga]] and [[Graham]] do not have direct access to<br />
the internet, nor can the compute nodes be contacted directly from the internet.<br />
<br />
<!--T:4--><br />
The following use cases require SSH tunnels:<br />
<br />
<!--T:5--><br />
* Running commercial software on a compute node that needs to contact a license server over the internet;<br />
* Running [[Visualization|visualization software]] on a compute node that needs to be contacted by client software on a user's local computer;<br />
* Running a [[Jupyter | Jupyter Notebook]] on a compute node that needs to be contacted by the web browser on a user's local computer;<br />
* Connecting to the Cedar database server from somewhere other than the Cedar head node, e.g., your desktop.<br />
<br />
<!--T:6--><br />
In the first case, the license server is outside of<br />
the compute cluster and is rarely under a user's control, whereas<br />
in the other cases, the server is on the compute node but the<br />
challenge is to connect to it from the outside. We will therefore<br />
consider these two situations below.<br />
<br />
<!--T:54--><br />
While not strictly required to use SSH tunnelling, you may wish to be familiar with [[SSH Keys|SSH key pairs]].<br />
<br />
= Contacting a license server from a compute node = <!--T:7--><br />
<br />
<!--T:8--><br />
{{Panel<br />
|title=What's a port?<br />
|panelstyle=SideCallout<br />
|content=<br />
A port is a number used to distinguish streams of communication <br />
from one another. You can think of it as loosely analogous to a radio frequency <br />
or a channel. Many port numbers are reserved, by rule or by convention, for <br />
certain types of traffic. See <br />
[https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers List of TCP and UDP port numbers] for more.<br />
}}<br />
<br />
<!--T:9--><br />
Certain commercially-licensed programs must connect to a license server machine <br />
somewhere on the internet via a predetermined port. If the compute node where <br />
the program is running has no access to the internet, then a ''gateway server'' <br />
which does have access must be used to forward communications on that port, <br />
from the compute node to the license server. To enable this, one must set up <br />
an ''SSH tunnel''. Such an arrangement is also called ''port forwarding''.<br />
<br />
<!--T:10--><br />
In most cases, creating an SSH tunnel in a batch job requires only two or <br />
three commands in your job script. You will need the following information:<br />
<br />
<!--T:11--><br />
* The IP address or the name of the license server (here LICSERVER).<br />
* The port number of the license service (here LICPORT). <br />
<br />
<!--T:12--><br />
You should obtain this information from whoever maintains the license server.<br />
That server also must allow connections from the login nodes; for<br />
Niagara, the outgoing IP address will either be 142.150.188.131 or 142.150.188.132.<br />
<br />
<!--T:13--><br />
With this information, one can now set up the SSH tunnel. For<br />
Graham, an alternative solution is to request a firewall exception<br />
for license server LICSERVER and its specific port LICPORT.<br />
<br />
<!--T:14--><br />
The gateway server on Niagara is nia-gw. On Graham, you need<br />
to pick one of the login nodes (gra-login1, 2, ...). Let us call the<br />
gateway node GATEWAY. You also need to choose the port number on the<br />
compute node to use (here COMPUTEPORT).<br />
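Assuming COMPUTEPORT just needs to be a free unprivileged port, one way to choose it is at random (a sketch; the example job script below uses the same idea with a retry loop):<br />

```shell
# Sketch: draw a random unprivileged port to use as COMPUTEPORT.
COMPUTEPORT=$(shuf -i 1024-65535 -n 1)
echo "chosen COMPUTEPORT: $COMPUTEPORT"
```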
<br />
<!--T:15--><br />
The SSH command to issue in the job script is then:<br />
<br />
<!--T:16--><br />
<source lang="bash"><br />
ssh GATEWAY -L COMPUTEPORT:LICSERVER:LICPORT -n -N -f<br />
</source><br />
<br />
<!--T:17--><br />
In this command, the string following the -L parameter specifies the port forwarding information.<br />
* -n prevents SSH from reading input (it couldn't in a compute job anyway)<br />
* -N tells SSH not to open a shell on the GATEWAY<br />
* -f tells SSH to run in the background, allowing the job script to proceed past this SSH command.<br />
<br />
<!--T:18--><br />
A further command to add to the job script should tell the software<br />
that the license server is on port COMPUTEPORT on the server<br />
'localhost'. The term 'localhost' is the standard name by which a computer refers to itself. It is to be taken literally and should not be replaced with your computer's name. Exactly how to inform your software to use this port on 'localhost' will<br />
depend on the specific application and the type of license server,<br />
but often it is simply a matter of setting an environment variable in<br />
the job script like<br />
<br />
<!--T:19--><br />
<source lang="bash"><br />
export MLM_LICENSE_FILE=COMPUTEPORT@localhost<br />
</source><br />
<br />
== Example job script== <!--T:20--><br />
<br />
<!--T:21--><br />
The following job script sets up an SSH tunnel to contact licenseserver.institution.ca at port 9999.<br />
<br />
<!--T:22--><br />
<source lang="bash"><br />
#!/bin/bash<br />
#SBATCH --nodes 1<br />
#SBATCH --ntasks 40<br />
#SBATCH --time 3:00:00<br />
<br />
<!--T:23--><br />
REMOTEHOST=licenseserver.institution.ca<br />
REMOTEPORT=9999<br />
LOCALHOST=localhost<br />
for ((i=0; i<10; ++i)); do<br />
LOCALPORT=$(shuf -i 1024-65535 -n 1)<br />
ssh nia-gw -L $LOCALPORT:$REMOTEHOST:$REMOTEPORT -N -f && break<br />
done || { echo "Giving up forwarding license port after $i attempts..."; exit 1; }<br />
export MLM_LICENSE_FILE=$LOCALPORT@$LOCALHOST<br />
<br />
<!--T:24--><br />
module load thesoftware/2.0<br />
mpirun thesoftware ..... <br />
</source><br />
<br />
= Connecting to a program running on a compute node= <!--T:25--><br />
<br />
<!--T:26--><br />
SSH tunnelling can also be used in the context of Compute Canada to allow a user's computer to connect to a compute node on a cluster through an encrypted tunnel that is routed via the login node of this cluster. This technique allows graphical output of applications like a [[Jupyter | Jupyter Notebook]] or [[Visualization|visualization software]] to be displayed transparently on the user's local workstation even while they are running on a cluster's compute node. When connecting to a database server where the connection is only possible through the head node, SSH tunnelling can be used to bind an external port to the database server.<br />
<br />
<!--T:32--><br />
There is Network Address Translation (NAT) on both Graham and Cedar allowing users to access the internet from the compute nodes. On Graham however, access is blocked by default at the firewall. Contact [[Technical support|technical support]] if you need to have a specific port opened, supplying the IP address or range of addresses which should be allowed to use that port.<br />
<br />
== From Linux or MacOS X == <!--T:51--><br />
<br />
<!--T:52--><br />
On a Linux or MacOS X system, we recommend using the [https://sshuttle.readthedocs.io sshuttle] Python package.<br />
<br />
<!--T:34--><br />
On your computer, open a new terminal window and run the following sshuttle command to create the tunnel.<br />
<br />
<!--T:35--><br />
{{Command<br />
|prompt=[name@my_computer ]$<br />
|sshuttle --dns -Nr userid@machine_name}}<br />
<br />
<!--T:36--><br />
Then, copy and paste the application's URL into your browser. If your application is a <br />
[[Jupyter#Starting_Jupyter_Notebook|Jupyter notebook]], for example, you are given a URL with a token:<br />
<pre><br />
http://cdr544.int.cedar.computecanada.ca:8888/?token=7ed7059fad64446f837567e32af8d20efa72e72476eb72ca<br />
</pre><br />
<br />
== From Windows == <!--T:37--> <br />
<br />
<!--T:38--><br />
An SSH tunnel can be created from Windows using [[Connecting with MobaXTerm|MobaXTerm]] as follows.<br />
<br />
<!--T:39--><br />
Open two sessions in MobaXTerm. <br />
<br />
<!--T:40--><br />
*Session 1 should be a connection to a cluster. Start your job there following the instructions for your application, such as [[Jupyter#Starting_Jupyter_Notebook|Jupyter Notebook]]. You should be given a URL that includes a host name and a port, such as <code>cdr544.int.cedar.computecanada.ca:8888</code>.<br />
<br />
<!--T:41--><br />
*Session 2 should be a local terminal in which we will set up the SSH tunnel. Run the following command, replacing this example host name with the one from the URL you received in Session 1. <br />
<br />
<!--T:42--><br />
{{Command<br />
|prompt=[name@my_computer ]$<br />
| ssh -L 8888:cdr544.int.cedar.computecanada.ca:8888 someuser@cedar.computecanada.ca}}<br />
<br />
<!--T:43--><br />
This command forwards connections to '''local port''' 8888 on your machine to port 8888 on cdr544.int.cedar.computecanada.ca, the '''remote port'''.<br />
The local port number (the first one) does not ''need'' to match the remote port number (the second one), but matching them is conventional and reduces confusion.<br />
<br />
<!--T:44--><br />
Modify the URL you were given in Session 1 by replacing the host name with <code>localhost</code>. <br />
Again using an example from [[Jupyter#Starting_Jupyter_Notebook|Jupyter Notebook]], this would be the URL to paste into a browser:<br />
<pre><br />
http://localhost:8888/?token=7ed7059fad64446f837567e32af8d20efa72e72476eb72ca<br />
</pre><br />
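If you prefer not to edit the URL by hand, the same host-name swap can be sketched with a shell substitution (the URL is the example from above; the pattern assumes a plain http URL):<br />

```shell
# Replace the compute-node host name with localhost so the request
# goes through the SSH tunnel instead.
url="http://cdr544.int.cedar.computecanada.ca:8888/?token=7ed7059fad64446f837567e32af8d20efa72e72476eb72ca"
tunnel_url=$(echo "$url" | sed -E 's#^http://[^:/]+:#http://localhost:#')
echo "$tunnel_url"
```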
<br />
== Example for connecting to a database server on Cedar from your desktop == <!--T:46--><br />
<br />
<!--T:55--><br />
An SSH tunnel can be created from your desktop to the PostgreSQL or MySQL database servers using the following commands, respectively:<br />
<br />
<!--T:47--><br />
<pre> <br />
ssh -L PORT:cedar-pgsql-vm.int.cedar.computecanada.ca:5432 someuser@cedar.computecanada.ca<br />
ssh -L PORT:cedar-mysql-vm.int.cedar.computecanada.ca:3306 someuser@cedar.computecanada.ca<br />
</pre><br />
<br />
<!--T:48--><br />
These commands connect port number PORT on your local host to PostgreSQL or MySQL database servers respectively. The port number you choose (PORT) should not be bigger than 32768 (2^15). In this example, "someuser" is your Compute Canada username. The difference between this connection and an ordinary SSH connection is that you can now use another terminal to connect to the database server directly from your desktop. On your desktop, run one of these commands for PostgreSQL or MySQL as appropriate:<br />
<br />
<!--T:49--><br />
<pre> <br />
psql -h 127.0.0.1 -p PORT -U <your username> -d <your database><br />
mysql -h 127.0.0.1 -P PORT -u <your username> -p <br />
</pre><br />
<br />
<!--T:50--><br />
MySQL requires a password; it is stored in your ".my.cnf" located in your home directory on Cedar. <br />
The database connection will remain open as long as the SSH connection remains open.<br />
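For reference, a MySQL client options file generally has the following shape (a sketch with placeholder values; the real <code>.my.cnf</code> in your home directory on Cedar already holds your actual credentials):<br />

```ini
# Sketch of ~/.my.cnf; the user and password values are placeholders.
[client]
user=someuser
password=example_password
```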
<br />
</translate></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_Nix&diff=87346Using Nix2020-07-31T13:54:54Z<p>Tyson: Morning after cleanups</p>
<hr />
<div>{{Draft}}<br />
<br />
= Overview =<br />
<br />
[https://nixos.org/nix/ Nix] is a package management system that allows users to manage their own persistent software environments. At the moment it is only available on SHARCNET systems (i.e., graham and legacy). If you would like this to change, help motivate an expansion by letting us know (it requires some coordination, but isn't too difficult to do).<br />
<br />
* Supports one-off, per-project, and per-user usage of packages<br />
* Packages can be built, installed, upgraded, downgraded, and removed as a user<br />
* Operations either succeed or fail, leaving everything intact (operations are atomic)<br />
* Extremely easy to add and share packages<br />
<br />
Currently, Nix builds packages in a generic manner (e.g., without AVX2 or AVX512 vector instruction support), so module-loaded software should be preferred for longer-running simulations when it exists.<br />
<br />
'''NOTE:''' The message <code>failed to lock thread to CPU XX</code> is a harmless warning that can be ignored.<br />
<br />
== Enabling and disabling the nix environment ==<br />
<br />
The user’s current nix environment is enabled by loading the nix module. This creates some ''.nix*'' files and sets some environment variables.<br />
<br />
<source lang="bash">[name@cluster:~]$ module load nix</source><br />
It is disabled by unloading the nix module. This unsets the environment variables but leaves the ''.nix*'' files alone.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix</source><br />
== Completely resetting the nix environment ==<br />
<br />
Most per-user operations can be undone with the <code>--rollback</code> option (i.e., <code>nix-env --rollback</code> or <code>nix-channel --rollback</code>). Sometimes it is useful to entirely reset nix though. This is done by unloading the module, erasing all user related nix files, and then reloading the module file.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix<br />
[name@cluster:~]$ rm -fr ~/.nix-profile ~/.nix-defexpr ~/.nix-channels ~/.config/nixpkgs<br />
[name@cluster:~]$ rm -fr /nix/var/nix/profiles/per-user/$USER /nix/var/nix/gcroots/per-user/$USER<br />
[name@cluster:~]$ module load nix</source><br />
= Basic package usage =<br />
<br />
The <code>nix search</code> command can be used to locate available packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix search git<br />
...<br />
* nixpkgs.git (git-minimal-2.19.3)<br />
Distributed version control system<br />
...</source><br />
Pro tips include<br />
<br />
* you need to specify <code>-u</code> to update the search cache after upgrading your package set (this will take a while)<br />
* the search string is actually a regular expression and multiple ones are ANDed together<br />
<br />
Often our usage of a package is a one-off, a per-project, or an all-the-time situation. Nix supports all three of these cases.<br />
<br />
== One offs ==<br />
<br />
If you just want to use a package once, the easiest way is to use the <code>nix run</code> command. This command will start a shell in which <code>PATH</code> has been extended to include the specified package<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.git<br />
[user@cluster:~]$ git<br />
[user@cluster:~]$ exit</source><br />
Note that this does not protect the package from being garbage collected overnight (i.e., the package is only guaranteed to remain available until the nightly cleanup runs in the early-morning hours). Pro tips include<br />
<br />
* you can specify more than one package in the same <code>nix run</code> command<br />
* you can specify a command instead of a shell with <code>-c &lt;cmd&gt; &lt;args&gt; ...</code><br />
<br />
== Per-project ==<br />
<br />
If you want to use a program for a specific project, the easiest way is with the <code>nix build</code> command. This command will create a symbolic link (by default named <code>result</code>) from which you can access the program's <code>bin</code> directory to run it.<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.git<br />
[user@cluster:~]$ ./result/bin/git</source><br />
Note that (currently) the package will only be protected from overnight garbage collection if you output the symlink into your <code>home</code> directory and do not rename or move it. Pro tips include<br />
<br />
* you can specify the output symlink name with the <code>-o &lt;name&gt;</code> option<br />
* add the <code>bin</code> directory to your <code>PATH</code> to not have to type it in every time<br />
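The second tip can be sketched as follows (the <code>myproject/result</code> path is a hypothetical <code>nix build</code> output symlink):<br />

```shell
# Sketch: prepend a build result's bin directory to PATH so its
# programs can be run without typing the full path each time.
export PATH="$HOME/myproject/result/bin:$PATH"
echo "${PATH%%:*}"
```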
<br />
== Per-user ==<br />
<br />
Loading the <code>nix</code> module adds the per-user common <code>~/.nix-profile/bin</code> directory to your <code>PATH</code>. You can add and remove packages from this directory with the <code>nix-env</code> command<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --install --attr nixpkgs.git<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3</source><br />
<source lang="bash">[user@cluster:~]$ nix-env --uninstall git-minimal<br />
uninstalling 'git-minimal-2.19.3'<br />
[user@cluster:~]$ nix-env --query</source><br />
Each command actually creates a new generation of your environment, so all prior generations remain and can be used<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --list-generations<br />
1 2020-07-29 13:10:03<br />
2 2020-07-29 13:11:52 (current)<br />
[user@cluster:~]$ nix-env --switch-generation 1<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3<br />
[user@cluster:~]$ nix-env --switch-generation 2<br />
[user@cluster:~]$ nix-env --query</source><br />
Pro tips include<br />
<br />
* <code>nix-env --rollback</code> moves back one generation<br />
* <code>nix-env --delete-generations &lt;time&gt;</code> deletes environments older than <code>&lt;time&gt;</code> (e.g., <code>30d</code>)<br />
* see our [[Using Nix: nix-env|nix-env page]] page for a much more in-depth discussion of using <code>nix-env</code><br />
<br />
= Advanced package usage =<br />
<br />
Often we require a composition of packages. This can range from having the binaries of multiple packages available in the same <code>bin</code> directory (e.g., <code>make</code>, <code>gcc</code>, and <code>ld</code> to build a simple C program) to having a full python environment set up with all the desired modules installed (e.g., <code>PYTHONPATH</code> set correctly, etc.).<br />
<br />
All of these have a common format. You write a nix expression in a <code>.nix</code> file that composes packages together and then tell the above commands to use that file with the <code>-f &lt;nix file&gt;</code> option. For example, say the file <code>python.nix</code> has an expression for a python environment in it; you can create a per-project bin directory with<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build -f python.nix -o python<br />
[user@cluster:~]$ ./python/bin/python</source><br />
The nix expression you put in the file generally<br />
<br />
* does <code>with import &lt;nixpkgs&gt; { }</code> to bring the set of nixpkgs into scope<br />
* calls an existing package-composition function with a list of space-separated components to include<br />
<br />
The template for doing the second of these follows below, as it differs slightly across the various ecosystems.<br />
<br />
A pro tip is<br />
<br />
* there are many [https://nixos.org/nixpkgs/manual/#chap-language-support supported languages and frameworks] but only a few are described here; send us an email if you would like a missing one added<br />
<br />
== Generic ==<br />
<br />
Nixpkgs provides a <code>buildEnv</code> function that combines multiple packages into a single one (by merging their <code>bin</code>, <code>lib</code>, etc. directories). The list of packages is the same as used before minus the leading <code>nixpkgs</code> as it was imported (e.g., <code>git</code> instead of <code>nixpkgs.git</code>).<br />
<br />
<source lang="nix">with import <nixpkgs> {};<br />
buildEnv {<br />
name = "my environment";<br />
paths = [<br />
... list of packages ...<br />
];<br />
}</source><br />
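A hypothetical concrete instance might look like the following (the package names are illustrative nixpkgs attributes):<br />

```nix
with import <nixpkgs> {};
buildEnv {
  name = "my-build-env";
  # Illustrative selection; any nixpkgs attributes can go here.
  paths = [
    git
    gnumake
    gcc
  ];
}
```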
== Python ==<br />
<br />
Nixpkgs provides the following python related attributes<br />
<br />
* <code>python&lt;major&gt;&lt;minor&gt;</code> - a package providing the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> - the set of python packages using the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.withPackages</code> - wraps python with <code>PYTHONPATH</code> set to a given set of packages<br />
<br />
We can use the package set directly to use the programs provided by python packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.python36.pkgs.spambayes<br />
[user@cluster:~]$ sb_filter.py --help<br />
[user@cluster:~]$ exit</source><br />
and the latter in a <code>.nix</code> file to create a python wrapper that enables a given set of libraries (e.g., a <code>python</code> command we can run and access the given set of python packages from)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
python.withPackages (packages:<br />
with packages; [<br />
... list of python packages ...<br />
]<br />
)</source><br />
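A hypothetical concrete instance, with the package list filled in for illustration (numpy and requests are example attribute names):<br />

```nix
with import <nixpkgs> { };
# Illustrative package selection only.
python.withPackages (packages:
  with packages; [
    numpy
    requests
  ]
)
```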
Some pro tips are<br />
<br />
* the aliases <code>python</code> and <code>python&lt;major&gt;</code> give default <code>python&lt;major&gt;&lt;minor&gt;</code> versions<br />
* the aliases <code>pythonPackages&lt;major&gt;&lt;minor&gt;</code> are short for <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> (including default version variants)<br />
* the function <code>python&lt;major&gt;&lt;minor&gt;.pkgs.buildPythonPackage</code> can be used to build your own packages<br />
<br />
== R ==<br />
<br />
Nixpkgs provides the following R related attributes<br />
<br />
* <code>R</code> - a package providing R<br />
* <code>rstudio</code> - a package providing RStudio<br />
* <code>rPackages</code> - the set of R packages<br />
* <code>rWrapper</code> - a wrapped R with <code>R_LIBS</code> set to a minimal set of packages<br />
* <code>rstudioWrapper</code> - a wrapped RStudio with <code>R_LIBS</code> set to a minimal set of packages<br />
<br />
We can use <code>rPackages</code> directly to examine the content of packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.rPackages.exams -o exams<br />
[user@cluster:~]$ cat exams/library/exams/NEWS</source><br />
and the latter two can be overridden in a <code>.nix</code> file to create R and RStudio wrappers to enable a given set of libraries (e.g., a <code>R</code> or <code>rstudio</code> command we can run and access the given set of R packages from)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
rWrapper.override {<br />
packages = with rPackages; [<br />
... list of R packages ...<br />
];<br />
}</source><br />
A pro tip is<br />
<br />
* the function <code>rPackages.buildRPackage</code> can be used to build your own R packages<br />
<br />
== Haskell ==<br />
<br />
Nixpkgs provides the following haskell related attributes<br />
<br />
* <code>haskell.compiler.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - package providing the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - the set of haskell packages compiled by the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.withPackages</code> - wraps ghc to enable the given packages<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.withHoogle</code> - wraps ghc to enable the given packages with hoogle and documentation indices<br />
<br />
We can use the package set directly to use programs provided by haskell packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.haskell.packages.ghc864.pandoc<br />
[user@cluster:~]$ pandoc --help</source><br />
and the last two in a <code>.nix</code> file to create a ghc environment that enables a given set of packages (e.g., a <code>ghci</code> we can run and access the given set of packages from)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
haskell.packages.ghc864.withPackages (packages:<br />
with packages; [<br />
... list of Haskell packages ...<br />
]<br />
)</source><br />
Some pro tips are<br />
<br />
* the alias <code>haskellPackages</code> gives a default <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code><br />
* the attribute set <code>haskell.lib</code> contains a variety of useful functions for tweaking haskell packages (e.g., enabling profiling, etc.)<br />
* the upstream maintainer has a useful [https://www.youtube.com/watch?v=KLhkAEk8I20 youtube video] on how to fix broken haskell packages<br />
<br />
== Emacs ==<br />
<br />
Nixpkgs provides the following emacs related attributes (append a <code>Ng</code> suffix for older versions of nixpkgs, e.g., <code>emacs25Ng</code> and <code>emacs25PackagesNg</code>)<br />
<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;</code> - a package providing the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages</code> - the set of emacs packages for the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> - wraps emacs to enable the given packages<br />
<br />
We can use the second directly to examine the contents of packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.emacs25Packages.magit -o magit<br />
[user@cluster:~]$ cat magit/share/emacs/site-lisp/elpa/magit*/AUTHORS.md</source><br />
and the last one in a <code>.nix</code> file to create an emacs with the given set of packages enabled<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
emacs25Packages.emacsWithPackages (packages:<br />
with packages; [<br />
... list of emacs packages ...<br />
]<br />
)</source><br />
Some pro tips are<br />
<br />
* the aliases <code>emacs</code> and <code>emacsPackages</code> give a default <code>emacs&lt;major&gt;&lt;minor&gt;</code> and <code>emacsPackages&lt;major&gt;&lt;minor&gt;</code> version<br />
* the aliases <code>emacs&lt;major&gt;&lt;minor&gt;WithPackages</code> are short for <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> (including default version variants)<br />
<br />
[[Category:Software]]</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_Nix:_nix-env&diff=87341Using Nix: nix-env2020-07-30T21:19:45Z<p>Tyson: Separate page for details of the nix-env command (most of the prior Using Nix page)</p>
<hr />
<div>{{Draft}}<br />
<br />
This page details using the legacy <code>nix-env</code> command to manage a per-user environment. For an overview of Nix, start with our [[Using Nix|using nix page]].<br />
<br />
= Querying, installing and removing packages =<br />
<br />
The <code>nix-env</code> command is used to manage your per-user Nix environment. It is actually a legacy command that has not yet been replaced by a newer <code>nix &lt;command&gt;</code> command.<br />
<br />
== What do I have installed and what can I install ==<br />
<br />
Let’s first see what we currently have installed.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --query</source><br />
Now let’s see what is available. We request the attribute paths (an unambiguous way of specifying a package) and the descriptions too (scroll to the right to see them). This takes a bit of time as it visits a lot of small files. Especially over NFS, it can be a good idea to pipe the output to a file and grep that file in the future.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --query --available --attr-path --description</source><br />
The newer <code>nix search</code> command is often a better way to locate packages as it saves a cache so subsequent invocations are quite fast.<br />
<br />
== Installing packages ==<br />
<br />
Let’s say that we need a newer version of git than the one provided by default. First let's check what our OS comes with.<br />
<br />
<source lang="bash">[name@cluster:~]$ git --version<br />
[name@cluster:~]$ which git</source><br />
Let’s tell Nix to install its version in our environment.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --install --attr nixpkgs.git<br />
[name@cluster:~]$ nix-env --query</source><br />
Let’s check out what we have now (it may be necessary to tell bash to forget remembered executable locations with <code>hash -r</code> so it notices the new one).<br />
<br />
<source lang="bash">[name@cluster:~]$ git --version<br />
[name@cluster:~]$ which git</source><br />
== Removing packages ==<br />
<br />
For completeness, let's add in the other usual version-control suspects.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --install --attr nixpkgs.subversion nixpkgs.mercurial<br />
[name@cluster:~]$ nix-env --query</source><br />
Actually, we probably don’t really want subversion any more. Let’s remove that.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --uninstall subversion<br />
[name@cluster:~]$ nix-env --query</source><br />
= Environments =<br />
<br />
Nix keeps referring to user environments. Each time we install or remove packages we create a new environment based off of the previous environment.<br />
<br />
== Switching between previous environments ==<br />
<br />
This means the previous environments still exist and we can switch back to them at any point. Let’s say we changed our mind and want subversion back. It’s trivial to restore the previous environment.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --rollback<br />
[name@cluster:~]$ nix-env --query</source><br />
Of course we may want to do more than just move to the previous environment. We can get a list of all our environments so far and then jump directly to whatever one we want. Let’s undo the rollback.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --list-generations<br />
[name@cluster:~]$ nix-env --switch-generation 4<br />
[name@cluster:~]$ nix-env --query</source><br />
== Operations are atomic ==<br />
<br />
Due to the atomic property of Nix environments, we can’t be left halfway through installing/updating packages. They either succeed and create us a new environment or leave us with the previous one intact.<br />
<br />
Let’s go back to the start when we just had Nix itself and install the one true GNU distributed version control system tla. Don’t let it complete though. Hit it with <code>CTRL+c</code> partway through.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --switch-generation 1<br />
[name@cluster:~]$ nix-env --install --attr nixpkgs.tla<br />
CTRL+c</source><br />
Nothing bad happens. The operation didn’t complete so it has no effect on the environment whatsoever.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --query<br />
[name@cluster:~]$ nix-env --list-generations</source><br />
== Nix only does things once ==<br />
<br />
The install and remove commands take the current environment and create a new environment with the changes. This works regardless of which environment we are currently in. Let’s create a new environment from our original environment by just adding git and mercurial.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --list-generations<br />
[name@cluster:~]$ nix-env --install --attr nixpkgs.git nixpkgs.mercurial<br />
[name@cluster:~]$ nix-env --list-generations</source><br />
Notice how much faster it was to install git and mercurial the second time? That is because the software already existed in the local Nix store from the previous installs, so Nix just reused it.<br />
<br />
== Garbage collection ==<br />
<br />
Nix periodically goes through and removes any software not accessible from any existing environment. This means we have to explicitly delete environments we don’t want anymore so Nix is able to reclaim the space. We can delete specific environments or any sufficiently old ones.<br />
<br />
<source lang="bash">[name@cluster:~]$ nix-env --delete-generations 30d</source><br />
<br />
[[Category:Software]]</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_Nix&diff=87340Using Nix2020-07-30T21:16:02Z<p>Tyson: Use "Using Nix: <command>" for command detail pages</p>
<hr />
<div>{{Draft}}<br />
<br />
= Overview =<br />
<br />
[https://nixos.org/nix/ Nix] is a package management system that allows users to manage their own persistent software environments. At the moment it is only available on SHARCNET systems (i.e., graham and legacy). If you would like this to change, help motivate an expansion by letting us know (it requires some coordination, but isn't too difficult to do).<br />
<br />
* Supports one-off, per-project, and per-user usage of packages<br />
* Packages can be built, installed, upgraded, downgraded, and removed as a user<br />
* Operations either succeed or fail leaving everything intact (operations are atomic).<br />
* Extremely easy to add and share packages<br />
<br />
'''NOTE:''' The message <code>failed to lock thread to CPU XX</code> is a harmless warning that can be ignored.<br />
<br />
== Enabling and disabling the Nix environment ==<br />
<br />
The user’s current Nix environment is enabled by loading the nix module. This creates some <code>.nix*</code> files and sets some environment variables.<br />
<br />
<source lang="bash">[name@cluster:~]$ module load nix</source><br />
It is disabled by unloading the nix module. This unsets the environment variables but leaves the <code>.nix*</code> files alone.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix</source><br />
== Completely resetting the Nix environment ==<br />
<br />
Most operations can be undone with the <code>--rollback</code> option (i.e., <code>nix-env --rollback</code> or <code>nix-channel --rollback</code>). Sometimes it is useful to entirely reset nix though. This is done by unloading the module, erasing all user related nix files, and then reloading the module file.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix<br />
[name@cluster:~]$ rm -fr ~/.nix-profile ~/.nix-defexpr ~/.nix-channels ~/.config/nixpkgs<br />
[name@cluster:~]$ rm -fr /nix/var/nix/profiles/per-user/$USER /nix/var/nix/gcroots/per-user/$USER<br />
[name@cluster:~]$ module load nix</source><br />
= Basic package usage =<br />
<br />
The <code>nix search</code> command can be used to locate available packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix search git<br />
...<br />
* nixpkgs.git (git-minimal-2.19.3)<br />
Distributed version control system<br />
...</source><br />
Pro tips include<br />
<br />
* you need to specify <code>-u</code> to update the search cache after upgrading your package set (this will take a while)<br />
* the search string is actually a regular expression and multiple ones are ANDed together<br />
<br />
Often we want to use a package in one of three ways: as a one-off command, within a specific project, or as part of our everyday per-user environment.<br />
<br />
== One offs ==<br />
<br />
If you just want to use a package once, the easiest way is to use the <code>nix run</code> command. This command will start a shell in which <code>PATH</code> has been extended to include the specified package<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.git<br />
[user@cluster:~]$ git<br />
[user@cluster:~]$ exit</source><br />
Note that this does not protect the package from being garbage collected overnight (i.e., the package is only guaranteed to be around temporarily for your use until sometime in the early-morning hours). Pro tips include<br />
<br />
* you can specify more than one package in the same <code>nix run</code> command<br />
* you can specify a command instead of a shell with <code>-c &lt;cmd&gt; &lt;args&gt; ...</code><br />
<br />
== Per-project ==<br />
<br />
If you want to use a program for a specific project, the easiest way is with the <code>nix build</code> command. This command will create a symbolic link (by default named <code>result</code>) from which you can access the program's <code>bin</code> directory to run it.<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.git<br />
[user@cluster:~]$ ./result/bin/git</source><br />
Note that (currently) the package will only be protected from overnight garbage collection if you output the symlink into your <code>home</code> directory and do not rename or move it. Pro tips include<br />
<br />
* you can specify the output symlink name with the <code>-o &lt;name&gt;</code> option<br />
* add the <code>bin</code> directory to your <code>PATH</code> to not have to type it in every time<br />
<br />
== Per-user ==<br />
<br />
Loading the <code>nix</code> module adds the per-user common <code>~/.nix-profile/bin</code> directory to your <code>PATH</code>. You can add and remove packages from this directory with the <code>nix-env</code> command<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --install --attr nixpkgs.git<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3</source><br />
<source lang="bash">[user@cluster:~]$ nix-env --uninstall git-minimal<br />
uninstalling 'git-minimal-2.19.3'<br />
[user@cluster:~]$ nix-env --query</source><br />
Each command actually creates a new generation of your environment, so all prior generations remain and can be used<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --list-generations<br />
1 2020-07-29 13:10:03<br />
2 2020-07-29 13:11:52 (current)<br />
[user@cluster:~]$ nix-env --switch-generation 1<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3<br />
[user@cluster:~]$ nix-env --switch-generation 2<br />
[user@cluster:~]$ nix-env --query</source><br />
Pro tips include<br />
<br />
* <code>nix-env --rollback</code> moves back one generation<br />
* <code>nix-env --delete-generations &lt;time&gt;</code> deletes environments older than <code>&lt;time&gt;</code> (e.g., <code>30d</code>)<br />
* see our [[Using Nix: nix-env|nix-env]] page for a much more in-depth discussion of using <code>nix-env</code><br />
<br />
= Advanced package usage =<br />
<br />
Often we require a composition of packages. This can range from as simple as having the binaries from multiple packages available in the same <code>bin</code> directory (e.g., <code>make</code>, <code>gcc</code>, and <code>ld</code> to build a simple C program) to as complex as having a python environment set up with all the desired modules installed (e.g., <code>PYTHONPATH</code> set correctly, etc.).<br />
<br />
All of these have a common format. You write a Nix expression in a <code>.nix</code> file that composes packages together, and then you tell the above commands to use it with the <code>-f &lt;nix file&gt;</code> option. For example, say the file <code>python.nix</code> contains an expression for a python environment; you can create a per-project bin directory with<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build -f python.nix -o python<br />
[user@cluster:~]$ ./python/bin/python</source><br />
The Nix expression you put in the file generally<br />
<br />
* does <code>with import &lt;nixpkgs&gt; {}</code> to bring the set of nixpkgs into scope<br />
* calls an existing package-composition function with a list of space-separated components to include<br />
<br />
The template for the second of these follows below, as it differs slightly across the various ecosystems.<br />
<br />
A pro tip is<br />
<br />
* there are many [https://nixos.org/nixpkgs/manual/#chap-language-support supported languages and frameworks] but only a few are described here; send us an email if you would like a missing one added<br />
<br />
== Generic ==<br />
<br />
Nixpkgs provides a <code>buildEnv</code> function that combines multiple packages into a single one (by combining their <code>bin</code>, <code>lib</code>, etc. directories). The list of packages is the same as used before minus the leading <code>nixpkgs</code>, as it was imported (e.g., <code>git</code> instead of <code>nixpkgs.git</code>).<br />
<br />
<source lang="nix">with import <nixpkgs> {};<br />
buildEnv {<br />
name = "my environment";<br />
paths = [<br />
... list of packages ...<br />
];<br />
}</source><br />
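For instance, a minimal sketch of an environment for building a simple C program (the attribute names <code>gnumake</code>, <code>gcc</code>, and <code>binutils</code> are assumed to exist in your nixpkgs revision; use <code>nix search</code> to confirm):<br />
<br />
<source lang="nix">with import <nixpkgs> {};<br />
buildEnv {<br />
name = "c-toolchain";<br />
paths = [<br />
gnumake<br />
gcc<br />
binutils<br />
];<br />
}</source><br />
Saved as, say, <code>toolchain.nix</code>, it can be built with <code>nix build -f toolchain.nix -o toolchain</code> and the tools run from <code>./toolchain/bin</code>.<br />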
== Python ==<br />
<br />
Nixpkgs provides the following python related attributes<br />
<br />
* <code>python&lt;major&gt;&lt;minor&gt;</code> - a package providing the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> - the set of python packages using the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.withPackages</code> - wraps python with <code>PYTHON_PATH</code> set to a given set of packages<br />
<br />
We can use the first directly to use the programs provided by python packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.python36.pkgs.spambayes<br />
[user@cluster:~]$ sb_filter.py --help<br />
[user@cluster:~]$ exit</source><br />
and the last in a <code>.nix</code> file to create a python wrapper that enables a given set of libraries (e.g., a <code>python</code> command we can run and access the given set of python packages from)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
python.withPackages (packages:<br />
with packages; [<br />
... list of python packages ...<br />
]<br />
)</source><br />
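As a concrete sketch, assuming the attributes <code>numpy</code> and <code>requests</code> exist in your nixpkgs revision, the following could go in <code>python.nix</code>:<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
python.withPackages (packages:<br />
with packages; [<br />
numpy<br />
requests<br />
]<br />
)</source><br />
Building it with <code>nix build -f python.nix -o python</code> gives a <code>./python/bin/python</code> that can import both modules.<br />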
Some pro tips are<br />
<br />
* the aliases <code>python</code> and <code>python&lt;major&gt;</code> give default <code>python&lt;major&gt;&lt;minor&gt;</code> versions<br />
* the aliases <code>python&lt;major&gt;&lt;minor&gt;Packages</code> are short for <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> (including default version variants)<br />
* the function <code>python&lt;major&gt;&lt;minor&gt;.pkgs.buildPythonPackage</code> can be used to build your own packages<br />
<br />
== R ==<br />
<br />
Nixpkgs provides the following R related attributes<br />
<br />
* <code>R</code> - a package providing R<br />
* <code>rstudio</code> - a package providing RStudio<br />
* <code>rPackages</code> - the set of R packages<br />
* <code>rWrapper</code> - a wrapped R with <code>R_LIBS</code> set to a minimal set of packages<br />
* <code>rstudioWrapper</code> - a wrapped RStudio with <code>R_LIBS</code> set to a minimal set of packages<br />
<br />
We can use <code>rPackages</code> directly to examine the content of packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.rPackages.exams -o exams<br />
[user@cluster:~]$ cat exams/library/exams/NEWS<br />
[user@cluster:~]$ exit</source><br />
and the latter two can be overridden in a <code>.nix</code> file to create R and RStudio wrappers that enable a given set of libraries (e.g., an <code>R</code> or <code>rstudio</code> command we can run and access the given set of R packages from)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
rWrapper.override {<br />
packages = with rPackages; [<br />
... list of R packages ...<br />
];<br />
}</source><br />
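For example, a sketch enabling two common libraries (note that dots in R package names become underscores in nixpkgs attribute names, e.g., <code>data.table</code> is <code>data_table</code>):<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
rWrapper.override {<br />
packages = with rPackages; [<br />
ggplot2<br />
data_table<br />
];<br />
}</source><br />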
A pro tip is<br />
<br />
* the function <code>rPackages.buildRPackage</code> can be used to build your own R packages<br />
<br />
== Haskell ==<br />
<br />
Nixpkgs provides the following haskell related attributes<br />
<br />
* <code>haskell.compiler.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - package providing the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - the set of haskell packages compiled by the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.withPackages</code> - wraps ghc to enable the given packages<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.withHoogle</code> - wraps ghc to enable the given packages with hoogle and documentation indices<br />
<br />
We can use the first directly to use programs provided by haskell packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.haskell.packages.ghc864.pandoc<br />
[user@cluster:~]$ pandoc --help</source><br />
and the last two in a <code>.nix</code> file to create a ghc environment that enables a given set of packages (e.g., a <code>ghci</code> we can run and access the given set of packages from)<br />
<br />
<pre>with import &lt;nixpkgs&gt; { };<br />
haskell.packages.ghc864.withPackages (packages:<br />
with packages; [<br />
... list of Haskell packages ...<br />
]<br />
)</pre><br />
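As a concrete sketch (the Hackage attributes <code>aeson</code> and <code>containers</code> are assumed to be present, as they usually are):<br />
<br />
<pre>with import &lt;nixpkgs&gt; { };<br />
haskell.packages.ghc864.withPackages (packages:<br />
with packages; [<br />
aeson<br />
containers<br />
])</pre><br />
Saved as, say, <code>ghc.nix</code>, building it with <code>nix build -f ghc.nix -o ghc</code> gives a <code>./ghc/bin/ghci</code> that can import both libraries.<br />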
Some pro tips are<br />
<br />
* the alias <code>haskellPackages</code> gives a default <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code><br />
* <code>haskell.lib</code> contains a variety of useful functions for tweaking haskell packages (e.g., enabling profiling, etc.)<br />
* the upstream maintainer has a useful [https://www.youtube.com/watch?v=KLhkAEk8I20 youtube video] on how to fix broken haskell packages<br />
<br />
== Emacs ==<br />
<br />
Nixpkgs provides the following emacs related attributes (append a <code>Ng</code> suffix for older versions of nixpkgs, e.g., <code>emacs25Ng</code> and <code>emacs25PackagesNg</code>)<br />
<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;</code> - a package providing the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages</code> - the set of emacs packages for the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> - wraps emacs to enable the given packages<br />
<br />
We can use the second directly to examine the content of packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.emacs25Packages.magit -o magit<br />
[user@cluster:~]$ cat magit/share/emacs/site-lisp/elpa/magit*/AUTHORS.md<br />
[user@cluster:~]$ exit</source><br />
and the last one in a <code>.nix</code> file to create an emacs with the given set of packages enabled<br />
<br />
<pre>with import &lt;nixpkgs&gt; { };<br />
emacs25Packages.emacsWithPackages (packages:<br />
with packages; [<br />
... list of emacs packages ...<br />
]<br />
)</pre><br />
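For instance, a sketch wrapping emacs with two popular packages (the attribute names <code>magit</code> and <code>markdown-mode</code> are assumed to exist in the <code>emacs25Packages</code> set; check with <code>nix search</code>):<br />
<br />
<pre>with import &lt;nixpkgs&gt; { };<br />
emacs25Packages.emacsWithPackages (packages:<br />
with packages; [<br />
magit<br />
markdown-mode<br />
])</pre><br />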
Some pro tips are<br />
<br />
* the aliases <code>emacs</code> and <code>emacsPackages</code> give default <code>emacs&lt;major&gt;&lt;minor&gt;</code> and <code>emacs&lt;major&gt;&lt;minor&gt;Packages</code> versions<br />
* the alias <code>emacs&lt;major&gt;&lt;minor&gt;WithPackages</code> is short for <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> (including default version variants)<br />
<br />
[[Category:Software]]</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_Nix&diff=87339Using Nix2020-07-30T21:06:23Z<p>Tyson: Rewrite with focus on quickstart, new nix command, per-project usage, and tooling infrastructures</p>
<hr />
<div>{{Draft}}<br />
<br />
= Overview =<br />
<br />
[https://nixos.org/nix/ Nix] is a package management system that allows users to manage their own persistent software environments. At the moment it is only available on SHARCNET systems (i.e., graham and legacy). If you would like this to change, help motivate an expansion by letting us know (it requires some coordination, but isn't too difficult to do).<br />
<br />
* Supports one-off, per-project, and per-user usage of packages<br />
* Packages can be built, installed, upgraded, downgraded, and removed as a user<br />
* Operations either succeed or fail leaving everything intact (operations are atomic).<br />
* Extremely easy to add and share packages<br />
<br />
'''NOTE:''' The message <code>failed to lock thread to CPU XX</code> is a harmless warning that can be ignored.<br />
<br />
== Enabling and disabling the Nix environment ==<br />
<br />
The user’s current Nix environment is enabled by loading the nix module. This creates some <code>.nix*</code> files and sets some environment variables.<br />
<br />
<source lang="bash">[name@cluster:~]$ module load nix</source><br />
It is disabled by unloading the nix module. This unsets the environment variables but leaves the <code>.nix*</code> files alone.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix</source><br />
== Completely resetting the Nix environment ==<br />
<br />
Most operations can be undone with the <code>--rollback</code> option (i.e., <code>nix-env --rollback</code> or <code>nix-channel --rollback</code>). Sometimes it is useful to entirely reset nix though. This is done by unloading the module, erasing all user related nix files, and then reloading the module file.<br />
<br />
<source lang="bash">[name@cluster:~]$ module unload nix<br />
[name@cluster:~]$ rm -fr ~/.nix-profile ~/.nix-defexpr ~/.nix-channels ~/.config/nixpkgs<br />
[name@cluster:~]$ rm -fr /nix/var/nix/profiles/per-user/$USER /nix/var/nix/gcroots/per-user/$USER<br />
[name@cluster:~]$ module load nix</source><br />
= Basic package usage =<br />
<br />
The <code>nix search</code> command can be used to locate available packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix search git<br />
...<br />
* nixpkgs.git (git-minimal-2.19.3)<br />
Distributed version control system<br />
...</source><br />
Pro tips include<br />
<br />
* you need to specify <code>-u</code> to update the search cache after upgrading your package set (this will take a while)<br />
* the search string is actually a regular expression and multiple ones are ANDed together<br />
<br />
Often we want to use a package in one of three ways: as a one-off command, within a specific project, or as part of our everyday per-user environment.<br />
<br />
== One offs ==<br />
<br />
If you just want to use a package once, the easiest way is to use the <code>nix run</code> command. This command will start a shell in which <code>PATH</code> has been extended to include the specified package<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.git<br />
[user@cluster:~]$ git<br />
[user@cluster:~]$ exit</source><br />
Note that this does not protect the package from being garbage collected overnight (i.e., the package is only guaranteed to be around temporarily for your use until sometime in the early-morning hours). Pro tips include<br />
<br />
* you can specify more than one package in the same <code>nix run</code> command<br />
* you can specify a command instead of a shell with <code>-c &lt;cmd&gt; &lt;args&gt; ...</code><br />
<br />
== Per-project ==<br />
<br />
If you want to use a program for a specific project, the easiest way is with the <code>nix build</code> command. This command will create a symbolic link (by default named <code>result</code>) from which you can access the program's <code>bin</code> directory to run it.<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.git<br />
[user@cluster:~]$ ./result/bin/git</source><br />
Note that (currently) the package will only be protected from overnight garbage collection if you output the symlink into your <code>home</code> directory and do not rename or move it. Pro tips include<br />
<br />
* you can specify the output symlink name with the <code>-o &lt;name&gt;</code> option<br />
* add the <code>bin</code> directory to your <code>PATH</code> to not have to type it in every time<br />
<br />
== Per-user ==<br />
<br />
Loading the <code>nix</code> module adds the per-user common <code>~/.nix-profile/bin</code> directory to your <code>PATH</code>. You can add and remove packages from this directory with the <code>nix-env</code> command<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --install --attr nixpkgs.git<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3</source><br />
<source lang="bash">[user@cluster:~]$ nix-env --uninstall git-minimal<br />
uninstalling 'git-minimal-2.19.3'<br />
[user@cluster:~]$ nix-env --query</source><br />
Each command actually creates a new generation of your environment, so all prior generations remain and can be used<br />
<br />
<source lang="bash">[user@cluster:~]$ nix-env --list-generations<br />
1 2020-07-29 13:10:03<br />
2 2020-07-29 13:11:52 (current)<br />
[user@cluster:~]$ nix-env --switch-generation 1<br />
[user@cluster:~]$ nix-env --query<br />
git-minimal-2.19.3<br />
[user@cluster:~]$ nix-env --switch-generation 2<br />
[user@cluster:~]$ nix-env --query</source><br />
Pro tips include<br />
<br />
* <code>nix-env --rollback</code> moves back one generation<br />
* <code>nix-env --delete-generations &lt;time&gt;</code> deletes environments older than <code>&lt;time&gt;</code> (e.g., <code>30d</code>)<br />
* see our [[Using Nix: nix-env|nix-env]] page for a much more in-depth discussion of using <code>nix-env</code><br />
<br />
= Advanced package usage =<br />
<br />
Often we require a composition of packages. This can range from as simple as having the binaries from multiple packages available in the same <code>bin</code> directory (e.g., <code>make</code>, <code>gcc</code>, and <code>ld</code> to build a simple C program) to as complex as having a python environment set up with all the desired modules installed (e.g., <code>PYTHONPATH</code> set correctly, etc.).<br />
<br />
All of these have a common format. You write a Nix expression in a <code>.nix</code> file that composes packages together, and then you tell the above commands to use it with the <code>-f &lt;nix file&gt;</code> option. For example, say the file <code>python.nix</code> contains an expression for a python environment; you can create a per-project bin directory with<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build -f python.nix -o python<br />
[user@cluster:~]$ ./python/bin/python</source><br />
The Nix expression you put in the file generally<br />
<br />
* does <code>with import &lt;nixpkgs&gt; {}</code> to bring the set of nixpkgs into scope<br />
* calls an existing package-composition function with a list of space-separated components to include<br />
<br />
The template for the second of these follows below, as it differs slightly across the various ecosystems.<br />
<br />
A pro tip is<br />
<br />
* there are many [https://nixos.org/nixpkgs/manual/#chap-language-support supported languages and frameworks] but only a few are described here; send us an email if you would like a missing one added<br />
<br />
== Generic ==<br />
<br />
Nixpkgs provides a <code>buildEnv</code> function that combines multiple packages into a single one (by combining their <code>bin</code>, <code>lib</code>, etc. directories). The list of packages is the same as used before minus the leading <code>nixpkgs</code>, as it was imported (e.g., <code>git</code> instead of <code>nixpkgs.git</code>).<br />
<br />
<source lang="nix">with import <nixpkgs> {};<br />
buildEnv {<br />
name = "my environment";<br />
paths = [<br />
... list of packages ...<br />
];<br />
}</source><br />
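For instance, a minimal sketch of an environment for building a simple C program (the attribute names <code>gnumake</code>, <code>gcc</code>, and <code>binutils</code> are assumed to exist in your nixpkgs revision; use <code>nix search</code> to confirm):<br />
<br />
<source lang="nix">with import <nixpkgs> {};<br />
buildEnv {<br />
name = "c-toolchain";<br />
paths = [<br />
gnumake<br />
gcc<br />
binutils<br />
];<br />
}</source><br />
Saved as, say, <code>toolchain.nix</code>, it can be built with <code>nix build -f toolchain.nix -o toolchain</code> and the tools run from <code>./toolchain/bin</code>.<br />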
== Python ==<br />
<br />
Nixpkgs provides the following python related attributes<br />
<br />
* <code>python&lt;major&gt;&lt;minor&gt;</code> - a package providing the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> - the set of python packages using the given python<br />
* <code>python&lt;major&gt;&lt;minor&gt;.withPackages</code> - wraps python with <code>PYTHON_PATH</code> set to a given set of packages<br />
<br />
We can use the first directly to use the programs provided by python packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.python36.pkgs.spambayes<br />
[user@cluster:~]$ sb_filter.py --help<br />
[user@cluster:~]$ exit</source><br />
and the last in a <code>.nix</code> file to create a python wrapper that enables a given set of libraries (e.g., a <code>python</code> command we can run and access the given set of python packages from)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
python.withPackages (packages:<br />
with packages; [<br />
... list of python packages ...<br />
]<br />
)</source><br />
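As a concrete sketch, assuming the attributes <code>numpy</code> and <code>requests</code> exist in your nixpkgs revision, the following could go in <code>python.nix</code>:<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
python.withPackages (packages:<br />
with packages; [<br />
numpy<br />
requests<br />
]<br />
)</source><br />
Building it with <code>nix build -f python.nix -o python</code> gives a <code>./python/bin/python</code> that can import both modules.<br />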
Some pro tips are<br />
<br />
* the aliases <code>python</code> and <code>python&lt;major&gt;</code> give default <code>python&lt;major&gt;&lt;minor&gt;</code> versions<br />
* the aliases <code>python&lt;major&gt;&lt;minor&gt;Packages</code> are short for <code>python&lt;major&gt;&lt;minor&gt;.pkgs</code> (including default version variants)<br />
* the function <code>python&lt;major&gt;&lt;minor&gt;.pkgs.buildPythonPackage</code> can be used to build your own packages<br />
<br />
== R ==<br />
<br />
Nixpkgs provides the following R related attributes<br />
<br />
* <code>R</code> - a package providing R<br />
* <code>rstudio</code> - a package providing RStudio<br />
* <code>rPackages</code> - the set of R packages<br />
* <code>rWrapper</code> - a wrapped R with <code>R_LIBS</code> set to a minimal set of packages<br />
* <code>rstudioWrapper</code> - a wrapped RStudio with <code>R_LIBS</code> set to a minimal set of packages<br />
<br />
We can use <code>rPackages</code> directly to examine the content of packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.rPackages.exams -o exams<br />
[user@cluster:~]$ cat exams/library/exams/NEWS<br />
[user@cluster:~]$ exit</source><br />
and the latter two can be overridden in a <code>.nix</code> file to create R and RStudio wrappers that enable a given set of libraries (e.g., an <code>R</code> or <code>rstudio</code> command we can run and access the given set of R packages from)<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
rWrapper.override {<br />
packages = with rPackages; [<br />
... list of R packages ...<br />
];<br />
}</source><br />
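For example, a sketch enabling two common libraries (note that dots in R package names become underscores in nixpkgs attribute names, e.g., <code>data.table</code> is <code>data_table</code>):<br />
<br />
<source lang="nix">with import <nixpkgs> { };<br />
rWrapper.override {<br />
packages = with rPackages; [<br />
ggplot2<br />
data_table<br />
];<br />
}</source><br />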
A pro tip is<br />
<br />
* the function <code>rPackages.buildRPackage</code> can be used to build your own R packages<br />
<br />
== Haskell ==<br />
<br />
Nixpkgs provides the following haskell related attributes<br />
<br />
* <code>haskell.compiler.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - package providing the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code> - the set of haskell packages compiled by the given ghc<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.withPackages</code> - wraps ghc to enable the given packages<br />
* <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;.withHoogle</code> - wraps ghc to enable the given packages with hoogle and documentation indices<br />
<br />
We can use the first directly to use programs provided by haskell packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix run nixpkgs.haskell.packages.ghc864.pandoc<br />
[user@cluster:~]$ pandoc --help</source><br />
and the last two in a <code>.nix</code> file to create a ghc environment that enables a given set of packages (e.g., a <code>ghci</code> we can run and access the given set of packages from)<br />
<br />
<pre>with import &lt;nixpkgs&gt; { };<br />
haskell.packages.ghc864.withPackages (packages:<br />
with packages; [<br />
... list of Haskell packages ...<br />
]<br />
)</pre><br />
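As a concrete sketch (the Hackage attributes <code>aeson</code> and <code>containers</code> are assumed to be present, as they usually are):<br />
<br />
<pre>with import &lt;nixpkgs&gt; { };<br />
haskell.packages.ghc864.withPackages (packages:<br />
with packages; [<br />
aeson<br />
containers<br />
])</pre><br />
Saved as, say, <code>ghc.nix</code>, building it with <code>nix build -f ghc.nix -o ghc</code> gives a <code>./ghc/bin/ghci</code> that can import both libraries.<br />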
Some pro tips are<br />
<br />
* the alias <code>haskellPackages</code> gives a default <code>haskell.packages.ghc&lt;major&gt;&lt;minor&gt;&lt;patch&gt;</code><br />
* <code>haskell.lib</code> contains a variety of useful functions for tweaking haskell packages (e.g., enabling profiling, etc.)<br />
* the upstream maintainer has a useful [https://www.youtube.com/watch?v=KLhkAEk8I20 youtube video] on how to fix broken haskell packages<br />
<br />
== Emacs ==<br />
<br />
Nixpkgs provides the following emacs related attributes (append a <code>Ng</code> suffix for older versions of nixpkgs, e.g., <code>emacs25Ng</code> and <code>emacs25PackagesNg</code>)<br />
<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;</code> - a package providing the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages</code> - the set of emacs packages for the given emacs editor<br />
* <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> - wraps emacs to enable the given packages<br />
<br />
We can use the second directly to examine the contents of packages<br />
<br />
<source lang="bash">[user@cluster:~]$ nix build nixpkgs.emacs25Packages.magit -o magit<br />
[user@cluster:~]$ cat magit/share/emacs/site-lisp/elpa/magit*/AUTHORS.md<br />
[user@cluster:~]$ exit</source><br />
and the last one in a <code>.nix</code> file creates an emacs with the given set of packages enabled<br />
<br />
<pre>with import &lt;nixpkgs&gt; { };<br />
emacs25Packages.emacsWithPackages (packages:<br />
with packages; [<br />
... list of emacs packages ...<br />
])</pre><br />
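Concretely, such a file might look like this (a sketch only; <code>magit</code> and <code>evil</code> are illustrative package names):<br />
<br />
```shell
# Write a sample emacs-env.nix enabling two illustrative Emacs packages.
cat > emacs-env.nix <<'EOF'
with import <nixpkgs> { };
emacs25Packages.emacsWithPackages (packages:
  with packages; [
    magit
    evil
  ])
EOF
```
<br />
Building this file (e.g., with <code>nix build --file emacs-env.nix</code>) would produce a wrapped <code>emacs</code> that already has those packages on its load path.<br />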
Some pro tips are<br />
<br />
* the aliases <code>emacs</code> and <code>emacsPackages</code> give a default <code>emacs&lt;major&gt;&lt;minor&gt;</code> and <code>emacs&lt;major&gt;&lt;minor&gt;Packages</code> version<br />
* the aliases <code>emacs&lt;major&gt;&lt;minor&gt;WithPackages</code> are short for <code>emacs&lt;major&gt;&lt;minor&gt;Packages.emacsWithPackages</code> (including default-version variants)<br />
<br />
[[Category:Software]]</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_GPUs_with_Slurm&diff=80995Using GPUs with Slurm2020-03-23T19:44:06Z<p>Tyson: Incorrect gres line for allocating a GPU on cedar</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
<!--T:15--><br />
For general advice on job scheduling, see [[Running jobs]].<br />
<br />
== Available hardware == <!--T:1--><br />
These are the node types containing GPUs currently available on [[Béluga/en|Béluga]], [[Cedar]], [[Graham]] and [[Hélios/en|Hélios]]:<br />
<br />
<!--T:2--><br />
{| class="wikitable"<br />
|-<br />
! # of Nodes !!Node type !! CPU cores !! CPU memory !! # of GPUs !! NVIDIA GPU type !! PCIe bus topology<br />
|-<br />
| 172 || Béluga Base GPU || 40 || 191000M || 4 || V100-SXM2-16GB || All GPUs associated with the same CPU socket<br />
|-<br />
| 114 || Cedar Base GPU || 24 || 128000M || 4 || P100-PCIE-12GB || Two GPUs per CPU socket<br />
|-<br />
| 32 || Cedar Large GPU || 24|| 257000M || 4 || P100-PCIE-16GB || All GPUs associated with the same CPU socket<br />
|-<br />
| 160 || Graham Base GPU || 32|| 127518M || 2 || P100-PCIE-12GB || One GPU per CPU socket<br />
|-<br />
| 7 || Graham Base GPU || 28 || 183105M || 8 || V100-PCIE-16GB || Four GPUs per CPU socket<br />
|-<br />
| 36 || Graham Base GPU || 16 || 196608M || 4 || Tesla T4 16GB || Two GPUs per CPU socket<br />
|-<br />
| 15 || Hélios K20 || 20 || 110000M || 8 || K20 5GB || Four GPUs per CPU socket<br />
|- <br />
| 6 || Hélios K80 || 24 || 257000M || 16 || K80 12GB || Eight GPUs per CPU socket<br />
|}<br />
<br />
== Specifying the type of GPU to use == <!--T:16--><br />
Most clusters have multiple types of GPUs available. You can specify the type of GPU to use by adding a specifier to the <code>--gres=gpu</code> option. The following options are available:<br />
<br />
=== On Cedar === <!--T:17--><br />
You can request a 12G P100 using<br />
<br />
<!--T:18--><br />
#SBATCH --gres=gpu:1<br />
<br />
<!--T:19--><br />
or a 16G P100 using <br />
<br />
<!--T:20--><br />
#SBATCH --gres=gpu:lgpu:1<br />
<br />
<!--T:21--><br />
Unless otherwise specified, all GPU jobs requesting <= 125G of memory will run on 12G P100s.<br />
<br />
=== On Graham === <!--T:22--><br />
You can request a P100 using<br />
<br />
<!--T:23--><br />
#SBATCH --gres=gpu:p100:1<br />
<br />
<!--T:24--><br />
or a V100 using <br />
<br />
<!--T:25--><br />
#SBATCH --gres=gpu:v100:1<br />
<br />
<!--T:26--><br />
or a T4 using <br />
<br />
<!--T:27--><br />
#SBATCH --gres=gpu:t4:1<br />
<br />
<!--T:28--><br />
Unless otherwise specified, all GPU jobs will run on a P100.<br />
<br />
=== On Béluga === <!--T:29--><br />
Béluga has only a single type of GPU, so there is no need to specify a type.<br />
<br />
=== On Hélios === <!--T:30--><br />
You can request a K20 using<br />
<br />
<!--T:31--><br />
#SBATCH --gres=gpu:k20:1<br />
<br />
<!--T:32--><br />
or a K80 using <br />
<br />
<!--T:33--><br />
#SBATCH --gres=gpu:k80:1<br />
<br />
== Single-core job == <!--T:3--><br />
If you need only a single CPU core and one GPU:<br />
{{File<br />
|name=gpu_serial_job.sh<br />
|lang="sh"<br />
|contents=<br />
#!/bin/bash<br />
#SBATCH --account=def-someuser<br />
#SBATCH --gres=gpu:1 # Number of GPUs (per node)<br />
#SBATCH --mem=4000M # memory (per node)<br />
#SBATCH --time=0-03:00 # time (DD-HH:MM)<br />
./program # you can use 'nvidia-smi' for a test<br />
}}<br />
<br />
== Multi-threaded job == <!--T:4--><br />
For GPU jobs asking for multiple CPUs in a single node:<br />
{{File<br />
|name=gpu_threaded_job.sh<br />
|lang="sh"<br />
|contents=<br />
#!/bin/bash<br />
#SBATCH --account=def-someuser<br />
#SBATCH --gres=gpu:1 # Number of GPU(s) per node<br />
#SBATCH --cpus-per-task=6 # CPU cores/threads<br />
#SBATCH --mem=4000M # memory per node<br />
#SBATCH --time=0-03:00 # time (DD-HH:MM)<br />
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK<br />
./program<br />
}}<br />
For each GPU requested on:<br />
* Béluga, we recommend no more than 10 CPU cores.<br />
* Cedar, we recommend no more than 6 CPU cores.<br />
* Graham, we recommend no more than 16 CPU cores.<br />
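These per-cluster recommendations can be encoded in a small helper when generating job scripts; the sketch below simply captures the numbers above (the function name is hypothetical):<br />
<br />
```shell
# Recommended maximum CPU cores to request per GPU, per cluster
# (numbers taken from the recommendations above).
cores_per_gpu() {
  case "$1" in
    beluga) echo 10 ;;
    cedar)  echo 6 ;;
    graham) echo 16 ;;
    *) echo "unknown cluster: $1" >&2; return 1 ;;
  esac
}

cores_per_gpu graham   # prints 16
```
<br />
A request for one GPU with six CPU cores on Cedar, for instance, stays within these limits.<br />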
<br />
== MPI job == <!--T:5--><br />
{{File<br />
|name=gpu_mpi_job.sh<br />
|lang="sh"<br />
|contents=<br />
#!/bin/bash<br />
#SBATCH --account=def-someuser<br />
#SBATCH --gres=gpu:4 # Number of GPUs per node<br />
#SBATCH --nodes=2 # Number of nodes<br />
#SBATCH --ntasks=48 # Number of MPI processes<br />
#SBATCH --cpus-per-task=1 # CPU cores per MPI process<br />
#SBATCH --mem=120G # memory per node<br />
#SBATCH --time=0-03:00 # time (DD-HH:MM)<br />
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK<br />
srun ./program<br />
}}<br />
<br />
== Whole nodes == <!--T:6--><br />
If your application can efficiently use an entire node and its associated GPUs, you will probably experience shorter wait times if you ask Slurm for a whole node. Use one of the following job scripts as a template. <br />
<br />
=== Scheduling a GPU node at Graham === <!--T:7--><br />
{{File<br />
|name=graham_gpu_node_job.sh<br />
|lang="sh"<br />
|contents=<br />
#!/bin/bash<br />
#SBATCH --nodes=1<br />
#SBATCH --gres=gpu:2<br />
#SBATCH --ntasks-per-node=32<br />
#SBATCH --mem=127000M<br />
#SBATCH --time=3:00<br />
#SBATCH --account=def-someuser<br />
nvidia-smi<br />
}}<br />
<br />
=== Scheduling a Base GPU node at Cedar === <!--T:8--><br />
{{File<br />
|name=cedar_gpu_node_job.sh<br />
|lang="sh"<br />
|contents=<br />
#!/bin/bash<br />
#SBATCH --nodes=1<br />
#SBATCH --gres=gpu:4<br />
#SBATCH --ntasks-per-node=24<br />
#SBATCH --exclusive<br />
#SBATCH --mem=125G<br />
#SBATCH --time=3:00<br />
#SBATCH --account=def-someuser<br />
nvidia-smi<br />
}}<br />
<br />
=== Scheduling a Large GPU node at Cedar === <!--T:9--><br />
<br />
<!--T:10--><br />
There is a special group of large-memory GPU nodes at [[Cedar]] which have four Tesla P100 16GB cards each. (Other GPUs in the cluster have 12GB.) The GPUs in a large-memory node all use the same PCI switch, so the inter-GPU communication latency is lower, but bandwidth between CPU and GPU is lower than on the regular GPU nodes. The nodes also have 256GB RAM instead of 128GB. You may only request these nodes as whole nodes, so you must specify <code>--gres=gpu:lgpu:4</code>. Note that while the maximum run-time for the large-memory GPU nodes on Cedar used to be '''24 hours''', this is no longer the case; large GPU jobs of up to 28 days can now be run on Cedar.<br />
<br />
<!--T:11--><br />
{{File<br />
|name=large_gpu_job.sh<br />
|lang="sh"<br />
|contents=<br />
#!/bin/bash<br />
#SBATCH --nodes=1 <br />
#SBATCH --gres=gpu:lgpu:4 <br />
#SBATCH --ntasks=1<br />
#SBATCH --cpus-per-task=24 # There are 24 CPU cores on Cedar GPU nodes<br />
#SBATCH --mem=0 # Request the full memory of the node<br />
#SBATCH --time=3:00<br />
#SBATCH --account=def-someuser<br />
hostname<br />
nvidia-smi<br />
}}<br />
<br />
===Packing single-GPU jobs within one SLURM job=== <!--T:12--><br />
<br />
<!--T:13--><br />
If you need to run four single-GPU programs or two 2-GPU programs for longer than 24 hours, [[GNU Parallel]] is recommended. A simple example is given below:<br />
<pre><br />
cat params.input | parallel -j4 'CUDA_VISIBLE_DEVICES=$(({%} - 1)) python {} &> {#}.out'<br />
</pre><br />
In this example, the GPU ID is calculated by subtracting 1 from the slot ID <code>{%}</code>; <code>{#}</code> is the sequence number of the job, starting from 1.<br />
<br />
<!--T:14--><br />
A params.input file should include input parameters in each line like:<br />
<pre><br />
code1.py<br />
code2.py<br />
code3.py<br />
code4.py<br />
...<br />
</pre><br />
With this method, users can run multiple tasks in one submission. The <code>-j4</code> parameter means GNU Parallel can run a maximum of four concurrent tasks, launching another as soon as each one ends. CUDA_VISIBLE_DEVICES is used to ensure that two tasks do not try to use the same GPU at the same time.<br />
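The slot-to-GPU mapping performed by the <code>$(({%} - 1))</code> expression above can be sketched in plain bash (the helper name is hypothetical):<br />
<br />
```shell
# GNU Parallel numbers its job slots 1..N, while GPU IDs run 0..N-1,
# so the GPU assigned to a slot is simply slot - 1.
gpu_id_for_slot() {
  echo $(( $1 - 1 ))
}

for slot in 1 2 3 4; do
  echo "slot $slot -> CUDA_VISIBLE_DEVICES=$(gpu_id_for_slot "$slot")"
done
```
<br />
Because at most four slots are active at once with <code>-j4</code>, each running task sees a distinct GPU.<br />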
<br />
</translate></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_Nix&diff=67097Using Nix2019-02-04T21:26:57Z<p>Tyson: Missing an --attr flag on the final example</p>
<hr />
<div>{{Draft}}<br />
<br />
[https://nixos.org/nix/ Nix] is a package manager system that allows users to manage their own persistent software environment. At the moment it is only available on graham.<br />
<br />
* Users can build, install, upgrade, downgrade, and remove packages from their environment without root privileges and without affecting other users.<br />
* Operations either succeed and create a new environment or fail leaving the previous environment in place (operations are atomic).<br />
* Previous environments can be switched back to at any point.<br />
* Users can add their own packages and share them with other users.<br />
<br />
The default Nix package set includes a huge selection (over 10,000) of recent versions of many packages.<br />
<br />
'''NOTE:''' The message <code>failed to lock thread to CPU XX</code> is a harmless warning that can be ignored.<br />
<br />
== Enabling and disabling the Nix environment ==<br />
<br />
The user's current Nix environment is enabled by loading the nix module. This creates some ''.nix*'' files and sets some environment variables.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|module load nix<br />
}}<br />
<br />
It is disabled by unloading the nix module. This unsets the environment variables but leaves the ''.nix*'' files alone.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|module unload nix<br />
}}<br />
<br />
== Completely resetting the Nix environment ==<br />
<br />
Most operations can be undone with the <code>--rollback</code> option (i.e., <code>nix-env --rollback</code> or <code>nix-channel --rollback</code>). Sometimes it is useful to entirely reset nix though. This is done by unloading the module, erasing all user related nix files, and then reloading the module file.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|module unload nix<br />
|rm -fr ~/.nix-profile ~/.nix-defexpr ~/.nix-channels ~/.config/nixpkgs<br />
|rm -fr /nix/var/nix/profiles/per-user/$USER /nix/var/nix/gcroots/per-user/$USER<br />
|module load nix<br />
}}<br />
<br />
= Installing and removing packages =<br />
<br />
The <code>nix-env</code> command is used to set up your Nix environment.<br />
<br />
== What do I have installed and what can I install ==<br />
<br />
Let's first see what we currently have installed.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --query<br />
}}<br />
<br />
Now let's see what is available. We request the attribute paths (an unambiguous way of specifying a package) and the descriptions too (scroll to the right to see them). This takes a bit of time as it visits a lot of small files. Especially over NFS, it can be a good idea to pipe the output to a file and then grep that file in the future.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --query --available --attr-path --description<br />
}}<br />
<br />
== Installing packages ==<br />
<br />
Let's say that we need a newer version of git than provided by default. First let's check what our OS comes with.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|git --version<br />
|which git<br />
}}<br />
<br />
Let's tell Nix to install its version in our environment.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --install --attr nixpkgs.git<br />
|nix-env --query<br />
}}<br />
<br />
Let's check out what we have now (it may be necessary to tell bash to forget remembered executable locations with <code>hash -r</code> so it notices the new one).<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|git --version<br />
|which git<br />
}}<br />
<br />
== Removing packages ==<br />
<br />
For completeness, let's add in the other usual version-control suspects.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --install --attr nixpkgs.subversion nixpkgs.mercurial<br />
|nix-env --query<br />
}}<br />
<br />
Actually, we probably don't really want subversion any more. Let's remove that.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --uninstall subversion<br />
|nix-env --query<br />
}}<br />
<br />
= Environments =<br />
<br />
Nix keeps referring to user environments. Each time we install or remove packages, we create a new environment based on the previous one.<br />
<br />
== Switching between previous environments ==<br />
<br />
This means the previous environments still exist and we can switch back to them at any point. Let's say we changed our mind and want subversion back. It's trivial to restore the previous environment.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --rollback<br />
|nix-env --query<br />
}}<br />
<br />
Of course we may want to do more than just move to the previous environment. We can get a list of all our environments so far and then jump directly to whatever one we want. Let's undo the rollback.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --list-generations<br />
|nix-env --switch-generation 4<br />
|nix-env --query<br />
}}<br />
<br />
== Operations are atomic ==<br />
<br />
Due to the atomic property of Nix environments, we can't be left halfway through installing/updating packages. They either succeed and create us a new environment or leave us with the previous one intact.<br />
<br />
Let's go back to the start when we just had Nix itself and install the one true GNU distributed version control system tla. Don't let it complete though. Hit it with <code>CTRL+c</code> partway through.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --switch-generation 1<br />
|nix-env --install --attr nixpkgs.tla<br />
CTRL+c<br />
}}<br />
<br />
Nothing bad happens. The operation didn't complete so it has no effect on the environment whatsoever.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --query<br />
|nix-env --list-generations<br />
}}<br />
<br />
== Nix only does things once ==<br />
<br />
The install and remove commands take the current environment and create a new environment with the changes. This works regardless of which environment we are currently in. Let's create a new environment from our original environment by just adding git and mercurial.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --list-generations<br />
|nix-env --install --attr nixpkgs.git nixpkgs.mercurial<br />
|nix-env --list-generations<br />
}}<br />
<br />
Notice how much faster it was to install git and mercurial the second time? That is because the software already existed in the local Nix store from the previous installs, so Nix just reused it.<br />
<br />
== Garbage collection ==<br />
<br />
Nix periodically goes through and removes any software not accessible from any existing environments. This means we have to explicitly delete environments we don't want anymore so Nix is able to reclaim the space. We can delete specific environments or any that are sufficiently old.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --delete-generations 30d<br />
}}<br />
<br />
[[Category:Software]]</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_Nix&diff=67072Using Nix2019-02-03T16:00:04Z<p>Tyson: Note that the failed thread affinity call warning message can be ignored</p>
<hr />
<div>Nix is a package manager system that allows users to manage their own persistent software environment. At the moment it is only available on graham.<br />
<br />
* Users can build, install, upgrade, downgrade, and remove packages from their environment without root privileges and without affecting other users.<br />
* Operations either succeed and create a new environment or fail leaving the previous environment in place (operations are atomic).<br />
* Previous environments can be switched back to at any point.<br />
* Users can add their own packages and share them with other users.<br />
<br />
The default Nix package set includes a huge selection (over 10,000) of recent versions of many packages.<br />
<br />
'''NOTE:''' The message <code>failed to lock thread to CPU XX</code> is a harmless warning that can be ignored.<br />
<br />
== Enabling and disabling the Nix environment ==<br />
<br />
The user's current Nix environment is enabled by loading the nix module. This creates some ''.nix*'' files and sets some environment variables.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|module load nix<br />
}}<br />
<br />
It is disabled by unloading the nix module. This unsets the environment variables but leaves the ''.nix*'' files alone.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|module unload nix<br />
}}<br />
<br />
== Completely resetting the Nix environment ==<br />
<br />
Most operations can be undone with the <code>--rollback</code> option (i.e., <code>nix-env --rollback</code> or <code>nix-channel --rollback</code>). Sometimes it is useful to entirely reset nix though. This is done by unloading the module, erasing all user related nix files, and then reloading the module file.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|module unload nix<br />
|rm -fr ~/.nix-profile ~/.nix-defexpr ~/.nix-channels ~/.config/nixpkgs<br />
|rm -fr /nix/var/nix/profiles/per-user/$USER /nix/var/nix/gcroots/per-user/$USER<br />
|module load nix<br />
}}<br />
<br />
= Installing and removing packages =<br />
<br />
The <code>nix-env</code> command is used to set up your Nix environment.<br />
<br />
== What do I have installed and what can I install ==<br />
<br />
Let's first see what we currently have installed.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --query<br />
}}<br />
<br />
Now let's see what is available. We request the attribute paths (an unambiguous way of specifying a package) and the descriptions too (scroll to the right to see them). This takes a bit of time as it visits a lot of small files. Especially over NFS, it can be a good idea to pipe the output to a file and then grep that file in the future.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --query --available --attr-path --description<br />
}}<br />
<br />
== Installing packages ==<br />
<br />
Let's say that we need a newer version of git than provided by default. First let's check what our OS comes with.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|git --version<br />
|which git<br />
}}<br />
<br />
Let's tell Nix to install its version in our environment.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --install --attr nixpkgs.git<br />
|nix-env --query<br />
}}<br />
<br />
Let's check out what we have now (it may be necessary to tell bash to forget remembered executable locations with <code>hash -r</code> so it notices the new one).<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|git --version<br />
|which git<br />
}}<br />
<br />
== Removing packages ==<br />
<br />
For completeness, let's add in the other usual version-control suspects.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --install --attr nixpkgs.subversion nixpkgs.mercurial<br />
|nix-env --query<br />
}}<br />
<br />
Actually, we probably don't really want subversion any more. Let's remove that.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --uninstall subversion<br />
|nix-env --query<br />
}}<br />
<br />
= Environments =<br />
<br />
Nix keeps referring to user environments. Each time we install or remove packages we create a new environment based off of the previous environment.<br />
<br />
== Switching between previous environments ==<br />
<br />
This means the previous environments still exist and we can switch back to them at any point. Let's say we changed our mind and want subversion back. It's trivial to restore the previous environment.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --rollback<br />
|nix-env --query<br />
}}<br />
<br />
Of course we may want to do more than just move to the previous environment. We can get a list of all our environments so far and then jump directly to whatever one we want. Let's undo the rollback.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --list-generations<br />
|nix-env --switch-generation 4<br />
|nix-env --query<br />
}}<br />
<br />
== Operations are atomic ==<br />
<br />
Due to the atomic property of Nix environments, we can't be left halfway through installing/updating packages. They either succeed and create us a new environment or leave us with the previous one intact.<br />
<br />
Let's go back to the start when we just had Nix itself and install the one true GNU distributed version control system tla. Don't let it complete though. Hit it with <code>CTRL+c</code> partway through.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --switch-generation 1<br />
|nix-env --install --attr nixpkgs.tla<br />
CTRL+c<br />
}}<br />
<br />
Nothing bad happens. The operation didn't complete so it has no effect on the environment whatsoever.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --query<br />
|nix-env --list-generations<br />
}}<br />
<br />
== Nix only does things once ==<br />
<br />
The install and remove commands take the current environment and create a new environment with the changes. This works regardless of which environment we are currently in. Let's create a new environment from our original environment by just adding git and mercurial.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --list-generations<br />
|nix-env --install nixpkgs.git nixpkgs.mercurial<br />
|nix-env --list-generations<br />
}}<br />
<br />
Notice how much faster it was to install git and mercurial the second time? That is because the software already existed in the local Nix store from the previous installs, so Nix just reused it.<br />
<br />
== Garbage collection ==<br />
<br />
Nix periodically goes through and removes any software not accessible from any existing environments. This means we have to explicitly delete environments we don't want anymore so Nix is able to reclaim the space. We can delete specific environments or any that are sufficiently old.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --delete-generations 30d<br />
}}<br />
<br />
[[Category:Nix]]</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Using_Nix&diff=67061Using Nix2019-02-01T23:57:16Z<p>Tyson: Basic nix usage copied over and updated from staff wiki</p>
<hr />
<div>Nix is a package manager system that allows users to manage their own persistent software environment. At the moment it is only available on graham.<br />
<br />
* Users can build, install, upgrade, downgrade, and remove packages from their environment without root privileges and without affecting other users.<br />
* Operations either succeed and create a new environment or fail leaving the previous environment in place (operations are atomic).<br />
* Previous environments can be switched back to at any point.<br />
* Users can add their own packages and share them with other users.<br />
<br />
The default Nix package set includes a huge selection (over 10,000) of recent versions of many packages.<br />
<br />
== Enabling and disabling the Nix environment ==<br />
<br />
The user's current Nix environment is enabled by loading the nix module. This creates some ''.nix*'' files and sets some environment variables.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|module load nix<br />
}}<br />
<br />
It is disabled by unloading the nix module. This unsets the environment variables but leaves the ''.nix*'' files alone.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|module unload nix<br />
}}<br />
<br />
== Completely resetting the Nix environment ==<br />
<br />
Most operations can be undone with the <code>--rollback</code> option (i.e., <code>nix-env --rollback</code> or <code>nix-channel --rollback</code>). Sometimes it is useful to entirely reset nix though. This is done by unloading the module, erasing all user related nix files, and then reloading the module file.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|module unload nix<br />
|rm -fr ~/.nix-profile ~/.nix-defexpr ~/.nix-channels ~/.config/nixpkgs<br />
|rm -fr /nix/var/nix/profiles/per-user/$USER /nix/var/nix/gcroots/per-user/$USER<br />
|module load nix<br />
}}<br />
<br />
= Installing and removing packages =<br />
<br />
The <code>nix-env</code> command is used to set up your Nix environment.<br />
<br />
== What do I have installed and what can I install ==<br />
<br />
Let's first see what we currently have installed.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --query<br />
}}<br />
<br />
Now let's see what is available. We request the attribute paths (an unambiguous way of specifying a package) and the descriptions too (scroll to the right to see them). This takes a bit of time as it visits a lot of small files. Especially over NFS, it can be a good idea to pipe the output to a file and then grep that file in the future.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --query --available --attr-path --description<br />
}}<br />
<br />
== Installing packages ==<br />
<br />
Let's say that we need a newer version of git than provided by default. First let's check what our OS comes with.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|git --version<br />
|which git<br />
}}<br />
<br />
Let's tell Nix to install its version in our environment.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --install --attr nixpkgs.git<br />
|nix-env --query<br />
}}<br />
<br />
Let's check out what we have now (it may be necessary to tell bash to forget remembered executable locations with <code>hash -r</code> so it notices the new one).<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|git --version<br />
|which git<br />
}}<br />
<br />
== Removing packages ==<br />
<br />
For completeness, let's add in the other usual version-control suspects.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --install --attr nixpkgs.subversion nixpkgs.mercurial<br />
|nix-env --query<br />
}}<br />
<br />
Actually, we probably don't really want subversion any more. Let's remove that.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --uninstall subversion<br />
|nix-env --query<br />
}}<br />
<br />
= Environments =<br />
<br />
Nix keeps referring to user environments. Each time we install or remove packages we create a new environment based off of the previous environment.<br />
<br />
== Switching between previous environments ==<br />
<br />
This means the previous environments still exist and we can switch back to them at any point. Let's say we changed our mind and want subversion back. It's trivial to restore the previous environment.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --rollback<br />
|nix-env --query<br />
}}<br />
<br />
Of course we may want to do more than just move to the previous environment. We can get a list of all our environments so far and then jump directly to whatever one we want. Let's undo the rollback.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --list-generations<br />
|nix-env --switch-generation 4<br />
|nix-env --query<br />
}}<br />
<br />
== Operations are atomic ==<br />
<br />
Due to the atomic property of Nix environments, we can't be left halfway through installing/updating packages. They either succeed and create us a new environment or leave us with the previous one intact.<br />
<br />
Let's go back to the start when we just had Nix itself and install the one true GNU distributed version control system tla. Don't let it complete though. Hit it with <code>CTRL+c</code> partway through.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --switch-generation 1<br />
|nix-env --install --attr nixpkgs.tla<br />
CTRL+c<br />
}}<br />
<br />
Nothing bad happens. The operation didn't complete so it has no effect on the environment whatsoever.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --query<br />
|nix-env --list-generations<br />
}}<br />
<br />
== Nix only does things once ==<br />
<br />
The install and remove commands take the current environment and create a new environment with the changes. This works regardless of which environment we are currently in. Let's create a new environment from our original environment by just adding git and mercurial.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --list-generations<br />
|nix-env --install nixpkgs.git nixpkgs.mercurial<br />
|nix-env --list-generations<br />
}}<br />
<br />
Notice how much faster it was to install git and mercurial the second time? That is because the software already existed in the local Nix store from the previous installs, so Nix just reused it.<br />
<br />
== Garbage collection ==<br />
<br />
Nix periodically goes through and removes any software not accessible from any existing environments. This means we have to explicitly delete environments we don't want anymore so Nix is able to reclaim the space. We can delete specific environments or any that are sufficiently old.<br />
<br />
{{Commands|prompt=[name@cluster:~]<br />
|nix-env --delete-generations 30d<br />
}}<br />
<br />
[[Category:Nix]]</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=VNC&diff=55900VNC2018-07-26T13:51:10Z<p>Tyson: Clarify that the ssh tunnel command is ran on the user's computer</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
<!--T:1--><br />
[[File:Matlab-vnc.png|400px|thumb|Matlab running via VNC.]]<br />
<br />
<!--T:2--><br />
It is sometimes useful to start a graphical user interface for certain software packages (like [[MATLAB]], for example). The most widely-available way to do this is with [[SSH]] and X11 forwarding, but the performance of SSH+X11 is often too slow to be useful. An alternative is to use [https://en.wikipedia.org/wiki/Virtual_Network_Computing VNC] to start and connect to a remote desktop.<br />
<br />
= VNC Client = <!--T:3--><br />
<br />
<!--T:4--><br />
First you will need to install a VNC client on your machine to connect to the VNC server. We recommend using [http://tigervnc.org/ TigerVNC]. A TigerVNC package is available for most Linux distributions, as are binaries for Windows and Mac.<br />
<br />
== Windows and Mac == <!--T:5--><br />
<br />
<!--T:6--><br />
Starting at the [http://tigervnc.org/ TigerVNC home page]<br />
<br />
<!--T:7--><br />
# click on the '''GitHub release page link'''<br />
# scroll down and click on the '''Binaries are available from bintray''' link<br />
# scroll down and pick the '''.exe''' file for Windows (note the '''64''' for 64 bit Windows) or the '''.dmg''' file for Mac<br />
<br />
<!--T:8--><br />
If asked during the installation, do not enable the VNC server or start the VNC service. This is for sharing your desktop, not for connecting to our systems.<br />
<br />
== Linux == <!--T:9--><br />
<br />
<!--T:10--><br />
Install the TigerVNC viewer with your package manager and then symlink ''~/.vnc/x509_ca.pem'' to your system certificate authority list.<br />
<br />
=== Debian or Ubuntu === <!--T:11--><br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo apt-get install tigervnc-viewer<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
<br />
=== Fedora, CentOS, or RHEL === <!--T:12--><br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo yum install tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/pki/tls/certs/ca-bundle.crt ~/.vnc/x509_ca.pem<br />
}}<br />
<br />
=== Gentoo === <!--T:13--><br />
{{Commands|prompt=[name@local_computer]$ <br />
|emerge -av net-misc/tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
<br />
= VNC Server = <!--T:14--><br />
<br />
<!--T:15--><br />
Now you need a VNC server to connect to. This can be either the dedicated VDI login system on [[Graham]], or one you start manually on an allocated compute node.<br />
<br />
== VDI Login Nodes == <!--T:16--><br />
<br />
<!--T:17--><br />
[[File:TigerVNC-GrahamDesktop.png|400px|thumb|right|'''gra-vdi.computecanada.ca''']]<br />
<br />
<!--T:18--><br />
Graham has dedicated VDI login nodes that provide a full graphical desktop, accelerated OpenGL, and access to the <code>/home</code>, <code>/project</code>, and <code>/scratch</code> filesystems. You can connect to one of the VNC login nodes directly by starting your VNC viewer and entering the address '''gra-vdi.computecanada.ca'''. With TigerVNC, this means starting the client from your Applications menu or running <code>vncviewer</code> from the command line. This will bring up a login screen where you can log in using your Compute Canada credentials.<br />
<br />
<!--T:19--><br />
As with regular login nodes, these VDI login nodes are a shared resource and are not intended for doing batch computation (that is what the compute nodes are for), so please limit your use of them to graphics-related tasks. A non-exclusive list of examples includes graphical pre-processing steps such as mesh generation, graphical post-processing steps such as visualization, and using graphical integrated development environments (IDEs).<br />
<br />
=== Installing software === <!--T:20--><br />
<br />
<!--T:21--><br />
Open-source software is provided by the '''nix''' module. Click the black terminal icon on the top menu bar or pick Applications -> System Tools -> Terminal and load the '''nix''' module. Then you can search for programs using the <code>nix search <regexp></code> command and install them in your environment using the <code>nix-env --install --attr <attribute></code> command. As an example, say you wanted to install [https://qgis.org QGIS]:<br />
<br />
<!--T:22--><br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix search qgis<br />
|result=<br />
Attribute name: nixpkgs.qgis<br />
Package name: qgis<br />
Version: 2.18.20<br />
Description: User friendly Open Source Geographic Information System<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix-env --install --attr nixpkgs.qgis<br />
}}<br />
<br />
<!--T:23--><br />
Your nix environment persists from one login to the next, so you only need to run an install command once. Whatever you have installed will be available anytime the module is loaded.<br />
<br />
<!--T:24--><br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|qgis<br />
}}<br />
<br />
=== Building OpenGL applications === <!--T:25--><br />
<br />
<!--T:26--><br />
For accelerated OpenGL to work, it is necessary to adjust binaries to pre-load an appropriate version of the ''vglfaker.so'' library from VirtualGL. This has already been done for software installed by staff, and is done automatically for any OpenGL software built or installed via nix, but it is something you have to do yourself for software you have manually installed.<br />
<br />
<!--T:27--><br />
The easiest way to do this is to use the <code>patchelf</code> utility from nix (use <code>nix-env --install --attr nixpkgs.patchelf</code> to install it) to adjust the final binary. For example, say you built an OpenGL application against the system libraries and installed it as ''~/.local/bin/myglapp''. Then you need to add the system VirtualGL library ''/usr/lib64/VirtualGL/libvglfaker.so'' as its first required library:<br />
<br />
<!--T:28--><br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|patchelf --add-needed /usr/lib64/VirtualGL/libvglfaker.so ~/.local/bin/myglapp<br />
}}<br />
Note that it is also possible to pre-load ''vglfaker.so'' via the <code>LD_PRELOAD</code> environment variable. This is generally a bad idea as it applies indiscriminately to all binaries, and those that require a different ''vglfaker.so'' than that set in <code>LD_PRELOAD</code> will then fail, but it can be used safely in some cases in wrapper scripts.<br />
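One such safe case is a wrapper script that confines the pre-load to a single program. The sketch below generates a wrapper for the hypothetical <code>myglapp</code> binary from the previous example; the names and paths are illustrative:<br />

```shell
# Generate a wrapper that pre-loads libvglfaker.so for one program only,
# avoiding a global LD_PRELOAD. "myglapp" and its path are placeholders.
# The quoted 'EOF' delays $HOME expansion until the wrapper actually runs.
cat > myglapp-wrapper <<'EOF'
#!/bin/sh
LD_PRELOAD=/usr/lib64/VirtualGL/libvglfaker.so exec "$HOME/.local/bin/myglapp" "$@"
EOF
chmod +x myglapp-wrapper
```

Because the pre-load lives inside the wrapper, other binaries you run in the same session are unaffected.<br />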
<br />
== Compute Nodes == <!--T:29--><br />
<br />
<!--T:30--><br />
Where VDI login nodes are unavailable you can start a VNC server on a compute node, and, with suitable port forwarding, connect to it from your desktop. This gives you dedicated access to the server, but does not provide a full graphical desktop or hardware-accelerated OpenGL.<br />
<br />
=== Starting a VNC server === <!--T:31--><br />
<br />
<!--T:32--><br />
Before starting your VNC server, reserve a node on which to run it using <code>salloc</code>. As an example, to request an [[Running_jobs#Interactive_jobs|interactive job]] using 4 CPUs and 16GB of memory you could use the command:<br />
<br />
<!--T:33--><br />
{{Commands|salloc -c 4 --mem 16000M}}<br />
<br />
<!--T:34--><br />
Once your interactive job has started, start a VNC server with <code>vncserver</code>. Take note of which node your job is running on. If unsure, you can use the <code>hostname</code> command to check. You will be prompted to set a password for your VNC server - '''DO NOT LEAVE THIS BLANK.'''<br />
<br />
<!--T:35--><br />
'''Command with sample output:'''<br />
<br />
<!--T:36--><br />
{{Command<br />
|vncserver<br />
|result=<br />
You will require a password to access your desktops.<br />
<br />
<!--T:37--><br />
Password:<br />
Verify:<br />
Would you like to enter a view-only password (y/n)? n<br />
<br />
<!--T:38--><br />
New 'cdr767.int.cedar.computecanada.ca:1 (username)' desktop is cdr767.int.cedar.computecanada.ca:1<br />
<br />
<!--T:39--><br />
Creating default startup script /home/username/.vnc/xstartup<br />
Creating default config /home/username/.vnc/config<br />
Starting applications specified in /home/username/.vnc/xstartup<br />
Log file is /home/username/.vnc/cdr767.int.cedar.computecanada.ca:1.log<br />
}}<br />
<br />
<!--T:40--><br />
Determine which port the VNC server is using by examining the log file with <code>cat</code><br />
or <code>grep</code>, e.g.:<br />
<br />
<!--T:41--><br />
{{Command<br />
|grep port /home/username/.vnc/cdr767.int.cedar.computecanada.ca:1.log<br />
|result=<br />
vncext: Listening for VNC connections on all interface(s), port 5901<br />
}}<br />
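If you want just the bare port number, it can be pulled out with shell parameter expansion. This is a sketch using a copy of the log line above; on the cluster you would read the real log file under <code>~/.vnc/</code>:<br />

```shell
# Extract only the port number from a vncext log line.
# The sample line mirrors the grep output shown above.
line='vncext: Listening for VNC connections on all interface(s), port 5901'
port=${line##*port }   # strip everything up to and including the last "port "
echo "$port"           # prints 5901
```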
<br />
=== Setting up an SSH tunnel to the VNC server === <!--T:42--><br />
Once your VNC server has been started, create a "bridge" to allow your local desktop computer to connect to the compute node directly. This bridge connection is created using an [[SSH tunnelling|SSH tunnel]]. SSH tunnels are created on your computer using the same SSH connection command as usual, with an extra option added, following the format <code>ssh user@host -L local_port:compute_node:remote_port</code>.<br />
<br />
<!--T:43--><br />
An example SSH tunnel command, run on your computer, to connect to a VNC server running on port 5901 of Cedar's cdr767 node would be the following:<br />
<br />
<!--T:44--><br />
<pre><br />
ssh username@cedar.computecanada.ca -L 5902:cdr767:5901<br />
</pre><br />
<br />
<!--T:45--><br />
The SSH tunnel operates like a normal SSH session: you may run commands over it, and so on. However, keep in mind that this SSH session is also your connection to the VNC server. If you terminate the SSH session, your connection to the VNC server will be lost! For more information, please see [[SSH tunnelling]].<br />
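As a rule of thumb, VNC display <code>:N</code> listens on TCP port <code>5900+N</code>, so the <code>:1</code> desktop above corresponds to port 5901. The tunnel command can therefore be assembled from the node name and display number; a sketch, using the same illustrative username and ports as above:<br />

```shell
# VNC display :N corresponds to TCP port 5900+N, so display :1 -> 5901.
node=cdr767
display=1
remote_port=$((5900 + display))
local_port=5902   # any free local port works; it need not match the remote one
echo "ssh username@cedar.computecanada.ca -L ${local_port}:${node}:${remote_port}"
# prints: ssh username@cedar.computecanada.ca -L 5902:cdr767:5901
```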
<br />
=== Connecting to the VNC server === <!--T:46--><br />
<br />
<!--T:47--><br />
To connect to the VNC server, you need to tell your VNC client to connect to '''localhost'''. The following example uses TigerVNC's <code>vncviewer</code> to connect to the running VNC server on cdr767. You will be prompted for the VNC password that you set earlier in order to connect.<br />
<br />
<!--T:48--><br />
'''Command with sample output:'''<br />
{{Command<br />
|vncviewer localhost:5902<br />
|prompt=[name@local_computer]$ <br />
|result=<br />
<br />
TigerVNC Viewer 64-bit v1.8.0<br />
Built on: 2018-06-13 10:56<br />
Copyright (C) 1999-2017 TigerVNC Team and many others (see README.txt)<br />
See http://www.tigervnc.org for information on TigerVNC.<br />
<br />
<!--T:49--><br />
Tue Jul 10 17:40:24 2018<br />
DecodeManager: Detected 8 CPU core(s)<br />
DecodeManager: Creating 4 decoder thread(s)<br />
CConn: connected to host localhost port 5902<br />
CConnection: Server supports RFB protocol version 3.8<br />
CConnection: Using RFB protocol version 3.8<br />
CConnection: Choosing security type VeNCrypt(19)<br />
CVeNCrypt: Choosing security type TLSVnc (258)<br />
<br />
<!--T:50--><br />
Tue Jul 10 17:40:27 2018<br />
CConn: Using pixel format depth 24 (32bpp) little-endian rgb888<br />
CConn: Using Tight encoding<br />
CConn: Enabling continuous updates<br />
<br />
<!--T:51--><br />
}}<br />
<br />
<!--T:55--><br />
The port number (here 5902) must match the local port (the first number) you specified when you set up the SSH tunnel. The default VNC port is 5900. If you specified 5900 for the local port of the SSH tunnel, you could omit it when you invoke <code>vncviewer</code>. However, Windows users may find that they cannot set up an SSH tunnel on local port 5900.<br />
<br />
<!--T:52--><br />
Once connected, you will be presented with an Xterm window and a blank desktop. To launch a program, simply invoke the command as you would normally within the Xterm window. <code>xclock</code> will start a sample clock application you can use to test things out. To start a more complicated program like Matlab, load the module and launch the program as follows:<br />
<br />
{{Commands<br />
|module load matlab<br />
|matlab<br />
}}<br />
=== Resetting your VNC server password === <!--T:53--><br />
<br />
<!--T:54--><br />
If you forget your VNC password or otherwise want to delete your VNC configs and start over with a clean slate, you can delete your <code>~/.vnc</code> directory. The next time you run <code>vncserver</code>, you will be prompted to set a new password.<br />
<br />
</translate></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=VNC&diff=55710VNC2018-07-20T19:50:46Z<p>Tyson: One more time to get the patchelf name correct</p>
<hr />
<div>[[File:Matlab-vnc.png|400px|thumb|Matlab running via VNC.]]<br />
<br />
Frequently, it may be useful to start up graphical user interfaces for various software packages like Matlab. Doing so over X-forwarding can result in a very slow connection to the server, one useful alternative to X-forwarding is using VNC to start and connect to a remote desktop.<br />
<br />
= VNC Client =<br />
<br />
First you will need to install a VNC client on your machine to connect to the VNC server. We recommend using [http://tigervnc.org/ TigerVNC]. It is pre-package on most Linux distributions and comes with Windows and Mac binaries.<br />
<br />
== Windows and Mac ==<br />
<br />
Starting at the [http://tigervnc.org/ TigerVNC home page]<br />
<br />
# click on the '''GitHub release page link'''<br />
# scroll down and click on the '''Binaries are avaiable from bintray''' link<br />
# scroll down and pick the '''.exe''' file for Windows (note the '''64''' for 64 bit Windows) or the '''.dmg''' file for Mac<br />
<br />
If asked during the installation, do not enable the VNC server (this would be for sharing your desktop, not for connecting to our systems).<br />
<br />
== Linux ==<br />
<br />
Install the TigerVNC viewer with your package manager and then symlink ''~/.vnc/x509_ca.pem'' to your system certificate authority list.<br />
=== Debian or Ubuntu ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo apt-get install tigervnc-viewer<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
=== Fedora, CentOS, or RHEL ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo yum install tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/pki/tls/certs/ca-bundle.crt ~/.vnc/x509_ca.pem<br />
}}<br />
=== Gentoo ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|emerge -av net-misc/tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
= VNC Server =<br />
<br />
Now you need a VNC server to connect to. This can be either the dedicated VDI login system on graham or one you start manually on an allocated compute node.<br />
<br />
== VDI Login Nodes ==<br />
<br />
[[File:TigerVNC-GrahamDesktop.png|400px|thumb|right|'''gra-vdi.computecanada.ca''']]<br />
<br />
Graham has a dedicated VNC login nodes that provide a full graphical desktop, accelerated OpenGL, and access to home, project, and scratch. You can connect to them directly by starting your vncviewer (e.g., for TigerVNC, start the client from your Applications menu or run <code>vncviewer</code> from the command line) and entering the address '''gra-vdi.computecanada.ca'''. This will bring up a login screen to which you can login using your Compute Canada credentials.<br />
<br />
As with regular login nodes, these VDI logins nodes are a shared resource and are not intended for doing batch computation (that is what the compute nodes are for), so please limit your use of them to graphical related tasks. A none-exclusive list of examples includes graphical pre-processing steps such as mesh generation, graphical post-processing steps such as visualization, and using graphical IDEs.<br />
<br />
=== Installing software ===<br />
<br />
Open-source software is provided by the '''nix''' module. Click the black terminal icon on the top menu bar or pick Applications -> System Tools -> Terminal and load the '''nix''' module. Then you can search for programs using the <code>nix search <regexp></code> command and install then in your environment using the <code>nix-env --install --attr <attribute></code> command. As an example, say you wanted to install [https://qgis.org QGIS] for your use<br />
<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix search qgis<br />
|result=<br />
Attribute name: nixpkgs.qgis<br />
Package name: qgis<br />
Version: 2.18.20<br />
Description: User friendly Open Source Geographic Information System<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix-env --install --attr nixpkgs.qgis<br />
}}<br />
Your nix environment persists, so you only need to run an install command once. Whatever you have installed will then be available anytime to module is loaded.<br />
<br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|qgis<br />
}}<br />
=== Building OpenGL applications ===<br />
<br />
For accelerated OpenGL to work, it is necessary to adjust binaries to pre-load an appropriate version of the ''vglfaker.so'' library from VirtualGL. This has already be done for software installed by staff, and is done automatically for any OpenGL software built/installed via nix, but it is something you have to do yourself for software you manually installed.<br />
<br />
The easiest way to do this is use the <code>patchelf</code> utility from nix (use <code>nix-env --install --attr nixpkgs.patchelf</code> to install it) to adjust the final binary. For example, say you built an OpenGL application against the system libraries and installed it as ''~/.local/bin/myglapp''. Then you need to add the system VirtualGL library ''/usr/lib64/VirtualGL/libvglfaker.so'' as the first required library to it<br />
<br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|patchelf --add-needed /usr/lib64/VirtualGL/libvglfaker.so ~/.local/bin/myglapp<br />
}}<br />
Note that it is also possible to pre-load ''vglfaker.so'' via the <code>LD_PRELOAD</code> environment variable. This is generally a bad idea as it applies indiscriminately to all binaries, and those that require a different ''vglfaker.so'' than that set in <code>LD_PRELOAD</code> will then fail, but it can be used safely in some cases in wrapper scripts.<br />
<br />
== Compute Nodes ==<br />
<br />
A VNC server can also be started in a compute node, and, with suitable port forwarding, you can connected to from your desktop. This gives you dedicated access, but does not provide a full graphical desktop or hardware accelerated OpenGL.<br />
<br />
=== Starting a VNC server ===<br />
<br />
Before starting your VNC server, you will need a node on which to run it. The easiest way to do so is typically in the framework of an interactive job using <code>salloc</code>. As an example, to request an interactive job using 4 cpus and 16GB of memory, you would use the command:<br />
<br />
{{Commands|salloc -c 4 --mem 16g}}<br />
<br />
Once your interactive job has started, you can start a VNC server with <code>vncserver</code>. You should take note of which node your job is running on, as well as which port (typically 5901). If unsure, you can use the <code>hostname</code> command to check which host your job is running on. You will be prompted to set a password for your VNC server - '''DO NOT LEAVE THIS BLANK.'''<br />
<br />
'''Command with sample output:'''<br />
<br />
{{Command<br />
|vncserver<br />
|result=<br />
You will require a password to access your desktops.<br />
<br />
Password:<br />
Verify:<br />
Would you like to enter a view-only password (y/n)? n<br />
<br />
New 'gra796:1 (username)' desktop is gra796:1<br />
<br />
Creating default startup script /home/username/.vnc/xstartup<br />
Creating default config /home/username/.vnc/config<br />
Starting applications specified in /home/username/.vnc/xstartup<br />
Log file is /home/username/.vnc/gra796:1.log<br />
}}<br />
<br />
You will likely want to <code>cat</code> the log file to determine which port the VNC server is using, in this case, the key line to find is the following:<br />
<br />
<pre><br />
vncext: Listening for VNC connections on all interface(s), port 5901<br />
</pre><br />
<br />
=== Setting up an SSH tunnel to the VNC server ===<br />
<br />
Now that your VNC server has been started, you will need to create a "bridge" to allow your local desktop computer to connect to the compute node directly. This bridge connection is created using an SSH tunnel. SSH tunnels are created using the same SSH connection command as usual, with an extra option added - this follows the format: <code>ssh user@host -L port:compute_node:port</code>.<br />
<br />
An example SSH tunnel command to connect to a VNC server running on Graham's gra796 node and port 5901 would be the following:<br />
<br />
<pre><br />
ssh username@graham.computecanada.ca -L 5900:gra796:5901<br />
</pre><br />
<br />
The SSH tunnel will operate like a normal SSH connection- you may run commands over it, etc. However, keep in mind that this is your connection to the VNC server. If you terminate the SSH tunnel, your connection to the VNC server will be lost! For more detailed information on SSH tunneling, please see [[SSH_tunnelling]].<br />
<br />
=== Connecting to the VNC server ===<br />
<br />
To connect to the VNC server, you need to tell your VNC client to connect to '''localhost'''. The following example uses TigerVNC's <code>vncviewer</code> to connect to the running VNC server on gra796. You will be prompted for your password that you set earlier before you can connect.<br />
<br />
'''Command with sample output:'''<br />
{{Command<br />
|vncviewer localhost<br />
|prompt=[name@local_computer]$ <br />
|result=<br />
<br />
TigerVNC Viewer 64-bit v1.8.0<br />
Built on: 2018-06-13 10:56<br />
Copyright (C) 1999-2017 TigerVNC Team and many others (see README.txt)<br />
See http://www.tigervnc.org for information on TigerVNC.<br />
<br />
Tue Jul 10 17:40:24 2018<br />
DecodeManager: Detected 8 CPU core(s)<br />
DecodeManager: Creating 4 decoder thread(s)<br />
CConn: connected to host localhost port 5901<br />
CConnection: Server supports RFB protocol version 3.8<br />
CConnection: Using RFB protocol version 3.8<br />
CConnection: Choosing security type VeNCrypt(19)<br />
CVeNCrypt: Choosing security type TLSVnc (258)<br />
<br />
Tue Jul 10 17:40:27 2018<br />
CConn: Using pixel format depth 24 (32bpp) little-endian rgb888<br />
CConn: Using Tight encoding<br />
CConn: Enabling continuous updates<br />
<br />
}}<br />
<br />
Once connected, you will be presented with an Xterm window and a blank desktop. To launch a program, simply invoke the command as you would normally within the Xterm window. <code>xclock</code> will start a sample clock application you can use to test things out. To start a more complicated program (for instance, Matlab), you would load the module and launch the program via the following:<br />
<br />
{{Commands<br />
|module load matlab<br />
|matlab<br />
}}<br />
=== Resetting your VNC server password ===<br />
<br />
If you forget your VNC password or otherwise want to delete your VNC configs and start over with a clean slate, you can delete your <code>~/.vnc</code> directory. The next time you run <code>vncserver</code>, you will be prompted to set a new password.</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=VNC&diff=55706VNC2018-07-20T18:36:57Z<p>Tyson: Small clarification</p>
<hr />
<div>[[File:Matlab-vnc.png|400px|thumb|Matlab running via VNC.]]<br />
<br />
Frequently, it may be useful to start up graphical user interfaces for various software packages like Matlab. Doing so over X-forwarding can result in a very slow connection to the server, one useful alternative to X-forwarding is using VNC to start and connect to a remote desktop.<br />
<br />
= VNC Client =<br />
<br />
First you will need to install a VNC client on your machine to connect to the VNC server. We recommend using [http://tigervnc.org/ TigerVNC]. It is pre-package on most Linux distributions and comes with Windows and Mac binaries.<br />
<br />
== Windows and Mac ==<br />
<br />
Starting at the [http://tigervnc.org/ TigerVNC home page]<br />
<br />
# click on the '''GitHub release page link'''<br />
# scroll down and click on the '''Binaries are avaiable from bintray''' link<br />
# scroll down and pick the '''.exe''' file for Windows (note the '''64''' for 64 bit Windows) or the '''.dmg''' file for Mac<br />
<br />
If asked during the installation, do not enable the VNC server (this would be for sharing your desktop, not for connecting to our systems).<br />
<br />
== Linux ==<br />
<br />
Install the TigerVNC viewer with your package manager and then symlink ''~/.vnc/x509_ca.pem'' to your system certificate authority list.<br />
=== Debian or Ubuntu ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo apt-get install tigervnc-viewer<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
=== Fedora, CentOS, or RHEL ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo yum install tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/pki/tls/certs/ca-bundle.crt ~/.vnc/x509_ca.pem<br />
}}<br />
=== Gentoo ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|emerge -av net-misc/tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
= VNC Server =<br />
<br />
Now you need a VNC server to connect to. This can be either the dedicated VDI login system on graham or one you start manually on an allocated compute node.<br />
<br />
== VDI Login Nodes ==<br />
<br />
[[File:TigerVNC-GrahamDesktop.png|400px|thumb|right|'''gra-vdi.computecanada.ca''']]<br />
<br />
Graham has a dedicated VNC login nodes that provide a full graphical desktop, accelerated OpenGL, and access to home, project, and scratch. You can connect to them directly by starting your vncviewer (e.g., for TigerVNC, start the client from your Applications menu or run <code>vncviewer</code> from the command line) and entering the address '''gra-vdi.computecanada.ca'''. This will bring up a login screen to which you can login using your Compute Canada credentials.<br />
<br />
As with regular login nodes, these VDI logins nodes are a shared resource and are not intended for doing batch computation (that is what the compute nodes are for), so please limit your use of them to graphical related tasks. A none-exclusive list of examples includes graphical pre-processing steps such as mesh generation, graphical post-processing steps such as visualization, and using graphical IDEs.<br />
<br />
=== Installing software ===<br />
<br />
Open-source software is provided by the '''nix''' module. Click the black terminal icon on the top menu bar or pick Applications -> System Tools -> Terminal and load the '''nix''' module. Then you can search for programs using the <code>nix search <regexp></code> command and install then in your environment using the <code>nix-env --install --attr <attribute></code> command. As an example, say you wanted to install [https://qgis.org QGIS] for your use<br />
<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix search qgis<br />
|result=<br />
Attribute name: nixpkgs.qgis<br />
Package name: qgis<br />
Version: 2.18.20<br />
Description: User friendly Open Source Geographic Information System<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix-env --install --attr nixpkgs.qgis<br />
}}<br />
Your nix environment persists, so you only need to run an install command once. Whatever you have installed will then be available anytime to module is loaded.<br />
<br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|qgis<br />
}}<br />
=== Building OpenGL applications ===<br />
<br />
For accelerated OpenGL to work, it is necessary to adjust binaries to pre-load an appropriate version of the ''vglfaker.so'' library from VirtualGL. This has already be done for software installed by staff, and is done automatically for any OpenGL software built/installed via nix, but it is something you have to do yourself for software you manually installed.<br />
<br />
The easiest way to do this is use the <code>patchelf</code> utility from nix (use <code>nix-env --install --attr nixpkgs.paychelf</code> to install it) to adjust the final binary. For example, say you built an OpenGL application against the system libraries and installed it as ''~/.local/bin/myglapp''. Then you need to add the system VirtualGL library ''/usr/lib64/VirtualGL/libvglfaker.so'' as the first required library to it<br />
<br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|patchelf --add-needed /usr/lib64/VirtualGL/libvglfaker.so ~/.local/bin/myglapp<br />
}}<br />
Note that it is also possible to pre-load ''vglfaker.so'' via the <code>LD_PRELOAD</code> environment variable. This is generally a bad idea as it applies indiscriminately to all binaries, and those that require a different ''vglfaker.so'' than that set in <code>LD_PRELOAD</code> will then fail, but it can be used safely in some cases in wrapper scripts.<br />
<br />
== Compute Nodes ==<br />
<br />
A VNC server can also be started in a compute node, and, with suitable port forwarding, you can connected to from your desktop. This gives you dedicated access, but does not provide a full graphical desktop or hardware accelerated OpenGL.<br />
<br />
=== Starting a VNC server ===<br />
<br />
Before starting your VNC server, you will need a node on which to run it. The easiest way to do so is typically in the framework of an interactive job using <code>salloc</code>. As an example, to request an interactive job using 4 cpus and 16GB of memory, you would use the command:<br />
<br />
{{Commands|salloc -c 4 --mem 16g}}<br />
<br />
Once your interactive job has started, you can start a VNC server with <code>vncserver</code>. You should take note of which node your job is running on, as well as which port (typically 5901). If unsure, you can use the <code>hostname</code> command to check which host your job is running on. You will be prompted to set a password for your VNC server - '''DO NOT LEAVE THIS BLANK.'''<br />
<br />
'''Command with sample output:'''<br />
<br />
{{Command<br />
|vncserver<br />
|result=<br />
You will require a password to access your desktops.<br />
<br />
Password:<br />
Verify:<br />
Would you like to enter a view-only password (y/n)? n<br />
<br />
New 'gra796:1 (username)' desktop is gra796:1<br />
<br />
Creating default startup script /home/username/.vnc/xstartup<br />
Creating default config /home/username/.vnc/config<br />
Starting applications specified in /home/username/.vnc/xstartup<br />
Log file is /home/username/.vnc/gra796:1.log<br />
}}<br />
<br />
You will likely want to <code>cat</code> the log file to determine which port the VNC server is using, in this case, the key line to find is the following:<br />
<br />
<pre><br />
vncext: Listening for VNC connections on all interface(s), port 5901<br />
</pre><br />
<br />
=== Setting up an SSH tunnel to the VNC server ===<br />
<br />
Now that your VNC server has been started, you will need to create a "bridge" to allow your local desktop computer to connect to the compute node directly. This bridge connection is created using an SSH tunnel. SSH tunnels are created using the same SSH connection command as usual, with an extra option added - this follows the format: <code>ssh user@host -L port:compute_node:port</code>.<br />
<br />
An example SSH tunnel command to connect to a VNC server running on Graham's gra796 node and port 5901 would be the following:<br />
<br />
<pre><br />
ssh username@graham.computecanada.ca -L 5900:gra796:5901<br />
</pre><br />
<br />
The SSH tunnel will operate like a normal SSH connection- you may run commands over it, etc. However, keep in mind that this is your connection to the VNC server. If you terminate the SSH tunnel, your connection to the VNC server will be lost! For more detailed information on SSH tunneling, please see [[SSH_tunnelling]].<br />
<br />
=== Connecting to the VNC server ===<br />
<br />
To connect to the VNC server, you need to tell your VNC client to connect to '''localhost'''. The following example uses TigerVNC's <code>vncviewer</code> to connect to the running VNC server on gra796. You will be prompted for your password that you set earlier before you can connect.<br />
<br />
'''Command with sample output:'''<br />
{{Command<br />
|vncviewer localhost<br />
|prompt=[name@local_computer]$ <br />
|result=<br />
<br />
TigerVNC Viewer 64-bit v1.8.0<br />
Built on: 2018-06-13 10:56<br />
Copyright (C) 1999-2017 TigerVNC Team and many others (see README.txt)<br />
See http://www.tigervnc.org for information on TigerVNC.<br />
<br />
Tue Jul 10 17:40:24 2018<br />
DecodeManager: Detected 8 CPU core(s)<br />
DecodeManager: Creating 4 decoder thread(s)<br />
CConn: connected to host localhost port 5901<br />
CConnection: Server supports RFB protocol version 3.8<br />
CConnection: Using RFB protocol version 3.8<br />
CConnection: Choosing security type VeNCrypt(19)<br />
CVeNCrypt: Choosing security type TLSVnc (258)<br />
<br />
Tue Jul 10 17:40:27 2018<br />
CConn: Using pixel format depth 24 (32bpp) little-endian rgb888<br />
CConn: Using Tight encoding<br />
CConn: Enabling continuous updates<br />
<br />
}}<br />
<br />
Once connected, you will be presented with an Xterm window and a blank desktop. To launch a program, invoke its command in the Xterm window as you normally would; <code>xclock</code> will start a sample clock application you can use to test things out. To start a more complicated program (for instance, Matlab), load the module and launch the program as follows:<br />
<br />
{{Commands<br />
|module load matlab<br />
|matlab<br />
}}<br />
=== Resetting your VNC server password ===<br />
<br />
If you forget your VNC password or otherwise want to delete your VNC configs and start over with a clean slate, you can delete your <code>~/.vnc</code> directory. The next time you run <code>vncserver</code>, you will be prompted to set a new password.</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=VNC&diff=55705VNC2018-07-20T18:29:43Z<p>Tyson: Minor items and corrections</p>
<hr />
<div>[[File:Matlab-vnc.png|400px|thumb|Matlab running via VNC.]]<br />
<br />
Frequently, it may be useful to start up graphical user interfaces for various software packages like Matlab. Doing so over X-forwarding can result in a very slow connection to the server, one useful alternative to X-forwarding is using VNC to start and connect to a remote desktop.<br />
<br />
= VNC Client =<br />
<br />
First you will need to install a VNC client on your machine to connect to the VNC server. We recommend using [http://tigervnc.org/ TigerVNC]. It is pre-package on most Linux distributions and comes with Windows and Mac binaries.<br />
<br />
== Windows and Mac ==<br />
<br />
Starting at the [http://tigervnc.org/ TigerVNC home page]<br />
<br />
# click on the '''GitHub release page link'''<br />
# scroll down and click on the '''Binaries are avaiable from bintray''' link<br />
# scroll down and pick the '''.exe''' file for Windows (note the '''64''' for 64 bit Windows) or the '''.dmg''' file for Mac<br />
<br />
If asked during the installation, do not enable the VNC server (this would be for sharing your desktop, not for connecting to our systems).<br />
<br />
== Linux ==<br />
<br />
Install the TigerVNC viewer with your package manager and then symlink ''~/.vnc/x509_ca.pem'' to your system certificate authority list.<br />
=== Debian or Ubuntu ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo apt-get install tigervnc-viewer<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
=== Fedora, CentOS, or RHEL ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo yum install tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/pki/tls/certs/ca-bundle.crt ~/.vnc/x509_ca.pem<br />
}}<br />
=== Gentoo ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|emerge -av net-misc/tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
= VNC Server =<br />
<br />
Now you need a VNC server to connect to. This can be either the dedicated VDI login system on graham or one you start manually on an allocated compute node.<br />
<br />
== VDI Login Nodes ==<br />
<br />
[[File:TigerVNC-GrahamDesktop.png|400px|thumb|right|'''gra-vdi.computecanada.ca''']]<br />
<br />
Graham has a dedicated VNC login nodes that provide a full graphical desktop, accelerated OpenGL, and access to home, project, and scratch. You can connect to them directly by starting your vncviewer (e.g., for TigerVNC, start the client from your Applications menu or run <code>vncviewer</code> from the command line) and entering the address '''gra-vdi.computecanada.ca'''. This will bring up a login screen to which you can login using your Compute Canada credentials.<br />
<br />
As with regular login nodes, these VDI logins nodes are a shared resource and are not intended for doing batch computation (that is what the compute nodes are for), so please limit your use of them to graphical related tasks. A none-exclusive list of examples includes graphical pre-processing steps such as mesh generation, graphical post-processing steps such as visualization, and using graphical IDEs.<br />
<br />
=== Installing software ===<br />
<br />
Open-source software is provided by the '''nix''' module. Click the black terminal icon on the top menu bar or pick Applications -> System Tools -> Terminal and load the '''nix''' module. Then you can search for programs using the <code>nix search <regexp></code> command and install then in your environment using the <code>nix-env --install --attr <attribute></code> command. As an example, say you wanted to install [https://qgis.org QGIS] for your use<br />
<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix search qgis<br />
|result=<br />
Attribute name: nixpkgs.qgis<br />
Package name: qgis<br />
Version: 2.18.20<br />
Description: User friendly Open Source Geographic Information System<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix-env --install --attr nixpkgs.qgis<br />
}}<br />
Your nix environment persists, so you only need to run an install command once. Whatever you have installed will then be available anytime to module is loaded.<br />
<br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|qgis<br />
}}<br />
=== Building OpenGL applications ===<br />
<br />
For accelerated OpenGL to work, it is necessary to pre-load an appropriate version of the ''vglfaker.so'' library from VirtualGL. This has already be done for software installed by staff, and is done automatically for any OpenGL software built/installed via nix, but it is something you have to do yourself for software you manually installed.<br />
<br />
The easiest way to do this is use the <code>patchelf</code> utility from nix (use <code>nix-env --install --attr nixpkgs.paychelf</code> to install it) to adjust the final binary. For example, say you built an OpenGL application against the system libraries and installed it as ''~/.local/bin/myglapp''. Then you need to add the system VirtualGL library ''/usr/lib64/VirtualGL/libvglfaker.so'' as the first required library to it<br />
<br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|patchelf --add-needed /usr/lib64/VirtualGL/libvglfaker.so ~/.local/bin/myglapp<br />
}}<br />
Note that it is also possible to pre-load ''vglfaker.so'' via the <code>LD_PRELOAD</code> environment variable. This is generally a bad idea as it applies indiscriminately to all binaries, and those that require a different ''vglfaker.so'' than that set in <code>LD_PRELOAD</code> will then fail, but it can be used safely in some cases in wrapper scripts.<br />
<br />
== Compute Nodes ==<br />
<br />
A VNC server can also be started in a compute node, and, with suitable port forwarding, you can connected to from your desktop. This gives you dedicated access, but does not provide a full graphical desktop or hardware accelerated OpenGL.<br />
<br />
=== Starting a VNC server ===<br />
<br />
Before starting your VNC server, you will need a node on which to run it. The easiest way to do so is typically in the framework of an interactive job using <code>salloc</code>. As an example, to request an interactive job using 4 cpus and 16GB of memory, you would use the command:<br />
<br />
{{Commands|salloc -c 4 --mem 16g}}<br />
<br />
Once your interactive job has started, you can start a VNC server with <code>vncserver</code>. You should take note of which node your job is running on, as well as which port (typically 5901). If unsure, you can use the <code>hostname</code> command to check which host your job is running on. You will be prompted to set a password for your VNC server - '''DO NOT LEAVE THIS BLANK.'''<br />
<br />
'''Command with sample output:'''<br />
<br />
{{Command<br />
|vncserver<br />
|result=<br />
You will require a password to access your desktops.<br />
<br />
Password:<br />
Verify:<br />
Would you like to enter a view-only password (y/n)? n<br />
<br />
New 'gra796:1 (username)' desktop is gra796:1<br />
<br />
Creating default startup script /home/username/.vnc/xstartup<br />
Creating default config /home/username/.vnc/config<br />
Starting applications specified in /home/username/.vnc/xstartup<br />
Log file is /home/username/.vnc/gra796:1.log<br />
}}<br />
<br />
You will likely want to <code>cat</code> the log file to determine which port the VNC server is using, in this case, the key line to find is the following:<br />
<br />
<pre><br />
vncext: Listening for VNC connections on all interface(s), port 5901<br />
</pre><br />
<br />
=== Setting up an SSH tunnel to the VNC server ===<br />
<br />
Now that your VNC server has been started, you will need to create a "bridge" to allow your local desktop computer to connect to the compute node directly. This bridge connection is created using an SSH tunnel. SSH tunnels are created using the same SSH connection command as usual, with an extra option added - this follows the format: <code>ssh user@host -L port:compute_node:port</code>.<br />
<br />
An example SSH tunnel command to connect to a VNC server running on Graham's gra796 node and port 5901 would be the following:<br />
<br />
<pre><br />
ssh username@graham.computecanada.ca -L 5900:gra796:5901<br />
</pre><br />
<br />
The SSH tunnel will operate like a normal SSH connection- you may run commands over it, etc. However, keep in mind that this is your connection to the VNC server. If you terminate the SSH tunnel, your connection to the VNC server will be lost! For more detailed information on SSH tunneling, please see [[SSH_tunnelling]].<br />
<br />
=== Connecting to the VNC server ===<br />
<br />
To connect to the VNC server, you need to tell your VNC client to connect to '''localhost'''. The following example uses TigerVNC's <code>vncviewer</code> to connect to the running VNC server on gra796. You will be prompted for your password that you set earlier before you can connect.<br />
<br />
'''Command with sample output:'''<br />
{{Command<br />
|vncviewer localhost<br />
|prompt=[name@local_computer]$ <br />
|result=<br />
<br />
TigerVNC Viewer 64-bit v1.8.0<br />
Built on: 2018-06-13 10:56<br />
Copyright (C) 1999-2017 TigerVNC Team and many others (see README.txt)<br />
See http://www.tigervnc.org for information on TigerVNC.<br />
<br />
Tue Jul 10 17:40:24 2018<br />
DecodeManager: Detected 8 CPU core(s)<br />
DecodeManager: Creating 4 decoder thread(s)<br />
CConn: connected to host localhost port 5901<br />
CConnection: Server supports RFB protocol version 3.8<br />
CConnection: Using RFB protocol version 3.8<br />
CConnection: Choosing security type VeNCrypt(19)<br />
CVeNCrypt: Choosing security type TLSVnc (258)<br />
<br />
Tue Jul 10 17:40:27 2018<br />
CConn: Using pixel format depth 24 (32bpp) little-endian rgb888<br />
CConn: Using Tight encoding<br />
CConn: Enabling continuous updates<br />
<br />
}}<br />
<br />
Once connected, you will be presented with an Xterm window and a blank desktop. To launch a program, simply invoke the command as you would normally within the Xterm window. <code>xclock</code> will start a sample clock application you can use to test things out. To start a more complicated program (for instance, Matlab), you would load the module and launch the program via the following:<br />
<br />
{{Commands<br />
|module load matlab<br />
|matlab<br />
}}<br />
=== Resetting your VNC server password ===<br />
<br />
If you forget your VNC password or otherwise want to delete your VNC configs and start over with a clean slate, you can delete your <code>~/.vnc</code> directory. The next time you run <code>vncserver</code>, you will be prompted to set a new password.</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=VNC&diff=55697VNC2018-07-20T17:44:47Z<p>Tyson: Minor edit for better flow</p>
<hr />
<div>[[File:Matlab-vnc.png|400px|thumb|Matlab running via VNC.]]<br />
<br />
Frequently, it may be useful to start up graphical user interfaces for various software packages like Matlab. Doing so over X-forwarding can result in a very slow connection to the server, one useful alternative to X-forwarding is using VNC to start and connect to a remote desktop.<br />
<br />
= VNC Client =<br />
<br />
First you will need to install a VNC client on your machine to connect to the VNC server. We recommend using [http://tigervnc.org/ TigerVNC]. It is pre-package on most Linux distributions and comes with Windows and Mac binaries.<br />
<br />
== Windows and Mac ==<br />
<br />
Starting at the [http://tigervnc.org/ TigerVNC home page]<br />
<br />
# click on the '''GitHub release page link'''<br />
# scroll down and click on the '''Binaries are avaiable from bintray''' link<br />
# scroll down and pick the '''.exe''' file for Windows (note the '''64''' for 64 bit Windows) or the '''.dmg''' file for Mac<br />
<br />
If asked during the installation, do not enable the VNC server (this would be for sharing your desktop, not for connecting to our systems).<br />
<br />
== Linux ==<br />
<br />
Install the TigerVNC viewer with your package manager and then symlink ''~/.vnc/x509_ca.pem'' to your system certificate authority list.<br />
=== Debian or Ubuntu ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo apt-get install tigervnc-viewer<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
=== Fedora, CentOS, or RHEL ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo yum install tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/pki/tls/certs/ca-bundle.crt ~/.vnc/x509_ca.pem<br />
}}<br />
=== Gentoo ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|emerge -av net-misc/tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
= VNC Server =<br />
<br />
Now you need a VNC server to connect to. This can be either the dedicated VDI login system on graham or one you start manually on an allocated compute node.<br />
<br />
== VDI Login Nodes ==<br />
<br />
[[File:TigerVNC-GrahamDesktop.png|400px|thumb|right|'''gra-vdi.computecanada.ca''']]<br />
<br />
Graham has a dedicated VNC login nodes that provide a full graphical desktop, accelerated OpenGL, and access to home, project, and scratch. You can connect to them directly by starting your vncviewer (e.g., for TigerVNC, start the client from your Applications menu or run <code>vncviewer</code> from the command line) and entering the address '''gra-vdi.computecanada.ca'''. This will bring up a login screen to which you can login using your Compute Canada credentials.<br />
<br />
As with regular login nodes, these VDI logins nodes are a shared resource and are not intended for doing batch computation (that is what the compute nodes are for), so please limit your use of them to graphical related tasks. A none-exclusive list of examples includes graphical pre-processing steps such as mesh generation, graphical post-processing steps such as visualization, and using graphical IDEs.<br />
<br />
=== Installing software ===<br />
<br />
Open-source software is provided by the '''nix''' module. Click the black terminal icon on the top menu bar or pick Applications -> System Tools -> Terminal and load the '''nix''' module. Then you can search for programs using the <code>nix search <regexp></code> command and install then in your environment using the <code>nix-env --install --attr <attribute></code> command. As an example, say you wanted to install [https://qgis.org QGIS] for your use<br />
<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix search qgis<br />
|result=<br />
Attribute name: nixpkgs.qgis<br />
Package name: qgis<br />
Version: 2.18.20<br />
Description: User friendly Open Source Geographic Information System<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix-env --install --attr nixpkgs.qgis<br />
}}<br />
Your nix environment persists, so you only need to run an install command once. Whatever you have installed will then be available anytime to module is loaded.<br />
<br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|qgis<br />
}}<br />
=== Building OpenGL applications ===<br />
<br />
For accelerated OpenGL to work, it is necessary to pre-load an appropriate version of the ''vglfaker.so'' library from VirtualGL. This has already be done for software installed by staff, and is done automatically for any OpenGL software built/installed via Nix, but it is something you have to do yourself for software you install yourself.<br />
<br />
The easiest way to do this is use the <code>patchelf</code> utility from nix (use <code>nix-env --install --attr nixpkgs.readelf</code> to install it) to adjust the final binary. For example, say you built an OpenGL application against the system libraries and installed it as ''~/.local/bin/myglapp''. Then you need to add the system VirtualGL library ''/usr/lib64/VirtualGL/libvglfaker.so'' as the first required library to it<br />
<br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|patchelf --add-needed /usr/lib64/VirtualGL/libvglfaker.so ~/.local/bin/myglapp<br />
}}<br />
Note that it is also possible to pre-load ''vglfaker.so'' via the <code>LD_PRELOAD</code> environment variable. This is generally a bad idea as it applies indiscriminately to all binaries, and those that require a different ''vglfaker.so'' than that set in <code>LD_PRELOAD</code> will then fail, but it can be used safely in some cases in wrapper scripts.<br />
<br />
== Compute Nodes ==<br />
<br />
A VNC server can also be started in a compute node, and, with suitable port forwarding, you can connected to from your desktop. This gives you dedicated access, but does not provide a full graphical desktop or hardware accelerated OpenGL.<br />
<br />
=== Starting a VNC server ===<br />
<br />
Before starting your VNC server, you will need a node on which to run it. The easiest way to do so is typically in the framework of an interactive job using <code>salloc</code>. As an example, to request an interactive job using 4 cpus and 16GB of memory, you would use the command:<br />
<br />
{{Commands|salloc -c 4 --mem 16g}}<br />
<br />
Once your interactive job has started, you can start a VNC server with <code>vncserver</code>. You should take note of which node your job is running on, as well as which port (typically 5901). If unsure, you can use the <code>hostname</code> command to check which host your job is running on. You will be prompted to set a password for your VNC server - '''DO NOT LEAVE THIS BLANK.'''<br />
<br />
'''Command with sample output:'''<br />
<br />
{{Command<br />
|vncserver<br />
|result=<br />
You will require a password to access your desktops.<br />
<br />
Password:<br />
Verify:<br />
Would you like to enter a view-only password (y/n)? n<br />
<br />
New 'gra796:1 (username)' desktop is gra796:1<br />
<br />
Creating default startup script /home/username/.vnc/xstartup<br />
Creating default config /home/username/.vnc/config<br />
Starting applications specified in /home/username/.vnc/xstartup<br />
Log file is /home/username/.vnc/gra796:1.log<br />
}}<br />
<br />
You will likely want to <code>cat</code> the log file to determine which port the VNC server is using, in this case, the key line to find is the following:<br />
<br />
<pre><br />
vncext: Listening for VNC connections on all interface(s), port 5901<br />
</pre><br />
<br />
=== Setting up an SSH tunnel to the VNC server ===<br />
<br />
Now that your VNC server has been started, you will need to create a "bridge" to allow your local desktop computer to connect to the compute node directly. This bridge connection is created using an SSH tunnel. SSH tunnels are created using the same SSH connection command as usual, with an extra option added - this follows the format: <code>ssh user@host -L port:compute_node:port</code>.<br />
<br />
An example SSH tunnel command to connect to a VNC server running on Graham's gra796 node and port 5901 would be the following:<br />
<br />
<pre><br />
ssh username@graham.computecanada.ca -L 5900:gra796:5901<br />
</pre><br />
<br />
The SSH tunnel will operate like a normal SSH connection- you may run commands over it, etc. However, keep in mind that this is your connection to the VNC server. If you terminate the SSH tunnel, your connection to the VNC server will be lost! For more detailed information on SSH tunneling, please see [[SSH_tunnelling]].<br />
<br />
=== Connecting to the VNC server ===<br />
<br />
To connect to the VNC server, you need to tell your VNC client to connect to '''localhost'''. The following example uses TigerVNC's <code>vncviewer</code> to connect to the running VNC server on gra796. You will be prompted for your password that you set earlier before you can connect.<br />
<br />
'''Command with sample output:'''<br />
{{Command<br />
|vncviewer localhost<br />
|prompt=[name@local_computer]$ <br />
|result=<br />
<br />
TigerVNC Viewer 64-bit v1.8.0<br />
Built on: 2018-06-13 10:56<br />
Copyright (C) 1999-2017 TigerVNC Team and many others (see README.txt)<br />
See http://www.tigervnc.org for information on TigerVNC.<br />
<br />
Tue Jul 10 17:40:24 2018<br />
DecodeManager: Detected 8 CPU core(s)<br />
DecodeManager: Creating 4 decoder thread(s)<br />
CConn: connected to host localhost port 5901<br />
CConnection: Server supports RFB protocol version 3.8<br />
CConnection: Using RFB protocol version 3.8<br />
CConnection: Choosing security type VeNCrypt(19)<br />
CVeNCrypt: Choosing security type TLSVnc (258)<br />
<br />
Tue Jul 10 17:40:27 2018<br />
CConn: Using pixel format depth 24 (32bpp) little-endian rgb888<br />
CConn: Using Tight encoding<br />
CConn: Enabling continuous updates<br />
<br />
}}<br />
<br />
Once connected, you will be presented with an Xterm window and a blank desktop. To launch a program, simply invoke the command as you would normally within the Xterm window. <code>xclock</code> will start a sample clock application you can use to test things out. To start a more complicated program (for instance, Matlab), you would load the module and launch the program via the following:<br />
<br />
{{Commands<br />
|module load matlab<br />
|matlab<br />
}}<br />
=== Resetting your VNC server password ===<br />
<br />
If you forget your VNC password or otherwise want to delete your VNC configs and start over with a clean slate, you can delete your <code>~/.vnc</code> directory. The next time you run <code>vncserver</code>, you will be prompted to set a new password.</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=VNC&diff=55694VNC2018-07-20T17:43:14Z<p>Tyson: Some sort of strange end-of-line pre mode trigger that went away when I retyped the text</p>
<hr />
<div>[[File:Matlab-vnc.png|400px|thumb|Matlab running via VNC.]]<br />
<br />
Frequently, it may be useful to start up graphical user interfaces for various software packages like Matlab. Doing so over X-forwarding can result in a very slow connection to the server, one useful alternative to X-forwarding is using VNC to start and connect to a remote desktop.<br />
<br />
= VNC Client =<br />
<br />
First you will need to install a VNC client on your machine to connect to the VNC server. We recommend using [http://tigervnc.org/ TigerVNC]. It is pre-package on most Linux distributions and comes with Windows and Mac binaries.<br />
<br />
== Windows and Mac ==<br />
<br />
Starting at the [http://tigervnc.org/ TigerVNC home page]<br />
<br />
# click on the '''GitHub release page link'''<br />
# scroll down and click on the '''Binaries are avaiable from bintray''' link<br />
# scroll down and pick the '''.exe''' file for Windows (note the '''64''' for 64 bit Windows) or the '''.dmg''' file for Mac<br />
<br />
If asked during the installation, do not enable the VNC server (this would be for sharing your desktop, not for connecting to our systems).<br />
<br />
== Linux ==<br />
<br />
Install the TigerVNC viewer with your package manager and then symlink ''~/.vnc/x509_ca.pem'' to your system certificate authority list.<br />
=== Debian or Ubuntu ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo apt-get install tigervnc-viewer<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
=== Fedora, CentOS, or RHEL ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo yum install tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/pki/tls/certs/ca-bundle.crt ~/.vnc/x509_ca.pem<br />
}}<br />
=== Gentoo ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|emerge -av net-misc/tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
= VNC Server =<br />
<br />
Now you need a VNC server to connect to. This can be either the dedicated VDI login system on graham or one you start manually on an allocated compute node.<br />
<br />
== VDI Login Nodes ==<br />
<br />
[[File:TigerVNC-GrahamDesktop.png|400px|thumb|right|'''gra-vdi.computecanada.ca''']]<br />
<br />
Graham has a dedicated VNC login nodes that provide a full graphical desktop, accelerated OpenGL, and access to home, project, and scratch. You can connect to them directly by starting your vncviewer (e.g., for TigerVNC, start the client from your Applications menu or run <code>vncviewer</code> from the command line) and entering the address '''gra-vdi.computecanada.ca'''. This will bring up a login screen to which you can login using your Compute Canada credentials.<br />
<br />
As with regular login nodes, these VDI logins nodes are a shared resource and are not intended for doing batch computation (that is what the compute nodes are for), so please limit your use of them to graphical related tasks. A none-exclusive list of examples includes graphical pre-processing steps such as mesh generation, graphical post-processing steps such as visualization, and using graphical IDEs.<br />
<br />
=== Installing software ===<br />
<br />
Open-source software is provided by the '''nix''' module. Click the black terminal icon on the top menu bar or pick Applications -> System Tools -> Terminal and load the '''nix''' module. Then you can search for programs using the <code>nix search <regexp></code> command and install then in your environment using the <code>nix-env --install --attr <attribute></code> command. As an example, say you wanted to install [https://qgis.org QGIS] for your use<br />
<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix search qgis<br />
|result=<br />
Attribute name: nixpkgs.qgis<br />
Package name: qgis<br />
Version: 2.18.20<br />
Description: User friendly Open Source Geographic Information System<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix-env --install --attr nixpkgs.qgis<br />
}}<br />
Your nix environment persists, so you only need to run an install command once. Whatever you have installed will then be available anytime to module is loaded.<br />
<br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|qgis<br />
}}<br />
=== Building OpenGL applications ===<br />
<br />
For accelerated OpenGL to work, it is necessary to pre-load an appropriate version of the ''vglfaker.so'' library from VirtualGL. This has already be done for software installed by staff, and is done automatically for any OpenGL software built/installed via Nix, but it is something you have to do yourself for software you install yourself.<br />
<br />
The easiest way to do this is use the <code>patchelf</code> utility from nix (use <code>nix-env --install --attr nixpkgs.readelf</code> to install it) to adjust the final binary. For example, say you built an OpenGL application against the system libraries and installed it as ''~/.local/bin/myglapp''. Then you need to add the system VirtualGL library ''/usr/lib64/VirtualGL/libvglfaker.so'' as the first required library to it<br />
<br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|patchelf --add-needed /usr/lib64/VirtualGL/libvglfaker.so ~/.local/bin/myglapp<br />
}}<br />
Note that it is also possible to pre-load ''vglfaker.so'' via the <code>LD_PRELOAD</code> environment variable. This is generally a bad idea as it applies indiscriminately to all binaries, and those that require a different ''vglfaker.so'' than that set in <code>LD_PRELOAD</code> will then fail, but it can be used appropriately in some cases when creating a wrapper script.<br />
<br />
== Compute Nodes ==<br />
<br />
A VNC server can also be started in a compute node, and, with suitable port forwarding, you can connected to from your desktop. This gives you dedicated access, but does not provide a full graphical desktop or hardware accelerated OpenGL.<br />
<br />
=== Starting a VNC server ===<br />
<br />
Before starting your VNC server, you will need a node on which to run it. The easiest way to do so is typically in the framework of an interactive job using <code>salloc</code>. As an example, to request an interactive job using 4 cpus and 16GB of memory, you would use the command:<br />
<br />
{{Commands|salloc -c 4 --mem 16g}}<br />
<br />
Once your interactive job has started, you can start a VNC server with <code>vncserver</code>. You should take note of which node your job is running on, as well as which port (typically 5901). If unsure, you can use the <code>hostname</code> command to check which host your job is running on. You will be prompted to set a password for your VNC server - '''DO NOT LEAVE THIS BLANK.'''<br />
<br />
'''Command with sample output:'''<br />
<br />
{{Command<br />
|vncserver<br />
|result=<br />
You will require a password to access your desktops.<br />
<br />
Password:<br />
Verify:<br />
Would you like to enter a view-only password (y/n)? n<br />
<br />
New 'gra796:1 (username)' desktop is gra796:1<br />
<br />
Creating default startup script /home/username/.vnc/xstartup<br />
Creating default config /home/username/.vnc/config<br />
Starting applications specified in /home/username/.vnc/xstartup<br />
Log file is /home/username/.vnc/gra796:1.log<br />
}}<br />
<br />
You will likely want to <code>cat</code> the log file to determine which port the VNC server is using, in this case, the key line to find is the following:<br />
<br />
<pre><br />
vncext: Listening for VNC connections on all interface(s), port 5901<br />
</pre><br />
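The port number follows directly from the display number in the desktop name (''gra796:1'' above): a VNC display '':N'' listens on TCP port 5900 + N. For example:<br />

```shell
# VNC display :N listens on TCP port 5900+N, so display :1 means port 5901.
display=1
port=$((5900 + display))
echo "$port"   # prints 5901
```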
<br />
=== Setting up an SSH tunnel to the VNC server ===<br />
<br />
Now that your VNC server has been started, you will need to create a "bridge" to allow your local desktop computer to connect to the compute node directly. This bridge connection is created using an SSH tunnel. SSH tunnels are created using the same SSH connection command as usual, with an extra option added - this follows the format: <code>ssh user@host -L port:compute_node:port</code>.<br />
<br />
An example SSH tunnel command to connect to a VNC server running on Graham's gra796 node and port 5901 would be the following:<br />
<br />
<pre><br />
ssh username@graham.computecanada.ca -L 5900:gra796:5901<br />
</pre><br />
<br />
The SSH tunnel will operate like a normal SSH connection; you may run commands over it as usual. However, keep in mind that this connection is your link to the VNC server: if you terminate the SSH tunnel, your connection to the VNC server will be lost! For more detailed information on SSH tunneling, please see [[SSH_tunnelling]].<br />
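If you would rather not keep a shell open just for the tunnel, <code>ssh</code> can background it: <code>-N</code> skips running a remote command and <code>-f</code> sends ssh to the background once the connection is up. A sketch using the same example host and node as above:<br />

```shell
# Forward local port 5900 to gra796:5901 without opening a remote shell;
# ssh drops to the background after authenticating (-f).
ssh -N -f -L 5900:gra796:5901 username@graham.computecanada.ca
```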
<br />
=== Connecting to the VNC server ===<br />
<br />
To connect to the VNC server, you need to tell your VNC client to connect to '''localhost'''. The following example uses TigerVNC's <code>vncviewer</code> to connect to the running VNC server on gra796. Before you can connect, you will be prompted for the password you set earlier.<br />
<br />
'''Command with sample output:'''<br />
{{Command<br />
|vncviewer localhost<br />
|prompt=[name@local_computer]$ <br />
|result=<br />
<br />
TigerVNC Viewer 64-bit v1.8.0<br />
Built on: 2018-06-13 10:56<br />
Copyright (C) 1999-2017 TigerVNC Team and many others (see README.txt)<br />
See http://www.tigervnc.org for information on TigerVNC.<br />
<br />
Tue Jul 10 17:40:24 2018<br />
DecodeManager: Detected 8 CPU core(s)<br />
DecodeManager: Creating 4 decoder thread(s)<br />
CConn: connected to host localhost port 5901<br />
CConnection: Server supports RFB protocol version 3.8<br />
CConnection: Using RFB protocol version 3.8<br />
CConnection: Choosing security type VeNCrypt(19)<br />
CVeNCrypt: Choosing security type TLSVnc (258)<br />
<br />
Tue Jul 10 17:40:27 2018<br />
CConn: Using pixel format depth 24 (32bpp) little-endian rgb888<br />
CConn: Using Tight encoding<br />
CConn: Enabling continuous updates<br />
<br />
}}<br />
<br />
Once connected, you will be presented with an Xterm window and a blank desktop. To launch a program, simply invoke the command as you would normally within the Xterm window. <code>xclock</code> will start a sample clock application you can use to test things out. To start a more complicated program (for instance, Matlab), you would load the module and launch the program via the following:<br />
<br />
{{Commands<br />
|module load matlab<br />
|matlab<br />
}}<br />
=== Resetting your VNC server password ===<br />
<br />
If you forget your VNC password or otherwise want to delete your VNC configs and start over with a clean slate, you can delete your <code>~/.vnc</code> directory. The next time you run <code>vncserver</code>, you will be prompted to set a new password.</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=VNC&diff=55689VNC2018-07-20T17:27:39Z<p>Tyson: Heavy work over to now also cover the graham VDI login nodes</p>
<hr />
<div>[[File:Matlab-vnc.png|400px|thumb|Matlab running via VNC.]]<br />
<br />
Frequently, it is useful to start up graphical user interfaces for various software packages like Matlab. Doing so over X-forwarding can result in a very slow connection to the server; one useful alternative is using VNC to start and connect to a remote desktop.<br />
<br />
= VNC Client =<br />
<br />
First you will need to install a VNC client on your machine to connect to the VNC server. We recommend using [http://tigervnc.org/ TigerVNC]. It is pre-packaged on most Linux distributions and provides Windows and Mac binaries.<br />
<br />
== Windows and Mac ==<br />
<br />
Starting at the [http://tigervnc.org/ TigerVNC home page]<br />
<br />
# click on the '''GitHub release page link'''<br />
# scroll down and click on the '''Binaries are available from bintray''' link<br />
# scroll down and pick the '''.exe''' file for Windows (note the '''64''' for 64-bit Windows) or the '''.dmg''' file for Mac<br />
<br />
If asked during the installation, do not enable the VNC server (this would be for sharing your desktop, not for connecting to our systems).<br />
<br />
== Linux ==<br />
<br />
Install the TigerVNC viewer with your package manager and then symlink ''~/.vnc/x509_ca.pem'' to your system certificate authority list.<br />
=== Debian or Ubuntu ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo apt-get install tigervnc-viewer<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
=== Fedora, CentOS, or RHEL ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|sudo yum install tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/pki/tls/certs/ca-bundle.crt ~/.vnc/x509_ca.pem<br />
}}<br />
=== Gentoo ===<br />
<br />
{{Commands|prompt=[name@local_computer]$ <br />
|emerge -av net-misc/tigervnc<br />
|mkdir --parents ~/.vnc<br />
|ln --symbolic --interactive --no-target-directory /etc/ssl/certs/ca-certificates.crt ~/.vnc/x509_ca.pem<br />
}}<br />
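The three distributions above differ only in where the system CA bundle lives. If you script the setup, a small helper (a sketch covering just the paths listed above) can pick the right symlink target:<br />

```shell
# Map a distro family to the CA bundle path used in the symlink above.
ca_bundle() {
    case "$1" in
        debian|ubuntu|gentoo) echo /etc/ssl/certs/ca-certificates.crt ;;
        fedora|centos|rhel)   echo /etc/pki/tls/certs/ca-bundle.crt ;;
        *) return 1 ;;
    esac
}
ca_bundle debian   # prints /etc/ssl/certs/ca-certificates.crt
```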
= VNC Server =<br />
<br />
Now you need a VNC server to connect to. This can be either the dedicated VDI login system on Graham or one you start manually on an allocated compute node.<br />
<br />
== VDI Login Nodes ==<br />
<br />
[[File:TigerVNC-GrahamDesktop.png|400px|thumb|right|'''gra-vdi.computecanada.ca''']]<br />
<br />
Graham has dedicated VNC login nodes that provide a full graphical desktop, accelerated OpenGL, and access to home, project, and scratch. You can connect to them directly by starting your vncviewer (e.g., for TigerVNC, start the client from your Applications menu or run <code>vncviewer</code> from the command line) and entering the address '''gra-vdi.computecanada.ca'''. This will bring up a login screen where you can log in with your Compute Canada credentials.<br />
<br />
As with regular login nodes, these VDI login nodes are a shared resource and are not intended for batch computation (that is what the compute nodes are for), so please limit your use of them to graphics-related tasks. A non-exhaustive list of examples includes graphical pre-processing steps such as mesh generation, graphical post-processing steps such as visualization, and using graphical IDEs.<br />
<br />
=== Installing software ===<br />
<br />
Open-source software is provided by the '''nix''' module. Click the black terminal icon on the top menu bar or pick Applications -> System Tools -> Terminal and load the '''nix''' module. Then you can search for programs using the <code>nix search <regexp></code> command and install them into your environment using the <code>nix-env --install --attr <attribute></code> command. As an example, say you want to install [https://qgis.org QGIS]:<br />
<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix search qgis<br />
|result=<br />
Attribute name: nixpkgs.qgis<br />
Package name: qgis<br />
Version: 2.18.20<br />
Description: User friendly Open Source Geographic Information System<br />
}}<br />
{{Command|prompt=[name@gra-vdi4]$ <br />
|nix-env --install --attr nixpkgs.qgis<br />
}}<br />
Your nix environment persists, so you only need to run an install command once. Whatever you have installed will then be available anytime the module is loaded.<br />
<br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|qgis<br />
}}<br />
=== Building OpenGL applications ===<br />
<br />
For accelerated OpenGL to work, it is necessary to pre-load an appropriate version of the ''vglfaker.so'' library from VirtualGL. This has already been done for software installed by staff, and is done automatically for any OpenGL software built or installed via Nix, but you must do it yourself for software you install on your own.<br />
<br />
The easiest way to do this is to use the <code>patchelf</code> utility from nix (use <code>nix-env --install --attr nixpkgs.patchelf</code> to install it) to adjust the final binary. <br />
For example, say you built an OpenGL application against the system libraries and installed it as ''~/.local/bin/myglapp''. Then you need to add the system VirtualGL library ''/usr/lib64/VirtualGL/libvglfaker.so'' as its first required library:<br />
<br />
{{Commands|prompt=[name@gra-vdi4]$ <br />
|module load nix<br />
|patchelf --add-needed /usr/lib64/VirtualGL/libvglfaker.so ~/.local/bin/myglapp<br />
}}<br />
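To check that the patch took, <code>patchelf</code>'s <code>--print-needed</code> option lists the binary's required libraries; ''libvglfaker.so'' should now appear first in the list (''myglapp'' is the hypothetical binary from the example above):<br />

```shell
# List the DT_NEEDED entries of the patched binary; the VirtualGL faker
# library should be the first one shown.
patchelf --print-needed ~/.local/bin/myglapp
```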
Note that it is also possible to pre-load ''vglfaker.so'' via the <code>LD_PRELOAD</code> environment variable. This is generally a bad idea because it applies indiscriminately to all binaries, and any binary that requires a different ''vglfaker.so'' than the one set in <code>LD_PRELOAD</code> will then fail. It can, however, be used appropriately in some cases, such as within a wrapper script.<br />
<br />
== Compute Nodes ==<br />
<br />
A VNC server can also be started on a compute node and, with suitable port forwarding, connected to from your desktop. This gives you dedicated access, but does not provide a full graphical desktop or hardware-accelerated OpenGL.<br />
<br />
=== Starting a VNC server ===<br />
<br />
Before starting your VNC server, you will need a node on which to run it. The easiest way to obtain one is typically through an interactive job using <code>salloc</code>. For example, to request an interactive job with 4 CPUs and 16GB of memory, you would use the command:<br />
<br />
{{Commands|salloc -c 4 --mem 16g}}<br />
<br />
Once your interactive job has started, you can start a VNC server with <code>vncserver</code>. Take note of which node your job is running on, as well as which port the server is listening on (typically 5901). If unsure, you can use the <code>hostname</code> command to check which host your job is running on. You will be prompted to set a password for your VNC server - '''DO NOT LEAVE THIS BLANK.'''<br />
<br />
'''Command with sample output:'''<br />
<br />
{{Command<br />
|vncserver<br />
|result=<br />
You will require a password to access your desktops.<br />
<br />
Password:<br />
Verify:<br />
Would you like to enter a view-only password (y/n)? n<br />
<br />
New 'gra796:1 (username)' desktop is gra796:1<br />
<br />
Creating default startup script /home/username/.vnc/xstartup<br />
Creating default config /home/username/.vnc/config<br />
Starting applications specified in /home/username/.vnc/xstartup<br />
Log file is /home/username/.vnc/gra796:1.log<br />
}}<br />
<br />
You will likely want to <code>cat</code> the log file to determine which port the VNC server is using. In this case, the key line to find is the following:<br />
<br />
<pre><br />
vncext: Listening for VNC connections on all interface(s), port 5901<br />
</pre><br />
<br />
=== Setting up an SSH tunnel to the VNC server ===<br />
<br />
Now that your VNC server has been started, you will need to create a "bridge" to allow your local desktop computer to connect to the compute node directly. This bridge connection is created using an SSH tunnel. SSH tunnels are created using the same SSH connection command as usual, with an extra option added - this follows the format: <code>ssh user@host -L port:compute_node:port</code>.<br />
<br />
An example SSH tunnel command to connect to a VNC server running on Graham's gra796 node and port 5901 would be the following:<br />
<br />
<pre><br />
ssh username@graham.computecanada.ca -L 5900:gra796:5901<br />
</pre><br />
<br />
The SSH tunnel will operate like a normal SSH connection; you may run commands over it as usual. However, keep in mind that this connection is your link to the VNC server: if you terminate the SSH tunnel, your connection to the VNC server will be lost! For more detailed information on SSH tunneling, please see [[SSH_tunnelling]].<br />
<br />
=== Connecting to the VNC server ===<br />
<br />
To connect to the VNC server, you need to tell your VNC client to connect to '''localhost'''. The following example uses TigerVNC's <code>vncviewer</code> to connect to the running VNC server on gra796. Before you can connect, you will be prompted for the password you set earlier.<br />
<br />
'''Command with sample output:'''<br />
{{Command<br />
|vncviewer localhost<br />
|prompt=[name@local_computer]$ <br />
|result=<br />
<br />
TigerVNC Viewer 64-bit v1.8.0<br />
Built on: 2018-06-13 10:56<br />
Copyright (C) 1999-2017 TigerVNC Team and many others (see README.txt)<br />
See http://www.tigervnc.org for information on TigerVNC.<br />
<br />
Tue Jul 10 17:40:24 2018<br />
DecodeManager: Detected 8 CPU core(s)<br />
DecodeManager: Creating 4 decoder thread(s)<br />
CConn: connected to host localhost port 5901<br />
CConnection: Server supports RFB protocol version 3.8<br />
CConnection: Using RFB protocol version 3.8<br />
CConnection: Choosing security type VeNCrypt(19)<br />
CVeNCrypt: Choosing security type TLSVnc (258)<br />
<br />
Tue Jul 10 17:40:27 2018<br />
CConn: Using pixel format depth 24 (32bpp) little-endian rgb888<br />
CConn: Using Tight encoding<br />
CConn: Enabling continuous updates<br />
<br />
}}<br />
<br />
Once connected, you will be presented with an Xterm window and a blank desktop. To launch a program, simply invoke the command as you would normally within the Xterm window. <code>xclock</code> will start a sample clock application you can use to test things out. To start a more complicated program (for instance, Matlab), you would load the module and launch the program via the following:<br />
<br />
{{Commands<br />
|module load matlab<br />
|matlab<br />
}}<br />
=== Resetting your VNC server password ===<br />
<br />
If you forget your VNC password or otherwise want to delete your VNC configs and start over with a clean slate, you can delete your <code>~/.vnc</code> directory. The next time you run <code>vncserver</code>, you will be prompted to set a new password.</div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=File:TigerVNC-GrahamDesktop.png&diff=55682File:TigerVNC-GrahamDesktop.png2018-07-20T16:06:50Z<p>Tyson: </p>
<hr />
<div></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=File:TigerVNC-GrahamConnect.png&diff=55681File:TigerVNC-GrahamConnect.png2018-07-20T16:06:34Z<p>Tyson: </p>
<hr />
<div></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=File:TigerVNC-GrahamLogin.png&diff=55680File:TigerVNC-GrahamLogin.png2018-07-20T16:06:01Z<p>Tyson: Tyson uploaded a new version of File:TigerVNC-GrahamLogin.png</p>
<hr />
<div></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=File:TigerVNC-GrahamLogin.png&diff=55679File:TigerVNC-GrahamLogin.png2018-07-20T16:00:26Z<p>Tyson: </p>
<hr />
<div></div>Tysonhttps://docs.alliancecan.ca/mediawiki/index.php?title=Visualization&diff=5083Visualization2016-11-21T18:00:28Z<p>Tyson: Add SHARCNET viz links</p>
<hr />
<div><languages /><br />
<br />
<translate><br />
= External documentation for popular visualization packages = <!--T:1--><br />
<br />
=== ParaView === <!--T:2--><br />
[http://www.paraview.org ParaView] is a general-purpose 3D scientific visualization tool. It is open source and compiles on all popular platforms (Linux, Windows, Mac), understands a large number of input file formats, provides multiple rendering modes, supports Python scripting, and can scale up to tens of thousands of processors for rendering of very large datasets.<br />
* [http://www.paraview.org/documentation ParaView official documentation]<br />
* [http://www.paraview.org/gallery ParaView gallery]<br />
* [http://www.paraview.org/Wiki/ParaView ParaView wiki]<br />
* [http://www.paraview.org/Wiki/ParaView/Python_Scripting ParaView Python scripting]<br />
<br />
=== VisIt === <!--T:3--><br />
Similar to ParaView, [https://wci.llnl.gov/simulation/computer-codes/visit/ VisIt] is an open-source, general-purpose 3D scientific data analysis and visualization tool that scales from interactive analysis on laptops to very large HPC projects on tens of thousands of processors.<br />
* [https://wci.llnl.gov/simulation/computer-codes/visit/manuals VisIt manuals]<br />
* [https://wci.llnl.gov/simulation/computer-codes/visit/gallery VisIt gallery]<br />
* [http://www.visitusers.org VisIt user community wiki]<br />
* [http://www.visitusers.org/index.php?title=VisIt_Tutorial VisIt tutorials] along with [http://www.visitusers.org/index.php?title=Tutorial_Data sample datasets]<br />
<br />
=== VMD === <!--T:4--><br />
[http://www.ks.uiuc.edu/Research/vmd VMD] is an open-source molecular visualization program for displaying, animating, and analyzing large biomolecular systems in 3D. It supports scripting in Tcl and Python and runs on a variety of platforms (MacOS X, Linux, Windows). It reads many molecular data formats using an extensible plugin system and supports a number of different molecular representations.<br />
* [http://www.ks.uiuc.edu/Research/vmd/current/ug VMD User's Guide]<br />
<br />
=== VTK === <!--T:5--><br />
The Visualization Toolkit (VTK) is an open-source package for 3D computer graphics, image processing, and visualization. It consists of a C++ class library and several interpreted interface layers including Tcl/Tk, Java, and Python. VTK is the base of many excellent visualization packages including ParaView and VisIt.<br />
* [https://itk.org/Wiki/VTK/Tutorials VTK tutorials]<br />
<br />
= Visualization on new Compute Canada systems = <!--T:6--><br />
This section will be updated as the new systems come online starting with GP2 at SFU in early 2017.<br />
<br />
= Upcoming visualization events = <!--T:7--><br />
* winter semester visualization webinars in January and March<br />
* winter semester full-day visualization workshops at UofCalgary and UofAlberta<br />
<br />
= Compute Canada visualization presentation materials = <!--T:8--><br />
<br />
=== Full- or half-day workshops === <!--T:9--><br />
* [https://docs.computecanada.ca/mediawiki/images/5/5d/Visit201606.pdf VisIt workshop slides] from HPCS'2016 in Edmonton by <i>Marcelo Ponce</i> and <i>Alex Razoumov</i><br />
* [https://docs.computecanada.ca/mediawiki/images/d/d0/Paraview201602.pdf ParaView workshop slides] from February 2016 by <i>Alex Razoumov</i><br />
* [https://support.scinet.utoronto.ca/~mponce/ss2016/ss2016_visualization-I.pdf Gnuplot, xmgrace, remote visualization tools (X-forwarding and VNC), python's matplotlib] slides by <i>Marcelo Ponce</i> (SciNet/UofT) from Ontario HPC Summer School 2016<br />
* [https://support.scinet.utoronto.ca/~mponce/ss2016/ss2016_visualization-II.pdf Brief overview of ParaView & VisIt] slides by <i>Marcelo Ponce</i> (SciNet/UofT) from Ontario HPC Summer School 2016<br />
<br />
=== Webinars and other short presentations === <!--T:10--><br />
<br />
* [https://docs.computecanada.ca/mediawiki/images/e/e5/VisitScripting.pdf VisIt scripting] from November 2016 by <i>Alex Razoumov</i><br />
* [https://docs.computecanada.ca/mediawiki/images/5/5f/Batch201503.pdf Batch visualization webinar slides] from March 2015 by <i>Alex Razoumov</i><br />
* [https://docs.computecanada.ca/mediawiki/images/5/59/OspraySlides.pdf CPU-based rendering with OSPRay] from September 2016 by <i>Alex Razoumov</i><br />
* [https://docs.computecanada.ca/mediawiki/images/f/fc/Gephi201603.pdf Gephi webinar notes] from March 2016 by <i>Alex Razoumov</i><br />
* [https://docs.computecanada.ca/mediawiki/images/6/60/Graphs201605.pdf 3D graphs with NetworkX, VTK, and ParaView slides] from May 2016 by <i>Alex Razoumov</i><br />
* [https://wiki.scinet.utoronto.ca/wiki/images/5/51/Remoteviz.pdf Remote Graphics on SciNet's GPC system (Client-Server and VNC)] slides by <i>Ramses van Zon</i> (SciNet/UofT) from October 2015 SciNet User Group Meeting<br />
* [https://support.scinet.utoronto.ca/education/go.php/242/file_storage/index.php/download/1/files%5B%5D/6399/ VisIt Basics], slides by <i>Marcelo Ponce</i> (SciNet/UofT) from February 2016 SciNet User Group Meeting<br />
* [https://wiki.scinet.utoronto.ca/wiki/images/e/ea/8_ComplexNetworks.pdf Intro to Complex Networks Visualization, with Python], slides by <i>Marcelo Ponce</i> (SciNet/UofT)<br />
* [https://wiki.scinet.utoronto.ca/wiki/images/9/9c/Tkinter.pdf Introduction to GUI Programming with Tkinter], from Sept.2014 by <i>Erik Spence</i> (SciNet/UofT)<br />
<br />
= Tips and tricks = <!--T:11--><br />
<br />
= Regional visualization pages = <!--T:12--><br />
== [http://www.westgrid.ca WestGrid] ==<br />
* [https://www.westgrid.ca/support/visualization/vis_quickstart visualization quickstart guide]<br />
* [https://www.westgrid.ca/support/visualization/remote_visualization remote visualization]<br />
* [https://www.westgrid.ca/support/visualization/batch_rendering batch rendering]<br />
<br />
== [http://www.scinet.utoronto.ca SciNet, HPC at the University of Toronto] == <!--T:13--><br />
* [https://wiki.scinet.utoronto.ca/wiki/index.php/Software_and_Libraries#anchor_viz visualization software]<br />
* [https://wiki.scinet.utoronto.ca/wiki/index.php/VNC VNC]<br />
* [https://wiki.scinet.utoronto.ca/wiki/index.php/Visualization_Nodes visualization nodes]<br />
* [https://wiki.scinet.utoronto.ca/wiki/index.php/Knowledge_Base:_Tutorials_and_Manuals#Visualization further resources and viz-tech talks]<br />
* [https://wiki.scinet.utoronto.ca/wiki/index.php/Using_Paraview using ParaView]<br />
<br />
== [https://www.sharcnet.ca SHARCNET] ==<br />
* [https://www.sharcnet.ca/help/index.php/Visualization_in_SHARCNET Overview]<br />
* [https://www.sharcnet.ca/help/index.php/Remote_Graphical_Connections Running pre-/post-processing graphical applications]<br />
* [https://www.sharcnet.ca/my/software Supported software (see visualization section at bottom)]<br />
<br />
= Visualization gallery = <!--T:14--><br />
<br />
You can find a gallery of visualizations based on models run on Compute Canada systems in the [https://www.computecanada.ca/research-portal/national-services/visualization visualization gallery]. There you can click on individual thumbnails to get more details on each visualization.<br />
<br />
= How to get visualization help = <!--T:15--><br />
You can contact us via [mailto:vis-support@computecanada.ca email].<br />
</translate></div>Tyson