<languages />

{| class="wikitable"
|-
|'''computecanada#niagara-dtn'''
|}


Niagara will be a parallel cluster owned by the [https://www.utoronto.ca/ University of Toronto] and operated by [https://www.scinethpc.ca/ SciNet]. It is expected to provide close to 60,000 CPU cores. Niagara will be a balanced compute resource with a very high-performance internal network, designed primarily for large parallel jobs. It is scheduled to enter service in early 2018.


The user experience on Niagara will be similar to that on Graham and Cedar, but specific instructions on how to use the Niagara system are still in preparation, given that details of the setup are still in flux at present (February 2018).

Niagara is an allocatable resource in the 2018 [https://www.computecanada.ca/research-portal/accessing-resources/resource-allocation-competitions/ Resource Allocation Competition] (RAC 2018), which comes into effect on April 4, 2018.

[https://youtu.be/EpIcl-iUCV8 Niagara installation update at the SciNet User Group Meeting on February 14th, 2018]
For details on the work to bring Niagara into service, see [https://support.scinet.utoronto.ca/education/go.php/361/index.php this SciNet page].

* 1500 nodes, each with 40 Intel Skylake cores at 2.4GHz, for a total of 60,000 cores.
* 192 GB of RAM per node.
* EDR Infiniband network in a so-called 'Dragonfly+' topology.
* 5PB of scratch, 5+2PB of project space (parallel file system: IBM Spectrum Scale, formerly known as GPFS).
* 256 TB burst buffer (Excelero + IBM Spectrum Scale).
* No local disks.
* Rpeak of 4.61 PF.
* Rmax of 3.0 PF.
* 685 kW power consumption.
=Attached storage systems=
{| class="wikitable sortable"
|-
| '''Home space''' <br />Parallel high-performance filesystem (IBM Spectrum Scale) ||
* Location of home directories.
* Available as the <code>$HOME</code> environment variable.
* Each home directory has a small, fixed [[Storage and file management#Filesystem_Quotas_and_Policies|quota]].
* Not allocated, standard amount for each user. For larger storage requirements, use scratch or project.
* Has daily backup.
|-
| '''Scratch space'''<br />5PB total volume<br />Parallel high-performance filesystem (IBM Spectrum Scale)||
* For active or temporary (<code>/scratch</code>) storage (~ 80 GB/s).
* Available as the <code>$SCRATCH</code> environment variable.
* Not allocated.
* Large fixed [[Storage and file management#Filesystem_Quotas_and_Policies|quota]] per user.
* Inactive data will be purged.
|-
| '''Burst buffer'''<br />256TB total volume<br />Parallel extra high-performance filesystem (Excelero+IBM Spectrum Scale)||
* For active fast storage during a job (160GB/s, and very high IOPS).
* Data will be purged very frequently (i.e. soon after a job has ended).
* Not allocated.
|-
|'''Project space'''<br />External persistent storage<br />
||
* Allocated via [https://www.computecanada.ca/research-portal/accessing-resources/resource-allocation-competitions/ RAC].
* Available as the <code>$PROJECT</code> environment variable.
* [[Storage and file management#Filesystem_Quotas_and_Policies|quota]] set per project.
* Backed up.
|}
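
As a rough illustration of how these filesystems are typically used together (a sketch only: the directory and file names below are placeholders, and it assumes the <code>$HOME</code>, <code>$SCRATCH</code>, and <code>$PROJECT</code> environment variables are defined as described above):

<source lang="bash">
# Show where each filesystem points for your account
echo $HOME $SCRATCH $PROJECT

# Keep source code and scripts in home (small quota, backed up daily)
cd $HOME

# Run jobs and write large or temporary output in scratch (large quota, inactive data purged)
mkdir -p $SCRATCH/run01        # 'run01' is just an example directory name
cd $SCRATCH/run01

# Copy results worth keeping long term to project space (allocated via RAC, backed up)
cp results.dat $PROJECT/       # 'results.dat' is a placeholder file name
</source>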
=High-performance interconnect=
The Niagara system has an EDR Infiniband network in a so-called
'Dragonfly+' topology, with four wings.  Each wing (of 375 nodes) has
1-to-1 connections.  Network traffic between wings is done through
adaptive routing, which alleviates network congestion.
=Node characteristics=
* CPU: 2 sockets with 20 Intel Skylake cores (2.4GHz, AVX512), for a total of 40 cores per node
* Computational performance: 3 TFlops (theoretical maximum)
* Network connection: 100Gb/s EDR
* Memory: 192 GB of RAM, i.e., a bit over 4GB per core.
* Local disk: none.
* Operating system: Linux CentOS 7
=Scheduling=
The Niagara system will use the Slurm scheduler to run jobs. The basic scheduling commands will therefore be similar to those for Cedar and Graham, with a few differences:
* Scheduling will be by node only. This means jobs will always need to use multiples of 40 cores per job (see the job script sketched at the end of this section).
* Asking for specific amounts of memory will not be necessary and is discouraged; all nodes have the same amount of memory (192GB minus some operating system overhead).
Details, such as how to request burst buffer usage in jobs, are still being worked out.
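
As a rough sketch of what a job script might look like under this by-node scheduling model (the job name and program name below are placeholders, not confirmed Niagara settings):

<source lang="bash">
#!/bin/bash
#SBATCH --nodes=2                 # whole nodes only; individual cores are not requested
#SBATCH --ntasks-per-node=40      # all 40 cores of each Niagara node
#SBATCH --time=01:00:00           # requested walltime
#SBATCH --job-name=example_job    # placeholder job name
# Note: no --mem request; every node has the same 192 GB (minus OS overhead).

cd $SLURM_SUBMIT_DIR

# 'my_mpi_program' is a placeholder for your own MPI executable.
mpirun ./my_mpi_program
</source>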
=Software=
* Module-based software stack.
* Both the standard Compute Canada software stack and system-specific software tuned for Niagara will be available.
* Unlike on Cedar and Graham, no modules will be loaded by default, to prevent accidental version conflicts. There will be a simple mechanism to load the software stack that a user would see on Graham and Cedar (see the example module commands after this list).
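
As an illustration of how a module-based stack is typically used (the module names below are examples only; the actual modules and the mechanism for loading the Compute Canada stack on Niagara are not yet finalized):

<source lang="bash">
# Show which modules are available on the system
module avail

# Load a compiler and an MPI library (example module names only)
module load gcc
module load openmpi

# Verify what is currently loaded
module list
</source>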