|-
| <b>Scratch space</b><br /> 5.4PB total volume<br />Parallel high-performance filesystem ||
* For active or temporary (scratch) storage.
* Not allocated.
* Large fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]] per user.


<!--T:29-->
Note that the amount of available memory is less than the <i>round number</i> suggested by the hardware configuration. For instance, <i>base</i> nodes do have 128 GiB of RAM, but some of it is permanently occupied by the kernel and OS. To avoid wasting time on swapping/paging, the scheduler will never allocate jobs whose memory requirements exceed the amount of <i>available</i> memory shown above.
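
For example, a job that needs most of a <i>base</i> node's memory should request a bit less than the nominal 128 GiB. A minimal sketch, where the <code>125G</code> value is illustrative (use the <i>available</i> figure from the table above) and the program name is a placeholder:
<pre>
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --ntasks=1
# Request a little less than the nominal 128 GiB so the job fits within
# what the scheduler can actually allocate; 125G is illustrative only.
#SBATCH --mem=125G
./my_program    # placeholder executable
</pre>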


<!--T:10-->


== Choosing a node type == <!--T:27-->
A number of 48-core nodes are reserved for jobs that require whole nodes. There are no 32-core nodes set aside for whole-node processing. <b>Jobs that request fewer than 48 cores per node can end up sharing nodes with other jobs.</b><br>
Most applications will run on Broadwell, Skylake or Cascade Lake nodes, and performance differences are expected to be small compared to job waiting times. Therefore we recommend that you do not select a specific node type for your jobs. If it is necessary, use <code>--constraint=cascade</code>, <code>--constraint=skylake</code> or <code>--constraint=broadwell</code>. If the requirement is for any AVX512 node, use <code>--constraint=[skylake|cascade]</code>. See [[Running_jobs#Specifying_a_CPU_architecture|Specifying a CPU architecture]].
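
For example, the constraint can be given as an ordinary <code>#SBATCH</code> directive in the job script. The sketch below restricts a job to AVX512-capable (Skylake or Cascade Lake) nodes; the account name, resource amounts and program name are placeholders:
<pre>
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder account
#SBATCH --time=3:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48
# Restrict scheduling to AVX512 nodes; omit this line to accept any node type.
#SBATCH --constraint=[skylake|cascade]
srun ./my_program                # placeholder executable
</pre>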




<!--T:31-->
As of <b>April 17, 2019</b>, jobs can no longer run in the <code>/home</code> filesystem. The policy was put in place to reduce the load on this filesystem and improve the responsiveness for interactive work. If you get the message <code>Submitting jobs from directories residing in /home is not permitted</code>, transfer the files either to your <code>/project</code> or <code>/scratch</code> directory and submit the job from there.
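
A minimal sketch of the workaround, assuming your scratch space is reachable through a <code>~/scratch</code> link (the directory and script names are placeholders):
<pre>
# Copy the job directory out of /home, then submit from scratch.
cp -r ~/my_job ~/scratch/my_job
cd ~/scratch/my_job
sbatch job_script.sh
</pre>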


= Performance = <!--T:17-->


<!--T:32-->
Cedar's network topology is made up of <i>islands</i> with a 2:1 blocking factor between islands. Within an island the interconnect (Omni-Path fabric) is fully non-blocking.
<br>
Most islands contain 32 nodes:
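
If a tightly coupled multi-node job would benefit from staying within a single island, Slurm's generic <code>--switches</code> option can ask for all nodes to be placed under one switch block. This is a sketch rather than a Cedar-specific recommendation; the node count and the maximum extra waiting time are illustrative:
<pre>
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=48
# Ask for all nodes under a single switch (island), waiting up to 12 hours
# for such a placement before accepting whatever nodes are available.
#SBATCH --switches=1@12:00:00
</pre>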