[[Transferring_data|Transferring data]]<br>


==Storage== <!--T:4-->


<!--T:5-->
The /scratch storage space is a Lustre filesystem based on DDN model ES14K technology. It includes 640 8TB NL-SAS disk drives, and dual redundant metadata controllers with SSD-based storage.


==High-performance interconnect== <!--T:19-->


<!--T:20-->
By design, Cedar supports multiple simultaneous parallel jobs of up to 1024 Broadwell cores (32 nodes), 1536 Skylake cores (32 nodes), or 1536 Cascade Lake cores (32 nodes) in a fully non-blocking manner. For larger jobs the interconnect has a 2:1 blocking factor, but even for jobs running on several thousand cores, Cedar still provides a high-performance interconnect.
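As a rough illustration only (not an official example), the sketch below requests 32 whole 48-core nodes, i.e. 1536 cores, which is the largest job that fits within a single fully non-blocking island; the account and executable names are placeholders.
<pre>
#!/bin/bash
# Sketch: a 32-node, 48-cores-per-node (1536-core) MPI job,
# small enough to fit in one fully non-blocking island on Cedar.
#SBATCH --nodes=32
#SBATCH --ntasks-per-node=48
#SBATCH --mem=0                  # request all memory on each node
#SBATCH --time=03:00:00
#SBATCH --account=def-someuser   # placeholder allocation account

srun ./my_mpi_program            # placeholder MPI executable
</pre>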


==Node characteristics== <!--T:6-->


<!--T:28-->
All nodes have local (on-node) temporary storage. Compute nodes (except GPU nodes) have two 480GB SSD drives, for a total raw capacity of 960GB. GPU nodes have either an 800GB or a 480GB SSD drive. Use node-local storage through the job-specific directory created by the scheduler, <code>$SLURM_TMPDIR</code>. See [[Using node-local storage]].
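As a minimal sketch (assuming the usual <code>~/scratch</code> symlink; file and program names are placeholders), a job script can stage data onto the node-local SSD, work there, and copy results back before the job ends:
<pre>
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --account=def-someuser        # placeholder allocation account

# Stage input onto the fast node-local SSD exposed via $SLURM_TMPDIR.
cp ~/scratch/input.dat "$SLURM_TMPDIR"/
cd "$SLURM_TMPDIR"

my_program input.dat > output.dat     # placeholder program (e.g. from a loaded module)

# Copy results back; $SLURM_TMPDIR is removed when the job ends.
cp output.dat ~/scratch/
</pre>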


===Choosing a node type=== <!--T:27-->
A number of 48-core nodes are reserved for jobs that require whole nodes. There are no 32-core nodes set aside for whole-node processing. <b>Jobs that request fewer than 48 cores per node can end up sharing nodes with other jobs.</b><br>
Most applications will run on Broadwell, Skylake, or Cascade Lake nodes, and performance differences are expected to be small compared to job waiting times. Therefore we recommend that you do not select a specific node type for your jobs. If one is necessary, use <code>--constraint=cascade</code>, <code>--constraint=skylake</code> or <code>--constraint=broadwell</code>. If the requirement is for any AVX512-capable node, use <code>--constraint=[skylake|cascade]</code>.
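For example, a whole-node job restricted to AVX512-capable nodes might look like the sketch below; the account and program names are placeholders, and the constraint line can simply be omitted to let the scheduler pick any node type.
<pre>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48            # request a whole 48-core node
#SBATCH --mem=0                         # all of the node's memory
#SBATCH --constraint=[skylake|cascade]  # any AVX512 node; omit unless really needed
#SBATCH --time=02:00:00
#SBATCH --account=def-someuser          # placeholder allocation account

srun ./my_program                       # placeholder executable
</pre>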


==Submitting and running jobs policy== <!--T:30-->


<!--T:31-->
As of <b>April 17, 2019</b>, jobs can no longer run in the <code>/home</code> filesystem. The policy was put in place to reduce the load on this filesystem and improve the responsiveness for interactive work. If you get the message <code>Submitting jobs from directories residing in /home is not permitted</code>, transfer the files either to your <code>/project</code> or <code>/scratch</code> directory and submit the job from there.
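For example (assuming the usual <code>~/scratch</code> symlink; directory and script names are placeholders), you can copy a job directory out of <code>/home</code> and submit it from scratch:
<pre>
# Copy the job directory out of /home and submit from scratch instead.
cp -r ~/my_job_dir ~/scratch/my_job_dir
cd ~/scratch/my_job_dir
sbatch job_script.sh
</pre>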


==Performance== <!--T:17-->
The theoretical peak double-precision performance of Cedar is 6547 teraflops for the CPUs plus 7434 teraflops for the GPUs, for a total of almost 14 petaflops.

