Provided by the [[National_Data_Cyberinfrastructure|NDC]].<br />
Available to compute nodes, but not designed for parallel I/O workloads.<br />
|-
|'''High performance interconnect'''
||
Intel OmniPath (version 1) interconnect (100Gbit/s bandwidth). A low-latency, high-performance fabric connecting all nodes and temporary storage.<br />
Cedar is designed to support multiple simultaneous parallel jobs of up to 1024 cores each in a fully non-blocking manner. For larger jobs the interconnect has a 2:1 blocking factor; even jobs running on several thousand cores still have access to a high-performance interconnect (a rough worked example follows the table).
|}
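As a rough, illustrative calculation only (the exact fabric topology is not described here): a 2:1 blocking factor means that if all nodes in one fully non-blocking group (up to 1024 cores) send traffic to nodes outside that group at the same time, each node sees at most about half of its 100 Gbit/s link rate,

<math>B_\text{worst case} \approx \frac{B_\text{link}}{\text{blocking factor}} = \frac{100\ \text{Gbit/s}}{2} = 50\ \text{Gbit/s},</math>

while traffic that stays within a non-blocking group can use the full 100 Gbit/s per link.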


====Node types and characteristics:==== <!--T:6-->