|Availability: Compute RAC2017 allocations started June 30, 2017|
|Login node: cedar.computecanada.ca|
|Globus endpoint: computecanada#cedar-dtn|
Cedar is a heterogeneous cluster suitable for a variety of workloads; it is located at Simon Fraser University. It is named for the Western Red Cedar, B.C.’s official tree, which is of great spiritual significance to the region's First Nations people. It was previously known as "GP2" and is still identified as such in the 2017 RAC documentation.
Cedar is sold and supported by Scalar Decisions, Inc. The node manufacturer is Dell, the high-performance temporary space is from DDN, and the interconnect is from Intel. It is entirely liquid cooled, using rear-door heat exchangers.
As part of the second phase of the CFI Cyberinfrastructure Challenge 2 program, Cedar will be considerably expanded. Initial discussions with the vendor are in progress, and the expansion is expected to be carried out in winter 2018. This is expected to roughly double the capacity of Cedar.
|Home space, 250 TB|
|Scratch space, 3.7 PB|Parallel high-performance filesystem|
|Project space, 10 PB|External persistent storage|
Intel Omni-Path (version 1) interconnect (100 Gbit/s bandwidth): a low-latency, high-performance fabric connecting all nodes and temporary storage.
By design, Cedar supports multiple simultaneous parallel jobs of up to 1024 cores in a fully non-blocking manner. Larger jobs see a 2:1 blocking factor, so even jobs running on several thousand cores still have access to a high-performance interconnect.
Node types and characteristics
Cedar has a total of 27,696 CPU cores for computation and 584 GPU devices. Total theoretical peak double-precision performance is 936 teraflops for the CPUs plus 2,744 teraflops for the GPUs, for over 3.6 petaflops overall. Twenty-two fully connected "islands" of 32 base or large nodes each provide 1024 cores in a fully non-blocking topology (Omni-Path fabric), with each island designed to yield over 30 teraflops of measured double-precision performance (high-performance LINPACK). There is a 2:1 blocking factor between the 1024-core islands.
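As a cross-check, these totals follow directly from the node counts in the table below (all CPU and big-memory node types have 32 cores per node; GPU nodes have 24 cores and 4 GPUs each):

<math>(576 + 128 + 24 + 24 + 4)\times 32 + (114 + 32)\times 24 = 27{,}696 \text{ CPU cores}, \qquad (114 + 32)\times 4 = 584 \text{ GPUs}</math>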
|"Base" compute nodes:||576 nodes||128 GB of memory, 16 cores/socket, 2 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E5-2683 v4.|
|"Large" compute nodes:||128 nodes||256 GB of memory, 16 cores/socket, 2 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E5-2683 v4.|
|"Bigmem500"||24 nodes||0.5 TB (512 GB) of memory, 16 cores/socket, 2 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E5-2683 v4.|
|"Bigmem1500" nodes||24 nodes||1.5 TB of memory, 16 cores/socket, 2 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E5-2683 v4.|
|"GPU base" nodes:||114 nodes||128 GB of memory, 12 cores/socket, 2 sockets/node, 4 NVIDIA P100 Pascal GPUs/node (12GB HBM2 memory), 2 GPUs/PCI root. Intel "Broadwell" CPUs at 2.2Ghz, model E5-2650 v4|
|"GPU large" nodes.||32 nodes||256 GB of memory, 12 cores/socket, 2 sockets/node, 4 NVIDIA P100 Pascal GPUs/node (16GB HBM2 memory), All GPUs on the same PCI root. E5-2650 v4|
|"Bigmem3000" nodes||4 nodes||3 TB of memory, 8 cores/socket, 4 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E7-4809 v4.|
All of the above nodes have local (on-node) temporary storage. GPU nodes have a single 800 GB SSD drive, while all other compute nodes have two 480 GB SSD drives, for a total raw capacity of 960 GB. The best practice for accessing node-local storage is to use the job-specific directory created by Slurm, $SLURM_TMPDIR, as in the sketch below.
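A minimal sketch of how a job might stage files through this directory, assuming a Python workload; the file names and the final destination directory are illustrative only:

<syntaxhighlight lang="python">
import os
import shutil

# $SLURM_TMPDIR points at the fast node-local SSD space Slurm sets up for the job;
# the /tmp fallback is only for testing outside a job (illustrative assumption).
tmpdir = os.environ.get("SLURM_TMPDIR", "/tmp")

# Write intermediate data to node-local storage instead of the shared filesystems.
workfile = os.path.join(tmpdir, "intermediate.dat")
with open(workfile, "w") as f:
    f.write("temporary data that does not need to outlive the job\n")

# Node-local storage is cleaned up when the job ends, so copy anything worth
# keeping back to project or scratch space first ("results" is a hypothetical path).
os.makedirs("results", exist_ok=True)
shutil.copy(workfile, "results/")
</syntaxhighlight>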
Scratch storage is a Lustre filesystem based on DDN ES14K technology. It includes 640 NL-SAS disk drives of 8 TB each and dual redundant metadata controllers with SSD-based storage.