{| class="wikitable"
|-
| Availability: Compute RAC2017 allocations started June 30, 2017
|-
| Login node: '''cedar.alliancecan.ca'''
|-
| Globus endpoint: '''computecanada#cedar-dtn'''
|-
| System Status Page: '''http://status.alliancecan.ca/'''
|}
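
To connect, log in over SSH to the login node shown above. A minimal sketch (the user name <code>someuser</code> is a placeholder for your own Alliance account):

<pre>
# Log in to Cedar's login node over SSH
ssh someuser@cedar.alliancecan.ca
</pre>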


Cedar is a heterogeneous cluster suitable for a variety of workloads; it is located at Simon Fraser University. It is named for the [https://en.wikipedia.org/wiki/Thuja_plicata Western Red Cedar], B.C.’s official tree, which is of great spiritual significance to the region's First Nations people.
<br/>
Cedar is sold and supported by Scalar Decisions, Inc. The node manufacturer is Dell, the high-performance temporary storage <code>/scratch</code> filesystem is from DDN, and the interconnect is from Intel. It is entirely liquid-cooled, using rear-door heat exchangers.


<!--T:25-->
{| class="wikitable sortable"
|-
| '''Home space'''<br /> 526TB total volume||
* Location of /home directories.
* Each /home directory has a small fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]].
* Not allocated via [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service RAS] or [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition RAC]. Larger requests go to the /project space.
* Has daily backup.
|-
| '''Scratch space'''<br /> 5.4PB total volume<br />Parallel high-performance filesystem ||
* For active or temporary (<code>/scratch</code>) storage.
* Not allocated.
* Large fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]] per user.
* Inactive data will be [[Scratch purging policy|purged]].
|-
| '''Project space'''<br />23PB total volume<br />External persistent storage
||
* Not designed for parallel I/O workloads. Use the /scratch space instead.
* Large adjustable [[Storage and file management#Filesystem_quotas_and_policies|quota]] per project.
* Has daily backup.
|}
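
The sketch below shows how these spaces are typically reached from a login session; the paths and the group name <code>def-someprof</code> are illustrative placeholders, and <code>lfs quota</code> is the standard Lustre client query (see [[Storage and file management]] for authoritative details):

<pre>
# Home: small fixed quota, backed up daily
cd /home/$USER

# Scratch: large quota, high-performance, inactive data is purged
cd /scratch/$USER

# Project: adjustable per-project quota, backed up daily
# (def-someprof is a hypothetical group name)
cd /project/def-someprof/$USER

# Query your usage on the Lustre /scratch filesystem
lfs quota -u $USER /scratch
</pre>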


<!--T:18-->
The <code>/scratch</code> storage space is a Lustre filesystem based on DDN ES14K technology. It includes 640 8TB NL-SAS disk drives and dual redundant metadata controllers with SSD-based storage.
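
Since <code>/scratch</code> is a Lustre filesystem, users with parallel I/O workloads can control how files are striped across its storage targets. A brief sketch using the standard Lustre client tools (the paths are placeholders):

<pre>
# Show the stripe layout of an existing file or directory
lfs getstripe /scratch/$USER/results.dat

# Make new files in this directory stripe across 4 storage targets
lfs setstripe -c 4 /scratch/$USER/big_output
</pre>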


=High-performance interconnect= <!--T:19-->


<!--T:20-->
''Intel OmniPath (version 1) interconnect (100Gbit/s bandwidth).''


<!--T:21-->