Migration2016:Silo Interim Replacement Storage

Due to delays in the acquisition and installation of the new National Data Cyberinfrastructure (NDC), an interim location for data on the legacy Silo system was necessary. This page describes the interim storage.

This migration is complete as of March 24. All Silo data has been copied to SFU or Waterloo, and all users have received notifications and instructions.

For details of the migration process please see the Silo Migration Guide.

Storage has been split between the two NDC sites: Waterloo and SFU.

This interim storage is standalone (like Silo) and is not attached to the new clusters.


This is interim replacement storage for Silo. Only Silo users have been migrated. All other (non-Silo) users should wait until the full NDC is available.

Further Migration to National Data Cyberinfrastructure

Eventually all storage will be consolidated into the National Data Cyberinfrastructure. The new systems and their projected availability dates are listed at National systems. We expect that all persistent storage users will be moved to the project space on the NDC; some may go to the nearline (tape) space.

Unfortunately, this means that the interim Silo data will have to be re-migrated to the NDC. Notifications will be sent out when we have detailed plans.


You should generally use your Compute Canada account to log in to the interim storage. For details see User Accounts and Groups.
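
For example, a minimal login sketch over SSH, where 'username' is a placeholder for your Compute Canada username and the host is the SFU node listed below:

  # Log in to an interim-storage data transfer node with your
  # Compute Canada credentials ('username' is a placeholder)
  ssh username@dtn.sfu.computecanada.ca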


Interim Storage at Waterloo

Waterloo has provisioned two Storage Building Blocks (SBB) for a total of about 1.0 PB of usable storage. This is an NFS system.

This system is in operation, and the migration of data is complete. If you have not received a notification, then you have not been migrated to this system.

  • Globus (a file transfer service: https://www.globus.org/) endpoint: computecanada#remora5
  • SSH login: dtn2.sharcnet.ca (see the transfer sketch below)
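
A minimal transfer sketch over SSH, assuming rsync is available on your machine; 'username' and both paths are placeholders, not actual locations on the system:

  # Copy a local directory to your space on the Waterloo interim
  # storage, keeping partial transfers and showing progress
  rsync -avP mydata/ username@dtn2.sharcnet.ca:/path/to/your/space/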


Interim Storage at SFU

SFU has provisioned three SBB3s for a total of approximately 1.2 PB of usable storage. This is based on the Red Hat Gluster shared filesystem.

  • Globus endpoint: computecanada#sfu-dtn (see the sketch below)
  • SSH login: dtn.sfu.computecanada.ca
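
If you prefer Globus, here is a sketch using the globus-cli client, assuming it is installed and you have authenticated with 'globus login'. Recent clients address endpoints by UUID, so the legacy name above is used only to look the endpoint up; the UUIDs and paths below are placeholders:

  # Look up the SFU endpoint by its legacy name
  globus endpoint search "computecanada#sfu-dtn"
  # Start an asynchronous, recursive transfer to the SFU endpoint
  globus transfer SRC_ENDPOINT_UUID:/source/path DST_ENDPOINT_UUID:/destination/path \
      --recursive --label "silo-interim-copy"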

Technical Details

For those who are interested: each SBB3 has

  • 90 x 8 TB SATA drives.
  • An integrated front-end server with 12 cores and 128 GB of RAM.

The SBB3s are configured as 8 x (9+2) RAID6 volumes with 2 hot spares (hardware RAID controller).

The Gluster filesystem spans 3 x SBB3, with 400 TB allocated from each SBB3 (xfs filesystem), for a total of 1.2 PB.
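
As a sanity check, the stated configuration accounts for all 90 drives and for the quoted capacities:

  8 volumes x (9 + 2) drives + 2 hot spares = 90 drives per SBB3
  Usable RAID6 capacity: 8 volumes x 9 data drives x 8 TB = 576 TB per SBB3
  Gluster allocation: 3 x 400 TB = 1.2 PB total

Only 400 TB of each SBB3's 576 TB is part of the Gluster volume; the remaining capacity sits outside the 1.2 PB total.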