Building the Hopper Cluster Part I: Nodes
On Monday, January 5, construction began on the new HPC cluster in the AU Data Center. Resembling an old-fashioned ‘barn raising’, OIT and Lenovo personnel unboxed and racked equipment as the first step in building Auburn’s newest and most powerful research computer.
Work continues with the goal of being operational by mid-February.
- The first shipment arrives on 01.05.16.
- OIT and Lenovo personnel begin work unboxing the new equipment.
- Racks are prepared to house the new equipment.
- OIT and Lenovo personnel begin installing the compute node chassis into the AU Data Center racks.
- The machine begins to take shape as the compute node chassis are racked and cabled.
- A rear view of the racked compute node chassis.
- The final touches are applied to the compute node chassis installation.
- A “fast-fat” node with 1 TB of random access memory is placed in its new home.
- All compute node chassis are installed into racks in the AU Data Center.
- Auburn CIO Bliss Bailey checks on the progress of the build.
- AU and Lenovo personnel begin initial checks of the compute node hardware.
- The compute nodes light up as they are powered on for their initial checks.
- The GSS storage appliance is unpacked and placed in the OIT Data Center racks.
- The GSS storage appliance is installed.
- The cluster compute nodes and storage are ready and waiting for networking.
- A view of the cluster’s Xeon Phi and NVIDIA K80 GPU processing power.