Building the Hopper Cluster Part II: Networking

On Tuesday, January 19th, 2016, work began on the networking phase of the Auburn University “Hopper” High Performance Compute Cluster. This phase involved routing hundreds of InfiniBand, Ethernet, and fiber optic cables to enable high-speed communication between the previously installed servers.

The InfiniBand network architecture provides the cluster with high-speed, low-latency shared disk access and lets compute nodes pass messages to one another at up to 100 Gb/s. It enables Remote Direct Memory Access, or RDMA, which allows a compute node to read from or write to another node’s RAM while bypassing much of the usual communication overhead. Hopper is equipped with six next-generation Mellanox EDR InfiniBand switches, placing its interconnect among the fastest currently deployed. This puts Hopper at the leading edge of interconnect technology and gives the cluster a flexible network architecture with the headroom for an extended service life.
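
For a sense of what that fabric looks like from the software side, the sketch below uses the libibverbs API to enumerate a host’s InfiniBand adapters and report each port’s state and active link parameters. This is a minimal illustration, not part of Hopper’s actual tooling; the devices listed and the values printed depend entirely on the hardware present.

```c
/* list_ib_ports.c — enumerate InfiniBand devices and query their ports.
 * Build with:  gcc list_ib_ports.c -o list_ib_ports -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void) {
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(dev_list[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            printf("Device %s: %u physical port(s)\n",
                   ibv_get_device_name(dev_list[i]), dev_attr.phys_port_cnt);

            /* Port numbers are 1-based in the verbs API. */
            for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr) == 0) {
                    printf("  port %u: state=%s width_code=%u speed_code=%u\n",
                           port,
                           ibv_port_state_str(port_attr.state),
                           port_attr.active_width,
                           port_attr.active_speed);
                }
            }
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(dev_list);
    return 0;
}
```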

With the switches and cables in place and the machine beginning to talk, our next step is to configure the cluster storage. Hopper will offer a total of 1.4 petabytes of disk to house software and datasets, presented to every node via the General Parallel File System (GPFS).
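
Once that filesystem is mounted, any node can see the shared capacity through ordinary POSIX calls. The short sketch below is a hypothetical example (the /gpfs mount point is an assumption, not Hopper’s actual path) showing how a compute node might report the total and available space it sees:

```c
/* gpfs_capacity.c — report capacity of a parallel filesystem mount point.
 * Build with:  gcc gpfs_capacity.c -o gpfs_capacity
 */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void) {
    /* Hypothetical mount point; the real path on the cluster may differ. */
    const char *mount = "/gpfs";

    struct statvfs fs;
    if (statvfs(mount, &fs) != 0) {
        perror("statvfs");
        return 1;
    }

    /* Convert block counts to terabytes as seen from this node. */
    double total_tb = (double)fs.f_blocks * fs.f_frsize / 1e12;
    double avail_tb = (double)fs.f_bavail * fs.f_frsize / 1e12;
    printf("%s: %.1f TB total, %.1f TB available\n", mount, total_tb, avail_tb);
    return 0;
}
```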

