miniHPC

Funding: University of Basel

Duration: 15.12.2016-Present

Project Summary

miniHPC is a small high-performance computing (HPC) cluster. It was designed and purchased with a two-fold purpose:

(1) to offer students a platform for learning parallel programming and achieving high-performance computations, and

(2) to provide a fully controlled experimental platform for conducting leading-edge scientific investigations in HPC.
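The kind of exercise taught on such a platform can be sketched with a minimal parallel program. The snippet below uses Python's multiprocessing module as a stand-in for the MPI programs typically run on a cluster; the chunking scheme and worker count are illustrative choices, not part of any miniHPC course material.

```python
# Minimal parallel-sum exercise: split a range of work across
# processes, in the style of a first parallel-programming assignment.
from multiprocessing import Pool

def partial_sum(bounds):
    # Each worker sums its own contiguous chunk of the range.
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Divide [0, n) into one contiguous chunk per worker.
    step = (n + workers - 1) // workers
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))  # same result as sum(range(1_000_000))
```

On the actual cluster such a program would normally be written with MPI and launched across nodes through the batch system, but the decomposition idea is the same.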


miniHPC has a peak performance of 28.9 double-precision TFLOP/s and comprises two types of nodes: Intel Xeon nodes and Intel Xeon Phi Knights Landing (KNL) nodes. The Intel Xeon nodes amount to 22 compute nodes, 1 login node, and 1 storage node; the Intel Xeon Phi nodes amount to 4 compute nodes.
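As a rough cross-check of the quoted peak, the theoretical double-precision peak can be estimated as cores × clock × FLOPs/cycle per node. The core counts and clock rates below come from the processor pages linked further down; the FLOPs-per-cycle figures (16 for the AVX2 Xeons with fused multiply-add, 32 for the AVX-512 KNL cores), the dual-socket assumption for the Xeon nodes, and the inclusion of all 24 Xeon nodes in the total are assumptions of this back-of-envelope sketch.

```python
# Back-of-envelope double-precision peak estimate.
def node_peak_gflops(cores, ghz, flops_per_cycle, sockets=1):
    # Theoretical peak: sockets * cores * clock (GHz) * FLOPs per cycle.
    return sockets * cores * ghz * flops_per_cycle

# 24 Xeon nodes, assumed dual-socket E5-2640 v4:
# 10 cores at 2.4 GHz, 16 DP FLOPs/cycle with AVX2 FMA (assumed).
xeon = node_peak_gflops(10, 2.4, 16, sockets=2)   # 768 GFLOP/s per node

# 4 KNL nodes, Xeon Phi 7210: 64 cores at 1.3 GHz,
# 32 DP FLOPs/cycle with two AVX-512 VPUs per core (assumed).
knl = node_peak_gflops(64, 1.3, 32)               # 2662.4 GFLOP/s per node

total_tflops = (24 * xeon + 4 * knl) / 1000
print(round(total_tflops, 1))                     # → 29.1
```

This lands at roughly 29 TFLOP/s, in line with the quoted 28.9 TFLOP/s peak; the small gap is within what the assumptions above can account for.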


   

miniHPC nodes information

       

 







miniHPC CPU information


a) http://ark.intel.com/products/92984/Intel-Xeon-Processor-E5-2640-v4-25M-Cache-2_40-GHz

b) http://ark.intel.com/products/94033/Intel-Xeon-Phi-Processor-7210-16GB-1_30-GHz-64-core


All nodes are interconnected through two different networks. The first is a 10 Gbit/s Ethernet network, reserved for user and administrator access. The second, faster network is a 100 Gbit/s Intel Omni-Path network, reserved for high-speed communication between the compute nodes; it interconnects the 28 nodes (24 Xeon and 4 KNL) of the miniHPC cluster in a two-level fat-tree topology.

Graphical illustration of the miniHPC two-level fat-tree topology.

Number of nodes: 28 (24 Intel Xeon and 4 Intel Xeon Phi KNL).

Number of switches: 5.

Number of links: 196.


 
miniHPC: SMALL BUT MODERN HPC