The cluster is made up of servers and blades organized in three racks, physically located in the Cluster Room and the Datacenter Room at the Mancinelli site. The main hardware components were part of the former Labs cluster, together with new acquisitions and one blade from the Mathematics Dept. datacenter donated by a research group of DICA.
Server or Blade | # nodes and processors | # cores/processor | RAM (GB) | Local storage | Network interfaces |
---|---|---|---|---|---|
Blade Gandalf | 6 Dell nodes with 2 Intel Xeon | 4 | 8 | 1 HD SAS 73GB | 1Gbit Ethernet for Management and Data |
Blade Legolas | 16 HP nodes with 2 Intel Xeon | 4 | 24 | 1 HD SAS 73GB | 1Gbit Ethernet for Management and Data |
Blade Merlino | 9 Dell nodes with 2 AMD Opteron | 4 | 16 | 1 HD SATA 80GB | 1Gbit Ethernet for Management, 10Gbit Ethernet for Data |
Blade Morgana | 11 Dell nodes with 2 Intel Xeon | 4 | 24 | 2 HD SAS 146GB RAID 0 | 1Gbit Ethernet for Management, 20Gbit Mellanox InfiniBand for MPI, 10Gbit Ethernet for Data |
Covenant | 1 HP node with 4 Intel Xeon, 2 Dell nodes with 2 Intel Xeon | 10 / 20 | 256 / 320 | 2 HD SAS 1TB RAID 0 | 1Gbit Ethernet for Management, 10Gbit Ethernet for Data |
Masternode | 1 Dell node with 2 Intel Xeon | 4 | 24 | 2 HD SAS 1TB RAID 1 for OS, 4 HD SAS 2TB RAID 5 for Scratch, 35TB Storage for Home | 1Gbit Ethernet for Management, 1Gbit Ethernet for Frontend/login, 1Gbit Ethernet for node consoles (iDRAC, iLO), 2× 10Gbit Ethernet for Data, 8Gbps Fibre Channel for Storage, 1× 10Gbit Mellanox InfiniBand for InfiniBand control |
GPU nodes | 5 Dell T630/T640 nodes with 2 Intel Xeon and 1 or 2 NVIDIA GPUs | 8 | 32 / 64 | 1 HD SAS 1TB | 1Gbit Ethernet for Management, 1Gbit Ethernet for Data |
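The Morgana nodes carry MPI traffic over a dedicated Mellanox InfiniBand network. As a quick way to see where a job's MPI ranks land, a minimal sketch like the one below can be used; the build and launch commands shown in the comments are indicative only, since they depend on the MPI distribution actually loaded on the cluster.

```c
/* Minimal MPI sketch: each rank reports which node it is running on.
 * Indicative build/run (depends on the installed MPI distribution):
 *   mpicc mpi_hello.c -o mpi_hello && mpirun -np 4 ./mpi_hello */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank id   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total ranks in the job   */
    MPI_Get_processor_name(host, &len);    /* node hosting this rank   */

    printf("rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```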
The Masternode provides:
- the frontend/login node for cluster users;
- the Home (35TB) and Scratch storage areas exported to the nodes;
- the management network for the nodes and their consoles (iDRAC, iLO).
The cluster has been installed following the directions of the OpenHPC Project, an open source project backed by Intel and many HPC software and hardware players, and supported by a strong developer community. The main reasons for this choice are the project's commitment to keeping the software platform up to date as its individual components evolve (OS, compilers, MPI distributions, toolchain components, utility software, hardware support, container technology) and its independence from any specific vendor or technology. The installed software is divided into management and operations software and application software. At the startup on April 1st, 2019, the installed software included:
And many more libraries and tools for parallel and scalar scientific programming.
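As an illustration of the shared-memory side of that stack, here is a minimal OpenMP sketch; it assumes only a C compiler with OpenMP support (e.g. gcc with -fopenmp), which is the kind of toolchain OpenHPC ships, and everything else in it is generic.

```c
/* Minimal OpenMP sketch: a parallel reduction on the cores of one node.
 * Indicative build: gcc -fopenmp sum.c -o sum */
#include <omp.h>
#include <stdio.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    /* Each thread accumulates a private partial sum; OpenMP combines them. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= n; i++) {
        sum += 1.0 / i;  /* partial sums of the harmonic series */
    }

    printf("H(%d) = %.6f using up to %d threads\n",
           n, sum, omp_get_max_threads());
    return 0;
}
```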