The Cluster consists of servers and blades organized in two racks, physically located in the Cluster Room and the Datacenter Room at the Mancinelli site. The main hardware components were part of the former Labs cluster, together with new acquisitions and one blade from the Mathematics Dept. Datacenter donated by a research group of DICA.
| Server or blade | Nodes and processors | Cores/processor | RAM (GB) | Local storage | Network interfaces |
|---|---|---|---|---|---|
| Gandalf | 6 Dell nodes with 2 Intel Xeon | 4 | 8 | 1 SAS HD, 73 GB | Gigabit Ethernet |
| Merlino | 9 Dell nodes with 2 AMD Opteron | 4 | 16 | 1 SATA HD, 80 GB | Gigabit Ethernet |
| Blade Morgana | 11 Dell nodes with 2 Intel Xeon | 4 | 24 | 2 SAS HD, 146 GB, RAID 0 | InfiniBand (MPI) and Gigabit Ethernet (data and management) |
| Covenant | 1 HP node with 4 Intel Xeon | 10 | 256 | 2 SAS HD, 1 TB, RAID 0 | Gigabit Ethernet (shared-memory MPI) |
| Masternode | 1 virtual node with 1 CPU | 6 | 12 | virtual HD for the OS | 10 Gb Ethernet teamed into one 20 Gb channel; 4 Gbps Fibre Channel optical fiber (storage); Gigabit Ethernet |
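As a quick way to check that a given node matches the table above, the following minimal C sketch (an illustration only, not part of the cluster installation; it relies on the Linux/glibc `sysconf` extensions `_SC_NPROCESSORS_ONLN` and `_SC_PHYS_PAGES`) prints the online core count and the physical RAM of the node it runs on:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Number of CPU cores currently online on this node */
    long cores = sysconf(_SC_NPROCESSORS_ONLN);

    /* Physical memory = number of pages * page size (glibc extension) */
    long pages = sysconf(_SC_PHYS_PAGES);
    long page_size = sysconf(_SC_PAGESIZE);
    double ram_gb = (double)pages * (double)page_size
                    / (1024.0 * 1024.0 * 1024.0);

    printf("cores online: %ld\n", cores);
    printf("physical RAM: %.1f GB\n", ram_gb);
    return 0;
}
```

Running it on a Merlino node, for example, should report 8 cores (2 processors with 4 cores each) and roughly 16 GB of RAM.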
The Masternode provides:
The cluster has been installed following the directions of the OpenHPC Project, an open-source project backed by Intel and many HPC software and hardware players, and supported by a strong developer community. The main reasons for this choice are that the project is committed to keeping the software platform up to date as its individual components evolve (OS, compilers, MPI distributions, toolchain components, utility software, hardware support, container technology), and that it is independent of any specific vendor or technology. The installed software is divided into Management and operations software and Application software.
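Since OpenHPC provides complete MPI toolchains, a classic end-to-end check of the stack is a minimal MPI "hello" program. The sketch below is illustrative only: it assumes any standard MPI implementation (such as one of the MPI distributions mentioned above) and the usual `mpicc`/`mpirun` wrappers, whose exact names depend on the toolchain loaded.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    char name[MPI_MAX_PROCESSOR_NAME];
    int name_len;

    /* Identify this process and the node it landed on */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &name_len);

    printf("rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

Compiled with `mpicc` and launched with `mpirun` across several nodes, each rank reports its host name; per the table above, ranks spread across Blade Morgana nodes would communicate over InfiniBand, while ranks on Covenant would communicate through shared memory within the single node.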