^Server or Blade ^# nodes and processors ^# cores/processor ^RAM (GB) ^Local storage ^Network interfaces^
| **Blade Gandalf** | 6 nodes Dell with 2 Intel Xeon | 4 | 8 | 1 HD SAS 73GB | 1Gbit Ethernet for Management and Data|
| **Blade Legolas** | 16 nodes HP with 2 Intel Xeon | 4 | 24 | 1 HD SAS 73GB | 1Gbit Ethernet for Management and Data|
| **Blade Merlino** | 9 nodes Dell with 2 AMD Opteron | 4 | 16 | 1 HD SATA 80GB | 1Gbit Ethernet for Management, 10Gbit Ethernet for Data|
| **Blade Morgana** | 11 nodes Dell with 2 Intel Xeon | 4 | 24 | 2 HD SAS 146GB RAID0 | 1Gbit Ethernet for Management, 20Gbit Mellanox Infiniband for MPI, 10Gbit Ethernet for Data|
| **Covenant** | 1 node HP with 4 Intel Xeon, 2 nodes Dell with 2 Intel Xeon | 10/20 | 256/320 | 2 HD SAS 1TB RAID0 | 1Gbit Ethernet for Management, 10Gbit Ethernet for Data|
| **Masternode** | 1 node Dell with 2 Intel Xeon | 4 | 24 | 2 HD SAS 1TB RAID1 for OS, 4 HD SAS 2TB RAID5 for Scratch, 35TB Storage for Home | 1Gbit Ethernet for Management, 1Gbit Ethernet for Frontend/login, 1Gbit Ethernet for nodes console (iDRAC, iLO), 2 10Gbit Ethernet for Data, Fibre Channel 8Gbps for Storage, 1 10Gbit Mellanox Infiniband for Infiniband control|
| **GPU nodes** | 5 nodes Dell T630/T640 with 2 Intel Xeon and 1 or 2 NVIDIA GPUs | 8 | 32/64 | 1 HD SAS 1TB | 1Gbit Ethernet for Management, 1Gbit Ethernet for Data|
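
The node classes in the table are requested through PBSPro select statements. A minimal sketch for illustration only: ''ncpus'' and ''mem'' are built-in PBSPro resources, but a GPU resource such as ''ngpus'' is site-defined and assumed here; check [[Queues and Resources|Queues and access to the cluster]] for the names actually configured.

<code bash>
# Sketch only: "ngpus" is a common PBSPro custom resource and an
# assumption for this cluster, not a confirmed local resource name.
qsub -l select=1:ncpus=4:mem=16gb job.sh       # one generic compute node
qsub -l select=1:ncpus=8:ngpus=1 gpu_job.sh    # one GPU node with a single GPU
</code>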
  - A file system **/homes** on Dept. Storage (up to 35 TB) for user homes, shared via NFS on a Gbit or 10Gbit network (Morgana nodes have 1 Gbit of bandwidth guaranteed thanks to their 10Gbit Ethernet hardware)
  - A file system **/scratch** on local RAID storage (up to 5.5 TB) for scratch space, shared via NFS on a Gbit or 10Gbit network (Morgana nodes have 1 Gbit of bandwidth guaranteed thanks to their 10Gbit Ethernet hardware)
  - Login for users via SSH and via a Web SSH interface (a login example follows this list)
  - Node provisioning software (installation of new nodes and node rebuild in less than 5 min.)
  - PBSPro master for scheduling and resource management (see the jobfile sketch after this list)
  - Infiniband network controller (via software)
  - Application software for compute nodes
  - Monitoring and utility software: Ganglia, Nagios
  - Documentation website: [[http://masternode.chem.polimi.it|http://masternode.chem.polimi.it]], accessible only from the wired DCMC network or via the DCMC VPN
  - Web SSH interface: [[https://masternode.chem.polimi.it/webssh|https://masternode.chem.polimi.it/webssh]], accessible only from the wired DCMC network or via the DCMC VPN
  - Web interface to PBSPro submission: planned for 2019
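
The SSH login mentioned above, as a minimal sketch: it assumes the frontend answers at the hostname used by the documentation site, and ''username'' stands for your cluster account.

<code bash>
# Assumption: the login frontend is reachable at the documentation hostname.
ssh username@masternode.chem.polimi.it
</code>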
  
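A minimal PBSPro jobfile sketch tying the services above together: it runs in **/scratch** and copies results back to the directory the job was submitted from. The resource values, the per-job scratch layout, and ''my_program'' are placeholders, not this cluster's actual configuration; see [[pbs_jobfile_structure|PBS jobfile structure]] and [[Queues and Resources|Queues and access to the cluster]] for the real settings.

<code bash>
#!/bin/bash
#PBS -N example_job
#PBS -l select=1:ncpus=4:mem=8gb      # placeholder resource request
#PBS -l walltime=01:00:00

# Work in the shared scratch area; the per-user/per-job directory layout
# under /scratch is an assumption made for this sketch.
SCRATCHDIR=/scratch/$USER/$PBS_JOBID
mkdir -p "$SCRATCHDIR"
cd "$SCRATCHDIR"

# my_program is a placeholder executable in the submission directory.
"$PBS_O_WORKDIR"/my_program > output.log

# Copy results back to the home file system before the job ends.
cp output.log "$PBS_O_WORKDIR"/
</code>
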
===== Software =====
  
[[Queues and Resources|Queues and access to the cluster]]\\ 
[[pbs_jobfile_structure|PBS jobfile structure]]\\ 
[[Modules|Modules]]
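
If the cluster uses environment modules, as the [[Modules|Modules]] page suggests, the usual workflow looks like the sketch below; the module name is an example only, so run ''module avail'' to see what is actually installed.

<code bash>
module avail       # list the software modules available on the cluster
module load gcc    # load one ("gcc" is just an example name)
module list        # show the modules currently loaded in this shell
</code>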
  