^Server or Blade ^# nodes and processors ^# cores/processor ^RAM (GB) ^Local storage ^Network interfaces^
| **Blade Gandalf** | 6 Dell nodes with 2 Intel Xeon | 4 | 8 | 1 HD SAS 73GB | 1Gbit Ethernet for Management and Data |
| **Blade Legolas** | 16 HP nodes with 2 Intel Xeon | 4 | 24 | 1 HD SAS 73GB | 1Gbit Ethernet for Management and Data |
| **Blade Merlino** | 9 Dell nodes with 2 AMD Opteron | 4 | 16 | 1 HD SATA 80GB | 1Gbit Ethernet for Management, 10Gbit Ethernet for Data |
| **Blade Morgana** | 11 Dell nodes with 2 Intel Xeon | 4 | 24 | 2 HD SAS 146GB RAID0 | 1Gbit Ethernet for Management, 20Gbit Mellanox InfiniBand for MPI, 10Gbit Ethernet for Data |
| **Covenant** | 1 HP node with 4 Intel Xeon, 2 Dell nodes with 2 Intel Xeon | 10/20 | 256/320 | 2 HD SAS 1TB RAID0 | 1Gbit Ethernet for Management, 10Gbit Ethernet for Data |
| **Masternode** | 1 Dell node with 2 Intel Xeon | 4 | 24 | 2 HD SAS 1TB RAID1 for OS, 4 HD SAS 2TB RAID5 for Scratch, 35TB Storage for Home | 1Gbit Ethernet for Management, 1Gbit Ethernet for Frontend/login, 1Gbit Ethernet for node consoles (iDRAC, iLO), 2 10Gbit Ethernet for Data, Fibre Channel 8Gbps for Storage, 1 10Gbit Mellanox InfiniBand for InfiniBand control |
| **GPU nodes** | 5 Dell T630/T640 nodes with 2 Intel Xeon and 1 or 2 NVIDIA GPUs | 8 | 32/64 | 1 HD SAS 1TB | 1Gbit Ethernet for Management, 1Gbit Ethernet for Data |
  
[[Queues and Resources|Queues and access to the cluster]]\\ 
[[pbs_jobfile_structure|PBS jobfile structure]]\\ 
[[Modules|Modules]]
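
The pages above cover queue limits, jobfile syntax, and the available environment modules in detail. As a quick orientation, below is a minimal sketch of what a PBS jobfile looks like; the queue name (''workq''), module name (''openmpi''), and executable (''my_program'') are placeholders for illustration, not actual values on this cluster. Check the linked pages for the real queue and module names.

<code bash>
#!/bin/bash
# Minimal PBS jobfile sketch. Queue, module, and program names below are
# placeholders; see the Queues and Modules pages for the real values.
#PBS -N example_job            # job name
#PBS -q workq                  # target queue (hypothetical name)
#PBS -l nodes=1:ppn=4          # request 1 node, 4 cores per node
#PBS -l walltime=01:00:00      # maximum run time (hh:mm:ss)

cd "$PBS_O_WORKDIR"            # run from the directory qsub was called in

module load openmpi            # placeholder module name

mpirun -np 4 ./my_program      # placeholder executable
</code>

Such a file would be submitted with ''qsub jobfile.pbs'' and monitored with ''qstat''.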
  