National High Performance Cluster
FCSE hosts and operates the national HPC cluster. The cluster consists of 84 computational blade servers, each with two six-core Xeon L5640 CPUs and 24 GB of RAM. The 6 management servers also have two six-core L5640 CPUs and 24 GB of RAM each; four of them act as storage servers and are connected in a failover configuration to a SAS storage array with 60x600 GB dual-channel SAS disks. All computational and management nodes are interconnected with a QDR InfiniBand network deployed with 1:1 oversubscription, and a 1 Gbit Ethernet network provides the secondary interconnect. The overall theoretical peak performance of the cluster is 9 TFlops, and the achieved peak LINPACK performance is 7.776 TFlops, which corresponds to 86% efficiency.
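As an illustration of how these figures relate, the short Python sketch below recomputes the theoretical peak from the node count and the CPU specification. The 2.26 GHz clock and 4 double-precision FLOPs per core per cycle are assumptions based on the L5640 processor generation, not values taken from the cluster documentation.

# Rough reconstruction of the theoretical peak and LINPACK efficiency.
# Assumed constants: 2.26 GHz nominal clock, 4 DP FLOPs per core per cycle.
nodes = 84                  # computational blade servers
sockets_per_node = 2        # two six-core L5640 CPUs per node
cores_per_socket = 6
clock_hz = 2.26e9           # assumed nominal clock
flops_per_cycle = 4         # assumed DP FLOPs per core per cycle

cores = nodes * sockets_per_node * cores_per_socket
peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
linpack_tflops = 7.776      # measured peak quoted in the text

print(f"cores: {cores}")                                    # 1008
print(f"theoretical peak: {peak_tflops:.2f} TFlops")        # roughly 9.1
print(f"LINPACK efficiency: {linpack_tflops / peak_tflops:.1%}")  # roughly 85-86%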
The HPC cluster is deployed with Scientific Linux 5 and the EMI gLite Grid middleware, with Torque+Maui for queuing and scheduling. The storage is deployed using Lustre 2.1 and is available over InfiniBand to all compute nodes.
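From a compute node, the Lustre file system can be inspected with the standard lfs client utility; the following is a minimal sketch assuming the Lustre client tools are in the PATH (the mount point is not specified here and is reported by the command itself).

# Query the Lustre file system visible to this compute node.
# `lfs df -h` is the standard Lustre client command for per-target usage.
import subprocess

output = subprocess.check_output(["lfs", "df", "-h"], text=True)
print(output)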
The programming tools deployed on the cluster are the GNU compilers, OpenMPI and MVAPICH for MPI (with x86_64 and InfiniBand support), and GNU OpenMP. The cluster currently supports the following software packages dedicated to HPC computations: ATLAS, CPMD, GAMESS, NWChem, gnuplot, Emacs, EMI 1 WN and ORCA. Additionally, the cluster operations team can support deployment of any needed library or software package for any research community.
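As an example of using the deployed MPI stack, the sketch below is a minimal MPI program written with mpi4py; mpi4py itself is not among the listed packages and is assumed to be installed by the user (or requested from the operations team) on top of the system OpenMPI or MVAPICH.

# Minimal MPI example; run with e.g. `mpirun -np 24 python hello_mpi.py`.
from mpi4py import MPI   # assumed to be installed on top of the system MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # rank of this process
size = comm.Get_size()   # total number of MPI processes

# Each rank reports itself; rank 0 also collects a simple reduction.
total = comm.reduce(rank, op=MPI.SUM, root=0)
print(f"hello from rank {rank} of {size}")
if rank == 0:
    print(f"sum of ranks: {total}")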
The HPC cluster can be accessed using the gLite middleware through the following supported VOs: margi (the Macedonian national VO), seegrid, see, env.see-grid-sci.eu, meteo.see-grid-sci.eu, seismo.see-grid-sci.eu and biomed. Users can also obtain a local account and submit jobs directly using PBS scripts, as sketched below.
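For direct submission with a local account, a job is described by a PBS script and handed to Torque's qsub. The sketch below generates and submits such a script from Python; the job name, resource request, walltime and job body are purely illustrative placeholders, not site defaults.

# Sketch of direct PBS submission via Torque's qsub; the job name, resource
# request, walltime and job body are illustrative placeholders.
import subprocess
import tempfile

pbs_script = """#!/bin/bash
#PBS -N hello_mpi
#PBS -l nodes=2:ppn=12
#PBS -l walltime=00:10:00
cd $PBS_O_WORKDIR
mpirun -np 24 python hello_mpi.py
"""

# Write the script to a temporary file and submit it with qsub,
# which prints the job identifier on success.
with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(pbs_script)
    script_path = f.name

job_id = subprocess.check_output(["qsub", script_path], text=True).strip()
print("submitted job:", job_id)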