Table of Contents

1. Computational Resources
   1.1 Super Computer Specifications
   1.2 SAN Setup
   1.3 Network Diagram
2. Using the Super Computer
   2.1 Accessing the Super Computer
   2.2 Password-less Authentication
   2.3 Job Submission
3. Contact Information
   3.1 Contact Persons
   3.2 Contact Address
1. Computational Resources
RCMS Super Computer is installed in state of art data center with 80 KVA of UPS
backup and 12 ton precision cooling system. The data center is protected by FM-200
based Automatic Fire Detection and Suppression System and manual fire extinguishers.
CCTV Cameras and Access Control systems are being procured for effective
surveillance of data center. Specifications of Super Computer are given below:-
1.1 Super Computer Specifications
The Super Computer comprises 32 Intel Xeon based machines, each connected to an NVIDIA Tesla S1070 (each of which contains 4 GPUs). All nodes are connected by a 40 Gbps QDR InfiniBand interconnect for internal communication. A high-performance, reliable SAN storage system is linked to the servers and is accessible by all computational nodes. Table 1 shows the detailed specification of the RCMS Super Computer.
Table 1: RCMS Super Computer Specifications

Cluster Name:             afrit.rcms.nust.edu.pk
Brand:                    HP ProLiant DL380 G6 Servers / HP ProLiant DL160se G6 Server
Total Processors:         272 Intel Xeon
Total Nodes:              34
Total Memory:             1.312 TB
Operating System:         Red Hat Enterprise Linux 5.6
Interconnects:            40 Gbps QDR InfiniBand Switch
Storage:                  HP P2000 SAN Storage with 22 TB capacity, SAN Switches, Host Bus Adapters (HBAs), Fibre Channel Switch with RAID Controllers
Graphic Processing Unit:  32 x NVIDIA Tesla S1070 (each system contains 4 GPUs)
Cluster Nodes

afrit.rcms.nust.edu.pk
Compute-0-3   Compute-0-4   Compute-0-5   Compute-0-6
Compute-0-7   Compute-0-8   Compute-0-9   Compute-0-10
Compute-0-11  Compute-0-12  Compute-0-13  Compute-0-14
Compute-0-15  Compute-0-16  Compute-0-17  Compute-0-18
Compute-0-19  Compute-0-20  Compute-0-21  Compute-0-22
Compute-0-23  Compute-0-24  Compute-0-25  Compute-0-26
Compute-0-27  Compute-0-28  Compute-0-29  Compute-0-30
Compute-0-31  Compute-0-32  Compute-0-33  Compute-0-35
Utility                   Command    Location
Make utility              make       /usr/bin/make
GNU C compiler            gcc        /usr/bin/gcc
GNU C++ compiler          g++        /usr/bin/g++
GNU Fortran 77 compiler   g77        /usr/bin/g77
MPI C compiler            mpicc      /usr/mpi/intel/openmpi-1.4.3/bin/mpicc
MPI C++ compiler          mpic++     /usr/mpi/intel/openmpi-1.4.3/bin/mpic++
MPI Fortran 77 compiler   mpif77     /usr/mpi/intel/openmpi-1.4.3/bin/mpif77
Java compiler             javac      /usr/java/latest/bin/javac
Ant utility               ant        /opt/rocks/bin/ant
C compiler                cc         /usr/bin/cc
F77 compiler              f77        /usr/bin/f77
GFortran compiler         gfortran   /usr/bin/gfortran
Fortran 95 compiler       f95        /usr/bin/f95
UPC compiler              upcc       /share/apps/UPC/upc-installation/upcc
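As a quick illustration, a simple MPI program can be compiled with the mpicc wrapper listed above (a minimal sketch; hello.c is a hypothetical source file):

$ /usr/mpi/intel/openmpi-1.4.3/bin/mpicc -o hello hello.c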
Utility       Command   Location
MPI runtime   mpirun    /usr/mpi/intel/openmpi-1.4.3/bin/mpirun
Java runtime  java      /usr/java/latest/bin/java
UPC runtime   upcrun    /share/apps/UPC/upc-installation/upcrun
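The compiled program from the example above could then be launched directly with mpirun for a quick test (a sketch only; batch jobs would normally go through the Sun Grid Engine as described in Section 2.3):

$ /usr/mpi/intel/openmpi-1.4.3/bin/mpirun -np 4 ./hello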
1.2 SAN Setup
A total of 22 TB of SAN storage is available for storing users' data. Two SAN switches are installed in the UBB rack, with 8 x 8 Gb transceivers installed in each SAN switch. A total of 48 drive slots are available, each occupied by a 450 GB drive.

The system is configured with RAID 1 on one unit and RAID 5 on four units, each containing 16 drives. One online spare drive is designated in each disk enclosure for high availability. In case of a drive failure, the online spare takes over and the data is rebuilt according to the RAID level.

Each unit is presented to the storage node, whose hostname is u2. The NFS server daemon is installed on u2, and an NFS share has been created to assign storage to the other nodes on the network.

The storage is managed using an application called Storage Management Utility.
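As an illustration of this NFS arrangement (the export path /export/data and mount point /mnt/data are hypothetical placeholders; the actual share name on u2 may differ):

# Hypothetical entry in /etc/exports on u2:
/export/data *(rw,sync)
# A client node would then mount the share (as root):
$ mount -t nfs u2:/export/data /mnt/data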
1.3 Network Diagram

[Figure: network diagram of the RCMS Super Computer]

2. Using the Super Computer

2.1 Accessing the Super Computer

e) On clicking Open, you will be asked to enter your username and password. Type your username and your password, each followed by Enter, to log in to the Super Computer. Please note that your password will not be displayed, for security reasons.
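If you are connecting from a Linux or macOS terminal rather than a graphical SSH client, the same login can be performed with ssh (a minimal sketch; your_username is a placeholder for your assigned account name):

$ ssh your_username@afrit.rcms.nust.edu.pk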
2.2 Password-less Authentication
2.2.1 A private/public key pair is required to authenticate you on the target machine. To generate this key pair, type the following in the console:

$ ssh-keygen -t rsa
2.2.3 After generating the key pair, append the public key to a file named authorized_keys, as shown below:

$ cd ~
$ cat ./.ssh/id_rsa.pub >> ./.ssh/authorized_keys
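Password-less login can then be verified by running a command on a compute node; no password prompt should appear. This is a minimal check, assuming home directories (and therefore ~/.ssh/authorized_keys) are shared across the nodes; compute-0-3 is just an example node from the list in Section 1.1:

$ ssh compute-0-3 hostname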
2.3 Job Submission
To see the status of a submitted job script, type the following command followed by Enter:

$ qstat
The following sample script, mpi_script.sh, shows how an MPI job can be described for the Sun Grid Engine:

#!/bin/bash
# mpi_script.sh
# **********************
# The following line sets the name of the MPI job:
#$ -N MPI_PRO
# The output and error files will be named
# MPI_PRO.o<JOB_ID> and MPI_PRO.e<JOB_ID> respectively.
# The following line assigns 16 cores to the script:
#$ -pe mpi 16
# $NSLOTS  - number of cores allocated by the Sun Grid Engine
# machines - a machine file containing the names of all available nodes
echo "Allocated $NSLOTS slots."
# ./mpi_program is a placeholder for the user's compiled MPI executable.
mpirun -np $NSLOTS -mca ras gridengine --hostfile machines ./mpi_program
# End of Script
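The script above would be submitted with qsub, the Sun Grid Engine's submission command (a usage sketch, assuming the script is saved as mpi_script.sh):

$ qsub mpi_script.sh

Its progress can then be monitored with qstat, as described above.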
3. Contact Information
3.1 Contact Persons

For any inquiry or assistance, feel free to contact the following persons:
S. No  Designation                      Name                       Contact No     Email Address
1      Director ScREC                   Engr. Taufique-ur-Rehman   051-90855730   taufique.rehman@rcms.nust.edu.pk
2      Faculty Member                                              051-90855731   tariq@rcms.nust.edu.pk
3      System Administrator                                        051-90855717   usman@rcms.nust.edu.pk
4      Assistant System Administrator                              051-90855714   shahzad@rcms.nust.edu.pk
3.2 Contact Address