
ARCHITECTURE DESIGN

Storage Area Networks (SANs) are well suited to support clustering technologies. As you may know, clustering is the concept of connecting several servers to the same shared disk storage. This allows multiple servers to access the disk storage in a coordinated fashion, offering fault tolerance by avoiding a single point of failure should a server malfunction. Windows Clustering requires at least two nodes (running Windows Server 2008 R2 Enterprise Edition) and a common storage location, generally implemented as a SAN or as Network Attached Storage (NAS).


Synchronizing Disk Access

As you can imagine, having multiple servers access the same shared storage must be done in an orderly fashion. If multiple servers try to access the same data at the same time in an uncoordinated way, the result is disk corruption. There are two schools of thought on how to coordinate shared disk access: the shared everything model and the shared nothing model.
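To see why uncoordinated access corrupts data, consider the classic lost-update problem. The sketch below is a hypothetical illustration (the `SharedDisk` class and its methods are invented for this example): two "servers" each read the same block, modify it, and write it back, and one update silently disappears.

```python
# Hypothetical sketch: two "servers" perform an uncoordinated
# read-modify-write against the same shared block, losing an update.

class SharedDisk:
    """A single shared block of data (stands in for shared storage)."""
    def __init__(self):
        self.block = 0

    def read(self):
        return self.block

    def write(self, value):
        self.block = value

disk = SharedDisk()

# Uncoordinated: both servers read the block before either writes back.
a = disk.read()      # Server1 reads 0
b = disk.read()      # Server2 also reads 0
disk.write(a + 1)    # Server1 writes 1
disk.write(b + 1)    # Server2 also writes 1 -- Server1's update is lost

print(disk.block)    # 1, not the expected 2
```

Both coordination models described next exist to rule out exactly this interleaving.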

The shared everything model allows all servers to access all the shared disk drives simultaneously. This is accomplished through distributed lock manager software, which coordinates the locking of files and records on disk. Only one server can own an exclusive write-mode lock on a file, which prevents other nodes from writing to it at the same time. While there is overhead associated with the distributed lock manager, this model scales well as the number of servers in the cluster grows.
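A minimal sketch of the lock-granting idea, assuming a single in-process coordinator (a real distributed lock manager runs across nodes; the `LockManager` class and node names here are invented for illustration):

```python
# Hypothetical sketch of a distributed lock manager's core rule:
# each file has at most one exclusive write lock, held by one node.
import threading

class LockManager:
    """Grants per-file exclusive write locks to cluster nodes."""
    def __init__(self):
        self._owners = {}              # file name -> owning node
        self._guard = threading.Lock() # protects the ownership table

    def acquire_write(self, node, filename):
        """Return True if `node` now owns the write lock on `filename`."""
        with self._guard:
            if filename not in self._owners:
                self._owners[filename] = node
                return True
            return self._owners[filename] == node  # re-entrant for the owner

    def release(self, node, filename):
        with self._guard:
            if self._owners.get(filename) == node:
                del self._owners[filename]

dlm = LockManager()
assert dlm.acquire_write("Server1", "data.db")      # Server1 gets the lock
assert not dlm.acquire_write("Server2", "data.db")  # Server2 is refused
dlm.release("Server1", "data.db")
assert dlm.acquire_write("Server2", "data.db")      # now Server2 can write
```

The per-lock bookkeeping and the messaging needed to keep every node's view consistent is the overhead the paragraph above refers to.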

In contrast, Windows clusters use the shared nothing model to synchronize access to storage. This means that only one server can own a particular shared disk drive at a time, which prevents other nodes from writing to the disk while the owner node manipulates its data. The other servers can own their own disk drives, but no two nodes can ever own the same drive; hence the name shared nothing. The following diagram illustrates the shared nothing model, with Server1 owning Disk1 and Server2 owning Disk2.
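The ownership rule can be sketched as follows. This is an invented illustration of the concept, not Windows Clustering's actual mechanism: each disk maps to exactly one owner, and on failover the surviving node simply takes over the failed node's disks.

```python
# Hypothetical sketch of the shared nothing model: ownership is tracked
# per disk, and a disk belongs to at most one node at any moment.
class Cluster:
    def __init__(self):
        self.owner = {}  # disk name -> owning node

    def claim(self, node, disk):
        """Take ownership of `disk` unless another node already owns it."""
        if self.owner.get(disk, node) != node:
            raise PermissionError(f"{disk} is owned by {self.owner[disk]}")
        self.owner[disk] = node

    def failover(self, disk, new_node):
        """When the owner node fails, a surviving node takes over its disk."""
        self.owner[disk] = new_node

cluster = Cluster()
cluster.claim("Server1", "Disk1")
cluster.claim("Server2", "Disk2")
# cluster.claim("Server2", "Disk1") would raise PermissionError here.
cluster.failover("Disk1", "Server2")  # Server2 now owns both disks
```

Note that no lock manager is needed: because only the owner ever touches a disk, coordination reduces to tracking ownership.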


Implementation Network
Because we will use Network Attached Storage (NAS) for the cluster and failover, there will be three different network schemes: 192.168.1.x for users, 10.10.10.x for the heartbeat, and 192.168.10.x for storage.
Name    Description         IP Address      Subnet Mask      Comments
-----   -----------------   -------------   -------------    ----------------
NAS     iSCSI Network       192.168.10.25   255.255.255.0    Data
DC      Domain Controller   192.168.1.5     255.255.255.0    App Srv
NODE1   Srv1 to Host        192.168.1.2     255.255.255.0    Server1 Win2K8En
NODE1   Srv1 Heartbeat      10.10.10.2      255.255.255.0
NODE1   Srv1 iSCSI Netw     192.168.10.2    255.255.255.0
NODE2   Srv2 to Host        192.168.1.3     255.255.255.0    Server2 Win2K8En
NODE2   Srv2 Heartbeat      10.10.10.3      255.255.255.0
NODE2   Srv2 iSCSI Netw     192.168.10.3    255.255.255.0
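The addressing plan above can be sanity-checked with Python's standard `ipaddress` module; the addresses and subnet masks (all 255.255.255.0, i.e. /24) come from the table, while the network nicknames are just labels for this sketch.

```python
# Sanity-check that every address in the plan falls inside its
# intended /24 network (255.255.255.0 mask).
import ipaddress

networks = {
    "users":     ipaddress.ip_network("192.168.1.0/24"),
    "heartbeat": ipaddress.ip_network("10.10.10.0/24"),
    "storage":   ipaddress.ip_network("192.168.10.0/24"),
}

hosts = {
    "NAS iSCSI":       ("storage",   "192.168.10.25"),
    "DC":              ("users",     "192.168.1.5"),
    "NODE1 host":      ("users",     "192.168.1.2"),
    "NODE1 heartbeat": ("heartbeat", "10.10.10.2"),
    "NODE1 iSCSI":     ("storage",   "192.168.10.2"),
    "NODE2 host":      ("users",     "192.168.1.3"),
    "NODE2 heartbeat": ("heartbeat", "10.10.10.3"),
    "NODE2 iSCSI":     ("storage",   "192.168.10.3"),
}

for name, (net, addr) in hosts.items():
    assert ipaddress.ip_address(addr) in networks[net], name
print("all addresses fall in their intended /24 networks")
```

Keeping the three roles on separate subnets ensures that heartbeat and iSCSI traffic never compete with user traffic on the same segment.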


Technical Specifications
Dell PowerVault PV-LTO4-120HH (backup tape)

PV LTO4-120HH              PVT LTO-4-120 half-height tape drive, 120MB/s, 800GB native / 1.6TB compressed
Controller Card            None
SAS Cables                 6Gb SAS Cable, 1M
Tape Media                 Free Upgrade! Tape Media for LTO4-120 tape drive, 800GB/1.6TB, 5 Pack
Backup Software            Symantec BackupExec 12.5 2010 R3 (with agents)
Hardware Support Services  3Yr Basic Hardware Warranty Repair: 5x10 HW-Only, 5x10 NBD Onsite
Environmental Options      None

Dell PowerVault NX200H (NAS)


PowerVault NX200           NX200, 4TB PV NAS Tower, RAID 5
Embedded Management        Baseboard Management Controller
Client Recovery Software   Symantec System Recovery 2011, Desktop Edition for NX200
Hardware Support Services  1 Yr Next Business Day Parts Delivery Only
Installation Services
Operating System           Windows Storage Server 2008 x64 Basic
