A SAN originally designed for a high-bandwidth application, for example, can also
facilitate a more efficient tape backup solution.
Server-free tape backup applications may employ unique hardware and software
products that would not appear in a SAN designed to support server clustering.
Because SANs offer the flexibility of networking, however, you can satisfy the needs of
multiple applications within a single shared storage configuration.
Digitized video has several unique requirements that exceed the capabilities of legacy
data transports, including the sustained transmission of multiple gigabit streams and
intolerance for disruption or delays.
Most SAN-based video applications use the SCSI-3 protocol to move data from disk to
workstations, although custom configurations have been engineered using IP for multicast
and broadcast distribution.
Video applications have common high-performance transport requirements but may vary
considerably in content.
A video editing application, for example, may allow peer workstations to access and
modify video streams from one or more disk arrays.
A video broadcast application that serves content from a central data source to multiple
feeds must have the means to support multicast across the SAN.
Video used for training applications may support both editing workstations and user
stations, with random access to shared video clips or to instructional modules digitized on
disk.
In this example, the bandwidth required per workstation depends on the type of video
streams retrieved from and stored to disk.
In the latter case, 2Gbps Fibre Channel would support ~400MBps full duplex, or
sufficient bandwidth for a high-definition video stream to be read from disk, processed,
and written back concurrently.
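As a quick sanity check on these figures, the following sketch works through the
arithmetic; the payload and stream rates are illustrative assumptions, not values
taken from the text.

    # Back-of-the-envelope check of the 2Gbps Fibre Channel figure above.
    # Assumed values: 8b/10b encoding leaves roughly 200MBps of payload per
    # direction, and an uncompressed HD stream runs at ~165MBps (about the
    # HD-SDI rate). Both numbers are illustrative.

    PAYLOAD_MBPS = 200         # assumed usable payload per direction
    HD_STREAM_MBPS = 165       # assumed uncompressed HD stream rate

    full_duplex_mbps = 2 * PAYLOAD_MBPS
    print(f"Full-duplex payload: ~{full_duplex_mbps}MBps")

    # Reading one stream while writing the processed stream back loads each
    # direction of the link independently, so both fit if one does:
    print(f"Concurrent read + write fits: {HD_STREAM_MBPS <= PAYLOAD_MBPS}")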
Device drivers for the host adapter cards must therefore support failover in case a link or
switch is lost, and preferably load balancing to fully utilize both paths during normal
operation.
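The expected driver behavior can be pictured with a minimal sketch, assuming a
round-robin policy across two paths; the class, path names, and transport call below
are hypothetical, not a real driver API.

    import itertools

    class MultipathDevice:
        """Toy model of failover plus load balancing across HBA paths."""

        def __init__(self, paths):
            self.healthy = list(paths)              # e.g. ["hba0", "hba1"]
            self.rr = itertools.cycle(self.healthy)

        def submit_io(self, block):
            # Load balancing: alternate I/Os across all healthy paths.
            for _ in range(len(self.healthy)):
                path = next(self.rr)
                try:
                    return self._send(path, block)
                except IOError:
                    self._fail_path(path)           # failover on a dead path
            raise IOError("all paths down")

        def _fail_path(self, path):
            self.healthy.remove(path)
            self.rr = itertools.cycle(self.healthy)

        def _send(self, path, block):
            ...                                     # placeholder for the HBA transport

During normal operation both paths carry traffic; when one fails, all I/O shifts to
the surviving path until the failed link or switch is repaired.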
In addition, the ratio of server to storage links must be adequate to support concurrent
operation by all workstations.
Larger storage arrays support multiple links so that a non-blocking configuration can be
built.
Video editing applications are intolerant of latency or delays. For optimal jitter-free
performance, video data can be written to the outer tracks of individual disks within the
storage array.
Although this technique reduces the total usable capacity, it requires less disk head
movement for reading and writing data and thus minimizes latency in disk performance.
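Rough numbers make the trade-off concrete; the capacity fraction and media rates
below are illustrative assumptions about a zoned disk, not vendor specifications.

    # Outer-track placement trades capacity for latency and throughput.
    # All figures are assumed for illustration.

    DISK_GB = 146              # assumed raw capacity per disk
    OUTER_FRACTION = 0.40      # only the outer 40% of each disk is used
    INNER_MBPS, OUTER_MBPS = 45, 72   # assumed sustained media rates

    usable_gb = DISK_GB * OUTER_FRACTION
    print(f"Usable capacity per disk: {usable_gb:.0f}GB of {DISK_GB}GB")
    print(f"Media-rate gain on outer tracks: {OUTER_MBPS / INNER_MBPS:.1f}x")
    # Confining data to the outer zone also shortens worst-case seeks,
    # since the heads travel over a narrower band of the platter.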
Provisioning each server with its own tape backup system is expensive and requires
additional overhead for administration of scheduling and tape rotation on multiple tape
units.
Performing backups across the production LAN allows for the centralization of
administration to one or more large tape subsystems, but it burdens the messaging
network with much higher traffic volumes during backup operations.
When the volume of data exceeds the allowable backup window and stresses the
bandwidth capacity of the messaging network, either the bandwidth of the messaging
network must be increased or the backup data must be partitioned from the messaging
network.
Thus, the potential conflict between user traffic and storage backup requirements can be
resolved only by isolating each on a separate network, for example by installing a
separate SAN interconnection dedicated to backup traffic.
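A simple sizing sketch shows why the dedicated path matters; the data volume and
throughput figures are illustrative assumptions.

    # Compare the backup window over a shared messaging LAN versus a
    # dedicated SAN path. All numbers are assumed for illustration.

    DATA_GB = 2000             # data to back up
    LAN_SHARE_MBPS = 25        # MBps the backup can take from a busy GbE LAN
    SAN_MBPS = 180             # MBps over a dedicated 2Gbps Fibre Channel path

    def window_hours(data_gb, rate_mbps):
        return data_gb * 1024 / rate_mbps / 3600

    print(f"Over the shared LAN:  {window_hours(DATA_GB, LAN_SHARE_MBPS):.1f}h")
    print(f"Over a dedicated SAN: {window_hours(DATA_GB, SAN_MBPS):.1f}h")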
Fig: Tape backup across a departmental network with direct-attached storage
Because the backup data path is now across a dedicated interconnection, the constraints
of the messaging network are removed from the backup process and the burden of backup
traffic is removed from the LAN.
Optimizing the backup routine requires several additional SAN components. Moving disk
storage from parallel SCSI to SAN-attached arrays offers, among other things, the ability to
remove the server from the backup data path.
This is the most significant improvement from the standpoint of performance and
nondisruptive backup operations.
With server resources freed from backup tasks, the servers remain continuously
available for user access.
The backup agent can be an extended copy (third-party copy) utility embedded in a SAN
switch, in a dedicated SAN attached backup server, in a SAN-to-SCSI bridge product, or in
the tape target itself.
In this configuration, backup data is read directly from disk by the copy agent and then
written to tape, bypassing the server.
Concurrent backup and user access to the same data are possible if the backup protocol
maintains metadata (file information about the actual data) to track changes that users may
make to data (such as records) as it is being written to tape.
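The server-free data path can be sketched as follows; the copy agent is shown as a
plain function, and read_blocks and write stand in for whatever block-level interface
a real extended copy implementation would use.

    # Conceptual sketch of a third-party (extended) copy agent: blocks move
    # disk-to-tape without ever crossing the server's bus or LAN interface.

    def third_party_copy(disk, tape, extents, chunk_blocks=128):
        """Stream each (start, length) disk extent to tape in chunks."""
        for start, length in extents:
            for offset in range(0, length, chunk_blocks):
                n = min(chunk_blocks, length - offset)
                data = disk.read_blocks(start + offset, n)  # hypothetical API
                tape.write(data)                            # hypothetical API

    # The server's only contribution is the extent list (metadata); the bulk
    # data itself bypasses the server entirely.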
More sophisticated server designs offer dual power supplies, dual LAN interfaces,
multiple processors, and other features that enhance performance and availability.
Redundancy typically implies hardware features but may also include redundant
software components, including applications.
Redundancy can also be provided simply by duplicating the servers themselves, with
multiple servers running identical applications.
In the case of failure of a hardware or software module within a server, you shift users
from the failed server to one or more servers in a server cluster.
The recovery process must preserve user network addressing, login information,
current status, open applications, open files, and so on.
Clustering software may also include the ability to balance the load among active
servers.
In this way, in addition to providing failover support, the servers in a cluster can be
fully utilized to maximize overall performance.
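A minimal sketch of this combined behavior, assuming a least-loaded dispatch policy;
the server names and redistribution logic are illustrative rather than any particular
clustering product.

    class Cluster:
        """Toy cluster: least-loaded dispatch plus failover redistribution."""

        def __init__(self, servers):
            self.load = {s: 0 for s in servers}

        def dispatch(self, user):
            server = min(self.load, key=self.load.get)   # load balancing
            self.load[server] += 1
            return server

        def fail(self, server):
            # Failover: shift the failed server's users to the survivors.
            displaced = self.load.pop(server)
            for i in range(displaced):
                self.dispatch(f"{server}-user{i}")

    cluster = Cluster(["srv-a", "srv-b", "srv-c"])
    for u in range(9):
        cluster.dispatch(f"user{u}")
    cluster.fail("srv-a")
    print(cluster.load)   # the survivors absorb srv-a's three users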
SANs allow server clusters to scale to very large shared data configurations, with more
than a hundred servers in a single cluster.
For smaller ISPs, internal or direct-attached disks are sufficient as long as storage
requirements do not exceed the capacity of those devices.
For larger ISPs hosting multiple sites, storage requirements may exceed the SCSI-
attached capacity of individual servers.
Network-Attached Storage (NAS) or SANs are viable options for supplying additional
data storage for these configurations.
In addition to meeting storage needs, maintaining availability of Web services is critical
for ISP operations.
Because access to a Web site (URL) is based on Domain Name System (DNS) addressing
rather than physical addressing, you can deploy redundant Web servers as a failover
strategy.
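A sketch of the idea using only the Python standard library: resolve the published
name to all of its A records and try each replica until one answers. The host name is
a placeholder.

    import socket
    import http.client

    def fetch_with_failover(host, path="/", port=80, timeout=3):
        # getaddrinfo returns every A record published for the name, i.e.
        # each redundant Web server reachable behind the same URL.
        addrs = {info[4][0]
                 for info in socket.getaddrinfo(host, port, socket.AF_INET)}
        for addr in sorted(addrs):
            try:
                conn = http.client.HTTPConnection(addr, port, timeout=timeout)
                conn.request("GET", path, headers={"Host": host})
                return conn.getresponse().status
            except OSError:
                continue          # this replica is down; try the next one
        raise RuntimeError("no replica reachable")

    # print(fetch_with_failover("www.example.com"))   # placeholder host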
For sites that rely on internal or SCSI-attached storage, this technique implies that each
server and its attached storage must maintain a duplicate copy of data.
This solution is workable so long as the data itself is not dynamic—that is, it consists
primarily of read-only information.
This option is less attractive, however, for e-commerce applications, which must
constantly update user data, on-line orders, and inventory tracking information.
The shift from read-mostly to more dynamic read/write requirements encourages the
separation of storage from individual servers.
With NAS or SAN-attached disk arrays, data is more easily mirrored for redundancy and
is made available to multiple servers for failover operation.
You can extend the SAN with additional switch ports to accommodate expansion of
storage capacity and an increased population of Web servers.
This small configuration can scale to hundreds of servers and terabytes of data with no
degradation of service.
Fig [a] depicts a scalable ISP configuration using iSCSI for block I/O access to storage
and tape.
In this case, the Ethernet switch is a common interconnection both for Web traffic via
the IP router and for block access to storage data.
Although servers can be provisioned with dual Ethernet links to segregate file and block
traffic using VLANs, some iSCSI adapters support file and block I/O on the same
interface.
Separate departments within a company, for example, may make their own server and
storage acquisitions from their vendor of choice.
Each departmental SAN island is designed to support specific upper-layer applications,
and so may be composed of various server platforms, SAN interconnections, and
storage devices.
It may be desirable, however, to begin linking SANs to streamline tape backup
operations, share storage capacity, or share storage data itself.
Creating a campus network thus requires transport of block storage traffic over
distance as well as accommodation of potentially heterogeneous SAN interconnections.
The main issue with native Fibre Channel SAN extension is not the distance itself but the
requirement for dedicated fiber from one site to another.
Many campus and metropolitan networks may already have Gigabit Ethernet links in
place, but sharing the same fiber between Fibre Channel and Gigabit Ethernet
simultaneously requires the additional cost of dense wavelength division multiplexing
(DWDM) equipment.
Connecting Fibre Channel switches builds a single layer 2 fabric, and therefore multiple
sites in a campus or metro storage network must act in concert to satisfy fabric
requirements. Consider, for example, a campus storage network with a heterogeneous
mix of Fibre Channel and iSCSI-based SANs.
Depending on bandwidth requirements, these links can be shared with messaging traffic
or can be dedicated to storage.
The administrative building is shown with aggregated Gigabit Ethernet links to the data
center to provide higher bandwidth, although 10Gbps Ethernet could also be used if
desired.
The development center is shown with an iSCSI SAN, which requires only a local
Gigabit Ethernet switch to provide connections to server, storage, and the campus.
Disaster recovery tends to move toward the top of IT priorities only after
major natural or human-caused disasters.
The scope of a DR solution is more manageable if administrators first identify the types
of applications and data that are most critical to business continuance.
Customer information and current transactions, for example, must be readily accessible
to continue business operations.
Project planning data or code for application updates is not as mission critical, even
though such code may represent a substantial investment and should be recovered at
some point.
FCIP and iFCP can provide long distance support for Fibre Channel-originated storage
traffic, whereas iSCSI offers a native IP storage solution to address the distance issue.
A DR solution can support both data replication and tape backup options, using IP
network services to connect the primary site to the DR site.
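Sizing the IP connection between the primary and DR sites usually starts from the
daily change rate; the figures below are illustrative assumptions.

    # Sustained WAN bandwidth needed to replicate one day's changed data
    # within a target window. All numbers are assumed for illustration.

    CHANGED_GB_PER_DAY = 50       # daily change rate at the primary site
    REPLICATION_WINDOW_H = 8      # hours available to ship the changes

    needed_mbps = CHANGED_GB_PER_DAY * 8 * 1024 / (REPLICATION_WINDOW_H * 3600)
    print(f"Sustained WAN bandwidth needed: ~{needed_mbps:.0f}Mbps")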