
Storport in Windows Server 2003: Improving Manageability and Performance in Hardware RAID and Storage Area Networks

Microsoft Corporation
December 2003

Abstract

The Storport driver, new to Microsoft Windows Server 2003, delivers greater performance in hardware RAID and SAN environments than the preexisting SCSIport driver. This white paper begins by explaining the limitations of the original SCSIport driver architecture when used with interconnects for which it was not designed. The paper then details the architectural improvements of the new Storport driver, which was developed to deliver high-throughput, CPU-efficient I/O in high performance environments. This paper will be of interest to OEM hardware and software developers, as well as to customers interested in encouraging their storage vendors to support high performance solutions in a Windows environment.

Contents
Introduction
Windows Storage Drivers
    SCSIport Driver
        Adapter I/O Limit
        Sequential I/O Functioning
        Increased Miniport Load at Elevated IRQLs
        Data Buffer Processing Overhead
        I/O Queue Limitations
        Impact on SCSI Performance
Storport
    Synchronous I/O Functioning
    Effective Miniport Functioning
        Offloaded Build at Low IRQL
        Single Pass Scatter/Gather List Creation
    Flexible Map Buffer Settings
    Queue Management
    Improved Error and Reset Handling
        Hierarchical Resets
        Improved Clustering by Using Hierarchical Resets
        Fibre Channel Link Handling
    Ability to Run Deferred Procedure Calls (DPCs)
    Registry Access
    Fibre Channel Management
    Easy Migration to Storport
Performance Comparisons
    Measuring Storport Performance
        Host-Based RAID Adapter
Summary
Related Resources


Introduction
Storage adapters and storage subsystems are not all created equal. In environments where information transfers between the computer and storage must be maximized for speed, efficiency, and reliability, such as in banking or trading businesses, high performance interconnects that maximize I/O throughput are critical. Such organizations use storage area networks (SANs), usually with Fibre Channel cabling and interconnects (links) in redundant configurations, and storage arrays with hardware-based RAID (redundant array of independent disks) for high availability and high performance.

Having the highest-performing equipment helps to ensure that high performance needs can be met, but it doesn't guarantee it. The functioning of the storage network also depends on the capabilities of the host operating system, specifically the host operating system drivers that interface with the storage hardware to pass I/O requests to and from the storage devices. This is especially important in Fibre Channel SANs, where a complex system of switches and links between servers and storage requires an effective means of detecting link problems and eliciting the appropriate response from the operating system.

In the Microsoft Windows operating system, the SCSIport driver, in conjunction with vendor-written, adapter-specific miniport drivers, was for many years the only driver delivering SCSI commands to storage targets. The SCSIport driver, however, was designed to work optimally with the parallel SCSI interconnects used with direct attached storage. It was neither designed to meet the high performance standards of Fibre Channel SAN configurations, nor to work well with hardware RAID. As a consequence, organizations running mission critical Windows applications on their servers do not realize the maximum performance or manageability of their Fibre Channel SANs or hardware RAID adapters (on both the host and storage arrays) when I/O passes between the host and storage target.

These limitations have been overcome with the development of Storport, the new device driver designed to supplement SCSIport on Windows Server 2003 and beyond. Storport is a new port driver that delivers higher I/O throughput performance, enhanced manageability, and an improved miniport interface. Together, these changes help hardware vendors realize their high performance interconnect goals.


Windows Storage Drivers


Applications send read/write (I/O) requests through the Windows storage stack (see Figure 1). The first layer of the stack, the I/O subsystem or manager, controls execution of all the device driver routines: adding or removing a device, handling I/O requests, start I/O (initiating data transfer to or from a storage device), device interrupts (a diversion to device code in response to an external event unrelated to what a processor is currently working on), and various I/O completion routines.

Figure 1. The I/O Request Path Through the Storage Stack

The I/O manager processes and/or passes the I/O request packet (IRP) on to each lower driver layer, which successively routes the I/O to the:
- Correct device class driver
- Correct I/O port driver (such as SCSI or USB)
- Adapter-specific miniport driver

Class drivers manage a specific class of devices, such as disk or tape, ensuring that I/O requests are sent to the appropriate device type in the correct fashion. I/O requests are then passed on to a protocol-specific port driver. Windows provides port drivers for a number of transport types, including SCSI, IDE, and 1394. The port driver can do one of the following: 1) complete the request without passing it on to lower layers (if no data transfer is necessary); 2) queue the request on behalf of the storage device controller if the hardware is busy; or 3) pass the request on to a hardware-specific miniport driver (written by an adapter vendor), which directly controls access to the hardware. The miniport driver, the lowest layer in the storage stack, is the actual device driver that translates the I/O request into a physical location on the vendor's hardware. Once the I/O request has been carried out in the hardware, several I/O completion routines complete the I/O path.


SCSIport Driver
SCSIport is the Microsoft Windows system-supplied storage port driver, designed to manage SCSI transport on parallel SCSI interconnects. During the StartIo routine, the SCSIport driver translates the I/O request packet into a SCSI request block (SRB) and queues it to the miniport driver, which decodes the SRB and translates it into the specific format required by the storage controller. The start I/O phase of the request, which includes build and start (see Figure 2), takes microseconds; hundreds of SRBs can be queued, ready to be processed, before even a single I/O request is completed. In fact, the longest phase of the I/O request is the data seek time (latency); if the data is not in cache, finding the correct blocks on the physical disk can take several milliseconds. (Note that the diagram shows relative time, not actual time units.)

Once the hardware processes the I/O request (that is, performs the data transfer), the controller generates a hardware interrupt indicating that the I/O has been completed. The interrupt is, in turn, processed by the HwInterrupt miniport routine (indicated as ISR, or Interrupt Service Routine, in the diagrams), which receives the completed requests and begins the whole process again. Data transfers themselves are performed by the hardware (using Direct Memory Access, or DMA) without operating system intervention.

Figure 2. Phases of an I/O Request (not to scale; relative durations shown)

While the SCSIport driver is an effective solution for storage via parallel SCSI, it was not designed for either Fibre Channel or hardware RAID, and when used with these adapters, the full capabilities of the high performance interconnect cannot be realized. The nature of the performance limitations and their causes are detailed in the following sections.
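To make the division of labor concrete, here is a minimal sketch of the miniport side of this exchange. The HwStartIo and HwInterrupt skeletons and the ScsiPortNotification calls follow the DDK miniport interface; the Vendor* hardware accessors are hypothetical placeholders for controller-specific code.

    // Skeleton of a SCSIport miniport's submission and completion routines.
    // VendorProgramController and VendorReadCompletedSrb are placeholders.
    BOOLEAN HwStartIo(PVOID HwDeviceExtension, PSCSI_REQUEST_BLOCK Srb)
    {
        // Decode the SRB and hand the command to the controller hardware.
        VendorProgramController(HwDeviceExtension, Srb);

        // Tell SCSIport it may deliver the next queued request.
        ScsiPortNotification(NextRequest, HwDeviceExtension);
        return TRUE;
    }

    BOOLEAN HwInterrupt(PVOID HwDeviceExtension)
    {
        // The controller raised an interrupt: collect the finished command.
        PSCSI_REQUEST_BLOCK srb = VendorReadCompletedSrb(HwDeviceExtension);
        if (srb == NULL) {
            return FALSE;               // interrupt was not ours
        }
        srb->SrbStatus = SRB_STATUS_SUCCESS;

        // Complete the request back up the stack.
        ScsiPortNotification(RequestComplete, HwDeviceExtension, srb);
        return TRUE;
    }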

Adapter I/O Limit


SCSIport can support a maximum of 254 outstanding I/O requests per adapter. Since each adapter may support multiple SCSI devices, all the devices sharing that adapter must share the maximum of 254 outstanding requests. With a parallel SCSI bus, which is designed to support a maximum of 15 attached devices, this may not be a problem; typically each SCSI physical disk corresponds to a single logical unit. In Fibre Channel environments, however, the number of target devices each adapter is designed to support is much higher. Fibre Channel arbitrated loop configurations can support 126 devices (hosts and disks); switched configurations can theoretically support up to 16 million devices. Even without this level of device complexity, using the SCSIport driver, with its limit of 254 outstanding I/O requests per adapter, can be a significant bottleneck in Fibre Channel environments, because each disk device commonly maps to multiple logical units (potentially thousands). For example, an adapter presenting 508 logical units leaves an average of only two outstanding requests per logical unit.


Sequential I/O Functioning


At any given time, the SCSIport driver supports either the issuing or the completion of I/O requests, but not simultaneous execution of both. In other words, once an I/O request enters the StartIo routine and the SRB is sent to a host bus adapter (HBA), this transmission mode (sometimes rather misleadingly called half duplex) prevents the adapter from processing storage device interrupts until the start I/O processing phase is complete. Conversely, once miniport interrupt processing begins, new I/O request packets (IRPs) are blocked from being issued (although those I/O requests already being processed can be completed). Only after the interrupt has been received and completely processed by the miniport/port driver can new I/O requests be started.

Figure 3 demonstrates how the SCSIport driver handles multiple I/O requests and interrupts. I/O request processing is sequential: the second IRP cannot be started until the first reaches the end of the start I/O routine and is sent off to the hardware. Likewise, interrupt service requests arriving after IRPs have entered the start I/O routine are queued (in the order in which they are received) and must wait until the in-progress IRPs are ready to enter the next stage of processing, such as queuing or handing off to the miniport.

Figure 3. SCSIport: Sequential I/O Functioning

In single processor systems, the SCSIport requirement that the start I/O routine be synchronized with the interrupt service routine (so that only one of these routines can execute at any one time) has negligible impact. In multiprocessor systems, however, the impact is considerable. Although up to 64 processors may be available for I/O processing, SCSIport cannot exploit their parallel processing capabilities. The net result is considerably more I/O processing time than would be required if start I/Os and interrupts could be executed simultaneously rather than sequentially.

Increased Miniport Load at Elevated IRQLs


The SCSIport driver was designed to do the majority of the work necessary to translate each I/O request packet into a corresponding SCSI request block (SRB); the miniport driver does only minimal additional processing during its HwStartIo routine. Its primary responsibility is to acknowledge receipt of the built SRB, construct the list of memory addresses (the scatter/gather list) in a format the hardware can use, and transmit the SRB and scatter/gather list to the hardware. However, when some non-SCSI adapters are used with SCSIport, the SRB may require additional translation to non-SCSI protocols. These additional steps must be performed by the miniport driver during its HwStartIo routine.

The HwStartIo routine always executes with the processor's interrupt request level (IRQL) raised to the same priority level as the interrupt request of the device. Because all interrupts at the same or lower priority are masked, to enable a higher priority process to complete without interruption, the elevated IRQL means that hardware interrupts accumulate rather than being processed. With parallel SCSI adapters, this has minimal impact, since there is very little additional work for the miniport driver to do. However, when Fibre Channel or hardware RAID adapters are used, the workload on the miniport driver is much heavier; as a consequence, considerably more time is spent at the elevated IRQL. The net result of large numbers of accumulated interrupts is degraded system performance.

Data Buffer Processing Overhead


To correctly process an I/O request, the miniport driver must pass the physical addresses that correspond to the IRP's data buffer (the scatter/gather list) to the adapter. The architecture of the SCSIport driver requires that the miniport driver repeatedly call the port driver to obtain this information one element at a time, rather than obtaining the complete list all at once. This repeated calling is CPU- and time-intensive, and becomes inefficient with large data transfers or when memory becomes fragmented.
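As a rough sketch of the per-element pattern this describes, the loop below walks the data buffer and calls the port driver once per physically contiguous run. ScsiPortGetPhysicalAddress is the documented DDK routine; the VENDOR_SG_ENTRY table format is a hypothetical stand-in for an adapter's descriptor layout.

    // Hypothetical SCSIport-era scatter/gather build: one port-driver call
    // per element, repeated for the whole transfer.
    typedef struct _VENDOR_SG_ENTRY {
        ULONG PhysAddrLow;
        ULONG PhysAddrHigh;
        ULONG ByteCount;
    } VENDOR_SG_ENTRY;

    ULONG BuildSgTable(PVOID HwDeviceExtension, PSCSI_REQUEST_BLOCK Srb,
                       VENDOR_SG_ENTRY *Table, ULONG MaxEntries)
    {
        PUCHAR va = (PUCHAR)Srb->DataBuffer;
        ULONG remaining = Srb->DataTransferLength;
        ULONG count = 0;

        while (remaining != 0 && count < MaxEntries) {
            ULONG length = remaining;

            // One call per element; each typically yields a single
            // physically contiguous run (often one 4 KB page).
            SCSI_PHYSICAL_ADDRESS pa = ScsiPortGetPhysicalAddress(
                HwDeviceExtension, Srb, va, &length);

            if (length > remaining) {
                length = remaining;
            }
            Table[count].PhysAddrLow  = pa.LowPart;
            Table[count].PhysAddrHigh = pa.HighPart;
            Table[count].ByteCount    = length;
            count++;

            va += length;
            remaining -= length;
        }
        return count;   // number of elements built
    }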

I/O Queue Limitations


SCSIport maintains two types of I/O request queues: a device queue for each SCSI device on a controller, and an adapter queue for each SCSI controller. SCSIport does not provide an explicit method for miniport drivers to control the way items are queued to their devices. This is problematic for complex Fibre Channel configurations, which must be able to pause and resume queues correctly in the event that a link connection goes down.

Device queue. In a multiprocessor system, a mechanism is necessary to synchronize access to each storage device so that requests from different processors (such as writing to a file or updating a database) are prevented from accessing the adapter simultaneously. One mechanism for doing this is a spinlock. Only the processor in possession of the spinlock (a synchronization primitive) can make changes to the hardware; the other processors in the device queue are held in a wait mode, spinning until the next processor in the queue can acquire the spinlock and proceed with its task.

Adapter queue. The I/O requests passed down through the driver layers encounter the spinlock just above the miniport layer. From there, the requests are passed to each adapter, as shown in Figure 4.


Figure 4. Successively More Restrictive Queuing in the SCSIport Driver Model

The drawbacks to the SCSIport queuing process are several. First, each I/O request must queue for access to a spinlock not just once, but twice. Second, the adapter queue restricts I/O throughput to a maximum of 254 requests per adapter, on a first-in, first-out (FIFO) basis. For high performance adapters, which can process thousands of requests at a time, this can be a serious performance limitation. And third, SCSIport does not provide any means by which to manage device queues to improve performance under conditions of high load, or to temporarily suspend I/O processing without accumulating errors. A consequence of this is that a busy device can monopolize the adapter queue while other devices might be able to respond without delay.

Impact on SCSI Performance


Figure 5 presents a baseline of SCSIport performance. Note that the time units presented in this figure are for illustrative purposes only and are not intended to indicate actual times. Since I/O functioning is sequential in nature, I/O requests in process must complete without interruption. Interrupt service requests are queued and not processed until a suitable break in I/O processing. The benefits of multiprocessing are not realized.


Figure 5. SCSIport I/O Performance

This baseline provides the point of comparison with Storport functioning and performance, as discussed in the Storport section.

Storport
Given the inherent limitations of using SCSIport with high performance adapters for which it was not designed, Microsoft has developed a new port driver, Storport. Storport is architected as a replacement for SCSIport, designed to realize the high performance capabilities of hardware RAID and Fibre Channel adapters.

It is possible for hardware vendors to write their own class, filter, or even port drivers to bypass SCSIport. Unlike Storport, however, such drivers may perform unreliably on the Windows platform because they are designed without in-depth knowledge of the operating system.

While many of the routines in Storport are similar to SCSIport (which helps smooth the transition from SCSIport to Storport), there are a number of critical differences. These differences, discussed in the remainder of this section, provide the advanced functionality that enables vendor miniport drivers and adapter hardware to function more effectively.


Synchronous I/O Functioning


Unlike SCSIport, which cannot process an interrupt while it is issuing an I/O (or start a new I/O while the device is interrupting), Storport introduces a new synchronization model that allows decoupling of the interrupt from the StartIo routine. This means that Storport can send and receive I/O requests simultaneously; I/O requests can be started at the same time that requests are being completed. Multiprocessor systems can make use of this synchronous (full duplex) I/O functioning, thereby accelerating data transfer, as is shown in Figure 6.

Figure 6. Storport I/O Processing with Full Duplex Mode

In this example, Storport is able to make use of a multiprocessor system by overlapping the start I/O initiation phase with the completion phase, cutting I/O processing time and enabling more requests to be processed in the same amount of time (compare to SCSIport performance in Figure 5). As more processors are added, more I/O requests can be handled in parallel, further improving performance. This higher performance capability, however, requires that vendors code their miniports (and perhaps even modify their firmware and hardware) to take advantage of the synchronous I/O functioning, and not all miniport drivers can effectively decouple this processing. (Even so, miniports can do more work without synchronization by using full duplex mode and calling StorPortSynchronizeAccess only as needed.)
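A minimal sketch of how a full duplex miniport confines synchronization to the narrowest possible window: the bulk of HwStartIo runs unsynchronized, and only the brief touch of state shared with the ISR goes through StorPortSynchronizeAccess. The device extension layout and the RingDoorbell logic are illustrative assumptions.

    // Only the doorbell write, which races with the ISR, is synchronized.
    typedef struct _DEVICE_EXTENSION {
        ULONG SharedDoorbell;           // illustrative shared state
    } DEVICE_EXTENSION, *PDEVICE_EXTENSION;

    static BOOLEAN RingDoorbell(PVOID HwDeviceExtension, PVOID Context)
    {
        // Runs synchronized with HwInterrupt; keep it as short as possible.
        PDEVICE_EXTENSION ext = (PDEVICE_EXTENSION)HwDeviceExtension;
        ext->SharedDoorbell = *(PULONG)Context;
        return TRUE;
    }

    BOOLEAN HwStartIo(PVOID HwDeviceExtension, PSCSI_REQUEST_BLOCK Srb)
    {
        ULONG slot = 0;     // placeholder: pick a command slot and build the
                            // descriptor here, all without synchronization

        // Funnel only the contended write through the synchronized callback.
        StorPortSynchronizeAccess(HwDeviceExtension, RingDoorbell, &slot);
        return TRUE;
    }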


Effective Miniport Functioning


Storport enables more effective miniport functioning in two principal ways. One is to circumvent the problem of delayed interrupt processing during the miniport's handling of the I/O; the other is to eliminate the need for stepwise building of scatter/gather lists.

Offloaded Build at Low IRQL


Prior to sending a command to the hardware, a miniport driver used with SCSIport must perform all build work during its StartIo routine. This means not only that start I/O and interrupts must be synchronized, but also that all processing occurs at an elevated IRQL. The net result, when additional build work is required, can be dramatically slower I/O processing. In Storport, a new routine called HwBuildIo has been added to handle much of this preparatory work before the command is sent to the hardware. The Storport HwBuildIo routine allows the miniport to do this build work at a lower IRQL (DISPATCH level) than with SCSIport, and to do it without the need for synchronization. Because this routine runs before the StartIo routine, and at a lower priority level, interrupts remain enabled, allowing requests to be sent to the controller even as the controller processes other requests. The net result is faster I/O processing. Again, where necessary, synchronization can be forced by calling the StorPortSynchronizeAccess routine.
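A minimal sketch of how a miniport opts into the new routine: the HwBuildIo member of HW_INITIALIZATION_DATA is set in DriverEntry alongside the familiar entry points. Field and routine names follow the Storport DDK headers; the routine bodies are placeholders.

    // Registering HwBuildIo so per-request preparation runs at DISPATCH level.
    ULONG DriverEntry(PVOID DriverObject, PVOID RegistryPath)
    {
        HW_INITIALIZATION_DATA hwInitData;

        RtlZeroMemory(&hwInitData, sizeof(hwInitData));
        hwInitData.HwInitializationDataSize = sizeof(hwInitData);
        hwInitData.HwBuildIo   = HwBuildIo;    // unsynchronized build phase
        hwInitData.HwStartIo   = HwStartIo;    // synchronized submission
        hwInitData.HwInterrupt = HwInterrupt;  // completion path
        // ... HwFindAdapter, HwInitialize, extension sizes, and so on ...

        return StorPortInitialize(DriverObject, RegistryPath,
                                  &hwInitData, NULL);
    }

    BOOLEAN HwBuildIo(PVOID HwDeviceExtension, PSCSI_REQUEST_BLOCK Srb)
    {
        // Expensive per-request preparation goes here: descriptor setup,
        // scatter/gather retrieval, any protocol translation.
        return TRUE;    // TRUE = pass the request on to HwStartIo
    }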

Single Pass Scatter/Gather List Creation


One of the operations typically handled in the HwBuildIo routine is building the scatter/gather lists that identify the memory ranges and physical addresses where each portion of a data buffer resides in memory. Rather than requiring the miniport to call the port driver for each individual element (typically each element is a single memory page: 4 KB on a 32-bit system), Storport can pass the entire scatter/gather list to the miniport in one call. In comparison with SCSIport processing (Figure 5), this Storport single pass also cuts the time to process I/O. Figure 7 shows the joint effects of the Storport offloaded build, done at a lower IRQL, and the single pass scatter/gather list creation.
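Expanding the HwBuildIo placeholder from the previous sketch, the single-call pattern looks roughly as follows. StorPortGetScatterGatherList and the STOR_SCATTER_GATHER_LIST layout come from the Storport DDK headers; the descriptor copy is left as a comment.

    // The whole scatter/gather list arrives in one call.
    BOOLEAN HwBuildIo(PVOID HwDeviceExtension, PSCSI_REQUEST_BLOCK Srb)
    {
        PSTOR_SCATTER_GATHER_LIST sgList;
        ULONG i;

        sgList = StorPortGetScatterGatherList(HwDeviceExtension, Srb);
        if (sgList == NULL) {
            return TRUE;    // no data transfer for this request
        }

        for (i = 0; i < sgList->NumberOfElements; i++) {
            STOR_PHYSICAL_ADDRESS pa = sgList->List[i].PhysicalAddress;
            ULONG length = sgList->List[i].Length;
            // Copy (pa, length) into the controller's descriptor format.
        }
        return TRUE;
    }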


Figure 7. Storport I/O Processing with the New HwBuildIo Routine

The effect of the HwBuildIo routine in combination with Storport full duplex processing is shown in Figure 8. Note that the start I/O and the build, running on different processors, can overlap in time, and can also overlap ISRs. Compared with the original SCSIport design, I/O processing is considerably more effective.

Figure 8. Storport I/O Processing with Bidirectional Transport and HwBuildIo


Flexible Map Buffer Settings


Application I/O requests are processed in buffer memory. A finite amount of memory is presented to the application, and although it appears to the application as contiguous memory, its physical location is generally fragmented across multiple locations in memory. This buffer memory, available to programs running in the user mode of the operating system, is identified by its virtual address. When the application needs to transfer data between a physical disk device and the buffer, the hardware controller executing the transfer must know the actual physical addresses where the data is stored (the scatter/gather list previously described). This process is effective as long as only the hardware needs to know the physical location of the data and only the application needs to know the virtual address.

In rare cases, the miniport driver itself (not just the hardware) needs to access data in the buffer before or after a transfer. This might happen when the miniport needs to translate discovery information, such as the list of logical unit numbers (LUNs) returned by a target device or the INQUIRY data from a logical unit. The driver cannot use the user virtual address of the I/O buffer, since that address is not available in the kernel mode in which all drivers run. Instead, the port driver must obtain the information by first allocating and then mapping system (kernel) addresses for the I/O buffers. Doing this for all buffers is enormously costly, because it consumes scarce kernel address space and has a heavy impact on performance.

Storport gives the vendor the flexibility to select the setting that maximizes the performance of the storage miniport driver. A Storport miniport can map all data buffers, no data buffers, or only those buffers that do not carry actual application data (such as discovery information). SCSIport does not allow this selective mapping; it is all or none.
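In the DriverEntry sketch shown earlier, this policy is selected with one field of HW_INITIALIZATION_DATA. The constant names below follow the Storport DDK headers; verify them against the headers in use, as they are cited here from memory.

    // Map only non-data buffers (for example, INQUIRY and REPORT LUNS
    // discovery data) into kernel virtual address space.
    hwInitData.MapBuffers = STOR_MAP_NON_READ_WRITE_BUFFERS;

    // Alternatives:
    //   STOR_MAP_NO_BUFFERS             - map nothing (fastest path)
    //   STOR_MAP_ALL_BUFFERS            - SCSIport-style, map everything
    //   STOR_MAP_NON_READ_WRITE_BUFFERS - map only non-read/write requests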

Queue Management
Unlike SCSIport, which can queue a maximum of 254 outstanding I/O requests to an adapter supporting multiple storage devices, Storport does not limit the number of outstanding I/O requests that can be sent to an adapter. Instead, each logical storage unit (such as a virtual disk on a RAID array or a physical disk drive) can accept up to 254 outstanding requests; the number of requests an adapter can handle is the number of logical units × 254. Since large storage arrays with hundreds of disks can have thousands of logical units, removing the queuing limit from the adapter results in an enormous improvement in I/O throughput. This is especially important for organizations with high transaction processing needs and large numbers of physical and virtual disks.

Storport also enables the miniport driver to implement basic queue management functions. These include pause/resume device, pause/resume adapter, busy/ready device, busy/ready adapter, and the ability to control queue depth (the number of outstanding commands) on a per-LUN basis, all of which help ensure balanced throughput rather than overloaded I/O. Key scenarios that can take advantage of these capabilities include limited adapter resources, limited per-LUN resources, or a busy LUN that prevents non-busy LUNs from receiving commands. Certain intermittent storage conditions, such as link disruptions or storage device upgrades, can be handled much more effectively when these controls are properly used in Storport miniports, as sketched below. (By contrast, a SCSI miniport cannot effectively control this queuing at all. A device may indicate a busy status, in which case commands are automatically retried for a fixed amount of time, but no adapter-level controls are available whatsoever.)
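A minimal sketch of two of these controls in use, with routine names from the Storport DDK; the trigger conditions and the queue depth value are illustrative assumptions.

    // React to a LUN that is saturating the adapter or temporarily offline.
    VOID ThrottleBusyLun(PVOID HwDeviceExtension,
                         UCHAR PathId, UCHAR TargetId, UCHAR Lun)
    {
        // Cap outstanding commands for this LUN so it cannot starve others.
        StorPortSetDeviceQueueDepth(HwDeviceExtension, PathId, TargetId,
                                    Lun, 32);

        // Hold new commands to this LUN for up to 10 seconds (for example,
        // during a device firmware upgrade); queued I/O is not failed.
        StorPortPauseDevice(HwDeviceExtension, PathId, TargetId, Lun, 10);
    }

    VOID LunRecovered(PVOID HwDeviceExtension,
                      UCHAR PathId, UCHAR TargetId, UCHAR Lun)
    {
        // Resume normal delivery of queued commands to the LUN.
        StorPortResumeDevice(HwDeviceExtension, PathId, TargetId, Lun);
    }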

Improved Error and Reset Handling


Errors during data transmission can be either permanent (so-called hard errors, such as those caused by broken interconnects) or transient (soft errors, including recoverable data errors, device Unit Attention conditions, or fabric events such as a state change notification caused by, for example, storage entering or leaving the fabric). Hard errors must be detected and the physically damaged equipment replaced. Soft errors are handled by error checking and correction, or by simply retrying the command.

Hierarchical Resets
When SCSIport detects certain interconnect or device errors or conditions, it responds with a SCSI bus reset. On parallel SCSI, there is an actual reset line; on serial interconnects and RAID adapters, there is no bus reset, so it must be emulated as best as possible. Whichever way the bus reset is done, the code path always disrupts I/O to all devices and LUNs connected to the adapter, even if the problem is related to only a single device. Such disruption requires reissuing in-progress commands for all LUNs.

In contrast, Storport can instruct the HBA to reset only the afflicted LUN; no other device on that bus is impacted. If the LUN reset does not accomplish the recovery action, Storport attempts to reset the target device; and if that doesn't work, it emulates a bus reset. (In practice, the bus reset should not be seen except when Storport is used with parallel devices.) This hierarchical reset capability enables configurations that were not possible (or were unreliable) in the past with SCSIport.
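A minimal sketch of how the hierarchy reaches a miniport: each level arrives as a distinct SRB function code (the SRB_FUNCTION_RESET_* codes are from the DDK), and the miniport maps each onto whatever recovery primitive its hardware offers. The Vendor* helpers are hypothetical placeholders.

    // Handling the reset hierarchy in the miniport.
    VOID HandleResetSrb(PVOID HwDeviceExtension, PSCSI_REQUEST_BLOCK Srb)
    {
        switch (Srb->Function) {
        case SRB_FUNCTION_RESET_LOGICAL_UNIT:
            // Narrowest scope: only this LUN's outstanding I/O is disturbed.
            VendorResetLun(HwDeviceExtension,
                           Srb->PathId, Srb->TargetId, Srb->Lun);
            break;
        case SRB_FUNCTION_RESET_DEVICE:
            // Next level: reset the whole target device.
            VendorResetTarget(HwDeviceExtension, Srb->PathId, Srb->TargetId);
            break;
        case SRB_FUNCTION_RESET_BUS:
            // Last resort; emulated on serial interconnects and RAID adapters.
            VendorResetBus(HwDeviceExtension, Srb->PathId);
            break;
        }
    }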

Improved Clustering by Using Hierarchical Resets


In environments with large storage arrays, or in clustering environments designed to keep applications highly available, a bus reset compromises the goal of high data availability. In the case of clustering, without a multipathing solution, none of the servers is able to access the shared storage while the bus is unavailable or the devices are being reset. Since clustered servers must use a reservation system to gain sole access to the shared storage, a bus reset results in the loss of all reservations. This is a costly loss, since recovery of clustered systems takes considerable system resources.

In other cases, the reset is actually issued by the cluster disk driver to break the reservation on devices that must be moved to a standby server. The same error recovery mechanism described earlier is used to clear the reservations on individual LUNs, as needed, during the failover process. (An interesting side effect of the bus reset mechanism used by SCSIport is that any servers using shared connections to storage, as you would expect on a SAN, can have their I/O operations cancelled. This, in turn, leads to further bus resets as the non-clustered servers have to clear and retry their missing I/Os.)

A further advantage of the hierarchical reset model is that cluster configurations that were not supported in the past can now work reliably. These include booting from the SAN with the same adapter used for the cluster interconnect, as well as supporting tape devices on the shared interconnects. With SCSIport, these configurations require additional HBAs.


Fibre Channel Link Handling


Critical to managing Fibre Channel SANs in both single-path and multipath configurations is ensuring that interconnects and links are monitored for problems. Errors that cannot be resolved by the hardware must be passed up the stack for resolution by the port driver. While it is possible to design the miniport driver to resolve such errors, many miniport solutions do not function predictably in a multipathing environment, and there are some cases (such as when attempting to retrieve paging information from a SAN) in which the miniport driver may not resolve interconnect errors correctly even in a single-path environment.

The Storport driver contains two new link status notifications, LinkDown and LinkUp. If a link goes down, the miniport driver notifies the port driver and optionally identifies the outstanding I/O requests. The port driver pauses the adapter for a period of time; if the link comes back up, the port driver retries the outstanding I/O requests before sending any new ones. If the link does not come up within the specified period of time, the port driver must fail the I/O requests. In a multipath environment, the multipath driver then attempts to issue the failed commands on another available path.
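A minimal sketch of the notification side (assuming the LinkUp and LinkDown notification types exposed through StorPortNotification in the Storport headers): the miniport simply reports the state change, and Storport pauses the adapter and manages retry or failure of outstanding I/O.

    // Report Fibre Channel link state changes to the port driver.
    VOID ReportLinkState(PVOID HwDeviceExtension, BOOLEAN LinkIsUp)
    {
        if (LinkIsUp) {
            // Storport retries outstanding I/O before sending new requests.
            StorPortNotification(LinkUp, HwDeviceExtension);
        } else {
            // Storport pauses the adapter; if the link stays down past the
            // timeout, outstanding requests are failed so a multipath driver
            // can reissue them on another path.
            StorPortNotification(LinkDown, HwDeviceExtension);
        }
    }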

Ability to Run Deferred Procedure Calls (DPCs)


It is often desirable to perform extended processing after external events, particularly in the Fibre Channel environment; an example is device rediscovery after receipt of a registered state change notification (RSCN). The SCSIport model prescribes minimal processing once an interrupt has been received from the hardware, yet this extended processing is unavoidable, so a more efficient way to perform it at a lower priority level is necessary. Storport allows miniport drivers to use DPCs to accomplish this goal.
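A minimal sketch (routine names from the Storport DDK; RediscoverDevices and the RSCN detection are placeholders): the ISR does only enough work to acknowledge the event, then queues a DPC for the lengthy rediscovery.

    typedef struct _DEVICE_EXTENSION {
        STOR_DPC RescanDpc;     // initialized once, for example in
                                // HwInitialize, with StorPortInitializeDpc
    } DEVICE_EXTENSION, *PDEVICE_EXTENSION;

    VOID RescanDpcRoutine(PSTOR_DPC Dpc, PVOID HwDeviceExtension,
                          PVOID SystemArgument1, PVOID SystemArgument2)
    {
        // Runs at DISPATCH level rather than device IRQL: a safe place
        // for lengthy work such as relogin and LUN rediscovery.
        RediscoverDevices(HwDeviceExtension);
    }

    BOOLEAN HwInterrupt(PVOID HwDeviceExtension)
    {
        PDEVICE_EXTENSION ext = (PDEVICE_EXTENSION)HwDeviceExtension;
        BOOLEAN rscnReceived;

        // ... read and acknowledge the controller's event status ...
        rscnReceived = TRUE;    // placeholder for real event decoding

        if (rscnReceived) {
            StorPortIssueDpc(HwDeviceExtension, &ext->RescanDpc, NULL, NULL);
        }
        return TRUE;
    }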

Registry Access
An important part of the Windows operating system design is the use of the registry, which serves as a configuration database. SCSIport does not allow free access to the registry from a miniport driver: a single string can be passed to the miniport driver, which must then parse that string to extract adapter-specific parameters. Furthermore, SCSIport cannot guarantee that multiple adapters using the same miniport will be able to use different sets of parameters, and the total length of the parameter string is limited to 255 characters.

The Storport model allows registry access from the miniport in a much less restricted fashion. One routine can be used to query for specific parameters in any location in the system hive of the registry; writing back to the registry is also supported. This solves the problem of adapter-specific parameters, such as persistent binding information or queue depth limits.
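A minimal sketch of a per-adapter parameter read. StorPortRegistryRead and the MINIPORT_REG_DWORD type constant are from the Storport DDK (verify against the headers in use); the "QueueDepth" value name, the default, and the Global flag semantics are illustrative assumptions.

    // Read an adapter-specific DWORD parameter from the registry.
    ULONG ReadQueueDepthParameter(PVOID HwDeviceExtension)
    {
        ULONG depth = 254;              // default if the value is absent
        ULONG length = sizeof(depth);

        // A non-global read targets this adapter's own subkey, so two
        // adapters bound to the same miniport can carry different settings,
        // which SCSIport's single parameter string cannot express.
        if (!StorPortRegistryRead(HwDeviceExtension, (PUCHAR)"QueueDepth",
                                  0, MINIPORT_REG_DWORD,
                                  (PUCHAR)&depth, &length)) {
            depth = 254;                // fall back to the default
        }
        return depth;
    }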


Fibre Channel Management


The SNIA HBA Application Programming Interface (API) is an important industry-led effort to enable management of Fibre Channel adapters and switched fabrics. Although the HBA API is fully supported for Fibre Channel adapters using SCSIport, the Microsoft implementation of the HBA API is a required component of any Storport implementation written for Fibre Channel adapters. This ensures the greatest level of compatibility with Microsoft management initiatives and support tools.

The Microsoft implementation eliminates vendor-supplied libraries and the complex registration process. Because it is based on a WMI infrastructure in the miniports, this interface can also be used directly by tools and command line utilities. Important enhancements over the more common implementations include full support for true asynchronous eventing and remoteability (the ability to run utilities from a different host system or management console). Another important part of the WMI infrastructure is the ability to directly set security on individual operations: a user can be given monitoring rights but not the ability to set adapter or switch parameters (also known as role-based administration).

Easy Migration to Storport


Storport has been designed with a miniport interface similar to SCSIport's, making the transition from SCSIport to Storport straightforward. The details of how to port a miniport from SCSIport to Storport are presented in the Microsoft Driver Development Kit (DDK). Note that Storport supports Microsoft Windows Plug and Play compliant drivers; legacy SCSIport miniports need more adaptation to work under the Storport model.

Performance Comparisons
The performance of the port driver varies not only with the capabilities of the miniport driver, but also with the system and RAID configuration, the storage adapter cache settings, the I/O queue depth, and the type of I/O operation. The rest of this section provides a brief review of how these various factors affect performance.

System configuration. Adding more physical RAM helps ensure that the server accesses data in RAM cache rather than from disk. With host-based data caching, I/O requests to the file are intercepted by the caching system. If the request is an unbuffered write, data is sent directly to the disk device (without any increase in speed); if a request is a read and the data is in memory cache, the response is very fast (no disk I/O is necessary).

RAID configuration. I/O processing performance depends both on the type of redundancy that is used and on the number of physical disks across which the I/O load is spread. (The greater the number of disks, the better the performance, since multiple disks can be accessed simultaneously.) Note that RAID-10 gives the fastest I/O performance while still providing redundancy.

Controller cache settings. I/O performance is strongly affected by whether or not the storage device can cache data, since caching gives better performance. Adding faster or more I/O controllers also improves performance. In the case of HBA RAID adapters, caching on the adapter improves performance as well.


I/O queue depth. Past a certain threshold, the more I/O requests there are in queue for each device, the slower the performance. Below that threshold, performance may actually increase, as the storage device can effectively reorder operations for greatest efficiency. A subjective measure of device stress is I/O load. According to StorageReview, a light load is 4-16 I/Os, moderate is 16-64, and high is 64-256. Consult product documentation for the optimal queue depth for specific storage devices.

File type and use. Files vary in their size and the extent to which they are used (as much as 95% of all I/O activity occurs in fewer than 5% of all files), both of which impact performance.

Type of I/O operation. There are four types of I/O operation: random writes (RW), sequential writes (SW), random reads (RR), and sequential reads (SR). I/O read requests can be processed very rapidly with host-based data caching. Read and write performance can be improved by caching on the storage controller. (Write requests can be written to fast cache memory before being permanently written to disk.) In many cases, caches can be tuned to perform better for the workload (read versus write, random versus sequential).

Disk fragmentation. Just as with a single disk, files stored on RAID arrays can become fragmented, resulting in longer seek times during I/O operations.

Measuring Storport Performance


Benchmarking is one objective way to measure I/O performance. While standardized tests allow direct comparison of one configuration with another, it is also important to test workloads that represent the true data transfer patterns of the applications being considered. For example, if the intended application is Microsoft Exchange, which issues 4 KB I/Os, measuring 64 KB sequential I/O performance is not meaningful. The performance results presented in the following section are indicative of the changes that are possible when a miniport driver has been properly written to take advantage of Storport capabilities. In all cases, the adapters used also had SCSIport miniport drivers available, so legitimate comparisons can be made.

Host-Based RAID Adapter


Using a RAID adapter on the host, random writes improve by 10-15% and sequential writes by 20-30% with Storport, although in both cases the effect becomes less dramatic as the size of the transfer increases. Random reads see a 10-15% improvement and sequential reads a 20-40% improvement, with the effect again lessening as the total size of the data transferred grows larger. (Typical Windows I/O transfer sizes include 512 bytes (for example, file system metadata), 4 KB (Exchange), 8 KB (SQL Server), and 64 KB (file system data).)

Intel's Iometer program, a common benchmark tool, was used for this case study to assess Storport performance. The tool measures the average number of I/Os per second, MB/s (megabytes per second, or equivalently, total I/Os per second × transfer size), and CPU effectiveness (I/Os per percent of CPU used) for different I/O request types and for differing amounts of data.


Figure 9 summarizes overall system efficiency as measured by I/O per second over percent of CPU.
[Chart: system efficiency, measured in I/Os per second per percent of CPU (0 to 80,000), for random and sequential reads and writes at transfer sizes from 512 bytes to 256 KB.]

Figure 9. Storport I/O Throughput Efficiency

Storport (triangles) is about 30-50% more efficient than SCSIport (diamonds), passing more I/Os per second through the stack while using less CPU to do so.

Summary
Storport is the new Microsoft port driver recommended for use with hardware RAID storage arrays and high performance Fibre Channel interconnects. Storport overcomes the limitations of the legacy SCSIport design while preserving enough of the SCSIport framework that porting to Storport is straightforward for most developers. Storport enables bidirectional (full duplex) transport of I/O requests, more effective interactions with vendor miniport drivers, and improved management capabilities. Storport should be the port driver of choice when deploying SAN or hardware RAID storage arrays in a Windows Server 2003 environment.

Related Resources
For more information on Windows drivers, see the Microsoft Developer Network (MSDN) website at http://msdn.microsoft.com/.

For more information regarding the Driver Development Kit, see Microsoft Windows Driver Development Kits on the Microsoft Hardware and Driver Central website (http://go.microsoft.com/fwlink/?LinkId=19866).

To locate appropriate support contacts, see WHQL Support Contacts on the Microsoft Hardware and Driver Central website (http://go.microsoft.com/fwlink/?LinkId=22256).


Windows Server System is the comprehensive, integrated server software that simplifies the development, deployment, and operation of agile business solutions. www.microsoft.com/windowsserversystem

The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.

This white paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in, or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

© 2003 Microsoft Corporation. All rights reserved. Microsoft and Windows Server 2003 are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

