
Hyper-V TOI

Version 1.0 10 June 2008 DRAFT Kenny Speer

1 Overview
This TOI outlines the features of Microsoft Hyper-V technology and how it relates to NetApp
storage. Hyper-V is a virtualization technology consisting of a hypervisor that resides between the
physical hardware and the operating system kernel. Hyper-V uses a Parent partition, similar to
Dom0 in Xen, which hosts the Guest partitions. The Parent partition is created automatically when
the Hyper-V server role is installed.

Hyper-V also utilizes a new bus called the VMBUS, an in-memory kernel bus. This allows IO to flow
from the Guest through the Parent drivers as fast as the memory bus in the physical machine.
This is a dramatic improvement over previous technologies, where the VMBUS is not used and IO
must traverse user mode and then context switch to kernel mode to reach the Parent driver stack.
Operating systems that use the VMBUS have enlightenments, which Microsoft defines as
modifications to the original Guest OS that let it take advantage of hardware acceleration.
The following lists show the currently supported enlightened and un-enlightened operating systems.

Enlightened
o Windows Server 2008 x86 and x64
o Windows Vista SP1 x86 and x64
o SUSE Linux Enterprise Server 10 SP1 x86 and x64
Un-Enlightened
o Other Linux
o Solaris, SCO
o Windows Server 2003, XP, 2000

Un-enlightened operating systems use an emulated hypervisor layer that passes IO through the
parent worker process hosting the Guest. This requires a context switch to kernel mode to actually
send the IO.

When the VMBUS is used, the IO path consists of a VSP (Virtualization Service Provider) in the
Parent and a VSC (Virtualization Service Client) in the Guest. The VSP and VSC interface directly
with the VMBUS. Each VSP handles a specific type of IO or IRP, such as storage, network, or USB.

2 Related Documents
All related documents can be found in the Windows SAN Interop SharePoint site.

PRD - Hyper-V Product Requirements Document
Tech Talk presentation by Niraj Jaiswal
Microsoft TechNet on Hyper-V
John Howard's blog (Sr. Program Manager, Windows Virtualization)
Hyper-V Supported Guests
WHU 5.0

3 Contents

1 Overview
2 Related Documents
3 Contents
4 LUN Configurations
   4.1 Boot LUN Configurations
   4.2 Data LUNs
   4.3 Failover Clustering
   4.4 LUN Types
5 Deployment
   5.1 Packaging
   5.2 Installation
6 Feature Overview
   6.1 Architecture
   6.2 Failover Clustering
   6.3 Windows Host Utilities 5.0
7 User Interfaces
   7.1 Command Line Interface
   7.2 Graphical Interface
8 Programmatic Interfaces
   8.1 API
   8.2 Wire protocols
9 RAS
   9.1 Reliability
   9.2 Availability
   9.3 Supportability
   9.4 Error reporting
10 Performance
11 Limitations
12 Revision History
13 Approvals

4 LUN Configurations
The following sections list the recommended best practices and caveats for using Hyper-V in a
NetApp environment.

4.1 Boot LUN Configurations


NOTE: Microsoft does not support mapping the boot disk as a SCSI device. The reason is that the
SCSI bus controller is a synthetic device as opposed to an emulated device; it is not available in
the VM BIOS and only becomes available after the OS boots, so only IDE disks can be used to
boot. Microsoft has added enhancements to the IDE controller to make performance a non-issue;
see the Windows virtualization PM blog (listed in Related Documents) for details.
Configuration Files Local (NON-HA) - The following configurations prevent clustering of the
virtual machine because the configuration files are not accessible to all nodes.
o RAW FCP/iSCSI disk (whole LUN mapped to the VM as IDE)
o VHD created on an FCP/iSCSI LUN formatted as NTFS
Configuration Files Remote (HA) - These configurations allow Failover Clustering of the virtual
machines for high availability and load balancing.
o All VM configuration files (for all hosted Guests) on a single FCP/iSCSI LUN (this prevents
load balancing, and all virtual machines will fail over to another node if any one of them fails)
o Each VM's configuration files reside on a formatted FCP/iSCSI LUN and its VHD resides on
the same LUN (provides individual VM failover/migration for high availability and load
balancing)
NOT SUPPORTED - Mixed CIFS and SAN (HA) - Provides for Failover Clustering of virtual
machines by utilizing the network and a common CIFS share for configuration files
o All VM configuration files reside on the NetApp target in a shared volume. This volume is
shared via CIFS and all cluster nodes have access to it. Standard file sharing protocols
(oplocks) are used to handle contention.
o Each VM resides on its own iSCSI or FCP disk using either VHD or a raw LUN mapped via IDE
o Each VM using VHD and its associated configuration files reside on the NetApp target and
are exported using only CIFS. No SAN connectivity.
NOT SUPPORTED
o iSCSI boot directly from the Guest (the host does not see the LUN at all)
o CIFS is not currently supported for either configuration files or VHDs

NOTE: NetApp recommends using a single LUN per virtual machine, with the Guest's VHD and its
configuration files stored on the same physical LUN. This allows automatic expansion of the LUN if
the VHD is set to auto-grow (it should be) and allows consistent snapshots of both the
configuration files and the VHD simultaneously.
NOTE-1: An alternative configuration exists for customers running many virtual machines. Using a
single LUN per virtual machine becomes unmanageable for customers running 10 to 100 or more
virtual machines. In that case, the user may place multiple virtual machines, along with their
associated configuration files, on a single LUN. The drawback of this configuration is that when
the virtual machines are used in an HA environment (Failover Cluster), all virtual machines on the
LUN will migrate together if one of them fails.

4.2 Data LUNs


Regardless of the boot configuration, NetApp supports the following access to Data LUNs:

FCP/iSCSI from Parent mapped as SCSI or IDE Disk to Guest


iSCSI directly from the Guest, using MPIO in the guest with any of the following:
o MSISCDSM (Microsoft iSCSI DSM) - Windows 2003 Guest
o MSDSM - Windows 2008 Guest
o Data ONTAP DSM - Windows 2003 and 2008 Guests
MPIO is not supported in client operating systems (Vista or XP)

4.3 Failover Clustering


Hyper-V was developed in conjunction with Microsoft Failover Clustering. This high level of
integration allows for seamless interoperability and simple management of clustered virtual
machines. Failover Clustering adds a new resource type, Virtual Machine, and performs the
following operations when a virtual machine is made highly available:
1. Adds a new resource group using the name of the virtual machine
2. Creates a resource for the physical disk being used to store the virtual machine
3. Creates a resource for the virtual machine configuration file
4. Creates the proper dependencies between the three resources so that move and failover
operations behave correctly
A virtual machine may be moved to any cluster node that has access to the physical disk and
configuration files. If the configuration files are not accessible by all nodes in the cluster, creation
of the resource group will fail.
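
These resources can also be inspected programmatically through the cluster WMI provider. The
following is a minimal sketch (not a NetApp or Microsoft tool); it assumes the third-party Python
wmi package (built on pywin32) is installed and is run locally on a cluster node, and that the
virtual machines appear as resources of type "Virtual Machine" in the root\MSCluster namespace:

    # List Hyper-V virtual machine resources known to Failover Clustering.
    # Run locally on a cluster node with administrative rights (remote access
    # to root\MSCluster requires packet-privacy authentication).
    import wmi

    cluster = wmi.WMI(namespace=r"root\MSCluster")
    for res in cluster.MSCluster_Resource():
        if res.Type == "Virtual Machine":   # skip disks, IP addresses, etc.
            print(res.Name, res.State)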

4.4 LUN Types


Hyper-V introduces the use of various LUN types depending on the configuration and operating
systems in use. The following guidelines should be followed for LUN types.
windows - Used for legacy Windows XP and Server 2003 with MBR partition types when iSCSI is
used from the Guest directly to the target.
windows_gpt - Used for Server 2003 ONLY. Supported by Server 2003 SP1 32-bit, Server 2003
x64, and IA64 with GPT partition types when iSCSI is used from the Guest directly to the target.
windows_lhs - Available with Data ONTAP 7.3.0. This LUN type is used for Windows Server 2008
and Vista SP1 accessing the target directly, either from the Parent partition via FCP or iSCSI or
from the Guest partition using iSCSI directly to the target.
linux - Because windows_lhs is not available prior to Data ONTAP 7.3, the linux LUN type must be
used when the LUN is being accessed directly by Server 2008 or Vista SP1, either from the Parent
partition or from the Guest partition.
windows_2008 - Available with Data ONTAP 7.3.1RC2 and above, and also 7.2.5 and above. This
LUN type is used for Windows Server 2008 and Vista SP1 accessing the target directly, either from
the Parent partition via FCP/iSCSI or from the Guest partition using iSCSI directly or via raw
mapped LUNs to the guest.
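
As an illustration only (the volume path, LUN size, and igroup mapping below are hypothetical and
site-specific), a LUN for a Server 2008 Parent partition on a controller running Data ONTAP 7.3.1
or later might be created from the 7-mode CLI as follows:

    lun create -s 100g -t windows_2008 /vol/hyperv/vm1.lun

On releases where windows_2008 is not available, substitute the LUN type called out in the
guidelines above.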

5 Deployment
The deployment of Hyper-V must follow the recommended deployment configurations, including
the best practices of both NetApp and Microsoft.
The NetApp best practices will be defined by testing results and by dependent applications such
as SnapDrive for Windows.

5.1 Packaging
WHU 5.0 will be a self-contained .msi or .exe downloaded from the NOW website. See the
WHU 5.0 documentation and functional spec for more information.
Hyper-V is packaged as part of Windows Server 2008 x64 only; it is not available for any other
version of Windows. The RTM release of Hyper-V will be available through a separate download
and eventually through Microsoft Update.

5.2 Installation
Hyper-V is installed via the Add Roles wizard, which is accessible from the Server Manager MMC.
Once the Hyper-V role is installed, a reboot is required. After the reboot, the hypervisor is loaded
and the Hyper-V management console is available from the Administrative Tools menu.
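The role can also reportedly be added from the command line; the role and package names below
come from Microsoft documentation of this release and should be verified on the installed build:

    servermanagercmd -install Hyper-V       (Full installation)
    start /w ocsetup Microsoft-Hyper-V      (Server Core; the package name is case sensitive)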

6 Feature Overview
6.1 Architecture
The following diagram displays the overall architecture of Hyper-V. Enlightened operating
systems, such as Server 2008 and Vista SP1, have the VSC interface directly with the VMBUS and
then with the VSP in the Parent kernel. For hypervisor-aware Linux distributions, there is a Linux
VSC, which interfaces with a hypercall adapter and then with the VMBUS directly. This method
also has much improved performance and should be equal to a Windows enlightened operating
system. The third configuration is a non-enlightened operating system, such as Windows 2000 or
XP or a non-hypervisor-aware Linux distribution. This configuration uses a hypervisor emulation
layer, which does not utilize the VMBUS and instead uses the Parent VM Worker Process to
provide services to the guest. This causes a context switch to kernel mode for IRP processing, as
shown in more detail in the next diagram.

Figure 1: Windows Server Virtualization Block Diagram

As described above, there are two types of virtualization stacks. One utilizes the new VMBus and
synthetic devices while the other uses the standard hypervisor layer and emulated devices. The
following figures display each of these software stacks.

Figure 2: Device Emulation

With Device Emulation, the following characteristics apply:

I/O operations cause kernel traps
The hypervisor intercepts and redirects them
The emulated devices make requests of the storage server
The storage server passes the requests on to a VHD parser
This requires many context switches

Figure 3: Enlightened I/O

The I/O stack for enlightened operating systems has these characteristics:
No I/O traps
Little hypervisor involvement
Enlightened I/O makes requests of the storage server
The storage server passes the request on to either:
o the VHD parser
o the LUN directly (raw pass-through)
Requires very little context switching

The most important enhancement to the enlightened I/O stack is the Fast Path Filter driver, which
allows I/O to be sent directly to the parent partition via the VMBUS and then to the corresponding
driver without context switches. This provides a pure kernel I/O path for the virtual machine.

6.2 Failover Clustering


Unlike Virtual Server, Hyper-V was developed in collaboration with the Server 2008 Failover
Cluster team. This resulted in a completely integrated product with advanced capabilities.
Hyper-V supports up to a 16-node cluster, and guest virtual machines can migrate between nodes
seamlessly.

Hyper-V v1 will support passive (quick) migration of virtual machines. This entails saving the
running machine's memory to a file on disk, moving the disk ownership to another cluster node,
and then resuming the virtual machine. In subsequent releases of Hyper-V, hot or live migration of
a virtual machine will be available, where the virtual machine continues to run and serve client
requests as it is being moved between cluster nodes.
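
Since a highly available virtual machine is simply a cluster resource group (see section 4.3), a
quick migration can be triggered from the command line by moving that group. The group and
node names below are hypothetical, and the cluster.exe syntax should be verified against the
Failover Clustering documentation for the installed release:

    cluster group "VM1" /move:Node2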

6.3 Windows Host Utilities 5.0


The WHU will include a new utility called vm_info.exe. This utility will display the current status
and configuration of virtual machines. The primary features of WHU 5.0 are:
Hyper-V configuration on a per-Guest basis via the new info utility, vm_info.exe
o Operating systems
o Snapshots (Hyper-V)
o Disk configuration
o Uptime, status, etc.
Unified Protocol Support (one WHU for FCP and iSCSI)
Single WHU for both Servers and Guests
o Server 2003 and 2008
o Guest Auto Detection
o Guest Support of XP, Vista, Server 2003, and Server 2008

7 User Interfaces
7.1 Command Line Interface
WHU 5.0 will provide the command line utilities standard in existing WHU releases. In addition,
WHU 5.0 will also provide a new utility for displaying virtual machine status and configuration. For
more details, please see the WHU 5.0 functional spec.

There is no known command line interface for managing Hyper-V directly. A number of other
products, such as SCVMM (System Center Virtual Machine Manager), do provide command line
interfaces, but those are not covered by this document.

7.2 Graphical Interface


Hyper-V provides two interfaces for managing virtual machines graphically. The first is an MMC
snap-in that can be launched either via the Administrative Tools menu or from the command line
by executing virtmgmt.msc from the %SystemDrive%\progra~1\hyper-v directory. This interface
allows the user to create, delete, edit, start, stop, and resume virtual machines, and it is the
interface used to manage virtual machine properties.
The second is a console tool that allows the administrator to access the console of the virtual
machine. This is especially important during initial configuration of the guest operating system.
This tool can be launched from the Hyper-V MMC or directly from the command line by executing
vmconnect.exe from the %SystemDrive%\progra~1\hyper-v directory.

8 Programmatic Interfaces
Microsoft provides interfaces for building tools and for managing virtualized environments. The
known interfaces consist of a WMI provider and the Hypercall APIs.

8.1 API
WMI Provider for Virtualization

The WMI Provider was designed to configure, manage, and monitor the Hyper-V server and its
associated virtual machines. The documentation for the virtualization WMI Provider is not yet
complete and is still being updated by Microsoft. The most current WMI interface documentation
can be found at:
http://msdn2.microsoft.com/en-us/library/cc136992%28VS.85%29.aspx
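
As a minimal sketch of how the provider can be consumed (illustrative only; it assumes the
third-party Python wmi package, built on pywin32, running on the Hyper-V Parent with
administrative rights), the virtual machines can be enumerated from the root\virtualization
namespace:

    # Enumerate Hyper-V virtual machines via the virtualization WMI provider.
    import wmi

    virt = wmi.WMI(namespace=r"root\virtualization")
    # Msvm_ComputerSystem also returns the physical host; guest VMs carry the
    # caption "Virtual Machine".
    for vm in virt.Msvm_ComputerSystem(Caption="Virtual Machine"):
        # EnabledState: 2 = running, 3 = off (see the MSDN documentation above)
        print(vm.ElementName, vm.EnabledState)

The same classes expose methods for state changes (for example, RequestStateChange), which
management tools such as SCVMM build upon.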

Hypercall API
The Hypercall APIs are intended to be used as low level interfaces to the Microsoft Hypervisor
and Hyper-V services. The Hypercall APIs provide interfaces for partition management, physical
hardware management, scheduling, partition state, etc. More information on the Hypercall APIs
can be found at:
http://www.microsoft.com/downloads/details.aspx?FamilyID=91E2E518-C62C-4FF2-8E50-3A37EA4100F5&displaylang=en

8.2 Wire protocols


FCP - Access from the Parent partition via FCP, mapping those LUNs to the Guest as either IDE or
SCSI virtual disks.
iSCSI - Access from the Parent partition via iSCSI, mapping those LUNs to the Guest as either IDE
or SCSI virtual disks.
iSCSI (Guest) - Access from the Guest partition via iSCSI directly to the target. This bypasses the
storage processing of the Parent driver stack but does utilize the network stack.

9 RAS
9.1 Reliability
All Guest partitions and the Hyper-V services must survive failure scenarios of the target, fabric,
and host protocols when the failure itself is recoverable. For instance, a controller panic on a
clustered target should not result in any IO error or application disruption.

9.2 Availability
The availability of Hyper-V is dependent on the storage stack, the virtual machine availability, and
the target. High availability is obtained by utilizing a clustered storage controller, clustered host or
Parent partitions, and MPIO within the host.
Clustered Storage Controller - Provides for availability of LUN access during a controller
failure.
Windows Failover Cluster - Provides for availability of Guest partitions in the event of a
physical host or Parent partition failure.
MPIO - Provides for availability of IO during a fabric (iSCSI or FCP) failure, storage controller
failure, or host hardware failure (such as an HBA or NIC).
Guest Partition Clustering - It is unclear whether this feature is supported; if so, it would
provide the administrator greater control over application-level failover within Guest
partitions.

9.3 Supportability
Supportability of Hyper-V is obtained through the use of the WHU as well as standard Windows
reporting tools.
The WHU provides utilities for collecting data on the fabric, target, and host configurations. This
information is used to debug and support various configurations.
For highly available virtual machines, the Failover Cluster validation tool must be run and must
pass in order for the configuration to be supported by Microsoft and NetApp.

9.4 Error reporting


Error reporting is accomplished using the Windows Event Log. The event log contains a few
categories that are subdivided by feature and protocol. The event viewer can be opened either
by navigating to the Diagnostics category of Server Manager or by executing eventvwr.exe
from the command line. The Hyper-V logs are located under Applications and Services
Logs\Microsoft\Windows and provide these subcategories:

Hyper-V-Config
Hyper-V-High-Availability
Hyper-V-Hypervisor
Hyper-V-Integration
Hyper-V-Network
Hyper-V-SynthNic
Hyper-V-SynthStor
Hyper-V-vhdsvc
Hyper-V-VMMS
Hyper-V-Worker
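
For scripted collection (for example, when gathering data for a support case), the same logs can
be queried from the command line with the built-in wevtutil tool. The channel name below follows
the Microsoft-Windows-Hyper-V-* naming pattern and should be verified with wevtutil el on the
host:

    wevtutil qe Microsoft-Windows-Hyper-V-VMMS-Admin /c:20 /rd:true /f:text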

10 Performance
Initial performance analysis shows the following:

Note: The baseline is Windows Server 2008 with no Hyper-V installed.

2008 with Hyper-V installed but not used - 15% drop in performance
o When the hypervisor is installed, the Parent is essentially virtualized. Processor interrupts
must be scheduled through the hypervisor, and applications are interrupted much more
frequently than with no hypervisor.
2008 Guest
o RAW SCSI/IDE disk mapped to Guest - 40% drop in performance. This is still quite good,
considering 600 MB/s from a Guest.
o iSCSI direct from Guest - 70% drop in performance. This is caused by the lack of jumbo
frame and RSS support in the virtual switch.

11 Limitations
No CIFS Support
Cannot boot from SCSI devices
Cannot boot from iSCSI
V1 will not have live migration, but quick migration (suspend, move, resume)
Migration limited to Nodes within a Windows Failover Cluster

12 Revision History

Version    Date            Name           Reason for change
1.0        10 June 2008    Kenny Speer    Initial DRAFT for TOI.

13 Approvals

Name            Role                                    Date
Kenny Speer     Integration Lead
Trent Weaver    NGS Program Manager
Kerry Knoll     Interoperability and Integration Mgr
