
Virtualization

Everybody seems to be talking about virtualization these days, yet it is nothing really new. Basically, virtualization means the creation of a logical layer that hides the actual physical devices behind it. This allows a number of administrative simplifications. Let us look at a few solutions from the past.

When information technology was young, the market knew nothing but mainframes. Every vendor used individual components, and they were not compatible with those of the competition. Every disk drive, for example, needed a special driver for that disk and for the computer type it attached to, and it could then connect to that machine type only. With the introduction of smaller and cheaper computers, independent disk vendors appeared on the market. These vendors, however, had an interest in connecting their products to as many computer types as possible.

The solution was the adoption of a common interface called the Small Computer System Interface (SCSI). Computer vendors now only needed to provide SCSI drivers and could then connect to any device supporting SCSI. Likewise, vendors of peripheral products had a new market to sell their products to. Of course, SCSI supports not only disks but other peripherals as well. This early form of virtualization replaced a large number of interfaces with a single one while generating a huge new market.

Another example of virtualization was the introduction of virtual machines for mainframes. Mainframes had seen a constant increase in performance, and for better utilization it was planned to divide a single machine into multiple similar ones. The solution was an interface that provided standardized hardware components to logical partitions. Where a physical machine had to cope with punching cards and reading punched cards, a virtual machine had a virtual puncher and a virtual card reader that read virtually punched cards.

Today virtualization is an advanced topic and no longer limited to mainframes. The main classification puts virtualization into these categories:
- Desktop Virtualization
- Server Virtualization
- Storage Virtualization

Server Virtualization
Virtualization tries to replace many interfaces with a few standardized ones. In many cases additional computing power from supporting systems can be used to achieve this.

With server virtualization that is not an option. On the contrary, server virtualization must implement virtual CPUs, virtual memory, virtual storage, and virtual networking on the existing computing platform and serve many virtual machines in parallel. The biggest market for similar computers is the x86-compatible market used with Windows and Linux technologies, so it is not astonishing to find a strong server virtualization offering for this platform.

A logical layer is implemented that hides the actual physical components of a computer and presents only a base of standardized components (x86-compatible CPU, memory, disk, network) to logical machines. This layer is often referred to as a hypervisor. The hypervisor may be able to run directly on the bare metal to reduce its overhead to a minimum, or it may be implemented as a process running on a host operating system; the latter approach is often used on desktops or notebooks for test or demo environments. The hypervisor provides each virtual machine with a dedicated BIOS where the standardized components are available for use. An operating system and all required applications are then installed on the virtual machine.

Just as an operating system implements a time-sharing model to run many processes on a single processor, the hypervisor assigns CPU cycles to virtual machines. It uses virtual machine metadata to maintain a consistent machine state, and when a virtual machine is given CPU cycles it uses the actual physical processor. Many computers are underutilized today, allowing a single physical server to serve many virtual machines. This is completely transparent to the virtual machine, which thinks it has exclusive hardware access.
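The scheduling idea can be pictured with a short sketch. The following Python snippet is a purely illustrative model of round-robin time slicing, not the implementation of any real hypervisor; all class and method names are invented for this example.

```python
from collections import deque

class VirtualMachine:
    """Illustrative stand-in for a VM and its saved machine state."""
    def __init__(self, name):
        self.name = name
        self.state = {}  # metadata the hypervisor keeps to restore the VM

class ToyHypervisor:
    """Round-robin scheduler: hands the physical CPU to one VM at a time."""
    def __init__(self, vms, time_slice_ms=10):
        self.run_queue = deque(vms)
        self.time_slice_ms = time_slice_ms

    def run(self, slices):
        for _ in range(slices):
            vm = self.run_queue.popleft()         # pick the next runnable VM
            self.load_state(vm)                   # restore its saved machine state
            self.execute(vm, self.time_slice_ms)  # VM uses the real CPU for one slice
            self.save_state(vm)                   # persist state so it can resume later
            self.run_queue.append(vm)             # back of the queue

    def load_state(self, vm): pass   # placeholders in this toy model
    def save_state(self, vm): pass

    def execute(self, vm, ms):
        print(f"{vm.name} runs for {ms} ms")

ToyHypervisor([VirtualMachine("vm1"), VirtualMachine("vm2")]).run(slices=4)
```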

Server virtualization provides a number of operational advantages for datacenters, which is why the technology is currently seeing a real boom:
- Application separation: The functionality and availability of many applications can suffer from side effects caused by other applications running on the same computer. It is therefore advisable to run them on dedicated machines, and virtual machine technology is an elegant way to achieve this goal. The danger of application downtime due to configuration problems is minimized.

- Cloning: Datacenters need timely ways of provisioning new computing power. Installing a physical server can be a lengthy process: the operating system, drivers, and updates must be installed, software must be added, and the new server must be configured to match its requirements. A virtual machine, however, usually consists of a number of files in a filesystem controlled by the hypervisor. To provide a similar machine, a simple copy of these files is sufficient; in a later configuration step the new machine gets its own identity. This process is called cloning (see the first sketch after this list). Cloning reduces provisioning time from many hours or days to a few minutes and permits completely new provisioning concepts and much more flexibility for datacenters. As an extension of the concept of cloning, machine templates may be used that allow automatic machine configuration after cloning.

- Migration: If server virtualization is combined with cluster filesystem technology, the files of a virtual machine can be presented to multiple servers, and a virtual machine can easily be migrated between them. If a machine running on server A needs more compute power, it can simply be stopped on server A and restarted on a more powerful server B. Some vendors are even capable of performing a live migration of a running virtual machine. The migration is transparent to applications and clients, and it also permits automatic load balancing for virtual machines residing in a pool of physical servers.

- High availability: For many applications a clustered filesystem also permits adding high availability without actually touching the virtual machine. The hypervisors of multiple machines monitor each other; should one detect that a server has failed, it can take over all virtual machines that were running on the failed physical machine (see the second sketch after this list).

- Green computing: Datacenters have big problems providing enough floor space, power, and cooling. Statistics show that the majority of servers use only a small percentage of their physical capabilities, yet energy consumption is about the same whether a CPU is highly used or almost not at all. Server virtualization can consolidate many physical servers onto a few; often 10-20 machines can be consolidated onto a single server. This leads to a sharp reduction in floor space, power consumption, and cooling, so server virtualization is a classic approach to more environmentally friendly datacenters. Since server virtualization also reduces the cost of server maintenance, it can often pay for itself in a short amount of time.
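The cloning idea from the list above can be sketched in a few lines: because a virtual machine is just a set of files, a clone is a copy plus a new identity. This assumes a hypothetical hypervisor that stores each VM as a directory of disk images with a small JSON config file; no real product's layout is implied.

```python
import json
import shutil
import uuid
from pathlib import Path

def clone_vm(vm_dir: str, clone_name: str) -> Path:
    """Clone a VM by copying its files and giving the copy a new identity."""
    src = Path(vm_dir)
    dst = src.parent / clone_name
    shutil.copytree(src, dst)                  # the clone itself: a plain file copy

    config_path = dst / "vm.json"              # hypothetical per-VM config file
    config = json.loads(config_path.read_text())
    config["name"] = clone_name                # the clone's new identity:
    config["uuid"] = str(uuid.uuid4())         #   a fresh UUID ...
    config["mac"] = "52:54:00:" + ":".join(    #   ... and a fresh MAC address
        f"{b:02x}" for b in uuid.uuid4().bytes[:3])
    config_path.write_text(json.dumps(config, indent=2))
    return dst

# clone_vm("/vmfs/templates/base-linux", "web-07")  # minutes instead of days
```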
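The high-availability monitoring described above can likewise be sketched. In this toy failover loop each hypervisor checks its peers and restarts the virtual machines of any peer whose heartbeat stops; the timings and the callback functions are hypothetical stand-ins for whatever a real product provides.

```python
import time

def ha_monitor(peers, heartbeat, restart_vm, interval_s=5, timeout_s=15):
    """Toy failover loop: restart the VMs of any peer that stops answering.

    peers      -- {hostname: [names of VMs running there]}
    heartbeat  -- function(host) -> True if the host answered (hypothetical)
    restart_vm -- function(vm) restarting a VM locally (hypothetical)
    """
    last_seen = {host: time.monotonic() for host in peers}
    while peers:
        for host, vms in list(peers.items()):
            if heartbeat(host):
                last_seen[host] = time.monotonic()
            elif time.monotonic() - last_seen[host] > timeout_s:
                # Peer declared dead: take over its virtual machines.
                for vm in vms:
                    restart_vm(vm)
                del peers[host]
        time.sleep(interval_s)
```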

Desktop Virtualization
The last two decades have seen a migration from centralized datacenter infrastructures to decentralized ones. Many desktops and mobile computers have spread within organizations, which in turn has caused a number of administrative problems.

Central infrastructures permit a relatively straightforward data backup strategy. Desktops and notebooks, on the other hand, are backup challenges. Backups usually must run across local area networks, and because the needed bandwidth may not always be available, backups are often done only partially. In case of a necessary restore, a completely new installation is sometimes the only option. Field staff do not have regular access to the corporate network, so backups may become rare events.

Should a mobile computer be stolen, sensitive company data may be lost. As a consequence, companies are looking for strategies to recentralize, and server virtualization can prove to be the right technology here as well. With desktop virtualization, desktops and mobile computers are implemented as thin clients equipped with only a very basic operating system. A user then uses the thin client to connect to a virtual machine across a network using the Remote Desktop Protocol (RDP). The virtual machine is located inside a datacenter and provides the operating system and all applications that previously were installed on desktops and notebooks. The user has complete control of the virtual machine; the thin client is only a remote presentation layer and can cope with small connectivity bandwidth.

Desktop virtualization re-enables central backup and restore processes. In case of a theft, the sensitive data remains within the datacenter and is not lost, and a broken thin client can be replaced within minutes. Since usually not all desktops are in use at any time, properly configured desktop virtualization can automatically power off unused machines or put them into standby mode. A large number of desktops and notebooks may be virtualized on a single server.
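That automatic power management could look roughly like the following loop. This is a sketch under assumptions: the session query and the suspend call stand in for whatever interface a real desktop-virtualization product offers, and the idle limit is an example value.

```python
import time

IDLE_LIMIT_S = 30 * 60  # suspend a desktop after 30 idle minutes (example value)

def power_manager(desktops, idle_seconds, suspend, poll_s=60):
    """Suspend virtual desktops that have had no RDP session for a while.

    idle_seconds -- function(vm) -> seconds since the last RDP session (hypothetical)
    suspend      -- function(vm) putting the VM into standby (hypothetical)
    """
    while True:
        for vm in desktops:
            if idle_seconds(vm) > IDLE_LIMIT_S:
                suspend(vm)  # frees CPU cycles and memory on the host
        time.sleep(poll_s)
```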

Storage Virtualization
Storage virtualization focuses on replacing many storage interfaces with a limited number. Additionally, storage virtualization aims to provide easier redundancy, increase performance, and hide storage network complexity.

Virtualization can be implemented on all three layers of a storage network, and different solutions exist for each layer.

Host-based virtualization
For host-based virtualization a logical volume manager is installed on a server. It uses the disks visible to the server (targets and LUNs) to create virtual disks using RAID technology. The virtual disk is then presented to the operating system and can be used for application data. The RAID level in use is completely transparent to the user of the virtual disk. The backend disks can be supplied via DAS or SAN connectivity, and the logical volume manager even allows the creation of virtual disks from physical disks of different vendors, e.g. for use in a RAID 1 mirror.
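The volume manager's role can be pictured with a toy RAID 1 volume: writes go to both backend disks, reads come from either, and the consumer only ever sees the virtual disk. This is a conceptual sketch, not a real volume manager; the byte-array "disks" merely stand in for physical LUNs, which may even come from different vendors.

```python
class Disk:
    """Stand-in for a physical disk or SAN LUN."""
    def __init__(self, size):
        self.blocks = bytearray(size)

class Raid1Volume:
    """Toy RAID 1 virtual disk: mirror every write onto two backend disks."""
    def __init__(self, disk_a, disk_b):
        self.mirrors = (disk_a, disk_b)

    def write(self, offset, data):
        for disk in self.mirrors:  # mirrored write to both backend disks
            disk.blocks[offset:offset + len(data)] = data

    def read(self, offset, length):
        # Either copy is valid; a real LVM might balance or skip a failed disk.
        return bytes(self.mirrors[0].blocks[offset:offset + length])

vdisk = Raid1Volume(Disk(1024), Disk(1024))  # could be disks from two vendors
vdisk.write(0, b"application data")
assert vdisk.read(0, 16) == b"application data"
```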
Network-based virtualization (storage appliance)
Virtualization can also be implemented inside the storage network. Two different approaches are available:

- Inband virtualization: The term "inband" refers to virtualization inside the data path. These appliances rely on a volume manager as well: disks in the storage backend are used to create logical volumes, which are then presented to a frontend network, either as SAN targets or as NAS file shares. Inband means that every disk I/O from a storage consumer (server) to the disk must go through the appliance. The appliance can be implemented as a dedicated server or as a switch component.
- Out-of-band virtualization: Out-of-band virtualization uses a logical volume manager as well, but the appliance does not sit in a backend or frontend network. It scans a storage area network for available disk capacity and then creates logical disks as metadata objects; the metadata describes which logical disk block lies on which physical disk. The storage consumer must install a driver for the out-of-band appliance and then mount the logical disk. The logical disk's metadata enables the storage consumer to work directly with the physical disks comprising the logical disk. Out-of-band appliances are not trivial, are available for SAN technology only, and are currently not widespread in the market.
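The difference between the two approaches comes down to where the block mapping lives. The sketch below models the out-of-band case: the appliance only hands out metadata, and the consumer's driver translates logical blocks to physical disks itself before talking to the disks directly (an inband appliance would instead sit in the data path and do this translation for every I/O). All names and numbers are illustrative.

```python
# Metadata as an out-of-band appliance might hand it to a host driver:
# each extent of the logical disk maps to a slice of a physical LUN.
extent_map = [
    # (logical start block, length, physical LUN id, physical start block)
    (0,    1000, "lun-A", 5000),
    (1000, 1000, "lun-B", 0),
]

def resolve(logical_block):
    """Driver-side lookup: which physical LUN/block holds a logical block?"""
    for start, length, lun, phys_start in extent_map:
        if start <= logical_block < start + length:
            return lun, phys_start + (logical_block - start)
    raise ValueError("block outside the logical disk")

# The consumer then performs I/O directly against the physical disk:
print(resolve(1500))  # -> ('lun-B', 500); no appliance in the data path
```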

Storage-based virtualization
Both technologies described above can also be implemented on a storage array itself. Storage arrays have a number of controllers or heads that use RAID technology to create logical units (LUNs) or logical volumes from the physical disks inside the array. These LUNs or volumes are then presented to the array's consumers, either as storage targets (SAN) or as file shares (NAS). Most storage systems have a monolithic design: a fixed number of controllers control a maximum number of disks and transform them into logical storage. Recently, more grid-like architectures have entered the market, providing better scalability for capacity and performance. Most storage systems are also capable of providing data redundancy by using replication technology.
