Outline
Introduction
Linux Virtual Server
Microsoft load balancing solution
Introduction
Explosive Growth of the Internet
100% annual growth rate
Introduction
Load balancing is a technique for spreading work across multiple
computers, processes, disks, or other resources in order to achieve optimal
resource utilization and decrease computing time.
A load balancer can be used to increase the capacity of a server farm
beyond that of a single server.
It can also allow the service to continue even in the face of server
down time due to server failure or server maintenance.
A load balancer consists of a virtual server (also referred to as a vserver
or VIP), which in turn consists of an IP address and port.
The virtual server is bound to a number of physical services running on the
physical servers in a server farm.
A client sends a request to the virtual server, which in turn selects a
physical server in the server farm and directs this request to the
selected physical server.
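This dispatch flow can be sketched in a few lines of Python (all names are illustrative; a real load balancer forwards packets, not function calls):

```python
# Minimal sketch of virtual-server dispatch. The client only ever sees the
# VIP; the virtual server picks a physical server and hands the request on.
class VirtualServer:
    def __init__(self, vip, port, backends):
        self.vip, self.port = vip, port   # the advertised address and port
        self.backends = backends          # physical servers in the farm
        self._next = 0

    def select(self):
        # Pick a physical server (simple round robin, for illustration).
        server = self.backends[self._next % len(self.backends)]
        self._next += 1
        return server

    def handle(self, request):
        # In reality the request would be forwarded to the chosen server;
        # here we just return which server was selected.
        return self.select()

vs = VirtualServer("10.0.0.1", 80, ["rs1", "rs2", "rs3"])
print([vs.handle("GET /") for _ in range(4)])  # ['rs1', 'rs2', 'rs3', 'rs1']
```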
Introduction (cont.)
Different virtual servers can be configured for different sets of
physical services, such as TCP and UDP services in general.
Application-specific virtual servers may exist to support HTTP, FTP,
SSL, DNS, etc.
The load balancing methods manage the selection of an appropriate
physical server in a server farm.
Persistence can be configured on a virtual server; once a server is
selected, subsequent requests from the client are directed to the same
server.
Persistence is sometimes necessary in applications where client state is
maintained on the server, but the use of persistence can cause
problems in failure and other situations.
A more common method of managing persistence is to store state
information in a shared database, which can be accessed by all real
servers, and to link this information to a client with a small token such
as a cookie, which is sent in every client request.
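A minimal sketch of this token-based approach, with a Python dict standing in for the shared database (all names here are hypothetical):

```python
import uuid

# Session state lives in a store that every real server can reach, so any
# server can handle any request: no server-level "stickiness" is needed.
shared_store = {}  # stands in for the shared database

def handle_login(user):
    token = str(uuid.uuid4())  # sent back to the client, e.g. as a cookie
    shared_store[token] = {"user": user, "cart": []}
    return token

def handle_request(token):
    # Whichever real server receives the request recovers the client's
    # state by looking the token up in the shared store.
    return shared_store.get(token)

t = handle_login("alice")
print(handle_request(t)["user"])  # alice
```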
Introduction (cont.)
Load balancers also perform server monitoring of services
in a web server farm.
In case of failure of a service, the load balancer continues to
perform load balancing across the remaining services that
are UP.
In case of failure of all the servers bound to a virtual
server, requests may be sent to a backup virtual server (if
configured) or optionally redirected to a configured URL.
In Global Server Load Balancing (GSLB) the load
balancer distributes load to a geographically distributed set
of server farms based on health, server load or proximity.
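The failover behaviour described above can be sketched as a simple selection function (illustrative only; a real monitor probes TCP/HTTP health rather than reading a dict):

```python
# Sketch of service monitoring with fallback targets.
def pick_target(farm_health, backup_vserver=None, redirect_url=None):
    """farm_health: dict server -> bool (True if the health check passes)."""
    up = [s for s, healthy in farm_health.items() if healthy]
    if up:
        return ("server", up[0])           # balance across remaining UP services
    if backup_vserver is not None:
        return ("backup", backup_vserver)  # all real servers down: backup vserver
    return ("redirect", redirect_url)      # last resort: configured URL

print(pick_target({"rs1": False, "rs2": True}))                  # ('server', 'rs2')
print(pick_target({"rs1": False}, backup_vserver="vip-backup"))  # ('backup', 'vip-backup')
```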
Introduction (cont.)
Load balancing methods:
Least connections
Round robin
Least response time
Least bandwidth
Least packets
URL hashing
Domain name hashing
Source IP address
Destination IP address
Source IP - destination IP
Static proximity, used for GSLB
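Two of the methods above can be sketched as follows (illustrative implementations; server names are hypothetical):

```python
import hashlib

def least_connections(conn_counts):
    # Least connections: pick the server currently handling the fewest
    # active connections.
    return min(conn_counts, key=conn_counts.get)

def source_ip_hash(client_ip, servers):
    # Source IP address method: hashing the client address means the same
    # client is always mapped to the same server (a form of persistence).
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(least_connections({"rs1": 12, "rs2": 3, "rs3": 7}))   # rs2
print(source_ip_hash("192.0.2.10", ["rs1", "rs2", "rs3"]))
```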
LVS
In LVS, a cluster of Linux servers appears as a single
(virtual) server on a single IP address.
Client applications interact with the cluster as if it were a
single, high-performance, and highly-available server.
Inside the virtual server, LVS directs incoming network
connections to the different servers according to
scheduling algorithms.
Scalability is achieved by transparently adding or
removing nodes in the cluster.
High availability is provided by detecting node or daemon
failures and reconfiguring the system accordingly, on-the-fly.
For transparency, scalability, availability and
manageability, LVS is designed around a three-tier
architecture, as illustrated in the next figure.
LVS architecture
The load balancer, servers, and shared storage are usually
connected by a high-speed network, such as 100 Mbps Ethernet
or Gigabit Ethernet, so that the internal network does not
become a bottleneck of the system as the cluster grows.
IPVS
IPVS modifies the TCP/IP stack inside the
Linux kernel to support IP load balancing
technologies
VS/NAT Workflow
1. The client sends a request to the virtual IP address of the load balancer.
2. The load balancer selects a real server and rewrites the destination address of the packet to that server.
3. The real server processes the request and sends its reply back through the load balancer (its default gateway).
4. The load balancer rewrites the source address of the reply to the virtual IP address.
5. The reply is returned to the client, which sees only the virtual server.
Disadvantages
The maximum number of server nodes is limited,
because both request and response packets are rewritten
by the load balancer. When the number of server nodes
increases toward 20, the load balancer will probably
become a new bottleneck.
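The two rewrites that create this bottleneck can be sketched as follows (addresses are illustrative, and packets are modeled as dicts rather than real IP headers):

```python
# Sketch of VS/NAT address rewriting: every request AND every (typically
# much larger) response must pass through, and be rewritten by, the balancer.
VIP = ("202.0.0.1", 80)    # the advertised virtual server address
REAL = ("10.0.0.2", 8080)  # the chosen real server (selection omitted here)

def rewrite_request(pkt):
    # Inbound: destination VIP -> chosen real server.
    return {**pkt, "dst": REAL}

def rewrite_response(pkt):
    # Outbound: source real server -> VIP, so the client sees one address.
    return {**pkt, "src": VIP}

req = {"src": ("198.51.100.7", 40000), "dst": VIP}
print(rewrite_request(req)["dst"])                                # ('10.0.0.2', 8080)
print(rewrite_response({"src": REAL, "dst": req["src"]})["src"])  # ('202.0.0.1', 80)
```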
VS/TUN architecture
VS/TUN workflow
Disadvantages:
Real servers must support the IP tunneling protocol
VS/DR architecture
VS/DR workflow
Disadvantages:
Servers must have a non-ARP alias interface
The load balancer and servers must have one of
their interfaces in the same LAN segment
Comparison
                 VS/NAT          VS/TUN        VS/DR
Server           any             Tunneling     Non-ARP device
Server network   private         LAN/WAN       LAN
Server number    low (10~20)     High (100)    High (100)
Server gateway   load balancer   own router    own router
Scheduling algorithms
Round-Robin
Weighted Round-Robin
Least-Connection
Weighted Least-Connection
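Weighted round-robin can be sketched in the style of the LVS scheduler (a server with weight 3 receives three times as many connections as a server with weight 1; server names are illustrative):

```python
from functools import reduce
from math import gcd

def weighted_round_robin(servers):
    """servers: dict name -> positive integer weight; yields names forever."""
    names = list(servers)
    max_w = max(servers.values())
    step = reduce(gcd, servers.values())  # common step for the current weight
    i, cw = -1, 0                         # index and current weight threshold
    while True:
        i = (i + 1) % len(names)
        if i == 0:
            # After a full pass, lower the threshold so that lighter
            # servers become eligible; reset it once it reaches zero.
            cw -= step
            if cw <= 0:
                cw = max_w
        if servers[names[i]] >= cw:
            yield names[i]

gen = weighted_round_robin({"rs1": 3, "rs2": 1})
print([next(gen) for _ in range(4)])  # ['rs1', 'rs1', 'rs1', 'rs2']
```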
References
Wikipedia
http://www.linux-vs.org