
SUBMITTED BY

LATIKA WADHWANI
VIII SEM, CS A2
ROLL NO 58

SUBMITTED TO:
MS. NEHA AGRAWAL
MS. RIMJHIM JAIN



Introduction
Server load balancing
Types of load balancing
Load balancing algorithms
Linux Virtual Server
Methods of load balancing

Load balancing is a computer
networking methodology to distribute
workload across multiple computers or
a computer cluster, network links, central
processing units, disk drives, or other
resources, to achieve optimal resource
utilization, maximize throughput, minimize
response time, and avoid overload.
On the Internet, load balancing is used to provide
a single Internet service from
multiple servers, sometimes known as a
server farm.

A client sends a request to the virtual server,
which in turn selects a physical server in the
server farm and directs this request to the
selected physical server.

When multiple web servers are present in a
server group, the HTTP traffic needs to be
evenly distributed among the servers. In the
process, these servers must appear as one
web server to the web client, for example a
web browser. The load balancing
mechanism used for spreading HTTP requests
is known as IP spraying.
The equipment used for IP spraying is called
the 'load balancer'.
Round-robin DNS: multiple IP addresses are
associated with a single domain name, and
clients are handed a different address in rotation.
Hardware load balancing: hardware load
balancers route TCP/IP packets to the various
servers in a cluster.
Software load balancing: the load balancer listens on
the port where external clients connect to
access services and forwards each request to one of
the "backend" servers, which usually replies to the
load balancer (sketched below).
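To make the software approach concrete, here is a minimal sketch in Python of a forwarding load balancer: it accepts client connections on a front-end port, picks a backend in rotation, and relays the reply back. The backend addresses and ports are placeholders chosen for illustration; a production balancer would also handle concurrency, health checks, and persistent connections.

import itertools
import socket

# Hypothetical backend servers; a real deployment would read these from configuration.
BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]
backend_cycle = itertools.cycle(BACKENDS)

def relay(client_sock, backend_addr):
    # Forward the client's request to the chosen backend and stream the reply back.
    with socket.create_connection(backend_addr) as server_sock:
        server_sock.sendall(client_sock.recv(65536))
        while True:
            chunk = server_sock.recv(65536)
            if not chunk:
                break
            client_sock.sendall(chunk)

def serve(listen_port=8000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("", listen_port))
        listener.listen()
        while True:
            client_sock, _ = listener.accept()
            with client_sock:
                relay(client_sock, next(backend_cycle))

if __name__ == "__main__":
    serve()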
Random allocation
Round robin allocation
Weighted round robin allocation
Least connections


In a random allocation, the HTTP requests are
assigned to any server picked randomly among
the group of servers. In such a case, one of the
servers may be assigned many more requests to
process, while the other servers are sitting idle.
However, on average, each server gets its share
of the load due to the random selection.

Pros: Simple to implement.

Cons: Can lead to overloading of one server while
others are under-utilized.
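As a minimal sketch, random allocation amounts to one call to a random choice over the server pool (the server names below are illustrative, not from the slides):

import random

SERVERS = ["server-a", "server-b", "server-c"]  # hypothetical server pool

def pick_server_random():
    # Each request goes to a server chosen uniformly at random.
    return random.choice(SERVERS)

# Distribute a few requests; the counts per server vary from run to run.
for request_id in range(6):
    print(request_id, "->", pick_server_random())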

In a round-robin algorithm, the IP sprayer assigns the requests
to a list of the servers on a rotating basis. The first request is
allocated to a server picked randomly from the group. For the
subsequent requests, the IP sprayer follows the circular order to
redirect the request. Once a server is assigned a request, the
server is moved to the end of the list. This keeps the servers
equally assigned.

Pros: Better than random allocation because the requests are
equally divided among the available servers in an orderly
fashion.

Cons: Round-robin allocation is not sufficient when
requests differ in the processing overhead they require, or when
the server specifications in the group are not identical.
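The rotating list can be sketched in Python with a circular iterator over the same hypothetical pool:

import itertools

SERVERS = ["server-a", "server-b", "server-c"]  # hypothetical server pool
rotation = itertools.cycle(SERVERS)             # circular order over the group

def pick_server_round_robin():
    # Each call returns the next server in the circle, so successive
    # requests are spread evenly across the group.
    return next(rotation)

for request_id in range(6):
    print(request_id, "->", pick_server_round_robin())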




Weighted Round-Robin is an advanced version of the
round-robin that eliminates the deficiencies of the plain
round robin algorithm. In case of a weighted round-robin,
one can assign a weight to each server in the group so that
if one server is capable of handling twice as much load as
the other, the powerful server gets a weight of 2. In such
cases, the IP sprayer will assign two requests to the
powerful server for each request assigned to the weaker
one.

Pros: Takes care of the capacity of the servers in the
group.

Cons: Does not consider the advanced load balancing
requirements such as processing times for each individual
request.
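One simple way to sketch weighted round-robin is to repeat each server in the rotation according to its weight, so a weight-2 server appears twice per cycle. The weights are illustrative; production schedulers typically interleave the sequence rather than grouping the repeats, but the 2:1 ratio is the same.

import itertools

# Hypothetical weights: server-a can handle twice the load of server-b.
WEIGHTS = {"server-a": 2, "server-b": 1}

# Expand the rotation so each server appears as many times as its weight.
schedule = itertools.cycle(
    [name for name, weight in WEIGHTS.items() for _ in range(weight)]
)

def pick_server_weighted():
    return next(schedule)

for request_id in range(6):
    print(request_id, "->", pick_server_weighted())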
With this method, the system passes a new
connection to the server that has the least
number of current connections.
Least Connections methods work best in
environments where the servers or other
equipment you are load balancing have
similar capabilities.
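A minimal sketch of least-connections selection, tracking active connection counts in a plain dictionary (the servers and counts are illustrative):

# Active connection counts per server (hypothetical starting state).
active_connections = {"server-a": 0, "server-b": 0, "server-c": 0}

def pick_server_least_connections():
    # Choose the server currently holding the fewest connections
    # and account for the new connection assigned to it.
    server = min(active_connections, key=active_connections.get)
    active_connections[server] += 1
    return server

def connection_closed(server):
    # Release the slot when the connection terminates.
    active_connections[server] -= 1

for request_id in range(5):
    print(request_id, "->", pick_server_least_connections())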
The Linux Virtual Server is a highly scalable
and highly available server built on a cluster
of real servers, with the load balancer running
on the Linux operating system. The
architecture of the server cluster is fully
transparent to end users, and the users
interact as if it were a single high-
performance virtual server.
Started in 1998, the Linux Virtual Server (LVS)
project combines multiple physical servers into one
virtual server, eliminating single points of failure
(SPOF).

Built with off-the-shelf components, LVS is already
in use in some of the highest-trafficked sites on the
Web.



The load balancer, the real
servers, and shared
storage are usually
connected by a high-
speed network, so that the
internal network does not
become a bottleneck of the
system as the cluster grows.

LVS uses three ways to balance loads :

Virtual Server via NAT (VS/NAT)
Virtual Server via Tunneling (VS/TUN)
Virtual Server via Direct Routing (VS/DR)


1. A request packet destined for the virtual IP
address arrives at the load balancer.
2. The load balancer examines the packet's destination
address and port number; if they match a virtual service,
a real server is selected from the cluster by the
scheduling algorithm, and the connection is added to a
hash table that records established connections.
3. The packet is rewritten to the address and port of the
selected real server and forwarded to it, and the server
processes the request.
4. The load balancer rewrites the source address
and port of the reply packets to those of the virtual
service; after the connection terminates, its record is
removed from the hash table.
5. The reply is sent back to the user.
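The bookkeeping in these steps can be sketched in Python; the addresses, ports, and scheduling below are simplified stand-ins chosen for illustration, and no real packets are handled.

VIRTUAL_SERVICE = ("192.168.1.100", 80)                    # assumed virtual IP and port
REAL_SERVERS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]  # assumed real servers
connections = {}                                           # hash table of active connections
_next = 0

def schedule():
    # Stand-in for the scheduling algorithm (plain round robin here).
    global _next
    server = REAL_SERVERS[_next % len(REAL_SERVERS)]
    _next += 1
    return server

def inbound(client, destination):
    # Steps 1-2: a packet arrives; if it targets the virtual service,
    # select a real server and record the connection in the hash table.
    if destination != VIRTUAL_SERVICE:
        return None
    server = schedule()
    connections[client] = server
    # Step 3: the packet is rewritten to the real server's address and forwarded.
    return server

def outbound(client):
    # Step 4: replies are rewritten so they appear to come from the virtual service.
    return VIRTUAL_SERVICE

def connection_terminated(client):
    # Step 4 (continued): drop the record once the connection ends.
    connections.pop(client, None)

client = ("203.0.113.5", 40000)
print("forwarded to", inbound(client, VIRTUAL_SERVICE), "reply source", outbound(client))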



IP tunneling is a technique to encapsulate IP
datagrams within IP datagrams, which allows
datagrams to be wrapped and redirected to
another IP address.

This technique can also be used to build a
virtual server: the load balancer tunnels the
request packets to the different servers, which
process the requests and return the results
to the clients directly. Thus, the service still
appears as a virtual server on a single IP address.
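The wrapping itself can be sketched abstractly; the Datagram class below is purely illustrative (it is not real IP packet construction, and all addresses are hypothetical), but it shows the original datagram travelling unchanged inside an outer datagram addressed to the chosen real server.

from dataclasses import dataclass

@dataclass
class Datagram:
    src: str
    dst: str
    payload: object   # for the outer datagram, this is the inner datagram

def tunnel(request: Datagram, real_server_ip: str) -> Datagram:
    # The load balancer wraps the original request in a new datagram
    # addressed to the selected real server (IP-in-IP encapsulation).
    return Datagram(src="192.168.1.1", dst=real_server_ip, payload=request)

# A request to the virtual IP is encapsulated and sent to a real server,
# which unwraps the payload and replies to the client directly.
request = Datagram(src="203.0.113.5", dst="192.168.1.100", payload="GET /")
print(tunnel(request, "10.0.0.11"))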



The load balancer and the real servers must have one of
their interfaces on the same uninterrupted LAN segment.
The virtual IP address is shared by the real servers and the
load balancer.
In VS/DR, the load balancer directly routes a packet to
the selected server.
When a server receives the forwarded packet, it finds the
virtual IP address configured on one of its interfaces,
processes the request, and finally returns
the result directly to the user.







THANK YOU
