CHAPTER- 01
VIBE NETWORKS Ltd.
1.1 Company Profile:
Vibe Group of Companies is a leading IT solution and service provider serving clients across the region as well as global clients. Our expertise lies in helping clients plan, build and manage their IT infrastructure, spanning the domains of IT, web, software, online marketing, internet security and other related areas. The group started as Vibe Internet Solutions in 2005 and has since grown into a name to reckon with in the sphere of high-performance enterprise applications. More recently we have ventured into the internetworking domain with Vibe Networks, to share with our clients our expertise in networking, security, voice technologies and Microsoft solutions. The credit for this growth, from a web development company to a provider of high-end network integration services, goes to a quality-oriented, focused approach and a genuine effort to understand each client's needs.
1.2 Goal:
The goal of Vibe Network Solutions is to be responsive to its users and to provide them with software and networking solutions that help them be more efficient and productive in their work. The company plans to continually improve its software to help clients take advantage of new technology.

1.3 Company's View about Networks:
Computer networks today have grown far beyond the outdated perception of a few computers connected together with some cables. When we talk of computer networks today, we are actually talking about the information superhighways that form the backbone of the extraordinary amount of digital information being generated and pushed around the world. Mankind's ever-growing need for communication is constantly pushing the envelope beyond commonly accepted boundaries, and a robust network is what lies behind it all to make this possible. It is hard to predict what possibilities the future may open up, simply because of the surprises technology keeps throwing at us. Technologies like 24x7 high-speed wireless internet, TelePresence and 3D holographic projection, which were beyond imagination just five to seven years ago, are taken for granted today. Such has been the pace at which technology has grown. With convergence as the new buzzword, the concept of the 'world at your palmtop' is just around the corner.

1.4 Field of Excellence:
Vibe Networks is a vendor-independent IT infrastructure solution provider.
We have designed, built and managed large and complex mission-critical IT
infrastructure solutions. We help organizations manage their infrastructure better in
addition to enabling their growth. We partner with our customers in conceptualizing
and implementing infrastructure solutions with a view to aligning IT initiatives with
business goals in the best possible way. What sets us apart is our Service
Excellence, an outcome of a proven service delivery model that integrates people,
processes, and technology. The use of cutting edge infrastructure intelligence tools
enables us to produce result-oriented solutions. With a resource pool consisting of a veteran senior management team and an elite certified engineering team, we have the requisite expertise and experience to craft unique solutions, leveraging IT to produce business results.

1.5 Objectives:
The objectives of the company include the following:
Establish itself as an institute of excellence for imparting education and training to generate quality manpower in areas of Information, Electronics and Communication Technology (IECT).
Facilitate education and training institutes in the non-formal sector.
Develop a mechanism for dynamic revision of course curricula and development of learning materials in textbook, CD-ROM and web-based form.
Impart continuing education, refresher training and corporate training to engineering graduates, working professionals and others.
Develop and implement new schemes of courses in emerging areas as required by industries and others.
Undertake development projects and provide services in IT and related areas.
Our foremost objective is to serve the people, because we believe that we can serve our nation only by serving our people.

CHAPTER 02
FUNDAMENTALS OF A NETWORK:
2.1 What is a Network?
A network, often simply referred to as a computer network, is a collection of
computers and devices connected by communications channels that facilitates
communications among users and allows users to share resources with other
users. A computer network allows sharing of resources and information
among devices connected to the network.
A computer network is a group of two or more computers connected to each other electronically. This means that the computers can "talk" to each other and that
every computer in the network can send information to the others.
In the world of computers, networking is the practice of linking two or more
computing devices together for the purpose of sharing data. Networks are built
with a mix of computer hardware and computer software.


Fig 2.1: A Computer Network
Thus networking is the practice of linking two or more computers or devices with
each other. The connectivity can be wired or wireless. In a nutshell computer
networking is the engineering discipline concerned with the communication between
computer systems or devices. Computer networking is sometimes considered a sub-
discipline of telecommunications, computer science, information technology and
electronics engineering since it relies heavily upon the theoretical and practical
application of these scientific and engineering disciplines.
2.2 Network Classification:
A computer network is a system for communication among two or more computers. Though there are numerous ways of classifying a network, the most popular categorizations are by range, functional relationship, network topology and specialized function.
2.2.1 By Range:
Local area network (LAN): A local area network is a network that connects
computers and devices in a limited geographical area such as home, school,
computer laboratory, office building, or closely positioned group of buildings.
Each computer or device on the network is a node. Current wired LANs are
most likely to be based on Ethernet technology, although new standards like
ITU-T G.hn also provide a way to create a wired LAN using existing home
wires (coaxial cables, phone lines and power lines).

Fig 2.2: A Typical Local Area Network
All interconnected devices must understand the network layer (layer 3),
because they are handling multiple subnets (the different colors). Those inside
the library, which have only 10/100 Mbit/s Ethernet connections to the user
device and a Gigabit Ethernet connection to the central router, could be called
"layer 3 switches" because they only have Ethernet interfaces and must
understand IP. It would be more correct to call them access routers, where the
router at the top is a distribution router that connects to the Internet and
academic networks' customer access routers. The defining characteristics of
LANs, in contrast to WANs (Wide Area Networks), include their higher data
transfer rates, smaller geographic range, and no need for leased
telecommunication lines. Current Ethernet or other IEEE 802.3 LAN
technologies operate at speeds up to 10 Gbit/s. This is the data transfer rate.
IEEE has projects investigating the standardization of 40 and 100 Gbit/s.
Metropolitan area network (MAN): A metropolitan area network is a large computer network that usually spans a city or a large campus; its geographic scope falls between a LAN and a WAN. A MAN usually interconnects a number of local area networks (LANs) using a high-capacity backbone technology, such as fiber-optic links, and provides uplink services to wide area networks and the Internet, giving the LANs in a metropolitan region connectivity to wider networks.


Fig 2.3: A Simple MAN
Wide area network (WAN): The term Wide Area Network (WAN) usually refers to a network which covers a large geographical area and uses communications circuits to connect the intermediate nodes. A major factor
impacting WAN design and performance is a requirement that they lease
communications circuits from telephone companies or other communications
carriers. Transmission rates are typically 2 Mbps, 34 Mbps, 45 Mbps, 155
Mbps, 625 Mbps (or sometimes considerably more). Numerous WANs have
been constructed, including public packet networks, large corporate networks,
military networks, banking networks, stock brokerage networks, and airline
reservation networks. Some WANs are very extensive, spanning the globe, but
most do not provide true global coverage. Organisations supporting WANs
using the Internet Protocol are known as Network Service Providers (NSPs).
These form the core of the Internet. By connecting the NSP WANs together
using links at Internet Packet Interchanges (sometimes called "peering points")
a global communication infrastructure is formed. NSPs do not generally
handle individual customer accounts (except for the major corporate
customers), but instead deal with intermediate organisations whom they can
charge for high capacity communications. They generally have an agreement
to exchange certain volumes of data at a certain "quality of service" with other
NSPs. So practically any NSP can reach any other NSP, but may require the
use of one or more other NSP networks to reach the required destination.
NSPs vary in terms of the transit delay, transmission rate, and connectivity
offered. Since radio communications systems do not provide a physically
secure connection path, WWANs typically incorporate encryption and
authentication methods to make them more secure. Unfortunately some of the
early GSM encryption techniques were flawed, and security experts have
issued warnings that cellular communication, including WWAN, is no longer
secure. UMTS (3G) encryption was developed later and has yet to be broken.
Personal area network (PAN): A personal area network is a computer
network used for communication among computer devices, including
telephones and personal digital assistants, in proximity to an individual's body.
The devices may or may not belong to the person in question. The reach of a
PAN is typically a few meters. PANs can be used for communication among
the personal devices themselves (intrapersonal communication), or for
connecting to a higher level network and the Internet (an uplink). Personal
area networks may be wired with computer buses such as USB and FireWire.
A wireless personal area network (WPAN) can also be made possible with
network technologies such as IrDA, Bluetooth, UWB, Z-Wave and ZigBee.

Fig 2.4: Personal Area Network
Virtual Private Network (VPN): A virtual private network (VPN) is a
computer network in which some of the links between nodes are carried by
open connections or virtual circuits in some larger network (e.g., the Internet)
instead of by physical wires. The data link layer protocols of the virtual
network are said to be tunnelled through the larger network when this is the
case. One common application is secure communications through the public
Internet, but a VPN need not have explicit security features, such as
authentication or content encryption. VPNs, for example, can be used to
separate the traffic of different user communities over an underlying network
with strong security features. A VPN may have best-effort performance, or
may have a defined service level agreement (SLA) between the VPN customer
and the VPN service provider. Generally, a VPN has a topology more complex
than point-to-point.


Fig 2.5: VPN used to interconnect 3 offices and remote users
2.2.2 By Functional Relationship:
Client-server: Client-server model of computing is a distributed application
structure that partitions tasks or workloads between service providers, called
servers, and service requesters, called clients. Often clients and servers
communicate over a computer network on separate hardware, but both client
and server may reside in the same system. A server machine is a host that is
running one or more server programs which share its resources with clients. A
client does not share any of its resources, but requests a server's content or
service function. Clients therefore initiate communication sessions with
servers which await incoming requests. The characteristics of the transmission
facilities lead to an emphasis on efficiency of communications techniques in
the design of WANs. Controlling the volume of traffic and avoiding excessive
delays is important. Since the topologies of WANs are likely to be more
complex than those of LANs, routing algorithms also receive more emphasis.
Many WANs also implement sophisticated monitoring procedures to account
for which users consume the network resources.


Fig 2.6: Client-Server Setup
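The request/response pattern described above can be illustrated with a minimal Python sketch (the loopback address, port number and message below are illustrative assumptions, not details of any particular product): the server program shares a simple resource, the current time, and waits for incoming requests, while the client initiates the session and consumes the service.

    # Minimal client-server sketch (illustrative only): the server shares the
    # current time as a "resource"; the client initiates the session.
    import socket, datetime

    HOST, PORT = "127.0.0.1", 5000          # assumed loopback address and free port

    def run_server():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen()                    # the server awaits incoming requests
            conn, addr = srv.accept()       # a client has initiated a session
            with conn:
                conn.recv(1024)             # read the client's request
                conn.sendall(datetime.datetime.now().isoformat().encode())

    def run_client():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))       # the client initiates communication
            cli.sendall(b"TIME?")           # request the server's service
            print("Server replied:", cli.recv(1024).decode())

Run run_server() in one process (or thread) and run_client() in another; the roles never reverse, which is the defining difference from the peer-to-peer model described next.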
Peer-to-peer: A peer-to-peer network, commonly abbreviated to P2P, is any
distributed network architecture composed of participants that make a portion
of their resources (such as processing power, disk storage or network
bandwidth) directly available to other network participants, without the need
for central coordination instances (such as servers or stable hosts). Peers are
both suppliers and consumers of resources, in contrast to the traditional client
server model where only servers supply, and clients consume. Peer-to-peer
was popularized by file sharing systems like Napster. Peer-to-peer file sharing
networks have inspired new structures and philosophies in other areas of
human interaction. In such social contexts, peer-to-peer as a meme refers to
the egalitarian social networking that is currently emerging throughout society,
enabled by Internet technologies in general. P2P networks are typically used
for connecting nodes via largely ad hoc connections. Sharing content files
containing audio, video, data or anything in digital format is very common,
and real time data, such as telephony traffic, is also passed using P2P
technology.

Fig 2.7: A Peer-to-Peer Network

Multitier architecture: Multi-tier architecture (often referred to as n-tier
architecture) is an architecture in which the presentation, the application
processing, and the data management are logically separate processes. For
example, an application that uses middleware to service data requests between
a user and a database employs multi-tier architecture. The most widespread
use of "multi-tier architecture" refers to three-tier architecture. N-tier
application architecture provides a model for developers to create a flexible
and reusable application. By breaking up an application into tiers, developers
only have to modify or add a specific layer, rather than have to rewrite the
entire application over. There should be a presentation tier, a business or data
access tier, and a data tier. The concepts of layer and tier are often used
interchangeably. However, one fairly common point of view is that there is
indeed a difference, and that a layer is a logical structuring mechanism for the
elements that make up the software solution, while a tier is a physical
structuring mechanism for the system infrastructure. Apart from the usual
advantages of modular software with well defined interfaces, the three-tier
architecture is intended to allow any of the three tiers to be upgraded or
replaced independently as requirements or technology change. For example, a
change of operating system in the presentation tier would only affect the user
interface code.

Fig 2.8: Multitier architecture
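As a rough sketch of the logical separation just described (the class names and data are hypothetical), the Python fragment below keeps presentation, business logic and data access in separate tiers, so that any one tier could be modified or replaced without rewriting the others:

    # Hypothetical three-tier sketch: each tier only talks to the tier below it.
    class DataTier:                          # data management
        def __init__(self):
            self._orders = {1: "router", 2: "switch"}
        def fetch_order(self, order_id):
            return self._orders.get(order_id)

    class BusinessTier:                      # application processing / business rules
        def __init__(self, data):
            self.data = data
        def describe_order(self, order_id):
            item = self.data.fetch_order(order_id)
            return f"Order {order_id}: {item}" if item else "No such order"

    class PresentationTier:                  # user interface
        def __init__(self, logic):
            self.logic = logic
        def show(self, order_id):
            print(self.logic.describe_order(order_id))

    PresentationTier(BusinessTier(DataTier())).show(1)   # prints "Order 1: router"

Swapping DataTier for one backed by a real database, for example, would not require any change to the presentation code.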

2.2.3 By Network Topology:
Bus network: A bus network topology is a network architecture in which a set
of clients are connected via a shared communications line, called a bus. There
are several common instances of the bus architecture, including one in the
motherboard of most computers, and those in some versions of Ethernet
networks. Bus networks are the simplest way to connect multiple clients, but
may have problems when two clients want to transmit at the same time on the
same bus. Thus systems which use bus network architectures normally have
some scheme of collision handling or collision avoidance for communication
on the bus, quite often using Carrier Sense Multiple Access or the presence of
a bus master which controls access to the shared bus resource. A true bus network is passive: the computers on the bus simply listen for a signal; they are not responsible for moving the signal along. However, many active architectures can also be described as a "bus", as they provide the same logical functions as a passive bus; for example, switched Ethernet can still be regarded as a logical bus network, if not a physical one. Indeed, the hardware may
be abstracted away completely in the case of a software bus. With the
dominance of switched Ethernet over passive Ethernet, passive bus networks
are uncommon in wired networks. However, almost all current wireless
networks can be viewed as examples of passive bus networks, with radio
propagation serving as the shared passive medium. The bus topology makes
the addition of new devices straightforward. The term used to describe clients
is station or workstation in this type of network. Bus network topology uses a
broadcast channel which means that all attached stations can hear every
transmission and all stations have equal priority in using the network to
transmit data.

Fig 2.9: Bus Topology

Star network: Star networks are one of the most common computer network
topologies. In its simplest form, a star network consists of one central switch,
hub or computer, which acts as a conduit to transmit messages. Thus, the hub
and leaf nodes, and the transmission lines between them, form a graph with
the topology of a star. If the central node is passive, the originating node must
be able to tolerate the reception of an echo of its own transmission, delayed by
the two-way transmission time (i.e. to and from the central node) plus any
delay generated in the central node. An active star network has an active
central node that usually has the means to prevent echo-related problems. The
star topology reduces the chance of network failure by connecting all of the
systems to a central node. When applied to a bus-based network, this central
hub rebroadcasts all transmissions received from any peripheral node to all
peripheral nodes on the network, sometimes including the originating node.
All peripheral nodes may thus communicate with all others by transmitting to,
and receiving from, the central node only. The failure of a transmission line
linking any peripheral node to the central node will result in the isolation of
that peripheral node from all others, but the rest of the systems will be
unaffected. A star network is also designed with each node (file server, workstations, and peripherals) connected directly to a central network hub, switch, or concentrator. Data on a star network passes through the hub, switch, or concentrator before continuing to its destination. The hub, switch, or concentrator manages and controls all functions of the network. It also acts
as a repeater for the data flow. This configuration is common with twisted pair
cable. However, it can also be used with coaxial cable or optical fibre cable.

Fig 2.10: Star Topology

Ring network: A ring network is a network topology in which each node
connects to exactly two other nodes, forming a single continuous pathway for
signals through each node - a ring. Data travels from node to node, with each
node along the way handling every packet. Because a ring topology provides
only one pathway between any two nodes, ring networks may be disrupted by
the failure of a single link. A node failure or cable break might isolate every
node attached to the ring. FDDI networks overcome this vulnerability by
sending data on a clockwise and a counter clockwise ring: in the event of a
break data is wrapped back onto the complementary ring before it reaches the
end of the cable, maintaining a path to every node along the resulting "C-
Ring". 802.5 networks -- also known as IBM Token Ring networks -- avoid
the weakness of a ring topology altogether: they actually use a star topology at
the physical layer and a Multistation Access Unit (MAU) to imitate a ring at
the data link layer. Many ring networks add a "counter-rotating ring" to form a
redundant topology. The advantages of ring topology include the following:
Very orderly network, where every device has access to the token and the opportunity to transmit.
Performs better than a star topology under heavy network load.
Can create a much larger network using Token Ring.
Does not require a network server to manage the connectivity between the computers.


Fig 2.11: Ring Topology
Grid network: A grid network is a kind of computer network consisting of a
number of (computer) systems connected in a grid topology. In a regular grid
topology, each node in the network is connected with two neighbours along
one or more dimensions. If the network is one-dimensional, and the chain of
nodes is connected to form a circular loop, the resulting topology is known as
a ring. In general, when an n-dimensional grid network is connected circularly
in more than one dimension, the resulting network topology is a torus, and the
network is called toroidal.
Tree and hypertree networks: A Tree Network consists of star-configured
nodes connected to switches/concentrators, each connected to a linear bus
backbone. Each hub/concentrator rebroadcasts all transmissions received from
any peripheral node to all peripheral nodes on the network, sometimes
including the originating node. All peripheral nodes may thus communicate
with all others by transmitting to, and receiving from, the central node only.
The failure of a transmission line linking any peripheral node to the central
node will result in the isolation of that peripheral node from all others, but the
rest of the systems will be unaffected.

Fig 2.12: Tree Type Topology
2.3 Elements of a Network:
A network element is usually defined as a manageable logical entity uniting one
or more physical devices. This allows distributed devices to be managed in a unified
way using one management system. Elements of the network include the entities on which the network runs: routers, switches, hubs, bridges, network cards, repeaters, filters, modems and connecting cables. All of these network components
are discussed in detail below:
Routers: A router is a device that interconnects two or more computer
networks, and selectively interchanges packets of data between them. Each
data packet contains address information that a router can use to determine if
the source and destination are on the same network, or if the data packet must
be transferred from one network to another. Where multiple routers are used in
a large collection of interconnected networks, the routers exchange
information about target system addresses, so that each router can build up a
table showing the preferred paths between any two systems on the
interconnected networks. A router is a networking device whose software and
hardware are customized to the tasks of routing and forwarding information. A
router has two or more network interfaces, which may be to different physical
types of network or different network standards. Each network interface is a
small computer specialized to convert electric signals from one form to
another. Routers connect two or more logical subnets, which do not share a
common network address. The subnets in the router do not necessarily map
one-to-one to the physical interfaces of the router. The term "layer 3 switching" is often used interchangeably with the term "routing". The term switching is generally
used to refer to data forwarding between two network devices that share a
common network address. This is also called layer 2 switching or LAN
switching.

Fig 2.13: Cisco 3640 Routers
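The forwarding decision described above can be modelled with a small, hypothetical routing table lookup in Python (the prefixes and next-hop addresses are invented for illustration); among all routes that match the destination, the router prefers the most specific one, which is the longest-prefix-match rule used by IP routers:

    # Hypothetical routing-table lookup using longest prefix match.
    import ipaddress

    routing_table = [                                   # (destination prefix, next hop)
        (ipaddress.ip_network("0.0.0.0/0"),   "192.0.2.1"),    # default route
        (ipaddress.ip_network("10.0.0.0/8"),  "10.255.0.1"),
        (ipaddress.ip_network("10.1.0.0/16"), "10.1.0.254"),
    ]

    def next_hop(destination):
        dest = ipaddress.ip_address(destination)
        matches = [(net, hop) for net, hop in routing_table if dest in net]
        net, hop = max(matches, key=lambda entry: entry[0].prefixlen)   # most specific wins
        return hop

    print(next_hop("10.1.2.3"))   # 10.1.0.254 (matches 10.1.0.0/16)
    print(next_hop("8.8.8.8"))    # 192.0.2.1  (falls back to the default route)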
Switches: A network switch or switching hub is a computer networking
device that connects network segments. Switches may operate at one or more
OSI layers, including physical, data link, network, or transport (i.e., end-to-
end). A device that operates simultaneously at more than one of these layers is
known as a multilayer switch. In switches intended for commercial use, built-
in or modular interfaces make it possible to connect different types of
networks, including Ethernet, Fibre Channel, ATM, ITU-T G.hn and 802.11.
This connectivity can be at any of the layers mentioned. While Layer 2
functionality is adequate for speed-shifting within one technology,
interconnecting technologies such as Ethernet and token ring are easier at
Layer 3. Interconnection of different Layer 3 networks is done by routers. If
there are any features that characterize "Layer-3 switches" as opposed to
general-purpose routers, it tends to be that they are optimized, in larger
switches, for high-density Ethernet connectivity.

Fig 2.14: Cisco Catalyst Switches
Hubs: A hub, essentially a network hub, is a device for connecting multiple
twisted pair or fiber optic Ethernet devices together and making them act as a
single network segment. Hubs work at the physical layer (layer 1) of the OSI
model. The device is a form of multiport repeater. Repeater hubs also
participate in collision detection, forwarding a jam signal to all ports if it
detects a collision. A network hub is a fairly unsophisticated broadcast device.
Hubs do not manage any of the traffic that comes through them, and any
packet entering any port is broadcast out on all other ports. Since every packet
is being sent out through all other ports, packet collisions result, which
greatly impedes the smooth flow of traffic. The need for hosts to be able to
detect collisions limits the number of hubs and the total size of a network built
using hubs (a network built using switches does not have these limitations).
For 10 Mbit/s networks, up to 5 segments (4 hubs) are allowed between any
two end stations. For 100 Mbit/s networks, the limit is reduced to 3 segments
(2 hubs) between any two end stations, and even that is only allowed if the
hubs are of the low delay variety. Some hubs have special (and generally
manufacturer specific) stack ports allowing them to be combined in a way that
allows more hubs than simple chaining through Ethernet cables, but even so, a
large Fast Ethernet network is likely to require switches to avoid the chaining
limits of hubs.


Fig 2.15: A Simple Hub
Bridges: A Network Bridge connects multiple network segments at the data
link layer (Layer 2) of the OSI model. In Ethernet networks, the term Bridge
formally means a device that behaves according to the IEEE 802.1D standard.
A bridge and switch are very much alike; a switch being a bridge with
numerous ports. Switch or Layer 2 switch is often used interchangeably with
Bridge. Bridges are similar to repeaters or network hubs, devices that connect
network segments at the physical layer; however, with bridging, traffic from
one network is managed rather than simply rebroadcast to adjacent network
segments. Bridges are more complex than hubs or repeaters. Bridges can
analyze incoming data packets to determine if the bridge is able to send the
given packet to another segment of the network.

Fig 2.16: A Network Bridge
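The selective forwarding described above can be sketched as a simple learning table in Python (the MAC addresses and port numbers are hypothetical): the bridge records which port each source address was seen on and floods a frame only while the destination is still unknown.

    # Hypothetical learning-bridge sketch: forward only where the destination is known.
    mac_table = {}                  # MAC address -> bridge port, learned from traffic

    def handle_frame(src_mac, dst_mac, in_port, ports=(1, 2, 3, 4)):
        mac_table[src_mac] = in_port                    # learn where the sender lives
        out_port = mac_table.get(dst_mac)
        if out_port is None:
            return [p for p in ports if p != in_port]   # unknown destination: flood
        if out_port == in_port:
            return []                                   # same segment: do not forward
        return [out_port]                               # known destination: one segment only

    print(handle_frame("aa:aa", "bb:bb", in_port=1))    # flood: [2, 3, 4]
    print(handle_frame("bb:bb", "aa:aa", in_port=2))    # learned: forward to [1]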
Repeaters: A network repeater is a device used to expand the boundaries of a
wired or wireless (Wi-Fi) local area network (LAN). In the past, wired
network repeaters were used to join segments of Ethernet cable. The repeaters
would amplify the data signals before sending them on to the uplinked
segment, thereby countering signal decay that occurs over extended lengths of
wire. Modern Ethernet networks use more sophisticated switching devices,
leaving the wireless flavour of the network repeater a more popular device for
use with wireless LANs (WLANs) at work and home. One option, for example, is to set up a network repeater on a lower floor, halfway between the basement and the upstairs office. The repeater should amplify the signal enough to get
good coverage in the upstairs floor. If the building is quite large, several
network repeaters can be placed strategically to draw the signal where
required, though this concept has its limits. Devices communicating with an
intermediate network repeater will have lower performance stats than those
communicating directly with the router. This becomes more of an issue as
additional repeaters are used in line.

Fig 2.17: Network Repeaters
Modems: A modem (modulator-demodulator) is a device that modulates an
analog carrier signal to encode digital information, and also demodulates such
a carrier signal to decode the transmitted information. The goal is to produce a
signal that can be transmitted easily and decoded to reproduce the original
digital data. Modems can be used over any means of transmitting analog
signals, from light-emitting diodes to radio. The most familiar example is a voice
band modem that turns the digital data of a personal computer into analog
audio signals that can be transmitted over a telephone line, and once received
on the other side, a modem converts the analog data back into digital. Modems
are generally classified by the amount of data they can send in a given time,
normally measured in bits per second (bit/s, or bps). They can also be
classified by Baud, the number of times the modem changes its signal state per
second. A simple type of a modem is shown below in the figure:


Fig 2.18: Modem
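As an illustrative calculation (the figures are assumed rather than taken from any particular product): a modem signalling at 2,400 baud with a modulation scheme that encodes 4 bits per signal change carries 2,400 x 4 = 9,600 bit/s, so the bit rate equals the baud rate only when each signal change carries a single bit.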
Network Cables: Communication is the process of transferring signals from
one point to another and there must be some medium to transfer those signals.
In computer networking and especially in the local area networking, there are
certain communication mediums. This section provides the basic overview of
the network cables, LAN communication system and other transmission
mediums in LAN and WAN. Today many standardized communication cables and communication devices are in use according to the needs of a computer network. In LAN data communication systems, different types of cables are used; the most common LAN cables are Ethernet UTP/STP cables. An Ethernet cable is a twisted pair cable consisting of eight wires that are paired together to make four pairs. An RJ-45 connector is attached to both ends of the cable; one end is connected to the LAN card of the computer and the other end to a hub or switch. Cable testers are used to test the performance of each cable. The preferred cable in Ethernet networking is 100BaseT, which provides the best communication speed. UTP/STP is a standardized cable that provides transmission speeds of 10/100 Mbps, and it is the cable most commonly used in the star topology. UTP and STP cables are the same in functionality; the only difference is that STP has an extra protective, silver-coated shielding layer surrounding the wires. UTP/STP cables are further divided into straight-through and crossover cables. The most common uses of UTP/STP cables are serial transmission, Ethernet, ISDN, and fixed and modular interfaces in WAN networking. Straight-through cables are used to connect a computer to a hub or switch, and a crossover cable is used to connect a hub to another hub or to a switch.


Fig 2.19: Types of Cables
Coaxial cables are also used at microwave frequencies, but they are not as popular as other cables. The most advanced form of communication cable is the fiber optic cable. Fiber optic cables are designed for high-speed data communication for corporate offices, ISPs, backbones and the telecommunication industry; a fiber optic cable acts as a backbone cable when it connects two ISPs with each other, and it plays a major role as a backbone in Internet communication. There is another type of twisted pair cable that is used to connect to the consoles of Cisco routers and switches, with RJ-45 connectors at both ends of the cable.
2.4 Networking Models:
Network models define a set of network layers and how they interact. There are
several different network models depending on what organization or company started
them. The most important two are:
The TCP/IP Model - This model is sometimes called the DoD model since it was designed for the United States Department of Defense. It is also called the Internet model because TCP/IP is the protocol used on the Internet.

OSI Network Model - The International Organization for Standardization (ISO) has defined a standard called the Open Systems Interconnection (OSI) reference
model. This is a seven layer architecture listed in the next section.
2.4.1 The TCP/IP Model:
The TCP/IP model is a description framework for computer network protocols
created in the 1970s by DARPA, an agency of the United States Department of
Defense. It evolved from ARPANET, which was the world's first wide area network
and a predecessor of the Internet. The TCP/IP Model is sometimes called the Internet
Model or the DoD Model. The TCP/IP model, or Internet Protocol Suite, describes a
set of general design guidelines and implementations of specific networking protocols
to enable computers to communicate over a network. TCP/IP provides end-to-end
connectivity specifying how data should be formatted, addressed, transmitted, routed
and received at the destination. Protocols exist for a variety of different types of
communication services between computers.

Fig 2.20: TCP/IP Model
Layers in the TCP/IP Model:
The layers near the top are logically closer to the user application, while those
near the bottom are logically closer to the physical transmission of the data.
Viewing layers as providing or consuming a service is a method of abstraction to
isolate upper layer protocols from the nitty-gritty detail of transmitting bits over,
for example, Ethernet and collision detection, while the lower layers avoid having
to know the details of each and every application and its protocol. The following
is a description of each layer in the TCP/IP networking model starting from the
lowest level:
i. Data Link Layer: The Data Link Layer is the networking scope of the local
network connection to which a host is attached. This regime is called the link
in Internet literature. This is the lowest component layer of the Internet
protocols, as TCP/IP is designed to be hardware independent. As a result
TCP/IP has been implemented on top of virtually any hardware networking
technology in existence. The Data Link Layer is used to move packets
between the Internet Layer interfaces of two different hosts on the same link.
The processes of transmitting and receiving packets on a given link can be
controlled both in the software device driver for the network card, as well as
on firmware or specialized chipsets. These will perform data link functions
such as adding a packet header to prepare it for transmission, and then actually
transmit the frame over a physical medium.
ii. Network Layer: The Network Layer solves the problem of sending packets
across one or more networks. Internetworking requires sending data from the
source network to the destination network. This process is called routing. In
the Internet Protocol Suite, the Internet Protocol performs two basic functions:
Host addressing and identification and Packet routing. IP can carry data for a
number of different upper layer protocols. These protocols are each identified
by a unique protocol number: for example, Internet Control Message Protocol
(ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and
2, respectively.
iii. Transport Layer: The Transport Layer's responsibilities include end-to-end
message transfer capabilities independent of the underlying network, along
with error control, segmentation, flow control, congestion control, and
application addressing (port numbers). End to end message transmission or
connecting applications at the transport layer can be categorized as either
connection-oriented, implemented in Transmission Control Protocol (TCP), or
connectionless, implemented in User Datagram Protocol (UDP). The
Transport Layer can be thought of as a transport mechanism, e.g., a vehicle
with the responsibility to make sure that its contents (passengers/goods) reach
their destination safely and soundly, unless another protocol layer is
responsible for safe delivery. The Transport Layer provides this service of
connecting applications through the use of service ports. Since IP provides
only a best effort delivery, the Transport Layer is the first layer of the TCP/IP
stack to offer reliability. IP can run over a reliable data link protocol such as
the High-Level Data Link Control (HDLC). Protocols above transport, such as
RPC, also can provide reliability.
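The contrast between the connection-oriented and connectionless services mentioned above can be seen in a short Python sketch (the loopback address, port and messages are assumptions for illustration): with UDP no session is set up, each datagram is simply addressed to a host and service port, and delivery is best effort.

    # Hypothetical connectionless (UDP) exchange: no connection establishment,
    # each datagram is addressed directly to a host and service port.
    import socket

    HOST, PORT = "127.0.0.1", 5001          # assumed loopback address and free port

    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind((HOST, PORT))

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(b"ping", (HOST, PORT))            # no connect(): just send a datagram

    data, addr = server.recvfrom(1024)              # datagram arrives with the sender's address
    server.sendto(b"pong", addr)                    # best-effort reply, no delivery guarantee

    print(client.recvfrom(1024)[0].decode())        # prints "pong"
    server.close(); client.close()

A TCP version of the same exchange would first establish a connection (connect/accept) and would retransmit lost segments, which is where the reliability of the transport layer comes from.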
iv. Application Layer: The Application Layer is the top layer of the TCP/IP model and contains the higher-level protocols, such as HTTP, SMTP, FTP and DNS, that applications use to exchange user data over the network. Application Layer
protocols generally treat the transport layer (and lower) protocols as "black
boxes" which provide a stable network connection across which to
communicate, although the applications are usually aware of key qualities of
the transport layer connection such as the end point IP addresses and port
numbers. As noted above, layers are not necessarily clearly defined in the
Internet protocol suite.
2.4.2 OSI Reference Network Model:
The Open System Interconnection (OSI) reference model describes how
information from a software application in one computer moves through a network
medium to a software application in another computer. The OSI reference model is a
conceptual model composed of seven layers, each specifying particular network
functions. The model was developed by the International Organization for
Standardization (ISO) in 1984, and it is now considered the primary architectural
model for intercomputer communications. The OSI model divides the tasks involved
with moving information between networked computers into seven smaller, more
manageable task groups. A task or group of tasks is then assigned to each of the seven
OSI layers. Each layer is reasonably self-contained so that the tasks assigned to each
layer can be implemented independently. This enables the solutions offered by one
layer to be updated without adversely affecting the other layers. The following
diagram details the seven layers of the Open System Interconnection (OSI) reference
model:

Fig 2.21: The OSI Reference Model Showing Seven Layers
Characteristics of the OSI Layers:
The seven layers of the OSI reference model can be divided into two
categories: upper layers and lower layers. The upper layers of the OSI model deal
with application issues and generally are implemented only in software. The
highest layer, the application layer, is closest to the end user. Both users and
application layer processes interact with software applications that contain a
communications component. The term upper layer is sometimes used to refer to
any layer above another layer in the OSI model. The lower layers of the OSI
model handle data transport issues. The lowest layer, the physical layer, is closest
to the physical network medium and is responsible for actually placing
information on the medium.

Fig 2.22: Two Sets of Layers Make Up the OSI Layers

Description of the OSI Layers:
I. Physical Layer: It defines the electrical and physical specifications for
devices. In particular, it defines the relationship between a device and a
physical medium. Physical layer specifications define characteristics such as
voltage levels, timing of voltage changes, physical data rates, maximum
transmission distances, and physical connectors. Physical layer
implementations can be categorized as either LAN or WAN specifications.
The major functions and services performed by the Physical Layer are establishment and termination of a connection to a communications medium; participation in the process whereby the communication resources are effectively shared among multiple users; and modulation, the conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel.
II. Data Link Layer: The data link layer provides reliable transit of data across a
physical network link. Different data link layer specifications define different
network and protocol characteristics, including physical addressing, network
topology, error notification, sequencing of frames, and flow control. Physical
addressing (as opposed to network addressing) defines how devices are
addressed at the data link layer. Network topology consists of the data link
layer specifications that often define how devices are to be physically
connected, such as in a bus or a ring topology. Error notification alerts upper-
layer protocols that a transmission error has occurred, and the sequencing of
data frames reorders frames that are transmitted out of sequence. Finally, flow
control moderates the transmission of data so that the receiving device is not
overwhelmed with more traffic than it can handle at one time.
III. Network Layer: The network layer defines the network address, which
differs from the MAC address. Some network layer implementations, such as
the Internet Protocol (IP), define network addresses in a way that route
selection can be determined systematically by comparing the source network
address with the destination network address and applying the subnet mask.
Because this layer defines the logical network layout, routers can use this layer
to determine how to forward packets. Because of this, much of the design and
configuration work for internetworks happens at Layer 3, the network layer.
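The route-selection idea described above can be illustrated with a short Python sketch (the addresses and mask are hypothetical): applying the subnet mask to the source and destination addresses shows whether the destination lies on the local network or must be handed to a router.

    # Hypothetical example: apply the subnet mask to decide local delivery vs. routing.
    import ipaddress

    def same_network(src, dst, mask):
        src_net = ipaddress.ip_network(f"{src}/{mask}", strict=False)
        dst_net = ipaddress.ip_network(f"{dst}/{mask}", strict=False)
        return src_net == dst_net               # equal network addresses -> same subnet

    print(same_network("192.168.1.10", "192.168.1.99", "255.255.255.0"))  # True: deliver locally
    print(same_network("192.168.1.10", "192.168.2.99", "255.255.255.0"))  # False: send to a router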

IV. Transport Layer: The transport layer accepts data from the session layer and
segments the data for transport across the network. Generally, the transport
layer is responsible for making sure that the data is delivered error-free and in
the proper sequence. Flow control generally occurs at the transport layer. Flow
control manages data transmission between devices so that the transmitting
device does not send more data than the receiving device can process.
Multiplexing enables data from several applications to be transmitted onto a
single physical link. Virtual circuits are established, maintained, and
terminated by the transport layer. Error checking involves creating various
mechanisms for detecting transmission errors, while error recovery involves
acting, such as requesting that data be retransmitted, to resolve any errors that
occur.
V. Session Layer: The session layer establishes, manages, and terminates
communication sessions. Communication sessions consist of service requests
and service responses that occur between applications located in different
network devices. These requests and responses are coordinated by protocols
implemented at the session layer. Some examples of session-layer
implementations include Zone Information Protocol (ZIP), the AppleTalk
protocol that coordinates the name binding process; and Session Control
Protocol (SCP), the DECnet Phase IV session layer protocol.
VI. Presentation Layer: The presentation layer provides a variety of coding and conversion functions that are applied to application layer data. These functions ensure that information sent from the application layer of one system will be readable by the application layer of another system. Some examples of presentation layer coding and conversion schemes include common data representation formats, conversion of character representation formats, common data compression schemes, and common data encryption schemes.
Common data representation formats, or the use of standard image, sound, and
video formats, enable the interchange of application data between different
types of computer systems. Conversion schemes are used to exchange
information with systems by using different text and data representations, such
as EBCDIC and ASCII. Standard data compression schemes enable data that
is compressed at the source device to be properly decompressed at the
destination. Standard data encryption schemes enable data encrypted at the
source device to be properly deciphered at the destination.
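A small Python sketch (the sample text is arbitrary) illustrates the kinds of conversion functions being described: the same characters can be re-coded between ASCII and an EBCDIC code page, and data compressed at the source can be decompressed back to the original at the destination.

    # Hypothetical presentation-layer style conversions: character re-coding and compression.
    import zlib

    text = "NETWORK"

    ascii_bytes  = text.encode("ascii")      # ASCII representation of the characters
    ebcdic_bytes = text.encode("cp500")      # EBCDIC (code page 500) representation
    assert ebcdic_bytes.decode("cp500") == ascii_bytes.decode("ascii")

    compressed = zlib.compress(text.encode("ascii") * 100)    # compress at the source
    restored   = zlib.decompress(compressed).decode("ascii")  # decompress at the destination
    assert restored == text * 100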
VII. Application Layer: The application layer is the OSI layer closest to the end
user, which means that both the OSI application layer and the user interact
directly with the software application. This layer interacts with software
applications that implement a communicating component. Such application
programs fall outside the scope of the OSI model. Application layer functions
typically include identifying communication partners, determining resource
availability, and synchronizing communication.
2.4.3 OSI and TCP/IP layering differences:
The three top layers in the OSI model (the Application Layer, the Presentation Layer and the Session Layer) are not distinguished separately in the TCP/IP model, where it is just the Application Layer. While some pure OSI
protocol applications, such as X.400, also combined them, there is no requirement
that a TCP/IP protocol stack needs to impose monolithic architecture above the
Transport Layer. For example, the Network File System (NFS) application
protocol runs over the External Data Representation (XDR) presentation protocol,
which, in turn, runs over a protocol with Session Layer functionality, Remote
Procedure Call (RPC). RPC provides reliable record transmission, so it can run
safely over the best-effort User Datagram Protocol (UDP) transport. The Session
Layer roughly corresponds to the Telnet virtual terminal functionality, which is part of text-based protocols such as the HTTP and SMTP TCP/IP model
Application Layer protocols. It also corresponds to TCP and UDP port numbering,
which is considered as part of the transport layer in the TCP/IP model. Some
functions that would have been performed by an OSI presentation layer are
realized at the Internet application layer using the MIME standard, which is used
in application layer protocols such as HTTP and SMTP.




CHAPTER - 03
CISCO SYSTEMS AND ITS CERTIFICATIONS:
3.1 Historical Perspective:
Cisco is an American multinational corporation that designs and sells
consumer electronics, networking and communications technology and services.
Headquartered in San Jose, California, Cisco has more than 65,000 employees and
annual revenue of US$36.11 billion as of 2009. The stock was added to the Dow
Jones Industrial Average on June 8, 2009, and is also included in the S&P 500 Index, the Russell 1000 Index, the NASDAQ-100 Index and the Russell 1000 Growth Index.
Cisco is one of the world's biggest technology corporations.

Fig 3.1: Headquarters buildings of the Cisco Systems campus in San Jose
Len Bosack and Sandy Lerner, a married couple who worked as computer operations
staff members at Stanford University, later joined by Richard Troiano, founded Cisco
Systems in 1984. Lerner moved on to direct computer services at Schlumberger,
moving full time to Cisco in 1987. The name "Cisco" was derived from the city name,
San Francisco, which is why the company's engineers insisted on using the lower case
"cisco" in the early days. For Cisco's first product, Bosack adapted multiple-protocol
router software originally written some years before by William Yeager, another
Stanford employee who later joined Sun Microsystems. The company's first CEO was
Bill Graves, who held the position from 1987 to 1988. In 1988, John Morgridge was
appointed CEO, and was succeeded in 1995 by John Chambers. While Cisco was not
the first company to develop and sell a router, it was one of the first to sell
commercially successful routers supporting multiple network protocols. As the
Internet Protocol (IP) became widely adopted, the importance of multi-protocol
routing declined. Today, Cisco's largest routers are primarily used to deliver IP
packets.
In 1990, the company was listed on the NASDAQ stock exchange. Lerner was fired, and Bosack quit as a result; the two received $200 million, gave most of it to charities, and later divorced.


Fig 3.2: Earlier logo of Cisco

The company filed for a U.S. trademark of "Cisco" on June 13, 1988, and it
was granted on June 6, 1989. Related to the original inspiration for the Cisco name
was an early registered mark of a suspension bridge that is synonymous with San
Francisco's Golden Gate Bridge. The company's first indicated commercial use of the
stylized bridge was May 18, 1986. This classic Cisco image rendition was first used
on product packaging and products. In their trademark filing to the United States
Patent and Trademark Office, the mark is described as, "stylized two-tower
suspension bridge similar to a script letter "U" with lines extending from the "U" to a
bottom line in the manner of cables holding up a roadway." The image combines both
elements of Cisco's gateway and bridge electrical products that interconnect local area
networks and also a representation of the Bay Area's landmark bridge. Cisco acquired
a variety of companies to bring in products and talent into the company. Several
acquisitions, such as Stratacom, were the biggest deals in the industry when they
occurred. During the Internet boom in 1999, the company acquired Cerent
Corporation, a start-up company located in Petaluma, California, for about US$7
billion. It was the most expensive acquisition made by Cisco to date, and only the
acquisition of Scientific-Atlanta has been larger. Several acquired companies have
grown into $1Bn+ business units for Cisco, including LAN switching, Enterprise
Voice over Internet Protocol (VOIP), and home networking. Cisco acquired Linksys
in 2003.
In late March 2000, at the height of the dot-com boom, Cisco was the most
valuable company in the world, with a market capitalization of more than US$500
billion. In July 2009, with a market cap of about US$108.03 billion, it is still one of
the most valuable companies. CSCO was voted stock of the decade on NASDAQ. The company was a 2002-03 recipient of the Ron Brown Award,
a U.S. presidential honor to recognize companies "for the exemplary quality of their
relationships with employees and communities".
3.2 Notable Products and Services:
Hardware:
Flip pocket camera
Cisco Local Director - load-balancing appliance
Routers, including: 837, 1000 Series, 2500 Series, 7600, 12000, 3600 Series
and CRS-1


Fig 3.3: Cisco ASM/2-32EM routers deployed at CERN in 1987.
Cisco Security Manager
Security appliances: ASA 5500, PIX 500 series
Catalyst switches: 1900 Series, 6500
Cisco TelePresence
VOIP: Wireless IP Phone 7920
CLEO (router) - Low Earth Orbit router


Software:
Internetwork Operating System
Cisco Active Network Abstraction
Cisco Fabric Manager
Cisco Systems VPN Client
Cisco View
Cisco Works Network Management software
Clean Access Agent, Cisco NAC Appliance
Cisco Eos
Packet Tracer, didactic network simulator
Cisco Network Magic
Cisco Unified Communications Manager
Cisco IP Communicator
Cisco Security Manager
WebEx Collaboration Tools
VoIP services
Cisco became a major provider of Voice over IP to enterprises, and is now moving
into the home user market through its acquisitions of Scientific Atlanta and Linksys.
Scientific Atlanta provides VoIP equipment to cable service providers such as Time
Warner, Cablevision, Rogers Communications, UPC, and others; Linksys has
partnered with companies such as Skype and Yahoo to integrate consumer VoIP
services with wireless and cordless phones.
3.3 CISCO Career Certifications:
Cisco Career Certifications are IT Professional certifications for Cisco
Systems products. The tests are administered by Pearson VUE. There are five levels
of certification: Entry, Associate, Professional, Expert, and Architect, as well as seven different paths: Routing & Switching, Design, Network Security, Service Provider, Storage Networking, Voice, and Wireless.


3.3.1 Training:
Traditional educational institutions that teach Cisco skills are called "the Cisco
Networking Academy". Cisco Networking Academy Students can request exam
vouchers that allow them to take the retired exam for an extended period of time.
Cisco courses are also offered at collegiate institutions. Training is also available from
Cisco Learning Partners, Cisco 360 Learning Program for CCIE and Cisco Learning
Network.
3.3.2 Re-certification
All CCNA, CCDA, CCNP, CCDP, CCSP, CCVP, CCENT, CCNA Security,
CCNA Voice, CCNA Wireless and CCIP certifications are valid for 3 years. All
CCIE certifications and Specialist certifications are valid for 2 years. Re-certification
requires re-taking the current exam previously passed, or passing a higher level
examination.
3.3.3 Entry Level Certification:
The lowest level of Cisco's certification is CCENT (Cisco Certified Entry
Networking Technician). CCENT covers basic networking knowledge. It is
appropriate for entry-level network support positions. CCENT certified people can
install, manage, maintain & troubleshoot a small enterprise network, including basic
network security. CCENT is the first step towards a CCNA certification. The CCENT
certification is earned upon passing the Interconnecting Cisco Networking Devices
Part 1 (ICND1) Exam (640-822 ICND1).
3.3.4 Associate Level Certifications:
CCNA (Cisco Certified Network Associate)
The CCNA validates the ability to install, configure, operate, and troubleshoot
medium-size enterprise level router and switched networks. This includes design
implementation and verification of connections to remote sites in a WAN. New
CCNA training includes basic mitigation of security threats, introduction to wireless
networking and Voice. The CCNA certification is earned upon passing the ICND1
640-822 and ICND2 640-816 exams. Examinees may take the exams separately or the
single 640-802 CCNA composite exam.
CCDA (Cisco Certified Design Associate)
CCDA certified people can design switched or routed networks of LANs,
WANs, and broadband services. A CCNA certification is not required to take the
CCDA exam (640-863 DESGN), but Cisco recommends being familiar with CCNA-
level material, as well as BCMSN-level knowledge of Cisco-based LANs.
3.3.5 Professional Level Certifications:
Cisco Certified Network Professional (CCNP) certification validates
knowledge and skills required to install, configure and troubleshoot converged local
and wide area networks with 100 to 500 or more end devices. A valid CCNA
certification is required to obtain and maintain a CCNP certification.
Cisco Certified Network Professional (CCNP)
The CCNP is considered proof of having the ability to work with medium-sized
networks with technology such as QoS, broadband, VPNs, and security-minded
features. In addition to CCNA exams, professionals must pass either four separate
exams, or a composite exam along with two separate exams.
642-901 BSCI: Building Scalable Cisco Internetworks (BSCI)
642-812 BCMSN: Building Cisco Multilayer Switched Networks (BCMSN)
642-825 ISCW: Implementing Secure Converged Wide Area Networks
(ISCW)
642-845 ONT: Optimizing Converged Cisco Networks (ONT)
Cisco Certified Design Professional (CCDP)
The CCDP certification is an advanced network design certification validating
knowledge of Cisco devices and the way to interconnect them. Active CCNA and
CCDA certifications are required to earn this certification. There are two exams in
common between the CCNP and CCDP (642-901 BSCI & 642-812 BCMSN) so that
a CCNP and CCDA certified person can attain CCDP certification by passing a single
test (642-873 ARCH).
Required Exams:
642-901 BSCI: Building Scalable Cisco Internetworks (BSCI) or 642-902
ROUTE: Implementing Cisco IP Routing (ROUTE)
642-812 BCMSN: Building Cisco Multilayer Switched Networks (BCMSN)
or 642-813 SWITCH: Implementing Cisco IP Switched Networks (SWITCH)
642-873 ARCH: Designing Cisco Network Service Architecture
Cisco Certified Internetwork Professional (CCIP)
The CCIP certification is a professional certification covering the end-to-end
protocols used in large scale networks. To attain this certification tests must be passed
in the areas of routing, BGP, MPLS, Quality of service and the routing exam from the
CCNP track (642-901 BSCI).
Required Exams:
642-901 BSCI: Building Scalable Cisco Internetworks (BSCI) or 642-902
ROUTE: Implementing Cisco IP Routing
642-642 QOS: Quality of Service
642-611 MPLS: Implementing Cisco MPLS
642-661 BGP: Configuring BGP on Cisco Routers
Cisco Certified Security Professional (CCSP)
The CCSP certification is an advanced network security certification.
Candidates for the certification are tested for advanced knowledge of various Cisco
security products. To attain this certification several tests must be passed in the areas
of VPN, IDS, PIX firewall, Secure IOS, the Cisco SAFE, as well as having a CCNA
or higher level certification (e.g. CCNP or CCIP).
Required Exams:
642-504 SNRS: Securing Networks with Cisco Routers and Switches
642-524 SNAF: Securing Networks with ASA Foundation
642-533 IPS: Implementing Cisco Intrusion Prevention System
Cisco Certified Voice Professional (CCVP)
The CCVP is a certification covering all aspects of IP telephony/VoIP networks
and applications. To attain this certification, five tests must be passed in the areas of
quality of service, Cisco VoIP, IP telephony troubleshooting, Cisco IP telephony, and
gateway/gatekeeper configuration, and candidates must hold a valid CCNA Voice
certification. The required exams for the CCVP certification are as follows:
642-642 QoS: Quality of Service (QoS)
642-436 CVOICE: Cisco Voice over IP (CVOICE v6.0)
642-426 TUC: Troubleshooting Cisco Unified Communications Systems
(TUC v1.0)
642-446 CIPT1: Implementing Cisco Unified Communications Manager Part
1 (CIPT1 v6.0)
642-456 CIPT2: Implementing Cisco Unified Communications Manager Part
2 (CIPT2 v6.0)
3.3.6 Expert-level certifications
Cisco Certified Design Expert (CCDE)
The CCDE assesses advanced network infrastructure design principles and
fundamentals for large networks. A CCDE can demonstrate the ability to develop
solutions that address planning, design, integration, optimization, operations,
security and ongoing support at the infrastructure level for customer networks.
Cisco Certified Internetwork Expert (CCIE)
Cisco Certified Internetwork Expert is the highest level of professional
certification that Cisco currently provides and is considered one of the hardest
certifications in the world. There are five active CCIE tracks. As of January 6, 2010
there were 20,810 people with active CCIE certifications in the world. From 2002 to
2005, CertCities magazine voted it one of the hardest certifications. It has also been
voted the most technically advanced IT certification by CertMag and is generally
reported as the highest-salaried certification in IT salary surveys. Cisco began its
CCIE program in 1993 originally with a two day lab, later changing it to the one day
format used today. Fewer than 3% of Cisco certified individuals attain CCIE
certification, and on average will spend thousands of dollars and 18 months studying
before passing. Many candidates build training-labs at home using old Cisco
equipment, selling it again to other candidates after passing. Alternatively candidates
may rent "rack time" online and practice lab scenarios on Cisco equipment hosted on
the Internet for that purpose.
CCIE Numbering and Recertification
Upon successful completion of the hands on lab exam, a new CCIE is awarded
a CCIE number. The first CCIE number allocated (in 1993) was 1024, and has
increased incrementally from there. A lower number indicates that the CCIE was
awarded some time ago; a higher number indicates a more recently awarded
certification. As of July 2009, the highest CCIE number allocated was just under
25000. Number 1024 was allocated to the first CCIE lab location, rather than to an
individual, and featured as a plaque at the entrance to the lab. Number 1025 was
awarded to Stuart Biggs, who created the first written exam and first lab exam. The
first person to pass both the CCIE written and lab exams was Terrance Slattery, who
was consulting for Cisco at the time the lab was being devised. Terry Slattery (CCIE
1026) was therefore the first CCIE who passed both exams, and the first CCIE who
was not an employee of Cisco.






CHAPTER 04
ROUTING:
4.1 Definition:
Routing (or routeing) is the process of selecting paths in a network along
which to send network traffic. Routing is performed for many kinds of networks,
including the telephone network, electronic data networks (such as the Internet),
and transportation networks. Here we are concerned primarily with routing in
electronic data networks using packet switching technology.

In packet switching networks, routing directs packet forwarding, the transit of
logically addressed packets from their source toward their ultimate destination
through intermediate nodes, typically hardware devices called routers, bridges,
gateways, firewalls, or switches. General-purpose computers with multiple network
cards can also forward packets and perform routing, though they are not specialized
hardware and may suffer from limited performance. The routing process usually
directs forwarding on the basis of routing tables, which maintain a record of the
routes to various network destinations. Constructing routing tables, which are held in
the routers' memory, is therefore very important for efficient routing. Most routing
algorithms use only one network path at a time, but multipath routing techniques
enable the use of multiple alternative paths.

In a narrower sense of the term, routing is often contrasted with bridging in its
assumption that network addresses are structured and that similar addresses imply
proximity within the network. Because structured addresses allow a single routing
table entry to represent the route to a group of devices, structured addressing (routing,
in the narrow sense) outperforms unstructured addressing (bridging) in large
networks, and has become the dominant form of addressing on the Internet, though
bridging is still widely used within localized environments.
4.2 Routing Schemes:
The following delivery schemes determine how traffic is sent from a source
toward one or more destination nodes:
Anycast delivers a message to any one node out of a group of nodes, typically the one
nearest to the source.

Fig 4.1: Anycast
Broadcast delivers a message to all nodes in the network

Fig 4.2: Broadcast
Multicast delivers a message to a group of nodes that have expressed interest in
receiving the message

Fig 4.3: Multicast
Unicast delivers a message to a single specified node

Fig 4.4: Unicast
Geocast delivers a message to all nodes in a specified geographic area.

Fig 4.5: Geocast
4.3 Classification of Routing:
Routing can be classified by how the router learns about its neighbouring
networks: the routes can either be configured statically or learned dynamically. The
classification is therefore:
Static routing and dynamic routing
4.3.1 Static routing:
Small networks may use manually configured routing tables (static, or
non-adaptive, routing), while larger networks involve complex topologies that may
change rapidly, making the manual construction of routing tables unfeasible.
Nevertheless, most of the public switched telephone network (PSTN) uses
pre-computed routing tables, with fallback routes if the most direct route becomes
blocked (see routing in the PSTN). Static (non-adaptive) routing uses no routing
algorithm; routes are engineered manually. The advantage of this routing type is that
computing resources are saved, but the network has to be prepared for failures in
advance through additional planning.
4.3.2 Dynamic routing:
Adaptive or dynamic routing attempts to solve this problem by constructing
routing tables automatically, based on information carried by routing protocols, and
allowing the network to act nearly autonomously in avoiding network failures and
blockages. For larger networks, static routing is avoided. Examples of dynamic
(adaptive) routing protocols are the Routing Information Protocol (RIP) and Open
Shortest Path First (OSPF). Dynamic routing dominates the Internet. However, the
configuration of the routing protocols often requires a skilled touch; one should not
suppose that networking technology has developed to the point of the complete
automation of routing. Dynamic routing is further classified by the method a routing
protocol uses to decide the path, either on the basis of distance (distance-vector) or on
the basis of the link state of the network (link-state). This classification is as follows:

4.3.2.1 Distance vector algorithms:
Distance vector algorithms use the Bellman-Ford algorithm. This approach
assigns a number, the cost, to each of the links between each node in the network.
Nodes will send information from point A to point B via the path that results in the
lowest total cost (i.e. the sum of the costs of the links between the nodes used). The
algorithm operates in a very simple manner. When a node first starts, it only knows of
its immediate neighbours, and the direct cost involved in reaching them. Each node,
on a regular basis, sends to each neighbour its own current idea of the total cost to get
to all the destinations it knows of. The neighbouring node(s) examine this
information, and compare it to what they already 'know'; anything which represents an
improvement on what they already have, they insert in their own routing table(s).
Over time, all the nodes in the network will discover the best next hop for all
destinations, and the best total cost. When one of the nodes involved goes down, those
nodes which used it as their next hop for certain destinations discard those entries, and
create new routing-table information. They then pass this information to all adjacent
nodes, which then repeat the process.
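To make the update rule concrete, the following Python sketch shows one node merging a neighbour's advertised cost table into its own routing table; the function name and table layout are invented for illustration and are not taken from any particular router implementation.

# Illustrative distance-vector update: a node merges the cost table advertised
# by one neighbour into its own routing table (Bellman-Ford relaxation step).

def merge_neighbour_update(my_table, neighbour, link_cost, advertised):
    """my_table: dest -> (total_cost, next_hop)
    advertised: dest -> cost as seen by the neighbour
    link_cost: cost of the direct link to that neighbour."""
    changed = False
    for dest, cost_via_neighbour in advertised.items():
        candidate = link_cost + cost_via_neighbour
        best = my_table.get(dest, (float("inf"), None))[0]
        if candidate < best:                     # improvement found
            my_table[dest] = (candidate, neighbour)
            changed = True
    return changed                               # if True, re-advertise to neighbours

# Example: node A hears from neighbour B (link cost 1) about networks X and Y.
table_a = {"B": (1, "B")}
merge_neighbour_update(table_a, "B", 1, {"X": 4, "Y": 2})
print(table_a)   # {'B': (1, 'B'), 'X': (5, 'B'), 'Y': (3, 'B')}

Running this repeatedly at every node, each time a neighbour advertises its table, is what lets the whole network converge on the best next hop and total cost for every destination.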
4.3.2.2 Link-state algorithms:
When applying link-state algorithms, each node uses as its fundamental data
a map of the network in the form of a graph. To produce this, each node floods the
entire network with information about what other nodes it can connect to, and each
node then independently assembles this information into a map. Using this map, each
router then independently determines the least-cost path from itself to every other
node using a standard shortest paths algorithm such as Dijkstra's algorithm. The result
is a tree rooted at the current node such that the path through the tree from the root to
any other node is the least-cost path to that node. This tree then serves to construct the
routing table, which specifies the best next hop to get from the current node to any
other node.
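A minimal sketch of this computation, using Python's standard heapq module for Dijkstra's algorithm over an assumed topology map (the router names and link costs are invented for illustration):

import heapq

def shortest_path_tree(graph, root):
    """graph: node -> {neighbour: link_cost}; returns dest -> (cost, previous hop)."""
    dist = {root: (0, None)}
    heap = [(0, root)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist[node][0]:
            continue                      # stale heap entry
        for neighbour, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if neighbour not in dist or new_cost < dist[neighbour][0]:
                dist[neighbour] = (new_cost, node)
                heapq.heappush(heap, (new_cost, neighbour))
    return dist

# Each router floods its adjacencies; every router then runs this independently.
topology = {"R1": {"R2": 10, "R3": 5},
            "R2": {"R1": 10, "R3": 2},
            "R3": {"R1": 5, "R2": 2}}
print(shortest_path_tree(topology, "R1"))
# {'R1': (0, None), 'R2': (7, 'R3'), 'R3': (5, 'R1')}

The (cost, previous hop) pairs describe exactly the tree rooted at the current node mentioned above; the routing table is filled in by walking each branch back to the first hop away from the root.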
4.3.3 Comparison of routing algorithms
Distance-vector routing protocols are simple and efficient in small networks,
and require little, if any, management. However, distance-vector algorithms do
not scale well (due to the count-to-infinity problem) and have
poor convergence properties; they are also based on a 'hop count' metric rather than a
'link-state' metric and thus ignore bandwidth (a major drawback) when calculating the best
path. This has led to the development of more complex but more scalable algorithms
for use in large networks. Interior routing mostly uses link-state routing
protocols such as OSPF and IS-IS. A more recent development is that of loop-
free distance-vector protocols (e.g. EIGRP). Loop-free distance-vector protocols are
as robust and manageable as distance-vector protocols, while avoiding counting to
infinity and hence having good worst-case convergence times. Path selection involves
applying a routing metric to multiple routes, in order to select (or predict) the best
route. In the case of computer networking, the metric is computed by a routing
algorithm, and can cover such information as bandwidth, network delay, hop count,
path cost, load, MTU, reliability, and communication cost.
The routing table stores only the best possible routes, while link-state or
topological databases may store all other information as well. Because a routing
metric is specific to a given routing protocol, multi-protocol routers must use some
external heuristic in order to select between routes learned from different routing
protocols. Cisco's routers, for example, attribute a value known as the administrative
distance to each route, where smaller administrative distances indicate routes learned
from a supposedly more reliable protocol. A local network administrator, in special
cases, can set up host-specific routes to a particular machine which provides more
control over network usage, permits testing, and better overall security.
In some networks, routing is complicated by the fact that no single entity is responsible for
selecting paths: instead, multiple entities are involved in selecting paths or even parts
of a single path. Complications or inefficiency can result if these entities choose paths
to selfishly optimize their own objectives, which may conflict with the objectives of
other participants. A classic example involves traffic in a road system, in which each
driver selfishly picks a path which minimizes their own travel time. With such selfish
routing, the equilibrium routes can be longer than optimal for all drivers. In
particular, Braess's paradox shows that adding a new road can lengthen travel times for
all drivers.
The Internet is partitioned into autonomous systems (ASs) such as internet
service providers (ISPs), each of which has control over routes involving its network,
at multiple levels. First, AS-level paths are selected via the BGP protocol, which
produces a sequence of ASs through which packets will flow. Each AS may have
multiple paths, offered by neighbouring ASs, from which to choose. Its decision often
involves business relationships with these neighbouring ASs, which may be unrelated
to path quality or latency. Second, once an AS-level path has been selected, there are
often multiple corresponding router-level paths, in part because two ISPs may be
connected in multiple locations. In choosing the single router-level path, it is common
practice for each ISP to employ hot-potato routing: sending traffic along the path that
minimizes the distance through the ISP's own network, even if that path lengthens
the total distance to the destination.
4.4 Routing Protocol Basics:
4.4.1 Administrative distance
The administrative distance (AD) is used to rate the trustworthiness of routing
information received on a router from a neighbour router. An administrative distance
is an integer from 0 to 255, where 0 is the most trusted and 255 means no traffic will
be passed via this route. If a router receives two updates listing the same remote
network, the first thing the router checks is the AD. If one of the advertised routes has
a lower AD than the other, then the route with the lowest AD will be placed in the
routing table. If both advertised routes to the same network have the same AD, then
routing protocol metrics will be used to find the best path to the remote network. The
advertised route with the lowest metric will be placed in the routing table.
Route source          Default AD
Connected             0
Static route          1
EIGRP                 90
IGRP                  100
OSPF                  110
RIP                   120
External EIGRP        170
Unknown               255 (this route will never be used)

Table 4.1: Administrative Distances
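The selection logic described above can be sketched in a few lines of Python; the AD values mirror Table 4.1, while the route dictionaries and the function name are invented for illustration.

# Illustrative route selection: prefer the lowest administrative distance,
# then the lowest protocol metric (AD values taken from Table 4.1).

DEFAULT_AD = {"connected": 0, "static": 1, "eigrp": 90, "igrp": 100,
              "ospf": 110, "rip": 120, "external eigrp": 170, "unknown": 255}

def best_route(candidates):
    """candidates: list of dicts like {'source': 'ospf', 'metric': 20, 'next_hop': ...}"""
    usable = [r for r in candidates if DEFAULT_AD[r["source"]] < 255]  # 255 is never used
    if not usable:
        return None
    return min(usable, key=lambda r: (DEFAULT_AD[r["source"]], r["metric"]))

routes = [{"source": "rip",  "metric": 3,  "next_hop": "10.0.0.1"},
          {"source": "ospf", "metric": 20, "next_hop": "10.0.0.2"}]
print(best_route(routes))   # the OSPF route wins: AD 110 beats RIP's 120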
4.5 Major Routing Protocols:
4.5.1 RIP
The Routing Information Protocol (RIP) is a dynamic routing protocol used in
local and wide area networks. As such it is classified as an interior gateway
protocol (IGP). It uses the distance-vector routing algorithm. It was first defined
in RFC 1058 (1988). The protocol has since been extended several times, resulting in
RIP Version 2 (RFC 2453). Both versions are still in use today; however, they are
considered technically obsolete, having been superseded by more advanced techniques
such as Open Shortest Path First (OSPF) and the OSI protocol IS-IS. RIP has also been
adapted for use in IPv6 networks, a standard known as RIPng (RIP next generation),
published in RFC 2080 (1997).
4.5.1.1 History
The routing algorithm used in RIP, the Bellman-Ford algorithm, was first
deployed in a computer network in 1967, as the initial routing algorithm of
the ARPANET. The earliest version of the specific protocol that became RIP was
the Gateway Information Protocol, part of the PARC Universal Packet
internetworking protocol suite, developed at Xerox PARC. A later version, named
the Routing Information Protocol, was part of Xerox Network Systems. A version of
RIP which supported the Internet Protocol (IP) was later included in the Berkeley
Software Distribution (BSD) of the UNIX operating system. It was known as
the routed daemon. Various other vendors would create their own implementations of
the routing protocol. Eventually, RFC 1058 unified the various implementations under
a single standard.
4.5.1.2 Technical details
RIP is a distance-vector routing protocol, which employs the hop count as a
routing metric. The hold down time is 180 seconds. RIP prevents routing loops by
implementing a limit on the number of hops allowed in a path from the source to a
destination. The maximum number of hops allowed for RIP is 15. This hop limit,
however, also limits the size of networks that RIP can support. A hop count of 16 is
considered an infinite distance and used to deprecate inaccessible, inoperable, or
otherwise undesirable routes in the selection process. RIP implements the split
horizon, route poisoning and hold down mechanisms to prevent incorrect routing
information from being propagated. These are some of the stability features of RIP. It
is also possible to use the so-called RIP-MTI algorithm to cope with the count-to-
infinity problem. With its help, it is possible to detect every possible loop with a very
small computation effort. Originally each RIP router transmitted full updates every 30
seconds. In the early deployments, routing tables were small enough that the traffic
was not significant. As networks grew in size, however, it became evident there could
be a massive traffic burst every 30 seconds, even if the routers had been initialized at
random times. RIP is implemented on top of the User Datagram Protocol as its
transport protocol. It is assigned the reserved port number 520.
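The effect of the 15-hop limit can be illustrated with a short Python sketch; the function and the route representation are hypothetical and do not come from any real RIP implementation.

RIP_INFINITY = 16   # a metric of 16 marks a route as unreachable

def accept_rip_route(advertised_metric):
    """Metric a router stores for a route heard from a neighbour (one hop away)."""
    metric = min(advertised_metric + 1, RIP_INFINITY)
    reachable = metric < RIP_INFINITY
    return metric, reachable

print(accept_rip_route(3))    # (4, True)   -> installed with metric 4
print(accept_rip_route(15))   # (16, False) -> treated as unreachable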
4.5.1.3 Versions
There are three versions of the Routing Information Protocol: RIPv1, RIPv2,
and RIPng.
RIP version 1
The original specification of RIP, defined in RFC 1058, uses classful routing.
The periodic routing updates do not carry subnet information, lacking support
for variable length subnet masks (VLSM). This limitation makes it impossible to have
different-sized subnets inside of the same network class. In other words, all subnets in
a network class must have the same size. There is also no support for router
authentication, making RIP vulnerable to various attacks. RIP version 1 works only
with hop counts of up to 15 (0-15); if a destination is more than 15 hops away,
packets cannot be routed to it.
RIP version 2
Due to the deficiencies of the original RIP specification, RIP version 2
(RIPv2) was developed in 1993 and last standardized in 1998. It included the ability
to carry subnet information, thus supporting Classless Inter-Domain Routing (CIDR).
To maintain backward compatibility, the hop count limit of 15 remained. RIPv2 has
facilities to fully interoperate with the earlier specification if all Must Be
Zero protocol fields in the RIPv1 messages are properly specified. In addition,
a compatibility switch feature allows fine-grained interoperability adjustments. In an
effort to avoid unnecessary load on hosts that do not participate in routing,
RIPv2 multicasts the entire routing table to all adjacent routers at the
address 224.0.0.9, as opposed to RIPv1 which uses broadcast. Unicast addressing is
still allowed for special applications.
RIPng
RIPng (RIP next generation), defined in RFC 2080, is an extension of RIPv2 for
support of IPv6, the next generation Internet Protocol. The main differences between
RIPv2 and RIPng are:
Support of IPv6 networking.
While RIPv2 supports RIPv1 updates authentication, RIPng does not. IPv6
routers were, at the time, supposed to use IPSec for authentication.
RIPv2 allows attaching arbitrary tags to routes; RIPng does not.
RIPv2 encodes the next hop into each route entry; RIPng requires specific
encoding of the next hop for a set of route entries.
4.5.1.4 Limitations
Without using RIP-MTI, the hop count cannot exceed 15; if it does, the route
is considered invalid.
Most RIP networks are flat. There is no concept of areas or boundaries in RIP
networks.
Variable Length Subnet Masks were not supported by RIP version 1.
Without using RIP-MTI, RIP has slow convergence and count to
infinity problems.
4.5.1.5 Implementations
Routed, included in most BSD Unix systems.
Routing and Remote Access, a Windows Server feature, contains RIP support.
Quagga, a free open source routing software suite based on GNU Zebra.
OpenBSD, includes a RIP implementation
Cisco IOS, software used in Cisco routers (supports version 1, version 2 and
RIPng)
Cisco NX-OS software used in Cisco Nexus data center switches (supports
RIPv1 and RIPv2)

4.5.2 Interior Gateway Routing Protocol (IGRP)
Interior Gateway Routing Protocol (IGRP) is a distance-vector interior gateway
protocol (IGP) invented by Cisco. It is used by routers to exchange routing data
within an autonomous system. IGRP is a proprietary protocol. IGRP was created in
part to overcome the limitations of RIP (maximum hop count of only 15, and a single
routing metric) when used within large networks. IGRP supports multiple metrics for
each route, including bandwidth, delay, load, MTU, and reliability; to compare two
routes these metrics are combined together into a single metric, using a formula which
can be adjusted through the use of pre-set constants. The maximum hop count of
IGRP-routed packets is 255 (default 100), and routing updates are broadcast every 90
seconds (by default). IGRP is considered a classful routing protocol. Because the
protocol has no field for a subnet mask, the router assumes that all interface addresses
within the same Class A, Class B, or Class C network have the same subnet mask as
the subnet mask configured for the interfaces in question. This contrasts with classless
routing protocols that can use variable length subnet masks. Classful protocols have
become less popular as they are wasteful of IP address space.
4.5.2.1 Advancement:
In order to address the issues of address space and other factors, Cisco created
EIGRP (Enhanced Interior Gateway Routing Protocol). EIGRP adds support for
VLSM (variable length subnet mask) and adds the Diffusing Update Algorithm
(DUAL) in order to improve routing and provide a loop-free environment. EIGRP has
completely replaced IGRP, making IGRP an obsolete routing protocol. In Cisco IOS
versions 12.3 and greater, IGRP is completely unsupported. In the new Cisco CCNA
curriculum (version 4), IGRP is mentioned only briefly, as an "obsolete protocol".

4.5.3 OPEN SHORTEST PATH FIRST (OSPF):
Open Shortest Path First (OSPF) is a dynamic routing protocol for use in
Internet Protocol (IP) networks. Specifically, it is a link-state routing protocol and
falls into the group of interior gateway protocols, operating within a single
autonomous system (AS). It is defined as OSPF Version 2 in RFC 2328 (1998) for
IPv4. The updates for IPv6 are specified as OSPF Version 3 in RFC 5340 (2008).
4.5.3.1 Overview
OSPF is an interior gateway protocol that routes Internet Protocol (IP) packets
solely within a single routing domain (autonomous system). It gathers link state
information from available routers and constructs a topology map of the network. The
topology determines the routing table presented to the Internet Layer which makes
routing decisions based solely on the destination IP address found in IP datagrams.
OSPF was designed to support variable-length subnet masking (VLSM) or Classless
Inter-Domain Routing (CIDR) addressing models. OSPF detects changes in the
topology, such as link failures, very quickly and converges on a new loop-free routing
structure within seconds. It computes the shortest path tree for each route using a
method based on Dijkstra's algorithm, a shortest path first algorithm. The link-state
information is maintained on each router as a link-state database (LSDB) which is a
tree-image of the entire network topology. Identical copies of the LSDB are
periodically updated through flooding on all OSPF routers.
An OSPF network may be structured, or subdivided, into routing areas to
simplify administration and optimize traffic and resource utilization. Areas are
identified by 32-bit numbers, expressed either simply in decimal, or often in octet-
based dot-decimal notation, familiar from IPv4 address notation. By convention, area
0 (zero) or 0.0.0.0 represents the core or backbone region of an OSPF network. The
identifications of other areas may be chosen at will; often, administrators select the IP
address of a main router in an area as the area's identification. Each additional area
must have a direct or virtual connection to the backbone OSPF area. Such connections
are maintained by an interconnecting router, known as area border router (ABR). An
ABR maintains separate link state databases for each area it serves and maintains
summarized routes for all areas in the network.
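Since area identifiers are simply 32-bit numbers, converting between the plain decimal form and the octet-based dot-decimal form is straightforward; the following Python sketch uses hypothetical helper names purely for illustration.

def area_to_dotted(area_id):
    """Render a 32-bit OSPF area number in octet-based dot-decimal notation."""
    return ".".join(str((area_id >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def dotted_to_area(dotted):
    """Parse dot-decimal notation back into the 32-bit area number."""
    octets = [int(o) for o in dotted.split(".")]
    return (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]

print(area_to_dotted(0))           # 0.0.0.0 -> the backbone area
print(area_to_dotted(51))          # 0.0.0.51
print(dotted_to_area("0.0.0.51"))  # 51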
4.5.3.2 Neighbour relationships
Routers in the same broadcast domain or at each end of a point-to-point
telecommunications link form adjacencies when they have detected each other. This
detection occurs when a router identifies itself in a hello OSPF protocol packet. This
is called a two-way state and is the most basic relationship. The routers in an Ethernet
or frame relay network select a designated router (DR) and a backup designated router
(BDR), which act as a hub to reduce traffic between routers. OSPF uses both unicast
and multicast to send "hello" packets and link state updates.
As a link state routing protocol, OSPF establishes and maintains neighbour
relationships in order to exchange routing updates with other routers. The neighbour
relationship table is called an adjacency database in OSPF. Provided that OSPF is
configured correctly, OSPF forms neighbour relationships only with the routers
directly connected to it. In order to form a neighbour relationship between two
routers, the interfaces used to form the relationship must be in the same area. An
interface can only belong to a single area.
4.5.3.3 Area types in OSPF:
Backbone area
The backbone area (also known as area 0 or area 0.0.0.0) forms the core of an
OSPF network. All other areas are connected to it, and inter-area routing happens via
routers connected to the backbone area and to their own associated areas. It is the
logical and physical structure for the 'OSPF domain' and is attached to all nonzero
areas in the OSPF domain. Note that in OSPF the term Autonomous System
Boundary Router (ASBR) is historic, in the sense that many OSPF domains can
coexist in the same Internet-visible autonomous system (RFC 1996).
Stub area
A stub area is an area which does not receive route advertisements external to
the autonomous system (AS) and routing from within the area is based entirely on a
default route. This reduces the size of the routing databases for the area's internal
routers.
Modifications to the basic concept of stub areas exist in the not-so-stubby area
(NSSA). In addition, several other proprietary variations have been implemented by
systems vendors, such as the totally stubby area (TSA) and the NSSA totally stubby
area, both extensions in Cisco Systems routing equipment.
Not-so-stubby area
A not-so-stubby area (NSSA) is a type of stub area that can import
autonomous system external routes and send them to other areas, but still cannot
receive AS external routes from other areas. NSSA is an extension of the stub area
feature that allows the injection of external routes in a limited fashion into the stub
area.
Transit area
A transit area is an area with two or more OSPF border routers and is used to
pass network traffic from one adjacent area to another. The transit area does not
originate this traffic and is not the destination of such traffic.
4.5.3.4 Applications
OSPF was the first widely deployed routing protocol that could converge a
network in the low seconds and guarantee loop-free paths. It has many features that
allow the imposition of policies about the propagation of routes that it may be
appropriate to keep local, for load sharing, and for selective route importing, more so
than IS-IS. IS-IS, in contrast, can be tuned for lower overhead in a stable network, the
sort more common in ISP than enterprise networks.
4.5.4 IS-IS
Intermediate system to intermediate system (IS-IS), is a protocol used by
network devices (routers) to determine the best way to forward datagrams through a
packet-switched network, a process called routing. The protocol was defined in
ISO/IEC 10589:2002 as an international standard within the Open Systems
Interconnection (OSI) reference model. IS-IS is not an Internet standard; however, the
IETF republished the standard as RFC 1142 for the Internet community.

4.5.4.1 Description
IS-IS is an Interior Gateway Protocol (IGP) meaning that it is intended for use
within an administrative domain or network. It is not intended for routing between
Autonomous Systems (RFC 1930), a job that is the purpose of an Exterior Gateway
Protocol, such as Border Gateway Protocol (BGP). IS-IS is a link-state routing
protocol, meaning that it operates by reliably flooding topology information
throughout a network of routers. Each router then independently builds a picture of
the network's topology. Packets or datagrams are forwarded based on the best
topological path through the network to the destination. IS-IS uses Dijkstra's
algorithm for computing the best path through the network.
4.5.4.2 History
The IS-IS protocol was developed by Digital Equipment Corporation as part
of DECnet Phase V. It was standardized by the ISO in 1992 as ISO 10589 for
communication between network devices which are termed Intermediate Systems (as
opposed to end systems or hosts) by the ISO. The purpose of IS-IS was to make
possible the routing of datagrams using the ISO-developed OSI protocol stack called
CLNS.
IS-IS was developed at roughly the same time that the Internet Engineering
Task Force IETF was developing a similar protocol called OSPF. IS-IS was later
extended to support routing of datagrams (i.e. network-layer packets) using the IP
protocol, the basic routed protocol of the global (public) Internet. This version of the
IS-IS routing protocol was then called Integrated IS-IS (RFC 1195). IS-IS has become
more widely known in the last several years, and has become a viable alternative to
OSPF in enterprise networks. Detailed analysis, however, tends to show that OSPF has
traffic tuning features that are especially suitable to enterprise networks while IS-IS
has stability features especially suitable to ISP infrastructure.
4.5.4.3 Comparison with OSPF
Both IS-IS and OSPF are link-state protocols, and both use Dijkstra's
algorithm for computing the best path through the network. As a result, they are
conceptually similar. Both support variable length subnet masks, can use multicast to
discover neighbouring routers using hello packets, and can support authentication of
routing updates.
While OSPF is natively built to route IP and is itself a Layer 3 protocol that runs on
top of IP, IS-IS is natively an OSI network layer protocol (it is at the same layer as
CLNS), a fact that may have allowed OSPF to be more widely used. IS-IS does not
use IP to carry routing information messages.
Since OSPF is more popular, this protocol has a richer set of extensions and
added features. However IS-IS is less "chatty" and can scale to support larger
networks. Given the same set of resources, IS-IS can support more routers in an area
than OSPF. This makes IS-IS favoured in ISP environments. Additionally, IS-IS is
neutral regarding the type of network addresses for which it can route. OSPF, on the
other hand, was designed for IPv4. Thus IS-IS was easily adapted to support IPv6,
while the OSPF protocol needed a major overhaul (OSPF v3).
4.5.4.4 Other related protocols
Fabric Shortest Path First (FSPF) is a closely related link-state protocol used
for routing within Fibre Channel fabrics.
OSI routing is organized into the following levels:
Level 0: Routing between end systems (ESs) and intermediate systems (ISs) on the
same subnet; OSI routing begins at this level.
Level 1: Routing between ISs in the same area, also called intra-area routing.
Level 2: Inter-area routing.
Level 3: Routing between separate domains; it is similar to BGP.
4.5.5 EIGRP
4.5.5.1 Introduction
Enhanced Interior Gateway Routing Protocol - (EIGRP) is a Cisco proprietary
routing protocol loosely based on their original IGRP. EIGRP is an advanced
distance-vector routing protocol, with optimizations to minimize both the routing
instability incurred after topology changes, as well as the use of bandwidth and
processing power in the router. Routers that support EIGRP will automatically
redistribute route information to IGRP neighbours by converting the 32 bit EIGRP
metric to the 24 bit IGRP metric. Most of the routing optimizations are based on the
Diffusing Update Algorithm (DUAL) work from SRI, which guarantees loop-free
operation and provides a mechanism for fast convergence.
4.5.5.2 Basic operation
The data EIGRP collects is stored in three tables:
Neighbour Table: Stores data about the neighbouring routers, i.e. those
directly accessible through directly connected interfaces.
Topology Table: Confusingly named, this table does not store an overview of
the complete network topology; rather, it effectively contains only the
aggregation of the routing tables gathered from all directly connected
neighbours. This table contains a list of destination networks in the EIGRP-
routed network together with their respective metrics. Also for every
destination, a successor and a feasible successor are identified and stored in
the table if they exist. Every destination in the topology table can be marked
either as "Passive", which is the state when the routing has stabilized and the
router knows the route to the destination, or "Active" when the topology has
changed and the router is in the process of (actively) updating its route to that
destination.
Routing table: Stores the actual routes to all destinations; the routing table is
populated from the topology table with every destination network that has its
successor and optionally feasible successor identified (if unequal-cost load-
balancing is enabled using the variance command). The successors and
feasible successors serve as the next hop routers for these destinations.
Unlike most other distance vector protocols, EIGRP does not rely on periodic route
dumps in order to maintain its topology table. Routing information is exchanged only
upon the establishment of new neighbour adjacencies, after which only changes are
sent.
4.5.5.3 Multiple metrics
EIGRP associates five different metrics with each route:
K1 = Bandwidth modifier
Minimum bandwidth (in kilobits per second)
K2 = Load modifier
Load (number in the range 1 to 255; 255 being saturated)
K3 = Delay modifier
Total delay (in tens of microseconds)
K4 = Reliability modifier
Reliability (number in the range 1 to 255; 255 being the most reliable)
K5 = MTU modifier
Minimum path Maximum Transmission Unit (MTU) (though not actually used
in the calculation)
By default, only total delay and minimum bandwidth are enabled when EIGRP is
started on a router, but an administrator can enable or disable all the metrics as
needed.
For the purposes of comparing routes, these are combined together in a weighted
formula to produce a single overall metric:

Metric = 256 * [ K1 * Bandwidth + (K2 * Bandwidth) / (256 - Load) + K3 * Delay ] * [ K5 / (Reliability + K4) ]

where the various constants (K1 through K5) can be set by the user to produce varying
behaviours. An important and totally non-obvious fact is that if K5 is set to zero, the
[K5 / (Reliability + K4)] term is not used (i.e. taken as 1).
The default is for K1 and K3 to be set to 1, and the rest to zero, effectively reducing the
above formula to (Bandwidth + Delay) * 256.
Obviously, these constants must be set to the same value on all routers in an EIGRP
system, or permanent routing loops will probably result. Cisco routers running EIGRP
will not form an EIGRP adjacency and will complain about K-values mismatch until
these values are identical on these routers.
EIGRP scales the bandwidth and delay metrics with the following calculations:

Bandwidth for EIGRP = 10^7 / Interface Bandwidth
Delay for EIGRP = Interface Delay / 10

On Cisco routers, the interface bandwidth is a configurable static parameter
expressed in kilobits per second. Dividing a value of 10^7 kbit/s (i.e. 10 Gbit/s) by the
interface bandwidth yields a value that is used in the weighted formula. Analogously,
the interface delay is a configurable static parameter expressed in microseconds.
Dividing this interface delay value by 10 yields a delay in units of tens of
microseconds that is used in the weighted formula.
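Putting the scaling and the weighted formula together, the following Python sketch computes the composite metric with the default K-values (K1 = K3 = 1, the rest zero); the function name and argument names are illustrative only.

def eigrp_metric(min_bandwidth_kbps, total_delay_us,
                 load=1, reliability=255, k1=1, k2=0, k3=1, k4=0, k5=0):
    """Compute the composite EIGRP metric from interface parameters."""
    bw = 10**7 // min_bandwidth_kbps          # scaled bandwidth term
    delay = total_delay_us // 10              # delay in tens of microseconds
    metric = k1 * bw + (k2 * bw) // (256 - load) + k3 * delay
    if k5 != 0:                               # the K5 term is skipped when K5 == 0
        metric = metric * k5 // (reliability + k4)
    return metric * 256

# A 1544 kbit/s (T1) path with 20000 microseconds of total delay:
print(eigrp_metric(1544, 20000))   # (6476 + 2000) * 256 = 2169856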
IGRP uses the same basic formula for computing the overall metric; the only
difference is that in IGRP, the formula does not contain the scaling factor of 256. In
fact, this scaling factor was introduced as a simple means to facilitate backward
compatibility between EIGRP and IGRP: in IGRP, the overall metric is a 24-bit value
while EIGRP uses a 32-bit value to express this metric. By multiplying a 24-bit value
with the factor of 256 (effectively bit-shifting it 8 bits to the left), the value is
extended into 32 bits, and vice versa. This way, redistributing information between
EIGRP and IGRP involves simply dividing or multiplying the metric value by a factor
of 256, which is done automatically.
EIGRP also maintains a hop count for every route; however, the hop count is
not used in metric calculation. It is only verified against a predefined maximum on an
EIGRP router (by default it is set to 100 and can be changed to any value between 1
and 255). Routes having a hop count higher than the maximum will be advertised as
unreachable by an EIGRP router.


4.5.5.4 Important Terms Used in EIGRP
Successor
A successor for a particular destination is a next hop router that satisfies these two
conditions:
it provides the least distance to that destination
it is guaranteed not to be a part of some routing loop
The first condition can be satisfied by comparing metrics from all neighbouring
routers that advertise that particular destination, increasing the metrics by the cost of
the link to that respective neighbour, and selecting the neighbour that yields the least
total distance. The second condition can be satisfied by testing a so-called Feasibility
Condition for every neighbour advertising that destination. There can be multiple
successors for a destination, depending on the actual topology.
Feasible Successor
A feasible successor for a particular destination is a next hop router that satisfies this
condition:
it is guaranteed not to be a part of some routing loop
This condition is also verified by testing the Feasibility Condition.
Thus, every successor is also a feasible successor. However, in most
references about EIGRP the term "feasible successor" is used to denote only those
routers which provide a loop-free path but which are not successors (i.e. they do not
provide the least distance). From this point of view, for a reachable destination there
is always at least one successor, however, there might not be any feasible successors.
The feasible successor effectively provides a backup route in the case that
existing successors die. Also, when performing unequal-cost load-balancing
(balancing the network traffic in inverse proportion to the cost of the routes), the
feasible successors are used as next hops in the routing table for the load-balanced
destination. By default, the total count of successors and feasible successors for a
destination stored in the routing table is limited to four. This limit can be changed in
the range from 1 to 6. In more recent versions of Cisco IOS (e.g. 12.4), this range is
between 1 and 16.
Advertised Distance and Feasible Distance:
Advertised Distance (AD) is the total metric along a path to a destination
network as advertised by an upstream neighbour. This distance is sometimes also
called a Reported Distance (RD) and is equal to the current lowest total distance
through a successor for a neighbouring router. A Feasible Distance (FD) is the lowest
known distance from a router to a particular destination. This is the Advertised
Distance (AD) + the cost to reach the neighbouring router from which the AD was
sent. It is important to note that this metric represents the last time the route went
from Active to Passive state. It can be expressed in other words as a historically
lowest known distance to a particular destination. While a route remains in Passive
state, the FD is updated only if the actual distance to the destination decreases,
otherwise it stays at its present value. On the other hand, if a router needs to enter
Active state for that destination, the FD will be updated with a new value after the
router transitions back from Active to Passive state. This is the only case when the FD
can be increased. The transition from Active to Passive state in effect marks the start
of a new history for that route.
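The feasibility condition described above (a neighbour's reported distance must be lower than the router's own feasible distance) can be sketched as follows; the data structures and function name are hypothetical and only illustrate the idea.

def classify_neighbours(feasible_distance, neighbours):
    """neighbours: list of (name, reported_distance, link_cost) tuples for one destination.
    Returns the successors and the feasible successors for that destination."""
    loop_free = [(name, rd + cost) for name, rd, cost in neighbours
                 if rd < feasible_distance]          # feasibility condition
    if not loop_free:
        return [], []
    best_total = min(total for _, total in loop_free)
    successors = [name for name, total in loop_free if total == best_total]
    feasible_successors = [name for name, total in loop_free if total > best_total]
    return successors, feasible_successors

# FD = 8476 to some network; two loop-free neighbours, one strictly worse than the other.
print(classify_neighbours(8476, [("R2", 6476, 2000), ("R3", 7000, 3000)]))
# (['R2'], ['R3'])  -> R2 is the successor, R3 a feasible successor (backup)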







CHAPTER - 05
SWITCHING
5.1 Layer 2 Switching:
Ethernet is a family of frame-based computer networking technologies for
local area networks (LANs). The name comes from the physical concept of the ether.
It defines a number of wiring and signalling standards for the Physical Layer of the
OSI networking model as well as a common addressing format and Media Access
Control at the Data Link Layer. Ethernet is standardized as IEEE 802.3. The
combination of the twisted pair versions of Ethernet for connecting end systems to the
network, along with the fiber optic versions for site backbones, is the most
widespread wired LAN technology. It has been in use from around 1980 to the
present, largely replacing competing LAN standards such as token ring, FDDI, and
ARCNET.

Fig 5.1: A standard 8P8C (often called RJ45) connector
5.1.1 History
Ethernet was developed at Xerox PARC between 1973 and 1975. It was
inspired by ALOHAnet, which Robert Metcalfe had studied as part of his Ph.D.
dissertation. In 1975, Xerox filed a patent application listing Metcalfe, David Boggs,
Chuck Thacker and Butler Lampson as inventors. In 1976, after the system was
deployed at PARC, Metcalfe and Boggs published a seminal paper.
Metcalfe left Xerox in 1979 to promote the use of personal computers and
local area networks (LANs), forming 3Com. He convinced Digital Equipment
Corporation (DEC), Intel, and Xerox to work together to promote Ethernet as a
standard, the so-called "DIX" standard, for "Digital/Intel/Xerox"; it specified the
10 megabit/second Ethernet, with 48-bit destination and source addresses and a
global 16-bit EtherType field. The first standard draft was published on
September 30, 1980 by the Institute of Electrical and Electronics Engineers (IEEE).
Support of Ethernet's carrier sense multiple access with collision detection
(CSMA/CD) in other standardization bodies (i.e. ECMA, IEC and ISO) was
instrumental in getting past delays of the finalization of the Ethernet standard due to
the difficult decision processes in the IEEE, and due to the competitive Token Ring
proposal strongly supported by IBM. Ethernet initially competed with two largely
proprietary systems, Token Ring and Token Bus. Through the first half of the 1980s,
Digital's Ethernet implementation utilized a coaxial cable about the diameter of a US
nickel, which became known as Thick Ethernet when its successor, Thinnet Ethernet,
was introduced. Thinnet used a cable that was a version of the cable television cable of
the era. The emphasis was on making installation of the cable easier and less costly.
5.1.2 Standardization
Notwithstanding its technical merits, timely standardization was instrumental
to the success of Ethernet. It required well-coordinated and partly competitive
activities in several standardization bodies such as the IEEE, ECMA, IEC, and finally
ISO. In February 1980 IEEE started a project, IEEE 802 for the standardization of
Local Area Networks (LAN). In addition to CSMA/CD, Token Ring (supported by
IBM) and Token Bus were also considered as candidates for a LAN standard. Due to
the goal of IEEE 802 to forward only one standard and due to the strong company
support for all three designs, the necessary agreement on a LAN standard was
significantly delayed.
In the Ethernet camp, it put at risk the market introduction of the Xerox Star
workstation and 3Com's Ethernet LAN products. With such business implications in
mind, David Liddle (General Manager, Xerox Office Systems) and Metcalfe (3Com)
strongly supported a proposal of Fritz Röscheisen (Siemens Private Networks) for an
alliance in the emerging office communication market, including Siemens' support for
the international standardization of Ethernet (April 10, 1981). Ingrid Fromm, Siemens
representative to IEEE 802 quickly achieved broader support for Ethernet beyond
IEEE by the establishment of a competing Task Group "Local Networks" within the
European standards body ECMA TC24. As early as March 1982 ECMA TC24 with
its corporate members reached agreement on a standard for CSMA/CD based on the
IEEE 802 draft. The speedy action taken by ECMA decisively contributed to the
conciliation of opinions within IEEE and approval of IEEE 802.3 CSMA/CD by the
end of 1982.
5.1.3 General description:


Fig 5.2: A 1990s network interface card.
This is a combination card that supports both coaxial-based 10BASE2
(BNC connector, left) and twisted-pair-based 10BASE-T, using an RJ45 (8P8C
modular connector, right).
Ethernet was originally based on the idea of computers
communicating over a shared coaxial cable acting as a broadcast transmission
medium. The methods used show some similarities to radio systems, although there
are fundamental differences, such as the fact that it is much easier to detect collisions
in a cable broadcast system than a radio broadcast. The common cable providing the
communication channel was likened to the ether and it was from this reference that
the name "Ethernet" was derived.
The advantage of CSMA/CD was that, unlike Token Ring and Token Bus, all
nodes could "see" each other directly. All "talkers" shared the same medium - a single
coaxial cable - however, this was also a limitation; with only one speaker at a time,
packets had to be of a minimum size to guarantee that the leading edge of the
propagating wave of the message got to all parts of the medium before the transmitter
could stop transmitting, thus guaranteeing that collisions (two or more packets
initiated within a window of time which forced them to overlap) would be discovered.
Minimum packet size and the physical medium's total length were thus closely linked.
Above the physical layer, Ethernet stations communicate by sending each
other data packets, blocks of data that are individually sent and delivered. As with
other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address,
which is used to specify both the destination and the source of each data packet.
Network interface cards (NICs) or chips normally do not accept packets addressed to
other Ethernet stations. Adapters generally come programmed with a globally unique
address, but this can be overridden, either to avoid an address change when an adapter
is replaced, or to use locally administered addresses.
Due to the ubiquity of Ethernet, the ever-decreasing cost of the hardware
needed to support it, and the reduced panel space needed by twisted pair Ethernet,
most manufacturers now build the functionality of an Ethernet card directly into PC
motherboards, eliminating the need for installation of a separate network card.
5.1.4 CSMA/CD shared medium Ethernet
Ethernet originally used a shared coaxial cable (the shared medium) winding
around a building or campus to every attached machine. A scheme known as carrier
sense multiple access with collision detection (CSMA/CD) governed the way the
computers shared the channel.
This scheme was simpler than the competing token
ring or token bus technologies. When a computer wanted to send some information, it
used the following algorithm:
Collision detected procedure
1. Continue transmission until minimum packet time is reached (jam signal) to
ensure that all receivers detect the collision.
2. Increment retransmission counter.
3. Was the maximum number of transmission attempts reached? If so, abort
transmission.
4. Calculate and wait random back off period based on number of collisions.
5. Re-enter main procedure at stage 1.
This can be likened to what happens at a dinner party, where all the guests talk to
each other through a common medium (the air). Before speaking, each guest politely
waits for the current speaker to finish. If two guests start speaking at the same time,
both stop and wait for short, random periods of time (in Ethernet, this time is
generally measured in microseconds). The hope is that by each choosing a random
period of time, both guests will not choose the same time to try to speak again, thus
avoiding another collision. Exponentially increasing back-off times (determined using
the truncated binary exponential back off algorithm) are used when there is more than
one failed attempt to transmit.
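A small Python sketch of truncated binary exponential backoff follows; the slot time, the cap of 10 on the exponent, and the abort threshold of 16 attempts reflect the usual 10 Mbit/s Ethernet convention, and the function name is illustrative only.

import random

SLOT_TIME_US = 51.2     # slot time for 10 Mbit/s Ethernet, in microseconds
MAX_ATTEMPTS = 16       # transmission is aborted after 16 failed attempts
MAX_EXPONENT = 10       # the backoff window stops growing after 10 collisions

def backoff_delay(collision_count):
    """Random delay (in microseconds) to wait after the given number of collisions."""
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("too many collisions: abort transmission")
    exponent = min(collision_count, MAX_EXPONENT)
    slots = random.randint(0, 2 ** exponent - 1)   # pick a random slot in the window
    return slots * SLOT_TIME_US

print(backoff_delay(1))   # 0 or 51.2 microseconds
print(backoff_delay(3))   # one of 0, 51.2, ..., 358.4 microseconds

Because the window doubles after every collision, two repeatedly colliding stations quickly become very unlikely to pick the same retransmission slot again.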
Since all communications happen on the same wire, any information sent by
one computer is received by all, even if that information is intended for just one
destination. The network interface card interrupts the CPU only when applicable
packets are received: the card ignores information not addressed to it unless it is put
into "promiscuous mode". This "one speaks, all listen" property is a security weakness
of shared-medium Ethernet, since a node on an Ethernet network can eavesdrop on all
traffic on the wire if it so chooses. Use of a single cable also means that the bandwidth
is shared, so that network traffic can slow to a crawl when, for example, the network
and nodes restart after a power failure.
5.1.5 Bridging and switching:
While repeaters could isolate some aspects of Ethernet segments, such as cable
breakages, they still forwarded all traffic to all Ethernet devices. This created
practical limits on how many machines could communicate on an Ethernet network.
Also as the entire network was one collision domain and all hosts had to be able to
detect collisions anywhere on the network, the number of repeaters between the
farthest nodes was limited. Finally segments joined by repeaters had to all operate at
the same speed, making phased-in upgrades impossible. To alleviate these problems,
bridging was created to communicate at the data link layer while isolating the
physical layer. With bridging, only well-formed Ethernet packets are forwarded from
one Ethernet segment to another; collisions and packet errors are isolated. Bridges
learn where devices are, by watching MAC addresses, and do not forward packets
across segments when they know the destination address is not located in that
direction.
Prior to discovery of network devices on the different segments, Ethernet
bridges (and switches) work somewhat like Ethernet hubs, passing all traffic between
segments. However, as the bridge discovers the addresses associated with each port, it
only forwards network traffic to the necessary segments, improving overall
performance. Broadcast traffic is still forwarded to all network segments. Bridges also
overcame the limits on total segments between two hosts and allowed the mixing of
speeds, both of which became very important with the introduction of Fast Ethernet.
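The learning behaviour can be captured in a very small Python model; the class, the frame fields, and the port numbering are invented for illustration and simplify away details such as aging of table entries.

class LearningBridge:
    """Minimal model of a transparent bridge: learn source MACs, then filter/forward."""

    def __init__(self, ports):
        self.ports = ports          # e.g. [1, 2, 3]
        self.mac_table = {}         # MAC address -> port it was last seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port            # learn where the sender lives
        out_port = self.mac_table.get(dst_mac)
        if out_port == in_port:
            return []                                 # destination on the same segment: filter
        if out_port is not None:
            return [out_port]                         # known destination: forward to one port
        return [p for p in self.ports if p != in_port]  # unknown destination: flood

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.handle_frame(1, "AA:AA", "BB:BB"))   # [2, 3]  (unknown destination, flooded)
print(bridge.handle_frame(2, "BB:BB", "AA:AA"))   # [1]     (AA:AA was learned on port 1)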
Early bridges examined each packet one by one using software on a CPU, and
some of them were significantly slower than hubs (multi-port repeaters) at forwarding
traffic, especially when handling many ports at the same time. This was in part due to
the fact that the entire Ethernet packet would be read into a buffer, the destination
address compared with an internal table of known MAC addresses and a decision
made as to whether to drop the packet or forward it to another or all segments. When
a twisted pair or fiber link segment is used and neither end is connected to a hub, full-
duplex Ethernet becomes possible over that segment. In full duplex mode both
devices can transmit and receive to/from each other at the same time, and there is no
collision domain. This doubles the aggregate bandwidth of the link and is sometimes
advertised as double the link speed (e.g. 200 Mbit/s) to account for this. However, this
is misleading as performance will only double if traffic patterns are symmetrical
(which in reality they rarely are). The elimination of the collision domain also means
that all the link's bandwidth can be used and that segment length is not limited by the
need for correct collision detection (this is most significant with some of the fiber
variants of Ethernet).
5.1.6 More advanced networks:
Simple switched Ethernet networks, while an improvement over hub based Ethernet,
suffer from a number of issues:
They suffer from single points of failure. If any link fails some devices will be
unable to communicate with other devices and if the link that fails is in a
central location lots of users can be cut off from the resources they require.
It is possible to trick switches or hosts into sending data to a machine even if
it's not intended for it (see switch vulnerabilities).
Large amounts of broadcast traffic, whether malicious, accidental, or simply a
side effect of network size can flood slower links and/or systems.
o It is possible for any host to flood the network with broadcast traffic
forming a denial of service attack against any hosts that run at the same
or lower speed as the attacking device.
o As the network grows, normal broadcast traffic takes up an ever
greater amount of bandwidth.
o If switches are not multicast aware, multicast traffic will end up treated
like broadcast traffic due to being directed at a MAC with no
associated port.
o If switches discover more MAC addresses than they can store (either
through network size or through an attack) some addresses must
inevitably be dropped and traffic to those addresses will be treated the
same way as traffic to unknown addresses, that is essentially the same
as broadcast traffic (this issue is known as fail open).
They suffer from bandwidth choke points where a lot of traffic is forced down
a single link.
Some switches offer a variety of tools to combat these issues including:
Spanning-tree protocol to maintain the active links of the network as a tree
while allowing physical loops for redundancy.
Various port protection features, as it is far more likely an attacker will be on
an end system port than on a switch-switch link.
VLANs to keep different classes of users separate while using the same
physical infrastructure.
Fast routing at higher levels to route between those VLANs.
Link aggregation to add bandwidth to overloaded links and to provide some
measure of redundancy, although the links won't protect against switch failure
because they connect the same pair of switches.
5.2 Layer 3 Switching:
The only difference between a layer 3 switch and router is the way the
administrator creates the physical implementation. Also, traditional routers use
microprocessors to make forwarding decisions, and the switch performs only
hardware-based packet switching. However, some traditional routers can have other
hardware functions as well in some of the higher-end models. Layer 3 switches can be
placed anywhere in the network because they handle high-performance LAN traffic
and can cost-effectively replace routers. Layer 3 switching is all hardware-based
packet forwarding, and all packet forwarding is handled by hardware ASICs. Layer 3
switches really are functionally no different from a traditional router and perform the
same functions, which are listed here:
Determine paths based on logical addressing
Run layer 3 checksums (on header only)
Use Time to Live (TTL)
Process and respond to any option information
Update Simple Network Management Protocol (SNMP) managers with
Management Information Base (MIB) information
Provide Security
The benefits of layer 3 switching include the following:
Hardware-based packet forwarding
High-performance packet switching
High-speed scalability
Low latency
Lower per-port cost
Flow accounting
Security
Quality of service (QoS)
5.3 Layer 4 Switching:
Layer 4 switching is a hardware-based layer 3 switching technology that also takes
into account the application in use (for example, Telnet or FTP).
Layer 4 switching provides additional routing above layer 3 by using the port
numbers found in the Transport layer header to make routing decisions. These port
numbers are found in Request for Comments (RFC) 1700 and reference the upper-
layer protocol, program, or application.
Layer 4 information has been used to help make routing decisions for quite a
while. For example, extended access lists can filter packets based on layer 4 port
numbers. The largest benefit of layer 4 switching is that the network administrator
can configure a layer 4 switch to prioritize data traffic by application, which means a
QoS level can be defined for each user. For example, a number of users can be defined as a
video group and be assigned higher priority, or more bandwidth, based on the need for
video conferencing.
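To make this port-based prioritization concrete, the short Python sketch below classifies flows into QoS classes by Transport-layer destination port, the same criterion a layer 4 switch evaluates in hardware. The class names and port-to-class assignments are assumptions made for illustration, not vendor defaults.

```python
# Illustrative sketch: classify flows into QoS classes by TCP/UDP destination
# port, the criterion a layer 4 switch evaluates in hardware. The class names
# and port assignments below are assumptions for the example, not defaults.

QOS_CLASSES = {
    23:   "interactive",   # Telnet (well-known port from RFC 1700)
    21:   "bulk",          # FTP control
    5060: "video",         # signalling port for a hypothetical video group
}

def classify(dst_port: int) -> str:
    """Return the QoS class for a flow, defaulting to best effort."""
    return QOS_CLASSES.get(dst_port, "best-effort")

if __name__ == "__main__":
    for port in (23, 21, 5060, 8080):
        print(port, "->", classify(port))
```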
5.4 Multilayer Switching (MLS):
Multi-layer switching combines layer 2, 3, and 4 switching technologies and
provides high-speed scalability with low latency. It accomplishes this combination of
high-speed scalability and low latency by using large filter tables based on criteria
designed by the network administrator. Multi-layer switching can
move traffic at wire speed and also provide layer 3 routing, which can remove the
bottleneck from the network routers. This technology is based on the idea of "route
once, switch many".
Multi-layer switching can make routing/switching decisions based on the
following:
MAC source/destination address in a Data Link frame
IP source/destination address in the Network layer header
Protocol field in the Network layer header
Port source/destination numbers in the Transport layer header
There is no performance difference between a layer 3 and a layer 4 switch because the
routing/switching is all hardware based.
5.5 Spanning Tree Protocol:
The Spanning tree protocol (STP) is a link layer network protocol that ensures
a loop-free topology for any bridged LAN. Thus, the basic function of STP is to
prevent bridge loops and ensuing broadcast radiation. In the OSI model for computer
networking, STP falls under the OSI layer-2. It is standardized as 802.1D. As the
name suggests, it creates a spanning tree within a mesh network of connected layer-2
bridges (typically Ethernet switches), and disables those links that are not part of the
spanning tree, leaving a single active path between any two network nodes. Spanning
tree allows a network design to include spare (redundant) links to provide automatic
backup paths if an active link fails, without the danger of bridge loops, or the need for
manual enabling/disabling of these backup links. Bridge loops must be avoided
because they result in flooding the network.
5.5.1 Protocol operation
The collection of bridges in a LAN can be considered a graph whose nodes are
the bridges and the LAN segments (or cables), and whose edges are the interfaces
connecting the bridges to the segments. To break loops in the LAN while maintaining
access to all LAN segments, the bridges collectively compute a spanning tree. The
spanning tree is not necessarily a minimum cost spanning tree. A network
administrator can reduce the cost of a spanning tree, if necessary, by altering some of
the configuration parameters in such a way as to affect the choice of the root of the
spanning tree. The spanning tree that the bridges compute using the Spanning Tree
Protocol can be determined using the following rules. The example network shown in the
figures below will be used to illustrate the rules.

Fig 5.3: Selection of Root Bridge (I)
1. An example network. The numbered boxes represent bridges (the number
represents the bridge ID). The lettered clouds represent network segments.

Fig 5.4: Selection of root bridge (II)
2. The smallest bridge ID is 3. Therefore, bridge 3 is the root bridge.

Fig 5.5: Selection of root bridge (III)
3. Assuming that the cost of traversing any network segment is 1, the least cost path
from bridge 4 to the root bridge goes through network segment c. Therefore, the root
port for bridge 4 is the one on network segment c.

Fig 5.6: Selection of root bridge (IV)
4. The least cost path to the root from network segment e goes through bridge 92.
Therefore the designated port for network segment e is the port that connects bridge
92 to network segment e.

Fig 5.7: Selection of root bridge (V)
5. This diagram illustrates all port states as computed by the spanning tree algorithm.
Any active port that is not a root port or a designated port is a blocked port.

Fig 5.8: Selection of root bridge (VI)
6. After a link failure, the spanning tree algorithm computes and spans a new least-cost
tree.
5.5.2 Select a root bridge:
The root bridge of the spanning tree is the bridge with the smallest (lowest)
bridge ID. Each bridge has a unique identifier (ID) and a configurable priority
number; the bridge ID contains both numbers. To compare two bridge IDs, the
priority is compared first. If two bridges have equal priority, then the MAC addresses
are compared. For example, if switches A (MAC=0200.0000.1111) and B
(MAC=0200.0000.2222) both have a priority of 10, then switch A will be selected as
the root bridge. If the network administrators would like switch B to become the root
bridge, they must set its priority to be less than 10.
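As a minimal sketch of this election rule (lowest priority wins, MAC address breaks ties), the Python fragment below compares the two example bridge IDs; it is illustrative only and models a bridge ID simply as a (priority, MAC) pair.

```python
# Minimal sketch of root-bridge election: lowest priority wins, and the MAC
# address breaks ties. Bridge IDs are modelled as (priority, MAC) tuples, so
# Python's tuple ordering reproduces the comparison rule directly. Equal-length
# hex strings compare in the same order as their numeric values.

bridges = {
    "A": (10, "0200.0000.1111"),
    "B": (10, "0200.0000.2222"),
}

root = min(bridges, key=lambda name: bridges[name])
print("Root bridge:", root)   # -> A, because the MAC breaks the priority tie
```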
5.5.3 Determine the least cost paths to the root bridge:
The computed spanning tree has the property that messages from any
connected device to the root bridge traverse a least cost path, i.e., a path from the
device to the root that has minimum cost among all paths from the device to the root.
The cost of traversing a path is the sum of the costs of the segments on the path.
Different technologies have different default costs for network segments. An
administrator can configure the cost of traversing a particular network segment.
The property that messages always traverse least-cost paths to the root is guaranteed
by the following two rules.
Least cost path from each bridge. After the root bridge has been chosen, each
bridge determines the cost of each possible path from itself to the root. From
these, it picks the one with the smallest cost (the least-cost path). The port
connecting to that path becomes the root port (RP) of the bridge.
Least cost path from each network segment. The bridges on a network
segment collectively determine which bridge has the least-cost path from the
network segment to the root. The port connecting this bridge to the network
segment is then the designated port (DP) for the segment.
Disable all other root paths. Any active port that is not a root port or a
designated port is a blocked port (BP).
Modifications in case of ties. The above rules over-simplify the situation
slightly, because it is possible that there are ties, for example, two or more
ports on a single bridge are attached to least-cost paths to the root or two or
more bridges on the same network segment have equal least-cost paths to the
root. To break such ties:
Breaking ties for root ports. When multiple paths from a bridge are least-cost
paths, the chosen path uses the neighbour bridge with the lower bridge ID. The
root port is thus the one connecting to the bridge with the lowest bridge ID.
For example, in figure 3, if switch 4 were connected to network segment d,
there would be two paths of length 2 to the root, one path going through bridge
24 and the other through bridge 92. Because there are two least cost paths, the
lower bridge ID (24) would be used as the tie-breaker in choosing which path
to use.
Breaking ties for designated ports. When more than one bridge on a segment
leads to a least-cost path to the root, the bridge with the lower bridge ID is
used to forward messages to the root. The port attaching that bridge to the
network segment is the designated port for the segment. In figure 4, there are
two least cost paths from network segment d to the root, one going through
bridge 24 and the other through bridge 92. The lower bridge ID is 24, so the
tie breaker dictates that the designated port is the port through which network
segment d is connected to bridge 24. If bridge IDs were equal, then the bridge
with the lowest MAC address would have the designated port. In either case,
the loser sets the port as being blocked.
The final tie-breaker. In some cases, there may still be a tie, as when two
bridges are connected by multiple cables. In this case, multiple ports on a
single bridge are candidates for root port. In this case, the path which passes
through the port on the neighbour bridge that has the lowest port priority is
used.
5.5.4 Data rate and STP path cost
The table below shows the default cost of an interface for a given data rate.
Data rate STP Cost (802.1D-1998) STP Cost (802.1t-2001)
4 Mbit/s 250 5,000,000
10 Mbit/s 100 2,000,000
16 Mbit/s 62 1,250,000
100 Mbit/s 19 200,000
1 Gbit/s 4 20,000
2 Gbit/s 3 10,000
10 Gbit/s 2 2,000
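Combining the default costs above with the least-cost-path rules of section 5.5.3, the following Python sketch computes each bridge's root path cost and root-port neighbour for an assumed topology. It is a simplified, centralized model for illustration only; real bridges learn this information distributively from BPDUs and additionally break ties on bridge ID and port priority.

```python
# Simplified model of sections 5.5.3-5.5.4: each bridge's cost to the root is
# the sum of segment costs along the path (802.1D-1998 default costs used
# here). The topology is an assumption for the example; root is bridge 3.
import heapq

STP_COST = {10_000_000: 100, 100_000_000: 19, 1_000_000_000: 4}  # bit/s -> cost

links = [          # (bridge, bridge, link speed in bit/s) -- hypothetical
    (3, 24, 100_000_000),
    (3, 92, 100_000_000),
    (24, 4, 10_000_000),
    (92, 4, 100_000_000),
]

graph = {}
for a, b, speed in links:
    cost = STP_COST[speed]
    graph.setdefault(a, []).append((b, cost))
    graph.setdefault(b, []).append((a, cost))

def least_costs(root):
    """Dijkstra over the bridge graph: bridge -> (cost, neighbour toward root)."""
    dist = {root: (0, None)}
    heap = [(0, root)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node][0]:
            continue
        for nbr, c in graph[node]:
            if nbr not in dist or d + c < dist[nbr][0]:
                dist[nbr] = (d + c, node)   # root port faces this neighbour
                heapq.heappush(heap, (d + c, nbr))
    return dist

for bridge, (cost, toward_root) in sorted(least_costs(3).items()):
    print(f"bridge {bridge}: root path cost {cost}, root port faces {toward_root}")
```

Running this, bridge 4 reaches the root through bridge 92 at cost 38, because the 10 Mbit/s link toward bridge 24 would cost 119.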
5.5.5 Bridge Protocol Data Units (BPDUs)
The above rules describe one way of determining what spanning tree will be
computed by the algorithm, but the rules as written require knowledge of the entire
network. The bridges have to determine the root bridge and compute the port roles
(root, designated, or blocked) with only the information that they have. To ensure that
each bridge has enough information, the bridges use special data frames called Bridge
Protocol Data Units (BPDUs) to exchange information about bridge IDs and root path
costs. A bridge sends a BPDU frame using the unique MAC address of the port itself
as a source address, and a destination address of the STP multicast address
01:80:C2:00:00:00.
There are three types of BPDUs:
Configuration BPDU (CBPDU), used for Spanning Tree computation
Topology Change Notification (TCN) BPDU, used to announce changes in the
network topology
Topology Change Notification Acknowledgment (TCA)
BPDUs are exchanged regularly (every 2 seconds by default) and enable switches
to keep track of network changes and to start and stop forwarding at ports as required.
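As a small illustration, the sketch below recognises a BPDU the way a bridge does, by matching the frame's destination address against the STP multicast address given above; the sample frame bytes are fabricated for the example.

```python
# Illustrative sketch: a bridge recognises BPDUs by their reserved destination
# address. The STP multicast MAC 01:80:C2:00:00:00 occupies the first six
# bytes of the Ethernet frame. The sample frame below is fabricated.

STP_MULTICAST = bytes.fromhex("0180c2000000")

def is_bpdu(frame: bytes) -> bool:
    """Return True if the frame is addressed to the STP group MAC."""
    return frame[:6] == STP_MULTICAST

sample = STP_MULTICAST + bytes.fromhex("020000001111") + b"\x00" * 40
print(is_bpdu(sample))   # True
```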
5.5.6 STP Switch Port States:
Blocking - A port that would cause a switching loop; no user data is sent or
received, but the port may go into forwarding mode if the other links in use
fail and the spanning tree algorithm determines the port may transition to the
forwarding state. BPDU data is still received in the blocking state.
Listening - The switch processes BPDUs and awaits possible new information
that would cause it to return to the blocking state.
Learning - While the port does not yet forward frames (packets), it learns
source addresses from received frames and adds them to the filtering database
(switching database).
Forwarding - A port receiving and sending data, normal operation. STP still
monitors incoming BPDUs that would indicate it should return to the blocking
state to prevent a loop.
Disabled - Not strictly part of STP, a network administrator can manually
disable a port
To prevent the delay when connecting hosts to a switch and during some topology
changes, Rapid STP was developed and standardized by IEEE 802.1w, which allows
a switch port to rapidly transition into the forwarding state during these situations.
5.6 Virtual LAN
A virtual LAN, commonly known as a VLAN, is a group of hosts with a
common set of requirements that communicate as if they were attached to the same
broadcast domain, regardless of their physical location. A VLAN has the same
attributes as a physical LAN, but it allows for end stations to be grouped together
even if they are not located on the same network switch. Network reconfiguration can
be done through software instead of physically relocating devices.
5.6.1 Uses
VLANs are created to provide the segmentation services traditionally provided
by routers in LAN configurations. VLANs address issues such as scalability, security,
and network management. Routers in VLAN topologies provide broadcast filtering,
security, address summarization, and traffic flow management. By definition,
switches may not bridge IP traffic between VLANs as it would violate the integrity of
the VLAN broadcast domain.
This is also useful if someone wants to create multiple Layer 3 networks on the same
Layer 2 switch. For example, if a DHCP server (which will broadcast its presence) is
plugged into a switch it will serve any host on that switch that is configured to get its
IP from a DHCP server. By using VLANs you can easily split the network up so some
hosts won't use that DHCP server and will obtain link-local addresses, or obtain an
address from a different DHCP server. Virtual LANs are essentially Layer 2
constructs, compared with IP subnets which are Layer 3 constructs. In an environment
employing VLANs, a one-to-one relationship often exists between VLANs and IP
subnets, although it is possible to have multiple subnets on one VLAN or have one
subnet spread across multiple VLANs. Virtual LANs and IP subnets provide
independent Layer 2 and Layer 3 constructs that map to one another and this
correspondence is useful during the network design process.
By using VLANs, one can control traffic patterns and react quickly to
relocations. VLANs provide the flexibility to adapt to changes in network
requirements and allow for simplified administration.
5.6.2 Protocols and design
The protocol most commonly used today in configuring virtual LANs is IEEE
802.1Q. The IEEE committee defined this method of multiplexing VLANs in an
effort to provide multivendor VLAN support. Prior to the introduction of the 802.1Q
standard, several proprietary protocols existed, such as Cisco's ISL (Inter-Switch
Link) and 3Com's VLT (Virtual LAN Trunk). Cisco also implemented VLANs over
FDDI by carrying VLAN information in an IEEE 802.10 frame header, contrary to
the purpose of the IEEE 802.10 standard.
Both ISL and IEEE 802.1Q tagging perform "explicit tagging" - the frame
itself is tagged with VLAN information. ISL uses an external tagging process that
does not modify the existing Ethernet frame, while 802.1Q uses a frame-internal field
for tagging, and so does modify the Ethernet frame. This internal tagging is what
allows IEEE 802.1Q to work on both access and trunk links: frames are standard
Ethernet, and so can be handled by commodity hardware.
The IEEE 802.1Q header contains a 4-byte tag header containing a 2-byte tag protocol
identifier (TPID) and 2-byte tag control information (TCI). The TPID has a fixed
value of 0x8100 that indicates that the frame carries the 802.1Q/802.1p tag
information. The TCI contains the following elements:
Three-bit user priority
One-bit canonical format indicator (CFI)
Twelve-bit VLAN identifier (VID)-Uniquely identifies the VLAN to which
the frame belongs
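A minimal sketch of extracting these fields from a tagged frame is shown below; it assumes the 4-byte tag sits immediately after the source MAC address (offset 12 of the Ethernet frame) and uses the bit widths just listed.

```python
# Minimal 802.1Q tag parser based on the field layout above: a 2-byte TPID
# (fixed value 0x8100) followed by a 2-byte TCI holding the 3-bit priority,
# 1-bit CFI and 12-bit VLAN ID. Assumes the tag follows the source MAC.
import struct

def parse_dot1q(frame: bytes):
    tpid, tci = struct.unpack_from("!HH", frame, 12)   # network byte order
    if tpid != 0x8100:
        return None                                    # untagged frame
    priority = tci >> 13           # top 3 bits
    cfi      = (tci >> 12) & 0x1   # next bit
    vid      = tci & 0x0FFF        # low 12 bits
    return priority, cfi, vid

# Fabricated example: zeroed MACs, then a tag carrying priority 5, VLAN 100
frame = bytes(12) + struct.pack("!HH", 0x8100, (5 << 13) | 100) + b"\x08\x00"
print(parse_dot1q(frame))   # (5, 0, 100)
```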
Inter-Switch Link (ISL) is a Cisco proprietary protocol used to interconnect
multiple switches and maintain VLAN information as traffic travels between switches
on trunk links. This technology provides one method for multiplexing bridge groups
(VLANs) over a high-speed backbone. It is defined for Fast Ethernet and Gigabit
Ethernet, as is IEEE 802.1Q. ISL has been available on Cisco routers since Cisco IOS
Software Release 11.1.
With ISL, an Ethernet frame is encapsulated with a header that transports
VLAN IDs between switches and routers. ISL does add overhead to the packet as a
26-byte header containing a 10-bit VLAN ID. In addition, a 4-byte CRC is appended
to the end of each frame. This CRC is in addition to any frame checking that the
Ethernet frame requires. The fields in an ISL header identify the frame as belonging
to a particular VLAN.
Early network designers often configured VLANs with the aim of reducing the
size of the collision domain in a large single Ethernet segment and thus improving
performance. When Ethernet switches made this a non-issue (because each switch
port is a collision domain), attention turned to reducing the size of the broadcast
domain at the MAC layer. Virtual networks can also serve to restrict access to
network resources without regard to physical topology of the network, although the
strength of this method remains debatable as VLAN Hopping is a common means of
bypassing such security measures.

5.6.3 Cisco VLAN Trunking Protocol (VTP)
On Cisco Devices, VTP (VLAN Trunking Protocol) maintains VLAN
configuration consistency across the entire network. VTP uses Layer 2 trunk frames
to manage the addition, deletion, and renaming of VLANs on a network-wide basis
from a centralized switch in the VTP server mode. VTP is responsible for
synchronizing VLAN information within a VTP domain and reduces the need to
configure the same VLAN information on each switch.
VTP minimizes the possible configuration inconsistencies that arise when
changes are made. These inconsistencies can result in security violations, because
VLANs can cross connect when duplicate names are used. They also could become
internally disconnected when they are mapped from one LAN type to another, for
example, Ethernet to ATM LANE ELANs or FDDI 802.10 VLANs. VTP provides a
mapping scheme that enables seamless trunking within a network employing mixed-
media technologies.
VTP provides the following benefits:
VLAN configuration consistency across the network
Mapping scheme that allows a VLAN to be trunked over mixed media
Accurate tracking and monitoring of VLANs
Dynamic reporting of added VLANs across the network
Plug-and-play configuration when adding new VLANs
As beneficial as VTP can be, it does have disadvantages, normally related to the
Spanning Tree Protocol (STP): a bridging loop can propagate throughout the
network. Cisco switches run an instance of STP for each VLAN, and since
VTP propagates VLANs across the campus LAN, VTP effectively creates more
opportunities for a bridging loop to occur.
Before creating VLANs on the switch that will be propagated via VTP, a VTP
domain must first be set up. A VTP domain for a network is a set of all contiguously
trunked switches with the same VTP domain name. All switches in the same
management domain share their VLAN information with each other, and a switch can
participate in only one VTP management domain. Switches in different domains do
not share VTP information.
Using VTP, each Catalyst Family Switch advertises the following on its trunk ports:
Management domain
Configuration revision number
Known VLANs and their specific parameters
5.6.4 Establishing VLAN memberships
The two common approaches to assigning VLAN membership are as follows:
Static VLANs
Dynamic VLANs
Static VLANs are also referred to as port-based VLANs. Static VLAN
assignments are created by assigning ports to a VLAN. As a device enters the
network, the device automatically assumes the VLAN of the port. If the user changes
ports and needs access to the same VLAN, the network administrator must manually
make a port-to-VLAN assignment for the new connection.
Dynamic VLANs are created through the use of software. With a VLAN
Management Policy Server (VMPS), an administrator can assign switch ports to
VLANs dynamically based on information such as the source MAC address of the
device connected to the port or the username used to log onto that device. As a device
enters the network, the device queries a database for VLAN membership. See also
FreeNAC which implements a VMPS server.
5.6.5 Port-based VLANs
With port-based VLAN membership, the port is assigned to a specific VLAN
independent of the user or system attached to the port. This means all users attached
to the port should be members of the same VLAN. The network administrator
typically performs the VLAN assignment. The port configuration is static and cannot
be automatically changed to another VLAN without manual reconfiguration. As with
other VLAN approaches, the packets forwarded using this method do not leak into
other VLAN domains on the network. After a port has been assigned to a VLAN, the
port cannot send to or receive from devices in another VLAN without the intervention
of a Layer 3 device.
The device that is attached to the port likely has no understanding that a
VLAN exists. The device simply knows that it is a member of a subnet and that the
device should be able to talk to all other members of the subnet by simply sending
information to the cable segment. The switch is responsible for identifying that the
information came from a specific VLAN and for ensuring that the information gets to
all other members of the VLAN. The switch is further responsible for ensuring that
ports in a different VLAN do not receive the information. This approach is quite
simple, fast, and easy to manage in that there are no complex lookup tables required
for VLAN segmentation. If port-to-VLAN association is done with an application-
specific integrated circuit (ASIC), the performance is very good. An ASIC allows the
port-to-VLAN mapping to be done at the hardware level.











CHAPTER - 06
WIDE AREA NETWORKS
6.1 Introduction:
A wide area network (WAN) is a computer network that covers a broad area
(i.e., any network whose communications links cross metropolitan, regional, or
national boundaries). This is in contrast with personal area networks (PANs), local
area networks (LANs), campus area networks (CANs), or metropolitan area networks
(MANs) which are usually limited to a room, building, campus or specific
metropolitan area (e.g., a city) respectively.
6.1.1 WAN design options
WANs are used to connect LANs and other types of networks together, so that
users and computers in one location can communicate with users and computers in
other locations. Many WANs are built for one particular organization and are private.
Others, built by Internet service providers, provide connections from an organization's
LAN to the Internet. WANs are often built using leased lines. At each end of the
leased line, a router connects to the LAN on one side and a hub within the WAN on
the other. Leased lines can be very expensive. Instead of using leased lines, WANs
can also be built using less costly circuit switching or packet switching methods.
Network protocols including TCP/IP deliver transport and addressing functions.
Protocols including Packet over SONET/SDH, MPLS, ATM and Frame relay are
often used by service providers to deliver the links that are used in WANs. X.25 was
an important early WAN protocol, and is often considered to be the "grandfather" of
Frame Relay as many of the underlying protocols and functions of X.25 are still in use
today (with upgrades) by Frame Relay.
6.1.2 WAN connection technology options:
Several options are available for WAN connectivity:
Leased line
Description: Point-to-point connection between two computers or Local Area Networks (LANs).
Advantages: Most secure.
Disadvantages: Expensive.
Sample protocols used: PPP, HDLC, SDLC, HNAS.

Circuit switching
Description: A dedicated circuit path is created between end points. The best example is a dial-up connection.
Advantages: Less expensive.
Disadvantages: Call setup.
Bandwidth range: 28 - 144 kbps.
Sample protocols used: PPP, ISDN.

Packet switching
Description: Devices transport packets via a shared single point-to-point or point-to-multipoint link across a carrier internetwork. Variable length packets are transmitted over Permanent Virtual Circuits (PVC) or Switched Virtual Circuits (SVC).
Advantages: Shared media across the link.
Sample protocols used: X.25, Frame Relay.

Cell relay
Description: Similar to packet switching, but uses fixed length cells instead of variable length packets. Data is divided into fixed-length cells and then transported across virtual circuits.
Advantages: Best for simultaneous use of voice and data.
Disadvantages: Overhead can be considerable.
Sample protocols used: ATM.

Table 6.1: WAN Protocols


Transmission rates usually range from 1200 bps to 6 Mbps, although some
connections such as ATM and Leased lines can reach speeds greater than 156 Mbps.
Typical communication links used in WANs are telephone lines, microwave links &
satellite channels. Recently with the proliferation of low cost of Internet connectivity
many companies and organizations have turned to VPN to interconnect their
networks, creating a WAN in that way. Companies such as Cisco, New Edge
Networks and Check Point offer solutions to create VPN networks.
6.2 High-Level Data Link Control
High-Level Data Link Control (HDLC) is a bit-oriented synchronous data link
layer protocol developed by the International Organization for Standardization (ISO).
The original ISO standards for HDLC are:
ISO 3309 Frame Structure
ISO 4335 Elements of Procedure
ISO 6159 Unbalanced Classes of Procedure
ISO 6256 Balanced Classes of Procedure
The current standard for HDLC is ISO 13239, which replaces all of those standards.
HDLC provides both connection-oriented and connectionless service.
6.2.1 History
HDLC is based on IBM's SDLC protocol, which is the layer 2 protocol for
IBM's Systems Network Architecture (SNA). It was extended and standardized by the
ITU as LAP, while ANSI named their essentially identical version ADCCP.
Derivatives have since appeared in innumerable standards. It was adopted into the
X.25 protocol stack as LAPB, into the V.42 protocol as LAPM, into the Frame Relay
protocol stack as LAPF and into the ISDN protocol stack as LAPD. It was the
inspiration for the IEEE 802.2 LLC protocol, and it is the basis for the framing
mechanism used with the PPP on synchronous lines, as used by many servers to
connect to a WAN, most commonly the Internet.
6.2.2 Framing
HDLC frames can be transmitted over synchronous or asynchronous links.
Those links have no mechanism to mark the beginning or end of a frame, so the
beginning and end of each frame has to be identified. This is done by using a frame
delimiter, or flag, which is a unique sequence of bits that is guaranteed not to be seen
inside a frame. This sequence is '01111110', or, in hexadecimal notation, 0x7E. Each
frame begins and ends with a frame delimiter. A frame delimiter at the end of a frame
may also mark the start of the next frame. A sequence of 7 or more consecutive 1-bits
within a frame will cause the frame to be aborted.
When no frames are being transmitted on a simplex or full-duplex
synchronous link, a frame delimiter is continuously transmitted on the link. Using the
standard NRZI encoding from bits to line levels (0 bit = transition, 1 bit = no
transition), this generates one of two continuous waveforms, depending on the initial
state:

Fig 6.1: Frame Sequence
This is used by modems to train and synchronize their clocks via phase-locked
loops. Some protocols allow the 0-bit at the end of a frame delimiter to be shared with
the start of the next frame delimiter, i.e. '011111101111110'.
For half-duplex or multi-drop communication, where several transmitters
share a line, a receiver on the line will see continuous idling 1-bits in the inter-frame
period when no transmitter is active. Actual binary data could easily have a sequence
of bits that is the same as the flag sequence. So the data's bit sequence must be
modified so that it doesn't appear to be a frame delimiter.
6.2.3 Synchronous framing
On synchronous links, this is done with bit stuffing. Any time that 5
consecutive 1-bits appear in the transmitted data, the data is paused and a 0-bit is
transmitted. This ensures that no more than 5 consecutive 1-bits will be sent. The
receiving device knows this is being done, and after seeing 5 1-bits in a row, a
following 0-bit is stripped out of the received data. If the following bit is a 1-bit, the
receiver has found a flag.
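The stuffing rule can be sketched in a few lines of Python; bits are modelled as lists of 0/1 integers purely for readability, and a real receiver would additionally treat six or more consecutive 1-bits as a flag or an abort.

```python
# Sketch of HDLC bit stuffing: after five consecutive 1-bits in the payload,
# the sender inserts a 0-bit; the receiver strips it out again. Bits are
# modelled as lists of 0/1 integers for readability.

def bit_stuff(bits):
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:          # five 1-bits in a row -> insert a 0
            out.append(0)
            run = 0
    return out

def bit_destuff(bits):
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:          # the next bit is the stuffed 0 -- skip it
            i += 1            # (a real receiver checks it is 0, else flag/abort)
            run = 0
        i += 1
    return out

data = [0, 1, 1, 1, 1, 1, 1, 0]        # contains six consecutive 1-bits
stuffed = bit_stuff(data)               # -> [0, 1, 1, 1, 1, 1, 0, 1, 0]
assert bit_destuff(stuffed) == data
```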
6.2.4 Asynchronous framing
When using asynchronous serial communication such as standard RS-232
serial ports, bits are sent in groups of 8, and bit-stuffing is inconvenient. Instead they
use "control-octet transparency", also called "byte stuffing" or "octet stuffing". The
frame boundary octet is 01111110 (7E in hexadecimal notation). A "control escape
octet" has the bit sequence '01111101' (7D in hexadecimal). If either of these two octets
appears in the transmitted data, an escape octet is sent, followed by the original data
octet with bit 5 inverted.
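A minimal sketch of this octet-stuffing procedure is shown below; inverting bit 5 corresponds to XORing the octet with 0x20.

```python
# Sketch of HDLC control-octet transparency ("byte stuffing"): the flag octet
# 0x7E and the escape octet 0x7D may not appear in the payload, so each is
# replaced by 0x7D followed by the original octet with bit 5 inverted (XOR 0x20).

FLAG, ESCAPE = 0x7E, 0x7D

def byte_stuff(payload: bytes) -> bytes:
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESCAPE):
            out.append(ESCAPE)
            out.append(b ^ 0x20)      # invert bit 5
        else:
            out.append(b)
    return bytes(out)

def byte_unstuff(data: bytes) -> bytes:
    out, escaped = bytearray(), False
    for b in data:
        if escaped:
            out.append(b ^ 0x20)
            escaped = False
        elif b == ESCAPE:
            escaped = True
        else:
            out.append(b)
    return bytes(out)

payload = bytes([0x01, 0x7E, 0x7D, 0x02])
assert byte_unstuff(byte_stuff(payload)) == payload
```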
6.2.5 Structure
The contents of an HDLC frame are shown in the following table:
Flag: 8 bits
Address: 8 or more bits
Control: 8 or 16 bits
Information: variable length, 0 or more bits
FCS: 16 or 32 bits
Flag: 8 bits
Table 6.2: Frame Format of HDLC
Data is usually sent in multiples of 8 bits, but only some variants require this;
others theoretically permit data alignments on other than 8-bit boundaries. The frame
check sequence (FCS) is a 16-bit CRC-CCITT or a 32-bit CRC-32 computed over the
Address, Control, and Information fields. It provides a means by which the receiver
can detect errors that may have been induced during the transmission of the frame,
such as lost bits, flipped bits, and extraneous bits. However, because the probability of
certain types of transmission errors going undetected increases with the length of the
data being checked, the FCS implicitly limits the practical size of the frame.
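For illustration, the 16-bit FCS can be computed bit by bit as sketched below, assuming the commonly used reflected CRC-CCITT variant for HDLC (polynomial 0x1021, reflected form 0x8408, initial value 0xFFFF, result complemented).

```python
# Sketch of the 16-bit HDLC FCS, assuming the common reflected CRC-CCITT
# variant (polynomial 0x1021, reflected form 0x8408, initial value 0xFFFF,
# result complemented). It is computed over the Address, Control and
# Information fields, as described above.

def fcs16(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

print(hex(fcs16(b"123456789")))   # expected 0x906e for this variant
```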
6.2.6 Types of Stations (Computers), and Data Transfer Modes
Synchronous Data Link Control (SDLC) was originally designed to connect
one computer with multiple peripherals. The original "normal response mode" is a
master-slave mode where the computer (or primary terminal) gives each peripheral
(secondary terminal) permission to speak in turn. Because all communication is either
to or from the primary terminal, frames include only one address, that of the
secondary terminal; the primary terminal is not assigned an address. There is also a
strong distinction between commands sent by the primary to a secondary, and
responses sent by a secondary to the primary. Commands and responses are in fact
indistinguishable; the only difference is the direction in which they are transmitted.
Normal response mode allows operation over half-duplex communication
links, as long as the primary is aware that it may not transmit when it has given
permission to a secondary. Asynchronous response mode is an HDLC addition for use
over full-duplex links. While retaining the primary/secondary distinction, it allows the
secondary to transmit at any time. Asynchronous balanced mode added the concept of
a combined terminal which can act as both a primary and a secondary. There are some
subtleties about this mode of operation; while many features of the protocol do not
care whether they are in a command or response frame, some do, and the address field
of a received frame must be examined to determine whether it contains a command
(the address received is ours) or a response (the address received is that of the other
terminal).
6.2.7 HDLC Operations and Frame Types:
There are three fundamental types of HDLC frames.
Information frames, or I-frames, transport user data from the network layer.
In addition they can also include flow and error control information
piggybacked on data.
Supervisory Frames, or S-frames, are used for flow and error control
whenever piggybacking is impossible or inappropriate, such as when a station
does not have data to send. S-frames do not have information fields.
Unnumbered frames, or U-frames, are used for various miscellaneous
purposes, including link management. Some U-frames contain an information
field, depending on the type.
6.2.7.1 I-Frames (user data)
Information frames, or I-frames, transport user data from the network layer.
In addition they also include flow and error control information piggybacked on data.
The sub-fields in the control field define these functions. The least significant bit (first
transmitted) defines the frame type. 0 means an I-frame. N(S) defines the sequence
number of send frame. This is incremented for successive I-frames, modulo 8 or
modulo 128. Depending on the number of bits in the sequence number, up to 7 or 127
I-frames may be awaiting acknowledgment at any time.
6.2.7.2 S-Frames (control)
Supervisory Frames, or S-frames, are used for flow and error control whenever
piggybacking is impossible or inappropriate, such as when a station does not have
data to send. S-frames do not have information fields. The S-frame control field
includes a leading "10" indicating that it is an S-frame. This is followed by a 2-bit
type, a poll/final bit, and a sequence number. If 7-bit sequence numbers are used,
there is also a 4-bit padding field. The first 2 bits mean it is an S-frame. All S frames
include a P/F bit and a receive sequence number as described above. Except for the
interpretation of the P/F field, there is no difference between a command S frame and
a response S frame; when P/F is 0, the two forms are exactly equivalent.
6.2.7.3 U-Frames
Unnumbered frames, or U-frames, are used for link management, and can also be
used to transfer user data. They exchange session management and control
information between connected devices, and some U-frames contain an information
field, used for system management information or user data. The first 2 bits (11) mean
it is a U-frame. The 5 type bits (2 before the P/F bit and 3 after it) can create 32
different types of U-frame.
Mode settings (SNRM, SNRME, SARM, SARME, SABM, SABME, UA,
DM, RIM, SIM, RD, DISC)
Information Transfer (UP, UI)
Recovery (FRMR, RSET)
o Invalid Control Field
o Data Field Too Long
o Data field not allowed with received Frame Type
o Invalid Receive Count
Miscellaneous (XID, TEST)
6.2.8 Link Configurations
Link configurations can be categorized as being either:
Unbalanced, which consists of one primary terminal, and one or more
secondary terminals.
Balanced, which consists of two peer terminals.
The three link configurations are:
Normal Response Mode (NRM) is an unbalanced configuration in which only
the primary terminal may initiate data transfer. The secondary terminal
transmits data only in response to commands from the primary terminal. The
primary terminal polls the secondary terminal(s) to determine whether they
have data to transmit, and then selects one to transmit.
Asynchronous Response Mode (ARM) is an unbalanced configuration in
which secondary terminals may transmit without permission from the primary
terminal. However, the primary terminal still retains responsibility for line
initialization, error recovery, and logical disconnect.
Asynchronous Balanced Mode (ABM) is a balanced configuration in which
either station may initiate the transmission.
An additional link configuration is Disconnected mode. This is the mode that a
secondary station is in before it is initialized by the primary, or when it is explicitly
disconnected. In this mode, the secondary responds to almost every frame other than a
mode set command with a "Disconnected mode" response. The purpose of this mode
is to allow the primary to reliably detect a secondary being powered off or otherwise
reset.
6.2.9 Basic Operations
Initialization can be requested by either side by issuing one of the six mode-setting
commands. This command:
o Signals the other side that initialization is requested
o Specifies the mode: NRM, ABM, or ARM
o Specifies whether 3-bit or 7-bit sequence numbers are in use.


6.3 Frame Relay
6.3.1 Introduction
Frame Relay is a standardized wide area networking technology that specifies
the physical and logical link layers of digital telecommunications channels using a
packet switching methodology. Originally designed for transport across Integrated
Services Digital Network (ISDN) infrastructure, it may be used today in the context of
many other network interfaces. Network providers commonly implement Frame
Relay for voice (VoFR) and data as an encapsulation technique, used between local
area networks (LANs) over a wide area network (WAN). Each end-user gets a private
line (or leased line) to a frame-relay node. The frame-relay network handles the
transmission over a frequently-changing path transparent to all end-users.
With the advent of MPLS, VPN and dedicated broadband services such as
cable modem and DSL, the end may loom for the Frame Relay protocol and
encapsulation. However, many rural areas still lack DSL and cable modem
services. In such cases the least expensive type of "always-on" connection remains a
64-kbit/s frame-relay line. Thus a retail chain, for instance, may use Frame Relay for
connecting rural stores into their corporate WAN.

Fig 6.2: A basic Frame Relay network

6.3.2 Design
The designers of Frame Relay aimed to provide a telecommunication service for cost-
efficient transmission of intermittent data traffic between local area networks (LANs)
and between end-points in a wide area network (WAN). Frame Relay puts data in
variable-size units called "frames" and leaves any necessary error-correction (such as
re-transmission of data) up to the end-points. This speeds up overall data
transmission. For most services, the network provides a permanent virtual circuit
(PVC), which means that the customer sees a continuous, dedicated connection
without having to pay for a full-time leased line, while the service-provider figures
out the route each frame travels to its destination and can charge based on usage.
An enterprise can select a level of service quality - prioritizing some frames
and making others less important. Frame Relay can run on fractional T1 or full T-carrier
lines. Frame Relay complements and provides a mid-range service between basic rate
ISDN, which offers bandwidth at 128 kbit/s, and Asynchronous Transfer Mode (ATM),
which operates in somewhat similar fashion to Frame Relay but at speeds from
155.520 Mbit/s to 622.080 Mbit/s.
Frame Relay has its technical base in the older X.25 packet-switching
technology, designed for transmitting data on analog voice lines. Unlike X.25, whose
designers expected analog signals, Frame Relay offers a fast packet technology,
which means that the protocol does not attempt to correct errors. When a Frame Relay
network detects an error in a frame, it simply drops that frame. The end points have
the responsibility for detecting and retransmitting dropped frames. (However, digital
networks offer an incidence of error extraordinarily small relative to that of analog
networks.)
Frame Relay has become one of the most extensively-used WAN protocols. Its
cheapness (compared to leased lines) provided one reason for its popularity. The
extreme simplicity of configuring user equipment in a Frame Relay network offers
another reason for Frame Relay's popularity.
Each Frame Relay Protocol data unit (PDU) consists of the following fields:
1. Flag Field. The flag is used to perform high-level data link synchronization
which indicates the beginning and end of the frame with the unique pattern
01111110. To ensure that the 01111110 pattern does not appear somewhere
inside the frame, bit stuffing and destuffing procedures are used.
2. Address Field. Each address field may occupy octets 2 to 3, octets 2 to 4,
or octets 2 to 5, depending on the range of the address in use. A two-octet
address field comprises the EA (address field extension) bits and the C/R
(command/response) bit, together with the sub-fields below (a parsing sketch
follows this list).
1. DLCI-Data Link Connection Identifier Bits. The DLCI serves to
identify the virtual connection so that the receiving end knows which
information connection a frame belongs to. Note that this DLCI has
only local significance. A single physical channel can multiplex
several different virtual connections.
2. FECN, BECN, DE bits. These bits report congestion:
FECN=Forward Explicit Congestion Notification bit
BECN=Backward Explicit Congestion Notification bit
DE=Discard Eligibility bit
3. Information Field. A system parameter defines the maximum number of data
bytes that a host can pack into a frame. Hosts may negotiate the actual
maximum frame length at call set-up time. The standard specifies the
maximum information field size (supportable by any network) as at least 262
octets. Since end-to-end protocols typically operate on the basis of larger
information units, Frame Relay recommends that the network support the
maximum value of at least 1600 octets in order to avoid the need for
segmentation and reassembling by end-users.
4. Frame Check Sequence (FCS) Field. Since one cannot completely ignore the
bit error-rate of the medium, each switching node needs to implement error
detection to avoid wasting bandwidth due to the transmission of erred frames.
The error detection mechanism used in Frame Relay uses the cyclic
redundancy check (CRC) as its basis.
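As promised above, the following sketch decodes a two-octet address field, assuming the standard layout in which the first octet carries the upper six DLCI bits, the C/R bit and EA=0, and the second octet carries the lower four DLCI bits, FECN, BECN, DE and EA=1. The example values are fabricated.

```python
# Sketch of decoding the two-octet Frame Relay address field described above,
# assuming the standard layout: octet 1 = upper 6 DLCI bits, C/R, EA=0;
# octet 2 = lower 4 DLCI bits, FECN, BECN, DE, EA=1.

def parse_fr_address(octet1: int, octet2: int) -> dict:
    return {
        "dlci": ((octet1 >> 2) & 0x3F) << 4 | (octet2 >> 4) & 0x0F,
        "cr":   (octet1 >> 1) & 0x1,
        "fecn": (octet2 >> 3) & 0x1,   # congestion in the frame's direction
        "becn": (octet2 >> 2) & 0x1,   # congestion in the opposite direction
        "de":   (octet2 >> 1) & 0x1,   # frame eligible for discard
    }

# Fabricated example: DLCI 100 with the BECN bit set
print(parse_fr_address(0x18, 0x45))
```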
The Frame Relay network uses a simplified protocol at each switching node. It
achieves simplicity by omitting link-by-link flow-control. As a result, the offered load
has largely determined the performance of Frame Relay networks. When offered load
is high, due to the bursts in some services, temporary overload at some Frame Relay
nodes causes a collapse in network throughput. Therefore, frame-relay networks
require some effective mechanisms to control the congestion.
Congestion control in frame-relay networks includes the following elements:
1. Admission Control. This is the principal mechanism used in Frame Relay to
guarantee the requested resources once a connection has been accepted. It also
serves generally to achieve high network performance. The network decides
whether to accept a new connection request, based on the relation of the
requested traffic descriptor to the network's residual capacity.
descriptor consists of a set of parameters communicated to the switching
nodes at call set-up time or at service-subscription time, and which
characterizes the connection's statistical properties. The traffic descriptor
consists of three elements:
2. Committed Information Rate (CIR). The average rate (in bit/s) at which the
network guarantees to transfer information units over a measurement interval
T. This T interval is defined as: T = Bc/CIR.
3. Committed Burst Size (BC). The maximum number of information units
transmittable during the interval T.
4. Excess Burst Size (BE). The maximum number of uncommitted information
units (in bits) that the network will attempt to carry during the interval.
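A small worked example of these traffic descriptor elements is sketched below: the measurement interval follows from T = Bc/CIR, and traffic offered in each interval is classified against Bc and Be. The numbers are illustrative, and the treatment of excess traffic (marking it discard-eligible) is a simplification of typical policing behaviour.

```python
# Worked example of the admission-control parameters above: the measurement
# interval is T = Bc / CIR. Within each interval the network commits to carry
# Bc bits, tolerates up to Be additional bits (typically marked DE), and may
# discard anything beyond that. Values are illustrative only.

CIR = 64_000       # committed information rate, bit/s
BC  = 32_000       # committed burst size, bits
BE  = 16_000       # excess burst size, bits

T = BC / CIR       # measurement interval in seconds
print(f"T = {T:.2f} s")   # 0.50 s

def classify_burst(bits_sent_in_T: int) -> str:
    """Classify the traffic offered during one interval T."""
    if bits_sent_in_T <= BC:
        return "within CIR (committed)"
    if bits_sent_in_T <= BC + BE:
        return "excess burst (eligible for discard)"
    return "beyond Bc + Be (discarded)"

for offered in (20_000, 40_000, 60_000):
    print(offered, "->", classify_burst(offered))
```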
Once the network has established a connection, the edge node of the Frame Relay
network must monitor the connection's traffic flow to ensure that the actual usage of
network resources does not exceed this specification. Frame Relay defines some
restrictions on the user's information rate. It allows the network to enforce the end
user's information rate and discard information when the subscribed access rate is
exceeded.
Explicit congestion notification is proposed as the congestion avoidance policy. It
tries to keep the network operating at its desired equilibrium point so that a certain
Quality of Service (QoS) for the network can be met. To do so, special congestion
control bits have been incorporated into the address field of the Frame Relay: FECN
and BECN. The basic idea is to avoid data accumulation inside the network. FECN
means Forward Explicit Congestion Notification. The FECN bit can be set to 1 to
indicate that congestion was experienced in the direction of the frame transmission, so
it informs the destination that congestion has occurred. BECN means Backwards
Explicit Congestion Notification. The BECN bit can be set to 1 to indicate that
congestion was experienced in the network in the direction opposite of the frame
transmission, so it informs the sender that congestion has occurred.
6.3.3 Frame Relay versus X.25
X.25 provides quality of service and error-free delivery, whereas Frame Relay
was designed to relay data as quickly as possible over low error networks. Frame
Relay eliminates a number of the higher-level procedures and fields used in X.25.
Frame Relay was designed for use on links with error-rates far lower than available
when X.25 was designed.
X.25 prepares and sends packets, while Frame Relay prepares and sends
frames. X.25 packets contain several fields used for error checking and flow control,
most of which are not used by Frame Relay. The frames in Frame Relay contain an
expanded link layer address field that enables Frame Relay nodes to direct frames to
their destinations with minimal processing. The elimination of functions and fields
over X.25 allows Frame Relay to move data more quickly, but leaves more room for
errors and larger delays should data need to be retransmitted.
X.25 packet switched networks typically allocated a fixed bandwidth through
the network for each X.25 access, regardless of the current load. This resource
allocation approach, while apt for applications that require guaranteed quality of
service, is inefficient for applications that are highly dynamic in their load
characteristics or which would benefit from a more dynamic resource allocation.
Frame Relay networks can dynamically allocate bandwidth at both the physical and
logical channel level.
6.3.4 Virtual circuits
As a WAN protocol, Frame Relay is most commonly implemented at Layer 2
(data link layer) of the Open Systems Interconnection (OSI) seven layer model. Two
types of circuits exist: permanent virtual circuits (PVCs) which are used to form
logical end-to-end links mapped over a physical network, and switched virtual circuits
(SVCs). The latter are analogous to the circuit-switching concepts of the public
switched telephone network (PSTN), the global phone network.
6.3.5 Frame Relay origins
Frame Relay began as a stripped-down version of the X.25 protocol, releasing
itself from the error-correcting burden most commonly associated with X.25. When
Frame Relay detects an error, it simply drops the offending packet. Frame Relay uses
the concept of shared access and relies on a technique referred to as "best-effort",
whereby error correction practically does not exist and no guarantee of reliable data
delivery is given. Frame Relay provides an industry-standard
encapsulation utilizing the strengths of high-speed, packet-switched technology able
to service multiple virtual circuits and protocols between connected devices, such as
two routers.
6.4 Virtual private network

Fig 6.3: VPN Connectivity overview
A virtual private network (VPN) links two computers through an underlying
local or wide-area network, while encapsulating the data and keeping it private. It is
analogous to a pipe within a pipe. Even though the outer pipe contains the inner one,
the inner pipe has a wall that blocks other traffic in the outer pipe. To the rest of the
network, the VPN traffic just looks like another traffic stream.
The term VPN can describe many different network configurations and
protocols. Some of the more common uses of VPNs are described below, along with
the various classification schemes and models.
6.4.1 History
Until the end of the 1990s, the computers in computer networks were connected
through very expensive leased lines and/or dial-up phone lines. It could cost
thousands of dollars for 56 kbit/s lines or tens of thousands for T1 lines, depending on
the distance between the sites.
Virtual Private Networks reduce network costs because they avoid a need for
many leased lines that individually connect to the Internet. Users can exchange
private data securely, making the expensive leased lines redundant. Only later, in the
2000s, with broadband available, did dial-up VPN and SSL VPNs allow roaming or
home users to access corporate networks via the Internet instead of directly dialling
into the corporate Remote Access Servers. Because of VPNs, the corporate network
came to be called an Enterprise Private Network, to make it clear that it used leased
lines or something other than the Internet for connections between its computers.
6.4.2 VPN classifications
VPN technologies have myriad protocols, terminologies and marketing influences
that define them. For example, VPN technologies can differ in:
The protocols they use to tunnel the traffic
The tunnel's termination point, i.e., customer edge or network provider edge
Whether they offer site-to-site or remote access connectivity
The levels of security provided
The OSI layer they present to the connecting network, such as Layer 2 circuits
or Layer 3 network connectivity
Some classification schemes are discussed in the following sections.

6.4.2.1 Security Mechanisms:
Secure VPNs use cryptographic tunnelling protocols to provide confidentiality
by blocking intercepts and packet sniffing, allow sender authentication to block
identity spoofing, and provide message integrity by preventing message alteration.
Secure VPN protocols include the following:
IPSec (Internet Protocol Security) was originally developed for IPv6, which
requires it. This standards-based security protocol is also widely used with
IPv4. L2TP frequently runs over IPSec.
Transport Layer Security (SSL/TLS) can tunnel an entire network's traffic, as
it does in the OpenVPN project, or secure an individual connection. A number
of vendors provide remote access VPN capabilities through SSL. An SSL
VPN can connect from locations where IPSec runs into trouble with Network
Address Translation and firewall rules. However, SSL-based VPNs use
Transmission Control Protocol (TCP) and so may be vulnerable to denial-of-
service attacks because TCP connections do not authenticate.
Datagram Transport Layer Security (DTLS), is used in Cisco's next-generation
VPN product, Cisco AnyConnect VPN, to solve the issues SSL/TLS has with
tunnelling TCP over TCP.
Microsoft's Microsoft Point-to-Point Encryption (MPPE) works with their
PPTP and in several compatible implementations on other platforms.
Microsoft introduced Secure Socket Tunnelling Protocol (SSTP) in Windows
Server 2008 and Windows Vista Service Pack 1. SSTP tunnels Point-to-Point
Protocol (PPP) or L2TP traffic through an SSL 3.0 channel.
MPVPN (Multi Path Virtual Private Network). Ragula Systems Development
Company owns the registered trademark "MPVPN".
Secure Shell (SSH) VPN -- OpenSSH offers VPN tunnelling to secure remote
connections to a network or inter-network links. This should not be confused
with port forwarding. The OpenSSH server provides a limited number of concurrent
tunnels, and the VPN feature itself does not support personal authentication.

6.4.3 Authentication
Tunnel endpoints must authenticate before secure VPN tunnels can establish.
User-created remote access VPNs may use passwords, biometrics, two-factor
authentication or other cryptographic methods. Network-to-network tunnels often use
passwords or digital certificates, as they permanently store the key to allow the tunnel
to establish automatically and without intervention.
6.4.4 Routing
Tunnelling protocols can be used in a point-to-point topology that would
theoretically not be considered a VPN, because a VPN by definition is expected to
support arbitrary and changing sets of network nodes. But since most router
implementations support software-defined tunnel interface, customer-provisioned
VPNs often are simply defined tunnels running conventional routing protocols. On the
other hand, provider-provisioned VPNs (PPVPNs) need to support multiple coexisting
VPNs, hidden from one another but operated by the same service provider.
6.4.5 VPNs in mobile environments:
Mobile VPNs handle the special circumstances when an endpoint of the VPN
is not fixed to a single IP address, but instead roams across various networks such as
data networks from cellular carriers or between multiple Wi-Fi access points. Mobile
VPNs have been widely used in public safety, where they give law enforcement
officers access to mission-critical applications, such as computer-assisted dispatch and
criminal databases, as they travel between different subnets of a mobile network.
They are also used in field service management and by healthcare organizations
among other industries.
Increasingly, mobile VPNs are being adopted by mobile professionals and
white-collar workers who need reliable connections. They allow users to roam
seamlessly across networks and in and out of wireless-coverage areas without losing
application sessions or dropping the secure VPN session. A conventional VPN cannot
survive such events because the network tunnel is disrupted, causing applications to
disconnect, time out or fail, or even causing the computing device itself to crash.
Instead of logically tying the endpoint of the network tunnel to the physical IP
address, each tunnel is bound to a permanently associated IP address at the device.
The mobile VPN software handles the necessary network authentication and
maintains the network sessions in a manner transparent to the application and the user.
The Host Identity Protocol (HIP), under study by the Internet Engineering Task Force,
is designed to support mobility of hosts by separating the role of IP addresses for host
identification from their locator functionality in an IP network. With HIP a mobile
host maintains its logical connections established via the host identity identifier while
associating with different IP addresses when roaming between access networks.




















CHAPTER 07

PROJECT DETAILS

7.1 About Voice over Internet Protocol:
Voice over Internet Protocol (VoIP) is a general term for a family of
transmission technologies for delivery of voice communications over IP networks
such as the Internet or other packet-switched networks. Other terms frequently
encountered and synonymous with VOIP are IP telephony, Internet telephony, voice
over broadband (VoBB), broadband telephony, and broadband phone. Internet
telephony refers to communications services (voice, facsimile, and/or voice-messaging
applications) that are transported via the Internet, rather than the public
switched telephone network (PSTN). The basic steps involved in originating an
Internet telephone call are conversion of the analog voice signal to digital format and
compression/translation of the signal into Internet protocol (IP) packets for
transmission over the Internet; the process is reversed at the receiving end.
VOIP systems employ session control protocols to control the set-up and tear-
down of calls as well as audio codecs which encode speech allowing transmission
over an IP network as digital audio via an audio stream. Codec use is varied between
different implementations of VOIP (and often a range of codecs are used); some
implementations rely on narrowband and compressed speech, while others support
high fidelity stereo codecs.






Fig 7.1: Cisco IP Phones










7.1.1 History:
1974 - The Institute of Electrical and Electronics Engineers (IEEE) published
a paper titled "A Protocol for Packet Network Intercommunication."
1981 - IPv4 is described in RFC 791.
1985 - The National Science Foundation commissions the creation of NSFNET.
1995 - VocalTec releases the first commercial Internet phone software.
1996
o ITU-T begins development of standards for the transmission and
signalling of voice communications over Internet Protocol networks
with the H.323 standard.
o US telecommunication companies petition the US Congress to ban
Internet phone technology.
1997 - Level 3 began development of its first softswitch, a term they coined
in 1998.
1999
o The Session Initiation Protocol (SIP) specification RFC 2543 is
released.
o Mark Spencer of Digium develops the first open source private branch
exchange (PBX) software (Asterisk).
2004 - Commercial VOIP service providers proliferate.
2005 - The OpenSER (later Kamailio and OpenSIPS) SIP proxy server is forked
from the SIP Express Router.
2006 - FreeSWITCH open source software is released.
7.1.2 VOIP Technologies and Implementations
Voice-over-IP has been implemented in various ways using both proprietary and
open protocols and standards. Examples of technologies used to implement Voice
over Internet Protocol include:
H.323
IP Multimedia Subsystem (IMS)
Media Gateway Control Protocol (MGCP)
Session Initiation Protocol (SIP)
Real-time Transport Protocol (RTP)
The Session Initiation Protocol has gained widespread VOIP market penetration,
while H.323 deployments are increasingly limited to carrying existing long-haul
network traffic. A notable proprietary implementation is the Skype network.
7.1.3 Adoption
Consumer market

Fig 7.2: Example of VOIP adapter setup in a residential network
A major development starting in 2004 has been the introduction of mass-
market VOIP services over Broadband Internet access services, in which subscribers
make and receive calls as they would over the PSTN. Full phone service VOIP phone
companies provide inbound and outbound calling with Direct Inbound Dialling. Many
offer unlimited domestic calling and some to other countries as well, for a flat
monthly fee as well as free calling between subscribers using the same provider.
These services have a wide variety of features which can be more or less similar to
traditional POTS. There are three common methods of connecting to VOIP service
providers:

Fig 7.3: A typical analog telephone adapter (ATA) for connecting an analog phone to a VOIP provider
An Analog Telephone Adapter (ATA) may be connected between an IP
network (such as a broadband connection) and an existing telephone jack in
order to provide service nearly indistinguishable from PSTN providers on all
the other telephone jacks in the residence. This type of service, which is fixed
to one location, is generally offered by broadband Internet providers such as
cable companies and telephone companies as a cheaper flat-rate traditional
phone service.
Dedicated VOIP phones are phones that allow VOIP calls without the use of a
computer. Instead they connect directly to the IP network (using technologies
such as Wi-Fi or Ethernet). In order to connect to the PSTN they usually
require service from a VOIP service provider; most people therefore will use
them in conjunction with a paid service plan.
A soft phone (also known as an Internet phone or digital phone) is a piece of software installed on a computer that allows VOIP calling without dedicated hardware.
PSTN and mobile network providers
It is becoming increasingly common for telecommunications providers to use
VOIP telephony over dedicated and public IP networks to connect switching stations
and to interconnect with other telephony network providers; this is often referred to as "IP backhaul".
"Dual mode" telephone sets, which allow for seamless handover between a cellular network and a Wi-Fi network, are expected to help VOIP become more popular. Phones such as the NEC N900iL and many of the Nokia E-series, along with several other Wi-Fi enabled mobile phones, have SIP clients built into the firmware. Such clients operate independently of the mobile phone network. Some operators, such as Vodafone, actively try to block VOIP traffic on their networks.
Corporate use
Because of the bandwidth efficiency and low costs that VOIP technology can
provide, businesses are gradually beginning to migrate from traditional copper-wire
telephone systems to VOIP systems to reduce their monthly phone costs. VOIP
solutions aimed at businesses have evolved into "unified communications" services that treat all communications (phone calls, faxes, voice mail, e-mail, Web conferences, and more) as discrete units that can be delivered via any means and to any handset, including cell phones. Two kinds of vendors compete in this space: one set focuses on VOIP for medium to large enterprises, while another targets the small-to-medium business (SMB) market.
VOIP runs both voice and data communications over a single network, which
can significantly reduce infrastructure costs. VOIP devices have simple, intuitive user
interfaces, so users can often make simple system configuration changes. Dual-mode
cell phones enable users to continue their conversations as they move between an
outside cellular service and an internal Wi-Fi network, so that it is no longer
necessary to carry both a desktop phone and a cell phone. Maintenance becomes
simpler as there are fewer devices to oversee.
7.1.4 Benefits
Operational cost
VOIP can be a benefit for reducing communication and infrastructure costs.
Examples include:
Routing phone calls over existing data networks to avoid the need for separate
voice and data networks.
Conference calling, IVR, call forwarding, automatic redial, and caller ID
features that traditional telecommunication companies (telcos) normally
charge extra for are available free of charge from open source VOIP
implementations.
Costs are lower, mainly because of the way Internet access is billed compared
to regular telephone calls. While regular telephone calls are billed by the
minute or second, VOIP calls are billed per megabyte (MB). In other words,
VOIP calls are billed per amount of information (data) sent over the Internet
and not according to the time connected to the telephone network. In practice
the amount charged for the data transferred in a given period is far less than
that charged for the amount of time connected on a regular telephone line.
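For a rough sense of the data volumes involved, here is a back-of-the-envelope calculation assuming the common G.711 codec (64 kbps of speech), 20 ms of audio per packet, and roughly 40 bytes of IP/UDP/RTP headers per packet; the figures are indicative only:

    160 bytes of speech + 40 bytes of headers = 200 bytes per packet
    200 bytes x 50 packets per second = 10,000 bytes/s, i.e. about 80 kbps per direction
    10,000 bytes/s x 60 s = roughly 0.6 MB of data per minute of conversation

A compressed codec such as G.729 brings this down to about 24 kbps, which is one reason billing by data volume usually works out cheaper than billing by the minute.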
Flexibility
VOIP can facilitate tasks and provide services that may be more difficult to
implement using the PSTN. Examples include:
The ability to transmit more than one telephone call over a single broadband
connection without the need to add extra lines.
Secure calls using standardized protocols (such as the Secure Real-time Transport Protocol). Most of the machinery needed to secure a telephone connection, such as digitization and digital transmission, is already in place with VOIP; it is only necessary to encrypt and authenticate the existing data stream.
Location independence. Only a sufficiently fast and stable Internet connection
is needed to get a connection from anywhere to a VOIP provider.
Integration with other services available over the Internet, including video
conversation, message or data file exchange during the conversation, audio
conferencing, managing address books, and passing information about
whether other people are available to interested parties.
7.1.5 Challenges
Quality of service
By default, IP routers handle traffic on a first-come, first-served basis. When a
packet is routed to a link where another packet is already being sent, the router holds
it on a queue. Should additional traffic arrive faster than the queued traffic can be
sent, the queue will grow. If VOIP packets have to wait their turn in a long queue,
intolerable latency may result.
One way to avoid this problem is to simply ensure that the links are fast
enough so that queues never build even in the worst case. This usually requires
additional mechanisms to limit the amount of traffic entering the network, and for
voice traffic this is usually done by limiting the number of simultaneous calls.
Another approach is to use quality-of-service (QoS) mechanisms such as Diffserv to
give priority to VOIP packets and other latency-sensitive traffic so they can "jump the
line" and be transmitted ahead of any bulk data packets already in the queue. This can
work quite well when voice constitutes a relatively small fraction of the total network
load, as it usually does in today's Internet.
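On Cisco routers such as those used later in this report, this kind of priority treatment is commonly configured with the Modular QoS CLI: voice packets are matched by their DSCP marking and placed in a strict-priority queue. The sketch below is illustrative only; the class and policy names, the 128 kbps priority bandwidth and the interface are assumptions rather than values from this project:

    class-map match-any VOICE
     match ip dscp ef
    !
    policy-map WAN-EDGE
     class VOICE
      ! strict-priority (low-latency) queue, limited to 128 kbps of voice
      priority 128
     class class-default
      fair-queue
    !
    interface Serial0/0
     service-policy output WAN-EDGE

With this policy attached, queued data packets are served only after any waiting voice packets, which is the "jump the line" behaviour described above.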
Generally a VOIP packet still has to wait for the current packet to finish
transmission; although it is possible to pre-empt (abort) a less important packet in
mid-transmission, this is not commonly done, especially on high speed links where
transmission times are small even for maximum-sized packets. An alternative to pre-
emption on slower links, such as dialup and DSL, is to reduce the maximum
transmission time by reducing the maximum transmission unit. ADSL modems
invariably provide Ethernet (or Ethernet over USB) connections to local equipment,
but inside they are actually ATM modems. They use AAL5 to segment each Ethernet
packet into a series of 48-byte ATM cells for transmission and reassemble them back
into Ethernet packets at the receiver.
However, the great majority of DSL providers use only one VC for each
customer, even those with bundled VOIP service. Every Ethernet packet must be
completely transmitted before another can begin. If a second PVC were established,
given high priority and reserved for VOIP, then a low priority data packet could be
suspended in mid-transmission and a VOIP packet sent right away on the high priority
VC. Then the link would pick up the low priority VC where it left off. Because ATM
links are multiplexed on a cell-by-cell basis, a high priority packet would have to wait
at most 53 byte times to begin transmission. There would be no need to reduce the
interface MTU and accept the resulting increase in higher layer protocol overhead,
and no need to abort a low priority packet and resend it later.

Voice, and all other data, travels in packets over IP networks with fixed
maximum capacity. This system is more prone to congestion and DoS attacks than
traditional circuit switched systems; a circuit switched system of insufficient capacity
will refuse new connections while carrying the remainder without impairment, while
the quality of real-time data such as telephone conversations on packet-switched
networks degrades dramatically.
Fixed delays cannot be controlled as they are caused by the physical distance
the packets travel. They are especially problematic when satellite circuits are involved
because of the long distance to a geostationary satellite and back; delays of 400-600
ms are typical. When the load on a link grows so quickly that its queue overflows,
congestion results and data packets are lost. This signals a transport protocol like TCP
to reduce its transmission rate to alleviate the congestion. The receiver must
resequence IP packets that arrive out of order and recover gracefully when packets
arrive too late or not at all. Jitter results from the rapid and random (i.e.,
unpredictable) changes in queue lengths along a given Internet path due to
competition from other users for the same transmission links. VOIP receivers counter
jitter by storing incoming packets briefly in a "de-jitter" or "play out" buffer,
deliberately increasing latency to increase the chance that each packet will be on hand
when it's time for the voice engine to play it. The added delay is thus a compromise
between excessive latency and excessive dropout, i.e., momentary audio interruptions.
Although jitter is a random variable, it is the sum of several other random
variables that are at least somewhat independent: the individual queuing delays of the
routers along the Internet path in question. Thus according to the central limit
theorem, we can model jitter as a Gaussian random variable. This suggests continually
estimating the mean delay and its standard deviation and setting the play out delay so
that only packets delayed more than several standard deviations above the mean will
arrive too late to be useful. It has been suggested to rely on the packetized nature of
media in VOIP communications and transmit the stream of packets from the source
phone to the destination phone simultaneously across different routes (multi-path
routing). In such a way, temporary failures have less impact on the communication
quality.
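Cisco voice gateways expose this de-jitter buffer directly. As a hedged illustration (command availability depends on the IOS voice feature set, and the values shown are the usual defaults rather than settings taken from this project), the play-out buffer can be tuned per dial peer:

    dial-peer voice 100 voip
     ! adaptive mode lets the gateway grow and shrink the de-jitter buffer with measured jitter
     playout-delay mode adaptive
     ! nominal (starting) delay and the ceiling it may grow to, in milliseconds
     playout-delay nominal 60
     playout-delay maximum 200

A larger nominal value tolerates more jitter at the cost of extra end-to-end delay, which is the compromise discussed above.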


Susceptibility to power failure
Telephones for traditional residential analog service are usually connected
directly to telephone company phone lines which provide direct current to power most
basic analog handsets independently of locally available power. IP Phones and VOIP
telephone adapters connect to routers or cable modems which typically depend on the
availability of mains electricity or locally generated power. Some VOIP service
providers use customer premise equipment (e.g., cable modems) with battery-backed
power supplies to assure uninterrupted service for up to several hours in case of local
power failures. Such battery-backed devices typically are designed for use with
analog handsets.
The susceptibility of phone service to power failures is a common problem
even with traditional analog service in areas where many customers purchase modern
handset units that operate wirelessly to a base station, or that have other modern
phone features, such as built-in voicemail or phone book features.
Emergency calls
The nature of IP makes it difficult to locate network users geographically.
Emergency calls, therefore, cannot easily be routed to a nearby call center.
Sometimes, VOIP systems may route emergency calls to a non-emergency phone line
at the intended department. In the United States, at least one major police department
has strongly objected to this practice as potentially endangering the public.
A fixed line phone has a direct relationship between a telephone number and a
physical location. A telephone number represents one pair of wires that links a
location to the telephone company's exchange. Once a line is connected, the telephone
company stores the home address that relates to the wires, and this relationship will
rarely change. If an emergency call comes from that number, then the physical
location is known.
In the IP world, it is not so simple. A broadband provider may know the
location where the wires terminate, but this does not necessarily allow the mapping of
an IP address to that location. IP addresses are often dynamically assigned, so the ISP
may allocate an address for online access, or at the time a broadband router is
engaged. The ISP recognizes individual IP addresses, but does not necessarily know the physical location to which each corresponds. The broadband service provider
knows the physical location, but is not necessarily tracking the IP addresses in use.
There are more complications, since IP allows a great deal of mobility. For
example, a broadband connection can be used to dial a virtual private network that is
employer-owned. When this is done, the IP address being used will belong to the
range of the employer, rather than the address of the ISP, so this could be many
kilometres away or even in another country. To provide another example: if mobile
data is used, e.g., a 3G mobile handset or USB wireless broadband adapter, then the
IP address has no relationship with any physical location, since a mobile user could be
anywhere that there is network coverage, even roaming via another cellular company.
VOIP Enhanced 911 (E911) is another method by which VOIP providers in
the United States are able to support emergency services. The VOIP E911 emergency-
calling system associates a physical address with the calling party's telephone number
as required by the Wireless Communications and Public Safety Act of 1999. All
"interconnected" VOIP providers (those that provide access to the PSTN system) are
required to have E911 available to their customers. VOIP E911 service generally adds
an additional monthly fee to the subscriber's service per line, similar to analog phone
service.
Lack of redundancy
With the current separation of the Internet and the PSTN, a certain amount of
redundancy is provided. An Internet outage does not necessarily mean that a voice
communication outage will occur simultaneously, allowing individuals to call for
emergency services and many businesses to continue to operate normally. In
situations where telephone services become completely reliant on the Internet
infrastructure, a single-point failure can isolate communities from all communication,
including Enhanced 911 and equivalent services in other locales.


Number portability
Local number portability (LNP) and Mobile number portability (MNP) also
impact VOIP business. In November 2007, the Federal Communications Commission
in the United States released an order extending number portability obligations to
interconnected VOIP providers and carriers that support VOIP providers. Number
portability is a service that allows a subscriber to select a new telephone carrier
without requiring a new number to be issued. Typically, it is the responsibility of the
former carrier to "map" the old number to the undisclosed number assigned by the
new carrier. This is achieved by maintaining a database of numbers. A dialled number
is initially received by the original carrier and quickly rerouted to the new carrier.
Multiple porting references must be maintained even if the subscriber returns to the
original carrier. The FCC mandates carrier compliance with these consumer-
protection stipulations.
A voice call originating in the VOIP environment also faces challenges to
reach its destination if the number is routed to a mobile phone number on a traditional
mobile carrier. VOIP has been identified in the past as a Least Cost Routing (LCR)
system, which is based on checking the destination of each telephone call as it is
made, and then sending the call via the network that will cost the customer the least.
This rating is subject to some debate given the complexity of call routing created by
number portability. With GSM number portability now in place, LCR providers can
no longer rely on using the network root prefix to determine how to route a call.
Instead, they must now determine the actual network of every number before routing
the call.
Therefore, VOIP solutions also need to handle MNP when routing a voice call.
In countries without a central database, like the UK, it might be necessary to query the
GSM network about which home network a mobile phone number belongs to. As the
popularity of VOIP increases in the enterprise markets because of least cost routing
options, it needs to provide a certain level of reliability when handling calls. MNP
checks are important to assure that this quality of service is met. By handling MNP
lookups before routing a call and by assuring that the voice call will actually work,
VOIP service providers are able to offer business subscribers the level of reliability
they require.

Security
Voice over Internet Protocol (VOIP) telephone systems are susceptible to attacks, as are any Internet-connected devices. This means that hackers who know
about these vulnerabilities (such as insecure passwords) can institute denial-of-service
attacks, harvest customer data, record conversations and break into voice mailboxes.
Another challenge is routing VOIP traffic through firewalls and network
address translators. Private Session Border Controllers are used along with firewalls
to enable VoIP calls to and from protected networks. For example, Skype uses a
proprietary protocol to route calls through other Skype peers on the network, allowing
it to traverse symmetric NATs and firewalls. Other methods to traverse NATs involve
using protocols such as STUN or ICE.
Many consumer VoIP solutions do not support encryption, although having a
secure phone is much easier to implement with VOIP than with traditional phone lines. As
a result, it is relatively easy to eavesdrop on VoIP calls and even change their content.
An attacker with a packet sniffer could intercept your VOIP calls if you are not on a
secure VLAN.
There are open source tools, such as Wireshark, that facilitate sniffing of
VOIP conversations. A modicum of security is afforded by patented audio codecs in
proprietary implementations that are not easily available for open source applications,
however such security through obscurity has not proven effective in other fields.
Some vendors also use compression to make eavesdropping more difficult. However,
real security requires encryption and cryptographic authentication which are not
widely supported at a consumer level. The existing security standard Secure Real-time
Transport Protocol (SRTP) and the new ZRTP protocol are available on Analog
Telephone Adapters (ATAs) as well as various soft phones.
Securing VOIP
To address these security concerns, government and military organizations use Voice over Secure IP (VoSIP), Secure Voice over IP (SVoIP), and Secure Voice over Secure IP (SVoSIP) to protect confidential and/or
classified VOIP communications. Secure Voice over IP is accomplished by
encrypting VOIP with Type 1 encryption. Secure Voice over Secure IP is
accomplished by using Type 1 encryption on a classified network, like SIPRNet.
Public Secure VOIP is also available with free GNU programs.
Caller ID
Caller ID support among VOIP providers varies, although the majority of
VOIP providers now offer full Caller ID with name on outgoing calls. In a few cases,
VOIP providers may allow a caller to spoof the Caller ID information, potentially
making calls appear as though they are from a number that does not belong to the caller. Business-grade VOIP equipment and software often makes it easy to modify
caller ID information. Although this can provide many businesses great flexibility, it
is also open to abuse.
Support for other telephony devices
Another challenge for VOIP implementations is the proper handling of
outgoing calls from other telephony devices such as DVR boxes, satellite television
receivers, alarm systems, conventional modems and other similar devices that depend
on access to a PSTN telephone line for some or all of their functionality. These types
of calls sometimes complete without any problems, but in other cases they fail. If
VOIP and cellular substitution becomes very popular, some ancillary equipment
makers may be forced to redesign equipment, because it would no longer be possible
to assume a conventional PSTN telephone line would be available in consumer's
homes.
Legal issues
As the popularity of VOIP grows, and PSTN users switch to VOIP in
increasing numbers, governments are becoming more interested in regulating VOIP in
a manner similar to PSTN services. Another legal issue that the US Congress is
debating concerns changes to the Foreign Intelligence Surveillance Act. The issue in
question is calls between Americans and foreigners. The National Security Agency
(NSA) is not authorized to tap Americans' conversations without a warrant, but the Internet, and specifically VOIP, does not draw as clear a line to the location of a caller or a call's recipient as the traditional phone system does. As VOIP's low cost and flexibility convince more and more organizations to adopt the technology, the line defining the NSA's ability to snoop on phone calls only gets blurrier. VOIP
technology has also increased security concerns because VOIP and similar
technologies have made it more difficult for the government to determine where a
target is physically located when communications are being intercepted, and that
creates a whole set of new legal challenges.
In the US, the Federal Communications Commission now requires all
interconnected VOIP service providers to comply with requirements comparable to
those for traditional telecommunications service providers. VOIP operators in the US
are required to support local number portability; make service accessible to people
with disabilities; pay regulatory fees, universal service contributions, and other
mandated payments; and enable law enforcement authorities to conduct surveillance
pursuant to the Communications Assistance for Law Enforcement Act (CALEA).
"Interconnected" VOIP operators also must provide Enhanced 911 service, disclose
any limitations on their E-911 functionality to their consumers, and obtain affirmative
acknowledgements of these disclosures from all consumers. VOIP operators also
receive the benefit of certain US telecommunications regulations, including an
entitlement to interconnection and exchange of traffic with incumbent local exchange
carriers via wholesale carriers. Providers of "nomadic" VOIP service, those who are unable to determine the location of their users, are exempt from state telecommunications regulation.
In the European Union, the treatment of VOIP service providers is a decision
for each Member State's national telecoms regulator, which must use competition law
to define relevant national markets and then determine whether any service provider
on those national markets has "significant market power" (and so should be subject to
certain obligations). A general distinction is usually made between VOIP services that
function over managed networks (via broadband connections) and VOIP services that
function over unmanaged networks (essentially, the Internet).
In India, it is legal to use VOIP, but it is illegal to have VOIP gateways inside
India. This effectively means that people who have PCs can use them to make a VOIP
call to any number, but if the remote side is a normal phone, the gateway that converts
the VOIP call to a POTS call should not be inside India.
In the UAE, it is illegal to use any form of VOIP, to the extent that Web sites
of Skype and Gizmo5 are blocked. In the Republic of Korea, only providers registered
with the government are authorized to offer VOIP services. Unlike many VOIP
providers, most of whom offer flat rates, Korean VOIP services are generally metered
and charged at rates similar to terrestrial calling. Foreign VOIP providers encounter
high barriers to government registration. This issue came to a head in 2006 when
Internet service providers providing personal Internet services by contract to United
States Forces Korea members residing on USFK bases threatened to block off access
to VOIP services used by USFK members as an economical way to keep in contact
with their families in the United States, on the grounds that the service members'
VOIP providers were not registered.
7.2 Equipment Required To Implement VoIP:
The aim of this project is to transfer voice over an IP network. Ideally this would require at least three routers and two computers; however, the real routers were available only on the lab rack for practical sessions and could not be dedicated to this project. Instead, the project was implemented using an emulator known as GNS3, which can run a number of routers on a single PC by drawing on the PC's processor resources.
COMPUTERS: We needed two computers to act as the terminal devices between which the voice would be routed.
ROUTER: As real routers are very costly and could not be budgeted for this project, the GNS3 software was used to run a router inside a computer. GNS3 is an emulator that uses the PC's processor resources to run Cisco router software; although it cannot reproduce every working condition and output of a real router, on a high-performance PC it serves perfectly well as one.
So we needed two PCs, with GNS3 installed on at least one of them, running a Cisco Internetwork Operating System (IOS) image with the voice feature set required to transfer voice over IP.
IP PHONE: An IP phone is a device that uses an IP address, rather than a telephone number, to send and receive voice. To send and receive voice on the computers we would normally need IP phones, which convert the analog voice signal into digital form and hand it to the computer for transmission. However, since VOIP has seen little deployment in India so far and its commercial use through gateways remains restricted, hardware IP phones were replaced by Cisco IP Communicator, a softphone interface that can be used to send and receive voice packets directly on the computer.
7.3 Setup of equipment:
As described above, the topology was designed as shown in the following figure:

Fig 7.4: Set-Up

The router emulated in GNS3 was from the Cisco 3700 series, running a 3700-series voice-capable IOS image. The two PCs were named PC1 and PC2. First, a loopback interface was created on the PC and bridged with its LAN interface; this bridge was then connected to the router inside the GNS3 topology. With this setup PC1 is connected to the router, and the router's LAN interface is ready to be connected to PC2. After this, the PCs and the router's interface had to be given appropriate IP addresses, as sketched below.
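On the router side, assigning the address later listed in Table 7.1 is a matter of basic interface configuration. A hedged sketch follows; the interface name FastEthernet0/0 is an assumption about how the emulated 3700-series router presents its LAN port:

    Router> enable
    Router# configure terminal
    Router(config)# interface FastEthernet0/0
    Router(config-if)# ip address 192.168.50.1 255.255.255.0
    Router(config-if)# no shutdown
    Router(config-if)# end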

7.4 Working of the project:
After connecting all the equipment and building the topology, we assigned IP addresses to the router, PC1, and PC2 as follows:

Device    IP Address       Subnet Mask
Router    192.168.50.1     255.255.255.0
PC1       192.168.50.10    255.255.255.0
PC2       192.168.50.20    255.255.255.0

Table 7.1: IP Addressing of the Devices
Cisco IP Communicator was then installed on both PCs and bound to the appropriate network interface on each. The softphone detects the router and the adjoining computer and registers itself on the network; a sketch of the kind of router-side configuration this registration relies on is given below. After registration, both PCs are ready to communicate through a voice channel set up between them over IP.
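The report does not reproduce the router-side configuration, but registration of Cisco IP Communicator against a voice-IOS router normally relies on a minimal CallManager Express (telephony-service) setup. The sketch below is an assumption about what such a configuration typically looks like, not the project's actual configuration; the extension numbers are illustrative and the MAC addresses are placeholders for the MACs of the PC interfaces the softphones bind to:

    telephony-service
     max-ephones 2
     max-dn 2
     ! CME listens for phone registrations on the router's LAN address
     ip source-address 192.168.50.1 port 2000
     create cnf-files
    !
    ephone-dn 1
     number 1001
    ephone-dn 2
     number 1002
    !
    ephone 1
     mac-address 0000.1111.AAAA
     button 1:1
    ephone 2
     mac-address 0000.2222.BBBB
     button 1:2

Successful registration can then be checked on the router with show ephone registered, although the exact output varies between IOS versions.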
The IP Communicator then displays a directory (extension) number on each PC. This number can be dialled to connect to the other end, i.e. the other PC. The communication can be observed by sniffing the voice packets transmitted and received on the LAN interface. On inspection, the protocol type of these packets turns out to be UDP, an unreliable transport layer protocol that is nevertheless well suited to real-time voice, where protocols such as RTP run on top of it.






CHAPTER 08
FUTURE SCOPE AND CONCLUSION
8.1 Future of VoIP:
Around the world, voice over Internet protocol (VoIP) services are being
offered by local and long-distance telephone operators, cable television companies,
Internet service providers, non-facilities-based independent providers and mobile
operators. VoIP is showing strong growth in the number of subscribers and the
revenues it generates. Businesses and consumers are already taking advantage of the
cost savings and new features of making calls over a converged voice-data network,
and the logical next step is to take those advantages to the wireless world. The
potential impact of wireless VoIP on the communications market is enormous: market research firm ABI Research has forecast that dual-mode cellular/voice over Wi-Fi enabled handsets will surpass 50 million by 2009, accounting for seven percent of the overall handset market. VoIP is expected to evolve in the mobile industry much as it already has with land-line phones. Third Generation (3G) users are expected to reach over 230 million by the end of 2012. Interesting new handsets should also create a buzz: Apple's 3G handsets and the newer Nokia phones (the E and N series, such as the E51) have SIP clients and Wi-Fi built into the firmware as well. This means that anyone who buys one of these phones can use VoIP alongside their cellular service whenever they are in a Wi-Fi hotspot. As far as progress is concerned, the future is likely to turn VoIP into VoIPo3G (VoIP over 3G).

8.1.1 Unified Communication (UC):

The future of VoIP goes hand in hand with unified communications (UC). UC is a new
technological architecture whereby communication tools are integrated so that both
businesses and individuals can manage all their communications in one entity instead
of separately. In short, unified communications bridges the gap between VoIP and
other computer related communication technologies.

8.1.2 INTEGRATION into INTERFACES and Privacy:

Next, VoIP needs to keep up its integration into Web 2.0 interfaces such as eBay (which may actually be selling Skype), Facebook, and MySpace. VoIP providers will continue to look for new ways to improve such websites, whether auction sites, social networking sites, or blogs. The other step will be to block unwanted calls; privacy is another factor that can determine the success of VoIP integration, as people need to feel that they are in control and do not want disruptive calls.

8.1.3 INTEGRATION into Places:
Integration of VoIP into places that let people easily reach the required destination, web browsers being one example, is another innovation to look for. VoIP calling will increasingly be offered through browser plug-ins and add-ons alongside other online activities, rather than through separate computer applications. Since Macromedia has announced that a Flash plug-in will include a Session Initiation Protocol (SIP) client, it will become even easier for websites to create SIP applications.

8.1.4 Standard Mobile Applications:
Mobile applications will make VoIP stand out in 2008, for example with Google's Android, an open source mobile platform, and the bonanza of applications that Google is going to market. Google has encouraged developers to create applications for its operating system, and some betas have already been created.

8.1.5 Gaming:
The future also calls for VoIP in gaming, as gamers continue to move online and interact with each other. Skype has announced a partnership with Sony on its PSP systems to enable VoIP calling with SkypeIn and SkypeOut.

With all these new applications, developments, trends, and communication patterns, the future of VoIP is still on a fast track. VoIP continues to evolve, saving businesses, call centres, and consumers money while continuing to improve in quality and features.

8.2 Predictions for VoIP Industry Market:

It's metrics and predictions time! In-Stat says the VoIP market gained 3.8
million households in 2006, with wholesale revenues of $1.1 billion last year
increasing to $3.8 billion in 2010. The Yankee Group puts the SMB VoIP market at
$200 million last year growing to $1.3 billion in 2009. And the Dell'Oro Group
predicts that the PBX market will exceed $7.5 billion in 2011, with most of the
growth coming in IP PBXs. "As retail VoIP expands, wholesale VoIP will accelerate quickly," said In-Stat analyst Bryan Van Dussen in a statement. "The largest segment remains international VoIP, but we expect the market for local services to surge from 12 percent of all revenues to 27 percent by 2010."

Recent research by In-Stat also predicted that consumer VoIP adoption will
drive wholesale VoIP revenues to $3.8 billion by 2010 from $1.1 billion in 2006.
What's more, international wholesale VoIP termination/origination revenues are
experiencing declining growth rates. Over the long-haul, wholesale VoIP is expected
to experience significant migration of TDM services throughout the forecast period,
and become a majority of the international market by 2009. The research, "Wholesale VoIP Forecast: Consumer VoIP Accelerates Demand," covers the market for
wholesale VoIP services. It provides a market forecast of U.S. VoIP households and
wholesale VoIP revenues segmented by main product categories. Analysis of the
wholesale market is presented, including market drivers and barriers and three key
trends: peering, bundling, and QoS.

8.3 Conclusion:
Voice over IP is quickly becoming readily available across much of the world; however, many problems still remain. For the time being, transmission networks involve too much latency or drop too many packets, and this affects quality of service, sometimes severely deteriorating the quality of the call. VOIP also carries many security risks, sending out packets that anyone may intercept. Although VOIP may offer cheaper solutions for many, the PSTN offers high QoS and greater security that make up for its higher prices. It is my belief that the telephone market will continue to be dominated by the PSTN until quality of service and security issues can be
addressed. High speed internet connections to businesses and residences have made it
possible to use the bandwidth for voice communications along with the other kinds of
data. VoIP is beyond the early adoption phase and can be described as being in the
initial growth phase. With VoIP, many different add-on applications are possible and
it will be interesting to see where this leads voice communications from where we
now know it. As data traffic continues to increase and surpass that of voice traffic, the
convergence and integration of these technologies will not only continue to improve,
but also will pave the way for a truly unified and seamless means of communication.
Implementing VoIP can provide significant benefits and savings to your company.
8.3.1 The obstacles to VoIP:
In some markets, however, VoIP does not seem to be achieving its full potential.
Some of the obstacles to growth are:
Problems with QoS and reliability: Voice, video and high-speed data
services have different requirements, so bundled products place different
burdens on networks in terms of quality of service (QoS). The ability of the
network to function despite power shortages is a particular problem in
developing countries. In terms of security, only limited calling party
information may be available over VoIP.
Resistance by incumbents: Established operators may see VoIP as a threat to
their PSTN revenues, mainly in countries where the market is monopoly-based
or less mature.
Regulatory uncertainty: Operators argue that, in order to justify heavy
investment in broadband networks for VoIP, they must have a clear and
predictable regulatory framework that helps to guarantee returns on
investment.
Specific regulatory requirements: Some countries are developing
regulations on VoIP (e.g. emergency call obligations) that may make it harder
for new entrants to offer VoIP services.
8.3.2 Pros and Cons of VoIP:

Pros:
Cheaper and free calls.
Extra features including caller ID, call waiting, call transfer, repeat dial, return call, and three-way calling.
Voice packets require relatively little bandwidth, so multiple calls can be carried on a single line.
Leads to greater business efficiency and profitability.

Cons:
Relies on wall power, unlike the PSTN, which powers the handset from the exchange line.
Emergency (911) operators may be unable to determine the caller's location.
Quality of service suffers due to packet loss and network latency leading to
garbled conversations.
Likely to be affected by viruses and worms.




























