
DATA TRAFFIC

Bandwidth
Bandwidth is a key concept in many telephony applications.
In radio communications, bandwidth is the range of frequencies occupied by a modulated carrier wave.
In computer networking and computer science, digital bandwidth, network bandwidth, or just bandwidth is a measure of available or consumed data communication resources.
In many signal processing contexts, bandwidth is a valuable and limited resource.

Digital Bandwidth
is the measure of how much information can flow from one place to another in a given amount of time
two common uses: analog signals and digital signals
measured in bps
a major factor in analyzing a network's performance

Pipe Analogy for Bandwidth

Highway Analogy for Bandwidth

Maximum Bandwidths and Length Limitations


Typical Media                        Max. Theoretical Bandwidth   Max. Physical Distance
50-Ohm coaxial cable (Thinnet)       10-100 Mbps                  185 m
75-Ohm coaxial cable (Thicknet)      10-100 Mbps                  500 m
CAT 5 UTP                            10 Mbps                      100 m
CAT 5 UTP (Fast Ethernet)            100 Mbps                     100 m
Multimode optical fiber              100 Mbps                     2 km
Single-mode optical fiber            1000 Mbps (1 Gbps)           3 km
Wireless                             11 Mbps                      A few hundred meters

WAN Services and Bandwidths


Type of Service   Typical User                           Bandwidth
Modem             Individuals                            56 Kbps
ISDN              Telecommuters and small businesses     128 Kbps
Frame Relay       Small institutions and reliable WANs   56 Kbps to 44 Mbps
T1                Larger entities                        1.544 Mbps
T3                Larger entities                        44.736 Mbps
STS-1 (OC-1)      Phone companies/backbones              51.840 Mbps
STS-3 (OC-3)      Phone companies/backbones              155.52 Mbps
STS-48 (OC-48)    Phone companies/backbones              2.488320 Gbps

Optical Carrier (OC)


Optical Carrier (OC) levels describe a range of digital signals that can be carried on a SONET fiber-optic network.
The number in the Optical Carrier level is directly proportional to the data rate of the bit stream carried by the digital signal.
The general rule for calculating the speed of Optical Carrier lines: when a specification is given as OC-n, the speed equals n x 51.84 Mbit/s.
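As a quick illustration of the OC-n rule above, the Python sketch below (names are illustrative) computes the line rate for a few levels; OC-48 comes out at 2,488.32 Mbit/s, matching the WAN services table.

OC_BASE_MBPS = 51.84   # OC-1 base rate

def oc_rate_mbps(n):
    """Line rate of an OC-n signal in Mbit/s (n times the OC-1 base rate)."""
    return n * OC_BASE_MBPS

for n in (1, 3, 12, 48, 192):
    print(f"OC-{n}: {oc_rate_mbps(n):,.2f} Mbit/s")
# OC-48 -> 2,488.32 Mbit/s (about 2.488 Gbps)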

Optical Carrier specifications (in use)

OC-1

OC-3 / STM-1x

OC-3c

OC-12 / STM-4x

OC-24

OC-48 / STM-16x / 2.5G Sonet

OC-192 / STM-64x / 10G Sonet

OC-96

OC-768 / STM-256x
9

Optical Carrier specifications (in use)


OC-1
OC-1 is a SONET line with transmission speeds of up to 51.84 Mbit/s
(payload: 50.112 Mbit/s; overhead: 1.728 Mbit/s) using optical fiber.
This base rate is multiplied for use by other OC-n standards. For example, an
OC-3 connection is 3 times the rate of OC-1.

OC-3 / STM-1x
OC-3 is a network line with transmission speeds of up to 155.52 Mbit/s
(payload: 148.608 Mbit/s; overhead: 6.912 Mbit/s, including path overhead)
using fiber optics.
Depending on the system, OC-3 is also known as STS-3 (at the electrical level) and STM-1 (in SDH).
When an OC-3 carries data from a single source and is not multiplexed, the letter c (standing for concatenated) is appended: OC-3c.
10

Throughput
Refers to the actual, measured bandwidth at a specific time of day, using specific Internet routes, while downloading a specific file.
A major factor in analyzing a network's performance.

11

Bandwidth and Throughput


Bandwidth may refer to bandwidth capacity
or available bandwidth in bit/s, which
typically means the net bit rate, channel
capacity or the maximum throughput of a
logical or physical communication path in a
digital communication system.
For example, bandwidth test implies measuring
the maximum throughput of a computer
network.
12

Bandwidth and Throughput


Factors that determine throughput and bandwidth include:
- internetworking devices
- the type of data being transferred
- topology
- the number of users
- the user's computer and the server computer
- power and weather-related outages
- congestion
13

File Transfer Time Calculations

14

EXAMPLE
Which would take less time?
Sending a floppy disk (1.44 MB) full of data over an ISDN BRI line, OR
sending a 10 GB hard drive full of data over an OC-48 line?

15

EXAMPLE (Cont.)
T = S/BW

1. S = 1.44 MB, BW = 128 Kbps
   Time = (1.44 x 1000 KBytes x 8 bits) / 128 Kbps = 90 seconds

2. S = 10 GB, BW = 2.488320 Gbps
   Time = (10 GBytes x 8 bits) / 2.488320 Gbps = 32.15 seconds
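The same T = S/BW arithmetic can be written as a short Python sketch (decimal units assumed; function and variable names are illustrative):

def transfer_time_seconds(size_bytes, bandwidth_bps):
    """Time to move size_bytes over a link of bandwidth_bps (T = S/BW)."""
    return size_bytes * 8 / bandwidth_bps

print(transfer_time_seconds(1.44e6, 128e3))       # floppy over ISDN BRI: ~90 s
print(transfer_time_seconds(10e9, 2.488320e9))    # 10 GB over OC-48: ~32.15 s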

16

Overview Of Communications

Message     - the idea or thought
Source      - the brain
Sender      - the transmitting device (the mouth)
Channel     - the medium the message travels over (the air)
Receiver    - the receiving device (the ear)
Destination - the brain

Basics Of Data Communications

18

Transmission Media
Transmission media refers to the many types of cables and other media that carry the signal from the sender to the receiver.
Types of Media
Guided Media
Unguided Media
19

Guided media are manufactured so that signals will be confined to a narrow path and will behave predictably.
Commonly used guided media include:
twisted-pair wiring, similar to common telephone wiring;
coaxial cable, similar to that used for cable TV;
and optical fibre cable.

LAN CABLING

Widely used are 10BASE-T, 100BASE-TX, and 1000BASE-T (Gigabit Ethernet), running at 10 Mbit/s, 100 Mbit/s, and 1000 Mbit/s (1 Gbit/s) respectively.
BASE is short for baseband, meaning that there is no frequency-division multiplexing (FDM) or other frequency-shifting modulation in use; each signal has full control of the wire, on a single frequency.
The T designates twisted-pair cable, where the pair of wires for each signal is twisted together to reduce radio-frequency interference and crosstalk between pairs (FEXT and NEXT).

21

Coaxial Cable Systems


Coaxial cable, or coax, is an electrical cable
with an inner conductor surrounded by a
tubular insulating layer typically of a flexible
material with a high dielectric constant, all of
which are surrounded by a conductive layer
(typically of fine woven wire for flexibility, or of
a thin metallic foil), and finally covered with a
thin insulating layer on the outside.
The term coaxial comes from the inner
conductor and the outer shield sharing the
same geometric axis.

22

Coaxial Cable Systems

Advantages
cheap to install
conforms to standards
widely used
greater capacity than UTP to carry more conversations (60-1200
speech circuits)

Disadvantages
limited in distance
limited in number of connections
terminations and connectors must be done properly

23

Coaxial Cable Systems

24

Twisted pair Cabling


Twisted-pair cabling is a form of wiring in which two conductors (the forward and return conductors of a single circuit) are twisted together for the purpose of canceling out electromagnetic interference (EMI) from external sources, for instance electromagnetic radiation from other unshielded twisted pair (UTP) cables, and crosstalk between neighboring pairs.

25

UTP and STP


Advantages of UTP
Easy installation/termination
Cheap installation
Disadvantages of UTP
Very noisy
Limited in distance
Suffers from interference

26

UTP and STP

27

UTP and STP

28

29

30

Straight-through Cable & Crossover Cable


In a straight through cable, pins on one end
correspond exactly to the corresponding pins on
the other end (pin 1 to pin 1, pin 2 to pin 2,
etc.).
Using the same wiring (a given color wire connects to
a given number pin, the same at both ends) at each
end yields a straight through cable.

31

Crossover cable
In a crossover cable, the pins do not correspond one-to-one; some pairs are swapped, so that if pin 1 on one end goes to pin 2 on the other end, then pin 2 on the first end goes to pin 1 on the second end, and not to pin 3 or some other pin. Such crossover cables are symmetric: they work identically regardless of which way you plug them in (if you turn the cable around, it still connects the same pins as before).
Using different wiring at each end (a given color wire connects to one numbered pin at one end and a different numbered pin at the other) yields a crossover cable.

32

Crossover cable
An electrical cable that connects two devices directly (output of one to input of the other) is also called a crosslink.
It allows devices to communicate without a switch, hub, or router.
Crossover cables are used to connect two computers directly through their NICs, without a hub or switch, or to uplink two or more hubs, switches or routers.
In general, such a cable changes between two different wirings; the most common example is the Ethernet crossover cable.

33

34

DEVICE CONNECTIONS THROUGH UTP


Straight-through cable for:
Switch to Router
Switch to PC/Server
Hub to PC/Server

Crossover Cable for:

Switch to switch
Switch to Hub
Hub to Hub
Router to Router
PC to PC
Router to PC
35

Micro-wave systems
Advantages of Microwave systems
medium capacity
medium cost
can go long distances (stations are located about 30 kilometers apart, in line of sight)
Disadvantages of Microwave Systems
noise interference
geographical problems due to line of sight requirements
becoming outdated

36

Micro-wave systems

37

Satellite systems
Advantages of Satellite systems
low cost per user (for PAY TV)
high capacity
very large coverage

Disadvantages of Satellite systems


high install cost in launching a satellite
receive dishes and decoders required
delays involved in the reception of the signal

38

Satellite systems

39

Fiber-optic Cable
Many extremely thin
strands of glass or
plastic bound
together in a
sheathing which
transmits signals with
light beams
Can be used for voice,
data, and video

40

Fiber-optic Cable
Advantages of Fiber optic cable
high capacity
immune to interference
can go long distances

Disadvantages of Fiber optic cable


costly
difficult to join

41

DATA TRAFFIC

42

TRAFFIC REGULATING
DEVICES

43

Repeater in Data Network

Different types of network cabling have their own maximum distance over which they can carry a data signal.
When a LAN is extended beyond the maximum run for its particular cabling type, repeaters are used.
A repeater takes the signal it receives from the computers and other devices on the LAN and regenerates it.

44

Problems with repeaters


Repeaters do not have any capability of directing network traffic or deciding what particular route certain data should take.
They amplify the entire signal they receive, including any line noise.
In the worst case, they pass on data traffic that is barely distinguishable from the background noise on the line.
Repeaters require some time to regenerate the signal; this causes a propagation delay that can affect network communication when there are several repeaters in a row.
Many network architectures therefore limit the number of repeaters that can be used in a row.
45

Repeater in Data Network


A repeater connects two segments of network cable. It
retimes and regenerates the signals to proper
amplitudes and sends them to the other segments.
When talking about Ethernet topology, we are probably talking about using a hub as a repeater.
Repeaters work only at the physical layer of the OSI
network model.

Repeater in Data Network

47

RULE
Between any two nodes on the network there can be a maximum of five segments, connected through four repeaters/concentrators, and only three of the five segments may contain user connections.

48

LAYER 2 DEVICES AND EFFECTS ON DATAFLOW

NIC (Network Interface Card)
Connects your computer to the network.
Provides a MAC address for each connection.
Implements the CSMA/CD algorithm.

Bridge
Forwards or filters frames by MAC address.

Switch
A multi-port bridge.

NIC

Media Access Control (MAC)


Every computer has a unique way of identifying itself: the MAC address, also called the physical address.
MAC addresses are sometimes referred to as burned-in addresses (BIAs) because they are burned into read-only memory (ROM) and are copied into random-access memory (RAM) when the NIC initializes.
They are written, for example, as 0000.0c12.3456 or 00-00-0c-12-34-56.
The physical address is located on the Network Interface Card (NIC).

Media Access Control (MAC)


Ethernet's MAC performs three functions:
transmitting and receiving data packets
decoding data packets and checking them
for valid addresses before passing them to
the upper layers of the OSI model
detecting errors within data packets or on
the network

Media Access Control (MAC)

[Figure: a stream of frames on the wire, each carrying a destination address, a source address, and data.]

Limitation of MAC

MAC addressing does not work well in an internetwork.
It is hardware dependent.

Data Collision
A data collision is the simultaneous presence
of signals from two nodes on the network.
A collision can occur when two nodes each think
the network is idle and both start transmitting at
the same time. Both packets involved in a
collision are broken into fragments and must be
retransmitted.
In an Ethernet network, a collision is the result of
two devices on the same Ethernet network
attempting to transmit data at exactly the same
time. The network detects the "collision" of the
two transmitted packets and discards them both.
55

Data Collision
Two methods of resolving the collision problem:
Collision Detection: Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
Collision Avoidance: Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)

Ethernet uses CSMA/CD as its method of allowing devices to "take turns" using the signal carrier line.
When a device wants to transmit, it checks the signal level of the line to determine whether someone else is already using it. If the line is in use, the device waits and tries again, perhaps in a few seconds. If it is not in use, the device transmits.
However, two devices can transmit at the same time, in which case a collision occurs and both devices detect it. Each device then waits a random amount of time and retries until it succeeds in getting the transmission sent.
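The behaviour just described can be sketched in Python. This is only an illustration, not a real Ethernet MAC: the medium object and its busy/transmit/jam/wait methods are hypothetical stand-ins, and the backoff shown is the usual truncated binary exponential backoff.

import random

SLOT_TIME_US = 51.2   # slot time for 10 Mbps Ethernet

def backoff_slots(attempt):
    """Truncated binary exponential backoff: pick 0 .. 2^k - 1 slots, k capped at 10."""
    k = min(attempt, 10)
    return random.randint(0, 2 ** k - 1)

def csma_cd_send(frame, medium, max_attempts=16):
    for attempt in range(1, max_attempts + 1):
        while medium.busy():            # carrier sense: wait for an idle line
            pass
        if medium.transmit(frame):      # hypothetical call: True if no collision detected
            return True
        medium.jam()                    # jam signal so all stations see the collision
        medium.wait(backoff_slots(attempt) * SLOT_TIME_US)   # random backoff, then retry
    return False                        # excessive collisions: give up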

Carrier Sense Multiple Access - Collision Detection (CSMA/CD)

This method is used almost exclusively on star and bus topology networks, e.g. Ethernet.
It is based on a half-duplex protocol: only one workstation can transmit at a time.

Carrier Sense Multiple Access - Collision Detection (CSMA/CD)

To detect whether a collision has occurred:
a workstation listens to its own transmission as the data is being transmitted;
a workstation should only hear its own data being transmitted.

If there is a collision, the data transmission will be corrupted.

To ensure the collision is clearly recognized, a station detecting a collision sends a jamming signal to all stations on the network.

Carrier Sense Multiple Access - Collision Detection (CSMA/CD)

Carrier Sense Multiple Access - Collision Detection (CSMA/CD)

Collision Detection problems:
as concurrent users increase, the number of collisions increases;
lots of retransmissions occur during peak times, reducing throughput on the network;
stations have to wait for a collision to take place and then solve the problem.

Carrier Sense Multiple Access - Collision Avoidance (CSMA/CA)

Prevents collisions
Station must gain permission before
transmitting.

Token passing in CSMA/CA


A token is passed from station to station
Used in token ring networks
Token passing ensures that only one station can
transmit at any one time
A station needs to get an empty token before it can
transmit
a station inserts a message into the token
then sends the token on to its destination address

There should only ever be one token in circulation on


the network.

Token passing in CSMA/CA


The token contains the data in a packet (data, source address and destination address).
Each station checks an incoming packet's destination address.
When the packet arrives at its destination, it is copied into a buffer and modified to indicate acceptance.
The token (still containing the data) is then passed on round the loop until it returns to the sender.
The sender is responsible for removing the data from the token and passing the empty token on to the next station.

Token passing in CSMA/CA


Disadvantages:
complexity of the software needed to maintain token passing
excessive overheads can reduce performance of the network

Problems:
What happens when a token disappears? (ie. a station fails
and so does not forward a token)
If a token disappears, who generates a new token?
Is it possible for there to be two or more tokens on a ring?

Comparison of methods
CSMA/CD
simple protocol
high transmission speed (up to 1000 Mbps)
as traffic increases, collisions increase

re-transmissions increase

non-deterministic
it is not possible to determine exactly when a workstation will be able
to transmit without a collision

Token-passing

performs well under heavy loads


slower transmission speed (up to 100 Mbps)
suitable for applications that require uniform response times
complex software is required
more expensive to implement.

Flooding

68

Flooding Problems
Most often occurs when a large enough number of packets are
flowing through the network that regular data cannot be sent in a
normal speed and fashion.
Flooding can be costly in terms of wasted bandwidth and, as in the
case of a Ping flood or a Denial of service attack, it can be harmful to
the reliability of a computer network.
Duplicate packets may circulate forever unless certain precautions are taken:
Use a hop count or a time-to-live (TTL) value and include it with each packet. TTL is a limit on the period of time, or the number of hops or transmissions, that a unit of data (e.g. a packet) can experience before it should be discarded. This value should take into account the number of nodes that a packet may have to pass through on the way to its destination.
Have each node keep track of every packet seen and only forward each packet once.
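A minimal Python sketch of the two precautions above (a TTL decremented per hop, and a per-node record of packets already forwarded); the Node class and packet fields are illustrative and not tied to any particular protocol:

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.seen = set()                  # packet IDs already forwarded once

    def receive(self, packet, from_node=None):
        if packet["ttl"] <= 0 or packet["id"] in self.seen:
            return                         # drop: TTL expired or duplicate
        self.seen.add(packet["id"])
        for nbr in self.neighbors:
            if nbr is not from_node:       # don't flood it straight back
                nbr.receive(dict(packet, ttl=packet["ttl"] - 1), self)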

HUBS
Also called a Concentrator or Multi-port Repeater.

Types of Hub
Passive Hub
A passive hub serves as a physical connection point only.
It does not boost or clean the signal and does not need electrical
power.

Active Hub
An active hub needs power to repeat the signal before passing it out
the other ports.

Intelligent Hub
Intelligent or smart hubs are active hubs
with a microprocessor chip and
diagnostic capabilities

70

HUBS
Devices attached to a hub receive all traffic traveling
through the hub.
The more devices there are attached to the hub, the more
likely there will be collisions.
A collision occurs when two or more workstations send data
over the network wire at the same time. All data is corrupted
when that occurs.
Every device connected to the same network segment is said
to be a member of a collision domain.
71

Bridges
A network bridge connects multiple network segments at the data link layer (Layer 2) of the OSI model.
Layer 2 switch is very often used interchangeably with
bridge.
Bridges are similar to repeaters or network hubs
However, with bridging, traffic from one network is
managed rather than simply rebroadcast to adjacent
network segments.
Bridges can analyze incoming data packets to determine if the bridge is
able to send the given packet to another segment of the network.
72

Bridges
Bridges tend to be more complex than hubs or
repeaters.
Bridging takes place at the data link layer of the OSI model; a bridge processes the information from each frame of data it receives.
In an Ethernet frame, this includes the MAC addresses of the frame's source and destination.

Bridges
Connect network segments.
Make intelligent decisions about whether to pass
signals on to the next segment.
Improve network performance by eliminating
unnecessary traffic and minimizing the chances of
collisions.
Divide traffic into segments and filter traffic based on MAC address.
Can pass frames between networks operating under different Layer 2 protocols.

Bridges

75

Bridges

If the destination device is on a different segment, the bridge forwards the frame to the appropriate segment.
If the destination address is unknown to the bridge, the bridge forwards the frame to all segments except the one on which it was received. This process is known as flooding.

Switches
A network switch is a computer networking device that
connects network segments.
The term Switch commonly refers to a Network bridge
that processes and routes data at the Data link layer
(layer 2) of the OSI model.
Switches that additionally process data at the Network
layer (layer 3 and above) are often referred to as Layer
3 switches or Multilayer switches.
The term network switch does not generally encompass
unintelligent or passive network devices such as hubs
and repeaters.
77

Switches

78

Switches

A switch has many ports with many network segments connected to them. A
switch chooses the port to which the destination device or workstation is
connected.
Ethernet switches are becoming popular connectivity solutions replacing
hubs...
reduces network congestion
maximizes bandwidth
reduces collision domain size

79

LAN and LAN Devices

80

Common LAN Technologies


Non-deterministic: first come, first served.
Ethernet: CSMA/CD.
Deterministic: "let's take turns."
Token Ring
FDDI (Fiber Distributed Data Interface)
It is a token-passing, fiber-ring network.
The fiber-optic medium can be multimode fiber, and the ring can be as large as 100 kilometers, with no more than 2 kilometers between nodes.

Common LAN Technologies


Ethernet: logical
broadcast
topology
Token Ring: logical
token ring
topology
FDDI: logical token
ring topology

Common LAN Technologies

Non-Deterministic
(1st come 1st served)

Deterministic
(taking turns)

83

Deterministic MAC protocol

Non-deterministic MAC protocol

Carrier Sense Multiple Access with


Collision Detection (CSMA/CD).

LAN Switch
Switches connect LAN segments.
LAN switches are considered multi-port bridges with no collision
domain.
Uses a MAC table to determine the segment on which a frame
needs to be transmitted.
Switches often replace shared hubs and work with existing cable
infrastructures.
Higher speeds than bridges.
Support new functionality, such as VLAN.

LAN Switch (cont.)

LAN Switch: MAC table

In computer networking, a Media Access Control address (MAC address) is a unique


identifier assigned to most network adapters or network interface cards (NICs) by the
manufacturer for identification, and used in the Media Access Control protocol
sublayer

Segment / Segmentation of Network

A network segment is a portion of a computer network wherein every device communicates using the same physical layer.
In the context of Ethernet networking, the network segment is also known as the collision domain. This comprises the group of devices that are connected to the same bus, that can make CSMA/CD collisions with each other, and that can sniff each other's packets. It also includes devices connected to the same hub, which likewise can have collisions with each other.
In modern switch-based Ethernet configurations, the physical-layer segment is generally kept as small as possible to avoid the possibility of collisions. Thus each segment is composed of only two devices, and the segments are linked together using switches and routers to form one or more broadcast domains.

89

Limiting the collision Domain _Segmentation

A collision domain is a logical area in a


computer network in which data packets can
collide with one another.

90

Why segment LANs?


Isolate traffic between segments.
Achieve more bandwidth per user by creating
smaller collision domains.
LANs are segmented by devices such as:
Bridges
Switches
Routers
Segmentation also extends the effective length of a LAN, permitting the attachment of distant stations.

LAN Segmentation

Segmentation with bridges

Bridges increase the latency (delay) in a network by 10-30%.


A bridge is considered a store-and-forward device
because it must receive the entire frame and compute
the cyclic redundancy check (CRC) before forwarding
can take place.
The time it takes to perform these tasks can slow
network transmissions, thus causing delay.

Segmentation with bridges

Segmentation with switches

Allows a LAN topology to work faster and more


efficiently.
Uses bandwidth efficiently.
Ease bandwidth shortages and network bottlenecks .
A computer connected directly to an Ethernet switch is
its own collision domain and accesses the full
bandwidth .

Segmentation with switches

Segmentation with routers

Routers operate at the network layer.
A router bases all of its forwarding decisions on the Layer 3 protocol address.
This gives routers the ability to make exact determinations of where to send each data packet.

Segmentation with routers

Wide Area Network (WAN)


A wide area network (WAN) is a computer
network that covers a broad area (i.e., any
network whose communications links cross
metropolitan, regional, or national boundaries ).
The largest and most well-known example of a
WAN is the Internet
WANs are used to connect LANs and other types
of networks together, so that users and
computers in one location can communicate with
users and computers in other locations
100

Wide Area Network (WAN)


Many WANs are built for one particular organization and are private.
Others, built by Internet service providers, provide connections from an organization's LAN to the Internet.
WANs are often built using leased lines, circuit switching, or packet switching.
At each end of a leased line, a router connects to the LAN on one side and to a hub within the WAN on the other.

Wide Area Network (WAN)


Network protocols (including TCP/IP) deliver transport
and addressing functions.
Protocols including Packet over SONET/SDH, MPLS, ATM
and Frame relay are often used by service providers to
deliver the links that are used in WANs.
X.25 was an important early WAN protocol, and is often
considered to be the "grandfather" of Frame Relay as
many of the underlying protocols and functions of X.25
are still in use today (with upgrades) by Frame Relay.
Typical communication links used in WANs are
telephone lines, microwave links & satellite channels.

102

WAN and WAN Devices

103

Packet propagation and switching within a router

Each time a packet is switched from one router interface to another, the packet is de-encapsulated and then encapsulated once again.

Data terminal equipment (DTE)


DTE is an end instrument that converts user
information into signals or reconverts received
signals.
A DTE is the functional unit of a data station that
serves as a data source or a data sink and provides
for the data communication control function

105

Data circuit-terminating equipment (DCE)


A DTE device communicates with the data circuit-terminating equipment (DCE).
It is also called Data Communications Equipment and
Data Carrier Equipment.
A Data circuit-terminating equipment (DCE) is a device
that sits between the data terminal equipment (DTE)
and a data transmission circuit.
DCE performs functions such as
signal conversion, coding, and line clocking and may be a part of the
DTE or intermediate equipment.
106

Data circuit-terminating equipment (DCE)


Interfacing equipment may be required to couple the
data terminal equipment (DTE) into a transmission
circuit or channel and from a transmission circuit or
channel into the DTE.
Usually the DTE device is the terminal (or computer),
and the DCE is a modem.

107

CSU/DSU (Channel Service Unit/Data Service Unit)

It is a digital-interface device used to connect a


DTE (such as a router) to a digital circuit (for
example a T1 or T3 line).
A CSU/DSU operates at the physical layer of the
OSI model.
CSU/DSUs are made as separate physical products, or the CSU and DSU functions may both be included as part of an interface card inserted into a DTE.
108

CSU/DSU (Channel Service Unit/Data Service Unit)


When CSU/DSU is external
the DTE interface is usually compatible with the V.xx or RS-232C
or similar serial interface.

Digital lines require both a channel service unit (CSU)


and a data service unit (DSU).
CSU
provides termination for the digital signal and ensures connection
integrity through error correction and line monitoring.

DSU
converts the data encoded in the digital circuit into synchronous
serial data for connection to a DTE device.
109

WAN Serial Connections

TIA (Telecommunications Industry Association)

110

Router
Router is a networking device whose software
and hardware are usually tailored to the tasks
of routing and forwarding information.
For example, on the Internet, information is
directed to various paths by routers.

111

Routers for Internet connectivity and internal use

Edge Router :
Placed at the edge of an ISP network, it speaks eBGP
(external Border Gateway Protocol)

Subscriber Edge Router:


Located at the edge of the subscriber's network, it speaks eBGP to
its provider's AS(s). It belongs to an end user (enterprise)
organization.

Inter-provider Border Router:


Interconnecting ISPs, this is a BGP speaking router that maintains
BGP sessions with other BGP speaking routers in other providers
AS(s)

Core router:
A router that resides within the middle or backbone of the LAN

Routers and Serial Connections

113

Fixed Interfaces

114

ERLANG IN PACKET SWITCHING

Example:
Suppose a bandwidth of 64 Kbps. If one packet contains 64 bytes, the throughput is 60 percent, and the payload is 90%, how much actual user data must be transferred in one hour to produce one Erlang of traffic?

SOLUTION:
Data transferred in one second = 64 Kbits
Data transferred in one hour = 64 Kbits x 3600 = 230,400 Kbits
                             = 230,400/8 = 28,800 KBytes
                             = 28.8 MB

115

ERLANG IN PACKET SWITCHING


Packets transferred in one hour = 28.8 x 1,000,000 / 64 = 450,000
Packets transferred in one second = 450,000 / 3600 = 125 pps
With 60% throughput, actual data transferred = 125 pps x 0.6 = 75 pps
With 90% payload, actual user data transferred = 75 x 0.9 = 67.5 pps
In one hour, actual user data transferred = 67.5 x 3600 = 243,000 packets

Hence, 243,000 packets must be transferred in one hour to produce one Erlang of traffic.
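The same worked example, redone as a short Python sketch (decimal units assumed):

link_bps, packet_bytes = 64_000, 64
throughput, payload_share = 0.60, 0.90

bytes_per_hour   = link_bps / 8 * 3600            # 28,800,000 bytes = 28.8 MB
packets_per_hour = bytes_per_hour / packet_bytes  # 450,000 packets
pps              = packets_per_hour / 3600        # 125 packets per second

user_pps = pps * throughput * payload_share       # 125 * 0.6 * 0.9 = 67.5 pps
print(user_pps * 3600)                            # 243,000 packets per Erlang-hour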
116

OBSERVATIONS ON USER TRAFFIC


Different usage at different times (different peaks).
Throughput limitations of each individual user.
Observation has revealed that more than 80% of the user time (on average) is idle.
Idle time (gaps) must be filled using various tactics:
- low rates at off-peak times
- introduction of new services
More data can be accommodated using compression techniques.
Data traffic is bursty in nature, so it is totally different from voice traffic.
Data variations occur over ranges from milliseconds to years.
117

TYPICAL NATURE OF DATA TRAFFIC

Data to be sent/received through a medium with a specific bandwidth can be carried at much lower rates than the actual bandwidth; the only difference in this case is the time factor.
Whenever the throughput is at its minimum, the data will take more time to be transferred, and vice versa.
Queuing system:
Delay is acceptable (unlike circuit switching).
Bursts of traffic can be handled at lower speed.
E.g., using a dial-up connection, 8-16 PCs can be connected through a hub/switch, even though a dial-up connection has a maximum throughput of about 35 Kbps.

118

USERS TRAFFIC CONTINUED

[Figure: half-hourly traffic charts for User-1 through User-6, each showing the user's data rate (Kbps) varying over the day.]

USERS TRAFFIC CONTINUED

[Table: half-hourly traffic (Kbps) for Users 1-6 over the day (0000-2200), with the per-interval total across all six users, and the corresponding per-user charts.]

USERS ACCUMULATIVE EFFECT

[Figure: the accumulated (summed) half-hourly traffic of all six users over the day.]

BASIC DATA TRAFFIC CONSIDERATION


Bandwidth increases from the user toward the service provider, similar to a water supply system:

Small pipes  - individual users
Medium pipes - ISPs and corporate customers with leased lines
Big pipes    - backbone service providers
122

BASIC DATA TRAFFIC CONSIDERATION

Another Three Layer Model


123

PTCL Data Network / Core Network

Class Lecture

124

PACKET SWITCHING VERSUS CIRCUIT SWITCHING

In circuit switching, all the links from source to destination are occupied during voice or data transmission, whereas packet switching works on sharing principles.
- In ISDN, E1 or PCM, a 64 Kbps channel carries voice or data over circuit switching.
- In packet switching, the following approaches are used:
  PVC
  SVC
  DATAGRAM

125

Frame Relay differs from X.25 in several aspects.

Most importantly, it is a much simpler protocol that works at the data link layer rather
than the network layer.
Frame Relay implements no error or flow control.
The simplified handling of frames leads to reduced latency
Most Frame Relay connections are PVCs rather than SVCs.
Frame Relay provides permanent, shared-medium bandwidth connectivity that carries both voice and data traffic.

X.25 Network
It was designed at a time when transmission systems were unreliable.
Extensive error checking and flow control are exercised at the data link layer and network layer.
A copy of each frame is retained by every switch in its buffers and is discarded only after an acknowledgement is received from the next device.
Only one-fourth of the traffic on an X.25 network is actual message data; the rest is reliability overhead.
127

Frame Relay- Advantages


Frame relay operates at higher speeds up to 45Mbps
Integrates well with modern sophisticated end devices
and higher layer protocols
Allows bursty data
Allows frame size of up to 9000 bytes, which can
accommodate all types of LAN frames
Less expensive than other traditional WANs
128

Frame Relay versus Pure Mesh T-Line Network

129

Without proper analysis and calculations, the link will not be able to accommodate the appropriate number of users (for example, an ISP introducing more and more access cards).

NUMBER OF USERS VERSUS BANDWIDTH

A user can be accommodated (on average) at 3-4 Kbps.
Consider a 2 Mbps link. Suppose that, on average, a user can be accommodated at 5 Kbps to accomplish his job during the peak-load hour with some delay.

Ideally speaking:
Number of users that can be accommodated simultaneously = 2 Mbps / 5 Kbps = 400 users
If 25% of the users are on-line simultaneously at the peak hour, then the total number of users for which the 2 Mbps link is sufficient is 400 x 4 = 1,600 users.
Even if we allow users double the delay, 3,200 users can be accommodated.

These calculations require previous (historical) data.
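The estimate above in code form (a rough Python sketch; the 25% concurrency figure is the assumption stated in the text):

link_kbps, per_user_kbps, concurrency = 2_000, 5, 0.25

simultaneous = link_kbps // per_user_kbps      # 400 users at once
subscribers  = simultaneous / concurrency      # 1,600 subscribers
print(simultaneous, subscribers, subscribers * 2)   # 3,200 if double delay is tolerated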

131

INTERNET TRAFFIC DESIGN PROBLEM


Consider a set-up in which four ISPs each have a 2 Mbps link with the ITI. The following information about the ISPs is given in the table:
- PRIs connected with the central offices
- Two types of customers: ISDN BRI and dial-up
- PRI average channel occupancy at the peak hour
- Average speed per customer in Kbps
- Number of customers on-line
132

TABLE SHOWING DATA

ISP    PRIs  Link    Customer strength   PRI channel     Avg. speed per     Customers on-line
             (Mbps)  BRI    Dial-up      occupancy at    customer (Kbps)    BRI    Dial-up
                                         peak hour       BRI    Dial-up
ISP-1  10    2       120    2500         100%            55     25          50     225
ISP-2  8     2       100    1500         70%             50     20          35     120
ISP-3  4     2       80     1000         80%             60     18          20     40
ISP-4  6     2       90     1200         90%             55     30          30     120

DESIGN PROBLEMS

Calculate the BW requirement for each ISP.
Find out which ISPs, under ideal conditions, can handle the traffic at the peak hour without any delay.
Assuming the buffer at each ISP's end has enough space, find the time taken by each ISP to transmit the given data.
If 8:1 compression is used by those ISPs that show a slight delay in data transmission, find the total BW requirement at the ITI side. Also find the maximum number of customers that can be connected on-line through all ISPs in this setup.

134

SOLUTION

135

Solution (Cont.)

        TOTAL BW REQ (Mbps)   TIME REQUIRED (sec)   BW required after 1:8 (Mbps)
        o = m + n                                   q = o/8
ISP-1   8.375                 4.1875                1.047
ISP-2   4.15                  2.075                 0.519
ISP-3   1.92                  1.92                  1.920
ISP-4   5.25                  2.625                 0.656
                                                    Total: 4.142
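A small Python sketch of the arithmetic behind the solution table: per-ISP demand is BRI on-line users x BRI speed plus dial-up on-line users x dial-up speed; time here is taken as the seconds needed to push one second's worth of demand over the 2 Mbps link; 8:1 compression is applied only where the demand exceeds the link (a sketch of the calculation, not a substitute for the lecturer's table).

isps = {   # name: (BRI on-line, BRI Kbps, dial-up on-line, dial-up Kbps)
    "ISP-1": (50, 55, 225, 25),
    "ISP-2": (35, 50, 120, 20),
    "ISP-3": (20, 60,  40, 18),
    "ISP-4": (30, 55, 120, 30),
}
link_mbps = 2.0

for name, (bri_n, bri_k, dial_n, dial_k) in isps.items():
    demand = (bri_n * bri_k + dial_n * dial_k) / 1000          # Mbps
    time_s = demand / link_mbps                                # s per second of traffic
    after  = demand / 8 if demand > link_mbps else demand      # 8:1 only where delayed
    print(f"{name}: {demand:.3f} Mbps, {time_s:.2f} s, {after:.3f} Mbps after 8:1")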

136

139

DATA TRAFFIC THEORY


In order to sustain data traffic through the
networks, following two parameters must be
considered

CONGESTION CONTROL
Try to avoid congestion for the traffic
QUALITY OF SERVICE
Try to create appropriate environment for the
traffic
140

DATA TRAFFIC THEORY


In congestion control we try to avoid traffic
congestion. In quality of service, we try to create
an appropriate environment for the traffic. So,
before talking about congestion control and
quality of service, we discuss the data traffic
itself.

Traffic Descriptor
Traffic Profiles

TRAFFIC DESCRIPTORS
AVERAGE DATA RATE
The ratio of the total bits sent during a specific period of time to that time (usually in seconds).
It indicates the average BW needed by the traffic flow.

PEAK DATA RATE
The maximum data rate that passes through a link during a given observation period.
It indicates the peak BW the link must provide to carry the traffic without any delay.

TRAFFIC DESCRIPTORS
MAXIMUM BURST SIZE
The maximum length of time the traffic is generated
at the peak rate. If steady traffic of 1 Mbps gives a
spike of 2Mbps for 1 ms, this is referred to as Peak data
rate. But if 2 Mbps continues for 60 ms, this is the
Max. burst size and it can be a problem for network to
handle.

EFFECTIVE BANDWIDTH
The bandwidth that the network needs to allocate for the flow of traffic.
It is a function of the three factors above (average data rate, peak data rate, and maximum burst size).

TRAFFIC DESCRIPTORS

TRAFFIC PROFILES
Depending on the traffic data rates, data traffic
is divided into three profiles
1-CONSTANT BIT RATE (FIXED RATE)
2- VARIABLE BIT RATE (VBR)
3- BURSTY

TRAFFIC PROFILES

TRAFFIC PROFILES

Depending on the traffic data rates, data


traffic is divided into three profiles
1-CONSTANT BIT RATE (FIXED RATE)
Average and peak data rates are almost same.
Maximum burst size not applicable
Predictable...so easy to handle
Bandwidth allocation is easier to determine

TRAFFIC PROFILES Continued.

2- VARIABLE BIT RATE (VBR)


Rate changes in time
Smooth changes, not sharp/sudden
Average and peak data rates are different
Maximum burst size is usually a small value
More difficult to handle
Normally does not require reshaping
148

TRAFFIC PROFILES Continued.


3- BURSTY
Data rate changes abruptly
May jump from zero to Mbps in few microseconds
Average and peak bit rates are quite different
Maximum burst size is significant value
Being unpredictable profile, most difficult to handle
Traffic Reshaping techniques are required
Main cause of congestion

149

CONGESTION
Congestion occurs when the number of packets sent on the network (the load) becomes greater than the network capacity (throughput).
CONGESTION CONTROL
involves techniques and mechanisms to keep the load below the maximum capacity of the network.

CONGESTION HAPPENS
when any system involves waiting, e.g. a road traffic accident or an overloaded road during rush hour creates blockage.

IN NETWORKS,
congestion happens when networking devices encounter more packets than their queues (buffers) can hold and process.
CONGESTION

The router in the figure has an input queue and an output queue for each interface.
A packet arriving at an input interface undergoes three main processes:
It is put at the end of the input queue to wait its turn to be checked.
On its turn, the router checks the destination address in the packet and uses the routing table to find the appropriate outgoing interface.
The packet is placed in the output queue of that interface and waits its turn to be sent.

CONGESTION

A. If the rate of packet arrival is higher than the packet processing rate, the input queues become longer and longer.
B. If the packet departure rate is less than the packet processing rate, the output queues become longer and longer.

NETWORK PERFORMANCE

Congestion control involves two factors that measure the performance of a network:

Delay vs. Load
Throughput vs. Load
153

NETWORK PERFORMANCE
1. DELAY VERSUS LOAD

When the load is much less than the capacity:
Delay is negligible and consists of propagation delay and processing delay.

When the load reaches the network capacity:
Delay increases sharply, because the waiting times in the queues of all the routers along the path are added.

When the load is greater than the capacity:
Delay becomes infinite; queues become longer and longer, buffers fill up, and packets are lost.

Delay also has a negative effect on load: when packets are delayed, acknowledgement timeouts trigger retransmissions, and the retransmissions add more load and therefore more congestion.

NETWORK PERFORMANCE
2. THROUGHPUT VERSUS LOAD

Throughput is the number of bits passing through a point per second (here, replace bits by packets and the point by the network).
When the load is below capacity, throughput is directly proportional to the load.
When the load reaches and exceeds capacity, throughput would be expected to remain constant, but in practice it declines sharply. The reason is the discarding of packets by the routers: discarding a packet does not reduce the number of packets offered to the network, because the discarded packets are retransmitted.

NETWORK PERFORMANCE
VERSUS LOAD

2. THROUGHPUT

157

CONGESTION CONTROL
Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened.
In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).

24.158

CONGESTION CONTROL
Topics discussed in this section
Open-Loop Congestion Control
Closed-Loop Congestion Control

OPEN-LOOP CONGESTION CONTROL (PREVENTION)
Policies applied to prevent congestion before it happens.
Here, congestion control is usually handled by either the source or the destination.

CLOSED-LOOP CONGESTION CONTROL (REMOVAL)
Techniques applied to remove congestion after it occurs.
Here, congestion control is handled by the source, the destination, or any other midway device.
159

Congestion control categories

OPEN-LOOP CONGESTION CONTROL


1-RETRANSMISSION POLICY
Retransmission rules applied can be significant
Long and short duration timers

2-WINDOW POLICY
Selective Repeat versus Go-Back-N Window
Constant and Variable Window sizes during a session
Sliding window

161

OPEN-LOOP CONGESTION CONTROL Continued.


3-ACKNOWLEDGEMENT POLICY
Imposed by the receiver.
Acknowledging less often, or later, may help prevent congestion.

4-DISCARDING POLICY
An appropriate discarding policy may prevent congestion.
Less sensitive packets can be discarded without compromising the quality of the transmission, e.g. in audio or video transmissions.

5-ADMISSION POLICY
Before admitting a flow onto the network, the device checks its resource requirements.
A quality-of-service issue.
Access-list issue / filtering of packets.
162

Congestion control categories

CLOSED-LOOP CONGESTION CONTROL

1-BACK PRESSURE
A congested router informs the previous router to reduce its rate.
This process can be recursive all the way back toward the source, so many routers might be involved.

Backpressure method for alleviating congestion

164

CLOSED-LOOP CONGESTION CONTROL

2-CHOKE PACKET
It is a packet sent by a Router back to the source to inform it of
the congestion. Similar to ICMP source quench message

Choke packet
165

CLOSED-LOOP CONGESTION CONTROL

3-IMPLICIT SIGNALLING
The source detects an implicit signal of congestion, e.g. a mere delay in receiving an acknowledgement can be a sign of congestion (rather than corruption of the packet), as in TCP congestion control.

4-EXPLICIT SIGNALLING
A router experiencing congestion, sends a signal to source or
destination by setting a bit in the packet. (In Frame Relay)

166

CLOSED-LOOP CONGESTION CONTROL

Backward Signaling
Setting a bit in the direction opposite to the congestion
Source is informed to slow down
Avoids congestion and discarding of packets

Forward Signaling
Setting a bit in the direction of the congestion
Destination is informed to apply a policy, e.g. appropriate
delaying in the acknowledgements
Avoids congestion and discarding of packets

167

TWO EXAMPLES
To better understand the concept of congestion
control, let us give two examples: one in TCP and
the other in Frame Relay.

Topics discussed in this section:


Congestion Control in TCP
Congestion Control in Frame Relay

24.168

CONGESTION CONTROL IN TCP

BACKGROUND
Any network is a combination of networks and connecting devices.
A packet may pass through many routers from source to destination.
If a router receives packets more rapidly than it can process them, congestion may occur, resulting in dropped packets.
Dropped packets generate no acknowledgements, so the sender must retransmit the lost packets.
This may create more traffic, thus more congestion and more packet loss; as a result, the system may collapse.
TCP THEREFORE ASSUMES THAT LOST PACKETS ARE CAUSED BY CONGESTION IN THE NETWORK.

TRAFFIC CONTROL MECHANISM IN TCP/IP

Windowing
A method of controlling the amount of information transferred
end to end
Information can be measured in terms of the number of packets
or the number of bytes
TCP window sizes are variable during the lifetime of a
connection.
Larger window sizes increase communication efficiency.
170

Simple Windowing
TCP Full-duplex service: Independent Data Flows

TCP provides a full-duplex service (data can be flowing in each direction, independent of the other direction).
Window sizes, sequence numbers and acknowledgment numbers are independent for each direction's data flow.
The receiver sends an acceptable window size to the sender during each segment transmission (flow control):
if too much data is being sent, the acceptable window size is reduced;
if more data can be handled, the acceptable window size is increased.
In the simplest case, where the sender waits for each acknowledgement before sending more data, this is known as a stop-and-wait windowing protocol.

171

Sliding Windows

[Figure: the sender's window over the byte stream, showing the initial window size, the octets sent but not yet ACKed, and the usable window that can be sent immediately.]
Sliding Window Protocol


The sliding window algorithm is a method of flow control for network data transfers that uses the receiver's window size.
The sender computes its usable window, which is how much data it can immediately send.
Over time, this window slides to the right as the receiver acknowledges data.
The receiver sends acknowledgements as its TCP receive buffer empties.

Sliding Windows

The terms used to describe the movement of the left and right edges of this sliding window are:
The left edge closes (moves to the right) when data is sent and acknowledged.
The right edge opens (moves to the right), allowing more data to be sent; this happens when the receiver acknowledges a certain number of bytes received.
The middle edge moves to the right as data is sent but not yet acknowledged.
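A toy Python sketch of the sender-side bookkeeping just described (class and variable names are illustrative; this is not a TCP implementation):

class SlidingWindowSender:
    def __init__(self, window_size):
        self.window = window_size    # receiver-advertised window
        self.base = 1                # left edge: oldest unacknowledged octet
        self.next_seq = 1            # middle edge: next octet to send

    def usable(self):
        return self.window - (self.next_seq - self.base)   # right edge minus middle edge

    def send(self, n):
        n = min(n, self.usable())
        self.next_seq += n           # middle edge moves right as data is sent
        return n

    def ack(self, ack_no):
        self.base = max(self.base, ack_no)   # left edge closes on a cumulative ACK

s = SlidingWindowSender(window_size=6)
s.send(3)           # octets 1-3 in flight
s.ack(4)            # receiver expects octet 4 next
print(s.usable())   # the full window of 6 octets is usable again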

Host A - Sender / Host B - Receiver

[Figure: Host A's send window and Host B's receive window over octets 1-13 with a window size of 6, showing the octets sent but not ACKed, the usable window, and the ACK 4 returned by Host B.]
Host B gives Host A a window size of 6 (octets or bytes).
Host A begins by sending octets 1, 2, and 3 to Host B and slides its window over, showing that it has sent those 3 octets.
Host A will not enlarge its usable window by 3 until it receives an acknowledgement from Host B that some or all of those octets have been received.
Host B, not waiting for all 6 octets to arrive, sends an expectational acknowledgement of 4 to Host A after receiving the third octet.

[Figure: the exchange continues; as Host B acknowledges (ACK 4, ACK 6, ...), Host A's window keeps sliding to the right and further octets are sent.]

ACKNOWLEDGMENT
Positive acknowledgment
It requires the recipient to communicate with the source, sending back an acknowledgment message when it receives data.
The sender keeps a record of each data packet it sends and expects an acknowledgment.
The sender also starts a timer when it sends a segment and retransmits the packet if the timer expires before an acknowledgement arrives.
Acknowledgements are expectational (they indicate the next byte the receiver expects).

176

ACKNOWLEDGMENT

WINDOWING
Windowing is the process in which a particular amount of data is allowed to be sent by the source before it receives an acknowledgement from the destination.
The sender's window size is determined by the receiver's available buffer space.
HERE THE NETWORK IS TOTALLY IGNORED.
Thus the window size must depend on both the receiver and the network:

Receiver's capacity (receiver window size, rwnd)
Congestion in the network (congestion window size, cwnd)

THE ACTUAL WINDOW SIZE IS THE MINIMUM OF THE TWO.
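In code the rule is a one-liner (rwnd and cwnd are the usual shorthand for the receiver and congestion window sizes):

def actual_window(rwnd, cwnd):
    # the sender may have at most min(receiver window, congestion window) bytes in flight
    return min(rwnd, cwnd)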

177

Note
In the slow-start algorithm, the size of the congestion window
increases exponentially until it reaches a threshold.

24.178

Note
In the congestion avoidance algorithm, the size of the
congestion window increases additively until
congestion is detected.

24.179

Note
An implementation reacts to congestion detection in one of the
following ways:
If detection is by time-out, a new slow start phase starts.
If detection is by three ACKs, a new congestion avoidance
phase starts.

24.180

CONGESTION CONTROL (EXAMPLE-1)


SLOW START
At the beginning of a connection, TCP sets the congestion window size to one segment.
The window size then grows exponentially (roughly doubling each round of acknowledgements).
This process continues until the window reaches one half of the maximum window size; this point is called the threshold.
The name is somewhat misleading, because the growth is not slow at all (it is exponential).

ADDITIVE INCREASE
Introduced to avoid congestion before it happens.
It starts when the window size reaches one half of the maximum window size (the threshold).
From the threshold onward, the window size is increased by one segment per window of acknowledgements.
This process continues until one of the following happens:
1. No acknowledgement is received and the timeout is reached.
2. The congestion window reaches the receiver window size.

181

CONGESTION CONTROL (continued)


MULTIPLICATIVE DECREASE
If congestion occurs, the window size is decreased immediately.
Congestion is sensed by a timeout while waiting for an acknowledgement.
With today's largely noise-free transmission media, a missing acknowledgement is far more likely to mean a lost packet than a corrupted one.
When a timeout occurs, the threshold is set to one half of the last congestion window size.
The congestion window then starts from one segment again (a new slow start phase).
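A compact Python sketch of the three phases above, counting the congestion window in segments (a simplification of real TCP, shown only to make the shape of the curve concrete):

def next_cwnd(cwnd, ssthresh, timeout=False):
    """One round (or a timeout) of slow start / additive increase / multiplicative decrease."""
    if timeout:
        return 1, max(cwnd // 2, 2)                 # multiplicative decrease, restart slow start
    if cwnd < ssthresh:
        return min(cwnd * 2, ssthresh), ssthresh    # slow start: exponential growth
    return cwnd + 1, ssthresh                       # congestion avoidance: additive increase

cwnd, ssthresh = 1, 16
for rnd in range(10):
    print(rnd, cwnd, ssthresh)
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, timeout=(rnd == 7))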

CONGESTION CONTROL (EXAMPLE-2)

FRAME RELAY
Congestion in Frame Relay decreases throughput and increases delay, whereas the goals of Frame Relay are the reverse.
There is no flow control in Frame Relay, and it allows bursty user data, so it has the potential to face congestion.
CONGESTION CONTROL in Frame Relay is done through two bits that explicitly warn the source and the destination of congestion:

1- FECN (Forward Explicit Congestion Notification)
2- BECN (Backward Explicit Congestion Notification)

(EXAMPLE-2) Continued

BECN
Warns the sender of congestion in the network.
The sender can be warned in two ways:
1- the switch sets the bit in a response frame coming from the destination, or
2- the switch uses a predefined connection (DLCI 1023) to send a special frame for this purpose.
In response, the sender reduces its data rate.

184

(EXAMPLE-2) Continued

FECN
Warns the destination (receiver) of congestion in the network.
What can the receiver do?
Frame Relay assumes that the sender and receiver use some form of flow control at the higher layers, such as acknowledgements at the TCP layer.
When the receiver sees the FECN bit set, it starts delaying its acknowledgements, forcing the sender to slow down.

185

(EXAMPLE-2) Continued

FRAME RELAY IN FULL DUPLEX

Four situations regarding congestion can occur in Frame Relay; the FECN and BECN values are used as follows.

186

QUALITY OF SERVICE

QoS DEFINITION
Something a flow seeks to attain:
the allocation of appropriate (sufficient) resources for the data of different applications running through the various links of a network;
the satisfactory fulfillment of the customer's demand.

FLOW CHARACTERISTICS

FLOW CHARACTERISTICS Continued..

1-RELIABILITY
The basic characteristic that a flow requires.
Low reliability means loss of packets.
Sensitivity to reliability varies from application to application;
e.g. file transfer or Internet access require more reliability than telephony or audio conferencing.

FLOW CHARACTERISTICS Continued..

2-DELAY
Measured from source to destination, and includes the NICs, propagation, and the devices in between.
The tolerance level differs for different applications:
real-time traffic cannot afford delay, but e-mail, file transfer, browsing, etc. can tolerate it.

FLOW CHARACTERISTICS Continued..

3-JITTER
The variation in delay among packets belonging to the same flow.
Real-time audio/video cannot tolerate jitter:
if the first 3 packets face a delay of 1 ms and the 4th packet faces a delay of 60 ms, it is unacceptable.
For applications that can afford delay and jitter, the transport layer waits and rearranges packets before delivering them to the upper layers.

FLOW CHARACTERISTICS Continued..

4-BANDWIDTH
Varies among applications
High bandwidth is required for real time application
Throughput is measure of practical bandwidth

EXAMPLE

Consider the example of a Routing protocol and


examine how it calculates the best flow path
through which traffic maintains a steady flow.
A routing protocol configured in the router selects
the best path to destination and routes the
packet to appropriate interface
There are various Routing protocols
1-RIP (Routing Information Protocol )
2-OSPF (Open Shortest Path First )
3-IGRP (Interior Gateway Routing Protocol )
4-EIGRP (Enhanced Interior Gateway Routing Protocol
5- IS-IS (Intermediate system to intermediate system )

EXAMPLE Continued

A Routing protocol can find best flow path on


the basis of various attributes
HOP COUNT
BANDWIDTH
DELAY
LOAD
RELIABILITY

EXAMPLE Continued
In this example, let us consider EIGRP (Cisco's proprietary protocol) as the routing protocol.
It can consider bandwidth, delay, load and reliability to find the best traffic route; by default, it considers bandwidth and delay.
It keeps three types of tables in the router's RAM:
1- Neighbor table (information from neighbors)
2- Topology table (all known routes to every destination)
3- Routing table (best route to every destination)

The best route to any destination is the one with the lowest cost metric.

Metric Calculation (Review)

EIGRP (bandwidth is in kbps):

k1 for bandwidth
k2 for load
k3 for delay
k4 and k5 for reliability

Router(config-router)# metric weights tos k1 k2 k3 k4 k5

Displaying Interface Values

Router> show interface s0/0
Serial0/0 is up, line protocol is up
  Hardware is QUICC Serial
  Description: Out to VERIO
  Internet address is 207.21.113.186/30
  MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
  rely 255/255, load 246/255
  Encapsulation PPP, loopback not set
  Keepalive set (10 sec)
  <output omitted>

The highlighted values are Bandwidth (BW), Delay (DLY), Reliability (rely) and Load (load).

Reliability is shown as a fraction of 255 (higher is better), for example:
  rely 190/255 (or 74% reliability)
  rely 234/255 (or 92% reliability)
  rely 255/255 (or 100% reliability)

Load is shown as a fraction of 255 (lower is better), for example:
  load 10/255 (or 3% loaded link)
  load 40/255 (or 16% loaded link)
  load 255/255 (or 100% loaded link)

EIGRP Metrics
Values displayed in show interface commands and sent in routing updates.

Media                 Bandwidth        BW_EIGRP =                     Delay        DLY_EIGRP =
                      (K = kilobits)   (10,000,000/Bandwidth) * 256                (Delay/10) * 256
100M ATM              100,000K         25,600                         100 uS       2,560
Fast Ethernet         100,000K         25,600                         100 uS       2,560
FDDI                  100,000K         25,600                         100 uS       2,560
HSSI                  45,045K          56,832                         20,000 uS    512,000
16M Token Ring        16,000K          160,000                        630 uS       16,128
Ethernet              10,000K          256,000                        1,000 uS     25,600
T1 (Serial Default)   1,544K           1,657,856                      20,000 uS    512,000
512K                  512K             4,999,936                      20,000 uS    512,000
DS0                   64K              40,000,000                     20,000 uS    512,000
56K                   56K              45,714,176                     20,000 uS    512,000

BW_EIGRP and DLY_EIGRP are the values sent in EIGRP updates and used in calculating the EIGRP metric. The calculated (cumulative) values are displayed in the routing table (show ip route).

The Routing Table

How does SanJose2 calculate the cost for this route?

SanJose2# show ip route
D    192.168.72.0/24 [90/2172416] via 192.168.64.6, 00:28:26, Serial0
         (Administrative Distance / Metric)

Determining the costs

[Topology: Westasman (Fa0/0 192.168.72.1/24, S0/0 192.168.64.2/30, S0/1 192.168.64.6/30) connects over T1 serial links to SanJose1 (S0/0 192.168.64.1/30, Fa0/0 192.168.1.1/24) and SanJose2 (S0/0 192.168.64.5/30, Fa0/0 192.168.1.2/24); all three routers run EIGRP AS 100.]

Bandwidth = (10,000,000 / bandwidth in kbps) * 256

Fast Ethernet (Westasman Fa0/0):
Bandwidth = (10,000,000 / 100,000) * 256 = 25,600
(Delay = 2,560)

T1 serial link (Westasman S0/1 to SanJose2 S0/0):
Bandwidth = (10,000,000 / 1,544) * 256 = 1,657,856
(Delay = 512,000)

Determining the costs

Delay = (delay / 10) * 256

Fast Ethernet (Westasman Fa0/0):
Delay = (100 / 10) * 256 = 2,560
(Bandwidth = 25,600)

T1 serial link (Westasman S0/1 to SanJose2 S0/0):
Delay = (20,000 / 10) * 256 = 512,000
(Bandwidth = 1,657,856)

Determining the costs

What is the cost (metric) for 192.168.72.0/24 from SanJose2?

Cost = slowest bandwidth + sum of delays
     =  1,657,856   (slowest bandwidth: the T1 link)
     +    512,000   (T1 delay)
     +      2,560   (Fast Ethernet delay)
     -------------
     =  2,172,416   <- the cost

where bandwidth = (10,000,000 / bandwidth in kbps) * 256
and delay = (delay / 10) * 256

The Routing Table

Administrative Distance / Metric

SanJose2# show ip route
D    192.168.72.0/24 [90/2172416] via 192.168.64.6, 00:28:26, Serial0

(90 = administrative distance, 2172416 = metric)
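
To tie the pieces together, here is a minimal sketch that reproduces the 2,172,416 metric from the link values used above (the router and interface names are the ones in the example topology):

```python
# SanJose2's EIGRP metric for 192.168.72.0/24 with default K values:
# metric = slowest bandwidth term + sum of delay terms along the path.
links = [
    {"name": "T1 serial (SanJose2 S0/0 - Westasman S0/1)", "bw_kbps": 1544,    "delay_usec": 20_000},
    {"name": "Fast Ethernet (Westasman Fa0/0)",            "bw_kbps": 100_000, "delay_usec": 100},
]

bw_term  = max((10_000_000 // l["bw_kbps"]) * 256 for l in links)  # slowest link
dly_term = sum((l["delay_usec"] // 10) * 256 for l in links)       # cumulative delay

print(bw_term + dly_term)  # 2172416 -> shown as [90/2172416] in show ip route
```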

QoS IMPROVEMENT
Various techniques are used to improve QoS:

1- Scheduling
2- Traffic shaping
3- Admission Control
4- Resource Reservation

QoS IMPROVEMENT
1- SCHEDULING

Packets arrive from various flows at a switch/router for processing.
Scheduling treats each packet according to some rule or technique in a way that improves QoS.
Common scheduling techniques are:
a) FIFO
b) PRIORITY QUEUING
c) WEIGHTED FAIR QUEUING

a) FIFO
Packets are treated on a first-come, first-served basis.
If the packet arrival rate is greater than the packet processing rate, the queue fills up and packets soon start being discarded.

QoS IMPROVEMENT
b) PRIORITY QUEUING
Better than FIFO: packets are assigned to priority classes.
Each priority class has its own queue, and the highest-priority queue is served first.
The system serves a lower-priority queue only when all higher-priority queues are empty. This can lead to STARVATION: if packets keep arriving in a high-priority queue, a low-priority queue may never be served and its packets are eventually discarded.

QoS IMPROVEMENT
c) WEIGHTED FAIR QUEUING
Still class-based: the classes (queues) are assigned weights.
Packets are served from each queue in proportion to its weight (see the sketch below).
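
The slides do not give an algorithm, so the following is only a minimal sketch of weight-proportional service (a simple weighted round robin, one way to approximate weighted fair queuing); the queue names and weights are made up for illustration:

```python
from collections import deque

# Hypothetical traffic classes with weights: serve up to `weight` packets from
# each queue per round, so service is proportional to weight and no class is
# starved completely.
queues = {
    "voice": {"weight": 3, "q": deque(["v1", "v2", "v3", "v4"])},
    "web":   {"weight": 2, "q": deque(["w1", "w2", "w3"])},
    "bulk":  {"weight": 1, "q": deque(["b1", "b2"])},
}

def serve_one_round():
    served = []
    for name, cls in queues.items():
        for _ in range(cls["weight"]):
            if cls["q"]:
                served.append((name, cls["q"].popleft()))
    return served

print(serve_one_round())
# [('voice','v1'), ('voice','v2'), ('voice','v3'), ('web','w1'), ('web','w2'), ('bulk','b1')]
```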

QoS IMPROVEMENT
Various techniques are used to improve QoS:

1- Scheduling
2- Traffic shaping
3- Admission Control
4- Resource Reservation

QoS IMPROVEMENT
2- TRAFFIC SHAPING

Control over the amount and rate of data sent by a source into the network.
Traffic can be modified at the entrance points of the network or in the routers.
Traffic shaping is used to enforce policies on flows.

Two techniques for reshaping traffic are:
a) LEAKY BUCKET
b) TOKEN BUCKET

QoS IMPROVEMENT
Leaky Bucket
Across a single link, packets are only allowed through at a constant rate.
Packets may be generated in a bursty manner, but after they pass through the leaky bucket they enter the network evenly spaced.
If all inputs enforce a leaky bucket, it is easy to reason about the total resource demand on the rest of the system.

QoS IMPROVEMENT

[Figure: Leaky Bucket analogy - bursty packets from the input enter the leaky bucket and leave as an evenly spaced output stream]

QoS IMPROVEMENT
A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate (see the sketch below).
The leaky bucket is a traffic shaper: it changes the characteristics of a packet stream.
Traffic shaping makes the network more manageable and predictable.
Usually the network tells the leaky bucket the rate at which it may send packets when a connection is established.
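
The slides describe the behaviour only in words; the following is a minimal sketch of the idea (the output rate and tick length are illustrative values, and a real implementation would also bound the queue and drop packets on overflow):

```python
from collections import deque

# Minimal leaky bucket sketch: bursty arrivals go into a queue (the bucket),
# and at most `rate` packets leave per tick, so the output is smooth.
class LeakyBucket:
    def __init__(self, rate_per_tick):
        self.rate = rate_per_tick
        self.bucket = deque()

    def arrive(self, packets):
        self.bucket.extend(packets)              # burst is absorbed by the bucket

    def tick(self):
        out = []
        for _ in range(min(self.rate, len(self.bucket))):
            out.append(self.bucket.popleft())    # constant-rate departure
        return out

lb = LeakyBucket(rate_per_tick=2)
lb.arrive(["p1", "p2", "p3", "p4", "p5"])        # bursty input
print(lb.tick(), lb.tick(), lb.tick())           # ['p1','p2'] ['p3','p4'] ['p5']
```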

QoS IMPROVEMENT

Token Bucket
Leaky Bucket: doesn't allow bursty transmissions.
In some cases we may want to allow short bursts of packets to enter the network without smoothing them out.
For this purpose we use a token bucket, which is a modified leaky bucket.
The bucket holds logical tokens instead of packets.
Tokens are generated and placed into the token bucket at a constant rate.
When a packet arrives at the token bucket, it is transmitted if a token is available; otherwise it is buffered until a token becomes available.
The token bucket holds a fixed number of tokens, so when it becomes full, subsequently generated tokens are discarded.
We can still reason about the total possible demand (see the sketch below).
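
Again only as a minimal sketch (the token rate and bucket depth are illustrative values, not from the slides):

```python
from collections import deque

# Minimal token bucket sketch: tokens accumulate at a constant rate up to a
# maximum depth, so short bursts can be sent immediately while the long-term
# rate stays bounded.
class TokenBucket:
    def __init__(self, tokens_per_tick, depth):
        self.rate = tokens_per_tick
        self.depth = depth
        self.tokens = depth
        self.waiting = deque()

    def arrive(self, packets):
        sent = []
        for p in packets:
            if self.tokens > 0:
                self.tokens -= 1
                sent.append(p)              # burst goes straight out while tokens last
            else:
                self.waiting.append(p)      # buffered until a token becomes available
        return sent

    def tick(self):
        self.tokens = min(self.depth, self.tokens + self.rate)  # excess tokens discarded
        sent = []
        while self.waiting and self.tokens > 0:
            sent.append(self.waiting.popleft())
            self.tokens -= 1
        return sent

tb = TokenBucket(tokens_per_tick=1, depth=3)
print(tb.arrive(["p1", "p2", "p3", "p4"]))   # ['p1','p2','p3'] - burst of 3 allowed
print(tb.tick())                             # ['p4'] once a new token is generated
```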

QoS IMPROVEMENT
Token Bucket

[Figure: Token Bucket - a token generator places one token into the bucket every T seconds; packets from the input are transmitted to the output only when a token is available]

QoS IMPROVEMENT
Various techniques are used to improve QoS:

1- Scheduling
2- Traffic shaping
3- Admission Control
4- Resource Reservation

QoS IMPROVEMENT
3- ADMISSION CONTROL

Applied by a router or switch.
The device accepts or rejects a connection request based on predefined parameters (flow specifications).
For example, the device first checks its buffers, link bandwidth, CPU capacity and previous commitments to other flows, and then decides whether to accept or reject the connection (see the sketch below).
Requests concerning priority/urgency are also checked and considered.
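
As a minimal sketch of that accept/reject decision (the flow-spec fields and the device's spare capacity are made-up illustration values, not taken from the slides):

```python
# Minimal admission-control sketch: accept a new flow only if the device can
# still honour its previous commitments after reserving resources for this flow.
device = {"free_buffer_kb": 512, "free_bw_kbps": 2000, "free_cpu_pct": 40}

def admit(flow_spec):
    ok = (flow_spec["buffer_kb"] <= device["free_buffer_kb"]
          and flow_spec["bw_kbps"] <= device["free_bw_kbps"]
          and flow_spec["cpu_pct"] <= device["free_cpu_pct"])
    if ok:  # reserve the resources so later requests see the reduced capacity
        device["free_buffer_kb"] -= flow_spec["buffer_kb"]
        device["free_bw_kbps"]   -= flow_spec["bw_kbps"]
        device["free_cpu_pct"]   -= flow_spec["cpu_pct"]
    return ok

print(admit({"buffer_kb": 128, "bw_kbps": 1500, "cpu_pct": 10}))  # True, accepted
print(admit({"buffer_kb": 128, "bw_kbps": 1500, "cpu_pct": 10}))  # False, not enough bandwidth left
```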

QoS IMPROVEMENT
Various techniques are used to improve QoS:

1- Scheduling
2- Traffic shaping
3- Admission Control
4- Resource Reservation

QoS IMPROVEMENT
4- RESOURCE RESERVATION

Data flows need resources such as buffers, CPU time, and the protocols required to run the appropriate applications.
The proper allocation of these resources to a flow is called resource reservation.

QoS IN FRAME RELAY

Four attributes are related to QoS in Frame Relay:
1- ACCESS RATE
2- COMMITTED BURST SIZE (Bc)
3- CIR (COMMITTED INFORMATION RATE)
4- EXCESS BURST SIZE (Be)

ACCESS RATE
Measured in bits per second.
Depends on the capacity of the user's channel to the network, e.g. T1 or E1.

COMMITTED BURST SIZE (Bc)
The maximum number of bits in a predefined period of time that the network commits to transfer without discarding any frame.
If a Bc of 4 Mb is committed for a period of 4 seconds, the user can send 4 Mb of data within that 4-second interval with guaranteed delivery.
Note that the data rate during the interval can vary.

QoS IN FRAME RELAY Continued

COMMITTED INFORMATION RATE (CIR)

The average committed bit rate.
If the user continuously sends at this rate, the network is committed to deliver the frames without any loss.
The instantaneous rate can be higher or lower than the CIR at different times.
If the average sending bit rate is equal to or less than the CIR, no frame will be discarded.

CIR = Bc / T

EXCESS BURST SIZE (Be)
The maximum number of bits in excess of Bc that the user can send during a predefined period of time.
The network is committed to transfer these bits only if there is no congestion.
Thus this rate defines a conditional commitment (see the sketch below).
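
Using the example figures from the committed burst slide above (Bc = 4 Mb over T = 4 s), a minimal sketch of how the quantities relate; the excess-burst value is a made-up illustration:

```python
# Frame Relay QoS quantities: CIR = Bc / T.
Bc_bits = 4_000_000      # committed burst size over the measurement interval
T_sec   = 4              # measurement interval
Be_bits = 1_000_000      # excess burst size (illustrative value)

CIR_bps = Bc_bits / T_sec
print(CIR_bps)           # 1000000.0 bps, i.e. 1 Mbps guaranteed on average

# Within one interval T, up to Bc bits are guaranteed, and up to Bc + Be bits
# may be accepted if the network is not congested.
sent_bits = 4_500_000
if sent_bits <= Bc_bits:
    print("guaranteed delivery")
elif sent_bits <= Bc_bits + Be_bits:
    print("delivered only if there is no congestion")
else:
    print("beyond Bc + Be: exceeds even the conditional commitment")
```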

QoS IN FRAME RELAY Continued

USER DATA TREATED BY FRAME RELAY

Traffic Optimisation In Cellular Networks

Some Cellular Bands

Standard              Access               Spectrum (MHz)              Channel Spacing   Peak Power (W)
AMPS                  FDMA / FDD           825-845 t, 870-890 r        30 kHz            3
GSM                   FDMA / TDMA / FDD    890-915 t, 935-960 r        200 kHz           0.8, 2, 5, 8
EGSM                  FDMA / TDMA / FDD    880-915 t, 925-960 r        200 kHz           0.8, 2, 5, 8
DAMPS IS-136          FDMA / TDMA / FDD    824-849 t, 869-894 r        30 kHz            0.8, 1, 2, 3
CDMAOne / CDMA2000    CDMA                 824-849 t, 869-894 r        1.25 MHz          0.125, 0.2, 0.5, 2
WCDMA                 CDMA                 1920-1980 t, 2110-2170 r    5 MHz             0.125, 0.25, 0.5, 2

(t = transmit band, r = receive band)

FDMA
Frequency spectrum is divided up into channels
and shared
Each channel is used by a single user
Least spectrally efficient

[Figure: channel allocation shown on frequency vs. time axes]

TDMA
Channels occupy cyclically repeating time intervals
or time slots
DAMPS is 6 times more spectrally efficient than
FDMA, and GSM is 8 times more so

[Figure: channel allocation shown on frequency vs. time axes]

CDMA
Each channel is assigned a unique code and occupies
the same frequency and time as other users
Most prone to interference
Maximum spectral efficiency

[Figure: frequency vs. time - all users occupy the same frequency at the same time, separated by different codes]

Dynamics of wireless communications

1G, 2G and 3G technologies

Access has evolved from FDMA in 1G to FDMA/TDMA in 2G. For 3G, CDMA is the buzzword.
Data speeds can be as high as 2 Mb/s (for a stationary MS) in 3G systems.

Evolution Paths To 3G

[Figure: evolution paths from 2G via 2.5G and 2.75G to 3G, for the IS-41 core network and the GSM core network]

Techniques for traffic optimisation

Frequency Reuse
7-cell (cluster) reuse for voice channels
21-cell (3 clusters) reuse for control channels
Base stations in adjacent cells are assigned completely different channels
Frequency reuse is done to increase capacity
Co-channel and adjacent-channel interference are the unwanted consequences of excessive reuse

Types of Interference_Co-channel Interference

Interference between signals from co-channel cells is called co-channel interference.
Unlike thermal noise, which can be overcome by increasing the signal-to-noise ratio (SNR), co-channel interference cannot be combated by simply increasing the carrier power of a transmitter. This is because an increase in carrier transmit power increases the interference to neighboring co-channel cells.

Co-channel interference:
Control channel interference - the reuse distance is 21 channels, hence less interference
Voice channel interference - more adjacent channel interference

To reduce co-channel interference, co-channel cells must be physically separated by a minimum distance to provide sufficient isolation due to propagation.

Types of Interference_Co-channel Interference

Interference is reduced by improved isolation of RF energy from the co-channel cell.
The parameter Q, called the co-channel reuse ratio, is related to the cluster size.

Types of Interference_Co-channel Interference

Interference between co-channel cells: co-channel cells must have a minimum separation.
Q = D/R = (3N)^(1/2), where the D/R ratio is the co-channel reuse factor.

Frequency planning is necessary:
More channels per cell - more system capacity, more co-channel interference
Fewer channels per cell - less system capacity, less co-channel interference

Types of Interference_Co-channel Interference

A small value of Q provides larger capacity, since the cluster size N is small, whereas a large value of Q improves transmission quality due to a smaller level of co-channel interference (see the sketch below).
A trade-off must be made between these two objectives in actual cellular design.
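
A minimal sketch of the Q = D/R = (3N)^(1/2) relation for a few common cluster sizes (the cluster sizes themselves are only illustrative):

```python
import math

# Co-channel reuse ratio Q = D/R = sqrt(3N): small N -> small Q -> more capacity
# but more co-channel interference; large N -> large Q -> better quality.
for N in (3, 4, 7, 12):
    Q = math.sqrt(3 * N)
    print(f"cluster size N={N:2d}  ->  Q = D/R = {Q:.2f}")
# N=7 (the 7-cell voice-channel reuse above) gives Q of about 4.58
```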

Types of Interference_Adjacent channel interference

Interference from signals adjacent to the desired signal.
Caused by imperfections in receiver filter design that allow nearby frequencies to leak into the passband.
Can be serious if an adjacent-channel user is very close to an MS trying to receive a base station signal - the near-far effect.
The near-far effect can also be caused by a nearby transmitter not necessarily belonging to the cellular system.

Techniques for traffic optimisation_Sectoring

A technique whereby the cell radius is kept the same, but the cell is divided into smaller directional sectors.
Traffic-carrying capacity is increased by bringing in more channels.
Co-channel interference is reduced.
The D/R ratio (co-channel reuse factor) is decreased.
SIR increases.

Techniques for traffic optimisation_Sectoring

[Figure: depiction of how interference is reduced by sectoring a 7-cell cluster into 120° and 60° sectors]

Sectorisation is usually done with:
a 120° transmission pattern
a 60° transmission pattern
down-tilting of sector antennas

Techniques for traffic optimisation_Sectoring

Subdivide cells into sectors, usually 3 (each sector is 120°) or 6 (each sector is 60°)
Less Tx power is needed, as a smaller area is covered
Each sector is served by a directional antenna and different frequencies
Directional antennas reduce co-channel interference, allowing smaller clusters and higher capacity

Techniques for traffic optimization_Cell splitting

[Figure: cell splitting of cell 4 into smaller cells (microcells) while preserving the frequency reuse plan]

Congested cells are divided into smaller cells (microcells)
Each smaller cell becomes an independent cell with its own base station
Cell splitting increases capacity by increasing channel reuse
Transmit power is reduced to avoid interference

Traffic carrying capability

Radio spectrum is limited.
A large number of users can be accommodated within the limited spectrum.
GSM-900 has 125 physical channels and GSM-1800 has 375.
CDMA IS-95 has 10 channels: a 12.5 MHz band with channels of 1.25 MHz bandwidth each.
Trunking - drawing on a pool of channels as needed - accounts for efficient spectrum utilization (see the sketch below).
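
As a quick check of those channel counts, a minimal sketch (GSM-900 is taken as a 25 MHz band and GSM-1800 as a 75 MHz band, an assumption consistent with the 125 and 375 channel figures above):

```python
# Number of physical (carrier) channels = band width / channel spacing.
bands = [
    ("GSM-900",    25_000_000, 200_000),     # 25 MHz band, 200 kHz carriers
    ("GSM-1800",   75_000_000, 200_000),     # 75 MHz band, 200 kHz carriers
    ("CDMA IS-95", 12_500_000, 1_250_000),   # 12.5 MHz band, 1.25 MHz carriers
]
for name, band_hz, spacing_hz in bands:
    print(f"{name}: {band_hz // spacing_hz} channels")
# GSM-900: 125 channels, GSM-1800: 375 channels, CDMA IS-95: 10 channels
```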

Traffic carrying capability in GSM

TDMA allows one carrier to transmit data for multiple users.
Each carrier has a multiframe of 120 ms.
A multiframe contains 26 frames (4.615 ms each):
Frame 13 is used for signalling (SACCH) - information about signal strength in neighbouring cells
Frame 26 is unused
The 24 remaining frames are used for voice and data
A frame consists of 8 bursts (0.577 ms each).
Conversation or data is carried in the bursts, as shown below.

[Figure: GSM frame hierarchy - a multiframe (120 ms) contains 26 frames, a frame (4.615 ms) contains 8 bursts, and a burst lasts 0.577 ms]
Normal burst format: Tail 3 bits | Data 57 bits | Flag 1 | Training 26 bits | Flag 1 | Data 57 bits | Tail 3 bits | Guard 8.25 bits
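
A minimal sketch checking that these timing figures are self-consistent:

```python
# GSM timing sanity check from the figures above.
burst_ms = 0.577
bursts_per_frame = 8
frames_per_multiframe = 26

frame_ms = burst_ms * bursts_per_frame               # about 4.615 ms
multiframe_ms = frame_ms * frames_per_multiframe     # about 120 ms
print(round(frame_ms, 3), round(multiframe_ms, 1))   # 4.616 120.0

# Normal burst length in bits: 3 + 57 + 1 + 26 + 1 + 57 + 3 + 8.25
print(3 + 57 + 1 + 26 + 1 + 57 + 3 + 8.25)           # 156.25 bits per burst
```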

Traffic carrying capability in GSM

GSM uses FDMA and TDMA to offer greater compression than DAMPS. Up to 8 simultaneous conversations may be carried on one carrier.
CDMA is the most spectrally efficient technology. IS-95 CDMA uses a single channel of 1.25 MHz to carry the entire traffic load for one or more base stations.
The same channel may be used in adjacent cells, and in split and sectorised cells, to increase traffic-handling capacity.
Soft handoff is employed whenever neighboring cells use the same frequency as the reference cell.

Traffic carrying capability in CDMA

CDMA is not bandwidth limited, but interference limited.
Increasing the number of simultaneous conversations within a cell increases interference and decreases channel throughput.
At the busy hour, QoS is at its minimum, whereas at non-busy hours there is enhanced service quality.
CDMA is the most prone to interference, but there are ways to counter interference problems, such as precise power control on the control and voice channels.

CDMA (CDMA2000 1x) as WLL

The most widely deployed WLL solution in the world.
High spectral efficiency to handle wireline-like traffic.
Data capability is inherent in the system (up to 144 kbps).
Backward and forward compatibility.
Available in the 450, 800 and 1900 MHz bands.

CDMA Channel or CDMA Carrier or CDMA Frequency

A duplex channel made of two 1.25 MHz-wide bands of electromagnetic spectrum: one for Base Station to Mobile Station communication (called the FORWARD LINK or DOWNLINK) and another for Mobile Station to Base Station communication (called the REVERSE LINK or UPLINK).
In 800 MHz cellular, these two simplex 1.25 MHz bands are 45 MHz apart.
In 1900 MHz, they are 80 MHz apart.
[Figure: a CDMA CHANNEL - a 1.25 MHz CDMA Reverse Channel (reverse link) and a 1.25 MHz CDMA Forward Channel (forward link), separated by 45 or 80 MHz]

CDMA 2000

CDMA 2000 Platforms:
CDMA2000 1x (1xRTT)
CDMA2000 1xEV-DO
CDMA2000 1xEV-DV
CDMA2000 3x (3xRTT)

CDMA 2000 1x (1xRTT)

CDMA 2000 1xEV-DO

CDMA 2000 1xEV-DV

CDMA 2000 3xRTT

CDMA2000 Radio Configurations

Rate Sets
A Rate Set is a set of Traffic Channel frame formats.
A Rate Set may carry voice, user data, or signaling.
Two Rate Sets are defined for use in cdmaOne systems.
All services provided over the air interface must conform to one of these two Rate Sets:
Rate Set 1 supports a maximum of 8550 bps, with an additional 1050 bps of overhead, for a total maximum rate of 9600 bps.
Rate Set 2 supports a maximum of 13,300 bps, with additional overhead bringing the total transmission rate to a maximum of 14,400 bps.
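
A minimal sketch checking the overhead implied by those rates (the Rate Set 2 overhead is derived here rather than quoted on the slide):

```python
# cdmaOne Rate Set arithmetic from the figures above.
rate_sets = {
    "Rate Set 1": {"payload_bps": 8550,  "total_bps": 9600},
    "Rate Set 2": {"payload_bps": 13300, "total_bps": 14400},
}
for name, rs in rate_sets.items():
    overhead = rs["total_bps"] - rs["payload_bps"]
    print(f"{name}: overhead = {overhead} bps")
# Rate Set 1: overhead = 1050 bps (as stated); Rate Set 2: overhead = 1100 bps (derived)
```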

Radio Configurations- Forward Link

Orthogonal Transmit Diversity splits the transmitted symbols into two streams, with each stream transmitted on a separate antenna.

Radio Configurations- Reverse Link

Spreading Rate (SR1) and Spreading Rate (SR3)

Spreading Rate (SR1) is also called 1x.
Spreading Rate (SR3) is also called 3x or MC (Multi-Carrier).
