Quality of Service (QoS)
Mukesh Chinta
Assistant Professor
Dept of CSE, VRSEC
Internet gives all flows the same best effort service - No promises about when or
whether packets will be delivered.
Best-effort service is often sufficient for traffic that is not sensitive to delay, such as file
transfers and e-mail.
But not all traffic is created equal - Different owners, different application
requirements. Some applications require service assurances.
The real-time traffic generated by some multimedia applications is delay sensitive and therefore
requires guaranteed and predictable performance.
Quality of service (QoS) is an internetworking issue that refers to a set of
techniques and mechanisms that guarantee the performance of the network to
deliver predictable service to an application program.
The goal of QoS is to provide preferential delivery service for the applications that
need it by ensuring sufficient bandwidth, controlling latency and jitter, and
reducing data loss.
In packet switching networks, traffic flow, packet flow or network flow is a sequence of
packets from a source computer to a destination, which may be another host, a multicast
group, or a broadcast domain.
Traditionally, four types of characteristics are attributed to a flow: reliability, delay, jitter
and bandwidth.
- Reliability: A flow needs reliability to deliver its packets safely. Lack of reliability means
losing a packet or an acknowledgement, which then requires retransmission.
- Delay: The time it takes data to travel from source to destination. Applications can
tolerate delay in different degrees.
- Jitter: The variation in delay for packets belonging to the same flow. High jitter
means the differences between delays are large; low jitter means the variation is small.
- Bandwidth: The rate at which traffic is carried by the network. Different applications need
different bandwidths.
For those applications:
- with a high level of sensitivity to reliability, error checking is done and a corrupted packet is
discarded.
- with a high level of sensitivity to delay, packets are given priority in transmission.
- with a high level of sensitivity to jitter, packets belonging to the same application must pass through
the network with the same delay.
- with a high bandwidth requirement, enough bandwidth must be allocated to ensure that packets are not
lost.
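The jitter definition above can be made concrete with a small numeric sketch. The timestamps below are made-up example values, not measurements from any real flow:

```python
# Illustrative sketch: per-packet delay and jitter for one flow.
# All timestamps (in ms) are invented example values.

send_times = [0.0, 20.0, 40.0, 60.0]   # departure times at the source
recv_times = [5.0, 26.0, 44.0, 67.0]   # arrival times at the destination

# Per-packet delay: arrival time minus departure time
delays = [r - s for r, s in zip(recv_times, send_times)]

# Jitter: variation in delay between consecutive packets of the same flow
jitter = [abs(delays[i + 1] - delays[i]) for i in range(len(delays) - 1)]

print(delays)   # [5.0, 6.0, 4.0, 7.0]
print(jitter)   # [1.0, 2.0, 3.0]
```

If all packets experienced the same delay, every jitter value would be zero, which is exactly what delay-variation-sensitive applications want.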
The following classes of service have been
defined by ATM protocol:
Constant Bit Rate (CBR)
Variable Bit Rate-Non Real Time (VBR-NRT)
Variable Bit Rate-Real Time (VBR-RT)
Available Bit Rate (ABR)
Unspecified Bit Rate (UBR)
The following mechanisms have been defined for provision of services required by
an application:
- Packet scheduling: the mechanism that selects a packet for transmission
from the packets waiting in the transmission queue.
- Traffic shaping/policing: the mechanism that controls the amount and the rate of
the traffic sent to the network.
- Resource reservation: signaling mechanisms that inform the
network entities of the QoS requirements of the multimedia applications using the
network resources.
- Admission control: the mechanism that decides whether to allow a new
session to join the network. It ensures that the QoS of existing sessions will
not be degraded and that the new session will be provided QoS support.
While being treated at a router according to the required level of service, a
packet may be delayed, suffer from jitter, be lost, or be assigned the
required bandwidth.
A good scheduling technique treats the different flows in a fair and
appropriate manner.
Three popular scheduling schemes are FIFO queuing, priority queuing, and
weighted fair queuing.
First In First Out (FIFO) is the simplest queuing mechanism. All
packets are inserted at the tail of a single queue and are
scheduled in order of their arrival.
Packets wait in a buffer (queue) until the node (router) is ready to
process them.
If the average arrival rate is higher than the average processing
rate, the queue will fill up and new packets will be discarded.
FIFO queuing is the default scheduling in the Internet. The only
thing that is guaranteed in this type of queuing is that the packets
depart in the order they arrive.
With FIFO queuing, all packets are treated the same in a packet-
switched network. Whether a packet belongs to FTP, Voice
over IP, or e-mail, it is equally subject to loss,
delay, and jitter.
If a flow increases its bandwidth aggressively, other flows will be affected by getting less
bandwidth, causing increased average packet delay for all flows.
Packets may arrive with the same delay but depart with different delays, producing jitter.
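FIFO (drop-tail) queuing can be sketched in a few lines. The queue capacity of 3 and the packet names are arbitrary illustration values:

```python
from collections import deque

# Minimal sketch of FIFO (drop-tail) queuing: one bounded queue,
# arrivals beyond the capacity are simply discarded.
CAPACITY = 3          # arbitrary buffer size for illustration
queue = deque()
dropped = []

def enqueue(packet):
    if len(queue) >= CAPACITY:
        dropped.append(packet)   # queue full: new arrival is discarded
    else:
        queue.append(packet)

for pkt in ["p1", "p2", "p3", "p4", "p5"]:
    enqueue(pkt)

# Packets depart strictly in arrival order, regardless of application
departures = [queue.popleft() for _ in range(len(queue))]
print(departures)  # ['p1', 'p2', 'p3']
print(dropped)     # ['p4', 'p5']
```

Note that the drop decision ignores what the packets carry: a VoIP packet and an e-mail packet are equally likely to be discarded, which is exactly the "all flows treated the same" behavior described above.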
In priority queuing, packets are first assigned to a priority class;
each priority class has its own queue.
The packets in the highest-priority queue are processed first.
Packets in the lowest-priority queue are processed last.
A packet's priority is determined from a specific field in the
packet header, such as the ToS field of an IPv4 header or the priority
field of IPv6.
Packets in higher priority queues always transmit before
packets in lower priority queues. A lower priority queue has a
chance to transmit packets only when there are no packets
waiting in a higher priority queue.
Aggressive bandwidth spending by the high priority queues
can starve the low priority queues.
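Strict priority scheduling can be sketched with a heap. The priority numbers and packet labels below are illustrative; a real router would derive the class from a header field such as ToS:

```python
import heapq

# Sketch of strict priority queuing: lower number = higher priority.
# The (priority, seq) tuple keeps FIFO order within one priority class.
pq = []
seq = 0
for priority, pkt in [(2, "email"), (0, "voip-1"), (1, "ftp"), (0, "voip-2")]:
    heapq.heappush(pq, (priority, seq, pkt))
    seq += 1

# A lower-priority packet is transmitted only when no higher-priority
# packet is waiting.
order = [heapq.heappop(pq)[2] for _ in range(len(pq))]
print(order)  # ['voip-1', 'voip-2', 'ftp', 'email']
```

The email packet, although it arrived first, departs last; with a steady stream of priority-0 arrivals it would never depart at all, which is the starvation problem noted above.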
In weighted fair queuing (WFQ), the packets are still assigned to different classes and
admitted to different queues.
The queues, however, are weighted based on the priority of the queues; higher priority
means a higher weight.
The system processes packets in each queue in a round-robin fashion with the number
of packets selected from each queue based on the corresponding weight.
A fraction of time is devoted to serve each class of packets, but the fraction depends on
the priority of the class.
WFQ can provide the required bandwidth and the delay performance is directly related
to the allocated bandwidth.
Packets of different sizes, however, may create imbalances in dividing a fair share of time
between the different classes.
For example, if the weights are 3, 2, and 1, three packets are processed from the first
queue, two from the second queue, and one from the third queue.
If the throughput for the router is R, the class with the highest priority may have a
throughput of R/2, the middle class a throughput of R/3, and the class with
the lowest priority a throughput of R/6.
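The weighted round-robin behavior with weights 3, 2, and 1 can be sketched directly. The queue contents (six packets per class) are arbitrary illustration values:

```python
from collections import deque

# Sketch of weighted fair queuing with weights 3, 2, 1:
# each round-robin cycle takes up to `weight` packets from each queue.
queues = {
    "high":   deque(f"h{i}" for i in range(6)),
    "middle": deque(f"m{i}" for i in range(6)),
    "low":    deque(f"l{i}" for i in range(6)),
}
weights = {"high": 3, "middle": 2, "low": 1}

schedule = []
while any(queues.values()):
    for cls, q in queues.items():
        for _ in range(min(weights[cls], len(q))):
            schedule.append(q.popleft())

# One full cycle serves 3 + 2 + 1 = 6 packets: the high class gets
# 3/6 = 1/2 of the throughput, the middle 2/6 = 1/3, the low 1/6.
print(schedule[:6])  # ['h0', 'h1', 'h2', 'm0', 'm1', 'l0']
```

The first cycle shows the R/2, R/3, R/6 split from the example; unlike strict priority queuing, the low-weight class is still served in every cycle rather than starved.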
Controlling the amount and the rate of traffic is called traffic shaping or traffic policing.
The first term is used when the traffic leaves a network; the second term is used when
the data enters the network.
Two techniques can shape or police the traffic: leaky bucket and token bucket.
For example, assume that the network has committed a bandwidth of 3 Mbps for a host. The host sends a
burst of data at a rate of 12 Mbps for 2 seconds, for a total of 24 Mb of data. The host is silent for 5
seconds and then sends data at a rate of 2 Mbps for 3 seconds, for a total of 6 Mb of data. In all, the host
has sent 30 Mb of data in 10 seconds. The leaky bucket smooths the traffic by sending out data at a rate
of 3 Mbps during the same 10 seconds.
Without the leaky bucket, the beginning burst may have hurt the network by consuming
more bandwidth than is set aside for this host.
A simple leaky bucket implementation involves a FIFO queue that holds the packets. If the
traffic consists of fixed-size packets (e.g., cells in ATM networks), the process removes a fixed
number of packets from the queue at each tick of the clock.
If the traffic consists of variable-length packets, the fixed output rate must be based on the
number of bytes or bits.
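The 30 Mb example above can be replayed as a simple per-second simulation. The tick granularity of one second is an assumption made for illustration:

```python
# Sketch of the leaky-bucket example: bursty arrivals
# (12 Mbps for 2 s, silence for 5 s, 2 Mbps for 3 s)
# are smoothed to a constant 3 Mbps output.
# Units are Mb per 1-second tick.

arrivals = [12, 12, 0, 0, 0, 0, 0, 2, 2, 2]   # Mb arriving each second
LEAK_RATE = 3                                  # Mb drained per second

bucket = 0
output = []
for mb_in in arrivals:
    bucket += mb_in                 # the burst enters the bucket (queue)
    sent = min(bucket, LEAK_RATE)   # at most the leak rate goes out
    bucket -= sent
    output.append(sent)

print(output)       # [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
print(sum(output))  # 30 Mb in 10 seconds, at a steady 3 Mbps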
Integrated Services (IntServ)
IntServ is a flow-based QoS model in which resources are reserved before data is sent:
1. The packets are first classified according to the service they require.
2. The model uses scheduling to forward the packets according to their flow characteristics.
3. Devices like routers use admission control to determine whether the device has the capability
(available resources) to handle the flow before making a commitment.
To define a specific flow, a source needs to define a flow specification, which is
made of two parts:
Rspec (resource specification) - Rspec defines the resource that the flow
needs to reserve (buffer, bandwidth, etc.). "What guarantees does it need?"
Tspec (traffic specification) - Tspec defines the traffic characterization of the
flow. "What does the traffic look like?"
PATH messages carry the sender's Tspec (token-bucket parameters). Routers note the direction from
which PATH messages arrive and set up a reverse path to the sender.
Receivers send RESV messages that follow the reverse path and set up reservations. If a reservation
cannot be made, the user gets an error.
A Path message is sent in a multicast environment; a new message is created when the path
diverges.
After a receiver has received a Path message, it sends a Resv message. The Resv message
travels toward the sender (upstream) and makes a resource reservation on the routers that
support RSVP.
If a router on the path does not support RSVP, it routes the packet based on the best-effort
delivery methods.
In RSVP, the resources are not reserved for each receiver in a flow; the reservation is
merged.
RSVP Reservation Styles
When there is more than one flow, the router needs to make a reservation to
accommodate all of them. RSVP defines three types of reservation styles: wild-card filter
(WF), fixed filter (FF), and shared explicit (SE).
- Wild-Card Filter (WF): In this style, the router creates a single reservation for all
senders. The reservation is based on the largest request. This style is used when
the flows from different senders do not occur at the same time.
- Fixed Filter (FF): In this style, the router creates a distinct reservation for each flow.
This means that if there are n flows, n different reservations are made. This style
is used when there is a high probability that flows from different senders will occur at the
same time.
- Shared Explicit (SE): In this style, the router creates a single reservation that can be
shared by a set of flows.
- Soft state: The reservation information (state) stored in every node for a flow needs to be
refreshed periodically; this is referred to as a soft state. The default refresh interval
is currently 30 seconds.
There are at least two problems with Integrated Services that may prevent its full
implementation in the Internet: scalability and service-type limitation.
- Scalability: The Integrated Services model requires that each router keep information for
each flow. As the Internet grows every day, this is a serious problem. Keeping per-flow
information is especially troublesome for core routers, because they are primarily designed
to switch packets at a high rate, not to process per-flow state.
- Service-type limitation: The Integrated Services model provides only two types of
services, guaranteed and controlled-load. Those opposing this model argue that applications
may need more than these two types of services.
In the DiffServ model, packets are marked by applications into classes according to their
priorities.
This model was introduced by the IETF (Internet Engineering Task Force) to handle the
shortcomings of Integrated Services. Two fundamental changes were made:
1. The main processing was moved from the core of the network to the edge of the network.
This solves the scalability problem. The routers do not have to store information about
flows. The applications, or hosts, define the type of service they need each time they send
a packet.
2. The per-flow service is changed to per-class service. The router routes the packet based on
the class of service defined in the packet, not the flow. This solves the service-type
limitation problem.
DiffServ is a coarse-grained, class-based mechanism for traffic management. In
contrast, IntServ is a fine-grained, flow-based mechanism.
DS Field
In DiffServ, each packet contains a field called the DS field. The value of this field is
set at the boundary of the network by the host or the first router designated as
the boundary router.
DiffServ uses a 6-bit differentiated services code point (DSCP) in the 8-bit
differentiated services field (DS field) in the IP header for packet classification
purposes.
The DSCP is a 6-bit subfield that defines the per-hop behavior (PHB). The 2-bit CU
(Currently Unused) subfield is not currently used.
The DiffServ capable node (router) uses the DSCP 6 bits as an index to a table
defining the packet-handling mechanism for the current packet being processed.
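Extracting the DSCP index from the DS byte is a pair of bit operations. The value 0xB8 used here is the DS byte for the standard Expedited Forwarding codepoint (DSCP 46):

```python
# Sketch of reading the 6-bit DSCP from the 8-bit DS field
# (the octet formerly known as the IPv4 ToS byte).
ds_field = 0xB8        # DS byte as it appears in the IP header

dscp = ds_field >> 2   # upper 6 bits: index into the PHB table
cu = ds_field & 0b11   # lower 2 bits: the CU (currently unused) subfield

print(dscp)            # 46 -> Expedited Forwarding (EF)
print(cu)              # 0
```

A DiffServ-capable router would use `dscp` (46) as the index into its table of packet-handling mechanisms, selecting the EF treatment described below.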
Per-Hop Behavior (PHB)
The DiffServ model defines per-hop behaviors (PHBs) for each node that receives a packet.
So far three PHBs are defined: DE PHB, EF PHB, and AF PHB.
- DE PHB: The DE PHB (default PHB) is the same as best-effort delivery, which is
compatible with ToS.
- EF PHB: The EF PHB (expedited forwarding PHB) provides the following services:
a. Low loss.
b. Low latency.
c. Ensured bandwidth.
This is the same as having a virtual connection between the source and destination.
- AF PHB: The AF PHB (assured forwarding PHB) delivers the packet with a high
assurance as long as the class traffic does not exceed the traffic profile of the node.
Users of the network need to be aware that some packets may be discarded.
Traffic Conditioners
To implement DiffServ, the DS node uses traffic conditioners such as meters,
markers, shapers, and droppers.
The meter checks to see if the incoming flow matches the negotiated traffic
profile. The meter also sends this result to other components. The meter can use
several tools such as a token bucket to check the profile.
A marker can re-mark a packet that is using best-effort delivery
(DSCP: 000000) or down-mark a packet based on information received
from the meter. Down-marking (lowering the class of the flow) occurs if
the flow does not match the profile. A marker does not up-mark a packet
(promote the class).
A shaper uses the information received from the meter to
reshape the traffic if it is not compliant with the negotiated profile.
A dropper, which works as a shaper with no buffer, discards
packets if the flow severely violates the negotiated profile.
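A meter checking a flow against its profile with a token bucket can be sketched as follows. The rate, bucket depth, and arrival pattern are invented illustration values, and packets are counted as whole units rather than bytes:

```python
# Sketch of a token-bucket meter, as a traffic conditioner might use
# to check arrivals against a negotiated profile.
RATE = 2     # tokens added per tick (conforming rate, in packets/tick)
DEPTH = 4    # bucket capacity: the allowed burst size

tokens = DEPTH
results = []
for arriving in [1, 6, 1, 0, 3]:          # packets arriving in each tick
    tokens = min(DEPTH, tokens + RATE)    # refill, capped at the depth
    for _ in range(arriving):
        if tokens >= 1:
            tokens -= 1
            results.append("in-profile")      # passed to the marker as-is
        else:
            results.append("out-of-profile")  # candidate for down-marking
                                              # or dropping

print(results.count("in-profile"))       # 9
print(results.count("out-of-profile"))   # 2
```

Only the 6-packet burst exceeds the profile (two of its packets find no tokens); the meter's verdict is what the marker, shaper, or dropper then acts on.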
In the DiffServ end-to-end
architecture, the entire path,
with all routers, must be
DiffServ enabled.
Using DiffServ, a small bit-pattern
in each packet, in the IPv4 ToS
octet or the IPv6 traffic class octet,
is used to mark a packet to receive
a particular forwarding treatment,
or per-hop behavior (PHB), at
each network node.
PHB refers to the packet
scheduling, queuing, policing, or
shaping behavior of a node on any
given packet belonging to a
behavior aggregate (BA), as
configured by a Service Level
Agreement (SLA) or policy.