
Lecture 2

Goal of the Course:

The objective is to build a 'scalable' network, that is, a network with the potential to grow indefinitely and to support applications as diverse as teleconferencing, video-on-demand, voice over IP, electronic commerce, distributed computing and digital libraries, to name a few.

Asking the question ‘why?’ and answering it.

What are Computer networks?

A computer network is an interconnection of general-purpose programmable devices that handle data.

Identifying The Requirement Constraints Of A Network:

Different people who interact with a network have different requirements: e.g., an application programmer wants the network to be efficient, easy to work with, and to provide error-free data transfer, while a network designer wants the network to be cost-effective and to allow efficient resource utilization.

A- The Connectivity:

The goal of a network is to achieve connectivity between two or more systems. For reasons of security and access control, a network may be public, like the Internet or Wi-Fi hotspots, or private, like the Local Area Network in SEECS.

Some of the components and issues related to a network are the following:

Link:

A link is the physical transmission medium that connects networked devices/systems.
Different types of links:

There are two types of links:

1. Point-to-Point or Direct Access Link:

Such links connect exactly two devices only. They provide a direct path between the two devices, forming a network that has no intermediate device.

The drawback of this approach is that it cannot be used to build a large network: for cost reasons it is not feasible to have a direct point-to-point link between every pair of nodes. A network in which there is a direct point-to-point link between every node and all other nodes is called a fully connected network.
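To see why full connectivity does not scale, note that a fully connected network of n nodes needs n(n-1)/2 point-to-point links. A minimal sketch in Python (the node counts below are arbitrary examples):

def links_in_fully_connected_network(nodes: int) -> int:
    """Number of point-to-point links needed so every node connects directly to every other node."""
    return nodes * (nodes - 1) // 2

# Arbitrary example node counts: the link count grows roughly with the square of n.
for n in (5, 50, 500):
    print(n, "nodes ->", links_in_fully_connected_network(n), "links")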
2. Multiple Access Link:

When multiple devices are connected to one another via a single link, such that each of the devices is connected by the link to all other devices at the same time, the link is said to be a multiple-access link. The shared channel that connects all the devices is also called a 'bus' or an 'ether'.

Multiple-access links solve the problem of connecting multiple nodes without requiring point-to-point links. However, they are limited in the number of nodes they can connect and in the geographical distance they can span.

Classification Based on Media of Transmission:

Some types of transmission links are:

1. Magnetic media:

One of the most common ways to transport data from one computer to another is to physically transport tapes or disks to the destination machine. Hard disk drives, USB flash drives, CDs and DVDs are examples. For many applications this is the most cost-effective method, because of its low cost per bit transported.

Its one drawback is delay, as the storage device itself has to be carried from one machine to the next.
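As a rough, hypothetical illustration of why the cost per bit is so low, consider carrying a single high-capacity disk across town; the capacity and travel time below are assumed values, not figures from the lecture:

# Hypothetical illustration: effective bandwidth of physically transporting a disk.
capacity_bits = 1e12 * 8      # an assumed 1 TB disk, expressed in bits
travel_time_s = 60 * 60       # an assumed one hour of transport

effective_bandwidth_bps = capacity_bits / travel_time_s
print(f"Effective bandwidth: {effective_bandwidth_bps / 1e9:.2f} Gbit/s")  # about 2.2 Gbit/s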

2. Copper Twisted pair:

Consists of a pair of copper wires, twisted together to reduce electrical interference (hence the name), running from source to destination.

They have a relatively small range of several kilometers before they require amplification, and can carry several megabits per second at low cost. Applications include the telephone network, DSL, etc.

3. Coaxial Cable
It is a shielded copper line, providing better range than the twisted pair cable. Coax
is widely used for cable television and metropolitan area networks (MAN).

4. Optical Fiber
An optical fiber is a thin strand of glass that carries data as pulses of light. It offers very high bandwidth, very low bit-error rates and long distances between repeaters, and is used for backbone and long-haul links.

5. Wireless media
Wireless links carry data as electromagnetic waves (radio, microwave, infrared) through free space rather than over a guided medium. They make mobility easy, but are more exposed to interference and generally offer lower, shared bandwidth.

Node:

Nodes are, in simple words, the machines that are connected through different links
in networks. Formally, nodes are active electronic devices that are attached to a
network, and are capable of sending, receiving, or forwarding information or data.

Types of nodes:
There are two types of nodes:

1. Nodes that use the network (Hosts)

These are nodes that do not participate in routing or packet switching. They support users and run application programs.

2. Nodes that implement the Network (Switches/Routers/ Hubs/ Repeaters)

These are the nodes whose function is to implement the network itself, for example receiving data from end hosts and forwarding it toward other hosts (potentially through other network-implementing nodes).

Switches:

Switches are the nodes that provide communication between systems. The number of users that a switch can support is limited, and so is the geographical distance over which it can provide service. For example, Ethernet can support about 1000 users in an area as large as a building. For more users, we need to form separate networks and then join those networks.

Routers:
Routers are the nodes that provide communication between different networks (which may be based on different underlying technologies). Thus, to connect two separate LANs, a router may be used. As the name implies, it 'routes' data: it receives data/messages from the source and systematically forwards them toward the destination node based on their address.

(For further reading : http://www.livinginternet.com/i/iw_route_arch.htm


http://computer.howstuffworks.com/router.htm (courtesy Sami-ur-Rehman) )

Cloud:
In networks, the cloud is used to represent a higher level of abstraction. The cloud is a placeholder for a network that we are using or are connected to but whose inner workings we do not want to see, thus simplifying the study of the network.

For example, when showing an interconnection of several networks, we may represent each network with a cloud, to hide the network itself and see only the bigger picture.
__________________________________________________________

Mode of Data Transfer:

Data is transferred in networks in two primary ways: circuit switching and packet switching. These form the core of networking.

Circuit Switched Networks:

A network that establishes a connection or channel between the communicating nodes, as if they were physically connected by an electrical circuit, is called a circuit-switched network. Such networks are used, for example, in telephone connections.
Thus, in circuit-switched networks, the resources needed along a path (for example the bandwidth) to provide for communication between the nodes are reserved for the duration of the session, and the channel is freed only when the connection itself is terminated.

The figure above shows User 1 communicating with User 5 (and User 2 with User 4); a dedicated connection must be established, over which a stream of data flows in sequence. Circuit switching follows the 'message stream' technique.

There are certain advantages and disadvantages to this approach:

1. The dedicated connection ensures no loss of data due to unavailability of network resources such as bandwidth.
2. The delay experienced during the data transfer is constant, because the data travels through the same path every time (thus introducing the same delay each time).
3. A disruption in the link/channel ends the communication, as there is no alternate path for the data to flow; the connection has to be set up again.
4. The resources remain allocated for the full duration of a communication even if no data is flowing on the circuit, thus wasting link capacity whenever the link is not used to its full capacity or carries no data at all.

Packet Switched Networks:

Packet switching was developed in the 1960s to address the problems of circuit-switched networks.
The data is broken down into chunks/pieces, and those chunks are wrapped into structures called packets. Each packet contains, along with the data (or payload), the addresses of the source and destination nodes, sequence numbers and other control information. A packet may also be called a 'segment' or 'datagram'.

Thus packet-switched networks can be described as networks in which communication is done via packets of data that do not flow through dedicated channels but are routed, in any order, over any route available between the two nodes. If no data is available at the sender at some point during a communication, then no packet is transmitted over the network.

The figure shows User 1 communicating with User 5 (and User 2 with User 4). Unlike the previous figure, the data packets can move in any order, over any available path, to reach the destination. Packet switching follows the 'store and forward' technique.

There are certain advantages and disadvantages to this approach:

1. A major advantage is that network resources are not wasted: since the link/channel is only in use when packets are being sent, no capacity is held idle.
2. If one route is blocked or disrupted, the packets can still reach their destination over an alternate route.
3. Corruption of data leads to the dropping/deletion of only the corrupt packets, not of the entire message.
4. Different parts of the message may arrive at the destination at different times, due to the different routes taken by the packets. This causes problems when data is required in a certain sequence.
5. The delay in the data transfer may be variable.
6. Packets may be lost or corrupted while travelling through faulty links and nodes.

(For further reading/source:


http://www.cs.virginia.edu/~mngroup/projects/mpls/documents/thesis/node8.html )

Internetworking:

Internetworking is the interconnection of two or more distinct, disparate networking technologies into a single homogeneous logical network (using routers/gateways). 'Internetwork' is also abbreviated as 'internet' (note the small 'i'). An example of an internet is the Internet.

For scalability, internetworks should themselves be connectable into even larger internetworks; this gives rise to issues and complexities.

Issues of Internetworking:

On an internet all of the following issues must be resolved.

Addressing:

Joining hosts, directly or indirectly, does not automatically give connectivity between them, because the hosts cannot yet be told apart. We require a way to distinguish between the nodes on the network and thus allow communication between two specific nodes. This is done by assigning an address to each node.
An address is a piece of binary code (or byte string) that identifies a node; that is, the network can use a node's address to distinguish it from the other nodes connected to the network.

Routing:

When a source node wants the network to deliver a message to a certain destination node, it specifies the address of the destination node. If the sending and receiving nodes are not directly connected, then the switches and routers of the network use this address to decide how to forward the message toward the destination. This process is called routing.
Unicast, Multicast and Broadcast:

A message sent by a node may have three kinds of recipients: it may be destined for a single host (unicast), it may be meant for several hosts (multicast), or it may be for all the hosts inside the network (broadcast).

An internetwork is required to support all three kinds of message addressing.

This is analogous to a classroom. If a teacher is talking to a student 'A', the teacher is 'unicasting'; if he is talking to a group of students 'A', 'B', 'C', etc., he is 'multicasting'; and if he is talking to the whole class, he is 'broadcasting'.

B- Cost-effective Resource Sharing:

The need for resource sharing arises when multiple nodes want to use the network at the same time, or when several hosts share the same link and all want to use it at the same time. Resource sharing is done by multiplexing.
Multiplexing is defined as the means through which a system resource is shared among multiple users.

Multiplexing is similar to the 'timesharing' concept in computer systems, where a single physical CPU is shared (or multiplexed) among multiple jobs such that each process believes it has its own private processor. Similarly, data being sent by multiple users can be multiplexed over the physical links that make up a network.

Types Of Multiplexing:

Multiplexing For Circuit Switched Networks:

1. Frequency Division Multiplexing (FDM)

In FDM, the frequency spectrum of a link/channel is shared among the connections established across the link. Specifically, the link dedicates a frequency band (or bandwidth) to each connection for the duration of the connection. FM radio stations and TV channels, for example, use FDM to share the frequency spectrum.
(For further reading: http://en.wikipedia.org/wiki/Frequency-division_multiplexing)
2. Time Division Multiplexing (TDM) (or Synchronous Time Division
Multiplexing STDM)

On a TDM link, time is divided into equal-sized quanta that are assigned to the connections in a round-robin fashion, giving each connection an equal chance to send its data over the physical link. The majority of telephone systems use TDM.
(For further reading: http://en.wikipedia.org/wiki/Time-division_multiplexing)

Multiplexing for Packet Switched Networks:

3. Statistical Multiplexing

The multiplexing schemes used for circuit-switched networks suffer from inefficiency: if a connection has no data to send, its share of the link sits idle, and the maximum number of users/flows is fixed ahead of time, making the approach non-scalable. The solution lies in packet switching.

Packet switching uses statistical multiplexing.

Statistical multiplexing uses the time-sharing concept of STDM, that is, the link is shared over time, but it allows data to flow on demand rather than in predetermined time slots, thus eliminating the inefficiency.

Such networks suffer from the problem of congestion. Congestion happens when the inward flow of data into a switch/router is greater than the outward flow, exhausting the buffer space on the networking device. Fairly allocating link capacity to different flows and dealing with congestion when it occurs are the key challenges of statistical multiplexing.
(For Further reading: http://en.wikipedia.org/wiki/Statistical_multiplexing )
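A minimal sketch of the difference in Python (the traffic pattern below is invented purely for illustration): with fixed TDM slots an idle flow wastes its slot, while a statistical multiplexer only spends slots on packets that actually exist.

# Toy comparison of TDM and statistical multiplexing on one shared link.
# arrivals[f][r] = packets of flow f that become ready at the start of round r.
arrivals = {
    "A": [1, 0, 0, 1, 0, 0],
    "B": [0, 2, 0, 0, 0, 1],
    "C": [0, 0, 0, 3, 0, 0],
}
flows = list(arrivals)
rounds = len(arrivals["A"])

def tdm_slots_to_drain():
    """Each round gives every flow one fixed slot; an idle flow's slot is wasted."""
    queue = {f: 0 for f in flows}
    slots = idle = r = 0
    while r < rounds or any(queue.values()):
        for f in flows:
            if r < rounds:
                queue[f] += arrivals[f][r]
            slots += 1
            if queue[f] > 0:
                queue[f] -= 1
            else:
                idle += 1
        r += 1
    return slots, idle

def statmux_slots_to_drain():
    """The link serves any queued packet, so a slot is spent only when data exists."""
    total_packets = sum(sum(per_flow) for per_flow in arrivals.values())
    return total_packets, 0

print("TDM (slots used, idle slots):", tdm_slots_to_drain())
print("Statistical multiplexing (slots used, idle slots):", statmux_slots_to_drain())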

C- Support for Common Services:

One view of the network is that it delivers packets among a collection of devices and computers. In an alternative view, the network provides the means for a set of application processes, distributed over the connected computers, to communicate. The requirement of a computer network is that the application programs running on the hosts connected to the network must be able to communicate in a meaningful way.
Logical Channels:

The network can be viewed as providing logical channels over which application-level processes communicate with each other, with each channel providing the set of services required by that application. One can use a cloud to abstractly represent the connectivity (which can be complex) among a set of computers and think of a channel as connecting one process to another.

The channel itself can be thought of as a pipe connecting two applications: a sending application puts data in one end and expects the network (the cloud) to deliver that data to the application at the other end of the pipe.

The channel must satisfy all the requirements of the application program it serves. Issues such as whether the order of arrival of packets matters, whether security is a prime concern, and how much packet loss is acceptable all have to be taken into account when designing channels.

Types of Channels:

Two channel types are listed in the following, but it should be kept in mind that a new type of channel can be created if none of the existing types can serve the application programs; closing this gap between the two (the technology, in the form of channels, and the application requirements) is the challenge of this field of study.
This gap, between what the technology can provide and what the applications require and demand, is called the semantic gap.

1. Request/Reply Channels:

The client process, on the client machine, makes a request (for certain data or a service) of the server process, through the network. The server process replies to the client in return.

The client machine (or client process) is the one that sends the request, while the server machine (or server process) is the one that replies to it. When a client receives (downloads) data from the server it is called reading, and when the client sends data to be written on the server machine (uploads data) it is called writing.

Request/reply channels can be used, for example, for file transfer. The structure of the channel supports security, as data is sent only when requested, making it less likely to be leaked. It is generally a loss-free (no packet losses) channel.
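A minimal sketch of the request/reply pattern using Python TCP sockets, assuming Python 3.8+ for socket.create_server (the port number and message contents are arbitrary examples, and error handling is omitted):

import socket
import threading
import time

PORT = 5000  # arbitrary example port

def server():
    # The server process waits for one request and sends back one reply.
    with socket.create_server(("127.0.0.1", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(("reply to: " + request).encode())

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

# The client process sends a request and reads the reply.
with socket.create_connection(("127.0.0.1", PORT)) as sock:
    sock.sendall(b"GET some-data")
    print(sock.recv(1024).decode())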
2. Message Stream Channels:

This type of channel provides data transfer with an emphasis on the sequence in which data arrives at the user end.

It is used in applications where some data loss may be tolerable but the sequence of data is important, for example video streaming and teleconferencing, because a video application can operate adequately even if some frames are not received.

Other features may include support for 'multicast' (for example, when multiple parties participate in a teleconference or view the video) and the need for security over the channel, to ensure the privacy and integrity of the video data.
Reliability:

Reliable delivery of the data is the first concern of the network. Data corruption may occur for several reasons. It is the job of the network to correct such errors where possible, effectively masking them, and to provide error-free transmission.

The three types of errors/failures:

1. Bit or Burst Errors:

When a single transmitted bit is corrupted, it is called a 'bit error'. When several consecutive bits are corrupted, the error is said to be a 'burst error'. The error may be a flip of a 1 to a 0 or of a 0 to a 1.

Bit or burst errors occur due to electromagnetic interference, for example lightning strikes, power surges, microwave ovens, or interference from other transmission lines. They affect roughly 1 out of 10^6 to 10^7 bits on a typical copper-based cable and 1 out of every 10^12 to 10^14 bits on an optical fiber.

Correction involves identifying the corrupt bit and inverting it. If the error cannot be corrected, the entire packet of data is discarded and its retransmission is requested.

2. Packet Loss:

Packet loss occurs when an entire packet is lost or discarded by the network.

One reason can be an uncorrectable error, as seen with bit/burst errors. Another is 'congestion' in the network: the buffer of a switch or router is full, so it cannot accept further packets and drops them. A third reason can be buggy software that incorrectly forwards a packet onto the wrong link.

The late arrival of packets is often incorrectly interpreted as packet loss.

3. Node/Link Failure:

Failures at the node and link level form the third type of error.

The reasons for such failures include the physical link between nodes being cut (when done intentionally these are sometimes called 'malicious cuts'), or the system connected to the node/link crashing, for example due to software crashes or power failures.

Such failures place a dramatic amount of stress on the other nodes and links in the network, as traffic is rerouted through them, and may slow down the entire network. An example is the severed submarine optical fibers in the Mediterranean Sea, which slowed down the Internet on three continents.

One of the difficulties in dealing with this class of failure is distinguishing between failed nodes and ones that are merely slow, or, in the case of links, between ones that have been cut and ones that are introducing a high number of bit errors due to their poor condition (insulation, material quality, etc.).
Lecture 3

OSI architecture:

ISO was one of the first organizations to formally define a common standard way to connect
computers.

Their architecture, called the Open Systems Interconnection (OSI) architecture, partitions network functionality into seven layers, where one or more protocols implement the functionality assigned to a given layer. In this sense, the OSI model is not a protocol graph (since it defines layers and not relationships between protocols) but a reference model.

ISO, in conjunction with another standardization body called ITU, publishes a series of protocol
specifications based on the OSI model. This series is sometimes called the X dot series since the
protocols are given names like X.25, X.400 and X.500.

Starting at the bottom and moving up, the physical layer handles the transmission of raw bits over a communications link. The data link layer then collects a stream of bits into a larger aggregate called a frame. Network adapters, along with the device drivers running in the node's OS, typically implement this layer. This means that frames (and not raw bits) are delivered to hosts.

The network layer handles routing among nodes within a packet-switched network. At this layer, the unit of data exchange is a packet rather than a frame, although they are fundamentally the same thing. The lower three layers are implemented on all nodes, including switches within the network and hosts connected along the exterior of the network.

The transport layer then implements what we have up to this point been calling the process-to-process channel. Here the unit of data is called a message (or segment) rather than a packet or a frame. The transport layer and the higher layers typically run only on the end hosts and not on the intermediate switches or routers.

There is less agreement about the definition of the top layers. Skipping ahead to the seventh (topmost) layer, we find the application layer. Example protocols include HTTP and FTP. Below that, the presentation layer is concerned with the format of data exchanged between peers, e.g., whether an integer is 16/32/64 bits long and whether the MSB is transmitted first or last. Finally, the session layer provides a name space that is used to tie together the potentially different transport streams that are part of a single application. For example, an audio stream and a video stream may be managed together in a teleconferencing application.
Internet Architecture:

The Internet architecture, also called the TCP/IP architecture after its two most famous protocols, is shown above. The architecture evolved from experience in implementing the ARPANET.

While the seven-layer OSI model can be applied to the Internet (with some imagination), a four-layer model is used instead. At the lowest layer is a variety of network protocols (this layer is also called the data link layer or subnetwork layer). In practice, these protocols are implemented using a combination of hardware (network adapters) and software (network device drivers). For example, you might find the Ethernet or Fiber Distributed Data Interface (FDDI) protocols at this layer.

The second layer consists of a single protocol, the Internet Protocol (IP). This is the protocol that supports the interconnection of multiple networking technologies into a single logical internetwork.

The third layer consists of two main protocols, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). TCP and UDP provide alternative logical channels to application programs: TCP provides a reliable byte-stream channel and UDP provides an unreliable datagram service. TCP and UDP are sometimes called end-to-end protocols.
Running above the transport layer is a range of application protocols such as FTP, TFTP, HTTP, SMTP, Telnet, etc.

Note the difference between application programs and application-layer protocols: there are many WWW browsers (Safari, Firefox, IE, Opera, Lynx, etc.) and a similarly large number of webservers. The reason all of them can interwork is that they all conform to the HTTP application-layer protocol.

The Internet architecture does not imply strict layering. An application is free to bypass the defined transport-layer protocols and to directly use IP or any of the underlying networks. In fact, programmers are free to define new channel abstractions.

Hourglass shape: the architecture is wide at the top and bottom but narrow at the waist. IP serves as the focal point of the architecture; it is the common method of exchanging packets among a wide collection of networks.

Application Example:

To clarify these concepts, a simplified model of the LAN of NUST-SEECS is considered. The network's domain name is seecs.edu.pk.

Why Domain Name:

Domain names are used because users can memorize names (seecs.edu.pk) more easily than numbers (202.125.157.196).

There are multiple levels in a domain name; e.g., seecs.edu.pk has 3 levels:
Level 1, the top level, is .pk
Level 2 is .edu
Level 3 is seecs

There are different approaches for keeping the levels of the domain name space. One approach is to keep everything in one file, but this is not scalable. Instead a decentralized approach is followed, in which the responsibility and load are not carried by one central node; the functionality is divided among all the nodes.

Let's assume that a student (on the seecs.edu.pk LAN) wishes to access the NUST-SEECS website hosted at www.seecs.edu.pk.
To access the website we have to use a browser; there are many browsers available, e.g., Internet Explorer, Firefox, Opera, etc.
The student enters http://www.seecs.edu.pk
HTTP is the hypertext transfer protocol.

The HTTP request sent by the student PC (the machine pc.seecs.edu.pk) to the webserver (the machine www.seecs.edu.pk) would be something like "GET / HTTP/1.1".
In this case the base directory's default document will be requested.

Packet so far: GET / HTTP/1.1

Certain questions should be answered:

1. How do we send this request to the webserver?

To communicate, the IP address must be known, but how do we get it? The domain name is translated to an IP address, and this is done by another server called the DNS (Domain Name Service) server.
DNS maps domain names to IP addresses. The student PC asks the DNS server for the IP address of www.seecs.edu.pk, and the DNS server replies with the IP (202.125.157.196).
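A minimal sketch of this lookup from a program, using Python's standard socket library (the hostname is the lecture's example; the address actually returned depends on the live DNS records, so treat the printed value as illustrative):

import socket

# Ask the local resolver, which in turn queries DNS, for the host's IP address.
hostname = "www.seecs.edu.pk"
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")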

2. Which application at the webserver must process this packet?

In TCP/IP, each well-known application is identified using a port; every application has its own port. A port is an abstraction of a process: a process is represented by a number.
The port of DNS is 53, of HTTP is 80, and of SMTP is 25.
In our example, the HTTP server application (port 80) would process the packet.

Packet so far:
Source port | Destination port | Data
>1024       | 80               | GET / HTTP/1.1

The destination IP address (found through DNS) is 202.125.157.196. Let's assume the source IP address is 202.125.157.150.

Packet so far:
Source IP       | Destination IP  | Source port | Destination port | Data
202.125.157.150 | 202.125.157.196 | >1024       | 80               | GET / HTTP/1.1
3. How do we send the created packet to the webserver?

To communicate with any host, its physical address must be known. This physical address is called the MAC address, and all actual transmission on the local link uses MAC addresses. So the IP address must now be converted to a MAC address. The Address Resolution Protocol (ARP) is used to map an IP address to a MAC address. An important point to keep in mind is that an IP address may change, but the MAC address is tied to the physical hardware and does not change. To get a MAC address we do not use a dedicated server, as we do to get an IP address; the source simply broadcasts a request, referring to the IP address, and the destination replies with its MAC address.
There is a difference in notation between IP and MAC addresses: an IP address is 32 bits in dotted decimal notation (202.125.157.196), while a MAC address is written in hexadecimal notation (12:34:aa:bb:cc:dd).
Now that the MAC addresses are known, the communication can take place. The destination MAC address is 12:34:aa:bb:cc:dd, and the source MAC address (let's assume) is 23:34:aa:bb:cc:dd.

IP packet containing the data:
Source IP       | Destination IP  | Source port | Destination port | Data
202.125.157.150 | 202.125.157.196 | >1024       | 80               | GET / HTTP/1.1

MAC frame:
Source MAC | Destination MAC | Payload (the IP packet above) | FCS

FCS: frame check sequence, used for error checking.
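A minimal sketch of this encapsulation in Python, building the layers of the example as nested dictionaries (the field values are the ones assumed in the lecture; a real protocol stack produces binary headers, and the FCS shown is a placeholder, not the real Ethernet CRC):

# Build the example packet layer by layer, starting from the application data.
http_request = "GET / HTTP/1.1"

tcp_segment = {
    "source_port": 1025,        # any client port above 1024
    "destination_port": 80,     # HTTP
    "data": http_request,
}

ip_packet = {
    "source_ip": "202.125.157.150",
    "destination_ip": "202.125.157.196",   # found via DNS
    "payload": tcp_segment,
}

mac_frame = {
    "source_mac": "23:34:aa:bb:cc:dd",
    "destination_mac": "12:34:aa:bb:cc:dd",  # found via ARP
    "payload": ip_packet,
    "fcs": hash(str(ip_packet)) & 0xFFFFFFFF,  # placeholder check value, not a real CRC
}

print(mac_frame)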


Lecture 4
Part #1
Topology, Addressing and Routing

TYPES OF ADDRESSES:
There are two types of addresses:

a) Logical:
The logical address is the IP (Internet Protocol) address, also called a virtual address, and it looks like this: 216.109.112.135. An IP address can change, and often does when you have a high-speed Internet connection. It is hierarchical, i.e., it has a network part and a host part.
IP version 4 address: 32 bits
IP version 6 address: 128 bits

b) Physical:
The physical address is like a mailing address: it is real. It is also called the MAC address (Media Access Control address) and looks like this: 00-56-7E-4A-DD-8D, i.e., it is in hexadecimal form. It is different for every technology; e.g., Ethernet uses different physical addresses than other available technologies.
The communicating applications (source/destination applications) must also be identifiable.

SOCKET:
In computer networking, an Internet socket (or commonly, a network socket or simply socket) is the endpoint of a bidirectional communication flow across an Internet Protocol-based computer network such as the Internet. Internet sockets are an application programming interface (API) in an operating system, used for inter-process communication. Internet sockets constitute a mechanism for delivering incoming data packets to the appropriate application process or thread, based on a combination of local and remote IP addresses and port numbers. Each socket is mapped by the operating system to a communicating application process or thread.
A socket address is the combination of
a) an IP address (the location of the computer) and
b) a port (which is mapped to the application program's process)
into a single identity, much like one end of a telephone connection is the combination of a phone number and a particular extension line at that location.
An Internet socket is characterized by a unique combination of the following:
• Protocol (TCP, UDP or raw IP). Consequently, TCP port 53 is not the same socket as UDP port 53. (About 90% of net traffic is carried by TCP.)
• Local socket address (local IP address and port number)
• Remote socket address (only for established TCP sockets; this is necessary since a TCP server may serve several clients concurrently. The server creates one socket for each client, and these sockets share the same local socket address.)
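A minimal sketch showing a socket's local (IP address, port) pair in Python; binding to port 0 asks the OS to pick any free port, so the printed number will differ from run to run:

import socket

# Create a TCP socket and bind it to an OS-chosen free port on the loopback address.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))

# The local socket address is the (IP address, port number) pair.
print("local socket address:", sock.getsockname())
sock.close()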
PHYSICAL ADDRESSING (MAC):
Here:
IP Address: It identifies a host and the network it belongs to. It is 32 bits long. (Processes are identified through ports.)
Subnet Mask: We use it to identify the network and host parts of an address. It is composed of a network part and a host part. For example, in 255.255.255.0 the network part of the mask is 255.255.255 and the host part is 0.
EXAMPLE 2: with a /16 mask (255.255.0.0), the network part of an address might be 10010110.11010111 and the host part 00010001.00001001.
· For the IP address 192.168.15.1/24 of the Ethernet interface, making the host part zero gives the network part 192.168.15, which is 24 bits long.
· In the subnet mask 255.255.128.0, the network part is 17 bits long (the first 17 bits of the mask are 1s).

Default gateway:
A default gateway is a network setting on a computer that specifies the IP address of the
computer or router that network traffic should be sent to when the traffic is not on the same
subnet as the sending computer.
LOGICAL ADDRESSING (IP):

IP Address:
192.168.15.2 (decimal)
11000000 10101000 00001111 00000010 (binary)

Subnet Mask:
255.255.255.0 (decimal)
11111111 11111111 11111111 00000000 (binary)

The first three octets are the network part and the last octet is the host part.
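A minimal sketch of extracting the network and host parts with Python's standard ipaddress module, using the address and mask from the example above:

import ipaddress

# The example above: 192.168.15.2 with mask 255.255.255.0 (a /24 prefix).
iface = ipaddress.ip_interface("192.168.15.2/255.255.255.0")
network = iface.network

print("network part:", network.network_address)                        # 192.168.15.0
print("netmask     :", network.netmask)                                # 255.255.255.0
print("host part   :", int(iface.ip) - int(network.network_address))   # 2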
NETWORKING LAN:
When an application running on, say, host #1 is to communicate with a process running on host #2 several networks away, the transport layer appends its own header to the data. At this level the data is called a segment. The network layer then attaches its own header, indicating the destination and source IPs. ARP then converts the next-hop IP address into the physical MAC address, and the data is forwarded to the data link layer, which appends the MAC address of the next hop. Each network along the way detaches the MAC frame and attaches the MAC address of the next relevant hop, until the data reaches the destination network, where the IP address of the receiving host takes the data to the required host.

At the receiving host, the network layer detaches the IP header. The transport layer header is used to identify the process, and then this header too is removed, until only the data remains, which is what we required at the destination host.

INTERNETWORKING EXAMPLE:
In this figure we have three technologies, i.e., ETH, PPP and FDDI, with R1, R2 and R3 acting as routers between the source and the destination. The required job is to transmit the packets from the source to the destination. In the first block an ETH header sits around the IP packet; at R1 the ETH header is removed and, guided by the IP header, an FDDI header is attached. Similarly, at R2 the FDDI header is removed and a PPP header is attached, and at R3 the PPP header is removed and an ETH header is attached. At the destination, i.e., H8, the ETH header around the IP packet is restored and we get the required packets.

The same can be understood from the following figure:


PART#2

NETWORK PERFORMANCE ANALYSIS:

BANDWIDTH AND LATENCY:
Network performance is measured in two fundamental ways:
A) bandwidth (also called throughput) and
B) latency (also called delay).
The bandwidth of a network is given by the number of bits that can be transmitted over the network in a certain period of time. For example, a network might have a bandwidth of 10 million bits/second (Mbps), meaning that it is able to deliver 10 million bits every second. It is sometimes useful to think of bandwidth in terms of how long it takes to transmit each bit of data. On a 10-Mbps network, for example, it takes 0.1 microsecond (μs) to transmit each bit.

Latency = Propagation + Transmit + Queue
Propagation = Distance/SpeedOfLight
Transmit = Size/Bandwidth
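A minimal sketch of these formulas in Python (the distance, message size and bandwidth are arbitrary example values; the formula uses the speed of light, although signals in real copper or fiber propagate somewhat more slowly):

SPEED_OF_LIGHT = 3.0e8  # metres per second, as in the Propagation formula above

def latency(distance_m, size_bits, bandwidth_bps, queue_s=0.0):
    """Latency = Propagation + Transmit + Queue."""
    propagation = distance_m / SPEED_OF_LIGHT
    transmit = size_bits / bandwidth_bps
    return propagation + transmit + queue_s

# Example: a 1 KB message sent 1000 km over a 10 Mbps link, with no queueing.
print(f"{latency(1_000_000, 8 * 1024, 10e6) * 1000:.3f} ms")  # about 4.2 ms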

Bandwidth and Throughput:
Bandwidth is literally a measure of the width of a frequency band. For example, a voice-grade telephone line supports a frequency band ranging from 300 to 3300 Hz; it is said to have a bandwidth of 3300 Hz − 300 Hz = 3000 Hz. If we see the word "bandwidth" used in a situation in which it is being measured in hertz, then it probably refers to the range of signals that can be accommodated. When we talk about the bandwidth of a communication link, we normally refer to the number of bits per second that can be transmitted on the link. We might say that the bandwidth of an Ethernet is 10 Mbps. A useful distinction can be made, however, between the bandwidth that is available on the link and the number of bits per second that we can actually transmit over the link in practice.

We use "throughput" to refer to the measured performance of a system. Thus, because of various inefficiencies of implementation, a pair of nodes connected by a link with a bandwidth of 10 Mbps might achieve a throughput of only 2 Mbps. This would mean that an application on one host could send data to the other host at 2 Mbps.

Bandwidth and latency combine to define the performance characteristics of a given link or channel. Their relative importance, however, depends on the application. For some applications, latency dominates bandwidth. For example, a client that sends a 1-byte message to a server and receives a 1-byte message in return is latency bound. Assuming that no serious computation is involved in preparing the response, the application will perform much differently on a transcontinental channel with a 100-ms RTT than it will on an across-the-room channel with a 1-ms RTT. Whether the channel is 1 Mbps or 100 Mbps is relatively insignificant, however, since the former implies that the time to transmit a byte (Transmit) is 8 μs and the latter implies Transmit = 0.08 μs.

In contrast, consider a digital library program that is being asked to fetch a 25-megabyte (MB) image: the more bandwidth that is available, the faster it will be able to return the image to the user. Here, the bandwidth of the channel dominates performance. To see this, suppose that the channel has a bandwidth of 10 Mbps. It will take 20 seconds to transmit the image, making it relatively unimportant whether the image is on the other side of a 1-ms channel or a 100-ms channel; the difference between a 20.001-second response time and a 20.1-second response time is negligible.
The figure below gives a sense of how latency or bandwidth can dominate performance in different circumstances. The graph shows how long it takes to move objects of various sizes (1 byte, 2 KB, 1 MB) across networks with RTTs ranging from 1 to 100 ms and link speeds of either 1.5 or 10 Mbps. We use logarithmic scales to show relative performance. For a 1-byte object (say, a keystroke), latency remains almost exactly equal to the RTT, so that you cannot distinguish between a 1.5-Mbps network and a 10-Mbps network. For a 2-KB object (say, an email message), the link speed makes quite a difference on a 1-ms-RTT network but a negligible difference on a 100-ms-RTT network. And for a 1-MB object (say, a digital image), the RTT makes no difference: it is the link speed that dominates performance across the full range of RTTs.
Delay × Bandwidth Product:
If we think of a channel between a pair of processes as a hollow pipe, where the latency corresponds to the length of the pipe and the bandwidth gives the diameter of the pipe, then the delay × bandwidth product gives the volume of the pipe, that is, the number of bits it holds. Said another way, if latency (measured in time) corresponds to the length of the pipe, then given the width of each bit (also measured in time), we can calculate how many bits fit in the pipe. For example, a transcontinental channel with a one-way latency of 50 ms and a bandwidth of 45 Mbps is able to hold 50 × 10^-3 seconds × 45 × 10^6 bits/second = 2.25 × 10^6 bits, or approximately 280 KB of data. In other words, this example channel (pipe) holds as many bytes as the memory of a personal computer from the early 1980s could hold.
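A minimal sketch of this calculation in Python, reproducing the example numbers above:

# Delay x bandwidth product for the example channel above.
one_way_latency_s = 50e-3    # 50 ms
bandwidth_bps = 45e6         # 45 Mbps

bits_in_flight = one_way_latency_s * bandwidth_bps
print(f"{bits_in_flight:.2e} bits = {bits_in_flight / 8:.0f} bytes (roughly 280 KB)")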

The delay × bandwidth product is important to know when constructing high-performance networks, because it corresponds to how many bits the sender must transmit before the first bit arrives at the receiver. If the sender is expecting the receiver to somehow signal that bits are starting to arrive, and it takes another channel latency for this signal to propagate back to the sender (i.e., we are interested in the channel's RTT rather than just its one-way latency), then the sender can send up to two delay × bandwidth's worth of data before hearing from the receiver that all is well. The bits in the pipe are said to be "in flight," which means that if the receiver tells the sender to stop transmitting, it might still receive up to a delay × bandwidth's worth of data before the sender manages to respond. In our example above, that amount corresponds to 5.5 × 10^6 bits (671 KB) of data. On the other hand, if the sender does not fill the pipe, that is, does not send a whole delay × bandwidth product's worth of data before it stops to wait for a signal, it will not fully utilize the network. Note that most of the time we are interested in the RTT scenario, which we simply refer to as the delay × bandwidth product, without explicitly saying that this product is multiplied by two. Again, whether the "delay" in "delay × bandwidth" means one-way latency or RTT is made clear by the context.

High-Speed Networks:
High-speed networks bring a dramatic change in the bandwidth available to applications, but in many respects their impact on how we think about networking comes from what does not change as bandwidth increases: the speed of light. "High speed" does not mean that latency improves at the same rate as bandwidth; the transcontinental RTT of a 1-Gbps link is the same 100 ms as it is for a 1-Mbps link.

To appreciate the significance of ever-increasing bandwidth in the face of fixed latency, consider what is required to transmit a 1-MB file over a 1-Mbps network versus over a 1-Gbps network, both of which have an RTT of 100 ms. In the case of the 1-Mbps network, it takes 80 round-trip times to transmit the file; during each RTT, 1.25% of the file is sent. In contrast, the same 1-MB file doesn't even come close to filling one RTT's worth of the 1-Gbps link, which has a delay × bandwidth product of 12.5 MB.

The figure illustrates the difference between the two networks. In effect, the 1-MB file looks like a stream of data that needs to be transmitted across a 1-Mbps network, while it looks like a single packet on a 1-Gbps network. To help drive this point home, consider that a 1-MB file is to a 1-Gbps network what a 1-KB packet is to a 1-Mbps network.

Perhaps the best way to understand the relationship between throughput and latency is to return to basics. The effective end-to-end throughput that can be achieved over a network is given by the simple relationship

Throughput = TransferSize/TransferTime

where TransferTime includes not only the elements of one-way latency identified earlier in this section, but also any additional time spent requesting or setting up the transfer. Generally, we represent this relationship as

TransferTime = RTT + (1/Bandwidth) × TransferSize

We use RTT in this calculation to account for a request message being sent across the network and the data being sent back. For example, consider a situation where a user wants to fetch a 1-MB file across a 1-Gbps network with a round-trip time of 100 ms. The TransferTime includes both the transmit time for 1 MB (1/1 Gbps × 1 MB = 8 ms) and the 100-ms RTT, for a total transfer time of 108 ms. This means that the effective throughput will be

1 MB/108 ms = 74.1 Mbps

not 1 Gbps. Clearly, transferring a larger amount of data will help improve the effective throughput; in the limit, an infinitely large transfer size will cause the effective throughput to approach the network bandwidth. On the other hand, having to endure more than one RTT, for example to retransmit missing packets, will hurt the effective throughput for any transfer of finite size and will be most noticeable for small transfers.
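A minimal sketch of this calculation in Python, reproducing the 1-MB-file-over-1-Gbps example:

def effective_throughput(transfer_size_bits, rtt_s, bandwidth_bps):
    """Throughput = TransferSize / (RTT + TransferSize / Bandwidth)."""
    transfer_time = rtt_s + transfer_size_bits / bandwidth_bps
    return transfer_size_bits / transfer_time

size = 8e6        # 1 MB expressed in bits
rtt = 0.100       # 100 ms round-trip time
bandwidth = 1e9   # 1 Gbps

print(f"{effective_throughput(size, rtt, bandwidth) / 1e6:.1f} Mbps")  # about 74.1 Mbps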

Application Performance Needs:

So far we have talked in terms of what a given link or channel will support. The unstated assumption has been that application programs have simple needs: they want as much bandwidth as the network can provide. This is certainly true of the aforementioned digital library program that is retrieving a 25-MB image; the more bandwidth that is available, the faster the program will be able to return the image to the user.

However, some applications are able to state an upper limit on how much bandwidth they need. Video applications are a prime example. Suppose you want to stream a video image that is one-quarter the size of a standard TV image; that is, it has a resolution of 352 by 240 pixels. If each pixel is represented by 24 bits of information, as would be the case for 24-bit color, then the size of each frame would be

(352 × 240 × 24)/8 = 247.5 KB

If the application needs to support a frame rate of 30 frames per second, then it might request a throughput rate of 75 Mbps. The ability of the network to provide more bandwidth is of no interest to such an application, because it has only so much data to transmit in a given period of time.
Unfortunately, the situation is not as simple as this example suggests. Because
the difference between any two adjacent frames in a video stream is often small, it is
possible to compress the video by transmitting only the differences between adjacent
frames. This compressed video does not flow at a constant rate, but varies with time
according to factors such as the amount of action and detail in the picture and the
compression algorithm being used. Therefore, it is possible to say what the average bandwidth
requirement will be, but the instantaneous rate may be more or less.
Lecture 6
by Jawad Riffat Paracha & Imran Tahir

TOPIC 2
DIRECT LINK NETWORK

In this lecture the following questions were the main concerns:

1. How do we build a direct link?
2. How do we start the process of building a network between two nodes?
3. What is Ethernet?

The simplest network possible is one in which all the hosts are directly connected by some physical medium. This may be a wire or a fiber, and it may cover a small area (e.g., an office building) or a wide area (e.g., transcontinental).
A LAN is a physical link between nodes. There is a network adaptor at every host; the host receives data through this network adaptor, which copies the data from the link into memory, from where the PC can access it.
NODES
Nodes are often general-purpose computers, like a desktop workstation, a multiprocessor, or a PC. For our purposes, let's assume it is a workstation-class machine. This workstation can serve as a host that users run application programs on, it might be used inside the network as a switch that forwards messages from one link to another, or it might be configured as a router that forwards internet packets from one network to another. In some cases, a network node (most commonly a switch or router inside the network, rather than a host) is implemented by special-purpose hardware. This is usually done for reasons of performance and cost: it is generally possible to build custom hardware that performs a particular function faster and more cheaply than a general-purpose processor can perform it. When this happens, we will first describe the basic function being performed by the node as though it were implemented in software on a general-purpose workstation, and then explain why and how this functionality might instead be implemented by special hardware.
LINKS
Network links are implemented using the following media:
1. COPPER TWISTED PAIR
2. COAXIAL CABLE
3. OPTICAL FIBRE
4. WIRELESS
5. SNEAKERNET (physically transporting storage media)

FUNDAMENTAL DATA LINK PROBLEMS

ENCODING
The first step in turning nodes and links into usable building blocks is to understand how to connect them in such a way that bits can be transmitted from one node to the other. The task, therefore, is to encode the binary data that the source node wants to send into the signals that the links are able to carry, and then to decode the signal back into the corresponding binary data at the receiving node. This is the job of a network adaptor, a piece of hardware that connects a node to a link. The network adaptor contains a signalling component that actually encodes bits into signals at the sending node and decodes signals into bits at the receiving node.
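As a concrete illustration of encoding, the sketch below shows one simple scheme, non-return to zero (NRZ), in Python: a 1 bit is sent as a high signal level and a 0 bit as a low level. NRZ is not named in the lecture; the lists of levels merely stand in for the electrical or optical signals a real adaptor would produce.

HIGH, LOW = 1, 0  # stand-ins for the two signal levels on the link

def nrz_encode(bits):
    """Map each data bit to a signal level: 1 -> high, 0 -> low."""
    return [HIGH if b == 1 else LOW for b in bits]

def nrz_decode(levels):
    """Map each received signal level back to a data bit."""
    return [1 if level == HIGH else 0 for level in levels]

data = [0, 1, 1, 0, 1, 0, 0, 1]
signal = nrz_encode(data)
assert nrz_decode(signal) == data
print(signal)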

FRAMING
Blocks of data (called frames at this level), not bit streams, are exchanged between
nodes. It is the network adaptor that enables the nodes to exchange frames. When
node A wishes to transmit a frame to node B, it tells its adaptor to transmit a frame
from the node’s memory. This results in a sequence of bits being sent over the link.
The adaptor on node B then collects together the sequence of bits arriving on the link
and deposits the corresponding frame in B’s memory.

ERROR DETECTION
Bit errors are sometimes introduced into frames. This happens, for example, because of electrical interference or thermal noise. Although errors are rare, especially on optical links, some mechanism is needed to detect these errors so that corrective action can be taken. Otherwise, the end user is left wondering why the C program that successfully compiled just a moment ago now suddenly has a syntax error in it, when all that happened in the interim is that it was copied across a network file system.

Detecting errors is only one part of the problem. The other part is correcting errors once they are detected. There are two basic approaches that can be taken when the recipient of a message detects an error. One is to notify the sender that the message was corrupted so that the sender can retransmit a copy of the message. If bit errors are rare, then in all probability the retransmitted copy will be error-free. Alternatively, there are some types of error detection algorithms that allow the recipient to reconstruct the correct message even after it has been corrupted; such algorithms rely on error-correcting codes.
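As a concrete illustration of detection, the sketch below implements a single even-parity bit appended to a block of bits: it detects any single-bit error (in fact any odd number of flipped bits) but cannot tell which bit flipped. This is an illustrative scheme, not one named in the lecture.

def add_even_parity(bits):
    """Append one parity bit so that the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    """True if the received block still contains an even number of 1s."""
    return sum(bits_with_parity) % 2 == 0

frame = add_even_parity([1, 0, 1, 1, 0, 0, 1])
assert parity_ok(frame)

corrupted = frame.copy()
corrupted[2] ^= 1                 # flip a single bit "in transit"
assert not parity_ok(corrupted)   # the single-bit error is detected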

RELIABLE DELIVERY
Even when error-correcting codes are used, some errors will be too severe to be corrected. As a result, some corrupt frames must be discarded. A link-level protocol that wants to deliver frames reliably must somehow recover from these discarded (lost) frames. This is usually accomplished using a combination of two fundamental mechanisms:
ACKNOWLEDGMENTS
TIMEOUTS
An acknowledgment is a small control frame that a protocol sends back to its peer saying that it has received an earlier frame. By control frame we mean a header without any data, although a protocol can piggyback an ACK on a data frame it just happens to be sending in the opposite direction. The receipt of an acknowledgment indicates to the sender of the original frame that its frame was successfully delivered. If the sender does not receive an acknowledgment after a reasonable amount of time, then it retransmits the original frame. This action of waiting a reasonable amount of time is called a timeout.
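A minimal sketch of the resulting stop-and-wait behaviour in Python, with a function that simulates an unreliable link (the loss probability, retry limit and frame contents are invented for illustration; a real protocol runs over an actual link and uses timers rather than a simple retry loop):

import random

random.seed(1)  # fixed seed so the illustration is repeatable

def unreliable_send(frame, loss_probability=0.3):
    """Simulated link: returns an ACK unless the frame or its ACK is 'lost'."""
    if random.random() < loss_probability:
        return None            # nothing comes back, so the sender will time out
    return ("ACK", frame["seq"])

def send_reliably(frame, max_retries=10):
    """Stop-and-wait: transmit, wait for the ACK, retransmit on timeout."""
    for attempt in range(1, max_retries + 1):
        ack = unreliable_send(frame)
        if ack == ("ACK", frame["seq"]):
            print(f"frame {frame['seq']} acknowledged after {attempt} attempt(s)")
            return True
        print(f"frame {frame['seq']}: timeout, retransmitting")
    return False

for seq in range(3):
    send_reliably({"seq": seq, "data": b"payload"})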

ACCESS MEDIATION
This final problem is not relevant to point-to-point links; it concerns multiple-access links. The issue is how to mediate access to a shared link so that all nodes eventually get a chance to transmit their data. For this we see different media access protocols, i.e., Ethernet, token ring and several wireless protocols.
CHANNEL PARTITIONING

The channel is partitioned among the nodes by frequency, time or code multiplexing. This works well when everyone wants to talk, but it is inefficient at light load and is not, overall, a good approach.
RANDOM ACCESS
Ethernet uses random access. This protocol is notable for its distributed nature, but it is inefficient at high load because the collision rate increases. It is very easy to maintain, there is no central device whose failure brings the network down, and it is very inexpensive.
TOKEN RING
A ring network consists of a set of nodes connected in a ring. A token, which consists of a special sequence of bits, circulates around the ring and visits every node. Each node receives it, saves a copy, and then forwards it to the next node; when it comes back around to the sending node, that node stops forwarding it. Nodes are serviced in a round-robin pattern; if a node does not want to transmit, it simply forwards the token to the next node. Electromechanical relays can be introduced between the nodes so that the failure of a node does not stop the circulation. One drawback is that the waiting time increases as the ring grows.
COLLISION
If two nodes want to talk at the same time, randomness comes into the picture: both nodes send a message and a collision occurs. After a collision, a jamming signal is sent so that the senders know, and the packet is not delivered to the receiver.
HUB
A hub is a device that receives a signal and repeats it out to all connected nodes.
