
Computer and Information Systems and

Computer Engineering

Certificate

Unit 04 Introduction to Networking


Lecture Notes

By

Ali A. Abdulla
1.0 The OSI Seven Layer Model
1.1 Introduction
The Open Systems Interconnection (OSI) model is a reference tool for understanding data
communications between any two networked systems. It divides the communications
processes into seven layers. Each layer relies on the services of the layer below it and
performs specific functions in support of the layer above it. The three lowest layers focus on passing
traffic through the network to an end system. The top four layers come into play in the end
system to complete the process.

Upper Layers of the OSI Model


OSI designates the application, presentation, and session layers of the stack as the upper
layers. Generally speaking, software in these layers performs application-specific functions
like data formatting, encryption, and connection management.
Examples of upper layer technologies in the OSI model are HTTP, SSL and NFS.

Lower Layers of the OSI Model


The remaining lower layers of the OSI model provide more primitive network-specific
functions like routing, addressing, and flow control. Examples of lower layer technologies in
the OSI model are TCP, IP, and Ethernet.

The Open Systems Interconnection (OSI) reference model has been an essential element of
computer network design since its ratification in 1984. The OSI is an abstract model of how
network protocols and equipment should communicate and work together (interoperate).

1.2 An Overview of the OSI Model

A networking model offers a generic means to separate computer networking functions into
multiple layers. Each of these layers relies on the layers below it for supporting
capabilities and, in turn, provides support to the layers above it. Such a model of layered functionality
is also called a “protocol stack” or “protocol suite”.

Protocols, or rules, can do their work in either hardware or software or, as with most protocol
stacks, in a combination of the two. The nature of these stacks is that the lower layers do their
work in hardware or firmware (software that runs on specific hardware chips) while the
higher layers work in software.
The Open System Interconnection model is a seven-layer structure that specifies the
requirements for communications between two computers. The ISO (International
Organization for Standardization) standard 7498-1 defined this model. This model allows all
network elements to operate together, no matter who created the protocols and what computer
vendor supports them.

The main benefits of the OSI model include the following:


• Helps users understand the big picture of networking

• Helps users understand how hardware and software elements function together
• Makes troubleshooting easier by separating networks into manageable pieces
• Defines terms that networking professionals can use to compare basic functional
relationships on different networks
• Helps users understand new technologies as they are developed
• Aids in interpreting vendor explanations of product functionality

1.3 Why was it created?


The principles that were applied to arrive at the seven layers are as follows:
• A layer should be created where a different level of abstraction is needed.
• Each layer should perform a well defined function.
• The function of each layer should be chosen with an eye toward defining
internationally standardized protocols.
• The layer boundaries should be chosen to minimize the information flow across
the interfaces.
• The number of layers should be large enough that distinct functions need not be
thrown together in the same layer out of necessity, and small enough that the
architecture does not become unwieldy.
Having a way of categorizing each factor in an internet connection makes it easier for us to
do our jobs as troubleshooters.
We all inherently understand that if the modem is not plugged in, you're not going to be able
to get your e-mail. The OSI model allows us to follow that logic further: for example, if you
can browse the web by IP address but can't reach websites by name, you know that network-
layer connectivity is working and that the problem lies higher up, in name resolution (DNS).
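You can see this distinction directly with a couple of standard-library calls. The following is a minimal sketch in Python; the host name and IP address shown are illustrative placeholders, not part of these notes.

```python
import socket

# Check 1: name resolution - can we turn a name into an address?
try:
    ip = socket.gethostbyname("example.com")   # illustrative test host
    print("DNS OK:", ip)
except socket.gaierror:
    print("Name resolution failed (DNS problem)")

# Check 2: raw IP connectivity - can we reach a known address directly?
try:
    with socket.create_connection(("93.184.216.34", 80), timeout=3):  # illustrative address
        print("IP connectivity OK")
except OSError:
    print("No IP connectivity (problem at or below the network layer)")
```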

1.4 Layer 1 – The Physical Layer

The physical layer defines the electrical, mechanical, procedural, and functional
specifications for activating, maintaining, and deactivating the physical link between
communicating network systems. Physical layer specifications define characteristics such as
voltage levels, timing of voltage changes, physical data rates, maximum transmission
distances, and physical connectors. Physical layer implementations can be categorized as
either LAN or WAN specifications.

Components of the physical layer include:


• Cabling system components
• Adapters that connect media to physical interfaces
• Connector design and pin assignments
• Hub, repeater, and patch panel specifications
• Wireless system components
• Parallel SCSI (Small Computer System Interface)

• Network Interface Card (NIC)

In a LAN environment, Category 5e UTP (Unshielded Twisted Pair) cable is generally used
for the physical layer for individual device connections. Fibre optic cabling is often used for
the physical layer in a vertical or riser backbone link. The IEEE, EIA/TIA, ANSI, and other
similar standards bodies developed standards for this layer.

1.5 Layer 2 – The Data Link Layer

The data link layer provides reliable transit of data across a physical network link. Different
data link layer specifications define different network and protocol characteristics, including
physical addressing, network topology, error notification, sequencing of frames, and flow
control. Physical addressing (as opposed to network addressing) defines how devices are
addressed at the data link layer. Network topology consists of the data link layer
specifications that often define how devices are to be physically connected, such as in a bus
or a ring topology. Error notification alerts upper-layer protocols that a transmission error has
occurred, and the sequencing of data frames reorders frames that are transmitted out of
sequence. Finally, flow control moderates the transmission of data so that the receiving
device is not overwhelmed with more traffic than it can handle at one time.
Layer 2 of the OSI model provides the following functions:
• Allows a device to access the network to send and receive messages
• Offers a physical address so a device’s data can be sent on the network

• Works with a device’s networking software when sending and receiving
messages
• Provides error-detection capability

Common networking components that function at layer 2 include:


• Network interface cards
• Ethernet and Token Ring switches
• Bridges

NICs have a layer 2 or MAC address. A switch uses this address to filter and forward traffic,
helping relieve congestion and collisions on a network segment.
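As a small illustration, Python's standard library can report the local machine's 48-bit MAC address; this is a minimal sketch added for these notes, not part of any particular NIC's documentation.

```python
import uuid

# uuid.getnode() returns the host's 48-bit hardware (MAC) address as an integer
mac = uuid.getnode()

# Render it as the usual six colon-separated octets, e.g. 00:1a:2b:3c:4d:5e
print(":".join(f"{(mac >> shift) & 0xff:02x}" for shift in range(40, -8, -8)))
```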

Bridges and switches function in a similar fashion; however, bridging is normally a software
program on a CPU, while switches use Application-Specific Integrated Circuits (ASICs) to
perform the task in dedicated hardware, which is much faster.
The Institute of Electrical and Electronics Engineers (IEEE) has subdivided the data link
layer into two sublayers: Logical Link Control (LLC) and Media Access Control (MAC).

The Logical Link Control (LLC) sublayer of the data link layer manages communications
between devices over a single link of a network. LLC is defined in the IEEE 802.2
specification and supports both connectionless and connection-oriented services used by
higher-layer protocols. IEEE 802.2 defines a number of fields in data link layer frames that
enable multiple higher-layer protocols to share a single physical data link. The Media Access
Control (MAC) sublayer of the data link layer manages protocol access to the physical
network medium. The IEEE MAC specification defines MAC addresses, which enable
multiple devices to uniquely identify one another at the data link layer.
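To make the MAC framing concrete, the sketch below unpacks the 14-byte Ethernet II header (destination MAC, source MAC, EtherType) from a raw frame. The frame bytes are fabricated purely for illustration.

```python
import struct

# Fabricated frame: broadcast destination, an arbitrary source, EtherType 0x0800
frame = bytes.fromhex("ffffffffffff" "001a2b3c4d5e" "0800") + b"payload..."

# Network byte order: 6-byte dst MAC, 6-byte src MAC, 2-byte EtherType
dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])

as_mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
print("dst:", as_mac(dst))           # ff:ff:ff:ff:ff:ff (broadcast)
print("src:", as_mac(src))           # 00:1a:2b:3c:4d:5e
print(f"type: 0x{ethertype:04x}")    # 0x0800 = IPv4
```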

1.6 Layer 3 – The Network Layer

The network layer defines the network address, which differs from the MAC address. Layer
3, the network layer of the OSI model, provides an end-to-end logical addressing system so
that a packet of data can be routed across several layer 2 networks (Ethernet, Token Ring,
Frame Relay, etc.). Note that network layer addresses can also be referred to as logical
addresses.

Initially, software manufacturers, such as Novell, developed proprietary layer 3 addressing.


However, the networking industry has evolved to the point that it requires a common layer 3
addressing system. The Internet Protocol (IP) addresses make networks easier to both set up
and connect with one another. The Internet uses IP addressing to provide connectivity to
millions of networks around the world.

To make it easier to manage the network and control the flow of packets, many organizations
separate their network layer addressing into smaller parts known as subnets. Routers use the
network or subnet portion of the IP addressing to route traffic between different networks.
Each router must be configured specifically for the networks or subnets that will be
connected to its interfaces.
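The network/subnet split is easy to experiment with using Python's standard ipaddress module; the addresses below are illustrative private-range examples.

```python
import ipaddress

# Divide a /24 network into four /26 subnets
network = ipaddress.ip_network("192.168.1.0/24")
for subnet in network.subnets(new_prefix=26):
    print(subnet, "- usable hosts:", subnet.num_addresses - 2)

# A router forwards on the network portion: which subnet owns this host?
host = ipaddress.ip_address("192.168.1.130")
print(host in ipaddress.ip_network("192.168.1.128/26"))   # True
```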

Routers communicate with one another using routing protocols, such as Routing Information
Protocol (RIP) and Open Shortest Path First (OSPF), to learn of other networks
that are present and to calculate the best way to reach each network based on a variety of

criteria (such as the path with the fewest routers). Routers and other networked systems make
these routing decisions at the network layer.
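A fewest-routers metric of the kind RIP uses can be sketched as a breadth-first search over the router topology. The five-router graph below is invented for illustration; real routing protocols build this picture dynamically by exchanging messages.

```python
from collections import deque

# Invented topology: router -> directly connected neighbours
topology = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def hop_counts(source):
    """Fewest-router-hops from source to every destination (breadth-first search)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in topology[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

print(hop_counts("A"))  # {'A': 0, 'B': 1, 'C': 1, 'D': 2, 'E': 3}
```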

When passing packets between different networks, it may become necessary to adjust their
outbound size to one that is compatible with the layer 2 protocol that is being used. The
network layer accomplishes this via a process known as fragmentation. A router’s network
layer is usually responsible for doing the fragmentation. All reassembly of fragmented
packets happens at the network layer of the final destination system.
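The arithmetic behind fragmentation can be sketched briefly: the IPv4 fragment-offset field counts in 8-byte units, so every fragment except the last must carry a multiple of 8 data bytes. The MTU and header size below are typical values chosen for illustration.

```python
def fragment(payload: bytes, mtu: int = 1500, header: int = 20):
    """Split a payload into (offset_in_8_byte_units, data) fragments."""
    # Data per fragment must fit in the MTU and be a multiple of 8 bytes
    max_data = (mtu - header) // 8 * 8
    return [(start // 8, payload[start:start + max_data])
            for start in range(0, len(payload), max_data)]

# A 4000-byte payload over a 1500-byte MTU link becomes three fragments
for offset, data in fragment(b"x" * 4000):
    print(f"offset={offset:4d}  bytes={len(data)}")
# offset=   0  bytes=1480
# offset= 185  bytes=1480
# offset= 370  bytes=1040
```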

Two of the additional functions of the network layer are diagnostics and the reporting of
logical variations in normal network operation. While the network layer diagnostics may be
initiated by any networked system, the system discovering the variation reports it to the
original sender of the packet that is found to be outside normal network operation.

The variation reporting exception is content validation calculations. If the calculation done by
the receiving system does not match the value sent by the originating system, the receiver
discards the related packet with no report to the sender. Retransmission is left to a higher
layer’s protocol.
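The content validation calculation referred to here is a checksum. Below is a minimal sketch of the classic Internet checksum, a one's-complement sum of 16-bit words in the style of RFC 1071; the sample bytes are arbitrary.

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words (RFC 1071 style)."""
    if len(data) % 2:          # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

# The receiver recomputes the sum; a mismatch means the packet is silently discarded
packet = b"\x45\x00\x00\x54\x00\x00\x40\x00\x40\x01"
print(hex(internet_checksum(packet)))
```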

Some basic security functionality can also be set up by filtering traffic using layer 3
addressing on routers or other similar devices.

1.7 Layer 4 – The Transport Layer

The transport layer accepts data from the session layer and segments the data for transport
across the network. Generally, the transport layer is responsible for making sure that the data
is delivered error-free and in the proper sequence. Flow control generally occurs at the
transport layer.

Flow control manages data transmission between devices so that the transmitting device does
not send more data than the receiving device can process. Multiplexing enables data from
several applications to be transmitted onto a single physical link. Virtual circuits are
established, maintained, and terminated by the transport layer. Error checking involves
creating various mechanisms for detecting transmission errors, while error recovery involves
taking action, such as requesting that data be retransmitted, to resolve any errors that occur.
Layer 4, the transport layer of the OSI model, offers end-to-end communication between end
devices through a network. Depending on the application, the transport layer offers either
reliable, connection-oriented or connectionless, best-effort communications.

Some of the functions offered by the transport layer include:


• Application identification
• Client-side entity identification
• Confirmation that the entire message arrived intact
• Segmentation of data for network transport
• Control of data flow to prevent memory overruns
• Establishment and maintenance of both ends of virtual circuits
• Transmission-error detection
• Realignment of segmented data in the correct order on the receiving side
• Multiplexing or sharing of multiple sessions over a single physical link

The most common transport layer protocols are the connection-oriented Transmission
Control Protocol (TCP) and the connectionless User Datagram Protocol (UDP).
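The difference between the two shows up directly in the sockets API. A minimal sketch follows; the host names, addresses, and ports are illustrative placeholders.

```python
import socket

# Connection-oriented TCP: a handshake establishes a virtual circuit first
with socket.create_connection(("example.com", 80), timeout=5) as tcp:
    tcp.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    print(tcp.recv(200))

# Connectionless UDP: datagrams are simply sent, with no delivery guarantee
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"best-effort datagram", ("198.51.100.1", 9999))  # illustrative peer
udp.close()
```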

1.8 Layer 5 – The Session Layer

The session layer establishes, manages, and terminates communication sessions between
presentation layer entities. Communication sessions consist of service requests and service
responses that occur between applications located in different network devices. These
requests and responses are coordinated by protocols implemented at the session layer.
Layer 5, the session layer, provides various services, including tracking the number of bytes
that each end of the session has acknowledged receiving from the other end of the session.
This session layer allows applications functioning on devices to establish, manage, and
terminate a dialog through a network. Session layer functionality includes:
• Virtual connection between application entities
• Synchronization of data flow
• Creation of dialog units
• Connection parameter negotiations
• Partitioning of services into functional groups
• Acknowledgements of data received during a session
• Retransmission of data if it is not received by a device

Some examples of session-layer implementations include the Zone Information Protocol (ZIP),
the AppleTalk protocol that coordinates the name-binding process, and the Session Control
Protocol (SCP), the DECnet Phase IV session layer protocol.
1.9 Layer 6 – The Presentation Layer

The presentation layer provides a variety of encoding and encryption functions that are
applied to the application layer data. These functions ensure that information sent from the
application layer of one system will be readable by the application layer of another system.
Layer 6, the presentation layer, is responsible for how an application formats the data to be
sent out onto the network. The presentation layer basically allows an application to read (or
understand) the message. Examples of presentation layer functionality include:
• Encryption and decryption of a message for security
• Compression and expansion of a message so that it travels efficiently
• Graphics formatting
• Content translation
• System-specific translation

Common data representation formats, or the use of standard image, sound, and video formats,
enable the interchange of application data between different types of computer systems.
Conversion schemes are used to exchange information with systems by using different text
and data representations, such as EBCDIC and ASCII. Standard data compression schemes
enable data that is compressed at the source device to be properly decompressed at the
destination. Standard data encryption schemes enable data encrypted at the source device to
be properly deciphered at the destination.
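Both kinds of conversion mentioned above are easy to demonstrate: Python ships a cp500 codec for EBCDIC and the zlib library for compression. A minimal sketch:

```python
import zlib

# Character-set translation: the same text in ASCII and in EBCDIC (cp500)
text = "HELLO"
print(text.encode("ascii").hex())   # 48454c4c4f
print(text.encode("cp500").hex())   # c8c5d3d3d6  (EBCDIC code points)

# Compression at the source, decompression at the destination
message = b"the quick brown fox " * 100
squeezed = zlib.compress(message)
assert zlib.decompress(squeezed) == message
print(f"{len(message)} bytes -> {len(squeezed)} bytes")
```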
Presentation layer implementations are not typically associated with a particular protocol
stack. Some well-known standards for video include QuickTime and Motion Picture Experts

Group (MPEG). QuickTime is an Apple Computer specification for video and audio, and
MPEG is a standard for video compression and coding.
Among the well-known graphic image formats are Graphics Interchange Format (GIF), Joint
Photographic Experts Group (JPEG), and Tagged Image File Format (TIFF). GIF is a
standard for compressing and coding graphic images. JPEG is another compression and
coding standard for graphic images, and TIFF is a standard coding format for graphic images.

1.10 Layer 7 – The Application Layer

The application layer interacts with software applications (such as Netscape or Outlook
Express) that implement a communicating component. Such application programs are outside
of the scope of the OSI model, but they translate an end user's typing into a Layer 7 request.
Layer 7, the application layer, provides an interface for the end user operating a device
connected to a network. This layer is what the user sees, in terms of loading an application
(such as a Web browser or e-mail client); that is, the application layer presents the data the user views
while using these applications. Examples of application layer functionality include:
• Support for file transfers
• Ability to print on a network
• Electronic mail
• Electronic messaging
• Browsing the World Wide Web

This layer interacts with software applications that implement a communicating component.
Such application programs fall outside the scope of the OSI model. Application layer
functions typically include identifying communication partners, determining resource
availability, and synchronizing communication.
When identifying communication partners, the application layer determines the identity and
availability of communication partners for an application with data to transmit.
When determining resource availability, the application layer must decide whether sufficient
network resources for the requested communication exist. In synchronizing communication,
all communication between applications requires cooperation that is managed by the
application layer.
Some examples of application layer implementations include Telnet, File Transfer Protocol
(FTP), and Simple Mail Transfer Protocol (SMTP).
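An application-layer exchange is just structured text handed to the layers below for delivery. The sketch below issues a minimal HTTP request over a plain TCP socket; the host is an illustrative placeholder.

```python
import socket

# The application-layer message: a minimal HTTP/1.1 request
request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

# Layers 4 and below (TCP, IP, Ethernet...) carry the message transparently
with socket.create_connection(("example.com", 80)) as conn:
    conn.sendall(request)
    reply = b""
    while chunk := conn.recv(4096):
        reply += chunk

print(reply.split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'
```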

1.11 Troubleshooting using the Seven-Layer Model


The key here is to think of the Internet like a giant Taco Bell seven-layer burrito...just
kidding.
The whole point of the OSI model is to make our jobs easier through classification and
delineation of functions. Ultimately, the easiest way to use the seven-layer model is by
figuring out what the user can do on the Net, then going up one layer and seeing if they can
perform the functions that are supposed to be performed on that layer.
For example:
• Is the router plugged in? What lights are on? If the router is not a) plugged in to
the electrical outlet and b) plugged in to the ISDN jack, the user won't be able to
ping.
• If the user can ping but can't browse the Internet, can the user visit a website by
IP address? If the user's DNS configuration is incorrect, they will obviously not
be able to translate a name to an IP address, and therefore won't be able to get
mail, either.
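These checks can be scripted layer by layer, working from the bottom up. A hedged sketch follows; the gateway address and test host are placeholders for your own network, and the ping flag shown is the Unix form (-c), where Windows uses -n.

```python
import socket
import subprocess

GATEWAY = "192.168.1.1"     # placeholder: your router's address
TEST_HOST = "example.com"   # placeholder: a known-good web site

# Layers 1-3: can we reach the local router at all?
ping = subprocess.run(["ping", "-c", "1", GATEWAY], capture_output=True)
print("gateway reachable" if ping.returncode == 0 else "check cables/router")

# Name resolution: can we turn a name into an IP address?
try:
    ip = socket.gethostbyname(TEST_HOST)
    print(f"DNS OK: {TEST_HOST} -> {ip}")
except socket.gaierror:
    print("DNS broken: check resolver settings")
else:
    # Transport layer: can we open a TCP connection to the web server?
    try:
        with socket.create_connection((ip, 80), timeout=5):
            print("TCP connect OK: browsing should work")
    except OSError:
        print("TCP connect failed: check firewall or server")
```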

2.0 The History of the Internet

2.1 Introduction
The birth of the Internet can be traced to a small government project in the United States of
America in the late 1960s. It was born from the Advanced Research Projects Agency
(ARPA) network called the ARPANET. The ARPANET had several small computers called
Interface Message Processors (IMPs) which were connected to each other through modems
and leased lines that facilitated the exchange of data between different computers via packet
switching. As news spread about the ARPANET, more and more computers got connected to
it, gradually increasing its size and laying the seed for the Internet.
The Internet matured in the 1970s as a result of the TCP/IP architecture, first proposed by Bob
Kahn at BBN and further developed by Kahn and Vint Cerf at Stanford and others throughout
the 1970s. It was adopted by the Defense Department in 1980, replacing the earlier Network
Control Protocol (NCP), and was universally adopted by 1983.

Before 1993, one could connect to another computer through protocols such as telnet and
FTP using a terminal window. The telnet or FTP commands had to be typed in manually at the
prompt - there was no graphical user interface. To gain access to a remote system, one either needed to
know the username and password or one was restricted to only the public directories -
directories that were not protected and were thus open to all. And if you didn't have an idea
of how to locate a file, you had to go through each directory listing and check the file names
(assuming that the file name described its contents)!
The major growth of the Internet came with the development of HTML, the HyperText
Markup Language, and programs (browsers) that could read and display those documents.
This gave rise to the World Wide Web (commonly known as WWW). Nowadays HTML
documents, also called web pages, can contain, in addition to text, images, movie clips,
sound clips, animations and much more.
During its short history, the Internet has grown exponentially. Even at this very moment, as
you are reading this, tons of web pages and web sites are being added to this global virtual web.
With the advent of easy-to-use WYSIWYG (What You See Is What You Get) editors, the
techniques of creating a web site and putting it online have reached the hands of the common
person. People are using the Internet not only for daily tasks such as checking and sending

emails (communication) and searching for information but are also creating their personal
and business web sites or writing their hearts out on a blog.
The Internet is now a global network of networks, which means it consists of many smaller
networks. The number of computers linked on these smaller networks can range from 2-3 in a
small Intranet to thousands of machines in big organizations. No one knows the exact
number of computers connected to the Internet, because this figure keeps changing and is
increasing with each hour.
Tracing back in time, we can divide the history of the Internet (till the present) into three
main parts.
1. FTP: The first stage
2. Gopher: The second stage
3. The World Wide Web: The third stage

2.1.1 File Transfer Protocol - FTP


The FTP (File Transfer Protocol) was, and is still, widely used to transfer files from one
computer to the other. A user typically logs in at an FTP server and downloads or uploads
files. Though FTP allowed for sending and retrieving files from a remote computer, it did not
facilitate browsing. Thus, a lot of time was spent (wasted!) in searching for the required
information. Because of this, a service called Archie was developed to simplify keyword
searching of files located at FTP servers. Nowadays, FTP is mainly used to transfer large amounts
of data (huge files or many small files) from one machine to another. Various FTP clients are now
available and most of them are very simple to use. The File Transfer Protocol still remains a
faster method than the HyperText Transfer Protocol (HTTP) for uploading and downloading
files.

2.1.2 Gopher - Veronica and Jughead


Gopher was a menu-style information browsing and retrieval system. Developed at the
University of Minnesota as a campus-wide information system, Gopher was named after the
University mascot, though some opine that Gopher stands for 'go-for' information. Gopher
overcame many of FTP's shortcomings but as the content increased, navigating the menu
system became arduous. A search facility for Gopher called Veronica was developed which
was similar to Archie for FTP. Jughead, a local search service for Gopher was developed to

facilitate searching of local networks. Due to the lack of multimedia support and its linear
nature, Gopher soon became extinct with the advent of the Web.

2.1.3 The World Wide Web


The World Wide Web came into existence with the introduction of browsers, one of the first
being Mosaic. The browser provided ease of use with graphical display and was able to show
images with text. Hyperlinking between documents broke the linear architecture of Gopher
and increased the complexity of the web. The browser was able to provide the user with a
range of experiences - pictures, multimedia (sound, video) and interactivity. The web also
allowed for the integration of pages with databases that resulted in dynamically generated
content - content that is picked up from the database and integrated into HTML pages or
HTML templates. This prompted many companies to put their wares online resulting in the
explosive growth of the web.
The Internet has been put to a variety of uses. Though it started primarily as a medium to
facilitate data exchange, it is now employed for information search and retrieval,
communication via email, chat and voice, commerce and business processes and much more.

2.2 ARPANET
The ARPANET (Advanced Research Projects Agency Network) created by ARPA of the
United States Department of Defense, was the world's first operating packet switching
network, and the predecessor of today's global Internet. The ARPANET was
developed by the IPTO under the sponsorship of DARPA, and conceptualized and designed
by J.C.R. Licklider, Lawrence Roberts, and others as described herein.
Packet switching, today the dominant basis for both data and voice communication
worldwide, was a new and significant concept in data communications. Previously, data
communication was based on the idea of circuit switching, as in the traditional
telephone circuit, where a dedicated circuit is tied up for the duration of the call and
communication is only achievable with the single party on the other end of the circuit. With
packet switching, a system could use one communication link to communicate with
more than one machine by breaking data apart into datagrams, then assembling these as
packets. Not only could the link be shared (much as a single post box can be used to
send letters to various destinations), but each packet could be sent independently of other

packets. A form of packet switching devised by Lincoln Laboratory scientist Larry Roberts
underlay the design of the ARPANET.
A climate of intense research surrounded the entire history of the ARPANET. The Advanced
Research Projects Agency was conceived with an emphasis towards research, and so was not
oriented exclusively to a military product. The establishment of this agency was part of the
U.S. response to the then Soviet Union's launch of Sputnik in 1957. ARPA was tasked to
explore how to use their investment in computers via Command and Control Research
(CCR). Dr. J.C.R. Licklider was selected to lead this effort. Licklider came to ARPA from
Bolt, Beranek and Newman, (BBN) in Cambridge, MA in October 1962.
The earliest ideas for a computer network intended to allow general communication
between users of different computers were developed by J.C.R. Licklider of Bolt, Beranek
and Newman (BBN) in August 1962, in a series of memos discussing his "Intergalactic
Computer Network" concept. These ideas incorporated nearly everything that the Internet is
today. In October 1963, Licklider was named head of the Behavioral Sciences and Command
and Control programs at ARPA (as it was then named), the United States Department of
Defense Advanced Research Projects Agency. He then convinced Ivan Sutherland and Bob
Taylor that this was a monumental concept, although he left ARPA before any actual
work on his vision was carried out.
ARPA and Taylor remained interested in producing a computer communication network, in
part to permit ARPA-sponsored researchers in various locations to use the computers
which ARPA was supplying, and in part to make new software and other results widely
available quickly. Taylor had three separate terminals in his office, connected to three
different computers which ARPA was funding: one for the SDC Q-32 in Santa Monica, one
for Project Genie at the University of California, Berkeley, and one for Multics at MIT. In
order to work on one of those projects, he had to go over to the terminal connected to that particular computer.
By mid-1968, a comprehensive design had been prepared, and after approval at ARPA, a
Request For Quotation (RFQ) was sent to 140 possible bidders. Most regarded the
proposal as outlandish, and just 12 companies submitted bids, of which merely four were
ranked in the top tier. By the end of the year, the field had been narrowed to two, and
after negotiations, a final selection was made, and the contract was granted to BBN on 7
April 1969.
BBN's proposal followed Taylor's plan closely; it called for the network to be made up of
small computers known as Interface Message Processors (more generally known as IMPs),
what are today called routers. The IMPs at each site executed store-and-forward packet
switching routines, and were linked to each other using modems connected to leased lines
(initially running at 50 kbit/second). Host computers connected to the IMPs via custom
bit-serial interfaces.
BBN initially selected a ruggedized version of Honeywell's DDP-516 computer to make the
first-generation IMP. The 516 was originally designed with 24 kB of core memory
(expandable) and a 16 channel Direct Multiplex Control (DMC) direct memory access
control unit. Custom interfaces were utilized to connect, via the DMC, to each of the hosts
and modems. In addition to the lamps on the front panel of the 516 there was also a specific
set of 24 indicator lights to display the status of the IMP communication channels. Each IMP
could support up to four local hosts and could communicate with up to six remote IMPs
across leased lines.
The small team at BBN (at first just seven people), helped substantially by the detail they had
gone into in creating their answer to the RFQ, quickly developed the first functioning units. The
first protocol development efforts led to DEL (Decode-Encode Language) and NIL (Network
Interchange Language), which were drafted through a series of meetings at this time. These
languages were ahead of their time. The primary intent was to form an on-the-fly description
that would tell the receiving end how to interpret the information that would be transmitted.
Yet the first set of meetings was highly conceptual, as neither ARPA nor the universities had
issued any official charter. The lack of a charter, however, permitted the group to think broadly
and openly. BBN did present particulars as to the host-IMP interface specifications from
the IMP side. This data gave the group some concrete starting points to build from. The
total system, including both hardware and the world's first packet switching software system,
was designed and installed in nine months.
The ARPANET went into operation on August 30, 1969, when BBN delivered the first Interface
Message Processor (IMP) to Leonard Kleinrock's Network Measurements Center at UCLA.
The IMP was made from a Honeywell DDP 516 computer with 12K of memory, configured
to handle the ARPANET network interface. In a renowned piece of Internet lore, on the side
of the crate, a hardware architect at BBN named Ben Barker had scrawled "Do it to it,
Truett", in tribute to the BBN engineer Truett Thach who journeyed with the computer to
UCLA on the plane.
The UCLA team responsible for installing the IMP and producing the first ARPANET node
included graduate students Vinton Cerf, Steve Crocker, Bill Naylor, Jon Postel, and Mike
Wingfield. Wingfield had constructed the hardware interface between the UCLA computer
and the IMP, the machines were linked, and within a few days of delivery the IMP was
communicating with the local NMC host, an SDS Sigma 7 computer operating the SEX
operating system. Messages were successfully exchanged, and the one-computer ARPANET
was born.
An IMP at the Stanford Research Institute (SRI) became the second node in October 1969.
The Culler-Fried Interactive Mathematics centre at the University of California at Santa
Barbara was the third site added to the ARPANET, operating on an IBM 360/75 computer
employing the OS/MVT operating system. The fourth ARPANET site was brought on in
December 1969 at the University of Utah Graphics Department, operating on a DEC PDP-10
computer utilizing the Tenex operating system. These first four sites had been chosen by
Roberts to comprise the initial ARPANET because they were already DARPA sites, and he
thought they had the technical capability necessary to produce the requisite custom interface
to the IMP.
Over the next few years the ARPANET grew rapidly. In July 1975, DARPA reassigned
management and operation of the ARPANET to the Defense Communications Agency, now
DISA. The NSFNET then took over direction of the non-military side of the network during
its first period of very rapid development, including connection to networks such as the
CSNET and EUnet, and the subsequent evolution into the Internet we recognize today.

2.3 TCP/IP Protocol Suite


While the Internet today is acknowledged as a network that is fundamentally shifting social,
political, and economic structures, and in many ways eliminating geographic boundaries, this
potential is simply the realization of predictions that date back nearly fifty years. In a series
of memos going back to August 1962, J.C.R. Licklider of MIT discussed his "Galactic
Network" and how social interactions could be enabled by networking. The Internet surely
supplies such a national and global infrastructure and, in fact, interplanetary Internet
communication has already been seriously studied.
Prior to the 1960s, what little computer communication existed represented basic text and
binary data, transmitted through the most standard telecommunications network technology
of the day; namely, circuit switching, the technology of the telephone networks for almost a
hundred years. Since most data traffic is bursty in nature, circuit switching results in
extremely inefficient use of network resources.

The underlying technology that makes the Internet function is called packet switching, a
data-network design in which all elements (i.e., hosts and switches) work independently,
eliminating single point-of-failure problems. Additionally, network communication resources
appear to be dedicated to individual users but, in reality, statistical multiplexing and a cap on
the size of a transmitted entity result in fast, efficient networks.
The modern Internet started as a U.S. Department of Defense (DoD) funded experiment to
interlink DoD-funded research locations in the U.S. In 1967, the first design for the so-
called ARPANET (named for the DoD's Advanced Research Projects Agency, ARPA) was
released by Larry Roberts. In September 1969, the first node of the ARPANET was
installed at the University of California at Los Angeles (UCLA), followed in succeeding
months by nodes at Stanford Research Institute (SRI), the University of California at Santa Barbara
(UCSB), and the University of Utah. With four nodes by the end of 1969, the ARPANET
crossed the continental U.S. by 1971 and had connections to Europe by 1973.
The first host-to-host communications protocol introduced in the ARPANET at its inception
in 1969 was called the Network Control Protocol (NCP). Over time, however, NCP proved to
be incapable of keeping up with the growing network traffic load. The next problem in the
new ARPAnet was that there was no standardized means of transferring files over the
network. A group of researchers worked for six months and put together a File Transfer
Protocol (FTP) that would determine the format of the data that would move over the
ARPANET. It was finished in July 1972. In 1973, development of a mature system of
internetworking protocols for the ARPANET began. What many don't realize is that in early
variations of this technology, there was just one core protocol: TCP. And in fact, these letters
did not even represent what they do today; they stood for the Transmission Control Program.
In 1974, a new, more robust set of communications protocols was proposed and implemented
throughout the ARPANET, based on the Transmission Control Protocol (TCP) for end-to-end
network communication. However, it seemed like overkill for the intermediate gateways (what
we'd now call routers) to needlessly have to contend with an end-to-end protocol, so in a
discussion between Cerf, Postel and Danny Cohen at ISI in 1978, they chose to split TCP
into two separate protocols: TCP and the Internet Protocol (IP).
The original versions of both TCP and IP that are in general use today were written in
September 1981, though both have had numerous modifications applied to them
(additionally, the IP version 6, or IPv6, specification was released in December 1995). In
1983, the Department of Defense mandated that all of their computer systems would utilize
the TCP/IP protocol suite for long-haul communications, further heightening the range and
importance of the ARPANET and the TCP/IP protocol.
TCP/IP was once just “one of numerous” different sets of protocols that could be employed
to supply network-layer and transport-layer functionality. Today there remain some other
alternatives for internetworking protocol suites, but TCP/IP is the universally-accepted global
standard. Its increase in popularity has been due to a number of significant factors. A few of
these are historical, such as the fact that it is linked to the Internet as described above, while
others are concerned with the features of the protocol suite itself. Primary among these are
the following:
Integrated Addressing System: TCP/IP includes within it a system for identifying and
addressing devices on both small and large networks. The addressing system is organized to
permit devices to be addressed regardless of the lower-level details of how each component
network is constructed. Over time, the mechanisms for addressing in TCP/IP have improved,
to match the needs of maturing networks, particularly the Internet. The addressing system
also has a centralized administration capability for the Internet, to ensure that each device
holds a unique address.
Design For Routing: unlike many network-layer protocols, TCP/IP is specifically
designed to facilitate the routing of data across networks of arbitrary complexity. In truth,
TCP/IP is conceptually concerned more with the connection of networks than with the
connection of individual devices. TCP/IP routers enable data to be passed between devices on
different networks by moving it one step at a time from one network to the next. Several
support protocols are also included in TCP/IP to permit routers to exchange vital
information and manage the efficient flow of information from one network to another.
Underlying Network Independence: TCP/IP runs mainly at layers three and above, and
includes provisions to permit it to run on just about any lower-layer technology, including
LANs, wireless LANs and WANs of various kinds. This flexibility signifies that one can mix
and match an assortment of different underlying networks and link them all using TCP/IP.
Scalability: among the most remarkable features of TCP/IP is how scalable its protocols have
proven to be. Over the decades it has demonstrated its resilience as the Internet has grown
from a small network with only a couple of machines to a huge internetwork with millions of
servers. While some modifications have been needed periodically to sustain this growth,
these alterations have happened as part of the TCP/IP growth process, and the substance of
TCP/IP is essentially the same as it was over twenty-five years ago.
Open Standards and Development Process: The TCP/IP standards are not proprietary; they are
open standards, freely accessible to the public. What is more, the process used to develop
TCP/IP standards is also entirely open. TCP/IP standards and protocols are developed and
expanded using the unique, democratic “RFC” process, with all interested parties welcome to
participate. This guarantees that anyone with an interest in the TCP/IP protocols is afforded a
chance to supply input into their development, and likewise ensures the global acceptance of
the protocol suite.
Universality: Everyone uses TCP/IP because everyone uses it! Not only is TCP/IP the
“underlying language of the Internet”, it's also utilized in most nonpublic networks today.
Even former “competitors” to TCP/IP such as NetWare today use TCP/IP to transmit traffic.
The Internet continues to grow, as do the capabilities and uses of TCP/IP. It is probable that
TCP/IP will remain a large piece of internetworking for the foreseeable future.
Key Concept: While TCP/IP isn't the only internetworking protocol suite, it is unquestionably
the most significant one. Its unparalleled success flows from a variety of factors. These
include its technical features, such as its routing-friendly design and scalability, its historical
function as the protocol suite of the Internet, and its open standards and growth process,
which bring down barriers to acceptance of TCP/IP protocols.
All of the information we send through our current Internet providers is carried using basic
TCP/IP technology. TCP/IP, originally prompted by low-reliability wireless packet radio
networks, has now become the most dependable and widely deployed networking technology
worldwide, and the IPv4 version developed in the 1970s, together with the IP version 6 (IPv6)
specification released in December 1995, remains the standard protocol suite used on the
Internet today.

