
INTERNETWORKING BASICS

What Is an Internetwork?


An internetwork is a collection of individual networks, connected by intermediate networking devices, that functions as a single large network. Internetworking refers to the industry, products, and procedures that meet the challenge of creating and administering internetworks. Figure 1 illustrates some different kinds of network technologies that can be interconnected by routers and other networking devices to create an internetwork.

Figure 1: Different Network Technologies Can Be Connected to Create an Internetwork


History of Internetworking: The first networks were time-sharing networks that used mainframes and attached terminals. Both IBM's Systems Network Architecture (SNA) and Digital's network architecture implemented such environments. Local-area networks (LANs) evolved around the PC revolution. LANs enabled multiple users in a relatively small geographical area to exchange files and messages, as well as access shared resources such as file servers and printers. Wide-area networks (WANs) interconnect LANs with geographically dispersed users to create connectivity. Some of the technologies used for connecting LANs include T1, T3, ATM, ISDN, ADSL, Frame Relay, radio links, and others. New methods of connecting dispersed LANs appear every day. Today, high-speed LANs and switched internetworks are becoming widely used, largely because they operate at very high speeds and support high-bandwidth applications such as multimedia and videoconferencing. Internetworking evolved as a solution to three key problems: isolated LANs, duplication of resources, and a lack of network management. Isolated LANs made electronic communication between different offices or departments impossible. Duplication of resources meant that the same hardware and software had to be supplied to each office or department, as did separate support staff. The lack of network management meant that no centralized method of managing and troubleshooting networks existed.

Internetworking Challenges

Implementing a functional internetwork is no simple task. Many challenges must be faced, especially in the areas of connectivity, reliability, network management, and flexibility. Each area is key to establishing an efficient and effective internetwork. The challenge when connecting various systems is to support communication among disparate technologies. Different sites, for example, may use different types of media operating at varying speeds, or may even include different types of systems that need to communicate. Because companies rely heavily on data communication, internetworks must provide a certain level of reliability. This is an unpredictable world, so many large internetworks include redundancy to allow for communication even when problems occur. Furthermore, network management must provide centralized support and troubleshooting capabilities in an internetwork. Configuration, security, performance, and other issues must be adequately addressed for the internetwork to function smoothly. Security within an internetwork is essential. Many people think of network security from the perspective of protecting the private network from outside attacks. However, it is just as important to protect the network from internal attacks, especially because most security breaches come from inside. Networks must also be secured so that the internal network cannot be used as a tool to attack other external sites. Early in the year 2000, many major web sites were the victims of distributed denial of service (DDoS) attacks. These attacks were possible because a great number of private networks connected to the Internet were not properly secured. These private networks were used as tools for the attackers. Because nothing in this world is stagnant, internetworks must be flexible enough to change with new demands.

Internetworking Models
When networks first came into being, computers could typically communicate only with computers from the same manufacturer. For example, companies ran either a complete DECnet solution or an IBM solution, not both together. In the late 1970s, the OSI (Open Systems Interconnection) model was created by the International Organization for Standardization (ISO) to break this barrier. The OSI model was meant to help vendors create interoperable network devices. Like world peace, it will probably never happen completely, but it is still a great goal. The OSI model is the primary architectural model for networks. It describes how data and network information are communicated from applications on one computer, through the network media, to an application on another computer. The OSI reference model breaks this approach into layers.

The Layered Approach


A reference model is a conceptual blueprint of how communications should take place. It addresses all the processes required for effective communication and divides these processes into logical groupings called layers. When a communication system is designed in this manner, it is known as a layered architecture. Think of it like this: You and some friends want to start a company. One of the first things you would do is sit down and think through what tasks must be done, who will do them, what order they will be done in, and how they relate to each other. Ultimately, you might group these tasks into departments. Let's say you decide to have an order-taking department, an inventory department, and a shipping department. Each of your departments has its own unique tasks, keeping its staff members busy and requiring them to focus on only their own duties. Similarly, software developers can use a reference model to understand computer communication processes and to see what types of functions need to be accomplished on any one layer. If they are developing a protocol for a certain layer, all they need to concern themselves with is that specific layer's functions, not those of any other layer. Another layer and protocol will handle the other functions. The technical term for this idea is binding. The communication processes that are related to each other are bound, or grouped together, at a particular layer.

Advantages of Reference Models


The OSI model is hierarchical, and the same benefits and advantages can apply to any layered model. The primary purpose of all such models, and especially the OSI model, is to allow different vendors' products to interoperate. The benefits of the OSI model include, but are not limited to, the following:

- Dividing complex network operations into more manageable layers
- Allowing one layer to be changed without having to change all layers, which lets application developers specialize in design and development
- Defining a standard interface for plug-and-play, multivendor integration

Open System Interconnection Reference Model


The Open System Interconnection (OSI) reference model describes how information from a software application in one computer moves through a network medium to a software application in another computer. The OSI reference model is a conceptual model composed of seven layers, each specifying particular network functions. The model was developed by the International Organization for Standardization (ISO) in 1984, and it is now considered the primary architectural model for intercomputer communications. The OSI model divides the tasks involved with moving information between networked computers into seven smaller, more manageable task groups. A task or group of tasks is then assigned to each of the seven OSI layers. Each layer is reasonably self-contained so that the tasks assigned to it can be implemented independently. This enables the solutions offered by one layer to be updated without adversely affecting the other layers. The following list details the seven layers of the Open System Interconnection (OSI) reference model:

- Layer 7: Application
- Layer 6: Presentation
- Layer 5: Session
- Layer 4: Transport
- Layer 3: Network
- Layer 2: Data link
- Layer 1: Physical

The OSI Reference Model Contains Seven Independent Layers


Application, Presentation, Session, Transport, Network, Data Link, Physical

Characteristics of the OSI Layers


The seven layers of the OSI reference model can be divided into two categories: upper layers and lower layers. The upper layers of the OSI model deal with application issues and generally are implemented only in software. The highest layer, the application layer, is closest to the end user. Both users and application layer processes interact with software applications that contain a communications component. The term upper layer is sometimes used to refer to any layer above another layer in the OSI model.

The lower layers of the OSI model handle data transport issues. The physical layer and the data link layer are implemented in hardware and software. The lowest layer, the physical layer, is closest to the physical network medium (the network cabling, for example) and is responsible for actually placing information on the medium.

Figure 2 illustrates the division between the upper and lower OSI layers.
Figure 2: Two Sets of Layers Make Up the OSI Layers

Application (upper layers): Application, Presentation, Session
Data Transport (lower layers): Transport, Network, Data Link, Physical

Protocols

The OSI model provides a conceptual framework for communication between computers, but the model itself is not a method of communication. Actual communication is made possible by using communication protocols. In the context of data networking, a protocol is a formal set of rules and conventions that governs how computers exchange information over a network medium. A protocol implements the functions of one or more of the OSI layers. A wide variety of communication protocols exist. Some of these protocols include LAN protocols, WAN protocols, network protocols, and routing protocols. LAN protocols operate at the physical and data link layers of the OSI model and define communication over the various LAN media. WAN protocols operate at the lowest three layers of the OSI model and define communication over the various wide-area media. Routing protocols are network layer protocols that are responsible for exchanging information between routers so that the routers can select the proper path for network traffic. Finally, network protocols are the various upper-layer protocols that exist in a given protocol suite. Many protocols rely on others for operation. For example,
many routing protocols use network protocols to exchange information between routers. This concept of building upon the layers already in existence is the foundation of the OSI model.

OSI Model and Communication between Systems


Information being transferred from a software application in one computer system to a software application in another must pass through the OSI layers. For example, if a software application in System A has information to transmit to a software application in System B, the application program in System A will pass its information to the application layer (Layer 7) of System A. The application layer then passes the information to the presentation layer (Layer 6), which relays the data to the session layer (Layer 5), and so on down to the physical layer (Layer 1). At the physical layer, the information is placed on the physical network medium and is sent across the medium to System B. The physical layer of System B removes the information from the physical medium, and then its physical layer passes the information up to the data link layer (Layer 2), which passes it to the network layer (Layer 3), and so on, until it reaches the application layer (Layer 7) of System B. Finally, the application layer of System B passes the information to the recipient application program to complete the communication process.

Interaction between OSI Model Layers


A given layer in the OSI model generally communicates with three other OSI layers: the layer directly above it, the layer directly below it, and its peer layer in other networked computer systems. The data link layer in System A, for example, communicates with the network layer of System A, the physical layer of System A, and the data link layer in System B. Figure 3 illustrates this example.

Figure 3: OSI Model Layers Communicate with Other Layers


OSI Model Layers and Information Exchange


The seven OSI layers use various forms of control information to communicate with their peer layers in other computer systems. This control information consists of specific requests and instructions that are exchanged between peer OSI layers. Control information typically takes one of two forms: headers and trailers. Headers are prepended to data that has been passed down from upper layers. Trailers are appended to data that has been passed down from upper layers. An OSI layer is not required to attach a header or a trailer to data from upper layers. Headers, trailers, and data are relative concepts, depending on the layer that analyzes the information unit. At the network layer, for example, an information unit consists of a Layer 3 header and data. At the data link layer, however, all the information passed down by the network layer (the Layer 3 header and the data) is treated as data. In other words, the data portion of an information unit at a given OSI layer can potentially contain headers, trailers, and data from all the higher layers. This is known as encapsulation. Figure 4 shows how the header and data from one layer are encapsulated into the header of the next lowest layer.

Figure 4: Headers and Data Can Be Encapsulated During Information Exchange


Information Exchange Process


The information exchange process occurs between peer OSI layers. Each layer in the source system adds control information to data, and each layer in the destination system analyzes and removes the control information from that data. If System A has data from a software application to send to System B, the data is passed to the application layer. The application layer in System A then communicates any control information required by the application layer in System B by prepending a header to the data. The resulting information unit (a header and the data) is passed to the presentation layer, which prepends its own header containing control information intended for the presentation layer in System B. The information unit grows in size as each layer prepends its own header (and, in some cases, a trailer) that contains control information to be used by its peer layer in System B. At the physical layer, the entire information unit is placed onto the network medium. The physical layer in System B receives the information unit and passes it to the data link layer. The data link layer in System B then reads the control information contained in the header prepended by the data link layer in System A. The header is then removed, and the remainder of the information unit is passed to the network layer. Each layer performs the same actions: The layer reads the header from its peer layer, strips it off, and passes the remaining information unit to the next highest layer. After the application layer performs these actions, the data is passed to the recipient software application in System B, in exactly the form in which it was transmitted by the application in System A.
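
The following short Python sketch illustrates the encapsulation idea described above. The layer names and header strings are invented for illustration; real protocol stacks use binary headers defined by each protocol, not text tags.

# Toy model of encapsulation: each layer prepends its own header on the way
# down, and the data link layer also appends a trailer. The receiving side
# strips them in reverse order. Requires Python 3.9+ for removeprefix/removesuffix.

def encapsulate(data: bytes) -> bytes:
    # Walk the payload down the stack, prepending one header per layer.
    for layer in ("APP", "PRES", "SESS", "TRAN", "NETW"):
        data = f"[{layer}-HDR]".encode() + data
    # The data link layer adds a header and a trailer (for example, a frame check sequence).
    return b"[DL-HDR]" + data + b"[DL-TRL]"

def decapsulate(frame: bytes) -> bytes:
    # Reverse the process on the receiving side.
    frame = frame.removeprefix(b"[DL-HDR]").removesuffix(b"[DL-TRL]")
    for layer in ("NETW", "TRAN", "SESS", "PRES", "APP"):
        frame = frame.removeprefix(f"[{layer}-HDR]".encode())
    return frame

payload = b"hello, System B"
wire_unit = encapsulate(payload)
print(wire_unit)                             # the information unit placed on the medium
assert decapsulate(wire_unit) == payload     # arrives exactly as transmitted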

OSI Model Physical Layer


The physical layer defines the electrical, mechanical, procedural, and functional specifications for activating, maintaining, and deactivating the physical link between communicating network systems. Physical layer specifications define characteristics such as voltage levels, timing of voltage changes, physical data rates, maximum transmission distances, and physical connectors. Physical layer implementations can be categorized as either LAN or WAN specifications. Figure 5 illustrates some common LAN and WAN physical layer implementations.

Figure 5: Physical Layer Implementations Can Be LAN or WAN Specifications


OSI Model Data Link Layer


The data link layer provides reliable transit of data across a physical network link. Different data link layer specifications define different network and protocol characteristics, including physical addressing, network topology, error notification, sequencing of frames, and flow control. Physical addressing (as opposed to network addressing) defines how devices are addressed at the data link layer. Network topology consists of the data link layer specifications that often define how devices are to be physically connected, such as in a bus or a ring topology. Error
notification alerts upper-layer protocols that a transmission error has occurred, and the sequencing of data frames reorders frames that are transmitted out of sequence. Finally, flow control moderates the transmission of data so that the receiving device is not overwhelmed with more traffic than it can handle at one time. The Institute of Electrical and Electronics Engineers (IEEE) has subdivided the data link layer into two sublayers: Logical Link Control (LLC) and Media Access Control (MAC). Figure 6 illustrates the IEEE sublayers of the data link layer.

Figure 6: The Data Link Layer Contains Two Sublayers

The Logical Link Control (LLC) sublayer of the data link layer manages communications between devices over a single link of a network. LLC is defined in the IEEE 802.2 specification and supports both connectionless and connection-oriented services used by higher-layer protocols. IEEE 802.2 defines a number of fields in data link layer frames that enable multiple higher-layer protocols to share a single physical data link. The Media Access Control (MAC) sublayer of the data link layer manages protocol access to the physical network medium. The IEEE MAC specification defines MAC addresses, which enable multiple devices to uniquely identify one another at the data link layer.

OSI Model Network Layer


The network layer defines the network address, which differs from the MAC address. Some network layer implementations, such as the Internet Protocol (IP), define network addresses in such a way that route selection can be determined systematically by comparing the source network address with the destination network address and applying the subnet mask. Because this layer defines the logical network layout, routers can use it to determine how to forward packets. For this reason, much of the design and configuration work for internetworks happens at Layer 3, the network layer.
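
As a small illustration of that comparison, the following Python sketch uses the standard ipaddress module to apply a subnet mask and decide whether a destination is local or remote. The addresses and mask are example values only.

import ipaddress

mask        = "255.255.255.0"
source_net  = ipaddress.ip_network(f"172.16.30.56/{mask}", strict=False)   # source address with the mask applied
destination = ipaddress.ip_address("172.16.40.10")

if destination in source_net:
    print("Destination is on the local network; deliver directly.")
else:
    print("Destination is on a remote network; forward to a router.")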

OSI Model Transport Layer


The transport layer accepts data from the session layer and segments the data for transport across the network. Generally, the transport layer is responsible for making sure that the data is delivered error-free and in the proper sequence. Flow control generally occurs at the transport layer. Flow control manages data transmission between devices so that the transmitting device does not send more data than the receiving device can process. Multiplexing enables data from several applications to be transmitted onto a single physical link. Virtual circuits are established, maintained, and terminated by the transport layer. Error checking involves creating various mechanisms for detecting transmission errors, while error recovery involves taking an action, such as requesting that data be retransmitted, to resolve any errors that occur. The transport protocols used on the Internet are TCP and UDP.

Flow Control Basics


Flow control is a function that prevents network congestion by ensuring that transmitting devices do not overwhelm receiving devices with data. A high-speed computer, for example, may generate traffic faster than the network can transfer it, or faster than the destination device can receive and process it. The three commonly used methods for handling network congestion are buffering, transmitting source-quench messages, and windowing.

Buffering is used by network devices to temporarily store bursts of excess data in memory until they can be processed. Occasional data bursts are easily handled by buffering. Excess data bursts can exhaust memory, however, forcing the device to discard any additional datagrams that arrive. Source-quench messages are used by receiving devices to help prevent their buffers from overflowing. The receiving device sends source-quench messages to request that the source reduce its current rate of data transmission. First, the receiving device begins discarding received data due to overflowing buffers. Second, the receiving device begins sending source-quench messages to the transmitting device at the rate of one message for each packet dropped. The source device receives the source-quench messages and lowers the data rate until it stops receiving the messages. Finally, the source device gradually increases the data rate as long as no further source-quench requests are received. Windowing is a flow-control scheme in which the source device requires an acknowledgment from the destination after a certain number of packets have been transmitted. With a window size of 3, the source requires an acknowledgment after sending three packets, as follows. First, the source device sends three packets to the destination device. Then, after receiving the three packets, the destination device sends an acknowledgment to the source. The source receives the acknowledgment and sends three more packets. If the destination does not receive one or more of the packets for some reason, such as overflowing buffers, it does not receive enough packets to send an acknowledgment. The source then retransmits the packets at a reduced transmission rate.
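
The windowing behavior just described can be sketched in a few lines of Python. This is only a toy simulation of the pacing effect of a window of 3; loss detection and retransmission are left out.

def send_with_window(packets, window=3):
    acked = 0
    while acked < len(packets):
        burst = packets[acked:acked + window]
        for p in burst:
            print(f"send {p}")
        # The destination acknowledges the burst; only then does the source continue.
        print(f"recv ACK covering {burst}")
        acked += len(burst)

send_with_window([f"pkt{i}" for i in range(1, 8)])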

Error-Checking Basics
Error-checking schemes determine whether transmitted data has become corrupt or otherwise damaged while traveling from the source to the destination. Error checking is implemented at several of the OSI layers. One common error-checking scheme is the cyclic redundancy check (CRC), which detects and discards corrupted data. Error-correction functions (such as data retransmission) are left to higher-layer protocols. A CRC value is generated by a calculation that is performed at the source device. The destination device compares this value to its own calculation to determine whether errors occurred during transmission. First, the source device performs a predetermined set of calculations over the contents of the packet to be sent. Then, the source places the calculated value in the packet and sends the packet to the destination. The destination performs the same predetermined set of calculations over the contents of the packet and then compares its computed value with that contained in the packet. If the values are equal, the packet is considered valid. If the values are unequal, the packet contains errors and is discarded.
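
The CRC procedure above can be demonstrated with Python's built-in zlib.crc32. Real data link layers use their own CRC polynomials and frame formats; this only shows the check-and-discard principle.

import struct
import zlib

def add_crc(payload: bytes) -> bytes:
    # Sender: append the 32-bit CRC of the payload.
    return payload + struct.pack("!I", zlib.crc32(payload))

def check_crc(packet: bytes) -> bool:
    # Receiver: recompute the CRC and compare it with the received value.
    payload, received = packet[:-4], struct.unpack("!I", packet[-4:])[0]
    return zlib.crc32(payload) == received

good = add_crc(b"some user data")
print(check_crc(good))                 # True: values are equal, the packet is valid

damaged = bytearray(good)
damaged[0] ^= 0xFF                     # flip some bits "in transit"
print(check_crc(bytes(damaged)))       # False: the packet would be discarded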

OSI Model Session Layer


The Session layer is responsible for setting up, managing, and then tearing down sessions between Presentation layer entities. The Session layer also provides dialog control between devices, or nodes. It coordinates communication between systems and serves to organize their communication by offering three different modes: simplex, half-duplex, and full-duplex. The Session layer basically keeps one application's data separate from other applications' data.

OSI Model Presentation Layer


The Presentation layer gets its name from its purpose: It presents data to the Application layer. It is essentially a translator and provides coding and conversion functions. A successful data transfer technique is to adapt the data into a standard format before transmission. Computers are configured to receive this generically formatted data and then convert it back into its native format for actual reading (for example, EBCDIC to ASCII). By providing translation services, the Presentation layer ensures that data transferred from the Application layer of one system can be read by the Application layer of another host. The OSI has protocol standards that define how standard data should be formatted. Tasks like data compression, decompression, encryption, and decryption are associated with this layer. Some Presentation layer standards are involved in multimedia operations. The following serve to direct graphic and visual image presentation:

- PICT: A picture format used by Macintosh and PowerPC programs for transferring QuickDraw graphics.
- TIFF: The Tagged Image File Format, a standard graphics format for high-resolution, bitmapped images.
- JPEG: The photo standards brought to us by the Joint Photographic Experts Group.

Other standards guide movies and sound:

- MIDI: The Musical Instrument Digital Interface, used for digitized music.
- MPEG: The Moving Picture Experts Group's standard for the compression and coding of motion video for CDs is increasingly popular. It provides digital storage and bit rates up to 1.5 Mbps.

OSI Model Application Layer


The application layer is the OSI layer closest to the end user, which means that both the OSI application layer and the user interact directly with the software application. This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. Application layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist. In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer. Some examples of application layer implementations include Telnet, File Transfer Protocol (FTP), and Simple Mail Transfer Protocol (SMTP).

Information Formats
The data and control information that is transmitted through internetworks takes a variety of forms. The terms used to refer to these information formats are not used consistently in the internetworking industry but sometimes are used interchangeably. Common information formats include frames, packets, datagrams, segments, messages, cells, and data units. A frame is an information unit whose source and destination are data link layer entities. A frame is composed of the data link layer header (and possibly a trailer) and upper-layer data. The header and trailer contain control information intended for the data link layer entity in the destination system. Data from upper-layer entities is encapsulated in the data link layer header and trailer. Figure 7 illustrates the basic components of a data link layer frame.

Figure 7: Data from Upper-Layer Entities Makes Up the Data Link Layer Frame

A packet is an information unit whose source and destination are network layer entities. A packet is composed of the network layer header (and possibly a trailer) and upper-layer data. The header and trailer contain control information intended for the network layer entity in the destination system. Data from upper-layer entities is encapsulated in the network layer header and trailer. Figure 8 illustrates the basic components of a network layer packet.

Figure 8: Three Basic Components Make Up a Network Layer Packet

The term datagram usually refers to an information unit whose source and destination are network layer entities that use connectionless network service. The term segment usually refers to an information unit whose source and destination are transport layer entities. A message is an information unit whose source and destination entities exist above the network layer (often at the application layer). A cell is an information unit of a fixed size whose source and destination are data link layer entities. Cells are used in switched environments, such as Asynchronous Transfer Mode (ATM) and Switched Multimegabit Data Service (SMDS) networks. A cell is composed of a header and a payload. The header contains control information intended for the destination data link layer entity and is typically 5 bytes long. The payload contains upper-layer data that is encapsulated in the cell header and is typically 48 bytes long. The length of the header and the payload fields is always the same for each cell. Figure 9 depicts the components of a typical cell.

Figure 9: Two Components Make Up a Typical Cell

Data unit is a generic term that refers to a variety of information units. Some common data units are service data units (SDUs), protocol data units (PDUs), and bridge protocol data units (BPDUs). SDUs are information units from upper-layer protocols that define a service request to a lower-layer protocol. PDU is OSI terminology for a packet. BPDUs are used by the spanning-tree algorithm as hello messages.

Connection-Oriented and Connectionless Network Services


In general, transport protocols can be characterized as being either connection-oriented or connectionless. Connection-oriented services must first establish a connection with the desired service before passing any data. A connectionless service can send the data without any need to establish a connection first. In general, connection-oriented services provide some level of delivery guarantee, whereas connectionless services do not. Connection-oriented service involves three phases: connection establishment, data transfer, and connection termination. During connection establishment, the end nodes may reserve resources for the connection. The end nodes also may negotiate and establish certain criteria for the transfer, such as a window size used in TCP connections. This resource reservation is one of the things exploited in some denial of service (DOS) attacks. An attacking system will send many requests for establishing a connection but then will never complete the connection. The attacked computer is then left with resources allocated for many never-completed connections. Then, when an end node tries to complete an actual connection, there are not enough resources for the valid connection.

The data transfer phase occurs when the actual data is transmitted over the connection. During data transfer, most connection-oriented services will monitor for lost packets and handle resending them. The protocol is generally also responsible for putting the packets in the right sequence before passing the data up the protocol stack. When the transfer of data is complete, the end nodes terminate the connection and release resources reserved for the connection. Connection-oriented network services have more overhead than connectionless ones. Connection-oriented services must negotiate a connection, transfer data, and tear down the connection, whereas a connectionless transfer can simply send the data without the added overhead of creating and tearing down a connection. Each has its place in internetworks.

MAC Addresses
Media Access Control (MAC) addresses are a subset of data link layer addresses. MAC addresses identify network entities in LANs that implement the IEEE MAC addresses of the data link layer. As with most data-link addresses, MAC addresses are unique for each LAN interface. Figure 10 illustrates the relationship between MAC addresses, data-link addresses, and the IEEE sublayers of the data link layer.

Figure 10: MAC Addresses, Data-Link Addresses, and the IEEE Sublayers of the Data Link Layer Are All Related

MAC addresses are 48 bits in length and are expressed as 12 hexadecimal digits. The first 6 hexadecimal digits, which are administered by the IEEE, identify the manufacturer or vendor and thus comprise the Organizationally Unique Identifier (OUI). The last 6 hexadecimal digits comprise the interface serial number, or another value administered by the specific vendor. MAC addresses sometimes are called burned-in addresses (BIAs) because they are burned into read-only memory (ROM) and are copied into random-access memory (RAM) when the interface card initializes. Figure 11 illustrates the MAC address format.

Figure 11: The MAC Address Contains a Unique Format of Hexadecimal Digits
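
The split between the OUI and the vendor-assigned portion can be shown with a few lines of Python. The MAC address used here is just an example value.

def split_mac(mac: str):
    digits = mac.replace(":", "").replace("-", "").replace(".", "").upper()
    if len(digits) != 12:
        raise ValueError("a MAC address is 48 bits, written as 12 hexadecimal digits")
    return digits[:6], digits[6:]

oui, vendor_part = split_mac("00:1A:2B:3C:4D:5E")
print(f"OUI (administered by the IEEE): {oui}")
print(f"Vendor-assigned portion:        {vendor_part}")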


Mapping Addresses
Because internetworks generally use network addresses to route traffic around the network, there is a need to map network addresses to MAC addresses. When the network layer has determined the destination station's network address, it must forward the information over a physical network using a MAC address. Different protocol suites use different methods to perform this mapping, but the most popular is the Address Resolution Protocol (ARP). Different protocol suites use different methods for determining the MAC address of a device. The following three methods are used most often:

- Address Resolution Protocol (ARP) maps network addresses to MAC addresses.
- The Hello protocol enables network devices to learn the MAC addresses of other network devices.
- MAC addresses either are embedded in the network layer address or are generated by an algorithm.

Address Resolution Protocol (ARP) is the method used in the TCP/IP suite. When a network device needs to send data to another device on the same network, it knows the source and destination network addresses for the data transfer. It must somehow map the destination address to a MAC address before forwarding the data. First, the sending station checks its ARP table to see if it has already discovered the destination station's MAC address. If it has not, it sends a broadcast on the network with the destination station's IP address contained in the broadcast. Every station on the network receives the broadcast and compares the embedded IP address to its own. Only the station with the matching IP address replies to the sending station with a packet containing its MAC address. The first station then adds this information to its ARP table for future reference and proceeds to transfer the data. When the destination device lies on a remote network, one beyond a router, the process is the same except that the sending station sends the ARP request for the MAC address of its default gateway. It then forwards the information to that device. The default gateway then forwards the information over whatever networks are necessary to deliver the packet to the network on which the destination device resides. The router on the destination device's network then uses ARP to obtain the MAC address of the actual destination device and delivers the packet. The Hello protocol is a network layer protocol that enables network devices to identify one another and indicate that they are still functional. When a new end system powers up, for example, it broadcasts hello messages onto the network. Devices on the network return hello replies, and hello messages are also sent at specific intervals to indicate that devices are still functional. Network devices can learn the MAC addresses of other devices by examining Hello protocol packets.
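
The ARP decision just described is easy to model. The following Python sketch simulates only the cache-then-broadcast logic; no real ARP frames are sent, and the table entries are invented.

arp_table = {"172.16.30.1": "00:1A:2B:3C:4D:5E"}    # IP address -> MAC address

def resolve(ip: str) -> str:
    if ip in arp_table:
        return arp_table[ip]                         # cache hit: no broadcast needed
    print(f"ARP request (broadcast): who has {ip}?")
    mac = "00:AA:BB:CC:DD:EE"                        # pretend the matching station replied
    arp_table[ip] = mac                              # remember the answer for next time
    return mac

print(resolve("172.16.30.1"))    # answered from the ARP table
print(resolve("172.16.30.99"))   # triggers a simulated broadcast, then is cached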

Network Layer Addresses


A network layer address identifies an entity at the network layer of the OSI model. Network addresses usually exist within a hierarchical address space and sometimes are called virtual or logical addresses. The relationship between a network address and a device is logical and unfixed; it typically is based either on physical network characteristics (the device is on a particular network segment) or on groupings that have no physical basis (the device is part of an AppleTalk zone). End systems require one network layer address for each network layer protocol that they support. (This assumes that the device has only one physical network connection.) Routers and other internetworking devices require one network layer address per physical network connection for each network layer protocol supported. For example, a router with three interfaces, each running AppleTalk, TCP/IP, and OSI, must have three network layer addresses for each interface. The router therefore has nine network layer addresses. Figure 12 illustrates how each network interface must be assigned a network address for each protocol supported.

Figure 12: Each Network Interface Must Be Assigned a Network Address for Each Protocol Supported


Address Assignments
Addresses are assigned to devices as one of two types: static and dynamic. Static addresses are assigned by a network administrator according to a preconceived internetwork addressing plan.

A static address does not change until the network administrator manually changes it. Dynamic addresses are obtained by devices when they attach to a network, by means of some protocol-specific process. A device using a dynamic address often has a different address each time that it connects to the network. Some networks use a server to assign addresses. Server-assigned addresses are recycled for reuse as devices disconnect. A device is therefore likely to have a different address each time that it connects to the network.

Addresses versus Names


Internetwork devices usually have both a name and an address associated with them. Internetwork names typically are location independent and remain associated with a device wherever that device moves (for example, from one building to another). Internetwork addresses usually are location dependent and change when a device is moved (although MAC addresses are an exception to this rule). As with network addresses being mapped to MAC addresses, names are usually mapped to network addresses through some protocol. The Internet uses the Domain Name System (DNS) to map the name of a device to its IP address. For example, it's easier for you to remember www.cisco.com than some IP address, so you type www.cisco.com into your browser when you want to access Cisco's web site. Your computer performs a DNS lookup of the IP address for Cisco's web server and then communicates with it using the network address.
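
The lookup described above can be performed with one call to Python's standard socket module, which hands the name to the operating system's DNS resolver. The result depends, of course, on the resolver and network you run it on.

import socket

name = "www.cisco.com"
address = socket.gethostbyname(name)    # DNS lookup: name -> IP address
print(f"{name} resolves to {address}")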

TCP/IP Model
The TCP/IP model is a condensed version of the OSI model. It is composed of four layers instead of seven:

- The Process/Application layer
- The Host-to-Host layer
- The Internet layer
- The Network Access layer

The figure given below shows a comparison of the TCP/IP (or DoD) model and the OSI reference model. As you can see, the two are similar in concept, but each has a different number of layers with different names.

A vast array of protocols combines at the DoD model's Process/Application layer to integrate the various activities and duties spanning the focus of the OSI's corresponding top three layers (Application, Presentation, and Session). The Process/Application layer defines protocols for node-to-node application communication and also controls user-interface specifications. The Host-to-Host layer parallels the functions of the OSI's Transport layer, defining protocols for setting up the level of transmission service for applications. It tackles issues like creating reliable end-to-end communication and ensuring the error-free delivery of data. It handles packet sequencing and maintains data integrity.

The Internet layer corresponds to the OSI's Network layer, designating the protocols relating to the logical transmission of packets over the entire network. It takes care of the addressing of hosts by giving them an IP (Internet Protocol) address, and it handles the routing of packets among multiple networks. It also controls the communication flow between two hosts. At the bottom of the model, the Network Access layer monitors the data exchange between the host and the network. The equivalent of the Data Link and Physical layers of the OSI model, the Network Access layer oversees hardware addressing and defines protocols for the physical transmission of data. While the DoD and OSI models are alike in design and concept and have similar functions in similar places, how those functions occur is different. The figure given below shows the TCP/IP protocol suite and how its protocols relate to the DoD model layers.

The Process/Application Layer Protocols


In this section, we will describe the different applications and services typically used in IP networks. The different protocols and applications covered in this section include the following: Telnet, FTP, TFTP, NFS, SMTP, LPD, X Window, SNMP, DNS, and DHCP.

Telnet
Telnet is the chameleon of protocols; its specialty is terminal emulation. It allows a user on a remote client machine, called the Telnet client, to access the resources of another machine, the Telnet server. Telnet achieves this by pulling a fast one on the Telnet server and making the client machine appear as though it were a terminal directly attached to the local network. This projection is actually a software image, a virtual terminal that can interact with the chosen remote host. These emulated terminals are of the text-mode type and can execute refined procedures like displaying menus that give users the opportunity to choose options and access applications on the duped server. Users begin a Telnet session by running the Telnet client software and then logging on to the Telnet server.

File Transfer Protocol (FTP)

The File Transfer Protocol (FTP) is the protocol that actually lets us transfer files; it can facilitate this between any two machines using it. But FTP isn't just a protocol; it's also a program. Operating as a protocol, FTP is used by applications. As a program, it's employed by users to perform file tasks by hand. FTP also allows for access to both directories and files and can accomplish certain types of directory operations, like relocating into different ones. FTP teams up with Telnet to transparently log you in to the FTP server and then provides for the transfer of files. Accessing a host through FTP is only the first step, though. Users must then be subjected to an authentication login that's probably secured with passwords and usernames implemented by system administrators to restrict access. You can get around this somewhat by adopting the username anonymous, though what you'll gain access to will be limited. Even when employed by users manually as a program, FTP's functions are limited to listing and manipulating directories, typing file contents, and copying files between hosts. It can't execute remote files as programs.
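
A minimal sketch of anonymous FTP access using Python's standard ftplib is shown below. The server name, directory, and file are placeholders; substitute a host that actually allows anonymous logins.

from ftplib import FTP

with FTP("ftp.example.com") as ftp:      # hypothetical server
    ftp.login()                          # defaults to the anonymous user
    ftp.cwd("/pub")                      # directory operations are supported...
    print(ftp.nlst())                    # ...such as listing the directory
    with open("readme.txt", "wb") as f:
        ftp.retrbinary("RETR readme.txt", f.write)   # copy a file to this host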

Trivial File Transfer Protocol (TFTP)


The Trivial File Transfer Protocol (TFTP) is the stripped-down, stock version of FTP, but it's the protocol of choice if you know exactly what you want and where to find it. It doesn't give you the abundance of functions that FTP does, though. TFTP has no directory-browsing abilities; it can do nothing but send and receive files. This compact little protocol also skimps in the data department, sending much smaller blocks of data than FTP, and there's no authentication as with FTP, so it's insecure. Few sites support it because of the inherent security risks.

Network File System (NFS)


Network File System (NFS) is a jewel of a protocol specializing in file sharing. It allows two different types of file systems to interoperate. It works like this: Suppose the NFS server software is running on an NT server and the NFS client software is running on a Unix host. NFS allows a portion of the RAM on the NT server to transparently store Unix files, which can, in turn, be used by Unix users. Even though the NT file system and the Unix file system are unlike (they have different case sensitivity, filename lengths, security, and so on), both Unix users and NT users can access that same file with their normal file systems, in their normal way.

Simple Mail Transfer Protocol (SMTP)


Simple Mail Transfer Protocol (SMTP), answering our ubiquitous call to e-mail, uses a spooled, or queued, method of mail delivery. Once a message has been sent to a destination, the message is spooled to a device, usually a disk. The server software at the destination posts a vigil, regularly checking this queue for messages. When it detects them, it proceeds to deliver them to their destination. SMTP is used to send mail; POP3 is used to receive mail.
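
Handing a message to an SMTP server can be sketched with Python's standard smtplib and email modules. The mail server and addresses below are placeholders; SMTP only submits the message for delivery, and retrieval is left to POP3 or IMAP.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Test message"
msg.set_content("Hello from SMTP")

with smtplib.SMTP("mail.example.com", 25) as server:   # hypothetical mail server
    server.send_message(msg)                           # spooled at the server for delivery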

Line Printer Daemon (LPD)


The Line Printer Daemon (LPD) protocol is designed for printer sharing. The LPD, along with the LPR (Line Printer) program, allows print jobs to be spooled and sent to the network's printers using TCP/IP.

X Window
Designed for client-server operations, X Window defines a protocol for writing graphical user interface-based client/server applications. The idea is to allow a program, called a client, to run on one computer and have it display through a program called a window server running on another computer.

Simple Network Management Protocol (SNMP)


Simple Network Management Protocol (SNMP) collects and manipulates valuable network information. It gathers data by polling the devices on the network from a management station at fixed or random intervals, requiring them to disclose certain information. When all is well, SNMP receives something called a baseline, a report delimiting the operational traits of a healthy network. This protocol can also stand as a watchdog over the network, quickly notifying managers of any sudden turn of events. These network watchdogs are called agents, and when aberrations occur, agents send an alert called a trap to the management station.

Domain Name Service (DNS)


Domain Name Service (DNS) resolves host names, specifically Internet names such as www.routersim.com. You don't have to use DNS; you can just type in the IP address of any device you want to communicate with. An IP address identifies hosts on a network and on the Internet as well. However, DNS was designed to make our lives easier. Also, think about what would happen if you wanted to move your Web page to a different service provider: the IP address would change, and no one would know the new one. DNS allows you to use a domain name to specify an IP address. You can change the IP address as often as you want, and no one will know the difference.

The Host-to-Host Layer Protocols


The Host-to-Host layer's main purpose is to shield the upper-layer applications from the complexities of the network. This layer says to the upper layer, "Just give me your data stream, with any instructions, and I'll begin the process of getting your information ready to send." The following sections describe the two protocols at this layer: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).

Transmission Control Protocol (TCP)


The Transmission Control Protocol (TCP) takes large blocks of information from an application and breaks them into segments. It numbers and sequences each segment so that the destination's TCP protocol can put the segments back into the order the application intended. After these segments are sent, TCP (on the transmitting host) waits for an acknowledgment from the receiving end's TCP virtual circuit session, retransmitting those segments that aren't acknowledged. Before a transmitting host starts to send segments down the model, the sender's TCP protocol contacts the destination's TCP protocol to establish a connection. What is created is known as a virtual circuit. This type of communication is called connection-oriented. During this initial handshake, the two TCP layers also agree on the amount of information that's going to be sent before the recipient's TCP sends back an acknowledgment. With everything agreed upon in advance, the path is paved for reliable communication to take place. TCP is a full-duplex, connection-oriented, reliable, accurate protocol, and establishing all these terms and conditions, in addition to error checking, is no small task. TCP is very complicated and, not surprisingly, costly in terms of network overhead. Since today's networks are much more reliable than those of yore, this added reliability is often unnecessary.
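
A minimal TCP exchange using Python sockets is sketched below. The connection (the virtual circuit) is established before any data flows, and TCP itself handles sequencing and retransmission. The address comes from the documentation range, and port 7 (the echo service) is an assumption; point it at a real server to try it.

import socket

with socket.create_connection(("192.0.2.10", 7), timeout=5) as sock:   # three-way handshake happens here
    sock.sendall(b"hello")        # TCP segments, sequences, and retransmits as needed
    reply = sock.recv(1024)       # data arrives in order, or the call times out
    print(reply)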

User Datagram Protocol (UDP)


Application developers can use the User Datagram Protocol (UDP) in place of TCP. UDP is the scaled-down economy model and is considered a thin protocol. Like a thin person on a park bench, a thin protocol doesn't take up a lot of room, or in this case, much bandwidth on a network. UDP also doesn't offer all the bells and whistles of TCP, but it does do a fabulous job of transporting information that doesn't require reliable delivery, and it does so using far fewer network resources. There are some situations where it would definitely be wise for application developers to opt for UDP rather than TCP. Remember the watchdog SNMP up there at the Process/Application layer? SNMP monitors the network, sending intermittent messages and a fairly steady flow of status updates and alerts, especially when running on a large network. The cost in overhead to establish, maintain, and close a TCP connection for each one of those little messages would reduce what would be an otherwise healthy, efficient network to a dammed-up bog in no time. Another circumstance calling for UDP over TCP is when the matter of reliability is already handled at the Process/Application layer. Network File System (NFS) handles its own reliability issues, making the use of TCP both impractical and redundant. However, the application developer decides whether to use UDP or TCP, not the user who wants to transfer data faster. UDP receives upper-layer blocks of information, instead of data streams as TCP does, and breaks them into segments. Like TCP, each UDP segment is given a number for reassembly into the intended block at the destination. However, UDP does not sequence the segments and does not care in which order the segments arrive at the destination. At least it numbers them, though. But after that, UDP sends the segments off and forgets about them. It doesn't follow through, check up on them, or even allow for an acknowledgment of safe arrival; it's complete abandonment. Because of this, it's referred to as an unreliable protocol. This does not mean that UDP is ineffective, only that it doesn't handle issues of reliability. Further, UDP doesn't create a virtual circuit, nor does it contact the destination before delivering information to it. It is, therefore, also considered a connectionless protocol. Since UDP assumes that the application will use its own reliability method, it doesn't use any. This gives an application developer a choice when running the Internet Protocol stack: TCP for reliability or UDP for faster transfers.
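
The same idea over UDP takes even less code: no connection is set up, no acknowledgment is expected, and the datagram may be silently lost. The address and port are example values.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"status update", ("192.0.2.10", 9999))    # fire and forget
sock.close()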

The Internet Layer Protocols


There are two main reasons for the Internet layer's existence: routing, and providing a single network interface to the upper layers. None of the upper- or lower-layer protocols have any functions relating to routing. The complex and important task of routing is the job of the Internet layer. The Internet layer's second job is to provide a single network interface to the upper-layer protocols. Without this layer, application programmers would need to write hooks into every one of their applications for each different Network Access protocol. This would not only be a pain in the neck, but it would also lead to different versions of each application: one for Ethernet, another one for Token Ring, and so on. To prevent this, IP provides one single network interface for the upper-layer protocols. That accomplished, it is then the job of IP and the various Network Access protocols to get along and work together. All network roads don't lead to Rome; they lead to IP. All the other protocols at this layer, as well as all those at the upper layers, use it. Never forget that. All paths through the model go through IP. The following sections describe the protocols that work at the Internet layer: Internet Protocol (IP), Internet Control Message Protocol (ICMP), Address Resolution Protocol (ARP), and Reverse Address Resolution Protocol (RARP).

Internet Protocol (IP)


The Internet Protocol (IP) essentially is the Internet layer. The other protocols found here merely exist to support it. IP contains the big picture and could be said to see all, in that it is aware of all the interconnected networks. It can do this because all the machines on the network have a software, or logical, address called an IP address. IP looks at each packet's address. Then, using a routing table, it decides where the packet is to be sent next, choosing the best path. The Network Access layer protocols at the bottom of the model don't possess IP's enlightened scope of the entire network; they deal only with physical links (local networks). Identifying devices on networks requires answering these two questions: Which network is it on? And what is its ID on that network? The first answer is the software, or logical, address (the correct street). The second answer is the hardware address (the correct mailbox). All hosts on a network have a logical ID called an IP address. This is the software, or logical, address, and it contains valuable encoded information that greatly simplifies the complex task of routing. IP receives segments from the Host-to-Host layer and fragments them into datagrams (packets). IP then reassembles datagrams back into segments on the receiving side. Each datagram is assigned the IP address of the sender and of the recipient. Each router (Layer 3 device) that receives a datagram makes routing decisions based upon the packet's destination IP address. User data must pass through IP every time it is sent from the upper layers to a remote network.
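
A toy version of that routing decision can be written with Python's ipaddress module: pick the most specific (longest-prefix) route whose network contains the destination. The routing table entries here are invented for illustration.

import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "next hop 192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "next hop 192.168.1.2",
    ipaddress.ip_network("0.0.0.0/0"):   "default gateway 192.168.1.254",
}

def route(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)    # the longest prefix wins
    return routing_table[best]

print(route("10.1.2.3"))    # matches 10.0.0.0/8 and 10.1.0.0/16; the /16 is chosen
print(route("8.8.8.8"))     # only the default route matches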

Internet Control Message Protocol (ICMP)


The Internet Control Message Protocol (ICMP) works at the Network layer and is used by IP for many different services. ICMP is a management protocol and messaging service provider for IP. Its messages are carried as IP datagrams. RFC 1256, ICMP Router Discovery Messages, is an annex to ICMP, which affords hosts extended capability in discovering routes to gateways. Periodically, router advertisements are announced over the network, reporting IP addresses for the router's network interfaces. Hosts listen for these network infomercials to acquire route information. A router solicitation is a request for immediate advertisements and may be sent by a host when it starts up. If a router can't send an IP datagram any further, it uses ICMP to send a message back to the sender, advising it of the situation. For example, if a router receives a packet destined for a network that the router doesn't know about, it sends an ICMP Destination Unreachable message back to the sending station. Other ICMP events and tools include the following:

- Buffer Full: If a router's memory buffer for receiving incoming datagrams is full, it uses ICMP to send out this message.
- Hops: Each IP datagram is allotted a certain number of routers, called hops, that it may go through. If it reaches its limit of hops before arriving at its destination, the last router to receive the datagram deletes it. The executioner router then uses ICMP to send an obituary message, informing the sending machine of the demise of its datagram.
- Ping: Packet Internet Groper uses ICMP echo messages to check the physical connectivity of machines on an internetwork.
- Traceroute: Using ICMP timeouts, traceroute is used to find the path a packet takes as it traverses an internetwork.

The following data is from a network analyzer catching an ICMP echo request. Notice that even though ICMP works at the Network layer, it still uses IP to do the Ping request.
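
As a rough sketch of what such an echo request looks like on the wire, the following Python code builds an ICMP echo request (type 8, code 0, checksum, identifier, sequence number, payload). Actually transmitting it requires a raw socket and usually administrator rights, so the packet is only printed here; this is not the analyzer capture referred to above.

import struct

def icmp_checksum(data: bytes) -> int:
    # Standard Internet checksum: sum 16-bit words, fold the carries, complement.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)      # checksum field is zero while computing
    checksum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload

packet = build_echo_request(ident=0x1234, seq=1, payload=b"ping test data")
print(packet.hex())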

Address Resolution Protocol (ARP)


The Address Resolution Protocol (ARP) finds the hardware address of a host from a known IP address. Here's how it works: When IP has a datagram to send, it must inform a Network Access protocol, such as Ethernet or Token Ring, of the destination's hardware address on the local network. (It has already been informed by upper-layer protocols of the destination's IP address.) If IP doesn't find the destination host's hardware address in the ARP cache, it uses ARP to find this information. As IP's detective, ARP interrogates the local network by sending out a broadcast asking the machine with the specified IP address to reply with its hardware address. In other words, ARP translates the software (IP) address into a hardware address, for example, the destination machine's Ethernet board address, and from it deduces the machine's whereabouts. This hardware address is technically referred to as the media access control (MAC) address or physical address. The figure given below shows how an ARP broadcast might look on a local network.

Reverse Address Resolution Protocol (RARP)


When an IP machine happens to be a diskless machine, it has no way of initially knowing its IP address, but it does know its MAC address. The Reverse Address Resolution Protocol (RARP) discovers the identity of the IP address for diskless machines by sending out a packet that includes the machine's MAC address and a request for the IP address assigned to that MAC address. A designated machine, called a RARP server, responds with the answer, and the identity crisis is over. RARP uses the information it does know about the machine's MAC address to learn its IP address and complete the machine's ID portrait.

Ways of Communication
Unicasting: Communication between two devices is one-to-one. It creates the least traffic, because nothing is sent to hosts that are not involved. Unicasting is best when one device needs to communicate with exactly one other device, since no other hosts on the segment are bothered. It does not scale for one-to-many communication, because the sending device would have to send a separate copy of the same packet to every host and process an acknowledgement from each of them.

Broadcasting: Communication is one-to-all, where "all" means every host in the network on the same switch (the broadcast domain). When a host sends a packet to the broadcast address, the switch duplicates the packet and delivers it to every host in the network.

Multicasting: One-to-one and one-to-all communication both have limitations, such as the volume of traffic to handle or delivering data to hosts that never asked for it. Multicasting is used when one-to-group, one-way communication is required. A typical example is live telecasting of a video stream on the Internet: the receivers are the group of users who want that particular stream, not all hosts, so each interested user joins the corresponding multicast group to receive it.

IP Addressing
One of the most important topics in any discussion of TCP/IP is IP addressing. An IP address is a numeric identifier assigned to each machine on an IP network. It designates the location of a device on the network. An IP address is a software address, not a hardware address; the latter is hard-coded on a network interface card (NIC) and used for finding hosts on a local network. IP addressing was designed to allow a host on one network to communicate with a host on a different network, regardless of the type of LAN each host participates in. IP stands for Internet Protocol; it is a communications protocol used from the smallest private network to the massive global Internet. An IP address is a unique identifier given to a single device on an IP network. The IP address is a 32-bit number that ranges from 0 to 4294967295, which means that, theoretically, the Internet can contain approximately 4.3 billion unique addresses. To make such a large address block easier to handle, it is chopped up into four 8-bit numbers, or "octets," separated by periods. Instead of 32 binary base-2 digits, which would be too long to read, the address is written as four base-256 digits. Octets are made up of numbers ranging from 0 to 255. The numbers below show how IP addresses increment.

0.0.0.0
0.0.0.1
...increment 252 addresses...
0.0.0.254
0.0.0.255
0.0.1.0
0.0.1.1
...increment 252 addresses...
0.0.1.254
0.0.1.255
0.0.2.0
0.0.2.1
...increment 4+ billion addresses...
255.255.255.255
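The rollover pattern above is easiest to see if you treat the four octets as one 32-bit number. The short Python sketch below converts between the two forms using plain integer arithmetic; the sample addresses are only illustrations.

def to_int(dotted: str) -> int:
    # Each octet is one base-256 digit: 172.16.30.56 -> 2886737464.
    a, b, c, d = (int(octet) for octet in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(value: int) -> str:
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(to_int("0.0.0.255"))       # 255
print(to_dotted(256))            # 0.0.1.0 -- the address right after 0.0.0.255
print(to_dotted(4294967295))     # 255.255.255.255 -- the last possible address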

IP Terminology
Here are a few of the most important terms:
Bit: One digit; either a 1 or a 0.
Byte: 8 bits.
Octet: Always 8 bits; in dotted-decimal notation each octet is written as a number from 0 to 255.
Network address: The designation used in routing to send packets to a remote network, for example 10.0.0.0, 172.16.0.0, and 192.168.10.0.
Broadcast address: Used by applications and hosts to send information to all nodes on a network. Examples include 255.255.255.255, which is all networks, all nodes; 172.16.255.255, which is all subnets and hosts on network 172.16.0.0; and 10.255.255.255, which broadcasts to all subnets and hosts on network 10.0.0.0.

The Hierarchical IP Addressing Scheme


An IP address consists of 32 bits of information. These bits are divided into four sections, referred to as octets or bytes, each containing 1 byte (8 bits). You can depict an IP address using one of three methods:

Dotted-decimal, as in 172.16.30.56
Binary, as in 10101100.00010000.00011110.00111000
Hexadecimal, as in AC 10 1E 38
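All three notations spell the same 32 bits. A couple of lines of Python confirm the binary and hexadecimal forms of 172.16.30.56:

address = "172.16.30.56"
octets = [int(o) for o in address.split(".")]

print(".".join(f"{o:08b}" for o in octets))   # 10101100.00010000.00011110.00111000
print(" ".join(f"{o:02X}" for o in octets))   # AC 10 1E 38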

Network Addressing
The network address uniquely identifies each network. Every machine on the same network shares that network address as part of its IP address. In the IP address 172.16.30.56, for example, 172.16 is the network address. The node address is assigned to, and uniquely identifies, each machine on a network. This part of the address must be unique because it identifies a particular machine (an individual) as opposed to a network (a group). This number can also be referred to as a host address. In the sample IP address 172.16.30.56, .30.56 is the node address. The designers of the Internet decided to create classes of networks based on network size. For the small number of networks possessing a very large number of nodes, they created the rank Class A network. At the other extreme is the Class C network, which is reserved for the numerous networks with a small number of nodes. The class distinction for networks between very large and very small is predictably called the Class B network. Subdividing an IP address into a network and node address is determined by the class designation of one's network. The three classes of networks are summarized in the sections that follow.

Network Address Range: Class A


The designers of the IP address scheme said that the first bit of the first byte in a Class A network address must always be off, or 0. This means a Class A address must be between 0 and 127.

Here is how those numbers are defined: 0xxxxxxx. If we turn the other 7 bits all off and then turn them all on, we find the Class A range of network addresses: 00000000 = 0 and 01111111 = 127.

Network Address Range: Class B


In a Class B network, the RFCs state that the first bit of the first byte must always be turned on, but the second bit must always be turned off. If you turn the other six bits all off and then all on, you will find the range for a Class B network: 10000000=128 10111111=191 As you can see, this means that a Class B network can be defined when the first byte is configured from 128 to 191.

Network Address Range: Class C


For Class C networks, the RFCs define the first two bits of the first octet as always turned on, with the third bit always turned off. Following the same process as for the previous classes, convert from binary to decimal to find the range. Here is the range for a Class C network: 11000000 = 192 and 11011111 = 223. So if you see an IP address that starts at 192 and goes up to 223, you'll know it is a Class C IP address.

Network Address Ranges: Classes D and E


The addresses between 224 and 255 are reserved for Class D and Class E networks. Class D (224-239) is used for multicast addresses, and Class E (240-255) is reserved for research and experimental use.
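Because the class is fixed by the leading bits of the first octet, a program can read it straight off the first byte. A small illustrative Python sketch:

def address_class(dotted: str) -> str:
    # Classful lookup based only on the value of the first octet.
    first = int(dotted.split(".")[0])
    if first <= 127:
        return "A"      # leading bit 0
    if first <= 191:
        return "B"      # leading bits 10
    if first <= 223:
        return "C"      # leading bits 110
    if first <= 239:
        return "D"      # leading bits 1110 (multicast)
    return "E"          # leading bits 1111 (experimental)

for ip in ("10.1.1.1", "172.16.30.56", "192.168.100.102", "224.0.0.5", "240.0.0.1"):
    print(ip, "is Class", address_class(ip))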

Network Addresses: Special Purpose


Some IP addresses are reserved for special purposes, and network administrators shouldn't assign these addresses to nodes. The table below lists the members of this exclusive little club and why they're included in it.

Network-Id: The address that represents the network itself, under which all host addresses in that network fall. It cannot be assigned to any host in the network. When all the host bits are zero, the address is the Network-Id; in other words, the first address of the network is always the Network-Id.

Broadcast-Id: The address used when every host in the network is supposed to receive the same message; packets sent to it are received by all the hosts in the network. It cannot be assigned to any host in the network. When all the host bits are one, the address is the Broadcast-Id; in other words, the last address of the network is always the Broadcast-Id.

Class A Addresses
In a Class A network address, the first byte is assigned to the network address and the three remaining bytes are used for the node addresses. The Class A format is Network.Node.Node.Node. For example, in the IP address 49.22.102.70, 49 is the network address and 22.102.70 is the node address. Every machine on this particular network would have the distinctive network address of 49. The network portion of a Class A address is one byte long, with the first bit of that byte reserved and the seven remaining bits available for manipulation. As a result, the maximum number of Class A networks that can be created is 128. Why? Because each of the seven bit positions can be either a 0 or a 1, giving 2^7, or 128. To complicate matters further, the network address of all 0s (0000 0000) is reserved to designate the default route. Additionally, the address 127, which is reserved for diagnostics, can't be used either, which means that you can only use the numbers 1 to 126 to designate Class A network addresses. This means the actual number of usable Class A network addresses is 128 minus 2, or 126. Got it? Each Class A address has three bytes (24 bit positions) for the node address of a machine. Thus, there are 2^24, or 16,777,216, unique combinations and, therefore, precisely that many possible unique node addresses for each Class A network. Because addresses with the two patterns of all 0s and all 1s are reserved, the actual maximum usable number of nodes for a Class A network is 2^24 minus 2, which equals 16,777,214.

Class A Valid Host IDs


Here is an example of how to figure out the valid host IDs in a Class A network address: 10.0.0.0 All host bits off is the network address. 10.255.255.255 All host bits on is the broadcast address. The valid hosts are the number in between the network address and the broadcast address: 10.0.0.1 through 10.255.255.254. Notice that 0s and 255s are valid host IDs. All you need to remember when trying to find valid host addresses is that the host bits cannot all be turned off or on at the same time.

Class B Addresses
In a Class B network address, the first two bytes are assigned to the network address, and the remaining two bytes are used for node addresses. The format is Network.Network.Node.Node. For example, in the IP address 172.16.30.56, the network address is 172.16 and the node address is 30.56. With a network address being two bytes (eight bits each), there would be 2^16 unique combinations. But the Internet designers decided that all Class B network addresses should start with the binary digits 1, then 0. This leaves 14 bit positions to manipulate, and therefore 16,384 (2^14) unique Class B network addresses. A Class B address uses two bytes for node addresses. This is 2^16 minus the two reserved patterns (all 0s and all 1s), for a total of 65,534 possible node addresses for each Class B network.

Class B Valid Host IDs


Here is an example of how to find the valid hosts in a Class B network: 172.16.0.0 with all host bits turned off is the network address; 172.16.255.255 with all host bits turned on is the broadcast address. The valid hosts are the numbers in between the network address and the broadcast address: 172.16.0.1 through 172.16.255.254.

Class C Addresses

The first three bytes of a Class C network address are dedicated to the network portion of the address, with only one measly byte remaining for the node address. The format is Network.Network.Network.Node. Using the example IP address 192.168.100.102, the network address is 192.168.100 and the node address is 102. In a Class C network address, the first three bit positions are always the binary 110. The calculation is as follows: 3 bytes, or 24 bits, minus 3 reserved positions leaves 21 positions. Hence, there are 2^21, or 2,097,152, possible Class C networks. Each unique Class C network has one byte to use for node addresses. This leads to 2^8, or 256, minus the two reserved patterns of all 0s and all 1s, for a total of 254 node addresses for each Class C network.

Class C Valid Host IDs


Here is an example of how to find a valid host ID in a Class C network: 192.168.100.0 with all host bits turned off is the network ID; 192.168.100.255 with all host bits turned on is the broadcast address. The valid hosts are the numbers in between the network address and the broadcast address: 192.168.100.1 through 192.168.100.254.

So while assigning IP addresses to hosts, two addresses can never be assigned: one is the Network-Id and the other is the Broadcast-Id. Always subtract 2 from the total number of IPs in the network.

Network        Subnet mask      Total no. of IPs   Usable IPs   Network-Id / Broadcast-Id
10.0.0.0       255.0.0.0        2^24               2^24 - 2     10.0.0.0 / 10.255.255.255
172.31.0.0     255.255.0.0      65536              65534        172.31.0.0 / 172.31.255.255
192.168.0.0    255.255.255.0    256                254          192.168.0.0 / 192.168.0.255
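The figures in the table above can be reproduced with Python's standard ipaddress module, which is a handy way to double-check a Network-Id, Broadcast-Id, and usable-host count:

import ipaddress

for net_def in ("10.0.0.0/255.0.0.0",
                "172.31.0.0/255.255.0.0",
                "192.168.0.0/255.255.255.0"):
    net = ipaddress.ip_network(net_def)
    print(net.network_address,        # Network-Id: all host bits zero
          net.broadcast_address,      # Broadcast-Id: all host bits one
          net.num_addresses,          # total IPs
          net.num_addresses - 2)      # usable IPs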

Subnetting

The word subnet is short for sub-network: a smaller network within a larger one. The smallest subnet, which has no further subdivisions within it, is considered a single "broadcast domain," and it corresponds directly to a single LAN (local area network) segment on an Ethernet switch. The broadcast domain serves an important function because this is where devices on a network communicate directly with each other's MAC addresses, which don't route across multiple subnets, let alone the entire Internet. MAC address communication is limited to a smaller network because it relies on ARP broadcasting to find its way around, and broadcasting can be scaled only so far before the sheer amount of broadcast traffic brings down the entire network. For this reason, the most common smallest subnet is 8 host bits, precisely a single octet, although it can be smaller or slightly larger.

Subnetting is simply the concept of borrowing bits from the host part of an address and adding them to the network part. This increases the number of available networks and decreases the number of hosts in each subnetted network. It allows a more efficient assignment of IP addresses with the least possible waste, since IPv4 addresses are very limited in number.

Subnets have a beginning and an ending, and the beginning number is always even while the ending number is always odd. The beginning number is the "Network ID" and the ending number is the "Broadcast ID." You're not allowed to use these numbers for hosts because they both have special meanings and special purposes. The Network ID is the official designation for a particular subnet, and the ending number is the broadcast address that every device on the subnet listens to.

With subnetting, one bigger network can be broken down into a number of smaller sub-networks, and each sub-network must have its own Network-Id and Broadcast-Id. For example, take the network 192.168.0.0 with mask 255.255.255.0: the Network-Id is 192.168.0.0 and the Broadcast-Id is 192.168.0.255. Writing the last octet in binary gives 192.168.0.00000000; the last 8 bits are host bits and the first 24 bits are reserved for the network.

Suppose we have a requirement of N IP addresses per subnet. We have to find out how many bits must be reserved for hosts; the remaining bits become subnet bits. Along with the N hosts we require one Network-Id and one Broadcast-Id, so the total number of IPs required is N + 2. To cover this requirement we must reserve M host bits such that

N + 2 <= 2^M    (general for all classes)

The number of subnet networks is then 2^(8-M). Consider a requirement of 60 people per subnet. The number of IPs required is N + 2 = 62, where N = 60. Putting in the values gives M = 6, since 2^6 = 64 is the smallest power of two that covers 62. So the number of subnets is 2^(8-6) = 4, and the number of addresses in each subnet is 2^6 = 64. The last octet now splits into two subnet bits and six host bits (192.168.0.ss hhhhhh), and the four subnets are:

192.168.0.00 ******   decimal form 192.168.0.0
192.168.0.01 ******   decimal form 192.168.0.64
192.168.0.10 ******   decimal form 192.168.0.128
192.168.0.11 ******   decimal form 192.168.0.192

Network-Id (binary)     Network-Id (decimal)   Broadcast-Id (binary)    Broadcast-Id (decimal)
192.168.0.00000000      192.168.0.0            192.168.0.00111111       192.168.0.63
192.168.0.01000000      192.168.0.64           192.168.0.01111111       192.168.0.127
192.168.0.10000000      192.168.0.128          192.168.0.10111111       192.168.0.191
192.168.0.11000000      192.168.0.192          192.168.0.11111111       192.168.0.255
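The same arithmetic (62 addresses needed, so M = 6 host bits and 2^(8-6) = 4 subnets) can be checked with Python's ipaddress module, whose subnets() method carves 192.168.0.0/24 into exactly the four /26 blocks tabulated above:

import ipaddress
import math

hosts_needed = 60
host_bits = math.ceil(math.log2(hosts_needed + 2))       # +2 for Network-Id and Broadcast-Id
print("host bits:", host_bits)                           # 6

block = ipaddress.ip_network("192.168.0.0/24")
for subnet in block.subnets(new_prefix=32 - host_bits):  # /26 gives four subnets
    print(subnet, subnet.network_address, subnet.broadcast_address)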

IP Variable Length Subnet Masking (VLSM)

Conventional subnet masking replaces the two-level IP addressing scheme with a more flexible three-level method. Since it lets network administrators assign IP addresses to hosts based on how they are connected in physical networks, subnetting is a real breakthrough for those maintaining large IP networks. It has its weaknesses though, and still has room for improvement. The main weakness of conventional subnetting is that the subnet ID represents only one additional hierarchical level in how IP addresses are interpreted and used for routing.

The problem with single-level subnetting: it may seem greedy to look at subnetting and say, "What, only one additional level?" However, in large networks, the need to divide the entire network into only one level of subnetworks doesn't represent the best use of our IP address block. Furthermore, since the subnet ID is the same length throughout the network, we can have problems if we have subnetworks with very different numbers of hosts on them: the subnet ID must be chosen based on whichever subnet has the greatest number of hosts, even if most of the subnets have far fewer. This is inefficient even in small networks, and it can result in the need to use extra addressing blocks while wasting many of the addresses in each block.

For example, consider a relatively small company with a Class C network, 201.45.222.0/24. It has six subnetworks. The first four subnets (S1, S2, S3, and S4) are relatively small, containing only 10 hosts each. However, one of them (S5) is for the production floor and has 50 hosts, and the last (S6) is the development and engineering group, which has 100 hosts. The total number of hosts needed is thus 190. Without subnetting, we have enough hosts in our Class C network to handle them all. However, when we try to subnet, we have a big problem: in order to have six subnets we need to use 3 bits for the subnet ID. This leaves only 5 bits for the host ID, which means every subnet has the identical capacity of 30 hosts. This is enough for the smaller subnets but not enough for the larger ones. The only solution with conventional subnetting, other than shuffling the physical subnets, is to get another Class C block for the two big subnets and use the original for the four small ones. But this is expensive and means wasting hundreds of IP addresses.

Suppose the requirement is as follows (a sketch of sizing these groups with VLSM is shown after the list):
120 people for Marketing
60 people for Finance
30 tele-callers
14 team leaders
6 managers
2 directors
2 senate members
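Under VLSM each group gets the smallest subnet that still fits it. The Python sketch below sizes a subnet for every requirement in the list and carves them, largest first, out of a single block; 192.168.0.0/24 is just an example block chosen for the illustration, and the ordering largest-first is what keeps every allocation on a properly aligned boundary.

import ipaddress
import math

# Requirements from the list above, already sorted largest-first.
requirements = [("Marketing", 120), ("Finance", 60), ("Tele-callers", 30),
                ("Team leaders", 14), ("Managers", 6), ("Directors", 2),
                ("Senate members", 2)]

next_free = ipaddress.ip_address("192.168.0.0")         # start of the example /24
for name, hosts in requirements:
    host_bits = math.ceil(math.log2(hosts + 2))         # +2 for network and broadcast IDs
    subnet = ipaddress.ip_network(f"{next_free}/{32 - host_bits}")
    print(f"{name:14} {hosts:3} hosts -> {subnet} ({subnet.num_addresses - 2} usable)")
    next_free = subnet.broadcast_address + 1             # next allocation starts right after

Run as written, the allocations come out as /25, /26, /27, /28, /29, /30, and /30, which together account for all 256 addresses of the /24.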

TRANSMISSION MEDIUM USED Unshielded Twisted Pair (UTP) Cable


Unshielded Twisted Pair (UTP) is undoubtedly the most common transmission medium. Twisted pair cables are available unshielded (UTP) or shielded (STP). UTP is the most common; STP is used in noisy environments where the shield protects against excessive electromagnetic interference. Both UTP and STP come in stranded and solid wire varieties. Stranded wire is the most common and is also very flexible for bending around corners. Solid wire cable has less attenuation and can span longer distances, but it is less flexible than stranded wire and cannot be repeatedly bent. Shielded Twisted Pair (STP) involves a metal foil, or shield, that surrounds each pair in a cable, sometimes with another shield surrounding all the pairs in a multi-pair cable.

The shields serve to block ambient interference by absorbing it and conducting it to ground. That means that the foils have to be spliced just as carefully as the conductors, and that the connection to ground has to be rock-solid. Twisted pair comes in the following categories:

1. UTP        Analog voice
2. UTP        Digital voice (1 Mbps data)
3. UTP, STP   Digital voice (16 Mbps data)
4. UTP, STP   Digital voice (20 Mbps data)
5. UTP, STP   Digital voice (100 Mbps data)

Unshielded Twisted Pair (UTP) Cable


Twisted pair cabling comes in two varieties: shielded and unshielded.

Unshielded twisted pair


The quality of UTP may vary from telephone-grade wire to extremely high-speed cable. The cable has four pairs of wires inside the jacket. Each pair is twisted with a different number of twists per inch to help eliminate interference from adjacent pairs and other electrical devices. The tighter the twisting, the higher the supported transmission rate and the greater the cost per foot.

Unshielded Twisted Pair Connector


The standard connector for unshielded twisted pair cabling is an RJ-45 connector. This is a plastic connector that looks like a large telephone-style connector (see figure). A slot allows the RJ-45 to be inserted only one way. RJ stands for Registered Jack, implying that the connector follows a standard borrowed from the telephone industry. This standard designates which wire goes with each pin inside the connector. The RJ-45 connector is clear, so you can see the eight colored wires that connect to the connector's pins. These wires are twisted into four pairs. Four wires (two pairs) carry the voltage and are considered "tip." The other four wires are grounded and are called "ring." The RJ-45 connector is crimped onto the end of the wire, and the pin locations of the connector are numbered from the left, 8 to 1.

RJ-45 connector
Pin   Pair   Wire (T is tip, R is ring)
1     2      T2
2     2      R2
3     3      T3
4     1      R1
5     1      T1
6     3      R3
7     4      T4
8     4      R4

Straight-Through
In a UTP implementation of a straight-through cable, the wires on both cable ends are in the same order. You can use a straight-through cable for the following tasks:
Connecting a router to a hub or switch
Connecting a server to a hub or switch
Connecting workstations to a hub or switch

Crossover
In the implementation of a crossover cable, the wires on each end of the cable are crossed: transmit connects to receive and receive to transmit on each side, for both tip and ring.
You can use a crossover cable for the following tasks:
Connecting uplinks between switches
Connecting hubs to switches
Connecting a hub to another hub

Coaxial Cable
Coaxial cabling has a single copper conductor at its center. A plastic layer provides insulation between the center conductor and a braided metal shield. The metal shield helps to block any outside interference from fluorescent lights, motors, and other computers.

Coaxial cable
Although coaxial cabling is difficult to install, it is highly resistant to signal interference. In addition, it can support greater cable lengths between network devices than twisted pair cable. The two types of coaxial cabling are thick coaxial and thin coaxial.

Coaxial Cable Connectors


The most common type of connector used with coaxial cables is the Bayonet Neill-Concelman (BNC) connector. Different types of adapters are available for BNC connectors, including a T-connector, barrel connector, and terminator. Connectors on the cable are the weakest points in any network.

BNC connector

Fiber Optic Cable


Fiber optic cabling consists of a center glass core surrounded by several layers of protective materials. It transmits light rather than electronic signals, eliminating the problem of electrical interference. This makes it ideal for environments that contain a large amount of electrical interference. It has also made fiber the standard for connecting networks between buildings, due to its immunity to the effects of moisture and lightning. Fiber optic cable can transmit signals over much longer distances than coaxial and twisted pair, and it can carry information at vastly greater speeds. This capacity broadens communication possibilities to include services such as video conferencing and interactive services.

Fiber optic cable


Fiber Optic Connector
The most common connector used with fiber optic cable is the ST connector. It is barrel shaped, similar to a BNC connector. A newer connector, the SC, has a squared face and is easier to connect in a confined space.

Switches
A switch is an intelligent device that forwards packets only to the segment where the destination is located. Here we will discuss the 3Com SuperStack 3 Switch 3300 in detail. The SuperStack 3 Switch 3300 connects your existing 10 Mbps devices, connects high-performance workgroups with a 100 Mbps backbone or server connection, and connects power users to dedicated 100 Mbps ports, all in one switch. In addition, as part of the 3Com SuperStack 3 range of products, you can combine it with any SuperStack 3 system as your network grows.

Features:
The Switch has the following hardware features:
12 or 24 Fast Ethernet auto-negotiating 10BASE-T/100BASE-TX ports
Matrix port for connecting units in the Switch 1100/3300 family to form a stack:
  Connect two units back-to-back using a single Matrix Cable
  Connect up to four units using Matrix Cables linked to a Matrix Module

Slot for an Expansion Module

Front view:

Rear View:

Switches occupy the same place in the network as hubs. Unlike hubs, switches examine each packet and process it accordingly rather than simply repeating the signal to all ports. Switches map the Ethernet addresses of the nodes residing on each network segment and then allow only the necessary traffic to pass through the switch. When a packet is received by the switch, the switch examines the destination and source hardware addresses and compares them to a table of network segments and addresses. If the segments are the same, the packet is dropped ("filtered"); if the segments are different, then the packet is "forwarded" to the proper segment. Additionally, switches prevent bad or misaligned packets from spreading by not forwarding them.

Filtering of packets and the regeneration of forwarded packets enables switching technology to split a network into separate collision domains. Regeneration of packets allows for greater distances and more nodes to be used in the total network design, and dramatically lowers the overall collision rates. In switched networks, each segment is an independent collision domain. In shared networks all nodes reside in one, big shared collision domain. Easy to install, most switches are self-learning. They determine the Ethernet addresses in use on each segment, building a table as packets are passed through the switch. This "plug and play" element makes switches an attractive alternative to hubs. Switches can connect different networks types (such as Ethernet and Fast Ethernet) or networks of the same type. Many switches today offer high-speed links, like Fast Ethernet or FDDI that can be used to link the switches together or to give added bandwidth to important servers that get a

lot of traffic. A network composed of a number of switches linked together via these fast uplinks is called a "collapsed backbone" network. Dedicating ports on switches to individual nodes is another way to speed access for critical computers. Servers and power users can take advantage of a full segment for one node, so some networks connect high traffic nodes to a dedicated switch port.
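The learn/filter/forward behaviour described above can be sketched in a few lines of Python. This is a teaching model only: the MAC table is a plain dictionary, the port numbers and addresses are made up, and real switches do all of this in hardware.

# Simplified transparent-switch behaviour: learn source MACs, then
# filter, forward, or flood based on the destination MAC.
mac_table = {}                         # MAC address -> port it was last seen on

def handle_frame(in_port: int, src_mac: str, dst_mac: str) -> str:
    mac_table[src_mac] = in_port       # learning: remember where the sender lives

    out_port = mac_table.get(dst_mac)
    if out_port is None or dst_mac == "ff:ff:ff:ff:ff:ff":
        return "flood to all ports except %d" % in_port    # unknown or broadcast
    if out_port == in_port:
        return "filter (drop): same segment"               # no need to forward
    return "forward out port %d" % out_port

print(handle_frame(1, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # flood (unknown)
print(handle_frame(2, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # forward out port 1
print(handle_frame(1, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # forward out port 2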

Hubs
In data communications, a hub is the pivot of convergence where data arrives from one or more directions and is forwarded out in one or more directions. A hub usually includes a switch (in telecommunications, a switch is a network device that selects a path or circuit for sending a unit of data to its next destination) of some kind. The distinction seems to be that the hub is the point where data comes together and the switch is what determines how and where data is forwarded from the place where data comes together. A hub is a hardware device that acts as a central connecting point and joins lines in a star network configuration.

Routers

A router is a device that interconnects two or more computer networks and selectively interchanges packets of data between them. Each data packet contains address information that a router can use to determine whether the source and destination are on the same network, or whether the data packet must be transferred from one network to another. A router is a device whose software and hardware are customized to the tasks of routing and forwarding information. A router has two or more network interfaces, which may connect to different types of networks or different network standards.

Types of routers
Basically, routers are of two types:
1) Modular: These routers do not have fixed interfaces; interfaces can be added and removed according to need.
2) Non-modular: These routers have fixed interfaces, which cannot be removed.

Ports
We can connect to a Cisco router to configure it, verify its configuration, and check statistics by using various ports. There are many ports, but the most important is the console port.
Console Port: The console port is usually an RJ-45 connection located at the back of the router. The console is used to configure the router when it is freshly booted and any time the administrator wants to change the running configuration. We can also connect to the Cisco router by using the auxiliary port, which works much like the console port but additionally allows us to configure modem commands.

Router Components

Some of the parts of a Cisco router are: chassis, motherboard, processor, RAM, NVRAM, flash memory, power supply, ROM, and so on.

ROM: The ROM in a router contains the bootstrap program that searches for a suitable system image when the router is switched on. When the router is switched on, the ROM performs a power-on self-test (POST) to check the hardware; POST checks whether everything is working properly. The ROM also provides a monitor mode that can be used for recovering from a crisis. The contents of ROM cannot normally be erased; ROM holds the basic code that bootstraps the device.

Flash Memory: Flash memory is an erasable, reprogrammable ROM that holds the system image and the microcode. Flash memory gets its name from the fact that sections of its memory cells are erased in a single action, or "flash." Flash memory is commonly called Flash. Flash is a variation of EEPROM (Electrically Erasable Programmable Read-Only Memory). The process of erasing and rewriting in EEPROM is slow, while flash is erased and rewritten faster.

Flash memory holds the operating system of the router. The operating system of a Cisco router is the IOS (Internetwork Operating System). When a router is switched on, it checks for a compressed IOS image in flash memory. If an IOS image is present, the router continues booting from it; otherwise it looks for one on a TFTP (Trivial File Transfer Protocol) server.

RAM: RAM is much faster to read from and write to than other kinds of storage. It provides caching, buffers network packets, and stores routing table information. RAM contains the running configuration file, which is the current configuration; all configuration changes are applied to this file unless we explicitly save them to NVRAM. Information in RAM requires a constant power source to be sustained. When the router is powered down, or there is a power cycle, data stored in RAM ceases to exist.

NVRAM: NVRAM is Non-volatile Random Access Memory, the general name for any type of random access memory that does not lose its information when power is turned off. Information in NVRAM is therefore retained when the router is switched off or rebooted. The startup configuration is stored in the NVRAM of the router. When the router reboots, it searches NVRAM for a startup-config; if one is available, the router copies it into the running configuration.

Internal parts of a router
CPU: The CPU executes instructions coded in the operating system and its subsystems to perform the basic operations necessary to accomplish the functionality of the router, for example all of the routing functions, network module high-level control, and system initialization.
Motherboard: Same function as in a computer or laptop.

Router interface types
Network Module: A type of circuit board on which WIC cards are installed; it also has permanent Fast Ethernet or Ethernet slots.
WIC cards: Used to connect the router to other routers in the network or to the wide area network, such as leased lines or a Frame Relay switch.
Smart serial / Serial: Serial interface types carried on WIC cards.
Fast Ethernet: Cards with a maximum speed of 100 Mbps that follow the Ethernet standards.
Ethernet: Cards with a maximum speed of 10 Mbps that follow the Ethernet standards.

Boot Sequence
Complete these steps:
1. After you power on the router, the ROM monitor starts first. ROMMON/bootstrap functions are important at router boot, and they complete these operations at boot up:
   o Configure power-on register settings: these settings are for the control registers of the processor and of other devices, such as the Dual Universal Asynchronous Receiver Transmitter (DUART) for console access, as well as the configuration register.

   o Perform power-on diagnostics: tests are performed on NVRAM and DRAM, writing and reading various data patterns.
   o Initialize the hardware: initialization of the interrupt vector and other hardware is performed, and memory (for example DRAM, SRAM, and so forth) is sized.
   o Initialize software structures: initialization of the NVRAM data structure occurs so that information about the boot sequence, stack trace, and environment variables can be read. Also, information about accessible devices is collected in the initial device table.
2. Next, the ROM looks for the Cisco IOS software image in the flash. Even if you want to boot the router with the Trivial File Transfer Protocol (TFTP), you need a valid image in the flash in order to boot that image first and use it as a boot-helper image to initialize the system and bring up the interfaces, so that the main image can be loaded from the TFTP server.
3. After the router finds the image, it decompresses it and loads it into dynamic RAM (DRAM). Then the Cisco IOS software image starts to run. Cisco IOS software performs important functions during boot up, such as:
   o Recognition and analysis of interfaces and other hardware
   o Setup of proper data structures such as Interface Descriptor Blocks (IDBs)
   o Allocation of buffers
   o Reading the configuration from NVRAM to RAM (startup-config) and configuring the system

This is an example of a boot sequence from a 2600 router:

System Bootstrap, Version 11.3(2)XA4, RELEASE SOFTWARE (fc1)
Copyright (c) 1999 by cisco Systems, Inc.
TAC:Home:SW:IOS:Specials for info
C2600 platform with 65536 Kbytes of main memory
program load complete, entry point: 0x80008000, size: 0x43b7fc
Self decompressing the image:
#####################################################################
############################################################### [OK]

Restricted Rights Legend
Use, duplication, or disclosure by the Government is subject to restrictions as set forth in subparagraph (c) of the Commercial Computer Software - Restricted Rights clause at FAR sec. 52.227-19 and subparagraph (c) (1) (ii) of the Rights in Technical Data and Computer Software clause at DFARS sec. 252.227-7013.
cisco Systems, Inc.
170 West Tasman Drive
San Jose, California 95134-1706

Cisco Internetwork Operating System Software
IOS (tm) C2600 Software (C2600-I-M), Version 12.1(8), RELEASE SOFTWARE (fc1)
Copyright (c) 1986-2001 by cisco Systems, Inc.
Compiled Tue 17-Apr-01 04:55 by kellythw
Image text-base: 0x80008088, data-base: 0x8080853C

cisco 2611 (MPC860) processor (revision 0x203) with 56320K/9216K bytes of memory.
Processor board ID JAD05020BV5 (1587666027)
M860 processor: part number 0, mask 49
Bridging software.
X.25 software, Version 3.0.0.
2 Ethernet/IEEE 802.3 interface(s)
2 Serial(sync/async) network interface(s)
32K bytes of non-volatile configuration memory.
16384K bytes of processor board System flash (Read/Write)

Press RETURN to get started!

00:00:09: %LINK-3-UPDOWN: Interface Ethernet0/0, changed state to up
00:00:09: %LINK-3-UPDOWN: Interface Ethernet0/1, changed state to up
00:00:09: %LINK-3-UPDOWN: Interface Serial0/0, changed state to up
00:00:09: %LINK-3-UPDOWN: Interface Serial0/1, changed state to up
00:00:10: %SYS-5-CONFIG_I: Configured from memory by console
00:00:10: %LINEPROTO-5-UPDOWN: Line protocol on Interface Ethernet0/0, changed state to up
00:00:10: %LINEPROTO-5-UPDOWN: Line protocol on Interface Ethernet0/1, changed state to up
00:00:10: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/0, changed state to up
00:00:10: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/1, changed state to up
00:00:13: %SYS-5-RESTART: System restarted --
Cisco Internetwork Operating System Software
IOS (tm) C2600 Software (C2600-I-M), Version 12.1(8), RELEASE SOFTWARE (fc1)
Copyright (c) 1986-2001 by cisco Systems, Inc.
Compiled Tue 17-Apr-01 04:55 by kellythw
router>

DCE and DTE

DCE stands for Data Communication Equipment. The DCE end of a link determines the speed of the link. The DCE end is usually located at the service provider end; it controls the speed of the DTE end by means of the clock rate, which is defined in bits per second. It is essential to configure the clock rate on the DCE side; no communication will start between the routers without a clock rate.

DTE stands for Data Terminal Equipment. The DTE end is connected to the device. The services available to the DTE are most often accessed via a modem or a channel service unit/data service unit (CSU/DSU). There is no need to configure a clock rate on the DTE end.

Configuring a Router

A router can be configured in three ways:
Console
Telnet
Auxiliary line over a telephone link (not much used these days)

By default a router has no configuration and will not do anything useful until it is configured. To move from user mode into privileged mode we use the enable command:
Router> enable

Command-Line Interface (CLI)
To use the CLI, press Enter after the router finishes booting up. After you do that, the router will respond with messages that tell you about the status of each of its interfaces, then display a banner and ask you to log in.

Cisco Router Basic Operations
Enter privileged mode: Router> enable
Return to user mode from privileged mode: disable
Exit the router: logout or exit or quit
Recall last command: up arrow or <Ctrl-P>
Recall next command: down arrow or <Ctrl-N>
Suspend or abort: <Shift> and <Ctrl> and 6, then x
Refresh screen output: <Ctrl-R>
Complete command: TAB

Cisco Router Copy Commands (On Privilege Mode)
Save the current configuration from DRAM to NVRAM: Router# copy running-config startup-config
Merge NVRAM configuration to DRAM: Router# copy startup-config running-config
Copy DRAM configuration to a TFTP server: Router# copy running-config tftp
Merge TFTP configuration with current router configuration held in DRAM: Router# copy tftp running-config
Back up the IOS onto a TFTP server: Router# copy flash tftp
Upgrade the router IOS from a TFTP server: Router# copy tftp flash

Cisco Router Debug Commands (On Privilege Mode)

Enable debug for RIP: Router# debug ip rip
See IP packets: Router# debug ip packet
Debug IP/ICMP reply packets: Router# debug ip icmp
Switch all debugging off: Router# no debug all (or Router# u all)

Some basic commands

Set a console password to cisco:
Router(config)# line con 0
Router(config-line)# login
Router(config-line)# password cisco

Set a telnet password:
Router(config)# line vty 0 4
Router(config-line)# login
Router(config-line)# password cisco

Stop console timing out:
Router(config)# line con 0
Router(config-line)# exec-timeout 0 0

Set the enable password to cisco:
Router(config)# enable password cisco

Set the enable secret password to peter (this password overrides the enable password and is encrypted within the config file):
Router(config)# enable secret peter

To enter interface mode:
Router(config)# interface serial x/y
or
Router(config)# interface fastethernet x/y

Enable an interface:
Router(config-if)# no shutdown

To disable an interface:
Router(config-if)# shutdown

Set the clock rate for a router with a DCE cable to 64K:
Router(config-if)# clock rate 64000

Set a logical bandwidth assignment of 64K to the serial interface (note that the zeroes are not missing; bandwidth is set in kilobits, clock rate in bits per second):
Router(config-if)# bandwidth 64

To add an IP address to an interface:
Router(config-if)# ip address 10.1.1.1 255.255.255.0

Disable CDP for the whole router:
Router(config)# no cdp run

Enable CDP for the whole router:
Router(config)# cdp run

Disable CDP on an interface:
Router(config-if)# no cdp enable

Cisco Router Show Commands (Privilege Mode)
View version information: Router# show version
View current configuration (DRAM): Router# show running-config
View startup configuration (NVRAM): Router# show startup-config
Show IOS file and flash space: Router# show flash
Show all logs that the router has in its memory: Router# show log
Overview of all interfaces on the router: Router# show ip interface brief
Display a summary of connected CDP devices: Router# show cdp neighbor
Display detailed information on all devices: Router# show cdp entry *
Display current routing protocols: Router# show ip protocols
Display IP routing table: Router# show ip route
Display interface properties: Router# show interface serial x/y or Router# show interface fastethernet x/y
Display IP properties of an interface: Router# show ip interface serial x/y or Router# show ip interface fastethernet x/y

Ping

Ping is a computer network administration utility used to test whether a particular host is reachable across an Internet Protocol (IP) network and to measure the round-trip time for packets sent from the local host to a destination computer, including the local host's own interfaces. By default the packet takes the source address of the outgoing interface from which it is supposed to leave for the destination. Ping operates by sending Internet Control Message Protocol (ICMP) echo request packets to the target host and waiting for an ICMP response. In the process it measures the round-trip time and records any packet loss. The results of the test are printed in the form of a statistical summary of the response packets received, including the minimum, maximum, and mean round-trip times, and sometimes the standard deviation of the mean.

The command can be used in the format given below on any device, whether a Microsoft OS or a Cisco router.

C:\> ping Address (IP or www.xyz.com)

C:\>ping 127.0.0.254
Pinging 127.0.0.254 with 32 bytes of data:
Reply from 127.0.0.254: bytes=32 time<1ms TTL=128
Reply from 127.0.0.254: bytes=32 time<1ms TTL=128
Reply from 127.0.0.254: bytes=32 time<1ms TTL=128
Reply from 127.0.0.254: bytes=32 time<1ms TTL=128
Ping statistics for 127.0.0.254:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

Router# ping A.B.C.D (IP Address)

Router#ping 1.1.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 1.1.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/3/4 ms

Extended Ping
Ping has various options, depending on the implementation, that enable special operational modes, such as specifying the packet size used as the probe, automatically repeating the operation for a specified count, setting the request timeout, and choosing the source address carried by the ping packet.

Router# ping
Protocol [ip]: ip
Target IP address: 1.1.1.1
Repeat count [5]: 1000
Datagram size [100]: 200
Timeout in seconds [2]: 1
Extended commands [n]: y    (use y to change the source address)
Source address or interface: 1.1.1.1
Type of service [0]:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 1000, 200-byte ICMP Echos to 1.1.1.1, timeout is 1 seconds:
Packet sent with a source address of 1.1.1.1

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 100 percent (1000/1000), round-trip min/avg/max = 1/1/4 ms

Traceroute

Traceroute is a computer network tool used to show the route taken by packets across an IP network. It is used to find out at which router the packets are actually dropped when a packet is unable to reach its destination, which makes it a very useful tool for network professionals. Traceroute works by increasing the "time-to-live" value of each successive batch of packets sent. The first three packets sent have a time-to-live (TTL) value of one (implying that they are not forwarded by the next router and make only a single hop). The next three packets have a TTL value of 2, and so on. When a packet passes through a host, the host normally decrements the TTL value by one and forwards the packet to the next host. When a packet with a TTL of one reaches a host, the host discards the packet and sends an ICMP time exceeded packet to the sender, or an echo reply if its IP address matches the IP address that the packet was originally sent to. The traceroute utility uses these returning packets to produce a list of hosts that the packets have traversed in transit to the destination.

Command for Microsoft operating systems:

C:\>tracert www.google.com

Tracing route to www.l.google.com [209.85.231.104]
over a maximum of 30 hops:

  1     2 ms     2 ms     1 ms  10.16.32.96
  2    18 ms    18 ms    16 ms  122.160.236.2
  3    17 ms    16 ms    16 ms  ABTS-North-Static-014.236.160.122.airtelbroadband.in [122.160.236.14]
  4    18 ms    15 ms    15 ms  125.19.65.101
  5    75 ms    85 ms    74 ms  203.101.100.210
  6    77 ms    96 ms    76 ms  72.14.216.229
  7    82 ms    82 ms    81 ms  66.249.94.170
  8    88 ms    89 ms    92 ms  72.14.238.90
  9    83 ms    83 ms    81 ms  maa03s01-in-f104.1e100.net [209.85.231.104]

Trace complete

C:\>tracert 4.2.2.2

Tracing route to vnsc-bak.sys.gtei.net [4.2.2.2]
over a maximum of 30 hops:

  1     2 ms     1 ms     1 ms  10.16.32.96
  2    16 ms    20 ms    17 ms  ABTS-North-Static-002.236.160.122.airtelbroadband.in [122.160.236.2]
  3    18 ms    17 ms    16 ms  ABTS-North-Static-006.236.160.122.airtelbroadband.in [122.160.236.6]
  4    17 ms    19 ms    16 ms  125.19.65.101
  5    71 ms    69 ms    73 ms  203.101.95.30
  6   225 ms   222 ms   221 ms  so-5-3-0-dcr2.par.cw.net [195.10.54.77]
  7   221 ms   221 ms   231 ms  xe-4-3-0-xcr1.par.cw.net [195.2.9.233]
  8   216 ms   225 ms   215 ms  xe-0-1-0-xcr1.fra.cw.net [195.2.9.225]
  9   328 ms   322 ms   319 ms  212.162.4.201
 10   304 ms   307 ms   304 ms  vnsc-bak.sys.gtei.net [4.2.2.2]

Trace complete.

For Cisco routers: Router# traceroute A.B.C.D

Routed Protocols: A routed protocol is a protocol that can be routed by a router; it is used between routers to carry user traffic. A router must be able to interpret the logical internetwork as specified by that routed protocol. Examples of routed protocols include AppleTalk, DECnet, IP, and IPX.

Routing Protocols: A routing protocol accomplishes routing through the implementation of a specific routing algorithm; it is used between routers to maintain routing tables. Examples of routing protocols include IGRP, OSPF, and RIP. Dynamic routing is performed by routing protocols.

Routing

Routing is the act of moving information across an internetwork from a source to a destination. Routing is used for taking a packet from one device and sending it through the network to another device on a different network. If your network has no routers, then you are not routing. Routers route traffic to all the networks in your internetwork. Routing directs packet forwarding, the transit of logically addressed packets from their source toward their ultimate destination through intermediate nodes, typically hardware devices called routers, bridges, gateways, firewalls, or switches. General-purpose computers with multiple network cards can also forward packets and perform routing, though they are not specialized hardware and may suffer from limited performance. The routing process usually directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Thus, constructing routing tables, which are held in the routers' memory, is very important for efficient routing. Most routing algorithms use only one network path at a time, but multi-path routing techniques enable the use of multiple alternative paths.

To be able to route packets, a router must know, at a minimum, the following:
Destination address
Neighbor routers from which it can learn about remote networks
Possible routes to all remote networks
The best route to each remote network

Different types of routing:
Static routing
Default routing
Dynamic routing

How to maintain and verify routing information: The router learns about remote networks from neighbor routers or from an administrator. The router then builds a routing table that describes how to find the remote networks. If a network is directly connected, the router already knows how to get to it. If a network is not attached, the router must learn how to reach it with either static routing, which means that the administrator hand-types all network locations into the routing table, or dynamic routing.

What is the Routing Table? The routing table is the table in which all the best routes to every network the router has learned are placed. All decisions taken by the route engine are based on the routing table, so the routing table should be populated with the latest entries and the latest updates about the networks. The routing table is populated on the basis of the following rules:

1. Longest subnet mask (most specific prefix) of the network. For example, if 2.0.0.0/28 is advertised by RIP and 2.0.0.0/24 is advertised by OSPF, then the decision for IPs 2.0.0.0 to 2.0.0.15 is taken on the basis of the RIP route, and all the remaining IPs of the 2.0.0.0/24 network are handled on the basis of the OSPF route.
2. Lowest administrative distance of the advertised network route. For example, if the same network is advertised by two routing protocols, then the route from the protocol with the lower administrative distance is considered the best route to that particular network.
3. Lowest metric, if multiple routes to the particular network are advertised by the same routing protocol. For example, if RIP learns the 2.0.0.0/24 network as 4 hops away on one interface and 5 hops away on another, the advertisement with the lowest metric is selected as the best route.
4. Load balancing, if the metrics of the routes are equal; how many parallel paths are used depends on the routing protocol.

Contents of routing tables: The routing table consists of at least three information fields:
The network ID: the destination network ID
Cost: the cost or metric of the path through which the packet is to be sent
Next hop: the next hop, or gateway, is the address of the next station to which the packet is to be sent on the way to its final destination

Router#show ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

     1.0.0.0/24 is subnetted, 5 subnets

R       1.0.1.0 [120/1] via 192.168.0.2, 00:00:22, Serial0/0
R       1.0.0.0 [120/1] via 192.168.0.2, 00:00:22, Serial0/0
R       1.0.3.0 [120/1] via 192.168.0.2, 00:00:22, Serial0/0
R       1.0.2.0 [120/1] via 192.168.0.2, 00:00:22, Serial0/0
R       1.0.4.0 [120/1] via 192.168.0.2, 00:00:22, Serial0/0
C    192.168.0.0/24 is directly connected, Serial0/0
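The selection rules above (most specific prefix first, then lowest administrative distance, then lowest metric) can be mimicked in a few lines of Python. This is a teaching toy with made-up routes, not how IOS actually stores or searches its table:

import ipaddress

# (prefix, administrative distance, metric, description) -- illustrative entries only
routes = [
    ("2.0.0.0/24", 110, 20, "via OSPF"),
    ("2.0.0.0/28", 120, 1,  "via RIP"),
    ("0.0.0.0/0",  1,   0,  "static default"),
]

def best_route(destination: str):
    dest = ipaddress.ip_address(destination)
    candidates = [(ipaddress.ip_network(p), ad, metric, info)
                  for p, ad, metric, info in routes
                  if dest in ipaddress.ip_network(p)]
    # Longest prefix first, then lowest AD, then lowest metric.
    return max(candidates, key=lambda r: (r[0].prefixlen, -r[1], -r[2]))

print(best_route("2.0.0.5"))     # the /28 wins: most specific prefix
print(best_route("2.0.0.200"))   # the /24 wins: address is outside the /28
print(best_route("8.8.8.8"))     # only the default route matches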

Load Balancing

Load sharing, also known as load balancing, allows routers to take advantage of multiple paths to the same destination by sending packets over all the available routes. Load sharing can be equal cost or unequal cost, where cost is a generic term referring to whatever metric (if any) is associated with the route. Equal-cost load sharing distributes traffic equally among multiple paths with equal metrics. Unequal-cost load sharing distributes packets among multiple paths with different metrics; the traffic is distributed inversely proportionally to the cost of the routes, that is, paths with lower costs are assigned more traffic and paths with higher costs are assigned less traffic. Some routing protocols support both equal-cost and unequal-cost load sharing, whereas others support only equal cost. Static routes, which have no metric, support only equal-cost load sharing. Routing protocols such as RIP, OSPF, and IS-IS support only equal-cost load balancing, whereas EIGRP supports both equal- and unequal-cost load balancing. Load sharing is also either per destination or per packet.

Per Destination Load Sharing and Fast Switching

Per destination load balancing distributes the load according to destination address. Given two paths to the same network, all packets for one destination on the network may travel over the first path, all packets for a second destination on the same network may travel over the second path, all packets for a third destination may again be sent over the first path, and so on. This type of load balancing occurs in Cisco routers when they are fast switching, the default Cisco switching mode. Fast switching works as follows: when a router switches the first packet to a particular destination, a route table lookup is performed and an exit interface is selected. The necessary data-link information to frame the packet for the selected interface is then retrieved (from the ARP cache, for instance), and the packet is encapsulated and transmitted. The retrieved route and data-link information is then entered into a fast switching cache, and as subsequent packets to the same destination enter the router, the information in the fast cache allows the router to switch the packet immediately without performing another route table and ARP cache lookup. While switching time and processor utilization are decreased, fast switching means that all packets to a specific destination are routed out the same interface. When a packet addressed to a different host on the same network enters the router and an alternate route exists, the router may send all packets for that destination on the alternate route. Therefore, the best the router can do is balance traffic on a per destination basis.

Per Packet Load Sharing and Process Switching

Per packet load sharing means that one packet to a destination is sent over one link, the next packet to the same destination is sent over the next link, and so on, given equal-cost paths. If the paths are unequal cost, the load balancing may be one packet over the higher-cost link for every three packets over the lower-cost link, or some other proportion depending upon the ratio of costs. Cisco routers will do per packet load balancing when they are process switching. Process switching simply means that for every packet, the router performs a route table lookup, selects an interface, and then looks up the data link information. Because the routing decision is made independently for each packet, all packets to the same destination are not forced to use the same interface.

Loopbacks: A loopback device is a virtual network interface implemented in software only and not connected to any hardware, but it is fully integrated into the router's internal network infrastructure. Any traffic that the router sends to the loopback interface is immediately received on the same interface. Any address can be given to a loopback, and it behaves like a real interface to all other devices: traffic sent to the loopback is equivalent to traffic sent to a real interface or host, and a proper reply is returned to the sender. Since we cannot build large real networks in a testing environment, loopbacks are the tool that helps in creating large virtual networks.

Router(config)#interface loopback ?
  <0-2147483647>  Loopback interface number
Router(config)#interface loopback 1
Router(config-if)#
*Mar 1 00:01:32.399: %LINEPROTO-5-UPDOWN: Line protocol on Interface Loopback1, changed state to up
Router(config-if)#ip address 1.0.0.1 255.255.255.0
Router(config-if)#no shut

Router# show ip interface brief
Interface      IP-Address      OK?    Method    Status    Protocol
Loopback0      unassigned      YES    unset     up        up
Loopback1      1.0.0.1         YES    manual    up        up

As shown above, the loopback interface number on any router can range from 0 to 2147483647; the number identifies the particular loopback. The loopback number is locally significant on the router: it must be different for each loopback on that particular router, but it need not be different from the numbers used on other routers. For example, Loopback 0 can be created on every router in the network, but every loopback must have a different network address.

Router# show ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

     1.0.0.0/24 is subnetted, 1 subnets
C       1.0.0.0 is directly connected, Loopback1

Static Routing

Static routing is the process of an administrator manually adding routes to each router's routing table. There are benefits and disadvantages to every routing approach. Static routing is not really a protocol; it is simply the process of manually entering routes into the routing table via a configuration file that is loaded when the routing device starts up. In these systems, routes through a data network are described by fixed (static) paths, usually entered into the router by the system administrator. An entire network can be configured using static routes, but this type of configuration is not fault tolerant. When there is a change in the network or a failure occurs between two statically defined nodes, traffic will not be rerouted. Anything that wishes to use an affected path must either wait for the failure to be repaired or for the static route to be updated by the administrator before restarting its journey. Most requests will time out (ultimately failing) before these repairs can be made. There are, however, times when static routes make sense and can even improve the performance of a network.

Static routing has the following benefits:
No overhead on the router CPU.
No bandwidth used between routers for routing updates.
Security, because the administrator allows routing only to certain networks.

Static routing has the following disadvantages:
The administrator must really understand the internetwork and how each router is connected in order to configure the routes correctly.
If a network is added to the internetwork, the administrator must add a route to it on all routers.
It is not feasible in large networks because maintaining the routes would be a full-time job.
One major problem with static routing is that the administrator has to select the best route to each network when redundant paths are available.

Static routing works well in small networks, in small chains of routers that cannot take on the additional burden of a routing protocol, and in hub-and-spoke topologies.

In a large network, however, static routing becomes very complicated: the administrator has to design all the best routes and backup routes manually, and the addition of a new router may force the whole design to be revised to make better use of the available resources. The command used to add a static route to a routing table is

Router(config)# ip route [destination-network] [mask] [next-hop-address or exit-interface] [administrative-distance] [permanent]

ip route: The command used to create the static route.
destination network: The network you are placing in the routing table.
mask: The subnet mask being used on the network.
next hop address: The address of the next-hop router that will receive the packet and forward it to the remote network. This is a router interface on a directly connected network. You must be able to ping the router interface before you add the route.
exit interface: Used in place of the next-hop address if desired. Must be on a point-to-point link, such as a WAN; this form does not work on a LAN such as Ethernet.
administrative distance: By default, static routes have an administrative distance of 1. You can change the default value by adding an administrative weight at the end of the command.
permanent: If the interface is shut down or the router cannot communicate with the next-hop router, the route is normally discarded from the routing table automatically. Choosing the permanent option keeps the entry in the routing table no matter what happens.
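For example, the sketch below (addresses and interface are hypothetical) shows the same destination configured twice: once through a next-hop address with the default administrative distance of 1, and once as a floating static route through an exit interface with a distance of 130, which is installed only if the primary entry disappears.

! primary static route, administrative distance 1
R1(config)# ip route 172.16.50.0 255.255.255.0 192.168.12.2
! floating static backup via an exit interface, administrative distance 130
R1(config)# ip route 172.16.50.0 255.255.255.0 serial 0/1 130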

Figure 1

R1(config)# ip route 2.0.0.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 2.0.1.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 3.0.0.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 3.0.1.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 4.0.0.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 4.0.1.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 192.168.23.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 192.168.34.0 255.255.255.0 fastethernet 0/0 192.168.12.2

R2(config)# ip route 1.0.0.0 255.255.255.0 fastethernet 0/0 192.168.12.1
R2(config)# ip route 1.0.1.0 255.255.255.0 fastethernet 0/0 192.168.12.1
R2(config)# ip route 3.0.0.0 255.255.255.0 fastethernet 0/1 192.168.23.2
R2(config)# ip route 3.0.1.0 255.255.255.0 fastethernet 0/1 192.168.23.2
R2(config)# ip route 4.0.0.0 255.255.255.0 fastethernet 0/1 192.168.23.2
R2(config)# ip route 4.0.1.0 255.255.255.0 fastethernet 0/1 192.168.23.2
R2(config)# ip route 192.168.34.0 255.255.255.0 fastethernet 0/1 192.168.23.2

R3(config)# ip route 1.0.0.0 255.255.255.0 fastethernet 0/0 192.168.23.1
R3(config)# ip route 1.0.1.0 255.255.255.0 fastethernet 0/0 192.168.23.1
R3(config)# ip route 2.0.0.0 255.255.255.0 fastethernet 0/0 192.168.23.1
R3(config)# ip route 2.0.1.0 255.255.255.0 fastethernet 0/0 192.168.23.1
R3(config)# ip route 4.0.0.0 255.255.255.0 fastethernet 0/1 192.168.34.2
R3(config)# ip route 4.0.1.0 255.255.255.0 fastethernet 0/1 192.168.34.2
R3(config)# ip route 192.168.12.0 255.255.255.0 fastethernet 0/0 192.168.23.1

R4(config)# ip route 1.0.0.0 255.255.255.0 fastethernet 0/0 192.168.34.1
R4(config)# ip route 1.0.1.0 255.255.255.0 fastethernet 0/0 192.168.34.1
R4(config)# ip route 2.0.0.0 255.255.255.0 fastethernet 0/0 192.168.34.1
R4(config)# ip route 2.0.1.0 255.255.255.0 fastethernet 0/0 192.168.34.1
R4(config)# ip route 3.0.0.0 255.255.255.0 fastethernet 0/0 192.168.34.1
R4(config)# ip route 3.0.1.0 255.255.255.0 fastethernet 0/0 192.168.34.1
R4(config)# ip route 192.168.12.0 255.255.255.0 fastethernet 0/0 192.168.34.1
R4(config)# ip route 192.168.23.0 255.255.255.0 fastethernet 0/0 192.168.34.1

Test your Skill

Configure the routers for static routes. Configure R1's loopback 1.1.1.1 to reach R4 via the path R5, R2, R4, and configure R4's loopback 4.4.4.4 to reach R1 via the path R4, R3, R5.

Default Routing

Default routing is routing in which all packets to unknown destination addresses are sent out a particular interface of the router. That interface acts as the default gateway for the router, and a router can have only one gateway of last resort.

Router(config)# ip route 0.0.0.0 0.0.0.0 (exit interface) (next-hop address) (admin distance)
ip route 0.0.0.0 0.0.0.0 int Serial x/y A.B.C.D 20

The administrative distance is used to set the priority of the default route. This command sets the default gateway of the router, and in the output of the show ip route command the gateway of last resort will be set to the next-hop address of the adjacent router. Default routing works well in a hub-and-spoke topology, in which every spoke router has a default route pointing to the hub router and the hub router is configured with static routes to all of the spoke routers' networks.

Router(config)#ip route 0.0.0.0 0.0.0.0 loopback 0 1.0.0.1 ?
  <1-255>    Distance metric for this route
  name       Specify name of the next hop
  permanent  permanent route
  tag        Set tag for this route
  track      Install route depending on tracked item
  <cr>
Router(config)#ip route 0.0.0.0 0.0.0.0 loopback 0 1.0.0.1 20 ?
  name       Specify name of the next hop
  permanent  Permanent route
  tag        Set tag for this route
  track      Install route depending on tracked item
  <cr>
Router(config)#ip route 0.0.0.0 0.0.0.0 loopback 0 2.0.0.1 20 permanent

Router#show ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is 0.0.0.0 to network 0.0.0.0

     1.0.0.0/24 is subnetted, 1 subnets
C       1.0.0.0 is directly connected, Loopback1
S*   0.0.0.0/0 is directly connected, Null0

Figure 2

R1(config)# ip route 0.0.0.0 0.0.0.0 serial 0/0 192.168.0.2
R3(config)# ip route 0.0.0.0 0.0.0.0 serial 0/0 192.168.1.2

R2(config)# ip route 1.0.0.0 255.255.255.0 serial 0/0 192.168.0.1
R2(config)# ip route 1.0.1.0 255.255.255.0 serial 0/0 192.168.0.1
R2(config)# ip route 1.0.2.0 255.255.255.0 serial 0/0 192.168.0.1
R2(config)# ip route 2.0.0.0 255.255.255.0 serial 0/1 192.168.1.1
R2(config)# ip route 2.0.1.0 255.255.255.0 serial 0/1 192.168.1.1
R2(config)# ip route 2.0.2.0 255.255.255.0 serial 0/1 192.168.1.1

Dynamic Routing

Dynamic routing is the process of routing protocols running on routers and communicating with neighboring routers. The routers update each other about all the networks they know about. If a change occurs in the network, the dynamic routing protocols automatically inform all routers about the change; with static routing, the administrator would be responsible for entering every change by hand on every router. Dynamic routing adjusts automatically to network topology or traffic changes and is also called adaptive routing. The success of dynamic routing depends on two basic router functions:
Maintenance of a routing table
Timely distribution of knowledge, in the form of routing updates, to other routers
Dynamic routing is the process of using protocols to find and update routing tables on routers. This is easier than static or default routing, but it comes at the expense of router CPU cycles and bandwidth on the network links. A routing protocol defines the set of rules used by a router when it communicates with neighboring routers.

Dynamic Routing is of two types:-

Distance Vector Routing Protocols
  Routing Information Protocol (RIP)
  Enhanced Interior Gateway Routing Protocol (EIGRP)
Link State Routing Protocols
  Open Shortest Path First (OSPF)
  Integrated Intermediate System-to-Intermediate System (IS-IS)

Routing protocols are also classified as interior or exterior:
Interior Routing Protocols
  Routing Information Protocol (RIP)
  Enhanced Interior Gateway Routing Protocol (EIGRP)
  Integrated Intermediate System-to-Intermediate System (IS-IS)
  Open Shortest Path First (OSPF)
Exterior Routing Protocols
  Border Gateway Protocol (BGP)

Routing Protocol Basics

All dynamic routing protocols are built around an algorithm. Generally, an algorithm is a step-by-step procedure for solving a problem. A routing algorithm must, at a minimum, specify the following:
A procedure for passing reachability information about networks to other routers
A procedure for receiving reachability information from other routers
A procedure for determining optimal routes based on the reachability information it has, and for recording this information in a route table
A procedure for reacting to, compensating for, and advertising topology changes in an internetwork
A few issues common to any routing protocol are path determination, metrics, convergence, and load balancing.

Figure 3

Path Determination

All networks within an internetwork must be connected to a router, and wherever a router has an interface on a network, that interface must have an address on the network. This address is the originating point for reachability information. The figure above shows a simple three-router internetwork. Router A knows about networks 192.168.1.0, 192.168.2.0, and 192.168.3.0 because it has interfaces on those networks with corresponding addresses and appropriate address masks. Likewise, Router B knows about 192.168.3.0, 192.168.4.0, 192.168.5.0, and 192.168.6.0, and Router C knows about 192.168.6.0, 192.168.7.0, and 192.168.1.0. Each interface implements the data link and physical protocols of the network to which it is attached, so the router also knows the state of the network (up or down). Each router knows about its directly connected networks from its assigned addresses and masks; networks that are not directly connected must be learned via static routing or dynamic routing.
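For example, Router A's directly connected entries might look like the sketch below (the interface names are assumptions; only the network numbers come from the figure):

RouterA# show ip route connected
C    192.168.1.0/24 is directly connected, FastEthernet0/0
C    192.168.2.0/24 is directly connected, FastEthernet0/1
C    192.168.3.0/24 is directly connected, Serial0/0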

Router A examines its IP addresses and associated masks and deduces that it is attached to networks 192.168.1.0, 192.168.2.0, and 192.168.3.0. Router A enters these networks into its route table, along with some sort of flag indicating that the networks are directly connected. Router A then places the information into a packet: "My directly connected networks are 192.168.1.0, 192.168.2.0, and 192.168.3.0." Router A transmits copies of these route information packets, or routing updates, to routers B and C. Routers B and C, having performed the same steps, have sent updates with their directly connected networks to A. Router A enters the received information into its route table, along with the source address of the router that sent each update packet. Router A now knows about all the networks, and it knows the addresses of the routers to which they are attached.

Metrics

When there are multiple routes to the same destination, a router must have a mechanism for calculating the best path. A metric is a variable assigned to routes as a means of ranking them from best to worst or from most preferred to least preferred. Different routing protocols use different metrics, so there is no direct comparison between two or more routing protocols to say which is better; a metric is used only to find the best route within a single routing protocol.

RIP v1 & v2    Hop count
OSPF           Bandwidth
EIGRP          Bandwidth + delay
IS-IS          Reference value

Hop Count

Hop count is simply a count of the number of hops a network is away, where hops are the routers along the way to the network. RIP selects routes on the basis of hop count alone; no other parameter is considered. If the path with the lower hop count is of poor quality and the path with the higher hop count is of good quality, RIP will still always use the lower hop count path rather than the better one.

Figure 4

Bandwidth: A bandwidth metric would choose a higher-bandwidth path over a lower-bandwidth link. However, bandwidth by itself still may not be a good metric. What if one or both of the T1 links are heavily loaded with other traffic and the 56K link is lightly loaded? Or what if the higher-bandwidth link also has a higher delay?
Load: This metric reflects the amount of traffic utilizing the links along the path. The best path is the one with the lowest load.
Delay: Delay is a measure of the time a packet takes to traverse a route. A routing protocol using delay as a metric would choose the path with the least delay as the best path. There may be many ways to measure delay. Delay may take into account not only the delay of the links along the route but also such factors as router latency and queuing delay. On the other hand, the delay of a route may not be measured at all; it may be a sum of static quantities defined for each interface along the path, where each individual delay quantity is an estimate based on the type of link to which the interface is connected.
Reliability: Reliability measures the likelihood that the link will fail in some way and can be either variable or fixed. Examples of variable-reliability metrics are the number of times a link has failed or the number of errors it has received within a certain time period. Fixed-reliability metrics are based on known qualities of a link as determined by the network administrator. The path with the highest reliability would be selected as best.

Cost: This metric is configured by a network administrator to reflect more or less preferred routes. Cost may be defined by any policy or link characteristic or may reflect the arbitrary judgment of the network administrator. The term cost is often used generically when speaking of route choices; for example, "RIP chooses the lowest-cost path based on hop count." Another generic term is shortest, as in "RIP chooses the shortest path based on hop count." When used in this context, lowest-cost (or highest-cost) and shortest (or longest) merely refer to a routing protocol's view of paths based on its specific metric. Cost is the value derived by applying the metric of that particular protocol. For example, RIP uses the hop count directly, while OSPF calculates a value with the formula 100 / bandwidth in Mbps; for a T1 serial link the OSPF cost is 64, and for Fast Ethernet it is 1.

Convergence

A dynamic routing protocol must include a set of procedures for a router to inform other routers about its directly connected networks, to receive and process the same information from other routers, and to pass along the information it receives from other routers. Further, a routing protocol must define a metric by which best paths may be determined. Convergence is the process of all routers agreeing on the new topology after a change. RIP converges very slowly, whereas EIGRP converges very fast. The faster the convergence, the more bandwidth the protocol's updates consume; the slower the convergence, the longer the protocol takes to recover from a failure.

Administrative Distance

Administrative distance is the measure used by Cisco routers to select the best path when there are two or more routes to the same destination learned from two different routing protocols. Administrative distance defines the believability of a routing source: each routing protocol is prioritized in order of most to least reliable (believable) using an administrative distance value, assigned on the basis of the reliability and convergence characteristics of the routing protocol.

Protocol              Default Admin Distance    Protocol          Default Admin Distance
Directly connected    0                         RIP               120
Static route          1                         EGP               140
EIGRP summary route   5                         ODR               160
External BGP          20                        External EIGRP    170
EIGRP                 90                        Internal BGP      200
OSPF                  110                       Unknown           255
IS-IS                 115
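A common way to put administrative distance to work is a floating static route. In the sketch below (addresses are hypothetical), the static route carries a distance of 130, so a RIP-learned route to the same network (distance 120) is preferred as long as it exists; the static entry is installed only if RIP withdraws its route.

! backup route, only used if the RIP route (AD 120) disappears
Router(config)# ip route 172.16.0.0 255.255.0.0 192.168.99.2 130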

Distance Vector Routing Protocols (DVRP)

Most routing protocols fall into one of two classes: distance vector or link state. The basics of distance vector routing protocols are examined here; the next section covers link state routing protocols. Distance vector algorithms are based on the work of R. E. Bellman, L. R. Ford, and D. R. Fulkerson, and for this reason are occasionally referred to as Bellman-Ford or Ford-Fulkerson algorithms.

R. E. Bellman. Dynamic Programming. Princeton, New Jersey: Princeton University Press; 1957.
L. R. Ford Jr. and D. R. Fulkerson. Flows in Networks. Princeton, New Jersey: Princeton University Press; 1962.

The name distance vector is derived from the fact that routes are advertised as vectors of (distance, direction), where distance is defined in terms of a metric and direction is defined in terms of the next-hop router. For example, "Destination A is a distance of 5 hops away, in the direction of next-hop router X." As that statement implies, each router learns routes from its neighboring routers' perspectives and then advertises the routes from its own perspective. Because each router depends on its neighbors for information, which the neighbors in turn may have learned from their neighbors, and so on, distance vector routing is sometimes facetiously referred to as "routing by rumor": each router relies on the information obtained from its neighbors and does not verify it for itself, so there is a potential for loops in the network.

Distance vector routing protocols include the following:
Routing Information Protocol (RIP) for IP
Xerox Networking System's XNS RIP
Novell's IPX RIP
Cisco's Internet Gateway Routing Protocol (IGRP)
DEC's DNA Phase IV
AppleTalk's Routing Table Maintenance Protocol (RTMP)

Common Characteristics

A typical distance vector routing protocol uses a routing algorithm in which routers periodically send routing updates to all neighbors by broadcasting their entire route tables. The preceding statement contains a lot of information; the following sections consider it in more detail.

Periodic Updates

Periodic updates means that at the end of a certain time period, updates will be transmitted. This period typically ranges from 10 seconds for AppleTalk's RTMP to 90 seconds for Cisco's IGRP. At issue here is the fact that if updates are sent too frequently, congestion may occur; if updates are sent too infrequently, convergence time may be unacceptably high.

Neighbors

In the context of routers, neighbors always means routers sharing a common data link. A distance vector routing protocol sends its updates to neighboring routers and depends on them to pass the update information along to their neighbors. For this reason, distance vector routing is said to use hop-by-hop updates.

Broadcast or Multicast Updates

When a router first becomes active on a network, how does it find other routers and how does it announce its own presence? Several methods are available. The simplest is to send the updates to the broadcast address (in the case of IP, 255.255.255.255). Neighboring routers speaking the same routing protocol will hear the broadcasts and

take appropriate action. Hosts and other devices uninterested in the routing updates will simply drop the packets. Nowadays, however, most routing protocols use a multicast address to send their updates to neighboring routers; RIP version 1 is the only one of these protocols that still sends its updates to the broadcast address.

RIP v2     224.0.0.9
OSPF       224.0.0.5, 224.0.0.6
EIGRP      224.0.0.10

Distance vector protocols converge hop by hop. At time t1, the first updates have been received and processed by the routers. Look at R1's table at t1: R2's update to R1 said that R2 can reach networks 10.0.0.0 and 10.0.2.0, both 0 hops away. If the networks are 0 hops from R2, they must be 1 hop from R1. R1 incremented the hop count by 1 and then examined its route table. It already knew about 10.0.0.0, and the hop count (0) was less than the hop count R2 advertised (1), so R1 disregarded that information. Network 10.0.2.0 was new information, however, so R1 entered this in the route table. The source address of the update packet was R2's interface (10.0.0.2), so that information was entered along with the calculated hop count.

Notice that the other routers performed similar operations at the same time t1. R3, for instance, disregarded the information about 10.0.3.0 from R2 and 10.0.4.0 from R4, but entered information about 10.0.0.0, reachable via R2's interface address 10.0.2.1, and 4.0.0.0, reachable via R4's interface 10.0.3.2. Both networks were calculated as 1 hop away.

At time t2, the update period has again expired and another set of updates has been broadcast. R2 sent its latest table; R1 again incremented R2's advertised hop counts by 1 and compared. The information about 10.0.0.0 is again discarded for the same reason as before. 10.0.2.0 is already known, and the hop count hasn't changed, so that information is also discarded. 10.0.3.0 is new information and is entered into the route table. The network is converged at time t3. Every router knows about every network, the address of the next-hop router for every network, and the distance in hops to every network.

Distance vector algorithms provide road signs to networks: they provide the direction and the distance, but no details about what lies along the route. And like the sign at the fork in the trail, they are vulnerable to accidental or intentional misdirection. Following are some of the difficulties and refinements associated with distance vector algorithms.

Route Invalidation Timers

Now that the internetwork in Figure 5 is fully converged, how will it handle reconvergence when some part of the topology changes? If network 4.0.0.0 goes down, the answer is simple enough: R4, in its next scheduled update, flags the network as unreachable and passes the information along. But what if, instead of 4.0.0.0 going down, router R4 fails? Routers R1, R2, and R3 still have entries in their route tables for 4.0.0.0; the information is no longer valid, but there is no router to inform them of this fact. They will unknowingly forward packets to an unreachable destination; a black hole has opened in the internetwork. This problem is handled by setting a route invalidation timer for each entry in the route table. For example, when R3 first hears about 4.0.0.0 and enters the information into its

route table, R3 sets a timer for that route. At every regularly scheduled update from R4, R3 discards the update's already-known information about 4.0.0.0, as described in "Routing by Rumor," but as R3 does so, it resets the timer on that route. If router R4 goes down, R3 will no longer hear updates about 4.0.0.0. The timer will expire, R3 will flag the route as unreachable, and it will pass the information along in the next update. Typical periods for route timeouts range from three to six update periods. A router would not want to invalidate a route after a single missed update, because that may be the result of a corrupted or lost packet or some sort of network delay. At the same time, if the period is too long, reconvergence will be excessively slow.

Figure 5

Link State Routing Protocols


The information available to a distance vector router has been compared to the information available from a road sign. Link state routing protocols are like a road map. A link state router cannot be fooled as easily into making bad routing decisions, because it has a complete picture of the network. The reason is that, unlike the routing-by-rumor approach of distance vector, link state routers have firsthand information from

all their peer routers. Each router originates information about itself, its directly connected links, and the state of those links (hence the name). This information is passed around from router to router, each router making a copy of it, but never changing it. The ultimate objective is that every router has identical information about the internetwork, and each router will independently calculate its own best paths. Link state protocols, sometimes called shortest path first or distributed database protocols, are built around a well-known algorithm from graph theory, E. W. Dijkstra's shortest path algorithm.

Examples of link state routing protocols are:
Open Shortest Path First (OSPF) for IP
The ISO's Intermediate System to Intermediate System (IS-IS) for CLNS and IP
DEC's DNA Phase V
Novell's NetWare Link Services Protocol (NLSP)

Although link state protocols are rightly considered more complex than distance vector protocols, the basic functionality is not complex at all:
1. Each router establishes a relationship, an adjacency, with each of its neighbors.
2. Each router sends link state advertisements (LSAs), sometimes called link state packets (LSPs), to each neighbor. One LSA is generated for each of the router's links, identifying the link, the state of the link, the metric cost of the router's interface to the link, and any neighbors that may be connected to the link. Each neighbor receiving an advertisement in turn forwards (floods) the advertisement to its own neighbors.
3. Each router stores a copy of all the LSAs it has seen in a database. If all works well, the databases in all routers should be identical.
4. The completed topological database, also called the link state database, describes a graph of the internetwork. Using the Dijkstra algorithm, each router calculates the shortest path to each network and enters this information into the route table.

Routing Information Protocol (RIP)

Distance vector protocols, based on the algorithms developed by Bellman, Ford, and Fulkerson, were implemented as early as 1969 in networks such as ARPANET and CYCLADES. In the mid-1970s Xerox developed a protocol called PARC Universal Protocol, or PUP, to run on its 3Mbps experimental predecessor to modern Ethernet. PUP was routed by the Gateway Information Protocol (GWINFO). PUP evolved into the Xerox Network Systems (XNS) protocol suite; concurrently, the Gateway Information Protocol became the XNS Routing Information Protocol. In turn, XNS RIP became the precursor of such common routing protocols as Novell's IPX RIP, AppleTalk's Routing Table Maintenance Protocol (RTMP), and, of course, IP RIP.

The metric for RIP is hop count. The RIP process operates from UDP port 520; all RIP messages are encapsulated in a UDP segment with both the Source and Destination Port fields set to that value. RIP defines two message types:
Request messages: A Request message is used to ask neighboring routers to send an update.
Response messages: A Response message carries the update.
The metric used by RIP is hop count, with 1 signifying a directly connected network of the advertising router. On startup, RIP broadcasts a packet carrying a Request message out each RIP-enabled interface. The RIP process then enters a loop, listening for RIP Request or Response messages from other routers. Neighbors receiving the Request send a Response containing their routing table. When the requesting router receives the Response messages, it processes the enclosed information. If a particular route entry included in the update is new, it is entered into the routing table along with the address of the advertising router, which is read from the source address field of the update packet. If the route is for a network that is already in the table, the existing entry is replaced only if the new route has a lower hop count. If the advertised hop count is higher than the recorded hop count and the update was originated by the recorded next-hop router, the route is marked as unreachable for a specified holddown period (explained in the next section). If at the end of that time the same neighbor is still advertising the higher hop count, the new metric is accepted.

RIP Timers and Stability Features

Asynchronous Updates

Figure 1 shows a group of routers connected to an Ethernet backbone. The routers should not broadcast their updates at the same time; if they do, the update packets will collide. Yet this situation is exactly what can happen when several routers share a broadcast network. System delays related to the processing of updates in the routers tend to cause the update timers to become synchronized. As a few routers become synchronized, collisions will begin to occur, further contributing to system delays, and eventually all routers sharing the broadcast network may become synchronized.

Figure 2

Asynchronous updates may be maintained by one of two methods:
Each router's update timer is independent of the routing process and is, therefore, not affected by processing loads on the router.
A small random time, or timing jitter, is added to each update period as an offset.
If routers implement the method of rigid, system-independent timers, then all routers sharing a broadcast network must be brought online in a random fashion; rebooting the entire group of routers simultaneously could result in all the timers attempting to update at the same time. Adding randomness to the update period is effective if the variable is large enough in proportion to the number of routers sharing the broadcast network. Sally Floyd and Van Jacobson have calculated that a too-small randomization will be overcome by a large enough network of routers, and that to be effective the update timer should range as much as 50% of the median update period.

After startup, the router gratuitously sends a Response message out every RIP-enabled interface every 30 seconds, on average. The Response message, or update, contains the router's full routing table with the exception of entries suppressed by the split horizon rule. The update timer initiating this periodic update includes a random variable to prevent table synchronization. As a result, the time between individual updates from a typical RIP process may be from 25 to 35 seconds. The specific random variable used by Cisco IOS, RIP_JITTER, subtracts up to 15% (4.5 seconds) from the update time. Therefore, updates from Cisco routers vary between 25.5 and 30 seconds. The destination address of the update is the all-hosts broadcast 255.255.255.255.

RIP Packet Format

This router has not heard an update for subnet 10.3.0.0 for more than six update periods. The route has been marked unreachable, but has not yet been flushed from the routing table.

Update Timer       25.5 to 30.0 sec
Invalid Timer      180 sec
Hold-down Timer    180 sec
Flush Timer        240 sec

The invalidation timer limits the amount of time a route can stay in a routing table without being updated. RIP calls this timer the expiration timer, or timeout; Cisco IOS calls it the invalid timer. The expiration timer is initialized to 180 seconds whenever a new route is established and is reset to the initial value whenever an update is heard for that route. If an update for a route is not heard within those 180 seconds (six update periods), the hop count for the route is changed to 16, marking the route as unreachable. Another timer, the garbage collection or flush timer, is set to 240 seconds, 60 seconds longer than the expiration time. The route will be advertised with the unreachable metric until the garbage collection timer expires, at which time the route is removed from the routing table.
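On Cisco IOS these timers can be adjusted under the RIP process with the timers basic command; the sketch below simply restates the usual default values (update, invalid, holddown, and flush, in seconds).

Router(config)# router rip
Router(config-router)# timers basic 30 180 180 240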

Loop Prevention methods


Max-Hop-Count

RIP prevents routing loops from continuing indefinitely by implementing a limit on the number of hops allowed in a path from the source to a destination. The maximum number of hops in a path is 15. If a router receives a routing update that contains a new or changed entry, and if increasing the metric value by 1 causes the metric to be infinity (that is, 16), the network destination is considered unreachable. The downside of this stability feature is that it limits the maximum diameter of a RIP network to less than 16 hops.

Split Horizon

According to the distance vector algorithm as it has been described so far, at every update period each router broadcasts its entire route table to every neighbor. But is this really necessary? Every network known by R1 in Figure 2, with a hop count higher than 0, has been learned from R2. Common sense suggests that for R1 to broadcast the networks it has learned from R2 back to R2 is a waste of resources. Obviously, R2 already knows about those networks. A route pointing back to the router from which packets were received is called a reverse route. Split horizon is a technique for preventing reverse routes between two routers.

Besides not wasting resources, there is a more important reason for not sending reachability information back to the router from which the information was learned. The most important function of a dynamic routing protocol is to detect and compensate for topology changes: if the best path to a network becomes unreachable, the protocol must look for a next-best path. Look yet again at the converged internetwork of Figure 2 and suppose that network 4.0.0.0 goes down. R4 will detect the failure, flag the network as unreachable, and pass the information along to R3 at the next update interval. However, before R4's update timer triggers an update, something unexpected happens: R3's update arrives, claiming that it can reach 4.0.0.0, one hop away! R4 has no way of knowing that R3 is not advertising a legitimate next-best path. It will increment the hop count and make an entry in its route table indicating that 4.0.0.0 is reachable via R3's interface 10.0.3.1, just 2 hops away. Now a packet with a destination address on network 4.0.0.0 arrives at R3. R3 consults its route table and forwards the packet to R4. R4 consults its route table and forwards the packet to R3, R3 forwards it back to R4, and so on, ad infinitum. A routing loop has occurred. Implementing split horizon prevents the possibility of such a routing loop. There are two categories of split horizon: simple split horizon and split horizon with poisoned reverse. The rule for simple split horizon is: when sending updates out a particular interface, do not include networks that were learned from updates received on that interface.
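On Cisco routers, split horizon is an interface-level behavior. The sketch below (the interface name is an assumption) shows how it can be verified and, for special cases such as hub-and-spoke WANs, disabled and re-enabled.

Router# show ip interface Serial0/0 | include Split
  Split horizon is enabled
Router(config)# interface Serial0/0
! disable split horizon on this interface (rarely needed)
Router(config-if)# no ip split-horizon
! restore the default behavior
Router(config-if)# ip split-horizon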

Figure 2

The routers in Figure 2 implement simple split horizon. R3 sends an update to R4 for networks 10.0.0.0, 10.0.3.0, and 1.0.0.0; network 4.0.0.0 is not included because it was learned from R4. Likewise, updates to R2 include 10.0.3.0, 10.0.2.0, and 4.0.0.0, with no mention of 10.0.0.0 or 1.0.0.0. Simple split horizon works by suppressing information. Split horizon with poisoned reverse is a modification that provides more positive information. The rule for split horizon with poisoned reverse is: when sending updates out a particular interface, designate any networks that were learned from updates received on that interface as unreachable.

Triggered Updates

Triggered updates, also known as flash updates, are very simple: if a metric changes for better or for worse, a router will immediately send out an update without waiting for its update timer to expire. Reconvergence will occur far more quickly than if every router had to wait for regularly scheduled updates, and the problem of counting to infinity is greatly reduced, although not completely eliminated. Regular updates may still occur along with triggered updates, so a router might receive bad information about a route from a not-yet-reconverged router after having received correct information from a triggered update. Such a situation shows that confusion and routing errors may still occur while an internetwork is reconverging, but triggered updates help to iron things out more quickly. A further refinement is to include in the update only the networks that actually triggered it, rather than the entire route table. This technique reduces the processing time and the impact on network bandwidth.

Holddown Timers

Triggered updates add responsiveness to a reconverging internetwork; holddown timers introduce a certain amount of skepticism to reduce the acceptance of bad routing information. If the distance to a destination increases (for example, the hop count increases from 2 to 4), the router sets a holddown timer for that route. Until the timer expires, the router will not accept any new updates for the route. Obviously, a trade-off is involved here: the likelihood of bad routing information getting into a table is reduced, but at the expense of reconvergence time. Like other timers, holddown timers must be set with care. If the holddown period is too short, it will be ineffective; if it is too long, normal routing will be adversely affected.

Route Poisoning

Route poisoning is a method of preventing routing loops within a network topology. Distance vector routing protocols use route poisoning to indicate to other routers that a route is no longer reachable and should be removed from their routing tables. A variation of route poisoning is split horizon with poison reverse, whereby a router sends updates with unreachable hop counts back to the sender for every route received, to help prevent routing loops. In RIP, the router sends a metric of 16 hops to the neighboring router, which by default the neighbor treats as unreachable.

Passive Interface

The passive-interface command is used in a routing protocol configuration to suppress updates on a particular interface. It is applied to a router interface connected to a network on which no other router is expected. It is enabled under the routing protocol configuration as below:
Router(config-router)# passive-interface Serial/FastEthernet x/y
Router(config-router)# passive-interface default is used to suppress updates or hello packets on all interfaces.
When this command is used with RIP, all broadcast or multicast updates out of that interface are blocked. If the router on the other side is still sending updates, RIP will receive them and add the information to the routing table, but it will not send any updates out of that interface.

Contiguous and Discontiguous Networks

Contiguous Network: Two or more subnetted portions of the same major network are connected to routers, and the routers are connected to each other by subnets of that same major network. For example:

Figure 4

As shown in Figure 4, the networks on routers R1, R2, and R3 are all subnets of the network 10.0.0.0/8, and the routers are connected to each other with subnets of the same network 10.0.0.0/8.

Discontiguous Networks When two subnetted networks are separated by the different networks with two or more routers it is called Discontiguous networks. For example: -

Figure 5

In Figure 5, networks 10.0.3.0/24 and 10.0.4.0/24 are separated from 10.1.1.0/24 and 10.2.1.0/24 by the networks 1.1.1.0/30 and 2.2.2.0/30. All routing protocols support contiguous networks, but not all routing protocols support discontiguous networks; RIP version 1 does not support discontiguous networks.

Packet Format of RIP Ver1

Command: Indicates whether the packet is a request or a response. The request asks that a router send all or part of its routing table. The response can be an unsolicited regular routing update or a reply to a request. Responses contain routing table entries. Multiple RIP packets are used to convey information from large routing tables.
Version number: Specifies the RIP version used. This field can signal different potentially incompatible versions.
Zero: This field is not actually used by RFC 1058 RIP; it was added solely to provide backward compatibility with pre-standard varieties of RIP. Its name comes from its defaulted value: zero.
Address-family identifier (AFI): Specifies the address family used. RIP is designed to carry routing information for several different protocols. Each entry has an address-family identifier to indicate the type of address being specified. The AFI for IP is 2.
Address: Specifies the IP address for the entry.
Metric: Indicates how many internetwork hops (routers) have been traversed in the trip to the destination. This value is between 1 and 15 for a valid route, or 16 for an unreachable route.

Packet Format of RIP ver2

Command: Indicates whether the packet is a request or a response. The request asks that a router send all or a part of its routing table. The response can be an unsolicited regular routing update or a reply to a request. Responses contain routing table entries. Multiple RIP packets are used to convey information from large routing tables.
Version: Specifies the RIP version used. In a RIP packet implementing any of the RIP 2 fields or using authentication, this value is set to 2.
Unused: Has a value set to zero.
Address-family identifier (AFI): Specifies the address family used. RIPv2's AFI field functions identically to RFC 1058 RIP's AFI field, with one exception: if the AFI for the first entry in the message is 0xFFFF, the remainder of the entry contains authentication information. Currently, the only authentication type is simple password.
Route tag: Provides a method for distinguishing between internal routes (learned by RIP) and external routes (learned from other protocols).
IP address: Specifies the IP address for the entry.
Subnet mask: Contains the subnet mask for the entry. If this field is zero, no subnet mask has been specified for the entry.
Next hop: Indicates the IP address of the next hop to which packets for the entry should be forwarded.
Metric: Indicates how many internetwork hops (routers) have been traversed in the trip to the destination. This value is between 1 and 15 for a valid route, or 16 for an unreachable route.

Classful Networks

When a routing protocol does not send subnet mask information with its update packets, a router receiving an update assumes that the advertising router holds the complete classful network and keeps a classful entry for that route. With discontiguous networks, the receiving router therefore simply selects the advertisement with the lower hop count as the best path to the whole classful network. For example, when R1 sends a RIP version 1 update, it tells R2 that it has the networks 10.0.3.0 and 10.0.4.0, but it does not send the subnet mask information, so R2 assumes that R1 holds the complete network 10.0.0.0/8. Similarly, R2 receives an update from R3 about the networks 10.1.1.0 and 10.2.1.0 and makes the same assumption. Since both advertisements arrive with a hop count of 1, R2 load balances traffic for 10.0.0.0/8 between R1 and R3. As a result, neither side receives all of its packets, and communication between the two subnetted portions of 10.0.0.0/8 on R1 and R3 is not possible.
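Assuming the addressing of Figure 5, R2's table might end up looking something like the sketch below (next-hop addresses and interface names are assumptions): a single classful 10.0.0.0/8 entry with two equal-cost paths, one toward R1 and one toward R3.

R2# show ip route rip
R    10.0.0.0/8 [120/1] via 1.1.1.1, 00:00:12, Serial0/0
               [120/1] via 2.2.2.2, 00:00:08, Serial0/1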

Classless Networks

When a routing protocol sends subnet mask information with its update packets, the receiving router enters the complete information for all the subnets of a particular network along with their subnet masks. In this way subnetted networks, and therefore discontiguous networks, can be supported by the router.

Differences and Similarities Between RIP Version 1 and RIP Version 2

RIP Version 1                                        RIP Version 2
Similarities
Follows all loop prevention methods                  Follows all loop prevention methods
Metric is hop count                                  Metric is hop count
Maximum hop count is 15                              Maximum hop count is 15
Supports contiguous networks                         Supports contiguous networks
Dissimilarities
Classful routing protocol                            Classless routing protocol
Does not send subnet mask information in updates     Sends subnet mask information in updates
Does not support discontiguous networks              Supports discontiguous networks
Does not support VLSM and subnetting                 Supports VLSM and subnetting
Does not support authentication                      Supports authentication
Sends updates by broadcast to 255.255.255.255        Sends updates by multicast to 224.0.0.9
Does not send next-hop information                   Sends next-hop information

Configuring RIP on Cisco Routers

RIP is enabled by giving the router rip command in global configuration mode, followed by network statements for the networks to be advertised. The same network command determines both the interfaces on which updates will be sent and the networks that will be advertised.

Router(config)# router rip
Router(config-router)# network A.B.C.D

The commands above enable only RIP version 1. To enable RIP version 2, add the following commands:

Router(config-router)# version 2
Router(config-router)# no auto-summary

Figure 3

Router 0
router rip
 passive-interface default
 no passive-interface Serial0/0
 no passive-interface Serial0/1
 network 3.0.0.0
 network 1.0.0.0
 network 30.0.0.0
 network 192.168.1.0
 network 192.168.3.0

Router R1
router rip
 passive-interface default
 no passive-interface Serial0/0
 no passive-interface Serial0/1
 network 192.168.1.0
 network 192.168.2.0
 network 2.0.0.0

Router R2
router rip
 passive-interface default
 no passive-interface Serial0/0
 no passive-interface Serial0/1
 network 1.0.0.0
 network 10.0.0.0
 network 20.0.0.0
 network 192.168.0.0
 network 192.168.2.0

Router R3
router rip
 passive-interface default
 no passive-interface Serial0/0
 no passive-interface Serial0/1
 network 192.168.0.0
 network 192.168.3.0

To see the RIP routes:

Show ip route rip (output of R0)

R0#sh ip route rip
     1.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
R       1.0.0.0/8 [120/1] via 192.168.3.2, 00:01:36, Serial0/1
R    20.0.0.0/8 [120/1] via 192.168.3.2, 00:00:16, Serial0/1
R    10.0.0.0/8 [120/1] via 192.168.3.2, 00:00:16, Serial0/1
R    192.168.0.0/24 [120/1] via 192.168.3.2, 00:00:16, Serial0/1
R    192.168.2.0/24 [120/1] via 192.168.3.2, 00:00:16, Serial0/1

As the output above shows, the routing table contains classful networks even though all of the networks are subnetted.

R3#debug ip rip events
RIP event debugging is on
R3#
1 00:15:23.143: RIP: sending v1 update to 255.255.255.255 via Serial0/1 (192.168.3.2)
1 00:15:23.151: RIP: Update contains 4 routes
1 00:15:23.155: RIP: Update queued
1 00:15:23.155: RIP: Update sent via Serial0/1
1 00:15:27.415: RIP: received v1 update from 192.168.3.1 on Serial0/1
1 00:15:27.419: RIP: Update contains 4 routes
1 00:15:29.899: RIP: received v1 update from 192.168.0.2 on Serial0/0
1 00:15:29.907: RIP: Update contains 4 routes
1 00:15:30.607: RIP: sending v1 update to 255.255.255.255 via Serial0/0 (192.168.0.1)
1 00:15:30.615: RIP: Update contains 4 routes
1 00:15:30.615: RIP: Update queued
1 00:15:30.619: RIP: Update sent via Serial0/0

Enabling RIP Ver2

After enabling RIP Ver2 on all routers, the outputs above change as shown below. When enabling RIP Ver2, enable it on all routers in the topology and confirm that they all support RIP Ver2; if some routers are not enabled with Ver2, the routers still running RIP V1 will send only Ver1 updates, and the routers down the line will receive only Ver1 updates.

R0#sh ip route rip
     1.0.0.0/8 is variably subnetted, 8 subnets, 2 masks
R       1.0.1.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
R       1.0.0.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
R       1.0.0.0/8 [120/1] via 192.168.3.2, 00:00:38, Serial0/1
R       1.0.3.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
R       1.0.2.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
     20.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
R       20.0.0.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
R       20.0.0.0/8 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
     10.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
R       10.0.0.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
R       10.0.0.0/8 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
R    192.168.0.0/24 [120/1] via 192.168.3.2, 00:00:11, Serial0/1
R    192.168.2.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1

R3#debug ip rip events
RIP event debugging is on
R3#
1 00:24:42.479: RIP: sending v2 update to 224.0.0.9 via Serial0/1 (192.168.3.2)

1 00:24:42.487: RIP: Update contains 8 routes
1 00:24:42.487: RIP: Update queued
1 00:24:42.491: RIP: Update sent via Serial0/1
1 00:24:50.027: RIP: received v2 update from 192.168.0.2 on Serial0/0
1 00:24:50.035: RIP: Update contains 7 routes
1 00:24:52.319: RIP: sending v2 update to 224.0.0.9 via Serial0/0 (192.168.0.1)
1 00:24:52.327: RIP: Update contains 8 routes
1 00:24:52.327: RIP: Update queued
1 00:24:52.331: RIP: Update sent via Serial0/0
1 00:24:53.599: RIP: received v2 update from 192.168.3.1 on Serial0/1
1 00:24:53.603: RIP: Update contains 7 routes

R1#sh ip route rip
     1.0.0.0/24 is subnetted, 4 subnets
R       1.0.1.0 [120/1] via 192.168.2.2, 00:00:41, Serial0/1
R       1.0.0.0 [120/1] via 192.168.2.2, 00:00:42, Serial0/1
R       1.0.3.0 [120/1] via 192.168.2.2, 00:00:42, Serial0/1
R       1.0.2.0 [120/1] via 192.168.2.2, 00:00:42, Serial0/1
     3.0.0.0/24 is subnetted, 2 subnets
R       3.0.1.0 [120/1] via 192.168.1.1, 00:00:19, Serial0/0
R       3.0.0.0 [120/1] via 192.168.1.1, 00:00:19, Serial0/0
     10.0.0.0/24 is subnetted, 1 subnets
R       10.0.0.0 [120/1] via 192.168.2.2, 00:00:42, Serial0/1
R    192.168.0.0/24 [120/1] via 192.168.2.2, 00:00:42, Serial0/1
R    192.168.3.0/24 [120/1] via 192.168.1.1, 00:00:19, Serial0/0
     30.0.0.0/24 is subnetted, 1 subnets
R       30.0.0.0 [120/1] via 192.168.1.1, 00:00:19, Serial0/0

After the invalid timer expires, routes received from a failed neighbor are marked as possibly down. The output below shows R1 after router R2 stops sending its updates:

R1#sh ip route rip
     1.0.0.0/24 is subnetted, 4 subnets
R       1.0.1.0/24 is possibly down, routing via 192.168.2.2, Serial0/1
R       1.0.0.0/24 is possibly down, routing via 192.168.2.2, Serial0/1
R       1.0.3.0/24 is possibly down, routing via 192.168.2.2, Serial0/1
R       1.0.2.0/24 is possibly down, routing via 192.168.2.2, Serial0/1
     3.0.0.0/24 is subnetted, 2 subnets
R       3.0.1.0 [120/1] via 192.168.1.1, 00:00:19, Serial0/0
R       3.0.0.0 [120/1] via 192.168.1.1, 00:00:19, Serial0/0
     10.0.0.0/24 is subnetted, 1 subnets
R       10.0.0.0/24 is possibly down, routing via 192.168.2.2, Serial0/1
R    192.168.0.0/24 is possibly down, routing via 192.168.2.2, Serial0/1
R    192.168.3.0/24 [120/1] via 192.168.1.1, 00:00:19, Serial0/0
     30.0.0.0/24 is subnetted, 1 subnets
R       30.0.0.0 [120/1] via 192.168.1.1, 00:00:19, Serial0/0

RIP Commands

show ip route
show ip route rip (to see RIP routes only)
clear ip route * (to clear the routing table entries; a new table is then built from fresh updates)
debug ip rip events

Practice your skill.

Test the following: configure the complete topology and check which route each network prefers. Send a continuous ping from R1 to R6's loopback 6.6.0.1, shut down interface F1/0 of R5, and check how many packets are dropped on router R1. How much time does it take for the network to converge?

OPEN SHORTEST PATH FIRST (OSPF)


Link State Routing Protocols

The information available to a distance vector router has been compared to the information available from a road sign. Link state routing protocols are like a road map. A link state router cannot be fooled as easily into making bad routing decisions, because it has a complete picture of the network. The reason is that, unlike the routing-by-rumor approach of distance vector, link state routers have firsthand information from

Examples of link state routing protocols are: Open Shortest Path First (OSPF) for IP The ISO's Intermediate System to Intermediate System (IS-IS) for CLNS and IP DEC's DNA Phase V Novell's NetWare Link Services Protocol (NLSP) Although link state protocols are rightly considered more complex than distance vector protocols, the basic functionality is not complex at all. Link state advertisement 1. Each router establishes a relationshipan adjacencywith each of its neighbors. 2. Each router sends link state advertisements (LSAs), sometimes called link state packets (LSPs), to each neighbor. One LSA is generated for each of the router's links, identifying the link, the state of the link, the metric cost of the router's interface to the link, and any neighbors that may be connected to the link. Each neighbor receiving an advertisement in turn forwards (floods) the advertisement to its own neighbors. 3. Each router stores a copy of all the LSAs it has seen in a database. If all works well, the databases in all routers should be identical. 4. The completed topological database, also called the link state database, describes a graph of the Inter-network. Using the Dijkstra algorithm, each router calculates the shortest path to each network and enters this information into the route table. Neighbors Neighbor discovery is the first step in getting a link state environment up and running. In keeping with the friendly neighbor terminology, a Hello protocol is used for this step. The protocol will define a Hello packet format and a procedure for exchanging the packets and processing the information the packets contain. Router ID At a minimum, the Hello packet will contain a router ID and the address of the network on which the packet is being sent. The router ID is be something by which the router originating the packet can be uniquely distinguished from all other routers; for instance, an IP address from one of the router's interfaces. Other fields of the packet may carry a subnet mask, Hello intervals and a specified maximum period the router will wait to hear a Hello before declaring the neighbor "dead," a descriptor of the circuit type, and flags to help in bringing up adjacencies. 1. Manually assigning the Router-ID in OSPF 2. The router chooses the numerically highest IP address on any of its loopback interfaces. 3. If no loopback interfaces are configured with IP addresses, the router chooses the numerically highest IP address on any of its physically active interfaces. The interface from which the Router ID is taken does not have to be running OSPF. Using addresses associated with loopback interfaces has two advantages: The loopback interface is more stable than any physical interface. It is active when the router boots up and it only fails if the entire router fails. The network administrator has more leeway in assigning predictable or recognizable addresses as the Router IDs. Router-ID is of 32 bit address same as IP address A.B.C.D (Ex: - 1.1.1.1) but it is not important that the router have that IP address to which admin is choosing as router-id manually. At a very high level, the operation of OSPF is easily explained:

At a very high level, the operation of OSPF is easily explained:
OSPF-speaking routers send Hello packets out all OSPF-enabled interfaces. If two routers sharing a common data link agree on certain parameters specified in their respective Hello packets, they become neighbors.
Adjacencies, which may be thought of as virtual point-to-point links, are formed between some neighbors. OSPF defines several network types and several router types. The establishment of an adjacency is determined by the types of routers exchanging Hellos and the type of network over which the Hellos are exchanged.
Each router sends link state advertisements (LSAs) over all adjacencies. The LSAs describe all of the router's links, or interfaces, and the state of the links. These links may be to stub networks (networks with no other router attached), to other OSPF routers, to networks in other areas, or to external networks (networks learned from another routing process). Because of the varying types of link state information, OSPF defines multiple LSA types.
Each router receiving an LSA from a neighbor records the LSA in its link state database and sends a copy of the LSA to all of its other neighbors. By flooding LSAs throughout an area, all routers build identical link state databases.
When the databases are complete, each router uses the SPF algorithm to calculate a loop-free graph describing the shortest (lowest cost) path to every known destination, with itself as the root. This graph is the SPF tree. Each router builds its route table from its SPF tree.
When all link state information has been flooded to all routers in an area (that is, the link state databases have been synchronized) and the route tables have been built, OSPF is a quiet protocol. Hello packets are exchanged between neighbors as keepalives, and LSAs are retransmitted every 30 minutes. If the internetwork topology is stable, no other activity should occur.
Metric and Best Path Selection in OSPF
OSPF's metric is cost, which is derived from interface bandwidth:
Cost = 100 / Bandwidth (Mbps)
The cost of a route is the sum of the costs of the outgoing interfaces, from the calculating router's outgoing interface through every router along the path to the destination.
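For example, with the default 100 Mbps reference bandwidth, a FastEthernet interface (100 Mbps) has a cost of 100/100 = 1, an Ethernet interface (10 Mbps) has a cost of 100/10 = 10, and a T1 serial interface (1.544 Mbps) has a cost of 100/1.544, which IOS truncates to 64 (this matches the Cost: 64 shown on the serial interface output later in this section). A route that leaves the local router through a FastEthernet interface (cost 1) and then leaves the next router through a T1 interface (cost 64) therefore has a total cost of 1 + 64 = 65.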

The Link State Database
In addition to flooding LSAs and discovering neighbors, a third major task of the link state routing protocol is establishing the link state database. The link state or topological database stores the LSAs as a series of records. Although a sequence number, an age, and possibly other information are included in the LSA, these variables exist mainly to manage the flooding process. The important information for the shortest path determination process is the advertising router's ID, its attached networks and neighboring routers, and the cost associated with those networks or neighbors. LSAs may include two types of generic information:
Router link information advertises a router's adjacent neighbors with a triple of (Router ID, Neighbor ID, Cost), where cost is the cost of the link to the neighbor.
Stub network information advertises a router's directly connected stub networks (networks with no neighbors) with a triple of (Router ID, Network ID, Cost).
The shortest path first (SPF) algorithm is run once on the router link information to establish shortest paths to each router, and then the stub network information is used to add these networks to the routers. Figure 1 shows an internetwork of routers and the links between them; stub networks are not shown for the sake of simplicity. Notice that several links have different costs associated with them at each end. A cost is associated with the outgoing direction of an interface. For instance, the link from RB to RC has a cost of 1, but the same link has a cost of 5 in the RC to RB direction.
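The SPF calculation itself can be sketched in a few lines of Python. The topology below is hypothetical except for the asymmetric RB-RC link (cost 1 from RB to RC, cost 5 from RC to RB) taken from the text; it is meant only to show how a router computes shortest paths from the router link triples in its database.

import heapq

# Directed graph: each cost is the cost of the *outgoing* interface toward the neighbor.
links = {
    "RA": {"RB": 2, "RC": 4},
    "RB": {"RA": 2, "RC": 1, "RD": 6},
    "RC": {"RA": 4, "RB": 5, "RD": 3},
    "RD": {"RB": 6, "RC": 3},
}

def spf(root):
    # Dijkstra's shortest path first algorithm, rooted at this router.
    dist = {root: 0}
    visited = set()
    pq = [(0, root)]
    while pq:
        cost, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in links[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(pq, (new_cost, neighbor))
    return dist

print(spf("RA"))   # {'RA': 0, 'RB': 2, 'RC': 3, 'RD': 6}

Because every router holds an identical database, every router arrives at a consistent (though differently rooted) SPF tree.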

Figure 3
Packet Types
Hello Packet
The Hello protocol serves several purposes:
It is the means by which neighbors are discovered.
It advertises several parameters on which two routers must agree before they can become neighbors.
Hello packets act as keepalives between neighbors.
It ensures bi-directional communication between neighbors.
It elects Designated Routers (DRs) and Backup Designated Routers (BDRs) on broadcast and non-broadcast multi-access (NBMA) networks.
Database Description packet
Database Description (DBD) packets carry a summary of the sender's link state database so that the neighbor can determine which LSAs it is missing; they are exchanged while an adjacency is being built.
Link State Request
A Link State Request, as its name suggests, asks the neighbor router for any LSA missing from the local database, and the neighbor replies with the requested information.
Link State Update
A Link State Update is the reply to a Link State Request packet sent by the neighbor; it carries the requested LSAs.
Ack packet
Every packet sent by the OSPF process, except the Hello packet, is acknowledged by the receiving router. If the receiving router does not send the Ack, the sending router retransmits the same update to that router.
Contents of the Hello Packet
Each Hello packet contains the following information:
The Router ID of the originating router.
The Area ID of the originating router interface.
The address mask of the originating interface.
The authentication type and authentication information for the originating interface.
The HelloInterval of the originating interface.
The RouterDeadInterval of the originating interface.
The Router Priority.
The DR and BDR.

The Router IDs of the originating router's neighbors. This list contains only routers from which Hellos were heard on the originating interface within the last RouterDeadInterval.
Authentication acts like a password that two routers must agree on in order to form a secure neighborship.
Contents of the Hello packet (fields marked **** must match on both routers):
Router ID
Area ID           ****
Hello Interval    ****
Dead Interval     ****
Router Priority
DR & BDR
Neighbors
Authentication    ****
Stub Flag         ****

Note: The fields marked with stars (****) above must match for two routers to form a neighborship; otherwise no neighborship will form.
Default values: Hello Interval 10 sec, Dead Interval 40 sec.
The Dead Interval expires when the router misses four consecutive Hellos from a neighbor; the router then assumes that the neighbor is dead. It first informs all of its other active neighbors about the lost neighbor, then clears from its database the LSAs received through that router, runs the SPF algorithm on the remaining database, and rebuilds the routing table.
Types of Links
Broadcast links
Point-to-point links
Non-broadcast multi-access (NBMA) networks (Frame Relay)
Point-to-multipoint networks (Frame Relay)
Virtual links
Point-to-Point
Point-to-point networks, such as a T1 or subrate link, connect a single pair of routers. Valid neighbors on point-to-point networks will always become adjacent. The destination address of OSPF packets on these networks is always the reserved class D address 224.0.0.5, known as AllSPFRouters.
Broadcast
Broadcast networks are multi-access in that they are capable of connecting more than two devices, and they are broadcast in that all attached devices can receive a single transmitted packet. OSPF routers on broadcast networks elect a DR and a BDR, as described in the next section, "Designated Routers and Backup Designated Routers." Hello packets are multicast with the AllSPFRouters destination address 224.0.0.5, as are all OSPF packets originated by the DR and BDR. All other routers multicast link state update and link state acknowledgment packets (described later) to the reserved class D address 224.0.0.6, known as AllDRouters.
Designated and Backup Designated Router Theory
Link State Flooding
After the adjacencies are established, the routers may begin sending out LSAs. As the term flooding implies, the advertisements are sent to every neighbor. In turn, each received LSA is copied and forwarded to every neighbor except the one that sent the LSA. This process is the source of one of link state's advantages over distance vector: LSAs are forwarded almost immediately, whereas distance vector must run its algorithm and update its route table before routing updates, even the triggered ones, can be forwarded. As a result, link state protocols converge much faster than distance vector protocols when the topology changes. Multi-access networks present two problems for OSPF, both relating to the flooding of LSAs:
The formation of an adjacency between every attached router would create many unnecessary LSAs. If n is the number of routers on a multi-access network, there would be n(n - 1)/2 adjacencies (Figure 2).

Each router would flood n - 1 LSAs for its adjacent neighbors, plus one LSA for the network, resulting in n^2 LSAs originating from the network.
Flooding on the network itself would be chaotic. A router would flood an LSA to all its adjacent neighbors, which in turn would flood it to all their adjacent neighbors, creating many copies of the same LSA on the same network.
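For example, on a multi-access segment with 10 attached routers, a full mesh of adjacencies would mean 10(10 - 1)/2 = 45 adjacencies, and with each of the 10 routers originating 10 LSAs, roughly 10^2 = 100 LSAs would originate from that single network.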

Figure 4
Designated Router and Backup Designated Router (DR & BDR)
To prevent these problems, a Designated Router (DR) is elected on multi-access networks. The DR has the following duties:
To represent the multi-access network and its attached routers to the rest of the internetwork
To manage the flooding process on the multi-access network
The concept behind the DR is that the network itself is considered a "pseudonode," or a virtual router. Each router on the network forms an adjacency with the DR (Figure 2), which represents the pseudonode. Only the DR will send LSAs to the rest of the internetwork. Keep in mind that a router might be a DR on one of its attached multi-access networks and not the DR on another of its attached multi-access networks. In other words, the DR is a property of a router's interface, not of the entire router.

Figure 5
DR and BDR selection is done on a per-interface basis: a router can be the DR on one broadcast link, the BDR on another broadcast link, and a DROther on a third link, as shown in Figure 4.

Figure 6
A significant problem with the DR scheme as described so far is that if the DR fails, a new DR must be elected. New adjacencies must be established, and all routers on the network must synchronize their databases with the new DR (part of the adjacency-building process). While all this is happening, the network is unavailable for transit packets. To prevent this problem, a Backup Designated Router (BDR) is elected in addition to the DR. All routers form adjacencies not only with the DR but also with the BDR. The DR and BDR also become adjacent with each other. If the DR fails, the BDR becomes the new DR. Because the other routers on the network are already adjacent with the BDR, network unavailability is minimized.
The DR on a network is selected on the basis of the highest OSPF Router Priority; if two or more routers tie on priority, the router with the highest Router ID becomes the DR. The router with the second highest priority becomes the BDR, and all other routers are DROthers. Once the DR is elected, it is not removed from its DR role unless its interface or the router itself goes down. A router with a higher priority that later joins the broadcast link cannot directly become the DR by displacing the existing one; it must first become BDR and then take over as DR when the current DR goes down, at which point a new BDR is elected. This process runs automatically on broadcast links by default.
Router Priority is an 8-bit value ranging from 0 to 255. The default priority of all Ethernet interfaces is 1, and if a router's interface is configured with priority 0 it cannot take part in the DR and BDR election. Router Priority for DR and BDR selection is configured on a per-interface basis. DR and BDR elections occur only on broadcast links, not on point-to-point links.
Neighborship Formation
Attempt. This state applies only to neighbors on NBMA networks, where neighbors are manually configured. A DR-eligible router will transition a neighbor to the Attempt state when the interface to the neighbor first becomes active or when the router is the DR or BDR. A router sends packets to a neighbor in the Attempt state at the HelloInterval instead of the PollInterval.

Init. This state indicates that a Hello packet has been seen from the neighbor in the last RouterDeadInterval, but two-way communication has not yet been established. A router will include the Router IDs of all neighbors in this state or higher in the Neighbor field of its Hello packets.
2-Way. This state indicates that the router has seen its own Router ID in the Neighbor field of the neighbor's Hello packets, which means that a bidirectional conversation has been established. On multi-access networks, neighbors must be in this state or higher to be eligible for election as the DR or BDR. The reception of a Database Description packet from a neighbor in the Init state will also cause a transition to 2-Way.
ExStart. In this state, the router and its neighbor establish a master/slave relationship and determine the initial DD sequence number in preparation for the exchange of Database Description packets. The neighbor with the higher Router ID becomes the master, and the other becomes the slave.
Exchange. The router sends Database Description packets describing its entire link state database to neighbors that are in the Exchange state. The router may also send Link State Request packets, requesting more recent LSAs, to neighbors in this state. The receiving router acknowledges the DBD packets it receives; the master then requests DBD packets from the slave and acknowledges them in turn.
Loading. The router sends Link State Request packets to neighbors that are in the Loading state, requesting more recent LSAs that were discovered in the Exchange state but have not yet been received.
Full. Neighbors in this state are fully adjacent, and the adjacencies appear in Router LSAs and Network LSAs. After receiving all the DBD packets from the newly formed neighbor, the router sends the new information to its existing neighbors and runs the SPF algorithm on the entire database to build its routing table.
The following output from debug ip ospf adj on R1 shows the DR/BDR election and the adjacency with the DR (2.2.2.4) coming up:
R1#debug ip ospf adj
*Mar 1 00:08:54.251: OSPF: Interface FastEthernet0/0 going Up
*Mar 1 00:08:54.267: OSPF: 2 Way Communication to 2.2.2.4 on FastEthernet0/0, state 2WAY
*Mar 1 00:08:54.267: OSPF: Backup seen Event before WAIT timer on FastEthernet0/0
*Mar 1 00:08:54.267: OSPF: DR/BDR election on FastEthernet0/0
*Mar 1 00:08:54.267: OSPF: Elect BDR 1.1.1.1
*Mar 1 00:08:54.267: OSPF: Elect DR 2.2.2.4
*Mar 1 00:08:54.271: OSPF: Elect BDR 1.1.1.1
*Mar 1 00:08:54.271: OSPF: Elect DR 2.2.2.4
*Mar 1 00:08:54.271: DR: 2.2.2.4 (Id) BDR: 1.1.1.1 (Id)
*Mar 1 00:08:54.271: OSPF: Send DBD to 2.2.2.4 on FastEthernet0/0 seq 0xE5E opt 0x52 flag 0x7 len 32
*Mar 1 00:08:54.755: OSPF: Build router LSA for area 0, router ID 1.1.1.1, seq 0x80000005
*Mar 1 00:08:54.811: %SYS-5-CONFIG_I: Configured from console by console
*Mar 1 00:08:56.235: %LINK-3-UPDOWN: Interface FastEthernet0/0, changed state to up
*Mar 1 00:08:57.235: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0, changed state to up
*Mar 1 00:08:59.275: OSPF: Send DBD to 2.2.2.4 on FastEthernet0/0 seq 0xE5E opt 0x52 flag 0x7 len 32
*Mar 1 00:08:59.275: OSPF: Retransmitting DBD to 2.2.2.4 on FastEthernet0/0 [1]
*Mar 1 00:08:59.355: OSPF: Rcv DBD from 2.2.2.4 on FastEthernet0/0 seq 0x7BE opt 0x52 flag 0x7 len 32 mtu 1500 state EXSTART
*Mar 1 00:08:59.355: OSPF: NBR Negotiation Done. We are the SLAVE
*Mar 1 00:08:59.359: OSPF: Send DBD to 2.2.2.4 on FastEthernet0/0 seq 0x7BE opt 0x52 flag 0x2 len 92
*Mar 1 00:08:59.471: OSPF: Rcv DBD from 2.2.2.4 on FastEthernet0/0 seq 0x7BF opt 0x52 flag 0x3 len 72 mtu 1500 state EXCHANGE
*Mar 1 00:08:59.471: OSPF: Send DBD to 2.2.2.4 on FastEthernet0/0 seq 0x7BF opt 0x52 flag 0x0 len 32
*Mar 1 00:08:59.479: OSPF: Rcv DBD from 2.2.2.4 on FastEthernet0/0 seq 0x7C0 opt 0x52 flag 0x1 len 32 mtu 1500 state EXCHANGE
*Mar 1 00:08:59.479: OSPF: Exchange Done with 2.2.2.4 on FastEthernet0/0
*Mar 1 00:08:59.483: OSPF: Send LS REQ to 2.2.2.4 length 12 LSA count 1
*Mar 1 00:08:59.483: OSPF: Send DBD to 2.2.2.4 on FastEthernet0/0 seq 0x7C0 opt 0x52 flag 0x0 len 32
*Mar 1 00:08:59.483: OSPF: Rcv LS REQ from 2.2.2.4 on FastEthernet0/0 length 48 LSA count 2
*Mar 1 00:08:59.487: OSPF: Send UPD to 1.1.1.2 on FastEthernet0/0 length 72 LSA count 2

*Mar 1 00:08:59.575: OSPF: Rcv LS UPD from 2.2.2.4 on FastEthernet0/0 length 100 LSA count 1
*Mar 1 00:08:59.579: OSPF: Synchronized with 2.2.2.4 on FastEthernet0/0, state FULL
*Mar 1 00:08:59.579: %OSPF-5-ADJCHG: Process 1, Nbr 2.2.2.4 on FastEthernet0/0 from LOADING to FULL, Loading Done
*Mar 1 00:08:59.611: OSPF: Rcv LS UPD from 2.2.2.4 on FastEthernet0/0 length 64 LSA count 1
*Mar 1 00:08:59.683: OSPF: Rcv LS UPD from 2.2.2.4 on FastEthernet0/0 length 60 LSA count 1
*Mar 1 00:08:59.823: OSPF: Rcv LS UPD from 2.2.2.4 on FastEthernet0/0 length 100 LSA count 1
*Mar 1 00:09:00.079: OSPF: Build router LSA for area 0, router ID 1.1.1.1, seq 0x80000006
*Mar 1 00:09:00.135: OSPF: Rcv LS UPD from 2.2.2.4 on FastEthernet0/0 length 64 LSA count 1
*Mar 1 00:09:01.467: OSPF: Neighbor change Event on interface FastEthernet0/0
*Mar 1 00:09:01.467: OSPF: DR/BDR election on FastEthernet0/0
*Mar 1 00:09:01.467: OSPF: Elect BDR 1.1.1.1
*Mar 1 00:09:01.471: OSPF: Elect DR 2.2.2.4
*Mar 1 00:09:01.471: DR: 2.2.2.4 (Id) BDR: 1.1.1.1 (Id)
*Mar 1 00:09:04.583: OSPF: Rcv LS UPD from 2.2.2.4 on FastEthernet0/0 length 60 LSA count 1
R1#sh ip ospf interface f0/0
FastEthernet0/0 is up, line protocol is up
  Internet Address 1.1.1.1/24, Area 0
  Process ID 1, Router ID 1.1.1.1, Network Type BROADCAST, Cost: 1
  Transmit Delay is 1 sec, State BDR, Priority 1
  Designated Router (ID) 2.2.2.4, Interface address 1.1.1.2
  Backup Designated router (ID) 1.1.1.1, Interface address 1.1.1.1
  Timer intervals configured, Hello 10, Dead 40, Wait 40, Retransmit 5
    Hello due in 00:00:02
  Neighbor Count is 1, Adjacent neighbor count is 1
    Adjacent with neighbor 2.2.2.4 (Designated Router)
Concept of Area
By definition OSPF is a link state routing protocol: it holds a map of the complete area and runs the SPF algorithm on that map to compute the best paths. As the area grows, the database grows with it, requiring more RAM and more processing power to calculate the best routes from a large database. This is where the concept of dividing the topology into smaller areas, as convenient, comes in. Routers then belong to their particular areas only; they hold the full database of their own area plus summary routes for the other areas sent by the Area Border Router (ABR). OSPF uses areas to reduce these adverse effects. In the context of OSPF, an area is a logical grouping of OSPF routers and links that effectively divides an OSPF domain into sub-domains (Figure 5). Routers within an area will have no detailed knowledge of the topology outside of their area. Because of this condition:
A router must share an identical link state database only with the other routers in its area, not with the entire internetwork. The reduced size of the database reduces the impact on a router's memory.
The smaller link state databases mean fewer LSAs to process and therefore less impact on the CPU.
Because the link state database must be maintained only within an area, most flooding is also limited to the area.

Figure 7
An OSPF area is a logical grouping of OSPF routers. Each area is described by its own link state database, and each router must maintain a database only for the area to which it belongs.

NOTE
Area ID
Areas are identified by a 32-bit Area ID. As Figure 5 shows, the Area ID may be expressed either as a decimal number or in dotted decimal, and the two formats may be used together on Cisco routers. The choice usually depends on which format is more convenient for identifying the particular Area ID. For example, area 0 and area 0.0.0.0 are equivalent, as are area 16 and area 0.0.0.16, and area 271 and area 0.0.1.15. In each of these cases, the decimal format would probably be preferred. However, given the choice of area 3232243229 and area 192.168.30.29, the latter would probably be chosen.
NOTE
Classifying traffic in relation to areas
Three types of traffic may be defined in relation to areas:
Intra-area traffic consists of packets that are passed between routers within a single area.
Inter-area traffic consists of packets that are passed between routers in different areas.
External traffic consists of packets that are passed between a router within the OSPF domain and a router within another autonomous system.
Backbone
Area ID 0 (or 0.0.0.0) is reserved for the backbone. The backbone is responsible for summarizing the topologies of each area to every other area. For this reason, all inter-area traffic must pass through the backbone; non-backbone areas cannot exchange packets directly. Many OSPF designers have a favorite rule of thumb concerning the maximum number of routers that an area can handle. This number might range from 30 to 200. However, the number of routers has little actual bearing on the maximum size of an area. The backbone also helps avoid loops in the topology by providing the single path through which different areas communicate.
Wild Card Mask
A wildcard mask is basically a mask of bits that indicates which parts of an IP address can assume any value. "Don't care" bits are represented by binary 1s, while "do care" bits are represented by binary 0s. Note that this is the exact opposite of a subnet mask. Wildcard masks provide more flexibility than subnet masks in selecting hosts and networks. For example, 10.0.0.0/24 is a network with subnet mask 255.255.255.0; an easy way to calculate the wildcard mask is to subtract the subnet mask from 255.255.255.255:
255.255.255.255 - 255.255.255.0 = 0.0.0.255
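The same subtraction can be done programmatically. The short Python sketch below (not a Cisco tool) derives the wildcard mask from a prefix length, which is the notation used in the exercises that follow.

def wildcard_from_prefix(prefix_len):
    # Build the subnet mask as a 32-bit value, then subtract it from 255.255.255.255.
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    wildcard = 0xFFFFFFFF - mask
    return ".".join(str((wildcard >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(wildcard_from_prefix(24))   # 0.0.0.255
print(wildcard_from_prefix(28))   # 0.0.0.15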

As an exercise, find the wildcard masks for the following prefixes: 172.31.55.0/28, 192.168.0.0/29, 10.0.23.2/32, 10.0.2.0/14, and 172.16.50.0/12.
OSPF Table Types
Neighbor Table
Database Table
Routing Table
Neighbor Table
This table holds all the information learned about OSPF neighbors: the Hellos received from each neighbor, the neighbors' Router IDs, and the interface on which each neighbor is reachable.
Router#show ip ospf neighbor
Neighbor ID     Pri   State           Dead Time   Address         Interface
203.250.12.1      1   2WAY/DROTHER    0:00:37     203.250.14.3    Ethernet0/0
203.250.15.1      1   FULL/DR         0:00:36     203.250.14.2    Ethernet0/0
203.250.13.1      1   FULL/BDR        0:00:34     203.250.14.1    Ethernet0/0
128.213.10.3      1   FULL/ -         0:01:35     128.213.10.3    Serial0/0
128.213.10.2      1   FULL/ -         0:01:44     128.213.10.2    Serial0/0

Router#sh ip ospf interface serial 0/0
Serial0/0 is up, line protocol is up
  Internet Address 128.213.10.2 255.255.255.0, Area 0
  Process ID 10, Router ID 128.213.10.2, Network Type POINT_TO_MULTIPOINT, Cost: 64
  Transmit Delay is 1 sec, State POINT_TO_POINT,
  Timer intervals configured, Hello 30, Dead 120, Wait 120, Retransmit 5
    Hello due in 0:00:14
  Neighbor Count is 1, Adjacent neighbor count is 1
    Adjacent with neighbor 200.200.10.1

Router#show ip ospf interface e0/0


Ethernet0/0 is up, line protocol is up
  Internet Address 203.250.14.3 255.255.255.0, Area 0.0.0.0
  Process ID 10, Router ID 203.250.12.1, Network Type BROADCAST, Cost: 10
  Transmit Delay is 1 sec, State DROTHER, Priority 1
  Designated Router (ID) 203.250.15.1, Interface address 203.250.14.2
  Backup Designated router (ID) 203.250.13.41, Interface address 203.250.14.1
  Timer intervals configured, Hello 10, Dead 40, Wait 40, Retransmit 5
    Hello due in 0:00:03
  Neighbor Count is 3, Adjacent neighbor count is 2
    Adjacent with neighbor 203.250.15.1 (Designated Router)
    Adjacent with neighbor 203.250.13.41 (Backup Designated Router)

The Link State Database
All valid LSAs received by a router are stored in its link state database. The collected LSAs will describe a graph of the area topology. Because each router in an area calculates its shortest path tree from this database, it is imperative for accurate routing that all area databases are identical.
Router#show ip ospf database
       OSPF Router with ID (203.250.15.67) (Process ID 10)

                Router Link States (Area 1)
Link ID         ADV Router      Age    Seq#        Checksum  Link count
203.250.15.67   203.250.15.67   48     0x80000008  0xB112    2
203.250.16.130  203.250.16.130  212    0x80000006  0x3F44    2

                Summary Net Link States (Area 1)
Link ID         ADV Router      Age    Seq#        Checksum
203.250.13.41   203.250.15.67   602    0x80000002  0x90AA
203.250.15.64   203.250.15.67   620    0x800000E9  0x3E3C
203.250.15.192  203.250.15.67   638    0x800000E5  0xA54E

                Router Link States (Area 0)
Link ID         ADV Router      Age    Seq#        Checksum  Link count
203.250.13.41   203.250.13.41   179    0x80000029  0x9ADA    3
203.250.15.67   203.250.15.67   675    0x800001E2  0xDD23    1

                Net Link States (Area 0)
Link ID         ADV Router      Age    Seq#        Checksum
203.250.15.68   203.250.13.41   334    0x80000001  0xB6B5

                Summary Net Link States (Area 0)
Link ID         ADV Router      Age    Seq#        Checksum
203.250.15.0    203.250.15.67   792    0x80000002  0xAEBD

                Summary ASB Link States (Area 0)
Link ID         ADV Router      Age    Seq#        Checksum
203.250.16.130  203.250.15.67   579    0x80000001  0xF9AF

Routing Table
Same as described earlier.
Configuring OSPF on Cisco Routers
OSPF Process-ID
The OSPF process ID identifies an OSPF process on the router. It is a 16-bit value ranging from 1 to 65535. The process ID is locally significant to the router; one router can run multiple OSPF processes with different process IDs, but this is not recommended, because multiple processes consume extra resources such as RAM and CPU.
Configuration
To enable OSPF on the router:
Router(config)# router ospf <process-id>
To advertise networks in OSPF:
Router(config-router)# network A.B.C.D (wild card mask) area (area-id)
The same command also enables OSPF on any interface whose address falls within the specified network and wildcard mask:
Router(config-router)# network X.Y.Z.A (wild card mask) area (area-id)
Assigning a router-id to the router:
Router(config)# router ospf (process-id)
Router(config-router)# router-id A.B.C.D

Backbone
R0
R0(config)#router ospf 1
R0(config-router)# network 4.0.0.0 0.0.0.255 area 0
R0(config-router)# network 4.0.1.0 0.0.0.255 area 0
R0(config-router)# network 4.0.2.0 0.0.0.255 area 0
R0(config-router)# network 192.168.0.0 0.0.0.255 area 0
R0(config-router)# network 192.168.1.0 0.0.0.255 area 0
R0(config-router)# network 192.168.2.0 0.0.0.255 area 0
R2
R2(config)#router ospf 2
R2(config-router)# network 192.168.0.0 0.0.0.255 area 0
R2(config-router)# network 192.168.4.0 0.0.0.255 area 3
R4
R4(config)#router ospf 4
R4(config-router)# network 1.0.2.1 0.0.0.0 area 2
R4(config-router)# network 1.0.3.0 0.0.0.255 area 2
R4(config-router)# network 10.0.0.0 0.0.0.255 area 2
R4(config-router)# network 20.0.0.0 0.0.0.255 area 2

R1
R1(config)#router ospf 1
R1(config-router)# network 192.168.0.0 0.0.0.255 area 0
R1(config-router)# network 192.168.5.0 0.0.0.255 area 3

R3
R3(config)#router ospf 3
R3(config-router)# network 192.168.1.0 0.0.0.255 area 0
R3(config-router)# network 192.168.3.0 0.0.0.255 area 1
R5
R5(config)#router ospf 5
R5(config-router)# network 1.0.0.0 0.0.0.255 area 1
R5(config-router)# network 1.0.1.0 0.0.0.255 area 1
R5(config-router)# network 2.0.0.0 0.0.0.255 area 1
R5(config-router)# network 2.0.1.0 0.0.0.255 area 1
R5(config-router)# network 192.168.3.0 0.0.0.255 area 1

R4(config-router)# network 192.168.2.0 0.0.0.255 area 0
R6
R6(config)# router ospf 6
R6(config-router)# router-id 3.3.3.3
R6(config-router)# network 3.0.0.0 0.0.0.255 area 3
R6(config-router)# network 3.0.1.0 0.0.0.255 area 3
R6(config-router)# network 3.0.2.0 0.0.0.255 area 3
R6(config-router)# network 192.168.4.0 0.0.0.255 area 3
R6(config-router)# network 192.168.5.0 0.0.0.255 area 3
Commands for OSPF
show ip route - to see the routing table
show ip route ospf - to see only the OSPF routes in the routing table
show ip ospf database - to see the OSPF database
show ip ospf neighbor - to see the neighbors
show ip ospf - to see the OSPF process
show ip ospf neighbor A.B.C.D - to see a particular neighbor's details
show ip ospf interface (interface type) (interface no)

Enhanced Interior Gateway Routing Protocol (EIGRP) Cisco Proprietary


The Diffusing Update Algorithm
The design philosophy behind DUAL is that even temporary routing loops are detrimental to the performance of an internetwork. DUAL uses diffusing computations, first proposed by E. W. Dijkstra and C. S. Scholten, to perform distributed shortest-path routing while maintaining freedom from loops at every instant. Although many researchers have contributed to the development of DUAL, the most prominent work is that of J. J. Garcia-Luna-Aceves.
DUAL: Preliminary Concepts
For DUAL to operate correctly, a lower-level protocol must assure that the following conditions are met:
A node detects within a finite time the existence of a new neighbor or the loss of connectivity with a neighbor.

All messages transmitted over an operational link are received correctly and in the proper sequence within a finite time.
All messages, changes in the cost of a link, link failures, and new-neighbor notifications are processed one at a time within a finite time and in the order in which they are detected.
Cisco's EIGRP uses Neighbor Discovery/Recovery and RTP (the Reliable Transport Protocol) to meet these conditions and establish neighborship.
Neighborship and Reliable Incremental Updates
EIGRP produces reliable updates by identifying its packets using IP protocol 88. Reliable, in a networking context, means that the receiver acknowledges that the transmission was received and understood. EIGRP only repeats itself if an advertisement is lost, so EIGRP is less "chatty" than other protocols. EIGRP uses the following five types of packets to communicate; these packets are encapsulated directly in IP:
Hello - Identifies neighbors. Hellos are sent as periodic multicasts for neighbor discovery/recovery and are not acknowledged.
Update - Conveys the reachability of destinations. When a new neighbor is discovered, unicast update packets are sent so that the neighbor can build up its topology table. In other cases, such as a link-cost change, updates are multicast. Updates are sent only when there is a change and are always transmitted reliably.
Ack - Acknowledges receipt of an update. An acknowledgment packet is a Hello packet with no data; it contains a nonzero acknowledgment number and is always sent to a unicast address.
Query - Asks about routes for which the previous best path has been lost. If an update indicates that a path is down, multicast queries are used to ask other neighbors whether they still have a path. If the querier does not receive a reply from each of its neighbors, it repeats the query as a unicast to each unresponsive neighbor until it either gets a reply or gives up after sixteen tries.
Reply - Answers a query. Each neighbor responds to a query with a unicast reply indicating an alternative path or the fact that it does not have a path.
Sophisticated Metric
EIGRP uses a sophisticated metric that considers bandwidth, load, reliability, and delay. That metric is:
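Written out, the standard EIGRP composite metric is:
metric = 256 * [ K1 * bandwidth + (K2 * bandwidth) / (256 - load) + K3 * delay ] * [ K5 / (reliability + K4) ]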

Although this equation looks intimidating, a little work will help you understand the math and the impact the metric has on route selection. You first need to understand that EIGRP selects the path it calculates to be fastest. To do that, it uses K-values to balance bandwidth and delay. The K-values are constants that adjust the relative contribution of the various parameters to the total metric. In other words, if you wanted delay to be much more important relative to bandwidth, you might set K3 to a much larger number. You next need to understand the variables:

Bandwidth - Bandwidth is defined as 10^7 (10,000,000) divided by the slowest bandwidth along the path, in kbps. Because routing protocols select the lowest metric, inverting the bandwidth (using it as the divisor) makes faster paths have lower costs.
Load and reliability - Load and reliability are 8-bit values calculated from the performance of the link. Both are multiplied by a zero K-value by default, so neither is used.
Delay - Delay is a constant value for each interface type, stored in terms of microseconds. For example, serial links have a delay of 20,000 microseconds and Ethernet links have a delay of 1,000 microseconds. EIGRP uses the sum of all delays along the path, in tens of microseconds.

By default, K1=K3=1 and K2=K4=K5=0. Those who followed the math will note that when K5=0 the metric is always zero. Because this is not useful, EIGRP simply ignores everything outside the parentheses. Therefore, given the default K-values the equation becomes
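metric = 256 * (bandwidth + delay)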

Substituting the earlier description of variables, the equation becomes 10,000,000 divided by the chokepoint bandwidth plus the sum of the delays:
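metric = 256 * ( 10,000,000 / lowest bandwidth along the path in kbps + sum of the delays in tens of microseconds )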

An example of the metric in context will make its application clear. The figure shows a simple network topology, with routers labeled A, B, C, D, and E. Using the metric equation, which path would be used to pass traffic from Router A to Router D?

The top path (A-B-C-D) has a chokepoint bandwidth of 768 kbps and crosses three serial lines; its metric is worked out in the sketch below.

The bottom path (A-E-D) has a chokepoint bandwidth of 512 kbps and crosses two serial lines.
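Here is a small Python sketch of the default-metric arithmetic for these two paths, assuming the 20,000-microsecond serial interface delay mentioned earlier (2,000 tens of microseconds per serial hop) and integer division:

def eigrp_metric(chokepoint_kbps, serial_hops):
    bandwidth = 10_000_000 // chokepoint_kbps   # 10^7 divided by the slowest bandwidth in kbps
    delay = serial_hops * 2_000                 # sum of delays, in tens of microseconds
    return 256 * (bandwidth + delay)

top    = eigrp_metric(768, 3)   # path A-B-C-D: 256 * (13020 + 6000) = 4,869,120
bottom = eigrp_metric(512, 2)   # path A-E-D:   256 * (19531 + 4000) = 6,023,936
print(top, bottom)              # the lower metric wins, so the top path is preferred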

EIGRP chooses the top path based on bandwidth and delay.
Note: Routers will not become EIGRP neighbors unless they share K-values. There really is no compelling reason to change the default K-values, and Cisco does not recommend it.
Adjacency
Upon startup, a router uses Hellos to discover neighbors and to identify itself to neighbors. When a neighbor is discovered, EIGRP attempts to form an adjacency with that neighbor. An adjacency is a virtual link between two neighbors over which route information is exchanged. When adjacencies have been established, the router receives updates from its neighbors. The updates contain all routes known by the sending routers and the metrics of those routes. For each route, the router calculates a distance based on the distance advertised by the neighbor and the cost of the link to that neighbor.
Feasible distance - The lowest calculated metric to each destination becomes the feasible distance (FD) of that destination. For example, a router may be informed of three different routes to subnet 172.16.5.0 and may calculate metrics of 380672, 12381440, and 660868 for the three routes; 380672 becomes the FD because it is the lowest calculated distance.
Feasibility condition - The feasibility condition (FC) is met when a neighbor's advertised distance to a destination is lower than the router's FD for that same destination.
Feasible successor - If a neighbor's advertised distance to a destination meets the FC, the neighbor becomes a feasible successor for that destination. For example, if the FD to subnet 172.16.5.0 is 380672 and a neighbor advertises a route to that subnet with a distance of 355072, the neighbor becomes a feasible successor; if the neighbor advertises a distance of 380928, it does not satisfy the FC and does not become a feasible successor.
Successor - A successor is simply a router that is one hop closer to the destination; in other words, a next-hop router.

The concepts of feasible successors and the FC are central to loop avoidance. Because feasible successors are always "downstream" (that is, at a shorter metric distance to the destination than the FD), a router will never choose a path that leads back through itself; such a path would have a distance larger than the FD. Every destination for which one or more feasible successors exist is recorded in the topology table, along with the following items:
The destination's FD
All feasible successors
Each feasible successor's advertised distance to the destination
The locally calculated distance to the destination via each feasible successor, based on the feasible successor's advertised distance and the cost of the link to that successor
The interface connected to the network on which each feasible successor is found
Actually, the interface is not explicitly recorded in the table; rather, it is an attribute of the neighbor itself. This convention implies that the same router, seen across multiple parallel links, will be viewed by EIGRP as multiple neighbors.
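The following Python sketch applies the feasibility condition to the 172.16.5.0 example above. The neighbor names and the advertised distance of the third route are hypothetical; the other numbers come from the text.

routes = {
    # neighbor: (advertised distance, locally calculated distance)
    "N1": (355072, 380672),
    "N2": (380928, 12381440),
    "N3": (660000, 660868),     # advertised distance assumed for illustration
}

# Feasible distance = lowest locally calculated distance to the destination.
fd = min(local for _adv, local in routes.values())                 # 380672

# Successor = neighbor offering that lowest locally calculated distance.
successor = min(routes, key=lambda n: routes[n][1])                # N1

# Feasible successors = neighbors whose advertised distance is below the FD.
feasible_successors = [n for n, (adv, _local) in routes.items() if adv < fd]

print(fd, successor, feasible_successors)   # 380672 N1 ['N1']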

For every destination listed in the topology table, the route with the lowest metric is chosen and placed into the route table. The neighbor advertising that route becomes the successor, or the next-hop router to which packets for that destination are sent.
Types of Tables
Neighbor Table
Topology Table
Routing Table

Topology Table: Confusingly named, this table does not store an overview of the complete network topology; rather, it effectively contains only the aggregation of the routing tables gathered from all directly connected neighbors. This table contains a list of destination networks in the EIGRP-routed network together with their respective metrics. Also, for every destination, a successor and a feasible successor are identified and stored in the table if they exist. Every destination in the topology table can be marked either as "Passive", which is the state when the routing has stabilized and the router knows the route to the destination, or "Active", when the topology has changed and the router is in the process of (actively) updating its route to that destination.
Neighbor Table: Stores data about the neighboring routers, i.e. those directly accessible through directly connected interfaces. When a router discovers a new neighbor, it records the neighbor's address and interface as an entry in the neighbor table. One neighbor table exists for each protocol-dependent module. When a neighbor sends a hello packet, it advertises a hold time, which is the amount of time that a router treats a neighbor as reachable and operational. If a hello packet is not received within the hold time, the hold time expires and DUAL is informed of the topology change. The neighbor-table entry also includes information required by RTP. Sequence numbers are employed to match acknowledgments with data packets, and the last sequence number received from the neighbor is recorded so that out-of-order packets can be detected. A transmission list is used to queue packets for possible retransmission on a per-neighbor basis. Round-trip timers are kept in the neighbor-table entry to estimate an optimal retransmission interval.
Routing Table: Stores the actual routes to all destinations. The routing table is populated from the topology table with every destination network that has its successor and optionally feasible successor identified (if unequal-cost load balancing is enabled using the variance command). The successors and feasible successors serve as the next-hop routers for these destinations.
Neighbor Discovery and Recovery
Using reliable updates produces two new problems:
The router needs to know how many other routers exist, so it knows how many acknowledgements to expect.
The router needs to know whether a missing advertisement should be interpreted as "no new information" or "neighbor disconnected."
EIGRP uses the concept of neighborship to address these problems. EIGRP produces hellos periodically. The first hellos are used to build a list of neighbors; thereafter, hellos indicate that the neighbor is still alive. If hellos are missed over a long period of time (the hold time), the neighbor is removed from the EIGRP table and routing reconverges. EIGRP starts by discovering its neighbors. Advertisements are multicast, and individual unicast acknowledgements come back. The neighbor table is used to make sure that each neighbor responds.

Unresponsive neighbors receive a follow-up unicast copy, repeatedly, until they acknowledge. If a neighbor is still unresponsive after 16 attempts, the neighbor is removed from the neighbor table and EIGRP continues with its next task. Presumably, the neighbor will at some point be able to communicate again; when it does, it will send a hello and the process of routing with that neighbor will begin again.

To become a neighbor, the following conditions must be met:
The router must hear a Hello packet from the neighbor.
The EIGRP autonomous system number in the Hello must be the same as that of the receiving router.
The K-values used to calculate the metric must be the same.
The authentication settings must match those in the received Hello packet.
Creating the Neighbor Table
Hellos build the local neighbor table. Once neighbor tables are built, hellos continue periodically to maintain neighborship ("I'm still here!"). Each Layer 3 protocol supported by EIGRP (IPv4, IPv6, IPX, and AppleTalk) has its own neighbor table. Information about neighbors, routes, or costs is not shared between protocols.
Contents of the Neighbor Table
The neighbor table includes the following information:
The Layer 3 address of the neighbor.
The interface through which the neighbor's Hello was heard.
The holdtime, or how long the neighbor table waits without hearing a Hello from a neighbor before declaring the neighbor unavailable and purging the database. The holdtime is three times the value of the Hello timer by default.
The uptime, or period since the router first heard from the neighbor.
The sequence numbers. The neighbor table tracks all the packets sent between the neighbors; it tracks both the last sequence number sent to the neighbor and the last sequence number received from the neighbor.
The retransmission timeout (RTO), which is the time the router will wait on a connection-oriented protocol without an acknowledgment before retransmitting the packet.
The smooth round-trip time (SRTT), from which the RTO is calculated. SRTT is the time (in milliseconds) that it takes for a packet to be sent to a neighbor and a reply to be received.
The number of packets in the queue, which gives administrators a way to monitor congestion on the network.

R0#sh ip eigrp neighbors
IP-EIGRP neighbors for process 1
H   Address         Interface   Hold   Uptime     SRTT    RTO    Que   Seq
                                (sec)             (ms)           Cnt   Num
1   192.168.1.2     Se0/0       14     00:00:19   125     750    0     14
0   192.168.0.2     Se0/1       11     00:00:21   168     1008   0     13

Because EIGRP updates are non-periodic, it is especially important to have a process whereby neighbors (EIGRP-speaking routers on directly connected networks) are discovered and tracked. On most networks, Hellos are multicast every 5 seconds, minus a small random time to prevent synchronization; they are multicast to the address 224.0.0.10. On multipoint X.25, Frame Relay, and ATM interfaces with access link speeds of T1 or slower, Hellos are unicast every 60 seconds. In all cases, the Hellos are unacknowledged. When a router receives a Hello packet from a neighbor, the packet includes a hold time. The hold time tells the router the maximum time it should wait to receive subsequent Hellos. If the hold timer expires, the neighbor is assumed to be down, and the router sends query packets to its other neighbors for the networks that were reachable through the lost neighbor. The hold timer is 15 seconds by default.
Hello Interval: 5 sec
Hold Timer: 15 sec
Hellos are multicast to the address 224.0.0.10
IP protocol number used for EIGRP packets: 88
The DUAL Finite State Machine

When an EIGRP router is performing no diffusing computations, each route is in the passive state. Referring to any of the topology tables in the previous section, a key of P to the left of each route indicates a passive state. A router will reassess its list of feasible successors for a route, as described in the last section, any time an input event occurs. An input event can be:
A change in the cost of a directly connected link
A change in the state (up or down) of a directly connected link
The reception of an update packet
The reception of a query packet
The reception of a reply packet
The first step in the reassessment is a local computation, in which the distance to the destination is recalculated for all feasible successors. The possible results are:
If the feasible successor with the lowest distance is different from the existing successor, the feasible successor will become the successor.
If the new distance is lower than the FD, the FD will be updated.
If the new distance is different from the existing distance, updates will be sent to all neighbors.
While the router is performing a local computation, the route remains in the passive state. If a feasible successor is found, an update is sent to all neighbors and no state change occurs. If a feasible successor cannot be found in the topology table, the router will begin a diffusing computation and the route will change to the active state. Until the diffusing computation is completed and the route transitions back to the passive state, the router cannot:
Change the route's successor
Change the distance it is advertising for the route
Change the route's FD
Begin another diffusing computation for the route
A router begins a diffusing computation by sending queries to all of its neighbors (Figure 2). The query contains the new locally calculated distance to the destination. Each neighbor, upon receipt of the query, will perform its own local computation:

Figure 2
If the neighbor has one or more feasible successors for the destination, it will send a reply to the originating router. The reply contains that neighbor's minimum locally calculated distance to the destination. If the neighbor does not have a feasible successor, it too will change the route to the active state and will begin a diffusing computation.
For each neighbor to which a query is sent, the router sets a reply status flag (r) to keep track of all outstanding queries. The diffusing computation is complete when the router has received a reply to every query sent to every neighbor. In some cases, a router does not receive a reply to every query sent; for example, this may happen in large networks with many low-bandwidth or low-quality links. At the beginning of the diffusing computation, an Active timer is set for 3 minutes. If all expected replies are not received before the Active timer expires, the route is declared Stuck-in-Active (SIA). The neighbor or neighbors that did not reply are removed from the neighbor table, and the diffusing computation considers the neighbor to have responded with an infinite metric.
Stuck-in-Active Neighbors
When a route goes active and queries are sent to neighbors, the route will remain active until a reply is received for every query. But what happens if a neighbor is dead or otherwise incapacitated and cannot reply? The route would stay permanently active. The Active timer is designed to prevent this situation. The timer is set when a query is sent. If the timer expires before a reply to the query is received, the route is declared Stuck-in-Active, the neighbor is presumed dead, and it is flushed from the neighbor table. The SIA route and any other routes via that neighbor are eliminated from the route table. DUAL is satisfied by considering the neighbor to have replied with an infinite metric.
Concept of Autonomous System (AS) Number
Routers in the same administrative domain are identified and grouped by a common AS number; in other words, the devices under one administration form an autonomous system. This 16-bit number is arbitrary. Organizations that have a BGP AS will sometimes use that number; others just make up a number or use "AS 1." The significance of the AS is that a router will not become a neighbor with a router in a foreign AS.
Unique Features of EIGRP

Unequal-cost load balancing is supported only by EIGRP.
EIGRP keeps backup routes (feasible successors) in its topology table.
A query packet is used to search for a lost network when no backup route is available.
Configuring EIGRP on Cisco Routers
First select the AS number for the network. To enable EIGRP on the router, use the following commands:

Router(config)# router eigrp <AS no>     (AS number range 1 - 65535)
Router(config-router)# network A.B.C.D (wild card bits)
Router(config-router)# no auto-summary
The same network statement both advertises a network and enables EIGRP on the matching interfaces. By default EIGRP performs auto-summarization; adding no auto-summary makes it send updates with subnet mask information included in the packet.

R0
R0(config)#router eigrp 1
R0(config-router)# network 1.0.0.0 0.0.0.255
R0(config-router)# network 1.0.1.0 0.0.0.255
R0(config-router)# network 1.0.2.0 0.0.0.255
R0(config-router)# network 1.0.4.0 0.0.0.255
R0(config-router)# network 1.0.5.0 0.0.0.255
R0(config-router)# network 192.168.0.0
R0(config-router)# network 192.168.1.0
R0(config-router)# no auto-summary

R2
R2(config)#router eigrp 1
R2(config-router)# network 192.168.0.0
R2(config-router)# network 192.168.2.0
R2(config-router)# network 192.168.4.0
R2(config-router)# auto-summary

R1
R1(config)#router eigrp 1
R1(config-router)# network 192.168.1.0
R1(config-router)# network 192.168.3.0
R1(config-router)# auto-summary

R3
R3(config)#router eigrp 1
R3(config-router)# network 192.168.2.0
R3(config-router)# network 192.168.3.0
R3(config-router)# network 2.0.0.0 0.0.0.255
R3(config-router)# network 20.0.0.0 0.0.0.255
R3(config-router)# no auto-summary

R1#sh ip route eigrp 1
D    1.0.0.0/8 [90/2297856] via 192.168.1.1, 00:00:16, Serial0/0
D    2.0.0.0/8 [90/409600] via 192.168.3.2, 00:09:14, Ethernet1/0
D    20.0.0.0/8 [90/409600] via 192.168.3.2, 00:09:14, Ethernet1/0
D    192.168.0.0/24 [90/2681856] via 192.168.4.2, 00:07:51, Serial0/1
                    [90/2681856] via 192.168.1.1, 00:07:51, Serial0/0
D    192.168.2.0/24 [90/2195456] via 192.168.3.2, 00:07:51, Ethernet1/0
When the no auto-summary command is added to the routers:
R1#sh ip route eigrp 1
     1.0.0.0/24 is subnetted, 8 subnets
D       1.0.1.0 [90/2297856] via 192.168.1.1, 00:00:49, Serial0/0
D       1.0.0.0 [90/2297856] via 192.168.1.1, 00:00:49, Serial0/0
D       1.0.3.0 [90/2297856] via 192.168.1.1, 00:00:49, Serial0/0
D       1.0.2.0 [90/2297856] via 192.168.1.1, 00:00:49, Serial0/0
D       1.0.5.0 [90/2297856] via 192.168.1.1, 00:00:49, Serial0/0
D       1.0.4.0 [90/2297856] via 192.168.1.1, 00:00:49, Serial0/0
D       1.0.7.0 [90/2297856] via 192.168.1.1, 00:00:49, Serial0/0
D       1.0.6.0 [90/2297856] via 192.168.1.1, 00:00:49, Serial0/0
     2.0.0.0/24 is subnetted, 1 subnets
D       2.0.0.0 [90/409600] via 192.168.3.2, 00:00:06, Ethernet1/0
     20.0.0.0/24 is subnetted, 1 subnets
D       20.0.0.0 [90/409600] via 192.168.3.2, 00:00:06, Ethernet1/0
D    192.168.0.0/24 [90/2681856] via 192.168.4.2, 00:10:08, Serial0/1
                    [90/2681856] via 192.168.1.1, 00:10:08, Serial0/0
D    192.168.2.0/24 [90/2195456] via 192.168.3.2, 00:10:08, Ethernet1/0
Graceful Restart
When a router running EIGRP resets its neighborship with its neighbors, it sends a final hello packet (a goodbye message) to each neighbor and then restarts the neighborship process. When a neighbor receives this message, it does not send query packets to its own neighbors for the routes lost through the restarting router. As shown below, when R0 clears its neighbors it sends a goodbye to its neighbor routers.
R0#clear ip eigrp neighbors
R0#
*Mar 1 00:24:51.323: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 1: Neighbor 192.168.0.2 (Serial0/1) is down: manually cleared
*Mar 1 00:24:51.347: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 1: Neighbor 192.168.1.2 (Serial0/0) is down: manually cleared
*Mar 1 00:24:51.491: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 1: Neighbor 192.168.0.2 (Serial0/1) is up: new adjacency
*Mar 1 00:24:51.623: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 1: Neighbor 192.168.1.2 (Serial0/0) is up: new adjacency

R1#
*Mar 1 00:24:14.799: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 1: Neighbor 192.168.1.1 (Serial0/0) is down: Interface Goodbye received
*Mar 1 00:24:18.223: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 1: Neighbor 192.168.1.1 (Serial0/0) is up: new adjacency

R2#
*Mar 1 00:20:43.619: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 1: Neighbor 192.168.0.1 (Serial0/0) is down: Interface Goodbye received
*Mar 1 00:20:45.543: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 1: Neighbor 192.168.0.1 (Serial0/0) is up: new adjacency

R3#sh ip eigrp topology
IP-EIGRP Topology Table for AS(1)/ID(20.0.0.1)
Codes: P - Passive, A - Active, U - Update, Q - Query, R - Reply,
       r - reply Status, s - sia Status
P 1.0.1.0/24, 1 successors, FD is 2323456
        via 192.168.3.1 (2323456/2297856), Ethernet1/0
        via 192.168.2.1 (2809856/2297856), Serial0/0    <-- Backup Path
P 1.0.0.0/24, 1 successors, FD is 2323456
        via 192.168.3.1 (2323456/2297856), Ethernet1/0
        via 192.168.2.1 (2809856/2297856), Serial0/0
P 1.0.3.0/24, 1 successors, FD is 2323456
        via 192.168.3.1 (2323456/2297856), Ethernet1/0
        via 192.168.2.1 (2809856/2297856), Serial0/0
P 2.0.0.0/24, 1 successors, FD is 128256
        via Connected, Loopback0
P 1.0.2.0/24, 1 successors, FD is 2323456
        via 192.168.3.1 (2323456/2297856), Ethernet1/0
        via 192.168.2.1 (2809856/2297856), Serial0/0
P 1.0.5.0/24, 1 successors, FD is 2323456
        via 192.168.3.1 (2323456/2297856), Ethernet1/0
        via 192.168.2.1 (2809856/2297856), Serial0/0
P 1.0.4.0/24, 1 successors, FD is 2323456
        via 192.168.3.1 (2323456/2297856), Ethernet1/0
        via 192.168.2.1 (2809856/2297856), Serial0/0
P 1.0.7.0/24, 1 successors, FD is 2323456
        via 192.168.3.1 (2323456/2297856), Ethernet1/0
        via 192.168.2.1 (2809856/2297856), Serial0/0
P 1.0.6.0/24, 1 successors, FD is 2323456
        via 192.168.3.1 (2323456/2297856), Ethernet1/0
        via 192.168.2.1 (2809856/2297856), Serial0/0
P 20.0.0.0/24, 1 successors, FD is 128256
        via Connected, Loopback2
P 192.168.0.0/24, 1 successors, FD is 2681856
        via 192.168.2.1 (2681856/2169856), Serial0/0
P 192.168.1.0/24, 1 successors, FD is 2195456
        via 192.168.3.1 (2195456/2169856), Ethernet1/0
P 192.168.2.0/24, 1 successors, FD is 2169856
        via Connected, Serial0/0
P 192.168.3.0/24, 1 successors, FD is 281600
        via Connected, Ethernet1/0
P 192.168.4.0/24, 1 successors, FD is 2195456
        via 192.168.3.1 (2195456/2169856), Ethernet1/0
        via 192.168.2.1 (2681856/2169856), Serial0/0
Commands
show ip eigrp topology
show ip eigrp neighbors

Frame-Relay

Frame Relay is a high-performance WAN protocol that operates at the physical and data link layers of the OSI reference model. Frame Relay originally was designed for use across Integrated Services Digital Network (ISDN) interfaces. Today, it is used over a variety of other network interfaces as well. This chapter focuses on Frame Relay's specifications and applications in the context of WAN services. Frame Relay is an example of a packet-switched technology. Packet-switched networks enable end stations to dynamically share the network medium and the available bandwidth.
Frame Relay Devices
Devices attached to a Frame Relay WAN fall into the following two general categories:
Data terminal equipment (DTE)
Data circuit-terminating equipment (DCE)
DTEs generally are considered to be terminating equipment for a specific network and typically are located on the premises of a customer; in fact, they may be owned by the customer. Examples of DTE devices are terminals, personal computers, routers, and bridges. DCEs are carrier-owned internetworking devices that provide clocking and switching services in a network; they are the devices that actually transmit data through the WAN. In most cases, these are packet switches. Figure 1 shows the relationship between the two categories of devices.

Committed Information Rate (CIR)
Frame Relay connections are often given a committed information rate (CIR) and an allowance of burstable bandwidth known as the extended information rate (EIR). The provider guarantees that the connection will always support the CIR, and supports the EIR when there is adequate bandwidth. Frames sent in excess of the CIR are marked as discard eligible (DE), which means they can be dropped should congestion occur within the Frame Relay network. Frames sent in excess of the EIR are dropped immediately. (A small sketch of this classification logic appears at the end of this section.)
Frame Relay Virtual Circuits
Frame Relay provides connection-oriented data link layer communication. This means that a defined communication exists between each pair of devices and that these connections are associated with a connection identifier. This service is implemented by using a Frame Relay virtual circuit, which is a logical connection created between two data terminal equipment (DTE) devices across a Frame Relay packet-switched network (PSN). Virtual circuits provide a bidirectional communication path from one DTE device to another and are uniquely identified by a data-link connection identifier (DLCI). A number of virtual circuits can be multiplexed into a single physical circuit for transmission across the network. This capability often can reduce the equipment and network complexity required to connect multiple DTE devices. A virtual circuit can pass through any number of intermediate DCE devices (switches) located within the Frame Relay PSN. Frame Relay virtual circuits fall into two categories:
Switched virtual circuits (SVCs)
Permanent virtual circuits (PVCs)
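As a rough illustration of the CIR/EIR/DE policy just described (this is not vendor code; real switches police per-interval byte counts rather than a single offered rate, and the thresholds below are invented):

def classify_frame(offered_rate_kbps, cir_kbps, eir_kbps):
    # Within the committed rate: always forwarded.
    if offered_rate_kbps <= cir_kbps:
        return "forward"
    # Between CIR and EIR: forwarded, but marked discard eligible.
    if offered_rate_kbps <= eir_kbps:
        return "forward, mark DE"
    # Beyond the EIR: dropped immediately.
    return "drop"

print(classify_frame(100, 128, 256))   # forward
print(classify_frame(200, 128, 256))   # forward, mark DE
print(classify_frame(300, 128, 256))   # drop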

Switched Virtual Circuits
Switched virtual circuits (SVCs) are temporary connections used in situations requiring only sporadic data transfer between DTE devices across the Frame Relay network. A communication session across an SVC consists of the following four operational states:
Call setup - The virtual circuit between two Frame Relay DTE devices is established.
Data transfer - Data is transmitted between the DTE devices over the virtual circuit.
Idle - The connection between DTE devices is still active, but no data is transferred. If an SVC remains in an idle state for a defined period of time, the call can be terminated.
Call termination - The virtual circuit between DTE devices is terminated.
After the virtual circuit is terminated, the DTE devices must establish a new SVC if there is additional data to be exchanged. It is expected that SVCs will be established, maintained, and terminated using the same signaling protocols used in ISDN. Historically, few manufacturers of Frame Relay DCE equipment supported switched virtual circuits, so their actual deployment was minimal. SVC support has since become much more common, and companies have found that SVCs can save money because the circuit is not open all the time.
Permanent Virtual Circuits
Permanent virtual circuits (PVCs) are permanently established connections that are used for frequent and consistent data transfers between DTE devices across the Frame Relay network. Communication across a PVC does not require the call setup and termination states that are used with SVCs. PVCs always operate in one of the following two operational states:
Data transfer - Data is transmitted between the DTE devices over the virtual circuit.
Idle - The connection between DTE devices is active, but no data is transferred. Unlike SVCs, PVCs will not be terminated under any circumstances when in an idle state.
DTE devices can begin transferring data whenever they are ready because the circuit is permanently established.
Data-Link Connection Identifier (DLCI)
Frame Relay virtual circuits are identified by data-link connection identifiers (DLCIs). DLCI values typically are assigned by the Frame Relay service provider (for example, the telephone company). Frame Relay DLCIs have local significance, which means that their values are unique on the local access link between the DTE and the switch, but not necessarily throughout the Frame Relay WAN.
Example

Congestion-Control Mechanisms
Frame Relay reduces network overhead by implementing simple congestion-notification mechanisms rather than explicit, per-virtual-circuit flow control. Frame Relay typically is implemented on reliable network media, so data integrity is not sacrificed when flow control is left to higher-layer protocols. Frame Relay implements two congestion-notification mechanisms:

Forward-explicit congestion notification (FECN)
Backward-explicit congestion notification (BECN)

FECN and BECN are each controlled by a single bit in the Frame Relay frame header. The header also contains a Discard Eligibility (DE) bit, which is used to identify less important traffic that can be dropped during periods of congestion.

The FECN bit is part of the Address field in the Frame Relay frame header. The FECN mechanism is initiated when a DTE device sends Frame Relay frames into the network. If the network is congested, DCE devices (switches) set the value of the frames' FECN bit to 1. When the frames reach the destination DTE device, the Address field (with the FECN bit set) indicates that the frame experienced congestion on the path from source to destination. The DTE device can relay this information to a higher-layer protocol for processing. Depending on the implementation, flow control may be initiated, or the indication may be ignored.

The BECN bit is also part of the Address field. DCE devices set the value of the BECN bit to 1 in frames traveling in the opposite direction of frames with their FECN bit set. This informs the receiving DTE device that a particular path through the network is congested. The DTE device can then relay this information to a higher-layer protocol for processing. Depending on the implementation, flow control may be initiated, or the indication may be ignored. A sketch of one possible DTE reaction follows.
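As a rough illustration, the following Python sketch shows one way a DTE could react to a BECN by throttling its offered rate. The 25% back-off, the rate values, and the function name are assumptions; as noted above, real implementations may ignore these bits entirely and leave flow control to higher layers.

```python
# Sketch of a DTE (router) adjusting an offered-rate estimate in response to
# FECN/BECN indications. The throttling policy is an illustrative assumption.

def adjust_rate(current_rate_bps, fecn, becn, floor_bps=64_000, ceiling_bps=2_048_000):
    if becn:
        # Congestion reported on the path we transmit into: back off.
        return max(floor_bps, int(current_rate_bps * 0.75))
    if fecn:
        # Congestion seen in the direction toward us; the receiver could echo
        # this back to the sender, but locally we make no change here.
        return current_rate_bps
    # No congestion indication: probe slowly back up toward the ceiling.
    return min(ceiling_bps, current_rate_bps + 16_000)

rate = 512_000
rate = adjust_rate(rate, fecn=False, becn=True)
print(rate)   # 384000: rate reduced after a BECN
```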

Frame Relay Discard Eligibility
The Discard Eligibility (DE) bit is used to indicate that a frame has lower importance than other frames. The DE bit is part of the Address field in the Frame Relay frame header. DTE devices can set the DE bit of a frame to 1 to indicate that the frame has lower importance than other frames. When the network becomes congested, DCE devices discard frames with the DE bit set before discarding those that do not have it set. This reduces the likelihood of critical data being dropped by Frame Relay DCE devices during periods of congestion.

Frame Relay Local Management Interface
The Local Management Interface (LMI) is a set of enhancements to the basic Frame Relay specification. It offers a number of features (called extensions) for managing complex internetworks. Key Frame Relay LMI extensions include global addressing, virtual circuit status messages, and multicasting. With the global addressing extension, DLCI values become DTE addresses that are unique in the Frame Relay WAN; in addition, the entire Frame Relay network appears to be a typical LAN to routers on its periphery. LMI virtual circuit status messages provide communication and synchronization between Frame Relay DTE and DCE devices. These messages are used to periodically report on the status of PVCs, which prevents data from being sent into black holes (that is, over PVCs that no longer exist). A sketch of this status tracking follows the list of LMI types.

Types of LMI
Cisco (Cisco proprietary)
ANSI (industry standard)
Q.933a (industry standard)
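The following Python sketch illustrates, under simplified assumptions, how a DTE might use periodic LMI status reports to avoid black-holing traffic. The message structure and status strings are invented for illustration and are not the real LMI encoding.

```python
# Simplified sketch: a router keeps a table of PVC states learned from
# periodic LMI full-status reports and only sends over PVCs last reported
# ACTIVE, so traffic is not sent into a black hole.

pvc_status = {}   # local DLCI -> 'ACTIVE', 'INACTIVE', or 'DELETED'

def process_full_status_report(report):
    """report: dict of {dlci: status} taken from one LMI full-status message."""
    pvc_status.clear()
    pvc_status.update(report)

def can_send(dlci):
    return pvc_status.get(dlci) == "ACTIVE"

process_full_status_report({102: "ACTIVE", 103: "INACTIVE", 104: "ACTIVE"})
print(can_send(103))   # False: traffic for DLCI 103 would otherwise be black-holed
```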

The Frame Relay frame contains the following fields:

Flags: Delimits the beginning and end of the frame. The value of this field is always the same and is represented either as the hexadecimal number 7E or as the binary number 01111110.

Address: Contains the following information:

DLCI: The 10-bit DLCI is the essence of the Frame Relay header. This value represents the virtual connection between the DTE device and the switch. Each virtual connection that is multiplexed onto the physical channel is represented by a unique DLCI. The DLCI values have local significance only, which means that they are unique only to the physical channel on which they reside. Therefore, devices at opposite ends of a connection can use different DLCI values to refer to the same virtual connection.

Extended Address (EA): The EA bit is used to indicate whether the byte in which the EA value is 1 is the last addressing field. If the value is 1, the current byte is determined to be the last DLCI octet. Although current Frame Relay implementations all use a two-octet DLCI, this capability does allow longer DLCIs to be used in the future. The eighth bit of each byte of the Address field is used to indicate the EA.

C/R: The C/R bit follows the most significant DLCI byte in the Address field. The C/R bit is not currently defined.

Congestion Control: Consists of the 3 bits that control the Frame Relay congestion-notification mechanisms: the FECN, BECN, and DE bits, which are the last 3 bits in the Address field.

Forward-explicit congestion notification (FECN) is a single-bit field that can be set to a value of 1 by a switch to indicate to an end DTE device, such as a router, that congestion was experienced in the direction of the frame transmission from source to destination. The primary benefit of the FECN and BECN fields is the capability of higher-layer protocols to react intelligently to these congestion indicators. Today, DECnet and OSI are the only higher-layer protocols that implement these capabilities.

Backward-explicit congestion notification (BECN) is a single-bit field that, when set to a value of 1 by a switch, indicates that congestion was experienced in the network in the direction opposite of the frame transmission from source to destination.

Discard eligibility (DE) is set by the DTE device, such as a router, to indicate that the marked frame is of lesser importance relative to other frames being transmitted. Frames that are marked as discard eligible should be discarded before other frames in a congested network. This allows for a basic prioritization mechanism in Frame Relay networks.

Data: Contains encapsulated upper-layer data. This variable-length field includes a user data or payload field that can vary in length up to 16,000 octets. It serves to transport the higher-layer protocol packet (PDU) through a Frame Relay network.
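The bit layout described above can be summarized with a short Python sketch that packs and unpacks the usual two-octet Address field. It is a simplified illustration only (no Flags, FCS, or payload handling), and the sample DLCI value is arbitrary.

```python
# Sketch packing and unpacking the two-octet Frame Relay Address field:
# a 10-bit DLCI split across both octets, the C/R bit, the FECN/BECN/DE
# congestion bits, and the EA bits marking the last address octet.

def pack_address(dlci, cr=0, fecn=0, becn=0, de=0):
    assert 0 <= dlci < 1024, "DLCI is a 10-bit value"
    octet1 = ((dlci >> 4) & 0x3F) << 2 | (cr & 1) << 1 | 0      # EA = 0: more address follows
    octet2 = (dlci & 0x0F) << 4 | (fecn & 1) << 3 | (becn & 1) << 2 | (de & 1) << 1 | 1  # EA = 1: last octet
    return bytes([octet1, octet2])

def unpack_address(two_octets):
    o1, o2 = two_octets
    return {
        "dlci": ((o1 >> 2) & 0x3F) << 4 | (o2 >> 4) & 0x0F,
        "cr":   (o1 >> 1) & 1,
        "fecn": (o2 >> 3) & 1,
        "becn": (o2 >> 2) & 1,
        "de":   (o2 >> 1) & 1,
    }

addr = pack_address(dlci=102, becn=1)
print(addr.hex())               # '1865'
print(unpack_address(addr))     # {'dlci': 102, 'cr': 0, 'fecn': 0, 'becn': 1, 'de': 0}
```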

From the figure above, we can see that there are three PVCs in the network:

R1 - R2 (DLCIs for the PVC: 102 and 201)
R1 - R3 (DLCIs for the PVC: 103 and 301)
R1 - R4 (DLCIs for the PVC: 104 and 401)

The Frame Relay network acts like a local network that connects the sites to each other, and the routers use private IP addresses to reach the remote sites. A Frame Relay network can be built in one of the following topologies:

Full-mesh topology
Partial-mesh topology
Hub and spoke
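The DLCI numbering above can be captured as a simple lookup table. The following Python sketch is illustrative only; because DLCIs have local significance, R1 and its peers use different numbers for the same virtual circuit, and the private IP addresses shown are assumptions rather than values from the figure.

```python
# DLCI-to-peer mapping implied by the three PVCs above (local significance:
# each end of a PVC uses its own DLCI number).

dlci_map = {
    "R1": {102: "R2", 103: "R3", 104: "R4"},   # DLCIs as seen on R1's access link
    "R2": {201: "R1"},                          # the same R1-R2 PVC, numbered 201 locally
    "R3": {301: "R1"},
    "R4": {401: "R1"},
}

ip_map = {"R1": "10.0.0.1", "R2": "10.0.0.2", "R3": "10.0.0.3", "R4": "10.0.0.4"}

# R1 reaches each remote address by sending frames out on its local DLCI.
for dlci, peer in dlci_map["R1"].items():
    print(f"R1 -> {peer} ({ip_map[peer]}) via local DLCI {dlci}")
```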

Full Mesh Topology
In a full-mesh topology, every site has a PVC to every other site in the network. The number of PVCs that must be purchased is N(N-1)/2, where N is the number of sites the company has. Because every site is connected to every other site, this is the most expensive option.
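A quick Python check of the N(N-1)/2 formula; the site counts chosen are arbitrary examples.

```python
# Number of PVCs needed for a full mesh of N sites: N * (N - 1) / 2.

def full_mesh_pvcs(n_sites):
    return n_sites * (n_sites - 1) // 2

for n in (4, 10, 50):
    print(f"{n} sites -> {full_mesh_pvcs(n)} PVCs")
# 4 sites -> 6 PVCs, 10 sites -> 45 PVCs, 50 sites -> 1225 PVCs
```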

Partial Mesh Topology
In a partial mesh, not every site has direct connectivity to every other location. Only the most important sites have direct PVCs between them, while branch sites are connected to the central office.

Hub and Spoke Topology
Hub and spoke is a special case of the partial mesh in which every site has a PVC only to the central site (the hub). Branch (spoke) sites reach each other via the central office, meaning that traffic from spoke to spoke is forwarded through the hub. This topology has an edge over the others because it requires the fewest PVCs to connect all the sites, and traffic-control and routing policies are easy to implement because they need to be configured only on the hub router rather than on every router in the topology. Fewer PVCs also means less money paid to the service provider. A sketch comparing the PVC counts and showing the spoke-to-spoke path appears below.
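The following Python sketch contrasts the hub-and-spoke PVC count with a full mesh and shows why spoke-to-spoke traffic takes an extra hop. The site names are invented for illustration.

```python
# Hub-and-spoke: N - 1 PVCs for N sites, and spoke-to-spoke traffic is
# relayed through the hub (two Frame Relay hops instead of one).

sites = ["HQ", "Branch1", "Branch2", "Branch3"]
hub = "HQ"

hub_spoke_pvcs = [(hub, s) for s in sites if s != hub]
print(len(hub_spoke_pvcs))          # 3 PVCs (N - 1), versus 6 for a full mesh of 4 sites

def path(src, dst):
    """Spoke-to-spoke traffic must cross the hub."""
    if hub in (src, dst):
        return [src, dst]
    return [src, hub, dst]

print(path("Branch1", "Branch3"))   # ['Branch1', 'HQ', 'Branch3'] -> extra hop adds delay and jitter
```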

The hub and spoke topology also has some drawbacks: traffic must cross the hub first whenever spoke-to-spoke connectivity is required. This adds delay to the packets, so voice or video traffic will experience more delay and jitter, although ordinary data transfer works fine.

Frame Relay Encapsulation
The encapsulation is the "language" that Frame Relay routers use to speak with each other; it is an end-to-end, router-to-router format. It comes in two types:

Cisco (Cisco proprietary)
IETF (industry standard)

The choice depends only on the end routers of a PVC. We can use the Cisco encapsulation between the hub and spoke routers without concern for the devices inside the Frame Relay network. If the end routers are not all Cisco, the only option that will work is IETF.
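As a toy illustration of the rule of thumb above, the following Python snippet picks an encapsulation based only on the two end routers of a PVC. The vendor strings are assumptions, and this models the guideline described here, not actual device behavior.

```python
# The encapsulation only has to match on the two end routers of a PVC;
# a non-Cisco end forces the IETF standard encapsulation.

def pick_encapsulation(end_a_vendor, end_b_vendor):
    if end_a_vendor == "cisco" and end_b_vendor == "cisco":
        return "cisco"      # Cisco-proprietary encapsulation is possible
    return "ietf"           # mixed-vendor ends must use the IETF standard

print(pick_encapsulation("cisco", "cisco"))    # cisco
print(pick_encapsulation("cisco", "juniper"))  # ietf
```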
