Once a common encapsulation mechanism has been selected for Ethernet, hosts must still convert a 32-bit IP address into a 48-bit Ethernet address. The Address Resolution Protocol (ARP), documented in RFC 826, is used to do this. It has also been adapted for other media, such as FDDI. ARP works by broadcasting a packet to all hosts attached to an Ethernet. The packet contains the IP address the sender is interested in communicating with. Most hosts ignore the packet. The target machine, recognizing that the IP address in the packet matches its own, returns an answer. Hosts typically keep a cache of ARP responses, based on the assumption that IP-to-hardware address mappings rarely change.
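The request-then-cache behavior can be sketched in a few lines of Python. This is a toy model, not a real ARP implementation: the addresses and the HOSTS dictionary (standing in for "every host on the wire hearing the broadcast") are made up for illustration.

```python
# Toy model of ARP resolution with a response cache.
# HOSTS stands in for the broadcast domain: every host "hears" the
# request, but only the owner of the IP address answers.
HOSTS = {
    "192.168.1.10": "00:50:04:df:1a:4c",
    "192.168.1.20": "00:a0:c9:14:c8:29",
}

arp_cache = {}  # IP address -> Ethernet (MAC) address, filled by replies

def arp_resolve(ip):
    """Return the MAC for `ip`, broadcasting only on a cache miss."""
    if ip in arp_cache:          # mappings rarely change, so check the cache first
        return arp_cache[ip]
    mac = HOSTS.get(ip)          # the "broadcast": only the target replies
    if mac is not None:
        arp_cache[ip] = mac      # remember the reply for next time
    return mac

print(arp_resolve("192.168.1.10"))  # miss: broadcast, then cache the answer
print(arp_resolve("192.168.1.10"))  # hit: answered from the cache
```

The second call never "touches the wire," which is exactly why the cache exists: most traffic is between the same pairs of hosts.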
Proxy ARP
Proxy ARP is a technique that can be used by routers to handle traffic between hosts that don't expect to use a router as described above. Probably the most common case of its use would be the gradual subnetting of a larger network. Those hosts not yet converted to the new system would expect to transmit directly to hosts now placed behind a router. A router using Proxy ARP recognizes ARP requests for hosts on the "other side" of the router that can't reply for themselves. The router answers for those addresses with an ARP reply matching the remote IP address with the router's Ethernet address (in essence, a lie). Proxy ARP is best thought of as a temporary transition mechanism, and its use should not be encouraged as part of a stable solution. There are a number of potential problems with its use, including the inability of hosts to fall back on alternate routers if a network component fails, and the possibility of race conditions and bizarre traffic patterns if the bridged and routed network segments are not clearly delineated.
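The router's decision ("answer for far-side hosts with my own MAC") can be sketched as follows. The router MAC and the set of far-side addresses are hypothetical values for illustration, not from any real configuration.

```python
# Toy proxy-ARP decision: the router answers ARP requests for addresses
# it knows live on its other interface, handing out its OWN MAC address.
ROUTER_MAC = "00:00:0c:12:34:56"          # hypothetical router MAC
BEHIND_ROUTER = {"10.1.2.5", "10.1.2.6"}  # hosts on the far segment

def proxy_arp_reply(requested_ip):
    """Return the MAC to answer with, or None to stay silent."""
    if requested_ip in BEHIND_ROUTER:
        return ROUTER_MAC    # the "lie": router's MAC for a remote IP
    return None              # let the real owner answer on this segment

print(proxy_arp_reply("10.1.2.5"))   # router answers for the far host
print(proxy_arp_reply("10.1.3.9"))   # None: not behind this router
```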
Reverse ARP
Reverse ARP, documented in RFC 903, is a fairly simple bootstrapping protocol that allows a workstation to broadcast using its Ethernet address, and expect a server to reply, telling it its IP address.
Overview
ATM is a cell-switching and multiplexing technology that combines the benefits of circuit switching (guaranteed capacity and constant transmission delay) with those of packet switching (flexibility and efficiency for intermittent traffic). It provides scalable bandwidth from a few megabits per second (Mbps) to many gigabits per second (Gbps). Because of its asynchronous nature, ATM is more efficient than synchronous technologies, such as time-division multiplexing (TDM). With TDM, each user is assigned to a time slot, and no other station can send in that time slot. If a station has a lot of data to send, it can send only when its time slot comes up, even if all other time slots are empty. If, however, a station has nothing to transmit when its time slot comes up, the time slot is sent empty and is wasted. Because ATM is asynchronous, time slots are available on demand, with information identifying the source of the transmission contained in the header of each ATM cell.
Asynchronous Transfer Mode works with very short, fixed-length units called cells. ATM uses 53-byte cells, consisting of a 5-byte header and a 48-byte payload. Because ATM is connection-oriented, cells need only a short address space, and the cells themselves are not used for establishing and maintaining the circuit. Once a circuit is set up, the bandwidth can be used entirely for data transport. After the circuit is set up, ATM associates each cell with the virtual connection between origin and destination, which can be a virtual channel or a virtual path. The 40-bit header holds 8 bits for the virtual path (256 max) and 16 bits for the virtual channel (65,536 max). Having both virtual paths and channels makes it easy for the switch to handle many connections with the same origin and destination. The process that segments a longer unit of data into 53-byte cells is called 'segmentation and reassembly' (SAR). The data that goes into these cells comes from different native-mode protocols, such as TCP/IP. The ATM Adaptation Layer (AAL) takes care of the differences between the different sources, adapting the protocols to an ATM intermediate format. It uses so-called 'classes' to do so. AAL types 3 and 4 handle transmission of connectionless data; AAL type 5 is intended for connection-oriented services.
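To make the 8-bit/16-bit split concrete, here is a short sketch that unpacks the VPI and VCI from a 5-byte cell header. The bit layout shown (with a 4-bit GFC field ahead of the VPI, as on a UNI link) and the example values are assumptions for illustration.

```python
# Sketch: pull the VPI and VCI out of a 5-byte ATM cell header.
# Assumed UNI layout: GFC(4) VPI(8) VCI(16) PT(3) CLP(1), then HEC(8).

def parse_atm_header(header):
    assert len(header) == 5
    word = int.from_bytes(header[:4], "big")  # first 4 octets as one integer
    vpi = (word >> 20) & 0xFF      # 8 bits  -> up to 256 virtual paths
    vci = (word >> 4) & 0xFFFF     # 16 bits -> up to 65,536 virtual channels
    return vpi, vci

# Example header with VPI=5, VCI=42 (GFC, PT, CLP, and HEC all zero):
word = (5 << 20) | (42 << 4)
header = word.to_bytes(4, "big") + b"\x00"
print(parse_atm_header(header))   # -> (5, 42)
```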
ATM Circuits
Three types of ATM services exist: permanent virtual circuits (PVC), switched virtual circuits (SVC), and connectionless service (which is similar to SMDS). A PVC allows direct connectivity between sites. In this way, a PVC is similar to a leased line. Among its advantages, a PVC guarantees availability of a connection and does not require call setup procedures between switches. Disadvantages of PVCs include static connectivity and manual setup. An SVC is created and released dynamically and remains in use only as long as data is being transferred. In this sense, it is similar to a telephone call. Dynamic call control requires a signaling protocol between the ATM endpoint and the ATM switch. The advantages of SVCs
include connection flexibility and call setup that can be handled automatically by a networking device. Disadvantages include the extra time and overhead required to set up the connection.
Class   Service characteristics                          AAL Type
A       Connection-oriented, circuit emulation           Type 1
B       Connection-oriented, variable bit-rate video     Type 2
C       Connection-oriented, connection-oriented data    Type 3/4
D       Connectionless, connectionless data              Type 3/4

AAL 1: for isochronous, constant bit-rate services, such as audio and video. This adaptation layer corresponds to fractional and full T1 and T3, but with a greater range of choices for data rates.

AAL 2: for isochronous variable bit-rate services, such as compressed video.

AAL 3/4: for variable bit-rate data, such as LAN applications. Originally designed as two different layers, one for connection-oriented services (like Frame Relay) and one for connectionless services (like SMDS); both can be handled by the same AAL, though.

AAL 5: for variable bit-rate data that must be formatted into 53-byte cells. Similar to AAL 3/4, but easier to implement and with fewer features.
The service-specific convergence sublayer (SSCS) maps (converts) the data to the ATM layer. The convergence sublayer (CS) then compensates for the various interfaces (copper and fiber) that may be used on an ATM network. The ATM network can use Sonet, T1, E1, T3, E3, E4, FDDI, pure cells, Sonet SDH, block-encoded fiber, etc.
BGP-4 Protocol Overview
Border Gateway Protocol Version 4 (BGP-4), documented in RFC 1771, is the current exterior routing protocol used for the global Internet. BGP is essentially a distance-vector algorithm, but with several added twists. Other BGP-related documents are RFC 1772 (BGP Application), RFC 1773 (BGP Experience), RFC 1774 (BGP Protocol Analysis), and RFC 1657 (BGP MIB). BGP uses TCP as its transport protocol, on port 179. On connection start, BGP peers exchange complete copies of their routing tables, which can be quite large. However, only changes (deltas) are then exchanged, which makes long-running BGP sessions more efficient than shorter ones. BGP's basic unit of routing information is the BGP path, a route to a certain set of CIDR prefixes. Paths are tagged with various path attributes, of which the most important are AS_PATH and NEXT_HOP. One of BGP-4's most important functions is loop detection at the Autonomous System level, using the AS_PATH attribute, a list of Autonomous Systems being used for data transport. The syntax of this attribute is made more complex by its need to support path aggregation, when multiple paths are collapsed into one to simplify further route advertisements. A simplified view of AS_PATH is that it is the list of Autonomous Systems that a route goes through to reach its destination. Loops are detected and avoided by checking for your own AS number in AS_PATHs received from neighboring Autonomous Systems. Every time a BGP path advertisement crosses an Autonomous System boundary, the NEXT_HOP attribute is changed to the IP address of the boundary router. Conversely, as a BGP path advertisement is passed among BGP speakers in the same AS, the NEXT_HOP attribute is left untouched. Consequently, BGP's NEXT_HOP is always the IP address of the first router in the next autonomous system, even though this may actually be several hops away.
The AS's interior routing protocol is responsible for computing an interior route to reach the BGP NEXT_HOP. This leads to the distinction between Internal BGP (IBGP) sessions (between routers in the same AS) and External BGP (EBGP) sessions (between routers in different AS's). NEXT_HOPs are only changed across EBGP sessions, but left intact across IBGP sessions.
The two most important consequences of this design are the need for interior routing protocols to reach one hop beyond the AS boundary, and for BGP sessions to be fully meshed within an AS. Since the NEXT_HOP contains the IP address of a router interface in the next autonomous system, and this IP address is used to perform routing, the interior routing protocol must be able to route to this address. This means that interior routing tables must include entries one hop beyond the AS boundary. Furthermore, since BGP does not relay routing traffic from one Interior BGP session to another (only from an Exterior BGP session to an IBGP session or another EBGP session), BGP speakers must be fully meshed. When a BGP routing update is received from a neighboring AS, it must be relayed directly to all other BGP speakers in the AS. Do not expect to relay BGP paths from one router, through another, to a third, all within the same AS. It is the responsibility of the BGP implementation to select among competing paths using a nearly completely undefined algorithm. RFC 1771 states only that the computation be based on "preconfigured policy information. The exact nature of this policy information and the computation involved is a local matter." Since the AS_PATH attribute includes a list of Autonomous Systems used to reach the destination, it's possible to implement primitive policy decisions such as "avoid all routes through AS XXXX". A free software implementation of BGP-4 can be found in Gated.
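Both the loop check and the "avoid AS XXXX" policy reduce to membership tests on AS_PATH. Here is a sketch under simplifying assumptions: the AS numbers are made up, and AS_PATH is treated as a flat list (real BGP-4 AS_PATHs can also contain AS_SET segments created by aggregation).

```python
# Sketch of AS_PATH-based path filtering: loop detection plus a simple
# "avoid all routes through AS XXXX" policy.
MY_AS = 65001  # hypothetical local AS number

def accept_path(as_path, banned_as=None):
    """Reject paths that loop through us or traverse a banned AS."""
    if MY_AS in as_path:       # our own AS in the path = a routing loop
        return False
    if banned_as is not None and banned_as in as_path:
        return False           # policy: avoid all routes through that AS
    return True

print(accept_path([65002, 65010, 174]))            # True  - clean path
print(accept_path([65002, 65001, 3356]))           # False - loop detected
print(accept_path([65002, 7007], banned_as=7007))  # False - policy reject
```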
A DHCP server can supply each client with:

- An IP address, which is valid for a period of time specified by the administrator of the DHCP server
- A subnet bit mask
- A default gateway
- Optional DNS servers
- The duration for which the IP address assignment is valid
DHCP is flexible so that other information can also be stored and retrieved. A shortcoming is that there is currently no way to update Domain Name Servers with the new IP address for a user's DNS name (DNS names remain permanently assigned to hosts). Since important destination machines (such as servers) would use permanently assigned IP addresses, this should not be a big problem (until a solution is standardized).
Example Configuration
In many ways, DHCP is quite similar to RADIUS (Remote Authentication Dial-In User Service), which assigns a user their IP address, Subnet Mask, Default Gateway, and DNS Servers. The primary difference is in the authentication. Authentication for RADIUS is handled by a username/password pair, while the only authentication system that DHCP supports is through MAC Address filtering. The VPEC network runs the Internet Software Consortium DHCP Server to handle the granting of IP addresses to all workstations and laptops on the network. This software, while it runs on a UNIX system, is similar in configuration to many other DHCP systems, including Microsoft's. Here's the basic configuration for the desktop workstations:
subnet 209.39.6.0 netmask 255.255.255.0 {
    range 209.39.6.129 209.39.6.254;
    option subnet-mask 255.255.255.0;
    option broadcast-address 209.39.6.255;
    option routers 209.39.6.1;
    option domain-name-servers 209.39.6.4, 209.39.6.5;
    option domain-name "training.verio.net";
    default-lease-time 2592000;
    max-lease-time 2592000;
}
This defines a pool of IPs (209.39.6.129 - 209.39.6.254) that workstations will be assigned for 2592000 seconds (30 days) at a time. However, if we want to provide the same IP address for a computer every time it logs on, then we can define the MAC address for the computer in the DHCP server configuration as follows:
host workstation1 {
    hardware ethernet 00:50:04:DF:1A:4C;
    fixed-address 209.39.6.32;
    option subnet-mask 255.255.255.0;
    option broadcast-address 209.39.6.255;
    option routers 209.39.6.1;
    option domain-name-servers 129.250.35.250, 129.250.35.251;
    option domain-name "training.verio.com";
}
By using this same configuration for more than one computer, we can provide a static IP to trusted workstations.
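The pool-versus-fixed-address logic above can be modeled in a few lines. This is a toy sketch of the decision, not the ISC server's actual lease algorithm; the MAC addresses below other than the one from the host block are invented.

```python
import ipaddress

# Toy model of the configuration above: hand out the lowest free IP in
# 209.39.6.129-254 unless the MAC has a fixed-address entry.
FIXED = {"00:50:04:df:1a:4c": "209.39.6.32"}  # from the host block above
POOL = [str(ipaddress.IPv4Address("209.39.6.129") + i) for i in range(126)]
leases = {}  # MAC -> IP currently leased

def offer(mac):
    """Return the IP to offer this MAC, or None if the pool is empty."""
    if mac in FIXED:
        return FIXED[mac]        # trusted workstation: static assignment
    if mac in leases:
        return leases[mac]       # renewal: same address as last time
    for ip in POOL:
        if ip not in leases.values():
            leases[mac] = ip
            return ip
    return None                   # pool exhausted

print(offer("00:50:04:df:1a:4c"))  # fixed-address entry -> 209.39.6.32
print(offer("aa:bb:cc:dd:ee:01"))  # first free pool IP -> 209.39.6.129
```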
Each node on the tree represents a domain. Everything below a node falls into its domain. One domain can be part of another domain. For example, the machine chichi is part of the .us domain as well as the .com domain. You'll see why this is important in just a minute.
How it works
A DNS server is just a computer that's running DNS software. Since most servers are Unix machines, the most popular program is BIND (Berkeley Internet Name Domain), but you can find software for the Mac and the PC as well. DNS software is generally made up of two elements: the actual name server, and something called a resolver. The name server responds to browser requests by supplying name-to-address conversions. When it doesn't know the answer, the resolver will ask another name server for the information. To see how it works, let's go back to the domain-name-space inverted tree. When you type in a URL, your browser sends a request to the closest name server. If that server has ever fielded a request for the same host name (within a time period set by the administrator to prevent passing old information), it will locate the information in its cache and reply. If the name server is unfamiliar with the domain name, the resolver will attempt to "solve" the problem by asking a server farther up the tree. If that doesn't work, the second server will ask yet another - until it finds one that knows. (When a server can supply an answer without asking another, it's known as an authoritative server.) Once the information is located, it's passed back to your browser, and you're sent on your merry way. Usually this process occurs quickly, but occasionally it can take an excruciatingly long time (like 15 seconds). In the worst cases, you'll get a dialog box that says the domain name doesn't exist - even though you know damn well it does. This happens because the authoritative server is slow replying to the first, and your computer gets tired of waiting so it times out (drops the connection).
But if you try again, there's a good chance it will work, because the authoritative server has had enough time to reply, and your name server has stored the information in its cache.
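The "within a time period set by the administrator" caching behavior is just a dictionary with timestamps. Here is a toy sketch (not BIND, and the name/address pair is only illustrative):

```python
import time

# Toy name-server cache: answers are kept only for a TTL set by the
# administrator, so stale name-to-address mappings age out.
class DnsCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}          # name -> (address, time stored)

    def put(self, name, address):
        self.entries[name] = (address, time.time())

    def get(self, name):
        hit = self.entries.get(name)
        if hit is None:
            return None            # never seen: must ask another server
        address, stored = hit
        if time.time() - stored > self.ttl:
            del self.entries[name] # expired: re-resolve to avoid old info
            return None
        return address

cache = DnsCache(ttl_seconds=300)
cache.put("www.example.com", "93.184.216.34")
print(cache.get("www.example.com"))   # fresh entry -> answered from cache
print(cache.get("www.unknown.test"))  # None -> would ask up the tree
```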
Ethernet Fundamentals
Developed in the early 1970s, Ethernet has proven to be one of the simplest, most reliable, and longest-lived networking protocols ever designed. The high speed and simplicity of the protocol has resulted in its widespread use. Although Ethernet works across a variety of layer one media, the three most popular forms are 10BaseT, 10Base2, and 10BaseF, which use unshielded twisted pair (UTP), coaxial, and fiber optic cables respectively. UTP is used in a "star" configuration, in which all nodes connect to a central hub. 10Base2 uses a single coaxial cable to connect all workstations together in a "bus" configuration, and does not require a hub. 10BaseF uses fiber optics, which, though expensive, can travel long distances (2km) and through electrically noisy areas. An interesting difference between coaxial Ethernet and other types is that coax Ethernet is truly a one-to-many (or, 'point-to-multipoint') connection; fiber and UTP connections are, from a layer one perspective, one-to-one (or, 'point-to-point') connections, and require an additional networking device (typically, a repeater, or Ethernet hub) to connect to multiple other workstations. This is why coax Ethernet does not require a hub, and Ethernet over other media typically does.
Ethernet Topologies
10BaseT
  Pro: Very reliable - one fault usually doesn't affect the entire network.
  Con: Relatively short distance from hub to workstation (100m). Requires a lot of wiring (a separate link for each workstation).
  Typical use: Offices and home networks.

10Base2
  Pro: Cheap - no hub required, no wiring except from station to station. Well shielded against electrical interference. Can transmit longer distances (200m).
  Con: Any break in connectivity disrupts the entire network segment. Problems can be very difficult to troubleshoot.
  Typical use: Small or home networks, hub-to-hub links.

10BaseF
  Pro: Long-distance networking (2000m). Immune to electrical interference.
  Con: Very expensive to install.
  Typical use: Long-distance hub-to-hub or switch-to-hub links.
Ethernet is like a bunch of loud people in an unmoderated meeting room. Only one person can talk at a time, because communication consists of standing up and yelling at the top of your lungs. People are allowed to start communicating whenever there is silence in the room. If two people stand up and start yelling at the same time, they wind up garbling each others' attempt at communication, an event known as a "collision." In the event of a collision, the two offending parties sit back down for a semi-random period of time, then one of them stands up and starts yelling again. Because it's unmoderated, the likelihood of collisions occurring increases geometrically as the number of talkers and the amount of stuff they talk about increases. In fact, networks with many workstations are generally considered to be overloaded if the segment utilization exceeds 30-40%. If the collision light on your hubs is lit more often than not, you probably need to segment your network. Consider the purchase of a switch, described below.

Ethernet hubs are used in 10BaseT networks. A standard hub is just a dumb repeater -- anything it hears on one port, it repeats to all of its other ports. Although 10BaseT is usually wired with eight-wire jacks (known as RJ45 connectors), only four wires are used -- one pair to transmit data, and another pair to receive data. While transmitting, an Ethernet card will listen to its receive pair to see if it hears anyone else talking at the same time. These two behaviors (listen for silence before talking, and detect other people talking at the same time) are described by the acronym CSMA/CD, or "Carrier Sense Multiple Access, Collision Detection."

One hundred megabit Ethernet (100BaseTX) works just like ten megabit Ethernet, only ten times faster. On high-quality copper (known as Category 5, or CAT 5 UTP), 100BaseTX uses the same two pairs of copper to communicate.
If you have standard network-quality copper, an alternative is to use 100BaseT4, which uses all four pairs, but can communicate at 100Mbps on CAT 3 UTP.
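The "sit back down for a semi-random period" step has a precise form in Ethernet: truncated binary exponential backoff, where the random wait window doubles after each successive collision (capped after the tenth). A small sketch, with the slot treated as an abstract time unit:

```python
import random

# Sketch of Ethernet's truncated binary exponential backoff: after the
# n-th collision, wait a random number of slot times in 0 .. 2**min(n,10) - 1.
def backoff_slots(collision_count, rng=random):
    exponent = min(collision_count, 10)   # cap the window growth at 2^10
    return rng.randrange(2 ** exponent)   # 0 .. 2^exponent - 1 slots

# After the first collision both talkers pick 0 or 1 slots; the window
# doubles each time, making a repeat collision ever less likely.
for n in (1, 2, 3, 16):
    print(n, "collision(s) -> wait up to", 2 ** min(n, 10) - 1, "slots")
```

This doubling is why collision probability stays manageable on a lightly loaded segment but degrades "geometrically" as talkers multiply.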
Gigabit Ethernet works just like hundred megabit Ethernet, only ten times faster (1000Mbps, or 1Gbps). There are some Gigabit Ethernet devices floating around out there, but it's unlikely that you'll find such devices on the small LANs that you'd find on the "Near Side of the 'Net."

If your conference room gets too busy, you may consider splitting the group into two halves by putting a partition wall with a door between them, and putting a person in the doorway. This person would listen to the conversations in both rooms, memorize the names (Ethernet card addresses) of everyone in each room, and forward messages from room to room when necessary. A device that does this is called a "transparent bridge." It's called "transparent" because it's smart enough to learn the Ethernet addresses on its own without the workstations suspecting anything is going on. ["Source-route bridges" are rarely used, so I'm not going to discuss them.]

Ethernet switches are little more than high-speed, multi-port bridges. They learn the Ethernet addresses of everyone attached to each port, and make intelligent forwarding decisions based on Ethernet card address (aka MAC address). Because communication between 100Mbps and 10Mbps networks requires buffering, Ethernet switches are often used for this purpose. Many inexpensive switches have many 10Mbps ports and one or two 100Mbps ports. Typically, you would connect your server(s) to the 100Mbps port(s), and workstations or entire hubs to the 10Mbps ports.

The buffering and intelligent forwarding allow another interesting feature to exist -- "full-duplex" Ethernet. "Half-duplex" means you can either talk or listen, but not both, at a given time, such as when using a radio. "Full-duplex" communication means you can talk and listen at the same time, such as when on the phone.
Since 10BaseT uses separate pairs of copper for sending and receiving, it's physically possible to do both if there are no other workstations on your network segment-- which is the case if you are directly attached to a switch. Note that both the switch port and your network card must be configured for full duplex operation for this to work, but the result is worth it: a full 20Mbps for "regular" Ethernet and a whopping 200Mbps of bandwidth available for full-duplex fast Ethernet. Since collisions are eliminated, the 30% rule does not apply. When considering the purchase of a switch, there are a few important considerations, not all of which may apply to your requirements:
- Does the switch support 100Mbps on any ports? How many, and will it autodetect 10/100BaseT?
- Does the switch support full duplex? Even on the 100Mbps ports?
- How many MAC (Ethernet card) addresses does it store? 500? 5000? "Unlimited" is not a rational answer.
- Some "workgroup" switches only allow one MAC address per port, so these would not be suitable if you plan to connect hubs to switch ports.
- You tend to get what you pay for. If a switch seems unreasonably inexpensive compared to other switches that appear to have similar specs, look closer, or check the detailed specs on the manufacturer's web site. Often, you'll find that a cheap switch either isn't a switch at all or only allows one workstation per port (see the previous two items).
Frame Relay
Frame Relay is probably the simplest data communications protocol ever conceived. Designed to run over virtually error-free circuits, it's a protocol stripped down for speed. Frame Relay abolishes the Network Layer of the OSI model, claims the routing and multiplexing functions for itself, and leaves everything else to the higher layers. A Frame Relay service ignores traditional functions such as window rotation, sequence numbering, frame acknowledgment, and automatic retransmission in order to concentrate on the basics:
delivering correct data quickly in the right order to the right place. It simply discards incorrect data. The need for a streamlined protocol like Frame Relay grows from several facts of modern data communications:
- Users have more data to communicate, and they'd like that data to travel faster and in larger chunks than current technology has allowed.
- Physical transmission gets faster every year and introduces fewer and fewer errors into the data.
- Computers and workstations with the intelligence to handle high-level protocols have replaced dumb terminals as the instruments of choice.
Thanks especially to cleaner transmission and smarter workstations, the procedures that traditional Data Link and Network protocols use to recognize and correct errors have become redundant for jobs that require large volume at high speeds. Frame Relay handles volume and speed efficiently by combining the necessary functions of the Data Link and Network layers into one simple protocol. As a Data Link protocol, Frame Relay provides access to a network, delimits and delivers frames in proper order, and recognizes transmission errors through a standard Cyclic Redundancy Check. As a Network protocol, Frame Relay provides multiple logical connections over a single physical circuit and allows the network to route data over those connections to its intended destinations. In order to operate efficiently, Frame Relay eliminates all the error handling and flow control procedures common to conventional protocols such as SDLC and X.25. In their place, it requires both an error-free transmission path, such as a digital carrier circuit or a fiber span, and intelligent higher- layer protocols in the user devices. By definition, Frame Relay is an access protocol that operates between an end-user device such as a LAN bridge or router or a front-end processor and a network. The network itself can use any transmission method that's compatible with the speed and efficiency that Frame Relay applications require. Some networks use Frame Relay itself; others use either digital circuit switching or one of the new cell relay systems.
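The "standard Cyclic Redundancy Check" mentioned above is the 16-bit Frame Check Sequence carried at the end of each frame. As a sketch, here is a bit-at-a-time implementation of the CRC-16/X.25 variant used by HDLC-family framing (assuming that variant applies; production code would normally use a lookup table instead):

```python
# Sketch of the 16-bit FCS computation (CRC-16/X.25: reflected
# polynomial 0x8408, initial value 0xFFFF, final complement).
def crc16_x25(data):
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):                 # process one bit at a time
            if crc & 1:
                crc = (crc >> 1) ^ 0x8408  # divide by the polynomial
            else:
                crc >>= 1
    return crc ^ 0xFFFF                    # final one's complement

# Standard check value for this CRC variant:
print(hex(crc16_x25(b"123456789")))  # -> 0x906e
```

A receiver recomputes this over the frame and discards the frame on a mismatch; as the text says, recovery is left entirely to the higher layers.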
[Diagram: a Frame Relay network, with frames relayed across the network among multiple attached end-user devices.]
A Frame Relay network can discard data for any of three reasons:
- A subscriber has exceeded the amount of data that the network has agreed to carry
- A failed Cyclic Redundancy Check, which indicates physical transmission errors
- Network congestion, which occurs when the network's community of subscribers transmits enough data to approach or exceed the network's capacity to carry it
A Frame Relay network relies on the higher-layer protocols in its attached devices to recover from errors or congestion. In practice, this means that the higher layers must recognize that the network has discarded one or more frames of data. Most higher-layer protocols use rotating sequence numbers to recognize frames that have been discarded. When a device receives a sequence number out of order, it requests that its partner retransmit all frames in order since the last frame it received with a correct sequence number. In a well-tuned network, this typically includes the missing frame and all frames that its originator had transmitted in the time the destination device took to recognize the discard and send a message across the network requesting retransmission. In most cases, the originating device retransmits more data than would have been necessary. This is a very reliable way to recover data lost through occasional transmission errors. However, when data's been discarded because of traffic congestion, bulk retransmission can only make the problem worse. Fortunately, most higher-layer protocols use some form of throttling or flow control mechanism to recognize and prevent congestion. The Frame Relay protocol also provides a way for the network to alert its subscribers when it becomes congested. The header of each Frame Relay frame contains two Explicit Congestion Notification bits that the network can set if it transmits that frame over a congested path. Each of these bits signifies congestion in a specific direction on the virtual route. A value of 1 in the Forward Explicit Congestion Notification (FECN, pronounced "feacon") bit indicates that the frame has encountered a congested path on its way across the network. A value of 1 in the Backward Explicit Congestion Notification (BECN, pronounced "beacon") bit indicates that the path through the network in the direction opposite the frame's path (i.e., toward the frame's source) is congested. 
The FECN and BECN bits explicitly notify a subscriber's device of congestion on the network and implicitly ask that device to withhold traffic or reduce its transmission rate until the congestion has cleared.
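The sequence-number recovery described above amounts to gap detection: when an out-of-order number arrives, the receiver asks for everything since the last good frame. A minimal sketch (the function name and return shape are illustrative, not from any real protocol stack):

```python
# Sketch of higher-layer discard detection: find the first gap in the
# received sequence numbers and request retransmission from that point.
def check_sequence(received_seqs):
    """Return a retransmission request, or None if nothing was lost."""
    expected = received_seqs[0]
    for seq in received_seqs:
        if seq != expected:
            # All frames since the last in-order one must be resent,
            # which usually includes more data than was actually lost.
            return ("resend all frames from", expected)
        expected += 1
    return None

print(check_sequence([4, 5, 6, 8, 9]))  # frame 7 was discarded
print(check_sequence([1, 2, 3]))        # all present: no request
```

As the text notes, this bulk retransmission is fine for occasional line errors but makes congestion worse, which is what the FECN/BECN bits try to prevent.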
[Diagram: the 2-octet Frame Relay frame header - the high-order DLCI bits, C/R bit, and EA bit in the first octet; the low-order DLCI bits, FECN, BECN, DE, and EA bits in the second octet.]
The Frame Relay frame header is illustrated above. The first octet is a flag field that delimits the frame from another frame or from idle time on the circuit. The second octet contains the first 6 bits of the 10-bit DLCI followed by a Command/Response bit (C/R) and the frame's first Extended Address (EA) bit. Use of the C/R bit is not defined by Frame Relay, so implementors are free to define a function for it. A value of 0 in an EA bit indicates that the frame's address (DLCI) continues in the next octet. Since the DLCI must occupy parts of two octets at minimum, the EA bit in this octet should always have a value of 0. The next octet contains the remaining four bits of the DLCI followed by the FECN and BECN bits described above, a Discard Eligibility (DE) bit, and the frame's second EA bit. The subscriber or the network may set the value of the DE bit to 1 to indicate that the network may discard this frame in preference to frames in which the value of the DE bit is 0. (This occurs only after it has discarded all frames transmitted in excess of their subscribers' CIR and Bc.) In a normal Frame Relay frame, the value of the EA bit in this octet should be 1, to indicate that the address information ends here. An EA value of 0 indicates that an Extended Address Octet follows. Extended addressing is seldom implemented. The subscriber's data follows the Frame Relay header in most Frame Relay frames, and the data is followed in turn by the 2-octet Frame Check Sequence (FCS) and a final flag octet. A frame must contain at least one octet of user data, for a total of 5 octets between flags. A frame may not exceed 8192 octets between flags, counting header and FCS. The latest Frame Relay standards recommend a maximum frame size of 1600 octets overall. Implementors are free to define a smaller maximum frame size if they wish.
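The bit layout just described can be decoded mechanically. A short sketch, following the field positions above (the example DLCI and flag values are made up):

```python
# Sketch: decode the 2-octet Frame Relay address field.
# Octet 1: DLCI bits 9..4, C/R, EA(=0); octet 2: DLCI bits 3..0,
# FECN, BECN, DE, EA(=1).
def parse_fr_header(o1, o2):
    dlci = ((o1 >> 2) << 4) | (o2 >> 4)   # 6 high bits + 4 low bits
    return {
        "dlci": dlci,
        "cr":   (o1 >> 1) & 1,
        "fecn": (o2 >> 3) & 1,
        "becn": (o2 >> 2) & 1,
        "de":   (o2 >> 1) & 1,
        "ea":   o2 & 1,                   # 1 = address ends here
    }

# DLCI 100 with FECN set: 100 = 0b0001100100,
# so octet 1 carries 0b000110 and octet 2 carries 0b0100.
print(parse_fr_header(0b00011000, 0b01001001))
```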
A major area where the Frame Relay protocol leaves room for improvement is the management of the interface. The network and the subscriber's device should be able to communicate information on which DLCIs have been configured for the link and on which DLCIs are currently active. Since Frame Relay applications can go for relatively long periods without bursts of data, the devices also need a mechanism for ensuring that the physical link is running normally in the absence of traffic. In September 1990, a group of Frame Relay vendors introduced a signalling mechanism for Frame Relay links that handles both of these functions. The Local Management Interface (LMI) is a simple protocol that runs in one dedicated PVC of a Frame Relay link and allows the subscriber and the network to exchange information about the link itself and about the status of the other PVCs. Since LMI occupies its own PVC, its link signalling cannot congest or interfere with traffic on the PVCs that carry subscriber data. The use of LMI is entirely optional. The protocol is designed so that the subscriber must originate all exchanges of information. This feature prevents the network from transmitting unwanted information to subscribers whose devices haven't implemented the LMI protocol. The subscriber begins an LMI exchange by sending a Status Enquiry message. The Network completes the exchange by answering with a Status message. An exchange of LMI messages can perform either of two functions:
A simple "heartbeat" exchange that verifies that the link is running normally A report on the individual status of each DLCI defined for the link
[Diagram: the 6-octet LMI frame header - the DLCI (1023) in Frame Relay format, an Unnumbered Information control octet, the protocol discriminator, a dummy Call Reference octet, and the Message Type octet.]
An LMI frame is divided into a header of 6 octets (beyond the flag) and a list of Information Elements (IEs) that carry the heartbeat or status information. The Data Link protocol used for LMI is a subset of LAPD, the ITU's Link protocol for ISDN signalling. Where the Frame Relay link protocol defines a 2-octet frame header, the LAPD protocol defines a 6-octet header. Octets 1 and 2 contain the DLCI used by LMI. In the original LMI specification, this was defined to be DLCI 1023. The DLCI appears in Frame Relay format, 6 bits in octet 1 and 4 bits in octet 2. Notice that the Frame Relay control bits (C/R, EA, FECN, BECN, and DE) are all present, but in practice, only the final EA bit (1 for "end of address") is actually used. Octet 3 identifies all LMI frames as Unnumbered Information frames according to the LAPD standard. Octet 4 contains a protocol discriminator which identifies the frame as one containing LMI information. (The protocol discriminator will become more important in future implementations that may use other signalling protocols such as ISDN's Q.931 instead of, or along with, LMI.) Octet 5 contains a LAPD parameter called a Call Reference. In LMI frames, this is a dummy field that's always set to 0. Octet 6 identifies the LMI Message Type as either Status Enquiry (from the subscriber) or Status (from the network).
All LMI messages contain one Report Type element and one Keep-Alive element. A full Status message from the network to the subscriber also contains one PVC Status element for each PVC on the link. The Keep-Alive information element contains a pair of 8-bit sequence numbers, Current and Last Received, through which the heartbeat process maintains a running check on the health of the link. The heartbeat process is similar to the error detection mechanism used by higher-layer protocols. At a regular interval, the subscriber sends a Status Enquiry message that contains a Report Type value of Sequence Number Exchange and a Keep-Alive element.
When the network receives the message, it records the Current Sequence Number as its Last Received Sequence Number, increments it by one to produce its new Current Sequence Number, and transmits a Status message with a Keep-Alive element that contains the new numbers. The sequence numbers rotate modulo 256 with one exception: in normal sequence counting, both the subscriber and the network must skip the value 0. Either side may reset its sequence count to 0 at any time; the LMI specification leaves this option open to implementors as a way to reset the heartbeat process in response to conditions on the link. If either side receives a heartbeat message in which the sequence numbers don't follow correctly, it may declare an LMI sequence error. The LMI protocol does not define how users are to handle errors, but suggests maintaining a count of "error events," including bad frames (failed frame checks) and LMI sequence errors, and initiating error-handling procedures when the count reaches a specified threshold within a specified period. Error handling mechanisms such as alarms and link resets are left to the implementor.

After a specified number of sequence number exchanges, the subscriber issues a Status Enquiry with a value of "Full Status" in the Report Type element. The network answers with a Status message containing a PVC Status information element for each DLCI currently defined for the link. Like all LMI information elements, it begins with 2 octets that indicate its element type and length. The next 2 octets contain the DLCI of the PVC on which the element reports. Note that the format of the DLCI octets is different from that in the Frame Relay header. In the first octet after the DLCI, the first 4 bits are not used and are set to 0. Two of the next 4 bits have meaning in all LMI implementations:
- The N bit is set to 1 only when the PVC Status element is reporting on a newly defined DLCI. The N bit will be reset to 0 in all subsequent PVC Status elements for that DLCI.
- The A bit is set to 1 whenever the PVC to which the element refers is Active, i.e., known to be transmitting and receiving data. Implementors are free to define when and how a PVC becomes active.

Functions of the bits labeled "D" and "R" and of the three reserved octets at the end of the element are defined by a set of optional extensions to the LMI specification.
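The keep-alive counting rules and the PVC Status bits described above can be sketched together. The exact bit positions used for N and A below are assumptions for illustration, not taken from the text:

```python
def next_seq(current: int) -> int:
    """Keep-alive numbers advance modulo 256 but skip 0 (0 signals a reset)."""
    nxt = (current + 1) % 256
    return 1 if nxt == 0 else nxt

def sequence_error(expected_last: int, received_last: int) -> bool:
    """A mismatched Last Received may be declared an LMI sequence error;
    0 is permitted at any time as a reset of the heartbeat process."""
    return received_last not in (expected_last, 0)

def parse_pvc_status(ie: bytes) -> dict:
    """PVC Status IE per the text: type octet, length octet, two DLCI
    octets, then the octet carrying the status bits (positions assumed)."""
    status = ie[4]
    return {
        "element_type": ie[0],
        "length": ie[1],
        "new": bool(status & 0x08),     # N: newly defined DLCI (assumed bit)
        "active": bool(status & 0x02),  # A: PVC is active (assumed bit)
    }
```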
- Global addressing convention
- Multicast capability
- A simple flow control mechanism
- Ability for the network to communicate a PVC's CIR to the subscriber in a Status message
- A new message type that allows the network to announce PVC status changes without prompting from the subscriber
Implementors may build any, all, or none of these features into their networks.
Global Addressing
The global addressing convention defines a simple commitment from the operator of a network that DLCIs will remain unique throughout the network. In a globally addressed network, each DLCI identifies a subscriber device uniquely. For a few years Frame Relay networks will remain small enough that they won't need to implement extended addressing to use the global addressing feature. As networks grow and interconnect, any trend toward global addressing will probably require use of extended addresses.
Multicasting
The LMI multicast capability adapts a popular feature from the LAN world. It reserves a block of DLCIs (1019 to 1022) as multicast groups so that a subscriber wishing to transmit a message to all members of the group must transmit the message only once on the multicast DLCI. The multicasting feature requires a new information element, Multicast Status, in the full LMI Status message. The Multicast Status element is similar in most respects to the PVC Status IE, but it includes a field for the source DLCI transmitting over the multicast group. It also omits the function of the R bit (see below), since a multicast group may use several paths with different congestion conditions.
Flow Control
The optional LMI flow control capability provides a way for the network to report congestion to the subscriber. The flow control feature uses the optional R bit in the PVC Status information element as a "Receive-Not-Ready" signal for the PVC whose status is being reported. A 1 in the R bit indicates congestion; a 0 indicates no congestion. On networks where LMI is fully implemented, this feature improves on the ECN bits of the basic Frame Relay protocol because the LMI heartbeat process guarantees that PVC Status elements will reach the subscriber periodically. Of course, according to the laissez faire practice of Frame Relay, the subscriber may or may not have implemented the feature, and may or may not choose to act on the information.
- Deletion of a PVC or multicast group (reported by setting the optional D bit of the Status element)
- Changes in the minimum bandwidth allocated to a PVC
- Activation or deactivation of a PVC (indicated by setting or clearing the A bit)
- Flow control information (changes in congestion status, signalled by setting or resetting the R bit). Besides improving flow control, this feature allows LMI signalling over network-to-network Frame Relay connections where neither partner functions as a subscriber device.
The current ANSI standards, T1.606-1990, T1.617-1991, and T1.618-1991, respectively define Frame Relay service, access signalling for Frame Relay, and the core aspects of the Frame Relay protocol. The LMI specification, which originated in the "private sector," appears as Annex D of T1.617-1991, which defines a status signalling process that's essentially the same as LMI without the optional extensions. ANSI's LMI-like protocol operates on DLCI 0.
Internationally, the International Telecommunications Union (ITU) has defined a corresponding set of standards:
- Recommendation I.233 for the service description
- Annex A of Recommendation Q.922 for the Frame Relay data transfer protocol
- Recommendation Q.933 for access signalling
The Frame Relay standards differ from the current practice of Frame Relay communications in one important respect. All the standards assume that the Frame Relay link will carry switched virtual circuits over one channel of an ISDN access interface, while virtually all real-world implementations of Frame Relay are carrying permanent virtual circuits over dedicated access circuits into special packet networks. Thus, the standards are much more complex than their current implementations. The standards define a set of necessarily elaborate signalling procedures for:
- Gaining access to an ISDN channel
- Establishing a Frame Relay link on that channel
- Establishing and terminating virtual circuits
In practice so far, the one-time process of subscribing to a Frame Relay service replaces all of this signalling. As carriers implement ISDN more widely, the ISDN signalling aspects of the Frame Relay standards will become more important.
delete       Delete file
dir          Directory listing
disconnect   Terminate FTP session
get          Download file
glob         Toggle glob
hash         Toggle hash (#)
help         Local help
lcd          Change local directory
literal      Send arbitrary FTP command
ls           List contents of remote directory
mdelete      Delete multiple files
mdir         Directory listing of multiple remote directories
mget         Download multiple files
mkdir        Create remote directory
mls          Abbreviated listing of multiple remote files
mput         Send multiple files
open         Connect to an FTP server
prompt       Toggle interactive prompting
put          Send one file
pwd          Show remote working directory
quit         Terminate FTP session
quote        Send arbitrary FTP command
recv         Receive file
remotehelp   Help from remote server
rename       Rename file
rmdir        Remove directory
send         Send one file
status       Current status
trace        Toggle packet tracing
type         Show file transfer type
user         Connect as new user
verbose      Toggle verbose mode
!
Escapes to the shell (command prompt) to run the specified command on the local computer. ! command Parameter command - Specifies the command to run on the local computer. If command is omitted, the local command prompt is displayed; type exit to return to ftp.
?
Displays descriptions for ftp commands. ? is identical to help. ? [command] Parameter command - Specifies the name of the command about which you want a description. If command is not specified, ftp displays a list of all commands.
append
Appends a local file to a file on the remote computer using the current file type setting. append local-file [remote-file] Parameters local-file - Specifies the local file to add. remote-file - Specifies the file on the remote computer to which local-file will be added. If remote-file is omitted, the local filename is used for the remote filename.
ascii
Sets the file transfer type to ASCII, the default. ascii
Note: FTP supports two file transfer types, ASCII and binary image. ASCII should be used when transferring text files. See also binary. In ASCII mode, character conversions to and from the network standard character set are performed. For example, end-of-line characters are converted as necessary, based on the target operating system.
bell
Toggles a bell to ring after each file transfer command is completed. By default, the bell is off. bell
binary
Sets the file transfer type to binary. binary
Note: FTP supports two file transfer types, ASCII and binary image. Binary should be used when transferring executable files. In binary mode, the file is moved byte-by-byte. See also ascii.
bye
Ends the FTP session with the remote computer and exits ftp. bye
cd
Changes the working directory on the remote computer. cd remote-directory Parameter remote-directory - Specifies the directory on the remote computer to change to.
close
Ends the FTP session with the remote server and returns to the command interpreter. close
debug
Toggles debugging. When debugging is on, each command sent to the remote computer is printed, preceded by the string --->. By default, debugging is off. debug
delete
Deletes files on remote computers. delete remote-file Parameter remote-file - Specifies the file to delete.
dir
Displays a list of a remote directory's files and subdirectories. dir [remote-directory] [local-file] Parameters remote-directory - Specifies the directory for which you want to see a listing. If no directory is specified, the current working directory on the remote computer is used. local-file - Specifies a local file to store the listing. If not specified, output is displayed on the screen.
disconnect
Disconnects from the remote computer, retaining the ftp prompt. disconnect
get
Copies a remote file to the local computer using the current file transfer type. get remote-file [local-file] Parameters remote-file - Specifies the remote file to copy. local-file - Specifies the name to use on the local computer. If not specified, the file is given the remote-file name.
glob
Toggles filename globbing. Globbing permits use of wildcard characters in local file or path names. By default, globbing is on. glob
hash
Toggles hash-sign (#) printing for each data block transferred. The size of a data block is 2048 bytes. By default, hash mark printing is off. hash
help
Displays descriptions for ftp commands. help [command] Parameter command - Specifies the name of the command about which you want a description. If command is not specified, ftp displays a list of all commands.
lcd
Changes the working directory on the local computer. By default, the working directory is the directory in which ftp was started. lcd [directory] Parameter directory - Specifies the directory on the local computer to change to. If directory is not specified, the current working directory on the local computer is displayed.
literal
Sends arguments, verbatim, to the remote FTP server. A single FTP reply code is expected in return. literal argument [ ...] Parameter argument - Specifies the argument to send to the FTP server.
ls
Displays an abbreviated list of a remote directory's files and subdirectories. ls [remote-directory] [local-file] Parameters remote-directory - Specifies the directory for which you want to see a listing. If no directory is specified, the current working directory on the remote computer is used. local-file - Specifies a local file to store the listing. If not specified, output is displayed on the screen.
mdelete
Deletes files on remote computers. mdelete remote-files [ ...] Parameter remote-files - Specifies the remote files to delete.
mdir
Displays a list of a remote directory's files and subdirectories. Mdir allows you to specify multiple files. mdir remote-files [ ...] local-file Parameters remote-files - Specifies the directory for which you want to see a listing. Remote-files must be specified; type "-" (no quotes) to use the current working directory on the remote computer. local-file - Specifies a local file to store the listing. Type "-" (no quotes) to display the listing on the screen.
mget
Copies remote files to the local computer using the current file transfer type. mget remote-files [ ...] Parameter remote-files - Specifies the remote files to copy to the local computer.
mkdir
Creates a remote directory. mkdir directory Parameter directory - Specifies the name of the new remote directory.
mls
Displays an abbreviated list of a remote directory's files and subdirectories. mls remote-files [ ...] local-file Parameters remote-files - Specifies the files for which you want to see a listing. Remote-files must be specified; type - to use the current working directory on the remote computer. local-file - Specifies a local file to store the listing. Type - to display the listing on the screen.
mput
Copies local files to the remote computer using the current file transfer type. mput local-files [ ...] Parameter local-files - Specifies the local files to copy to the remote computer.
open
Connects to the specified FTP server. open computer [port] Parameters computer - Specifies the remote computer to connect to. Computer can be specified by IP address or computer name (a DNS or HOSTS file must be available). If auto-login is on (default), FTP also attempts to automatically log the user in to the FTP server (see Ftp to disable auto-login). port - Specifies a port number to use to contact an FTP server.
prompt
Toggles interactive prompting during multiple-file commands. Ftp prompts during multiple file transfers to allow you to selectively retrieve or store files; mget and mput transfer all files if prompting is turned off. By default, prompting is on. prompt
put
Copies a local file to the remote computer using the current file transfer type. put local-file [remote-file] Parameters local-file - Specifies the local file to copy. remote-file - Specifies the name to use on the remote computer. If not specified, the file is given the local-file name.
pwd
Displays the current directory on the remote computer. pwd
quit
Ends the FTP session with the remote computer and exits ftp. quit
quote
Sends arguments, verbatim, to the remote FTP server. A single FTP reply code is expected in return. Quote is identical to literal. quote argument [ ...] Parameter argument - Specifies the argument to send to the FTP server.
recv
Copies a remote file to the local computer using the current file transfer type. Recv is identical to get. recv remote-file [local-file] Parameters remote-file - Specifies the remote file to copy. local-file - Specifies the name to use on the local computer. If not specified, the file is given the remote-file name.
remotehelp
Displays help for remote commands. remotehelp [command] Parameter command - Specifies the name of the command about which you want help. If command is not specified, ftp displays a list of all remote commands.
rename
Renames remote files. rename filename newfilename Parameters filename - Specifies the file you want to rename. newfilename - Specifies the new filename.
rmdir
Deletes a remote directory. rmdir directory Parameter directory - Specifies the name of the remote directory to delete.
send
Copies a local file to the remote computer using the current file transfer type. Send is identical to put. send local-file [remote-file] Parameters local-file - Specifies the local file to copy. remote-file - Specifies the name to use on the remote computer. If not specified, the file is given the local-file name.
status
Displays the current status of FTP connections and toggles. status
trace
Toggles packet tracing; trace displays the route of each packet when running an ftp command. trace
type
Sets or displays the file transfer type. type [type-name] Parameter type-name - Specifies the file transfer type; the default is ASCII. If type-name is not specified, the current type is displayed. Notes FTP supports two file transfer types, ASCII and binary image. ASCII should be used when transferring text files. In ASCII mode, character conversions to and from the network standard character set are performed. For example, end-of-line characters are converted as necessary, based on the destination's operating system. Binary should be used when transferring executable files. In binary mode, the file is moved byte-by-byte. See also: ascii, binary.
user
Specifies a user to the remote computer. user user-name [password] [account] Parameters user-name - Specifies a user name with which to log in to the remote computer. password - Specifies the password for user-name. If not specified, but required, ftp prompts for the password. account - Specifies an account with which to log on to the remote computer. If account is not specified, but required, ftp prompts for the account.
verbose
Toggles verbose mode. If on, all ftp responses are displayed; when a file transfer completes, statistics regarding the efficiency of the transfer are also displayed. By default, verbose is on. verbose
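The interactive commands above have close equivalents in Python's standard ftplib module. The sketch below mirrors one possible session; the host name, credentials, directory, and file name are placeholders, not real endpoints:

```python
from ftplib import FTP

def mirror_session(host: str, user: str, password: str) -> list:
    """Equivalent of the interactive sequence:
    open <host>, user, cd /pub, ls, get readme.txt, bye."""
    ftp = FTP(host)                      # open <host>
    ftp.login(user, password)            # user <name> <password>
    ftp.cwd("/pub")                      # cd /pub
    names = ftp.nlst()                   # ls
    with open("readme.txt", "wb") as f:  # get readme.txt (binary type)
        ftp.retrbinary("RETR readme.txt", f.write)
    ftp.quit()                           # bye
    return names
```

Calling `mirror_session("ftp.example.com", "user", "password")` would require a reachable server; the function is shown only to map the CLI verbs onto library calls.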
HTTP Error Codes

200 OK (GET, HEAD) - Document was successfully transferred. No error.
201 Created (POST, PUT) - POST or PUT was successful. No error.
202 Accepted (GET, HEAD, DELETE, POST, PUT) - Request was accepted without error, but the request will be processed later. With DELETE, the document will be deleted as requested; with PUT, the document will be modified as requested.
203 (GET, HEAD, POST) - Request successful, but the response consists of cached or non-authoritative information.
204 (GET, HEAD, POST) - Request was successful, but there is no data to send.
301 Moved Permanently (GET, HEAD, POST, PUT) - Document has a new permanent URI. Browsers that support redirection should direct future requests to the new URI.
302 Moved Temporarily (GET, HEAD, POST, PUT) - Document has temporarily moved to a new URI. The browser should redirect this request to the new URI, but future requests should still try the original URI first.
304 Not Modified (GET) - Document has not changed since the date and time specified in the If-Modified-Since field.
400 Bad Request
401 Unauthorized - Username and password do not match an allowed username and password for a protected directory, or an encryption failure occurred.
403 Forbidden - The requested file is not readable by user nobody, or access to the file, directory, or index is prohibited in access.conf or httpd.conf.
404 Not Found - The requested document does not exist.
501 Not Implemented - The object does not support the HTTP method used (e.g., POST instead of GET).
502 - httpd is unable to spawn a child process to handle the request, either because the system is out of resources, or in accordance with configuration constraints.
503 - A timeout occurred while waiting for a response from the port specified.
HTTP Methods

GET
Implementations: "standard" hypertext document viewing; read-only CGI applications (URL based); search engines. Cached: yes.
A GET request will cause the server to respond with the entire header and body of the document specified. The URI may also include query information which can be utilized by CGI to customize the information presented or to search for particular information. While it is possible to pass write data to an application via GET, the practice is not recommended as it poses significant security risks, since URI information can easily be manipulated by the end user.

PUT
Implementations: some web publishing applications. Cached: never.
A PUT request will cause the object body data to be saved to the URI location specified. Some publishing implementations, most notoriously Microsoft FrontPage, use this as an alternative to FTP for publishing web content. Obviously, the server must be specifically configured to allow PUT requests, since they are disabled by default for security reasons.

POST
Implementations: read/write CGI applications (form based). Cached: never.
The POST method passes the object body data to the application at the specified URI for processing. This is the preferred method for passing write data to the server, since it cannot be easily manipulated.

DELETE
A DELETE request asks the server to remove the document at the specified URI.

HEAD
Implementations: web caching; manual HTTP troubleshooting.
A HEAD request causes the server to respond with the header information for the document without the Object Body information. Compare to GET, which elicits a response containing both header and body information. Via the information returned from a HEAD request, a web cache or validation tool can verify that a document exists and determine whether it has been updated since it was last retrieved.

OPTIONS
Implementations: unknown. Cached: no.
The OPTIONS method causes the server to respond with the options of what can be performed on the specified document. OPTIONS is very rarely implemented on most servers, and usually elicits an Error 500 (Internal Server Error).

LINK, UNLINK
LINK and UNLINK establish and remove relationships between the specified document and another resource; they are rarely implemented.

TRACE
When a server receives a TRACE command, it will echo the request back to indicate that it is functioning properly. The TRACE method is HTTP's rough equivalent to the layer 3 ping command. Also, when a TRACE is issued, all proxies and caching servers along the path will insert their information into the Via header field, so that upon deeper analysis, the entity which submitted the TRACE can identify if and where such equipment exists along the network path.
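The GET/HEAD contrast above can be demonstrated with Python's standard http.client and http.server modules. The tiny local server and its "hello" body are illustrative stand-ins, not part of any real deployment:

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    BODY = b"hello"
    def _headers(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(self.BODY)))
        self.end_headers()
    def do_GET(self):
        self._headers()
        self.wfile.write(self.BODY)   # GET: header plus object body
    def do_HEAD(self):
        self._headers()               # HEAD: same header, no body
    def log_message(self, *args):     # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

get_conn = http.client.HTTPConnection("127.0.0.1", port)
get_conn.request("GET", "/")
get_resp = get_conn.getresponse()
get_body = get_resp.read()            # full body arrives

head_conn = http.client.HTTPConnection("127.0.0.1", port)
head_conn.request("HEAD", "/")
head_resp = head_conn.getresponse()
head_body = head_resp.read()          # empty: headers only

server.shutdown()
```

A cache can compare the Content-Length (or a modification date) from the HEAD response against its stored copy without transferring the body.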
- Announce network errors, such as a host or entire portion of the network being unreachable, due to some type of failure. A TCP or UDP packet directed at a port number with no receiver attached is also reported via ICMP.
- Announce network congestion. When a router begins buffering too many packets, due to an inability to transmit them as fast as they are being received, it will generate ICMP Source Quench messages. Directed at the sender, these messages should cause the rate of packet transmission to be slowed. Of course, generating too many Source Quench messages would cause even more network congestion, so they are used sparingly.
- Assist troubleshooting. ICMP supports an Echo function, which just sends a packet on a round trip between two hosts. Ping, a common network management tool, is based on this feature. Ping will transmit a series of packets, measuring average round-trip times and computing loss percentages.
- Announce timeouts. If an IP packet's TTL field drops to zero, the router discarding the packet will often generate an ICMP packet announcing this fact. Traceroute is a tool which maps network routes by sending packets with small TTL values and watching the ICMP timeout announcements.
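As a concrete illustration of the Echo function that ping builds on, the sketch below constructs an ICMP Echo Request (type 8) message with the standard Internet checksum. Actually transmitting it would require a raw socket and elevated privileges, so only packet construction is shown:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """Internet checksum (RFC 1071): one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    # Type 8 (Echo Request), code 0; checksum is computed over the whole
    # message with the checksum field initially zero.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = icmp_echo_request(ident=0x1234, seq=1)
```

A receiver verifies the packet by checksumming the entire message, checksum field included; a correct packet sums to zero.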
IP Protocol Overview
IP is the Internet's most basic protocol. In order to function in a TCP/IP network, a network segment's only requirement is to forward IP packets. In fact, a TCP/IP network can be defined as a communication medium that can transport IP packets. Almost all other TCP/IP functions are constructed by layering atop IP. IP is documented in RFC 791, and IP broadcasting procedures are discussed in RFC 919. IP is a datagram-oriented protocol, treating each packet independently. This means each packet must contain complete addressing information. Also, IP makes no attempt to determine if packets reach their destination or to take corrective action if they do not. Nor does IP checksum the contents of a packet, only the IP header. IP provides several services:
- Addressing. IP headers contain 32-bit addresses which identify the sending and receiving hosts. These addresses are used by intermediate routers to select a path through the network for the packet.
- Fragmentation. IP packets may be split, or fragmented, into smaller packets. This permits a large packet to travel across a network which can only handle smaller packets. IP fragments and reassembles packets transparently.
- Packet timeouts. Each IP packet contains a Time To Live (TTL) field, which is decremented every time a router handles the packet. If TTL reaches zero, the packet is discarded, preventing packets from running in circles forever and flooding a network.
- Type of Service. IP supports traffic prioritization by allowing packets to be labeled with an abstract type of service.
- Options. IP provides several optional features, allowing a packet's sender to set requirements on the path it takes through the network (source routing), trace the route a packet takes (record route), and label packets with security features.
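The addressing, TTL, and fragmentation fields described above all sit in the fixed 20-byte IPv4 header, which can be unpacked directly; the field layout follows RFC 791:

```python
import ipaddress
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header (options, if any, follow it)."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "header_len": (ver_ihl & 0x0F) * 4,    # IHL is in 32-bit words
        "tos": tos,                            # abstract type of service
        "total_length": total_len,
        "ttl": ttl,                            # decremented by each router
        "protocol": proto,                     # 1=ICMP, 6=TCP, 17=UDP
        "more_fragments": bool(flags_frag & 0x2000),
        "fragment_offset": (flags_frag & 0x1FFF) * 8,  # offset in 8-byte units
        "src": str(ipaddress.IPv4Address(src)),
        "dst": str(ipaddress.IPv4Address(dst)),
    }
```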
ISDN
Table of Contents
Integrated Services Digital Network
I.430 Protocol
Q.921 Protocol
Q.931 Protocol
G.711 Protocol
data, and video over the same circuits. The core of the telephone network is now digital, so most ordinary telephone calls are now converted into bits and bytes, transported through digital circuits, and converted back into analog audio at the remote end. The international standard for the digital telephone network is Signaling System 7 (SS-7), a protocol suite in its own right, roughly comparable to TCP/IP. End users never see SS-7, since it is only used between telephone switches. ISDN provides a fully digital user interface to the SS-7 network, capable of transporting either voice or data. BISDN (Broadband ISDN) uses ATM instead of SS-7 as the underlying networking technology. ISDN is a complete networking technology in its own right, providing clearly defined Physical, Data Link, Network and Presentation layer protocols. For most Internet applications, though, ISDN is regarded as a fancy Data Link protocol used to transport IP packets. An ISDN interface is time division multiplexed into channels. In accordance with SS-7 convention, control and data signals are separated onto different channels. Contrast this to TCP/IP, where control packets are largely regarded as special cases of data packets and are transported over the same channel. In ISDN, the D channel is used for control, and the B channels are for data. B channels are always bi-directional 64 kbps, the standard data rate for transporting a single audio conversation; D channels vary in size. The two primary variants of ISDN are BRI (Basic Rate Interface) and PRI (Primary Rate Interface). BRI, sometimes referred to as 2B+D, provides two 64 kbps B channels and a 16 kbps D channel over a single 192 kbps circuit (the remaining bandwidth is used for framing). BRI is the ISDN equivalent of a single phone line, though it can handle two calls simultaneously over its two B channels. PRI, essentially ISDN over T1, is referred to as 23B+D and provides 23 B channels and a 64 kbps D channel.
PRI is intended for use by an Internet Service Provider, for example, multiplexing almost two dozen calls over a single pair of wires. A number of international standards define ISDN. I.430 describes the Physical layer and part of the Data Link layer for BRI. Q.921 documents the Data Link protocol used over the D channel. Q.931, one of the most important ITU standards, documents the Network layer user-to-network interface, providing call setup and breakdown, channel allocation, and a variety of optional services. Variants of Q.931 are used in both ATM and voice-over-IP. G.711 documents the standard 64 kbps audio encoding used by telcos throughout the world.
In Europe and Japan, the telco owns the NT-1 and provides the S/T interface to the customer. In North America, however, largely due to the U.S. government's unwillingness to allow telephone companies to own customer premises equipment (such as the NT-1), the U interface is provided to the customer, who owns the NT-1. This effectively produces two incompatible variants of ISDN, which some manufacturers have attempted to remedy with devices (such as the Cisco 760) containing both S/T and U jacks. Normal ISDN devices plug into the S/T interface, an RJ-45 jack carrying two pairs of wires, each pair a current loop. As current flows into the positive line, it flows out of the negative line, maintaining a net balance between the two. The two lines should be grouped together on a single twisted pair, minimizing crosstalk between signals. One pair carries signal from the TE to the NT (user to network), the other pair carries signal from the NT to the TE (network to user).
Pin 3: +
Pin 4: +
Pin 5: -
Pin 6: -
The signals transmitted over the pairs are 192 kbps digital, using an Alternate Mark Inversion (AMI) scheme. Under AMI, one binary value (1) is indicated with no signal, while the other binary value (0) is indicated with either positive or negative signal, in alternating order. Thus, binary 00110101 would be signaled as +-00+0-0. The use of AMI ensures there will be no net DC signal, an important consideration since DC voltages won't be transferred by coupling transformers. ISDN uses a 48 bit frame, transmitted 4000 times every second (once every 250 microseconds). Each frame includes several L (balancing) bits, which insert an extra positive signal if needed to DC balance the entire frame. A very similar (but not identical) frame format is used on the two pairs, with the TE to NT (user to network) signal synchronized with the NT to TE signal, delayed two bit times. The beginning of each frame is marked with an F (framing) bit, followed by a L (balancing) bit, both reversed polarity. This AMI violation provides a clear frame marker, but since two bits are both reversed, net DC balance is still maintained. In both
directions, each frame contains two 8-bit B1 channel slots and two 8-bit B2 channel slots, for a net data rate of 8 bits/slot * 2 slots/frame * 4000 frames/second = 64 kbps on each B channel. Each frame also contains four bits of D channel data, for a net D channel data rate of 4 bits * 4000 frames/second = 16 kbps. In the NT->TE direction four E (echo) bits copy back the D bits from the other direction, providing collision detection for multiple devices competing for the D channel. An 8 kbps S channel (two bits per frame; not currently used) and an A (activation) bit complete the frame structure.
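The AMI rule described above (binary 1 sent as no signal, binary 0 as alternating positive and negative pulses) is easy to sketch, and the text's own example, 00110101 becoming +-00+0-0, falls out directly:

```python
def ami_encode(bits: str) -> str:
    """Alternate Mark Inversion as described above: '1' -> no signal ('0'),
    '0' -> a pulse whose polarity alternates with each successive zero."""
    out = []
    positive = True              # assume the first zero is a positive pulse
    for b in bits:
        if b == "1":
            out.append("0")      # binary 1: no signal on the line
        else:
            out.append("+" if positive else "-")
            positive = not positive
    return "".join(out)
```

Because every zero flips polarity, a frame with an even number of zeros carries no net DC component; the L (balancing) bits handle the odd cases.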
An ISDN TE goes through several states before becoming synchronized. The state diagram makes use of several signals, illustrated below. INFO 0 means no signal. INFO 1 is an unsynchronized signal sent from the TE to the network. INFO 2 is normal ISDN framing (NT to TE) with the B, D, E, and A bits all zero (others normal). INFO 3 is normal ISDN framing (TE to NT), carrying data and synchronized to INFO 2/4. INFO 4 is normal ISDN framing (NT to TE), carrying data with the A bit one. These signals can be observed easily with an oscilloscope; the only difficulty is synchronizing the scope to observe INFO 2, 3 and 4. A signal generator sending a 4 kHz square wave to the scope's trigger will do in a pinch.
An ISDN circuit can be activated by either side of the link. TE requests activation by entering state F4 and transmitting INFO 1. NT requests activation by transmitting INFO 2, causing TE to enter state F5 (upon receiving signal) and then F6 (upon synchronizing with the framing bits). F7 is the normal data carrying state of an ISDN TE. These states have nothing to do with call setup or other operations over the B and D channels, though the TE physical layer must be in state F7 before any higher layer protocols can function.
Frame Format
Flag, bit stuffing, and FCS computation are identical to HDLC. The 16 bits of address contain a command/response (C/R) field, a SAPI (Service Access Point Identifier) and a TEI (Terminal Endpoint Identifier). TEIs are used to distinguish between several different devices using the same ISDN links. TEI 127 is broadcast; other TEI values are dynamically assigned (see below). SAPIs play the role of a protocol or port number, and
identify the higher layer protocol being used in the data field. Q.931 messages are sent using SAPI 0, SAPI 16 means X.25, and SAPI 63 is used for TEI assignment procedures. These are usually the only SAPI values used. Data transfer can occur in one of two formats: Information (I) frames or Unnumbered Information (UI) frames. UI, offering unreliable delivery, is the simpler of the two, since no sequence numbering, acknowledgements, or retransmissions are involved. I frames are numbered modulo 128; the frame number is included in the N(S) field. Acknowledgements are sent using the N(R) field, either piggybacked onto an I frame in the reverse direction, or sent explicitly in an RR or RNR frame. RR indicates that the receiver is ready for more data; RNR indicates a busy condition and places the circuit on hold awaiting a future RR. REJ is a negative acknowledgement, requesting retransmission beginning with frame N(R). Before I frames can be transferred, a SABME command initializes the sequence numbers to zero. The DISC command terminates multiframe operation. Both SABME and DISC are acknowledged with a UA. Protocol errors (undefined control field, incorrect frame length, invalid acknowledgement, etc.) are reported by sending a FRMR containing the initial fields of the erroneous frame.
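The N(S)/N(R) bookkeeping described above can be sketched as follows. This is a minimal illustration only; real LAPD also enforces a window size and retransmission timers, omitted here, and the class and attribute names are my own:

```python
class LapdSender:
    """Minimal sketch of mod-128 I-frame numbering and acknowledgement."""
    MOD = 128

    def __init__(self):
        self.vs = 0        # V(S): N(S) value for the next I frame sent
        self.va = 0        # V(A): oldest unacknowledged N(S)
        self.unacked = []  # N(S) values of I frames awaiting acknowledgement

    def send_i_frame(self):
        ns = self.vs
        self.unacked.append(ns)
        self.vs = (self.vs + 1) % self.MOD
        return ns

    def receive_ack(self, nr):
        # N(R) acknowledges all frames numbered up to, but not including, N(R)
        while self.va != nr and self.unacked:
            self.unacked.pop(0)
            self.va = (self.va + 1) % self.MOD

sender = LapdSender()
for _ in range(3):
    sender.send_i_frame()      # sends N(S) = 0, 1, 2
sender.receive_ack(2)          # acknowledges frames 0 and 1
print(sender.unacked)          # [2]
```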
TEI Management
Before any higher level (Q.931) functions can be performed, each ISDN device must be assigned at least one unique TEI value. These numbers can be preassigned (TEIs 0-63) or dynamically assigned (TEIs 64-126). Most TEI assignment is done dynamically, using the TEI management protocol. The user broadcasts an Identity request, and the network responds with an Identity assigned message containing the TEI value. Functions are also provided to verify and release TEI assignments. All TEI management functions are performed using TEI 127 (broadcast), SAPI 63, and the following 5-byte UI frame:
The reference number is a randomly generated 16-bit value used to distinguish between different ISDN devices that might simultaneously request TEI assignment. The possible message types are:
Message                      Direction        Action indicator
Identity request (1)         user->network    127
Identity assigned (2)        network->user    Assigned TEI
Identity denied (3)          network->user    Denied TEI
Identity check (4)           network->user    TEI to be checked
Identity check response (5)  user->network    TEI value(s) in use
Identity remove (6)          network->user    TEI to be removed
Identity verify (7)          user->network    TEI to be checked
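Based on the frame layout described here (SAPI 63, TEI 127, a UI control field, then the 5-byte management body), an Identity request could be assembled as below. The exact bit packing is my reading of Q.921, and the function name is invented; flags, bit stuffing, and FCS are omitted:

```python
import random
import struct

def tei_identity_request(ri=None):
    """Sketch of a TEI Identity request UI frame (flags/FCS omitted)."""
    sapi, tei = 63, 127                  # management SAPI, broadcast TEI
    addr = bytes([(sapi << 2) | 0,       # SAPI, C/R=0 (user command), EA=0
                  (tei << 1) | 1])       # TEI, EA=1
    control = bytes([0x03])              # UI frame control field
    if ri is None:
        ri = random.getrandbits(16)      # random 16-bit reference number
    body = struct.pack("!BHBB",
                       0x0F,             # layer management entity identifier
                       ri,               # reference number
                       1,                # message type: Identity request
                       (127 << 1) | 1)   # action indicator 127, extension bit
    return addr + control + body

pkt = tei_identity_request(ri=0x1234)
print(pkt.hex())
```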
Sample Exchange
Each Q.931 message contains a call reference, used to distinguish between multiple calls multiplexed over the same D channel, a message type, and various information elements (IEs) as required by the message type in question:
The most important message types are:

ALERTING (1)
IEs: Bearer capability, Channel identification, Progress indicator, Display, Signal, High layer compatibility
Direction: Called user -> network -> calling user
The called user is being alerted, i.e., "the phone is ringing".

CALL PROCEEDING (2)
IEs: Bearer capability, Channel identification, Progress indicator, Display, High layer compatibility
Direction: Called user -> network -> calling user
Call establishment is proceeding.

CONNECT (7)
IEs: Bearer capability, Channel identification, Progress indicator, Display, Date/time, Signal, Low layer compatibility, High layer compatibility
Direction: Called user -> network -> calling user
The call has gone through and been accepted.

CONNECT ACKNOWLEDGE (15)
IEs: Display, Signal
Direction: Calling user -> network -> called user

SETUP (5)
IEs: Sending complete, Repeat indicator, Bearer capability, Channel identification, Progress indicator, Network specific facilities, Display, Keypad facility, Signal, Calling party number, Calling party subaddress, Called party number, Called party subaddress, Transit network selection, Repeat indicator, Low layer compatibility, High layer compatibility
Direction: Calling user -> network -> called user
Initial message sent to initiate a call.

SETUP ACKNOWLEDGE (13)
IEs: Channel identification, Progress indicator, Display, Signal
Direction: Called user -> network -> calling user

SUSPEND (37)
IEs: Call identity
Direction: User -> network
ISDN calls can be suspended (put on hold) to allow another call to use the B channel. SUSPEND/RESUME messages manage suspended calls.

SUSPEND ACKNOWLEDGE (45)
IEs: Display
Direction: Network -> user

SUSPEND REJECT (33)
IEs: Cause, Display
Direction: Network -> user

RESUME (38)
IEs: Call identity
Direction: User -> network

RESUME ACKNOWLEDGE (46)
IEs: Channel identification, Display
Direction: Network -> user

RESUME REJECT (34)
IEs: Cause, Display
Direction: Network -> user

DISCONNECT (69)
IEs: Cause, Progress indicator, Display, Signal
A message sent from the user to request call breakdown, or from the network to indicate the call has been cleared.
RELEASE (77)
IEs: Cause, Display, Signal
A message sent to indicate the channel is being released.

RELEASE COMPLETE (90)
IEs: Cause, Display, Signal

STATUS ENQUIRY (117)
IEs: Display
Direction: User -> network
Requests a STATUS message from the network.

STATUS (125)
IEs: Cause, Call state, Display
Direction: Network -> user
Indicates the current call state in terms of the Q.931 state machine.

A simple Q.931 message exchange might go as follows:
After the Q.931 header, identifying the call and the message type, come the information elements. There are two types of IEs: single-byte and multi-byte, distinguished by their high-order bit:
The most important IEs are all multi-byte:

Bearer capability (4)
Specifies a requested service: packet or circuit mode, data rate, type of information content.

Call identity (16)
Used to identify a suspended call.

Call state (20)
Describes the current status of a call in terms of the standard Q.931 state machine.

Called party number (112)
The phone number being dialed.

Calling party number (108)
The origin phone number.

Cause (8)
The reason a call was rejected or disconnected. A sample of possible cause codes:
1   Unassigned number
3   No route to destination
6   Channel unacceptable
16  Normal call clearing
17  User busy
18  User not responding
19  User alerting; no answer
22  Number changed
27  Destination out of order
28  Invalid number format
34  No circuit/channel available
42  Switching equipment congestion

Channel identification (24)
Identifies a B channel.

Date/time (41)
Poorly defined. Not year 2000 compliant!

Display (40)
Human-readable text. Can be specified with almost any message to provide text for an LCD display, for example.

Service Profile Identification (58)
Contains a Service Profile Identifier (SPID).

Signal (52)
Provides call status tones according to the following chart:
Value      Meaning             Tone
0000 0000  Dial tone
0000 0001  Ringing
0000 0010  Intercept
0000 0011  Network congestion  480 Hz + 620 Hz; 250 ms on/250 ms off (fast busy)
0000 0100  Busy                480 Hz + 620 Hz; 500 ms on/500 ms off
0000 0101  Confirm             350 Hz + 440 Hz; repeated three times: 100 ms on/100 ms off
0000 0110  Answer              not used
0000 0111  Call waiting        440 Hz; 300 ms burst
0000 1000  Off-hook warning    1400 Hz + 2060 Hz + 2450 Hz + 2600 Hz; 100 ms on/100 ms off
0011 1111  Tones off
G.711 Protocol
G.711 is the international standard for encoding telephone audio on a 64 kbps channel. It is a pulse code modulation (PCM) scheme operating at an 8 kHz sample rate, with 8 bits per sample. According to the Nyquist theorem, which states that a signal must be sampled at twice its highest frequency component, G.711 can encode frequencies between 0 and 4 kHz. Telcos can select between two different variants of G.711: A-law and mu-law. A-law is the standard for international circuits. Each of these encoding schemes is designed in a roughly logarithmic fashion: lower signal values are encoded with finer resolution, while higher signal values are quantized more coarsely. This ensures that low amplitude signals will be well represented, while maintaining enough range to encode high amplitudes. The actual encoding doesn't use logarithmic functions, however. The input range is broken into segments, each segment using a different interval between decision values. Most segments contain 16 intervals, and the interval size doubles from segment to segment. The illustration shows three segments with four intervals in each.
Both encodings are symmetrical around zero. mu-law uses 8 segments of 16 intervals each in each of the positive and negative directions, starting with an interval size of 2 in segment 1, and increasing to an interval size of 256 in segment 8. A-law uses 7 segments. The smallest segment, using an interval of 2, is twice the size of the others (32 intervals). The remaining six segments are "normal", with 16 intervals each, increasing up to an interval size of 128 in segment 7. Thus, A-law is skewed towards representing smaller signals with greater fidelity.
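A sketch of the mu-law segment structure just described; the segment numbering from 1 and the function name are my own, and this ignores G.711's exact bias and sign handling:

```python
def mu_law_segment(magnitude):
    """Return (segment, interval_size) for a sample magnitude, using the
    scheme described above: 8 segments of 16 intervals each, with the
    interval size doubling from 2 in segment 1 to 256 in segment 8."""
    upper = 0
    for seg in range(1, 9):
        interval = 2 ** seg      # 2, 4, 8, ..., 256
        upper += 16 * interval   # top of this segment's input range
        if magnitude < upper:
            return seg, interval
    raise ValueError("magnitude out of range")

print(mu_law_segment(0))     # smallest signals fall in segment 1
print(mu_law_segment(8000))  # near full scale: segment 8
```

Note how the doubling interval sizes give small signals a much finer quantization step (2) than large ones (256).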
A world shortage of IP addresses
Security needs
Ease and flexibility of network administration
IP Addresses
In an IP network, each computer is allocated a unique IP address. In the current version of the IP protocol, IP version 4, an IP address is 4 bytes long. The addresses are usually written as a.b.c.d, with a, b, c and d each describing one byte of the address.
Since an address is 4 bytes, the total number of available addresses is 2 to the power of 32 = 4,294,967,296. This represents the TOTAL theoretical number of computers that can be directly connected to the Internet. In practice, the real limit is much smaller, for several reasons. Each physical network has to have a unique network number, comprising some of the bits of the IP address; the rest of the bits are used as a host number to uniquely identify each computer on that network. The number of unique network numbers that can be assigned in the Internet is therefore much smaller than 4 billion, and it is very unlikely that all of the possible host numbers in each network number are fully assigned. An address is thus divided into two parts: a network number and a host number. The idea is that all computers on one physical network will have the same network number - a bit like a street name; the rest of the address defines an individual computer - a bit like house numbers within a street. The size of the network and host parts depends on the class of the address, and is determined by the address's network mask. The network mask is a binary mask with 1s in the network part of the address and 0s in the host part. To allow for a range from big networks, with a lot of computers, to small networks, with a few hosts, the IP address space is divided into classes, called class A, B, C, D and E. The first byte of the address determines which class an address belongs to:
Network addresses with first byte between 1 and 126 are class A, and can have about 17 million hosts each. Network addresses with first byte between 128 and 191 are class B, and can have about 65,000 hosts each. Network addresses with first byte between 192 and 223 are class C, and can have 254 hosts each. All other addresses are class D, used for special functions, or class E, which is reserved.
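The class rules above reduce to a simple test on the first byte (the function name is illustrative):

```python
def address_class(first_byte):
    """Classify an IPv4 address by its first byte, per the ranges above."""
    if 1 <= first_byte <= 126:
        return "A"   # ~17 million hosts per network
    if 128 <= first_byte <= 191:
        return "B"   # ~65,000 hosts per network
    if 192 <= first_byte <= 223:
        return "C"   # 254 usable hosts per network
    if 224 <= first_byte <= 239:
        return "D"   # special functions
    return "E"       # reserved

print(address_class(10), address_class(172), address_class(192))
```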
Most class A and B addresses have already been allocated, leaving only class C available. This means that the total number of available addresses on the Internet is 2,147,483,774. Each major world region has an authority which is given a share of the addresses and is responsible for allocating them to Internet Service Providers (ISPs) and other large customers. Because of routing requirements, a whole class C network (256 addresses) has to be assigned to a client at a time; the clients (e.g., ISPs) are then responsible for distributing these addresses to their customers. While the number of available addresses seems large, the Internet is growing at such a pace that it will soon be exhausted. While the next generation IP protocol, IP version 6, allows for larger addresses, it will take years before the existing network infrastructure migrates to the new protocol. Because IP addresses are a scarce resource, most Internet Service Providers (ISPs) will only allocate one address to a single customer. In the majority of cases this address is assigned dynamically, so every time a client connects to the ISP a different address will be provided. Big companies can buy more addresses, but for small businesses and home users the cost of doing so is prohibitive. Because such users are given only one IP address, they can have only one computer connected to the Internet at one time. With a NAT gateway running on this single computer, it is possible to share that single address between multiple local computers and connect them all at the same time. The outside world is unaware of this division and thinks that only one computer is connected.
Security Considerations
Many people view the Internet as a "one-way street"; they forget that while their computer is connected to the Internet, the Internet is also connected to their computer. That means that anybody with Net access can potentially access resources on their computers (such as files, email, the company network, etc.). Most personal computer operating systems are not designed with security in mind, leaving them wide open to attacks from the Net. To make matters worse, many new software technologies such as Java or ActiveX have actually reduced security, since it is now possible for a Java applet or ActiveX control to take control of the computer it is running on. Many times it is not even possible to detect that such applets are running; it is only necessary to go to a Web site, and the browser will automatically load and run any applets specified on that page. The security implications of this are very serious. For home users, this means that sensitive personal information, such as emails, correspondence or financial details (such as credit card or cheque numbers) can be stolen. For business users the consequences can be disastrous; should confidential company information such as product plans or marketing strategies be stolen, this can lead to major financial losses or even cause the company to fold. To combat the security problem, a number of firewall products are available. They are placed between the user and the Internet and verify all traffic before allowing it to pass through. This means, for example, that no unauthorised user would be allowed to access the company's file or email server. The problem with firewall solutions is that they are expensive and difficult to set up and maintain, putting them out of reach for home and small business users. NAT automatically provides firewall-style protection without any special set-up, because it only allows connections that are originated on the inside network.
This means, for example, that an internal client can connect to an outside FTP server, but an outside client will not be able to connect to an internal FTP server, because it would have to originate the connection, and NAT will not allow that. It is still possible to make some internal servers available to the outside world via inbound mapping, which maps certain well-known TCP ports (e.g., 21 for FTP) to specific internal addresses, thus making services such as FTP or Web available in a controlled way. Many TCP/IP stacks are susceptible to low-level protocol attacks such as the recently-publicised "SYN flood" or "Ping of Death". These attacks do not compromise the security of the computer, but can cause the servers to crash, resulting in potentially damaging "denials of service". Such attacks can cause abnormal network events that can be used as a precursor or cloak for further security breaches. NATs that do not use the host machine's protocol stack but supply their own can provide protection from such attacks:
Administrative Considerations
IP networks are more difficult to set up than local desktop LANs; each computer requires an IP address, a subnet mask, a DNS server address, a domain name, and a default router. This information has to be entered on every computer on the network; if only one piece of information is wrong, the network connection will not function, and there is usually no indication of what is wrong. In bigger networks, the task of co-ordinating the distribution of addresses and dividing the network into subnets is so complicated that it requires a dedicated network administrator. NAT can help network administration in several ways:
It can divide a large network into several smaller ones. The smaller parts expose only one IP address to the outside, which means that computers can be added or removed, or their addresses changed, without impacting external networks. With inbound mapping, it is even possible to move services (such as Web servers) to a different computer without having to do any changes on external clients.
Some modern NAT gateways contain a dynamic host configuration protocol (DHCP) server. DHCP allows client computers to be configured automatically; when a computer is switched on, it searches for a DHCP server and obtains TCP/IP setup information. Changes to network configuration are done centrally at the server and affect all the clients; the administrator does not need to apply the change to every computer in the network. For example, if the DNS server address changes, all clients will automatically start using the new address the next time they contact the DHCP server. Many NAT gateways, including those from Netopia and Cisco, provide a way to restrict access to the Internet. Another useful feature is traffic logging; since all the traffic to and from the Internet has to pass through a NAT gateway, it can record all the traffic to a log file. This file can be used to generate various traffic reports, such as traffic breakdown by user, by site, by network connection etc. Since NAT gateways operate at the IP packet level, most of them have built-in internetwork routing capability. The internetwork they are serving can be divided into several separate subnetworks (either using different backbones or sharing the same backbone), which further simplifies network administration and allows more computers to be connected to the network.
Firewall protection for the internal network; only servers specifically designated with "inbound mapping" will be accessible from the Internet Protocol-level protection Automatic client computer configuration control
Local caching: a proxy can store frequently-accessed pages on its local hard disk; when these pages are requested, it can serve them from its local files instead of having to download the data from a remote Web server. Proxies that perform caching are often called caching proxy servers. Network bandwidth conservation: if more than one client requests the same page, the proxy can make one request only to a remote server and distribute the received data to all waiting clients.
Both these benefits only become apparent in situations where multiple clients are very likely to access the same sites and so share the same data. Unlike NAT, Web proxying is not a transparent operation: it must be explicitly supported by its clients. Due to early adoption of Web proxying, most browsers, including Internet Explorer and Netscape Communicator, have built-in support for proxies, but this must normally be configured on each client machine, and may be changed by the naive or malicious user. Web proxying has the following disadvantages:
Web content is becoming more and more dynamic, with new developments such as streaming video & audio being widely used. Most of the new data formats are not cacheable, eliminating one of the main benefits of proxying. Clients have to be explicitly set to use Web proxying; it is recommended that you use the "Automatic proxy configuration URL" in modern browsers to make changing these configurations dynamically easy. If you do not use this and instead configure a manual proxy, then any change to the proxy server will require a manual change on each computer. A proxy server operates above the TCP level and uses the machine's built-in protocol stack. For each Web request from a client, a TCP connection has to be established between the client and the proxy machine, and another connection between the proxy machine and the remote Web server. This puts a lot of strain on the proxy server machine; in fact, since Web pages are becoming more and more complicated, the proxy itself may become a bottleneck on the network. This contrasts with NAT, which operates at the packet level and requires much less processing for each connection.
NAT Operation
The basic purpose of NAT is to multiplex traffic from the internal network and present it to the Internet as if it was coming from a single computer having only one IP address. The TCP/IP protocols include a multiplexing facility so that any computer can maintain multiple simultaneous connections with a remote computer. It is this multiplexing facility that is the key to single address NAT.
To multiplex several connections to a single destination, client computers label all packets with unique "port numbers". Each IP packet starts with a header containing the source and destination addresses and port numbers:
Incoming packet received on non-NAT port:
Look for the source address and port in the mapping table.
If found, replace the source port with the previously allocated mapping port.
If not found, allocate a new mapping port.
Replace the source address with the NAT address, and the source port with the mapping port.
Incoming packet received on NAT port:
Look up the destination port number in the port mapping table.
If found, replace the destination address and port with the entries from the mapping table.
If not found, the packet is not for us and should be rejected.
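The two translation paths above can be sketched with a pair of dictionaries. The class name, starting port number, and field choices are illustrative; time-outs and checksum fix-up, discussed below, are omitted:

```python
class NatTable:
    """Minimal sketch of single-address NAT port mapping."""

    def __init__(self, nat_addr, first_port=40000):
        self.nat_addr = nat_addr      # the gateway's single public address
        self.next_port = first_port   # next free mapping port (assumption)
        self.out = {}                 # (src_addr, src_port) -> mapping port
        self.back = {}                # mapping port -> (src_addr, src_port)

    def translate_outbound(self, src_addr, src_port):
        key = (src_addr, src_port)
        if key not in self.out:       # not found: allocate a new mapping port
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return self.nat_addr, self.out[key]

    def translate_inbound(self, dst_port):
        # None means the packet is not for us and should be rejected
        return self.back.get(dst_port)

nat = NatTable("203.0.113.1")
print(nat.translate_outbound("192.168.1.5", 5000))  # ('203.0.113.1', 40000)
print(nat.translate_inbound(40000))                 # ('192.168.1.5', 5000)
```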
Each client has an idle time-out associated with it. Whenever new traffic is received for a client, its time-out is reset. When the time-out expires, the client is removed from the table. This ensures that the table is kept to a reasonable size. The length of the time-out varies but, taking into account traffic variations on the Internet, it should not go below 2-3 minutes. Most
NAT implementations can also track TCP clients on a per-connection basis and remove them from the table as soon as the connection is closed. This is not possible for UDP traffic, since it is not connection based. Many higher-level TCP/IP protocols embed client addressing information in the packets. For example, during an "active" FTP transfer the client informs the server of its IP address & port number, and then waits for the server to open a connection to that address. NAT has to monitor these packets and modify them on the fly to replace the client's IP address (which is on the internal network) with the NAT address. Since this changes the length of the packet, the TCP sequence/acknowledge numbers must be modified as well. Most protocols can be supported within the NAT; some protocols, however, may require that the clients themselves are made aware of the NAT and that they participate in the address translation process. [Or the NAT must be protocol-sensitive so that it can monitor or modify the embedded address or port data.] Because the port mapping table relates complete connection information - source and destination addresses and port numbers - it is possible to validate any or all of this information before passing incoming packets back to the client. This checking helps to provide effective firewall protection against Internet-launched attacks on the private LAN. Each IP packet also contains checksums that are calculated by the originator. They are recalculated and compared by the recipient to see if the packet has been corrupted in transit. The checksums depend on the contents of the packet. Since the NAT must modify the packet addresses and port numbers, it must also recalculate and replace the checksums; before doing so, it must check for, and discard, any corrupt packets to avoid converting a bad packet into a good one. Careful design in the NAT software can ensure that this extra processing has a minimal effect on the gateway's throughput.
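The checksum a NAT must recompute is the standard one's-complement Internet checksum; a minimal sketch, checked against the worked example in RFC 1071:

```python
def inet_checksum(data: bytes) -> int:
    """One's-complement Internet checksum over 16-bit words (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length data with a zero byte
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:                  # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# The worked example from RFC 1071 (words 0001 f203 f4f5 f6f7):
print(hex(inet_checksum(bytes.fromhex("0001f203f4f5f6f7"))))  # 0x220d
```

In practice a NAT would update the checksum incrementally rather than recomputing it from scratch, but the full computation is the clearest illustration.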
However, because each packet in a NAT network must be translated when it leaves and enters the network, the larger the network, the slower things will run. The efficiency and processing power of the NAT gateway can greatly enhance or degrade the performance of the network. Also, because the NAT gateway must act as a stand-in for each device behind it, there is a limit to how many devices can be run behind it before all 65,000 ports are in use.
Conclusion
As the Internet continues to expand at an ever-increasing rate, Network Address Translation offers a fast and effective way to expand secure Internet access into existing and new private networks, without having to wait for a major new IP addressing structure. It offers greater administrative flexibility and performance than the alternative application-level proxies, and is becoming the de facto standard for shared access.
Scalability. OSPF is specifically designed to operate with larger networks. It does not impose a hop-count restriction and permits its domain to be subdivided for easier management.
Full subnetting support. OSPF fully supports subnetting, including VLSM and noncontiguous subnets.

Hello packets. OSPF uses small "hello" packets to verify link operation without transferring large tables. In stable networks, large updates occur only once every 30 minutes.

TOS routing. OSPF can route packets by different criteria based on their Type Of Service (TOS) field. For example, file transfers could be routed over a satellite link while terminal I/O could avoid such high delays. This requires cooperative applications on the end systems.

Tagged routes. Routes can be tagged with arbitrary values, easing interoperation with EGPs, which can tag OSPF routes with AS numbers.
OSPF has some disadvantages as well. Chief among them are its complexity and its demands on memory and computation. Although link-state protocols are not difficult to understand, OSPF muddles the picture with plenty of options and features. OSPF divides its routing domain into areas. Area 0, the backbone, is required. This divides interior routing into two levels. If traffic must travel between two areas, the packets are first routed to the backbone. This may cause non-optimal routes, since interarea routing is not done until the packet reaches the backbone. Once there, it is routed to the destination area, which is then responsible for final delivery. This layering permits addresses to be consolidated by area, reducing the size of the link state databases. Small networks can operate with a single OSPF area, which must be area 0. OSPF divides networks into several classes, including point-to-point, multiaccess, and non-broadcast multiaccess. A serial link connecting two routers together would be a point-to-point link, while an Ethernet or Token Ring segment would be a multiaccess link. A Frame Relay or X.25 cloud would be classified as non-broadcast multiaccess. Multiaccess networks (like Ethernet) use a designated router (DR) to avoid the problem of each router forming a link with every other router on an Ethernet, resulting in an N^2 explosion in the number of links. Instead, the DR manages all the link state advertisements for the Ethernet. Selecting the DR requires an election process, during which a Backup Designated Router (BDR) is also selected. OSPF provides a priority feature to help the network engineer influence the choice of DR and BDR, but in practice this is difficult. Link layer multicasting is also used, if available, to avoid broadcasts and better target routing updates.
Non-broadcast multiaccess networks (like X.25) also use the designated router concept, but since broadcasts (and presumably multicasts) are not supported, the identity of neighboring routers must be specified manually. A DR on such a network without a complete list of neighbors will cause a loss of connectivity, even though the network is otherwise functional. If possible, I recommend configuring such networks as a collection of point-to-point links, simply to avoid the intricacies of DR election. OSPF's primary means of verifying continuing operation of the network is via its Hello Protocol. Every OSPF speaker sends small hello packets out each of its interfaces every ten seconds. It is through receipt of these packets that OSPF neighbors initially learn of each other's existence. Hello packets are not forwarded or recorded in the OSPF database, but if none are received from a particular neighbor for forty seconds, that neighbor is marked down. LSAs are then generated marking links through a down router as down. The hello timer values can be configured, though they must be consistent across all routers on a network segment. Link state advertisements also age. The originating router readvertises an LSA after it has remained unchanged for thirty minutes. If an LSA ages to more than an hour, it is flushed from the databases. These timer values are called architectural constants by the RFC.
If a link goes down for twenty seconds, then comes back up, OSPF doesn't notice. If a link flaps constantly, but at least one of every four Hello packets makes it across, OSPF doesn't notice. If a link goes down for anywhere from a minute to half an hour, OSPF floods an LSA when it goes down, and another LSA when it comes back up. If a link stays down for more than half an hour, LSAs originated by remote routers (that have become unreachable) begin to age out. When the link comes back up, all these LSAs will be reflooded. If a link is down for more than an hour, any LSAs originated by remote routers will have aged out and been flushed. When the link comes back up, it will be as if it were brand new.
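The scenarios above amount to comparing the outage length against three timers; the function and its return strings are my own shorthand for the behaviour described:

```python
def ospf_reaction(down_seconds):
    """Classify OSPF's response to a link outage, per the timers above."""
    DEAD_INTERVAL = 40       # four missed 10-second hellos
    LSA_REFRESH = 30 * 60    # unchanged LSAs readvertised every 30 minutes
    LSA_MAX_AGE = 60 * 60    # LSAs older than an hour are flushed

    if down_seconds < DEAD_INTERVAL:
        return "unnoticed"
    if down_seconds <= LSA_REFRESH:
        return "one LSA flood on failure, another on recovery"
    if down_seconds <= LSA_MAX_AGE:
        return "remote LSAs start aging out; refloods on recovery"
    return "remote LSAs flushed; link relearned as brand new"

print(ospf_reaction(20))   # unnoticed
print(ospf_reaction(300))  # one LSA flood on failure, another on recovery
```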
Ping
The most useful software tool for testing Internet operation at the IP level is ping, one of the most useful network debugging tools available. It takes its name from a submarine sonar search - you send a short sound burst and listen for an echo - a ping coming back. In an IP network, `ping' sends a short data burst - a single packet - and listens for a single packet in reply. Since this tests the most basic function of an IP network (delivery of a single packet), it's easy to see how you can learn a lot from some `pings'. Ping is implemented using the required ICMP Echo function, documented in RFC 792, which all hosts should implement. Of course, administrators can disable ping messages (this is rarely a good idea, unless security considerations dictate that the host should be unreachable anyway), and some implementations have (gasp) even been known not to implement all required functions. However, ping is usually a better bet than almost any other network software. Many versions of ping are available. For the remainder of this discussion, I assume use of BSD UNIX's ping, a freely available, full-featured ping available for many UNIX systems. Most PC-based pings do not have the advanced features I describe. As always, read the manual for whatever version you use.
Ping places a unique sequence number on each packet it transmits, and reports which sequence numbers it receives back. Thus, you can determine if packets have been dropped, duplicated, or reordered. Ping checksums each packet it exchanges, so you can detect some forms of damaged packets. Ping places a timestamp in each packet, which is echoed back and can easily be used to compute how long each packet exchange took - the Round Trip Time (RTT). Ping reports other ICMP messages that might otherwise get buried in the system software. It reports, for example, if a router is declaring the target host unreachable.
Some routers may silently discard undeliverable packets. Others may believe a packet has been transmitted successfully when it has not been. (This is especially common over Ethernet, which does not provide link-layer acknowledgments.) Therefore, ping may not always provide reasons why packets go unanswered. Ping cannot tell you why a packet was damaged, delayed, or duplicated. It cannot tell you where this happened either, although you may be able to deduce it. Ping cannot give you a blow-by-blow description of every host that handled the packet and everything that happened at every step of the way. It is an unfortunate fact that no software can reliably provide this information for a TCP/IP network.
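Ping's probe is just an ICMP Echo Request (type 8, per RFC 792). A sketch of constructing one follows; actually sending it requires a raw socket and usually root privileges, so only packet construction is shown, and the identifier/sequence values are illustrative:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """One's-complement Internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident, seq, payload=b""):
    """ICMP Echo Request: type 8, code 0, checksum, identifier, sequence."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum zero first
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(ident=0x1234, seq=1, payload=b"ping")
# A correctly checksummed ICMP message verifies to zero:
print(inet_checksum(pkt))  # 0
```

The echoed reply carries the same identifier, sequence number, and payload back, which is exactly what lets ping match replies to requests and detect loss, duplication, and reordering.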
Using ping
Ping should be your first stop for network troubleshooting. Having problems transferring a file with FTP? Don't fire up your packet analyzer just yet. Leave your TDR in the box for now. Relax. Put on some Yanni. Don't even ``su'' - ping is a non-privileged command on most systems. Start one running and just watch it for at least two minutes. That's enough time for most periodic network problems to show themselves. Once you've seen about a hundred packets, you should have a good feel for how the host is responding. Are the round-trip times consistent? Seeing any packet loss? Are the TTL values sane? Start pinging other hosts. Try the machine next to you - the problem might be closer than you think. Try the last router - maybe the remote system is overloaded (especially if it's a popular Internet site). Don't know what the last router is? Use traceroute or guess - changing the last number in the IP address to 1 usually gets you something interesting. Check other sites with similar network topologies (other remote LAN sites, other Internet sites, or other sites using the same backbone). Starting to learn something about how your network is responding? Good. And - oh, yeah - go check that FTP. It's probably done by now.

Here's a list of common BSD ping options, and when you might want to use them:

-c count
Send count packets and then stop (the other way to stop is to type CTRL-C). This option is convenient for scripts that periodically check network behavior.

-f
Flood ping. Send packets as fast as the receiving host can handle them, at least one hundred per second. I've found this most useful to stress a production network being tested during its down-time. Fast machines with fast Ethernet interfaces (like SPARCs) can essentially shut down a network with flood ping, so use this with caution.

-l preload
Send preload packets as fast as possible, then fall into the normal mode of behavior. Good for finding out how many packets your routers can quickly handle, which is in turn good for diagnosing problems that only appear with large TCP window sizes.

-n
Numeric output only. Use this when, in addition to everything else, you've got nameserver problems and ping is hanging while trying to give you a nice symbolic name for the IP addresses.

-p pattern
Pattern is a string of hexadecimal digits with which to pad the end of the packet. This can be useful if you suspect data-dependent problems, as links have been known to fail only when certain bit patterns are presented to them.

-R
Use IP's Record Route option to determine what route the ping packets are taking. There are many problems with using this, not the least of which is that the option is placed on the request, and the target host is under no obligation to place a corresponding option on the reply. Consider yourself lucky if this works.

-r
Bypass the routing tables. Use this when, in addition to everything else, you've got routing problems and ping can't find a route to the target host. This only works for hosts that can be reached directly without using any routers.

-s packetsize
Change the size of the test packets. Try it - why not? Check large packets, small packets (the default), very large packets that must be fragmented, packets that aren't a neat power of two. Read the manual to find out exactly what you're specifying here: BSD ping doesn't count either IP or ICMP headers in packetsize.

-v
Verbose output. You see other ICMP packets that are not normally considered ``interesting'' (and rarely are).
[mauve]:[10:03pm]:[/home/rnejdl]> ping -c10 localhost
PING localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=4 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=5 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=6 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=7 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=8 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=9 ttl=255 time=2 ms

--- localhost ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max = 2/2/2 ms
[mauve]:[10:03pm]:[/home/rnejdl]>
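Watching for loss, duplication, and RTT consistency in output like the above is mechanical enough to script. Here is a small, illustrative Python sketch (not part of the original text) that summarizes BSD-style ping output; the regular expression assumes the "icmp_seq=... ttl=... time=... ms" line format shown in these transcripts.

```python
import re

def ping_stats(output: str) -> dict:
    """Summarize BSD-style ping output: RTT spread and missing sequence numbers."""
    rtts, seqs = [], []
    for m in re.finditer(r"icmp_seq=(\d+) ttl=\d+ time=([\d.]+) ms", output):
        seqs.append(int(m.group(1)))
        rtts.append(float(m.group(2)))
    if not rtts:
        return {"received": 0}
    # Any sequence number between the first and last seen, but never echoed, was lost.
    lost = sorted(set(range(seqs[0], seqs[-1] + 1)) - set(seqs))
    return {"received": len(rtts), "lost": lost,
            "min": min(rtts), "avg": sum(rtts) / len(rtts), "max": max(rtts)}
```

Feeding it a capture with gaps in the sequence numbers immediately shows which packets went unanswered, which is exactly the judgment you otherwise make by eye.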
The next session shows a more interesting example - a router on the remote side of a medium-speed (128 Kbps) link. The initial timings show consistent link behavior. However, about 50 seconds into the trace, we see much greater fluctuation in the RTT, which approaches one second for several packets. From packet 53 to 54, we see a factor of 26 reduction in RTT. But since reductions in RTT rarely cause problems, this is not as troublesome as the change from packet 54 to 55, a factor of 7 increase in RTT. So what should the RTT be? Well, we're transferring 56 data bytes, plus an 8-byte ICMP header (64 ICMP bytes), plus a 20-byte IP header - 84-byte packets. At 128 kilobits per second, 84 bytes should require about 84*(8/128000) = 5.25 ms to transmit. Since the packet has to go both ways, we would expect round-trip times of 10-15 ms once processing time is added. None of the observed values are even that low; clearly there are problems with this link. More than anything else, it is simply overcrowded.
[mauve]:[10:03pm]:[/home/rnejdl]> ping sl-stk-3-S17-128k.sprintlink.net
PING sl-stk-3-S17-128k.sprintlink.net (144.228.202.1): 56 data bytes
64 bytes from 144.228.202.1: icmp_seq=0 ttl=254 time=35.653 ms
64 bytes from 144.228.202.1: icmp_seq=1 ttl=254 time=28.797 ms
64 bytes from 144.228.202.1: icmp_seq=2 ttl=254 time=28.559 ms
64 bytes from 144.228.202.1: icmp_seq=3 ttl=254 time=39.533 ms
64 bytes from 144.228.202.1: icmp_seq=4 ttl=254 time=28.621 ms
64 bytes from 144.228.202.1: icmp_seq=5 ttl=254 time=28.159 ms
...
64 bytes from 144.228.202.1: icmp_seq=50 ttl=254 time=848.810 ms
64 bytes from 144.228.202.1: icmp_seq=51 ttl=254 time=828.579 ms
64 bytes from 144.228.202.1: icmp_seq=52 ttl=254 time=753.865 ms
64 bytes from 144.228.202.1: icmp_seq=53 ttl=254 time=778.202 ms
64 bytes from 144.228.202.1: icmp_seq=54 ttl=254 time=29.913 ms
64 bytes from 144.228.202.1: icmp_seq=55 ttl=254 time=220.931 ms
64 bytes from 144.228.202.1: icmp_seq=56 ttl=254 time=173.661 ms
64 bytes from 144.228.202.1: icmp_seq=57 ttl=254 time=144.990 ms
64 bytes from 144.228.202.1: icmp_seq=58 ttl=254 time=28.520 ms
...
[mauve]:[10:03pm]:[/home/rnejdl]>
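The serialization-delay arithmetic used above generalizes to any packet size and link speed. A short Python sketch of that calculation (the 56-byte payload and 128 kbps figures come from the example; the helper name is my own):

```python
def serialization_delay_ms(payload_bytes: int, link_bps: int,
                           icmp_header: int = 8, ip_header: int = 20) -> float:
    """One-way time, in milliseconds, to clock an ICMP echo packet onto the wire."""
    total_bytes = payload_bytes + icmp_header + ip_header
    return total_bytes * 8 / link_bps * 1000

# The example's 84-byte packet on a 128 kbps link:
one_way = serialization_delay_ms(56, 128_000)     # 5.25 ms
# A round trip crosses the link twice, so 10-15 ms is a reasonable
# expectation once router processing and propagation are added.
```

Comparing this floor against observed RTTs is a quick sanity check: anything wildly above it points at queueing, congestion, or a slower path than you assumed.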
The most likely culprit here is a router queueing packets for a relatively slow link; the queue simply grew too large. Early TCP implementations dropped packets at a truly alarming rate, but things have gotten better. Even so, there are common situations, typically involving crowded wide-area networks, in which even modern TCP implementations can't operate steady-state without dropping packets. There's no reason to pull your hair out over this, since TCP will retransmit missing data, but it won't make your network run faster. Also, if you have fast links that aren't showing much congestion, the cause of trouble may be elsewhere - link-level failures are the next most common cause of packet loss. I'd suggest using the techniques mentioned above to narrow down as much as possible where packets are being dropped, and trying to understand why this is happening, even if fixing it is beyond your control.

Fluctuating Round Trip Times

Another fact of life, caused by pretty much the same things that cause packet loss. Again, not serious cause for alarm, but don't expect optimum performance from TCP. Remember that TCP generates an internal RTT estimate that affects protocol behavior. If the actual RTT changes too much, TCP may never be able to make a satisfactory estimate. Both dropped packets and RTT fluctuations may occur in a periodic fashion - a batch of slow packets every 30 seconds, for instance. If you see this symptom, check for routing updates or other periodic traffic with the same period as the problem. Poor network performance can often be traced to slow links clogged with various kinds of automated updates.

Connectivity that comes and goes

Again, look for periods between problems that are multiples of some common number - 10 and 15 seconds are good things to check. If a router is sending error messages when connectivity disappears, that router is the first place to start looking. However, just because you can always reach hop 5, for instance, doesn't mean that your problem isn't hop 3. Hop 3's router may be erroneously timing out routing information for your target while handling hop 5's routing information just fine. Of course, check hop 5 first if that's where your packets seem to check in but never leave.

Ping works fine but TELNET/FTP/Mail/News/... doesn't

Good news - it's (probably) not a hardware problem. Use a packet tracer of some sort to see what TTL values are being generated by your hosts; if they're too low, you can see this kind of behavior. It could also be a software or configuration problem - can other machines connect to the offending host? Can it talk to itself? On the other hand, it could be a hardware problem, if one of your links is showing data-dependent behavior. The telltale symptom is when FTP (for example) can transfer some files fine, but others always have problems. Once you've found an offending file, try breaking it into smaller and smaller pieces to see which ones don't work. If the pieces become too small to trigger the problem, duplicate them several times to get a larger file. Once you've found a small pattern that you suspect is causing your grief, see if you can load it into ping packets (BSD ping's `-p' switch) and reproduce the trouble.
POP3 Commands
Command  Syntax                     Description

USER     user Username              Provides a username to the POP3 server. Must be followed by a PASS command.
PASS     pass Password              Provides a password to the POP3 server. Must follow a USER command.
STAT     stat                       Returns the number of messages in, and the total size of, the mailbox.
LIST     list [MessageNumber]       Lists the message number and size of each message. If a message number is specified, returns the size of that message only.
LAST     last                       Returns the message number of the last message not marked as read or deleted.
RETR     retr MessageNumber         Returns the full text of the specified message and marks it as read.
TOP      top MessageNumber Lines    Returns the specified number of lines from the specified message.
DELE     dele MessageNumber         Marks the specified message for deletion.
RSET     rset                       Resets any messages marked as read or deleted to the standard unread state.
NOOP     noop                       Returns a simple acknowledgement without performing any function.
APOP     apop Username Digest       Allows a secure method of POP3 authentication in which a cleartext password is never sent. Instead, the client computes an MD5 digest from the server's greeting (which includes a process ID and timestamp) and the password, and sends that to the POP3 server.
QUIT     quit                       Ends the POP3 session.
stat
+OK 2 773
list
+OK 2 messages (773 octets)
1 391
2 382
.
retr 1
+OK 391 octets
Return-Path: griselda
Received: (from griselda@localhost) by arjuna.mindflip.com (8.9.3/8.9.3) id DAA84577 for matthew; Tue, 12 Oct 1999 03:19:21 GMT (envelope-from griselda)
Date: Tue, 12 Oct 1999 03:19:21 GMT
From: Test User
Message-Id: <199910120319.DAA84577@arjuna.mindflip.com>
To: matthew
Subject: Test
X-UIDL: 858de06153a9e0e3c235a4a54c4f56d3
Status: RO

This is a test.
.
retr 2
+OK 382 octets
Return-Path: griselda
Received: (from griselda@localhost) by arjuna.mindflip.com (8.9.3/8.9.3) id DAA84593 for matthew; Tue, 12 Oct 1999 03:21:28 GMT (envelope-from griselda)
.
dele 2
+OK Message 2 has been deleted.
quit
+OK Pop server at arjuna.mindflip.com signing off.
Connection closed by foreign host.
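The responses in a session like the one above follow a simple convention (status word, then arguments; multi-line bodies terminated by a lone dot). As an illustrative sketch - the function names are my own, not part of any standard library - here is how a client might parse the STAT and LIST replies shown in the transcript:

```python
def parse_stat(line: str):
    """Parse a STAT response such as '+OK 2 773' -> (message_count, mailbox_octets)."""
    parts = line.split()
    if parts[0] != "+OK":
        raise ValueError(f"server reported an error: {line!r}")
    return int(parts[1]), int(parts[2])

def parse_list(body_lines):
    """Parse a multi-line LIST body ('1 391', '2 382', ..., '.') -> {number: octets}."""
    sizes = {}
    for line in body_lines:
        if line.strip() == ".":          # a lone dot terminates a multi-line response
            break
        number, octets = line.split()
        sizes[int(number)] = int(octets)
    return sizes
```

For real sessions you would not hand-roll this: Python's standard poplib module wraps the same command set (user, pass_, stat, list, retr, dele, quit).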
Finally, RIP offers weak security. RIP itself has no security features, though some developers have produced RIP implementations that will only accept updates from configured hosts, for example. Various security attacks can be imagined.
However, RIP has several benefits. It is in widespread use - the only interior gateway protocol that can be counted on to run nearly everywhere. Configuring a RIP system requires little effort beyond setting path costs. Finally, RIP uses an algorithm that does not impose serious computation or storage requirements on hosts or routers.
Introduction
When we browse the Internet, a physical connection allows us to connect, either through a modem or, in the case of a dedicated connection, through an Ethernet card. A TCP/IP stack allows us to pass traffic and resolve web site names to IP addresses. Finally, applications such as Netscape and Eudora allow us to view web sites and receive our email.

The modem or Ethernet function has two parts. The modem or Ethernet drivers provide the computer with a way to communicate with the hardware. The PPP connection, also known as Dial-up Networking, allows your computer to access the modem. These two components provide the basis of getting a connection to the Internet.

The TCP/IP stack allows the computer to pass traffic across the link to the Internet in a meaningful way. That is, the TCP/IP stack allows your computer to speak the same "language" as the equipment at the other end of your connection. The TCP/IP stack also allows you to resolve friendly host names, such as www.verio.net, into an IP (Internet Protocol) address. Without the TCP/IP stack, we would be forced to go to each web site by its IP address instead of a name!

Finally, the applications allow us to interact with friendly software that interprets HTML into web pages, exchanges e-mail with mail servers, retrieves and posts news articles, and downloads files from FTP servers. Without these programs, the Internet would be much more difficult to navigate.
Each layer should perform a well defined function. The function of each layer should be chosen in accordance with developing internationally standardized protocols. The layer boundaries should be chosen to minimize the information flow across the interfaces. The number of layers should be large enough that distinct functions need not be thrown together in the same layer out of necessity, and small enough that the architecture does not become unwieldy.
Having a way of categorizing each factor in an Internet connection makes it easier for us to do our jobs as troubleshooters. We all inherently understand that if the modem is not plugged in, you're not going to be able to get your e-mail. The OSI model allows us to follow that logic further: for example, if you can browse the web by IP address but can't reach websites by name, you know that the problem is not at the Network layer but at a higher layer - name resolution is a service provided above it.
Imagine that System A is requesting information from System B. System A makes an HTTP (Layer 7) request, which gets prepended with a header and appended with a trailer. Layer 6 specifies whether it's a request for a GIF or an HTML document, and treats the Layer 7 header, data, and trailer as its own data, prepending its own header and appending its own trailer. The same treatment happens at Layer 5, and so on. System B receives the request at Layer 1 and begins the decapsulation process, stripping the Layer 1 header and trailer off to reveal the Layer 2 information, and so forth, all the way up to the 7th layer.
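The nesting order described above can be sketched in a few lines of Python. This is purely illustrative - the bracketed layer markers are made up, not real protocol headers - but it shows why decapsulation must strip layers in the opposite order from encapsulation:

```python
# Illustrative layer labels only; real stacks attach binary headers/trailers.
LAYERS = ["L7", "L6", "L5", "L4", "L3", "L2", "L1"]

def encapsulate(payload: str) -> str:
    """Wrap the payload layer by layer: L7 innermost, L1 outermost."""
    for layer in LAYERS:
        payload = f"[{layer}h]{payload}[{layer}t]"
    return payload

def decapsulate(frame: str) -> str:
    """Strip layers in reverse: L1 first, revealing each inner layer in turn."""
    for layer in reversed(LAYERS):
        head, tail = f"[{layer}h]", f"[{layer}t]"
        assert frame.startswith(head) and frame.endswith(tail), "malformed frame"
        frame = frame[len(head):-len(tail)]
    return frame
```

Running an HTTP-like request through both functions returns it unchanged, which is the whole point: each layer only ever inspects its own header and trailer.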
Application
The application layer interacts with software applications (such as Netscape or Outlook Express) that implement a communicating component. Such application programs are outside the scope of the OSI model, but they translate an end user's actions into Layer 7 requests. Application layer functions typically include the following:
Identifying communication partners - The application layer identifies and determines the availability of communication partners for an application with data to transmit.
Determining resource availability - The application layer must determine whether sufficient network resources are available for the requested communication.
Synchronizing communication - Communication between applications requires cooperation that is managed by the application layer.
Example: The Application layer is responsible for identifying that there is a web server answering on port 80 in order for HTTP communication to happen.
Presentation
The presentation layer provides a variety of encoding and encryption functions that are applied to the application layer data. These functions ensure that information sent from the application layer of one system will be readable by the application layer of another system. Some examples of presentation layer encoding and encryption schemes follow:
Conversion of character representation formats - Conversion schemes are used to exchange information with systems using different text and data representations (such as EBCDIC and ASCII).
Common data representation formats - The use of standard image, sound, and video formats (like JPEG, MPEG, and RealAudio) allows the interchange of application data between different types of computer systems.
Common data compression schemes - The use of standard data compression schemes (like WinZip or GZip) allows data that is compressed at the source device to be properly decompressed at the destination.
Common data encryption schemes - The use of standard data encryption schemes allows data encrypted at the source device to be properly decrypted at the destination.
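The EBCDIC/ASCII conversion mentioned above is easy to demonstrate. Python's standard codecs include cp500, one of the EBCDIC code pages, so the same text can be shown to have two entirely different wire representations (this is an illustration of the presentation-layer idea, not code from the original text):

```python
text = "HELLO"

# The same characters, encoded two different ways:
ebcdic_bytes = text.encode("cp500")   # EBCDIC (code page 500)
ascii_bytes = text.encode("ascii")

# Different bytes on the wire...
assert ebcdic_bytes != ascii_bytes
# ...but each side, decoding with its own convention, recovers the same text.
assert ebcdic_bytes.decode("cp500") == ascii_bytes.decode("ascii") == "HELLO"
```

This is exactly the presentation layer's job: agree on a common representation so that the bytes one system emits mean the same thing to the system that receives them.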
Session
The session layer establishes, manages, and terminates communication sessions between presentation layer entities. Communication sessions consist of service requests and service responses that occur between applications located in different network devices. These requests and responses are coordinated by protocols implemented at the session layer.
For example, SQL is a Session layer application that manages multiple queries to the SQL database. It's what allows multiple people to log in to, say, the Intranet at the same time.
Transport
The transport layer implements reliable internetwork data transport services that are transparent to upper layers. Transport layer functions typically include the following:
Flow control - Flow control manages data transmission between devices so that the transmitting device does not send more data than the receiving device can process.
Sliding window - The sliding window allows the receiving computer to dictate to the sending computer how many packets the receiver is capable of accepting at one time.
Multiplexing - Multiplexing allows data from several applications to be transmitted onto a single physical link.
Virtual circuit management - Virtual circuits are established, maintained, and terminated by the transport layer.
Three-way handshake - The three-way handshake is a connection establishment protocol. First, host A sends a SYN segment to host B to ask it to get ready to establish a TCP connection. Second, when host B receives the SYN segment and is ready to start the TCP session, it sends a SYN and ACK segment back to host A; this ACK confirms the arrival of the first SYN segment. Finally, host A sends an ACK segment for the SYN and ACK segment that host B sent.
Error checking and recovery - Error checking mechanisms detect transmission errors. Error recovery involves taking an action (such as requesting that data be retransmitted) to resolve any errors that occur.
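The sliding-window idea above can be sketched as a toy sender that never allows more than the advertised window of unacknowledged segments in flight. This is a deliberately simplified model (no loss, no retransmission, and the ACK callback is a stand-in for the network), not a real transport implementation:

```python
from collections import deque

def sliding_window_send(segments, window, on_ack):
    """Toy windowed sender: at most `window` unacked segments in flight.
    `on_ack(seq)` models the receiver acknowledging segment `seq`."""
    in_flight = deque()
    log = []
    for seq, _segment in enumerate(segments):
        while len(in_flight) >= window:        # window full: must wait for an ACK
            acked = in_flight.popleft()
            on_ack(acked)
            log.append(("ack", acked))
        in_flight.append(seq)
        log.append(("send", seq))
    while in_flight:                           # drain remaining ACKs
        acked = in_flight.popleft()
        on_ack(acked)
        log.append(("ack", acked))
    return log
```

With a window of 2 and three segments, the sender transmits two segments back to back, then must collect an ACK before sending the third - which is precisely how the receiver paces a faster sender.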
The two most common Transport layer protocols are TCP and UDP.

Common Transport Layer Ports

21   FTP
22   SSH
23   Telnet
25   SMTP
53   DNS
80   HTTP
110  POP3
143  IMAP
443  HTTPS
Network
The network layer provides routing and related functions that allow multiple data links to be combined into an internetwork. This is accomplished by the logical addressing (as opposed to the physical addressing) of devices. The network layer supports both connection-oriented and connectionless service from higher-layer protocols.
Common protocols at the Network layer are BGP and OSPF. RIP is another Network layer protocol, but it is not used on larger networks because of its inefficiency.
Data Link
The data link layer is where logical information (i.e., IP packets) is framed and handed to the physical layer for transmission. Frame Relay, ATM, and DSL all work at the Data Link layer. Different data link layer specifications define different network and protocol characteristics, including the following:
Physical addressing - Physical addressing (as opposed to network addressing) defines how devices are addressed at the data link layer.
Network topology - Data link layer specifications often define how devices are to be physically connected (such as in a bus or a ring topology).
Error notification - Error notification involves alerting upper layer protocols that a transmission error has occurred.
Sequencing of frames - Sequencing of data frames involves the reordering of frames that are transmitted out of sequence.
Flow control - Flow control involves moderating the transmission of data so that the receiving device is not overwhelmed with more traffic than it can handle at one time.
Logical Link Control Sub-layer

The Logical Link Control (LLC) sublayer of the data link layer manages communications between devices over a single link of a network. LLC is defined in the IEEE 802.2 specification, which defines a number of fields in data link layer frames that allow multiple higher-layer protocols to share a single physical data link. LLC supports both connectionless and connection-oriented services used by higher-layer protocols.

Media Access Control Sub-layer

The Media Access Control (MAC) sublayer of the data link layer manages protocol access to the physical network medium. The IEEE MAC specification defines MAC addresses, which allow multiple devices to uniquely identify one another at the data link layer.
Physical
The physical layer defines the electrical, mechanical, procedural, and functional specifications for activating, maintaining, and deactivating the physical link between communicating network systems. Physical layer specifications define such characteristics as voltage levels, timing of voltage changes, physical data rates, maximum transmission distances, and the physical connectors to be used. Common examples of things that work at the Physical layer are fiber optic cables, CAT5 (Ethernet) cables, and copper twisted pair.
The whole point of the OSI model is to make our jobs easier through classification and delineation of functions. Ultimately, the easiest way to use the seven-layer model is to figure out what the user can do on the Net, then go up one layer and see whether they can perform the functions that are supposed to be performed at that layer. For example:
Is the router plugged in? What lights are on? If the router is not a) plugged in to the electrical outlet and b) plugged in to the ISDN jack, the user won't be able to ping. If the user can ping but can't browse the Internet, can the user visit a website by IP address? If the user's name resolution configuration is incorrect, they will not be able to translate a name to an IP address, and therefore won't be able to get mail, either.
Streams - TCP data is organized as a stream of bytes, much like a file. The datagram nature of the network is concealed. A mechanism (the Urgent Pointer) exists to let out-of-band data be specially flagged.
Reliable delivery - Sequence numbers are used to coordinate which data has been transmitted and received. TCP will arrange for retransmission if it determines that data has been lost.
Network adaptation - TCP will dynamically learn the delay characteristics of a network and adjust its operation to maximize throughput without overloading the network.
Flow control - TCP manages data buffers and coordinates traffic so its buffers will never overflow. Fast senders will be stopped periodically to keep up with slower receivers.
Full-duplex Operation
No matter what the particular application, TCP almost always operates full-duplex. The algorithms described below operate in both directions, in an almost completely independent manner. It's sometimes useful to think of a TCP session as two independent byte streams, traveling in opposite directions. No TCP mechanism exists to associate data in the forward and reverse byte streams. Only during connection start and close sequences can TCP exhibit asymmetric behavior (i.e. data transfer in the forward direction but not in the reverse, or vice versa).
Sequence Numbers
TCP uses a 32-bit sequence number that counts bytes in the data stream. Each TCP packet contains the starting sequence number of the data in that packet and the sequence number (called the acknowledgment number) of the last byte received from the remote peer. With this information, a sliding-window protocol is implemented. Forward and reverse sequence numbers are completely independent, and each TCP peer must track both its own sequence numbering and the numbering being used by the remote peer. TCP uses a number of control flags to manage the connection. Some of these flags pertain to a single packet, such as the URG flag indicating valid data in the Urgent Pointer field, but two flags (SYN and FIN) require reliable delivery, as they mark the beginning and end of the data stream. To ensure reliable delivery of these two flags, they are assigned spots in the sequence number space; each flag occupies a single byte.
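Because the sequence number is only 32 bits, it eventually wraps around, and implementations compare sequence numbers with modular arithmetic rather than plain `<`. A small sketch of the usual trick (signed interpretation of the 32-bit difference; the function names are mine):

```python
MOD = 2 ** 32

def seq_add(seq: int, n: int) -> int:
    """Advance a 32-bit sequence number, wrapping at 2**32."""
    return (seq + n) % MOD

def seq_lt(a: int, b: int) -> bool:
    """True if sequence number a precedes b, allowing for wraparound:
    the modular difference (a - b) is 'negative' when its high bit is set."""
    return ((a - b) & 0xFFFFFFFF) > 0x7FFFFFFF
```

This is why a sequence number just below 2**32 correctly compares as "earlier" than a small number just after the wrap - a plain integer comparison would get it backwards.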
All modern TCP implementations seek to answer this question by monitoring the normal exchange of data packets and developing an estimate of how long is "too long". This process is called Round-Trip Time (RTT) estimation. RTT estimates are one of the most important performance parameters in a TCP exchange, especially when you consider that on an indefinitely large transfer, all TCP implementations eventually drop packets and retransmit them, no matter how good the quality of the link. If the RTT estimate is too low, packets are retransmitted unnecessarily; if too high, the connection can sit idle while the host waits for a timeout.
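The standard estimator keeps an exponentially weighted average of the RTT and of its variation, in the style of RFC 6298 (the Jacobson/Karels algorithm). Here is a hedged sketch of that calculation - a simplified model with the RFC's constants, not a production implementation:

```python
class RttEstimator:
    """Smoothed RTT (SRTT) and variation (RTTVAR) per RFC 6298;
    the retransmission timeout is SRTT + 4 * RTTVAR, floored at 1 second."""

    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def sample(self, rtt: float) -> None:
        if self.srtt is None:                 # first measurement seeds both values
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt

    def rto(self, min_rto: float = 1.0) -> float:
        return max(min_rto, self.srtt + 4 * self.rttvar)
```

The low weight given to each new sample (1/8 for SRTT) is what lets the estimate track genuine RTT shifts while ignoring single outliers - and it is also why wildly fluctuating RTTs, as discussed earlier, leave TCP with an estimate that is never quite right.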
Port numbers - UDP provides 16-bit port numbers to let multiple processes use UDP services on the same host. A UDP address is the combination of a 32-bit IP address and a 16-bit port number.
Checksumming - Unlike IP, UDP does checksum its data, ensuring data integrity. A packet failing checksum is simply discarded, with no further action taken.
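UDP's connectionless nature is easy to see with two sockets on the loopback interface: there is no handshake and no acknowledgment, just a datagram fired at an IP address and port. A minimal sketch using Python's standard socket module:

```python
import socket

# Receiver: bind to loopback, port 0 lets the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]
receiver.settimeout(2.0)                 # a lost datagram would simply time out

# Sender: no connect, no handshake - sendto() is fire-and-forget.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)     # kernel already verified the UDP checksum
sender.close()
receiver.close()
```

If the datagram were dropped, the receiver would just time out; there is no retransmission unless the application layers one on top - which is exactly the trade-off that makes UDP cheap.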
Usage

Common port numbers and their services (UDP where noted):

20       FTP - Data
21       FTP - Control
22       SSH
23       Telnet
25       SMTP
37       TIME
49       TACACS
53       DNS
67       DHCP Server (UDP)
68       DHCP Client (UDP)
69       TFTP (UDP)
79       Finger
80       HTTP
110      POP3
111      RPC (UDP)
119      NNTP
123      NTP
137-139  NetBIOS
161      SNMP
162      SNMP Trap (UDP)
X.25
Introduction
X.25 is an International Telecommunication Union-Telecommunication Standardization Sector (ITU-T) protocol standard for WAN communications that defines how connections between user devices and network devices are established and maintained. X.25 is designed to operate effectively regardless of the type of systems connected to the network. It is typically used in the packet-switched networks (PSNs) of common carriers, such as the telephone companies. Subscribers are charged based on their use of the network. The development of the X.25 standard was initiated by the common carriers in the 1970s. At that time, there was a need for WAN protocols capable of providing connectivity across public data networks (PDNs). X.25 is now administered as an international standard by the ITU-T.
Packet Assembler/Disassembler
There used to be an explicit gizmo called the PAD (Packet Assembler/Disassembler) which waited for 128 bytes from the terminal before it sent off a packet, and likewise broke the packet up at the receiving side to give the illusion of a stream. Nowadays, the PAD's functionality is often performed at the DTE, and almost all DTE/DCE interfaces for X.25 are modems. The PAD is still a device commonly found in X.25 networks; it is used when a DTE device, such as a character-mode terminal, is too simple to implement the full X.25 functionality. The PAD is located between a DTE device and a DCE device, and it performs three primary functions: buffering (storing data until a device is ready to process it), packet assembly, and packet disassembly. The PAD buffers data sent to or from the DTE device. It also assembles outgoing data into packets and forwards them to the DCE device (this includes adding an X.25 header). Finally, the PAD disassembles incoming packets before forwarding the data to the DTE (this includes removing the X.25 header). Figure 17-2 illustrates the basic operation of the PAD when receiving packets from the X.25 WAN.
Figure 17-2: The PAD Buffers, Assembles, and Disassembles Data Packets
Figure 17-3: Virtual Circuits Can Be Multiplexed onto a Single Physical Circuit
Two types of X.25 virtual circuits exist: switched and permanent. Switched virtual circuits (SVCs) are temporary connections used for sporadic data transfers. They require that two DTE devices establish, maintain, and terminate a session each time the devices need to communicate. Permanent virtual circuits (PVCs) are permanently established connections used for frequent and consistent data transfers. PVCs do not require that sessions be established
and terminated. Therefore, DTEs can begin transferring data whenever necessary because the session is always active. The basic operation of an X.25 virtual circuit begins when the source DTE device specifies the virtual circuit to be used (in the packet headers) and then sends the packets to a locally connected DCE device. At this point, the local DCE device examines the packet headers to determine which virtual circuit to use and then sends the packets to the closest PSE in the path of that virtual circuit. PSEs (switches) pass the traffic to the next intermediate node in the path, which may be another switch or the remote DCE device. When the traffic arrives at the remote DCE device, the packet headers are examined and the destination address is determined. The packets are then sent to the destination DTE device. If communication occurs over an SVC and neither device has additional data to transfer, the virtual circuit is terminated.
Figure 17-4: Key X.25 Protocols Map to the Three Lower Layers of the OSI Reference Model
Packet-Layer Protocol
PLP is the X.25 network layer protocol. PLP manages packet exchanges between DTE devices across virtual circuits. PLP can also run over Logical Link Control 2 (LLC2) implementations on LANs and over Integrated Services Digital Network (ISDN) interfaces running Link Access Procedure on the D channel (LAPD). The PLP operates in five distinct modes: call setup, data transfer, idle, call clearing, and restarting. Call setup mode is used to establish SVCs between DTE devices. PLP uses the X.121 addressing scheme to set up the virtual circuit. Call setup is executed on a per-virtual-circuit basis, which means that one virtual circuit can be in call setup mode while another is in data transfer mode. This mode is used only with SVCs, not with PVCs.
Data transfer mode is used for transferring data between two DTE devices across a virtual circuit. In this mode, PLP handles segmentation and reassembly, bit padding, and error and flow control. This mode is executed on a per-virtual-circuit basis and is used with both PVCs and SVCs.

Idle mode is used when a virtual circuit is established but data transfer is not occurring. It is executed on a per-virtual-circuit basis and is used only with SVCs.

Call clearing mode is used to end communication sessions between DTE devices and to terminate SVCs. This mode is executed on a per-virtual-circuit basis and is used only with SVCs.

Restarting mode is used to synchronize transmission between a DTE device and a locally connected DCE device. This mode is not executed on a per-virtual-circuit basis; it affects all of the DTE device's established virtual circuits.

A PLP packet contains four fields:
General Format Identifier (GFI) - Identifies packet parameters, such as whether the packet carries user data or control information, what kind of windowing is being used, and whether delivery confirmation is required.
Logical Channel Identifier (LCI) - Identifies the virtual circuit across the local DTE/DCE interface.
Packet Type Identifier (PTI) - Identifies the packet as one of 17 different PLP packet types.
User Data - Contains encapsulated upper-layer information. This field is present only in data packets; otherwise, additional fields containing control information are added.
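In the common modulo-8 packet format, those fields pack into the first three octets of the packet: GFI in the high four bits of octet 1, a 12-bit logical channel identifier spanning the rest of octet 1 and all of octet 2, and the PTI in octet 3. The following Python sketch is a simplified, assumption-laden illustration of that layout (real decoders must also handle the modulo-128 format and the P(S)/P(R) bits inside the type octet):

```python
def parse_plp_header(octets: bytes) -> dict:
    """Parse the first three octets of a PLP packet (modulo-8 format, simplified)."""
    gfi = octets[0] >> 4                          # General Format Identifier (4 bits)
    lci = ((octets[0] & 0x0F) << 8) | octets[1]   # 12-bit Logical Channel Identifier
    pti = octets[2]                               # Packet Type Identifier octet
    is_data = (pti & 0x01) == 0                   # data packets carry a 0 in bit 1
    return {"gfi": gfi, "lci": lci, "pti": pti, "data": is_data}
```

The low bit of the type octet distinguishing data from control packets is what lets a switch route user data cheaply without decoding the full packet type.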
Figure 17-5: The PLP Packet Is Encapsulated Within the LAPB Frame and the X.21bis Frame
Flag - Delimits the beginning and end of the LAPB frame. Bit stuffing is used to ensure that the flag pattern does not occur within the body of the frame.
Address - Indicates whether the frame carries a command or a response.
Control - Qualifies command and response frames and indicates whether the frame is an I-frame, an S-frame, or a U-frame. In addition, this field contains the frame's sequence number and its function (for example, whether receiver-ready or disconnect). Control frames vary in length depending on the frame type.
Data - Contains upper-layer data in the form of an encapsulated PLP packet.
FCS - Handles error checking and ensures the integrity of the transmitted data.
Figure 17-6: An LAPB Frame Includes a Header, a Trailer, and Encapsulated Data
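The FCS in LAPB, as in other HDLC-derived framings, is a 16-bit CRC (commonly catalogued as CRC-16/X-25: reflected polynomial 0x8408, initial value 0xFFFF, final complement). A bit-by-bit sketch in Python, illustrative rather than performance-oriented:

```python
def lapb_fcs(data: bytes) -> int:
    """CRC-16/X-25 - the HDLC/LAPB frame check sequence.
    Reflected poly 0x8408, init 0xFFFF, final XOR 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF
```

A receiver recomputes this value over the address, control, and data fields and compares it with the transmitted FCS; any mismatch means the frame was corrupted in transit and is discarded.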