
MSTP FUNDAMENTALS

Information Security Declaration


All information in this manual is confidential and owned by ZTE CORPORATION. It is the authorized reader's responsibility to protect the confidentiality of this manual. Any reproduction, storage, importing into any retrieval system, or distribution of this document or any portion of it, for any purpose, in any form or by any means, including but not limited to electronic or mechanical reprography and audio recording, without the prior written permission of ZTE CORPORATION, is prohibited.

Contents

1 Evolution of MSTP
  1.1 Emergence of MSTP
  1.2 First Generation MSTP
    1.2.1 Virtual Concatenation Technology
    1.2.2 Link Capacity Adjustment Scheme
  1.3 Second Generation MSTP
    1.3.1 Resilient Packet Ring Technology
    1.3.2 Multiple Protocol Label Switching Technology
2 Theory of EOS
  2.1 Ethernet Fundamentals
    2.1.1 Ethernet Frame Format
    2.1.2 MAC Address
  2.2 Ethernet Switching Principle
    2.2.1 Operation Principle of Transparent Bridge
    2.2.2 MAC Address Learning
    2.2.3 Transfer and Filtering Mechanism
    2.2.4 Loop Avoidance: Spanning Tree Protocol
    2.2.5 VLAN
  2.3 EOS Fundamentals
    2.3.1 What is EOS
    2.3.2 Function Model of EOS
    2.3.3 Ethernet Frame Encapsulation
    2.3.4 Contiguous Concatenation and Virtual Concatenation
3 Theory of ATM
  3.1 ATM Fundamentals
    3.1.1 Generation Background of ATM Technology
    3.1.2 ATM Features
    3.1.3 ATM Cell Structure
    3.1.4 Fundamentals of ATM Switching
    3.1.5 ATM Statistics Multiplexing
    3.1.6 ATM Protocol Reference Model
    3.1.7 ATM Service Type
    3.1.8 ATM Communication QoS
  3.2 ATM Processing in MSTP Devices
    3.2.1 Background of ATM Application on MSTP
    3.2.2 Key Technology of ATM Service Processing
    3.2.3 ATM Layer Processing Function of MSTP Devices
4 Theory of RPR
  4.1 Overview of RPR Technology
    4.1.1 Emergence of RPR Technology
    4.1.2 Basic Concepts and Features of RPR Technology
  4.2 Fundamentals of RPR Technology
    4.2.1 RPR Ring Network Architecture
    4.2.2 RPT Technology
    4.2.3 RPR Network Hierarchy Model
    4.2.4 RPR MAC Data Frame Processing
    4.2.5 RPR Fairness Algorithm
    4.2.6 RPR Topology Discovery
    4.2.7 RPR Protection
  4.3 RPR Implementation Scheme
    4.3.1 Three Implementation Schemes of RPR
    4.3.2 System Architecture of RPR-Embedded MSTP
5 Theory of MPLS
  5.1 Introduction to MPLS
  5.2 Architecture of MPLS
    5.2.1 Basic Working Mode of MPLS
    5.2.2 Advantages of MPLS
Appendix A Abbreviations


1 Evolution of MSTP
Key points:
  Evolution of the MSTP technology
  Differences between MSTP and traditional SDH technology
  Current state of the MSTP technology

1.1 Emergence of MSTP


In the era when voice services dominated telecommunications, Synchronous Digital Hierarchy (SDH) carrier networks guaranteed the real-time transmission of voice through tributary mapping, cross-connection and point-to-point quality assurance mechanisms. With the rapid growth of packet-based IP data services, however, SDH networks, built on a time-division switching mechanism, found it increasingly difficult to carry IP services efficiently while still satisfying the transmission requirements of voice. Professionals in the telecommunication industry debated whether to rebuild carrier networks without SDH, or to introduce new technologies that reconstruct SDH networks and solve the problem at the network edge (access end), so that IP services could pass smoothly over SDH networks. The latter approach is clearly more practical: it not only uses existing network resources more efficiently, but also lets inherent strengths of SDH, such as Quality of Service (QoS), make up for certain shortcomings of Ethernet. This gave rise to the concept of the Multi-Service Transport Platform (MSTP) over SDH, also called the new generation of SDH. It differs from traditional SDH equipment. In terms of network position, the MSTP sits at the access layer: it connects to various service interfaces on the customer side and to SDH transmission equipment on the network side. In other words, the MSTP is like a long-haul passenger/freight junction station, whose job is to sort passengers and freight efficiently and then transport them safely and quickly to their destinations according to their different requirements.


1.2 First Generation MSTP


The initial purpose of MSTP was to provide transparent point-to-point transmission of IP data packets over SDH by mapping Ethernet frames directly into the containers (C) of SDH frames. However, the fixed payload sizes of SDH containers make it difficult to fit 10/100Base-T or Gigabit Ethernet frames into them efficiently. As a purely point-to-point transparent transmission mechanism, the earliest MSTP could not implement functions such as flow control, QoS for Ethernet services, or the statistical multiplexing of different Ethernet traffic flows, and so offered little commercial value. To improve the efficiency of carrying IP data services over SDH, the virtual concatenation technology and the Link Capacity Adjustment Scheme (LCAS) were introduced in the first generation MSTP.

1.2.1 Virtual Concatenation Technology


An effective solution for mapping Ethernet frames is to concatenate Virtual Container (VC) units into a loading unit of appropriate size. For instance, binding five VC-12 units into one unit carries a 10 Mbit/s Ethernet service quite well. But contiguous concatenation brings a new problem: if adjacent containers are concatenated as VC-n-Xc, the concatenated loading unit must keep the same route and contiguous bandwidth throughout transmission, and every piece of intermediate equipment on the path must also support the concatenation function to guarantee point-to-point transmission of the combined unit. These requirements are too demanding for long-haul transmission and would block the practical deployment of services. The virtual concatenation technology solves this problem thoroughly. It differs from contiguous concatenation in that the member VC-n units may belong to different STM-N frames, each with an independent structure and its own Path Overhead (POH). The VC units are bound into one large virtual container (VC-n-Xv, also called a VC group), in which each member is identified by the Multi-frame Identifier (MFI) of the virtual concatenation and a Sequence (SQ) identifier (carried in LCAS/VC control frames). In this way each independent VC-n can travel over a different route as a member of the VC group and the members are realigned at the destination; intermediate equipment needs no concatenation function at all.
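As a quick sanity check of the sizing above, the following sketch computes how many VC-12 members a virtual concatenation group needs for a given Ethernet rate. The VC-12 payload figure of about 2.176 Mbit/s is an assumption taken from common SDH references, not from this manual.

```python
import math

# Assumed payload capacity of one VC-12 member in Mbit/s (common SDH figure).
VC12_PAYLOAD_MBPS = 2.176

def members_needed(ethernet_rate_mbps):
    """Smallest X such that VC-12-Xv can carry the given Ethernet rate."""
    return math.ceil(ethernet_rate_mbps / VC12_PAYLOAD_MBPS)

print(members_needed(10))   # 10 Mbit/s Ethernet -> 5 members, as in the text
print(members_needed(100))  # Fast Ethernet -> 46 members
```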


1.2.2 Link Capacity Adjustment Scheme


The virtual concatenation technology only provides a way to combine loading units more efficiently. A real management scheme is still needed to ensure the high-efficiency point-to-point transmission of IP data services over SDH carrier networks, just as a highway network full of vehicles cannot become a good transportation system without proper dispatching. The key question is how to manage and dispatch VC units efficiently, especially under virtual concatenation. Unlike contiguous concatenation, the VC-n members of a virtual concatenation can sit in different STM-N frames in many combinations, which may cause unexpected problems without a proper dispatching scheme. This is where the Link Capacity Adjustment Scheme (LCAS) comes in. LCAS establishes a bidirectional control channel between source and destination, whose control information adjusts the number of members in a VC group dynamically according to actual demand, thus implementing real-time bandwidth management. It greatly improves network utilization while preserving service quality at the same time. Taken together, virtual concatenation and LCAS make it possible to carry IP services efficiently over SDH networks, and they form the first generation MSTP with practical value. The emergence of MSTP equipment pushed SDH networks into further development. As an efficient networking technology for the Metropolitan Area Network (MAN), MSTP plays an important role at the access section of the network, and it greatly improves the reliability of IP services with the quality-guarantee characteristics inherited from SDH. Cooperating with flow control at the Media Access Control (MAC) layer, LCAS adds or removes members of VC groups to avoid packet loss of IP services during normal operation.
Even when the optical signal at the receiving end falls below the receiver sensitivity (for example after a fiber cut), or the bit error rate exceeds the threshold, the network can complete protection switching within 50 ms using the inherent protection capability of SDH. In that case LCAS and flow control may cause the loss of only a few packets, which does not affect normal services. Plain Ethernet has no such capability.
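The adjustment behaviour LCAS provides can be pictured with a minimal sketch: a VC group whose active member count, and hence bandwidth, changes at runtime. This is a toy model, not the real protocol; the member rate and the hitless add/remove semantics are simplified assumptions.

```python
class VcGroup:
    """Toy model of an LCAS-managed virtual concatenation group.
    Real LCAS (ITU-T G.7042) exchanges control words (ADD, NORM, DNU, ...)
    between source and sink; here adjustment is reduced to a counter."""

    def __init__(self, member_rate_mbps=2.176):  # assumed VC-12 payload rate
        self.member_rate = member_rate_mbps
        self.active = 0

    def bandwidth(self):
        return self.active * self.member_rate

    def add_member(self):
        # Hitless increase: carried traffic grows only after both ends agree.
        self.active += 1

    def remove_member(self):
        # Hitless decrease: the member is drained before it is dropped.
        if self.active:
            self.active -= 1

group = VcGroup()
for _ in range(5):
    group.add_member()
print(round(group.bandwidth(), 2))  # 10.88 -> enough for 10 Mbit/s Ethernet
group.remove_member()
print(group.active)                 # 4 members remain active
```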


1.3 Second Generation MSTP


Owing to its high commercial value, the MSTP technology itself continued to develop. With the application of the Resilient Packet Ring (RPR) and Multiple Protocol Label Switching (MPLS) technologies, the second generation MSTP appeared.

1.3.1 Resilient Packet Ring Technology


Compared with star, bus and tree networks, ring networks offer low cost and convenient management. The token ring technology was initially developed for data transmission in ring networks, but data packets in a token ring roam around the whole ring, so the shared bandwidth drops drastically as nodes are added. This drawback restricted the further development of token ring. RPR, which works at the MAC layer, supports the transmission of data flows in ring topologies much better. The RPR technology has the following features.

Dual-ring structure: two physical paths between every pair of adjacent nodes guarantee high network reliability.

Ring bandwidth control and Spatial Reuse Protocol (SRP): unicast data can be carried concurrently on different segments of the ring, so the effective capacity of the ring increases and the bandwidth loss caused by adding nodes is eased to a certain degree. Moreover, RPR discovers and updates the network topology automatically when the ring topology changes, avoiding the human errors of manual configuration and simplifying network management and maintenance.

Dynamic bandwidth allocation and statistical multiplexing: each node tracks the data load passing through it and advertises this information to adjacent nodes on the ring, so other nodes can determine how much bandwidth is available toward the source node.

To sum up, with the features above, RPR shortens the path of data flows on the ring, since the longest route between any two nodes is only half of the ring. Topology discovery and update are achieved by exchanging topology information with an algorithm similar to Open Shortest Path First (OSPF). This not only prevents packets from looping endlessly, but also improves the self-healing ability of ring networks.

1.3.2 Multiple Protocol Label Switching Technology


MPLS is a data forwarding technology that combines Layer 3 (network layer) routing with Layer 2 (data link layer) switching. Based on label switching, MPLS separates route selection from data forwarding and uses labels to specify the path of a packet through the network, converting connectionless IP services into connection-oriented label-switched ones. Its technical characteristics include traffic engineering, load balancing, failure recovery and path priority. Its typical application in Ethernet is MPLS-based Virtual Private Network (VPN) services, which offer the following benefits:

Providing seamless connections for intranets

Restricting the spread of VPN route information, and guaranteeing security by applying MPLS forwarding only to members of the VPN

Allowing different customers to use the same VLAN ID by embedding Layer 2 MPLS technology, thus extending the VLAN address space

Implementing multilevel services within a VPN, and setting different priorities between VPNs

Introducing the MPLS technology into the MSTP adds the label switching function to the MPLS features mentioned above. The process of adding and removing labels at the edge of IP networks then becomes unnecessary: true point-to-point label switching is achieved by connecting the MSTP directly to core routers that support label switching. The evolution of MSTP, from traditional SDH that could not carry IP services efficiently, to the first generation MSTP competent to carry IP services, and then to the increasingly robust MSTP supporting RPR and MPLS, has always been driven by practical applications. We can expect more functions and technologies to be incorporated into MSTP in the future.


2 Theory of EOS
Key points:
  Ethernet frame structure
  MAC address and address learning
  Transfer and filtering mechanism of Layer 2 switching
  Layer 2 loops, spanning tree and rapid spanning tree
  VLAN

2.1 Ethernet Fundamentals


2.1.1 Ethernet Frame Format
Fig. 2.1-1 shows the Ethernet frame format.

Total frame length: 64-1518 octets (PRE and SFD excluded)

PRE (7 octets) | SFD (1) | DA (6) | SA (6) | LEN (2) | DATA + PAD (46-1500) | FCS (4)

PRE = Preamble, SFD = Start-of-Frame Delimiter, DA = Destination Address,
SA = Source Address, LEN = Data Length, FCS = Frame Check Sequence

Fig. 2.1-1 Ethernet Frame Format

The Preamble (PRE) is a 7-octet sequence of alternating bits (each octet is 10101010) that allows the receiver to achieve synchronization.

The Start-of-Frame Delimiter (SFD) is a special octet (10101011) that marks the beginning of the Ethernet frame.

Destination Address (DA): the first transmitted bit indicates whether the address is an individual address (0) or a group address (1). A frame with a group address is delivered to all stations specified by the address; the interface of each station recognizes the group addresses it belongs to and accepts such frames. If all bits of the destination address are 1, the frame is broadcast to every station on the network.

The Source Address (SA) indicates where the frame comes from.

The Data Length (LEN) field indicates the number of octets in the Data field (in IEEE 802.3 the pad octets are not counted).

The Data (DATA) field carries all the data handed down from the upper layer.

Pad (PAD) field: the Data field must be at least 46 octets long. If it is shorter, pad octets are appended so that the Data field plus padding reaches the minimum length.

Frame Check Sequence (FCS): provides error detection using a 32-bit Cyclic Redundancy Check (CRC) sequence.
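The field layout above can be exercised with a small sketch that assembles a padded minimum-size frame. The CRC-32 call and the byte order of the FCS are simplifications here, not a claim about the exact 802.3 bit ordering.

```python
import zlib

def build_frame(dst, src, payload):
    """Assemble an 802.3 frame body (DA..FCS; PRE and SFD are omitted).
    dst and src are 6-octet MAC addresses."""
    length = len(payload).to_bytes(2, "big")      # LEN: data octets, pre-pad
    if len(payload) < 46:
        payload = payload + bytes(46 - len(payload))   # PAD up to 46 octets
    body = dst + src + length + payload
    fcs = zlib.crc32(body).to_bytes(4, "little")  # simplified 32-bit FCS
    return body + fcs

frame = build_frame(b"\xff" * 6, b"\x02\x60\x8c\x01\x11\x11", b"hello")
print(len(frame))  # 64: 6 + 6 + 2 + 46 + 4, the minimum frame size
```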

2.1.2 MAC Address


A MAC address is 48 bits long and is usually written as 12 hexadecimal digits, divided into three groups of four digits separated by dots. The MAC address is burned into the Network Interface Controller (NIC). The IEEE administers MAC addresses to ensure their uniqueness. Each MAC address consists of two parts: the vendor code and the serial number.

Vendor code: the first six hexadecimal digits (24 bits) of the MAC address identify the NIC vendor.

Serial number: the last six hexadecimal digits of the MAC address, managed by the vendor itself. If all serial numbers under a vendor code are used up, the vendor must apply for another vendor code.
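A trivial sketch splitting a MAC address into those two parts; the dotted three-group notation follows this chapter, though many tools write MAC addresses with colons instead.

```python
def split_mac(mac):
    """Split a MAC address such as '0260.8c01.1111' into its vendor code
    (first 24 bits) and serial number (last 24 bits)."""
    digits = mac.replace(".", "").lower()
    if len(digits) != 12:               # 48 bits = 12 hexadecimal digits
        raise ValueError("expected 12 hex digits")
    return digits[:6], digits[6:]

vendor, serial = split_mac("0260.8c01.1111")
print(vendor, serial)  # 02608c 011111
```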


2.2 Ethernet Switching Principle


2.2.1 Operation Principle of Transparent Bridge
Fig. 2.2-1 illustrates the operating principle of transparent bridge.
Fig. 2.2-1 Operating Principle of Transparent Bridge (a bridge joins segment A, via port 1, and segment B, via port 2; stations A-D are attached to the two segments)

In Ethernet, the process of deciding whether to forward a frame is called transparent bridging. "Transparent" has two meanings here. First, terminal equipment connected to the bridge does not know whether it is attached to a shared medium or to switching equipment; the equipment is transparent to terminal users. Second, the bridge neither changes nor processes the frames forwarded through it (except on VLAN trunk lines). The transparent bridge has the following three main functions:

Address learning function
Transfer and filtering function
Loop avoidance function

All three functions are performed in the transparent bridge and operate on the network at the same time. Ethernet switches perform the same three main functions as the transparent bridge.

2.2.2 MAC Address Learning


The bridge decides whether to forward a frame based on its destination MAC address. To make correct forwarding decisions, it must therefore first learn where each MAC address is located.


Once connected to physical network segments, the bridge examines every frame it sees. After reading a frame's source address, the bridge associates that address with the receiving port and records the relation in its MAC address table. This completes the MAC address learning process.
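The learning step can be sketched as a dictionary keyed by source MAC address. Ageing of stale entries, which real bridges also perform, is left out of this sketch, and the port names are illustrative.

```python
mac_table = {}  # learned mapping: source MAC address -> receiving port

def learn(src_mac, ingress_port):
    """Associate the frame's source MAC with the port it arrived on."""
    mac_table[src_mac] = ingress_port

# Frames seen on two ports populate the table.
learn("0260.8c01.1111", "E0")
learn("0260.8c01.2222", "E2")
print(mac_table)  # {'0260.8c01.1111': 'E0', '0260.8c01.2222': 'E2'}
```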

2.2.3 Transfer and Filtering Mechanism


1. Transfer/filtering
After all workstations have transmitted data frames, the switch has learned the one-to-one relationships between MAC addresses and ports and recorded them in its MAC address table. For example, as shown in Fig. 2.2-2, workstation A sends a unicast data frame to workstation C. The switch sees that the frame's destination address already exists in the MAC address table, associated with port E2, so it forwards the frame to port E2 only. It does not forward the data frame to any other port in the network; this is the filtering operation.
Fig. 2.2-2 Transfer and Filtering (the switch's MAC address table maps ports E0-E3 to addresses 0260.8c01.1111 through 0260.8c01.4444; the unicast frame from 0260.8c01.1111 is forwarded only to the port associated with its destination address)

2. Transfer of broadcast/multicast frames or frames with unknown MAC addresses
As shown in Fig. 2.2-3, when workstation D sends a data frame that is a broadcast frame, a multicast frame, or a frame whose destination MAC address is unknown (that is, not present in the switch's MAC address table), the switch floods the network with the frame: it forwards the frame to all ports except the one it arrived on.


Fig. 2.2-3 Transfer of Broadcast/Multicast Frames or Frames with Unknown MAC Addresses (the switch floods the frame to every port except the receiving one; its MAC address table maps ports E0-E3 to addresses 0260.8c01.1111 through 0260.8c01.4444)

Note: if the switch supports multicast features such as Internet Group Management Protocol (IGMP) snooping, it does not forward multicast data frames by flooding.



3. Transfer/filtering procedure
Fig. 2.2-4 shows the transfer/filtering procedure diagram.

Fig. 2.2-4 Transfer/Filtering Procedure

The switch processes a data frame received at a port as follows. It first checks whether the frame's destination MAC address is a broadcast or multicast address; if so, it floods the frame. If the address is a unicast address identifying a single network device, the switch looks it up in the MAC-port table; if the address is not found there, the switch also forwards the frame by flooding.


If the switch finds the address in the MAC-Port table, it will transfer the data frame to the corresponding port associated with the destination address.
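The whole decision procedure, flood broadcast/multicast and unknown unicast frames, filter frames whose destination is on the receiving port, and forward the rest, can be sketched as one function. The table contents and port names below are illustrative, in the style of the figures above.

```python
def forward(dst_mac, ingress_port, mac_table, all_ports):
    """Return the list of output ports for a frame, per the procedure above."""
    group_bit = int(dst_mac[:2], 16) & 1   # low bit of the first address octet
    if group_bit or dst_mac not in mac_table:
        # Broadcast/multicast or unknown unicast: flood everywhere but in-port.
        return [p for p in all_ports if p != ingress_port]
    out = mac_table[dst_mac]
    # Filtering: never send a frame back out of the port it arrived on.
    return [] if out == ingress_port else [out]

table = {"0260.8c01.1111": "E0", "0260.8c01.3333": "E2"}
ports = ["E0", "E1", "E2", "E3"]
print(forward("0260.8c01.3333", "E0", table, ports))  # ['E2']  (forward)
print(forward("ffff.ffff.ffff", "E0", table, ports))  # flood to E1, E2, E3
```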

2.2.4 Loop Avoidance: Spanning Tree Protocol


In a Layer 2 network, a Layer 2 loop can form as soon as a physical loop exists. A Layer 2 loop damages the network drastically, and sometimes the network cannot recover by itself. Problems caused by loops include broadcast storms, repeated replication of frames, and unstable MAC address tables in switches (MAC address drift). Yet real networks almost always contain complicated multi-loop connections, so Layer 2 loops must be prevented even where physical loops exist. The Spanning Tree Protocol (STP) was created for exactly this purpose. A switch running STP automatically recognizes loops in a network with redundant paths, keeps the best path for frame forwarding, and blocks the other redundant paths. When the network topology changes, STP automatically reconfigures switch ports so that exactly one active path exists between any two stations and the network never runs into a loop.
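The end result of STP, a loop-free subset of the physical links, can be illustrated with a breadth-first tree grown from the root bridge. This is only a sketch of the outcome under an assumed root; real STP elects the root by lowest bridge ID and compares port path costs, which is omitted here.

```python
from collections import deque

def spanning_tree(links, root):
    """Return the links that stay active: a BFS tree rooted at 'root'.
    Every link not in the returned list would be blocked."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    active, seen, queue = [], {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                active.append((node, nbr))
                queue.append(nbr)
    return active

# A triangle of three switches: one of the three links gets blocked.
links = [("S1", "S2"), ("S2", "S3"), ("S1", "S3")]
print(spanning_tree(links, "S1"))  # two links remain active, loop removed
```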

2.2.5 VLAN
1. Overview
A Local Area Network (LAN) may consist of a few computers or of an enterprise network with hundreds of them. A Virtual LAN (VLAN) is a logically segmented LAN, that is, a broadcast domain: members of a VLAN behave as if they shared the same physical network segment, while members of different VLANs cannot access each other directly. Membership of a VLAN has no physical or geographical limit; members of the same broadcast domain may be connected to different switches in a switched network. Broadcast packets, unknown-destination packets and data packets between members are all confined to the VLAN. Put another way, VLANs offer a method to divide one physical network into multiple broadcast domains.



Note: a broadcast domain is the bounded area within which a broadcast frame (all bits of the destination address set to 1) reaches all other devices. Strictly speaking, not only broadcast frames but also multicast frames and unknown unicast frames propagate through a broadcast domain unhindered.

VLAN provides the following functions or advantages:

VLANs can divide one switched network, i.e. one broadcast domain, into multiple broadcast domains, as if they were separate physical networks. The network is thereby segmented, the number of computers in each segment decreases, and network performance improves accordingly.

VLANs are very flexible: configuring a VLAN, or adding, removing and modifying its members, is done entirely on the switch. Generally there is no need to change the physical network or add new devices.

Once a network is divided into VLANs, computers in different VLANs can communicate only through Layer 3 devices, and Layer 3 security can be enforced by configuring Access Control Lists (ACL) on those devices. Inter-VLAN communication is thus always under control, which makes VLAN-divided networks more secure than undivided ones where computers communicate with each other directly. Furthermore, a customer can join a VLAN only after the network administrator configures it on the switch. All of this improves network security accordingly.

For example, consider a Layer 2 switch with no VLANs configured, as shown in Fig. 2.2-5. Any broadcast frame is forwarded to all ports of the switch except the receiving port: the switch floods the broadcast information received from computer A to ports 2, 3 and 4.



Fig. 2.2-5 Switch without VLAN Division

Now suppose two VLANs are configured on the switch, VLAN I and VLAN II, as shown in Fig. 2.2-6. Ports 1 and 2 belong to VLAN I, while ports 3 and 4 belong to VLAN II. If computer A sends a broadcast frame, the switch forwards it only to the other port in the same VLAN, namely port 2 in VLAN I; it does not forward the frame to the ports in VLAN II. In the same way, broadcast information from computer C is forwarded only to the other port in VLAN II, never to the ports in VLAN I.

Fig. 2.2-6 Switch with VLAN Division

A VLAN thus bounds the broadcast domain by limiting how far broadcast frames are forwarded. To distinguish the two VLANs clearly, Fig. 2.2-6 marks them with different colors; in actual application, a VLAN ID identifies each VLAN.



2. VLAN division modes
The most popular VLAN division mode today is static division based on ports: the network administrator assigns ports to a specified VLAN, and the computers connected to those ports then belong to that VLAN. The advantage of this mode is that configuration is easy and has no influence on the forwarding performance of the switch. The drawback is that every port of the switch must be configured into its VLAN, and whenever a user moves, the network administrator has to reconfigure the corresponding ports. Other VLAN division modes include division based on MAC address, on protocol, on IP subnet, on application, on user name, and on password.

3. Operation process of VLAN

Each VLAN can be regarded as a physically isolated bridge: members of different VLANs cannot access each other directly. A VLAN can also span switches. Members of the same VLAN on different switches are in the same broadcast domain, so they can access each other directly. Because VLAN division is based on the physical ports of a switch, when the switch receives a data frame on a port connected to a computer, it knows which VLAN the frame belongs to. But the link connecting two switches carries frames from different VLANs, and the ports attached to that link do not belong to any single VLAN. If the frames were not tagged, a switch could not tell which VLAN a frame received from such a link belongs to. Therefore, before forwarding a frame onto such a link, the switch prepends a tag that identifies the frame's VLAN. VLAN tagging enables the switch to combine traffic from different VLANs and transmit it over the same physical line.

16

Chapter 2 Theory of EOS

4. Link type

Access link: The access link connects a non-VLAN-aware workstation to a LAN segment on a VLAN switch port; that is, it connects terminal equipment to switches. If VLAN division is based on ports, an access link can belong to only one VLAN. The access link may be an isolated network segment, or multiple segments or workstations connected through non-VLAN-aware bridges and switches. An access link cannot carry tagged packets.

Trunk link: The trunk link carries tagged packets (with VLAN IDs), so a single trunk link can carry data from multiple VLANs. It supports devices that can recognize VLAN frames and membership. A trunk link usually connects two VLAN switches and enables a VLAN to span multiple switches. The trunk link may also be a shared LAN segment connecting multiple VLAN switches and VLAN-aware workstations.
Fig. 2.2-7 Trunk Link (VLAN1, VLAN2 and VLAN3 sharing one backbone link)

5. IEEE 802.1Q

IEEE developed a general VLAN standard, IEEE 802.1Q. The standard:

Defines the architecture of VLANs for the purpose of providing VLAN services on existing IEEE 802 bridged LANs.

Defines the tagged VLAN frame format for Ethernet (IEEE 802.3) and token ring (IEEE 802.5).


Defines the protocol and mechanism by which VLAN-aware devices exchange configuration and membership information.

Defines the principle and procedures for VLAN-aware devices to transfer frames on networks.

Specifies the requirements that ensure interoperability and coexistence with non-VLAN-aware devices. A non-VLAN-aware device is a workstation or router that can neither receive nor transmit tagged VLAN packets, nor recognize VLAN membership information.

Fig. 2.2-8 shows the 802.1Q frame format.

Fig. 2.2-8

802.1Q Frame Format

The addition of the 4-byte tag header raises the maximum Ethernet frame length to 1518 bytes (excluding the FCS), which exceeds the 1514 bytes specified in IEEE 802.3; the standard was amended to support such longer tagged VLAN frames. The 4-byte tag header carries the following information:

Tag Protocol Identifier (TPID): a two-byte field with the fixed hexadecimal value 0x8100, identifying the frame as carrying an 802.1Q/802.1p tag.

Tag Control Information (TCI): The fields contained in the TCI are described as follows.

The three-bit user priority field indicates the frame's priority when an IEEE 802.1p-capable switch forwards it. The one-bit Canonical Format Indicator (CFI) indicates whether the MAC address information carried by the frame is in canonical format. The twelve-bit VLAN Identifier (VID) uniquely identifies the VLAN to which the frame belongs.
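The tag layout above (TPID 0x8100 followed by the 16-bit TCI) can be sketched by building and parsing the 4-byte tag; the field packing follows the bit widths given in the text.

```python
import struct

TPID = 0x8100  # 802.1Q tag protocol identifier

def build_tag(priority, cfi, vid):
    """Pack the 4-byte 802.1Q tag: TPID, then TCI = priority(3) | CFI(1) | VID(12)."""
    assert 0 <= priority < 8 and cfi in (0, 1) and 0 <= vid < 4096
    tci = (priority << 13) | (cfi << 12) | vid
    return struct.pack("!HH", TPID, tci)

def parse_tag(tag):
    """Unpack a 4-byte tag back into (tpid, priority, cfi, vid)."""
    tpid, tci = struct.unpack("!HH", tag)
    return tpid, tci >> 13, (tci >> 12) & 1, tci & 0x0FFF

tag = build_tag(priority=5, cfi=0, vid=100)
print(tag.hex())       # → 8100a064
print(parse_tag(tag))  # → (33024, 5, 0, 100)   33024 == 0x8100
```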



2.3 EOS Fundamentals


2.3.1 What is EOS
Ethernet over SDH (EOS) is part of the MSTP architecture; it implements the transmission of Ethernet services through SDH nodes in the network, providing transparent transmission, Ethernet Layer 2 switching and other functions.

2.3.2 Function Model of EOS


Fig. 2.3-1 shows the diagram of the EOS function model.

MSOH = Multiplex Section Overhead RSOH = Regenerator Section Overhead

Fig. 2.3-1 EOS Function Model

As shown in the diagram above, a data frame from an Ethernet interface is passed transparently to Layer 2 for switching. After encapsulation, the frame is mapped into a virtual container; the Multiplex Section Overhead (MSOH) and Regenerator Section Overhead (RSOH) are then inserted to form an STM-N frame, which is transmitted over the SDH network. An EOS transfer node supporting Layer 2 switching must provide the following basic functions:

Configurable transmission link bandwidth
Transparency of Ethernet services
Forwarding and filtering of Layer 2 data frames
Support for the IEEE 802.1d Spanning Tree Protocol



2.3.3 Ethernet Frame Encapsulation


Fig. 2.3-2 shows the encapsulation architecture of EOS.
(Protocol stack, top to bottom: IEEE 802.3 MAC; PPP/LAPS/GFP; VC12/VC3/VC4; Multiplex Section; Regenerator Section; PHY)

Fig. 2.3-2 Encapsulation Architecture of EOS

The encapsulation protocol stack specifies the functions of link control, rate adaptation and frame delineation from point-to-point Ethernet to the SDH network. There are three encapsulation protocols: the Point-to-Point Protocol (PPP), the Link Access Procedure for SDH (LAPS) and the Generic Framing Procedure (GFP).

1. PPP encapsulation

PPP encapsulation adopts RFC 1662 (PPP in HDLC-like Framing) over a byte-synchronous link. The encapsulation procedure includes three steps: MAC frame extraction, PPP framing and HDLC processing.

1) MAC frame extraction: check the MAC frames, filter out frames with CRC errors and other abnormal frames, then remove the Ethernet preambles and inter-frame gaps.

2) PPP framing: the Address, Control and Protocol fields provide multi-protocol encapsulation, link initialization and authentication. In addition, errors are detected with the Frame Check Sequence (FCS) using CRC-16 or CRC-32.

3) HDLC processing: make the PPP frame transparent by changing each 0x7e to the pair 0x7d, 0x5e and each 0x7d to the pair 0x7d, 0x5d. Delimit the PPP frame by adding a 0x7e flag at the header and trailer, and adapt the PPP frame rate to the SDH VC channel by inserting 0x7e fill.
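The byte-stuffing of step 3) can be sketched as follows; this shows only the transparency processing (flags and escapes), not the FCS or the PPP fields.

```python
FLAG, ESC = 0x7E, 0x7D  # HDLC flag and escape bytes

def hdlc_stuff(payload: bytes) -> bytes:
    """Escape 0x7e -> 0x7d 0x5e and 0x7d -> 0x7d 0x5d, then add flag delimiters."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def hdlc_unstuff(frame: bytes) -> bytes:
    """Reverse the stuffing: drop the delimiters and undo each escape pair."""
    assert frame[0] == FLAG and frame[-1] == FLAG
    out, esc = bytearray(), False
    for b in frame[1:-1]:
        if esc:
            out.append(b ^ 0x20)
            esc = False
        elif b == ESC:
            esc = True
        else:
            out.append(b)
    return bytes(out)

data = bytes([0x01, 0x7E, 0x02, 0x7D])
print(hdlc_stuff(data).hex())  # → 7e017d5e027d5d7e
```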


Fig. 2.3-3 PPP Encapsulation Procedure

PPP was the first encapsulation protocol and has been used widely thanks to its maturity. PPP is the link-layer protocol commonly used for communication between two directly connected devices on a point-to-point link: for example, the connection between a computer and an access server during dial-up uses PPP, and so does the connection between Digital Data Network (DDN) routers. However, devices from different vendors cannot interwork, because there are no unified requirements for applying PPP.

2. LAPS encapsulation

LAPS encapsulation is similar to PPP. It simplifies link-control processing and implements rate adaptation with an additional transmit sequence (0x7d, 0xdd). Compared with PPP encapsulation, LAPS completes framing and rate adaptation at the same time. Fig. 2.3-4 illustrates the LAPS encapsulation procedure.

Fig. 2.3-4 LAPS Encapsulation Procedure


3. GFP encapsulation

GFP is a generic mapping technology. Variable-length or fixed-length data packets can be adapted and processed uniformly, enabling the transmission of data services over various high-speed physical transmission channels.

Fig. 2.3-5 GFP Frame Format

Fig. 2.3-5 shows the format of a GFP frame. In general, a GFP frame contains a core header and a payload. The encapsulation efficiency of GFP is independent of the payload contents and higher than that of PPP and LAPS. In addition, GFP is more robust: a single-bit error in the GFP frame header does not cause loss of frame synchronization, whereas it does for PPP/LAPS encapsulation. GFP can also use the system bandwidth more efficiently: with its channel identifier, GFP can combine multiple physical ports into one channel, while for PPP/LAPS a physical port can be associated with only one channel. Finally, GFP supports ring networks in addition to point-to-point networks.
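As a hedged sketch of the core header mentioned above: a GFP core header carries a 16-bit Payload Length Indicator (PLI) protected by a 16-bit CRC (cHEC). The CRC-16 generator used here and the omission of core-header scrambling are simplifying assumptions of this sketch, not details taken from the text.

```python
def crc16(data: bytes, poly=0x1021, crc=0x0000) -> int:
    """Bitwise CRC-16 (generator x^16 + x^12 + x^5 + 1), MSB first, zero init."""
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def gfp_core_header(payload_len: int) -> bytes:
    """2-byte PLI followed by its 2-byte cHEC (scrambling omitted)."""
    pli = payload_len.to_bytes(2, "big")
    return pli + crc16(pli).to_bytes(2, "big")

hdr = gfp_core_header(100)
print(hdr.hex())
print(crc16(hdr))  # → 0  (a valid header leaves a zero CRC remainder)
```

The zero-remainder check is what lets a receiver both validate the header and hunt for frame boundaries (cell-delineation style) in a byte stream.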



2.3.4 Contiguous Concatenation and Virtual Concatenation


Concatenation is a data encapsulation and mapping technique in MSTP that supports the transmission of large-granularity services by combining multiple virtual containers into a single container that preserves bit-sequence integrity. Concatenation is divided into contiguous concatenation and virtual concatenation. In contiguous concatenation, adjacent virtual containers in the same STM-N frame are combined into a VC-4/3/12-Xc, which is transferred as a whole structure. In virtual concatenation, virtual containers in different STM-N frames (following the same route or different routes) are combined into a large virtual structure, VC-4/3/12-Xv, which is likewise transferred as a whole.

1. Contiguous concatenation

Contiguous concatenation was the first technology adopted to carry traffic occupying more than one virtual container over the transmission network. Its advantage is that every part of the data is transferred without relative delay, and transmission quality is high because the traffic is transmitted as a whole. However, contiguous concatenation has a limitation: it requires that all networks and nodes along the path support it.

2. Virtual concatenation

With virtual concatenation, virtual containers (VC-n) in different STM-N frames can be concatenated into a large virtual structure (VC-n-Xv) for transmission, in which each VC-n keeps an independent, complete structure and its own POH. The virtual concatenation of multiple VC-n is quite like the interleaving of multiple VC-n. Unlike contiguous concatenation, virtual concatenation allows each VC-n to be transmitted independently over different paths, and it places no special requirements on intermediate devices; only the terminal devices at both ends of the transmission path must comply with the corresponding protocols.
Virtual concatenation has the following features:

Network-independent transport and multi-path transmission



Because devices that support concatenated traffic and those that do not interpret pointers differently, existing SDH devices generally cannot transfer contiguously concatenated traffic; virtual concatenation, however, can still meet the bandwidth demands of broadband services. Virtual concatenation is generally implemented in both the transmitting and receiving directions. In the transmitting direction, VC-4/3/12-Xc is converted to VC-4/3/12-Xv, that is, contiguously concatenated traffic is converted into virtually concatenated traffic that can be transmitted over existing SDH devices. In the receiving direction, VC-4/3/12-Xv is converted back to VC-4/3/12-Xc, recovering the contiguously concatenated traffic. In this way, contiguously concatenated traffic can be transmitted through SDH devices.

Support for LCAS

LCAS applies to virtual concatenation. It adjusts the link capacity hitlessly for virtually concatenated signals passing through the transmission network. Based on the existing bandwidth, LCAS can increase or decrease the capacity dynamically, adapting to changes in virtually concatenated traffic. Moreover, LCAS improves both the robustness of virtually concatenated traffic and the quality of service.

Some problems with the application of virtual concatenation must still be considered. Technically, the main drawback of virtual concatenation compared with contiguous concatenation is delay. Because each virtual container in a virtually concatenated group may follow a different path, differential transmission delay can appear between the containers; in the worst case, a container with a later sequence number reaches the sink node before one with an earlier sequence number, making it difficult to recover the original signal. At present, the effective solution is a large delay-alignment memory that buffers data for re-alignment. For multi-path transmission, ZTE's MSTP products can compensate a path delay difference of 32 ms; calculated at 5 µs/km, this corresponds to a maximum path-length difference of 6400 km.
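The distance figure quoted above follows directly from the buffer size and the per-kilometre fibre delay:

```python
# 32 ms of delay-alignment memory, at roughly 5 us of fibre delay per km,
# bounds the compensable path-length difference between VC members.
buffer_s = 32e-3         # delay-alignment buffer: 32 ms
delay_per_km = 5e-6      # propagation delay: ~5 us/km
max_diff_km = buffer_s / delay_per_km
print(max_diff_km)       # → 6400.0
```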

3 Theory of ATM
Key points:
Features of ATM
ATM cell structure
Fundamentals of ATM switching
ATM protocol reference model
ATM communication QoS
VP-Ring technology
ATM service type
Basic connection functions of ATM

3.1 ATM Fundamentals


3.1.1 Generation Background of ATM Technology
In modern times, people need to transmit and process more and more information, and new kinds of information keep appearing. The demand for new broadband services, such as video conferencing, high-speed data transmission, tele-education and Video On Demand (VOD), is increasing rapidly. Earlier networks could each carry only one type of service: the telephone network provided only telephone services, while the data communication network provided only data communication services. Such networks are costly and inconvenient for both users and network carriers. The concept of the Integrated Services Digital Network (ISDN) was therefore proposed, with the expectation of carrying various services over a single network. The narrowband ISDN (N-ISDN) was proposed first, in 1972, owing to the limits of technology and service demand at the time. The N-ISDN technology is now mature, and many mature N-ISDN networks already exist around the world. However, N-ISDN still has limitations, such as narrow bandwidth, limited service-integration capability, various relay networks, and weak adaptation to new services. Therefore a more flexible new network with broader bandwidth and stronger service-integration capability is needed. Since the 1980s, the development of basic technologies related to telecommunications, such as microelectronics and photoelectronics, has provided the basis for realizing such new networks. Against this background, the broadband ISDN (B-ISDN) appeared. The B-ISDN can:

Enable the high-speed transmission of services
Make network devices independent of service characteristics
Make the information transfer mode independent of service type

People explored many solutions to develop a transfer mode suited to the B-ISDN, such as multi-rate circuit switching, frame relay and fast packet switching. Finally, the most appropriate transfer mode for B-ISDN was found: the Asynchronous Transfer Mode (ATM). As the core technology of B-ISDN, ATM was specified by ITU-T in 1992 as the unified information transfer mode. ATM overcomes the limitations of both circuit switching and packet switching. It adopts optical communication technology and improves transmission quality; at the same time, it simplifies the operations at network nodes and thus decreases network delay. A series of other technologies are also adopted to meet all the requirements of B-ISDN.

3.1.2 ATM Features


1. Statistical time-division multiplexing, which improves the channel utilization ratio
2. Short response time, with a fixed 53-byte cell as the transmission unit
3. Connection-oriented working mode with reservation of transmission resources
4. Hop-by-hop error control and flow control are removed from within the network and moved to its edges
5. Support for integrated services



3.1.3 ATM Cell Structure


Fig. 3.1-1 shows the structure of the ATM cell.

(a) UNI header format: byte 1 carries GFC (4 bits) and VPI (4 bits); byte 2 carries VPI (4 bits) and VCI (4 bits); byte 3 carries VCI (8 bits); byte 4 carries VCI (4 bits), PTI (3 bits) and CLP (1 bit); byte 5 is the HEC. The 48-byte payload follows the header.
(b) NNI header format: the same layout, except that the GFC field is replaced by four more VPI bits, giving a 12-bit VPI.

Fig. 3.1-1 ATM Cell Structure

The cell header contains the following parts:

GFC: Generic Flow Control. Four bits, all currently set to the default value 0000. The GFC applies only to the User-Network Interface (UNI) and may be used for flow control in the future.

VPI: Virtual Path Identifier. It has 12 bits at the Network-Network Interface (NNI) and 8 bits at the UNI.

VCI: Virtual Channel Identifier. It identifies a virtual channel within a virtual path. The VCI and VPI combined identify a virtual connection.

PTI: Payload Type Identifier. A 3-bit field identifying the payload type.

CLP: Cell Loss Priority. A single bit distinguishing the cell-loss priority: 1 indicates a low-priority cell, 0 a high-priority cell. Low-priority cells are discarded first when congestion occurs.

HEC: Header Error Control. An 8-bit error-control byte that detects errored cell headers and can correct a single-bit error in the header. The HEC is also used for cell delineation.
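The HEC computation can be sketched as a CRC-8 over the first four header bytes (generator x^8 + x^2 + x + 1), with the result XORed with 0x55 as ITU-T I.432 specifies:

```python
def atm_hec(header4: bytes) -> int:
    """HEC byte for a 4-byte ATM cell header: CRC-8 with generator
    x^8 + x^2 + x + 1, result XORed with the coset value 0x55."""
    crc = 0
    for b in header4:
        crc ^= b
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

# The idle-cell header 00 00 00 01 yields the well-known HEC value 0x52.
print(hex(atm_hec(bytes([0x00, 0x00, 0x00, 0x01]))))  # → 0x52
```

A receiver recomputes this value over every candidate 4-byte window to hunt for cell boundaries, which is exactly the cell-delineation use mentioned above.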

3.1.4 Fundamentals of ATM Switching


In an ATM network, a physical transmission channel is divided into multiple Virtual Paths (VPs), and a VP may be multiplexed by thousands of Virtual Channels (VCs). Both the VP and the VC describe unidirectional transmission routes of ATM cells, and cells can be switched at either the VP level or the VC level. Each VP can accommodate at most 65536 virtual channels through multiplexing. Cells belonging to the same VC carry the same VC Identifier (VCI), and different VCs belonging to the same VP carry the same VP Identifier (VPI); both the VCI and the VPI are transmitted as part of the cell header. The transmission channel, the VP and the VC are three important concepts in ATM. Fig. 3.1-2 shows the relationship among them.

Fig. 3.1-2 Relationship among Transmission Channel, VP and VC

Call processing in ATM is based on the concept of the virtual call from packet switching, rather than routing each cell individually. The route for cells belonging to a call is established in advance, before transmission, and all cells of the call follow this route until the call ends. The procedure is as follows. The calling party sends a call-request control signal via a UNI. The called party receives the signal and accepts the request. The switching nodes in the network then form a virtual circuit between the calling and called parties by exchanging signaling; the virtual circuit is represented by a series of VPI and VCI values. While setting up the virtual circuit, every switching node on it prepares a routing table used to translate the VPI/VCI of an input cell into the VPI/VCI of the output cell. After the virtual circuit is established, the information to be transmitted is segmented into cells, which are transferred to the called party over the network. If the transmitting end wants to send messages to several receiving ends at the same time, separate virtual circuits can be set up to the corresponding receiving ends, and the cells are output alternately.



Within a virtual circuit, the VPI/VCI values of cells remain unchanged between two adjacent switching nodes; between those nodes a VC link is formed. A series of VC links forms a VC Connection (VCC); similarly, VP links form a VP Connection (VPC).

1. VP switching

When a cell passes through an ATM switching node, the node replaces the VPI value of the input cell with a new value determined by the destination of the VP, assigns the new value to the cell and outputs it. This process is called VP switching. As shown in Fig. 3.1-3, all VC links in a VP are transferred to another VP during VP switching, while the VCI values of those VC links remain unchanged. VP switching is simple to implement; generally it can be realized through cross-connection at some level of the digital multiplex hierarchy in the transmission channel.

Fig. 3.1-3 VP Switching (VPI=1 → VPI=4 and VPI=2 → VPI=5; VCI values 1, 2, 7, 8 unchanged)

2. VC switching

VC switching must be performed together with VP switching, because where a VC link ends, the corresponding VP connection ends too. All the VC links on that VPC are then switched individually and added to VPCs in different directions, as shown in Fig. 3.1-4.



Fig. 3.1-4 VC Switching
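The routing-table lookups behind Figs. 3.1-3 and 3.1-4 can be sketched as follows; the table entries are illustrative values modelled on the figures (VPI 1 → 4 and VPI 2 → 5 with VCIs untouched), not a real node configuration.

```python
# VP switching: only the VPI is translated; the VCI rides along unchanged.
VP_TABLE = {1: 4, 2: 5}                       # input VPI -> output VPI

# VC switching: both identifiers may be translated per (VPI, VCI) pair.
VC_TABLE = {(1, 1): (2, 3), (1, 2): (3, 4)}   # (VPI, VCI) -> (VPI', VCI')

def vp_switch(vpi, vci):
    """Pure VP switching: look up the new VPI, pass the VCI through."""
    return VP_TABLE[vpi], vci

def vc_switch(vpi, vci):
    """VC switching: both identifiers may change per the routing table."""
    return VC_TABLE[(vpi, vci)]

print(vp_switch(2, 7))  # → (5, 7)  VCI 7 is untouched
print(vc_switch(1, 2))  # → (3, 4)  both VPI and VCI change
```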

3.1.5 ATM Statistics Multiplexing


The greatest feature of ATM is achieving the best resource utilization in the network under any traffic distribution. This requires the statistical multiplexing of network resources: network resources are distributed dynamically among traffic flows according to the statistical characteristics of the various services, while the quality of service is still guaranteed, thereby achieving the best resource utilization. As shown in Fig. 3.1-5, the data from users A, C and D are arranged in turn on the line; user B does not occupy bandwidth on the output line because he or she has no data to transmit at that moment. In this sense, an ATM link is a virtual link.

Fig. 3.1-5 ATM Multiplexing


3.1.6 ATM Protocol Reference Model


The ATM protocol reference model contains three planes: the user plane, the control plane and the management plane. Here we mainly introduce the user plane, including the data flow directions and the functions of each layer, especially the ATM Adaptation Layer (AAL), AAL1 to AAL5, the service types such as CBR/VBR/UBR/ABR, and the service classes A/B/C/D. The user plane contains the physical layer, the ATM layer, the AAL and the higher layers, similar to the OSI model. Fig. 3.1-6 illustrates the data transmission between these layers.
Fig. 3.1-6 Data Transmission between Layers (higher-layer information → AAL SDU → 48-byte payload plus 5-byte header → 53-byte cell → bit flow)

The functions of each layer are as follows.

1. Physical layer

The physical layer carries the information flow. It contains two sublayers: the Transmission Convergence (TC) sublayer and the Physical Medium (PM) sublayer.

1) TC sublayer

This sublayer embeds ATM cells into the transmission frame of the current medium or, in the reverse direction, extracts valid ATM cells from the transmission frame. The procedure for embedding ATM cells is: ATM cell demodulation (buffering) → Header Error Control (HEC) generation → cell delineation → transmission frame adaptation → transmission frame generation. The procedure for extracting ATM cells is: transmission frame reception → transmission frame adaptation → cell delineation → header error checking → ATM cell queuing. The main functions of the TC sublayer are cell delineation and header error control.

2) PM sublayer

The PM sublayer is based on ITU-T and ATM Forum recommendations. It includes the following connections:

Connections based on direct transmission of cells
Connections over PDH networks
Connections over SDH networks
Direct optical transmission of cells
Connections between Universal Test & Operation PHY Interfaces for ATM (UTOPIA)
Connections between Operation And Maintenance (OAM) interfaces for management and monitoring information flow

2. ATM layer

This layer mainly implements the multiplexing and demultiplexing of cells, header-related operations and flow control. Cell multiplexing and demultiplexing take place at the interface between the ATM layer and the TC sublayer of the physical layer. The sending ATM layer combines cells with different VPI/VCI values and transfers them to the physical layer as a single stream. The receiving ATM layer recognizes the VPI/VCI of each cell received from the physical layer and dispatches it to the appropriate module for processing: a signaling cell is sent to the control plane, while a management cell is sent to the management plane.


The header operation is the translation of VPI/VCI according to the values allocated when the link was established.

3. AAL layer

The AAL works on top of the ATM layer. It is service-aware, adopting different adaptation methods for different services. For each adaptation mode, the information flow from the higher layer (of varying length and rate) is split into 48-byte ATM service data units; in the reverse direction, it reassembles cells from the ATM layer, recovers the flow and passes it to the higher layer. Because the higher layer carries many kinds of information and the processing is complicated, the AAL is divided into two sublayers: the Convergence Sublayer (CS) and the Segmentation and Reassembly (SAR) sublayer. To increase the speed of switching networks, the ATM layer has been simplified as much as possible; it therefore provides no functions concerning quality of service, such as handling cell loss, transmission errors, delay and jitter. These functions are performed by the AAL. Different services need different adaptation; four classes of service are defined according to the timing, bit-rate and connection-mode requirements between source and destination, corresponding to the AAL protocols AAL1, AAL2, AAL3/4 and AAL5 respectively.

AAL1 supports constant-bit-rate, connection-oriented traffic in which timing information must be transferred between source and sink. Common services of this class include 64 kbit/s voice, uncompressed constant-bit-rate video, and leased lines in private data networks.

AAL2 is provided for point-to-point variable-bit-rate traffic with timing relations. Common services of this class are compressed packet voice and compressed video transmission. One characteristic of these services is transmission delay, caused by the reassembly of the uncompressed voice and video information at the receiver.

AAL3/4 is provided to adapt two kinds of data services in the ATM network: the data service corresponding to remote LAN interworking, and the connection-oriented data service.


AAL5 supports variable-bit-rate traffic without synchronization requirements between the transmitting and receiving ends. It provides services similar to AAL3/4 and is mainly used to transmit computer data, UNI signaling information and frame relay in the ATM network. The purpose of AAL5 is to reduce overhead and provide a simple, efficient AAL.
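As a hedged sketch of how an AAL5-style convergence sublayer prepares a higher-layer PDU for the ATM layer: the PDU is padded so that payload plus an 8-byte trailer fills an integral number of 48-byte cell payloads. The trailer layout (UU, CPI, Length, CRC-32) follows AAL5 convention, but the CRC field is left zeroed here, so this is only the length bookkeeping, not the full protocol.

```python
CELL_PAYLOAD, TRAILER = 48, 8  # 48-byte ATM payload; 8-byte CPCS trailer

def aal5_segment(pdu: bytes):
    """Pad the PDU, append a trailer carrying its length (CRC field zeroed),
    and cut the result into 48-byte segments for the ATM layer."""
    pad = (-(len(pdu) + TRAILER)) % CELL_PAYLOAD
    trailer = bytes(2) + len(pdu).to_bytes(2, "big") + bytes(4)  # UU, CPI, Length, CRC (elided)
    frame = pdu + bytes(pad) + trailer
    return [frame[i:i + CELL_PAYLOAD] for i in range(0, len(frame), CELL_PAYLOAD)]

cells = aal5_segment(b"x" * 100)
print(len(cells), len(cells[0]))  # → 3 48  (100 + 36 pad + 8 trailer = 144 bytes)
```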

3.1.7 ATM Service Type


The ATM layer provides five types of service: Constant Bit Rate (CBR), real-time Variable Bit Rate (rt-VBR), non-real-time Variable Bit Rate (nrt-VBR), Available Bit Rate (ABR) and Unspecified Bit Rate (UBR). These service types concern the traffic characteristics and the QoS of the network.

1. CBR services

CBR services are provided for links whose bandwidth is static over their lifetime; the bandwidth is determined by the Peak Cell Rate (PCR). The network gives CBR users a basic guarantee: once the link is established, the negotiated ATM-layer QoS is ensured for all cells that pass the conformance check. That is, QoS must be guaranteed for CBR services whenever the source sends cells at the peak cell rate, for any duration. CBR services generally include (but are not limited to) real-time services with strict requirements on delay variation, such as voice, video and circuit emulation. For CBR services the source may transmit cells at the negotiated PCR, at a rate below the PCR, or not at all. Performance is considered drastically degraded if the cell delay exceeds the specified maximum cell transfer delay (maxCTD).

2. rt-VBR services

As real-time applications, rt-VBR services strictly limit delay and delay variation; they mainly include voice and video services. The characteristics of an rt-VBR link are described by the Peak Cell Rate (PCR), Sustainable Cell Rate (SCR), Maximum Burst Size (MBS) and Cell Delay Variation Tolerance (CDVT). The cell rate at the source is variable; in other words, the source may be bursty. rt-VBR services support the statistical multiplexing of real-time sources.


3. nrt-VBR services

nrt-VBR services support bursty non-real-time applications. The link characteristics are described by PCR, SCR and MBS. An nrt-VBR service can ensure a very low cell loss ratio for cells that conform to the traffic contract, but it places no bound on delay. nrt-VBR services support the statistical multiplexing of connections.

4. UBR services

The UBR service is for non-real-time applications; it does not strictly limit delay or delay variation. UBR services include traditional computer-communication applications such as file transfer and e-mail. UBR services guarantee neither the quality of service nor bounds on the cell loss ratio and cell transfer delay. The network may decide whether to use the PCR in Connection Admission Control (CAC) and Usage Parameter Control (UPC); when the network does not enforce the PCR, its value is meaningless. Congestion control for UBR links is carried out at higher layers on an end-to-end basis.

5. ABR services

The transfer characteristics of an ABR service established at link setup can be changed later. A flow-control mechanism supports feedback to the source to control its cell transmission rate; the feedback is carried in a special control cell, the Resource Management (RM) cell. A low cell loss ratio is expected when the end system adjusts its flow according to the feedback, and a fair share of the available bandwidth can then be obtained. For a given link, the ABR service places no bound on delay or delay variation; that is, ABR does not support real-time applications. When establishing an ABR link, the end system specifies the maximum bandwidth required and the minimum usable bandwidth, represented by the PCR and the Minimum Cell Rate (MCR); the MCR may be 0. The bandwidth provided by the network may vary, but never falls below the MCR.



3.1.8 ATM Communication QoS


Quality of Service (QoS) is an important part of ATM networking, because ATM networks are often used for real-time transmission such as voice and video. A QoS traffic contract has two major parts. The first is the traffic descriptor, which characterizes the load to be offered. The second specifies the QoS desired by the customer and accepted by the carrier. To make the traffic contract concrete, the ATM standards define a set of QoS parameters whose values the customer can negotiate with the carrier: PCR, SCR, MCR, MBS and CDVT. Each parameter is defined as a worst-case value that the carrier undertakes to meet or exceed; in some cases the parameter is a minimum, in others a maximum. The QoS is defined separately in each direction. The correlation between the five ATM service classes and the QoS parameters is shown in Table 3.1-1, where √ indicates Specified and - indicates Unconcerned.
Table 3.1-1 Correlativity between ATM services and QoS parameters

ATM Service | PCR, CDVT | SCR, MBS, CDVT | MCR
CBR         | √         | -              | -
rt-VBR      | √         | √              | -
nrt-VBR     | √         | √              | -
ABR         | √         | -              | √
UBR         | √         | -              | -
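The mapping in Table 3.1-1 can be captured in a small lookup, useful when validating a traffic contract. This is an illustrative Python sketch; the dictionary and helper names are ours, not part of any ATM standard.

```python
# Which traffic parameters each ATM service category specifies
# (per Table 3.1-1); names are illustrative.
SPECIFIED_PARAMS = {
    "CBR":     {"PCR", "CDVT"},
    "rt-VBR":  {"PCR", "CDVT", "SCR", "MBS"},
    "nrt-VBR": {"PCR", "CDVT", "SCR", "MBS"},
    "ABR":     {"PCR", "CDVT", "MCR"},
    "UBR":     {"PCR", "CDVT"},  # the network may choose not to enforce PCR
}

def must_specify(service, param):
    """True if the traffic contract for `service` specifies `param`."""
    return param in SPECIFIED_PARAMS[service]
```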

3.2 ATM Processing in MSTP Devices


3.2.1 Background of ATM Application on MSTP
The transfer and processing of Ethernet services are handled well by the new technologies of the new-generation MSTP, such as GFP encapsulation, virtual concatenation, LCAS, embedded Resilient Packet Ring (RPR) and enhanced Layer 2 functions. However, Ethernet cannot guarantee QoS because it is connectionless. ATM is connection-oriented and offers statistical multiplexing and QoS; the ATM technology is therefore widely used in broadband access and 3G networks. The new-generation MSTP with ATM functions, such as inverse multiplexing and statistical multiplexing, enables carriers to meet the demands of ATM service applications and to lead in metro area network construction.

3.2.2 Key Technology of ATM Service Processing


1. Inverse Multiplexing for ATM (IMA) technology
Most devices in current use provide T1/E1 line interfaces, frame generation and the ATM transmission convergence (TC) layer, and the ATM cell transfer bus (UTOPIA) specified by the ATM Forum is widely adopted. Accordingly, a solution defined between the TC layer and the ATM layer appeared: inverse multiplexing. The IMA technology is the inverse of traditional multiplexing: it splits a single ATM cell stream at the transmitting end and distributes the cells over multiple low-rate links; at the receiving end the cells carried on the links are recombined into the original high-rate ATM cell stream. This enables broadband ATM cell streams to be carried over multiple T1 or E1 lines. As shown in Fig. 3.2-1, three T1 links are combined into about 4.6 Mbit/s of bandwidth. The IMA technology is applicable to both public and private networks. By using enough low-cost narrowband line terminations, users enjoy the advantages of ATM, such as QoS, service access, scalability and the ability to carry mixed data, voice and video traffic conveniently, without deploying expensive broadband equipment such as T3, E3 or SONET/SDH devices.

Fig. 3.2-1 Principle of IMA Technology

The IMA technology provides functions similar to virtual concatenation and LCAS for Ethernet. When the traffic on an ATM Digital Subscriber Line Access Multiplexer (DSLAM) cannot be carried over a single path, IMA can split the ATM service onto multiple low-rate E1 links with IMA timing and transmit them transparently through VC-12s on different spare paths of the existing transmission network. With a function similar to LCAS, IMA can adjust the bandwidth of ATM services dynamically: it preserves the QoS of the remaining E1 links when one E1 link fails, and it can dynamically adjust the bandwidth combination of the links so that bursty services can be accepted at any moment.
2. Virtual Path Ring (VP-Ring) technology
For ATM services, standard SDH can also carry ATM 155/622 Mbit/s interfaces directly. For bursty data services, however, statistical multiplexing of ATM services on the ring should be performed by the MSTP, taking advantage of the large dynamic variation of the actual data traffic. The ATM VP-Ring technology carries data services in SDH VC-4s, implements statistical multiplexing and protection for ATM service access nodes, and specifies the service convergence processing principle. As shown in Fig. 3.2-2, the ATM DSLAM, Node B and Radio Network Controller (RNC) are connected to the MSTP via 155 Mbit/s interfaces, while the bandwidth actually transmitted varies dynamically. With ATM VP-Ring, all ATM nodes on the ring can share one VC-4 of the SDH path, improving the bandwidth utilization ratio greatly.
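The splitting and recombination performed by IMA can be sketched as follows. This is only a round-robin illustration in Python; real IMA groups use IMA frames and ICP cells to keep the links aligned, which is omitted here.

```python
def ima_distribute(cells, num_links):
    """Transmit side: spread one ATM cell stream round-robin over
    several low-rate (e.g. E1) links."""
    links = [[] for _ in range(num_links)]
    for i, cell in enumerate(cells):
        links[i % num_links].append(cell)
    return links

def ima_recombine(links):
    """Receive side: interleave the per-link streams back into the
    original high-rate cell stream."""
    cells = []
    for i in range(max(len(link) for link in links)):
        for link in links:
            if i < len(link):
                cells.append(link[i])
    return cells
```

Three E1 links bundled this way yield roughly three times the payload of a single link, matching the bandwidth-bundling idea of Fig. 3.2-1.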

Fig. 3.2-2 Principle of ATM VP-Ring


3.2.3 ATM Layer Processing Function of MSTP Devices


For the ATM layer processing function of MSTP nodes over SDH, the protocol reference model and layered functions should comply with Recommendation I.321, and the functional characteristics should comply with Recommendations I.731 and I.732. The main function of the ATM access board in MSTP equipment is to converge ATM services onto the SDH transmission network. Fig. 3.2-3 shows the functional block model.

Fig. 3.2-3 ATM Function Model of MSTP Devices

1. ATM service types provided
The SDH-based multi-service transport node provides the following ATM services for ATM service sources with different characteristics:
CBR service
rt-VBR service
nrt-VBR service
UBR service

2. Basic connection functions
The VP-Ring supports the establishment and removal of Permanent Virtual Circuits (PVCs) by command, as well as the ordered establishment and removal of user data paths between ATM interfaces.

Point-to-point connection function
It supports the establishment of PVCs by command.

Point-to-multipoint connection function
Multipoint network connection: supports network interconnection between two or more physical interfaces.
ATM multicast: supports replicating the VP/VC of an input cell flow to multiple output ATM links during ATM switching.
Space multicast: the output ATM links are located on two or more physical interfaces, each interface carrying only one ATM link.
Logic multicast: two or more output ATM links share one physical interface.

Connection management function
This function includes the following two parts.
Network resource control: management of VPI/VCI and network bandwidth, as well as service routing.
Flow control: provides the contracted QoS for ATM data flows, including traffic shaping, Usage/Network Parameter Control (UPC/NPC), Connection Admission Control (CAC), Selective Cell Discard (SCD), frame discard, user data buffering and QoS class management.

3. ATM layer service protection switching
The ATM layer service protection mode is ATM virtual path (VP) protection. ATM service protection switching is generally layered: the physical layer adopts SDH protection, such as multiplex section protection, while the ATM layer adopts ATM VP protection. When switching is enabled at both the ATM layer and the physical layer, coordination between the layers is achieved by delaying the ATM layer switching, so that the two switchings do not overlap.


4 Theory of RPR
Key points
RPR overview and features
RPR networking architecture
Concepts and functions of RPR technology
RPR network hierarchy model
RPR fairness algorithm
RPR topology discovery
RPR protection
Implementation scheme of RPR
System architecture of MSTP with embedded RPR

4.1 Overview of RPR Technology


4.1.1 Emergence of RPR Technology
The Resilient Packet Ring (RPR) technology was proposed in 2000 as a solution to the limitations of the SDH, ATM and Ethernet technologies widely used in metro area networks. As a Time Division Multiplexing (TDM) technology, SDH is poor at supporting packet services, so its resource utilization for data is low; a metro area network built on SDH is complicated in structure, and bandwidth is difficult to share. SDH is therefore generally used in existing TDM networks with data services only as a supplement. Although ATM has the advantage of QoS, its complexity leads to high cost and heavy cell overhead, and it has not kept pace with the development of IP networks. Ethernet, a low-cost and simple technology, is widely used in local area networks; however, it cannot satisfy carriers' requirements because it lacks effective QoS, network recovery and protection, and network management mechanisms.


Currently, EOS products combining the Ethernet and SDH technologies are also deployed on a large scale. EOS is the main, mature product of the earlier MSTP technology. It meets the early demand for data transmission over TDM networks by carrying Ethernet frames in SDH virtual containers, encapsulating the frames directly into virtual containers with the GFP, HDLC/PPP or LAPS protocol. In this way, EOS avoids the expensive Packet Over SONET (POS) interfaces that would otherwise be needed when the SDH optical network has no packet service interface. Generally, services are converged through an 802.3 switching module before encapsulation, in order to improve bandwidth utilization. This is the second-generation MSTP technology mentioned in the first chapter of this book. The main disadvantages of the second-generation MSTP are as follows.

Complicated configuration: Services between sites must be configured one by one, and the intermediate sites must be configured as pass-through. For a complex network, a great deal of configuration and maintenance work is required.

Lack of sharing: Traffic is carried over VC connections. As the carrier of traffic, a link cannot share its bandwidth with other links.

Low bandwidth utilization: To avoid broadcast storms, the STP protocol must run, so some bandwidth cannot be fully used. On the other hand, the MSTP needs SDH protection because it has no fast protection mechanism of its own (STP protection, at the level of seconds, is too slow); yet SDH protection, despite its speed, wastes 50 percent of the bandwidth.

Special requirements on convergence ratio: In convergence networks, boards with multiple system directions are needed to handle the link requests from the various sites to the convergence site.

Difficulty ensuring QoS in ring networks: Although EOS equipment can avoid some of the above problems by building Ethernet rings, the problems cannot be avoided completely, because networks complying with 802.3 were not originally designed for rings. Moreover, traffic from upstream and downstream sites interacts, so QoS cannot be guaranteed. Such deployments are therefore rare so far.

All these problems are those the RPR technology should solve.

4.1.2 Basic Concepts and Features of RPR Technology


The IP industry has long recognized the value of ring network architectures. It has done much work in this field and developed solutions such as Token Ring and the Fiber Distributed Data Interface (FDDI). However, these solutions meet neither the demands of IP traffic and optical fiber bandwidth nor the requirements of evolving IP transmission and service transfer, such as keeping bandwidth utilization high and forwarding traffic under congestion, ensuring fairness between nodes, recovering quickly from node or transmission medium failures, and supporting plug-and-play. Ring networks such as Token Ring and FDDI are therefore not applicable to the metro area network. Service providers and enterprises need a scalable technology that can be applied in the Metro Area Network (MAN) and Wide Area Network (WAN) and can transfer IP packets at Gigabit rates. In November 2000, IEEE formally set up the 802.17 Resilient Packet Ring Working Group (RPRWG) to develop an RPR MAC standard and thus optimize packet transport on rings in LAN, MAN and WAN topologies.
The Resilient Packet Ring is a new MAC layer protocol that optimizes data service transmission over ring architectures. It exploits the advantages of the ring architecture, and solves the problems of packet service transfer through supporting protocols at the MAC layer, such as bandwidth sharing, protection and QoS guarantees. Because the bandwidth of low-priority traffic is controlled by an internal algorithm and adjusts automatically, the technology is called the resilient packet ring. The RPR adapts to various physical layers such as SDH and Ethernet, and can carry various types of service such as data, voice and video.
It not only inherits the Ethernet qualities of economy, flexibility and scalability, but also absorbs the quick protection (50 ms) of SDH ring networks. Besides, the RPR technology offers automatic discovery of the network topology, bandwidth sharing on the ring, fair allocation, and strict Class of Service (CoS). The purpose of RPR is to provide a more economical and efficient metro area network solution without sacrificing network performance or reliability.


The RPR technology has the following features.

High bandwidth efficiency
Traditional SDH networks reserve 50% of the ring bandwidth as redundancy; the RPR does not. The RPR retains a protection mechanism similar to that of SDH networks: it protects services using two counter-rotating rings and allows data traffic to travel at full speed on the ring between the source node and the destination node. With the Spatial Reuse Protocol (SRP), frames dropped at the destination node are stripped there, so the bandwidth is released to the downstream section rather than continuing around the ring. Spatially separated traffic flows can thus each use their own bandwidth without affecting one another. To sum up, data normally travels on the shortest arc between the source node and the destination node, and multiple node pairs can intercommunicate at the same time; many nodes can therefore send and receive packets simultaneously, improving the utilization of the ring bandwidth. The improvement is especially evident on rings with many nodes.
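Spatial reuse can be illustrated with a small check on ring arcs. This is a Python sketch with our own station numbering and helper names: two unicast flows can run at full rate simultaneously whenever their source-to-destination arcs share no span.

```python
def arc(src, dst, n):
    """Spans crossed going clockwise from src to dst on an n-station ring."""
    spans, node = set(), src
    while node != dst:
        spans.add((node, (node + 1) % n))
        node = (node + 1) % n
    return spans

def can_reuse(flow_a, flow_b, n):
    """True if the two (src, dst) flows occupy disjoint arcs and can
    therefore use the ring bandwidth at the same time."""
    return not (arc(*flow_a, n) & arc(*flow_b, n))
```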

Fair bandwidth allocation protocol (QoS guarantee)
The RPR enables bandwidth sharing for data services through an effective fairness algorithm. In the network, traffic at the user access end is bursty in nature, while traffic in the core of the network is comparatively smooth and can therefore be predicted. By classifying services, the RPR technology enables carriers to admit low-priority services (such as some data services) only when spare bandwidth is available. This not only makes full use of the inherent characteristics of such traffic, but also avoids bandwidth unfairness between upstream and downstream sites.

Quick protection mechanism
The RPR can provide 50 ms service protection, similar to the Automatic Protection Switching (APS) of SDH networks. At present, two methods are used to bypass failures: wrapping and steering. With wrapping, the nodes adjacent to the failure loop the traffic from one ring onto the other, for example from the inner ring onto the outer ring. This preserves the continuity (sequence) of the data, even though the traffic reaches the destination node over a longer path. With steering, the traffic flow is reversed in direction and reaches the destination node over the other path.
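The steering behaviour can be sketched as follows. This is illustrative Python only; in real 802.17 steering the source learns of the failure from topology/protection messages rather than an explicit failed-span argument.

```python
def clockwise_path(src, dst, n):
    """Node sequence from src to dst going clockwise on an n-station ring."""
    path, node = [src], src
    while node != dst:
        node = (node + 1) % n
        path.append(node)
    return path

def steer(src, dst, n, failed_span):
    """The source keeps the clockwise ringlet unless its path crosses the
    failed span; otherwise it steers onto the counter-clockwise ringlet."""
    path = clockwise_path(src, dst, n)
    if failed_span in set(zip(path, path[1:])):
        # the counter-clockwise path from src to dst avoids the break
        return list(reversed(clockwise_path(dst, src, n)))
    return path
```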

Seamless connection with SDH networks
The RPR system with embedded SDH can be connected to SDH networks seamlessly, since most SDH networks are rings and the RPR is a ring structure. It makes full use of the bandwidth of the ring. Unlike existing MSTP, it does not need the STP protocol or manual configuration to avoid traffic loops, which otherwise can waste bandwidth.

Simple service configuration
One objective of the RPR technology is distributed access. Distributed access, together with quick protection and automatic traffic re-establishment, provides a plug-and-play mechanism for inserting or removing nodes quickly. The RPR is a packet switching technology with shared bandwidth on the ring, and each node knows the available capacity of the ring. Under the traditional circuit switching mode, each connection in the whole network must be configured point to point; the RPR only needs the connection relationship between the access end and the ring to be configured. It is unnecessary to configure the connections between nodes or the flow direction of traffic, which simplifies configuration greatly. Furthermore, this service configuration mode avoids the convergence ratio problem of traditional EOS devices: ignoring the bandwidth limitation, the RPR can support an almost unlimited convergence ratio.

4.2 Fundamentals of RPR Technology


4.2.1 RPR Ring Network Architecture
The RPR has a dual-ring architecture, as shown in Fig. 4.2-1. Similar to the bidirectional multiplex section ring topology of SDH, it consists of two counter-rotating rings: the clockwise one is called Ringlet 0 and the counter-clockwise one Ringlet 1. Each node on the ring is called a site, identified by a 48-bit address. The connection between sites is a span, and each span counts as one hop for the Time To Live (TTL).
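Shortest-arc ringlet selection, and the hop count that seeds the TTL, can be sketched as follows (illustrative Python; the helper name is ours):

```python
def select_ringlet(src, dst, n):
    """Pick the ringlet with the shorter arc on an n-site ring.
    Returns (ringlet, hops): ringlet 0 is clockwise, ringlet 1 is
    counter-clockwise, and hops is the span count on the chosen arc."""
    cw = (dst - src) % n    # hops going clockwise
    ccw = (src - dst) % n   # hops going counter-clockwise
    return (0, cw) if cw <= ccw else (1, ccw)
```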


Fig. 4.2-1 RPR Ring Network Architecture

Data transmission over RPR supports the unicast, multicast and broadcast mechanisms. For multicast or broadcast, each node can inspect the data after it is sent by the source node: the node analyzes the address information in the frame header and, if the address matches, copies the data and then forwards it to the next node. After the data travels around the ring and returns to the source node, the source node strips it from the ring. For unicast, the data packet travels only on the shortest arc between the source node and the destination node: the source node sends the data, and the destination node receives it and strips it from the ring. The RPR is thus more efficient than Token Ring or FDDI in metro area networks. The topology of RPR is a pair of symmetrical counter-rotating rings: one is the inner ring, the other the outer ring. This architecture provides the following benefits.

Two paths between each pair of nodes ensure high reliability, and two protection mechanisms can be used. One is wrapping: when a node adjacent to the failure detects it, the affected span stops carrying traffic and the traffic on that ring is looped onto the other ring. The other is steering: when a failure occurs, protection messages are quickly dispatched to all nodes on the ring, and the source node selects the ring for data transfer so that the data bypasses the failure point. IEEE 802.17 specifies steering as the default protection mechanism.

The longest path is half of the ring, since data can be transmitted in both directions and each node can select the shortest transfer path.


Spatial reuse is possible. In unicast mode, data can be transferred on different parts of the ring at the same time, so the capacity of the whole ring can be several times that of a single fiber.

The "Resilient" in RPR refers to the RPR protection protocol. The Operation, Administration and Maintenance (OAM) protocol of RPR covers four functions defined by ISO: fault management, configuration management, performance management and accounting management. Fault management is the core of OAM, responsible for detecting, isolating and correcting exceptional conditions in the network and reporting them to the network management system. IEEE 802.17 specifies that network elements must detect two types of failure: Loss Of Continuity (LOC) and Remote Defect Indication (RDI).

4.2.2 RPT Technology


As a superset of RPR, the Resilient Packet Transport (RPT) technology goes beyond RPR, extending resiliency protection further. The RPT classifies services into different levels, identified by the MPLS CoS field. It adopts a special synchronization mechanism to provide a reliable clock with delay and jitter assurance, and supports TDM services such as voice. RPT devices provide many Ethernet interfaces, and their Layer 2 switching based on logical MAC addresses greatly simplifies IP packet transmission in metro area networks, thus decreasing the network construction cost. The RPT technology gives data and voice services a platform for Layer 2 statistical multiplexing: private line or data services, once connected to the network, can be transmitted over the same bandwidth. Because all services share the bandwidth, the RPT improves the bandwidth utilization ratio greatly.
1. Quick Layer 2 packet switching
The RPT Layer 2 adopts the resilient packet transport frame encapsulation. The frame header overhead contains a logical MAC address (1 byte) and a standard MPLS field. Every node on the ring is assigned a unique logical MAC address, and at most 255 nodes can be addressed (whereas at most 16 nodes can be identified on an SDH ring). All nodes perform quick Layer 2 switching based on the logical MAC address: a transit node judges the destination from the logical MAC address of the traffic flow and forwards the traffic quickly without further processing.
2. Flexible interfaces
Currently, the RPT technology supports various tributary interfaces such as E1, 10M/100M, GE, STM-1 and POS, and the line interface rate can reach GE, 2.5G and 10G. Some RPT devices even provide Dense Wavelength Division Multiplexing (DWDM) boards to reach line rates of 80G or higher. The high line rates and abundant tributary interfaces fit the traffic deployment of broadband metro area networks very well.
3. Spatial reuse technology based on logical MAC
The RPT supports the Spatial Reuse Protocol (SRP), a medium-independent MAC layer protocol. All nodes have equal control rights over the bandwidth.
4. Bidirectional counter-rotating ring, single-fiber ring and linear network topology
The RPT ring is a bidirectional counter-rotating ring: one fiber forms the clockwise ring, the other the counter-clockwise ring. Each fiber carries data and control signals in one direction, the control signal being transferred as the packet with the highest priority. The RPT also supports a single-fiber topology, bidirectional data transmission at 1G or 2.5G, and WDM data transmission.
5. Medium independence
The RPT does not depend on the physical layer medium, which can be a fiber or a wavelength in a DWDM system.
6. Comparison of RPT and other broadband technologies
Compared with other technologies, the greatest features of the RPT technology are the economy of the LAN, a reliable basis for guaranteeing TDM transmission, and full use of the network bandwidth.

Comparing the RPT with SDH/POS (Packet Over SDH)
Both the RPT and POS avoid the complex protocols and heavy header overhead of the ATM technology. They transfer Gigabit IP services over fiber in the format of resilient packet data frames (similar to Ethernet frames), with no need to disassemble and reassemble IP packets, which greatly improves the processing capability of switches and decreases equipment cost. In addition, the RPT uses bandwidth dynamically, greatly increasing the bandwidth utilization ratio; it thus avoids the point-to-point limitation of POS and reduces the number of ports. For network protection, the RPT performs protection switching based on source routing on the ring, which differs from the multiplex section protection of SDH and is more economical in network resources. When a fiber breaks, the nodes at both ends of the break send Layer 2 control signaling to every node along the fiber. As soon as the source node of the traffic receives the control message, it sends the service, addressed by the logical MAC address of the destination node, onto the fiber in the other direction, and protection is thus accomplished. The protection route selected by source-routed switching is clearly optimal: it saves fiber bandwidth, and the protection switching time is less than 50 ms. To sum up, the RPT functions of statistical bandwidth multiplexing, multiple high-speed Ethernet interfaces, differentiated service levels and source-routed ring protection per service level can both guarantee the transfer of TDM services and support bursty IP services efficiently, which POS/SDH cannot provide.

Comparing the RPT with Dynamic Packet Transport (DPT)/Gigabit Ethernet (GE)
Compared with DPT/GE, the RPT has the important advantage of providing multi-service transport switching, including TDM services.


4.2.3 RPR Network Hierarchy Model


Fig. 4.2-2 shows the hierarchy reference model of the RPR.
[Figure: the RPR hierarchy mapped against the OSI reference model. The RPR MAC, comprising the logical link sublayer, the MAC control sublayer (fairness algorithm, protection, ring selection, topology discovery, OAM) and the MAC data channel sublayer, corresponds to the OSI data link layer. Below it, the GFP and SDH coordination sublayers, PPP/LAPS and GFP adaptation, SONET/SDH framing and the medium-dependent interfaces correspond to the physical layer, joined through the MAC/physical service interfaces and the System Packet Interface.]

Fig. 4.2-2 RPR Hierarchy Reference Model

The hierarchy reference model complies with the Open System Interconnection (OSI) reference model and corresponds to the first and second layers of the OSI model. The purposes of the RPR physical layer interface and physical layer entities are as follows:
1) Supporting the RPR MAC
2) Supporting the GE and 10G Ethernet physical layer entities
3) Supporting the framing modes of GFP and byte-synchronous HDLC/LAPS, with physical layer entities running at rates of 155 Mbit/s to 9.95 Gbit/s
4) Supporting synchronous or plesiochronous network applications
5) Supporting full-duplex operation only

In the hierarchy model of RPR, the coordination sublayer of the physical layer at the bottom is responsible for mapping information between the Medium-Independent Interface (MII) and the physical medium. The System Packet Interface (SPI) defined by the Optical Internetworking Forum (OIF) is an interface between physical layer devices and data link layer devices; it separates the synchronous and asynchronous layers by sending and receiving data at a rate independent of the actual line bit rate. The SONET/SDH adaptation layer implements the mapping from the RPR to SONET/SDH, taking the GFP protocol or an HDLC-family protocol (PPP/LAPS are widely used) as the data link layer mapping protocol.
The MAC sublayer consists of two parts: the MAC data channel sublayer and the MAC control sublayer. The MAC data channel sublayer transmits and receives frames on the physical medium through the physical service interface. The data channel includes two function modules: ringlet-independent and ringlet-specific. The ringlet-independent functions include MAC service interface processing, ringlet selection, receiving frames from the ring and delivering data to the client layer. The ringlet-specific functions include traffic flow adjustment for local data transmission and data exchange through the physical service interface. The MAC control sublayer is responsible for the work needed to maintain the data channel sublayer, including the RPR topology discovery protocol, the RPR fairness algorithm, RPR ring protection and RPR OAM. Topology discovery messages are carried in RPR control frames. The MAC service interface is used to transmit data from the MAC client layer, as well as local messages from the MAC layer to the MAC client layer. The MAC control sublayer establishes the data channel independently of the actual ring network and performs the related control operations, while the MAC data channel sublayer performs functions tied to the actual ring network, such as access control and data transmission.


4.2.4 RPR MAC Data Frame Processing


The RPR MAC has four frame types: data frames, control frames, fairness frames and idle frames.

Fig. 4.2-3 RPR MAC Data Frame Processing

RPR processing on the receiving side (ingress):
1. When the node receives a data frame from the ring, it checks the Time To Live (TTL) value in the frame. If the value is not zero, the TTL field is decremented by 1; if the TTL reaches zero, the frame is stripped from the ring.
2. Check the frame type and the frame check again, and strip errored frames. If the frame is a fairness frame, send it directly to the fairness algorithm module for processing. If it is an idle frame, strip it directly from the ring.
3. Compare the source address of the frame with that of the local node to judge whether the frame originated at this node. If the addresses are the same, check whether the ringlet is mismatched; if it is, determine whether to enter wrapping protection. If the ring is already in the wrapping protection state, forward the frame; otherwise discard it.
4. Judge whether the destination address of the frame matches that of the local node. If it does, check whether the frame is a control frame: if so, send the frame to the CPU for processing; if not, deliver it to the 802.3 egress for copy processing. If the destination address does not identify the local node, forward the frame to the transit channel.
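The ingress decisions above condense to the following sketch (Python; the field names are illustrative, and real 802.17 processing has more cases than this):

```python
def ingress(frame, local_addr, wrapped=False):
    """Decide what to do with a frame arriving from the ring.
    Returns 'strip', 'cpu', 'drop_to_client' or 'transit'."""
    frame["ttl"] -= 1
    if frame["ttl"] <= 0 or frame.get("errored") or frame["type"] == "idle":
        return "strip"                      # expired, errored or idle frames
    if frame["src"] == local_addr and not wrapped:
        return "strip"                      # our own frame circled the ring
    if frame["dst"] == local_addr:
        # control frames go to the CPU, data frames to the client side
        return "cpu" if frame["type"] == "control" else "drop_to_client"
    return "transit"                        # not for us: keep it on the ring
```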

Note
Drop: After Stack VLAN filtering, the local node can receive unicast or multicast frames transferred to it from other nodes on the ring. Unicast frames are stripped from the ring and sent to the corresponding user ports. Multicast frames are sent to the corresponding user ports and also transited.
Transit: Frames received from the ring at the local node are passed to the Primary Transit Queue (PTQ) or the Secondary Transit Queue (STQ). The data frames in the PTQ and STQ are inserted directly into the transmit port of the same ringlet.
Strip: The local node removes the frame from the ring; it is not forwarded and terminates at this node.

RPR processing on the transmitting side (egress): The frames at the transmitting side include transit data and the data frames and control frames inserted at this node. For an inserted data frame, its destination address and ringlet selection are determined through topology discovery and the routing table; the frame is then placed in the corresponding insertion queue (A, B or C) according to its priority. The key point of the transmit-side processing is queue scheduling. The scheduling priority, from highest to lowest, is:
PTQ over threshold > STQ close to limit threshold > CTL > PTQ > STQ over threshold > A > B > eB > C > STQ
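The dispatch order can be expressed as a simple strict-priority scan (illustrative Python; the queue names follow the sequence above):

```python
DISPATCH_ORDER = [
    "PTQ_over_threshold", "STQ_near_limit", "CTL",
    "PTQ", "STQ_over_threshold", "A", "B", "eB", "C", "STQ",
]

def dispatch(pending):
    """pending maps queue name -> list of waiting frames; return the
    queue to serve next, or None if everything is empty."""
    for queue in DISPATCH_ORDER:
        if pending.get(queue):
            return queue
    return None
```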


4.2.5 RPR Fairness Algorithm


Bandwidth management is an important part of the RPR. Three classes of service are defined for the RPR MAC: A, B and C.
Class A -- This class has an allocated, guaranteed data rate and requires end-to-end low delay and jitter guarantees. It has a bandwidth reservation mechanism and is not controlled by the fairness algorithm. Class A services are usually used to carry voice and video flows.
Class B -- This class has an allocated, guaranteed data rate plus an optional excess rate. The excess rate is neither allocated nor guaranteed; within the guaranteed rate, Class B provides end-to-end delay and jitter guarantees. Within the allocated rate, Class B traffic is handled in the same way as Class A; traffic exceeding the allocated rate is handled in the same way as Class C.
Class C -- This class is scheduled fairly and has no bandwidth guarantee. It is usually used to carry general IP services. Class C services can be considered best-effort services, and they are always restricted by the fairness algorithm.
The fair scheduling function is implemented through the fairness algorithm for the purpose of dynamic bandwidth adjustment and sharing. The RPR fairness algorithm has the following features:

Providing a mechanism to divide the available bandwidth fairly between nodes on the ring

Only applicable to low-priority services and the excess portion of medium-priority services, that is, the Excess Information Rate (EIR) data frames of the medium-priority services

The RPR fairness algorithm controls the bandwidth of the two ringlets separately. That is, there are two fairness protocol instances on an RPR ring, one controlling the bandwidth of each ringlet.


Chapter 4 Theory of RPR

1. Fairness algorithm module block diagram

Fig. 4.2-4 shows the fairness algorithm modules.

Fig. 4.2-4 Fairness Algorithm Module Block Diagram

The functions of the fairness algorithm modules are as follows:

Receiving and processing fairness frames

Calculating the fair allowed rate of the local node

Controlling the fair traffic rate with the shaper

Determining the fair rate to be propagated

Generating and sending fairness control messages

2. Bandwidth adjustment technique

The RPR fairness algorithm initiates bandwidth adjustment by detecting congestion. When congestion occurs at a node, the node dispatches a fair rate, calculated from its own add_rate and the normalized weight, to the upstream node on the reverse ringlet. After receiving the fair rate, the upstream node adjusts its sending rate so that it does not exceed the fair rate. A node receiving a fair rate may respond in two ways:

If the node itself is congested, it selects the minimum of its own fair rate and the received fair rate, and then dispatches that rate to its upstream node.

If the node is not congested, it forwards the received fair rate to its upstream node.

3. Example of bandwidth adjustment based on the fairness algorithm

Fig. 4.2-5 Fairness Algorithm Bandwidth Adjustment Example

As shown in Fig. 4.2-5, the non-reserved bandwidth of the RPR ring (ring bandwidth minus the bandwidth reserved for class A0 services) is 500 Mbit/s; that is, the maximum ring bandwidth that can be controlled by the fairness algorithm is 500 Mbit/s. There are convergence services among sites 1, 2 and 3: services of site 1 and site 2 are converged at site 3.
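The fair-rate propagation rule described above can be modeled in a few lines. This is an informal sketch, not the IEEE 802.17 algorithm; the function and parameter names are assumptions:

```python
# Illustrative sketch of fair-rate propagation on the reverse ringlet.
# Names and the local fair-rate formula are assumptions for illustration.

def propagate_fair_rate(received_rate, congested, local_add_rate, weight):
    """Rate (e.g. in Mbit/s) that a node advertises to its upstream node."""
    if congested:
        # A congested node computes its own fair rate from its add_rate
        # and normalized weight, then advertises the minimum of that rate
        # and the rate received from downstream.
        local_fair_rate = local_add_rate / weight
        return min(local_fair_rate, received_rate)
    # An uncongested node simply forwards the received rate upstream.
    return received_rate

print(propagate_fair_rate(100.0, congested=True, local_add_rate=120.0, weight=2.0))   # 60.0
print(propagate_fair_rate(100.0, congested=False, local_add_rate=120.0, weight=2.0))  # 100.0
```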

4.2.6 RPR Topology Discovery


1. Purpose of topology discovery

The purpose of topology discovery is to provide the decision basis for ringlet selection, the fairness algorithm and protection by letting each site know the whole structure of the ring, the hop count to every other site, the capacity of each site on the ring, and so on. RPR topology discovery is a periodical activity. It can also be initiated by a node that wants to know the topology; that is, a node can generate a topology information frame when necessary, for example when the node has just been added to the RPR ring, when it receives protection switching request information, or when it detects a fiber link error. The topology information generation cycle can be configured to a value between 50 ms and 10 s, with a minimum resolution of 50 ms; the default configuration is 100 ms.


2. Process of topology discovery

In the case of ring initiation, access of new nodes, ring protection switching or startup of the RPR auto-reorganization mode, a node generates a topology discovery packet containing its own MAC address and state information. When other nodes receive the packet, they insert their own MAC addresses and state information into it and forward it to their downstream nodes. In this way, every node learns the number of nodes on the ring and their queue information, and then forms the topology map. RPR topology discovery can handle various topology changes, such as adding or deleting nodes on the ring and broken links. Every node can discover the topology automatically. The process is similar to the link-state behavior of Open Shortest Path First (OSPF): information is transferred in corresponding control messages, and defined triggers cause a node to dispatch them. Fig. 4.2-6 illustrates the basic procedure of topology discovery.

Fig. 4.2-6 Topology Discovery Procedures

As shown in Fig. 4.2-6, the link between S7 and S1 of the closed ring has been damaged. Nodes S1 and S7 broadcast topology (TP) frames to indicate the network boundary. Such TP frames trigger all nodes that support the Steering protection mode to change or keep the direction in which data is transferred (the principle is to avoid the failed path). These frames do not affect nodes that support the Wrapping protection mode. When the new topology becomes stable, each node checks the topology with its adjacent nodes. If the topology is correct, the topology database of each node holds an open-ring topology structure. Although RPR topology discovery is a periodical activity, it can also be initiated by any node at any time; that is, any node on the ring can generate a topology frame when necessary.
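The accumulation of node records in a topology frame can be sketched as follows. The data structures are hypothetical and do not reflect the 802.17 frame format; the sketch only shows how each node appends its own (MAC, state) record as the frame travels once around the ring:

```python
# Minimal sketch of topology-frame accumulation: each node inserts its
# own (MAC address, state) record and forwards the frame downstream
# until the frame returns to its originator.

def discover_topology(ring, origin_index):
    """ring: list of (mac, state) tuples in downstream order around the ring."""
    n = len(ring)
    frame = [ring[origin_index]]        # originator writes its own record
    i = (origin_index + 1) % n
    while i != origin_index:            # frame travels once around the ring
        frame.append(ring[i])           # each node inserts its record
        i = (i + 1) % n
    return frame                        # originator's view of the topology

ring = [("S1", "up"), ("S2", "up"), ("S3", "up"), ("S4", "up")]
print([mac for mac, _ in discover_topology(ring, 1)])  # ['S2', 'S3', 'S4', 'S1']
```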

4.2.7 RPR Protection


The RPR has a complete protection mechanism. The RPR MAC layer supports Steering protection and Wrapping protection.

1. Steering protection

When the sending node transfers unicast services, steering protection selects ringlet 0 or ringlet 1 according to the actual situation; the ringlet is chosen so as to avoid the failed path. Multicast services are sent on ringlet 0 and ringlet 1 at the same time.

Fig. 4.2-7

RPR Steering Protection of Unicast Services

Normally, S2 sends data to S6 along ringlet 0. When a line fails, S2 sends data to S6 along ringlet 1 to avoid the failed path.

2. Wrapping protection

Fig. 4.2-8 RPR Wrapping Protection

Under the wrapping protection mode, the sending node keeps using the original ringlet without considering avoidance of the failed path. The protection action occurs at the boundary of the failure, as shown in Fig. 4.2-8. In the normal state, the sending node S2 transmits data to S6 along ringlet 0. If the fiber between S3 and S4 is broken, S2 still sends data to S3 along ringlet 0; node S3 then wraps the data onto ringlet 1 and sends it toward S6. During this process the data packet is not stripped, which avoids frames arriving out of sequence during protection; only when the data reaches S6 can it be stripped from the ring. When the topology becomes stable, re-steering is performed to optimize the path, that is, data is sent through the shortest path S2 > S1 > S7 > S6.

3. Implementation of RPR protection

The key to RPR protection is knowing which path has problems. The topology structure must be known at any moment, and processing is then performed according to the corresponding configuration. The RPR network supervises the network topology continuously while it transfers data. Once any topology change is found, it carries out steering protection or wrapping protection according to the corresponding configuration.
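The difference between the two protection modes in Figs. 4.2-7 and 4.2-8 can be illustrated with a small ring model. The node names follow the figures; the code itself is a hypothetical sketch, not protection-protocol logic:

```python
# Illustrative comparison of steering vs. wrapping for the scenario in
# Fig. 4.2-8: ring S1..S7, fiber S3-S4 broken, source S2, destination S6.

RING = ["S1", "S2", "S3", "S4", "S5", "S6", "S7"]

def ringlet_path(src, dst, step):
    """Walk the ring from src to dst; step=+1 models ringlet 0, -1 ringlet 1."""
    i = RING.index(src)
    path = [src]
    while path[-1] != dst:
        i = (i + step) % len(RING)
        path.append(RING[i])
    return path

def wrapping_path(src, dst, broken):
    """Stay on ringlet 0; reverse direction (wrap) at the failed span."""
    path, i, step = [src], RING.index(src), +1
    while path[-1] != dst:
        nxt = RING[(i + step) % len(RING)]
        if frozenset((path[-1], nxt)) == broken:
            step = -step            # wrap at the failure boundary
            continue
        i = (i + step) % len(RING)
        path.append(nxt)
    return path

# Steering: the source itself picks ringlet 1 to avoid the failed path.
print(ringlet_path("S2", "S6", -1))                       # ['S2', 'S1', 'S7', 'S6']
# Wrapping: traffic runs to the break at S3, then loops back the other way.
print(wrapping_path("S2", "S6", frozenset(("S3", "S4"))))
```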


For MSTP devices with embedded RPR, users can specify whether to adopt both RPR MAC layer protection and SDH physical layer protection. If both are enabled, the RPR protection switching can be delayed to coordinate switching between the layers and avoid the two kinds of protection switching overlapping.

4.3 RPR Implementation Scheme


4.3.1 Three Implementation Schemes of RPR
The implementation schemes of RPR can be divided into three classes, and for each scheme there are vendors providing corresponding products.

Independent Layer 2-based RPR scheme

This scheme is applicable to the access layer and convergence layer of IP metro area networks. Some vendors provide broadband multi-service solutions that optimize IP and support TDM services by combining the Layer 2-based scheme with MPLS, synchronization, Coarse Wavelength Division Multiplexing (CWDM) and TV video broadcast technologies. In addition, Layer 2-based RPR products provided by some vendors have strong networking capability: they support linear, tangent and dual-ring interconnection topologies as well as Dual Node Interconnection (DNI) protection. RPR products with these enhanced functions can also be used at the core layer of the IP metro area network in small cities. However, the independent Layer 2-based RPR scheme is seldom used for the moment, because the construction cost of a single-RPR-technology scheme is very high.

Router-based single-board RPR scheme

This scheme is mainly applicable to the core layer and convergence layer of IP metro area networks. Most vendors implement RPR functions by adding boards to existing routers. The router-based scheme can be regarded as an optimization of existing router networking: it improves protection performance greatly, achieves the 50 ms ring protection function, and saves fiber resources.

MSTP-based RPR scheme

The MSTP-based RPR scheme separates an independent path from the MSTP ring bandwidth to support the RPR technology. Compared with traditional SDH, MSTP adopts Layer 2 switching to implement Ethernet service bandwidth sharing, completes the mapping from Ethernet frames to SDH VCs through GFP encapsulation, and improves the flexibility and reliability of virtual container bandwidth allocation through virtual concatenation and LCAS. But because of the inherent shortcomings of Ethernet in ring networks, many vendors are considering adopting the RPR technology in the new-generation MSTP to provide integrated solutions supporting data services.

At present, TDM services occupy the dominant position, and the MSTP-based RPR scheme will be the best multi-service transport platform; however, the commercial application of corresponding products still needs more time. When data services replace TDM services as the dominant ones, the independent Layer 2-based RPR scheme will be the best multi-service transport platform, and there are already some mature, widely used products corresponding to this scheme. Because there are always data services to be processed in IP metro area networks, it can be expected that the independent Layer 2-based RPR scheme and the router-based RPR scheme will be widely used in IP metro area network construction as good optimization solutions.


4.3.2 System Architecture of RPR-Embedded MSTP

Fig. 4.3-1

Architecture of RPR-Embedded MSTP Ring and Nodes

As shown in Fig. 4.3-1, the node architecture basically follows the RPR protocol reference model. In the RPR protocol reference model, RPR is located at the data link layer and includes the logical link control sublayer, the MAC control sublayer and the MAC data channel sublayer. The logical link control sublayer transfers data through the MAC service interface to one or more peer logical link control sublayers. The MAC data channel sublayer performs access control and data transmission to and from a particular ringlet. RPR MAC frames are exchanged between the MAC control sublayer and the MAC data channel sublayer.


5 Theory of MPLS
Key points:

Architecture of MPLS

Basic working mode of MPLS

Advantages of MPLS

5.1 Introduction to MPLS


With the rapid development of the Internet, traditional routers became a bottleneck due to their inherent limitations. ITU-T had accepted the ATM technology as the final solution for Broadband Integrated Services Digital Networks (B-ISDN), and developed countries implemented pilot network plans and commercial service plans. Since the mid-1990s, the ATM technology has been adopted in the construction of most Internet backbone networks and high-speed LANs, and IP over ATM has been one of the most active fields across the telecommunication and computer industries in recent years. Multiple technologies appeared one after another, such as overlaid Classical IP over ATM (CIPOA), LAN Emulation (LANE), Multiple Protocol over ATM, integrated IP switching and label switching. Multiple Protocol Label Switching (MPLS) provides a better solution for IP over ATM on the basis of combining the technologies mentioned above. The Internet Engineering Task Force (IETF) issued a series of recommendation drafts about MPLS, and several main drafts were released formally at the meeting in March 1999; RFC numbers were applied for and have been approved.

Early MPLS mainly works with the IP protocol (IPv4/IPv6) at the network layer; however, its core technology is also applicable to other network layer protocols such as IPX, AppleTalk, DECnet and CLNP. Although MPLS is not limited to a specific link layer, the main work focuses on ATM. With the development of IP networks, especially gigabit-rate routing switches, the MPLS technology must be applied and developed while the IP/SDH/optics mode evolves directly toward IP/optics (DWDM). From the layered point of view, there is a link layer between IP and DWDM, which is used for transfer, switching and transmission. Only two kinds of link layer transport technology are applicable to IP packets now: SDH with the Synchronous Transfer Mode (STM), and cells with the Asynchronous Transfer Mode (ATM). MPLS is applicable to both SDH and ATM, and it can be developed in the future as the technology for any particular link layer. MPLS also supports network management, traffic engineering, QoS and CoS. With MPLS, IP services can be transferred over optics directly (other corresponding modes can also be used).

Actually, MPLS is not only a technology applied in IP over ATM, but also an interlayer network technology between Layer 3 and Layer 2, and it is researched and developed as an architecture. Currently, MPLS can be used in ATM networks and FR networks. Furthermore, it has become the focus as the preferred technology in the research and development of IP over optics. Some people even say that MPLS is the terminator of ATM. In any case, MPLS and ATM cannot replace each other, for their function positioning does not overlap. ATM implements the ATM cell layer and AAL layer of the four-layer reference model of B-ISDN, corresponding to the functions of the second layer (data link layer) in the ISO-OSI seven-layer reference model. MPLS implements the functions of a comparatively independent interlayer between the third layer (network layer) and the second layer (data link layer) of the seven-layer reference model. It does not have the complete functions of the data link layer; therefore, MPLS can implement the actual transfer function of the data link layer only by depending on a particular link layer, such as the ATM cell layer or the FR/SDH layer of frame relay. Fig. 5.1-1 shows the function positioning of MPLS.


Fig. 5.1-1

Function Positioning of MPLS, IP Network and ATM Network Layers

As shown in Fig. 5.1-1, the MPLS interlayer greatly simplifies and unifies the transfer between L2 and L3. Without it, each IP packet in an IP network can reach the particular link layer only after being processed by multiple intermediate protocols; and although there is only one protocol suite at the data link layer of an ATM network, the network layer needs multiple interworking protocols for every kind of service from various networks. The MPLS interlayer is absolutely necessary for IP over optics: a link layer is needed between IP at the network layer and optics at the physical layer. The MPLS interlayer satisfies the requirements of the existing FR/SDH and ATM link layers, and in the future it can also adapt to any new link layer technology.

With its powerful capability, MPLS can implement many functions and performance features that are difficult to realize in common route networks, such as explicit routes, traffic engineering, QoS and CoS. Moreover, it solves the problems caused by the restrictions of IP over ATM and IP over FR, such as flexibility, generality and SVC contention. Although the requirements of various users and services on the Internet, as well as the classification into Forwarding Equivalence Classes (FECs), become more and more complicated, all of them are processed only once, on entering the MPLS domain. Inside the domain, the routers that perform label swapping and forwarding are not affected, and the highest transport-switching capability required still scales with O(n) traffic. Therefore, the routing protocols and interconnection architecture of MPLS have great flexibility, and MPLS can guarantee the security and long-term reliability of MPLS networks.

5.2 Architecture of MPLS


The basic purpose of MPLS is to integrate the label switching and forwarding technology with the network layer routing technology. The core lies in the meaning of labels, label forwarding, and label distribution. The MPLS terms are as follows:

Forwarding Equivalence Class (FEC): a group of IP packets that are processed in the same way; they can also be regarded as following the same path, or receiving the same forwarding treatment.

Label (L): a short, fixed-length identifier used to identify the FEC of a forwarded packet. It is valid only locally.

Label Switch Path (LSP): at a given level, a path through one or more LSRs corresponding to a particular FEC to which a group of IP packets is mapped.

Label Switch Router (LSR): a device with MPLS node functions that can also forward IP packets natively at Layer 3.

MPLS Domain: a contiguous set of nodes running the MPLS protocol, such as an autonomous system or an LSR management domain.

MPLS Node: a node running the MPLS protocol. It can be discovered by, establish adjacency with, and converse with its peers through the MPLS control protocol, and it runs one or more routing protocols. An MPLS node has the label switching and forwarding function and can also process IP packets natively at Layer 3.

MPLS Edge Node: an MPLS node that connects the MPLS domain to a node outside the domain which cannot perform MPLS.

MPLS Ingress: an MPLS edge node that processes IP packet traffic entering the MPLS domain.

MPLS Egress: an MPLS edge node that processes IP packet traffic leaving the MPLS domain.


5.2.1 Basic Working Mode of MPLS


1. Label switching and forwarding process

In a common route network, an IP packet is forwarded hop by hop along the routers. Each router must independently recognize the IP packet header, analyze it and then run the routing algorithm to select the next hop. Actually, the IP packet header contains more information than is needed to select the next hop. The selection of the next hop can be summarized as the combination of two functions: the first assigns the IP packet to be forwarded to a Forwarding Equivalence Class (FEC) according to its forwarding direction; the second maps each FEC to a next hop.

MPLS specifies that when an IP packet enters the MPLS domain, it is mapped once to a particular FEC, which is determined from the destination address of the IP packet, and a label switch path (LSP) mapped to that FEC is established between the MPLS ingress and egress. The FEC is encoded as a short, fixed-length label (L) on each pair of adjacent label switch routers (LSRs) along the LSP and on the link between them. The label is forwarded together with the IP packet; an IP packet carrying a label is called a labeled packet. At each router along the subsequent hops it is unnecessary to recognize and analyze the IP packet header: the label in the packet is simply used as a pointer that yields a new label and the output port toward the next hop. The labeled packet becomes a new one by replacing the old label in the packet with the new label, and the output port then sends the packet to the next hop.

The label switching and forwarding process is similar to forwarding based on the Data Link Connection Identifier (DLCI) in FR networks and forwarding based on VPI/VCI in ATM networks. The difference is that the DLCI in FR networks identifies a link and the VPI/VCI in ATM networks identifies a cell stream, while the FEC in MPLS is more general than a link or a cell: the FEC is a concept integrated and abstracted from various independent objects such as data flows, links and ports. Because MPLS forwarding is based on labels, packets can be forwarded by switches. Generally, switches cannot forward IP packets directly, because they cannot recognize, analyze and process IP packet headers, or cannot do so at the required speed.
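The per-LSR lookup described above amounts to a direct table indexed by the incoming label: no IP header parsing is needed inside the domain. A minimal sketch, where the table contents are hypothetical:

```python
# Hedged sketch of per-LSR label swapping: the incoming label indexes a
# forwarding table that yields the new label and the output port.
# Table layout and contents are illustrative only.

def swap_label(ilm, in_label):
    """ilm: incoming label map {in_label: (out_label, out_port)}."""
    out_label, out_port = ilm[in_label]
    return out_label, out_port

# A hypothetical table on one inner LSR for a single FEC.
ilm_r1 = {"LA": ("L1", 2)}
print(swap_label(ilm_r1, "LA"))  # ('L1', 2)
```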

Fig. 5.2-1 MPLS Running Diagram


2. MPLS operation

MPLS runs inside an MPLS domain, and it can also run between MPLS domains at the same time; MPLS is allowed to run in mixed networks of MPLS and non-MPLS nodes. Fig. 5.2-1 a) shows the configuration of an MPLS domain. The edge node close to users, the edge label switch router (ELSR), which is connected to extra-domain nodes, has complex processing functions. The inner node of the domain, the inner label switch router (ILSR), which is not connected to extra-domain nodes, performs the label switching and forwarding functions as simply as possible. MPLS operation can be divided into two phases: the first is the generation of the routing table, and the second is the forwarding of IP packets. In actual operation, the two phases are carried out alternately.

Phase One: Generation of the routing table

Step 1: Establish topology routes between nodes in the MPLS domain in the same way as in an autonomous system of a common route network, then run the OSPF routing protocol (other routing protocols can also be run at the same time) so that all nodes learn the topology information of the domain. With the participation of the management layer, MPLS can distribute flows evenly across the whole domain and optimize the transmission performance of the network. Between domains, the Border Gateway Protocol (BGP) is run mainly to provide reachability information to adjacent domains and the backbone core network.

Step 2: Run the Label Distribution Protocol (LDP) to establish adjacencies between nodes in the MPLS domain. Classify the FECs according to reachable destination addresses and establish the Label Switch Paths (LSPs). Allocate labels (L) to each FEC along its LSP and generate the forwarding routing table on each Label Switch Router (LSR).

Step 3: Maintain and update the routing tables.

Phase Two: Forwarding IP packets in the MPLS domain

Step 1: When an IP packet enters the edge node of the MPLS domain, the ELSR recognizes the IP packet header, looks up the corresponding FEC and the LSP to which it is mapped, and then inserts the label into the packet. As a labeled packet, it is output to the specified port.

Step 2: The next-hop ILSR in the MPLS domain receives the labeled packet from its input port. Taking the label in the packet as a pointer, the ILSR looks up its forwarding routing table, takes out the new label, and replaces the old label in the packet with it. The newly labeled packet is output toward the next hop from the specified port. When the IP packet arrives at the hop before the MPLS egress (the second hop counting backwards), the label in the packet is not switched; it is simply popped out of the packet, and the packet is then forwarded without a label. Since the egress is the output node for the destination address, it is unnecessary to forward the packet according to a label there: the egress reads the packet header directly and forwards the packet to the final destination address. This processing mode ensures that each LSR examines and processes the packet only once during the whole MPLS forwarding procedure, and facilitates the layered division of the forwarding function.

Step 3: After receiving the IP packet without a label, the egress LSR of the MPLS domain reads the packet header and outputs the IP packet from the specified port according to the final destination address.

The explanations of the example in Fig. 5.2-1 are as follows. Terminal I is connected to ELSR A, and terminal II to ELSR B. There is a label switch path LSP (A > R1 > R2 > R4 > R6 > B) from A to B. IP packets from terminal I to terminal II are mapped to the FEC BA. The label allocation for FEC BA along the LSP is:

A > R1: LA

R1 > R2: L1

R2 > R4: L2

R4 > R6: L4

R6 > B: null (no label)


The label allocation is completed in Phase One, which can be supervised by the management layer. The corresponding forwarding routing tables are formed on each LSR, as shown in Fig. 5.2-2.
Fig. 5.2-2 MPLS Forwarding (the forwarding tables of A, R1 and R6 for FEC BA, each listing the incoming label and port and the outgoing label and port; B performs Layer 3 IP routing on the unlabeled packet)

The forwarding of an IP packet from terminal I to terminal II is carried out in three steps:

Step 1: The IP packet sent from terminal I to A is a pure packet without a label. ELSR A reads and analyzes the IP packet header, then looks up the FEC BA to which the packet is mapped. A then reads out the label LA and the output port 1, encapsulates the label LA with the IP packet as a labeled packet, and sends the labeled packet out from its output port 1.

Step 2: The packets forwarded from R1 toward B are labeled. The next hop of A, ILSR R1, receives the labeled packet from its input port 1 and reads the label LA as a pointer. R1 finds the new label L1 and output port 2 in its forwarding routing table, replaces the label LA with L1, and sends the relabeled packet out from its output port 2. The processing on ILSRs R2 and R4 is the same as on R1. When the IP packet reaches R6, the old label L4 is popped out and no new label is inserted; R6 sends the unlabeled IP packet out from its output port 1.

Step 3: The packet forwarded from B to terminal II is a pure packet. After receiving the unlabeled IP packet from its input port 1, ELSR B reads the packet header directly and sends the IP packet to terminal II according to the destination address.
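The three steps above can be traced with a small simulation. The table layout is purely illustrative, using the labels from the example (LA, L1, L2, L4) and modeling the label pop at the penultimate hop R6:

```python
# Sketch of the end-to-end forwarding in Fig. 5.2-2. "push"/"swap"/"pop"
# mark the label operation at each node; the dict is an illustration,
# not a real router data structure.

LSP = {
    "A":  ("push", "LA", "R1"),   # ingress pushes LA
    "R1": ("swap", "L1", "R2"),
    "R2": ("swap", "L2", "R4"),
    "R4": ("swap", "L4", "R6"),
    "R6": ("pop",  None, "B"),    # penultimate hop pops the label
}

def forward(node):
    """Return (label after this node, next hop)."""
    action, out_label, next_hop = LSP[node]
    if action == "pop":
        return None, next_hop     # unlabeled packet goes to the egress
    return out_label, next_hop

label, node = None, "A"
path = [node]
while node != "B":
    label, node = forward(node)
    path.append(node)
print(path)   # ['A', 'R1', 'R2', 'R4', 'R6', 'B']
print(label)  # None -- B routes on the IP header directly
```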

5.2.2 Advantages of MPLS


1. Compared with common route networks, MPLS has the following advantages:

Simplified forwarding procedure

MPLS forwards IP packets directly according to labels, while route networks forward IP packets through the longest-match lookup algorithm. Obviously, the MPLS forwarding mechanism is simpler, which means MPLS can implement routing and forwarding more efficiently at lower cost.

High-efficiency explicit routes

An explicit route is a path, specified by the source host, that leads to the destination address across the Internet; it is also called a source route. It is a powerful technology that can be used for various purposes. In common route networks the source route is used for network testing, and IP packets carrying pure data are forbidden to carry the whole explicit route information. In MPLS, the LSP is allowed to carry the whole explicit route information when it is established, so it is unnecessary for each IP packet to carry it. That means MPLS can employ explicit routes in practice and take full advantage of their advanced characteristics.

Traffic engineering

Traffic engineering is the process of selecting routes for traffic flows. It is used to balance the traffic load on the various links, routers and switches in the network according to a given rule. Traffic engineering is most important when there are multiple parallel or alternative paths available between pairs of nodes in the network. In recent years, the rapid development of the Internet, especially the increasing demand for bandwidth, has forced some core networks to adapt themselves to networks with more and more alternative paths; therefore, traffic engineering becomes ever more important. Today, IP over ATM is implemented through Permanent Virtual Circuits (PVCs), which are usually configured manually, so the typical mode of traffic engineering in IP over ATM networks is manual allocation. Traffic engineering is difficult to carry out in common route networks: load balancing can be achieved to a certain extent by adjusting the metrics of links in the network, but using this method to meet traffic engineering requirements is restricted in many aspects. Because there are many alternative paths between nodes in the network, it is difficult to balance traffic flows by adjusting the per-hop routing metrics of data packets. MPLS provides a direct measurement and control mechanism for each pair of input and output nodes: it allows the data flows from a particular input node to a particular output node to be labeled individually. Besides, MPLS allows establishing high-efficiency explicit-route LSPs, which ensures that a particular data flow can be forwarded directly through the optimal path. The most difficult part of implementing traffic engineering is the selection of each LSP route. MPLS can handle it by configuring routes manually, or by recalculating routes with routing protocols according to the traffic load in the network and then allocating the traffic flows.

Quality of Service (QoS)

QoS routing refers to a routing method used to select the route for a particular data flow; the selected route must satisfy the QoS requirements of that flow. In many cases, QoS routing adopts explicit routes, since the most important item of QoS routing is bandwidth guarantee, the same as in traffic engineering.
Various service classes The demand of some users for more special services on the Internet increases day by day. For example, the source address, destination address, input interface and other characteristics of the IP packet being forwarded should be known for services provided by some Internet Service Providers (ISPs). It is
73

Confidential

impossible for a medium ISP to get all information needed from routers on the network. Besides, it is difficult to get some information on routers, such as the information of input interface, except on the ingress node on the network. The best method to configure the COS and QoS is mapping IP packets to the most proper COS and QoS class on the network and ingress node, and identifying these IP packets with some mode. MPLS can provide an effective method to identify any special IP packet related to COS and QoS. The mappling from the IP packet to a special FEC is completed on the ingress node in MPLS domain. The MPLS makes it easy to mapping IP packets to proper COS and QoS classes, which is difficult for other modes. Function division MPLS must support the convergence and forwarding of data flow. The label has the granularity characteristic. It can identify one original user data flow at least, while identify one data flow converged from all data flows from switches or routers at most. In this way, the route processing functions can be classified and allocated to different network units. For example, the edge node close to users in the network is configured with complex processing functions; while the configuration of the core part in the network should be as simple as possible, adopting the forwarding mode with pure label. Unified forwarding mode for different service types MPLS can provide various service types on the same network with the unified forwarding mode, such as IP services, FR services, ATM services, Tunning Protocol (TP) services and VPN services etc. 2. Compared with ATM networks and FR networks, the MPLS has the following advantages: Flexibiligy of routing protocols In the core network of IP over ATM, n2 logical links should be established while connecting routers on the peer layer. In MPLS, the necessary communication of each router on the peer layer decreases to that of the router connected to it directly. 
The maximum capacity required for transport and switching processing therefore scales as O(n) with the traffic flows in the whole network.
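The ingress mapping to an FEC described above can be illustrated with a short Python sketch. This is a minimal illustration, not a real MPLS implementation; the prefixes, service classes and label values are invented for the example.

```python
# Minimal sketch of ingress FEC classification (all values hypothetical).
# An ingress LER inspects the IP packet once, maps it to a Forwarding
# Equivalence Class (FEC) and binds a label; core LSRs never repeat
# this classification.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass(frozen=True)
class Packet:
    src: str            # source address
    dst: str            # destination address
    in_interface: str   # input interface, known only at the ingress

# FEC rules, checked in order: (destination prefix, service class, label).
FEC_TABLE = [
    (ip_network("10.1.0.0/16"), "premium", 100),
    (ip_network("10.0.0.0/8"), "best-effort", 200),
]

def classify(pkt: Packet) -> int:
    """Map a packet to an FEC at the ingress node and return its label."""
    for prefix, _cos, label in FEC_TABLE:
        if ip_address(pkt.dst) in prefix:
            return label
    raise LookupError("no FEC matches this packet")

print(classify(Packet("192.0.2.1", "10.1.2.3", "ge-0/0/1")))  # -> 100
print(classify(Packet("192.0.2.1", "10.9.9.9", "ge-0/0/1")))  # -> 200
```

Once the label is attached, COS and QoS treatment inside the domain follows from the label alone, so the core nodes never need the input-interface information that only the ingress can see.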

Chapter 5 Theory of MPLS

General operations on packet and cell media
MPLS adopts a unified method for routing and forwarding on both packet and cell media, which allows unified methods to be used for traffic engineering, QoS, COS and other performance and function requirements. This means that the same label can be used over ATM, FR and other link-layer media.
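A rough Python sketch of this unified forwarding operation (label values and node names are hypothetical, and the per-node tables are collapsed into a single dictionary for brevity): every LSR performs the same swap-and-forward step, whether the label is carried in an ATM VPI/VCI, an FR DLCI or a shim header.

```python
# Label forwarding information base: incoming label -> (outgoing label, next hop).
# None as the outgoing label means the egress pops the label and delivers
# the packet as plain IP.
LFIB = {
    100: (210, "LSR-B"),
    210: (350, "LSR-C"),
    350: (None, "egress"),
}

def forward(label):
    """Swap labels hop by hop until the egress pops the last one."""
    path = []
    while label is not None:
        out_label, next_hop = LFIB[label]
        path.append(next_hop)
        label = out_label
    return path

print(forward(100))  # -> ['LSR-B', 'LSR-C', 'egress']
```

Because this loop never inspects the IP header, the same logic can be reused unchanged over any link-layer medium that can carry the label.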

Easy management
The management of MPLS networks is expected to be simplified by using general routing protocols and label allocation methods across the various media.

Elimination of routing storms
MPLS eliminates the need to use the Next-Hop Resolution Protocol (NHRP) and to establish Switched Virtual Circuits (SVCs) directly on demand, and therefore solves the SVC contention problem caused by route updates. It also solves the delay problem related to the direct establishment of SVCs.


Appendix A Abbreviations
Abbreviation   Full Name
AFR            Absolute Frequency Reference
AFEC           Advanced FEC
AIS            Alarm Indication Signal
APR            Automatic Power Reduction
APS            Automatic Protection Switching
APSD           Automatic Power Shutdown
APSF           Automatic Protection Switching for Fast Ethernet
ASE            Amplified Spontaneous Emission
AWG            Array Waveguide Grating
BER            Bit Error Ratio
BLSR           Bidirectional Line Switched Ring
BSHR           Bidirectional Self-Healing Ring
CDR            Clock and Data Recovery
CMI            Code Mark Inversion
CODEC          Coder/Decoder
CPU            Central Processing Unit
CRC            Cyclic Redundancy Check
DBMS           Database Management System
DCC            Data Communications Channel
DCF            Dispersion Compensation Fiber
DCG            Dispersion Compensation Grating
DCN            Data Communications Network
DCM            Dispersion Compensation Module
DDI            Double Defect Indication
DFB-LD         Distributed Feedback Laser Diode
DSF            Dispersion Shifted Fiber
DGD            Differential Group Delay
DTMF           Dual Tone Multi-Frequency
DWDM           Dense Wavelength Division Multiplexing
DXC            Digital Cross-connect
EAM            Electrical Absorption Modulation
ECC            Embedded Control Channel
EDFA           Erbium Doped Fiber Amplifier


Abbreviation   Full Name
EFEC           Enhanced FEC
EX             Extinction Ratio
FDI            Forward Defect Indication
FEC            Forward Error Correction
FPDC           Fiber Passive Dispersion Compensator
FWM            Four Wave Mixing
GbE            Gigabit Ethernet
GUI            Graphical User Interface
IP             Internet Protocol
LD             Laser Diode
MDI            Multiple Document Interface
MCU            Management and Control Unit
MOADM          Metro Optical Add/Drop Multiplexer Equipment
MBOTU          Sub-rack Backplane for OTU
MQW            Multiple Quantum Well
MSP            Multiplex Section Protection
MST            Multiplex Section Termination
NCP            Net Control Processor
NDSF           Non-Dispersion-Shifted Fiber
NE             Network Element
NNI            Network Node Interface
NMCC           Network Management Control Center
NRZ            Non Return to Zero
NT             Network Termination
NZDSF          Non-Zero Dispersion-Shifted Fiber
OA             Optical Amplifier
OADM           Optical Add/Drop Multiplexer
OBA            Optical Booster Amplifier
OCh            Optical Channel
ODF            Optical Fiber Distribution Frame
ODU            Optical Demultiplexer Unit
OGMD           Optical Group Mux/Demux Board
OHP            Order Wire
OHPF           Overhead Processing Board for Fast Ethernet
OLA            Optical Line Amplifier
OLT            Optical Line Termination
OMU            Optical Multiplexer Unit
ONU            Optical Network Unit
OP             Optical Protection Unit


Abbreviation   Full Name
OPA            Optical Pre-Amplifier
OPM            Optical Performance Monitor
OPMSN          Optical Protection for Mux Section, without resonance-switch prevention
OPMSS          Optical Protection for Mux Section, with resonance-switch prevention
OSC            Optical Supervisory Channel
OSCF           Optical Supervisory Channel for Fast Ethernet
OSNR           Optical Signal-to-Noise Ratio
OTM            Optical Terminal Multiplexer
OTN            Optical Transport Network
OTU            Optical Transponder Unit
OXC            Optical Cross-connect
PDC            Passive Dispersion Compensator
PMD            Polarization Mode Dispersion
PDL            Polarization Dependent Loss
RZ             Return to Zero
SBS            Stimulated Brillouin Scattering
SDH            Synchronous Digital Hierarchy
SDM            Supervision Add/Drop Multiplexing Board
SEF            Severely Errored Frame
SES            Severely Errored Second
SFP            Small Form-factor Pluggable
SLIC           Subscriber Line Interface Circuit
SMCC           Sub-network Management Control Center
SMT            Surface Mount Technology
SNMP           Simple Network Management Protocol
SPM            Self-Phase Modulation
SRS            Stimulated Raman Scattering
STM            Synchronous Transport Module
SWE            Electrical Switching Board
TCP            Transmission Control Protocol
TFF            Thin Film Filter
TMN            Telecommunications Management Network
VOA            Variable Optical Attenuator
WDM            Wavelength Division Multiplexing
XPM            Cross-Phase Modulation

