Confidential

Contents

1 Evolution of MSTP
  1.1 Emergence of MSTP
  1.2 First Generation MSTP
    1.2.1 Virtual Concatenation Technology
    1.2.2 Link Capacity Adjustment Scheme
  1.3 Second Generation MSTP
    1.3.1 Resilient Packet Ring Technology
    1.3.2 Multiple Protocol Label Switching Technology
2 Theory of EOS
  2.1 Ethernet Fundamentals
    2.1.1 Ethernet Frame Format
    2.1.2 MAC Address
  2.2 Ethernet Switching Principle
    2.2.1 Operation Principle of Transparent Bridge
    2.2.2 MAC Address Learning
    2.2.3 Transfer and Filtering Mechanism
    2.2.4 Loop Avoidance: Spanning Tree Protocol
    2.2.5 VLAN
  2.3 EOS Fundamentals
    2.3.1 What is EOS
    2.3.2 Function Model of EOS
    2.3.3 Ethernet Frame Encapsulation
    2.3.4 Contiguous Concatenation and Virtual Concatenation
3 Theory of ATM
  3.1 ATM Fundamentals
    3.1.1 Generation Background of ATM Technology
    3.1.2 ATM Features
    3.1.3 ATM Cell Structure
    3.1.4 Fundamentals of ATM Switching
    3.1.5 ATM Statistics Multiplexing
    3.1.6 ATM Protocol Reference Model
    3.1.7 ATM Service Type
    3.1.8 ATM Communication QoS
  3.2 ATM Processing in MSTP Devices
    3.2.1 Background of ATM Application on MSTP
    3.2.2 Key Technology of ATM Service Processing
    3.2.3 ATM Layer Processing Function of MSTP Devices
4 Theory of RPR
  4.1 Overview of RPR Technology
    4.1.1 Emergence of RPR Technology
    4.1.2 Basic Concepts and Features of RPR Technology
  4.2 Fundamentals of RPR Technology
    4.2.1 RPR Ring Network Architecture
    4.2.2 RPT Technology
    4.2.3 RPR Network Hierarchy Model
    4.2.4 RPR MAC Data Frame Processing
    4.2.5 RPR Fairness Algorithm
    4.2.6 RPR Topology Discovery
    4.2.7 RPR Protection
  4.3 RPR Implementation Scheme
    4.3.1 Three Implementation Schemes of RPR
    4.3.2 System Architecture of RPR-Embedded MSTP
5 Theory of MPLS
  5.1 Introduction to MPLS
  5.2 Architecture of MPLS
    5.2.1 Basic Working Mode of MPLS
    5.2.2 Advantages of MPLS
Appendix A Abbreviations
1 Evolution of MSTP
Key points
- Evolution of the MSTP technology
- Differences between MSTP and the traditional SDH technology
- Current state of the MSTP technology
Dual-ring structure: Two physical paths between every two adjacent nodes guarantee the high reliability of networks.
Ring bandwidth control and Spatial Reuse Protocol (SRP)
Unicast data can be transported on different segments of the ring simultaneously, so the effective capacity of the ring increases accordingly. In this way, the bandwidth reduction caused by adding nodes is eased to a certain degree. Moreover, RPR can discover the new network topology and update it automatically when the ring topology changes. With this function, the man-made errors of manual configuration are avoided, which facilitates the management and maintenance of networks.
Dynamic bandwidth allocation and statistical multiplexing
Each node keeps track of the data load passing through it and reports this information to adjacent nodes on the ring. From this information, other nodes can determine how much bandwidth is available from the source node.
To sum up, with the features above, the RPR technology shortens the transmission path of data flows in the ring network, since the maximum route between any two nodes
is only half of the ring. The network topology discovery and update capability is achieved by exchanging topology identification information with an algorithm such as Open Shortest Path First (OSPF). This not only avoids infinite packet loops efficiently, but also improves the self-healing ability of ring networks.
- Providing seamless connections for intranets
- Restricting the spread of VPN routing information, and guaranteeing security by adopting MPLS forwarding only among members of the VPN
- Allowing different customers to use the same VLAN ID by embedding Layer 2 MPLS technology, thus extending the VLAN address space
- Implementing multilevel services within a VPN, and setting up different priorities between VPNs
The introduction of MPLS technology into MSTP provides the label switching function in addition to the MPLS features mentioned above, making the process of adding/removing labels at the edge of IP networks unnecessary. Real point-to-point label switching is implemented by connecting the MSTP equipment directly to core routers that have the label switching function. The evolution of MSTP, from traditional SDH, which cannot carry IP services efficiently, to the first generation MSTP, which is competent to carry IP services, and then to the increasingly robust MSTP supporting RPR and MPLS, has always been driven by practical applications. We can expect that, in the future, more functions and technologies will be incorporated into MSTP.
2 Theory of EOS
Key points
- Ethernet frame structure
- MAC address and address learning
- Transfer and filtering mechanism of Layer 2 switching
- Layer 2 loops, spanning tree and fast spanning tree
- VLAN
Ethernet frame format (frame length from DA through FCS: 64-1518 octets):

Field       Length (octets)
PRE         7
SFD         1
DA          6
SA          6
LEN         2
DATA        46-1500 (including PAD)
PAD         as required
FCS         4

PRE = Preamble, SFD = Start-of-Frame Delimiter, DA = Destination Address, SA = Source Address, LEN = Data Length, FCS = Frame Check Sequence
The Preamble (PRE) is a 7-octet sequence of alternating bits (10101010) used to achieve synchronization between sender and receiver.
Destination Address (DA): The first bit indicates whether the address is an individual address or a group address: 0 identifies an individual address, while 1 identifies a group address. A frame with a group address is transferred to all stations specified by that address; the interface of each such station recognizes the group address and responds to it. If all bits of the destination address are 1, the frame is broadcast to all stations on the network.
The Source Address (SA) indicates where the frame comes from. The Data Length (LEN) field indicates the number of octets in the data field and pad field.
The Data (DATA) field carries the data originated from the upper layer. Pad (PAD) field: the Data field must be no less than 46 octets long. If it is shorter, pad octets are appended so that the actual Data field meets the minimum length.
Frame Check Sequence (FCS): It provides error detection with a 32-bit Cyclic Redundancy Check (CRC) sequence computed over the frame.
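The FCS check can be sketched in a few lines. Ethernet's CRC-32 uses the same polynomial as Python's standard `zlib.crc32`, so the sketch below leans on it; the function names `ethernet_fcs` and `fcs_ok` are illustrative, not from any standard library.

```python
import zlib

def ethernet_fcs(frame: bytes) -> bytes:
    """Compute the 32-bit FCS over the frame body (DA+SA+LEN+DATA+PAD).

    Ethernet's CRC-32 polynomial matches zlib's; the result is
    appended to the frame least-significant byte first.
    """
    return zlib.crc32(frame).to_bytes(4, "little")

def fcs_ok(frame_with_fcs: bytes) -> bool:
    """Receiver-side check: recompute the FCS and compare with the trailer."""
    body, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return ethernet_fcs(body) == fcs
```

A frame whose trailer matches the recomputed CRC passes; flipping any byte of the body makes the check fail, which is exactly the error-detection role the FCS plays at the receiver.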
Vendor code: The first three bytes (24 bits) of the MAC address identify the NIC vendor.
Serial number: The vendor manages the serial numbers of its MAC addresses. The serial number is the last three bytes (24 bits) of the MAC address. If all serial numbers under a vendor code are used up, the vendor must apply for another vendor code.
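The split between vendor code and serial number, and the individual/group bit described earlier, can be illustrated with a small sketch (function names are illustrative; the group bit is the lowest bit of the first octet as transmitted on the wire):

```python
def split_mac(mac: str) -> tuple:
    """Split a MAC address into its vendor code (first 3 bytes)
    and vendor-assigned serial number (last 3 bytes)."""
    octets = mac.replace("-", ":").split(":")
    assert len(octets) == 6
    return ":".join(octets[:3]).upper(), ":".join(octets[3:]).upper()

def is_group_address(mac: str) -> bool:
    """The I/G bit (lowest bit of the first octet) distinguishes
    individual (0) from group (1) addresses."""
    first_octet = int(mac.replace("-", ":").split(":")[0], 16)
    return bool(first_octet & 1)
```

For example, `split_mac("00-60-8c-01-11-11")` yields the vendor code `00:60:8C` and serial `01:11:11`, while a multicast address such as `01:00:5e:00:00:01` has its group bit set.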
[Fig. 2.2-1: a bridge connecting stations A and B on separate segments through its ports]
In Ethernet, this transfer-decision process is called transparent bridging. "Transparent" has two meanings. First, terminal equipment connected to the bridge does not know whether it is connected to a shared medium or to switching equipment; that is, the equipment is transparent to terminal users. Second, the bridge does not change or process the frames transferred through it (except on VLAN trunk lines). The transparent bridge has the following three main functions:
- Address learning
- Transfer and filtering
- Loop avoidance
All three functions are performed in the transparent bridge, and they operate on the network at the same time. Ethernet switches also perform the same three main functions as the transparent bridge.
When the bridge is connected to physical network segments, it examines every frame it detects. After reading the source address of a frame, the bridge associates that address with the receiving port and records the relation in its MAC address table. This completes the MAC address learning process.
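The learning step amounts to maintaining a source-address-to-port map. A minimal sketch (class and method names are illustrative; real switches also age entries out, which is omitted here):

```python
class LearningBridge:
    """Minimal sketch of transparent-bridge MAC learning: associate
    each frame's source address with the port it arrived on."""

    def __init__(self):
        self.mac_table = {}   # MAC address -> port number

    def learn(self, src_mac: str, port: int) -> None:
        # Record (or refresh) the source-address-to-port association.
        self.mac_table[src_mac] = port

    def port_for(self, dst_mac: str):
        """Return the learned port for a destination, or None if unknown."""
        return self.mac_table.get(dst_mac)
```

After the bridge has seen one frame from each station, its table maps every known address to a port, which is the basis for the transfer and filtering decisions described next.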
[Figure: four stations (MAC addresses 0260.8c01.1111 through 0260.8c01.4444) attached to switch ports E0-E3]
2. Transfer of broadcast/multicast frames or frames with unknown MAC addresses
As shown in Fig. 2.2-3, when workstation D sends a data frame, the switch recognizes that the frame is a broadcast or multicast frame, or a frame whose destination MAC address is unknown (that is, the address does not exist in the switch's MAC address table). The switch then floods the network with the frame, transferring it to all ports except the entrance port.
[Fig. 2.2-3: Transfer of broadcast/multicast frames or frames with unknown MAC addresses; the frame is flooded to all ports (E0-E3, stations 0260.8c01.1111 through 0260.8c01.4444) except the receiving port]
Note: If the switch supports multicast functions such as Internet Group Management Protocol (IGMP) snooping, it will not transfer multicast data frames in flooding mode.
3. The processing procedure after the switch receives a data frame at a port is as follows. The switch judges whether the destination MAC address of the frame is a broadcast or multicast address; if so, it performs the flooding operation. If the address is a unicast address identifying a network device, the switch looks it up in the MAC-port table. If the switch cannot find the address in the table, it transfers the frame in flooding mode as well.
If the switch finds the address in the MAC-port table, it transfers the data frame to the port associated with the destination address.
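The decision procedure above can be condensed into one function. This is a sketch under simplifying assumptions (colon-separated MAC strings, integer port numbers; the function name is illustrative): group addresses and unknown unicasts flood, known unicasts go to the learned port, and a frame destined for its own entry port is filtered.

```python
def forward_ports(dst_mac: str, in_port: int, mac_table: dict,
                  all_ports: set) -> set:
    """Return the set of output ports for a received frame."""
    first_octet = int(dst_mac.split(":")[0], 16)
    group = bool(first_octet & 1)       # I/G bit set: broadcast/multicast
    if not group and dst_mac in mac_table:
        out = mac_table[dst_mac]
        # Filtering: drop the frame if the destination is on the entry port.
        return set() if out == in_port else {out}
    # Broadcast, multicast, or unknown unicast: flood everywhere
    # except the entrance port.
    return all_ports - {in_port}
```

For instance, with `{"00:60:8c:01:22:22": 1}` learned, a unicast to that address entering on port 0 goes only to port 1, while a broadcast entering on port 0 floods to the remaining ports.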
2.2.5 VLAN
1. Overview
A Local Area Network (LAN) can be anything from a network of a few computers to an enterprise network of hundreds of computers. A Virtual LAN (VLAN) behaves like a LAN segment bounded by routers, that is, a broadcast domain. Members of a VLAN work as if they shared the same physical network segment, while members of different VLANs cannot access each other directly. Within a VLAN there is no physical or geographical limit on which members are placed in the same broadcast domain; they can be connected to different switches in a switching network. Broadcast packets, unknown packets and data packets between members are all confined to the VLAN. Put another way, a VLAN offers a method to divide one physical network into multiple broadcast domains.
Note: A broadcast domain is a restricted area within which broadcast frames (all bits of the destination address are 1) are transmitted to all other devices. Strictly speaking, not only broadcast frames, but also multicast frames and unknown unicast frames are transmitted within a broadcast domain without being blocked.
VLAN can divide a switching network or a broadcast domain into multiple broadcast domains, as if multiple separate physical networks. In this way, a network is segmented, and in each segment the number of computers decreases accordingly so as to improve the network performance.
VLAN is very flexible: users can configure a VLAN and add, remove or modify its members simply on the switch. Generally, it is unnecessary to change the physical network or add new devices.
When a network is divided into VLANs, computers in different VLANs can communicate with each other only through Layer 3 devices, and Layer 3 security can be ensured by configuring Access Control Lists (ACL) on those devices. In short, communication between VLANs is under control. The security of a VLAN-divided network is therefore better than that of a network without VLAN division, in which computers communicate with each other directly. Furthermore, a customer can join a VLAN only after the network administrator configures it on the switch. All of this improves network security accordingly.
For example, suppose no VLAN has been configured on a Layer 2 switch, as shown in Fig. 2.2-5. Any broadcast frame is transferred to all ports of the switch except the receiving port: the switch floods the broadcast information received from computer A to ports 2, 3 and 4.
Fig. 2.2-5
Two VLANs are configured on a switch, VLAN I and VLAN II, as shown in Fig. 2.2-6. Ports 1 and 2 belong to VLAN I, while ports 3 and 4 belong to VLAN II. If computer A sends a broadcast frame, the switch transfers it only to the other port in the same VLAN, that is, port 2 in VLAN I; it does not transfer the frame to the ports in VLAN II. In the same way, broadcast information from computer C is transferred only to the other port in VLAN II, not to the ports in VLAN I.
Fig. 2.2-6
VLAN divides the broadcast domain by limiting the forwarding range of broadcast frames. To distinguish the two VLANs clearly, Fig. 2.2-6 identifies them with different colors; in actual applications, a VLAN ID is used to identify the VLAN.
2. VLAN division modes
The most popular VLAN division mode at present is static division based on ports: the network administrator assigns ports to a specified VLAN, and computers connected to those ports then belong to that VLAN. The advantage of this mode is that configuration is easy and has no influence on the transfer performance of the switch. However, every port of the switch must be configured into the VLAN it belongs to, and once a user moves, the network administrator has to reconfigure the corresponding ports on the switch. Other VLAN division modes include division based on MAC address, protocol, IP subnet, application, user name, password, etc.
3. Operation process of VLAN
Each VLAN can be regarded as a physically isolated bridge; members of different VLANs cannot access each other directly. A VLAN can span switches: members of the same VLAN on different switches are in the same broadcast domain, so they can access each other directly. Because VLAN division is based on the physical ports of switches, when a switch receives a data frame on a port connected to a computer, it can recognize which VLAN the frame belongs to. But the link connecting two switches carries data frames from different VLANs, and the ports attached to that link do not belong to a single VLAN. Without a tag, the switch cannot tell which VLAN a frame received from such a link belongs to. Therefore, before transferring a frame onto such a link, the switch tags it; the tag identifies the VLAN to which the data frame belongs. VLAN tagging enables the switch to combine traffic from different VLANs and transmit it over the same physical line.
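The way port-based VLANs confine broadcasts can be sketched in one function (a simplification with illustrative names: each port carries a single VLAN ID, as on access ports; trunk handling is omitted):

```python
def vlan_flood_ports(in_port: int, port_vlan: dict) -> set:
    """Broadcast stays inside the sender's VLAN: flood only to the
    other ports assigned the same VLAN ID (port-based division)."""
    vid = port_vlan[in_port]
    return {p for p, v in port_vlan.items() if v == vid and p != in_port}
```

With ports 1-2 in one VLAN and ports 3-4 in another, a broadcast entering on port 1 reaches only port 2, matching the behavior described for Fig. 2.2-6.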
4. Link types
Access link: The access link connects a non-VLAN-aware workstation to a LAN segment on a VLAN switch port; that is, it is used to connect terminal equipment to switches. If VLAN division is based on ports, an access link can belong to only one VLAN. The access link may be an isolated network segment, or multiple network segments or workstations connected by non-VLAN-aware bridges and switches. An access link cannot carry tagged packets.
Trunk link: The trunk link carries tagged packets (with VLAN IDs), so a single trunk link can carry data from multiple VLANs. It requires devices that can recognize VLAN frames and membership. A trunk link is usually used to connect two VLAN switches, enabling a VLAN to span multiple switches. It may also be a shared LAN segment connecting multiple VLAN switches and VLAN-aware workstations.
[Figure: access links and a trunk link carrying VLAN1, VLAN2 and VLAN3 traffic between switches]
5. IEEE 802.1Q
IEEE developed a general VLAN standard, IEEE 802.1Q. The standard:
- Defines the VLAN architecture for the purpose of providing VLAN services on existing IEEE 802 bridged LANs.
- Defines the tagged VLAN frame format for Ethernet (IEEE 802.3) and Token Ring (IEEE 802.5).
- Defines the protocol and mechanism by which VLAN-aware devices exchange configuration and membership information.
Defines the principle and procedures for VLAN-aware devices to transfer frames on networks.
- Specifies the requirements to ensure interoperability and coexistence with non-VLAN-aware devices. A non-VLAN-aware device is a workstation or router that can neither receive nor transmit tagged VLAN packets, nor recognize VLAN membership information.
Fig. 2.2-8
Adding the 4-byte tag head to the original Ethernet frame raises the maximum frame length to 1522 bytes, which exceeds the 1518 bytes specified in IEEE 802.3; the standard was expected to be modified to support the longer tagged VLAN frames. The 4-byte tag head carries the following information:
Tag Protocol Identifier (TPID): A two-byte field with the hexadecimal value 0x8100. It identifies the frame as carrying an 802.1Q/802.1p tag.
Tag Control Information (TCI): The fields contained in the TCI are described as follows.
The three-bit user priority field represents the priority of the frame when an IEEE 802.1p-capable switch transfers it. The one-bit Canonical Format Indicator (CFI) indicates whether the MAC address information carried by the frame is in canonical format. The twelve-bit VLAN Identifier (VID) uniquely identifies the VLAN to which the frame belongs.
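The tag head layout above can be made concrete with a short pack/unpack sketch (function names are illustrative; the bit positions follow the field widths just listed: 3-bit priority, 1-bit CFI, 12-bit VID inside the 16-bit TCI):

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def build_tag(priority: int, cfi: int, vid: int) -> bytes:
    """Pack the 4-byte 802.1Q tag head: 16-bit TPID followed by the
    TCI (3-bit user priority, 1-bit CFI, 12-bit VID)."""
    tci = (priority << 13) | (cfi << 12) | vid
    return struct.pack("!HH", TPID, tci)

def parse_tag(tag: bytes) -> tuple:
    """Unpack a 4-byte tag head back into (priority, cfi, vid)."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID
    return tci >> 13, (tci >> 12) & 1, tci & 0x0FFF
```

Round-tripping a tag, e.g. priority 5 on VLAN 100, recovers the same three fields, and the first two bytes on the wire are always 0x81 0x00.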
As shown in the diagram above, a data frame from the Ethernet interface is transmitted transparently to Layer 2 for switching. After encapsulation, the frame is mapped into a virtual container; the Multiplex Section Overhead (MSOH) and Regenerator Section Overhead (RSOH) are inserted to form an STM-N frame, which is transmitted over the SDH network. An EOS transfer node supporting Layer 2 switching must provide the following basic functions:
- Configurable transmission link bandwidth
- Transparency of Ethernet services
- Transfer and filtering of Layer 2 data frames
- Support for the IEEE 802.1d Spanning Tree Protocol
The encapsulation protocol stack specifies the functions of link control, rate adaptation and frame delineation from point-to-point Ethernet to the SDH network. There are three encapsulation protocols: Point-to-Point Protocol (PPP), Link Access Procedure for SDH (LAPS) and Generic Framing Procedure (GFP).
1. PPP encapsulation
PPP encapsulation adopts RFC 1662, PPP in HDLC-like Framing, over a byte-synchronous link. The encapsulation procedure includes three steps: MAC frame extraction, PPP framing and HDLC processing.
1) MAC extraction: Check the MAC frames, filter out frames with CRC errors and other abnormal frames, then remove the preambles of the Ethernet frames and the inter-frame gaps.
2) PPP framing: The Address, Control and Protocol fields provide multi-protocol encapsulation, link initialization and authentication. In addition, errors are detected with the Frame Check Sequence (FCS), using CRC-16 or CRC-32.
3) HDLC processing: Make the PPP frame transparent by changing 0x7e to 0x7d 0x5e and 0x7d to 0x7d 0x5d. Delimit the PPP frame by adding 0x7e flags at the header and trailer, then adapt the rate of the PPP frame to the SDH VC channel by inserting 0x7e fill.
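The HDLC byte-stuffing step can be sketched as follows (a simplification with illustrative names; FCS computation and rate-adaptation fill are omitted). Escaping XORs the escaped byte with 0x20, so 0x7e becomes 0x7d 0x5e and 0x7d becomes 0x7d 0x5d:

```python
FLAG, ESC = 0x7E, 0x7D

def hdlc_stuff(payload: bytes) -> bytes:
    """Byte-stuff a PPP frame for HDLC-like framing and delimit it
    with 0x7e flags at the header and trailer."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])   # escape and flip bit 5
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)
```

After stuffing, the flag byte 0x7e can never appear inside the frame body, so the receiver can delineate frames unambiguously.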
PPP was the first encapsulation protocol and has been used widely due to its maturity. PPP is the link layer protocol commonly used for communication between two directly connected devices on a point-to-point link; for example, the connection between a computer and an access server during dial-up uses PPP, as does the connection between Digital Data Network (DDN) routers. However, devices from different vendors cannot interwork, because there are no unified requirements for applying PPP.
2. LAPS encapsulation
The LAPS encapsulation is similar to PPP. It simplifies link control processing and implements rate adaptation with an additional transmitted sequence (0x7d, 0xdd). Compared with PPP encapsulation, LAPS completes packing and adaptation at the same time. Fig. 2.3-4 illustrates the LAPS encapsulation procedure.
3. GFP encapsulation
GFP is a generic mapping technology. Variable-length or fixed-length data packets can be adapted and processed uniformly, enabling the transmission of data services over various high-speed physical transmission channels.
Fig. 2.3-5 shows the format of a GFP frame. In general, a GFP frame contains a core header and a payload. The encapsulation efficiency of GFP, being independent of the payload contents, is higher than that of PPP and LAPS. In addition, GFP is more robust: even odd-bit errors in the GFP frame header will not cause synchronization loss (out of frame), whereas they will for PPP/LAPS encapsulation. GFP can also use the system bandwidth more efficiently: with its channel identifier, GFP can combine multiple physical ports into one channel, whereas with PPP/LAPS a physical port can only be associated with one channel. Finally, GFP supports ring networks in addition to point-to-point networks.
Because devices that support concatenated traffic and those that do not interpret pointers differently, existing SDH devices generally cannot transfer contiguous-concatenation traffic. The application of virtual concatenation, however, can meet the bandwidth demands of broadband services. Generally, virtual concatenation must implement functions in both the transmitting and receiving directions. In the transmitting direction, it converts C-4/3/12-Xc to C-4/3/12-Xv, turning contiguous-concatenation traffic into virtual-concatenation traffic that can be transmitted over SDH devices. In the receiving direction, C-4/3/12-Xv is transformed back to C-4/3/12-Xc, and the virtual-concatenation traffic is converted back to contiguous-concatenation traffic. In this way, contiguous-concatenation traffic can be carried through SDH devices.
Supporting LCAS
The LCAS applies to virtual concatenation. It can adjust the link capacity hitlessly for virtually concatenated signals passing through the transmission network. Based on the existing bandwidth, LCAS can increase or decrease the bandwidth capacity dynamically, adapting to changes in the virtually concatenated traffic. Moreover, LCAS improves the robustness of virtually concatenated traffic and the service quality as well.
Some problems concerning the application of virtual concatenation still have to be considered. Technically, the main problem of virtual concatenation, compared with contiguous concatenation, is differential delay. Because each virtual container in a virtual concatenation group may follow a different path, transmission time differences can appear between virtual containers. In the worst case, a virtual container with a later sequence number reaches the sink node before one with an earlier sequence number, which makes it difficult to recover the original signal. At present, the effective solution is to use a large delay-alignment memory to buffer data for re-alignment. For multi-path transmission, ZTE's MSTP products can compensate a path delay difference of 32 ms; calculated at 5 us/km, the maximum path difference is 6400 km.
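The 6400 km figure is simple arithmetic on the buffer size and the per-kilometre propagation delay quoted above (the function name is illustrative):

```python
def max_path_diff_km(buffer_ms: float, delay_us_per_km: float = 5.0) -> float:
    """Differential path length tolerated by a delay-alignment buffer:
    buffer depth (ms -> us) divided by propagation delay per km."""
    return buffer_ms * 1000.0 / delay_us_per_km
```

A 32 ms buffer at 5 us/km gives 32000 / 5 = 6400 km of tolerable path difference.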
3 Theory of ATM
Key points
- Features of ATM
- ATM cell structure
- Fundamentals of ATM switching
- ATM protocol reference model
- ATM communication QoS
- VP-Ring technology
- ATM service type
- Basic connection functions of ATM
capability, various relay networks, and weak adaptability to new services. Therefore a more flexible new network with broader bandwidth and stronger service integration capability was needed. Since the 1980s, the development of basic technologies related to telecommunications, such as microelectronics and photoelectronics, has provided the basis for realizing such new networks. Against this background, the broadband ISDN (B-ISDN) appeared. The B-ISDN can:
- Enable the high-speed transmission of services.
- Make network devices independent of the characteristics of services.
- Make the information transfer mode independent of the service type.
People sought many solutions for a transfer mode suited to the B-ISDN, such as multi-rate circuit switching, frame relay, and fast packet switching. Finally, the most appropriate transfer mode for B-ISDN was found: the Asynchronous Transfer Mode (ATM). As the core technology of B-ISDN, ATM was specified as the unified information transfer mode by ITU-T in 1992. ATM overcomes the limitations of both the circuit switching mode and the packet switching mode. It exploits optical communication technology to improve transmission quality, and at the same time it simplifies the operations at network nodes and thus decreases network delay. A series of other techniques are also adopted to meet the requirements of B-ISDN.
Fig. 3.1-1
GFC: Generic Flow Control. It has four bits, all of which are currently set to the default value 0000. The GFC appears only at User-Network Interfaces (UNI) and may be used for flow control in the future.
VPI: Virtual Path Identifier. It has 12 bits at Network-to-Network Interfaces (NNI) and 8 bits at the UNI.
VCI: Virtual Channel Identifier. This 16-bit field identifies a virtual channel within a virtual path. The VCI and VPI together identify a virtual connection.
PTI: Payload Type Identifier. It is a 3-bit field identifying the payload type.
CLP: Cell Loss Priority. It is a single bit used to distinguish the cell loss priority: 1 indicates low priority, 0 indicates high priority. Low-priority cells are discarded first when congestion occurs.
HEC: Header Error Control. This 8-bit error control byte detects errored cell headers and can correct a single-bit error in the header. The HEC is also used for cell delineation.
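The field widths above (GFC 4, VPI 8, VCI 16, PTI 3, CLP 1, HEC 8 bits at the UNI) can be made concrete by unpacking a 5-byte header; a sketch with an illustrative function name, not a full implementation (the HEC is extracted but not verified):

```python
def parse_uni_header(hdr: bytes) -> dict:
    """Unpack the 5-byte ATM UNI cell header:
    GFC(4) VPI(8) VCI(16) PTI(3) CLP(1) HEC(8)."""
    assert len(hdr) == 5
    b0, b1, b2, b3, b4 = hdr
    return {
        "gfc": b0 >> 4,                                    # top 4 bits
        "vpi": ((b0 & 0x0F) << 4) | (b1 >> 4),             # next 8 bits
        "vci": ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4),# next 16 bits
        "pti": (b3 >> 1) & 0x7,                            # next 3 bits
        "clp": b3 & 1,                                     # 1 bit
        "hec": b4,                                         # final byte
    }
```

For example, the header bytes 00 10 02 00 55 decode to VPI 1 and VCI 32 with PTI 0 and CLP 0.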
Confidential
VP and VC are used to describe the unidirectional transmission route of ATM cells. An ATM cell can be switched either at the VP level or at the VC level. Through multiplexing, each VP can accommodate at most 65536 virtual channels. Cells belonging to the same VC carry the same VC Identifier (VCI), and VCs belonging to the same VP carry the same VP Identifier (VPI). Both the VCI and the VPI are transmitted with the cell as part of the cell header. The transmission channel, VP and VC are three important concepts in the ATM technology. Fig. 3.1-2 shows the relationship among them.
Fig. 3.1-2 Relationship among Transmission Channel, VP and VC (a transmission channel carries several VPs, and each VP carries several VCs)
Call processing in ATM is based on the concept of the virtual call in packet switching, instead of routing each cell individually. The route for the cells of a call is established before transmission begins, and all cells of the same call follow this route until the call ends. The procedure is as follows. The calling party sends a call request control signal via a UNI. The called party receives the control signal and accepts the request. The switching nodes in the network then form a virtual circuit between the calling and called parties by exchanging signaling. The virtual circuit is represented by a series of VPI and VCI values. While setting up the virtual circuit, every switching node on the circuit builds a routing table used to translate the VPI/VCI of an input cell into the VPI/VCI of the output cell. After the virtual circuit is established, the transmitted information is segmented into cells, which are transferred to the called party over the network. If the transmitting end wants to forward messages to several receiving ends at the same time, separate virtual circuits can be built to the corresponding receiving ends, and the cells are output in an interleaved manner.
In a virtual circuit, the VPI/VCI value of a cell remains unchanged between two adjacent switching nodes. Between these two nodes, a VC link is formed; a chain of VC links forms a VC Connection (VCC). Similarly, VP links and VP Connections (VPC) are formed.
1. VP switching
When a cell passes an ATM switching node, the node modifies the VPI value in the input cell to a new value according to the destination of the VP, assigns the new value to the cell and outputs it. This process is called VP switching. As shown in Fig. 3.1-3, all VC links in one VP are transferred to another VP during VP switching, while the VCI values of these VC links remain unchanged. The implementation of VP switching is very simple. Generally, it can be realized through the cross-connection of digital multiplex cables at some level in the transmission channel.
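VP switching can be sketched as a simple table lookup. The table below is hypothetical (it reuses the VPI values of Fig. 3.1-3); the point is only that the VPI is translated while the VCI passes through untouched:

```python
# Hypothetical VP cross-connect table for one node, reusing the VPI values
# of Fig. 3.1-3: incoming VPI -> outgoing VPI.
VP_TABLE = {1: 4, 2: 5}

def vp_switch(vpi, vci):
    """VP switching: translate the VPI; the VCI passes through unchanged."""
    return VP_TABLE[vpi], vci
```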
Fig. 3.1-3 VP Switching (VPI=1 is switched to VPI=4 and VPI=2 to VPI=5, while VCI values 1, 2, 7 and 8 remain unchanged)
2. VC switching
VC switching is performed together with VP switching, because when a VC link terminates, the corresponding VP connection terminates too. All VC links on that VPC then switch respectively and are added to VPCs in different directions, as shown in Fig. 3.1-4.
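By contrast, a VC switch keys its table on the full VPI/VCI pair, so both identifiers may change. The table values here are invented for illustration:

```python
# Hypothetical VC cross-connect table: (in VPI, in VCI) -> (out VPI, out VCI).
VC_TABLE = {
    (1, 3): (2, 4),  # both identifiers may change in VC switching
    (1, 4): (3, 3),
}

def vc_switch(vpi, vci):
    """VC switching: look up the full VPI/VCI pair."""
    return VC_TABLE[(vpi, vci)]
```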
Fig. 3.1-4 VC Switching
Fig. 3.1-5 ATM Multiplexing
Fig. 3.1-6 (an AAL-SDU is segmented into 48-byte payloads, carried in 53-byte cells, and sent to the physical layer as a bit flow)
The functions of each layer are as follows.
1. Physical layer
The physical layer carries the information flow. It contains two sublayers: the Transmission Convergence (TC) sublayer and the Physical Medium (PM) sublayer.
1) TC sublayer
This sublayer embeds ATM cells into the transmission frame of the current medium, or extracts valid ATM cells from the transmission frame. The procedure for embedding ATM cells into the transmission frame is: ATM cell demodulation (buffering), Header Error Control (HEC) generation, cell delineation, transmission frame adaptation, and transmission frame generation. The procedure for extracting ATM cells from the transmission frame is: transmission frame reception, transmission frame adaptation, cell delineation, header error checking, and ATM cell queuing. The main functions of the TC sublayer are cell delineation and header error control.
2) PM sublayer
The PM sublayer is based on the ITU-T and ATM Forum (ATMF) recommendations. It includes the following connections:
Connections based on direct transmission of cells
Connections over PDH networks
Connections over SDH networks
Direct optical transmission of cells
Connections between Universal Test & Operation PHY Interfaces for ATM (UTOPIA)
Connections between Operation And Maintenance (OAM) interfaces for management and monitoring information flow
2. ATM layer
This layer mainly implements the multiplexing/demultiplexing of cells, header-related operations and flow control. The multiplexing and demultiplexing of cells is completed at the interface between the ATM layer and the TC sublayer of the physical layer. The sending ATM layer combines cells with different VPI/VCI values and transfers them to the physical layer as a whole. The receiving ATM layer recognizes the VPI/VCI in the cells received from the physical layer and sends each cell to a different module for processing: a signaling cell is sent to the control plane, while a management cell is sent to the management plane.
The header operation is the translation of the VPI/VCI based on the VPI/VCI values allocated when the link was established.
3. AAL layer
The AAL layer works on top of the ATM layer. It is service-aware, adopting different adaptation methods for different services. For each adaptation mode, the information flow (of varying length and rate) from the upper layer is split into 48-byte ATM service data units. Conversely, it reassembles cells received from the ATM layer, recovers the flow and sends it to the upper layer. Since there are various kinds of information at the upper layer, the AAL layer is divided into two sublayers to handle the complicated processing procedure: the Convergence Sublayer (CS) and the Segmentation and Reassembly (SAR) sublayer. To improve the switching rate, the ATM layer has been simplified as much as possible. The ATM layer therefore does not provide functions concerning quality of service, such as handling cell loss, transmission errors, delay and jitter; these functions are performed by the AAL layer. Different services require different adaptation, so four classes of service are defined according to the requirements on timing, bit rate and connection mode between the source and destination. These classes of service correspond to the AAL protocols AAL1, AAL2, AAL3/4 and AAL5 respectively.
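The SAR function described above can be sketched in a few lines. This is a simplification (real AAL protocols add headers, trailers and length fields that differ per AAL type); it only shows the split into 48-byte units and the inverse operation:

```python
CELL_PAYLOAD = 48  # bytes carried by one ATM cell

def segment(data: bytes) -> list:
    """SAR side of the AAL: split a message into 48-byte payloads,
    zero-padding the last one (the padding scheme is a simplification)."""
    pad = (-len(data)) % CELL_PAYLOAD
    padded = data + b"\x00" * pad
    return [padded[i:i + CELL_PAYLOAD] for i in range(0, len(padded), CELL_PAYLOAD)]

def reassemble(payloads: list, length: int) -> bytes:
    """Recover the original message, given its true length."""
    return b"".join(payloads)[:length]
```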
AAL1 supports constant bit rate, connection-oriented traffic, where timing information needs to be transferred between source and sink. Common services of this class include 64 kbit/s voice services, uncompressed constant bit rate video traffic, and leased lines in private data networks.
AAL2 is provided for point-to-point variable bit rate traffic with timing relations. Common services of this class are compressed packet voice communication and compressed video transmission. One characteristic of these services is the transmission delay caused by the reassembly of the uncompressed voice and video information in the receiver.
AAL3/4 is provided to adapt two kinds of data services in the ATM network: the data service corresponding to remote LAN interworking and the connection-oriented data service.
AAL5 supports variable bit rate traffic without synchronization requirements between the transmitting and receiving ends. It provides services similar to AAL3/4 and is mainly used to transmit computer data, UNI signaling information and frame relay in the ATM network. The purpose of AAL5 is to reduce overheads and provide a simple and efficient AAL.
3. nrt-VBR services
nrt-VBR services support bursty non-realtime applications. The link characteristics are represented by the PCR, SCR and MBS. The nrt-VBR service can ensure a very low cell loss ratio for cells that meet the traffic contract, but it does not bound the delay. nrt-VBR services support statistical multiplexing.
4. UBR services
The UBR service is a kind of non-realtime application. It does not strictly limit the delay and delay variation. UBR services include traditional computer communication applications such as file transfer and e-mail. UBR services guarantee neither the quality of service nor bounds on the cell loss ratio and cell transfer delay. Networks can determine whether to use the PCR in Connection Admission Control (CAC) and Usage Parameter Control (UPC). The PCR value has no significance when the network places no enforced limitation on it. The congestion control for UBR links is carried out at a higher layer on a point-to-point basis.
5. ABR services
The transfer characteristics negotiated when an ABR link is established can be changed later. A flow control mechanism supports feedback to the source to control the cell transfer rate at the source end. Such feedback is realized by a special control cell, the Resource Management (RM) cell. A low cell loss ratio is expected when the end system controls its flow according to the feedback, and a fair share of the available bandwidth can then be accessed. For a given link, the ABR service places no bound on delay and delay variation; that is, the ABR service does not support real-time applications. When an ABR link is established, the end system specifies the maximum bandwidth and the minimum available bandwidth needed, represented by the PCR and the Minimum Cell Rate (MCR). The MCR can be 0. The bandwidth provided by the network can vary, but cannot be less than the MCR.
functions, such as inverse multiplexing and statistical multiplexing, enable carriers to meet the demands of ATM service applications and to lead in metro area network construction.
Fig. 3.2-1
The IMA technology provides functions similar to virtual concatenation and LCAS in Ethernet. When there is more traffic on the ATM Digital Subscriber Line Access Multiplexer (DSLAM) than can be transferred through a single path, IMA can split the ATM service over multiple low-rate E1 links with IMA timing and transmit them transparently through VC12s of different spare paths in the current transmission network. With a function similar to LCAS, IMA can adjust the bandwidth of ATM services dynamically. In this way, IMA can still ensure the QoS of the other E1 links when one E1 link fails. Furthermore, it can dynamically adjust the bandwidth combination of the links so that bursty services can be accepted at any moment.
2. Virtual Path Ring (VP-Ring) technology
For ATM services, standard SDH can also implement the transmission function of ATM 155/622 M interfaces. Considering bursty data services, the statistical multiplexing of ATM services on the ring should be carried out on the MSTP, taking advantage of the great dynamic variation characteristic of actual data service flows. The ATM VP-Ring technology transfers data services through the VC4 of SDH, and implements the statistical multiplexing and protection of ATM service access nodes. It also specifies the service convergence processing principle. As shown in Fig. 3.2-3, the ATM DSLAM, Node B, and Radio Network Controller (RNC) access the MSTP via 155 M interfaces. The bandwidth actually used for transmission is dynamically variable. With the ATM VP-Ring, all ATM nodes on the ring can share one VC4 of the SDH path, thus greatly improving the bandwidth utilization ratio.
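The inverse multiplexing idea behind IMA can be sketched as a round-robin distribution of one cell stream over several E1 links. This is a deliberately simplified model (real IMA adds ICP control cells, frame alignment and differential delay compensation); the function names are our own:

```python
def ima_distribute(cells: list, num_links: int) -> list:
    """Round-robin one ATM cell stream over several E1 links (IMA sketch)."""
    links = [[] for _ in range(num_links)]
    for i, cell in enumerate(cells):
        links[i % num_links].append(cell)
    return links

def ima_recombine(links: list) -> list:
    """Far-end inverse operation: interleave cells back into the original order."""
    out = []
    rounds = max(len(lk) for lk in links)
    for r in range(rounds):
        for lk in links:
            if r < len(lk):
                out.append(lk[r])
    return out
```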
Fig. 3.2-2
Fig. 3.2-3
1. ATM service types provided
The SDH-based multi-service transport node provides the following ATM services for ATM service sources with different characteristics.
2. Basic connection functions
The VP-Ring supports the establishment and removal of Permanent Virtual Circuits (PVC) by command, as well as the ordered establishment and removal of user data paths between ATM interfaces.
Point-to-multipoint connection function
Multipoint network connection: supports network interconnection between two or more physical interfaces.
ATM multicast: supports replicating the VP/VC of the input cell flow to multiple output ATM links during ATM switching.
Space multicast: the output ATM links can be located at two or more physical interfaces, with each interface carrying only one ATM link.
Logic multicast: two or more output ATM links share one physical interface.
Connection management function
This function includes the following two parts.
Network resource control: including the management of VPI/VCI and network bandwidth, as well as the routing of services.
Flow control: providing the contracted QoS for ATM data flows, including traffic shaping, Usage Parameter Control and Network Parameter Control (UPC/NPC), Connection Admission Control (CAC), Selective Cell Discard (SCD), frame discard, user data buffering and QoS type management.
3. ATM layer service protection switching
The ATM layer service protection mode is ATM virtual path (VP) protection. Generally, ATM service protection switching is layered: the physical layer adopts SDH protection, such as multiplex section protection, while the ATM layer adopts ATM VP protection. When switching is enabled on both the ATM layer and the physical layer, coordination between the layers is implemented by delaying the ATM layer switching so as to avoid the two switching actions overlapping.
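The hold-off coordination between the layers can be sketched as follows. The timer values and names are assumptions chosen only to illustrate the idea that the ATM layer waits so the SDH layer gets the first chance to restore service:

```python
SDH_SWITCH_MS = 50     # assumed physical-layer (SDH) protection time
ATM_HOLDOFF_MS = 100   # assumed hold-off before ATM VP protection acts

def protection_action(fault_duration_ms: int) -> str:
    """Layered protection sketch: the ATM layer holds off so that the SDH
    layer can clear the fault first; only a persistent fault triggers
    ATM VP protection switching."""
    if fault_duration_ms <= ATM_HOLDOFF_MS:
        return "restored by SDH layer"
    return "ATM VP protection switching"
```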
4 Theory of RPR
Key points
RPR overview and features
RPR networking architecture
Concepts and functions of RPR technology
RPR network hierarchy model
RPR fairness algorithm
RPR topology discovery
RPR protection
Implementation scheme of RPR
System architecture of MSTP with embedded RPR
Currently, the EOS product, which combines the Ethernet and SDH technologies, is also employed on a large scale. It is the main, mature product supporting the earlier MSTP technology. It meets the demands of data transmission in the early days of TDM networks by carrying Ethernet frames in SDH virtual containers, encapsulating the frames into virtual containers directly with the GFP, HDLC/PPP or LAPS protocol. In this way, the EOS technology resolves the problem that expensive Packet Over SONET (POS) interfaces must be used when the SDH optical network has no packet service interface. Generally, services are converged through an 802.3 switching module before encapsulation in order to improve the bandwidth utilization ratio. This is the second generation MSTP technology mentioned in the first chapter of this book. The main disadvantages of the second generation MSTP are as follows.
Complicated configuration: Services between sites must be configured one by one, and intermediate sites must be configured as pass-through. For a complex network, a lot of configuration and maintenance work is needed.
Lack of sharing: Traffic is carried over VCs. As the carrier of traffic, a link cannot share its bandwidth with other links.
Low bandwidth utilization: To avoid broadcast storms, the STP protocol must be run, so some bandwidth cannot be fully used. On the other hand, the MSTP needs SDH protection because it has no fast protection mechanism of its own (STP protection, converging at the level of seconds, is too slow). However, SDH protection wastes 50 percent of the bandwidth despite its high speed.
Special requirements on the convergence ratio: In convergence networks, boards with multiple system directions are needed to respond to link requests from the various sites to the convergence site.
Difficulty ensuring QoS in ring networks: Although EOS equipment can partly avoid the problems mentioned above by constructing an Ethernet ring, they cannot be avoided completely, because 802.3 networks were not originally designed for rings. Besides, the ring causes interaction between traffic from upstream and downstream sites, so QoS cannot be guaranteed. Such applications are therefore still rare.
These are the problems that the RPR technology is designed to solve.
High bandwidth efficiency
Traditional SDH networks need 50% of the ring bandwidth as redundancy, while the RPR does not. The RPR technology retains a protection mechanism similar to that of SDH networks: it protects services by using two counter-rotating rings and allows data traffic to be transferred at full rate on the ring between the source node and the destination node. With the Spatial Reuse Protocol (SRP), the destination node extracts the data frames dropped at that node, and the bandwidth downstream of it is released; spatially separated traffic flows can therefore use their own bandwidth without influencing one another. To sum up, data are normally transmitted on the shortest arc between the source node and the destination node, and multiple pairs of nodes can intercommunicate at the same time. Many nodes can thus receive and transmit packets simultaneously, improving the utilization of the ring bandwidth; the more nodes on the ring, the more evident the improvement.
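Spatial reuse can be illustrated with a small model. The sketch below (our own, with hypothetical function names) computes which spans of an n-node ringlet a flow occupies, given that the destination strips the frame; flows on disjoint arcs can then run at full rate simultaneously:

```python
def spans(src: int, dst: int, n: int) -> set:
    """Spans (node i -> node i+1 links) occupied by a flow travelling
    clockwise from src to dst on an n-node ringlet. The destination
    strips the frame, so the remaining spans stay free."""
    out, i = set(), src
    while i != dst:
        out.add(i)
        i = (i + 1) % n
    return out

def can_share(flows: list, n: int) -> bool:
    """True if no two (src, dst) flows occupy the same span, i.e. spatial
    reuse lets them all transmit at full rate at the same time."""
    used = set()
    for src, dst in flows:
        s = spans(src, dst, n)
        if used & s:
            return False
        used |= s
    return True
```

For example, on an 8-node ring a flow from node 0 to node 2 and a flow from node 4 to node 6 occupy disjoint spans and can both use the full ring rate.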
Fair bandwidth allocation protocol (QoS guarantee)
The RPR enables bandwidth sharing for data services by using an effective fairness algorithm. In the network, the traffic at the user access end is bursty in nature, while the traffic in the core part of the network is comparatively smooth and thus predictable. By classifying services, the RPR technology enables carriers to provide low-priority access services (such as some data services) only when there is spare bandwidth. This not only makes full use of the inherent characteristics of these traffic flows, but also avoids bandwidth unfairness between upstream and downstream sites.
Quick protection mechanism
The RPR can provide 50 ms service protection similar to the Automatic Protection Switching (APS) in SDH networks. At present, two methods can be used to handle failures: Wrapping and Steering. With Wrapping, the nodes adjacent to the failure loop the traffic from one ring onto the other, for example from the inner ring to the outer ring. This keeps the continuity (sequence) of the data, even though the traffic reaches the destination node through a longer path. With Steering, the traffic flow is reversed in direction and reaches the destination node through another path.
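Steering can be sketched as a source-side path selection that avoids the failed span. This is an illustrative model with invented names, not the 802.17 state machine:

```python
def ring_path(src: int, dst: int, n: int, direction: int) -> list:
    """Node sequence from src to dst, stepping +1 (one ringlet) or -1 (the other)."""
    path, i = [src], src
    while i != dst:
        i = (i + direction) % n
        path.append(i)
    return path

def steer(src: int, dst: int, n: int, broken_span: tuple) -> list:
    """Steering sketch: the source picks the ringlet whose path avoids the
    failed span (a, b) between two adjacent nodes."""
    for direction in (1, -1):
        path = ring_path(src, dst, n, direction)
        hops = [{path[i], path[i + 1]} for i in range(len(path) - 1)]
        if set(broken_span) not in hops:
            return path
    return []  # ring segmented: no path avoids the failure
```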
Seamless connection with SDH networks
The RPR system with embedded SDH can be connected to SDH networks seamlessly, since most SDH networks are rings and the RPR is itself a ring structure. It makes full use of the bandwidth in ring networks. Unlike the existing MSTP, it does not need the STP protocol or manual configuration to avoid traffic loops, which can otherwise waste bandwidth.
Simple service configuration
One objective of the RPR technology is distributed access. Distributed access, together with quick protection and automatic traffic re-establishment, provides a plug-and-play mechanism for inserting or removing nodes quickly. The RPR is a packet switching technology using shared bandwidth on the ring, where each node knows the available capacity of the ring. Under the traditional circuit switching mode, each connection in the whole network must be configured point to point, but the RPR only needs the connection relationship between the access end and the ring to be configured. It is unnecessary to configure the connections between nodes or the flow direction of traffic, which simplifies configuration greatly. Furthermore, such a service configuration mode avoids the convergence ratio problem of traditional EOS devices: ignoring bandwidth limitations, the RPR can achieve an almost unlimited convergence ratio.
Fig. 4.2-1
Data transmission based on RPR supports the unicast, multicast and broadcast mechanisms. For multicast or broadcast, each node can detect the data after it is sent by the source node. The node analyzes the address information in the frame header; if the address matches this node, it copies the data and then forwards it to the next node. After the data passes around the ring and returns to the source node, it is stripped from the ring by the source node. For unicast, the data packet is transferred only on the shortest arc between the source node and the destination node: the source node sends the data, and the destination node receives the data and strips the packet from the ring. The RPR is more efficient than token ring or FDDI in metro area networks. The topology of RPR is a dual symmetrical counter-rotating ring: one is the inner ring; the other is the outer ring. This architecture provides the following benefits.
Two paths between each pair of nodes ensure high reliability. Two protection mechanisms can be used. One is the wrapping mechanism: when a node adjacent to the failure detects it, the traffic on the affected ring is looped onto the other ring. The other is steering protection: when a failure occurs, protection messages are quickly dispatched to all nodes on the ring, and the source node selects the ring for data transfer so that the data bypasses the failure point. IEEE 802.17 specifies steering as the default protection mechanism.
The longest path is half of the ring, since data can be transmitted in both directions and nodes on the ring can select the shorter transfer path.
Spatial reuse is possible. In unicast mode, data can be transferred on different parts of the ring at the same time, so the capacity of the whole ring can be several multiples of that of a single fiber.
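The half-ring property follows directly from shortest-arc ringlet selection, which can be sketched as follows (the function name and ringlet labels are our own):

```python
def choose_ringlet(src: int, dst: int, n: int) -> tuple:
    """Pick the ringlet with the shorter arc on an n-node dual ring.
    The hop count never exceeds n // 2."""
    cw = (dst - src) % n    # hops on the clockwise ringlet
    ccw = (src - dst) % n   # hops on the counter-clockwise ringlet
    return ("outer", cw) if cw <= ccw else ("inner", ccw)
```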
The "Resilient" in RPR refers to the RPR protection protocol. The Operation, Administration and Maintenance (OAM) protocol of RPR involves four functions defined by ISO: fault management, configuration management, performance management and accounting management. Fault management is the core part of OAM, responsible for detecting, isolating and correcting exception conditions in the network and reporting them to the network management system. IEEE 802.17 specifies that network elements must detect two types of failure: Loss Of Continuity (LOC) and Remote Defect Indication (RDI).
destination according to the logical MAC address of the traffic flow, and forwards the traffic quickly without any processing.
2. Flexible interfaces
Currently, the RPR technology supports various tributary interfaces such as E1, 10 M/100 M, GE, STM-1 and POS, and the line interface rate can reach GE, 2.5 G and 10 G. Some RPR devices even provide Dense Wavelength Division Multiplexing (DWDM) boards to reach a line rate of 80 G or higher. The high line rates and abundant tributary interfaces suit the traffic deployment in broadband metro area networks very well.
3. Spatial reuse technology based on logical MAC
The RPR supports the Spatial Reuse Protocol (SRP), a MAC layer protocol independent of the medium. All nodes have the same control right over bandwidth.
4. Bidirectional counter-rotating ring, single fiber ring and linear network topology
The RPR ring is a bidirectional counter-rotating ring: one fiber forms the clockwise ringlet and the other the counter-clockwise ringlet. Each fiber transfers data and control signals in one direction. The control signal is transferred as the packet with the highest priority. The RPR also supports the single fiber topology, bidirectional data transmission at 1 G or 2.5 G, as well as WDM data transmission.
5. Medium independence
The RPR does not depend on the physical layer medium, which can be a fiber or a wavelength in a DWDM system.
6. Comparison of RPR and other broadband technologies
Compared with other technologies, the RPR technology has the following major features: the economy of a LAN, a reliable basis for guaranteeing TDM transmission, and full use of network bandwidth.
Comparing RPR with SDH/POS (Packet Over SDH)
Both RPR and POS avoid the complex protocol and heavy header overhead of the ATM technology. They can transfer gigabit IP services through fiber in the format of a resilient packet data frame (similar to an Ethernet frame). It is unnecessary to disassemble and reassemble IP packets, which greatly improves the processing capability of switches and decreases the cost of devices. In addition, the RPR provides dynamic bandwidth usage, greatly increasing the bandwidth utilization ratio; the RPR thus avoids the point-to-point limitation of POS and decreases the number of ports. For network protection, the RPR adopts switching based on the source-routed ring, which differs from the multiplex section protection of SDH and is more economical with network resources. When a fiber is broken, the nodes at both ends of the fiber send Layer 2 control signaling to each node along the fiber direction. As soon as the source node of the traffic receives the control message, it sends the service, according to the logical MAC address of the destination node, onto the fiber in the other direction. In this way, the protection is implemented. The protection route selected by this source-routed switching is evidently the best: it saves fiber bandwidth resources, and the protection switching time is less than 50 ms. To sum up, the RPR functions of statistical multiplexing of bandwidth, provision of multiple high-speed Ethernet interfaces, differentiated service levels and source-routed ring protection based on service level can not only guarantee the transfer of TDM services, but also support bursty IP services efficiently. This is what POS/SDH cannot provide.
Comparing RPR with Dynamic Packet Transport (DPT)/Gigabit Ethernet (GE)
Compared with DPT/GE, the RPR has the important advantage of providing multi-service (including TDM services) transport and switching capability.
Fig. 4.2-2 RPR hierarchy reference model (MAC data channel sublayer and service interface over GFP/SDH coordination sublayers; System Packet Interface-x; PPP/LAPS and GFP adaptation; SONET/SDH; medium-dependent physical layer interfaces)
The hierarchy reference model complies with the Open System Interconnection (OSI) reference model and corresponds to its first and second layers. The purposes of the RPR physical layer interface and physical layer entities are as follows:
1) Supporting the RPR MAC
2) Supporting GE and 10 G Ethernet physical layer entities
3) Supporting the framing modes of GFP and byte-synchronous HDLC/LAPS, and physical layer entities running at rates from 155 Mbit/s to 9.95 Gbit/s
4) Supporting synchronous or plesiochronous network applications
5) Supporting only full-duplex operation
In the hierarchy model of RPR, the coordination sublayer of the physical layer at the bottom is responsible for mapping information between the Medium-Independent Interface (MII) and the physical medium. The System Packet Interface (SPI) defined by the Optical Internetworking Forum (OIF) is an interface between a physical layer device and a data link layer device. It separates the synchronous layer from the asynchronous layer by transmitting and receiving data at a rate independent of the actual line bit rate. The SONET/SDH adaptation layer implements the mapping from the RPR to SONET/SDH by taking the GFP protocol and the HDLC series protocols (PPP/LAPS are widely used) as the data link layer mapping protocols. The MAC sublayer consists of two parts: the data channel sublayer and the MAC control sublayer. The MAC data channel sublayer transmits and receives frames on the physical medium through the physical service interface. The data channel includes two function modules: a ringlet-independent module and a ringlet-specific module. The ringlet-independent functions include MAC service interface processing, ringlet selection, receiving frames from the ring and transferring data to the client layer. The ringlet-specific functions include the traffic flow adjustment of local data transmission and data exchange through the physical service interface. The MAC control sublayer is responsible for the work needed to maintain the data channel sublayer, including the RPR topology discovery protocol, the RPR fairness algorithm, RPR ring protection and RPR OAM. The RPR topology discovery message is transferred in RPR control frames. The MAC service interface is used to transmit data from the MAC client layer, as well as local messages from the MAC layer to the MAC client layer. The MAC control sublayer establishes the data channel independently of the actual ring network and performs the related control operations, while the MAC data channel sublayer performs the functions related to the actual ring network, such as access control and data transmission.
Fig. 4.2-3
RPR processing on the receiving side (ingress):
1. After the node receives a data frame from the ring, it checks the Time To Live (TTL) value in the frame. If the value is not zero, the TTL field is decremented by 1; if the TTL reaches zero, the frame is stripped from the ring.
2. Check the frame type and verify the frame, and strip errored frames. If a frame belongs to the fairness algorithm, send it directly to the fairness algorithm module for processing. If it is an idle frame, strip it directly from the ring.
3. Compare the source address of the frame with that of the local node to judge whether this frame was sent by this node. If the addresses are the same, check whether the ringlet is mismatched; if it is, determine whether the node is in wrapping protection. If the ring is already in the wrapping protection state, forward the frame directly; otherwise discard it.
4. Judge whether the destination address of the frame is the same as that of the local node. If so, check whether the frame is a control frame: if yes, send this frame to the CPU for processing; if no, deliver it to the egress specified in 802.3 for copy processing. If the destination address does not identify the local node, forward the frame to the transit channel.
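The ingress decisions above can be condensed into one sketch. This is a simplification (frames are plain dictionaries with invented field names, and the ringlet-mismatch check of step 3 is folded into a single wrapped flag):

```python
def ingress(frame: dict, my_addr: str, wrapped: bool) -> str:
    """Sketch of the RPR ingress decisions: TTL first, then idle frames,
    then the source-address check, then the destination-address check."""
    frame["ttl"] -= 1
    if frame["ttl"] <= 0:
        return "strip"                      # TTL expired (step 1)
    if frame["kind"] == "idle":
        return "strip"                      # idle frames never circulate (step 2)
    if frame["src"] == my_addr:
        # our own frame came back: forward only under wrapping protection (step 3)
        return "forward" if wrapped else "discard"
    if frame["dst"] == my_addr:             # destined to us (step 4)
        return "to_cpu" if frame["kind"] == "control" else "drop_to_client"
    return "forward"                        # not for us: stay on the ring
```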
Note
Drop: After Stack VLAN filtering, the local node can receive unicast or multicast frames from the ring that were transferred to it from other nodes. Unicast frames are stripped from the ring and sent to the corresponding user ports. Multicast frames are sent to the corresponding user ports and also transited onward.
Transit: Frames received from the ring at the local node are transited through the Primary Transit Queue (PTQ) and the Secondary Transit Queue (STQ) channels. The data frames in the PTQ and STQ are inserted directly into the transfer ports of the source ring.
Strip: The local node receives frames from the ring that will not be forwarded to downstream nodes, and removes them from the ring.
RPR processing on the transmitting side (egress): The data frames at the transmitting side include data to be forwarded as well as data frames and control frames inserted at this node. For an inserted data frame, its destination address and ringlet selection are determined through topology discovery and the routing table, and it is then sent to the corresponding insertion queue (A, B, or C) according to its priority. The key point of processing at the transmitting side is the dispatching of queues. The dispatching priority sequence is as follows:
PTQ over threshold > STQ close to limit threshold > CTL > PTQ > STQ over threshold > A > B > eB > C > STQ
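The dispatch order can be sketched as a strict-priority scan over the queues. The queue names below paraphrase the sequence above (eB stands for excess class-B traffic); the representation is our own:

```python
# Dispatch order at the egress, paraphrasing the priority sequence in the text.
PRIORITY = ["PTQ_over_threshold", "STQ_near_limit", "CTL", "PTQ",
            "STQ_over_threshold", "A", "B", "eB", "C", "STQ"]

def dispatch(queues: dict) -> str:
    """Return the name of the first non-empty queue in priority order."""
    for name in PRIORITY:
        if queues.get(name):
            return name
    return "idle"
```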
Confidential
Providing a mechanism to divide the available bandwidth fairly among the nodes on the ring
Applicable only to low-priority services and to the excess part of medium-priority services, that is, the Excess Information Rate (EIR) data frames within the medium-priority services
The RPR fairness algorithm can control the bandwidth of the two ringlets separately. That is, there are two fairness protocol instances on an RPR ring, each controlling the bandwidth of one ringlet.
1. Fairness algorithm module block diagram
Fig. 4.2-4 shows the fairness algorithm modules.
Fig. 4.2-4
The fairness algorithm module performs the following functions:
Receiving and processing fairness frames
Calculating the fair allowed rate of the local node
Controlling the fair traffic rate with the shaper
Determining the fair rate to be propagated
Generating and sending fairness control messages
2. Bandwidth adjustment technique
The RPR fairness algorithm initiates bandwidth adjustment by detecting congestion. When congestion occurs at a node, the node calculates a fair rate from its add_rate and its normalized weight, and dispatches this fair rate to the upstream node on the reverse ringlet. After receiving the fair rate, the upstream node adjusts its sending rate so that it does not exceed the fair rate. A node receiving a fair rate may respond in two ways:
If the node is congested, it selects the minimum of its own fair rate and the received fair rate, and then dispatches that rate to its upstream node.
If the node is not congested, it forwards the received fair rate to its upstream node unchanged.
3. Example of bandwidth adjustment based on fairness algorithm
As shown in Fig. 4.2-5, the non-reserved bandwidth of the RPR ring (the ring bandwidth minus the bandwidth reserved for class A0 services) is 500 Mbit/s; that is, the maximum ring bandwidth that can be controlled by the fairness algorithm is 500 Mbit/s. There are convergence services among sites 1, 2 and 3: the services of site 1 and site 2 are converged at site 3.
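The congestion-driven rate adjustment described above can be sketched as follows. The node dictionaries and the rate values are assumptions for illustration; the real algorithm also involves weights, timers and ramp-up behaviour.

```python
def advertise(node, received_rate):
    """Return the fair rate this node propagates to its own upstream node."""
    # A node limits its sending (add) rate to the fair rate it receives.
    node["add_rate"] = min(node["add_rate"], received_rate)
    if node["congested"]:
        # Congested node: propagate the minimum of its own fair rate and
        # the received fair rate.
        return min(node["local_fair_rate"], received_rate)
    # Uncongested node: forward the received fair rate unchanged.
    return received_rate

# A congested node advertising 150 Mbit/s clamps its upstream neighbours:
congested = {"add_rate": 400, "congested": True, "local_fair_rate": 150}
upstream = {"add_rate": 50, "congested": False, "local_fair_rate": 500}
rate = advertise(congested, 200)    # propagates min(150, 200)
rate = advertise(upstream, rate)    # forwarded unchanged
```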
2. Process of topology discovery
In the case of ring initialization, access of new nodes, ring protection switching, or startup of the RPR auto-reorganization mode, a node generates a topology discovery packet containing its own MAC address and state information. When other nodes receive the packet, they insert their own MAC addresses and state information into it and forward it to their downstream nodes. In this way, every node learns the number of nodes on the ring and the queue information, and then forms the topology map. RPR topology discovery can handle various topology changes, such as adding or deleting nodes on the ring and broken links. Every node can discover the topology automatically. The process of topology discovery is similar to a link-state protocol such as Open Shortest Path First (OSPF): information is transferred in corresponding control messages, and defined triggers cause a node to dispatch the messages. Fig. 4.2-6 illustrates the basic procedure of topology discovery.
As shown in Fig. 4.2-6, the line between S7 and S1 of the closed ring has been damaged. The nodes S1 and S7 broadcast topology (TP) frames to indicate the network boundary. Such TP frames trigger all nodes that support the Steering protection mode to change or keep the direction in which data is transferred (the principle is to avoid failed paths). However, these frames do not change the behavior of nodes that support the Wrapping protection mode. When the new topology becomes stable, each node checks the topology with its adjacent nodes. If the topology is correct, an open-loop topology structure is stored in the topology database of each node. Although RPR topology discovery is a periodic activity, it can also be initiated on demand; that is, a node on the ring can generate a topology frame whenever necessary.
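The collection of MAC addresses and state information can be sketched as a flooding loop. The data shapes are assumptions for illustration; real TP frames also carry TTLs, sequence numbers and protection state.

```python
def originate_tp(node):
    """A node starts a topology (TP) packet with its own MAC and state."""
    return [(node["mac"], node["state"])]

def on_receive_tp(node, tp_packet):
    """Each receiving node appends its own entry before forwarding downstream."""
    return tp_packet + [(node["mac"], node["state"])]

# A TP packet travelling around a three-node ring collects every node's entry,
# so each node can build the full topology map.
ring = [{"mac": m, "state": "ok"} for m in ("S1", "S2", "S3")]
tp = originate_tp(ring[0])
for node in ring[1:]:
    tp = on_receive_tp(node, tp)
topology = dict(tp)    # each node's view: MAC -> state
```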
Fig. 4.2-7
Normally, S2 sends data to S6 along ringlet 0. When a line fails, S2 sends data to S6 along ringlet 1 instead, to avoid the failed path.
2. Wrapping protection
Fig. 4.2-8
Under the wrapping protection mode, the sending node keeps using the original ringlet instead of trying to avoid the failed path. The protection action occurs at the boundary of the failure, as shown in Fig. 4.2-8. In the normal state, the sending node S2 transmits data to S6 along ringlet 0. If the fiber between S3 and S4 is broken, S2 still sends data to S3 along ringlet 0, and the node S3 then wraps the data onto ringlet 1 and sends it to S6. During this process the data packets are not stripped, which avoids out-of-sequence frames during protection; only when the data reaches S6 is it stripped from the ring. When the topology becomes stable, re-steering can be selected to optimize the path, that is, to send data through the shortest path S2 > S1 > S7 > S6.
3. Implementation of RPR protection
The key to RPR protection is knowing which path has problems. The topology structure must be known at any moment, so that the node can act according to the corresponding configuration. The RPR network supervises the network topology continuously while transferring data. Once any topology change is found, it carries out steering protection or wrapping protection according to the corresponding configuration.
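The difference between steering and wrapping on the S1..S7 example ring can be sketched as follows; the ring layout helpers and the path representation are illustrative assumptions, not the 802.17 protection state machines.

```python
RING = ["S1", "S2", "S3", "S4", "S5", "S6", "S7"]

def path(src, dst, ringlet):
    """Nodes visited from src to dst along one ringlet (0: forward, 1: reverse)."""
    i = RING.index(src)
    step = 1 if ringlet == 0 else -1
    out = [src]
    while out[-1] != dst:
        i = (i + step) % len(RING)
        out.append(RING[i])
    return out

def crosses(p, broken_span):
    """Does path p traverse the broken fiber span?"""
    a, b = broken_span
    return any({p[k], p[k + 1]} == {a, b} for k in range(len(p) - 1))

def steering_path(src, dst, broken_span):
    # Steering: the source re-selects the ringlet that avoids the failure.
    p0 = path(src, dst, 0)
    return p0 if not crosses(p0, broken_span) else path(src, dst, 1)

def wrapping_path(src, dst, broken_span):
    # Wrapping: keep the original ringlet; wrap onto the other ringlet at the
    # failure boundary and continue to the destination.
    p0 = path(src, dst, 0)
    if not crosses(p0, broken_span):
        return p0
    for k in range(len(p0) - 1):
        if {p0[k], p0[k + 1]} == set(broken_span):
            wrap_at = p0[k]        # node at the failure boundary
            break
    return p0[:k + 1] + path(wrap_at, dst, 1)[1:]
```

With the fiber between S3 and S4 broken, steering sends S2's traffic to S6 via S1 and S7, while wrapping keeps ringlet 0 as far as S3 and then travels back around on ringlet 1, matching the behaviour described in the text.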
For MSTP devices with embedded RPR, users can specify whether to adopt the RPR MAC layer protection, the SDH physical layer protection, or both. If both are enabled, the RPR protection switching can be delayed to coordinate switching between the layers and avoid the two kinds of protection switching overlapping.
Independent Layer 2-based RPR scheme
This scheme is applicable to the access layer and convergence layer of IP metro area networks. Some vendors provide broadband multi-service solutions that optimize IP and support TDM services by combining the Layer 2-based scheme with MPLS, synchronization technology, Coarse Wavelength Division Multiplexing (CWDM) and TV video broadcast technology. In addition, the Layer 2-based RPR products provided by some vendors have strong networking capabilities: they support linear, tangent and dual-ring interconnection topology structures as well as Dual Node Interconnection (DNI) protection. RPR products with these enhanced functions can also be used in the core layer of IP metro area networks in small cities. However, the independent Layer 2-based RPR scheme is seldom used at present, because the construction cost of a pure RPR scheme is still very high.
Router-based single-board RPR scheme
This scheme is mainly applicable to the core layer and convergence layer of IP metro area networks. Most vendors implement RPR functions by adding boards to existing routers. The router-based scheme can be regarded as an optimization of existing router networking: it greatly improves the protection performance, achieves the 50 ms ring protection function, and saves fiber resources.
MSTP-based RPR scheme
The MSTP-based RPR scheme separates an independent path from the MSTP ring bandwidth to support the RPR technology. Compared with traditional SDH, MSTP adopts Layer 2 switching to implement Ethernet service bandwidth sharing, completes the mapping from Ethernet frames to SDH VCs through GFP encapsulation, and improves the flexibility and reliability of virtual container bandwidth allocation through virtual concatenation and LCAS. But because of the inherent shortcomings of Ethernet in ring networks, many vendors are considering adopting the RPR technology in the new generation of MSTP to provide integrated solutions supporting data services.
At present, TDM services occupy the dominant position, and the MSTP-based RPR scheme is the best multi-service transport platform; however, the commercial application of the corresponding products still needs more time. When data services replace TDM services as the dominant ones, the independent Layer 2-based RPR scheme will become the best multi-service transport platform, and there are already some mature, widely used products corresponding to this scheme. Because there are always data services to be processed in IP metro area networks, it can be expected that the independent Layer 2-based RPR scheme and the router-based RPR scheme will be used widely in IP metro area network construction as good optimization solutions.
Fig. 4.3-1
As shown in Fig. 4.3-1, the node architecture basically represents the RPR protocol reference model. In this model, RPR is located at the data link layer and comprises the logical link control sublayer, the MAC control sublayer and the MAC data channel sublayer. The logical link control sublayer transfers data to one or more peer logical link control sublayers through the MAC service interface. The MAC data channel sublayer performs access control and data transmission between the node and a particular ringlet. RPR MAC frames are transferred between the MAC control sublayer and the MAC data channel sublayer.
5 Theory of MPLS
Key points
Architecture of MPLS
Basic working mode of MPLS
Advantages of MPLS
Below the IP layer, there is a link layer between the IP and the DWDM, which is used for transfer, switching and transmission. There are only two kinds of link layer transport technology applicable to IP packets at present: SDH with the Synchronous Transfer Mode (STM), and cells with the Asynchronous Transfer Mode (ATM). MPLS is a technology applicable to both SDH and ATM, and in the future it can be developed into a technology for any particular link layer. MPLS also supports network management, traffic engineering, QoS and CoS. With MPLS, IP services can be carried directly over optics (other corresponding modes can also be used). Actually, MPLS is not only a technology applied in IP over ATM, but also an interlayer network technology between Layer 3 and Layer 2, researched and developed as an architecture. Currently, MPLS can be used in ATM networks and FR networks. Furthermore, it has become the focus, as the preferred technology, in the research and development of IP over optics. Some people even say that MPLS is the terminator of ATM. In any case, MPLS and ATM cannot replace each other, for their function positioning does not overlap. ATM implements the ATM cell layer and AAL layer of the four-layer B-ISDN reference model, corresponding to the functions of the second layer (data link layer) in the ISO-OSI seven-layer reference model. MPLS implements the functions of a comparatively independent interlayer between the third layer (network layer) and the second layer (data link layer) of the seven-layer reference model. It does not have the complete functions of the data link layer. Therefore, MPLS can implement the actual transfer function of the data link layer only by depending on a particular link layer, such as the ATM cell layer or the FR-SDH layer of frame relay. Fig. 5.1-1 shows the function positioning of MPLS.
Fig. 5.1-1
As shown in Fig. 5.1-1, the MPLS interlayer greatly simplifies and specifies the transform protocols between L2 and L3. Without it, each IP packet in the IP network can arrive at a particular link layer only after being processed by multiple corresponding intermediate protocols. Although there is only one protocol suite at the data link layer of the ATM network, the network layer needs multiple corresponding interworking protocols for every kind of service from various networks. The MPLS interlayer is absolutely necessary for IP over optics: a link layer is needed between the IP at the network layer and the optics at the physical layer. The MPLS interlayer can satisfy the requirements of the existing FR-SDH and ATM link layers, and in the future it can also adapt to any new link layer technology. The powerful MPLS can implement many functions and performances that are difficult to realize in common route networks, such as explicit routes, traffic engineering, QoS and CoS. Moreover, the problems caused by the restrictions of IP over ATM and IP over FR, such as flexibility, generality and SVC contention, can be solved. Although the requirements of various users and services on the Internet become more and more complicated, as does the classification of Forwarding Equivalence Classes (FECs), all of them can be processed once, on entering the MPLS domain. Within the domain, the routers that perform label switching and forwarding are not affected, and the highest transport switching capability is still required based on O(n) traffic. Therefore, the routing protocol and interconnecting network architecture of MPLS have great flexibility, and MPLS can guarantee the security and long-term reliability of MPLS networks.
Forwarding Equivalence Class (FEC): a group of IP packets that can be processed in the same mode. It can also be regarded as taking the same path, or receiving the same forwarding treatment.
Label (L): a short, fixed-length identifier used to identify the FEC of a forwarded packet group. It is valid locally.
Label Switch Path (LSP): at the peer layer, the path through one or more LSRs corresponding to a particular FEC onto which a group of IP packets is mapped.
Label Switch Router (LSR): a device with MPLS node functions and the function of forwarding IP packets natively at Layer 3.
MPLS Domain: a contiguous aggregation of nodes running the MPLS protocol, such as an autonomous system or an LSR management domain.
MPLS Node: a node running the MPLS protocol. It can be discovered by, form adjacencies with and converse through the MPLS control protocol, and it runs one or more routing protocols. An MPLS node has the label switching and forwarding functions as well as the ability to process IP packets natively at Layer 3.
MPLS Edge Node: an MPLS node that connects the MPLS domain to a node outside the domain that does not run MPLS.
MPLS Ingress: an MPLS edge node that processes IP packet traffic entering the MPLS domain.
MPLS Egress: an MPLS edge node that processes IP packet traffic leaving the MPLS domain.
2. MPLS running
MPLS runs in an MPLS domain, and it can also run between MPLS domains at the same time. MPLS is also allowed to run in mixed networks of MPLS and non-MPLS nodes. Fig. 5.2-1 a) shows the configuration of an MPLS domain. The edge node close to users, the edge label switch router (ELSR), which is connected to extra-domain nodes, has complex processing functions. The inner node in the domain, the inner label switch router (ILSR), which is not connected to extra-domain nodes, performs the label switching and forwarding functions and is kept as simple as possible. MPLS running can be divided into two phases: the first is the automatic generation of the routing tables, and the second is the forwarding of IP packets. In actual running, these two phases are carried out alternately.
Phase One: Automatic generation of the routing tables
Step 1: Establish topology routes between nodes in the MPLS domain in the same mode as an autonomous system of a common route network. Then run the OSPF routing protocol (other routing protocols can also be run at the same time) so that all nodes learn the topology information of the domain. With the participation of the management layer, MPLS can allocate flows evenly across the whole domain and optimize the transmission performance of the network. The Border Gateway Protocol (BGP) is run mainly between domains, to provide and obtain reachability information for adjacent domains and the backbone core network.
Step 2: Run the Label Distribution Protocol (LDP) to establish adjacencies between nodes in the MPLS domain. Classify the FECs according to reachable destination addresses and establish the Label Switch Paths (LSPs). Allocate labels (L) to each FEC along its LSP and generate the forwarding routing table on each Label Switch Router (LSR).
Step 3: Maintain and update the routing tables.
Phase Two: Forwarding IP packets in the MPLS domain
Step 1: After an IP packet enters the edge node of the MPLS domain, the ELSR recognizes the IP packet header, looks up the corresponding FEC and the LSP to which it is mapped, and then inserts the label into the packet. As a labeled packet, it is output to the specified port.
Step 2: The next-hop ILSR in the MPLS domain receives the labeled packet from the input port. Taking the label in the packet as a pointer, the ILSR looks up its forwarding routing table, takes out the new label, and replaces the old one in the packet with it. The newly labeled packet is output to the next hop from the specified port. When the IP packet arrives at the penultimate hop, the hop before the MPLS Egress, the label in the packet is not switched; it is simply popped out of the packet, and the packet is forwarded without a label. Since the Egress is the output port for the destination address, it is unnecessary to forward the packet according to the label: the Egress reads the packet header directly and forwards the packet to the final destination address. This processing mode ensures that every LSR examines and processes each packet only once during the whole MPLS running procedure, and it facilitates the layered processing of the forwarding function.
Step 3: After receiving the IP packet without a label, the Egress LSR of the MPLS domain reads the packet header and outputs the IP packet from the specified port according to the final destination address.
The example in Fig. 5.2-1 is explained as follows. The terminal I is connected to ELSR A, and the terminal II to ELSR B. There is a label switch path LSP (A > R1 > R2 > R4 > R6 > B) from A to B. The IP packets from terminal I to terminal II are mapped onto the particular FEC BA. The label allocation along the LSP is: A > R1 = LA, R1 > R2 = L1, R2 > R4 = L2, R4 > R6 = L4.
The label allocation is completed in Phase One, which can be intervened in by the management layer. The corresponding forwarding routing tables are formed on each LSR, as shown in Fig. 5.2-2.
Fig. 5.2-2 MPLS Forwarding
ELSR A:  FEC BA, Out Label LA, Out Port 1
ILSR R1: In Label LA, In Port 1, FEC BA, Out Label L1, Out Port 2
ILSR R6: In Label L4, In Port 4, FEC BA, Out Label popped, Out Port 1
The forwarding of IP packets from I to II is carried out in three steps:
Step 1: The IP packet sent from terminal I to A is a pure packet without a label. The ELSR A reads and analyzes the IP packet header, then looks up the FEC BA to which the packet is mapped. After that, the ELSR A reads out the label LA and the output port 1, and encapsulates the label LA with the IP packet as a labeled packet. The labeled IP packet is sent out from output port 1 of the ELSR A.
Step 2: The packet forwarded from R1 to B is a labeled one. The ILSR R1, the next hop of A, receives the labeled packet from input port 1 and reads out the label LA as a pointer. R1 finds the new label L1 and output port 2 in its forwarding routing table, and then replaces the label LA with L1. The packet with the new label is sent out from output port 2 of R1. The processing procedure on the ILSRs R2 and R4 is the same as that of R1. When the IP packet reaches R6, the old label L4 is popped out and no new label is inserted. R6 sends out the label-free IP packet from its output port 1.
Step 3: The packet forwarded from B to terminal II is a pure packet. After receiving the label-free IP packet from input port 1, the ELSR B reads the packet header directly and sends the IP packet to terminal II according to the destination address.
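The three steps above can be traced with a small sketch of the example LSP. The label tables mirror the allocation in the text (A > R1 = LA, R1 > R2 = L1, R2 > R4 = L2, R4 > R6 = L4; R6 pops the label); the table shapes are illustrative, not a real LSR implementation.

```python
PUSH = {"A": "LA"}                               # ingress ELSR: FEC BA -> LA
SWAP = {"R1": ("LA", "L1"), "R2": ("L1", "L2"), "R4": ("L2", "L4")}
POP = {"R6": "L4"}                               # penultimate hop pops

def forward(lsp, packet):
    """Walk the packet through the LSP, recording the label after each hop."""
    trace = []
    for node in lsp:
        if node in PUSH:
            packet["label"] = PUSH[node]          # ingress: push the label
        elif node in SWAP:
            old, new = SWAP[node]
            assert packet["label"] == old
            packet["label"] = new                 # core ILSR: swap the label
        elif node in POP:
            assert packet["label"] == POP[node]
            packet["label"] = None                # penultimate hop: pop
        # egress B performs no label operation; it forwards on the IP header
        trace.append((node, packet["label"]))
    return trace

trace = forward(["A", "R1", "R2", "R4", "R6", "B"], {"dst": "II", "label": None})
```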
Traffic engineering
The rapid growth of traffic and the increasing demand for bandwidth force some core networks to adapt to more and more fork networks; therefore traffic engineering becomes more important. Today, IP over ATM is implemented through Permanent Virtual Circuits (PVCs), which are always configured manually, so the typical mode of traffic engineering in IP over ATM networks is manual allocation. Traffic engineering is difficult to carry out in common route networks. Load balancing can be achieved to a certain extent by adjusting the metrics of links in the network, but using this method to meet the requirements of traffic engineering is restricted in many aspects: because there are many alternative paths between nodes in the network, it is difficult to obtain balanced traffic flow by adjusting the per-hop route metrics of data packets. MPLS provides a direct measurement mechanism for each pair of input and output nodes. It allows the data flows from a particular input node to a particular output node to be labeled individually. Besides, MPLS allows establishing high-efficiency explicit LSP routes, which ensures that particular data flows can be forwarded through the optimal path directly. The most difficult part of implementing traffic engineering is the selection of each LSP route. MPLS can handle this by configuring routes manually, or by recalculating with routing protocols that advertise the traffic load along candidate routes and then allocating the traffic flows.
Quality of Service (QoS)
A QoS route is a route selected for a particular data flow so as to satisfy the QoS requirements of that flow. In many cases, QoS routing adopts explicit routes, for the most important item of a QoS route is bandwidth guarantee, which is the same as in traffic engineering.
Various service classes
The demand of some users for more special services on the Internet increases day by day. For example, the source address, destination address, input interface and other characteristics of a forwarded IP packet must be known for services provided by some Internet Service Providers (ISPs). It is impossible for a medium-sized ISP to get all the needed information from the routers on the network. Besides, some information, such as the input interface, is difficult to obtain on routers other than the ingress node of the network. The best method to configure CoS and QoS is to map IP packets to the most appropriate CoS and QoS classes at the network ingress node, and to identify these IP packets in some way. MPLS provides an effective method to identify any particular IP packet in relation to CoS and QoS: the mapping from an IP packet to a particular FEC is completed on the ingress node of the MPLS domain. MPLS thus makes it easy to map IP packets to the proper CoS and QoS classes, which is difficult in other modes.
Function division
MPLS must support the convergence and forwarding of data flows. The label has a granularity characteristic: it can identify a single original user data flow at the finest, or one data flow converged from all the flows of a switch or router at the coarsest. In this way, route processing functions can be classified and allocated to different network units. For example, the edge nodes close to users are configured with complex processing functions, while the configuration of the core part of the network is kept as simple as possible, adopting the pure-label forwarding mode.
Unified forwarding mode for different service types
MPLS can provide various service types on the same network with a unified forwarding mode, such as IP services, FR services, ATM services, Tunneling Protocol (TP) services and VPN services.
2. Compared with ATM networks and FR networks, MPLS has the following advantages:
Flexibility of routing protocols
In the core network of IP over ATM, n^2 logical links must be established to connect the routers at the peer layer. In MPLS, the necessary communication of each router at the peer layer decreases to that with the routers connected to it directly.
The highest transport and switching processing capacity is required according to the O(n) flow in the whole network.
General operations on packet and cell media
MPLS adopts a unified method for routing and forwarding on packet and cell media, which allows using unified methods for traffic engineering, QoS, CoS and other performance and function requirements. That means the same label can be used on ATM, FR and other link layer media.
Easy management
The management of MPLS networks is expected to be simplified by using general routing protocols and label allocation methods across various media.
Elimination of routing storms
MPLS eliminates the need to use the Next-Hop Resolution Protocol (NHRP) and to establish Switched Virtual Circuits (SVCs) directly on demand, and therefore solves the SVC contention problem caused by route updates. It also solves the delay problem related to the direct establishment of SVCs.
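The single ingress classification described under "Various service classes" can be sketched as follows; the match rules, field names and class labels are invented for illustration, not taken from any real MPLS implementation.

```python
def classify_fec(pkt):
    """Map an IP packet to (FEC, CoS class) once, at the MPLS ingress node.

    The ingress is the only place where fields such as the input interface
    are easy to obtain, which is why the mapping is done here.
    """
    if pkt.get("dst_net") == "10.2.0.0/16" and pkt.get("in_if") == "ge-0/0/1":
        return ("FEC-VPN-A", "gold")      # hypothetical premium VPN traffic
    if pkt.get("proto") == "udp" and pkt.get("dport") == 5060:
        return ("FEC-VOICE", "ef")        # hypothetical voice traffic
    return ("FEC-BEST-EFFORT", "be")      # everything else

fec, cos = classify_fec({"dst_net": "10.2.0.0/16", "in_if": "ge-0/0/1"})
```

All later LSRs then act only on the label bound to the FEC, never on these header fields again.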
Appendix A Abbreviations
Abbreviation  Full Name
AFR     Absolute Frequency Reference
AFEC    Advanced FEC
AIS     Alarm Indication Signal
APR     Automatic Power Reduction
APS     Automatic Protection Switching
APSD    Automatic Power Shutdown
APSF    Automatic Protection Switching for Fast Ethernet
ASE     Amplified Spontaneous Emission
AWG     Array Waveguide Grating
BER     Bit Error Ratio
BLSR    Bidirectional Line Switching Ring
BSHR    Bidirectional Self-Healing Ring
CDR     Clock and Data Recovery
CMI     Code Mark Inversion
CODEC   Code and Decode
CPU     Center Process Unit
CRC     Cyclic Redundancy Check
DBMS    Database Management System
DCC     Data Communications Channel
DCF     Dispersion Compensation Fiber
DCG     Dispersion Compensation Grating
DCN     Data Communications Network
DCM     Dispersion Compensation Module
DCF     Dispersion Compensating Fiber
DDI     Double Defect Indication
DFB-LD  Distributed Feedback Laser Diode
DSF     Dispersion Shifted Fiber
DGD     Differential Group Delay
DTMF    Dual Tone Multi-Frequency
DWDM    Dense Wavelength Division Multiplexing
DXC     Digital Cross-connect
EAM     Electrical Absorption Modulation
ECC     Embedded Control Channel
EDFA    Erbium Doped Fiber Amplifier
Abbreviation  Full Name
EFEC    Enhanced FEC
EX      Extinction Ratio
FDI     Forward Defect Indication
FEC     Forward Error Correction
FPDC    Fiber Passive Dispersion Compensator
FWM     Four Wave Mixing
GbE     Gigabit Ethernet
GUI     Graphical User Interface
IP      Internet Protocol
LD      Laser Diode
MDI     Multiple Document Interface
MCU     Management and Control Unit
MOADM   Metro Optical Add/Drop Multiplexer Equipment
MBOTU   Sub-rack Backplane for OTU
MQW     Multiple Quantum Well
MSP     Multiplex Section Protection
MST     Multiplex Section Termination
NCP     Net Control Processor
NDSF    Non Dispersion Shifted Fiber
NE      Network Element
NNI     Network Node Interface
NMCC    Network Manage Control Center
NRZ     Non Return to Zero
NT      Network Termination
NZDSF   Non-Zero Dispersion Shifted Fiber
OA      Optical Amplifier
OADM    Optical Add/Drop Multiplexer
OBA     Optical Booster Amplifier
OCh     Optical Channel
ODF     Optical Fiber Distribution Frame
ODU     Optical Demultiplexer Unit
OGMD    Optical Group Mux/DeMux Board
OHP     Order Wire
OHPF    Overhead Processing Board for Fast Ethernet
OLA     Optical Line Amplifier
OLT     Optical Line Termination
OMU     Optical Multiplexer Unit
ONU     Optical Network Unit
OP      Optical Protection Unit
Abbreviation  Full Name
OPA     Optical Pre-Amplifier
OPM     Optical Performance Monitor
OPMSN   Optical Protection for Mux Section without preventing resonance switch
OPMSS   Optical Protection for Mux Section with preventing resonance switch
OSC     Optical Supervisory Channel
OSCF    Optical Supervision Channel for Fast Ethernet
OSNR    Optical Signal-Noise Ratio
OTM     Optical Terminal
OTN     Optical Transport Network
OTU     Optical Transponder Unit
OXC     Optical Cross-connect
PDC     Passive Dispersion Compensator
PMD     Polarization Mode Dispersion
PDL     Polarization Dependent Loss
RZ      Return to Zero
SBS     Stimulated Brillouin Scattering
SDH     Synchronous Digital Hierarchy
SDM     Supervision Add/Drop Multiplexing Board
SEF     Severely Errored Frame
SES     Severely Errored Second
SFP     Small Form Factor Pluggable
SLIC    Subscriber Line Interface Circuit
SMCC    Sub-network Management Control Center
SMT     Surface Mount
SNMP    Simple Network Management Protocol
SPM     Self-Phase Modulation
SRS     Stimulated Raman Scattering
STM     Synchronous Transfer Mode
SWE     Electrical Switching Board
TCP     Transmission Control Protocol
TFF     Thin Film Filter
TMN     Telecommunications Management Network
VOA     Variable Optical Attenuator
WDM     Wavelength Division Multiplexing
XPM     Cross-Phase Modulation