

NSI Ireland

Transmission Network Design & Architecture Guidelines


Version 1.3 Draft

Reference: Transmission network design & architecture guidelines
Version: 1.3 Draft
Date: 10 June 2013
Author(s): David Powders
Filed As:
Status: Draft Version (1.3)
Approved By:
Signature Date:



Document History
Version 1.0 Draft - 27.01.2013 - First draft
Version 1.1 Draft - 25.02.2013 - Incorporating changes requested from parent operators: Resilience, Routing, Performance monitoring
Version 1.2 Draft - 14.05.2013 - Updated BT TT Routing section 2.3
Version 1.3 Draft - 10.06.2013 - Section 2.3 E-Lines added to TT design; Section 2.6 Updated dimensioning rules; Section 2.6.4 Updated Policing allocation per class; Section 3.x added (Site design)

Reference documents
1. (2012.12.27) OPTIMA BLUEPRINT V1.0 DRAFT FINAL.doc
2. Total Transmission IP design - DLD V2 2 (2)[1].pdf



Contents
Document History
Reference Documents
1.0 Introduction
  1.1 Background
  1.2 Scope of document
  1.3 Document structure
2.0 Proposed network architecture
  2.1 Transmission network
  2.1 Data centre solution
    2.1.1 Physical interconnection
  2.2 Self build backhaul network
    2.3.1 Self build fibre diversity
  2.3 Managed backhaul
    2.3.1 TT Network contract
    2.3.3 Backhaul network selection
  2.4 Backhaul routing
    2.4.1 Legacy mobile services
    2.4.2 Enterprise services
    2.4.3 IP services
      2.4.3.1 L3VPN structure
      2.4.3.2 IP service resilience
  2.5 Access Microwave network
    2.5.1 Baseband switching
    2.5.2 Microwave DCN
    2.5.3 Backhaul interconnections
  2.6 Network topology & traffic engineering
    2.6.1 Access Microwave topology & dimensioning
    2.6.2 Access MW resilience rules
    2.6.3 Backhaul & Core transmission network dimensioning rules
    2.6.4 Traffic engineering
  2.7 Network synchronisation
    2.7.1 Self built transmission network
    2.7.2 Ethernet managed services
    2.7.3 DWDM network
    2.7.4 Mobile network clock recovery
      2.7.4.1 Legacy RAN nodes
      2.7.4.2 Ericsson SRAN 2G
      2.7.4.3 Ericsson SRAN 3G & LTE
      2.7.4.4 NSN 3G
  2.8 Data communications network (DCN)
    2.8.1 NSN 3G RAN control plane routing
    2.8.2 NSN 3G RAN O&M routing
  2.9 Transmission network performance monitoring
3.0 Site configuration
  3.1 Core sites
  3.2 Backhaul sites
    3.2.1 BT TT locations
  3.3 Access locations
    3.3.1 Access sites (Portacabin installation)
    3.3.2 Access site (Outdoor cabinet installation)



Figures
Figure 1 Proposed NSI transmission solution
Figure 2 Example data centre northbound physical interconnect
Figure 3 Dublin Dark fibre IP/MPLS network
Figure 4 National North East area IP/MPLS microwave network
Figure 5a BT Total Transmission network
Figure 5b NSI logical and physical transmission across the BT network
Figure 7 Access Microwave topology
Figure 8 Example VSI grouping configuration
Figure 10 IP/MPLS traffic engineering
Figure 11 Enterprise traffic engineering
Figure 12 Downlink traffic control mechanism
Figure 13 Normal link operation
Figure 14 Self built synchronisation distribution
Figure 15 1588v2 distribution over Ethernet Managed service

Tables
Table 1: Self build fibre diversity
Table 2: TT Access fibre diversity
Table 3: List of L3VPNs required
Table 4: Radio configuration vs air interface bandwidth
Table 5: Feeder link reference
Table 6: CIR per technology reference
Table 5: Sample Quality of Service mapping
Table 6: City Area (Max link capacity = 400Mb\s)
Table 7: Non City Area (Max link capacity = 200Mb\s)
Table 7: Synchronisation source and distribution summary
Table 8: DCN network configuration per vendor
Table 9: NSI transmission network KPIs and reporting structure
Table 10: Core site build guidelines
Table 11: Backhaul site build guidelines
Table 12: Access site categories
Table 11: Access site consolidation - No 3PP services in place
Table 12: Outdoor cabinet consolidation - existing 3PP CPE on site



1.0 Introduction
1.1 Background

The aim of this document is to detail the design and architecture principles to be applied across the Netshare Ireland (NSI) transmission network. NSI, as detailed in the transition document, is tasked with collapsing the existing transmission networks inherited from Vodafone Ireland and H3G Ireland onto one single network carrying all of each operator's enterprise and mobile services. As detailed in the transition document, it is NSI's responsibility to ensure that the network is future proof, scalable and cost effective, with the capability to meet the short term requirements of network consolidation and the long term requirements of service expansion.

1.2 Scope of document

This document will detail the proposed solutions for the access and backhaul transmission networks and the steps required to migrate from the current separate network configurations to one consolidated network. While the required migration procedures are detailed within this document, the timescales required to complete these works are out of scope.

1.3 Document structure

The document is structured as follows:

Section 2 describes the desired end to end solution for the consolidated network and the criteria used to arrive at each design decision

Section 3 covers the site design and build rules



2.0 Proposed Network architecture


As described in section 1.1, NSI is required to deploy and manage a transmission network which is future proof, scalable and cost effective. As services, particularly mobile, move to an all-IP flat structure it is important to ensure that the transmission network evolves to meet this demand. Traditionally, transmission networks and the services that ran across them were linked in the sense that the service connections followed the physical media interconnections between the network nodes. For all-IP networks, where any-to-any logical connections are required, it is essential that the transmission network decouples the physical layer from the service layer. For NSI Ireland the breakdown between the physical and service layers can be described as:

Physical media layer
1. Tellabs 8600 & 8800 multiservice routers
2. Ethernet Microwave (Ceragon / Siae)
3. Dark Fibre (ESB / Eircom / e|net)
4. Vodafone DWDM (Huawei)
5. SDH (Alcatel Lucent / Ericsson)
6. POS / Ethernet (Tellabs)
7. Managed Ethernet services (e|net, UPC, ESBT, Eircom)
8. BT Total Transmission network (TT)

Service layer
o IP/MPLS (Tellabs / BT TT)
o L2 VPN (Tellabs / BT TT)
o E-Line (Ceragon / Siae)
o TDM access (Ceragon / Siae / Ericsson MiniLink)

Decoupling the physical media layer from the service layer gives NSI the flexibility to modify one layer without impacting the other. Once established, routing changes throughout the network are independent of the physical layer. In the same way, changes in the physical layer, such as new nodes or bandwidth changes, are independent of the service routing.


This in turn ensures that transmission network changes requiring 3rd party involvement are restricted primarily to the physical layer which, once established, should be minimal. While seamless MPLS from access point through to the core network is possible, for demarcation purposes the NSI transmission network will terminate at the core switch (BSC / RNC / SGw / MME / Enterprise gateway).

2.1 Transmission network


Figure 1 details the proposed solution for the NSI transmission network.
[Figure 1 diagram: the Netshare IP/MPLS core built on Tellabs 8860 routers at the data centres (DN680 VF Clonshaugh, Citadel) interconnected by nx10G LACP trunks, with HPD/680/GPOP SR12 nodes reached over the BT TT network and VLAN trunks down to mixed Ceragon/Siae access clusters and the RNCs. Legend: TT Ethernet trunk, Netshare GigE/POS trunk, Netshare VLAN trunk, Netshare LSP. Key annotations: the BT TT network is configured for L2 PtP circuits to each of the CDC locations, with dual nodes at the data centres available to load balance traffic from the distributed BPOP locations; L3VPNs are configured on the Netshare IP/MPLS network for each of the service types from each of the operators; within each access cluster every VLAN is a broadcast domain with MAC learning enabled throughout to allow layer 2 switching and no E-Lines in use; H3G service VLANs are UP VID 3170, CP VID 3180, O&M VID 3190, TOP VID 3200 and Ceragon O&M VID 3210.]

Figure 1 Proposed NSI transmission solution


To explain the proposed transmission solution in detail, the network will be broken into the following areas:

Data centre northbound interfaces
Self build backhaul
Managed backhaul
Backhaul routing
Access Microwave network
Network QoS & link dimensioning
DCN
Network synchronisation



2.1 Data Centre solution


2.1.1 Physical interconnection
VFIE and H3G operate their respective networks based on a consolidated core. All core switching (BSCs, RNCs, EPCs, master synchronisation, DCN, security) for both H3G and VFIE is located in Dublin across 4 data centres. They are:

1. CDC1 - DN680 - Vodafone, Clonshaugh (VFIE)
2. CDC2 - DN706 - BT, Citywest (VFIE)
3. CDC3 - DN422 - Data Electronics, Clondalkin (VFIE)
4. CDC4 - DNxxx - Citadel, Citywest (H3G)

Figure 2 below details the possible northbound connections at each data centre.

Figure 2 Example data centre northbound physical interconnect


NSI will deploy 2 x Tellabs 8800 multiservice routers (MSRs) at each of the data centres.


Two routers are required to ensure routing resilience for the customer traffic. The 8800 hardware will interface directly at 10Gb\s, 1Gb\s & STM-1 with the core switches, DCN and synchronisation networks for both operators. Each of the data centres will be interconnected using n x 10Gb\s rings. RSVP LSPs are not supported on interfaces in a Link Aggregation Group (LAG) in the current 8800 release, so multiple 10Gb\s rings can be used to transport traffic from both operators. In the first deployment 1 x 10Gb\s ring will be deployed, which can be upgraded as required. Consideration was given to a meshed MPLS core; however, the nx10Gb\s ring was deemed to be technically sufficient and more cost effective. This design may be revisited in the future based on capacity, resilience and expansion requirements. Interfacing to the out of band DCN (mobile and transmission networks) and synchronisation networks will be realised through 1Gb\s interfaces. All interfaces to legacy TDM and ATM systems are achieved through the deployment of STM-1c and STM-1 ATM interfaces. Physical and layer 3 monitoring is active on all trunk interfaces so that, in the event of a link failure, all traffic is routed to the diverse path and the required status messaging and resilience advertisements are propagated throughout the network. These will be explained in detail in each of the sections dealing with service provisioning.

2.2 Self build backhaul network


Self build refers to network hardware and transmission links that are within the full control of NSI in terms of physical provisioning. The self build backhaul network interconnects the aggregation sites and the core data centre locations via a mix of Ethernet, Packet over SDH (POS) and SDH point to point trunks. The service layer will be IP/MPLS based on the Tellabs 8600 and 8800 MSR hardware. Figures 3 and 4 are examples of the proposed network structure.




[Figure 3 diagram: "Dublin Dark Fibre v1.0" topology showing the Dublin Tellabs routers and dark fibre trunks across ISIS areas 49.0031, 49.0032 and 49.0033. Legend: Core Dublin ring STM16 (future 10G/nx10G/40G), ISIS L2-only or L1-2 if between routers in the same location; PoC2 connections GE (future 10G), ISIS L1-2 intra-area links or L2-only inter-area links; PoC3 connections GE (future subrate_10G/line_rate_10G), ISIS L1-only; PoC3 connections GE, ISIS L1-only; Sync priorities 1, 2 and 3.]

Figure 3 - Dublin Dark fibre IP/MPLS network


In the network, dark fibre from Eircom, ESB and e|net will be used as the physical media interconnect. Interconnections, based on aggregation requirements, will be at speeds of 1Gb\s, 2.5Gb\s (legacy) or 10Gb\s. A hierarchical ISIS structure of rings will be used to simplify the MPLS design. The Level 2 areas will be connected directly to the core data centre sites, with Level 1 access rings used to interconnect traffic from the access areas. The L2 access areas will have physically diverse fibre connections to 2 of the data centres. Physically diverse LSPs are routed from the L2 aggregation routers to each of the data centres, facilitating diverse connectivity to each of the core switching elements. This provides protection against a single trunk or node failure. The access rings will have diverse fibre connectivity to a L1/2 router which will perform the ABR function. Within each access ring, diverse LSPs will be configured to the ABR or ABRs, providing access route resilience against a single fibre break and/or node failure. RSVP LSPs with no bandwidth reservations will be used to route the LSPs across the backhaul network. All LSPs will use a common Tunnel Affinity Template. This provides the flexibility to re-parent traffic to alternative trunks without a traffic intervention should that be required.


It is proposed to use a combination of strict and loose hop routing across the network. The working path should always be associated with the strict hop, with the protection assigned to either a strict or loose hop. For those LSPs routed over the microwave POS trunks, strict hops will be used to ensure efficient bandwidth management. For those routed across dark fibre or managed Ethernet, loose hops will be used. In a mesh network, where multiple physical failures and multiple paths are possible, this approach offers a greater level of resilience.

Figure 4 - National North East area IP/MPLS microwave network


Figure 4 details a sample of the self built backhaul network routed over N+0 SDH microwave links and rings.


In this situation LSPs are routed over multiple hops to the data centres and all routers will be added to the Level 2 area. In order to ensure that traffic is correctly balanced across the SDH trunks, RSVP LSPs will be routed statically, giving NSI a greater level of control over bandwidth utilisation. LSPs from each collector will be associated with a particular STM-1 and routed to the destination accordingly; traffic aggregating at each collector is then associated with a particular LSP (see the balancing sketch below).

NOTE: The transition document states that the National SDH microwave network should be replaced by NSI with the BT TT network (see section 2.3.1) or a national DF network. However, as this will take time and consolidated sites are required nationally in the short term, the network described in Figure 4 will be utilised over the short to medium term.
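To make the static balancing rule above concrete, the sketch below greedily assigns each collector's LSP demand to the least loaded STM-1 trunk. This is an illustrative Python fragment only; the collector names, demand figures, trunk count and the 150Mb\s usable payload assumption are hypothetical and not taken from the network plan.

    # Illustrative sketch: statically assign collector LSPs to STM-1 trunks so that
    # traffic is balanced across the available SDH capacity (all values hypothetical).
    STM1_PAYLOAD_MBPS = 150  # assumed usable payload of one STM-1

    def assign_lsps_to_trunks(lsp_demands_mbps, trunk_count):
        """Greedy least-loaded assignment: returns {trunk_index: [(collector, mbps), ...]}."""
        trunks = {i: [] for i in range(trunk_count)}
        load = {i: 0 for i in range(trunk_count)}
        # Place the largest demands first to keep the trunks evenly filled.
        for collector, mbps in sorted(lsp_demands_mbps.items(), key=lambda kv: -kv[1]):
            target = min(load, key=load.get)
            if load[target] + mbps > STM1_PAYLOAD_MBPS:
                raise ValueError("Demand exceeds configured STM-1 capacity - upgrade required")
            trunks[target].append((collector, mbps))
            load[target] += mbps
        return trunks

    demands = {"Collector_A": 60, "Collector_B": 45, "Collector_C": 80, "Collector_D": 30}
    print(assign_lsps_to_trunks(demands, trunk_count=2))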

2.3.1 Self build fibre diversity


The table below details the physical diversity requirements for fibre based on traffic aggregation in the transmission network. Note that in some cases, where the capital cost to provide media diversity over fibre is prohibitive, microwave Ethernet will be considered as a medium term alternative. While the microwave link will for the most part be of a lower capacity than the primary fibre route, the degradation of service during the fibre outage may be acceptable for short periods in order to maximise the fibre penetration.

Aggregation level | Diversity | Comments
<5 | Single fibre pair | No diversity
5 to 9 | Flat ring | Two fibre pairs sharing the same duct
>9 | Fibre duct diversity | 5m fibre separation to the aggregation router

Table 1: Self build fibre diversity

Note that the above table details the desired physical separation. In some cases the desired separation may not be achievable, and a decision on the aggregation level will be made based on other factors such as location, security, landlord, antenna support structure and cost.



2.3 Managed backhaul


Managed backhaul refers to point to point or network connections facilitated by an OLO over which NSI will transport traffic. In this case the OLO provisions the physical transmission interconnections. Presently VFIE use Eircom and e|net as managed service vendors, with individual leased lines from each vendor providing point to point fixed bandwidth connections. H3G to date have used BT as their backhaul transmission vendor, where all traffic from the access network is routed across the national BT Total Transmission (TT) network.

2.3.1 TT Network contract


The BT TT contract allows H3G to utilise up to 70Gb\s of bandwidth across a possible 200 collector or aggregation locations. Presently BT has configured multiple L3VPNs across the TT to route traffic between the collector locations and the data centre site at Citadel (Citywest). BT deployed 2 x SR12 (ALU) routers at Citadel to terminate all of the traffic from the possible 200 locations. H3G can interconnect from their access network at a BT GPOP onto a collocated SR12, or at an APOP. At an APOP, BT deploy an Alcatel-Lucent (ALU) 7210 node and extend the TT to this point. The physical resilience from the GPOP to the APOP depends on the traffic to be aggregated at the site. See Table 2.

Collector type | Sites aggregated | Physical resilience | Comments
Small | ≤5 | None |
Medium | 5 < x ≤ 9 | Flat ring |
Large | ≥10 | Diverse fibre duct |

Table 2: TT Access fibre diversity

Figure 5a details the configuration of the BT TT solution.



Figure 5a BT Total Transmission network


Because BT route traffic to and from the collector points over L3VPNs, they must be involved in the provisioning process for every RBS across the network. As described in section 2.0, it is proposed to separate the physical interconnection of sites from the service provisioning for NSI. To achieve this across the TT, NSI must use the TT to replicate physical point to point connections across the backhaul network. It is proposed to change the BT managed service from a layer 3 network to a layer 2 network and replicate the approach taken in the self build network. The end result is that the provisioning of services across the NSI backhaul network is consistent regardless of the underlying physical infrastructure (self build or managed). In order to replicate the self build architecture and utilise the BT TT contract it will be necessary to extend the TT network to a second data centre. It is proposed to extend the BT TT to the VFIE data centre in Clonshaugh. At Citadel and Clonshaugh the TT will interconnect with the 8800 network on Nx10Gb\s connections.


While it is not necessary to deploy 2 x SR-12 TT routers at the data centres due to the path resilience employed, it will be useful in terms of load balancing and future bandwidth requirements. As with the self build design, resilience will be achieved through physical path diversity to diverse data centre locations from each of the BT GPoPs. Figure 5b illustrates the physical and logical connectivity across the BT TT.
[Figure 5b diagram: the Citadel and Clonshaugh data centres connect to the NSI MPLS network at 10Gb\s; primary and secondary LSPs run across the BT Total Transmission core via BT IP GPoPs (HPD 1, HPD 2, Ballymount); E-Lines on the BT TT extend from the BT Alcatel 7210 collectors to the GPoPs, where NSI MPLS ABRs (L1/2) with ADVA XG210 demarcation devices (1G/10G) hand off towards the NSI MPLS collectors at 1Gbit/s; Symmetricom TP500 units provide the 1588v2/SyncE references.]

Figure 5b NSI logical and physical transmission across the BT network


VLAN trunks over E-Lines are configured from the collector to the GPOP over which LSPs are configured to the ABRs using LDP. LDP will facilitate automatic label exchange within the MPLS network and remove the requirement for manual configuration in the access area. In the BT TT network, VLAN trunks over E-Lines are configured for each ABR to one of the parent data centres. RSVP-TE LSPs can be configured across these trunks to any of the data centre facilities in a resilient manner. Dual ABRs are used to ensure hardware resilience for the access areas where up to 20 collector nodes could be connected to a BT GPOP in this manner.


2.3.3 Backhaul network selection


In some cases NSI will have the option to use either self build dark fibre or managed services to backhaul services from a particular aggregation point. In this case a number of factors must be considered when selecting the network type. They are;

Factor | Self build | Managed | Comment
Long term bandwidth requirements | High | Low / medium | For large bandwidth sites dark fibre may offer the more attractive cost per bit
Operational cost impact | High / medium | Low | To reduce the impact on operational expenditure, dark fibre CapEx deals may be more attractive
Surrounding network | Dark fibre | Managed | The transmission network selection should take account of the surrounding backhaul type, to ensure that the interconnecting clusters are optimally routed through the hierarchical structure

2.4 Backhaul routing

Backhaul routing can be split into legacy (TDM/ATM) services, enterprise services and IP services.

2.4.1 Legacy mobile services


Legacy mobile services relate to 2G BTS and 3G RBS nodes with TDM and ATM interfaces.


For these services NSI will configure pseudowires (PWEs) across the MPLS network. ATM services will be carried in ATM PWEs, with N:1 encapsulation used for the signalling VCs to reduce the number required. User plane VCs can be mapped into a single PWE. TDM services will be transported using SAToP PWEs. At the core locations, MSP 1+1 protected STM-1 interfaces will be deployed between the 8800 MSRs and the core switches (BSC / RNC). Note: the multichassis MSP feature is not available on the Tellabs 8800 MSRs; therefore the MSP 1+1 protecting ports will be on separate cards. At the access locations, MSP protection for ingress TDM traffic will be configured in the same way on the 8600 nodes. PWEs for legacy services will be routed between the core and collector locations over physically diverse LSPs.

2.4.2 Enterprise services


Similar to legacy services, enterprise services will be routed between the core and collector locations over diverse or non-diverse LSPs based on the customer's SLA. For the most part enterprise services are provided as Ethernet services; in this case Ethernet PWEs will be configured to carry the services. A Class of Service (CoS) will be applied to the Ethernet PWE based on the customer's SLA. At the core locations the service will be handed to the customer network over an Ethernet connection with VLAN separation for the individual customers. In the event that multiple customers share the same physical interface, SVLAN separation per customer can be implemented. This will be finalised based on a statement of requirements from the parent operator. TDM services for enterprise customers will be treated the same as the legacy TDM services described in 2.4.1, with STM-1 interfaces used to interface with the core switches.

2.4.3 IP services

2.4.3.1 L3VPN structure

For IP services L3VPNs will be configured across the MPLS network. All routing information will be propagated throughout each L3VPN using BGP.



The IP/MPLS network will be configured in a hierarchical fashion with route reflectors used to advertise routing within each area. Route Reflectors (RRs) will be implemented in the core area with all level 2 routers peering to those RRs. The ABRs between the level 1 and 2 areas will act as the route reflectors for the connected level 1 areas. This will reduce the size and complexity of the routing tables across the network. For each service an L3VPN will be configured. Because H3G and VFIE use different vendors and have different requirements in the core, the number of L3VPNs required differs slightly. Table 3 details the L3VPNs to be configured across the NSI network.

Parent | L3VPN | Description | Comment
VFIE | 2G UP | User Plane | Separate L3VPNs are configured for each BSC
VFIE | SIU O&M | Baseband aggregation switch |
VFIE | RNC UP | 3G User Plane | Separate L3VPNs are configured for each RNC
VFIE | SRAN O&M | SRAN O&M |
VFIE | Synchronisation | 1588v2 network |
VFIE | Siae O&M | Ethernet microwave O&M |
VFIE | MiniLink O&M | O&M for the MiniLink PDH network (SAU-IP) |
H3G | 3G UP | User plane | A single L3VPN for all RNCs
H3G | 3G CP | Control plane | A single L3VPN for all RNCs
H3G | 3G O&M (RNC) | Operation and maintenance | A single L3VPN for all RNCs
H3G | 3G O&M RBS | Operation and maintenance | A single L3VPN for all RBS
H3G | TOP | Synchronisation (1588v2 network) |
H3G | Ceragon O&M | Ethernet Microwave O&M |
VFIE | LTE | Tbc | Tbc
H3G | LTE | Tbc | Tbc

Table 3: List of L3VPNs required



As services are added to the network they will be added as endpoints to the respective L3VPN for that service and parent core node. This is achieved by adding the endpoint interface and subnet to the VPN. Any adjacent network routing required to connect to a network will also be redistributed into the VPN. VFIE use /30 subnets to address the mobile services across the network. This results in a large number of endpoints within each L3VPN; for that reason the networks will be split based on the parent core switch, giving an L3VPN for each of the services routed to each of the RNCs/BSCs. For the H3G network, /26 networks are typically used at each of the endpoints. This summarisation significantly reduces the number of endpoints required within each VPN and consequently the number of VPNs. Sections 3 and 4 detail the impact the proposed design has on each operator's existing solution and the steps, if any, required to migrate to the proposed solution.
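As a quick illustration of why the /26 summarisation keeps the H3G VPNs small, the fragment below counts how many /30 versus /26 endpoint subnets fit inside a single aggregate block using Python's ipaddress module; the 10.196.0.0/20 aggregate is used purely as an example.

    # Endpoint-count arithmetic for one example aggregate (illustrative only).
    import ipaddress

    aggregate = ipaddress.ip_network("10.196.0.0/20")
    slash30_endpoints = len(list(aggregate.subnets(new_prefix=30)))  # 1024 possible /30 endpoints
    slash26_endpoints = len(list(aggregate.subnets(new_prefix=26)))  # 64 possible /26 endpoints
    print(slash30_endpoints, slash26_endpoints)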

2.4.3.2 IP service resilience

Transport resilience

Within the backhaul network, IP services will be carried resiliently between the core and collector locations over diversely routed LSPs. It is proposed to use a combination of strict and loose hop routing across the network. The working path should always be associated with the strict hop, with the protection assigned to the loose hop. By configuring the protection on a loose hop, the IGP is allowed to route the LSP between the source and destination. In the event of a failure, all traffic will be switched to the protecting LSP, which has been routed between the source and destination via the IGP. In a mesh network, where multiple physical failures and multiple paths are possible, this approach offers a greater level of resilience. Note, as described in section 2.2, in the case where both the main and protecting paths are routed over microwave STM-1 trunks, strict hop routing will be employed for both paths to ensure optimum utilisation of the available capacity.
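The contrast between the explicitly planned working path and the IGP-derived protection path can be sketched as follows. This is a simplified illustration only: the topology, node names and metrics are hypothetical, and the IGP is modelled simply as a shortest path computation that skips failed links.

    # Working LSP follows a strict (explicit) hop list; the protection LSP is left to
    # the IGP, modelled here as a shortest-path search that avoids failed links.
    from heapq import heappush, heappop

    LINKS = {  # adjacency: node -> {neighbour: IGP metric} (hypothetical topology)
        "COLLECTOR": {"ABR1": 10, "ABR2": 10},
        "ABR1": {"COLLECTOR": 10, "CDC1": 10, "ABR2": 5},
        "ABR2": {"COLLECTOR": 10, "CDC1": 20, "ABR1": 5},
        "CDC1": {"ABR1": 10, "ABR2": 20},
    }

    def igp_shortest_path(src, dst, failed=()):
        """Dijkstra over the topology, skipping failed (a, b) links - the loose-hop path."""
        queue, visited = [(0, src, [src])], set()
        while queue:
            cost, node, path = heappop(queue)
            if node == dst:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nxt, metric in LINKS[node].items():
                if (node, nxt) in failed or (nxt, node) in failed:
                    continue
                heappush(queue, (cost + metric, nxt, path + [nxt]))
        return None

    working = ["COLLECTOR", "ABR1", "CDC1"]  # strict hops, planned explicitly
    protection = igp_shortest_path("COLLECTOR", "CDC1", failed=[("ABR1", "CDC1")])
    print(working, protection)  # protection re-routes via ABR2 after the ABR1-CDC1 failure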


Router resilience

Within the level 2 area of the network, dual routers are deployed to ensure resilience at locations aggregating large volumes of traffic. In this case LSPs are resiliently routed from the collector nodes to both routers. In the event of a router failure, traffic will route over the operating router until such time as the second router is operational, after which the routing will return to the initial configuration.

Core switch resilience - VRRP

For all connections to the mobile core, Virtual Router Redundancy Protocol (VRRP) should be used. While the VRRP implementation will differ slightly based on the mobile core vendor and function, the objective is to ensure that the transmission network to the core has full interface and router redundancy. 10Gb\s cross links (with LAG if required) between the 8800 nodes at each data centre location will be implemented to support the router redundancy. For the 8800 nodes, during restart it is possible that the router will advertise the interface addresses to the core switch (BSC/RNC/SGw/MME) before the router forwarding function is re-established. This may result in the temporary black-holing of traffic. To avoid this scenario, a separate connection is required between the routers with a default route added to each for all traffic. It is proposed that a 10Gb\s link should be used for this also.

2.5 Access Microwave network

The target access microwave network will be based on an Ethernet microwave solution utilising ACM to maximise the available bandwidth. In the existing networks, H3G use Ceragon IPx microwave products while VFIE use the Siae ALC+2 and ALC+2e products. While it is envisaged that NSI will tender for one supplier, it is not planned to replace either of the existing networks.



The access network solution must be designed to ensure that both vendors' products, and the services transported across them, interoperate without issue. Figure 7 details a possible configuration of the access network topology utilising both vendors' products.

[Figure 7 diagram: an access cluster of mixed Ceragon (Cgn) and Siae microwave links feeding a Tellabs 86xx aggregation node over GigE/ELP interfaces.]

Figure 7 Access Microwave topology

2.5.1 Baseband switching


For the access network all traffic will be routed at layer 2 utilising VLAN switching at each point. VLANs will be statically configured at each site on each of the indoor units. For VFIE, unique VLANs are used to switch traffic from each of the RBS nodes. For H3G, common VLANs are used for each of the service types switched across the network. They are:

UP VID = 3170
CP VID = 3180
O&M VID = 3190
TOP VID = 3200
Ceragon O&M = 3210

Note: Future developments may result in the deployment of all-outdoor MW radio products in the traditional MW bands and in the E-Band. In this case, at feeder locations a cell site router may be deployed to perform the baseband switching function using IP/MPLS routing functions.


Should this solution be employed in the future, an additional design scenario will be described and added to this document.

2.5.2 Microwave DCN


All Microwave DCN will be carried in band (this is already the case for the Ceragon network elements). As sites are consolidated and migrated to the consolidated network, it will be necessary to migrate the Siae DCN solution to an in band solution. It is proposed to assign VLAN ID 3000 to the Siae network for DCN.

2.5.3 Backhaul Interconnections


The access network will interface with the backhaul network over multiple GE interfaces. The interfaces can be protected or unprotected depending on the capacity requirements. While LAG is possible on the GE interfaces, the preference will be to use ELP on the access router with interconnected IDU interfaces in an active / active mode. In a situation where greater than 1Gb\s is required over the radio link, LAG can be used. The limitation on the access interfaces is that the interfaces in a LAG on the Tellabs 8600 must currently be on the same interface module; removal of this restriction is a planned feature for release FP4.1, which is planned to be deployed in the NSI network in Q2 2014. VSI interfaces will be used to associate common network VLANs arriving on separate physical interfaces with a common virtual interface. This ensures that the approach used to assign a single subnet per traffic type per cluster can be continued where required. A separate VSI interface will be configured for each service type and added as the endpoint to the required IPVPN. Any static routes required to connect to and from the DCN network will use the VSI interface address. Figure 8 details the operation of the VSI interface.




[Figure 8 diagram: H3G UP (VID 3170), Ceragon DCN (VID 3210) and Siae DCN (VID 3000) VLANs arriving over separate GE interfaces from Ceragon and Siae radios are grouped into one VSI per service type on the Tellabs 86xx.]

Figure 8 Example VSI grouping configuration
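As a simple illustration of the grouping shown in Figure 8, the fragment below collects the per-service VLANs arriving on several physical GE ports into one VSI record per service type. The port names are hypothetical; the VLAN IDs are those listed in sections 2.5.1 and 2.5.2.

    # Group service VLANs arriving on multiple GE ports into one VSI per service type.
    SERVICE_VLANS = {"H3G_UP": 3170, "H3G_CP": 3180, "H3G_OAM": 3190,
                     "TOP": 3200, "CERAGON_OAM": 3210, "SIAE_DCN": 3000}

    def build_vsi_map(ports, service_vlans):
        """Return {service: {'vlan': id, 'members': [ports]}} - one VSI per service type."""
        return {svc: {"vlan": vid, "members": list(ports)} for svc, vid in service_vlans.items()}

    vsi_map = build_vsi_map(["ge1", "ge2", "ge3"], SERVICE_VLANS)
    print(vsi_map["H3G_UP"])  # all three GE ports are members of the UP VSI (VLAN 3170)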

2.6 Network topology & traffic engineering

The NSI transition document details the targets for network topology, traffic engineering and bandwidth allocation on a per site basis for each of the mobile networks. In summary they are;

No more than 1 microwave hop to fibre (facilitated by providing fibre solutions to 190 towns)

No contention for shared transmission resources (NSI are required to monitor utilisation and ensure upgrade prior to congestion on the transmission network)

Traffic engineering (CoS, DSCP, PHB) will be assigned equally to each service type from each operator. At a minimum the following will be applied:
o Voice (GBR)
o Video/interactive (VBR-RT)
o Enterprise (VBR-NRT)
o Data (BE)

Bandwidth allocation per site:
o Dublin & other cities (400Mb\s\site)
o Towns (5-10K) (300Mb\s\site)
o Rural (200Mb\s\site)



This chapter explains in detail the access, backhaul and core transmission network dimensioning guidelines and traffic engineering rules required to achieve the targets set out in the transition document.

2.6.1 Access Microwave topology & dimensioning


The national access microwave network will be broken into clusters of microwave links connected, over one or multiple hops, to a fibre access point. The fibre access point can be part of the self build or managed backhaul networks but must have the following characteristics:

Long term lease or wholly owned by NSI or one of the parent operators
24 x 7 access for field maintenance
Excellent line of sight properties
Facility to house a significant number of microwave antennas
Space to house all the required transmission nodes and DC rectifier systems
No health and safety restrictions

Before creating a cluster plan, each site in the MW network must be classified under the following criteria:

Equipment support capabilities
Line of sight capabilities
Proximity to existing fibre solutions
Existing frequency designations
Site development opportunities
Landlord agreements (number and type of equipment/services permitted under the existing agreements)
Term of agreement

Creating a database as above will allow the MW network planning team to create cluster solutions where a number of sites are associated with a designated head of cluster. As per the transition document, the target topology is one hop to a fibre access point. However this will not always be possible due to one or a combination of the following factors:


Line of sight
Channel restrictions
Proximity of fibre solutions

Once the topology of the cluster is defined it is necessary to define the capacity of each link within the cluster. For tail links this is straightforward: the link must meet the capacity requirements of the transition document:

Dublin & other cities (400Mb\s\site)
Towns (5-10K) (300Mb\s\site)
Rural (200Mb\s\site)

For feeder links, statistical gain must be factored in while still meeting the capacity requirements for each of the individual sites. Table 4 gives examples of existing MW radio configurations and the average air interface speeds available.

Channel bandwidth | Configuration | Max air interface speed @ 256QAM
14MHz | Single channel | 85Mb\s
28MHz | Single channel | 170Mb\s
28MHz | 2 channel LAG | 340Mb\s
28MHz | 3 channel LAG | 500Mb\s
28MHz | 4 channel LAG | 680Mb\s
56MHz | Single channel | 340Mb\s
56MHz | 2 channel LAG | 680Mb\s
56MHz | 3 channel LAG | 1.02Gb\s
56MHz | 4 channel LAG | 1.34Gb\s
E-Band | 1GHz | 1Gb\s

Table 4: Radio configuration vs air interface bandwidth

Table 5 provides a guide for feeder link configurations based on the number of physical sites aggregated across that link.

Physical sites aggregated | City | Urban | Rural | Comments
2 | P1: E-band, P2: 2 x 56MHz | P1: 1 x 56MHz | P1: 1 x 56MHz | 3:1 stat gain
3 | P1: E-band, P2: 2 x 56MHz | P1: 1 x 56MHz | P1: 1 x 56MHz | 3:1 stat gain
4 | P1: E-band, P2: 2 x 56MHz | P1: 2 x 56MHz | P1: 1 x 56MHz | 3:1 stat gain
5 | P1: E-band, P2: 2 x 56MHz | P1: 2 x 56MHz | P1: 1 x 56MHz | 3:1 stat gain
6 | P1: E-band, P2: 3 x 56MHz | P1: 2 x 56MHz | P1: 2 x 56MHz | 3:1 stat gain
7 | P1: E-band, P2: 3 x 56MHz | P1: 2 x 56MHz | P1: 2 x 56MHz | 3:1 stat gain
8 | P1: E-band, P2: 4 x 56MHz | P1: 2 x 56MHz | P1: 2 x 56MHz | 3:1 stat gain

Table 5: Feeder link reference

Note that no more than 8 physical sites should be aggregated on any one feeder link. For MW links utilising adaptive code modulation (ACM), it is important that the link capacity at the reference modulation (i.e. the modulation scheme for which ComReg have allocated the max EIRP) is dimensioned so as to meet the sum of the CIRs from each operator across that link. The total CIR per link is based on the RAN technologies deployed and the CIR per RAN technology, as shown in Table 6.

Service | RAN technology | CIR (Mb\s)
Voice | 2G | 1
Voice | 3G | 1
Voice | LTE | 1
Data | GPRS | 1.5
Data | R99 | 2
Data | HSxPA | 15
Data | LTE | 20

Table 6: CIR per technology reference


Should restrictions apply in terms of hardware, licensing or topology, with the effect that links cannot be dimensioned as per Table 5, then the following formula should be used to determine the minimum link bandwidth:

Min feeder link capacity = MAX (VFIE CIR + H3G CIR, Max tail link capacity)

where:
- CIR = total CIR across all sites aggregated from each operator
- Max tail link capacity = maximum tail link capacity of all sites aggregated across the feeder link

The formula is designed to facilitate the required capacity for each site based on location while at the same time ensuring, where multiple sites are aggregated, that the minimum CIR is available to each site.
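A minimal sketch of the formula above (Python; illustrative only, with the per-technology CIRs taken from Table 6 and an assumed four-site rural example):

    def min_feeder_link_capacity(vfie_cir, h3g_cir, tail_link_capacities):
        """MAX(sum of both operators' CIRs across the aggregated sites,
        largest tail link capacity aggregated across the feeder link) - all in Mb/s."""
        return max(vfie_cir + h3g_cir, max(tail_link_capacities))

    # Example: 4 rural sites, each carrying 2G/3G/LTE voice plus GPRS, R99, HSxPA and LTE data.
    # Per-site CIR per operator (Table 6) = 1 + 1 + 1 + 1.5 + 2 + 15 + 20 = 41.5 Mb/s.
    per_site_cir = 41.5
    print(min_feeder_link_capacity(4 * per_site_cir, 4 * per_site_cir, [200, 200, 200, 200]))   # 332.0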

2.6.2 Access MW Resilience rules


The resilience rules for the access MW network are based on the number of cell sites and enterprise services aggregated across the link. 1+1 HSB will be used to protect the physical path.

Collector site | Sites aggregated | Physical resilience | Comments
Small          | 5 or fewer       | None                |
Medium / Large | >5               | 1+1 HSB             |

Note that while LAG can be considered as a protection mechanism, allowing the link to operate at a lower bandwidth in the event of a radio failure, NSI will protect the radios in a LAG group using 1+1 HSB to ensure the highest hardware availability for a physical link. NSI will consider LAG for capacity only and 1+1 HSB for protection. The target microwave topology, as described in the transition document, is for one microwave hop to fibre, which will result in minimal use of 1+1 HSB configurations. However, in the event that this topology is not possible NSI will implement protection as described above.


2.6.3 Backhaul & Core transmission network dimensioning rules


Forecasting data utilisation across mobile networks is unpredictable because the services are relatively new and the technologies are still evolving. The dimensioning rules for the core and backhaul networks will be based in the first instance on projected statistical gain. To ensure that the backhaul and core networks are dimensioned correctly for the initial network consolidation the following criteria will be used:

Network          | Statistical gain              | Action
Backhaul network | Less than 6                   | ok
Backhaul network | Greater than 6 and less than 8 | under review
Backhaul network | 8 or greater                  | upgrade
Core dark fibre  | Less than 8                   | ok
Core dark fibre  | Greater than 8 and less than 10 | under review
Core dark fibre  | 10 or greater                 | upgrade

The statistical gain will be based on the average throughputs per technology aggregated and is calculated as:

Stat gain = (Total existing service capacity + Forecasted service capacity) / Backhaul capacity

A worked sketch of this calculation is given after the forecast requirements below. For the backhaul and core networks the current utilisation will be monitored on a monthly basis, with the statistical gain forecasted over an annual basis. This will give rise to programmed capacity upgrades across the backhaul (managed and self build) and core networks. The time to upgrade trunks across these networks is typically between 6 and 24 months depending on the upgrade involved. To facilitate this process the parent companies must provide 12, 24 and 36 month rolling forecasts at least twice yearly. These forecasts must detail at a minimum:
- Volume deployment per service type per geographic area
- Average throughput per service type
- Max allowable latency per service type
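A minimal sketch of the statistical gain check and the resulting actions (Python; the threshold pairs follow the table above, all other names and the example figures are illustrative):

    def stat_gain(existing_mbps, forecast_mbps, trunk_capacity_mbps):
        """(Total existing service capacity + forecasted service capacity) / trunk capacity."""
        return (existing_mbps + forecast_mbps) / trunk_capacity_mbps

    def dimensioning_action(gain, network="backhaul"):
        # Backhaul: <6 ok, 6-8 under review, >=8 upgrade.  Core dark fibre: <8 ok, 8-10 under review, >=10 upgrade.
        ok_limit, review_limit = (6, 8) if network == "backhaul" else (8, 10)
        if gain < ok_limit:
            return "ok"
        if gain < review_limit:
            return "under review"
        return "upgrade"

    # Example: a 10Gb/s backhaul trunk with 45Gb/s of existing and 20Gb/s of forecast service capacity.
    gain = stat_gain(45_000, 20_000, 10_000)
    print(round(gain, 1), dimensioning_action(gain, "backhaul"))   # 6.5 under review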


NSI will constantly monitor utilisation versus forecast and feed back to the parent companies. This will ensure that the capacity forecasting processes are optimised over time.

2.6.4 Traffic engineering


As described in section 2.6.3, while all efforts will be made to ensure congestion and contention are minimised across the transmission network, in some cases they will be unavoidable. NSI must ensure, in such circumstances, that both operators have equal access to the available bandwidth. To ensure that this is the case, traffic engineering must be employed across the transmission and RAN networks:
- QoS mapping
- Shaping
- Policing
- Queue management

Quality of service is used to assign priority to certain services above others. Critical service signalling and GBR services will be assigned the highest priorities, with VBR services assigned lower priority based on the service and/or the technology. There are large variations in the bandwidth requirements for LTE, HSPA, R99 and GPRS. For this reason, if all services were assigned equal priority then, during periods of congestion, the low bandwidth services would be disproportionately impacted to such an extent that they may become unusable. For that reason, the low bandwidth data services will be assigned a higher priority than those presenting very high bandwidths.

QoS along with the queue management function should be designed to ensure, during periods of congestion, that equivalent services from the two operators have equal access to the available bandwidth. Table 5 details the proposed QoS mapping for all mobile RAN services.

Traffic type                                   | DSCP           | L2 p-bit | MPLS queue
Signalling, synchronisation, routing protocols | 24,40,48,49,56 | 7        | CS7 (Strict)
Speech                                         | 46             | 6        | EF (Strict)
VBR streaming, GPRS data, gaming               | 32,34,36,38    |          | AF4 (WRED)
R99 data                                       | 24,26,28,30    | 3        | AF3 (WRED)
HS data                                        | 18,20,22       | 2        | AF2 (WRED)
Premium Internet access                        | 10             | 1        | AF1 (WRED)
LTE data                                       | 0,8            | 0        | BE

Table 5: Quality of Service mapping
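The ingress classification can be expressed as a simple lookup from DSCP to PHB queue. The sketch below (Python; illustrative, not vendor configuration) reproduces Table 5; note that DSCP 24 is listed under both CS7 and AF3 in the table and is kept in the CS7 signalling class here:

    # DSCP -> MPLS PHB queue, per Table 5.
    DSCP_TO_QUEUE = {}
    for dscp_values, queue in [
        ((24, 40, 48, 49, 56), "CS7 (strict)"),   # signalling, synchronisation, routing protocols
        ((46,),                "EF (strict)"),    # speech
        ((32, 34, 36, 38),     "AF4 (WRED)"),     # VBR streaming, GPRS data, gaming
        ((26, 28, 30),         "AF3 (WRED)"),     # R99 data (DSCP 24 resolved to CS7 above)
        ((18, 20, 22),         "AF2 (WRED)"),     # HS data
        ((10,),                "AF1 (WRED)"),     # premium internet access
        ((0, 8),               "BE"),             # LTE data
    ]:
        for dscp in dscp_values:
            DSCP_TO_QUEUE[dscp] = queue

    def classify(dscp):
        # Unmarked or unknown traffic falls through to best effort.
        return DSCP_TO_QUEUE.get(dscp, "BE")

    print(classify(46), classify(34), classify(0))   # EF (strict) AF4 (WRED) BE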

Traffic engineering across the IP/MPLS network


Figure 10: IP/MPLS traffic engineering (diagram: ingress traffic from the core is classified per IP flow, placed in per-class queues with tail drop for the EF/RT classes and WRED for the weighted and BE classes, shaped against CIR/PIR, and scheduled onto the trunk interface using strict priority plus WFQ)


Figure 10 describes the flow of traffic through the IP/MPLS network. On ingress from the core and access networks traffic is classified according to the DSCP value and mapped to the required Per Hop Behaviour (PHB) service class. From there it is passed to the egress interface where it is queued and scheduled based on a strict plus weighted fair queue (WFQ) mechanism. GBR services are passed to the strict queue and VBR services are passed to a weighted fair queue where access to the egress interface is controlled based on the service class priority. In times of no congestion all traffic is passed without delay. In a congested environment, GBR services are passed directly to the egress interface and the VBR services are queued with access to the egress interface controlled by the weighted fair algorithm. Weighted Random Early Discard (WRED) is used to ensure efficient queue management. Packets from data flows are discarded at a pre-determined rate as the queue fills up. By doing this the 3G flow control and TCP/IP flow control should slow down, resulting in reduced retransmissions and more efficient use of the available bandwidth. For enterprise services, policing on ingress will be implemented to ensure the enterprise customer is within the SLA. In such circumstances a CIR and PIR can be allocated to the customer services, with a CBS and PBS assigned also. In this case the two rate three colour marking (trTCM) mechanism will be used to control the flow of enterprise traffic through the network.
Figure 11: Enterprise traffic engineering (policing and marking according to the standard Two Rate Three Colour Marker (trTCM): the CBS tolerates short bursts above CIR, which remain marked green; the PBS tolerates short bursts above PIR, which are not discarded; yellow marked traffic is the first to be discarded in case of network congestion)


Traffic within the CIR and CBS will be marked green; traffic greater than the CIR but within the PIR (including the PBS) will be marked yellow; all other traffic will be marked red and discarded. In congestion scenarios the WRED queue management function will discard the yellow marked packets first.
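A minimal sketch of the trTCM behaviour described above (Python; colour-blind marking with rates in bytes per second; this is illustrative, not the vendor implementation):

    import time

    class TrTcm:
        """Two rate three colour marker: green within CIR/CBS, yellow within PIR/PBS, red otherwise."""
        def __init__(self, cir, cbs, pir, pbs):
            self.cir, self.cbs, self.pir, self.pbs = cir, cbs, pir, pbs
            self.tc, self.tp = cbs, pbs                 # committed and peak buckets start full
            self.last = time.monotonic()

        def _refill(self):
            now = time.monotonic()
            elapsed, self.last = now - self.last, now
            self.tc = min(self.cbs, self.tc + self.cir * elapsed)
            self.tp = min(self.pbs, self.tp + self.pir * elapsed)

        def colour(self, packet_bytes):
            self._refill()
            if self.tp < packet_bytes:
                return "red"                            # above PIR + PBS: discarded
            self.tp -= packet_bytes
            if self.tc < packet_bytes:
                return "yellow"                         # above CIR + CBS: dropped first by WRED
            self.tc -= packet_bytes
            return "green"                              # within contract

    # Example enterprise policer: CIR 20 Mb/s, PIR 100 Mb/s, 64 kB committed / 256 kB peak bursts.
    policer = TrTcm(cir=20e6 / 8, cbs=64_000, pir=100e6 / 8, pbs=256_000)
    print(policer.colour(1500))   # green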

Traffic engineering across the Layer 2 Microwave network


Across the microwave network a combination of Shaping, CoS based policing, trTCM and WRED queue management should be used to ensure congestion control and fairness in terms of bandwidth contention.

For downlink traffic, the physical interface from the IP/MPLS network must be shaped to the maximum bandwidth of the radio interface. This is to ensure that egress buffer overflow is not experienced, in particular for large bursts of LTE traffic. For LTE traffic, shaping per VLAN should also be implemented to ensure that tail links, which may be connected to feeder links and be of lower capacity, do not experience buffer overflow. Note: VLAN shaping for LTE must be taken into account when defining the Layer 2 VLAN structure and Layer 3 addressing to the H3G LTE network.

Figure 12: Downlink traffic control mechanism (LTE traffic is shaped per service and per port/VLAN group towards the H3G and VFIE RAN in order to avoid BEP2.0 buffer overflow)


For uplink traffic, shaping should be applied on both the H3G and VFIE RAN nodes. This is to ensure that both operators present the same bandwidth to the transmission network for sharing. Data traffic should be policed on ingress to the access microwave network on a per service level. This ensures that, during periods of congestion, out of policy traffic from each operator is discarded first.


As detailed in previous sections the target bandwidth for RBS sites is 400Mb/s in the city areas, 300Mb/s in towns and 200Mb/s for all others. Tables 6 and 7 detail the proposed policing settings for the two areas.

Traffic      | CIR (per operator) | PIR (per operator) | Comments
GBR services | NA                 | NA                 | No policing - green
GPRS data    | 1Mb/s              | Not set            | PIR will not be greater than max link capacity. Out of policy = yellow
R99 data     | 2Mb/s              | Not set            | PIR will not be greater than max link capacity. Out of policy = yellow
HSDPA        | 15Mb/s             | Not set            | PIR will not be greater than max link capacity. Out of policy = yellow
LTE          | 20Mb/s             | 400Mb/s            | Contracted SLA to operator. Out of policy = red

Table 6: City area (max link capacity = 400Mb/s)

Traffic      | CIR (per operator) | PIR (per operator) | Comments
GBR services | NA                 | NA                 | No policing - green
GPRS data    | 1Mb/s              | Not set            | PIR will not be greater than max link capacity. Out of policy = yellow
R99 data     | 2Mb/s              | Not set            | PIR will not be greater than max link capacity. Out of policy = yellow
HSDPA        | 15Mb/s             | Not set            | PIR will not be greater than max link capacity. Out of policy = yellow
LTE          | 20Mb/s             | 200Mb/s            | Contracted SLA to operator. Out of policy = red

Table 7: Non city area (max link capacity = 200Mb/s)


All packets within the CIR and CBS will be marked green. For 3G and HS services the PIR is not set above the available link capacity, so out of policy packets will be marked yellow. For LTE traffic, out of policy traffic will be marked red and discarded. In some cases the sum of both operators' PIRs will be greater than the available link capacity, even at maximum modulation. In this case it will be possible for both operators to peak to the maximum available capacity, but not at the same time.
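The CIR rows of Tables 6 and 7, together with the rule in section 2.6.1 that the reference modulation must carry the sum of both operators' CIRs, can be checked as follows (Python sketch; the 3 Mb/s GBR voice figure assumes the three voice entries of Table 6 and is illustrative):

    # Data CIRs per operator (Mb/s), per Tables 6 and 7.
    CIR_PER_CLASS = {"GPRS": 1, "R99": 2, "HSDPA": 15, "LTE": 20}

    def cir_check(reference_modulation_capacity_mbps, operators=2, gbr_voice_mbps=3.0):
        """True if the link, at its reference modulation, carries every operator's CIRs plus GBR voice."""
        total_cir = operators * (sum(CIR_PER_CLASS.values()) + gbr_voice_mbps)
        return reference_modulation_capacity_mbps >= total_cir

    # Two operators need 2 x (38 + 3) = 82 Mb/s: a 60 Mb/s reference modulation fails, 170 Mb/s passes.
    print(cir_check(60), cir_check(170))   # False True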

Figure 13: Normal link operation (policing colours each operator's packets according to trTCM; when the combined Op1 + Op2 traffic exceeds the TX link capacity and queues start to fill, the WRED (QoS) mechanism drops yellow marked data packets first, 3G flow control and TCP/IP (LTE) sessions slow down for both operators, and green (CIR) packets are preserved for both operators)


Figure 13 details the operation of both policing and queue management on a microwave link. For operator 1, when the traffic presented exceeds the PIR it is marked red and discarded. Where the sum of both operators' traffic does not exceed the interface PIR but exceeds the available link capacity, the WRED mechanism in the outbound queue will start discarding yellow marked packets at a predetermined rate based on the queue size. In this instance, the 3G flow control and TCP/IP (LTE) flow control mechanisms will slow down the data sessions, minimising the number of retransmissions and optimising the use of the available bandwidth. This approach ensures that both operators' GBR traffic is always transmitted, while also ensuring that in a congested scenario both operators have fair access to the available bandwidth for each service provided. Note that for the incumbent vendors of Ethernet microwave radio systems, the majority of the deployed links will not support the required hierarchical QoS features. During the consolidation of both networks it will be necessary to swap out that hardware for hardware supporting those functions. A tender process will be run to select one vendor to fulfil these requirements.
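The WRED behaviour described above can be sketched as a per-colour drop probability curve (Python; the thresholds and probabilities are illustrative, and a real implementation works on the averaged queue depth):

    import random

    # colour: (min threshold, max threshold, max drop probability), thresholds as queue fill ratio.
    WRED_PROFILE = {"yellow": (0.25, 0.60, 0.50),
                    "green":  (0.60, 0.90, 0.10)}

    def should_drop(colour, queue_fill):
        lo, hi, p_max = WRED_PROFILE[colour]
        if queue_fill < lo:
            return False                      # below the minimum threshold: never drop
        if queue_fill >= hi:
            return True                       # above the maximum threshold: tail drop
        p = p_max * (queue_fill - lo) / (hi - lo)
        return random.random() < p            # linear ramp between the thresholds

    # At 50% fill only yellow (out of contract) packets see random early discard; green is untouched.
    print(should_drop("green", 0.5))   # False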

2.7 Network synchronisation

NSI are responsible for managing the quality and distribution of the synchronisation reference clock throughout the mobile network. Table 7 summarises the clock distribution methods that will be implemented for the transmission and mobile networks.
Network / node | Clock distribution | Comments
Source | PRC / SSU with Rubidium holdover (Symmetricom SSU2000) | Each SSU is configured with redundant source and supply modules. Redundant SSUs are distributed across the data centre locations
Self built backhaul (Ethernet) | Synchronous Ethernet | Synchronous Ethernet with SSM
Self built backhaul (SDH) | SDH trunks | SSM enabled
Self built backhaul (DWDM) | 1588v2 (IPVPN configured for 1588v2 distribution) | TP500 slaves used to recover clock and reference the southbound self built network
Ethernet managed service | 1588v2 (IPVPN configured for 1588v2 distribution) | TP500 slaves used to recover clock and reference the southbound self built network
Self built access microwave (Ethernet) | Synchronous Ethernet & radio interface | Synchronous Ethernet with SSM
Self built access microwave (PDH) | E1 connections and radio interface |
Ericsson DUW (3G network) | NTP phase synchronisation from NTP server in RNC | Parent RNC is referenced to PRC and distributes clock via NTP carried over the Iub link
Ericsson SIU-02 | Synchronous Ethernet |
Ericsson DUG (2G) | Legacy E1 interfaces connected to SIU-02 | For legacy RBS nodes
Ericsson DUL (LTE) | NTP phase synchronisation from resilient NTP servers at data centre locations | NTP servers for LTE will be slaves of the SSU2000 nodes
Mixed mode remote radio units (U900 & GSM 900) | DUG synchronised from DUW directly | DUW is synchronised over the NTP network
Mixed mode remote radio units (LTE1800 & GSM 1800) | DUG synchronised from DUL directly | DUL is synchronised over the NTP network from standalone NTP servers
NSN 3G network | 1588v2 slaves (IP VPN 1588v2 packet distribution) | SSU2000 nodes as servers for the NSN 1588v2 network

Table 7: Synchronisation source and distribution summary

The following sections provide additional details for each of the synchronisation solutions and their applications.

2.7.1 Self Built Transmission network


Over the self built transmission network synchronisation will be distributed at layer 1. For the legacy TDM networks, synchronisation will be distributed within the SDH frame with SSM enabled to transmit quality levels and reduce the risk of timing loops in the case of ring topologies. The PDH microwave networks will distribute synchronisation over the E1 and radio interfaces. For the IP/MPLS network, Synchronous Ethernet (SyncE) is the preferred method of synchronisation distribution. Like SDH, SyncE supports SSM and this will be enabled to transmit the clock quality level and reduce the risk of timing loops in the case of ring topologies. The Ethernet microwave indoor units (IDUs) will receive their timing reference using Synchronous Ethernet, with southbound IDUs synchronised over the radio interface.


Figure 14 Self built synchronisation distribution


It should be noted that in the access microwave network, TDM interfaces are supported for the transport of legacy RAN technologies. In this case, SyncE should be selected as the preferred timing reference for the IDUs, with TDM interfaces retimed where required. TDM (SDH/PDH) interfaces to the backhaul can be used as a valid timing reference for the access microwave but will be selected with a lower priority than that assigned to Synchronous Ethernet. This is to ensure that, as the network migrates to Ethernet-only transmission, no changes are required to the synchronisation configuration of the access network. For both SDH and SyncE the number of SDH Equipment Clocks (SEC) and Ethernet Equipment Clocks (EEC) between the SSU and the end user should not exceed 20, as per the relevant recommendations governing synchronisation distribution (G.8261 & G.8262).
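The G.8261/G.8262 chain length rule can be enforced with a trivial planning check (Python sketch; node names are illustrative):

    MAX_EQUIPMENT_CLOCKS = 20    # SEC/EEC hops allowed between the SSU and the end equipment

    def chain_ok(clock_chain):
        """clock_chain lists every SEC/EEC hop from (but excluding) the SSU to the end node."""
        return len(clock_chain) <= MAX_EQUIPMENT_CLOCKS

    # Example: 3 core EECs + 2 collector EECs + 4 microwave IDU EECs = 9 equipment clocks, within limit.
    print(chain_ok(["core1", "core2", "core3", "coll1", "coll2", "mw1", "mw2", "mw3", "mw4"]))   # True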

2.7.2 Ethernet Managed services


For Ethernet managed services it is assumed that the synchronisation source within the 3rd party's network is not a trusted source. NSI will configure an L3VPN to distribute a 1588v2 timing reference from the PRC to the provider edge and recover the reference from the PRC at that point. From there synchronisation will be distributed as described for the self built network. 1588v2 synchronisation is independent of the underlying physical network and will ensure that the clock recovered at the provider edge is referenced to the network PRC.

Figure 15: 1588v2 distribution over Ethernet managed service

2.7.3 DWDM network

For SDH wavelengths the distribution of SDH synchronisation is valid and so no change is required. However, for Ethernet trunks, while the DWDM nodes do support SyncE, the current installed base does not. In this case 1588v2 will be implemented across the initial deployment and the scenario described in section 2.7.2 will be deployed, with 1588v2 slaves used to recover the reference from the PRC. Note that for future deployment of Ethernet trunks across the DWDM backbone, SyncE will be considered and, where implemented, no 1588v2 clock recovery will be required.


2.7.4 Mobile network clock recovery


As described in Table 7, a number of clock recovery methods are required which are dependent on the RAN technology and the RAN vendor. This section describes the clock recovery for each RAN vendor and RAN technology. Note that in all cases, while the mechanism used to recover the timing reference may not be the same, it must be possible to trace all timing references to the master PRC for the network. This is essential for the correct interoperation of all RAN technologies.

2.7.4.1 Legacy RAN nodes

TDM will be used to synchronise the legacy RAN technologies, namely the legacy 2G systems and the 3G RAN connected via ATM. The legacy RAN technologies will use the E1 connections as their timing reference.

2.7.4.2 Ericsson SRAN 2G

Ericsson use the SIU-02 as the aggregation device for the baseband connections from their SRAN nodes (DUG, DUW and DUL). The SIU-02 converts the PDH signals from the 2G node (DUG) to a format suitable for transmission over Ethernet to the BSC. The SIU-02 supports synchronisation over Synchronous Ethernet. In this configuration the SIU-02 will be connected to the transmission network via its WAN interface, either directly to a co-sited MPLS router or over Ethernet microwave via a GigE trunk. This connection will be used as the timing reference for the node.

2.7.4.3 Ericsson SRAN 3G & LTE

The Ericsson 3G (DUW) and LTE (DUL) nodes are synchronised using an NTP network. NTP is similar to 1588v2, with the SRAN core nodes for 3G (RNC) and LTE (SGw) using the Iub and S1 interfaces respectively to transmit the required synchronisation phase information for an accurate timing reference to the PRC. As the timing signals are carried within the respective user planes, separate VPNs for timing distribution are not required. Note: Ericsson will support 1588v2 in future releases of DUW and DUL software. Once this is the case, a decision should be taken as to the benefit of replacing the existing NTP solution with 1588v2.


2.7.4.4 NSN 3G

The NSN 3G network nodes can act as 1588v2 slaves and recover the clock from a 1588v2 master. NSI will configure a 1588v2 L3VPN dedicated for the NSN 3G network. The network will be configured as described in section 2.7.2 with the NSN node B recovering the 1588v2 timing reference from the 1588v2 master clock.

2.8 Data Communications Network (DCN)

DCN refers to the distribution of O&M communications between the various management systems and their respective managed elements and networks. All network elements, whether RAN or transmission technologies, require connection to a network or element management platform for performance and configuration management. This section describes, by vendor, the transmission network configuration required to support such communications. Table 8 details the DCN for each of the vendors' networks.
Vendor | Technology | Transmission network solution | Comments
Tellabs | IP/MPLS network | In band mgt | CM & PM are carried in band and connected to the corporate DCN at the data centre locations
Siae | Ethernet microwave | In band mgt | MPLS network gateway; L3VPN for the Siae microwave network; access clusters are addressed in sub-networks based on the cluster size; interconnect to corporate DCN at data centre locations
Ceragon | Ethernet microwave | In band mgt | MPLS network gateway; L3VPN for the Ceragon microwave network; access clusters are typically addressed in /26 subnetworks; interconnect to corporate DCN at data centre locations
Ericsson SRAN | Mobile RAN | Out of band solution | MPLS network gateway; L3VPN for each RAN technology (2G, 3G & LTE), split over multiple VPNs based on network size; each network element has a /30 allocation; interconnect to corporate DCN at data centre locations
NSN 3G | Mobile RAN | Out of band solution | MPLS network gateway; L3VPN for each RAN technology (2G, 3G & LTE), split over multiple VPNs based on network size; access clusters are typically addressed in /26 subnetworks; static routes required from access gateway to RBS mgt address for the CP address; static routes required to the OMU & DCN networks for each RNC; interconnect to corporate DCN at data centre locations

Table 8: DCN network configuration per vendor

For the most part the DCN will be configured as in band, with direct connectivity to the OSS via the DCN at the data centre locations; where this is not possible, L3VPNs should be configured to connect the remote elements to their respective management systems via the IP/MPLS network. At the data centre locations routing information will be shared between the corporate DCN networks and the transmission network VPNs through OSPF. The exception to this is the NSN 3G RAN, where the CP and O&M networks require static routes via the RBS parent RNC to the respective ICSU and O&M network.

2.8.1 NSN 3G RAN Control Plane routing


Control plane traffic is terminated in the ICSU function in the RNC. Because the RNC does not support dynamic routing protocols it is necessary to configure static routes from the CP VPN endpoints on the IP/MPLS routers to the ICSU network via the CP interface in the RNC. The static routes are redistributed throughout the CP VPN using BGP.

2.8.2 NSN 3G RAN O&M routing


Each NSN RBS in the network requires an O&M IP address and a backend /29 network allocated. The O&M address is part of the allocated /26 network for a particular access cluster. The backend /29 network requires two connections to the NSN core, one to the DCN and one to the logical OMU interface on the RNC. The hierarchy within the 3G RAN O&M function is such that the DCN communicates with the RBS via the OMU and O&M interfaces on the parent RNC.


Northbound traffic will be routed to the parent RNC via the gateway router at the collector site. At the access clusters static routes are configured to the /29 networks with the O&M IP address for the RBS as the next hop. At the core sites VRF filters are applied on the 8800 nodes to ensure correct routing of incoming packets to the correct RNC. Each RBS is allocated a /29 subnet from an overall /20 allocated to each RNC. The VRF filter will inspect the packet source address and route to the correct RNC. Static routes are required on the endpoints to the OMU and DCN networks via the parent RNC's O&M interface.
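The /29-per-RBS allocation from each RNC's /20 can be enumerated directly. The sketch below (Python, using the standard ipaddress module; the address block shown is purely illustrative) lists the subnets that the static routes and VRF filters would reference:

    import ipaddress

    def rbs_backend_networks(rnc_block, count):
        """First `count` /29 backend subnets carved from an RNC's /20 allocation (512 available)."""
        return list(ipaddress.ip_network(rnc_block).subnets(new_prefix=29))[:count]

    # Hypothetical RNC allocation:
    for subnet in rbs_backend_networks("10.20.0.0/20", 3):
        print(subnet)    # 10.20.0.0/29, 10.20.0.8/29, 10.20.0.16/29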


2.9 Transmission network performance monitoring

As detailed in the transition document, NSI are responsible for ensuring the transmission network meets the target performance KPIs described therein and for providing periodic reporting and backup data to prove adherence to those KPIs. Table 9 describes the KPIs which must be measured.

Network | KPI | Description | Target | Reporting period | Comments
Access MW network | MW link availability | % time available per link | 99.99x% | Weekly / rolling 365 day period | Based on the link license conditions
Access MW network | MW link ACM operation | % of time operating at each modulation level per link | 99.99x% | Weekly / rolling 365 day period | Based on the link license conditions
Access MW network | MW network availability | % time available across network | 99.96% | Weekly / rolling 365 day period |
Access MW network | MW network ACM operation | % time available across network | 99.96% | Weekly / rolling 365 day period |
Access MW network | MW link performance | % BBE per link | Tbc | Weekly / rolling 365 day period | Requires integration to post processing function
Access MW network | MW link packet network performance | % packet loss across each link | Tbc | Weekly / rolling 365 day period | Requires export and post processing of RMON counters per link
Access MW network | MW link packet network performance | Delay variation across each link | Tbc | Weekly / rolling 365 day period | Not available in release 1 hardware; integration to post processing tool necessary
IP/MPLS network | Latency | One way packet delay from collector switch to core MPLS routers | <15ms | Weekly / rolling 365 day period |
IP/MPLS network | Jitter | One way packet delay variance from collector switch to core MPLS router | <3ms | Weekly / rolling 365 day period |
IP/MPLS network | Packet loss | % packet loss per MPLS trunk | <0.2% | Weekly / rolling 365 day period |
IP/MPLS network | Throughput | Per collector site, daily average and busy hour | NA | Weekly / rolling 365 day period | Based on Small, Medium or Large design
IP/MPLS network | Availability | Availability of each collector site | 99.99x% | Weekly / rolling 365 day period |
End to end transmission | Throughput per service | Per service throughput | NA | Weekly / rolling 365 day period | Collected from RAN / enterprise client
End to end transmission | Packet loss per service | % packet loss per service | Tbc | Weekly / rolling 365 day period | Collected from RAN / enterprise client
End to end transmission | Per service availability | % time available per service | Tbc | Weekly / rolling 365 day period | Collected from RAN / enterprise client

Table 9: NSI transmission network KPIs and reporting structure

In order to ensure efficient collection, post processing and reporting against each of the KPIs described above, and those required in the future, NSI are required to export the performance and configuration management data from the transmission network elements to a post processing tool. This will require the evaluation of the tools available today and possible replacements. This section will be updated to reflect the selected system and its operation once it has been selected and designed. Until such time as a post processing tool is available, all KPIs will be measured using the available tools on the respective vendor management platforms.
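Pending selection of the post processing tool, the availability KPIs above reduce to a simple calculation over outage records in a rolling 365 day window. A minimal sketch (Python; the record format and values are illustrative):

    from datetime import datetime, timedelta

    def availability(outages, report_end, window_days=365):
        """% time available for one link or service; outages is a list of (start, end) datetimes."""
        window_start = report_end - timedelta(days=window_days)
        downtime = timedelta()
        for start, end in outages:
            overlap = min(end, report_end) - max(start, window_start)
            if overlap > timedelta():
                downtime += overlap
        return 100 * (1 - downtime / (report_end - window_start))

    # A single 30 minute outage in the window gives ~99.994% availability.
    outages = [(datetime(2013, 5, 1, 10, 0), datetime(2013, 5, 1, 10, 30))]
    print(round(availability(outages, datetime(2013, 6, 13)), 4))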


3.0 Site configuration


This section outlines guidelines to be followed when planning and deploying at consolidated sites throughout the network. Throughout the network the site types can be subdivided into 3 broad categories:
1. Core
2. Backhaul
3. Access
Within each of these categories there can be variations in site design based primarily on the site provider, equipment shelter, deployed hardware and required aggregation. It should be noted that while the following subsections detail guidelines that should be followed when designing each site, in certain circumstances bespoke solutions may be required. For such solutions, the NSI transmission team should be consulted prior to finalising the design.

3.1 Core sites

Core sites refer to those locations where the transmission network is directly connected to the mobile core and/or enterprise core networks. The main features that categorise these locations are:
- Transmission network has direct connectivity within the same site to a mobile core node (BSC, RNC, EPC)
- Transmission network has direct physical access to the core enterprise network

The following table details the minimum requirements which must be satisfied when designing such sites.

- Network resilience / External optical cabling: For diverse fibre routes a minimum of 5m physical separation is required from the external network through to the ODF presentation in the NSI equipment room.
- Network resilience / Internal diverse optical baseband cable management: Intra ODF and ODF to equipment rack cabling will not at any point share the same section of the fibre management system (FMS). Diverse ports (e.g. East/West) and protecting ports (e.g. MSP, ELP, RPS, VRRP) will terminate on diverse ODF and will at no point share sections of the FMS.
- Network resilience / Internal diverse electrical baseband cable management: Intra DDF and DDF to equipment rack cabling will not at any point share the same section of the cable management infrastructure. Diverse ports (e.g. East/West) and protecting ports (e.g. MSP, ELP, RPS, VRRP) will terminate on diverse DDF and will at no point share sections of the cable management infrastructure. (Note: these guidelines apply to both 120 Ohm and 75 Ohm systems; for clarity, 120 Ohm distribution frames may also be referred to as patch panels.)
- Network resilience / Power: All equipment within the core site will have diverse A & B dc power. The A & B supply must be traced to separate DC rectifier systems within the core site. The DC rectifiers within the core site will be powered from a UPS ac supply which is backed up by generator power for a minimum of 24 hours.
- Network resilience / Power cabling: Cables for A and B power (ac and dc) will at no stage share sections of the cable management infrastructure.
- Network resilience / Rack layout: Core equipment (DWDM, IP/MPLS, ATM, SDH) operating in a resilient or load sharing capacity should not be collocated within the same rack.
- Network dimensioning / Power: DC rectifiers should be dimensioned with consideration for a minimum of 2 x spare rectifier units within each cabinet. Once this limit is reached additional rectifiers should be deployed to meet any additional requirements.
- Network dimensioning / Power: 3 phase ac supplies should be used in all cases.
- Network dimensioning / Power: AC power for each rectifier unit should be dimensioned with a minimum overhead of 20% to facilitate emergency expansions and inefficiencies within the rectifier units.
- Network dimensioning / Power cable labelling: All power cables must be labelled indicating the remote end equipment and location; all MCBs must be labelled indicating the remote equipment ID.
- Internal cabling / Optical cabling (standard): Single mode fibre should be used in all cases.
- Internal cabling / Optical cabling (equipment interconnect): All equipment interconnects must be done via ODF. No direct cabling from equipment to equipment should be implemented at any stage.
- Internal cabling / Optical cabling (labelling): All cables must be labelled at the equipment and at the frame indicating the next hop (e.g. ODF and position) and the final destination (equipment and port ID).
- Internal cabling / Structured cabling (standard): CAT6 should be used in all cases at a minimum.
- Internal cabling / Structured cabling (equipment interconnect): All equipment interconnects must be done via patch panel. No direct cabling from equipment to equipment should be implemented at any stage.
- Internal cabling / Structured cabling (labelling): All cables must be labelled at the equipment and at the frame indicating the next hop (e.g. patch panel and position) and the final destination (equipment and port ID).
- Internal cabling / 75 Ohm cabling (standard): RA7000 should be used in all cases at a minimum.
- Internal cabling / 75 Ohm cabling (equipment interconnect): All equipment interconnects must be done via DDF. No direct cabling from equipment to equipment should be implemented at any stage.
- Internal cabling / 75 Ohm cabling (labelling): All cables must be labelled at the equipment and at the frame indicating the next hop (e.g. DDF and position) and the final destination (equipment and port ID).
- MW radio / Rack installation: Dedicated racks to house the MW radio IDUs will be installed. A DC headrail should be installed in the transmission cabinet with facility for a minimum of 5 x A and 5 x B MCBs; 6A MCBs should be fitted as standard; the A and B side will be connected to the respective A & B side of the DC rectifier unit.
- MW radio / Baseband cabling (75 Ohm Type 43 to 75 Ohm Type 43): To facilitate cabling between MW IDU equipment within the same rack a DDF will be installed within the MW equipment rack.
- MW radio / Baseband cabling (75 Ohm Type 43 to 120 Ohm RJ45): To facilitate cabling between MW IDU equipment within the same rack a 24 port balun should be installed within the same equipment rack.
- MW radio / Baseband cabling (120 Ohm to 120 Ohm RJ45 for TDM services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW radio / Baseband cabling (120 Ohm to 120 Ohm RJ45 for Ethernet services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW radio / Baseband cabling (optical): To facilitate cabling between MW IDU equipment within the same rack a 24 port SC optical patch panel should be installed within the same equipment rack.
- MW radio / IF cable: All IF cables from the antenna support structure will terminate on N-Type bulkhead connectors and panel to the rear of the MW transmission rack; IF fly leads from the IDU will terminate on the required N-Type bulkhead connecting to the system ODU.
- MW radio / IDU labelling: Near end ID; far end ID; local IP address and subnet; remote IP address and subnet; commissioned Tx power; commissioned RSL.
- MW radio / IF labelling: Tx frequency (MHz); all IF cable labels should be prefixed with NSI; far end ID on fly lead; far end ID at bulkhead connector; far end ID inside of Roxtec; far end ID outside of Roxtec.
- MW radio / ODU and antenna labelling: Far end ID @ ODU; far end site name & ID; Tx frequency (MHz); polarisation; commissioned Tx power; commissioned RSL.

Table 10: Core site build guidelines

3.2 Backhaul sites

Backhaul sites refer to those locations where the transmission network is aggregating large amounts of customer traffic onto high speed transmission links. For TDM traffic this refers to N+0 (where N>1) SDH backhaul, and for the MPLS network this refers to the Level 2 routing area. For all of these cases the equipment must be housed in a building or Portacabin. Table 11 details the minimum requirements which must be satisfied when designing such sites.

- Network resilience / External optical cabling: For diverse fibre routes a minimum of 5m physical separation is required from the external network through to the ODF presentation in the NSI equipment room.
- Network resilience / Internal diverse optical baseband cable management: Intra ODF and ODF to equipment rack cabling will not at any point share the same section of the fibre management system (FMS). Diverse ports (e.g. East/West) and protecting ports (e.g. MSP, ELP) will terminate on diverse ODF and will at no point share sections of the FMS.
- Network resilience / Internal diverse electrical baseband cable management: Intra DDF and DDF to equipment rack cabling will not at any point share the same section of the cable management infrastructure. Diverse ports (e.g. East/West) and protecting ports (e.g. MSP, ELP) will terminate on diverse DDF and will at no point share sections of the cable management infrastructure. (Note: these guidelines apply to both 120 Ohm and 75 Ohm systems; for clarity, 120 Ohm distribution frames may also be referred to as patch panels.)
- Network resilience / Power: All equipment within the backhaul site will have diverse A & B dc power.
- Network resilience / Power cabling: Cables for A and B power will at no stage share sections of the cable management infrastructure.
- Network resilience / Rack layout: Core equipment (DWDM, IP/MPLS, ATM, SDH) operating in a resilient or load sharing capacity should not be collocated within the same rack.
- Network dimensioning / Power: DC rectifiers should be dimensioned with consideration for a minimum of 2 x spare rectifier units within each cabinet. Once this limit is reached additional rectifiers should be deployed to meet any additional requirements.
- Network dimensioning / Power: 3 phase ac supplies should be used in all cases.
- Network dimensioning / Power: AC power for each rectifier unit should be dimensioned with a minimum overhead of 20% to facilitate emergency expansions and inefficiencies within the rectifier units.
- Network dimensioning / Power: Sufficient battery backup should be in place to power all Tx equipment on site for a minimum of 8 hours.
- Network dimensioning / Power: For remote locations, diesel generators should be in place to facilitate full Tx site operation for a minimum of 24 hours.
- Network dimensioning / Power cable labelling: All power cables must be labelled indicating the remote end equipment and location; all MCBs must be labelled indicating the remote equipment ID.
- Internal cabling (MPLS / SDH) / Optical cabling (standard): Single mode fibre should be used in all cases.
- Internal cabling (MPLS / SDH) / Optical cabling (equipment interconnect): All equipment interconnects must be done via ODF. No direct cabling from equipment to equipment should be implemented at any stage.
- Internal cabling (MPLS / SDH) / Optical cabling (labelling): All cables must be labelled at the equipment and at the frame indicating the next hop (e.g. ODF and position) and the final destination (equipment and port ID).
- Internal cabling (MPLS / SDH) / Structured cabling (standard): CAT6 should be used in all cases at a minimum.
- Internal cabling (MPLS / SDH) / Structured cabling (equipment interconnect): All equipment interconnects must be done via patch panel. No direct cabling from equipment to equipment should be implemented at any stage.
- Internal cabling (MPLS / SDH) / Structured cabling (labelling): All cables must be labelled at the equipment and at the frame indicating the next hop (e.g. patch panel and position) and the final destination (equipment and port ID).
- Internal cabling (MPLS / SDH) / 75 Ohm cabling (standard): RA7000 should be used in all cases at a minimum.
- Internal cabling (MPLS / SDH) / 75 Ohm cabling (equipment interconnect): All equipment interconnects must be done via DDF. No direct cabling from equipment to equipment should be implemented at any stage.
- Internal cabling (MPLS / SDH) / 75 Ohm cabling (labelling): All cables must be labelled at the equipment and at the frame indicating the next hop (e.g. DDF and position) and the final destination (equipment and port ID).
- MW radio installation / Rack installation: Dedicated racks to house the MW radio IDUs will be installed.
- MW radio equipment rack installation (power distribution) / Transmission rack: A DC headrail should be installed in the transmission cabinet with facility for a minimum of 5 x A and 5 x B MCBs; 6A MCBs should be fitted as standard; the A and B side will be connected to the respective A & B side of the DC rectifier unit.
- MW radio installation / Baseband cabling (75 Ohm Type 43 to 120 Ohm RJ45): To facilitate cabling between MW IDU equipment within the same rack a 24 port balun should be installed within the same equipment rack.
- MW radio installation / Baseband cabling (120 Ohm to 120 Ohm RJ45 for TDM services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW radio installation / Baseband cabling (120 Ohm to 120 Ohm RJ45 for Ethernet services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW radio installation / Baseband cabling (optical): To facilitate cabling between MW IDU equipment within the same rack a 24 port SC optical patch panel should be installed within the same equipment rack.
- MW radio / IF cable: All IF cables from the antenna support structure will terminate on N-Type bulkhead connectors and panel to the rear of the MW transmission rack; IF fly leads from the IDU will terminate on the required N-Type bulkhead connecting to the system ODU.
- MW radio / IDU labelling: Near end ID; far end ID; local IP address and subnet; remote IP address and subnet; commissioned Tx power; commissioned RSL.
- MW radio / IF labelling: Tx frequency (MHz); all IF cable labels should be prefixed with NSI; far end ID on fly lead; far end ID at bulkhead connector; far end ID inside of Roxtec; far end ID outside of Roxtec.
- MW radio / ODU and antenna labelling: Far end ID @ ODU; far end site name & ID; Tx frequency (MHz); polarisation; commissioned Tx power; commissioned RSL.

Table 11: Backhaul site build guidelines

3.2.1 BT TT locations
One specific type of backhaul site is those co-located with the BT TT network. In this case certain restrictions apply in terms of space and presentation of managed circuits, which must adhere to BT co-location rules. Specifically:
- NSI transmission equipment will be housed within the same rack
- BT will present all circuits on a single ODF patch panel within the NSI equipment rack
- Inter-shelf cabling can be run directly between the NSI equipment within the same equipment rack


3.3 Access locations


Access locations refer to all sites not covered under sections 3.1 and 3.2 above. This classification of site covers the vast majority of sites in the network and can be subdivided into the site categories described in Table 12.

Access site category | Characteristics | Comments
Tail site (Portacabin & outdoor cabinet options) | Single unprotected transmission link; Tx solution for a single site |
Feeder site (fibre) (Portacabin & outdoor cabinet options) | Aggregation site with fibre backhaul to MPLS ABR; aggregation of multiple tail and/or feeder links |
Feeder site (MW) (Portacabin & outdoor cabinet options) | Feeder site with MW transmission link to backhaul site; aggregation of multiple tail and feeder links |

Table 12: Access site categories

Within this section each site category will be described in terms of equipment installation, power and baseband interconnection

3.3.1 Access sites (Portacabin installation)


- Power / Transmission rack: 19" racks should be installed as standard. A DC head rail should be installed in the transmission rack with facility for a minimum of 10 x A & 10 x B MCBs; 6A MCBs should be fitted as standard; the A and B side will be connected to the respective A & B side of the DC rectifier unit.
- Power / Transmission equipment: 2 x 63A connections should be fitted as standard from the rectifier A & B supply to the respective A & B connections on the DC headrail; the transmission equipment A & B power will be connected to the respective A & B side of the DC head rail within the Tx rack.
- Power / Battery configuration: Battery backup for the Tx equipment should be configured for a minimum of 4 hrs.
- Power / Labelling: All power cables will be labelled with the remote termination ID; all MCBs will be labelled with the remote equipment ID.
- Indoor equipment / Hardware installation: All indoor transmission equipment should be housed within a 19" rack.
- 3PP presentation / Optical: 3PP services will be presented on a 19" SC patch panel within the Tx rack.
- 3PP CPE / Hardware: All 3PP CPE will be housed within the Tx rack.
- MW radio installation / IDU installation: All MW radio IDU hardware to be installed in a 19" Tx rack.
- MW radio installation / Baseband cabling (75 Ohm Type 43 to 120 Ohm RJ45): To facilitate cabling between MW IDU equipment within the same rack a 24 port balun should be installed within the same equipment rack.
- MW radio installation / Baseband cabling (120 Ohm to 120 Ohm RJ45 for TDM services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW radio installation / Baseband cabling (120 Ohm to 120 Ohm RJ45 for Ethernet services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW radio installation / Baseband cabling (optical): To facilitate cabling between MW IDU equipment within the same rack a 24 port SC optical patch panel should be installed within the same equipment rack.
- MW radio / IF cable: All IF cables from the antenna support structure will terminate on N-Type bulkhead connectors and panel to the rear of the MW transmission rack; IF fly leads from the IDU will terminate on the required N-Type bulkhead connecting to the system ODU.
- MW radio / IDU labelling: Near end ID; far end ID; local IP address and subnet; remote IP address and subnet; commissioned Tx power; commissioned RSL.
- MW radio / IF labelling: Tx frequency (MHz); all IF cable labels should be prefixed with NSI; far end ID on fly lead; far end ID at bulkhead connector; far end ID inside of Roxtec; far end ID outside of Roxtec.
- MW radio / ODU and antenna labelling: Far end ID @ ODU; far end site name & ID; Tx frequency (MHz); polarisation; commissioned Tx power; commissioned RSL.

3.3.2 Access site (Outdoor cabinet installation)


Table 11 details the rules to follow when consolidating onto a single site where no 3PP services are in place. Table 12 details the additional guidelines that must be considered where network consolidation is proposed on a site with existing 3PP services.

- Power / Transmission rack: A 2m site support unit should be installed on all outdoor cabinet sites as standard to facilitate Tx consolidation. A DC head rail should be installed in the site support unit with facility for a minimum of 10 x A & 10 x B MCBs; 6A MCBs should be fitted as standard; the A and B side will be connected to the respective A & B side of the DC rectifier unit.
- Power / Transmission equipment: 2 x 63A connections should be fitted as standard from the rectifier A & B supply to the respective A & B connections on the DC headrail; the transmission equipment A & B power will be connected to the respective A & B side of the DC head rail within the Tx rack.
- Power / Battery configuration: Battery backup for the Tx equipment should be configured for a minimum of 4 hrs.
- Power / Labelling: All power cables will be labelled with the remote termination ID; all MCBs will be labelled with the remote equipment ID.
- Indoor equipment / Hardware installation: All new hardware will be installed in the site support unit.
- 3PP presentation / Optical: All new 3PP services will be presented on a 1U ODF within the site support unit.
- 3PP CPE / Hardware: All new 3PP CPE will be housed within the site support unit.
- MW radio installation / IDU installation: All MW radio IDU hardware to be installed in a 19" Tx rack.
- MW radio installation / Baseband cabling (75 Ohm Type 43 to 120 Ohm RJ45): To facilitate cabling between MW IDU equipment within the same rack a 24 port balun should be installed within the same equipment rack.
- MW radio installation / Baseband cabling (120 Ohm to 120 Ohm RJ45 for TDM services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW radio installation / Baseband cabling (120 Ohm to 120 Ohm RJ45 for Ethernet services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW radio installation / Baseband cabling (optical): To facilitate cabling between MW IDU equipment within the same rack a 24 port SC optical patch panel should be installed within the same equipment rack.
- MW radio / IF cable: All IF cables from the antenna support structure will terminate on N-Type bulkhead connectors and panel to the rear of the MW transmission rack; IF fly leads from the IDU will terminate on the required N-Type bulkhead connecting to the system ODU.
- MW radio / IDU labelling: Near end ID; far end ID; local IP address and subnet; remote IP address and subnet; commissioned Tx power; commissioned RSL.
- MW radio / IF labelling: Tx frequency (MHz); all IF cable labels should be prefixed with NSI; far end ID on fly lead; far end ID at bulkhead connector; far end ID inside of Roxtec; far end ID outside of Roxtec.
- MW radio / ODU and antenna labelling: Far end ID @ ODU; far end site name & ID; Tx frequency (MHz); polarisation; commissioned Tx power; commissioned RSL.

Table 11: Access site consolidation - no 3PP services in place

- IP/MPLS / Equipment installation: IP/MPLS equipment should be installed within the same outdoor cabinet as the existing 3PP CPE.
- IP/MPLS / Equipment installation: Where space restricts the possibility to install the IP/MPLS equipment within the same cabinet, the IP/MPLS equipment should be housed in the site support unit.
- IP/MPLS / Intra cabinet cabling rules: Where the site support unit and the existing 3PP CPE are in separate outdoor cabinets but on the same plinth, all cabling should be run directly via the cable management systems in place between the outdoor cabinets. Where the outdoor cabinets do not share the same plinth, structured cabling is required between the outdoor cabinets. The following rules apply for each service (optical, Ethernet & TDM): 12 pair SM fibre suitable for outdoor installation should be run and presented on a 1U splice/presentation tray within each cabinet; 12 pair CAT6 suitable for outdoor installation should be run and presented on a 1U patch panel within each cabinet; 16 core coax suitable for outdoor installation should be run and presented on a 2U DDF within each cabinet.

Table 12: Outdoor cabinet consolidation - existing 3PP CPE on site
