
Optimization Guidelines:

Capacity in Huawei

CONTENTS

1. INTRODUCTION
2. CAPACITY
3. Process for UTRAN Capacity Management
   3.1. Capacity Metrics Monitoring
   3.2. Capacity Analysis
      3.2.1. RULE OUTS
      3.2.2. Historical Node B Data
      3.2.3. RNC/NodeB Dumps Audit
   3.3. Capacity Optimization
      3.3.1. TRAFFIC OFFLOAD
         3.3.1.1. Physical and Parameter Setting Changes
         3.3.1.2. Huawei: Direct Retry
         3.3.1.3. Huawei: Intra-Frequency Load Balancing
         3.3.1.4. Huawei: Load Reshuffling
      3.3.2. SHO Optimization
   3.4. Capacity Upgrade
4. DL TRANSMITTED POWER
   4.1. AVAILABLE CAPACITY IN TERMS OF DL TX POWER
   4.2. METRICS for DL TX POWER Monitoring
      4.2.1. Total DL TX Power Utilization (%)
      4.2.2. Total non-HS DL TX Power Utilization (%)
      4.2.3. DL TX Power used by HSDPA Utilization
      4.2.4. DL ENU Utilization (%)
      4.2.5. AC Rejections due to DL TX Power
   4.3. DL TX POWER Performance Analysis and Optimisation
      4.3.1. Parameters Optimization
      4.3.2. RF Optimization
      4.3.3. Activation of Features
5. UL RECEIVED POWER
   5.1. AVAILABLE CAPACITY IN TERMS OF UL RX POWER
   5.2. METRICS for UL RX POWER Monitoring
      5.2.1. Received Total Wideband Power, RTWP
      5.2.2. Equivalent User Number (ENU) in UL
   5.3. UL RX POWER Utilization
      5.3.1. Total UL RX Load Factor (%)
      5.3.2. UL ENU Utilization (%)
   5.4. UL RX POWER Performance Analysis and Optimisation
      5.4.1. UL RX Power from HSUPA
      5.4.2. AC Rejections due to UL RX Power
   5.5. UL RX POWER Performance Analysis and Optimisation
      5.5.1. Parameter Optimization
      5.5.2. RF Optimization
      5.5.3. Node B UL Power Capacity Upgrade
6. CHANNELIZATION (OVSF) CODES
   6.1. AVAILABLE CAPACITY IN TERMS OF OVSF CODES (Downlink)
   6.2. METRICS for CHANNELIZATION CODES Monitoring
      6.2.1. Code Tree Usage
      6.2.2. Code Blocking
      6.2.3. Average Number of Codes Reserved for HS
      6.2.4. AC Rejections due to Channelization Codes
   6.3. OVSF CODES Performance Analysis and Optimisation
      6.3.1. Parameters Optimization
      6.3.2. RF Optimization
      6.3.3. Activation of Features
      6.3.4. Node B UL Power Capacity Upgrade
7. CHANNEL ELEMENTS (CE)
   7.1. AVAILABLE CAPACITY IN TERMS OF CHANNEL ELEMENTS
   7.2. Dynamic CE Resource Management
      7.2.1. Periodical CE Resource Adjustment
      7.2.2. CE Resource Adjustment Triggered by Event
   7.3. METRICS for CHANNEL ELEMENTS Monitoring
      7.3.1. Number of CEs Available and Used in UL/DL
      7.3.2. Average UL/DL CE Utilization (%)
      7.3.3. Setup Failures (Blockings) due to CE Shortage
      7.3.4. Releases and Downgrades due to CE Shortage
   7.4. CE Performance Analysis and Optimization
      7.4.1. SHO Overhead Optimization
      7.4.2. WBTS CE Capacity Upgrade
8. BACKHAUL (Iub)
   8.1. AVAILABLE CAPACITY IN TERMS OF BACKHAUL RESOURCES
   8.2. METRICS for BACKHAUL RESOURCES Monitoring
      8.2.1. Traffic Load Measurements
         8.2.1.1. ATM Backhaul
         8.2.1.2. Iub Utilization
         8.2.1.3. Number of Active HSDPA Users
         8.2.1.4. Number of AAL2 Connections
         8.2.1.5. Average Cell Drop Rate
         8.2.1.6. Transport Network Blocking
   8.3. Iub Performance Analysis and Optimisation
      8.3.1. AAL2 Channel Identifiers Blocking
      8.3.2. Adjusting the Maximum Available Bandwidth of the Iub Port
         8.3.2.1. Algorithm for ATM Transport
         8.3.2.2. Algorithm for IP Transport
         8.3.2.3. Adjusting the Available Bandwidth of HSUPA
         8.3.2.4. Handling Iub Buffer Congestion
9. HSxPA USERS
   9.1. AVAILABLE CAPACITY IN TERMS OF HSDPA USERS
   9.2. AVAILABLE CAPACITY IN TERMS OF HSUPA USERS
   9.3. METRICS for HSxPA USERS Monitoring
      9.3.1. Avg Number of Simultaneous HSxPA Users
      9.3.2. Peak Number of HSDPA Users in BTS
      9.3.3. Avg HSxPA Users License Utilization (%)
   9.4. HSxPA USERS Performance Analysis and Optimization
      9.4.1. Parameters Optimization
      9.4.2. RF Optimization
      9.4.3. Node B DL Power Capacity Upgrade
10. RNC LOAD
   10.1. AVAILABLE CAPACITY IN TERMS OF RNC LOAD
      10.1.1. RNC Boards
   10.2. METRICS for RNC Load Monitoring
      10.2.1. Main Processors Load
      10.2.2. Secondary Processors Load
11. Additional ADMISSION CONTROL Metrics
   11.1. Setup Failures due to Admission Control
   11.2. Users in COMPRESSED MODE
12. Additional CONGESTION CONTROL Metrics
   12.1. RT over NRT
   12.2. PRE-EMPTION
   12.3. Huawei: Overload Control
   12.4. Radio Bearer Downgrade and Release Due to Congestion
13. PERFORMANCE ALARMS and CAPACITY WEEKLY REPORT
   13.1. Performance Alarms
   13.2. Capacity Weekly Report
14. REFERENCES
A. ANNEX I: LOAD MANAGEMENT IN Huawei
   A.a. Priority Involved in Load Control
      A.a.a. RAB Integrate Priority
      A.a.b. User Integrate Priority
      A.a.c. User Priority
   A.b. Load Measurement
      A.b.a. Measurement Quantities and Procedure
         A.b.a.a. Major Measurement Quantities
         A.b.a.b. LDM Procedure
      A.b.b. Filtering of Load Measurement
         A.b.b.a. Smooth Window Filtering on the RNC Side
         A.b.b.b. Reporting Interval
         A.b.b.c. Provided Bit Rate
      A.b.c. Auto-Adaptive Background Noise Update
      A.b.d. Potential User Control
      A.b.e. Intelligent Access Control
      A.b.f. RRC Connection Processing
         A.b.f.a. Signaling Radio Bearer Admission Decision
      A.b.g. RAB Setup Processing
         A.b.g.a. Rate Negotiation
         A.b.g.b. Preemption
         A.b.g.c. Queuing
         A.b.g.d. Directed Retry Decision
   A.c. Call Admission Control
      A.c.a. CAC Based on Code Resource
      A.c.b. CAC Based on Power Resource
      A.c.c. CAC Based on NodeB Credit Resource
      A.c.d. CAC Based on Iub Interface Resource
      A.c.e. CAC Based on the Number of HSPA Users
         A.c.e.a. CAC of HSDPA Users
         A.c.e.b. CAC of HSUPA Users
      A.c.f. Intra-Frequency Load Balancing
   A.d. Load Reshuffling
      A.d.a. Triggering of Basic Congestion
      A.d.b. LDR Procedure
      A.d.c. LDR Actions
         A.d.c.a. Inter-Frequency Load Handover
         A.d.c.b. BE Rate Reduction
         A.d.c.c. Uncontrolled Real-Time QoS Renegotiation
         A.d.c.d. Inter-RAT Handover in the CS Domain
         A.d.c.e. Inter-RAT Handover in the PS Domain
         A.d.c.f. AMR Rate Reduction
         A.d.c.g. Code Reshuffling
         A.d.c.h. UL and DL LDR Action Combination of a UE
   A.e. Overload Control
      A.e.a. Triggering of OLC
      A.e.b. General OLC Procedure
      A.e.c. OLC Actions
         A.e.c.a. TF Control
         A.e.c.b. Switching BE Services to Common Channel
         A.e.c.c. Release of Some RABs

1. INTRODUCTION

This series of Optimization Guidelines covers all the main topics regarding:

- Performance Monitoring & Analysis
- Configuration settings
- Troubleshooting

Refer to the internal Claro document Ref. ####.##, Optimization Process, for a summary of 3G WCDMA Radio Access Network Optimization Basics.
This specific document focuses on CAPACITY and its specifics within Huawei infrastructure (Release
RAN 6.1).
Target users for this document are all personnel requiring a detailed description of this process (Capacity
Optimization), as well as configuration managers who require details to control the functions and
optimize parameter settings. It is assumed that users of this document have a working knowledge of 3G
telecommunications and are familiar with WCDMA.

Document Revision Control

Revision   Date          Author   Changes
Draft01    31-Jan-2010   QCES     First draft of the document.
Draft02    26-Feb-2010   QCES     Correction of the UL load formulas; added sections on the specifications of the Node B types used at Claro (DBS3800 and DBS3900) and on the new features for RAN10; completed the Load Management Annex.

2. CAPACITY

This document describes how to monitor and optimize the performance of a UMTS network in terms of CAPACITY through counters and KPIs (with focus on Huawei networks). The overall goal of the process is an efficient utilization of the resources: as high as possible with no congestion.
End-to-end capacity dimensioning consists of calculating the required resources at each stage and comparing them to the available resources, so that bottlenecks are avoided and, if detected, appropriate actions can be recommended. For example, the number of CEs in each cell needs to match the radio Erlang capacity, the number of OVSF codes in each cell needs to match the number of users supported on the air interface, and the Iub bandwidth needs to match the expected Iub throughput.
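As a minimal illustration of this required-versus-available comparison, the sketch below flags the resources that would become bottlenecks in a cell. The resource names and figures are hypothetical examples, not Claro dimensioning values.

```python
# Minimal sketch of the required-vs-available check described above.
# Resource names and figures are hypothetical examples only.
required = {"CE_UL": 96, "OVSF_codes_SF128": 100, "Iub_Mbps": 12.0}
available = {"CE_UL": 128, "OVSF_codes_SF128": 128, "Iub_Mbps": 8.0}

for resource, needed in required.items():
    capacity = available[resource]
    status = "BOTTLENECK" if needed > capacity else "OK"
    print(f"{resource}: required={needed}, available={capacity} -> {status}")
```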
Typically, the air interface is the natural bottleneck for network capacity. Provided the RF is optimized, an increase in air interface capacity is expensive, as this generally means additional sites or additional carriers.
It is important to make sure that the air interface is indeed the bottleneck: all other resources should be dimensioned in excess of the air interface resources, but since those resources are also costly, they need to be carefully planned, i.e. CEs, backhaul, Iu, MSC and SGSN trunks.
In this document it is assumed that the planned capacity has already been implemented in the network, and the focus is on its MONITORING and OPTIMIZATION to identify/anticipate/avoid congestion issues at any Node/Resource in the network, with an emphasis on UTRAN Nodes/Resources, showing how to remove congestion from the network in case it is already present. [Further revisions will also include CORE Network aspects.]
A section will be devoted to describing how to monitor, optimize and troubleshoot the utilization of each of the following resources:

- Air Interface Resources
  o DL TX POWER
  o UL RX POWER
  o Channelization (OVSF) CODES
- Node B Hardware
  o CHANNEL ELEMENTS (CE)
- RNC Hardware
  o MAIN PROCESSORS LOAD
- TN (Transport Network)
  o Iub INTERFACE (also: Iur, Iu)
Each of the sections will follow the same basic structure:

1. The available capacity for each resource will be estimated based on the installed hardware, parameter settings and Admission/Congestion Control configuration.
2. Different metrics to monitor the performance of each resource will be introduced. These measurements will be of two types:
   i. Proactive (USAGE): performance counters are used to allow trending of growth for capacity upgrades, i.e. counters and KPIs to monitor the utilization of the resource.
   ii. Reactive (BLOCKING): performance counters are used to indicate that a particular element has become congested such that it is causing a negative impact on other KPIs (mainly accessibility, but also retainability, integrity, etc.).
3. Together with the metrics above, thresholds will also be suggested. They are of two types: MINOR, for early detection of the potential exhaustion of a resource, and MAJOR, for the detection of a present shortage already causing degradation in the performance.
4. Suggestions for the optimization/troubleshooting of each resource will be provided based on the analysis of the KPIs and blocking metrics.

These guidelines are intended to be as practical as possible. The target was to produce both a list of Performance Alarms for early detection and tracking of capacity issues, and a weekly summary, the Capacity Weekly Report (which could also be implemented in SMART). The aim is to help the optimizer monitor the capacity trends in all cells in the network and also to highlight the cells that already require some (or even urgent) attention. For the definition of the thresholds for the Alarms, we propose to carry out a study based on the data currently available in the OSS. In this document, we provide initial estimates that need to be verified against real data.
The attached Excel file contains the full list of proposed thresholds.

Note that this first release of the document focuses on capacity troubleshooting: issue detection and solving.
[To be discussed within Claro whether future versions will add capacity trend estimation aspects or whether these are to be considered under the Planning process scope.]
Before starting, it is recommended to revise the References in section 14 in order to refresh and clarify the concepts that will be used extensively in this text: Radio Resource Management, Admission Control, Congestion/Load Control, etc.
In this document we will use the following naming conventions:
- Counters will be written in italic letters.
  o Examples: VS.MeanTCP, VS.MinTCP
- Parameters will be written in italic and bold letters.
  o Example: DLCELLTOTALTHD

3. Process for UTRAN Capacity Management

The process proposed in this document includes, at a high level, the following three sequential steps:

1. Counter & KPI monitoring with agreed trigger thresholds (daily/weekly) to measure resource usage or blocking.
2. Detailed blocking cause analysis & optimization actions.
3. Possible hardware capacity upgrade actions & verification.

The capacity upgrade process involves deciding whether an upgrade is required or whether the Node B is likely to benefit from further optimization. If an upgrade is required, upgrade verification is performed once the upgrade has been implemented.

Figure 1 Capacity Monitoring LifeCycle


The RAN capacity management process described in this document has the following steps:

1. Describe & monitor capacity KPIs (proactive or reactive) daily/weekly during the busy hour.
2. If one of the thresholds is exceeded, start detailed blocking analysis.
3. Start tuning actions on the interface where the trigger has happened. These actions could be, for example:
   - Check of other performance indicators (SHO, traffic, etc.)
   - History & databuild check
   - Parameter optimization
   - RF optimization
   - Introduction of advanced features
4. Start a capacity upgrade if fault finding & tuning did not help.
5. Verify the performance after the upgrade.

Capacity upgrade activities can differ based on what is triggering the process. If the tuning actions don't help, then the next activities could include:

- Software license key for a higher PA output in the Flexi BTS
- More capacity software license / additional system module for the Flexi BTS
- Adding Iub capacity to the Node B
- Implementing optional features, e.g. Throughput Based Optimization, Dynamic Resource Allocation, etc.

Note that the triggering may also depend on the lead time needed to get the additional HW in place. This should be studied on a case-by-case basis. For example, for Iub capacity it could take longer to get new HW in place, whereas installing new Node B HW may be faster. Thus the threshold settings should be tuned so that the capacity upgrade includes the time necessary to procure and implement the solution.

3.1. Capacity Metrics Monitoring

The following chapters will introduce metrics (counters and KPIs) to monitor the utilization (and eventual exhaustion) of each resource. These measurements focus on two aspects:

- utilization of each resource
- blockings due to the exhaustion of each resource

The first metrics (for resource usage monitoring) can be considered proactive, as they show the current occupancy of the resource and allow us to estimate trends in its usage and hence anticipate congestion. Blocking metrics are considered reactive, as they detect current blocking and its impact on other KPIs and, in general, require an urgent reaction from the optimizer.
Both types of measurements will be assigned two different thresholds (minor and major), so that the following Performance Alarms are triggered according to their importance/impact:

Utilization measurements:
- MINOR: the resource utilization has reached a level (Thr.minor) that is considered appropriate to start a capacity analysis in order to anticipate/prevent congestion.
- MAJOR: the resource utilization has reached a higher level (Thr.major) that is assumed to be already causing some impact on performance.

Blocking measurements:
- MINOR: number of blockings not null (i.e., Thr.minor = 0).
- MAJOR: number of blockings above Thr.major.
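As a minimal sketch of how these two threshold pairs could be applied per cell (the numeric thresholds below are illustrative placeholders, not the proposed Claro settings, which come from the OSS study mentioned in section 2):

```python
# Illustrative alarm classification following the MINOR/MAJOR scheme above.
# Threshold values are placeholders, not the proposed Claro settings.
def classify_utilization(util_pct, thr_minor=70.0, thr_major=85.0):
    """Return the alarm level for a utilization metric (in %)."""
    if util_pct >= thr_major:
        return "MAJOR"
    if util_pct >= thr_minor:
        return "MINOR"
    return None

def classify_blocking(num_blockings, thr_major=10):
    """Return the alarm level for a blocking metric (Thr.minor = 0)."""
    if num_blockings > thr_major:
        return "MAJOR"
    if num_blockings > 0:
        return "MINOR"
    return None

print(classify_utilization(78.5))  # -> MINOR
print(classify_blocking(0))        # -> None (no alarm)
```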

3.2. Capacity Analysis

Once the capacity thresholds have been exceeded, based on the daily or weekly monitoring activity, the blocking analysis work should start. The target of this step in the process is the identification of the real root cause of the detected (or anticipated) capacity issue: an unexpected traffic increase due to an event, faulty equipment, misconfiguration, or simply the exhaustion of one of the RAN capacity resources due to the normal, desirable increase in traffic.
The analysis will always consider the possibility of further optimizing the network so that the issue is overcome with a more effective utilization of the currently available capacity. Only if no margin is found to improve efficiency will a capacity upgrade be proposed.

3.2.1. RULE OUTS


If a cell suddenly appears in the Alarms Report for a certain week, the optimizer should review the following possibilities before analyzing the issue further. There is the possibility that the issue can be explained by a simple correlation with known problems or circumstances in the network:
1. Identify and rule out OUTAGES in:
   a. RAN
      i. Radio/Antenna
      ii. TMA/Ancillary
      iii. Neighbour cells
   b. CORE NETWORK
      i. RNC/MSC/Adjunct
   c. BACKHAUL
      i. E1s fluctuation
      ii. TX equipment

2. Identify and rule out RECENT CHANGES:
   a. Physical changes
      i. Antenna swap
   b. Software changes
   c. Parameter setting changes
      i. LAC/MSC changes

3. Identify and rule out OTHERS (miscellaneous):
   a. Special events
   b. Seasonal drivers
   c. Anomalous data
   d. Handset issues
   e. External interference
   f. Physical (clutter) changes
   g. Maintenance hours measurements

If any of these possibilities can explain the capacity issues detected, then some short-term corrective actions should be considered to deal with the temporary degradation in the KPIs until the real problem/cause is solved. Some suggestions will be provided for these specific cases.
In some situations, like seasonal traffic or special events, the new capacity needs should have been estimated in advance in order to cope with the expected increase in traffic. If these situations are not anticipated, then of course some capacity issues will arise. Also, in order to react to unpredictable increases in traffic, some Traffic Offload actions are possible; they will be explained in order to minimize the impact on KPIs.

3.2.2. Historical Node B Data


Prior to carrying out any major analysis, some basic initial information needs to be compiled: the history of the cell or NodeB (in terms of performance and optimization) could contribute additional insight to understand the problem.
Therefore, the collection of historical Node B data, planning tool plots and databuild info is highly recommended:

- Has the Node B triggered the capacity upgrade process in the past?
- History of optimization that has already been completed for that Node B and its neighbours.
- Performance history (other KPIs to be checked: SHO, traffic, throughputs, ...).
- Current RNC/NodeB databuild for that site and its neighbours.
- Planning tool plots of best server areas, service coverage and CPICH Ec/Io coverage.
- Site details (site survey and installation reports, pictures, equipment installed, ...).

Easy access to all these sources of information should be guaranteed to the Optimization Team (it should be treated as a key component of its interfaces with other departments).

3.2.3. RNC/NodeB Dumps Audit


A check should be made to ensure that the RNC databuild is configured as planned. The RNC databuild parameters should be configured either with their defaults or with tuned values that should have been documented. Please refer to the Huawei Baseline defined by Claro, Ref. ##.####.
An error in the RNC databuild could be responsible for reducing the downlink capacity and triggering the capacity upgrade process earlier than it should be.
It is also worth checking that the Node B is measuring its uplink interference floor correctly. When a cell and its neighbours are completely unloaded, the cell should measure only thermal noise, i.e. -108 dBm plus the noise figure of the receiver sub-system. It is possible that a hardware fault or an error in the Node B commissioning parameters leads to an incorrect measurement result. The uplink interference floor should be evaluated using either the online monitoring tool or the RNC counter results.
If an error is identified within the RNC databuild, then an action should be raised to correct that error. The capacity analysis process may then be exited and the capacity metrics monitoring resumed.
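As a quick sanity check of the uplink interference floor, the idle-hour RTWP can be compared with the expected thermal floor of -108 dBm plus the receiver noise figure. The sketch below assumes an illustrative noise figure and tolerance; both are examples, not Claro commissioning values.

```python
# Sanity check of the measured UL interference floor, as described above.
# Noise figure and tolerance values are assumptions for illustration only.
THERMAL_NOISE_DBM = -108.0   # thermal noise in the 3.84 MHz WCDMA bandwidth
NOISE_FIGURE_DB = 2.5        # assumed Node B receiver noise figure
TOLERANCE_DB = 3.0           # allowed deviation before flagging the cell

def check_rtwp_floor(measured_idle_rtwp_dbm):
    expected = THERMAL_NOISE_DBM + NOISE_FIGURE_DB
    delta = measured_idle_rtwp_dbm - expected
    if abs(delta) > TOLERANCE_DB:
        return f"Check HW/commissioning: floor is {delta:+.1f} dB vs expected {expected:.1f} dBm"
    return "UL interference floor looks consistent"

print(check_rtwp_floor(-99.0))   # suspiciously high idle floor -> flagged
```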

3.3. Capacity Optimization

Before deciding on any capacity upgrade, further optimization should be tried first to limit the impact on CAPEX. The following areas of work should be explored:

- Parameters Tuning
  Different recommendations will be provided for each specific resource covered in these guidelines.

- RF Optimization
  In case parameter optimization does not help, RF optimization can be done. This will cost the operator more than parameter optimization. RF optimization basically means tuning the Node B antenna system and includes tilt, bearing and antenna height changes to improve cell dominance areas. It may also include re-engineering the site to reduce feeder lengths or changing the type of antennas.

- Activation of Features
  Different features can be activated in the network to improve the utilization of the resources. Their activation needs to be carefully analyzed, as they usually also imply an important expenditure for the operator and the results may not be as impressive as anticipated. Trials are highly recommended before any final decision is taken. Given the cost involved, this can also be considered under the next step in the process (capacity upgrade).

Please note that parameter tuning will not overcome poor RF optimization. This guideline is written under the assumption that the Optimization Process is already in the In-Service Phase of its lifecycle. Optimizers can now focus on operational KPIs, as the RF and Service ones already received proper attention in the Initial Optimization Phase. The main RF issues (insufficient RSCP and Ec/Io levels, overshooting, pilot pollution, cell fragmentation, etc.) should have been detected and corrected, so even if RF optimization is a continuous effort, it can be assumed that the optimization work is done over a well tuned RF environment. In this context it is suggested to first try to tune the current settings before trying more costly RF optimization possibilities. Parameter changes are easier and faster to implement, and their impact is also easier and faster to evaluate.

3.3.1. TRAFFIC OFFLOAD

When a UTRAN cell becomes congested, for whatever reason, one possibility to provide temporary (and often only partial) relief is to enable whatever features the vendor provides for offloading traffic. This traffic offloading can be done between 3G cells and also from 3G to 2G (of course, this second option is only viable if the target 2G cells are not congested as well).
3.3.1.1. Physical and Parameter Setting Changes
There are multiple actions that can be considered in order to share the traffic between the different cells in an area, to obtain a decrease in the traffic carried by the congested cell and therefore solve, or at least relieve, its capacity issues.
Tuning can be applied, as already suggested, to both the:

- Physical configuration of the site
  o Antenna types, height, azimuth and tilt
- Parameter settings
  o Offsets between cells in the Active Set dynamics (Cell Individual Offsets) and in idle mode (Hysteresis and Qoffsets)
  o Multicarrier settings
  o HCS (Hierarchical Cell Structure) settings, if activated
  o Common Channel Power settings, especially CPICH power, which should be considered very carefully due to the potential negative impacts

3.3.1.2. Huawei: Direct Retry


RAB/RRC Directed Retry Decision (DRD) is triggered when a blind handover to another inter-frequency cell is performed after resource allocation fails in the RNC during RAB setup. Please refer to the Annex: Load Management for further information on the feature.
The RAB DRD procedure is as follows:

1. The RNC makes a decision on the admission of the target inter-frequency cell for blind handover.
2. If the admission request is accepted, the DRD procedure is performed for the target inter-frequency cell for blind handover.
3. The RNC starts the radio link setup procedure to perform the inter-frequency handover.
4. The RNC starts the radio bearer setup procedure to complete the inter-frequency handover on the Uu interface and the service setup.

If step 2, 3 or 4 fails, the RNC performs a repeated RAB DRD in another target inter-frequency cell for blind handover until the retry succeeds, until the retry in all such cells fails, or until the number of retries reaches the value of Max inter-frequency direct retry number.
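Conceptually, the retry behaviour can be sketched as below. The function names (admission_ok, setup_radio_link, setup_radio_bearer) are hypothetical stand-ins for the RNC procedures listed above, not Huawei APIs, and max_drd_retries mirrors the Max inter-frequency direct retry number parameter.

```python
# Conceptual sketch of the RAB DRD retry loop described above.
def rab_drd(candidate_cells, admission_ok, setup_radio_link, setup_radio_bearer,
            max_drd_retries=3):
    retries = 0
    for cell in candidate_cells:            # target inter-frequency cells for blind HO
        if retries >= max_drd_retries:
            break                           # retry budget exhausted
        retries += 1
        if not admission_ok(cell):          # steps 1-2: admission decision for the cell
            continue
        if not setup_radio_link(cell):      # step 3: radio link setup
            continue
        if setup_radio_bearer(cell):        # step 4: radio bearer setup on Uu
            return cell                     # DRD succeeded in this cell
    return None                             # retries failed in all candidate cells
```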
Notes:

- After an HSPA service request is denied, the service falls back to the DCH. Then the service re-attempts to access the network.
- The RAB DRD to a target cell in another system (for example, GSM) for blind handover is similar. For details, refer to Inter-RAT Handover.
- According to the cell type (R99 or R99+HSDPA), an HSDPA user accessing an R99 cell can be directed to an R99+HSDPA cell through DRD. According to the cell parameter R99 CS separation indicator or R99 PS separation indicator, an R99 user accessing an R99+HSDPA cell can be directed to an R99 cell through DRD.
- RAN6.1 does not support inter-RAT DRD for RABs of combined services.
- RAN6.1 does not support inter-RAT DRD for PS services.
- RAN6.1 does not support inter-RAT DRD for HSPA services.


3.3.1.3. Huawei: Intra-Frequency Load balancing

This feature should be analyzed in depth, as it adjusts the CPICH power; it is therefore mandatory to have very detailed information in order to evaluate how it is functioning.
Intra-frequency Load Balancing (LDB) is performed to adjust the coverage areas of cells based on the
measured values of cell load. Currently, the intra-frequency LDB algorithm is applicable to only the
downlink.
LDB between intra-frequency cells is implemented by adjusting the transmit power of the Primary
Common Pilot Channel (P-CPICH) in the associated cells. When the load of a cell increases, the cell
reduces its coverage to lighten its load. When the load of a cell decreases, the cell extends its coverage
so that some traffic is off-loaded from its neighboring cells to it.
When the intra-frequency LDB algorithm is active, that is, when INTRA_FREQUENCY_LDB is set to 1,
the RNC checks the load of cells periodically and adjusts the transmit power of the P-CPICH in the
associated cells based on the cell load.

Figure 2 Intra-Frequency Load Balancing


This process is described as follows:

- If the downlink load of a cell is higher than the value of Cell overload threshold, it is an indication that the cell is heavily loaded. In this case, the transmit power of the P-CPICH needs to be reduced by a step, which is defined by the Pilot power adjustment step parameter. If the current transmit power is equal to the value of Min transmit power of PCPICH, however, no adjustment is performed.

  Because of the reduction in the pilot power, the UEs at the edge of the cell might be handed over to neighboring cells, especially to those with a relatively light load and with relatively high pilot power. After that, the downlink load of the cell is lightened accordingly.

- If the downlink load of a cell is lower than the value of Cell underload threshold, it is an indication that the cell has sufficient remaining capacity for more load. In this case, the transmit power of the P-CPICH increases by a step, which is defined by the Pilot power adjustment step parameter, to help lighten the load of neighboring cells. If the current transmit power is equal to the value of Max transmit power of PCPICH, however, no adjustment is performed.
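A minimal sketch of this periodic adjustment, assuming illustrative values for the thresholds, the step and the P-CPICH power limits (the parameter names follow the description above), is given below:

```python
# Illustrative sketch of the periodic P-CPICH adjustment done by intra-frequency LDB.
# Threshold, step and power values are examples only.
CELL_OVERLOAD_THR = 0.80    # "Cell overload threshold" (DL load ratio)
CELL_UNDERLOAD_THR = 0.50   # "Cell underload threshold"
PILOT_STEP_DB = 0.5         # "Pilot power adjustment step"
MIN_PCPICH_DBM = 30.0       # "Min transmit power of PCPICH"
MAX_PCPICH_DBM = 33.0       # "Max transmit power of PCPICH"

def adjust_pcpich(dl_load, pcpich_dbm):
    """Return the new P-CPICH power after one LDB check period."""
    if dl_load > CELL_OVERLOAD_THR and pcpich_dbm > MIN_PCPICH_DBM:
        return max(pcpich_dbm - PILOT_STEP_DB, MIN_PCPICH_DBM)  # shrink coverage
    if dl_load < CELL_UNDERLOAD_THR and pcpich_dbm < MAX_PCPICH_DBM:
        return min(pcpich_dbm + PILOT_STEP_DB, MAX_PCPICH_DBM)  # extend coverage
    return pcpich_dbm                                           # no change

print(adjust_pcpich(dl_load=0.85, pcpich_dbm=33.0))  # -> 32.5
```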

3.3.1.4. Huawei: Load Reshuffling


For further details please refer to the Annex: Load Management

3.3.2. SHO Optimization

Soft handover overhead above 30-40% will have a negative impact on capacity, as mobiles engaged
in SHO will consume more channelization codes than single link connections and each RL that is
established will also require resources on the Iub interfaces of the Nodes B involved. A 40% probability of
SHO demands 40% extra backhaul capacity. For UEs in softer handover there will be no impact on
backhaul capacity because signaling and traffic will be combined locally in the Node B. Reducing the
level of soft handover in the network reduces the downlink transmit power requirement in all cells and
increases the downlink capacity.
The benefit of SHO is Soft Handover gain: A UE can combine a number of downlink signals using the rake
receiver and get a net improvement in performance of as much as 3 or 4 dB compared to a single link
connection. This, in fact, is taken into account favourably when determining the link budget. However, a
UE in SHO will also be power controlled by all Node Bs concerned.
Simulations (3GPP 25.942) have shown that in a planned area only 1% of locations require SHO to 7 or
more cells. Additionally, the SHO gain is minimal when more than 3 cells are in the active set. The
conclusion is that the UE does not have to support more than 4 to 6 cells in the active set.
In summary, SHO is an area potentially subject to further improvements to save capacity. We are going
to review a possible approach to Soft Handover Optimization.
The soft handover overhead KPI and the average active set size KPI may be used to indicate the level of
soft handover experienced by a cell. If the level of soft handover is high then it may be possible to
achieve a reduction without impacting UE cell edge performance. The extent to which the level of soft
handover can be reduced should be specified within the soft handover optimization process (final target
expected to be in the range 30-40% though).

Soft HO Overhead [%] = 100 × (VS.SHO.2RL + VS.SHO.3RL) / (VS.SHO.1RL + VS.SHO.2RL + VS.SHO.3RL)

Soft handover overhead of RT and NRT, which shows how much overlapping there has been between cells. This formula is used when working at cell level.

Average Active Set = (VS.SHO.1RL × 1 + VS.SHO.2RL × 2 + VS.SHO.3RL × 3) / (VS.SHO.1RL + VS.SHO.2RL + VS.SHO.3RL)

Thresholds: MINOR: > 1.75 / MAJOR: > 1.8
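As a purely illustrative example with hypothetical counter values, VS.SHO.1RL = 600, VS.SHO.2RL = 300 and VS.SHO.3RL = 100 give a Soft HO Overhead of 100 × (300 + 100) / 1000 = 40 % and an Average Active Set of (600 × 1 + 300 × 2 + 100 × 3) / 1000 = 1.5, i.e. below both thresholds.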

[Practical Example] The figures below provide two examples of soft handover overhead and average active set size plots. The plots present data recorded during one week. Each point is based upon the counter results recorded on an hourly basis.
[Plots: Soft Handover Overhead (%) and Average Active Set Size (cells) vs. time (1-hour samples) - Example cell 1]

[Plots: Soft Handover Overhead (%) and Average Active Set Size (cells) vs. time (1-hour samples) - Example cell 2]
The soft handover overhead and average active set size characteristics are relatively spiky although
there are clear differences in the average values. The spiky nature of the plots can be reduced by
increasing the period of time over which each sample is plotted. This is an acceptable approach for the
soft handover KPI because the soft handover overhead and average active set size should be relatively
independent of the level of traffic, i.e. as the level of traffic changes throughout the day.
It is suggested that the average active set size KPI is used. It is also suggested that the counters are
aggregated over a 24 hour period prior to applying the KPI equation.
Once the average active set size KPI data has been plotted a decision is required regarding whether or
not the level of soft handover should be reduced. One suggestion could be: If average active set size
exceeds 1.75 for any one 24 hour period during the week then the soft handover tuning process should
be triggered. An average active set size of 1.75 corresponds to a soft handover overhead of
approximately 50 %. It may be appropriate to define different thresholds for different environment types,
i.e. according to the site density.
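A sketch of this 24-hour evaluation is shown below. The retrieval of the counters from the OSS is abstracted away: hourly_samples is assumed to be a list of hourly (VS.SHO.1RL, VS.SHO.2RL, VS.SHO.3RL) tuples for one cell, and the 1.75/1.8 values are the thresholds suggested above.

```python
# Sketch of the suggested 24-hour evaluation of the average active set KPI.
# hourly_samples: list of (VS.SHO.1RL, VS.SHO.2RL, VS.SHO.3RL) tuples, one per hour.
AVG_AS_MINOR = 1.75   # trigger for the soft handover tuning process
AVG_AS_MAJOR = 1.80

def average_active_set_24h(hourly_samples):
    rl1 = sum(s[0] for s in hourly_samples)   # aggregate the counters over 24 hours
    rl2 = sum(s[1] for s in hourly_samples)   # before applying the KPI formula
    rl3 = sum(s[2] for s in hourly_samples)
    total = rl1 + rl2 + rl3
    return (rl1 + 2 * rl2 + 3 * rl3) / total if total else 0.0

def sho_tuning_triggered(hourly_samples):
    avg_as = average_active_set_24h(hourly_samples)
    return avg_as > AVG_AS_MINOR, avg_as

triggered, value = sho_tuning_triggered([(600, 300, 100)] * 24)
print(triggered, round(value, 2))   # -> False 1.5
```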

3.4. Capacity Upgrade

A capacity upgrade should be considered if no fault could be identified that explains the capacity issue, if there was no unexpected sudden increase in traffic caused by seasonal variation, or if the problem cannot be solved by tuning the current network configuration or by re-accommodating the traffic to achieve a more efficient utilization of the current resources.
If the conclusion from the previous phase of the optimization process is that we have a resource shortage that cannot be overcome with further optimization, then the only way to solve the problem is to increase (i.e. upgrade) the currently available capacity. A capacity upgrade generally involves one of the following steps. They appear in the list in order of preference, driven mainly by cost (from lowest to highest) but also by other considerations:

- Additional Resource Hardware/Software
  DL TX power capacity may be increased by upgrading the power amplifiers in the NodeB from 20 to 40 W, for instance. Channel Elements may require acquiring another SW license key (from 16 to 32 CEs) or additional CE boards. The number of active HS users may also require a new SW license key (from 16 to 64 HS users). Backhaul bandwidth expansion will require the deployment of additional E1s to the site.

- Additional Carrier
  Especially here in Claro, where spectrum availability is not an issue (other OpCos in AMX could be facing different spectrum scenarios), a second carrier should be the first option (compared to 6 sectors, HOS):
  The increase in capacity for a 2nd carrier is higher than 100% (trunking efficiency). For HOS, the technical literature reports increases of 1.8x, but real experiences range between 30 and 50%.
  Given an existing optimized tri-sector network, it will be easier to deploy the 2nd carrier keeping this same scenario. On the other hand, HOS requires a lot of optimization to keep under control all the negative aspects of increasing the number of cells: pilot pollution, SHO areas, neighbor lists, etc.
  With HOS, the potential negative impact (not present for the 2nd carrier) on HSDPA throughput in mobility (due to the increase in the number of cell changes) and on HSUPA throughput (due to an increase in the number of other-cell interfering users) needs to be analyzed further.
  A wide deployment of the 2nd carrier will prepare the network for an easier rollout of multicarrier solutions: for instance Rel. 8 MC-HSPA+ (rates: DL 42 / UL 11 Mbps) without MIMO, just 64QAM.

- Additional Sector (High Order Sectorization: HOS)
  High order sectorization (more than 3 sectors on the same carrier) will always be an additional option for a further capacity increase. So in hot spots with very high traffic, this could also be considered together with other options: micro cells and additional carriers (3rd and 4th). To be kept in mind: theoretical analysis shows 5 sectors providing better performance than 6 sectors.

- Additional Site
  In terms of capacity upgrade, this is the last option, to be considered when all previous ones are not available, when the capacity gain/cost trade-off makes them less attractive, or when there is a desire to increase the in-building penetration within an area.
  If traffic keeps increasing, site densification should even be considered in advance, to enable the network to effectively absorb the additional traffic (assumed to be already considered by the Planning/Design/Dimensioning process).

The requirement for an additional carrier, sector or site is one of the outputs of the Optimisation Process that feeds back into the Planning Process, so that the implications for RF performance can be evaluated properly through the RF prediction tool (ASSET3G) and the RF tuning of the neighbouring sites can also be anticipated. At the same time, the planning department will consider its own coverage/capacity targets and decide the best possible option in accordance with the company policies and strategy: macro, micro, special in-door project, repeater, etc.

In some cases, a re-engineering of the site may be required in order to make any of the above possibilities feasible: replacing an old BTS cabinet type with a more recent one with more capacity options (more nominal power, more slots for CE boards, etc.).
As described in the Claro internal doc. Ref. ##.####: Optimization Process, after any significant network configuration change (RF or parameter settings), the potential impacts need to be monitored and verified. In our present context, the solution/correction of the capacity issue that triggered the whole process should be confirmed.
This is just an overall presentation of the process; further details can be found for each resource in the coming chapters of these guidelines.

Blocking on the radio interface gives information about the lack of radio resources. It could mean that the Node B is using all the available DL power in order to maintain the connections of the existing users, so that admission control is denying service to additional users. Another reason could be an increased UL interference situation, meaning that there is no room for more UEs to be connected to that particular cell. Additionally, the available free codes in the channelization code tree can become a key resource to monitor, in particular when HSDPA and HSUPA are enabled in a cell.
Sections 4 to 6 are devoted to these three Radio (or Air) Interface Resources, starting with the DL TX Power.

4. DL TRANSMITTED POWER

In the downlink direction the maximum transmit power available from the highly linear power amplifier
can be considered constant. The total power available will depend on the vendor and on the type of
NodeB. For a macro cell product it could be expected to be in the range 20 to 45 W, for a micro cell or
pico cell product it would be lower. The specifications (3GPP TS 25.104) require that the power amplifier
has a total power dynamic range of at least 18 dB. Maximum transmit power is limited to 50 dBm (100
W).
This power is shared between all downlink channels. Downlink power control is implemented through the
adjustment of the weighted sum of the downlink channels. Broadcast and common control channels are
likely to be allocated a fixed proportion of the power available related to the power allocation of the
CPICH (common pilot) channel. The remainder of the power is then shared between users. The weighting
may be used to vary proportions to each user dependent on path loss, interference and required quality
of service (based on power control). For closed loop power control the UE indicates the requested power
step changes to the Node B. However, a limit will be set for the power proportion available to each
channel type, so the Node B may not obey all power control commands. If the cell is operating at less
than full load then the total power transmitted is less than the total power available.
More power is required if more channels are needed, if users are distant from the Node B, if users request higher data rates, if users request a higher quality of service, etc. Thus the total power available in a cell ultimately limits the downlink capacity and quality of service. The example below illustrates these ideas.
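The sketch works through a hypothetical downlink power budget; the 20 W amplifier, the 10 % CPICH allocation and the 10 % overhead for the other common channels are illustrative assumptions, not Claro settings.

```python
# Hypothetical DL power budget: how the shared PA power limits the headroom
# left for dedicated (R99) and HSDPA traffic. All values are examples.
import math

def dbm(watts):
    return 10 * math.log10(watts * 1000)

max_power_w = 20.0                    # assumed macro NodeB power amplifier (43 dBm)
cpich_w = 0.10 * max_power_w          # CPICH at 10% of the maximum power
other_cch_w = 0.10 * max_power_w      # other common channels, tied to the CPICH setting
traffic_headroom_w = max_power_w - cpich_w - other_cch_w

print(f"Max cell power:   {max_power_w:.1f} W ({dbm(max_power_w):.1f} dBm)")
print(f"CPICH:            {cpich_w:.1f} W ({dbm(cpich_w):.1f} dBm)")
print(f"Traffic headroom: {traffic_headroom_w:.1f} W "
      f"({traffic_headroom_w / max_power_w:.0%} of the total)")
```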

4.1. AVAILABLE CAPACITY IN TERMS OF DL TX POWER

The DBS3800 is a distributed NodeB compliant with the 3GPP R99/R4/R5/R6 protocols.

Figure 3 DBS3800 system architecture

Figure 4 BBU3806 Logical structure


Transport Subsystem
The transport subsystem performs the following functions:

Providing physical interfaces between the BBU3806 and the RNC for data communication
Providing OM channels between the BBU3806 and the LMT or between the BBU3806 and the
M2000
Baseband Subsystem
The baseband subsystem processes uplink and downlink baseband data. The functions of the baseband
subsystem are performed by the following modules:

Uplink baseband data processing module: consists of the demodulation unit and the decoding
unit. In this module, uplink baseband data is processed into despreading soft decision symbols
after access channel searching, access channel demodulation, and dedicated channel
demodulation. The symbols are then sent to the RNC through the transport subsystem after
decoding and Frame Protocol (FP) processing.

Downlink baseband data processing module: consists of the modulation unit and the encoding
unit. The module receives the service data from the transport subsystem, and implements FP
processing, encoding, transport channel mapping, physical channel generating, framing,
spreading, modulation, and power control combination. Then the data is finally sent to the
interface module.
Control Subsystem
The control subsystem manages the entire distributed NodeB. The control subsystem performs OM,
processes signaling, and provides the system clock.

The OM module performs functions such as equipment management, configuration management,


alarm management, software management, and commissioning management.

The signaling processor performs functions such as NBAP signaling processing, ALCAP processing,
SCTP processing, and logical resource management.

The clock module provides the system clock for the NodeB. The reference sources of the system
clock are the Iub phase-lock line clock (obtained from the E1, optical port, or FE), the GPS clock,
and the external clock (for instance, the BITS clock). The versions later than V100R009 support
the function of extracting the clock from the FE.
Interface Module
The interface module performs the following functions:

Each CPRI port of the BBU3806 adopts the Enhanced Small Form-Factor Pluggable (ESFP) optical
ports, and transports the uplink and downlink baseband data of the RRU/pRRU3801/RHUB3808.
Each BBU3806 provides an EIa port to share synchronization data, baseband data, power control
data, and transmission data between BBU3806s.

Multiple topologies such as star, chain, and tree are supported between the RNC and the BBUs.

Figure 5 BBU topologies

Star Topology
As the most commonly used topology, the star topology applies to most areas, especially to densely populated
areas.

Chain Topology
The chain topology applies to the belt-shaped and sparsely populated areas, such as highways and
railways.
Tree Topology
The tree topology applies to complicated networks and sites such as a large area with concentrated
hot spots.

Multiple topologies such as star, chain, and ring are supported between the BBU and RRU3801Cs.

Figure 6 RRU Topologies


Configurations of the DBS3800
The DBS3800 supports omni-directional, two-sector, and three-sector configurations. The operator
chooses different configurations based on actual conditions such as locations and the number of users.
The DBS3800 supports the following typical configurations:

Omni-directional, 1 x 2, 2 x 1, 3 x 1, 3 x 2, 3 x 3, 3 x 4, 6 x 1, and 6 x 2.
Six cells. The DBS3800 supports a maximum of 12 cells if the EBBC is configured.
Configuration          Minimum Number of BBU3806s     Minimum Number of RRU3804s
1x1, 1x2, 1x3, 1x4     1 BBU3806 with the EBBC        -
2x1, 2x2, 2x3, 2x4     2 BBU3806s with the EBBCs      -
3x1, 3x2, 3x3          2 BBU3806s with the EBBCs      -
3x4                    2 BBU3806s with the EBBCs      -
6x1                    2 BBU3806s with the EBBCs      -
6x2                    2 BBU3806s with the EBBCs      -

NOTE:
In four-carrier configurations such as 1 x 4, 2 x 4, and 3 x 4, if the power required for each carrier is 30
W, the minimum number of RRU3804s doubles.
Configuration          Minimum Number of BBU3806s     Minimum Number of RRU3801Cs
1x2, 1x3, 1x4          1 BBU3806 with the EBBC        -
2x1, 2x2, 2x3, 2x4     2 BBU3806s with the EBBCs      -
3x1, 3x2, 3x3          2 BBU3806s with the EBBCs      -
3x4                    2 BBU3806s with the EBBCs      -
6x1                    2 BBU3806s with the EBBCs      -
6x2                    2 BBU3806s with the EBBCs      -

NOTE:
N x M = sector x carrier. For example, 3 x 1 indicates that each of the three sectors has one carrier.
Huawei has deployed the DBS3800 throughout the Claro Brazil network. New sites will use the
DBS3900.
Configurations of the DBS3900
The NodeB has an industry-leading modular design of multiple modes and forms, rendering it adaptive
to various installation scenarios. This effectively addresses the requirements for the broadband solution,
green network construction, and a mobile network of converged multiple modes. Beyond that, this
enables the construction of a future-oriented network and smooth evolution to the Long Term Evolution
(LTE) system.
Solution Integrating Multiple Technologies

With the unified platform, modular design, and flexible combination of the basic modules and
auxiliary devices, the NodeB can be presented in multiple forms.

With this solution, BBUs and RF modules of different modes (GSM/UMTS/LTE) can be placed in one
cabinet, and cabinets of different modes can be installed in stack mode.

The UMTS RF module supports smooth evolution to the LTE system from the perspective of
hardware and supports the UMTS/LTE dual-mode NodeB through software upgrade in the same
frequency band.
Broadband Solution

The outstanding performance of the RRU3804 and WRFU/MRFU ensures wide coverage, high
throughput, and fewer sites.

The RRU3804 and WRFU/MRFU adopt a multi-carrier technology that features 20 MHz
bandwidth and 4-carrier configuration.

A single RRU3804 supports the 60 W output power at the antenna connector, and a single
WRFU/MRFU supports 80 W at the antenna connector.

The NodeB supports the High Speed Packet Access (HSPA) at full rate.

The HSPA service enjoys high bandwidth and short delay.

The data rate of the HSPA service can peak at 14.4 Mbit/s in the downlink.


The data rate of the HSPA service can peak at 5.76 Mbit/s at the physical layer of the Uu
interface in the uplink.

The IP-based switching core of the NodeB allows operators to obtain higher bandwidth and
facilitates capacity expansion and network adjustment by utilizing the existing IP transmission
resources, thereby curtailing the cost of network deployment.

The NodeB can provide the Fast Ethernet (FE) port at 100 Mbit/s externally, and the IP Radio
Access Network (RAN) can reuse the existing IP transmission resources on the Iub interface.

Apart from being more cost-effective than the Asynchronous Transfer Mode (ATM)-based
network, the IP-based network provides the multi-access mode and sufficient transmission bandwidth to
satisfy data services with high data rate.
Construction of a Green Network
The compact and modular design, innovative PA, and power consumption management are the keys to a
green communication network that provides energy saving features and requires fewer equipment
rooms.

The RF modules of the NodeB adopt the advanced Digital Pre-Distortion (DPD) and A-Doherty
technologies to raise the power amplification rate to 40%. Thus, the power consumption of the
entire NodeB is lowered.

The reduced power consumption of the cabinet macro NodeB lowers not only the electricity
expense but also the investment in power supply, backup batteries, air conditioners, and heat
exchangers.

As one of the most compact macro NodeBs in the industry, the cabinet macro NodeB takes
up a small footprint.

The RF cabinet of the BTS3900A uses the direct-ventilation design. In comparison with the
traditional macro NodeB, power consumption of the BTS3900A is lowered by 40%.

The DBS3900 is characterized by separate baseband and RF modules and distributed installation
that facilitate transportation, configuration, and installation.

The BBU3900 of the distributed NodeB is characterized by the small footprint, easy
installation, and low power consumption. In addition, the BBU3900 can be placed in the spare space of
an existing site.

The RRU, small and light, supports installation near the antenna, thus preventing feeder loss.
Working in natural heat dissipation mode, the RRU does not require any fans. The high reliability of the
RRU reduces the routine maintenance cost.

All the NodeB products can share the baseband modules, RF modules, and power systems,
thereby reducing the cost of spare parts and maintenance.
The preceding features of the NodeB can fully address the concern of operators regarding site
acquisition, expedite network rollout, decrease utilization of resources such as manpower, power supply,
and space, and lower the Total Cost of Ownership (TCO).
Smooth Evolution to the Future-Oriented Radio Network
The NodeB, adopting the unified modular design, satisfies the requirements of global operators for
service upgrade, network evolution, and deployment of new radio technologies, thus implementing a
future-oriented network.

The NodeB supports co-cabinet and multi-mode applications of modules in different modes.

The hardware of UMTS RF modules supports HSPA+ and smooth evolution to the LTE system. In
addition, the BBU of the existing NodeB can be shared to the maximum extent.
The capacity of the DBS3900 can be expanded through addition of modules or license upgrade. When
license upgrade is required, the capacity can be expanded by 16 cells at a time. In the early phase of
network construction, you can choose a small-capacity configuration (such as 3 x 1 configuration). When
the number of subscribers increases, you can smoothly expand the small-capacity configuration to a
large-capacity configuration (such as 3 x 2 or 3 x 4 configuration).

Typical configurations of the DBS3900 (with RRU3804)

Configuration     Number of WBBPs     Number of RRU3804s (No TX Diversity)
3x1               -                   -
3x2               -                   -
3x3               -                   -
3x4               -                   -

Typical configurations of the DBS3900 (with RRU3801C)

Configuration     Number of WBBPs     Number of RRU3801Cs (No TX Diversity)
3x1               -                   -
3x2               -                   -
3x3               -                   -
3x4               -                   -
The RRUs deployed are 40 W units. The full 40 W is available to a single carrier; when two or more
carriers are configured, the power is split between them, so with two carriers each gets 20 W. The new
sites will get 60 W RRUs.
The operator can limit the total maximum power that an RBS is allowed to transmit with the
parameter:
DLCELLTOTALTHD
In order to manage the HSDPA feature, the Admission Control and Congestion Control functions control the
usage of the total non-HS downlink transmitted carrier power, that is, the power used for transmission of R99
and common control channels. The remaining power can then be used for transmission of HS-PDSCH/HS-SCCH
channels to HSDPA users. By changing the HSPAPOWER setting, the portion of downlink power available
for HS connections can be increased or decreased.

Figure 7 Load Control Algorithm


The DL TX carrier power Admission Control policy is shown in Figure 7 (Load Control Algorithm).
Please be aware that it applies to the non-HS DL TX carrier power. Admission Control policies can
differentiate accessibility of dedicated monitored resources between different service classes
(guaranteed, guaranteed-hs, and non-guaranteed) and setup types (handover and non-handover). This
differentiation makes it possible to reserve dedicated resources for guaranteed service class connections
and for mobility.

Figure 8 DL Load thresholds

4.2. METRICS for DL TX POWER Monitoring

In Huawei there are counters for:

Total DL TX Power Utilization


Total non-HS DL TX Power Utilization
DL TX Power used by HSDPA
DL EUN Utilization.

4.2.1. Total DL TX Power Utilization (%)


Samples of the Total DL TX power for a cell-carrier can be found in the counters VS.MeanTCP, VS.MinTCP
and VS.MaxTCP.

DL Tx Power [%] = VS.MeanTCP (W) / ((DLCELLTOTALTHD / 100) × Pmax (W)) × 100

where:

Pmax is the maximum power available at the NodeB.
[A parameter providing this value could not be found; Huawei should indicate a way to obtain it.]
DLCELLTOTALTHD is a percentage value setting the maximum power that can be allotted in the
NodeB, so it is divided by 100; after simplification, the remaining factor of 100 multiplies the
whole quotient.

Note: The counter and the parameter should be converted to watts in order to get an accurate result.
Please be aware that HSDPA may be using the remaining power not used by R99 (Common Control
Channels plus R99 traffic), so it is possible to get values close to 100% in this KPI with no impact on
Accessibility (no AC Rejections). For values that close to 100%, the impact on user-perceived
throughput should be checked.
This can be accordingly calculated based on Average, Maximum or Minimum of the metric.

Thresholds: MINOR: > DLCELLTOTALTHD - 20% | MAJOR: > DLCELLTOTALTHD

There are counters also for Maximum and Minimum values of TCP:
Maximum TCP(dBm): VS.MaxTCP
Minimum TCP (dBm): VS.MinTCP
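A minimal calculation sketch (not vendor code) of this KPI is shown below. The counter value, the DLCELLTOTALTHD setting and Pmax are assumed example figures; the same function applies to the non-HS and HSDPA variants described in the next two sections (VS.MeanTCP.NonHS and VS.HSDPA.MeanRequiredPwr).

def dbm_to_watts(dbm: float) -> float:
    """Convert a power counter reported in dBm to watts."""
    return 10 ** ((dbm - 30) / 10)

def dl_tx_power_utilization(mean_tcp_dbm: float,
                            dl_cell_total_thd_pct: float,
                            p_max_w: float) -> float:
    """DL Tx Power [%] = VS.MeanTCP(W) / (DLCELLTOTALTHD/100 * Pmax(W)) * 100."""
    mean_tcp_w = dbm_to_watts(mean_tcp_dbm)
    allowed_w = (dl_cell_total_thd_pct / 100.0) * p_max_w
    return 100.0 * mean_tcp_w / allowed_w

# Assumed example: VS.MeanTCP = 41 dBm, DLCELLTOTALTHD = 100 %,
# Pmax = 20 W (the NodeB maximum power, to be confirmed with Huawei).
print(round(dl_tx_power_utilization(41.0, 100.0, 20.0), 1))  # ~62.9 %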

4.2.2. Total non-HS DL TX Power Utilization (%)


Samples of the Total DL TX power for a cell-carrier can be found in the counters VS.MeanTCP.NonHS,
VS.MinTCP.NonHS and VS.MaxTCP.NonHS.

non-HS DL Tx Power [%] = VS.MeanTCP.NonHS (W) / ((DLCELLTOTALTHD / 100) × Pmax (W)) × 100

where:

Pmax is the maximum power available at the NodeB.
[A parameter providing this value could not be found; Huawei should indicate a way to obtain it.]
DLCELLTOTALTHD is a percentage value setting the maximum power that can be allotted in the
NodeB, so it is divided by 100; after simplification, the remaining factor of 100 multiplies the
whole quotient.

Note: The counter and the parameter should be converted to watts in order to get an accurate result.
When averaging carrier power, these values must be checked against the maximum allowed values, i.e.,
Power Utilizations close to 100%. If the average values are close to these thresholds, the chance of
congestion/admission blocking is high. In fact, it is more informative to look at the maximum sampled
carrier power values. Still, as an RL setup takes very little time to establish, it is hard to say how those
maximum values relate to RL failures.
Thresholds: MINOR: > 80% | MAJOR: > 100%

4.2.3. DL TX Power used by HSDPA utilization

Huawei also provides statistics of the HSDPA-required power measurement values of an HSDPA cell in
the RNC: VS.HSDPA.MeanRequiredPwr, VS.HSDPA.MaxRequiredPwr and VS.HSDPA.MinRequiredPwr.

HSDPA DL Tx Power [%] = VS.HSDPA.MeanRequiredPwr (W) / ((DLCELLTOTALTHD / 100) × Pmax (W)) × 100

where:

Pmax is the maximum power available at the NodeB.
[A parameter providing this value could not be found; Huawei should indicate a way to obtain it.]
DLCELLTOTALTHD is a percentage value setting the maximum power that can be allotted in the
NodeB, so it is divided by 100; after simplification, the remaining factor of 100 multiplies the
whole quotient.

Note: The counter and the parameter should be converted to watts in order to get an accurate result.
When averaging carrier power, these values must be checked against the maximum allowed values, i.e.,
Power Utilizations close to 100%. If the average values are close to these thresholds, the chance of
congestion/admission blocking is high. In fact, it is more informative to look at the maximum sampled
carrier power values. Still, as an RL setup takes very little time to establish, it is hard to say how those
maximum values relate to RL failures.

4.2.4. DL EUN Utilization (%)


The 12.2 kbit/s AMR traffic is used as the reference for calculating the Equivalent Number of Users (ENU)
of all other services; the ENU of 12.2 kbit/s AMR traffic is assumed to be 1. The ENU of any other service
depends on the following factors:

Cell type, such as urban or suburban
Traffic domain, CS or PS
Coding type: turbo code, or 1/2 or 1/3 rate convolutional code
Traffic QoS, that is, BLER
Table 1. Equivalent number of users (with an activity factor of 100%)

Service                   ENU UL (DCH)    ENU DL (DCH)    ENU HSDPA    ENU HSUPA
3.4 kbit/s SIG            0.44            0.42            -            -
13.6 kbit/s SIG           1.11            1.11            -            -
3.4 + 12.2 kbit/s         1.44            1.42            -            -
3.4 + 8 kbit/s (PS)       1.35            1.04            0.78         0.84
3.4 + 16 kbit/s (PS)      1.62            1.25            1.11         0.85
3.4 + 32 kbit/s (PS)      2.15            2.19            1.70         0.96
3.4 + 64 kbit/s (PS)      3.45            3.25            2.79         1.20
3.4 + 128 kbit/s (PS)     5.78            5.93            4.92         1.67
3.4 + 144 kbit/s (PS)     6.41            6.61            5.46         1.91
3.4 + 256 kbit/s (PS)     10.18           10.49           9.36         2.83
3.4 + 384 kbit/s (PS)     14.27           15.52           14.17        3.91

In Table 1, for a 3.4 + n kbit/s service of HSDPA or HSUPA,

The 3.4 kbit/s is the rate of the signaling carried on the DCH.

The n kbit/s is the GBR of the service.

DL EUN Utilization [%] = VS.RAC.DL.TotalTrfFactor / DLTOTALEQUSERNUM

DLTOTALEQUSERNUM: this parameter defines the total equivalent number of users corresponding to
the 100% downlink load.
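A minimal sketch of how Table 1 can be used to estimate the cell's DL equivalent number of users for a traffic mix, and the resulting utilization, is shown below. The connection counts and the DLTOTALEQUSERNUM setting are assumed example values; the ENU figures are taken from the downlink-for-DCH column of Table 1 (activity factor 100%).

# Sketch: estimating the DL equivalent number of users (ENU) for a traffic
# mix using the DL-for-DCH column of Table 1, then the utilization against
# DLTOTALEQUSERNUM. Connection counts are illustrative.
DL_ENU = {
    "AMR 12.2 + 3.4 SIG": 1.42,
    "PS 64 + 3.4 SIG": 3.25,
    "PS 128 + 3.4 SIG": 5.93,
    "PS 384 + 3.4 SIG": 15.52,
}

traffic_mix = {"AMR 12.2 + 3.4 SIG": 20, "PS 64 + 3.4 SIG": 4,
               "PS 128 + 3.4 SIG": 2, "PS 384 + 3.4 SIG": 1}

DLTOTALEQUSERNUM = 80   # assumed setting: ENU corresponding to 100% DL load

total_enu = sum(DL_ENU[svc] * n for svc, n in traffic_mix.items())
print(f"Total DL ENU: {total_enu:.1f}")
print(f"DL ENU utilization: {100 * total_enu / DLTOTALEQUSERNUM:.1f} %")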
Thresholds: MINOR: > 80% | MAJOR: > 100%

4.2.5. AC Rejections due to DL TX Power

Please refer to Section 11, Additional ADMISSION CONTROL Metrics, for further details.

4.3. DL TX POWER Performance Analysis and Optimisation

To troubleshoot the cases highlighted by the Performance Alarms for DL TX Power suggested in the
previous sections, besides the overall considerations enumerated in Section 3, the following actions are
suggested:

4.3.1. Parameters Optimization


After having checked the current settings to ensure that there is no evident misconfiguration
causing the issue:

DLTOTALEQUSERNUM (do they limit the max RBS power as expected?)


Common Channels Power settings: Are they correct?
Are the feeder losses configured correctly in the RBS?
Check for RF Modules alarms

The first actions are to optimise some DL capacity related parameters in the cell. The actions could be:

Increase DLTOTALEQUSERNUM
Decrease the used MaxBitrateDLPSNRT (128 kbit/s, 64 kbit/s)
Decrease the maximum possible link power for the service
Decrease the DtoFStateTransTimer timer related to PS data to make the switching from
Cell_DCH to Cell_FACH happen earlier

The total downlink power in the cell can be controlled with the parameter DLTOTALEQUSERNUM.
Different values should be used for NodeBs having different PA power capabilities.
The maximum bit rate used in the cell could be lowered in case there has been radio blocking.
This can be controlled with the cell parameters DLFullCvrRate and ULFullCvrRate. The parameter
value could be lower in a suburban environment, meaning that the capacity in the cell will be increased.
Also, bit rate downgrades based on Dynamic Link Optimization will relieve Node B power for other
users. Lower values could be used in suburban and rural environments.

4.3.2. RF optimization
In case parameter optimization does not help, RF optimization could be done. This costs the operator more
than parameter optimization. RF optimization means basically the tuning of the Node B
antenna system and includes tilt, bearing and antenna height changes to improve cell dominance areas
and decrease DL interference.

5. UL RECEIVED POWER

In this section we review how to monitor the usage of this Air Interface Resource, i.e., how much
Received Power (in dBm) is being measured by the NodeB receiver and how we can optimize this
resource.

5.1. AVAILABLE CAPACITY IN TERMS OF UL RX POWER

In the uplink, the total received power can be expressed as the sum of the powers of own-cell users
(PrxOwn), other-cell users (PrxOth), and system noise (Pn).

Pn is the total effective thermal noise at the receiver and can be estimated as

Pn = Nf · k · T · W

where Nf is the receiver noise figure, k is Boltzmann's constant, T is the temperature, and W is 3.84 MHz.
There is, theoretically, a maximum available capacity in terms of UL RX Power; i.e., there is a maximum
amount of UL RX Power that can be admitted in the cell before reaching the Pole Capacity (Load =
100%). In fact, the system is configured to limit the Admission so that the UL Load does not reach the
range of UL Load values that will cause instability in the cell (as can be seen in the Figure below, Loads
above 80-85%).

Maximum Load acceptable is connected to Maximum acceptable UL RX Power (or Received Total
Wideband Power (RTWP)). The maximum increase of the Noise Floor (aka Noise Rise, NR) acceptable
due to this maximum acceptable UL RX Power could also be an alternative way to quantify this UL
capacity.
It can be shown that RTWP (or RSSI) = Pn − 10·log10(1 − L_UL), where L_UL is the load in uplink.
Using Pn = -103.71 dBm, which assumes standard operating room temperature and a receiver noise figure
(Nf) of 4.3 dB, we can produce the next table:

This table gives an approximate idea of the admissible Noise Rise (NR, the increase of the noise floor
that can be accepted) before reaching the levels of the Pole Capacity. The maximum NR should not go above
10 dB. This is also our theoretical UL capacity, which can be directly translated into the maximum UL RX
Power that can be accepted (if the noise floor is known, as calculated above).
Typically, a network design is based on a NR target of 3-6 dB, corresponding to a target UL load
of 50-75%.
So the expected values for RSSI (RTWP), for UL loads admitted in the system of 50 to 80%, should
never exceed -90 dBm.
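A worked sketch of these figures is shown below. It assumes T = 290 K; with that assumption the computed noise floor is about -103.8 dBm, close to the -103.71 dBm quoted above (the small difference comes from the exact temperature assumed).

import math

k = 1.38e-23          # Boltzmann's constant (J/K)
T = 290.0             # assumed temperature (K)
W = 3.84e6            # WCDMA chip rate / noise bandwidth (Hz)
Nf_dB = 4.3           # receiver noise figure (dB)

ktw_dbm = 10 * math.log10(k * T * W * 1000)      # thermal noise in dBm
pn_dbm = ktw_dbm + Nf_dB                         # effective noise floor Pn
print(f"Noise floor Pn = {pn_dbm:.2f} dBm")      # about -103.8 dBm

for load in (0.50, 0.60, 0.75, 0.80, 0.90):
    nr_db = -10 * math.log10(1 - load)           # noise rise at this UL load
    print(f"UL load {load:.0%}: NR = {nr_db:.1f} dB, "
          f"RTWP = {pn_dbm + nr_db:.1f} dBm")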
The UL RSSI can be high due, for instance, to:
Non-traffic interference (external sources of interference)
High TX power from UEs that are connected to a far cell (near-far effect), usually due to
overshooting and missing neighbors
Equipment malfunction
Intermodulation
However, if high RSSI values are found all over the network, the issue is so generalized that it
probably cannot be explained by the causes above and should therefore be analyzed
further.

5.2. METRICS for UL RX POWER Monitoring

In Huawei there are 2 possibilities:

RTWP
EUN

5.2.1. Received Total Wideband Power, RTWP


Samples of the UL RSSI for a cell-carrier can be found in the counters VS.MeanRTWP, VS.MaxRTWP and
VS.MinRTWP.

5.2.2. Equivalent User Number (ENU) in UL


In Huawei this approach can also be used for the Admission Control decision based on UL load: when the
uplink CAC algorithm or the downlink CAC algorithm uses algorithm 2, the admission of uplink/downlink
power resources is based on the equivalent number of users.
The Equivalent User Number (ENU) of a single radio link depends on the radio connection type and is
expressed in terms of the equivalent number of speech radio bearers that generate the same amount of
air-interface load. Using this definition, a radio link that has, for example, an ENU of three in uplink is
expected to generate as much uplink interference as three speech radio bearers in the cell.
The default settings for the uplink Equivalent Number of Users (ENU) admission policy (UL threshold of
Conv AMR service, UL threshold of Conv non-AMR service, UL threshold of other services, and UL
Handover access threshold) should be based on the dimensioning characteristic that the system is not to
be loaded beyond 60% of its pole capacity, through the parameter ULTOTALEQUSERNUM.
In uplink, besides uplink Congestion Control, ENU is the only resource that is controlled to allow
admissions and modifications. Therefore, the parameters that regulate the uplink ENU admission policy
need to be set in order to minimize the risk of going into uplink overload.
For uplink admission control, the ENU admission policy provides a way to limit excessive UL interference,
avoiding large variations in cell breathing. This should be used in cells where high UL interference
is observed. In other cases, the uplink ENU admission control can be disabled by setting
ULTOTALEQUSERNUM to its maximum value (200), ULOTHERTHD to 98, ULCONVAMRTHD to 99,
ULCONVNAMRTHD to 99 and ULHOTHD to 100, in order to comply with the rule:

HO threshold > max(Conv AMR Threshold, Conv non-AMR Threshold) > Other Services Threshold
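A minimal check of this ordering rule is sketched below; the example values are the disable settings quoted above (100, 99, 99, 98), which satisfy the rule.

# Sketch: verifying the UL admission threshold ordering rule.
thresholds = {"ULHOTHD": 100, "ULCONVAMRTHD": 99,
              "ULCONVNAMRTHD": 99, "ULOTHERTHD": 98}

ho = thresholds["ULHOTHD"]
conv_max = max(thresholds["ULCONVAMRTHD"], thresholds["ULCONVNAMRTHD"])
other = thresholds["ULOTHERTHD"]

# HO threshold > max(Conv AMR, Conv non-AMR) > Other services threshold
assert ho > conv_max > other, "UL admission thresholds violate the ordering rule"
print("UL admission thresholds comply with the ordering rule")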
Formula for checking the average usage of ENU in the uplink:

Average UL ENU for a cell [%] = VS.RAC.UL.TotalTrfFactor / ULTOTALEQUSERNUM

5.3. UL RX POWER Utilization

According to the metrics described in the previous section, we can define the following KPIs:

5.3.1. Total UL RX Load Factor (%)

UL Load factor [%] = (1 − BACKGROUNDNOISE (W) / VS.AverageRTWP (W)) × 100

The counter VS.AverageRTWP and the parameter BACKGROUNDNOISE are given in dBm, so they need
to be converted to watts in order to be used in the formula; otherwise the following formula can be used
to obtain the load:

UL Load factor [%] = (1 − 1 / 10^((VS.AverageRTWP (dBm) − BACKGROUNDNOISE (dBm)) / 10)) × 100

Thresholds: MINOR: > (minTHD - 20%) | MAJOR: > minTHD

minTHD = min(ULCONVAMRTHD, ULCONVNAMRTHD, ULHOTHD, ULOTHERTHD)


Note:
The parameters ULOTHERTHD, ULCONVAMRTHD, ULCONVNAMRTHD and ULHOTHD are set as
percentages representing the point at which admission control will no longer allow the service they
refer to; they are used both for the EUN and for the RTWP criteria. In order to monitor an alarm that
represents a capacity problem, the minimum value among them is taken. For the minor alarm, 20%
below the threshold is used, since 1 dB represents around 20% of noise rise in the system.
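A minimal sketch of the dBm-based formula above is shown below; the counter reading and the BACKGROUNDNOISE setting are assumed example values.

def ul_load_factor(avg_rtwp_dbm: float, background_noise_dbm: float) -> float:
    """UL load factor [%] = (1 - 10^(-(RTWP - BACKGROUNDNOISE)/10)) * 100."""
    noise_rise_db = avg_rtwp_dbm - background_noise_dbm
    return (1 - 10 ** (-noise_rise_db / 10)) * 100

# Assumed example: VS.AverageRTWP = -100 dBm, BACKGROUNDNOISE = -104 dBm
print(round(ul_load_factor(-100.0, -104.0), 1))  # ~60.2 % (4 dB noise rise)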

5.3.2. UL ENU Utilization (%)

Average UL ENU for a cell [%] = VS.RAC.UL.TotalTrfFactor / ULTOTALEQUSERNUM

Thresholds: MINOR: > (minTHD - 20%) | MAJOR: > minTHD

minTHD = min(ULCONVAMRTHD, ULCONVNAMRTHD, ULHOTHD, ULOTHERTHD)


Note:
The parameters ULOTHERTHD, ULCONVAMRTHD, ULCONVNAMRTHD and ULHOTHD are set as
percentages representing the point at which admission control will no longer allow the service they
refer to; they are used both for the EUN and for the RTWP criteria. In order to monitor an alarm that
represents a capacity problem, the minimum value among them is taken. For the minor alarm, 20%
below the threshold is used, since 1 dB represents around 20% of noise rise in the system.

5.4. UL RX POWER Performance Analysis and Optimisation

For averaging RSSI, these values must be checked against the maximum allowed values.
If the average values are close to these thresholds, the chance of congestion/admission blocking
is high. In fact, it is more informative to look at the maximum sampled RTWP values. Still, as an RL setup
takes very little time to establish, it is hard to say how those maximum values relate to RL
failures.
To troubleshoot the cases highlighted by the Alarms, besides the overall considerations already
enumerated, the following checks are recommended:

Are the parameters set correctly?

ULTOTALEQUSERNUM

Power settings for RACH: Are they correct?

External interference?

Missing neighbors?
Short term solutions

Reduce the traffic carried by the site (see Traffic Offload)

Increase the available UL ASE margin in AC (by increasing ULTOTALEQUSERNUM)

Reduce the number of UL RLs with SF4 allowed in the cell, through the following parameters:
[Huawei is checking whether there is a parameter to define the number of SF4 users]
Long term solutions
Once the traffic has been shared in the most efficient way possible between the cell and its neighbors,
then a new site is needed to cope with the higher traffic.

5.4.1. UL RX Power from HSUPA

For each of the following counters, the description is "Number of times that the load on the air interface
is within the given range":

Counter                   Load range
VS.HSUPA.LoadOutput.0     [0, 0.5) dB
VS.HSUPA.LoadOutput.1     [0.5, 1.0) dB
VS.HSUPA.LoadOutput.2     [1.0, 1.5) dB
VS.HSUPA.LoadOutput.3     [1.5, 2.0) dB
VS.HSUPA.LoadOutput.4     [2.0, 2.5) dB
VS.HSUPA.LoadOutput.5     [2.5, 3.0) dB
VS.HSUPA.LoadOutput.6     [3.0, 3.5) dB
VS.HSUPA.LoadOutput.7     [3.5, 4.0) dB
VS.HSUPA.LoadOutput.8     [4.0, 5.0) dB
VS.HSUPA.LoadOutput.9     [5.0, 6.0) dB
VS.HSUPA.LoadOutput.10    [6.0, 7.0) dB
VS.HSUPA.LoadOutput.11    [7.0, 8.0) dB
VS.HSUPA.LoadOutput.12    [8.0, 9.0) dB
VS.HSUPA.LoadOutput.13    [9.0, 10) dB
VS.HSUPA.LoadOutput.14    [10, 11) dB
VS.HSUPA.LoadOutput.15    [11, 12) dB
VS.HSUPA.LoadOutput.16    [12, 13) dB
VS.HSUPA.LoadOutput.17    [13, 14) dB
VS.HSUPA.LoadOutput.18    [14, 15) dB
VS.HSUPA.LoadOutput.19    [15, 16) dB
VS.HSUPA.LoadOutput.20    [16, 18) dB
VS.HSUPA.LoadOutput.21    [18, 20) dB
VS.HSUPA.LoadOutput.22    [20, 22) dB
VS.HSUPA.LoadOutput.23    [22, 26) dB
VS.HSUPA.LoadOutput.24    [26, 30) dB
VS.HSUPA.LoadOutput.25    equal to or higher than 30 dB

For the preceding counters, the NodeB takes statistics in each scheduling period. Note that these counters
form a PDF distribution, so they can be processed to obtain the average, minimum and maximum UL
load factor for HSUPA.
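A minimal sketch of that processing is shown below. The sample counts are illustrative, the open-ended last bin is approximated with a 31 dB midpoint, and averaging the noise-rise bins directly in dB is itself an approximation.

# Sketch: approximating the average HSUPA UL noise rise / load factor from
# the VS.HSUPA.LoadOutput.0..25 PDF counters using bin midpoints (in dB).
bin_edges_db = [0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0, 7.0,
                8.0, 9.0, 10, 11, 12, 13, 14, 15, 16, 18, 20, 22, 26, 30, 32]
midpoints_db = [(lo + hi) / 2 for lo, hi in zip(bin_edges_db, bin_edges_db[1:])]

samples = [120, 300, 250, 180, 90, 60, 30, 15, 20, 10, 5, 2, 1,
           0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # VS.HSUPA.LoadOutput.0..25

total = sum(samples)
avg_noise_rise_db = sum(m * n for m, n in zip(midpoints_db, samples)) / total
avg_load = 1 - 10 ** (-avg_noise_rise_db / 10)       # load = 1 - 10^(-NR/10)
print(f"Average HSUPA noise rise ~{avg_noise_rise_db:.2f} dB, "
      f"load factor ~{avg_load:.0%}")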

5.4.2. AC Rejections due to UL RX Power


Please refer to Section 11, Additional ADMISSION CONTROL Metrics, for further details.

5.5. UL RX POWER Performance Analysis and Optimisation

To troubleshoot the cases highlighted by the Performance Alarms for UL RX Power suggested in the
previous sections, besides the overall considerations enumerated in Section 3, the following actions are
suggested:

5.5.1. Parameter Optimization


After having checked the current settings to ensure that there is no evident misconfiguration
causing the issue (not to be forgotten):

Are the feeder losses configured correctly in the RBS?


TMA - Tower Mounted Amplifiers?
Check for RF Module alarms
External interference?
Missing neighbours?

The first actions are to optimise some UL capacity related parameters in the cell. The actions could be:

Increase ULOTHERTHD , ULCONVAMRTHD, ULCONVNAMRTHD and ULHOTHD


Analyze the possible underestimation/overestimation of ULOTHERTHD, ULCONVAMRTHD,
ULCONVNAMRTHD and ULHOTHD.
Reduce the number of UL RLs of SF4 allowed in the cell.

5.5.2. RF optimization
In case parameter optimization does not help, RF optimization could be done. This costs the operator more
than parameter optimization. RF optimization means the tuning of the Node B antenna
system and includes tilt, bearing and antenna height changes to improve cell dominance areas and
reduce interference, so that the received traffic can be reaccommodated between existing cells.
In case a multicarrier environment is available, traffic can be offloaded to the cleaner carrier in UL.

5.5.3. Node B UL Power Capacity Upgrade


Once the traffic has been shared in the most efficient way possible between the cell and its neighbors
(intra- and inter-frequency), then new carriers, more sectorization, or finally a new site will be needed to
cope with the higher traffic.

6. CHANNELIZATION (OVSF) CODES

6.1. AVAILABLE CAPACITY IN TERMS OF OVSF CODES (Downlink)

This Table lists the minimum OVSF length and the number of OVSFs available for each service. For each
service type, the carried Erlangs are estimated, assuming a 2% GoS.

This section covers only the Downlink. On the Uplink, each UE has its own code tree, so the code tree is
not a limiting factor in that direction. On the Downlink, the number of OVSFs available for each
dedicated channel is reduced, because multiple common channels must be supported. Figure below
summarizes the mandatory Downlink channels and the mandatory (or implementation-dependent)
values of their OVSFs. It also shows optional Downlink common channels.

Figure 9 OVSF codes tree

In the OVSF code tree structure, one PS 384 connection uses the same resources as four PS 64
connections or 16 voice connections. However, in terms of the SF, the probability of having SF = 8 free
channels is not just 4 (or 16) times less than the probability of having one SF = 32 (or SF = 128) free,
because the equivalent SF = 32 (or SF = 128) free channels must be contiguous and start at a specific
position.

Therefore, the availability of an OVSF of a specific length is determined by the number of OVSFs of same
length or shorter that are used, as well as by the number of longer OVSFs used. The OVSF allocation
algorithm at the Node B normally manages the availability of consecutive OVSFs. This algorithm also
allocates and optimizes the code tree to maximize the availability of shorter OVSFs.

6.2. METRICS for CHANNELIZATION CODES Monitoring

6.2.1. Code Tree Usage


A single downlink scrambling code supports an OVSF code tree containing 1020 codes (based on
spreading factors from 4 to 512). The Channelization Code Occupancy provides an indication of the
percentage of codes which are either used or blocked. Channelization codes assigned to both the common
and dedicated downlink channels are included in the KPI. Furthermore, there are also counters to monitor
the maximum and minimum code occupancy; these can be used to detect the busy hour and non-busy hour
of the cells, respectively.
In Huawei the OVSF code occupancy can be calculated through the following counters:
VS.RAB.SFOccupy
VS.RAB.SFOccupy.MAX
These measurement items provide the mean number and the maximum number of occupied codes in a
cell. The occupied codes are the codes occupied by the common channels, R99 users, and the HS-DSCH. The
code number is normalized to SF = 256, that is, converted to the equivalent code number at SF = 256:
the number of codes at SF = k is multiplied by 256/k.
The RNC takes a sample of the number of occupied codes in a cell every five seconds, normalizes it
to SF = 256, and accumulates the normalized numbers. At the end of the measurement period, the RNC
divides the accumulated number by the number of samples to obtain the mean number of occupied codes
in the cell within the measurement period. The maximum number of occupied codes reported to the RNC
is defined as the maximum number of occupied codes in the cell within the measurement period.

Average OVSF code Occupancy [%] = VS.RAB.SFOccupy / 256 × 100

Thresholds: MINOR: > 70% | MAJOR: > 80%

Max OVSF code Occupancy [%] = VS.RAB.SFOccupy.MAX / 256 × 100

Thresholds: MINOR: > 85% | MAJOR: > 95%
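A minimal sketch of the SF = 256 normalization and the occupancy KPI is shown below. The allocation used is the static common/HSDPA/HSUPA reservation set described later in this section, and it reproduces roughly the 36% figure quoted below.

# Sketch: normalizing allocated OVSF codes to SF = 256 (a code at SF = k
# counts as 256/k SF256-equivalent codes), then the occupancy KPI.
allocated = {                 # SF -> number of codes in use at that SF
    256: 5,                   # CPICH, P-CCPCH, AICH, PICH, E-AGCH
    64: 1,                    # S-CCPCH
    128: 2,                   # HS-SCCH + E-RGCH/E-HICH
    16: 5,                    # 5 HS-PDSCH codes
}

sf256_equivalent = sum(n * 256 // sf for sf, n in allocated.items())
occupancy_pct = 100 * sf256_equivalent / 256
print(f"{sf256_equivalent} SF256-equivalent codes -> {occupancy_pct:.1f}% occupancy")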

The total number of codes used includes the codes for both common and dedicated channels. The codes
used by the common channels are: Primary CPICH - Cch,256,0; Primary CCPCH - Cch,256,1; AICH - Cch,256,2;
PICH - Cch,256,3; Secondary CCPCH - Cch,64,1 (if a second S-CCPCH is configured, an additional channelisation
code will be assigned). The use of these common channel channelisation codes blocks 1 SF4 code, 1 SF8 code,
1 SF16 code, 1 SF32 code, 2 SF64 codes, 4 SF128 codes, 8 SF256 codes and 16 SF512 codes. The total number of
codes used by the common channels is thus 34 (based on a single S-CCPCH). Hence a typical cell with
no traffic would show an average channelisation code occupancy of 3%.
With the introduction of HSDPA and HSUPA to the network, the code occupancy increases heavily. The
activation of HSDPA reserves a minimum of 5 HS-PDSCH codes (SF16) and at least one HS-SCCH code
(SF128) in each cell. For HSUPA activation, one E-AGCH code (SF256) and one E-RGCH/E-HICH code (SF128) are
required.

The set of codes reserved for the common channels, HSDPA (assuming the minimum of 5 codes in this case)
and HSUPA is presented in the table below:

Channelization Codes Static Reservation at Cell Start-up

Physical Channel    Spreading Factor    Code Number
CPICH               256                 0
P-CCPCH             256                 1
PICH                256                 3
AICH                256                 2
S-CCPCH             64                  1
HS-SCCH             128                 4
HS-PDSCH            16                  11-15
E-AGCH              256                 14
E-RGCH/E-HICH       128                 7
In total, 13 channelization codes are reserved. These 13 channelization codes block a further 358
codes (184 SF512, 87 SF256, 44 SF128, 23 SF64, 12 SF32, 1 SF16, 4 SF8, 3 SF4), i.e. a total of 371 channelization
codes become unavailable for DPCH use. This means that when HSDPA and HSUPA are enabled, the code tree
occupancy generated by the static channelization code reservations is significantly greater, i.e. 36%
compared to 3%. Consequently, the threshold above which code tree optimization can be triggered
should be increased. This will help to avoid unnecessary reconfigurations while the dynamic section of the
code tree is relatively unloaded. It is recommended to increase the threshold defined by CodeTreeUsage
when HSDPA and HSUPA are enabled. The figure below shows the code allocation utilized for an HSDPA and
HSUPA enabled cell.
In Huawei, a set of parameters is used to configure the code usage by HSDPA:

Parameter             Default Configuration    Meaning
AllocCodeMode         Automatic                HSDPA code resource allocation mode (Manual, Automatic)
HsPdschCodeNum        -                        Number of HS-PDSCH codes. Valid when AllocCodeMode is set to Manual.
HsPdschMaxCodeNum     10                       Maximum number of HS-PDSCH codes. Valid when AllocCodeMode is set to Automatic.
HsPdschMinCodeNum     -                        Minimum number of HS-PDSCH codes. Valid when AllocCodeMode is set to Automatic.
HsScchCodeNum         4                        Number of HS-SCCH codes

HSDPA Code Resource Allocation Mode. This describes the HSDPA code resource allocation mode:
automatic or manual.
Manual allocation leads to restriction of HSDPA code resource or leaves HSDPA code resource
idle.

Figure 10 HSDPA and HSUPA Code Allocation

Number of HS-PDSCH Codes. This describes the number of HS-PDSCH codes. The number of
HS-PDSCH codes is valid only when AllocCodeMode is set to Manual.

If HsPdschCodeNum is excessively low, the HSDPA code resource is restricted.

If HsPdschCodeNum is excessively high, the HSDPA code resource is wasted and the
admission rejection rate of R99 services increases due to code resource shortage.

Maximum Number of HS-PDSCH Codes . This describes the maximum number of HS-PDSCH
codes. The maximum number of HS-PDSCH codes is valid only when AllocCodeMode is set to
Automatic.
In automatic HSDPA code allocation mode, set the maximum number of HS-PDSCH codes to a
comparatively high value.

Minimum Number of HS-PDSCH Codes. This describes the minimum number of HS-PDSCH
codes. The minimum number of HS-PDSCH codes is valid only when AllocCodeMode is set to
Automatic.
In automatic HSDPA code allocation mode, set the minimum number of HS-PDSCH codes to a
comparatively low value. In addition, HsPdschMinCodeNum must be not higher than
HsPdschMaxCodeNum.

Number of HS-SCCH Codes. This describes the number of codes allocated for the HS-SCCH.
HsScchCodeNum decides the maximum number of subscribers that the NodeB can schedule in
a TTI period. In the scenarios like outdoor macro cells with power restricted, it is less likely to
schedule multiple subscribers simultaneously, so two HS-SCCHs are configured. In the scenarios
like indoor pico with code restricted, it is more likely to schedule multiple subscribers
simultaneously, so four HS-SCCHs are configured. If excessive HS-SCCHs are configured, the
code resource is wasted. If insufficient HS-SCCHs are configured, the HS-PDSCH code resource or
power resource is wasted. Both affect the cell throughput rate.

6.2.2. Code Blocking

Overall CodeTree Blocking [%] = VS.RAB.FailEstCs.Code. / (VS.RAB.AttEstabPS.Conv + VS.RAB.AttEstabPS.Str + VS.RAB.AttEstabPS.)

Thresholds: MINOR: > 0% | MAJOR: > 2%

6.2.3. Average Number of Codes reserved for HS


The following counters provide the usage of the HS-SCCH code resources and the HS-PDSCH code
resources during a measurement period. These counters are used to analyze the allocation of the HS-SCCH
code resources and the HS-PDSCH code resources. If the code usage is high, the code resources
are insufficient for the current traffic and more code resources are required. If the code usage is
low, the code resources are excessive and some of them can be allocated to R99 services.
VS.ScchCodeUtil.Mean
VS.ScchCodeUtil.Max
VS.ScchCodeUtil.Min
These counters provide the average, maximum, and minimum usage of HS-SCCH code resources in a
cell during a measurement period respectively. Assume that the number of HS-SCCH codes used in each
TTI is A and the number of available HS-SCCH codes in each TTI is B. Then, VS.ScchCodeUtil.Mean = A/B.
The NodeB calculates VS.ScchCodeUtil.Mean every 5,120 ms and then takes the maximum value within
the measurement period as VS.ScchCodeUtil.Max and the minimum value within the measurement
period as VS.ScchCodeUtil.Min.
VS.ScchCodeUtil.Mean.User
VS.ScchCodeUtil.Mean.Data
These counters provide the HS-SCCH code resource usage in a cell over the time when HSDPA UEs camp
on the cell and the HS-SCCH code resource usage in the cell over the time when at least one HSDPA user
avails data transfer at the physical layer during a measurement period respectively.

VS.ScchCodeUtil.Mean.User = VS.ScchCodeUtil.Mean / VS.UserTtiRatio.Mean

VS.ScchCodeUtil.Mean.Data = VS.ScchCodeUtil.Mean / VS.DataTtiRatio.Mean

VS.PdschCodeUtil.Mean
VS.PdschCodeUtil.Max
VS.PdschCodeUtil.Min
These counters provide the average, maximum, and minimum usage of HS-PDSCH code resources in a
cell during a measurement period respectively. Assume that the number of HS-PDSCH codes used in
each TTI is A and the number of available HS-PDSCH codes in each TTI is B. Then,
VS.PdschCodeUtil.Mean = A/B. The NodeB calculates VS.PdschCodeUtil.Mean every 5,120 ms and then

takes the maximum value within the measurement period as VS.PdschCodeUtil.Max and the minimum
value within the measurement period as VS.PdschCodeUtil.Min.
VS.PdschCodeUtil.Mean.User
VS.PdschCodeUtil.Mean.Data
These counters provide the HS-PDSCH code resource usage in a cell over the time when HSDPA UEs
camp on the cell and the HS-PDSCH code resource usage in the cell over the time when at least one
HSDPA user avails data transfer at the physical layer during a measurement period respectively.

VS.PdschCodeUtil.Mean.User = VS.PdschCodeUtil.Mean / VS.UserTtiRatio.Mean

VS.PdschCodeUtil.Mean.Data = VS.PdschCodeUtil.Mean / VS.DataTtiRatio.Mean

VS.PdschCodeUsed.Mean
VS.PdschCodeUsed.Max
These counters indicate the average and maximum number of codes used by HS-PDSCHs in a cell during
a measurement period respectively. During the measurement period, the NodeB counts the number of
codes used by all the HS-PDSCHs in all TTIs in the cell. Assume that this value is A and the number of
TTIs in the measurement period is B. Then, VS.PdschCodeUsed.Mean = A/B. The NodeB calculates
VS.PdschCodeUsed.Mean every 5,120 ms and then takes the maximum value within the measurement
period as VS.PdschCodeUsed.Max.
VS.PdschCodeAvail.Mean
VS.PdschCodeAvail.Max
These counters indicate the average and maximum number of codes available for HS-PDSCHs in a cell
during a measurement period respectively. During the measurement period, the NodeB counts the
number of codes available for HS-PDSCHs in all TTIs in the cell. Assume that this value is A and the
number of TTIs in the measurement period is B. Then, VS.PdschCodeAvail.Mean = A/B. The NodeB
calculates VS.PdschCodeAvail.Mean every 5,120 ms and then takes the maximum value within the
measurement period as VS.PdschCodeAvail.Max.

6.2.4. AC Rejections due to Channelization Codes


Please refer to Section 11, Additional ADMISSION CONTROL Metrics, for further details.

6.3. OVSF CODES Performance Analysis and Optimisation

6.3.1. Parameters Optimization


After having checked the current settings to ensure that there is no evident misconfiguration
causing the issue, the first actions are to optimise some Code Tree Management parameters in the cell.

6.3.2. RF optimization
In case parameter optimization does not help, RF optimization could be done. This costs the operator more
than parameter optimization. RF optimization means basically the tuning of the Node B
antenna system and includes tilt, bearing and antenna height changes to improve cell dominance areas,
so that traffic can be reaccommodated between existing cells and/or soft handover areas optimized.

6.3.3. Activation of Features

When the usage of cell resource exceeds the congestion trigger threshold, the cell enters the basic
congestion state. In this case, LDR (Load Reshuffling) is needed to reduce the cell load and increase the
access success rate. When the load is lower than the congestion release trigger threshold, the system
returns to normal.
The resources that can trigger basic congestion of the cell include:

Power resource

Iub resource or Iub bandwidth

Code resource

NodeB credit resource

Equivalent user number


The function of the LDR is to reduce the load of a cell when the available resource of the cell reaches the
threshold. The introduction of the LDR is to increase the access success rate in the following ways:

Inter-frequency load handover

Code reshuffling

BE service rate reduction

Uncontrolled real-time traffic QoS renegotiation

CS domain inter-RAT load handover

PS domain inter-RAT load handover

Downsizing the bit rate of AMR voice

MBMS power downgrading


Please refer to the Annex: Load Management for further information.

6.3.4. Node B UL Power Capacity Upgrade


Once the traffic has been shared in the most efficient way possible between the cell and its neighbors
(intra- and inter-frequency), then new carriers, more sectorization, or finally a new site will be needed to
cope with the higher traffic.

7. CHANNEL ELEMENTS (CE)

The key resource in terms of WBTS hardware is the Channel Elements (CE) at WBTS level. Blocking
at the Node B HW interface gives information about the lack of Node B HW resources to handle both
uplink and downlink traffic. Each RAB type needs a different number of channel elements based on the
allocated UL/DL bit rate. Thus, if there are not enough HW channels for a connection, this can lead to BTS
HW blocking.
A possible reason for BTS blocking is lack of hardware capacity: a new service setup is
blocked if the current traffic mix does not leave enough free hardware channels to support the new
service. The tables below summarize the CE requirements for common channels, R99, HSDPA and
HSUPA.

Figure 11 Channel Elements Mapping

Figure 12 HSDPA CE Consumption

Figure 13 HSUPA CE Consumption

7.1. AVAILABLE CAPACITY IN TERMS OF CHANNEL ELEMENTS

The following table summarizes the CE capacity associated to different board configurations in the
NodeB:

Figure 14 CE boards configurations


The capacity of the BBU3806/BBU3806C is represented by the number of cells and the number of CEs.

Capacity of the BBU3806
Item            Capacity
Cell            -
Uplink CE       192
Downlink CE     256

Capacity of the BBU3806 with the EBBC
Item            Capacity
Cell            -
Uplink CE       384
Downlink CE     512

Capacity of the BBU3806C
Item            Capacity
Cell            -
Uplink CE       128
Downlink CE     256

Capacity of the BBU3806C with the EBBM
Item            Capacity
Cell            -
Uplink CE       320
Downlink CE     512

Capacity of the RRU3801C
Item                Capacity
Maximum sectors     -
Maximum carriers    -

Capacity of the RRU3804
Item                Capacity
Maximum sectors     -
Maximum carriers    -

7.2. Dynamic CE Resource Management

A channel element (CE) is defined as the baseband resource required in the NodeB to provide capacity
for one 12.2 kbit/s AMR voice connection, including the 3.4 kbit/s DCCH. HSUPA shares the CE resource with
the R99 services. HSUPA improves the uplink delay and rate performance, but it consumes large amounts of CE
resources.
Without dynamic CE resource management, the NodeB allocates CE resources according to
the maximum rate of the UE, even if the actual traffic volume is very low. In this case, the utilization of the
CE resource is inefficient; thus, dynamic CE resource management is necessary. Considering that the
rate of an HSUPA user changes fast, the algorithm periodically adjusts the CE resources of users according to
the users' rates and the available CE resources.
When a new RL is admitted, the algorithm also adjusts CE resources. Dynamic CE management can
minimize the failures in demodulation and decoding due to CE shortage. Meanwhile, it can also maximize
the CE usage and UL throughput.

Figure 15 Overview of CE resource management


The MAC-e scheduler always takes the CE resources allocated to the user into consideration. CE resource
adjustment is performed periodically or triggered by events.

7.2.1. Periodical CE Resource Adjustment


When each adjustment period arrives, the algorithm performs the following operations:

1. Call back the CE resources of the serving RLS

The NodeB determines whether to call back the CEs based on the CEavg during the previous period.
If CEallocate is greater than both CEinit and CEavg, the NodeB calls back some CEs and
decreases CEallocate to Max(CEavg, CEinit). The CE resources called back take effect during the
next period. The algorithm notifies the MAC-e scheduler of SGmax at the current TTI.

CEallocate: the number of CEs allocated to the serving RLS.
CEinit: initial number of CEs, calculated on the basis of the configured GBR. If the
user is not configured with a GBR, then CEinit is the CE resources for transmitting one RLC PDU.
CEavg: average number of CEs, calculated on the basis of the average rate of the
serving RLS.
SGmax: maximum SG for the UEs, determined by the dynamic CE resource management
function. Since one SG may correspond to different CE numbers, if the MAC-e scheduler uses
this SG, the allocated CE resources may be insufficient. Therefore, the algorithm needs to notify the
MAC-e scheduler of SGmax to avoid CE insufficiency.
5. Processing CE resources among serving RLS for fairness

If the available CE resources for the serving RLS are less than the CE resources required for
increasing an SF4 to 2xSF4, the algorithm performs fairness processing.
The algorithm selects the user with the largest priority value and reduces its rate. The users whose
GBR is met are downsized before the users whose GBR is not met. When the next period
arrives, this user's CE resources will be called back.
The queuing of users (see the sketch after this list) is as follows:

For users whose Reff is smaller than the GBR, the algorithm queues the users based on
Priority = Reff / (SPI x GBR).
For users whose Reff is greater than or equal to the GBR, or users whose GBR is not
configured, the algorithm queues the users based on Priority = Reff / SPI.
For the users of the serving RLS, the algorithm stops decreasing CE resources when they
equal CEinit.
After processing, the algorithm notifies the MAC-e scheduler of the new SGmax.
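The sketch below illustrates the queuing rule just described: users below their GBR are ranked with Priority = Reff / (SPI x GBR), all others with Priority = Reff / SPI, and the user with the largest priority value is downgraded first. The HsupaUser structure, field names and example figures are illustrative assumptions, not Huawei data structures.

from dataclasses import dataclass
from typing import Optional

@dataclass
class HsupaUser:                 # hypothetical container for per-user inputs
    name: str
    reff_kbps: float             # effective (average) rate
    spi: float                   # scheduling priority indicator weight
    gbr_kbps: Optional[float]    # guaranteed bit rate, None if not configured

def priority(u: HsupaUser) -> float:
    # Users below their GBR get a small priority value and are downgraded last.
    if u.gbr_kbps is not None and u.reff_kbps < u.gbr_kbps:
        return u.reff_kbps / (u.spi * u.gbr_kbps)
    return u.reff_kbps / u.spi

users = [HsupaUser("A", 500, 2, 256), HsupaUser("B", 128, 1, 256),
         HsupaUser("C", 900, 4, None)]

# The candidate for rate reduction is the user with the largest priority value.
candidate = max(users, key=priority)
print(f"Reduce rate of user {candidate.name} first "
      f"(priority {priority(candidate):.2f})")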
6. Increasing CE resources of the serving RLS

If the CEavg during the previous period is greater than or equal to CEallocate, the algorithm can
increase the CE resources of these users by one step if there are available CE resources. For
example, if the CE resources of a user correspond to SF4, the algorithm increases the CE
resources to those corresponding to 2xSF4.
The operation of increasing CE resources is based on the user queuing. The users are queued in
ascending order of priority value; the smaller the priority value of a user is, the earlier
this user's CE is increased. The queuing of users is as follows:

For users whose Reff is smaller than the GBR, the algorithm queues the users based on
Priority = Reff / (SPI x GBR).
For users whose Reff is greater than or equal to the GBR, or users whose GBR is not
configured, the algorithm queues the users based on Priority = Reff / SPI.
The processing of increasing CE resources is as follows:

The users whose GBR is not reached are increased before the users whose GBR is satisfied
or not configured.
During the increasing procedure, the algorithm can preempt the CE resources of the non-serving
RLs until their resources decrease to the minimum CE resources, which are required for E-DPCCH
demodulation and decoding.
After the increase of CEs, the algorithm notifies the MAC-e scheduler of the newly allocated CEs and
SGmax.
7. Allocating CE resources to non-serving RLs

When there are CE resources available for non-serving RLs, the algorithm allocates them to the
users of the non-serving RLs.
The algorithm allocates as much of the available CE resources as possible to non-serving RLs, so that more
users can obtain the gain of soft handover.
Based on the CEavg of the non-serving RLs during the previous period, the algorithm increases the
number of CEs to CEup, where CEup is obtained by increasing CEavg by one step.
The users are queued in ascending order of priority value; the smaller the priority value is, the
earlier the user is processed. The priority value is calculated as follows:

Priority = CEneed / SPI
CEneed = NRL * (CEnew - CEassign)
    NRL: the number of RLs on the current UL board.
    CEassign: the CE resources allocated to this user.
CEnew = Min[CEup, CE(E-DCH MBR), CE(Maximum Set of E-DPDCHs)]
    CE(E-DCH MBR): the CE resources corresponding to the E-DCH MBR.
    CE(Maximum Set of E-DPDCHs): the CE resources corresponding to the Maximum Set of E-DPDCHs.
    CEup: the CEassign after increasing by one step.
    SPI: the weight of the SPI.

If the available CE resources can meet the CEneed requirement of a user, the algorithm allocates
the CE resources to this user. If not enough CE resources are available, the algorithm allocates
the minimum CE resources. After increasing, the algorithm notifies the MAC-e scheduler of the
new CEs and SGmax.
8. Allocating the remaining CE resources

The NodeB allocates the remaining CE resources to the users of the serving RLS in order to improve the
utilization efficiency of the CE resources.
The NodeB schedules the users of the serving RLS in ascending order of priority until the remaining CE
resources are not enough to increase a user by one step, or until all users have obtained the CE
resources of Min[CE(E-DCH MBR), CE(Maximum Set of E-DPDCHs)].
The priority is calculated as follows:

Priority = CEneed / F(SPI)

where
CEneed = NRL * (CEup - CEassign)
    CEassign is the CE resources allocated to the user of the serving RLS.
    CEup is the CE resources required for increasing CEassign by one step.

7.2.2. CE Resource Adjustment Triggered by Event


When a new RL is admitted, the new RL requests CE resources according to CEinit. If the CE resources
are insufficient, CE preemption is triggered and processed as follows:

The algorithm preempts the CEs of non-serving RLs until their CE resources decrease to the
minimum CE number.
If the CE resources are still insufficient after preemption of the non-serving RLs, the algorithm
preempts the CE resources of serving RLSs until the CE resources decrease to CEinit.

The algorithm preempts the CE resources of users in Type1 first, then in Type2:

Type1: users with a GBR and Reff >= GBR, or users without a GBR
Type2: users with a GBR and Reff < GBR

Within each type, the algorithm preempts the CE resources according to the priority value of the users:
Priority = Reff / SPI

7.3. METRICS for CHANNEL ELEMENTS Monitoring

7.3.1. Number of CEs available and used in UL/DL


The following counters indicate the usage of UL CEs and DL CEs in each cell of the NodeB within the
current measurement period. When operators share a RAN, each operator has a dedicated CE resource
pool and all the operators share a common CE resource pool. The cell first consumes the CE resources in
the dedicated pool. When the resources are used up, the cell begins to consume the CE resources in the
shared pool. Therefore, the NodeB needs to measure the usage of dedicated CE resources and that of
shared CE resources respectively. When the RAN is not shared, there is only one CE resource pool. The

usage of CE resources in the cell is indicated by the counters in the form of xxx.Shared, whereas the
counters in the form of xxx.Dedicated are irrelevant to the measurement and are constantly set to 0.
The table below describes the preceding counters.
VS.ULCE.Mean.Shared: Average number of shared UL CEs consumed in a cell within a measurement period of 15 minutes
VS.ULCE.Max.Shared: Maximum number of shared UL CEs consumed in a cell within a measurement period of 15 minutes
VS.DLCE.Mean.Shared: Average number of shared DL CEs consumed in a cell within a measurement period of 15 minutes
VS.DLCE.Max.Shared: Maximum number of shared DL CEs consumed in a cell within a measurement period of 15 minutes
VS.ULCE.Mean.Dedicated: Average number of dedicated UL CEs consumed in a cell within a measurement period of 15 minutes
VS.ULCE.Max.Dedicated: Maximum number of dedicated UL CEs consumed in a cell within a measurement period of 15 minutes
VS.DLCE.Mean.Dedicated: Average number of dedicated DL CEs consumed in a cell within a measurement period of 15 minutes
VS.DLCE.Max.Dedicated: Maximum number of dedicated DL CEs consumed in a cell within a measurement period of 15 minutes

The following counters indicate the configuration of CEs in the current NodeB. The configuration consists
of the number of UL CEs and the number of DL CEs in each license group and in the shared group. When
operators share a RAN, each operator has a dedicated license group. When the RAN is not shared by
operators, there is only one CE resource pool. The configuration of CE resources is indicated by the
counters in the form of xxx.Shared, whereas the counters in the form of xxx.Dedicated are irrelevant to
the measurement and are constantly set to 0. The table below describes the preceding counters.
VS.LC.ULCreditAvailable.Shared Number of UL CEs configured for the shared group
VS.LC.DLCreditAvailable.Shared Number of DL CEs configured for the shared group
VS.LC.ULCreditAvailable.LicenseGroup.Dedicated Number of UL CEs configured for an operator
VS.LC.DLCreditAvailable.LicenseGroup.Dedicated Number of DL CEs configured for an operator
During the current measurement period, the NodeB measures the usage of UL CEs and DL CEs in each
license group and in the shared group. When operators share a RAN, each operator has a dedicated
license group. When the RAN is not shared, only the usage of UL CEs and DL CEs of the shared group is
reported and the relevant counters are xxx.Shared. The table below describes the preceding counters.
VS.LC.ULMean.LicenseGroup: Average number of UL CEs consumed by an operator within a measurement period of 15 minutes
VS.LC.ULMax.LicenseGroup: Maximum number of UL CEs consumed by an operator within a measurement period of 15 minutes
VS.LC.DLMean.LicenseGroup: Average number of DL CEs consumed by an operator within a measurement period of 15 minutes
VS.LC.DLMax.LicenseGroup: Maximum number of DL CEs consumed by an operator within a measurement period of 15 minutes
VS.LC.ULMean.LicenseGroup.Shared: Average number of UL CEs in the shared group that are consumed by an operator within a measurement period of 15 minutes
VS.LC.ULMax.LicenseGroup.Shared: Maximum number of UL CEs in the shared group that are consumed by an operator within a measurement period of 15 minutes
VS.LC.DLMean.LicenseGroup.Shared: Average number of DL CEs in the shared group that are consumed by an operator within a measurement period of 15 minutes
VS.LC.DLMax.LicenseGroup.Shared: Maximum number of DL CEs in the shared group that are consumed by an operator within a measurement period of 15 minutes

7.3.2. Average UL/DL CE Utilization (%)


The RNC periodically takes samples from the UL credit usage and DL credit usage in the cell. At the end
of the measurement period, the RNC calculates the maximum and minimum usages of CE resources, and
divides the accumulated usages by the number of samples to obtain the mean usage of both UL and DL
CE resources in the measurement period.

Average UL CE Utilization [%] = VS.LC.ULCreditUsed.CELL / Max CE Available × 100
Thresholds: MINOR: >85% / MAJOR: >95%

Average DL CE Utilization [%] = VS.LC.DLCreditUsed.CELL / Max CE Available × 100
Thresholds: MINOR: >85% / MAJOR: >95%
Note:
The Max CE available should be taken from the M2000 and input into the formula.
In a similar way, the Maximum and Minimum UL/DL CE Utilization (%) KPIs can also be defined with the
counters:
VS.LC.ULCreditUsed.CELL.Max
VS.LC.ULCreditUsed.CELL.Min
VS.LC.DLCreditUsed.CELL.Max
VS.LC.DLCreditUsed.CELL.Min
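As an illustration, a minimal Python sketch of how these utilization KPIs and their thresholds could be evaluated from exported counter values is shown below. The counter names follow the formulas above; the Max CE Available figure and the example values are assumptions (in practice the limit is taken from the M2000).

```python
# Minimal sketch: average UL/DL CE utilization per cell with MINOR/MAJOR flags.
# Assumes counter values have already been exported (e.g. from M2000) into a dict.

def ce_utilization(credit_used_mean: float, max_ce_available: float) -> float:
    """Average CE utilization in percent, as defined above."""
    if max_ce_available <= 0:
        raise ValueError("Max CE Available must be positive (taken from M2000)")
    return credit_used_mean / max_ce_available * 100.0

def severity(utilization_pct: float, minor: float = 85.0, major: float = 95.0) -> str:
    """Map a utilization percentage to the MINOR/MAJOR thresholds above."""
    if utilization_pct > major:
        return "MAJOR"
    if utilization_pct > minor:
        return "MINOR"
    return "OK"

# Example with hypothetical counter values for one cell:
counters = {"VS.LC.ULCreditUsed.CELL": 410, "VS.LC.DLCreditUsed.CELL": 355}
max_ce_available = 448  # assumption: licensed CE capacity read from M2000

for key in ("VS.LC.ULCreditUsed.CELL", "VS.LC.DLCreditUsed.CELL"):
    util = ce_utilization(counters[key], max_ce_available)
    print(f"{key}: {util:.1f}% -> {severity(util)}")
```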

7.3.3. Setup Failures (Blockings) due to CE shortage


The following measurements can be used to identify any setup failure caused by HW channel element
shortage:
The Service level measurements can provide the first indication of BTS HW limitations:
1. RRC and RAB Setup failure rate resulting from the BTS
2. PS User plane allocation failures
3. Radio link setup failures due to MISC (Miscellaneous), which indicate the level of blocked RL setups due to BTS HW
4. HSDPA and HSUPA access failures, which can be monitored with traffic counters (HSDPA access failures due to the BTS are mainly related to the HSDPA UL return channel; HSUPA setup and access failures)
5. BTS HSUPA resource status

The following KPIs can be monitored in order to track the severity of the blocking due to hardware
channel elements.

RRC Setup Failures due CE [%] = (VS.RRC.Rej.UL.CE.Cong + VS.RRC.Rej.DL.CE.Cong) / VS.RRC.AttConnEstab.Cell × 100

CS RAB Setup Failures due CE [%] = (VS.RAB.FailEstCs.DLCE.Cong + VS.RAB.FailEstCs.ULCE.Cong) / (VS.RAB.AttEstab.AMR + VS.RAB.AttEstabCS.Conv + …) × 100

PS RAB Setup Failures due CE [%] = (VS.RAB.FailEstPs.ULCE.Cong + VS.RAB.FailEstPs.DLCE.Cong) / (VS.RAB.AttEstabPS.Conv + VS.RAB.AttEstabPS.Str + …) × 100

Thresholds for all KPIs above: MINOR: >0% / MAJOR: >2%
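The sketch below shows, purely as an illustration, how such failure-rate KPIs could be computed from exported counters and compared against the thresholds. The attempt-counter lists in the CS/PS denominators are only partially recoverable from the source, so the example sticks to the RRC KPI; all values are hypothetical.

```python
# Minimal sketch: setup-failure rate due to CE congestion, following the KPI
# definitions above. The example counter values are hypothetical, and the
# full set of RAB attempt counters for the CS/PS denominators would need to
# be completed from the counter reference.

def failure_rate(failures: list[float], attempts: list[float]) -> float:
    """Percentage of attempts rejected because of CE congestion."""
    total_attempts = sum(attempts)
    return 100.0 * sum(failures) / total_attempts if total_attempts else 0.0

c = {
    "VS.RRC.Rej.UL.CE.Cong": 12,
    "VS.RRC.Rej.DL.CE.Cong": 7,
    "VS.RRC.AttConnEstab.Cell": 5400,   # hypothetical values
}
rrc_fail_pct = failure_rate(
    [c["VS.RRC.Rej.UL.CE.Cong"], c["VS.RRC.Rej.DL.CE.Cong"]],
    [c["VS.RRC.AttConnEstab.Cell"]],
)
print(f"RRC Setup Failures due CE: {rrc_fail_pct:.2f}% (MAJOR if >2%, MINOR if >0%)")
```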

7.3.4. Releases and Downgrades due to CE shortage


The following counters catch these Congestion Events due to BTS HW:
VS.LCC.LDR.Num.ULCE
VS.LCC.LDR.Num.DLCE
When a cell is in LDR state due to Channel Element (CE) resource congestion, VS.LCC.LDR.Num.ULCE counts the number of times the uplink of the cell enters the LDR state, whereas VS.LCC.LDR.Num.DLCE does the same for the downlink of the cell. These measurement items therefore provide the number of times the cell on which the UE camps enters the LDR state because of CE resource congestion.

7.4. CE Performance Analysis and Optimization

When there is a high percentage of blocking due to a lack of channel elements in the BTS, steps have to be taken to mitigate this effect. This section looks at possible actions that can be taken to mitigate the blocking due to lack of CEs.

7.4.1. SHO Overhead Optimization


With the activation of HSDPA in the network, there is a possibility that the BTS CEs will be highly utilized
by the HSDPA UL return channel. Furthermore, in the case of SHO, the CEs are utilized from all BTS in
the active set. Hence, also for this reason, it is important to keep the SHO region for HSDPA to a
reasonable minimum. This can be done by having a specific parameter set for HSDPA handovers and
having very tight settings for the adding and drop related parameters to reduce the SHO region.

7.4.2. WBTS CE Capacity Upgrade


Channel Elements are pooled resources within the NodeB and are shared between the users of each sector. The expansion of the channel element capacity is different between the BTS3812AE and the DBS3800. This is primarily due to the difference in architecture between the two.
Adding more baseband boards will increase the CE availability per NodeB.

8. BACKHAUL (Iub)

As stated in the Introduction, it is important to make sure that air interface is indeed the bottleneck: All
other resources should be dimensioned in excess of air interface resources, but since other resources are
costly too, they need to be carefully planned: CEs, Backhaul, Iu, MSC, SGSN trunks.
Iub Occupancy Monitoring is important to ensure that the number of E1s deployed for each NodeB in the
network is adequate to guarantee that all services can be provided at acceptable performance levels (all
CS services can go through and all PS Data Services are offered at acceptable Throughputs).
In this section, we first compute the Iub Utilization based on the number of ATM cells received (DL) or
transmitted (UL) by the NodeB and comparing this figure to the E1s capacity installed in the NodeB.
Thresholds of acceptable Iub Utilization are also under development, as many factors need to be taken
into account in order to decide about the request of additional E1s for a certain Site: A certain Iub Usage
can trigger the analysis of a specific case, but then other considerations need to be taken into account:
1. HS Limiting by Iub [%]
2. HS Frame Loss [%]
3. Voice [Erlangs], Release 99 PS [Erlangs and Volume kBytes], HSDPA and EUL Volume [MBytes]
4. Number of HS Users (HSDPA Erlangs) and Active HS Users in the cell
5. User perceived throughput [kbps]
6. CQI reports as radio quality indicators
7. AAL2ap blocking counters

As can be seen, all these additional considerations are in line with the above statement of checking whether the number of E1s is enough to sustain all CS traffic (no Voice or Video Calls should be blocked at this level or any other) and to deliver all PS Data Traffic at acceptable throughput.
The key Iub resource is the available Iub bandwidth at NodeB level, measured in cps or Kbps. In
particular, the Iub User Plane VCCs are to be monitored. The blocking in Iub interface gives information
about the capacity shortage between Node B and Transport layer. Blocking is related to the load of ATM
and especially AAL2 layer user plane resources. In transport layer the Call Admission Control (CAC)
could also deny the service if there is no room in AAL2 layer.
As the Iub CAC resource allocation system is an input for the Radio Admission Control functionality,
blocking on Iub will result in a degradation of Call Setup Success Rate.
The focus of this chapter is to introduce methods to proactively and reactively monitor the Iub performance. A basic representation of a 3G network showing the RNC interfaces is given below.

Figure 16 RNC Interfaces

8.1. AVAILABLE CAPACITY IN TERMS OF BACKHAUL RESOURCES

In Huawei, a backhaul has two possible configurations:

ATM backhaul configured in an IMA group, which can contain one or several individual E1s; the IMA group works as one logical pipe carrying both CS and PS traffic.
IP backhaul configured through User Datagram Protocol (UDP) resources.

8.2. METRICS for BACKHAUL RESOURCES Monitoring

8.2.1. Traffic Load Measurements


8.2.1.1. ATM backhaul
In ATM mode, the user plane data of the Iub/Iur/Iu-CS interfaces is carried on AAL2 paths, and that of the Iu-PS interface is carried on IP over ATM (IPoA) permanent virtual channels (PVCs).
Data of the terrestrial interfaces is transmitted on the physical layer in one of the following transmission
modes:

E1/T1: Electrical ports of the AEUa board are used for data transmission.
Channelized STM-1/OC-3: Optical ports of the AOUa board are used for data transmission.
Unchannelized STM-1/OC-3c: Optical ports of the UOIa board are used for data transmission.

Figure 17 ATM Interface Boards

Figure 18 Features of ATM interface boards


AAL2 Path Resources
In ATM mode, the types of AAL2 path are listed as follows:

RT
NRT
HSDPA_RT
HSDPA_NRT
HSUPA_RT
HSUPA_NRT

The type of AAL2 path is related to the Service type. The mapping between AAL2 path type and
Service type is determined by TX traffic record index or RX traffic record index.

Figure 19 Mapping between AAL2 path type and service type


Note:

HSDPA traffic and HSUPA traffic can be carried on the same AAL2 path. The former is carried on
the downlink and the latter on the uplink. If there is no need to support HSDPA and HSUPA, the
paths for HSDPA or HSUPA such as HSDPA_RT, HSDPA_NRT, HSUPA_RT, and HSUPA_NRT do
not need to be configured.
In terms of the priorities, the service types in descending order are CBR > RTVBR > NRTVBR >
UBR or UBR+.

ATM permanent virtual channel measurement reports the average traffic rate per PVC using the following counters:

VS.AAL2PATH.PVCLAYER.RXCORRECTCELLS This measurement item provides the number of


correct cells received by the AAL2PATH_PVCLAYER in the specified measurement period. The item
indicates the status of the traffic received by a single AAL2PATH_PVCLAYER.
VS.AAL2PATH.PVCLAYER.TXCORRECTCELLS This measurement item provides the number of
correct cells transmitted by the AAL2PATH_PVCLAYER in the specified measurement period. The
item indicates the status of the traffic transmitted by a single AAL2PATH_PVCLAYER.

The following KPIs can be used to measure the Ingress and Egress of the PVC.
ROP (Report Output Period) = Measurement time in minutes (examples: 15, 30, hour = 60, day =
1440)

Average ATM PVC Rx Throughput (cps) = VS.AAL2PATH.PVCLAYER.RXBYTESOFAAL2CPSPKTS / (ROP × 60 × 53)

Average ATM PVC Tx Throughput (cps) = VS.AAL2PATH.PVCLAYER.TXBYTESOFAAL2CPSPKTS / (ROP × 60 × 53)
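The conversion from the byte counters to cells per second can be sketched as below, assuming the formula above (ROP in minutes converted to seconds, 53 bytes per ATM cell). The helper and the example byte count are illustrative only.

```python
# Illustrative conversion of the AAL2 path byte counters into an average
# cell rate (cps). Assumes bytes / (ROP * 60 s * 53 bytes per ATM cell).
ATM_CELL_BYTES = 53

def avg_pvc_throughput_cps(bytes_in_rop: float, rop_minutes: int = 15) -> float:
    """Average PVC throughput in ATM cells per second over one ROP."""
    return bytes_in_rop / (rop_minutes * 60 * ATM_CELL_BYTES)

# Example with a hypothetical 15-minute Rx byte count:
rx_bytes = 120_000_000   # VS.AAL2PATH.PVCLAYER.RXBYTESOFAAL2CPSPKTS (hypothetical)
print(f"Average ATM PVC Rx Throughput: {avg_pvc_throughput_cps(rx_bytes):.1f} cps")
```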

8.2.1.2. Iub Utilization


For each of the following counters, the NodeB takes statistics about the maximum used DL bandwidth on
an ATM physical port during a measurement period. The NodeB measures the used DL bandwidth on the
physical port every five seconds over 15 minutes and then obtains the maximum value. The NodeB can
take statistics on these counters for a maximum of four ATM physical ports. The counters and the physical ports have a one-to-one relationship. These counters are identical in application: they are all used for traffic measurement at the physical layer.
VS.ATMDlMaxUsed.1
VS.ATMDlMaxUsed.2
VS.ATMDlMaxUsed.3
VS.ATMDlMaxUsed.4
For each of these counters, the NodeB takes statistics about the average used DL bandwidth on an ATM
physical port during a measurement period. The NodeB measures the used DL bandwidth on the
physical port every five seconds over 15 minutes and then calculates the average value. The NodeB can
take statistics on these counters for a maximum of four ATM physical ports. The counters and the physical ports have a one-to-one relationship. These counters are identical in application: they are all used for traffic measurement at the physical layer.
VS.ATMDlAvgUsed.1
VS.ATMDlAvgUsed.2
VS.ATMDlAvgUsed.3
VS.ATMDlAvgUsed.4
8.2.1.3. Number of active HSDPA users
In addition to Iub utilization, the total number of active HSDPA users is necessary to identify Iub congestion. Even if Iub utilization is close to 100%, it might not be necessary to add more E1s if most of the data volume is generated by one or a few HSDPA users. If Iub utilization is close to 100% and the bandwidth is shared between a larger number of users, Iub expansion is necessary.
VS.HSDPA.UE.Mean.Cell
This measurement item provides the mean number of HSDPA UEs in a serving cell. The value of this unit
is lower than or equal to that of the VS.CellDCHUEs measurement.
8.2.1.4. Number of AAL2 connections
The number of AAL2 connections per VCC is defined by the AAL2 channel identifiers (CIDs). A VCC can have a maximum of 248 CIDs; if this limit is reached, it can result in blocking. Basically, a single call requires 2 connections or 2 CIDs (e.g. SRB + AMR or SRB + NRT PS), and a multi-RAB call requires one CID per connected RAB in addition to the SRB CID. Each HSDPA user needs 3 CIDs (SRB + MAC-d Flow + UL Return Channel). Common channels (four per cell) require their own connections and CIDs as well. The CIDs can be monitored using the following KPI.

Average AAL2 connection Utilization [%] = VS.AAL2PATH.Act.Con / 248 × 100
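As a rough illustration of the CID accounting above (2 CIDs per basic call, one extra CID per additional RAB of a multi-RAB call, 3 CIDs per HSDPA user, 4 common-channel CIDs per cell), the sketch below estimates CID demand on one user-plane VCC against the 248-CID limit. The traffic figures in the example are hypothetical.

```python
# Rough CID budget for one user-plane VCC, following the rules above.
# All traffic figures in the example are hypothetical.
MAX_CIDS_PER_VCC = 248

def cid_demand(basic_calls: int, extra_rabs: int, hsdpa_users: int, cells: int) -> int:
    """Estimated number of AAL2 CIDs in use on the VCC."""
    return (2 * basic_calls      # SRB + AMR, or SRB + NRT PS
            + extra_rabs         # one CID per additional RAB of multi-RAB calls
            + 3 * hsdpa_users    # SRB + MAC-d flow + UL return channel
            + 4 * cells)         # common channels per cell

demand = cid_demand(basic_calls=60, extra_rabs=10, hsdpa_users=25, cells=3)
print(f"CID demand: {demand}/{MAX_CIDS_PER_VCC} "
      f"({100.0 * demand / MAX_CIDS_PER_VCC:.1f}% of the VCC limit)")
```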

8.2.1.5. Average Cell Drop Rate


This measurement provides the ratio of cells discarded by the AAL2PATH_PVCLAYER due to the overflow of the receive/transmit buffer in the specified measurement period. It indicates the status of the traffic in the buffers of a single AAL2PATH_PVCLAYER.

Average Received Cell Drop Ratio [%] = VS.AAL2PATH.PVCLAYER.DROPFORRXOVERFLOWCELLS / VS.AAL2PATH.PVCLAYER.RXCORRECTCELLS × 100

Average Transmitted Cell Drop Ratio [%] = VS.AAL2PATH.PVCLAYER.DROPFORTXOVERFLOWCELLS / VS.AAL2PATH.PVCLAYER.TXCORRECTCELLS × 100

This measurement provides the number of cells discarded by the AAL2PATH_PVCLAYER due to error
headers in the specified measurement period. The item indicates the errors in the transmission cells on
a single AAL2PATH_PVCLAYER.

Average Cell Drop Ratio [%] = VS.AAL2PATH.PVCLAYER.DROPFORHEADCELL / (VS.AAL2PATH.PVCLAYER.RXCORRECTCELLS + VS.AAL2PATH.PVCLAYER.TXCORRECTCELLS) × 100

8.2.1.6. Transport Network Blocking

VS.RRC.Rej.AAL2.Fail
Number of RRC CONNECTION REJECT messages from the RNC to UEs in a cell due to AAL2 setup failure
VS.RAB.FailEstabCS.TNL
VS.RAB.FailEstPS.TNL
Number of CS/PS RABs unsuccessfully established because of transmission network layer failures
VS.RAB.FailEstab.CS.DLIUBBand.Cong
VS.RAB.FailEstab.CS.ULIUBBand.Cong
These measurement items provide the number of RABs that fail to be set up in the CS domain with the
failure cause of rejection by admission control due to Iub bandwidth congestion. The measurement is
undertaken in the best cell that is under the SRNC.
VS.RAB.FailEstab.PS.DLIUBBand.Cong
VS.RAB.FailEstab.PS.ULIUBBand.Cong
The measurement items provide the number of RABs that fail to be set up in the PS domain with the
failure cause of rejection by admission control due to Iub bandwidth congestion. The measurement is
undertaken in the best cell that is under the SRNC.

8.3. Iub Performance Analysis and Optimisation

When there is high loading on the Iub and/or there are failures due to Iub loading, steps can be taken to improve the situation. To evaluate the Iub blocking before considering an Iub expansion, it is important to analyse the SRB, AMR and HSDPA rejection rates against the CSSR %.
1. SRB rejection rate is the most critical: e.g. a 1% rejection rate for SRB (RRC) means 1% of all call setups will fail due to Iub AAL2 congestion, and the overall CSSR is reduced by 1%.
2. AMR is the next most critical, as 1% rejection on AMR means 1% RAB setup failure for AMR. CSSR is calculated as RRC Setup Success % * RAB Setup Success %, so 1% failure in RRC and 1% failure in AMR will cause the AMR CSSR to be 99% * 99% = 98.01%.
3. HS-DSCH rejections due to UL CAC are the next most critical, as these rejections impact HSDPA accessibility.
4. PS call rejection rate is less critical, as it includes PS upgrades as well, i.e. 64 -> 384 kbps rejections.

The average and maximum DL CAC Reservation can be calculated to see the level of CAC reservation of
SRB, AMR, PS and HSDPA.

8.3.1. AAL2 Channel Identifiers Blocking


If the maximum number of AAL2 connections is reached and/or regularly results in failures, a second user plane VCC should be introduced. Adding a second VCC involves modifications to the RNC connection configuration parameter set as well as changes to the NodeB commissioning file and HSDPA parameters.

8.3.2. Adjusting the Maximum Available Bandwidth of the Iub Port


In the case of network convergence or a hub NodeB, the bandwidth configured for the NodeB can be much wider than the resources available in the transport network. The HSUPA flow control algorithm automatically adjusts the maximum available bandwidth of the Iub port based on the congestion state of the transport network. ATM transport is different from IP transport; therefore, two different algorithms are provided.

8.3.2.1. Algorithm for ATM Transport


The RNC side detects the delay and loss of the FP frame in each MAC-d flow by using the FSN and CFN in
the FP frame. Then, the RNC side sends a congestion indication to notify the NodeB of the congestion
state when the MAC-d flow is transmitted on the Iub interface, as shown in the following figure.

Figure 20 Procedure of TNL congestion indication


The Congestion Status indicates whether there is transport network congestion. Its value range is described as follows:
0: no TNL congestion
1: reserved for future use
2: TNL Congestion detected by delay build-up
3: TNL Congestion detected by frame loss

When the period for adjusting the maximum available bandwidth arrives, the NodeB takes statistics on the congestion indications of all the MAC-d flows on the Iub port and performs the following operations (a sketch of this adjustment cycle follows the list):
If there is a congestion indication "TNL Congestion detected by frame loss", the NodeB subtracts the product of the maximum available bandwidth and a preset step from the maximum available bandwidth. This step is set to 2%. Otherwise:
o If there is a congestion indication "TNL Congestion detected by delay build-up", the NodeB subtracts the product of the maximum available bandwidth and a preset step from the maximum available bandwidth. This step is set to 1%.
o If no congestion indication is received during three consecutive periods and the use of the Iub bandwidth exceeds a preset value equal to 85%, the NodeB increases the maximum available bandwidth by one step. The initial step is 10 kbit/s. The step is doubled every time five consecutive increases are complete. The maximum step is 100 kbit/s.
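The compact Python sketch below models this cycle: frame-loss congestion steps the maximum available bandwidth down by 2%, delay congestion by 1%, and three clean periods with utilization above 85% step it back up with the doubling 10–100 kbit/s step. The class structure, variable names and update cadence are illustrative assumptions, not the NodeB code.

```python
# Illustrative model of the ATM Iub maximum-available-bandwidth adjustment
# described above. Variable names and the update cadence are assumptions.
FRAME_LOSS, DELAY_BUILDUP, NO_CONGESTION = "frame_loss", "delay", "none"

class IubBandwidthAdjuster:
    def __init__(self, max_bw_kbps: float):
        self.max_bw = max_bw_kbps
        self.up_step = 10.0          # initial increase step (kbit/s)
        self.clean_periods = 0       # consecutive periods without congestion
        self.increases = 0           # increases performed at the current step

    def on_period(self, congestion: str, utilization: float) -> float:
        if congestion == FRAME_LOSS:
            self.max_bw *= 0.98      # step down by 2% of the current maximum
            self.clean_periods = 0
        elif congestion == DELAY_BUILDUP:
            self.max_bw *= 0.99      # step down by 1% of the current maximum
            self.clean_periods = 0
        else:
            self.clean_periods += 1
            if self.clean_periods >= 3 and utilization > 0.85:
                self.max_bw += self.up_step
                self.increases += 1
                if self.increases % 5 == 0:          # double after 5 increases
                    self.up_step = min(self.up_step * 2, 100.0)
        return self.max_bw

# Example: one congested period followed by several clean, highly loaded ones.
adj = IubBandwidthAdjuster(max_bw_kbps=4000.0)
for event in [DELAY_BUILDUP, NO_CONGESTION, NO_CONGESTION, NO_CONGESTION]:
    print(f"{event:>10}: max available bandwidth = {adj.on_period(event, 0.9):.1f} kbit/s")
```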

8.3.2.2. Algorithm for IP Transport


For IP transport, the NodeB side directly uses the Performance Monitor (PM) algorithm, rather than the
congestion indication from the RNC, to periodically detect the transmission delay and frame loss in the IP
network. After knowing the congestion state of the Iub interface, the NodeB performs the following
operations:
If the congestion is due to frame loss, the NodeB subtracts the product of the maximum available bandwidth and a preset step from the maximum available bandwidth. This step is set to 2%. Otherwise:
o If the congestion is due to delay, the NodeB subtracts the product of the maximum available bandwidth and a preset step from the maximum available bandwidth. This step is set to 1%.
o If no congestion is detected during three consecutive periods, the NodeB increases the maximum available bandwidth by one step. The initial step is 10 kbit/s. The step is doubled every time five consecutive increases are complete.

8.3.2.3. Adjusting the Available Bandwidth of HSUPA


If the transmission bottleneck of the Iub interface lies in the NodeB rather than the transport network,
the maximum available bandwidth of the Iub port is just equal to the bandwidth configured for the
NodeB. Otherwise the Iub port bandwidth is adjusted by Adjusting the Maximum Available Bandwidth of
the Iub Port. When R99 users enter or exit the network, the bandwidth available for HSUPA users
changes accordingly. Therefore, a scheme is introduced to estimate the bandwidth available for HSUPA
users. This bandwidth is taken as an input of the scheduling algorithm.
Data is buffered on the NodeB side when the traffic on the Uu interface jumps and exceeds the capacity of the Iub interface. If the traffic on the Uu interface continuously exceeds the capacity of the Iub interface, the occupancy rate of the buffer also increases continuously. Therefore, from the variation trend of the buffer occupancy rate, the NodeB can learn how to adjust the available bandwidth of HSUPA.
The adjustment process is as follows:
If the occupancy ratio of the Iub buffer increases, the NodeB reduces the available
bandwidth of HSUPA users based on the variation. The adjustment upper limit is 150 kbit/s. The
adjustment is in direct proportion to the variation.
If the occupancy ratio of the Iub buffer decreases and the use of the Iub bandwidth
exceeds a preset value, which equals to 85% in the buffer non-congestion state, the NodeB
increases the available bandwidth of HSUPA users based on the variation. The adjustment is in
direct proportion to the variation. The adjustment upper limit is 150 kbit/s.
The adjustment must guarantee that the available bandwidth of HSUPA users cannot exceed the
maximum available bandwidth.
When a lot of R99 users access the network in a short period of time, the occupancy rate of the Iub
buffer jumps and even the buffer may overflow. To avoid this problem, the backpressure mechanism is
introduced to the flow control algorithm based on the occupancy rate of the buffer.
8.3.2.4. Handling Iub Buffer Congestion
The HSUPA flow control algorithm detects the status of the Iub buffer periodically and handles Iub buffer
congestion to minimize Iub packet loss rate and delay in the Iub buffer.
The following figure shows the procedure for handling Iub buffer congestion.

Figure 21 Handling Iub buffer congestion

The Iub flow control module measures and stores the value of the Iub buffer occupancy rate every 40 ms
and compares it with the previous one. The detection of the buffer state is as follows:
If the Iub buffer occupancy rate > The Congestion Threshold of IUB Buffer Used Ratio +
The Congestion Threshold Hysteresis of IUB Buffer Used Ratio, the buffer state is marked congested.
If the Iub buffer occupancy rate < The Congestion Threshold of IUB Buffer Used Ratio - The
Congestion Threshold Hysteresis of IUB Buffer Used Ratio, the buffer state is marked not congested.
Otherwise, the buffer status remains unchanged. Where,
o The Congestion Threshold of IUB Buffer Used Ratio is 30%,
o The Congestion Threshold Hysteresis of IUB Buffer Used Ratio is 5%.
The processing after congestion detection is as follows:

If the Iub buffer is congested, the NodeB compares the value of the Iub buffer occupancy
rate with the previous one every 40 ms.
o If the Iub buffer occupancy rate increases, the scheduler sends the RG Down message to all the
HSUPA users on this Iub port, and no AG is allowed to be sent to these users.
o If the Iub buffer occupancy rate does not increase, neither AG Up nor RG Up is allowed to be
sent to the users on this Iub port.
If the Iub buffer is not congested, the flow control algorithm does not affect the decision of
the scheduler.
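The detection and reaction logic above can be summarised in the short sketch below, using the stated 30% threshold and 5% hysteresis evaluated every 40 ms. The action strings stand in for the RG/AG decisions of the scheduler and are purely illustrative.

```python
# Illustrative Iub buffer congestion handling: hysteresis-based state detection
# every 40 ms, then the grant restrictions described above.
CONG_THRESHOLD = 0.30   # Congestion Threshold of IUB Buffer Used Ratio
HYSTERESIS = 0.05       # Congestion Threshold Hysteresis of IUB Buffer Used Ratio

def update_state(occupancy: float, congested: bool) -> bool:
    if occupancy > CONG_THRESHOLD + HYSTERESIS:
        return True
    if occupancy < CONG_THRESHOLD - HYSTERESIS:
        return False
    return congested            # inside the hysteresis band: keep previous state

def scheduler_action(congested: bool, occupancy: float, prev_occupancy: float) -> str:
    if not congested:
        return "no restriction"                 # flow control does not intervene
    if occupancy > prev_occupancy:
        return "send RG Down to all HSUPA users on this Iub port; no AG"
    return "block AG Up and RG Up for users on this Iub port"

# Example: two consecutive 40 ms samples of the buffer occupancy ratio.
prev, cur, congested = 0.28, 0.37, False
congested = update_state(cur, congested)
print(scheduler_action(congested, cur, prev))
```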

9. HSxPA Users

The number of simultaneous HSxPA active users that are allowed per cell has proved to be one of the
most important capacity parameters in 3G networks (especially in cases like Claro Brazil, where most
traffic is PS).
HSUPA (High Speed Uplink Packet Access) is an important feature of 3GPP R6. As an uplink (UL) high
speed data transmission solution, HSUPA provides a theoretical maximum uplink MAC-e rate of 5.73
Mbit/s on the Uu interface. The MAC-e peak data rate supported by Huawei RAN10.0 is 5.73 Mbit/s.
The main features of HSUPA are as follows:
2 ms short frame: It enables less Round Trip Time (RTT) in the Hybrid Automatic Repeat
reQuest (HARQ) process, which is controlled by NodeB. It also shortens the scheduling response
time.
HARQ at the physical layer: It is used to achieve rapid retransmission for erroneously
received data packets between the User Equipment (UE) and NodeB.
NodeB-controlled UL fast scheduling: It is used to increase resource utilization and
efficiency.
HSUPA improves the performance of the UMTS network in the following aspects:
Higher UL peak data rate
Lower latency: enhancing the subscriber experience with high-speed services
Faster UL resource control: maximizing resource utilization and cell throughput
Better Quality of Service (QoS): improving the QoS of the network
UL peak rate: 5.73 Mbit/s per user
10 ms and 2 ms TTI
Maximum 60 HSUPA users per cell
Soft handover and softer handover
Multiple RABs (3 PS)
Dedicated/co-carrier with R99
UE categories 1 to 6
Basic load control
OLPC for E-DCH
Iub flow control
CE scheduling
Power control of E-AGCH/E-RGCH/E-HICH

9.1. AVAILABLE CAPACITY IN TERMS OF HSDPA USERS

The maximum number of HSDPA users is defined in RNC without relying on capability indications of the
Node B. Instead, the RNC uses the following parameters to determine this number:

Maximum allowed number of HS-DSCH MAC-d flows in the cell, which is defined by the
MaxHSDSCHUserNum RNW parameter.
The number of subscribers supported by the HSDPA refers to the number of subscribers
whose service is carried by the HSDPA channel, no matter how many RABs are borne by the
HSDPA channel. The highest value of MaxHSDSCHUserNum equals the cell HSDPA capacity
that is prescribed in the NodeB product specification. MaxHSDSCHUserNum can be set
according to the cell type, the available power of HSDPA, and the code resource.

Maximum allowed number of HSDPA users in the cell, which is defined by the NodeBHsdpaMaxUserNum RNW parameter.

This describes the maximum number of subscribers supported by the HSDPA channel per
NodeB. It is set according to the product specification and actual number of sold HSDPA
licenses. Impact on the Network Performance: If the HSDPA user connection is rejected by
the NodeB, you can infer that the HSDPA licenses are insufficient. We need to apply for new
HSDPA licenses.

9.2. AVAILABLE CAPACITY IN TERMS OF HSUPA USERS

The maximum number of HSUPA users is defined in the cell as well as in the NodeB, and it is controlled with the following parameters:

Maximum allowed number of users in the cell, which is defined by the MaxHSUPAUserNum Cell
parameter.
This parameter represents the maximum number of subscribers supported by the HSUPA
channel and is set according to the product specification. For the HSUPA admission, the
number of subscribers must be counted first. If the current HSUPA subscriber number is lower than this parameter, the admission request is analyzed further; otherwise, the admission is rejected directly.

Maximum allowed number of HSUPA users in the NodeB, which is defined by the NodeBHsupaMaxUserNum RNW parameter.

This describes the maximum number of subscribers supported by the HSUPA channel per
NodeB. It is set according to the product specification and actual number of sold HSUPA
licenses. Impact on the Network Performance: If the HSUPA user connection is rejected by
the NodeB, you can infer that the HSUPA licenses are insufficient. We need to apply for new
HSUPA licenses.

9.3. METRICS for HSxPA USERS Monitoring

9.3.1. Avg number of simultaneous HSxPA users

Average Number of HSDPA UEs = VS.RabNum.Mean
Average Number of HSUPA UEs = VS.HSUPA.DataUserNum.Mean

9.3.2. Peak number of HSxPA users in BTS

Peak Number of HSDPA UEs = VS.RabNum.Max
Peak Number of HSUPA UEs = VS.HSUPA.DataUserNum.Max


9.3.3. Avg HSxPA users License Utilization (%)

Average HSDPA users License Utilization [%] = VS.HSDPA.UE.Mean.Cell / 64 × 100
Thresholds: MINOR: >85% / MAJOR: >95%
This calculation is done per cell.

Average HSUPA users License Utilization [%] = VS.HSUPA.UE.Mean.Cell / 60 × 100
Thresholds: MINOR: >85% / MAJOR: >95%
This calculation is done per BTS.
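A small helper like the one below could evaluate both license-utilization KPIs against the MINOR/MAJOR thresholds. The 64 and 60 user limits are taken from the formulas above, while the mean counter values in the example are hypothetical.

```python
# Illustrative HSxPA license utilization check against the thresholds above.
def license_utilization(mean_users: float, licensed_users: int) -> float:
    return 100.0 * mean_users / licensed_users

def flag(util_pct: float) -> str:
    return "MAJOR" if util_pct > 95 else "MINOR" if util_pct > 85 else "OK"

samples = {
    # (counter, licensed limit): hypothetical mean number of users
    ("VS.HSDPA.UE.Mean.Cell", 64): 58.2,
    ("VS.HSUPA.UE.Mean.Cell", 60): 41.0,
}
for (counter, limit), mean_users in samples.items():
    util = license_utilization(mean_users, limit)
    print(f"{counter}: {util:.1f}% of {limit} -> {flag(util)}")
```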

9.4. HSxPA USERS Performance Analysis and Optimization

HS-DSCH MAC-d flow can be allocated in the cell if the number of currently allocated HS-DSCH MAC-d
flows per cell/cell group/NodeB is lower than the maximum indicated by the NodeB.

9.4.1. Parameters Optimization


It is important to stress that HSUPA parameter settings are closely associated with the implementation. For example, the reference E-TFCIs and their respective power offsets are closely linked to the BLER target, PL-nonmax, the grant management mechanism and the typical applications. Operators may not have the option to modify system parameters. Only those parameters which can significantly impact HSUPA performance or are important to understand system behavior are listed in this section.
E-TFCI and Power Offset
A list of reference E-TFCIs and the power offset values the UE is expected to use for these reference E-TFCIs. The UE is expected to compute power offset values for the rest of the E-TFCIs in the E-TFCI Table. The reference E-TFCI is indicated as an index that is mapped to the actual transport block size.
HSUPA Configuration & Implementation
SRB Mapping
o Mapping of SRB on E-DCH utilizes resources more efficiently
SI & Happy Bit
o Node-B should attempt to keep gap between SG and UE transmission rate as small as possible
o Node-B may choose to use SI for this purpose
o With only one logical channel for user data (best effort data) use of SI in scheduling is limited
HARQ Profile
o To take advantage of all re-transmissions time taken to perform all retransmissions should be
less than Timer Status Prohibit
HSUPA Basic Tests
Main Test Objectives: Evaluate throughput performance of the HSUPA system; investigate latency performance of the HSUPA system.
Proposed Basic Tests:
Throughput tests:
o Single user near cell, for benchmarking
o Single user far cell, for cell edge performance
o Single user in mobility
o Multiple HSUPA users, for cell throughput performance
Latency tests:
o to measure the RTT offered by HSUPA capable system
HSUPA Advanced Tests
Main Test Objectives: Evaluate the impact of higher UL data rates on R99 services; study the UTRAN resource control mechanism in the presence of HSUPA; and evaluate the impact of existing services on HSUPA.
Impact on AMR service:
o To measure the impact on UE transmit power in the presence of HSUPA when connected
o To assess impact of HSUPA on AMR call setup performance
Assessment of Resource Control mechanisms:
o To assess the UTRAN admission/congestion control behavior in presence of R99 and HSUPA
loading
o To assess the impact of R99 services on HSUPA connections

Key Topics:
Mapping Signaling Radio Bearers on E-DCH is more efficient and reduces latency
It is important to understand the impact of HSUPA users on RoT
o Increased RoT leads to higher transmit power for other services
o The impact of increased RoT on admission/congestion control must be investigated
o By reducing HSUPA grant in response to R99 call originations, impact on R99 services can be
minimized
Interaction between Ping application and grant allocation mechanism impacts Ping latency
performance
Mechanism to limit hardware resources for HSUPA users, especially in soft handover,
impacts HSUPA performance
Challenges
It is important to have a good knowledge of HSUPA and to understand the parameter settings and the implementation details well in order to properly evaluate the performance and optimize the system. Some parameters must be seen in view of the particular implementation.
Proper test methodologies, combined with specific tests, shall be used to evaluate the impact of HSUPA deployment.
It is crucial to measure the impact on Rise-over-Thermal.
Also crucial is the understanding of the impact of increased UL load on existing services.
The following table shows the parameters in Huawei for HSUPA

9.4.2. RF optimization
There are four main factors that determine HSUPA throughput limit
UL interference, measured as Rise over Thermal Noise (RoT)
Number of hardware resources or Channel Elements available for HSUPA
Backhaul bandwidth available
UE transmit power, which depends on path loss and UL interference
Node-B Traces and Network Counters providing information about UL:
Received Total Wideband Power (RTWP): provides an estimation of the total received
power at the Node-B receiver
Rise-over-Thermal Noise (RoT): provides an estimation of the increase of the overall noise
measured by the Node-B receiver, as compared to the thermal noise
Usage of Iub resources and Hardware: provides an estimation of the backhaul and processing HW required by the Node-B for HSUPA connections. Traces and Counters can be collected on different Node-Bs in the test area and should be separated per cell. Node-B traces have associated time stamps, while Network Counters are recorded as histograms, usually over a period of 15 minutes.

The objective of the far cell throughput test is to measure individual user throughput when the UE is forced to transmit at maximum transmit power. HSUPA throughput at very low RSCP values depends on several factors such as NodeB cable losses, noise figure, NodeB receiver implementation, etc.
Another important point to consider and keep in mind is that soft handover is supported in HSUPA; increasing the active set size through proper setting of the handover parameters can therefore improve the performance at the expense of hardware resources. The following points should also be taken into consideration while optimizing an HSUPA network:
R99 services have priority over HSUPA, so low grant values combined with an unhappy UE (indicated by the Happy Bit) may indicate a heavily loaded cell that does not have enough hardware or backhaul resources, or that has high RoT.
Transition from HSUPA to R99 may be delayed to avoid frequent back-and-forth switching. This feature depends on the infrastructure vendor implementation.
High data rates of HSUPA cause higher UL interference. Increased UL interference can lead to higher UE transmit power for AMR users. A higher UE transmit power requirement may in some cases impact AMR performance at the edge of the cell. The exact impact depends on network planning.

9.4.3. Node B DL Power Capacity Upgrade


Capacity Upgrade in this context involves increasing the number of HSxPA users through an upgrade of the corresponding license keys. On top of that, once traffic has been redistributed in the most efficient way between the available cells and BTSs, solutions will point towards additional sectors (intra- or inter-frequency) and, finally, additional sites.

10. RNC Load

The Claro Brasil network uses the BSC6810 RNC; at the time these guidelines were written, the loaded software version was V200R010.

RNC hardware configuration is one of the following types: minimum configuration, maximum configuration,
and other configurations.
Minimum Configuration
The RNC supports the minimum configuration of a single cabinet, that is, an RSR cabinet configured with
only an RSS subrack, as shown

Figure 22 RNC: BSC6810 minimum configuration


The maximum capacity of the RNC in minimum configuration is as follows:

6,000 Erlang voice traffic

384 Mbit/s (UL + DL) PS throughput

200 NodeBs and 600 cells


Maximum Configuration

The figure shows the maximum configuration of the RNC. In maximum configuration, the RNC consists of
two cabinets: one RSR cabinet and one RBR cabinet. The two cabinets hold six subracks: one RSS
subrack and five RBS subracks.

Figure 23 RNC: BSC6810 RNC maximum configuration


The maximum capacity of the RNC in maximum configuration is as follows:

51,000 Erlang voice traffic

3,264 Mbit/s (UL + DL) PS throughput

1,700 NodeBs and 5,100 cells

Item: Specification
Maximum number of cabinets: 2, that is, 1 RSR and 1 RBR
Maximum number of subracks: 6, that is, 1 RSS and 5 RBSs
Maximum voice traffic: 51,000 Erlang
Maximum PS data throughput: 3,264 Mbit/s (UL + DL)
Maximum number of NodeBs: 1,700
Maximum number of cells: 5,100
Busy Hour Call Attempts (BHCAs): 1,360,000

Other Configurations
Number of Subracks | Number of Cabinets | Voice Traffic (Erlang) | (UL + DL) PS Throughput (Mbit/s) | Number of NodeBs | Number of Cells
1 RSS + 1 RBS | 1 RSR | 15,000 | 960 | 500 | 1,500
1 RSS + 2 RBSs | 1 RSR | 24,000 | 1,536 | 800 | 2,400
1 RSS + 3 RBSs | 1 RSR + 1 RBR | 33,000 | 2,112 | 1,100 | 3,300
1 RSS + 4 RBSs | 1 RSR + 1 RBR | 42,000 | 2,688 | 1,400 | 4,200

The above table describes RNC configurations other than the minimum and maximum configurations. A configuration can be chosen as required.

10.1. AVAILABLE CAPACITY IN TERMS OF RNC LOAD


Logically, the RNC consists of the following subsystems: switching subsystem, service processing
subsystem, transport subsystem, clock synchronization subsystem, Operation and Maintenance (OM)
subsystem, power subsystem, and environment monitoring subsystem.

Figure 24 RNC logical structure


Functions of the RNC Switching Subsystem
The RNC switching subsystem mainly performs the switching of data in the RNC.
The switching subsystem has the following functions:

Provides internal Medium Access Control (MAC) switching for the RNC and enables convergence of
ATM and IP networks.

Provides port trunking for the RNC.

Connects subracks of the RNC.

Provides a service switching channel for the service processing subracks of the RNC.

Provides an OM channel for the service processing subracks of the RNC.

Distributes timing signals and RFN signals to the service processing boards of the RNC.
Functions of the RNC Service Processing Subsystem
The RNC service processing subsystem implements most RNC functions defined in the 3GPP protocols
and processes services of the RNC.
The service processing subsystem has the following functions:

User data transfer

System admission control

Radio channel ciphering and deciphering

Integrity protection

Mobility management

Radio resource management and control

Multimedia broadcast

Message tracing

Radio Access Network (RAN) information management


Service processing subsystems can be increased as required, thus expanding the service processing
capacity of the RNC.
Service processing subsystems communicate with each other through the switching subsystem to
perform coordination tasks such as handover.
Functions of the RNC Transport Subsystem
The RNC transport subsystem provides transmission ports and resources on the Iub, Iur, and Iu
interfaces for the RNC, processes transport network layer messages, and enables interaction between
RNC internal data and external data.
Providing Diverse Transmission Ports
The RNC transport subsystem provides the RNC with diverse transport solutions, supports ATM and IP
transport at the same time, and meets networking requirements of different transport networks.

The transport subsystem provides the following types of transmission port:

E1/T1
Channelized STM-1/OC-3 optical port
Unchannelized STM-1/OC-3c optical port


FE/GE electrical port

GE optical port
Processing Transport Network Layer Data
The RNC transport subsystem processes transport network layer messages.

In ATM transport mode, the transport subsystem terminates AAL2/AAL5 messages.

In IP transport mode, the transport subsystem terminates user plane UDP/IP messages and
forwards control plane IP messages.
Through the transport subsystem, the RNC shields the differences between transport network layer
messages within the RNC.
The transport subsystem terminates transport network layer messages at the interface boards. Then,
according to the configuration transfer table, the subsystem transfers user plane, control plane, and
management plane datagrams to the DPUb and SPUa boards in the RNC for processing.
RNC OM Subsystem
The RNC OM subsystem is responsible for the operation and maintenance of the RNC.

Components of the RNC OM Subsystem. The RNC OM subsystem consists of the LMT,
OMUa boards, SCUa boards, and OM modules on other boards.

Working Principles of the RNC OM Subsystem. The RNC OM subsystem works in dual-plane mode through the OM network of the RNC.

RNC OM Functions. The RNC OM functions enable routine and emergency maintenance of the
RNC.

RNC Active/Standby Workspaces. RNC active/standby workspaces consist of


active/standby workspaces of the BAM and those of FAM boards.

RNC Security Management. RNC security management consists of authority management,


operator information protection, File Transfer Protocol (FTP) transmission based on ciphering, and
encryption of the communication interface between the RNC and the Element Management
System (EMS).

RNC Log Management. RNC log management enables you to query the information about
the operation and running of the RNC, thus facilitating fault analysis and identification.

RNC Configuration Management. RNC configuration management enables configuration


and management of RNC data on the OM console (LMT or M2000).

RNC Performance Management. RNC performance management enables the RNC to


collect performance data.

RNC Alarm Management. RNC alarm management helps you monitor the running state of the RNC and informs you of faults in real time so that you can take timely measures.

RNC Loading Management. RNC loading management enables you to manage the process
of loading program and data files onto boards after the FAM boards (or subracks) start or restart.

BOOTP and DHCP on the Iub Interface. The RNC and NodeB support the BOOTP and
DHCP functions. By the BOOTP or DHCP function, a NodeB can automatically get an IP address
from an RNC and create an OM channel between the NodeB and the RNC. The BOOTP and DHCP
functions are applicable to ATM and IP transport on the Iub interface respectively.

RNC Upgrade Management. RNC upgrade refers to a process where the RNC is upgraded to
a later version.
RNC Clock Synchronization Subsystem
The RNC clock synchronization subsystem consists of the GCUa/GCGa boards in the RSS subrack and the
clock processing unit of each subrack. It provides timing signals for the RNC, generates the RFN, and
provides reference clocks for NodeBs.

RNC Clock Sources. The RNC has the following clock sources: Building Integrated Timing Supply
System (BITS) clock, Global Positioning System (GPS) clock, line clock, and external 8 kHz clock.

Structure of the RNC Clock Synchronization Subsystem. The RNC clock synchronization
subsystem consists of the clock module and other boards. The clock module is implemented by
the GCUa/GCGa board.

Timing Signal Processing in the RNC. The RNC processes external timing signals before sending the timing signals to the boards.

RFN Generation and Reception. RNC Frame Number (RFN) is applicable to node synchronization
for the RNC. The node synchronization frames that the RNC sends to the NodeB carry the RFN
signals.

RNC Power Subsystem


The RNC power subsystem serves the entire equipment. This subsystem adopts the dual-circuit backup
and monitor-at-each-point solution, thus featuring high reliability.

Power Supply Requirements of the RNC. This describes the power supply schemes of the RNC and
requirements for the AC power and DC power supplied to the RNC.

Layout of Power Switches on the RNC Cabinet. There is a fixed relation between outputs of the power
distribution box of the RNC cabinet and the intra-cabinet components.

Connections of Power Cables and PGND Cables in the RNC Cabinet. The power cables and PGND
cables in the RSR cabinet are connected in the same way as those in the RBR cabinet.
RNC Environment Monitoring Subsystem
The RNC environment monitoring subsystem automatically monitors the working environment of the
RNC and reports faults in real time.
The RNC environment monitoring subsystem consists of the power distribution box and the environment
monitoring parts in each subrack. This subsystem is responsible for power supply monitoring, fan
monitoring, cabinet door monitoring, and water monitoring.

RNC Power Supply Monitoring. RNC power monitoring is performed to monitor the power subsystem in
real time, report the running state of the power supply, and generate alarms when faults occur.

RNC Fan Monitoring. RNC fan monitoring is performed to monitor the fans in real time and adjust the speed
of the fans based on the temperature in the subrack.

RNC Cabinet Door Monitoring. RNC cabinet door monitoring is optional. When the RNC detects that the
front or back door of a cabinet is open, the RNC generates and reports an appropriate alarm.

RNC Water Monitoring. RNC water monitoring is optional. When the RNC detects water immersion, the
RNC generates and reports an appropriate alarm.

10.1.1. RNC Boards

The RNC boards refer to the OMUa board, SCUa board, SPUa board, GCUa board, GCGa board, DPUb
board, AEUa board, AOUa board, UOIa board, PEUa board, POUa board, FG2a board, GOUa board, PFCU
board, and PAMU board. The PFCU board is installed in the fan box. The PAMU board is installed in the
power distribution box. All the other boards are installed in the subracks.

RNC Board Compatibility. The RNC board compatibility defines whether the RNC boards of
different types can be configured in the same subrack at the same time.

GCUa/GCGa Board. The GCUa is shortened from the RNC General Clock Unit REV:a, and the
GCGa is a short form of the RNC General Clock with GPS Card REV:a. The GCUa/GCGa board is a
mandatory configuration. One RNC is configured with two GCUa/GCGa boards. The GCUa/GCGa
boards can be installed only in slots 12 and 13 in the RSS subrack.

OMUa Board. OMUa refers to RNC Operation and Maintenance Unit REV:a. One or two OMUa
boards are installed in the RNC cabinet. The OMUa boards can be installed only in slots 20 and 21,
or slots 22 and 23 in the RSS subrack. The OMUa board is twice the width of other boards.
Therefore, one OMUa board occupies two slots.

SPUa Board. SPUa refers to RNC Signaling Processing Unit REV:a. The SPUa board is a
mandatory configuration. In the RSS subrack, 2 to 10 SPUa boards are installed in slots 0 to 5 and
8 to 11. In the RBS subrack, 2 to 10 SPUa boards are installed in slots 0 to 5 and 8 to 11.

SCUa Board. SCUa refers to RNC GE Switching and Control Unit REV:a. The SCUa board is a
mandatory configuration. In both the RSS subrack and the RBS subrack, two SCUa boards are
installed in slots 6 and 7.

DPUb Board. DPUb refers to RNC Data Processing Unit REV:b. The DPUb board is a mandatory
configuration. For the RSS subrack, 2 to 10 DPUb boards are installed in slots 8 to 11 and slots 14
to 19. For the RBS subrack, 2 to 12 DPUb boards are installed in slots 8 to 19.

AEUa Board. AEUa refers to RNC 32-port ATM over E1/T1/J1 interface Unit REV:a. The AEUa
board is an optional configuration. It can be installed in either the RSS subrack or the RBS
subrack. The number of the AEUa boards to be installed depends on site requirements. In the RSS
subrack, the AEUa board can be installed in slots 14 to 19 and slots 24 to 27. In the RBS subrack,
the AEUa board can be installed in slots 14 to 27.

AOUa Board. AOUa refers to RNC 2-port ATM over channelized Optical STM-1/OC-3 Interface
Unit REV:a. The AOUa board is optional and can be installed in both the RSS subrack and the RBS
subrack. The number of the AOUa boards to be installed depends on site requirements. In the RSS

subrack, the AOUa board can be installed in slots 14 to 19 and slots 24 to 27. In the RBS subrack,
the AOUa board can be installed in slots 14 to 27.
FG2a Board. FG2a refers to RNC packet over electronic 8-port FE or 2-port GE Ethernet
Interface unit REV:a. The FG2a board is an optional configuration. It can be installed in both the
RSS subrack and the RBS subrack. The number of the FG2a boards to be installed depends on site
requirements. In the RSS subrack, the FG2a board can be installed in slots 14 to 19 and slots 24 to
27. In the RBS subrack, the FG2a board can be installed in slots 14 to 27.
GOUa Board. GOUa refers to RNC 2-port packet over Optical GE Ethernet Interface Unit REV:a.
The GOUa board is an optional configuration. It can be installed in both the RSS subrack and the
RBS subrack. The number of the GOUa boards to be installed depends on site requirements. In the
RSS subrack, the GOUa board can be installed in slots 14 to 19 and slots 24 to 27. In the RBS
subrack, the GOUa board can be installed in slots 14 to 27.
PEUa Board. PEUa refers to RNC 32-port Packet over E1/T1/J1 Interface Unit REV:a. The PEUa
board is an optional configuration. It can be installed in both the RSS subrack and the RBS
subrack. The number of the PEUa boards to be installed depends on site requirements. In the RSS
subrack, the PEUa board can be installed in slots 14 to 19 and slots 24 to 27. In the RBS subrack,
the PEUa board can be installed in slots 14 to 27.
UOIa Board. UOIa refers to RNC 4-port ATM/Packet over Unchannelized Optical STM-1/OC-3c
Interface unit REV:a. The UOIa board is an optional configuration. It can be installed in both the
RSS subrack and the RBS subrack. The number of the UOIa boards to be installed depends on site
requirements. In the RSS subrack, the UOIa board can be installed in slots 14 to 19 and slots 24 to
27. In the RBS subrack, the UOIa board can be installed in slots 14 to 27.
PFCU Board. PFCU refers to RNC Fan Control Unit. The PFCU board is installed in the front of the
fan box. Each fan box is configured with one PFCU board.
POUa Board. POUa refers to RNC 2-port packet over channelized Optical STM-1/OC-3 Interface
Unit REV:a. The POUa board is an optional configuration. It can be installed in both the RSS
subrack and the RBS subrack. The number of the POUa boards to be installed depends on site
requirement. In the RSS subrack, the POUa board can be installed in slots 14 to 19 and slots 24 to
27. In the RBS subrack, the POUa board can be installed in slots 14 to 27.
PAMU Board. PAMU refers to the Power Allocation Monitoring Unit. The PAMU is configured in
the power distribution box of the RNC cabinet. Each power distribution box holds one PAMU.

10.2. METRICS for RNC Load Monitoring


10.2.1. Main Processors Load

DPU average CPU load [%] = VS.DPU.CPULOAD.MEAN
DPU Max CPU load [%] = VS.DPU.CPULOAD.MAX
DPU average message packets load [%] = VS.DPU.MSGLOAD.MEAN
DPU Max message packets load [%] = VS.DPU.MSGLOAD.MAX

10.2.2. Secondary Processors Load

PIU average CPU load [%] = VS.PIU.CPULOAD.MEAN
PIU Max CPU load [%] = VS.PIU.CPULOAD.MAX
PIU average message packets load [%] = VS.PIU.MSGLOAD.MEAN
PIU Max message packets load [%] = VS.PIU.MSGLOAD.MAX

SPU average CPU load [%] = VS.SPU.CPULOAD.MEAN
SPU Max CPU load [%] = VS.SPU.CPULOAD.MAX
SPU average message packets load [%] = VS.SPU.MSGLOAD.MEAN
SPU Max message packets load [%] = VS.SPU.MSGLOAD.MAX
SPU average MBUF load [%] = VS.SPU.MBUFLOAD.MEAN
SPU Max MBUF load [%] = VS.SPU.MBUFLOAD.MAX
SPU average DOSMEM load [%] = VS.SPU.DOSMEM.MEAN
SPU Max DOSMEM load [%] = VS.SPU.DOSMEM.MAX

GCU average CPU load [%] = VS.GCU.CPULOAD.MEAN
GCU Max CPU load [%] = VS.GCU.CPULOAD.MAX
GCU average message packets load [%] = VS.GCU.MSGLOAD.MEAN
GCU Max message packets load [%] = VS.GCU.MSGLOAD.MAX

SCU average CPU load [%] = VS.SCU.CPULOAD.MEAN
SCU Max CPU load [%] = VS.SCU.CPULOAD.MAX
SCU average message packets load [%] = VS.SCU.MSGLOAD.MEAN
SCU Max message packets load [%] = VS.SCU.MSGLOAD.MAX
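As a simple illustration of how these board-load counters could be screened, the sketch below picks out the most loaded processor type from a counter export. The ".MEAN" counter names follow the MEAN/MAX naming pattern of the surrounding counters and should be verified against the actual counter reference; the 60% screening threshold and the values are only examples.

```python
# Illustrative screening of RNC processor load counters: report the highest
# average CPU load per board type. Counter names follow the MEAN/MAX naming
# pattern above and all values are hypothetical.
cpu_load_mean = {
    "VS.DPU.CPULOAD.MEAN": 46.0,
    "VS.PIU.CPULOAD.MEAN": 31.5,
    "VS.SPU.CPULOAD.MEAN": 72.3,
    "VS.GCU.CPULOAD.MEAN": 12.8,
    "VS.SCU.CPULOAD.MEAN": 25.1,
}

worst_counter = max(cpu_load_mean, key=cpu_load_mean.get)
print(f"Highest average CPU load: {worst_counter} = {cpu_load_mean[worst_counter]:.1f}%")

# Flag any board type above an example planning threshold of 60% average load.
overloaded = {k: v for k, v in cpu_load_mean.items() if v > 60.0}
print("Boards above 60% average CPU load:", overloaded or "none")
```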

11. Additional ADMISSION CONTROL Metrics

Generally, the failures due to admission control (AC) on the RRC, RAB and Channelisation Codes give a
good representation of how severe the radio loading is on the network performance. The following metrics
are important to evaluate this severity.

11.1. Setup Failures due to Admission Control


The Service level measurements can provide the first indication of Air interface limitations, i.e. RRC
and RAB setup Failure Rate resulting from CAC (DL power & codes and UL interference)
RRC.FailConnEstab.Cong
The number of RRC CONNECTION REJECT messages from the RNC to UEs in a cell due to network
congestion.
VS.RAB.FailEstabCS.Cong
Number of CS RABs unsuccessfully established due to congestion in the best cell.
The following KPIS could be used:

RRC Rejection Rate due congestion [%] = RRC.FailConnEstab.Cong / RRC.SuccConnEstab × 100

CS RAB failure Rate due congestion [%] = VS.RAB.FailEstabCS.Cong / VS.RAB.AttEstab.AMR × 100

PS RAB failure Rate due congestion [%] = (VS.RAB.FailEstPs.Power.Cong + VS.RAB.FailEstPs.ULCE.Cong) / (VS.RAB.AttEstabPS.Conv + VS.RAB.AttEstabPS.Str + …) × 100

Thresholds for all KPIs above: MINOR: >0% / MAJOR: >2%

11.2. Users in COMPRESSED MODE


Compressed mode has an effect on the cell capacity, coverage and quality because both the UE and the
BTS tend to increase their transmission power for compressed frames. To keep this problem in check, it
is possible to limit the number of UEs in compressed mode on a cell-by-cell basis:
[Huawei should advise if there is a way to limit the users in CM]
If the number of UEs in compressed mode has already reached the allowed maximum, the RNC does not
activate compressed mode even if it is needed. As concerns soft handover, the number of UEs in
compressed mode must be below the maximum limit in all cells participating in soft handover before the
RNC can activate compressed mode. Once
compressed mode has been activated, to secure the mobility of the UEs, it is possible to add a new cell
(soft handover branch) into the active set even though the number of UEs in compressed mode in the
cell in question should exceed the maximum.
Compressed mode measurements triggered for load reasons have a higher priority than measurements triggered for service reasons. Also, quality and coverage reason handovers can, if needed, take capacity from this pool of UEs in compressed mode.
The interference load of a cell is not taken into account for the decision on starting DCH or HSDPA compressed mode. The following counters help to give an idea of the use of compressed mode in the network:

Counter: Description
VS.CM.ULSF2.Act.Att: Number of Up Link SF-2 Compressed Mode Activation Attempts (Cell)
VS.CM.ULHLS.Act.Att: Number of Up Link HLS Compressed Mode Activation Attempts (Cell)
VS.CM.ULSF2.Act.Fail: Number of Up Link SF-2 Compressed Mode Activation Fails (Cell)
VS.CM.ULHLS.Act.Fail: Number of Up Link HLS Compressed Mode Activation Fails (Cell)
VS.CM.DLSF2.Act.Att: Number of Down Link SF-2 Compressed Mode Activation Attempts (Cell)
VS.CM.DLHLS.Act.Att: Number of Down Link HLS Compressed Mode Activation Attempts (Cell)
VS.CM.DLSF2.Act.Fail: Number of Down Link SF-2 Compressed Mode Activation Fails (Cell)
VS.CM.DLHLS.Act.Fail: Number of Down Link HLS Compressed Mode Activation Fails (Cell)

[To be completed]

12. Additional CONGESTION CONTROL Metrics

Load control is a Radio Resource Management (RRM) algorithm designed to keep the load in the Uu
interface stable and to avoid situations of overload. If the system gets overloaded, the radio resource
management returns the system quickly and controllably back to the normal load state defined by Radio
Network Planning (RNP).
Load control is performed separately for the uplink and the downlink. Since interference is a crucial and
limiting factor in any CDMA system, load control measures both uplink and downlink interference
periodically under one RNC. Each User Equipment (UE) and Base Transceiver Station (BTS) that transmits
in the network creates interference. The downlink interference is the same as the transmission power of
the cell in question.
Load control can be divided into preventive load control and overload control. As the names imply, the
basic difference between these two types of control lies in when the actions are performed: overload
control actions are performed after the cell has been overloaded (threshold x), whereas preventive
actions are performed before the cell becomes overloaded (threshold y).
There are only two overload actions: preventing further calls from being set up (admission control) and
throttling back Non-Real Time (NRT) traffic (packet scheduler).

12.1. RT over NRT


Packet-switched interactive and background (NRT) services can use high bit rates, especially in the downlink. It can therefore happen that the establishment of a conversational or streaming (RT) RAB requires resources to be taken from NRT services: their bit rates are downgraded and the released resources are allocated to the arriving RT resource request. This is the so-called RT over NRT feature.
The relevant counters are listed in the following section.

12.2. PRE-EMPTION
Preemption guarantees the success in the access of a higher-priority user by forcibly releasing the
resources of a lower-priority user.
After cell resource admission fails, the RNC performs the preemption function if the following conditions
are met:
The RNC receives an RAB ASSIGNMENT REQUEST message indicating that preemption is
supported.
The preemption algorithm switch Preempt algorithm switch is set to ON.
Preemption is applicable to the following cases:
Setup or modification of a service
Hard handover or SRNS relocation
Table: preemption capability per service and resource. For each preempting service (R99 service, HSDPA service, HSUPA service, MBMS service) and each resource type (code, power, CE, Iub bandwidth, number of users), the table lists the services that can be preempted (R99 service, HSUPA service, HSDPA service, R99 + HSPA combined service, MBMS service).
NOTE:
To enable resource-triggered preemption for MBMS services, the Mbms PreemptAlgoSwitch must be
ON.
The preemption procedure is as follows:
1. The preemption algorithm determines which radio link sets can be preempted.
   a. Choose SRNC UEs first. If no SRNC UE is available, choose DRNC UEs.
   b. Sort the UEs by user integrate priority.
   c. Specify the range of candidate UEs. Only the UEs with lower priority than the RAB to be established can be selected. If the Integrate Priority Configured Reference parameter is set to traffic class and the switch PreemptRefArpSwitch is on, only the UEs with higher ARP and lower priority than the RAB to be established can be selected. This applies to RABs of PS streaming and BE services.
   d. The RNC selects one or more UEs to match the resources needed by the RAB to be established.
   NOTE:
   For preemption triggered for the power reason, the preempted objects can be R99 users, R99 + HSDPA combined users or HSDPA RABs.
   For preemption triggered for the Iub bandwidth reason, the preempted objects can only be RABs.
   Combined services that are carried on channels of different types (that is, R99 + HSPA) cannot preempt the resources of other services.
   For preemption triggered for the code or Iub resource reason, only one user can be preempted. For preemption triggered for the power or credit resource reason, more than one user can be preempted.
2. The RNC releases the resources occupied by the candidate UEs.
3. The requested service directly uses the released resources to access the network without an admission decision.
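
As an illustration of the candidate selection in step 1, the sketch below filters and orders a simplified list of UE records. The priority encoding (a larger number meaning a lower user integrate priority) is an assumption of this sketch, not a Huawei convention.

```python
def preemption_candidates(ues, new_rab_priority):
    """Select candidate UEs for preemption (simplified view of step 1 above).

    ues: list of dicts with keys "is_srnc" (bool) and "priority" (int, larger
    value = lower user integrate priority in this sketch). Only UEs with lower
    priority than the RAB to be established are eligible.
    """
    srnc_ues = [u for u in ues if u["is_srnc"]]
    pool = srnc_ues if srnc_ues else ues          # choose SRNC UEs first
    eligible = [u for u in pool if u["priority"] > new_rab_priority]
    # lowest-priority UEs first, so they are preempted before higher-priority ones
    return sorted(eligible, key=lambda u: u["priority"], reverse=True)

ues = [{"is_srnc": True, "priority": 5},
       {"is_srnc": True, "priority": 9},
       {"is_srnc": False, "priority": 12}]
print(preemption_candidates(ues, new_rab_priority=6))   # -> only the priority-9 SRNC UE
```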

12.3. Huawei: Overload Control


After the UE access is allowed, the power consumed by a single link is adjusted by the single link power
control algorithm. The power varies with the mobility of the UE and the changes in the environment and
the source rate. In some situations, the total power load of the cell may be higher than the target load.
To ensure the system stability, Overload Control (OLC) must be performed.
Only power resources and interference could result in overload congestion. Hard resources such as the
equivalent number of users, Iub bandwidth, and credit resources do not cause the overload congestion.
ULOLC and DLOLC under the Cell LDC algorithm switch parameter control the functionality of the
overload congestion control algorithm.

Figure 25 Power Overload Congestion

If the current UL/DL load of an R99 cell is not lower than UL/DL OLC Trigger threshold for some
hysteresis (defined by DL State Trans Hysteresis threshold in DL; not configurable in UL), the
cell works in overload congestion state and the related overload handling action is taken. If the
current UL/DL load of the R99 cell is lower than UL/DL OLC Release threshold for some
hysteresis (defined by DL State Trans Hysteresis threshold in DL; not configurable in UL), the
cell comes back to normal state.

The HSDPA cell has the same uplink decision criterion as the R99 cell. The load in the downlink,
however, is the sum of load of the non-HSDPA power (transmitted carrier power of all codes not
used for HS-PDSCH or HS-SCCH transmission) and the GBP.
In addition to periodic measurement, event-triggered measurement is applicable to OLC.
If OLC_EVENTMEAS is ON, the RNC will request the initiation of an event E measurement on power
resource in the NodeB. In the associated request message, the reporting criterion is specified, including
key factors UL/DL OLC trigger hysteresis, UL/DL OLC trigger threshold and UL/DL OLC release

threshold. Then the NodeB checks the current power load in real time according to this criterion and
reports the status to the RNC periodically if the conditions of reporting are met.
NOTE:

The current policy for NodeBs is to preferentially allocate power to DCH users. It is not
recommended that Ptotal, the TCP, be used as the criterion for overload in the HSDPA cell. That is because
the NodeB can automatically adjust the power at the next scheduling period.

Owing to a 3GPP limitation, the NodeB cannot check the combined load of the non-HSDPA power and the GBP; for HSDPA cells, it is therefore recommended to set the OLC_EVENTMEAS switch to OFF.

12.4. Radio Bearer Downgrade and Release Due to Congestion


Please refer to the Annex: Load Management, section Load Reshuffling, for further information.

13. PERFORMANCE ALARMS and CAPACITY WEEKLY REPORT

This section describes one possible approach on how to get visibility into the Capacity Performance of
the network.
These two tools are intended to support the Optimizer in all phases of the Capacity Process as described
in Section 3, but especially in both the Capacity Metrics Monitoring and the Capacity Analysis.

13.1. Performance Alarms


Across the whole document, these Alarms have been introduced by means of the Thresholds (minor and
major) shown together with each relevant Metric to be monitored.
The Alarm value highlights the cells that already need some attention (MINOR ALARM) or urgent attention (MAJOR ALARM). Cells that are still under valid capacity conditions will not get any Alarm. The MINOR/MAJOR classification therefore refers to the importance of the impact, basically in terms of the number of AC/Congestion blocking events and high figures of each resource utilization/occupancy.
Besides the impact, we also consider the duration of the capacity issue, and the Alarm is classified accordingly as SHORT (if it is present on 1 or 2 days during the week) or LONG (if the issue lasted for 3 or more days during the week).
Hence the classification of the Alarms will use 2 different criteria:

Firstly, to decide the time scope of the Alarm: SHORT/LONG


As soon as the alarm is triggered the first time during the week of observation, it becomes a
SHORT Alarm. If during the same week, the alarm triggers 3 or more times, it becomes a LONG
Alarm.

Once the duration of the Alarm has been classified, the second criterion is applied to decide the impact: MINOR/MAJOR.
For this, the impact thresholds proposed across this document are applied.

As a summary, the Alarms (if any) will be assigned to each cell according to:
1. LONG MAJOR ALARM: the metric (Utilization/Blocking KPI/counter) exceeds its major threshold on 3 or more days of the week.
2. SHORT MAJOR ALARM: the metric (Utilization/Blocking KPI/counter) exceeds its major threshold on 1 or 2 days of the week.
3. LONG MINOR ALARM: the metric (Utilization/Blocking KPI/counter) exceeds its minor threshold on 3 or more days of the week.
4. SHORT MINOR ALARM: the metric (Utilization/Blocking KPI/counter) exceeds its minor threshold on 1 or 2 days of the week.

If the KPIs are calculated based solely on average values of the Resource Utilization, there is a possibility that the Alarms would not be triggered by the KPI but only by the AC/Congestion events caused by peaks in the Resource Utilization. This is why both triggering possibilities (Utilization KPIs and AC/Congestion events) are included in each Alarm.

In order to increase the correlation between Alarms based on events and KPIs, it is proposed to monitor not only the KPI average values but also certain percentiles (85% or 98%), so that the trend of the peaks can also be tracked.
Two percentiles are proposed:

Percentile 85% (can be approximated by adding 1 standard deviation (σ) to the average value)
Percentile 98% (can be approximated by adding 2 standard deviations (2σ) to the average value)

[The initial version of the Weekly Report will use both of them to determine which one provides a better understanding of the trends in the Resource utilization].
Accordingly, different thresholds should be used for the Average and Percentile evaluations of the KPI.
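
As a sketch of how the weekly classification could be automated, the snippet below takes the daily Floating BH values of one metric for one cell, derives the mean-plus-sigma percentile approximations described above, and assigns the SHORT/LONG MINOR/MAJOR label. The threshold values and the precedence of MAJOR over MINOR are illustrative assumptions, not Huawei definitions.

```python
import statistics

def percentile_approx(daily_values, n_sigma):
    """Approximate a high percentile as mean + n_sigma * stdev
    (1 sigma ~ 85th percentile, 2 sigma ~ 98th percentile)."""
    return statistics.fmean(daily_values) + n_sigma * statistics.pstdev(daily_values)

def weekly_alarm(daily_values, minor_thr, major_thr):
    """Classify one metric for one cell over one week of daily values."""
    major_days = sum(v > major_thr for v in daily_values)
    minor_days = sum(v > minor_thr for v in daily_values)
    if major_days >= 3:
        return "LONG MAJOR"
    if major_days >= 1:
        return "SHORT MAJOR"
    if minor_days >= 3:
        return "LONG MINOR"
    if minor_days >= 1:
        return "SHORT MINOR"
    return None  # no alarm: the cell is still under valid capacity conditions

# Example: DL TX power utilisation (%) at the Floating BH, 7 days, illustrative thresholds
week = [62, 71, 88, 93, 90, 75, 68]
print(weekly_alarm(week, minor_thr=85, major_thr=92))   # -> SHORT MAJOR (1 day above 92 %)
print(round(percentile_approx(week, 2), 1))             # -> mean + 2*sigma of the daily values
```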
With this definition of the Alarms, the optimizer can focus on the Alarms following the order used in the table above: first on the cells with the highest number of Long Major Alarms (one per monitored Resource), which point to a more impactful and persistent issue. These are usually problems already known by the optimizer. Likely the solution was decided some time ago (when the Alarm was a Short Major one) and the optimizer just keeps track of its implementation.
Then, the new Short Major Alarm cases should be addressed. These are the urgent cases to pay attention to every week. Some of them can be related to special events (concerts, ...) or a genuinely new problem. Some of them will be solved quickly, while others will need a long-term solution. In the latter case, depending on the type of short-term solution that can be applied, these alarms will become Long Major or Long Minor (the most desirable outcome).
Minor Alarms can be flags of potential upcoming issues. They do not require urgent attention but should be monitored and assessed on a regular basis.
We suggest running the report for all 7 days of the week.
[To be discussed internally in Claro: the advantages/disadvantages of running it only for the 5 working days]

13.2. Capacity Weekly Report


Capacity Performance Alarms (as well as others defined for the rest of Performance Areas: Accessibility,
Retainability, Integrity, Availability, etc.) can be used efficiently in many different ways.

One suggestion is to automate their collection and delivery to the whole Optimization Team by email every morning. This way, they will trigger a daily supervision and analysis of the main aspects of Performance over the main offenders (worst cells).
Another suggestion is to produce a Weekly Report (in Excel, for instance) for a more long-term tracking of the Capacity Performance in the network. It will include:

In the main TAB, for each pair (cell, week) and for each resource, the highest value of the week (out of the 7 daily values) for both the resource utilization and the blockings (colored according to the type of Alarm).
[In order to simplify this summary TAB, we could also remove all metric values from this TAB and just leave the list of Alarms, which can then be analyzed in the rest of the TABs.]

In the rest of the TABs, one per monitored Resource, for each pair (cell, day), most KPIs defined in this document will be shown (both utilization and blocking ratios) evaluated at the Floating BH, meaning that for each KPI the highest value of the day will be shown.

Both kinds of TABs, when accumulated every week, will allow us to monitor the trends in all Capacity aspects discussed in this document (DL TX Power, UL RX Power, OVSF Codes, CEs, Iub Utilization, number of HS users, ...). These trends should be taken into account for forecasting purposes.
The idea behind the per-Resource TABs is also to provide the optimizer with a wider vision of the cell behavior on a daily basis, besides the executive summary per week in the main TAB. This additional per-day data will help to identify the days causing the Alarms to be triggered, so further analysis can be conducted in SMART for those specific days. It will also be useful to have an order of magnitude for the maximum number of events (AC/Congestion) per day, giving a better picture of the importance of the problem.
In case the idea is to be implemented, this section will be completed with all needed flowcharts
describing the exact way to produce each alarm and prepare the Report.

14. REFERENCES

[1] WCDMA (UMTS) Deployment Handbook: Planning and Optimization Aspects. Christophe Chevallier, Christopher Brunner, Andrea Garavaglia, Kevin P. Murray, Kenneth R. Baker (all of QUALCOMM Incorporated, California, USA). John Wiley & Sons, 2006.
[2] Radio Network Planning and Optimisation for UMTS. Jaana Laiho and Achim Wacker (both of Nokia Networks, Nokia Group, Finland) and Tomas Novosad (Nokia Networks, Nokia Group, USA). John Wiley & Sons, 2006.
[5] Introduction to UMTS Optimization. Wray Castle, 2004.
[6] HED 5.5. NodeB Documentation (V100R010_06).
[7] RAN6.1 Feature Description.
[8] RAN10.0 Network Optimization Parameter Reference-20080329-A-1.0.
[9] NodeB WCDMA V100R010C01B051 Performance Counter Reference.
[10] Function List and Description of Huawei UMTS RAN10[1].0 V1.7 (20080827).

A. ANNEX I: LOAD MANAGEMENT IN Huawei

The load control algorithm is built into the RNC. The input of load control comes from all measurement
information of the NodeB.

Figure 26 Load Control Algorithm

Load control has the following sub-features:

Potential User Control (PUC)


The function of PUC is to balance traffic load among inter-frequency cells. The RNC uses PUC to
modify cell selection and reselection parameters and broadcast them through system
information. In this way, UEs are led to cells with light load. The UEs may be in idle mode,
CELL_FACH state, CELL_PCH state, or URA_PCH state.
Call Admission Control (CAC)
The function of CAC is to decide whether to accept resource requests from UEs, such as access,
reconfiguration, and handover requests, according to the resource status of the cell.
Intelligent Access Control (IAC)
The function of IAC is to increase the access success rate with the current QoS guaranteed
through rate negotiation, queuing, preemption, and Directed Retry Decision (DRD).
Intra-frequency Load Balancing (LDB)
The function of intra-frequency LDB is to balance the cell load between intra-frequency
neighboring cells for the purpose of better utilization of resources.
Load Reshuffling (LDR)
The function of LDR is to reduce the load of a cell when the available resources of the cell reach
the specified alarm threshold. The purpose of LDR is to increase the access success rate in the
following ways:
Inter-frequency load handover
Code reshuffling
BE service rate reduction
AMR voice service rate reduction
Uncontrolled real-time traffic QoS renegotiation
CS inter-RAT load handover
PS inter-RAT load handover
MBMS power reduction
Overload Control (OLC)
The function of OLC is to reduce the cell load rapidly by restricting the Transport Format (TF) of
the BE service or releasing the connections of UEs when the cell is overloaded. The purpose of
OLC is to ensure the stability of the system and the QoS of most UEs.

Each of the load control algorithms involves three factors: measuring, triggering, and controlling. Valid
measurement is the prerequisite for effective control.

A.a.

Priority Involved in Load Control

The priority consists of RAB integrate priority, user integrate priority, and user priority.

A.a.a. RAB Integrate Priority

RAB Integrate Priority is mainly used in load control algorithms.


The values of RAB Integrate Priority are set according to the Integrate Priority Configured
Reference parameter as follows:

If Integrate Priority Configured Reference is set to Traffic Class, the integrate priority abides
by the following rules:

If Integrate Priority Configured Reference is set to ARP, the integrate priority abides by the
following rules:
NOTE:
ARP and THP are carried in the RAB ASSIGNMENT REQUEST message, and they are not configurable on
the RNC LMT

A.a.b. User Integrate Priority


For multiple-RAB users, the integrate priority of the user is based on the service of the highest priority.
User integrate priority is mainly used in load control algorithms.

A.a.c. User Priority


There are three levels of user priority (1, 2, and 3), which are denoted as gold (high), silver (middle) and
copper (low) users. The relationship between user priority and ARP is configurable, and the typical
relation is shown in following table:

NOTE:
ARP 15 is always the lowest priority and is not configurable. It corresponds to user priority 3 (copper).
If ARP is not received from messages of Iu interface, the user priority is regarded as copper.
The levels of user priority are mainly used to provide different QoS for different users, for example,
setting different GBR values according to the level of users for BE service.
The GBR of BE services are configurable. According to the traffic class, priority of users, and bearer type
(DCH or HSPA), the different values of GBR are configured through the SET USERGBR command.
Changes on the mapping between ARP and user priority have an influence on the following features:

HSDPA

HSUPA

AMR

AMR-WB

Iub overbooking

A.b.

Load Measurement

The algorithms of load control such as OLC and CAC use load measurement values in the uplink and the
downlink. A common Load Measurement (LDM) algorithm is required to control load measurement in the
uplink and the downlink, which makes the algorithm relatively independent.

Measurement Quantities and Procedure: The NodeB and the RNC perform measurements and filtering based on the parameter settings. The statistics obtained after the measurements and filtering serve as the data input into the algorithms of load control.

Filtering of Load Measurement: For most measurement quantities, the NodeB performs layer 3 filtering on original measurement values, and the RNC performs smooth filtering on the values reported from the NodeB. Provided Bit Rate (PBR) measurement, however, does not use alpha filtering on the NodeB side.

Auto-Adaptive Background Noise Update: The UL background noise is easily affected by temperature. Auto-adaptive background noise update is added to the LDM algorithm to ensure that the configured value of the background noise constantly represents the real situation.

A.b.a. Measurement Quantities and Procedure


The NodeB and the RNC perform measurements and filtering based on the parameter settings. The
statistics obtained after the measurements and filtering serve as the data input into the algorithms of
load control.

A.b.a.a. Major Measurement Quantities


The major measurement objects of the LDM are as follows:

Uplink Received Total Wideband Power (RTWP)

Downlink Transmitted Carrier Power (TCP)

TCP of all codes not used for HS-PDSCH, HS-SCCH, E-AGCH, E-RGCH and E-HICH transmission
(non-HSDPA power)

Provided Bit Rate (PBR) on HS-DSCH

HS-DSCH required power (also called GBP: GBR required power)


A.b.a.b. LDM Procedure

Figure 27 LDM procedure.


Based on the measurement parameters set on the NodeB LMT, the NodeB measures the major
measurement quantities and then obtains original measurement values. After layer 3 filtering on the
NodeB side, the NodeB reports the cell measurement values to the RNC.
Based on the measurement parameters set on the RNC LMT, the RNC performs smooth filtering on the
measurement values reported from the NodeB and then obtains the measurement values which further
serve as the data input into the algorithms of load control.

A.b.b. Filtering of Load Measurement


For most measurement quantities, the NodeB performs layer 3 filtering on original measurement values,
and the RNC performs smooth filtering on the values reported from the NodeB. Provided Bit Rate (PBR)
measurement, however, does not use alpha filtering on the NodeB side.

Figure 28 Measurement model at the physical layer


A is the sampling value of the measurement.
B is the measurement value after layer 1 filtering.
C is the measurement value after layer 3 filtering.
C' is another measurement value (if any) for measurement evaluation.

D is the reported measurement value after measurement evaluation on the conditions of


periodic measurement and event-triggered measurement.
Layer 1 filtering is not standardized by the protocols; it depends on the vendor equipment. Layer 3 filtering is standardized, and the filtering effect is controlled by a higher layer. Alpha filtering, calculated with the formula below, applies to layer 3 filtering.

Fn = (1 − α) × Fn-1 + α × Mn

where
Fn is the new measurement value after filtering.
Fn-1 is the last measurement value after filtering.
Mn is the latest measurement value from the physical layer.
α = (1/2)^(k/2), where k is defined by the UL/DL basic common measure filter coef parameter.
When α is set to 1, that is, k = 0, no layer 3 filtering is performed.
The larger the coefficient k, the smaller the impact of the physical-layer measurement value on the value after layer 3 filtering, and the less susceptible the network layer is to the physical-layer measurement value.

A.b.b.a. Smooth Window Filtering on the RNC Side


After the RNC receives the measurement report, it filters the measurement value with the smooth
window.
Assuming that the reported measurement value is Qn and that the size of the smooth window is N, the filtered measurement value is

F'n = (1/N) × Σ(i=0..N-1) Qn-i

Delay susceptibilities of PUC, CAC, LDR, and OLC to common measurement are different. The LDM
algorithm must apply different smooth filter coefficients and measurement periods to those algorithms;
thus, they can get expected filtered values.
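
The two filtering stages can be illustrated with a short Python sketch: the NodeB-side layer 3 alpha filter from the formula above, followed by the RNC-side sliding window average. The values of k and of the window length are example settings only.

```python
from collections import deque

def alpha_filter(samples, k):
    """Layer 3 filtering: Fn = (1 - a) * Fn-1 + a * Mn, with a = (1/2) ** (k / 2).
    k = 0 gives a = 1, i.e. no filtering, as noted above."""
    a = 0.5 ** (k / 2.0)
    filtered, previous = [], None
    for m in samples:
        previous = m if previous is None else (1.0 - a) * previous + a * m
        filtered.append(previous)
    return filtered

def smooth_window(reported, n):
    """RNC-side smoothing: average of the last N reported values."""
    window, out = deque(maxlen=n), []
    for q in reported:
        window.append(q)
        out.append(sum(window) / len(window))
    return out

# Example: raw physical-layer samples -> NodeB layer 3 filtering -> RNC smooth window
raw = [10.0, 12.5, 11.0, 15.0, 14.0, 13.5]
reported = alpha_filter(raw, k=2)      # k: UL/DL basic common measure filter coef (example)
smoothed = smooth_window(reported, 3)  # 3: e.g. a moving average filter length (example)
print([round(v, 2) for v in smoothed])
```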

Algorithm   Smooth Window Length
PUC         PUC moving average filter length
CAC         UL CAC moving average filter length / DL CAC moving average filter length
LDR         UL LDR moving average filter length / DL LDR moving average filter length
OLC         UL OLC moving average filter length / DL OLC moving average filter length

NOTE:
Different from other measurement quantities, GBP measurements have the same smooth window length
in all related algorithms. The filter length for GBP measurement is defined by the HSDPA need power
filter len parameter.

Parameter Name                        Parameter ID        Value Range   Recommended Value
PUC moving average filter length      PucAvgFilterLen     1 to 32       32
UL LDR moving average filter length   UlLdrAvgFilterLen   1 to 32       25
DL LDR moving average filter length   DlLdrAvgFilterLen   1 to 32       25
UL CAC moving average filter length   UlCACAvgFilterLen   1 to 32
DL CAC moving average filter length   DlCACAvgFilterLen   1 to 32
UL OLC moving average filter length   UlOLCAvgFilterLen   1 to 32       25
DL OLC moving average filter length   DlOLCAvgFilterLen   1 to 32       25

Description:
These parameters specify the length of the smooth filter window applied on the RNC side to the reported measurement values. The greater the value of each parameter, the greater the smoothing effect, but the lower the capability to track signal changes.
A.b.b.b. Reporting Interval
The interval at which the NodeB reports each measurement quantity to the RNC is configurable. The
following table lists the parameters used to set the reporting intervals for the measurement quantities.
A.b.b.c. Provided Bit Rate
The Provided Bit Rate (PBR) measurement quantity is also reported by the NodeB to the RNC. Different
from other power measurement quantities, PBR does not undergo alpha filtering on the NodeB side.
For details of PBR, refer to 3GPP 25.321.
The following table lists the parameters that are used to set the PBR reporting intervals.

A.b.c. Auto-Adaptive Background Noise Update


The UL background noise is easily affected by temperature. Auto-adaptive background noise update is
added to the LDM algorithm to ensure that the configured value of the background noise can constantly
represent the real situation.
The UL background noise is easily affected by temperature.

If the temperature in the equipment room is constant and the background noise changes little, the
background noise does not need to be adjusted after the initial value is set.

If the temperature in the equipment room varies with the outside temperature, the background
noise changes greatly and must be updated.

Figure 29 Procedure for updating background noise

The time period of the background noise update can be specified by setting the parameters Background Noise Update Start Time and Background Noise Update End Time. During this period, background noise updating is performed if the Auto-Adaptive Background Noise Update Switch is set to ON.
The measured value of the background noise is valid only when the current equivalent number of users in the cell is smaller than the value of Equivalent User Number Threshold for Background Noise.
The time that one background noise update takes is specified by setting Background Noise Update Continuance Time.
The discarding threshold for abnormal RTWP during the update is specified by setting Background Noise Abnormal Threshold. This setting avoids temporary burst interference and RTWP peaks.
The variation of the RTWP that triggers the background noise update is specified by setting Background Noise Update Trigger Threshold. This setting avoids frequent updates over the Iub interface.

A.b.d. Potential User Control

In the WCDMA system, the mobility management of the UE in idle or connected mode is implemented by
cell selection and cell reselection. The Potential User Control (PUC) algorithm controls the cell selection
of a potential UE and prevents an idle UE from camping on a heavily loaded cell.
The PUC algorithm is available only after it is enabled, that is, after PUC under the Cell LDC algorithm
switch parameter is set to 1.
The RNC periodically monitors the downlink load of the cell and compares the measurement results with
the configured thresholds Load level division threshold 1 and Load level division threshold 2,
that is, load level division upper threshold and lower threshold.
If the cell load is higher than the load level division upper threshold plus the Load level
division hysteresis, the cell load is judged to be heavy.
If the cell load is lower than the load level division lower threshold minus the Load level
division hysteresis, the cell load is judged to be light.
Cell load is of three states: heavy, normal, and light, as shown

Figure 30 Cell load states

Based on the cell load, PUC works as follows:


If the cell load becomes heavy, PUC modifies cell selection and reselection parameters
and broadcasts them through system information. In this way, PUC leads UEs to the
neighboring cells with light load.
If the cell load becomes normal, PUC uses the cell selection and reselection parameters
configured on the RNC LMT.
If the cell load becomes light, PUC modifies cell selection and reselection parameters and
broadcasts them through system information. In this way, PUC leads UEs to this cell.
The parameters related to cell selection and cell reselection are Qoffset1(s,n) (load level offset),
Qoffset2(s,n) (load level offset), and Sintersearch (start threshold for inter-frequency cell reselection).
The NodeB periodically reports the total TCP of the cell, and the PUC periodically triggers the following
activities:
Judging the cell load level based on the total TCP
Configuring Sintersearch, Qoffset1(s,n), and Qoffset2(s,n) based on the cell load level
Updating the parameters of system information SIB3 and SIB11
The adjustments are based on the characteristics of inter-frequency cell selection and reselection:

Sintersearch
When this value is increased by the serving cell, the UE starts inter-frequency cell
reselection ahead of schedule.
When this value is decreased by the serving cell, the UE delays inter-frequency cell
reselection.
Qoffset1(s,n): applies to R (reselection) rule with CPICH RSCP
When this value is increased by the serving cell, the UE has a lower probability of
selecting a neighboring cell.
When this value is decreased by the serving cell, the UE has a higher probability of
selecting a neighboring cell.
Qoffset2(s,n): applies to R (reselection) rule with CPICH Ec/I0
When this value is increased by the serving cell, the UE has a lower probability of
selecting a neighboring cell.
When this value is decreased by the serving cell, the UE has a higher probability of
selecting a neighboring cell.
According to the load status of the current cell, the cell reselection parameters are adjusted. The
configuration of Sintersearch is oriented to the current cell. Its value is related to the load of the current
cell.
The configuration of Qoffset1 and Qoffset2 is oriented to the neighboring cells. Their values are related
to the load of the current cell and the load of the neighboring cells.
Neighboring Cell Load / Current Cell Load -> Change of Q'offset1 ; Change of Q'offset2
Light / Light:   Q'offset1 = Qoffset1 ; Q'offset2 = Qoffset2
Light / Normal:  Q'offset1 = Qoffset1 ; Q'offset2 = Qoffset2
Light / Heavy:   Q'offset1 = Qoffset1 + Qoffset1 offset 1 ; Q'offset2 = Qoffset2 + Qoffset2 offset 1
Normal / Light:  Q'offset1 = Qoffset1 ; Q'offset2 = Qoffset2
Normal / Normal: Q'offset1 = Qoffset1 ; Q'offset2 = Qoffset2
Normal / Heavy:  Q'offset1 = Qoffset1 + Qoffset1 offset 1 ; Q'offset2 = Qoffset2 + Qoffset2 offset 1
Heavy / Light:   Q'offset1 = Qoffset1 + Qoffset1 offset 2 ; Q'offset2 = Qoffset2 + Qoffset2 offset 2
Heavy / Normal:  Q'offset1 = Qoffset1 + Qoffset1 offset 2 ; Q'offset2 = Qoffset2 + Qoffset2 offset 2
Heavy / Heavy:   Q'offset1 = Qoffset1 ; Q'offset2 = Qoffset2
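
The table logic can be summarised in a short sketch. The function below returns the Qoffset value that PUC would broadcast, given the load states of the current and neighbouring cells; offset_1 and offset_2 stand for the Qoffset1/2 offset 1 and offset 2 parameters, and the numeric values in the example are purely illustrative.

```python
def puc_adjusted_qoffset(qoffset, current_load, neighbor_load, offset_1, offset_2):
    """Adjust Qoffset1/2(s,n) according to the cell load states, per the table above.

    Loads are "light", "normal" or "heavy". offset_1 is applied when the current
    cell is heavy and the neighbour is not; offset_2 when the neighbour is heavy
    and the current cell is not; otherwise the configured Qoffset is kept.
    """
    if current_load == "heavy" and neighbor_load != "heavy":
        return qoffset + offset_1
    if neighbor_load == "heavy" and current_load != "heavy":
        return qoffset + offset_2
    return qoffset

# Example (illustrative values): serving cell heavy, neighbour light
print(puc_adjusted_qoffset(qoffset=0, current_load="heavy", neighbor_load="light",
                           offset_1=-4, offset_2=6))   # -> -4
```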

A.b.e. Intelligent Access Control

The procedure of UE access includes the procedures of RRC connection setup and RAB setup

Figure 31 Service access procedure


As shown in Figure 31 Service access procedure, the procedure of UE access includes the procedures of
RRC connection setup and RAB setup. The success in the RRC connection setup is one of the
prerequisites for the RAB setup.
During the RRC connection processing, if resource admission fails, DRD and redirection
apply.
During the RAB processing, the RNC performs the following steps:
Performs rate negotiation based on the service requested by the UE
Performs cell resource admission decision. If the admission is passed, UE access is
granted. Otherwise, the RNC performs the next step.
Performs preemption attempt. If the preemption is successful, UE access is granted. If the
preemption fails or is not supported, the RNC performs the next step.
Performs queuing attempt. If the queuing is successful, UE access is granted. If the
queuing fails or is not supported, the RNC performs the next step.
Performs DRD. If the DRD is successful, UE access is granted. Otherwise, the RNC
performs the next step.
Denies UE access.
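
The RAB-phase cascade above can be sketched as a simple chain of attempts. Each callable stands in for the corresponding RNC procedure (admission, preemption, queuing, DRD) and is assumed to return True on success or None when the step is not supported; rate negotiation is assumed to have been performed beforehand.

```python
def iac_rab_access(admission, preemption, queuing, drd):
    """Walk the IAC steps in order; grant access at the first successful step."""
    steps = (("admission", admission), ("preemption", preemption),
             ("queuing", queuing), ("DRD", drd))
    for name, step in steps:
        if step is None:
            continue                      # step not supported for this request
        if step():
            return f"access granted via {name}"
    return "access denied"

# Example: admission fails, preemption not supported, queuing eventually succeeds
print(iac_rab_access(admission=lambda: False, preemption=None,
                     queuing=lambda: True, drd=lambda: False))
```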

A.b.f. RRC Connection Processing


When a new service accesses the network, an RRC connection must be set up first. If the RRC connection
request is denied, Directed Retry Decision (DRD) is performed. If DRD also fails, RRC redirection can be
performed to direct the UE to an inter-frequency or inter-RAT cell through cell reselection.
A.b.f.a. Signaling Radio Bearer Admission Decision
During RRC connection processing, the RNC makes signaling radio bearer admission decision first.

If the admission request is accepted, RAB processing takes place next.


If the admission request is denied, DRD and redirection take place next.
RRC DRD
During the RRC connection setup, if the UE fails to access the cell, the RNC chooses a suitable inter-frequency cell and directs the UE to try this cell.
Redirection
In the case of RRC DRD failure, the RNC includes an inter-frequency or inter-RAT redirection indication in
the RRC CONNECTION REJECT message. This message directs the UE to initiate an inter-frequency or
inter-RAT RRC connection setup after cell selection or reselection.

Figure 32 RRC DRD and redirection procedure

The procedure is divided into the following subprocedures:


RRC setup request
The UE sends an RRC CONNECTION REQUEST message to the RNC, requesting the RNC to
set up a signaling RB (SRB) on DCH.
RNC decision
After the RNC receives the request, the CAC algorithm decides whether an RRC connection
can be set up between the UE and the current cell.
If the RRC connection can be set up between the UE and the current cell, the RNC sends
an RRC CONNECTION SETUP message to the UE.
If the RRC connection cannot be set up between the UE and the current cell, the RNC
searches for a suitable cell from the candidate cell list of the UE.
If such a cell exists, the RNC indicates it to the UE through an RRC CONNECTION SETUP
message.
If such a cell does not exist, the RNC chooses another proper frequency or radio access
system such as GSM, and notifies the UE of it through the REDIRECTION IE in an RRC
CONNECTION REJECT message. The UE initiates an access request again at the specified
frequency or in the specified system.

A.b.g. RAB Setup Processing


RAB setup processing includes rate negotiation, Call Admission Control (CAC), preemption, queuing and
Directed Retry Decision (DRD).
A.b.g.a. Rate Negotiation
Rate negotiation includes the maximum expected rate negotiation and initial rate negotiation.
For the maximum and initial rates of AMR and AMR-WB voice services in the CS domain, refer to AMRC
and AMRC-WB.

Maximum Expected Rate Negotiation


Before the negotiation of the maximum expected rate, the Iu QoS negotiation function must be enabled,
that is, IU_QOS_NEG_SWITCH is set to 1. When setting up, modifying, or admitting a PS service
(conversational, streaming, interactive, or background service), the RNC and the CN negotiate the rate
according to the UE capability to obtain the maximum expected rate while ensuring a proper QoS.
Initial Rate Negotiation
For a non-real-time service in the PS domain, the RNC chooses an initial rate to allocate bandwidth for
the service before the cell resource request. The negotiation is based on the cell load information,
including:
Uplink and downlink radio bearer states of the cell
Minimum spreading factor supported
HSPA capability
When a BE service is set up or the UE state transits from CELL_FACH to CELL_DCH, the initial rate is
defined as follows:
When the RAB downsizing function is enabled (that is, RAB_Downsizing_Switch is set to
1), the negotiated rate will be available based on cell resource.
If the DCCC function is enabled, the actual initial access rate is the lower one between the
negotiated rate and the value of UL BE traffic Initial bit rate or DL BE traffic Initial
bit rate.
If the DCCC function is disabled (that is, DCCC_Switch is set to 0), the actual initial
access rate is the negotiated rate based on cell resource, and the lowest rate is 8 kbit/s.
When the RAB downsizing function is disabled (that is, RAB_Downsizing_Switch is set to
0),
If the DCCC function is enabled, the actual initial access rate is the value of UL BE traffic
Initial bit rate or DL BE traffic Initial bit rate.
If the DCCC function is disabled, the actual initial access rate is the maximum expected
rate.
Configuration Rule and Restriction:
The selection of the initial access rate for BE services takes the UL BE traffic Initial bit rate and DL BE
traffic Initial bit rate parameters into consideration only when the DCCC algorithm is enabled
(DCCC_SWITCH is set to 1).
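
The initial-rate rules for BE services can be condensed into a small helper. The sketch below assumes rates in kbit/s; negotiated_rate stands for the rate negotiated on the basis of cell resources, initial_bit_rate for UL/DL BE traffic Initial bit rate, and max_expected_rate for the negotiated maximum expected rate.

```python
def be_initial_access_rate(rab_downsizing_on, dccc_on, negotiated_rate,
                           initial_bit_rate, max_expected_rate):
    """Initial access rate (kbit/s) for a BE service, per the rules above."""
    if rab_downsizing_on:
        if dccc_on:
            # lower of the cell-resource negotiated rate and the configured initial bit rate
            return min(negotiated_rate, initial_bit_rate)
        # DCCC disabled: negotiated rate based on cell resources, with an 8 kbit/s floor
        return max(negotiated_rate, 8)
    if dccc_on:
        return initial_bit_rate
    return max_expected_rate

print(be_initial_access_rate(True, True, negotiated_rate=128,
                             initial_bit_rate=64, max_expected_rate=384))   # -> 64
```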
A.b.g.b. Preemption
Preemption guarantees the success in the access of a higher-priority user by forcibly releasing the
resources of a lower-priority user.
After cell resource admission fails, the RNC performs the preemption function if the following conditions
are met:
The RNC receives an RAB ASSIGNMENT REQUEST message indicating that preemption is
supported.
The preemption algorithm switch Preempt algorithm switch is set to ON.
Preemption is applicable to the following cases:
Setup or modification of a service
Hard handover or SRNS relocation

Table: preemption capability per service and resource. For each preempting service (R99 service, HSDPA service, HSUPA service, MBMS service) and each resource type (code, power, CE, Iub bandwidth, number of users), the table lists the services that can be preempted (R99 service, HSUPA service, HSDPA service, R99 + HSPA combined service, MBMS service).

NOTE:
To enable resource-triggered preemption for MBMS services, the Mbms PreemptAlgoSwitch must be
ON.
The preemption procedure is as follows:
1. The preemption algorithm determines which radio link sets can be preempted.
   a. Choose SRNC UEs first. If no SRNC UE is available, choose DRNC UEs.
   b. Sort the UEs by user integrate priority.
   c. Specify the range of candidate UEs. Only the UEs with lower priority than the RAB to be established can be selected. If the Integrate Priority Configured Reference parameter is set to traffic class and the switch PreemptRefArpSwitch is on, only the UEs with higher ARP and lower priority than the RAB to be established can be selected. This applies to RABs of PS streaming and BE services.
   d. The RNC selects one or more UEs to match the resources needed by the RAB to be established.
   NOTE:
   For preemption triggered for the power reason, the preempted objects can be R99 users, R99 + HSDPA combined users or HSDPA RABs.
   For preemption triggered for the Iub bandwidth reason, the preempted objects can only be RABs.
   Combined services that are carried on channels of different types (that is, R99 + HSPA) cannot preempt the resources of other services.
   For preemption triggered for the code or Iub resource reason, only one user can be preempted. For preemption triggered for the power or credit resource reason, more than one user can be preempted.
2. The RNC releases the resources occupied by the candidate UEs.
3. The requested service directly uses the released resources to access the network without an admission decision.
A.b.g.c. Queuing
After the admission of a service fails, the service request is put into a specific queue. During the time
defined by the Max queuing time length parameter, admission attempts for the service are made
periodically.
After the cell resource decision fails, the RNC performs the queuing if the RNC receives an RAB
ASSIGNMENT REQUEST message indicating the queuing function is supported and Queue algorithm
switch is set to ON.
The RNC configures 12 independent levels of maximum queuing time, that is, T1 to T12. Configuration of
Max queuing time length for different priorities of services is described in the following part.
If Integrate Priority Configured Reference is set to Traffic, that is, the traffic class serves as the reference for the integrate priority, then the maximum queuing time for different priorities is configured as shown in the following table:
Traffic Class    User Priority   Max queuing time length
Conversational   1 (gold)        T1
Conversational   2 (silver)      T2
Conversational   3 (copper)      T3
Streaming        1 (gold)        T4
Streaming        2 (silver)      T5
Streaming        3 (copper)      T6
Interactive      1 (gold)        T7
Interactive      2 (silver)      T8
Interactive      3 (copper)      T9
Background       1 (gold)        T10
Background       2 (silver)      T11
Background       3 (copper)      T12

If Integrate Priority Configured Reference is set to ARP, the maximum queuing time for different priorities is configured as shown in the following table:

User Priority   Traffic Class    Max queuing time length
1 (gold)        Conversational   T1
1 (gold)        Streaming        T2
1 (gold)        Interactive      T3
1 (gold)        Background       T4
2 (silver)      Conversational   T5
2 (silver)      Streaming        T6
2 (silver)      Interactive      T7
2 (silver)      Background       T8
3 (copper)      Conversational   T9
3 (copper)      Streaming        T10
3 (copper)      Interactive      T11
3 (copper)      Background       T12

The queuing metric is calculated through the following formula:

Pqueue = Tmax − Telapsed


where
Pqueue is the weight for the queuing service request. The service with the smallest value of
Pqueue undergoes admission attempt.
Telapsed is the time in milliseconds that the service request has queued. The value of T elapsed
can be obtained by the current time stamp minus the recorded queuing time stamp of the
service request.
Tmax is the maximum time (Max queuing time length) that the service request can be in
the queue. When the value of Telapsed is getting close to that of Tmax, the value of Pqueue is
approximate to the minimum value 0.
The queuing algorithm is triggered by the heartbeat timer (which is defined through Poll timer length). The specific process of the queuing algorithm is as follows:
The queuing algorithm first judges whether the queue is full, that is, whether the number of service requests in the queue exceeds the queue length defined by the Queue length parameter. It then proceeds as shown in the following table:

If the queue is...   Then the queuing algorithm...
Not full             Stamps this request with the current time.
                     Puts this request into the queue.
                     Starts the heartbeat timer if it is not started.
Full                 Checks whether there are requests whose priority levels are lower than the priority of the new request.
                     If yes, the queuing algorithm:
                       Checks the weights of these requests. If not all weights are the same, it rejects the request with the greatest weight.
                       Stamps the new request with the current time and then puts it into the queue.
                       Starts the heartbeat timer if it is not started.
                     If no, the queuing algorithm rejects the new request directly.

After the heartbeat timer expires, the queuing algorithm proceeds as follows:

Step   Action
1      Reject the request if the actual waiting time of the request (Telapsed) is longer than the value of Max queuing time length for the service.
2      Calculate the weights of all requests in the queue.
3      Choose the request with the smallest weight to attempt resource allocation.
       If the attempt is successful, the heartbeat timer is restarted for the next processing upon expiry of this timer.
       If the attempt fails, the queuing algorithm:
         Puts the service request back into the queue with the time stamp unchanged for the next attempt.
         Chooses the request with the smallest weight from the rest and performs another attempt, until admitting a request or rejecting all requests.
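
A compact sketch of the queuing behaviour described above is given below. Time stamps are plain seconds, try_admission is a placeholder for the actual admission attempt, and each request is a dict carrying its enqueue time stamp and its Max queuing time length (t_max).

```python
import time

def queue_weight(request, now):
    """Pqueue = Tmax - Telapsed: requests closer to their maximum queuing time
    get a smaller weight and are attempted first."""
    return request["t_max"] - (now - request["enqueued_at"])

def on_heartbeat(queue, try_admission, now=None):
    """Process the queue when the heartbeat (Poll timer) expires."""
    now = time.time() if now is None else now
    # 1. Reject requests whose waiting time exceeds their Max queuing time length
    queue[:] = [r for r in queue if now - r["enqueued_at"] <= r["t_max"]]
    # 2./3. Attempt the remaining requests in order of increasing weight
    for request in sorted(queue, key=lambda r: queue_weight(r, now)):
        if try_admission(request):
            queue.remove(request)   # admitted; the rest wait for the next heartbeat
            return request
        # attempt failed: the request stays queued with its time stamp unchanged
    return None

# Example: two queued requests, the one closest to its deadline is attempted first
q = [{"id": "ps_be", "enqueued_at": 100.0, "t_max": 10.0},
     {"id": "cs_amr", "enqueued_at": 104.0, "t_max": 5.0}]
print(on_heartbeat(q, try_admission=lambda r: r["id"] == "cs_amr", now=107.0))
```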

A.b.g.d. Directed Retry Decision

RAB Directed Retry Decision (DRD) is triggered when the blind handover to other inter-frequency cells is
performed after resource allocation fails in the RNC during the RAB setup.
The RAB DRD procedure is as follows:
1. The RNC makes a decision on the admission of the target inter-frequency cell for blind handover.
2. If the admission request is accepted, the DRD procedure is performed for the target inter-frequency cell for blind handover.
3. The RNC starts the radio link setup procedure to perform the inter-frequency handover.
4. The RNC starts the radio bearer setup procedure to complete the inter-frequency handover on the Uu interface and the service setup.
If step 2, 3 or 4 fails, the RNC performs repeated RAB DRD in another target inter-frequency cell for blind handover until the retry succeeds, until the retry in all such cells fails, or until the number of retries reaches the value of Max inter-frequency direct retry number.
NOTE:
After an HSPA service request is denied, the service falls back to the DCH and then re-attempts to access the network.
The RAB DRD to a target cell in another system (for example, GSM) for blind handover is similar. For details, refer to Inter-RAT Handover.
According to the cell type (R99 or R99+HSDPA), an HSDPA user accessing an R99 cell can be directed to an R99+HSDPA cell through DRD. According to the cell parameter R99 CS separation indicator or R99 PS separation indicator, an R99 user accessing an R99+HSDPA cell can be directed to an R99 cell through DRD.
RAN6.1 does not support inter-RAT DRD for RABs of combined services.
RAN6.1 does not support inter-RAT DRD for PS services.
RAN6.1 does not support inter-RAT DRD for HSPA services.

Whether the DRD action can be executed depends on the settings of the DRD algorithm switches. The following table describes the DRD algorithm switches applicable to different scenarios.

Scenario                                Switch                      Description
DRD switch                              DRD_SWITCH                  This is the primary DRD algorithm switch. The secondary DRD switches can be valid only when this switch is on.
Combined services                       COMB_SERV_DRD_SWITCH        DRD is applicable to combined services only when this switch is on.
RAB modification                        RAB_MODIFY_DRD_SWITCH       DRD is applicable to RAB modification only when this switch is on.
DCCC                                    RAB_DCCC_DRD_SWITCH         DRD is applicable to the traffic-volume-based DCCC procedure or UE state transition only when this switch is on.
HSDPA service                           HSDPA_DRD_SWITCH            DRD is applicable to HSDPA services only when this switch is on.
RAB setup                               RAB_SETUP_DRD_SWITCH        DRD is applicable to RAB setup only when this switch is on.
DCH to HSPA intra-frequency handover    INTRA_HO_D2H_DRD_SWITCH     DRD is applicable to intra-frequency soft handover and intra-frequency hard handover only when this switch is on.
DCH to HSPA inter-frequency handover    INTER_HO_D2H_DRD_SWITCH     DRD is applicable to inter-frequency handover only when this switch is on.
HSUPA service                           HSUPA_DRD_SWITCH            DRD is applicable to HSUPA services only when this switch is on.

A DRD action is executable only when all the related switches are on. For example, before an HSUPA
service is set up, the DRD_SWITCH, RAB_SETUP_DRD_SWITCH, and HSUPA_DRD_SWITCH must be
on.
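
The rule can be expressed directly in a couple of lines; the switch names come from the table above, and the required set for the example follows the HSUPA service setup case mentioned in the text.

```python
def drd_executable(switch_settings, required_switches):
    """A DRD action is executable only when every related switch is on."""
    return all(switch_settings.get(name, False) for name in required_switches)

switches = {"DRD_SWITCH": True, "RAB_SETUP_DRD_SWITCH": True, "HSUPA_DRD_SWITCH": False}
# HSUPA service setup: primary, RAB setup and HSUPA switches must all be on
print(drd_executable(switches, ("DRD_SWITCH", "RAB_SETUP_DRD_SWITCH", "HSUPA_DRD_SWITCH")))
# -> False
```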

A.c.

Call Admission Control

Call Admission Control (CAC) is used to determine whether the system resources are enough to accept a
new user's access request. If the system resources are enough, the new user's access request is
accepted; otherwise, the user will be rejected.
Call Admission Control (CAC) algorithm consists of CAC based on power resource, CAC based on code
resource, CAC based on credit resource, CAC based on Iub resource and CAC based on HSPA user
number.
A CAC procedure contains RRC signaling admission control and RAB admission control.

Figure 33 Basic procedure of resource admission decision


The admission decision is based on:
Cell available code resource
Cell available power resource
NodeB resource state, that is, NodeB credits (They are used to measure the channel
demodulation capability of NodeBs.)
Available Iub transport layer resource, that is, Iub transmission bandwidth
Number of HSDPA users (only for HSDPA services)
Number of HSUPA users (only for HSUPA services)
A call can be admitted only when all of these resources are available.
NOTE:
Except the mandatory code resource admission control, the admission control based on any other
resource can be disabled through the ADD CELLALGOSWITCH command.
Some CAC-related switches are available from the Cell CAC algorithm switch parameter.
The power admission switch is available from the Uplink/Downlink CAC algorithm switch parameter

A.c.a. CAC Based on Code Resource


When a new service attempts to access the network, code resource admission is mandatory.

Code resource admission is implemented as follows:


For RRC connection setup requests, the code resource admission is successful if the
current remaining code resource is enough for the RRC connection.
For handover services, the code resource admission is successful if the current remaining
code resource is enough for the service.
For other R99 services, the RNC should ensure that the remaining code does not exceed
the configurable OM threshold (Dl HandOver Credit and Code Reserved SF) after
admission of the new service.
For HSDPA services, the reserved codes are shared by all HSDPA services. Therefore, the
code resource admission is not needed.

A.c.b. CAC Based on Power Resource


When a new service accesses the network, power resources are optional for CAC.
Power Admission Decision
Power admission decision consists of signaling radio bearer admission decision and RAB admission
decision based on algorithm 1, algorithm 2 and algorithm 3.
The following three algorithms are available for power resource admission decision. If power resource admission control is enabled, one of them is used for the admission decision. Which algorithm is used is defined by the Uplink/Downlink CAC algorithm switch parameter.
Algorithm 1: power resource admission decision based on power or interference
Based on the current cell load (uplink load factor and downlink transmitted carrier power) and
the access request, the RNC decides whether the cell load will exceed the threshold or not if
admitting a new call. If yes, the RNC rejects the request. If no, the RNC accepts the request.
Algorithm 2: power resource admission decision based on the number of equivalent users
Based on the current number of equivalent users and the access request, the RNC decides
whether the number of equivalent users will exceed the threshold or not if admitting a new
call. If yes, the RNC rejects the request. If no, the RNC accepts the request.
Algorithm 3: power resource admission decision based on power or interference, but with
the estimated load increment always set to 0
Based on the current cell load (uplink load factor and downlink TCP) and the access request,
the RNC decides whether the cell load will exceed the threshold or not, with the estimated
load increment set to 0. If yes, the RNC rejects the request. If no, the RNC accepts the
request.

Figure 34 Basic procedure of power resource admission decision


The basic principles of power resource admission decision are as follows:
Four basic load thresholds are used for power resource admission decision. They are:
o UL/DL Handover access threshold
o UL/DL threshold of Conv AMR service
o UL/DL threshold of Conv non_AMR service
o UL/DL threshold of other services
With these thresholds, the RNC can define the proportion between speech service and other
services while ensuring handover preference.
Admission control involves uplink and downlink. The admission control switches in the two
directions are independent of each other.
For an intra-frequency handover request, only downlink admission decision is needed.
For a non-intra-frequency handover request, both uplink and downlink decisions are
needed if both uplink CAC and downlink CAC are enabled.
For a rate downsizing request, the RNC accepts it directly.
For a rate upsizing request, the RNC makes the decision as shown in Figure 34 Basic
procedure of power resource admission decision.
For a rejected RRC connection request, the RNC performs DRD or redirection.
For a rejected service request, the RNC performs preemption or queuing according to the
actual situation.
NOTE:
For a rate upsizing request, LDR trigger thresholds are used for admission decision.
Signaling Radio Bearer Admission Decision
To ensure that the RRC connection request is not denied by mistake, tolerance principles apply.
The admission decision is made according to the cause of the RRC connection request:
When power admission is based on power or interference (algorithm 1 and algorithm 3):
  For an RRC connection request for the reason of emergency call, detach, or registration, direct admission is used.
  For an RRC connection request for other reasons, the UL/DL OLC Trigger threshold is used for admission. For details of the UL/DL OLC Trigger threshold, refer to Triggering of OLC.
When power admission is based on the equivalent number of users (algorithm 2):
  For an RRC connection request for the reason of emergency call, detach, or registration, direct admission is used.
  For an RRC connection request for other reasons, the admission decision is made as follows:
    When the OLC switch is on, the RRC connection request is rejected if the cell is in overload congestion state. If the cell is not in overload state, the UL/DL OLC Trigger threshold is used for power admission.
    When the OLC switch is off, the UL/DL OLC Trigger threshold is used for power admission.

Algorithm 1 of Power Admission

Power admission decision based on algorithm 1 consists of uplink power admission decision and
downlink power admission decision procedures.
Uplink Power Admission Decision Procedure Based on Algorithm 1

Figure 35 Uplink power admission decision procedure


The procedure of uplink power admission decision is as follows:
The RNC obtains the uplink RTWP of the cell and uses the formula

ηUL = 1 − PN / RTWP

to calculate the current uplink load factor ηUL, where PN is the received uplink background noise.
The RNC calculates the uplink load increment ΔηUL based on the service request.
The RNC uses the following formula to forecast the uplink load factor:

ηUL,predicted = ηUL + ΔηUL + ηUL,cch + ηHS-DPCCH

In the formula, ηUL,cch is the value of UL common channel load factor, which defines the factor of UL common channel resources reserved. ηHS-DPCCH is the value of UL HS-DPCCH reserve factor, which defines the factor of UL HS-DPCCH resources reserved.
By comparing the forecast uplink load factor ηUL,predicted with the corresponding threshold (UL threshold of Conv AMR service, UL threshold of Conv non_AMR service, UL threshold of other services, or UL Handover access threshold), the RNC decides whether to accept the access request.
NOTE:
The procedure of uplink power admission decision in HSUPA cells is similar to that in R99 cells.
The uplink load increment ΔηUL is determined by the following factors:
The Eb/No of the incoming new call (the larger the Eb/No, the larger the uplink load increment)
The UL neighbor interference factor (the larger the factor, the larger the uplink load increment)
Configuration Rule and Restrictions:
To ensure success of handover and performance of conversational services and to differentiate
services of four classes, the thresholds should fulfill the following condition:
UL Handover access threshold > max(UL threshold of Conv AMR service, UL threshold of Conv
non_AMR service) > UL threshold of other services
The parameters UL Handover access threshold, UL threshold of Conv AMR service, UL threshold of
Conv non_AMR service, and UL threshold of other services should be considered together with the
planning result of network optimization. The reasons are as follows:
If the parameters are set too large, the network optimization may be affected. The system load
after admission may become too heavy, and the heavy load can affect the system stability and result
in system congestion.
If the parameters are set too small, the target capacity may not be reached. There is a higher probability that users are rejected while some resources are idle and wasted.
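
The uplink decision of algorithm 1 can be sketched as follows. Powers are in linear units (mW in the example), the load increment and the reserved factors are passed in directly (their estimation is outside this sketch), and the threshold value is illustrative only.

```python
def ul_load_factor(rtwp_mw, background_noise_mw):
    """Current uplink load factor: eta_UL = 1 - Pn / RTWP (linear powers)."""
    return 1.0 - background_noise_mw / rtwp_mw

def ul_power_admission(rtwp_mw, pn_mw, delta_eta_ul, eta_ul_cch, eta_hs_dpcch, threshold):
    """Admit the request if the predicted UL load factor stays within the
    service-specific threshold (algorithm 1, uplink)."""
    predicted = ul_load_factor(rtwp_mw, pn_mw) + delta_eta_ul + eta_ul_cch + eta_hs_dpcch
    return predicted <= threshold

# Example: RTWP of -100 dBm over a -105 dBm background noise, 2 % estimated increment
rtwp_mw = 10 ** (-100 / 10)
pn_mw = 10 ** (-105 / 10)
print(ul_power_admission(rtwp_mw, pn_mw, delta_eta_ul=0.02, eta_ul_cch=0.05,
                         eta_hs_dpcch=0.02, threshold=0.75))
# -> False: the predicted load factor (~0.77) exceeds the 0.75 threshold
```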
Downlink Power Admission Decision Procedure Based on Algorithm 1
Downlink Power Admission Decision for R99 Cells

Figure 36 downlink power admission decision


The procedure of downlink power admission decision is as follows:
The RNC obtains the cell downlink TCP and calculates the downlink load factor ηDL by dividing this TCP by the maximum downlink transmit power Pmax.
The RNC calculates the downlink load increment ΔηDL based on the service request and the current load.
The RNC uses the following formula to forecast the downlink load factor:

ηDL,predicted = ηDL + ΔηDL + ηDL,cch

In the formula, ηDL,cch is the value of DL common channel load reserved coefficient, which defines the factor of DL common channel resources reserved.
By comparing the forecast downlink load factor ηDL,predicted with the corresponding threshold (DL threshold of Conv AMR service, DL threshold of Conv non_AMR service, DL threshold of other services, or DL Handover access threshold), the RNC decides whether to accept the access request.
NOTE:
The downlink load increment DL is determined by the following factors:
Eb/No of the incoming new call (The larger the Eb/No, the larger the downlink load increment.)
Nonorthogonality factor (The larger the factor, the larger the downlink load increment.)
Current transmitted carrier power (The larger the power, the smaller the downlink load
increment.)

Downlink Power Admission Decision for HSPA Cells


Power Increment Estimation for DCH RAB
The power increment estimation for a DCH RAB in an HSPA cell is similar to that for a DCH RAB in an R99 cell.
Power Increment Estimation for HSDPA RAB
The power increment estimation for an HSDPA RAB, ΔP_DL, is made based on the GBR, Ec/No, the nonorthogonality factor, and so on.
Downlink Radio Admission Decision for DCH RAB
When the admission of the DCH RAB is implemented, the following formulas apply:
1. P_nonhspa + P_cch + ΔP_DL ≤ P_max × Thd_nonhspa^cac
2. P_total + ΔP_DL ≤ P_max × Thd_total^cac
3. P_nonhspa + P_cch + min(GBP + P_HSUPA^res, P_max^hspa) + ΔP_DL ≤ P_max × Thd_total^cac
where
P_nonhspa is the current non-HSDPA power.
P_cch is the power reserved for the common channel.
P_max is the cell maximum transmit power.
Thd_nonhspa^cac is the cell DL admission threshold of the requested type of service, that
is, DL threshold of Conv AMR service, DL threshold of Conv non_AMR
service, DL threshold of other services, or DL Handover access threshold.
P_total is the current downlink transmitted carrier power.
Thd_total^cac is the threshold of cell DL total power. It is defined by the DL total
power threshold parameter.
GBP is the power requirement for GBR.
P_HSUPA^res is the power reserved for HSUPA downlink control channels (E-AGCH/E-RGCH/E-HICH).
P_max^hspa is the maximum available power for HSPA. Its value is associated with
the HSDPA power allocation mode.
The RNC should admit the DCH RAB in either of the following situations:
Formulas 1 and 2 are fulfilled.
Formulas 1 and 3 are fulfilled.
NOTE:
If the GBP measurement is deactivated, the decision formulas that involve GBP are regarded as fulfilled.
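As a quick illustration of the two admission paths for a DCH RAB in an HSPA cell (formulas 1 and 2, or formulas 1 and 3), the sketch below evaluates the three inequalities; all power figures and thresholds are hypothetical.

```python
# Sketch of the DCH RAB power admission decision in an HSPA cell (formulas 1-3 above).
# Power values are in watts and purely illustrative; thresholds are fractions of P_max.

def admit_dch_rab(p_nonhspa, p_cch, p_total, delta_p_dl,
                  gbp, p_hsupa_res, p_max_hspa, p_max,
                  thd_cac, thd_total):
    f1 = p_nonhspa + p_cch + delta_p_dl <= p_max * thd_cac
    f2 = p_total + delta_p_dl <= p_max * thd_total
    f3 = (p_nonhspa + p_cch + min(gbp + p_hsupa_res, p_max_hspa)
          + delta_p_dl) <= p_max * thd_total
    # Admit when formulas 1 and 2, or formulas 1 and 3, are fulfilled.
    return f1 and (f2 or f3)

# Example: 20 W cell, moderately loaded.
print(admit_dch_rab(p_nonhspa=8.0, p_cch=2.0, p_total=14.0, delta_p_dl=1.0,
                    gbp=3.0, p_hsupa_res=0.5, p_max_hspa=10.0, p_max=20.0,
                    thd_cac=0.75, thd_total=0.9))  # True -> RAB admitted
```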
Downlink Radio Admission Decision for HSDPA RAB
When the admission of the HSDPA RAB is implemented, the following formulas apply:
1. Σ_i PBR_i^strm ≥ Thd_hsdpa^strm × Σ_i GBR_i^strm
2. Σ_i PBR_i^be ≥ Thd_hsdpa^be × Σ_i GBR_i^be
3. GBP + P_HSUPA^res + ΔP_DL ≤ P_max^hspa
4. P_total + ΔP_DL ≤ P_max × Thd_total^cac
5. P_nonhspa + P_cch + GBP + P_HSUPA^res + ΔP_DL ≤ P_max × Thd_total^cac
where
PBR_strm is the provided bit rate of all existing streaming services.
Thd_hsdpa^strm is the admission threshold for the streaming PBR decision. It is defined by
the Hsdpa streaming PBR threshold parameter.
PBR_be is the provided bit rate of all existing BE services.
Thd_hsdpa^be is the admission threshold for the BE PBR decision. It is defined by the
Hsdpa best effort PBR threshold parameter.
GBR_strm and GBR_be are the guaranteed bit rates of the existing streaming and BE services.
GBP is the power requirement for GBR.
P_HSUPA^res is the power reserved for HSUPA downlink control channels (E-AGCH/E-RGCH/E-HICH).
P_max^hspa is the maximum available power for HSPA. Its value is associated with
the HSDPA power allocation mode. For details, refer to HSDPA Power Resource
Allocation.
P_total is the current downlink transmitted carrier power.
P_max is the cell maximum transmit power.
Thd_total^cac is the threshold of cell DL total power, which is defined by the DL total
power threshold parameter.
P_nonhspa is the current non-HSDPA power.
P_cch is the power reserved for the common channel.
The RNC should admit the HSDPA streaming RAB in any of the following situations:
Formula 1 is fulfilled.
Formulas 3 and 4 are fulfilled.
Formulas 3 and 5 are fulfilled.
The RNC should admit the HSDPA BE RAB in any of the following situations:
Formula 2 is fulfilled.
Formulas 3 and 4 are fulfilled.
Formulas 3 and 5 are fulfilled.
NOTE:
If PS conversational services are carried on HSPA, the services can be treated as streaming
services during admission control.

If the GBP measurement is deactivated, the decision formulas that involve GBP are regarded as
fulfilled.
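The streaming and BE admission rules above share the same structure; the following sketch illustrates them with hypothetical bit rates and powers (the caller passes the PBR, GBR, and PBR threshold of the relevant traffic class).

```python
# Sketch of the HSDPA RAB admission decision (formulas 1-5 above). The same
# structure applies to streaming and BE RABs: pass the PBR, GBR, and PBR threshold
# of the relevant class. Bit rates in kbit/s, powers in watts; values are illustrative.

def admit_hsdpa_rab(pbr, gbr, thd_pbr,
                    gbp, p_hsupa_res, delta_p_dl, p_max_hspa,
                    p_total, p_max, thd_total, p_nonhspa, p_cch):
    f_pbr = pbr >= thd_pbr * gbr                                 # formula 1 (streaming) or 2 (BE)
    f3 = gbp + p_hsupa_res + delta_p_dl <= p_max_hspa            # formula 3
    f4 = p_total + delta_p_dl <= p_max * thd_total               # formula 4
    f5 = (p_nonhspa + p_cch + gbp + p_hsupa_res + delta_p_dl
          <= p_max * thd_total)                                  # formula 5
    # Admit when the PBR formula holds, or when formula 3 holds with 4 or 5.
    return f_pbr or (f3 and (f4 or f5))

# Example: a new BE RAB while existing BE users get 90% of their aggregate GBR.
print(admit_hsdpa_rab(pbr=900.0, gbr=1000.0, thd_pbr=0.8,
                      gbp=4.0, p_hsupa_res=0.5, delta_p_dl=1.0, p_max_hspa=10.0,
                      p_total=14.0, p_max=20.0, thd_total=0.9,
                      p_nonhspa=8.0, p_cch=2.0))  # True
```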
Downlink Radio Admission Decision for HSUPA Control Channels
The power of downlink control channels (E-AGCH/E-RGCH/E-HICH) is reserved by Dl HSUPA reserved
factor. Therefore, the power admission for these channels is not needed.
Algorithm 2 of Power Admission
When the uplink CAC algorithm or the downlink CAC algorithm uses algorithm 2, the admission of
uplink/downlink power resources is based on the equivalent number of users.

Equivalent Number of Users


The 12.2 kbit/s AMR traffic is used to calculate the Equivalent Number of Users (ENU) of all other
services. The 12.2 kbit/s AMR traffic's ENU is assumed to be 1. The ENU calculation of all other services
is related to the following factors:
Cell type, such as urban or suburban
Traffic domain, CS or PS
Coding type, turbo code or 1/2 or 1/3 convolutional code
Traffic QoS, that is, BLER
Service                | ENU: Uplink for DCH | Downlink for DCH | HSDPA | HSUPA
3.4 kbit/s SIG         | 0.44                | 0.42             | -     | -
13.6 kbit/s SIG        | 1.11                | 1.11             | -     | -
3.4 + 12.2 kbit/s      | 1.44                | 1.42             | -     | -
3.4 + 8 kbit/s (PS)    | 1.35                | 1.04             | 0.78  | 0.84
3.4 + 16 kbit/s (PS)   | 1.62                | 1.25             | 1.11  | 0.85
3.4 + 32 kbit/s (PS)   | 2.15                | 2.19             | 1.70  | 0.96
3.4 + 64 kbit/s (PS)   | 3.45                | 3.25             | 2.79  | 1.20
3.4 + 128 kbit/s (PS)  | 5.78                | 5.93             | 4.92  | 1.67
3.4 + 144 kbit/s (PS)  | 6.41                | 6.61             | 5.46  | 1.91
3.4 + 256 kbit/s (PS)  | 10.18               | 10.49            | 9.36  | 2.83
3.4 + 384 kbit/s (PS)  | 14.27               | 15.52            | 14.17 | 3.91
NOTE:
In the above table, for a 3.4 + n kbit/s service of HSDPA or HSUPA:
The 3.4 kbit/s is the rate of the signaling carried on the DCH.
The n kbit/s is the GBR of the service.
Procedure of ENU Resource Decision for Uplink/Downlink

The procedure of ENU resource decision for uplink/downlink is as follows:
The RNC obtains the total ENU of all existing users: ENU_total = Σ_(all existing users) ENU_i.
The RNC gets the ENU of the new incoming user, ENU_new.
The RNC uses the formula (ENU_total + ENU_new)/ENU_max to forecast the ENU load, where
ENU_max is the configured maximum ENU (UL total equivalent user number or DL total
nonhsdpa equivalent user number).
By comparing the forecasted ENU load with the corresponding threshold (UL/DL
threshold of Conv AMR service, UL/DL threshold of Conv non_AMR service, UL/DL
threshold of other services, or UL/DL Handover access threshold), the RNC decides
whether to accept the access request.
The admission thresholds for different types of service are different. The following table lists the
parameters used to set admission thresholds for different types of service.
Service Type  | Admission Threshold
UL DCH/HSUPA  | UL threshold of Conv AMR service
              | UL threshold of Conv non_AMR service
              | UL threshold of other services
              | UL Handover access threshold
DL DCH        | DL threshold of Conv AMR service
              | DL threshold of Conv non_AMR service
              | DL threshold of other services
              | DL Handover access threshold
HSDPA         | DL total power threshold
For example, the admission of a new AMR service in the uplink based on algorithm 2 will be
successful if the following formula is fulfilled:
(ENU_total + ENU_new)/ENU_max ≤ UL threshold of Conv AMR service
NOTE:
If the cell is in overload congestion state in the uplink, the RNC should reject any new RAB.
For MBMS services, it is assumed that their ENU is always zero.
The ENU of MBMS downlink control channels (MICH and MCCH) is reserved by Dl MBMS reserved
factor. Therefore, the power admission for these channels is not needed.
The ENU of HSUPA downlink control channels (E-AGCH/E-RGCH/E-HICH) is reserved by Dl HSUPA
reserved factor. Therefore, the power admission for these channels is not needed.
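A compact sketch of the algorithm-2 decision follows. The ENU values are the uplink DCH figures from the table above, while ENU_max and the threshold are hypothetical placeholders.

```python
# Sketch of the algorithm-2 (equivalent-number-of-users) admission decision.
# ENU values below are the uplink DCH figures from the table above; ENU_max and
# the per-service threshold are illustrative placeholders.

UL_DCH_ENU = {
    "3.4 kbit/s SIG": 0.44,
    "3.4 + 12.2 kbit/s": 1.44,
    "3.4 + 64 kbit/s (PS)": 3.45,
    "3.4 + 384 kbit/s (PS)": 14.27,
}

def admit_by_enu(existing_services, new_service, enu_max, threshold):
    """Forecast the ENU load and compare it against the admission threshold."""
    enu_total = sum(UL_DCH_ENU[s] for s in existing_services)
    enu_new = UL_DCH_ENU[new_service]
    return (enu_total + enu_new) / enu_max <= threshold

existing = ["3.4 + 12.2 kbit/s"] * 20 + ["3.4 + 64 kbit/s (PS)"] * 4
print(admit_by_enu(existing, "3.4 + 12.2 kbit/s",
                   enu_max=80, threshold=0.75))  # True -> AMR call admitted
```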
Algorithm 3 of Power Admission
Algorithm 3 of power resource admission decision is based on power or interference. It is similar to
algorithm 1, except that the estimated load increment is always set to 0.
Based on the current cell load (uplink load factor and downlink TCP) and the access request, the RNC
decides whether the cell load will exceed the threshold or not, with the estimated load increment set to
0. If yes, the RNC rejects the request. If no, the RNC accepts the request.

A.c.c. CAC Based on NodeB Credit Resource


When a new service accesses the network, NodeB credit resource admission is optional.
NodeB Credit
CE is called NodeB credit on the RNC side and called Channel Element on the NodeB side. It is used to
measure the channel demodulation capability of NodeBs.
The resource of one equivalent 12.2 kbit/s AMR voice service, including 3.4 kbit/s signaling on the DCCH,
consumed in baseband is defined as one CE. If there is only 3.4 kbit/s signaling on the DCCH but no
voice channel, one CE is consumed. Channel elements provide either uplink or downlink capacity for
services. There are two kinds of CE. One is uplink CE for supporting uplink services, and the other is
downlink CE for supporting downlink services. Therefore, one 12.2 kbit/s AMR voice service consumes
one uplink CE and one downlink CE.
The principles of NodeB credit admission control are similar to those of power resource admission
control, that is, to check in the local cell (and local cell group, if any) whether the remaining credit can
support the requesting services.
For details about the local cell, local cell group, and capacity consumption laws, refer to 3GPP TS 25.433.
According to the capacity consumption laws of the common and dedicated channels, and following the
addition, removal, and reconfiguration of these channels, the Controlling RNC (CRNC) debits the consumed
credit resource from, or credits it back to, the Capacity Credit of the
local cell (and local cell group, if any) based on the spreading factor.
If the UL Capacity Credit and DL Capacity Credit are separate, the maintenance on the
local cell (and local cell group, if any) is performed in the UL and DL respectively.
If the UL Capacity Credit and DL Capacity Credit are not separate, the maintenance only
on the Global Capacity Credit is performed for the local cell (and local cell group, if any).
The consumption of CEs and the relationship between CE and credit are shown in the following tables.
Table 1 Consumption of credits on the DCH

Traffic Class    | Direction | Spreading Factor | Number of CEs Consumed | Corresponding Credits Consumed
3.4 kbit/s SRB   | DL        | 256              | -                      | -
                 | UL        | 256              | -                      | -
13.6 kbit/s SRB  | DL        | 128              | -                      | -
                 | UL        | 64               | -                      | -
12.2 kbit/s AMR  | DL        | 128              | -                      | -
                 | UL        | 64               | -                      | -
64 kbit/s VP     | DL        | 32               | -                      | -
                 | UL        | 16               | -                      | -
32 kbit/s PS     | DL        | 64               | -                      | -
                 | UL        | 32               | 1.5                    | 3
64 kbit/s PS     | DL        | 32               | -                      | -
                 | UL        | 16               | -                      | -
128 kbit/s PS    | DL        | 16               | -                      | -
                 | UL        | -                | -                      | -
384 kbit/s PS    | DL        | -                | -                      | -
                 | UL        | -                | 10                     | 20

Table 2 Consumption of credits on the HSUPA
Traffic Class | Direction | Spreading Factor | Number of CEs Consumed | Corresponding Credits Consumed
16 kbit/s     | UL        | 64               | 2 + 1                  | 4 + 2
32 kbit/s     | UL        | 32               | 2.5 + 1                | 5 + 2
64 kbit/s     | UL        | 16               | 4 + 1                  | 8 + 2
128 kbit/s    | UL        | -                | 6 + 1                  | 12 + 2
384 kbit/s    | UL        | -                | 11 + 1                 | 22 + 2
1 Mbit/s      | UL        | 2x4              | 21 + 1                 | 42 + 2
2.96 Mbit/s   | Not supported in HSUPA phase 1
5.76 Mbit/s   | Not supported in HSUPA phase 1
NOTE:
As shown in Table 1 and Table 2, for each data rate and service, the number of UL credits is equal
to the number of UL CEs multiplied by 2. That is because the RESOURCE STATUS INDICATION message
over the Iub interface supports only integers. For example, a UL 32 kbit/s PS service consumes 1.5
CEs. Then, the number of corresponding UL credits consumed is 3, an integer, which can be carried in
the RESOURCE STATUS INDICATION message.
The number of CEs consumed by the E-DPCCH always equals one.
The number of CEs consumed by the E-DPDCH is associated with the Spreading Factor (SF). The
bit rates of services in Table 2 are typical bit rates associated with the specific SFs. The bit rates of
services using the same SF may differ for other reasons.
There is no capacity consumption law for HS-DSCH in 3GPP TS 25.433, so certain credits are
reserved for the HSDPA RABs, and credit admission for HSDPA is not needed.
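The CE-to-credit relationship described in the note (UL credits = UL CEs x 2, carried as an integer over the Iub interface) can be sketched as follows; rounding up is an assumption for fractional intermediate values.

```python
import math

# Sketch of the UL CE-to-credit conversion described in the note above:
# UL credits = UL CEs x 2, carried as an integer over the Iub interface.
# Rounding up is an assumption for fractional intermediate values.

def ul_credits(ul_ces: float) -> int:
    """Convert a (possibly fractional) UL CE consumption into integer credits."""
    return math.ceil(ul_ces * 2)

print(ul_credits(1.5))   # 3  -> UL 32 kbit/s PS service (1.5 CEs)
print(ul_credits(10))    # 20 -> a UL service consuming 10 CEs
```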
Procedure for NodeB Credit Resource Decision
When a new service tries to access the network, the credit resource admission is implemented as
follows:
For an RRC connection setup request, the credit resource admission is successful if the
current remaining credit resource is enough for the RRC connection.
For a handover service, the credit resource admission is successful if the current
remaining credit resource is enough for the service.
For other services, the RNC should ensure that, after admission of the new service, the remaining
credit of the local cell, local cell group (if any), and NodeB still covers the reservation defined by the
configurable OM thresholds (Ul HandOver Credit Reserved SF/Dl HandOver Credit and Code
Reserved SF).
NOTE:
The CE capabilities at the levels of local cell, local cell group, and NodeB are reported to
the RNC through the NBAP_AUDIT_RSP message over the Iub interface.
o The CE capability of local cell level indicates the maximum capability in terms of
hardware that can be used in the local cell.
o The CE capability of local cell group level indicates the capability obtained after
both license and hardware are taken into consideration.
o The CE capability of NodeB level indicates the number of CEs allowed to be used as
specified in the license.
Before admission control on the credit resource in a cell, ensure that the credit admission
decisions at the cell group and NodeB levels are passed.

If the UL Capacity Credit and DL Capacity Credit are separate, the credit resource
admission is implemented in the UL and DL respectively.
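A simplified sketch of the multi-level credit check described above is shown below. The credit capacities, usage figures, and handover reservations are hypothetical, and the reservation is expressed directly in credits rather than as a reserved SF for readability.

```python
# Simplified sketch of NodeB credit admission: the new service must fit within the
# remaining credit at the local cell, local cell group (if any), and NodeB levels.
# Capacities, usage, and reservations below are illustrative only; the real
# parameters express the reservation as an SF value rather than as credits.

def admit_credit(required, levels, is_handover):
    """levels: list of dicts with 'capacity', 'used' and 'reserved_for_handover'."""
    for level in levels:
        remaining = level["capacity"] - level["used"]
        # Handover requests may also use the share reserved for handover.
        margin = 0 if is_handover else level["reserved_for_handover"]
        if required > remaining - margin:
            return False
    return True

levels = [
    {"capacity": 128, "used": 100, "reserved_for_handover": 16},  # local cell
    {"capacity": 256, "used": 180, "reserved_for_handover": 32},  # local cell group
    {"capacity": 512, "used": 300, "reserved_for_handover": 64},  # NodeB (license)
]
print(admit_credit(required=20, levels=levels, is_handover=False))  # False (cell level)
print(admit_credit(required=20, levels=levels, is_handover=True))   # True
```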

A.c.d. CAC Based on Iub Interface Resource


When a new service accesses the network, Iub interface resource admission is mandatory. The call
admission control on the Iub interface is performed through admission control of bandwidth, which is
used to judge whether there is sufficient bandwidth for a user to access the network.
Admission Control Algorithm
The general algorithm of bandwidth admission control is as follows:
For the bandwidth admission requested by a new user, the following requirements apply:
o Total bandwidth used by the users on the path + required bandwidth for the new
user < total bandwidth configured for the path - bandwidth reserved for handover
o Total bandwidth used by the users on the physical link + required bandwidth for
the new user < total bandwidth of the physical link - bandwidth reserved for
handover
For the bandwidth admission requested by a handover user, the following requirements
apply:
o Total bandwidth used by the users on the path + required bandwidth for the
handover user < total bandwidth configured for the path
o Total bandwidth used by the users on the physical link + required bandwidth for
the handover user < total bandwidth of the physical link
For the bandwidth admission requested by a rate upsizing user, the following
requirements apply:
o Total bandwidth used by the users on the path + required bandwidth for the rate
upsizing user < total bandwidth configured for the path - congestion threshold
o Total bandwidth used by the users on the physical link + required bandwidth for
the rate upsizing user < total bandwidth of the physical link - congestion threshold
NOTE:
The users on the physical link include R99 users and HSPA users.
For a path that belongs to a path group, admission must be performed at both the path
level and the path group level.
For an IMA group or MLPPP group, the RNC automatically adjusts the maximum bandwidth
available to the whole group and uses the new admission threshold if the bandwidth of an IMA link
or MLPPP link changes.
The bandwidth reserved for handover includes Forward handover reserved bandwidth and
Backward handover reserved bandwidth.
NOTE:
The congestion thresholds include Forward congestion threshold and Backward congestion
threshold. For details, refer to Congestion Control of Bandwidth in the following paragraphs.
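The three admission cases above (new user, handover user, and rate upsizing user) differ only in the margin subtracted from the configured bandwidth. A minimal sketch, with hypothetical bandwidth figures in kbit/s, follows; the same check is applied at the path level and at the physical link level.

```python
# Sketch of the Iub bandwidth admission checks described above.
# Bandwidth figures are in kbit/s and purely illustrative.

def admit_on_path(used, required, configured, user_type,
                  handover_reserved=0, congestion_threshold=0):
    """Apply the path-level check; the same rule is applied per physical link."""
    if user_type == "new":
        margin = handover_reserved          # keep room for incoming handovers
    elif user_type == "handover":
        margin = 0                          # handover users may use the reserve
    elif user_type == "rate_upsizing":
        margin = congestion_threshold       # do not upsize into the congestion zone
    else:
        raise ValueError(user_type)
    return used + required < configured - margin

# A new 384 kbit/s user on a 10 Mbit/s path with 8.8 Mbit/s in use and 1 Mbit/s
# reserved for handover is rejected, while a handover user with the same demand passes.
print(admit_on_path(8800, 384, 10000, "new", handover_reserved=1000))  # False
print(admit_on_path(8800, 384, 10000, "handover"))                     # True
```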
Admission Procedure
Primary and secondary paths are introduced to admission control. According to the mapping between
traffic types and transmission resources, the RNC preferably selects the primary path for admission. If
the admission on the primary path fails, then the admission on the secondary path is performed.
Assume that secondary paths are available for new users, handover users, and rate upsizing users. The
following procedures describe the admission of these users on the Iub interface respectively.
The admission procedure for a new user is as follows:
a The new user tries to be admitted to available bandwidth 1 of the primary path, as
shown in 1 of Figure 37 Admission procedure for a new user.
b If the admission on the primary path is successful, the user is carried on the primary
path.
c If the admission on the primary path fails, the user tries to be admitted to available
bandwidth 2 of the secondary path, as shown in 2 of Figure 37.
d If the admission on the secondary path is successful, the user is carried on the
secondary path. If not, the bandwidth admission request of the user is rejected.

Figure 37 Admission procedure for a new user


Available bandwidth 1 = total bandwidth of the primary path - used bandwidth - handover reserved
bandwidth
Available bandwidth 2 = total bandwidth of the secondary path - used bandwidth - handover reserved
bandwidth
The admission procedure for a handover user is as follows:
a The handover user tries to be admitted to available bandwidth 1 of the primary path, as
shown in 1 of Figure 38 Admission procedure for a handover user.
b If the admission on the primary path is successful, the user is carried on the primary
path.
c If the admission on the primary path fails, the user tries to be admitted to available
bandwidth 2 of the secondary path, as shown in 2 of Figure 38.
d If the admission on the secondary path is successful, the user is carried on the
secondary path. If not, the bandwidth admission request of the user is rejected.

Figure 38 Admission procedure for a handover user

Available bandwidth 1 = total bandwidth of the primary path - used bandwidth


Available bandwidth 2 = total bandwidth of the secondary path - used bandwidth
The admission procedure for a rate upsizing user is as follows:
a The rate upsizing user tries to be admitted to available bandwidth 1 of the primary path,
as shown in 1 of Figure 39.
b If the admission on the primary path is successful, the user is carried on the primary
path.
c If the admission on the primary path fails, the user tries to be admitted to available
bandwidth 2 of the secondary path, as shown in 2 of Figure 39.
d If the admission on the secondary path is successful, the user is carried on the
secondary path. If not, the bandwidth admission request of the user is rejected.

Figure 39 Admission procedure for a rate upsizing user


Available bandwidth 1 = total bandwidth of the primary path - used bandwidth - congestion reserved
bandwidth
Available bandwidth 2 = total bandwidth of the secondary path - used bandwidth - congestion reserved
bandwidth
NOTE:
If no secondary paths are available for the users, the admission is performed only on the primary paths.
Congestion Control of Bandwidth
Congestion control of bandwidth is used to avoid insufficiency in the transmission bandwidth.
Congestion Detection
The Forward congestion threshold and Backward congestion threshold parameters can be set for
congestion detection when a path, port, or resource group is configured. The default values of the two
parameters are 0, indicating that no congestion detection is performed. If the parameters are specified,
the RNC TRM function performs congestion detection based on the parameter values.
For a path, port, or resource group, you can also set the Forward congestion clear threshold and
Backward congestion clear threshold parameters, both of which are used to determine whether the
congestion disappears.
Congestion detection can be triggered in either of the following cases:
Bandwidth adjustment because of resource allocation, modification or release
Change in the configured bandwidth or the congestion threshold
Suppose that the forward parameters of a port for congestion detection are defined as follows:
Configured bandwidth: AVE
Forward congestion threshold: CON
Forward congestion clear threshold: CLEAR
Used bandwidth: USED
Then, the mechanism of congestion detection on the port is as follows:
Congestion occurs on the port when CON + USED ≥ AVE.
Congestion disappears from the port when CLEAR + USED < AVE.
NOTE:
The congestion detection for a path or resource group is similar to that for a port.

Generally, congestion thresholds only need to be set for a port or resource group. If different
types of AAL2 paths or IP paths require different congestion thresholds, however, you can set the
parameters on the paths as required.
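The detection rule amounts to a two-threshold comparison with hysteresis between CON and CLEAR; the sketch below uses illustrative values for AVE, CON, and CLEAR.

```python
# Sketch of the bandwidth congestion detection rule described above.
# AVE, CON and CLEAR are in kbit/s and are illustrative values only.

AVE = 10000   # configured bandwidth of the port
CON = 1500    # Forward congestion threshold
CLEAR = 2500  # Forward congestion clear threshold

def update_congestion_state(used, congested):
    """Return the new congestion state of the port for the given used bandwidth."""
    if not congested and CON + used >= AVE:
        return True          # congestion detected
    if congested and CLEAR + used < AVE:
        return False         # congestion cleared
    return congested         # state unchanged (hysteresis zone)

state = False
for used in (8000, 8600, 8200, 7400):
    state = update_congestion_state(used, state)
    print(used, state)   # 8000 False, 8600 True, 8200 True, 7400 False
```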
Congestion Handling
When congestion is detected on the Iub interface, a congestion alarm is reported. The RNC triggers
the load reshuffling process after receiving the congestion alarm, if the IUB congestion control switch
(IubCongCtrlSwitch) is ON.

A.c.e. CAC Based on the Number of HSPA Users


When a new HSPA service attempts to access the network, HSPA user number admission is optional.
A.c.e.a. CAC of HSDPA Users
When the HSDPA_UU_ADCTRL is on, HSDPA services should undergo HSDPA user number admission
decision.
When a new HSDPA service attempts to access the network, it is admitted if the number of HSDPA users
in the cell and that in the NodeB do not exceed the associated configurable OM thresholds ( Maximum
HSDPA user number and NodeB Max Hsdpa User Number). Otherwise, the service request is
rejected.
A.c.e.b. CAC of HSUPA Users
When the HSUPA_ADCTRL is on, HSUPA services should undergo HSUPA user number admission
decision.
When a new HSUPA service attempts to access the network, it is admitted if the number of HSUPA users
in the cell and that in the NodeB do not exceed the associated configurable OM thresholds ( Maximum
HSUPA user number and NodeB Max Hsupa User Number). Otherwise, the service request is
rejected.

A.c.f. Intra-Frequency Load Balancing


Intra-frequency Load Balancing (LDB) is performed to adjust the coverage areas of cells based on the
measured values of cell load. Currently, the intra-frequency LDB algorithm is applicable to only the
downlink.
LDB between intra-frequency cells is implemented by adjusting the transmit power of the Primary
Common Pilot Channel (P-CPICH) in the associated cells. When the load of a cell increases, the cell
reduces its coverage to lighten its load. When the load of a cell decreases, the cell extends its coverage
so that some traffic is off-loaded from its neighboring cells to it.
When the intra-frequency LDB algorithm is active, that is, when INTRA_FREQUENCY_LDB is set to 1,
the RNC checks the load of cells periodically and adjusts the transmit power of the P-CPICH in the
associated cells based on the cell load.

Figure 40 Process of intra-frequency load balancing


This process is described as follows:
If the downlink load of a cell is higher than the value of Cell overload threshold, it is an
indication that the cell is heavily loaded. In this case, the transmit power of the P-CPICH needs to
be reduced by a step, which is defined by the Pilot power adjustment step parameter. If the
current transmit power is equal to the value of Min transmit power of PCPICH, however, no
adjustment is performed.
Because of the reduction in the pilot power, the UEs at the edge of the cell might be handed over
to neighboring cells, especially to those with a relatively light load and with relatively high pilot
power. After that, the downlink load of the cell is lightened accordingly.
If the downlink load of a cell is lower than the value of Cell underload threshold, it is an
indication that the cell has sufficient remaining capacity for more load. In this case, the transmit
power of the P-CPICH increases by a step, which is defined by the Pilot power adjustment step
parameter, to help lighten the load of neighboring cells. If the current transmit power is equal to
the value of Max transmit power of PCPICH, however, no adjustment is performed.
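The periodic adjustment described above can be sketched as a bounded step controller. The thresholds, pilot power limits, and step size below are hypothetical, not Huawei defaults.

```python
# Sketch of the intra-frequency load balancing pilot power adjustment described above.
# All values (dBm, thresholds, step) are illustrative, not Huawei defaults.

OVERLOAD_THRESHOLD = 0.80   # Cell overload threshold (DL load)
UNDERLOAD_THRESHOLD = 0.40  # Cell underload threshold (DL load)
STEP_DB = 0.5               # Pilot power adjustment step
PCPICH_MIN_DBM = 30.0       # Min transmit power of PCPICH
PCPICH_MAX_DBM = 33.0       # Max transmit power of PCPICH

def adjust_pcpich(dl_load, pcpich_dbm):
    """Return the P-CPICH power after one load-balancing check period."""
    if dl_load > OVERLOAD_THRESHOLD:
        # Heavily loaded: shrink coverage, but never below the minimum pilot power.
        return max(PCPICH_MIN_DBM, pcpich_dbm - STEP_DB)
    if dl_load < UNDERLOAD_THRESHOLD:
        # Lightly loaded: extend coverage, but never above the maximum pilot power.
        return min(PCPICH_MAX_DBM, pcpich_dbm + STEP_DB)
    return pcpich_dbm        # load in the normal band: no adjustment

print(adjust_pcpich(0.85, 33.0))  # 32.5 -> coverage reduced
print(adjust_pcpich(0.30, 33.0))  # 33.0 -> already at the maximum, unchanged
```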

A.d. Load Reshuffling

A.d.a. Triggering of Basic Congestion


Four resources can trigger the basic congestion of the cell: power resource, code resource, Iub
resources, and NodeB credit resource.
For power resource, the RNC performs periodic measurement and judges whether cells are congested.
For code, Iub, and NodeB credit resources, event-triggered congestion applies, that is, the RNC judges
whether cells are congested when resource usage changes.

Power resource
ULLDR and DLLDR under the Cell LDC algorithm switch parameter control the functionality
of the power congestion control algorithm.

Figure 41 Triggering and release of cell power


For an R99 cell,

If the current UL/DL load of the R99 cell is not lower than basic congestion control threshold
in UL/DL (UL/DL LDR Trigger threshold) for some hysteresis (defined by DL State Trans Hysteresis
threshold in DL; not configurable in UL), the cell works in basic congestion state, and the related load
reshuffling actions are taken.

If the current UL/DL load of the R99 cell is lower than UL/DL LDR Release threshold for
some hysteresis (defined by DL State Trans Hysteresis threshold in DL; not configurable in UL), the
cell comes back to normal state.
For an HSDPA cell,

In the uplink, the decision criterion is the same as that for the R99 cell.

In the downlink, the object to be compared with the associated threshold for decision is the
sum of the non-HSDPA power (TCP of all codes not used for HS-PDSCH or HS-SCCH transmission) and the
Power Requirement for GBR (GBP).
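A state-machine sketch of the trigger/release behaviour follows. The thresholds and hysteresis value are placeholders, the interpretation of "for some hysteresis" as an additive margin is an assumption, and for an HSDPA cell the downlink load input would be the non-HSDPA power plus the GBP as described above.

```python
# Sketch of the basic-congestion trigger/release decision with hysteresis.
# Threshold and hysteresis values are illustrative; treating the hysteresis as an
# additive margin is an assumption about the behaviour described above.

LDR_TRIGGER = 0.75    # UL/DL LDR Trigger threshold
LDR_RELEASE = 0.65    # UL/DL LDR Release threshold
HYSTERESIS = 0.02     # DL State Trans Hysteresis threshold (DL only)

def next_ldr_state(load, in_basic_congestion):
    """Return True if the cell is in basic congestion state after this check."""
    if not in_basic_congestion and load >= LDR_TRIGGER + HYSTERESIS:
        return True               # enter basic congestion, start LDR actions
    if in_basic_congestion and load < LDR_RELEASE - HYSTERESIS:
        return False              # back to normal state
    return in_basic_congestion    # no state change

state = False
for load in (0.70, 0.78, 0.70, 0.62):
    state = next_ldr_state(load, state)
    print(load, state)  # 0.70 False, 0.78 True, 0.70 True, 0.62 False
```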

Code resource
CELL_CODE_LDR under the Cell LDC algorithm switch parameter command controls the
functionality of the code congestion control algorithm.
If the SF corresponding to the current remaining code of the cell is larger than Cell LDR SF
reserved threshold, code congestion is triggered and the related load reshuffling actions, as
listed in the LDR actions table below, are taken.

Iub resources or Iub bandwidth


The IUBCONGCTRLSWITCH parameter in the ADD NODEBALGOPARA or MOD
NODEBALGOPARA command controls the functionality of the Iub congestion control algorithm.
Iub congestion control in both the uplink and downlink is NodeB-oriented. Iub congestion control
is implemented in a separate processing module, so its functionality is not controlled by LDR
switches. In the case of Iub congestion, however, LDR actions are applied to congestion
resolution. For details of the decision for Iub congestion detection, refer to Congestion Control of
Bandwidth.
For the basic congestion triggered for the Iub resource reason, the objects of related LDR actions
are all UEs in the NodeB.

NodeB credit resource


The basic congestion for NodeB credit is of the following types:

Basic congestion at local cell level


CELL_CREDIT_LDR under the Cell LDC algorithm switch parameter and
LC_CREDIT_LDR_SWITCH under the Load control algorithm switch parameter
control the functionality of the local cell credit congestion control algorithm.
Basic congestion at local cell group level (if any)
LCG_CREDIT_LDR_SWITCH under the Load control algorithm switch parameter
controls the functionality of the local cell group credit congestion control algorithm.
Basic congestion at NodeB level

NODEB_CREDIT_LDR_SWITCH under the Load control algorithm switch parameter


controls the functionality of the NodeB credit congestion control algorithm.
For the local cell/cell group (if any)/NodeB, if the UL/DL current remaining SF (mapped to
credit resource) is higher than Ul LDR Credit SF reserved threshold/Dl LDR Credit
SF reserved threshold, credit congestion at cell/cell group/NodeB level is triggered.

The thresholds related to the local cell are Ul LDR Credit SF reserved threshold and Dl
LDR Credit SF reserved threshold, which are set through the ADD CELLLDR command. When credit
congestion in the local cell is triggered, the related LDR actions are taken in this cell.

The thresholds related to the cell group and NodeB are Ul LDR Credit SF reserved
threshold and Dl LDR Credit SF reserved threshold, which are set through the ADD NODEBLDR
command. When credit congestion at cell group or NodeB level is triggered, all the cells under the cell
group or NodeB will be treated as in congestion state, and the related LDR actions will be taken
independently in each cell.
If the congestion of all resources is triggered in a cell, the congestion will be resolved in the order of
resource priority for load reshuffling as configured through the SET LDCALGOPARA command.
For example, if the parameters are set as follows:

first priority for load reshuffling: IUBLDR


second priority for load reshuffling: CREDITLDR
third priority for load reshuffling: CODELDR
fourth priority for load reshuffling: UULDR

then basic congestion will be resolved in the following sequence:
1 Iub resource
2 Credit resource
3 Code resource
4 Power resource

A.d.b. LDR Procedure


The RNC periodically detects whether the cell is in basic congestion state and takes actions if the basic
congestion is detected.
The following procedures apply to HSPA cells and R99 cells. For R99 cells, only DCH UEs are selected by
LDR actions.
NOTE:
The user with gold priority is not selected by LDR actions.
When the cell is in basic congestion state, the RNC takes one of the following actions in each period
(defined by the LDR period timer length parameter) until the congestion is resolved:
Inter-frequency load handover
Code reshuffling
BE service rate reduction
AMR rate reduction
Inter-RAT load handover in the CS domain
Inter-RAT load handover in the PS domain
Iu QoS renegotiation
MBMS power reduction
When the inter-frequency load handover is made to reduce the cell load, only an inter-frequency
neighboring cell that supports blind handover will be a target cell of the inter-frequency load handover.
The inter-RAT load handover in the CS domain action is of the following two types:
Inter-RAT Should Be Load Handover in the CS Domain
Inter-RAT Should Not Be Load Handover in the CS Domain
The inter-RAT load handover in the PS domain action is of the following two types:

Inter-RAT Should Be Load Handover in the PS Domain


Inter-RAT Should Not Be Load Handover in the PS Domain
The difference between the "Inter-RAT Should Be Load Handover In the CS/PS Domain" and "Inter-RAT
Should Not Be Load Handover In the CS/PS Domain" actions lies in the selection of users. The former
only involves CS/PS users with the "service handover" IE set to "handover to GSM shall be performed",
while the latter only involves CS/PS users with the "service handover" IE set to "handover to GSM shall
not be performed".

Figure 42 Detailed LDR procedure


Table: LDR actions intended for different resources
Resources (UL/DL) and channels covered: Power, Iub, Code, and Credit, each for DCH, HSUPA, HSDPA,
and FACH (MBMS) channels as applicable.
LDR actions: Inter-Frequency Load Handover, BE Rate Reduction, Inter-RAT Handover in CS Domain,
Inter-RAT Handover in PS Domain, AMR Rate Reduction, Iu QoS Renegotiation, Code Reshuffling, and
MBMS Power Reduction.
NOTE:
If the downlink power admission uses the equivalent user number algorithm, basic congestion may also
be triggered by the equivalent number of users. In this situation, LDR actions do not involve AMR rate
reduction or MBMS power reduction.
For HSUPA services, the CE consumption, which is calculated on the basis of the Maximum Bit Rate
(MBR), can be reduced through rate downsizing. Therefore, the BE service rate downsizing for HSUPA is
applicable only to the relief of CE resource congestion.

A.d.c. LDR Actions

LDR actions include inter-frequency load handover, BE rate reduction, uncontrolled real-time QoS
renegotiation, inter-RAT handover in the CS domain, inter-RAT handover in the PS domain, AMR rate
reduction, code reshuffling, and MBMS power reduction.
A.d.c.a. Inter-Frequency Load Handover
The LDR algorithm is implemented as follows:
1 The LDR checks whether the existing cell has a target cell of inter-frequency blind handover. If
there is no such target cell, the action fails and the LDR takes the next action.
2 Based on the blind handover priority, the LDR checks whether the load difference between
the current load and the basic congestion triggering threshold of each target cell for blind
handover is larger than UL/DL Inter-freq cell load handover load space threshold (both
the uplink and downlink conditions must be fulfilled). The other resources (code resource, Iub
bandwidth, and NodeB credit resource) in the target cell must not trigger basic congestion. If
the basic congestion triggering threshold is not set, the admission threshold of the cell is used.

If the difference is not larger than the threshold, the action fails and the LDR takes the next action.
NOTE:
The load difference refers to the difference between the current load and the basic congestion triggering
threshold of each target cell, but not the difference between the load of the target cell and the load of
the existing cell.
3 If the LDR finds a target cell that meets the specified blind handover conditions, the LDR selects
one UE to perform an inter-frequency blind handover to the cell, depending on the UE's occupied
bandwidth. For the selected UE (other than a gold user), its UL/DL current bandwidth for DCH or
GBR bandwidth for HSPA should be less than, and have the least difference from, the UL/DL Inter-freq
cell load handover maximum bandwidth parameter (both the uplink and downlink conditions
must be fulfilled).
If there is more than one such UE, the first one is taken.
If the LDR cannot find such a UE, the action fails and the LDR takes the next action.
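The UE selection rule (the non-gold UE whose bandwidth is below, and closest to, the configured maximum) can be sketched as follows; the UE list and bandwidth values are illustrative.

```python
# Sketch of the UE selection for inter-frequency load handover described above:
# pick the non-gold UE whose bandwidth is below the configured maximum and
# closest to it; ties are broken by taking the first candidate.

def select_ue_for_handover(ues, max_bandwidth_kbps):
    """ues: list of (ue_id, bandwidth_kbps, is_gold). Returns a ue_id or None."""
    best = None
    for ue_id, bw, is_gold in ues:
        if is_gold or bw >= max_bandwidth_kbps:
            continue
        if best is None or bw > best[1]:
            best = (ue_id, bw)
    return best[0] if best is not None else None

ues = [("ue1", 384, False), ("ue2", 128, False), ("ue3", 64, True), ("ue4", 256, False)]
print(select_ue_for_handover(ues, max_bandwidth_kbps=320))  # ue4 (256 kbit/s)
```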
A.d.c.b. BE Rate Reduction
Different from the TF restriction to the OLC algorithm, the BE rate reduction is implemented by
reconfiguring the bandwidth. The bandwidth reconfiguration requires signaling interaction on the Uu
interface. This procedure is relatively long.
In the same environment, different rates have different downlink transmit powers. The higher the rate,
the greater the downlink transmit power. Therefore, the load can be reduced by reconfiguring the
bandwidth.
For HSUPA services, the consumption of CEs is based on the bit rate. The higher the rate, the more the
consumption of CEs. Therefore, the consumption of CEs can be reduced by reconfiguring the bandwidth.
The LDR algorithm is implemented as follows:
1 Based on the integrate priority, the LDR sorts the RABs in descending order. The top RABs
related to the BE services whose current rate is higher than their GBR configured by SET USERGBR
are selected. The number of RABs to select is determined by UL/DL LDR-BE rate reduction RAB
number.
2 The bandwidth of the selected services is reduced to the specified rate. For more details about
the rate reduction procedure, refer to the related description in BE Rate Downsizing and Recovery
Based on Basic Congestion.
3 If services can be selected, the action is successful. If services cannot be selected, the action
fails and the LDR takes the next action.
4 The reconfiguration is completed as indicated by the RB RECONFIGURATION message on the Uu
interface and through the RL RECONFIGURATION message on the Iub interface.
5 The BE rate reduction algorithm is controlled by the DCCC algorithm switch. BE rate reduction
can be performed only when the DCCC algorithm is enabled.

NOTE:
In RAN6.1, BE rate reduction is applied to the selected RABs, but not to UEs.
When admission control of Power/NodeB Credit is disabled, it is not recommended that the
BE Rate Reduction be configured as an LDR action in order to avoid ping-pong effect.

A.d.c.c. Uncontrolled Real-Time QoS Renegotiation


The load can be reduced by adjusting the rate of the real-time services through uncontrolled real-time
QoS renegotiation. In 3GPP R5, the RNC initiates the RAB renegotiation procedure through the RAB
MODIFICATION REQUEST message on the Iu interface.
Upon receipt of the RAB MODIFICATION REQUEST message, the CN sends the RAB ASSIGNMENT
REQUEST message to the RNC for RAB parameter reconfiguration. Based on this function, the RNC can
adjust the rate of real-time services to reduce the load of the current cell.
The LDR algorithm is implemented as follows:

1 Based on the integrate priority, the LDR sorts the real-time services in the PS domain in
descending order. The top services are selected for QoS renegotiation. The number of RABs to
select is determined by UL/DL LDR un-ctrl RT Qos re-nego RAB num.
2 The LDR performs QoS renegotiation for the selected services. The GBR during the service setup
is the maximum rate of the service after the QoS renegotiation.
3 The RNC initiates the RAB MODIFICATION REQUEST message to the CN for the QoS renegotiation.
4 If the RNC cannot find a proper service for the QoS renegotiation, the action fails. The LDR takes
the next action.

A.d.c.d. Inter-RAT Handover in the CS Domain


Inter-RAT Should Be Load Handover in the CS Domain
The cell sizes and coverage modes of 2G and 3G systems are different. Therefore, blind handover
across systems is not taken into account.
The LDR is implemented in the downlink as follows:
1 Based on the integrate priority, the LDR sorts the UEs with the "service handover" IE set to
"handover to GSM shall be performed" in the CS domain in descending order. The top CS
services are selected, and the number of UEs is controlled by the UL/DL CS should be ho user
number parameter.
2 For the selected UEs, the LDR module sends the load handover command to the inter-RAT
handover module to ask the UEs to be handed over to the 2G system.
3 The handover module decides whether to trigger the inter-RAT handover, depending on the capability
of the UE to support the compressed mode.
4 This action succeeds if any UE that satisfies the handover criteria is found. Otherwise, this action
fails.

Inter-RAT Should Not Be Load Handover in the CS Domain


The algorithm for this action is the same as that in Inter-RAT Should Be Load Handover in the CS
Domain. The difference is that this action only involves CS users with the "service handover" IE set to
"handover to GSM shall not be performed".
The number of UEs is controlled by the UL/DL CS should not be ho user number parameter.

A.d.c.e. Inter-RAT Handover in the PS Domain


Inter-RAT Should Be Load Handover in the PS Domain
The algorithm for this action is the same as that in Inter-RAT Should Be Load Handover in the CS
Domain. The difference is that this action only involves PS users with the "service handover" IE set to
"handover to GSM shall be performed", but not CS users.
The number of UEs is controlled by the UL/DL PS should be ho user number parameter.
Inter-RAT Should Not Be Load Handover in the PS Domain
The algorithm for this action is the same as that in Inter-RAT Should Not Be Load Handover in the CS
Domain. The difference is that this action only involves PS users with the "service handover" IE set to
"handover to GSM shall not be performed", but not CS users.
The number of UEs is controlled by the UL/DL PS should not be ho user number parameter.
A.d.c.f. AMR Rate Reduction
In the WCDMA system, voice services work in eight AMR modes. Each mode has its own rate. Therefore,
mode control is functionally equal to rate control.
LDR Algorithm for AMR Rate Control in the Downlink

The LDR algorithm is implemented in the downlink as follows:


1 Based on the integrate priority, the LDR sorts the RABs in descending order. RABs with AMR
services (conversational) and with the bit rate higher than the GBR are selected. The number of
RABs to select is determined by the DL LDR-AMR rate reduction RAB number parameter.
2 The RNC sends the Rate Control request message through the IuUP to the CN to adjust the AMR
rate to the GBR.
3 If the RNC cannot find a proper RAB for the AMR rate reduction, the action fails. The LDR takes
the next action.
LDR Algorithm for AMR Rate Control in the Uplink
The LDR algorithm is implemented in the uplink as follows:
1 Based on the integrate priority, the LDR sorts the RABs in descending order. The top RABs
accessing the AMR services (conversational) and with the bit rate higher than the GBR are
selected. The number of RABs to select is determined by the UL LDR-AMR rate reduction RAB
number parameter.
2 The RNC sends the TFC CONTROL command to the UE to adjust the AMR rate to the GBR.
3 If the RNC cannot find a proper RAB for the AMR rate reduction, the action fails. The LDR takes
the next action.
A.d.c.g. Code Reshuffling
When the cell is in basic congestion for shortage of code resources, sufficient code resources can be
reserved for subsequent service access through code reshuffling. Code subtree adjustment refers to the
switching of users from one code subtree to another. It is used for code tree defragmentation, so as to
free smaller codes first.
The algorithm is implemented as follows:
1 Initialize the SF_Cur of the root node of subtrees to 4.
2 Traverse all the subtrees with this SF_Cur at the root node. Leaving the subtrees occupied by
common channels and HSDPA channels out of account, take the subtrees in which the number of
users is not larger than the value of the Max user number of code adjust parameter as
candidates for code reshuffling.
a If such candidates are available, go to 4.
b If no such candidate is available, go to 3.
3 If the SF_Cur is smaller than the value of the Cell LDR SF reserved threshold parameter, multiply
the SF_Cur by 2, and then go to 2.
Otherwise, subtree selection fails, which leads to code reshuffling failure. This procedure ends.
4 Select a subtree from the candidates according to the setting of the LDR code priority indicator
parameter.
a If this parameter is set to TRUE, select the subtree with the largest code number from the
candidates.
b If this parameter is set to FALSE, select the subtree with the smallest number of users from
the candidates. In the case that multiple subtrees have the same number of users, select the
subtree with the largest code number.
5 Treat each user in the subtree as a new user and allocate code resources to each user.
6 Initiate the reconfiguration procedure for each user in the subtree and reconfigure the channel
codes of the users to the newly allocated code resources.
The reconfiguration procedure on the air interface is implemented through the PHYSICAL CHANNEL
RECONFIGURATION message and that on the Iub interface through the RL RECONFIGURATION
message.
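The candidate-subtree selection in steps 1 through 4 can be sketched as below; the code-tree representation and the numeric values are simplified assumptions for illustration.

```python
# Sketch of the candidate-subtree selection in code reshuffling (steps 1-4 above).
# 'subtrees' maps (root SF, code number) -> number of users in that subtree, for
# subtrees not occupied by common channels or HSDPA channels. Values are illustrative.

def select_subtree(subtrees, max_users, sf_reserved_threshold, prefer_largest_code):
    sf_cur = 4
    while True:
        candidates = [(sf, code, users) for (sf, code), users in subtrees.items()
                      if sf == sf_cur and users <= max_users]
        if candidates:
            if prefer_largest_code:          # LDR code priority indicator = TRUE
                return max(candidates, key=lambda c: c[1])
            # FALSE: fewest users first, largest code number as tie-breaker
            return min(candidates, key=lambda c: (c[2], -c[1]))
        if sf_cur < sf_reserved_threshold:
            sf_cur *= 2                      # widen the search to smaller subtrees
        else:
            return None                      # subtree selection (and reshuffling) fails

subtrees = {(4, 0): 9, (4, 1): 7, (8, 2): 2, (8, 3): 2, (8, 4): 5}
print(select_subtree(subtrees, max_users=3, sf_reserved_threshold=32,
                     prefer_largest_code=False))   # (8, 3, 2)
```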

Figure 43 Code tree before code reshuffling

Figure 44 Code tree after code reshuffling


A.d.c.h. UL and DL LDR Action Combination of a UE
LDR actions in the uplink and the downlink are independent. Sometimes, the actions in both directions
will be applied to the same UE. In this situation, the actions are combined as follows:
1 If the actions in the two directions are identical, the actions are combined. For example, if BE
rate reduction actions in both uplink and downlink need to be applied to the same UE, then a
single RB reconfiguration message can carry the indication to take BE rate reduction actions in
both directions.
2 If the actions in the two directions are different and if one direction requires inter-frequency
handover, the UE undergoes the inter-frequency handover. The other action is not taken.
3 If the actions in the two directions are different and if one direction requires the inter-RAT
handover, the UE undergoes the inter-RAT handover. The other action is not taken.

A.e. Overload Control

After the UE access is allowed, the power consumed by a single link is adjusted by the single link power
control algorithm. The power varies with the mobility of the UE and the changes in the environment and
the source rate. In some situations, the total power load of the cell may be higher than the target load.
To ensure the system stability, Overload Control (OLC) must be performed.

A.e.a. Triggering of OLC


Only power resources and interference can result in overload congestion. Hard resources such as the
equivalent number of users, Iub bandwidth, and credit resources do not cause overload congestion.

ULOLC and DLOLC under the Cell LDC algorithm switch parameter control the functionality of the
overload congestion control algorithm.

Figure 45 Triggering and release of cell power overload congestion


If the current UL/DL load of an R99 cell is not lower than UL/DL OLC Trigger threshold
for some hysteresis (defined by DL State Trans Hysteresis threshold in DL; not
configurable in UL), the cell works in overload congestion state and the related overload
handling action is taken. If the current UL/DL load of the R99 cell is lower than UL/DL OLC
Release threshold for some hysteresis (defined by DL State Trans Hysteresis threshold
in DL; not configurable in UL), the cell comes back to normal state.
The HSDPA cell has the same uplink decision criterion as the R99 cell. The load in the
downlink, however, is the sum of load of the non-HSDPA power (transmitted carrier power of
all codes not used for HS-PDSCH or HS-SCCH transmission) and the GBP.
In addition to periodic measurement, event-triggered measurement is applicable to OLC.
If OLC_EVENTMEAS is ON, the RNC will request the initiation of an event E measurement on power
resource in the NodeB. In the associated request message, the reporting criterion is specified, including
key factors UL/DL OLC trigger hysteresis, UL/DL OLC trigger threshold and UL/DL OLC release
threshold. Then the NodeB checks the current power load in real time according to this criterion and
reports the status to the RNC periodically if the conditions of reporting are met.
NOTE:
The current policy for NodeBs is to preferentially allocate power to DCH users. It is not
recommended that Ptotal, the TCP, be used as the criterion for overload in the HSDPA cell. That is
because the NodeB can automatically adjust the power at the next scheduling period.
For HSDPA cells, it is recommended that the OLC_EVENTMEAS switch be set to OFF, because, due to a
3GPP limitation, the NodeB cannot check the combined load of the non-HSDPA power and the GBP.
Configuration Rule and Restriction:
UL OLC trigger threshold ≥ UL OLC release threshold
DL OLC trigger threshold ≥ DL OLC release threshold
UL OLC trigger threshold ≥ UL LDR trigger threshold
UL OLC release threshold ≥ UL LDR release threshold
DL OLC trigger threshold ≥ DL LDR trigger threshold
DL OLC release threshold ≥ DL LDR release threshold

A.e.b. General OLC Procedure

The general OLC procedure covers the following actions: TF control of BE services, channel switching of
BE services, and release of RABs.

When the cell is overloaded, the RNC takes one of the following actions in each period (defined by the
OLC period timer length parameter) until the congestion is resolved:
Restricting the TF of the BE service (only for DCH BE service)
Switching BE services to common channel
Choosing and releasing RABs (for HSPA or DCH service)
If the first action fails or the first action is completed but the cell is still in congestion, then the second
action is performed.

Figure 46 Detailed OLC procedure


A.e.c. OLC Actions
The OLC actions of restricting the TF of the BE service and choosing and releasing RABs are supported
in the current version.
A.e.c.a. TF Control
OLC Algorithm for TF Control in the Downlink
The OLC algorithm for the TF control in the downlink is implemented as follows:
1 Based on the integrate priority, the OLC sorts the RABs in descending order. The
following RABs are selected:
a The RABs with the DCH BE services whose bit rates are higher than Downlink bit
rate threshold for DCCC. For details of the parameter, refer to Rate Re-allocation
Based on Traffic Volume.
b The RABs with the lowest integrate priority (with the highest integrate priority
value).
The selected RAB number is DL OLC fast TF restrict RAB number.
2 The RNC sends the TF control indication message to the MAC continuously until the congestion is
released. The MAC restricts the TFC selection of these BE services to reduce the data rate step by step.
The MAC restricts the TFC selection such that the maximum TB number is calculated with the formula:
TFmax(N+1) = TFmax(N) x Ratelimitcoeff
where
TFmax(0) is the maximum TB number of the BE service before the service is
selected for TF control.
TFmax(N+1) is the maximum TB number during the time from T0 + RateRstrctTimerLen x N
to T0 + RateRstrctTimerLen x (N+1), where T0 is the time at which the MAC receives the TF control
indication message.
Ratelimitcoeff is a configurable parameter (DL OLC fast TF restrict data rate
restrict coefficient).
3 Each time, the RNC selects a certain number of RABs (which is determined by DL OLC
fast TF restrict RAB number) to perform the TF control, and the MAC entity of each selected
RAB receives one TF control indication message. The number of times the TF
control is performed is determined by the DL OLC fast TF restrict times parameter.
4 If the RNC cannot find a proper service for the TF control, the action fails. The OLC
performs the next action.
5 If the congestion is released, the RNC sends the congestion release indication to the
MAC. At the same time, the rate recovery timer (whose length is defined by DL OLC fast
TF restrict data rate recover timer length) is started. When this timer expires,
the MAC will increase the data rate step by step.
Assumption:
The TFCS before the TF control is {TFC(0), TFC(1), ..., TFC(i), TFC(i+1), TFC(i+2), ...,
TFC(N)}, and the data rate of TFC(i) is higher than that of TFC(j) if i > j.
The current TFC is TFC(i) when the congestion is released and the 4A report is received.
Procedure:
1 The first time the rate recovery timer expires, the TFC sub-set that the MAC can use is
{TFC(0), TFC(1), ..., TFC(i), TFC(i+1)}.
2 The second time the rate recovery timer expires, the TFC sub-set that the MAC can
use is {TFC(0), TFC(1), ..., TFC(i), TFC(i+1), TFC(i+2)}.
3 The (N-i)th time the rate recovery timer expires, the TFCS that the MAC can use is
the TFCS applied before the TF control, and the rate recovery timer is not restarted
any more.
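A small sketch of the TF restriction behaviour follows; the initial TB number, the restriction coefficient, and the number of restriction periods are hypothetical, and rounding down the TB number is an assumption.

```python
import math

# Sketch of the OLC fast TF restriction described above: the maximum TB number
# decays geometrically while congestion lasts, then recovers one TFC step per
# expiry of the rate recovery timer. All numbers are illustrative; rounding down
# the TB number is an assumption of this sketch.

def restricted_tb_numbers(tf_max_0, rate_limit_coeff, steps):
    """TFmax(N+1) = TFmax(N) x Ratelimitcoeff, for 'steps' restriction periods."""
    values = [tf_max_0]
    for _ in range(steps):
        values.append(math.floor(values[-1] * rate_limit_coeff))
    return values

print(restricted_tb_numbers(tf_max_0=8, rate_limit_coeff=0.5, steps=3))
# [8, 4, 2, 1] -> the data rate is reduced step by step until congestion is released
```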

OLC Algorithm for TF Control in the Uplink


For a UE accessing the DCH service, the RNC, in compliance with 3GPP TS 25.331, restricts the
TFC of the UE by sending the TRANSPORT FORMAT COMBINATION CONTROL message to the UE.
Figure 2 shows the message flow, in which the UE does not send any response if the procedure
is performed successfully.
The OLC algorithm for the TF control in the uplink is implemented as follows:
1 Based on the integrate priority, the OLC sorts the DCH BE services in descending order.
The BE services with the rate higher than Uplink bit rate threshold for DCCC (refer to
Rate Re-allocation Based on Traffic Volume) and with the lowest integrate priority (with
the largest integrate priority value) are selected. The number of RABs to select is defined
by the UL OLC fast TF restrict RAB number parameter.
2 The RNC sends the TRANSPORT FORMAT COMBINATION CONTROL message to the UE
that accesses the specified service. The TRANSPORT FORMAT COMBINATION CONTROL
message contains the following IEs:
a Transport Format Combination Set Identity: defines the available TFCs that the UE can
select, that is, the restricted TFC sub-set. It is always the two TFCs corresponding to
the lowest data rates.
b TFC Control duration: defines the period, in multiples of 10 ms frames, for which the
restricted TFC sub-set is to be applied. It is set to a random value in the range of
10 ms to 5120 ms, so as to avoid data rate upsizing for all UEs at the same time.
After the TFC control duration expires, the UE can apply any TFC of the TFCS used before the TF
control.
3 Each time, the RNC selects a certain number of RABs (which is defined by UL OLC fast TF
restrict RAB number) to perform the TF control, and the UE of each selected RAB
receives the TRANSPORT FORMAT COMBINATION CONTROL message. The number of times TF
control is performed is defined by the UL OLC fast TF restrict times parameter.
4 If the RNC cannot find a proper service, the OLC performs the next action.
A.e.c.b. Switching BE Services to Common Channel
The OLC algorithm for switching BE services to common channel is implemented as follows:
1 Based on the integrate priority, the OLC sorts all UEs that have only PS services, including HSPA
and DCH services (except UEs that also have a streaming bearer), in descending order.
2 The top N UEs are selected. The number of selected UEs is equal to Transfer Common Channel
user number. If no UEs can be selected, the action fails and the OLC performs the next action.
3 The selected UEs are switched to the common channel.

A.e.c.c. Release of Some RABs


OLC Algorithm for the Release of Some RABs in the Uplink
The OLC algorithm for the release of some RABs in the uplink is implemented as follows:
1 Based on the integrate priority, the OLC sorts all RABs, including HSUPA and DCH services, in
descending order.
2 The top RABs are selected. If the integrate priorities of some RABs are identical, the RAB with the
higher rate (current rate for DCH RAB and GBR for HSUPA RAB) in the uplink is selected. The
number of selected RABs is equal to UL OLC traff release RAB number.
3 The selected RABs are released directly.

OLC Algorithm for the Release of Some RABs in the Downlink


The OLC algorithm for the release of some RABs in the downlink is implemented as follows:
If the Sequence of user release parameter is set to USER_REL:
1 Based on the integrate priority, the OLC sorts all non-MBMS RABs in descending order.
2 The top priority RABs are selected. If the integrate priorities of some RABs are identical, the
RAB with the higher rate (current rate for DCH RAB and GBR for HSDPA RAB) in the downlink is
selected. The number of selected RABs is equal to DL OLC traff release RAB number.
3 The selected RABs are directly released.
4 If all non-MBMS RABs are released but congestion persists in the downlink, MBMS RABs are
selected.

If the Sequence of user release parameter is set to MBMS_REL:


1 Based on the ARP, the OLC sorts all MBMS RABs in descending order.
2 The top priority RABs are selected. The number of selected RABs is equal to MBMS services
number released.
3 The selected RABs are directly released.
4 If all MBMS RABs are released but congestion persists in the downlink, non-MBMS RABs are
selected.
