Capacity in Huawei
CONTENTS

1. INTRODUCTION
2. CAPACITY
3. Process for UTRAN Capacity Management
   3.1. Capacity Metrics Monitoring
   3.2. Capacity Analysis
      3.2.1. Rule Outs
      3.2.2. Historical Node B Data
      3.2.3. RNC/NodeB Dumps Audit
   Parameters Optimization
   RF optimization
   Activation of Features
   Parameter Optimization
   RF optimization
   5.5.3. Parameters Optimization
   RF optimization
   Activation of Features
   Node B UL Power Capacity Upgrade
8. BACKHAUL (Iub)
   8.1. Available Capacity in Terms of Backhaul Resources
   8.2. Metrics for Backhaul Resources Monitoring
      8.2.1. Traffic Load Measurements
         8.2.1.1. ATM Backhaul
         8.2.1.2. Iub Utilization
         8.2.1.3. Number of Active HSDPA Users
         8.2.1.4. Number of AAL2 Connections
         8.2.1.5. Average Cell Drop Rate
         8.2.1.6. Transport Network Blocking
9. HSxPA Users
   9.1. Available Capacity in Terms of HSDPA Users
   9.2. Available Capacity in Terms of HSUPA Users
   Parameters Optimization
   RF optimization
   Node B DL Power Capacity Upgrade
14. REFERENCES
A. ANNEX I: Load Management in Huawei
   A.a. Priority Involved in Load Control
1. INTRODUCTION
This series of Optimization Guidelines covers all the main topics regarding 3G WCDMA Radio Access Network Optimization.
Refer to the internal Claro document Ref. ####.##, Optimization Process, for a summary of 3G
WCDMA Radio Access Network Optimization basics.
This specific document focuses on CAPACITY and its specifics within Huawei infrastructure (Release
RAN 6.1).
Target users for this document are all personnel requiring a detailed description of this process (Capacity
Optimization), as well as configuration managers who require details to control the functions and
optimize parameter settings. It is assumed that users of this document have a working knowledge of 3G
telecommunications and are familiar with WCDMA.
Version   Date          Author   Changes
          31-Jan-2010   QCES     First Draft of the document
Draft02   26-Feb-2010   QCES     Correction on UL load formulas and added the following sections:
2. CAPACITY
This document describes how to monitor/optimize the performance of a UMTS network in terms of
CAPACITY through counters and KPIs (with focus on Huawei networks). The overall goal of the process
is to have an efficient utilization of the resources: as high as possible with no congestion.
End-to-end capacity dimensioning consists of calculating the required resources at each stage and
comparing them to the available resources, so that bottlenecks are avoided and, if detected, appropriate
actions can be recommended: e.g. the number of CEs in each cell needs to match the radio Erlang capacity,
the number of OVSF codes in each cell needs to match the number of users supported on the air interface,
the Iub bandwidth needs to match the expected Iub throughput, etc.
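As a minimal sketch of this required-versus-available comparison (resource names and figures below are purely illustrative, not taken from any real dimensioning exercise):

```python
# Sketch: compare required vs. available resources per stage to flag
# bottlenecks. All names and figures are illustrative examples only.

REQUIRED = {          # demand estimated from traffic forecasts
    "CE_uplink": 96,
    "OVSF_codes": 120,
    "Iub_Mbps": 14.0,
}
AVAILABLE = {         # installed / licensed capacity
    "CE_uplink": 128,
    "OVSF_codes": 128,
    "Iub_Mbps": 10.0,
}

def find_bottlenecks(required, available):
    """Return the resources whose demand exceeds installed capacity."""
    return [name for name, demand in required.items()
            if demand > available[name]]

print(find_bottlenecks(REQUIRED, AVAILABLE))  # ['Iub_Mbps']
```

In this example the Iub link would be the bottleneck, so backhaul would be the stage to upgrade or optimize first.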
Typically, the air interface is the natural bottleneck for network capacity. Provided the RF is optimized, an
increase in air interface capacity is expensive, as it generally means additional sites or
additional carriers.
It is important to make sure that the air interface is indeed the bottleneck: all other resources should be
dimensioned in excess of the air interface resources, but since other resources are costly too, they need
to be carefully planned, i.e. CEs, backhaul, Iu, MSC and SGSN trunks.
In this document it is assumed that the planned capacity has already been implemented in the network,
and the focus is on its MONITORING and OPTIMIZATION to identify/anticipate/avoid any congestion issues
at any Node/Resource in the network, with an emphasis on UTRAN Nodes/Resources, showing how to
remove congestion from the network in case it is already present. [Further revisions will also include
CORE Network aspects.]
A section will be devoted to describe how to monitor, optimize and troubleshoot the Utilization of each
one of the following Resources:
DL TX POWER
UL RX POWER
Node B Hardware
RNC Hardware
The available capacity for each resource will be estimated based on the installed hardware,
parameters settings and Admission/Congestion Control configuration.
Different metrics to monitor each resource's performance will be introduced. These
measurements will be of two types:
i. Proactive (USAGE): performance counters are used to allow trending of growth
for capacity upgrades, i.e., counters and KPIs to monitor the utilization of the
resource.
ii. Reactive (BLOCKING): performance counters are used to indicate that a
particular element has become congested such that it is causing a negative impact
on other KPIs (mainly on accessibility, but also on retainability, integrity, etc.).
Together with the metrics above, thresholds will also be suggested, of two types: MINOR, for early
detection of the potential exhaustion of a resource, and MAJOR, for the detection of a present
shortage already causing degradation in performance.
Suggestions for the optimization/troubleshooting of each resource will be provided based on
the analysis of the KPIs and blocking metrics.
These guidelines are intended to be as practical as possible and the target was to produce both a list of
Performance Alarms for Capacity Issues Early Detection and Tracking, and a weekly summary, the
Capacity Weekly Report (that could also be implemented in SMART). The aim is to help the
optimizer monitor the capacity trends in all cells in the network and also to highlight the cells that
already require some (or even urgent) attention. For the definition of the thresholds for the Alarms, we
propose to carry out a study based on current available data in OSS. In this document, we provide initial
estimates that need to be verified against real data.
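The proposed study of current OSS data could, for instance, derive the initial thresholds from historical percentiles. The sketch below illustrates one possible approach; the percentile choices (90th/99th) and the sample figures are assumptions to be validated against real data, as suggested above:

```python
# Sketch: derive initial MINOR/MAJOR alarm thresholds for a utilization
# metric from historical OSS samples. Percentile choices are assumptions.

def suggest_thresholds(samples, minor_pct=90, major_pct=99):
    """Return (minor, major) thresholds taken from sorted samples."""
    ordered = sorted(samples)
    def percentile(p):
        idx = min(len(ordered) - 1,
                  int(round(p / 100.0 * (len(ordered) - 1))))
        return ordered[idx]
    return percentile(minor_pct), percentile(major_pct)

# e.g. hourly DL power utilization samples (fraction of max power)
samples = [0.35, 0.41, 0.52, 0.48, 0.61, 0.72, 0.55, 0.66, 0.80, 0.44]
minor, major = suggest_thresholds(samples)
```

With a full week (or more) of busy-hour samples per cell, the same routine would give per-cell starting points that can then be refined manually.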
The attached Excel file contains the full list of proposed thresholds.
Note that the first release of this document focuses on capacity troubleshooting: issue detection and
solving.
[To be discussed within Claro if future versions will add capacity trends estimation aspects or these ones
are to be considered under the Planning process scope]
Before starting, it is recommended to revise the References in section 14 in order to refresh and clarify
the concepts that will be used extensively in this text: Radio Resource Management, Admission Control,
Congestion/Load Control, etc.
In this document we will use the following naming conventions:
Counters will be written in italic letters:
o Examples: VS.MeanTCP, VS.MinTCP
Parameters will be written in italic and bold letters:
o Examples: DLCELLTOTALTHD
3. PROCESS FOR UTRAN CAPACITY MANAGEMENT
The process proposed in this document includes, at a high level, the following three sequential steps:
1. Counter & KPI monitoring with agreed trigger thresholds (daily/weekly) to measure resource
usage or blocking
2. Detailed blocking cause analysis & optimization actions
3. Possible hardware capacity upgrade actions & verification
The capacity upgrade process involves making a decision as to whether an upgrade is required or
whether the Node B is likely to benefit from further optimization. If an upgrade is required, upgrade
verification is performed once the upgrade has been implemented.
1. Describe & monitor capacity KPIs (proactive or reactive) daily/weekly during the busy hour.
2. If one of the thresholds is exceeded, start detailed blocking analysis.
3. Start tuning actions in the interface where the triggering has happened. These actions could be,
for example:
o Parameters optimization
o RF optimization
4. Start capacity upgrade if fault finding & tuning did not help.
5. Verify the performance after the upgrade.
Capacity upgrade activities may differ depending on what triggered the process. If the tuning
actions don't help, then the next activities could include:
One should note that the triggering may also depend on the lead time needed to get the
additional HW in place. This should be studied on a case-by-case basis. For example, for Iub capacity it
could take longer to get new HW in place, whereas installing new Node B HW may be faster. Thus the
threshold settings should be tuned so that the capacity upgrade allows for the time necessary to procure and
implement the solution.
3.1. Capacity Metrics Monitoring
The following chapters will introduce metrics (counters and KPIs) to monitor the utilization (and eventual
exhaustion) of each resource. These measurements focus on two aspects:
The first metrics (for each resource's usage monitoring) can be considered proactive, as they show the
current occupancy of the resource and allow us to estimate trends in its usage and hence anticipate
congestion. Blocking metrics would be considered reactive, as they detect current blocking and its
impact on other KPIs and need, in general, urgent reaction from the optimizer.
Both types of measurements will be assigned two different thresholds (minor and major), so the following
Performance Alarms will be triggered according to their importance/impact:
Utilization measurements:
o MINOR: resource utilization has reached a certain level (Thr.minor) that is considered
appropriate to start a capacity analysis in order to anticipate/prevent congestion
o MAJOR: resource utilization has reached a certain higher level (Thr.major) that is assumed
to be already causing some impact on performance
Blocking measurements:
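The MINOR/MAJOR scheme for a utilization metric can be sketched as a simple classification (the threshold values below are placeholders to be verified against real OSS data, per the earlier note):

```python
# Sketch: classify a utilization sample against the two suggested
# thresholds. Threshold values are illustrative placeholders.

THR_MINOR = 0.70   # start capacity analysis to anticipate congestion
THR_MAJOR = 0.85   # assumed to be already impacting performance

def classify(utilization, thr_minor=THR_MINOR, thr_major=THR_MAJOR):
    """Map a utilization fraction to an alarm severity."""
    if utilization >= thr_major:
        return "MAJOR"
    if utilization >= thr_minor:
        return "MINOR"
    return "OK"
```

The same structure applies to blocking metrics, with thresholds expressed on the blocking rate instead of the utilization fraction.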
3.2. Capacity Analysis
Once the capacity thresholds have been exceeded based on daily or weekly monitoring activity, the
blocking analysis work should start. The target of this step in the process is the identification of the real
root cause of the detected (or anticipated) capacity issue: unexpected traffic increase due to an event,
faulty equipment, misconfiguration, or simply exhaustion of one of the RAN capacity resources due to
the normal, desirable increase in traffic.
The analysis will always consider the possibility of further optimizing the network so the issue is overcome
with a more effective utilization of the currently available capacity. Only if no margin is found to improve
efficiency will a capacity upgrade be proposed.
If any of these possibilities can explain the capacity issues detected, then some short-term corrective
actions should be considered to deal with the temporary degradation in the KPIs until the real
problem/cause is solved. Some suggestions will be provided for these specific cases.
In some situations, like seasonal traffic or special events, the new capacity needs should have been
estimated in advance in order to accommodate the expected increase in traffic. If these situations are not
anticipated, then of course some capacity issues will arise. Also, in order to react to unpredictable
increases in traffic, some Traffic Offload actions are possible and will be explained in order to minimize
their impact on KPIs.
Has the Node B triggered the capacity upgrade process in the past?
History of optimization already completed for that Node B and its neighbours.
Performance history (other KPIs to be checked: SHO, traffic, throughputs, etc.)
Current RNC/NodeB databuild for that site and its neighbours.
Planning tool plots of best-server areas, service coverage and CPICH Ec/Io coverage.
Site details (site survey and installation reports, pictures, equipment installed, etc.)
Easy access to all these sources of information should be guaranteed to the Optimization Team (it should
be treated as a key component of its interfaces with other departments).
3.3. Capacity Optimization
Before deciding on any capacity upgrade, further optimization should be tried first to limit the impact on
CAPEX. The following areas of work should be explored:
Parameters Tuning
Different recommendations will be provided for each specific resource covered in these
guidelines.
RF optimization
In case parameter optimization does not help, RF optimization can be done. This is more costly
for the operator than parameter optimization. RF optimization basically means
tuning the Node B antenna system and includes tilt, bearing and antenna height changes to
improve cell dominance areas. It may also include re-engineering the site to reduce feeder
lengths or changing the type of antennas.
Activation of Features
Different features can be activated in the network to improve the utilization of the resources.
Their activation needs to be carefully analyzed, as they usually also imply a significant
expenditure for the operator and results may not be as impressive as anticipated. Trials are
highly recommended before any final decision is taken. Given the cost involved, feature
activation can also be considered under the next step in the process (capacity upgrade).
Please note that parameter tuning will not overcome poor RF. This guideline is written
under the assumption that the Optimization Process is already in the In-Service Phase of its lifecycle.
Optimizers can now focus on operational KPIs, as the RF and Service ones already received proper
attention in the Initial Optimization Phase. The main RF issues (insufficient RSCP and Ec/Io levels,
overshooting, pilot pollution, cell fragmentation, etc.) should have been detected and corrected, so even
if RF optimization is a continuous effort, it can be assumed that the optimization work is done over a
well-tuned RF environment. In this context it is suggested to first try to tune the current settings before
trying more costly RF optimization possibilities. Parameter changes are easier and faster to implement,
and their impact is also easier and faster to evaluate.
When a UTRAN cell becomes congested for whatever reason, one possibility to provide temporary (and
often only partial) relief is to enable whatever features the vendor provides for offloading traffic.
This traffic offloading can be done between 3G cells and also from 3G to 2G (of course, the second
option is only viable if the target 2G cells are not also congested).
3.3.1.1. Physical and Parameter Setting Changes
There are multiple actions that can be considered in order to share the traffic between the different cells
in an area, to obtain a decrease in the traffic carried by the congested cell and therefore solve, or at
least relieve, its capacity issues.
Tuning can be applied, as already suggested, to both the
Parameter settings
o Offsets between cells in Active Set dynamics (Cell Individual Offsets)
and in idle mode (Hysteresis and Qoffsets)
o Multicarrier settings
o HCS (Hierarchical Cell Structure) settings, if activated
o Common Channel Power settings, especially CPICH power, which should be considered very
carefully due to the potential negative impacts
1. The RNC makes a decision on the admission of the target inter-frequency cell for blind handover.
2. If the admission request is accepted, the DRD procedure is performed for the target inter-frequency cell for blind handover.
3. The RNC starts the radio link setup procedure to perform the inter-frequency handover.
4. The RNC starts the radio bearer setup procedure to complete the inter-frequency handover on the
Uu interface and the service setup.
If step 2, 3 or 4 fails, the RNC performs repeated RAB DRD in another target inter-frequency cell for blind
handover until the retry succeeds, until the retry in all such cells fails, or until the number of retries
reaches the value of Max inter-frequency direct retry number.
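The retry behaviour described above can be sketched as follows. This is only an illustration of the control flow: `try_blind_handover` stands in for the whole admission, DRD and radio link/bearer setup sequence and is not a real RNC interface.

```python
# Sketch of the RAB DRD retry loop: the RNC tries candidate
# inter-frequency cells until one succeeds, all candidates fail,
# or the retry limit is reached. Names are illustrative only.

def rab_drd(candidate_cells, try_blind_handover, max_retries):
    """Return the cell where DRD succeeded, or None if it never did."""
    retries = 0
    for cell in candidate_cells:
        if retries >= max_retries:
            break                       # Max inter-frequency direct retry number reached
        if try_blind_handover(cell):    # admission + DRD + RL setup + RB setup
            return cell
        retries += 1                    # this candidate failed; try the next one
    return None
```

On a `None` result the service would fall back to the DCH and re-attempt access, as the notes below describe.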
Notes:
After an HSPA service request is denied, the service falls back to the DCH. Then the service
re-attempts to access the network.
The RAB DRD to a target cell in another system (for example, GSM) for blind handover is similar.
For details, refer to Inter-RAT Handover.
According to the cell type (R99 or R99+HSDPA), an HSDPA user accessing an R99 cell can be
directed to an R99+HSDPA cell through DRD. According to the cell parameter R99 CS separation
indicator or R99 PS separation indicator, an R99 user accessing an R99+HSDPA cell can be
directed to an R99 cell through DRD.
RAN6.1 does not support inter-RAT DRD for RABs of combined services.
This feature should be analyzed in depth: as it plays with the CPICH, it is mandatory to
have very detailed information in order to evaluate its functioning.
Intra-frequency Load Balancing (LDB) is performed to adjust the coverage areas of cells based on the
measured values of cell load. Currently, the intra-frequency LDB algorithm is applicable to only the
downlink.
LDB between intra-frequency cells is implemented by adjusting the transmit power of the Primary
Common Pilot Channel (P-CPICH) in the associated cells. When the load of a cell increases, the cell
reduces its coverage to lighten its load. When the load of a cell decreases, the cell extends its coverage
so that some traffic is off-loaded from its neighboring cells to it.
When the intra-frequency LDB algorithm is active, that is, when INTRA_FREQUENCY_LDB is set to 1,
the RNC checks the load of cells periodically and adjusts the transmit power of the P-CPICH in the
associated cells based on the cell load.
If the downlink load of a cell is higher than the value of Cell overload threshold, it is an indication
that the cell is heavily loaded. In this case, the transmit power of the P-CPICH needs to be reduced
by a step, which is defined by the Pilot power adjustment step parameter. If the current transmit
power is equal to the value of Min transmit power of PCPICH, however, no adjustment is
performed.
Because of the reduction in the pilot power, the UEs at the edge of the cell might be handed over to
neighboring cells, especially to those with a relatively light load and with relatively high pilot power. After
that, the downlink load of the cell is lightened accordingly.
If the downlink load of a cell is lower than the value of Cell underload threshold, it is an
indication that the cell has sufficient remaining capacity for more load. In this case, the transmit
power of the P-CPICH increases by a step, which is defined by the Pilot power adjustment step
parameter, to help lighten the load of neighboring cells. If the current transmit power is equal to
the value of Max transmit power of PCPICH, however, no adjustment is performed.
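One LDB period of the adjustment described above can be sketched as follows. Parameter names mirror the text (Cell overload threshold, Pilot power adjustment step, Min/Max transmit power of PCPICH), while the numeric values are illustrative placeholders, not Huawei defaults:

```python
# Sketch of one intra-frequency LDB period: reduce P-CPICH power by one
# step when the cell is overloaded, increase it when underloaded, and
# never move past the configured min/max. Values are illustrative.

def adjust_pcpich(load, power_dbm,
                  overload_thr=0.80, underload_thr=0.40,
                  step_db=0.5, min_dbm=27.0, max_dbm=33.0):
    """Return the new P-CPICH transmit power for this LDB period."""
    if load > overload_thr and power_dbm > min_dbm:
        return max(min_dbm, power_dbm - step_db)   # shrink coverage to shed load
    if load < underload_thr and power_dbm < max_dbm:
        return min(max_dbm, power_dbm + step_db)   # grow coverage to absorb load
    return power_dbm                               # no adjustment needed
```

Run periodically per cell, this produces exactly the behaviour in the text: an overloaded cell steps its pilot down until it reaches the minimum, and an underloaded cell steps it up until it reaches the maximum.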
Soft handover overhead above 30-40% will have a negative impact on capacity, as mobiles engaged
in SHO consume more channelization codes than single-link connections, and each RL that is
established also requires resources on the Iub interfaces of the Node Bs involved. A 40% probability of
SHO demands 40% extra backhaul capacity. For UEs in softer handover there is no impact on
backhaul capacity because signaling and traffic are combined locally in the Node B. Reducing the
level of soft handover in the network reduces the downlink transmit power requirement in all cells and
increases the downlink capacity.
The benefit of SHO is Soft Handover gain: A UE can combine a number of downlink signals using the rake
receiver and get a net improvement in performance of as much as 3 or 4 dB compared to a single link
connection. This, in fact, is taken into account favourably when determining the link budget. However, a
UE in SHO will also be power controlled by all Node Bs concerned.
Simulations (3GPP 25.942) have shown that in a planned area only 1% of locations require SHO to 7 or
more cells. Additionally, the SHO gain is minimal when more than 3 cells are in the active set. The
conclusion is that the UE does not have to support more than 4 to 6 cells in the active set.
In summary, SHO is an area potentially subject to further improvements to save capacity. We are going
to review a possible approach to Soft Handover Optimization.
The soft handover overhead KPI and the average active set size KPI may be used to indicate the level of
soft handover experienced by a cell. If the level of soft handover is high then it may be possible to
achieve a reduction without impacting UE cell edge performance. The extent to which the level of soft
handover can be reduced should be specified within the soft handover optimization process (final target
expected to be in the range 30-40% though).
Soft HO Overhead [%] = (Total number of radio links / Number of active connections − 1) × 100
Soft handover overhead of RT and NRT, which shows how much overlap there has been between cells. This formula
is used when working at cell level.
[Practical Example] Figures below provide two examples for soft handover overhead and average active
set size plots. The plots present data recorded during 1 week. Each point is based upon the counter
results recorded at hourly basis.
[Figure: soft handover overhead (left axis, 0-200) and average active set size (right axis, 0.0-3.0) vs. time (1-hour samples), for Example cell 1 and Example cell 2]
The soft handover overhead and average active set size characteristics are relatively spiky, although
there are clear differences in the average values. The spiky nature of the plots can be reduced by
increasing the period of time over which each sample is plotted. This is an acceptable approach for the
soft handover KPI because the soft handover overhead and average active set size should be relatively
independent of the level of traffic, i.e. they should remain stable as the level of traffic changes throughout the day.
It is suggested that the average active set size KPI is used. It is also suggested that the counters are
aggregated over a 24 hour period prior to applying the KPI equation.
Once the average active set size KPI data has been plotted, a decision is required regarding whether or
not the level of soft handover should be reduced. One suggestion could be: if the average active set size
exceeds 1.75 for any one 24-hour period during the week, then the soft handover tuning process should
be triggered. An average active set size of 1.75 corresponds to a soft handover overhead of
approximately 50%. It may be appropriate to define different thresholds for different environment types,
i.e. according to the site density.
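The suggested trigger can be sketched as follows. The counter inputs are hypothetical stand-ins (total radio links and active connections per hour); the real KPI would be built from the vendor's counters:

```python
# Sketch of the suggested SHO tuning trigger: aggregate hourly counters
# over each 24-hour period and flag the cell if the average active set
# size exceeds 1.75 in any period. Counter inputs are illustrative.

def avg_active_set_size(link_samples, conn_samples):
    """Average radio links per active connection over one period."""
    return sum(link_samples) / sum(conn_samples)

def sho_tuning_triggered(daily_links, daily_conns, threshold=1.75):
    """daily_*: one list of 24 hourly counter samples per day."""
    return any(avg_active_set_size(links, conns) > threshold
               for links, conns in zip(daily_links, daily_conns))
```

Aggregating the raw counters over the full 24 hours before dividing, rather than averaging hourly ratios, matches the suggestion above and suppresses the spikiness of the hourly plots.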
3.4. Capacity Upgrade
A capacity upgrade should be considered if no fault could be identified that explains the capacity
issue, if there was no unexpected sudden increase in traffic caused by seasonal variation, and if the
problem cannot be solved by tuning the current network configuration or by re-accommodating the traffic
to achieve a more efficient utilization of current resources.
If the conclusion from the previous phase of the optimization process is that we have a
resource shortage that cannot be overcome with further optimization, then the only way to
solve the problem is to increase (= upgrade) the currently available capacity. Capacity upgrades generally
involve one of the following steps. They appear in the list in order of preference, driven mainly by cost
(from lowest to highest) but also by other considerations:
Additional Carrier
Especially here in Claro, where spectrum availability is not an issue (other OpCos in AMX
could be facing different spectrum scenarios), a second carrier should be the first option
(compared to 6 sectors, HOS):
The increase in capacity for a 2nd carrier is higher than 100% (trunking efficiency). HOS shows
gains of 1.8x in the technical literature, but real experiences range between 30 and 50%.
Given an existing optimized tri-sector network, it will be easier to deploy the 2nd carrier
keeping the same scenario. On the other hand, HOS requires a lot of optimization to
keep under control all the negative aspects of increasing the number of cells: pilot pollution,
SHO areas, neighbor lists, etc.
With HOS, the potential negative impact (not present for the 2nd carrier) on HSDPA throughputs in
mobility (due to the increase in the number of cell changes) and on HSUPA throughputs (due to
an increase in the number of other-cell interferer users) needs to be analyzed further.
A wide deployment of the 2nd carrier will prepare the network for an easier rollout of multicarrier
solutions: for instance Rel. 8 MC-HSPA+ (rates: DL 42 / UL 11 Mbps) without MIMO, just
64QAM.
Additional Site
In terms of capacity upgrade, this is the last option, to be considered when all previous ones
are not available, when the capacity-gain/cost trade-off makes them less attractive, or when
there is a desire to increase in-building penetration within an area.
If traffic keeps increasing, site densification is to be considered even in advance, to enable
the network to effectively absorb the additional traffic (assumed to be already covered by
the Planning/Design/Dimensioning Process).
An additional carrier, sector or site requirement is one of the outputs of the Optimization Process that
feeds back to the Planning Process, so the implications for RF performance can be evaluated properly
through the RF prediction tool (ASSET3G) and RF tuning of the neighborhood sites can also be
anticipated. At the same time, the planning department will consider its own coverage/capacity
targets and will decide the best option possible in accordance with the company policies and strategy:
macro, micro, special in-door project, repeater, etc.
In some cases, a re-engineering of the site may be required in order to make any of the above
possibilities feasible: replacing an old BTS cabinet type with a more recent one offering more capacity options
(more nominal power, more slots for CE boards, etc.).
As described in Claro internal doc. Ref.##.####: Optimization Process, after any significant network
configuration change (RF or parameter settings), potential impacts need to be monitored and verified. In
our present context, the solution/correction of the capacity issue that triggered the whole process should
be confirmed.
This is just an overall presentation of the process; further details can be found for each resource in the
coming chapters of these guidelines.
The blocking on the radio interface gives information about the lack of radio resources. This could mean that
the Node B is using all the available DL power to maintain the connections of the existing users, and
therefore admission control is denying service to additional users. Another reason could be an
increased UL interference situation, meaning that there is no room for more UEs to be connected to
that particular cell. Additionally, the free codes available in the channelization code tree can become
a key resource to monitor, in particular when HSDPA and HSUPA are enabled in a cell.
The following sections are devoted to these three radio (or air) interface resources, starting with the DL TX
Power.
4. DL TRANSMITTED POWER
In the downlink direction the maximum transmit power available from the highly linear power amplifier
can be considered constant. The total power available will depend on the vendor and on the type of
NodeB. For a macro cell product it could be expected to be in the range 20 to 45 W, for a micro cell or
pico cell product it would be lower. The specifications (3GPP TS 25.104) require that the power amplifier
has a total power dynamic range of at least 18 dB. Maximum transmit power is limited to 50 dBm (100
W).
This power is shared between all downlink channels. Downlink power control is implemented through the
adjustment of the weighted sum of the downlink channels. Broadcast and common control channels are
likely to be allocated a fixed proportion of the available power, related to the power allocation of the
CPICH (common pilot) channel. The remaining power is then shared between users. The weighting
may be used to vary the proportion given to each user depending on path loss, interference and required quality
of service (based on power control). For closed-loop power control the UE indicates the requested power
step changes to the Node B. However, a limit will be set for the power proportion available to each
channel type, so the Node B may not obey all power control commands. If the cell is operating at less
than full load, then the total power transmitted is less than the total power available.
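A minimal worked example of this power budget (the figures are illustrative for a 20 W macro cell; the common-channel allocations are assumptions, not Huawei defaults):

```python
# Sketch of the downlink power budget: common/broadcast channels take a
# fixed share tied to the CPICH allocation, and the remainder is what
# can be distributed among users. All figures are illustrative.

total_w = 20.0          # maximum PA output (43 dBm macro cell)
cpich_w = 2.0           # CPICH allocation (10% of total, a typical choice)
other_common_w = 1.6    # other common/broadcast channels (assumption)

traffic_w = total_w - cpich_w - other_common_w   # power left for users
headroom = traffic_w / total_w                   # fraction available for traffic
```

In this example about 16.4 W (82% of the PA output) remains for user traffic, which is the quantity that ultimately limits downlink capacity and quality of service.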
More power is required if more channels are required, if users are distant from the Node B, if users
request higher data rates, or if users request a higher quality of service. Thus the total power available in
a cell ultimately limits downlink capacity and quality of service. The figures and charts below give
examples of these ideas.
4.1.
The Node B DBS3800 is a distributed NodeB in compliance with the 3GPP R99/R4/R5/R6 protocols.
Transport Subsystem
Providing physical interfaces between the BBU3806 and the RNC for data communication
Providing OM channels between the BBU3806 and the LMT or between the BBU3806 and the
M2000
Baseband Subsystem
The baseband subsystem processes uplink and downlink baseband data. The functions of the baseband
subsystem are performed by the following modules:
Uplink baseband data processing module: consists of the demodulation unit and the decoding
unit. In this module, uplink baseband data is processed into despreading soft decision symbols
after access channel searching, access channel demodulation, and dedicated channel
demodulation. The symbols are then sent to the RNC through the transport subsystem after
decoding and Frame Protocol (FP) processing.
Downlink baseband data processing module: consists of the modulation unit and the encoding
unit. The module receives the service data from the transport subsystem, and implements FP
processing, encoding, transport channel mapping, physical channel generating, framing,
spreading, modulation, and power control combination. Then the data is finally sent to the
interface module.
Control Subsystem
The control subsystem manages the entire distributed NodeB. The control subsystem performs OM,
processes signaling, and provides the system clock.
The signaling processor performs functions such as NBAP signaling processing, ALCAP processing,
SCTP processing, and logical resource management.
The clock module provides the system clock for the NodeB. The reference sources of the system
clock are the Iub phase-locked line clock (obtained from the E1, optical port, or FE), the GPS clock,
and the external clock (for instance, the BITS clock). Versions later than V100R009 support
extracting the clock from the FE.
Interface Module
The interface module performs the following functions:
Each CPRI port of the BBU3806 uses an Enhanced Small Form-Factor Pluggable (ESFP) optical
port, and transports the uplink and downlink baseband data of the RRU/pRRU3801/RHUB3808.
Each BBU3806 provides an EIa port to share synchronization data, baseband data, power control
data, and transmission data between BBU3806s.
Multiple topologies such as star, chain, and tree are supported between the RNC and the BBUs.
Star Topology
As the most commonly used topology, the star topology applies to most areas, especially to densely populated
areas.
Chain Topology
The chain topology applies to belt-shaped, sparsely populated areas, such as highways and
railways.
Tree Topology
The tree topology applies to complicated networks and sites such as a large area with concentrated
hot spots.
Multiple topologies such as star, chain, and ring are supported between the BBU and RRU3801Cs.
Supported sector x carrier configurations: Omni-directional, 1 x 2, 2 x 1, 3 x 1, 3 x 2, 3 x 3, 3 x 4, 6 x 1, and 6 x 2.
The DBS3800 supports six cells, or a maximum of 12 cells if the EBBC is configured.
[Table: minimum number of BBU3806s and minimum number of RRU3804s required for each configuration, 1x1 through 6x2]
NOTE: In four-carrier configurations such as 1 x 4, 2 x 4, and 3 x 4, if the power required for each carrier is 30 W, the minimum number of RRU3804s doubles.
[Table: minimum number of BBU3806s and minimum number of RRU3801Cs required for each configuration, 1x2 through 6x2]
NOTE: N x M = sector x carrier. For example, 3 x 1 indicates that each of the three sectors has one carrier.
In the Claro Brazil network, Huawei has deployed the DBS3800 network-wide. The new sites will use the
DBS3900.
Configurations of the DBS3900
The NodeB has an industry-leading modular design of multiple modes and forms, rendering it adaptive
to various installation scenarios. This effectively addresses the requirements for the broadband solution,
green network construction, and a mobile network of converged multiple modes. Beyond that, this
enables the construction of a future-oriented network and smooth evolution to the Long Term Evolution
(LTE) system.
Solution Integrating Multiple Technologies
With the unified platform, modular design, and flexible combination of the basic modules and
auxiliary devices, the NodeB can be presented in multiple forms.
With this solution, BBUs and RF modules of different modes (GSM/UMTS/LTE) can be placed in one
cabinet, and cabinets of different modes can be installed in stack mode.
The UMTS RF module supports smooth evolution to the LTE system from the perspective of
hardware and supports the UMTS/LTE dual-mode NodeB through software upgrade in the same
frequency band.
Broadband Solution
The outstanding performance of the RRU3804 and WRFU/MRFU ensures wide coverage, high
throughput, and fewer sites.
The RRU3804 and WRFU/MRFU adopt a multi-carrier technology that features 20 MHz
bandwidth and 4-carrier configuration.
A single RRU3804 supports the 60 W output power at the antenna connector, and a single
WRFU/MRFU supports 80 W at the antenna connector.
The NodeB supports the High Speed Packet Access (HSPA) at full rate.
The data rate of the HSPA service can peak at 14.4 Mbit/s in the downlink.
The data rate of the HSPA service can peak at 5.76 Mbit/s at the physical layer of the Uu
interface in the uplink.
The IP-based switching core of the NodeB allows operators to obtain higher bandwidth and
facilitates capacity expansion and network adjustment by utilizing the existing IP transmission
resources, thereby curtailing the cost of network deployment.
The NodeB can provide the Fast Ethernet (FE) port at 100 Mbit/s externally, and the IP Radio
Access Network (RAN) can reuse the existing IP transmission resources on the Iub interface.
Apart from being more cost-effective than the Asynchronous Transfer Mode (ATM)-based
network, the IP-based network provides the multi-access mode and sufficient transmission bandwidth to
satisfy data services with high data rate.
Construction of a Green Network
The compact and modular design, innovative PA, and power consumption management are the keys to a
green communication network that provides energy saving features and requires fewer equipment
rooms.
The RF modules of the NodeB adopt the advanced Digital Pre-Distortion (DPD) and A-Doherty
technologies to raise the power amplifier efficiency to 40%. Thus, the power consumption of the
entire NodeB is lowered.
The reduced power consumption of the cabinet macro NodeB lowers not only the electricity
expense but also the investment in power supply, backup batteries, air conditioners, and heat
exchangers.
As one of the most compact macro NodeBs in the industry, the cabinet macro NodeB takes
up a small footprint.
The RF cabinet of the BTS3900A uses the direct-ventilation design. In comparison with the
traditional macro NodeB, power consumption of the BTS3900A is lowered by 40%.
The DBS3900 is characterized by separate baseband and RF modules and distributed installation
that facilitate transportation, configuration, and installation.
The BBU3900 of the distributed NodeB is characterized by the small footprint, easy
installation, and low power consumption. In addition, the BBU3900 can be placed in the spare space of
an existing site.
The RRU, small and light, supports installation near the antenna, thus preventing feeder loss.
Working in natural heat dissipation mode, the RRU does not require any fans. The high reliability of the
RRU reduces the routine maintenance cost.
All the NodeB products can share the baseband modules, RF modules, and power systems,
thereby reducing the cost of spare parts and maintenance.
The preceding features of the NodeB can fully address the concern of operators regarding site
acquisition, expedite network rollout, decrease utilization of resources such as manpower, power supply,
and space, and lower the Total Cost of Ownership (TCO).
Smooth Evolution to the Future-Oriented Radio Network
The NodeB, adopting the unified modular design, satisfies the requirements of global operators for
service upgrade, network evolution, and deployment of new radio technologies, thus implementing a
future-oriented network.
The NodeB supports co-cabinet and multi-mode applications of modules in different modes.
The hardware of UMTS RF modules supports HSPA+ and smooth evolution to the LTE system. In
addition, the BBU of the existing NodeB can be shared to the maximum extent.
The capacity of the DBS3900 can be expanded through addition of modules or license upgrade. When
license upgrade is required, the capacity can be expanded by 16 cells at a time. In the early phase of
network construction, you can choose a small-capacity configuration (such as 3 x 1 configuration). When
the number of subscribers increases, you can smoothly expand the small-capacity configuration to a
large-capacity configuration (such as 3 x 2 or 3 x 4 configuration).
[Tables: number of WBBPs and number of RRU3804s / RRU3801Cs (no TX diversity) required for the 3x1, 3x2, 3x3, and 3x4 configurations]
The deployed RRUs provide 40 W for one carrier; with two or more carriers the power is split, so in the case of two carriers each carrier gets 20 W. The new sites will get RRUs of 60 W.
The operator can limit the total maximum power that is allowed to be transmitted by an RBS with the
parameter:
DLCELLTOTALTHD
In order to manage the HSDPA feature, the Admission Control and Congestion Control functions control the
usage of the total non-HS downlink transmitted carrier power, that is, the power used for transmission of R99
and common control channels. The remaining power can then be used for transmission of HS-PDSCH/HS-SCCH channels to HSDPA users. By changing the HSPAPOWER setting, the portion of
downlink power available for HS connections can be increased or decreased.
4.2.
DL Tx Power [%] = VS.MeanTCP (W) / ( (DLCELLTOTALTHD / 100) × Pmax (W) ) × 100
where DLCELLTOTALTHD is the maximum DL power threshold, expressed as a percentage of the maximum cell power Pmax.
Note: The counter and the parameter should be converted to watts in order to get an accurate result.
Please be aware that HSDPA may be using the remaining power not used by R99 (Common Control
Channels plus R99 traffic), so it is possible to get values close to 100% in this KPI with no impact on
Accessibility (no AC Rejections). For values that close to 100%, the impact on user-perceived
throughput should be checked.
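The unit conversion required by this KPI is a common source of errors. A small sketch, assuming DLCELLTOTALTHD is expressed as a percentage of the maximum cell power Pmax (function names and example values are illustrative):

```python
def dbm_to_watts(dbm):
    """Convert a power value in dBm to watts."""
    return 10 ** (dbm / 10.0) / 1000.0

def dl_tx_power_pct(mean_tcp_dbm, dl_cell_total_thd_pct, pmax_w):
    """DL Tx Power utilization [%]: mean TCP over the admitted power cap."""
    mean_tcp_w = dbm_to_watts(mean_tcp_dbm)
    cap_w = dl_cell_total_thd_pct / 100.0 * pmax_w
    return mean_tcp_w / cap_w * 100.0

# Example: VS.MeanTCP = 40 dBm (10 W), cap = 80% of a 20 W PA (16 W)
print(round(dl_tx_power_pct(40.0, 80.0, 20.0), 1))  # -> 62.5
```

The same conversion applies to the non-HS and HSDPA variants of the KPI, with the corresponding counters substituted for VS.MeanTCP.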
This can be calculated accordingly based on the Average, Maximum or Minimum of the metric.
There are counters also for Maximum and Minimum values of TCP:
Maximum TCP(dBm): VS.MaxTCP
Minimum TCP (dBm): VS.MinTCP
non-HS DL Tx Power [%] = VS.MeanTCP.NonHS (W) / ( (DLCELLTOTALTHD / 100) × Pmax (W) ) × 100
Note: The counter and the parameter should be converted to watts in order to get an accurate result.
For the average carrier power, these values must be checked against the maximum allowed values, i.e.,
Power Utilizations close to 100%. If the average values are close to these thresholds, the chance of
congestion/admission blocking is high. In fact, it is more informative to look at the maximum sampled
carrier power values. Still, since an RL setup takes a very short time to establish, it is very hard to say
how those maximum values relate to RL failures.
Thresholds: MINOR: > 80% | MAJOR: > 100%
Huawei statistics also provide the HSDPA required-power measurement values of an HSDPA cell in
the RNC: VS.HSDPA.MeanRequiredPwr, VS.HSDPA.MaxRequiredPwr, and VS.HSDPA.MinRequiredPwr.
HSDPA DL Tx Power [%] = VS.HSDPA.MeanRequiredPwr (W) / ( (DLCELLTOTALTHD / 100) × Pmax (W) ) × 100
Note: The counter and the parameter should be converted to watts in order to get an accurate result.
For the average carrier power, these values must be checked against the maximum allowed values, i.e.,
Power Utilizations close to 100%. If the average values are close to these thresholds, the chance of
congestion/admission blocking is high. In fact, it is more informative to look at the maximum sampled
carrier power values. Still, since an RL setup takes a very short time to establish, it is very hard to say
how those maximum values relate to RL failures.
The table below gives the Equivalent Number of Users (ENU) per service, with columns Uplink for DCH, Downlink for DCH, HSDPA, and HSUPA. Service rates are in kbit/s, with 3.4 kbit/s of signaling added to the PS rates. The 3.4 kbit/s is the rate of the signaling carried on the DCH.

Service          UL for DCH   DL for DCH   HSDPA    HSUPA
-                0.44         0.42         1.11     1.11
-                1.44         1.42         1.35     1.04
-                0.78         0.84         -        -
3.4 + 16 (PS)    1.62         1.25         1.11     0.85
3.4 + 32 (PS)    2.15         2.19         1.70     0.96
3.4 + 64 (PS)    3.45         3.25         2.79     1.20
-                5.78         5.93         4.92     1.67
-                6.41         6.61         5.46     1.91
-                10.18        10.49        9.36     2.83
-                14.27        15.52        14.17    3.91

(Some service labels and two cell values were not recoverable.)
DL EUN Utilization [%] = VS.RAC.DL.TotalTrfFactor / DLTOTALEQUSERNUM × 100
DLTOTALEQUSERNUM: this parameter defines the total equivalent number of users corresponding to
the 100% downlink load.
Thresholds: MINOR: > 80% | MAJOR: > 100%
Please refer to Section 11, Additional ADMISSION CONTROL Metrics, for further details.
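As an illustration of how the EUN admission budget is consumed, a hypothetical traffic mix can be summed against DLTOTALEQUSERNUM (the ENU weights and connection counts below are illustrative examples, not taken from a specific row of the ENU table):

```python
# Hypothetical traffic mix; the per-connection DL ENU weights are
# illustrative, in the spirit of the ENU table above (not exact values).
DL_TOTAL_EQU_USER_NUM = 80   # DLTOTALEQUSERNUM: EUN at 100% DL load

traffic = {
    # service: (connections, DL ENU per connection)
    "voice":   (20, 1.0),
    "PS low":  (8, 3.25),
    "PS high": (2, 5.93),
}

total_enu = sum(n * enu for n, enu in traffic.values())
utilization = total_enu / DL_TOTAL_EQU_USER_NUM * 100
print(f"DL EUN: {total_enu:.2f} -> {utilization:.1f}% of the admission limit")
```

When the summed ENU approaches DLTOTALEQUSERNUM, new admissions start being rejected, which is why the MINOR threshold is set well below 100%.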
4.3.
To troubleshoot the cases highlighted by the Performance Alarms for DL TX Power suggested in the
previous sections, besides the overall considerations enumerated in Section 3, the following actions are
suggested:
The first actions are to optimise some DL capacity related parameters in the cell. The actions could be:
Increase DLTOTALEQUSERNUM
Decrease the used MaxBitrateDLPSNRT (128 kbit/s, 64 kbit/s)
Decrease the maximum possible link power for the service
Decrease the DtoFStateTransTimer timer related to PS data to make the switching from
Cell_DCH to Cell_FACH happen earlier
The total downlink power in the cell can be controlled with the parameter DLTOTALEQUSERNUM.
Different values should be used for NodeBs having different PA power capabilities.
The maximum bitrate used in the cell could be lowered in case there has been radio blocking.
This can be controlled with the cell parameters DLFullCvrRate and ULFullCvrRate. The parameter
values can be lowered in suburban environments, meaning that the capacity in the cell will be increased.
Also, bitrate downgrades based on Dynamic Link Optimization will relieve Node B power for other
users. Lower values could be used in suburban and rural environments.
4.3.2. RF optimization
In case parameter optimization does not help, RF optimization could be done. This will cost more money
to the operator than the parameter optimization. RF optimization means basically the tuning of Node B
antenna system and includes tilt, bearing and antenna height changes to improve cell dominance areas
and decrease DL interference.
5. UL RECEIVED POWER
In this section we review how to monitor the usage of this Air Interface Resource, i.e., how much
Received Power (in dBm) is being measured by the NodeB receiver and how we can optimize this
resource.
5.1.
In the uplink, the total received power can be expressed as the sum of the powers of own-cell users
(PrxOwn), other-cell users (PrxOth), and system noise (Pn).
Pn is the total effective thermal noise at the receiver and can be estimated as
Pn = Nf × k × T × W
where Nf is the receiver Noise Figure, k is Boltzmann's constant, T is the absolute temperature, and W is the bandwidth of 3.84 MHz.
There is, theoretically, a maximum available capacity in terms of UL RX Power; i.e., there is a maximum
amount of UL RX Power that can be admitted in the cell before reaching the Pole Capacity (Load =
100%). In fact, the system is configured to limit the Admission so that the UL Load does not reach the
range of UL Load values that will cause instability in the cell (as can be seen in the Figure below, Loads
above 80-85%).
Maximum Load acceptable is connected to Maximum acceptable UL RX Power (or Received Total
Wideband Power (RTWP)). The maximum increase of the Noise Floor (aka Noise Rise, NR) acceptable
due to this maximum acceptable UL RX Power could also be an alternative way to quantify this UL
capacity.
It can be shown that RTWP (or RSSI) = Pn − 10·log10(1 − LUL) [dBm],
where LUL is the Load in Uplink.
Using Pn = -103.71 dBm, which assumes standard operating room temperature and a receiver noise figure
(Nf) of 4.3 dB, we can produce the next table:
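The Pn value and the noise-rise figures can be reproduced numerically (a sketch assuming T = 298 K for "standard operating room temperature"; the load values shown are examples):

```python
import math

K_BOLTZMANN = 1.380649e-23   # J/K
T_KELVIN = 298.0             # standard operating room temperature (25 C)
BW_HZ = 3.84e6               # WCDMA chip-rate bandwidth
NF_DB = 4.3                  # receiver noise figure

# Noise floor Pn = Nf * k * T * W, expressed in dBm
pn_dbm = 10 * math.log10(K_BOLTZMANN * T_KELVIN * BW_HZ * 1000) + NF_DB
print(f"Pn = {pn_dbm:.2f} dBm")   # ~ -103.7 dBm

# Noise rise NR = -10*log10(1 - load), and RTWP = Pn + NR
for load in (0.5, 0.75, 0.85):
    nr_db = -10 * math.log10(1 - load)
    print(f"load {load:.0%}: NR = {nr_db:.1f} dB, RTWP = {pn_dbm + nr_db:.1f} dBm")
```

This reproduces the design rule quoted below: a 50-75% load target corresponds to a 3-6 dB noise rise, and loads up to 80% keep the RTWP below -90 dBm.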
This table gives an approximate idea of the admissible Noise Rise (NR, the increase of the noise floor
that can be accepted) before reaching the levels of the Pole Capacity. The maximum NR should not go above
10 dB. This is also our theoretical UL capacity, which can be directly translated into the maximum UL RX
Power that can be accepted (if the noise floor is known, such as the one calculated above).
Typically, a network design is done based on a NR target of 3-6 dB, corresponding to a Target Load in UL
of 50-75%.
So the expected values for RSSI (RTWP), for UL loads admitted in the system of 50 to 80%, should
never exceed -90 dBm.
The UL RSSI can be high due, for instance, to:
Non-traffic interference (external sources of interference)
High TX power from UEs that are connected to a far cell (near-far effect), usually due to
overshooting and missing neighbors
Equipment malfunction
Intermodulation
But if high values of RSSI are found all over the network, then the issue is so widespread that it
probably cannot be explained by the causes above and should therefore be analyzed further.
5.2.
RTWP
EUN
other services, the UL Handover access threshold should be based on the characteristic dimensioning
of the system not to be loaded beyond 60% of its pole capacity, through the parameter
ULTOTALEQUSERNUM.
In uplink, besides uplink Congestion Control, EUN is the only resource that is controlled to allow
admissions and modifications. Therefore, the parameters that regulate the uplink EUN admission policy
need to be set in order to minimize the risk of going into uplink overload.
For uplink admission control, the EUN admission policy provides a way to limit excessive UL interference,
avoiding large variations in cell breathing. This should be used in cells where high UL interference is
observed. In other cases, the uplink EUN admission control can be disabled by setting
ULTOTALEQUSERNUM to its maximum value (200), ULOTHERTHD to 98, ULCONVAMRTHD to 99,
ULCONVNAMRTHD to 99 and ULHOTHD to 100, in order to comply with the rule:
HO threshold > max(Conv AMR Threshold, Conv Non-AMR Threshold) > Other Services Threshold
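The disable settings quoted above can be checked against this ordering rule with a trivial snippet (parameter names are kept as in the text; the values are the ones quoted):

```python
# Consistency check of the uplink EUN thresholds against the rule:
#   HO threshold > max(Conv AMR, Conv Non-AMR) > Other Services threshold
# using the "disable admission control" settings quoted in the text.
UL_HO_THD = 100
UL_CONV_AMR_THD = 99
UL_CONV_NAMR_THD = 99
UL_OTHER_THD = 98

assert UL_HO_THD > max(UL_CONV_AMR_THD, UL_CONV_NAMR_THD) > UL_OTHER_THD
print("threshold ordering OK")
```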
Formula for checking the average usage of EUN in the uplink:
UL RX POWER Utilization
According to the metrics described in the previous section, we can define the following KPIs:
UL Load factor [%] = ( 1 − BACKGROUNDNOISE (W) / VS.AverageRTWP (W) ) × 100
The counter VS.AverageRTWP and the parameter BACKGROUNDNOISE are given in dBm, so they need
to be converted to watts in order to be used in the formula above; otherwise, the following formula can be
used to obtain the load directly from the dBm values:
UL Load factor [%] = ( 1 − 10^( −( VS.AverageRTWP (dBm) − BACKGROUNDNOISE (dBm) ) / 10 ) ) × 100
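Both formulas give the same load factor; a small sketch verifying the equivalence on example values (the -100 dBm RTWP is illustrative):

```python
def dbm_to_w(dbm):
    """Convert dBm to watts."""
    return 10 ** (dbm / 10) / 1000

def load_watts(rtwp_w, pn_w):
    """UL load factor [%] from powers in watts: (1 - Pn/RTWP) * 100."""
    return (1 - pn_w / rtwp_w) * 100

def load_dbm(rtwp_dbm, pn_dbm):
    """Same load factor computed directly from the dBm values."""
    return (1 - 10 ** (-(rtwp_dbm - pn_dbm) / 10)) * 100

rtwp, pn = -100.0, -103.71      # dBm (VS.AverageRTWP, BACKGROUNDNOISE)
print(round(load_watts(dbm_to_w(rtwp), dbm_to_w(pn)), 1))  # -> 57.4
print(round(load_dbm(rtwp, pn), 1))                        # -> 57.4
```

A 3.71 dB noise rise above the background noise thus corresponds to roughly 57% uplink load, consistent with the 3-6 dB / 50-75% design range quoted earlier.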
5.4.
For the average RSSI, these values must be checked against the maximum allowed values.
If the average values are close to these thresholds, the chance of congestion/admission blocking is high.
In fact, it is more informative to look at the maximum sampled RTWP values. Still, since an RL setup
takes a very short time to establish, it is very hard to say how those maximum values relate to RL
failures.
To troubleshoot the cases highlighted by the Alarms, besides the overall considerations enumerated, the
following checks are recommended:
ULTOTALEQUSERNUM
External interference?
Missing neighbors?
Short term solutions
Reduce the traffic carried by the site (see Traffic Offload)
Reduce the number of UL RLs of SF4 allowed in the cell, through the following parameters:
[Huawei is checking whether there is any parameter to define the number of SF4 users]
Long term solutions
Once the traffic has been shared in the most efficient way possible between the cell and its neighbors,
then a new site is needed to cope with the higher traffic.
The NodeB provides a set of counters, VS.HSUPA.LoadOutput.0 through VS.HSUPA.LoadOutput.25. Each gives the number of times that the load on the air interface is within the indicated range:
VS.HSUPA.LoadOutput.0: [0, 0.5) dB
VS.HSUPA.LoadOutput.1: [0.5, 1.0) dB
VS.HSUPA.LoadOutput.2: [1.0, 1.5) dB
VS.HSUPA.LoadOutput.3: [1.5, 2.0) dB
VS.HSUPA.LoadOutput.4: [2.0, 2.5) dB
VS.HSUPA.LoadOutput.5: [2.5, 3.0) dB
VS.HSUPA.LoadOutput.6: [3.0, 3.5) dB
VS.HSUPA.LoadOutput.7: [3.5, 4.0) dB
VS.HSUPA.LoadOutput.8: [4.0, 5.0) dB
VS.HSUPA.LoadOutput.9: [5.0, 6.0) dB
VS.HSUPA.LoadOutput.10: [6.0, 7.0) dB
VS.HSUPA.LoadOutput.11: [7.0, 8.0) dB
VS.HSUPA.LoadOutput.12: [8.0, 9.0) dB
VS.HSUPA.LoadOutput.13: [9.0, 10) dB
VS.HSUPA.LoadOutput.14: [10, 11) dB
VS.HSUPA.LoadOutput.15: [11, 12) dB
VS.HSUPA.LoadOutput.16: [12, 13) dB
VS.HSUPA.LoadOutput.17: [13, 14) dB
VS.HSUPA.LoadOutput.18: [14, 15) dB
VS.HSUPA.LoadOutput.19: [15, 16) dB
VS.HSUPA.LoadOutput.20: [16, 18) dB
VS.HSUPA.LoadOutput.21: [18, 20) dB
VS.HSUPA.LoadOutput.22: [20, 22) dB
VS.HSUPA.LoadOutput.23: [22, 26) dB
VS.HSUPA.LoadOutput.24: [26, 30) dB
VS.HSUPA.LoadOutput.25: equal to or higher than 30 dB
For the preceding counters, the NodeB takes statistics in each scheduling period. Note that the counters
form a PDF (distribution) counter, so they can be processed to obtain the Average, Minimum and Maximum UL
load factor for HSUPA.
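As a sketch of such processing, the average UL load can be approximated from the PDF bins using bin midpoints (the sample counts and the representative value chosen for the open-ended top bin are assumptions for illustration):

```python
# Approximate the average HSUPA UL load (in dB of noise rise) from the
# VS.HSUPA.LoadOutput.* PDF counters, using each bin's midpoint. The
# representative value for the open-ended ">= 30 dB" bin is assumed.
bin_edges = [0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0, 7.0,
             8.0, 9.0, 10, 11, 12, 13, 14, 15, 16, 18, 20, 22, 26, 30]
midpoints = [(lo + hi) / 2 for lo, hi in zip(bin_edges, bin_edges[1:])]
midpoints.append(32.0)   # assumed representative value for bin 25

counts = [0] * 26        # illustrative sample: counters for one cell/period
counts[4] = 120          # 120 samples in [2.0, 2.5) dB
counts[5] = 60           # 60 samples in [2.5, 3.0) dB
counts[8] = 20           # 20 samples in [4.0, 5.0) dB

mean_load_db = sum(c * m for c, m in zip(counts, midpoints)) / sum(counts)
print(f"approx. mean HSUPA load: {mean_load_db:.2f} dB")
```

The minimum and maximum can be read off the same counters as the lowest and highest non-empty bins.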
5.5.
To troubleshoot the cases highlighted by the Performance Alarms for UL RX Power suggested across the
previous sections, besides the overall considerations enumerated in Section 3, the following actions are
suggested:
The first actions are to optimise some UL capacity related parameters in the cell. The actions could be:
5.5.2. RF optimization
In case parameter optimization does not help, RF optimization could be done. This will cost more money
to the operator than the parameter optimization. RF optimization means the tuning of Node B antenna
system and includes tilt, bearing and antenna height changes to improve cell dominance areas and
reduce interferece, so received traffic can be reacommodated between existing cells.
In case a multicarrier environment is available, traffic can be offloaded to the cleaner carrier in UL.
6.
6.1.
This Table lists the minimum OVSF length and the number of OVSFs available for each service. For each
service type, the carried Erlangs are estimated, assuming a 2% GoS.
This section covers only the Downlink. On the Uplink, each UE has its own code tree, so the code tree is
not a limiting factor in that direction. On the Downlink, the number of OVSFs available for each
dedicated channel is reduced, because multiple common channels must be supported. Figure below
summarizes the mandatory Downlink channels and the mandatory (or implementation-dependent)
values of their OVSFs. It also shows optional Downlink common channels.
In the OVSF code tree structure, one PS 384 connection uses the same resources as four PS 64
connections or 16 voice connections. However, in terms of the SF, the probability of having an SF = 8
code free is not just 4 (or 16) times smaller than the probability of having one SF = 32 (or SF = 128) code free,
because the equivalent SF = 32 (or SF = 128) free codes must be contiguous and start at a specific
position.
Therefore, the availability of an OVSF of a specific length is determined by the number of OVSFs of same
length or shorter that are used, as well as by the number of longer OVSFs used. The OVSF allocation
algorithm at the Node B normally manages the availability of consecutive OVSFs. This algorithm also
allocates and optimizes the code tree to maximize the availability of shorter OVSFs.
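The contiguity constraint described above can be sketched with a toy OVSF tree: a code is assignable only if none of its ancestors or descendants is already allocated. This illustrates the principle only, not the NodeB's actual allocation algorithm (the class and method names are invented for the sketch):

```python
# Toy OVSF code tree: a code C(sf, k) is free only if no ancestor and no
# descendant is already allocated. Illustration only, not the real algorithm.
class OvsfTree:
    def __init__(self):
        self.used = set()              # allocated codes as (sf, index) pairs

    def is_free(self, sf, k):
        # Check ancestors: C(sf/2, k//2), C(sf/4, k//4), ...
        s, i = sf, k
        while s > 1:
            s, i = s // 2, i // 2
            if (s, i) in self.used:
                return False
        # Check descendants: 2 children, 4 grandchildren, ... down to SF512.
        s, lo, hi = sf, k, k
        while s < 512:
            s, lo, hi = s * 2, lo * 2, hi * 2 + 1
            if any((s, j) in self.used for j in range(lo, hi + 1)):
                return False
        return (sf, k) not in self.used

    def allocate(self, sf, k):
        if not self.is_free(sf, k):
            raise ValueError(f"C({sf},{k}) blocked")
        self.used.add((sf, k))

tree = OvsfTree()
tree.allocate(8, 1)                 # one PS 384-class allocation at SF8
print(tree.is_free(32, 4))          # -> False: descendant of C(8,1)
print(tree.is_free(32, 8))          # -> True
```

One SF8 allocation blocks a whole subtree of shorter-rate codes, which is why the real allocation algorithm packs allocations to keep long contiguous runs free.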
6.2.
Code Utilization [%] = VS.RAB.SFOccupy / 256 × 100
where VS.RAB.SFOccupy counts the occupied codes in SF256 equivalents.
The total number of codes used includes the codes for both common and dedicated channels. The codes
used by the common channels are: Primary CPICH - Cch,256,0, Primary CCPCH - Cch,256,1, AICH - Cch,256,2, PICH - Cch,256,3, Secondary CCPCH - Cch,64,1 (if a second S-CCPCH is configured, an additional channelisation code
will be assigned). The use of these common channel channelisation codes blocks 1 SF4 code, 1 SF8 code, 1
SF16 code, 1 SF32 code, 2 SF64 codes, 4 SF128 codes, 8 SF256 codes and 16 SF512 codes. The total number of
codes used by the common channels is thus 34 (based upon a single S-CCPCH). Hence a typical cell with
no traffic would show an average channelisation code occupancy of 3%.
With the introduction of HSDPA and HSUPA to the network, the code occupancy increased heavily. The
activation of HSDPA reserves a minimum of 5 HS-PDSCH codes (SF16) and at least one HS-SCCH code
(SF128) in each cell. For HSUPA activation, one E-AGCH code (SF256) and one E-RGCH/E-HICH code (SF128) are
required.
The set of codes reserved for the common channels, HSDPA (assume minimum 5 codes in this case) and
HSUPA is presented in table below:
Channelization Codes Static Reservation at Cell Start-up

Physical Channel    Spreading Factor    Code Number
CPICH               256                 0
P-CCPCH             256                 1
PICH                256                 3
AICH                256                 2
S-CCPCH             64                  1
HS-SCCH             128                 4
HS-PDSCH            16                  11-15
E-AGCH              256                 14
E-RGCH/E-HICH       128                 7
In total, 13 channelization codes are reserved. These 13 channelization codes block a further 358
codes (184 SF512, 87 SF256, 44 SF128, 23 SF64, 12 SF32, 1 SF16, 4 SF8, 3 SF4), i.e. a total of 371 channelization
codes become unavailable for DPCH use. This means that when HSDPA and HSUPA are enabled, the code tree
occupancy generated by the static channelization code reservations is significantly greater, i.e. 36%
compared to 3%. Consequently, the threshold above which code tree optimization can be triggered
should be increased; this helps avoid unnecessary reconfigurations while the dynamic section of the
code tree is relatively unloaded. It is recommended to increase the threshold defined by CodeTreeUsage
when HSDPA and HSUPA are enabled. The figure below shows the code allocation utilized for an HSDPA-
and HSUPA-enabled cell.
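The 3% and 36% occupancy figures can be cross-checked by expressing each reserved channel in SF256-equivalent codes (a channel at spreading factor SF consumes 256/SF such equivalents):

```python
# Cross-check of the occupancy figures: each reserved channel at
# spreading factor SF consumes 256/SF "SF256-equivalent" codes out of 256.
common = {"CPICH": 256, "P-CCPCH": 256, "PICH": 256, "AICH": 256, "S-CCPCH": 64}
hspa = {"HS-SCCH": 128, "E-AGCH": 256, "E-RGCH/E-HICH": 128}
hs_pdsch = [16] * 5            # 5 reserved HS-PDSCH codes at SF16

def sf256_equiv(sfs):
    return sum(256 // sf for sf in sfs)

no_traffic = sf256_equiv(common.values())
with_hspa = no_traffic + sf256_equiv(hspa.values()) + sf256_equiv(hs_pdsch)

print(f"no traffic: {no_traffic}/256 = {no_traffic / 256:.1%}")         # ~3%
print(f"HSDPA+HSUPA enabled: {with_hspa}/256 = {with_hspa / 256:.1%}")  # ~36%
```

The common channels alone occupy 8 of 256 equivalents (3.1%), and adding the HSDPA/HSUPA reservations brings the total to 93 of 256 (36.3%), matching the figures quoted above.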
In Huawei, a set of parameters is used to configure the code usage by HSDPA (defaults recoverable from the source table: AllocCodeMode = Automatic, HsPdschMaxCodeNum = 10):

AllocCodeMode: HSDPA Code Resource Allocation Mode. This describes the HSDPA code resource allocation mode: automatic or manual. Manual allocation can lead to restriction of the HSDPA code resource or leave the HSDPA code resource idle.

HsPdschCodeNum: Number of HS-PDSCH Codes. This describes the number of HS-PDSCH codes, valid only when AllocCodeMode is set to Manual. If HsPdschCodeNum is excessively high, the HSDPA code resource is wasted and the admission rejection rate of R99 services increases due to code resource.

HsPdschMaxCodeNum: Maximum Number of HS-PDSCH Codes. This describes the maximum number of HS-PDSCH codes, valid only when AllocCodeMode is set to Automatic. In automatic HSDPA code allocation mode, set the maximum number of HS-PDSCH codes to a comparatively high value.

HsPdschMinCodeNum: Minimum Number of HS-PDSCH Codes. This describes the minimum number of HS-PDSCH codes, valid only when AllocCodeMode is set to Automatic. In automatic HSDPA code allocation mode, set the minimum number of HS-PDSCH codes to a comparatively low value. In addition, HsPdschMinCodeNum must not be higher than HsPdschMaxCodeNum.

HsScchCodeNum: Number of HS-SCCH Codes. This describes the number of codes allocated for the HS-SCCH. HsScchCodeNum decides the maximum number of subscribers that the NodeB can schedule in a TTI period. In power-restricted scenarios such as outdoor macro cells, it is less likely that multiple subscribers are scheduled simultaneously, so two HS-SCCHs are configured. In code-restricted scenarios such as indoor pico cells, it is more likely that multiple subscribers are scheduled simultaneously, so four HS-SCCHs are configured. If excessive HS-SCCHs are configured, the code resource is wasted. If insufficient HS-SCCHs are configured, the HS-PDSCH code resource or power resource is wasted. Both affect the cell throughput rate.
VS.ScchCodeUtil.Mean / VS.UserTtiRatio.Mean
VS.ScchCodeUtil.Mean / VS.DataTtiRatio.Mean
These ratios give the HS-SCCH code usage normalized to the time when HSDPA UEs camp on the cell and to the time when data is actually transferred, respectively.
VS.PdschCodeUtil.Mean
VS.PdschCodeUtil.Max
VS.PdschCodeUtil.Min
These counters provide the average, maximum, and minimum usage of HS-PDSCH code resources in a
cell during a measurement period respectively. Assume that the number of HS-PDSCH codes used in
each TTI is A and the number of available HS-PDSCH codes in each TTI is B. Then,
VS.PdschCodeUtil.Mean = A/B. The NodeB calculates VS.PdschCodeUtil.Mean every 5,120 ms and then
takes the maximum value within the measurement period as VS.PdschCodeUtil.Max and the minimum
value within the measurement period as VS.PdschCodeUtil.Min.
VS.PdschCodeUtil.Mean.User
VS.PdschCodeUtil.Mean.Data
These counters provide the HS-PDSCH code resource usage in a cell over the time when HSDPA UEs
camp on the cell, and over the time when at least one HSDPA user performs data transfer at the
physical layer, during a measurement period respectively:
VS.PdschCodeUtil.Mean.User = VS.PdschCodeUtil.Mean / VS.UserTtiRatio.Mean
VS.PdschCodeUtil.Mean.Data = VS.PdschCodeUtil.Mean / VS.DataTtiRatio.Mean
VS.PdschCodeUsed.Mean
VS.PdschCodeUsed.Max
These counters indicate the average and maximum number of codes used by HS-PDSCHs in a cell during
a measurement period respectively. During the measurement period, the NodeB counts the number of
codes used by all the HS-PDSCHs in all TTIs in the cell. Assume that this value is A and the number of
TTIs in the measurement period is B. Then, VS.PdschCodeUsed.Mean = A/B. The NodeB calculates
VS.PdschCodeUsed.Mean every 5,120 ms and then takes the maximum value within the measurement
period as VS.PdschCodeUsed.Max.
VS.PdschCodeAvail.Mean
VS.PdschCodeAvail.Max
These counters indicate the average and maximum number of codes available for HS-PDSCHs in a cell
during a measurement period respectively. During the measurement period, the NodeB counts the
number of codes available for HS-PDSCHs in all TTIs in the cell. Assume that this value is A and the
number of TTIs in the measurement period is B. Then, VS.PdschCodeAvail.Mean = A/B. The NodeB
calculates VS.PdschCodeAvail.Mean every 5,120 ms and then takes the maximum value within the
measurement period as VS.PdschCodeAvail.Max.
6.3.
6.3.2. RF optimization
In case parameter optimization does not help, RF optimization can be done. This costs the operator more
than parameter optimization. RF optimization means basically the tuning of the Node B
antenna system and includes tilt, bearing and antenna height changes to improve cell dominance areas,
so traffic can be reaccommodated between existing cells and/or soft handover areas optimized.
When the usage of a cell resource exceeds the congestion trigger threshold, the cell enters the basic
congestion state. In this case, LDR (Load Reshuffling) is needed to reduce the cell load and increase
the access success rate. When the load falls below the congestion release trigger threshold, the
system returns to normal.
The resources that can trigger basic congestion of the cell include:
Power resource
Code resource
Code reshuffling
7.
The key resource in terms of WBTS hardware is the Channel Elements (CEs) at WBTS level. Blocking at
the NodeB HW interface gives information about the lack of NodeB HW resources to handle both uplink
and downlink traffic. Each RAB type needs a different number of channel elements based on the
allocated UL/DL bit rate. Thus, if there are not enough HW channels for a connection, this can lead
to BTS HW blocking.
The main reason for BTS blocking is lack of hardware capacity. A new service setup is blocked if the
current traffic mix does not leave enough free hardware channels to support the new service. The
tables below summarize the CE requirements for common channels, R99, HSDPA and HSUPA.
7.1.
The following table summarizes the CE capacity associated with different board configurations in the
NodeB:

Board configuration    Uplink CE    Downlink CE
Configuration 1        192          256
Configuration 2        384          512
Configuration 3        128          256
Configuration 4        320          512

Further tables list the maximum number of sectors and carriers per configuration.
7.2.
A channel element (CE) is defined as the baseband resource required in the NodeB to provide capacity
for one 12.2 kbit/s AMR voice call, including the 3.4 kbit/s DCCH. HSUPA shares the CE resource with
the R99 services. HSUPA improves uplink delay and rate performance, but consumes large amounts of CE
resources.
If there is no dynamic CE resource management, the NodeB allocates CE resources according to the
maximum rate of the UE, even if the actual traffic volume is very low. In this case, CE resource
utilization is inefficient, so dynamic CE resource management is necessary. Considering that the rate
of an HSUPA user changes fast, the algorithm periodically adjusts the CE resources of users according
to the user's rate and the available CE resources.
When a new RL is admitted, the algorithm also adjusts CE resources. Dynamic CE management minimizes
demodulation and decoding failures due to CE shortage while maximizing CE usage and UL throughput.
The NodeB determines whether to call back CEs based on the CEavg during the previous period.
If CEallocate is greater than both CEinit and CEavg, the NodeB calls back some CEs and decreases
CEallocate to Max(CEavg, CEinit). The called-back CE resources take effect during the next period.
The algorithm notifies the MAC-e scheduler of the SGmax at the current TTI.
CEinit: initial number of CEs, calculated on the basis of the configured GBR. If the user is not
configured with a GBR, CEinit is the CE resource required for transmitting one RLC PDU.
CEavg: average number of CEs, calculated on the basis of the average rate of the serving RLS.
SGmax: maximum SG for the UE, determined by the dynamic CE resource management function. Since one
SG may correspond to different CE numbers, the allocated CE resources may be insufficient if the
MAC-e scheduler uses this SG. Therefore, the algorithm needs to notify the MAC-e scheduler of the
SGmax to avoid CE insufficiency.
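The call-back decision described above can be sketched as follows. This is a simplified illustration: the function name and the use of plain integer CE counts are assumptions, not the vendor implementation.

```python
def ce_callback(ce_allocate, ce_init, ce_avg):
    """If CEallocate exceeds both CEinit and CEavg, call back CEs
    and reduce the allocation to Max(CEavg, CEinit).

    Returns (new CEallocate, number of CEs called back)."""
    if ce_allocate > ce_init and ce_allocate > ce_avg:
        new_allocate = max(ce_avg, ce_init)
        return new_allocate, ce_allocate - new_allocate
    return ce_allocate, 0
```

For instance, with CEallocate = 10, CEinit = 4 and CEavg = 6, four CEs are called back and the allocation drops to 6; the called-back CEs take effect in the next period.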
If the available CE resources for the serving RLS are less than the CE resources required for
increasing SF4 to 2xSF4, the algorithm performs fairness processing.
The algorithm selects the user with the largest priority value and reduces its rate. Users whose
GBR is met are downsized before users whose GBR is not met. When the next period arrives, this
user's CE resources are called back.
The queuing of users is as follows:
For users whose Reff is smaller than the GBR, the algorithm queues the users based on
Priority = Reff/(SPI x GBR).
For users whose Reff is greater than or equal to the GBR, or users whose GBR is not configured,
the algorithm queues the users based on Priority = Reff/SPI.
For users of the serving RLS, the algorithm stops decreasing a user's CE resources once they
equal CEinit.
After processing, the algorithm notifies the MAC-e scheduler of the new SGmax.
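The queuing rule and the downsizing order above can be sketched as follows; the function names are illustrative and a user is modelled as a plain dict:

```python
def priority(reff, spi, gbr=None):
    """Priority = Reff/(SPI*GBR) when Reff < GBR,
    otherwise (or when no GBR is configured) Reff/SPI."""
    if gbr is not None and reff < gbr:
        return reff / (spi * gbr)
    return reff / spi

def downsize_order(users):
    """Fairness processing downsizes the largest-priority user first,
    and users whose GBR is met before users below their GBR."""
    met = [u for u in users if u["gbr"] is None or u["reff"] >= u["gbr"]]
    unmet = [u for u in users if u["gbr"] is not None and u["reff"] < u["gbr"]]
    key = lambda u: priority(u["reff"], u["spi"], u["gbr"])
    return sorted(met, key=key, reverse=True) + sorted(unmet, key=key, reverse=True)
```

The same priority formula governs the CE-increase queuing later in this section, where the smallest priority value is served first.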
If the CEavg during the previous period is greater than or equal to CEallocate, the algorithm can
increase the CE resources of these users by one step if CE resources are available. For example, if
the CE resource of a user corresponds to SF4, the algorithm increases the CE resources to those
corresponding to 2xSF4.
The operation of increasing CE resources is based on user queuing. The users are queued in ascending
order of priority value; the smaller the priority value of a user, the earlier this user's CE is
increased. The queuing of users is as follows:
For users whose Reff is smaller than the GBR, the algorithm queues the users based on
Priority = Reff/(SPI x GBR).
For users whose Reff is greater than or equal to the GBR, or users whose GBR is not configured,
the algorithm queues the users based on Priority = Reff/SPI.
The processing of increasing CE resources is as follows:
Users whose GBR is not reached are increased before users whose GBR is satisfied or not
configured.
During the increasing procedure, the algorithm can preempt the CE resources of the non-serving
RLs until their resources decrease to the minimum CE resources required for E-DPCCH demodulation
and decoding.
After the increase of CEs, the algorithm notifies the MAC-e scheduler of the newly allocated CEs
and SGmax.
When there are CE resources available for non-serving RLs, the algorithm allocates them to the
users of non-serving RLs.
The algorithm allocates as many available CE resources as possible to non-serving RLs, so that more
users can obtain the gain of soft handover.
Based on the CEavg of non-serving RLs during the previous period, the algorithm increases the
number of CEs to CEup, where CEup is obtained by increasing CEavg by one step.
The users are queued in ascending order of priority value; the smaller the priority value, the
earlier the user is processed. The priority value is calculated as Priority = CEneed / F(SPI),
where CEneed is the CE resource required for the user's E-DPDCHs.
If the available CE resources can meet a user's CEneed, the algorithm allocates the CE resources
to this user. If not enough CE resources are available, the algorithm allocates the minimum CE
resources. After increasing, the algorithm notifies the MAC-e scheduler of the new CEs and SGmax.
The NodeB allocates the remaining CE resources to the users of the serving RLS in order to improve
CE utilization efficiency.
The NodeB schedules the users of the serving RLS in ascending order of priority until the remaining
CE resources are not enough to increase a user by one step, or until all users have obtained the CE
resources of Min[CE(E-DCH MBR), CE(Maximum Set of E-DPDCHs)].
The priority is calculated as Priority = CEneed / F(SPI).
The algorithm preempts the CEs of non-serving RLs until their CE resources decrease to the minimum
CE number.
If the CE resources are still insufficient after preemption of non-serving RLs, the algorithm
preempts the CE resources of serving RLSs until the CE resources decrease to CEinit.
Type 1: users with a GBR and Reff >= GBR, or users without a GBR.
Type 2: users with a GBR and Reff < GBR.
In each type, the algorithm preempts the CE resources according to the priority value of the users:
Priority = Reff / SPI
7.3.
The usage of CE resources in the cell is indicated by the counters in the form xxx.Shared, whereas
the counters in the form xxx.Dedicated are irrelevant to the measurement and are constantly set
to 0. The table below describes these counters.
VS.ULCE.Mean.Shared: Average number of shared UL CEs consumed within a measurement period of 15 minutes
VS.ULCE.Max.Shared: Maximum number of shared UL CEs consumed within a measurement period of 15 minutes
VS.DLCE.Mean.Shared: Average number of shared DL CEs consumed within a measurement period of 15 minutes
VS.DLCE.Max.Shared: Maximum number of shared DL CEs consumed within a measurement period of 15 minutes
VS.ULCE.Mean.Dedicated: Average number of dedicated UL CEs consumed within a measurement period of 15 minutes
VS.ULCE.Max.Dedicated: Maximum number of dedicated UL CEs consumed within a measurement period of 15 minutes
VS.DLCE.Mean.Dedicated: Average number of dedicated DL CEs consumed within a measurement period of 15 minutes
VS.DLCE.Max.Dedicated: Maximum number of dedicated DL CEs consumed within a measurement period of 15 minutes
The following counters indicate the configuration of CEs in the current NodeB. The configuration consists
of the number of UL CEs and the number of DL CEs in each license group and in the shared group. When
operators share a RAN, each operator has a dedicated license group. When the RAN is not shared by
operators, there is only one CE resource pool. The configuration of CE resources is indicated by the
counters in the form of xxx.Shared, whereas the counters in the form of xxx.Dedicated are irrelevant to
the measurement and are constantly set to 0. The table below describes the preceding counters.
VS.LC.ULCreditAvailable.Shared Number of UL CEs configured for the shared group
VS.LC.DLCreditAvailable.Shared Number of DL CEs configured for the shared group
VS.LC.ULCreditAvailable.LicenseGroup.Dedicated Number of UL CEs configured for an operator
VS.LC.DLCreditAvailable.LicenseGroup.Dedicated Number of DL CEs configured for an operator
During the current measurement period, the NodeB measures the usage of UL CEs and DL CEs in each
license group and in the shared group. When operators share a RAN, each operator has a dedicated
license group. When the RAN is not shared, only the usage of UL CEs and DL CEs of the shared group is
reported and the relevant counters are xxx.Shared. The table below describes the preceding counters.
VS.LC.ULMean.LicenseGroup: Average number of UL CEs consumed by an operator within a measurement period of 15 minutes
VS.LC.ULMax.LicenseGroup: Maximum number of UL CEs consumed by an operator within a measurement period of 15 minutes
VS.LC.DLMean.LicenseGroup: Average number of DL CEs consumed by an operator within a measurement period of 15 minutes
VS.LC.DLMax.LicenseGroup: Maximum number of DL CEs consumed by an operator within a measurement period of 15 minutes
VS.LC.ULMean.LicenseGroup.Shared: Average number of UL CEs in the shared group that are consumed by an operator within a measurement period of 15 minutes
VS.LC.ULMax.LicenseGroup.Shared: Maximum number of UL CEs in the shared group that are consumed by an operator within a measurement period of 15 minutes
VS.LC.DLMean.LicenseGroup.Shared: Average number of DL CEs in the shared group that are consumed by an operator within a measurement period of 15 minutes
VS.LC.DLMax.LicenseGroup.Shared: Maximum number of DL CEs in the shared group that are consumed by an operator within a measurement period of 15 minutes
Average UL CE Utilization [%] = (VS.LC.ULCreditUsed.CELL / Max CE Available) x 100
Average DL CE Utilization [%] = (VS.LC.DLCreditUsed.CELL / Max CE Available) x 100
Thresholds: MINOR: >85% / MAJOR: >95%
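The utilization formula and alarm thresholds above can be expressed as a small checkable sketch; the counter values fed in would come from performance management exports and are hypothetical here:

```python
def ce_utilization(credit_used, max_ce_available):
    """Utilization in percent: (credits used / max CEs available) x 100."""
    return 100.0 * credit_used / max_ce_available

def ce_alarm(util_pct):
    """Map a utilization percentage to the thresholds above."""
    if util_pct > 95:
        return "MAJOR"
    if util_pct > 85:
        return "MINOR"
    return "OK"
```

For example, 96 UL credits used against 128 available CEs gives 75% utilization, which raises no alarm.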
Service-level measurements can provide the first indication of BTS HW limitations.
HSDPA access failures due to the BTS are mainly related to the HSDPA UL return channel.
The following KPIs can be monitored in order to track the severity of blocking due to hardware
channel elements.
7.4.
When there is a high percentage of blocking due to lack of channel elements in the BTS, steps have
to be taken to mitigate this effect. This section looks at possible actions that can be taken to
mitigate blocking due to lack of CEs.
8.
BACKHAUL (Iub)
As stated in the Introduction, it is important to make sure that the air interface is indeed the
bottleneck: all other resources should be dimensioned in excess of air interface resources, but
since these other resources are costly too, they need to be carefully planned: CEs, backhaul, Iu,
MSC and SGSN trunks.
Iub occupancy monitoring is important to ensure that the number of E1s deployed for each NodeB in
the network is adequate to guarantee that all services can be provided at acceptable performance
levels (all CS services can go through and all PS data services are offered at acceptable
throughputs).
In this section, we first compute the Iub utilization based on the number of ATM cells received (DL)
or transmitted (UL) by the NodeB, comparing this figure to the E1 capacity installed in the NodeB.
Thresholds of acceptable Iub utilization are also under development, as many factors need to be
taken into account in order to decide on requesting additional E1s for a certain site: a certain
Iub usage can trigger the analysis of a specific case, but then other considerations need to be
taken into account:
1.
2.
3.
4.
5.
6.
7.
As can be seen, all these additional considerations are in line with the above statement of checking
whether the number of E1s is enough to sustain all CS traffic (no voice or video calls should be
blocked at this level or any other) and to deliver all PS data traffic at acceptable throughput.
The key Iub resource is the available Iub bandwidth at NodeB level, measured in cps or Kbps. In
particular, the Iub User Plane VCCs are to be monitored. The blocking in Iub interface gives information
about the capacity shortage between Node B and Transport layer. Blocking is related to the load of ATM
and especially AAL2 layer user plane resources. In transport layer the Call Admission Control (CAC)
could also deny the service if there is no room in AAL2 layer.
As the Iub CAC resource allocation system is an input for the Radio Admission Control functionality,
blocking on Iub will result in a degradation of Call Setup Success Rate.
The focus of this chapter is to introduce methods to proactively and reactively monitor Iub
performance. A basic representation of a 3G network showing the RNC interfaces is given below.
8.1.
The ATM backhaul is configured in an IMA group, which can contain one or several individual E1s;
the IMA group works as one logical pipe carrying both CS and PS traffic.
8.2.
E1/T1: Electrical ports of the AEUa board are used for data transmission.
Channelized STM-1/OC-3: Optical ports of the AOUa board are used for data transmission.
Unchannelized STM-1/OC-3c: Optical ports of the UOIa board are used for data transmission.
RT
NRT
HSDPA_RT
HSDPA_NRT
HSUPA_RT
HSUPA_NRT
The type of AAL2 path is related to the Service type. The mapping between AAL2 path type and
Service type is determined by TX traffic record index or RX traffic record index.
HSDPA traffic and HSUPA traffic can be carried on the same AAL2 path. The former is carried on
the downlink and the latter on the uplink. If there is no need to support HSDPA and HSUPA, the
paths for HSDPA or HSUPA such as HSDPA_RT, HSDPA_NRT, HSUPA_RT, and HSUPA_NRT do
not need to be configured.
In terms of the priorities, the service types in descending order are CBR > RTVBR > NRTVBR >
UBR or UBR+.
ATM permanent virtual channel measurement reports the average traffic rate per PVC using counters.
The following KPIs can be used to measure the ingress and egress of the PVC.
ROP (Report Output Period) = measurement time in minutes (examples: 15, 30, hour = 60, day = 1440)
The counters and physical ports have a one-to-one relation with each other. These counters are
identical in application, since they are all used for traffic measurement at the physical layer.
VS.ATMDlAvgUsed.1
VS.ATMDlAvgUsed.2
VS.ATMDlAvgUsed.3
VS.ATMDlAvgUsed.4
8.2.1.3. Number of active HSDPA users
In addition to Iub utilization, the total number of active HSDPA users is necessary to identify Iub
congestion. Even if Iub utilization is close to 100%, it might not be necessary to add more E1s if
most of the data volume is for one or a few HSDPA users. If Iub utilization is close to 100% and the
bandwidth is shared between a larger number of users, Iub expansion is necessary.
VS.HSDPA.UE.Mean.Cell
This measurement item provides the mean number of HSDPA UEs in a serving cell. Its value is lower
than or equal to that of the VS.CellDCHUEs measurement.
8.2.1.4. Number of AAL2 connections
The number of AAL2 connections per VCC is defined by the AAL2 channel identifiers (CIDs). A VCC can
have a maximum of 248 CIDs; if this limit is reached, it can result in blocking. Basically, a single
call requires 2 connections or 2 CIDs (e.g. SRB + AMR or SRB + NRT PS), and a multi-RAB call
requires one CID per connected RAB in addition to the SRB CID. Each HSDPA user needs 3 CIDs
(SRB + MAC-d flow + UL return channel). Common channels (four per cell) require their own
connections and CIDs as well. The CIDs can be monitored using the following KPIs.
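A rough CID budget check following the counts above can be sketched as follows; the function names and the traffic-mix breakdown are illustrative:

```python
MAX_CIDS_PER_VCC = 248  # AAL2 CID limit per VCC, as stated above

def cids_required(simple_calls, extra_rabs, hsdpa_users, cells):
    """simple_calls: SRB+AMR or SRB+NRT PS calls (2 CIDs each);
    extra_rabs: additional RABs on multi-RAB calls (1 CID each);
    hsdpa_users: 3 CIDs each (SRB + MAC-d flow + UL return channel);
    cells: 4 common-channel CIDs per cell."""
    return 2 * simple_calls + extra_rabs + 3 * hsdpa_users + 4 * cells

def vcc_has_headroom(simple_calls, extra_rabs, hsdpa_users, cells):
    return cids_required(simple_calls, extra_rabs, hsdpa_users, cells) <= MAX_CIDS_PER_VCC
```

For example, 50 simple calls, 10 extra RABs, 20 HSDPA users and 3 cells need 182 CIDs and still fit on one VCC; doubling the call load exceeds the 248-CID limit and risks blocking.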
This measurement provides the number of cells discarded by the AAL2PATH_PVCLAYER due to error
headers in the specified measurement period. The item indicates the errors in the transmission cells on
a single AAL2PATH_PVCLAYER.
VS.RRC.Rej.AAL2.Fail
Number of RRC CONNECTION REJECT messages from the RNC to UEs in a cell due to AAL2 setup failure
VS.RAB.FailEstabCS.TNL
VS.RAB.FailEstPS.TNL
Number of CS/PS RABs unsuccessfully established because of transmission network layer failures
VS.RAB.FailEstab.CS.DLIUBBand.Cong
VS.RAB.FailEstab.CS.ULIUBBand.Cong
These measurement items provide the number of RABs that fail to be set up in the CS domain with the
failure cause of rejection by admission control due to Iub bandwidth congestion. The measurement is
undertaken in the best cell that is under the SRNC.
VS.RAB.FailEstab.PS.DLIUBBand.Cong
VS.RAB.FailEstab.PS.ULIUBBand.Cong
The measurement items provide the number of RABs that fail to be set up in the PS domain with the
failure cause of rejection by admission control due to Iub bandwidth congestion. The measurement is
undertaken in the best cell that is under the SRNC.
8.3.
When there is high loading on the Iub and/or there are failures due to Iub loading, steps can be
taken to improve the situation. To evaluate the Iub blocking before considering Iub expansion, it is
important to analyse the SRB, AMR and HSDPA rejection rates against the CSSR %.
1. The SRB rejection rate is most critical: e.g. a 1% rejection rate for SRB (RRC) means 1% of all
call setups will fail due to Iub AAL2 congestion, and the overall CSSR is reduced by 1%.
2. AMR is next most critical, as 1% rejection on AMR means 1% RAB setup failure for AMR. CSSR is
calculated as RRC Setup Success % * RAB Setup Success %, so 1% failure in RRC and 1% failure in
AMR will cause the AMR CSSR to be 99% * 99% = 98.01%.
3. HS-DSCH rejections due to UL CAC are next most critical, as these rejections impact HSDPA
accessibility.
4. The PS call rejection rate is less critical, as it includes PS upgrades as well, i.e. 64 -> 384
kbps rejections.
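The CSSR arithmetic in point 2 can be checked directly; this is a worked example of the multiplication, not an operator KPI definition:

```python
def cssr(rrc_success_pct, rab_success_pct):
    """Call Setup Success Rate as the product of RRC and RAB
    setup success percentages: CSSR = RRC% * RAB% / 100."""
    return rrc_success_pct * rab_success_pct / 100.0
```

With 1% failure at each stage, cssr(99.0, 99.0) gives 98.01%, matching the example above.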
The average and maximum DL CAC reservation can be calculated to see the level of CAC reservation
for SRB, AMR, PS and HSDPA.
The TNL congestion indication of a MAC-d flow can take the following values:
no TNL congestion
reserved for future use
TNL congestion detected by delay build-up
TNL congestion detected by frame loss
When the period for adjusting the maximum available bandwidth arrives, the NodeB takes statistics on
the congestion indications of all the MAC-d flows on the Iub port and performs the following
operations:
If there is a congestion indication "TNL Congestion detected by frame loss", the NodeB subtracts
the product of the maximum available bandwidth and a preset step from the maximum available
bandwidth. This step is set to 2%.
Otherwise, if there is a congestion indication "TNL Congestion detected by delay build-up", the
NodeB subtracts the product of the maximum available bandwidth and a preset step from the maximum
available bandwidth. This step is set to 1%.
If neither congestion indication is received during three consecutive periods and the usage of the
Iub bandwidth exceeds a preset value (85%), the NodeB increases the maximum available bandwidth by
one step. The initial step is 10 kbit/s. The step is doubled every time five consecutive increases
are complete. The maximum step is 100 kbit/s.
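The periodic adjustment above can be sketched as follows. The step sizes (2%, 1%, 10-100 kbit/s) follow the text, while the state handling and function name are simplified assumptions:

```python
def adjust_bandwidth(max_bw_kbps, indications, usage_ratio, state):
    """One adjustment period. 'indications' is the set of congestion
    indications seen; 'state' tracks clear periods, the current
    increase step (kbit/s) and the number of increases so far."""
    if "frame_loss" in indications:          # worst case: cut by 2%
        state["clear_periods"] = 0
        return max_bw_kbps * 0.98
    if "delay_buildup" in indications:       # milder case: cut by 1%
        state["clear_periods"] = 0
        return max_bw_kbps * 0.99
    state["clear_periods"] += 1
    # No congestion for 3 periods and the link is >85% used: grow.
    if state["clear_periods"] >= 3 and usage_ratio > 0.85:
        new_bw = max_bw_kbps + state["step"]
        state["increases"] = state.get("increases", 0) + 1
        if state["increases"] % 5 == 0:      # double step after 5 increases
            state["step"] = min(state["step"] * 2, 100)
        return new_bw
    return max_bw_kbps
```

Starting from 1000 kbit/s, a frame-loss indication cuts the limit to 980 kbit/s, while a third consecutive clear period on a >85%-used link raises it by the current 10 kbit/s step.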
The Iub flow control module measures and stores the value of the Iub buffer occupancy rate every 40 ms
and compares it with the previous one. The detection of the buffer state is as follows:
If the Iub buffer occupancy rate > The Congestion Threshold of IUB Buffer Used Ratio +
The Congestion Threshold Hysteresis of IUB Buffer Used Ratio, the buffer state is marked congested.
If the Iub buffer occupancy rate < The Congestion Threshold of IUB Buffer Used Ratio - The
Congestion Threshold Hysteresis of IUB Buffer Used Ratio, the buffer state is marked not congested.
Otherwise, the buffer status remains unchanged. Here:
o The Congestion Threshold of IUB Buffer Used Ratio is 30%.
o The Congestion Threshold Hysteresis of IUB Buffer Used Ratio is 5%.
The processing after congestion detection is as follows:
If the Iub buffer is congested, the NodeB compares the value of the Iub buffer occupancy
rate with the previous one every 40 ms.
o If the Iub buffer occupancy rate increases, the scheduler sends the RG Down message to all the
HSUPA users on this Iub port, and no AG is allowed to be sent to these users.
o If the Iub buffer occupancy rate does not increase, neither AG Up nor RG Up is allowed to be
sent to the users on this Iub port.
If the Iub buffer is not congested, the flow control algorithm does not affect the decision of
the scheduler.
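The hysteresis rule above reduces to a small state function; this sketch uses the 30%/5% values from the text:

```python
CONG_THRESHOLD = 0.30   # Congestion Threshold of IUB Buffer Used Ratio
HYSTERESIS = 0.05       # Congestion Threshold Hysteresis

def buffer_state(occupancy, prev_congested):
    """Evaluated every 40 ms: congested above 35%, cleared below 25%,
    otherwise the previous state is kept (hysteresis band)."""
    if occupancy > CONG_THRESHOLD + HYSTERESIS:
        return True
    if occupancy < CONG_THRESHOLD - HYSTERESIS:
        return False
    return prev_congested
```

Inside the 25-35% band the state is sticky, which prevents the scheduler from flapping between sending RG Down and resuming normal grants.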
9.
HSxPA Users
The number of simultaneous HSxPA active users that are allowed per cell has proved to be one of the
most important capacity parameters in 3G networks (especially in cases like Claro Brazil, where most
traffic is PS).
HSUPA (High Speed Uplink Packet Access) is an important feature of 3GPP R6. As an uplink (UL) high
speed data transmission solution, HSUPA provides a theoretical maximum uplink MAC-e rate of 5.73
Mbit/s on the Uu interface. The MAC-e peak data rate supported by Huawei RAN10.0 is 5.73 Mbit/s.
The main features of HSUPA are as follows:
2 ms short frame: It enables less Round Trip Time (RTT) in the Hybrid Automatic Repeat
reQuest (HARQ) process, which is controlled by NodeB. It also shortens the scheduling response
time.
HARQ at the physical layer: It is used to achieve rapid retransmission for erroneously
received data packets between the User Equipment (UE) and NodeB.
NodeB-controlled UL fast scheduling: It is used to increase resource utilization and
efficiency.
HSUPA improves the performance of the UMTS network in the following aspects:
Higher UL peak data rate
Lower latency: enhancing the subscriber experience with high-speed services
Faster UL resource control: maximizing resource utilization and cell throughput
Better Quality of Service (QoS): improving the QoS of the network
UL peak rate: 5.73 Mbit/s per user
10 ms and 2 ms TTI
Maximum 60 HSUPA users per cell
Soft handover and softer handover
Multiple RABs (3 PS)
Dedicated carrier or co-carrier with R99
UE categories 1 to 6
Basic load control
OLPC for E-DCH
Iub flow control
CE scheduling
Power control of E-AGCH/E-RGCH/E-HICH
9.1.
The maximum number of HSDPA users is defined in the RNC without relying on capability indications
from the NodeB. Instead, the RNC uses the following parameters to determine this number:
Maximum allowed number of HS-DSCH MAC-d flows in the cell, which is defined by the
MaxHSDSCHUserNum RNW parameter.
The number of subscribers supported by the HSDPA refers to the number of subscribers
whose service is carried by the HSDPA channel, no matter how many RABs are borne by the
HSDPA channel. The highest value of MaxHSDSCHUserNum equals the cell HSDPA capacity
that is prescribed in the NodeB product specification. MaxHSDSCHUserNum can be set
according to the cell type, the available power of HSDPA, and the code resource.
Maximum allowed number of HSDPA users in the NodeB, which is defined by a NodeB-level parameter.
This describes the maximum number of subscribers supported by the HSDPA channel per NodeB. It is
set according to the product specification and the actual number of sold HSDPA licenses. Impact on
the network performance: if the HSDPA user connection is rejected by the NodeB, it can be inferred
that the HSDPA licenses are insufficient and new HSDPA licenses need to be applied for.
9.2.
The maximum number of HSUPA users is defined in the cell as well as in the NodeB and is controlled
with the following parameters:
Maximum allowed number of users in the cell, which is defined by the MaxHSUPAUserNum cell
parameter.
This parameter represents the maximum number of subscribers supported by the HSUPA channel and is
set according to the product specification. For HSUPA admission, the number of subscribers must be
counted first. If the current HSUPA subscriber number is lower than this parameter, the admission
request is evaluated further; otherwise, the admission is rejected directly.
Maximum allowed number of HSUPA users in the NodeB, which is defined by a NodeB-level parameter.
This describes the maximum number of subscribers supported by the HSUPA channel per NodeB. It is
set according to the product specification and the actual number of sold HSUPA licenses. Impact on
the network performance: if the HSUPA user connection is rejected by the NodeB, it can be inferred
that the HSUPA licenses are insufficient and new HSUPA licenses need to be applied for.
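The cell-level admission check implied by the parameter description above can be sketched as follows; the function name is illustrative, and a real admission decision also evaluates power, code and CE resources:

```python
def hsupa_admission(current_users, max_hsupa_user_num):
    """First gate of HSUPA admission: if the cell already carries
    MaxHSUPAUserNum users, reject directly; otherwise the request
    proceeds to the remaining admission checks."""
    if current_users < max_hsupa_user_num:
        return "evaluate"
    return "reject"
```

The same structure applies at NodeB level against the license-derived maximum.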
9.3.
9.4.
HS-DSCH MAC-d flow can be allocated in the cell if the number of currently allocated HS-DSCH MAC-d
flows per cell/cell group/NodeB is lower than the maximum indicated by the NodeB.
Key Topics:
Mapping Signaling Radio Bearers on E-DCH is more efficient and reduces latency.
It is important to understand the impact of HSUPA users on RoT:
o Increased RoT leads to higher transmit power for other services.
o The impact of increased RoT on admission/congestion control must be investigated.
o By reducing the HSUPA grant in response to R99 call originations, the impact on R99 services
can be minimized.
The interaction between the ping application and the grant allocation mechanism impacts ping
latency performance.
The mechanism to limit hardware resources for HSUPA users, especially in soft handover, impacts
HSUPA performance.
Challenges
It is important to have a good knowledge of HSUPA and to understand well the parameter settings and
the implementation details in order to properly evaluate the performance and optimize the system.
Some parameters must be seen in view of the particular implementation.
Proper test methodologies, combined with specific tests, shall be used to evaluate the impact of
HSUPA deployment.
It is crucial to measure the impact on Rise-over-Thermal.
Also crucial is understanding the increased UL load on existing services.
The following table shows the parameters in Huawei for HSUPA
9.4.2. RF optimization
There are four main factors that determine HSUPA throughput limit
UL interference, measured as Rise over Thermal Noise (RoT)
Number of hardware resources or Channel Elements available for HSUPA
Backhaul bandwidth available
UE transmit power, which depends on path loss and UL interference
NodeB traces and network counters provide information about the UL:
Received Total Wideband Power (RTWP): provides an estimate of the total received power at the
NodeB receiver.
Rise-over-Thermal Noise (RoT): provides an estimate of the increase of the overall noise measured
by the NodeB receiver, as compared to the thermal noise.
Usage of Iub resources and hardware: provides an estimate of the backhaul and processing HW
required by the NodeB for HSUPA connections.
Traces and counters can be collected on different NodeBs in the test area and should be separated
per cell. NodeB traces have associated time stamps, while network counters are usually recorded as
histograms over a period of 15 minutes.
The objective of the far-cell throughput test is to measure individual user throughput when the UE
is forced to transmit at maximum transmit power. HSUPA throughput at very low RSCP values depends
on several factors such as NodeB cable losses, noise figure, NodeB receiver implementation, etc.
Another important thing to keep in mind is that soft handover is supported in HSUPA, so increasing
the active set size through proper setting of handover parameters can improve performance at the
expense of hardware resources. The following points should also be taken into consideration while
optimizing an HSUPA network:
R99 services have priority over HSUPA, so low grant values combined with an unhappy UE (indicated
by the Happy Bit) may indicate a heavily loaded cell that lacks hardware or backhaul resources, or
has high RoT.
The transition from HSUPA to R99 may be delayed to avoid frequent switching back and forth. This
feature depends on the infrastructure vendor implementation.
High data rates of HSUPA cause higher UL interference. Increased UL interference can lead to higher
UE transmit power for AMR users. A higher requirement on UE transmit power may in some cases impact
AMR performance at the edge of the cell; the exact impact depends on network planning.
10.
RNC Load
The Claro Brasil network uses the BSC6810 RNC; at the time these guidelines were written, the
loaded software version was V200R010.
RNC hardware configuration is one of the following types: minimum configuration, maximum configuration,
and other configurations.
Minimum Configuration
The RNC supports a minimum configuration of a single cabinet, that is, an RSR cabinet configured
with only an RSS subrack, as shown in the figure.
Maximum Configuration
The figure shows the maximum configuration of the RNC. In the maximum configuration, the RNC
consists of two cabinets: one RSR cabinet and one RBR cabinet. The two cabinets hold six subracks:
one RSS subrack and five RBS subracks.
Item                                     Specification
Maximum number of cabinets               2 (1 RSR + 1 RBR)
Maximum number of subracks               6
Voice traffic                            51,000 Erlang
(UL + DL) PS data throughput (Mbit/s)    -
Number of cells                          5,100
Busy Hour Call Attempts (BHCA)           1,360,000
Other Configurations
Number of Subracks   Number of Cabinets   Voice Traffic (Erlang)   (UL + DL) PS Throughput (Mbit/s)   Number of NodeBs   Number of Cells
1 RSS + 1 RBS        1 RSR                15,000                   960                                500                1,500
1 RSS + 2 RBSs       1 RSR                24,000                   1,536                              800                2,400
1 RSS + 3 RBSs       1 RSR + 1 RBR        33,000                   2,112                              1,100              3,300
1 RSS + 4 RBSs       1 RSR + 1 RBR        42,000                   2,688                              1,400              4,200
The above table describes RNC configurations other than the minimum and maximum configurations.
A configuration can be chosen as required.
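As an illustration, the smallest configuration from the table that meets a given voice-Erlang and cell-count requirement could be looked up as follows; the function name and the fallback label are illustrative:

```python
# (subracks, voice Erlang, PS throughput Mbit/s, NodeBs, cells)
CONFIGS = [
    ("1 RSS + 1 RBS", 15000, 960, 500, 1500),
    ("1 RSS + 2 RBSs", 24000, 1536, 800, 2400),
    ("1 RSS + 3 RBSs", 33000, 2112, 1100, 3300),
    ("1 RSS + 4 RBSs", 42000, 2688, 1400, 4200),
]

def pick_config(erlang, cells):
    """Return the smallest configuration covering both requirements,
    falling back to the maximum configuration (1 RSS + 5 RBSs)."""
    for name, max_erl, _tput, _nodebs, max_cells in CONFIGS:
        if erlang <= max_erl and cells <= max_cells:
            return name
    return "maximum configuration (1 RSS + 5 RBSs)"
```

For example, a requirement of 20,000 Erlang and 2,000 cells is covered by the 1 RSS + 2 RBSs configuration.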
Provides internal Medium Access Control (MAC) switching for the RNC and enables convergence of
ATM and IP networks.
Provides a service switching channel for the service processing subracks of the RNC.
Distributes timing signals and RFN signals to the service processing boards of the RNC.
Functions of the RNC Service Processing Subsystem
The RNC service processing subsystem implements most RNC functions defined in the 3GPP protocols
and processes services of the RNC.
The service processing subsystem has the following functions:
Integrity protection
Mobility management
Multimedia broadcast
Message tracing
E1/T1
Channelized STM-1/OC-3 optical port
Unchannelized STM-1/OC-3c optical port
FE/GE electrical port
GE optical port
Processing Transport Network Layer Data
The RNC transport subsystem processes transport network layer messages.
In IP transport mode, the transport subsystem terminates user plane UDP/IP messages and
forwards control plane IP messages.
Through the transport subsystem, the RNC shields the differences between transport network layer
messages within the RNC.
The transport subsystem terminates transport network layer messages at the interface boards. Then,
according to the configuration transfer table, the subsystem transfers user plane, control plane, and
management plane datagrams to the DPUb and SPUa boards in the RNC for processing.
RNC OM Subsystem
The RNC OM subsystem is responsible for the operation and maintenance of the RNC.
Components of the RNC OM Subsystem. The RNC OM subsystem consists of the LMT,
OMUa boards, SCUa boards, and OM modules on other boards.
Working Principles of the RNC OM Subsystem. The RNC OM subsystem works in dual-plane mode through the OM network of the RNC.
RNC OM Functions. The RNC OM functions enable routine and emergency maintenance of the
RNC.
RNC Log Management. RNC log management enables you to query the information about
the operation and running of the RNC, thus facilitating fault analysis and identification.
RNC Alarm Management. RNC alarm management enables you to monitor the running
state of the RNC and informs you of faults in real time so that you can take timely measures.
RNC Loading Management. RNC loading management enables you to manage the process
of loading program and data files onto boards after the FAM boards (or subracks) start or restart.
BOOTP and DHCP on the Iub Interface. The RNC and NodeB support the BOOTP and
DHCP functions. By the BOOTP or DHCP function, a NodeB can automatically get an IP address
from an RNC and create an OM channel between the NodeB and the RNC. The BOOTP and DHCP
functions are applicable to ATM and IP transport on the Iub interface respectively.
RNC Upgrade Management. RNC upgrade refers to a process where the RNC is upgraded to
a later version.
RNC Clock Synchronization Subsystem
The RNC clock synchronization subsystem consists of the GCUa/GCGa boards in the RSS subrack and the
clock processing unit of each subrack. It provides timing signals for the RNC, generates the RFN, and
provides reference clocks for NodeBs.
RNC Clock Sources. The RNC has the following clock sources: Building Integrated Timing Supply
System (BITS) clock, Global Positioning System (GPS) clock, line clock, and external 8 kHz clock.
Structure of the RNC Clock Synchronization Subsystem. The RNC clock synchronization
subsystem consists of the clock module and other boards. The clock module is implemented by
the GCUa/GCGa board.
Timing Signal Processing in the RNC. The RNC processes external timing signals before sending
them to the boards.
RFN Generation and Reception. RNC Frame Number (RFN) is applicable to node synchronization
for the RNC. The node synchronization frames that the RNC sends to the NodeB carry the RFN
signals.
Power Supply Requirements of the RNC. This describes the power supply schemes of the RNC and
requirements for the AC power and DC power supplied to the RNC.
Layout of Power Switches on the RNC Cabinet. There is a fixed relation between outputs of the power
distribution box of the RNC cabinet and the intra-cabinet components.
Connections of Power Cables and PGND Cables in the RNC Cabinet. The power cables and PGND
cables in the RSR cabinet are connected in the same way as those in the RBR cabinet.
RNC Environment Monitoring Subsystem
The RNC environment monitoring subsystem automatically monitors the working environment of the
RNC and reports faults in real time.
The RNC environment monitoring subsystem consists of the power distribution box and the environment
monitoring parts in each subrack. This subsystem is responsible for power supply monitoring, fan
monitoring, cabinet door monitoring, and water monitoring.
RNC Power Supply Monitoring. RNC power monitoring is performed to monitor the power subsystem in
real time, report the running state of the power supply, and generate alarms when faults occur.
RNC Fan Monitoring. RNC fan monitoring is performed to monitor the fans in real time and adjust the speed
of the fans based on the temperature in the subrack.
RNC Cabinet Door Monitoring. RNC cabinet door monitoring is optional. When the RNC detects that the
front or back door of a cabinet is open, the RNC generates and reports an appropriate alarm.
RNC Water Monitoring. RNC water monitoring is optional. When the RNC detects water immersion, the
RNC generates and reports an appropriate alarm.
10.1.1. RNC Boards
The RNC boards refer to the OMUa board, SCUa board, SPUa board, GCUa board, GCGa board, DPUb
board, AEUa board, AOUa board, UOIa board, PEUa board, POUa board, FG2a board, GOUa board, PFCU
board, and PAMU board. The PFCU board is installed in the fan box. The PAMU board is installed in the
power distribution box. All the other boards are installed in the subracks.
RNC Board Compatibility. The RNC board compatibility defines whether the RNC boards of
different types can be configured in the same subrack at the same time.
GCUa/GCGa Board. The GCUa is shortened from the RNC General Clock Unit REV:a, and the
GCGa is a short form of the RNC General Clock with GPS Card REV:a. The GCUa/GCGa board is a
mandatory configuration. One RNC is configured with two GCUa/GCGa boards. The GCUa/GCGa
boards can be installed only in slots 12 and 13 in the RSS subrack.
OMUa Board. OMUa refers to RNC Operation and Maintenance Unit REV:a. One or two OMUa
boards are installed in the RNC cabinet. The OMUa boards can be installed only in slots 20 and 21,
or slots 22 and 23 in the RSS subrack. The OMUa board is twice the width of other boards.
Therefore, one OMUa board occupies two slots.
SPUa Board. SPUa refers to RNC Signaling Processing Unit REV:a. The SPUa board is a
mandatory configuration. In the RSS subrack, 2 to 10 SPUa boards are installed in slots 0 to 5 and
8 to 11. In the RBS subrack, 2 to 10 SPUa boards are installed in slots 0 to 5 and 8 to 11.
SCUa Board. SCUa refers to RNC GE Switching and Control Unit REV:a. The SCUa board is a
mandatory configuration. In both the RSS subrack and the RBS subrack, two SCUa boards are
installed in slots 6 and 7.
DPUb Board. DPUb refers to RNC Data Processing Unit REV:b. The DPUb board is a mandatory
configuration. For the RSS subrack, 2 to 10 DPUb boards are installed in slots 8 to 11 and slots 14
to 19. For the RBS subrack, 2 to 12 DPUb boards are installed in slots 8 to 19.
AEUa Board. AEUa refers to RNC 32-port ATM over E1/T1/J1 interface Unit REV:a. The AEUa
board is an optional configuration. It can be installed in either the RSS subrack or the RBS
subrack. The number of the AEUa boards to be installed depends on site requirements. In the RSS
subrack, the AEUa board can be installed in slots 14 to 19 and slots 24 to 27. In the RBS subrack,
the AEUa board can be installed in slots 14 to 27.
AOUa Board. AOUa refers to RNC 2-port ATM over channelized Optical STM-1/OC-3 Interface
Unit REV:a. The AOUa board is optional and can be installed in both the RSS subrack and the RBS
subrack. The number of the AOUa boards to be installed depends on site requirements. In the RSS
subrack, the AOUa board can be installed in slots 14 to 19 and slots 24 to 27. In the RBS subrack,
the AOUa board can be installed in slots 14 to 27.
FG2a Board. FG2a refers to RNC packet over electronic 8-port FE or 2-port GE Ethernet
Interface unit REV:a. The FG2a board is an optional configuration. It can be installed in both the
RSS subrack and the RBS subrack. The number of the FG2a boards to be installed depends on site
requirements. In the RSS subrack, the FG2a board can be installed in slots 14 to 19 and slots 24 to
27. In the RBS subrack, the FG2a board can be installed in slots 14 to 27.
GOUa Board. GOUa refers to RNC 2-port packet over Optical GE Ethernet Interface Unit REV:a.
The GOUa board is an optional configuration. It can be installed in both the RSS subrack and the
RBS subrack. The number of the GOUa boards to be installed depends on site requirements. In the
RSS subrack, the GOUa board can be installed in slots 14 to 19 and slots 24 to 27. In the RBS
subrack, the GOUa board can be installed in slots 14 to 27.
PEUa Board. PEUa refers to RNC 32-port Packet over E1/T1/J1 Interface Unit REV:a. The PEUa
board is an optional configuration. It can be installed in both the RSS subrack and the RBS
subrack. The number of the PEUa boards to be installed depends on site requirements. In the RSS
subrack, the PEUa board can be installed in slots 14 to 19 and slots 24 to 27. In the RBS subrack,
the PEUa board can be installed in slots 14 to 27.
UOIa Board. UOIa refers to RNC 4-port ATM/Packet over Unchannelized Optical STM-1/OC-3c
Interface unit REV:a. The UOIa board is an optional configuration. It can be installed in both the
RSS subrack and the RBS subrack. The number of the UOIa boards to be installed depends on site
requirements. In the RSS subrack, the UOIa board can be installed in slots 14 to 19 and slots 24 to
27. In the RBS subrack, the UOIa board can be installed in slots 14 to 27.
PFCU Board. PFCU refers to RNC Fan Control Unit. The PFCU board is installed in the front of the
fan box. Each fan box is configured with one PFCU board.
POUa Board. POUa refers to RNC 2-port packet over channelized Optical STM-1/OC-3 Interface
Unit REV:a. The POUa board is an optional configuration. It can be installed in both the RSS
subrack and the RBS subrack. The number of the POUa boards to be installed depends on site
requirement. In the RSS subrack, the POUa board can be installed in slots 14 to 19 and slots 24 to
27. In the RBS subrack, the POUa board can be installed in slots 14 to 27.
PAMU Board. PAMU refers to the Power Allocation Monitoring Unit. The PAMU is configured in
the power distribution box of the RNC cabinet. Each power distribution box holds one PAMU.
10.2.2.
11.
Generally, failures due to admission control (AC) of RRC connections, RABs, and channelisation codes give a
good indication of how severely radio load affects network performance. The following metrics
are important to evaluate this severity.
Counter
VS.CM.ULSF2.Act.Att
VS.CM.ULHLS.Act.Att
VS.CM.ULSF2.Act.Fail
VS.CM.ULHLS.Act.Fail
VS.CM.DLSF2.Act.Att
VS.CM.DLHLS.Act.Att
VS.CM.DLSF2.Act.Fail
VS.CM.DLHLS.Act.Fail
[To be completed]
12. LOAD CONTROL
Load control is a Radio Resource Management (RRM) algorithm designed to keep the load in the Uu
interface stable and to avoid situations of overload. If the system gets overloaded, the radio resource
management returns the system quickly and controllably back to the normal load state defined by Radio
Network Planning (RNP).
Load control is performed separately for the uplink and the downlink. Since interference is a crucial and
limiting factor in any CDMA system, load control measures both uplink and downlink interference
periodically under one RNC. Each User Equipment (UE) and Base Transceiver Station (BTS) that transmits
in the network creates interference. The downlink interference is the same as the transmission power of
the cell in question.
Load control can be divided into preventive load control and overload control. As the names imply, the
basic difference between these two types of control lies in when the actions are performed: overload
control actions are performed after the cell has been overloaded (threshold x), whereas preventive
actions are performed before the cell becomes overloaded (threshold y).
There are only two overload actions: preventing further calls from being set up (admission control) and
throttling back Non-Real Time (NRT) traffic (packet scheduler).
12.2. PRE-EMPTION
Preemption guarantees the success in the access of a higher-priority user by forcibly releasing the
resources of a lower-priority user.
After cell resource admission fails, the RNC performs the preemption function if the following conditions
are met:
The RNC receives an RAB ASSIGNMENT REQUEST message indicating that preemption is
supported.
The Preempt algorithm switch parameter is set to ON.
Preemption is applicable to the following cases:
Setup or modification of a service
Hard handover or SRNS relocation
[Table: for each requesting service (R99, HSUPA, HSDPA, R99 + HSPA combined, and MBMS), the resources of R99, HSUPA, and HSDPA services that can be preempted: code, power, CE, Iub bandwidth, and the number of HSDPA, HSUPA, or MBMS users.]
NOTE:
To enable resource-triggered preemption for MBMS services, the Mbms PreemptAlgoSwitch must be
ON.
The preemption procedure is as follows:
1. The preemption algorithm determines which radio link sets can be preempted.
   a. Choose SRNC UEs first. If no SRNC UE is available, choose DRNC UEs.
   b. Sort the UEs by user integrate priority.
   c. Specify the range of candidate UEs.
      Only the UEs with lower priority than the RAB to be established can be selected. If the
      Integrate Priority Configured Reference parameter is set to traffic class and the
      switch PreemptRefArpSwitch is on, only the ones with higher ARP and lower priority
      than the RAB to be established can be selected. This applies to RABs of PS streaming
      and BE services.
   d. The RNC selects one or more UEs to match the resource needed by the RAB to be
      established.
   NOTE:
   For preemption triggered by power, the preempted objects can be R99 users,
   R99 + HSDPA combined users, or HSDPA RABs.
   For preemption triggered by Iub bandwidth, the preempted objects can only be RABs.
   Combined services carried on channels of different types (that is, R99 + HSPA)
   cannot preempt the resources of other services.
   For preemption triggered by code or Iub resources, only one user can be preempted.
   For preemption triggered by power or credit resources, more than one user can be preempted.
2. The RNC releases the resources occupied by the candidate UEs.
3. The requested service directly uses the released resources to access the network
   without an admission decision.
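The candidate-selection step above can be sketched as follows. This is an illustrative model, not Huawei code: the field names, the numeric priority convention (a higher number means a lower priority), and the abstract "resource" amount are all assumptions.

```python
def select_preemption_candidates(ues, new_rab_priority, needed):
    """Pick lower-priority UEs to preempt until the resource amount
    requested by the new RAB is covered. SRNC UEs are considered
    before DRNC UEs; within each group, the lowest-priority UEs
    (largest priority number) are chosen first."""
    # Only UEs with lower priority than the RAB to be established qualify.
    candidates = [u for u in ues if u["priority"] > new_rab_priority]
    # SRNC first, then lowest priority (largest number) first.
    candidates.sort(key=lambda u: (u["rnc"] != "SRNC", -u["priority"]))
    chosen, freed = [], 0
    for ue in candidates:
        if freed >= needed:
            break
        chosen.append(ue["id"])
        freed += ue["resource"]
    return chosen
```

A single call shows the ordering: with an SRNC UE of priority 12, an SRNC UE of priority 10, and a DRNC UE of priority 12, the SRNC UEs are preempted first, lowest priority first.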
If the current UL/DL load of an R99 cell is not lower than UL/DL OLC Trigger threshold for some
hysteresis (defined by DL State Trans Hysteresis threshold in DL; not configurable in UL), the
cell works in overload congestion state and the related overload handling action is taken. If the
current UL/DL load of the R99 cell is lower than UL/DL OLC Release threshold for some
hysteresis (defined by DL State Trans Hysteresis threshold in DL; not configurable in UL), the
cell comes back to normal state.
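The overload-state decision described above can be modelled as a small state machine. This is a sketch under stated assumptions: the hysteresis is interpreted as a number of consecutive measurement periods, and the function and parameter names are illustrative, not Huawei parameters.

```python
def olc_state(samples, trigger, release, hysteresis, state="normal"):
    """Track the OLC congestion state over periodic load samples.
    The cell enters the overload state when the load stays at or above
    the trigger threshold for `hysteresis` consecutive samples, and
    returns to normal when it stays below the release threshold for
    the same number of samples."""
    above = below = 0
    for load in samples:
        if state == "normal":
            above = above + 1 if load >= trigger else 0
            if above >= hysteresis:
                state, above = "overload", 0
        else:
            below = below + 1 if load < release else 0
            if below >= hysteresis:
                state, below = "normal", 0
    return state
```

For example, three consecutive samples at or above the trigger threshold move the cell into the overload state; a single spike does not.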
The HSDPA cell has the same uplink decision criterion as the R99 cell. The load in the downlink,
however, is the sum of load of the non-HSDPA power (transmitted carrier power of all codes not
used for HS-PDSCH or HS-SCCH transmission) and the GBP.
In addition to periodic measurement, event-triggered measurement is applicable to OLC.
If OLC_EVENTMEAS is ON, the RNC will request the initiation of an event E measurement on power
resource in the NodeB. In the associated request message, the reporting criterion is specified, including
key factors UL/DL OLC trigger hysteresis, UL/DL OLC trigger threshold and UL/DL OLC release
threshold. Then the NodeB checks the current power load in real time according to this criterion and
reports the status to the RNC periodically if the conditions of reporting are met.
NOTE:
The current policy for NodeBs is to preferentially allocate power to DCH users. It is not
recommended that Ptotal, the TCP, be used as the criterion for overload in the HSDPA cell. That is because
the NodeB can automatically adjust the power at the next scheduling period.
For HSDPA cells, it is recommended that the OLC_EVENTMEAS switch be set to OFF because, owing to
a 3GPP limitation, the NodeB cannot check the total load of the non-HSDPA power and the GBP.
13.
This section describes one possible approach to gaining visibility into the Capacity Performance of
the network.
These two tools are intended to support the Optimizer in all phases of the Capacity Process as described
in Section 3, but especially in both the Capacity Metrics Monitoring and the Capacity Analysis.
Once the duration of the Alarm is classified, the second criterion is applied to decide the impact:
MINOR/MAJOR.
For that, the impact thresholds proposed throughout this document are applied.
As a summary, the Alarms (if any) will be assigned to each cell according to:
1.
2.
3.
4.
If the KPIs are calculated based solely on average values of the Resource Utilization, there is a possibility
that the Alarms would not be triggered by the KPI but only by the AC/Congestion events caused by peaks
in the Resource Utilization. This is why both triggering possibilities (Utilization KPIs and
AC/Congestion events) are included in each Alarm.
In order to increase the correlation between Alarms based on events and KPIs, it is proposed to monitor
not only the KPI average values but also certain percentiles (85% and 98%) so that the trend of the peaks
can also be tracked.
Two percentiles are proposed:
Percentile 85% (can be approximated by adding 1 standard deviation (σ) to the average value)
Percentile 98% (can be approximated by adding 2 standard deviations (2σ) to the average value)
[Initial version of the Weekly Report will use both of them to determine which one provides a better
understanding of the trends in the Resource utilization].
Accordingly, different thresholds should be used for the Average and Percentile evaluations of the KPI.
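The percentile approximations proposed above can be computed directly from the utilization samples. A minimal sketch (the function name and the dictionary keys are illustrative):

```python
from statistics import mean, pstdev

def utilization_stats(samples):
    """Average plus approximate P85/P98 of resource-utilization
    samples, estimated as mean + 1 sigma and mean + 2 sigma, as
    proposed for the weekly capacity report."""
    avg = mean(samples)
    sigma = pstdev(samples)
    return {"avg": avg, "p85": avg + sigma, "p98": avg + 2 * sigma}
```

Note that mean + sigma approximates the 85th percentile (and mean + 2 sigma the 98th) only for roughly normal-shaped utilization distributions; heavily skewed traffic profiles would bias the estimate.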
With this definition of the Alarms, the optimizer can focus on them following the order used in the
table above: first, on the cells with the highest number of Long Major Alarms (one per monitored
Resource), which point to a more impactful and persistent issue. These are usually problems
already known by the optimizer; likely the solution was decided some time ago (when the Alarm was
still a Short Major one) and the optimizer just keeps track of its implementation.
Then, the new Short Major Alarm cases should be addressed. These are the urgent cases to pay
attention to every week. Some of them can be related to special events (concerts, etc.) or to a real new
problem. Some of them will be solved quickly; others will need a long-term solution. In the latter
case, depending on the type of short-term solution that can be applied, these alarms will become Long
Major or Long Minor (the most desirable outcome).
Minor Alarms can be flags of potential upcoming issues. They do not require urgent attention but should
be monitored and assessed on a regular basis.
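The duration/impact classification described above (Long/Short combined with Major/Minor) can be sketched as a simple rule. The threshold parameters and the four-week boundary for "Long" are illustrative assumptions, not values from this document:

```python
def classify_alarm(weeks_active, utilization, events,
                   major_util, major_events, long_weeks=4):
    """Assign an alarm label from duration and impact. The alarm is
    Major if either the utilization KPI or the number of AC/Congestion
    events exceeds its threshold, and Long if it has persisted for
    `long_weeks` or more consecutive weeks."""
    impact = "Major" if (utilization >= major_util
                         or events >= major_events) else "Minor"
    duration = "Long" if weeks_active >= long_weeks else "Short"
    return f"{duration} {impact}"
```

This mirrors the dual triggering discussed earlier: a cell with modest average utilization but many AC/Congestion events is still flagged Major.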
We suggest running the report for all 7 days of the week.
[To be discussed internally in Claro the advantages/disadvantages of running it only for the 5 working
days]
One suggestion is to automate their collection and delivery to the whole Optimization Team by email every
morning. This way, they will trigger a daily supervision and analysis of the main aspects of Performance
over the main offenders (worst cells).
Another suggestion is to produce a Weekly Report (in Excel, for instance) for longer-term tracking
of the Capacity Performance in the network. It will include:
In the main TAB, for each pair (cell-week) and for each resource, the highest value of the week
(out of the 7 days values) for both the resource utilization and blockings (colored according to the
type of Alarm).
[In order to simplify this summary TAB, we could also remove all metric values from it and
just leave the list of Alarms, which can then be analyzed in the rest of the TABs.]
In the rest of TABs, one per monitored Resource, for each (cell-day), most KPIs defined in
this document will be shown (both utilization and blocking ratios) evaluated at Floating BH,
meaning that for each KPI, the highest value for the day will be shown.
Both TABs, when accumulated every week, will allow us to monitor the trends in all Capacity aspects
discussed in this document (DL TX Power, UL RX Power, OVSF Codes, CEs, Iub Utilization, number of HS
users, etc.). These trends should be taken into account for forecasting purposes.
Also, the idea behind all these TABs is to provide the optimizer with a wider view of the cell behavior on
a daily basis (in the per-Resource TABs), besides the executive summary per week in the main TAB. This
additional per-day data will help to identify the days causing the Alarms to be
triggered, so further analysis can be conducted in SMART for the specific days. It will also be useful
to have an order of magnitude for the maximum number of events (AC/Congestion) per day, giving a
better picture of the importance of the problem.
If the idea is implemented, this section will be completed with all the flowcharts needed
to describe the exact way to produce each alarm and prepare the Report.
14.
REFERENCES
[1] WCDMA (UMTS) Deployment Handbook. Planning and Optimization Aspects. Christophe Chevallier,
Christopher Brunner, Andrea Garavaglia, Kevin P. Murray, Kenneth R. Baker (All of QUALCOMM
Incorporated California, USA). Ed. John Wiley & Sons. 2006
[2] Radio Network Planning and Optimisation for UMTS. Jaana Laiho and Achim Wacker [both of Nokia
Networks, Nokia Group, Finland] & Tomas Novosad [Nokia Networks, Nokia Group, USA]. Ed. John
Wiley & Sons. 2006
[5] Introduction to UMTS Optimization. Wray Castle, 2004
[6] HED 5.5. NodeB Documentation (V100R010_06)
[7] RAN6.1 Feature Description
[8] RAN10.0 Network Optimization Parameter Reference-20080329-A-1.0
[9] NodeB WCDMA V100R010C01B051 Performance Counter Reference
[10] Function List and Description of Huawei UMTS RAN10[1].0 V1.7(20080827)
A.
The load control algorithm is built into the RNC. The input of load control comes from all measurement
information of the NodeB.
Each of the load control algorithms involves three factors: measuring, triggering, and controlling. Valid
measurement is the prerequisite for effective control.
A.a.
The priority consists of RAB integrate priority, user integrate priority, and user priority.
If Integrate Priority Configured Reference is set to Traffic Class, the integrate priority abides
by the following rules:
If Integrate Priority Configured Reference is set to ARP, the integrate priority abides by the
following rules:
NOTE:
ARP and THP are carried in the RAB ASSIGNMENT REQUEST message, and they are not configurable on
the RNC LMT.
NOTE:
ARP 15 is always the lowest priority and is not configurable. It corresponds to user priority 3 (copper).
If the ARP is not received in messages over the Iu interface, the user priority is regarded as copper.
The levels of user priority are mainly used to provide different QoS for different users, for example,
setting different GBR values according to the level of users for BE service.
The GBR of BE services is configurable. According to the traffic class, user priority, and bearer type
(DCH or HSPA), the different GBR values are configured through the SET USERGBR command.
Changes on the mapping between ARP and user priority have an influence on the following features:
HSDPA
HSUPA
AMR
AMR-WB
Iub overbooking
A.b.
Load Measurement
The algorithms of load control such as OLC and CAC use load measurement values in the uplink and the
downlink. A common Load Measurement (LDM) algorithm is required to control load measurement in the
uplink and the downlink, which makes the algorithm relatively independent.
Measurement Quantities and Procedure. The NodeB and the RNC perform measurements and
filtering based on the parameter settings. The statistics obtained after the measurements and
filtering serve as the data input into the algorithms of load control.
Filtering of Load Measurement. For most measurement quantities, the NodeB performs layer 3
filtering on original measurement values, and the RNC performs smooth filtering on the values
reported by the NodeB. Provided Bit Rate (PBR) measurement, however, does not use alpha
filtering on the NodeB side.
TCP of all codes not used for HS-PDSCH, HS-SCCH, E-AGCH, E-RGCH and E-HICH transmission
(non-HSDPA power)
Fn = (1 − α) × Fn−1 + α × Mn
where
Fn is the new measurement value after filtering.
Fn−1 is the last measurement value after filtering.
Mn is the latest measurement value from the physical layer.
α = (1/2)^(k/2), where k is defined by the UL/DL basic common measure filter coef parameter.
When α is set to 1, that is, k = 0, no layer 3 filtering is performed.
The larger the coefficient, the smaller the impact of the physical-layer measurement value on the
value after layer 3 filtering, and the less susceptible the network layer is to changes in the
physical-layer measurement value.
The RNC-side smooth filtering averages the last N reported values:
F̄ = (1/N) × Σ (i = 0 to N−1) Fn−i
where N is the length of the smooth filter window.
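A minimal sketch of the two filtering stages described above. The function names and the list-based smoothing are illustrative assumptions, not Huawei interfaces:

```python
def layer3_filter(prev_filtered, measurement, k):
    """One step of the NodeB layer-3 filter: Fn = (1 - a)*Fn-1 + a*Mn,
    with a = (1/2)**(k/2); k = 0 means a = 1, i.e. no filtering."""
    a = 0.5 ** (k / 2)
    return (1 - a) * prev_filtered + a * measurement

def smooth_filter(reported_values, window):
    """RNC-side smoothing: moving average over the last `window`
    values reported by the NodeB (the smooth filter window length)."""
    recent = reported_values[-window:]
    return sum(recent) / len(recent)
```

With k = 2 the coefficient is 0.5, so each new physical-layer sample contributes half of the filtered value, illustrating how a larger k makes the output less susceptible to physical-layer changes.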
Delay susceptibilities of PUC, CAC, LDR, and OLC to common measurement are different. The LDM
algorithm must apply different smooth filter coefficients and measurement periods to those algorithms;
thus, they can get expected filtered values.
[Table: smooth filter coefficients and measurement periods applied to the PUC, CAC, LDR, and OLC algorithms.]
NOTE:
Different from other measurement quantities, GBP measurements have the same smooth window length
in all related algorithms. The filter length for GBP measurement is defined by the HSDPA need power
filter len parameter.
Parameter ID | Value Range | Recommended Value
PucAvgFilterLen | 1 to 32 | 32
UlLdrAvgFilterLen | 1 to 32 | 25
DlLdrAvgFilterLen | 1 to 32 | 25
UlCACAvgFilterLen | 1 to 32 | —
DlCACAvgFilterLen | 1 to 32 | —
UlOLCAvgFilterLen | 1 to 32 | 25
DlOLCAvgFilterLen | 1 to 32 | 25

Parameter Name: PUC moving average filter length (and the corresponding LDR/CAC/OLC filter lengths)
Description:
These parameters specify the length of the smooth filter window for the reported measurement value on the
RNC side. The greater the value of each parameter, the greater the smoothing effect, but the lower the
signal change tracking capability.
A.b.b.b. Reporting Interval
The interval at which the NodeB reports each measurement quantity to the RNC is configurable. The
following table lists the parameters used to set the reporting intervals for the measurement quantities.
A.b.b.c. Provided Bit Rate
The Provided Bit Rate (PBR) measurement quantity is also reported by the NodeB to the RNC. Different
from other power measurement quantities, PBR does not undergo alpha filtering on the NodeB side.
For details of PBR, refer to 3GPP 25.321.
The following table lists the parameters that are used to set the PBR reporting intervals.
If the temperature in the equipment room is constant and the background noise changes little, the
background noise does not need to be adjusted after the initial value is set.
If the temperature in the equipment room varies with the outside temperature, the background
noise changes greatly and must be updated.
The time period of the background noise update can be specified by setting the Background Noise
Update Start Time and Background Noise Update End Time parameters. During this period, background
noise updating is performed if the Auto-Adaptive Background Noise Update Switch is set to ON.
The measured value of background noise is effective when the current equivalent number of users in the
cell is smaller than the value of Equivalent User Number Threshold for Background Noise.
The time that one background noise update takes is specified by setting Background Noise Update
Continuance Time.
The discarding threshold of abnormal RTWP during the update is specified by setting Background
Noise Abnormal Threshold. This setting avoids temporary burst interference and RTWP peaks.
The variation of the RTWP that triggers the background noise update is specified by setting Background
Noise Update Trigger Threshold. This setting avoids frequent updates over the Iub interface.
In the WCDMA system, the mobility management of the UE in idle or connected mode is implemented by
cell selection and cell reselection. The Potential User Control (PUC) algorithm controls the cell selection
of a potential UE and prevents an idle UE from camping on a heavily loaded cell.
The PUC algorithm is available only after it is enabled, that is, after PUC under the Cell LDC algorithm
switch parameter is set to 1.
The RNC periodically monitors the downlink load of the cell and compares the measurement results with
the configured thresholds Load level division threshold 1 and Load level division threshold 2,
that is, load level division upper threshold and lower threshold.
If the cell load is higher than the load level division upper threshold plus the Load level
division hysteresis, the cell load is judged to be heavy.
If the cell load is lower than the load level division lower threshold minus the Load level
division hysteresis, the cell load is judged to be light.
Cell load has three states: heavy, normal, and light.
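The three-state decision above can be sketched as a simple classifier. The function name and the numeric threshold values are illustrative assumptions; the thresholds correspond to the Load level division threshold 1/2 and Load level division hysteresis parameters:

```python
def puc_load_level(load, lower, upper, hysteresis):
    """Classify cell load per the PUC division thresholds: heavy when
    above the upper threshold plus the hysteresis, light when below
    the lower threshold minus the hysteresis, otherwise normal."""
    if load > upper + hysteresis:
        return "heavy"
    if load < lower - hysteresis:
        return "light"
    return "normal"
```

The hysteresis widens the normal band, so a load hovering right at a division threshold does not flip the cell's state on every measurement period.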
Sintersearch
When this value is increased by the serving cell, the UE starts inter-frequency cell
reselection ahead of schedule.
When this value is decreased by the serving cell, the UE delays inter-frequency cell
reselection.
Qoffset1(s,n): applies to R (reselection) rule with CPICH RSCP
When this value is increased by the serving cell, the UE has a lower probability of
selecting a neighboring cell.
When this value is decreased by the serving cell, the UE has a higher probability of
selecting a neighboring cell.
Qoffset2(s,n): applies to R (reselection) rule with CPICH Ec/I0
When this value is increased by the serving cell, the UE has a lower probability of
selecting a neighboring cell.
When this value is decreased by the serving cell, the UE has a higher probability of
selecting a neighboring cell.
According to the load status of the current cell, the cell reselection parameters are adjusted. The
configuration of Sintersearch is oriented to the current cell. Its value is related to the load of the current
cell.
The configuration of Qoffset1 and Qoffset2 is oriented to the neighboring cells. Their values are related
to the load of the current cell and the load of the neighboring cells.
Neighboring Cell Load | Current Cell Load | Q'offset1 | Q'offset2
Light | Light | Q'offset1 = Qoffset1 | Q'offset2 = Qoffset2
Light | Normal | Q'offset1 = Qoffset1 | Q'offset2 = Qoffset2
Light | Heavy | Q'offset1 = Qoffset1 + Qoffset1 offset 1 | Q'offset2 = Qoffset2 + Qoffset2 offset 1
Normal | Light | Q'offset1 = Qoffset1 | Q'offset2 = Qoffset2
Normal | Normal | Q'offset1 = Qoffset1 | Q'offset2 = Qoffset2
Normal | Heavy | Q'offset1 = Qoffset1 + Qoffset1 offset 1 | Q'offset2 = Qoffset2 + Qoffset2 offset 1
Heavy | Light | Q'offset1 = Qoffset1 + Qoffset1 offset 2 | Q'offset2 = Qoffset2 + Qoffset2 offset 2
Heavy | Normal | Q'offset1 = Qoffset1 + Qoffset1 offset 2 | Q'offset2 = Qoffset2 + Qoffset2 offset 2
Heavy | Heavy | Q'offset1 = Qoffset1 | Q'offset2 = Qoffset2
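The adjustment rule in the table above can be captured as a small lookup. This is an illustrative sketch: the dictionary, function name, and example offset values are assumptions, and "offset 1"/"offset 2" stand for the Qoffset1/2 offset 1 and offset 2 parameters:

```python
# (neighboring cell load, current cell load) pairs that modify Qoffset;
# every other combination leaves Qoffset unchanged.
ADJUSTMENT = {
    ("light", "heavy"): "offset 1",
    ("normal", "heavy"): "offset 1",
    ("heavy", "light"): "offset 2",
    ("heavy", "normal"): "offset 2",
}

def adjusted_qoffset(qoffset, neighbor_load, current_load, offset1, offset2):
    """Return Q'offset for one neighboring cell, following the PUC
    adjustment table: unchanged unless one of the four listed load
    combinations applies."""
    rule = ADJUSTMENT.get((neighbor_load, current_load))
    if rule == "offset 1":
        return qoffset + offset1
    if rule == "offset 2":
        return qoffset + offset2
    return qoffset
```

The same lookup applies to both Qoffset1 and Qoffset2, since the table adjusts them in parallel.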
The procedure of UE access includes the procedures of RRC connection setup and RAB setup.
[Table: for each requesting service (R99, HSUPA, HSDPA, R99 + HSPA combined, and MBMS), the resources of R99, HSUPA, and HSDPA services that can be preempted: code, power, CE, Iub bandwidth, and the number of HSDPA, HSUPA, or MBMS users.]
NOTE:
To enable resource-triggered preemption for MBMS services, the Mbms PreemptAlgoSwitch must be
ON.
The preemption procedure is as follows:
1. The preemption algorithm determines which radio link sets can be preempted.
   a. Choose SRNC UEs first. If no SRNC UE is available, choose DRNC UEs.
   b. Sort the UEs by user integrate priority.
   c. Specify the range of candidate UEs.
      Only the UEs with lower priority than the RAB to be established can be selected. If the
      Integrate Priority Configured Reference parameter is set to traffic class and the
      switch PreemptRefArpSwitch is on, only the ones with higher ARP and lower priority
      than the RAB to be established can be selected. This applies to RABs of PS streaming
      and BE services.
   d. The RNC selects one or more UEs to match the resource needed by the RAB to be
      established.
   NOTE:
   For preemption triggered by power, the preempted objects can be R99 users,
   R99 + HSDPA combined users, or HSDPA RABs.
   For preemption triggered by Iub bandwidth, the preempted objects can only be RABs.
   Combined services carried on channels of different types (that is, R99 + HSPA)
   cannot preempt the resources of other services.
   For preemption triggered by code or Iub resources, only one user can be preempted.
   For preemption triggered by power or credit resources, more than one user can be preempted.
2. The RNC releases the resources occupied by the candidate UEs.
3. The requested service directly uses the released resources to access the network without an
   admission decision.
A.b.g.c. Queuing
After the admission of a service fails, the service request is put into a specific queue. During the time
defined by the Max queuing time length parameter, admission attempts for the service are made
periodically.
After the cell resource decision fails, the RNC performs queuing if it receives an RAB ASSIGNMENT REQUEST message indicating that the queuing function is supported and the Queue algorithm switch is set to ON.
The RNC configures 12 independent levels of maximum queuing time, that is, T1 to T12. Configuration of
Max queuing time length for different priorities of services is described in the following part.
If Integrate Priority Configured Reference is set to Traffic, that is, the traffic class serves as the
reference to the integrate priority, then the maximum queuing time for different priorities is configured
as shown in the following table:
Traffic Class    User Priority   Max queuing time length
Conversational   1 (gold)        T1
Conversational   2 (silver)      T2
Conversational   3 (copper)      T3
Streaming        1 (gold)        T4
Streaming        2 (silver)      T5
Streaming        3 (copper)      T6
Interactive      1 (gold)        T7
Interactive      2 (silver)      T8
Interactive      3 (copper)      T9
Background       1 (gold)        T10
Background       2 (silver)      T11
Background       3 (copper)      T12
If Integrate Priority Configured Reference is set to ARP, the maximum queuing time for different priorities
is configured as shown in the following table:
User Priority   Traffic Class    Max queuing time length
1 (gold)        Conversational   T1
1 (gold)        Streaming        T2
1 (gold)        Interactive      T3
1 (gold)        Background       T4
2 (silver)      Conversational   T5
2 (silver)      Streaming        T6
2 (silver)      Interactive      T7
2 (silver)      Background       T8
3 (copper)      Conversational   T9
3 (copper)      Streaming        T10
3 (copper)      Interactive      T11
3 (copper)      Background       T12
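The two T1-T12 mappings above can be sketched as a simple lookup. This is an illustrative Python sketch, not Huawei RNC code; the function and constant names are assumptions made for the example.

```python
# Illustrative lookup of the Max queuing time level (T1..T12) for a service,
# following the two tables above. Names here are hypothetical.
TRAFFIC_CLASSES = ["conversational", "streaming", "interactive", "background"]
USER_PRIORITIES = ["gold", "silver", "copper"]

def max_queuing_time_index(reference, traffic_class, user_priority):
    """Return n such that Tn is the maximum queuing time level (1..12)."""
    tc = TRAFFIC_CLASSES.index(traffic_class)
    up = USER_PRIORITIES.index(user_priority)
    if reference == "traffic":
        # Traffic class is the primary key: Conversational -> T1..T3, etc.
        return tc * 3 + up + 1
    if reference == "arp":
        # User priority is the primary key: gold -> T1..T4, etc.
        return up * 4 + tc + 1
    raise ValueError(reference)
```

For example, with the reference set to traffic, a silver streaming service maps to T5, matching the first table.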
The handling of the request differs depending on whether the queue is full or not full.
After the heartbeat timer expires, the queuing algorithm proceeds through its defined steps and actions.
RAB Directed Retry Decision (DRD) is triggered when the blind handover to other inter-frequency cells is
performed after resource allocation fails in the RNC during the RAB setup.
The RAB DRD procedure is as follows:
1 The RNC makes a decision on the admission of the target inter-frequency cell for blind handover.
2 If the admission request is accepted, the DRD procedure is performed for the target inter-frequency cell for blind handover.
3 The RNC starts the radio link setup procedure to perform the inter-frequency handover.
4 The RNC starts the radio bearer setup procedure to complete the inter-frequency handover on the Uu interface and the service setup.
If step 2, 3, or 4 fails, the RNC repeats the RAB DRD in another target inter-frequency cell for blind handover until the retry succeeds, the retry fails in all such cells, or the number of retries reaches the value of Max inter-frequency direct retry number.
NOTE:
After an HSPA service request is denied, the service falls back to the DCH. Then, the
service re-attempts to access the network.
The RAB DRD to a target cell in another system (for example, GSM) for blind handover is
similar. For details, refer to Inter-RAT Handover.
According to the cell type (R99 or R99+HSDPA), an HSDPA user accessing an R99 cell can
be directed to an R99+HSDPA cell through DRD. According to the cell parameter R99 CS
separation indicator or R99 PS separation indicator, an R99 user accessing an
R99+HSDPA cell can be directed to an R99 cell through DRD.
RAN6.1 does not support inter-RAT DRD for RABs of combined services, PS services, or HSPA services.
Whether the DRD action can be executed depends on the settings of DRD algorithm switches. The
following table describes the DRD algorithm switches applicable to different scenarios.
Scenario                                 Switch
DRD switch                               DRD_SWITCH
Combined services                        COMB_SERV_DRD_SWITCH
RAB modification                         RAB_MODIFY_DRD_SWITCH
DCCC                                     RAB_DCCC_DRD_SWITCH
HSDPA service                            HSDPA_DRD_SWITCH
RAB setup                                RAB_SETUP_DRD_SWITCH
DCH-to-HSPA intra-frequency handover     INTRA_HO_D2H_DRD_SWITCH
DCH-to-HSPA inter-frequency handover     INTER_HO_D2H_DRD_SWITCH
HSUPA service                            HSUPA_DRD_SWITCH
A DRD action is executable only when all the related switches are on. For example, before an HSUPA
service is set up, the DRD_SWITCH, RAB_SETUP_DRD_SWITCH, and HSUPA_DRD_SWITCH must be
on.
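The "all related switches must be on" rule can be sketched as a set-containment check. This is an illustrative Python sketch; the per-scenario switch sets shown are taken from the HSUPA example above and an analogous HSDPA assumption, not from an exhaustive configuration.

```python
# Hypothetical sketch of DRD switch gating. Switch names follow the table
# above; the scenario-to-switch-set mapping is illustrative.
REQUIRED_SWITCHES = {
    "hsupa_rab_setup": {"DRD_SWITCH", "RAB_SETUP_DRD_SWITCH", "HSUPA_DRD_SWITCH"},
    "hsdpa_rab_setup": {"DRD_SWITCH", "RAB_SETUP_DRD_SWITCH", "HSDPA_DRD_SWITCH"},
}

def drd_executable(scenario, enabled_switches):
    """A DRD action runs only when every switch required by the scenario is on."""
    return REQUIRED_SWITCHES[scenario] <= set(enabled_switches)
```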
A.c. Call Admission Control
Call Admission Control (CAC) is used to determine whether the system resources are enough to accept a
new user's access request. If the system resources are enough, the new user's access request is
accepted; otherwise, the user will be rejected.
Call Admission Control (CAC) algorithm consists of CAC based on power resource, CAC based on code
resource, CAC based on credit resource, CAC based on Iub resource and CAC based on HSPA user
number.
A CAC procedure contains RRC signaling admission control and RAB admission control.
o For an RRC connection request for the reason of emergency call, detach, or registration, direct admission is used.
o For an RRC connection request for other reasons, the admission decision is made as follows:
When the OLC switch is on, the RRC connection request is rejected if the cell is in the overload congestion state. If the cell is not in the overload state, the UL/DL OLC Trigger threshold is used for power admission.
When the OLC switch is off, the UL/DL OLC Trigger threshold is used for power admission.
Algorithm 1 of Power Admission
Power admission decision based on algorithm 1 consists of the uplink power admission decision and the downlink power admission decision procedures.
Uplink Power Admission Decision Procedure Based on Algorithm 1
The RNC uses the formula ηUL = 1 - PN/RTWP to calculate the current uplink load factor ηUL, where PN is the received uplink background noise and RTWP is the received total wideband power.
The RNC calculates the uplink load increment ΔηUL based on the service request.
The RNC uses the following formula to forecast the uplink load factor:
ηUL,predicted = ηUL + ΔηUL + ηULcch + ηhs-dpcch
In the formula, ηULcch is the value of UL common channel load factor, which defines the factor of UL common channel resources reserved. ηhs-dpcch is the value of UL HS-DPCCH reserve factor, which defines the factor of UL HS-DPCCH resources reserved.
By comparing the forecasted uplink load factor UL,predicted with the corresponding threshold
(UL threshold of Conv AMR service, UL threshold of Conv non_AMR service, UL
threshold of other services, or UL Handover access threshold), the RNC decides
whether to accept the access request or not.
NOTE:
The procedure of uplink power admission decision in HSUPA cells is similar to that in R99 cells.
The uplink load increment ΔηUL is determined by the following factors:
The Eb/N0 of the incoming new call (the larger the Eb/N0, the larger the uplink load increment)
The UL neighbor interference factor (the larger the factor, the larger the uplink load increment)
Configuration Rule and Restrictions:
To ensure success of handover and performance of conversational services and to differentiate
services of four classes, the thresholds should fulfill the following condition:
UL Handover access threshold > max(UL threshold of Conv AMR service, UL threshold of Conv
non_AMR service) > UL threshold of other services
The parameters UL Handover access threshold, UL threshold of Conv AMR service, UL threshold of
Conv non_AMR service, and UL threshold of other services should be considered together with the
planning result of network optimization. The reasons are as follows:
If the parameters are set too large, the network optimization may be affected. The system load
after admission may become too heavy, and the heavy load can affect the system stability and result
in system congestion.
If the parameters are set too small, the target capacity may not be reached. There is a higher
probability that users are rejected while some resources remain idle and wasted.
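The uplink decision described above can be sketched as a short function. This is an illustrative Python sketch under the stated formulas; the argument names are assumptions, and all loads are expressed as fractions of 1.

```python
# Sketch of the algorithm-1 uplink power admission decision described above.
def ul_admission(rtwp, p_noise, delta_load, cch_load, hs_dpcch_load, threshold):
    """Predict the uplink load factor and compare it with the
    service-specific threshold (e.g. UL threshold of Conv AMR service)."""
    eta_ul = 1.0 - p_noise / rtwp  # current uplink load factor
    predicted = eta_ul + delta_load + cch_load + hs_dpcch_load
    return predicted <= threshold
```

For example, with RTWP at twice the background noise the current load factor is 0.5; adding a 0.05 service increment and a 0.05 common-channel reserve gives a predicted load of about 0.6, which passes a 0.65 threshold but fails a 0.55 one.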
Downlink Power Admission Decision Procedure Based on Algorithm 1
Downlink Power Admission Decision for R99 Cells
The RNC forecasts the downlink power after admission and compares it with the cell power threshold. The decision uses the following formulas:
Formula 1: Pnon-HSPA + Pcch + ΔPDL ≤ Pmax x Thd(total)
Formula 2: Ptotal + ΔPDL ≤ Pmax x Thd(total)
Formula 3: Pnon-HSPA + Pcch + min(GBP + PHSUPA(res), Pmax(HSPA)) + ΔPDL ≤ Pmax x Thd(total)
Formula 4: PBR(strm) + GBR(strm) ≤ Thd(hsdpa,str)
Formula 5: GBP + P(GBR) ≤ Pmax(HSPA)
where
o Pnon-HSPA is the total downlink power of the non-HSPA channels.
o Ptotal is the current total downlink transmit power of the cell.
o Pcch is the power of the downlink common channels.
o ΔPDL is the estimated downlink power increment of the requested service.
o GBP is the Power Requirement for GBR of the admitted HSPA services.
o PHSUPA(res) is the power reserved for HSUPA downlink control channels (E-AGCH/E-RGCH/E-HICH).
o Pmax(HSPA) is the maximum available power for HSPA. Its value is associated with the HSDPA power allocation mode. For details, refer to HSDPA Power Resource Allocation.
o Pmax is the maximum downlink transmit power of the cell.
o Thd(total) is the threshold of cell DL total power, which is defined by the DL total power threshold parameter.
o PBR(strm) is the provided bit rate of all existing streaming services.
o GBR(strm) is the GBR of the requested streaming service.
o Thd(hsdpa,str) is the admission threshold for the streaming PBR decision. It is defined by the Hsdpa streaming PBR threshold parameter.
o P(GBR) is the power required by the GBR of the requested service.
The RNC should admit the HSDPA streaming RAB in any of the following situations:
Formula 1 is fulfilled.
Formulas 3 and 4 are fulfilled.
Formulas 3 and 5 are fulfilled.
The RNC should admit the HSDPA BE RAB in any of the following situations:
Formula 2 is fulfilled.
Formulas 3 and 4 are fulfilled.
Formulas 3 and 5 are fulfilled.
NOTE:
If PS conversational services are carried on HSPA, the services can be treated as streaming
services during admission control.
If the GBP measurement is deactivated, the decision formulas that involve GBP are regarded as
fulfilled.
Downlink Radio Admission Decision for HSUPA Control Channels
The power of downlink control channels (E-AGCH/E-RGCH/E-HICH) is reserved by Dl HSUPA reserved
factor. Therefore, the power admission for these channels is not needed.
Algorithm 2 of Power Admission
When uplink CAC algorithm or downlink CAC algorithm uses algorithm 2, the admission of
uplink/downlink power resources uses the algorithm based on the equivalent number of users.
The following table lists the Equivalent Number of Users (ENU) of typical services:

Service                Uplink for DCH   Downlink for DCH   HSDPA   HSUPA
—                      0.44             0.42               —       —
—                      1.11             1.11               1.44    1.42
—                      1.35             1.04               0.78    0.84
3.4 + 16 kbit/s (PS)   1.62             1.25               1.11    0.85
3.4 + 32 kbit/s (PS)   2.15             2.19               1.70    0.96
3.4 + 64 kbit/s (PS)   3.45             3.25               2.79    1.20
—                      5.78             5.93               4.92    1.67
—                      6.41             6.61               5.46    1.91
—                      10.18            10.49              9.36    2.83
—                      14.27            15.52              14.17   3.91

NOTE:
In the above table, for a 3.4 + n kbit/s service of HSDPA or HSUPA:
The 3.4 kbit/s is the rate of the signaling carried on the DCH.
The n kbit/s is the GBR of the service.
The admission thresholds for the ENU decision are applied per resource: the UL DCH/HSUPA and DL DCH decisions use the service-specific UL/DL admission thresholds, and the HSDPA decision uses the DL total power threshold parameter.
For example, the admission of a new AMR service in the uplink based on algorithm 2 is successful if the following formula is fulfilled:
(ENUtotal + ENUnew)/ENUmax ≤ UL threshold of Conv AMR service
NOTE:
If the cell is in overload congestion state in the uplink, the RNC should reject any new RAB.
For MBMS services, it is assumed that their ENU is always zero.
The ENU of MBMS downlink control channels (MICH and MCCH) is reserved by Dl MBMS reserved
factor. Therefore, the power admission for these channels is not needed.
The ENU of HSUPA downlink control channels (E-AGCH/E-RGCH/E-HICH) is reserved by Dl HSUPA
reserved factor. Therefore, the power admission for these channels is not needed.
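The equivalent-number-of-users decision above is a one-line comparison. This is an illustrative Python sketch; the function name and arguments are assumptions for the example.

```python
# Sketch of the algorithm-2 (ENU-based) admission decision described above.
def enu_admit(enu_total, enu_new, enu_max, threshold):
    """Admit when the predicted equivalent-user load stays at or below the
    service-specific threshold, expressed as a fraction of ENUmax."""
    return (enu_total + enu_new) / enu_max <= threshold
```

For instance, using the 3.4 + 16 kbit/s UL DCH figure of 1.62 from the table, a cell at 50 of 80 equivalent users passes a 0.7 threshold, while a cell at 55 adding a 3.45-ENU service does not.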
Algorithm 3 of Power Admission
Algorithm 3 of power resource admission decision is based on power or interference. It is similar to algorithm 1, except that the estimated load increment is always set to 0. Based on the current cell load (uplink load factor and downlink TCP) and the access request, the RNC decides whether the cell load will exceed the threshold. If yes, the RNC rejects the request; if no, the RNC accepts the request.
Table 1 CEs and credits consumed by typical R99 services

Traffic Class    Direction   Spreading Factor   Number of CEs Consumed   Corresponding Credits Consumed
—                DL          256                —                        —
—                UL          256                —                        —
—                DL          128                —                        —
—                UL          64                 —                        —
—                DL          128                —                        —
—                UL          64                 —                        —
64 kbit/s VP     DL          32                 —                        —
64 kbit/s VP     UL          16                 —                        —
32 kbit/s PS     DL          64                 —                        —
32 kbit/s PS     UL          32                 1.5                      3
64 kbit/s PS     DL          32                 —                        —
64 kbit/s PS     UL          16                 —                        —
128 kbit/s PS    DL          16                 —                        —
128 kbit/s PS    UL          —                  —                        —
384 kbit/s PS    DL          —                  —                        —
384 kbit/s PS    UL          —                  10                       20
Table 2 CEs and credits consumed by HSUPA services

Traffic Class   Direction   Spreading Factor   Number of CEs Consumed   Corresponding Credits Consumed
16 kbit/s       UL          64                 2 + 1                    4 + 2
32 kbit/s       UL          32                 2.5 + 1                  5 + 2
64 kbit/s       UL          16                 4 + 1                    8 + 2
128 kbit/s      UL          —                  6 + 1                    12 + 2
384 kbit/s      UL          —                  11 + 1                   22 + 2
1 Mbit/s        UL          2x4                21 + 1                   42 + 2
2.96 Mbit/s     UL          —                  —                        —
5.76 Mbit/s     UL          —                  —                        —
NOTE:
As shown in Table 1 and Table 2, for each data rate and service, the number of UL credits is equal to the number of UL CEs multiplied by 2, because the RESOURCE STATUS INDICATION message over the Iub interface supports only integers. For example, a UL 32 kbit/s PS service consumes 1.5 CEs; the number of corresponding UL credits consumed is therefore 3, an integer, which can be carried in the RESOURCE STATUS INDICATION message.
The number of CEs consumed by the E-DPCCH always equals one.
The number of CEs consumed by the E-DPDCH is associated with the Spreading Factor (SF). The bit rates of services in Table 2 are typical bit rates associated with the specific SFs. The bit rates of services using the same SF differ for other reasons.
3GPP TS 25.433 defines no capacity consumption law for the HS-DSCH, so certain credits are reserved for HSDPA RABs and credit admission for HSDPA is not needed.
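The CE-to-credit conversion in the note above can be shown in two lines. This is an illustrative Python sketch; the function name is an assumption.

```python
# UL credits = UL CEs x 2, so fractional CE counts become integers that
# fit in the RESOURCE STATUS INDICATION message (e.g. 1.5 CEs -> 3 credits).
def ul_credits(ul_ces):
    return int(ul_ces * 2)
```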
Procedure for NodeB Credit Resource Decision
When a new service tries to access the network, the credit resource admission is implemented as
follows:
For an RRC connection setup request, the credit resource admission is successful if the
current remaining credit resource is enough for the RRC connection.
For a handover service, the credit resource admission is successful if the current
remaining credit resource is enough for the service.
For other services, the RNC should ensure that the remaining credit of the local cell, local
cell group (if any), and NodeB does not exceed the configurable OM thresholds (Ul
HandOver Credit Reserved SF/Dl HandOver Credit and Code Reserved SF) after
admission of the new services.
NOTE:
The CE capabilities at the levels of local cell, local cell group, and NodeB are reported to
the RNC through the NBAP_AUDIT_RSP message over the Iub interface.
o The CE capability of local cell level indicates the maximum capability in terms of
hardware that can be used in the local cell.
o The CE capability of local cell group level indicates the capability obtained after
both license and hardware are taken into consideration.
o The CE capability of NodeB level indicates the number of CEs allowed to use as
specified in the license.
Before admission control on the credit resource in a cell, ensure that the credit admission
decisions at the cell group and NodeB levels are passed.
If the UL Capacity Credit and DL Capacity Credit are separate, the credit resource
admission is implemented in the UL and DL respectively.
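The three admission cases above can be sketched as follows. This is an illustrative Python sketch; in particular, modelling the O&M thresholds as a simple reserved-credit amount is an assumption made for the example, not the exact RNC formula.

```python
# Sketch of the NodeB credit decision described above. RRC setup and
# handover only need enough remaining credit; other services must also
# leave the configured reserve (assumption: 'reserved' models the
# Ul/Dl HandOver Credit Reserved thresholds as a credit amount).
def credit_admission(remaining, needed, reserved, rrc_or_handover):
    if rrc_or_handover:
        return remaining >= needed
    return remaining - needed >= reserved
```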
a The new user tries to be admitted to available bandwidth 1 of the primary path, as shown in 1 of Figure 37 Admission procedure for a new user.
b If the admission on the primary path is successful, the user is carried on the primary path.
c If the admission on the primary path fails, the user tries to be admitted to available bandwidth 2 of the secondary path, as shown in 2 of the same figure.
d If the admission on the secondary path is successful, the user is carried on the secondary path. If not, the bandwidth admission request of the user is rejected.
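Steps a-d above amount to a primary-then-secondary fallback. This is an illustrative Python sketch; the function name and units are assumptions.

```python
# Sketch of Iub bandwidth admission over a primary and a secondary path.
def iub_admit(primary_avail, secondary_avail, required):
    """Try the primary path first, then the secondary; reject if neither
    has enough available bandwidth (all values in the same unit)."""
    if required <= primary_avail:
        return "primary"
    if required <= secondary_avail:
        return "secondary"
    return "rejected"
```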
Generally, congestion thresholds need to be set only for a port or resource group. If different types of AAL2 paths or IP paths require different congestion thresholds, however, you can set the parameters on the paths as required.
Congestion Handling
When congestion is detected on the Iub interface, a congestion alarm is reported. The RNC triggers the load reshuffling process after receiving the congestion alarm if the Iub congestion control switch (IubCongCtrlSwitch) is ON.
A.d. Load Reshuffling
Power resource
ULLDR and DLLDR under the Cell LDC algorithm switch parameter control the functionality
of the power congestion control algorithm.
If the current UL/DL load of the R99 cell is not lower than basic congestion control threshold
in UL/DL (UL/DL LDR Trigger threshold) for some hysteresis (defined by DL State Trans Hysteresis
threshold in DL; not configurable in UL), the cell works in basic congestion state, and the related load
reshuffling actions are taken.
If the current UL/DL load of the R99 cell is lower than UL/DL LDR Release threshold for
some hysteresis (defined by DL State Trans Hysteresis threshold in DL; not configurable in UL), the
cell comes back to normal state.
For an HSDPA cell,
In the uplink, the decision criterion is the same as that for the R99 cell.
In the downlink, the object to be compared with the associated threshold for decision is the
sum of the non-HSDPA power (TCP of all codes not used for HS-PDSCH or HS-SCCH transmission) and the
Power Requirement for GBR (GBP).
Code resource
CELL_CODE_LDR under the Cell LDC algorithm switch parameter controls the functionality of the code congestion control algorithm.
If the SF corresponding to the current remaining code of the cell is larger than Cell LDR SF
reserved threshold, code congestion is triggered and the related load reshuffling actions as
listed in Table 1 are taken.
The thresholds related to the local cell are Ul LDR Credit SF reserved threshold and Dl
LDR Credit SF reserved threshold, which are set through the ADD CELLLDR command. When credit
congestion in the local cell is triggered, the related LDR actions are taken in this cell.
The thresholds related to the cell group and NodeB are Ul LDR Credit SF reserved
threshold and Dl LDR Credit SF reserved threshold, which are set through the ADD NODEBLDR
command. When credit congestion at cell group or NodeB level is triggered, all the cells under the cell
group or NodeB will be treated as in congestion state, and the related LDR actions will be taken
independently in each cell.
If the congestion of all resources is triggered in a cell, the congestion will be resolved in the order of
resource priority for load reshuffling as configured through the SET LDCALGOPARA command.
For example, the resource priority for load reshuffling can be set in the following order: Iub resource, credit resource, code resource, power resource. In this case, Iub congestion is resolved first and power congestion last.
The following table associates each congested resource (power, Iub, code, and credit) and direction (UL/DL) with the affected channel types (DCH, HSUPA DCH, HSDPA, and FACH (MBMS)) and the applicable LDR actions: inter-frequency load handover, BE rate reduction, inter-RAT handover in the CS domain, inter-RAT handover in the PS domain, AMR rate reduction, Iu QoS renegotiation, code reshuffling, and MBMS power reduction.
NOTE:
If the downlink power admission uses the equivalent user number algorithm, basic congestion may also be triggered by the equivalent number of users. In this situation, LDR actions do not involve AMR rate reduction or MBMS power reduction (indicated by the symbol "*" in the table above).
For HSUPA services, the CE consumption, which is calculated on the basis of the Maximum Bit Rate
(MBR), can be reduced through rate downsizing. Therefore, the BE service rate downsizing for HSUPA is
applicable only to the relief of CE resource congestion.
LDR actions include inter-frequency load handover, BE rate reduction, uncontrolled real-time QoS
renegotiation, inter-RAT handover in the CS domain, inter-RAT handover in the PS domain, AMR rate
reduction, code reshuffling, and MBMS power reduction.
A.d.c.a. Inter-Frequency Load Handover
The LDR algorithm is implemented as follows:
1 The LDR checks whether the existing cell has a target cell of inter-frequency blind handover. If there is no such target cell, the action fails and the LDR takes the next action.
2 Based on the blind handover priority, the LDR checks whether the load difference between the current load and the basic congestion triggering threshold of each target cell for blind handover is larger than UL/DL Inter-freq cell load handover load space threshold (both the uplink and downlink conditions must be fulfilled). The other resources (code resource, Iub bandwidth, and NodeB credit resource) in the target cell must not trigger the basic congestion. If the basic congestion triggering threshold is not set, the admission threshold of the cell is used.
If the difference is not larger than the threshold, the action fails and the LDR takes the next action.
NOTE:
The load difference refers to the difference between the current load and the basic congestion triggering threshold of each target cell, not to the difference between the load of the target cell and the load of the existing cell.
3 If the LDR finds a target cell that meets the specified blind handover conditions, the LDR selects one UE to perform an inter-frequency blind handover to the cell, depending on the UE's occupied bandwidth. For a selected UE other than a gold user, its UL/DL current bandwidth for DCH or GBR bandwidth for HSPA should be less than, and have the least difference from, the UL/DL Inter-freq cell load handover maximum bandwidth parameter (both the uplink and downlink conditions must be fulfilled).
If there is more than one such UE, the first one is taken.
If the LDR cannot find such a UE, the action fails and the LDR takes the next action.
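The UE-selection criterion above (non-gold, bandwidth below the cap, least difference from it, first match on ties) can be sketched as follows. This is an illustrative Python sketch; the function name and UE tuple layout are assumptions.

```python
# Sketch of the LDR blind-handover UE selection described above.
# Each UE is (name, is_gold, bandwidth).
def pick_ue_for_blind_ho(ues, max_bandwidth):
    """Pick the non-gold UE whose bandwidth is below the 'Inter-freq cell
    load handover maximum bandwidth' with the least difference from it;
    on ties, the first UE encountered wins."""
    best = None
    for name, is_gold, bw in ues:
        if is_gold or bw >= max_bandwidth:
            continue
        if best is None or bw > best[1]:  # least difference = largest bw below the cap
            best = (name, bw)
    return best[0] if best else None
```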
A.d.c.b. BE Rate Reduction
Different from the TF restriction to the OLC algorithm, the BE rate reduction is implemented by
reconfiguring the bandwidth. The bandwidth reconfiguration requires signaling interaction on the Uu
interface. This procedure is relatively long.
In the same environment, different rates have different downlink transmit powers. The higher the rate,
the greater the downlink transmit power. Therefore, the load can be reduced by reconfiguring the
bandwidth.
For HSUPA services, the consumption of CEs is based on the bit rate. The higher the rate, the more the
consumption of CEs. Therefore, the consumption of CEs can be reduced by reconfiguring the bandwidth.
The LDR algorithm is implemented as follows:
1 Based on the integrate priority, the LDR sorts the RABs in descending order. The top RABs related to the BE services whose current rate is higher than the GBR configured by SET USERGBR are selected. The number of RABs to select is determined by UL/DL LDR-BE rate reduction RAB number.
2 The bandwidth of the selected services is reduced to the specified rate. For more details about the rate reduction procedure, refer to the related description in BE Rate Downsizing and Recovery Based on Basic Congestion.
3 If services can be selected, the action is successful. If services cannot be selected, the action fails and the LDR takes the next action.
4 The reconfiguration is completed through the RB RECONFIGURATION message on the Uu interface and the RL RECONFIGURATION message on the Iub interface.
5 The BE rate reduction algorithm is controlled by the DCCC algorithm switch. BE rate reduction can be performed only when the DCCC algorithm is enabled.
NOTE:
In RAN6.1, BE rate reduction is applied to the selected RABs, but not to UEs.
When admission control of Power/NodeB Credit is disabled, it is not recommended that the
BE Rate Reduction be configured as an LDR action in order to avoid ping-pong effect.
The LDR algorithm for uncontrolled real-time QoS renegotiation is implemented as follows:
1 Based on the integrate priority, the LDR sorts the real-time services in the PS domain in descending order. The top services are selected for QoS renegotiation. The number of RABs to select is determined by UL/DL LDR un-ctrl RT Qos re-nego RAB num.
2 The LDR performs QoS renegotiation for the selected services. The GBR during the service setup is the maximum rate of the service after the QoS renegotiation.
3 The RNC sends the RAB MODIFICATION REQUEST message to the CN for the QoS renegotiation.
4 If the RNC cannot find a proper service for the QoS renegotiation, the action fails and the LDR takes the next action.
The LDR algorithm for inter-RAT handover in the CS domain is implemented as follows:
1 Based on the integrate priority, the LDR sorts the UEs whose service handover attribute is set to "handover to GSM shall be performed" in the CS domain in descending order. The top CS services are selected, and the number of UEs is controlled by the UL/DL CS should be ho user number parameter.
2 For the selected UEs, the LDR module sends the load handover command to the inter-RAT handover module to ask the UEs to be handed over to the 2G system.
3 The handover module decides whether to trigger the inter-RAT handover, depending on the capability of the UE to support the compressed mode.
4 This action succeeds if any UE that satisfies the handover criteria is found. Otherwise, this action fails.
LDR Algorithm for AMR Rate Control in the Downlink
The LDR algorithm is implemented in the downlink as follows:
1 Based on the integrate priority, the LDR sorts the RABs in descending order. RABs with AMR services (conversational) and with the bit rate higher than the GBR are selected. The number of RABs to select is determined by the DL LDR-AMR rate reduction RAB number parameter.
2 The RNC sends the Rate Control request message through the IuUP to the CN to adjust the AMR rate to the GBR.
3 If the RNC cannot find a proper RAB for the AMR rate reduction, the action fails and the LDR takes the next action.
LDR Algorithm for AMR Rate Control in the Uplink
The LDR algorithm is implemented in the uplink as follows:
1
Based on the integrate priority, the LDR sorts the RABs in descending order. The top RABs
accessing the AMR services (conversational) and with the bit rate higher than the GBR are
selected. The number of RABs to select is determined by the UL LDR-AMR rate reduction RAB
number parameter.
2 The RNC sends the TFC CONTROL command to the UE to adjust the AMR rate to the GBR.
3 If the RNC cannot find a proper RAB for the AMR rate reduction, the action fails and the LDR takes the next action.
A.d.c.g. Code Reshuffling
When the cell is in basic congestion for shortage of code resources, sufficient code resources can be
reserved for subsequent service access through code reshuffling. Code subtree adjustment refers to the
switching of users from one code subtree to another. It is used for code tree defragmentation, so as to
free smaller codes first.
The algorithm is implemented as follows:
1 Initialize the SF_Cur of the root node of subtrees to 4.
2 Traverse all the subtrees with this SF_Cur at the root node. Leaving the subtrees occupied by common channels and HSDPA channels out of account, take the subtrees in which the number of users is not larger than the value of the Max user number of code adjust parameter as candidates for code reshuffling.
a If such candidates are available, go to 4.
b If no such candidate is available, go to 3.
3 If the SF_Cur is smaller than the value of the Cell LDR SF reserved threshold parameter, multiply the SF_Cur by 2, and then go to 2.
Otherwise, subtree selection fails, which leads to code reshuffling failure. This procedure ends.
4 Select a subtree from the candidates according to the setting of the LDR code priority indicator parameter.
a If this parameter is set to TRUE, select the subtree with the largest code number from the candidates.
b If this parameter is set to FALSE, select the subtree with the smallest number of users from the candidates. In the case that multiple subtrees have the same number of users, select the subtree with the largest code number.
5 Treat each user in the subtree as a new user and allocate code resources to each user.
6 Initiate the reconfiguration procedure for each user in the subtree and reconfigure the channel codes of the users to the newly allocated code resources.
The reconfiguration procedure on the air interface is implemented through the PHYSICAL CHANNEL
RECONFIGURATION message and that on the Iub interface through the RL RECONFIGURATION
message.
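The subtree-selection loop above can be sketched as follows. This is an illustrative Python sketch; the tuple layout for subtrees is an assumption, and subtrees occupied by common or HSDPA channels are assumed to be filtered out before the call.

```python
# Sketch of code-reshuffling candidate selection (steps 1-4 above).
# Each subtree is (root_sf, code_number, user_count).
def select_subtree(subtrees, sf_reserved_thd, max_users, prefer_code_number):
    """Walk root SFs 4, 8, ... up to the Cell LDR SF reserved threshold
    and pick a candidate subtree for reshuffling, or None on failure."""
    sf = 4
    while sf <= sf_reserved_thd:
        candidates = [t for t in subtrees
                      if t[0] == sf and t[2] <= max_users]
        if candidates:
            if prefer_code_number:  # LDR code priority indicator = TRUE
                return max(candidates, key=lambda t: t[1])
            # FALSE: fewest users, largest code number as tie-breaker
            return min(candidates, key=lambda t: (t[2], -t[1]))
        sf *= 2
    return None  # no candidate below the threshold: reshuffling fails
```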
When LDR actions are triggered in both the uplink and the downlink, the RNC combines them as follows:
If the actions in the two directions are identical, the actions are combined. For example, if BE
rate reduction actions in both uplink and downlink need to be applied to the same UE, then a
single RB reconfiguration message can carry the indication to take BE rate reduction actions in
both directions.
If the actions in the two directions are different and if one direction requires inter-frequency
handover, the UE undergoes the inter-frequency handover. The other action is not taken.
If the actions in the two directions are different and if one direction requires the inter-RAT
handover, the UE undergoes the inter-RAT handover. The other action is not taken.
A.e. Overload Control
After the UE access is allowed, the power consumed by a single link is adjusted by the single link power
control algorithm. The power varies with the mobility of the UE and the changes in the environment and
the source rate. In some situations, the total power load of the cell may be higher than the target load.
To ensure the system stability, Overload Control (OLC) must be performed.
ULOLC and DLOLC under the Cell LDC algorithm switch parameter control the functionality of the
overload congestion control algorithm.
The general OLC procedure covers the following actions: TF control of BE services, channel switching of
BE services, and release of RABs.
When the cell is overloaded, the RNC takes one of the following actions in each period (defined by the
OLC period timer length parameter) until the congestion is resolved:
Restricting the TF of the BE service (only for DCH BE service)
Switching BE services to common channel
Choosing and releasing RABs (for HSPA or DCH service)
If an action fails, or if it is completed but the cell is still congested, the next action is performed.
The first time the rate recovery timer expires, the TFC sub-set that the MAC can use is {TFC(0), TFC(1), ..., TFC(i), TFC(i+1)}.
The second time the rate recovery timer expires, the TFC sub-set that the MAC can use is {TFC(0), TFC(1), ..., TFC(i), TFC(i+1), TFC(i+2)}.
The (Ni+1)th time the rate recovery timer expires, the TFCS that the MAC can use is the TFCS applied before the TF control, and the rate recovery timer is not restarted any more.
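The stepwise recovery above (one more TFC released per timer expiry until the full TFCS is restored) can be sketched as follows. This is an illustrative Python sketch; the function name and list representation of the TFCS are assumptions.

```python
# Sketch of TFC sub-set recovery after TF control: MAC starts restricted
# to TFC(0)..TFC(i) and regains one TFC per rate-recovery-timer expiry.
def recovered_tfcs(full_tfcs, restricted_upto, timer_expiries):
    upto = min(restricted_upto + timer_expiries, len(full_tfcs) - 1)
    return full_tfcs[:upto + 1]
```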
1 Based on the integrate priority, the OLC sorts the DCH BE services in descending order. The BE services with the rate higher than Uplink bit rate threshold for DCCC (refer to Rate Re-allocation Based on Traffic Volume) and with the lowest integrate priority (that is, with the largest integrate priority value) are selected. The number of RABs to select is defined by the UL OLC fast TF restrict RAB number parameter.
2 The RNC sends the TRANSPORT FORMAT COMBINATION CONTROL message to the UE that accesses the specified service. The TRANSPORT FORMAT COMBINATION CONTROL message contains the following IEs:
a
Transport Format Combination Set Identity: defines the available TFC that the UE can
select, that is, the restricted TFC sub-set. It is always the two TFCs corresponding to
the lowest data rate.
b TFC Control duration: defines the period in multiples of 10 ms frames for which the
restricted TFC sub-set is to be applied. It is set to a random value from the range of
10 ms to 5120 ms, so as to avoid data rate upsizing at the same time.
After the TFC control duration expires, the UE can apply any TFC of the TFCS used before the TF control.
3 Each time, the RNC selects a certain number of RABs (defined by UL OLC fast TF restrict RAB number) to perform the TF control, and each UE of the selected RABs receives the TRANSPORT FORMAT COMBINATION CONTROL message. The number of times that TF control is performed is defined by the UL OLC fast TF restrict times parameter.
4 If the RNC cannot find a proper service, the OLC performs the next action.
A.e.c.b. Switching BE Services to Common Channel
The OLC algorithm for switching BE services to common channel is implemented as follows:
1 Based on the integrate priority, the OLC sorts all UEs that have only PS services, including HSPA and DCH services (except UEs that also have a streaming bearer), in descending order.
2 The top N UEs are selected. The number of selected UEs is equal to Transfer Common Channel user number. If no UEs can be selected, the action fails and the OLC performs the next action.
3 The selected UEs are switched to the common channel.
The OLC algorithm for releasing RABs is implemented in the uplink as follows:
1 Based on the integrate priority, the OLC sorts all RABs, including HSUPA and DCH services, in descending order.
2 The top RABs are selected. If the integrate priorities of some RABs are identical, the RAB with the higher rate (current rate for a DCH RAB and GBR for an HSUPA RAB) in the uplink is selected. The number of selected RABs is equal to UL OLC traff release RAB number.
3 The selected RABs are released directly.
The OLC algorithm for releasing RABs is implemented in the downlink as follows:
1 Based on the integrate priority, the OLC sorts all non-MBMS RABs in descending order.
2 The top priority RABs are selected. If the integrate priorities of some RABs are identical, the RAB with the higher rate (current rate for a DCH RAB and GBR for an HSPA RAB) in the downlink is selected. The number of selected RABs is equal to DL OLC traff release RAB number.
3 The selected RABs are directly released.
4 If all non-MBMS RABs are released but congestion persists in the downlink, MBMS RABs are selected.
5 Based on the ARP, the OLC sorts all MBMS RABs in descending order.
6 The top priority RABs are selected. The number of selected RABs is equal to MBMS services number released.
7 The selected RABs are directly released.
8 If all MBMS RABs are released but congestion persists in the downlink, non-MBMS RABs are selected.