
How do we improve TBF Drop Rate? First of all, ensure that GSM QoS is good in the cell.

Basically, ensure that the Radio Part is reliable (no interference, neighbors defined, rxlev > -85 dBm at the cell border, etc.). If the cell has good GSM QoS but poor GPRS QoS, check whether the cell is preempting some of the PDCHs for voice calls (leading to TBF drops), or whether your MFS/SGSN/GGSN are working correctly. Check for congestion on the Ater and Gb interfaces. Furthermore, put all GPRS radio parameters back to their default values (for example: no UL power control).

The problem is with the UL TBF Success Rate. High failure rates due to Radio are reported. There is a parameter in Alcatel to prioritize the BCCH TRX (PS_PREF_BCCH_TRX). When ENABLED, the UL TBF Success Rate improved, but a significant degradation was reported for GSM (for example: the CDR increased).

What type of TRXs are used in this cell (TRAG, TRAGE, )? Is hopping enabled? There can be two reasons for this behavior: a faulty TRX or a bad frequency. In RNO, can you check which TRXs are carrying most of the TCH Erlangs? In B10, you can check the number of PDCH allocations per TRX, so this could help you find out which TRX has the problem. Are you using SFH, BBH or NH? If SFH, then it looks like your BCCH is clean but the MA list used on the other TRXs is a very dirty one. A popular saying: if only my wife was as dirty as your TRX.

The problem is that the Alcatel algorithm wants you to have Pref_Mark = 0 (i.e. least prioritized) for at least one TRX if GPRS/EGPRS is enabled, and this is the TRX on which PDCHs will be allocated, be it fixed or dynamic. You probably shifted PDCHs to the BCCH TRX by changing its pref_mark to 0 or by setting PS_PREF_BCCH_TRX = ENABLED. So your GSM calls are now automatically disfavored on this TRX. What you can do is define your BCCH on the highest TRX number available (e.g. TRX4 if you have 4 TRXs), keep all pref_marks = 0, and then, since Alcatel uses back-filling, your BCCH will be preferred for both CS and PS. This is one workaround you can use in B9.

Steps to improve TBF Drop Rates:
(1) Check the frequencies in BBH and the retransmission rate in those cells. If retransmission is high, this is for sure a frequency issue. (2) Try to use most of the BCCH TRX TSs for PDCH. (3) Make sure (E)GPRS Link Adaptation is enabled. (4) If the frequency change does not give good results, limit the (E)GPRS coding scheme to MCS7 instead of MCS9.

What is Link Adaptation? Link Adaptation is switching among the Modulation and Coding Schemes MCS1 to MCS9 based on BLER (Block Error Rate). BLER is affected by C/I.

Throughput per user? If the number of users per PDCH is high, then the throughput per user will be low. You should consider adding more PDCHs if this ratio is high. The maximum number of users per PDCH is 7 in UL and 16 in DL.

What is a TBF Drop? Does a TBF Drop mean a temporary drop in the packet data connection, which is then resumed? Does the subscriber face a drop in the connection? Is it related to TCH Drop? A TBF drop means the user loses throughput for a while (a few seconds), but that is usually enough to stop an FTP transfer or prevent a webpage from being displayed fully. So yes, in the end, a TBF Drop impacts the subscriber experience. It is mostly due to radio problems (interference, coverage, mobility), but there can be other reasons: PCU SW failure, TCH traffic preempting TBF radio TSs, Abis/Ater failures, etc. A failure of the Gb interface does not impact the TBF drop (but the Gb should never fail anyway).

If it is a DL drop, the PCU will establish a new TBF for that MS in order to retransmit the data. But as I said earlier, it is probably too late for the MS: the webpage or the download has already stopped. If it is an UL drop, the MS is not obliged to establish the UL TBF again; it is up to the subscriber to establish it again.

What does SPDCH mean, referring to the EDGE TS allocation on Abis? What is the difference between SPDCH, FPDCH, BPDCH? SPDCH is a radio TS, allocated on the Air interface. On the Abis it is called a nibble or a GCH (Alcatel). FPDCH (Fixed PDCH): it is fixed for EDGE TSs and its Abis resource is also fixed; it can't be used by any other cell. SPDCH (Semi-Dedicated PDCH): it is also for EDGE, but its Abis resource is not dedicated; the Abis will be used only when there is a demand. In this way you can save Abis resources. On the radio (TRX/TS) level, an SPDCH is the same as an FPDCH. The only difference is in the PCU: the GSL devices for FPDCHs are also fixed, while for SPDCHs they are reallocated on demand. From the PCU perspective, it is more advantageous to use SPDCH instead of FPDCH, because it will use the GSL devices more optimally. If the voice traffic is very high, then you MIGHT INITIALLY have high PDCH allocation failures when you activate SPDCH (the same goes for FPDCH) because the BSC will not get any idle TCHs to convert to PDCHs. After some time (when the SPDCHs have been successfully allocated), the PDCH allocation failures will purely depend on the level of the CS traffic and the setting of parameters such as TBFULLIMIT or TBFDLLIMIT.

TBFULLIMIT and TBFDLLIMIT in our network are set to 2,0 and we have defined 2 FPDCH and 2 SPDCH. So with this parameter, are only 2 TBFs allowed per PDCH? When the number of users increases, what is the maximum allocation per PDCH? Is it 32 TBFs per PSET or per PDCH? TBFULLIMIT and TBFDLLIMIT are Soft Limits.
Your setting of 2,0 means that you PREFER to have a maximum of 2 TBFs per PDCH. However, you can have more users per PDCH when new PDCHs cannot be allocated by the BSC (for example, due to congestion). These parameters do NOT set the maximum number of TBFs per PDCH. The Hard Limits are as follows: the max number of TFIs (and by extension, TBFs) per PSET, for both UL and DL, is 32. The limits per PDCH are 7 for UL and 16 for DL.

I want to know how many PDCH + SPDCH one can allocate per PSET at most. I am using Ericsson, and here for chgr 0 (BCCH) we have 4 (2 SPDCH + 2 FPDCH) and the same for chgr 1. So how many EDGE TSs per PSET at most? And does it mean that we can handle 25 TBFs, and in the case of 5 EDGE TSs in one PSET, does it mean 5*25 = 125 TBFs (UL+DL)?

Per PSET, the maximum number of PDCHs is 8, whether the PDCHs are FPDCHs or SPDCHs, on-demand PDCHs or a mixture. The maximum number of EDGE TSs per PSET is also 8, since these timeslots are also PDCHs. In each cell, the maximum number of FPDCH & SPDCH is 16. To get a clearer picture of the total number of TBFs for 5 EDGE TSs, it is better to treat UL and DL separately. Thus, for 5 TSs, the maximum number of DL TBFs = 5*16 = 80 and the maximum number of UL TBFs = 5*7 = 35.
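The interplay between the soft limits (TBFULLIMIT/TBFDLLIMIT) and the hard limits above can be sketched in Python. The allocation policy below is an illustration, not the vendor's actual algorithm; only the numeric limits come from the thread.

```python
# Sketch of how a soft limit like TBFDLLIMIT interacts with the hard limits
# described above. Function and variable names are illustrative.

HARD_LIMIT_DL_PER_PDCH = 16   # max DL TBFs multiplexed on one PDCH
HARD_LIMIT_UL_PER_PDCH = 7    # max UL TBFs multiplexed on one PDCH
MAX_TFI_PER_PSET = 32         # max TFIs (hence TBFs) per PSET per direction

def place_dl_tbf(pdch_loads, soft_limit=2, can_add_pdch=False):
    """pdch_loads: list of DL TBF counts, one entry per allocated PDCH.
    Prefer a PDCH under the soft limit; else add a PDCH if the BSC can;
    else exceed the soft limit, up to the hard limit."""
    for i, load in enumerate(pdch_loads):
        if load < soft_limit:
            pdch_loads[i] += 1
            return i
    if can_add_pdch:
        pdch_loads.append(1)
        return len(pdch_loads) - 1
    # Congestion: pile onto the least-loaded PDCH (soft limit exceeded).
    i = min(range(len(pdch_loads)), key=lambda k: pdch_loads[k])
    if pdch_loads[i] >= HARD_LIMIT_DL_PER_PDCH:
        return None  # hard limit reached, TBF establishment fails
    pdch_loads[i] += 1
    return i

loads = [2, 2]                    # 2 PDCHs already at the soft limit of 2
print(place_dl_tbf(loads))        # 0: piled onto PDCH 0, soft limit exceeded
print(loads)                      # [3, 2]

# Hard ceilings for 5 EDGE timeslots, as computed above:
print(5 * HARD_LIMIT_DL_PER_PDCH)  # 80 DL TBFs
print(5 * HARD_LIMIT_UL_PER_PDCH)  # 35 UL TBFs
```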

Why is the RLC Throughput higher than the LLC Throughput? For EDGE dimensioning, you should actually start from the downlink RLC throughput required by users on the Air interface, plus the number of EDGE users. That is your initial assumption. For example: as an EDGE user, I would want to get at least 130 kbit/s most of the time, and on average you can assume that there are 1.5 users at the same time in 1 cell, so you need 6/7 PDCHs in MCS9, etc. From this, you will deduce: the number of required TSs and TRXs, the capacity required on Abis, the capacity required in the PCU (MFS in ALU), and the capacity required on the Ater PS and Gb interfaces.
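The dimensioning start point above can be turned into a small calculation. Note that the effective per-PDCH rate used below is an assumption: the nominal MCS9 peak is 59.2 kbit/s per TS, but a field-effective value after retransmissions and multiplexing is far lower, and a rate around 22 kbit/s is what makes a 130 kbit/s target come out at roughly 6 PDCHs.

```python
import math

# Sketch of the EDGE dimensioning reasoning above: start from a per-user
# RLC throughput target and divide by an assumed effective per-PDCH rate.
# 59.2 kbit/s is the nominal MCS9 peak; 22.0 kbit/s is an assumed
# field-effective rate (retransmissions, sharing), not a standard value.

def pdch_needed(target_kbps, effective_kbps_per_pdch):
    return math.ceil(target_kbps / effective_kbps_per_pdch)

print(pdch_needed(130, 59.2))  # 3 PDCHs at the theoretical MCS9 peak
print(pdch_needed(130, 22.0))  # 6 PDCHs at the assumed effective rate
```

From the PDCH count one can then derive the TRX, Abis, Ater and Gb capacity listed above.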

Furthermore, an LLC frame is big (max 1500 bytes); an RLC block is small and contains just one segment of an LLC frame. Each time 1 LLC frame goes through the PCU, it is segmented into smaller RLC blocks. The differences between the two throughputs come from the RLC headers and RLC retransmissions (if they are included).

What is the difference between SNDCP and RLC/LLC? Subnetwork Dependent Convergence Protocol (SNDCP): multiplexer/de-multiplexer for different network layer entities onto the LLC layer; compression of protocol control information (e.g. TCP/IP headers); compression of data content (if used); segmentation/de-segmentation of data to/from the LLC layer.

Logical Link Control (LLC) layer: a reliable logical connection between the SGSN and the MS, independent of the underlying radio interface protocols.
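The LLC-to-RLC segmentation performed by the PCU can be sketched as a simple block count. The payload sizes below are the nominal per-RLC-block figures for a few schemes (e.g. 20 octets per CS-1 block, 2 x 74 octets per MCS-9 block); treat the exact values as illustrative.

```python
import math

# Sketch of the segmentation described above: the PCU cuts one LLC frame
# into RLC data blocks. Payload octets per RLC block (nominal values; each
# block lasts 20 ms, so e.g. 56 octets -> 22.4 kbit/s for MCS-5).
RLC_PAYLOAD_BYTES = {"CS-1": 20, "CS-2": 30, "MCS-5": 56, "MCS-9": 74 * 2}

def rlc_blocks_for(llc_frame_bytes, scheme):
    return math.ceil(llc_frame_bytes / RLC_PAYLOAD_BYTES[scheme])

# A maximum-size 1500-byte LLC frame:
print(rlc_blocks_for(1500, "CS-1"))   # 75 blocks
print(rlc_blocks_for(1500, "MCS-9"))  # 11 blocks
```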

How can I see the Abis IDLE TSs for EDGE in Huawei? Go to the LMT and in the MML command line type LST IDLETS.

How many IDLE TSs do we have to define in Huawei systems? We require a minimum of 24 IDLE TSs in any cell with 3 sectors if we have defined 2 static PDCHs in each sector: 2 static PDCH * 3 sectors = 6 (64K TSs), and 6 (64K) = 24 (16K) idle TSs. This is kept for providing a minimum service to 1 subscriber, given that he only gets static PDCHs.

Can you explain how the value E1 = 2.048 Mbps comes about? 32 TSs of 64 kbps make the E1 = 2.048 Mbps, as 32*64 = 2048 kbps or 2.048 Mbps. A detailed explanation: let us go through ITU-T G.711, i.e. PCM. The original standard for converting analog voice to a digital signal is called pulse-code modulation (PCM). PCM defines that an incoming analog signal should be sampled 8000 times per second by the analog-to-digital (A/D) converter (according to Nyquist's theorem, which states that you need twice the number of samples as the highest frequency; as mentioned before, the required bandwidth of the human voice is 4000 Hz, so 4000 x 2 = 8000 samples are needed). A/D converters that are used specifically for processing voice are called codecs (encoders/decoders). For each sample, the codec measures the frequency, amplitude, and phase of the analog signal. PCM defines a table of possible values for frequency/amplitude/phase, and the codec finds the entry that most closely matches the measured values. Along with each entry is an 8-bit binary code, which tells the codec what bits to use to represent that single sample. So PCM samples 8000 times per second, finds the best match of frequency/amplitude/phase in the table, finds the matching 8-bit code, and sends those 8 bits as the digital signal. The bit rate can therefore easily be calculated: 8 bits x 8000 samples = 64 kbps. For transferring this we need a medium that can provide such a bit rate. DS0: provides one 64 kbps channel. E1: 32 DS0, or 32 channels of 64 kbps.

At what voice data rate does the mobile (MS) transmitter send the voice in the uplink? What is the voice data rate when the signal is radiated by the BTS over the air in DL? How does it become 2.048 Mbps on the E1 connection between BTS & BSC? Voice is sent with the FR, EFR, HR or AMR codecs over the air between the MS and the BSS; except for HR, you can generally consider voice to be 16k. Each TS channel in an E1 is 64 kbps, so each E1 channel can carry 4 x 16 kbps TCHs. It means 2 E1 channels belong to 1 TRX (just TCH, not signaling). For signaling you can assign 16k, 32k or 64 kbps in the E1 for each TRX. Don't forget that these GSM codecs are converted in the Transcoder to PCM (64 kbps): the 16 kbps of a TCH changes to 64 kbps. In 3G, very simply, you can use TrFO, then there is no need to adapt the codec; it is a great feature. Even in 2G, if you migrate the A interface from TDM to IP, it is possible to use it.
1 E1 on the Abis interface can carry at most 12 TRXs with 32 kbps signaling per TRX (12*2 = 24 TSs for voice, plus 12 signaling channels at 32 kbps = 6 TSs, giving 30, + 1 OMU + 1 Sync = 32). 15 TRXs are also possible but need some features. So in UL/DL and on the Abis interface before the Transcoder, all bit rates are the same as at the MS; after the Transcoder it is 64 kbps. As we know, the voice signal has a bandwidth of 3.3 kHz, and per the Nyquist rate for PCM we require a sampling rate >= 2f; here f is 3.3 kHz rounded up to 4 kHz. Each sample of data is a byte. DS0: provides one 64 kbps channel. E1: 32 DS0 or 32 channels of 64 kbps. Also, an E1 frame consists of 32 bytes. Hence the value of an E1 is: 2 x 4 kHz x 8 bits x 32 slots = 2.048 Mbps.

Does the puncturing scheme affect the maximum throughput per 1 TS? For instance, for MCS5 the max throughput per 1 TS is 22.4 kbit/s. How will this value change for P2 or P3? (The technology under discussion is EDGE.) Puncturing is applied during retransmissions. The puncturing will not change the throughput at all: the 3 puncturing schemes (P1, P2, P3) optimize the throughput, and the value 22.4 already takes into account the benefit brought by puncturing. However, to be realistic, you should decrease the theoretical throughput by 10% to account for RLC retransmissions. P1 is always used for the first transmission; P2 and P3 are only used when necessary, during retransmissions, based on Incremental Redundancy and when there is no resegmentation.
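The PCM / E1 arithmetic above, spelled out. All figures are the standard ITU-T G.711 / E1 values; the TCH packing at the end follows the 16 kbps Abis sub-multiplexing described above.

```python
# PCM and E1 rate arithmetic.
SAMPLE_RATE = 8000         # samples/s: Nyquist, 2 x 4 kHz voice band
BITS_PER_SAMPLE = 8        # one G.711 codeword per sample
DS0 = SAMPLE_RATE * BITS_PER_SAMPLE
print(DS0)                 # 64000 bit/s: one DS0 channel

E1_TIMESLOTS = 32
print(E1_TIMESLOTS * DS0)  # 2048000 bit/s = 2.048 Mbit/s

# On Abis, a 16 kbit/s GSM TCH means one 64 kbit/s E1 timeslot carries
# 4 TCHs, so one TRX (8 TCHs) needs 2 E1 timeslots for traffic.
print(DS0 // 16000)        # 4 TCHs per E1 timeslot
```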

What is TBF Reallocation? TBF Reallocation is a procedure: when a TBF is allocated and suddenly one of its PDCH TSs is preempted for a TCH allocation, the TBF must be reallocated, from 4 PDCHs to 3 PDCHs for instance. The causes for reallocation are mostly: TCH preemption (= reduction in PDCHs); optimization (= increase of PDCHs because the cell is less loaded than previously); change of bias (the DL TBF is less used than the UL TBF, so the DL TBF is reduced and the UL TBF is increased).

If one TBF is ongoing and the CV=0 situation arrives, and the network has Extended UL TBF Mode enabled, then if the TBF is extended, does that also mean the TBF is reallocated? I mean, if the TBF gets extended, should it also be counted as a reallocation? If this feature is enabled, the duration of the UL TBF is extended in order to quickly restart the data transmission in UL if higher layers in the MS deliver new data, without having to re-establish a new UL TBF after the countdown procedure has started, i.e. the UL TBF is maintained for some time after the last block (CV=0) has been acknowledged by the network. During the inactivity period, the BSS should keep the USF scheduling and the reception of UL RLC data blocks as long as the uplink TBF is in the extended phase.

Number of UL TBF establishment attempts: 1799, and number of DL TBF establishment attempts: 1507. I think UL attempts should be more than DL attempts. For example, when I want to access any website I use an UL TBF. When do I use a DL TBF? To access a website, you need an UL TBF to send the request, but to download the webpage, you need a DL TBF!

Is there any value for a TBF? Why do we use a TFI if we already have the TBF? Is the TFI a cell parameter, and is it changeable? If I want to know the downloaded/uploaded traffic in a cell, which counter should I look at? If I want to know the average DL/UL speed, which KPI should I look at: RLC throughput or LLC throughput? No, a TBF is the list of PDCH TSs that the MS can use in one direction. The TFI is used to identify the TBF that the MS uses, compared to the other MSs in the same TRX; it is just a logical identifier. No, the TFI is not a cell parameter: it is dynamically allocated by the MFS. Each MS in a transfer uses 1 unique TFI in the TRX in one direction.
For example, on TRX1, in UL: TFI=1 is the UL TBF of MS2 and corresponds to TS 2 and 3; TFI=2 is the UL TBF of MS3 and corresponds to TS 2; TFI=3 is the UL TBF of MS4 and corresponds to TS 2 and 3. In DL: TFI=1 is the DL TBF of MS5 and corresponds to TS 1, 2 and 3; TFI=2 is the DL TBF of MS2 and corresponds to TS 2, 3 and 4; TFI=3 is the DL TBF of MS6 and corresponds to TS 1, 2 and 3.
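The example can be written as a lookup table to make the point that a TFI is only unique per TRX and per direction; the same TFI value can repeat in the other direction or on another TRX. MS and TS numbers are taken from the example above.

```python
# The TFI example as a lookup table:
tbfs = {
    # (trx, direction, tfi) -> (ms, timeslots)
    (1, "UL", 1): ("MS2", [2, 3]),
    (1, "UL", 2): ("MS3", [2]),
    (1, "UL", 3): ("MS4", [2, 3]),
    (1, "DL", 1): ("MS5", [1, 2, 3]),
    (1, "DL", 2): ("MS2", [2, 3, 4]),
    (1, "DL", 3): ("MS6", [1, 2, 3]),
}

# TFI=1 appears in both directions without ambiguity:
print(tbfs[(1, "UL", 1)][0])  # MS2
print(tbfs[(1, "DL", 1)][0])  # MS5
```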

When the TBF is released (no more data to transmit or receive), the TFI can be reallocated to another MS.

The problem faced is with GPRS_UL_TBF_Fail_Rate, but there is no problem with GPRS_DL_TBF_Fail_Rate. I am working on Alcatel. The cause of the failures is Radio. The following changes were made: NETWORK_CONTROL_ORDER changed from NC0 to NC2 (NC0 reselection is based on the voice criteria, while NC2 reselection is required only for GPRS and is based on GPRS parameters). It had no improvement at all. I tried to change NC_UL_RXLEV_THR from -96 dBm to -88 dBm, NC_DL_RXQUAL_THR from 7 to 4, ENABLED EN_EXTENDED_UL_TBF and increased the value of T_MAX_EXTENDED_UL_TBF from 2s to 3s. No improvement. I changed T3168 (the max time for the MS to wait for the PACKET UPLINK ASSIGNMENT message after sending a PACKET RESOURCE REQUEST). I also changed T_UL_ASSIGN_PCCCH (the duration between the reception of the EGPRS Packet Channel Request message and 1 UL radio block allocated to the MS). There is no congestion on Ater, Abis, DSP, GPU or Gb.

What is the value of GPRS_UL_TBF_Estab_Fail_Radio_Rate? If it is >5%, you should suspect a small hardware problem, which will probably be fixed by a GPU or GPRS reset. For how many cells is this indicator bad? Do all those cells belong to the same GPU? The same BSC or the same MFS? What is the value of the CSSR? Usually, the UL TBF failure rate due to Radio should be just a little higher than your TCH assignment failure rate due to Radio. If it is way higher, then you can be certain that the problem is in the hardware/software.

Since an inter-BSC site mutation, we have observed that the UL & DL TBF Drop Rate increased in the source BSC, with MCS9 usage increased and GCH load decreased. I assume that this behavior is normal, since more MCS9 usage implies more retransmissions and therefore more TBF drops. Any comments? I think as long as the TBF establishment failure rate (UL/DL) is low, there is no need to worry. Can I have a threshold for TBF establishment failures (UL/DL)? And how is the TBF drop increase explained? The threshold for TBF establishment failures is about 5%. About the TBF drop increase, maybe redeclaring the NSVCs on the SGSN side will improve (decrease) the TBF drops; we tried this once and there were improvements.

We have a problem with GPRS_UL_TBF_Establishment_FAIL! Reset the cells, then reset the GPU, and check your GPU usage: you might need an extension of GPUs. Check also your current NSS and BSS releases; there might be some new features to fix this issue. FYI, GPRS_UL_ESTABLISHMENT_FAILURE is a typical Alcatel B8 problem, so don't hesitate to raise this issue.

If GPRS_UL_TBF_ESTABLISHMENT_FAIL is high only in UL while DL is normal, most of the failures come from RF, and there is no HW fault (TRE), then I recommend you check the cell parameter GAMMA_TNx = 0. If it is not 0, correct it. GAMMA = 0 means you disable UL power control for GPRS (note: this does not disable UL PC for GSM). It is correct that UL PC can sometimes have a negative impact on UL establishment (bugs in the MS? bugs in the SW?); try to put it to zero and see if it helps. GAMMA = 0 and ALPHA = 0. ALPHA is sent from the BSC to the MS and decides the level of reduction of the MS output power in relation to the path loss. The values are given as multiples of 10, i.e. the value 5 means a reduction factor of 0.5. GAMMA is sent from the BSC to the MS to give a target value for the received signal strength at the BTS. GPRS has no dynamic power regulation, so if GAMMA is set to 0 dB, the MS will send at max power when sending data on the PDCH. GAMMA = 16 dB means that the MS will always send at (max power - 16 dB) on the PDCH. So if GAMMA = 16 dB in some cells in an area, the BSC will not hear the MS when it has been given a PDCH to transmit on. Check PDCHPREEMPT: the default is 0; it can be set to 2 depending on the CS traffic, so that only idle on-demand PDCHs are preempted. Put TBFULLIMIT = 1 (the default is 2).

Does anybody have experience with the ratio of UL TBF to DL TBF requests? The ratio is around 2.5 to 3 for other vendors, but in Alcatel it is around 4.5 to 5. I cannot explain why the UL TBF requests are so much higher than the DL TBF requests. The TBFs might not be released in ALU as often as they are in other vendors' equipment. This is due to specific algorithms that stabilize the usage of TBFs. If there is a big difference with Huawei, then I would suspect that there is more CORE signaling in ALU than in Huawei. It might be interesting to check the amount of GMM/SM signaling on both networks. Then investigate the TBF delayed release timers in ALU, and ensure that they are set to the same values as Huawei's.
Huawei equipment stats: Number of DL TBF Requests = 44,294,095; Number of UL TBF Requests = 80,865,806; Traffic: 289,651; Traffic/TBF: 2.31; TBF UL/DL: 1.82. Alcatel equipment stats: Number of DL TBF Requests = 47,930,728; Number of UL TBF Requests = 214,679,689; Traffic: 242,707; Traffic/TBF: 0.92; TBF UL/DL: 4.47. There are too many UL TBF requests in ALU. 1- There is too much GMM/SM signaling (check with the SGSN QoS team). 2- UL TBFs are normally released too early; they should be extended. Can you verify the settings of these parameters: EN_EXTENDED_UL_TBF, EN_RA_CAP_UPDATE, N_POLLING_EUTM_LIMIT, T_MAX_EXTENDED_UL, T_NETWORK_RESPONSE_TIME, N3101_POLLING_THR?
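As a sanity check, the UL/DL ratios can be recomputed from the raw counters quoted above (the quoted 1.82 and 4.47 appear to be truncated rather than rounded).

```python
# Recomputing the UL/DL TBF request ratios from the counters above.
huawei_dl, huawei_ul = 44_294_095, 80_865_806
alu_dl, alu_ul = 47_930_728, 214_679_689

print(round(huawei_ul / huawei_dl, 2))  # 1.83 (quoted as 1.82)
print(round(alu_ul / alu_dl, 2))        # 4.48 (quoted as 4.47)
```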

Extended UL TBF is not enabled: EN_EXTENDED_UL_TBF: EUTM NOT USED; EN_RA_CAP_UPDATE: DISABLE; N_POLLING_EUTM_LIMIT could not be found; T_MAX_EXTENDED_UL: 20; T_NETWORK_RESPONSE_TIME: 16; N3101_POLLING_THR: 16. You should use EUTM, but first you must check whether EUTM is already used in your other vendor's network. If it is not, then it is fair to compare ALU and the others, and in this case, yes, there are too many UL TBFs in ALU. But if the others have it activated, then it is not fair to compare ALU with them. EUTM can degrade the UL TBF KPIs, but by reducing the UL extension time, you can strongly limit the KPI degradation.

In our network we are using a ZTE PS Core. The Attach license is 12400 and the PDP license is 37200. Actual usage is 800 PDP contexts per day. The installed BW is 50 Mbps and daily usage is between 10 Mbps and 20 Mbps. The issue we are facing is that GPRS/EGPRS is very slow, with around 50 packets lost when pinging from a USB dongle. A slow PS throughput can be caused by many issues, and most of them are NOT due to Core Network capacity. First of all, you need to ensure that you have enough capacity on: the Air (Radio) interface (PDCH capacity), the Abis interface (Packet Abis TS capacity), and the Ater PS interface (Packet Ater capacity). Once you are clear on capacity, you can have a look at radio quality (coding scheme, retransmissions). 50 packets lost, out of how many packets? How do you count lost packets: based on the EGPRS ACK/NACK messages? In such cases, keep in mind that EGPRS is based on a typical threshold of 10% for retransmissions. Packets cannot be lost, they are only retransmitted, except in the case of real time. With ping, the thing is that there are actually 2 transfers: 1 in transmission and 1 in reception, and each uses its own TBF. If the delay between 2 pings is big, I would recommend you increase the Delayed DL TBF duration and the Delayed (Extended) UL TBF duration so that it matches (or exceeds) the delay between the pings.
We have recently activated Extended UL TBF mode in our network. All KPIs, including the TBF Establishment Success Rate, throughput etc., are degraded. The major concerns are: 1- an enormous increase in signaling messages in the UL direction; 2- the count of TBF establishment failures is constant while the TBF establishment attempts and successes have reduced. The TBFs now last longer: their lifetime has increased! Therefore: 1- while a TBF is extended, UL blocks are sent during the extension; they are seen as signaling. This is not a problem, and you can slow the frequency of those signaling messages (t_extended_ul_polling). 2- The number of TBF establishments is reduced because the same TBF is reused! So over 10 seconds, instead of using 5 TBFs, the MS will use only 1 TBF. But the probability of a TBF drop over those 10 seconds is the same as before. Extended UL is a feature that leads to GOOD quality from the subscriber's point of view, but worse KPIs. So it is up to you to explain this to your management. If they want to stick to the KPI monitoring, without taking into account the subscriber perception, then disable this feature. In order to prove to the management that Extended UL TBF is good, do a measurement like ping: the RTT will decrease and you will see the improvement.
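A quick numeric illustration (with made-up counts) of point 2: the drop events stay the same while the establishment count shrinks, so the per-TBF drop rate KPI rises even though the subscriber experience is unchanged or better.

```python
# Why Extended UL TBF makes the drop-rate KPI look worse: the number of
# drop events over a period stays the same, but far fewer TBFs are
# established, so the per-TBF drop rate rises. Counts are illustrative.

def drop_rate(drops, establishments):
    return drops / establishments

# Without extension: over some period, 50 short TBFs and 1 drop event.
print(drop_rate(1, 50))  # 0.02 -> 2% TBF drop rate

# With extension: the same traffic is carried by 10 long TBFs; the same
# single drop event still occurs somewhere in the period.
print(drop_rate(1, 10))  # 0.1 -> 10% TBF drop rate
```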

I mean, in extended mode the MS is polled less than usual, so how come signaling has increased? During extended mode, the MS is artificially kept active, and during this time only signaling is exchanged between the two parties: the BTS sends the USF to the MS, and once in a while (in improved extended mode) it will send a PDAN with a polling indication (I think it's a PDAN, Packet DL Ack/Nack, but I am not sure). The MS replies with an UL Packet Dummy block, which is counted as signaling. When the MS is in a real UL transfer, the BTS sends the USF in DL (it is not a message, just a flag in a DL PDTCH) and the MS sends REAL data in UL, so there is not much signaling involved there.

What is the difference between throughput/TBF and throughput/TSL (TSL = timeslot)? Also, which throughput KPI will be affected by a higher number of users in the cell? 1 TBF = 1 transfer for one user. Usually each user gets 1 DL TBF and 1 UL TBF: the DL TBF to receive the downloaded data, the UL TBF to send the TCP acks or application acks. So the most relevant throughput KPI is the throughput per DL TBF.

With a DT tool such as TEMS, where can I locate the TBF Drop Rate and the TBF Establishment Success Rate? A QoS indicator is based on a certain event on the Air interface; you cannot see a rate of drops. You can just see one drop and then another drop. The same goes for TBF allocation: you can see whether it is a failure or a success. It is only after a few logs that you can make a rate out of them.

How do we improve DL multislot assignment? Decrease the TCH usage in the cell (= half rate, or load sharing with neighbors), or increase the capacity of the cell (add a TRX), or increase the number of TSs reserved for PDCH under high load conditions (a radio parameter).

What are 1-phase and 2-phase access for EGPRS? 1-phase and 2-phase access for UL TBF establishment are two different ways to ask for resources. If the MS needs to send lots of data, it will go for 2-phase access.
If it wants to send short data, or signaling, it will choose 1-phase access.

If I have 2 PDCH TSs, then at a given time, how many users at most can share those 2 timeslots? On one PDCH you can accommodate up to 7 MSs in UL and up to 16 MSs in DL (16 MSs in total). All resources are allocated to the MSs by the network (depending on resource availability and accessibility and the subscribers' QoS). If on 1 PDCH there can be a maximum of 8 users, then if I have 2 PDCHs and all users are 4-TS capable, what will the distribution be? It depends on how the vendor decides to perform the resource allocation; it is not defined in 3GPP. So, for example, 4 MSs onto 2 PDCHs. Why only 2 PDCHs? Let us assume that it is because the other timeslots are already busy with TCH. Possible implementation: each MS gets 2 PDCHs only (even though their multislot capability supports 4 PDCHs) because they can't have more: the BSS will allocate only 2 PDCHs to each of them. Now, of course, those 2 PDCHs are the same for each MS. So each PDCH provides a possible link for 4 different MSs; there are 4 MSs multiplexed onto each PDCH. Since not all MSs are receiving data at the same time, one MS could have 100% of the max throughput over the 2 PDCHs during a few seconds. 1 PDCH = 1 timeslot. If 4 users share one PDCH, it means each of them gets one quarter of the timeslot (on average).

What is Packet Timeslot Reconfigure? Why and how do we use it? You have no choice but to use it. PTR is used when the timeslots used for the packet transfer of an MS should be changed: the transfer changes bias (i.e. DL becomes upload); a TCH preempted one of your PDCHs; a timeslot is now available for PDCH and you can get it if you increase the number of PDCHs for your TBF, etc.

So it is very necessary, and usually not modifiable by the operator!

For TBF establishment failures we have the following reasons (some of them): 1- Abis Congestion (lack of Abis resources); 2- Ater Congestion (lack of Ater resources); 3- Too Many TBFs (too low a number of available GCHs compared to the number of TBFs). Where is the logic? As far as I understand, a lack of Abis resources means too low a number of GCHs on Abis to serve the new TBF. They are indeed related, but based on different mechanisms. Altogether, it is congestion on the Abis/Ater link. Abis Congestion: the number of GCHs that can be established on Abis is < Nb_GCH_FOR_TBF_ESTAB. Ater Congestion: the number of GCHs that can be established on Ater is < Nb_GCH_FOR_TBF_ESTAB. Too many TBFs: the number of GCHs available to serve the DL TBF is < Min_Nb_GCH.

Does anyone know why the indicator of throughput in kbit/s per TBF has a higher value than that of kbit/s per PDCH? This indicator was shown in the ALU GPRS Telecom NPO. As we know, 1 user has 1 TBF (minimum), but a TBF can have more than 1 PDCH (of course giving a higher throughput). In the ALU parameters, we have MAX_DL_TBF_per_SPDCH, MAX_UL_TBF_per_SPDCH and MAX_PDCH_per_TBF. What is the correlation with throughput? The TBF is the group of PDCHs allocated to one user; the PDCH is 1 GPRS timeslot. The throughput of a PDCH depends on the coding scheme. If 2 users are sharing one PDCH, with user 1 using it 50% of the time at 30 kbps and the other using it 50% of the time at 40 kbps, then the PDCH throughput is 35 kbps. One user can have, for example, 4 PDCHs at 50 kbps each; in this case, the TBF throughput is 200 kbps.
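Both definitions can be captured in a few lines; the rates are the ones used in the example above.

```python
# PDCH throughput is the time-weighted average over the users sharing the
# timeslot; TBF throughput sums the user's rate over all of its PDCHs.

def pdch_throughput(shares):
    """shares: list of (fraction_of_time, rate_kbps) per user on this PDCH."""
    return sum(t * r for t, r in shares)

# Two users, each on the PDCH 50% of the time:
print(pdch_throughput([(0.5, 30), (0.5, 40)]))  # 35.0 kbit/s

# One user with 4 PDCHs at 50 kbit/s each:
print(4 * 50)                                   # 200 kbit/s TBF throughput
```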

What about the indicator kbit/s per cell? How is it computed? Also, as far as I know, its value is not reliable in NPO, especially at the CellZone level. I forget how it is computed, but you can check the formula. At cell level, the value should be the total amount of bytes transferred on the whole cell (sum of PDCH RLC bytes) per hour. At cell-zone level, the aggregation could be wrong: it should be an average aggregation and not a sum. The throughput per cell is not very useful anyway; perhaps it can be used to dimension the Abis.

1- According to "1 TBF for one user", if we set MAX_UL_TBF_per_SPDCH = 4, then 1 PDCH can be used by 4 users, right? So if user A has 20% of the time at 30 kbps, user B has 30% of the time at 40 kbps, user C has 30% of the time at 60 kbps, and the last user D has 20% at 50 kbps, then the PDCH throughput would be (0.20x30 + 0.30x40 + 0.30x60 + 0.20x50) = 6 + 12 + 18 + 10 = 46 kbps. But for the TBF throughput of each user: user A has a TBF throughput of 6 kbps, user B 12 kbps, C 18 kbps and D 10 kbps. Right? 2- Then, if we set MAX_PDCH_per_TBF = 5, it means that 1 user can have up to 5 PDCH timeslots. I guess this condition can be reached when there are still many idle PDCHs (no high PDCH utilization at that time). 3- Well, I am actually a little bit confused. You stated: "The TBF is the group of PDCHs allocated to one user." But you gave an example: "if 2 users are sharing one PDCH". Does it mean that in that 1 PDCH, there are 2 TBFs (1 user, 1 TBF)? Then it would mean that 1 TBF can have many PDCHs and 1 PDCH can have many TBFs. 1- 100% correct. 2- 100% correct. 3- Yes, 1 TBF can have many PDCHs and 1 PDCH can have many TBFs. Example: user A has 1 TBF w/ 2 PDCHs, user B has 1 TBF w/ 2 PDCHs, user C has 1 TBF w/ 1 PDCH and user D has 1 TBF w/ 1 PDCH.
On PDCH1 (= TS 1 of the TRX): user A has 20% of the time at 30 kbps, user B has 30% of the time at 40 kbps, user C has 30% of the time at 60 kbps, and user D has 20% at 50 kbps, so the PDCH throughput = (0.2x30 + 0.3x40 + 0.3x60 + 0.2x50) = 46 kbps. On PDCH2: user A has 40% of the time at 30 kbps and user B has 60% at 40 kbps, thus (0.4x30 + 0.6x40) = 36 kbps. PDCH throughput of PDCH1 = 46 kbps and of PDCH2 = 36 kbps. TBF throughput of user A = 0.2x30 + 0.4x30 = 18 kbps. TBF throughput of user B = (0.3 + 0.6)x40 = 36 kbps. TBF throughput of user C = 0.3x60 = 18 kbps.
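Re-running the example as code confirms the weighted sums (and shows user C's TBF throughput working out to 0.3 x 60 = 18 kbit/s):

```python
# The two-PDCH example above, computed.
def weighted(shares):
    """shares: list of (fraction_of_time, rate_kbps)."""
    return sum(t * r for t, r in shares)

pdch1 = weighted([(0.2, 30), (0.3, 40), (0.3, 60), (0.2, 50)])
pdch2 = weighted([(0.4, 30), (0.6, 40)])
print(pdch1, pdch2)  # 46.0 36.0

user_a = weighted([(0.2, 30), (0.4, 30)])  # shares on PDCH1 + PDCH2
user_b = weighted([(0.3, 40), (0.6, 40)])
user_c = weighted([(0.3, 60)])
print(user_a, user_b, user_c)  # 18.0 36.0 18.0
```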

Notice that 1 user will get the same maximum throughput on all of its PDCHs. For example, user A has a maximum throughput per PDCH of 30 kbps. That depends on the MCS used: if the MCS is MCS4, then it is MCS4 on all the PDCHs.

1- The max number of TSs a user can have depends on the MS multislot class as well. Nowadays, the max number of TSs that a user can have in one direction is mostly 4 (e.g. 4+1, multislot class 8), excluding multislot classes 30-33. So, even if we increase MAX_PDCH_per_TBF beyond 4 or 5, it will not have any impact in increasing the throughput. Correct? 2- Consider a site with 1 E1 and a 4/4/6 configuration. As one TRX requires 2 Abis TSs, the total number of Abis TSs used would be 31, i.e. 14*2 = 28 for traffic + 3 (or 2) for signaling. So we don't have any Extra Abis TSs configured for the site. Now, excluding Bonus + Extra nibbles, we are only left with the BASIC nibbles per cell. As we know, in order to achieve MCS9, we need around 5 GCHs (4.5 Abis nibbles) per PDCH. If we consider cell A (4 TRXs): total basic Abis nibbles = 4*8 = 32; MAX_PDCH_High_Load = 4; MS multislot class 8 (4+1) for all users. So one user using 4 PDCHs would require 4*4.5 = 18 Abis nibbles. With this calculation, we can have only 1 user using MCS9 simultaneously in the cell, since we don't have another 18 Abis nibbles (only 14 are left). Am I right? 3- If an MS multislot class is 8 (4+1), can that user use fewer than 4 TSs (PDCHs), or is it compulsory to use 4? 1- Yes, correct! 2- Well, your computation is correct, BUT why not take into account the Bonus nibbles? In a 4/4/6 BTS, you would have 3 BCCH + 7 SDCCH, that's 10 bonus nibbles! Just for clarification: Extra and Bonus nibbles are shareable at the BTS level; Basic nibbles are shareable only within the same cell. 3- Sub-optimal allocations are possible. The MFS will try to allocate as many PDCHs as possible (up to the multislot class limit), but if it can't allocate them all, it will allocate fewer.
The MS is able to support sub-optimal configurations.

High DL TBF drop (recently UL too) on one site. The reason is N3105, and according to Gb measurements, active cards with terminated MSISDNs are causing the problem (the number of drops equals the number of attempts from those IMSIs to make a routing update and get rejected by the HLR with "GPRS service not allowed"). A test with an inactive card led to no results: I clearly see that the TBF is formed and released, but in all the tested conditions (battery removal, restarts, waiting for N3169 to expire), no drops were observed. Could you please explain this part: "Gb measurements show active cards with terminated MSISDNs are causing the problem"? N3105 means that the MS does not ack/nack the DL RLC data blocks. I don't see why this would be linked to GMM message content. Maybe the rejection from the HLR provokes an immediate TBF release in the MS: the GPRS service being not allowed, it would not surprise me if the MS stopped the TBF immediately, without informing the MFS. We have the same problem here, BSS B10 but on all the BSCs. We opened a ticket with TAC; when we activated prepaid SIMs, where most of these SIMs do not have access to GPRS, high UL

TBF failures suddenly rose. Still no answer, but according to NPO support, the counters are not properly reported in NPO. We noticed our customers are not impacted; it is just some counters. Last week we reset a BSC and UL TBF failures dropped to 10%, and this is the magic of ALU! We are on ALU B10 and experiencing a lot of NB_DL_TBF_EST_FAIL_RADIO_PB_PMMFS and NB_UL_TBF_EST_FAIL_RADIO_PB_PMMFS. Can anybody explain why we are seeing high values on these counters when CS CSSR is 99% and the UL TBF establishment rate is as low as 85%? First of all, check that your init CS and init MCS are not set too high! Try decreasing them to CS2 and MCS6. Second: did you activate NC2 reselection/redirection? If yes, try disabling it. If that does not change anything, then it is perhaps a system failure; try resetting your GPRS in the GPU. The init MCSs are low: MCS3 for UL, MCS6 for DL; CS2 for UL/DL. NC2 is not activated. Such poor QoS is usually linked to a software problem that can be fixed by a reset of some sort, but I don't know for sure what should be reset. Some days back we detected deteriorating TBF drop rates on several BSCs, both in UL and DL. The average daily value increased from 2% to 28-30%. All of them are BSS drops. Those BSCs belong to different MFSs; one of the MFSs is an old G2, all the others Evolution G2.5. This problem affects ALL cells in each BSC (not just some). Two days after the problem appeared, 3 more BSCs were affected. The corresponding reports in NPO are terrible. We changed the GPU board for one of the BSCs connected to the G2.5 MFS (to a spare board in the rack). Nothing happened. All other KPIs are within limits: no congestion, no problems with CPU, etc. According to the message flow during TBF release, BSS TBF drops have no special triggers; it is only the difference between TBF normal releases and all other non-acceptable releases. Sorry, I can't find anything on UL and DL TBF drops. Most of the problems are either DL or UL, and are fixed with a patch.
Are you still on B9? It is time to migrate to B10; you are the last ones using RNO. About the problem with TBF drops: the solution has just revealed itself. Two days ago we had a problem with one of those BSCs (BSC connection lost) and so we had to reset the active OMCP board. After this manipulation, maintenance was restored and the problem with TBF drops was also solved. However, we repeated that experiment 3 times (different MFSs, different types of BSCs, etc.), but all the same, we had no success. Abnormal TBF drops due to BSS correlate perfectly with the counter P103 (number of GCH frames badly received by the MFS due to bad CRC), incremented whenever the EGCH layer detects that a GCH frame has not been correctly decoded in the MFS (due to a bad CRC). The difference between the values of this counter for normal BSCs and those BSCs with high TBF drops is not a factor of ten or a hundred; it is thousands. The CRC checking at the BSC cannot be deactivated. All we could find was deactivating CRC between SGSN and MFS (on the Aters), although it wasn't what we needed. My expectations were obvious: either the problem would be solved (less probable outcome) or nothing would change (more probable). The P103 counter changed: it decreased by half! I checked several times; we deactivated CRC for both links from MFS to SGSN for this BSC. The new questions are:

1- The full name of P103 is NB_ERROR_UL_GCH_FRAME. Only UL? Does it mean we must consider only the path from MS to MFS? 2- If yes, then what is the second end calculating the CRC? One accident put the lid on the precise correlation between TBF drops due to BSS failure and the P103 counter. One morning, after an unsuccessful attempt to shift one of the BSCs from the old OMC-R to the new one (and of course from one MFS to another), we noticed that on that BSC the problem with abnormal TBF drops due to BSS appeared. But in this case it wasn't accompanied by the P103 counter; it remained close to zero. There were enormous values of TBF drops due to BSS. Some time later we badly needed one more BSC for our highly loaded region; we had one MX in test mode which could easily be used. We started moving BTSs into it, and again we noticed a problem: abnormal P103 this time, and no TBF drops due to BSS at all. The theory failed, so we started solving these problems separately, and the first was the one with the P103 counter. We were absolutely sure about our links (both Ater and Gb, BER better than 10^-8), but the checksums calculated from the same data were different at the two ends. Obviously, the problem was in the calculation process at these two ends. As we guessed, it could be something like different types of CRC, a one-end-only calculation, or some problem with the CRC result register (where the result of the checksum calculation is put). But we use only CRC4, and as for the other hypotheses, we did not see a way to check them directly. We tried all the standard Alcatel solutions, from resetting and changing the GPU board to changing settings on the SGSN. It did not help. The thing that really helped and solved the problem was the favorite procedure of deactivating and then reactivating the feature. This time the feature was CRC checking, which can be enabled or disabled only from a local terminal. We deactivated and reactivated the CRC checking on Abis.
That was the solution for high P103 (without any other problems). Then we started to work on the TBF drops due to BSS: another long chain of hypotheses and experiments. At last, after rebooting the whole MFS, it vanished. Alcatel law: if it fails, reboot it. There are 2 situations described in 3GPP 44.060: 1- After sending a Packet Uplink Ack/Nack with the final bit set, and there is no other TBF active, the MS goes into packet idle mode and does not monitor the PDCH. 2- When the MS sends a Packet Downlink Ack/Nack with the final bit set, it starts T3192. While T3192 is running, the MS monitors the PDCH. When T3192 expires, the MS goes back to packet idle mode. My question: does the MS still listen on the PDCH, because T3192 is running, after the network sends the PUAN? In other words, we have this situation:

1- DL TBF and UL TBF are both active
2- The DL TBF is released and T3192 is running, but there is no more DL data to send
3- The UL TBF ends; T3192 is still running
4- New DL data arrives while T3192 is still running.

How should the DL TBF be assigned? PCCCH or CCCH? (I don't have PCCCH.) Logs show that in this situation, sometimes PACCH works, sometimes it does not.

Is this what the delayed DL TBF release mechanism is intended for? You are right. The purpose of this function is to be able to set up a DL TBF using the PACCH shortly after the release of a previous DL TBF. When a DL TBF is released, the MS starts timer T3192 and stays on the PACCH until T3192 expires. In the BSS there is a corresponding timer, T3193, started when the TBF has been released. The PDCHs that were assigned to the TBF are still reserved for the MS for the duration of T3193. If more data arrives from the SGSN before T3193 has expired, a new DL TBF is set up and a PACKET DOWNLINK ASSIGNMENT message is sent to the MS on the PACCH. The poll bit is set in the message, indicating that the MS shall answer with a PACKET CONTROL ACKNOWLEDGEMENT message. If this message is not received by the BSC, new attempts (maximum 3) to set up a DL TBF are made. New attempts to set up a DL TBF can be made using PCCCH, CCCH, PACCH (normal), or PACCH (after release), depending on the traffic situation and the configuration of the BSS. If the message is received, DL scheduling starts and the DL TBF is established. The PDCHs and the TBF are finally released if/when BSS timer T3193 expires, after which a normal setup using PCCCH, CCCH, or PACCH will have to be used to set up a new DL TBF. What is the KPI target for TBF establishment success rate UL/DL and TBF drop rate DL/UL? TBF establishment success rate UL/DL: 98%; TBF drop rate UL/DL: 2%. How to reduce the TBF drop rate due to stagnating window? A stagnating window is due to an MS failure, where the MS keeps repeating the same data. What is the value? The rate varies between 45% and 80%, usually between 18:00 and 23:00, and only one cell is affected. It is possible that in this one cell there is one faulty MS. You just have to wait for that customer to complain about the impossibility of finishing a transfer (or a hanging transfer). You can try to increase the detection threshold (how many times the window is stalled):
N_STAGNATING_WINDOW_DL_LIMIT if the problem is in DL, or N_STAGNATING_WINDOW_UL_LIMIT if the problem is in UL. In GPRS, which KPI do we use to indicate the packet error rate (like PER in EVDO and BLER in UMTS)? There is no direct measurement like BLER or PER. So you have to look for an effect of BLER, and that is going to be: 1- The coding scheme used. Problem: this is also impacted by system congestion, so it is not clear-cut whether a bad MCS means bad radio or congestion. 2- The rate of retransmission; this one is much more useful. 1 E1 can support 2 Mbps; each TRX needs 128 kbps. Then 1 E1 can support 12 (+/-1) TRXs, depending on the multiplexing type, and MCS9 can support 59.2 kbps. That means each TRX can carry 2 TSs of MCS9, but we see in the field that the TRX carries more, i.e. within the same TRX there are 8 TSs of PDCH and TCH. Surely these 8 TSs use more than 128 kbps? A TRX does not need 128 kbps. It needs as many kbps as the TCH + PDCH require!

So yes, if you have a BTS with 12 TRXs and the EGPRS traffic is high, then 1 Abis is not enough; you will need 2! Don't think per TRX. It should be done per BTS, because the EGPRS traffic of one BTS shares the usage of the whole Abis; the Abis is not split per TRX. Concepts of TBF: if an MS wants to send data in UL, the MFS allocates 1 UL TBF. If the network wants to send data to the MS (in DL), the MFS allocates 1 DL TBF. One MS can have 1 UL TBF, or 1 DL TBF, or 1 UL + 1 DL TBF. In each xL TBF, a certain number of xL PDCHs (timeslots) are allocated (replace xL by UL or DL). How many sites will work on 1 E1? It depends on how many cells and how many TRXs per cell. You can pack roughly 10 to 14 TRXs per E1. Each TRX uses 2 E1 timeslots (for user-plane traffic) and each cell uses 1 E1 timeslot (for control-plane traffic). EGPRS requires additional timeslots for optimal throughput, but if you can live with lower throughputs, then you don't need to keep any EGPRS capacity on the E1. About voice: at the air interface, 1 timeslot uses 16 kbps for voice. For simplification, let's say 1 TRX has 8 timeslots, so you need 16 kbps x 8 TS; 1 TRX needs 2 x 64 kbps at the Abis interface. Now, 1 E1 has 32 x 64 kbps. Let's say 30 x 64 kbps can be used for voice. It is clear that 1 E1 can serve 30/2 = 15 TRXs if only voice is considered. EGPRS is more difficult and depends on the supplier. What is LLC retransmission? Is it at the RLC layer? It is at the LLC layer! LLC is managed by the SGSN. But (as far as I know) LLC is not supposed to retransmit data (except some signaling, I believe). The retransmission is done at higher layers (applications such as TCP). Do we need to consider LLC retransmission in our UL/DL establishment success rate formula? How would LLC retransmission impact TBF establishment? LLC retransmission occurs at the LLC layer and not at the RLC layer. Have you ever wondered why there is a requirement for another logical link, i.e.
LLC, above RLC, between the MS and the SGSN? There could be a scenario where LLC retransmissions are required, for example for an application that works in unacknowledged mode and does not have an error checking and correction mechanism of its own. Due to LLC retransmission, a timer running at the TCP layer may expire. This will make TCP re-send the network PDU (N-PDU) for which an ack was awaited down to the LLC layer, which could further add delay to the TBF establishment, or even a dip in throughput. Are you saying that the application layer might force the LLC to work in ack mode, thus enabling LLC retransmissions? Maybe I am wrong, but as far as I remember, LLC ack mode is enabled only for specific messages (flow control or signaling messages, or something like this). For the user plane, LLC is always unack: no LLC retransmission and no delay due to LLC retransmission. Does an increase in data traffic cause an increase in DCR? We have issues in our network and management thinks that the increase in dropped calls is attributable to increased data traffic. A possible explanation: with an increase in data traffic, there will be more subscribers with no DL power control, which will increase the overall interference in the cell.
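Going back to the E1 dimensioning discussed above, the voice-only arithmetic is a one-liner (assuming the simplified figures from the text: 2 x 64 kbps Abis timeslots per TRX, about 30 of the 32 E1 timeslots usable for traffic):

```python
# Voice-only E1 dimensioning sketch, figures as quoted in the text.
E1_TIMESLOTS = 32
USABLE_TS = 30       # minus synchronization and signaling timeslots
TS_PER_TRX = 2       # each TRX consumes 2 x 64 kbps on the Abis

max_trx_voice_only = USABLE_TS // TS_PER_TRX   # 15 TRXs per E1
```

With EGPRS, extra timeslots (Extra TS) must be budgeted on top of this, which is why the practical figure quoted earlier is only 10 to 14 TRXs per E1.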

Will modifying the GPRS power control parameters help? I want to modify ALPHA and GAMMA. The ALPHA we have is 6 and GAMMA is 30. I want to change GAMMA to 20, and if that does not solve it, then GAMMA = 30 and ALPHA = 10. It could be useful to say that you are talking about 2G. In 2G it is simple: more traffic = more timeslots used = more frequency usage = more interference = worse BER = dropped calls. To reduce interference due to load: 1- First look at the ratios of HO Quality UL and HO Quality DL. Which one is the worst: DL or UL? Then: reduce the average power by reducing the power control thresholds (for TCH); increase half-rate usage (to reduce timeslot usage). GPRS power control works only in UL (ALPHA and GAMMA parameters). You can try GAMMA = 32 and ALPHA = 0.8 (or 80), which will have a little more effect than your present settings.
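For reference, open-loop GPRS uplink power control follows the 3GPP 45.008 formula P_CH = min(GAMMA0 - GAMMA_CH - alpha*(C + 48), PMAX). A sketch with GSM900 figures (GAMMA0 = 39 dBm) is below; note that vendor parameter encodings (e.g. ALPHA in tenths, GAMMA in dB steps) may differ from the raw alpha and GAMMA_CH used here:

```python
# Open-loop GPRS UL power control per 3GPP 45.008 (GSM900 sketch).
# c_dbm: DL received level measured by the MS, alpha in [0, 1],
# gamma_ch in dB, pmax: MS maximum output power in dBm.
def ms_output_power(alpha, gamma_ch, c_dbm, pmax=33, gamma0=39):
    return min(gamma0 - gamma_ch - alpha * (c_dbm + 48), pmax)
```

With alpha > 0, a stronger downlink (C closer to 0 dBm) makes the MS transmit less, which is exactly the interference-reduction effect discussed above; alpha = 0 makes the output power independent of path loss.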

There are other features to reduce interference in a network, but they require much more work: downtilts, re-azimuths, a new frequency plan or a new hopping strategy, a synchronized network.

HO DL Quality is at 10% and HO UL Quality at 7%. Huawei did a trial on reducing GPRS traffic and their TCH drops decreased. I did the same for Ericsson but the effect is not that high; I modified the ODPDCHLIMIT parameter. The ODPDCHLIMIT would not have much effect in an Ericsson network, because it is set to a very high value by default (100%). Try modifying the parameter that allows packing more TBFs onto the same PDCH (one parameter for UL, another for DL). By default it is 2! Try 4 or 5. That will strongly reduce your GPRS load! It will reduce the GPRS throughput, but due to the bursty behavior of data traffic, it is acceptable. That is TBFDLLIMIT and TBFULLIMIT, right? Before, this was only at BSC level, but now there is a market adaptation that makes it cell level. By the way, if we already have cell level, which one is followed: the BSC-level parameter or the cell-level parameter? At present here are our values: BSC level: TBFDLLIMIT = 70, TBFULLIMIT = 50. Cell level: TBFDLLIMIT = 20, TBFULLIMIT = 20.
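The trade-off behind these TBF-per-PDCH limits is simple: a PDCH's radio throughput is shared among the TBFs multiplexed onto it, so packing more TBFs per PDCH frees timeslots at the cost of per-user rate. A toy illustration (the 40 kbps PDCH rate is an arbitrary example, not a vendor figure):

```python
# Per-user share of one PDCH when several TBFs are multiplexed on it,
# assuming an ideal round-robin scheduler.
def per_user_share_kbps(pdch_rate_kbps, tbf_on_pdch):
    return pdch_rate_kbps / tbf_on_pdch

share_2 = per_user_share_kbps(40, 2)   # default-like multiplexing
share_5 = per_user_share_kbps(40, 5)   # more aggressive packing
```

This is why raising the limit improves GPRS occupancy but degrades throughput, as the answer above says.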

In Huawei, the number-of-TBFs-per-PDCH parameter is PDCHUPLEV for UL and PDCHDWNLEV for DL. Is PDCHDWNLEV in Huawei for both EDGE and GPRS? Meaning, if Huawei sets this to 160, it can squeeze 16 users onto 1 PDCH regardless of whether it is EDGE or GPRS? Also, Ericsson's TBFDLLIMIT has a maximum of 100 only, meaning it can squeeze only 10 users per PDCH, right? Wondering, because Huawei has 160 as the maximum, so they can squeeze more

users onto 1 PDCH than Ericsson. The maximum number of TBFs per PDCH is (about) 10 in DL and 6 in UL, and that is a 3GPP constraint. I am surprised that in Ericsson there are 2 parameters that control the same thing. If the cell parameter is a market adaptation, and since you have it available, I would say the cell parameter is the one commanding TBF/PDCH multiplexing. So at cell level, try a value of 30 DL and 30 UL (and then increase gradually in steps of 5); you should see an effect rather quickly on both GPRS occupancy (positively) and GPRS throughput (negatively). Yes to both of the above! The recommended setting is 80 for DL and 70 for UL. What is a TBF? A TBF is a group of PDCHs allocated to a user for one transfer in one direction. In a normal situation, a user needs to transfer data in DL and UL at the same time (useful data in DL, and acks in UL), so one user usually gets 1 UL TBF and 1 DL TBF. The TBF is released a little after the transfer is finished. One transfer is, for example, one webpage, one MMS or one file. PDCH = 1 physical timeslot for packet transfer. 1 TBF = 1 user = n consecutive PDCHs. But 1 PDCH can be shared by several TBFs at the same time. So 1 TBF = n consecutive PDCHs, but those PDCHs may or may not be allocated to this TBF only. I am developing a PCU. When I assign a DL TBF over CCCH (either AGCH or PCH), I have found that the assignment is not always successful. In one example, I send the assignment on the AGCH at frame 8880, then start sending data on the allocated PDTCH. The data does not get acknowledged (no PDAN), so I re-send the same assignment at frame 8982 (two 51-multiframes later; I am using a combined SDCCH/CCCH and BS_AG_BLKS_RES = 1) and this time I get a PDAN at frame 9004, which is what should happen. The previous (UL) TBF's Packet UL ACK was sent in frame 8788 and acknowledged in frame 8801, so the MS has had 79 frames to switch over to monitoring the CCCH for assignment messages.
Is there a specification that says how long the MS is allowed to take to change from packet transfer mode to packet idle mode? Yes, there is a timer governing the switch from listening to the PACCH (PDCH) to the AGCH and PCH (CCCH), but the mechanism is rather complicated and depends on the situation. Are you sure that the DL TBF is fully released while the MS is sending UL data? Once the MS has finished sending data in UL (and there is no DL TBF, neither active nor delayed), the MS enters a specific extended state if a certain timer is > 0 (I think the timer is called T_DELAYED_TBF_POL or T_RESPONSE_NETWORK_TIME, but I can never remember it). On the last useful UL RLC blocks, the PCU does not acknowledge the reception (in the PUAN, FBI = 0 instead of 1) in order for the MS to remain reachable. During this time, the MS remains on the UL PDCH, sending dummy blocks in UL whenever the PCU sends its USF. Concerning your current question, I am thinking the following: the MS changes its state from packet transfer to packet idle right after receiving the Packet UL ACK (with RRBP valid and FBI = 1) and sending the PACKET CONTROL ACK. In turn, the Packet UL ACK (with RRBP valid and FBI = 1) is sent to the MS after receiving the last RLC/MAC block with CV = 0.

The description above is about non-extended UL TBF mode. In extended UL TBF mode, the network does as stated earlier. Regarding the previous query, yes, I got it sorted out. The problem at the time was that if I sent 1 or 2 PDAs after sending the PUAN, both were ignored, but if I sent 3 PDAs after the PUAN, I got PCAs for all three. I still don't know why that happens. But once I have sent the PUAN (with FBI set), the MS considers the TBF ended, and it stops monitoring the PDCH. So strictly speaking, I should never have received a PCA for the PDAs sent after the PUAN. It may be a quirk of the handset firmware (Nokia) that the MS responds to 3 PDAs even though it should not. Regarding the current issue: I am only using non-extended mode TBFs, so things should be fairly straightforward. I am running with a short value of T3192 at the moment (80 ms), so if I have a sequence DL TBF 1, UL TBF, DL TBF 2, there should be no risk that DL TBF 1 is still active by the time I get to DL TBF 2. Still testing to see how it works out. Some questions for you: Is the UL TBF release procedure finished properly (i.e. with CV = 0 from the MS and FBI = 1 from your side)? Do you assign the DL TBF through the DL TBF assignment message on the AGCH or something else? What is the sequence of messaging? What is the GMM state of the MS at the moment of assignment?

Yes, the UL TBF is released properly. When I see CV = 0, I send the final PUAN with FBI = 1. I don't do any downlink assignment until I see the PCA for the PUAN. The DL assignment uses the CCCH, i.e. either AGCH or PCH, depending on what comes first. I set the poll request in the DL assignment and set the TBF starting time to block 0 of the next-plus-one 26-multiframe (for some reason I seem to get better results when I specify a TBF starting time than when I don't). Typically this is in GMM Ready state. I think that a DL transfer in Ready state may be initiated only by an Immediate Assignment, see 44.018 (if PCCCH is not present), on the AGCH (not the PCH). 43.064 section 6.6.4.8.2 says that the Immediate Assignment is transmitted on the CCCH if there is no PCCCH, i.e. it does not specify AGCH vs PCH; 44.018 is pretty much the same. I tried sending the PDA on AGCH only and PCH only, just to verify that the MS responds in either case, and the MS does not seem to care which one it gets the message from. Also, my logs show that the problem happens when the assignment goes on the AGCH (and also the PCH, but mostly the AGCH is used because of logical channel sequencing). I am also in doubt, but in most cases it is not so important, because AGCH and PCH are shared on a block-by-block basis. I think you may also check the following: 1- The DRX mode of the MS (DRX mode or non-DRX mode after the packet transfer to packet idle transition, and the non-DRX timer), if DRX mode is supported in your network. Really,

the Immediate Assignment Command on the Abis has no paging group info, unlike the Paging Command, and in DRX mode an MS looks only for its own paging blocks (not for the AGCH blocks defined by BS_AG_BLKS_RES). In the non-DRX period after the packet transfer to packet idle transition, it observes all CCCH blocks in its CCCH group. 2- The initial timing advance estimation for the DL ack from the MS. Do you send the TA in the Immediate Assignment? If not, you must indicate to the MS to send the PACKET CONTROL ACKNOWLEDGEMENT as 4 random access bursts. I am sending the TA in the IMMEDIATE ASSIGNMENT, and the DRX timer is set to something very long. I have noticed another thing. Many of the PDANs I receive have a Channel Request Description IE in them, which is normal, so I send a Packet UL Assignment in response. This was working fine. A recent change I made in the code, though, broke it. The change was to set the RRBP and poll bits on every 8th downlink data block (last three bits of BSN = 0), so that the MS has a chance to acknowledge received/missing blocks in mid-stream rather than waiting until the end. I also set the RRBP and poll bit in the last block. Partly this was so that I had early notification of whether the DL assignment was successful. Early in the message sequence the network sends PDP CONTEXT ACCEPT, and this takes 2 RLC/MAC data blocks. Both of these have the RRBP and poll bits set (the first one because BSN % 8 = 0, the second one because it is the final block). I get a PDAN for each block, which is normal, and I get a Channel Request Description, which is also normal. When I send the PUA, though, it is ignored by the MS. If I remove the code that adds the mid-flow RRBP and poll bit, then it works fine, i.e. I get a PCA for the PUA. If the RRBP and poll bits are set on both DL blocks, then the PUA is ignored. Does this make sense?
I would expect the PUA to be accepted regardless of how many data blocks have poll bits set. "If DRX mode is supported in your network, really the Immediate Assignment Command on the Abis has no paging group info as in the Paging Command. And in DRX mode an MS looks only for its own paging blocks (not for the AGCH blocks defined by BS_AG_BLKS_RES)." How do I send the Immediate Assignment Command in a paging group, if there is no paging group info? The Immediate Assignment Command does not need a paging group. It is sent over the AGCH and not the PCH. If an MS listens to the AGCH, it will listen to ALL the AGCH blocks. Indeed, an MS on the AGCH is not idle anymore: it is setting up a call (either originating the call, or merely replying to a paging that was previously sent on the PCH). DRX mode applies only when the MS is in idle mode, listening to the paging requests.
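The rule in this answer can be condensed into a small sketch (simplified; the state and return strings are illustrative labels, not 3GPP terms):

```python
# Which CCCH blocks an MS monitors: in idle mode with DRX it sleeps
# between its own paging blocks; an MS that has accessed on the RACH
# monitors every AGCH block, so no paging group applies there.
def blocks_monitored(state, drx):
    if state == "idle":
        return "own paging group" if drx else "all CCCH blocks"
    return "all AGCH blocks"   # accessing MS, after RACH
```

This is why the Immediate Assignment Command carries no paging group info: its target MS is already listening to the whole AGCH.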

Could somebody tell me on which logical channel the TFI is first allocated? The mobile receives its P-TMSI on the PCH of the CCCH. The mobile uses the RACH to answer and ask for a logical channel to communicate. On the AGCH, the information about the PDCH (TS number and frequency) is given, and perhaps its TFI. Unless the PDCH info is already given in the SI on the BCCH; but I doubt that the TFI is given on the PACCH, because in order to collect the info on the PDCH we need the TFI. So for me the TFI should be allocated on the AGCH/PCH. The TFI is

provided on the AGCH (Immediate Assignment message) if there is no ongoing concurrent TBF that could be used to establish a new TBF in the other direction. Is packet paging done per routing area or per location area (we are using CCCH)? Paging for GPRS is always per routing area (RA), defined by the couple Location Area Code (LAC) + Routing Area Code (RAC). Most operators define RAC = 0 so that there is only 1 RA per LA. This is sufficient in networks with low PS paging. Nowadays, though, there might be a need to split 1 LA into 3 or 4 RAs because of the heavy PS paging generated by smartphones. For example: RA1 = LAC + RAC01, RA2 = LAC + RAC02. I have changed the MAX_PDCH parameter in some cells and I saw no change in radio throughput. What are the possible causes? Besides the decrease in TBF reallocation requests in the concerned cell area, is there any explanation? What is your current radio throughput per TBF? What was, and what is, your value of MAX_PDCH? MAX_PDCH is just a guideline for your MFS: if there is some CS traffic in your cell, the MFS cannot allocate MAX_PDCH, but only a computed value called MAX_SPDCH_LIMIT. If you really want to increase your PDCH versus your TCH, you should also increase your MAX_PDCH_HIGH_LOAD and your HIGH_TRAFFIC_LOAD_GPRS. But as I said, the algorithm of PDCH allocation is now quite automatic and you cannot do much with parameters. Rather, you could try to increase your half-rate occupancy, add a TRX, or share the CS load with neighbor cells. The actual max radio throughput per TBF is 34 kbps in DL and 22 kbps in UL. For MAX_PDCH we have changed the value from 4 to 8. I think even if we don't change MAX_PDCH_HIGH_LOAD, we should observe a higher throughput outside the high-CS-traffic periods. Increasing MAX_PDCH will not change ANYTHING if your cell is highly loaded in TCH. Could you check these indicators: the number of PDCHs per TBF (this is a new report, located in MONO Object Distribution > GPRS distribution),
and the ratio of usage of each MCS. The point is to find out why you have such low throughputs: because users have a low number of PDCHs, or because the MCS is low? If low PDCH: increase MAX_PDCH_HIGH_LOAD and try to decrease the CS load. If low MCS: increase the MAX_MCS (I forget the exact name), check that your Abis and Ater are not congested, and check that there is no interference in the cell. There is one cell where, in the DL TBF release cause statistics, "Suspend DL TBF" always appears. The customer always experiences drops in GPRS access. I wonder if this DL TBF suspend is the cause. TBF release due to SUSPEND happens if: 1- the MS initiates CS services during PS services; 2- the cell is on the edge of a location area, and therefore the MS performs frequent location updates. The suspend is not abnormal: it is not a TBF drop. Quite the opposite! A suspend is a normal scenario, when someone receives or makes a call while a PS transfer is ongoing. What happens next is that the TBF is suspended, and will be resumed at the end of the voice call. It is not degraded behavior; it is perfectly normal. You cannot reduce the number of suspends unless you change the subscriber behavior. The other question could be about the TBFs which were suspended and are not resumed (you can check your indicators to find the ratio). That kind of

failure is not counted as a TBF drop in Alcatel. Indeed, if the resume is not done, the MS re-establishes a new TBF automatically; that is why it should not be counted as a drop. The resume will not happen if a LAC change takes place (location update). Check whether the cell is near a LAC boundary or overshooting towards a LAC border. 1- If an MS is allocated 1 dynamic PDCH, and since this PDCH is dynamic (it can also be allocated to TCH) and voice traffic is very high at that time, the TCH preempts the seized dynamic PDCH. What happens if the TCH keeps using the dynamic PDCH for a long time? I think the MS can't browse the internet anymore if the TBF does not resume. Will the internet connection be broken? 2- I have a case where, in the cell, voice traffic is low but data traffic demand is high (1 E1 Abis). I set MIN_PDCH = 4, MAX_PDCH_HIGH_LOAD = 8 and MAX_PDCH = 14 (4 TRXs). Unfortunately, the users can't use BBM (BlackBerry), email, or browsing. But when I reset the configuration to MIN_PDCH = 2, MAX_PDCH_HIGH_LOAD = 2, and MAX_PDCH = 11 (40% of available PDCH), the users can access! My analysis relates to the available GCHs on the Abis. Is there any possibility that when we set MAX_PDCH_HIGH_LOAD too high (> 40% of the available TCH, or = MAX_PDCH), it affects accessibility? As I understand it, MIN_PDCH has dedicated GCHs reserved on the Abis; MAX_PDCH_HIGH_LOAD has GCHs dedicated on demand (if the nibble/GCH is not used by voice traffic, it is used for data traffic); MAX_PDCH has no GCHs reserved. Then, perhaps, when we set MAX_PDCH_HIGH_LOAD too high, it forces the system to always reserve the GCHs on the Abis, leaving no GCHs available on the Abis anymore. Am I right? 3- Please help me clarify my understanding of this example: if I set MIN_PDCH (dedicated PDCH) = 2, MAX_PDCH_HIGH_LOAD = 3, MAX_PDCH = 8, it means that when the 2 dedicated PDCHs have been used, other MSs can use other PDCHs, for example 2 more PDCHs (which means 1 PDCH is in PDCH_HIGH_LOAD and 1 is a dynamic PDCH).
That 1 PDCH in PDCH_HIGH_LOAD cannot be used by the TCH, but the 1 dynamic PDCH can be taken by a TCH (suspend) when voice traffic is high. Am I right? In RNO/NPO, can you check the Ater congestion and the DSP load and overload (per GPU)? A bottleneck in these areas could explain the subscriber behavior. Each PDCH which is a MIN_PDCH is associated with 1 GCH. All other PDCHs are dynamically associated with their GCHs; whether they are in MAX_PDCH_HIGH_LOAD or MAX_PDCH does not change anything. Don't forget that 1 PDCH can require up to 4.5 GCH to operate at MCS9: the higher the MCS, the higher the number of GCHs needed. Decrease your MAX_MCS to save resources on the Ater PS and DSP load. 1- If the TBF does not resume, does it affect the connection? 2- Is it okay to set MAX_PDCH higher than 40% of the available TCH? From your explanation, it does not matter, provided voice traffic is low and there is enough Abis capacity (2 E1, with 1 E1 only for Extra TS, or no Abis congestion and no Ater congestion).

1- When the TBF is suspended, it means that it is actually released. In other words, the user has only a TCH TS and no PDCH at all. At the end of the voice call, a new TBF is established (= resume). During the voice call, the user cannot browse the internet. 2- Yes, you can set MAX_PDCH as high as you want, because the BSC will anyway allocate the real number of PDCHs depending on the TCH usage. Even if you set MAX_PDCH very high, the voice capacity of the cell will not get congested. MIN_PDCH are statically allocated. On each PDCH there can be up to MAX_TBF_PER_SPDCH subscribers (users are multiplexed onto the same timeslot). Each user can use up to MAX_PDCH_PER_TBF consecutive timeslots. The MFS will allocate as many PDCHs as possible before multiplexing users on the same PDCH; it can allocate up to MAX_PDCH timeslots. When voice usage gets high in the cell, the previously allocated PDCHs are de-allocated: only the last MAX_PDCH_HIGH_LOAD timeslots remain allocated to the subscribers. Users that were located on the preempted PDCHs are not dropped: the MFS attempts to reallocate their TBFs on the other PDCHs. The throughput will decrease, but the connection remains available. Note: only MSs that are DTM capable can support both PDCH and TCH at the same time. This is a B10 feature and most MSs are not DTM capable. Is it possible to activate the paging coordination feature in ALU B10, or is there something else without using the Gs interface? The problem is that when a user is in an active data session, voice calls get missed (no CS paging). In B10 only the Gs interface can fix this problem. In B11 there is another solution: BSC paging coordination. Paging coordination is a feature that allows CS paging to reach MSs which are in packet transfer mode. It can be done with the Gs interface, or with a special feature in the BSC. In ALU, the Gs is supported since B10 (or B9), while BSC paging coordination is supported in B11 only.
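The PDCH allocation behaviour described above for MIN_PDCH / MAX_PDCH / MAX_PDCH_HIGH_LOAD can be caricatured in a few lines (a rough sketch only; the real MFS algorithm is considerably more involved):

```python
# Toy model of the allocation rule: spread TBFs one-per-PDCH up to a
# ceiling, never go below the statically allocated MIN_PDCH, and shrink
# the ceiling to MAX_PDCH_HIGH_LOAD when CS (voice) load is high.
def pdch_to_allocate(active_tbf, min_pdch, max_pdch,
                     high_cs_load, max_pdch_high_load):
    ceiling = max_pdch_high_load if high_cs_load else max_pdch
    return max(min_pdch, min(active_tbf, ceiling))
```

Once the ceiling is reached, further TBFs are multiplexed onto the existing PDCHs (bounded by MAX_TBF_PER_SPDCH), which lowers per-user throughput without dropping anyone.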
We use B11 and tried to activate the paging coordination for some BSCs. Do you know which indicators are related to this feature activation? We just activated EN_BSS_PAGING_COORDINATION and EN_IR_UL at BSC level. Are there any additional parameters? Check P390a (number of CS paging requests coming from the BSC received by the GP) and P390b (number of CS paging requests sent on PACCH). The feature EN_IR_UL is not useful at all. On the other hand, T_SEND_MULTIPLE_PAGING_COMMAND and NB_MAX_MSG_MULTIPLE_PAGING_CMD are quite useful.

When paging coordination is enabled, is it mandatory to enable EN_RA_CAP_UPDATE (Radio Access Capability Update)? What is its role? It is recommended to activate RA_CAP_UPDATE because it allows the network to know more about MS capabilities. Regarding paging coordination specifically, it is not the most essential parameter. If it is not activated, then an MS which is only in a DL transfer (without any UL TBF ever established) will not be CS-pageable. Since most MS will ALWAYS establish at least 1 UL TBF (if only to TCP-ACK the DL TCP packets), this scenario will very rarely happen.

I am new to the Huawei system and have worked on Ericsson. Help me find parameters in Huawei for: half_rate_to_full_rate parameter, full_rate_to_half_rate parameter,

intra_cell_handover parameter, queuing, CLS parameter, on-demand PDCH parameter, SPDCH, dynamic allocation, uplink 4 timeslot.

Half rate to full rate: Ratio of TCHH (%). Full rate to half rate: TCH Traffic Busy Threshold (%). Intra-cell handover: Intracell HO Allowed. Queuing: Allow EMLPP. CLS: Load HO Allowed. On-demand PDCH: Maximum Ratio Threshold of PDCHs in a Cell (%). SPDCH: Huawei does not have the SPDCH concept. Dynamic allocation: PDCH Reforming. Uplink 4 timeslot: Allocate 2 TS Switch for MS over UL; up to the C12 version Huawei supports only 2 TSL in UL.

Here PDCH Reforming is for PS to CS. But which parameter is for CS to PS? If there is one, please let me know. Check out these parameters: 1- Level of Preempting Dynamic Channel, 2- Reservation Threshold of Dynamic Channel Conversion. For counters, check R93*.

Can we do half rate to full rate and full rate to half rate with quality thresholds like Ericsson? Yes: 1- Intracell AMR TCHH-TCHF Qual HO Allowed, 2- Intracell AMR TCHH-TCHF HO Qual. Threshold, 3- Intracell AMR TCHF-TCHH HO Qual. Threshold, 4- Intracell AMR TCHF-TCHH Qual. HO Allowed. There are also criteria like path loss, ATCB etc.

I am facing the problem of poor HO success at BSC boundaries within the same MSC everywhere in my network. Will enabling handover power boost (HPB) help improve the HO success rate? HPB will definitely help overcome a poor radio environment. Also try increasing the timers T3107 and T3121.

Like RPP in Ericsson, Huawei has the GDPUP (PCU): max PDCH (static + dynamic) per PCU = 1024, max cells per PCU = 2048, max PDCH per BSC = 8192 and max PCU number = 32.

With HPB, the BTS transmit power is adjusted to the maximum before the BSC sends a handover command to the MS. In addition, the BTS transmit power is not adjusted during the handover, to ensure the success of the HO. When the receive level of an MS drops rapidly, a HO occurs; in this case the BSC cannot adjust the transmit power of the MS and the BTS in time, so the MS may fail to receive the HO command, which leads to a call drop. With HPB the BTS will transmit at max power and there will be no power control. If the HPB function is enabled, the interference in the system will be slightly raised, but hopefully it can decrease the number of handover failures and handovers lost.

Is there any other parameter through which I can improve the HOSR? For improving the HOSR, it is better to stick to the basics: check for co-BSIC-BCCH, check for clock issues in the BTS etc. Other than this, you can increase the Max Resend Times of Phy. Info. Regarding the Huawei PC algorithm, it is an enhanced power control algorithm based on the Huawei II PC algorithm,

along with exponential filtering, interpolation optimization, comprehensive decision, different thresholds and FH gain.

Why is UL_TBF_EST_BSS_PB_Number (Alcatel) going up, like 50 or more per hour? Probably a SW bug on the cell. Have you tried restarting the GPRS service on the cell, or restarting the GPRS on the MFS or GPU? You can also check the UL power control for GPRS (Gamma and Alpha): try disabling it.

I suspect that the Alcatel throughput indicator (GPRS/EGPRS) is wrong. I conducted a test by forcing the MS to GPRS and downloading data continuously for a few hours. On TEMS the average throughput that I got was 65-70 kbps (with a maximum of 80 kbps for 4 TSs used). However, for the same period the throughput per TBF (GPRS_DL_Useful_Throughput_Radio_GPRS_TBF_Avg) observed on NPO was around 250 kbps, which is strange, as in GPRS the maximum throughput you can achieve would be around 80 kbps (CS4 and 4 TS). Would you please share your views? Which release are you on? This indicator's formula is based on a total volume divided by a time. Check those two values and see which one is weird; maybe you will be able to investigate. Since B10, you can check the LLC throughput distribution, which is probably more precise.

We are in B10. I have already checked the counter-level stats and see high values for data (bytes) for CS4, resulting in a high throughput. But how can we be sure whether the value reported for data transferred is correct? The denominator is doubtful in my opinion too.

GPRS_DL_Useful_Throughput_Radio_GPRS_TBF_Avg = GPRS_DL_useful_bits_CSx_ack / GPRS_DL_Active_Connection_GPRS_ack_time * 1000

GPRS_DL_Active_Connection_GPRS_Ack_Time = cumulated time duration of all active DL TBFs established in GPRS mode and RLC acknowledged mode. Note: an active DL TBF connection is a DL TBF not in a delayed release state. Are all your TBFs in RLC ACK mode? As far as I can see, that would be the only possible explanation.
Check these: GPRS_DL_LLC_Throughput_per_GPRS_ack_TBF and GPRS_DL_LLC_Throughput_per_GPRS_unack_TBF.
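The indicator formula quoted above is simple enough to sanity-check by hand. The figures below are made up for illustration; only the formula (useful bits divided by active ack-mode TBF time) comes from the thread.

```python
# Illustration of the NPO throughput indicator discussed above: the numerator
# counts RLC-acknowledged useful bits, while the denominator only accumulates
# "active" (non-delayed-release) ack-mode TBF time. If the denominator is
# under-accumulated, the ratio can exceed the ~80 kbps air-interface maximum.
# All figures below are invented for the example.

useful_bits_csx_ack = 250_000 * 3600   # bits counted over one busy hour
active_ack_time_s = 3600               # seconds of active ack-mode TBF time

throughput_kbps = useful_bits_csx_ack / active_ack_time_s / 1000
print(f"{throughput_kbps:.0f} kbps")   # 250 kbps, despite the radio maximum
```

The point of the exercise: if either counter is pegged inconsistently (e.g. unack-mode bits counted, or active time missing), the average "per-TBF throughput" stops reflecting what a drive-test tool like TEMS measures.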

The TBFs are in acknowledged mode. How is it possible? Could an MS in standby mode be in packet transfer mode? No, it is not possible, except perhaps for a very short time between the moment the MS is establishing the TBF and the moment it actually gets the TBF. But that is too short, and I doubt anyone should actually worry about it.

I noticed that the T3101 timer expires in one specific sector much more than in others: the number of T3101 expiries in that sector is 1151, while other sectors in the network show at most 110. It is probably a coverage or interference problem, or a hardware failure on the TRX that carries one of the SDCCHs. Indeed, T3101 expires because the MS cannot access the SDCCH TS of this TRX.

A hardware fault on the TRX that carries one of the SDCCHs: that was right, I have moved the SDCCH to another TS.

One of our networks is a 2G NSN network and we get the alarm 7745 CHANNEL FAILURE RATE ABOVE DEFINED THRESHOLD far too often, 10 times or more daily. Our NSN support contract has expired, so I investigated this alarm on the internet to find a solution. According to the alarm definition, if the SDCCH channel failure rate is more than 80% during 60 minutes, the network generates this alarm. I observed that cells with a high SDCCH drop rate generate this alarm. The SDCCH drop rate is high because of radio failure and Abis failure; the other causes are zero, and the major impact is due to Abis. I focused on SDCCH_abis_fail_call, and the major contributor to it was T3101 timer expiry. Some reasons for T3101 expiry are ghost access, timing advance and interference.

An SDCCH channel failure rate of 80% cannot be due to coverage, interference or ghost RACH (except with very low traffic). IMO you need to focus on your hardware! Expiry of T3101 has very little to do with Abis, except perhaps congestion of the radio signaling link of this TRX on the Abis. Is this RSL congested? So HW problem = you have to fix the BTS. Stop frequency hopping (if activated) and change the position of the SDCCH timeslot to another TRX; the problem should disappear. If not, try to put all SDCCH timeslots on the BCCH TRX only.

Recently we got a recommendation from our main headquarters to change the parameter T_NETWORK_RESPONSE_TIME from its default value of 1.6 s to 0.7 s. As they said, it should be done to decrease the DL TBF drop rate. To be honest, I don't see a clear connection between the two. My 2 questions: 1- Which value is most widely used (other than the default 1.6 s)? 2- T_NETWORK_RESPONSE_TIME corresponds to the time difference between a command sent to the SGSN and the response received at the MFS; the default value is 700 ms. T_NETWORK_RESPONSE_TIME is used to delay the DL TBF release.
If it is = 1.6 s, it means the DL TBF is kept for 1.6 s even after the last packet has been sent. This is almost a mandatory feature: after the MS receives the DL data, the TBF stays open so that DL or UL data can be sent again quickly. It greatly improves the subscriber's perception of the service. It could also increase TBF drops, because the TBF is alive for a longer time: the chance of a drop increases in proportion to the duration of the TBF. 0.7 s is the minimum value; it is roughly the round trip between the MS and the internet. The only reason for a TBF drop decrease (with T_NETWORK_RESPONSE_TIME < 1.6) is that the probability of a TBF drop is lower for a shorter TBF delay time. The TBF is not a physical resource: when it is released, it does not provide more capacity to other users. If no data is sent, a delayed TBF is active but empty, i.e. not using any of its PDCHs; the PDCHs are therefore used by other TBFs. The quicker TBF deallocation will decrease the drop rate, but I must emphasize that you will degrade QoS from the subscriber's point of view.

I want to understand the significance of USFLIMIT, which is 2 in my network, and TFILIMIT. If I set it to 0 or some other value, what will happen? USFLIMIT and TFILIMIT

come into play if you have activated any feature that keeps a TBF alive for some time even when there is no data to be transmitted. A typical example is Extended UL TBF Mode, whereby an UL TBF is kept alive even when the MS has finished sending its RLC blocks; in this case, the TBF is kept alive until the expiry of the timer ULDELAY. Keeping the TBF alive means that the TBF keeps holding a USF and a TFI. This is undesirable, because the reserved USF and TFI cannot be used to set up new TBFs which may actually have data to transmit (remember that USFs and TFIs are limited). In light of this, you specify a certain buffer of USFs (USFLIMIT) and TFIs (TFILIMIT), such that when the number of free USFs or TFIs falls below these limits, active TBFs are released immediately as soon as they finish transmitting their data. In your case (USFLIMIT = 2), whenever the number of free USFs is less than 2, any UL TBF will be released immediately after data transfer, i.e. the TBF will not be kept alive. If you set it to 0, you can have more TBF setup failures, because you may run out of USFs. On the other hand, if you set it too high, you will have many early TBF releases, which is also undesirable.

A TFI is an identity used to distinguish TBFs. Because we normally have many users in a PSET, each TBF is assigned an identity (TFI), so that any data transmitted on that TBF contains this identity. This helps the MS/BSS determine which TBF the data belongs to. A USF is a flag which is used to signal an MS to send. Once again, because we multiplex PDCHs between many MSs, the BSS has to find a way to tell an MS when to transmit its data. This is done by assigning the MS a flag (USF) on each PDCH. Whenever an MS reads its USF in an RLC/MAC block, the MS can send its data in the next UL radio block. The TFI and USF are important for normal TBF setup.
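The early-release rule just described can be sketched as a one-line decision (a simplified model based on this explanation, not vendor code; the limit values are the ones from the question):

```python
# Simplified model of the USFLIMIT / TFILIMIT rule described above: a TBF
# that has finished its data may only stay in delayed release (e.g. Extended
# UL TBF) while enough free USFs and TFIs remain for new TBF setups.

USFLIMIT = 2   # value from the question
TFILIMIT = 2   # assumed, for symmetry of the example

def keep_tbf_alive(free_usfs: int, free_tfis: int) -> bool:
    """True: TBF may stay in delayed release. False: release it immediately."""
    return free_usfs >= USFLIMIT and free_tfis >= TFILIMIT

print(keep_tbf_alive(5, 10))  # True: plenty of identifiers left
print(keep_tbf_alive(1, 10))  # False: free USFs fell below USFLIMIT
```

Setting the limits to 0 would make `keep_tbf_alive` always return True, so idle TBFs could hoard every USF and block new setups, which is exactly the failure mode described above.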
If there are no free TFIs or USFs on a PDCH (or PSET), then no additional TBF can be set up on that PDCH (or PSET).

Each TBF has a TFI value, but if my TBF is spread over 4 PDCHs, will there be the same TFI? Each TBF has one TFI. If you have 4 PDCHs in a PSET, then any TBF which is set up on this PSET is assigned one TFI. What this means is that anytime one of the PDCHs carries data belonging to a particular TBF, it simply inserts that TBF's TFI as part of the data and sends it.

What is the meaning of uplink TFI? A TFI identifies a connection for one MS. The connection can be DL, in which case the MS receives a DL TFI; or UL, in which case it receives an UL TFI; or both, in which case the MS receives a DL TFI and an UL TFI.

In the Packet Uplink Assignment message I see both the Uplink TFI Assignment parameter and the per-timeslot USF presence indication. What is the difference between them? The uplink TFI and the USF are assigned to the MS for further use: the given TFI is contained in every RLC block from that MS, while the given USF is contained in DL RLC blocks from the network to the MS when the network is expecting RLC data from the MS.

In other words:

TFI is a logical identifier: it is used to distinguish the owner of a block from other MSs. USF is a green light, sent in DL to the MS: every time the MS sees its own USF in the DL block header, it can send its own data in the next UL block.

USF is just a mechanism to multiplex several UL MSs on 1 UL TS. This green light is dynamically distributed among all the UL users, block by block. Each active user receives its own TFI. In 1 TRX of a cell, for example, there may be 30 users with DL and UL TBFs; each of them has its own TFI for UL and also a TFI for DL (they may or may not be the same value). TFI is just the identifier of a TBF.

Why are there only 8 TSs on a TRX? It is a trade-off between capacity and quality. 4, 8, 16 and 32 were tested; 8 is a balance of all benefits. If you had 16 TS on a TRX, you would have an experience similar to that of a HR user.

We have a Huawei BSC and an NSN SGSN. We are getting an RTT delay of 40 ms when pinging from the BSC to the SGSN. When we connect a laptop over GPRS and ping the FTP server, we get a 600 ms RTT delay. Are you talking about the first ping or the subsequent pings? In the first ping, the delay is due to the TBF establishment. The next pings are carried over the same TBF: there is no need for re-establishment (the TBF is kept open for a while, so that the MS can reuse it quickly). Try typing: ping iii.iii.iii.iii -l 0 -n 10. This will send 10 packets with size = 0 bytes; of course you can increase the size if you want. In any case, the MS will always send at least 4 bursts (that is the size of a radio block), that is 4 TDMA frames = 4 x 4.615 ms, roughly 20 ms. With 4 bursts, depending on which coding scheme is used, a certain amount of payload can be sent; but if you send a ping with size 10,000 bytes, it will take a much longer time to transmit. Then add the time for TBF establishment, and the time for Abis and Ater-PS resource allocation (the GCH in ALU), and that explains why the first ping takes longer. The next pings should be about 80 ms to 100 ms.

It would be helpful if someone could tell me the correlation between PDCH and GCH on the Gb. 1 PDCH can carry a certain useful throughput, which can vary from 6 kbps up to 60 kbps. 1 GCH on the Gb interface can carry only 16 kbps.
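The per-GCH capacity just mentioned, combined with the per-PDCH throughput range, gives the GCH-per-PDCH ratio. A quick calculation (the 10% signaling overhead is an assumption for illustration; only the 16 kbps per GCH and the 6-60 kbps PDCH range come from the discussion):

```python
# How many 16 kbps GCHs one PDCH needs, as a function of its coding scheme
# throughput. The 10% signaling overhead per GCH is an illustrative
# assumption, not a vendor figure.

GCH_KBPS = 16.0          # capacity of one GCH (per the discussion)
OVERHEAD = 0.10          # assumed signaling overhead (illustrative)

def gch_needed(pdch_kbps: float) -> float:
    """GCH capacity (possibly fractional) needed to carry one PDCH."""
    return pdch_kbps / (GCH_KBPS * (1 - OVERHEAD))

for scheme, kbps in [("CS1", 8.0), ("MCS6", 29.6), ("MCS9", 59.2)]:
    print(f"{scheme}: {gch_needed(kbps):.1f} GCH")
# An MCS9 PDCH at ~59.2 kbps needs ~4.1 GCH; with additional protocol
# overhead this approaches the 4.5 GCH per PDCH quoted earlier in the thread.
```

This is also why lowering MAX_MCS, as suggested earlier, saves Ater-PS resources: each step down in coding scheme reduces the number of GCHs a busy PDCH ties up.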
Therefore, 1 PDCH will need more than 1 GCH if the PDCH throughput is above 16 kbps (or a bit less than 16 kbps, due to signaling). For instance, at 60 kbps, 1 PDCH will need 4.5 GCH.

I have found several cells in the center of one city which show in the evening (BH) an increase in counter P10, up to 12%-18%. P10: number of DL LLC bytes discarded due to congestion. 1) Whenever the PDU lifetime of a DL LLC PDU expires, the counter is incremented by the number of bytes of the LLC PDU.

2) Whenever the MFS receives a DL LLC PDU and the reception buffer on the Gb interface is full, the counter is incremented by the number of bytes of the LLC PDU to discard. 3) Whenever the MFS receives a DL LLC PDU that needs to be discarded due to 1st/2nd-level GPU buffer congestion, the counter is incremented by the number of bytes of the LLC PDU to discard. 4) When the number of DL PDUs in the queue is higher than PDU_REORDER_INHIBIT_THRES_GPU (BTP parameter) during a TBF re-establishment following an abnormal TBF release, the remaining DL PDUs are discarded and the counter is incremented by the number of bytes of the LLC PDUs to discard.

My questions: 1- DL LLC PDU lifetime: which entity (SGSN or other) defines/changes this value? 2- Can the BVC flow control mechanism help to decrease the value of P10? Currently it is disabled (T_FLOW_CTRL_CELL = 0). What is a good value for T_FLOW_CTRL_CELL?

For the PDU lifetime, check the MFS parameters; if you can see it there, fine, otherwise it is a SGSN parameter. Regarding BVC flow control, I never used this mechanism. There are 2 levels of flow control: per MS and per cell; it is probably a fair idea to use flow control per cell. The flow control is governed by several parameters (at least 2 main ones). However, I must say that your main issue is a lack of Gb capacity! If you check the GP indicators (CPU load, Ater congestion) you might have very bad surprises. You should plan to add 1 or 2 GP boards for each BSC connected to the MFS, and add some Ater-PS interfaces. The mechanisms you are describing are just workarounds; in the end the user throughput will be very degraded.

Regarding the flow control mechanism, I have found one parameter for MS flow control and one for BVC flow control: T_FLOW_CONTROL_CELL = 0 and T_FLOW_CONTROL_MS = 10. I am thinking of changing the first one from 0 to 5 (keeping T_FLOW_CONTROL_CELL < T_FLOW_CONTROL_MS).
As for Ater load and GPU load, for that particular BSC I found 9 days (not consecutive) during the last month with an Ater congestion time of about 5-19 seconds (BH). But very often I see that the Aters are in the high load state, so I think it is reasonable to add at least 2 Ater-PS. We are also going to activate a new feature: BSS Paging Coordination. As I understand it, the only danger is possible GSL overload because of the increase in signaling between MFS and BSC. I have been trying to find NPO indicators like GSL_Load or GSL_Cong, something like that, but all I have found is the number of messages on the GSL link (LAPD messages). So my questions are: how can we estimate the GSL load, and what is the usual threshold for it?

Ater in high load will VERY strongly penalize the GPRS throughput from the end-user perspective. By default the Ater enters the high load state when the load exceeds 70% (that is changeable through the HI_ATER_LOAD parameter). A good approach would be to increase the threshold to around 80% and add the 2 extra Ater-PS; indeed 70% seems a bit harsh when you have many Ater-PS per GP board. After that, ensure that the Ater high load time is < 100 s at BH (for example). As a consequence, more traffic will be handled by the GPU, though, so your GPU CPU might get overloaded! Regarding the GSL: yes, there is a very simple calculation based on GSL message and volume indicators. However, keep in mind that you only need 2 or 3 GSL timeslots per GP board. To be on the safe side, put 3 GSL = 3 x 64 kbps, and that is normally enough to handle the BSS Paging Coordination.

I have high TBF releases due to PCU reselection and RA reselection, and high values for the counters FLUDISC and FLUMOVE. Can anybody explain the meaning of these counters? If I activate loss-free preemption, will this help? (In loss-free preemption the LLC PDUs remain after TBF release for 10 s or so.) The counters are defined as follows: FLUDISC: number of times the entire contents of a DL buffer in the PCU were discarded due to an inter-RA cell reselection or inter-PCU cell reselection (i.e. a FLUSH message received in the PCU deleted the contents of the PCU buffer). FLUMOVE: number of times the contents of a DL buffer in the PCU were moved to another queue due to a FLUSH message received in the PCU.

I think you should look at your RA design: high values of these counters indicate many inter-RA cell reselections. You also mentioned inter-PCU cell reselection. How many PCUs do you have? What are the PCU utilizations? I don't believe implementing loss-free preemption will help. According to Ericsson, loss-free preemption applies in the following cases: preemption of an essential PDCH; TBF setup failure; intra-cell handover of a Dual Transfer Mode (DTM) connection; the mobile switches from DTM to packet-switched-only mode (CS call release); the BSS switches the Abis transmission rate when Flexible Abis applies.

When an MS is engaged in a DL packet transfer, its data is kept in a buffer (or queue) on the PCU. Now if the MS reselects another cell, it sends a cell update message to the SGSN. When the SGSN notices that the MS was already engaged in a packet transfer, it sends a FLUSH message to the PCU of the old cell. The contents of this message can result in one of the following: if the new cell belongs to the same RA, the PCU moves the contents of the buffer to a new buffer for the MS, and the counter FLUMOVE is incremented; however, if the new cell is in a different RA, the PCU discards the contents of the buffer, and FLUDISC is incremented.
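The FLUSH handling just described reduces to a single branch on the routing area. A simplified illustrative model (not Ericsson PCU code):

```python
# Simplified model of the FLUSH handling described above: a same-RA
# reselection moves the DL buffer (FLUMOVE), an inter-RA reselection
# discards it (FLUDISC) and the SGSN must resend via the new RA.

counters = {"FLUMOVE": 0, "FLUDISC": 0}

def handle_flush(old_cell_ra: int, new_cell_ra: int, dl_buffer: list):
    """Return the DL buffer for the new cell (empty list if discarded)."""
    if new_cell_ra == old_cell_ra:
        counters["FLUMOVE"] += 1
        return dl_buffer          # moved to the MS's queue in the new cell
    counters["FLUDISC"] += 1
    return []                     # discarded

buf = handle_flush(old_cell_ra=5, new_cell_ra=5, dl_buffer=["llc_pdu_1"])
buf = handle_flush(old_cell_ra=5, new_cell_ra=6, dl_buffer=buf)
print(counters)  # {'FLUMOVE': 1, 'FLUDISC': 1}
```

This also shows why RA borders (not just cell borders) are what matter for FLUDISC: the discard branch only fires when the routing area changes, which is why the advice above is to review the RA design.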

In my network we are facing an issue: some MSs are not able to resume data services after cell reselection. Please check whether GPRS services are allowed in the cell into which they reselect. The main issue is that after reselection, 50 times out of 100 the MS is not able to resume the service. In the signaling I have noticed: 1- The MS is not responding to Packet Uplink Assignment and Packet UL Ack/Nack. 2- After this there is a series of Immediate Assignments in which the access cause is Answer to Paging, but after this the service is not resumed. 3- The service resumes after refreshing the page, reopening the site or restarting the download (i.e. when the MS sends the EGPRS packet channel request again).

Some checks: 1- Is there a PS assignment reject or failure in this case? Have you checked the reasons? 2- What is the RX_LEV_ACCESS_MIN for GPRS? Check that. 3- Is there an N3101 or N3103 overflow? 4- Is the call setup indication rate poor? In that case, check the PS paging success rate, LAC- and RAC-wise. 5- Are you using NACC or NCCR (Network Controlled Cell Reselection)? 6- What is the value of T3168? 7- Are you using Extended UL TBF?

After the reselection, it is up to the MS to establish a new UL TBF in the target cell (whether it was in a DL or UL TBF before). The sequence goes like this:

UL: Channel Request
DL: Immediate Assignment
DL: USF is given to the MS
UL: RLC Data Blocks
etc.

In the description of your problem you wrote that the MS is not responding to Packet UL Assignment and Packet UL Ack/Nack. Why do you expect a Packet Uplink Assignment in the target cell? There is no Packet Uplink Assignment after a reselection.

Answers to the checks: there are no PS assignment reject or failure messages; RX_LEV_ACCESS_MIN is -98 dBm; I found overflows of N3101 in the UL and, due to POLL, of N3105 in the DL; there is no PBCCH; it is MS-controlled cell reselection; the value of T3168 is 4 s; and yes, Extended UL TBF is used.

In the Packet Resource Request I am getting the following access types: 1- ACCESS_TYPE: (0) Two Phase Access Request, 2- ACCESS_TYPE: (2) Cell Update

During cell reselection. 1) In this case, 1 PDCH is initially reserved for the sending of 1 UL RLC block; this is used to let the MS send a Packet Resource Request message, to further specify its capabilities and/or demands. 2) Reselection of a new cell having the same RAC as the previous one.

1- If this is the reason, then why don't I get Cell Update every time a cell with the same RAC as the previous one is reselected? 2- ACCESS_TYPE: (1) Page Response: please explain this. Cell update takes place when the MS is in READY state; in STANDBY state no cell update takes place. The second one is simply a response to paging.

After every cell reselection (within the same RAC and LAC) I do not get ACCESS_TYPE = Cell Update; most of the time after reselection I receive ACCESS_TYPE = Two Phase Access Request. When a TBF transfer is ongoing, the network only needs to learn the new cell and the MS capability, so cell update takes place during READY state only. But when I am downloading data from the web, the MS is in READY state in packet transfer mode (TBF transfer); at that time, why don't I get Cell Update every time after a cell reselection? And which signaling messages must follow a cell reselection in packet transfer mode, from MS to BTS to BSC to SGSN, and from SGSN to BSC to BTS to MS?

Check your READY timer in the SGSN. What is its value?

I am working with Alcatel B9. I am trying to increase the UL/DL initial MCS. Throughput has increased, but the drop rate has also increased. The tested area has good radio conditions. Is there any way to reduce the drops? Which MCS_INIT have you chosen in DL and UL? A higher MCS is weaker, and any interference or low coverage can lead to a TBF drop. The initial MCS should be limited to 6 in DL, and in UL you can choose 6 or an even lower value. The MCS adaptation (from lower MCS to higher MCS) is quite a reactive mechanism that will efficiently increase the MCS if the radio conditions are good enough.

I would like to understand the relationship between these quantities. When we calculate the EIRP of a site it is 45 dBm, 50 dBm etc., and Rx signal levels are also measured in dBm. For example, in the same cell the EIRP is 46 dBm and the average signal level (RxLev) is -38 dBm. Explain the difference! The difference is due to the propagation of the radio waves. Apply this formula: RxLev = EIRP - Path Loss, and you should find your -38 dBm, with Path Loss = 32.44 + 20 x log10(d) + 20 x log10(f), where d = distance in km and f = frequency in MHz (900 MHz).
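The free-space formula above can be checked numerically. The 0.4 km distance below is an assumption chosen to reproduce the example figures from the question; everything else is the formula as given.

```python
import math

def fspl_db(d_km: float, f_mhz: float) -> float:
    """Free-space path loss, same formula as above: 32.44 + 20log(d) + 20log(f)."""
    return 32.44 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

eirp_dbm = 46.0
d_km = 0.4           # assumed MS-to-antenna distance for the example
loss = fspl_db(d_km, 900.0)
rxlev = eirp_dbm - loss
print(f"path loss = {loss:.1f} dB, RxLev = {rxlev:.1f} dBm")
# With d of about 0.4 km this lands near the -38 dBm reported in the question.
# Real measurements also include MS antenna gain, clutter and fading margins.
```

Note the free-space model is optimistic; in an urban macro cell a model like Okumura-Hata would predict the same RxLev at a much shorter distance.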

EIRP (dBm) = antenna input power + antenna gain; it is constant in any area served by a given cell. RxLev = EIRP - Path Loss, and it depends on the distance from the antenna plus the terrain.

What is RLT (Radio Link Timeout)? RLT is a mechanism in the BTS and in the MS. At the BTS side: every time the BTS receives a good SACCH frame that can be decoded, the radio link counter remains at its maximum value (if already at the maximum) or is incremented by +1; every time the BTS receives a SACCH frame that cannot be decoded (probably because poor radio conditions degraded the signal), the RL counter is decreased by 2.
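The counter mechanism can be simulated directly. This sketch uses the +1/-2 steps just described, together with the start value of 18 and the drop-at-zero rule from the RADIO_LINK_TIMEOUT_BS parameter description:

```python
# Radio link timeout counter as described in this thread: start at 18,
# +1 per good SACCH frame (capped at the start value), -2 per undecodable
# frame, drop the call when the counter reaches 0.

RADIO_LINK_TIMEOUT_BS = 18

def frames_until_drop(sacch_frames):
    """sacch_frames: iterable of booleans (True = frame decoded OK).
    Returns the number of frames processed before the call drops,
    or None if the call survives the whole sequence."""
    counter = RADIO_LINK_TIMEOUT_BS
    for i, ok in enumerate(sacch_frames, start=1):
        if ok:
            counter = min(counter + 1, RADIO_LINK_TIMEOUT_BS)
        else:
            counter -= 2
        if counter <= 0:
            return i    # radio link failure: call dropped
    return None

print(frames_until_drop([False] * 20))              # 9 consecutive bad frames
print(frames_until_drop([True, True, False] * 50))  # None: losses are recovered
```

The asymmetry (+1 up, -2 down) means a sustained loss of roughly one SACCH frame in three is enough to bleed the counter toward zero, while occasional losses are absorbed.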

Initially the RL counter = 18 (parameter RADIO_LINK_TIMEOUT_BS). Once the value reaches 0, the call is dropped (the radio resource is released and a message is sent to the BSC). At the MS side, the mechanism is the same, with the exception that when the counter reaches 0, the MS releases the call by itself without informing the BTS.

How many subscribers can use EDGE at the same time on the same TS? It depends on the value you set for the parameter that controls it. Max values: 6 TBF per PDCH in UL and 10 TBF per PDCH in DL. Parameter names: MAX_UL_TBF_PER_PDCH and MAX_DL_TBF_PER_PDCH; the default values are almost the max values. At the maximum TA (35 km), can I use the service? Yes.

What is TFI (Temporary Flow Identity)? TFI is the identity of a TBF. It is a header field in each radio block that is used to define which blocks belong to whom.

How many GPRS users can be on 1 sector simultaneously? I have information that it is 15; is that correct? The number of GPRS users in 1 cell is theoretically very high. What you can do is limit the number of TBF per PDCH (= number of GPRS users per timeslot) and also the number of PDCH per cell; that will limit the total number of GPRS users. But there is no static limit. For example: (a) 10 PDCH and 5 DL TBF/PDCH, or (b) 8 PDCH and 2 DL TBF/PDCH (note: 1 user should get 3 or 4 PDCHs per TBF). In case (a) you could expect about 15 users simultaneously or more (peak = 50); in case (b) you could expect 3 to 5 users simultaneously (peak = 16).

1) In the edit cell window in the OMC-R I can see the parameter MAX_PDCH_PER_TBF = 5. It means that 1 user can theoretically occupy 5 timeslots, but as far as I know no MS can do that (the maximum is 4). Will something change if we set MAX_PDCH_PER_TBF = 4 instead of 5? 2) I found 2 parameters in the OMC-R for supervising the number of TBF per PDCH, separate for DL and UL. MAX_UL_TBF_SPDCH = 5: it means that in UL we

can share one PDCH between 5 users (as I have heard that in B10 we have only SPDCH and no MPDCH), correct? And MAX_DL_TBF_SPDCH = 8 means that in DL we can share one PDCH between 8 users, correct? Both are correct.

The major parameters in GPRS are MAX_PDCH, MAX_PDCH_HIGH_LOAD and MIN_PDCH. I just can't understand the difference between MIN_PDCH and MAX_PDCH_HIGH_LOAD. I set MIN_PDCH = 3 and saw in the stats that 3 TSs were always busy. Then I set MIN_PDCH = 0 but MAX_PDCH_HIGH_LOAD = 3 and got the same 3 TSs always busy. Why? That is normal; it is the way it works. The MAX_PDCH_HIGH_LOAD timeslots are cleared only when there is no more GPRS traffic in the cell, in which case they can be used for TCH. MIN_PDCH timeslots are always booked as PDCH, even when there is no PS traffic.

Does anybody have an idea how to decrease the round trip time delay in Alcatel B10? Normally, you ping your GGSN many times consecutively; the RTT is the average ping delay, excluding the first ping (because of the TBF establishment).

I need solutions for PDCH allocation failures. Gb congestion does not prevent a TBF establishment or PDCH allocation; Gb congestion leads to throughput reduction, thanks to the flow control mechanisms in the SGSN and the PCU (MFS). PDCH allocation failure can be due to: Abis congestion; Ater-PS congestion; radio TS congestion (too many TCH, or too many PDCH according to the parameter settings); PCU congestion (GPU/DSP congestion in ALU); interference issues; coverage issues.

I am getting a difference between the data downloaded at cell level (bytes of LLC frame downlink data TBF, including GPRS and EGPRS) and at the GGSN level (bytes of GTPv1 data packets received from Gn/Gp). The difference is about 40 GB. That is all the GMM traffic, which does not go through the GGSN because it is generated by the SGSN (GPRS attach, authentication, paging etc.).

Does anybody have experience optimizing the N3105_LIMIT parameter on an Alcatel-Lucent network? Could it impact the DL_TBF_DROP_RATE in a positive way? This parameter is used to detect drops; note its exact definition. It is similar to the radio link timeout counter for voice. Here is the definition: for a DL (E)GPRS TBF, this parameter defines the maximum number of expected Packet DL Ack/Nack or Packet Control Acknowledgement messages consecutively lost on the radio interface before triggering an abnormal release of the DL TBF. I would say you just change the way you count TBF drops: even if it decreases the TBF drop percentage, from the end-user point of view a drop is still a drop, even if it wasn't counted by the MFS. There is a defense mechanism, though, whereby the PDCH goes back to MCS1 or CS1 if packets are lost; perhaps by increasing the N3105 limit you give more time for this mechanism to work.

I have some BSCs showing 4000 to 5000 hits of counter P105, which is: P105 = number of GCH frames badly received by the MFS (due to a bad CRC), pegged whenever the EGCH layer detects that a GCH frame has not been correctly decoded in the MFS. There was a case in the past which was resolved by rebooting the MFS. P105 should remain below 0.05%.

I am currently looking at how the PCU processes a CHANNEL REQUEST PDU from a GPRS device for allocating an UL TBF. As the PCU normally resides with the BSC, is it correct to think that the CHANNEL REQUEST is received over the RSL link? If so, will it come in as an 08.58 DATA INDICATION? The 08.58 CHANNEL REQUIRED does not give any indication that the 04.08 CHANNEL REQUEST is for a TBF as opposed to an SDCCH. Another question: does the BSC need to activate a PDCH (08.58 CHANNEL ACTIVATION) before that PDCH can be used for a TBF?

There is the 04.18 CHANNEL REQUEST for a GPRS/EGPRS request, and the 04.60 PACKET RESOURCE REQUEST message from an EGPRS-capable MS (for establishment of a concurrent TBF). The channel request is sent over the RSL. Regarding the PDCH, it is highly dependent on the vendor you are using. The PCU can be located in the BSC, in the SGSN, or even be standalone equipment (the MFS in Alcatel). The PDCH allocation/deallocation can be performed by the PCU, but this is not standardized. So there are many ways to activate/deactivate a PDCH; it is safe to assume that the PCU is always in charge of allocating and deallocating the PDCH, but it might need the confirmation of the BSC. The PDCH must be allocated before being used by the TBF; the deallocation can be done with a timer (after the release of the TBF).

Will the channel request show up on the RSL link encapsulated within an 08.58 DATA INDICATION PDU? My understanding for GSM is that the BTS always intercepts the 04.08 CHANNEL REQUEST and converts it into an 08.58 CHANNEL REQUIRED on the RSL link.
Will the BTS do something different if the CHANNEL REQUEST indicates a packet-data-related cause? It also seems that if the BTS is not configured with a PCCCH (but only a CCCH), the uplink packet assignment should be sent back to the MS within an 04.08 IMMEDIATE ASSIGNMENT. Who is then responsible for generating this IMMEDIATE ASSIGNMENT (the BSC or the PCU)? And if the PCU is not physically located in the BSC, how can the PCU have access to the RSL link?

The BTS will do nothing different. The EGPRS channel request is sent as a channel request in the RSL; the content of the CHANNEL REQUIRED will indicate to the BSC that it is for EGPRS. The PCU generates a channel assignment, which is sent to the BSC. The BSC conveys it as an IMMEDIATE ASSIGNMENT COMMAND to the BTS, and the BTS sends an IMMEDIATE ASSIGNMENT on the PCH or the AGCH depending on the DRX mode of the mobile. If the PCU is external to the BSC, then it is connected to the BSC via a dedicated interface (called Ater PS), and the BSC will route all GPRS-related messages from the BTS (RSL or not) to the PCU. Keep in mind that the CHANNEL REQUIRED is an Abis message, and a vendor can modify the 3GPP format quite extensively in order to fit its needs. For example, in the case of an 11-bit access burst (EGPRS channel request, with a random access reference of 11 bits), the RA field is not big enough to carry the whole information (only 8 bits), so an optional field at the end of the CHANNEL REQUIRED might be used.
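The last point above, that 11 bits cannot fit an 8-bit RA field, can be sketched with a toy split/join. The field layout here is purely illustrative (not any vendor's actual encoding): it only shows that three bits must travel in some optional extension field.

```python
# Illustrative only: an 11-bit EGPRS access reference does not fit the
# 8-bit RA field of the Abis CHANNEL REQUIRED, so the high bits would
# need an optional extra field. The layout below is hypothetical.

def split_access_reference(ra11):
    assert 0 <= ra11 < 2**11
    ra_field = ra11 & 0xFF         # low 8 bits: fit the standard RA field
    extension = (ra11 >> 8) & 0x7  # high 3 bits: need an optional field
    return ra_field, extension

def join_access_reference(ra_field, extension):
    return (extension << 8) | ra_field

ra, ext = split_access_reference(0b101_1100_1010)
assert join_access_reference(ra, ext) == 0b101_1100_1010
```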

Can someone tell me what feature of GPRS/EGPRS will not be working if there is no packet paging in a BSC? What is the purpose of packet paging, and what are the probable causes of zero packet paging?

PS paging is a mandatory message. It has the same purpose as CS paging: an MS can be paged by the network in order to establish a GPRS connection. Zero packet paging means that all users are establishing their own TBFs; there is no case in which an MS is paged by the NSS in order to open a data connection. Now, if there are low-bandwidth, long-duration connections (such as MSN Messenger) during which the user can remain inactive for quite a while until somebody sends him a message, that could lead to GPRS paging.

Any idea for zero paging? We have no Gs interface and Network Operation Mode is set to 2, i.e. in GPRS active mode the MS, when paged, does not respond, as it is doing the packet transfer.

As long as you are using Network Operation Mode 2, there is no Packet Paging Channel in your network (only the Paging Channel), and hence the subscriber is paged for GPRS services on the CCCH. Are you offering your subscribers any GPRS services which require GPRS paging? Web browsing, mail, and incoming MMS (which may be realized without GPRS paging by means of MT-SMS with WAP Push) require no paging. A packet paging might occur (rarely, I agree) if the subscriber has sent UL data and the internet is taking time to answer: the UL TBF is released, the DL TBF is not yet allocated, and the MS enters Packet Idle Mode. Only a paging can wake it up so that it can get prepared for the incoming DL data. But when the MS is in READY state, the network does not page the MS, because it knows the cell where the MS is located, and the DL TBF is established by means of the PACKET DOWNLINK ASSIGNMENT procedure.

BS_CV_MAX? This parameter specifies the maximum countdown value of the MS. It defines BS_CV_MAX, which the MS uses to calculate the CV; it also determines the duration of the T3198 timer.
Every time the MS sends an UL RLC data block, the receive state of the block is set to Pending and T3198 is started. If the MS receives a Packet UL ACK/NACK message before T3198 expires, it updates the receive state of each UL RLC data block based on the acknowledgement bitmap contained in the message. If T3198 for an RLC data block in the Pending state expires, the MS sets the receive state of that block to Nack and retransmits the block.

What is the usual value of BS_CV_MAX, and what is it normally based on? If this parameter is set too low, the MS may retransmit RLC data blocks before the BSC sends an UL acknowledgement message; thus many radio resources are occupied without being used. If it is set too high, the speed of the sliding window decreases and the probability of stalling during the UL TBF countdown increases, decreasing UL transmission performance. To set this value accurately, you first need to estimate the transmission delay between the MS and the BSC; the value is set based on that delay. Default value: 10.

I get the BS_CV_MAX value from the SYSTEM INFORMATION 13 message, but still not the CV value. My questions are: 1- Are BS_CV_MAX and CV the same? 2- If not, where do we get the countdown value (CV)?
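The T3198 retransmission rule described above can be sketched as a small state machine on the MS side. This is a simplified illustration (timer handling reduced to explicit expiry calls, names hypothetical), not a real RLC implementation:

```python
# Simplified sketch of the T3198 bookkeeping described in the thread.
# Timers are abstracted to explicit expiry calls.

PENDING, ACKED, NACKED = "pending", "acked", "nacked"

class UplinkWindow:
    def __init__(self):
        self.state = {}  # BSN -> receive state

    def send_block(self, bsn):
        self.state[bsn] = PENDING  # T3198 started for this BSN

    def on_packet_ul_ack_nack(self, ack_bitmap):
        # Update each block from the acknowledgement bitmap.
        for bsn, acked in ack_bitmap.items():
            self.state[bsn] = ACKED if acked else NACKED

    def on_t3198_expiry(self, bsn):
        # No ACK/NACK seen in time: mark the block for retransmission.
        if self.state.get(bsn) == PENDING:
            self.state[bsn] = NACKED

win = UplinkWindow()
for bsn in range(3):
    win.send_block(bsn)
win.on_packet_ul_ack_nack({0: True, 1: False})  # bitmap covers BSN 0-1
win.on_t3198_expiry(2)                           # BSN 2 times out
assert win.state == {0: ACKED, 1: NACKED, 2: NACKED}
```

Blocks in the NACKED state are the ones the MS resends, which is why a too-short T3198 (or too-low BS_CV_MAX) wastes radio resources on premature retransmissions.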

1- The MS shall send the countdown value (CV) in each UL RLC data block to indicate the current number of remaining RLC data blocks for the UL TBF. Let x = round((TBC − BSN − 1) / (NTS × S)); then CV = x if x ≤ BS_CV_MAX, and CV = 15 if x > BS_CV_MAX, where TBC is the total number of RLC data blocks currently to be transmitted in the TBF, BSN is the block sequence number of the RLC block, NTS is the number of timeslots assigned to the UL TBF, and S = 2 usually. 2- I think you can check the CV value in the UL mode reports.

RLC/MAC in one-phase access and two-phase access? One-phase access: when the MS is planning to send fewer than 8 RLC blocks, it asks for 1 PDCH only. Two-phase access: when the MS is planning to send more than 8 RLC blocks, or when it is planning to use EGPRS; as far as I remember, it sends its multislot configuration so that the PCU can allocate as many PDCHs as possible for the MS. In both cases, it is up to the MS to decide whether to do one- or two-phase access. The MS knows how much data needs to be transmitted: if the user has just switched on the phone, the MS has to send a GPRS ATTACH, which is n bits = k RLC blocks (values already known by the MS); if the user is uploading a file, then the MS knows the size of the file, so it can see whether it is more than 8 RLC blocks or not.

I have one query regarding the maximum throughput achievable in a cell. Let's say we have good radio conditions, so we can easily achieve the highest-throughput coding scheme, MCS-9, from the start; even in the MCS adaptation procedure, the MEAN_BEP and CV_BEP values indicate that the new MCS should be MCS-9. Now let's say that we don't have enough resources on the transmission side (we know that in order to achieve MCS-9 we need 5 Abis nibbles corresponding to 1 TS). Provided that we don't have the required transmission resources, will we still be using MCS-9 with lower throughput, or will there be MCS adaptation switching us to lower coding schemes? If yes, by which algorithm is it shifted to a lower MCS?
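The CV rule quoted in point 1 above can be written directly as a small function. This is a sketch of the formula exactly as stated in the post (with S defaulting to 2), not a full 44.060 implementation:

```python
# CV sketch, per the rule quoted above: CV saturates at 15 until the
# remaining-blocks estimate drops to BS_CV_MAX or below.

def countdown_value(tbc, bsn, nts, bs_cv_max, s=2):
    # tbc: total RLC blocks in the TBF; bsn: block sequence number;
    # nts: timeslots assigned to the UL TBF; s: usually 2 (per the post).
    x = round((tbc - bsn - 1) / (nts * s))
    return x if x <= bs_cv_max else 15

# With 100 blocks, 1 timeslot and BS_CV_MAX = 10, CV stays at 15 early on
assert countdown_value(tbc=100, bsn=0, nts=1, bs_cv_max=10) == 15
# ...and counts down near the end of the TBF
assert countdown_value(tbc=100, bsn=95, nts=1, bs_cv_max=10) == 2
```

This also answers question 1 of the previous post: BS_CV_MAX is a broadcast cap (from SI13), while CV is the per-block value the MS computes from it.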
The max MCS possible for the TBFs on one TRX is computed from the number of GCH available. If there are not enough GCH for that TRX, the max MCS of that TRX is decreased: nb_gch = 3 → max MCS = 6, nb_gch = 4 → max MCS = 7, etc. The MCS is then decreased per TBF depending on the TBF radio quality, but it cannot exceed this max MCS.

Does multiplexing on the radio interface (radio TS) decrease the throughput (provided that the transmission resources are the same)? Take two examples: 1- two MSs are multiplexed on the same 4 PDCHs (say MAX_PDCH = 4) but have full transmission resources (number of GCH = 5, MCS-9); 2- the two MSs are allocated different PDCHs (no multiplexing on the radio interface, MAX_PDCH = 8) and have full transmission resources as well (number of GCH = 5, MCS-9). Will the throughput experienced by both users be the same?
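The Abis capping described in the answer above can be sketched as a simple lookup. Only the figures actually quoted in the thread (3 GCH → MCS-6, 4 GCH → MCS-7, 5 GCH → MCS-9) are filled in; the table and function names are illustrative, not ALU's actual algorithm:

```python
# Sketch of the GCH-based MCS cap from the thread. Only the quoted
# data points are included; the mapping is illustrative.

GCH_TO_MAX_MCS = {3: 6, 4: 7, 5: 9}  # 5 GCH needed for MCS-9 per TS

def max_mcs_for_trx(nb_gch):
    # Cap at the highest MCS we have enough GCH for.
    usable = [mcs for gch, mcs in GCH_TO_MAX_MCS.items() if gch <= nb_gch]
    return max(usable) if usable else 1  # fall back to MCS-1

def effective_mcs(requested_mcs, nb_gch):
    # Link adaptation may request any MCS, but never above the Abis cap.
    return min(requested_mcs, max_mcs_for_trx(nb_gch))

assert effective_mcs(9, nb_gch=3) == 6  # radio wants MCS-9, Abis caps at 6
assert effective_mcs(9, nb_gch=5) == 9  # enough GCH: MCS-9 allowed
assert effective_mcs(5, nb_gch=4) == 5  # radio quality is the limit here
```

So the answer to the earlier question is: the MS is not kept on MCS-9 with degraded throughput; the per-TRX max MCS itself is lowered when transmission resources are short, and per-TBF radio adaptation then works below that cap.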

It will be half in case 1: each MS can use a PDCH in only one occurrence out of two. It is like half-rate: instead of getting 100% of the 4 PDCHs, which is 59.2 × 4 kbps, each MS gets on average 59.2 × 4 / 2 kbps.

I am working in an Alcatel-Lucent network and we have been facing some issues with the UL_TBF_Estab_Fail_Radio rate. The problem is reported in some cells where the failures rise for one hour and then suddenly disappear. On the GSM side everything is fine in the stats, so RF problems are ruled out. The issue is affecting the global KPI. The Gamma and Alpha parameters are set to 0. The KPI is around 2.6% and needs to be below 1.5%. I suspect there is a little bug hidden deep down in the software.

Alpha and Gamma are parameters related to UL power control in GPRS. If they are set to 0, it means that UL PC for GPRS is disabled, and they should be set in such a way that the PC is disabled: indeed, it can lead to some failures if the PC is enabled.

We have satellite links on a BSC and I am looking for ALU parameters (timers) to set in order to improve GPRS performance. Satellite Ater or satellite Abis? There are about 5 to 6 timers that must be changed; they can be found in the generic customer documentation quite easily. Search for "satellite".

After the IR (UL/DL) activation in our network, we observed an increase of the TBF drop rate in DL. Is that normal? What are your new parameter settings related to IR and resegmentation? The changed parameters are EN_IR_DL_FULL_PER_CELL and EN_IR_UL_PER_CELL. The TBF drop rate in UL also increased, but only after Extended_TBF_UL activation. As far as I know, in Alcatel there is no EN_IR_DL parameter; there are only EN_IR_UL and EN_RESEGMENTATION_UL. Based on ALU documentation, EN_IR_UL should be activated (ENABLE); activating both at the same time may reduce the gain. But why is the default value 0? If EN_IR_UL is ENABLE, it supposedly improves the accessibility and retainability of UL TBF establishment, but I don't know why it would increase the DL TBF drop rate.
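Going back to the multiplexing answer at the top of this exchange, the half-rate arithmetic (59.2 × 4 / 2) can be checked with a one-line model. The 59.2 kbps per-PDCH figure for MCS-9 is the one used in the thread:

```python
# Back-of-the-envelope check of the multiplexing answer above.

MCS9_KBPS_PER_PDCH = 59.2  # MCS-9 RLC throughput per PDCH, per the thread

def user_throughput_kbps(n_pdch, users_per_pdch):
    # Each PDCH is time-shared evenly between the MSs multiplexed on it.
    return MCS9_KBPS_PER_PDCH * n_pdch / users_per_pdch

# Case 1: two MSs multiplexed on the same 4 PDCHs -> half rate each
assert user_throughput_kbps(4, users_per_pdch=2) == 59.2 * 4 / 2
# Case 2: each MS on its own 4 PDCHs -> full rate
assert user_throughput_kbps(4, users_per_pdch=1) == 59.2 * 4
```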
To allow the best throughput in DL, it is strongly recommended to set this parameter to 0, i.e. not to forbid resegmentation. When you enable this parameter, you actually force the system to stay at a high MCS; when you disable it, you allow it to use lower MCSs. In both cases, IR is used whenever possible! In a network with good radio conditions all over, you can try to ENABLE it, but in typical networks, with sometimes bad conditions, it is safer to keep it DISABLE.

What are the main indicators in RNO for GPU monitoring? The GPU is not an RNO object, but you can monitor GPU indicators by using "Other objects" in the RNO interface. NB: you must decode the name of the GPU. Normally, you check, per GPU, the number of LLC PDU frames transferred (for traffic load) and the number of TBFs established. You can also monitor other establishment-failure KPIs, like the number of failed establishments due to BSS problems. Per GPU, the most important indicators are, in my opinion:

- Ater congestion rate
- Number of GCH busy (112 GCH per Ater)
- DSP load
- CPU load

I think that in RNO the load is given only when the MFS is in load state, and therefore the normal load cannot be evaluated. Yes, I think you are correct: this indicator does not represent the current load but rather the time the GPU spends in load state (above 80%).

Has anybody used the Extended UL TBF (IEUTM) feature in the Alcatel system? What is the expected effect if it is enabled? We have activated Extended UL TBF in some urban cells. Regarding KPIs, UL TBF drops increased a bit, and the DL TBF establishment success rate improved. We did some data tests and the round-trip time improved (decreased). The only drawback is the increase in UL TBF drops.

I am trying to get a GPRS DL TBF established. The cell has no PCCCH. I assign the PDCH using the AGCH and the handset sends me blocks of data, which is good. I can build the PDU and I can acknowledge the UL TBF, which is also good. The phone then stops sending data and sends a Packet Control Acknowledgement, which is all good as well. But according to the 3GPP specs I should be able to send a PACKET DOWNLINK ASSIGNMENT to establish a DL TBF. When I send this I don't get an acknowledgement back. If I send the assignment and then send DL data blocks, I never get a Packet DL ACK, so I don't think the DL assignment is being received by the handset. The RX and TX paths are OK (a BER test verifies this), and I have tried many variations of the PACKET DOWNLINK ASSIGNMENT message in case some parameter is wrong or missing. So I am wondering: (a) should I get a Packet Control Acknowledgement for a PACKET DOWNLINK ASSIGNMENT message? (b) Is there some restriction (e.g. only specific blocks) on where I can send DL control messages?

After the IMMEDIATE ASSIGNMENT, what are you receiving from the MS? It is strange that the MS is acknowledging this message, or even sending blocks; it is supposed to be a DL TBF establishment.
On receipt of an IMMEDIATE ASSIGNMENT message, the MS stops monitoring the DL CCCH, switches to the assigned PDCH, and starts listening for DL RLC/MAC blocks identified by the assigned DL TFI; it starts timer T3190. So what you can check is:
- Are you sending the data blocks on the correct PDCH?
- Are you using the correct TFI? The correct TA?
- Has T3190 already expired for the MS?
- Is your Packet DL Assignment in the right format?

The IMMEDIATE ASSIGNMENT is for a packet UL assignment in response to a single-phase channel request. Once the PDCH is active I get data blocks on the channel, which is good. Once I have received all the data blocks, I acknowledge the data; the MS then stops sending data blocks and sends a Packet Control Acknowledgement, which is also good.

What I want to do then, while the PDCH is still allocated, is to allocate a DL TBF. As I understand it, I use a PACKET DOWNLINK ASSIGNMENT for this, but the MS never seems to respond to it. I think there are two possibilities: either there is a timing issue and the PACKET DOWNLINK ASSIGNMENT has to go in a specific block, or has to be sent before the UL TBF is acknowledged, or something similar. The other possibility is that the PACKET DOWNLINK ASSIGNMENT message is not formatted correctly.

I found no requirement in 44.060 to send a PACKET CONTROL ACKNOWLEDGEMENT in response to a PACKET DOWNLINK ASSIGNMENT in packet transfer mode (i.e. when an UL TBF is already assigned). Moreover, to force the MS to send a PACKET CONTROL ACKNOWLEDGEMENT, a downlink message should contain the poll bit set to 1 and a valid RRBP field in its header.
