
Pre-talk on

Some Studies on Adaptive Decision Feedback Equalizer for Wireless Systems

By
Ch. Sumanth Kumar
Research Scholar
Under the guidance of
Prof. K.V.V.S. Reddy
Department of Electronics and Communication Engg.
A.U. College of Engineering (Autonomous)
Andhra University
Visakhapatnam
Outline
- Adaptive Decision Feedback Equalizer --- Background, Issues & Challenges
- Contributions made by this thesis
  - Implementation of modified fast block LMS algorithm
  - Modified fast block LMS algorithm based ADFE
  - ADFE using different variants of the LMS algorithm
    - Normalized modified block LMS based ADFE
    - Signed modified block LMS based ADFE
    - Normalized signed modified block LMS based ADFE
    - Partial update sign normalized LMS based ADFE
- Real time implementation of ADFE using TMS320C6713
- Conclusions
- References
- List of publications from the thesis
Adaptive Decision Feedback Equalizer --- Background
- Communication channels may be characterized by
    H_c(ω) = |H_c(ω)| e^{jθ_c(ω)}.
- Amplitude distortion results if |H_c(ω)| is not constant within the bandwidth of the signal.
- Phase distortion results if θ_c(ω) is not a linear function of ω, i.e., the delay is not constant.
- The result is signal dispersion (smearing).
- The overlap of symbols owing to smearing is called ISI.
Intersymbol Interference
- A major cause of performance degradation in many communication systems is the introduced ISI, due to the time-dispersive characteristics of the involved channels.
- The problem is particularly important in wireless transmission systems due to multipath effects.
- Ideally, if the Rx transfer function is the inverse of that of the channel, it is possible to get back the undistorted signal and make correct decisions about the transmitted symbols.
- The equalizer is one functional unit that tries to nullify the ISI.
Problems with the linear equalizer
[Figure: channel H(z) with input a_k and additive noise n_k, followed by linear equalizer C(z) and quantizer; ideally H(z) C(z) = 1, i.e., C(z) = 1/H(z).]
- The power spectrum of the error can be written as
    S_e = S_a |H(z) C(z) - 1|^2 + S_n |C(z)|^2,
  where S_a is the power spectrum of the data symbols and S_n is the power spectrum of the noise process.
- If C(z) = 1/H(z), the ISI contribution to the error vanishes.
- If H(z) has a spectral null, i.e., H(z) = 0 for some z at any frequency within the bandwidth of a_k, the power of the noise is infinity.
- Even without a spectral null, if some frequencies in H(z) are greatly attenuated, then the equalizer will greatly enhance the noise power.
- The Decision Feedback Equalizer (DFE) is an effective means for equalizing channels that exhibit spectral nulls.
Decision Feedback Equalizer
[Figure: block diagram of a DFE, showing the channel followed by the feedforward filter (FFF), quantizer, and feedback filter (FBF), with FFF output y_f(n) and FBF output y_b(n).]
- The DFE employs a feedforward filter (FFF) to equalize the anticausal part of the channel impulse response.
- The channel-FFF cascade forms a causal system with impulse response 1, h_1, h_2, .... The feedback filter (FBF), with w^b_1 = h_1, w^b_2 = h_2, ..., works on past decisions (assumed correct).
- The residual ISI at the FFF output y_f(n) is cancelled by subtracting the FBF output y_b(n) from y_f(n).
- In most communication systems the variation of the channel characteristics over time is significant; the equalizer should be able to adapt itself to combat the ISI.
- In such cases the Adaptive DFE (ADFE) is used.
- The FFF and FBF coefficients are trained by the LMS algorithm.
LMS Algorithm
- Self-learning: filter coefficients adapt in response to the training signal.
- Filter update: Least Mean Squares (LMS) algorithm.
[Figure: adaptive filter W(z) with input x(n), output y(n), desired signal d(n) and error e(n).]
Basic ADFE
[Figure: ADFE structure, with FFF and FBF, input x(n), reference v(n) taken from the training signal or past decisions, outputs y_f(n) and y_b(n), decision device output ŷ(n), and error e(n) in training or decision directed mode.]
- w^f(n) = [w^f_0(n), ..., w^f_{p-1}(n)]^t : FFF weight vector
- w^b(n) = [w^b_1(n), ..., w^b_q(n)]^t : FBF weight vector
A common problem faced by the ADFE is that
- increasing the data rate increases the channel impulse response length,
- which increases the order of the FFF and FBF,
- which increases complexity and makes real time operation difficult.
- Complexity further goes up for fast converging equalizers, such as those belonging to the RLS family, which require a reduced training sequence, a valuable saving in bandwidth.
- As the complexity increases, power and chip area requirements also go up.
Complexity issues and related research in ADFE:
- Complexity reduction of high speed ADFE has remained a topic of intense research over the last two decades.
- At the block or architecture level, several pipelining and parallel processing techniques have been developed by Parhi, Wu et al. to achieve high processing speed.
- At the algorithmic level, Berberidis et al. recently proposed some block and frequency domain based techniques.
Davidson et al. and Cioffi et al. proposed high speed ADFEs, but they do not track time varying channels effectively, since the filter coefficients are adapted only once in every M-th sample, M being the block size.
Parhi proposed pipelining algorithms with quantizer loops. Here, by employing the look-ahead computation technique, loops containing nonlinear devices are transformed to equivalent forms which contain no nonlinear operation. But such implementations are practical only for low order ADFEs, since the hardware complexity can become enormous for higher order filters.
Gatherer et al. proposed a parallel ADFE algorithm, which was modified as the extended LMS ADFE algorithm: the input data samples are broken into M blocks of samples each and are processed by M ADFEs in parallel. Their algorithms, however, suffer on two counts, namely, incorrect initialization of the FFF, and a coding loss, as extra samples are required to be transmitted for initializing the FBF.
Recently, Parhi and Lin independently proposed several
architectures to implement ADFE for gigabit systems.
Berberidis et al. presented a new block ADFE that is
mathematically equivalent to the conventional LMS based
sample by sample DFE but with considerably reduced
computational load.
Shanbhag et al. proposed several high throughput
architectures utilizing fine-grain pipelining of the arithmetic
elements. But fine grain pipelining of an ADFE is
intrinsically difficult, since the ADFE output must be
available at the end of each iteration in order to cancel the
effects of pre-cursor ISI.
Douglas S.C. proposed adaptive filters with partial updates to achieve faster convergence with low complexity, where only a part of the filter coefficients is updated in each iteration.
Mahesh G. et al. proposed the stochastic partial update LMS algorithm [78], where the filter coefficients are updated in a random manner.
Dogancay K. et al. proposed the selective partial update LMS algorithm [31], where the selection criterion is obtained from the solution of a constrained optimization problem.
We have made an attempt to develop efficient realizations of Adaptive Decision Feedback Equalizers by considering different combinations and variants of the LMS algorithm, to improve the computational speed as well as to reduce the computational complexity.
Contributions made by this thesis
- Efficient realization of the FFT based modified block LMS algorithm
- Implementation of ADFE using the modified block LMS algorithm
- Normalized modified block LMS based ADFE
- Signed versions of modified block LMS based ADFE
- Normalized signed modified block LMS based ADFE
- Partial update sign normalized modified block LMS based ADFE
- ADFE implemented in real time using the TMS320C6713 DSP processor
Basic ADFE Equations:
  y(n) = w^t(n) x̃(n),
  ŷ(n) = Q[y(n)],
  w(n) = [w^{f t}(n)  w^{b t}(n)]^t,
  x̃(n) = [x^t(n)  v^t(n-1)]^t,
where,
  x(n) = [x(n), x(n-1), ..., x(n-p+1)]^t,
  v(n-1) = [v(n-1), ..., v(n-q)]^t.
ADFE Weight update equations (LMS):
  w(n+1) = w(n) + μ x̃(n) e(n),
where,
  e(n) = v(n) - y(n) : the error signal,
  μ : algorithm step size,
  v(n) = d(n) [during training mode]
       = ŷ(n) [during decision directed mode].
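As a concrete illustration of these equations, here is a minimal NumPy sketch of a sample-by-sample LMS ADFE. The binary alphabet, filter orders, and step size are illustrative choices for the sketch, not parameters taken from the thesis:

```python
import numpy as np

def adfe_lms(x, d, p=3, q=3, mu=0.01, n_train=100):
    """Sample-by-sample LMS ADFE:
    y(n) = w^t(n) xtilde(n), w(n+1) = w(n) + mu * xtilde(n) * e(n)."""
    w = np.zeros(p + q)              # stacked FFF and FBF weights w(n)
    v = np.zeros(len(x))             # reference: d(n) in training, sliced y(n) afterwards
    y = np.zeros(len(x))
    levels = np.array([-1.0, 1.0])   # assumed binary alphabet (illustrative)
    for n in range(len(x)):
        xf = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(p)])
        xb = np.array([v[n - k] if n - k >= 0 else 0.0 for k in range(1, q + 1)])
        xt = np.concatenate([xf, xb])                    # xtilde(n) = [x^t(n) v^t(n-1)]^t
        y[n] = w @ xt
        yhat = levels[np.argmin(np.abs(levels - y[n]))]  # quantizer Q[.]
        v[n] = d[n] if n < n_train else yhat             # training vs decision directed
        e = v[n] - y[n]                                  # e(n) = v(n) - y(n)
        w = w + mu * xt * e                              # LMS weight update
    return y, w
```

Run on a short dispersive channel, the equalizer output error shrinks as the weights adapt; the feedback taps learn to cancel the postcursor ISI from past decisions.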
FFT based modified fast Block LMS Algorithm
- The LMS algorithm updates the filter coefficients by using an approximate version of the steepest descent procedure.
- Being computationally simple and having desirable numerical qualities, the LMS algorithm received a great deal of attention, despite the fact that its convergence behaviour has been surpassed by several faster techniques.
- This modified algorithm updates the filter coefficients on a block-by-block basis.
- Input data x(n): partitioned into non-overlapping blocks of size P each; the j-th block covers n = jP + r, r = 0, 1, ..., P-1, j = 0, 1, 2, ...
- w(j) = [w_0(j), w_1(j), ..., w_{L-1}(j)]^t : L-th order filter weight vector for the j-th block.
- Filter coefficients are updated from block to block and held constant within a block.
- The main operations: filtering, output error computation and weight updating.
- Substantial computational savings when compared with the algorithm which updates the filter coefficients on a sample-by-sample basis.
Block Adaptive Filter
- y(n) = w^t(j) x(n) : filter output at the n-th index, where x(n) = [x(n), x(n-1), ..., x(n-L+1)]^t, n = jP + r, r = 0, 1, ..., P-1.
- e(n) = d(n) - y(n) : output error at the n-th index, where d(n) is the desired response, given during training.
- Filter coefficients are updated to minimize E[e²(n)] progressively with n. Update relation (BLMS):
    w(j+1) = w(j) + μ Σ_{r=0}^{P-1} x(jP+r) e(jP+r).
- A fast implementation via FFT is possible to produce y(jP+r), r = 0, 1, ..., P-1, and w(j+1).
- μ : step size; for convergence, 0 < μ < P / tr[R], where R = E[x(n) x^t(n)], i.e., the input correlation matrix.
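A time-domain sketch of this block update (without the FFT speedup) may make the structure clearer. The filter length, block size and step size below are illustrative values, not the thesis settings:

```python
import numpy as np

def block_lms(x, d, L=4, P=8, mu=0.005):
    """Block LMS: weights held constant within each block of P samples,
    then updated once per block with the accumulated gradient."""
    w = np.zeros(L)
    y = np.zeros(len(x))
    for j in range(len(x) // P):
        grad = np.zeros(L)
        for r in range(P):
            n = j * P + r
            xn = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(L)])
            y[n] = w @ xn                  # y(n) = w^t(j) x(n), w fixed within the block
            e = d[n] - y[n]                # e(n) = d(n) - y(n)
            grad += xn * e                 # accumulate x(jP+r) e(jP+r)
        w = w + mu * grad                  # w(j+1) = w(j) + mu * sum
    return y, w
```

In a noise-free system identification run the weights converge to the unknown channel taps, which is a quick way to sanity-check the block update.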
Figure: Fast implementation of the proposed BLMS algorithm. The serial input x(n) is buffered into sub-blocks of size M = L + P - 1 and processed with M-point FFT/IFFT stages: the input spectrum X(k) is multiplied with the weight spectrum W(k), the last P terms of the IFFT give the output y(n), and in the gradient path the last (P-1) elements are set to zero and (L-1) zeros are added at the front to form the updated weights w(j+1) from the error e(n) = d(n) - y(n).
Block ADFE Equations:
  y_Q(jQ+Q-1) = X_{Q,M} w^f(j) + D_{Q,L} w^b(j),
  d̂_Q(jQ+Q-1) = Q{y_Q(jQ+Q-1)},
  e_Q(jQ+Q-1) = d_Q(jQ+Q-1) - y_Q(jQ+Q-1),
  w^f(j+1) = w^f(j) + μ X^H_{Q,M} e_Q(jQ+Q-1),
  w^b(j+1) = w^b(j) + μ D^H_{Q,L} e_Q(jQ+Q-1),
where,
  X_{Q,M} = [ x(jQ+Q-1)  .....  x(jQ+Q-M)
                 .                  .
                 .                  .
              x(jQ)      .....  x(jQ-M+1) ].
Implementation of modified block LMS based ADFE
The decision matrix is partitioned as D_{Q,L+Q-1} = [D^1_{Q,Q-1}  D^2_{Q,L}], where the Q x (Q-1) Toeplitz matrix D^1_{Q,Q-1} spans the decisions from d(jQ+Q-2) down to d(jQ-Q+1) (i.e., it involves the current, partly unknown block), and the Q x L Toeplitz matrix D^2_{Q,L} spans the older, already known decisions from d(jQ-1) down to d(jQ-L-Q+1).
Equalization and weight updating:
This consists of 3 main computations, namely,

(a) FFF output:
- FFF output y^f_Q(jQ+Q-1) = X_{Q,M} w^f(j).
- Using the overlap-and-save method,
    y^f_Q(jQ+Q-1) = [F_S^{-1} X^d_S W^f_S]_{last Q}, where
    S = Q + M - 1,
    W^f_S = F_S ([w^{f t}(j)  0^t]^t), and
    X^d_S = diag(F_S [x(jQ+Q-S) ... x(jQ+Q-1)]^t).
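The overlap-and-save mechanics used above can be sketched generically as follows: one block of Q valid convolution outputs is obtained from size-S FFTs of the most recent S input samples. The function name and argument layout are illustrative, not from the thesis:

```python
import numpy as np

def overlap_save_block(x_seg, w, Q):
    """Compute Q outputs of the convolution of x with the length-M filter w
    via FFTs of size S = Q + M - 1 (overlap-and-save), keeping the last Q terms.
    x_seg must hold the most recent S input samples, oldest first."""
    M = len(w)
    S = Q + M - 1
    assert len(x_seg) == S
    W = np.fft.fft(np.concatenate([w, np.zeros(S - M)]))  # zero-padded weight spectrum
    X = np.fft.fft(x_seg)
    y = np.real(np.fft.ifft(X * W))  # circular convolution of the segment
    return y[-Q:]                    # last Q samples equal the linear convolution
```

The first M-1 samples of the circular convolution are corrupted by wrap-around, which is why only the last Q terms are kept, exactly as in the block diagram above.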
(b) FBF output:
- Unlike the FFF, the FBF output y^b_Q(jQ+Q-1) = D_{Q,L+Q-1} w^b(j) contains unknown decisions given by d(k), k = jQ, ..., jQ+Q-2.
- To avoid the causality problem, the computation of y^b_Q(jQ+Q-1) is systematically decomposed into two parts: one containing past and known decisions, and the other involving purely the current and thus unknown decisions:
    y^b_Q(jQ+Q-1) = D^2_{Q,L} w^{b2}(j) + W^b_{Q,2Q-1}(j) d_{2Q-1}(jQ+Q-1),
where W^b_{Q,2Q-1}(j) is the Q x (2Q-1) banded matrix
    W^b_{Q,2Q-1}(j) =
    [ 0  w^b_1(j) ... w^b_{Q-1}(j)  0   ...   0
      0  0  w^b_1(j) ... w^b_{Q-1}(j)   ...   0
      .  .     .            .            .    .
      0  0   ...   0   w^b_1(j) ... w^b_{Q-1}(j) ],
and w^b(j) = [w^b_1(j) w^b_2(j) ... w^b_{L+Q-1}(j)]^t.
Partitioning W^b_{Q,2Q-1}(j) = [W^{b1}_{Q,Q}(j)  W^{b2}_{Q,Q-1}(j)], the FBF output can be written as
    y^b_Q(jQ+Q-1) = D^2_{Q,L} w^{b2}(j) + W^{b1}_{Q,Q}(j) d_Q(jQ+Q-1) + W^{b2}_{Q,Q-1}(j) d_{Q-1}(jQ),
where d_Q(jQ+Q-1) = [d(jQ+Q-1) ... d(jQ)]^t contains the unknown decisions and d_{Q-1}(jQ) = [d(jQ-1) ... d(jQ-Q+1)]^t contains the Q-1 known decisions from previous sub-blocks.
- Let the FB2 output be y^{b2}_Q(jQ+Q-1) = D^2_{Q,L} w^{b2}(j), and let
    y^{b1,1}_Q(jQ+Q-1) = W^{b1}_{Q,Q}(j) d_Q(jQ+Q-1),
    y^{b1,2}_Q(jQ+Q-1) = W^{b2}_{Q,Q-1}(j) d_{Q-1}(jQ).
- The FB1 output is y^{b1}_Q(jQ+Q-1) = y^{b1,1}_Q(jQ+Q-1) + y^{b1,2}_Q(jQ+Q-1).
- Let y^c_Q(jQ+Q-1) = y^f_Q(jQ+Q-1) + y^{b1,2}_Q(jQ+Q-1) + y^{b2}_Q(jQ+Q-1).
- Then y_Q(jQ+Q-1) = y^c_Q(jQ+Q-1) + y^{b1,1}_Q(jQ+Q-1), where y^{b1,1}_Q(jQ+Q-1) involves the unknown decisions.
- An iterative procedure is suggested by Berberidis, by which:
- it first computes y^{b1,1}_Q(jQ+Q-1) using an appropriately chosen initial value for d_Q(jQ+Q-1);
- it then evaluates y_Q(jQ+Q-1), which is used to compute d_Q(jQ+Q-1) via d_Q(jQ+Q-1) = f{y_Q(jQ+Q-1)};
- this is again used to compute y^{b1,1}_Q(jQ+Q-1) and then y_Q(jQ+Q-1), and the iteration is carried out further.
- It is shown that this iteration converges to the correct vector y_Q(jQ+Q-1) in Q or fewer steps for any choice of initial value.
- A simple choice is to set the initial decision vector to the zero vector (IS1).
In IS2 the initial value of d_Q(jQ+Q-1) is chosen by setting d_Q(jQ+Q-1) = y_Q(jQ+Q-1) and solving for d_Q(jQ+Q-1) using
    [I_Q - W^{b1}_{Q,Q}(j)] d_Q(jQ+Q-1) = y^c_Q(jQ+Q-1).
The error vector is now computed as
    e_Q(jQ+Q-1) = d_Q(jQ+Q-1) - y_Q(jQ+Q-1).

(c) Weight updating:
    w^f_M(j+1) = w^f_M(j) + μ Σ_{r=0}^{Q-1} x_M(jQ+r) e(jQ+r),
    w^b_L(j+1) = w^b_L(j) + μ Σ_{r=0}^{Q-1} d_L(jQ+r) e(jQ+r).
The proposed realizations are about four times faster than a sample-based realization for moderately large values of L, M and Q.
The channel is modeled with a second order FIR filter, having impulse response 0.304, 0.903, 0.304.
The channel noise is modeled as AWGN. The transmitted symbols are chosen from an alphabet of 8 equispaced, equiprobable discrete amplitude levels.
The transmitted signal power was taken to be 6 dB.
To these symbols additive white Gaussian noise having a variance of 0.1 is added. The lengths of the FFF and the FBF were chosen as p = 3 and q = 3.
Step size μ = 0.001.
Simulation Studies
The ADFE was first simulated by the proposed
scheme, choosing block length as 25.
The ADFE was operated in training mode for the
first 100 iterations and then, switched over to
the decision directed mode for the subsequent
500 iterations.
The FFF and FBF weights are updated separately
using weight updating equations.
The corresponding learning curve is obtained by plotting the MSE versus the number of iterations.
Next, the MSE curves were plotted for different input block lengths of N = 10, 25, 50 and 100.
Increasing the block length leads to
- a larger spread in the magnitudes of the data samples in the block,
- more pronounced quantization noise effects via block formatting,
- so the steady state MSE increases with N.

Realization of Normalized modified Block LMS based ADFE
- The normalized LMS algorithm provides good convergence behaviour compared to the basic LMS algorithm.
- The NLMS algorithm can be considered a slightly improved version of the LMS algorithm which takes into account the variation in the signal level at the filter output by selecting a normalized step size parameter, resulting in a stable and fast converging adaptive algorithm.
- The NLMS algorithm estimates the energy of the input signal at each sample and normalizes the step size by this estimate, therefore selecting a step size inversely proportional to the instantaneous input signal power.
- The weight update equation for the NLMS algorithm is given by
  w(n+1) = w(n) + μ(n) e(n) X(n),
where
  μ(n) = α / (ε + ||x(n)||²).
The tap input vector is given by
  X(n) = [x(n), x(n-1), ..., x(n-L+1)]^t.
The error signal is given by
  e(n) = d(n) - w^t(n) X(n).
The filter weight vector is given by
  w(n) = [w_0(n), w_1(n), ..., w_{L-1}(n)]^t.
Here the adaptation constant α is within the range 0 to 2 for convergence, and ε is an appropriate positive number introduced to avoid divide-by-zero like situations which may arise when the norm of the input signal becomes very small.
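The normalized update can be sketched in a few lines of NumPy; the values of α and ε below are illustrative:

```python
import numpy as np

def nlms_step(w, x_vec, d, alpha=0.5, eps=1e-6):
    """One NLMS iteration: mu(n) = alpha / (eps + ||x(n)||^2),
    w(n+1) = w(n) + mu(n) e(n) x(n)."""
    y = w @ x_vec
    e = d - y
    mu_n = alpha / (eps + x_vec @ x_vec)   # step size normalized by input energy
    return w + mu_n * e * x_vec, e
```

Because the step is scaled by the instantaneous input power, the same α works over a wide range of input signal levels, which is exactly the robustness advantage claimed above.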
The weight updating equation for the ADFE using the NLMS algorithm can be modified and written as
  W(n+1) = W(n) + μ(n) ξ(n) e(n),
where
  ξ(n) = [x(n), ..., x(n-p+1), v(n-1), ..., v(n-q)]^t,
  W^f(n) = [w^f_0(n), w^f_1(n), ..., w^f_{p-1}(n)]^t is the p-th order FFF coefficient vector,
  W^b(n) = [w^b_1(n), w^b_2(n), ..., w^b_q(n)]^t is the q-th order FBF coefficient vector, and
  W(n) = [W^{f t}(n)  W^{b t}(n)]^t.
- The signal v(n) is given by the desired response d(n) during the initial training phase and by ŷ(n) during the subsequent decision directed phase.
- The overall output y(n) is given by y(n) = W^t(n) ξ(n).
- The output error is e(n) = v(n) - y(n).
- The feedforward filter output is y^f(n) = W^{f t}(n) x(n).
- The feedback filter output is y^b(n) = W^{b t}(n) v(n-1).
- Now the overall output y(n), which is the input to the decision device, is
    y(n) = y^f(n) + y^b(n).
1) Initially transmit the known sequence.
2) Assume initially both the FFF and FBF weights to be zero.
3) Find the output, which is the sum of the outputs of the FFF and FBF.
4) Estimate the tap weight vector at each instant of time using the normalized modified block LMS algorithm.
5) Update the filter coefficients.
Computational Complexity

Number of computations required for step size evaluation:
To evaluate the time varying step size recursively, the proposed scheme requires, at each index n,
- 2 MAC operations to compute ||x(n)||²,
- 1 addition for ε + ||x(n)||²,
- 1 division for α / (ε + ||x(n)||²).

Number of computations required for weight vector updating, W(n) to W(n+1):
Requires (L+1) MAC operations. Of these, one MAC operation is needed to compute μ(n) e(n) = α e(n) / (ε + ||x(n)||²), and a total of L MAC operations are required to calculate W(n+1).

Number of computations required for evaluating the filter output:
To compute the overall output, a total of L MAC operations are required.
Parameter       | MAC | Addition | Division
Step size       |  2  |    1     |    1
Weight updating | L+1 |   Nil    |   Nil
Filter output   |  L  |   Nil    |   Nil

Table: Number of operations required per iteration for evaluating step size, weight updating and filter output using the NLMS algorithm.
Figure: Learning curves for LMS and Normalized LMS based ADFE (MSE in dB versus number of iterations).
Simulation Results
- Consider μ = 0.001.
- The learning curve of the proposed ADFE shows good convergence behaviour after 50 iterations, whereas it takes more than 100 iterations for the LMS based ADFE. The steady state MSE is also within the acceptable range.
Realization of Signed modified Block LMS based ADFE
There are three signed versions of LMS algorithm namely
signed regressor LMS
sign-sign LMS
sign LMS algorithms.
These algorithms provide lower computational complexity compared to the basic LMS algorithm.
The proposed schemes are particularly suitable for
implementation of ADFE with less computational
complexity.
The signed LMS algorithms that make use of the signum
(polarity) of either the error or the input signal, or both,
have been derived from the LMS algorithm from the point
of view of simplicity in implementation.
In all these algorithms there is a significant reduction in computing time, mainly pertaining to the time required for multiplications.
In the sign-sign algorithm, the signum of the input is used in addition to the signum of the error signal, thus requiring only a one-bit multiplication or logical EX-OR function.
In the signed regressor LMS algorithm (SRLMS), the polarity of the input signal is used to adjust the tap weights.
The weight updating equations:
- Signed-regressor LMS algorithm: w(n+1) = w(n) + μ sgn{x(n)} e(n)
- Sign-sign LMS algorithm: w(n+1) = w(n) + μ sgn{x(n)} sgn{e(n)}
- Sign LMS algorithm: w(n+1) = w(n) + μ x(n) sgn{e(n)}
where sgn{.} is the well known signum function.
The error signal is given by e(n) = d(n) - y(n).
The sequence d(n) is called the desired response, available during the initial training period, and μ is an appropriate step size to be chosen as 0 < μ < 2/tr(R) for the convergence of the algorithm.
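The three updates can be written side by side in NumPy; a toy numeric check makes the differences visible (the vector and step size values are illustrative):

```python
import numpy as np

def sign_lms_updates(w, x_vec, e, mu):
    """One step of each signed LMS variant from the slide."""
    srlms = w + mu * np.sign(x_vec) * e           # signed-regressor: sgn{x(n)} e(n)
    sslms = w + mu * np.sign(x_vec) * np.sign(e)  # sign-sign: sgn{x(n)} sgn{e(n)}
    slms  = w + mu * x_vec * np.sign(e)           # sign: x(n) sgn{e(n)}
    return srlms, sslms, slms
```

With μ chosen as a power of two, the sign-based updates reduce to shifts and additions, which is where the complexity saving over plain LMS comes from.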
Implementation Procedure
- Initially, during training mode, the known sequence d(n) is transmitted and both the FFF and FBF are trained by the appropriate sign based algorithms.
- Then the output y(n), which is the sum of both FFF and FBF outputs, is computed.
- The error sequence e(n) is estimated and the filter coefficients are updated for each iteration.
S.No | Variant of sign LMS algorithm | Additions/Subtractions | Shifts | Multiplications
 1   | The Sign                      | L                      | L      | Nil
 2   | The Signed-regressor          | L                      | Nil    | 1
 3   | The Sign-Sign                 | L                      | Nil    | Nil

Table 5.1: Number of additions/subtractions, shifts and multiplications required for weight updating using the sign, signed-regressor, and sign-sign LMS algorithms.
Computational Complexity
Figure: Learning curves for LMS and signed regressor LMS (SRLMS) based ADFE.
Figure 5.2: Learning curves for LMS and Sign LMS (SLMS) based ADFE.
Figure 5.3: Learning curves for LMS and Sign-Sign LMS (SSLMS) based ADFE.
Figure: MSE plots for signed regressor LMS based ADFE for block lengths N = 10, 25, 50, 100.
Figure: MSE plots for sign LMS based ADFE for block lengths N = 10, 25, 50, 100.
Figure: MSE plots for sign-sign LMS based ADFE for block lengths N = 10, 25, 50, 100.
The proposed schemes were simulated as before to study the effects of block formation of the equalizer coefficients on the performance of the sign LMS based ADFE. For this, the same simulation model and environment as used earlier for the ADFE is considered. The simulations were run for different block lengths (N = 10, 25, 50 and 100), allocating 8 bits to the weight vectors of the FFF and FBF and keeping the step size at 0.001. The simulation results for the LMS based ADFE and its three variants considered above are presented in the figures.
Realization of Normalized Signed modified Block LMS based
ADFE
Here ADFE is implemented by combining modified
block LMS algorithm, normalized LMS algorithm and
signed versions of LMS algorithms.
The normalized signed regressor LMS algorithm (NSRLMS) is a counterpart of the NLMS algorithm, derived from the signed regressor LMS algorithm (SRLMS), where the normalizing factor for the SRLMS equals the sum of the absolute values of the input signal vector components.
The weight update equation of the normalized signed
regressor LMS algorithm (NSRLMS) can be obtained
by modifying the weight update equation of SRLMS
algorithm and can be written as
  W(n+1) = W(n) + (μ / ||x(n)||²) sgn{X(n)} e(n).
Here the data vector X(n) is given by
  X(n) = [x(n), x(n-1), ..., x(n-L+1)]^t
and sgn{X(n)} is given by
  sgn{X(n)} = [sgn{x(n)}, sgn{x(n-1)}, ..., sgn{x(n-L+1)}]^t.
The weight update equation of the normalized sign-sign LMS algorithm (NSSLMS)
can be obtained by modifying the weight update equation of SSLMS algorithm
and can be written as
  W(n+1) = W(n) + (μ / ||x(n)||²) sgn{X(n)} sgn{e(n)}.
The weight update equation in the normalized sign-LMS algorithm (NSLMS)
can be obtained by modifying the weight update equation of SLMS algorithm
and can be written as
  W(n+1) = W(n) + (μ / ||x(n)||²) X(n) sgn{e(n)}.
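A one-step sketch of the NSRLMS update is given below; an ε term is added here, as in the NLMS case, to guard the division, and the μ and ε values are illustrative:

```python
import numpy as np

def nsrlms_step(w, x_vec, d, mu=0.1, eps=1e-6):
    """NSRLMS: W(n+1) = W(n) + (mu / ||x(n)||^2) sgn{X(n)} e(n)."""
    e = d - w @ x_vec
    return w + (mu / (eps + x_vec @ x_vec)) * np.sign(x_vec) * e, e
```

The sign of the regressor replaces the regressor itself in the update, so only one multiplication (by e(n)) survives per tap update, consistent with the complexity table below.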
Both feedforward and feedback filter coefficients are trained by the weight update equations of the NSRLMS, NSSLMS and NSLMS algorithms. Initially the training is imparted by a pilot sequence (known transmitted sequence) d(n) during the training mode, and by the output decision ŷ(n) during the subsequent decision directed mode. The input v(n) to the FBF is d(n) during the initial training period and ŷ(n) during the subsequent decision directed phase.
The feedforward filter output y^f(n) is
  y^f(n) = W^{f t}(n) x(n), where W^f(n) = [w^f_0(n), ..., w^f_{p-1}(n)]^t.
The feedback filter output y^b(n) is
  y^b(n) = W^{b t}(n) v(n-1).
Now the overall output, which is the input to the decision device, y(n) is
  y(n) = y^f(n) + y^b(n).
Computational Complexity:
For the L-th order FFF and FBF, to update the coefficients using the LMS algorithm, L multiplications and L additions are required. For the error e(n) one addition is required. For the product μ e(n) one multiplication is required. For the output y(n), L multiplications and L-1 additions are required. So per output a total of (2L+1) multiplications and 2L additions are required. The NLMS algorithm needs one additional computation term, ||x(n)||². This extra computation involves only two squaring operations (two multiplications), one addition and one subtraction, if we implement it using a recursive structure.
In the case of the signed regressor LMS algorithm only one multiplication is needed, for obtaining the product μ e(n). In the case of the other two LMS algorithms [SSLMS, SLMS] no multiplications are required if μ is chosen as a power of two (μ = 2^-l), as this multiplication can be efficiently implemented using shift registers.
S.No | Type of Algorithm | Multiplications | Additions | Shifts
 1   | LMS               | 2L+1            | 2L        | Nil
 2   | NLMS              | 2L+3            | 2L+2      | Nil
 3   | NSRLMS            | 1               | 2L+2      | Nil
 4   | NSLMS             | Nil             | 2L+2      | 2L+2
 5   | NSSLMS            | Nil             | 2L+2      | Nil

Table: Comparison of computational complexity for different LMS based algorithms.
It is observed that the sign based algorithms are largely free from multiplication operations.
Results and Conclusions
The Mean squared error curves are compared for
ADFEs with LMS, Normalized Sign LMS(NSLMS),
Normalized Signed regressor LMS (NSRLMS),
Normalized Sign-sign LMS(NSSLMS) algorithms
The ensemble averaging was performed over 100
independent trials of the experiment.
Step size μ = 0.001 is considered.
The number of iterations was taken as 400. For the first 100 samples the ADFE is in training mode, and it is in decision directed mode for the next 300 samples.
Figure: Learning curves for LMS and Normalized signed-regressor LMS based ADFE.
Figure: Learning curves for LMS and Normalized sign LMS based ADFE.
Figure: Learning curves for LMS and Normalized sign-sign LMS based ADFE.
Figure: Comparison of Bit Error Rate (BER) plots of Normalized signed regressor LMS (NSRLMS) based ADFE with LMS, Normalized LMS (NLMS) and Sign LMS (SLMS) based ADFEs.
Figure: Comparison of Bit Error Rate (BER) plots of Normalized sign LMS (NSLMS) based ADFE with LMS, Normalized LMS (NLMS) and Sign LMS (SLMS) based ADFEs.
Figure: Comparison of Bit Error Rate (BER) plots of Normalized sign-sign LMS (NSSLMS) based ADFE with LMS, Normalized LMS (NLMS) and Sign LMS (SLMS) based ADFEs.
Partial update Sign Normalized LMS based Adaptive Decision Feedback Equalizer
Here only a part of the filter coefficients is updated at each iteration, without reducing the order of the filter, in a manner which degrades algorithm performance as little as possible.
Two types of partial update LMS algorithms
Periodic LMS algorithm
Sequential LMS algorithm
- Aboulnasr T. et al. proposed the M-Max-NLMS algorithm, where the filter coefficients are obtained from the minimization of a modified a posteriori error expression.
- Schertler T. et al. proposed the selective block update NLMS algorithm, which updates the filter coefficients on a block basis.
- Dogancay K. et al. proposed the selective partial update NLMS algorithm, where the selection criterion is obtained from the solution of a constrained optimization problem.
- Werner S. et al. proposed the data selective partial updating NLMS algorithm, which uses the set membership filtering method.
- Mahesh G. et al. proposed the stochastic partial update LMS algorithm, where the filter coefficients are updated in a random manner.
Proposed Implementation
Let us assume that the feedforward and feedback filters are FIR of even length L.
For the instant n, the filter coefficients W(n) are separated into even and odd indexed terms as
  W_e(n) = [w_2(n), w_4(n), w_6(n), ..., w_L(n)]^t,
  W_o(n) = [w_1(n), w_3(n), w_5(n), ..., w_{L-1}(n)]^t,
  W(n) = [W_e(n), W_o(n)].
Let the input sequence of the filter be
  X(n) = [x(n), x(n-1), x(n-2), ..., x(n-L+1)]^t,
separated into even and odd sequences as
  X_e(n) = [x(n-1), x(n-3), ..., x(n-L+1)]^t,
  X_o(n) = [x(n), x(n-2), ..., x(n-L+2)]^t.
The desired response d(n) is given by
  d(n) = W^t_opt(n) X(n),
where the optimum filter coefficient vector W_opt(n) is given by
  W_opt(n) = [W_{1,opt}(n), W_{2,opt}(n), ..., W_{L,opt}(n)]^t.
For odd n the filter coefficients updated using the partial update LMS algorithm (PLMS) are given by
  W_e(n+1) = W_e(n) + μ e(n) X_e(n),
  W_o(n+1) = W_o(n).
For even n the filter coefficients are
  W_e(n+1) = W_e(n),
  W_o(n+1) = W_o(n) + μ e(n) X_o(n).
The error sequence e(n) is given by
  e(n) = d(n) - y(n).
The actual output of the filter is given by
  y(n) = w^t(n) X(n).
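A minimal sketch of this even/odd partial update scheme follows; indices are zero-based here, where the slide counts taps from 1, and the filter length and step size are illustrative:

```python
import numpy as np

def partial_update_lms(x, d, L=4, mu=0.05):
    """Even/odd partial update LMS: on odd n only one half of the taps moves,
    on even n the other half, halving the per-iteration update cost."""
    w = np.zeros(L)
    err = np.zeros(len(x))
    for n in range(len(x)):
        xn = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(L)])
        e = d[n] - w @ xn                  # e(n) = d(n) - w^t(n) X(n)
        err[n] = e
        if n % 2 == 1:
            w[0::2] += mu * e * xn[0::2]   # update one half of the taps
        else:
            w[1::2] += mu * e * xn[1::2]   # update the other half
    return w, err
```

Each tap is updated only every other sample, so convergence is roughly halved in speed, but at steady state the filter still reaches the same solution as full-update LMS.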
The coefficient error vectors are defined as
  V_e(n) = W_e(n) - W_{e,opt},
  V_o(n) = W_o(n) - W_{o,opt},
  V(n) = W(n) - W_opt,
  V(n) = [V_e(n), V_o(n)]^t.
The necessary and sufficient condition for stability of the recursion is given by
  0 < μ < 2/λ_max,
where λ_max is the maximum eigenvalue of the input signal correlation matrix.
The adaptive filter coefficients are updated by the partial update Signed-regressor LMS algorithm (PSRLMS) as
  W(n+1) = W(n) + μ sgn{ξ(n)} e(n),
using the partial update Sign-Sign LMS algorithm (PSSLMS) as
  W(n+1) = W(n) + μ sgn{ξ(n)} sgn{e(n)},
and using the partial update Sign LMS algorithm (PSLMS) as
  W(n+1) = W(n) + μ ξ(n) sgn{e(n)},
where sgn{.} is the well known signum function and
  sgn{ξ(n)} = [sgn{ξ(n)}, sgn{ξ(n-1)}, ..., sgn{ξ(n-L+1)}].
The weight updating equation using the partial update normalized Signed-regressor LMS algorithm (NPSRLMS) is written as
  W(n+1) = W(n) + μ(n) sgn{ξ(n)} e(n),
where μ(n) is given by
  μ(n) = α / (ε + ||x(n)||²), with ||x(n)||² = X^t(n) X(n).
Here α is a step size control parameter, used to control the speed of convergence, and takes on values between 0 and 2 for convergence; ε is an appropriate positive number introduced to avoid divide-by-zero like situations which may arise when ||x(n)||² becomes very small.
The weight updating equation of the partial update normalized Sign-Sign LMS algorithm (NPSSLMS) can be written as
  W(n+1) = W(n) + μ(n) sgn{ξ(n)} sgn{e(n)},
and that of the partial update normalized Sign LMS algorithm (NPSLMS) as
  W(n+1) = W(n) + μ(n) ξ(n) sgn{e(n)}.
Both the feedforward and feedback filter coefficients are trained by the weight update equations of all three types of LMS based algorithms, i.e., the partial update normalized signed-regressor, sign-sign, and sign LMS algorithms.
Initially the training is imparted by a pilot sequence d(n) during the training mode, and by the output decision ŷ(n) during the subsequent decision directed mode. The input v(n) equals d(n) or ŷ(n), depending on whether it is the initial training period or the subsequent decision directed phase.
The feedforward filter output y^f(n) is given by
  y^f(n) = W^{f t}(n) x(n), where W^f(n) = [w^f_1(n), ..., w^f_p(n)]^t.
The feedback filter output y^b(n) is given by
  y^b(n) = W^{b t}(n) v(n-1).
The overall output, which is the input to the decision device, is given by
  y(n) = y^f(n) + y^b(n).
Results and Conclusions
The proposed scheme is simulated to study the performance
of the ADFE.
Transmitted signals taking values 1 with probability 0.5.
The random number generator provides this test signal and in
the channel an additive white gaussion noise with zero mean
and variance of 0.001 is added.
The impulse response of the channel is considered as a raised
cosine function
The initial filter coefficients of FFF and FBF are zero. At each
iteration these coefficients are modified and at the beginning of
decision directed mode the filter coefficients of the last
iteration of the training mode are taken as initial coefficients.
The signal after equalization is passed through the slicer, which quantizes the signal to +1 when it is greater than 0.5 and to -1 when it is less than 0.5.
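A one-line sketch of the slicer, using the 0.5 threshold stated above:

```python
def slicer(y, threshold=0.5):
    """Quantize the equalized sample: +1 if above the
    threshold, -1 otherwise."""
    return 1 if y > threshold else -1

decision = slicer(0.9)   # 1
```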
Figure: Frequency response of the channel (amplitude vs. frequency).
Figure 7.2: Transmitted signal, observed signal, and equalizer output before and after the slicer.
Figure: Error signal before and after the slicer.

Figure: MSE curves (MSE in dB vs. number of iterations) of LMS and normalized partial update block LMS (NPBLMS) based ADFE.
A random number generator provides the test signal, and the channel noise is modelled as AWGN of variance 0.01. The ensemble averaging was performed over 100 independent trials of the experiment. The transmitted signals are simple QPSK signals. N = 600 samples are generated and used to train both the FFF and the FBF, each with 4 taps.
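The ensemble-averaged learning curves shown here are the per-iteration mean of the squared error over the independent trials; a minimal sketch:

```python
import numpy as np

def ensemble_mse(error_runs):
    """Learning curve: mean squared error at each iteration,
    averaged over the independent trials (rows)."""
    e = np.asarray(error_runs)
    return np.mean(np.abs(e) ** 2, axis=0)

runs = [[1.0, 0.5],    # trial 1: e(0), e(1)
        [1.0, -0.5]]   # trial 2
mse = ensemble_mse(runs)   # [1.0, 0.25]
```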
Figure: Comparison of Bit Error Rate (BER) vs. SINR (dB) curves for the LMS, NSSPLMS, NSRPLMS, and NSPLMS algorithms.
Implementation of Adaptive Decision feedback Equalizer
using DSP processor TMS320C6713
The TMS320C6713 is a fast, special-purpose Texas Instruments (TI) floating-point digital signal processor based on a very long instruction word (VLIW) architecture. This architecture and its instruction set are well suited for real-time signal processing applications.
The main tool is TI's DSP starter kit (DSK). It includes Code Composer Studio (CCS), which provides an integrated development environment (IDE) and the software tools that bring together the C compiler, assembler, linker, debugger, and so on. It has graphical capabilities, supports real-time debugging, and provides an easy-to-use software tool to build and debug programs.
The operating frequency is 225 MHz.
16 Mbytes of synchronous DRAM,
512 Kbytes of non-volatile Flash memory (256 Kbytes usable in the default configuration),
4 user-accessible LEDs and DIP switches
Internal memory includes a two-level cache architecture
with 4 kB of level 1 program cache (L1P), 4 kB of level 1
data cache (L1D), and 256 kB of level 2 memory shared
between program and data space.
The ADFE is initially in the training period, during which the training sequence is known to both the transmitter and the receiver. The error signal is generated from the transmitted signal and the equalized signal. After some iterations the equalizer switches to decision-directed mode, normal transmission begins, and the coefficients of the FFF and FBF are updated based on the output of the decision device. During the training process a large step size (0.08) is chosen to attain fast initial convergence; later the step size is reduced to 0.02 in decision-directed mode to maintain a low tracking error.
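This two-stage step-size choice can be sketched as a simple schedule (the boundary n_train is an assumed parameter marking the last training iteration):

```python
def step_size(n, n_train, mu_train=0.08, mu_dd=0.02):
    """Large step size during the training iterations for fast
    initial convergence; smaller step size afterwards, in
    decision-directed mode, for low tracking error."""
    return mu_train if n < n_train else mu_dd

mu = step_size(10, n_train=100)   # 0.08 (still training)
```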
Figure: MSE curve using the TMS320C6713. The MSE is almost negligible after 200 iterations.
Summary of the Present Work
We have made an attempt to develop efficient realizations of Adaptive Decision Feedback Equalizers by considering different combinations and variants of the LMS algorithm.
An efficient realization of the modified fast block LMS algorithm using the FFT has been presented. The proposed scheme provides a considerable speed-up over the sample-by-sample update LMS algorithm. Faster evaluations of the filter outputs and weight-update equations are also derived. From the computational complexity analysis, it is observed that the proposed modified FFT-based fast block LMS algorithm is sixteen times faster than the sample-by-sample update LMS algorithm.
The ADFE is implemented using the modified FFT-based fast block LMS algorithm. In this method, the incoming data is first partitioned into non-overlapping blocks, and for each block the weights of both the FFF and FBF are evaluated and the error sequence is calculated. The overall output, which is the sum of the outputs of both the FFF and FBF, is calculated. The ADFE is initially in the training period and is later switched to decision-directed mode. The computational complexity in terms of MAC operations is also presented.
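The block-partitioning step can be sketched as follows (the block length L is a parameter; trailing samples that do not fill a complete block are dropped in this sketch):

```python
import numpy as np

def partition_blocks(x, L):
    """Partition the incoming data into non-overlapping blocks of
    length L; block LMS then updates the weights once per block
    rather than once per sample."""
    n_blocks = len(x) // L
    return np.reshape(x[:n_blocks * L], (n_blocks, L))

blocks = partition_blocks(np.arange(10), 4)   # two blocks of 4 samples
```

Updating once per block of L samples, rather than L times, is what makes FFT-based evaluation of the filter outputs and weight updates profitable.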
Later, we extended the modified block LMS based treatment to the NBLMS-based ADFE. This normalization provides certain advantages over the original LMS-based ADFE: it enjoys superior convergence behaviour over its LMS counterpart at the expense of certain additional computations.
The computational complexity of this proposed
algorithm is also analyzed. The learning curve shows
the significant improvement in the convergence
characteristics.
We then extended the modified block LMS based treatment to the SBLMS-based ADFE, which provides lower computational complexity than the LMS algorithm by trading off the speed of convergence.
Next, we took up the combination of normalized and signed versions of the LMS algorithm, to reduce complexity and to improve the convergence characteristics.

Later, the ADFE is implemented using sign normalized LMS algorithms with partial updating of the filter coefficients.

Finally, the ADFE is implemented on a real-time TMS320C6713 DSP processor.
References :
1. Haykin, S., Adaptive Filter Theory, Englewood Cliffs, NJ:
Prentice-Hall, 1991.
2. Berberidis, K., and P. Karaivazoglou, An efficient block
adaptive decision feedback equalizer implemented in the
frequency domain, IEEE Trans. Signal Processing, vol.
50, no. 9, pp. 2273-2285, Sept. 2002.
3. Elam. D., and C. Lovescu, A Block Floating Point
Implementation for an N Point FFT on the
TMS320C55X DSP, Texas Instruments Application
Report, SPRA948, Sept. 2003.
4. Harrington, E. F., A BPSK Decision-Feedback Equalization Method Robust to Phase and Timing Errors, IEEE Signal Processing Lett., vol. 12, no. 4, pp. 313-316, Apr. 2005.
5. Kavitha, V., and V. Sharma, Tracking Analysis of an LMS Decision Feedback Equalizer for a Wireless Channel, Technical Report No. TR-PME-2006-19, DRDO-IISc Programme on Mathematical Engineering, IISc, Bangalore, October 2006.
6. Khong, A. W. H., and P. A. Naylor, Selective tap adaptive filtering with performance analysis for identification of time-varying systems, IEEE Trans. Audio Speech Language Processing, vol. 15, no. 5, pp. 1681-1695, July 2007.
7. Lin, C. H., A. Y. Wu, and F. M. Li, High-Performance VLSI Architecture of Decision Feedback Equalizer for Gigabit Systems, IEEE Trans. Circuits Syst. II, vol. 53, no. 9, pp. 911-915, Sept. 2006.
8. Godavarti, M., and A. O. Hero III, Partial Update LMS Algorithms, IEEE Trans. Signal Processing, vol. 53, no. 7, July 2005.
9. Parhi, K. K., Design of Multi-gigabit Multiplexer-Loop-Based Decision Feedback Equalizers, IEEE Trans. Very Large Scale Integration Systems, vol. 13, no. 4, pp. 489-493, April 2005.
10. Parhi, K. K., VLSI Digital Signal Processing Systems, Wiley-Interscience, New York, 1999.
11. Reuter, M., et al., Mitigating Error Propagation Effects in a Decision Feedback Equalizer, IEEE Trans. Commun., vol. 49, no. 11, pp. 2028-2041, Nov. 2001.
12.Rontogiannis, A. A. and K. Berberidis, Efficient
decision feedback equalization for sparse wireless
channels, IEEE Trans. Wireless Communications, vol.
2, no. 3, pp. 570-581, May 2003.
13. Wu, W. R., and Y. M. Tsuie, An LMS-based decision feedback equalizer for IS-136 receivers, IEEE Trans. Commun., vol. 51, pp. 130-143, 2002.
List of Publications
JOURNALS
[01] Ch. Sumanth Kumar, K.V.V.S. Reddy, Low Complexity Adaptive Equalization Techniques for Nonstationary Signals, Journal of Communication and Computer, vol. 6, no. 11, 2011, ISSN 1548-7709, USA.
[02] Ch. Sumanth Kumar, Rafi Ahamed Shaik, K.V.V.S. Reddy, Normalized Signed Regressor Partial Update LMS based Adaptive Decision Feedback Equalization, International Journal of Emerging Technologies and Applications in Engineering, Technology and Sciences (IJ-ETA-ETS), ISSN 0974-3588, vol. 4, issue 1, pp. 48-52, Jan.-June 2011.
[03] Ch. Sumanth Kumar, D. Madhavi, K.V.V.S. Reddy, An Efficient Realization of Normalized Block LMS based ADFE, Advances in Wireless and Mobile Communications, ISSN 0973-6972, vol. 4, no. 1, pp. 11-18, 2011.
[04] Ch. Sumanth Kumar, K.V.V.S. Reddy, Optimized
Adaptive equalizer for Wireless Communications,
International Journal of computer applications, USA, Number
16, ISBN: 978-93-80746-57-8, pp.29-33, 2011.
[05] Ch. Sumanth Kumar, K.V.V.S. Reddy, Block based
Partial update NLMS Algorithm for Adaptive Decision
Feedback Equalization, International Journal of Signal and
Image Processing, Communicated.
CONFERENCES
[06] Ch. Sumanth Kumar, K.V.V.S. Reddy, Block and Partial Update Sign Normalized LMS Based Adaptive Decision Feedback Equalizer, in Proc. 2011 International Conference on Devices & Communications (ICDeCom-11), Birla Institute of Technology, Mesra, Ranchi, IEEE Xplore, IEEE Catalog Number: CFP1109M-ART, ISBN: 978-1-4244-9190-2, DOI: 10.1109/ICDECOM.2011.5738469, Feb. 24-25, 2011.
[07] Ch. Sumanth Kumar, K.V.V.S. Reddy, Optimized Adaptive
equalizer for Wireless Communications, International Conference on
VLSI, Communication& Instrumentation (ICVCI2011),
Kottayam,Kerala. April 7th -9th 2011.
[08] Ch. Sumanth Kumar, Rafi Ahamed Shaik, K.V.V.S. Reddy,
A New Sign Normalized Block based Adaptive Decision feedback
Equalizer for Wireless Communication Systems, 2010 IEEE
International Conference on Computational Intelligence and
Computing Research (ICCIC), Coimbatore, IEEE Xplore IEEE
Catalog Number: CFP1020J-ART ISBN: 978-1-4244-5967-4.Dec
28th -29th 2010.
[09] Ch. Sumanth Kumar, K.V.V.S. Reddy, Partial Update Sign
LMS Based Adaptive Decision Feedback Equalizer, Second
International Conference On Advanced Computing &Communication
Technologies for High Performance Applications,Federal institute of
science& technology, angamaly, cochin,kerala, 7th -10th December
2010
[10] Ch. Sumanth Kumar, D. Madhavi, N. Jyothi, High
Performance Architectures for Recursive Loop Algorithms,
International Conference on Control, Automation, Communication and
Energy Conservation-INCACEC09 , Kongu Engineering College,
Perundurai, Erode, IEEE Xplore,4th - 6th June 2009
[11] Ch. Sumanth Kumar, K.V.V.S. Reddy, Pipelining and Parallel Computing Architectures of Equalizers for Gigabit Systems, International Conference on Advanced Computing & Communication Technologies for High Performance Applications, organized by Federal Institute of Science & Technology, Angamaly, Cochin, Kerala, pp. 660-664, Sept. 24-26, 2008.
[12] Ch. Sumanth Kumar, Rafi Ahamed Shaik, K.V.V.S. Reddy, A
New Normalized Block LMS based Adaptive Decision feedback
Equalizer for Wireless Communications, International Conference on
Convergence of Science&Engineering in Education and Research A
Global perspective in the new millennium ICSE 2010 , Dayananda
Sagar Institutions,Bangalore, 21st -23rd April 2010.
[13] Ch. Sumanth Kumar, K.V.V.S Reddy, Rafi Ahamed Shaik, Low
Complexity Adaptive Equalization Techniques for non-stationary
signals, International conference on advances in Information,
Communication technology and VLSI Design,ICAICV2010,PSG
College of Technology,Coimbatore, Page No.49,Aug 6th -7th 2010
[14] Ch. Sumanth Kumar, D. Madhavi, N. Jyothi,
Computational approaches for Real time High Speed
Implementation of Quantization Algorithms, 2010 IEEE
International Conference on Computational Intelligence and
Computing Research (ICCIC) .Tamilnadu College Of
Engineering,Coimbatore, IEEE Xplore IEEE Catalog Number:
CFP1020J-ART ,ISBN: 978-1-4244-5967-4,Dec 28th -29th 2010
[15] Ch. Sumanth Kumar, K.V.V.S. Reddy, A New Normalized
Signed LMS based Adaptive Decision Feedback Equalizer,
National Conference on Electronics, Communications, and
Computers (NCECC-2009) , organized by IETE Navi Mumbai
Sub-Centre, 78-81,13th-14th February 2009.
[16] Ch. Sumanth Kumar, Dr. K. V. V. S. Reddy, P. Naga
Lingeswara Rao, An Efficient Realization of Normalized
Block LMS based ADFE, National Conference on Signal
Processing and Communication Systems NCSPCS2010,
RVR&JC, College of Engineering ,Guntur, P.No
.62,February 25-26, 2010
[17] Ch. Sumanth Kumar, K.V.V.S. Reddy, Efficient
VLSI Architectures for High speed Nonlinear Adaptive
Equalizers, National conference on Signal Processing
&Communication Systems, organized by Department of
ECE, R.V.R &J.C College of Engineering, Guntur, 227-
231, 20th 21st February 2008.
Thank You