CHAPTER 6
6.1 (a) We note that

$J(n+1) = E[|e(n+1)|^2]$

where

$e(n+1) = d(n+1) - w^H(n+1)u(n+1)$

Hence,

$J(n+1) = E[|d(n+1) - w^H(n+1)u(n+1)|^2]$

Expanding, and substituting the steepest-descent update $w(n+1) = w(n) - \frac{1}{2}\mu(n)\nabla(n)$, where $\nabla(n)$ is the gradient vector:

$J(n+1) = \sigma_d^2 - \left(w(n) - \frac{1}{2}\mu(n)\nabla(n)\right)^H p - p^H\left(w(n) - \frac{1}{2}\mu(n)\nabla(n)\right) + \left(w(n) - \frac{1}{2}\mu(n)\nabla(n)\right)^H R \left(w(n) - \frac{1}{2}\mu(n)\nabla(n)\right)$

Dropping the terms that are not a function of $\mu(n)$, and differentiating $J(n+1)$ with respect to $\mu(n)$, we get

$\frac{\partial J(n+1)}{\partial \mu(n)} = \frac{1}{2}\left(\nabla^H(n)p + p^H\nabla(n)\right) - \frac{1}{2}\left(\nabla^H(n)Rw(n) + w^H(n)R\nabla(n)\right) + \frac{1}{2}\mu(n)\nabla^H(n)R\nabla(n)$

Setting the result equal to zero, and solving for $\mu(n)$:
$\mu_o(n) = \frac{\nabla^H(n)Rw(n) + w^H(n)R\nabla(n) - \nabla^H(n)p - p^H\nabla(n)}{\nabla^H(n)R\nabla(n)}$    (1)

(b) We are given that

$\nabla(n) = 2(Rw(n) - p)$

Hence, Eq. (1) simplifies to

$\mu_o(n) = \frac{\nabla^H(n)\nabla(n)}{\nabla^H(n)R\nabla(n)}$

Using instantaneous estimates for R and $\nabla(n)$:

$\hat{R}(n) = u(n)u^H(n)$

$\hat{\nabla}(n) = 2[u(n)u^H(n)w(n) - u(n)d^*(n)] = -2u(n)e^*(n)$

we find that the corresponding value of $\mu_o(n)$ is

$\mu_o(n) = \frac{u^H(n)u(n)|e(n)|^2}{(u^H(n)u(n))^2 |e(n)|^2} = \frac{1}{u^H(n)u(n)} = \frac{1}{\|u(n)\|^2}$

Correspondingly, introducing the dimensionless step size $\tilde{\mu}$, we have

$w(n+1) = w(n) + \frac{\tilde{\mu}}{\|u(n)\|^2} u(n)e^*(n)$

which is recognized as the normalized LMS algorithm.
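For illustration, here is a minimal sketch of this update in Python/NumPy. The function name, the step size mu_t (playing the role of $\tilde{\mu}$), and the small constant eps guarding against division by zero are our own choices, not part of the solution above:

    import numpy as np

    def nlms_update(w, u, d, mu_t=0.5, eps=1e-8):
        """One NLMS step: w(n+1) = w(n) + (mu_t / ||u(n)||^2) u(n) e*(n)."""
        e = d - np.vdot(w, u)  # e(n) = d(n) - w^H(n) u(n)
        w_next = w + (mu_t / (np.vdot(u, u).real + eps)) * u * np.conj(e)
        return w_next, e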
6.2

$\delta w(n+1) = w(n+1) - w(n)$

$= \frac{1}{\|u(n)\|^2} u^H(n)u(n)[w(n+1) - w(n)]$

$= \frac{1}{\|u(n)\|^2} u(n)[u^H(n)w(n+1) - u^H(n)w(n)]$

$= \frac{1}{\|u(n)\|^2} u(n)(d^*(n) - u^H(n)w(n))$

$= \frac{1}{\|u(n)\|^2} u(n)e^*(n)$

where the second-to-third step holds because the minimum-norm change $\delta w(n+1)$ is collinear with $u(n)$, and the next step uses the constraint $u^H(n)w(n+1) = d^*(n)$.

6.3 The second statement is the correct one. The justification is obvious from the solution to Problem 6.2; see also Eq. (6.10) of the text. By definition,

$u(n) = [u(n), u(n-1), \ldots, u(n-M+1)]^T$

Since

$w(n) = [w_0(n), w_1(n), \ldots, w_{M-1}(n)]^T$

$w(n+1) = [w_0(n+1), w_1(n+1), \ldots, w_{M-1}(n+1)]^T$

substituting all of these terms into the NLMS formula

$w(n+1) = w(n) + \frac{\tilde{\mu}}{\|u(n)\|^2} u(n)e^*(n)$

and looking at each element of the vector, we then find that the correct answer is

$w_k(n+1) = w_k(n) + \frac{\tilde{\mu}}{\|u(n)\|^2} u(n-k)e^*(n), \quad k = 0, 1, \ldots, M-1$

6.4

$\delta w(n+1) = w(n+1) - w(n)$

$= A^{+}(n)A(n)[w(n+1) - w(n)]$

$= A^{+}(n)[A(n)w(n+1) - A(n)w(n)]$

$= A^{+}(n)[d(n) - A(n)w(n)]$, from Eq. (6.44) of the text

$= A^{+}(n)e(n)$

where $A^{+}(n)$ denotes the pseudoinverse of $A(n)$. From the method of Lagrange multipliers used to derive the affine projection filter, $\delta w(n+1) = \frac{1}{2}A^H(n)\lambda(n)$ with $\lambda(n) = 2(A(n)A^H(n))^{-1}e(n)$, so that

$\delta w(n+1) = \frac{1}{2}A^H(n) \cdot 2(A(n)A^H(n))^{-1}e(n) = A^H(n)(A(n)A^H(n))^{-1}e(n)$
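A minimal Python/NumPy sketch of this affine projection update; the function name and the regularizing term delta (added to keep the N x N inversion well conditioned) are our additions, not part of the solution:

    import numpy as np

    def apaf_update(w, A, d, mu_t=1.0, delta=1e-6):
        """One affine projection step: w(n+1) = w(n) + mu_t A^H (A A^H)^(-1) e(n)."""
        e = d - A @ w                                    # e(n) = d(n) - A(n) w(n)
        G = A @ A.conj().T + delta * np.eye(A.shape[0])  # A(n) A^H(n), lightly regularized
        w_next = w + mu_t * (A.conj().T @ np.linalg.solve(G, e))
        return w_next, e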
6.5
         Virtues                                     Limitations
  LMS    Simple. Stable. H-infinity robust.          Slow convergence. Learning rate must
         Model-independent.                          have the dimension of inverse power.

  NLMS   Convergent in the mean sense. Stable        Little increase in computational
         in the mean-square sense. H-infinity        complexity (compared to LMS).
         robust. Invariant to the scaling factor
         of the input. Dimensionless step size
         (special case of the APAF).

  APAF   More accurate, due to the use of more       Increased computational complexity.
         information. Semi-batch learning.
         Faster convergence.

6.6 Provided we show that, under the different scaling situations, the learning rules (weight-update formulas) are the same, we can say that the final solutions are the same under the same initial condition.

(1) NLMS, unscaled:

$w(n+1) = w(n) + \frac{\tilde{\mu}}{\|u(n)\|^2} u(n)e^*(n)$
NLMS, scaled (input and desired response both scaled by the same factor a):

$w(n+1) = w(n) + \frac{\tilde{\mu}}{\|au(n)\|^2}\, au(n)[a\,u^H(n)w(n+1) - a\,u^H(n)w(n)]$

$= w(n) + \frac{\tilde{\mu}}{a^2\|u(n)\|^2}\, a^2 u(n)[u^H(n)w(n+1) - u^H(n)w(n)]$

$= w(n) + \frac{\tilde{\mu}}{\|u(n)\|^2}\, u(n)[d^*(n) - u^H(n)w(n)]$

$= w(n) + \frac{\tilde{\mu}}{\|u(n)\|^2}\, u(n)e^*(n)$

which is the same as the unscaled NLMS.

(2) APAF. Denote

$A_{scaled}^H(n) = [au(n), au(n-1), \ldots, au(n-N+1)] = aA^H(n)$

$d_{scaled}(n) = a\,d(n)$, so that $e_{scaled}(n) = a\,e(n)$

Unscaled case:

$w(n+1) = w(n) + A^H(n)(A(n)A^H(n))^{-1}e(n)$

Scaled case:

$\delta w(n+1) = w(n+1) - w(n)$

$= A_{scaled}^{+}(n)A_{scaled}(n)[w(n+1) - w(n)]$

$= A_{scaled}^{+}(n)[A_{scaled}(n)w(n+1) - A_{scaled}(n)w(n)]$

$= A_{scaled}^{+}(n)[d_{scaled}(n) - A_{scaled}(n)w(n)]$

$= \frac{1}{2}A_{scaled}^H(n) \cdot 2(A_{scaled}(n)A_{scaled}^H(n))^{-1}e_{scaled}(n)$

$= aA^H(n)(a^2 A(n)A^H(n))^{-1}\,a\,e(n)$

$= A^H(n)(A(n)A^H(n))^{-1}e(n)$

which is the same as the unscaled APAF.
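As a quick numerical illustration of this invariance (our own example, with real-valued random data and arbitrary names):

    import numpy as np

    rng = np.random.default_rng(0)
    M, a = 8, 3.7                        # filter length; arbitrary scale factor
    w = rng.standard_normal(M)
    u = rng.standard_normal(M)
    d = rng.standard_normal()

    def nlms(w, u, d, mu_t=0.5):
        e = d - w @ u                    # real data, so w^H u reduces to a dot product
        return w + (mu_t / (u @ u)) * u * e

    # Scaling u(n) and d(n) by the same factor leaves the update unchanged.
    assert np.allclose(nlms(w, u, d), nlms(w, a * u, a * d))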
6.7
(a) The algorithm can be formulated with the APAF as follows. A code sketch of the loop is given after these steps.

Step (1): Initialize the AR coefficients $w_1, w_2, \ldots, w_{N-1}$ with random values.

Step (2): Suppose $w(n) = [w_1(n), \ldots, w_{N-1}(n)]^H$ and $u(n-1) = [u(n-1), \ldots, u(n-N+1)]^H$. Hence, we have

$u(n) = \sum_{k=1}^{N-1} w_k^* u(n-k) + v(n)$

that is,

$u(n) = w^H(n)u(n-1) + v(n)$

Suppose

$A^H(n) = [u(n-1), u(n-2), \ldots, u(n-N+1)]$

Calculate

$e(n) = u(n) - A^H(n)w(n)$

If $\|e(n)\|^2 \le \rho$ (where $\rho$ is a predefined small positive value), go to end; else go to Step (3).

Step (3): Update

$w(n+1) = w(n) + \tilde{\mu}A^H(n)(A(n)A^H(n))^{-1}e(n)$

and go to Step (2).
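A sketch of Steps (1)-(3) in Python/NumPy, under our own assumptions: real-valued samples in a 1-D array u, projection order N-1, and a small regularization delta added to the matrix inversion; all names are hypothetical:

    import numpy as np

    def ar_apaf(u, N, mu_t=0.5, rho=1e-6, delta=1e-8):
        """Estimate the N-1 AR coefficients of u(n) via the APAF loop of Steps (1)-(3)."""
        rng = np.random.default_rng(0)
        w = rng.standard_normal(N - 1)                    # Step (1): random initialization
        for n in range(2 * (N - 1), len(u)):
            # Step (2): stack past regressors [u(n-1-k), ..., u(n-N+1-k)] as rows of A(n)
            A = np.array([u[n - 1 - k - np.arange(N - 1)] for k in range(N - 1)])
            d = u[n - np.arange(N - 1)]                   # desired samples u(n-k)
            e = d - A @ w                                 # error vector e(n)
            if e @ e <= rho:                              # stop test: ||e(n)||^2 <= rho
                break
            G = A @ A.T + delta * np.eye(N - 1)           # A(n) A^H(n), regularized
            w = w + mu_t * A.T @ np.linalg.solve(G, e)    # Step (3): APAF update
        return w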
(b)

$\epsilon(n) = [I - A^H(n)(A(n)A^H(n))^{-1}A(n)]u(n)$

$= u(n) - A^H(n)(A(n)A^H(n))^{-1}A(n)u(n)$

$= u(n) - Pu(n)$

where P is the projection operator. Geometrically, the distance (difference) between u(n) and its projected vector Pu(n) is the error (noise) vector. Hence, with the noise v(n) assumed to be white Gaussian with zero mean, it follows that the elements of the vector $\epsilon(n)$ are themselves zero-mean white Gaussian processes.

6.8 (a) From Eq. (6.19),

$\mu_{opt} = \frac{\mathrm{Re}\{E[\xi_u(n)e^*(n)/\|u(n)\|^2]\}}{E[|e(n)|^2/\|u(n)\|^2]}$

Assuming that the undisturbed estimation error $\xi_u(n)$ is equal to the disturbed estimation error (i.e., the normal error signal) e(n), we may put

$\mu_{opt} = 1$

in which case Eq. (6.18) gives the bounds on $\tilde{\mu}$ as

$0 < \tilde{\mu} < 2$

(b) For the APAF, from Eq. (6.56) we know that

$0 < \tilde{\mu} < \frac{2E[\mathrm{Re}\{\xi_u^H(n)(A(n)A^H(n))^{-1}e(n)\}]}{E[e^H(n)(A(n)A^H(n))^{-1}e(n)]}$

where

$\xi_u(n) = A(n)w - A(n)w(n)$

is the undisturbed error vector, and

$e(n) = A(n)w(n+1) - A(n)w(n)$

is the disturbed error vector (equivalently, $e(n) = d(n) - A(n)w(n)$, by Eq. (6.44) of the text). Here again, if these two error vectors are assumed to be equal, then the bounds on $\tilde{\mu}$ given in Eq. (6.56) reduce to

$0 < \tilde{\mu} < 2$
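A short numerical illustration of this reduction (our own, with random data): when the two error vectors coincide, the expression bounding $\tilde{\mu}$ equals 2 identically:

    import numpy as np

    rng = np.random.default_rng(1)
    N, M = 4, 8
    A = rng.standard_normal((N, M))
    e = rng.standard_normal(N)

    Ginv = np.linalg.inv(A @ A.T)
    xi = e                                       # undisturbed error set equal to e(n)
    print(2 * (xi @ Ginv @ e) / (e @ Ginv @ e))  # prints 2.0, the reduced upper bound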
6.9 (a) For the NLMS filter, suppose M is the length of the filter. Examine the weight-update formula

$w(n+1) = w(n) + \frac{\tilde{\mu}}{\|u(n)\|^2} u(n)e^*(n)$, where $e(n) = d(n) - w^H(n)u(n)$

These calculations involve about 5M multiplications (or divisions) and 2M additions per iteration. Hence, the computational complexity is O(M).

(b) For the APAF, suppose N is the order of the filter. Examine the weight-update formula

$w(n+1) = w(n) + \tilde{\mu}A^H(n)(A(n)A^H(n))^{-1}e(n)$

where

$A^H(n) = [u(n), u(n-1), \ldots, u(n-N+1)]$

$e(n) = d(n) - A(n)w(n)$

$d(n) = [d(n), d(n-1), \ldots, d(n-N+1)]^T$

Here we see that the computation involved in the APAF is about N times that of the NLMS filter. Hence, the computational complexity of the APAF is O(MN).
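A rough empirical check of these per-iteration costs (illustrative only; the sizes are arbitrary choices, and the additional O(N^3) cost of inverting the N x N matrix is ignored in the O(MN) figure above):

    import numpy as np
    from timeit import timeit

    rng = np.random.default_rng(2)
    M, N = 256, 8
    w = rng.standard_normal(M)
    u = rng.standard_normal(M)
    d = rng.standard_normal(N)
    A = rng.standard_normal((N, M))

    def nlms_step():
        e = d[0] - w @ u                              # scalar error
        return w + (0.5 / (u @ u)) * u * e            # O(M) work per iteration

    def apaf_step():
        e = d - A @ w                                 # N-dimensional error vector
        G = A @ A.T                                   # about N^2 M multiplications
        return w + 0.5 * A.T @ np.linalg.solve(G, e)  # O(MN) work, plus O(N^3) solve

    print(timeit(nlms_step, number=1000))
    print(timeit(apaf_step, number=1000))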
