
Chapter 3 Pulse Modulation

We migrate from analog modulation (continuous in both time and value) to digital modulation (discrete in both time and value) through pulse modulation (discrete in time but possibly still continuous in value).

3.1 Pulse Modulation


o Families of pulse modulation
n Analog pulse modulation
o A periodic pulse train is used as the carrier (analogous to a sinusoidal carrier).
o Some characteristic feature of each pulse, such as amplitude, duration, or position, is varied in a continuous manner in accordance with the sampled message signal.
n Digital pulse modulation
o Some characteristic feature of the carrier is varied in a digital manner in accordance with the sampled, digitized message signal.

Chapter 3-2

3.2 Sampling Theorem

o $T_s$: sampling period
o $f_s = 1/T_s$: sampling rate
o The ideally (instantaneously) sampled signal is

$$g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s)$$

o Its Fourier transform is

$$G_\delta(f) = \int_{-\infty}^{\infty} \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s)\,e^{-j2\pi f t}\,dt = \sum_{n=-\infty}^{\infty} g(nT_s)\,e^{-j2\pi n T_s f}$$

Chapter 3-3

3.2 Sampling Theorem


o Given: $G_\delta(f) = \sum_{n=-\infty}^{\infty} g(nT_s)\,e^{-j2\pi n T_s f}$

o Claim: $G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - m f_s)$

Chapter 3-4

3.2 Spectrum of Sampled Signal


Let $L(f) = f_s \sum_{m=-\infty}^{\infty} G(f - m f_s)$, and notice that it is periodic with period $f_s$.

By Fourier series expansion,

$$L(f) = \sum_{n=-\infty}^{\infty} c_n \exp\!\left(j\,\frac{2\pi n}{f_s}\,f\right), \quad \text{where } c_n = \frac{1}{f_s}\int_{-f_s/2}^{f_s/2} L(f)\exp\!\left(-j\,\frac{2\pi n}{f_s}\,f\right) df$$

$$c_n = \int_{-f_s/2}^{f_s/2} \sum_{m=-\infty}^{\infty} G(f - m f_s)\exp\!\left(-j\,\frac{2\pi n}{f_s}\,f\right) df$$

Chapter 3-5

$$c_n = \sum_{m=-\infty}^{\infty}\int_{-f_s/2 - m f_s}^{f_s/2 - m f_s} G(s)\exp\!\left(-j\,\frac{2\pi n}{f_s}(s + m f_s)\right) ds, \qquad s = f - m f_s$$

$$= \sum_{m=-\infty}^{\infty}\int_{-f_s/2 - m f_s}^{f_s/2 - m f_s} G(s)\exp\!\left(-j\,\frac{2\pi n}{f_s}\,s\right) ds = \int_{-\infty}^{\infty} G(s)\exp\!\left(-j\,\frac{2\pi n}{f_s}\,s\right) ds = g(-nT_s)$$

Therefore

$$L(f) = \sum_{n=-\infty}^{\infty} g(-nT_s)\exp\!\left(j\,\frac{2\pi n}{f_s}\,f\right) = \sum_{m=-\infty}^{\infty} g(mT_s)\exp(-j2\pi m T_s f), \quad \text{where } m = -n,$$

which equals $G_\delta(f)$. This proves the claim.

Chapter 3-6

3.2 First Important Conclusion from Sampling

o Uniform sampling in the time domain results in a periodic spectrum with a period equal to the sampling rate:

$$g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s) \;\Longleftrightarrow\; G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - m f_s)$$

Chapter 3-7

3.2 Reconstruction from Sampling


Take $f_s = 2W$. Passing the sampled signal through an ideal lowpass filter of bandwidth $W$ recovers

$$G(f) = \frac{1}{2W}\,G_\delta(f) \quad \text{for } |f| \le W.$$

Chapter 3-8

3.2 Aliasing due to Sampling

When $f_s < 2W$, $G(f)$ cannot be reconstructed from the undersampled samples.

Chapter 3-9

Po-Ning Chen@ece.nctu

3.2 Second Important Conclusion for Sampling


o A band-limited signal of finite energy with bandwidth W can be completely described by its samples taken at a sampling rate $f_s \ge 2W$.
n 2W is commonly referred to as the Nyquist rate.
o How do we reconstruct a band-limited signal from its samples (with $f_s \ge 2W$)?

Chapter 3-10

$$g(t) = \int_{-\infty}^{\infty} G(f)\,e^{j2\pi f t}\,df = \int_{-W}^{W} \frac{1}{f_s}\,G_\delta(f)\,e^{j2\pi f t}\,df \qquad \left(G(f) = \tfrac{1}{f_s} G_\delta(f) \text{ for } |f| \le W;\ \text{see Slides 3-4 ~ 3-6}\right)$$

$$= \frac{1}{f_s}\int_{-W}^{W} \sum_{n=-\infty}^{\infty} g(nT_s)\,e^{-j2\pi n T_s f}\,e^{j2\pi f t}\,df = \frac{1}{f_s}\sum_{n=-\infty}^{\infty} g(nT_s)\int_{-W}^{W} e^{j2\pi (t - nT_s) f}\,df$$

$$= \frac{2W}{f_s}\sum_{n=-\infty}^{\infty} g(nT_s)\,\frac{\sin[2\pi W(t - nT_s)]}{2\pi W(t - nT_s)} = \sum_{n=-\infty}^{\infty} g(nT_s)\,\big(2WT_s\,\mathrm{sinc}[2W(t - nT_s)]\big)$$

$2WT_s\,\mathrm{sinc}[2W(t-nT_s)]$ plays the role of an interpolation function for the samples.

Chapter 3-11
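To make the interpolation formula concrete, here is a minimal numerical sketch (Python with NumPy assumed; the test signal, bandwidth, and truncation length are illustrative choices, not from the slides) that reconstructs a bandlimited signal from its samples using $2WT_s\,\mathrm{sinc}[2W(t - nT_s)]$.

```python
import numpy as np

W = 4.0                                # assumed signal bandwidth (Hz)
fs = 2 * W                             # sample at the Nyquist rate
Ts = 1.0 / fs

def g(t):
    # A bandlimited test signal: all components lie below W.
    return np.cos(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)

n = np.arange(-200, 201)               # truncate the infinite sum
samples = g(n * Ts)                    # g(nTs)

def reconstruct(t):
    # g(t) = sum_n g(nTs) * 2*W*Ts * sinc(2W(t - nTs)); np.sinc(x) = sin(pi x)/(pi x)
    return np.sum(samples * 2 * W * Ts * np.sinc(2 * W * (t - n * Ts)))

t0 = 0.123
print(g(t0), reconstruct(t0))          # the two values should agree closely
```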

3.2 Band-Unlimited Signals


o The signals encountered in practice are often not strictly bandlimited.
o Hence, there is always some aliasing after sampling.
o To combat the effects of aliasing, a lowpass anti-aliasing filter is used to attenuate the frequency components outside [-fs/2, fs/2].
o In this case, the signal after passing through the anti-aliasing filter is often treated as bandlimited with bandwidth fs/2 (i.e., fs = 2W). Hence,

$$g(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\mathrm{sinc}\!\left(\frac{t}{T_s} - n\right)$$

Chapter 3-12

3.2 Interpolation in terms of Filtering


o Observe that

$$g(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\mathrm{sinc}\!\left(\frac{t}{T_s} - n\right)$$

is indeed a convolution between $g_\delta(t)$ and $\mathrm{sinc}(t/T_s)$:

$$g_\delta(t) * \mathrm{sinc}\!\left(\frac{t}{T_s}\right) = \int_{-\infty}^{\infty} g_\delta(\tau)\,\mathrm{sinc}\!\left(\frac{t - \tau}{T_s}\right) d\tau = \int_{-\infty}^{\infty} \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(\tau - nT_s)\,\mathrm{sinc}\!\left(\frac{t - \tau}{T_s}\right) d\tau$$

$$= \sum_{n=-\infty}^{\infty} g(nT_s)\int_{-\infty}^{\infty} \delta(\tau - nT_s)\,\mathrm{sinc}\!\left(\frac{t - \tau}{T_s}\right) d\tau$$

Chapter 3-13

(Continued from the previous slide.)

$$g_\delta(t) * \mathrm{sinc}\!\left(\frac{t}{T_s}\right) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\mathrm{sinc}\!\left(\frac{t}{T_s} - n\right) = g(t)$$

Reconstruction filter (interpolation filter): $h(t) = \mathrm{sinc}(t/T_s)$, i.e., $H(f) = T_s\,\mathrm{rect}(T_s f)$, an ideal lowpass filter passing $|f| \le f_s/2$.

[Figure: $g_\delta(t) \rightarrow H(f) \rightarrow g(t)$.]

Chapter 3-14

3.2 Physical Realization of Reconstruction Filter


o An ideal lowpass filter is not physically realizable.
o Instead, we can use an anti-aliasing filter of bandwidth W,
and use a sampling rate fs > 2W. Then the spectrum of a
reconstruction filter can be shaped like:

Po-Ning Chen@ece.nctu

Chapter 3-15

[Figure: signal spectrum with bandwidth W; signal spectrum after sampling with fs > 2W; the physically realizable reconstruction filter; ideal filter of bandwidth fs/2.]

Because the transition band of the realizable filter lies entirely inside the guard band,

$$g_\delta(t) * h_{\text{realizable}}(t) \;\Longleftrightarrow\; G_\delta(f)\,H_{\text{realizable}}(f) = G_\delta(f)\,H_{\text{ideal}}(f) \;\Longleftrightarrow\; g_\delta(t) * h_{\text{ideal}}(t)$$

Chapter 3-16

3.3 Pulse-Amplitude Modulation (PAM)


o PAM
n The amplitude of regularly spaced pulses is varied in
proportion to the corresponding sample values of a
continuous message signal.
Notably, the top of each pulse

is maintained flat. So this is


PAM, not natural sampling for
which the message signal is
directly multiplied by a
periodic train of rectangular
pulses.

Chapter 3-17

Po-Ning Chen@ece.nctu

3.3 Pulse-Amplitude Modulation (PAM)


o The operation of generating a PAM signal is often referred to as sample and hold.
o This sample-and-hold process can also be analyzed through a filtering technique:

$$s(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\,h(t - nT_s) = m_\delta(t) * h(t)$$

where

$$h(t) = \begin{cases} 1, & 0 < t < T \\ 1/2, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases} \qquad \text{and} \qquad m_\delta(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\,\delta(t - nT_s).$$

Chapter 3-18

3.3 Pulse-Amplitude Modulation (PAM)


o Taking the filtering standpoint, the spectrum S(f) can be derived as:

$$S(f) = M_\delta(f)\,H(f) = \left[f_s \sum_{k=-\infty}^{\infty} M(f - k f_s)\right] H(f) = f_s \sum_{k=-\infty}^{\infty} M(f - k f_s)\,H(f)$$

n M(f) is the spectrum of the message signal with bandwidth W (or having passed through an anti-aliasing filter of bandwidth W).
n $f_s \ge 2W$.

Chapter 3-19

3.3 Pulse-Amplitude Modulation (PAM)


$$S(f) = f_s \sum_{k=-\infty}^{\infty} M(f - k f_s)\,H(f) = f_s M(f)H(f) + f_s \sum_{|k| \ge 1} M(f - k f_s)\,H(f)$$

A reconstruction (lowpass) filter retains the $k = 0$ term over the range $[-W, W]$ of $M(f)$; an equalizer with response $1/H(f)$ then restores $M(f)H(f) \to M(f)$.

Chapter 3-20

3.3 Feasibility of Equalizer Filter


o The distortion of M(f) is due to the factor H(f) in M(f)H(f), where

$$h(t) = \begin{cases} 1, & 0 < t < T \\ 1/2, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases} \qquad \text{or} \qquad H(f) = T\,\mathrm{sinc}(fT)\exp(-j\pi f T).$$

$$E(f) = \frac{1}{H(f)} = \begin{cases} \dfrac{1}{T\,\mathrm{sinc}(fT)}\exp(j\pi f T), & |f| \le W \\ 0, & \text{otherwise} \end{cases}$$

Question: Is the above E(f) feasible (realizable)?

Chapter 3-21

Write $E(f) = \tilde{E}(f)\exp(j\pi f T)$, where

$$\tilde{E}(f) = \begin{cases} \dfrac{1}{T\,\mathrm{sinc}(fT)}, & |f| \le W \\ 0, & \text{otherwise} \end{cases} \qquad \text{and} \qquad \frac{1}{T} > \frac{1}{T_s} = f_s > 2W.$$

[Figure: plot of $\tilde{E}(f)$ for T = 1, W = 1/8.]

This gives an equalizer that cascades a lowpass filter $\tilde{E}(f)$, producing $o_1(t)$, with a time advance $o(t) = o_1(t + T/2)$ (i.e., the factor $\exp(j\pi f T)$) — which is non-realizable! Why?

Because "$o_1(t) = 0$ for $t < 0$" does not imply "$o(t) = 0$ for $t < 0$".

Chapter 3-22

3.3 Feasibility of Equalizer Filter


o Causal

i (t )

h (t )

o (t )

n A reasonable assumption for a feasible linear filter


system is that:

For any i (t ) satisfying i (t ) = 0 for t < 0, we have o(t ) = 0 for t < 0.


n A necessary and sufficient condition for the above
assumption to hold is that h(t) = 0 for t < 0.

Chapter 3-23

Po-Ning Chen@ece.nctu

n Simplified proof:

(Sufficiency) If $h(t) = 0$ for $t < 0$, then

$$o(t) = \int_{-\infty}^{\infty} h(\tau)\,i(t - \tau)\,d\tau = \int_{0}^{\infty} h(\tau)\,i(t - \tau)\,d\tau,$$

and if in addition $i(t) = 0$ for $t < 0$, every term $i(t - \tau)$ with $\tau \ge 0$ vanishes when $t < 0$; hence $o(t) = 0$ for $t < 0$.

(Necessity) If $\int_{-\infty}^{-a} h(\tau)\,d\tau \ne 0$ for some $a > 0$, then take

$$i(t) = \begin{cases} 0, & t < 0 \\ 1, & t \ge 0. \end{cases}$$

This gives $o(-a) = \int_{-\infty}^{-a} h(\tau)\,d\tau \ne 0$, i.e., a nonzero output before the (causal) input is applied! Therefore $\int_{-\infty}^{-a} h(\tau)\,d\tau = 0$ for every $a > 0$, and differentiating with respect to $a$ gives $h(-a) = 0$ for every $a > 0$, i.e., $h(t) = 0$ for $t < 0$.

Chapter 3-24

3.3 Aperture Effect


o The distortion of M(f) due to M(f)H(f), where

$$h(t) = \begin{cases} 1, & 0 < t < T \\ 1/2, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases} \qquad \text{or} \qquad H(f) = T\,\mathrm{sinc}(fT)\exp(-j\pi f T),$$

is very similar to the distortion caused by the finite size of the scanning aperture in television; so it is named the aperture effect.
o If $T/T_s \le 0.1$, the amplitude distortion is less than 0.5%; hence, the equalizer may not be necessary.

Chapter 3-25

$$\tilde{E}(f) = \begin{cases} \dfrac{1}{T\,\mathrm{sinc}(fT)}, & |f| \le W \\ 0, & \text{otherwise} \end{cases} \qquad \text{and} \qquad \frac{1}{T} > \frac{1}{T_s} = f_s > 2W.$$

For example, with $T = 1$, $T_s = 10$, $W = 0.04$:

$$\tilde{E}(f) = \begin{cases} \dfrac{1}{\mathrm{sinc}(f)}, & |f| \le 0.04 \\ 0, & \text{otherwise} \end{cases}$$

[Figure: plot of $\tilde{E}(f)$ over $|f| \le 0.06$; the peak value at $|f| = 0.04$ is only 1.00264, i.e., an amplitude distortion below 0.5%.]

Chapter 3-26

3.3 Pulse-Amplitude Modulation


o Final notes on PAM
n PAM is rather stringent in its system requirement, such
as short duration of pulse.
n Also, the noise performance of PAM may not be
sufficient for long distance transmission.
n Accordingly, PAM is often used as a means of message
processing for time-division multiplexing, from which
conversion to some other form of pulse modulation is
subsequently made. Details will be discussed in Section
3.9.
Po-Ning Chen@ece.nctu

Chapter 3-27

3.4 Other Forms of Pulse Modulation


o Pulse-Duration Modulation (or Pulse-Width Modulation)
n Samples of the message signal are used to vary the
duration of the pulses.
o Pulse-Position Modulation
n The position of a pulse relative to its unmodulated time
of occurrence is varied in accordance with the message
signal.

Po-Ning Chen@ece.nctu

Chapter 3-28

[Figure: a message-modulated pulse train and the corresponding PDM and PPM waveforms.]

Chapter 3-29

3.4 Other Forms of Pulse Modulation


o Comparisons between PDM and PPM
n PPM is more power efficient because excessive pulse
duration consumes considerable power.
o Final note
n It is expected that PPM is immune to additive noise,
since additive noise only perturbs the amplitude of the
pulses rather than the positions.
n However, since the pulse cannot be made perfectly
rectangular in practice (namely, there exists a non-zero
transition time in pulse edge), the detection of pulse
positions is somehow still affected by additive noise.
Po-Ning Chen@ece.nctu

Chapter 3-30

See slide 2-162 for the figure of merit, where $D = \frac{1}{2}\frac{B_{T,\text{Carson}}}{W} - 1 = \frac{1}{2}B_{n,\text{Carson}} - 1$ (i.e., $B_n = B_T/W$).

3.5 Bandwidth-Noise Trade-Off

o PPM seems to be a better form of analog pulse modulation from the noise-performance standpoint. However, its noise performance is very similar to that of (analog) FM modulation:
n Its figure of merit is proportional to the square of the transmission bandwidth (i.e., 1/T) normalized with respect to the message bandwidth (W).
n There exists a threshold effect as SNR is reduced.
o Question: Can we do better than the square law in figure-of-merit improvement? Answer: Yes. By means of digital communication, we can realize an exponential law!

Chapter 3-31

3.6 Quantization Process


o Transform the continuous amplitude m = m(nTs) into the discrete approximate amplitude v = v(nTs).
o Such a discrete approximation is adequate in the sense that the human ear or eye can detect only finite intensity differences.

Chapter 3-32

3.6 Quantization Process


o We may drop the time instance nTs for convenience, when
the quantization process is memoryless and instantaneous
(hence, the quantization at time nTs is not affected by earlier
or later samples of the message signal.)
o Types of quantization
n Uniform
o Quantization step sizes are of equal length.
n Non-uniform
o Quantization step sizes are not of equal length.

Chapter 3-33

Po-Ning Chen@ece.nctu

o An alternative classification of quantization


n Midtread
n Midrise

midtread

Po-Ning Chen@ece.nctu

midrise

Chapter 3-34

17

3.6 Quantization Noise

Uniform midtread
quantizer

Chapter 3-35

Po-Ning Chen@ece.nctu

3.6 Quantization Noise


o Define the quantization noise to be $Q = M - V = M - g(M)$, where $g(\cdot)$ is the quantizer.
o Let the message M be uniformly distributed in $(-m_{\max}, m_{\max})$. So M has zero mean.
o Assume $g(\cdot)$ is symmetric and of midrise type; then $V = g(M)$ also has zero mean, and so does $Q = M - V$.
o Then the step size of the quantizer is given by

$$\Delta = \frac{2 m_{\max}}{L},$$

where L is the total number of representation levels.

Chapter 3-36

3.6 Quantization Noise


o Assume $g(\cdot)$ assigns the midpoint of each step interval to be the representation level. Then

$$\Pr\{Q \le q\} = \Pr\left\{(M \bmod \Delta) - \tfrac{\Delta}{2} \le q\right\} = \begin{cases} 0, & q < -\frac{\Delta}{2} \\ \dfrac{q}{\Delta} + \dfrac{1}{2}, & -\frac{\Delta}{2} \le q < \frac{\Delta}{2} \\ 1, & q \ge \frac{\Delta}{2} \end{cases}$$

or, in terms of the pdf,

$$f_Q(q) = \frac{1}{\Delta}, \qquad -\frac{\Delta}{2} \le q < \frac{\Delta}{2}.$$

Chapter 3-37

3.6 Quantization Noise


o So, the output signal-to-noise ratio is equal to:

$$\mathrm{SNR}_O = \frac{P}{\int_{-\Delta/2}^{\Delta/2} q^2\,\frac{1}{\Delta}\,dq} = \frac{P}{\Delta^2/12} = \frac{P}{\frac{1}{12}\left(\frac{2m_{\max}}{L}\right)^2} = \frac{3P}{m_{\max}^2}\,L^2$$

o The transmission bandwidth of a quantization system is conceptually proportional to the number of bits required per sample, i.e., $R = \log_2(L)$.
o We then conclude that $\mathrm{SNR}_O \propto 4^R$, which increases exponentially with the transmission bandwidth.

Chapter 3-38

Example 3.1 Sinusoidal Modulating Signal


o Let $m(t) = A_m \cos(2\pi f_c t)$. Then

$$P = \frac{A_m^2}{2} \quad \text{and} \quad m_{\max} = A_m \;\Rightarrow\; \mathrm{SNR}_O = \frac{3(A_m^2/2)}{A_m^2}\,L^2 = \frac{3}{2}L^2 = \frac{3}{2}\,4^R$$

$$10\log_{10}\mathrm{SNR}_O = 10\log_{10}(3/2) + R\cdot 10\log_{10}(4)\ \mathrm{dB} \approx (1.8 + 6R)\ \mathrm{dB}$$

  L     SNR_O (dB)
  32    31.8
  64    37.8
  128   43.8
  256   49.8

Note that in this example, we assume a full-load quantizer, in which no quantization loss is encountered due to saturation.

Chapter 3-39
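A quick numerical check of the (1.8 + 6R)-dB rule (Python with NumPy assumed; the midrise quantizer below and the test tone are illustrative, not the slides' exact implementation).

```python
import numpy as np

def quantize_uniform(m, m_max, R):
    """Midrise uniform quantizer with L = 2**R levels over (-m_max, m_max)."""
    L = 2 ** R
    delta = 2 * m_max / L
    idx = np.clip(np.floor(m / delta), -L // 2, L // 2 - 1)
    return (idx + 0.5) * delta           # representation level = interval midpoint

Am = 1.0
t = np.arange(0, 1, 1e-5)
m = Am * np.cos(2 * np.pi * 50.0 * t)    # full-load sinusoidal input

for R in (5, 6, 7, 8):                   # L = 32, 64, 128, 256
    v = quantize_uniform(m, Am, R)
    snr_db = 10 * np.log10(np.mean(m**2) / np.mean((m - v)**2))
    print(R, round(snr_db, 1), round(1.8 + 6 * R, 1))   # measured vs. 1.8 + 6R dB
```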

3.6 Quantization Noise


o In the previous analysis of quantization error, we assume
the quantizer assigns the mid-point of each step interval to
be the representative level.
o Questions:
n Can the quantization noise power be further reduced by
adjusting the representative levels?
n Can the quantization noise power be further reduced by
adopting a non-uniform quantizer?

Po-Ning Chen@ece.nctu

Chapter 3-40

20

3.6 Optimality of Scalar Quantizers


Representation levels: $v_1, v_2, \ldots, v_{L-1}, v_L$
Partitions: $I_1, I_2, \ldots, I_{L-1}, I_L$, with $\bigcup_{k=1}^{L} I_k = [-A, A)$

Notably, an interval $I_k$ need not be a single consecutive interval.

o Let $d(m, v_k)$ be the distortion incurred by representing m by $v_k$.
o Goal: find $\{I_k\}$ and $\{v_k\}$ such that the average distortion $D = E[d(M, g(M))]$ is minimized.

Chapter 3-41

3.6 Optimality of Scalar Quantizers


o Solution:

$$\min_{\{v_k\}} \min_{\{I_k\}} D = \min_{\{v_k\}} \min_{\{I_k\}} \sum_{k=1}^{L} \int_{I_k} d(m, v_k)\,f_M(m)\,dm$$

(I) For fixed $\{v_k\}$, determine the optimal $\{I_k\}$.
(II) For fixed $\{I_k\}$, determine the optimal $\{v_k\}$.

(I) If $d(m, v_k) \le d(m, v_j)$, then m should be assigned to $I_k$ rather than $I_j$:

$$I_k = \{m \in [-A, A) : d(m, v_k) \le d(m, v_j) \text{ for all } 1 \le j \le L\}$$

Chapter 3-42

(II) For fixed $\{I_k\}$, determine the optimal $\{v_k\}$:

$$\min_{\{v_k\}} \sum_{k=1}^{L} \int_{I_k} d(m, v_k)\,f_M(m)\,dm$$

Since

$$\frac{\partial}{\partial v_j}\sum_{k=1}^{L}\int_{I_k} d(m, v_k)\,f_M(m)\,dm = \frac{\partial}{\partial v_j}\int_{I_j} d(m, v_j)\,f_M(m)\,dm = \int_{I_j}\frac{\partial d(m, v_j)}{\partial v_j}\,f_M(m)\,dm,$$

a necessary condition for the optimal $v_j$ is:

$$\int_{I_j}\frac{\partial d(m, v_j)}{\partial v_j}\,f_M(m)\,dm = 0.$$

The Lloyd-Max algorithm repetitively applies (I) and (II) to search for the optimal quantizer.

Chapter 3-43

Example: Mean-Square Distortion


o $d(m, v_k) = (m - v_k)^2$

(I) $I_k = \{m \in [-A, A) : (m - v_k)^2 \le (m - v_j)^2 \text{ for all } 1 \le j \le L\}$ is then a consecutive interval (bounded by the midpoints between adjacent representation levels).

[Figure: representation levels $v_1, v_2, \ldots, v_{L-1}, v_L$ and the corresponding consecutive partitions $I_1, I_2, \ldots, I_{L-1}, I_L$.]

Chapter 3-44

Example: Mean-Square Distortion


(II) A necessary condition for the optimal $v_j$ is:

$$\int_{m_j}^{m_{j+1}} \frac{\partial (m - v_j)^2}{\partial v_j}\,f_M(m)\,dm = -2\int_{m_j}^{m_{j+1}} (m - v_j)\,f_M(m)\,dm = 0$$

$$\Rightarrow\quad v_{j,\text{optimal}} = \frac{\int_{m_j}^{m_{j+1}} m\,f_M(m)\,dm}{\int_{m_j}^{m_{j+1}} f_M(m)\,dm} = E[M \mid m_j \le M < m_{j+1}]$$

Exercise: What are the best $\{m_k\}$ and $\{v_k\}$ if M is uniformly distributed over $[-A, A)$?

Hint:
$$\min_{\{I_k\}}\min_{\{v_k\}} D = \frac{1}{2A}\min_{\{m_k\}}\sum_{k=1}^{L}\int_{m_k}^{m_{k+1}} \left(m - \frac{m_k + m_{k+1}}{2}\right)^2 dm.$$

Chapter 3-45
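A minimal sketch of the Lloyd-Max iteration for the mean-square distortion (Python with NumPy assumed; the Gaussian test source and the number of levels are illustrative). Step (I) assigns each sample to its nearest level, which for squared error puts the boundaries at midpoints of adjacent levels; step (II) moves each level to the centroid of its cell.

```python
import numpy as np

def lloyd_max(samples, L, iters=50):
    """Alternate steps (I) and (II) to design an L-level scalar quantizer."""
    v = np.linspace(samples.min(), samples.max(), L)     # initial levels
    for _ in range(iters):
        edges = (v[:-1] + v[1:]) / 2                     # (I) midpoint boundaries
        cells = np.digitize(samples, edges)              # assign samples to cells
        for k in range(L):                               # (II) centroid update
            in_cell = samples[cells == k]
            if in_cell.size:
                v[k] = in_cell.mean()
    return np.sort(v)

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)                             # illustrative source
print(np.round(lloyd_max(x, L=8), 3))                    # optimized representation levels
```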

3.7 Pulse-Code Modulation


(anti-alias)

Po-Ning Chen@ece.nctu

Chapter 3-46

23

3.7 Pulse-Code Modulation


o Non-uniform quantizers used for telecommunication (ITU-T G.711)
n ITU-T G.711: Pulse Code Modulation (PCM) of Voice Frequencies (1972)
o It consists of two laws: the A-law (mainly used in Europe) and the µ-law (mainly used in the US and Japan).
n This design helps to protect weak signals, which occur more frequently in, say, human voice.

Po-Ning Chen@ece.nctu

Chapter 3-47

3.7 Laws
o Quantization Laws
n A-law
o 13-bit uniform quantization
o Conversion to an 8-bit code
n µ-law
o 14-bit uniform quantization
o Conversion to an 8-bit code
n These two are referred to as compression laws since they use 8 bits to (lossily) represent 13- (or 14-) bit information.
Po-Ning Chen@ece.nctu

Chapter 3-48

24

3.7 A-law in G.711


o A-law (A = 87.6)

$$F_{A\text{-law}}(m) = \begin{cases} \dfrac{A\,|m|}{1 + \log(A)}\,\mathrm{sgn}(m), & 0 \le |m| \le \dfrac{1}{A} \quad \text{(linear mapping)} \\[6pt] \dfrac{1 + \log(A\,|m|)}{1 + \log(A)}\,\mathrm{sgn}(m), & \dfrac{1}{A} \le |m| \le 1 \quad \text{(logarithmic mapping)} \end{cases}$$

(Here log denotes the natural logarithm.)

Chapter 3-49

[Figure: $F_{A\text{-law}}(m)$ — compressor output versus input over $[-1, 1]$.]

Chapter 3-50

A piecewise linear approximation to the A-law.

[Figure: compressor output (8-bit PCM code, -128 ... 128 in steps of 16) versus input (13-bit uniform quantization, -4096 ... 4096), with $F_{A\text{-law}}(m)$ approximated by linear segments.]

Chapter 3-51

Compressor of A-law (assume nonnegative m)

  Input Values (Bits: 11 10 9 8 7 6 5 4 3 2 1 0)    Compressed Code Word (Chord | Step, Bits: 6 5 4 3 2 1 0)
  0 0 0 0 0 0 0 a b c d x                           0 0 0 | a b c d
  0 0 0 0 0 0 1 a b c d x                           0 0 1 | a b c d
  0 0 0 0 0 1 a b c d x x                           0 1 0 | a b c d
  0 0 0 0 1 a b c d x x x                           0 1 1 | a b c d
  0 0 0 1 a b c d x x x x                           1 0 0 | a b c d
  0 0 1 a b c d x x x x x                           1 0 1 | a b c d
  0 1 a b c d x x x x x x                           1 1 0 | a b c d
  1 a b c d x x x x x x x                           1 1 1 | a b c d

E.g. (3968)10 --> (1111,1000,0000)2 --> (111,1111)2 --> (127)10
E.g. (2176)10 --> (1000,1000,0000)2 --> (111,0001)2 --> (113)10

Chapter 3-52

Expander of A-law (assume nonnegative m)

  Compressed Code Word (Chord | Step, Bits: 6 5 4 3 2 1 0)    Raised Output Values (Bits: 11 10 9 8 7 6 5 4 3 2 1 0)
  0 0 0 | a b c d                                             0 0 0 0 0 0 0 a b c d 1
  0 0 1 | a b c d                                             0 0 0 0 0 0 1 a b c d 1
  0 1 0 | a b c d                                             0 0 0 0 0 1 a b c d 1 0
  0 1 1 | a b c d                                             0 0 0 0 1 a b c d 1 0 0
  1 0 0 | a b c d                                             0 0 0 1 a b c d 1 0 0 0
  1 0 1 | a b c d                                             0 0 1 a b c d 1 0 0 0 0
  1 1 0 | a b c d                                             0 1 a b c d 1 0 0 0 0 0
  1 1 1 | a b c d                                             1 a b c d 1 0 0 0 0 0 0

E.g. (113)10 --> (111,0001)2 --> (1000,1100,0000)2 --> (2240)10

In other words, code (111,0001)2 represents the input interval from (1000,1000,0000)2 = (2176)10 up to (1001,0000,0000)2 = (2304)10, and the expander outputs its midpoint: (2176 + 2304)/2 = (2240)10.

Chapter 3-53

3.7 µ-law in G.711

o µ-law (µ = 255)

$$F_{\mu\text{-law}}(m) = \mathrm{sgn}(m)\,\frac{\log(1 + \mu\,|m|)}{\log(1 + \mu)} \qquad \text{for } |m| \le 1.$$

n It is approximately linear at low |m|.
n It is approximately logarithmic at large |m|.

Chapter 3-54
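A minimal sketch of the continuous µ-law compressor and its inverse expander per the formula above (Python with NumPy assumed); the bit-exact G.711 segment tables of the following slides are not reproduced here.

```python
import numpy as np

MU = 255.0

def mu_compress(m):
    """Continuous mu-law compressor, |m| <= 1."""
    return np.sign(m) * np.log1p(MU * np.abs(m)) / np.log1p(MU)

def mu_expand(v):
    """Inverse of mu_compress."""
    return np.sign(v) * (np.power(1.0 + MU, np.abs(v)) - 1.0) / MU

m = np.linspace(-1, 1, 5)
v = mu_compress(m)
print(np.allclose(mu_expand(v), m))   # True: the expander inverts the compressor
```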

[Figure: $F_{\mu\text{-law}}(m)$ — compressor output versus input over $[-1, 1]$.]

Chapter 3-55

A piecewise linear approximation to the µ-law.

[Figure: compressor output (8-bit PCM code, -128 ... 128 in steps of 16) versus input (14-bit uniform quantization, -8159 ... 8159; segment breakpoints at ±31, ±95, ±223, ±479, ±991, ±2015, ±4063, ±8159), with $F_{\mu\text{-law}}(m)$ approximated by linear segments.]

14-bit uniform quantization ($2^{13} = 8192$ magnitude levels)

Chapter 3-56

Compressor of µ-law (assume nonnegative m)

  Raised Input Values (Bits: 12 11 10 9 8 7 6 5 4 3 2 1 0)    Compressed Code Word (Chord | Step, Bits: 6 5 4 3 2 1 0)
  0 0 0 0 0 0 0 1 a b c d x                                   0 0 0 | a b c d
  0 0 0 0 0 0 1 a b c d x x                                   0 0 1 | a b c d
  0 0 0 0 0 1 a b c d x x x                                   0 1 0 | a b c d
  0 0 0 0 1 a b c d x x x x                                   0 1 1 | a b c d
  0 0 0 1 a b c d x x x x x                                   1 0 0 | a b c d
  0 0 1 a b c d x x x x x x                                   1 0 1 | a b c d
  0 1 a b c d x x x x x x x                                   1 1 0 | a b c d
  1 a b c d x x x x x x x x                                   1 1 1 | a b c d

Raised Input = Input + 33 = Input + 21H
(For negative m, the raised input becomes Input - 33.)
An additional sign bit (bit 7 of the 8-bit code) is used to indicate whether the input signal is positive (1) or negative (0).

Chapter 3-57

Expander of µ-law (assume nonnegative m)

  Compressed Code Word (Chord | Step, Bits: 6 5 4 3 2 1 0)    Raised Output Values (Bits: 12 11 10 9 8 7 6 5 4 3 2 1 0)
  0 0 0 | a b c d                                             0 0 0 0 0 0 0 1 a b c d 1
  0 0 1 | a b c d                                             0 0 0 0 0 0 1 a b c d 1 0
  0 1 0 | a b c d                                             0 0 0 0 0 1 a b c d 1 0 0
  0 1 1 | a b c d                                             0 0 0 0 1 a b c d 1 0 0 0
  1 0 0 | a b c d                                             0 0 0 1 a b c d 1 0 0 0 0
  1 0 1 | a b c d                                             0 0 1 a b c d 1 0 0 0 0 0
  1 1 0 | a b c d                                             0 1 a b c d 1 0 0 0 0 0 0
  1 1 1 | a b c d                                             1 a b c d 1 0 0 0 0 0 0 0

Output = Raised Output - 33

Note that the combination of a compressor and an expander is called a compander.

Chapter 3-58

Comparison of the A-law and µ-law specified in G.711.

[Figure: compressor characteristics of the A-law and µ-law over the input range $[-1, 1]$.]

Chapter 3-59

3.7 Coding
o After the quantizer provides a symbol representing one of
256 possible levels (8 bits of information) at each sampled
time, the encoder will transform the symbol (or several
symbols) into a code character (or code word) that is
suitable for transmission over a noisy channel.
o Example. Binary code.
  11100100 --> 1 1 1 0 0 1 0 0
  (Figure labels: 0 = change, 1 = unchange.)

Po-Ning Chen@ece.nctu

Chapter 3-60

30

3.7 Coding
o Example. Ternary code (Pseudo-binary code).
  00011011 --> A C A B B C B B
  (each bit is mapped onto one of the three symbols A, B, C)
Through the help of coding, the receiver may be able to
detect (or even correct) the transmission errors due to noise.
For example, it is impossible to receive ABABBABB, since
this is not a legitimate code word (character).
Po-Ning Chen@ece.nctu

Chapter 3-61

3.7 Coding
o Example of error correcting code Three-times repetition
code (to protect Bluetooth packet header).
00011011 --> 000,000,000,111,111,000,111,111
Then majority law can be applied at the receiver to
correct one-bit error.
oChannel (error correcting) codes are designed to
compensate the channel noise, while line codes are simply
used as the electrical representation of a binary data stream
over the electrical line.
Po-Ning Chen@ece.nctu

Chapter 3-62

31

3.7 Line Codes


(a) Unipolar nonreturn-to-zero
(NRZ) signaling
(b) Polar nonreturn-to-zero (NRZ)
signaling
(c) Unipolar return-to-zero (RZ)
signaling
(d) Bipolar return-to-zero (BRZ)
signaling
(e) Split-phase (Manchester code)

Chapter 3-63

Po-Ning Chen@ece.nctu

3.7 Derivation of PSD


o From Slide 1-117, the general formula for the PSD is:

$$\mathrm{PSD} = \lim_{T\to\infty}\frac{1}{2T}E\big[S_{2T}(f)\,S_{2T}^*(f)\big], \quad \text{where } s_{2T}(t) = s(t)\cdot 1\{|t| \le T\}.$$

For a line-coded signal, $s(t) = \sum_{n=-\infty}^{\infty} a_n\,g(t - nT_b)$, where $g(t) = 0$ outside $[0, T_b)$.

Hence

$$S_{2NT_b}(f) = G(f)\sum_{n=-N}^{N-1} a_n e^{-j2\pi f n T_b}$$

$$\Rightarrow\quad \mathrm{PSD} = \lim_{N\to\infty}\frac{1}{2NT_b}\,|G(f)|^2\sum_{n=-N}^{N-1}\sum_{m=-N}^{N-1}E[a_n a_m]\,e^{-j2\pi f(n-m)T_b}$$

Chapter 3-64

$$\mathrm{PSD} = \lim_{N\to\infty}\frac{1}{2NT_b}\,|G(f)|^2\sum_{n=-N}^{N-1}\sum_{m=-N}^{N-1}E[a_n a_m^*]\,e^{-j2\pi f(n-m)T_b}$$

$$= |G(f)|^2 \lim_{N\to\infty}\frac{1}{2NT_b}\sum_{m=-N}^{N-1}\sum_{n=-\infty}^{\infty}\phi_a(n-m)\,e^{-j2\pi f(n-m)T_b} = |G(f)|^2 \lim_{N\to\infty}\frac{1}{2NT_b}\sum_{m=-N}^{N-1}\sum_{k=-\infty}^{\infty}\phi_a(k)\,e^{-j2\pi f k T_b}$$

$$= |G(f)|^2\,\frac{1}{T_b}\sum_{k=-\infty}^{\infty}\phi_a(k)\,e^{-j2\pi f k T_b}, \qquad \text{where } \phi_a(k) = E[a_n a_{n+k}].$$

For i.i.d. $\{a_n\}$ with mean $\mu_a$ and variance $\sigma_a^2$,

$$\frac{1}{T_b}\sum_{k=-\infty}^{\infty}\phi_a(k)\,e^{-j2\pi f k T_b} = \frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b}\sum_{k=-\infty}^{\infty} e^{-j2\pi f k T_b} = \frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta\!\left(f - \frac{k}{T_b}\right)$$

Chapter 3-65

3.7 Power Spectra of Line Codes

o Unipolar nonreturn-to-zero (NRZ) signaling
n Also named on-off signaling.
n Disadvantage: waste of power due to its non-zero-mean nature (the PSD does not approach zero at zero frequency).

$$s(t) = \sum_{n=-\infty}^{\infty} a_n\,g(t - nT_b), \quad \text{where } \{a_n\}_{n=-\infty}^{\infty} \text{ is zero/one i.i.d. and } g(t) = \begin{cases} A, & 0 \le t < T_b \\ 0, & \text{otherwise} \end{cases}$$

Chapter 3-66

3.7 Power Spectra of Line Codes

n PSD of Unipolar NRZ (with $\mu_a = 1/2$, $\sigma_a^2 = 1/4$, and $|G(f)|^2 = A^2 T_b^2\,\mathrm{sinc}^2(f T_b)$):

$$\mathrm{PSD}_{\text{U-NRZ}} = |G(f)|^2\left[\frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta\!\left(f - \frac{k}{T_b}\right)\right] = A^2 T_b^2\,\mathrm{sinc}^2(f T_b)\left[\frac{1}{4T_b} + \frac{1}{4T_b^2}\sum_{k=-\infty}^{\infty}\delta\!\left(f - \frac{k}{T_b}\right)\right]$$

$$= \frac{A^2 T_b}{4}\,\mathrm{sinc}^2(f T_b)\left[1 + \frac{1}{T_b}\sum_{k=-\infty}^{\infty}\delta\!\left(f - \frac{k}{T_b}\right)\right] = \frac{A^2 T_b}{4}\,\mathrm{sinc}^2(f T_b)\left[1 + \frac{1}{T_b}\,\delta(f)\right]$$

(The last step uses $\mathrm{sinc}^2(f T_b) = 0$ at $f = k/T_b$ for every $k \ne 0$.)

Chapter 3-67

3.7 Power Spectra of Line Codes

o Polar nonreturn-to-zero (NRZ) signaling
n The previous PSD of Unipolar NRZ suggests that a zero-mean data sequence is preferred.

$$s(t) = \sum_{n=-\infty}^{\infty} a_n\,g(t - nT_b), \quad \text{where } \{a_n\}_{n=-\infty}^{\infty} \text{ is } \pm 1 \text{ i.i.d. and } g(t) = \begin{cases} A, & 0 \le t < T_b \\ 0, & \text{otherwise} \end{cases}$$

$$\mathrm{PSD}_{\text{P-NRZ}} = |G(f)|^2\left[\frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta\!\left(f - \frac{k}{T_b}\right)\right] = A^2 T_b\,\mathrm{sinc}^2(f T_b)$$

Chapter 3-68

3.7 Power Spectra of Line Codes

o Unipolar return-to-zero (RZ) signaling
n An attractive feature of this line code is the presence of delta functions at f = -1/Tb, 0, 1/Tb in the PSD, which can be used for bit-timing recovery at the receiver.
n Disadvantage: it requires 3 dB more power than polar return-to-zero signaling.

$$s(t) = \sum_{n=-\infty}^{\infty} a_n\,g(t - nT_b), \quad \text{where } \{a_n\}_{n=-\infty}^{\infty} \text{ is zero/one i.i.d. and } g(t) = \begin{cases} A, & 0 \le t < T_b/2 \\ 0, & \text{otherwise} \end{cases}$$

Chapter 3-69

3.7 Power Spectra of Line Codes

n PSD of Unipolar RZ (with $\mu_a = 1/2$, $\sigma_a^2 = 1/4$, and $|G(f)|^2 = \frac{A^2 T_b^2}{4}\,\mathrm{sinc}^2(f T_b/2)$):

$$\mathrm{PSD}_{\text{U-RZ}} = |G(f)|^2\left[\frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta\!\left(f - \frac{k}{T_b}\right)\right] = \frac{A^2 T_b^2}{4}\,\mathrm{sinc}^2\!\left(\frac{f T_b}{2}\right)\left[\frac{1}{4T_b} + \frac{1}{4T_b^2}\sum_{k=-\infty}^{\infty}\delta\!\left(f - \frac{k}{T_b}\right)\right]$$

$$= \frac{A^2 T_b}{16}\,\mathrm{sinc}^2\!\left(\frac{f T_b}{2}\right)\left[1 + \frac{1}{T_b}\sum_{k=-\infty}^{\infty}\delta\!\left(f - \frac{k}{T_b}\right)\right]$$

(Since $\mathrm{sinc}^2(f T_b/2)$ vanishes at $f = k/T_b$ for even $k \ne 0$, delta functions survive only at $f = 0$ and odd multiples of $1/T_b$.)

Chapter 3-70

3.7 Power Spectra of Line Codes

o Bipolar return-to-zero (BRZ) signaling
n Also named alternate mark inversion (AMI) signaling.
n No DC component and relatively insignificant low-frequency components in the PSD.

$$s(t) = \sum_{n=-\infty}^{\infty} a_n\,g(t - nT_b), \quad \text{where } g(t) = \begin{cases} A, & 0 \le t < T_b/2 \\ 0, & \text{otherwise} \end{cases}$$

Chapter 3-71

3.7 Power Spectra of Line Codes

n PSD of BRZ
o {an} is no longer i.i.d. With equally likely data bits,

$$E[a_n^2] = \frac{1}{2}(0) + \frac{1}{4}(-1)^2 + \frac{1}{4}(+1)^2 = \frac{1}{2}$$

$$E[a_n a_{n+1}] = \frac{1}{4}(-1) = -\frac{1}{4}$$

$$E[a_n a_{n+2}] = \frac{1}{16}(1)(1) + \frac{1}{16}(1)(-1) + \frac{1}{16}(-1)(1) + \frac{1}{16}(-1)(-1) = 0$$

$$E[a_n a_{n+m}] = 0 \quad \text{for } |m| > 1.$$

Chapter 3-72

3.7 Power Spectra of Line Codes

$$\mathrm{PSD}_{\text{BRZ}} = \frac{1}{T_b}\,|G(f)|^2\sum_{k=-\infty}^{\infty}\phi_a(k)\,e^{-j2\pi f k T_b}$$

$$= \frac{A^2 T_b^2}{4}\,\mathrm{sinc}^2\!\left(\frac{f T_b}{2}\right)\frac{1}{T_b}\left[-\frac{1}{4}e^{j2\pi f T_b} + \frac{1}{2} - \frac{1}{4}e^{-j2\pi f T_b}\right]$$

$$= \frac{A^2 T_b}{4}\,\mathrm{sinc}^2\!\left(\frac{f T_b}{2}\right)\left[\frac{1}{2} - \frac{1}{2}\cos(2\pi f T_b)\right] = \frac{A^2 T_b}{4}\,\mathrm{sinc}^2\!\left(\frac{f T_b}{2}\right)\sin^2(\pi f T_b)$$

Chapter 3-73

3.7 Power Spectra of Line Codes

o Split-phase (Manchester code)
n This signaling suppresses the DC component and has relatively insignificant low-frequency components, regardless of the signal statistics.
n Notably, for P-NRZ and BRZ, the DC component is suppressed only when the signal has the right statistics.

$$s(t) = \sum_{n=-\infty}^{\infty} a_n\,g(t - nT_b), \quad \text{where } \{a_n\}_{n=-\infty}^{\infty} \text{ is } \pm 1 \text{ i.i.d. and } g(t) = \begin{cases} A, & 0 \le t < T_b/2 \\ -A, & T_b/2 \le t < T_b \\ 0, & \text{otherwise} \end{cases}$$

Chapter 3-74

3.7 Power Spectra of Line Codes

n PSD of the Manchester code (with $\mu_a = 0$, $\sigma_a^2 = 1$):

$$\mathrm{PSD}_{\text{Manchester}} = |G(f)|^2\left[\frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta\!\left(f - \frac{k}{T_b}\right)\right]$$

$$= A^2 T_b^2\,\mathrm{sinc}^2\!\left(\frac{f T_b}{2}\right)\sin^2\!\left(\frac{\pi f T_b}{2}\right)\cdot\frac{1}{T_b} = A^2 T_b\,\mathrm{sinc}^2\!\left(\frac{f T_b}{2}\right)\sin^2\!\left(\frac{\pi f T_b}{2}\right)$$

Chapter 3-75

Let Tb = 1, and adjust A such that the total power of each line code is 1. This gives a fair comparison among the line codes.

$$\text{U-NRZ: power} = \frac{A^2}{2} = 1 \Rightarrow A = \sqrt{2}, \qquad \mathrm{PSD}_{\text{U-NRZ}} = \frac{1}{2}\,\mathrm{sinc}^2(f) + \frac{1}{2}\,\delta(f)$$

$$\text{P-NRZ: power} = A^2 = 1 \Rightarrow A = 1, \qquad \mathrm{PSD}_{\text{P-NRZ}} = \mathrm{sinc}^2(f)$$

$$\text{U-RZ: power} = \frac{A^2}{4} = 1 \Rightarrow A = 2, \qquad \mathrm{PSD}_{\text{U-RZ}} = \frac{1}{4}\,\mathrm{sinc}^2\!\left(\frac{f}{2}\right) + \frac{1}{4}\sum_{k=-\infty}^{\infty}\mathrm{sinc}^2\!\left(\frac{k}{2}\right)\delta(f - k)$$

$$\text{BRZ: power} = \frac{A^2}{4} = 1 \Rightarrow A = 2, \qquad \mathrm{PSD}_{\text{BRZ}} = \mathrm{sinc}^2\!\left(\frac{f}{2}\right)\sin^2(\pi f)$$

$$\text{Manchester: power} = A^2 = 1 \Rightarrow A = 1, \qquad \mathrm{PSD}_{\text{Manchester}} = \mathrm{sinc}^2\!\left(\frac{f}{2}\right)\sin^2\!\left(\frac{\pi f}{2}\right)$$

Chapter 3-76

[Figure: normalized PSDs of U-NRZ, P-NRZ, U-RZ, BRZ, and Manchester codes plotted versus frequency from 0 to 2 (with Tb = 1).]

Chapter 3-77
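A small sketch (Python with NumPy assumed) that evaluates the continuous parts of the unit-power PSDs above at Tb = 1; np.sinc(x) = sin(πx)/(πx), matching the sinc convention used in these slides. The delta-function lines of U-NRZ and U-RZ are noted in comments rather than evaluated.

```python
import numpy as np

f = np.linspace(1e-6, 2.0, 2001)       # frequency axis (Tb = 1); f = 0 avoided
psd = {
    "U-NRZ":      0.5 * np.sinc(f) ** 2,                        # + (1/2) delta(f)
    "P-NRZ":      np.sinc(f) ** 2,
    "U-RZ":       0.25 * np.sinc(f / 2) ** 2,                   # + lines at integer f
    "BRZ":        np.sinc(f / 2) ** 2 * np.sin(np.pi * f) ** 2,
    "Manchester": np.sinc(f / 2) ** 2 * np.sin(np.pi * f / 2) ** 2,
}
for name, s in psd.items():
    print(f"{name:10s}  PSD(f=0.5) = {np.interp(0.5, f, s):.3f}")
```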

3.7 Differential Encoding with Unipolar NRZ Line Coding

o 1 = no change and 0 = change.

Encoder: $d_n = \overline{d_{n-1} \oplus o_n}$, i.e., $d_n = d_{n-1}$ when the input bit $o_n = 1$ (no change) and $d_n = \overline{d_{n-1}}$ when $o_n = 0$ (change).

Decoder: $o_n = \overline{d_n \oplus d_{n-1}}$.

Chapter 3-78

3.7 Regeneration
o Regenerative repeater for PCM system
n It can completely remove the distortion if the decision
making device makes the right decision (on 1 or 0).

Po-Ning Chen@ece.nctu

Chapter 3-79

3.7 Decoding & Filtering


o After regenerating the received pulses one last time, the receiver decodes them and regenerates the original message signal (with acceptable quantization error).
o Finally, a lowpass reconstruction filter whose cutoff
frequency is equal to the message bandwidth W is applied at
the end (to remove the unnecessary high-frequency
components due to quantization).

Po-Ning Chen@ece.nctu

Chapter 3-80

40

3.8 Noise Consideration in PCM Systems


o Two major noise sources in PCM systems
n (Message-independent) Channel noise
n (Message-dependent) Quantization noise
o The quantization noise is often under the designer's control, and can be made negligible by using an adequate number of quantization levels.

Po-Ning Chen@ece.nctu

Chapter 3-81

3.8 Noise Consideration in PCM Systems


o The main effect of channel noise is to introduce bit errors.
n Notably, the symbol error rate is quite different from
the bit error rate.
n A symbol error may be caused by a one-bit error, a two-bit error, a three-bit error, and so on; so in general, one cannot derive the symbol error rate from the bit error rate (or vice versa) unless some special assumption is made.
n Considering the reconstruction of original analog signal,
an (bit) error in the most significant bit is more harmful
than an (bit) error in the least significant bit.
Po-Ning Chen@ece.nctu

Chapter 3-82

41

3.8 Error Threshold


o Eb/N0
n Eb: Transmitted signal energy per information bit
o E.g., information bit is encoded using three-times
repetition code, in which each code bit is transmitted
using one BPSK symbol with symbol energy Ec.
o Then Eb = 3 Ec.
n N0: One-sided noise spectral density
o The bit-error-rate is a function of Eb/N0 and transmission
speed (and implicitly bandwidth, etc).

Chapter 3-83

Po-Ning Chen@ece.nctu

3.8 Error Threshold


o Influence of Eb/N0 on BER at 10^5 bps

  Eb/N0 (dB)   BER      About one error in every
  4.3          10^-2    10^-3 second
  8.4          10^-4    10^-1 second
  10.6         10^-6    10 seconds
  12.0         10^-8    20 minutes
  13.0         10^-10   1 day
  14.0         10^-12   3 months

n The output signal-to-noise ratio of an analog FM receiver without pre/de-emphasis is typically 40-50 dB. Pre/de-emphasis may reduce the requirement by 13 dB.

Chapter 3-84

3.8 Error Threshold


o Error threshold
n The minimum Eb/N0 that achieves the required BER.
o By knowing the error threshold, one can always add a regenerative repeater where Eb/N0 is about to drop below the threshold; hence, long-distance transmission becomes feasible.
n In contrast, for analog transmission, distortion accumulates over a long-distance link.

Po-Ning Chen@ece.nctu

Chapter 3-85

3.9 Time-Division Multiplexing


o An important feature of the sampling process is conservation of time.
n In principle, the communication link is used only at the sampling instants.
o Hence, it may be feasible to put samples of other messages between adjacent samples of this message on a time-shared basis.
o This forms the time-division multiplex (TDM) system.
n A joint utilization of a common communication link by
a plurality of independent message sources.
Po-Ning Chen@ece.nctu

Chapter 3-86

43

3.9 Time-Division Multiplexing

o The commutator (1) takes a narrow sample of each of the N


input messages at a rate fs slightly higher than 2W, where W
is the cutoff frequency of the anti-aliasing filter, and (2)
interleaves these N samples inside the sampling interval Ts.
Po-Ning Chen@ece.nctu

Chapter 3-87

3.9 Time-Division Multiplexing

o The price we pay for TDM is that N samples must be squeezed into a time slot of duration Ts.

Po-Ning Chen@ece.nctu

Chapter 3-88

44

3.9 Time-Division Multiplexing


o Synchronization is essential for a satisfactory operation of
the TDM system.
n One possible procedure to synchronize the transmitter
and receiver clocks is to set aside a code element or
pulse at the end of a frame, and to transmit this pulse
every other frame only.

Po-Ning Chen@ece.nctu

Chapter 3-89

Example 3.2 The T1 System


o T1 system
n Carries 24 64-kbps voice channels with regenerative repeaters spaced at approximately 2-km intervals.
n Each voice signal is essentially limited to a band from 300 to 3100 Hz.
o Anti-aliasing filter with W = 3.1 kHz
o Sampling rate = 8 kHz (> 2W = 6.2 kHz)
n ITU-T G.711 µ-law is used with µ = 255.
n Each frame consists of 24 x 8 + 1 = 193 bits, where a single bit is added at the end of the frame for the purpose of synchronization.
Po-Ning Chen@ece.nctu

Chapter 3-90

45

193 bits/frame x 8000 frames/sec (one sample from each of the 24 voice channels per frame) = 1.544 Mbps

Example 3.2 The T1 System


n In addition to the 193 bits per frame (i.e., 1.544
Megabits per second), a telephone system must also
pass signaling information such as dial pulses and
on/off-hook.
o The least significant bit of each voice channel is
deleted in every sixth frame, and a signaling bit is
inserted in its place.
(DS=Digital Signal)

Po-Ning Chen@ece.nctu

Chapter 3-91

3.10 Digital Multiplexers

o The introduction of digital multiplexer enables us to


combine digital signals of various natures, such as
computer data, digitized voice signals, digitized facsimile
and television signals.
Po-Ning Chen@ece.nctu

Chapter 3-92

46

3.10 Digital Multiplexers


o The multiplexing of digital signals is accomplished by
using a bit-by-bit interleaving procedure with a selector
switch that sequentially takes a (or more) bit from each
incoming line and then applies it to the high-speed common
line.

Po-Ning Chen@ece.nctu

Chapter 3-93

3.10 Digital Multiplexers


o Digital multiplexers are categorized into two major groups.
1. 1st Group: Multiplex digital computer data for TDM
transmission over public switched telephone network.
n Require the use of modem technology.
2. 2nd Group: Multiplex low-bit-rate digital voice data
into high-bit-rate voice stream.
n Accommodate in the hierarchy that is varying from
one country to another.
n Usually, the hierarchy starts at 64 Kbps, named a
digital signal zero (DS0).
Po-Ning Chen@ece.nctu

Chapter 3-94

47

3.10 North American Digital TDM Hierarchy


o The first level hierarchy
n Combine 24 DS0 to obtain a primary rate DS1 at 1.544
Mb/s (T1 transmission)
o The second-level multiplexer
n Combine 4 DS1 to obtain a DS2 with rate 6.312 Mb/s
o The third-level multiplexer
n Combine 7 DS2 to obtain a DS3 at 44.736 Mb/s
o The fourth-level multiplexer
n Combine 6 DS3 to obtain a DS4 at 274.176 Mb/s
o The fifth-level multiplexer
n Combine 2 DS4 to obtain a DS5 at 560.160 Mb/s
Po-Ning Chen@ece.nctu

Chapter 3-95

3.10 North American Digital TDM Hierarchy


n The combined bit rate is higher than the multiple of the
incoming bit rates because of the addition of bit stuffing
and control signals.

Po-Ning Chen@ece.nctu

Chapter 3-96

48

3.10 North American Digital TDM Hierarchy


o Basic problems involved in the design of multiplexing
system
n Synchronization should be maintained to properly
recover the interleaved digital signals.
n Framing should be designed so that the individual multiplexed signals can be identified at the receiver.
n Variation in the bit rates of incoming signals should be
considered in the design.
o A 0.01% variation in the propagation delay produced by a 1-degree decrease in temperature will result in 100 fewer pulses in a 1000-km cable in which each pulse occupies about 1 meter of the cable.
Po-Ning Chen@ece.nctu

Chapter 3-97

3.10 Digital Multiplexers


o Synchronization and rate variation problems are resolved
by bit stuffing.
o Example 3.3. AT&T M12 (second-level multiplexer)
n 24 control bits are stuffed, and separated by sequences
of 48 data bits (12 from each DS1 input).

Po-Ning Chen@ece.nctu

Chapter 3-98

49

Po-Ning Chen@ece.nctu

Chapter 3-99

Example 3.3 AT&T M12 Multiplexer


o The control bits are labeled F, M, and C.
n Frame markers: In sequence of F0F1F0F1F0F1F0F1, where F0
= 0 and F1 = 1.
n Subframe markers: In sequence of M0M1M1M1, where M0 = 0
and M1 = 1.
n Stuffing indicators: In sequences of CI CI CI CII CII CII CIII CIII
CIII CIV CIV CIV, where all three bits of Cj equal 1s indicate
that a stuffing bit is added in the position of the first
information bit associated with the first DS1 bit stream that
follows the F1-control bit in the same subframe, and three 0s
in CjCjCj imply no stuffing.
o The receiver should use majority law to check whether a
stuffing bit is added.
Po-Ning Chen@ece.nctu

Chapter 3-100

50

Example 3.3 AT&T M12 Multiplexer


o These stuffed bits can be used to balance (or maintain) the
nominal input bit rates and nominal output bit rates.
n S = nominal bit stuffing rate
o The rate at which stuffing bits are inserted when both
the input and output bit rates are at their nominal
values.
n fin = nominal input bit rate
n fout = nominal output bit rate
n M = number of bits in a frame
n L = number of information bits (input bits) for one input
stream in a frame
Chapter 3-101

Po-Ning Chen@ece.nctu

$$\frac{L-1}{f_{in}} = 185.88082902\ \mu s, \qquad \frac{M}{f_{out}} = 186.31178707\ \mu s, \qquad \frac{L}{f_{in}} = 186.52849741\ \mu s$$

Example 3.3 AT&T M12 Multiplexer

o For M12 framing,

$$f_{in} = 1.544 \text{ Mbps}, \quad f_{out} = 6.312 \text{ Mbps}, \quad M = 288 \times 4 + 24 = 1176 \text{ bits}, \quad L = 288 \text{ bits}$$

$$\text{Duration of a frame} = \frac{M}{f_{out}} = S\,\frac{4(L-1)}{4 f_{in}} + (1 - S)\,\frac{4L}{4 f_{in}} = S\,\frac{L-1}{f_{in}} + (1 - S)\,\frac{L}{f_{in}}$$

(with probability S one information bit is replaced by a stuffed bit)

$$\Rightarrow\quad S = L - \frac{f_{in}}{f_{out}}\,M = 288 - \frac{1.544}{6.312}\times 1176 = 0.334601$$

Chapter 3-102

51

Example 3.3 AT&T M12 Multiplexer


o Allowable tolerances that maintain the nominal output bit rate
n A sufficient condition for the existence of an S in [0, 1] such that the nominal output bit rate can be matched:

$$\max_{S\in[0,1]}\left[S\,\frac{L-1}{f_{in}} + (1-S)\,\frac{L}{f_{in}}\right] \ge \frac{M}{f_{out}} \ge \min_{S\in[0,1]}\left[S\,\frac{L-1}{f_{in}} + (1-S)\,\frac{L}{f_{in}}\right]$$

$$\Longleftrightarrow\quad \frac{L}{f_{in}} \ge \frac{M}{f_{out}} \ge \frac{L-1}{f_{in}} \quad\Longleftrightarrow\quad \frac{L}{M}\,f_{out} \ge f_{in} \ge \frac{L-1}{M}\,f_{out}$$

$$1.5458 \text{ Mbps} = \frac{288}{1176}\times 6.312 \ge f_{in} \ge \frac{287}{1176}\times 6.312 = 1.54043 \text{ Mbps}$$

Chapter 3-103

Example 3.3 AT&T M12 Multiplexer


n This results in an allowable tolerance range:

$$1.5458 - 1.54043 = 6.312/1176 = 5.36735 \text{ kbps}$$

n In terms of ppm (pulses per million pulses),

$$\frac{10^6 + a_{ppm}}{10^6} = \frac{1.5458}{1.544}, \qquad \frac{10^6 - b_{ppm}}{10^6} = \frac{1.54043}{1.544} \quad\Rightarrow\quad a_{ppm} \approx 1164.8 \text{ and } b_{ppm} \approx 2312.18$$

o This tolerance is already much larger than the expected change in the bit rate of the incoming DS1 bit stream.

Chapter 3-104
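A quick numeric check of the stuffing-rate and tolerance computation above (plain Python; the constants are the ones given in Example 3.3).

```python
# Numeric check of the M12 bit-stuffing example.
f_in, f_out = 1.544e6, 6.312e6      # nominal DS1 input and DS2 output bit rates
M, L = 1176, 288                    # bits per M12 frame / info bits per input stream per frame

S = L - (f_in / f_out) * M          # nominal stuffing rate
f_in_max = (L / M) * f_out          # largest input rate that can still be matched
f_in_min = ((L - 1) / M) * f_out    # smallest input rate that can still be matched

print(round(S, 6))                               # ~0.334601
print(round(f_in_max), round(f_in_min))          # ~1545796 and ~1540429 bits/s
print(round(f_in_max - f_in_min))                # ~5367 bits/s tolerance
```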

3.11 Virtues, Limitations, and Modifications of


PCM
o Virtues of PCM systems
n Robustness to channel noise and interference
n Efficient regeneration of coded signal along the transmission path
n Efficient exchange of increased channel bandwidth for improved
signal-to-noise ratio, obeying an exponential law.
n Uniform format for different kinds of baseband signal
transmission; hence, facilitate their integration in a common
network.
n Message sources are easily dropped or reinserted in a TDM
system.
n Secure communication through the use of encryption/decryption.
Po-Ning Chen@ece.nctu

Chapter 3-105

3.11 Virtues, Limitations, and Modifications of


PCM
o Two limitations of PCM system (in the past)
n Complexity
n Bandwidth
o Nowadays, with the advance of VLSI technology, and with the availability of wideband communication channels (such as fiber) and compression techniques (to reduce the bandwidth demand), the above two limitations are greatly relieved.

Po-Ning Chen@ece.nctu

Chapter 3-106

53

3.12 Delta Modulation


o Delta Modulation (DM)
n The message is oversampled (at a rate much higher than
the Nyquist rate) to purposely increase the correlation
between adjacent samples.
n Then, the difference between adjacent samples is
encoded instead of the sample value itself.

Po-Ning Chen@ece.nctu

Chapter 3-107

Po-Ning Chen@ece.nctu

Chapter 3-108

54

3.12 Math Analysis of Delta Modulation


Let $m[n] = m(nT_s)$.
Let $m_q[n]$ be the DM approximation of $m(t)$ at time $nT_s$. Then

$$m_q[n] = m_q[n-1] + e_q[n] = \sum_{j=-\infty}^{n} e_q[j], \quad \text{where } e_q[n] = \Delta\,\mathrm{sgn}(m[n] - m_q[n-1]).$$

The transmitted code word is $\{[(e_q[n]/\Delta) + 1]/2\}_{n=-\infty}^{\infty}$.

Chapter 3-109

$$m_q[n] = m_q[n-1] + e_q[n] = \sum_{j=-\infty}^{n} e_q[j], \quad \text{where } e_q[n] = \Delta\,\mathrm{sgn}(m[n] - m_q[n-1]).$$

3.12 Delta Modulation

o The principal virtue of delta modulation is its simplicity.
n It only requires the use of a comparator, quantizer, and accumulator.

[Block diagram: the sampled input m[n] (message bandwidth W) is compared with the accumulator output; the one-bit quantizer produces $e_q[n]$, which is accumulated to form $m_q[n]$.]

$$\left.\begin{aligned} m[n] &= m_q[n-1] + e[n] \\ m_q[n] &= m_q[n-1] + e_q[n] \end{aligned}\right\} \;\Rightarrow\; m_q[n] - m[n] = e_q[n] - e[n] \quad \text{(See Slide 3-131)}$$

Chapter 3-110
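A minimal sketch of a linear delta modulator/demodulator following the recursion above (Python with NumPy assumed; the tone, oversampling rate, and step size are illustrative choices).

```python
import numpy as np

def delta_modulate(m, delta):
    """Linear delta modulation: one-bit quantization of the prediction error."""
    mq = np.zeros(len(m))
    bits = np.zeros(len(m), dtype=int)
    prev = 0.0
    for n, sample in enumerate(m):
        eq = delta if sample >= prev else -delta   # e_q[n] = delta * sgn(m[n] - m_q[n-1])
        mq[n] = prev + eq                          # m_q[n] = m_q[n-1] + e_q[n]
        bits[n] = int(eq > 0)                      # transmitted bit = (e_q/delta + 1)/2
        prev = mq[n]
    return bits, mq

def delta_demodulate(bits, delta):
    """Receiver: accumulate +/-delta steps, then (in practice) lowpass filter."""
    return np.cumsum(delta * (2 * bits - 1))

fs, f0 = 8000.0, 100.0                   # oversampled relative to the 100-Hz tone
t = np.arange(0, 0.02, 1 / fs)
m = np.sin(2 * np.pi * f0 * t)
bits, mq = delta_modulate(m, delta=0.1)
print(np.max(np.abs(m - delta_demodulate(bits, 0.1))))   # staircase tracking error
```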

3.12 Delta Modulation


o Distortions due to delta modulation
n Slope overload distortion
n Granular noise

Po-Ning Chen@ece.nctu

Chapter 3-111

3.12 Delta Modulation


o Slope overload distortion
n To eliminate slope overload distortion, we require

$$\frac{\Delta}{T_s} \ge \max\left|\frac{dm(t)}{dt}\right| \qquad \text{(slope-overload condition)}$$

n So, increasing step size can reduce the slope-overload


distortion.
n An alternative solution is to use a dynamic step size Δ. (A delta modulator with fixed step size is often referred to as a linear delta modulator, due to its fixed slope, a basic property of linearity.)
Po-Ning Chen@ece.nctu

Chapter 3-112

56

3.12 Delta Modulation


o Granular noise
n mq[n] will hunt around a relatively flat segment of m(t).
n A remedy is to reduce the step size.
o A tradeoff in step size is therefore resulted for slope
overload distortion and granular noise.

Po-Ning Chen@ece.nctu

Chapter 3-113

3.12 Delta-Sigma Modulation


o Delta-sigma modulation
n In fact, the delta modulation distortion can be reduced
by increasing the correlation between samples.
n This can be achieved by integrating the message signal
m(t) prior to delta modulation.
n The integration process is equivalent to a preemphasis of the low-frequency content of the input
signal.

Po-Ning Chen@ece.nctu

Chapter 3-114

57

3.12 Delta-Sigma Modulation


n A side benefit of
integration-beforedelta-modulation,
which is named
delta-sigma
modulation, is that
the receiver design
is further simplified
(at the expense of a
more complex
transmitter).

Move the accumulator to the transmitter.

Po-Ning Chen@ece.nctu

Chapter 3-115

3.12 Delta-Sigma Modulation

A straightforward
structure
Since integration is
a linear operation,
the two integrators
before comparator
can be combined
into one after
comparator.
Po-Ning Chen@ece.nctu

Chapter 3-116

58

3.12 Math Analysis of Delta-Sigma Modulation


Let $i[n] = \int_{-\infty}^{nT_s} m(t)\,dt$.
Let $i_q[n]$ be the DM approximation of $i(t) = \int_{-\infty}^{t} m(\tau)\,d\tau$ at time $nT_s$.

Then $i_q[n] = i_q[n-1] + \varepsilon_q[n]$, where $\varepsilon_q[n] = \Delta\,\mathrm{sgn}(i[n] - i_q[n-1])$.

The transmitted code word is $\{[(\varepsilon_q[n]/\Delta) + 1]/2\}_{n=-\infty}^{\infty}$.

Since

$$\varepsilon_q[n] = i_q[n] - i_q[n-1] \approx i[n] - i[n-1] = \int_{(n-1)T_s}^{nT_s} m(t)\,dt \approx m(nT_s)\,T_s,$$

we only need a lowpass filter to smooth out the received signal at the receiver end. (See the previous slide.)

Chapter 3-117

3.12 Delta Modulation


o Final notes
n Delta(-sigma) modulation trades channel bandwidth
(e.g., much higher sampling rate) for reduced system
complexity (e.g., the receiver only demands a lowpass
filter).
n Can we trade increased system complexity for a reduced
channel bandwidth? Yes, by means of prediction
technique.
n In Section 3.13, we will introduce the basics of
prediction technique. Its application will be addressed in
subsequent sections.
Po-Ning Chen@ece.nctu

Chapter 3-118

59

3.13 Linear Prediction

o Consider a finite-duration impulse response (FIR) discrete-time filter, where p is the prediction order, producing the linear prediction

$$\hat{x}[n] = \sum_{k=1}^{p} w_k\,x[n-k]$$

Chapter 3-119

3.13 Linear Prediction


o Design objective
n Find the filter coefficients $w_1, w_2, \ldots, w_p$ so as to minimize the index of performance

$$J = E[e^2[n]], \quad \text{where } e[n] = x[n] - \hat{x}[n].$$

Chapter 3-120

Let $\{x[n]\}$ be stationary with autocorrelation function $R_X(k)$.

$$J = E\left[\left(x[n] - \sum_{k=1}^{p} w_k\,x[n-k]\right)^2\right] = E[x^2[n]] - 2\sum_{k=1}^{p} w_k\,E[x[n]x[n-k]] + \sum_{k=1}^{p}\sum_{j=1}^{p} w_k w_j\,E[x[n-k]x[n-j]]$$

$$= R_X[0] - 2\sum_{k=1}^{p} w_k R_X[k] + 2\sum_{k=1}^{p}\sum_{j>k} w_k w_j R_X[k-j] + \sum_{k=1}^{p} w_k^2 R_X[0]$$

$$\frac{\partial J}{\partial w_i} = -2R_X[i] + 2\sum_{j=1}^{i-1} w_j R_X[i-j] + 2\sum_{k=i+1}^{p} w_k R_X[k-i] + 2 w_i R_X[0] = -2R_X[i] + 2\sum_{j=1}^{p} w_j R_X[i-j] = 0$$

Chapter 3-121

$$\sum_{j=1}^{p} w_j R_X[i-j] = R_X[i] \quad \text{for } 1 \le i \le p.$$

The above optimality equations are called the Wiener-Hopf equations for linear prediction.

They can be rewritten in matrix form as:

$$\begin{bmatrix} R_X[0] & R_X[1] & \cdots & R_X[p-1] \\ R_X[1] & R_X[0] & \cdots & R_X[p-2] \\ \vdots & \vdots & \ddots & \vdots \\ R_X[p-1] & R_X[p-2] & \cdots & R_X[0] \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_p \end{bmatrix} = \begin{bmatrix} R_X[1] \\ R_X[2] \\ \vdots \\ R_X[p] \end{bmatrix}$$

or $\mathbf{R}_X \mathbf{w} = \mathbf{r}_X$. Optimal solution: $\mathbf{w}_o = \mathbf{R}_X^{-1}\mathbf{r}_X$.

Chapter 3-122
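A minimal numerical sketch (Python with NumPy assumed; the AR(2) test process and sample size are illustrative) that estimates the autocorrelation from data, builds the Toeplitz matrix discussed on the next slide, and solves the Wiener-Hopf equations directly.

```python
import numpy as np

def wiener_hopf(x, p):
    """Estimate R_X from data and solve R_X w = r_X for the order-p predictor."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    R = np.array([np.dot(x[:len(x)-k], x[k:]) / len(x) for k in range(p + 1)])
    Rx = np.array([[R[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz matrix
    return np.linalg.solve(Rx, R[1:p+1])

# Illustrative AR(2) test process: x[n] = 1.5 x[n-1] - 0.7 x[n-2] + white noise
rng = np.random.default_rng(1)
x = np.zeros(50_000)
for n in range(2, len(x)):
    x[n] = 1.5 * x[n-1] - 0.7 * x[n-2] + rng.normal()
print(np.round(wiener_hopf(x, p=2), 3))    # close to [1.5, -0.7]
```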

3.13 Toeplitz (Square) Matrix


o Any square matrix of the form

$$\begin{bmatrix} a_0 & a_1 & \cdots & a_{p-1} \\ a_1 & a_0 & \cdots & a_{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{p-1} & a_{p-2} & \cdots & a_0 \end{bmatrix}_{p\times p}$$

is said to be Toeplitz.
o A Toeplitz matrix of this (symmetric) form is uniquely determined by the p elements $[a_0, a_1, \ldots, a_{p-1}]$.

Chapter 3-123

3.13 Linear Adaptive Predictor


o The optimal $\mathbf{w}_o$ can only be obtained with knowledge of the autocorrelation function.
o Question: What if the autocorrelation function is unknown?
o Answer: Use a linear adaptive predictor.

Po-Ning Chen@ece.nctu

Chapter 3-124

62

3.13 Idea Behind Linear Adaptive Predictor


o To minimize J, we should update $w_i$ toward the bottom of the J-bowl.

$$g_i \triangleq \frac{\partial J}{\partial w_i}$$

n So when $g_i > 0$, $w_i$ should be decreased.
n On the contrary, $w_i$ should be increased if $g_i < 0$.
n Hence, we may define the update rule as:

$$\hat{w}_i[n+1] = \hat{w}_i[n] - \frac{1}{2}\mu\,g_i[n]$$

where $\mu$ is a chosen constant step size, and the factor $\frac{1}{2}$ is included only for convenience of analysis.

Chapter 3-125

o $g_i[n]$ can be approximated by:

$$g_i[n] = \frac{\partial J}{\partial w_i} = -2R_X(i) + 2\sum_{j=1}^{p} w_j R_X(i-j) \approx -2x[n]\,x[n-i] + 2\sum_{j=1}^{p}\hat{w}_j[n]\,x[n-j]\,x[n-i]$$

$$\Rightarrow\quad \hat{w}_i[n+1] = \hat{w}_i[n] + \mu\,x[n-i]\left(x[n] - \sum_{j=1}^{p}\hat{w}_j[n]\,x[n-j]\right) = \hat{w}_i[n] + \mu\,x[n-i]\,e[n]$$

Chapter 3-126

3.13 Structure of Linear Adaptive Predictor

Chapter 3-127

Po-Ning Chen@ece.nctu

3.13 Least Mean Square


o The pair below constitutes the popular least-mean-square (LMS) algorithm for linear adaptive prediction:

$$\begin{cases} \hat{w}_j[n+1] = \hat{w}_j[n] + \mu\,x[n-j]\,e[n] \\[4pt] e[n] = x[n] - \displaystyle\sum_{j=1}^{p}\hat{w}_j[n]\,x[n-j] \end{cases}$$

Chapter 3-128
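A minimal LMS sketch (Python with NumPy assumed; the step size and AR(2) test process are illustrative) implementing exactly the update pair above.

```python
import numpy as np

def lms_predictor(x, p, mu):
    """LMS adaptive predictor: w_j[n+1] = w_j[n] + mu * x[n-j] * e[n]."""
    w = np.zeros(p)
    for n in range(p, len(x)):
        past = x[n - p:n][::-1]           # [x[n-1], ..., x[n-p]]
        e = x[n] - np.dot(w, past)        # prediction error e[n]
        w = w + mu * past * e             # stochastic-gradient update
    return w

rng = np.random.default_rng(2)
x = np.zeros(20_000)
for n in range(2, len(x)):                # same AR(2) test process as before
    x[n] = 1.5 * x[n-1] - 0.7 * x[n-2] + rng.normal()
print(np.round(lms_predictor(x, p=2, mu=0.002), 2))   # should end up near [1.5, -0.7]
```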

3.14 Differential Pulse-Code Modulation


o Basic idea behind differential pulse-code modulation
n Adjacent samples often exhibit a high degree of correlation.
n If we can remove this redundancy between adjacent samples before encoding, a more efficient coded signal results.
n One way to remove the redundancy is to use linear prediction.
n Specifically, we encode e[n] instead of m[n], where

$$e[n] = m[n] - \hat{m}[n], \quad \text{with } \hat{m}[n] \text{ the linear prediction of } m[n].$$

Chapter 3-129

$$\text{Quantization noise power} = \frac{1}{12}\left(\frac{2 m_{\max}}{L}\right)^2 = \frac{m_{\max}^2}{3L^2}$$

3.14 DPCM

o For DPCM, the quantization error is on e[n], rather than on m[n] as for PCM.
o So the quantization error q[n] is expected to be smaller.

Chapter 3-130

3.14 DPCM
o Derive:

$$e_q[n] = e[n] + q[n]$$

$$m_q[n] = \hat{m}[n] + e_q[n] = \hat{m}[n] + e[n] + q[n] = m[n] + q[n]$$

So we have the same relation between $m_q[n]$ and $m[n]$ (as in Slide 3-110) but with a smaller $q[n]$.

Chapter 3-131
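A minimal first-order DPCM sketch (Python with NumPy assumed; the one-tap predictor coefficient, error range, and test tone are illustrative choices) showing the relation $m_q[n] = m[n] + q[n]$ derived above.

```python
import numpy as np

def dpcm(m, levels=16, a=0.9, e_max=0.5):
    """First-order DPCM sketch: m_hat[n] = a*mq[n-1]; the error e[n] is quantized."""
    delta = 2 * e_max / levels
    mq_prev = 0.0
    mq = np.zeros(len(m))
    for n, sample in enumerate(m):
        m_hat = a * mq_prev                                   # prediction from the decoded past
        e = sample - m_hat                                    # e[n] = m[n] - m_hat[n]
        idx = np.clip(np.floor(e / delta), -levels // 2, levels // 2 - 1)
        eq = (idx + 0.5) * delta                              # e_q[n] = e[n] + q[n]
        mq[n] = m_hat + eq                                    # hence mq[n] = m[n] + q[n]
        mq_prev = mq[n]
    return mq

t = np.arange(0, 1, 1 / 8000.0)
m = np.sin(2 * np.pi * 200.0 * t)                             # illustrative input
mq = dpcm(m)
print(round(10 * np.log10(np.mean(m**2) / np.mean((m - mq)**2)), 1))   # SNR_O in dB
```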

3.14 DPCM
o Notes
n DM system can be treated as a special case of DPCM.

Prediction filter = single delay


Quantizer => single-bit

Po-Ning Chen@ece.nctu

Chapter 3-132

66

3.14 DPCM
oDistortions due to DPCM
nSlope overload distortion
oThe input signal changes too rapidly for the prediction
filter to track it.
nGranular noise

Po-Ning Chen@ece.nctu

Chapter 3-133

3.14 Processing Gain


o The DPCM system can be described by:

$$m_q[n] = m[n] + q[n]$$

o So the output signal-to-noise ratio is:

$$\mathrm{SNR}_O = \frac{E[m^2[n]]}{E[q^2[n]]}$$

o We can rewrite SNR_O as:

$$\mathrm{SNR}_O = \frac{E[m^2[n]]}{E[e^2[n]]}\cdot\frac{E[e^2[n]]}{E[q^2[n]]} = G_p\,\mathrm{SNR}_Q,$$

where $e[n] = m[n] - \hat{m}[n]$ is the prediction error.

Chapter 3-134

3.14 Processing Gain


o In terminology,

$$G_p = \frac{E[m^2[n]]}{E[e^2[n]]} \quad \text{(processing gain)}, \qquad \mathrm{SNR}_Q = \frac{E[e^2[n]]}{E[q^2[n]]} \quad \text{(signal-to-quantization-noise ratio)}$$

Notably, $\mathrm{SNR}_Q$ can be treated as the SNR of the system $e_q[n] = e[n] + q[n]$.

Chapter 3-135

3.14 Processing Gain


o Usually, the contribution of SNR_Q to SNR_O is fixed and limited.
n One additional bit in quantization results in a 6-dB improvement.
o $G_p$ is the processing gain due to good prediction.
n The better the prediction, the larger $G_p$ is.

Po-Ning Chen@ece.nctu

Chapter 3-136

68

3.14 DPCM
o Final notes on DPCM
n Comparing DPCM with PCM in the case of voice
signals, the improvement is around 4-11 dB, depending
on the prediction order.
n The greatest improvement occurs in going from no
prediction to first-order prediction, with some additional
gain resulting from increasing the prediction order up to
4 or 5, after which little additional gain is obtained.
n For the same sampling rate (8 kHz) and signal quality, DPCM may provide a saving of about 8-16 kbps compared to standard PCM (64 kbps).
Chapter 3-137

Po-Ning Chen@ece.nctu

3.14 DPCM
Source: IEEE Communications Magazine, September 1997.

[Figure: subjective speech quality (Unacceptable / Poor / Fair / Good / Excellent) versus bit rate (roughly 2.4-64 kb/s) for various coders: MELP 2.4, FS-1015, FS-1016, IS-96, IS-54, IS-641, JDC, JDC2, GSM, GSM/2, G.723.1, G.729, G.728, G.727, G.726 (ADPCM), and G.711 (PCM). IS = Interim Standard, FS = Federal Standard.]

Chapter 3-138

3.15 Adaptive Differential Pulse-Code Modulation


o Adaptive prediction is used in DPCM.
o Can we also combine adaptive quantization into DPCM to yield voice quality comparable to PCM at a 32-kbps bit rate? The answer is YES, as seen from the previous figure.
n 32 kbps: 4 bits per sample at an 8-kHz sampling rate
n 64 kbps: 8 bits per sample at an 8-kHz sampling rate
o So, "adaptive" in ADPCM means being responsive to the changing level and spectrum of the input speech signal.
Chapter 3-139

Po-Ning Chen@ece.nctu

3.15 Adaptive quantization


o Adaptive quantization refers to a quantizer that operates with a time-varying step size Δ[n].
o Δ[n] is adjusted according to the power of the input sample m[n].
n Power = variance, if m[n] is zero-mean: $\sigma^2[n] = E[m^2[n]]$.
n In practice, we can only obtain an estimate of $E[m^2[n]]$.

Chapter 3-140

70

3.15 Adaptive quantization


o The estimate of E[m2[n]] can be done in two ways:
n Adaptive quantization with forward estimation (AQF)
o Estimate based on unquantized samples of the input
signals.
n Adaptive quantization with backward estimation (AQB)
o Estimate based on quantized samples of the input
signals.

Po-Ning Chen@ece.nctu

Chapter 3-141

3.15 AQF
o AQF is in principle a more accurate estimator. However it
requires
n an additional buffer to store unquantized samples for the
learning period.
n explicit transmission of level information to the receiver
(the receiver, even without noise, only has the quantized
samples).
n a processing delay (around 16 ms for speech) due to
buffering and other operations for AQF.
o The above requirements can be relaxed by using AQB.
Po-Ning Chen@ece.nctu

Chapter 3-142

71

3.15 AQB

A possible drawback of a feedback system is its potential instability.

However, stability of this system can be guaranteed if mq[n] is bounded.
Po-Ning Chen@ece.nctu

Chapter 3-143

3.15 APF and APB


o Likewise, the prediction approach used in ADPCM can be
classified into:
n Adaptive prediction with forward estimation (APF)
o Prediction based on unquantized samples of the input
signals.
n Adaptive prediction with backward estimation (APB)
o Prediction based on quantized samples of the input
signals.
o The pros and cons of APF versus APB are the same as those of AQF versus AQB.
o APB and AQB are the preferred combination in practical applications.
Po-Ning Chen@ece.nctu

Chapter 3-144

72

3.15 ADPCM

Adaptive prediction
with backward
estimation (APB).

Chapter 3-145

Po-Ning Chen@ece.nctu

3.16 Computer Experiment: Adaptive Delta


Modulation
This figure may be incorrect.
e[n]

o In this section, the


simplest form of
ADPCM
modulation with
AQB is simulated,
namely, ADM
with AQB.
o Comparison with
LDM (linear DM)
where step size is
fixed will also be
performed.
Po-Ning Chen@ece.nctu

eq [n ]

eq [n 1]

Chapter 3-146

73

3.16 Computer Experiment: Adaptive Delta


Modulation
I thus fixed it in this slide.
o In this section, the
simplest form of
ADPCM
modulation with
AQB is simulated,
namely, ADM
with AQB.
o Comparison with
LDM (linear DM)
where step size is
fixed will also be
performed.

e[n]

eq [n ]
eq [n 1]

Po-Ning Chen@ece.nctu

Chapter 3-147

3.16 Computer Experiment: Adaptive Delta Modulation

$$\Delta[n] = \begin{cases} \Delta[n-1]\left(1 + \dfrac{1}{2}\,\dfrac{e_q[n-1]}{e_q[n]}\right), & \text{if } \Delta[n-1] \ge \Delta_{\min} \\[6pt] \Delta_{\min}, & \text{if } \Delta[n-1] < \Delta_{\min} \end{cases}$$

where Δ[n] is the step size at iteration n, and $e_q[n]$ is the 1-bit quantizer output, which equals ±1.

$$m(t) = 10\sin\!\left(2\pi\,\frac{f_s}{8}\,t\right), \qquad \Delta_{\text{LDM}} = 1 \quad \text{and} \quad \Delta_{\min} = \frac{1}{100}$$

Chapter 3-148
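A minimal sketch of the ADM recursion above (Python with NumPy assumed; the sampling rate is an assumed value, and the max() call stands in for the Δmin floor of the rule).

```python
import numpy as np

def adm(m, delta_min=0.01, delta0=1.0):
    """Adaptive delta modulation sketch using the step-size rule above."""
    mq, delta, prev_bit = 0.0, delta0, 1
    track = np.zeros(len(m))
    for n, sample in enumerate(m):
        bit = 1 if sample >= mq else -1                     # e_q[n] = +/-1
        # Grow the step by 1.5x when the sign repeats, shrink by 0.5x when it flips;
        # max() enforces the Delta_min floor of the rule above.
        delta = max(delta * (1 + 0.5 * prev_bit / bit), delta_min)
        mq += bit * delta
        track[n] = mq
        prev_bit = bit
    return track

fs = 800.0                                                  # assumed sampling rate
t = np.arange(0, 0.05, 1 / fs)
m = 10 * np.sin(2 * np.pi * (fs / 8) * t)                   # the experiment's test tone
print(round(np.max(np.abs(m - adm(m))), 2))                 # peak tracking error
```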

74

3.16 Computer Experiment: Adaptive Delta


Modulation
LDM

ADM

Observation: ADM can achieve performance comparable to that of


LDM with a much lower bit rate.
Po-Ning Chen@ece.nctu

Chapter 3-149

3.17 MPEG Audio Coding Standard


o The ADPCM and various voice coding techniques
introduced above did not consider the human auditory
perception.
o In practice, a consideration on human auditory perception
can further improve the system performance (from the
human standpoint).
o The MPEG-1 standard is capable of achieving transparent,
perceptually lossless compression of stereophonic audio
signals at high sampling rate.
n Human subjective tests show that compression at a 6-to-1 ratio is perceptually indistinguishable from the original.
Po-Ning Chen@ece.nctu

Chapter 3-150

75

3.17 Characteristics of Human Auditory System


o Psychoacoustic characteristics of human auditory system
n Critical band
o The inner ear will scale the power spectra of
incoming signals non-linearly in the form of limited
frequency bands called the critical bands.
o Roughly, the inner ear can be modeled as 25
selective overlapping band-pass filters with
bandwidth < 100Hz for the lowest audible
frequencies and up to 5kHz for the highest audible
frequencies.

Po-Ning Chen@ece.nctu

Chapter 3-151

3.17 Characteristics of Human Auditory System


n Auditory masking
o When a low-level signal (the maskee) and a highlevel signal (the masker) occur simultaneously (in
the same critical band), and are close to each other in
frequency, the low-level signal will be made
inaudible (i.e., masked) by the high-level signal, if
the low-level one lies below a masking threshold.

Po-Ning Chen@ece.nctu

Chapter 3-152

76

3.17 Characteristics of Human Auditory System


o The masking threshold is frequency-dependent.

[Figure: within a critical band, the masker sets a masking threshold; SMR (signal-to-mask ratio) is measured from the signal level down to that threshold, SNR is the signal-to-noise ratio of an R-bit quantizer, and NMR (noise-to-mask ratio) = SMR - SNR.]

Chapter 3-153

3.17 MPEG Audio Coding Standard

Po-Ning Chen@ece.nctu

Chapter 3-154

77

3.17 MPEG Audio Coding Standard


o Time-to-frequency mapping network
n Divide the audio signal into a proper number of subbands, which is
a compromise design for computational efficiency and perceptual
performance.
o Psychoacoustic model
n Analyze the spectral content of the input audio signal and thereby
compute the signal-to-mask ratio.
o Quantizer-coder
n Decide how to apportion the available number of bits for the
quantization of the subband signals.
o Frame packing unit
n Assemble the quantized audio samples into a decodable bit stream.
Po-Ning Chen@ece.nctu

Chapter 3-155

3.18 Summary and Discussion


o Sampling transforms an analog waveform into a discrete-time, continuous-amplitude signal.
n Nyquist rate
o Quantization transforms a discrete-time, continuous-amplitude signal into discrete data.
n Humans can only detect finite intensity differences.
o PAM, PDM and PPM
o TDM (Time-division multiplexing)
o PCM, DM, DPCM, ADPCM
o Additional considerations in MPEG audio coding

Chapter 3-156
