
Information

Lecturer: Dr. Darren Ward, Room 811C


Email: d.ward@imperial.ac.uk, Phone: (759) 46230

Aims:
The aim of this part of the course is to give you an understanding of
how communication systems perform in the presence of noise.

Objectives:

By the end of the course you should be able to:

Compare the performance of various communication systems
Describe a suitable model for noise in communications
Determine the SNR performance of analog communication systems
Determine the probability of error for digital systems
Understand information theory and its significance in determining system performance

Reference books:

B.P. Lathi, Modern Digital and Analog Communication Systems, Oxford University Press, 1998
S. Haykin, Communication Systems, Wiley, 2001
L.W. Couch II, Digital and Analog Communication Systems, Prentice-Hall, 2001

Course material:
http://www.ee.ic.ac.uk/dward/

Lecture 1

1. What is the course about, and how does it fit together
2. Some definitions (signals, power, bandwidth, phasors)

Definitions

Signal: a single-valued function of time that conveys information


Deterministic signal: completely specified function of time
Random signal: cannot be completely specified as function of time

See Chapter 1 of notes

Definitions

Analog signal: continuous function of time with continuous amplitude


Discrete-time signal: only defined at discrete points in time, amplitude
continuous
Digital signal: discrete in both time and amplitude (e.g., PCM signals,
see Chapter 4)

Definitions

Instantaneous power:

p = \frac{v^2(t)}{R} = i^2(t) R = g^2(t)

Average power:

P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} g^2(t) \, dt

For periodic signals, with period T_o (see Problem sheet 1):

P = \frac{1}{T_o} \int_{-T_o/2}^{T_o/2} g^2(t) \, dt
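As a quick numerical sketch of the average-power definition (not part of the original notes; the tone amplitude and frequency below are arbitrary choices), the Python snippet averages g^2(t) over one period of a sinusoid and recovers the expected A^2/2.

```python
import numpy as np

# Average power of g(t) = A*cos(2*pi*f0*t) over one period T0 = 1/f0.
# The analytic result for a sinusoid is A**2 / 2.
A, f0 = 3.0, 50.0                 # arbitrary amplitude and frequency
T0 = 1.0 / f0
t = np.arange(0, T0, T0 / 100000)  # uniform samples over exactly one period
g = A * np.cos(2 * np.pi * f0 * t)

P = np.mean(g**2)                  # (1/T0) * integral of g^2 over one period
print(P, A**2 / 2)                 # both ~4.5
```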

Bandwidth

Bandwidth: extent of the significant spectral content of a signal for positive frequencies.

Common definitions (based on the magnitude-squared spectrum):
3 dB bandwidth
null-to-null bandwidth
noise equivalent bandwidth

Baseband signal: B/W = B.  Bandpass signal: B/W = 2B.

Phasors

General sinusoid:

x(t) = A \cos(2\pi f t + \phi)

Phasor representation:

x(t) = \mathrm{Re}\{A e^{j\phi} e^{j 2\pi f t}\}

Alternative representation (complex-conjugate pair):

x(t) = \frac{A}{2} e^{j\phi} e^{j 2\pi f t} + \frac{A}{2} e^{-j\phi} e^{-j 2\pi f t}

Anti-clockwise rotation (positive frequency): \exp(+j 2\pi f t)
Clockwise rotation (negative frequency): \exp(-j 2\pi f t)

The fundamental question: how do communication systems perform in the presence of noise?

Summary

Signals
Average power:

P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x^2(t) \, dt

Bandwidth: significant spectral content for positive frequencies
Phasors: complex-conjugate representation (negative frequency):

x(t) = \frac{A}{2} e^{j\phi} e^{j 2\pi f t} + \frac{A}{2} e^{-j\phi} e^{-j 2\pi f t}
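A small sketch (amplitude, frequency and phase chosen arbitrarily) confirming numerically that the sinusoid equals the sum of the two counter-rotating phasors above.

```python
import numpy as np

A, f, phi = 2.0, 10.0, 0.7          # arbitrary amplitude, frequency, phase
t = np.linspace(0, 0.5, 1000)

x_cos = A * np.cos(2 * np.pi * f * t + phi)
# Sum of anti-clockwise (positive-frequency) and clockwise (negative-frequency) phasors
x_phasor = (A / 2) * np.exp(1j * phi) * np.exp(1j * 2 * np.pi * f * t) \
         + (A / 2) * np.exp(-1j * phi) * np.exp(-1j * 2 * np.pi * f * t)

print(np.allclose(x_cos, x_phasor.real))       # True
print(np.max(np.abs(x_phasor.imag)))           # ~0: imaginary parts cancel
```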

Lecture 2
1. Sources of noise
2. Model for noise (autocorrelation and power spectral density)

See Chapter 2 of notes, sections 2.1, 2.2, 2.3

Sources of noise

1. External noise
   synthetic (e.g. other users)
   atmospheric (e.g. lightning)
   galactic (e.g. cosmic radiation)
2. Internal noise
   shot noise
   thermal noise

Example

Consider an amplifier with 20 dB power gain and a bandwidth of B = 20 MHz.
Assume the average thermal noise at its output is P_o = 2.2 \times 10^{-11} W.

1. What is the amplifier's effective noise temperature?
2. What is the noise output if two of these amplifiers are cascaded?
3. How many stages can be used if the noise output must be less than 20 mW?

Average power of thermal noise:

P = kTB

Effective noise temperature:

T_e = \frac{P}{kB}

Temperature of a fictitious thermal noise source at the input that would be required to produce the same noise power at the output.
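A numerical sketch of the amplifier example above. It assumes the quoted output noise P_o is referred to the amplifier output (so the input-referred noise is P_o/G), models each cascaded stage as amplifying the noise it receives and adding P_o of its own, and reads the limit in part 3 as 20 mW; k is Boltzmann's constant.

```python
k = 1.38e-23          # Boltzmann's constant, J/K
G = 10**(20 / 10)     # 20 dB power gain -> 100
B = 20e6              # bandwidth, Hz
Po = 2.2e-11          # average thermal noise at the amplifier output, W

# Effective noise temperature: fictitious source at the input producing the
# same output noise power, Po = G * k * Te * B.
Te = Po / (G * k * B)
print(f"Te = {Te:.0f} K")                     # roughly 800 K

def cascade_noise(n):
    """Noise at the output of n cascaded identical stages."""
    N = 0.0
    for _ in range(n):
        N = G * N + Po                        # amplify incoming noise, add own noise
    return N

print(f"two stages: {cascade_noise(2):.3e} W")
n = 1
while cascade_noise(n) < 20e-3:               # limit taken as 20 mW
    n += 1
print(f"noise first exceeds 20 mW at n = {n}, so up to {n - 1} stages can be used")
```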

Noise model

Model for the effect of noise is the additive Gaussian noise channel.

Gaussian noise: the amplitude of the noise signal has a Gaussian probability density function (p.d.f.)
Central limit theorem: the sum of n independent random variables approaches a Gaussian distribution as n \to \infty.

For n(t) a random signal, we need more information:
What happens to the noise at the receiver?
Statistical tools

Random variable

A random variable x is a rule that assigns a real number x_i to the i-th sample point in the sample space.

Motivation

(Figure: waveforms of data, data + noise, and noise only, with the decision levels for 1 and 0 and the error regions for each symbol.)

How often is a decision error made?

Statistical averages

Probability density function: the probability that the r.v. x lies within a certain range is

P(x_1 < x < x_2) = \int_{x_1}^{x_2} p_x(x) \, dx

Example, Gaussian pdf:

p_x(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-(x - m)^2 / 2\sigma^2}

Expectation of a r.v. is:

E\{x\} = \int_{-\infty}^{\infty} x \, p_x(x) \, dx

where E\{\cdot\} is the expectation operator.

In general, if y = g(x), then

E\{y\} = E\{g(x)\} = \int_{-\infty}^{\infty} g(x) \, p_x(x) \, dx

For example, the mean square amplitude of a signal is the mean of the square of the amplitude, i.e., E\{x^2\}.
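A short sketch (mean and variance chosen arbitrarily) of the expectation operator: the sample averages of x and x^2 from a Gaussian random variable approach the analytic values m and m^2 + \sigma^2.

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma = 1.5, 2.0                        # arbitrary mean and standard deviation
x = rng.normal(m, sigma, size=1_000_000)   # samples of a Gaussian r.v.

print(x.mean(), m)                         # E{x}   ~ 1.5
print((x**2).mean(), m**2 + sigma**2)      # E{x^2} ~ 6.25 (mean-square amplitude)
```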

Random process

Time average:

\langle n(t) \rangle = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} n(t) \, dt

Ensemble average (over the pdf of the random variable):

E\{n\} = \int_{-\infty}^{\infty} n \, p(n) \, dn

For a stationary, ergodic process the two kinds of average agree:

DC component:  E\{n(t)\} = \langle n(t) \rangle
Average power:  E\{n^2(t)\} = \langle n^2(t) \rangle = \sigma^2  (zero-mean Gaussian process only)

Autocorrelation

How can one represent the spectrum of a random process?

Autocorrelation:

R_x(\tau) = E\{x(t) \, x(t + \tau)\}

NOTE: Average power is P = E\{x^2(t)\} = R_x(0)

Example

Consider the signal:

x(t) = A e^{j(\omega_c t + \theta)}

where \theta is a random variable, uniformly distributed over 0 to 2\pi.

1. Calculate its average power using time averaging.
2. Calculate its average power (mean-square) using statistical averaging.

Power spectral density

PSD measures the distribution of power with frequency; units watts/Hz.

Wiener-Khinchine theorem:

S_x(f) = \int_{-\infty}^{\infty} R_x(\tau) \, e^{-j 2\pi f \tau} \, d\tau = \mathrm{FT}\{R_x(\tau)\}

Hence,

R_x(\tau) = \int_{-\infty}^{\infty} S_x(f) \, e^{j 2\pi f \tau} \, df

Average power:

P = R_x(0) = \int_{-\infty}^{\infty} S_x(f) \, df

(Figure: spectrum of original speech and of speech filtered by H(f): LPF at 4 kHz and LPF at 1 kHz; the filtered speech has a smaller bandwidth.)
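A sketch (filter and noise parameters arbitrary) of the last relation above in discrete time: for a filtered noise sequence, the average power computed in the time domain equals the integral (here a sum) of a periodogram estimate of the PSD.

```python
import numpy as np

# Check P = R_x(0) = integral of S_x(f) df for a filtered white-noise sequence.
rng = np.random.default_rng(1)
fs = 1000.0                                      # sample rate, Hz (arbitrary)
x = rng.normal(0, 1, 2**16)
x = np.convolve(x, np.ones(8) / 8, mode="same")  # mild low-pass filtering

P_time = np.mean(x**2)                           # average power, time domain

X = np.fft.fft(x)
Sx = np.abs(X)**2 / (len(x) * fs)                # periodogram estimate of PSD (W/Hz)
df = fs / len(x)
P_freq = np.sum(Sx) * df                         # "integral" of the PSD over frequency

print(P_time, P_freq)                            # the two agree (Parseval)
```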

Summary

Thermal noise:

P = kTB = \int_{-B}^{B} \frac{kT}{2} \, df

White noise:

S(f) = \frac{N_o}{2}   (PSD is the same for all frequencies)

Additive Gaussian noise channel

Expectation operator (average of g(x) over the pdf of the random variable x):

E\{g(x)\} = \int_{-\infty}^{\infty} g(x) \, p_x(x) \, dx

Autocorrelation:

R_x(\tau) = E\{x(t) \, x(t + \tau)\}

Power spectral density:

S_x(f) = \int_{-\infty}^{\infty} R_x(\tau) \, e^{-j 2\pi f \tau} \, d\tau

Lecture 3

Representation of band-limited noise
Why band-limited noise?
(See Chapter 2, section 2.4)
Noise in an analog baseband system
(See Chapter 3, sections 3.1, 3.2)

Analog communication system

Bandlimited noise

For any bandpass (i.e., modulated) system, the predetection noise will be bandlimited.
The bandpass noise signal can be expressed in terms of two baseband waveforms:

n(t) = n_c(t) \cos(\omega_c t) - n_s(t) \sin(\omega_c t)

Predetection filter:
removes out-of-band noise
has a bandwidth matched to the transmission bandwidth

(Receiver chain: bandpass predetection filter, then detector; n_c(t) and n_s(t) are baseband waveforms riding on the carrier.)

PSD of n(t) is centred about f_c (and -f_c)
PSDs of n_c(t) and n_s(t) are centred about 0 Hz

(Figure: PSD of n(t), bandwidth 2W centred on f_c.)

In the slice shown (for \Delta f small):

n_k(t) = a_k \cos(2\pi f_k t + \theta_k)

Let \omega_k = (\omega_k - \omega_c) + \omega_c, so

n_k(t) = a_k \cos[(\omega_k - \omega_c)t + \theta_k + \omega_c t]

Using \cos(A + B) = \cos A \cos B - \sin A \sin B:

n_k(t) = a_k \cos[(\omega_k - \omega_c)t + \theta_k] \cos(\omega_c t)   (n_c(t) term)
       - a_k \sin[(\omega_k - \omega_c)t + \theta_k] \sin(\omega_c t)   (n_s(t) term)

Example (frequency domain and time domain, W = 1000 Hz): plots of n(t), n_c(t) and n_s(t).

Probability density functions

n(t) = \sum_k a_k \cos(\omega_k t + \theta_k)
n_c(t) = \sum_k a_k \cos[(\omega_k - \omega_c)t + \theta_k]
n_s(t) = \sum_k a_k \sin[(\omega_k - \omega_c)t + \theta_k]

Each waveform is Gaussian distributed (central limit theorem)
Mean of each waveform is 0

(Example histograms, W = 1000 Hz: n(t), n_c(t), n_s(t).)

Average power

What is the average power in n(t)?

n(t) = \sum_k a_k \cos(\omega_k t + \theta_k)
n_c(t) = \sum_k a_k \cos[(\omega_k - \omega_c)t + \theta_k]
n_s(t) = \sum_k a_k \sin[(\omega_k - \omega_c)t + \theta_k]

Power in a_k \cos(\omega t + \theta) is E\{a_k^2\}/2 (see Example 1.1, or study group sheet 2, Q1).

Average power in n(t) is:  P_n = \sum_k E\{a_k^2\}/2
Average power in n_c(t) and n_s(t) is:  P_{nc} = \sum_k E\{a_k^2\}/2,  P_{ns} = \sum_k E\{a_k^2\}/2

n(t), n_c(t) and n_s(t) all have the same average power!

Average power, found using the power spectral density (from Lecture 2):

P = \int S(f) \, df = 2 \int_{f_c - W}^{f_c + W} \frac{N_o}{2} \, df = 2 N_o W

(the factor of 2: one band for positive frequencies, one for negative)

Average power in n(t), n_c(t) and n_s(t) is: 2 N_o W
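A simulation sketch of the band-limited noise model above (all parameter values are arbitrary): n(t) is built as a sum of closely spaced sinusoids with random phases over [f_c - W, f_c + W], n_c(t) and n_s(t) are formed from the same amplitudes and phases as in the slide formulas, and all three come out with average power close to 2 N_o W.

```python
import numpy as np

rng = np.random.default_rng(2)
No, W, fc = 1e-3, 1000.0, 4000.0        # noise density (W/Hz), bandwidth, carrier
df = 5.0                                 # spacing of the spectral slices, Hz
fk = np.arange(fc - W, fc + W, df)       # slice centre frequencies
ak = np.sqrt(2 * No * df) * np.ones_like(fk)   # each slice carries No*df watts
th = rng.uniform(0, 2 * np.pi, fk.size)

fs = 16000.0                             # sample rate > 2*(fc + W)
t = np.arange(0, 1.0, 1 / fs)
n, nc, ns = np.zeros_like(t), np.zeros_like(t), np.zeros_like(t)
for a, f, p in zip(ak, fk, th):
    n  += a * np.cos(2 * np.pi * f * t + p)
    nc += a * np.cos(2 * np.pi * (f - fc) * t + p)
    ns += a * np.sin(2 * np.pi * (f - fc) * t + p)

for name, sig in [("n", n), ("nc", nc), ("ns", ns)]:
    print(name, np.mean(sig**2), "target 2*No*W =", 2 * No * W)  # all close to 2.0
```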

Power spectral densities

PSD of n(t): two bands of width 2W, centred about +f_c and -f_c.
PSD of n_c(t) and n_s(t): a single band of width 2W, centred about 0 Hz (from -W to W).

(Example power spectral densities, W = 1000 Hz: plots for n(t), n_c(t) and n_s(t), with a zoomed view.)

Phasor representation

n(t) = n_c(t) \cos(\omega_c t) - n_s(t) \sin(\omega_c t)

Let g(t) = n_c(t) + j n_s(t). Then

g(t) e^{j\omega_c t} = n_c(t) \cos\omega_c t + j n_c(t) \sin\omega_c t + j n_s(t) \cos\omega_c t - n_s(t) \sin\omega_c t

so n(t) = \mathrm{Re}\{g(t) e^{j\omega_c t}\}

Analog communication system

Performance measure:

SNR_o = \frac{\text{average message power at receiver output}}{\text{average noise power at receiver output}}

Compare systems with the same transmitted power.

Baseband system

Transmitted (message) power is: P_T

Noise power is:

P_N = \int_{-W}^{W} \frac{N_o}{2} \, df = N_o W

SNR at receiver output:

SNR_{base} = \frac{P_T}{N_o W}

Summary

Bandpass noise representation:

n(t) = n_c(t) \cos(\omega_c t) - n_s(t) \sin(\omega_c t)

All waveforms have the same:
probability density function (zero-mean Gaussian)
average power
n_c(t) and n_s(t) have the same power spectral density

Baseband SNR:

SNR_{base} = \frac{P_T}{N_o W}

Lecture 4

Noise (AWGN) in AM systems:
DSB-SC
AM, synchronous detection
AM, envelope detection
(See Chapter 3, section 3.3)

Analog communication system

Performance measure:

SNR_o = \frac{\text{average message power at receiver output}}{\text{average noise power at receiver output}}

Compare systems with the same transmitted power.

Amplitude modulation

Modulated signal:

s(t)_{AM} = [A_c + m(t)] \cos\omega_c t

m(t) is the message signal

Modulation index = \frac{m_p}{A_c}

where m_p is the peak amplitude of the message

DSB-SC

s(t)_{DSB-SC} = A_c m(t) \cos\omega_c t

Synchronous detection

Signal after the multiplier (multiply by 2\cos\omega_c t):

y(t)_{AM} = [A_c + m(t)] \cos\omega_c t \cdot 2\cos\omega_c t = [A_c + m(t)](1 + \cos 2\omega_c t)

y(t)_{DSB-SC} = A_c m(t)(1 + \cos 2\omega_c t)

Noise in DSB-SC

Transmitted signal:

s(t)_{DSB-SC} = A_c m(t) \cos\omega_c t

Predetection signal (transmitted signal plus bandlimited noise):

x(t) = A_c m(t) \cos\omega_c t + n_c(t) \cos\omega_c t - n_s(t) \sin\omega_c t

Receiver output (after synchronous detection and LPF):

y(t) = A_c m(t) + n_c(t)

SNR of DSB-SC

Output signal power:

P_s = \overline{(A_c m(t))^2} = A_c^2 P   (P is the power of the message)

Output noise power (from the PSD of the baseband noise n_c(t)):

P_N = \int \mathrm{PSD} \, df = \int_{-W}^{W} N_o \, df = 2 N_o W

Output SNR:

SNR_{DSB-SC} = \frac{A_c^2 P}{2 N_o W}

SNR of DSB-SC

Transmitted power:

P_T = \overline{(A_c m(t) \cos\omega_c t)^2} = \frac{A_c^2 P}{2}

Hence:

SNR_{DSB-SC} = \frac{A_c^2 P}{2 N_o W} = \frac{P_T}{N_o W} = SNR_{base}

DSB-SC has no performance advantage over baseband.

Noise in AM (synch. detector)

Predetection signal (transmitted signal plus bandlimited noise):

x(t) = [A_c + m(t)] \cos\omega_c t + n_c(t) \cos\omega_c t - n_s(t) \sin\omega_c t

Receiver output:

y(t) = A_c + m(t) + n_c(t)

Output signal power:

P_S = \overline{m^2(t)} = P

Output noise power:

P_N = 2 N_o W

Output SNR:

SNR_{AM} = \frac{P}{2 N_o W}

Noise in AM (synch. detector)

Transmitted power:

P_T = \frac{A_c^2}{2} + \frac{P}{2}

Output SNR:

SNR_{AM} = \frac{P}{2 N_o W} = \frac{P}{A_c^2 + P} \cdot \frac{P_T}{N_o W} = \frac{P}{A_c^2 + P} \, SNR_{base}

The performance of AM is always worse than baseband.

Noise in AM, envelope detector

Transmitted signal:

s(t)_{AM} = [A_c + m(t)] \cos\omega_c t

Predetection signal (transmitted signal plus bandlimited noise):

x(t) = [A_c + m(t)] \cos\omega_c t + n_c(t) \cos\omega_c t - n_s(t) \sin\omega_c t

Receiver output:

y(t) = \text{envelope of } x(t) = \sqrt{[A_c + m(t) + n_c(t)]^2 + n_s^2(t)}

Small noise case:

y(t) \approx A_c + m(t) + n_c(t)   (= output of synchronous detector)

Large noise case:

y(t) \approx E_n(t) + [A_c + m(t)] \cos\phi_n(t)

Summary

Synchronous detector:

SNR_{DSB-SC} = SNR_{base}

SNR_{AM} = \frac{P}{A_c^2 + P} \, SNR_{base}

Envelope detector:
threshold effect
for small noise, performance is the same as the synchronous detector
the threshold effect is not really a problem in practice

Example

An unmodulated carrier (of amplitude A_c and frequency f_c) and bandlimited white noise are summed and then passed through an ideal envelope detector.
Assume the noise spectral density to be of height N_o/2 and bandwidth 2W, centred about the carrier frequency.
Assume the input carrier-to-noise ratio is high.

1. Calculate the carrier-to-noise ratio at the output of the envelope detector, and compare it with the carrier-to-noise ratio at the detector input.
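A small numerical sketch of the synchronous-detector results in the summary, assuming a single-tone message of unit amplitude (so P = 1/2, m_p = 1) and modulation index 1 (A_c = 1): AM delivers one third of the baseband SNR, while DSB-SC matches baseband.

```python
import numpy as np

# Single-tone message m(t) = cos(2*pi*fm*t): P = 0.5, mp = 1; index mu = 1 -> Ac = 1.
P, Ac = 0.5, 1.0

snr_am_rel  = P / (Ac**2 + P)   # SNR_AM     / SNR_base
snr_dsb_rel = 1.0               # SNR_DSB-SC / SNR_base

print(f"SNR_AM  / SNR_base = {snr_am_rel:.3f}")          # 1/3
print(f"SNR_DSB / SNR_base = {snr_dsb_rel:.3f}")
print(f"AM penalty: {10 * np.log10(snr_am_rel):.1f} dB")  # about -4.8 dB
```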

Lecture 5

Noise in FM systems
pre-emphasis and de-emphasis
(See section 3.4)
Comparison of analog systems
(See section 3.5)

Frequency modulation

FM waveform:

s(t)_{FM} = A_c \cos\!\left(2\pi f_c t + 2\pi k_f \int m(\tau) \, d\tau\right) = A_c \cos\theta(t)

\theta(t) is the instantaneous phase

Instantaneous frequency:

f_i = \frac{1}{2\pi} \frac{d\theta(t)}{dt} = f_c + k_f m(t)

Frequency is proportional to the message signal.

FM waveforms

(Figure: a message waveform and the corresponding AM and FM waveforms.)

FM frequency deviation

Instantaneous frequency varies between f_c - k_f m_p and f_c + k_f m_p, where m_p is the peak message amplitude.

Frequency deviation (max. departure of the carrier wave from f_c):

\Delta f = k_f m_p

Deviation ratio:

\beta = \frac{\Delta f}{W}

where W is the bandwidth of the message signal.

Commercial FM uses: \Delta f = 75 kHz and W = 15 kHz

Bandwidth considerations

Carson's rule:

B_T \approx 2W(\beta + 1) = 2(\Delta f + W)

FM receiver

Discriminator: output is proportional to the deviation of the instantaneous frequency away from the carrier frequency.

Noise in FM versus AM

AM: the amplitude of the modulated signal carries the message; noise adds directly to the modulated signal; performance is no better than baseband.
FM: the frequency of the modulated signal carries the message; the zero crossings of the modulated signal are important; the effect of noise should be less than in AM.

Noise in FM

Predetection signal (transmitted signal plus bandlimited noise):

x(t) = A_c \cos(2\pi f_c t + \phi(t)) + n_c(t) \cos(2\pi f_c t) - n_s(t) \sin(2\pi f_c t)

where \phi(t) = 2\pi k_f \int m(\tau) \, d\tau

Assumptions (valid if the carrier power is much larger than the noise power):
1. Noise does not affect the signal power at the output
2. Message does not affect the noise power at the output

Assumption 1: noise does not affect the signal power at the output

Signal component at the receiver:

x_s(t) = A_c \cos(2\pi f_c t + \phi(t))

Instantaneous frequency:

f_i(t) = \frac{1}{2\pi} \frac{d\phi(t)}{dt} = k_f m(t)

Output signal power:

P_S = k_f^2 P   (P is the power of the message signal)

Assumption 2: signal does not affect the noise at the output

Message-free component at the receiver:

x_n(t) = A_c \cos(2\pi f_c t) + n_c(t) \cos(2\pi f_c t) - n_s(t) \sin(2\pi f_c t)

Instantaneous frequency:

f_i(t) = \frac{1}{2\pi} \frac{d}{dt} \tan^{-1}\!\left(\frac{n_s(t)}{A_c + n_c(t)}\right) \approx \frac{1}{2\pi} \frac{d}{dt} \frac{n_s(t)}{A_c} = \frac{1}{2\pi A_c} \frac{d n_s(t)}{dt}

Discriminator output:

f_i(t) \approx \frac{1}{2\pi A_c} \frac{d n_s(t)}{dt}

We know the PSD of n_s(t), but what about its derivative?

Fourier theory property:  x(t) \leftrightarrow X(f),  \frac{dx(t)}{dt} \leftrightarrow j 2\pi f X(f)

so  F_i(f) = \frac{1}{2\pi A_c} \, j 2\pi f \, N_s(f) = \frac{j f}{A_c} N_s(f)

PSD property:  Y(f) = H(f) X(f) \;\Rightarrow\; S_Y(f) = |H(f)|^2 S_X(f)

PSD of the discriminator output:

S_F(f) = \frac{|f|^2}{A_c^2} S_N(f)

PSD of the LPF noise term:

PSD of \frac{1}{2\pi A_c} \frac{d n_s(t)}{dt} is \frac{|f|^2}{A_c^2} N_o,  |f| < W

(Figure: PSD of n_s(t) is flat; PSD of the differentiated noise rises as f^2.)

Average power of noise at the output:

P_N = \int_{-W}^{W} \frac{|f|^2}{A_c^2} N_o \, df = \frac{2 N_o W^3}{3 A_c^2}

SNR of FM

SNR at the output:

SNR_{FM} = \frac{P_S}{P_N} = \frac{3 A_c^2 k_f^2 P}{2 N_o W^3}

Transmitted power:

P_T = \overline{(A_c \cos[\omega_c t + \phi(t)])^2} = \frac{A_c^2}{2}

so  SNR_{base} = \frac{P_T}{N_o W} = \frac{A_c^2}{2 N_o W}

SNR at the output:

SNR_{FM} = \frac{3 k_f^2 P}{W^2} \, SNR_{base} = \frac{3 \beta^2 P}{m_p^2} \, SNR_{base}

Increasing the carrier power has a noise quieting effect.

Threshold effect in FM

SNR_{FM} is valid when the predetection SNR > 10.

Predetection signal is:

x(t) = A_c \cos(2\pi f_c t + \phi(t)) + n_c(t) \cos(2\pi f_c t) - n_s(t) \sin(2\pi f_c t)

Predetection SNR is:

SNR_{pre} = \frac{A_c^2}{2 N_o B_T}

Threshold point is:

\frac{A_c^2}{4 N_o W (\beta + 1)} > 10

Cannot arbitrarily increase SNR_{FM} by increasing \beta.

Pre-emphasis and de-emphasis

The improvement in output SNR afforded by using pre-emphasis and de-emphasis in FM is defined by:

I = \frac{\text{SNR with pre-/de-emphasis}}{\text{SNR without pre-/de-emphasis}} = \frac{\text{average output noise power without pre-/de-emphasis}}{\text{average output noise power with pre-/de-emphasis}}

Can improve the output SNR by about 13 dB.

Example

If H_{de}(f) is the transfer function of the de-emphasis filter, find an expression for the improvement, I.

Analog system performance

Parameters:
Single-tone message m(t) = \cos(2\pi f_m t)
Message bandwidth W = f_m
AM system: \mu = 1
FM system: \beta = 5

Bandwidth:
B_{DSB-SC} = 2W
B_{AM} = 2W
B_{FM} = 12W

Performance:
SNR_{DSB-SC} = SNR_{base}
SNR_{AM} = \frac{1}{3} SNR_{base}
SNR_{FM} = \frac{75}{2} SNR_{base}
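A quick sketch reproducing the comparison above for the single-tone message (amplitude 1, so P = 1/2 and m_p = 1), with the stated choices μ = 1 for AM and β = 5 for FM.

```python
import numpy as np

P, mp = 0.5, 1.0                 # single-tone message: power 1/2, peak amplitude 1
mu, beta = 1.0, 5.0
Ac = mp / mu                     # AM carrier amplitude for the chosen index

rel = {                          # SNR relative to baseband
    "DSB-SC": 1.0,
    "AM":     P / (Ac**2 + P),           # 1/3
    "FM":     3 * beta**2 * P / mp**2,   # 75/2 = 37.5
}
bw = {"DSB-SC": 2, "AM": 2, "FM": 2 * (beta + 1)}   # transmission bandwidth in units of W

for k in rel:
    print(f"{k:7s} SNR/SNR_base = {rel[k]:6.2f} "
          f"({10 * np.log10(rel[k]):5.1f} dB), B = {bw[k]:.0f} W")
```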

Summary

Noise in FM:
Increasing carrier power reduces noise at the receiver output
Has a threshold effect
Pre-emphasis

Comparison of analog modulation schemes:
AM worse than baseband
DSB/SSB same as baseband
FM better than baseband

Lecture 6

Digital communication systems
Digital vs Analog communications
Pulse Code Modulation
(See sections 4.1, 4.2 and 4.3)

Digital vs Analog

(Figure: an analog message waveform and a digital message.)

Analog:
Recreate the waveform accurately
Performance criterion is SNR at the receiver output

Digital:
Decide which symbol was sent
Performance criterion is the probability of the receiver making a decision error

Advantages of digital:
1. Digital signals are more immune to noise
2. Repeaters can re-transmit a noise-free signal

Maximum information rate

How many bits can be transferred over a channel of bandwidth B Hz (ignoring noise)?
A signal with a bandwidth of B Hz is not distorted over this channel
A signal with a bandwidth of B Hz requires samples taken at 2B Hz
Can transmit: 2 bits of information per second per Hz

Pulse-code modulation

Represent an analog waveform in digital form

Sampling: discrete in time

Nyquist Sampling Theorem:
A signal whose bandwidth is limited to W Hz can be reconstructed exactly from its samples taken uniformly at a rate of R > 2W Hz.

Quantization: discrete in amplitude

Round the amplitude of each sample to the nearest one of a finite number of levels

Encode

Assign each quantization level a code

Sampling vs Quantization

Sampling:
Non-destructive if f_s > 2W
Can reconstruct the analog waveform exactly by using a low-pass filter

Quantization:
Destructive
Once the signal has been rounded off it can never be reconstructed exactly

Quantization noise

(Figure: sampled signal, quantized signal (step size of 0.1), and the resulting quantization error.)

Quantization noise

Let \Delta be the separation between quantization levels:

\Delta = \frac{2 m_p}{L}

where L = 2^n is the number of quantization levels and m_p is the peak allowed signal amplitude.

The round-off effect of the quantizer ensures that |q| < \Delta/2, where q is a random variable representing the quantization error.

Assume q is zero mean with a uniform pdf, so the mean square error is:

E\{q^2\} = \int q^2 p(q) \, dq = \int_{-\Delta/2}^{\Delta/2} \frac{q^2}{\Delta} \, dq = \frac{\Delta^2}{12}

Let the message power be P. The noise power is (since q is zero mean):

P_N = E\{q^2\} = \frac{\Delta^2}{12} = \frac{(2 m_p / L)^2}{12} = \frac{m_p^2}{3 \cdot 2^{2n}}

Output SNR of the quantizer:

SNR_Q = \frac{P_S}{P_N} = \frac{3 P \, 2^{2n}}{m_p^2}

or in dB:

SNR_Q = 6.02 n + 10 \log_{10}\!\left(\frac{3P}{m_p^2}\right)   dB
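A sketch of the quantizer-SNR formula just derived, checked against a direct simulation of uniform quantization of a full-scale sine wave (so P = m_p^2/2); the bit depth and tone frequency are arbitrary choices.

```python
import numpy as np

def quantizer_snr_db(n_bits, P, mp):
    """SNR_Q = 6.02 n + 10 log10(3 P / mp^2), from PN = mp^2 / (3 * 2**(2n))."""
    return 6.02 * n_bits + 10 * np.log10(3 * P / mp**2)

# Direct simulation: uniformly quantize a full-scale sine (P = mp**2 / 2).
mp, n_bits = 1.0, 8
delta = 2 * mp / 2**n_bits
t = np.linspace(0, 1, 200_000, endpoint=False)
m = mp * np.sin(2 * np.pi * 13 * t)
mq = np.clip(np.round(m / delta) * delta, -mp, mp)   # round to nearest level

snr_sim = 10 * np.log10(np.mean(m**2) / np.mean((m - mq)**2))
print(quantizer_snr_db(n_bits, mp**2 / 2, mp), snr_sim)   # both ~50 dB
```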

Bandwidth of PCM

Each message sample requires n bits
If the message has bandwidth W Hz, then PCM contains 2nW bits per second
Bandwidth required is:

B_T = n W

SNR can be written:

SNR_Q = \frac{3 P}{m_p^2} \, 2^{2 B_T / W}

A small increase in bandwidth yields a large increase in SNR.

Nonuniform quantization

For audio signals (e.g. speech), small signal amplitudes occur more often than large signal amplitudes
Better to have closely spaced quantization levels at low signal amplitudes, widely spaced levels at large signal amplitudes
The quantizer then has better resolution at low amplitudes (where the signal spends more time)

(Figure: uniform vs non-uniform quantization levels.)

Companding

A uniform quantizer is easier to implement than a nonlinear one
Compress the signal first, then use a uniform quantizer, then expand the signal (i.e., compand)

Summary

Analog communications system:
Receiver must recreate the transmitted waveform
Performance measure is signal-to-noise ratio

Digital communications system:
Receiver must decide which symbol was transmitted
Performance measure is probability of error

Pulse-code modulation:
Scheme to represent an analog signal in digital format
Sample, quantize, encode
Companding (nonuniform quantization)

Lecture 7

Performance of digital systems in noise:
Baseband
ASK
PSK, FSK
Compare all schemes
(See sections 4.4, 4.5)

Digital receiver

(Block diagram: transmitted signal s(t) plus noise passes through a filter + demodulator before the decision is made.)

Baseband digital system

Gaussian noise, probability

The noise waveform n(t) has probability density function:

p(n) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(n - m)^2}{2\sigma^2}\right) = N(m, \sigma^2)

Normal distribution with mean m and variance \sigma^2.

prob(a < n < b) = \int_a^b p(n) \, dn

Gaussian noise, spectrum

NOTE: For zero-mean noise, variance = average power, i.e., \sigma^2 = P

White noise through a filter:

LPF (use for baseband):  P = \int_{-W}^{W} \frac{N_o}{2} \, df = N_o W

BPF (use for bandpass):  P = \int_{-f_c - W}^{-f_c + W} \frac{N_o}{2} \, df + \int_{f_c - W}^{f_c + W} \frac{N_o}{2} \, df = 2 N_o W

Baseband system  0 transmitted

Transmitted signal s_0(t), noise signal n(t)
Received signal y_0(t) = s_0(t) + n(t)

Error if y_0(t) > A/2

Baseband system  1 transmitted

Transmitted signal s_1(t), noise signal n(t)
Received signal y_1(t) = s_1(t) + n(t)

Error if y_1(t) < A/2

Baseband system errors

Possible errors:
1. Symbol 0 transmitted, receiver decides 1
2. Symbol 1 transmitted, receiver decides 0

Total probability of error:

P_e = p_0 P_{e0} + p_1 P_{e1}

(p_0 is the probability of 0 being sent; P_{e0} is the probability of making an error if 0 was sent)

P_{e0} = \int_{A/2}^{\infty} N(0, \sigma^2) \, dn,    P_{e1} = \int_{-\infty}^{A/2} N(A, \sigma^2) \, dn

For equally-probable symbols:

P_e = \frac{1}{2} P_{e0} + \frac{1}{2} P_{e1}

Can show that P_{e0} = P_{e1}. Hence,

P_e = \frac{1}{2} P_{e0} + \frac{1}{2} P_{e0} = P_{e0} = \int_{A/2}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{n^2}{2\sigma^2}\right) dn

Baseband error probability

How to calculate P_e?

1. Complementary error function (erfc in Matlab):

\mathrm{erfc}(u) = \frac{2}{\sqrt{\pi}} \int_u^{\infty} \exp(-n^2) \, dn

P_e = \frac{1}{2} \mathrm{erfc}\!\left(\frac{A}{2\sqrt{2}\,\sigma}\right)

2. Q-function (tail function):

Q(u) = \frac{1}{\sqrt{2\pi}} \int_u^{\infty} \exp\!\left(-\frac{n^2}{2}\right) dn

P_e = Q\!\left(\frac{A}{2\sigma}\right)

Example

Consider a digital system which uses a voltage level of 0 volts to represent a '0', and a level of 0.22 volts to represent a '1'. The digital waveform has a bandwidth of 15 kHz.
If this digital waveform is to be transmitted over a baseband channel having additive noise with a flat power spectral density of N_o/2 = 3 \times 10^{-8} W/Hz, what is the probability of error at the receiver output?
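A numerical sketch of the example above. It takes the 15 kHz signal bandwidth as the noise bandwidth of the receiver's low-pass filter (so σ² = N_o W, as in the baseband noise-power formula earlier) and applies P_e = ½ erfc(A/(2√2 σ)); erfc is available in scipy.special (the slides mention Matlab's erfc).

```python
import numpy as np
from scipy.special import erfc

A   = 0.22        # level representing a '1' (volts); '0' is 0 V
W   = 15e3        # bandwidth of the digital waveform (Hz)
No2 = 3e-8        # noise PSD No/2 (W/Hz)

sigma2 = 2 * No2 * W            # sigma^2 = No * W  (LPF of bandwidth W)
sigma  = np.sqrt(sigma2)

Pe = 0.5 * erfc(A / (2 * np.sqrt(2) * sigma))
print(f"sigma = {sigma:.3f} V, Pe = {Pe:.2e}")   # sigma = 0.03 V, Pe ~ 1.2e-4
```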

Amplitude-shift keying

s_0(t) = 0
s_1(t) = A \cos(\omega_c t)

Synchronous detector

Identical to the analog synchronous detector.

ASK  0 transmitted

Predetection signal (signal plus bandpass noise):

x_0(t) = 0 + n_c(t) \cos(\omega_c t) - n_s(t) \sin(\omega_c t)

After the multiplier:

r_0(t) = x_0(t) \cdot 2\cos(\omega_c t)
       = n_c(t) \, 2\cos^2(\omega_c t) - n_s(t) \, 2\sin(\omega_c t)\cos(\omega_c t)
       = n_c(t)[1 + \cos(2\omega_c t)] - n_s(t) \sin(2\omega_c t)

Receiver output (after LPF):  y_0(t) = n_c(t)

ASK  1 transmitted

Predetection signal:

x_1(t) = A \cos(\omega_c t) + n_c(t) \cos(\omega_c t) - n_s(t) \sin(\omega_c t)

After the multiplier:

r_1(t) = x_1(t) \cdot 2\cos(\omega_c t)
       = [A + n_c(t)] \, 2\cos^2(\omega_c t) - n_s(t) \, 2\sin(\omega_c t)\cos(\omega_c t)
       = [A + n_c(t)][1 + \cos(2\omega_c t)] - n_s(t) \sin(2\omega_c t)

Receiver output (after LPF):  y_1(t) = A + n_c(t)

PDFs at receiver output

ASK 0 transmitted: PDF of y_0(t) = n_c(t)
ASK 1 transmitted: PDF of y_1(t) = A + n_c(t)

P_{e,ASK} = \frac{1}{2} \mathrm{erfc}\!\left(\frac{A}{2\sqrt{2}\,\sigma}\right)

Same as baseband!

Phase-shift keying

s_0(t) = -A \cos(\omega_c t)
s_1(t) = A \cos(\omega_c t)

PSK demodulator

Band-pass filter bandwidth matched to the modulated signal bandwidth
Carrier frequency is \omega_c
Low-pass filter leaves only baseband signals

PSK  0 transmitted

Predetection signal:

x_0(t) = -A \cos(\omega_c t) + n_c(t) \cos(\omega_c t) - n_s(t) \sin(\omega_c t)

After the multiplier:

r_0(t) = x_0(t) \cdot 2\cos(\omega_c t)
       = [-A + n_c(t)] \, 2\cos^2(\omega_c t) - n_s(t) \, 2\sin(\omega_c t)\cos(\omega_c t)
       = [-A + n_c(t)][1 + \cos(2\omega_c t)] - n_s(t) \sin(2\omega_c t)

Receiver output:  y_0(t) = -A + n_c(t)

PSK  1 transmitted

Predetection signal:

x_1(t) = A \cos(\omega_c t) + n_c(t) \cos(\omega_c t) - n_s(t) \sin(\omega_c t)

After the multiplier:

r_1(t) = x_1(t) \cdot 2\cos(\omega_c t)
       = [A + n_c(t)] \, 2\cos^2(\omega_c t) - n_s(t) \, 2\sin(\omega_c t)\cos(\omega_c t)
       = [A + n_c(t)][1 + \cos(2\omega_c t)] - n_s(t) \sin(2\omega_c t)

Receiver output:  y_1(t) = A + n_c(t)

PSK PDFs at receiver output

PSK 0 transmitted: PDF of y_0(t) = -A + n_c(t)
PSK 1 transmitted: PDF of y_1(t) = A + n_c(t)

Set the threshold at 0:
if y < 0, decide "0"
if y > 0, decide "1"

PSK probability of error

P_{e0} = \int_0^{\infty} N(-A, \sigma^2) \, dn = \int_0^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(n + A)^2}{2\sigma^2}\right) dn

P_{e1} = \int_{-\infty}^{0} N(A, \sigma^2) \, dn = \int_{-\infty}^{0} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(n - A)^2}{2\sigma^2}\right) dn

Probability of bit error:

P_{e,PSK} = \frac{1}{2} \mathrm{erfc}\!\left(\frac{A}{\sqrt{2}\,\sigma}\right)

Frequency-shift keying

s_0(t) = A \cos(\omega_0 t)
s_1(t) = A \cos(\omega_1 t)

FSK detector

s_0(t) = A \cos(\omega_0 t)
s_1(t) = A \cos(\omega_1 t)

(Detector: synchronous detection at each of the two frequencies; the two branch outputs are subtracted.)

Receiver output:

y_0(t) = -A + n_c^{(1)}(t) - n_c^{(0)}(t)
y_1(t) = A + n_c^{(1)}(t) - n_c^{(0)}(t)

Independent noise sources, so their variances add.
PDFs are the same as for PSK, but the variance is doubled:

P_{e,FSK} = \frac{1}{2} \mathrm{erfc}\!\left(\frac{A}{2\sigma}\right)

Digital performance comparison

Summary

For a baseband (or ASK) system:

P_e = \int_{A/2}^{\infty} N(0, \sigma^2) \, dn = \frac{1}{2} \mathrm{erfc}\!\left(\frac{A}{2\sqrt{2}\,\sigma}\right)

Probability of error for PSK and FSK:

P_{e,PSK} = \frac{1}{2} \mathrm{erfc}\!\left(\frac{A}{\sqrt{2}\,\sigma}\right),    P_{e,FSK} = \frac{1}{2} \mathrm{erfc}\!\left(\frac{A}{2\sigma}\right)

Comparison of digital systems:
PSK best, then FSK; ASK and baseband are the same
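A short sketch evaluating the three error-probability expressions in the summary at an arbitrary ratio A/σ; it shows the ordering stated above (PSK lowest P_e, then FSK, then ASK/baseband).

```python
import numpy as np
from scipy.special import erfc

r = 4.0    # arbitrary signal-amplitude-to-noise-std ratio A/sigma

pe = {
    "baseband/ASK": 0.5 * erfc(r / (2 * np.sqrt(2))),
    "FSK":          0.5 * erfc(r / 2),
    "PSK":          0.5 * erfc(r / np.sqrt(2)),
}
for name, p in pe.items():
    print(f"{name:13s} Pe = {p:.2e}")
# PSK lowest, FSK next, ASK/baseband highest for the same A/sigma
```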

Lecture 8

Information theory
Why?
Information
Entropy
Source coding (a little)
(See sections 5.1 to 5.4.2)

Why information theory?

What is the performance of the best system?

"What would be the characteristics of an ideal system, [one that] is not limited by our engineering ingenuity and inventiveness but limited rather only by the fundamental nature of the physical universe" — Taub & Schilling
Lecture 8

Properties of I(s)

The purpose of a communication system is to convey


information from one point to another
What is information?

I(s)

I ( s ) = log 2 ( p )

 
  

Definition:
I ( s ) = log 2

Information in
symbol s

Lecture 8

1
= log 2 ( p )
p

Probability of
occurrence of
symbol s

bits
p

bits

1.

2.

Conventional unit
of information

3.

Lecture 8

If p=1, I(s)=0
(symbol that is certain to occur conveys no information)
0<p<1, <I(s)<0
If p=p1p2, I(s)=I(s1)+I(s2)

Example

Suppose we have two symbols: s_0 = 0, s_1 = 1
Each has probability of occurrence: p_0 = p_1 = 1/2

Each symbol represents:

I(s) = -\log_2(1/2) = 1 bit of information

In this example, one symbol = one information bit, but it is not always so!

Sources and symbols

Symbols:
may be binary (0 and 1)
can have more than 2 symbols, e.g. letters of the alphabet, etc.
The sequence of symbols is random (otherwise no information is conveyed)

Definition:
If successive symbols are statistically independent, the information source is a zero-memory source (or discrete memoryless source)

How much information is conveyed by symbols?

Entropy

Definition:

S = \{s_1, s_2, \ldots, s_K\} is the alphabet (the collection of all possible symbols)
p_k is the probability of occurrence of symbol s_k

H(S) = -\sum_{\text{all } k} p_k \log_2(p_k)   bits/symbol

Entropy: average information per symbol.

Note that \sum_{\text{all } k} p_k = 1
(we're certain that the symbol comes from the known alphabet)

Example  binary source

Alphabet: S = \{s_0, s_1\}
Probabilities: p_0 = 1 - p_1

Entropy:

H(S) = -\sum_{\text{all } k} p_k \log_2(p_k) = -(1 - p_1)\log_2(1 - p_1) - p_1 \log_2(p_1)

How to represent (encode) each symbol?
let s_0 = 0, s_1 = 1
this requires 1 bit/symbol to transmit

Example  three symbol alphabet

Alphabet: S = \{A, B, C\}
Probabilities: p_A = 0.7, p_B = 0.2, p_C = 0.1

Entropy:

H(S) = -\sum_{\text{all } k} p_k \log_2(p_k) = 1.157 bits/symbol

How to represent (encode) each symbol?
Code words: let A = 00, B = 01, C = 10
this requires 2 bits/symbol to transmit

Symbols generated at a rate of 1 symbol/sec
Bitstream generated at a rate of 2 bits/sec
System needs to process 2 bits/sec
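A small sketch computing the entropy of the three-symbol alphabet above, together with the average codeword length of the fixed 2-bit code and of the variable-length code (A = 0, B = 10, C = 11) used shortly.

```python
import numpy as np

p = {"A": 0.7, "B": 0.2, "C": 0.1}

H = -sum(pk * np.log2(pk) for pk in p.values())
print(f"H(S) = {H:.3f} bits/symbol")           # 1.157

codes_fixed = {"A": "00", "B": "01", "C": "10"}
codes_var   = {"A": "0",  "B": "10", "C": "11"}
for name, codes in [("fixed", codes_fixed), ("variable", codes_var)]:
    L = sum(p[s] * len(codes[s]) for s in p)    # average codeword length
    print(f"{name:8s} code: {L:.2f} bits/symbol")   # 2.00 and 1.30
```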

Source coding


The amount of information we need to transmit is determined (amongst other things) by how many bits we need to transmit for each symbol.

In the binary case, only 1 bit is required to transmit each symbol
In the {A, B, C} case, 2 bits are required to transmit each symbol

Examples

Telephone: speech waveform, 8000 symbols/sec, 64000 bits/sec; the system needs to process 64 kb/s
Cell phone: speech waveform, 8000 symbols/sec, 13000 bits/sec; the system needs to process 13 kb/s

Source vs channel coding

Source coding: minimize the number of bits to be transmitted
Channel coding: add extra bits to detect/correct errors

Source coding

All symbols do not need to be encoded with the same number of bits.

p_A = 0.7, p_B = 0.2, p_C = 0.1

Example: let A = 0, B = 10, C = 11
(e.g. a sequence of 6 symbols is encoded with 8 bits)

Average codeword length

Source coding

Use variable-length code words:
a symbol that occurs frequently (i.e., relatively high p_k) should have a short code word
a symbol that occurs rarely should have a long code word

Definition (average codeword length):

\bar{L} = \sum_{\text{all } k} p_k \, l_k

where p_k is the probability of occurrence of symbol s_k and l_k is the number of bits used to represent symbol s_k.

Example: p_A = 0.7, p_B = 0.2, p_C = 0.1
let A = 0, B = 10, C = 11

\bar{L} = 0.7 \times 1 + 0.2 \times 2 + 0.1 \times 2 = 1.3 bits/symbol

Summary

Information content (of a particular symbol):

I(s) = \log_2\!\left(\frac{1}{p}\right) = -\log_2(p)   bits

Entropy (for a complete alphabet; the average information content per symbol):

H(S) = -\sum_{\text{all } k} p_k \log_2(p_k)   bits/symbol

Source coding:
How many bits do we need to represent each symbol?

Lecture 9

Source coding theorem
Huffman coding algorithm
(See sections 5.4.2, 5.4.3)

Source coding

All symbols do not need to be encoded with the same number of bits.

Example:
probabilities: p_A = 0.7, p_B = 0.2, p_C = 0.1
code words: A = 0, B = 10, C = 11
average codeword length: \bar{L} = 0.7 \times 1 + 0.2 \times 2 + 0.1 \times 2 = 1.3 bits/symbol
(e.g. a sequence of 6 symbols is encoded with 8 bits)

How can we reduce the number of bits we need to transmit?
What is the minimum number of bits we need for a particular symbol? (Source coding theorem)
How can we encode symbols to achieve this minimum number of bits? (Huffman coding algorithm)

Equal probability symbols

Example:
Alphabet: S = \{A, B\}
Probabilities: p_A = 0.5, p_B = 0.5
Code words: A = 0, B = 1
Requires 1 bit for each symbol

In general, for n equally-likely symbols:
Probability of occurrence of each symbol is p = 1/n
Number of bits to represent each symbol is

l = \log_2\!\left(\frac{1}{p}\right) = \log_2(n)

Unequal probabilities?

Alphabet: S = \{s_1, s_2, \ldots, s_K\}
Probabilities: p_1, p_2, \ldots, p_K

Any random sequence of N symbols (large N):

S_N = \{s_1, s_2, s_1, s_3, s_3, s_2, s_1, \ldots\}

s_1: N p_1 occurrences
s_2: N p_2 occurrences
...

Probability of this particular sequence occurring:

p(S_N) = p_1 p_2 p_1 p_3 p_3 p_2 p_1 \cdots = p_1^{N p_1} p_2^{N p_2} \cdots

Unequal probabilities? (cont.)

Number of bits required to represent a sequence of N symbols:

l_N = \log_2 \frac{1}{p(S_N)} = -\log_2\!\left(p_1^{N p_1} p_2^{N p_2} \cdots\right)
    = -N p_1 \log_2(p_1) - N p_2 \log_2(p_2) - \cdots
    = -N \sum_{\text{all } k} p_k \log_2(p_k) = N \, H(S)

Average number of bits for one symbol is:

\bar{L} = \frac{l_N}{N} = H(S)

Minimum codeword length

Source Coding Theorem:
For a general alphabet S, the minimum average codeword length is given by the entropy, H(S).

Significance:
For any practical source coding scheme, the average codeword length will always be greater than or equal to the source entropy, i.e.,

\bar{L} \geq H(S)   bits/symbol

How can we design an efficient coding scheme?
An optimum coding scheme yields the shortest average codeword length.

Huffman coding algorithm

Properties of the Huffman code:
Uniquely decodable, i.e., only one way to break the bit stream into valid code words
Instantaneous, i.e., you know immediately when a code word has ended

Example

Consider a five-symbol alphabet having the probabilities indicated:

Symbols: A, B, C, D, E
Probabilities: p_A = 0.05, p_B = 0.15, p_C = 0.4, p_D = 0.3, p_E = 0.1

1. Calculate the entropy of the alphabet.
2. Using the Huffman algorithm, design a source coding scheme for this alphabet, and comment on the average codeword length achieved.
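A minimal Huffman-coder sketch (using Python's heapq; tie-breaking and the 0/1 labelling are arbitrary, so the exact code words may differ from a hand-drawn tree, but the average codeword length is the same), applied to the five-symbol alphabet of the example.

```python
import heapq
import numpy as np

def huffman(probs):
    """Return {symbol: codeword}, built by repeatedly merging the two
    least-probable nodes (the Huffman algorithm)."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)    # two least-probable nodes
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

probs = {"A": 0.05, "B": 0.15, "C": 0.4, "D": 0.3, "E": 0.1}
code = huffman(probs)
H    = -sum(p * np.log2(p) for p in probs.values())
Lbar = sum(probs[s] * len(code[s]) for s in probs)
print(code)
print(f"H(S) = {H:.3f} bits/symbol, average codeword length = {Lbar:.2f}")
# H(S) ~ 2.01 bits/symbol, Lbar = 2.05 bits/symbol >= H(S)
```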

Summary

Source coding theorem:
For a general alphabet S, the minimum average codeword length is given by the entropy, H(S).

Huffman coding algorithm:
A practical coding scheme that yields the shortest average codeword length.

Lecture 10

How much information can be reliably transferred over a noisy channel? (Channel capacity)
What does information theory have to say about analog communication systems?
(See sections 5.5, 5.6)

Reliable transfer of information

If the channel is noisy, can information be transferred reliably?
How much information?

Information rate

Definition:

R = r H   bits/sec

where r is the average number of symbols per second, H is the average number of information bits per symbol, and R is the average number of information bits transferred per second.

Intuition:
R can be increased arbitrarily by increasing the symbol rate r
For a noisy channel, errors are bound to occur
Is there a value of R for which the probability of error is arbitrarily small?

Channel capacity

Definition:
Channel capacity, C, is the maximum rate of information transfer over a noisy channel with arbitrarily small probability of error.

Channel Capacity Theorem

If R \leq C, then there exists a coding scheme such that symbols can be transmitted over a noisy channel with an arbitrarily small probability of error.

Channel capacity

The channel capacity theorem is a surprising result:
Gaussian noise has a PDF that is non-zero for all noise amplitudes
Sometimes (however infrequently) the noise must over-ride the signal, causing a bit error
But the theorem says we can transfer information without error!
The basic limitation due to noise is on the speed of communication, not on reliability.

So what is the channel capacity C?

Hartley-Shannon Theorem

For an additive white Gaussian noise channel, the channel capacity is:

C = B \log_2\!\left(1 + \frac{P_S}{P_N}\right)

where B is the bandwidth of the channel, P_S is the average signal power at the receiver, and P_N is the average noise power at the receiver.

Consider a baseband system

Noise power is:

P_N = \int_{-B}^{B} \frac{N_o}{2} \, df = N_o B

Channel capacity:

C = B \log_2\!\left(1 + \frac{P_S}{N_o B}\right)

Example

Consider a baseband channel with a bandwidth of B = 4 kHz. Assume a message signal with an average power of P_s = 10 W is transmitted over this channel, which has additive noise with a flat spectral density of height N_o/2, with N_o = 10^{-6} W/Hz.

1. Calculate the channel capacity of this channel.
2. If the message signal is amplified by a factor of n before transmission, calculate the channel capacity when (a) n = 2, and (b) n = 10.
3. If the bandwidth of the channel is doubled to 8 kHz, what is the channel capacity now?
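A numerical sketch of the example above. It reads "amplified by a factor of n" as scaling the received signal power by n; if n scales the amplitude instead, the power factor would be n².

```python
import numpy as np

def capacity(B, Ps, No):
    """Hartley-Shannon capacity of a baseband AWGN channel: C = B log2(1 + Ps/(No*B))."""
    return B * np.log2(1 + Ps / (No * B))

B, Ps, No = 4e3, 10.0, 1e-6

print(f"1) C = {capacity(B, Ps, No) / 1e3:.1f} kbit/s")                  # ~45 kbit/s
for n in (2, 10):                                                        # power scaled by n
    print(f"2) n={n:2d}: C = {capacity(B, n * Ps, No) / 1e3:.1f} kbit/s")
print(f"3) B doubled: C = {capacity(2 * B, Ps, No) / 1e3:.1f} kbit/s")   # ~82 kbit/s
```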

Comments

C = B \log_2\!\left(1 + \frac{P_S}{P_N}\right)

More signal power increases the capacity, but the increase is slow
Can increase the capacity arbitrarily through P_S
(Figure: C versus P_S for B = 4000 Hz, N_o = 10^{-6} W/Hz.)

More bandwidth allows more symbols per second, but also increases the noise
Can show that:

\lim_{B \to \infty} C = 1.44 \, \frac{P_S}{N_o}

Cannot increase the capacity arbitrarily through B
(Figure: C versus B for P_s = 10 W, N_o = 10^{-6} W/Hz.)

More comments

C = B \log_2\!\left(1 + \frac{P_S}{P_N}\right)

This is the capacity of an ideal (best) system
How can we design something that comes close?
Through channel coding and modulation/demodulation schemes
But no deterministic method exists to do it!

Information theory and analog

The optimum communication system achieves the largest SNR at the receiver output.

Optimum analog system

Assume that the channel noise is AWGN, having PSD N_o/2.
Average noise power at the demodulator input is: P_N = N_o B

SNR at the receiver input (P_T is the transmitted power):

SNR_{in} = \frac{P_T}{N_o B} = \frac{W}{B} \frac{P_T}{N_o W} = \frac{W}{B} \, SNR_{base}

(B/W is the bandwidth spreading ratio: transmission bandwidth / message bandwidth; P_T/(N_o W) is the baseband SNR)

Maximum rate at which information can arrive at the receiver:

C_{in} = B \log_2(1 + SNR_{in})

Maximum rate at which information can leave the receiver:

C_{out} = W \log_2(1 + SNR_{out})

Ideally, no information is lost: C_{out} = C_{in}

Equating gives:

SNR_{out} = (1 + SNR_{in})^{B/W} - 1 = \left(1 + \frac{W}{B} SNR_{base}\right)^{B/W} - 1

For any increase in bandwidth, the output SNR increases exponentially.

Analog performance

(Figure: SNR at the receiver output versus baseband SNR, comparing this ideal performance with actual analog systems.)

Summary

Information rate: R = r H bits/sec

Channel Capacity Theorem:
If R \leq C, then there exists a coding scheme such that symbols can be transmitted over a noisy channel with an arbitrarily small probability of error.

Hartley-Shannon Theorem (Gaussian noise channel):

C = B \log_2\!\left(1 + \frac{P_S}{P_N}\right)   bits/sec

Analog communication systems:
Information theory tells us the best (ideal) SNR, against which actual performance can be compared.