
DISCRETE MEMORYLESS SOURCE

Communication Systems by Simon Haykin


Chapter 9: Fundamental Limits in Information Theory
INTRODUCTION
The purpose of a communication system is to
carry information-bearing baseband signals
from one place to another over a
communication channel.
INFORMATION THEORY
It deals with mathematical modeling and analysis of
a communication system rather than with physical
sources and physical channels.

It is a highly theoretical study of the efficient use of
bandwidth to propagate information through
electronic communications systems.
INFORMATION THEORY
It provides answers to the two fundamental
questions:
What is the irreducible complexity below which a signal
cannot be compressed?
What is the ultimate transmission rate for reliable
communication over a noisy channel?

The answers to these questions lie in the ENTROPY of
a source and the CAPACITY of a channel,
respectively.
INFORMATION THEORY
Entropy
It is defined in terms of the probabilistic behavior of a
source of information.
It is named in deference to the parallel use of this concept
in thermodynamics.
Capacity
The intrinsic ability of a channel to convey information.
It is naturally related to the noise characteristics of the
channel.
INFORMATION THEORY
A remarkable result that emerges from information
theory is that
if the entropy of the source is
less than the capacity of the channel,
then error-free communication over the channel
can be achieved.

UNCERTAINTY, INFORMATION, AND ENTROPY
DISCRETE RANDOM VARIABLE, S
Suppose that a probabilistic experiment involves the
observation of the output emitted by a discrete
source during every unit of time (signaling interval).
The source output is modeled as a discrete random
variable, S, which takes on symbols from a fixed
finite alphabet:

S = \{ s_0, s_1, \ldots, s_{K-1} \}    (9.1)
DISCRETE RANDOM VARIABLE, S
with probabilities

P(S = s_k) = p_k,    k = 0, 1, \ldots, K-1    (9.2)

that must satisfy the condition

\sum_{k=0}^{K-1} p_k = 1    (9.3)
DISCRETE MEMORYLESS SOURCE
Assume that the symbols emitted by the source
during successive signaling intervals are
statistically independent.
A source having these properties is called a
DISCRETE MEMORYLESS SOURCE,
memoryless in the sense that the symbol emitted at
any time is independent of previous choices.
DISCRETE MEMORYLESS SOURCE

Can we find a measure of how much information is
produced by a DISCRETE MEMORYLESS SOURCE?

Note: the idea of information is closely related to that of
uncertainty or surprise.
EVENT S = s_k

Consider the event S = s_k, describing the emission of
symbol s_k by the source with probability p_k.
Before the event occurs:
>there is an amount of uncertainty.
During the event:
>there is an amount of surprise.
After the event:
> there is a gain in the amount of information,
which is the resolution of uncertainty.
The amount of information is related to the
inverse of the probability of occurrence.
The amount of information gained after observing the
event S = s_k, which occurs with probability p_k, is the
logarithmic function

I(s_k) = \log \left( \frac{1}{p_k} \right)    (9.4)

Note: the base of the logarithm is arbitrary.
LOGARITHMIC FUNCTION
This definition exhibits the following important
properties that are intuitively satisfying:
1.
I(s_k) = 0   for   p_k = 1    (9.5)

If we are absolutely certain of the outcome of an event, even
before it occurs, there is no information gained.
2.
I(s_k) \geq 0   for   0 \leq p_k \leq 1    (9.6)

The occurrence of an event S = s_k either provides some or
no information, but never brings about a loss of information.
LOGARITHMIC FUNCTION
3.
I(s_k) > I(s_i)   for   p_k < p_i    (9.7)

The less probable an event is, the more information we
gain when it occurs.
4.
I(s_k s_l) = I(s_k) + I(s_l)

if s_k and s_l are statistically independent.
BIT
When the logarithm in Equation (9.4) is taken to base 2, the
resulting unit of information is called the bit (a
contraction of binary digit):

I(s_k) = \log_2 \left( \frac{1}{p_k} \right) = -\log_2 p_k,    k = 0, 1, \ldots, K-1    (9.8)
ONE BIT
When p_k = 1/2, we have I(s_k) = 1 bit. Hence,
one bit is the amount of information that we
gain when one of two possible and equally
likely events occurs.
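As a quick numerical check of Equations (9.4) and (9.8) with base-2 logarithms, here is a minimal Python sketch (not part of the original slides; the function name amount_of_information is my own):

```python
import math

def amount_of_information(p_k: float) -> float:
    # I(s_k) = log2(1/p_k): information gained by observing an event of probability p_k (Eq. 9.8).
    return math.log2(1.0 / p_k)

print(amount_of_information(0.5))   # 1.0  -> one bit for an equally likely binary outcome
print(amount_of_information(1.0))   # 0.0  -> a certain event conveys no information (Eq. 9.5)
print(amount_of_information(0.25))  # 2.0  -> less probable events carry more information (Eq. 9.7)
```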
I(s_k)
The amount of information I(s_k) produced by the
source during an arbitrary signaling interval
depends on the symbol s_k emitted by the source at
the time.
Indeed, I(s_k) is a discrete random variable that
takes on the values I(s_0), I(s_1), ..., I(s_{K-1}) with
probabilities p_0, p_1, ..., p_{K-1}, respectively.
MEAN OF I(s_k): ENTROPY
The mean of I(s_k) over the source alphabet S is given
by

H(S) = E[I(s_k)]
     = \sum_{k=0}^{K-1} p_k I(s_k)
     = \sum_{k=0}^{K-1} p_k \log_2 \left( \frac{1}{p_k} \right)    (9.9)
ENTROPY OF A DISCRETE MEMORYLESS SOURCE
The important quantity H(S) is called the entropy
of a discrete memoryless source with source
alphabet S.

It is a measure of the average information content
per source symbol.

It depends only on the probabilities of the symbols
in the alphabet S of the source.
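As a minimal Python sketch of Equation (9.9) (not from the slides; the helper name entropy is my own, and zero-probability terms are skipped, consistent with x log x -> 0 as x -> 0):

```python
import math

def entropy(probabilities):
    # H(S) = sum_k p_k * log2(1/p_k), in bits per source symbol (Eq. 9.9).
    return sum(p * math.log2(1.0 / p) for p in probabilities if p > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: equally likely symbols reach log2(K)
print(entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0 bits: no uncertainty
print(entropy([0.25, 0.25, 0.5]))         # 1.5 bits: a skewed three-symbol source
```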
SOME PROPERTIES OF ENTROPY
Consider a discrete memoryless source whose mathematical
model is defined by Equations (9.1) and (9.2). The
entropy H(S) of such a source is bounded as follows:

0 \leq H(S) \leq \log_2 K    (9.10)

where K is the radix (number of symbols) of the alphabet of the source.
SOME PROPERTIES OF ENTROPY
Furthermore, we may make two statements:
1. H(S) = 0, if and only if the probability p_k = 1 for
some k, and the remaining probabilities in the set
are all zero; this lower bound on entropy
corresponds to no uncertainty.
2. H(S) = \log_2 K, if and only if p_k = 1/K for all k; this
upper bound on entropy corresponds to maximum
uncertainty.

EXAMPLE 9.1 ENTROPY OF BINARY MEMORYLESS SOURCE
Consider a binary memoryless source for which
symbol 0 occurs with probability p_0 and symbol 1
with probability p_1 = 1 - p_0, with entropy

H(S) = -p_0 \log_2 p_0 - (1 - p_0) \log_2 (1 - p_0)    (9.15)

EXAMPLE 9.1 SOLUTION
For which we observe the following:
1. When p_0 = 0, the entropy H(S) = 0; this follows
from the fact that x \log x \to 0 as x \to 0.
2. When p_0 = 1, the entropy H(S) = 0.
3. The entropy H(S) attains its maximum value,
H_max = 1 bit, when p_1 = p_0 = 1/2, that is, symbols 1
and 0 are equally probable.
EXAMPLE 9.1 SOLUTION
The function of p_0 on the right-hand side of Equation (9.15) is
frequently encountered in
information-theoretic problems, and is defined as

H(p_0) = -p_0 \log_2 p_0 - (1 - p_0) \log_2 (1 - p_0)    (9.16)

This function is called the entropy function. It is
a function of the prior probability p_0 defined on the
interval [0, 1]. The entropy function H(p_0)
versus p_0 on the interval [0, 1] is plotted in Figure
9.2.
FIGURE 9.2 ENTROPY FUNCTION
The curve highlights the observations made
under points 1, 2, and 3.
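The shape of Figure 9.2 can be reproduced by evaluating the entropy function of Equation (9.16) numerically; a minimal sketch (the name binary_entropy is my own):

```python
import math

def binary_entropy(p0: float) -> float:
    # Entropy function H(p0) = -p0*log2(p0) - (1 - p0)*log2(1 - p0) of Eq. (9.16), in bits.
    if p0 in (0.0, 1.0):
        return 0.0  # uses x*log2(x) -> 0 as x -> 0
    return -p0 * math.log2(p0) - (1.0 - p0) * math.log2(1.0 - p0)

for p0 in (0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0):
    print(f"H({p0}) = {binary_entropy(p0):.3f}")
# H(p0) is zero at p0 = 0 and p0 = 1, and peaks at 1 bit when p0 = 1/2,
# matching observations 1, 2, and 3 above.
```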
EXTENSION OF DISCRETE MEMORYLESS SOURCE
- Consider blocks rather than individual symbols.
- Each block consists of n successive source
symbols and can be viewed as a symbol of an extended
source with alphabet S^n.
- Because the source is memoryless, the probability of a
source symbol in S^n is equal to the product of the
probabilities of the n source symbols in S constituting
that particular symbol in S^n. It follows that

H(S^n) = n H(S)    (9.17)
EXAMPLE 9.2 ENTROPY OF EXTENDED SOURCE
Consider a discrete source with source alphabet
S = {s_0, s_1, s_2} with respective probabilities:
p_0 = 1/4
p_1 = 1/4
p_2 = 1/2
Find the entropy of the extended source.

EXAMPLE 9.2 : SOLUTION
The entropy of the original source is:

H(S) = p_0 \log_2 (1/p_0) + p_1 \log_2 (1/p_1) + p_2 \log_2 (1/p_2)
     = (1/4) \log_2 4 + (1/4) \log_2 4 + (1/2) \log_2 2
     = 1/2 + 1/2 + 1/2
     = 3/2 bits
EXAMPLE 9.2 : SOLUTION
Consider next the second-order extension of the
source.

With the source alphabet S consisting of three
symbols, the alphabet S^2 of the extended source has nine
symbols.

Table 9.1 presents the nine symbols, their corresponding
sequences, and their probabilities.
Table 9.1
Alphabet particulars of second-order extension of a discrete
memoryless source

Symbols of S^2:                    σ0    σ1    σ2    σ3    σ4    σ5    σ6    σ7    σ8
Corresponding sequences of S:      s0s0  s0s1  s0s2  s1s0  s1s1  s1s2  s2s0  s2s1  s2s2
Probability p(σi), i = 0, ..., 8:  1/16  1/16  1/8   1/16  1/16  1/8   1/8   1/8   1/4
EXAMPLE 9.2 : SOLUTION
The entropy of the extended source is:

H(S^2) = \sum_{i=0}^{8} p(\sigma_i) \log_2 \left( \frac{1}{p(\sigma_i)} \right)
       = 4 \times (1/16) \log_2 16 + 4 \times (1/8) \log_2 8 + (1/4) \log_2 4
       = 1 + 3/2 + 1/2
       = 3 bits

which proves that H(S^2) = 2 H(S), in accordance with Equation (9.17).
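A short sketch (reusing the entropy helper sketched earlier) that builds the second-order extension of Example 9.2 and verifies H(S^2) = 2 H(S):

```python
import math
from itertools import product

def entropy(probabilities):
    # H(S) in bits per symbol (Eq. 9.9); zero-probability terms are skipped.
    return sum(p * math.log2(1.0 / p) for p in probabilities if p > 0)

# Source of Example 9.2.
source = {"s0": 0.25, "s1": 0.25, "s2": 0.5}

# Each symbol of S^2 is a pair of successive source symbols; because the source
# is memoryless, its probability is the product of the two symbol probabilities.
extended = {a + b: source[a] * source[b] for a, b in product(source, repeat=2)}

print(entropy(source.values()))    # 1.5 bits  -> H(S)
print(entropy(extended.values()))  # 3.0 bits  -> H(S^2) = 2 * H(S), as in Eq. (9.17)
```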
Presented by Roy Sencil and Janyl Jane Nicart
END OF PRESENTATION
