
Channel Coding

(part 1)

Dr. Muhammad Awais


A simple DCS (without channel coding)

[Block diagram. Transmitter: Source → Source Coding → (bit) → Modulation → (modulated signal) → Transmission → Radio Channel. Receiver: Radio Channel → Reception → Demodulation → Source Decoding → Sink.]

Source Coding:
1. Ensures compatibility between the outside world and the DCS (character coding, PCM)
2. Removes redundancy in the data (compression)
3. Increases transmission efficiency
Modulation:
1. Converts the digital bit stream into an analog (radio) signal
Modulation Overview

Data to be transmitted (digital input): 1 0 1 0
Basic steady radio wave (carrier): A·cos(2πFt + φ)

Amplitude Shift Keying (ASK): the bit value keys the carrier amplitude A.
Frequency Shift Keying (FSK): the bit value keys the carrier frequency F.
Phase Shift Keying (PSK): the bit value keys the carrier phase φ.

[Figure: the four waveforms plotted against time.]
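The three keying schemes can be sketched numerically. A minimal NumPy example for the bit pattern 1 0 1 0; the carrier amplitude, frequency, phase, and bit duration are illustrative assumptions, not values from the slides:

```python
import numpy as np

# Sketch: generate ASK, FSK, and PSK waveforms for the bits 1 0 1 0
# over a carrier A*cos(2*pi*F*t + phi). Parameters are illustrative.
A, F, phi = 1.0, 4.0, 0.0          # carrier amplitude, frequency (Hz), phase
bits = [1, 0, 1, 0]
fs, Tb = 100, 1.0                  # samples per second, bit duration (s)
t = np.arange(0, Tb, 1 / fs)       # time axis for one bit interval

ask, fsk, psk = [], [], []
for b in bits:
    ask.append((A if b else 0.0) * np.cos(2 * np.pi * F * t + phi))        # key the amplitude
    fsk.append(A * np.cos(2 * np.pi * (F if b else F / 2) * t + phi))      # key the frequency
    psk.append(A * np.cos(2 * np.pi * F * t + phi + (0 if b else np.pi)))  # key the phase

ask = np.concatenate(ask)
fsk = np.concatenate(fsk)
psk = np.concatenate(psk)
```

Plotting the three arrays against time reproduces the slide's figure.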
A DCS with channel coding

[Block diagram. Transmitter: Source → Source Coding → (bit) → Channel Coding → (symbol) → Modulation → (modulated signal) → Transmission → Radio Channel. Receiver: Reception → Demodulation → Channel Decoding → Source Decoding → Sink.]
Channel Coding:
1. Improves communication performance (BER)
2. Changes transmitted signals into signals that better withstand channel impairments (e.g. noise, interference, and fading)
3. Adds redundancy to the transmitted data
Channel Coding Domains

Channel coding splits into two domains:

1. Waveform Coding: transforms waveforms into better waveforms that are robust to channel impairments, hence improving detector performance. Techniques: M-ary Signaling, Orthogonal Codes, Bi-Orthogonal Codes, Simplex Codes, Antipodal Signaling, Orthogonal Signaling.

2. Structured Sequences: transform data sequences into better sequences by adding redundant bits. The redundant bits are used to detect and correct data errors, improving overall performance of the communication system. Techniques: Block Codes, Convolutional Codes, Single Parity, Turbo Codes, Product Codes, LDPC Codes, Hamming Codes.
Waveform coding
Concept: The basic idea is to design a signal set in which each signal has the smallest cross-correlation (similarity) with every other member of the set.
In the vector domain, the individual signal vectors must be as far apart from each other as possible.

1. Antipodal signal set:
A signal set which gives the minimum correlation (-1) between its member signals.
The problem is that it contains only two members.
The signals are mirror images of each other, i.e. their vectors are 180° apart.
Antipodal signals are optimal (best distance properties).
Example: BPSK modulation
E_b = ∫_0^T s_i^2(t) dt
Waveform coding

2. Orthogonal signal set:
A signal set in which any two members have no relationship (i.e. zero correlation).
Zero correlation occurs when the signals have a phase difference of 90°.
The signal vectors are perpendicular to each other.

Example:

d = √(2·E_b), where E_b = ∫_0^T s_i^2(t) dt
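Both correlation claims (antipodal −1, orthogonal 0) can be checked numerically. A small NumPy sketch; the 5 Hz tone and the one-second window are illustrative assumptions:

```python
import numpy as np

# Sketch: check the two correlation claims numerically.
# Antipodal pair: s and -s have correlation coefficient -1 (the minimum).
# Orthogonal pair: signals 90 degrees apart have zero correlation.
t = np.linspace(0, 1, 1000, endpoint=False)   # one-second observation window
s = np.cos(2 * np.pi * 5 * t)                 # reference signal (5 Hz tone)

def corr(a, b):
    # normalized cross-correlation coefficient of two sampled signals
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rho_antipodal = corr(s, -s)                          # mirror image of s
rho_orthogonal = corr(s, np.sin(2 * np.pi * 5 * t))  # same tone, 90 degrees apart
```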
Waveform coding
3. M-ary Signaling

A waveform coder takes K bits at a time and maps them to one of M = 2^K waveforms.

M-ary signaling can be divided into two categories:
1) Orthogonal signaling: M-ary Frequency Shift Keying (MFSK)
2) Non-orthogonal signaling: M-ary Phase Shift Keying (MPSK), M-ary Amplitude-Phase Keying (MAPK)
Waveform coding
3. M-ary Signaling Performance

[Figure: M-ary signaling performance curves.]
Waveform coding
4. Binary Orthogonal Signaling

A one-bit data set can be transformed, using orthogonal codewords of two digits each, described by the rows of matrix H1.
To encode a 2-bit data set, we extend the foregoing set both horizontally and vertically, creating matrix H2.
The same construction rule yields an orthogonal set H3 for a 3-bit data set.

Generally, for a K-bit data set, the H_K matrix is generated recursively (down to the base case K = 1) as

H_k = | H_{k-1}   H_{k-1} |
      | H_{k-1}  ~H_{k-1} |

where ~H_{k-1} denotes the bitwise complement of H_{k-1}.
Why is it called "orthogonal"?

For binary signaling, the correlation between any two bit sequences i and j is obtained by

z_ij = (# digit agreements − # digit disagreements) / (total no. of digits in the sequence)

For each pair of distinct words in H_k, z_ij = 0 (for i ≠ j), so each of the sets is orthogonal:

z_ij = 1 for i = j; z_ij = 0 otherwise.
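The recursive construction and the z_ij test can be combined into a short check. A sketch assuming the conventional base matrix H1 = [[0,0],[0,1]] (verify against your slides):

```python
import numpy as np

# Sketch: build H_k recursively (bottom-right block complemented) and
# verify z_ij = (agreements - disagreements) / total digits is 0 for
# every pair of distinct rows. The base case H1 = [[0,0],[0,1]] is the
# conventional choice, assumed here.
def H(k):
    if k == 1:
        return np.array([[0, 0],
                         [0, 1]])
    h = H(k - 1)
    return np.block([[h, h],
                     [h, 1 - h]])    # 1 - h complements the binary block

def z(a, b):
    agree = int(np.sum(a == b))
    return (agree - (len(a) - agree)) / len(a)

H3 = H(3)    # 8 orthogonal codewords of 8 digits each, for a 3-bit data set
```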
Waveform coding
5. Bi-Orthogonal Codes

Biorthogonal codes require half as many code bits per codeword as orthogonal codes.
Waveform coding
6. Trans-Orthogonal (Simplex) Codes
Obtained from orthogonal codes by deleting the first bit of each codeword.
DCS with waveform coding

Each k-bit message chooses one of the generators, and the resulting 2^k code bits are sent to the modulator (BPSK).
DCS with waveform coding

At the receiver, the signal is demodulated and fed to M correlators (or matched filters).
The correlation is done over a codeword duration T = 2^k · T_c.
For real-time communication, the codeword duration must equal the message duration: T = 2^k · T_c = k · T_b.
What does this mean? Since 2^k grows much faster than k, the code-bit duration T_c = k·T_b / 2^k shrinks rapidly, so the required bandwidth grows exponentially with k.
2. Structured Sequences
In structured sequences, extra bits are added to the message to reduce the probability of error (and to detect errors).
A k-bit data block is transmitted as an n-bit codeword (n − k added bits).
The code is referred to as an (n, k) code.

Code Rate: the ratio of information bits to total bits in the codeword,
R = k / n
2.1 Single Parity Check Codes:
Append a parity bit (even or odd) at the end of the data sequence.

Rate: R = k / (k + 1)

A single parity check can detect the presence of any odd number of errors in the codeword; an even number of errors goes undetected. It cannot correct any errors.
Let p be the probability that a channel symbol is received in error.

Example: Compute the probability of an undetected error for a (4,3) even-parity code, assuming symbol error probability p = 10^-3.
The code is unable to detect 2-bit and 4-bit errors, so

P_undetected = C(4,2)·p^2·(1 − p)^2 + C(4,4)·p^4 ≈ 6 × 10^-6
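The example's arithmetic can be checked directly by summing the binomial probabilities of the undetectable (even-weight) error patterns:

```python
from math import comb

# Sketch: undetected-error probability of the (4,3) even-parity code.
# Only 2-bit and 4-bit error patterns slip past a single parity check.
p, n = 1e-3, 4
P_undetected = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in (2, 4))
# approximately 6e-6, dominated by the 2-error term
```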
2.2 Rectangular Codes:

Data are arranged in an M × N matrix, and a parity bit is appended to each row and each column.

Code Rate: R = MN / ((M + 1)(N + 1))

A rectangular code can correct all single-bit errors in the codeword: the failing row parity and failing column parity intersect at the error position.

The probability of block (word) error for a code that can correct all patterns of t or fewer errors is

P_M ≈ Σ_{j=t+1}^{n} C(n, j) · p^j · (1 − p)^(n−j)

where p is the probability of symbol error (channel dependent).
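The block-error sum translates directly into code. A sketch with illustrative n, t, and p (not values from the slides):

```python
from math import comb

# Sketch: P_M = sum_{j=t+1}^{n} C(n,j) p^j (1-p)^(n-j), the word error
# probability of a code correcting all patterns of t or fewer errors.
def block_error_prob(n, t, p):
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1))

P_M = block_error_prob(15, 1, 1e-2)   # illustrative: n=15, t=1, p=0.01
```

With t = 0 the sum collapses to 1 − (1 − p)^n, the probability of at least one symbol error, which is a quick sanity check on the formula.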
2.3 Linear Block Codes
A class of parity check codes described by the same (n, k) notation.

A linear block encoder takes one k-bit message vector (k-tuple) at a time, out of 2^k possible messages, and outputs one n-bit code vector (n-tuple), n > k, out of 2^n possible n-tuples.

Encoding Rule:
Assign to each of the 2^k possible message tuples one of the 2^n possible code tuples.
The mapping is a linear operation (and can be implemented with a look-up table).
The encoder maps the binary vector space V_k (the set of all binary k-tuples) into the binary vector space V_n (the set of all binary n-tuples).

So encoding transforms the set of 2^k k-tuples into a set of 2^k n-tuples; this set is called S.

A set S of 2^k n-tuples is a linear code if it is a subspace of V_n, which means the code is linear if and only if the sum of any two codewords is also a codeword.
There are two contradicting objectives:
1) Code efficiency: we want to pack V_n densely with the elements of S.
2) Error detection: we want the elements of S to be as far away from each other as possible.
An example of a (6,3) linear block code: k = 3, n = 6.

When k is very large, a look-up table becomes prohibitive: e.g., a (127, 92) code would need a look-up table with 2^92 entries.

Solution: the Generator Matrix.
Generator Matrix
Find a set of k linearly independent n-tuples that spans the code subspace S.
This set is called a basis of the code.
Put these k basis vectors into a matrix G of size k × n, called the generator matrix.

Example: (6,3) code

Encoding Rule: U = mG
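The rule U = mG can be exercised concretely. A sketch using the standard textbook (6,3) systematic generator matrix, which is an assumption; substitute the matrix from the slide if it differs:

```python
import numpy as np

# Sketch: encode a 3-bit message with U = mG (mod 2) for a (6,3)
# systematic code. G = [P | I3] is the common textbook example
# (an assumption, not taken verbatim from the slides).
G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def encode(m):
    # matrix-vector product over GF(2): multiply, then reduce mod 2
    return np.mod(np.array(m) @ G, 2)

U = encode([1, 1, 0])   # 3-bit message -> 6-bit codeword
```

Because the code is linear, the sum (mod 2) of any two codewords is itself a codeword, which is easy to verify with this encoder.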


Systematic Codes
A systematic linear block code is a mapping from a k-dimensional space to an n-dimensional space such that the k-bit message appears unchanged as part of the n-bit codeword.
Parity Check Matrix
For every k × n generator matrix G there exists an (n − k) × n matrix H, called the parity-check matrix, such that G·H^T = 0, i.e. the rows of G are orthogonal to the rows of H.

For systematic codes, G = [P | I_k] and H = [I_{n−k} | P^T].
Syndrome Testing (Error Detection)
Let r be the received vector (n-tuple): r = U + e, where U is the transmitted codeword and e = (e1, e2, …, en) is a binary error vector.

Define the syndrome S = r·H^T.

Very important result: the syndrome of a corrupted codeword (received vector r) is equal to the syndrome of the error vector that corrupted it:

S = r·H^T = (U + e)·H^T = e·H^T, since U·H^T = 0.
Syndrome Testing (Error Detection): Example

Let U be the transmitted codeword and r the received (corrupted) vector.
Compute the syndrome of the received vector; then compute the syndromes of all possible error patterns and select the error pattern whose syndrome is equal to the syndrome of r.

If the syndrome of r is the zero vector, then r is a valid codeword.
If the syndrome of r contains some nonzero elements, r contains detectable errors.
If r contains a correctable error pattern, then the syndrome of r is equal to the syndrome of one of those error patterns.
Error Correction (Syndrome of a Coset):
A standard-array-based error correction algorithm.

Standard Array construction: for a given (n, k) code, the standard array is constructed as follows.
1. The first row contains all 2^k valid codewords; U1 is the all-zero codeword.
2. Select from the remaining vectors of V_n (the n-dimensional vector space) a vector e1 of minimum weight, called a coset leader.
3. Complete the second row by adding e1 to all codewords (binary addition).
4. Select from the remaining elements of V_n (those not yet in the standard array) a vector e2 of minimum weight.
5. Complete the third row by adding e2 to all codewords.
6. Repeat until all elements of V_n are placed in the standard array.
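The six steps translate into a short construction. A sketch for the assumed (6,3) textbook code (the G matrix is an assumption):

```python
import numpy as np
from itertools import product

# Sketch: standard-array construction. First row = all 2^k codewords;
# each later row = a minimum-weight unplaced vector (coset leader)
# added to every codeword.
G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

codewords = [tuple(np.mod(np.array(m) @ G, 2))
             for m in product([0, 1], repeat=3)]

all_vectors = sorted(product([0, 1], repeat=6), key=sum)   # ascending weight
placed = set(codewords)
rows = [codewords]
for v in all_vectors:
    if v in placed:
        continue                                  # already in some coset
    leader = np.array(v)                          # minimum-weight unplaced vector
    row = [tuple(np.mod(leader + np.array(u), 2)) for u in codewords]
    placed.update(row)
    rows.append(row)
```

The loop terminates once the cosets partition all of V_6, reproducing the 8 × 8 array of 64 vectors described on the next slide.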
Standard Array Properties:
1. There are 2^k columns and 2^(n−k) rows; therefore the (n, k) code is capable of correcting 2^(n−k) error patterns (the coset leaders).
2. All vectors in the same row have the same syndrome.
3. No two vectors in the same row are identical.
4. Every vector of V_n appears exactly once in the standard array.
Standard Array Construction Example
Consider a (6,3) code with the G matrix as given: 8 codewords, 8 coset leaders, and a total of 64 vectors in the standard array.

Error Correction Procedure:
1. Given r, the received vector.
2. Calculate the syndrome of r: S = r·H^T.
3. Locate the coset leader e_j in the standard array whose syndrome is equal to the syndrome of r: e_j·H^T = r·H^T.
4. Retrieve the corrected codeword by subtracting the coset leader e_j from r (in binary, subtraction is the same as addition).
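The four steps fit in a few lines. A sketch for the assumed (6,3) parity-check matrix, with the look-up table restricted to the zero and single-bit coset leaders (the eighth coset would need a 2-bit leader):

```python
import numpy as np

# Sketch: syndrome decoding. Compute S = r H^T, look up the coset
# leader with that syndrome, add it back (binary subtraction equals
# addition) to correct the codeword. H is an assumed (6,3) matrix.
H = np.array([[1, 0, 0, 1, 0, 1],
              [0, 1, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 1]])

# Pre-stored LUT: syndromes of the zero pattern and the six single-bit
# error patterns (coset leaders of weight 0 and 1).
leaders = [np.zeros(6, dtype=int)] + [row for row in np.eye(6, dtype=int)]
lut = {tuple(np.mod(e @ H.T, 2)): e for e in leaders}

def decode(r):
    s = tuple(np.mod(np.array(r) @ H.T, 2))        # step 2: syndrome of r
    e = lut.get(s, np.zeros(6, dtype=int))         # step 3: matching coset leader
    return np.mod(np.array(r) + e, 2)              # step 4: corrected codeword

U_hat = decode([1, 0, 0, 1, 1, 0])   # codeword 101110 with its third bit flipped
```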
Decoder Implementation for the (6,3) code
1. Computation of the syndrome of r:

S = r·H^T = [r1 r2 … r6] · H^T = [s1 s2 s3], with

H^T = | 1 0 0 |
      | 0 1 0 |
      | 0 0 1 |
      | 1 1 0 |
      | 0 1 1 |
      | 1 0 1 |

2. Syndrome look-up table: pre-store the syndromes of all coset leaders in a LUT; the table (or equivalent combinational logic) maps each syndrome [s1 s2 s3] to the corresponding error-pattern bits e1 … e6.
Error Detection and Correction Capability
Weight and Distance of Binary Vectors

The Hamming distance between two codewords is the number of elements in which they differ.
The Hamming weight of a codeword is the number of its nonzero elements.

Example:
U = 100101101
V = 011110100
w(U) = 5
d(U, V) = 6
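The worked example can be verified in two lines of Python:

```python
# Sketch: verify the slide's Hamming weight and distance example.
U = "100101101"
V = "011110100"

w_U = sum(bit == "1" for bit in U)          # weight: count of nonzero elements
d_UV = sum(a != b for a, b in zip(U, V))    # distance: positions that differ
```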
Minimum Distance of a Linear Code
The minimum distance d_min is the smallest of the distances between all pairs of codewords in the code set.

Error Detection and Correction
The error-correcting capability t of a code is the maximum number of guaranteed correctable errors per codeword:

t = ⌊(d_min − 1) / 2⌋

The error-detecting capability, defined in terms of d_min, is

e = d_min − 1
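The two capability formulas can be packaged together; the example d_min values below are illustrative:

```python
# Sketch: guaranteed capabilities from the minimum distance d_min.
def capability(d_min):
    t = (d_min - 1) // 2    # max guaranteed correctable errors per codeword
    e = d_min - 1           # max guaranteed detectable errors per codeword
    return t, e

t, e = capability(3)   # e.g. d_min = 3, as in a single-error-correcting Hamming code
```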