
DEPT: EEE ENGINEERING
EE T56 COMMUNICATION

UNIT I: MODULATION SYSTEMS
Time and frequency domain representation of signals, amplitude modulation and demodulation, frequency modulation and demodulation, superheterodyne radio receiver. Frequency division multiplexing. Pulse width modulation.
UNIT II: TRANSMISSION MEDIUM
Transmission lines: types, equivalent circuit, losses, standing waves, impedance matching, bandwidth; radio propagation: ground wave and space wave propagation, critical frequency, maximum usable frequency, path loss, white Gaussian noise.
UNIT III: DIGITAL COMMUNICATION
Pulse code modulation, time division multiplexing, digital T-carrier system. Digital radio system. Digital modulation: frequency and phase shift keying; modulator and demodulator, bit error rate calculation.
UNIT IV: DATA COMMUNICATION AND NETWORK PROTOCOL
Data communication codes, error control. Serial and parallel interface, telephone network, data modem, ISDN, LAN, ISO-OSI seven layer architecture for WAN.
UNIT V: SATELLITE AND OPTICAL FIBRE COMMUNICATIONS
Orbital satellites, geostationary satellites, look angles, satellite system link models, satellite system link equations; advantages of optical fibre communication - light propagation through fibre, fibre loss, light sources and detectors.
TEXT BOOKS
1. Wayne Tomasi, Electronic Communication Systems, Pearson Education, Third Edition, 2001.
2. Roy Blake, Electronic Communication Systems, Thomson Delmar, 2nd Edition, 2002.
REFERENCE BOOKS
1. William Schweber, Electronic Communication Systems, Prentice Hall of India, 2002.
2. G. Kennedy, Electronic Communication Systems, McGraw Hill, 4th Edition, 2002.
3. Miller, Modern Electronic Communication, Prentice Hall of India, 2003.


UNIT I: MODULATION SYSTEMS
Time and frequency domain representation of signals, amplitude modulation and demodulation, frequency modulation and demodulation, superheterodyne radio receiver. Frequency division multiplexing. Pulse width modulation.
Time and frequency domain representation of signals:
Electrical signals have both time and frequency domain representations. In the
time domain, voltage or current is expressed as a function of time as illustrated in Figure
1. Most people are relatively comfortable with time domain representations of signals.
Signals measured on an oscilloscope are displayed in the time domain and digital
information is often conveyed by a voltage as a function of time.

Signals can also be represented by a magnitude and phase as a function of frequency. Signals that repeat periodically in time are represented by a power spectrum as illustrated in Figure 2. Signals that are time limited (i.e. are only non-zero for a finite time) are represented by an energy spectrum, as illustrated in the figure.
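As a quick illustration, the sketch below (a hypothetical example assuming NumPy; the 440 Hz and 1200 Hz tones are arbitrary choices) computes the frequency domain representation of a time domain signal:

```python
# Same signal viewed as voltage vs. time and as magnitude vs. frequency.
import numpy as np

fs = 8000                          # sample rate, Hz (illustrative choice)
t = np.arange(0, 1.0, 1 / fs)      # one second of time samples
x = 2.0 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

# Frequency-domain representation: magnitude and phase of each component.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# The spectrum peaks at the two sinusoid frequencies, 440 Hz and 1200 Hz.
peaks = freqs[np.argsort(np.abs(X))[-2:]]
print(sorted(peaks))               # -> [440.0, 1200.0]
```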


Analog communication is a data-transmitting technique that uses continuous signals to carry information such as voice, images and video. An analog signal is a signal that is continuous in both time and amplitude, and it is generally carried by use of modulation. Analog circuits do not involve quantization of information, unlike digital circuits, and consequently have a primary disadvantage of random variation and signal degradation, particularly the addition of noise to the audio or video quality over a distance.
Data is represented by physical quantities that are added or removed to alter it. Analog transmission is inexpensive and enables information to be transmitted from point-to-point or from one point to many. Once the data has arrived at the receiving end, it is converted back into digital form so that it can be processed by the receiving computer. Analog communication systems convert (modulate) analog signals into modulated (analog) signals. Communication systems convert information into a format appropriate for the transmission medium. The block diagram of a communication system is given below:


Fig.1 Communication System Block Diagram


The source encoder converts the message into a message signal or bits. The transmitter converts the message signal or bits into a format appropriate for channel transmission (an analog or digital signal). The channel introduces distortion, noise, and interference. The receiver decodes the received signal back to the message signal, and the source decoder decodes the message signal back into the original message.
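As a toy illustration of this chain, the sketch below (hypothetical; the bipolar mapping and noise level are assumptions, not a description of any particular system) passes a message through encoder, transmitter, channel and receiver stages:

```python
# A toy end-to-end model of the block diagram. NumPy assumed available.
import numpy as np

def source_encode(msg: str) -> np.ndarray:
    """Source encoder: message -> bits."""
    return np.unpackbits(np.frombuffer(msg.encode("ascii"), dtype=np.uint8))

def transmit(bits: np.ndarray) -> np.ndarray:
    """Transmitter: map bits to bipolar levels suitable for the channel."""
    return 2.0 * bits - 1.0                    # 0 -> -1 V, 1 -> +1 V

def channel(signal: np.ndarray, noise_std=0.3) -> np.ndarray:
    """Channel: introduces additive noise (distortion/interference omitted)."""
    return signal + np.random.normal(0.0, noise_std, signal.shape)

def receive_and_decode(received: np.ndarray) -> str:
    """Receiver + source decoder: threshold back to bits, bits -> message."""
    bits = (received > 0).astype(np.uint8)
    return np.packbits(bits).tobytes().decode("ascii", errors="replace")

print(receive_and_decode(channel(transmit(source_encode("hello")))))
```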

Amplitude Modulation:
Amplitude modulation (AM) is a technique used in electronic communication, most commonly for transmitting information via a radio carrier wave. AM works by varying the strength of the transmitted signal in relation to the information being sent. For example, changes in the signal strength can be used to specify the sounds to be reproduced by a loudspeaker, or the light intensity of television pixels. (Contrast this with frequency modulation, also commonly used for sound transmissions, in which the frequency is varied, and phase modulation, often used in remote controls, in which the phase is varied.) In order that a radio signal can carry audio or other information for broadcasting or for two-way radio communication, it must be modulated, or changed in some way. Although there are a number of ways in which a radio signal may be modulated, one of the easiest, and one of the first methods to be used, was to change its amplitude in line with variations of the sound.
The basic concept of amplitude modulation, AM, is quite straightforward. The amplitude of the signal is changed in line with the instantaneous intensity of the sound. In this way the radio frequency signal has a representation of the sound wave superimposed on it. In view of the way the basic signal "carries" the sound or modulation, the radio frequency signal is often termed the "carrier".

Fig.2 Amplitude Modulation, AM


When a carrier is modulated in any way, further signals are created that carry the actual modulation information. It is found that when a carrier is amplitude modulated, further signals are generated above and below the main carrier. To see how this happens, take the example of a carrier on a frequency of 1 MHz which is modulated by a steady tone of 1 kHz. The process of modulating a carrier is exactly the same as mixing two signals together, and as a result both sum and difference frequencies are produced. Therefore when a tone of 1 kHz is mixed with a carrier of 1 MHz, a "sum" frequency is produced at 1 MHz + 1 kHz, and a difference frequency is produced at 1 MHz - 1 kHz, i.e. 1 kHz above and below the carrier.
If the steady tone is replaced with audio like that encountered with speech or music, which comprises many different frequencies, an audio spectrum with frequencies over a band of frequencies is seen. When modulated onto the carrier, these spectra are seen above and below the carrier.
It can be seen that if the top frequency that is modulated onto the carrier is 6 kHz, then the spectra will extend to 6 kHz above and below the carrier. In other words the bandwidth occupied by the AM signal is twice the maximum frequency of the signal that is used to modulate the carrier, i.e. it is twice the bandwidth of the audio signal to be carried.
In amplitude modulation, or AM, the carrier signal is given by

c(t) = A cos(2π f_c t)

It has an amplitude of A, modulated in proportion to the message-bearing (lower frequency) signal m(t) to give

s(t) = A [1 + m(t)] cos(2π f_c t)

The magnitude of m(t) is chosen to be less than or equal to 1, for reasons having to do with demodulation, i.e. recovery of the message from the received signal. The modulation index is then defined to be

μ = max |m(t)|

The frequency of the modulating signal is chosen to be much smaller than that of the carrier signal. Try to think of what would happen if the modulation index were bigger than 1.
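The sketch below (assuming NumPy; the 100 kHz carrier and 1 kHz tone are illustrative choices) generates an AM signal with these definitions and a modulation index of 0.4:

```python
# AM with the definitions above: carrier c(t) = A cos(2*pi*fc*t),
# message m(t) with |m(t)| <= 1.
import numpy as np

fs = 1_000_000                      # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of samples
fc, fm = 100_000, 1_000             # carrier 100 kHz, tone 1 kHz (illustrative)
A, mu = 1.0, 0.4                    # carrier amplitude and modulation index

m = mu * np.cos(2 * np.pi * fm * t)             # message, |m(t)| = 0.4 <= 1
s = A * (1 + m) * np.cos(2 * np.pi * fc * t)    # AM signal

# The envelope swings between A(1 - mu) and A(1 + mu); with mu > 1 the
# envelope would cross zero and a simple envelope detector would distort.
print(s.max(), s.min())             # approximately +1.4 and -1.4
```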


Fig. 3: AM modulation with modulation index 0.2


Note that for a single tone m(t) = μ cos(2π f_m t), the AM signal is of the form

s(t) = A [1 + μ cos(2π f_m t)] cos(2π f_c t)
     = A cos(2π f_c t) + (A μ / 2) cos(2π (f_c + f_m) t) + (A μ / 2) cos(2π (f_c - f_m) t)

This has frequency components at the frequencies f_c, f_c + f_m and f_c - f_m.

Fig. 4: AM modulation with modulation index 0.4

The version of AM that we described is called Double Side Band AM, or DSBAM, since we send signals at both f_c + f_m and f_c - f_m.

It is more efficient to transmit only one of the side bands (so-called Single Side Band AM; USBAM and LSBAM for the upper and lower side bands respectively), or, if the filtering requirements for this are too arduous, to send a part of one of the side bands. This is what is done in commercial analog NTSC television, which is known as Vestigial Side Band AM. The TV video signal has a bandwidth of about 4.25 MHz, but only 1 MHz of the lower side band of the signal is transmitted. The FCC allocates 6 MHz per channel (thus 0.75 MHz is left for the sound signal, which is an FM signal, covered in the next section). You may have wondered how we can listen to radio channels on both stereo and mono receivers. The trick is to generate a modulating signal by adding a DSB version (with the 38 kHz carrier suppressed) of the difference between the Left and Right channels to the unmodulated sum of the Left and Right channels. The resulting modulating signal has a bandwidth of about 60 kHz. A mono receiver uses only the sum signal, whereas a stereo receiver separates out the difference as well and reconstitutes the Left and Right channel outputs.
Amplitude modulation, AM, is one of the most straightforward ways of modulating a radio signal or carrier. The process of demodulation, where the audio signal is removed from the radio carrier in the receiver, is also quite simple. The easiest method of achieving amplitude demodulation is to use a simple diode detector. This consists of just a handful of components: a diode, a resistor and a capacitor.


Fig. 5 AM Diode Detector

In this circuit, the diode rectifies the signal, allowing only half of the alternating
waveform through. The capacitor is used to store the charge and provide a smoothed
output from the detector, and also to remove any unwanted radio frequency components.
The resistor is used to enable the capacitor to discharge. If it were not there and no other
load was present, then the charge on the capacitor would not leak away, and the circuit
would reach a peak and remain there.
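A rough simulation of this behaviour (assuming NumPy; the RC time constant is an assumed value chosen between the carrier and audio periods) is sketched below:

```python
# Diode detector action: rectify, then smooth with an RC-style decay.
import numpy as np

fs, fc, fm = 1_000_000, 100_000, 1_000
t = np.arange(0, 0.005, 1 / fs)
am = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

rectified = np.maximum(am, 0.0)       # diode passes only positive half-cycles

# Capacitor charges to the peaks, resistor lets it discharge between peaks.
tau = 100e-6                          # RC time constant, seconds (assumed)
decay = np.exp(-1 / (fs * tau))
out = np.empty_like(rectified)
v = 0.0
for i, x in enumerate(rectified):
    v = max(x, v * decay)             # charge instantly, discharge via R
    out[i] = v

# 'out' now follows the 1 kHz envelope rather than the 100 kHz carrier.
```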
Advantages of Amplitude Modulation, AM
There are several advantages of amplitude modulation, and some of these reasons have meant that it is still in widespread use today:
- It is simple to implement.
- It can be demodulated using a circuit consisting of very few components.
- AM receivers are very cheap, as no specialised components are needed.

Disadvantages of amplitude modulation


Amplitude modulation is a very basic form of modulation, and although its
simplicity is one of its major advantages, other more sophisticated systems provide a
number of advantages. Accordingly it is worth looking at some of the disadvantages of
amplitude modulation.

It is not efficient in terms of its power usage

It is not efficient in terms of its use of bandwidth, requiring a bandwidth equal to


twice that of the highest audio frequency

It is prone to high levels of noise because most noise is amplitude based and
obviously AM detectors are sensitive to it.

Thus, AM has the advantage of simplicity, but it is not the most efficient mode to use, both in terms of the amount of space or spectrum it takes up, and in the way in which it uses the power that is transmitted. This is the reason why it is not widely used these days, either for broadcasting or for two-way radio communication. Even the long, medium and short wave broadcasts will ultimately change because amplitude modulation, AM, is subject to much higher levels of noise than other modes. For the moment, its simplicity and its wide usage mean that it will be difficult to change quickly, and it will remain in use for many years to come.
Frequency Modulation:
While changing the amplitude of a radio signal is the most obvious method to
modulate it, it is by no means the only way. It is also possible to change the frequency of
a signal to give frequency modulation or FM. Frequency modulation is widely used on
frequencies above 30 MHz, and it is particularly well known for its use for VHF FM
broadcasting.

Although it may not be quite as straightforward as amplitude modulation, frequency modulation, FM, nevertheless offers some distinct advantages. It is able to provide near interference-free reception, and it was for this reason that it was adopted for the VHF sound broadcasts. These transmissions could offer high fidelity audio, and for this reason, frequency modulation is far more popular than the older transmissions on the long, medium and short wave bands. In addition to its widespread use for high quality audio broadcasts, FM is also used for a variety of two-way radio communication systems. Whether for fixed or mobile radio communication systems, or for use in portable applications, FM is widely used at VHF and above.
To generate a frequency modulated signal, the frequency of the radio carrier is
changed in line with the amplitude of the incoming audio signal.

Fig.5 Frequency Modulation, FM

When the audio signal is modulated onto the radio frequency carrier, the new
radio frequency signal moves up and down in frequency. The amount by which the signal moves up and down is important. It is known as the deviation, and is normally quoted as the number of kilohertz of deviation. As an example the signal may have a deviation of 3 kHz. In this case the carrier is made to move up and down by 3 kHz.
Broadcast stations in the VHF portion of the frequency spectrum between 87.5 and 108 MHz use large values of deviation, typically 75 kHz. This is known as wide-band FM (WBFM). These signals are capable of supporting high quality transmissions, but occupy a large amount of bandwidth. Usually 200 kHz is allowed for each wide-band FM transmission. For communications purposes less bandwidth is used. Narrow band FM (NBFM) often uses deviation figures of around 3 kHz. It is narrow band FM that is typically used for two-way radio communication applications.

Fig. Frequency Modulation
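The sketch below (assuming NumPy; the carrier and 3 kHz deviation values are illustrative, NBFM-style) generates an FM signal by integrating the instantaneous frequency:

```python
# FM: the instantaneous frequency is the carrier plus deviation scaled
# by the audio, and the phase is the running integral of that frequency.
import numpy as np

fs = 1_000_000
t = np.arange(0, 0.01, 1 / fs)
fc = 100_000                           # carrier, Hz
deviation = 3_000                      # peak deviation, Hz (NBFM-style)
audio = np.sin(2 * np.pi * 1_000 * t)  # 1 kHz modulating tone

inst_freq = fc + deviation * audio     # f(t) = fc + deviation * audio(t)
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
fm_signal = np.cos(phase)              # constant amplitude, varying frequency
```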

Advantages of frequency modulation, FM


FM is used for a number of reasons and there are several advantages of frequency
modulation. In view of this it is widely used in a number of areas to which it is ideally
suited. Some of the advantages of frequency modulation are noted below:

Resilience to noise: One particular advantage of frequency modulation is its resilience to signal level variations. The modulation is carried only as variations in frequency. This means that any signal level variations will not affect the audio output, provided that the signal does not fall to a level where the receiver cannot cope. As a result this makes FM ideal for mobile radio communication applications, including more general two-way radio communication or portable applications where signal levels are likely to vary considerably. The other advantage of FM is its resilience to noise and interference. It is for this reason that FM is used for high quality broadcast transmissions.
Easy to apply modulation at a low power stage of the transmitter:
Another advantage of frequency modulation is associated with the transmitters. It
is possible to apply the modulation to a low power stage of the transmitter, and it is not
necessary to use a linear form of amplification to increase the power level of the signal to
its final value.
It is possible to use efficient RF amplifiers with frequency modulated signals:
It is possible to use non-linear RF amplifiers to amplify FM signals in a
transmitter and these are more efficient than the linear ones required for signals with any
amplitude variations (e.g. AM and SSB). This means that for a given power output, less
battery power is required and this makes the use of FM more viable for portable two-way
radio applications.
Applications
Magnetic tape storage
FM is also used at intermediate frequencies by all analog VCR systems, including VHS, to record the luminance (black and white) portion of the video signal. Commonly, the chrominance component is recorded as a conventional AM signal, using the higher-frequency FM signal as bias. FM is the only feasible method of recording the luminance component of video to, and retrieving it from, magnetic tape without extreme distortion, as video signals have a very large range of frequency components (from a few hertz to several megahertz), too wide for equalizers to work with due to electronic noise below -60 dB. FM also keeps the tape at saturation level, and therefore acts as a form of noise reduction; a simple limiter can mask variations in the playback output, and the FM capture effect removes print-through and pre-echo. A continuous pilot tone, if added to the signal, as was done on V2000 and many Hi-band formats, can keep mechanical jitter under control and assist timebase correction.
These FM systems are unusual in that they have a ratio of carrier frequency to maximum modulation frequency of less than two; contrast this with FM audio broadcasting, where the ratio is around 10,000. Consider for example a 6 MHz carrier modulated at a 3.5 MHz rate; by Bessel analysis the first sidebands are on 9.5 and 2.5 MHz, while the second sidebands are on 13 MHz and 1 MHz. The result is a sideband of reversed phase on +1 MHz; on demodulation, this results in an unwanted output at 6 - 1 = 5 MHz. The system must be designed so that this is at an acceptable level.
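The sideband arithmetic above can be reproduced with a short sketch (assuming SciPy for the Bessel functions; the modulation index of 1.0 is an assumed value, since the text does not give one):

```python
# For a tone-modulated FM signal, the sideband at fc + n*fm has amplitude
# J_n(beta), where beta is the modulation index. The 6 MHz carrier and
# 3.5 MHz modulation rate follow the text; beta = 1.0 is illustrative.
from scipy.special import jv

fc, fm, beta = 6.0, 3.5, 1.0          # MHz, MHz, modulation index (assumed)
for n in range(-2, 3):
    f = fc + n * fm                   # sideband frequency in MHz
    print(f"n={n:+d}: {f:5.1f} MHz, amplitude J_n(beta) = {jv(n, beta):+.3f}")
# n=-2 lands on -1 MHz, which folds back to +1 MHz with reversed phase:
# the unwanted 6 - 1 = 5 MHz product described above.
```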
Sound
FM is also used at audio frequencies to synthesize sound. This technique, known
as FM synthesis, was popularized by early digital synthesizers and became a standard
feature for several generations of personal computer sound cards.
Radio
Edwin Howard Armstrong (1890-1954) was an American electrical engineer who
invented wideband frequency modulation (FM) radio. He patented the regenerative
circuit in 1914, the superheterodyne receiver in 1918 and the super-regenerative circuit in
1922. He presented his paper: "A Method of Reducing Disturbances in Radio Signaling
by a System of Frequency Modulation", which first described FM radio, before the New
York section of the Institute of Radio Engineers on November 6, 1935. The paper was
published in 1936.

As the name implies, wideband FM (WFM) requires a wider signal bandwidth than amplitude modulation by an equivalent modulating signal, but this also makes the signal more robust against noise and interference. Frequency modulation is also more robust against simple signal amplitude fading phenomena. As a result, FM was chosen as the modulation standard for high frequency, high fidelity radio transmission: hence the term "FM radio" (although for many years the BBC called it "VHF radio", because commercial FM broadcasting uses a well-known part of the VHF band, the FM broadcast band).
FM receivers employ a special detector for FM signals and exhibit a phenomenon
called capture effect, where the tuner is able to clearly receive the stronger of two stations
being broadcast on the same frequency. Problematically however, frequency drift or lack
of selectivity may cause one station or signal to be suddenly overtaken by another on an
adjacent channel. Frequency drift typically constituted a problem on very old or
inexpensive receivers, while inadequate selectivity may plague any tuner.
An FM signal can also be used to carry a stereo signal: see FM stereo. However,
this is done by using multiplexing and demultiplexing before and after the FM process.
The rest of this article ignores the stereo multiplexing and demultiplexing process used in
"stereo FM", and concentrates on the FM modulation and demodulation process, which is
identical in stereo and mono processes.

Superheterodyne radio receiver:

In superheterodyne radio receivers, the incoming radio signals are intercepted by the antenna and converted into the corresponding currents and voltages. In the receiver, the incoming signal frequency is mixed with a locally generated frequency. The output of the mixer consists of the sum and difference of the two frequencies. The mixing of the two frequencies is termed heterodyning. Of the two resultant components of the mixer, the sum component is rejected and the difference component is selected. The value of the difference frequency component varies with the incoming frequency if the frequency of the local oscillator is kept constant. It is possible to keep the frequency of
the difference components constant by varying the frequency of the local oscillator
according to the incoming signal frequency. In this case, the process is called
Superheterodyne and the receiver is known as a superheterodyne radio receiver.

In the figure, the receiving antenna intercepts the radio signals and feeds the RF amplifier. The RF amplifier selects the desired signal frequency and amplifies its voltage. The RF amplifier is a small-signal voltage amplifier that operates in the RF range. This amplifier is tuned to the desired signal frequency by using capacitive tuning.
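A numeric sketch of this tracking (plain Python; the 455 kHz IF is the common AM-broadcast choice, used here for illustration):

```python
# Heterodyning: the mixer output contains the sum and difference of the
# signal and local-oscillator frequencies; the receiver keeps the
# difference (the IF) constant by tuning the LO alongside the signal.
IF = 455e3                            # intermediate frequency, Hz

for f_rf in (600e3, 1000e3, 1400e3):  # three stations across the band
    f_lo = f_rf + IF                  # LO tracks the tuned frequency
    f_sum, f_diff = f_lo + f_rf, f_lo - f_rf
    print(f"RF {f_rf/1e3:6.0f} kHz: LO {f_lo/1e3:6.0f} kHz -> "
          f"sum {f_sum/1e3:6.0f} kHz (rejected), diff {f_diff/1e3:3.0f} kHz (IF)")
```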
Frequency division multiplexing:
Frequency Division Multiplexing (FDM) allows engineers to utilize the extra space in each wire to carry more than one signal. By frequency-shifting some signals by a certain amount, engineers can shift the spectrum of that signal up into the unused band on that wire. In this way, multiple signals can be carried on the same wire, without having to divvy up time slices as in time-division multiplexing schemes. In analog transmission, signals are commonly multiplexed using frequency-division multiplexing (FDM), in which the carrier bandwidth is divided into subchannels of different frequency widths, each carrying a signal at the same time in parallel. Traditional terrestrial microwave and satellite links employ FDM. Although FDM in telecommunications is being reduced, several systems will continue to use this technique, namely: broadcast and cable TV, and commercial and cellular radio.


The standard telephony voice band (300-3400 Hz) is heterodyned and stacked on high frequency carriers by single sideband amplitude modulation. This is the most bandwidth-efficient scheme possible. Multichannel broadcast and ship/shore terminations use FDM. With this system, each channel of the composite tone package of the broadcast is assigned an audio frequency. By multiplexing teletype (TTY) circuits, up to 16 circuits may be carried in any one of the 3,000 hertz multiplexed channels described above. Don't confuse the two types of multiplexing. In the first case, 3,000 hertz audio channels have been combined. In the second case, a number of DC TTY circuits are converted to tone keying and combined in a single 3,000-hertz audio channel. Figure 3-31 illustrates a 16-channel multiplexing system.
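A minimal FDM sketch (assuming NumPy; all frequencies are illustrative, and DSB mixing is used for simplicity even though telephony FDM uses SSB):

```python
# Two baseband signals are shifted to different carrier frequencies and
# summed onto one "wire".
import numpy as np

fs = 100_000
t = np.arange(0, 0.1, 1 / fs)
voice1 = np.sin(2 * np.pi * 500 * t)            # baseband signal 1
voice2 = np.sin(2 * np.pi * 800 * t)            # baseband signal 2

# Shift each onto its own sub-channel.
composite = (voice1 * np.cos(2 * np.pi * 10_000 * t) +
             voice2 * np.cos(2 * np.pi * 20_000 * t))

# A receiver recovers channel 1 by mixing back down with the 10 kHz
# carrier and low-pass filtering; channel 2 is untouched at 20 kHz.
```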

UNIT II: TRANSMISSION MEDIUM
Transmission lines: types, equivalent circuit, losses, standing waves, impedance matching, bandwidth; radio propagation: ground wave and space wave propagation, critical frequency, maximum usable frequency, path loss, white Gaussian noise.
Transmission medium:
A transmission line is a material medium or structure that forms a path for
directing the transmission of energy from one place to another, such as electromagnetic
waves or acoustic waves, as well as electric power transmission.
However in communications and electronic engineering, the term has a more
specific meaning. In these fields, transmission lines are specialized cables and other
media designed to carry alternating current and electromagnetic waves of high frequency
(radio frequency or higher), high enough that its wave nature must be taken into account.
Transmission lines are used for purposes such as connecting radio transmitters and
receivers with their antennas, distributing cable television signals, and computer network
connections.
Ordinary electrical cables suffice to carry low frequency AC, such as mains
power, which reverses direction 50 to 60 times per second. However, they cannot be used
to carry currents in the radio frequency range or higher, which reverse direction millions
to billions of times per second, because the energy tends to radiate off the cable as radio
waves, causing power losses. Radio frequency currents also tend to reflect from
discontinuities in the cable such as connectors, and travel back down the cable toward the
source. These reflections act as bottlenecks, preventing the power from reaching the
destination. Transmission lines use specialized construction such as precise conductor
dimensions and spacing, and impedance matching, to carry electromagnetic signals with
minimal reflections and power losses.

Types of transmission line include ladder line, coaxial cable, dielectric slabs,
stripline, optical fiber, and waveguides. The higher the frequency, the shorter are the
waves in a transmission medium. Transmission lines must be used when the frequency is
high enough that the wavelength of the waves begins to approach the length of the cable
used. To conduct energy at frequencies above the radio range, such as millimeter waves,
infrared, and light, the waves become much smaller than the dimensions of the structures
used to guide them, so transmission line techniques become inadequate and the methods
of optics are used.
Standing waves:
In physics, a standing wave, also known as a stationary wave, is a wave that remains in a constant position.

Two opposing waves combine to form a standing wave. This phenomenon can
occur because the medium is moving in the opposite direction to the wave, or it can arise
in a stationary medium as a result of interference between two waves traveling in
opposite directions. In the second case, for waves of equal amplitude traveling in
opposing directions, there is on average no net propagation of energy.

As an example of the second type, a standing wave in a transmission line is a
wave in which the distribution of current, voltage, or field strength is formed by the
superposition of two waves of the same frequency propagating in opposite directions.
The effect is a series of nodes (zero displacement) and anti-nodes (maximum
displacement) at fixed points along the transmission line. Such a standing wave may be
formed when a wave is transmitted into one end of a transmission line and is reflected
from the other end by an impedance mismatch, i.e., discontinuity, such as an open circuit
or a short. The failure of the line to transfer power at the standing wave frequency will
usually result in attenuation distortion.
In practice, losses in the transmission line and other components mean that a perfect reflection and a pure standing wave are never achieved. The result is a partial standing wave, which is a superposition of a standing wave and a traveling wave. The degree to which the wave resembles either a pure standing wave or a pure traveling wave is measured by the standing wave ratio (SWR).
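A small sketch of the SWR calculation (plain Python; the 50-ohm line and the load values are assumed for illustration):

```python
# SWR from the reflection coefficient Gamma at the load.
def swr(z_load: complex, z0: float = 50.0) -> float:
    gamma = abs((z_load - z0) / (z_load + z0))
    return (1 + gamma) / (1 - gamma)

print(swr(50.0))          # matched load: SWR = 1 (pure traveling wave)
print(swr(100.0))         # mismatch: SWR = 2 (partial standing wave)
print(swr(1e12))          # ~open circuit: SWR very large (standing wave)
```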
Another example is standing waves in the open ocean formed by waves with the
same wave period moving in opposite directions. These may form near storm centres, or
from reflection of a swell at the shore, and are the source of microbaroms and
microseisms.

Impedance matching:
In electronics, impedance matching is the practice of designing the input
impedance of an electrical load or the output impedance of its corresponding signal
source to maximize the power transfer or minimize signal reflection from the load.
In the case of a complex source impedance Z_S and load impedance Z_L, maximum power transfer is obtained when

Z_L = Z_S*

where the asterisk indicates the complex conjugate of the variable. Minimum reflection is obtained when

Z_L = Z_S

The concept of impedance matching found its first applications in electrical engineering, but it is relevant in other applications in which a form of energy, not necessarily electrical, is transferred between a source and a load.
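A numeric check of the conjugate-match condition (plain Python; the source voltage and impedance are assumed values):

```python
# Power delivered to the load peaks when ZL = ZS*.
Vs = 10.0                      # source voltage amplitude (assumed)
Zs = complex(50, 25)           # source impedance ZS = 50 + j25 ohms (assumed)

def load_power(Zl: complex) -> float:
    I = Vs / (Zs + Zl)                     # series circuit current
    return 0.5 * abs(I) ** 2 * Zl.real     # average power in the load

print(load_power(Zs.conjugate()))          # ZL = 50 - j25: maximum, 0.25 W
print(load_power(Zs))                      # ZL = ZS: less power, but Gamma = 0
print(load_power(complex(75, 0)))          # some other load: less power still
```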
Bandwidth:
Bandwidth is the difference between the upper and lower frequencies in a
continuous set of frequencies. It is typically measured in hertz, and may sometimes refer
to passband bandwidth, sometimes to baseband bandwidth, depending on context.
Passband bandwidth is the difference between the upper and lower cutoff frequencies of,
for example, a bandpass filter, a communication channel, or a signal spectrum. In case of
a low-pass filter or baseband signal, the bandwidth is equal to its upper cutoff frequency.
Bandwidth in hertz is a central concept in many fields, including electronics,
information theory, digital communications, radio communications, signal processing,
and spectroscopy.
A key characteristic of bandwidth is that any band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum. For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband (as in a POTS telephone line) or modulated to some higher frequency.

Factors involved in the propagation of radio waves:

1. Ground wave or surface wave
2. Space wave or tropospheric wave
3. Sky wave or ionospheric wave
Ground wave which is also called surface wave exists when the transmitting and
receiving antennas are close to the earth and are vertically polarised. This type of wave
propagation is useful at broadcast and low frequencies. The broadcast signals received
during day-time are due to ground waves. It is useful for communication at VLF, LF and
MF.
Space wave is also called tropospheric wave. Here, the wave propagates directly from
the transmitter to the receiver in the tropospheric region. The portion of the atmosphere
above the earth and within 16 km is called troposphere. This is useful above the
frequency of 30 MHz. FM reception is normally by space wave propagation.
Skywave is the propagation of radio waves bent (refracted) back to the Earth's surface
by the ionosphere. As a result of skywave propagation, a night-time broadcast signal from
a distant AM broadcasting or shortwave radio station (or rarely, a TV station) can
sometimes be heard as clearly as local stations. Most long-distance HF radio
communication (between 3 and 30 MHz) is a result of skywave propagation.
Ground wave, reflection of radio waves by the surface of the earth:
Ground wave propagation is a method of radio frequency propagation that uses the area between the surface of the earth and the ionosphere for transmission. The ground wave can propagate a considerable distance over the earth's surface, particularly in the low frequency and medium frequency portions of the radio spectrum. Ground wave radio propagation is used to provide relatively local radio communications coverage.

Ground wave radio signal propagation is ideal for relatively short distance
propagation on these frequencies during the daytime. Sky-wave ionospheric propagation
is not possible during the day because of the attenuation of the signals on these
frequencies caused by the D region in the ionosphere. In view of this, lower frequency
radio communications stations need to rely on the ground-wave propagation to achieve
their coverage.
Typically, what is referred to as a ground wave radio signal is made up of a
number of constituent waves. If the antennas are in the line of sight then there will be a
direct wave as well as a reflected signal. As the names suggest the direct signal is one that
travels directly between the two antennas and is not affected by the locality. There will
also be a reflected signal as the transmission will be reflected by a number of objects
including the earth's surface and any hills, or large buildings that may be present. In
addition to this there is a surface wave. This tends to follow the curvature of the Earth
and enables coverage beyond the horizon. It is the sum of all these components that is
known as the ground wave. Beyond the horizon the direct and reflected waves are
blocked by the curvature of the Earth, and the signal is purely made up of the diffracted
surface wave. It is for this reason that surface wave is commonly called ground wave
propagation.


As a surface wave passes over the ground, the wave induces a voltage in the Earth. The induced voltage takes energy away from the surface wave, thereby weakening, or attenuating, the wave as it moves away from the transmitting antenna. To reduce the attenuation, the amount of induced voltage must be reduced. This is done by using vertically polarized waves that minimize the extent to which the electric field of the wave is in contact with the Earth. When a surface wave is horizontally polarized, the electric field of the wave is parallel with the surface of the Earth and, therefore, is constantly in contact with it. The wave is then completely attenuated within a short distance from the transmitting site. On the other hand, when the surface wave is vertically polarized, the electric field is vertical to the Earth and merely dips into and out of the Earth's surface. For this reason, vertical polarization is vastly superior to horizontal polarization for surface wave propagation. The attenuation that a surface wave undergoes because of induced voltage also depends on the electrical properties of the terrain over which the wave travels. The best type of surface is one that has good electrical conductivity. The better the conductivity, the less the attenuation. Table 2-2 gives the relative conductivity of various surfaces of the Earth.

Another major factor in the attenuation of surface waves is frequency. Recall from earlier discussions on wavelength that the higher the frequency of a radio wave, the shorter its wavelength will be. These high frequencies, with their shorter wavelengths, are not normally diffracted but are absorbed by the Earth at points relatively close to the transmitting site. You can assume, therefore, that as the frequency of a surface wave is increased, the more rapidly the surface wave will be absorbed, or attenuated, by the Earth. Because of this loss by attenuation, the surface wave is impractical for long-distance transmissions at frequencies above 2 megahertz. On the other hand, when the frequency of a surface wave is low enough to have a very long wavelength, the Earth appears to be very small, and diffraction is sufficient for propagation well beyond the horizon. In fact, by lowering the transmitting frequency into the very low frequency (vlf) range and using very high-powered transmitters, the surface wave can be propagated great distances.
The Navy's extremely high-powered vlf transmitters are actually capable of
transmitting surface wave signals around the Earth and can provide coverage to naval
units operating anywhere at sea.
Space wave propagation, considerations in space wave propagation:
The space wave follows two distinct paths from the transmitting antenna to the receiving antenna: one through the air directly to the receiving antenna, the other reflected from the ground to the receiving antenna. This is illustrated in figure 2-13. The primary path of the space wave is directly from the transmitting antenna to the receiving antenna.
So, the receiving antenna must be located within the radio horizon of the
transmitting antenna. Because space waves are refracted slightly, even when propagated
through the troposphere, the radio horizon is actually about one-third farther than the
line-of-sight or natural horizon.


Although space waves suffer little ground attenuation, they nevertheless are
susceptible to fading. This is because space waves actually follow two paths of different
lengths (direct path and ground reflected path) to the receiving site and, therefore, may
arrive in or out of phase. If these two component waves are received in phase, the result
is a reinforced or stronger signal. Likewise, if they are received out of phase, they tend to
cancel one another, which results in a weak or fading signal.

Atmospheric effects in space wave propagation:
When propagation is accomplished via multihop refraction, rf energy is lost each time the radio wave is reflected from the Earth's surface. The amount of energy lost depends on the frequency of the wave, the angle of incidence, ground irregularities, and the electrical conductivity of the point of reflection.
Free-space loss: Normally, the major loss of energy is because of the spreading out of the wavefront as it travels away from the transmitter. As the distance increases, the area of the wavefront spreads out, much like the beam of a flashlight. This means the amount of energy contained within any unit of area on the wavefront will decrease as distance increases. By the time the energy arrives at the receiving antenna, the wavefront is so spread out that the receiving antenna extends into only a very small fraction of the wavefront. This is illustrated in figure 2-22.
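A sketch of this spreading loss (plain Python), using the standard free-space path loss formula between isotropic antennas:

```python
# FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.45
import math

def fspl_db(d_km: float, f_mhz: float) -> float:
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.45

# Doubling the distance spreads the same wavefront over four times the
# area, so roughly 6 dB more loss each time the distance doubles.
for d in (1, 2, 4, 8):
    print(f"{d} km at 100 MHz: {fspl_db(d, 100):.1f} dB")
```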

ELECTROMAGNETIC INTERFERENCE (EMI)

The transmission losses just discussed are not the only factors that interfere with communications. An additional factor that can interfere with radio communications is the presence of electromagnetic interference (EMI). This interference can result in annoying or impossible operating conditions. Sources of EMI are both man-made and natural.
Man-made interference: Man-made interference may come from several sources. Some of these sources, such as oscillators, communications transmitters, and radio transmitters, may be specifically designed to generate radio frequency energy. Some electrical devices also generate radio frequency energy, although they are not specifically designed for this purpose. Examples are ignition systems, generators, motors, switches, relays, and voltage regulators. The intensity of man-made interference may vary throughout the day and drop off to a low level at night when many of these sources are not being used. Man-made interference may be a critical limiting factor at radio receiving sites located near industrial areas.
Natural interference: Natural interference refers to the static that you often hear when listening to a radio. This interference is generated by natural phenomena, such as thunderstorms, snowstorms, cosmic sources, and the sun. The energy released by these sources is transmitted to the receiving site in roughly the same manner as radio waves. As a result, when ionospheric conditions are favorable for the long distance propagation of radio waves, they are likewise favorable for the propagation of natural interference. Natural interference is very erratic, particularly in the HF band, but generally will decrease as the operating frequency is increased and wider bandwidths are used. There is little natural interference above 30 megahertz.
Control of EMI: Electromagnetic interference can be reduced or eliminated by using various suppression techniques. The amount of EMI that is produced by a radio transmitter can be controlled by cutting transmitting antennas to the correct frequency, limiting bandwidth, and using electronic filtering networks and metallic shielding. Radiated EMI during transmission can be controlled by the physical separation of the transmitting and receiving antennas, the use of directional antennas, and limiting antenna bandwidth.
Ionosphere and its effect on radio waves:
The ionosphere extends upward from about 31.1 miles (50 km) to a height of
about 250 miles (402 km). It contains four cloud-like layers of electrically charged ions,
which enable radio waves to be propagated to great distances around the Earth. This is
the most important region of the atmosphere for long distance point-to-point
communications. This region will be discussed in detail a little later in this chapter.


Many factors affect a radio wave in its path between the transmitting and receiving sites. The factor that has the greatest adverse effect on radio waves is ABSORPTION. Absorption results in the loss of energy of a radio wave and has a pronounced effect on both the strength of received signals and the ability to communicate over long distances. You learned earlier in the section on ground waves that surface waves suffer most of their absorption losses because of ground-induced voltage. Sky waves, on the other hand, suffer most of their absorption losses because of conditions in the ionosphere. Note that some absorption of sky waves may also occur at lower atmospheric levels because of the presence of water and water vapor. However, this becomes important only at frequencies above 10,000 megahertz.
Most ionospheric absorption occurs in the lower regions of the ionosphere, where ionization density is greatest. As a radio wave passes into the ionosphere, it loses some of its energy to the free electrons and ions. If these high-energy free electrons and ions do not collide with gas molecules of low energy, most of the energy lost by the radio wave is reconverted into electromagnetic energy, and the wave continues to be propagated with little change in intensity. However, if the high-energy free electrons and ions do collide with other particles, much of this energy is lost, resulting in absorption of the energy from the wave. Since absorption of energy depends on collision of the particles, the greater the density of the ionized layer, the greater the probability of collisions; therefore, the greater the absorption. The highly dense D and E layers provide the greatest absorption of radio waves. Because the amount of absorption of the sky wave depends on the density of the ionosphere, which varies with seasonal and daily conditions, it is impossible to express a fixed relationship between distance and signal strength for ionospheric propagation. Under certain conditions, the absorption of energy is so great that communicating over any distance beyond the line of sight is difficult.
Types of fading:
The most troublesome and frustrating problem in receiving radio signals is variations in signal strength, most commonly known as FADING. There are several conditions that can produce fading. When a radio wave is refracted by the ionosphere or reflected from the Earth's surface, random changes in the polarization of the wave may occur. Vertically and horizontally mounted receiving antennas are designed to receive vertically and horizontally polarized waves, respectively. Therefore, changes in polarization cause changes in the received signal level because of the inability of the antenna to receive polarization changes. Fading also results from absorption of the rf energy in the ionosphere. Absorption fading occurs for a longer period than other types of fading, since absorption takes place slowly. Usually, however, fading on ionospheric circuits is mainly a result of multipath propagation.
Multipath fading: MULTIPATH is simply a term used to describe the multiple paths a radio wave may follow between transmitter and receiver. Such propagation paths include the ground wave, ionospheric refraction, reradiation by the ionospheric layers, reflection from the Earth's surface or from more than one ionospheric layer, etc. Figure 2-21 shows a few of the paths that a signal can travel between two sites in a typical circuit. One path, XYZ, is the basic ground wave. Another path, XEA, refracts the wave at the E layer and passes it on to the receiver at A. Still another path, XFZFA, results from a greater angle of incidence and two refractions from the F layer. At point Z, the received signal is a combination of the ground wave and the sky wave. These two signals, having traveled different paths, arrive at point Z at different times. Thus, the arriving waves may or may not be in phase with each other. Radio waves that are received in phase reinforce each other and produce a stronger signal at the receiving site. Conversely, those that are received out of phase produce a weak or fading signal. Small alterations in the transmission path may change the phase relationship of the two signals, causing periodic fading. This condition occurs at point A. At this point, the double-hop F layer signal may be in or out of phase with the signal arriving from the E layer.

Multipath fading may be minimized by practices called SPACE DIVERSITY and FREQUENCY DIVERSITY. In space diversity, two or more receiving antennas are spaced some distance apart. Fading does not occur simultaneously at both antennas; therefore, enough output is almost always available from one of the antennas to provide a useful signal. In frequency diversity, two transmitters and two receivers are used, each pair tuned to a different frequency, with the same information being transmitted simultaneously over both frequencies. One of the two receivers will almost always provide a useful signal.

Selective Fading:
Fading resulting from multipath propagation is variable with frequency since each
frequency arrives at the receiving point via a different radio path. When a wide band of
frequencies is transmitted simultaneously, each frequency will vary in the amount of
fading. This variation is called SELECTIVE FADING. When selective fading occurs, all
frequencies of the transmitted signal do not retain their original phases and relative
amplitudes. This fading causes severe distortion of the signal and limits the total signal
transmitted.
Maximum Usable Frequency:
As we discussed earlier, the higher the frequency of a radio wave, the lower the
rate of refraction by an ionized layer. Therefore, for a given angle of incidence and time
of day, there is a maximum frequency that can be used for communications between two
given locations. This frequency is known as the MAXIMUM USABLE FREQUENCY
(muf). Waves at frequencies above the muf are normally refracted so slowly that they
return to Earth beyond the desired location, or pass on through the ionosphere and are
lost. You should understand, however, that use of an established muf certainly does not
guarantee successful communications between a transmitting site and a receiving site.
Variations in the ionosphere may occur at any time and consequently raise or lower the
predetermined muf. This is particularly true for radio waves being refracted by the highly
variable F2 layer. The muf is highest around noon when ultraviolet light waves from the
sun are the most intense. It then drops rather sharply as recombination begins to take
place.
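The relation between the critical frequency and the muf at oblique incidence is often approximated by the secant law, muf ≈ fc / cos θ, with θ the angle of incidence at the layer. A sketch (plain Python; the 7 MHz critical frequency is an assumed value, and real muf predictions must also account for the ionospheric variability the text cautions about):

```python
# Secant-law approximation: muf = f_critical / cos(theta).
import math

f_critical = 7.0                       # critical frequency in MHz (assumed)
for theta_deg in (0, 30, 60, 75):
    muf = f_critical / math.cos(math.radians(theta_deg))
    print(f"incidence {theta_deg:2d} deg: muf ~= {muf:5.1f} MHz")
# Vertical incidence (0 deg) gives muf = f_critical; oblique paths
# support proportionally higher frequencies.
```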

Skip Distance:
In figure 2-19, note the relationship between the sky wave skip distance, the skip zone, and the ground wave coverage. The SKIP DISTANCE is the distance from the transmitter to the point where the sky wave is first returned to Earth. The size of the skip distance depends on the frequency of the wave, the angle of incidence, and the degree of ionization present.

Propagation paths:
The path that a refracted wave follows to the receiver depends on the angle at which the wave strikes the ionosphere. You should remember, however, that the rf energy radiated by a transmitting antenna spreads out with distance. The energy therefore strikes the ionosphere at many different angles rather than a single angle. After the rf energy of a given frequency enters an ionospheric region, the paths that this energy may follow are many. It may reach the receiving antenna over a path involving more than one layer, by multiple hops between the ionosphere and Earth, or by any combination of these paths. Figure 2-20 shows how radio waves may reach a receiver via several paths through one layer.


When the angle is relatively low with respect to the horizon (ray 1), there is only slight
penetration of the layer and the propagation path is long. When the angle of incidence is
increased (rays 2 and 3), the rays penetrate deeper into the layer but the range of these
rays decreases. When a certain angle is reached (ray 3), the penetration of the layer and
rate of refraction are such that the ray is first returned to Earth at a minimal distance
from the transmitter. Notice, however, that ray 3 still manages to reach the receiving site
on its second refraction (called a hop) from the ionospheric layer. As the angle is
increased still more (rays 4 and 5), the rf energy penetrates the central area of maximum
ionization of the layer. These rays are refracted rather slowly and are eventually returned
to Earth at great distances. As the angle approaches vertical incidence (ray 6), the ray is
not returned at all, but passes on through the layer.
The CRITICAL FREQUENCY is the maximum frequency at which a radio wave can be transmitted vertically and still be refracted back to Earth.


Diversity reception:
Using antenna arrays for diversity reception is one of the most straightforward
uses of antenna arrays. Because the power level of a received signal can vary
significantly with small changes in distance, a diversity array simply uses a set of
antennas and combines the signals to obtain the maximum signal. Consider the example
of Figure 1. Someone is talking on their cell phone, and a hypothetical (though
reasonable) power is shown in the areas around the user.


In the case shown in Figure 2, antenna number 1 has the largest signal, and can therefore be used. In this manner, the effects of fading are greatly reduced, and the probability of having a signal too small to work with decreases as the number of antennas increases. Finally, diversity reception can occur for two antennas that are not separated, but receive orthogonal polarizations. If one antenna receives vertically polarized waves, a second antenna can be placed near the first that receives horizontally polarized waves (which, in a fading environment, are not strongly correlated). In this manner, diversity can be achieved.
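A sketch of selection diversity (assuming NumPy; the Rayleigh fading amplitudes and the outage threshold are modelling assumptions):

```python
# With several antennas fading independently, the receiver simply uses
# whichever branch is strongest at each instant.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_antennas = 100_000, 4
amplitudes = rng.rayleigh(scale=1.0, size=(n_samples, n_antennas))

single = amplitudes[:, 0]                  # one antenna alone
selected = amplitudes.max(axis=1)          # pick the strongest branch

threshold = 0.3                            # "too weak to use" level (assumed)
print("outage, 1 antenna :", np.mean(single < threshold))
print("outage, 4 antennas:", np.mean(selected < threshold))
# The outage probability drops roughly as p**n for n independent branches.
```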


UNIT III: DIGITAL COMMUNICATION
Pulse code modulation, time division multiplexing, digital T-carrier system. Digital radio system. Digital modulation: frequency and phase shift keying; modulator and demodulator, bit error rate calculation.
Digital communication:
Digital communication is the transmission of digital data through a digital platform that has the ability to combine text, audio, graphics, video and data. Digital communication enables data to be transmitted in an efficient manner through the use of digitally encoded information sent through data signals. These data signals are easily compressed and, as such, can be transmitted with accuracy and speed.
Unlike analog communication, where the continuity of a varying signal cannot be broken, a digital transmission can be broken down into packets as discrete messages. Transmitting data in discrete messages not only facilitates error detection and correction but also enables greater signal processing capability. Digital communication has, in large part, replaced analog communication as the preferred form of transmitting information through computer and mobile technologies.
Pulse code modulation:
Pulse-code modulation (PCM) is a method used to digitally represent sampled
analog signals. It is the standard form of digital audio in computers, Compact Discs,
digital telephony and other digital audio applications. In a PCM stream, the amplitude of
the analog signal is sampled regularly at uniform intervals, and each sample is quantized
to the nearest value within a range of digital steps.
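A minimal PCM sketch (assuming NumPy; uniform quantization is shown for simplicity, whereas telephony PCM actually uses companded 8-bit codes):

```python
# Sample at uniform intervals, then quantize each sample to the nearest
# of 2**n_bits levels.
import numpy as np

fs, n_bits = 8000, 8                        # telephony-style rate and depth
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)             # analog signal in [-1, 1]

levels = 2 ** n_bits
codes = np.round((x + 1) / 2 * (levels - 1)).astype(np.uint8)  # 0..255
reconstructed = codes / (levels - 1) * 2 - 1

print("max quantization error:", np.max(np.abs(reconstructed - x)))
# Error is bounded by half a step: (2 / (levels - 1)) / 2 ~= 0.004
```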


Advantages of Pulse Code Modulation:
- Pulse code modulation has low noise addition, and data loss is also very low.
- The exact transmitted signal can be regenerated at the receiver; this is called repeatability. The signal can also be retransmitted without distortion loss.
- Pulse code modulation is used in music playback CDs and in DVDs for data storage, where the sampling rate is somewhat higher.
- Pulse code modulation can be used for storing data.
- PCM can also encode the data.
- Multiplexing of signals can be done using pulse code modulation. Multiplexing means combining the different signals and transmitting them at the same time.
- Pulse code modulation permits the use of pulse regeneration.

Disadvantages of Pulse Code Modulation:
- Specialized circuitry is required for transmitting and for quantizing the samples at the same quantized levels. Encoding with pulse code modulation needs complex, special circuitry.
- Pulse code modulation receivers are costly compared to receivers for other modulation schemes.
- Developing pulse code modulation is a bit complicated, and checking the transmission quality is also difficult and takes more time.
- Large bandwidth is required for pulse code modulation compared to the bandwidth used by normal analog signals to transmit a message.
- Channel bandwidth should be more for digital encoding.
- PCM systems are complicated compared to analog modulation methods and other systems.
- Decoding also needs special equipment, and it is too complex.

Applications of Pulse Code Modulation (PCM):
- Pulse code modulation is used in telecommunication systems, air traffic control systems, etc.
- Pulse code modulation is used in compressing data, which is why it is used for storing data on optical disks like DVDs and CDs. PCM is even used in database management systems.
- Pulse code modulation is used in mobile phones, normal telephones, etc.
- Remote-controlled cars, planes and trains use pulse code modulation.

Time division multiplexing:


Time Division Multiplexing (TDM) is a communications process that transmits
C.THIAGARAJAN-AP ECE DEPT.

40

DEPT: EEE
EE T56 COMMUNICATION
ENGINEERING
two or more streaming digital signals over a common channel. In TDM, incoming signals
are divided into equal fixed-length time slots. After multiplexing, these signals are
transmitted over a shared medium and reassembled into their original format after demultiplexing. Time slot selection is directly proportional to overall system efficiency.
Time-division multiplexing is used primarily for digital signals, but may be
applied in analog multiplexing in which two or more signals or bit streams are transferred
appearing simultaneously as sub-channels in one communication channel, but are
physically taking turns on the channel. The time domain is divided into several recurrent
time slots of fixed length, one for each sub-channel. A sample byte or data block of subchannel 1 is transmitted during time slot 1, sub-channel 2 during time slot 2, etc. One
TDM frame consists of one time slot per sub-channel plus a synchronization channel and
sometimes error correction channel before the synchronization. After the last subchannel, error correction, and synchronization, the cycle starts all over again with a new
frame, starting with the second sample, byte or data block from sub-channel 1.
Each voice time slot in the TDM frame is called a channel. In European systems,
standard TDM frames contain 30 digital voice channels (E1), and in American systems
(T1), they contain 24 channels. Both standards also contain extra bits (or bit time slots)
for signalling and synchronization bits.
Multiplexing more than 24 or 30 digital voice channels is called higher order
multiplexing. Higher order multiplexing is accomplished by multiplexing the standard
TDM frames. For example, a European 120 channel TDM frame is formed by
multiplexing four standard 30 channel TDM frames. At each higher order multiplex, four TDM frames from the immediate lower order are combined, creating multiplexes with a bandwidth of n*64 kbit/s, where n = 120, 480, 1920.
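A minimal sketch of the byte-interleaving idea behind TDM, assuming three sub-channels and a single synchronization byte per frame; the frame layout here is illustrative only, not that of any particular standard:

    def tdm_multiplex(subchannels, sync=0x7E):
        """Interleave one byte per sub-channel into each frame,
        prefixing every frame with a synchronization byte."""
        frames = []
        for samples in zip(*subchannels):   # one sample from each sub-channel
            frames.append([sync] + list(samples))
        return frames

    # Three sub-channels, two samples each -> two TDM frames.
    a, b, c = [1, 2], [10, 20], [100, 200]
    print(tdm_multiplex([a, b, c]))
    # [[126, 1, 10, 100], [126, 2, 20, 200]]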

Digital T-carrier system:

A dedicated phone connection supporting data rates of 1.544 Mbit/s. A T-1 line actually consists of 24 individual channels, each of which supports 64 kbit/s. Each 64 kbit/s channel can be configured to carry voice or data traffic.
Most telephone companies allow you to buy just some of these individual channels,
known as fractional T-1 access. T-1 lines are a popular leased line option for businesses
connecting to the Internet and for Internet Service Providers (ISPs) connecting to the
Internet backbone. The Internet backbone itself consists of faster T-3 connections. T-1
lines are sometimes referred to as DS1 lines.

The most common legacy of this system is the line rate speeds. "T1" now means
any data circuit that runs at the original 1.544 Mbit/s line rate. Originally the T1 format
carried 24 pulse-code modulated, time-division multiplexed speech signals each encoded
in 64 kbit/s streams, leaving 8 kbit/s of framing information which facilitates the
synchronization and demultiplexing at the receiver. T2 and T3 circuit channels carry
multiple T1 channels multiplexed, resulting in transmission rates of 6.312 and 44.736
Mbit/s, respectively.
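The 1.544 Mbit/s figure can be checked directly from the frame structure described above: 24 channels of 8-bit samples, plus one framing bit per frame, at 8,000 frames per second:

    channels, bits_per_sample, frames_per_second = 24, 8, 8000
    framing_bits = 1                                            # one framing bit per frame

    payload = channels * bits_per_sample * frames_per_second    # 24 x 64 kbit/s
    framing = framing_bits * frames_per_second                  # 8 kbit/s
    print(payload + framing)   # 1544000 bits per second = 1.544 Mbit/s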

Supposedly, the 1.544 Mbit/s rate was chosen because tests done by AT&T Long
Lines in Chicago were conducted underground. To accommodate loading coils, cable
vault manholes were physically 2,000 meters (6,600 ft) apart, and so the optimum bit rate was chosen empirically: the capacity was increased until the failure rate was unacceptable, then reduced to leave a margin. Companding allowed acceptable audio
performance with only seven bits per PCM sample in this original T1/D1 system. The
later D3 and D4 channel banks had an extended frame format, allowing eight bits per
sample, reduced to seven every sixth sample or frame when one bit was "robbed" for
signaling the state of the channel. The standard does not allow an all zero sample which
would produce a long string of binary zeros and cause the repeaters to lose bit sync.
However, when carrying data (Switched 56) there could be long strings of zeroes, so one
bit per sample is set to "1" (jam bit 7) leaving 7 bits x 8,000 frames per second for data.

Digital radio system:


A high-quality digital FM radio system can utilize the subcarrier band to carry digital audio signals. Such a hybrid FM system transmits digital signals in the subcarrier band, synchronized to the analog FM signals. The receiver processes both the analog and digital signals, and a high-quality FM signal results. The receiver may switch automatically between the traditional analog signal and the higher-quality digital signal to present the best possible audio to the user.
Digital radio broadcasting standards may provide terrestrial or satellite radio service.
Digital radio broadcasting systems are typically designed for handheld mobile devices, just like mobile-TV systems and as opposed to other digital TV systems, which typically require a fixed directional antenna. Some digital radio systems provide in-band on-channel (IBOC) solutions that may coexist with or simulcast with analog AM or FM transmissions, while others are designed for designated radio frequency bands.


The latter allows one wideband radio signal to carry a multiplex consisting of
several radio-channels of variable bitrate as well as data services and other forms of
media. Some digital broadcasting systems allow single-frequency network (SFN), where
all terrestrial transmitters in a region sending the same multiplex of radio programs may
use the same frequency channel without self-interference problems, further improving the
system spectral efficiency.
While digital broadcasting offers many potential benefits, its introduction has
been hindered by a lack of global agreement on standards and many disadvantages. The
DAB Eureka 147 standard for digital radio is coordinated by the World DMB Forum.
This standard of digital radio technology was defined in the late 1980s, and is now being
introduced in some European countries. Commercial DAB receivers began to be sold in
1999 and, by 2006, 500 million people were in the coverage area of DAB broadcasts,
although by this time sales had only taken off in the UK and Denmark. In 2006 there were approximately 1,000 DAB stations in operation. [1] There have been criticisms of the Eureka 147 standard, and so a new 'DAB+' standard has been introduced. The DRM standard has been used for several years to broadcast digitally on frequencies below 30 MHz (shortwave, mediumwave and longwave). There is now also the extended standard DRM+, which makes it possible to broadcast on frequencies above 30 MHz and thus to digitize transmission in the FM band. Successful tests of DRM+ were made in several countries during 2010-2012, including Brazil, Germany, France, India, Sri Lanka, the UK, Slovakia and Italy (incl. the Vatican), and DRM+ was to be tested in Sweden in 2012.
DRM+ is regarded as a more transparent and less costly standard than DAB+, and thus a better choice for local radio, whether commercial or community broadcasters. Although DAB+ has been introduced in Australia, the government concluded in 2011 that DRM and DRM+ could be used to supplement DAB+ services in some local and regional areas. All digital radio broadcast systems share several disadvantages that do not exist for the analogue-to-digital TV changeover: roughly twenty times more power consumption, a digital cliff effect for mobile use, very slow channel changes (especially when switching to a different DAB multiplex frequency), high transmission cost, sometimes poorer quality than FM and even AM due to low bit rates (64 kbit/s mono rather than 256 kbit/s stereo), higher compression that is more distorting for hearing aid users, usually poor user interfaces and audio quality, and not enough fill-in stations for portable and mobile coverage (as with UK FM in the 1950s).
The multiplex and SFN concepts are advantageous to state broadcasters and large pan-national multi-channel companies, and worse for local, community and most regional stations. In contrast, almost all aspects of digital TV versus analogue TV are positive, with almost no negative effects: existing TVs can be used with a set-top box. Digital radio requires replacement of all radios, though an awkward DAB receiver with FM output can be used with existing FM car radios.

Frequency and phase shift keying:


Frequency-shift keying (FSK) is a frequency modulation scheme in which digital
information is transmitted through discrete frequency changes of a carrier wave.[1] The
simplest FSK is binary FSK (BFSK). BFSK uses a pair of discrete frequencies to transmit
binary (0s and 1s) information.[2] With this scheme, the "1" is called the mark frequency
and the "0" is called the space frequency.


Audio frequency-shift keying (AFSK) is a modulation technique by which digital


data is represented by changes in the frequency (pitch) of an audio tone, yielding an
encoded signal suitable for transmission via radio or telephone. Normally, the transmitted
audio alternates between two tones: one, the "mark", represents a binary one; the other,
the "space", represents a binary zero.
AFSK differs from regular frequency-shift keying in performing the modulation
at baseband frequencies. In radio applications, the AFSK-modulated signal is normally used to modulate an RF carrier (using a conventional technique, such as AM or
FM) for transmission.
AFSK is not, in general, used for high-speed data communications, since it is far less efficient in both power and bandwidth than most other modulation modes. In addition to
its simplicity, however, AFSK has the advantage that encoded signals will pass through
AC-coupled links, including most equipment originally designed to carry music or
speech.
Phase-shift keying:

Phase-shift keying (PSK) is a digital modulation scheme that conveys data by


changing, or modulating, the phase of a reference signal (the carrier wave).
Any digital modulation scheme uses a finite number of distinct signals to
represent digital data. PSK uses a finite number of phases, each assigned a unique pattern
of binary digits. Usually, each phase encodes an equal number of bits. Each pattern of bits
forms the symbol that is represented by the particular phase. The demodulator, which is
designed specifically for the symbol-set used by the modulator, determines the phase of
the received signal and maps it back to the symbol it represents, thus recovering the
original data. This requires the receiver to be able to compare the phase of the received
signal to a reference signal; such a system is termed coherent (and referred to as CPSK).
Alternatively, instead of operating with respect to a constant reference wave, the
broadcast can operate with respect to itself. Changes in phase of a single broadcast
waveform can be considered the significant items. In this system, the demodulator
determines the changes in the phase of the received signal rather than the phase (relative
to a reference wave) itself. Since this scheme depends on the difference between
successive phases, it is termed differential phase-shift keying (DPSK). DPSK can be
significantly simpler to implement than ordinary PSK since there is no need for the
demodulator to have a copy of the reference signal to determine the exact phase of the
received signal (it is a non-coherent scheme). In exchange, it produces more erroneous
demodulation.
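The differential principle can be sketched in a few lines: the transmitter encodes each data bit as a change (or no change) of phase, and the receiver recovers the data by comparing successive symbols, needing no absolute phase reference. A minimal binary DPSK sketch:

    def dpsk_encode(bits):
        """Differentially encode: a '1' toggles the transmitted phase (0 or 1),
        a '0' leaves it unchanged. The leading reference symbol is 0."""
        phase, symbols = 0, [0]
        for b in bits:
            phase ^= b
            symbols.append(phase)
        return symbols

    def dpsk_decode(symbols):
        """Recover data by comparing each symbol with the previous one."""
        return [symbols[i] ^ symbols[i - 1] for i in range(1, len(symbols))]

    data = [1, 0, 1, 1, 0]
    assert dpsk_decode(dpsk_encode(data)) == data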
The wireless LAN standard, IEEE 802.11b-1999,[1][2] uses a variety of different
PSKs depending on the data-rate required. At the basic-rate of 1 Mbit/s, it uses DBPSK
(differential BPSK). To provide the extended-rate of 2 Mbit/s, DQPSK is used. In
reaching 5.5 Mbit/s and the full-rate of 11 Mbit/s, QPSK is employed, but has to be
coupled with complementary code keying. The higher-speed wireless LAN standard,
IEEE 802.11g-2003[1][3] has eight data rates: 6, 9, 12, 18, 24, 36, 48 and 54 Mbit/s. The 6
and 9 Mbit/s modes use OFDM modulation where each sub-carrier is BPSK modulated.
The 12 and 18 Mbit/s modes use OFDM with QPSK. The fastest four modes use OFDM
with forms of quadrature amplitude modulation.
Because of its simplicity BPSK is appropriate for low-cost passive transmitters,
and is used in RFID standards such as ISO/IEC 14443 which has been adopted for
biometric passports, credit cards such as American Express's ExpressPay, and many other
applications.[4]
Bluetooth 2 will use π/4-DQPSK at its lower rate (2 Mbit/s) and 8-DPSK at its higher rate (3 Mbit/s) when the link between the two devices is sufficiently robust.
Bluetooth 1 modulates with Gaussian minimum-shift keying, a binary scheme, so either
modulation choice in version 2 will yield a higher data-rate. A similar technology, IEEE
802.15.4 (the wireless standard used by ZigBee) also relies on PSK. IEEE 802.15.4
allows the use of two frequency bands: 868/915 MHz using BPSK and 2.4 GHz using
OQPSK.
Notably absent from these various schemes is 8-PSK. This is because its error-rate
performance is close to that of 16-QAM (it is only about 0.5 dB better), but its data rate
is only three-quarters that of 16-QAM. Thus 8-PSK is often omitted from standards and,
as seen above, schemes tend to 'jump' from QPSK to 16-QAM (8-QAM is possible but
difficult to implement).
Bit error rate:
In digital transmission, the number of bit errors is the number of received bits of a
data stream over a communication channel that have been altered due to noise,
interference, distortion or bit synchronization errors. The bit error rate or bit error ratio
(BER) is the number of bit errors divided by the total number of transferred bits during a
studied time interval. BER is a unitless performance measure, often expressed as a
percentage.
The bit error probability pe is the expectation value of the BER. The BER can be
considered as an approximate estimate of the bit error probability. This estimate is
accurate for a long time interval and a high number of bit errors. The BER may be
analyzed using stochastic computer simulations. If a simple transmission channel model
and data source model is assumed, the BER may also be calculated analytically. An
example of such a data source model is the Bernoulli source.
Examples of such simple channel models are:

Binary symmetric channel (used in analysis of decoding error probability in case of non-bursty bit errors on the transmission channel)

Additive white Gaussian noise (AWGN) channel without fading.

A worst case scenario is a completely random channel, where noise totally dominates
over the useful signal. This results in a transmission BER of 50% (provided that a Bernoulli binary data source and a binary symmetric channel are assumed, see below).
(Figure: BER comparison between BPSK and differentially encoded BPSK with Gray coding, operating in white noise.)
In a noisy channel, the BER is often expressed as a function of the normalized
carrier-to-noise ratio measure denoted Eb/N0, (energy per bit to noise power spectral
density ratio), or Es/N0 (energy per modulation symbol to noise spectral density).
For example, in the case of QPSK modulation and an AWGN channel, the BER as a function of Eb/N0 is given by:

BER = Q(sqrt(2*Eb/N0)) = (1/2)*erfc(sqrt(Eb/N0))
BER curves are usually plotted to describe the performance of a digital communication system. In optical communication, BER(dB) vs. Received Power(dBm) is usually used, while in wireless communication, BER(dB) vs. SNR(dB) is used. Measuring
the bit error ratio helps people choose the appropriate forward error correction codes.
Since most such codes correct only bit-flips, but not bit-insertions or bit-deletions, the
Hamming distance metric is the appropriate way to measure the number of bit errors.
Many FEC coders also continuously measure the current BER.
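As a numerical illustration of the formula above, the following sketch evaluates the Gray-coded QPSK bit error rate at a few Eb/N0 values:

    import math

    def qpsk_ber(ebn0_db):
        """BER of Gray-coded QPSK in AWGN: Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
        ebn0 = 10 ** (ebn0_db / 10.0)           # dB -> linear ratio
        return 0.5 * math.erfc(math.sqrt(ebn0))

    for db in (0, 4, 8, 10):
        print(db, qpsk_ber(db))
    # about 7.9e-2, 1.3e-2, 1.9e-4 and 3.9e-6 respectively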

A more general way of measuring the number of bit errors is the Levenshtein
distance. The Levenshtein distance measurement is more appropriate for measuring raw
channel performance before frame synchronization, and when using error correction
codes designed to correct bit-insertions and bit-deletions, such as Marker Codes and
Watermark Codes.
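Counting bit errors with the Hamming distance is straightforward; a small sketch comparing a transmitted and a received block:

    def hamming_distance(sent, received):
        """Number of differing bit positions between two equal-length bit strings."""
        assert len(sent) == len(received)
        return sum(s != r for s, r in zip(sent, received))

    sent, received = "10110010", "10011010"
    errors = hamming_distance(sent, received)
    print(errors, errors / len(sent))   # 2 bit errors -> BER of 0.25 for this block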

UNIT IV: DATA COMMUNICATION AND NETWORK PROTOCOL Data


Communication codes, error control. Serial and parallel interface, telephone network,
data modem, ISDN, LAN, ISO-OSI seven layer architecture for WAN.
Data Communication codes:

In information theory and computer science, a code is usually considered as an algorithm which uniquely represents symbols from some source alphabet by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings.
Before giving a mathematically precise definition, we give a brief example. The mapping

C = {a -> 0, b -> 01, c -> 011}

is a code, whose source alphabet is the set {a, b, c} and whose target alphabet is the set {0, 1}. Using the extension of the code, the encoded string 0011001011 can be grouped into codewords as 0 011 0 01 011, and these in turn can be decoded to the sequence of source symbols acabc.
Using terms from formal language theory, the precise mathematical definition of this concept is as follows: let S and T be two finite sets, called the source and target alphabets, respectively. A code C : S -> T* is a total function mapping each symbol from S to a sequence of symbols over T, and the extension of C to a homomorphism of S* into T*, which naturally maps each sequence of source symbols to a sequence of target symbols, is referred to as its extension.


Variable-length codes
Main article: Variable-length code
In this section we consider codes which encode each source (clear text) character by a code word from some dictionary; concatenation of such code words gives us the encoded string. Variable-length codes are especially useful when clear text characters have different probabilities; see also entropy encoding.

A prefix code is a code with the "prefix property": there is no valid code word in the
system that is a prefix (start) of any other valid code word in the set. Huffman coding is the best-known algorithm for deriving prefix codes. Prefix codes are widely referred to
C.THIAGARAJAN-AP ECE DEPT.

51

DEPT: EEE
EE T56 COMMUNICATION
ENGINEERING
as "Huffman codes", even when the code was not produced by a Huffman algorithm.
Other examples of prefix codes are country calling codes, the country and publisher parts
of ISBNs, and the Secondary Synchronization Codes used in the UMTS W-CDMA 3G
Wireless Standard. Kraft's inequality characterizes the sets of code word lengths that are possible in a prefix code. Virtually any uniquely decodable code, not necessarily a prefix code, must satisfy Kraft's inequality.
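The prefix property is easy to test mechanically. In the sketch below, {0, 10, 110} is a valid prefix code, while the example mapping given earlier, {a -> 0, b -> 01, c -> 011}, fails the test because 0 is a prefix of 01:

    def is_prefix_code(codewords):
        """True if no codeword is a prefix of another codeword."""
        for w in codewords:
            for v in codewords:
                if w != v and v.startswith(w):
                    return False
        return True

    print(is_prefix_code(["0", "10", "110"]))   # True  - a valid prefix code
    print(is_prefix_code(["0", "01", "011"]))   # False - "0" is a prefix of "01"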
Codes may also be used to represent data in a way more resistant to errors in transmission
or storage. Such a "code" is called an error-correcting code, and works by including
carefully crafted redundancy with the stored (or transmitted) data. Examples include Hamming codes, Reed-Solomon, Reed-Muller, Walsh-Hadamard, Bose-Chaudhuri-Hocquenghem, Turbo, Golay, Goppa, low-density parity-check codes, and space-time codes. Error detecting codes can be optimised to detect burst errors, or random errors.
Error control:
In information theory and coding theory with applications in computer science and telecommunication, error detection and correction or error control are
techniques that enable reliable delivery of digital data over unreliable communication
channels. Many communication channels are subject to channel noise, and thus errors
may be introduced during transmission from the source to a receiver. Error detection
techniques allow detecting such errors, while error correction enables reconstruction of
the original data. Error detection is most commonly realized using a suitable hash
function (or checksum algorithm). A hash function adds a fixed-length tag to a message,
which enables receivers to verify the delivered message by recomputing the tag and
comparing it with the one provided.
There exists a vast variety of different hash function designs. However, some are
of particularly widespread use because of either their simplicity or their suitability for
detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in
detecting burst errors).

Random-error-correcting codes based on minimum distance coding can provide a
suitable alternative to hash functions when a strict guarantee on the minimum number of
errors to be detected is desired. Repetition codes, described below, are special cases of
error-correcting codes: although rather inefficient, they find applications for both error
correction and detection due to their simplicity.
Repetition codes
Main article: Repetition code
A repetition code is a coding scheme that repeats the bits across a channel to
achieve error-free communication. Given a stream of data to be transmitted, the data is
divided into blocks of bits. Each block is transmitted some predetermined number of
times. For example, to send the bit pattern "1011", the four-bit block can be repeated
three times, thus producing "1011 1011 1011". However, if this twelve-bit pattern is received as "1010 1011 1011", where the first block is unlike the other two, it can be determined that an error has occurred.
Repetition codes are very inefficient, and can be susceptible to problems if the error
occurs in exactly the same place for each group (e.g., "1010 1010 1010" in the previous
example would be detected as correct). The advantage of repetition codes is that they are
extremely simple, and are in fact used in some transmissions of numbers stations.
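A minimal sketch of the three-times repetition scheme from the example, split back into its three copies at the receiver:

    def rep3_encode(block):
        return block * 3                      # e.g. "1011" -> "101110111011"

    def rep3_check(word):
        """Split into three copies and report whether they all agree."""
        n = len(word) // 3
        copies = [word[i * n:(i + 1) * n] for i in range(3)]
        return copies[0] == copies[1] == copies[2]

    print(rep3_check(rep3_encode("1011")))        # True  - no error
    print(rep3_check("1010" + "1011" + "1011"))   # False - error detected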
Parity bits
Main article: Parity bit
A parity bit is a bit that is added to a group of source bits to ensure that the
number of set bits (i.e., bits with value 1) in the outcome is even or odd. It is a very
simple scheme that can be used to detect single or any other odd number (i.e., three, five,
etc.) of errors in the output. An even number of flipped bits will make the parity bit
appear correct even though the data is erroneous.
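A sketch of even-parity generation and checking; note how the second bit flip goes undetected, exactly as stated above:

    def add_even_parity(bits):
        """Append a parity bit so the total number of 1s is even."""
        return bits + [sum(bits) % 2]

    def parity_ok(word):
        return sum(word) % 2 == 0

    word = add_even_parity([1, 0, 1, 1])     # -> [1, 0, 1, 1, 1]
    word[0] ^= 1                             # single bit flip: detected
    print(parity_ok(word))                   # False
    word[1] ^= 1                             # second flip: parity looks correct again
    print(parity_ok(word))                   # True (undetected double error)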
Extensions and variations on the parity bit mechanism are horizontal redundancy
checks, vertical redundancy checks, and "double," "dual," or "diagonal" parity (used
in RAID-DP).
Checksums

Main article: Checksum


A checksum of a message is a modular arithmetic sum of message code words of a fixed word length (e.g., byte values). The sum may be negated by means of a ones'-complement operation prior to transmission to detect errors resulting in all-zero messages.
Checksum schemes include parity bits, check digits, and longitudinal redundancy
checks. Some checksum schemes, such as the Damm algorithm, the Luhn algorithm, and
the Verhoeff algorithm, are specifically designed to detect errors commonly introduced by
humans in writing down or remembering identification numbers.
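A sketch of a simple modular checksum over byte values, including the ones'-complement negation mentioned above; the byte width and sample message are arbitrary choices for the example:

    def checksum(data):
        """Modular sum of byte values, negated by ones' complement,
        so that (sum of data + checksum) & 0xFF == 0xFF at the receiver."""
        return (~sum(data)) & 0xFF

    msg = [0x10, 0x2F, 0x83]
    c = checksum(msg)
    print(hex(c))                                # 0x3d
    print((sum(msg) + c) & 0xFF == 0xFF)         # True: message verifies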
Cyclic redundancy checks (CRCs)
Main article: Cyclic redundancy check
A cyclic redundancy check (CRC) is a single-burst-error-detecting cyclic code and non-secure hash function designed to detect accidental changes to digital data in computer networks. It is not suitable for detecting maliciously introduced errors. It is characterized by specification of a so-called generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend, and where the remainder becomes the result.
Cyclic codes have favorable properties in that they are well suited for
detecting burst errors. CRCs are particularly easy to implement in hardware, and are
therefore commonly used in digital networks and storage devices such as hard disk
drives.
Even parity is a special case of a cyclic redundancy check, where the single-bit CRC is
generated by the divisor x + 1.
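A bitwise sketch of the polynomial long division; with the generator x + 1 (binary 11) the remainder reproduces even parity, as the text notes, while real networks use longer generators such as CRC-32:

    def crc_remainder(bits, poly):
        """Binary polynomial long division: XOR the generator into the
        message wherever the leading bit is 1; the remainder is the CRC."""
        bits = bits + [0] * (len(poly) - 1)       # append room for the remainder
        for i in range(len(bits) - len(poly) + 1):
            if bits[i]:
                for j, p in enumerate(poly):
                    bits[i + j] ^= p
        return bits[-(len(poly) - 1):]

    msg = [1, 0, 1, 1, 0, 0, 1]                   # four 1s -> even parity
    print(crc_remainder(msg, [1, 1]))             # [0]: x + 1 gives the parity bit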
Cryptographic hash functions
Main article: Cryptographic hash function

The output of a cryptographic hash function, also known as a message digest, can
provide strong assurances about data integrity, whether changes of the data are accidental
(e.g., due to transmission errors) or maliciously introduced. Any modification to the data
will likely be detected through a mismatching hash value. Furthermore, given some hash
value, it is infeasible to find some input data (other than the one given) that will yield the
same hash value. If an attacker can change not only the message but also the hash value,
then a keyed hash or message authentication code (MAC) can be used for additional
security. Without knowing the key, it is infeasible for the attacker to calculate the correct
keyed hash value for a modified message.
Error-correcting codes
Main article: Forward error correction
Any error-correcting code can be used for error detection. A code with minimum Hamming distance, d, can detect up to d - 1 errors in a code word. Using
minimum-distance-based error-correcting codes for error detection can be suitable if a
strict limit on the minimum number of errors to be detected is desired.
Codes with minimum Hamming distance d = 2 are degenerate cases of error-correcting codes, and can be used to detect single errors. The parity bit is an example of a
single-error-detecting code.
Serial and parallel interface:
In telecommunication and computer science, serial communication is the process
of sending data one bit at a time, sequentially, over a communication channel or computer
bus. This is in contrast to parallel communication, where several bits are sent as a whole,
on a link with several parallel channels.
Serial communication is used for all long-haul communication and most computer
networks, where the cost of cable and synchronization difficulties make parallel
communication impractical. Serial computer buses are becoming more common even at
shorter distances, as improved signal integrity and transmission speeds in newer serial
technologies have begun to outweigh the parallel bus's advantage of simplicity (no need
for serializer and deserializer, or SerDes) and to outstrip its disadvantages (clock skew,
interconnect density). The migration from PCI to PCI Express is an example.
The communication links across which computers - or parts of computers - talk to one
another may be either serial or parallel. A parallel link transmits several streams of data
simultaneously along multiple channels (e.g., wires, printed circuit tracks, or optical
fibres); a serial link transmits a single stream of data.
Although a serial link may seem inferior to a parallel one, since it can transmit less
data per clock cycle, it is often the case that serial links can be clocked considerably
faster than parallel links in order to achieve a higher data rate. A number of factors allow
serial to be clocked at a higher rate:

Clock skew between different channels is not an issue (for unclocked asynchronous serial communication links).

A serial connection requires fewer interconnecting cables (e.g., wires/fibres) and


hence occupies less space. The extra space allows for better isolation of the channel
from its surroundings.

Crosstalk is less of an issue, because there are fewer conductors in proximity.

In many cases, serial is a better option because it is cheaper to implement. Many ICs have
serial interfaces, as opposed to parallel ones, so that they have fewer pins and are
therefore less expensive.
Telephone network:
A telephone network is a telecommunications network used for telephone calls between
two or more parties.
There are a number of different types of telephone network:

A fixed line network where the telephones must be directly wired into a
single telephone exchange. This is known as the public switched telephone
network or PSTN.


A wireless network where the telephones are mobile and can move around
anywhere within the coverage area.

A private network where a closed group of telephones are connected primarily to


each other and use a gateway to reach the outside world. This is usually used
inside companies and call centres and is called a private branch exchange (PBX).

Public telephone operators (PTOs) own and build networks of the first two types and
provide services to the public under license from the national government. Virtual
Network Operators (VNOs) lease capacity wholesale from the PTOs and sell
on telephony service to the public directly.
Data modem:
A modem (modulator-demodulator) is a device that modulates an analog carrier
signal to encode digital information and demodulates the signal to decode the transmitted
information. The goal is to produce a signal that can be transmitted easily and decoded to
reproduce the original digital data. Modems can be used with any means of transmitting
analog signals, from light emitting diodes to radio. The most familiar type is a voice
band modem that turns the digital data of a computer into modulated electrical signals in
the voice frequency range of a telephone channel. These signals can be transmitted
over telephone lines and demodulated by another modem at the receiver side to recover
the digital data.
Modems are generally classified by the amount of data they can send in a
given unit of time, usually expressed in bits per second (bit/s or bps), or bytes per
second (B/s). Modems can also be classified by their symbol rate, measured in baud. The
baud unit denotes symbols per second, or the number of times per second the modem
sends a new signal. For example, the ITU V.21 standard used audio frequency shift
keying with two possible frequencies, corresponding to two distinct symbols (or one bit
per symbol), to carry 300 bits per second using 300 baud. By contrast, the original ITU
V.22 standard, which could transmit and receive four distinct symbols (two bits per
symbol), transmitted 1,200 bits by sending 600 symbols per second (600 baud)
using phase-shift keying.
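The distinction between baud and bit/s in these examples is simple arithmetic: bit rate = symbol rate x bits per symbol. A quick check of the V.21 and V.22 figures:

    def bit_rate(baud, bits_per_symbol):
        return baud * bits_per_symbol

    print(bit_rate(300, 1))   # V.21: 300 baud x 1 bit/symbol  = 300 bit/s
    print(bit_rate(600, 2))   # V.22: 600 baud x 2 bits/symbol = 1200 bit/s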
ISDN:

Integrated Services for Digital Network (ISDN) is a set of communication


standards for simultaneous digital transmission of voice, video, data, and other network
services over the traditional circuits of the public switched telephone network. It was first
defined in 1988 in the CCITT red book.[1] Prior to ISDN, the telephone system was
viewed as a way to transport voice, with some special services available for data. The key
feature of ISDN is that it integrates speech and data on the same lines, adding features
that were not available in the classic telephone system. There are several kinds of access
interfaces to ISDN, defined as Basic Rate Interface (BRI), Primary Rate Interface (PRI), Narrowband ISDN (N-ISDN), and Broadband ISDN (B-ISDN).


ISDN is a circuit-switched telephone network system, which also provides access
to packet switched networks, designed to allow digital transmission of voice
and data over ordinary telephone copper wires, resulting in potentially better voice
quality than an analog phone can provide. It offers circuit-switched connections (for
either voice or data), and packet-switched connections (for data), in increments of
64 kilobit/s. A major market application for ISDN in some countries is Internet access,
where ISDN typically provides a maximum of 128 kbit/s in both upstream and
downstream directions. Channel bonding can achieve a greater data rate; typically the
ISDN B-channels of three or four BRIs (six to eight 64 kbit/s channels) are bonded.
ISDN should not be mistaken for its use with a specific protocol, such as Q.931, in which ISDN is employed as the network, data-link and physical layers in the context of the OSI
model. In a broad sense ISDN can be considered a suite of digital services existing on
layers 1, 2, and 3 of the OSI model. ISDN is designed to provide access to voice and data
services simultaneously.
However, in common use ISDN came to be limited to Q.931 and related protocols, which are a set of protocols for establishing and breaking circuit switched connections, and for advanced calling features for the user. They were introduced in 1986.[2] In a videoconference, ISDN provides simultaneous voice, video, and text
transmission between individual desktop videoconferencing systems and group (room)
videoconferencing systems. The other ISDN access available is the Primary Rate Interface
(PRI), which is carried over an E1 (2048 kbit/s) in most parts of the world. An E1 is 30
'B' channels of 64 kbit/s, one 'D' channel of 64 kbit/s and a timing and alarm channel of
64 kbit/s.
In North America PRI service is delivered on one or more T1 carriers (often
referred to as 23B+D) of 1544 kbit/s (24 channels). A PRI has 23 'B' channels and 1 'D'
channel for signalling (Japan uses a circuit called a J1, which is similar to a T1). Interchangeably but incorrectly, a PRI is referred to as T1 because it uses the T1 carrier
format. A true T1 (commonly called "Analog T1" to avoid confusion) uses 24 channels of
64 kbit/s of in-band signaling. Each channel uses 56 kbit/s for data and voice and 8 kbit/s for signaling and messaging. PRI uses out-of-band signaling, which provides the 23 B channels with a clear 64 kbit/s for voice and data and one 64 kbit/s 'D' channel for signaling and
messaging. In North America, Non-Facility Associated Signalling allows two or more
PRIs to be controlled by a single D channel, and is sometimes called "23B+D + n*24B".
D-channel backup allows for a second D channel in case the primary fails. NFAS is
commonly used on a T3.
PRI-ISDN is popular throughout the world, especially for connecting PBXs to the PSTN. While the North American PSTN can use PRI or Analog T1 format from PBX to PBX, POTS or BRI can be delivered to a business or residence. The North American PSTN can connect from PBX to PBX via Analog T1, T3, PRI, OC3, etc. Even though many network professionals use the term "ISDN" to refer to the lower-bandwidth BRI circuit, in North America BRI is relatively uncommon whilst PRI circuits serving PBXs are commonplace.

ISO-OSI seven layer architecture for WAN:


The Open Systems Interconnection model (OSI) is a conceptual model that


characterizes and standardizes the internal functions of a communication system by
partitioning it into abstraction layers. The model is a product of the Open Systems
Interconnection project at the International Organization for Standardization (ISO),
maintained by the identification ISO/IEC 7498-1.
The model groups communication functions into seven logical layers. A layer serves the
layer above it and is served by the layer below it. For example, a layer that provides
error-free communications across a network provides the path needed by applications
above it, while it calls the next lower layer to send and receive packets that make up the
contents of that path. Two instances at one layer are connected by a horizontal connection
on that layer.

Layer 1: physical layer


The physical layer has the following major functions:
it defines the electrical and physical specifications of the data connection. It
defines the relationship between a device and a physical transmission medium
(e.g., a copper or fiber optical cable). This includes the layout of pins, voltages,
line impedance, cable specifications, signal timing, hubs, repeaters, network
adapters, host bus adapters (HBA used in storage area networks) and more.

it defines the protocol to establish and terminate a connection between two


directly connected nodes over a communications medium.

it may define the protocol for flow control.

it defines a protocol for the provision of a (not necessarily reliable) connection


between two directly connected nodes, and the modulation or conversion between
the representation of digital data in user equipment and the corresponding signals
transmitted over the physical communications channel. This channel can involve
physical cabling (such as copper and optical fiber) or a wireless radio link.

The physical layer of Parallel SCSI operates in this layer, as do the physical layers of
Ethernet and other local-area networks, such as Token Ring, FDDI, ITU-T G.hn, and
IEEE 802.11, as well as personal area networks such as Bluetooth and IEEE 802.15.4.
Layer 2: data link layer
The data link layer provides a reliable link between two directly connected nodes, by
detecting and possibly correcting errors that may occur in the physical layer. The data
link layer is divided into two sublayers:

Media Access Control (MAC) layer - responsible for controlling how computers
in the network gain access to data and permission to transmit it.

Logical Link Control (LLC) layer - controls error checking and packet synchronization.

The Point-to-Point Protocol (PPP) is an example of a data link layer in the TCP/IP
protocol stack.
The ITU-T G.hn standard, which provides high-speed local area networking over existing
wires (power lines, phone lines and coaxial cables), includes a complete data link layer
which provides both error correction and flow control by means of a selective-repeat
sliding-window protocol.
Layer 3: network layer
The network layer provides the functional and procedural means of transferring
variable length data sequences (called datagrams) from one node to another connected to
the same network. A network is a medium to which many nodes can be connected, on
which every node has an address and which permits nodes connected to it to transfer
messages to other nodes connected to it by merely providing the content of a message
and the address of the destination node and letting the network find the way to deliver
("route") the message to the destination node. In addition to message routing, the network
may (or may not) implement message delivery by splitting the message into several
fragments, delivering each fragment by a separate route and reassembling the fragments,
reporting delivery errors, etc.
Datagram delivery at the network layer is not guaranteed to be reliable.
A number of layer-management protocols, a function defined in the management annex,
ISO 7498/4, belong to the network layer. These include routing protocols, multicast
group management, network-layer information and error, and network-layer address
assignment. It is the function of the payload that makes these belong to the network layer,
not the protocol that carries them.

Layer 4: transport layer

The transport layer provides the functional and procedural means of transferring
variable-length data sequences from a source to a destination host via one or more
networks, while maintaining the quality of service functions.
An example of a transport-layer protocol in the standard Internet protocol stack is
TCP, usually built on top of the IP protocol.
The transport layer controls the reliability of a given link through flow control,
segmentation/desegmentation, and error control. Some protocols are state- and
connection-oriented. This means that the transport layer can keep track of the segments
and retransmit those that fail. The transport layer also provides the acknowledgement of
the successful data transmission and sends the next data if no errors occurred. The
transport layer creates packets out of the message received from the application layer.
Packetizing is a process of dividing the long message into smaller messages.
OSI defines five classes of connection-mode transport protocols ranging from
class 0 (which is also known as TP0 and provides the fewest features) to class 4 (TP4,
designed for less reliable networks, similar to the Internet). Class 0 contains no error
recovery, and was designed for use on network layers that provide error-free connections.
Class 4 is closest to TCP, although TCP contains functions, such as the graceful close,
which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol
classes provide expedited data and preservation of record boundaries. Detailed characteristics of the TP0-TP4 classes are shown in the following table:[4]
(Table: detailed characteristics of transport protocol classes TP0-TP4.)
An easy way to visualize the transport layer is to compare it with a post office,
which deals with the dispatch and classification of mail and parcels sent. Do remember,
however, that a post office manages the outer envelope of mail. Higher layers may have
the equivalent of double envelopes, such as cryptographic presentation services that can
be read by the addressee only.

Roughly speaking, tunneling protocols operate at the transport layer, such as


carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or
end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might
seem to be a network-layer protocol, if the encapsulation of the payload takes place only
at endpoint, GRE becomes closer to a transport protocol that uses IP headers but contains
complete frames or packets to deliver to an endpoint. L2TP carries PPP frames inside transport packets.
Although not developed under the OSI Reference Model and not strictly
conforming to the OSI definition of the transport layer, the Transmission Control Protocol
(TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are
commonly categorized as layer-4 protocols within OSI.
Layer 5: session layer
The session layer controls the dialogues (connections) between computers. It
establishes, manages and terminates the connections between the local and remote
application. It provides for full-duplex, half-duplex, or simplex operation, and establishes
checkpointing, adjournment, termination, and restart procedures. The OSI model made
this layer responsible for graceful close of sessions, which is a property of the
Transmission Control Protocol, and also for session checkpointing and recovery, which is
not usually used in the Internet Protocol Suite. The session layer is commonly
implemented explicitly in application environments that use remote procedure calls.
Layer 6: presentation layer
The presentation layer establishes context between application-layer entities, in
which the application-layer entities may use different syntax and semantics if the
presentation service provides a mapping between them. If a mapping is available,
presentation service data units are encapsulated into session protocol data units, and
passed down the TCP/IP stack.
This layer provides independence from data representation (e.g., encryption) by
translating between application and network formats. The presentation layer transforms
data into the form that the application accepts. This layer formats and encrypts data to be
sent across a network. It is sometimes called the syntax layer.[5]
The original presentation structure used the Basic Encoding Rules of Abstract Syntax
Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file
to an ASCII-coded file, or serialization of objects and other data structures from and to
XML.
Layer 7: application layer
The application layer is the OSI layer closest to the end user, which means both
the OSI application layer and the user interact directly with the software application. This
layer interacts with software applications that implement a communicating component.
Such application programs fall outside the scope of the OSI model. Application-layer
functions typically include identifying communication partners, determining resource
availability, and synchronizing communication. When identifying communication
partners, the application layer determines the identity and availability of communication
partners for an application with data to transmit. When determining resource availability,
the application layer must decide whether sufficient network resources for the requested communication exist. In synchronizing communication, all communication between
applications requires cooperation that is managed by the application layer.

UNIT V: Orbital satellites, geostationary satellites, look angles, satellite system link
models, satellite system link equations; advantages of optical fiber communication - light propagation through fiber, fiber loss, light sources and detectors.
A satellite communication system will have a number of users operating via a
common satellite transponder, and this calls for sharing of the resources of power,
bandwidth and time. Here we describe these techniques and examine their implications,
with emphasis on principles rather than detailed structure or parameters of particular
networks, which tend to be very system specific. The term used for such sharing and
management of a number of different channels is multiple access.

Orbital satellites:
In some applications high Earth orbits may be required. For these applications the
satellite will take longer than 24 hours to orbit the Earth, and path lengths may become
very long resulting in additional delays for the round trip from the Earth to the satellite
and back as well as increasing the levels of path loss.
The choice of the satellite orbit will depend on its applications. While
geostationary orbits are popular for applications such as direct broadcasting and for
communications satellites, others such as GPS and even those satellites used for mobile
phones are much lower.
LEO basics
With Low Earth Orbit extending from 200 km to 1,200 km, it is relatively low in altitude, although well above anything that a conventional aircraft can reach.
However LEO is still very close to the Earth, especially when compared to other
forms of satellite orbit including geostationary orbit.
The low orbital altitude leads to a number of characteristics:

Orbit times are much less than for many other forms of orbit. The lower altitude
means higher velocities are required to balance the earth's gravitational field.
Typical velocities are very approximately around 8 km/s, with orbit times sometimes of the order of 90 minutes, although these figures vary considerably with the exact details of the orbit (see the sketch after this list).

The lower orbit means the satellite and user are closer together, and therefore path losses are less than for other orbits such as GEO.

The round trip time, RTT for the radio signals is considerably less than that
experienced by geostationary orbit satellites. The actual time will depend upon
factors such as the orbit altitude and the position of the user relative to the
satellite.

Radiation levels are lower than experienced at higher altitudes.

Less energy is expended placing the satellites in LEO than higher orbits.

Some speed reduction may be experienced as a result of friction from the low, but
measurable levels of gasses, especially at lower altitudes. An altitude of 300 km is
normally accepted as the minimum for an orbit as a result of the increasing drag
from the presence of gasses at low altitudes.
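The velocity and orbit-time figures quoted in the list above follow from circular-orbit mechanics, v = sqrt(mu/r) and T = 2*pi*sqrt(r^3/mu), where mu is the Earth's gravitational parameter. A check at a 300 km altitude:

    import math

    MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6371e3         # mean Earth radius, m

    altitude = 300e3                          # 300 km LEO
    r = R_EARTH + altitude                    # orbital radius
    v = math.sqrt(MU / r)                     # circular orbital velocity
    T = 2 * math.pi * math.sqrt(r**3 / MU)    # orbital period

    print(round(v / 1000, 2), "km/s")         # about 7.73 km/s
    print(round(T / 60, 1), "minutes")        # about 90.4 minutes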

Applications for LEO satellites


A variety of different types of satellite use the LEO orbit levels, including:

Communications satellites - some communications satellites including the Iridium


phone system use LEO.

Earth monitoring satellites use LEO as they are able to see the surface of the Earth
more clearly as they are not so far away. They are also able to traverse the surface
of the Earth.

The International Space Station is in an LEO that varies between 320 km (199
miles) and 400 km (249 miles) above the Earth's surface. It can often be seen
from the Earth's surface with the naked eye.

Space debris in LEO


Apart from the general congestion experienced in Low Earth Orbit, the situation
is made much worse by the general level of space debris that exists.
There is a real and growing risk of collision and major damage - any collisions
themselves are likely to create further space debris.
The US Joint Space Operations Center currently tracks over 8,500 objects that have dimensions larger than 10 centimetres. However, debris with smaller dimensions can also cause significant damage and could render a satellite unserviceable after a collision.
Geostationary satellites:
A geostationary orbit, geostationary Earth orbit or geosynchronous equatorial orbit (GEO) is an orbit whose position in the sky remains the same for a stationary observer on Earth. This effect is achieved with a circular orbit 35,786 kilometres (22,236 mi) above
the Earth's equator and following the direction of the Earth's rotation. [2] An object in such
an orbit has an orbital period equal to the Earth's rotational period (one sidereal day), and
thus appears motionless, at a fixed position in the sky, to ground observers.
Communications satellites and weather satellites are often placed in geostationary orbits,
so that the satellite antennas which communicate with them do not have to move to track
them, but can be pointed permanently at the position in the sky where they stay. Using
this characteristic, ocean color satellites with visible sensors (e.g. the Geostationary
Ocean Color Imager (GOCI)) can also be operated in geostationary orbit in order to
monitor sensitive changes of ocean environments.
A geostationary orbit is a particular type of geosynchronous orbit, the distinction
being that while an object in geosynchronous orbit returns to the same point in the sky at
the same time each day, an object in geostationary orbit never leaves that position.
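The 35,786 km altitude can be derived by setting the orbital period equal to one sidereal day and solving T = 2*pi*sqrt(r^3/mu) for the orbital radius r:

    import math

    MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
    T_SIDEREAL = 86164.1       # one sidereal day in seconds
    R_EQUATOR = 6378e3         # equatorial radius of the Earth, m

    r = (MU * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
    print(round(r / 1000))                  # about 42164 km from Earth's centre
    print(round((r - R_EQUATOR) / 1000))    # about 35786 km altitude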
The notion of a geosynchronous satellite for communication purposes was first
published in 1928 (but not widely so) by Herman Potočnik.[3] The first appearance of a
geostationary orbit in popular literature was in the first Venus Equilateral story by George
O. Smith,[4] but Smith did not go into details. British science fiction author Arthur C.
Clarke disseminated the idea widely, with more details on how it would work, in a 1945
paper entitled "Extra-Terrestrial Relays Can Rocket Stations Give Worldwide Radio
Coverage?", published in Wireless World magazine. Clarke acknowledged the connection
in his introduction to The Complete Venus Equilateral.[5] The orbit, which Clarke first
described as useful for broadcast and relay communications satellites, [6] is sometimes
called the Clarke Orbit.[7] Similarly, the Clarke Belt is the part of space about 35,786 km
(22,236 mi) above sea level, in the plane of the Equator, where near-geostationary orbits
may be implemented. The Clarke Orbit is about 265,000 km (165,000 mi) long.
Look angles:
The earth station needs to know where the satellite is in the orbit. Then the earth station
engineer needs to calculate some angles to track the satellite correctly. These angles are
called the antenna look angles. The look angles for the ground station antenna are the azimuth
and elevation angles required at the antenna so that it points directly at the satellite. With
the geostationary orbit the situation is much simpler than for any other orbit. As the antenna beam width is very narrow, a tracking mechanism is required to compensate for the movement of the satellite about the nominal geostationary position. The three pieces of
information that are needed to determine the look angles for the geostationary orbit are
a. Earth station latitude
b. Earth station longitude
c. Satellite orbital position


Using this information, the antenna look angles can be calculated using Napier's rules (solving the spherical triangle). The azimuth angle is the horizontal angle measured at the earth station antenna, from true north clockwise round to the direction of the satellite. The elevation angle is the vertical angle measured at the earth station antenna, from the local horizontal up to the satellite.
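A sketch of the look-angle calculation using the standard spherical-triangle results; the quadrant handling shown assumes a northern-hemisphere earth station, and the station and satellite positions in the example are hypothetical:

    import math

    def look_angles(lat_deg, lon_deg, sat_lon_deg):
        """Elevation and azimuth (degrees) from an earth station at
        (lat, lon) to a geostationary satellite at longitude sat_lon.
        Assumes a northern-hemisphere station for the azimuth correction."""
        lat = math.radians(lat_deg)
        dlon = math.radians(sat_lon_deg - lon_deg)
        ratio = 6378.0 / 42164.0              # earth radius / GEO orbit radius

        cos_g = math.cos(lat) * math.cos(dlon)        # cosine of central angle
        elevation = math.degrees(math.atan2(cos_g - ratio,
                                            math.sqrt(1 - cos_g**2)))
        alpha = math.degrees(math.atan2(math.tan(abs(dlon)), math.sin(lat)))
        # Northern hemisphere: measure from south, toward east or west.
        azimuth = 180 - alpha if sat_lon_deg > lon_deg else 180 + alpha
        return elevation, azimuth

    # Hypothetical example: station at 13 N, 80 E; satellite at 83 E.
    print(look_angles(13.0, 80.0, 83.0))   # about (74.3, 166.9)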

Satellite system link model:

1. The uplink:
The primary component within the uplink section of a satellite system is the earth station transmitter.
A typical earth station transmitter consists of an IF modulator, an IF-to-RF microwave up-converter and a high power amplifier (HPA). The IF modulator converts the input baseband signals to either an FM, a PSK or a QAM modulated intermediate frequency. The up-converter (mixer and BPF) converts the IF to an appropriate RF carrier frequency. The HPA provides adequate input sensitivity and output power to propagate the signal to the satellite transponder. The HPAs commonly used are klystrons and travelling-wave tubes (TWTs).
2. The satellite transponder:
Fig. below shows a simplified block diagram of a satellite transponder.
A transponder is a part of a satellite, which is a combination of transmitter and
receiver. The main function of transponder is frequency translation and amplification.
Based on the frequency translation process, there are three basic transponder
configurations. These are single conversion transponder, double conversion transponder
and regenerative transponder.


The uplink signal is received by the receiving antenna. The received signal is first
band limited by Band Pass Filter (BPF), then it is routed to Low Noise Amplifier (LNA).
The amplified signal is then frequency translated by a mixer and an oscillator.
Here only the frequency is translated from high-band up-link frequency to the low-band
down link frequency. The mixer output (down link signal) is then applied to a BPF, then it is amplified by a High Power Amplifier (HPA). This down link signal is then transmitted to the receiver earth station through a high power transmitting antenna.
3. The downlink:
Fig. below shows a block diagram of a typical earth station receiver.

An earth station receiver includes an input BPF, an LNA and an RF to IF down converter.
The BPF limits the input noise power to the LNA. The LNA is a highly sensitive,
low noise device. The RF-to-IF down converter is a mixer, BPF combination which
converts the received RF signal to an IF frequency.
The most common frequencies used for satellite communications are 6/4 and
14/12 GHz bands. The first number indicates the uplink (earth station-to-transponder) frequency and the second number is the downlink (transponder-to-earth station) frequency.
Since C band is most widely used, this band is becoming overcrowded.

OPTICAL FIBER
An optical fiber is a flexible, transparent fiber made of very pure glass (silica) not
much bigger than a human hair that acts as a waveguide, or "light pipe", to transmit light
between the two ends of the fiber. The field of applied science and engineering concerned
with the design and application of optical fibers is known as fiber optics. Optical fibers
are widely used in fiber-optic communications, which permits transmission over longer
distances and at higher bandwidths (data rates) than other forms of communication.
Fibers are used instead of metal wires because signals travel along them with less loss
and are also immune to electromagnetic interference. Fibers are also used for
illumination, and are wrapped in bundles so they can be used to carry images, thus
allowing viewing in tight spaces. Specially designed fibers are used for a variety of other
applications, including sensors and fiber lasers.
(Figure: an optical fiber junction box; the yellow cables are single-mode fibers, while the orange and blue cables are multi-mode fibers: 50/125 µm OM2 and 50/125 µm OM3 fibers respectively.)
Optical fiber typically consists of a transparent core surrounded by a
transparent cladding material with a lower index of refraction. Light is kept in the core by
total internal reflection. This causes the fiber to act as a waveguide. Fibers that support
many propagation paths or transverse modes are called multi-mode fibers (MMF), while
those that only support a single mode are called single-mode fibers (SMF). Multi-mode
fibers generally have a larger core diameter, and are used for short-distance
communication links and for applications where high power must be transmitted.
Single-mode fibers are used for most communication links longer than 1,050 meters (3,440 ft).
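Whether a fibre is single-mode or multi-mode at a given wavelength can be checked with the normalized frequency V = (2πa/λ)·NA, where a is the core radius and NA = sqrt(n1² − n2²); the fibre is single-mode when V < 2.405. A small Python sketch with assumed, typical index and geometry values:

import math

def v_number(core_radius_um: float, wavelength_um: float,
             n_core: float, n_clad: float) -> float:
    """Normalized frequency V = (2*pi*a/lambda) * NA, with NA = sqrt(n1^2 - n2^2)."""
    numerical_aperture = math.sqrt(n_core**2 - n_clad**2)
    return 2.0 * math.pi * core_radius_um / wavelength_um * numerical_aperture

# Assumed values: core radius 4.1 um and silica indices near 1550 nm.
v = v_number(4.1, 1.55, 1.4504, 1.4447)
print(f"V = {v:.2f}:", "single-mode" if v < 2.405 else "multi-mode")  # V ~ 2.13
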

Advantages of optical fibre communication:


Fiber-optic communication systems are becoming increasingly common in
modern society. Where once they were restricted to government or business use,
fiber-optic systems are now available to individual households. This has been driven by
the rising demand for high-bandwidth connections for a variety of industrial and
residential purposes.
Fiber-optic systems have a large number of advantages over copper wire cables. Among
the most important are the following:
• Because fiber-optic cables are both lighter and smaller in diameter than copper
lines, they can be more easily produced and installed.
• Fiber-optic cables carry light rather than electrical current, so they use
significantly less energy than copper lines and are immune to many of the hazards
associated with the electrical current in copper lines.
• Fiber-optic communication systems can transmit more information than copper
cables and are well suited for use with digital communications.
• Compared to copper cables, fiber-optic cables are immune to electromagnetic
interference and produce no interference when operating.
• Finally, fiber-optic lines are less expensive than copper cables, which can
drastically reduce the cost of installing new lines or maintaining older ones.

Light propagation through fibre:

Light can be described by rays that travel in straight lines in an isotropic and
homogeneous medium. When a ray strikes an interface between two media, its
direction changes: at the interface both reflection and refraction take place.
The change of direction back into the same medium at the interface is known as
reflection, while the change of direction into the second medium is known as refraction.
An isotropic and homogeneous medium is characterised by its refractive index,
which is a measure of the velocity of light in that medium. The medium has different
refractive indices for light of different colours.
The reflection and refraction of light at an interface are described by the law of
reflection and the law of refraction (Snell's law). These laws are the foundation of
geometrical optics; the propagation of rays, and the image formation by lenses and
mirrors, are governed by these laws.
Under certain conditions, explained below, the ray, instead of refracting, gets totally
reflected. This phenomenon is known as total internal reflection and plays a very
important role in light guidance through fibers.
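A brief sketch of these laws: Snell's law, n1·sin θ1 = n2·sin θ2, gives the refracted angle, and total internal reflection occurs when n1 > n2 and the angle of incidence exceeds the critical angle θc = sin⁻¹(n2/n1). The index values below are assumed, typical of a silica core and cladding:

import math

def refract_deg(n1: float, n2: float, theta1_deg: float):
    """Refraction angle from Snell's law, or None on total internal reflection."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if s > 1.0:
        return None  # no refracted ray: total internal reflection
    return math.degrees(math.asin(s))

n_core, n_clad = 1.48, 1.46  # assumed typical core/cladding indices
theta_c = math.degrees(math.asin(n_clad / n_core))
print(f"critical angle = {theta_c:.1f} deg")  # about 80.6 deg
for theta in (70.0, 85.0):
    r = refract_deg(n_core, n_clad, theta)
    print(f"incidence {theta} deg ->", "TIR" if r is None else f"refracted at {r:.1f} deg")
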
Refraction of Light:
The refraction of light at an interface is governed by three input parameters: the
refractive indices of the two media and the angle of incidence. By varying these
parameters, the propagation of light across the interface can be studied.
Light sources and detectors:
OPTICAL DETECTORS
The detection of optical radiation is usually accomplished by converting the
optical energy into an electrical signal. Optical detectors include photon detectors, in
which one photon of light energy releases one electron that is detected in the electronic
circuitry, and thermal detectors, in which the optical energy is converted into heat, which
then generates an electrical signal. Often the detection of optical energy must be
performed in the presence of noise sources, which interfere with the detection process.
C.THIAGARAJAN-AP ECE DEPT.

75

DEPT: EEE
EE T56 COMMUNICATION
ENGINEERING
The detector circuitry usually employs a bias voltage and a load resistor in series with the
detector. The incident light changes the characteristics of the detector and causes the
current flowing in the circuit to change. The output signal is the change in voltage drop
across the load resistor. Many detector circuits are designed for specific applications.
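As a numerical illustration of this circuit (values assumed, not from the text): the photocurrent is approximately the detector responsivity times the incident optical power, and the output signal is the voltage that current develops across the load resistor:

def photodiode_output_v(power_w: float, responsivity_a_per_w: float,
                        load_ohms: float) -> float:
    """Output signal: voltage drop the photocurrent produces across the load."""
    photocurrent = responsivity_a_per_w * power_w  # I = R * P
    return photocurrent * load_ohms

# Assumed: 1 uW incident light, R = 0.9 A/W (InGaAs near 1550 nm), 50-ohm load.
print(f"{photodiode_output_v(1e-6, 0.9, 50.0) * 1e6:.1f} uV")  # 45.0 uV
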
Avalanche photodiodes (APDs) are used in long-haul fiber-optic systems, since they
have superior sensitivity, as much as 10 dB better than PIN diodes. Basically, an APD
is a P-N junction photodiode operated with a high reverse bias; the material is
typically InP/InGaAs. With the high applied potential, impact ionization triggered by
the incident lightwave generates electron-hole pairs that subsequently cause an
avalanche across the potential barrier. This current gain gives the APD its greater
sensitivity. APDs are commonly used up to 2.5 Gbps, and sometimes to 10 Gbps if the
extra cost can be justified.
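To first order, and ignoring the excess noise that accompanies multiplication, the avalanche process can be sketched as a current gain M applied to the primary photocurrent; the values below are assumed for illustration:

def apd_current(power_w: float, responsivity_a_per_w: float, gain_m: float) -> float:
    """APD photocurrent: avalanche gain M multiplies the primary photocurrent."""
    return gain_m * responsivity_a_per_w * power_w

# Assumed: 100 nW received power, R = 0.8 A/W, avalanche gain M = 10.
pin = apd_current(100e-9, 0.8, 1.0)   # PIN diode: no internal gain (M = 1)
apd = apd_current(100e-9, 0.8, 10.0)
print(f"PIN: {pin*1e9:.0f} nA, APD: {apd*1e9:.0f} nA")  # 80 nA vs 800 nA
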
Silicon photodiodes are used in lower-frequency systems (up to 1.5 or 2 GHz) where
they can meet low-cost and modest frequency-response requirements. Si devices are
also used in pairs in wavelength sensors; the ratio of the longer- and
shorter-wavelength sensor outputs is proportional to the input wavelength.
Light Guidance:
The passage of light in the meridional plane through a fiber can be traced by varying
four parameters: the refractive index of the medium in front of the fiber, the index of
the fiber core, the index of the cladding, and the angle of incidence.
1. Principles of Lasers

The nature of light, blackbody radiation, photons, quantized energy levels,
emission and absorption of light, optical amplifiers, optical resonators, lasers,
continuous-wave and pulsed lasers, selected gaseous and solid-state systems.
2. Semiconductor Lasers and Diodes
Intrinsic and extrinsic semiconductors, light-matter interaction, ternary and
quaternary semiconductors, heterostructures, quantum wells, wires and dots,
homojunctions and heterojunctions, light emitting diodes, injection lasers, distributed
feedback lasers and vertical cavity surface emitting (VCSEL) lasers.
3. Optical Detectors
Thermal detectors and photon detectors, quantum efficiency of semiconductor
detectors, photoconductors, photovoltaic detectors, PIN diodes, avalanche photodiodes,
noise and noise equivalent power.
4. Modulators and Deflectors
State of polarization, acoustooptic, electrooptic and magnetooptic effects, Faraday
rotation and magnetooptic modulators, index ellipsoid, linear electrooptic effect,
electrooptic modulators, acoustooptic modulators and deflectors, Raman-Nath and Bragg
diffraction.
5. Photonic Crystals and Microresonators
Wave propagation in periodic systems, 1D/2D/3D crystal examples, photonic
band gap, control of emission, dielectric microspheres and their optical properties.
