
Telecommunication and Computer Networks

Edition 7.0
Pieter Kritzinger
July 2001

Department of Computer Science


University of Cape Town


Foreword
Man has always had a desire to exchange information with his fellow man. Not so long ago, people gathered
on the town square to gossip about the affairs of the town and its inhabitants. The Internet has now turned
the world into one big electronic town square. As in those early days, the value of the information thus
disseminated should be taken with a grain of salt, or at least verified before being taken seriously.
Amazingly, some individuals seem to forget this age-old wisdom.
This version 7.0 constitutes a major rewrite of the previous version and, as the new title indicates, reflects
the merging of telephone, computer and other networks into a single telecommunication network that
carries video, data and voice.
No single person can invent the knowledge reflected in these pages, and so the content is compiled from
many articles, textbooks and my own work. There are many good textbooks on the topic; the one which
comes closest to covering the material in these notes is Understanding Data Communications and Networks
by W.A. Shay, published by the PWS Publishing Company (ISBN 0-534-20244-6). Another good text,
albeit somewhat academic in nature, is the third edition of Computer Networks by Andrew Tanenbaum
(ISBN 0-13-394248-1). The one I would recommend you buy (as of July 9, 2001) is Computer Networks
and Internets (Second Edition) by Douglas E. Comer, published by Prentice-Hall International (ISBN 0-13-084222-2).
As with all fields of Computer Science, there is a great deal of theoretical work associated with computer
networks and their protocols. I think here in particular of formal specification methods, stochastic
modelling, and formalisms such as Petri nets and process algebras for modelling concurrent systems. In
this book we have been able to cover only a tiny fraction of this work, and then only in the later chapters.
The subject matter of (tele-)communications has not only become vast but also much more significant in the
world of computing than it was a mere decade ago. One recognition of this is the insertion of a C into the
former IT of Information Technology, turning it into Information and Communication Technology. No text
can expect to cover all of it unless one is content to leave the reader with a feeling of knowing nothing about
everything. It is therefore inevitable that some readers will find their favourite network topic not even
mentioned.
This seventh edition of Telecommunication and Computer Networks has four parts. Part I is intended to lay
the foundation for a course in the subject and as such could serve as reading material for a few introductory
lectures on the subject. Part II covers the fundamentals of data communication, introducing the engineering
concepts required to understand both electrical and optical data channels and their properties. Part III
concerns the basic protocols required in any network: starting with the Data Link layer, it also considers
Packet Switching and the Media Access Control (MAC) techniques used by common local area networks. The
material covered to this point is required reading for a good undergraduate course in computer networks.
The last part covers advanced topics such as network security, wireless networks, network management and
network middleware such as CORBA. This part contains sufficient material for a second course in
telecommunications and computer networks.
Many generations of students have worked on these notes. The latest is Andy Shearman, who was responsible
for the chapters on Security, Wireless Communication and Network Middleware. Thank you, Andy.
Any constructive critique about the content of these notes will always be most welcome.
Pieter Kritzinger
Cape Town
July 2001


Contents

I   Overview of Telecommunication and Computer Networks

1   Basic Concepts
    1.1   Objectives of this Chapter
    1.2   Introduction
    1.3   Evolution of Computer Networks
    1.4   Electrical Signals
    1.5   Optical Signals . . . . . . . . 11
    1.6   A Matter of Protocol . . . . . . . . 11
    1.7   Network Architecture . . . . . . . . 12
          1.7.1   Physical layer . . . . . . . . 14
          1.7.2   Data link layer . . . . . . . . 15
          1.7.3   Network layer . . . . . . . . 15
          1.7.4   Transport layer . . . . . . . . 16
          1.7.5   Session layer . . . . . . . . 16
          1.7.6   Presentation layer . . . . . . . . 17
          1.7.7   Application layer . . . . . . . . 17
          1.7.8   Conclusion - Standards and ISO . . . . . . . . 17
    1.8   International Standards Bodies . . . . . . . . 17
    1.9   Exercises . . . . . . . . 18

II  Transmission Fundamentals . . . . . . . . 21

2   Electrical Signals . . . . . . . . 23
    2.1   Objectives of this Chapter . . . . . . . . 23
    2.2   Fourier Analysis . . . . . . . . 26
    2.3   Transmission Media . . . . . . . . 26
          2.3.1   Twisted pair . . . . . . . . 27
          2.3.2   Coaxial cable . . . . . . . . 28
    2.4   Channel Characteristics . . . . . . . . 28
          2.4.1   Attenuation . . . . . . . . 29
          2.4.2   Noise Distortion and Shannon's Law . . . . . . . . 29
          2.4.3   Delay distortion . . . . . . . . 31
          2.4.4   Electrical noise . . . . . . . . 31
          2.4.5   Bandwidth limitation . . . . . . . . 31
    2.5   Data encoding . . . . . . . . 34
          2.5.1   Digital Data, Digital Signals . . . . . . . . 34
          2.5.2   Digital Data, Analog Signals . . . . . . . . 37
          2.5.3   Analog Data, Digital Signals . . . . . . . . 39
    2.6   Exercises . . . . . . . . 41

3   Optical Signals . . . . . . . . 43
    3.1   Objectives of this Chapter . . . . . . . . 43
    3.2   Optic Fibres . . . . . . . . 44
    3.3   Method of Transmission . . . . . . . . 45
    3.4   Optical Media . . . . . . . . 45
          3.4.1   Multimode and single-mode fibres . . . . . . . . 46
          3.4.2   Optical Signal Generation . . . . . . . . 47
    3.5   Signal Loss in Optic Fibre . . . . . . . . 48
    3.6   Advantages and Disadvantages of Optical Fibres . . . . . . . . 48
    3.7   Exercises . . . . . . . . 49

4   Wireless Communication . . . . . . . . 51
    4.1   Objectives of this Chapter . . . . . . . . 51
    4.2   Frequency Spectrum . . . . . . . . 52
    4.3   Radio Transmission . . . . . . . . 53
    4.4   Terrestrial Microwave Signals . . . . . . . . 54
    4.5   Satellite Communication . . . . . . . . 54
    4.6   Exercises . . . . . . . . 56

5   Channels and Their Properties . . . . . . . . 57
    5.1   Objectives of this Chapter . . . . . . . . 57
    5.2   Introduction . . . . . . . . 58
    5.3   Simplex, Half-duplex and Full-Duplex Communication . . . . . . . . 58
    5.4   Asynchronous and Synchronous Transmission . . . . . . . . 58
    5.5   Multiplexing Signals . . . . . . . . 60
          5.5.1   Synchronous Time Division Multiplexing . . . . . . . . 61
          5.5.2   Statistical Time Division Multiplexing . . . . . . . . 62
          5.5.3   Wavelength Division Multiplexing . . . . . . . . 63
          5.5.4   Combining WDM and STDM . . . . . . . . 63
    5.6   Digital Subscriber Line (DSL) . . . . . . . . 64
    5.7   Switching Techniques . . . . . . . . 66
          5.7.1   Circuit Switching . . . . . . . . 66
          5.7.2   Packet Switching . . . . . . . . 67
          5.7.3   Cell Switching . . . . . . . . 68
    5.8   Exercises . . . . . . . . 68

6   Physical Data Communication Interfaces, Standards and Errors . . . . . . . . 71
    6.1   Objectives of this Chapter . . . . . . . . 71
    6.2   Electrical Signal Interfaces . . . . . . . . 72
          6.2.1   Serial Analog Interface . . . . . . . . 72
          6.2.2   Serial Digital Interface . . . . . . . . 74
    6.3   Optical Signal Interfaces . . . . . . . . 77
          6.3.1   Synchronisation and SONET/SDH . . . . . . . . 78
          6.3.2   The relationship between SONET and SDH . . . . . . . . 79
    6.4   Transmission Errors . . . . . . . . 80
          6.4.1   Sources of Transmission Errors . . . . . . . . 80
          6.4.2   Error Rates . . . . . . . . 81
          6.4.3   Error Codes . . . . . . . . 81
          6.4.4   Hamming Distance . . . . . . . . 82
          6.4.5   Parity . . . . . . . . 83
          6.4.6   Error-Correcting Codes . . . . . . . . 83
          6.4.7   Block check characters . . . . . . . . 85
          6.4.8   Cyclic Redundancy Coding . . . . . . . . 85
    6.5   Exercises . . . . . . . . 89

7   Data Link Protocols . . . . . . . . 91
    7.1   Objectives of this Chapter . . . . . . . . 91
    7.2   Principles of Protocol Design . . . . . . . . 93
          7.2.1   Acknowledgements . . . . . . . . 94
          7.2.2   Flow Control . . . . . . . . 94
          7.2.3   Sequence Numbers . . . . . . . . 95
          7.2.4   Piggybacking . . . . . . . . 96
          7.2.5   Sliding Window Protocols . . . . . . . . 97
          7.2.6   Pipelining . . . . . . . . 99
          7.2.7   Positive Acknowledgements . . . . . . . . 101
    7.3   Synchronous Data Link Control (SDLC) . . . . . . . . 101
          7.3.1   Flag field . . . . . . . . 102
          7.3.2   Address field . . . . . . . . 103
          7.3.3   Control field . . . . . . . . 104
          7.3.4   Acknowledgement and Retransmission . . . . . . . . 105
          7.3.5   Supervisory Format . . . . . . . . 106
          7.3.6   Nonsequenced format . . . . . . . . 107
    7.4   The Data Link Layer on the Internet . . . . . . . . 107
          7.4.1   Serial Line IP (SLIP) . . . . . . . . 107
          7.4.2   Point-to-Point Protocol (PPP) . . . . . . . . 108
    7.5   Exercises . . . . . . . . 109

III Network Protocols . . . . . . . . 111

8   Local Area Networks . . . . . . . . 113
    8.1   Objectives of this Chapter . . . . . . . . 113
    8.2   Introduction . . . . . . . . 114
    8.3   Classification of LANs . . . . . . . . 115
          8.3.1   Topologies . . . . . . . . 115
          8.3.2   Common Bus Topology . . . . . . . . 115
          8.3.3   Star Topology . . . . . . . . 116
          8.3.4   Ring Topology . . . . . . . . 116
          8.3.5   Media Access Control . . . . . . . . 117
    8.4   Random Access Techniques . . . . . . . . 117
          8.4.1   1-persistent CSMA . . . . . . . . 118
          8.4.2   Nonpersistent CSMA . . . . . . . . 118
          8.4.3   CSMA/CD . . . . . . . . 118
          8.4.4   CSMA/CA . . . . . . . . 119
          8.4.5   IEEE LAN Classification . . . . . . . . 119
    8.5   Logical Link Control (IEEE 802.2) . . . . . . . . 120
    8.6   Ethernet . . . . . . . . 121
          8.6.1   Ethernet Performance . . . . . . . . 124
          8.6.2   Fast Ethernet . . . . . . . . 126
    8.7   Deterministic or Token Passing Protocols . . . . . . . . 127
          8.7.1   Token Ring (IEEE 802.5) . . . . . . . . 127
          8.7.2   Ring Maintenance . . . . . . . . 130
          8.7.3   Token Ring Performance . . . . . . . . 131
          8.7.4   Token Passing Bus (IEEE 802.4) . . . . . . . . 132
    8.8   Wireless LAN (IEEE 802.11) . . . . . . . . 133
    8.9   Network Interconnection . . . . . . . . 135
    8.10  Exercises . . . . . . . . 136

9   Wide Area Networks . . . . . . . . 137
    9.1   Objectives of this Chapter . . . . . . . . 137
    9.2   Overview . . . . . . . . 138
    9.3   Datagram Service, Virtual Circuit Service . . . . . . . . 138
    9.4   Internal Structure of the Network . . . . . . . . 140
    9.5   Routing Techniques . . . . . . . . 143
    9.6   Congestion Control . . . . . . . . 145
          9.6.1   Preallocation of Buffers . . . . . . . . 146
          9.6.2   Discarding Packets . . . . . . . . 146
          9.6.3   Choke Packets . . . . . . . . 147
    9.7   Flow control . . . . . . . . 148
    9.8   Deadlock Prevention . . . . . . . . 149
    9.9   Frame Relay Networks . . . . . . . . 150
          9.9.1   Circuit Connections . . . . . . . . 152
          9.9.2   Committed Information Rates . . . . . . . . 152
          9.9.3   Local Management Interface (LMI) . . . . . . . . 154
    9.10  Asynchronous Transfer Mode (ATM) Networks . . . . . . . . 155
          9.10.1  Quality of Service (QoS) . . . . . . . . 156
          9.10.2  ATM Cells . . . . . . . . 158
          9.10.3  ATM Service Classes . . . . . . . . 159
          9.10.4  Real-Time Services . . . . . . . . 159
    9.11  The X.25 Packet Interface . . . . . . . . 161
    9.12  Internet Network Layer Protocol (IP) . . . . . . . . 167
    9.13  Exercises . . . . . . . . 170

IV  Advanced Topics . . . . . . . . 173

10  Advanced Network Technologies . . . . . . . . 175
    10.1  Objectives of this Chapter . . . . . . . . 175
    10.2  Fibre Distributed Data Interchange . . . . . . . . 176
          10.2.1  FDDI Layered Structure . . . . . . . . 176
          10.2.2  MAC Functional Operation . . . . . . . . 178
          10.2.3  FDDI Frame and Token Formats . . . . . . . . 179
          10.2.4  Data Encoding . . . . . . . . 180
          10.2.5  Physical Layer (PHY and PMD) Operation . . . . . . . . 180
          10.2.6  Timed Token Rotation (TTR) Protocol . . . . . . . . 182
          10.2.7  Ring Initialization . . . . . . . . 182
          10.2.8  Data Transmission . . . . . . . . 183
          10.2.9  FDDI Token Timing . . . . . . . . 185
          10.2.10 Token Rotation Time Sequence Diagram . . . . . . . . 186
    10.3  Distributed Queue Dual Bus (DQDB) Networks . . . . . . . . 187
          10.3.1  Background . . . . . . . . 187
          10.3.2  DQDB Architecture . . . . . . . . 188
          10.3.3  DQDB Virtual Queue . . . . . . . . 190
          10.3.4  DQDB Medium Access Control (MAC) Algorithm . . . . . . . . 191
          10.3.5  Performance of DQDB Protocol . . . . . . . . 193
          10.3.6  Advantages of DQDB . . . . . . . . 193
          10.3.7  Disadvantages of DQDB . . . . . . . . 193
          10.3.8  Protocol Enhancements . . . . . . . . 194
          10.3.9  The Future of DQDB . . . . . . . . 195
    10.4  Exercises . . . . . . . . 195

11  Network Security . . . . . . . . 197
    11.1  Objectives of this Chapter . . . . . . . . 197
    11.2  Introduction . . . . . . . . 198
    11.3  Access Control and Passwords . . . . . . . . 199
          11.3.1  Passwords . . . . . . . . 200
    11.4  Conventional Encryption . . . . . . . . 201
          11.4.1  Introduction and Historical Overview . . . . . . . . 201
          11.4.2  DES . . . . . . . . 204
    11.5  Public Key Cryptography . . . . . . . . 207
          11.5.1  Diffie-Hellman Key Exchange . . . . . . . . 207
          11.5.2  RSA . . . . . . . . 209
    11.6  Encryption's Place in a Network . . . . . . . . 210
          11.6.1  End-to-End Encryption . . . . . . . . 211
          11.6.2  Link Encryption . . . . . . . . 211
    11.7  Digital Signatures & Authentication . . . . . . . . 212
          11.7.1  Cryptographic Checksums . . . . . . . . 212
          11.7.2  Digital Signatures . . . . . . . . 214
    11.8  Network Based Attacks . . . . . . . . 216
          11.8.1  TCP/IP Connection Set-up Primer . . . . . . . . 217
          11.8.2  The SYN Attack . . . . . . . . 218
          11.8.3  ICMP Attacks . . . . . . . . 219
          11.8.4  Simple Attackers: Smurf and Fraggle . . . . . . . . 219
          11.8.5  Coordinated and Distributed Network Based Attacks . . . . . . . . 220
          11.8.6  Countering DDOS Attacks . . . . . . . . 222
    11.9  Firewalls . . . . . . . . 224
          11.9.1  What Constitutes a Firewall . . . . . . . . 225
          11.9.2  Different Configurations of a Firewall . . . . . . . . 228
          11.9.3  Dual-Homed Gateway Firewall . . . . . . . . 228
          11.9.4  Screened Host Firewall . . . . . . . . 229
          11.9.5  Screened Subnet Firewall . . . . . . . . 229
          11.9.6  Summary of Advantages . . . . . . . . 231
    11.10 Exercises . . . . . . . . 231

12  Cellular Networks . . . . . . . . 233
    12.1  Objectives of this Chapter . . . . . . . . 233
    12.2  Introduction . . . . . . . . 234
    12.3  Cellular Concept . . . . . . . . 234
          12.3.1  Frequency Reuse . . . . . . . . 234
          12.3.2  Overview of the Air Interface . . . . . . . . 237
          12.3.3  Roaming Mobile Users . . . . . . . . 237
    12.4  How a Call is Made . . . . . . . . 239
    12.5  GSM . . . . . . . . 240
          12.5.1  Radio Information and Channel Types . . . . . . . . 241
          12.5.2  Channel Types . . . . . . . . 241
    12.6  3G Mobile Networks . . . . . . . . 243
          12.6.1  General Packet Radio Service . . . . . . . . 243
          12.6.2  Roaming in a GPRS Network . . . . . . . . 243
          12.6.3  Data Transmission . . . . . . . . 246
    12.7  Universal Mobile Transmission System . . . . . . . . 246
          12.7.1  Overview . . . . . . . . 247
    12.8  Exercises . . . . . . . . 251

13  Network Middleware . . . . . . . . 253
    13.1  Objectives of this Chapter . . . . . . . . 253
    13.2  Introduction . . . . . . . . 254
          13.2.1  Design Approaches . . . . . . . . 254
    13.3  Common Object Request Broker Architecture (CORBA) . . . . . . . . 254
    13.4  The Object Request Broker (ORB) . . . . . . . . 256
    13.5  Distributed Component Object Model (DCOM) . . . . . . . . 259
    13.6  Java Distributed Computing - RMI . . . . . . . . 260
          13.6.1  Java Distributed Object Model . . . . . . . . 261
          13.6.2  RMI Implementation . . . . . . . . 261
          13.6.3  Marshalling . . . . . . . . 263
    13.7  Exercises . . . . . . . . 264

List of Figures

1.1   Mainframe computer ca. 1970
1.2   One typical configuration of a terminal-oriented network
1.3   A typical configuration of a terminal-oriented network using Statistical Multiplexers
1.4   A typical configuration of a terminal-oriented network using a FEP
1.5   Examples of analog and digital signals . . . . . . . . 10
1.6   Manchester encoding of the byte 1111 0000 . . . . . . . . 11
1.7   The ISO OSI Reference Model . . . . . . . . 13
1.8   Layers, protocols, and interfaces . . . . . . . . 15
1.9   TCP/IP protocol suite . . . . . . . . 18
2.1   Schematic of a conducting coil rotating at ω radians per second through a field of magnetic flux, generating a current in the conductor as it does so . . . . . . . . 24
2.2   Illustrating phase differences in a sinusoidal signal . . . . . . . . 25
2.3   Shielded twisted-pair line . . . . . . . . 27
2.4   A typical coaxial cable . . . . . . . . 28
2.5   Sources of attenuation and distortion of a digital signal . . . . . . . . 30
2.6   A binary signal and its Fourier components . . . . . . . . 32
2.7   Two binary signals, one with double the capacity (number of bits per second) of the other . . . . . . . . 33
2.8   Illustrating the effect of bandwidth on a digital signal . . . . . . . . 35
2.9   Some digital encoding techniques . . . . . . . . 36
2.10  A binary signal (a) amplitude modulated (PAM), (b) frequency modulated (MFSK), (c) phase modulated (PSK) . . . . . . . . 38
2.11  Illustrating pulse code modulation . . . . . . . . 40
2.12  A transmitted byte . . . . . . . . 41
3.1   The Optical Telegraph . . . . . . . . 44
3.2   Components of an optical fibre cable . . . . . . . . 45
3.3   Multimode and single-mode optical fibres . . . . . . . . 46
4.1   Satellite communication . . . . . . . . 55
5.1   Simplex, Half-duplex and Full-duplex Communication . . . . . . . . 58
5.2   Asynchronous and Synchronous Transmissions . . . . . . . . 59
5.3   Multiplexing . . . . . . . . 61
5.4   Synchronous Time Division Multiplexing . . . . . . . . 61
5.5   Statistical Time Division Multiplexing . . . . . . . . 62
5.6   Wavelength Division Multiplexing . . . . . . . . 63
5.7   Asymmetric Digital Subscriber Line (ADSL) technology using FDM only . . . . . . . . 64
5.8   Asymmetric Digital Subscriber Line (ADSL) technology using Echo-suppression . . . . . . . . 65
5.9   Circuit switching . . . . . . . . 66
5.10  Conceptual view of packet switching . . . . . . . . 67
6.1   RS-232C/V.24 interface . . . . . . . . 72
6.2   RS-232C Connector . . . . . . . . 73
6.3   Sending and Receiving over an RS-232 Connection . . . . . . . . 73
6.4   Signal lines used in the X.21 serial digital interface . . . . . . . . 74
6.5   X.21 protocol state transition graph . . . . . . . . 76
6.6   SONET configuration . . . . . . . . 78
6.7   An STS-1 SONET frame . . . . . . . . 79
6.8   Mapping basic signals onto an STS-N channel . . . . . . . . 80
6.9   Error burst examples . . . . . . . . 82
6.10  Example of block sum parity checking . . . . . . . . 86
7.1   Conceptual model of Layer 2, 3 and 4 protocols . . . . . . . . 93
7.2   The Alternating Bit (AB) protocol . . . . . . . . 96
7.3   A sliding window of size 1, with a 3-bit sequence number: (a) initially; (b) after the first frame has been sent; (c) after the first frame has been received; (d) after the first acknowledgement has been received . . . . . . . . 97
7.4   Pipelined ARQ protocols . . . . . . . . 100
7.5   The SDLC frame format . . . . . . . . 102
7.6   An example of bit stuffing . . . . . . . . 103
7.7   Three formats for the basic SDLC control field . . . . . . . . 105
7.8   The PPP full frame format for unnumbered mode operation . . . . . . . . 109

8.1

LAN Bus topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

LIST OF FIGURES

xv

8.2

LAN Star topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

8.3

Ring topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

8.4

Correspondence between IEEE 802 layers and the OSI model . . . . . . . . . . . . . . . . . 120

8.5

(a) Position of LLC and (b) Protocol formats . . . . . . . . . . . . . . . . . . . . . . . . . . 120

8.6

Ethernet 10BASE5 topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

8.7

CSMA/CD frame format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

8.8

CSMA/CD MAC process

8.9

Popular networking configuration

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

8.10 Ethernet states: contention, transmission, or idle. . . . . . . . . . . . . . . . . . . . . . . . 125


8.11 Token-Ring frame format

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

8.12 Token-Ring medium access algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128


8.13 Token-Ring frame format and content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
8.14 Token bus network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
8.15 Wireless LAN communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
9.1

One directional Virtual Circuits showing the order on which they are set up . . . . . . . . . 141

9.2

NODE tables corresponding to the Virtual Circuits set up Fig. 9.1 . . . . . . . . . . . . . . 142

9.3

Congestion in the Service Provider network. . . . . . . . . . . . . . . . . . . . . . . . . . . 145

9.4

Congestion in a NODE can be reduced by putting an upper bound on the number of buffers
queued on an output line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

9.5

Store-and-forward lockup: (a) direct, (b) indirect. . . . . . . . . . . . . . . . . . . . . . . . 149

9.6

Message re-assembly lockup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

9.7

Packet Level (top) and Frame Relay (bottom) PDU formats . . . . . . . . . . . . . . . . . . 151

9.8

Packet and Frame Relay processing flow diagrams. . . . . . . . . . . . . . . . . . . . . . . 153

9.9

Illustrating Committed Information Rate (CIR) and discard eligibility of frames . . . . . . . 154

9.10 The ATM principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155


9.11 ATM connection paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
9.12 ATM call establishment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
9.13 ATM cell format at (a) User-Service Provider interface and (b) internal to the network . . . . 158
9.14 The DTE-DCE X.25 ITU-T interface standard for Wide Area packet switched networks. . . 162
9.15 X.25 CALL REQUEST packet format. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
9.16 Sequence diagram of X.25 call establishment, data transfer and call clearing phases . . . . . 163
9.17 X.25 Control packet format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
9.18 X.25 data packet format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
9.19 The IP (Internet Protocol) header format . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

LIST OF FIGURES

xvi

9.20 Source and destination address formats in the IP header . . . . . . . . . . . . . . . . . . . . 169


9.21 Virtual circuit configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10.1 FDDI relationship to OSI model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
10.2 Typical FDDI topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
10.3 FDDI Physical Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
10.4 The FDDI Timed Token protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
10.5 FDDI token rotation timing diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
10.6 DQDB architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
10.7 Open (top) and Closed (bottom) DQDB architecture . . . . . . . . . . . . . . . . . . . . . . 189
10.8 DQDB slot format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
10.9 State diagram of the DQDB MAC Protocol without BWB . . . . . . . . . . . . . . . . . . . 191
10.10Logical states of a DQDB station . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
11.1 Access matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
11.2 Entering a password for a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
11.3 Verifying a password for a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
11.4 The encryption

decryption mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . 201

11.5 The Engima machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204


11.6 Diagramatic view of a rotor machine such as engima . . . . . . . . . . . . . . . . . . . . . 204
11.7 DES encryption mechanism

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

11.8 DES 



  operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
11.9 End-to-end encryption at the application layer . . . . . . . . . . . . . . . . . . . . . . . . . 211
11.10End-to-end encryption at the transport layer . . . . . . . . . . . . . . . . . . . . . . . . . . 211
11.11Link encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
11.12Cryptographic checksum in use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
11.13Using public key encryption for digital signatures . . . . . . . . . . . . . . . . . . . . . . . 215
11.143way handshake process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
11.15Smurf reflector type attack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
11.16A DDOS ready network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
11.17Ingress filterning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
11.18A firewall resides between external networks and internal networks . . . . . . . . . . . . . . 226
11.19A dual homed firewall with a proxy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
11.20A screened subnet firewall, notice the two routers and the subnet between . . . . . . . . . . 230
12.1 Cellular concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235

LIST OF FIGURES

xvii

12.2 Cellular network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235


12.3 Frequency reuse concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
12.4 Handoff scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
12.5 Catering for high speed traffic with umbrella cells . . . . . . . . . . . . . . . . . . . . . . . 239
12.6 Timing diagram for a mobile handset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
12.7 GSM schematic showing interfaces between functional entities . . . . . . . . . . . . . . . . 240
12.8 GSM TDMA frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
12.9 Logical architecture of a GPRS network . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
12.10Distiction of RAs and LAs in a GPRS network . . . . . . . . . . . . . . . . . . . . . . . . 245
12.11GPRS functional state model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
12.12UMTS elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
12.13UTRAN connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
12.14UTRAN connection with new core network (CN) . . . . . . . . . . . . . . . . . . . . . . . 248
12.15Spreading of channels in a CDMA type access system . . . . . . . . . . . . . . . . . . . . . 249
13.1 Organisation of a distributed system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
13.2 OMG reference model architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
13.3 A request being sent through the ORB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
13.4 The object request broker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
13.5 The overall DCOM architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
13.6 How DCOM works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
13.7 RMI stubs and skeletons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
13.8 Expanded view of RMI architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263


List of Tables
4.1 Electromagnetic spectrum and its uses
6.1 Typical payloads
6.2 Transmission speeds of SONET and SDH
6.3 The Hamming code explained
6.4 The Hamming code bit positions
6.5 Calculation of the polynomial code checksum
8.1 IEEE 802.3 networks with 512 bit slot times, 96 bit gap times and jam signals lasting 32 to 48 bits
8.2 Token ring control frames
9.1 Summary of the major differences between Virtual Circuit and Datagram service
9.2 LAN and WAN characteristics compared
9.3 X.25 PDU format
10.1 FDDI Frame and Token format
10.2 FDDI 4B/5B codes
11.1 Different methods of arbiter working


Part I

Overview of Telecommunication and


Computer Networks

Chapter 1

Basic Concepts
1.1 Objectives of this Chapter
There is a great deal to know about telephone and computer networks, or more correctly telecommunication
networks, since with the advent of the digital age and cellular networks the distinction is becoming increasingly
fuzzy.
1. We start off by explaining the origins of computer networks and why the telephone network has played
a role in computer networks from the very start.
2. Without understanding the difference between analog and digital signals, one will never comprehend
why networks are playing such an increasingly important role in the world of computing, so we
explain those fundamentals. In some countries of the world, this is the sort of information that is
taught in the 13th grade at school. Maybe it is better than learning Shakespeare, but I doubt it.
3. Computers, like humans, need rules when communicating. These are called protocols, as we shall
explain.
4. The software that makes communication between computers possible is some of the most complicated
that can be written. In order to understand how it works, we divide it into layers which group the
functions; hence network architecture.
5. Since computers communicate across continents and national boundaries, we need international organisations to make and approve the rules, or protocols. There are many, as we describe in the last part
of this chapter.
A reader who understands everything discussed in this chapter will be very aware of how much there still
is to learn in telecommunication and computer networks, but will have a clear understanding of the very
fundamental concepts.

CHAPTER 1. BASIC CONCEPTS

1.2 Introduction
A decade or two ago we associated the word communication with the use of the telephone or radio. While
those applications remain, we nowadays think almost exclusively of the Internet and of wireless,
cellular telephones. Networking has become so pervasive that we simply take it for granted. One of the greatest
impacts that networking has had on society is to bring together vast sources of information, be they human
minds or computer systems: the emergence of a global village is changing the way society interacts and
does business.
An understanding of the principles behind computer networks is therefore something that every student
should have. Communication and Information Technology is advancing at an exponential rate, and we are
living in an exciting age where we are shaping the future of this technology. This introduction seeks to
provide the student with the core concepts behind communication networks.

1.3 Evolution of Computer Networks


The evolution of computer networks is best traced by following the development of computing resources
used by large organisations. The earliest commercially available computers were characterized by expensive hardware and relatively primitive software. Typically, an organisation would purchase a single computer
which would then be centrally located in a large air-conditioned room. It would consist of a central processing unit with a limited quantity of primary memory, some secondary disc storage, a printer, a punch-card
reader and an operator console. Something looking somewhat like that in Fig. 1.1.

Figure 1.1: Mainframe computer ca. 1970


Users normally prepared their data off-line on a card punch located in a different room, and the computer
operator would then load and run the prepared programs sequentially.
As computer hardware became faster and its operating software advanced, fast secondary storage (large
magnetic drums and later magnetic disks) and multi-programming operating systems came into existence.
This made it possible to timeshare the central processing unit between a number of active programs, thereby

1.3 Evolution of Computer Networks

allowing several users to run their programs interactively and to access stored data simultaneously via their
own individual terminal. The terminals were normally electromechanical teletypewriters (TTYs) similar to
those already in use at the time in international telex networks. They were designed, therefore, to transmit
and receive one character at a time over long distances in serial (or sequential) mode.
The computers then became known as multi-access systems providing on-line access to stored programs and
data. Initially, the terminals were all located in close proximity to the main computer complex with a simple
serial line connection between the computer and the terminal.
Users increasingly found it useful though to have a computer terminal on their desk. Within a building or
on a campus that posed no problem. However, over longer distances the expense became important and,
above all, communication (of voice at first, but now data as well) was the monopoly of a central public
organisation: Telkom!
Technically there was the additional issue that telephone networks carried analogue signals (explained in the

next section) and computers used binary or digital signals. Hence modems were invented to carry computer
communications nationally over wide geographical areas. An operational computer system typical at that
time is shown in Fig. 1.2 where the letters PSTN stand for Public Switched Telephone Network.

Figure 1.2: One typical configuration of a terminal-oriented network (terminals attached via modems to the PSTN, which connects to the central computer).


The use of a switched (switched in that connections could be changed from time to time to connect two
individual users, the computer and the terminal in this instance) telephone network as the basic data communication medium meant that the communication line costs could no longer be regarded as insignificant and,
indeed, soon constituted a substantial proportion of the system operating costs. To minimize these costs,
therefore, devices such as statistical multiplexers and cluster controllers were introduced. Essentially, these
allowed a single communication line, often on permanent lease from the telecommunications authorities, to
be shared between a number of simultaneous users all located, for example, at the same remote site. An
example of such a system is shown diagrammatically in Fig. 1.3.
In addition, the increasing level of usage of computers soon gave rise to systems of hundreds of computers
(A modem, or modulator-demodulator, is used to convert the digital signals of a computer into equivalent analogue signals. These concepts are explored later in this chapter.)

Figure 1.3: A typical configuration of a terminal-oriented network using Statistical Multiplexers (groups of terminals share a line to the central computer through a statistical multiplexer and modem pair).

within an organisation. The effect of this was that the central computer could no longer cope with the
processing overheads associated with servicing the various communication lines in addition to its normal
processing functions. Consequently a special device, called a Front End Processor (FEP) was introduced
for the job of controlling the communication lines. Such a system is shown diagrammatically in Fig. 1.4.

Figure 1.4: A typical configuration of a terminal-oriented network using a FEP (CC = cluster controller; the FEP sits between the central computer and the PSTN).


The structures shown in Figs 1.3 and 1.4 were particularly prevalent in organizations normally holding large
amounts of data at a central site, such as the major clearing banks and airlines.
In many other organisations it is not necessary, and in fact not desirable, to hold all the data centrally, and
hence it became commonplace for organisations to have a number of autonomous computer systems located
at different geographical locations. Typically, these systems provided a local computing function, but there
was often a requirement for them to communicate with each other to share resources and data. In such
a network, the internal message units used may be long. This can result in a significant delay between a
message entering the network and the time it leaves the network (known as the response time), due to long
messages waiting upon each other at the various resources along the way.


It then became economically feasible for TelCos to provide a separate, autonomous computer communication network. Such communication networks, which carry mostly computer data, normally operate using
smaller units of data known as data packets. Such a network is therefore commonly referred to as a Packet Switched
Network. Increasingly the term IP network is also used, for reasons that will become clear only much
later. Also, since the interconnected computers are normally geographically widespread, such a network is
also called a Wide Area Network (WAN).
The next obvious development was that computers of different manufacturers (IBM, SUN Microsystems,
Microsoft, etc) with their different architectures and applications need to communicate with one another.
The need for open (for everybody to see) interface standards became all important. This was the status at the
beginning of the 1980s, since overtaken by the advent of the Internet in the middle of the last decade.
What was to become the Internet, was first proposed in 1967 in the United States of America as an experimental computer network. For a number of years before 1967, DARPA, the (Defense) Advanced Research
Projects Agency of the United States Department of Defense had been funding the growth and development
of many multiclass timeshared computer systems at a number of university and industrial research centres
across the United States. By 1967, many of these had shown themselves to be valuable computing resources
and it was recognized that the Department of Defense as well as the scientific community could benefit
greatly if there were to be made available a communication service providing remote access from any terminal to all of these systems. In early 1969 a contract was awarded for the implementation of the ARPANET
to Bolt, Beranek and Newman (BBN) an engineering firm based in Massachusetts.
Simultaneously DARPA funded the research which resulted in a set of network standards that specify the
details of how computers communicate, as well as a set of conventions for interconnecting networks and
routing traffic. These are now known as the Transmission Control Protocol/Internet Protocol or TCP/IP
which can be used to communicate across networks which represent a wide variety of underlying network
technologies. What started as ARPANET is now known as the Internet, literally a network that (inter)connects many worldwide networks. The evolution has, however, not ended and is being fueled further by
the popularity of wireless, cellular communication. In order for the Internet to reach the very edge of the
network, the last bastion of traditional voice transmission has to be broken: in other words, voice must be
transmitted over data (or IP) networks, including the wireless link.
Now that we know a little about their evolution, we shall start from scratch to understand the exciting world
of telecommunication networks.

1.4 Electrical Signals


Signals to represent data, voice conversations or video transmissions are either electromagnetic radio waves,
electrical signals flowing in a conductor or optical (light) signals. Optical signals are being used more and
more but we shall discuss those later.
An electrical signal is transmitted along a conductor by varying some electrical property such as voltage or
current. Such signals can be either analog or digital, as illustrated in Fig. 1.5; digital signals are also called
binary signals. An analog signal is continuous in time, whereas a digital signal takes on discrete values and
changes instantaneously from one voltage or current level to another. The number of signal levels is always
2^n for some n = 1, 2, 3, .... Originally n = 1, giving two levels, but as technology improves it has become
normal for more than two levels to be represented; with four levels (n = 2), for example, each signal element
carries two bits. In practice the way in which the binary data are represented or coded by the

(TelCo is a general term for the telecommunication company monopolies which, alas, still exist in too many countries.)

Figure 1.5: Examples of analog and digital signals (digital signals switch between discrete current or voltage levels; analogue signals vary continuously, with one cycle marked on the periodic example).


digital signal is all but simple. One method commonly used is known as Manchester Encoding illustrated in
Figure 1.6.
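The idea can be sketched in a few lines of code. The sketch below is not taken from these notes: it assumes the IEEE 802.3 convention (a binary 1 is sent as a low-to-high transition in mid-bit, a 0 as high-to-low), and the LO/HI level labels are invented for illustration.

```python
def manchester_encode(bits):
    """Manchester-encode a string of '0'/'1' characters.

    IEEE 802.3 convention (an assumption, not necessarily the
    convention of Figure 1.6): every bit becomes two half-bit
    levels with a transition in the middle -- a 1 is low-then-high,
    a 0 is high-then-low -- so the signal carries its own clock.
    """
    levels = []
    for b in bits:
        levels.extend(["LO", "HI"] if b == "1" else ["HI", "LO"])
    return levels

# The byte from Figure 1.6: eight bits become sixteen half-bit levels.
print(manchester_encode("11110000"))
```

Note that every bit period contains a transition, which is what allows the receiver to recover the sender's clock from the signal itself.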
Since analog signals vary continuously in time they can be represented by mathematical formulae and manipulated accordingly. Analogue signals do not have to be cyclical, that is, they need not repeat themselves after a
fixed time T called the period. If they are, the number of complete cycles per second (measured in Hertz)
is the inverse of the period and is called the frequency, usually denoted by the symbol f. Thus f = 1/T
cycles/second, or T = 1/f seconds; a signal with a period of 1 ms, for example, has a frequency of 1000 Hz.
Signals with periodic wave forms are the most useful to us in telecommunication since we can modulate them
to carry digital signals, as we shall see in Chapter 2.
Analog signals not only represent current flowing in a copper conductor, but most importantly may be
electro magnetic waves travelling through space in which case we call them radio waves. Depending on
their frequency, radio waves can travel around the world, which means that everyone can receive them.
Although this is exactly what radio waves were originally intended for, it is simultaneously a problem if too
many parties want to use the limited frequency spectrum.
When the frequency of the electro magnetic waves is very high we refer to them as microwaves which have
the advantage (for the intended purpose) that they are easily attenuated by obstacles like buildings in their
way (the same effect that chickens in a microwave experience) and are only really reliable when they are
transmitted to a receiver which is in direct line of sight. With this property it is possible to re-use the


Figure 1.6: Manchester encoding of the byte 1111 0000 (unencoded binary signal above, Manchester encoded signal below).


scarce frequencies in geographically separated areas. If we transmit the microwave signals at low power
in addition, we can divide a geographical area into cells where the transmissions do not interfere with
each other provided the frequencies in adjacent cells are different. This is the way that a GSM (cell phone)
network works. We shall return to all these concepts in later chapters.

1.5 Optical Signals


Telecommunication Networks also use optical fibre to transport data encoded as ON/OFF pulses of light.
Depending on the type of fibre, a transmitter at one end of the fibre uses a light emitting diode (LED) or
Laser to send pulses of light down the fibre. A receiver at the other end uses a light detector such as a Photo
Intrinsic Diode (PIN) or Avalanche Photo Diode (APD) to detect the pulses.
The number of bits per second (bps) that can be transmitted along an optical fibre is virtually unlimited
since it is theoretically possible to send several beams (called modes) of light of different frequencies down
a fibre, each typically carrying 2 Gbps. We will return to optical signals and the associated standards in
Chapter 3.

1.6 A Matter of Protocol


Now that we understand a little about the signals travelling between two communicating parties in a telecommunications network, we turn to the rules of communication or protocols which are needed for the parties
to exchange information without error.
Essentially, a computer network consists of two or more computers talking with each other. The rules that
govern the exchange of information, and the exact structure of that information are known as a protocol. As
an introduction to the idea of a protocol, we consider an analogy from everyday life namely, a telephone
conversation between two people living in different cities in different countries. As we all know the process
is something like the following:
First of all, the caller picks up the telephone receiver. If a dialing tone is heard, he will select the number
of the callee on the buttons of his telephone. The called number constitutes the address of the callee. If no
dialing tone is heard, the caller would normally discontinue his attempts to place the call.
Once the call goes through, the remote instrument either rings and is answered or not. The callee could also
be using the device. Should the callee answer the ringing telephone, a verbal handshake typically occurs.
Note that in this case it is a name, rather than a number, that identifies the caller and callee to each other.


(A gigabit is 10^9 bits.)


Now that the connection has been established, the two parties conduct a conversation. Certain conventions
also apply in this case. Two people do not (normally that is!) talk simultaneously; if they did, a process of
re-synchronization would need to occur. If one person speaks too quickly, the other may ask him to slow
down.
If the line should fade or if there is noise on the line, messages like "I did not hear your last words" or
"Would you please repeat the last sentence?" would ensure that the information passing between the two
parties remains error-free.
Should an abrupt cut-off occur, we know from experience that a frustrating situation of deadlock might
arise when both people try simultaneously to call each other back. There are no rules for this case, which is an
error in the specification of the (human) protocol.
At the conclusion of their conversation the caller and callee go through similar conventions for terminating
their call and would eventually put their respective telephone receivers down, thereby not only ending their
conversation, but also breaking the physical (electrical) communication link.
When two processes in separate computers communicate with each other, similar conventions for:
- addressing
- call establishment and call termination
- error checking
- information recovery
- flow control
must be decided upon.
Once again, the rules and conventions used in a session of communication are known as a protocol. The
specification of a protocol also includes a specification of the format and content of the messages being
passed. As computers are dumb, deterministic machines, they require a far more elaborate and exact protocol
than in the case of a conversation conducted in natural language between two human beings.
The telephone analogy also reveals another basic concept important in the study of computer communication. This is the fact that there are a number of stages to the conversation between communicating
computer processes:
- call-establishment phase
- data (information) transfer phase
- call-clearing phase
- idle phase
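These phases can be pictured as a small state machine. The sketch below is purely illustrative: the state and event names are invented and do not correspond to any particular standard protocol.

```python
class Connection:
    """Toy model of the phases of a connection-oriented protocol:
    idle -> call establishment -> data transfer -> call clearing -> idle.
    State and event names are illustrative inventions."""

    TRANSITIONS = {
        ("idle", "call_request"): "establishing",
        ("establishing", "call_accepted"): "data_transfer",
        ("data_transfer", "clear_request"): "clearing",
        ("clearing", "clear_confirm"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        # A deterministic machine: any event not allowed in the current
        # state is a protocol error rather than an improvisation.
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"event {event!r} not valid in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state

c = Connection()
for event in ["call_request", "call_accepted", "clear_request", "clear_confirm"]:
    print(event, "->", c.handle(event))
```

The point of the sketch is the error case: unlike the two humans on the telephone, a computer has no common sense to fall back on, so the protocol specification must say what happens for every event in every state.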

1.7 Network Architecture


Computers have different architectures, understand different languages, store data in different formats, and
communicate at different rates. Consequently there is much incompatibility, and communication is difficult.
In fact, how do computers manage to communicate at all? They communicate the same way that the
people in our analogy do. In the analogy, there are rules which apply at the physical level (number called,
dialling tone, engaged signal, etc.) and rules which apply at, shall we say, the logical level between the humans
themselves. The same is true for computers.
As long ago as 1976 the International Standards Organisation (ISO) recognised the need for some standardisation of computer network interfaces. It was apparent even at that early stage, that for the reasons
mentioned already, it would be expedient to describe the architecture of the network software in layers in
order to facilitate the interconnection of unlike systems. In other words, to provide an open system that any
other system could connect to. ISO thus set out to standardise the Open System Interconnection Reference
Model, or OSI model for short, intended as a reference architecture for network design. This model is
illustrated in Fig. 1.7.
[Figure: Host A and Host B each contain the seven layers. The Application, Presentation, Session and
Transport layers of the two hosts exchange messages; the Network layers exchange packets; the Data Link
layers exchange frames; the Physical layers exchange bytes over the physical medium.]
Figure 1.7: The ISO OSI Reference Model.
Between each pair of adjacent layers in the OSI model there is an interface. The interface defines which
services the lower layer offers to the upper one. When network designers decide how many layers to include
in a network and what each one should do, one of the most important considerations is having clearly defined
interfaces between the layers. Having clearly defined interfaces, in turn, requires that each layer performs a
specific collection of well understood functions. In addition to minimizing the amount of information that
must be passed between layers, clear cut interfaces also make it simpler to replace the implementation of one
layer with a completely different one (e.g., when all the telephone lines are replaced by satellite channels),
because all that is required of the new implementation is that it offers exactly the same set of services to the
next layer higher up as the old implementation did.
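This point about replaceable implementations can be made concrete with a small sketch (the names here are illustrative, not from any standard): as long as a new implementation offers the same set of services, the layer above is unaffected.

```python
from abc import ABC, abstractmethod

class LayerService(ABC):
    """The set of services a layer offers to the layer above it.

    Any implementation -- telephone line, satellite channel, loopback --
    may be substituted, provided it offers exactly these services.
    """

    @abstractmethod
    def send(self, data: bytes) -> None: ...

    @abstractmethod
    def receive(self) -> bytes: ...

class LoopbackChannel(LayerService):
    """Trivial implementation: data sent is queued and handed straight back."""

    def __init__(self):
        self._queue = []

    def send(self, data: bytes) -> None:
        self._queue.append(data)

    def receive(self) -> bytes:
        return self._queue.pop(0)
```

A satellite-based implementation of the same interface could replace LoopbackChannel without any change to its users.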
The set of layers and protocols is called the network architecture. The specification of the architecture must
contain enough information to allow an implementor to write the program for each layer so that the program
will correctly obey the appropriate protocol. Neither the details of the implementation nor the specification

CHAPTER 1. BASIC CONCEPTS

14
of the interfaces are part of the architecture.

When describing the operation of any of the protocol layers, it is important from the outset to discriminate
between:
1. the services provided by the layer
2. the protocol (i.e., logical operation of the layer)
3. the services used by that layer
This is important because only then can the function of each layer be defined in the context of the other
layers. It also means that a person implementing one protocol layer needs only a knowledge of the services
the layer is to provide to the layer above, the internal operation (protocol) of the layer, and the services
provided by the layer below to transfer the appropriate items of information, called Protocol Data Units or
PDUs, to the corresponding layer in a remote computer.
Equally, the specification of each protocol layer comprises two sets of documents:
1. service definition document
2. protocol specification document.
The service definition document contains a specification of the services provided by that layer to the layer
above it, that is, the user services. These are in the form of a defined set of service primitives. These service
primitives are similar to procedure calls in a high-level language, each with an associated set of service
parameters. These primitives are invoked to and from the layer through service access points (SAPs), seen
in Fig. 1.8: e.g. a request, indication, response and confirm primitive. As will be seen in later chapters, it is
through the service primitives that the layer above achieves the transfer of information to the correspondent
layer in a remote system.
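A confirmed service can thus be pictured as a fixed temporal sequence of the four primitive types crossing the SAPs. The sketch below models this (the CONNECT service name and the parameter-free primitives are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class Primitive:
    """One OSI service primitive crossing a service access point."""
    service: str   # e.g. "CONNECT"
    kind: str      # one of: request, indication, response, confirm
    params: dict = field(default_factory=dict)

def confirmed_service(service: str) -> list:
    """The four primitives of a confirmed service, in temporal order."""
    return [
        Primitive(service, "request"),     # caller's user -> its layer
        Primitive(service, "indication"),  # remote layer  -> remote user
        Primitive(service, "response"),    # remote user   -> remote layer
        Primitive(service, "confirm"),     # caller's layer -> caller's user
    ]
```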
We need to clarify the rather cryptic notations N, N+1 and N-1 seen in Fig. 1.8. A layer in the OSI model
is generically known as the (N)-layer. The layer above it (if one exists) is designated the (N+1)-layer; similarly,
the one below it is the (N-1)-layer. The peer process abstraction is crucial to all network design. Without
this technique, it would be impossible to partition the design of the complete network into several smaller,
manageable, design problems, namely the design of the individual layers.
It was not intended that there should be a single standard protocol associated with each layer. Rather a set
of standards is associated with each layer, each offering different levels of functionality. In each of the
following sections we briefly discuss each layer of the architecture.

1.7.1 Physical layer


The physical layer is concerned with transmitting raw bits over a communication channel. The design issues
have to do with making sure that when one side sends a 1 bit, it is received by the other side as a 1 bit, not as
a 0 bit. Typical questions here are how many volts should be used to represent a 1 and how many for a 0, how
many microseconds a bit occupies, whether transmission is simplex or duplex, how the initial connection
is established and how it is broken when both sides are finished, how many pins the network connector has
and what each pin is used for. In some cases a transmission facility consists of multiple physical channels,
in which case the physical layer can make them look like a single channel, although higher layers can also
perform this function.

[Figure: in layer N+1, a service user and its correspondent user access the user services (N-services) of the
layer N service provider through SAPs; within layer N, peer protocol entities exchange PDUs along a
(logical) exchange path, using the used services ((N-1) services) of layer N-1 below, again through SAPs.
SAP = service access point.]

Figure 1.8: Layers, protocols, and interfaces.

1.7.2 Data link layer


The task of the data link layer is to take a raw transmission facility and transform it into a line that appears
free of transmission errors to the network layer. It accomplishes this task by breaking the input data up
into data frames, transmitting the frames sequentially, and processing the acknowledgement frames sent
back by the receiver. Since Layer 1 merely accepts and transmits a stream of bits without any regard to
meaning or structure, it is up to the data link layer to create and recognize frame boundaries. This can be
accomplished by attaching special bit patterns to the beginning and end of the frame. These bit patterns can
unintentionally occur in the data, so special care must be taken to avoid confusion. It is up to this layer to
solve the problems caused by damaged, lost, and duplicate frames, so that the network layer can assume it is
working with an error-free (virtual) line.
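One common way of marking frame boundaries while avoiding confusion with data bytes is byte stuffing. The sketch below assumes HDLC-style FLAG (0x7E) and ESC (0x7D) values purely for illustration; it is not the specific method of any protocol discussed later.

```python
FLAG, ESC = 0x7E, 0x7D  # assumed delimiter and escape values (HDLC-style)

def frame(payload: bytes) -> bytes:
    """Delimit payload with FLAG bytes, escaping any FLAG/ESC inside it."""
    out = bytearray([FLAG])
    for byte in payload:
        if byte in (FLAG, ESC):
            out += bytes([ESC, byte ^ 0x20])  # escape, then flip bit 5
        else:
            out.append(byte)
    out.append(FLAG)
    return bytes(out)

def unframe(data: bytes) -> bytes:
    """Inverse of frame(); assumes data is exactly one well-formed frame."""
    assert data[0] == FLAG and data[-1] == FLAG
    out, escaped = bytearray(), False
    for byte in data[1:-1]:
        if escaped:
            out.append(byte ^ 0x20)
            escaped = False
        elif byte == ESC:
            escaped = True
        else:
            out.append(byte)
    return bytes(out)
```

Because every FLAG inside the payload is escaped, the receiver can always recognise the frame boundaries unambiguously.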

1.7.3 Network layer


The network layer controls the operation of the subnet. Among other things, it determines the chief characteristics of the DCE-DTE interface, and how Network Protocol Data Units or NPDUs, more commonly
known as packets, the units of information exchanged in the network layer, are routed within the network.
A major design issue here is the division of labour between the DCEs and hosts, in particular who should
ensure that all packets are correctly received at their destinations, and in the proper order. What this layer of
software does, basically, is accept messages from the source host, convert them to packets, and see to it that
the packets get directed towards the destination.


Note that the ISO term for a frame is a Data-Link-Service-Data-Unit. We shall use the term frame.


The network layer also determines what type of service to provide to the transport layer. One type of service
is a (virtual) point-to-point channel (called a virtual circuit) that delivers messages in the order in which they
were sent. Another kind of network service, called datagram service, gives no guarantee that packets
will be delivered, or that they will be delivered in sequence. The type of service is determined when the
connection is established.

1.7.4 Transport layer


The basic function of the transport layer is to accept data from the session layer, split it up into smaller units,
if need be, pass these to the network layer, and ensure that the pieces all arrive correctly at the other end.
Furthermore, all this must be done in the most efficient possible way, and in a way that isolates the session
layer from the inevitable changes in the hardware technology.
Under normal conditions, the transport layer creates a distinct network connection for each transport connection required by the session layer. The transport layer is a true source-to-destination or end-to-end layer.
In other words, an application on the source machine carries on a conversation with another application on
the destination machine, using the message headers and control messages. At the lower layers, the protocols
are carried out by each machine and its immediate neighbours, and not by the ultimate source and destination machines, which may be separated by many DCEs. The difference between layers 1 to 3, which are
chained, and layers 4 to 7, which are end-to-end, is also illustrated in Fig. 1.7.

1.7.5 Session layer


It is with this layer that an application process must negotiate to establish a connection with a process on
another machine. Once the connection has been established, the session layer can manage the dialogue in
an orderly manner, if the user has requested that service.
A connection between applications (or strictly speaking, between two presentation-layer processes) is usually called a session. A session might be used to allow a user to log into a remote time-sharing system or
to transfer a file between two machines. To establish a session, the user must provide the remote address he
wants to connect to. Session addresses are intended for use by users or their programs, whereas transport
addresses are intended for use by transport stations, so the session layer must be able to convert a session
address to its transport address in order to request that a transport connection be set up.
Setting up a session is a complicated operation. To start with, it may be necessary that each end of the
session be properly authenticated, to prove that it has the right to engage in the session and to ensure that the
correct party receives the bill later. Then the two ends must agree on a variety of options that may or may
not be in effect for the session, such as whether communication is to be half-duplex or full-duplex.
Another function of the session layer is management of the session once it has been set up. For example, if
transport connections are unreliable, the session layer may be required to attempt to transparently recover
from broken transport connections. As another example, in many data base management systems, it is
crucial that a complicated transaction against the data base never be aborted halfway, since doing so would
leave the data base in an inconsistent state. The session layer often provides a facility by which a group
of messages can be bracketed, so that none of them are delivered to the remote user until all of them have
arrived. This mechanism ensures that a hardware or software failure within the subnet can never cause a
transaction to be aborted halfway through. The session layer can also provide for ordering of messages when
the transport service does not. In short, the session layer takes the (possibly bare bones) communication
service offered by the transport layer and adds application-oriented functions to it. In some networks, the
session and transport layers are merged into a single layer, or the session layer is absent altogether, if all that
the users want is raw communication service.

1.7.6 Presentation layer


The presentation layer is concerned with the presentation (syntax) of data transferred between two communicating
application processes. This layer performs functions that are requested sufficiently often to warrant finding
a general solution for them, rather than letting each user solve the problems. These functions can often be
performed by library routines called by the user.
A typical example of a transformation service that can be performed here is text compression. Encryption
to provide security is another possibility. Conversion between character codes, such as ASCII to EBCDIC,
might often be useful.

1.7.7 Application layer


The highest layer, the application layer, works directly with the user or application programs. The application layer provides user services such as electronic mail, file transfer and resource allocation. For example,
the application layer on one end should appear to send a file directly to the application layer on the other
end independent of the underlying network or computer architectures. These services are known as application service elements or ASEs. To discriminate between ASEs that perform common application-support
services and those that perform specific application-support services, the term common application service
element or CASE is used to refer to the first (e.g. File Transfer and Management (FTAM)), and the term
specific application service element or SASE is used to refer to the second.

1.7.8 Conclusion - Standards and ISO


The ISO OSI Model was originally intended as a universal standard, of course. Suffice it to say that it is still
a very useful conceptual model, but in practice it has been overtaken by the Internet protocols. The protocol
architecture associated with the Internet is known as the Transmission Control Protocol/ Internet Protocol
(TCP/IP). It includes both network-oriented protocols and application support protocols. Because TCP/IP
is in widespread use with an existing internet, many of the TCP/IP protocols have been used as the basis for
ISO standards. Figure 1.9 shows some of the standards associated with the TCP/IP protocol suite.

1.8 International Standards Bodies


The coordination on a world-wide scale to ensure that people in one country can communicate with their
counterparts in another is provided by an agency of the United Nations called the International Telecommunications Union (ITU). ITU has three main organs, two of which deal primarily with international radio
broadcasting and one of which is primarily concerned with telephone and data communication systems.
ITU-T's task is to make technical recommendations about telephone, telegraph and, more recently, data
communication network interfaces. These often become internationally recognised standards. One example
is V.24 which specifies the placement and meaning of the various pins on the connector used by most
asynchronous terminals and V.27 which specifies the electrical characteristics.

[Figure: layers 5-7 (end-user/application) carry the file transfer protocol FTP, the remote terminal protocol
TELNET, the simple mail transfer protocol SMTP and the name server protocol NSP; layer 4 provides TCP
(transmission control protocol) and UDP (user datagram protocol); layers 1-3 provide IP (internet protocol)
over IEEE 802.X or X.25 LAN/WAN networks.]

Figure 1.9: The TCP/IP protocol suite.


Other standards bodies include, in the United States, the Institute of Electrical and Electronics Engineers (IEEE),
and in Europe, ETSI (the European Telecommunications Standards Institute), a non-profit organization whose mission is to produce the telecommunications standards that will be used for decades to come
throughout Europe and beyond. ETSI promotes the world-wide standardization process whenever possible. Its Work Programme is based on, and co-ordinated with, the activities of international standardization
bodies, mainly the ITU-T and the ITU-R. There is also the European Computer Manufacturers Association
(ECMA) and several other regional standards bodies on both sides of the Atlantic.
Not surprisingly there is a great deal of cooperation and consultation between the various bodies although
each claims to be independent of the other. The ITU-T V.24 physical interface standard, for instance, is also
known in the USA as the EIA standard RS232D.
The International Standards Organization (ISO), is another group that tries to make the world a better place
via standards. Its constituents are the national standards organizations in the member countries. On issues
of telecommunication standards, ISO and ITU-T sometimes cooperate to avoid the irony of two official and
mutually incompatible international standards.

1.9 Exercises

Exercise 1.1 Define a computer protocol.


Exercise 1.2 Explain why computers need a strictly defined communications protocol.
Exercise 1.3 A computer is sitting idle, waiting for an incoming call. Upon receiving this call, the two
computers will exchange some data. Draw the state transition diagram for this machine, indicating the four
phases of communication.
Exercise 1.4 Give two reasons for using a layered model.
Exercise 1.5 Which of the OSI layers would:


1. break the transmission data up into frames?


2. decide the path that transmitted data will take?
3. provide secure data transmission?
4. do the conversion from ASCII to EBCDIC?
Exercise 1.6 Why do you think it is important for the Data Link Layer to detect/correct errors?
Exercise 1.7 Does the OSI model specify a specific protocol for each layer? If it does, why does it? If not,
why not?
Exercise 1.8 Regarding the OSI reference model:
1. Describe any layer.
2. Describe a typical service used by that layer.
3. Describe a typical service provided by the same layer.
4. How do you think the Service Access Points are implemented?


Part II

Transmission Fundamentals


Chapter 2

Electrical Signals
2.1 Objectives of this Chapter
Although radio and optical signals are becoming very popular, the most common signals around are still
electrical. In this chapter we introduce the basics one requires to understand the technologies to be described
later.
1. We start by explaining that any signal can be decomposed into its Fourier components, which helps us
to understand a lot about signal quality, capacity and bandwidth.
2. Electrical signals travel along conductors, usually copper. The various media are briefly explained.
3. We then investigate the properties of electrical channels and what their features and limitations are,
including attenuation and the relationship between electrical noise and bandwidth.
4. The last sections in this chapter explain how one can transmit digital data as an analog signal and vice
versa.
Without a firm grasp of the material in this chapter, the computer engineer or scientist will not follow much
of the discussion about data synchronisation, etc., which follows in subsequent chapters.


[Figure: a conducting coil rotating in a field of magnetic flux.]
Figure 2.1: Schematic of a conducting coil rotating at speed ω radians per second through a field of magnetic
flux, generating a current in the conductor as it does so.

In order to learn about electrical signals and their properties we use the example of an electric current
flowing in a conductor. Assume that the current is generated by a device known as an alternator, which is
formed by rotating a coil of conductors through a field of magnetic flux as shown schematically in Fig. 2.1.
Let the coil rotate at a constant speed ω radians/sec about an axis (at O in the figure) which is perpendicular
to the uniform magnetic field illustrated by the flux lines. If time is measured from the instant that the
coil is horizontal in position OR, the amount of flux passing through the coil is given by Φ = Φ_max cos θ,
or Φ = Φ_max cos ωt, where θ = ωt.

The voltage v induced in the coil is given by the Faraday-Lenz law as v = -dΦ/dt, so that

    v = ω Φ_max sin ωt                                                    (2.1)
      = V_max sin ωt                                                      (2.2)

where

    V_max = ω Φ_max

If, instead of measuring time from the instant when the coil is in position OR in Fig. 2.1, it is measured from
the point where the coil is at an angle φ ahead of the horizontal (as in Fig. 2.2), then Eq. 2.1 becomes

    v = V_max sin(ωt + φ)

Similarly, if we should start at an angle φ behind the horizontal we have

    v = V_max sin(ωt - φ)

Note that the various signals in Fig. 2.2 are accordingly designated V_max sin(ωt + φ), V_max sin ωt and
V_max sin(ωt - φ).


Actually, electromotive force or e.m.f.; the resulting current has the same sinusoidal form by virtue of Ohm's law.

[Figure: (a) and (b) show the coil positions leading and lagging the horizontal by φ; (c) plots the three
waveforms V_max sin(ωt + φ), V_max sin ωt and V_max sin(ωt - φ) against t.]
Figure 2.2: Illustrating phase differences in a sinusoidal signal

The angle (ωt + φ) in an expression such as v = V_max sin(ωt + φ) is called the phase at time t. In fact, the
only difference between the expressions V_max sin ωt and V_max sin(ωt + φ) is in their phases. The angle φ
is this phase difference and is called the phase angle. At time 0 the coil is φ/2π of a cycle beyond R, so that a
positive phase difference means a lead in phase.

Similarly, if v = V_max sin(ωt - φ) the phase angle is now a lagging one when compared with the original
one. The corresponding waveforms for the three cases are shown in Fig. 2.2(c). The effects of having
leading and lagging phase angles can clearly be seen.

Note that an analog signal such as v = V_max sin(ωt + φ) has three independent parameters, namely amplitude
V_max, frequency (ω = 2πf, as we know) and finally phase φ. If two or more sinusoidal alternating quantities,
with identical frequencies, are added together, their resultant is also a sinusoidal quantity, having the same
frequency as that of the components, as can easily be verified.
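This last fact is easily verified with phasors: a sinusoid V sin(ωt + φ) corresponds to the complex number V e^(jφ), and equal-frequency sinusoids add as their phasors do. A small illustrative sketch:

```python
import cmath
import math

def sum_sinusoids(V1, p1, V2, p2):
    """Amplitude and phase of V1*sin(wt + p1) + V2*sin(wt + p2).

    Equal-frequency sinusoids add as phasors, so the resultant is a
    single sinusoid of the same frequency.
    """
    z = V1 * cmath.exp(1j * p1) + V2 * cmath.exp(1j * p2)
    return abs(z), cmath.phase(z)

# Example: sin(wt) + sin(wt + pi/2) = sqrt(2) * sin(wt + pi/4)
V, phi = sum_sinusoids(1.0, 0.0, 1.0, math.pi / 2)
```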


2.2 Fourier Analysis


One can prove that any reasonably behaved periodic function g(t) with period T can be constructed by a,
possibly infinite, sum of sine and cosine functions:

    g(t) = c/2 + Σ_{n=1}^{∞} a_n sin(2πnft) + Σ_{n=1}^{∞} b_n cos(2πnft)      (2.3)

where f = 1/T is the fundamental frequency and a_n and b_n are the sine and cosine amplitudes of the n-th
harmonic of frequency nf.

Such a decomposition is called a Fourier series. From the Fourier series the function can be reconstructed;
i.e., if the period T is known and the amplitudes are given, the original function of time can be found by
performing the sums of equation 2.3.

A data signal, such as a byte consisting of eight bits, that has a finite duration, which all of them do, can
be handled by just imagining that it repeats the entire pattern forever, i.e. the behaviour during the interval
from T to 2T is the same as from 0 to T, etc.

The a_n amplitudes can be computed for any given g(t) by multiplying both sides of Eqn. 2.3 by sin(2πkft)
and then integrating from 0 to T. Since

    ∫_0^T sin(2πkft) sin(2πnft) dt = 0 if k ≠ n, and T/2 if k = n,

only one term of the a_n summation survives. The b_n summation vanishes completely. Similarly, by multiplying equation 2.3 by cos(2πkft) and integrating between 0 and T, we can derive b_n. By just integrating
both sides of the equation as it stands, c can be found. The results of performing these operations are as
follows:

    a_n = (2/T) ∫_0^T g(t) sin(2πnft) dt

    b_n = (2/T) ∫_0^T g(t) cos(2πnft) dt

    c = (2/T) ∫_0^T g(t) dt
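These coefficient formulas are easy to check numerically. The sketch below integrates a rectangular bit pattern slot by slot in closed form, taking T = 1 so that f = 1 (an assumption made purely for the check); the 8-bit pattern 01100010 used here is the ASCII character b of Example 2.1.

```python
import math

def fourier_coeffs(bits, n):
    """a_n and b_n for a unit-period rectangular signal that holds
    bit k on the interval [k/N, (k+1)/N), integrating sin/cos exactly."""
    N = len(bits)
    a = b = 0.0
    for k, bit in enumerate(bits):
        if bit:
            t0, t1 = k / N, (k + 1) / N
            a += (math.cos(2*math.pi*n*t0) - math.cos(2*math.pi*n*t1)) / (math.pi*n)
            b += (math.sin(2*math.pi*n*t1) - math.sin(2*math.pi*n*t0)) / (math.pi*n)
    return a, b

bits = [0, 1, 1, 0, 0, 0, 1, 0]      # ASCII 'b'
c = 2 * sum(bits) / len(bits)        # the d.c. term: (2/T) * integral of g(t)
a1, b1 = fourier_coeffs(bits, 1)     # first-harmonic amplitudes
```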

Example 2.1 Consider a specific example, the transmission of the ASCII character b encoded in an 8-bit
byte. The bit pattern to be transmitted is 01100010. The Fourier analysis of this signal yields the coefficients

    a_n = (1/πn) [cos(πn/4) - cos(3πn/4) + cos(6πn/4) - cos(7πn/4)]

    b_n = (1/πn) [sin(3πn/4) - sin(πn/4) + sin(7πn/4) - sin(6πn/4)]

    c = 3/4

2.3 Transmission Media


The transmission of an electrical signal obviously has to be along an electrical conductor or medium. The
type of transmission medium used is important, since it determines the maximum rate, in terms of binary


[Figure: twisted pairs enclosed in a metallic shielding ribbon and an outer sheath.]
Figure 2.3: Shielded twisted-pair line


digits (bits) per second or bps, that data can be transmitted. In all the instances that follow, the signal is in
fact an analog signal modulated (also said to be coded) with digital data, which we shall learn about in
Sec. 2.5. Pure digital signals cannot travel very far along a conductor before losing their shape, as we shall
see in Sec. 2.4 below. The maximum distance such signals can travel is about 50 meters and frequently no
further than from your computer to the printer.
Some of the more common types of transmission media are mentioned next.

2.3.1 Twisted pair


A two-wire open line is the simplest type of transmission medium, illustrated in Fig. 2.3. In such a line,
each wire is insulated from the other and both may be open to free space, in which case it is called an Unshielded
Twisted Pair (UTP).
With this type of line, care is needed to avoid cross coupling of electrical signals between adjacent wires
in the same cable. This is known as crosstalk and is caused by capacitive coupling between the two wires.
In addition the open structure of this type of line makes it susceptible to the pick-up of spurious noise
signals from other electrical signal sources caused by electromagnetic radiation. The main problem with
interference signals of this type is that they may be picked up in just one wire - the signal wire, for example
- and not the ground wire. As a result, an additional difference signal can be created between the two wires
and, since the receiver normally operates using the difference signal between the two wires, this can give
rise to an erroneous interpretation of the combined (signal plus noise) received signal.
Twisting the wires helps to avoid the interference mentioned above. The resulting close proximity of both
the signal and ground reference wires means that any interference caused by extraneous signal sources is
picked up by both wires and hence its effect on the difference signal is reduced. Furthermore, if multiple
twisted pairs are enclosed within the same cable, the twisting of each pair within the cable further reduces
interference effects caused by crosstalk.
Another way of limiting the interference is to cover the twisted pair with a conducting material such as
aluminum foil to shield the lines, whence it is not surprisingly called a Shielded Twisted-Pair or STP
cable.
The main limiting factor of a twisted pair line is caused by a phenomenon known as the skin effect: as the
bit rate (and hence frequency) of the transmitted signal increases, the current flowing in the wires tends to
flow only on the outside surface of the wire, thus using less of the available cross-section. This has the

CHAPTER 2. ELECTRICAL SIGNALS

28

effect of increasing the electrical resistance of the wires for higher frequency signals which in turn causes
more attenuation of the transmitted signal. In addition, at higher frequencies, an increasing amount of signal
power is lost due to radiation effects.
There are various categories of UTP cable prescribed by industry standard EIA/TIA-568, ranging from
Category 1 through 5E. Categories 3 and 5 are mostly used for data transmission and, with speeds ranging to
100 Mbps, Category 5E (E for Enhanced) is more commonplace. The reader is referred to any standard
text such as that by Gallo and Hancock ([GalHan] page 117) for a closer description of the properties of the
various categories.
One type of transmission line that minimizes both crosstalk and attenuation effects is the coaxial cable.

2.3.2 Coaxial cable


The most popular Local Area Network (LAN)(see Chapter 8) transmission media are baseband (digital)
and broadband (analog) coaxial cable. The cable is physically similar in both implementations, but the
electrical characteristics are generally different.
In a coaxial cable, the signal and ground reference wires take the form of a solid centre conductor running
concentrically (coaxially) inside a solid (or braided) outer circular conductor as shown in Fig. 2.4. The
space between the two conductors should ideally be filled with air, but in practice it is normally filled with
a dielectric insulating material with either a solid or honeycomb structure.
Due to its geometry, the centre conductor is effectively shielded from external interference signals and also
only minimal losses occur due to electromagnetic radiation and the skin effect. Coaxial cable can be used
with a number of different signal types, but typically 10 or even 20 Mbps over several hundred metres is
perfectly feasible. Also, coaxial cable is applicable to both point-to-point and multi-point topologies.
[Figure: a centre conductor surrounded by dielectric insulating material, a braided outer cover and an
insulating outer cover.]
Figure 2.4: A typical coaxial cable.

2.4 Channel Characteristics


Electrical signals traveling along a conductor are subjected to various attenuation and distortion effects as
illustrated diagrammatically in Fig. 2.5. As can be seen, a signal transmitted across any form of transmission
medium will be attenuated and affected by limited bandwidth, delay distortion and noise. Each is always
present and will produce a combined effect. We consider each of these briefly.


2.4.1 Attenuation

As a signal propagates along a transmission link, line or channel its amplitude decreases. This is known
as signal attenuation. Normally, to allow for attenuation, a definite limit is set on the length of the line to
ensure that the electronic circuitry at the receiver will reliably detect and interpret the received attenuated
signal. If the length of line exceeds this limit, one or more amplifiers or repeaters must be inserted at the set
limits along the line to restore the received signal to its original level.
The amount of attenuation a signal experiences also increases with frequency. Hence, since a transmitted
signal comprises a range of different frequency components, as explained in Sec. 2.4.5, this has the additional effect of distorting the received signal. This problem is overcome either by designing the amplifiers
to amplify different frequencies by different amounts, or by using additional devices, known as equalizers, to
equalize the amount of attenuation across a defined band of frequencies.

2.4.2 Noise Distortion and Shannon's Law


In the absence of a signal being transmitted, a transmission line or channel will ideally have a zero electrical
signal present. In practice, however, there will be random perturbations present on the line even when no
signal is being transmitted, as illustrated in the diagramme of Fig. 2.5. This is known as line noise and,
in the limit, as a transmitted signal becomes attenuated, the signal will reach the same level as the line
(background) noise level. An important parameter associated with a transmission medium, therefore, is the
ratio of the power in a received signal, S, to the power of the noise level, N. The ratio S/N is known as the
signal-to-noise ratio and normally is expressed in decibels or dB as

    SNR = 10 log_10 (S/N) dB

A signal-to-noise ratio of 10 is 10 dB, a ratio of 100 is 20 dB, a ratio of 1000 is 30 dB and so on.
A high S/N ratio means a high signal value relative to the prevailing noise level, resulting in a good quality
signal. Conversely, a low S/N ratio means a low quality signal. The theoretical maximum information rate
of a transmission medium is related to the S/N ratio by a formula first stated in 1948 by Shannon.
Shannon's law for noisy channels states that the maximum data rate C, in bits per second, of any channel
whose bandwidth is B Hz, and whose signal-to-noise ratio is S/N, is given by

    C = B log_2 (1 + S/N)

Example 2.2 A channel of 3000-Hz bandwidth (B = 3000), and a signal to thermal noise ratio of 1000
(typical parameters of the telephone system) can never transmit more than








*

      
   




or 30 000 Bps.
Shannon's law was derived using information-theory arguments and has very general validity. It should be
noted, however, that this is only an upper bound. In practice, it is difficult to even approach the Shannon
limit. A bit rate of 9600 bps on a voice-grade line is considered excellent, and is achieved by sending 4-bit
groups at 2400 baud.
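Shannon's formula is simple enough to evaluate directly. A minimal sketch, using the telephone-channel parameters of Example 2.2:

```python
import math

def shannon_capacity(bandwidth_hz, snr):
    """Shannon limit C = B * log2(1 + S/N) for a noisy channel."""
    return bandwidth_hz * math.log2(1 + snr)

# Example 2.2: a 3000 Hz channel with S/N = 1000 (i.e. 30 dB)
print(shannon_capacity(3000, 1000))  # just under 30 000 bps
```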

These terms are all synonymous.

CHAPTER 2. ELECTRICAL SIGNALS

30

Figure 2.5: Sources of attenuation and distortion of a digital signal

2.4.3 Delay distortion


The rate of propagation of a sinusoidal signal along a transmission line varies with the frequency of the
signal. Consequently, when transmitting a digital signal, the various frequency components making up the
signal arrive at the receiver with varying delays between them. The effect of this is that the received signal
is distorted. The amount of this delay distortion increases as the bit rate of the transmitted data increases,
so some of the frequency components associated with each bit transition are delayed and start to interfere
with the frequency components associated with a later bit. Delay distortion is therefore also known as
intersymbol interference and its effect is to vary the bit transition instants of the received signal
as illustrated in Fig. 2.5. Consequently, since the received signal is normally sampled at the nominal centre
of each bit duration, the delay distortion may lead to the incorrect interpretation of the received signal as the
bit rate increases.

2.4.4 Electrical noise


Much as for humans, electrical noise is any undesirable signal present on a transmission medium which is
carrying useful information. There are two forms of electrical noise: Thermal noise is always present and is
an inherent property of receivers and transmitters. Thermal noise is a function of temperature and cannot be
eliminated. The amount of thermal noise in a bandwidth of 1 cps in a conductor is

    N0 = kT watts/cycle

where

    N0 = noise power density in watts per cycle of bandwidth
    k  = Boltzmann's constant = 1.3803 × 10⁻²³ J/K
    T  = temperature in degrees Kelvin
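A quick back-of-the-envelope evaluation makes the magnitudes concrete. In the sketch below, the room temperature of 290 K and the 3000 Hz bandwidth are assumed values, chosen only for illustration:

```python
# Thermal noise: power density N0 = kT, total power N = kTB in bandwidth B.
k = 1.3803e-23   # Boltzmann's constant, J/K
T = 290          # assumed room temperature, kelvin
B = 3000         # assumed channel bandwidth, Hz (telephone-like)

N0 = k * T       # watts per cycle (hertz) of bandwidth
N = N0 * B       # total thermal noise power across the band
print(N0, N)
```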

More frequently, however, electrical noise is the result of external sources such as fluorescent light transformers, electrical equipment such as elevators and door switches and so on, whence it is referred to as impulse
noise. Impulse noise is generally only a minor annoyance for analog data such as voice transmission, where
it would be manifested by a few clicks or crackles and no loss of information. It is different for digital data,
however: a sharp spike of noise energy lasting say 10 milliseconds would have no effect on voice data,
but would be the death knell of 640 bits being transmitted at 64 Kbps.
Although some electrical noise is always present, much of it can be eliminated through proper shielding and
cable installation.
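The 640-bit figure is simple arithmetic, as the sketch below shows:

```python
# How many bits does a 10 ms noise spike wipe out at a 64 kbps line rate?
bit_rate = 64_000        # bits per second
spike_duration = 0.010   # seconds
bits_lost = round(bit_rate * spike_duration)
print(bits_lost)  # 640
```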

2.4.5 Bandwidth limitation


No transmission line can transmit signals without the signals losing some power in the process. If all the
harmonic frequencies (see Sec. 2.4.5) were attenuated equally, unlike explained in the previous section, the
resulting signal would be reduced in amplitude, but not distorted, i.e., would have the same nice square
shape as that shown at the top of Fig. 2.6. Usually, harmonics of all frequencies less than some cutoff
frequency, fc say, are transmitted almost unattenuated, with all harmonics with frequencies higher than fc
strongly attenuated. The frequency fc is called the bandwidth of the link.


Figure 2.6: A binary signal and its Fourier components.


For the explanations to follow we consider the time it takes for the binary signal to repeat itself to be the
period T of the fundamental or first harmonic. In the example of Fig. 2.6 the period of the fundamental
harmonic is considered to be the duration of two bits of the binary signal. In some cases, the bandwidth is
a physical property of the transmission medium, and in other cases special electrical circuitry, called a filter,
is introduced into the circuit to limit the amount of (scarce) bandwidth available to each signal.
Consider the arbitrary square wave signal in Fig. 2.6. If the bandwidth were so narrow that it would pass
only the fundamental frequency, we would get the simple sinusoidal wave shown at the top. If however it
would let 3 harmonics through, i.e., frequencies up to three times that of the fundamental, we would get the
second signal from the top, the sum of the fundamental and the third harmonic, each with its own amplitude
(see Fig. 2.6). With 5 harmonics we would have the signal third from the top, with 7 harmonics the second
last signal in the figure. Despite it looking quite noisy, it now bears more than a passing resemblance to a
square wave. In fact, if we add more and more harmonics, we would get closer and closer to a square wave
until, if the bandwidth were infinite (which it never is) we would have the square wave back.
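This convergence can be reproduced numerically. The sketch below sums the Fourier series of a unit square wave (only odd harmonics appear, each with amplitude 4/πn — a standard result) and evaluates it at a point where the square wave equals +1; the sum creeps towards 1 as harmonics are added:

```python
import math

def square_wave_approx(t, num_harmonics, fundamental=1.0):
    """Sum the first few odd harmonics of a unit square wave's Fourier series."""
    total = 0.0
    for k in range(num_harmonics):
        n = 2 * k + 1  # only odd harmonics are present in a square wave
        total += (4 / math.pi) * math.sin(2 * math.pi * n * fundamental * t) / n
    return total

# Value at a quarter period, where the ideal square wave equals +1:
for h in (1, 2, 4, 8, 64):
    print(h, square_wave_approx(0.25, h))
```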
The terms bandwidth (measured in cps or Hertz) and capacity (measured in bps) are frequently used interchangeably in industry. They mean entirely different things as you now know, but to understand that they
express different properties of a link, consider the binary signals in Fig. 2.7.
Figure 2.7: Two binary signals: g2(t) with double the capacity (number of bits per second) of g1(t).

  
Each signal can be expanded as a Fourier series of the form

    g(t) = c/2 + Σ an sin(2π n f t) + Σ bn cos(2π n f t),    n = 1, 2, 3, ...

where f = 1/T is the fundamental frequency and the an, bn and c are the Fourier coefficients (amplitudes)
of the harmonics. Since the period T2 of g2(t) is half the period T1 = 2T2 of g1(t), the fundamental
frequency f2 = 1/T2 of g2(t) is twice the fundamental frequency f1 = 1/T1 of g1(t), and the harmonics of
g2(t) are consequently spaced twice as far apart in frequency as those of g1(t).
Hence, if we have a channel with fixed bandwidth, say B, the number n1 of harmonics of signal g1(t) with
fundamental frequency f1 which will be allowed through is

    n1 = ⌊B / f1⌋

and the number of harmonics n2 of signal g2(t) of fundamental frequency f2 = 2 f1 which will be allowed
through is

    n2 = ⌊B / f2⌋ = ⌊B / 2 f1⌋

Note that the bandwidth B need not be an integer multiple of the fundamental frequency f1.

In other words, only half as many harmonics will be allowed through in the case of signal g2(t) with twice
the capacity (bps) of g1(t). Thus, for fixed bandwidth B, the more bits per second, the worse the signal
quality. Alternatively, to maintain the signal quality, the higher the capacity, the higher the bandwidth has
to be. The terms bandwidth and capacity are thus used synonymously but mean quite different things
electrically. Figure 2.8 further illustrates the principle.
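The harmonic counts are easy to tabulate. In the sketch below, the 3000 Hz bandwidth and 500 Hz fundamental are assumed figures chosen only to illustrate the halving:

```python
def harmonics_passed(bandwidth_hz, fundamental_hz):
    """Number of harmonics of a signal that fit inside a fixed bandwidth."""
    return int(bandwidth_hz // fundamental_hz)

B = 3000          # assumed channel bandwidth, Hz
f1 = 500          # assumed fundamental of g1(t), Hz
f2 = 2 * f1       # g2(t) has twice the bit rate, hence double the fundamental

print(harmonics_passed(B, f1))  # 6 harmonics of g1(t) get through
print(harmonics_passed(B, f2))  # only 3 harmonics of g2(t) get through
```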

2.5 Data encoding


Now that we understand analog signals and their properties and digital signals and Fourier analysis it is
possible to learn about the way data can be encoded. One way that we saw in Sec. 1.4 is to simply send the
data as a binary signal by using Manchester (en-)coding. Manchester encoding is just one of many encoding
techniques as we will now briefly discuss.

2.5.1 Digital Data, Digital Signals


The simplest way to transmit digital signals is obviously to use two different voltage levels for the two binary
digits. Codes that work this way share the property that the voltage level is constant during a bit interval
i.e., there is no return to a zero voltage level (during a bit interval). Most commonly a negative voltage is
used to represent one binary value and a positive voltage level to represent the other. This code is known as
Nonreturn to Zero-Level (NRZ-L) and is illustrated in Fig. 2.9. An interesting variation of NRZ is known as
Nonreturn to Zero, Invert on ones (NRZI). NRZI is similar to NRZ-L in that the signal maintains a constant
voltage for the duration of a bit interval. However, now the data are encoded as the presence or absence of
a signal transition at the beginning of the bit time. A transition from low to high or high to low at the
start of a bit interval indicates a binary one for that bit interval; the absence of a transition a zero bit value.

Figure 2.8: Illustrating the effect of bandwidth on a digital signal: a pulse transmitted at a bit rate of 2000
bits per second after passing through bandwidths of 500, 900, 1300, 1700 and 4000 Hz.

Figure 2.9: Some digital encoding techniques: NRZ-L, NRZI, bipolar-AMI, pseudoternary, Manchester and
differential Manchester.


NRZI is an example of differential encoding. In differential encoding the data to be transmitted are represented as a change between two successive data symbols rather than by the signal levels themselves. In general:
if the bit to be encoded next is a binary 0, then the bit is encoded with the same signal (level) as the preceding
bit; if the bit to be encoded is a binary 1, then the bit is encoded with a different signal than the preceding
bit.
One benefit of differential encoding is that it is easier to detect a transition in the presence of noise than to
keep track of the signal level. Another benefit for the technician who has to install hundreds of twisted-pair
lines for instance, is that if the leads from the pair are inadvertently inverted all 1s and 0s for NRZ-L will be
inverted. This is not possible with differential encoding.
Nevertheless, with the NRZ encoding techniques there will always be a non-zero direct current (dc)
component in the signal, and synchronization of the receiver and transmitter remains a problem should the
timing (which bit interval each is reading at a given moment) between the receiver and transmitter shift.
Another family of coding techniques that eliminate these NRZ problems is known as Biphase techniques.
One of these, Manchester encoding we have seen already and it is illustrated again in Fig. 2.9. In the
Manchester code, there is a transition at the middle of each bit interval. The transition at the mid-interval
instant serves to synchronize the sender and receiver and also represents the data: a low-to-high transition
represents a binary 1, a high-to-low represents a binary 0. In Differential Manchester encoding the midinterval transition is for synchronization only. The encoding of a 0 is represented by the presence of a
transition at the beginning of a bit interval, a 1 by the absence of a transition.
Reflecting on what we learned in Sec. 2.4.5 we realise immediately that both Manchester encoding methods
require twice the bandwidth to send at the same data rate (Think!). Yet they have the following advantages:
1. There is always a voltage transition in each bit interval that the receiver can synchronize upon. The
biphase codes are known as self-clocking codes for this very reason.
2. There is no direct current component.
3. The absence of a known transition would signal an error. A noisy line would have to invert the signal
both before and after the transition to cause an error to go undetected.
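The three line codes discussed above can be sketched as small Python encoders. The polarity conventions follow the text (Manchester: low-to-high at mid-interval is a 1); levels are represented as +1/-1, and Manchester emits two half-bit levels per data bit — a toy model, not a signal generator:

```python
def nrz_l(bits):
    """NRZ-L: one constant level per bit interval (+1 for 0, -1 for 1 here)."""
    return [+1 if b == 0 else -1 for b in bits]

def nrzi(bits, start=-1):
    """NRZI: a transition at the start of the interval encodes a 1, none a 0."""
    level, out = start, []
    for b in bits:
        if b == 1:
            level = -level  # invert on ones
        out.append(level)
    return out

def manchester(bits):
    """Manchester: low-to-high mid-interval = 1, high-to-low = 0."""
    return [half for b in bits for half in ((-1, +1) if b == 1 else (+1, -1))]

data = [0, 1, 0, 0, 1, 1]
print(nrz_l(data))
print(nrzi(data))
print(manchester(data))  # twice as many levels: double the bandwidth
```

Note that the Manchester output has twice as many signal elements as there are data bits, which is exactly the bandwidth doubling argued above.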

2.5.2 Digital Data, Analog Signals


From Sec. 2.4 we know already that digital signals cannot travel long distances along a conductor before
they get distorted to the point that they are useless. And obviously digital signals cannot travel through the
air as electromagnetic waves can. In order to make digital transmission over long distances possible we
code, or modulate, an analog signal with the digital data as shown in Fig. 2.10. The function of a modulator
is to provide a one-to-one mapping of the data symbols into a set of signals s1(t), s2(t), ..., sM(t). In
general, the signal si(t) may be represented by

    si(t) = Ai(t) cos(2π fc t + φi(t)),    i = 1, 2, ..., M


Figure 2.10: A binary signal (a) amplitude modulated (PAM) (b) frequency modulated (MFSK) (c) phase
modulated (PSK)

The waveforms A1(t), A2(t), ..., AM(t) and φ1(t), φ2(t), ..., φM(t) are all known functions of time, fc is
the carrier frequency and φ is the carrier phase.
If either the amplitude, Ai(t), the frequency, fc, or the phase, φi(t), of si(t) is varied discretely with known
time functions, one has the various forms of digital modulation.
When the amplitudes A1(t), A2(t), ..., AM(t) are pulse waveforms while fc and φ are held constant, this
discrete form of amplitude modulation is commonly referred to as pulse-amplitude modulation (PAM).


Other pulse modulation techniques include pulse-position modulation (PPM) and pulse-duration modulation (PDM). In the former, data are conveyed by varying the position in time of a pulse waveform, while in
the latter, data are conveyed by varying the duration of the pulse.

If the frequency of signal si(t) is varied discretely with both Ai(t) and φi(t) held constant, that is, fi =
fc + kΔf for i = 1, 2, ..., M, k any integer, the modulation technique is called multiple frequency-shift-keying (MFSK).

If the phase φi(t) is varied discretely over M values for i = 1, 2, ..., M, and both Ai(t) and fc are held
constant, we have multiple phase-shift-keying (MPSK). If M = 2, MFSK and MPSK are usually referred to
as frequency-shift-keying (FSK) and phase-shift-keying (PSK) respectively. Finally, if the amplitude and
phase of a constant-frequency carrier are varied discretely, one has the hybrid of PAM and MPSK
modulation called combined amplitude-phase-shift-keying (APSK).


FSK is the most common form of modulation at transmission rates of up to 1800 bps and is less susceptible
to noise than PAM. PSK is again less susceptible to noise than PAM and is used in equipment for transmission at speeds above 2000 bps. In the most common form of PSK, the carrier wave is systematically shifted
45, 135, 225, or 315 degrees at uniformly spaced intervals. Each phase shift transmits 2 bits of information.
Details of all these techniques are not necessary to understand what networks are about.
At the receiver an operation which is the inverse of modulation is performed. In any two-way communication
a modulator must also demodulate, so the combined device is called a modem in either case.
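The 45/135/225/315-degree scheme above is four-phase PSK. A minimal mapping sketch follows; the particular dibit-to-phase assignment below is one common convention assumed for illustration, not the only one in use:

```python
import cmath
import math

# One possible assignment of bit pairs to the four carrier phases (degrees).
PHASES = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}

def psk4_symbols(bits):
    """Map successive bit pairs onto unit-amplitude carrier phases."""
    assert len(bits) % 2 == 0, "need an even number of bits"
    symbols = []
    for i in range(0, len(bits), 2):
        deg = PHASES[(bits[i], bits[i + 1])]
        symbols.append(cmath.exp(1j * math.radians(deg)))
    return symbols

syms = psk4_symbols([0, 0, 1, 1])
print(syms)  # two complex points on the unit circle, 2 bits each
```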

2.5.3 Analog Data, Digital Signals


Historically, telephone networks carried only analog voice signals. When digital data came along, ways had
to be found to carry the digital data over the analog (signal) networks using a modem, as we now know.
With the increased need for data networks the obvious question is why one cannot carry analog signals over
digital networks and thus avoid the need for two different networks altogether? This is increasingly being
done, to the point that no analog networks are being built anymore and that at some distant time in the future
all communication will be digital.
The technique used to transmit analog signals over a digital channel (still using modulation, note) is called
Pulse Code Modulation (PCM) or sampling. Sampling requires that the amplitude of the analog signal be
measured at regularly spaced intervals as illustrated in Figure 2.11.
At each sampling point the measured amplitude is coded as a binary number and the digital data are sent over
our digital network to the destination where the receiver re-constructs the analog signal from the digital data.
Voila, analog signals over a digital network. More specifically, each of the levels used in the digitisation
is known as a quantisation level and is coded by 8 bits. This gives us 128 negative and 128 positive
quantisation levels. The levels are not linearly spaced but rather logarithmically. This means the spacing
at the larger amplitudes is much wider than at the smaller amplitudes. The reason for this is that a linear
scale (such as the one shown) is unable to accurately account for low-amplitude signals or small changes in signal.
The question next arises as to how often one should sample the analog signal.

Figure 2.11: Illustrating pulse code modulation


The answer to this was provided by an engineer called Nyquist in the early half of the 20th century.
Nyquist's theorem states that an analog signal can be reconstructed accurately by sampling it at a rate equal
to exactly twice the highest frequency of the analog signal (perhaps you now have an idea what the 4-bit
over-sampling means on your CD player). If the signal consists of M discrete levels, Nyquist's law states
that the maximum data transfer rate C, measured in bits per second, of a noiseless line of bandwidth B is
given by:

    C = 2 B log2(M)
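Evaluating the formula with the telephone parameters used in this chapter (a 4000 cps band and the 256 quantisation levels of 8-bit PCM) reproduces the familiar 64 kbps figure:

```python
import math

def nyquist_rate(bandwidth_hz, levels):
    """Maximum noiseless data rate C = 2B log2(M)."""
    return 2 * bandwidth_hz * math.log2(levels)

# A 4000 Hz telephone channel with 256 discrete levels:
print(nyquist_rate(4000, 256))  # 64000.0
```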
The next obvious question is what the highest frequency is that we need to consider. If we are concerned
with voice data, the frequency that the (young) human ear can detect ranges from about 30 to 20 000
cps. However, the human voice is not much distorted by limiting the range of frequencies (the bandwidth)
to between 300 Hz and 3400 Hz, a bandwidth of 3100 cps (have you ever wondered why the human voice
sounds different over the telephone than through open space?). For technical reasons (to avoid interference
from adjacent channels) the bandwidth is chosen to be 4000 cps and so the sampling rate has to be twice
this frequency. This means that in a telephone network our voice is sampled exactly 8000 times a second,
generating 8 bits of data per sample. This gives a signal rate of 64000 bits per second (or 64Kbps). With
digital bandwidth becoming increasingly inexpensive and plentiful there is no longer a need to limit the
voice bandwidth. However, with all telephone equipment using this standard, and the communication
equipment from our homes to the local exchange (called the local loop) still remaining analog for many
years, the 64 Kbps channel is part of our digital world forever.
Unfortunately, like so much else, there are two standards for PCM encoding. There is the European standard,
which we have just described, and the US/Japan standard which uses 7 bits instead of 8. A signal coded this
way of course has a rate of 7 multiplied by 8000, which gives 56000 bits per second (56Kbps)! Both systems
(64K signals or 56K signals) represent one channel. A channel represents the bit stream generated by
sampling one conversation. With digital technologies, it is possible to assemble many of these channels
into a faster single bit stream. This process is known as multiplexing. The European standard assembles
32 channels into a single stream at what is called the primary level and generates a 2.048 Mbps signal. The
US/Japan standard assembles 24 channels into a 1.544 Mbps signal. Signals using the two standards


need an adaptor to convert between the two. We need not stop at this point and can subsequently continue
to multiplex a primary rate signal into a higher order signal. The European version is 8 Mbps (equals 128
channels) with the US/Japan version being approximately 6 Mbps, and so on. It is not uncommon at all for
backbone communication networks to have a single connection carrying well over 60000 channels, or
2.5Gbps. Such is the power (and future) of digital communications.
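The channel and primary-rate arithmetic of the last two paragraphs can be summarised in a few lines:

```python
# PCM voice-channel arithmetic as described in the text.
sample_rate = 8000             # samples per second (twice the 4000 Hz band)
bits_eu, bits_us = 8, 7        # bits per sample: European vs US/Japan standard

ch_eu = sample_rate * bits_eu  # 64 000 bps per European voice channel
ch_us = sample_rate * bits_us  # 56 000 bps per US/Japan voice channel

# European primary level: 32 channels multiplexed into one stream.
primary_eu = 32 * ch_eu        # 2 048 000 bps, i.e. 2.048 Mbps
print(ch_eu, ch_us, primary_eu)
```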
Finally it is clearly possible to modulate an analog (carrier) signal with analog data such as audio signals.
Most of us who have listened to a radio know the difference between AM and FM.

2.6 Exercises

Exercise 2.1 If the human voice can be safely restricted to the range 0-4000 cycles, why is the capacity for
a voice line 64Kbps? (Hint: assume a 256 level PCM.)
[4]
Exercise 2.2 Name three ways a signal can be modulated for transmission.

[3]

Figure 2.12: A transmitted byte



Exercise 2.3 In the figure above, what are the Fourier coefficients  ,  and  ?

Exercise 2.4 What is the cosine amplitude of the 5th harmonic in the above example?

Exercise 2.5 A network uses copper cabling as its transmission medium. State two ways of tapping the
network lines. How can you ensure that security is not compromised even if the lines are tapped?
Exercise 2.6 If the human voice can be safely restricted to the range 0-4000 cycles, why is the capacity for
a voice line 64Kbps? (Hint: assume a 256 level PCM.)
Exercise 2.7 A controlling system in a brewery receives temperature readings from a number of vats. Assuming that the temperature ranges from 0-100 degrees, and the readings are accurate to the nearest 1/2
a degree, how many bits are needed to adequately represent the readings? Assume also that there are five
different vats. What is the minimum capacity required if a reading needs to be taken from all the vats every
millisecond? What is the baud rate of the system? What is the bandwidth of the system if 8 harmonics are
used to represent the binary signals?


Chapter 3

Optical Signals
3.1 Objectives of this Chapter
This chapter is about optic signals and fibres which are fast becoming the most dominant transmission
technology. The reasons should be clear on understanding the material to follow.
1. What are optic signals and how do we generate them? This is the starting point of this chapter.
2. As in the case of electrical signals, optic signals have a medium of their own. The technology of optic
fibres has made huge strides in the past decade and still does.
3. Life is never perfect however, and optic signals suffer from degradation as do electrical signals, but
for different reasons obviously. We explain signal loss in optic fibre in the last section of this chapter.
There is a huge amount to know about optic signals which we do not cover. What we do cover is sufficient,
though, for you, the reader, to understand the various optic standards and technology developments.


Optical signals, albeit of a very different frequency, were first used around 420 BC in Ancient Greece to
send coded signals. Charles Babbage, the man who designed the analytical engine, also invented the light-flashing machine which in his words worked as follows:

I then, by means of a small piece of clock-work and an argand lamp, made a numerical
system of occultation, by which any number might be transmitted to all those within sight of
the source of light.
Probably one of the first communication networks ever was the optical telegraph illustrated in Fig. 3.1,
designed by Claude Chappe in France in 1791. This pioneering French telegraph was followed by a Swedish
design and soon a large number of different designs in other countries. A network of such optical telegraphs
was built in 1795-96 connecting Stockholm and Eskero on Aland in Sweden with stations at most 14 km
apart. The transmitter consists of ten flaps, 9 of them arranged in a three-by-three matrix and a tenth on

Figure 3.1: The Optical Telegraph


top. The flaps can be manipulated to produce 1024 different signals. Each signal represented a code which
was interpreted in a code book, much like ASCII code. The code book included letters, numbers, common
syllables, words, sentences and names. It also included control messages. The procedures also included an
addressing scheme and schemes for encryption of messages.

3.2 Optic Fibres


The invention of optical fibres in recent years has brought a revolution in the telecommunications industry.
The extremely high bandwidth of optical fibre allows a transmission speed of the order of 10 Gbps instead
of 10-20 Mbps in coaxial cable. Later we shall see that even higher speeds are possible in optical fibres
by means of wave division multiplexing. Bell Laboratories in the USA have managed to place 30,000
simultaneous phone calls on a single optical fibre.


3.3 Method of Transmission


Binary data is usually passed down an optical fibre in the form of a series of on/off pulses representing
a digital bit stream. This differs from the electrical transmission media discussed in the previous chapter
in that the transmitted information is carried in the form of a fluctuating beam of light in a glass fibre, as
opposed to changes in voltage travelling down (usually) coaxial copper wire.
The overall pattern of transmission over an optical fibre cable is as follows: the electrical signal from the
source is transmitted to a converter, either a laser or a light-emitting diode. These devices convert electrical
signals into a series of light impulses. As these light impulses comprising the signal travel along the cable,
the signal undergoes loss which is compensated for by repeaters placed on the fibre at 30-50 km intervals.
At each repeater the signal is detected, converted to an electrical signal, amplified and re-emitted. When the
optical signal reaches the end of the optical fibre, it is converted back into an electrical signal and transmitted
to the destination.

3.4 Optical Media


Originally each signal was carried on a separate optical fibre which consisted of a narrow core surrounded
by cladding; both of these layers are silica. Protective plastic surrounds the core and cladding, preventing
spurious light from entering the fibre. The core has a higher refractive index than the cladding, which,
in practical terms, means that light rays travelling through the core at an angle less than a certain critical
angle, are completely reflected by the cladding and will remain inside the core. A schematic diagramme
of such a cable is shown in Figure 3.2 below. In optical fibres, because the core is very thin, the distance
Figure 3.2: Components of an optical fibre cable


between successive reflections is very small, and the optical fibre may be bent or twisted without affecting
the transmission of the signal. In this way, light rays entering the core can be transmitted for long distances
with a very low loss of energy.
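The "certain critical angle" mentioned above follows from Snell's law: measured from the normal to the core-cladding interface, total internal reflection occurs beyond θc = arcsin(n2/n1). The refractive indices in the sketch below are assumed, typical-looking values, not data from any particular fibre:

```python
import math

def critical_angle_deg(n_core, n_cladding):
    """Angle from the normal beyond which light is totally internally reflected."""
    return math.degrees(math.asin(n_cladding / n_core))

# Assumed indices: core slightly denser than cladding, as the text requires.
print(critical_angle_deg(1.48, 1.46))  # roughly 80 degrees from the normal
```

Because the two indices differ only slightly, the critical angle is large: only rays travelling nearly parallel to the fibre axis stay trapped in the core.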


3.4.1 Multimode and single-mode fibres


There are currently two types of optical fibre in use: multimode and single-mode; both have advantages
and disadvantages. The basic difference is that multimode fibres have a much larger core (50-150 µm)
whereas single-mode fibres have a smaller core, about 8 µm.
The larger core of the multimode fibre enables light rays to enter the fibre at a larger angle than in the
case of single-mode fibre. This means that individual rays of light can travel along many different paths
or modes in a multimode fibre. Each impulse of light, encoding a single bit, consists of many such rays of
light. In step-index multimode fibre, there is a sharply defined interface between the core and the cladding
Figure 3.3: Multimode and single-mode optical fibres: (a) step-index multimode fibre, (b) graded-index
multimode fibre, (c) single-mode fibre.


(Figure 3.3(a)). This means that the different rays of light encoding the same bit will bounce off the cladding
at different angles. Rays following a mode parallel to the axis of the fibre will travel a smaller distance than
the rays which are reflected off the cladding more frequently. The rays thus arrive at the destination at
slightly different times, causing the original light impulse to become dispersed, a phenomenon called modal
dispersion. Because of modal dispersion, successive light impulses in multimode fibre can interfere with
each other when travelling at high speeds or over long distances, limiting the bandwidth available through
step-index multimode fibre to only 35 Mbps over 1km.
A µm is a thousandth of a millimeter.
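The size of the modal-dispersion effect can be estimated with a standard first-order textbook formula: the spread between the fastest (axial) ray and the steepest guided ray over a length L is approximately ΔT = (L·n1/c)(n1/n2 − 1). The refractive indices below are assumed for illustration only, and this simple estimate is not calibrated against the 35 Mbps figure quoted above:

```python
C = 3.0e8          # speed of light in vacuum, m/s

def modal_spread_s(length_m, n_core, n_cladding):
    """First-order estimate of pulse spread between axial and steepest rays."""
    return (length_m * n_core / C) * (n_core / n_cladding - 1)

# Over 1 km of step-index fibre with assumed indices n1 = 1.48, n2 = 1.46:
dt = modal_spread_s(1000, 1.48, 1.46)
print(dt)  # a spread of tens of nanoseconds per kilometre
```

A pulse spread of tens of nanoseconds per kilometre is why successive bits start to smear into one another at high rates or over long distances.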



An improvement on the step-index multimode type of fibre is graded-index multimode fibre. These fibres
do not have a sharply defined region between core and cladding; instead the refractive index of the fibre
changes gradually from the inside to the outside. Light travelling down the axis experiences the highest optical index,
and hence travels at the slowest speed, while rays travelling the longer modes away from the axis experience
less refraction, thus travelling faster (Figure 3.3(b)). The time that it takes light travelling along each mode to
reach the destination is the same, and relatively little modal dispersion occurs in these fibres. Graded-index
multimode fibre is capable of carrying 500 Mbps over 1km.
Multimode fibres of both types are still in wide use, but single-mode fibres seem destined to supersede them.
Single-mode fibres are similar to step-index multimode fibre in structure, but their much narrower core
allows only one mode for the rays of light to travel along in the fibre, the fundamental mode (Figure 3.3(c)).
Thus, in single-mode fibres, no modal dispersion occurs, allowing a much greater potential bandwidth in
these fibres than in multimode fibres.
Single-mode fibres are far superior to both types of multimode fibre in the transmission distance possible,
and also in the bandwidth available for exploitation. Graded multimode fibres do counter some of the
problems of modal dispersion in multimode fibres, allowing better transmission characteristics, but these
fibres are quite expensive to manufacture.
On the other hand, single-mode fibres are not perfect. There are two major problems with single-mode
fibres, both due to their very narrow cores. Firstly, it is difficult to focus the transmission light beam exactly
onto the core of a single-mode fibre without wasting some of the emitted light. Single-mode lasers must be
used to achieve this, but these are more expensive than the light-emitting diodes (LEDs) or semi-conductor
lasers that are used with multimode fibres. Secondly, also due to the narrow core, it is much more difficult
to couple (join) two single-mode fibres to each other, than to couple multimode fibres. For these reasons,
multimode fibres are still widely used for short or medium-distance transmission, while single-mode fibres
are mostly used for long-distance or high-speed transmission.

3.4.2 Optical Signal Generation


The two main means of producing the light signals are light-emitting diodes (LEDs) and semi-conductor
laser diodes (LDs).
LEDs produce light of a wide spectral width and are primarily used in multimode fibres which operate
at wavelengths where chromatic dispersion is unimportant. However, they cannot be used efficiently with
the high-speed single mode fibres. On the other hand, LDs couple easily with single-mode fibres, they
allow wide bandwidth transmission, and they have a high output power. Nevertheless, the cheaper price
and higher reliability of LEDs make them a popular choice for multimode fibre transmission in short- to
middle-distance applications.
The light impulses are intensity modulated by the LDs and LEDs. In more advanced LDs, the lasers are
not physically switched on and off for each data bit, because this causes jitter, or interference, with the
channel signal. In these LDs, the laser beam is continuously on, and the beam passes through two sheets of
crossed polarised material, one of which is made of lithium niobate and has the electrical channel attached
to it. When no electrical impulse is flowing down the channel, the two sheets of polarised material at right
angles to each other prevent the passage of the laser beam into the channel. However, when an electrical
impulse stimulates the lithium niobate, its plane of polarisation changes, allowing light to pass through the
two sheets and into the channel. A material with this property is called a Pockels material. Obviously, the
faster that a change in polarisation can occur, the higher the potential transmission speed. A new Pockels
material composed of organic molecules called chromophores, promises even better transmission feats than
lithium niobate in the future.


At the receiving end of the optical fibre, the receptors need to be able to convert the light signal into an
electrical signal extremely rapidly. The most popular systems used are the PIN photodiode and the avalanche
photodiode which is used mainly in high-speed, long-haul systems. The avalanche photodiode is highly
sensitive to light, which excites the electrons, producing a voltage. The material used in these photodiodes
causes an avalanche effect, magnifying the voltage produced in the electrical channel.

3.5 Signal Loss in Optic Fibre


Despite the very great bandwidth available for light transmission, losses in the signal still occur. The three
main causes are modal dispersion, occurring only in multimode fibre, chromatic dispersion, and fibre or
optical loss. Modal dispersion and chromatic dispersion are analogous to the delay distortion
present on electrical channels, while fibre loss is analogous to attenuation.
1. Modal dispersion in multimode fibres is due to different rays of light travelling along different paths
in the core, causing dispersion in an impulse; it has been discussed in conjunction with multimode
fibres and single-mode fibres in the previous section.
2. Chromatic dispersion occurs in all fibres because each light impulse is composed of a number of
wavelengths. The problem with these different wavelengths of light is that each travels at a different
speed through the medium. The signal thus becomes dispersed, in a manner analogous to modal
dispersion. Some lasers can produce a signal of a single wavelength, but such lasers are incredibly
expensive. Multimode fibres are particularly guilty of chromatic dispersion because their light sources
(especially LEDs) transmit on a relatively broad band of wavelengths. Single-mode fibres are not
exempt from this dispersion either because they transmit pulses containing the Fourier components of
different wavelengths.
Scientists have discovered a way to eliminate chromatic dispersion on silica fibres. At 1.3 µm,
all wavelengths of light travel at the same speed through the fibre, so if the light impulses have a
wavelength centred around this value, no chromatic dispersion will occur. The discovery of this fact
caused a shift in the wavelength of light carried by fibre from 0.8 µm to 1.3 µm.
3. A third cause of signal loss is termed optical loss and is due to very slight impurities in the glass of
the core and cladding. These impurities cause the fibre to absorb the light impulses. This weakens the
signal and is the reason for the repeaters on optic fibre lines. There are two windows of low optical
loss: one lies around 1.3 µm and the other, at a much lower level, around 1.55 µm. The development of
new silica fibres with zero chromatic dispersion at 1.55 µm has led to this wavelength becoming the
new standard wavelength of optical transmission.
The above effects have meant that repeaters (or boosters) had to be installed in any optical transmission
path to ensure error-free transmission. With recent technology, however, in which the fibre is doped with
the element erbium, the optical signal is amplified as it travels through the medium, thus eliminating the
need for signal regenerators almost completely.

3.6 Advantages and Disadvantages of Optical Fibres


The future of optical fibres is very bright for a number of reasons:


- The light passing through the fibre is made up of different wavelengths. Light waves
give rise to a much higher capacity than electrical signals (cf. 10-20 Mbps for coaxial cable over a
short distance) and hence optical fibre cable can be used for transmitting very high bit rates, in the
order of hundreds of megabits per second.
- Furthermore, the use of a light beam makes optical fibre cable immune to the effects caused by
spurious electromagnetic interference and crosstalk. Optical fibre cable, therefore,
is also useful for the transmission of lower bit rate signals through noisy electrical environments,
in steel plants for example, which employ much high-voltage and current-switching equipment. This
has led to plans by British Rail to lay fibre optic cable alongside its railway lines.
- Optical signals travel much further without attenuation, so repeaters are needed only every 30-50 km
compared to 5 km for copper cables.
- It is also being used increasingly in environments that demand a high level of security, since it is
difficult to physically tap an optical fibre cable.
- The error rate in fibre is very low: several orders of magnitude fewer errors per bit than is typical
of copper cables.
- Optical cables are small and light-weight. For example, 900 copper wire pairs stretched across 300 m
weigh just over two tonnes. Two optical fibres pulled along the same length weigh approximately 40
kg, including their protective coverings.
On the other hand, fibre optic cables have a few disadvantages:
- Installation costs are much greater than for electrical transmission media, because the fibres are
physically weak and difficult to couple together. Losses in the signal may occur when the fibres are coupled,
so it is difficult to reconfigure the system for changing needs.
- Fibre is primarily a point-to-point medium which does not allow for bus protocols such as Ethernet.
For these reasons, the usage of optical fibre is currently limited to long-distance high-speed uses, and environments with a lot of electrical noise (e.g. a factory with a lot of machinery). However, as the demand for
greater bandwidth shows no sign of abating, fibre optics is definitely the network transmission medium of
the future.

3.7 Exercises

Exercise 3.1 Name the advantages and disadvantages of optical fibre.


Exercise 3.2 Unlike radio signals, optical fibre, like copper, has to run physically from one point
to the next. Can you think of ways, other than digging a trench through the fields, to lay the optical fibre?

The theoretical limit is in the order of 1 Tbps (Terabits per second) without multiplexing.


Chapter 4

Wireless Communication
4.1 Objectives of this Chapter
Once familiar with the material in this chapter the reader should
1. know what radio signals are,
2. understand the use of the available radio frequency spectrum,
3. have a basic understanding of satellite communication and the various kinds of satellite systems
(GEO, LEO and MEO), and
4. know the advantages and disadvantages of each.


Ever since man has been able to speak and communicate, he has wanted to transmit messages over long
distances without having to have a physical connection. In its most basic form this includes techniques such
as the smoke signals of early tribes, heliographs that reflect the sunlight, or the lanterns used to transmit
Morse code on ships. Yet this was not enough, and in the 20th century, with its rapid advances in
communications technology and electronics, more was demanded.
When electrons move, they create electromagnetic waves that can propagate through free space. These
waves were first predicted by the British physicist James Clerk Maxwell in 1865 and first produced and
observed by the German physicist Heinrich Hertz in 1887. By attaching an antenna of the appropriate size
(around a wavelength in length) to an electrical circuit, the electromagnetic waves can be broadcast efficiently
and received by a remote receiver.
Interestingly enough, the dawn of wireless communications actually has its roots at the end of the
19th century with an Italian called Guglielmo Marconi. It was his experiments that demonstrated that a
continuous wave could be propagated through the atmosphere or ether, and over the great boundary at the
time, the horizon. The problem was that the earth's surface is curved, so a direct line of sight is only
possible for several kilometres; the further apart the sender and receiver are, the sooner they lose line of
sight as they dip beneath the horizon. Marconi showed that radio waves followed the earth's curvature by
bouncing off the upper part of the atmosphere known as the ionosphere. He successfully transmitted a faint
signal from Cornwall, England to his receiving apparatus in Newfoundland, Canada.

4.2 Frequency Spectrum


Wireless communication refers to just that: sending electromagnetic signals without wires. As such the term
applies to a signal sent at any frequency. The frequency spectrum, although wide, is still finite. Clearly the
spectrum should be regulated nationally and internationally to prevent chaos. As argued above, the wider the
frequency band allocated, the higher the data rates. In the USA the Federal Communications Commission
(FCC) decides who gets what whereas, in the rest of the World, the ITU-R is responsible for the allocation
of frequencies. The electromagnetic spectrum and its allocations is illustrated in Table 4.1. The radio,
microwave, infrared and visible light bands can all be used to transmit digital signals by modulation of the
waves. Ultraviolet light and beyond would be even better due to the higher frequencies, but they are hard
to modulate using conventional techniques and do not propagate through objects such as buildings or other
structures. High frequencies are also not very good for living things, as anyone who owns a microwave oven
knows. At the other extreme, below 10 KHz, we have the audio signal range.
Starting at 10 KHz, the names and frequency ranges listed in the table are from the ITU standards for the
various wave bands. In this way LF, MF, and HF refer to Low, Medium and High Frequency; the higher
bands are similarly named Very, Ultra, Super, Extremely and Tremendously [seriously!] High Frequency.

By contrast, visible light has a frequency of just over 100THz (10^14 Hz) and optical signal frequencies lie in the
frequency range 10^14 Hz to 10^15 Hz.
In vacuum all electromagnetic waves travel at the same speed, no matter what their frequency. This speed,
denoted c, is the speed of light: approximately 3 × 10^8 meters per second. At that speed a signal travels
about 30 centimetres in a nanosecond. In copper or optical fibre the speed slows to about two-thirds of this
value and is somewhat frequency dependent.

The fundamental relationship between the frequency f, the wavelength λ, and the speed of light c (in a
vacuum) is

    λf = c                                                (4.1)




Frequency Band Name       Upper Limit   Highest Data Rate   Modulation      Typical Applications
Low (LF)                  300KHz        100bps              ASK, FSK, MSK   Telephone, Twisted pair wires
Medium (MF)               3MHz          1Kbps               ASK, FSK, MSK   Commercial AM radio, coaxial cable
High (HF)                 30MHz         3KBps               ASK, FSK, MSK   Shortwave radio, Television signals
Very High (VHF)           300MHz        100KBps             FSK, PSK        VHF Television, FM Radio
Ultra High (UHF)          3GHz          10MBps              PSK             UHF Television, Terrestrial Microwave
Super High (SHF)          30GHz         100MBps             PSK             Terrestrial and Satellite Microwave
Extremely High (EHF)      300GHz        750MBps             PSK             Terrestrial Microwave
Tremendously High (THF)   3THz          15GBps              PSK             Not Known

Table 4.1: Electromagnetic spectrum and its uses

Since c is a constant, if we know f we can find λ, and vice versa. If we solve equation 4.1 for f and
differentiate with respect to λ we get

    df/dλ = -c/λ²                                         (4.2)

Using finite differences rather than differentials, and ignoring signs, we obtain

    Δf = cΔλ/λ²                                           (4.3)

Thus, given the width Δλ of the wave band, we can compute the width Δf of the corresponding frequency
band. From the section on Fourier analysis we know that the wider the bandwidth, the higher the number of
harmonics and the higher the capacity or data rate. The Ku-band frequency range for satellite transmission
is 11.7 to 12.2GHz, or Δf = 500MHz, allowing for a capacity of easily 50Mbps, assuming that roughly
10 harmonics will allow for a good quality signal using a crude on-off coding technique. In reality, coding
techniques which give rise to a multiple of this rate are used.
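As a numerical check on Eq. 4.3, the Ku-band figures above can be reproduced with a few lines of Python (a sketch only; the constant and function names are illustrative):

```python
# Sketch: relate a wavelength band to its frequency band via Eq. 4.3,
# delta_f = c * delta_lambda / lambda^2, using the Ku-band figures.
C = 3.0e8  # speed of light in m/s

def freq_width(wavelength_m, wavelength_width_m):
    """Width of the frequency band corresponding to a wavelength band."""
    return C * wavelength_width_m / wavelength_m ** 2

# Ku band: 11.7 GHz to 12.2 GHz.
f_low, f_high = 11.7e9, 12.2e9
lam_long, lam_short = C / f_low, C / f_high  # higher frequency, shorter wavelength
delta_lam = lam_long - lam_short
lam_mid = (lam_long + lam_short) / 2

print(f"wave band width:      {delta_lam * 1000:.2f} mm")
print(f"frequency band width: {freq_width(lam_mid, delta_lam) / 1e6:.0f} MHz")  # ~500 MHz
```

The computed band width agrees with the 500MHz quoted above to within a fraction of a percent.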


4.3 Radio Transmission


Radio transmission refers to any wireless technique that uses radio frequencies (RF), i.e., in the 10KHz to
900MHz range, to transmit data. Cellular telephones, for example, generally operate in the 824-849MHz,
869-895MHz and 1800MHz ranges. Cordless telephones and wireless LANs (see Chapter 8) use the
902-928MHz range, whereas the 2.4-2.5GHz and 5.8-5.9GHz ranges are unregulated.
The properties of radio waves are frequency dependent. At low frequencies, radio waves easily pass through
obstacles such as buildings, but the power falls off sharply with distance from the source, approximately
as 1/r² for distance r from the sender. At higher frequencies radio waves tend to travel in straight lines and
to be deflected by obstacles, as any cellphone user knows. In general radio transmission is making a comeback,
largely through the success of cellular networks and the planned Bluetooth technology, which is intended
to link various mobile devices (telephones, laptops, Personal Digital Assistants (PDAs)) up to a range of 10
meters in a Personal Area Network (PAN).
A major application of Bluetooth technology is the ability to pay electronically at parking meters, for bus
tickets, at shop checkout points and so on through the use of Bluetooth-enabled devices, or so the theory goes.
The primary members of the Bluetooth consortium are 3COM (of Ethernet fame), Ericsson, IBM, Intel, Microsoft, Motorola, Nokia and Toshiba. In Chapter 12 cellular communication networks which are becoming
an increasingly important adjunct of computer networks will be discussed in more detail.


4.4 Terrestrial Microwave Signals


Microwave covers part of the UHF and all of the SHF band. Because of their high frequencies, microwave
signals are easily attenuated and deflected by obstacles, so that a microwave transmitter and receiver require
a clear line of sight. The frequencies are in the range 3 to 30GHz, and the higher the frequency, the higher
the potential bandwidth and therefore the higher the potential capacity (see Eq. 4.3).
The most common type of microwave antenna is the parabolic dish, which can be up to 3 meters in diameter.
With no intervening obstacles, the maximum distance between two microwave antennae conforms to the
formula

    d = 7.14 √(Kh)                                        (4.4)

where d is the distance between antennae in kilometers, h the height of the antennae in meters, and K is an
adjustment factor to account for the fact that microwaves are bent with the curvature of the earth and will
hence propagate further than the line of sight. The empirical value of K is 4/3. Two microwave antennae
at a height of 100 meters can thus be as far as d = 7.14 √(4/3 × 100) ≈ 82 kilometers apart. To achieve
transmission over longer distances, a series of microwave relay towers are used, as can frequently be observed
along the major highways in the country, where they carry voice, television and data signals.

As with any transmission system, a main source of loss is attenuation. For microwave and radio frequencies
the loss is given by the formula

    L = 10 log (4πd/λ)²  dB                               (4.5)

where d is the distance and λ is the wavelength, in the same units. Note that the loss varies with the square
of the distance, whereas in copper conductors it is linear. Attenuation is increased by rainfall (and flying
ducks), especially at frequencies above 10GHz.
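Equations 4.4 and 4.5 are easy to experiment with; the sketch below (function names are illustrative) reproduces the 82 kilometer example and estimates the free-space loss of a hypothetical 40 km hop at 12 GHz:

```python
# Sketch of Eq. 4.4 (line-of-sight distance) and Eq. 4.5 (free-space loss).
import math

def max_distance_km(antenna_height_m, k=4/3):
    """d = 7.14 * sqrt(K * h), with d in kilometres and h in metres."""
    return 7.14 * math.sqrt(k * antenna_height_m)

def free_space_loss_db(distance_m, wavelength_m):
    """L = 10 log (4 * pi * d / lambda)^2 dB, d and lambda in the same units."""
    return 10 * math.log10((4 * math.pi * distance_m / wavelength_m) ** 2)

print(f"two 100 m towers can be {max_distance_km(100):.0f} km apart")  # ~82 km
wavelength = 3.0e8 / 12e9  # 12 GHz -> 0.025 m
print(f"a 40 km hop at 12 GHz loses {free_space_loss_db(40_000, wavelength):.0f} dB")
```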

Another source of transmission impairment is interference between adjacent signals. With the growing
popularity of microwave, transmission areas overlap and interference is always a danger, which is why the
frequency bands have to be strictly regulated.
In general a microwave link is less costly to install than cable for moderate distances, has a high data rate
as far as wireless transmission goes, requires little to no maintenance and is fairly easy to implement. The
drawbacks we have mentioned already.

4.5 Satellite Communication


A communication satellite is in fact nothing more than a microwave relay station linking two or more
earth-based microwave transmitters/receivers, known as earth or ground stations. The satellite receives
signals on one frequency band (called the uplink), amplifies them, and transmits them down to earth on a
different frequency band (called the downlink), as illustrated in Fig. 4.1. This is necessary to avoid
interference between the signals. A single orbiting satellite will operate on a number of bands called
transponder channels. The bandwidth capability of satellite systems varies and depends on the frequency at
which the satellite transmits. Four common frequency bands are:
- C-band, with a 4-GHz downlink and a 6-GHz uplink (the uplink frequency is always higher than the
downlink).
(Hz is an abbreviation for Hertz, which is another name for cycles per second.)


Figure 4.1: Satellite communication: a transmitter sends signals up on the uplink and a receiver receives
them on the downlink

- Ku-band, with a 14-GHz uplink and a 12-GHz downlink.
- Ka-band, with a 28-GHz uplink and an 18-GHz downlink.
- A fourth band, above 30-GHz, is still under development.

In order for a communication satellite to work properly, it is generally required to remain stationary with
respect to an observer on earth, i.e., the sending/transmitting station. According to Kepler's law, the
orbital period of a satellite varies as the orbital radius to the 3/2 power. At an altitude of 36,000 km the
orbital period is 24 hours, so the satellite rotates at the same rate as the earth under it. An observer looking
at a satellite in a circular equatorial orbit thus sees it hanging at an apparently fixed spot in the sky. This
is essential, since otherwise an expensive tracking antenna would be needed. In fact the satellite does not
remain completely stationary but gently moves around a cubic area of about 50 Km in space. Such a
satellite is said to be in a geosynchronous orbit and the satellites are referred to as Geostationary Earth
Orbit (GEO) satellites. To prevent interference, GEO satellites cannot be placed any closer than 4° apart
at the same altitude. In other words, no more than 90 satellites can be parked in the equatorial orbit. In fact,
however, because they are so high up, only 8 GEO satellites are needed to cover the entire world.

Probably the main disadvantage of satellite communication is the distance involved. Although microwave
signals travel at 3 × 10^8 meters per second, a signal still requires about 125 milliseconds to reach the
satellite, giving a round-trip delay of a quarter of a second. At high transmission speeds 250 milliseconds is
a lifetime, compared to about 3 microseconds per kilometre for terrestrial microwave links.
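The delay figures above follow directly from the propagation speed; a small Python sketch:

```python
# Sketch: propagation delay to a GEO satellite (~36,000 km up) versus a
# terrestrial microwave hop, assuming a signal speed of 3 x 10^8 m/s.
C = 3.0e8  # m/s

def one_way_delay_ms(distance_km):
    return distance_km * 1000 / C * 1000  # metres / (m/s) -> seconds -> ms

print(f"GEO one-way delay: {one_way_delay_ms(36_000):.0f} ms")      # ~120 ms
print(f"GEO round trip:    {2 * one_way_delay_ms(36_000):.0f} ms")
print(f"terrestrial link:  {one_way_delay_ms(1) * 1000:.1f} us/km")  # ~3.3 us/km
```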
In order to avoid the problem with delay, several commercial satellite projects are being developed for
the sole purpose of data communication. One such project is the Teledesic Network project, whose main
investor is the Microsoft company. The Teledesic project will comprise 840 interlinked Low Earth Orbit
(LEO) satellites that are to provide a digital skin to the earth for access to voice, computer and video
communications beginning in 2002. It is intended to support 1 million full-duplex (bi-directional) E1 lines
(see Chapter 6), each of capacity 2.048 Mbps. Compared to GEO satellites, LEO satellites orbit anywhere
from 500 to 2000 Km above the earth. At the speeds at which they need to travel to stay aloft,
LEO satellites are only in sight of a terrestrial antenna for at most 15 minutes. Consequently, a LEO-based
satellite channel must be passed from one satellite to the next in the constellation. The propagation delay
on LEO satellites ranges from 40 to 450 milliseconds, making them better equipped to support real-time
applications such as video conferencing.
There are also so-called Medium Earth Orbit (MEO) satellite systems which, as the name indicates, orbit
from 9 000 to 20 000 Km away and have a propagation delay of at most 250 milliseconds. About 20 MEO
satellites will cover the surface of the globe.
To summarize, the following are the characteristics of a typical satellite:
- A single satellite is visible to roughly a quarter of the earth's surface.
- Within that area (sometimes called the footprint of the satellite), the cost of transmission is
independent of the distance.
- A network connection can be effected by simply pointing an antenna at the satellite.
- Very wide bandwidths are possible.
- The quality of transmission is high.
An interesting property of satellite broadcasting is precisely that: it is broadcasting. All stations under
the downward beam can receive the transmission, including pirate stations the common carrier may know
nothing about. The implications for privacy are obvious. Also, because a user can listen to the retransmission
of his own data, he can tell whether it is in error or not. This property has important implications for error
handling strategies.
It is important to know that a single optical fibre, costing a few cents, has more bandwidth than all the
satellites ever launched. Whereas satellite networks were originally considered to be the only means of
long-range communication, this is no longer the case. Inter-continental optical fibre cables are becoming
more and more common, are cheaper, and are more reliable and secure. Satellite links are still essential,
however, for communication with moving ground stations such as ships at sea.

4.6 Exercises

Exercise 4.1 List the advantages and disadvantages of terrestrial microwave communication.
Exercise 4.2 The following two alternatives are available for a point-to-point communication link:
1. a GEO satellite link with a 10Mbps capacity;
2. a terrestrial digital communication link with 2.048Mbps capacity.
Assume that after a message has been sent it needs to be acknowledged before the next one can be sent. If
the terrestrial link has a maximum message size of 1Kbit, how many bits must a message on the satellite
link have for it to have a lower delivery time?

Chapter 5

Channels and Their Properties


5.1 Objectives of this Chapter
There are several properties and concepts which are common to all communication links regardless of the
medium. In this chapter we shall learn about
- The directions of communication that are possible on a communication channel, called simplex,
half-duplex and full-duplex.
- Ways of synchronising the sender and receiver to ensure that they read and interpret the same bits.
- Individual network users do not always require a constant capacity all of the time. One can use this
fact to have several users share a common medium in a technique called multiplexing. Details of
Synchronous Time Division Multiplexing (STDM), Statistical TDM and Wave (or Frequency) Division
Multiplexing are important for a full understanding of a communication channel. This includes the
recent variations of Digital Subscriber Line (xDSL) which have been proposed or are being rolled out
by telecommunication companies (telcos).
- Finally we discuss the differences between circuit and packet switching and mention cell switching,
which is a variation of packet switching used in Asynchronous Transfer Mode (ATM) networks,
but which is not quite the same as packet switching.


5.2 Introduction
There are several properties and concepts which are common to all communication links regardless of the
medium. In this chapter we shall learn about these.

5.3 Simplex, Half-duplex and Full-Duplex Communication


A distinguishing feature of any data communication path is the direction of data transfer between two communicating systems. Data units may (see Figure 5.1):
1. travel in one direction only, from one system to the other, called simplex communication;
2. travel in both directions, but not simultaneously, called half-duplex communication (e.g. two-way
radios);
3. travel in both directions simultaneously, called full-duplex communication.

Figure 5.1: Simplex, Half-duplex and Full-duplex Communication


Two-way communication is complex, especially over interconnected networks. Protocols must be used
to make sure information is received correctly and in an orderly manner. This will be discussed in later
chapters.

5.4 Asynchronous and Synchronous Transmission


In order for computers and terminals to communicate, they must first notify each other that they are about
to transmit. Second, once they have begun the communication process, they must provide a method to keep


both devices aware of the ongoing transmission, i.e., a transmitter such as a terminal must transmit its
signal so that the receiving device knows when to search for and when to recognize the data as it arrives.
This requirement means that a mutual time base, or a common clock, is necessary between the receiving
and transmitting devices.
The transmitting machine first forwards to the receiving machine an indication that it is sending data. If the
transmitter sends the bits across the channel without prior notice, the receiver will likely not have sufficient
time to adjust itself to the incoming bit stream. In such an event the first few bits would be lost, perhaps
rendering the entire transmission useless.
This process is part of the communications protocol and is generally referred to as synchronization. As the
clocking signal on a line between two machines changes, it notifies the receiving device to examine the data
line at specific times. Clocking signals perform two valuable functions:
- they synchronize the receiver onto the transmission before the data actually arrives;
- they keep the receiver synchronized with the incoming bits.
Figure 5.2: Asynchronous and Synchronous Transmissions: in the asynchronous format each byte is
bracketed by START and STOP bits; in the synchronous format a sequence of bytes is bracketed by flag
or SYNC fields


Two data formatting conventions are used to help achieve synchronization. These two methods are shown
in Fig. 5.2. The first approach is called asynchronous formatting. With this approach each character has
start and stop bits (i.e. synchronizing signals) placed around it. The purpose of these signals is, firstly, to
alert the receiver that data is arriving and, secondly, to give the receiver sufficient time to perform certain
timing functions before the next character arrives.
A more sophisticated process is synchronous transmission. It uses separate clocking channels or a
self-clocking mechanism. Synchronous formats eliminate the intermittent start/stop signals around each
character and provide signals which precede and sometimes follow the data stream. The preliminary signals
are called sync bytes or flags and their principal function is to alert the receiver of incoming user data. This
process is called framing.
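To make the asynchronous format of Fig. 5.2 concrete, the toy sketch below frames a single byte with a start bit and a stop bit (the LSB-first bit order and line polarity are assumptions; this is not a real UART implementation):

```python
# Toy sketch of asynchronous framing: start bit, 8 data bits, stop bit.

def frame_byte(byte):
    """Return the line-level bit sequence for one asynchronously sent byte."""
    start = [0]                                 # start bit: line pulled low
    data = [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first (assumed)
    stop = [1]                                  # stop bit: line returns high
    return start + data + stop

print(frame_byte(ord('A')))  # 'A' = 0x41 -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```

Note the fixed overhead: every 8 data bits cost 10 bits on the line, one reason synchronous framing is more efficient for long blocks.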
Synchronization takes place at different logical levels, and unless this is understood properly some confusion
may arise later when Data Link Protocols are discussed. To start with, it is important to note that for
phase-coherent analog transmission, the demodulator requires carrier synchronism, that is, a precise
knowledge of the transmitted carrier signal phase and frequency.


Apart from this it is also necessary to maintain bit synchronism, by which is understood the receiver's ability
to receive a stream of bits from an incoming analog signal by sampling for the bits at the proper times. The
receiver needs to know when to start sampling the first bit, and the bit duration between consecutive bits.

In most cases, the transmitter/receiver is a synchronous modem, which will recover symbol timing and
provide bit timing in a separately received data timing signal. It is also possible that no bit synchronization
is provided by the modem, in which case, not surprisingly, it is referred to as an asynchronous modem.
Such modems contain no inherent clocking mechanism and are unaware of the bit rate. The receiving
modem turns analog signals into digital bits, but leaves the identification of the bits to the so-called channel
adaptor. The modem will work at any bit rate up to a certain maximum, and the data will be successfully
received if the two channel adaptors are set to the same rate.
When the receiver and transmitter are in proximity, binary signals can be transmitted directly between
receiver and transmitter. In that case bit synchronism can be maintained by
- a self-clocking binary signal, such as the Manchester code discussed in Sec. 2.5.1 on page 34, where a
signal transition occurs at the beginning of every symbol (bit) period, or
- a single shared clock for both the transmitter and receiver.
These methods for achieving bit synchronism between sender and receiver determine two basic forms of
transmission: asynchronous and synchronous. In an asynchronous transmission system, bit synchronism is
in fact maintained, but only when data transmission is taking place and not during idle periods. The amount
of data transmitted at a time consists of a single character (5 to 8 bits). In a synchronous transmission
system, bit synchronism is maintained at all times. Data Link Control (DLC) protocols normally used with
a synchronous transmission system are said to be synchronous protocols (and the DTEs using them are
called synchronous devices), while DLC protocols normally used with an asynchronous transmission system
are said to be asynchronous or stop-start protocols. We note that it is possible to use character-oriented
protocols for both synchronous and asynchronous transmission systems. One example is the American
National Standards Institute (ANSI) X3.28 protocol standard, which is similar to the character-oriented
Binary Synchronous Control (BSC) protocol described in Chapter 7. However, both are much more
sophisticated than the early start-stop protocols and we will classify them as synchronous protocols.

5.5 Multiplexing Signals


Multiplexing refers to the simultaneous transmission of several messages down a single channel.
Multiplexing is most frequently associated with optical signals; we present here, however, a general
description of multiplexing which applies to any signal.
Fig. 5.3 explains multiplexing in its simplest form. The multiplexor combines (multiplexes) the data from
n slow input lines and transmits it over a single high-capacity link to a demultiplexor. The demultiplexor
reverses the process in that it separates (demultiplexes) the single data stream into its components and
delivers them to the appropriate output lines.
Example 5.1 Consider a terminal network. With four-terminal Time Division Multiplexing (TDM), each
terminal is allocated one-fourth of the output time slots, regardless of how busy it is. If each of the terminals
operates at 1200 bps, the output line must run at 4 × 1200 = 4800 bps, since four characters must be sent
during each polling cycle.
(Also called a transceiver.)
(Discussed in Chapter 7.)


Figure 5.3: Multiplexing: n inputs pass through a multiplexor onto one link with n channels, and a
demultiplexor delivers n outputs


In the case of fibre optic channels, most long-distance fibre optic transmission links contain many optical
fibres. In exceptional cases each fibre carries only one message, for example a single telephone call. In
most cases, however, hundreds of messages are transmitted down the same optical fibre. The two most
widely used types of multiplexing for optical signals are Time Division Multiplexing (TDM) and Wavelength
Division Multiplexing (WDM), neither of which is, however, limited to optical signals.

5.5.1 Synchronous Time Division Multiplexing


Time Division Multiplexing involves the interleaving of bits, bytes or blocks of data from several channels
into one channel, all travelling at the same carrier frequency. It provides the user with the full channel
capacity but divides the channel usage into time slots.
Figure 5.4: Synchronous Time Division Multiplexing between buffered terminals and a host computer


Fig. 5.4 illustrates the structure of a typical STDM system. The TDM scans each device and organises the
data into a frame. The slots in the frame are fixed such that C_out ≥ N × C_in, where C_in is the
capacity of each input device in bits/s, C_out is the capacity of the output line in bps, and N is the number
of input devices.

The big disadvantage of synchronous TDM is that when an input device (e.g., a terminal) has no traffic, an output time slot is wasted. The output slots are filled in strict rotation, as in Fig. 5.5. If there are no data, dummy characters


are used. It is not possible to skip a time slot, because the receiving end keeps track of which character
came from which terminal by its position in the output stream. Initially, the multiplexer and the computer
synchronize themselves. Both know that the order to be used, for example, is 012301230123 ... The data
themselves carry no identification of their origin. If the multiplexer skipped a time slot when there were
no data from a sender, the receiver would get out of phase and interpret the origin of succeeding characters
incorrectly.
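The strict rotation and dummy-fill behaviour described above can be sketched as follows (a minimal illustration; the dummy character '-' and one-character slots are assumptions made for the sketch):

```python
DUMMY = b"-"  # placeholder sent when a terminal has no data

def sync_tdm_mux(queues):
    """Interleave one character per terminal per polling cycle.
    Empty queues contribute a dummy character, so slot position
    always identifies the source terminal."""
    frames = []
    while any(queues):
        frame = bytearray()
        for q in queues:  # strict rotation: 0, 1, 2, 3, ...
            frame += q.pop(0) if q else DUMMY
        frames.append(bytes(frame))
    return frames

def sync_tdm_demux(frames, n):
    """Reassemble each terminal's stream by slot position."""
    out = [bytearray() for _ in range(n)]
    for frame in frames:
        for i, ch in enumerate(frame):
            out[i] += bytes([ch])
    return [bytes(o) for o in out]

queues = [[b"A", b"B"], [], [b"X"], [b"P", b"Q"]]
frames = sync_tdm_mux([list(q) for q in queues])
print(frames)  # [b'A-XP', b'B--Q']
```

Note that the idle terminal (number 1) still consumes one slot per cycle, which is exactly the waste that statistical TDM removes.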

5.5.2 Statistical Time Division Multiplexing


Statistical multiplexers were developed to obviate the waste of empty slots. Vacant slots occur when an idle terminal has nothing to transmit in its slot. Statistical TDM multiplexers (STDMs) dynamically allocate the time slots among active terminals. Fig. 5.5 shows how the STDM compresses the empty slots out of the signal. It is for this reason that STDMs are also sometimes called concentrators.

Figure 5.5: Statistical Time Division Multiplexing (the multiplexer compresses the empty TDM slots out of the transmitted signal; each occupied STDM slot carries a control field identifying the source of the data it carries)


Fig. 5.5 illustrates the introduction of a control field in the STDM frame. It is used to identify the owners of the slots. The demultiplexer uses this control field to reassemble each channel's traffic at the receiving node and pass it on to the proper recipient.
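The idea can be sketched minimally: each occupied slot carries a control field naming its source channel, so empty slots can simply be omitted (the channel numbering and slot layout here are invented for illustration):

```python
def stdm_mux(inputs):
    """Keep only the non-empty slots, tagging each with its
    channel number so the demultiplexer can route it."""
    return [(ch, data) for ch, data in enumerate(inputs) if data]

def stdm_demux(slots, n_channels):
    """Route each slot's data to its owner using the control field."""
    out = {ch: b"" for ch in range(n_channels)}
    for ch, data in slots:
        out[ch] += data
    return out

slots = stdm_mux([b"", b"hi", b"", b"ok"])
print(slots)  # [(1, b'hi'), (3, b'ok')]
print(stdm_demux(slots, 4))
```

The price of the saved slots is the control field itself: a little per-slot overhead in exchange for not transmitting empty slots at all.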
Unfortunately, concentration has an inherent difficulty. If every terminal suddenly starts outputting data at its maximum rate, there will be insufficient capacity on the output line to handle it all, and some data may be lost. For this reason, concentrators are always provided with extra buffers in order to survive short data surges. The more memory a concentrator has, the more it costs, but the less likely it is to lose data. Choosing appropriate parameters for the output line bandwidth and concentrator memory size involves trade-offs. If


either is too small, data may be lost. If either is too large, the whole arrangement may be unnecessarily
expensive. Furthermore, the optimum choices depend on the traffic statistics, which are not always known
at system design time.
A difficulty with optical signals is that although the optical channel can carry a very large capacity, current electronic equipment cannot perform the multiplexing and demultiplexing at fast enough rates. For this reason, purely optical equipment is being developed to perform these functions, ensuring that the electrical components only have to cope with the signal once it has been demultiplexed down to its original data transmission rate. The TDM process is then called Optical Time Division Multiplexing (OTDM).

5.5.3 Wavelength Division Multiplexing


Wavelength (or Frequency) Division Multiplexing, illustrated in Fig. 5.6, does not involve a faster-changing signal at a single wavelength. Instead, several signals of different wavelengths are sent down a single fibre or channel simultaneously. This approach in effect divides the transmission frequency range (the bandwidth) into narrower bands, called subchannels. The subchannels are lower-frequency bands, each capable of carrying a separate voice or data signal. Consequently, FDM is used in a variety of applications such as telephone systems, radio systems, and cable television.

Figure 5.6: Wavelength Division Multiplexing (input signals in different bandwidths are combined by a multiplexer into one signal containing the frequencies of all three channels, separated by guardbands; at the receiving end the original signals are filtered out of the transmitted signal)


WDM allows each signal sent down the channel access to a fixed portion of the frequency spectrum. In the
case of optical signals, the light produced must therefore remain focused closely on a single wavelength,
so that different signals of different wavelengths do not interfere with each other, a phenomenon called
crosstalk.
The main problem with WDM is that each wavelength of light travelling down the channel needs a separate
repeater, adding to the cost of the system as the distance travelled is increased.

5.5.4 Combining WDM and STDM


The highest capacity system would be a hybrid of STDM and WDM, allowing for the greatest possible use
of a single link. A number of channels could be multiplexed together using STDM, until the practical limit


of the multiplexing and demultiplexing equipment is reached. Each of the resultant channels is then converted to a specific wavelength and multiplexed into, say, an optical fibre using WDM.
Optical fibre technology is now so far developed that it is possible to send hundreds of individual signals down a single fibre, allowing practically unlimited capacity without even considering STDM. This technology is known as Dense Wavelength Division Multiplexing (DWDM) and is bound to upset the traditional view in telecommunications that capacity is limited and costly.

5.6 Digital Subscriber Line (DSL)


In the deployment of a high-speed digital network to every office and household, the most challenging part is the link between the subscriber and the network service provider: the digital subscriber line, colloquially known as the last mile. Historically these twisted-pair links were installed to carry voice-grade signals in a bandwidth from 0 to 4 kHz. The wires are, however, capable of carrying signals up to 1 MHz and more.
ADSL is the best known of a family of new technologies designed to provide high-speed digital data transmission over ordinary telephone wires, and is now an ANSI standard. The term asymmetric refers to the fact that ADSL provides more capacity downstream, from the service provider to the subscriber, than upstream. The rationale is that most user transmissions are short commands or e-mail messages, whereas incoming traffic such as HTML pages can involve large amounts of data, including images or even video. ADSL uses frequency-division multiplexing (FDM) to exploit the 1 MHz capacity of a twisted pair in the way illustrated in Fig. 5.7.

Figure 5.7: Asymmetric Digital Subscriber Line (ADSL) technology using FDM only (POTS occupies the lowest band, below about 20-25 kHz; the upstream band runs from about 25 kHz to roughly 200 kHz; the downstream band runs from roughly 250 kHz up to 1000 kHz)
1. The lowest 25 kHz is reserved for the voice channel, known as POTS (plain old telephone service). The voice channel requires only the 0-4 kHz band; the additional bandwidth is there to prevent interference, or crosstalk, between the voice and data channels.
2. Use a technique known as echo cancellation, or otherwise FDM, to allocate two bands: a narrower upstream band and a wider downstream band. Echo cancellation is a signal-processing technique that allows the transmission of digital signals in both directions simultaneously on a single transmission line. Essentially, a transmitter must subtract the echo of its own transmission from the incoming signal to recover the signal sent by the other side, as illustrated in Fig. 5.8.

5.6 Digital Subscriber Line (DSL)

65

3. Use FDM within each of the upstream and downstream bands, where a single bit stream is split into multiple parallel bit streams, each carried in a separate frequency band.
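The subtraction at the heart of echo cancellation can be sketched numerically (an idealised illustration: a single, perfectly known echo coefficient and a linear echo are simplifying assumptions; real cancellers adaptively estimate the echo path):

```python
def cancel_echo(received, transmitted, echo_gain):
    """Recover the far-end signal by subtracting the known echo
    of our own transmission from what we hear on the line."""
    return [r - echo_gain * t for r, t in zip(received, transmitted)]

far_end = [0.5, -0.2, 0.8]   # signal sent by the other side
we_sent = [1.0, 1.0, -1.0]   # our own transmission
gain = 0.3                   # fraction of our signal echoed back

# What actually appears on the shared line: far-end signal plus our echo.
line = [f + gain * t for f, t in zip(far_end, we_sent)]

recovered = cancel_echo(line, we_sent, gain)
print(recovered)  # approximately [0.5, -0.2, 0.8]
```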

Figure 5.8: Asymmetric Digital Subscriber Line (ADSL) technology using echo cancellation (POTS remains below about 25 kHz; the upstream band overlaps the lower portion of the downstream band, whose lower edge is variable; the downstream band extends up to 1000 kHz)
When echo cancellation is used, the entire frequency range of the upstream channel overlaps the lower portion of the downstream channel. This has two advantages compared to the use of distinct frequency bands for upstream and downstream.
1. The higher the frequency, the greater the attenuation. With echo cancellation, more of the downstream bandwidth lies in the lower range.
2. The echo-cancellation design is more flexible should the upstream capacity need to change. The upstream channel can be extended upward without running into the downstream band; the area of overlap is merely extended.
The disadvantage of echo cancellation is the need for echo-cancellation logic at both ends of the line. ADSL provides a range of up to 5 km, depending on the diameter and quality of the twisted-pair cable.
ADSL is only one of a number of schemes recently proposed, collectively known as xDSL.
HDSL High data rate DSL was developed to provide 1.544 Mbps (the American T1 rate, see Sec. 6.3 on page 77). It is a symmetric service which uses an encoding scheme known as 2B1Q to provide the required data rate over two twisted-pair lines within a bandwidth that extends only up to 196 kHz. The low frequency allows ranges of about 3.7 km to be reached easily.
SDSL Although HDSL is attractive for replacing existing T1 lines, it is not suitable for the traditional last mile, since the typical residential subscriber has a single twisted pair to his house whereas HDSL requires two twisted pairs. SDSL was developed to provide the same type of service as HDSL but over a single twisted pair, also using 2B1Q encoding. Echo cancellation is used to achieve full-duplex transmission over the single pair. Note that SDSL does not support POTS.


UDSL Universal DSL is the European equivalent of HDSL, in that it will support E1 trunk rates of 2.048 Mbps rather than the T1 rate of 1.544 Mbps (see Sec. 6.3).
VDSL Very high data rate DSL is an asymmetric DSL service for twisted-pair lines that is in the process of being defined at the time of writing. Transmission rates are expected to range from 13 to 52 Mbps downstream, with upstream rates ranging from 1.5 Mbps upwards. Maximum distances will range to 1.2 km. VDSL will most likely be deployed as the last-mile part of a fibre-copper hybrid local loop, in which fibre is used for the link between the service provider and the neighbourhood, and VDSL over UTP is used for the link between the neighbourhood and the home. Services will include Internet access, video-on-demand and high-definition (digital) television (HDTV), amongst others.

5.7 Switching Techniques


Finally in this chapter we need to distinguish between the three most common ways of switching links between two communicating parties. All of the xDSL techniques discussed in the previous section provide a permanent point-to-point link between the service provider and the user, which is available whether utilised or not.

5.7.1 Circuit Switching


By contrast, when you (or your computer) place a telephone call, the switching equipment within the telephone system seeks out a physical copper path all the way from your telephone to the receiver's telephone. This technique is called circuit switching and is shown schematically in Fig. 5.9.

Figure 5.9: Circuit switching (switching exchanges (PSEs) within a PSDN are connected by physical circuits carrying logical channels; a virtual (call) circuit is established across them for the duration of a call)


Each of the four rectangles represents a carrier switching office (called an end office, toll office, etc). In
this example, each office has three incoming lines and three outgoing lines. When a call passes through a
switching office, a physical connection is (conceptually) established between the line on which the call came
in and one of the output lines, as shown by the dotted lines. In the early days of telephony, the connection
was made by having the operator plug a jumper cable into the input and output sockets.
The model shown in Fig. 5.9 is highly simplified of course, because parts of the copper path between the
two telephones may, in fact, be microwave links onto which thousands of calls are multiplexed. Nevertheless, the basic idea is valid: once a call has been set up, a dedicated path between both ends exists, and will
continue to exist until the call is finished.
An important property of circuit switching is the need to set up an end-to-end path before any data can be
sent. The elapsed time between the end of dialing and the start of ringing can easily be 10 sec, more on long
distance or international calls. During this time interval, the telephone system is hunting for a copper path,
as shown in Fig. 5.9. Note that before data transmission can begin, the call request signal must propagate all
the way to the destination and be acknowledged. For many computer applications, (e.g., point-of-sale credit
verification), long setup times are undesirable.
As a consequence of the copper path between the calling parties, once the setup has been completed, the only delay for data is the propagation time of the electromagnetic signal, about 6 milliseconds per 1000 km. Another consequence of the established path is that there is no danger of congestion: once the call has been put through, you never get busy signals, although you might get one before the connection has been established, due to a lack of internal switching or trunk capacity.
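The delay figure quoted above is easy to tabulate (a sketch using the text's rule of thumb of 6 ms per 1000 km; real delays also depend on the medium and any intermediate equipment):

```python
def circuit_delay_ms(distance_km, ms_per_1000km=6.0):
    """One-way propagation delay on an established circuit."""
    return distance_km / 1000.0 * ms_per_1000km

print(circuit_delay_ms(1000))   # 6.0
print(circuit_delay_ms(12000))  # 72.0, for a long intercontinental path
```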

5.7.2 Packet Switching

Figure 5.10: Conceptual view of packet switching


When the services provided over a channel occur in real time, as is the case with voice or video, a dedicated circuit is necessary because of the low tolerance for delays. Computer communication sessions are different, in that transmission occurs in spurts of a few milliseconds, and delays are in general not that critical. An e-mail message or a web page delivered to a remote server can afford to wait a few seconds. This nature of computer communication was recognised very early on, and a technique called packet switching was invented to allow many users to share the same communication channel. Rather than devote the entire circuit to one user, each message sent between users is packetised: each packet is given the address of the sender and receiver, much like a letter, plus a sequence number, and is sent off individually to its destination. At the receiving end the packets, if received without error, are reassembled in sequence into the message, either by the user or by the network, depending on the service provided. Fig. 5.10 illustrates this principle.
Note that each packet need not travel along the same route from source to destination, and because packets are numbered, individual packets can be resent by the origin if required. Much like cars travelling in a city network, if one route is congested or even closed, another route can be followed to get to the destination.
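The packetising and reassembly just described can be sketched as follows (the addresses and the 4-byte payload size are invented purely for illustration):

```python
def packetise(src, dst, message, size=4):
    """Split a message into numbered packets, each carrying the
    sender and receiver address, much like a letter."""
    return [{"src": src, "dst": dst, "seq": i, "data": message[p:p + size]}
            for i, p in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Packets may arrive in any order; sequence numbers restore it."""
    return b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = packetise("A", "B", b"HELLO WORLD")
pkts.reverse()  # simulate out-of-order arrival over different routes
print(reassemble(pkts))  # b'HELLO WORLD'
```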

5.7.3 Cell Switching


Cell switching is a newer concept, similar to packet switching. Whereas most protocols allow packets to be of any length, cells have a fixed length of 53 octets. Cells do not carry a source address, however, as we shall learn in Chapter 10, and always follow the same route from source to destination.
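The fixed-length property can be sketched as follows (the padding byte and the absence of any header are simplifications for illustration; real cell formats, covered in Chapter 10, carry header fields within the 53 octets):

```python
CELL_OCTETS = 53  # fixed cell length

def to_cells(message, pad=b"\x00"):
    """Chop a message into fixed 53-octet cells, padding the last
    one so that every cell on the line is the same length."""
    cells = []
    for p in range(0, len(message), CELL_OCTETS):
        cell = message[p:p + CELL_OCTETS]
        cells.append(cell + pad * (CELL_OCTETS - len(cell)))
    return cells

cells = to_cells(b"x" * 120)
print([len(c) for c in cells])  # [53, 53, 53]: every cell the same length
```

The uniform length is what makes very fast hardware switching possible: the switch never has to parse a variable length field.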

5.8 Exercises

Exercise 5.1 Explain what is meant by the term multiplexing. What is the difference between time division and wavelength division multiplexing?
Exercise 5.2 Why is frequency division multiplexing better suited to fibre optics than to copper cables?
Exercise 5.3 A voice line needs to transmit one byte of information every 125 µs. Assuming that the electrical-optical interface of a fibre optic system takes 625 ns to process one bit, how many voice lines can be multiplexed through the fibre using TDM?
Exercise 5.4 Can fibre be used to transmit analogue data? Explain.
Exercise 5.5 Twenty-four voice channels are to be multiplexed and transmitted over a twisted pair. What is the bandwidth required for FDM? Assuming a bandwidth efficiency (ratio of data rate to transmission bandwidth) of 1 bps/Hz, what is the bandwidth required for synchronous TDM using PCM?
Exercise 5.6 Ten 9600 bps lines are to be multiplexed using TDM. Ignoring overhead bits, what is the total capacity required for synchronous TDM? Assuming we wish to limit the average line utilization to 80 percent, and assuming each line is busy 50 percent of the time, what is the capacity required for statistical TDM?
Exercise 5.7 A character-interleaved time-division multiplexer is used to combine the data streams of a number of old-fashioned 110 bps asynchronous terminals for data transmission over a 2400 bps digital line. Each terminal sends asynchronous characters consisting of 7 data bits, 1 start bit and 2 stop bits. Assume that one synchronous character is sent every 19 data characters. Determine the number of


1. bits per character,
2. number of terminals that can be accommodated by the multiplexer.

An octet, as the name implies, is 8 bits (in other words, a byte) long. The former term is more frequently used in the telecommunications industry, for reasons which are not clear.


Chapter 6

Physical Data Communication Interfaces, Standards and Errors
6.1 Objectives of this Chapter
All signals which carry the logic of peer-to-peer communication travel along physical channels. In this
chapter we cover the most general protocols required at the Physical layer.
1. The electrical interfaces we need to know about for networking are RS-232C and X.21. There are many others, but they are not as fundamental as those we discuss.
2. In the optic signal world the standards are mercifully fewer, but still there are SONET and SDH which
are not the same, but are used for the same thing: Optic network interfaces.
3. Transmission errors are a fact of life, however good the transmission media are. The mere fact that a medium is not perfect requires one to check for errors in data communication. On completing the section on errors, the reader should know about:
4. Hamming distance and the difference between error correction and detection, including ordinary and
block parity.
5. The most important technique for error detection covered in this chapter is Cyclic Redundancy Coding
which, in its variations, is used almost exclusively for that purpose.
While some may consider the physical links and their standards the domain of electrical engineers and technicians, the computer scientist who does not understand what is happening at that level will certainly not understand the protocols and technologies at the higher layers as well as the one who does. That is why this chapter is included.



This section focuses on the standards which apply at the interface between the user equipment (also called
the Data Terminal Equipment (DTE) in the ITU-T standards) on a network and the network itself or Data
Circuit-terminating Equipment (DCE). An interface can be defined as the line of demarcation between two
pieces of equipment which, in order to operate harmoniously, must each obey a complementary set of
interface specifications.

6.2 Electrical Signal Interfaces


Both serial interfaces where a single bit is transmitted per bit period and parallel interfaces, where all the
bits constituting a character are transmitted during one bit period in parallel across the interface, are used in
practice. The latter interface is normally associated with high-speed devices, such as a printer, or with a terminal connected to a bus, such as the IEEE 488 standard, and in all cases carries digital signals. In this
section we shall concern ourselves with the more common serial interfaces.

6.2.1 Serial Analog Interface


One well-known standard of physical interconnection is the RS232C Serial Interface which, functionally, is the same as the international ITU-T Standard V24. This standard was developed by the Electronic Industries Association (EIA) and covers interfaces for speeds up to 20 000 bps. At higher speeds, ITU-T specification V35 applies.
Figure 6.1: RS-232C/V.24 interface (a calling DTE connects through its modem, over a switched or leased line, to the modem of the called DTE; the RS-232C/V.24 interface lies between each DTE and its modem)


V24 specifies the hardware and electrical interface between a synchronous terminal, i.e., a DTE using a
synchronous protocol (see Sec 5.4) and the modem on the one hand and the modem and a host computer on
the other hand. An equivalent ITU-T recommendation V21 applies to the interface between an asynchronous
DTE and a modem. In fact, the standard also applies to a remote DTE connected via an analogue circuit to
a DTE, but our description will be in terms of the former configuration. It is important to remember that
in either case the two pieces of digital equipment being connected are remote and that the connecting lines
carry analog signals, in other words, modems are involved.
Since manufacturers like to supply terminals which can be coupled to a computer either via a modem or
directly (when the distance involved is no more than about 50 meters) RS232C also caters for this latter
situation. Since the signals passing between a terminal and host computer are all digital it makes more
sense however, to send digital signals straight down the line via a simple interface commonly referred to as
a current loop interface.
The V24 recommendation includes a large number of circuits identified by function, name and number,
each of which represents a pin (see Figure 6.2) on the 25-pin standard connector. However, the allocation of
All ITU-T V-series recommendations are for data transmission using the telephone network.


circuit to pin is in fact part of an ISO (International Standards Organization) specification, ISO 2110, rather
than ITU-T. Many of the circuits are rarely used, and some are applicable only to synchronous modems
or modems with more elaborate facilities such as secondary backward channels. The discussion of every
connection is a technical issue which can be looked up in any manual, but we shall describe the role that
some of the circuits play in a typical DTE-DCE connection. Suppose the DTE is a PC and the DCE is a modem.

Figure 6.2: RS-232C Connector (25 pins, numbered 1 to 13 along the top row and 14 to 25 along the bottom row)

The first six circuits are primarily used to establish an exchange that ensures that neither device will
send data when the other is not expecting it. Fig. 6.3 shows the exchange over a period of time.

Figure 6.3: Sending and Receiving over an RS-232 Connection (the DTE sets DTR (data terminal ready) at t1; the DCE answers with DSR (data set ready) at t2; the DTE sets RTS (request to send) at t3; the DCE responds with CTS (clear to send) at t4; the DTE transmits data to the DCE over TD between t5 and t6; when an incoming signal arrives the DCE sets DCD at t7 and passes the received data to the DTE over RD)

Since the DCE interfaces with the network on behalf of the DTE, it must know when the DTE is ready. The DTE does
this by asserting (sending a signal) DTR circuit number 20 (time t1 in Fig. 6.3). The DCE senses the signal
and responds by connecting (if it has not already done so) to the network. Once the DCE has connected and
is also ready, it asserts DSR circuit number 6 (time t2). Effectively, the DCE acknowledges the DTE's state of readiness and declares that it too is ready.
Once they are both ready, the DTE requests permission to transmit data to the DCE by asserting RTS circuit
4 (time t3). This circuit also controls the direction of flow in the half-duplex communications. On sensing
RTS, the DCE enters transmit mode, meaning it is ready to transmit data over the network. It responds
by asserting the CTS circuit number 5 (time t4). Finally, the DTE sends data over TD circuit number 2
(between times t5 and t6).
When the DCE detects an incoming signal from the network it recognises, it asserts DCD circuit number 8
(time t7). As the signals come in, the DCE sends them to the DTE using RD circuit number 3.
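The handshake of Fig. 6.3 can be sketched as an ordered exchange of signals (the circuit names follow the text; the simulation itself is an invented illustration, not part of any standard):

```python
def rs232_handshake():
    """Replay the circuit assertions of Fig. 6.3 in order."""
    events = []
    events.append(("DTE", "DTR"))  # t1: DTE declares itself ready
    events.append(("DCE", "DSR"))  # t2: DCE acknowledges, also ready
    events.append(("DTE", "RTS"))  # t3: DTE asks permission to send
    events.append(("DCE", "CTS"))  # t4: DCE grants it
    events.append(("DTE", "TD"))   # t5-t6: data flows to the DCE
    return events

for who, circuit in rs232_handshake():
    print(f"{who} asserts {circuit}")
```

The point of the strict ordering is that neither side ever sends data the other is not prepared to receive.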


6.2.2 Serial Digital Interface


As early as 1969, ITU-T realized that eventually carriers would bring true digital lines onto customer premises. To encourage compatibility in their use, ITU-T proposed a digital signaling interface, X21, which was approved in 1976. The standard specifies how the customer's computer, or DTE in ITU-T nomenclature, sets up and clears calls by exchanging signals with the carrier's equipment, or DCE, for synchronous communication. The equivalent standard for asynchronous communication is X20.
In our ISO reference model X20 and X21, like V24/RS232 are at the physical level.
Before embarking on a description of the X21 interface it is worthwhile to remind ourselves of what we learned in Sec. 1.6 on page 11: communication facilities operate in four time phases:
1. The call-establishment phase concerns the setup of the communications carrier facilities so as to
provide communication between a calling DTE and a called station.
2. The data transfer phase, when the connection is established and data may be transferred.
3. The call-clearing phase concerns the release of the communications carrier facilities.
4. The idle phase equates to the on-hook condition in telephone networks, but may further indicate that
the DTE is Ready or Not Ready to accept a call.
Note that not all four phases need be present. The V24/RS232C interface described previously only allows
for phases 2 and 3, since the terminal is permanently connected to the host computer. The names and functions of the eight wires defined by X21 are given in Fig. 6.4.

Figure 6.4: Signal lines used in the X21 serial digital interface (from DTE to DCE: T (Transmit) and C (Control); from DCE to DTE: R (Receive), I (Indication), S (Signal, i.e., bit timing) and the optional B (Byte timing); plus Ga (DTE common return) and G (Ground))

The physical connector has 15 pins, but
not all of them are used. One of the basic principles agreed to early in the development of X21 was that
the interface should be transparent, that is, bit-sequence independent for all data in the data transfer phase.
Simplicity was also a prime objective as evidenced by the resulting circuits shown.
The DTE uses the T and C lines to transmit data and control information, respectively. The DCE in turn
uses the R and I lines for data and control, thus allowing for full duplex communication. Since synchronous
operation is specified, a third line provides bit timing from the network to the DTE. At the carrier's option,
a B line may also be provided to group the bits into 8-bit frames. If this option is provided, the DTE must
begin each character on a frame boundary. If the option is not provided, both DTE and DCE must begin
every control sequence with at least two special synchronisation SYN characters, to enable the other one

All X-series recommendations of ITU-T refer to data transmission using new data communication networks.


to deduce the implied frame boundaries. In fact, even if the byte timing is provided, the DTE must send
the two SYNs before control sequences, to maintain compatibility with networks that do not provide byte
timing.
The alphabet used to exchange control information is that of the International Alphabet No 5 (ITU-T Recommendation V3). These control characters are sent on the T and R circuits preceded by two SYN characters
(when not in the data transfer phase). This use of code strings (rather than circuits) for control information is
a major advantage of X21 in that it provides a practically unlimited number of control signals. ON and OFF
conditions for circuits C and I refer to continuous ON (binary 0) and continuous OFF (binary 1) conditions.
Although X21 is a long and complicated document, which references other long and complicated documents, the state transition diagram in Fig. 6.5 illustrates the main features. In that diagram we shall show
how the DTE places a call to a remote DTE, and how the originating DTE clears the call when it is finished.
To make the explanation clearer, we will describe the calling and clearing procedures in terms of an analogy
with the telephone system.
When referring to C and I, we will follow ITU-T practice and call one OFF and zero ON. When the line is
idle (i.e., no call present), the four signaling lines are all one (bottom half of the state descriptor of state 1 in
Fig. 6.5. When the DTE wishes to place a call, it sets T to 0 and C to ON. When the DCE is ready to accept
a call, it begins transmitting the ASCII '+' character on the R line: in effect, a digital dial tone, telling the
DTE that it may commence dialing.

The DTE dials the number by sending the remote DTE's address as a series of ASCII characters over the T line, one bit at a time. At this point the DCE sends what are called call progress signals to inform the DTE of the result of the call. The call progress signals, defined in ITU-T recommendation X96, consist of two-digit numbers, the first of which gives the general class of the result and the second the details. The general classes include: call put through; try again (e.g., number busy); call failed and will probably fail again next time (e.g., access barred, remote DTE out of order, DTEs incompatible); short-term network congestion; and long-term network congestion. If the call can be put through, the DCE sets I to ON to indicate that the data transfer may begin.
At this point a full-duplex digital connection has been established (state 13 in the diagram), and either side
can send information at will. Either DTE can say goodbye by setting its C line to OFF. Having done so,
it may not send more data, although it must be prepared to continue receiving data until the other DTE has
finished.
The procedure for incoming calls is analogous to that for outgoing calls. If an incoming call and an outgoing
call take place simultaneously, known as a call collision, the incoming call is canceled and the outgoing call
is put through. ITU-T made this decision because it may be too late at this point for some DTEs to reallocate
resources already committed to the outgoing call.
Carriers are likely to offer a variety of special features on X21 networks such as fast connect, in which
setting the C line ON is interpreted by the DCE as a request to reconnect to the number previously dialed.
This feature eliminates the dialing stage, and might be useful, for example, to place a separate call to a
time-sharing computer every time the person at the terminal hits return. Another possible X21 option is the
closed user group, by which a group of customers (e.g., company offices) could be prevented from calling
or receiving calls from anyone outside the group. Call redirection, collect calls, incoming or outgoing calls
barred and caller identification are other possibilities.
As an interim measure until all devices are digital (which is not soon), the connection to public data networks
of synchronous DTEs, which are designed for interfacing to old synchronous V-series (RS232C in USA)
modems. This conversion interface facility is called X21 BIS (X20 BIS for asynchronous communication).
It seems that there is no change required to existing products (that now use the V24 or RS232C interface) to

76 CHAPTER 6. PHYSICAL DATA COMMUNICATION INTERFACES, STANDARDS AND ERRORS


[Figure 6.5 shows the X21 state transition graph: the states Ready (1), Call request (2), Proceed to select (3), Selection signals (4), DTE waiting (5), DCE waiting (6A, 6B, 6C), Call progress signals (7), Incoming call (8), Call accepted (9), Called and Calling line identification (10, 10 bis), Connection in progress (11), Ready for data (12), Data transfer (13) and Call collision (15), each annotated with the values of the signalling circuits. Note 1: the DCE may enter state 19 from any state and the DTE may enter state 16 from any state. Note 2: states 6A, 6B and 6C are presented in the figure for convenience; they are all functionally equivalent.]

Figure 6.5: X21 protocol state transition graph

attach to an X21 network when using X21 BIS. The user may, however, have to pay extra for the X21 BIS interface, and many of the new facilities offered by X21 (for example, call-progress commands) are then not available to the user.

6.3 Optical Signal Interfaces


Two optical signal interface standards, one from the United States (for once also agreed to by Japan) and one from Europe, define a means of transmitting data at rates up to 2488.32 Mbps. These are the SONET (Synchronous Optical NETwork) and SDH (Synchronous Digital Hierarchy) protocols for broadband digital services. In this context, broadband refers to broad bandwidth, in other words, channels that carry a high capacity. Strictly speaking, broadband services are those requiring a transfer rate greater than 1.544 Mbps in North America (designated the T1 rate) and 2.048 Mbps in Europe (called the E1 rate); these rates are also
known as the primary transfer rates. The international standard for digital transmission is SDH, adopted by the ITU-T in its recommendations G.707, G.708 and G.709. SDH incorporates SONET principles. Services provided by SONET and SDH range from 64 kbps (the DS-0 signal, which carries one telephone channel) to 2488.32 Mbps. Table 6.1 lists the payloads supported by SONET/SDH.

Type     Digital Bit Rate   Voice Circuits   T1 Equivalents
DS-1     1.544 Mbps         24
CEPT1    2.048 Mbps         30
DS-1C    3.154 Mbps         48               2
DS-2     6.312 Mbps         96               4
DS-3     44.736 Mbps        672              28

Table 6.1: Typical payloads.
In the SONET system, three types of equipment are used, as shown in Fig. 6.6. Path-terminating equipment signifies the end of the transmission line. The path is an end-to-end transmission line that connects
two SONET terminals or switches, and includes both electrical and optical components. Line-terminating
equipment connects SONET hubs by means of optical fibre. The SONET hubs multiplex several transmission lines from SONET terminals onto a single optical transmission line. Section-terminating equipment
comprises the repeaters and amplifiers in the optical cable. Amplifiers increase the signal power without
halting the transmission of a signal; repeaters receive, amplify and retransmit signals to counter the effects
of signal loss. SONET takes digital channels of various capacities and maps them onto a basic transmission
rate unit called the Synchronous Transport Signal - Level 1 (STS-1). The STS-1 transmits frames of 810
bytes, or 6480 bits, at a rate of 8000 frames/second. A frame thus takes 125 μs to transmit, the chief reason being that a telephone call requires the transmission of one octet every 125 μs. STS-1 thus has a basic transfer rate of 51.840 Mbps (6480 bits x 8000 frames/s = 51,840,000 bit/s). Higher level signals are obtained by using add-drop multiplexers (ADMs) to multiplex (by byte interleaving the incoming signals) N STS-1 channels together to form an STS-N channel, with N times the capacity of the STS-1 channel. For example, the STS-3 channel transmits at a rate of 155.52 Mbps.
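These rate relationships follow directly from the frame size and frame rate given above; the sketch below (Python, purely illustrative since the text itself contains no code) derives the STS-N line rates.

```python
# Derive SONET STS-N line rates from the frame structure described above:
# an STS-1 frame is 810 bytes, sent 8000 times per second (once every 125 us).
def sts_rate_mbps(n: int) -> float:
    bits_per_frame = 810 * 8        # 90 columns x 9 rows x 8 bits per octet
    frames_per_second = 8000
    return n * bits_per_frame * frames_per_second / 1e6

print(sts_rate_mbps(1))   # 51.84  (the basic STS-1 rate)
print(sts_rate_mbps(3))   # 155.52 (the STS-3 rate quoted in the text)
```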
The frames transmitted on the STS-1 channel are 90 by 9 octets (8-bit bytes) (see Fig. 6.7). The first three
columns of the frame contain transport overhead (27 bytes), with 9 bytes allocated for section overhead and
18 bytes allocated for line overhead. The remaining 87 columns comprise the STS-1 envelope capacity,
including another 9 bytes which are reserved for the path overhead. The effective user capacity is thus 86
columns by 9 octets, giving 774 bytes and an effective transmission rate of 49.536 Mbps. Section overhead
controls data integrity between the regenerators, line overhead ensures correct transmission of the STS-N signals by the ADM multiplexers, and path overhead is carried end-to-end, only being checked at the final destination. SONET takes incoming digital channels of any transmission speed and sends them down the


This usage of the word broadband is not to be confused with broadband coaxial cable, which can carry multiple analog
channels simultaneously.



[Figure 6.6 shows a SONET configuration: a SONET terminal or switch (path-terminating equipment) connects through line-terminating equipment at a SONET hub, over an optical line containing regenerators, to a matching hub and terminal at the far end. A section lies between adjacent regenerators, a line connects the hubs, and the path runs end-to-end between the terminals.]

Figure 6.6: SONET configuration.


transmission line. Fig. 6.8 shows an example of the schemes used to multiplex the various signal speeds onto
the basic 51.84 Mbps transfer rate. Service adapters map the incoming signals into STS-1 envelopes. If the
transmission rate of the incoming line is below the STS-1 rate, then it is converted into Virtual Tributaries
(VTs); several of these VTs are multiplexed into one STS-1 channel. Examples of sub-STS-1 channels are
DS-0 (64 kbps) and DS-1 (1.544 Mbps). DS-3, with a speed of 44.736 Mbps, is converted straight into an STS-1 signal. Higher speed signals are initially converted into STS-Mc signals (where M is less than N, the final channel speed) before being multiplexed into STS-N signals. STS-Mc stands for concatenated STS-M; each STS-Mc envelope carries just one copy of the path overhead, whereas an STS-M signal carries M copies of the path overhead and is thus wasteful of bandwidth. An STS-Mc signal can only be sent if there are no intervening multiplexers between the sender and the receiver; it sets up a single high speed channel between the two multiplexers for signals requiring high bandwidth. Once the service adapters have created the STS-1 and STS-Mc channels, they are byte-interleaved by the add-drop multiplexers to form STS-N signals, and sent to the electrical/optical converter. At this point, the electrical signals are scrambled to ensure that there are enough signal changes for synchronisation to work, and then converted into an OC-N (Optical Carrier - Level N) optical signal. The OC-N signal, which traverses the optical fibre, carries the same transmission rate as the STS-N signal.


6.3.1 Synchronisation and SONET/SDH


Synchronisation is a tricky issue to define exactly. Strictly speaking, to qualify as synchronous, the source
and destination must be clocked by the same timing reference; signals that are sent and received will always
have exactly the same frequency, and cannot fall out of alignment. However, this definition means that
SONET cannot be synchronous, since the sender and receiver may be situated very far from each other.
In the SONET system sender and receiver send signals arbitrarily close to a predetermined frequency (e.g. the STS-1 rate); such a signal is referred to as plesiochronous. Over time, the sender and receiver will become skewed from each other, and frames would need to be repeated or deleted to handle buffer overflow and underflow. SONET allocates three bytes of line overhead to pointers to solve this problem. When the frames become out of alignment, these pointers are used to realign the frames. For example, if a DS-3 frame is embedded in an SDH frame, the pointer points to the start of the DS-3 frame. When the frame is received, the receiving station can then point directly to the DS-3 frame. If this frame is out of alignment, the pointer is just shifted


[Figure 6.7 shows the layout of an STS-1 frame: 90 octets wide by 9 rows deep, transmitted in 125 microseconds. The first 3 columns carry the transport overhead, with rows 1 to 3 holding the section overhead and rows 4 to 9 the line overhead; the remaining 87 columns form the STS-1 envelope, one column of which is the path overhead. The figure also shows the rate calculation: 90 octets x 9 rows x 8 bits per octet = 6480 bits, and 6480 bits x 8000 slots of 125 microseconds per second = 51,840,000 bit/s, or 51.840 Mbit/s.]

Figure 6.7: An STS-1 SONET frame.


up or down until it is again synchronously aligned.

6.3.2 The relationship between SONET and SDH


SONET is the optical protocol adopted by the North Americans and Japanese, whereas SDH is a worldwide protocol standard adopted by the ITU-T that incorporates SONET, and is used mainly in Europe. SONET's basic transmission rate is the STS-1 at 51.84 Mbps, whereas SDH operates at the STM-1 (Synchronous Transport Module - Level 1) rate of 155.52 Mbps. Table 6.2 lists the relationship between STS and STM signal speeds. The DS signals are of North American origin, while CEPT1 falls into the European family of signals. Although it would seem that STM-1 is equivalent to STS-3, it is actually only equivalent to STS-3c. STS-3c is the signal sent between two multiplexers with no other multiplexers in between the sender and the receiver; it is the concatenation of 3 STS-1 signals with only one copy of the path overhead present per STS-3c signal. An STS-3 signal, by contrast, contains three separate copies of the path overhead. Further minor differences between SDH and SONET exist in the overhead bytes, which have different positions and


[Figure 6.8 shows how the basic signals are mapped onto an STS-N channel. Service adapters map DS1 (28 max per STS-1), DS1C (14 max per STS-1) and DS2 (7 max per STS-1) to virtual tributaries, and DS3 (1 max per STS-1) directly to the 51.84 Mbit/s STS-1 rate; future services such as video, MANs, broadband ISDN and FDDI map to STS-Mc (concatenated STS-1s). A byte-interleaving synchronous multiplexer, driven by a common timing source, combines these into an STS-N signal at N x 51.84 Mbit/s, which an electrical-to-optical converter turns into the OC-N signal. (VT: virtual tributary; STS: synchronous transport signal; OC: optical carrier.)]

Figure 6.8: Mapping basic signals onto an STS-N channel.


OC-level    SONET      SDH       Line Rate
OC-1        STS-1                51.84 Mbps
OC-3        STS-3      STM-1     155.52 Mbps
OC-12       STS-12     STM-4     622.08 Mbps
OC-24       STS-24     STM-8     1244.16 Mbps
OC-48       STS-48     STM-16    2488.32 Mbps
OC-192      STS-192    STM-64    9953.28 Mbps

Table 6.2: Transmission speeds of SONET and SDH.


functions in the two protocols, but do not present too many problems for interoperability. Current physical
interface chips that are produced to support SONET usually also include an STM operation mode, thus
allowing switches using these chips to support either STS-3 or STM-1.

6.4 Transmission Errors


Physical transmission errors are a fact of life. As with the links in a chain, a communication channel is only
as good as its weakest link. On microwave links, and particularly fibre-optic channels (with an error rate of about 10^-9), errors occur very infrequently. Some links will still use a telephone line somewhere though,
and with the 40 year depreciation time on existing telephone plant, transmission errors are going to continue
being a fact of life. In this section, we shall investigate some of the problems more closely and see what can
be done to overcome them.

6.4.1 Sources of Transmission Errors


Transmission errors are caused by a variety of different phenomena as we learned in Sec. 2.4.4 on page 31.
Apart from noise we are reminded that the following are also sources of error:


Crosstalk can occur between two wires which are physically adjacent.

Microwave links are subject to fading, off-course birds (getting partially roasted in flight), and the like.

For voice transmission, it is desirable to compress the signal amplitude into a narrow range, because the amplifiers are not linear over wide ranges. This compression, called companding, can introduce errors.

It is not possible to produce a perfect carrier wave. Its amplitude, frequency and phase will always exhibit some jitter.

Errors are introduced whenever the receiver gets out of synchronization with the transmitter. Typically, it takes a few tenths of a millisecond to get back into synchronization, with all the data transmitted in the meanwhile delivered to the wrong destination.

6.4.2 Error Rates


The quality of a communication channel is measured by the Bit Error Rate, which we shall designate by p. For a copper channel p is typically about 10^-5, while for optical fibre it is more likely to be about 10^-9, as mentioned.

If a block of data contains n bits, then in the absence of error clustering, the probability that the block is transmitted without errors is

    (1 - p)^n

Inversely, the probability that the block is in error is

    P = 1 - (1 - p)^n

If np << 1, this can be expanded using a binomial expansion to give an approximate probability of block error equal to np. Empirical measurements, however, give block error probabilities that differ from this estimate for blocks containing 8-bit bytes and no start or stop bits; the results vary with line length, transmission speed, and other factors, so np is just an order of magnitude estimate.

In practice, however, as a result of the physical processes causing the noise, errors tend to come in bursts
rather than singly. Having the errors come in bursts has both advantages and disadvantages over isolated
single-bit errors. On the advantage side, computer data are always sent in blocks of bits. Suppose that the
block size is 1000 bits, and the error rate is 0.001 per bit. If errors were independent, most blocks would
contain an error. If the errors came in bursts of 100 however, only one or two blocks in 100 would be
affected, on the average. The disadvantage of burst errors is that they are much harder to detect and correct
than are isolated errors.
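The arithmetic in this paragraph is easy to verify; the sketch below (Python, illustrative only, using the figures from the text) contrasts the independent-error block error probability with the effect of error clustering.

```python
# Block error probability for an n-bit block with bit error rate p,
# assuming independent (non-clustered) bit errors.
def block_error_prob(n: int, p: float) -> float:
    return 1 - (1 - p) ** n

n, p = 1000, 0.001
print(block_error_prob(n, p))   # about 0.632: most blocks are hit
print(n * p)                    # the binomial approximation np overestimates here

# If the same average error rate arrives in bursts of 100 bits instead, only
# about 10 bursts occur per 10^6 bits, so at most ~10 of the 1000 blocks
# (one or two in a hundred) are damaged, as the text states.
```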

6.4.3 Error Codes


Network designers have developed two basic strategies for dealing with errors. One way is to include enough redundant information in each block of data sent to enable the receiver to deduce what the transmitted
character must have been. The other way is only to include enough redundancy to allow the receiver to
deduce that an error occurred, but not which error, and have it request a retransmission. The former strategy
uses error-correcting codes and the latter uses error-detecting codes.


[Figure 6.9 shows a transmitted and a received message, bit by bit. The received copy contains a 6-bit error burst preceded by a minimum of 6 error-free bits, and a 4-bit error burst preceded by a minimum of 4 error-free bits.]

Figure 6.9: Error burst examples.

6.4.4 Hamming Distance


Normally, a message consists of m message bits (i.e., useful data) and r redundant, or check, bits. Let the total length of the message be n bits, i.e., n = m + r. An n-bit unit containing data and check bits is often referred to as an n-bit codeword.

Given any two codewords, say 10001001 and 10110001, it is possible to determine how many corresponding bits differ. In this case, 3 bits differ. To determine how many bits differ, just EXCLUSIVE OR the two codewords, and count the number of 1 bits in the result. The number of bit positions in which two codewords differ is called the Hamming distance.

Its significance is that if two codewords are a Hamming distance d apart, it will require d single-bit errors to convert one into the other. When fewer than d single bit errors occur in such a code, no legal codeword can be converted into another legal codeword, and the error will be detected.

In most data transmission applications, all 2^m possible data messages are legal, but due to the way the check bits are computed, not all of the 2^n possible codewords are used. Given the algorithm for computing the check bits, it is possible to construct a complete list of the legal codewords, and from this list find the two codewords whose Hamming distance is minimum. This distance is the Hamming distance of the complete code.

The error-detecting and error-correcting properties of a code depend on its Hamming distance. As explained, to detect d errors, you need a distance d + 1 code. To correct d errors, one needs a distance 2d + 1 code, because that way the legal codewords are so far apart that even with d changes, the resultant pattern still differs in fewer bit positions from the original legal codeword than from any other legal codeword, and it can be determined uniquely.


6.4.5 Parity
The most common method used for detecting bit errors with asynchronous and character-oriented synchronous transmission is to use parity bits. With this scheme, the transmitter adds an additional bit, the
parity bit, to each transmitted character prior to transmission. The parity bit used is a function of the bits
that make up the character being transmitted. Hence, on receipt of each character, the receiver can perform
a similar function on the received character and compare the result with the received parity bit. If they are
equal, no error is assumed, but if they are different, then a transmission error is assumed to have occurred.
Example 6.1 As a simple example, consider first a code in which a single parity bit is appended to the data. The parity bit is computed by adding the number of 1 bits modulo 2, and is chosen so that the total number of 1 bits, including the parity bit itself, is either even (even parity) or odd (odd parity). Such a code detects single errors but cannot correct them. Now consider an error-correcting code with only four valid 10-bit codewords:

0000000000     0000011111     1111100000     1111111111

If the codeword 0000000111 arrives, containing 2 bit errors, the receiver knows that the original could only have been 0000011111 (on the assumption that no more than two bits changed). If, however, a triple error changes 0000000000 into 0000000111, the attempted correction gives the wrong answer. This is because the code, with Hamming distance 5, can correct double errors only.
Let us consider the general case: imagine that we want to design a code with m message bits and r check bits that will allow all single errors to be corrected. Each of the 2^m legal messages has n illegal codewords at a distance 1 from it, formed by systematically inverting each of the n bits in the n-bit codeword. Thus, to correct single bit errors, each of the 2^m legal messages requires n + 1 bit patterns dedicated to it. Since the total number of possible bit patterns is 2^n, we must have

    (n + 1) 2^m <= 2^n

Using n = m + r, this requirement becomes

    (m + r + 1) <= 2^r

Given m, this puts a lower limit on the number of check bits needed to correct single errors.
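The bound (m + r + 1) <= 2^r is easy to evaluate numerically; this small sketch (Python, illustrative) finds the minimum number of check bits for a given number of message bits.

```python
# Minimum r with (m + r + 1) <= 2**r: the lower bound on check bits
# needed to correct all single-bit errors in an m-bit message.
def min_check_bits(m: int) -> int:
    r = 0
    while m + r + 1 > 2 ** r:
        r += 1
    return r

print(min_check_bits(4))   # 3: the 7-bit Hamming code discussed next
print(min_check_bits(8))   # 4: an 8-bit word needs at least 4 check bits
```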

6.4.6 Error-Correcting Codes


This theoretical lower limit can, in fact, be achieved using a coding method due to Hamming. The Hamming coding method uses check bits placed among the data bits; each check bit covers a number of data bits. The check bits are placed in the power-of-2 positions, i.e., in bit positions 1, 2, 4, 8, etc. Each check bit forces EVEN parity on the bits it covers, including itself. On arrival at the receiver the parity is checked to determine whether an error occurred, and the bit in error is determined as we shall learn. Note that the Hamming code corrects single bit errors only.

With 4 data bits, 3 check bits are used, as indicated in Table 6.3. A data bit is checked by those check bits whose positions add up to the position of the data bit. For example, the data bit in position 3 is covered by the check bits in positions 1 and 2, because 1 and 2 add up to 3. Similarly, the data bit in position 5 is covered by the check bits in positions 4 and 1. This is made clear further in Table 6.4.

The value of the check bits is calculated by forming the EXCLUSIVE OR (for EVEN parity) of the values of the data bits they cover, as shown in the following example.



BIT POSITION   1    2    3    4    5    6    7
CHECK BIT      C1   C2        C4
DATA BIT                 D3        D5   D6   D7

Table 6.3: The Hamming code explained

DATA BIT IN POSITION 3 IS COVERED BY CHECK BITS 1 AND 2
DATA BIT IN POSITION 5 IS COVERED BY CHECK BITS 1 AND 4
DATA BIT IN POSITION 6 IS COVERED BY CHECK BITS 2 AND 4
DATA BIT IN POSITION 7 IS COVERED BY CHECK BITS 1, 2 AND 4
ETC.

Table 6.4: The Hamming code bit positions


Example 6.2 Consider a codeword with the data bits 1010110. The seven data bits fill positions 11, 10, 9, 7, 6, 5 and 3 in order, so the bits in those positions are 1, 0, 1, 0, 1, 1 and 0 respectively. Each check bit is the EXCLUSIVE OR of the data bits it covers:

CHECK BIT POSITION           8   4   2   1
DATA BIT POSITION  3 (= 0)           0   0
DATA BIT POSITION  5 (= 1)       1       1
DATA BIT POSITION  6 (= 1)       1   1
DATA BIT POSITION  7 (= 0)       0   0   0
DATA BIT POSITION  9 (= 1)   1           1
DATA BIT POSITION 10 (= 0)   0       0
DATA BIT POSITION 11 (= 1)   1       1   1
EXCLUSIVE OR                 0   0   0   1

The check bits are thus C8 = 0, C4 = 0, C2 = 0 and C1 = 1, and the transmitted codeword, positions 1 to 11, is 1 0 0 0 1 1 0 0 1 0 1.

When a codeword arrives at the receiver, the receiver initialises a counter to zero. It then examines each check bit k, k = 1, 2, 4, . . ., in the way explained in the previous table, to see if it has the correct parity. If not, it adds k to the counter. If the counter is zero after all the check bits have been examined (i.e., they were all correct) the codeword is accepted as valid. If the counter is non-zero, it contains the position of the incorrect bit. As shown in the following table, if checks 1, 2 and 8 are in error, the inverted bit is in position 11, because that is the only position checked by bits 1, 2 and 8.
Example 6.3 Using the codeword of the previous example, but inverting the data bit in position 11, the receiver recomputes the parity covered by each check bit (the received check bits are C8 = 0, C4 = 0, C2 = 0 and C1 = 1):

CHECK BIT POSITION           8   4   2   1
RECEIVED CHECK BIT           0   0   0   1
DATA BIT POSITION  3 (= 0)           0   0
DATA BIT POSITION  5 (= 1)       1       1
DATA BIT POSITION  6 (= 1)       1   1
DATA BIT POSITION  7 (= 0)       0   0   0
DATA BIT POSITION  9 (= 1)   1           1
DATA BIT POSITION 10 (= 0)   0       0
DATA BIT POSITION 11 (= 0)   0       0   0
EXCLUSIVE OR                 1   0   1   1

Checks 8, 2 and 1 have odd parity and thus fail, so the counter becomes 8 + 2 + 1 = 11, correctly identifying position 11 as the bit in error.
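The encoding and syndrome computation of the two examples can be sketched in a few lines. The helper below (Python; the function names are my own, not from the text) places check bits in the power-of-2 positions, forces even parity, and recovers the position of a single inverted bit.

```python
def _is_check(p: int) -> bool:
    # check bits occupy the power-of-2 positions 1, 2, 4, 8, ...
    return p & (p - 1) == 0

def hamming_encode(data):
    """Fill data bits into the non-power-of-2 positions (in ascending order),
    then set each check bit to force even parity over the positions it covers."""
    n, slots = 0, 0
    while slots < len(data):
        n += 1
        slots += 0 if _is_check(n) else 1
    bits, it = {}, iter(data)
    for p in range(1, n + 1):
        bits[p] = 0 if _is_check(p) else next(it)
    for c in (p for p in range(1, n + 1) if _is_check(p)):
        bits[c] = sum(bits[p] for p in range(1, n + 1) if p & c and p != c) % 2
    return [bits[p] for p in range(1, n + 1)]

def hamming_syndrome(code):
    """0 if all parity checks pass, else the 1-based position of the bad bit."""
    n, syndrome = len(code), 0
    for c in (p for p in range(1, n + 1) if _is_check(p)):
        if sum(code[p - 1] for p in range(1, n + 1) if p & c) % 2:
            syndrome += c        # failed check positions add up to the error position
    return syndrome

word = hamming_encode([1, 0, 1, 1])     # 4 data bits -> 7-bit codeword
print(word, hamming_syndrome(word))     # syndrome 0: no error
word[5] ^= 1                            # invert the bit in position 6
print(hamming_syndrome(word))           # 6
```

Note that this sketch fills the data positions in ascending order, whereas Example 6.2 fills them from position 11 downwards; the parity mechanism is the same either way.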


This is all very impressive until it is realized that the Hamming code for a 7 bit code word will correct
single bit errors only. The channel efficiency in terms of data bits transmitted, is 4 bits of data for every 7
bits transmitted, or 57%. To be able to correct 2 or more bit errors will require additional check bits which
would further lower the efficiency.
However, there is a trick that can be used to permit Hamming codes to correct burst errors. A sequence of k consecutive codewords is arranged as a matrix, one codeword per row. Normally, the data would be transmitted row-wise, one codeword at a time, from left to right. To correct burst errors, the data should instead be transmitted one column at a time, starting with the leftmost column. When all k bits of a column have been sent, the next column is sent, and so on. When the message arrives at the receiver, the matrix is reconstructed, one column at a time. If a burst error of length k occurs, at most 1 bit in each of the k codewords will have been affected, but the Hamming code can correct one error per codeword, so the entire block can be restored. This method uses kr check bits to make blocks of km data bits immune to a single burst error of length k or less.
Despite the low efficiency of forward error-correcting codes, such as the Hamming code, these codes do
have advantages. When the connection is a simplex channel then there is no alternative, while on some
communication systems involving long delays, such as on a satellite channel, the delay in requesting a
repeat transmission may be so long that a forward error-correcting code may be more economical.
The simplest way of detecting single bit errors in a codeword is to add a parity bit, as explained in the previous section. When blocks (or frames) of codewords are being transmitted, there is an increased probability that a codeword (and hence the block) will contain a bit error. In this case an extension of the error-detecting capability obtained by the use of a single parity bit can be achieved by using an additional set of parity bits computed from the complete block of codewords in the frame.

6.4.7 Block check characters


With this method, each codeword in the frame is assigned a parity bit as usual. This is called the transverse
or row parity. In addition, an extra bit is computed from the bits in each individual bit position of the
codewords in the whole frame. This is the longitudinal or column parity . The resulting set of parity bits for
each column is referred to as the block (sum) check character. The example illustrated in Fig. 6.10 uses odd
parity for the transverse or row bits and even parity for the column or longitudinal bits.
It can be deduced from the example that, although two bit errors in the same codeword will escape the row parity check, they will be detected by the corresponding column parity check. This is true, of course, only if no two bit errors occur in the same column at the same time, as illustrated by the highlighted bits in the example. The probability of this occurring is much less than the probability of two bit errors occurring in a single codeword. Hence the use of a block sum check significantly improves error detection.
Parity checking and block parity checking are still best suited to applications in which random single-bit
errors are present. When bursts of errors are present, however, a more rigorous method must be used. We
examine such a method in the next section.
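As a concrete illustration of row plus column parity, the sketch below (Python, with made-up 7-bit characters) computes odd row parity per character and an even-parity block check character, matching the conventions used in Fig. 6.10.

```python
# Odd transverse (row) parity per character, plus an even longitudinal
# (column) parity over the whole frame: the block check character (BCC).
def with_row_parity(ch):
    return ch + [1 - sum(ch) % 2]                 # makes the count of 1s odd

def block_check_character(rows):
    return [sum(col) % 2 for col in zip(*rows)]   # even parity per column

frame = [with_row_parity([1, 0, 0, 1, 0, 1, 1]),
         with_row_parity([0, 1, 1, 0, 0, 0, 1])]
print(frame)
print(block_check_character(frame))
# Two bit errors in one character escape its row check but flip two column
# parities; only matching errors in the same columns of another character
# defeat both checks, as the text explains.
```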

6.4.8 Cyclic Redundancy Coding


Although the above scheme may be adequate sometimes, in practice, another method is in widespread
use: the polynomial code (also known as a cyclic redundancy code or CRC code). Polynomial codes are
based upon treating bit strings as representations of polynomials with coefficients of 0 and 1 only. A k-bit message is regarded as the coefficient list for a polynomial with k terms, ranging from x^(k-1) to x^0. Such a


[Figure 6.10 shows the contents of a frame, one character (bits B0 to B6) per row, bracketed by STX and ETX, with an odd transverse (row) parity bit appended to each character. Below the frame, the even longitudinal (column) parity bits computed over the whole frame form the block check character (BCC), transmitted after ETX.]

Figure 6.10: Example of block sum parity checking


 

polynomial is said to be of degree k-1. The high-order (leftmost) bit is the coefficient of x^(k-1); the next bit is the coefficient of x^(k-2), and so on. For example, 110001 has 6 bits and thus represents a six-term polynomial with coefficients 1, 1, 0, 0, 0, 1: x^5 + x^4 + x^0.

Polynomial arithmetic is done modulo 2. There are no carries for addition or borrows for subtraction. Both
addition and subtraction are identical to EXCLUSIVE OR. For example:

  10011011        00110011        11110000        01010101
+ 11001010      + 11001101      - 10100110      - 10101111
----------      ----------      ----------      ----------
  01010001        11111110        01010110        11111010

Long division is carried out the same way as it is in binary except that the subtraction is done modulo 2, as
above. A divisor is said to go into a dividend if the dividend has as many bits as the divisor.
When the polynomial code method is employed, the sender and receiver must agree upon a generator polynomial, G(x), in advance. Both the high- and low-order bits of the generator must be 1. To compute the checksum for a message of m bits, corresponding to the polynomial M(x), the message must be longer than the generator polynomial. The basic idea is to append a checksum to the end of the message in such a way that the polynomial represented by the checksummed message is divisible by G(x). When the receiver gets the checksummed message, it tries dividing it by G(x). If there is a remainder, there has been a transmission error.


The algorithm for computing the checksum is as follows:


1. Let r be the degree of G(x). Append r zero bits to the low-order end of the message, so that it now contains m + r bits and corresponds to the polynomial x^r M(x).

2. Divide the bit string corresponding to G(x) into the bit string corresponding to x^r M(x), using modulo 2 division.

3. Subtract the remainder (which is always r or fewer bits) from the bit string corresponding to x^r M(x), using modulo 2 subtraction. The result is the checksummed message to be transmitted. Call its polynomial T(x).
Table 6.5 shows the calculation for a message 1101011011 and G(x) = x^4 + x + 1.

Message:                               1101011011
Generator:                             10011
Message after appending 4 zero bits:   11010110110000

              1100001010
        ------------------
10011 / 11010110110000
        10011
        -----
         10011
         10011
         -----
          00001
          00000
          -----
           00010
           00000
           -----
            00101
            00000
            -----
             01011
             00000
             -----
              10110
              10011
              -----
               01010
               00000
               -----
                10100
                10011
                -----
                 01110
                 00000
                 -----
                  1110  = remainder

Transmitted message: 11010110111110

Table 6.5: Calculation of the polynomial code checksum.
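The division in Table 6.5 can be reproduced mechanically; the sketch below (Python, illustrative) implements the three-step algorithm above as bit-string long division.

```python
# Modulo-2 long division: append r zero bits, XOR the generator into the
# message wherever a leading 1 remains, and keep the last r bits as remainder.
def crc_remainder(message: str, generator: str) -> str:
    r = len(generator) - 1
    bits = [int(b) for b in message] + [0] * r   # step 1: append r zero bits
    gen = [int(b) for b in generator]
    for i in range(len(message)):
        if bits[i]:                              # leading 1: subtract (XOR) generator
            for j, g in enumerate(gen):
                bits[i + j] ^= g
    return "".join(map(str, bits[-r:]))

rem = crc_remainder("1101011011", "10011")
print(rem)                    # 1110, as in Table 6.5
print("1101011011" + rem)     # transmitted message 11010110111110
```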


It should be clear that   is divisible (modulo 2) by   . In any division problem, if you diminish the
dividend by the remainder, what is left over is divisible by the divisor. For example, in base 10, if you divide
210278 by 10941, the remainder is 2399. By subtracting off 2399 from 210278, what is left over (207879)
is divisible by 10941.


Now let us analyze the power of this method. What kinds of errors will be detected? Imagine that a transmission error occurs, so that instead of the polynomial for T(x) arriving, T(x) + E(x) arrives. Each 1 bit in E(x) corresponds to a message bit that has been inverted. If there are k 1 bits in E(x), k single-bit errors have occurred. A single burst error is characterized by an initial 1, a mixture of 0s and 1s, and a final 1, with all other bits being 0. Upon receiving the checksummed message, the receiver divides it by G(x); that is, it computes

    [T(x) + E(x)] / G(x)

T(x)/G(x) is always 0, so the result of the computation is simply

    E(x) / G(x)

Those errors that happen to correspond to polynomials containing G(x) as a factor will slip by unnoticed, but all other errors will be caught.

If there has been a single-bit error, then E(x) = x^i, where i determines which bit is in error. If G(x) contains two or more terms, it will never divide x^i, so all single-bit errors will be detected.


If there have been two isolated single-bit errors, E(x) = x^i + x^j, where i > j. Alternatively, this can be written as E(x) = x^j (x^(i-j) + 1). If we assume that G(x) is not divisible by x, a sufficient condition for all double errors to be detected is that G(x) does not divide x^k + 1 for any k up to the maximum value of i - j (i.e., up to the maximum frame length). Simple, low-degree polynomials that give protection to long messages are known. For example, x^15 + x^14 + 1 will not divide x^k + 1 for any k below 32768.


If there are an odd number of bits in error, E(x) contains an odd number of terms (e.g., x^5 + x^2 + 1, but not x^2 + 1). Interestingly enough, there is no polynomial with an odd number of terms that has x + 1 as a factor in the modulo 2 system. By making x + 1 a factor of G(x), we can catch all errors consisting of an odd number of inverted bits.

To see that no polynomial with an odd number of terms is divisible by x + 1, assume that E(x) has an odd number of terms and is divisible by x + 1. Factor E(x) into (x + 1) Q(x). Now evaluate

    E(1) = (1 + 1) Q(1)

Since 1 + 1 = 0 (modulo 2), E(1) must be 0. But if E(x) has an odd number of terms, substituting 1 for x everywhere will always yield 1 as the result. Thus no polynomial with an odd number of terms is divisible by x + 1.


Finally, and most important, a polynomial code with r check bits will detect all burst errors of length r or less. A burst error of length k can be represented by x^i (x^(k-1) + . . . + 1), where i determines how far from the right-hand end of the received message the burst is located. If G(x) contains an x^0 term, it will not have x as a factor, so if the degree of the parenthesized expression is less than the degree of G(x), the remainder can never be zero.

If the burst length is r + 1, the remainder of the division by G(x) will be zero if and only if the burst is identical to G(x). By definition of a burst, the first and last bits must be 1, so whether it matches depends on the r - 1 intermediate bits. If all combinations are regarded as equally likely, the probability of such an incorrect message being accepted as valid is (1/2)^(r-1).

It can also be shown that when an error burst longer than r + 1 bits occurs, or when several shorter bursts occur, the probability of a bad message getting through unnoticed is (1/2)^r, assuming that all bit patterns are equally likely. Three polynomials have become international standards:







 

CRC-12    = x^12 + x^11 + x^3 + x^2 + x + 1
CRC-16    = x^16 + x^15 + x^2 + 1
CRC-CCITT = x^16 + x^12 + x^5 + 1

The last polynomial corresponds to ITU-T Recommendation V.41.

All three contain x + 1 as a prime factor. CRC-12 is used when the character length is 6 bits. The other two
are used for 8-bit characters. A 16-bit checksum, such as CRC-16 or CRC-CCITT catches all single and
double errors, all errors with an odd number of bits, all burst errors of length 16 or less, 99.997 percent of
17-bit error bursts, and 99.998 percent of 18-bit and longer bursts.
Although the calculation required to compute the checksum may seem complicated, it has been shown that
a simple shift register circuit can be constructed to compute and verify the checksums in hardware. In
practice, this hardware is nearly always used.
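The shift register mentioned here has a direct software analogue: shift the message, followed by r zero bits, through an r-bit register, XORing in the low-order bits of the generator whenever a 1 falls out of the top. The sketch below (Python, illustrative) does this for CRC-CCITT and, as a cross-check, for the small generator of Table 6.5.

```python
# Software analogue of the CRC shift-register circuit. `poly` holds the
# generator's low-order bits; the implicit x^width term is the bit shifted out.
def shift_register_crc(bits, poly, width):
    reg, mask = 0, (1 << width) - 1
    for b in bits + [0] * width:        # the appended zero bits flush the register
        msb = (reg >> (width - 1)) & 1
        reg = ((reg << 1) | b) & mask
        if msb:                         # a 1 shifted out: subtract (XOR) the generator
            reg ^= poly
    return reg

msg = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
print(bin(shift_register_crc(msg, poly=0b0011, width=4)))   # 0b1110, as in Table 6.5
print(hex(shift_register_crc(msg, poly=0x1021, width=16)))  # a CRC-CCITT checksum
```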

6.5 Exercises

Exercise 6.1 A voice line needs to transmit one byte of information every 125 μs. Assuming that the electrical-optical interface of a fibre optic system takes 625 ns to process one bit, how many voice lines can be multiplexed through the fibre using TDM?
Exercise 6.2 Can fibre be used to transmit analogue data? Explain.
Exercise 6.3 Explain why the SONET basic transmission rate (STS-1) is 51.84 Mbps.

Exercise 6.4 The probability that a bit is transmitted without error is p. A file of 1 MB needs to be transferred
between two computers. The data link layer transmits frames whose payloads are only 512 bits long. Derive
an expression for the average number of retransmissions required during the file transfer. You may assume
there is no error clustering.
Exercise 6.5 What should the Hamming distance be of a code that can correct 3 errors?
Exercise 6.6 If we require a code that will correct any single error in an 8-bit word, how many check bits
are needed?
Exercise 6.7 Using the Hamming code, what is the actual word sent if the data is 1001101?
Exercise 6.8 Assume we are using the Hamming code method. If we receive the word 1011111, is that word
in error?

90 CHAPTER 6. PHYSICAL DATA COMMUNICATION INTERFACES, STANDARDS AND ERRORS


Exercise 6.9 Explain why error detecting codes are used more frequently than error correcting codes. In
a high speed channel of 620 Mbps with a propagation delay of 20 ms, would it be better to use an error
detecting code or an error correcting one? Give reasons for your answer.
Exercise 6.10 The word 1001101 needs to be transmitted. Using a generator polynomial G(x), what is the actual transmitted message using Cyclic Redundancy Coding?

Chapter 7

Data Link Protocols


7.1 Objectives of this Chapter
In this discussion of the protocols which apply at layer two of the OSI reference model we shall cover the
following topics.
1. We first learn about the general functions of link layer protocols and the types of fields found in a
typical link protocol.
2. Link layer protocols are similar to many Wide Area Network protocols. We therefore use the opportunity to introduce general protocol design principles, in particular:
Error control and management,
Window flow control,
Piggybacking,
Pipelining, including the ARQ techniques of Go-back N and Selective Repeat, and
Positive acknowledgements.
3. We discuss the Synchronous Data Link Control (SDLC) protocol in considerable detail and conclude
with
4. Two Internet link protocols, namely SLIP and PPP.


The Data Link Layer (DLL) (also known as the Data Link Control (DLC) layer) sits above the physical layer
in the OSI model (Refer to Sec. 1.7 on page 12). The Data Link Layer is the first layer where the bits have
logical significance. This layer supervises the flow of information between adjacent network nodes. It uses
error detection or correction techniques to ensure that a transmission contains no errors. (In transmission, a
bit could become corrupted, but the physical level protocol would not know it). If the data link layer detects
an error, it can either request a new transmission or, depending on the implementation, correct the error. It
also controls how much information is sent at one time: Too much and the network becomes congested; too
little and receiving ends experience excessive waits.
At the DLL data are transmitted as frames, which consist of groups of bytes organized according to a
specified format. The DLL marks the beginning and the end of each outgoing frame with unique bit patterns
and recognises these patterns to define an incoming frame. It then sends error-free frames to the next layer,
the network layer (see Chapter 9). These frames usually involve the use of special control characters and
message headers with specified formats. Depending on the function to be performed, three distinct sets of
protocols can be distinguished, namely, device control protocols, data-link-control protocols and end-to-end
data-format protocols. These can be defined as follows:
1. A terminal or work station may have one or more input/output mechanisms, such as a keyboard, a
printer, a display, etc. Each device has attributes, such as carriage return, line feed, tabs, coordinate
positioning, new page, scroll, etc. Device Control refers to the sending of commands to activate or
use these device attributes. Such commands are the formats for the device-control protocol.
2. Data Link Control (DLC) concerns the operation of a data link between one DCE and another DCE
or between a DCE and a DTE. DLC protocols are needed to manage the operation of the link, which
may be shared by multiple stations. The protocols regulate the initiation, checking and retransmission
(if necessary) of each data transmission.
3. End-to-end data-format controls involve data-format indicators and headers to characterize the data
being sent rather than its transportation. This category of controls includes, for example, the following:
Delimiters of text, to indicate the end of text or the separation of text from a header that describes
the text.
Indicators for chaining together a group of data units.
Headers that identify the type of message being sent.
Indicators of whether or not a message obeys a prescribed format.
For these different protocols, the same control character came to mean different things. For example, a
negative acknowledgement (NAK) might occur because of a line error or because of buffer unavailability.
Since the end-user (usually an application) inserted these control characters, each application could assign
its own conditions for the use of the control character. It thus became impossible to handle link controls at
the link control level; all control characters had to be interpreted within the application. Thus, application
programs were written to take advantage of the existing protocols. They contained diverse control functions
that were both link- and device-dependent. Any change in a device or a line protocol required an application
rewrite.
Although some of the older, confused protocols still persist, clearly defined higher (second) level protocols
now exist with clearly defined functions. These are known as Data Link Control protocols, Line Control
procedures or Link Control procedures.
Basically, the job of a DLC protocol is to:


Figure 7.1: Conceptual model of Layer 2, 3 and 4 protocols (Layer 4 runs end-to-end between Host A and Host B; Layers 1 to 3 run hop-by-hop through DCE A, an intermediate node, and DCE B)


1. establish and terminate a logical connection between stations
2. handle the orderly transfer of data between them
3. ensure message integrity in those transfers.
We shall discuss data link control protocols mainly from the perspective of DCE-DCE or DCE-DTE communication.

7.2 Principles of Protocol Design


Before discussing particular DLC protocols we first discuss the design principles which one finds in any
error control protocol operating in a Wide Area Network such as those comprising the Internet. Although
we shall discuss the principles in the context of the DLL, they apply to a wide range of protocols at all levels
of communication.
In the ISO multilevel protocol hierarchy (see Fig. 1.7 on page 13) the application process, in principle
passes information (called PDUs; see Sec. 1.7) to Layer 6, which then gives it to Layer 5, the Session
Layer, which gives it in turn to Layer 4, the Transport Layer. The Transport Layer software splits the data
up into messages if need be, and attaches a Transport Layer header to the front of each message. It then
passes these messages to its DCE. The DCE attaches its own Network Layer headers to the front of each
message, thus forming a DCE-DCE packet. The layer 3 DCE-DCE packet header might contain the source
and destination addresses, routing information, etc. Eventually, the packet gets down to Layer 2, where a
frame header and frame trailer are attached for transmission, and the frame is sent down to the Physical
Layer 1 for transmission to the next DCE.
Our conceptual model, as shown in Fig. 7.1 is that a DTE A wants to send a long stream of data to DTE B.
Later, we will consider the case where DTE B also wants to send data to DTE A simultaneously, but for the
time being just consider simplex data transmission from A to B.
DTE A is assumed to have an infinite supply of data ready to send and never has to wait for data to be
produced. When the DCE asks for data, the DTE is always able to comply immediately. (This restriction,
too, will be dropped later.) As far as the Layer 2 software is concerned, the packet coming from the DTE
is pure data, every single bit of which is to be delivered to the other DTE. The fact that the receiving DTE
may interpret part of the packet as a header is of no concern to the Layer 2 software.


When a DCE accepts a packet, it uses the packet to construct a frame. The frame usually contains certain
control information in the header in addition to the message itself. The frame is then transmitted to the
next DCE. The transmitting hardware computes and appends the checksum so that the DCE software need
not worry about it. The polynomial algorithm discussed in Sec. 6.4.8 may, for example, be used for this
purpose.
Providing a rigid interface between DTE and DCE greatly simplifies the software design, because the communication protocols and DTE protocols can evolve independently. It would be terrible if every little change
to the lower-layer protocols required changing all the DTE operating systems. Since the DCEs in most networks are generally identical pieces of hardware, changing the DCE software merely requires changing one
program and then shipping it out to all the DCE sites (via the network itself, naturally). Changing the DTE
protocol is much more complicated if, as is often the case, the DTEs are not all identical. Such a change
requires modifying every operating system present on the network, a vast task.
When a frame arrives at the receiver, the hardware computes the checksum. If the checksum is incorrect
(i.e., there was a transmission error), the DCE is so informed. If the inbound frame arrived undamaged, the
DCE is also informed, so that it can do the necessary processing of the frame. As soon as the receiving DCE
has acquired an undamaged frame, it checks the control information in the header, and if everything is all
right, the packet portion is passed to the DTE. Under no circumstances is a frame header ever given to a
DTE.
A generic frame header is composed of four control fields:
1. KIND field: tells whether or not there are any data in the frame, because some of the protocols distinguish frames containing exclusively control information from those containing data as well.
2. SEQ field: used for sequence numbers.
3. ACK field: used for acknowledgements.
4. INFO field: contains one or more packets (the INFO field of a control frame is not used).
The first three fields clearly contain control information, and the last may contain the data to be transferred.

7.2.1 Acknowledgements
In principle any channel may be unreliable and may lose an entire frame on occasion. To be able to
recover from such calamities, the sending DCE must start an interval timer or clock whenever it sends a
frame. If no reply (called an ACKNOWLEDGEMENT or ACK) has been received within a certain predetermined time interval, the clock times out and the DCE receives an interrupt signal whereupon the frame is
re-transmitted.

7.2.2 Flow Control


Since we assumed that the sending DCE has an infinite supply of data, it continues to send one frame after
the other. One of the main problems we have to deal with is how to prevent the sender from flooding
the receiver with frames faster than the latter is able to process them. In very restricted circumstances, e.g.
synchronous transmission and a receiving DCE fully dedicated to processing the one input line, it may be
possible to time the sender such as to keep it from swamping the receiver. Usually, however, each DCE will


have several lines to attend to, and the time interval between a frame arriving and it being processed may
vary considerably. If the network designers can calculate the worst-case behaviour of the receiver, they can
program the sender to transmit so slowly that even if every frame suffers the maximum delay, there will be
no overruns.
A more general solution to this dilemma is to have the receiver provide feedback to the sender via the ACK
control frame mentioned above. In other words, the receiver sends a control frame back to the sender which,
in effect, gives the sender permission to transmit the next frame. After having sent a frame, the sender is
required to wait until the next control frame, i.e. acknowledgement arrives back. Protocols in which the
sender sends one frame and then waits for an acknowledgement before proceeding are called stop-and-wait
protocols.

7.2.3 Sequence Numbers


Now let us consider the unfortunately all-too-realistic situation of a communication channel that makes
errors. Frames may be either damaged or lost completely. We assume that when a frame is damaged in
transit, however, the receiver hardware will detect this when it computes the checksum. When the frame is
damaged in such a way that the checksum is nevertheless correct (an exceedingly unlikely occurrence), all
protocols can fail, i.e., deliver an incorrect message to the DTE as we shall discuss next.
At first glance it would seem that a combination of the time-out and acknowledgement mechanism would
solve the problem of transmitted errors. If a damaged frame arrived at the receiver, it would be discarded.
After a while the sender would time out and send the frame again. This process would be repeated until the
frame was finally correctly received and acknowledged.
This scheme has a fatal flaw in it however for the following reasons: Remember that it is the task of the
communications subnet to provide error free, transparent communication between DTEs. DTE A gives a
series of messages to its DCE A and the subnet must guarantee that the identical series of messages are
delivered to DTE B by DCE B, in the same sequence. In particular, neither DTE has a way of knowing,
under the present system, that a message made up of several packets, has been lost or duplicated, so the
subnet must guarantee that no combination of transmission errors, no matter how unlikely, can cause an
error message containing duplicate packets to be delivered to a DTE.
Consider the following sequence of events:
1. DTE A gives packet 1 to its DCE. The packet is correctly received at B and passed to DTE B. DCE B
sends an acknowledgement frame back to DCE A.
2. Now the acknowledgement frame itself gets lost completely. It just never arrives at all. Life would be
a great deal simpler if the channel only mangled and lost data frames and not control frames, but sad
to say, the channel is not very discriminating.
3. DCE A eventually times out. Not having received an acknowledgement, it (incorrectly) assumes that
its data frame was lost or damaged and sends the frame containing packet 1 again.
4. The duplicate frame also arrives at DCE B perfectly and is also unwittingly passed to DTE B. If A
is sending a file to B, part of the file will be duplicated, i.e. the copy of the file made by B will be
incorrect and the error will not have been detected. In other words, the protocol will fail.
Clearly, what is needed is some way for the receiver to be able to distinguish a frame that it is seeing for
the first time from a retransmission. The obvious way to achieve this is to have the sender put a sequence
number in the header of each frame it sends. Then the receiver can check the sequence number of each
arriving frame to see if it is a new frame or a duplicate to be discarded.


7.2.4 Piggybacking
So far we have assumed that data were transmitted in only one direction. In most practical situations, there is
a need for transmitting data in both directions. Typically, there will be several independent processes in DTE
A that want to send data to DTE B, and in addition, several processes on DTE B that want to communicate
with DTE A.
One way of achieving full-duplex transmission would be for each connection to have two separate communication channels. If this were done, we would have two separate physical circuits, each with a forward
channel (for data) and a reverse channel (for acknowledgements). In both cases the bandwidth of the reverse channel would be almost entirely wasted. In effect, the user would be paying the cost of two circuits,
but only using the capacity of one.
Figure 7.2: The Alternating Bit (AB) protocol (state transition graphs of the Sender and the Receiver)


A better idea is to use the same circuit for data in both directions, so that data frames from A to B would be
intermixed with the acknowledgement frames from B to A. By looking at the KIND bit in the header of an
incoming frame, the receiver can tell whether the frame contains data or is an acknowledgement.
Although interleaving data and control frames on the same circuit is an improvement over having two
separate physical circuits, yet another improvement is possible. When a data frame arrives at a DCE, instead
of immediately sending a separate control frame, the DCE restrains itself and waits until the DTE passes
it the next message. The acknowledgement is attached to the outgoing data frame (using the ACK field in
the frame header). In effect, the acknowledgement gets a free ride on the next outgoing data frame. The
technique of temporarily delaying outgoing acknowledgements so that they can be hooked onto the next
outgoing data frame is widely known as piggybacking.
The principal advantage of piggybacking over having distinct acknowledgement frames is a better use of the
available channel bandwidth. The ACK field in the frame header only costs a few bits, whereas a separate
frame would need a header, the acknowledgement, and a checksum. In addition, fewer frames sent means
fewer frame-arrival interrupts, and perhaps fewer buffers in the receiver, depending on how the receiver
software is organized. In most protocols the piggyback field rarely costs more than a few bits in the frame
header.


Piggybacking introduces a complication not present with separate acknowledgements. How long should the DCE wait for a data frame onto which to piggyback the acknowledgement? If the DCE waits longer than the sender's timeout period, the frame will be retransmitted, defeating the whole purpose of having acknowledgements. If the DCE were an oracle that could foretell the future, it would know when the next DTE packet was going to come in and could decide either to wait for it or to send a separate acknowledgement immediately, depending on how long the projected wait was going to be. Of course, the DCE cannot foretell the future, so it must resort to some ad hoc scheme, such as waiting a fixed number of milliseconds. If a new DTE message arrives quickly, the acknowledgement is piggybacked onto it; otherwise, if no new DTE packet has arrived by the end of this time period, the DCE just sends a separate acknowledgement frame.
These principles are illustrated in the state transition graphs of the SENDER and RECEIVER using the Alternating Bit (AB) protocol illustrated in Fig. 7.2. In that figure a symbol + in front of a message indicates that the message is received at that point, while a - indicates that the message is sent. Certain transitions in the SENDER state graph indicate the arrival of a message from the user (the network layer in our discussions) for transmission and, similarly, certain transitions in the RECEIVER graph the delivery of the corresponding message to the user at the receiving end. The term alternating-bit obviously derives from the single-bit sequence number of the protocol.
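The behaviour of Fig. 7.2 can be imitated with a few lines of simulation. The sketch below is our own construction under simplifying assumptions: losses are random drops, the timeout is implicit in the retry loop, and the function name, loss rate and seed are arbitrary.

```python
import random

def alternating_bit_transfer(messages, loss_rate=0.3, seed=7):
    """Send `messages` with the Alternating Bit protocol over a channel
    that loses data frames and acknowledgements at random."""
    rng = random.Random(seed)
    lost = lambda: rng.random() < loss_rate  # does the channel drop a frame?

    delivered = []   # what the receiving DTE ends up with
    expected = 0     # receiver: next sequence bit it will accept
    seq = 0          # sender: sequence bit of the current frame
    for msg in messages:
        acked = False
        while not acked:
            if lost():                 # data frame lost in transit:
                continue               # sender times out and retransmits
            if seq == expected:        # new frame: deliver and flip the bit
                delivered.append(msg)
                expected ^= 1
            # a duplicate is discarded, but acknowledged again
            if not lost():             # the acknowledgement got through
                acked = True
        seq ^= 1                       # alternate the sequence bit
    return delivered
```

Despite losses in both directions, every message is delivered exactly once and in order: alternating_bit_transfer(["a", "b", "c"]) returns ["a", "b", "c"].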

7.2.5 Sliding Window Protocols


A protocol which maintains a sequence number in the header of each frame it sends is often referred to as a
sliding window protocol.
Figure 7.3: A sliding window of size 1, with a 3-bit sequence number: (a) initially; (b) after the first frame has been sent; (c) after the first frame has been received; (d) after the first acknowledgement has been received.
In all sliding window protocols, each outbound frame contains a sequence number, ranging from 0 up to some maximum. The maximum is usually 2^n - 1 so that the sequence number fits nicely in an n-bit field. The stop-and-wait sliding window protocol uses n = 1, restricting the sequence numbers to 0 and 1, but more sophisticated versions can use arbitrary n. These principles are illustrated in Fig. 7.3.
The symbols ? and ! are also used in the literature to indicate receive and send respectively.


The essence of all sliding window protocols is that at any instant of time, the sender maintains a list of consecutive sequence numbers corresponding to frames it is permitted to send. These frames are said to fall within the sending window. Similarly, the receiver maintains a receiving window corresponding to frames it is permitted to accept. The sending window and the receiving window need not have the same lower and upper limits, or even have the same size.
Although this principle gives the DCEs more freedom about the order in which they send and receive frames, the requirement still exists that the frames must be delivered to the destination DTE in the same order that they were passed to the source DCE by the source DTE. Moreover, the requirement still exists that frames between DCEs are sent in the order received.
The sequence numbers within the sender's window represent frames sent but not yet acknowledged. Whenever a new packet arrives from the DTE, the corresponding frame is given the next highest sequence number, and the upper edge of the window is advanced by one. When an acknowledgement comes in, the lower edge is advanced by one. In this way the window continuously maintains a list of unacknowledged frames.
Since frames currently within the sender's window may ultimately be lost or damaged in transit, the sender must keep all these frames in its memory for possible retransmission. Thus if the maximum window size is w, the sender needs w buffers to hold the unacknowledged frames. If the window ever grows to its maximum size, the sending DCE must forcibly shut off the DTE until another buffer becomes free.
The receiving DCE's window corresponds to the frames it may accept. Any frame falling outside the window is discarded without comment. When a frame whose sequence number is equal to the lower edge of the window is received, it is passed to the DTE, an acknowledgement is generated, and the window is rotated by one. Unlike the sender's window, the receiver's window always remains at its initial size. Note that a window size of 1 means that the DCE only accepts frames in order, but for larger windows this is not so. The DTE, in contrast, is always fed data in the proper order, regardless of the DCE's window size.
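The sender-side bookkeeping described above can be sketched as follows. This is our own illustration: the class and method names are invented, and an acknowledgement advances the lower edge by one, as in the text.

```python
from collections import deque

class SenderWindow:
    """Sliding-window state kept by the sending DCE."""

    def __init__(self, max_size: int, n_bits: int = 3):
        self.max_size = max_size
        self.modulus = 2 ** n_bits   # sequence numbers are n-bit
        self.next_seq = 0            # upper edge of the window
        self.outstanding = deque()   # buffered frames awaiting acknowledgement

    def can_send(self) -> bool:
        # When the window is full the DCE must shut off the DTE.
        return len(self.outstanding) < self.max_size

    def send(self, frame) -> int:
        assert self.can_send()
        seq = self.next_seq
        self.outstanding.append((seq, frame))  # keep for possible retransmission
        self.next_seq = (self.next_seq + 1) % self.modulus  # advance upper edge
        return seq

    def ack(self) -> None:
        # An acknowledgement advances the lower edge by one.
        self.outstanding.popleft()
```

With max_size 4, four frames may be sent (sequence numbers 0 to 3); the window is then full until an acknowledgement frees a buffer.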
Until now we have made the tacit assumption that the transmission time required for a frame to arrive at the receiver plus the transmission time for the acknowledgement to come back is negligible. Sometimes this assumption is patently false. In these situations the long round-trip time can have important implications for the efficiency of the bandwidth utilization. As an example, consider a 50 kbps satellite channel with a 270 millisecond round-trip propagation delay. Let us imagine trying to send 1000-bit frames via the satellite. At t = 0 the sender starts sending the first frame. At t = 20 milliseconds the frame has been completely sent. Not until t = 135 milliseconds has the frame fully arrived at the receiver, and not until t = 270 milliseconds has the acknowledgement arrived back at the sender, under the best of circumstances (no waiting in the receiver and a short acknowledgement frame). This means that for a stop-and-wait protocol, the sender was blocked during 250/270, or about 93 percent of the time (i.e., only about 7 percent of the available bandwidth was used). Clearly, the combination of a long transit time, high bandwidth and short frame length is disastrous in terms of efficiency.
The problem described above can be viewed as a direct consequence of the rule requiring a sender to wait for an acknowledgement before sending another frame. If we relax that restriction, much better efficiency can be achieved. Basically the solution lies in allowing the sender to transmit up to w frames before blocking, instead of just 1. With an appropriate choice of w the sender will be able to transmit frames continuously for a time equal to the round-trip transit time without filling up the window. In the example above, w should be at least 13. The sender begins sending frame 0 as before. By the time it has finished sending 13 frames, at t = 270, the acknowledgement for frame 0 will have just arrived. Thereafter, acknowledgements will arrive spaced 20 milliseconds apart, so the sender always gets permission to continue just when it needs it. At all times, 12 or 13 unacknowledged frames are outstanding. Put in other terms, the sender's maximum window size is 13.

7.2.6 Pipelining

This technique is known as pipelining. If the channel capacity is b bits/sec, the frame size l bits, and the round-trip propagation time R sec, the time required to transmit a single frame is l/b seconds. After the last bit of a data frame has been sent, there is a delay of R/2 before that bit arrives at the receiver, and another delay of at least R/2 for the acknowledgement to come back, for a total delay of R. In stop-and-wait the line is busy for l/b and idle for R, giving a line utilization of l/(l + bR). If l < bR the efficiency will be less than 50 percent. Since there is always a finite delay for the acknowledgement to propagate back, in principle pipelining can be used to keep the line busy during this interval, but if the interval is small, the additional complexity is not worth the trouble.
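Evaluating the stop-and-wait utilization l/(l + bR) for the satellite example confirms the arithmetic; the function name is our own.

```python
def line_utilization(frame_bits: float, capacity_bps: float, rtt_s: float) -> float:
    """Stop-and-wait utilization: the line is busy for l/b seconds per
    frame and then idle for the round-trip time R, giving l / (l + bR)."""
    return frame_bits / (frame_bits + capacity_bps * rtt_s)

# 1000-bit frames over a 50 kbps channel with a 270 ms round trip:
u = line_utilization(1000, 50_000, 0.270)   # ~0.069, i.e. about 7 percent
```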

Pipelining frames over an unreliable communication channel raises some serious issues. First, what happens
if a frame in the middle of a long stream is damaged or lost? Large numbers of succeeding frames will arrive
at the receiver before the sender even finds out that anything is wrong. When a damaged frame arrives at
the receiver, it obviously should be discarded, but what should the receiver do with all the correct frames
following it? Remember that the receiving DCE is obligated to hand packets to the DTE in sequence.
There are various approaches to dealing with errors in the presence of pipelining. One way, called Go-back
N, is for the receiver simply to discard all subsequent frames, sending no acknowledgements as illustrated in
Fig. 7.4. This strategy corresponds to a receive window of size 1. In other words, the DCE refuses to accept
any frame except the next one it must give to the DTE. If the sender's window fills up before the timer runs
out, the pipeline will begin to empty. Eventually, the sender will time out and retransmit all unacknowledged
frames in order, starting with the damaged or lost one. This approach can waste a lot of bandwidth if the
error rate is high.
Another general strategy for handling errors when frames are pipelined, called Selective Repeat, is to have
the receiving DCE store all the correct frames following the bad one. When the sender finally notices that
something is wrong, it just retransmits the one bad frame, not all its successors. If the second try succeeds,
the receiving DCE will now have many correct frames in sequence, so they can all be handed off to the DTE
quickly and the highest number acknowledged. This technique is also illustrated in Fig. 7.4.
Accepting frames out of sequence introduces certain problems not present in protocols in which frames are
only accepted in order. We can illustrate the trouble most easily with an example. Suppose that we have a
3-bit sequence number, so that the sender is permitted to transmit up to eight frames before being required
to wait for an acknowledgement. Simultaneously, the receiver's window allows it to accept any frame
with sequence number between 0 and 7 inclusive. Assume that the transmitter has a message consisting of
more than eight frames. Frames with sequence numbers 0 through 7 are therefore first sent to the receiver.
These eight frames arrive correctly, the receiver sends a piggybacked acknowledgement for frame 7 (since
0 through 7 have been received), and delivers the frames to the DTE. The receiver window remains open to
accept the next frames with sequence numbers 0,. . . , 7.
It is at this point that disaster strikes in the form of a lightning bolt hitting the microwave tower, and wiping
out the acknowledgements for frames 0, . . . , 7 which have just been transmitted to the sender. Since the
acknowledgement has been destroyed, the sender eventually times out and retransmits frames 0, . . . , 7.
When these frames arrive at the receiver, a check is made to see whether they are within the receiving
window. Unfortunately, they are and these (duplicate) frames will be accepted and (erroneously) delivered
to the DTE. Consequently, the DTE gets an incorrect message, and the protocol fails.
The essence of the problem is that after the receiver advanced its window, the new range of valid sequence
numbers overlapped the old one. The following batch of frames might be either duplicates (if all the acknowledgements were lost) or new ones (if all the acknowledgements were received). The poor receiver has
no way of distinguishing these two cases.


Figure 7.4: Pipelined ARQ protocols


The way out of this dilemma lies in making sure that after the receiver has advanced its window, there is no
overlap with the original window. To ensure that there is no overlap, the maximum window size should be
at most half the range of sequence numbers. For example, if 4 bits are used for sequence numbers, these
will range from 0 to 15. Only eight unacknowledged frames should be outstanding at any instant. That way,
if the receiver has just accepted frames 0 through 7 and advanced its window to permit acceptance of frames
8 through 15, it can unambiguously tell if subsequent frames are retransmissions (0 through 7) or new ones
(8 through 15). In general, the window size will be at most 2^(n-1) for an n-bit sequence number.
The protocols we have discussed so far are also collectively known as Automatic Repeat Request or ARQ protocols.
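The half-the-sequence-space rule can be checked mechanically. The helper below (our own construction) asks whether the receiver's window, after advancing past one full window of frames, overlaps its previous range of acceptable sequence numbers:

```python
def windows_overlap(n_bits: int, window: int) -> bool:
    """True if, after accepting `window` in-order frames, the receiver's
    new window of acceptable sequence numbers overlaps the old one."""
    seq_space = 2 ** n_bits
    old = {i % seq_space for i in range(window)}
    new = {i % seq_space for i in range(window, 2 * window)}
    return bool(old & new)

# 3-bit sequence numbers: a window of 8 wraps onto itself, a window of 4 is safe.
windows_overlap(3, 8)   # True  -> retransmissions look like new frames
windows_overlap(3, 4)   # False -> window is half the sequence space
```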

7.2.7 Positive Acknowledgements


The final general principle we should like to discuss is a more efficient strategy than having only time-outs to
deal with errors. Whenever the receiver has reason to suspect that an error has occurred, it sends a negative
acknowledgement (NACK) frame back to the sender. Such a frame is a request for retransmission of the
frame specified in the NACK. There are two cases when the receiver should be suspicious: a damaged frame
has arrived or a frame other than the expected one arrived (potential lost frame). To avoid making multiple
requests for retransmission of the same lost frame, the receiver should keep track of whether a NACK has
already been sent for a given frame. If the NACK gets mangled or lost, no real harm is done, since the
sender will eventually time out and retransmit the missing frame anyway.
If the time required for a frame to propagate to the DCE, be processed there, and an acknowledgement
returned is (nearly) constant, the sender can adjust its timer to be just slightly larger than the normal time
interval expected between sending a frame and receiving its acknowledgement. However, if this time is
highly variable, the sender is faced with the choice of either setting the interval to a small value, and risking
unnecessary retransmissions, thus wasting bandwidth, or setting it to a large value, going idle for a long
period after an error, thus also wasting bandwidth. If the reverse traffic is sporadic, the time to acknowledgement will be irregular, being shorter when there is reverse traffic and longer when there is not. Variable
processing time within the receiver can also be a problem here. In general, whenever the standard deviation
of the acknowledgement interval is small compared to the interval itself, the timer can be set tight and
NACKs are not useful. Otherwise, the timer must be set loose, and NACKs can appreciably speed up
retransmission of lost or damaged frames.
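The timer-setting trade-off described here can be made concrete with a small adaptive-timeout sketch. The smoothing constants below are illustrative assumptions (borrowed from common round-trip estimation practice), not values given in the text:

```python
class AckTimer:
    """Keeps a running estimate of the acknowledgement interval and
    its variability, and derives a retransmission timeout from them.
    The smoothing constants are illustrative, not from the text."""
    def __init__(self, initial=1.0):
        self.mean = initial   # smoothed ack interval (seconds)
        self.dev = 0.0        # smoothed mean deviation

    def observe(self, sample):
        err = sample - self.mean
        self.mean += 0.125 * err              # exponential average
        self.dev += 0.25 * (abs(err) - self.dev)

    def timeout(self):
        # tight when the deviation is small relative to the mean,
        # loose (and NACKs become worthwhile) when it is large
        return self.mean + 4 * self.dev

t = AckTimer(initial=0.5)
for s in (0.5, 0.5, 0.5):     # very regular acknowledgements
    t.observe(s)
assert abs(t.timeout() - 0.5) < 0.01   # timer can be set tight
```

When the observed samples vary widely, `timeout()` grows with the deviation term, which mirrors the "set it loose and rely on NACKs" case in the paragraph above.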

7.3 Synchronous Data Link Control (SDLC)


The first of the DLC protocols we would like to mention is the Synchronous Data Link Control (SDLC)
protocol because (not surprisingly) it illustrates all the principles described in the previous section.
IBM orignally developed the Synchronous Data Link Control (SDLC) protocol, which ANSI modified
to become Advanced Data Communication Control Procedure (ADCCP), and ISO modified it to become
the High-level Data Link Control (HDLC). ITU-T then adopted and modified HDLC for its Link Access
Procedure (LAP) as part of the X.25 network interface standard, but later modified it again to LAPB, to
make it more compatible with a later version of HDLC.
The nice thing about standards is that you have so many to choose from; furthermore, if you do not like any
of them, you can just wait for next years model . . .
SDLC and its related family of protocols are in fact bit-oriented, in contrast to the mostly obsolete character-oriented protocols.

CHAPTER 7. DATA LINK PROTOCOLS


The reason that these protocols are referred to as bit-oriented is that each one allows data frames containing
an arbitrary number of bits. The frame size need not be an integral multiple of any specific character size.
Traditional protocols, e.g. the Binary Synchronous Communication (BSC) protocol, recognized the end of a frame by some special character, such as ETB or ETX. Since bit-oriented frames do not have to contain an integral number of characters, a new method is needed to delimit frames yet preserve data transparency, i.e.,
allow arbitrary data, including the frame delimiter pattern. The SDLC frame format is illustrated in Fig. 7.5.
    Flag     Address  Control  Information  FCS      Flag
    8 bits   8 bits   8 bits   variable     16 bits  8 bits

    (first bit transmitted on the left; LSB = Least Significant Bit,
    HDB = Highest Degree Bit of the remainder polynomial)

Figure 7.5: The SDLC frame format.


The various fields are as follows:
F Flag; used to delimit the frame.


A Address-field; primarily of importance on multidrop lines where it is used to identify a terminal.
C Control field; used for sequence numbers, acknowledgements and other purposes as discussed below.
FCS Checksum field; a minor variation of the well known cyclic redundancy code using CRC-CCITT as
the generating polynomial. The variation is to allow lost flag bytes to be detected.
The data itself is carried in the Information field.
SDLC is not a peer-to-peer balanced protocol but rather makes a distinction between the controlling or primary DTE and the secondary station or stations being controlled. In this way, a frame going from a primary station to a secondary station is called a link command; a frame going from a secondary station to the primary station is called a link response.

7.3.1 Flag field


Although the SDLC structure avoids the use of special characters within the transmission, SDLC does use a
unique bit sequence for the start and termination of each variable-length frame.
The flag used by SDLC is a unique sequence of bits, namely, a zero, six one bits, and a zero (01111110). All
receiving stations will continuously scan for this sequence, which signifies the beginning or end of a frame.
SDLC uses the same flag byte for beginning and ending the frame as do HDLC and ADCCP.



To avoid an occurrence of this flag bit pattern anywhere else in a frame, a technique called bit stuffing is
used. When transmitting, the sending station monitors the stream of bits being sent. Any time a contiguous
string of five 1 bits occurs, there is a possibility of an unintentional flag developing; therefore, the sending
station will automatically insert an extra 0 into the bit stream. Bit stuffing is illustrated in Fig. 7.6.

Figure 7.6: An example of bit stuffing

This 0-bit insertion prevents more than five contiguous 1s from ever appearing between the flags. Note that this applies to the address, control, and frame-check-sequence fields as well as to the information field, but not to the flag fields.
Then, when receiving, the station again monitors the bit stream. Whenever five consecutive 1s appear, the
sixth bit is examined. If it is a 0, the receiving station deletes it from the bit stream prior to presenting the
information to a higher level. If the sixth bit is a 1, then the sequence may be a flag or an error or, in the
case of an SDLC loop configuration, a go-ahead signal. If the seventh bit is a 0, the station accepts the
combination (01111110) as a flag; otherwise, it rejects the frame except in the case of a loop configuration.
Thus, using bit stuffing, the flag is kept unique to the beginning and end of a frame. Synchronization can
be established based on the appearance of the flag, and the positional significance of each field can be
established relative to the flag.
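As a sketch of the transmitter and receiver behaviour just described, the following Python functions stuff and destuff a bit stream. A real SDLC receiver would additionally examine the bit following five 1s for the flag, abort and go-ahead cases discussed in the text; this sketch assumes well-formed input:

```python
def stuff(bits):
    """Insert a 0 after every run of five consecutive 1s, as the
    SDLC sender does between the opening and closing flags."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # stuffed bit
            run = 0
    return out

def destuff(bits):
    """Delete the 0 that follows each run of five 1s at the receiver."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            i += 1          # skip the stuffed 0
            run = 0
        i += 1
    return out

data = [0, 1, 1, 1, 1, 1, 1, 0]       # six 1s in a row
assert stuff(data) == [0, 1, 1, 1, 1, 1, 0, 1, 0]
assert destuff(stuff(data)) == data
```

Because at most five contiguous 1s can ever appear between the flags of a stuffed stream, the flag pattern 01111110 remains unique.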
Recall that bit synchronization must be maintained during the entire frame, which may be quite long (limited by the buffers available at the receiving station and the ability to clear these buffers in time). Since
bit synchronization is maintained by the pace of the bit stream itself, it is necessary that there be an adequate number of signal polarity transitions occurring periodically. This is achieved by a combination of the
aforementioned bit stuffing and the use of codes such as the Manchester encoding (see Sec. 2.5.1 on page 34).
Thus, by a combination of techniques, there is assurance of establishment and maintenance of synchronization. The flag itself provides message synchronization and character synchronization, while the combination of bit stuffing and binary encoding ensures sufficient polarity transitions to maintain bit synchronization.
Some error (hardware or programming) may occur in the sending node during the transmission of a frame.
In that case, it may be best to abort and ignore the partial frame that was sent. To do this (on any SDLC link
other than a loop), the sending station ends the frame in an unusual manner by transmitting at least eight (but
fewer than fifteen) contiguous 1 bits, with no inserted 0s. Receipt of eight contiguous 1 bits is interpreted
by the receiving station as an abort.
A link is defined to be in an idle state when a continuous 1 state is detected that persists for 15 bit times.

7.3.2 Address field


At the link level it is necessary to allow for the possibility of more than one station, that is, more than one
input/output point, on a given line. Stations may be cluster controllers or hosts or terminals.


The SDLC address field is one byte long, permitting up to 254 stations on a given link. An address field
of all zeros is reserved, e.g., for testing purposes, and an address of all ones is reserved for a broadcast to
all-stations. Each station address pertains to only one station on one link, and it is completely separate
from the network address of the logical unit. In fact, a message travelling between one DTE and another may have to traverse many links along its way.
The station address always pertains to a secondary station on a particular link. The primary station is that
one on the link that directs the multiple use of the link. The primary station also has responsibilities dealing
with initial link activation, link deactivation, and recovery from error situations. All dialogues take place
between the primary and secondary. The address field tells which secondary the frame is going to or coming
from.
A secondary may, however, have more than one address for receiving. It will always have its own unique
address for sending; but in addition, it may be part of one or more groups and have to recognize one or more
group addresses. A special case is the all-stations address which all secondaries must recognize.
The ISO standard (HDLC) and the ANSI standard (ADCCP) include provisions for an extended address
field. In extended mode, its address field would consist of a chain of bytes, with another address byte
following so long as the first bit in any address byte is zero (the first bit of the address is the least significant
bit). An example of an application where this mode might be useful would be the use of long aircraft IDs as
the address, which could be displayed directly without address to ID translation.
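One plausible rendering of this extended addressing scheme in Python is shown below. The exact bit layout in the standards differs in detail, so treat this only as an illustration of the continuation-bit idea (7 address bits per byte, low-order bit flagging the last byte):

```python
def encode_address(value):
    """Encode an arbitrary-size station address as a chain of bytes:
    7 address bits per byte, with the least significant bit of each
    byte used as a continuation flag (0 = another address byte
    follows, 1 = last byte).  Illustrative layout only."""
    out = []
    while True:
        byte = (value & 0x7F) << 1
        value >>= 7
        if value == 0:
            out.append(byte | 1)   # final byte: flag bit = 1
            return bytes(out)
        out.append(byte)           # flag bit = 0: more to come

def decode_address(data):
    """Return (address, number of bytes consumed)."""
    value, shift = 0, 0
    for i, byte in enumerate(data):
        value |= (byte >> 1) << shift
        shift += 7
        if byte & 1:               # last byte reached
            return value, i + 1
    raise ValueError("address field not terminated")

addr = 0x1ABC                      # needs more than one byte
enc = encode_address(addr)
assert decode_address(enc) == (addr, len(enc))
```

The receiver can thus find the end of the address field without knowing its length in advance, which is the whole point of the extension mechanism.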

7.3.3 Control field


The control field provides the personality of the data link control. It permits a single information transfer
to serve multiple control functions. For example, one frame sent from the primary station to a secondary
station may be used to
- send information from a primary station to a secondary station,
- acknowledge to the secondary that one or more specified frames, previously sent by the secondary, have been validly received, error-free,
- poll the station, authorizing it to send any frames that it has ready.
Similarly, a single frame sent from a secondary station to the primary station may be used to
- send information from the secondary to the primary,
- acknowledge to the primary that one or more specified frames, previously sent by the primary, have been accepted as error-free,
- indicate whether this is the final frame of a transmission or if more frames are to follow immediately.
There are three different formats for the control field, with a few bits assigned to identify which format is being used. The three formats, shown in Fig. 7.7, are the following.
Information transfer format (I), having full sequence control, with a send sequence count, Ns, and a
receive sequence count, Nr, and (optionally) an information field.


Supervisory format (S), having a receive sequence count (Nr) but not a send sequence count, and used
to manage the link in normal operation.
Nonsequenced format (NS), having no sequence counts at all, and used primarily for setting operating
modes, exchanging identification, and other miscellaneous operations. (The HDLC and ADCCP
terminology for this format is Unnumbered rather than Nonsequenced.)
    First bit transmitted on the left:

    Information:    0   | Ns (3 bits) | P/F | Nr (3 bits)
    Supervisory:    1 0 | S (2 bits)  | P/F | Nr (3 bits)
    Nonsequenced:   1 1 | M (2 bits)  | P/F | M (3 bits)

    where Ns  = Send Sequence Count (bit 2 = low order bit)
          Nr  = Receive Sequence Count (bit 6 = low order bit)
          P/F = Poll Bit for Primary Station Transmission
              = Final Frame Bit for Secondary Station Transmission
          S   = Supervisory Function Control Bits
          M   = Modifier Function Control Bits

Figure 7.7: Three formats for the basic SDLC control field
Note that if the first bit of the control field is a 0, the I format is used; if the first bit is a 1, then the second bit tells whether an S format or an NS format is used. In all three formats 1 bit is used as a poll/final (P/F) bit. The primary station can use this bit to poll a secondary station. The poll bit also demands an acknowledgement (via the receive sequence count, Nr) of frames accepted by the secondary. The secondary station can use the same bit position to indicate the last frame to be sent (in response to a poll).
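These format-selection rules can be expressed as a small decoder. The sketch below assumes the standard HDLC bit layout, with the first transmitted bit in the low-order position of the byte; the field names are mine:

```python
def parse_control(byte):
    """Decode one SDLC control-field byte.  Bit 0 is the first bit
    transmitted (the low-order bit); the field positions follow the
    layout of Fig. 7.7."""
    pf = (byte >> 4) & 1                   # poll/final bit
    if byte & 1 == 0:                      # first bit 0 -> I format
        ns = (byte >> 1) & 0x7
        nr = (byte >> 5) & 0x7
        return {"format": "I", "Ns": ns, "Nr": nr, "P/F": pf}
    if byte & 2 == 0:                      # first bits 1 0 -> S format
        s = (byte >> 2) & 0x3
        nr = (byte >> 5) & 0x7
        return {"format": "S", "S": s, "Nr": nr, "P/F": pf}
    # first bits 1 1 -> NS (unnumbered) format: gather the M bits
    m = ((byte >> 2) & 0x3) | (((byte >> 5) & 0x7) << 2)
    return {"format": "NS", "M": m, "P/F": pf}

# An I-frame with Ns=2, Nr=5 and the poll bit set:
c = (2 << 1) | (1 << 4) | (5 << 5)
assert parse_control(c) == {"format": "I", "Ns": 2, "Nr": 5, "P/F": 1}
```

A receiver applies exactly this test: one bit decides I versus the rest, a second bit separates S from NS.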

7.3.4 Acknowledgement and Retransmission


Because duplex operation is a required capability, SDLC (like ADCCP and HDLC) adopted two independent
sets of sequence numbers, one for each direction of flow. This permits the recovery of one flow to proceed
independent of the sequence numbers of the other flow and avoids some possible timing ambiguities. Only
information frames are sequence checked.
Two separate number counts are therefore maintained by each station. The send sequence count (Ns),
operated modulo eight, provides a count for each information transfer frame that is transmitted from that
station. The receive sequence count (Nr), also modulo eight, is incremented once for each valid, in-sequence,
error-free, information frame that is received by that station. A secondary station, since it always transmits
only to the primary, needs to keep only one pair of these sequence counts. The primary station, on the other
hand, needs to keep one such pair for each secondary station to which it is transmitting. When one station
sends its receive sequence count (Nr) to the other station, it will serve to indicate the next frame that is expected and to acknowledge all the frames received up to (but not including) the value indicated by the
receive sequence number. Each subsequent Nr thus sent reconfirms that all preceding messages have been
accepted. Multiple frames can be acknowledged in a single frame.
In the SDLC discipline, an acknowledgement is required at least once every seven requests. Otherwise, with
modulo-eight, ambiguity in the response could result. The required verification also avoids buffer saturation.
If too many frames are kept pending verification of their receipt, it could develop that too many buffers are
full, leaving insufficient room for the receipt of data from the other end of the line. A lockup would then be
possible.
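The cumulative meaning of Nr can be illustrated with a few lines of Python. The function below is a simplified model of the sender releasing buffered frames on receipt of an Nr, not part of any standard:

```python
def acknowledged(outstanding, nr):
    """Remove and return the frames confirmed by a received Nr:
    every outstanding send sequence number before Nr, modulo eight.
    A simplified model of the sender's buffer release."""
    acked = []
    while outstanding and outstanding[0] != nr:
        acked.append(outstanding.pop(0))
    return acked

# Frames 5, 6, 7, 0 and 1 await acknowledgement; Nr = 0 confirms 5..7.
pending = [5, 6, 7, 0, 1]
assert acknowledged(pending, 0) == [5, 6, 7]
assert pending == [0, 1]          # 0 and 1 are still unacknowledged
```

A single Nr thus acknowledges multiple frames at once, which is why an acknowledgement at least once every seven frames suffices with modulo-eight counts.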
If the receiving station has only a few buffers, then these must be emptied rapidly enough for the frames
that are following. The sender, on the other hand, must have enough buffers to hold all frames sent until the
receiver has acknowledged them. The waiting time for the acknowledgement is at least twice the delay in
the transmission path, which is quite long in satellite transmissions.
It is important to realize that the performance of the link (in terms of the number of useful data bits transmitted per second) is dependent on the number of retransmissions that have to take place. The progress of
a message from one DTE via one or more DCEs to its destination DTE is held up while the retransmission
takes place. The transit times (hence, the response time) of that message, and of the others that might otherwise have gotten on the line sooner, are affected. Because of the uncertain reliability of the links, a very
positive and definite acknowledgement is called for at the link level. Thus sequence numbers at the link
level are applied to each of a possible series of links in the path between DTEs. It is intended to establish
reliable transmission on each link without undue recourse to higher level protocols.
It is important to realize that the link-level sequence numbers are separate from the sequence numbers
established for the dialogue between a source DTE and a destination DTE. Note that a given message with
one sequence number may be segmented into several frames, each of which would then have a unique link
sequence number for a particular link.
Note also that at the link level acknowledgements are mandatory, even though delayed. The send and receive
counts must each be verified at least every seven transmissions. On the other hand, acknowledgements at
the DTE-DTE level are optional and can be either none at all, exception response only, or definite response.
Thus a wider range of response correlation techniques is needed at that higher level to match different
application needs. This correlation is independent of the multiple link level correlations, which may be only
on segments of higher-level requests.

7.3.5 Supervisory Format


In the Supervisory format (S), two bits (the S-field) permit encoding of up to four control commands and
responses for the purposes of controlling information flow. Three such commands are used by SDLC while
a fourth is used by HDLC and ADCCP.
Type 0 is an acknowledgement frame (called RECEIVE READY (RR)) used to indicate the next frame (Nr) expected. This frame is used when there is no reverse traffic to use for piggybacking.
Type 1 is a negative acknowledgement frame (called REJECT). It is used to indicate that a transmission error
has been detected. The Nr field indicates the first frame in sequence not received correctly (i.e., the frame
to be retransmitted). The sender is required to retransmit all outstanding frames starting at Nr in Go-back N
fashion.
Type 2 is RECEIVE NOT READY (RNR). It acknowledges all frames up to but not including Nr, just like
RECEIVE READY, but it tells the sender to stop sending. RNR is intended to signal certain temporary problems with the receiver, such as a shortage of buffers, and not as an alternative to the sliding window
flow control. When the condition has been repaired, the receiver sends a RECEIVE READY, REJECT, or
certain control frames.
The type 3 command, SELECTIVE REJECT, is undefined in SDLC and LAPB, but is allowed in HDLC
and ADCCP. It calls for the retransmission of only the one frame specified in Nr in Selective Repeat ARQ fashion. It is therefore most useful when the sender's window size is half the sequence space size, or less. Thus if a receiver wishes to buffer out of sequence frames for potential future use, it can force the
retransmission of any specific frame using SELECTIVE REJECT.
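A receiver's choice among these supervisory frames might be sketched as follows. This is a simplified illustration of the rules above, with names of my own choosing, not a complete HDLC state machine:

```python
def supervisory_response(frame_seq, expected_nr, buffers_free,
                         selective=False):
    """Pick the supervisory frame a receiver might send in reply to
    an incoming I-frame.  Simplified illustration only."""
    if not buffers_free:
        return ("RNR", expected_nr)        # ack up to Nr, stop sender
    if frame_seq != expected_nr:           # a frame was lost or damaged
        if selective:                      # HDLC/ADCCP only
            return ("SREJ", expected_nr)   # resend just the missing frame
        return ("REJ", expected_nr)        # Go-back N from Nr
    return ("RR", (expected_nr + 1) % 8)   # in sequence: ack it

assert supervisory_response(4, 4, True) == ("RR", 5)
assert supervisory_response(6, 4, True) == ("REJ", 4)
assert supervisory_response(6, 4, True, selective=True) == ("SREJ", 4)
```

The `selective` flag stands in for the protocol variant in use: SDLC and LAPB would never emit SREJ, while HDLC and ADCCP may.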

7.3.6 Nonsequenced format


The third class of frame is the Nonsequenced format (NS). It is used for control purposes and the members of the SDLC family of protocols differ considerably here. Five bits are available to indicate the frame function, but not all 32 possibilities are used.
The first command provided is DISC (DISConnect), which allows a station to announce that it is going down (e.g., for preventive maintenance). There is also a command that allows a machine that has just come back on line to announce its presence and force all the sequence numbers back to zero.
Control frames may be lost or damaged, just like data frames, so they must be acknowledged too. A
special control frame is provided for this purpose, called NSA (Nonsequenced Acknowledgement). Since
only one control frame may be outstanding, there is never any ambiguity about which control frame is being
acknowledged. No I field is permitted with the NSA response.

7.4 The Data Link Layer on the Internet


The Internet consists of individual machines (hosts and routers), and the communication infrastructure connecting them. We shall examine the data link protocols used on point-to-point lines on the Internet, since most of the wide area infrastructure is built up from point-to-point leased lines.
Point-to-point communication is primarily used in two situations:
1. Organisations with one or more Local Area Networks (LANs, discussed in Chapter 8). Generally, all connections outside the LAN go through one or more routers that have point-to-point lines to other routers (which could connect to other LANs). These routers and their leased lines make up the communication subnet upon which the Internet is built.
2. Home users who connect to the Internet using modems and telephone lines.
In both cases, some point-to-point protocol is needed on the line for framing, error control and the other data
link layer functions we have seen so far. The two main data link protocols used on the Internet are Serial
Line IP (SLIP) and Point to Point Protocol (PPP).

7.4.1 Serial Line IP (SLIP)

SLIP is the older of the two protocols mentioned. It was originally devised to connect SUN Microsystems
workstations to the Internet over a dial-up line using a modem. It is a very simple protocol.


The workstation simply sends raw packets over the line, with a special flag byte (0xC0) at the end for
framing. If the end-of-frame character (0xC0) occurs inside the data packet, then the character sequence
(0xDB)(0xDC) is sent instead (a form of character stuffing).
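SLIP framing as described here is easily written out in Python. Note that RFC 1055 also escapes the escape byte 0xDB itself (as the pair 0xDB 0xDD), a case the text does not mention:

```python
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet):
    """Frame a raw packet for SLIP.  As in RFC 1055, a 0xC0 in the
    data becomes the pair 0xDB 0xDC, and a 0xDB in the data becomes
    0xDB 0xDD; the frame ends with the flag byte 0xC0."""
    out = bytearray()
    for b in packet:
        if b == END:
            out += bytes((ESC, ESC_END))
        elif b == ESC:
            out += bytes((ESC, ESC_ESC))
        else:
            out.append(b)
    out.append(END)                  # flag byte marks end of frame
    return bytes(out)

assert slip_encode(b"\x01\xc0\x02") == b"\x01\xdb\xdc\x02\xc0"
```

The receiver reverses the substitutions when it sees 0xDB and treats a bare 0xC0 as the end of the frame; this is the character-stuffing counterpart of SDLC's bit stuffing.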
More recent versions of SLIP do some Transmission Control Protocol (TCP) and Internet Protocol (IP) (see Chapter 9) header compression by taking advantage of the fact that consecutive packets often have
many header fields in common. Those fields which are similar to the previous packet are simply omitted.
Furthermore, those fields which do differ are sent as increments to the previous value.
SLIP has various shortcomings:
1. It has no error detection/correction. This is left to the higher layers.
2. SLIP only works in conjunction with the Internet IP network protocol, but the Internet also supports networks which don't have IP as their native protocol.
3. The IP addresses of both parties must be known in advance; neither address can be dynamically assigned during setup.
4. No authentication is provided.
5. SLIP is not an approved Internet standard, so many different (and incompatible) implementations exist.

7.4.2 Point-to-Point Protocol (PPP)

The Point-to-Point Protocol (PPP) was devised to avoid the SLIP shortcomings listed above. This was
designed by the IETF (Internet Engineering Task Force) to be an official Internet standard. PPP handles
error detection, supports multiple protocols, allows negotiation of IP addresses at connection time, provides
for authentication and has many other improvements over SLIP. PPP is a multiprotocol framing mechanism
suitable for use over modems, HDLC bit-serial lines, SONET and other physical layers.
In particular, PPP provides a:
1. Framing method that unambiguously delineates the end of one frame and the start of another. Error
detection is included in the frame format.
2. Link control protocol for controlling lines (bringing them up, negotiating options and taking lines
down). This is known as LCP (Link Control Protocol).
3. Way of negotiating network-layer options that is independent of the network-layer protocol to be used.
A different NCP (Network Control Protocol) exists for each network layer supported.
The following is a typical scenario of an individual using PPP to connect to the Internet:
The PC workstation calls the router at the Service Provider via a modem. After the router's modem has responded and established a physical connection, the PC sends a series of LCP packets in the payload of one or more PPP frames. These select the appropriate PPP parameters to be used.
Once these are agreed upon, a series of NCP packets are exchanged to configure the network layer. Typically, the PC wants to use the Internet TCP/IP protocol set, so it needs an IP address. There are not enough IP
addresses to go around, so normally each ISP gets a block of these addresses and dynamically assigns one
to each newly attached PC for the duration of its login session.


When the user ends his session, the NCP is used to tear down the network layer connection, and free up
the IP address the PC was assigned. The LCP is then used to shut down the data link layer connection. The
PC instructs the modem to hang up, thus releasing the physical connection.
The PPP frame format illustrated in Fig. 7.8 was chosen to resemble the HDLC (see Sec. 7.3) frame, since
there was no reason to reinvent the wheel. All PPP frames begin with the standard HDLC flag byte 01111110 (character stuffing is used if this appears in the payload field).

    Field:          Flag      Address   Control   Protocol  Payload   Checksum  Flag
    Value:          01111110  11111111  00000011  -         -         -         01111110
    Size in bytes:  1         1         1         1 or 2    variable  2 or 4    1

Figure 7.8: The PPP full frame format for unnumbered mode operation
The Address field is next. It is set to the value 11111111 to indicate that any station may receive this frame.
This avoids having to use data link addresses.
Next is the Control field, whose default value is 00000011. This indicates an unnumbered frame. In other
words, PPP does not provide reliable transmission using sequence numbers and acknowledgements as the
default, although reliable transmission using numbered mode can be specified.
Since the Address and Control fields are always the same in the default mode, LCP provides some header
compression by allowing these fields to be dropped.
The Protocol field specifies what kind of packet is in the payload (remember, different protocols are supported). Codes are defined for LCP, NCP, IP, IPX and others.
The Payload field is of variable length, up to some agreed maximum. A default length of 1500 bytes is used
if it is not specified (padding may follow the payload if need be).
Lastly, a Checksum field is appended, to provide for error detection.
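Putting the fields together, a default-mode PPP frame can be assembled as below. The 16-bit FCS routine follows the usual RFC 1662 style computation; byte stuffing of the body is omitted for brevity, so this is a sketch rather than a wire-exact implementation:

```python
def fcs16(data):
    """PPP 16-bit frame check sequence (RFC 1662 style): reflected
    CRC-CCITT, initialised to 0xFFFF and complemented at the end."""
    fcs = 0xFFFF
    for b in data:
        fcs ^= b
        for _ in range(8):
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs ^ 0xFFFF

def ppp_frame(protocol, payload):
    """Assemble a default-mode (unnumbered) PPP frame as laid out
    in Fig. 7.8, before any byte stuffing of the body."""
    body = bytes([0xFF, 0x03]) + protocol.to_bytes(2, "big") + payload
    fcs = fcs16(body)
    return bytes([0x7E]) + body + fcs.to_bytes(2, "little") + bytes([0x7E])

frame = ppp_frame(0x0021, b"hello")     # 0x0021 identifies IP in PPP
assert frame[0] == frame[-1] == 0x7E    # flag bytes
assert frame[1:3] == b"\xff\x03"        # all-stations address, UI control
```

The fixed 0xFF address and 0x03 control bytes are exactly the values shown in Fig. 7.8, which is why LCP can negotiate to drop them entirely.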

7.5 Exercises

Exercise 7.1 What are the functions of a Data Link Control protocol?
Exercise 7.2 A channel has a capacity of 64Kbps and a propagation delay of 30ms. If we are using a stop-and-wait protocol, what is the minimum frame size we need in order for the efficiency to be above 50%?
Exercise 7.3 Consider a 128Kbps satellite link with a 250 millisecond round-trip propagation delay. Assuming a frame size of 1000 bits, what is the maximum window size for a sliding window protocol?
Exercise 7.4 Contrast SLIP and SDLC.
Exercise 7.5 Explain why pipelining improves the efficiency of a channel.
Exercise 7.6 In SDLC, the bit string 01111110 indicates frame boundaries. Explain how bit stuffing is used to preserve data transparency. Use the data string 1011 1111 0101 1111 0011 1111 1010 as an example.


Exercise 7.7 Explain the term piggybacking. Include in your discussion any advantages and disadvantages of piggybacking.
Exercise 7.8 What is the purpose of the sliding window protocol? How is it implemented and what is the
maximum size of the window using 7 bits to store the sequence number?

Part III

Network Protocols


Chapter 8

Local Area Networks


8.1 Objectives of this Chapter
Local Area Networks are just what the name implies: networks connecting workstations, file servers and print servers in a geographical area that is restricted relative to WANs. The latter implies several simplifications compared to WANs. On completing this chapter, the reader will be familiar with the following:
1. Reasons for LANs and the implications for their protocols.
2. How LANs are classified on the basis of their topology and the ways in which they access the common transmission medium, known as Media Access Control (MAC) protocols.
3. We first learn about the Logical Link Control protocol which is used by all LANs.
4. We study the various possible topologies, and classify them into random access techniques (the CSMA family) and
5. token passing or deterministic protocols.
6. Next we shall learn about the Ethernet protocol in detail, including
7. the principles of the FastEthernet protocol, a network with a 100Mbps capacity.
8. Amongst the token passing protocols, we shall learn about the Token Passing ring, originated by IBM, as well as
9. the Token Bus protocol.
10. Finally, we introduce the Wireless (IEEE 802.11) LAN protocol.


8.2 Introduction
When the computer equipment, workstations, file servers, print servers and so on which are to be connected
are in close proximity such as in one building or a university campus, networking is a lot simpler than in the
case of geographically spread out networks. The reasons are briefly as follows.
1. The entire network is under the control of one authority and decisions about which standards or
technology to use are therefore centralised although the diversity of equipment and manufacturers
may be much higher than in a WAN.
2. In all cases, because the propagation delay is of the order of microseconds, simple stop-and-wait is a good enough scheme and window flow control with all its error recovery complications, lost packets, etc. is not needed.
3. Because the topologies, as we shall see, are such that each station receives its own transmission, explicit acknowledgements are not required, nor is flow control an issue.
4. Also, for the same reasons, a connectionless protocol is adequate.
5. Since the distances involved are small and confined to the premises, there is no need to involve a
Service Provider or external cable company (traditionally the Telco) except to connect the network at
a single point called the gateway to the Internet.
Although LANs have been around since the early 1970s, there is no general agreement on a precise definition for a Local Area Network. One useful definition, based on the bandwidth distance product, is the
following:
Definition 8.1 BIT LENGTH OF A LAN. The bandwidth of any communications link in bits per second multiplied by its length in meters defines the network in bit-meters per second. When this is divided by the propagation velocity the result is the bit length of the network - that is, the number of bits in transit on the network at any given instant. In other words,

    bit length = (capacity in bps x length in m) / (propagation velocity in m/s)    (8.1)

This figure can be used to define different types of networks. A conventional WAN, for example, has
hundreds or thousands of bits in transit in any instant, whereas a computer bus has only a fraction of a bit
ever in transit. A local area network, because of its characteristic speeds and lengths, can be described as
one whose bit-length falls between the two extremes.
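Eq. 8.1 can be applied directly in code. The default propagation velocity below (2e8 m/s) is a typical figure for cable or fibre, not a value fixed by the text:

```python
def bit_length(capacity_bps, length_m, velocity_mps=2e8):
    """Bits in transit on the medium at any instant (Eq. 8.1).
    The default velocity of 2e8 m/s is a typical assumed figure."""
    return capacity_bps * length_m / velocity_mps

# A 10 Mbps LAN spanning 1 km holds only a handful of bits ...
assert abs(bit_length(10e6, 1000) - 50.0) < 1e-9
# ... while a 2 Mbps WAN link of 5000 km holds tens of thousands.
assert bit_length(2e6, 5e6) == 50000.0
```

The two calls illustrate the classification in the text: a LAN's bit length sits between the fraction of a bit on a computer bus and the thousands of bits on a WAN link.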
The most widely used LANs are Ethernet, token ring and token bus. In a LAN, computer devices share
a common transmission medium instead of being connected by point-to-point lines. Since LANs differ
in transmission rates and by how they share access to the common links, different LANs are best suited
to different applications. For example, Ethernet is adequate for low-load conditions when the delivery of
packets is not subject to a hard time constraint. The token ring and token bus networks are preferable for
high-load applications and also for hard time constraints.
About 2 x 10^8 meters per second for fibre, and about the same for copper cable.


8.3 Classification of LANs


With all the LANs around it is possible to distinguish them on the grounds of the way in which the various stations or nodes are linked, called the topology of the network, and the way in which demand for the use of the connecting medium, or bus, is arbitrated. The latter we call the Medium Access Control (MAC) protocol. We next discover what these terms mean.

8.3.1 Topologies
LANs function as non-centralised, distributed, multiple-access communications systems. This functional
capability can be achieved by at least four basic topologies, point-to-point, star, ring and bus, either by
themselves or in combination.
The cost and lack of versatility of the point-to-point topology, however, limit its application in local area environments. The star configuration requires central control, with its associated vulnerability, and is generally not suitable for a true LAN application, although the most favoured LAN technology, Ethernet 10BASE-T (see Sec. 8.6), uses this topology exclusively.

8.3.2 Common Bus Topology


Fig. 8.1 shows a common bus topology or simply a bus topology. The various nodes, which could be PCs, file servers, mainframes etc., communicate through a single bus constituting one or more parallel lines.

Figure 8.1: LAN Bus topology

The most popular bus network used to be version 10BASE5 of the IEEE 802.3 standard, called Ethernet, as we shall learn later in this chapter. It has since been replaced by its more popular 10BASE-T version which, while maintaining the same technology, is a star topology. In its original version, the common Ethernet bus was a 50 ohm coaxial cable, although it is now possible to use optical fibre (or combinations of both).


8.3.3 Star Topology


Another common connecting arrangement is the star topology (shown in Fig. 8.2). It uses a central computer
that communicates with other devices in the network. If a device wants to communicate, it does so only
through the central computer. The central computer, in turn, routes the data to the destination. Centralization
provides a focal point for responsibility, an advantage of the star topology.
However, the bus topology has some advantages over the star topology. The lack of central control makes adding and removing devices easy in a bus network, because devices need not be aware of one another. Also, the failure of the central computer in a star network causes the whole network to fail, whereas in a bus topology the network will still function even if some of the devices fail.

Figure 8.2: LAN Star topology

8.3.4 Ring Topology


In the ring topology of Fig. 8.3, devices are connected along a circle. Each one is physically connected to its neighbours on either side and to nobody else. If one wants to communicate with a device that is farther away, then a message must be sent in turn through each device that lies between the source and destination.
A ring network may be unidirectional or bidirectional. Unidirectional means that all transmissions travel in the same direction, in which case a single conductor (or fibre) suffices; each device can then communicate with only one neighbour. Bidirectional means that data may travel in either direction, and a device can communicate with either of its neighbours, which implies two signal paths, one in either direction.
A disadvantage of the ring topology is that when one station transmits to another station, all stations in between are involved. More time is spent relaying messages meant for others than in a bus topology (for example). Also, the failure of one station could cause a break in the ring which affects communication among the other stations.
Many computer networks use combinations of the various topologies, possibly with a common bus, sometimes called the backbone, which allows users to access mainframes and high-volume or frequently accessed storage.

Figure 8.3: Ring topology.

8.3.5 Media Access Control


LANs mostly use stations which are connected to a common bus or hub and, as is the case for the bus between processors and memory on a computer motherboard, some arbitration mechanism is needed in order to determine who may use the bus next. One orderly way is to pass an agreed message or token around, so that whoever holds the token may transmit, something we understand from everyday life. Another way, alas all too common in everyday life as well, is for every station simply to transmit when it wants to and to sort out who gets the medium afterwards. The latter are known as Random Access Techniques, of which there are several, as we now discuss.

8.4 Random Access Techniques


In the context of a LAN, just as when more than one person attempts to talk at the same time, when two or more nodes try to communicate at approximately the same time, their transmissions collide. In LAN terminology, we refer to this as a collision. Collisions are detected by nodes through the physical characteristics of the medium. More specifically, when a collision occurs, the channel's energy level changes (e.g., the voltage level on copper-based media is usually at least twice as high as expected). During such times, the nodes' signals become garbled. By monitoring the transmission line, nodes on the network are equipped to detect this condition.
What a node does when it detects a collision depends on the node's MAC sublayer protocol. Similar to the way people might observe different protocols when contending to speak, nodes employ various protocols to transmit data in a shared media environment. For example, to minimize the occurrence of collisions, nodes might follow a protocol that requires them first to listen for another node's transmission (somewhat incorrectly called a carrier, a term borrowed from radio terminology) before they begin transmitting data.

We call these types of protocols Carrier Sense protocols. Carrier Sense protocols require that before a node begins transmitting it must first listen to the medium to determine whether another node is transmitting data. Carrier sense transmission systems also employ circuitry that requires the system to listen to every bit transmission going out and compare that to what is actually heard on the transmission medium. If they match, wonderful. If they don't, something caused the bit to get hammered, and this means the transmission is garbled. How it is handled from there depends on the transmission framing method being used. There are 4 carrier sense protocols: 1-persistent CSMA (Carrier Sense Multiple Access), nonpersistent CSMA, CSMA with Collision Detection (CSMA/CD) and CSMA with Collision Avoidance (CSMA/CA). We'll describe these next.

8.4.1 1-persistent CSMA


When a node has data to transmit, it first senses the channel to determine if another node is transmitting. If the channel is not busy, then the node begins transmitting. If the channel is busy, then the node continuously monitors the channel until it senses an idle channel. Once it detects an idle channel, the node seizes the channel and begins transmitting its data. With this protocol, nodes with data to transmit enter a sense-and-seize mode: they continuously listen for a clear channel and, once it is detected, begin transmitting data. This protocol is similar to a telephone with a multiple-redial feature: if it detects a busy signal when the call is first attempted, the telephone repeatedly dials the number until a connection is finally established. The 1 in 1-persistent CSMA represents the probability (certainty) that a single waiting node will be able to transmit data once it detects an idle channel. However, collisions can and do occur if more than one node desires to transmit data at approximately the same time.

8.4.2 Nonpersistent CSMA


This protocol is similar to 1-persistent CSMA, except that a node does not continuously monitor the channel when it has data to transmit. Instead, if a node detects a busy channel, it waits a random period and rechecks the channel. If the channel is idle, the node acquires the channel and begins transmitting its data. If, however, the channel is still busy, the node waits another random period before it checks the channel again.
Both 1-persistent CSMA and nonpersistent CSMA eliminate almost all collisions, except for those that occur when two nodes begin transmitting data nearly simultaneously. For example, node A begins transmitting data on a clear channel. A few microseconds later (which is not enough time for node B's sensing circuit to detect node A's transmission), node B erroneously declares the channel clear and begins transmitting its own data. Eventually, the two transmissions collide.
1-persistent CSMA involves a significant amount of waiting and unfairness in determining which node gets the medium. It is selfish because nodes can grab the channel whenever they feel like it. Nonpersistent CSMA seems better because in its randomness there is fairness. In either case, though, there still remains the nagging problem of what to do about collisions. The next two CSMA protocols incorporate collision detection to provide a solution to the collision problem.
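The two persistence strategies can be contrasted in a small sketch. Here channel_busy(t) is a stand-in for the carrier-sense hardware at discrete time step t; the function names and the discrete-time model are my own simplifications, not part of any standard API.

```python
import random

def one_persistent_acquire(channel_busy, steps):
    """1-persistent CSMA: sense continuously and seize the channel the
    instant it goes idle. channel_busy(t) stands in for carrier sensing."""
    for t in range(steps):
        if not channel_busy(t):
            return t              # transmit immediately in the first idle step
    return None                   # never found the channel idle

def nonpersistent_acquire(channel_busy, steps, max_wait=4):
    """Nonpersistent CSMA: on sensing a busy channel, wait a random period
    and sense again, rather than monitoring continuously."""
    t = 0
    while t < steps:
        if not channel_busy(t):
            return t
        t += random.randint(1, max_wait)   # random wait, then re-check
    return None

# A channel that is busy for the first 10 time steps, then idle:
busy_until_10 = lambda t: t < 10
```

The 1-persistent sender grabs the channel at the very first idle instant, so two senders waiting on the same busy channel would collide with certainty, while the nonpersistent sender's random re-check tends to spread contenders out.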

8.4.3 CSMA/CD
In this variant of either 1-persistent or nonpersistent CSMA, when a collision occurs the nodes (a) stop
transmitting data, (b) send out a jamming signal, which ensures that all other nodes on the network detect
the collision, (c) wait a random period of time, and then, (d) if the channel is free, attempt to retransmit their
message. 1-persistent CSMA/CD is the MAC sublayer protocol used in Ethernet and IEEE 802.3 networks.

One question that frequently emerges when describing CSMA is the length of a random period. Well, it can be a long time, a short time, a moderate amount of time: it is random, but it is also extremely critical. If the wait time is too short, collisions will occur repeatedly. If the wait time is too long, the medium could be in a constant idle state. Propagation delay also has an important effect on the performance of these protocols. Consider the example given for nonpersistent CSMA: node A senses an idle channel and begins transmitting. Shortly after node A has begun transmitting, node B is ready to transmit data and senses the channel. If node A's signal has not yet reached node B, then node B will sense an idle channel and begin transmitting, which will ultimately result in a collision. Consequently, the longer the propagation delay, the worse the performance of this protocol. Even if the propagation delay is zero, it is still possible to have collisions. For example, assume both nodes A and B are ready to transmit data, but they each sense a busy channel (a third node is transmitting at the time). If the protocol is 1-persistent CSMA, then both nodes will begin transmitting at the same time, resulting in a collision. The specific amount of wait time is a function of the protocol.

8.4.4 CSMA/CA
This protocol is similar to CSMA/CD except that it implements collision avoidance instead of collision detection. As with straight CSMA, hosts that support this protocol first sense the channel to see if it is busy. If the host detects that the channel is not busy, then it is free to send data. But what if every host connected to a LAN is not always able to sense every other host's transmission? In such instances, collision detection will not work, because it is predicated on nodes being able to hear each other's transmissions. This is the case with wireless LANs (WLANs): we cannot always assume that each station connected to a WLAN will hear every other station's transmissions. One direct consequence of this is that a sending node will not be able to detect whether a receiving node is busy or idle. To address this issue, we replace CD with CA and use something called positive acknowledgement. It works like this: the receiving host, upon receiving a transmission, issues an acknowledgement to the sending host. This informs the sending host that a collision did not occur. If the sending host does not receive this acknowledgement, it assumes that the receiving node did not receive the frame and retransmits it.
CSMA/CA also defines special frames called request to send (RTS) and clear to send (CTS), which further help minimize collisions. A sending node issues an RTS frame to the receiving node. If the channel is free, the receiving node then issues a CTS frame to the sending node. CSMA/CA is used in IEEE 802.11 WLANs (see Sec. 8.8).
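The positive-acknowledgement rule can be sketched as follows. Here send_frame and wait_for_ack are hypothetical stand-ins for the radio layer, and the fixed retry limit is an assumption made for illustration (real 802.11 retry limits are configurable).

```python
def send_with_positive_ack(send_frame, wait_for_ack, max_retries=7):
    """CSMA/CA-style positive acknowledgement: a wireless sender cannot hear
    collisions, so it infers loss from a missing ACK and retransmits.
    send_frame() transmits once; wait_for_ack() is True if an ACK arrived."""
    for attempt in range(1, max_retries + 1):
        send_frame()
        if wait_for_ack():
            return attempt        # number of transmissions that were needed
    return None                   # give up: the receiver never acknowledged

# Example: the first two transmissions are "lost", the third is acknowledged.
outcomes = iter([False, False, True])
attempts_used = send_with_positive_ack(lambda: None, lambda: next(outcomes))
```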

8.4.5 IEEE LAN Classification


One of the specific objectives of local area networks is to allow users with different types of data processing
or terminal equipment to plug into the network with no concern for compatibility. All such equipment
should therefore adhere to common protocols that will allow them to inter-operate.
Generally, LAN architectures span the two lower layers of the ISO OSI model, that is, they accommodate the functions of, and provide the services for, the Physical and Data Link layers of the model. Fig. 8.4 illustrates the LAN architecture proposed by the Institute of Electrical and Electronics Engineers (IEEE) standards committee and shows the corresponding OSI layers.


[Figure: 802.2 Logical Link Control (LLC) sits above the Medium Access Control (MAC) sublayer, whose variants are 802.3 CSMA/CD, 802.4 Token Bus, 802.5 Token Ring and 802.6 DQDB; LLC and MAC together correspond to OSI Layer 2, and the Physical Layer below them to OSI Layer 1.]

Figure 8.4: Correspondence between IEEE 802 layers and the OSI model

8.5 Logical Link Control (IEEE 802.2)


All that the IEEE 802 LANs and Metropolitan Area Networks (MANs, see Chapter 10) offer is a best-effort datagram service. Sometimes this service is adequate. For example, for transporting IP packets, no guarantees are required or even expected. An IP packet can just be inserted into an IEEE 802 payload and sent on its way. If it gets lost, so be it.
Nevertheless, there are also systems in which an error-controlled, flow-controlled data link protocol is desired. IEEE has defined one, called LLC (Logical Link Control), that can run on top of all IEEE 802 LAN and MAN protocols. In addition, this protocol hides the differences between the various kinds of IEEE 802 networks by providing a single format and interface to the network layer. This format, interface and protocol are all closely based on OSI. LLC forms the upper half of the data link layer, with the MAC sublayer below it, as shown in Fig. 8.5.

Figure 8.5: (a) Position of LLC and (b) Protocol formats


Typical usage of LLC is as follows: The network layer on the sending machine passes a packet to LLC
using the LLC access primitives. The LLC sublayer then adds an LLC header, containing sequence and
acknowledgement numbers. The resulting structure is then inserted into the payload field of an 802.x frame
and transmitted. At the receiver the reverse process takes place.
LLC provides three service options: unreliable datagram service, acknowledged datagram service, and reliable connection-oriented service. The LLC header is based on the older HDLC protocol. A variety of
different formats are used for data and control. For acknowledged datagram or connection-oriented service,
the data frames contain a source address, a destination address, a sequence number, an acknowledgement
number, and a few miscellaneous bits. For unreliable datagram service, the sequence number and acknowledgement number are omitted.
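As an illustration of the difference between the services, the sketch below packs a simplified LLC-style header. The DSAP/SSAP/control structure follows the description above, but the exact bit layout of the real IEEE 802.2 control field differs in details (poll/final and command/response bits are ignored here), so treat the field packing as an assumption.

```python
import struct

def llc_header(dsap, ssap, ns=None, nr=None):
    """Sketch of an LLC header. For the unreliable datagram service the
    control field is a single UI byte (0x03) with no sequence numbers; for
    connection-oriented service an I-format control field carries the send
    and receive sequence numbers N(S) and N(R). Layout simplified."""
    if ns is None:                          # unreliable datagram: UI frame
        return struct.pack("!BBB", dsap, ssap, 0x03)
    # connection-oriented: 16-bit control field holding N(S) and N(R)
    control = ((ns & 0x7F) << 9) | ((nr & 0x7F) << 1)
    return struct.pack("!BBH", dsap, ssap, control)

ui = llc_header(0xAA, 0xAA)                 # 3-byte header, no sequencing
info = llc_header(0xAA, 0xAA, ns=5, nr=2)   # 4-byte header with sequencing
```

Note how the unreliable service simply omits the sequence and acknowledgement numbers, exactly as described above.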

8.6 Ethernet
Networks based on the IEEE 802.3 random access protocol standard are by far the most prevalent. These
bus networks use the media-access control protocol Carrier Sense Multiple Access with Collision Detection
(CSMA-CD) referred to above.
There are several versions of IEEE 802.3. Here we will discuss only 10BASE5, 10BASE2, 1BASE5, 10BASE-T, and 10BROAD36. In these designations the first number (1 or 10) indicates the transmission rate in megabits per second. BASE indicates that the nodes transmit the information as baseband (digital) signals, and BROAD identifies a broadband network that modulates an analog carrier signal with the digital frames. The number at the end of the designation (5, 2 or 36) is the maximum length of a segment of the network as a multiple of 100 m. 10BASE-T designates a 10-Mbps baseband network that uses twisted pairs as the transmission medium. Thus, 10BASE5 is a 10-Mbps baseband network with segments of at most 500 m.
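As a sketch of the naming scheme just described, the following function (hypothetical, written only to illustrate the convention) decodes a designation into its rate, signalling method and maximum segment length:

```python
def parse_designation(name):
    """Decode an IEEE 802.3 designation such as '10BASE5' or '10BROAD36'.
    Returns (rate in Mbps, signalling, max segment length in metres).
    '10BASE-T' carries a medium letter rather than a length, so the
    length is None in that case."""
    for sig in ("BROAD", "BASE"):
        if sig in name:
            rate, suffix = name.split(sig)
            # the trailing number is the segment length in multiples of 100 m
            length = int(suffix) * 100 if suffix.lstrip("-").isdigit() else None
            return int(rate), sig, length
    raise ValueError("unrecognised designation: " + name)
```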
The characteristics of these 5 versions are summarized in Table 8.1.

10BASE5 (ethernet): coaxial cable, 50 Ω, 10 mm; 10 Mbps; Differential Manchester coding; 500 m segments; 2.5 km network diameter; 100 nodes per segment; collision detection by excess current.

10BASE2 (cheapernet): coaxial cable, 50 Ω; 10 Mbps; Differential Manchester coding; 185 m segments; 0.925 km network diameter; 30 nodes per segment; collision detection by excess current.

1BASE5 (starLAN): unshielded twisted pair; 1 Mbps; Differential Manchester coding; 500 m segments; 2.5 km network diameter; 2 nodes per segment; collision detection by 2 or more active hub inputs.

10BROAD36 (broadband): coaxial cable, 75 Ω; 10 Mbps; DPSK coding; 1 800 m segments; 3.6 km network diameter; collision detection by transmission differing from reception.

10BASE-T: dual simplex unshielded twisted pair; 10 Mbps; Differential Manchester coding; 100 m segments; 1 km network diameter; 2 nodes per segment; collision detection by activity on receiver and transmitter.

Table 8.1: IEEE 802.3 networks, with 512-bit slot times, 96-bit gap times and jam signals lasting 32 to 48 bits

The IEEE 802.3 networks use either
Unshielded Twisted Pair (UTP) or coaxial cables, while dual optical fibres are also possible. The baseband networks use Differential Manchester encoding (see Sec. 2.5.1 on page 34). (Baseband is another term for the transmission of digital data as digital signals; broadband, in this context, means digital data sent as modulated (DPSK) analog signals. Alas, as we know, broadband is also used to refer to networks with a capacity generally exceeding 100 Mbps.) The broadband version uses
Differential Phase Shift Keying (DPSK), a modulation method that transmits the difference between successive bits using PSK modulation. Table 8.1 also shows the maximum segment length, the maximum distance between nodes, and the maximum number of nodes per segment. The slot time is used to schedule retransmissions, and the gap time is the minimum time between two successive packets.
The original Ethernet protocol, as patented by Metcalfe and Boggs [Met76] in 1976 before the IEEE 802.3 standards, is almost identical to 10BASE5, except for minor differences in the frame structure. This version is based on a thick coaxial cable medium where the stations are arranged in a bus topology as illustrated in Fig. 8.6. We will base the following analysis of the CSMA/CD Medium Access Control algorithm on this configuration and the CSMA/CD frame format illustrated in Fig. 8.7. In that configuration, stations are connected to the cable via transceivers and a transceiver connector; transceivers are spaced 2.5 meters apart to prevent signal interference. The actual physical connection of a transceiver involves penetrating the cable using a so-called vampire tap. Each end of the cable is terminated with a 50 Ω resistor, and one end of the cable must be at ground potential.
The transceiver converts the electrical signals on the cable into binary values for the interface board. To
receive packets, this conversion is performed by a receiver circuit in the transceiver that measures the voltage
on the cable. The circuit outputs a signal representing the binary value 1 when the voltage exceeds a fixed
threshold and a signal representing 0 otherwise. As a result the interface board receives the Manchester-encoded bits transmitted on the cable. The interface board decodes the bits, stores them in its buffer, and informs the node when a full packet has been received.
To transmit a packet, the node copies it into a buffer on the interface board which waits for the transceiver
to indicate when the coaxial cable is idle (the Carrier Sense part), i.e., when its receiver circuit does
not detect any activity. The interface board then delivers the Manchester-encoded bits of the packet to the
transceiver. The transceiver contains a current source that is switched on and off by the binary values of
the Manchester code and that injects current with an average value between 18mA and 24mA into a coaxial
cable. While a transceiver is transmitting, it monitors the current flowing through the cable by measuring the
average voltage across the cable. If that average value shows that the current is larger than 24mA, then the
transceiver knows that another transceiver is also injecting some current and it informs the interface board
that a collision is occuring (the Collision Detection part).

Figure 8.6: Ethernet 10BASE5 topology


When the interface board learns of the collision, it stops transmitting the packet and transmits a sequence of 32 to 48 randomly chosen bits, called the jam signal. The objective of the jam signal is to guarantee that all nodes become aware of the collision. The interface board then reschedules a retransmission of the packet after a random time. The interface board contains an integrated circuit that calculates the cyclic redundancy code bits to detect transmission errors.

In summary, the Ethernet MAC protocol (CSMA-CD) illustrated in the flow chart in Fig. 8.8 specifies that
a node with a packet to transmit must proceed as follows:
1. Wait until the channel is idle.
2. Transmit and listen while transmitting.
3. In case of a collision, stop the packet transmission, transmit a jam signal, and then wait a random
delay and go to 1.
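The three steps above can be sketched directly in Python. The channel object and its method names (sense, transmit, jam, wait) are stand-ins invented for illustration, not part of any real driver API; the retry limit and backoff rule anticipate the exponential backoff described below.

```python
import random

def csma_cd_send(channel, frame, max_attempts=16):
    """Sketch of the CSMA/CD steps above. `channel` is a hypothetical
    stand-in: sense() is True while the channel is busy, transmit(frame)
    returns True if a collision was detected while sending, jam() sends
    the jam signal, and wait(bit_times) idles for the backoff period."""
    for attempt in range(1, max_attempts + 1):
        while channel.sense():                # 1. wait until the channel is idle
            pass
        collided = channel.transmit(frame)    # 2. transmit, listening while transmitting
        if not collided:
            return attempt                    # success: number of attempts used
        channel.jam()                         # 3. jam so all nodes see the collision,
        m = min(10, attempt)                  #    then back off a random multiple
        k = random.randrange(2 ** m)          #    of 512 bit times
        channel.wait(k * 512)
    raise RuntimeError("transmission abandoned after 16 successive collisions")
```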
[Figure: CSMA/CD frame layout: a Medium Access Control header (Preamble, Flag, Destination Address of 2 or 6 octets, Source Address of 2 or 6 octets, Length), then the LLC PDU and Pad, then the Medium Access Control trailer (FCS).]

Figure 8.7: CSMA/CD frame format


The protocol stops the transmission after 16 successive collisions. The size of the random delay after a collision is selected according to the binary exponential backoff algorithm: if a packet has collided n successive times, where n ≤ 16, then the node chooses a random number K with equal probability from the set {0, 1, 2, ..., 2^m − 1}, where m := min(10, n); the node then waits for K × 512 bit times. Thus after the first collision the node chooses the random delay 0 or 512 bit times, with equal probabilities. After two successive collisions, the delay is equally likely to be 0, 512, 1024 or 1536 bit times. After three successive collisions, it is equally likely to be 0, 512, 1024, 1536, 2048, ..., or 3584 bit times. The rationale for this method of selecting the delay is that it quickly reduces the chances of repeated collisions by spreading out the range of waiting times.
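A direct transcription of the backoff rule, using the definitions above (the function names are mine):

```python
import random

def backoff_bit_times(n):
    """Binary exponential backoff: after the n-th successive collision
    (1 <= n <= 16) choose K uniformly from {0, 1, ..., 2**m - 1} with
    m = min(10, n), and wait K x 512 bit times."""
    if not 1 <= n <= 16:
        raise ValueError("transmission is abandoned after 16 collisions")
    m = min(10, n)
    k = random.randrange(2 ** m)
    return k * 512

def backoff_choices(n):
    """The full set of possible delays after the n-th collision, in bit times."""
    m = min(10, n)
    return [k * 512 for k in range(2 ** m)]
```

Evaluating backoff_choices reproduces the delays quoted above: after one collision the choices are 0 or 512 bit times, after two they run up to 1536, after three up to 3584, and from the tenth collision onward the set stays fixed at 1024 values.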

We saw that a node detects a collision while transmitting when its receiver measures an excessive current
on the cable. The nodes that are not transmitting detect a collision when they observe a packet shorter than
500 bits, which is shorter than the minimum length of a valid frame (544 bits). A packet with fewer than
500 bits appears on the cable when a collision occurs because the transmission time of a packet that collides
cannot exceed the maximum collision detection time as shown in Fig. 8.10, which is twice the maximum
end-to-end propagation time. The propagation time is determined by the cable length and the delays in the
repeaters. The specifications of Ethernet result in a maximum collision detection time of 450 bit times.
Consequently, the length of a packet that collides is at most 450 bits plus the maximum length of the jam
signal (48 bits).
By far the most popular version of Ethernet is 10BASE-T, illustrated in Fig. 8.9. A typical 10BASE-T wiring scheme involves a centralised hub located in a wiring closet and lengths of cable installed between a patch panel (or distributor board) and a cable conduit plate in the end-user office. In the trade this length of cable is referred to as the home run. These home runs are usually run behind ceiling panels, and each end is punched down between the patch panel and the connector plate in the user's office. Patch cable is used to


(At 10 Mbps, one bit time = 10⁻⁷ seconds, i.e. 0.1 μs.)

Figure 8.8: CSMA/CD MAC process


connect ports on the patch panel to the hub and workstations to the connector plate. The total length of all
the cable from hub to patch panel, home run, and patch cable from conduit plate to workstation, may not be
more than 100 meters.
Although patch panels provide a certain level of convenience, they are prone to noise. Nevertheless patch
panels are useful in many situations. For example, note in Fig. 8.9 that port 5 on the patch panel is wired to
office 5, but that workstation does not have a connection to the hub. Connectivity to office 5 can be provided
though, by temporarily disconnecting any of the patch panel cables from one of the ports and re-connecting
it to port 5.

8.6.1 Ethernet Performance


Next we examine the performance of Ethernet under conditions of heavy and constant load, with k stations always ready to transmit. We use Fig. 8.10 to explain our analysis.

Figure 8.9: Popular networking configuration

Figure 8.10: Ethernet states: contention, transmission, or idle.

At the point marked t0 in that figure, a station has finished transmitting its frame. Any other station having a frame to send may now attempt to do so. If two or more stations decide to transmit simultaneously, there will be a collision. Each will detect the collision, abort its transmission, wait a random period of time and try again, assuming that no other station has started transmitting in the meanwhile. Our model for Ethernet will therefore consist of alternating contention and transmission slots, with idle periods occurring when all stations are quiet (e.g. for lack of work). If each station transmits during a contention slot with probability p, the probability A that some station acquires the ether during that slot is

    Pr[successful transmission] = Pr[one station transmits] × Pr[(k − 1) stations do not transmit]

That is,

    A = kp(1 − p)^(k−1)                                            (8.2)

which has a maximum value for p = 1/k, with A → 1/e as k → ∞. The probability that the contention interval has exactly j slots in it is the same as the probability that for j − 1 slots no one acquires the ether, but that it is acquired during the j-th slot, i.e., this probability is given by

    P_j = A(1 − A)^(j−1)                                           (8.3)

Hence the mean number of slots per contention is given by

    Σ_{j=0}^{∞} j A(1 − A)^(j−1) = 1/A                             (8.4)

Since each slot has a duration of 2τ, where τ is the one-way propagation time between the two remotest stations, the mean contention interval w is given by

    w = 2τ/A                                                       (8.5)

Assuming optimal p, the mean number of contention slots is never more than e, so w is at most

    w ≤ 2τe ≈ 5.4τ                                                 (8.6)

If the mean frame takes t_f seconds to transmit, when many stations have frames to send, the channel efficiency is given by

    Efficiency = t_f / (t_f + 2τ/A)                                (8.7)

Here we see where the network diameter, the distance between the two remotest stations on the network, enters into the performance figures: the longer the diameter, the longer the propagation time τ, hence the longer the contention interval and the lower the channel efficiency.
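These quantities are easy to evaluate numerically. The sketch below, with function names of my own invention, computes the acquisition probability A at the optimal p = 1/k and plugs it into the efficiency expression; frame_time and tau are in seconds.

```python
import math

def acquisition_probability(k, p):
    """A = k p (1 - p)**(k - 1): the probability that exactly one of the
    k contending stations transmits in a given contention slot."""
    return k * p * (1 - p) ** (k - 1)

def channel_efficiency(frame_time, tau, k):
    """Channel efficiency t_f / (t_f + 2*tau/A), evaluated at the optimal
    per-slot transmission probability p = 1/k; tau is the one-way
    propagation time across the network diameter."""
    a = acquisition_probability(k, 1.0 / k)
    return frame_time / (frame_time + 2 * tau / a)
```

For a single station A = 1, and as k grows A approaches 1/e; increasing τ (a longer network diameter) lowers the efficiency, matching the closing observation above.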

8.6.2 Fast Ethernet


A few years back it became obvious that new operating systems, faster processors and larger disk and memory capacities generate network traffic requiring capacities in excess of the 10 Mbps of standard Ethernet. In addition, increased Web traffic, on the Internet and on company intranets, has given rise to multimedia traffic rich with high-resolution graphics, audio, video and voice. On top of that, client-server and netcentric computing have taxed first-generation Ethernet technology.
To increase the capacity of an Ethernet network, IEEE increased the data transmission speed of the traditional IEEE 802.3 standard first to 100 Mbps and then to 1 Gbps. In June 1995, this new technology became the IEEE 802.3u standard and was given the specification 100BASE-T. Unlike 10BASE-T though, Fast Ethernet has 3 different media specifications: 100BASE-TX and 100BASE-T4, both using twisted-pair cable, and 100BASE-FX, which uses fibre-optic cable.
At the Data Link Layer, Fast Ethernet is unchanged from its 10 Mbps counterpart. The frame format,
the minimum and maximum frame sizes and the MAC address format are all identical to conventional
Ethernet. Most importantly, Fast Ethernet uses exactly the same media access method, namely CSMA/CD.
This provides a simple and seamless migration path for users of 10 Mbps Ethernet to Fast Ethernet.
The following table summarises the 3 Fast Ethernet specifications, with a brief description following it.
100BASE-T4 This specification uses inexpensive Category 3 UTP cable and a signalling speed of 25 MHz, only 25 percent faster than the 20 MHz of standard IEEE 802.3. To achieve the necessary bandwidth, 100BASE-T4 requires 4 twisted pairs: one from the station to the hub, one from the hub to the station, and the other two switchable to the current transmission direction. It uses a coding technique which maps 8-bit data blocks to a specific code group consisting of 6 symbols, the so-called 8B/6T method, to ensure synchronisation.

Name          Cable                       Maximum Segment Length   Coding   Transmission
100BASE-T4    4-pair CAT 3, 4 or 5 UTP    100 m                    8B/6T    Half-duplex
100BASE-TX    2-pair CAT 5 UTP            100 m                    4B/5B    Full-duplex
100BASE-FX    Dual multimode fibre        2000 m                   4B/5B    Full-duplex

100BASE-TX Uses CAT 5 UTP and is therefore simpler, since the wires can handle clock rates up to 125 MHz and more. Only two twisted pairs per station are used, one to the hub and one away from it. Rather than use the Manchester encoding scheme of conventional Ethernet, it uses a scheme called 4B/5B encoding, which we discuss in detail in Sec. 10.2 on page 176. 100BASE-TX is thus full-duplex.
100BASE-FX This option uses two strands (how else, since fibre is uni-directional) of multimode fibre, one for each direction of transmission, so this option is also full-duplex with 100 Mbps in each direction. In this case the distance between the station and the hub can be as much as 2 km.
Finally, it is important to note that Fast Ethernet must be configured as a star topology, i.e., each station has to be connected to a hub. Traditionally, Ethernet used a bus topology, as we learned.

8.7 Deterministic or Token Passing Protocols


Unlike random access protocols, token passing protocols do not involve collision detection, but rely instead on granting stations on the network permission to transmit. Permission is granted in the form of a special control frame called a token. The underlying principle of token passing protocols is simple: the station that holds the token may use the medium. Since possession of the token controls access to the medium, there is no contention and hence there are no collisions; token passing schemes are thus both contention-free and collision-free protocols.
The IEEE has standardised two LAN protocols using token passing: Token Ring (IEEE 802.5), discussed here, and Token Bus (IEEE 802.4), discussed in the next section. Although not an IEEE standard, the Fibre Distributed Data Interface (FDDI), discussed in Sec. 10.2, also uses token passing technology.

8.7.1 Token Ring (IEEE 802.5)


The basic operation of this MAC protocol is simple. When there is no traffic on the ring, a 3-byte token, illustrated at the top of Fig. 8.11, circulates endlessly, waiting for a station to seize it by setting the T-bit (or token bit) in the second byte to 1. Ignoring for now the priority mechanism, the algorithm for accessing the medium is shown in the flowchart in Fig. 8.12. If T = 0 and the station has data to send, the token is seized by setting T = 1, which converts the first two bytes into the start-of-frame sequence. The station then outputs the rest of the normal data frame, shown in the lower part of Fig. 8.11. Note that a station has to delay the token by one bit duration: it cannot pass the token on until it has decided whether the bit currently arriving is T = 0 or T = 1, and it takes one bit interval for it to know this.
Under normal conditions, the first bit of the frame will circulate around the ring and return to the sending station before the full frame has been transmitted. Only a very long ring will be able to hold even a short

(Remember we pointed out earlier that at high transmission speeds the leading edge of a bit will arrive back at the transmitter long before the trailing edge has left it.)


Figure 8.11: Token-Ring frame format

Figure 8.12: Token-Ring medium access algorithm


frame. Consequently, the transmitting station must drain the ring while it continues to transmit. As shown in Fig. 8.12, this means that the bits which have completed the trip around the ring return to the sender and are removed. If the address is neither that of the destination nor the source, they are simply passed on.
A station may hold the token for the token-holding time, which is generally 10 milliseconds. If there is
enough time to send more frames after the first has been sent, these may also be sent. After either all frames
have been transmitted or the transmission of another frame would exceed the token-holding time, the station
regenerates the 3-byte token frame and inserts it into the ring.
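The token-holding discipline just described can be sketched as a loop; the names and the list-based frame queue are ours for illustration, not part of the standard:

```python
TOKEN_HOLDING_TIME = 0.010  # seconds: the usual default of 10 ms

def transmit_phase(queue, frame_time, tht=TOKEN_HOLDING_TIME):
    """Return the frames sent during one token-holding opportunity.

    queue      -- list of frames waiting at the station
    frame_time -- transmission time of one frame, in seconds
    """
    sent, elapsed = [], 0.0
    while queue and elapsed + frame_time <= tht:
        sent.append(queue.pop(0))   # token already seized; send the frame
        elapsed += frame_time
    # either all frames are sent or the next one would exceed the holding
    # time; at this point the station regenerates the 3-byte token
    return sent
```

With 3-ms frames, at most three fit inside the 10-ms holding time; the fourth must wait for the next token.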
The starting delimiter and ending delimiter fields of Fig. 8.13 mark the starting and ending of the frame.
Each contains invalid differential Manchester patterns in order to distinguish them from data bytes. The
access control byte contains the token bit, the monitor bit, priority bits and reservation bits (described
later). Following the access control byte are the destination and source address fields. Next comes the

Figure 8.13: Token-Ring frame format and content


data, which may be as long as necessary, provided that the frame can still be transmitted within the token-holding time. Next comes the checksum field.
The very last byte is the frame status byte. This contains, amongst others, bits A and C. When a frame
arrives at the interface of the destination station, the interface turns on the A bit as it passes through. If the
interface copies the frame to the station, then the C bit is also turned on. (A station might fail to copy a
frame due to lack of buffer space etc.)
When the sending station drains the frame from the ring, it examines the A and C bits. Four combinations
are possible as also illustrated in Fig. 8.13.
A = 0 and C = 0: destination not present or not powered up.

A = 1 and C = 0: destination present but frame not accepted.

A = 0 and C = 1: destination not present yet frame accepted (an error).

A = 1 and C = 1: destination present and frame accepted.
This arrangement provides for automatic acknowledgements for each frame. If a frame is rejected but the
station is present, the sender can decide whether to resend the frame in a little while or not. The A and C
bits are present twice in the frame status byte in order to increase reliability (since they are not covered by
the checksum).
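The sender's inspection of the A and C bits can be written as a small lookup; a sketch, with the returned strings paraphrasing the four cases above:

```python
def frame_status(a, c):
    """Interpret the A (address-recognized) and C (frame-copied) bits
    that the sender examines when draining its frame from the ring."""
    if a == 0 and c == 0:
        return "destination not present or not powered up"
    if a == 1 and c == 0:
        return "destination present but frame not accepted"
    if a == 0 and c == 1:
        return "destination not present yet frame accepted (error)"
    return "destination present and frame accepted"
```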
The ending delimiter contains an E bit, which is set if any interface detects an error. It also contains a bit
that can be used to mark the last frame in a logical sequence, similar in purpose to an end-of-file bit.
The IEEE 802.5 protocol has an elaborate scheme for handling multiple priority frames, illustrated in principle in Fig. 8.12. The 3-byte token frame has a field indicating the priority of the token. When a station
wants to transmit a frame with priority n, it must wait for a token whose priority is less than or equal to
n. Furthermore, when a data frame goes by, a station may attempt to reserve the next token by writing the
priority of the frame it wants to send in the data frame's reservation bits. However, if a higher priority has
already been reserved there, the station may not make a reservation. When the current frame has finished,
the next token is generated at the priority that has been reserved.
It is easy to see that this mechanism works like a ratchet, in that it always jacks the priority higher and
higher. To eliminate this problem, the protocol contains some complex rules. The basic idea is that the
station raising the priority is also responsible for lowering it again when it is done.
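The two comparisons at the heart of the priority scheme can be written down directly; the function names are ours, and the rules for lowering the priority again are omitted:

```python
def may_seize(token_priority, frame_priority):
    # a station waits for a token whose priority is <= its frame's priority
    return frame_priority >= token_priority

def new_reservation(current_reservation, frame_priority):
    # the reservation bits may only be raised, never lowered, by a passer-by
    return max(current_reservation, frame_priority)
```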

8.7.2 Ring Maintenance


Each ring has a monitor station that oversees the ring. A contention protocol ensures that a new monitor is
quickly chosen if the current monitor goes down (every station has the capacity to become monitor). It is
the monitor alone that is responsible for ensuring the correct operation of the ring.
When the ring comes up, or when any station notices that there is no monitor, that station can send a CLAIM TOKEN control
frame. If this frame makes its way back to the sender before any other CLAIM TOKEN frames, then the
sender becomes the new monitor. The token ring control frames are shown in Fig. 8.11 and Table 8.2.
CONTROL FIELD   NAME                      MEANING
00000000        Duplicate address test    Test if two stations have the same address
00000010        Beacon                    Used to locate breaks in the ring
00000011        Claim Token               Attempt to become monitor
00000100        Purge                     Re-initialize the ring
00000101        Active monitor present    Issued periodically by the monitor
00000110        Standby monitor present   Announces the presence of potential monitors
Table 8.2: Token ring control frames.


The monitor's duties include ensuring that the token is not lost, taking action when the ring breaks, cleaning
up the ring when garbled frames appear, and watching out for orphan frames. An orphan frame occurs when
a station transmits a complete frame, but crashes before the frame can be drained from the ring. This frame
will circulate forever unless something is done.
The monitor handles these cases in the following way:


Lost tokens: To check for lost tokens, the monitor has a timer which is set to the longest possible tokenless
interval. If this timer goes off, the monitor drains the ring and issues a new token.

Garbled frames: Garbled frames are detected by their invalid formats or their invalid checksums. The
monitor drains the ring and issues a new token.

Orphan frames: The monitor detects orphans by setting the monitor bit in the access control byte whenever a
frame passes through. If an incoming frame has this bit set, it means that the frame has gone past
the monitor twice without being removed, so the monitor drains it and a new token is issued.
Another monitor function concerns the length of the ring. If the token is 24 bits long, then the ring must be
long enough to hold 24 bits. If the sum of the one-bit delays per station plus the delay due to the cable length is less
than 24 bits, the monitor inserts extra delay bits so that the token can circulate.
One function not handled by the monitor is detecting breaks in the ring. If a station notices that either of its
neighbours appears to be dead, a BEACON frame is transmitted, giving the address of the presumably dead
station. When the beacon has propagated around as far as it can, it is then possible to see how many stations
are down, and to delete them from the ring using the bypass relays in the wire centre - all without human
intervention.

8.7.3 Token Ring Performance


A major issue in the design and analysis of ring networks is the physical length of a bit. If the data rate
of the ring is R Mbps, a bit is emitted every 1/R microseconds. Bits are transmitted serially, just as in the
case of Ethernet. With a typical signal propagation speed of 200 meters per microsecond, each bit occupies
200/R meters on the ring. This means, for example, that a 1-Mbps ring whose circumference is 1000 meters can
hold only 5 bits simultaneously.
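The arithmetic can be checked directly; the propagation speed of 200 m/μsec is the figure used in the text, while the function names are ours:

```python
PROPAGATION = 200.0  # metres per microsecond, as assumed in the text

def bit_length_m(rate_mbps):
    # one bit lasts 1/rate microseconds, so it occupies 200/rate metres
    return PROPAGATION / rate_mbps

def bits_on_ring(circumference_m, rate_mbps):
    # how many whole bits the ring can hold simultaneously
    return int(circumference_m / bit_length_m(rate_mbps))
```

At 1 Mbps a bit spans 200 m, so a 1000-m ring holds 5 bits; at 16 Mbps the same ring holds 80.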
Let us now briefly analyze the performance of a token ring. Assume that frames are generated according to a
Poisson process (interarrival times have a negative exponential distribution), and that when a station receives
permission to send, it empties itself of all queued frames, with the mean queue length being q frames at each
station. The mean arrival rate of all N stations combined is λ so that, assuming identical behaviour at each
of the N stations, each station contributes λ/N to the traffic. Call the service rate (the number of frames/sec
that a station can transmit) μ.
The mean time it takes for a bit to go all the way around an idle ring, or walk time, consisting of both the one-bit-per-station delays and the mean signal propagation delay, plays a key role in the mean delay. Denote
the walk time by w. The quantity we intend to compute is the scan time s, the mean interval between token
arrivals at a given station.

The scan time has two parts, the walk time w, and the time required to service each of the Nq requests
queued up for service, each of which requires 1/μ seconds to be transmitted. Algebraically,

    s = w + Nq/μ                                 (8.8)

The mean queue length q is easy to derive, since it is just the number of requests that pile up during an
interval of length s when the arrival rate is λ/N, namely q = sλ/N. Substituting into the above equation
we get

    s = w + sλ/μ                                 (8.9)

Setting ρ = λ/μ and solving for s we have

    s = w / (1 − ρ)                              (8.10)
The channel-acquisition delay is about half the scan time, so we now have one of the basic performance
parameters. Notice that the delay is always proportional to the walk time, both for low and high load. Also
note that ρ represents the utilization of the entire ring, not the utilization of a single station, which is ρ/N.
The other key performance parameter is the channel efficiency under heavy load. The only overhead is
the walk time between stations. If every station has data to send, this overhead is w/N per frame, compared to the
transmission time per station, 1/μ. Using these times the channel efficiency can be found.
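Both quantities can be evaluated numerically. The function names are ours, and the efficiency expression, (1/μ)/(1/μ + w/N), follows from treating w/N as the per-frame overhead as just described:

```python
def scan_time(w, lam, mu):
    """Mean scan time s = w / (1 - rho), with rho = lam/mu (eq. 8.10)."""
    rho = lam / mu
    assert rho < 1.0, "offered load must be below saturation"
    return w / (1.0 - rho)

def heavy_load_efficiency(n, w, mu):
    """Fraction of time spent transmitting when every station has data:
    per-frame transmission time 1/mu against per-station overhead w/N."""
    return (1.0 / mu) / (1.0 / mu + w / n)
```

For example, with a 1-ms walk time and ρ = 0.5 the scan time doubles to 2 ms; with 50 stations and μ = 1000 frames/sec the heavy-load efficiency is about 98%.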
Although the token ring and Ethernet differ in many details, they also have many things in common. Since
they use similar technologies, they can operate at similar data rates. In both cases stations must first acquire
the channel before transmitting, with channel acquisition done by waiting for the token in one case and by
contention in the other. Both systems have a channel efficiency determined by this wait time and the transmission time. As the data rate is dramatically increased, say by using a fibre-optic medium, both networks'
channel efficiency drops towards zero, because the channel-acquisition time in both cases depends on signal
propagation speed, which is not affected by increasing the data rate. Because the propagation time enters into the channel-acquisition time, both networks suffer from increased cable length: the Ethernet's because of an
increased contention slot size and the token ring's because of an increased walk time.

8.7.4 Token Passing Bus (IEEE 802.4)


As the name indicates, the token bus network is a bus network that uses a token-passing MAC protocol.
IEEE 802.4 specifies a standard for the physical layer and the MAC sublayer of token bus networks.
A typical layout of a token bus network is shown in Fig. 8.14. The figure shows four nodes attached to a
common coaxial cable. The nodes use a token-passing mechanism to regulate the access to the cable and
to avoid collisions. The figure illustrates one example of the sequence of events. Initially, node A holds
the token and transmits a frame of data. The destination of that frame can be any other node on the network.
After having transmitted its data frame, node A transmits a token with destination address B. A token is a
small frame identified as a token by the value of its access control field, as in an IEEE 802.5 token ring.
All the nodes on the network see the token and recognise that it is for node B. Node B transmits its data
frame and then the token with destination address C. Node C then transmits its data frame and the token
with destination address D. Node D then transmits its data frame followed by the token with destination
address A. The transmissions continue in this cyclic order. This token-passing protocol is the Token Bus
MAC protocol.
How does the network manager add or remove a node on a token bus network? This question does not arise
on an Ethernet since there is no fixed order for the stations to transmit and the manager can plug a node in or
take it out without having to inform the other nodes. The problem is also different on a token ring network
since the connections are active and the ring stops operating when a node is disconnected. The nodes in a
token bus network are either active or passive. A passive node is not participating in the communications
and can be viewed as disconnected from the bus. In the example shown in Fig. 8.14, the successor of node A
is B and the predecessor of node A is D. A node needs to know its successor to be able to send it the token.
By inserting or removing a node, the manager modifies the successor of a node. There is an automatic
procedure for doing such modifications. Consider the network shown in Fig. 8.14 and assume that we want
to insert a station E between stations C and D. Periodically, and only when it has the token, each node sends

Figure 8.14: Token bus network

a special packet called a SAS (for solicit a successor). Any node that wishes to become active can reply just
after the SAS has been transmitted. If no node replies, then the nodes resume the normal operation of the
token bus MAC protocol. If one node replies to the SAS, then the other nodes modify their predecessor and
their successor as needed to insert that new node. For instance, say node C sends the SAS and node E replies
"I want to join."
The SAS sent by C indicates that C is its sender and that the successor of C is D. The reply from E indicates
the address of E. When it sees the SAS from C, node E notes that its predecessor is now C and that the
successor of E is D. When it gets the reply from E, node C replaces its successor D with E. If more than
one node replies to the SAS, then the transmissions of these nodes collide. The collision is resolved as in
CSMA-CD: the nodes whose transmissions collide repeat their transmissions with random delays, until one
node is successful and is then inserted in the network.
When a node wants to be removed from the token bus, it waits until it gets the token. It then transmits a
special message saying "I want out". The other nodes then update their successor and predecessor information.
For instance, if node B on the network of our example sends the "I want out" message, that message includes
the predecessor A and the successor C of B. When node A sees that message, it learns that its successor is
no longer B but is instead C (B's successor). When C sees that its predecessor left the ring, it knows that its
new predecessor is now B's old predecessor A.
When the network is started, or when the token is lost, the nodes detect the absence of activity. One node
then sends an "I claim the token" packet. That node then sends the token to its successor if the lack of activity
was due to a lost token. The node sends a SAS if this is the first transmission after starting the network.
Collisions are handled as explained before.
The network deals with the failure of a node as follows: if node A sends the token to node B and node
B does not respond, node A transmits a special message "Who follows B?". The nodes use the answer to that
message to drop node B by updating their predecessor and successor information.
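The successor/predecessor bookkeeping can be sketched by modelling the logical ring as a Python list; in the real protocol each node stores only its own successor and predecessor, and the function names are ours:

```python
def insert_after(ring, node, new_node):
    """Splice new_node in as the successor of node, as happens after a
    successful reply to node's solicit-a-successor (SAS) packet."""
    ring.insert(ring.index(node) + 1, new_node)

def leave(ring, node):
    """Drop node from the logical ring, as after an 'I want out' message."""
    ring.remove(node)

def successor(ring, node):
    # the node after this one, wrapping around at the end of the list
    return ring[(ring.index(node) + 1) % len(ring)]
```

Inserting E after C in the ring A, B, C, D gives C the successor E and E the successor D, exactly as in the example in the text.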

8.8 Wireless LAN (IEEE 802.11)


As the number of portable and mobile computing and communication devices grows, so does the demand
for connecting them in a convenient way. In order to achieve true mobility, portable devices need to use
connectionless radio or infrared signals for communication. A system of devices that communicate by radio
can be regarded as being connected by a Wireless LAN. These LANs obviously have somewhat different
properties than conventional LANs and require their own MAC protocols as we will discuss next.


The IEEE 802.11 specification for Wireless LANs, approved in 1997, is based on a protocol designed in
1990, called Multiple Access with Collision Avoidance (MACA), which operates in the unregulated 2.4 GHz to
2.4835 GHz frequency band with a bandwidth of between 1 Mbps and 2 Mbps. The principle is that a station
wishing to transmit declares its intention by sending a short 30-byte Request To Send (RTS) frame. All
stations within range will detect the RTS and refrain from transmitting anything themselves for the duration
(indicated in the RTS) of the upcoming longer data frame transmission.
Consider the diagram in Fig. 8.15 and assume station A wants to transmit to B. A starts by sending an RTS
as shown in Fig. 8.15 (a). The RTS of 30 bytes contains, inter alia, the length of the data frame to follow
and the address of the destination station. B detects its own address and replies with a Clear To Send (CTS)
signal, as shown in Fig. 8.15 (b), which again contains the length of the intended data frame for reasons that
will become clear. Upon receipt of the CTS frame, A will start transmitting its data frame. In the figure,

Figure 8.15: Wireless LAN communication


station C is within range of A and thus hears the RTS from A, but not the CTS from B. Depending on its
range (not shown) C is free to transmit while the data frame from A is being sent, provided it causes no
interference. Station D, in turn, is within range of B (unshaded circle) but not station A (shaded circle) and
therefore hears the CTS but not the RTS. Hearing the CTS and the length of the requested data transmission,
it defers from sending anything until that frame should finish being transmitted. Station E hears both control
messages and therefore remains silent until the end of the (data) transmission from A.
Despite the algorithm, collisions do occur such as when stations B and C simultaneously decide to transmit
an RTS. The transmissions will collide and be lost. In the event of a collision, the protocol behaves like
CSMA/CD. In other words, an unsuccessful transmitter, i.e., one that does not get a CTS reply in the
expected time interval, waits a random length of time using the same binary exponential backoff algorithm
we already know, and tries again later until it succeeds.
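The deferral rule each bystander applies reduces to a single test; a sketch that ignores the interference caveats mentioned above:

```python
def may_transmit(heard_rts, heard_cts):
    """A bystander that heard the CTS is within range of the receiver and
    must stay silent for the announced duration; one that heard only the
    RTS is near the sender, whose outgoing transmission it cannot disturb."""
    return not heard_cts
```

Applied to the stations of Fig. 8.15: C (RTS only) may transmit, while D (CTS only) and E (both) must defer.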
Another class of radio network is cellular radio, which is vastly popular for voice communication. Once the
technology allows packet transmission over the wireless link, the Wireless Application Protocol (WAP) technology will doubtlessly have a major impact on wireless data communication. Cellular networks deserve a
chapter of their own though, namely Chapter 12.


8.9 Network Interconnection


Many organisations have multiple LANs that they want to interconnect. The device used to effect interconnection depends on the highest layer in which there are differences between the protocols being interlinked.
From the discussion of Ethernet, Token-bus and Token-ring LANs, it is obvious that two networks can
differ greatly in access methods, frame sizes, frame formats, sequence numbering techniques, addressing
conventions, and other items. In connecting two networks, the management information that originates in
one network must be processable in the network in which it terminates and in any intermediate networks that
it traverses. Also, the user's data must be understandable in the network in which it terminates. Differences
between networks can be organised on the basis of the OSI model layer in which they occur:
1. Differences in the electrical and mechanical properties of the communication medium (e.g. cable vs.
twisted pair, connectors, signaling methods, etc.) are differences in the implementation of the physical
layers of the two networks.
2. Differences in the types of frame information and the significance of individual bit positions (e.g.
synchronizing patterns and flags, control bits, sequence numbering, error control, flow control, etc.)
are differences in the implementation of the data link layers.
3. Differences in segment sizes and routing techniques (e.g., centralized versus distributed) are differences in the implementation of the network layers.
4. Differences in message flow control and message segmentation are differences in the implementation
of the transport layers.
5. Differences in line discipline, failure recovery strategies and communication management techniques
are differences in the implementation of the session layers.
6. Differences in codes used and in the encryption and compression techniques employed are differences in
the implementation of the presentation layers.
Depending upon these differences we can identify the following classes of interconnecting devices.
Repeater: if the differences are limited to the implementation of the Physical Layer, Repeaters, devices that
link two connections and amplify and regenerate signals, are used.
Bridge: if the differences reach no higher than the implementation of the Data Link layers, Bridges, devices
that link identical networks (i.e. Ethernet to Ethernet) at the Data Link layer, are used.
Router: if the differences reach no higher than the implementation of the Network layers, Routers are used.
Routers modify segment sizes, route frames between users on non-identical networks of the same type
(i.e. Token Ring to Token Ring, WAN to WAN) and connect at the Network level.
Gateway: if the differences involve the implementation of layers above the Network layer, a Gateway is
used. Gateways are devices that link networks that are not of the same type (i.e. Ethernet to Token
Ring, or LANs to MANs or X.25 networks) and connect at whatever layer is required.
As a generic description, we may call all four of these devices relays: they connect two networks and move
information from one to the other. Depending on whether protocol conversion is being done, one can
write a lot more about them. We shall leave the reader with these definitions only and refer him to the many
good references on the subject for the details of a particular instance.
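The classification above amounts to a lookup on the highest layer in which the two networks differ; a minimal sketch with names of our own choosing:

```python
# highest OSI layer in which the two networks differ -> relay required
RELAY_FOR_LAYER = {
    "physical": "repeater",
    "data link": "bridge",
    "network": "router",
}

def relay_needed(highest_differing_layer):
    # differences above the Network layer call for a gateway
    return RELAY_FOR_LAYER.get(highest_differing_layer, "gateway")
```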


8.10 Exercises

Exercise 8.1 Define a Local Area Network.


Exercise 8.2 Compare the ring and bus topologies for reliability and performance.
Exercise 8.3 What is the difference between CSMA and CSMA/CD?
Exercise 8.4 Why is it necessary to have a one-bit delay at each station in a token ring?
Exercise 8.5 At a transmission rate of … Mbps and a propagation speed of 200 m/μsec, how
many meters of cable is the 1-bit delay in a token ring interface equivalent to? Explain your answer.
Repeat the question for the case of … Mbps. What are the implications, if any, for a simple Stop and
Wait protocol?
Exercise 8.6 A 1-km long, 10 Mbps CSMA/CD LAN (not 802.3) has a propagation speed of 200 m/μsec.
Data frames are 256 bits long, including 32 bits of header, checksum and other overhead. The first bit
slot after a successful transmission is reserved for the receiver to capture the channel to send a 32-bit
acknowledgement frame. What is the effective data rate, excluding overhead, assuming that there are no
collisions?
Exercise 8.7 A heavily loaded 1-km long, 10 Mbps token ring has a propagation speed of 200 m/μsec.
Fifty stations are uniformly spaced around the ring. Data frames are 256 bits, including 32 bits of overhead.
Acknowledgements are piggybacked onto the data frames and are thus included as spare bits within the data
frames and are effectively free. The token is 8 bits. Is the effective data rate of this ring higher or lower than
the effective data rate of a 10 Mbps CSMA/CD network?

Chapter 9

Wide Area Networks


9.1 Objectives of this Chapter
Wide area networks such as those which comprise the Internet have characteristics such as delay, distance
and error properties which are different from ordinary voice networks or Local Networks connecting users
in a close community. The implications of this are studied in this chapter.
1. Starting by distinguishing between Datagram (or connectionless) and Virtual Circuit (or connection-oriented) service, we discuss
   Routing techniques.
   Congestion control.
   Flow control.
   Deadlock prevention.
in the general case. We continue to
2. discuss Frame Relay networks, and
3. discuss the fundamentals of Asynchronous Transfer Mode (ATM) networks as a network protocol.
4. We explain X.25 as the first general Network Layer protocol, which has now probably been overtaken
in use by
5. the Internet Protocol (IP), which is similar, but different in important ways.


9.2 Overview
The Data Link Control protocols of Chapter 7 concerned the reliable and controlled exchange of frames over
unreliable lines between adjacent NODEs. As the name indicates, the network layer provides the primary
interface between hosts and the Service Provider and it is at this level that the responsibility of the carrier or
service provider towards its users generally ends.
Data communication networks are organised hierarchically. Every host has the illusion that it can converse
directly with every other host via some kind of communication channel. In fact, the host is really communicating with its local NODE, which in turn communicates with other NODEs, and eventually
the destination host. The illusion that hosts can talk to one another is maintained by the network Service
Provider or, if you prefer the term, the Internet. The actual paths that the messages between two hosts travel
are transparent to the hosts. Data link errors are handled in a transparent manner by the Service Provider
network; this is a service provided by the link layer to the network layer. Other errors, such as lost and
duplicate packets, however, do occur. How these errors are dealt with by the network is a question of what
services are provided by the Network layer to the Transport layer.

9.3 Datagram Service, Virtual Circuit Service


One of the most important design questions is the nature of the service provided to the transport layer by the
network layer. Alternatively, what properties should the Service Provider network have when viewed by the
hosts? To make this somewhat philosophical point more concrete, if a host sends a message to another host,
should it be able to count on the message arriving properly? In other words, should the Service Provider
make error detection and correction transparent to the hosts? Every service provider handles data link errors
in a transparent way, but other errors can still occur: principally lost and duplicate packets due to faulty
NODE hardware or software.
A second example of a function that the network layer might perform is making sure that packets arrive in
the same order they were sent. In fact, there are a number of other possible options as well, but in practice,
two conceptual models dominate. In the Virtual Circuit model, the network layer provides the transport
layer with a perfect channel: no errors, all packets delivered in order, etc., just like a telephone call would
for voice communication. In the Datagram model, the network layer simply accepts messages from the
transport layer and attempts to deliver each one as an isolated unit (see Fig. 5.10 on page 67), much the same
way that letters are delivered by the postal service; and, as we well know, messages may arrive late, out of
order, or not at all.
The analogy with a public telephone network versus the public postal service is clear. As in the case of a
public telephone network, in a Virtual Circuit service, there is an explicit setup procedure, followed by a
data transfer phase (talk) and then an explicit shutdown procedure. Once the circuit has been established,
individual data items do not have to carry destination addresses, because the destination was specified during
setup. With datagrams there is no setup. Each Datagram is carried independently from every other one and
the network layer makes no attempt to do error recovery. It should be obvious that as far as the Service
Provider is concerned, a Datagram service is the more primitive of the two services.
Virtual circuits have the advantage that the service user does not have to go to the trouble of ordering the
packets since they arrive in order. However, this advantage turns into a disadvantage if a particular user
process does not care about the ordering. For transaction oriented systems, such as point-of-sale credit
verification, the cost of setting up and later clearing the Virtual Circuit can easily dwarf the cost of sending
the single message.

9.3 Datagram Service, Virtual Circuit Service

139

A similar situation holds with error control. With Virtual Circuit service the user need not worry about errors, but there are situations where automatic retransmission is undesirable. An obvious example is digitized
real-time voice. It is far better for the listener to hear an incorrect packet (a fraction of a word garbled) than
to hear a two-second pause while waiting for timeout and retransmission. In this case, the user might consider
the error control provided by the Service Provider too good.
In other situations however, the user might consider the degree of error control provided by the Service
Provider too weak. Banks, in particular, tend to get quite upset if 1 dollar becomes 4095 dollars due to
a transmission error. Those applications requiring better accuracy than the Service Provider provides (essentially longer checksums) may end up superimposing their own error-detecting scheme on top of the
Service Providers by computing and transmitting checksums within the data itself. In this case it is clearly
inefficient to have the Service Provider duplicate what the user insists upon doing for himself anyway.
In addition to providing automatic sequencing and error control, virtual circuits also provide flow control.
As in the other cases, there may be users who can handle very high data rates and do not want the Service
Provider to impede them. For example, if one of the hosts is a workstation collecting real time data at high
speed, and another is a dedicated one whose job is to store the data as fast as they come in, datagrams may
be able to provide substantially higher throughput than a compulsory flow control scheme based on a sliding
window. This conclusion is especially true if the Service Provider forces a standard window size on all
customers.
Yet another potential advantage of datagrams for the sophisticated user is the ability to do his own packet
switching. A router might take packets from the Service Provider as they come and inject them into a local
private network. If the private network uses datagrams internally, there is hardly any point in forcing them
to come out of the public ones in sequence, only to be randomized milliseconds later anyway. Yet if the
Service Provider network only offers virtual circuit service, there will surely be a performance penalty to be
paid.
From these examples you can see that datagrams provide sophisticated users with great flexibility in handling
sophisticated applications. The supporters of virtual circuits would argue that most network applications
(e.g., remote login, file transfer) prefer sequential, error free, flow controlled communication channels.
Indeed this is the case with the Internet which ordinarily provides a Virtual Circuit service (TCP) although
Datagram service (UDP) is available.
Most existing user programs and operating systems understand Virtual Circuit service, which makes interfacing to them easier. Also, by putting the sequencing, error control, and flow control in the Service
Provider, the code only has to be written once. If these functions are in the hosts, each host operating system has to be modified, resulting in much more work. Replacing bits and pieces of code in everybody's
operating system is a task to be favourably compared only with Hercules' cleaning out of the Augean stables.
If the Service Provider designers are computer network experts and the computer centres owning the hosts
are relative newcomers to networking, the argument for having the Service Provider designers do as much
of the work as possible becomes even stronger.
Another argument for putting as much of the data communication work as possible in the Service Provider
is that the routers of the nodes are specialised dedicated processors, and invariably better suited to that kind
of work than the hosts.
Table 9.1 contains a comparison of a Virtual Circuit and Datagram service.

ISSUE                 VIRTUAL CIRCUIT                       DATAGRAM
Initial setup         Required                              Impossible
Destination address   During setup only                     Needed in every packet
Packet sequencing     Packets passed to the host in the     Packets passed to the host
                      order received by Service Provider    in any order
Error handling        Explicitly done in Service Provider   Explicitly done by host
Flow control          Provided by Service Provider          Not provided by Service Provider
Table 9.1: Summary of the major differences between Virtual Circuit and Datagram service.

9.4 Internal Structure of the Network


Given the two classes of service the Service Provider can provide to its customers, it is time to see how the
service provider network or simply network works inside. Interestingly enough, the same two models,
Virtual Circuit and Datagram, can be used for internal transport. When Virtual Circuits are used, a route is
chosen from source NODE to destination NODE when the circuit is set up. That route is used for all traffic
flowing over the Virtual Circuit until it is torn down.
If packets flowing over a given Virtual Circuit always take the same route through the Service Provider,
each NODE must remember where to forward packets for each of the currently open virtual circuits passing
through it. Every NODE must maintain a table with one entry per open Virtual Circuit. Virtual circuits
not passing through NODE X are not entered in X's table, of course. Each packet travelling through the
Service Provider must contain a Virtual Circuit number field in its header, in addition to sequence numbers,
checksums, and the like. When a packet arrives at a NODE, the NODE knows on which line it arrived and
what the Virtual Circuit number is. Based on this information, the packet must be forwarded to the correct
NODE.
When a user process informs its local NODE that it wants to talk to a process in another machine, the router
at that NODE chooses a Virtual Circuit number not already in use on that machine for the conversation.
Since each NODE chooses Virtual Circuit numbers independently, the same Virtual Circuit number is likely
to be in use on two different paths through some intermediate NODE, leading to ambiguities.
Consider the Service Provider network of Fig. 9.1. Suppose that a process at NODE A wants to talk
to a process at NODE D. NODE A chooses Virtual Circuit 0. Let us assume that route ABCD is chosen.
Simultaneously, a process in B decides it wants to talk to a process in D (not the same one as A). If there
are no open Virtual Circuits starting in B at this point, NODE B will also choose Virtual Circuit 0. Further
assume that route BCD is selected as the best one. After both virtual circuits have been set up, the process
at A sends its first message to D, on Virtual Circuit 0. When the packet arrives at D, the poor NODE does
not know which user process to give it to.
To solve this problem, whenever a NODE wants to create a new outbound Virtual circuit, it chooses the
lowest circuit number not currently in use. The NODE (say X) does not forward this setup to the new
NODE (say Y) along the route as is. Instead, X looks in its table to find all the circuit numbers currently
being used for traffic to Y. It then chooses the lowest free number and substitutes that number for the one
in the packet, overwriting the number chosen by the user. Similarly, NODE Y chooses the lowest circuit
number free between it and the next NODE.
When this setup packet finally arrives at the destination, the NODE there chooses the lowest available inbound circuit number, overwrites the circuit number found in the packet with this, and passes it to the user. As long as the destination host always sees the same circuit number on all traffic arriving on a given Virtual Circuit, it does not matter that the source user is consistently using a different number.

        8 simplex virtual circuits

        Originating at A          Originating at B
        0 - ABCD   (1)            0 - BCD  (2)
        1 - AEFD   (3)            1 - BAE  (4)
        2 - ABFD   (5)            2 - BF   (6)
        3 - AEC    (7)
        4 - AECDFB (8)

Figure 9.1: One-directional Virtual Circuits showing the order in which they are set up
Each entry consists of an incoming and an outgoing part. Each of the two parts has a NODE name (used
to indicate a line) and a Virtual Circuit number. When a packet arrives, the table is searched on the left
(incoming) part, using the arrival line and Virtual Circuit number found in the packet as the key. When a
match is found, the outgoing part of the entry tells which Virtual Circuit number to insert into the packet
and which NODE to send to it. H stands for the host, both on the incoming and outgoing sides.
Fig. 9.1 shows the virtual circuits being set up in the order indicated in that figure. For example, consider a packet travelling from a user at NODE A to a user at NODE B on Virtual Circuit (VC) 4 (i.e., route AECDFB). When NODE A gets a packet from its user, with circuit 4, it
searches its table, finding a match for H4 at entry 5 (the top entry is 0). The outgoing part of this entry is E3,
which means replace the circuit 4 with circuit 3 and send it to NODE E. NODE E then gets a packet from A
with circuit 3, so it searches for A3 and finds a match at the third entry. The packet now goes to C as circuit
1. The sequence of entries used is marked by the heavy line.
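The lookup just described can be sketched in Python. The two table entries below come straight from the worked example (H4 mapping to E3 at NODE A, and A3 mapping to C1 at NODE E); the function and the dictionary representation are an illustration, not the book's actual data structures.

```python
# Sketch of per-NODE Virtual Circuit forwarding ("label swapping").
# Each NODE's table maps an (incoming line, incoming circuit) pair to
# an (outgoing line, outgoing circuit) pair; 'H' is the local host.

def forward(table, incoming_line, incoming_vc):
    """Return the (outgoing line, outgoing circuit) the packet leaves on."""
    return table[(incoming_line, incoming_vc)]

node_a_table = {('H', 4): ('E', 3)}   # from the worked example: H4 -> E3
node_e_table = {('A', 3): ('C', 1)}   # from the worked example: A3 -> C1

out_line, out_vc = forward(node_a_table, 'H', 4)
print(out_line, out_vc)               # E 3
# The packet arrives at E on the line from A, now carrying circuit 3:
out_line, out_vc = forward(node_e_table, 'A', out_vc)
print(out_line, out_vc)               # C 1
```

Note that the circuit number is rewritten at every hop, which is exactly why the source and destination users can consistently see different numbers for the same circuit.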
Because virtual circuits can be initiated from either end, a problem occurs when call setups are propagating
in both directions at once along a chain of NODEs. At some point the two setups arrive at adjacent NODEs.

Figure 9.2: NODE tables corresponding to the Virtual Circuits set up in Fig. 9.1. (The tables themselves, one per NODE with an incoming and an outgoing (NODE, circuit number) pair per entry, are not reproduced here.)

Each NODE must now pick a Virtual Circuit number to use for the (full-duplex) circuit it is trying to establish.
If they have been programmed to choose the lowest number not already in use on the link between them,
they will pick the same number, causing two unrelated virtual circuits over the same physical line to have
the same number. When a data packet arrives later, the receiving NODE has no way of telling whether it
is a forward packet on one circuit or a reverse packet on the other. We shall see later on how this conflict is
resolved. If circuits are half-duplex, there is of course no ambiguity. One should realize that every process
must be required to indicate when it is through using a Virtual Circuit, so that the Virtual Circuit can be
purged from the NODE tables to recover the space.
So much for the use of virtual circuits internal to the Service Provider. The other possibility is to use
Datagrams internally. When Datagrams are used, the NODEs do not have a table with one entry for each
Virtual Circuit. Instead, they have a table telling which outgoing line to use for each possible destination
NODE. These tables are also needed when Virtual Circuits are used internally to determine the route for a
setup packet.
Each Datagram must contain the full destination address (the machine, and some indication of which process
on that machine). When a packet comes in, the NODE looks up the outgoing line to use and sends it on
its way. Nothing in the packet is modified. Also, the creation and destruction of Transport layer Virtual
Circuits do not require any special work on the part of the NODEs.
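The contrast with the Virtual Circuit tables can be sketched as follows: one entry per possible destination NODE, and the packet itself is never modified. The topology and table contents below are hypothetical.

```python
# Sketch of Datagram forwarding inside a NODE: the table maps each
# possible destination NODE to an outgoing line. Entries are invented
# for illustration.

routing_table = {
    'A': 'line-to-A',
    'B': 'line-to-A',    # B is reached via A from this NODE
    'C': 'line-to-C',
    'D': 'line-to-C',    # D is reached via C from this NODE
}

def forward_datagram(packet):
    """Choose the outgoing line from the full destination address
    carried in every Datagram; nothing in the packet is changed."""
    return routing_table[packet['dest']]

pkt = {'dest': 'D', 'payload': b'hello'}
print(forward_datagram(pkt))   # line-to-C
```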
Both virtual circuits and datagrams have their supporters and their detractors. We will now attempt to
summarize the arguments both ways. Inside the Service Provider network the main trade-off between Virtual
Circuits and Datagrams is between NODE memory space and bandwidth:
1. Virtual circuits allow packets to contain circuit numbers instead of full destination addresses. If the
packets tend to be fairly short, a full destination address in every packet may represent a significant
amount of overhead, and hence wasted bandwidth. The use of Virtual Circuits inside the Service
Provider network becomes especially attractive when the users are online terminals, with only a few
characters per packet. The price paid for using Virtual Circuits internal to the network is the table
space within the NODEs or routers and the table lookup for every packet which arrives. Depending
upon the relative cost of communication circuits versus NODE memory, one or the other may be
cheaper.


For transaction processing systems, such as when the users are Automatic Teller Machines, the overhead required to set up and clear a Virtual Circuit may easily dwarf the actual use of the circuit. If the majority of the traffic is expected to be of this kind, the use of Virtual Circuits inside the Service Provider network makes little sense.
Virtual circuits also have a vulnerability problem. If a NODE crashes and loses its memory, even if it
comes back up a second later, all the Virtual Circuits passing through it will have to be aborted.
2. In contrast, if a Datagram NODE goes down, only those users whose packets were queued up in
the NODE at the time will suffer, and maybe not even all those, depending upon whether they have
already been acknowledged or not. The loss of a communication line is fatal to virtual circuits using
it, but can be easily compensated for if datagrams are used. Datagrams also allow the routers to
balance the traffic throughout the Service Provider, since routes can be changed halfway through a
conversation.
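To make the bandwidth side of the trade-off in point 1 concrete, here is a back-of-the-envelope sketch. The field sizes (a 2-byte circuit number versus an 8-byte full address, and 4 bytes of user data) are illustrative assumptions, not taken from any particular protocol.

```python
# Illustrative arithmetic for the header-overhead trade-off between
# Virtual Circuit and Datagram operation with short packets.

payload = 4            # bytes: a few characters from an online terminal
vc_header = 2          # bytes: a Virtual Circuit number (assumed size)
dg_header = 8          # bytes: a full destination address (assumed size)

vc_overhead = vc_header / (vc_header + payload)
dg_overhead = dg_header / (dg_header + payload)

print(f"VC overhead:       {vc_overhead:.0%}")   # 33%
print(f"Datagram overhead: {dg_overhead:.0%}")   # 67%
```

With packets this short, full addresses double the overhead fraction, which is exactly why Virtual Circuits look attractive for terminal traffic.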

9.5 Routing Techniques


We will use the word route to mean the path that a packet follows as it traverses the Service Provider from
its source NODE to its destination NODE. Since every NODE in a Wide Area Network routes packets along
their way the term router is sometimes used instead of NODE when the context is that of routing only.
The routing algorithm is part of the Layer 3 software responsible for deciding which output line an incoming
packet should be transmitted on. If the Service Provider network uses Datagrams internally, this decision
must be made anew for every arriving data packet. However, if the Service Provider uses Virtual Circuits
internally, routing decisions are made only when a new Virtual Circuit is being set up. Thereafter, data
packets just follow the previously established route. The latter case is often called session routing, because
a route remains in force for an entire session e.g., a login session at a terminal or a file transfer. In deciding
upon a route that a packet should follow there must clearly be some objective. Minimizing mean packet
delay is clearly one objective, but so is maximizing total network throughput. Furthermore, these two goals
are also in conflict, since operating any queuing system near capacity implies a long queuing delay. As a
compromise, many networks attempt to minimize the number of hops a packet must make, because reducing
the number of hops tends to improve the delay and also reduce the amount of bandwidth consumed, which
tends to improve the throughput as well.
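Minimum-hop route selection can be sketched with a breadth-first search. The topology below is a small example consistent with the routes listed in Fig. 9.1; the code is an illustration, not the book's algorithm.

```python
from collections import deque

# Minimum-hop route selection by breadth-first search over an
# adjacency-list topology (links between NODEs A-F).

links = {
    'A': ['B', 'E'], 'B': ['A', 'C', 'F'], 'C': ['B', 'D', 'E'],
    'D': ['C', 'F'], 'E': ['A', 'C', 'F'], 'F': ['B', 'D', 'E'],
}

def min_hop_route(src, dst):
    """Return a route with the fewest hops from src to dst, or None."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                 # reconstruct the route backwards
            route = []
            while node is not None:
                route.append(node)
                node = parent[node]
            return route[::-1]
        for nxt in links[node]:
            if nxt not in parent:       # first visit = fewest hops
                parent[nxt] = node
                queue.append(nxt)
    return None

print(min_hop_route('A', 'D'))   # ['A', 'B', 'C', 'D'] - three hops
```

Several three-hop routes from A to D exist in this topology; BFS simply returns the first one it discovers.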
Route selection may have to be based on many factors in different circumstances. These may pertain to
carrier selection, use of an alternate route on the occasion of link failure, or load balancing among routes
to reduce congestion. Consider the following: In certain situations, the lower cost of point-to-point satellite
service may dictate use of that route for most traffic. During rain outages or eclipses of the satellite, on the
other hand, terrestrial routes may be needed to ensure availability. A bulk transmission deferred to the early
morning hours may best be routed via unused voice links for lowest cost. On the other hand, if the expected
volume is higher than the voice links can handle, the use of a route via a nonswitched digital trunk may be
the least cost. If the bulk transmissions are few in number, then a switched link may be less expensive.
Some traffic may merit a route at a premium cost in order to obtain very fast response time. In some of these
illustrations, it is desirable that the origin NODE have a role in the route selection. In other situations, the
route selections are entirely within one Service Provider's domain, which appears as a link to the remainder
of the network. These selections then may be based on workload distributions within that Service Provider
network, the objective being to optimize the performance, for example, throughput or average response time,
of the network.


Routing algorithms can be classified according to three characteristics: the decision place, the information
used and the decision time.
1. DECISION PLACE. The place where the route is selected may be centralised or distributed.

(a) In a centralised system, the routing table for each NODE is determined at a central location and
sent to each NODE; each NODE is given the route to every possible destination.
(b) In one type of distributed decision, the selection of the route to a given destination is based on
the work loads; and the decision is made at each NODE along the route.
(c) In another type of distribution decision, such as explicit path routing, the selection of a route is
made at the originating NODE for the message.
2. INFORMATION SOURCE. The route selection may be based on information concerning either link characteristics (for example, cost and availability) or link work loads. The source of this information may be

(a) The originating NODE of the message, if the selection of the route is by link cost or the matching
of link characteristics to the message characteristics.
(b) A NODE in the vicinity of an outage, if the selection of the route is by link availability.
(c) All NODEs in the network, if the selection is based on work loads throughout the network.
(d) Those NODEs immediately adjacent to the NODE making the decision, if the selection is by
work loads in the local environment.
Note that the current workload at any NODE can only be known precisely at that one NODE. Information about the workload at one NODE may be sent to another NODE, but that information is
already somewhat out of date by the time it arrives. Global information in particular (being gathered
from greater distances) is meaningful only in terms of averages over a period of time.
3. DECISION TIME. Selection of a new route to a given destination may be made frequently or infrequently. Occasions for route selection might be:

(a) The start of a packet.
(b) The start of a session.
(c) Major shifts in workload patterns.
(d) Link or NODE outages.
(e) Major shifts in network configuration or use of carrier services.

When multiple routes exist between a pair of nodes, the preferred route may change from time to time
due to changing load conditions. How rapidly one should change the preferred route depends on the load
characteristics themselves and the economic benefits to be derived, as well as the cost and difficulties one
can encounter with too rapid a re-optimization. A conservative approach is to consider a re-optimization of
routes on a relatively long-term basis. That can involve the employment of a slowly adaptive and largely
heuristic optimization. The sending NODE could periodically sample the time delays encountered in the
several available routes and could modify the route in accordance with this long-term sampling. However,
shorter-term optimization will sometimes be advantageous.
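The long-term sampling idea above can be sketched as follows. The class, the route names and the smoothing constant are illustrative assumptions: the sending NODE keeps an exponentially weighted average of sampled delays per route and prefers the route with the lowest estimate.

```python
# Sketch of slowly adaptive route selection by long-term delay sampling.

ALPHA = 0.9   # how strongly history is weighted (an assumed constant)

class RouteSelector:
    def __init__(self, routes):
        self.estimate = {r: None for r in routes}   # smoothed delay per route

    def record_sample(self, route, delay_ms):
        """Fold one periodic delay sample into the long-term estimate."""
        old = self.estimate[route]
        self.estimate[route] = (delay_ms if old is None
                                else ALPHA * old + (1 - ALPHA) * delay_ms)

    def preferred_route(self):
        """Prefer the route with the lowest smoothed delay so far."""
        known = {r: e for r, e in self.estimate.items() if e is not None}
        return min(known, key=known.get)

sel = RouteSelector(['ABCD', 'AEFD'])
sel.record_sample('ABCD', 40.0)
sel.record_sample('AEFD', 25.0)
print(sel.preferred_route())   # AEFD
```

Because ALPHA weights history heavily, a single short-lived delay spike will not flip the preferred route, which matches the conservative, long-term re-optimization described above.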
Based on these three characteristics, routing algorithms can be grouped into two major classes: nonadaptive
and adaptive. Nonadaptive algorithms do not base their routing decisions on measurements or estimates of
the current traffic and topology, whereas adaptive ones do. If an adaptive algorithm manages to adapt well
to the traffic, it will naturally outperform an algorithm that is oblivious to what is going on in the network,
but adapting well to the traffic is easier said than done.


9.6 Congestion Control


When too many packets are present in the Service Provider network or in part of that network, performance
degrades. This situation is called congestion and is depicted in Fig. 9.3.
Figure 9.3: Congestion in the Service Provider network. (The graph plots packets delivered against packets sent: a perfect network delivers every packet up to the maximum carrying capacity of the subnet, a desirable network tracks that ideal closely, and a congested network delivers ever fewer packets as the load grows.)


When the number of packets dumped into the Service Provider network by the users is within its carrying
capacity, they are all delivered (except for a few that are afflicted with transmission errors), and the number
delivered is proportional to the number sent. However, as traffic increases too far, the NODEs are no
longer able to cope, and they begin losing packets. This tends to make matters worse. At very high traffic,
performance collapses completely, and almost no packets are delivered.
Congestion can be brought about by several factors. If the NODEs are too slow to perform the various
bookkeeping tasks required of them (queuing buffers, updating tables, etc.) queues can build up, even
though there is excess line capacity. On the other hand, even if the router CPU is infinitely fast, queues
will build up whenever the input traffic rate exceeds the capacity of the output lines. This can happen, for
example, if three input lines are delivering packets at top speed, all of which need to be forwarded along
the same output line. Either way, the problem boils down to not enough NODE buffers. Given an infinite
supply of buffers, the NODE can always smooth over any temporary bottlenecks by just hanging onto all
packets for as long as necessary. Of course, for stable operation, the hosts cannot indefinitely pump packets
into the Service Provider network at a rate higher than that network can absorb.
Congestion tends to feed upon itself and become worse. If a NODE has no free buffers, it must ignore newly
arriving packets. When a packet is discarded, the NODE that sent the packet will time out and retransmit
it, perhaps ultimately many times. Since the sending NODE cannot discard the packet until it has been
acknowledged, congestion at the receiver's end forces the sender to refrain from releasing a buffer it would
normally have freed. In this manner congestion builds up, like cars in a traffic jam.
We will now examine five strategies for dealing with congestion:
1. Preallocation of resources to avoid congestion.


2. Allowing NODEs to discard packets at will.


3. Restricting the number of packets allowed in the Service Provider.
4. Using flow control.
5. Choking off input when congestion occurs.

9.6.1 Preallocation of Buffers


If Virtual Circuits are used inside the Service Provider network, it is possible to solve the congestion problem
altogether, as follows. When a Virtual Circuit is set up, the Call Request packet winds its way through the
Service Provider, making table entries as it goes. When it has arrived at the destination, the route to be
followed by all subsequent data traffic has been determined and entries made in the routing tables of all the
intermediate NODEs.
Normally, the Call Request packet does not reserve any buffer space in the intermediate NODEs, just table
slots. However, a simple modification to the setup algorithm could have each Call Request packet reserve
one or more data buffers as well. If a Call Request packet arrives at a NODE and all the buffers are already
reserved, either another route must be found or a busy signal must be returned to the caller. Even if buffers
are not reserved, some aspiring Virtual Circuits may have to be rerouted or rejected for lack of table space,
so reserving buffers does not add any new problems that were not already there.
By permanently allocating buffers to each Virtual Circuit in each NODE, there will always be a place to
store any incoming packet until it can be forwarded. First consider the case of a stop-and-wait NODE-NODE protocol. One buffer per Virtual Circuit per NODE is sufficient for simplex circuits, and one for
each direction is sufficient for full-duplex circuits. When a packet arrives, the acknowledgement is not sent
back to the sending NODE until the packet has been forwarded. In effect, an acknowledgement means
that the receiver not only received the packet correctly, but also that it has a free buffer and is willing to
accept another. If the NODE-NODE protocol allows multiple outstanding packets, each NODE will have to
dedicate a full window's worth of buffers to each Virtual Circuit to completely eliminate the possibility of
congestion. These principles are partially illustrated in Fig. 9.4.
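The buffer arithmetic implied above can be captured in a small helper; the figures in the example are purely illustrative.

```python
# Buffers a NODE must dedicate to rule out congestion entirely under
# the preallocation scheme: one buffer per simplex Virtual Circuit for
# stop-and-wait, or a full window's worth per circuit (per direction,
# if full-duplex) for a sliding window protocol.

def buffers_needed(circuits, window_size=1, full_duplex=False):
    per_circuit = window_size * (2 if full_duplex else 1)
    return circuits * per_circuit

# e.g. 100 full-duplex circuits through a NODE, window size 7:
print(buffers_needed(100, window_size=7, full_duplex=True))   # 1400
```

The example makes the cost of the scheme plain: buffer demand grows with both the number of open circuits and the window size, whether or not any traffic is flowing.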
When each Virtual Circuit passing through each NODE has a sufficient amount of buffer space dedicated to
it, packet switching becomes quite similar to circuit switching. In both cases an involved setup procedure is
required. In both cases substantial resources are permanently allocated to specific connections, whether or
not there is any traffic. In both cases congestion is impossible because all the resources needed to process
the traffic have already been reserved. And in both cases there is a potentially inefficient use of resources,
because resources not being used by the connection to which they are allocated are nevertheless unavailable
to anyone else.
Because dedicating a complete set of buffers to an idle Virtual Circuit is expensive, some Service Providers
may use it only where low delay and high bandwidth are essential, for example on Virtual Circuits carrying
digitized speech. For Virtual Circuits where low delay is not absolutely essential all the time, a reasonable
strategy is to associate a timer with each buffer. If the buffer lies idle for too long, it is released, to be
re-acquired when the next packet arrives. Of course, acquiring a buffer might take a while, so packets would
have to be forwarded without dedicated buffers until the chain of buffers could be set up again.

9.6.2 Discarding Packets


Our second congestion control mechanism is just the opposite of the first one. Instead of reserving all the
buffers in advance, nothing is reserved in advance.

Figure 9.4: Congestion in a NODE can be reduced by putting an upper bound on the number of buffers queued on an output line. (Diagram omitted: (a) input and output lines with no limit on the number of buffers per output line; (b) a limit of 4 buffers per output line, with the remaining buffers free.)

If a packet arrives and there is no place to put it, the NODE simply discards it. If the Service Provider network offers Datagram service to the users, that is all
there is to it: congestion is resolved by discarding packets at will. If the Service Provider network offers
Virtual Circuit service, a copy of the packet must be kept somewhere so that it can be re-transmitted later.
One possibility is for the NODE sending the discarded packet to keep timing out and re-transmitting the
packet until it is received. Another possibility is for the sending NODE to give up after a certain number of
tries, and require the source NODE to time out and start all over again.
Discarding packets at will can be carried too far. It is clearly stupid in the extreme to ignore an incoming
packet containing an acknowledgement from a neighbouring NODE. That acknowledgement would allow
the NODE to abandon a by-now-received packet and thus free up a buffer. However, if the NODE has no
spare buffers, it cannot acquire any more incoming packets to see if they contain acknowledgements. The
solution is to permanently reserve one buffer per input line to allow all incoming packets to be inspected. It
is quite legitimate for a NODE to examine a newly arrived packet, make use of any piggybacked acknowledgement, and then discard the packet anyway. Alternatively, the bearer of good tidings could be rewarded
by keeping it, using the just freed buffer as the new input buffer.
One way to minimize the amount of bandwidth wasted on the re-transmission of discarded packets is to
systematically discard packets that have not yet traveled far and hence do not represent a large investment
in resources.
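The "smallest investment" policy can be sketched as follows. The packet records, the hop counter, and the idea of selecting a single victim from a queue are illustrative assumptions.

```python
# Sketch of a discard policy: when a buffer must be freed, prefer the
# queued packet that has made the fewest hops, since it represents the
# smallest investment of network resources to re-transmit.

def choose_victim(queued_packets):
    """Pick the packet to discard: the one with the fewest hops so far."""
    return min(queued_packets, key=lambda p: p['hops'])

queue = [
    {'id': 1, 'hops': 5},   # has already crossed five NODEs
    {'id': 2, 'hops': 1},   # has barely entered the network
    {'id': 3, 'hops': 3},
]
print(choose_victim(queue)['id'])   # 2
```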

9.6.3 Choke Packets


What is really needed for proper congestion control is a mechanism that is triggered only when the system
is congested.
The following is one such mechanism. Each NODE monitors the percent utilization of each of its output lines. Associated with each line is a real variable, u, whose value, between 0 and 1, reflects the recent utilization of that line. To maintain a good estimate of u, a sample of the instantaneous line utilization, f (either 0 or 1), can be made periodically and u updated according to

    u_new = a * u_old + (1 - a) * f

where the constant a determines how fast the NODE forgets recent history.

Whenever u moves above a threshold, the output line enters a warning state. Each newly arriving packet
is checked to see if its output line is in warning state. If so, the NODE sends a choke packet back to the
source NODE, giving it the destination found in the packet. The packet itself is tagged (a header bit is turned
on) so that it will not generate any more choke packets later, and is forwarded in the usual way.
When the source NODE gets the choke packet, it is required to reduce the traffic to the specified destination
by X percent. Since other packets aimed at the same destination are probably already under way and will
generate yet more choke packets, the NODE should ignore choke packets referring to that destination for
a fixed time interval. After that period has expired, the NODE listens for more choke packets for another
interval. If one arrives, the line is still congested, so the NODE reduces the flow still more and begins
ignoring choke packets again. If no choke packets arrive during the listening period, the NODE may increase
the flow again. The feedback implicit in this protocol should prevent congestion, yet not throttle any flow
unless trouble occurs.
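The utilization estimate and the choke-packet trigger can be sketched together; the constants a and the threshold, the packet representation, and the function names are all illustrative choices, not part of any specified protocol.

```python
# Sketch of the choke-packet trigger: each output line keeps a smoothed
# utilization u, updated from periodic 0-or-1 samples f. When u crosses
# a threshold the line is in the warning state, and untagged packets
# routed over it cause a choke packet to be sent back to the source.

A = 0.9          # memory constant: how fast recent history is forgotten
THRESHOLD = 0.8  # utilization above which the line is in warning state

class OutputLine:
    def __init__(self):
        self.u = 0.0

    def sample(self, f):
        """Periodic update from the instantaneous utilization f (0 or 1)."""
        self.u = A * self.u + (1 - A) * f

    def in_warning_state(self):
        return self.u > THRESHOLD

def handle(line, packet, send_choke):
    if line.in_warning_state() and not packet.get('choked'):
        send_choke(packet['source'], packet['dest'])
        packet['choked'] = True     # tag: generate no further choke packets
    return packet                   # forwarded in the usual way

line = OutputLine()
for _ in range(50):                 # a sustained busy period
    line.sample(1)
print(round(line.u, 3), line.in_warning_state())   # 0.995 True
```

Note how the tag bit prevents every intermediate NODE along the path from piling further choke packets onto an already-warned source.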
Several variations on this congestion control algorithm have been proposed. For one, the NODEs could
maintain two critical levels. Above the first level, choke packets are sent back. Above the second, incoming traffic is just discarded, the theory being that the NODE has probably been warned already. Without
extensive tables it is difficult for the NODE to know which NODEs have been warned recently about which
destinations, and which NODEs have not.
Another variation is to use queue lengths instead of line utilization as the trigger signal. The same exponential weighting can be used with this metric as with u, of course. Yet another possibility is to have the
NODEs propagate congestion information along with routing information, so that the trigger is not based
on only one NODE's observations, but on the fact that somewhere along the path there is a bottleneck. By
propagating congestion information around the Service Provider network, choke packets can be sent earlier
before too many more packets are under way.

9.7 Flow control


There is some confusion in the literature between flow control and congestion control. Most authors use
flow control to mean the mechanism by which a receiver throttles a sender to prevent data from arriving at
a rate faster than the receiver can handle it. Stop-and-wait and the sliding window are two well-known flow
control mechanisms. All flow control mechanisms either require the sender to stop sending at some point
and wait for an explicit go-ahead message, or permit the receiver to simply discard messages at will with
impunity.
Congestion control, in contrast, deals with the problem of more packets arriving at a NODE or router than
there are buffers to store them all. Flow control is an end-to-end phenomenon, whereas congestion control
deals with problems occurring at intermediate NODEs as well.
The reason some writers have confused the two topics is not hard to find. Some networks have attempted
to use flow control mechanisms to eliminate congestion. Although flow control schemes can be used by the
transport layer to keep one host from saturating another, and flow control schemes can be used to prevent


one NODE from saturating its neighbours, it is difficult to control the total amount of traffic in the network
using end-to-end flow control rules.
Flow control cannot really solve congestion problems for a good reason: computer traffic is bursty. Most of
the time an interactive user sits at his terminal scratching his head, but once in a while he may want to scan
a large file. The potential peak traffic is vastly higher than the mean rate. Any flow control scheme which is
adjusted so as to restrict each user to the mean rate will provide bad service when the user wants a burst of
traffic. On the other hand, if the flow control limit is set high enough to permit the peak traffic to get through,
it has little value as congestion control when several users simultaneously demand peak throughput.
Flow control can be used to limit the traffic between pairs of:
1. User processes, e.g., one outstanding message per Virtual Circuit.
2. Hosts, irrespective of the number of virtual circuits open.
3. Source and destination NODEs, without regard to hosts.
Another form of flow control is to limit the number of virtual circuits open.
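A minimal flow control sketch at the Virtual Circuit level (item 1 above): the receiver grants the sender a number of message credits, and the sender must stop and wait for an explicit go-ahead when they are exhausted. The class and its method names are invented for this illustration.

```python
# Credit-based flow control on a single Virtual Circuit: the receiver
# throttles the sender by controlling how many messages may be
# outstanding at once.

class VirtualCircuitSender:
    def __init__(self, credits=1):
        self.credits = credits       # e.g. one outstanding message

    def try_send(self, message, transmit):
        if self.credits == 0:
            return False             # must wait for an explicit go-ahead
        self.credits -= 1
        transmit(message)
        return True

    def go_ahead(self, new_credits=1):
        self.credits += new_credits  # the receiver permits more traffic

sent = []
vc = VirtualCircuitSender(credits=1)
print(vc.try_send('m1', sent.append))   # True  - one message allowed
print(vc.try_send('m2', sent.append))   # False - throttled
vc.go_ahead()
print(vc.try_send('m2', sent.append))   # True
```

This protects the receiver from the sender; as the section argues, it does nothing to protect the NODEs in between from the aggregate traffic of many such circuits.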

9.8 Deadlock Prevention


The ultimate congestion is a deadlock, also called a lockup: the first NODE cannot proceed until the second NODE does something, and the second NODE cannot proceed because it is waiting for the first NODE to do something. Both NODEs have ground to a complete halt and will stay that way forever. Service Providers
do not consider deadlock as a desirable property to have in their networks . . .
The simplest lockup can happen with two NODEs. Suppose that NODE A has five buffers, all of which are
queued for output to NODE B. Similarly, NODE B has five buffers, all of which are occupied by packets
needing to go to NODE A. The situation is illustrated in Fig. 9.5(a). Neither NODE can accept incoming
packets from the other. They are both stuck. This situation is called direct store and forward lockup.
Figure 9.5: Store-and-forward lockup: (a) direct, (b) indirect. (Diagrams omitted.)


The same thing can happen on a larger scale, as shown in Fig. 9.5(b). Each NODE is trying to send to
a neighbour, but nobody has any buffers available to receive incoming packets. This situation is called
indirect store-and-forward lockup. Note that when a NODE is locked up, all its lines are effectively blocked,
including those not involved in the lockup.


Merlin and Schweitzer (1980) have presented a solution to the problem of store-and-forward lockup. In
their scheme, a directed graph is constructed, with the buffers being the nodes of the graph. Arcs connect
pairs of buffers in the same NODE or adjacent NODEs. The graph is constructed in such a way that if all
packets move from buffer to buffer along the arcs of the graph, then no deadlocks can occur.
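Merlin and Schweitzer's construction is more elaborate than can be shown here, but one sufficient condition for their guarantee is that the buffer graph contain no directed cycles: if packets only ever move along the arcs of an acyclic graph of buffers, no circular wait can form. A sketch of verifying acyclicity with a depth-first search follows; the graphs below (buffer names mapping to the buffers they may forward into) are hypothetical.

```python
# Cycle check over a directed buffer graph given as adjacency lists.
# An acyclic buffer graph rules out circular waits among buffers.

def is_acyclic(graph):
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {n: WHITE for n in graph}

    def visit(n):
        colour[n] = GREY
        for m in graph.get(n, ()):
            c = colour.get(m, WHITE)
            if c == GREY:              # back edge: a directed cycle
                return False
            if c == WHITE and not visit(m):
                return False
        colour[n] = BLACK
        return True

    return all(visit(n) for n in graph if colour[n] == WHITE)

# Hypothetical buffer graphs: names are (NODE, buffer-class) labels.
safe = {'A1': ['B2'], 'B1': ['A2'], 'A2': [], 'B2': []}
unsafe = {'A1': ['B1'], 'B1': ['A1']}   # the two-NODE lockup of Fig. 9.5(a)
print(is_acyclic(safe))     # True
print(is_acyclic(unsafe))   # False
```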
Store-and-forward lockups are not the only kind of deadlock that plague a Wide Area Network. Consider the
situation of a NODE with 20 buffers and lines to 4 other NODEs, as shown in Fig. 9.6.

        To host

        v=0,s=1   v=1,s=1   v=2,s=1   v=3,s=1
        v=0,s=2   v=1,s=2   v=2,s=2   v=3,s=2
        v=0,s=3   v=1,s=3   v=2,s=3   v=3,s=3
        v=0,s=4   v=1,s=4   v=2,s=4   v=3,s=4

        input     input     input     input

Figure 9.6: Message re-assembly lockup.

Four of the buffers are dedicated to the 4 input lines, to help alleviate congestion. Assume that a sliding window protocol is
being used, with window size 7. Further, assume that packets are accepted out of order by the NODE, but
must be delivered to the host in order. At the time of the snapshot, 5 virtual circuits, 0, 1, 2, 3, and 4, are
open between the host and other hosts.
Rather than dedicate a full window load of buffers to each open virtual circuit, buffers are simply assigned
on a first come, first served basis. As soon as the next sequence number expected by the host becomes
available, that packet is passed to the user (with each Virtual Circuit independent of all the others). However, higher-numbered packets within the window are buffered in the usual way.

In Fig. 9.6, v and s indicate the Virtual Circuit and sequence number of each packet, respectively. The host
is waiting for sequence number 0 on all four virtual circuits, but none of them has arrived undamaged yet.
Nevertheless, all the buffers are occupied.
If a packet with sequence number 0 should arrive, it would have to be discarded for lack of buffer space. As
a result, no more packets can be passed to the host and no buffers can be freed. This deadlock is called
reassembly lockup.
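The lockup can be reproduced in a few lines. The sketch below is illustrative (the buffer-pool size and function names are assumed): a node with a small shared pool accepts out-of-order packets but may deliver to the host only in sequence. Once the pool fills with packets whose predecessors are missing, the very packet that would unblock delivery cannot be buffered.

```python
# Toy reproduction of reassembly lockup (illustrative, not from the text):
# out-of-order packets are buffered, but delivery to the host is in-order.

POOL = 16                                # shareable buffers (the other 4 of
buffers = []                             # the 20 are dedicated to input lines)
next_expected = {vc: 0 for vc in range(4)}

def arrive(vc, seq):
    """Buffer a packet if space allows; deliver any in-order run to the host."""
    if len(buffers) >= POOL:
        return "discarded"               # no buffer free: the deadlock case
    buffers.append((vc, seq))
    delivered = []
    while (vc, next_expected[vc]) in buffers:
        buffers.remove((vc, next_expected[vc]))
        delivered.append(next_expected[vc])
        next_expected[vc] += 1
    return delivered or "held"

# Sequence numbers 1..4 arrive on all four VCs; packet 0 never does.
for vc in range(4):
    for seq in range(1, 5):
        arrive(vc, seq)

print(arrive(0, 0))                      # -> 'discarded': reassembly lockup
```

All sixteen buffers hold packets that cannot be delivered, so the arriving packet 0, which would release four of them, is itself discarded.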

9.9 Frame Relay Networks


While Wide Area Networks (WANs) which are members of the Internet community all have an explicit
Layer 3 protocol, WANs which use techniques other than packet switching have become very popular during
the last decade. Two such techniques are Frame Relay and Asynchronous Transfer Mode (ATM), which we
shall now discuss.
Frame Relay has its origins in the Integrated Services Digital Network (ISDN) specification of the 1980s.
ISDN is designed to give the user access to many different types of network service. These include Circuit
Switched data connections, Voice Switching services, Packet Switching services and access to Frame Relay
services. The Frame Relay service in ISDN was designed to provide a very high speed Packet Switching
style data service, to be used for the interconnection of devices requiring high throughput for short durations.
It was soon realized that the principles behind Frame Relay were applicable outside ISDN. Consequently,
the protocol was developed for use in a stand-alone environment.
In a Frame Relay network, the data are divided into variable length frames, all of which include address
information. The frames are forwarded to the Frame Relay network which attempts to deliver the frames
to the correct destination. So far, this is the same as for a normal Packet switched network such as X.25.
However, the difference is in the implementation. Normal Packet switching operates at Layer 3 of the OSI
model while Frame Relay operates at Layer 2 and does not implement all the functions normally associated
with the Data Link Layer (DLL). In order to see the advantage of Frame Relay over normal packet switching,
we compare typical X.25 and Frame Relay frames.
Figure 9.7: Packet Level (top) and Frame Relay (bottom) PDU formats. (The Packet Level PDU carries Flag, Level 2, Level 3, User Data Field, FCS and Flag fields; the Frame Relay PDU carries no Level 3 protocol information.)
The PDU at the top of Fig. 9.7 reminds us of the structure used by, for instance, the X.25 standard. There
are five major components:
Flags: These delineate the start and end of the packet. They are needed since the packets are not of
fixed length.
Frame check sequence: This typically contains a CRC to check whether there are any bit errors in the
frame.
Link Layer component: This is responsible for correct transmission and delivery of the packet. The
Link Layer will attempt to recover from any errors. This section also contains flow control information.
Network Layer component: This contains addressing information and ensures that a valid end-to-end
connection is achieved.
User data field: This contains the user's data but may also contain higher level protocol
messages. In general, no more than 4096 bytes of user data are permitted in a single frame.


CHAPTER 9. WIDE AREA NETWORKS

The detail required by the 3-level stack of the X.25 protocol is necessary for relatively unreliable links such
as were normally found. However, with the advent of new digital and fibre links whose bit error rates are
very low, the amount of error control provided for by X.25 becomes an unnecessary overhead. Also, the
users' applications are intelligent enough to detect errors and recover accordingly. In addition, the
requirement for flow control at Layer 2 as well as Layer 3 has the effect of decreasing the throughput of the
network.
Frame relay was developed for very high speed packet switching style of networks. It works on basically
two principles:
1. Packets (actually frames) which contain errors are simply discarded.
2. The end user is himself responsible for error recovery.
The main difference between Frame Relay and other forms of switching is that Frame Relay's functions
are all implemented at the data link level. However, not all Packet Switching functions are implemented
(those that are implemented are referred to as core functions). There are three main functions within Frame
Relay:
1. Check for bit errors in the frame. Discard the frame if it contains an error.
2. Read the address information and route to the appropriate output link.
3. Check that the Frame Relay switch is not congested. If it is, set the congestion notification bits or
discard the frame.
In other words, unlike X.25, Frame Relay has no Layer 3 protocol, i.e., it only has to access Layer 2
information instead of Layer 2 and Layer 3 information. In addition, the amount of processing that takes place
at Layer 2 of Frame Relay is significantly less than that required at an interface such as X.25.
Fig. 9.8 demonstrates the difference between Packet Switching and Frame Relay. It is easy to see that the
amount of processing done for Frame Relay is significantly less than Packet Switching and therefore has the
potential for much greater throughput.
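The three core functions above can be condensed into a very small forwarding routine. The sketch below is illustrative only: the address-to-link table and the function names are assumptions, and real switches operate on the Q.922 frame format rather than dictionaries.

```python
# Sketch of the three Frame Relay core functions described in the text
# (assumed table and names; real switches work on the Q.922 frame format).

ROUTES = {16: "link-A", 17: "link-B"}    # address -> output link (example)

def relay(frame, fcs_ok, congested):
    """Return the output link for a frame, or None if it is discarded."""
    # 1. Check for bit errors; discard errored frames.
    if not fcs_ok(frame):
        return None
    # 2. Read the address and route to the appropriate output link.
    link = ROUTES.get(frame["address"])
    if link is None:
        return None                      # unknown address: discard
    # 3. On congestion, set the notification bits or discard the frame.
    if congested(link):
        if frame.get("discard_eligible"):
            return None
        frame["fecn"] = frame["becn"] = 1   # congestion notification bits
    return link

frame = {"address": 16, "payload": b"data"}
print(relay(frame, fcs_ok=lambda f: True, congested=lambda l: True))
# -> 'link-A', with the congestion bits now set in the frame
```

Note how little work the routine does compared with the X.25 flow of Fig. 9.8: there are no acknowledgements, timers or window rotations, which is exactly where the throughput advantage comes from.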

9.9.1 Circuit Connections


Frame relay was designed to be a simple protocol. It was initially defined as a permanent connection oriented
protocol, i.e., a connection between two users is permanently established and cannot be cleared down at any
time. This mode of operation is referred to as permanent virtual circuits (PVC) and is reasonably restrictive.
The connection is established by the network administrator when the network comes up and is available until
the network shuts down. There are moves to create an on-demand Frame Relay service (known as switched
virtual circuits or SVC).

9.9.2 Committed Information Rates


One of the major advantages of Frame Relay is in handling bursts of data from the user. Since there is no
flow control, users can send as much data into the network as required at any moment in time. However,
this is a disadvantage to the network provider since there is no effective way of sizing the capacity of the
backbone network. It is possible that all attached users could decide to flood the network with data at any
particular moment in time.


Figure 9.8: Packet and Frame Relay processing flow diagrams. (The X.25 flow chart checks frame validity, acknowledgements, sequence numbers, window rotation, timers and the active logical channel, with error recovery at each step, before forwarding a packet and sending Level 2 and Level 3 acknowledgements; the Frame Relay flow chart only checks frame validity, the address and congestion before forwarding the frame, discarding it otherwise.)


There are mechanisms to notify the user that there may be a problem if the user continues to send data into
the network. However, there is no way to force the user to stop sending. To avoid the situation where a
badly behaved user application continues to forward data in these conditions, the principle of Committed
Information Rates (CIR) was developed.
A CIR is the average rate at which the user expects to pass data into the network, and which the network should
be able to handle with few problems. Note that the CIR is unrelated to the actual bit rate of the physical link.
For example, a user could have a physical connection of 2 Mbps but a CIR of only 64 kbps. This means that
the average data rate is equivalent to 64 kbps but that bursts up to 2 Mbps would be possible.
It is quite possible that the user will want to exceed his CIR at times. One of the advantages of Frame Relay
is that this is allowed: the network will attempt to accept the excess data if the bandwidth is available.
However, if the network runs into any problems, it will simply discard frames.
The network distinguishes between data that has been committed to (via the CIR) and data that exceeds the
CIR by the discard eligibility bits. The idea is that in the event of problems, these frames will be discarded
first. The exact method of determining which frames are discard eligible will be vendor dependent, but
Fig. 9.9 shows a possible scheme based upon the CIR. In a time period T, a certain number of frames will
fall within the CIR. When a user sends more frames than the CIR, these extra frames are marked discard
eligible.
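A marking scheme along the lines of Fig. 9.9 can be sketched as follows. This is one possible, assumed policy (vendor schemes differ, as noted above): within each period T, traffic up to the committed amount CIR x T passes unmarked, and anything beyond it is marked discard eligible.

```python
# Possible discard-eligibility marking along the lines of Fig. 9.9
# (an assumed policy; actual vendor schemes differ).

CIR = 64_000          # committed rate, bits per second
T = 1.0               # measurement period, seconds
BC = int(CIR * T)     # committed bits per period

def mark_frames(frame_bits):
    """Mark each frame in one period as committed or discard-eligible."""
    used, marks = 0, []
    for bits in frame_bits:
        used += bits
        marks.append("DE" if used > BC else "committed")
    return marks

# Five 16,000-bit frames in one period: the first four fall within the
# 64 kbps CIR, the fifth exceeds it and is marked discard eligible.
print(mark_frames([16_000] * 5))
```

With the 2 Mbps access line of the earlier example, all five frames can still burst into the network; the marking only determines which of them the network sheds first under congestion.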
CIRs are particularly relevant when used in the context of a public Frame Relay service. A public service
could offer CIRs at different tariffs, but provide all users with the same physical connection. The public

Figure 9.9: Illustrating Committed Information Rate (CIR) and discard eligibility of frames. (Frames sent within the committed information rate during time period T are delivered normally; frames beyond it are marked discard eligible.)
service could then guarantee delivery of data that falls within the CIR with the ability for the user to exceed
this, knowing the risks involved.

9.9.3 Local Management Interface (LMI)


Since Frame Relay is very simple, it omits from the basic protocol specification several features that are
considered essential by other packet switched protocols. These essential features are contained within the
local management interface (LMI).
The LMI protocol is responsible for:
1. Ensuring that the link between the user and the network is active.
2. Notifying the addition and deletion of PVCs.
3. Delivering status messages regarding the availability of circuits.
Because Frame Relay does not handle this information, a separate Virtual Circuit is established between the
user and the network over which this information is passed (this is known as out of band signaling).
The LMI operates as a polling protocol between the user and the network. A poll and acknowledgement
are exchanged at regular intervals. An optional extension, the asynchronous update message can also be
included. This asynchronous update message passes information about the status of a single PVC whenever
a change occurs in its status. This extension enables the frame relay interface to be more responsive to
changes in network conditions.
9.9.4 Frame Relay Applications

Frame Relay was designed to provide a high throughput, low transit delay networking interface; it has limited
scope for error recovery and relies on the end user systems to recover from problems. These are also traditionally the characteristics of a Local Area Network (LAN) (especially Ethernet, see Chapter 8). Table 9.2
shows how LAN characteristics compare to the various WAN transport methods. Since Frame Relay has the
same characteristics as a LAN, it is logical that Frame Relay is very successful in areas that require extending
LANs over a wide area.
The true beauty of Frame Relay networking for the LAN user is when public Frame Relay services are used.
The user could subscribe to a CIR which would be tariffed at a certain level, but the LAN user knows that
if the traffic momentarily exceeds this CIR, the public Frame Relay service will attempt to deliver the data
(assuming the network is not congested). This is particularly important in the LAN area because of the
potential bursts of data.

LAN CHARACTERISTIC              TDM            PACKET SWITCHING   FRAME RELAY
High speed interconnection      Possibly true  Possibly true      TRUE
Low transit delays              TRUE           Possibly true      TRUE
Poor error recovery             TRUE           FALSE              TRUE
Reliance on end user protocols  TRUE           FALSE              TRUE
Bandwidth on demand             FALSE          TRUE               TRUE

Table 9.2: LAN and WAN characteristics compared

9.10 Asynchronous Transfer Mode (ATM) Networks


Asynchronous Transfer Mode (ATM), also known as cell relay or cell switching, takes advantage of the low
error rates of modern digital circuits to provide faster packet switching than X.25. ATM is a connection-oriented
protocol; in other words, between every sender and receiver there is a set route that the cells follow,
and the cells are guaranteed to arrive in the same sequence as they were sent.
ATM allows various sources of traffic, which are likely to produce bit
streams at different rates, to be packetised into cells which are then multiplexed onto a single ATM digital
pipe of capacity between 155.52 Mbps and 622.08 Mbps, as illustrated in Fig. 9.10.
Figure 9.10: The ATM principle. (Traffic sources such as a telephone at 64 kbps, data at 2 Mbps and video at 150 Mbps are packetized into cells of 53 bytes each, which a statistical multiplexer with a buffer feeds into a digital pipe of 155-620 Mbps.)


Asynchronous transfer mode (ATM) is in some ways similar to packet switching using X.25 and to frame
relay. Like packet switching and frame relay, ATM involves the transfer of data in discrete chunks. Also,
like packet switching and frame relay, ATM allows multiple logical connections to be multiplexed over a
single physical interface. In the case of ATM, the information flow on each logical connection is organized
into fixed-size packets, called cells.
ATM is a streamlined protocol with minimal error and flow control capabilities. This reduces the overhead
of processing ATM cells and reduces the number of overhead bits required with each cell, thus enabling
ATM to operate at high data rates. Further, the use of fixed-size cells simplifies the processing required at
each ATM node, again supporting the use of ATM at high data rates.
Two layers of the protocol architecture relate to ATM functions. There is an ATM layer, common to all services, that provides cell transfer capabilities, and an ATM adaptation layer (AAL) that is service dependent. The
ATM layer defines the transmission of data in fixed size cells and also defines the use of logical connections.
The use of ATM creates the need for an adaptation layer to support information transfer protocols not based
on ATM. The AAL maps higher-layer information into ATM cells to be transported over an ATM network,
then collects information from ATM cells for delivery to higher layers.
In ATM networks logical links are referred to as Virtual Channel Connections (VCCs). A VCC is analogous
to a Virtual Circuit in X.25; it is the basic unit of switching in an ATM network. A VCC is set up between
two end users through the network and a variable-rate, full-duplex flow of fixed-size cells is exchanged over
the connection. VCCs are also used for user-network exchange (control signaling) and network-network
exchange (network management and routing).
For ATM, a second sublayer of processing has been introduced that deals with the concept of Virtual Path as
illustrated in Fig. 9.11. A Virtual Path Connection (VPC) is a bundle of VCCs that have the same endpoints.
Thus, all of the cells flowing over all of the VCCs in a single VPC are switched together.
The virtual path concept was developed in response to a trend in high-speed networking in which the control
cost of the network is becoming an increasingly higher proportion of the overall network cost. The virtual
path technique helps contain the control cost by grouping connections sharing common paths through the
network into a single unit. Network management actions can then be applied to a small number of groups
of connections instead of a large number of individual connections.

Figure 9.11: ATM connection paths
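The cost saving from Virtual Paths is easy to see in a sketch of a switching table. The table layout and names below are assumptions for illustration: a switch on the interior of a VPC looks up only the VPI, so a single table entry handles every VCC bundled into the path.

```python
# Sketch of why Virtual Paths cut control cost (assumed table layout):
# an interior switch looks up only the VPI, so one table entry switches
# every VCC bundled in the path; the VCI rides along unchanged.

VP_TABLE = {(1, 5): (2, 9)}    # (in_port, in_vpi) -> (out_port, out_vpi)

def switch_cell(in_port, vpi, vci):
    """Switch a cell on its VPI alone, leaving the VCI untouched."""
    out_port, out_vpi = VP_TABLE[(in_port, vpi)]
    return out_port, out_vpi, vci

# Every VCC in VPC 5 is switched by the same single entry:
for vci in (32, 33, 34):
    print(switch_cell(1, 5, vci))
```

Adding a new VCC with the same endpoints requires no change to this table at all; only the endpoints of the VPC need to know about it, which is precisely the grouping effect described above.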

9.10.1 Quality of Service (QoS)


By now it should be clear that ATM provides a connection oriented service, or a virtual circuit service as we
called it in Sec. 9.4 on page 140. Fig. 9.12 suggests a general algorithm for the call establishment process
using Virtual Channels and Virtual Paths: when a request for a VCC originates, the network checks whether
a VPC to the destination already exists and, if so, whether the required quality of service can be satisfied on
it. If it can, the connection is made. Otherwise the VCC is blocked, more capacity is requested, or a new
VPC is established; the VCC request is rejected if the request for capacity is not granted.

Figure 9.12: ATM call establishment

An important concept in the call establishment process is the Quality of Service, specified by parameters
such as the cell loss ratio, the cell transfer delay and the cell block error ratio.
1. The cell loss ratio. The ratio of cells that are lost for each connection, based on an end-to-end measurement. The Cell Loss Priority has a great effect on whether a cell is discarded in cases of high
congestion.
2. The cell transfer delay. The time it takes a single cell to traverse the network between source and destination. The tolerance of cells to delay and variations in delay is crucial to many applications. In the
switch nodes, traffic of different service classes is separated so that cell transfer delay is appropriate
to the particular class of traffic. Constant bit-rate traffic receives the highest priority in this case.
3. The cell block error ratio. The total number of blocks that contain an error divided by the total number
of cells transmitted in that period. Note that this is distinct from the single error ratio, since single
errors are picked up by the ATM adaption layer using the Header Error Control (HEC) field, and
corrected without retransmission. By contrast, block errors force the retransmission of the entire cell.
The toleration of the various services to errors is crucial here: data needs a low cell block error ratio
whereas voice can be transmitted with a higher cell block error ratio.
The types of traffic parameters that can be negotiated include average rate, peak rate, burstiness, and peak
duration. The network may need a number of strategies to deal with congestion and to manage existing
and requested VCCs. At the crudest level, the network may simply deny new requests for VCCs to prevent
congestion. Additionally, cells may be discarded if negotiated parameters are violated or if congestion
becomes severe. In an extreme situation, existing connections might be terminated.
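The crudest of these strategies, denying new VCC requests, can be sketched in a few lines. The admission policy and names below are assumptions for illustration (real connection admission control is considerably more sophisticated): a new VCC is admitted only while the sum of negotiated average rates fits within the link capacity.

```python
# Crude connection admission control in the spirit of the text (an
# assumed policy): admit a new VCC only if the sum of negotiated
# average rates stays within the link capacity; otherwise deny it.

LINK_CAPACITY = 155_000_000      # bits per second (assumed 155 Mbps pipe)

class Admission:
    def __init__(self, capacity=LINK_CAPACITY):
        self.capacity = capacity
        self.committed = 0       # sum of admitted average rates

    def request(self, average_rate):
        """Admit the VCC if its average rate still fits, else deny it."""
        if self.committed + average_rate > self.capacity:
            return False
        self.committed += average_rate
        return True

cac = Admission()
print(cac.request(100_000_000))   # -> True
print(cac.request(60_000_000))    # -> False: would exceed the link
```

Reserving by average rather than peak rate is what allows statistical multiplexing, at the cost of possible cell loss when too many sources burst at once.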


9.10.2 ATM Cells


The asynchronous transfer mode makes use of fixed-size cells, consisting of a 5-octet header and a 48-octet
information field. The peculiar choice was again a political compromise between the Americans and the
Europeans who wanted different cell sizes. There are several advantages to the use of small, fixed-size
cells. First, the use of small cells may reduce queueing delay for a high-priority cell, because it waits less
if it arrives slightly behind a lower-priority cell that has gained access to a resource (e.g., the transmitter).
Second, it appears that fixed-size cells can be switched more efficiently, which is important for the very high
data rates of ATM. With fixed-size cells, it is easier to implement the switching mechanism in hardware.
Fig. 9.13(a) shows the cell header format at the user-network interface and Fig. 9.13(b) shows the cell header
format internal to the network.
Figure 9.13: ATM cell format at (a) User-Service Provider interface and (b) internal to the network. (Both formats consist of a 5-octet header followed by a 48-octet information field, giving a 53-octet cell; the header carries the Generic Flow Control field (at the user-network interface only), the Virtual Path Identifier, the Virtual Channel Identifier, the Payload Type, the CLP bit and the Header Error Control field.)
The Generic Flow Control (GFC) field does not appear in the cell header internal to the network, but only
at the user-network interface. Hence, it can be used for control of cell flow only at the local user-network
interface. The field could be used to assist the user in controlling the flow of traffic for different QoS. In any
case, the GFC mechanism is used to alleviate short-term overload conditions in the network.
The Virtual Path Identifier (VPI) constitutes a routing field for the network. It is 8 bits long at the user-network
interface and 12 bits long at the network-network interface. The latter allows support for an expanded
number of VPCs internal to the network, to include those supporting subscribers and those required
for network management. The Virtual Channel Identifier (VCI) is used for routing to and from the end user.


The Payload Type (PT) field indicates the type of information in the information field. Table 11.2 shows the
interpretation of the PT bits.
The Cell Loss Priority (CLP) bit is used to provide guidance to the network in the event of congestion.
A value of 0 indicates a cell of relatively higher priority, which should not be discarded unless no other
alternative is available.
The 8-bit Header Error Control (HEC) field in the header performs simple error correction for the frame
(error correction is discussed in Sec. 6.4 on page 80). Implementation of the HEC significantly improves the
performance of the ATM networks. Some services, like data transmission, value accurate transmission more
highly than the connection-oriented services. For this reason, the Cell Loss Priority (CLP) bit is included,
to enable a priority to be assigned to the integrity of data in the cell being transmitted. When the network
becomes congested, those cells with a low priority (CLP = 1) may be discarded to protect the high-priority
(CLP = 0) traffic.
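The 5-octet header layout described above can be made concrete by packing the fields into bytes. The sketch below uses the field widths given in the text (GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1, HEC 8 at the user-network interface); the HEC here is a CRC-8 over the first four octets with the generator x^8 + x^2 + x + 1 and the 0x55 offset used by ITU-T I.432, and the function names are our own.

```python
# Sketch of packing the 5-octet UNI cell header (field widths from the
# text; HEC computed as the I.432 CRC-8 with its 0x55 coset offset).

def crc8(data, poly=0x07):
    """Bitwise CRC-8 over the given bytes (generator x^8 + x^2 + x + 1)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x80 else crc << 1) & 0xFF
    return crc

def pack_uni_header(gfc, vpi, vci, pt, clp):
    """Pack the UNI header fields into 5 octets, computing the HEC."""
    first4 = bytes([
        (gfc << 4) | (vpi >> 4),              # GFC | top 4 bits of VPI
        ((vpi & 0x0F) << 4) | (vci >> 12),    # low VPI | top 4 bits of VCI
        (vci >> 4) & 0xFF,                    # middle 8 bits of VCI
        ((vci & 0x0F) << 4) | (pt << 1) | clp,
    ])
    return first4 + bytes([crc8(first4) ^ 0x55])

header = pack_uni_header(gfc=0, vpi=5, vci=32, pt=0, clp=0)
assert len(header) == 5
```

At the network-network interface the GFC nibble is absorbed into a 12-bit VPI, as described above, but the packing logic is otherwise the same.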

9.10.3 ATM Service Classes


An ATM network is designed to be able to transfer many different types of traffic simultaneously, including
real-time flows such as voice, video, and bursty packet flows. Although each such traffic flow is handled as
a stream of 53-octet cells traveling through a virtual channel, the way in which each data flow is handled
within the network depends on the characteristics of the traffic flow and the requirements of the application.
For example, real-time video traffic must be delivered with minimum variation in delay.
In this section, we summarize ATM service categories, which are used by an end system to identify the type
of service required. The following service categories have been defined by the ATM standards committee
known as the ATM Forum:
1. Real-time service.
Constant bit rate (CBR).
Real-time variable bit rate (rt-VBR).
2. Non-real-time service.
Non-real-time variable bit rate (nrt-VBR).
Available bit rate (ABR).
Unspecified bit rate (UBR).

9.10.4 Real-Time Services


The most important distinction among applications concerns the amount of delay and the variability of
delay, referred to as jitter, that the application can tolerate. Real-time applications typically involve a flow
of information to a user that is intended to reproduce that flow at a source. For example, a user expects a
flow of audio or video information to be presented in a continuous, smooth fashion. A lack of continuity
or excessive loss results in significant loss of quality. Applications that involve interaction between people
have tight constraints on delay. Typically, any delay above a few hundred milliseconds becomes noticeable
and annoying. Accordingly, the demands on the ATM network for switching and delivery of real-time data
are high.


Constant Bit Rate (CBR). The CBR service is perhaps the simplest service to define. It is used by
applications that require a fixed data rate that is continuously available during the connection lifetime,
and a relatively tight upper bound on transfer delay. CBR is commonly used for uncompressed audio
and video information. Examples of CBR applications include the following:
Videoconferencing.
Interactive audio (e.g., telephony).
Audio/video distribution (e.g., television, distance learning, pay-per-view).
Audio/video retrieval (e.g., video-on-demand, audio library)
Real-Time Variable Bit Rate (rt-VBR). The rt-VBR category is intended for time-sensitive applications; that is, those requiring tightly constrained delay and delay variations. The principal difference
between applications appropriate for rt-VBR and those appropriate for CBR is that rt-VBR applications transmit at a rate that varies with time. Equivalently, an rt-VBR source can be characterized as
somewhat bursty. For example, the standard approach to video compression results in a sequence of
image frames of varying sizes. Because real-time video requires a uniform frame transmission rate,
the actual data rate varies.
The rt-VBR service allows the network more flexibility than CBR. The network is able to statistically
multiplex a number of connections over the same dedicated capacity and still provide the required
service to each connection.
Non-Real-Time Services. Non-real-time services are intended for applications that have bursty traffic characteristics and do not have tight constraints on delay and delay variation. Accordingly, the
network has greater flexibility in handling such traffic flows and can make greater use of statistical
multiplexing to increase network efficiency.
Non-Real-Time Variable Bit Rate (nrt-VBR). For some non-real-time applications, it is possible to
characterize the expected traffic flow so that the network can provide substantially improved quality
of service (QoS) in the areas of loss and delay. Such applications can use the nrt-VBR service. With
this service, the end system specifies a peak cell rate, a sustainable or average cell rate, and a measure
of how bursty or clumped the cells may be. With this information, the network can allocate resources
to provide relatively low delay and minimal cell loss.
The nrt-VBR service can be used for data transfers that have critical response-time requirements.
Examples include airline reservations, banking transactions, and process monitoring.
Unspecified Bit Rate (UBR). At any given time, a certain amount of the capacity of an ATM network
is consumed in carrying CBR and the two types of VBR traffic. Additional capacity is available for
one or both of the following reasons:
1. Not all of the total resources have been committed to CBR and VBR traffic, and
2. The bursty nature of VBR traffic means that at some times less than the committed capacity is
being used.
All of this unused capacity could be made available for the UBR service. This service is suitable for
applications that can tolerate variable delays and some cell losses, which is typically true of TCP-based traffic. With UBR, cells are forwarded on a first-in-first-out (FIFO) basis using the capacity
not consumed by other services; both delays and variable losses are possible. No initial commitment
is made to a UBR source and no feedback concerning congestion is provided; this is referred to as a
best-effort service. Examples of UBR applications include the following:


Text/data/image transfer, messaging, distribution, retrieval.
Remote terminal (e.g., telecommuting).
Available Bit Rate (ABR). Bursty applications that use a reliable end-to-end protocol such as X.25
can detect congestion in a network by means of increased round-trip delays and packet discarding.
However, X.25 has no mechanism for causing the resources within the network to be shared fairly
among many Virtual Circuits. Further, it does not minimize congestion as efficiently as is possible
using explicit information from congested nodes within the network.
To improve the service provided to bursty sources that would otherwise use UBR, the ABR service
has been defined. An application using ABR specifies a peak cell rate (PCR) that it will use and a
minimum cell rate (MCR) that it requires. The network allocates resources so that all ABR applications receive at least their MCR capacity. Any unused capacity is then shared in a fair and controlled
fashion among all ABR sources. The ABR mechanism uses explicit feedback to sources to assure that
capacity is fairly allocated. Any capacity not used by ABR sources remains available for UBR traffic.
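One way to picture the ABR allocation is a max-min style division of spare capacity. The policy below is an assumed illustration, not the actual ABR rate-control algorithm: every source is guaranteed its MCR, and the leftover capacity is shared equally among sources, capped at each source's PCR.

```python
# Sketch (an assumed policy, not the real ABR algorithm) of the ABR
# idea: guarantee each source its minimum cell rate (MCR), then share
# leftover capacity equally, capped at each source's peak cell rate (PCR).

def allocate_abr(capacity, sources):
    """sources: list of (mcr, pcr); returns the allocated rate per source."""
    alloc = [mcr for mcr, _ in sources]          # MCR is always guaranteed
    spare = capacity - sum(alloc)
    active = list(range(len(sources)))
    while spare > 0 and active:
        share = spare // len(active)             # equal share of the spare
        if share == 0:
            break
        spare = 0
        still = []
        for i in active:
            mcr, pcr = sources[i]
            give = min(share, pcr - alloc[i])    # never exceed the PCR
            alloc[i] += give
            spare += share - give                # return what could not be used
            if alloc[i] < pcr:
                still.append(i)
        active = still
    return alloc

# Two sources on a 100-unit link: both get their MCR, and the remainder
# is split fairly, with the first source capped at its PCR of 30.
print(allocate_abr(100, [(10, 30), (20, 90)]))
```

The redistribution loop is what makes the sharing "fair and controlled": capacity a capped source cannot use flows to the sources that still have headroom below their PCR.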
Finally, as far as ATM is concerned, it is necessary to know that ATM requires an adaptation layer to
function with protocols not based upon ATM. Two such examples are PCM (Pulse Code Modulation,
see Sec. 2.5.3 on page 39) and the Internet Protocol still to be discussed. PCM voice is an application that
produces a stream of bits from digitizing a voice signal. To employ this application over ATM, it is
necessary to assemble the PCM bits into cells for transmission and to re-assemble them into audio voice
signals on reception.
In the case of IP over ATM, IP packets have to be mapped onto ATM cells which, because of the
IP packet sizes, implies segmenting IP packets into more than one cell and re-assembling the cells
on reception. Using IP over ATM, all the existing Internet infrastructure can be used over an ATM
network.

9.11 The X.25 Packet Interface


X.25 is the ITU-T protocol standard which defines the format and meaning of the information exchanged
across the DTE/NODE interface for the layer 1, 2 and 3 protocols as illustrated in Fig. 9.14. Since this interface separates the Service Provider's equipment (DCE in ITU-T terminology) from the user's equipment
(DTE), it is important that the interface be very carefully defined and understood. Note that ITU-T lays
down no standards as to the protocols used amongst NODEs internal to a packet switching network. Real
digital or analog circuits extend, of course, from one NODE to another NODE and between DTEs and
NODEs on either end.
The X.25 protocol defines 2 layers (levels) of communication for switched digital networks, with a third
layer, the Network Layer, added for packet switching networks. These are the following:
Level 1. For some analog and for some early digital networks, this is the EIA RS 232 or the equivalent ITU-T
V.24 electrical interface. For newer circuit switched networks and for most new packet switching
networks, this will be the ITU-T X.21 interface (although some packet switching networks will use
X.21 BIS, which is basically the same as V.24). X.21 was discussed in Sec. 6.2.2 on page 74 and
includes procedures for call establishment and clearing, which are network interface protocols.
Although ITU-T distinguishes between the DCE (Data Circuit-terminating Equipment) and the Data-Switching Equipment
(DSE) as shown in the figure, the DCE and the DSE are usually indistinguishable from one another.


Figure 9.14: The DTE-DCE X.25 ITU-T interface standard for Wide Area packet switched networks. (At each end a DTE connects over the X.21 electrical interface to a DCE on the Service Provider's premises, which connects over an access line to a DSE. Level 1 uses the X.21 protocols, Level 2 the HDLC protocols at the frame level, and Level 3 the packet protocols.)
Level 2. The Data Link layer which ensures reliable communication between a DTE and a NODE, even though
they may be connected by a noisy line. The protocols used are LAP and LAPB, which are very similar
to SDLC discussed in Sec. 7.3 on page 101.
Level 3. A third level is needed in X.25 to support the user/NODE interface for packet switching networks.
This third level is concerned with the information and meaning of the data field contained within each
of the Level 2 frames. The packet layer provides for routing and Virtual Circuit management.
In packet-switched networks, call establishment and clearing phases for virtual circuits are evidenced by
special packets formed at the third level. In addition, in packet-switched networks, there may be other
completely separate call establishment/clearing phases at (the X.21) Level 1 if the access to the packet-switched
network is via a real digital (as opposed to analog) circuit.
Since the Layer 1 and Layer 2 protocols have already been described, the remainder of this section concerns
the format and function of the Layer 3 packets. Each packet is enclosed in the data link control header and
trailer as shown in Table 9.3.
FLAG   ADDRESS   CONTROL   PACKET   FRAME CHECK SEQUENCE (FCS)

Table 9.3: X.25 PDU format


When a DTE wants to communicate with another DTE, it must first set up a Virtual Circuit between
them using a CALL REQUEST (CR) packet, which has the format illustrated in Fig. 9.15.
The sequences in X.25 for the Virtual Circuit call-establishment phase, data transfer phase and clearing
phase loosely resemble those of X.21, but use packets (rather than signals) for information exchange. A
simplified illustration of this and the other phases is given in Fig. 9.16.
Remember, however, that the packets for Virtual Circuit establishment and clearing are formed at Level 3
and flow at Level 2, whereas the signals for call establishment and clearing of a real access circuit appear at
Level 1 (X.21). Note also that the characters reporting the call-progress of a virtual circuit are placed in

Figure 9.15: X.25 CALL REQUEST packet format (fields, 8 bits wide: Group, Channel, Type (0001011) with Control bit, length of calling address, length of called address, calling address, called address, facilities length, facilities, user data).


Figure 9.16: Sequence diagram of X.25 call establishment, data transfer and call clearing phases. (At the calling DTE/DCE interface: CALL REQUEST, then data packets, then CLEAR REQUEST followed by CLEAR CONFIRMATION; at the called DCE/DTE interface: INCOMING CALL, CALL ACCEPTED/CALL CONNECTED, data packets, CLEAR INDICATION followed by CLEAR CONFIRMATION.)


the additional information field of the CLEAR REQUEST or RESET packets whose format is illustrated in
Fig. 9.17, whereas the characters of the call-progress signals of the real access circuits appear at Level 1.
The sequence shown in Fig. 9.16 can be described as follows:
1. The calling user/DTE builds a CALL REQUEST packet and passes it to its own NODE. This packet
includes the called user/DTE address.
2. The Service Provider network delivers the packet to the destination NODE which indicates that there
is an incoming call by sending to its DTE an INCOMING CALL packet (same format as the CALL
REQUEST packet), using the logical channel that is in the ready state and has the lowest logical
channel number.
3. If the called DTE wishes to accept the call, it sends a CALL ACCEPTED packet back. The latter
specifies the same logical channel as that of the INCOMING CALL packet. The called DTE is now
in the virtual call data transfer state.
4. The calling DTE also enters the data transfer state after it receives from its own NODE the CALL
CONNECTED packet that specifies the same logical channel as the CALL REQUEST packet. Note
that the logical channel numbers at the two ends are usually different.
5. Alternatively, the destination NODE can advise the calling DTE that it is unable to connect a call by
sending a CLEAR INDICATION packet. The calling DTE responds by dispatching a CLEAR CONFIRMATION packet and the calling DTE then returns to the ready state. The Additional Information
field of the CLEAR INDICATION packet may be coded to indicate one of the following:
number busy
out of order
remote procedure error
number refuses reverse charging
invalid call
access barred
local procedure error
network congested
not obtainable
6. During the data transfer phase, traffic can flow in either direction at any time. Data, INTERRUPT, FLOW CONTROL and RESET packets may be exchanged during the data transfer state.
7. A DTE may request that the Virtual Call be cleared by sending a CLEAR REQUEST packet. However, the logical channel will not be returned to the ready state until the DTE is given a CLEAR CONFIRMATION packet by its own NODE.
Note that the choice of circuit number on outgoing calls is determined by the calling DTE and on incoming
calls by the called NODE. It could happen that both simultaneously choose the same circuit number, leading
to a call collision. X.25 specifies that in the event of a call collision, the outgoing call is put through and the
incoming call is cancelled. Many networks will attempt to put the incoming call through shortly thereafter
using a different Virtual Circuit.
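The channel-numbering behaviour and the collision rule above can be sketched as follows. The 16-channel interface and the convention that the DTE chooses the highest-numbered free channel for outgoing calls are illustrative assumptions; the text only specifies that the called NODE uses the lowest-numbered ready channel for incoming calls, and that the outgoing call wins a collision.

```python
# Illustrative sketch of X.25 logical channel selection and the
# call collision rule. Channel counts and state names are hypothetical.
FREE = "ready"

class X25Interface:
    def __init__(self, n_channels: int = 16):
        # logical channel 0 is reserved, so usable channels are 1..n-1
        self.state = {ch: FREE for ch in range(1, n_channels)}

    def outgoing_call(self) -> int:
        # assumed convention: the calling DTE picks the highest free channel
        ch = max(c for c, s in self.state.items() if s == FREE)
        self.state[ch] = "calling"
        return ch

    def incoming_call(self) -> int:
        # the NODE uses the lowest-numbered channel in the ready state
        ch = min(c for c, s in self.state.items() if s == FREE)
        self.state[ch] = "called"
        return ch

    def resolve_collision(self, ch: int) -> None:
        # X.25 rule: on a call collision the outgoing call is put through
        # and the incoming call is cancelled
        self.state[ch] = "calling"
```

Picking channels from opposite ends of the range makes a collision unlikely in the first place, which is why many implementations adopt this convention.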

Figure 9.17: X.25 Control packet format (fields: 0, Group, Channel, Type, Control bit (1), Additional Information).


In addition to these Virtual Calls, X.25 also provides for Permanent Virtual Circuits (PVC). These are
analogous to leased lines in that they always connect two fixed DTEs and do not need to be set up.
Because X.25 makes a distinction between CLEAR REQUEST and CLEAR CONFIRMATION, there is
the possibility of a clear collision, (i.e., both sides decide to terminate the Virtual Circuit simultaneously).
However, because it is always obvious what is happening, there is no ambiguity, and the Virtual Circuit can
be cleared without delay.
All X.25 packets begin with a 3-byte (or 3-octet in ITU-T terminology) header as discussed next.
1. The Group and Channel fields together form a 12-bit Virtual Circuit number. Virtual Circuit 0 is
reserved for future use, so in principle, a DTE may have up to 4095 Virtual Circuits open simultaneously. The Group and Channel fields individually have no particular significance.
2. The Type field in the CALL REQUEST packet, and in all other control packets, identifies the packet
type. The Control bit is set to 1 in all control packets and to 0 in all data packets. By first inspecting
this bit the DTE can tell whether a newly arrived packet contains data or control information.
3. The remaining fields of the CALL REQUEST packet illustrated in Fig. 9.15 are, to start with, the
length of the calling and called addresses, respectively. Both addresses are encoded as decimal digits,
4 bits per digit.
4. The Facilities length field tells how many bytes worth of facilities field follows. The Facilities field
itself is used to request special features for this Virtual Circuit. The specific features available may
vary from network to network. One possible feature is reverse charging (collect calls). This facility is
especially important to organizations with thousands of remote terminals that initiate calls to a central
computer. If all terminals always request reverse charging, the organization only gets one network
phone bill instead of thousands of them. High-priority delivery is also a possibility. Yet another
feature is a simplex, instead of a full-duplex, Virtual Circuit.
The caller can also specify a maximum packet length and a window size rather than using the defaults of
128 bytes and two packets, respectively. If the callee does not like the proposed maximum packet length or
window size he may make a counterproposal in the facilities field of the CALL ACCEPTED packet. The
counterproposal may only change the original one to bring it closer to the default values, not further away.
Some facilities must be selected when the customer becomes a network subscriber rather than on a call-by-call basis. These include closed user groups (no user can call outside the group, for security reasons),


maximum window sizes smaller than seven (for terminals with limited buffer space), line speed (e.g., 2400
bps, 4800 bps, 9600 bps), and prohibition of outgoing calls or incoming calls (terminals place calls but do
not accept them).
The USER data field allows the DTE to send up to 16 bytes of data together with the CALL REQUEST
packet. The DTEs can decide for themselves what to do with this information. They might decide, for
example, to use it to indicate which process in the DTE the caller wants to be connected with. Alternatively,
it might contain a login password.
The Data or Information packet format is shown in Fig. 9.18. The Q bit indicates Qualified data and is used
for Quality of Service (QoS) purposes.
The CONTROL field is always 0 for data packets. The PIGGYBACK and SEQUENCE fields indicate the lowest
packet number expected by the sender (setting the lower edge of the flow control window) and the packet
number of the packet being sent respectively, thus establishing flow control.
Figure 9.18: X.25 data packet format (fields: Q, Modulo, Group, Channel, Piggyback, More, Sequence, Control bit, Data).


The sequence numbers are modulo 8 if the MODULO field is 01 and modulo 128 if MODULO is 10 (values
00 and 11 are illegal). If modulo 128 sequence numbers are used, the header is extended with an extra
byte to accommodate the longer SEQUENCE and PIGGYBACK fields. The meaning of the PIGGYBACK
field is determined by the setting of the D bit. If D = 0, a subsequent acknowledgement means only that the local NODE has received the packet, not that the remote user or DTE has received it. If D = 1, the acknowledgement is a true end-to-end acknowledgement, and means that the packet has been successfully delivered to the remote DTE.
The MORE field allows a DTE to indicate that a group of packets belong together. For example, to send a
long message, each packet except the last one would have the MORE bit on. Only a full packet may have
this bit set. The Service Provider is free to repackage data in different length packets if it needs to, but it
will never combine data from different messages (as indicated by the MORE BIT) into one packet.
The standard says that all carriers are required to support a maximum packet length of 128 data bytes.
However, it also allows carriers to provide optional maximum lengths of 16, 32, 64, 256, 512, and 1024
bytes. In addition, maximum packet length can be negotiated when a Virtual Circuit is set up. The point of
maximum packet lengths longer than 128 is for efficiency. The point of maximum packet lengths shorter
than 128 is to allow terminals with little buffer space to be protected against long incoming packets.

In other words, it is the equivalent of a Datagram service provided by the network
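The 3-byte data packet header described above can be decoded field by field. A sketch follows, using the usual modulo-8 X.25 bit positions; the exact placement of the D bit in the first octet is an assumption here, since Fig. 9.18 does not show it:

```python
# Hedged sketch: decoding the 3-byte (modulo-8) X.25 data packet header
# of Fig. 9.18. Bit positions follow the customary X.25 layout and may
# differ in detail from a particular implementation.

def parse_data_header(hdr: bytes) -> dict:
    assert len(hdr) >= 3
    b0, b1, b2 = hdr[0], hdr[1], hdr[2]
    return {
        "q": b0 >> 7,                        # Qualified-data bit
        "d": (b0 >> 6) & 1,                  # delivery-confirmation bit (assumed position)
        "modulo": (b0 >> 4) & 0b11,          # 01 = modulo 8, 10 = modulo 128
        "channel": ((b0 & 0x0F) << 8) | b1,  # 12-bit Group + Channel number
        "piggyback": b2 >> 5,                # lower edge of the flow control window
        "more": (b2 >> 4) & 1,               # MORE-data bit
        "sequence": (b2 >> 1) & 0b111,       # number of the packet being sent
        "control": b2 & 1,                   # 0 for data packets
    }

h = parse_data_header(bytes([0b0001_0000 | 0x2, 0x05, 0b0101_0110]))
# channel = (0x2 << 8) | 0x05 = 0x205, modulo field = 01
```

Note how the Group and Channel fields are simply concatenated into one 12-bit number, as the text states.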


The original (1976) X.25 standard was more-or-less as we have described it so far (minus the D bit, DIAGNOSTIC packet, and the packet length and window size negotiation). However, there was considerable
demand for a Datagram facility, in addition to the Virtual Circuit service. Both the United States and Japan
made (conflicting) proposals for the architecture of the Datagram service. In the great tradition of international bureaucracies, ITU-T accepted both of them, making the protocol even more complicated than it
already was. Both Datagram facilities are minor variations on the CALL REQUEST packet. In what is
referred to as the true Datagram packet (U.S. proposal), the format is the same as the CALL REQUEST
packet of Fig. 9.15, except that the third byte is replaced with a SEQUENCE field and a PIGGYBACK field,
just as in data packets. The MORE and CONTROL bits are both 0. The user data field is also expanded
to a maximum of 128 bytes, the first 2 of which are the Datagram Identification. Each Datagram is sent
independently of all other datagrams. In the facilities field of a Datagram, a request can be made for explicit
notification of delivery (or lack thereof). The notification packet, called a Datagram Service Signal packet,
then tells whether the Datagram was delivered or not, and if not delivered, why not (destination address
incorrect, network congestion, etc.). The Datagram is identified by the Datagram identification.
The flow control parameters do not have end-to-end meaning (i.e., D must be 0). Their purpose is to prevent
the DTE from trying to send datagrams faster than the NODE can accept them. Datagrams can use any
Virtual Circuit number not in use for a virtual call or permanent Virtual Circuit. The RESET REQUEST,
RESTART REQUEST, RR, RNR, and DIAGNOSTIC packets retain their normal meaning on Datagram
circuits.
In the other Datagram facility (Japanese proposal), called fast select, the CALL REQUEST packet is also
expanded to include 128 bytes of user data, but the third byte remains the way it was. Fast select is requested
as a facility. As far as the network is concerned, the packet really is an attempt to establish a Virtual Circuit.
When fast select is used, the called DTE may reject the attempted call with a CLEAR REQUEST packet,
which has also been expanded to include 128 bytes of reply data. However, it may also accept the call, in
which case the Virtual Circuit is set up normally.

9.12 Internet Network Layer Protocol (IP)


The Internet Network Layer protocol is rather confusingly known as the Internet Protocol. Confusing,
because IP is not the only Internet protocol, but merely the protocol used by the Internet at the Network
Layer. In outline, it works as follows. The Transport Layer (TCP or UDP) takes messages and breaks them
up into Datagrams of up to 64K bytes each. Each Datagram is transmitted through the Internet, possibly
being fragmented into smaller units as it goes. When all the pieces finally get to the destination machine,
they are re-assembled by the Transport Layer to reform the original message.
An IP Datagram consists of a header part and a text part. The header has a 20-byte fixed part and a variable
length optional part. The header format is illustrated in Fig. 9.19.
The Version field keeps track of which version of the protocol the Datagram belongs to. By including
the version in each Datagram, the possibility of changing protocols while the network is operating is
not excluded.
Since the header length is not constant, a field in the header, IHL, is provided to tell how long the
header is, in 32-bit words. The minimum value is 5.
The Type of service field allows the host to tell the Service Provider what kind of service it wants.
Various combinations of reliability and speed are possible. For digitized voice, fast delivery is far

Figure 9.19: The IP (Internet Protocol) header format (32 bits wide; fields: Version, IHL, Type of service, Total Length; Identification, DF, MF, Fragment offset; Time to live, Protocol, Header checksum; Source address; Destination address; Options).


more important than correcting transmission errors. For file transfer, accurate transmission is far more
important than speedy delivery. Various other combinations are also possible from routine traffic to
flash override.
The Total length includes everything in the Datagram, both header and data. The maximum length is 65,535 bytes.
The Identification field is needed to allow the destination host to determine which Datagram a newly
arrived fragment belongs to. All the fragments of a Datagram contain the same identification value.
Next comes an unused bit and then two 1-bit fields. DF stands for Don't Fragment. It is an order to the gateways not to fragment the Datagram because the destination is incapable of putting the pieces back together again. For example, a Datagram being downloaded to a small personal computer for execution might be marked with DF because the PC's ROM expects the whole program in one Datagram. If the Datagram cannot be passed through a network, it must either be routed around the network or discarded.
MF stands for More Fragments. All fragments except the last one must have this bit set. It is used as
a double check against the Total length field, to make sure that no fragments are missing and that the
whole Datagram is reassembled.
The Fragment offset tells where in the current Datagram this fragment belongs. All fragments except the last one in a Datagram must be a multiple of 8 bytes, the elementary fragment unit. Since 13 bits are provided, there is a maximum of 8192 fragments per Datagram, giving a maximum Datagram length of 65,536 bytes, one more than the Total length field allows.
The Time-to-live field is a counter used to limit packet lifetimes. When it becomes zero, the packet is
destroyed. The unit of time is the second, allowing a maximum lifetime of 255 sec.
When the network layer has assembled a complete Datagram, it needs to know what to do with it. The
Protocol field tells which of the various transport processes the Datagram belongs to. TCP is certainly
one possibility, but there may be others.
The Header checksum verifies the header only. Such a checksum is useful because the header may
change at a gateway (e.g., fragmentation may occur).
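As a sketch of what verifying the header involves: the checksum is the 16-bit one's complement of the one's-complement sum of the header taken as 16-bit words, so a header whose checksum field has been filled in re-sums to zero. This is a simplified illustration of the rule in RFC 791, not the author's code:

```python
# Sketch: the IP header checksum as a one's-complement sum of 16-bit
# words, with carries folded back in.
import struct

def internet_checksum(header: bytes) -> int:
    if len(header) % 2:
        header += b"\x00"                   # pad odd-length input
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total > 0xFFFF:                   # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Fill in the checksum of a minimal 20-byte header, then verify it:
# recomputing over the complete header must now give zero.
hdr = bytearray(20)
hdr[0] = 0x45                               # version 4, IHL = 5 words
struct.pack_into("!H", hdr, 10, internet_checksum(bytes(hdr)))
assert internet_checksum(bytes(hdr)) == 0
```

Because only the header is covered, a gateway that changes the TTL or fragments the Datagram can update the checksum cheaply without touching the data.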


The Source address and Destination address indicate the network number and host number. Four different formats are used, as illustrated in Fig. 9.20. The four schemes allow for up to 128 networks with 16 million hosts each, 16,384 networks with up to 64K hosts each, 2 million networks, presumably LANs, with up to 256 hosts each, and multicast, in which a Datagram is directed at a group of hosts. Addresses beginning with 1111 are reserved for future use.

Figure 9.20: Source and destination address formats in the IP header (0 + 7-bit network + 24-bit host; 10 + 14-bit network + 16-bit host; 110 + 21-bit network + 8-bit host; 1110 + 28-bit multicast address).
The Options field is used for security, source routing, error reporting, debugging, time stamping, and
other information. Basically it provides an escape to allow subsequent versions of the protocol to
include information not present in the original design, to allow experimenters to try out new ideas,
and to avoid allocating header bits to information that is rarely needed.
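The format of an address can be determined from the leading bits of its first octet. The following sketch labels the four schemes of Fig. 9.20 with the conventional class letters (A, B, C), a naming the text above does not itself use:

```python
# Sketch: selecting the address format from the leading bits of the
# first octet, per the layouts of Fig. 9.20.

def address_class(first_octet: int) -> str:
    if first_octet >> 7 == 0b0:
        return "A"            # 0 + 7-bit network + 24-bit host
    if first_octet >> 6 == 0b10:
        return "B"            # 10 + 14-bit network + 16-bit host
    if first_octet >> 5 == 0b110:
        return "C"            # 110 + 21-bit network + 8-bit host
    if first_octet >> 4 == 0b1110:
        return "multicast"    # 1110 + 28-bit multicast address
    return "reserved"         # 1111... reserved for future use

assert address_class(10) == "A"
assert address_class(0b11100000) == "multicast"
```

Since the tests are ordered from the shortest prefix to the longest, each address matches exactly one format.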
The operation of the Internet is monitored closely by the NODEs and gateways. When something suspicious
occurs, the event is reported by the ICMP (Internet Control Message Protocol), which is also used to test
the Internet. About a dozen types of ICMP messages are defined. Each message type is encapsulated in an
IP packet.
The DESTINATION UNREACHABLE message is used when the Service Provider or a gateway cannot locate the destination, or a packet with the DF bit set cannot be delivered because a small-packet network stands in the way.
The TIME EXCEEDED message is sent when a packet is dropped due to its counter reaching zero. This
event is a symptom that packets are looping, that there is enormous congestion, or that the timer values are
being set too low.
The PARAMETER PROBLEM message indicates that an illegal value has been detected in a header field. This problem indicates a bug in the sending host's IP software, or possibly in the software of a gateway transited.
The SOURCE QUENCH message is used to throttle hosts that are sending too many packets. When a host
receives this message, it is expected to slow down.
The REDIRECT message is used when a gateway notices that a packet seems to be routed wrong. For
example, if a gateway in Los Angeles sees a packet that came from New York and is headed for Boston, this
message would be used to report the event and help get the routing straightened out.


The ECHO REQUEST and ECHO REPLY messages are used to see if a given destination is reachable and alive. Upon receiving the ECHO REQUEST message, the destination is expected to send an ECHO REPLY message back. The TIMESTAMP REQUEST and TIMESTAMP REPLY messages are similar, except that the arrival time of the message and the departure time of the reply are recorded in the reply. This facility is used to measure network performance.
In addition to these messages, there are four others that deal with internet addressing, to allow hosts to
discover their network numbers and to handle the case of multiple LANs sharing a single IP address.
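For illustration, the message types above can be tabulated by the type values conventionally assigned to them (values taken from RFC 792, not from the text), together with a minimal ECHO REQUEST body. The checksum is left zero here for brevity; a real sender must fill in the Internet checksum over the whole ICMP message before encapsulating it in an IP packet.

```python
# Sketch: ICMP message types (RFC 792 values) and a minimal ECHO
# REQUEST body. The checksum field is deliberately left zero.
import struct

ICMP_TYPES = {
    0: "ECHO REPLY",
    3: "DESTINATION UNREACHABLE",
    4: "SOURCE QUENCH",
    5: "REDIRECT",
    8: "ECHO REQUEST",
    11: "TIME EXCEEDED",
    12: "PARAMETER PROBLEM",
    13: "TIMESTAMP REQUEST",
    14: "TIMESTAMP REPLY",
}

def echo_request(identifier: int, sequence: int, payload: bytes = b"") -> bytes:
    # type (8), code (0), checksum (0 placeholder), identifier, sequence
    return struct.pack("!BBHHH", 8, 0, 0, identifier, sequence) + payload

msg = echo_request(identifier=1, sequence=1, payload=b"ping")
assert ICMP_TYPES[msg[0]] == "ECHO REQUEST"
```

This is the exchange that the familiar ping utility performs: an ECHO REQUEST out, an ECHO REPLY back.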

9.13 Exercises

Exercise 9.1 Explain where the ATM data rate of 155 Mbps comes from.
Exercise 9.2 What factors affect the quality of an ATM network service?
Exercise 9.3 How does ATM cater for multimedia?
Exercise 9.4 What services are provided by the Network Layer?
Exercise 9.5 Contrast a Virtual Circuit and a Datagram service. Include in the comparison the pros and cons.

Figure 9.21: Virtual circuit configuration.

Exercise 9.6 Show the NODE tables at nodes A, C, G and H in Fig. 9.21 after the following operations:
A establishes a Virtual Circuit with C.
B establishes a Virtual Circuit with H.
F establishes a Virtual Circuit with C


The Virtual Circuit from A to C ends successfully.


D establishes a Virtual Circuit with A.
C establishes a Virtual Circuit with A.
H establishes a Virtual Circuit with F.
The Virtual Circuit F established with C times out.
F establishes a Virtual Circuit with B.
E establishes a Virtual Circuit with B.
A establishes a Virtual Circuit with G.
What would happen if E were to go down? How could this be avoided?
Exercise 9.7 How can routing algorithms be classified?
Exercise 9.8 When referring to routing algorithms, we talk of decision place, decision time and information
source. Explain how these characteristics differ between the two classes of routing algorithms.
Exercise 9.9 Explain the difference between congestion and deadlock.
Exercise 9.10 If you have flow control why do you need deadlock prevention?
Exercise 9.11 What is deadlock and how can it be prevented?
Exercise 9.12 What are the three layers in the X.25 protocol? What is the purpose of each layer? How do
these layers relate to the OSI model?
Exercise 9.13 True or False : If a network is using the X.25 protocol, then its Service Provider uses virtual
circuits internally. Motivate your answer.
Exercise 9.14 What is the purpose of the Time-to-live field in an IP packet?


Part IV

Advanced Topics


Chapter 10

Advanced Network Technologies


10.1 Objectives of this Chapter
In this chapter we discuss two network technologies which were proposed for the interconnection of Local
Networks spread over a campus or metropolitan area. Not surprisingly, they are called Metropolitan Area Network technologies. These technologies were first proposed about 15 years ago. Since then the advent of ATM and WDM optical networks has perhaps rendered them somewhat obsolete, but they are nevertheless
worth studying for their unique technology.
1. Fibre Distributed Data Interface (FDDI).
2. Distributed Queue Dual Bus (DQDB).


In this chapter we discuss various network technologies which have been developed for connecting Local
(Area) Networks (LANs) on a campus or in a Metropolitan Area. Such technologies have therefore also become known as Metropolitan Area Networks (MAN) although the techniques discussed are not exclusively
for this purpose.

10.2 Fibre Distributed Data Interface


The Token Ring network described in Section 8.7.1 is intended for use primarily in office environments in
which the total network traffic is modest. This is the case, for example, if the network is being used by,
say, a distributed community of personal workstations to access such functions as electronic mail or shared
devices, such as printers or backup file servers. As the level of sophistication of the workstations increases,
however, there is a need for a network that can transmit large quantities of data at rates well in excess of
4Mbps, the rate of the basic token ring. For example, to transfer large quantities of data from, say, a graphics
server or from a file server to a number of diskless stations. To meet such needs, an alternative type of token
ring network, which operates at a data rate of 100Mbps, has been introduced. This network uses an optical
fibre as the transmission medium and is known as the Fibre Distributed Data Interface (FDDI). Although
this type of network has many features in common with the basic Token Ring network (for example, it uses dual, counter-rotating rings to obtain enhanced reliability), it also has some differences. These are primarily in the transmit and receive electronics associated with the media access control (MAC) unit, as will now be described.
In terms of the physical structure, the FDDI network is capable of a total of 1000 physical connections along
a total fibre path of 200km, but as the network is connected via a pair of counter-rotating rings, the actual
limit is 500 stations (2 connections per station) along a 100km network [Ros86]. Two types of traffic are
defined for the protocol: synchronous and asynchronous transmission. Synchronous transmission is used
where stations require dedicated bandwidth and access to the network, for example transmission of time-critical information. Asynchronous transmission is used for less important information, and is only sent
provided all synchronous transmission needs have been met, and there is sufficient time remaining for the
station to transmit the data before it is required to pass on the token.
The FDDI network uses a Timed Token Ring Protocol, where each station in the ring is only allowed
to interact with the network when it has the token in its possession. This protocol provides for dynamic
adjustment of the load offered to the ring [Joh87], and ensures that no single station can overload the network
and starve other stations.
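The essence of the timed-token rule can be sketched as follows: a station may send asynchronous traffic only for as long as the token has arrived early, that is, while the measured token rotation time is still below the negotiated target (TTRT). The TTRT value and the single-timer simplification below are illustrative only; the real protocol, as specified in the FDDI MAC standard, uses separate timers and late counters.

```python
# Hedged sketch of the timed-token rule. The TTRT value is illustrative;
# real stations negotiate it at ring initialization.
TTRT_MS = 8.0   # Target Token Rotation Time, in milliseconds

def asynchronous_allowance(trt_ms: float) -> float:
    """Time available for asynchronous frames on this token visit (ms).

    trt_ms is the token rotation time the station has just measured.
    Synchronous traffic is not limited by this allowance.
    """
    return max(0.0, TTRT_MS - trt_ms)

assert asynchronous_allowance(5.0) == 3.0   # token early: may transmit
assert asynchronous_allowance(9.5) == 0.0   # token late: pass the token on
```

A heavily loaded ring makes the token rotate slowly, so every station's allowance shrinks; this is the mechanism that prevents any one station from starving the others.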

10.2.1 FDDI Layered Structure


Fig. 10.1 shows the component parts of the FDDI specification as they relate to the OSI network layer model.
The components of the model are:
Station Management (SMT) which specifies the local section of the network management application
process. This includes the control required for correct internal configuration of the station.
Media Access Control (MAC) specifies the lower sublayer of the Link Layer. This includes access to
the optical fibre ring, addressing, data checking and data framing. The MAC provides a superset of
the services required by the Logical Link Control (LLC) protocol as developed by the IEEE 802.2
standard.


Physical Layer Protocol (PHY) specifies the upper sublayer of the Physical Layer, including the encoding and decoding of data, signal clocking and framing for data transmission.
Physical Layer Medium Dependent (PMD) defines the specifications for the power levels and the
characteristics of the optical transmitter and receiver. An alternative version called Single Mode Fibre
PMD (SMF-PMD) defines the use of single mode optical fibre.

Figure 10.1: FDDI relationship to OSI model (Data Link Layer: LLC over MAC; Physical Layer: PHY over PMD, or SMF-PMD; SMT spans both the MAC and the Physical Layer).

Station Management (SMT), as can be seen from Figure 10.1, covers both the MAC layer and the Physical Layer of the network. The functions of SMT include control and management within a station, activation and maintenance of the station, and error control. SMT entities communicate with neighbouring stations' SMT entities on the network for purposes of controlling network operations such as addressing, bandwidth allocations and configuration control.
The Connection Management (CMT) controls the interconnection between the PHY and the MAC
layers, as well as the logical connection between adjacent stations. The CMT layer establishes a
connection with neighbouring stations and determines if they are linked to the ring as active stations,
or are just forwarding data to the next station.
Figure 10.2 shows an example of the topology of a FDDI network. As can be seen, the trunk ring consists of
two counter-rotating rings. Data is transferred along the primary link, and the secondary is used as backup,
or in some cases as an extra transmission line.
Stations may be classified into two main classes: stations that may connect directly to the trunk ring, and
stations that may not. Stations that have two PHY entities and at least one MAC may attach directly to the
ring, or indirectly via a concentrator. Stations that have only one PHY entity are allowed to connect to the
ring, but may do so only via a concentrator.

Figure 10.2: Typical FDDI topology (a dual, counter-rotating trunk ring connecting stations 1 to 6; M = MAC layer, P = PHY & PMD layer).


A concentrator is a special station that has extra PHYs available specifically for other stations to use to connect to the network. In Fig. 10.2, stations 1, 2 and 4 are dual attachment stations. Station 3 is a concentrator
connecting stations 4, 5 and 6 to the FDDI ring.
The ring topology lends itself to the isolation and recovery of failing stations. A failure can occur in one
of two ways: either the optical fibre link has failed, or a station fails. If a station were to fail, then either an optical bypass switch (see Figure 10.3) would retransmit the data without involving the failed MAC and PHY layers, or, if this is not possible, the stations on either side of the disabled link would reconfigure as before, and resume transmission along the secondary line.

10.2.2 MAC Functional Operation


A major function of any station is deciding which station has control of the medium. The MAC layer
schedules and performs data transfers on the ring.
The basic concept of the Token Ring network is that each station repeats the frames it receives to its downstream neighbour. If the Destination Address of the frame is the same as the MAC's address, then the frame is copied into a local buffer and the MAC notifies the Link Layer (or Station Management) of the frame's arrival. The MAC then alters the frame status (FS) symbol to indicate the recognition of its address, and that it copied the frame. If an error is detected in the frame, then the MAC will also alter the FS symbol.
A frame will propagate around the ring until it arrives back at the station that originally placed it on the ring.
The MAC will recognize that it transmitted the frame by looking at the Source Address (which will be the
same as its own address). It will then read the FS symbol and determine if the transmission was successful.
If the MAC wants to transmit data, then it may only do so once it has captured the token. A Timed Token
Rotation Protocol (TTR) (discussed later) determines if the station is then allowed to use the token to transmit data, or if it must release the token back to the ring and wait for its arrival the next time round. If the
station is allowed to place data on the ring, then the token is not forwarded, but kept until the station has finished
all its transmission requirements, or is forced to release the token due to timing constraints imposed by the
TTR. Once the station has finished transmission, the token is immediately released and placed on the ring.
It is the responsibility of the MAC to remove all frames that it placed on the ring (a process called stripping).
While stripping, IDLE symbols are placed on the ring. As the MAC layer transmits the data as soon as it
arrives, the header of the information frame will already be placed on the ring before the MAC layer realizes
that it is the originator of the frame. At any given time, the ring may contain the current frame/token being
transmitted, as well as the remnants of stripped frames.
This does not cause any problems for the network, as the partially stripped frames do not contain an Ending
Delimiter (ED) symbol, and therefore will not be recognized as a valid frame. The stripped frames will
eventually arrive at a station that is in the process of placing its own data on the ring, and at this point the
incoming data are discarded.
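The repeat/copy/strip decision described above can be sketched as follows (station addresses and the returned action names are illustrative):

```python
# Sketch of the per-frame decision an FDDI MAC makes: strip its own
# frames, repeat all others, and additionally copy frames addressed
# to it. Addresses are placeholder strings.

def handle_frame(my_address: str, source: str, destination: str) -> list:
    actions = []
    if source == my_address:
        actions.append("strip")      # remove own frame from the ring
    else:
        actions.append("repeat")     # forward to the downstream neighbour
        if destination == my_address:
            actions.append("copy")   # copy to local buffer and set FS
    return actions

assert handle_frame("A", "B", "A") == ["repeat", "copy"]
assert handle_frame("A", "A", "C") == ["strip"]
```

Note that "copy" accompanies "repeat": the destination does not remove the frame, it merely copies it and marks the FS symbol, leaving removal to the originator.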

10.2.3 FDDI Frame and Token Formats


Tokens are used to signify the right for the station to transmit data, while frames are used to transmit the
data around the ring. The structure of the frame and token can be seen in Table 10.1.
Token: PA | SD | FC | ED
Frame: PA | SD | FC | DA | SA | INFORMATION | FCS | ED | FS

PA  = Preamble (16 or more symbols)
SD  = Starting Delimiter (2 symbols)
FC  = Frame Control (2 symbols)
DA  = Destination Address (4 or 12 symbols)
SA  = Source Address (4 or 12 symbols)
FCS = Frame Check Sequence (8 symbols)
ED  = Ending Delimiter (1 or 2 symbols)
FS  = Frame Status (3 symbols)

Table 10.1: FDDI Frame and Token format


In general, the meaning and use of each field is the same as with the basic token ring but, because of the
use of symbols rather than bits, there are some differences in the structure of each field. Some of these
differences will now be explained.
The preamble (PA) field comprises 16 or more IDLE symbols which, since each consists of five 1 bits, cause the line signal to change at the maximum frequency. This is thus used for establishing (and maintaining) clock synchronization at the receiver.
The starting delimiter (SD) field consists of two control symbols (J and K) which enable the receiver to
interpret the following frame contents on the correct symbol boundaries.
The frame control (FC) field is a two-symbol field that defines the type of frame. It distinguishes between
synchronous and asynchronous frames as well as the length of the source address (SA) and destination
address (DA) fields. The FC also defines two types of tokens, restricted and non-restricted.


CHAPTER 10. ADVANCED NETWORK TECHNOLOGIES

The (decoded) information field in the data frames can be up to 4500 octets with FDDI.
The source address (SA) and the destination address (DA) may be either 16 or 48 bits in length (as defined by the FC field). The DA may be a group address, in which case the frame can be picked up by more than one station along the network.
The frame check sequence (FCS) field is a cyclic redundancy check using the standard 32-bit ANSI polynomial, and covers the FC, DA, SA and information fields.
The end delimiter (ED) field contains one or two control symbols (T).
Finally, the frame status (FS) field, although it has a similar function to the FS field in the basic ring, consists of three symbols that are combinations of the two control symbols R and S. These symbols are used to indicate whether errors were detected in transmission, whether the destination station recognized the frame, and whether the frame was acknowledged by the receiving station.
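The field sizes in Table 10.1 can be tallied in a few lines of Python (a sketch; the long 48-bit, i.e. 12-symbol, address form and the 2-symbol ED used in a token are assumed):

```python
# Sizes of the FDDI token and frame fields of Table 10.1, in symbols.
# One symbol encodes 4 data bits and is transmitted as 5 code bits.
TOKEN_FIELDS = {"PA": 16, "SD": 2, "FC": 2, "ED": 2}
FRAME_OVERHEAD = {"PA": 16, "SD": 2, "FC": 2, "DA": 12, "SA": 12,
                  "FCS": 8, "ED": 1, "FS": 3}   # long (48-bit) addresses

token_symbols = sum(TOKEN_FIELDS.values())
overhead_symbols = sum(FRAME_OVERHEAD.values())
print(token_symbols)      # 22 symbols per token
print(overhead_symbols)   # 56 symbols of overhead per long-address frame
# On the medium each symbol occupies 5 code bits, so a token is
# 22 * 5 = 110 code bits.
```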

10.2.4 Data Encoding


All data to be transmitted are first encoded, prior to transmission, using a 4-of-5 group code. This means that for each four bits of data to be transmitted, a corresponding five-bit code word, or symbol, is generated by the 4B/5B encoder. The 5-bit symbols corresponding to each of the 16 possible four-bit data groups are shown in Table 10.2. Nine further symbols are used as control symbols for ring initialization and data transfer. The remaining symbols (not shown) are undefined and are treated as code violations. Each symbol contains at most two consecutive zero bits.

10.2.5 Physical Layer (PHY and PMD) Operation


The physical layer simultaneously receives and transmits data along the ring. The transmitter accepts symbols from the MAC, converts them into 5-bit code groups, and transmits the encoded data when the station holds the token.
The receiver recovers the encoded information from the ring, establishes the data boundaries based on the starting delimiter, and decodes the information before forwarding it to the MAC layer.
The PHY layer provides the bit clocks for each station. Due to jitter, voltage differences and ageing, the clock signals of different stations may differ slightly. If each station were to retransmit the data at the clock rate at which it was received, the entire ring could become progressively slower. To overcome this problem, a variable-frequency clock is used to receive the incoming data (see Figure 10.3), and a fixed-frequency clock to retransmit it. Since data may be received slower (or faster) than it is retransmitted, a buffer is needed to store the incoming data prior to retransmission. This is called the elasticity buffer.
The transmitter clock is specified by the FDDI standard to have at least a 0.005 percent stability factor, i.e. it will deviate by no more than 0.005 percent from the correct clock rate. With this in mind, an elasticity buffer of at most 10 bits is required for frames of up to 4500 octets in length to be transmitted correctly; this is the reason for the maximum frame length.
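The 10-bit figure can be checked with a little arithmetic, using the frame length and clock tolerance quoted above:

```python
# Elasticity-buffer sizing for FDDI (sketch based on the figures in the text).
MAX_FRAME_OCTETS = 4500                       # maximum FDDI frame length
CODE_BITS = MAX_FRAME_OCTETS * 8 * 5 // 4     # 4B/5B: 5 code bits per 4 data bits
CLOCK_STABILITY = 0.00005                     # 0.005 percent, per clock

# Worst case: one clock fast by 0.005% and the other slow by 0.005%,
# so the two clocks may differ by up to 0.01%.
max_drift_bits = CODE_BITS * 2 * CLOCK_STABILITY

print(CODE_BITS)        # 45000 code bits in a maximum-length frame
print(max_drift_bits)   # 4.5 bits of worst-case slippage over one frame
# A 10-bit elasticity buffer therefore absorbs the drift with room to spare.
```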


Control symbols:
    QUIET = 00000    IDLE = 11111    HALT = 00100
    J     = 11000    K    = 10001    L    = 00101
    T     = 01101    R    = 00111    S    = 11001

Data symbols (4-bit data group -> 5-bit symbol):
    0000 -> 11110    0001 -> 01001    0010 -> 10100    0011 -> 10101
    0100 -> 01010    0101 -> 01011    0110 -> 01110    0111 -> 01111
    1000 -> 10010    1001 -> 10011    1010 -> 10110    1011 -> 10111
    1100 -> 11010    1101 -> 11011    1110 -> 11100    1111 -> 11101
Table 10.2: FDDI 4B/5B codes
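The data-symbol half of Table 10.2 translates directly into a small encoder (a sketch; transmitting the high nibble of each byte first is my assumption):

```python
# 4B/5B encoding table from Table 10.2 (data symbols only).
FDDI_4B5B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode_4b5b(data: bytes) -> str:
    """Encode a byte string into 4B/5B code bits, high nibble first."""
    out = []
    for byte in data:
        out.append(FDDI_4B5B[byte >> 4])     # high 4-bit group
        out.append(FDDI_4B5B[byte & 0x0F])   # low 4-bit group
    return "".join(out)

# Every data symbol contains at most two consecutive zero bits:
assert all("000" not in s for s in FDDI_4B5B.values())
print(encode_4b5b(b"\x00"))  # '1111011110'
```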

[Figure omitted: the PHY and PMD components of a station -- 4-bit data paths to and from the MAC, the 4B/5B encoder driven by a 125 MHz local clock, the decoder fed via clock recovery and the elasticity buffer, NRZI code-bits crossing the PHY/PMD boundary, the optical transmitter and receiver, the bypass switch, and the outbound and inbound fibres at the station bulkhead.]

Figure 10.3: FDDI Physical Layer

10.2.6 Timed Token Rotation (TTR) Protocol


The priority operation of an FDDI ring uses the same principle as that used with a token bus network. The priority control algorithm works by only allowing lower priority frames to be transmitted if the measured TRT (Token Rotation Time) is lower than a preset maximum limit for each priority class.
The FDDI priority mechanism is designed to provide different classes of service that enable the network to support a wide variety of transmission requirements. The highest priority class is called synchronous traffic, and is guaranteed network bandwidth and access time; this service is suitable for applications with real-time transmission requirements. All other traffic on the network is called asynchronous traffic, and is only permitted once all synchronous transmission requirements of the station have been met [Joh87].

10.2.7 Ring Initialization


During initialization of the network, each station negotiates how often it requires transmission access to the network. The lowest bid time is then assigned as the Target Token Rotation Time (TTRT). This value is assigned to the operational parameter T_Opr, and specifies the expected time between token arrivals at a station.
Once a value for T_Opr has been set, the Station Holding Time (SHT) is defined. This specifies the portion of T_Opr for which a station is allowed to hold the token and transmit synchronous data. The sum of all the SHT values in the network must be less than T_Opr, i.e. SHT_1 + SHT_2 + ... + SHT_N < T_Opr. If either of these parameters becomes undefined, or is altered independently of the other stations in the ring, the overall network timing will be affected and the entire network will initiate error recovery.
Further parameters defined during network initialization are the Token Rotation Time (TRT), the Late Count (Late_Ct) and the Token Holding Time (THT). The TRT is a timer that keeps track of the time since a token was last received by the station. It thus includes the time taken to transmit any waiting frames by this DTE and also the time taken by all other stations in the ring for this rotation of the token. Clearly, therefore, if the ring is lightly loaded, the TRT is short. As the loading of the ring increases, so the TRT measured by each DTE increases. The TRT is thus a measure of the total ring loading. Late_Ct is the number of times the TRT has exceeded T_Opr. The Token Holding Time (THT) is a timer that is set only during asynchronous transmission. We will assume that all timers start at some positive value and count down to zero.


10.2.8 Data Transmission


Transmission in an FDDI network is only allowed if the station has captured the token. To prevent any single station from continuously placing data on the ring, a station is not allowed to exceed a transmission time of SHT for synchronous data and THT for asynchronous data, i.e. a station may not hold the token for longer than SHT + THT. An interesting note is that if SHT_1 + ... + SHT_N approaches T_Opr, then it is possible for all asynchronous data transmission to be throttled back, or even cut off [Bau92].

Each station keeps a timer TRT. The station expects to see the token before this timer has elapsed. Each time the TRT expires, Late_Ct is incremented and the timer is reset. If the token arrives before the timer expires, then both synchronous and asynchronous traffic is allowed. If, however, the timer has expired once and the token is late, only synchronous traffic is allowed. If Late_Ct reaches 2, then error recovery is initiated.

Asynchronous data will only be sent if the token is running ahead of schedule and arrived before the TRT expired. Unlike synchronous transmission, where data is placed on the ring only if the transmit time will not exceed the SHT, asynchronous transmission is allowed to continue until the THT timer expires, at which point the station finishes transmission of the current frame and then immediately releases the token.

Asynchronous traffic is subdivided into 8 priority levels (1-8), with the higher levels having the higher priority [Dyk88]. If the station is able to place asynchronous data on the ring, it will always place the higher priority frames on the ring first.
To prevent starvation of the lower priority frames, each priority level i is assigned a threshold T_Prio(i) that lies between 0 and T_Opr, with the higher priority levels having the larger values. When it is time to transmit data frames, the threshold T_Prio(i) of the frame's priority level is compared against the THT timer. If THT > T_Prio(i), the frame is transmitted; otherwise the next lower priority level is tried using the same method. When the THT expires, transmission of the current frame is completed and the token is released to the network.
Figure 10.4 shows the basic operation of the FDDI transmission protocol, which can be summarized as follows.

If the token is on time, i.e. Late_Ct = 0, then the current value of TRT is saved in THT, and TRT is reset to T_Opr. As the token arrived on time, both synchronous and asynchronous data may be placed on the ring.

1. The THT timer is disabled. THT now holds the time that remained until the TRT would have expired had the station not received the token. This indicates that the token was running ahead of schedule, and that this time may be used to transmit asynchronous data.


[Flow chart omitted. On token arrival: if Late_Ct = 0, set THT := TRT and TRT := T_Opr; otherwise, if Late_Ct = 2, initiate ring recovery action, else set Late_Ct := 0 and disable the THT. In either case, while SHT > Frame_Time and there are synchronous frames to send, transmit a synchronous frame and decrease SHT. If the token was on time, then enable the THT and, for i from 8 down to 1, transmit frames of priority i while THT > T_Prio(i) and THT > 0. Finally, release the token. Note: every time the TRT expires, Late_Ct is incremented.]

Figure 10.4: The FDDI Timed Token protocol


2. The SHT timer is enabled. This is the time that the station is allocated for its synchronous transmission.
3. While SHT > Frame_Time, synchronous data is placed on the network. Frame_Time is the time needed to transmit the current data frame. As the station is not allowed to exceed the SHT, it may not transmit a frame that would take longer than the time remaining.
4. The THT timer is enabled. Asynchronous transmission may now occur.
5. Set the priority counter i to the highest priority level.
6. While THT > 0, if THT > T_Prio(i) then place the frame on the ring; otherwise move the priority level down one and repeat this step.

If the token is late, i.e. Late_Ct > 0, the TRT is not reset. In this case the token was late to arrive at the station, so only synchronous data will be placed on the ring.

1. If Late_Ct = 2, then initiate error recovery.
2. Reset Late_Ct to zero.
3. The SHT timer is enabled.
4. While SHT > Frame_Time, synchronous data is placed on the network. Frame_Time is the time needed to transmit the current data frame.

10.2.9 FDDI Token Timing


For synchronous transmission only, the total rotation time of the token is given by

    TRT = SHT_1 + SHT_2 + ... + SHT_N ,                          (10.1)

where SHT_i is the transmission time of station i's synchronous data. As the sum of the SHT values is less than T_Opr for a valid configuration, we can state that TRT < T_Opr. This states that for synchronous transmission, the token will always arrive at a station within a T_Opr time period. For asynchronous transmission, a further complication arises with asynchronous overrun. We can prove that, with the algorithm as described, there is a guaranteed maximum Token Rotation Time of 2 T_Opr.

Assume that the ring is operational and note the location of the token at the beginning and end of a T_Opr period of time. If the token has rotated all the way around, then an upper bound on asynchronous transmission is clearly satisfied. If the token has not rotated all the way around the ring in this time, then the TRT timer at every station that has not yet received the token has expired, and asynchronous transmission will not occur at these stations. The only station that would still be allowed to transmit asynchronous data is the station currently in possession of the token.

Suppose that the THT was set to T_Opr - t when that station received the token. This means that at most t of the designated time period had elapsed before the station received the token, and also that the station would be allowed to transmit asynchronous data for a time period not exceeding (T_Opr - t) + F_max, where F_max is an upper bound on the time period to transmit a single frame.

Thus the upper bound for asynchronous transmission [Joh87] would be

    T_async <= T_Opr + F_max .                                   (10.2)


Since asynchronous transmission may exceed T_Opr by up to F_max, the maximum amount of synchronous transmission must be decreased by the same amount in order for the protocol to work properly [Joh87]. Therefore the synchronous rotation time is bounded above by T_Opr - F_max. The total transmission time for synchronous and asynchronous data is then given by

    TRT_max = (T_Opr - F_max) + (T_Opr + F_max) = 2 T_Opr .      (10.3)

Therefore we have shown that the token is guaranteed to return to a station within a 2 T_Opr period of time.
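The quantities in Equations (10.1) to (10.3) can be checked numerically, taking an illustrative T_Opr of 100 ms and the 100 Mbps FDDI data rate:

```python
# Numeric check of the 2*T_Opr bound of equation (10.3), illustrative values.
T_OPR = 100.0                           # negotiated TTRT, in milliseconds
LINE_RATE = 100e6                       # FDDI data rate, bits per second
F_MAX = 4500 * 8 / LINE_RATE * 1000     # ms to send a maximum-length frame

sync_bound  = T_OPR - F_MAX             # bound on synchronous rotation time
async_bound = T_OPR + F_MAX             # bound on asynchronous transmission
print(F_MAX)                            # 0.36 ms per maximum-length frame
print(sync_bound + async_bound)         # 200.0 ms, i.e. 2 * T_Opr
```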

10.2.10 Token Rotation Time Sequence Diagram


The amount of time that a station may hold the token and transmit frames is governed by its TTRT, its synchronous bandwidth allocation, and Late_Ct. Figure 10.5 illustrates an example of this operation by
[Figure omitted: a plot of the Token-Holding and Token-Rotation timers (0-100 ms) and the Late Count against time (0-260 ms), with significant events marked A to F:
(A) Token arrives at the station; no data to transmit and the token is passed to the next station.
(B) Token captured; synchronous transmission begins.
(C) Synchronous transmission complete; asynchronous transmission begins.
(D) No more time; asynchronous transmission ends; token issued.
(E) TRT timer expires; Late Counter (LC) set.
(F) Token returns; LC cleared; TRT accumulates lateness.]

Figure 10.5: FDDI token rotation timing diagram


displaying its values of TRT, THT and Late_Ct for a particular station on an FDDI ring. In this example, the negotiated target rotation time is 100 milliseconds and the synchronous bandwidth allocation is limited to 30 ms. Significant events are indicated along the time axis with the letters A to F.

Event A marks the arrival of a token. The token is not captured because there is no data to transmit. The TRT is set to 100 ms.

Event B The token arrives back at the originating station 40 ms earlier than the target time. The 40 ms is remembered by setting THT to the TRT remainder. The TRT is reset and begins counting. The station begins to send its 30 ms allotment of synchronous traffic.

Event C After 30 ms, the station has consumed its synchronous allocation. It has asynchronous data to transmit, so it enables THT and begins transmitting.

Event D THT expires, and the station must cease transmission of asynchronous frames. The station issues a token.

Event E TRT expires. The station increments Late_Ct to 1 and resets TRT to 100 ms.

Event F The token arrives. Since Late_Ct is 1, the token is late and no asynchronous data may be transmitted. At this point the station also has no synchronous data to transmit, so Late_Ct is reset to 0 and the token is allowed to go by.

Next we describe another technique, called Distributed Queue Dual Bus (DQDB), originally proposed for connecting networks in a Metropolitan Area Network. It is worth cautioning, though, that with the advent of ATM and WDM fibre networks, techniques such as FDDI and DQDB, which were never very popular anyway, are likely to disappear. DQDB is nevertheless an ingenious technique, which we shall describe next.

10.3 Distributed Queue Dual Bus (DQDB) Networks


Distributed Queue Dual Bus (DQDB) has emerged as the Metropolitan Area Network (MAN) standard for interconnecting hosts, LANs, servers and workstations. Hence, DQDB is the focus of much interest. This section discusses a number of potential problems that have been identified, and a number of add-on strategies that have been proposed to deal with them.

10.3.1 Background
The main objective of MANs is to provide interconnection of LANs. Thus, they typically use very high data
rates, cover large distances, and are capable of supporting data, voice, and video communication services.
MANs assumed such growing importance that the IEEE began standardising them under Project 802.6. In
November 1987, a consensus emerged within the IEEE Standards Committee to use the DQDB subnetwork
for the IEEE 802.6 MAN. The IEEE MAN consists of a number of interconnected DQDB (sub)networks.
Up to now, several modifications and protocol enhancements have been made. The network structure proposed as the IEEE 802.6 MAN standard was based on a slotted bus topology. The first proposal with this
topology was submitted by Telecom Australia and was originally called Queue Packet and Synchronous
Circuit Exchange (QPSX) which tried to utilize all the channel bandwidth, but it was found to be unfair in
overload situations. So, a new concept called bandwidth balancing was incorporated into the IEEE 802.6
standard to address the fairness problem. This DQDB subnetwork standard was adopted by ANSI.


10.3.2 DQDB Architecture


The basic DQDB bus architecture is shown in Figure 10.6.
[Figure omitted: Bus A (Forward Bus) runs from its slot generator (HOB) past Stations 1 to N to the sink of Bus A; Bus B (Reverse Bus) runs in the opposite direction, from its own slot generator (HOB) to its sink. Data segments travel on Bus A and requests on Bus B.]

Figure 10.6: DQDB architecture


A pair of unidirectional buses which carry information in opposite directions, is taken as the basis for the
architecture of the DQDB network. Each bus is independent of the other in the transfer of traffic. These
buses provide the capability for full duplex communication between any pair of stations.
To support services across a metropolitan area, a single DQDB network may range from a few kilometres to more than 50 km in extent. It works at 155.520 Mbps and in theory has neither a size limit nor a maximum number of connections. In practice, however, up to 1000 stations can be attached to one network.
The station at the leading end of the forward bus is designated as the head of the bus (HOB). The slot generator at the HOB generates frames, divided into equal-sized slots, every 125 µs. The end station terminates the forward bus, removing all incoming slots, and generates the same slot pattern at the same transmission rate on the opposite bus.
The topology can be designed as an open configuration or a closed configuration. The dual buses can be
looped or open-ended. In the looped bus the head and tail of each bus are co-located but not interconnected.
Looping allows the reconfiguration of the bus system in the case of bus failures. Head and tail will then
be interconnected and new head and tail points will be generated close to the location of the failure. Each
station can act as a head or tail. Due to this reconfiguration mechanism the looped bus configuration seems
preferable. See Fig. 10.7.
Each DQDB slot is 53 octets long with 5 octets designated as the header of the slot, while the remaining
48 octets are used for information transmission. The size of the slot is identical to that of an Asynchronous
Transfer Mode (ATM) cell.
The first octet in a slot constitutes the Access Control Field (ACF), and it is utilized by the stations in the
network to co-ordinate their transmissions. See Fig. 10.8. Each ACF contains a busy bit and a request field
of three bits, besides other information. Only one bit of the request field is currently used in the DQDB
network and this is called the REQUEST bit. The BUSY bit indicates whether or not the corresponding
slot is carrying any information. The REQUEST bit is used for the reservation of bandwidth for segment
transmissions. If a station wants to transmit a segment on Bus A, it must set the (unset) REQUEST bit of a
slot on Bus B.


Head of Bus A

Head of Bus B

Bus A
Bus B

Head of Bus A

Head of Bus B

Bus A

Bus B

Figure 10.7: Open (top) and Closed (bottom) DQDB architecture

[Figure omitted: slot layout. The one-octet Access Control Field (ACF) contains, in order, BUSY (1 bit), SL_TYPE (1 bit), PSR (1 bit), Reserved (2 bits) and REQ (3 bits); it is followed by the Segment (Protocol Data Unit) of 52 octets.]
Figure 10.8: DQDB slot format
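The ACF can be packed and unpacked with ordinary bit operations. The field order and widths come from Fig. 10.8; packing most-significant-bit first is my assumption, since the figure shows only order and widths:

```python
# Build and parse the DQDB Access Control Field (ACF) of Fig. 10.8.
# Assumed bit layout, MSB first: BUSY | SL_TYPE | PSR | Reserved(2) | REQ(3).

def make_acf(busy: int, sl_type: int, psr: int, req: int) -> int:
    assert busy in (0, 1) and sl_type in (0, 1) and psr in (0, 1)
    assert 0 <= req <= 7        # three request bits (only one used by DQDB)
    return (busy << 7) | (sl_type << 6) | (psr << 5) | req

def parse_acf(acf: int) -> dict:
    return {
        "busy":    (acf >> 7) & 1,
        "sl_type": (acf >> 6) & 1,
        "psr":     (acf >> 5) & 1,
        "req":     acf & 0b111,
    }

acf = make_acf(busy=1, sl_type=0, psr=0, req=0b001)
print(parse_acf(acf))  # {'busy': 1, 'sl_type': 0, 'psr': 0, 'req': 1}
```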


Messages arriving at the stations may first have to be partitioned into segments small enough to fit in a slot before they are transmitted. The segments are re-assembled into the original messages at the destination stations.
DQDB provides for two types of access. The first is called pre-arbitrated service, which guarantees a certain amount of bandwidth and is useful for isochronous services such as voice and video. Isochronous refers to signals that carry their own timing information as part of the signal. The second is called queue-arbitrated service; it is based on demand and is designed to accommodate services such as data transmission. Isochronous services use pre-arbitrated slots, which are marked by the slot generator. All non-isochronous information is transported within queue-arbitrated slots. Pre-arbitrated and queue-arbitrated slots are distinguished by different values in the SL_TYPE field of the slot header. The queue-arbitrated slots are managed by the distributed queueing protocol (the medium access control). Each station keeps track of the number of slots currently waiting for access in the total network; a station that has a slot ready to send thereby determines its own position in the distributed queue, and once the slots queued ahead of it have been served, its own slot is transported.

10.3.3 DQDB Virtual Queue


The main purpose of the DQDB protocol is to form a distributed global queue so that distributed segment arrivals across a MAN can be served in as close to first-come-first-served (FCFS) order as possible, independent of the network size and data rate. Anyone who has ever queued for information at a large train station in Germany or Switzerland, or for that matter wanted to buy bread in a busy bread store in countries like that, will know that as one enters, one takes a number from an automatic ticket-dispensing machine, much like that at a car park. One does not queue physically, but can sit down anywhere, having joined the queue in a very real sense: the number indicates one's place in the queue. An electronic billboard visible to all in the virtual queue will indicate when your number comes up (literally) and which counter to go to. The DQDB algorithm works on exactly the same principle.

Data segments travel downstream from station i to station j, where i < j, along bus A, the forward bus. Similarly, requests travel upstream from station j to station i along bus B, the reverse bus.


Each request bit seen by a station on the reverse bus serves as an indication of a waiting frame at a more downstream station. These request bits are not removed as they pass upstream, and thus all upstream stations see each such request bit. Each station views its own frames, plus the downstream frames indicated by request bits, as forming a virtual queue. The virtual queue is served in first-come-first-served order. This means that if a frame from the given station is at the front of the queue, that frame replaces the first idle slot to arrive on the forward bus. Alternatively, suppose that a request from downstream is at the front of the queue. In this case, when the next idle slot arrives, the request is removed but the slot is left idle, thus allowing a downstream station to transmit a frame.
To prevent any station from hogging the bus, each station is allowed to enter only one frame at a time
on its virtual queue. Other frames at that station are forced to wait in a supplementary queue. As soon as
transmission starts on the frame in the virtual queue, a frame from the supplementary queue (or a new frame)
can be entered into the virtual queue. A request bit is sent upstream at each instant when a frame enters the
virtual queue. The virtual queue can be implemented simply as two counters giving the number of requests
before and behind the frame. These are known as the CD CTR and the RQ CTR respectively.


10.3.4 DQDB Medium Access Control (MAC) Algorithm


Because the bus system is symmetric, the focus of this description of the DQDB MAC algorithm will be
on packet transmissions on the forward bus, the procedure for transmissions in the backward bus being the
same. See Figure 10.9 for the state transition diagram of the DQDB algorithm.

[Figure omitted: two states, IDLE and COUNT DOWN. In IDLE, RQC is incremented for each request and decremented for each empty slot. When the segment queue becomes non-empty (SQ > 0), the station copies RQC to CDC, sets RQC to 0, sends a REQuest, and enters COUNT DOWN. There, CDC is decremented for each empty slot and RQC is incremented for each request; when CDC reaches 0 the station transmits on the first empty slot, returning to IDLE if SQ = 0 or re-entering COUNT DOWN if SQ > 0. (SQ: Segment Queue; RQC: RQ_CTR; CDC: CD_CTR.)]

Figure 10.9: State diagram of the DQDB MAC Protocol without BWB

A station accessing the forward bus has the following three main responsibilities:
1. Broadcast access request to upstream stations,
2. Keep track of access requests generated by downstream stations,
3. Access the forward bus when all requests prior to its own are satisfied.
Each station is either idle, when it has nothing to transmit, or active otherwise. When it is idle (IDLE state), the station keeps count of the number of outstanding requests from its downstream stations via a request counter (RQ_CTR). The RQ_CTR increases by one for each request that passes the station on the reverse bus and decreases by one for each empty slot that passes the station on the forward bus. This mechanism is illustrated in Fig. 10.10.
When the station has data segments to transmit, it becomes active (COUNTDOWN state). The station
1. transfers the contents of the RQ CTR to a second counter called the count-down counter (CD CTR)
which maintains the number of requests that have to be served before the scheduled segment,
2. resets the RQ CTR to 0 and,
3. sends a request in the reverse bus by setting the request bit of the first request-free slot of that channel
to 1.


[Figure omitted: the counters of a station in each state. In the IDLE state, requests arriving on Bus B increment REQ_CNT and empty slots on Bus A decrement it. In the COUNTDOWN state, empty slots on Bus A decrement CD_CNT while requests on Bus B increment REQ_CNT; REQ_Q_CNT holds locally queued requests.]

Figure 10.10: Logical states of a DQDB station


The CD CTR is decreased by one for every empty slot that passes the station on the forward bus until it
reaches 0. Following that instant, the station transmits the packet into the first empty slot of the forward bus.
In the meantime, the RQ CTR increases by one for each new request passing the station on the reverse bus
from the downstream stations.
The station moves from COUNTDOWN to IDLE after sending a segment. This is followed immediately by
a backward state transition from IDLE to COUNTDOWN if there are still segments waiting in the station.
Each time a transition from IDLE to COUNTDOWN is executed, a request is generated and placed in the
local request queue. This is done by incrementing REQ Q CNT by one; it is decremented when a request
has been put on the reverse bus.
With the use of the two counters at each station a distributed queue of packets awaiting transmission in the
forward bus is formed with FIFO service discipline (at each station). A similar distributed queue is formed
for transmissions on the reverse bus.
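The counter manipulation described above can be sketched as follows (an illustrative model of one station on the forward bus only; the counter names follow the text, the rest is my own):

```python
# Sketch of one DQDB station's access logic for the forward bus,
# using the two counters RQ_CTR and CD_CTR described in the text.

class DQDBStation:
    def __init__(self):
        self.rq_ctr = 0        # requests from downstream, queued behind us
        self.cd_ctr = 0        # requests to serve before our own segment
        self.segment = None    # at most one segment in the virtual queue

    def queue_segment(self, seg):
        """Enter a segment into the virtual queue (request sent upstream)."""
        assert self.segment is None
        self.segment = seg
        self.cd_ctr = self.rq_ctr   # outstanding requests go ahead of us
        self.rq_ctr = 0
        return "REQ"                # request bit to set on the reverse bus

    def on_reverse_request(self):
        self.rq_ctr += 1            # a downstream station joined the queue

    def on_forward_slot(self, busy: bool):
        """Return the segment to transmit in this slot, if it is our turn."""
        if busy:
            return None
        if self.segment is not None:
            if self.cd_ctr == 0:    # all earlier requests served: our turn
                seg, self.segment = self.segment, None
                return seg
            self.cd_ctr -= 1        # let the empty slot pass downstream
        else:
            self.rq_ctr = max(0, self.rq_ctr - 1)
        return None
```

With one downstream request outstanding, the station lets the first empty slot pass and transmits its own segment in the next one.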

10.3.5 Performance of DQDB Protocol


Most performance studies on the DQDB network protocol are simulation studies. The main difficulty for
the analysis of the DQDB network is the high degree of complexity and interdependence of the various
processes that describe the operation of the network. The DQDB network does not possess any convenient
point for the analysis of the network.
Each station in a DQDB network gains knowledge about the activity of the other stations by observing the busy/empty activity on Bus A and the request/no-request activity on Bus B. This information is all that a station needs to access the DQDB network. Unfortunately, the information that a station receives is usually outdated due to the propagation delays between the stations. With the medium access mechanism and network information (usually distorted) travelling in opposing directions, analysis of the DQDB network is hampered by the intricacies inherent in a totally distributed environment. Exact analytically-tractable models for the DQDB network are therefore very difficult to formulate.
The performance during heavy load and overload conditions is of special interest. Under heavy load DQDB behaves unpredictably and unfairly. The unfair behaviour affects a group of stations active on the same bus, giving preferential throughput to the station nearest the bus HOB. Furthermore, station throughput turns out to be strongly affected by the slot usage pattern just before heavy load occurs.

10.3.6 Advantages of DQDB


Probably the most important advantage of DQDB is that the maximum achievable channel utilization remains very high (very close to one) irrespective of the propagation delay. Therefore, DQDB appears to be
an ideal network to operate in a high-speed environment.
Increasing communication requirements demand the interconnection of LANs. DQDB may be used to
obtain good performance (e.g. high throughput, small delay).
As mentioned previously, the total slot length is identical to that of the ATM cell which means that DQDB
is compatible with the ATM architecture. This facilitates the interconnection of ATM and DQDB networks.

10.3.7 Disadvantages of DQDB


At high propagation delay values (i.e. with a large end-to-end propagation delay and a comparatively small
slot size) and high load, the DQDB protocol does not appear to be fair in its bandwidth allocation.


If a network is large, some unfairness and inefficiency may occur due to propagation delay. The effects of the propagation delay are the following:
Hogging by upstream stations. The DQDB access scheme allows stations to reserve only one slot at a time. This constraint can result in throughput unfairness under heavy load, where the stations close to the HOB are able to transmit more frequently than the rest. In fact, consider just two overloaded, widely separated stations transmitting on Bus A, and assume that Station One (closer to the HOB) starts to transmit first, so that Bus A between the two stations is full of busy slots generated by Station One. Station Two (further from the HOB) then transmits only a single segment within each period of time of the order of twice the propagation delay between Stations One and Two.
Stale requests effect. Stale requests, generated by downstream stations but already granted, may arrive at busy upstream stations and unnecessarily prevent them from accessing the bus. An arriving stale request causes a busy station located near the HOB to let a free slot go by. On arrival at the downstream station that generated the corresponding stale request, this free slot may only be used if that station happens to have another segment to transmit at that point. Hence, stale requests may cause inefficiency, when the free slot is not used at all, or unfairness, when the free slots generated by the stale requests are used unfairly. Although the former effect is reduced as the traffic increases, and almost disappears during overload periods, the latter may be very severe during busy periods.
A complex algorithm for the reassembly of packets is also necessary. Each station may receive segments from several different stations at a time, and must therefore be capable of reassembling packets of different messages simultaneously.

10.3.8 Protocol Enhancements


Three of the protocol enhancements aimed at limiting the unfairness caused by the DQDB protocol are
described below.
1. Bandwidth Balancing. Due to the unpredictable behaviour of the network in overloaded cases, the
BWB mechanism has been added to the basic distributed queueing algorithm described above in
order to provide bandwidth allocation among the stations of an overloaded network in a manner that
is independent of their position.
The main idea behind BWB is the use of a number called BWB MOD. Each station increments the
RQ CTR by one after a BWB MOD of segment transmissions. Hence, a station allows an extra empty
slot for every BWB MOD segments it transmits. Downstream stations cannot distinguish these extra
empty slots and they treat them as any other empty slot. The implementation of the BWB mechanism
requires a counter (BWB CTR) at each station that counts modulo BWB MOD segment transmissions
by the station.
Unfortunately, BWB is only guaranteed to function correctly in the single-priority case: low priority
traffic may steal a high proportion of bandwidth from high priority traffic.
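As a sketch of this bookkeeping (in Python, with hypothetical names; the value of BWB MOD here is arbitrary), the extra empty slot emerges naturally from the modulo counter:

```python
BWB_MOD = 8  # hypothetical modulus; a real station would have this configured

class Station:
    """Per-station counters for the bandwidth balancing (BWB) mechanism."""

    def __init__(self):
        self.rq_ctr = 0   # request counter of the basic distributed queue
        self.bwb_ctr = 0  # counts segment transmissions modulo BWB_MOD

    def on_segment_transmitted(self):
        self.bwb_ctr = (self.bwb_ctr + 1) % BWB_MOD
        if self.bwb_ctr == 0:
            # After BWB_MOD transmissions, act as if one extra request were
            # queued: the station lets one additional empty slot pass, which
            # downstream stations see as an ordinary empty slot.
            self.rq_ctr += 1
```

Every BWB MOD-th transmission thus inflates the request counter by one, throttling the station by roughly a factor BWB MOD/(BWB MOD + 1) regardless of its position on the bus.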


2. Access Protection and Priority Control (APPC). This proposal can guarantee bandwidth balancing by
setting bounds on the request counters. However, to limit wastage of bandwidth, this method requires
on-line management (parameter setting) with knowledge of the positions of active stations, which may
not be very practical for a connectionless service.
3. CYclic REquest Control (CYREC). This method is based on controlling the number of requests sent
from each station per cycle, whereby requests for new cycles are generated by the HOB. CYREC
can guarantee capacity and provide fairness for traffic of high priority and can be implemented in a
way which is compatible with DQDB. It is however an open question whether the improvement in
fairness obtained by CYREC justifies the additional complexity.

10.3.9 The Future of DQDB


The DQDB media access protocol exhibits serious deficiencies during heavy load and overload conditions.
A number of mechanisms have been proposed to improve the DQDB protocol but so far none of them have
been entirely satisfactory. As a result, it is unlikely that DQDB will become the technology of choice for
high-speed MANs in the long-term.

10.4 Exercises

Exercise 10.1 A large FDDI ring has 100 workstations and a token rotation time of 40 msecs. The token
holding time is 10 msec. What is the maximum achievable efficiency of the ring?
Exercise 10.2 Consider a DQDB network with three nodes. Assume that the propagation times are negligible. In every frame time the leftmost node gets a packet to transmit on bus 1 with a small probability 2p.
At the same time the centre node gets a packet to transmit on bus 1 with probability p and a packet to transmit
on bus 2 also with probability p. In every frame time, the rightmost node gets a packet to transmit on bus 2
with probability 2p. Estimate the rate of packet transmissions by the 3 nodes.
Exercise 10.3 Comment on the fairness of DQDB.


Chapter 11

Network Security
11.1 Objectives of this Chapter
In this chapter we discuss some of the basic fundamentals of security and its application to networks.
We then look in more depth at various mechanisms that enable two computers (or users) to
communicate securely. The items looked at are:
1. Principles.
2. Enigma and rotor machines.
3. DES/RSA and other mechanisms.
4. Hashing.
5. Denial of Service attacks.
6. Firewalls.
At the end of this chapter the reader should have at least a basic understanding of some of the issues that are
pertinent in today's publicly accessible networking environments.


11.2 Introduction
In the modern society that we live in, computer and telecommunication networks are essential for an organisation to be able to function and communicate. Take a moment to think about just how pervasively
they carry critical information: we make telephone calls to people and assume that no-one can hear the
conversation. We send post via the postal services on the assumption that no-one reads the message. We
send email via the Internet and order goods over a link to a computer system many miles away. We also
assume that when we receive a message, it is usually from a trustworthy source. Every transaction at an
automatic teller machine (ATM) is communicated to a bank, and the large money institutions perform
countless transactions during the course of a single day. All this information is communicated via such links
- how can they be made secure?
Security for such networks is not straightforward. In fact, security for information interchange has never
been straightforward. There are a few simple key concepts that a secure system or secure communication
needs to offer:
Confidentiality - the contents of the message remain secret or unfathomable by parties other than the
sender and the intended recipient.
Authentication - a message can be adjudged to have originated from the party who claims to have sent it.
Non-repudiation - a message cannot be denied or refuted by the sending or receiving party.
Integrity - a guarantee that a message sent will not have been tampered with by being duplicated,
modified, re-ordered or replayed.
Access Control - a guarantee that a service being accessed over a network is being accessed by a duly
authorised entity.
Availability - a system must be robust against attacks that can deny its service to legitimate users.
Any security mechanism that is implemented should offer some, if not all, of these services and be able to
stand up to different kinds of security attacks.
Types of Attack
Two different kinds of attacks are possible within a network. They can be categorised as:
1. Passive Attacks: In passive attacks the aim of the attacker is that the sender and receiver should not be
aware that messages are being intercepted or analysed. Passive attacks can be performed easily
by, for example, just tapping a telephone line. In this case the contents of the message are easily
obtained or, if they are in some code, easily captured for some form of analysis. The second form of
passive attack is traffic analysis. Much information can be obtained from monitoring traffic patterns -
who is sending messages, who is receiving them, and so on - without actually having to know the contents
of the messages.
2. Active Attacks: In these kinds of attacks the actual data that is being transmitted is either altered or
completely changed. Active attacks can take the form of:


Masquerading - where an attacker tries to impersonate another legitimate user. For example,
they may have stolen credentials (username, password, authorisation schema) and be trying to
gain unauthorised access.
Replay - where an attacker has earlier passively captured sensitive security data and is
now replaying it back to the target to either gain access or cause some illegal operation.
Modification - where the contents of a message are altered. A good example of this would be a
financial transaction between banks where the recipient's account is changed to an unauthorised
account.
Denial of Service - where an attacker prevents legitimate users from accessing a normally
available service.

With this in mind, later in the chapter we discuss differing aspects of internetwork security, their
implications for networks, and where these concepts fit into the underlying mechanisms.

11.3 Access Control and Passwords


Corporations, organisations and individual users spend large amounts of money on systems for staff, friends
or themselves to use. Securing the information on those machines may be important, and regulating who has
access to the systems, services and data is a necessary task - especially if such systems are openly connected
to the wider world, or even just to a local area, via a network connection.
A common way of providing basic security services is via the use of access control mechanisms, the most
common of which is password protection. The basic use of passwords to gain access to a system is
pretty rudimentary, the password acting as the key to the system, but system providers also have to provide
mechanisms to stop users from roaming around such systems perusing files which they are forbidden to look
at or poking in places where they are not welcome.
A mechanism that can provide such a model of security is that of an access matrix. An access matrix consists
of elements denoted by subject, object and access right. These elements denote:

Subject - who may access a resource (be it a user, process etc.)
Object - a resource that can be accessed, such as memory segments or files.
Access right - how the object may be accessed by a particular user or process.

These can be arranged by subject (rows) and by object (columns) and the entry in the matrix is the access
right. The matrix looks something like that shown in Figure 11.1.
What this presents is a look-up matrix that can be decomposed by row or by column to obtain access information. For example, if the matrix is decomposed by columns then what is obtained is an Access Control List
for each object, which gives the rights of each user, group or process that is allowed to access that particular
object. On the other hand, if the matrix is decomposed into rows then what is obtained is a Capability Ticket
for each user, which lists everything that the user is able to access and with what access right.
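The two decompositions can be sketched in a few lines (a Python illustration with made-up matrix entries, not a real protection system):

```python
# Access matrix stored sparsely: subject (row) -> {object (column): access right}
matrix = {
    "User1":     {"Accounting Program": "Read/Execute"},
    "Process X": {"Accounting Program": "Read/Write", "User DB file": "Read"},
    "User2":     {"User DB file": "Read/Write", "Archive File": "Read/Write"},
}

def capability_ticket(matrix, subject):
    # One row: everything this subject may access, and how.
    return dict(matrix.get(subject, {}))

def access_control_list(matrix, obj):
    # One column: every subject allowed to access this object, and how.
    return {s: rights[obj] for s, rights in matrix.items() if obj in rights}
```

An operating system that stores ACLs keeps one such column with each object; a capability system hands each subject its row as a ticket.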


            Accounting Program    User DB file    Archive File
User1       Read/Execute
Process X   Read/Write            Read
Process Y                         Read
User2                             Read/Write      Read/Write

Figure 11.1: Access matrix

Figure 11.2: Entering a password for a user

Figure 11.3: Verifying a password for a user

11.3.1 Passwords
Most users of a computer system gain access via a system of identification and password authorisation.
Generally the identification (the userid), via some form of access control list, will identify what resources


the user is allowed to use, and the password is used to authenticate that the user is bona fide.
Passwords that identify users have to be stored somewhere, and this affords them a measure of vulnerability:
either resources have to be consumed in protecting them, or they are freely available and
open to analysis. Unix-like systems (generally) implement the latter, in that the password file is open for
anyone to look at. The trick is that in the password file the actual password is encrypted and not stored in
plain text. The password is encrypted with an encryption routine similar to a commonly available routine
called DES, which will be discussed a bit later. The password is usually not particularly long, and it encrypts
to an 11-character string. Stored in the password file together with the encrypted password is a value known
as the salt. The salt is used when the encrypted password is generated.
When a user logs in, the user id is used to reference a record in the password file. This gives the value
of the salt and the encrypted password. The password that is then entered by the user is extended with the
salt and encrypted. A correct password will encrypt to the same encryption that is stored in the password
file. This whole process is illustrated in Figure 11.2 and Figure 11.3, which show the entering of a new
password for a user into the password file and the verification of a user's credentials against the password file.
The salt exists for two reasons. Firstly, it stops different users with the same password from having
the same encryption in the password file. The salt is typically some random number generated when a
new password is entered by a user, or based on the time at which the password was created. Secondly, the salt
extends the encryption space by two extra characters (the salt is 12 bits), which increases the number of
possible encryptions of a given password by a factor of 2^12 = 4096 and thus increases the difficulty of
guessing a password.
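The store-and-verify cycle can be sketched as follows. This is not the real Unix crypt routine: SHA-256 stands in for the DES-based algorithm and the formats are invented, but the role of the salt is the same:

```python
import hashlib
import os

def make_entry(password: str):
    # Generate a random salt and return (salt, encrypted password),
    # both of which would be stored in the password file.
    salt = os.urandom(6).hex()
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return salt, digest

def verify(password: str, salt: str, stored: str) -> bool:
    # Extend the offered password with the stored salt, encrypt, and compare.
    return hashlib.sha256((salt + password).encode()).hexdigest() == stored
```

Two users with the same password almost certainly receive different salts and hence different stored entries.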

11.4 Conventional Encryption


11.4.1 Introduction and Historical Overview
Keeping things secret has been a necessity ever since communication began. It is not always desirable
or intelligent to allow everyone access to messages sent between two parties, especially if those two parties
want to keep such messages secret.

Figure 11.4: The encryption/decryption mechanism

The process of securing messages is known as cryptography, and it is the job of such mechanisms to convert plain-text messages into secure messages known as ciphertext. Cryptography thus actually comprises
a process of encryption and decryption.
The process that is followed can vary, but generally some algorithm is applied to the plaintext to convert it
to the ciphertext, with the caveat that this process is reversible. This whole process is under the control, or


influence to be more precise, of what is known as a key. This is illustrated in Figure 11.4. The key dictates
what the ciphertext will look like for a given plaintext. Different keys will encrypt, by the same process,
into different ciphertexts.
This whole process can be expressed with the notation:

C = E(K, P)

where C is the ciphertext, E is the encryption function with a given key K, and P is the plaintext. Similarly
the process of recovering the plaintext message from the ciphertext can be expressed as:

P = D(K, C)

where D is the decryption process with key K for the conversion of the ciphertext C to the plaintext P.

Historically this process has been understood and used for thousands of years. One of the early ciphers,
as the whole system of encryption and decryption is known, was the Caesar Cipher, in which messages were
shifted by 3 characters alphabetically. So for example the message:

what have the romans ever done for us

becomes the ciphertext message of:

zkdw kdyh wkh urpdqv hyhu grqh iru xv
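The shift is easily sketched in Python (the function name is ours); a negative shift decrypts:

```python
def caesar(text: str, shift: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            # rotate within the lowercase alphabet
            out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return ''.join(out)
```

Calling caesar with the plaintext above and a shift of 3 reproduces the ciphertext, and a shift of -3 recovers the plaintext.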
This is a trivial enough cipher to break, with a simple key of char + 3; more accurately it can be expressed
as:

c = (p + 3) mod 26

where p is the alphabetic position of the plaintext character and c that of the ciphertext character.
A different key might be to shift the characters along by some arbitrary number, or to insert a word at
the beginning of the cipher alphabet and fill up the remainder with the unused letters. So for example
the original alphabet of:

abcdefghijklmnopqrstuvwxyz

becomes the following, which starts with the keyword caesar (note that each letter is used only once)
followed by the remaining letters in order:

caesrbdfghijklmnopqtuvwxyz
The Caesar cipher is an example of a monoalphabetic cipher, in that a particular plaintext character always
encrypts to the same ciphertext character. This is a major weakness in any cipher, since it is open to
cryptanalysis, the branch of the field concerned with trying to crack ciphers. The particular weakness here is
that frequency analysis can be performed on the ciphertext. It is common knowledge that the most common
letter in the English language is e, so where we had the ciphertext of:
(The plaintext, incidentally, is a quotation from Monty Python's Life of Brian.)


zkdw kdyh wkh urpdqv hyhu grqh iru xv


we see that h appears 5 times, giving:

.... ...e ..e ...... e.e. ...e ... ..

This is not much on its own, but a reasonable person will notice the two important groups ..e and e.e..
The first group might be something like see, the or other similar words, and the second
group tells us that we have to find a word of that form - it will not take much searching to come up with
ever, which then gives a mapping of y to v and u to r, giving now:

.... ..ve ..e r..... ever ...e ..r ..

This process can carry on, and soon it should be clear that with a little more effort we can easily recover
the full text. The fact that the plaintext is so easily recovered shows this to be a poor cipher.
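The counting step of such an analysis is entirely mechanical, as this small Python sketch shows: substituting e for the most frequent ciphertext letter reproduces the partial decode above.

```python
from collections import Counter

cipher = "zkdw kdyh wkh urpdqv hyhu grqh iru xv"
freq = Counter(c for c in cipher if c.isalpha())
top, count = freq.most_common(1)[0]   # 'h', which occurs 5 times

guess = {top: 'e'}                    # assume the most frequent letter is e
partial = ''.join(guess.get(c, '.') if c.isalpha() else c for c in cipher)
# partial is now ".... ...e ..e ...... e.e. ...e ... .."
```

Extending the guess dictionary with further mappings (y to v, u to r, and so on) mechanises the rest of the attack.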
Better ciphers are those known as polyalphabetic, in which a character may map to different cipher
characters. Probably the most famous of these is the Vigenère Cipher, which uses a matrix of 26 cipher
alphabets as rows, with the letters rotated through: the first row is a b c d ... z, the second is b c d
e ... a, and so on until the last row, which is z a b c ... y. The rows are indexed by the plaintext
character in the message and a key phrase is used to index the column in the matrix. The intersection
of the two gives the ciphertext character to use. This mechanism also has a weakness, in that the key phrase
used to generate the indexing may be guessable and patterns are relatively easy to discern.
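Since row r and column c of the matrix intersect at letter (r + c) mod 26, the table lookup can be computed directly rather than stored, as in this sketch (the function name is ours):

```python
def vigenere_encrypt(plain: str, key: str) -> str:
    out, i = [], 0
    for ch in plain:
        if ch.isalpha():
            row = ord(ch) - ord('a')                 # plaintext indexes the row
            col = ord(key[i % len(key)]) - ord('a')  # key phrase indexes the column
            out.append(chr((row + col) % 26 + ord('a')))
            i += 1                                   # advance the key only on letters
        else:
            out.append(ch)
    return ''.join(out)
```

For example, vigenere_encrypt("attack at dawn", "lemon") gives "lxfopv ef rnhr"; note how the two occurrences of a encrypt to different letters.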

Moving on, probably the most famous encryption technique of the 20th century was the Enigma
cipher, which was employed by Nazi Germany in the build-up to and during World War II. The Enigma cipher
requires a machine with rotor wheels and a plugboard. A picture of an Enigma device is shown in Figure
11.5.

The machine has a set of independently rotating wheels. There are connections for input and output on each
wheel, and the input of each subsequent wheel takes the output from the preceding wheel. Each wheel, known
as a scrambler disk, has paths through it from its input to its output. These paths are not (usually) straight-through connections, so a given input pin may connect to a completely different output pin. In the original
Enigma machines there were 3 scrambler disks (as shown) and as a key is pressed, which acts as the input
to the first disk, a lamp is lit on the output by tracing the electrical path through the wheels. The disk then
rotates and, just like an odometer, after a full revolution the next wheel rotates one position, and so on.
This gives a possible 26 × 26 × 26 = 17,576 cipher alphabets. This is shown diagrammatically in Figure
11.6. The inventor, a German engineer called Arthur Scherbius, added the plugboard
between the keyboard and the first rotor wheel, which allowed the transposition of keys to the input pins of
that rotor. This gave a further number of combinations, as it could swap 6 pairs of letters out of 26. Finally,
he made the wheels interchangeable. So to calculate the number of cipher alphabets:
Scrambler combinations = 26 × 26 × 26 = 17,576

Scrambler arrangements, either disks 123, 132, 312 ... = 6

Plugboard, choosing 6 pairs of letters from 26 = 100,391,791,500

Total = 17,576 × 6 × 100,391,791,500 = 10,586,916,764,424,000, or about 10^16
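The arithmetic can be checked directly; the plugboard term is the number of ways of choosing 6 unordered pairs from 26 letters, 26! / (14! × 6! × 2^6):

```python
import math

scramblers   = 26 ** 3            # 17,576 rotor settings
arrangements = math.factorial(3)  # 6 orderings of the three wheels

# 6 unordered pairs chosen from 26 letters:
# divide by 6! for the order of the pairs and by 2 per pair for order within a pair
plugboard = math.factorial(26) // (math.factorial(14) * math.factorial(6) * 2**6)

total = scramblers * arrangements * plugboard  # roughly 10**16
```

Notice that the plugboard contributes by far the largest factor, yet the rotors were what made Enigma hard to break: the plugboard is a fixed monoalphabetic swap, while the rotors change the cipher alphabet with every keypress.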

A reader may be wondering where this historical overview is going: basically, traditional conventional
encryption came of age with the Enigma, and its mechanisms for creating a cipher form the basis
of probably the most widely used encryption mechanism, called DES or the Data Encryption Standard.


Figure 11.5: The Enigma machine

Figure 11.6: Diagrammatic view of a rotor machine such as the Enigma

11.4.2 DES
Overview
DES came about from a requirement by the American National Bureau of Standards (NBS) for an encryption
mechanism to be employed as a national standard. DES has a chequered and controversial history, as it won


the race for adoption ahead of a possibly more secure system called LUCIFER, which was developed by
IBM. It is rumoured that the NSA, the National Security Agency (a bunch of spooks in the US Government),
demanded a crackable encryption that was secure enough for industry and commerce, but not against a governmental agency with vast computing resources. LUCIFER's key was originally 128 bits, which gives 2^128
possible key combinations or, in decimal, approximately 3.4 × 10^38! LUCIFER's key space was cut
down to 56 bits and in 1977 the standard was adopted by the NBS and named DES.

[Figure: the 64-bit plaintext passes through an initial permutation and then 16 iteration stages; the 56-bit key passes through permuted choice 1 and, for each iteration i, a left circular shift followed by permuted choice 2 produces the subkey K(i); after iteration 16 a 32-bit swap and the inverse initial permutation complete the encryption.]

Figure 11.7: DES encryption mechanism

DES is an enhanced, algorithmic version of the same process that the Enigma and other rotor cipher machines follow. The schematic in Figure 11.7 shows the process that is followed when encrypting with DES.
The first thing to understand is that the encryption (and decryption) happens not on actual characters but on
binary words, unlike the earlier machines, which acted on the characters of the plain text. Also,
note that DES operates on blocks organised as 64-bit blocks. If the digital representation of the plain text is
not some multiple of 64 bits then it must be padded (i.e. extended) until it is.
DES takes a 64-bit block and performs a permute function on it which jumbles the bits up according to
some schema. This can be thought of as being akin to the plugboard settings in the Enigma machine. This
stage acts as the input to an iteration stage which is performed 16 times in total. Also acting as input to
the iteration stage is the 56-bit key K. At the end of the 16 iterations the high and low 32-bit quantities
are swapped and passed through the inverse of the permutation function with which the operation started.
The iteration stages, including the key input, are akin to the rotor wheels of the Enigma machine.

Permute Function

As stated, the initial first stage is to permute the input into some new sequence. This is quite a straightforward
operation, performed by using a look-up table to determine where a bit i should map to. The inverse
operation at the end is the reverse of this, such that if bit i of the 64-bit block maps to bit j, then bit j
maps back to position i. Or, more precisely, if X is the plain text and Y the permuted text, then:

Y = P(X) and X = P^-1(Y)
Iteration Stage
The 64-bit block in an iteration stage can be considered to be two separate blocks of 32 bits that are worked
on independently. The high 32 bits are labelled the left side (L) and the low 32 bits the right side (R). Each
iteration acts as the input to the following iteration, apart from the first and last, which take as their input and
output the respective permutation functions.
The operations of iteration stage i are:

L(i) = R(i-1)
R(i) = L(i-1) XOR f(R(i-1), K(i))

Here we see that the previous stage's right-hand side becomes this stage's left-hand side, a simple swap. The
right-hand side of the current stage is a little more complicated than that: it is some function of the previous
right-hand side and an intermediary key, which is XORed with the previous stage's left-hand side to produce
the current stage's final output right-hand side.
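This swap-and-XOR structure (today called a Feistel network) has the elegant property that decryption is simply the same rounds run with the keys in reverse order, whatever the function f is. A generic Python sketch, with a toy round function standing in for the real DES expansion and S-box tables:

```python
def feistel_encrypt(left, right, round_keys, f):
    # Each round: new left = old right; new right = old left XOR f(old right, key)
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    return left, right

def feistel_decrypt(left, right, round_keys, f):
    # Running the rounds with the keys in reverse order undoes the encryption
    for k in reversed(round_keys):
        left, right = right ^ f(left, k), left
    return left, right

# A toy 32-bit round function -- NOT the real DES f
def toy_f(r, k):
    return (r * 2654435761 + k) & 0xFFFFFFFF
```

Because the XOR cancels itself out, f never needs to be inverted - which is why DES can use non-invertible S-boxes inside f.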


The function f(R(i-1), K(i)) is quite a complicated function. It takes R(i-1) and expands it to 48 bits according
to some permutation table, in which some of the bits are repeated. The original 56-bit key K is used to generate
an intermediary key K(i). This is generated by performing a left shift on the key and permuting it. The same
permute operation is performed at each stage in the whole process, but the left-shifting ensures a different
K(i) for each stage. The expanded R(i-1) is XORed with K(i) and this 48-bit quantity is passed through probably
the most controversial part of DES, the S-boxes.
The design rationale of the S-boxes was never published, although it is known that each takes 6 bits as input
and outputs 4 bits. This process thus takes the 48-bit input and brings it back to being 32 bits wide. This
whole operation is shown in Figure 11.8, which shows the expansion, XOR and S-boxes. The final step of
the function is to permute the 32 bits again.

Decryption
Decryption of a DES cipher text is relatively straightforward, as it is just about the same as the encryption
routine, only in reverse. The permute functions are all known to both parties, as is the secret key K.
Then at each iteration stage, which now runs in reverse from i = 16 down to i = 1, the appropriate K(i) can be
calculated. A full description of the decryption process is beyond what is necessary here.

[Figure: R(i-1) (32 bits) is expanded and permuted to 48 bits and XORed with the 48-bit subkey K(i), which is produced from C(i-1) and D(i-1) (28 bits each) by left shifts and a permutation/contraction; the 48-bit result passes through the S-box substitution back to 32 bits and a permutation P, and is finally XORed with L(i-1) to form R(i), while L(i) takes the value of R(i-1).]

Figure 11.8: The DES f(R(i-1), K(i)) operation

11.5 Public Key Cryptography
Conventional cryptographic systems have always suffered from a fatal flaw. The problem is that both parties
have to be aware of the secret key K, and this key somehow has to be communicated between the two
parties. If the two parties are within walking distance that is fine, but any further and the key has to go
via other means, ideally a secure encrypted channel itself. But if that channel is compromised then any
negotiated key will be in jeopardy. And how does a user set up a secure channel in the first place, if they
have to go through the same scenario to agree a key for that channel - a kind of chicken-and-egg scenario.
Various techniques for the protection and dispersal of keys have been used. These include the Key Distribution
Centre (KDC) concept, where there is a hierarchy of secure key repositories to which one has to present
credentials to get keys from an authority. This involves much handshaking and validation. It does not
protect the keys that are stored in the KDC from being tampered with from within the KDC or, for that matter,
from being stolen. The general desire was for a way of ensuring a secure transmission on the basis of a
publicly available key mechanism.

11.5.1 Diffie-Hellman Key Exchange
A solution to this came from two scientists called Whitfield Diffie and Martin Hellman, who came up with
a key-exchange mechanism that would enable the secure transmission of a shared secret key. The trick


that they employed was to forget about normal encryption techniques, which essentially rely on massive permutation and substitution to mask a message. They looked at one-way functions, which are mathematical
functions that have no practical inverse. A branch of mathematics that is rich in one-way functions is modular
arithmetic. This kind of arithmetic is done to some base n, and there isn't a number larger than n - 1. For
example, if we consider mod 7 arithmetic and we wish to calculate 4 × 3, then we perform 4 × 3 = 12 and
then 12 mod 7, which is 5. If we perform 3 + 5 mod 7 we get 1. How does this work? The method is quite
easy: basically perform the normal arithmetic but then take the remainder after integer division by the base
(in this case 7), which brings the value into the range 0 to n - 1. Now, to show that this is indeed a one-way
function: the result 1 above is of course the same as we would get from 22 mod 7, or from 15 mod 7. We
cannot recover the two original numbers from the result of 1. This is in essence the principle of the one-way
function.

Diffie and Hellman considered modular arithmetic as above and also considered the use of exponentiation to
calculate values. It is easy to exponentiate a number, but it is extremely difficult to go backwards, that is to take
the discrete logarithm. They came up with a function of the form Y = g^X mod p.

Armed with this knowledge, the Diffie-Hellman key exchange process between two parties, Alice and Bob,
is as follows.

Before any key negotiation takes place there are two numbers in the public domain that both Alice
and Bob are aware of and have at some previous time agreed upon. These are p, a prime number, and g,
a primitive root of p, with g < p.

Alice selects a secret number XA, where XA < p, and calculates YA = g^XA mod p.

Bob selects a secret number XB, where XB < p, and calculates YB = g^XB mod p.

Alice transmits YA to Bob and in return receives YB back from Bob.

Alice can now discern a secret key K, where K = YB^XA mod p, and Bob can perform the same action to
discern the same secret key K, where K = YA^XB mod p.

This may seem incredible, but it is best illustrated with an example. We will use the parties Alice and Bob
again, and assume that they have previously telephoned each other and agreed that p = 11 and g = 7. These
are the two public pieces of information, and the general form of the function will be Y = 7^X mod 11. So:

Alice chooses a number XA, where XA < 11, so say she chooses 3. She works out YA = g^XA mod p,
which filled out is 7^3 mod 11 = 343 mod 11 = 2.

Bob chooses a number XB, where XB < 11, and so say he chooses 7. He works out YB = g^XB mod p,
which filled out is 7^7 mod 11 = 823543 mod 11 = 6.

Alice sends YA = 2 to Bob, who sends back YB = 6 in return.

Now Alice uses K = YB^XA mod p to get the value 6^3 mod 11 = 216 mod 11 = 7. So Alice discerns
K = 7, the key.

Now Bob uses K = YA^XB mod p to get the value 2^7 mod 11 = 128 mod 11 = 7. So Bob discerns
K = 7, the key, too.

Both Alice and Bob now know a value K = 7 which they can use as a secret key for that communication
session.
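The whole exchange fits in a few lines using Python's three-argument pow for modular exponentiation:

```python
p, g = 11, 7              # public: the prime and a primitive root of it
x_a, x_b = 3, 7           # Alice's and Bob's private numbers

y_a = pow(g, x_a, p)      # Alice computes 7^3 mod 11 = 2 and sends it
y_b = pow(g, x_b, p)      # Bob computes 7^7 mod 11 = 6 and sends it

k_alice = pow(y_b, x_a, p)   # 6^3 mod 11 = 7
k_bob   = pow(y_a, x_b, p)   # 2^7 mod 11 = 7
```

Both k_alice and k_bob come out as 7, while only the values 2 and 6 ever cross the channel.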
If we assume that all communication channels are compromised and that a malicious intermediary
called Eve has access to the one-way function Y = 7^X mod 11, and saw that Alice sent Bob 2 and that
Bob sent Alice 6: to discover the key, Eve must know either the secret number that Alice generated (XA)
or the secret number that Bob generated (XB). Neither of these numbers is known to Eve, and in a real
example these numbers would be extremely large and not guessable within a couple of iterations. Eve's
last hope of getting the key is to work backwards by trying to invert the function - but as said earlier this
is a one-way function and not invertible. So either way, knowledge of the public information doesn't help
Eve, and K remains secret as long as XA and XB do.

11.5.2 RSA
Diffie and Hellman created a storm in the cryptographic world by breaking away from traditional techniques
and moving to mathematical functions to enable a crypto-system. Although their method wasn't true public
key cryptography - it enabled a secure session to be arranged over an insecure channel - it showed the
world that mathematics might provide the possibility of an asymmetric cipher.
On the other side of the USA from Diffie and Hellman there was a researcher called Ron Rivest
who had read the paper in which Diffie and Hellman outlined the key exchange principle.
Rivest persuaded a friend of his, Leonard Adleman, that they should look further into the mathematics of an
asymmetric cipher, and shortly afterwards they enrolled another researcher called Adi Shamir. These three
researchers from MIT tackled the problem without much luck until, as is often the case with notable
discoveries, in something of a revelation the pieces of the puzzle fell into place. In a few short days they
thrashed out their ideas and eventually came up with a true asymmetric encryption crypto-system. This was
probably one of the most pivotal moments in cryptography, and the system was dubbed RSA after the names
of the authors. The expressions that lie at the heart of RSA are:

C = M^e mod n

M = C^d mod n

where C is the cipher text and M is the message's plain text. The other variables we will discuss in a moment.

The algorithm for creating the system uses modular arithmetic similar to that outlined earlier, with some extra
number theory which is beyond the scope of what we want to illustrate here. An interested reader is pointed
to the excellent security book by William Stallings called Network and Internetwork Security for a quantitative analysis of all things to do with security, or to the book by Simon Singh called The Code Book for a
more qualitative treatment; more thorough descriptions can be found in both. In essence, though, the steps for
performing RSA are as follows, and we will use Alice and Bob as the principals in the communication.
Alice selects two numbers p and q which are extremely large prime numbers.

Alice now multiplies these two numbers together to get a new number n, where n = p × q.

Alice now has to select a number e which has to be relatively prime to (p - 1) × (q - 1).

Alice can now distribute publicly the numbers n and e, that is the public key (e, n).

Anyone wanting to encrypt a message M to Alice now has to perform the first expression listed earlier,
that is:

    C = M^e mod n
If there was a malicious interloper, Eve, listening in on the channel she will only hear C when it is
transmitted. We discussed earlier that modular arithmetic is generally full of one-way functions, and
even though Eve may know e, n and C she cannot get back to the original M. Her only worthwhile method
of attack is to try and factorise n, which is extremely time intensive; if p and q are
extremely large then the time it would take is, with current technology, very long indeed!

Alice, when she computed the number n and selected e such that:

    gcd(e, (p - 1)(q - 1)) = 1

would have also calculated a special number d where:

    e x d mod (p - 1)(q - 1) = 1

This is a costly calculation but can be done when all the other preliminary calculations are done. With
this number Alice now has a private key (d, n). She can use these numbers to calculate:

    M = C^d mod n

and recover the plain text message M from the cipher text message C.

The strength of the RSA algorithm lies in the fact that it is extremely hard to factorise a very large number
into the pair of correct prime factors. It also gains from the fact that performing discrete logarithms to reverse
the exponentiation is also extremely difficult. So in summary they have created a true asymmetric key crypto-system
where the public key is (e, n) and the private key is (d, n). Alice can publish (e, n)
anywhere she wants and keeps d strictly private. Eve's knowledge of (e, n) doesn't help her as she will
have to expend massive effort and computing time to try and search either the entire key space or attempt to
factorise n and check whether the candidates are the correct factors.
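The whole scheme can be sketched in a few lines of Python. The primes here are absurdly small, purely for illustration (real keys use primes hundreds of digits long), and `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse d:

```python
# Toy RSA sketch: tiny illustrative primes only -- NOT secure parameters.

p, q = 61, 53                 # Alice's two "large" primes
n = p * q                     # n = 3233, part of both keys
phi = (p - 1) * (q - 1)       # (p-1)(q-1) = 3120

e = 17                        # public exponent, relatively prime to phi
d = pow(e, -1, phi)           # private exponent: e*d mod phi == 1

M = 65                        # the plain text, encoded as a number < n
C = pow(M, e, n)              # encryption: C = M^e mod n
assert pow(C, d, n) == M      # decryption: M = C^d mod n recovers the plain text
```

Only (e, n) is published; d stays private, and recovering it from (e, n) requires factorising n.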

11.6 Encryption's Place in a Network


Attacks on a user's or organisation's data transmissions can come from many sources. Consider what
actually happens when data is transmitted: a user types their secret message at the keyboard and this
is then transmitted over a LAN internally, or may go out into the public network and be transmitted via some
packet network or the Internet. There are multiple ingress points at which an attack can gain access to that data.
Passive taps can be put on the LAN to capture data, or placed at the egress points from the local network into
the public network. Once on the public network the data is out of the control of the sender (or the recipient)
as it is being routed across numerous interconnected networks. If the data is sent across a wireless link then
the emitted radiation can be monitored using a suitable receiver, and such monitoring is completely undetectable!
All the discussions so far have been dealing with encrypting a message and sending it over a communication
link with the encryption/decryption functions happening as an application at the far ends of the
communication link. This is known as end-to-end encryption, and in this particular case it obviously happens at the
application layer of the ISO protocol stack. There is another level at which encryption can happen, and
this is known as link encryption.


11.6.1 End-to-End Encryption


We have already mentioned one form of end-to-end encryption, that which happens at the application layer
of the OSI protocol stack. Here, the user data in the packet is secured against observation as shown in Figure
11.9.

Figure 11.9: End-to-end encryption at the application layer

The rest of the data packet, which consists of the overhead of the various header information fields for each
layer, is not secured. This has the implication that even though a user's data is technically secure, the
traffic patterns aren't. What can be discerned from this information? Well, for one thing, who is receiving
this secret information, designated by the IP address. Also, if the malicious Eve is using a packet sniffer
to capture such information, how often this encrypted information is sent and when it is sent can be of
interest to some, especially if this can be correlated with significant events.


Figure 11.10: End-to-end encryption at the transport layer

A slightly different technique is to encrypt at a lower level. This can be done at the transport layer as shown
in Figure 11.10.
Here the TCP PDU is encrypted, which includes the TCP headers and the associated data for the unit. This
has the advantage that the appropriate data is encrypted within an organisation's data network and that user
programs do not have to be modified, as the encryption happens as part of the function of the local host's
protocol stack. This makes things like key management easier. The network layer and lower information is
still visible, and worse, if the data were to jump networks via a router it becomes insecure
within the router, as essentially a new TCP layer connection is set up on the other side of the router to the
adjoining network. If there is no mechanism on that transport network to facilitate encrypted data it will be
sent in the clear, or at the very least within the router the information is insecure. A better method would be
to combine this kind of encryption with application layer end-to-end encryption, so that at least the raw data is
secure, even if the transport and network layer information is not.

11.6.2 Link Encryption


Moving lower in the protocol stack, the encryption method falls into the category of link encryption. It is
called link encryption as the network, link and physical layers are where switching occurs and they represent the
basic link between entities. As data is fed to these layers it is encrypted, and the only un-encrypted
data is the lowest level headers, as shown in Figure 11.11.


Figure 11.11: Link encryption

As data is passed across a network, at each intermediary link node, that is between routers, bridges or switches
(but not hubs), this information has to be decoded on the input and re-encoded on the output. So just as with
the transport layer encryption from before, within the active network element the data is insecure. Also, as
before, better security would implement both this and application layer encryption. The advantage of link
encryption is that it becomes extremely difficult to perform the traffic analysis that could be done against end-to-end
encryption. The destination is hidden within the IP header, the data size is hidden, and frequency analysis
is hard to perform as it is not known who, what or where this information is coming from and going to. The
drawback is that each intermediary network element has to be encryption capable, and sets of keys have to be
maintained for the network entities.
For maximum security it is best to use end-to-end encryption along with link encryption. This will both
protect the user data when it is within a network element and prevent the identities, frequency of
messages and message types from being discovered by any listener on the link.
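The difference in what an eavesdropper can still read under each scheme can be sketched with a toy packet. The `encrypt` stand-in below is not a real cipher; it merely marks a field as unreadable, and the packet fields are illustrative:

```python
# Toy illustration (not real cryptography): a stand-in 'encrypt' marks which
# fields an eavesdropper can no longer read under each scheme.
def encrypt(x):
    return f"<encrypted:{len(str(x))} bytes>"

packet = {"ip_hdr": "10.0.0.1 -> 10.0.0.2", "tcp_hdr": "port 80",
          "data": "secret message"}

# End-to-end (application layer): only the user data is hidden; IP and TCP
# headers stay visible, so traffic analysis remains possible.
e2e = {**packet, "data": encrypt(packet["data"])}

# Link encryption: everything above the lowest-level framing is hidden.
link = {k: encrypt(v) for k, v in packet.items()}

assert "10.0.0.1" in e2e["ip_hdr"]          # destination still observable
assert "10.0.0.1" not in link["ip_hdr"]     # hidden on the link
```

Combining both schemes gives the union of the two protections, at the cost of encrypting twice.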

11.7 Digital Signatures &amp; Authentication


So far we have spoken only about encryption. One thing that must be realised is that although encryption,
whether it be network based encryption at the lower layers of the ISO protocol stack or application based
encryption, guarantees that transmitted messages cannot be easily read (read: nigh on impossible with a strong
enough cipher), it doesn't necessarily guarantee message authenticity. In most cases, alternative methods
have to be used if authentication is a desired feature.
The process of authentication can be thought of at two levels. The first is where a scheme is used as an
authenticator, that is, to produce some value or signature with which to sign a message as being authentic. The
second is where this value, or signature, is then used in some negotiation protocol to enable a
recipient to verify the authenticity of the message.

11.7.1 Cryptographic Checksums


Cryptographic checksums are a common technique used to provide a level of authenticity to a message
transmission. In their basic form they can be considered to be a function that produces some output dependent
on the message. This output, a checksum, is often called the MAC or Message Authentication Code. It
is of the form

    MAC = F(K, M)

where M is the message, K is a secret key known only to sender and receiver and F(K, M) is the authenticator
or MAC. The MAC is a fixed length for any message M. The checksum (MAC) can be appended to a
message. When a recipient receives the message, they can run the message through the same checksumming
routine to obtain a MAC. They can compare this with the MAC that is tagged to the message to see if they are
the same. If the message was altered then it should not generate the same MAC. Hence, a good cryptographic
checksum routine should use a function that offers such characteristics. Further, this function should be
tailorable to a user, just as the encryption routines earlier are, with some form of key.

Figure 11.12: Cryptographic checksum in use

Figure 11.12 illustrates this process. The sender generates a checksum of the message with the use of some
key K, which the recipient is also aware of. Just as with encryption/decryption, a different key will
generate a different checksum for a given message. The message is sent with the checksum attached. In
turn, the recipient receives the message and checksum and runs the message through the
checksum function with the same key K. It is then a simple matter of checking that the checksum generated and
the one attached to the message are the same in order to authenticate the message.
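A modern instance of such a keyed checksum function is the HMAC construction, available in the Python standard library (HMAC itself is not described in the text above; it is used here simply as a concrete F(K, M)):

```python
import hmac
import hashlib

key = b"shared-secret-key"          # K, known only to sender and receiver
message = b"Transfer 100 to Bob"    # M

# Sender computes MAC = F(K, M) and appends it to the message.
mac = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the MAC over the received message with the same key
# and compares; compare_digest avoids timing side channels.
received_mac = mac
assert hmac.compare_digest(
    hmac.new(key, message, hashlib.sha256).hexdigest(), received_mac)

# A tampered message produces a different MAC.
tampered = b"Transfer 900 to Eve"
assert hmac.new(key, tampered, hashlib.sha256).hexdigest() != mac
```

Without K, an eavesdropper cannot produce a valid MAC for a forged message.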
Hash Functions

One form of MAC is the one-way hash function. A hash function is a many-to-one function and, just as with
the notion of the MAC, a hash function can take an arbitrary message of any length and generate a fixed length
hash of that message, often called the Message Digest. It is written as:

    h = H(M)

The hash function digest generator differs from the checksum method in that no key is used to generate the
hash. Having said that, the hash function has to satisfy certain requirements to be useful as a message
authentication system:
The hash can be applied to any message length.

The hash function produces a fixed length of output - the digest.

The digest is easy to compute - enabling hardware and software implementations.

Given a digest h it is computationally impossible to find a message M such that H(M) = h.

For any message M it is computationally impossible to find another message M' such that H(M') = H(M).

Generally hash functions are performed iteratively on sequences of fixed size blocks which are shifted,
rotated and exclusive-ored with other bits from previous iterations.
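The fixed-length property can be observed directly, using SHA-256 from Python's standard hashlib as an example hash function:

```python
import hashlib

# Any message length maps to a fixed length digest (32 bytes for SHA-256).
for msg in (b"", b"short", b"x" * 10_000):
    digest = hashlib.sha256(msg).digest()
    assert len(digest) == 32

# Even a small change in the message produces a completely different digest.
a = hashlib.sha256(b"message 1").hexdigest()
b = hashlib.sha256(b"message 2").hexdigest()
assert a != b
```

The remaining two requirements (pre-image and second pre-image resistance) cannot be demonstrated by running code; they are what make reversing or forging a digest computationally infeasible.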


11.7.2 Digital Signatures


So far, with encryption, it can be seen that a message can be protected from third parties listening in. Also, it
can be seen that with the use of hashing and checksumming functions it can be proven that a message has not
been tampered with whilst in transit from the sender to the recipient. One critical part is missing - the notion
of repudiation. What non-repudiation offers is the avoidance of either party in a message exchange claiming they
received/sent something different. For example consider the following, where Alice has sent a message to
Bob:

Bob claims he received a different message from Alice, one which he has forged and claimed is
authentic by tagging a MAC at the end, using a shared key that both he and Alice know.

Alice can claim that she never sent a message to Bob because Bob, as in the previous point, can generate
different messages; he may also have generated a completely new, but false, message.

Generally, the scenario being illustrated is one where there is not complete trust between the sender
and the receiver. So some mechanism to offer assurances is required - the digital signature - which can:

offer the ability to verify the author and date/time of signing

offer the ability to authenticate the message contents at the time of the signing

provide for the signature to be verified by third parties in case of dispute

There are two types of digital signature, direct and arbitrated.

Direct Digital Signatures


This form of signature is known as direct as it only involves the sender and recipient. There is an assumption
that the recipient knows the public key of the sender, and a digital signature can be formed by the sender
encrypting the message and the MAC with their private key - which only they know. Another version
is to encrypt just the MAC with the sender's private key. Then the entire message is encrypted again, with
the recipient's public key, so that the only person who can decrypt the message is the receiver. This
process is illustrated for the latter case in Figure 11.13. On receipt, after the recipient has decoded
the message with his private key, he can use the sender's public key to access the MAC and
compare.

This scheme protects the sender from any malicious activity by the recipient, because the recipient does not
know the sender's private key and so cannot generate a false message and MAC and claim that is
the message they received. It also protects the recipient, because only the sender knows the private key and so they
must have sent the message.
The scheme does have one failing though: the sender may claim that their private key has been compromised.
This threat can never be eradicated, but it can be somewhat controlled and lessened by the use of some form of
administrative control on private keys.
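The "sign just the MAC" variant can be sketched with deliberately tiny RSA parameters (real keys are far larger, and real schemes pad the full digest rather than folding it down modulo n as done here for illustration):

```python
import hashlib

# Toy RSA pair -- illustrative sizes only, NOT secure parameters.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                          # sender's public key exponent
d = pow(e, -1, phi)             # sender's private key exponent

message = b"Pay Bob 10 pounds"

# Sender: hash the message, fold it below n, and sign with private key d.
digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
signature = pow(digest, d, n)

# Recipient: recover the digest with the public key e and compare it with a
# locally computed digest of the received message.
recovered = pow(signature, e, n)
local = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
assert recovered == local       # signature verifies: only d could produce it

# A forged message would almost certainly yield a different digest, so the
# comparison above would fail and the forgery would be detected.
```

Anyone holding the public key can verify the signature, which is exactly what allows third-party arbitration in a dispute.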


Figure 11.13: Using public key encryption for digital signatures

Arbitrated Digital Signatures

In arbitrated digital signature schemes a middle-man is used to distribute messages and information between
sender and receiver. The arbiter, when it receives a message from a sender, verifies that the sender
is who they say they are. If the sender's credentials check out then the message is forwarded to the
recipient with a time-stamp and an authenticity guarantee. As there is now a third party involved, one that
is trusted by both sender and receiver, this negates the possibility of the sender claiming a message was never
originated by them.
The crucial party in this scheme is the arbiter. Both the sender and recipient have to explicitly trust this
party. There are various roles that the arbiter can play with regard to the amount of information it sees and
what it forwards. This is summarised in Table 11.1, where S is the sender, R is the recipient and A is the
arbiter.

In the first case the sender S sends the message and a signature to the arbiter, with the signature encrypted with
a secret key shared between S and the arbiter. The arbiter verifies S to be bona fide and sends this information on,
together with a time-stamp, all encrypted with a secret key shared between the arbiter and the recipient R.

In the second case the arbiter does not see the message that is to be transmitted, as the sender S encrypts
the message with a secret key shared with the recipient R, but sends its credentials and an encrypted
signature to the arbiter. Upon validation of the credentials this information is forwarded, along with a
time-stamp, all encrypted with the secret key shared between the arbiter and R.

In the last case public key encryption is used, where the message is double encrypted, first with
S's private key and then with R's public key. The credentials of S are tagged to this double encryption,
which is then, in turn, encrypted again with the private key of S. This is forwarded to the arbiter together with a
clear statement of S's identity. The arbiter verifies S by decrypting the data to recover the encrypted ID token, and
on verification this is forwarded to the recipient, encrypted, along with a time-stamp, with the
arbiter's private key.

The time-stamps attached to the messages indicate the freshness of the messages and avoid replay
attacks. Of the three methods the last is preferred as it offers the most advantages.


1. (a) S -> A : M, E(K_SA, [ID_S, H(M)])
   (b) A -> R : E(K_AR, [ID_S, M, E(K_SA, [ID_S, H(M)]), T])

2. (a) S -> A : ID_S, E(K_SR, M), E(K_SA, [ID_S, H(E(K_SR, M))])
   (b) A -> R : E(K_AR, [ID_S, E(K_SR, M), E(K_SA, [ID_S, H(E(K_SR, M))]), T])

3. (a) S -> A : ID_S, E(PR_S, [ID_S, E(PU_R, E(PR_S, M))])
   (b) A -> R : E(PR_A, [ID_S, E(PU_R, E(PR_S, M)), T])

Here E(K, X) denotes X encrypted under key K, H(M) the hash of M, T a time-stamp,
K_SA a secret key shared by S and A, and PR/PU private and public keys respectively.

Table 11.1: Different methods of arbiter working

11.8 Network Based Attacks


Recently there have been several high profile network based attacks on large, prominent corporations. These
attacks, perpetrated by hackers, exploit weaknesses within the well-established protocols and software
elements of the Internet and bring to light the futile attempts of system administrators to secure systems
from determined attack. The benevolent hacking establishment puts forward the argument that they are
trying to help network providers by showing up the loop-holes, but the malign hacker has no agenda other
than to cause havoc.
One such hacker is called Mafiaboy, a name which has been on the lips of many federal agents
in the US. He instigated the high profile attacks on CNN.com, eBay, Yahoo and others. Mafiaboy launched
these attacks supposedly on his own, and the ability of a single individual to bring such systems to their
knees is a matter for great concern.
The attack used by Mafiaboy is akin to driving to your local store and parking your car in front of the doors
of the premises - thus denying that service to others. Hackers like Mafiaboy perpetrate what are known
as Denial of Service (DOS) attacks on target computer systems. The targeted system is flooded, cannot
answer legitimate requests for service and thus is unavailable.
These attacks may seem like a malicious prank - Mafiaboy, at the time of writing, is a 15 year old
Canadian school child who bragged about his exploits - but in the case of Yahoo and eBay, who depend
on their sites generating revenue through visits and advertising, the down-time can have serious financial
implications.
Hackers are an ingenious lot. The misguided talents of these individuals are not inconsiderable, and the
general availability of source code for operating systems, applications, protocol stacks and implementations
offers them ample opportunity for finding weaknesses in systems, be it hardware or software.
More often than not these technological anarchists are content just to break into systems, collecting evidence
of their break-ins to act as proverbial scalps when they boast to their fellow hackers ("hacking" being
a term coined by the media for such illegal pursuits). Others perform
defacement of web sites or tinker where they are not welcome, but the denial-of-service attack can leave
not only an Internet facing computer unavailable (e.g. a web server) but, as was the case with a targeted
American university, leave the entire portal access for a network unavailable, disallowing traffic to flow
either in or out of the affected computer networks.
In this section we shall see how the network can be exploited and look at a number of the tools available
to the hacker for performing DOS attacks.

11.8.1 TCP/IP Connection Set-up Primer


The underlying protocol of the Internet is TCP/IP, whose TCP component is a connection-orientated and reliable protocol. As
with most communication systems, a sequence of steps is followed to establish a communication session.
As an example, Figure 11.14 shows the sequence for a user accessing a website via a URL.
Figure 11.14: Three-way handshake process

When a user selects the URL a data packet is sent to the destination server. The packet from the client
is a SYN packet. The SYN packet is used to synchronise and set up a communication channel and is
a normal data packet to send. It contains information such as who the sender is and other pertinent
connection information.

When the server at the remote site receives the SYN packet it responds with a SYN/ACK data packet.
The SYN/ACK packet is an acknowledgment to indicate to the client that the server is ready for further
communication.

When the client receives this SYN/ACK packet it responds again with a reply ACK data packet,
acknowledging the acknowledgment. Both client and server are then ready to communicate, with the necessary
structures implemented at either end of the link, and the web page is transmitted. The whole process
is known as a three-way handshake.
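The handshake itself is carried out by the operating system's TCP implementation; a minimal loopback client and server in Python make the sequence visible (the reply text is illustrative):

```python
import socket
import threading

# A loopback server: accept() completes once the kernel has finished the
# SYN, SYN/ACK, ACK exchange for an incoming connection.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

def serve():
    conn, _ = server.accept()          # handshake already complete here
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")
    conn.close()

threading.Thread(target=serve).start()

# The client's connect() sends the SYN and blocks until the handshake is done.
client = socket.create_connection((host, port))
reply = client.recv(1024)
client.close()
server.close()
print(reply.split(b"\r\n")[0].decode())   # → HTTP/1.0 200 OK
```

Note that neither side's code mentions SYN or ACK at all: the three-way handshake happens entirely inside the protocol stack, which is precisely why its resource usage can be attacked from outside.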
To grant access to the TCP/IP stack for multiple services such as web serving, ftp, telnet etc., a port system
is implemented: different network services are accessed via different ports, but common services are
implemented on the same port across different platforms.
For example the WWW service runs on port 80 whether you are using a Unix box, a Windows box or a
SunOS system. These transport layer port numbers, along with the IP addresses for a connection (sender and
receiver), uniquely identify a connection.
A host system listens on various ports of interest and creates a software structure known as a socket for
each incoming connection. A socket holds the information related to the local end of the communications
link, such as the protocol used, state information, addressing information, connection queues, buffers and flags.
Each of these structures consumes a finite amount of the resources available on the host, and so an inexhaustible
supply of socket connections is not realisable.
Returning to the example of setting up a connection: on an incoming request the data packet is demultiplexed
up the protocol stack (TCP/IP) until layer 4, the transport layer (where TCP is located). The TCP/IP stack
retrieves all the necessary information from the headers of the data packet. Assuming everything is in order,
the stack initialises the necessary data structures (creating a new socket) within the host for the connection.
It replies to the received SYN with the SYN/ACK message. The client to reply to is recovered
from the IP addressing information in the incoming data packet.
It is now waiting for the return ACK message from the client, after which it knows that the connection
is fully established. If the TCP/IP stack were to receive a reset message (RST) whilst it is listening,
it would de-allocate the socket and memory structures for the connection. Whilst waiting for the reply
acknowledgement, the final part of the three-way handshake, the TCP/IP stack is in the SYN RCVD state
for the socket that it set up.
If there is no response to the SYN/ACK message the TCP/IP stack will keep trying until a timeout is reached,
whereupon it will clear down the connection and recover all the resources used.

11.8.2 The SYN Attack


The basis for many attacks is the SYN attack, where the method is to launch multiple connection
requests against the target machine. The attacker will send a connection request to the target machine. The
attacker is required to perform only one extra action to make the attack possible - and that is to spoof the client
IP address. Spoofing an IP address is where the IP address is faked or masqueraded, that is, it is not the bona fide
address. Further, it is not good enough for the spoofed IP address merely to be incorrect for the machine that is
issuing the attack; the spoofed IP has to be an unreachable IP address.
The spoofed IP has to be unreachable because if the spoofed address is a reachable host then the SYN/ACK
packets will be sent by the target system to that IP address. The receiving host will receive packets for
an unknown connection - it not having established that link - and will in fact reply to those packets by sending
a reset (RST) message back to the target. The target, on reception of the RST message, will then close down
the connection. A small amount of traffic and a dropped connection is the result.
On the other hand, with the spoofed IP being unreachable, the host will attempt a reply, instigating a reverse
address lookup request. As there is no entry for the spoofed address, the reply packets will be
passed out to the upstream network to try and locate the spoofed IP. All the time the connection socket is
held active as the TCP/IP stack awaits an ACK back from the client. It should be clear now that if the target
is flooded with such requests it will soon use up its available number of connections, as the time-out can
be in the order of half a minute, or longer on a poorly implemented TCP/IP stack. The
service which is listening on that port is effectively tied up waiting for ACK messages from non-existent or
unreachable hosts, and whilst it is tied up it cannot service any proper requests, so the service is denied
to legitimate users.
This kind of attack can be orchestrated from a single machine, or more importantly, a single process. The
power of the attack can be further magnified by a concerted effort from many individuals, or by a single
individual who is tied in to several client systems.
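The exhaustion mechanism can be sketched with a simple model of the half-open connection queue (the backlog size and timeout below are illustrative, not taken from any particular stack):

```python
from collections import deque

BACKLOG = 128           # half-open connections the listener will queue
TIMEOUT = 30            # seconds a SYN_RCVD socket is held (illustrative)

half_open = deque()     # sockets in SYN_RCVD state awaiting the final ACK

def on_syn(now, src_ip):
    # Expire entries whose timeout has passed, then try to queue the new one.
    while half_open and now - half_open[0][0] > TIMEOUT:
        half_open.popleft()
    if len(half_open) >= BACKLOG:
        return False                 # queue full: further SYNs are dropped
    half_open.append((now, src_ip))  # hold resources; send SYN/ACK and wait
    return True

# An attacker sending 200 spoofed SYNs in one second fills the backlog...
accepted = sum(on_syn(t * 0.005, "10.0.0.99") for t in range(200))
assert accepted == BACKLOG
# ...so a legitimate client arriving moments later is denied service.
assert on_syn(1.0, "192.0.2.7") is False
```

Because the spoofed sources never send the final ACK, every queued entry lives out its full timeout, and a modest flood keeps the queue permanently full.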

11.8.3 ICMP Attacks


The Internet Control Message Protocol (ICMP) is a protocol carried within IP packets and is used for the
transmission of control and error messages. Typically commands such as ping, a useful tool to test if a host
is alive and responding, issue ICMP data packets. IP allows a maximum packet size of 65535 bytes
(or octets); the makeup of a packet is a header of at least 20 bytes, with the rest of the packet being data.
Larger packets are split up by the underlying layers of the stack into smaller packets for transmission,
which are then suitably re-assembled at the destination system. Usually packets are transmitted much
smaller than the maximum size; the limit is known as the MTU or Maximum Transmission Unit size, which in
most cases is about 1500 bytes.
Ping, which issues ICMP ECHO requests, is embedded within the IP packet and by standard consists of 8
bytes of header information followed by the data of the ping packet. This is the packet that will be echoed
by the remote host. This leaves the maximum size of the ping data as 65535 (maximum IP
packet) - 20 (IP header) - 8 (ICMP ECHO header), which is 65507 bytes.
What happens at the remote end when receiving fragmented data packets is that they are temporarily stored in a
buffer. The TCP/IP stack usually does not re-assemble the packet until it has received all of the fragments
that make up the whole packet. It knows how to re-assemble the packet as the fragments contain not
only the data but also an offset indicating where the data belongs.
In an ingenious tack, malicious programs can forge packets such that the offset plus the data size
of a fragment add up to more than the maximum allowed IP packet size of 65535 bytes.
Machines usually implement a receive buffer sized to the maximum packet size, and the
machine re-assembles the original packet by writing the data it receives into that buffer. With forged header
information and a wrong offset a buffer overflow can occur as the actual data overwrites beyond the
now undersized buffer. In a poorly implemented TCP/IP stack this will crash the stack, and usually the
kernel too. Such programs have received the nomenclature Ping of Death.
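The overflow condition can be expressed as a simple check on fragment offsets (a simplified model: real IP fragments carry offsets in units of 8 bytes, and the fragment pairs below are made up for illustration):

```python
MAX_IP_PACKET = 65535  # maximum size of an IP packet in bytes

def reassembly_overflows(fragments):
    """fragments: list of (offset, length) pairs from one IP packet.
    Returns True if any fragment would write past the maximum packet
    size -- the condition a 'Ping of Death' forges deliberately."""
    return any(offset + length > MAX_IP_PACKET for offset, length in fragments)

# A normal large ping: the last fragment ends exactly at the packet limit.
ok = [(0, 1480), (1480, 1480), (65528, 7)]
assert not reassembly_overflows(ok)

# Forged final fragment: offset + size exceeds 65535, overrunning a buffer
# sized to MAX_IP_PACKET on a naive implementation.
evil = [(0, 1480), (65528, 100)]
assert reassembly_overflows(evil)
```

A robust stack performs exactly this bound check before copying fragment data into the reassembly buffer; the vulnerable stacks of the era did not.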

11.8.4 Simple Attackers : Smurf and Fraggle


The Smurf attack obtains its name from its source program. It utilises normal ICMP ECHO requests, not
the kind from the ICMP attack that causes buffer overflow. It is an extremely powerful yet relatively
simple attack to perpetrate.
The attacker spoofs their IP address to be the address of the targeted system. The attacker then proceeds
to ping the broadcast address of the network on which they are hosted, or which they are going to use in their
attack. Generally all hosts will respond to a broadcast packet and, depending on the type of packet that has
been received, reply with information. An example of proper use of such a broadcast request is the DHCP
service used for dynamic IP address allocation on networks, where a host sends out a broadcast packet
to locate a DHCP provider.
In the case of an attack though, the data packet is an ICMP ECHO, or ping packet, and the intermediary
systems that have been pinged now respond to the ping by replying with the received ping packet, as is
normal operation. The catch is that the sender's IP address has been spoofed with that of the target system,
and the ping replies are sent to that target system. In this scenario the intermediary hosts are acting as
amplifiers and reflectors, in that from the one attacker station the traffic has been amplified and sent on to
the target system.

Figure 11.15: Smurf reflector type attack

Figure 11.15 shows this kind of attack in operation. The number of hosts that respond to the broadcast ping
from the attacker multiplies the traffic. As an illustration consider a simple example:

Example 11.1 An attacker sends 500Kb/s of ping packets to an intermediary network of, say, 100 reflectors.
The reflectors each respond to the ping flood, replying to the spoofed IP address of the target system, which
is now flooded with 100 * 500Kb/s, that is 50Mb/s, of ping traffic.
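The amplification arithmetic of the example can be checked directly (the figures are the illustrative ones from Example 11.1):

```python
# Amplification arithmetic from Example 11.1 (illustrative figures only).
attacker_rate_kbps = 500        # ping traffic the attacker sends
reflectors = 100                # hosts answering the broadcast ping

# Every reflector echoes the full stream back to the spoofed (target) address.
target_rate_kbps = attacker_rate_kbps * reflectors
assert target_rate_kbps == 50_000          # 50 Mb/s arriving at the target
print(f"{target_rate_kbps / 1000} Mb/s")   # → 50.0 Mb/s
```

The amplification factor is simply the number of responding hosts, which is why open broadcast-responding networks make such effective reflectors.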

It is important to understand that generally there are two victims in this kind of attack: the reflector network
and the actual target site. Smurf is the ICMP version of this kind of attack and Fraggle is the UDP version.

11.8.5 Coordinated and Distributed Network Based Attacks


So far we have only discussed attacks originating from a single machine, or from a coordinated group of attackers
on disparate machines. The attacks discussed earlier in Section 11.8 were actually perpetrated in a much more
ingenious fashion, possibly involving thousands of hosts in the attack. These attacks are known as Distributed
Denial Of Service (DDOS) attacks.
These attacks, like Smurf and Fraggle, are performed with software tools; some examples of such programs
are Trin00, Tribe Flood Network, TFN2k and Stacheldraht. These programs all have a similar
modus operandi and differ only slightly in communication ports and certain other less important features.
The Build Up and Mobilisation
With these attacks the actual attack is not instantaneous. The attacker first of all has to go through a sequence
of set-up steps before they are ready to mount an assault on a target system. First of all the attacker needs to
break into an external host. Usually such a host is set up as a master and code repository. How the attacker
gains access to the host varies, but access will most probably have been gained by the use of sniffer programs
that listen in on well-known ports such as telnet or ftp. The attacker may intercept clear text passwords in transit
or exploit security loop-holes within various services.
Once access to a master system has been gained, various tricks are employed to hide the existence of the new,
unauthorised user (the attacker). These include using tools known as root-kits. A root-kit contains programs
that replace the standard system tools and are set up to hide the processes that are owned by the attacker.
They will most likely change programs such as ps, top and other network type commands. So, for example,
if a system administrator runs ps on a compromised host they wouldn't see an illegal remote connection by
the attacking user.
Once a compromised host has been obtained the next phase starts. The attacker, using some form of remote control of the compromised master host, will initiate a search for other desirable systems. Such systems will be those that exhibit, or are known to exhibit, security loopholes, backdoors and compromises. A list of discovered hosts is generated, and the next steps start to take on a level of automation. The list generated in the previous step is used as an address book of vulnerable hosts, so that the automated exploitation tools can access these hosts by simply iterating through the list. The tools include buffer-overrun exploiters for network services such as ftp, RPC etc. Once the systems have been compromised, software agents report back to the attack master that they have succeeded.
As yet no public attack has been made on a target system; the only activity so far has gone into creating a network of compromised hosts to which the attacker has high-level access.
With a pool of readily accessible hosts, the attacker then needs to prepare (or already have prepared) suitable programs that can be executed on the compromised systems during the out-and-out attack on a target host. There are two kinds of executables that can be inserted into a controlled host: handler binaries and daemon (or agent) binaries. The actual attack programs, at the network front line so to speak, are the daemons. The result is a hierarchy of programs in which many daemons are controlled by a handler system, which in turn is controlled by a master system.
The final set-up phase of the attack happens when a script is generated and automatically run that transfers the files from the master system to the handlers and other compromised hosts, readying them for an attack. Various stealth tricks may be employed by the programs to avoid detection of this activity. These include having a process wake up, execute and then sleep for a set period of time, so as not to raise suspicion through unusual network activity.
The power of these set-up phases is enormous: after the initial intrusion on the master host, the process is automated, with each captured host being used to attack further hosts to gain entry and control. Figure 11.16 shows what a network of captured hosts looks like. Once the attacker has the systems under their control, they are ready to launch multiple packet-based attacks on the target host system.

CHAPTER 11. NETWORK SECURITY


Figure 11.16: A DDOS ready network

DDOS Attack
The methods of attack that the compromised network can initiate include all of the attack strategies discussed earlier: ICMP ECHO attacks (ping flood), SYN flood attacks and Smurf or Fraggle attacks. Attacks are set off from the command line at the master station, labelled client in Figure 11.16. These stations are under the remote control of the attacker. With a single command that is passed from the controlling clients to the handlers, and in turn to the front-line daemons, the target host will be flooded with masses of traffic from seemingly many different locations. Unlike the earlier attacks, which are at best restricted to the subnet level, the compromised hosts could be from completely different networks. What is important to realise here is that the daemons themselves aren't necessarily pinging the target host, but could in fact be sending out Smurf or ping flood type attacks using the compromised host's subnet as the reflector, an example of which is in Example 11.1. It doesn't take too much imagination to picture the amounts of traffic generated in such an instance.

11.8.6 Countering DDOS Attacks


The spread of such DOS and DDOS tools is disturbing, and their ability to bring a network to its knees is frightening to network users, providers and administrators alike. Countering such attacks calls for a multi-layered defence strategy.
Some of the attack strategies or propagation methods rely on known weaknesses within OS implementations and/or TCP/IP stack implementations. It is all too easy for a system to fall behind the latest release level and not be adequately maintained by a system maintainer. Of course, the latest patches should always be applied to any and all software used, but this is often not the case. As weaknesses are discovered in software products, they are usually closed with the application of simple software patches.


Networks should implement various forms of IP filtering at ingress points. Many of the attacks rely on IP spoofing. Typically the spoofed address is that of a target host, which is usually on a different subnet.

Figure 11.17: Ingress filtering

Figure 11.17 shows a scenario where the attacker is on a particular subnet (a 192.168 class B address scheme). If the attacker on the given subnet spoofs an illegal address (that of the target system, which is on a completely different subnet), the router of ISP A should detect that the IP address is incorrect. Attacks such as Fraggle and Smurf are powerful, but they rely on spoofed IP addressing being allowed onto the network. A simple rule implemented at the router, looking something like:

If packet comes from a legal address in the subnet then
    forward packet
Else
    deny packet

would ensure that spoofed packets aren't injected into the network. Also, if logs are maintained, then evidence of strange behaviour on an ingress link is retained at the ISP site.
Another typical spoofing technique is to forge the IP addresses of packets so that they appear to originate from the target network. So the target's ingress router should also be configured not to allow traffic into the target's subnet that claims to have originated from the target subnet but is in fact coming in from, say, a neighbouring network.
The malicious ICMP packets of ping-of-death and the ICMP ECHO communication packets of the DOS agents are more difficult to counter. The very useful tools ping and traceroute use ICMP ECHO packets in their everyday operation, and denying access completely to such traffic would break these tools and, no doubt, some others as well.
One alternative to blocking all ping-like traffic is to block only fragmented pings. This will allow the standard 64-byte test ping and disable ping-of-death attacks. For the attacks which use ICMP packets as their communication paths (the DDOS tools), malicious traffic detection and eradication becomes extremely difficult and involves third-party software. As the control messages for the agents are embedded within ICMP ECHO packets, the only clue a system will receive that something is amiss is a sudden rise in that type of packet. Intrusion Detection Systems (IDS) can do this kind of traffic profiling. Throttles could also be placed on the ingress side of a network so that if the ICMP ECHO traffic breaks a certain limit, then those kinds of packets are dropped. This is a tricky metric to generate and achieve correctly.
Throttling connections will also work for the SYN flood attack. Public access servers have capacities which are easily measured, and from which the number of connection attempts they should be able to handle can be figured. By throttling the SYN ingress rate, the unruly consumption of resources during a SYN attack can be decreased.
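One common way to implement such a throttle is a token bucket. The sketch below is illustrative (the rate and burst values are made up); a real router would keep one such bucket per ingress interface, or per source, for SYN or ICMP ECHO packets.

```python
import time

class TokenBucket:
    """Crude ingress throttle: admit at most `rate` packets per second,
    with bursts of up to `burst` packets. Picking the right threshold in
    practice is the hard part, as noted in the text."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def admit(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # over the limit: drop the SYN or ICMP ECHO packet

# A sudden burst of 10 packets against a bucket allowing a burst of 5:
bucket = TokenBucket(rate=1, burst=5)
results = [bucket.admit() for _ in range(10)]
print(results.count(True))   # 5
```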
Finally, there are numerous tools that can be employed to see if a particular host has been compromised. These involve searching for fingerprints of the attack agents, looking for well-known strings that are embedded in the binaries. Some of these are available at:
http://staff.washington.edu/dittrich/misc/ddos_scan.tar
http://staff.washington.edu/dittrich/misc/sickenscan.tar
The FBI has a site where scanning tools can be downloaded for various platforms (but not Windows) at http://www.fbi.gov/nipc/trinoo.htm, and there are numerous others.
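The core of such a scanner is simple: search each file for byte strings known to occur in the agent binaries. A minimal sketch in Python; the signature strings below are placeholders, not the actual byte patterns real scanners ship with.

```python
from pathlib import Path

# Placeholder signatures. Real scanners carry lists of strings extracted
# from known Trin00/TFN/Stacheldraht binaries.
SIGNATURES = [b"example-handler-password", b"example-agent-banner"]

def scan_file(path: Path) -> list:
    """Return the known-agent signatures found inside one file."""
    data = path.read_bytes()
    return [sig for sig in SIGNATURES if sig in data]

def scan_tree(root: Path) -> dict:
    """Walk a directory tree and report files containing any signature."""
    hits = {}
    for p in root.rglob("*"):
        if p.is_file():
            found = scan_file(p)
            if found:
                hits[str(p)] = found
    return hits
```

Against a stripped or recompiled agent the strings may change, which is why such fingerprint scanning is only one layer of the defence.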
Long Term Solutions
More long-term solutions exist, or at least have been proposed. At a recent North American Network Operators Group meeting, Robert Stone of UUNet proposed a network overlay for IP where dubious or interesting packets can be quickly and efficiently routed to specialist investigation routers that examine their contents. This would enable quicker and simpler tracing of attacking connections, something that is at the moment difficult to do.
Steven Bellovin has proposed, via a white paper through the Internet Engineering Task Force, a new set of ICMP messages that would enable a trace-back of packets. Every so often routers would emit ICMP packets with tracing information, similar in operation to the call tracing of malicious phone calls. This would help identify originating hosts on a heavily used link, where the traffic increase from a subnet may indicate a DOS attack. Further information can be found at:
http://www.research.att.com/~smb/papers/draft-bellovin-itrace-00.txt.
For a definitive source of information, readers should go to David Dittrich's website at the University of Washington. He has been instrumental in the analysis of the recent DOS tools featured earlier and has been working with the CERT Coordination Centre on raising awareness of DOS attacks. He can be found at http://staff.washington.edu/dittrich/misc/ddos/.

11.9 Firewalls
The Internet is a world-wide network of networks that use the TCP/IP (Transmission Control Protocol/Internet Protocol) protocol suite for communications. The Internet has been growing steadily and includes educational institutions, government agencies, commercial organizations and international organizations. In the 1990s the Internet underwent phenomenal growth, with connections increasing faster than in any other network ever created (including the telephone network). Many millions of users are now connected to the Internet.
There are a number of services associated with TCP/IP and the Internet. The most commonly used service
is electronic mail (e-mail), implemented by the Simple Mail Transfer Protocol (SMTP). Also, TELNET
(terminal emulation), for remote terminal access, and FTP (file transfer protocol) are used widely. The
following is a brief list of the most common services:
SMTP - Simple Mail Transfer Protocol, used for sending and receiving electronic mail.
TELNET - used for connecting to remote systems via the network; uses basic terminal emulation features.
FTP - File Transfer Protocol, used to retrieve or store files on networked systems.
DNS - Domain Name Service, used by TELNET, FTP and other services for translating host names to IP addresses.
GOPHER - a menu-oriented information browser and server that can provide a user-friendly interface to other information-based services.
WAIS - Wide Area Information Service, used for indexing and searching within database files.
WWW/HTTP - World Wide Web, a superset of FTP, GOPHER, WAIS and other information services, using the hypertext transfer protocol (HTTP).
RPC-based services - Remote Procedure Call services such as NFS (Network File System), which allows systems to share directories and disks so that a remote directory or disk appears to be local, and NIS (Network Information Services), which allows multiple systems to share databases (e.g. the password file) to permit centralized management.
X Window System - a graphical windowing system and set of application libraries for use on workstations.
rlogin, rsh and other "r" services - employ a concept of mutually trusting hosts for executing commands on other systems without requiring a password.
There are problems with services such as these being open to the public, even though they were originally designed to be that way in an open, interconnected network. Malicious users can exploit weaknesses in such services to gain access to a machine or network of machines. A mechanism that attempts to close off these weaknesses is the firewall.

11.9.1 What Constitutes a Firewall


A firewall is a mixture of hardware, software and security policy which is implemented in a network to control access from machines outside the network to machines inside it, and possibly in the reverse direction too. See Figure 11.18.
Earlier it was pointed out that a firewall is a mixture of services, hardware and software. More specifically
the components of a firewall are:



Figure 11.18: A firewall resides between external networks and internal networks

network policy
authentication mechanisms
packet filtering
application gateways
A particular firewall solution may contain some of these components, or try to implement all of them.
Network Policies
The network policy exists on two levels. A high-level policy defines which services are to be available or not available, and the correct use of such services. This is often known as a service access policy, as it can entail a detailed description of service availability, such as a restricted telnet service. A low-level policy describes how such high-level policies are to be implemented. Generally, one of two kinds of rule applies for each service:
1. allow any service that has not been expressly forbidden
2. deny any service that has not been expressly allowed
It should be clear that the latter option is the more desirable to implement as it leaves less room for error
where a particular service may have been overlooked.
Authentication Mechanisms
There may be legitimate users outside of the protected network who require access to network services, for example a telnet session from outside the local domain. Services such as these often send their passwords in clear-text format, which can be readily captured by devices known as packet sniffers. The compromised password can then be used to gain a foothold on the remote system. To combat this, elaborate authentication techniques may have to be used before such traffic is allowed.
One of the more advanced authentication devices is the one-time password system. A smartcard or authentication token is used to generate a response that the host system can use in place of a traditional password. The token or card works in conjunction with software or hardware on the remote host. It has the advantage that the response generated for a session is unique for every login, so if an attacker were to capture the session authentication data it would be useless.
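As an illustration of the idea (not of any particular commercial token), the sketch below derives single-use codes in the style of HMAC-based one-time passwords (RFC 4226): token and host share a secret and a counter, both advance the counter on each login, and a captured code is useless for any later session. The secret shown is a placeholder.

```python
import hashlib, hmac, struct

def one_time_code(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a single-use login code from a shared secret and a counter,
    following the HMAC/dynamic-truncation scheme of RFC 4226."""
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# Each login uses the next counter value, so every code differs.
secret = b"example-shared-secret"   # illustrative; provisioned on the token
print(one_time_code(secret, 0), one_time_code(secret, 1))
```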
Another mechanism is to use high-security versions of common services, for example secure shell (ssh) in place of the remote shell function on Unix-like systems. SSH negotiates a valid session key when the service is accessed, enabling an encrypted link.

Packet Filtering
IP packet filtering occurs at the network gateway using a packet filtering router. A packet filtering router
forwards or denies access to packets based on one or several different fields of the PDU:
source IP address
destination IP address
TCP/UDP source port
TCP/UDP destination port
The addition of TCP or UDP port filtering to an IP address filtering service offers great flexibility. A system administrator can block packets that are not destined for certain publicly available services, and in that way hide the inner network hosts from probing or attack. For example, the TELNET daemon usually resides on port 23, and the SMTP daemon on port 25. These can be specified for a particular host as being public and thus, if a network policy of denying all unspecified traffic is implemented, no other host on the internal network should have a visible similar port. The administrator can then concentrate on making sure those allowed services are as secure as possible.
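A first-match filter over those four fields, with the safer default-deny policy described earlier, can be sketched as follows. The rule set and addresses are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    """One filtering rule; a field of None means 'match anything'."""
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    src_port: Optional[int] = None
    dst_port: Optional[int] = None
    action: str = "deny"

def filter_packet(rules, packet) -> str:
    """Return the action of the first rule matching the packet's fields;
    if no rule matches, fall through to deny (default-deny policy)."""
    for r in rules:
        if all(getattr(r, f) is None or getattr(r, f) == packet[f]
               for f in ("src_ip", "dst_ip", "src_port", "dst_port")):
            return r.action
    return "deny"

# Illustrative policy: expose only SMTP (25) and TELNET (23) on one host.
rules = [
    Rule(dst_ip="192.0.2.10", dst_port=25, action="forward"),
    Rule(dst_ip="192.0.2.10", dst_port=23, action="forward"),
]
print(filter_packet(rules, {"src_ip": "198.51.100.7", "dst_ip": "192.0.2.10",
                            "src_port": 40000, "dst_port": 25}))   # forward
print(filter_packet(rules, {"src_ip": "198.51.100.7", "dst_ip": "192.0.2.10",
                            "src_port": 40000, "dst_port": 80}))   # deny
```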

Application Gateways
Packet filtering devices on their own can cause some problems, particularly with services where random port allocation may occur (such as RPC); they can also be complex and difficult to maintain. One way around this is the use of software applications that facilitate the forwarding of packets for services such as telnet, ftp etc. Such applications are known as proxy servers, and the systems that run these services are known as application gateways.
Proxy services allow only those services through for which there is a proxy. So, if an application gateway
contains proxies for FTP and TELNET, then only those services will be allowed into the protected internal
network. Other advantages include the ability, for some proxy services, to filter use of the protocol. For
example some proxies can filter FTP connections and deny use of the FTP put command and so guard
against a user being able to write to an anonymous FTP server.
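The FTP example amounts to a command check inside the proxy. A minimal sketch: the blocked names are the RFC 959 upload commands behind the user-level put (STOR, plus STOU and APPE), while the filtering policy itself is illustrative.

```python
# Upload commands from RFC 959 that a write-denying FTP proxy might refuse.
BLOCKED_COMMANDS = {"STOR", "STOU", "APPE"}

def allow_ftp_command(line: str) -> bool:
    """Allow an FTP control-channel command line unless it is an upload."""
    command = line.strip().split(" ", 1)[0].upper()
    return command not in BLOCKED_COMMANDS

print(allow_ftp_command("RETR notes.txt"))    # True: downloads pass through
print(allow_ftp_command("STOR exploit.bin"))  # False: uploads are refused
```

Because the proxy understands the protocol, it can make such per-command decisions that a pure packet filter, which sees only addresses and ports, cannot.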


11.9.2 Different Configurations of a Firewall


There are many different types of configuration of a firewall but generally they fall into one of the following
categories:
1. packet filtering firewall
2. dual-homed gateway firewall
3. screened host firewall
4. screened subnet firewall
Packet Filtering Firewall
The packet filtering firewall is perhaps the most common and easiest to employ for small, uncomplicated
sites. However, it suffers from a number of disadvantages. As pointed out earlier, a packet filtering router
is installed at the gateway between the subnet (private local network) and the external network (usually the
Internet). This router is then configured with packet filtering rules to block or filter protocols and addresses.
The internal hosts in the subnet usually have direct access to the Internet while all or most access to the
subnet systems from the Internet is blocked. It is possible to configure the router to allow selective access
to systems and services, depending on the network policies. Usually, inherently dangerous services such as
NIS, NFS, and X Windows are blocked.
A packet filtering firewall suffers from the same disadvantages as a packet filtering router; however, these can become magnified as the security needs of a protected site become more complex and stringent. The disadvantages include the following:
there is little or no logging capability, thus an administrator may not easily determine whether the
router has been compromised or is under attack
packet filtering rules are often difficult to test thoroughly, which may leave a site open to untested
vulnerabilities
if complex filtering rules are required, the filtering rules may become unmanageable
each host directly accessible from the Internet will require its own copy of advanced authentication
measures

11.9.3 Dual-Homed Gateway Firewall


This kind of firewall consists of a highly secured proxy server. It is partitioned with one side facing the external network and the other the internal subnet. This is shown in Figure 11.19.
The proxy has two network connections, as mentioned earlier, and IP forwarding has to be disabled so that packets from the subnet cannot physically pass to the external network and vice versa. All accesses from the internal subnet to the external network are handled by the proxy software on the secured host. An IP filtering router acts as the gateway to the external network, and incoming packets for provided services are routed solely to the proxy server to handle.

Figure 11.19: A dual homed firewall with a proxy

This protects the subnet hosts and, further, these packets have no way of spilling onto the internal network. An example set-up for a dual-homed gateway would be to provide proxy services for TELNET and FTP, and a centralized e-mail service in which the firewall accepts all site mail and then forwards it to site systems. Because it uses a host system, the firewall can house software requiring users to use authentication tokens or other advanced authentication measures. The firewall can also log accesses, and log attempts or probes to the system that might indicate intruder activity. This configuration has some inflexibility, which could be a disadvantage to some sites: since all services are blocked except those for which proxies exist, access to other services cannot be opened up; systems that require such access would need to be placed on the Internet side of the gateway.

11.9.4 Screened Host Firewall


A screened host firewall setup is similar to the dual-homed gateway firewall. It has an IP packet filtering
router and a proxy serving host. Unlike the dual-homed though, the screened host firewall proxy server has
only one interface and so the network segments of the internal subnet and the external network are not totally
unconnected. All accesses from the subnet to the external network are, as before, handled by the proxy server, and all incoming accesses from the external network are passed by the router to the proxy server. Any other packets for the internal network are generally refused, except in a few special circumstances.
This set-up offers more flexibility, though perhaps less security, as it permits the router to pass certain trusted services around the application gateway and directly to site systems. For example, the trusted services might be those for which proxy services do not exist, and might be trusted in the sense that the risk of using them has been considered and found acceptable.

11.9.5 Screened Subnet Firewall


A variation of the dual-homed firewall is the screened subnet firewall. This kind of firewall creates a separate
subnet between the external network and the internal network - a kind of buffer zone. This is illustrated in
Figure 11.20.
Two IP packet filtering routers create the buffer zone. In this new subnet reside all the information and proxy servers required to provide the organisation's network services.

Figure 11.20: A screened subnet firewall; notice the two routers and the subnet between them

The router which is the connection point to the Internet would route traffic according to the following rules:
application traffic from the application gateway to Internet systems gets routed
e-mail traffic from the e-mail server to Internet sites gets routed
application traffic from Internet sites to the application gateway gets routed
e-mail traffic from Internet sites to the e-mail server gets routed
FTP, GOPHER, etc., traffic from Internet sites to the information server gets routed
all other traffic gets rejected
In summary, the outer router restricts Internet access to specific systems on the screened subnet, and blocks
all other traffic to the Internet originating from systems that should not be originating connections. Conversely, the inner router passes traffic accordingly:
application traffic from the application gateway to site systems gets routed
e-mail traffic from the e-mail server to site systems gets routed
application traffic to the application gateway from site systems gets routed
e-mail traffic from site systems to the e-mail server gets routed
FTP, GOPHER, etc., traffic from site systems to the information server gets routed
all other traffic gets rejected
Hence no site system is directly reachable from the Internet and vice versa, which is the same effect as with the dual-homed gateway firewall. The big difference is that the routers are used to direct traffic to specific systems, thereby eliminating the need for the application gateway to be dual-homed.
An advantage of this set-up is that greater throughput can be achieved, since a router is used as the gateway to the protected subnet. Consequently, the screened subnet firewall may be more appropriate for sites with large amounts of traffic, or sites that need very high-speed traffic.

11.9.6 Summary of Advantages


In summary, there are several critical reasons why a firewall should be used:
1. Protection of critical and potentially vulnerable services. As firewalls filter out unwanted traffic, services within the subnet are protected from unwelcome traffic.
2. Open networks are closed off, as traffic is controlled and routed only to those systems that should receive it.
3. Centralised security implementation. The security features of the network are generally collected
within a few systems making maintenance easier and more efficient.
4. Logging. As all accesses occur through the firewall full and informative logging can be maintained.
In the event of attack the attacker may be back-traced from the logs.
5. Policy application. Not all users on a network may be willing to obey a security policy, whether through difficulty in implementing such a policy or because they find it restrictive. With a firewall, administrators can ensure that their policies are implemented regardless.

11.10 Exercises
Exercise 11.1 What are the key concepts of secure communication? Explain what each term means.
Exercise 11.2 What are the differences between a passive attack and an active attack? What kinds of active attack are common?
Exercise 11.3 decrypt ejggug ku itgcu
Exercise 11.4 What is the purpose of the salt?
Exercise 11.5 How did the Enigma device work? Are there similar systems still in common use today? Justify your answer.
Exercise 11.6 Give recommendations on how to secure a network from attack.


Chapter 12

Cellular Networks
12.1 Objectives of this Chapter
The wired world that we have so far been accustomed to is rapidly changing. Since the explosion of mobile services in the 80s, the cell phone has become ubiquitous. In this chapter we present some of the basic techniques that constitute a mobile communication network, its limitations, and a short glimpse of what is to come. After reading this chapter the reader should have a more qualitative appreciation of what occurs when someone makes a call from a mobile telephone, and of how the ether is capable of carrying the more demanding applications that are being planned.


12.2 Introduction
The dawn of wireless communications actually has its roots at the end of the 19th century, with an Italian called Guglielmo Marconi. It was his experiments that demonstrated that a continuous wave could be propagated through the atmosphere, and over the great boundary of the time, the horizon. The problem was that the earth's surface is curved, so a direct line-of-sight is only possible for several kilometres; the further apart two stations are, the more they lose line-of-sight as they dip beneath the horizon. Marconi showed that radio waves followed the earth's curvature by bouncing off the upper part of the atmosphere known as the ionosphere. He successfully transmitted a faint signal from Cornwall, England to his receiving apparatus in Newfoundland. He had earlier shown, in 1897, that his system could keep continuous contact with ships in the English Channel, the strip of water between England and France. This convinced the shipping companies of the time that wireless communications were possible, and many signed up for Marconi's apparatus.
Wireless communications became more common after World War I, and more so during World War II. Apart from commercial radio and television broadcasts, the applications were still pretty much specialised for military, aviation, shipping and civil police use. Traditional radio networks suffer from a major restriction on the frequencies available for the communications channel. The reason is that adequate separation has to be maintained between different bands, and with the plethora of radio services available, competition for those bands is high. Finally, to serve a wide area, large powerful transmitters are required.
The mid to late 70s witnessed the installation of the first cellular network in Japan (c. 1979). The almost universally adopted system, GSM (Groupe Spécial Mobile), was introduced in the late 80s and is ubiquitous in its deployment.

Example 12.1 The Bell mobile system employed in the very early 70s could only support 12 simultaneous calls over a coverage area of 1000 square miles.

12.3 Cellular Concept


The cellular concept is quite straightforward. The area to be served is not covered by a single powerful transmitter, the shortcomings of which can be seen in Example 12.1, but instead by many smaller, more efficient transmitters placed at suitable points throughout the service area. These transmitters, or base stations as they are known, have limited range by virtue of being low power compared to the large transmitters of before, and can be tuned so that the radius served is changeable. This notion is shown in Figure 12.1. Each cell contains what is known as a base station, which consists of the actual antenna and associated hardware. Each base station in turn is connected to a mobile switching centre (MSC). An MSC is similar to an ordinary telephone exchange: it is responsible for managing call set-up and calls in progress, and it is also the gateway for mobile users to the PSTN. Finally, a user has a mobile handset, typically a mobile telephone. This handset communicates with the base station, and subsequently the MSC, over what is known as the air interface. This configuration is shown in Figure 12.2.

12.3.1 Frequency Reuse


With an increasing number of transmitters a problem is created: there will be many co-located transmitters all potentially using the same signals, which of course is not a very good idea.

Figure 12.1: Cellular concept

Figure 12.2: Cellular network

With the frequency spectrum already in high demand, new bands cannot be added in an ad hoc fashion, and each cellular operator is assigned only a limited number of bands. To overcome this, frequency reuse is employed: the smaller transmitters are given different frequencies and so do not interfere with each other.
There is actually more to this than simple allocation. Each cellular operator has a limited number of channels or bands that they can use, and these have to be distributed intelligently throughout the operator's network. To get over the interference problem stated earlier, neighbouring cells are given completely different subsets of channels. Ultimately, though, the subsets of channels will be exhausted, and so a subset already in use must be assigned to a different base station within the network. This is allowable because each cell is only of a fixed diameter, and the base stations are tuned such that the propagation of their signals attenuates readily, so that cells which use the same subset of channels are a sufficient distance apart not to interfere with each other. This is illustrated in Figure 12.3, where it can be seen that seven channel groups A . . . G are reused throughout the network. The group of cells that together occupy the entire channel allocation is known as a cluster. The figure shows 3 such clusters.

Figure 12.3: Frequency reuse concept
Engineers known as radio-planners have to decide on frequency allocation, and will overlay a model such as that depicted in Figure 12.3 to decide coverage. The hexagonal cells are not the true radio map of the coverage area, but are a suitable approximation of the footprint where best response is obtained. It is important to realise, though, that the radio energy doesn't suddenly attenuate at the cell boundary.
To illustrate this concept further if we consider a system which has a total number duplex channels
available. Then each cell is given a subset of this number, say  channels where 
. Then the size of the
clusters must be given by:


 

where  is the number of clusters that occupy the complete allocated spectrum. Building on, it is a simple
extension to see that if such a cluster is used
times then the total number of available channels  is given
by:


 

which shows that the capacity of a network is directly proportional to the number of times a cluster is reused
in a service area. Radio-planners have to decide on this value  depending on coverage requirements,
population density and available channels. A number, known as the frequency reuse factor and defined as

gives the number of channels that a cell within a particular cluster will have available. Further, due to

12.3.2 Overview of the Air Interface

237

the geometry of the hexagonal grids, and to ensure that they tessellate correctly, an additional constraint is
placed upon the value N in that it has to obey:

N = i^2 + i j + j^2

where i and j are non-negative integers.
This constraint on N gives a way of finding the nearest cell neighbour which reuses the frequencies of a
target cell. The steps are to:

Move i cells along any radius that is perpendicular to a cell side.

Rotate 60 degrees counter-clockwise and move j cells.
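A minimal Python sketch of these relationships follows. The total of S = 420 duplex channels used below is an illustrative assumption, not a figure from the text:

```python
def cluster_sizes(limit=25):
    # valid hexagonal cluster sizes satisfy N = i^2 + i*j + j^2
    sizes = set()
    for i in range(6):
        for j in range(6):
            n = i * i + i * j + j * j
            if 0 < n <= limit:
                sizes.add(n)
    return sorted(sizes)

def channels_per_cell(s_total, n):
    # each of the N cells in a cluster gets an equal share k = S / N
    return s_total // n

print(cluster_sizes())            # [1, 3, 4, 7, 9, 12, 13, 16, 19, 21, 25]
print(channels_per_cell(420, 7))  # 60 channels per cell with 7-cell reuse
```

Note how only certain cluster sizes (1, 3, 4, 7, 12, . . . ) are geometrically possible, which is why real networks use reuse patterns such as 4, 7 or 12.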

12.3.2 Overview of the Air Interface


The air interface is the specification of what kinds of channels are in operation between the transmitter
stations and the mobile handset. In general, there are four sets of different channels.
1. Forward Voice Channels The FVCs are the channels which are used for voice transmission from
the base stations to the mobile handsets.
2. Reverse Voice Channels The RVCs are the complement of the FVCs and are used for transmission from the mobile handset to the base stations.
3. Forward Control Channels The FCCs are control channels used only in call set-up and run from the base
station to the mobile handset. The FCCs also act as pagers: every mobile handset in a cell
listens on the FCCs, and a page can be sent to a mobile handset if the network has somehow lost track of it.
4. Reverse Control Channels The RCCs are the control channel path back from the mobile handset to
the base station.
Different mobile network standards will implement these channels in various different ways.

12.3.3 Roaming Mobile Users


The coverage area is split up, as shown in the previous sections, into cells, each being served by its own set of
channels and base station. A user who is communicating in a particular cell cannot be expected to remain
stationary, as that would defeat the whole purpose of having a wireless network. There will be a point when a user
roams from the confines of one cell into another. The user cannot stay attached indefinitely to the channels of the cell
he has just roamed from, as the signal level will drop so much that the call will be unable to be
maintained. When such an event happens a procedure known as a handoff, or handover, happens.
In modern mobile networks (known as second generation) the mobile handset monitors the radio signal
power that it is receiving from neighbouring cells while it is in a particular cell. This information is sent
back to the base station, and consequently back to the MSC. The MSC must decide whether the signal that
the mobile handset is receiving is stronger from a neighbouring cell than from the one it is currently situated in.
If a neighbouring cell has a stronger radio signal then the handoff is initiated. This is one of the major tasks
that an MSC has to perform. Figure 12.4 illustrates a handoff scenario.

CHAPTER 12. CELLULAR NETWORKS


Figure 12.4: Handoff scenario

In Figure 12.4 a handset is monitoring the power of two cells A and B. At some point, as the handset roams
from cell A to cell B, a handoff has to occur. This could happen at the equal power point (shown on the diagram),
but this has a very distinct problem. If a handset is at the equal power point, or just a bit off from that point,
and changes to the neighbouring cell, it is quite likely that prevailing conditions change slightly, the
measured signal becomes greater for the previous cell, and so another handover occurs just after the one
that's just happened. This may even be caused by the user moving only a couple of yards! This can happen
repeatedly and unnecessarily load the system. To avoid this a tried and tested technique of hysteresis is used.
The operation as the user roams from A to B is that a handover will not actually happen until the user is at
point Y1 on cell A's power curve. At this point it is quite a way past the equal power point. So at Y1 the handoff
happens and the mobile handset will be at position Y2 on cell B's power curve. Now, if the user were
moving back and forth and were to head back to where they came from, then
the handover from cell B back to cell A wouldn't happen until position X1 on B's power curve. Once the
second handover happens the mobile handset will be at position X2.
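As a minimal sketch, the hysteresis rule amounts to a one-line decision function. The 6 dB margin and the dBm figures below are illustrative assumptions, not values from any standard:

```python
def handoff_decision(serving_dbm, neighbour_dbm, hysteresis_db=6.0):
    """Hand off only when the neighbour exceeds the serving cell by a
    fixed margin, avoiding ping-pong handovers near the equal-power point."""
    return neighbour_dbm > serving_dbm + hysteresis_db

# at the equal-power point no handover occurs...
assert handoff_decision(-80.0, -80.0) is False
# ...and a small fluctuation past it still does not trigger one
assert handoff_decision(-80.0, -78.0) is False
# only well past the equal-power point does the handoff happen
assert handoff_decision(-80.0, -72.0) is True
```

Because the margin must be crossed in each direction, a user dithering around the boundary does not cause repeated handovers.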
When a handover happens the MSC has to locate and allocate channels for the mobile handset to use in
the cell it has just roamed into. This takes a finite amount of time. Note that the handover has to happen before the
received radio signal power falls to the call cutoff point for a particular curve. If the MSC is heavily loaded,
or there are no available channels to use, then the call will be dropped.
There are other practical considerations to be taken into account with the cellular design and particularly
handoffs. Cells are of fixed sizes, and mobile handsets will traverse through the cellular
network at differing speeds. Most mobile handsets will have a cell occupancy time that
is relatively efficient and manageable by the MSC in terms of controlling handoffs. If a mobile
handset is travelling at high speed through a group of cells, however, a situation may occur where it is travelling too
fast for a handover to a neighbouring cell to have completed before it needs to be handed over to the subsequent
cell after that. To combat this, layering of cells may be used to form what is known as an umbrella cell, as
shown in Figure 12.5.
An MSC will monitor the number of handover requests that a mobile handset issues over a period of time, or
estimate the speed of the handset by the changing signal measurements it is making. If it is deemed to be


Figure 12.5: Catering for high speed traffic with umbrella cells

high speed it will be switched to the umbrella cell where the costly handover requests will diminish. This
ensures that the MSC does not get flooded with handover requests.

12.4 How a Call is Made


We have seen the basic elements of a cellular network system and now we can illustrate a call being
made from a handset to another subscriber. Figure 12.6 shows a timing diagram of a call originated from a
mobile handset.

Figure 12.6: Timing diagram for a mobile handset



The mobile handset initiates a call by grabbing an RCC and signalling its identification
information to the base station. If no RCC is available, because many handsets are camped on them, then a call cannot be initiated. The base station relays this information to the MSC. The MSC allocates an FVC and RVC for the originating
handset and sends this information back to the originator. It also allocates a pair of FVC and RVC, different from
the first pair, for the called party, and pages the called party over the FCC to inform it of this information.
The called party answers the page and grabs the RVC and FVC allocated to it, as does the originating party.
Both send information back to the MSC via an RCC to acknowledge this. If the MSC doesn't receive this
information it will not connect the call. Once it has received this information, it instructs the base station to
connect the call and conversation takes place. Further involvement of the MSC is not necessary.
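The sequence just described can be sketched as a hypothetical message trace. The wording of each message is invented here; only the ordering follows the text and Figure 12.6:

```python
def originate_call(rcc_available=True):
    # if no reverse control channel is free the call cannot begin
    if not rcc_available:
        return ["call blocked: no reverse control channel available"]
    return [
        "handset -> base station : identification over an RCC",
        "base station -> MSC     : relay identification",
        "MSC -> originator       : assigned FVC/RVC pair",
        "MSC -> called party     : page over FCC with its FVC/RVC pair",
        "both parties -> MSC     : acknowledgement over an RCC",
        "MSC -> base stations    : connect call, conversation begins",
    ]

for line in originate_call():
    print(line)
```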

12.5 GSM

GSM, which now stands for Global System for Mobile communications after a name change from Groupe
Spécial Mobile, is a second generation cellular network system in its design and the services that it offers. The layout
of a GSM network is shown in Figure 12.7. It is split into three major interconnected subsystems.

Figure 12.7: GSM schematic showing interfaces between functional entities

The Base Station Subsystem (BSS), which is also known as the radio subsystem, is responsible for managing
the connections between the mobile handsets and the mobile switching centre (MSC). The BSS consists of
many Base Station Controllers (BSCs) which all connect to a single MSC. The BSCs in turn control many Base
Transceiver Stations (BTSs). The BTSs are the physical antennae and radio equipment. When a handover


event happens, if it is between two BTSs attached to the same BSC, then it is the BSC that manages the
change and not the MSC - which greatly reduces the load on the MSC.
The Network Switching Subsystem (NSS) is responsible for call control and switching, and for access to external
databases that are necessary for correct operation of roaming users. The MSC is the main unit in an NSS
and controls traffic to and from the BSCs. Within the NSS are three databases called the Home Location
Register (HLR), Visitor Location Register (VLR) and the Authentication Centre (AUC). When a subscriber
joins a network they are assigned a record in an HLR. Different MSCs will have different HLR databases,
but ultimately they are all linked so that they can be accessed via different MSCs. The record contains the
International Mobile Subscriber Identity (IMSI) which is used to uniquely identify every home subscriber
handset. When a call is made from a handset, the IMSI information is negotiated as a matter of course to
ensure the mobile is authorised and not stolen, for example. When a non-local mobile handset roams into
the area, its IMSI is stored in the VLR of the local MSC. The local MSC then locates the mobile's home MSC (and hence
HLR record) and updates that entry with the fact that the mobile handset is currently within the region under
the control of that MSC. So, when a call is made to the roaming mobile handset, the HLR in its home MSC
points to the current location of the handset at another MSC.
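The HLR/VLR bookkeeping can be sketched with two toy in-memory registries. The IMSI value and MSC names here are invented for illustration:

```python
# toy registries: the HLR holds home subscribers, each VLR holds visitors
hlr = {"imsi-123": {"home_msc": "MSC-A", "current_msc": "MSC-A"}}
vlr = {"MSC-A": set(), "MSC-B": set()}

def roam(imsi, new_msc):
    # the visited MSC records the handset in its VLR...
    vlr[new_msc].add(imsi)
    # ...and the home HLR entry is updated to point at the new MSC
    hlr[imsi]["current_msc"] = new_msc

def locate(imsi):
    # a call to the handset consults the home HLR for its current MSC
    return hlr[imsi]["current_msc"]

roam("imsi-123", "MSC-B")
assert locate("imsi-123") == "MSC-B"
```

The key point is that the home HLR always holds a pointer to wherever the handset currently is, so incoming calls need only one lookup.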

12.5.1 Radio Information and Channel Types


The specification of the links between the subscriber handsets and the base stations for GSM is for two
bands, each 25 MHz wide, ranging from 890-915 MHz for the reverse link (handset to transmitter) and
935-960 MHz for the forward link. These bands use a mixture of TDMA and FHMA (Frequency Hopping
Multiple Access). The available forward and reverse frequency bands are divided into 200 kHz wide channels called ARFCNs (Absolute Radio Frequency Channel Numbers). An ARFCN denotes a forward and
reverse channel pair which is separated in frequency by 45 MHz, and each channel is time shared between
as many as eight users using TDMA. This is illustrated in Figure 12.8.
[The figure shows one TDMA frame of duration 4.615 ms divided into eight timeslots, TS0 to TS7, each of duration 576.92 us and an equivalent length of 156.25 bits.]
Figure 12.8: GSM TDMA frame


Each timeslot has an equivalent time allocation of 156.25 bits, but 8.25 of these are consumed in guard time - protection from overlap into another channel. The total number of available channels within a 25 MHz band
is 125, giving a theoretical maximum of only 1000 traffic channels (125 carriers with eight users each). This makes the technique of
frequency reuse and radio planning discussed earlier a very important one.
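The channel arithmetic can be checked directly. The only assumption is the exact GSM frame duration of 60/13 ms, which yields the 576.92 us timeslot quoted in Figure 12.8:

```python
band_hz = 25_000_000        # one direction of the paired 25 MHz bands
carrier_hz = 200_000        # one ARFCN is 200 kHz wide
slots_per_carrier = 8       # TDMA: eight timeslots per carrier

carriers = band_hz // carrier_hz
traffic_channels = carriers * slots_per_carrier
print(carriers, traffic_channels)   # 125 1000

frame_ms = 60 / 13                  # GSM frame duration, approx. 4.615 ms
slot_us = frame_ms / slots_per_carrier * 1000
print(round(slot_us, 2))            # 576.92
```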

12.5.2 Channel Types


Traffic on a GSM channel can be either full-rate or half-rate. A full-rate channel occupies a timeslot every
frame, whereas a half-rate user occupies a timeslot every alternate frame, allowing two users to share the
same timeslot.

CHAPTER 12. CELLULAR NETWORKS

242
TCH - Traffic Channel

The TCH is the standard traffic bearing channel type. It can carry either speech or data. In data mode it can
be configured for 2400 bps, 4800 bps or 9600 bps. The overhead of forward error correcting codes
increases the bit rate: each of these services is expanded to occupy the channel capacity of 22.8 kbps for
full-rate channels and 11.4 kbps for half-rate channels, with the highest speed service of 9600
bps not being available in half-rate mode.
CCH - Control Channel
Control channels are used to carry signalling information between the user and the network. In GSM there
are three kinds, the BCH, CCCH and DCCH.
BCH - Broadcast Channels only operate in the forward direction (from network to user) and only within
specific timeslots within a frame or multiframe. They are used for synchronisation, and for monitoring
by the mobile handsets. There are in fact three different kinds of BCH channel.
BCCH - Broadcast Control Channel. This is used to send information such as network activity
and parameters to the mobile user. It also broadcasts a list of channels that are currently in use.
FCCH - Frequency Correction Channel. This channel is used to send precise timing signals
to mobile handsets so that each unit's oscillator is locked onto the exact frequency of the base
station.
SCH - Synchronisation Channel. This is a framing synchronisation channel which transmits
frame numbers and allows the handsets to be in frame-sync with the base station.
CCCH - Common Control Channels. As with the BCHs there are three variants of CCCH which fulfil
different functions: the PCH, RACH and AGCH.
PCH - Paging Channel. This is a general broadcast channel to which all mobile handsets can listen.
Requests to contact a specific handset are broadcast over this channel first, after which the called
handset should respond.
RACH - Random Access Channel. This is a reverse link channel used as the return path for any
communications via the PCH.
AGCH - Access Grant Channel. This is a supervisory forward channel from the base station to
the mobile handset and is used for communicating supervisory messages to the handset such as
frequency allocations for calls.
DCCH - Dedicated Control Channels. Again there are three types of DCCH.
SDCCH - Stand-alone Dedicated Control Channel. This channel carries signalling data following the connection of a mobile handset to the base station. It is used during call set-up and
ensures that the MSC and the handset remain in contact during subscriber verification.
SACCH - Slow Associated Control Channel. This supervisory channel is used to communicate
such things as received signal strength for a particular TCH.
FACCH - Fast Associated Control Channel. This is similar to the SDCCH but is used whenever an SDCCH is unavailable. It is particularly used for urgent messaging between user and
network for such things as hand-offs. It accomplishes these transmissions by stealing bits from
a handset's TCH.


The framing techniques and their management are complex issues and beyond the scope of what can be presented
here. For further information the reader is directed to the ITU-T website or a specific wireless communications text.

12.6 3G Mobile Networks


GSM has been around now for quite a while and it was a major improvement over the previous first generation mobile networks. Having said that, it is still lacking in places. One of the major limitations of the GSM
network is that although it offers a data service, so users can make data calls to remote servers and the Internet, this occurs over a circuit switched air interface and so allows for a maximum of only 9.6 kbps.
Users are demanding faster access than this service can offer.

12.6.1 General Packet Radio Service


The General Packet Radio Service (GPRS) is a major development of the GSM standards. The aim is to
provide mobile users with the facilities of a packet switched network to enable high bit rates for bursty data
transmission. The aim is for a user to theoretically use several timeslots to get a bit rate of approximately
171.2 kbps.

GPRS services are designed to fall into one of two categories, either Point-to-Point (PTP) or Point-to-Multipoint (PTM). PTP services are designed for IP-like applications, that is a data call that can either be
connection-orientated or be a connectionless call where packet delivery and ordering are not strict. PTM
services are designed for group broadcast, such as weather reports or traffic reports, where a single source
will broadcast to a number of mobile stations at once. This will be an anonymous service.
The architecture of GPRS is similar to that of GSM. In fact GPRS can be considered to be an extension of GSM
and sits neatly over the NSS part of the GSM network. Figure 12.9 shows how the GPRS extension fits into the
GSM architecture.
Here, the mobile station of the GSM network now becomes two distinct parts, consisting of the mobile
handset, which acts as the radio transceiver at the subscriber end, connected to the terminal
equipment, either a laptop or a personal digital assistant (PDA) - although these items are merging together
to form a single mobile entity which has data capability. There are two types of service for a mobile station.
One is where a data call can be established and a voice call can be carried simultaneously too. The other
is where only one or the other type of call can be in progress.
In extending the GSM NSS layer, new components had to be added to the functional model. At the BSS, a
PCU or Packet Control Unit is implemented. The PCU is placed between the BSS and the GPRS part of the
NSS. The core part of the NSS is where most of the new additions take place. The main units added are the
SGSN and the GGSN, or Serving GPRS Support Node and Gateway GPRS Support Node. The network is
broken down into service areas, each one served by an SGSN. The function of the SGSN is mainly to track
and manage the mobile stations that are under its control. The GGSN serves as the gateway from the mobile
network into the public data network (PDN).

12.6.2 Roaming in a GPRS Network


The case to consider is when a mobile station is in a data mode call.


Figure 12.9: Logical architecture of a GPRS network

The organisation at the cellular level of a GPRS enabled network is slightly different to that at the voice
level for GSM. Several cells in an area are grouped together and classed as a routeing area, or RA. An
SGSN may control several RAs. Further, in the GSM concept an MSC will control many cells which are
organised into many location areas, or LAs, and an RA is a subset of an LA. This offers better utilisation
of radio resources as an RA is a geographically smaller area. This is shown in Figure 12.10.
In the figure we see that a particular MSC controls one group of LAs at the GSM level, and a different MSC
controls the others. An LA consists of many cells, but we see that an RA is a smaller group of those cells.
There is a suggestion that an SGSN will manage up to three different RAs.
A mobile station may be in one of three states as shown in Figure 12.11, which is different to the GSM
model where a handset is either idle or has grabbed a dedicated channel. For GPRS the states that a station
can be in are:

idle - in this state the mobile station is not in active contact with the GPRS network. That is to say
that it is not engaged in a data call and is stood down. The only data messages that the handset can
receive are broadcast messages.
ready - this is the active state, in which the mobile handset is engaged in making a data
transfer. It has a managing SGSN and in this state it has actively seized several time slots for burst
transmission. If for some reason the data transfer stalls, be it waiting for user input of some kind or
because a built-in timer that monitors idle time expires, the handset downgrades from the ready state to standby.
(Note: routeing is a special case spelling used to distinguish this from the normal concept of routing.)

Figure 12.10: Distinction of RAs and LAs in a GPRS network

[The figure shows three states: Idle, Ready and Standby. A GPRS attach takes Idle to Ready; a GPRS detach returns Ready (or Standby) to Idle; timer expiry takes Ready to Standby and Standby to Idle; PDU transmission or reception takes Standby back to Ready.]
Figure 12.11: GPRS functional state model


standby - in this state the handset may make a data connection but currently has none in progress. It is still
managed by an SGSN. If data is queued to be sent it will move back to the ready state. In this state it is
no longer transmitting on any time slot.
The latter two states are important for managing the mobile handset, its location, and routing to the
correct SGSN and cells for transmission of data. Data external to the system has to be routed to the correct controlling SGSN, just as with GSM
and a voice call. So, in these states the mobile handset has to inform
the managing SGSN of its location and which RA it is currently in. This is done by the handset listening
for certain control channels which are broadcast containing pertinent routing information. The SGSN will
then need to keep the HLR in the home MSC updated with this information as well as updating the GGSN


so that incoming data can be routed efficiently. There may be an occasion where a mobile handset moves
from an RA controlled by one SGSN to another RA controlled by a different SGSN. Then the SGSNs have
to communicate with each other to perform a proper hand-off.
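The three-state model of Figure 12.11 can be sketched as a small transition table. The event names are informal labels chosen here, not protocol message names:

```python
# transitions of the GPRS mobility state model (cf. Figure 12.11)
TRANSITIONS = {
    ("idle", "gprs_attach"): "ready",
    ("ready", "gprs_detach"): "idle",
    ("ready", "timer_expiry"): "standby",
    ("standby", "pdu_transfer"): "ready",
    ("standby", "timer_expiry"): "idle",
    ("standby", "gprs_detach"): "idle",
}

def step(state, event):
    # an event with no matching transition leaves the state unchanged
    return TRANSITIONS.get((state, event), state)

s = "idle"
s = step(s, "gprs_attach")   # attach: idle -> ready
s = step(s, "timer_expiry")  # transfer stalls: ready -> standby
s = step(s, "pdu_transfer")  # queued data: standby -> ready
assert s == "ready"
```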

12.6.3 Data Transmission


Data transfer in a GPRS network can be initiated in one of two ways: it can be either mobile-originated
(MO) or mobile-terminated (MT). Either way, as stated in the previous section, the mobile handset has to be
attached to an SGSN. When it is attached it activates what is known as a PDP (Packet Data Protocol) context.
The PDP context is a request and meta-information for an end-to-end data connection, and more specifically
requests to create protocol tunnels to the exterior data network. Data between an SGSN and the
gateway GGSN is wrapped in GTP, the GPRS Tunnelling Protocol.
For MO transmission the SGSN forwards the data to the appropriate GGSN via GTP. At this
point the data is sent on to the connected PDN. For incoming, MT, transmissions the
GGSN has to locate the mobile station and its managing SGSN via the HLR, whence it can then forward the
data packets to the mobile handset. If a handover is occurring then the data packets are buffered
and forwarded to the new SGSN when the handover is completed.
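The tunnelling between SGSN and GGSN can be sketched with a toy encapsulation helper. This is a deliberately simplified stand-in, not the real GTP header format:

```python
# toy illustration of tunnelling: the user's packet is wrapped with a
# tunnel identifier so the core network can route it without inspecting it
def gtp_encapsulate(user_packet, tunnel_id):
    return {"tunnel_id": tunnel_id, "payload": user_packet}

def gtp_decapsulate(gtp_packet):
    # the GGSN unwraps the tunnel and forwards the inner packet to the PDN
    return gtp_packet["payload"]

packet = b"ip packet from mobile"
wrapped = gtp_encapsulate(packet, tunnel_id=42)
assert gtp_decapsulate(wrapped) == packet
```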
As a final point, it is important to realise that transitions between the ready and standby states can be
frequent and rapid as GPRS takes advantage of the bursty nature of data traffic.

12.7 Universal Mobile Telecommunications System


Universal Mobile Telecommunications System, or UMTS, is an implementation of the ITU IMT-2000 standard
for 3rd Generation (3G) mobile networks.
UMTS will allow for wireless Internet access, video-conferencing, and other bandwidth intensive applications. The benefits expected from this new system of wireless communications are: support for existing
mobile services and fixed telecommunications services at up to 2 Mb/s; support for unique mobile services such
as navigation, vehicle location, and road traffic information services, which will become increasingly important in the world market; the ability to use the system terminal in multiple environments - in
the home, the office, and in public environments, in both rural areas and city centres; and provision of a
range of mobile terminals, from a low cost pocket telephone to sophisticated terminals providing advanced
video and data services.
Although development of UMTS technologies can be a major step forward, there is little time to develop
and implement commercial standards prior to adoption of the technologies. For example, Japan plans to
launch its UMTS network this year and the United Kingdom wants its UMTS interface working alongside
and enhancing GSM networks by the year 2002. The system will be a member of a new family of mobile
telecommunications systems being developed by the ITU for deployment across the world. While using
different radio frequencies in different countries, every system will offer the same set of features to users.
This will allow handsets to be developed that can be carried from country to country as the user travels.
The key difference between this system and previous mobile (wireless) systems, such as GSM, is that the
earlier systems were conceptually separate from the fixed (wire line) telephone network. The goal of this
system is to integrate wire line and wireless systems to provide a universal communications service, such
that a user can move from place to place while maintaining access to the same set of services.


12.7.1 Overview
In UMTS the network can be thought of as split into three distinct entities, similar to GSM, two of which
are shown in Figure 12.12. The network consists of a core network (CN) which holds all the switching and
transit equipment. It is envisioned that UMTS will evolve from the GPRS network domain. Arguably
more important are the radio access nodes or, to give them their IMT-2000 names, the UMTS Terrestrial Radio
Access Networks (UTRANs). These will consist of a radio controller (RC) which can control one or more
radio nodes (the GSM equivalent of the BTS). The UTRANs will interface to the core network through an interface
working unit (IWU). The missing entity in the diagram is the user equipment (UE), which will be the mobile
handset or radio enabled device.

Figure 12.12: UMTS elements


The UTRANs will be the major roll-out for the 3G networks and they will initially be connected to the
current 2G (GSM) and 2.5G (GPRS) networks. The diagram in Figure 12.13 shows the connection strategies.
An interworking unit connects to the serving GPRS support node (SGSN) using the same interface as the
older 2G radio equipment. This will most likely be the easiest to implement as it requires minimal disruption
to core network features. Another option will be to connect to the SGSN directly through the IWU. This
will enable the provision of more advanced services or provide greater efficiency, as no translation, or at the
very least reduced tunnelling of protocols, will be required. Ultimately, a UMTS specific core network may be
introduced as shown in Figure 12.14. Such core network facilities will provide even more functionality to the
mobile user and, even more importantly, may ultimately replace the dual homed arrangement of
having a separate switching network for fixed wired customers and another for mobile users.
Whatever happens, UMTS and IMT-2000 are still extremely fluid specifications, and even though some
operators are implementing currently, the important part of the network is the UTRAN and the ability to get
higher bandwidth over the air-interface.

More on UTRAN - CDMA/WCDMA


The UTRAN is the radio access part of the network. The limitation on the current 2G networks is the radio
interface and its organisation, with users being assigned a single channel for the duration of their call or, in
the case of GPRS, possibly multiple channels (although this is somewhat alleviated by sharing the
channels during quiet time).


Figure 12.13: UTRAN connection

Figure 12.14: UTRAN connection with new core network (CN)


The transmission rate capability of UTRANs will provide at least 144 kb/s for full mobility applications in
all environments, 384 kb/s for limited mobility applications in the macro- and micro-cellular environments,
and 2.048 Mb/s for low mobility applications, particularly in micro-cellular environments. The 2.048 Mb/s
rate may also be available for short range or packet applications in the macro-cellular environment, depending on deployment strategies, radio network planning, and spectrum availability. To be able to achieve such
rates the radio spectrum can't be chopped up, assigned and guarded against interference ad infinitum.
Different techniques have to be used.


One of the candidates for the radio interface is similar to an already established technique called CDMA
- code division multiple access. Recall from earlier chapters the techniques of TDMA and FDMA, where
channels are differentiated in time or by frequency. With CDMA a channel is spread across a very large
spectrum. It operates by taking the inband signals and essentially multiplying them by some code sequence
called the spreading signal. The spreading signal has a rate which is orders of magnitude
greater than the data rate of the data source. Each user is in turn issued with a codeword (the spreading
signal) and this is orthogonal to every other user's codeword. Each user then broadcasts on the same channel
using their spreading signal. This is illustrated in Figure 12.15.

Figure 12.15: Spreading of channels in a CDMA type access system


Here, we can see that each user is transmitting on the same carrier frequency but their channel is spread
according to their spreading signal. The receiver can decode the channels by correlating with the spreading signal
assigned to a user. The characteristics of CDMA type systems (i.e. the orthogonality of the codes) mean
that other user transmissions appear as noise at some level which can be adequately guarded against.
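The spreading and despreading idea can be sketched with mutually orthogonal Walsh codes. This is a baseband toy model (no carrier, no noise, perfect synchronisation), not a real CDMA transceiver:

```python
# Walsh (Hadamard) codes of length 4: mutually orthogonal spreading codes
walsh = [
    [1, 1, 1, 1],
    [1, -1, 1, -1],
    [1, 1, -1, -1],
    [1, -1, -1, 1],
]

def spread(bits, code):
    # each data bit (+1/-1) is multiplied by the full spreading code
    return [b * c for b in bits for c in code]

def despread(signal, code):
    # correlate the composite signal with one user's code;
    # orthogonality cancels the other users' contributions
    n = len(code)
    out = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        out.append(1 if corr > 0 else -1)
    return out

# two users transmit simultaneously on the same carrier; the channel
# simply adds their spread signals together
u1, u2 = [1, -1], [-1, -1]
composite = [a + b for a, b in zip(spread(u1, walsh[1]), spread(u2, walsh[2]))]
assert despread(composite, walsh[1]) == u1
assert despread(composite, walsh[2]) == u2
```

Because the codes are orthogonal, correlating the summed signal with one user's code recovers that user's bits exactly, while the other user's contribution sums to zero.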
In general, CDMA type systems have the following qualities:
CDMA systems have a soft capacity limit. Unlike TDMA or FDMA systems, where the bandwidth is
channelised and assigned, in CDMA systems everyone is broadcasting on the same frequency. The
spreading signal separates the user transmissions, and for a particular sequence all other user transmissions appear as noise. The more users that are transmitting, the more noise there is to try and filter out
from the desired signal. So the performance of the channel degrades gracefully as capacity is loaded.
As several co-located stations may be using the same frequency, users can be tracked and handed off to
adjacent controllers easily, as the MSC doesn't have to organise the changing of frequency allocation
for a user.
Jamming can be a problem in that the codes are only nearly orthogonal. That is, there is some cumulative
effect from other users on another's signal.
There is a problem known as the near-far problem, in that a user's transmission power has to be tightly controlled, as another user with a strong transmission may possibly swamp the desired user's despread
signal.


More on such techniques is beyond what we want to illustrate here.


12.8 Exercises
Exercise 12.1 What is the limiting factor on transmitting a signal using line-of-sight? Who overcame this
and why did it work?
Exercise 12.2 What is the problem with using a single large transmitter for a mobile communication network?
Exercise 12.3 What is a cell?
Exercise 12.4 Explain the term frequency reuse. Why, if at all, is it advantageous?
Exercise 12.5 If a total of 33 MHz of bandwidth is allocated to a cellular operator, and a duplex channel is
made up of two 25 kHz simplex channels, calculate the number of channels available per cell if the operator uses:
1. 4-cell reuse
2. 7-cell reuse
3. 12-cell reuse
Exercise 12.6 Is a small or large value of N desirable for a given coverage area? Justify your answer.


Chapter 13

Network Middleware
13.1 Objectives of this Chapter
This chapter's aim is to provide a brief background to some of the technologies available for developing
distributed programs across a network.


13.2 Introduction
What are distributed programming and distributed applications? Distributed programming
is essentially the distribution of an application's business logic over a number of different
computers and platforms. That is to say, the actual processing of a job may occur across several different
machines. It is also not inconceivable for the data to be distributed across different systems. There
are several reasons why distributed computing has become popular:
1. The power of desktop computers has increased dramatically over the preceding years. The need for
a single monolithic computing server is reduced, as processing tasks can be farmed out to other machines.
2. Organisations are growing and are diversified over a large geographical area, and it may not make
sense to have all of the computing resources centralised. This is even more apparent as
networking speeds have increased dramatically over recent years. With the competitive
growth of telecommunication providers it is not unlikely that organisations have their own large fibre
optic networks offering very high speeds, allowing for such diversity.
3. The number of concurrent users of a particular service is usually increasing. Such users can be best
served by distributed systems, again avoiding the need for one monolithic super-computer and support
systems to service all their needs.

13.2.1 Design Approaches


Distributed applications are usually implemented as some form of client/server system. Such systems can
be broken down into a 3-layer model as illustrated in Figure 13.1.

Figure 13.1: Organisation of a distributed system

There are layers that handle the user interface logic, the information processing, and the storage.
An example of user interface logic is an email client program. The processing layer would be the many
different mail servers that the email program can connect to, and the storage layer may be some form of
archival system where mailboxes are maintained.
There are many older technologies that enable a distributed computing scenario - Remote Procedure Call (RPC) and the Distributed Computing Environment (DCE) are just two examples, but in this text we
will take a look at some of the newer, object orientated methodologies.

13.3 Common Object Request Broker Architecture (CORBA)


The Common Object Request Broker Architecture (CORBA) provides an approach to distributed applications that is object-orientated. The specifications were developed by the Object Management Group (OMG)
to promote the adoption of standards for managing distributed objects. CORBA is more than a simple programming
interface; it is a whole architecture and infrastructure.

Figure 13.2: OMG reference model architecture

Figure 13.2 shows the basic components of CORBA, as defined by the OMG. At this high level they are
classed as:
Object Services These are interfaces and services that are used by many distributed object programs
and are domain independent. For example, a service providing for the discovery of other available
services is nearly always necessary, regardless of what the actual domain of any objects may be. Two
examples of Object Services that fulfill this role are:
The Naming Service : which allows clients to find objects based on names
The Trading Service : which allows clients to find objects based on their properties.
There are also Object Service specifications for life-cycle management, security, transactions, and
event notification, as well as many others.
Common Facilities Like Object Service interfaces, these interfaces are horizontally oriented, but towards end-user applications. That is to say, they provide common functionality, again crossing domain boundaries. An
example of such a facility is the Distributed Document Component Facility (DDCF), a compound
document Common Facility based on OpenDoc. DDCF allows for the presentation and interchange
of objects based on a document model, for example, facilitating the linking of a spreadsheet object
into a report document.
Domain Interfaces These interfaces fill roles similar to Object Services and Common Facilities but
are oriented towards specific application domains. For example there may be interfaces for Product Data Management (PDM) Enablers for the manufacturing domain. OMG has issued RFPs for
specifications in the telecommunications, medical, and financial domains.


Application Interfaces These are interfaces developed specifically for a given application. Because
they are application-specific, and because the OMG does not develop applications (only specifications), these interfaces are not standardised. However, if over time it appears that certain broadly
useful services emerge out of a particular application domain, they might become candidates for future OMG standardisation as domain interfaces or common facilities.
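The Naming Service mentioned earlier is, at heart, a directory mapping names to object references. The following toy Java sketch illustrates the idea; the class and method names are invented, and this is not the real CORBA Naming Service API, which additionally supports hierarchical naming contexts:

```java
import java.util.HashMap;
import java.util.Map;

// Toy name-to-object-reference directory. The real CORBA Naming Service
// additionally supports hierarchical contexts and federation across ORBs.
class ToyNamingService {
    private final Map<String, Object> bindings = new HashMap<>();

    // Associate a name with an object reference.
    void bind(String name, Object reference) {
        bindings.put(name, reference);
    }

    // Look up the reference bound to a name; fail loudly if absent.
    Object resolve(String name) {
        Object ref = bindings.get(name);
        if (ref == null) {
            throw new IllegalArgumentException("no binding for: " + name);
        }
        return ref;
    }
}
```

A client that knows only a string such as "AccountServer" can thus obtain a reference to the corresponding object without knowing where that object lives.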
Central to all of these services is the CORBA ORB - the Object Request Broker. This entity, which stretches
across the network between hosts and implementations, is the main facilitator of CORBA.

Figure 13.3: A request being sent through the ORB

The objects that a distributed application may access may be located anywhere. Local objects can call
methods of objects located elsewhere via the ORB. This is the critical function of the ORB and is illustrated
in Figure 13.3. There is much that the ORB has to do, and this will be the focus of the following discussions.

13.4 The Object Request Broker (ORB)


Figure 13.3 shows a request being sent by a client to an object implementation. The Client is the entity
that wishes to perform an operation on the object and the Object Implementation is the code and data that
actually implements the object. The operations are facilitated through the use of stubs and skeletons and can
be defined using an interface definition language (IDL).
The ORB is responsible for all of the mechanisms required to find the object implementation for the request,
to prepare the object implementation to receive the request, and to communicate the data making up the
request. The interface the client sees is completely independent of where the object is located, what programming language it is implemented in, or any other aspect which is not reflected in the object's interface.
The ORB is made up of many functional entities. Figure 13.4 shows the CORBA ORB architecture. It
shows the ORB and its facilities and how they link to the high-level object entities and programs. The entities
are:
Object This is a CORBA programming entity that consists of an identity (the name of the object), an
interface (the actions it can perform, or that can be performed on it), and an implementation, which is known
as a Servant.


Figure 13.4: The object request broker

Servant This is an implementation programming language entity that defines the operations that support a CORBA IDL interface. Servants can be written in a variety of languages, including C, C++,
Java, Smalltalk, and Ada.
Client This is the program entity that invokes an operation on an object implementation. Accessing
the services of a remote object should be transparent to the caller. Ideally, it should be as simple
as calling a method on an object, i.e., obj->operation(args). The remaining components in
Figure 13.4 help to support this level of transparency.
ORB Interface
An ORB is a logical entity that may be implemented in various ways (such as one or more processes
or a set of libraries). To decouple applications from implementation details, the CORBA specification
defines an abstract interface for an ORB. This interface provides various helper functions such as
converting object references to strings and vice versa, and creating argument lists for requests made
through various interfaces, particularly the dynamic invocation interface.
CORBA IDL stubs and skeletons CORBA IDL stubs and skeletons serve as the glue between the
client and server applications, respectively, and the ORB. The transformation between IDL definitions
and the target programming language is automated by an IDL compiler that would generate the code
for both the client side and server side object representations. The use of a compiler reduces the
potential for inconsistencies between client stubs and server skeletons and increases opportunities for
automated compiler optimisations.



Dynamic Invocation Interface (DII) This interface allows a client to directly access the underlying
request mechanisms provided by an ORB. Applications use the DII to dynamically issue requests to
objects without requiring IDL interface-specific stubs to be linked in. Unlike IDL stubs (which only
allow RPC-style requests), the DII also allows clients to make non-blocking deferred synchronous
(separate send and receive operations) and oneway (send-only) calls.
Dynamic Skeleton Interface (DSI) This is the server side's analogue to the client side's DII. The
DSI allows an ORB to deliver requests to an object implementation that does not have compile-time
knowledge of the type of the object it is implementing. The client making the request has no idea
whether the implementation is using the type-specific IDL skeletons or is using the dynamic skeletons.
Object Adapter This assists the ORB with delivering requests to the object and with activating the
object. More importantly, an object adapter associates object implementations with the ORB. Object
adapters can be specialised to provide support for certain object implementation styles (such as OODB
object adapters for persistence and library object adapters for non-remote objects).

We can use the scenario illustrated in Figure 13.3 to map the process of a client call.
To make a request, the Client can use the Dynamic Invocation Interface (the same interface independent of
the target object's interface) or an OMG IDL stub (the specific stub depending on the interface of the target
object). The Client can also directly interact with the ORB for some functions.
The Object Implementation receives a request as an up-call either through the OMG IDL generated skeleton
or through a dynamic skeleton. The Object Implementation may call the Object Adapter and the ORB while
processing a request or at other times.
Interfaces to objects can be defined in two ways. Interfaces can be defined statically in an
IDL. Alternatively, or in addition, interfaces can be added to an Interface Repository service; this service
represents the components of an interface as objects, permitting runtime access to these components.
The client performs a request by having access to an object reference for an object and knowing the type
of the object and the desired operation to be performed. The client initiates the request by calling stub
routines that are specific to the object or by constructing the request dynamically. The dynamic and stub
interface for invoking a request satisfy the same request semantics, and the receiver of the message cannot
tell how the request was invoked. These requests are transmitted without concern for the underlying network
architecture. The fact that the object may be located elsewhere is irrelevant to the client.
The ORB locates the appropriate implementation code, transmits parameters and transfers control to the
Object Implementation through an IDL skeleton or a dynamic skeleton. The skeletons are specific to the
interface and the object adapter. In performing the request, the object implementation may obtain some
services from the ORB through the Object Adapter. When the request is complete, control and output
values are returned to the client.
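The request path just described, from client stub through the ORB to skeleton and servant and back, can be mimicked in a single process. The sketch below is a toy Java illustration with invented names; in real CORBA the stub and skeleton are generated by an IDL compiler and the call crosses the network through the ORB:

```java
// The interface both sides agree on (in CORBA this would come from an IDL definition).
interface Greeter {
    String greet(String name);
}

// Servant: the server-side implementation of the interface.
class GreeterServant implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// Skeleton: receives a decoded request and dispatches it to the servant.
class GreeterSkeleton {
    private final Greeter servant;
    GreeterSkeleton(Greeter servant) { this.servant = servant; }

    String dispatch(String operation, String argument) {
        if (operation.equals("greet")) {
            return servant.greet(argument);
        }
        throw new IllegalArgumentException("unknown operation: " + operation);
    }
}

// Stub: the client-side proxy; here the ORB and the network are collapsed
// into a direct reference to the skeleton.
class GreeterStub implements Greeter {
    private final GreeterSkeleton skeleton;
    GreeterStub(GreeterSkeleton skeleton) { this.skeleton = skeleton; }

    public String greet(String name) {
        return skeleton.dispatch("greet", name);
    }
}
```

The client holds only a Greeter reference; whether the object behind the stub is local or remote is invisible to it, which is exactly the transparency the ORB provides.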
For a more complete reference to CORBA the reader is directed to the OMG website at
http://www.omg.org, where the complete set of standards can be viewed. In summary, though, it is
important to realise that CORBA is not an implementation; it is a specification. It is possible to purchase a
CORBA ORB from companies such as IBM and Sun, and they should obey the OMG reference. Although,
like most things, in reality this may not necessarily be so.

13.5 Distributed Component Object Model (DCOM)


DCOM is an outgrowth of an older technology called COM, the Component Object Model; both are provided
by Microsoft. DCOM, like CORBA, specifies interfaces between component objects within a single application or between applications (running in their own process space). Figure 13.5 shows the basic architecture
of DCOM.

Figure 13.5: The overall DCOM architecture

The COM runtime provides object-oriented services to clients and components, and uses RPC and the security provider to generate standard network packets that conform to the DCOM wire-protocol standard.
DCOM provides mechanisms similar to CORBA for the dynamic discovery of object interfaces and other
facilities, as well as separating the interface from the implementation.
In order for an object on a local system to access the methods of an object on a remote system the local
system must have knowledge of the remote object registered in some local registry. A local object is oblivious to the location of the remote object when it is accessed. The underlying COM libraries of the platform
handle the creation and the visibility of the methods for a remote object, and the local object can access
these transparently.
During a call the COM runtime checks the system registry for the class of the object being accessed. If
the registry indicates that the class is defined in the registry of a remote machine, the local COM instance
contacts the COM runtime on the remote machine and requests that it perform the creation of the remote
object or the invocation of its methods. The remote COM runtime carries out the request if the request is
allowed by the system's security policy.
The COM runtime processes on separate machines communicate with each other using an RPC mechanism
which is a version of the DCE RPC, called ORPC, or Object RPC. ORPC can be configured to use
different transport protocols such as UDP or TCP. Figure 13.6 summarises these operations.
DCOM requires that every sharable component implement an interface known as IUNKNOWN. This interface
enables client processes to discover at run time the interfaces a sharable component provides and then
connect to them. It is similar to the dynamic invocation interface (DII) and interface discovery of CORBA. The
IUNKNOWN interface also provides a mechanism to aid in garbage collection. It provides two methods
known as AddRef and Release that let components increment and decrement a reference count. The
COM runtime can safely remove an object once its reference count has gone to zero. This is the DCOM
version of the object life-cycle management facility in CORBA.

Figure 13.6: How DCOM works
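The AddRef/Release discipline can be illustrated with a small Java sketch. The class below is an invented stand-in for COM-style reference counting; real COM defines AddRef and Release on the IUNKNOWN interface in C/C++:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Invented Java stand-in for COM-style reference counting.
class RefCounted {
    private final AtomicInteger count = new AtomicInteger(1); // creator holds one reference
    private volatile boolean reclaimed = false;

    // Corresponds to AddRef: a new client takes a reference.
    int addRef() {
        return count.incrementAndGet();
    }

    // Corresponds to Release: once the count reaches zero,
    // the runtime may safely reclaim the object.
    int release() {
        int remaining = count.decrementAndGet();
        if (remaining == 0) {
            reclaimed = true; // stand-in for the runtime removing the object
        }
        return remaining;
    }

    boolean isReclaimed() { return reclaimed; }
}
```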
Finally, DCOM is not platform specific. Microsoft has made DCOM an open standard, with the obvious
hope that it gains ready acceptance in the distributed computing environment. It is a competitor to CORBA
and there are implementations for other systems, such as Sun Microsystems platforms.

13.6 Java Distributed Computing - RMI


Java is one of the newer languages around and has a large amount of functionality built into it or available
via an extensive API. From a very early stage Java has been network capable, with a rich set of networking libraries. It also has a rich set of distributed programming technologies. The Java language runs on the
Java virtual machine (JVM), and a JVM may be implemented on different platforms. In step with the
designers' goals of a simplified, but not simple, programming language, they implemented their own notion
of a distributed object mechanism and communication, known as RMI, Remote Method Invocation. RMI
is similar in nature to CORBA, and DCOM for that matter, but it has a slightly reduced functional set and
is more light-weight - although still quite powerful.

13.6.1 Java Distributed Object Model


The distributed object model used by Java allows objects that execute in one virtual machine instance to
invoke methods of objects that are instantiated within another JVM. As the JVMs themselves are self-contained
environments, they may be located on the same machine in a different process space or located on
a remote computer. The calling object is the client, just as we've seen with CORBA and DCOM, and the
remote object is the server. As with CORBA and DCOM, the client object is thought of as executing
locally whilst the server object executes in the remote process space or computer.
In the Java distributed object model a client object never accesses a remote object directly. Local objects
reference a remote interface that is implemented for the remote object. This allows server-side objects
to differentiate between local and remote interfaces. By using remote interfaces server objects can even
provide different remote access modes. For example, a server object can provide both a remote administration
interface and a remote user interface. The main advantage, though, of using remote interfaces is that the
functionality of the server object is abstracted away from the physical implementation, allowing client
objects to program against the interface without actually having an implementation of the server object
locally stored or accessible.
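In Java RMI such a remote interface is an ordinary interface that extends java.rmi.Remote, with every remotely callable method declaring java.rmi.RemoteException. The interface and class names below are invented for illustration; exporting the implementation and binding it in an RMI registry, which would make it reachable from another JVM, is not shown:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// The remote interface: the only thing a client needs locally.
interface TimeService extends Remote {
    long currentTimeMillis() throws RemoteException;
}

// A server-side implementation. Exporting it and binding it in an
// RMI registry (not shown) would make it callable across the network.
class TimeServiceImpl implements TimeService {
    public long currentTimeMillis() {
        return System.currentTimeMillis();
    }
}
```

A client programs purely against TimeService; at run time the reference it holds may be a stub for an object living in another JVM.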

13.6.2 RMI Implementation


Just like CORBA, and indeed DCOM, Java RMI makes use of stubs and skeletons as the mechanism for
programming objects. The stubs and skeletons act as the interfaces to the underlying communication mechanism. Stub classes serve as local proxies for the remote objects and the skeleton classes act as the remote
proxies. Both the stub and skeleton classes implement the same remote interface for the remote object. The
local stub object receives calls from the local object and communicates these method invocations to the
remote skeleton. The remote skeleton then invokes the correct method implementation on the server object.
This is illustrated in Figure 13.7.
When the method call is to return, if any value is to be sent back, it is passed back via the skeleton object.
The skeleton object then passes this back to the local object stub.
Unlike CORBA, there is no object broker in RMI. As CORBA is language independent, it requires such
a service to act as a middle-man for object requests. RMI, on the other hand, is a pure Java implementation;
the stubs and skeletons are always in Java, and so no external interworking is required.
There is an intermediary layer between the stub/skeleton layer and the lower layer protocols. This can be
seen in Figure 13.8.
Beneath the stubs and skeletons is the Remote Reference Layer. The remote reference layer deals with the
lower-level transport interface. This layer is also responsible for carrying out a specific remote reference
protocol which is independent of the client stubs and server skeletons. Each remote object implementation
chooses its own remote reference subclass that operates on its behalf. Various invocation protocols can be
carried out at this layer. Examples are:
unicast point-to-point invocation
invocation to replicated object groups
support for a specific replication strategy
support for a persistent reference to the remote object (enabling activation of the remote object)
reconnection strategies (if the remote object becomes inaccessible)

Figure 13.7: RMI stubs and skeletons
The remote reference layer has two cooperating components: the client-side and the server-side components. The client-side component contains information specific to the remote server (or servers, if the remote
reference is to a replicated object) and communicates via the transport to the server-side component. During each method invocation, the client and server-side components perform the specific remote reference
semantics. For example, if a remote object is part of a replicated object, the client-side component can
forward the invocation to each replica rather than just a single remote object. In a corresponding manner,
the server-side component implements the specific remote reference semantics prior to delivering a remote
method invocation to the skeleton. This component, for example, would handle ensuring atomic multicast
delivery by communicating with other servers in a replica group.
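The replica-forwarding behaviour of such a client-side reference can be sketched as follows. All names are invented, and real RMI remote reference subclasses live inside the runtime rather than in application code:

```java
import java.util.List;

interface Counter {
    void increment();
    int value();
}

class CounterImpl implements Counter {
    private int n = 0;
    public void increment() { n++; }
    public int value() { return n; }
}

// Toy client-side reference that forwards each invocation to every
// replica, keeping the group in step.
class ReplicatedCounterRef implements Counter {
    private final List<Counter> replicas;
    ReplicatedCounterRef(List<Counter> replicas) { this.replicas = replicas; }

    public void increment() {
        for (Counter replica : replicas) {
            replica.increment();
        }
    }

    // Reads can be served by any replica, since all are kept identical.
    public int value() {
        return replicas.get(0).value();
    }
}
```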
The remote reference layer transmits data to the transport layer via the abstraction of a stream-oriented
connection. The transport takes care of the implementation details of connections. Although connections
present a streams-based interface, a connectionless transport can be implemented beneath the abstraction,
such as UDP or even secure sockets with SSL.
In order for a remote object to be accessible across a network, some form of naming service has to be
implemented. This is known as the remote registry. The remote registry maintains a database of serving
objects and the names by which they can be accessed. When a client calls a stub instance of a remote object,
the request is passed to the transport layer, and the remote host is interrogated to see whether the object exists
and which interfaces it implements. The transport layer at the remote end accesses the remote registry for this
information.

Figure 13.8: Expanded view of RMI architecture

13.6.3 Marshalling
Data is passed back and forth between remote and client objects not just in the form of method calls but also
as parameters and return values to such calls. Other distributed computing solutions have various ways of
getting this data into a device-independent form, and Java RMI is no exception. The advantage of Java RMI
is that it uses the built-in serialization capabilities. (The American spelling of serialization is used to
conform to the Java API.) A serializable object is a type that can be written to, or read from, a stream in a
device-independent form. All primitive Java data types are serializable by default, and so are first-class objects
that implement or extend the Java Serializable interface, which is defined in the java.io package.
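As a small sketch (the ChatMessage class and its fields are invented for illustration), an object can be marshalled to a byte array and reconstructed with the standard java.io object streams, which is essentially what RMI does with parameters and return values:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// A hypothetical message type; implementing Serializable marks it for marshalling.
class ChatMessage implements Serializable {
    private static final long serialVersionUID = 1L;
    final String sender;
    final String text;
    ChatMessage(String sender, String text) {
        this.sender = sender;
        this.text = text;
    }
}

class MarshalDemo {
    // Flatten the object's state into a device-independent byte array.
    static byte[] marshal(ChatMessage message) {
        try {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
                out.writeObject(message);
            }
            return buffer.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e); // checked exceptions wrapped for brevity
        }
    }

    // Rebuild an equivalent object from the bytes, as the receiving JVM would.
    static ChatMessage unmarshal(byte[] bytes) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (ChatMessage) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```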

13.7 Exercises

Exercise 13.1 Write a Java client/server chat program using RMI. Think about what mechanisms you
will need to employ to maintain a list of connected clients.
Exercise 13.2 Write the same program using TCP sockets.


Index


generation, 240
100BASE-FX, 126
100BASE-T4, 126
100BASE-TX, 126
3-way handshake, 217

ATM layer, 156


attenuation, 29
authentication, 198
Automatic Repeat Request, 101
avalanche photodiode, 48

, 122, 198

backbone, 116
bandwidth, 31, 33
Bandwidth Balancing (BWB), 194
base station, 234
baseband, 28
BCH, 242
binary exponential backoff algorithm, 123
Binary Synchronous Communication, 102
Biphase, 37
bit length, 114
bit error rate, 81
bit stuffing, 103
bit synchronisation, 103
bit synchronism, 60
bit-oriented, 101
block check characters, 85
block padding, 205
Bluetooth, 53
Bridge, 135
broadband, 28
BSC, 102
buffer preallocation, 146

AAL, 156
absolute radio frequency channel numbers, 241
Access Control, 199
access control, 198
access control list, 199
access control matrix, 199
access right, 199
ACK, 217
Active attack, 198
adaptive routing, 144
ADCCP, 101
address-field, 102
Adi Shamir, 209
Advanced Data Communication Control Procedure,
101
analog signal, 9
angle, 24
APD, 11
APPC, 195
application gateways, 227
application interface, 256
arbitrated digital signature, 215
ARFCN, 241
ARPANET, 9
ARQ, 101
asynchronous, 58
asynchronous modem, 60
asynchronous protocols, 60
asynchronous terminal, 72
Asynchronous Transfer Mode (ATM), 155
ATM - VCI, 158
ATM - VPI, 158
ATM cell, 158

CALL ACCEPTED, 164


call collision, 75
CALL CONNECTED, 164
call progress signals, 75
capability ticket, 199
carrier frequency, 39
carrier phase, 39
CBR, 160
CCCH, 242
CDMA, 247
Caesar cipher, 202
cell switching, 68
cells, 234
channel adaptor, 60
channels, 29
character synchronisation, 103
checksum field, 102
choke packet, 147
chromatic dispersion, 48
chromophores, 47
cipher, Caesar, 202
cipher, monoalphabetic, 202
cipher, polyalphabetic, 203
circuit switching, 66
cladding, 45
CLEAR CONFIRMATION, 164
CLEAR INDICATION, 164
CLEAR REQUEST, 164
client, 257
closed user group, 75, 166
coaxial cable, 28
codeword, 82
collect calls, 165
column parity, 85
COM runtime, 259
combined amplitude-phase-shift-keying, 39
committed information rates, CIR, 152
common bus, 115
compounding, 81
concentrator, 62
confidentiality, 198
congestion, 145
control channels, 242
control field, 102
CORBA, 254
core, 45
core functions, 152
CRC-12, 89
CRC-16, 89
CRC-CCITT, 89
crosstalk, 63
cryptographic checksum, 212
cryptoanalysis, 202
cryptography, 201
CSMA-CD, 121
CSMA/CA, 119
CSMA/CD, 118
current loop interface, 72
cyclic redundancy code, 85
cyclical redundancy coding, 85

cyphertext, 201
CYREC, 195
D bit, 166
Data Encryption Standard, 203
data link control protocol, 92
data link layer, 15, 92
data marshalling, 263
data transfer state, 164
data-link-control protocol, 92
Datagram Identification, 167
Datagram model, 138
Datagram service, 167
Datagram Service Signal, 167
DCCH, 242
DCE, 254
DCOM, 259
ddos, 216
ddos attack, 222
deadlock, 149
deadlock prevention, 145
decibels, 29
delay distortion, 31
demodulator, 39
Denial of Service, 199
denial-of-service, 216
DES, 203
DES decryption, 206
device control protocol, 92
differential encoding, 37
Differential Manchester encoding, 37
Diffie-Hellman key exchange, 207
digital modulation, 39
digital signal, 9
digital signature, 212
digital signatures, 214
DII, 258, 259
direct digital signature, 214
DISC, 107
discard eligibility bits, 153
distributed applications, 254
distributed component object model, 259
distributed computing environment, 254
distributed-denial-of-service, 216
domain interface, 255
DOS, 199
dos, 216
downlink, 54

DPSK, 122
DQDB, 187
DSI, 258
DSL, 64
DWDM, 64
dynamic discovery, 259
dynamic invocation interface, 258
dynamic skeleton interface, 258
echo-suppression, 64
encryption, 17
encryption, conventional, 201
encryption, key, 202
end-to-end data-format, 92
end-to-end encryption, 211
Enigma, 203
equalizer, 29
error-correcting code, 81
error-correcting codes, 83
error-detecting code, 81
Ethernet, 114, 121
ETSI, 18
explicit path routing, 144
extended address field, 104
fast connect, 75
Fast Ethernet, 126
fast select, 167
FCC, 237
FDDI, 176
FEP, 8
firewall, 226, 228
firewalls, 224
fixed routing, 144
flag, 102
FLOW CONTROL, 164
flow control, 139, 148
forged offset, 219
forward control channels, 237
forward voice channels, 237
Fourier analysis, 26
Fourier series, 26
fraggle attack, 219
fragmented data packets, 219
frame, 15
Frame relay, 150
frequency analysis, 202
Frequency Division Multiplexing, 63
frequency reuse, 234

frequency reuse factor, 236
frequency-shift-keying, 39
FSK, 39
full duplex, 58
full-rate, 242
FVC, 237
Gateway, 135
generator polynomial, 86
GEO satellite, 55
geosynchronous, 55
Global System Mobile, 240
Go-back N, 99
GPRS, 243
graded-index multimode fibre, 47
GSM, 240
guard time, 241
half-duplex, 58
half-rate, 242
Hamming code, 85
Hamming distance, 82
handoff, 237
handover, 237
harmonic, 26
harmonics, 33
hash, 213
HDLC, 101
head of bus, 188
Header Error Control (HEC), 159
High-level Data Link Control, 101
ICMP (Internet Control Message Protocol), 169
ICMP attack, 219
ICMP messages, 169
ICMP ECHO, 219
IDL, 257
IEEE, 119
IEEE 802.3, 121
impulse noise, 31
INCOMING CALL, 164
integrity, 198
Information field, 102
interactively, 7
interface, 13
interface definition language, 257
interface logic, 254
International Standards Organization, 18
International Telecommunications Union, 17

Internet, 9, 107
INTERRUPT, 164
intersymbol, 31
IP, Destination address, 169
IP, DF, 168
IP, Fragment offset, 168
IP, Header Checksum, 168
IP, Identification, 168
IP, IHL, 167
IP, MF, 168
IP, Options field, 169
IP, Protocol field, 168
IP, Source address, 169
IP, Time to live, 168
IP, Total length, 168
IP, Type of service, 167
IP, Version, 167
ISDN, 151
ISO, 13
isochronous, 190
IUnknown interface, 259
IWU, 247
jam signal, 122
Java distributed object model, 261
jitter, 47
JVM, 260
KDC, 207
key, 202
key distribution centre, 207
LAN, 121
LAP, 101
LAPB, 101
Laser, 11
LCP, 108
LED, 11
Leonard Adleman, 209
light-emitting diodes, 47
line control procedure, 92
line noise, 29
Link Access Procedure, 101
link command, 102
link control procedure, 92
Link Control Protocol, 108
link encryption, 211
link response, 102
LLC, 120

local management interface, LMI, 154


local registry, 259
lockup, 149
logical connection, 92
Logical Link Control, 120
longitudinal parity, 85
LUCIFER, 205
MAC, 212
MACA, 134
Manchester encoding, 10, 37
marshalling, 263
Masquerading, 199
Medium Access Control algorithm, 191
message authentication code, 212
message synchronisation, 103
MFSK, 39
Microsoft, 259
microwaves, 10
middleware, 253
mobile call, 239
mobile network capacity, 236
mobile switching centre, 234
modal dispersion, 46, 48
modes, 46
modification attack, 199
modulation, 10
modulator, 37
monitor station, 130
monoalphabetic cipher, 202
MPSK, 39
MSC, 234
multiplexing, 60
multimode fibre, 46
multiple frequency-shift-keying, 39
multiple phase-shift-keying, 39
NACK, 101
naming service, 255
NCP, 108
negative acknowledgement, 101
network attack, 216
Network Control Protocol, 108
Network Interconnection, 135
network layer, 15, 138
network policy, 226
noise distortion, 29
non-repudiation, 198
NRZ-L, 34

NRZI, 34
Nyquist's law, 40
Nyquist's theorem, 40
object, 256
object adapter, 258
object services, 255
off-line, 6
one-way function, 208
optic fibres, 44
optical loss, 48
Optical Time Division Multiplexing, 63
ORB, 256
ORB facilities, 256
ORPC, 259
OSI reference model, 14
packet filtering, 227, 228
packet switching, 67
padding, 205
PAM, 39
parity, 83
parity bit, 83
Passwords, 199
passwords, 200
PDM, 39
permanent Virtual Circuit, 165
permanent virtual circuits, 152
permute function, 206
Personal Area Network, 53
phase, 24
phase-shift-keying, 39
physical layer, 14, 71
physical level, 92
piggybacking, 96
PIN, 11
PIN photodiode, 48
pipelining, 99
plesiochronous, 78
Pockels material, 47
polyalphabetic cipher, 203
PPM, 39
PPP, 107, 108
pre-arbitrated services, 190
presentation layer, 17
protocol, 11
protocol data units, 14
protocol specification document, 14
PSK, 39

Passive attack, 198
PSTN, 7
public key cryptography, 207
pulse-duration modulation, 39
pulse-position modulation, 39
QPSX, 187
Quality of Service, 156
queue-arbitrated services, 190
radio frequency, 53
radio planning, 236
RCC, 237
reassembly lockup, 150
RECEIVE NOT READY, 106
RECEIVE READY, 106
receive sequence count, 105
receiving window, 98
reverse control channels, 237
reference architecture, 14
REJECT, 106
remote interface, 261
remote method invocation, 260
remote reference layer, 261
Repeater, 135
repeater, 29, 45
Replay attack, 199
reservation bits, 130
RESET, 164
reverse charging, 165
reverse voice channels, 237
ring maintenance, 130
RMI, 260
roaming users, 237
Ron Rivest, 209
root-kit, 221
rotor wheels, 203
Router, 135
routing algorithm, 143
row parity, 85
RPC, 254
RS232C Serial Interface, 72
RSA, 209
RTS, 134
RVC, 237
S-boxes, 206
salt, 201
satellite communication, 54

satellite footprint, 56
scan time, 131
screened host firewall, 229
screened subnet firewall, 229
SDH, 77
SDLC, 101
SDLC, Supervisory Format, 106
second generation network, 240
secure communication concepts, 198
SELECTIVE REJECT, 106
selective repeat, 99
self-clocking, 37
semi-conductor laser diodes, 47
send sequence count, 105
sending window, 98
sequence number, 94
sequence numbers, 95
serial mode, 7
serializable object, 263
servant, 257
service, 138
service definition document, 14
session layer, 16
session routing, 143
Shannon's law, 29
Shielded Twisted Pair (STP), 27
simplex, 58
single-mode fibre, 46, 47
skeleton, 257, 261
sliding window, 139
sliding window protocol, 97
SLIP, 107
smurf attack, 219
SONET, 77
spoof, 218
spoofing IP, 218
spreading signal, 249
step-index multimode fibre, 46
stop-and-wait, 95
stub, 257
stubs, 261
switched virtual circuits, 152
SYN, 217
SYN attack, 218
synchronous, 58
Synchronous Data Link Control, 101
synchronous modem, 60
synchronous protocols, 60

synchronous terminal, 72
TCH, 242
TCP maximum packet size, 219
TCP/IP, 9
TDM, 60
Teledesic Network, 55
text compression, 17
thermal noise, 31
Time Division Multiplexing, 60, 61
time share, 6
token bus, 114
token ring, 114
token ring performance, 131
token-holding time, 129
topology, common bus, 115
topology, ring, 116
topology, star, 116
trading service, 255
traffic channel, 242
transmission errors, 80
transmission in optic fibres, 45
transport layer, 16, 138
transverse parity, 85
Unshielded Twisted Pair, 27
uplink, 54
V.41, 89
V24, 72
V3, 75
V35, 72
VCC, 156
Vigenère Cipher, 203
Virtual Circuit model, 138
Virtual Circuit setup, 141
VPC, 156

walk time, 131


WAN, Wide Area Network, 9
WAP, 134
Wavelength Division Multiplexing, 63
WCDMA, 247
X20, 74
X20 BIS, 75
X21, 74
X21 BIS, 75
X96, 75

zombie, 221
zombie mobilisation, 221
